Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity
Dario Amodei
So I think a bunch of people missed the point there. But even if it were completely unaligned and, you know, could get around all these human obstacles, it would have trouble. But again, if you want this to be an AI system that doesn't take over the world, that doesn't destroy humanity, then basically, you know, it's going to need to follow basic human laws, right?