
Lex Fridman Podcast

#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

8281.781 - 8300.888 Dario Amodei

So I think a bunch of people missed the point there. But even if it were completely unaligned and, you know, could get around all these human obstacles, it would have trouble. But again, if you want this to be an AI system that doesn't take over the world, that doesn't destroy humanity, then basically, you know, it's going to need to follow basic human laws, right?
