Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity
Dario Amodei
It's very difficult to even understand in detail what they're doing, let alone control it. And like I said, there are these early signs that it's hard to perfectly draw the boundary between things the model should do and things the model shouldn't do. If you go to one side, you get things that are annoying and useless, and if you go to the other side, you get other behaviors.