Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity
Dario Amodei
which is not perfect from both the safety and capabilities perspective, in that humans are often not able to perfectly identify what the model wants, and what humans want in the moment may not be what they want in the long term. So there's a lot of subtlety there, but the models are good at producing what the humans, in some shallow sense, want.