Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity
Chris Olah
So you have this giant space of theory in your head about what it could mean to align models. But then practically, surely there's a point where, especially with more powerful models, my main goal is that I want them to be good enough that things don't go terribly wrong.