
Lex Fridman Podcast

#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

13937.568 - 13954.551 Chris Olah

So you have this giant space of theory in your head about what it could mean to align models. But practically, surely there's something where, especially with more powerful models, my main goal is that I want them to be good enough that things don't go terribly wrong.
