Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity
Dario Amodei
It's another example of this. Like some of the early work in mechanistic interpretability, it's so simple. It's just that no one thought to care about this question before.