Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity
And, you know, that tells you maybe something about the model, if you can come up with a principled version of that. But it doesn't really tell you, like, what algorithms are running in the model. How was the model actually making that decision? Maybe it's telling you something about what was important to it, if you can make that method work.