Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity
And so, if you want the deepest reason why we want to have interpretable, monosemantic features, I think that's really it.