Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity
So the problem is that not all of the neurons are interpretable. And there's reason to think (we can get into this more later) that there's this superposition hypothesis, reason to think that sometimes the right unit to analyze things in terms of is combinations of neurons.
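To make the superposition idea concrete, here is a minimal numpy sketch, a toy illustration rather than Anthropic's actual interpretability tooling. It assumes features are random directions in neuron space; all names and parameters (n_neurons, n_features, feature_dirs) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

n_neurons = 100    # dimensionality of the activation space
n_features = 300   # more features than neurons

# Under superposition, each feature corresponds to a direction in neuron
# space (a combination of neurons). Random high-dimensional directions are
# nearly orthogonal, so many more features than neurons can coexist with
# only small interference.
feature_dirs = rng.normal(size=(n_features, n_neurons))
feature_dirs /= np.linalg.norm(feature_dirs, axis=1, keepdims=True)

# Activate a small, sparse set of features and superpose them into a single
# activation vector over the neurons.
active = rng.choice(n_features, size=4, replace=False)
activation = feature_dirs[active].sum(axis=0)

# Reading out a feature is a dot product with its direction, i.e. a weighted
# combination of neurons rather than the value of any single neuron.
scores = feature_dirs @ activation

inactive = np.setdiff1d(np.arange(n_features), active)
print("readouts of active features :", np.round(scores[active], 2))
print("largest inactive readout    :", np.round(scores[inactive].max(), 2))
# Typically the active readouts sit near 1 while the inactive readouts stay
# noticeably smaller; the residual is the interference cost of packing
# 300 features into 100 neurons.
```

The point of the sketch is that no single neuron carries the feature; the interpretable unit is the weighted combination of neurons given by the feature's direction.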