Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity
We're thinking about this as being that if a neuron or a combination of neurons fires more, it sort of means more of a particular thing is being detected. And then that gives weights a very clean interpretation as these edges between these entities, these features, and that edge then has a meaning. So that's in some ways the core thing.
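To make that reading concrete, here is a minimal sketch in Python. The feature names, activations, and weights are entirely hypothetical, chosen only to illustrate the two claims above: an activation is read as "how strongly a thing is detected," and a weight is read as a meaningful edge between two features.

```python
# Toy sketch of the interpretation described above. All names and
# numbers are hypothetical, not taken from any real model.

# Upstream "features": larger activation = that thing is detected more strongly.
curve_detector = 0.9   # e.g., a curved edge is present in the input
fur_texture    = 0.8   # e.g., a fur-like texture is present

# Weights are the "edges" between features; each edge has a meaning.
# A positive weight reads as "detecting the upstream thing is evidence
# for the downstream thing."
w_curve_to_ear = 1.2
w_fur_to_ear   = 0.7

# Downstream feature: weighted sum of upstream activations, then a ReLU,
# so firing more means "animal ear" is detected more strongly.
animal_ear = max(0.0, w_curve_to_ear * curve_detector
                      + w_fur_to_ear * fur_texture)

print(animal_ear)  # 1.64 -> "animal ear" strongly detected
```

On this picture, interpretability amounts to assigning each node (feature) and each edge (weight) a human-readable meaning like the comments above.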