Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity
So if we think about the car detector, the more it fires, the more we tend to read that as meaning the model is more and more confident that a car is present. Or, if it's some combination of neurons that represents a car, the more that combination fires, the more we think the model thinks there's a car present. This doesn't have to be the case, right?
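To make the "combination of neurons" idea concrete, here is a minimal illustrative sketch (not from the conversation, and with made-up numbers): it treats a hypothetical "car" feature as a direction in activation space, and the feature's "firing" as the projection of the hidden activations onto that direction. The caveat in the discussion applies: reading a larger value as higher confidence is an assumed interpretation, not a guarantee.

```python
import numpy as np

# Illustrative sketch only: a "car" feature as a linear combination of neurons,
# i.e. a direction in activation space. Direction and activations are placeholders.
rng = np.random.default_rng(0)

hidden_dim = 512
car_direction = rng.normal(size=hidden_dim)
car_direction /= np.linalg.norm(car_direction)   # unit-length feature direction

activations = rng.normal(size=hidden_dim)        # hidden activations for one input

# The feature's "firing" is its projection onto the direction.
car_feature_activation = activations @ car_direction

# The intuition under discussion: a larger projection is read as the model being
# more confident a car is present -- but that monotonic reading is an assumption,
# not something the raw activation value guarantees.
print(f"car feature activation: {car_feature_activation:.3f}")
```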