Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity
You could imagine something where you have this car detector neuron, and if it fires between one and two, that means one thing, but it means something totally different if it fires between three and four. That would be a nonlinear representation, and in principle models could do that.
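The nonlinear representation described here can be sketched in code. The following is a hypothetical illustration (the ranges, feature names, and `decode` function are assumptions, not from the transcript): a single neuron's activation range encodes two distinct features, which a purely linear readout along that axis could not cleanly separate.

```python
# Hypothetical sketch of a nonlinear representation: one neuron whose
# activation *range* encodes two different concepts. The ranges and
# feature names below are illustrative assumptions.

def decode(activation: float):
    """Nonlinear readout: the meaning depends on which range the value is in."""
    if 1.0 <= activation <= 2.0:
        return "feature_A"   # e.g. the "car detector" interpretation
    if 3.0 <= activation <= 4.0:
        return "feature_B"   # a totally different concept, same neuron
    return None              # no recognized feature

# Both meanings live on the same 1-D axis, so no single linear threshold
# w*x + b can separate feature_A from feature_B without also crossing
# the "no feature" regions in between.
print(decode(1.5))
print(decode(3.5))
```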