
Lex Fridman Podcast

#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

Timestamp 4:36:44 – 4:37:03 (16604.917 – 16623.570 s)

So sometimes it's not that there's a single neuron that represents, say, a car. It actually turns out that after you detect the car, the model sort of hides a little bit of the car in the following layer, in a bunch of dog detectors. Why is it doing that? Well, maybe it just doesn't want to do that much work on cars at that point, and it's sort of storing it away to go and…
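What's being described here is the idea that a feature like "car" can live as a direction spread across many neurons (including neurons that mostly respond to other things, like dogs), rather than in one dedicated unit. Below is a minimal sketch of that idea in NumPy, using entirely synthetic activations; the names car_direction and probe, and all the numbers, are hypothetical and just for illustration, not anything from the episode or from Anthropic's actual interpretability tooling.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 64          # width of a hypothetical layer
n_samples = 1000

# Hypothetical "car" feature: a dense direction touching many neurons
# at once, so no single neuron is a clean car detector on its own.
car_direction = rng.normal(size=n_neurons)
car_direction /= np.linalg.norm(car_direction)

# Synthetic activations: background noise, plus the car direction
# added in for the inputs that contain a car.
is_car = rng.random(n_samples) < 0.5
acts = rng.normal(size=(n_samples, n_neurons))
acts[is_car] += 3.0 * car_direction

# Looking neuron by neuron, the car/no-car gap is small everywhere...
per_neuron_gap = np.abs(acts[is_car].mean(0) - acts[~is_car].mean(0))
print(f"best single-neuron mean gap: {per_neuron_gap.max():.2f}")

# ...but projecting onto a recovered direction (here, a simple linear
# probe built from the difference of class means) separates cleanly.
probe = acts[is_car].mean(0) - acts[~is_car].mean(0)
probe /= np.linalg.norm(probe)
scores = acts @ probe
print(f"projected mean gap: "
      f"{scores[is_car].mean() - scores[~is_car].mean():.2f}")
```

Running this, the best individual neuron shows only a modest mean difference between car and non-car inputs, while the projection onto the probe direction shows roughly the full gap, which is the sense in which the feature is "hidden" across many neurons rather than sitting in one.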
