Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity
So circuits are just collections of features connected by weights, and they implement algorithms. They tell us how features are used, how they're built, and how they connect together. So maybe it's worth trying to pin down what the core hypothesis really is here. And I think the core hypothesis is something we call the linear representation hypothesis.
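To make the idea concrete, here is a minimal, hypothetical sketch (not Anthropic's actual tooling; every name, vector, and number is made up for illustration) of the linear representation hypothesis: features as directions in activation space, read off with dot products, and a "circuit weight" as one feature direction pushed through a layer's weights and projected onto a downstream feature direction.

```python
import numpy as np

# Hypothetical 4-dimensional activation space (real models use thousands of dims).
d_model = 4
rng = np.random.default_rng(0)

# Under the linear representation hypothesis, each feature corresponds to a
# direction in activation space; an activation vector encodes features as a
# weighted sum of those directions. Feature names here are purely illustrative.
feature_dirs = {
    "curve_detector": np.array([1.0, 0.0, 0.0, 0.0]),
    "car_wheel": np.array([0.0, 1.0, 0.0, 0.0]),
}

# An activation that strongly expresses "curve_detector" and weakly "car_wheel".
activation = 0.9 * feature_dirs["curve_detector"] + 0.2 * feature_dirs["car_wheel"]

# Reading a feature off is then just a dot product against its direction.
for name, direction in feature_dirs.items():
    strength = activation @ direction
    print(f"{name}: {strength:.2f}")

# A "circuit" connects features across layers: the connection strength between
# an input feature and a downstream feature is the input direction passed
# through the layer's weight matrix and projected onto the downstream direction.
W = rng.normal(size=(d_model, d_model))      # stand-in for a layer's weights
out_dir = rng.normal(size=d_model)
out_dir /= np.linalg.norm(out_dir)           # a downstream feature direction
circuit_weight = out_dir @ W @ feature_dirs["curve_detector"]
print(f"circuit weight (curve_detector -> downstream feature): {circuit_weight:.2f}")
```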