Lex Fridman Podcast

#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

4:57:29 - 4:57:48 (17849.741 - 17868.135)

And it turns out that if you do dictionary learning, in particular if you do it in sort of a nice, efficient way that in some sense nicely regularizes it as well, called a sparse autoencoder, if you train a sparse autoencoder, these beautiful interpretable features start to just fall out where there weren't any beforehand. And so that's not a thing that you would necessarily predict, right?
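For listeners unfamiliar with the technique being described, here is a minimal sketch of what a sparse autoencoder for dictionary learning on model activations can look like in code. All names and values here (d_model, d_dict, l1_coeff, the random stand-in activations) are illustrative assumptions, not details from the episode; the essential idea is an overcomplete linear dictionary trained to reconstruct activations under an L1 sparsity penalty, which is the "nice regularization" mentioned above.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal sparse autoencoder sketch for dictionary learning
    on network activations. Dimensions are illustrative."""

    def __init__(self, d_model: int, d_dict: int):
        super().__init__()
        # Encoder maps each activation vector to an overcomplete
        # set of candidate features (d_dict > d_model).
        self.encoder = nn.Linear(d_model, d_dict)
        # Decoder columns play the role of the learned dictionary
        # directions used to reconstruct the activation.
        self.decoder = nn.Linear(d_dict, d_model, bias=False)

    def forward(self, x: torch.Tensor):
        # ReLU keeps feature activations non-negative, so each
        # feature either fires or stays silent.
        f = torch.relu(self.encoder(x))
        x_hat = self.decoder(f)
        return x_hat, f

def sae_loss(x, x_hat, f, l1_coeff: float = 1e-3):
    # Reconstruction term: rebuild the original activations.
    recon = ((x - x_hat) ** 2).mean()
    # L1 penalty pushes most feature activations to zero, so each
    # input is explained by only a few dictionary elements.
    sparsity = f.abs().sum(dim=-1).mean()
    return recon + l1_coeff * sparsity

# Usage sketch: in practice, x would be activations captured from
# a layer of the model under study, not random noise.
sae = SparseAutoencoder(d_model=512, d_dict=4096)
acts = torch.randn(64, 512)  # stand-in for real activations
x_hat, f = sae(acts)
loss = sae_loss(acts, x_hat, f)
loss.backward()
```

The claim in the quote is that when a model like this is trained on real activations, the individual columns of the decoder tend to line up with human-interpretable concepts, even though nothing in the training objective names those concepts explicitly.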
