
Lex Fridman Podcast

#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

4572.202 - 4585.011 Dario Amodei

We've thought about some of those. I think they're exotic enough that there are ways to render them unlikely. But yeah, generally, you want to preserve mechanistic interpretability as a kind of verification set or test set that's separate from the training process of the model.
