Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity
Dario Amodei
We don't want to harm our own, you know, our own ability to have a place in the conversation by imposing these very onerous burdens on models that are not dangerous today. So the if-then, the trigger commitment, is basically a way to deal with this. It says you clamp down hard when you can show that the model is dangerous.