Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity
Dario Amodei
So that's hard because you need to scale up human interaction. And it's very implicit, right? I don't have a sense of what I want the model to do. I just have a sense of what this average of a thousand humans wants the model to do. So two ideas. One is, could the AI system itself decide which response is better, right?
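[Editor's note: the idea Amodei describes here, having a model rather than a panel of human raters judge which of two responses is better, is often called AI feedback (RLAIF). The sketch below is a toy illustration of that comparison step only; the judge is a stand-in heuristic, not Anthropic's actual preference model, and all function names are hypothetical.]

```python
# Toy sketch of "AI as judge": score two candidate responses to a prompt
# and keep the one the judge prefers. judge_score is a placeholder
# heuristic standing in for a learned preference model.

def judge_score(prompt: str, response: str) -> float:
    """Stand-in for a preference model: rewards responses that share
    vocabulary with the prompt, penalizes empty ones."""
    if not response.strip():
        return 0.0
    overlap = len(set(prompt.lower().split()) & set(response.lower().split()))
    return overlap + 0.1 * len(response.split())

def pick_preferred(prompt: str, a: str, b: str) -> str:
    """Return whichever response the judge scores higher (ties go to a)."""
    return a if judge_score(prompt, a) >= judge_score(prompt, b) else b
```

In a real RLAIF pipeline, these machine-generated preference labels would then train a reward model, replacing the "average of a thousand humans" with model judgments at scale.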