
Lex Fridman Podcast

#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

7220.574 - 7241.066 Dario Amodei

So that's hard because you need to scale up human interaction. And it's very implicit, right? I don't have a sense of what I want the model to do. I just have a sense of like what this average of a thousand humans wants the model to do. So two ideas. One is, could the AI system itself decide which response is better, right?
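The two ideas in this turn can be sketched in code: collapsing many human A/B votes into one preference label (the "average of a thousand humans"), and letting a model act as the judge instead (AI feedback). This is a hypothetical illustration, not Anthropic's implementation; `score_fn` and `toy_score` are made-up stand-ins for a learned preference/reward model.

```python
from collections import Counter

def aggregate_human_votes(votes):
    """Collapse many human A/B preference votes into a single label.

    votes: list of "A" or "B" strings, one per annotator.
    Returns the majority choice -- the 'average of a thousand humans'.
    """
    counts = Counter(votes)
    return "A" if counts["A"] >= counts["B"] else "B"

def ai_judge(prompt, response_a, response_b, score_fn):
    """Let a model decide which response is better (AI feedback).

    score_fn is a stand-in for a preference/reward model mapping
    (prompt, response) to a scalar score; the higher-scoring response wins.
    """
    a = score_fn(prompt, response_a)
    b = score_fn(prompt, response_b)
    return "A" if a >= b else "B"

# Toy scorer: count words the response shares with the prompt.
def toy_score(prompt, response):
    return len(set(prompt.split()) & set(response.split()))

votes = ["A"] * 612 + ["B"] * 388          # a thousand human votes
print(aggregate_human_votes(votes))        # -> A
print(ai_judge("explain scaling laws",
               "scaling laws explain how loss falls with compute",
               "no idea", toy_score))      # -> A
```

The second function is the seed of the RLAIF / Constitutional-AI direction the speaker is gesturing at: replace the implicit average of many annotators with a model's explicit judgment.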
