Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity
Lex Fridman
And I agree with you, when a human is being mean to an AI system, I think the obvious near-term negative effect is on the human, not on the AI system. So we'll have to try to construct an incentive system where you behave the same, just like as you were saying with prompt engineering: behave with Claude like you would with other humans. It's just good for the soul.