Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity
Chris Olah
I see why Claude did that. And I'm like, if you think through how that looks to Claude, you probably could have just written it in a way that wouldn't evoke such a response. This is especially relevant if you see failures or issues. It's sort of like, think about what the model failed at. Like, what did it do wrong?