
Lex Fridman Podcast

#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

13625.146 - 13645.621 Chris Olah

At the same time, if you train models not to do that, and then you're actually correct about something, you correct the model, and it pushes back and says, "No, you're wrong," it's hard to describe how much more annoying that is. So it's a lot of little annoyances versus one big annoyance. It's easy to think that, well, we often compare it with the perfect...
