Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity
Chris Olah
At the same time, if you train models to not do that, then when you are correct about a thing and you correct it, it pushes back against you and says, no, you're wrong. It's hard to describe, but that's so much more annoying. So it's a lot of little annoyances versus one big annoyance. It's easy to fall into that; we often compare it with the perfect.