Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity
Chris Olah
So that when it does slip up, it's hopefully, I don't know, a couple of percent of the time and not 20 or 30 percent of the time. But I think of it as: if you're still seeing issues, each thing is costly to a different degree, and the system prompt is cheap to iterate on. So if you're seeing issues in the fine-tuned model, you can potentially patch them with the system prompt.
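A minimal sketch of that iteration loop, assuming the Anthropic Python SDK; the model alias, the system-prompt wording, and the test question are illustrative placeholders, not anything described in the conversation.

```python
# Sketch: iterate on a system prompt to patch an unwanted behavior,
# rather than kicking off another fine-tuning run. Assumes the Anthropic
# Python SDK (`pip install anthropic`) and ANTHROPIC_API_KEY in the
# environment; model name and prompt text below are placeholders.
import anthropic

client = anthropic.Anthropic()

# Suppose the deployed model over-apologizes. Try successive system-prompt
# revisions and re-test, since the system prompt is cheap to change.
system_prompt_revisions = [
    "You are a helpful assistant.",
    "You are a helpful assistant. Do not begin replies with an apology "
    "unless you have actually made an error.",
]

test_question = "What's the capital of France?"

for revision, system_prompt in enumerate(system_prompt_revisions, start=1):
    message = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model alias
        max_tokens=200,
        system=system_prompt,              # the cheap-to-iterate lever
        messages=[{"role": "user", "content": test_question}],
    )
    print(f"--- system prompt revision {revision} ---")
    print(message.content[0].text)
    # In practice you would run a larger eval set and measure how often
    # the unwanted behavior still shows up (a few percent vs. 20-30%).
```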