Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity
Chris Olah
And it makes sense to me because I'm like, when you're in such an a priori domain, clarity is sort of this way that you can prevent people from just kind of making stuff up. And I think that's sort of what you have to do with language models. Like very often I actually find myself doing sort of mini versions of philosophy. You know, so I'm like, suppose that you give me a task.