Lex Fridman Podcast
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity
You know, I think this is a hard research problem with a lot of research risk, and, you know, it might still very well fail. But I think a very significant amount of that research risk was put behind us when that started to work.

Can you describe what kind of features can be extracted in this way?