Lex Fridman Podcast
#459 – DeepSeek, China, OpenAI, NVIDIA, xAI, TSMC, Stargate, and AI Megaclusters
Nathan Lambert
It's mostly output tokens. Before, three months ago when o1 launched, all of the use cases for long context length were: let me put a ton of documents in and then get an answer out. That's a single prefill: compute a lot in parallel, then output a little bit. Now, with reasoning and agents, this is a very different idea.
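The prefill-versus-decode distinction he's describing can be sketched with a toy cost model. This is an illustrative sketch, not any real inference engine: the function names and token counts are assumptions chosen to show why reasoning workloads shift cost toward sequential decode.

```python
# Toy sketch contrasting prefill vs decode cost for a transformer-style LLM.
# All names and numbers here are illustrative assumptions, not a real API.

def prefill_passes(prompt_tokens: int) -> int:
    # Prefill: the whole prompt is processed in one batched forward pass,
    # so compute parallelizes across all input tokens at once.
    return 1

def decode_passes(output_tokens: int) -> int:
    # Decode: each output token needs its own sequential forward pass,
    # because token t depends on token t-1.
    return output_tokens

# "Documents in, short answer out" workload: prefill-heavy, few decode steps.
doc_qa = prefill_passes(50_000) + decode_passes(500)

# Reasoning/agent workload: long chains of output tokens dominate.
reasoning = prefill_passes(2_000) + decode_passes(20_000)

print(doc_qa, reasoning)
```

Under this toy model the document-QA workload costs 501 sequential passes while the reasoning workload costs 20,001, which is the shift from input-heavy to output-heavy serving being described.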