
Lex Fridman Podcast

#459 – DeepSeek, China, OpenAI, NVIDIA, xAI, TSMC, Stargate, and AI Megaclusters

8340.777 - 8358.927 Nathan Lambert

Most of the time, sometimes you're dropping a big document, but then you process it, you get your answer, you throw it away, right? You move on to the next thing. Whereas with reasoning, I'm now generating tens of thousands of tokens in sequence, right? And so this memory, this KV cache, has to stay resident, and you have to keep loading it. You have to keep it in memory constantly.
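The point above can be made concrete with a back-of-envelope estimate of KV-cache size. This is a minimal sketch, not a model of any specific system: the layer count, number of KV heads (assuming grouped-query attention), head dimension, and fp16/bf16 precision used below are illustrative assumptions, not the configuration of any model discussed in the episode.

```python
def kv_cache_bytes(num_layers: int, num_kv_heads: int, head_dim: int,
                   seq_len: int, bytes_per_elem: int = 2) -> int:
    """Estimate resident KV-cache size for one sequence.

    Factor of 2 covers both keys and values; bytes_per_elem=2
    assumes fp16/bf16 storage. All parameters are hypothetical.
    """
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * bytes_per_elem


# Illustrative config: 60 layers, 8 KV heads, head_dim 128.
LAYERS, KV_HEADS, HEAD_DIM = 60, 8, 128

# A short "answer and discard" request vs. a long reasoning trace:
short = kv_cache_bytes(LAYERS, KV_HEADS, HEAD_DIM, seq_len=2_000)
long = kv_cache_bytes(LAYERS, KV_HEADS, HEAD_DIM, seq_len=32_000)

print(f"2k-token cache:  {short / 2**30:.2f} GiB")
print(f"32k-token cache: {long / 2**30:.2f} GiB")
```

The cache grows linearly with generated tokens, so a 32k-token reasoning trace needs 16x the resident memory of a 2k-token request — and unlike a dropped document, that memory must stay loaded (and be re-read every decoding step) until generation finishes.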
