Lex Fridman Podcast
#459 – DeepSeek, China, OpenAI, NVIDIA, xAI, TSMC, Stargate, and AI Megaclusters
Nathan Lambert
But that requires significant amounts of compute, right? And so the US government has effectively said… And forever, right? Training will always be a portion of the total compute. We mentioned Meta's 400,000 GPUs; only 16,000 made Llama, right?