Lex Fridman Podcast
#459 – DeepSeek, China, OpenAI, NVIDIA, xAI, TSMC, Stargate, and AI Megaclusters
Nathan Lambert
So they need to have spares to swap in and out, all the way up to the hundred thousand GPUs they're currently training Llama 4 on, 128,000 or so. Think about it: 100,000 GPUs at roughly 1,400 watts apiece is 140 megawatts, and closer to 180 megawatts for the full 128,000.
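To make the back-of-envelope math explicit, here is a minimal Python sketch of the power estimate. The 1,400 W/GPU figure is the speaker's rough number, not a spec-sheet value:

```python
def cluster_power_mw(num_gpus: int, watts_per_gpu: float) -> float:
    """Total power draw in megawatts for num_gpus at watts_per_gpu each."""
    return num_gpus * watts_per_gpu / 1_000_000

# Speaker's rough per-GPU figure of 1,400 W:
print(cluster_power_mw(100_000, 1_400))  # 140.0 MW
print(cluster_power_mw(128_000, 1_400))  # 179.2 MW
```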