
Lex Fridman Podcast

#459 – DeepSeek, China, OpenAI, NVIDIA, xAI, TSMC, Stargate, and AI Megaclusters

13907.575 - 13926.99 Nathan Lambert

So they need to have spares to swap in and out, all the way up to the hundred thousand GPUs they're currently training Llama 4 on, 128,000 or so, right? Think about a hundred thousand GPUs at roughly 1,400 watts apiece: that's 140 megawatts, 150 megawatts for 128,000.
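The power figure in the quote is simple back-of-the-envelope arithmetic: GPU count times per-GPU draw. A minimal sketch, assuming the speaker's rough 1,400 W all-in per-GPU estimate (the function name is illustrative, not from any real tool):

```python
# Back-of-the-envelope cluster power draw, using the figures from the quote.
WATTS_PER_GPU = 1400  # speaker's rough all-in per-GPU estimate

def cluster_power_mw(num_gpus: int, watts_per_gpu: float = WATTS_PER_GPU) -> float:
    """Total cluster power in megawatts: GPUs x watts, converted from W to MW."""
    return num_gpus * watts_per_gpu / 1e6

print(cluster_power_mw(100_000))  # -> 140.0 (megawatts, matching the quote)
```

At these scales the per-GPU wattage assumption dominates the estimate, which is why the quote gives a range rather than a single number.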
