Lex Fridman Podcast

#459 – DeepSeek, China, OpenAI, NVIDIA, xAI, TSMC, Stargate, and AI Megaclusters

8286.274 - 8304.124 Nathan Lambert

64 different users at once, right? And therefore your serving costs are lower, because the server costs the same. This is eight H100s at roughly $2 an hour per GPU, so that's $16 an hour. That's essentially a fixed cost. You can do things to make it lower, of course, but it's roughly $16 an hour. Now, how many users can you serve? How many tokens can you generate?
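The arithmetic here is simple enough to write down. Below is a minimal back-of-the-envelope sketch: the $2/hour H100 price and the eight-GPU server come from the discussion, while the per-user decode speed of 30 tokens per second and the user counts are purely illustrative assumptions.

```python
# Back-of-the-envelope serving-cost sketch for the discussion above.
# The $2/hour-per-GPU and eight-H100 figures are from the transcript;
# the tokens-per-second and user counts are illustrative assumptions.

GPUS_PER_SERVER = 8
COST_PER_GPU_HOUR = 2.0                                      # dollars/hour, from the transcript
SERVER_COST_PER_HOUR = GPUS_PER_SERVER * COST_PER_GPU_HOUR   # ~$16/hour, roughly fixed

def cost_per_million_tokens(concurrent_users: int, tokens_per_user_per_sec: float) -> float:
    """Dollars per million generated tokens at a given batch size (assumed per-user speed)."""
    tokens_per_hour = concurrent_users * tokens_per_user_per_sec * 3600
    return SERVER_COST_PER_HOUR / tokens_per_hour * 1_000_000

# Serving one user vs. batching 64 users on the same $16/hour server:
print(f"1 user:   ${cost_per_million_tokens(1, 30):.2f} per million tokens")   # ~$148
print(f"64 users: ${cost_per_million_tokens(64, 30):.2f} per million tokens")  # ~$2.3
```

The only point of the sketch is that the roughly $16/hour is fixed, so batching 64 users onto the same server cuts the cost per generated token by roughly the batch size.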
