
Lex Fridman Podcast

#459 – DeepSeek, China, OpenAI, NVIDIA, xAI, TSMC, Stargate, and AI Megaclusters

14907.366 - 14927.796 Dylan Patel

So effectively, your compute efficiency goes down. I think FLOPS is the standard for how you measure it. But with RL, you have to do all these things where you move your weights around in a different way than in pre-training and plain generation. It's going to become less efficient, and FLOPS is going to be less of a useful term.
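The "compute efficiency" point above is often quantified as Model FLOPS Utilization (MFU): the fraction of a GPU's peak FLOPS a workload actually sustains. A minimal sketch, with purely illustrative numbers (not figures from the episode; the 989 TFLOPS peak is an assumption roughly matching an H100's BF16 spec):

```python
# Hedged sketch of MFU (Model FLOPS Utilization). All numbers below are
# illustrative assumptions, not measurements discussed in the episode.

def mfu(achieved_tflops_per_gpu: float, peak_tflops_per_gpu: float) -> float:
    """Fraction of a GPU's peak FLOPS that a workload actually achieves."""
    return achieved_tflops_per_gpu / peak_tflops_per_gpu

# Dense pre-training tends to sustain a relatively high fraction of peak;
# RL-style workloads (interleaved generation plus moving weights around)
# typically sustain far less, which is the inefficiency described above.
pretrain_mfu = mfu(achieved_tflops_per_gpu=400.0, peak_tflops_per_gpu=989.0)
rl_mfu = mfu(achieved_tflops_per_gpu=120.0, peak_tflops_per_gpu=989.0)

print(f"pre-training MFU (illustrative): {pretrain_mfu:.0%}")
print(f"RL MFU (illustrative): {rl_mfu:.0%}")
```

The gap between the two ratios, not the absolute numbers, is the point: when generation and weight movement dominate, raw FLOPS says less about useful work done.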
