Lex Fridman Podcast
#459 – DeepSeek, China, OpenAI, NVIDIA, xAI, TSMC, Stargate, and AI Megaclusters
Dylan Patel
So effectively, your compute efficiency goes down. I think flops is the standard for how you measure it. But with RL, you have to do all these things where you... move your weights around in a different way than in pre-training and just generation. It's going to become less efficient, and flops is going to be less of a useful term.
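[Editor's note: the "compute efficiency" Patel refers to is often quantified as Model FLOPs Utilization (MFU): the fraction of the hardware's peak FLOP/s that useful model computation actually consumes. A minimal sketch, using the standard 6·N·D rule of thumb for dense-transformer training FLOPs; all parameter counts, token counts, and hardware peak numbers below are hypothetical, chosen only for illustration.]

```python
def transformer_train_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training FLOPs via the common 6 * N * D rule of thumb
    (forward + backward pass for a dense transformer)."""
    return 6.0 * n_params * n_tokens

def mfu(achieved_flops: float, seconds: float, peak_flops_per_sec: float) -> float:
    """Model FLOPs Utilization: fraction of hardware peak FLOP/s
    actually spent on model computation."""
    return (achieved_flops / seconds) / peak_flops_per_sec

# Hypothetical example: a 7B-parameter model trained on 1e10 tokens
# in one week on hardware with a 2e15 FLOP/s peak.
flops = transformer_train_flops(7e9, 1e10)
print(f"MFU: {mfu(flops, 604_800.0, 2e15):.1%}")  # prints "MFU: 34.7%"
```

RL-style workloads interleave generation (inference) with weight updates and weight movement, so useful-FLOPs accounting like this captures less of the real cost, which is Patel's point about flops becoming a less useful term.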