Lex Fridman Podcast
#459 – DeepSeek, China, OpenAI, NVIDIA, xAI, TSMC, Stargate, and AI Megaclusters
Nathan Lambert
There's the whole thing about scaling laws ending, right? It's so ironic, right? It lasted a month. It was literally just, hey, models aren't getting better, right? They're just not getting better. There's no reason to spend more. Pre-training scaling is dead. And then it's like, o1, o3, right? R1, right?