Lex Fridman Podcast
#459 – DeepSeek, China, OpenAI, NVIDIA, xAI, TSMC, Stargate, and AI Megaclusters
Nathan Lambert
Like all of these things are going to be environments where compute is spent in quote-unquote post-training. But I think it's going to be good. We're going to drop the "post" from post-training. It's going to be pre-training and it's going to be training, I think, at some point. Because for the bulk of the last few years, pre-training has dwarfed post-training.