Lex Fridman Podcast
#459 – DeepSeek, China, OpenAI, NVIDIA, xAI, TSMC, Stargate, and AI Megaclusters
Nathan Lambert
I want to train there because that's where all of my GPUs are co-located, where I can connect them together at super high networking speed, right? Because that's what you need for training. Now with pre-training, this is the old scale, right? You'd increase parameters, you'd increase data, and the model gets better, right?