Lex Fridman Podcast
#459 – DeepSeek, China, OpenAI, NVIDIA, xAI, TSMC, Stargate, and AI Megaclusters
Nathan Lambert
I think it's actually probably simpler than that. It's probably something related to computer use or robotics rather than scientific discovery, because the important aspect here is that models take so much data to learn; they're not sample efficient. They train on the entire web, over 10 trillion tokens. That would take a human thousands of years to read.
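As a rough sanity check on that reading-time figure, here is a minimal back-of-envelope sketch in Python. The words-per-token ratio and reading speed are assumptions for illustration, not numbers from the conversation; only the 10 trillion token count comes from the transcript.

    # Back-of-envelope check of the "thousands of years to read" claim.
    # Assumed figures (not from the transcript): ~0.75 words per token and
    # a nonstop reading speed of ~250 words per minute.

    TOKENS = 10e12            # 10 trillion training tokens (from the transcript)
    WORDS_PER_TOKEN = 0.75    # rough English-text conversion (assumption)
    WORDS_PER_MINUTE = 250    # typical adult reading speed (assumption)

    words = TOKENS * WORDS_PER_TOKEN
    minutes = words / WORDS_PER_MINUTE
    years = minutes / (60 * 24 * 365)

    print(f"{years:,.0f} years of nonstop reading")  # roughly 57,000 years

Under these assumptions the result is on the order of tens of thousands of years, which is consistent with the claim that no human could read the training corpus in a lifetime.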