Lex Fridman Podcast

#459 – DeepSeek, China, OpenAI, NVIDIA, xAI, TSMC, Stargate, and AI Megaclusters

10718.296 - 10743.342 Nathan Lambert

I think it's actually probably simpler than that. It's probably something related to computer use or robotics rather than science discovery, because the important aspect here is that models take so much data to learn; they're not sample efficient, right? They take trillions of tokens, the entire web, over 10 trillion tokens to train on, right? This would take a human thousands of years to read, right?
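A rough back-of-envelope check of the "thousands of years" claim. The conversion factors here (about 0.75 words per token, a reading speed of about 250 words per minute) are assumptions for illustration, not figures from the episode:

```python
# Sketch: how long would a human need to read 10 trillion tokens?
# Assumptions (not from the episode): ~0.75 words per token,
# ~250 words/minute, reading nonstop with no breaks.

tokens = 10e12                     # 10 trillion training tokens
words = tokens * 0.75              # ~7.5 trillion words
minutes = words / 250              # ~3e10 minutes of reading
years = minutes / (60 * 24 * 365)  # convert minutes to years

print(f"{years:,.0f} years of nonstop reading")  # roughly 57,000 years
```

Under these assumptions the figure comes out in the tens of thousands of years, which is consistent with the "thousands of years" framing in the quote.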
