Acquired

Nvidia Part III: The Dawn of the AI Era (2022-2023)

2531.207 - 2550.977 Ben Gilbert

It's funny when you look at some of the 1 billion parameter models, you're like, there is no chance that turns into anything useful ever. But by merely adding more training data and more parameters... It just gets way, way better. There's this weirdly emergent property where transformer-based models scale really well due to the parallelism.
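To make the "1 billion parameter" remark concrete, here is a minimal, illustrative sketch of how parameter counts in a GPT-style decoder-only transformer grow as depth and width increase. The layer shapes and vocabulary size below are hypothetical examples, not figures from the episode; the formula ignores biases and layer norms, which contribute comparatively little.

```python
def transformer_params(n_layers: int, d_model: int, vocab_size: int = 50_000) -> int:
    """Rough parameter count for a GPT-style decoder-only transformer.

    Per layer: attention projections (4 * d_model^2 for Q, K, V, output)
    plus a 4x-wide feed-forward block (8 * d_model^2).
    """
    per_layer = 4 * d_model**2 + 8 * d_model**2
    embeddings = vocab_size * d_model  # token embedding (often tied with the output head)
    return n_layers * per_layer + embeddings


if __name__ == "__main__":
    # Hypothetical configurations showing how the count scales with depth and width.
    for name, layers, width in [("~1B-class", 16, 2048),
                                ("~7B-class", 32, 4096),
                                ("~70B-class", 80, 8192)]:
        print(f"{name}: ~{transformer_params(layers, width) / 1e9:.1f}B parameters")
```

Because each layer's matrix multiplications are independent within a forward pass, this kind of architecture maps naturally onto massively parallel hardware, which is the parallelism point being made in the quote above.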
