Acquired

Nvidia Part III: The Dawn of the AI Era (2022-2023)

4780.554 - 4804.526 Ben Gilbert

Oh, I totally buy it, though. I mean, I think there's a very real case around, look, you only have to train a model once, and then you can do inference on it over and over and over again. I mean, the analogy I think makes a lot of sense for model training is to think about it as a form of compression, right? LLMs are turning the entire internet of text into a much smaller set of model weights.
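A rough way to see the compression framing is to compare the size of a training corpus with the size of the resulting weights. The sketch below uses illustrative assumptions only (roughly a 70B-parameter model trained on ~2 trillion tokens, with ~4 bytes of raw text per token and 16-bit weights); none of these figures come from the episode.

# Back-of-envelope sketch of the "LLM as compression" framing.
# All figures are illustrative assumptions, not measurements.

TRAINING_TOKENS = 2e12   # assumed corpus size: ~2 trillion tokens
BYTES_PER_TOKEN = 4      # rough average bytes of raw text per token
PARAMETERS = 70e9        # assumed model size: 70 billion parameters
BYTES_PER_PARAM = 2      # fp16/bf16 storage: 2 bytes per weight

corpus_bytes = TRAINING_TOKENS * BYTES_PER_TOKEN   # ~8 TB of text
weight_bytes = PARAMETERS * BYTES_PER_PARAM        # ~140 GB of weights

print(f"corpus : {corpus_bytes / 1e12:.1f} TB")
print(f"weights: {weight_bytes / 1e9:.1f} GB")
print(f"ratio  : ~{corpus_bytes / weight_bytes:.0f}x smaller")

On those assumptions the weights end up roughly 50-60x smaller than the text they were trained on, which is the sense in which training "compresses" the corpus; the compression is lossy, since the model reconstructs likely text rather than the exact source.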
