The Twenty Minute VC (20VC): Venture Capital | Startup Funding | The Pitch
20VC: AI Scaling Myths: More Compute is not the Answer | The Core Bottlenecks in AI Today: Data, Algorithms and Compute | The Future of Models: Open vs Closed, Small vs Large with Arvind Narayanan, Professor of Computer Science @ Princeton
Arvind Narayanan
So there is training compute, which is what the developer spends when building the model. And then there is inference compute, which is spent when the model is deployed and the user is using it to do something. And it might seem like the training cost is really the one we should worry about, since the model is trained on all of the text on the internet or whatever.
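The training-vs-inference distinction above can be made concrete with a back-of-the-envelope FLOP estimate. A minimal sketch, using the common rule-of-thumb approximations (roughly 6 FLOPs per parameter per training token, and roughly 2 FLOPs per parameter per generated token at inference) with illustrative model and dataset sizes; none of these specific numbers come from the episode:

```python
def training_flops(n_params: float, n_tokens: float) -> float:
    # Rule-of-thumb: ~6 FLOPs per parameter per training token
    # (forward + backward pass), for dense transformer training.
    return 6 * n_params * n_tokens

def inference_flops(n_params: float, n_tokens: float) -> float:
    # Rule-of-thumb: ~2 FLOPs per parameter per token processed
    # (forward pass only).
    return 2 * n_params * n_tokens

# Hypothetical example: a 70B-parameter model trained on 1.4T tokens.
train = training_flops(70e9, 1.4e12)        # ~5.9e23 FLOPs, paid once

# One deployed query generating a 1,000-token response.
per_query = inference_flops(70e9, 1_000)    # ~1.4e14 FLOPs, paid per query

# Number of such queries whose inference cost matches the training run.
breakeven_queries = train / per_query       # ~4.2e9 queries

print(f"training: {train:.2e} FLOPs")
print(f"per query: {per_query:.2e} FLOPs")
print(f"break-even: {breakeven_queries:.2e} queries")
```

The point of the sketch: training is a huge one-time cost, but inference is paid on every query, so at the scale of billions of users and queries the cumulative inference compute can rival or exceed the training compute.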