
Acquired

Nvidia Part III: The Dawn of the AI Era (2022-2023)

2011.354 - 2029.53 Ben Gilbert

Oh, just wait till we get through this episode. It gets deeper. So obviously, yes, traditionally, you'd say this is very, very inefficient. And it actually means that the larger your context window, aka token limit, aka prompt length, gets... the more computationally expensive it gets on a quadratic basis.
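The quadratic scaling Ben describes comes from self-attention comparing every token in the prompt against every other token. A minimal sketch (names and parameter values are illustrative, not from the episode) of how the cost of just the attention score matrix grows with context length:

```python
def attention_score_flops(seq_len: int, d_model: int) -> int:
    # Self-attention builds a seq_len x seq_len score matrix,
    # where each score is a d_model-length dot product.
    # Cost is therefore quadratic in the sequence (context) length.
    return seq_len * seq_len * d_model

# Doubling the context window quadruples this cost.
base = attention_score_flops(2048, 128)
doubled = attention_score_flops(4096, 128)
print(doubled / base)  # -> 4.0
```

This is why a longer context window (token limit) gets disproportionately more expensive: twice the prompt length means roughly four times the attention compute, not two.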
