Acquired
Nvidia Part III: The Dawn of the AI Era (2022-2023)
Ben Gilbert
So the reason you want an H100 is they're 30 times faster than an A100, which, mind you, is only about two and a half years older. It's nine times faster for AI training. The H100 is literally purpose-built for training LLMs, like the full self-driving video stuff. It's super easy to scale up. It's got 18,500 CUDA cores. Remember when we were talking about the von Neumann example earlier, like...
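To make the von Neumann contrast concrete: in that model a single ALU works through an array one element at a time, while a CUDA kernel fans the same work out across thousands of cores at once. Below is a minimal, illustrative sketch of that idea (a simple vector add; the sizes and launch configuration are arbitrary and not tied to any particular H100 setup):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread handles exactly one element. With thousands of CUDA cores,
// thousands of these additions run at the same time, instead of a
// von Neumann-style CPU looping through them one by one.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;            // one million elements
    size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);     // unified memory keeps the sketch short
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover every element.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);      // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The same pattern scales from one GPU to many: the kernel doesn't change, you just spread more blocks across more devices, which is part of why Ben calls the H100 "super easy to scale up."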