Acquired
Nvidia Part III: The Dawn of the AI Era (2022-2023)
David Rosenthal
Well, it turns out if you're trying to address hundreds, maybe more than hundreds of GPUs as one single compute cluster to train a massive AI model, yeah, you want really fast data interconnects between them.