Lex Fridman Podcast
#434 – Aravind Srinivas: Perplexity CEO on Future of AI, Search & the Internet
Aravind Srinivas
Yeah, it's a very clever insight: look, you want to learn causal dependencies, but you don't want to waste your hardware, your compute, and keep doing the backpropagation sequentially. You want to do as much parallel compute as possible during training. That way, whatever job was earlier running in eight days would run in a single day. I think that was the most important insight.
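(A minimal illustrative sketch, not from the podcast, of the point being made: a causal attention mask still forces the model to learn left-to-right dependencies, but during training every position in the sequence can be computed in one batched operation, whereas an RNN must step through tokens one at a time. Shapes and layer choices here are arbitrary toy assumptions.)

```python
# Sketch: sequential RNN unrolling vs. parallel causal self-attention during training.
import torch
import torch.nn.functional as F

B, T, D = 2, 8, 16                      # batch size, sequence length, model dim (toy values)
x = torch.randn(B, T, D)                # token embeddings (random stand-ins)

# Sequential (RNN-style): each step depends on the previous hidden state,
# so the T steps cannot be parallelized across time.
rnn = torch.nn.RNNCell(D, D)
h = torch.zeros(B, D)
seq_out = []
for t in range(T):
    h = rnn(x[:, t], h)
    seq_out.append(h)
seq_out = torch.stack(seq_out, dim=1)   # (B, T, D)

# Parallel (Transformer-style): the causal mask preserves the same
# left-to-right dependency structure, but all T positions are produced
# in one pass of batched matrix multiplies -- no loop over time.
q = k = v = x                                          # single head, no projections, for brevity
scores = q @ k.transpose(-2, -1) / D ** 0.5            # (B, T, T)
causal = torch.tril(torch.ones(T, T, dtype=torch.bool))
scores = scores.masked_fill(~causal, float("-inf"))    # position t attends only to positions <= t
par_out = F.softmax(scores, dim=-1) @ v                # (B, T, D), computed in parallel
```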