Accidental Tech Podcast
624: Do Less Math in Computers
Casey Liss
It's assumed to be widespread in terms of model training, and it's why there's an ever-increasing number of models converging on GPT-4o quality. This doesn't mean that we know for a fact that DeepSeek distilled 4o or Claude, but frankly, it would be odd if they didn't.