Lex Fridman Podcast

#416 – Yann LeCun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

5646.61 - 5665.594 Yann LeCun

Indirectly, that gives a high probability to sequences of words that are good and low probability to sequences of words that are bad, but it's very indirect. It's not obvious why this actually works at all, because you're not doing it on a joint probability of all the symbols in a sequence.
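
For context, the "indirect" route described here can be written out as the standard chain-rule factorization of sequence probability (textbook notation, not an equation quoted in the episode): an autoregressive LLM is trained to score one next token at a time, and the joint probability of a whole sequence only appears as the product of those per-token conditionals.

p(w_1, w_2, \ldots, w_T) = \prod_{t=1}^{T} p\bigl(w_t \mid w_1, \ldots, w_{t-1}\bigr)

Training maximizes each conditional factor p(w_t \mid w_{<t}) separately; the full joint distribution over all the symbols in a sequence is never evaluated directly, which is the indirection LeCun is pointing at.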
