Lex Fridman Podcast
#416 – Yann LeCun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI
Yann LeCun
Indirectly, that gives a high probability to sequences of words that are good and low probability to sequences of words that are bad, but it's very indirect. It's not obvious why this actually works at all, because you're not doing it on a joint probability of all the symbols in a sequence.
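The indirectness LeCun describes can be sketched with the chain rule: an autoregressive model only ever scores one next token at a time, yet multiplying those conditional probabilities together implicitly defines a joint probability over the whole sequence. The toy model below is purely illustrative (the tokens and probability values are made up, not from the episode):

```python
import math

# Toy next-token model: conditional probabilities P(token | prefix).
# These tokens and numbers are illustrative assumptions, not real model outputs.
cond_prob = {
    ((), "the"): 0.5,
    (("the",), "cat"): 0.4,
    (("the", "cat"), "sat"): 0.6,
}

def sequence_log_prob(tokens):
    """Chain rule: log P(w1..wn) = sum_i log P(wi | w1..w(i-1))."""
    total = 0.0
    for i, tok in enumerate(tokens):
        total += math.log(cond_prob[(tuple(tokens[:i]), tok)])
    return total

lp = sequence_log_prob(["the", "cat", "sat"])
# The implied joint probability is the product 0.5 * 0.4 * 0.6 = 0.12
assert abs(math.exp(lp) - 0.12) < 1e-9
```

So even though training never optimizes the joint probability directly, good and bad sequences still end up with high and low scores through these local next-token predictions, which is the indirect mechanism the quote points at.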