Lex Fridman Podcast
#447 – Cursor Team: Future of Programming with AI
Aman Sanger
There's this interesting thing where if you look at language model loss on different domains, I believe the bits per byte, which is a kind of character-normalized loss, is lower for code than for natural language, which means that in general there are a lot of tokens in code that are super predictable, a lot of characters that are super predictable.
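The bits-per-byte metric Aman mentions can be sketched as follows: it converts a model's total negative log-likelihood over a text span into bits, then normalizes by the span's length in bytes, which makes losses comparable across tokenizers. The function name and the numeric values below are illustrative assumptions, not figures from the conversation.

```python
import math

def bits_per_byte(total_nll_nats: float, num_bytes: int) -> float:
    """Convert total negative log-likelihood (in nats) over a text
    span into bits per byte, a tokenizer-independent loss metric."""
    return total_nll_nats / (num_bytes * math.log(2))

# Hypothetical numbers: a model assigns 500 nats of total NLL to a
# 1000-byte code snippet vs. 900 nats to 1000 bytes of prose.
code_bpb = bits_per_byte(500.0, 1000)    # ~0.72 bits/byte
prose_bpb = bits_per_byte(900.0, 1000)   # ~1.30 bits/byte
assert code_bpb < prose_bpb  # code is more predictable per byte
```

The division by `math.log(2)` converts nats (natural log) to bits (log base 2); normalizing by bytes rather than tokens is what makes the metric fair across domains with different tokenization densities.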