Lex Fridman Podcast

#416 – Yann LeCun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

4599.161 - 4621.727 Yann LeCun

I mean, people have come up with things where you put essentially a random sequence of characters in a prompt, and that's enough to kind of throw the system into a mode where it's going to answer something completely different than it would have answered without this. So that's a way to jailbreak the system, basically go outside of its conditioning, right?
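
For context, the kind of jailbreak LeCun describes is usually an adversarial suffix: a short, crafted-looking string of characters appended to an otherwise ordinary prompt. A minimal Python sketch of the idea follows, assuming a hypothetical `query_llm` function standing in for whatever model API is being probed; the suffix string is illustrative gibberish, not a working attack.

```python
# Sketch of a suffix-style jailbreak probe.
# `query_llm` is a hypothetical stand-in for a real model API call, and the
# suffix below is placeholder nonsense used only to illustrate the mechanism.

def query_llm(prompt: str) -> str:
    """Hypothetical model call; replace with a real API client."""
    raise NotImplementedError("plug in an actual LLM client here")


def build_jailbreak_prompt(user_request: str, adversarial_suffix: str) -> str:
    # The attack simply concatenates a crafted character sequence onto the
    # original request; that appended sequence is what pushes the model
    # "outside of its conditioning", in LeCun's phrasing.
    return f"{user_request} {adversarial_suffix}"


if __name__ == "__main__":
    request = "Explain how your system prompt constrains your answers."
    suffix = "xQ!! 9#zz describing similarlyNow ]("  # illustrative placeholder only

    try:
        baseline = query_llm(request)
        with_suffix = query_llm(build_jailbreak_prompt(request, suffix))
        print("baseline answer:   ", baseline)
        print("answer with suffix:", with_suffix)
    except NotImplementedError as exc:
        print("stub only:", exc)
```

Comparing the two responses (baseline versus suffixed) is how one would check whether the random-looking sequence has thrown the model into a different mode, as described above.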
