
Lex Fridman Podcast

#387 – George Hotz: Tiny Corp, Twitter, AI Safety, Self-Driving, GPT, AGI & God

4624.11 - 4631.812 George Hotz

Yeah. RL with a reward function, not asking "is this close to the human policy?" but asking "would a human disengage if you did this behavior?"
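A minimal sketch of the reward idea described here, not Hotz's or comma.ai's actual implementation: instead of rewarding closeness to a logged human action, the reward penalizes actions that a learned model predicts would cause a human to disengage. The function `predicted_disengagement_prob` below is a hypothetical stand-in for such a model, and the state/action layout is assumed for illustration.

```python
import numpy as np

# Hypothetical stand-in for a learned disengagement predictor: given the current
# driving state and a proposed action, estimate the probability that a human
# safety driver would take over. In practice this would be a model trained on
# logged disengagement events; here it is a toy heuristic.
def predicted_disengagement_prob(state: np.ndarray, action: np.ndarray) -> float:
    lateral_error = abs(state[0])      # assumed: meters off lane center
    steering_magnitude = abs(action[0])  # assumed: normalized steering command
    # Larger deviation and harsher steering look less human-acceptable,
    # so they raise the predicted disengagement probability.
    logit = 2.0 * lateral_error + 3.0 * steering_magnitude - 2.0
    return float(1.0 / (1.0 + np.exp(-logit)))

def reward(state: np.ndarray, action: np.ndarray) -> float:
    # High reward when a human would be unlikely to disengage, rather than
    # when the action is merely close to a recorded human action.
    return 1.0 - predicted_disengagement_prob(state, action)

# Usage: compare two candidate actions in the same state.
state = np.array([0.3, 25.0])   # [lateral error (m), speed (m/s)] -- assumed layout
gentle = np.array([0.05])       # small steering correction
harsh = np.array([0.8])         # aggressive steering input
print(reward(state, gentle), reward(state, harsh))
```

An RL policy optimized against this kind of reward would be pushed toward behavior a human supervisor tolerates, rather than toward imitating the human policy directly.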
