Lex Fridman Podcast
#387 – George Hotz: Tiny Corp, Twitter, AI Safety, Self-Driving, GPT, AGI & God
George Hotz
Yeah. RL with a reward function, not asking, "Is this close to the human policy?" but asking, "Would a human disengage if you did this behavior?"
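[A minimal sketch of the reward-design idea described above, not any actual implementation. Instead of an imitation objective (match the human policy), the reward comes from a learned disengagement predictor: would a human take over if the system behaved like this? The class and function names here are hypothetical placeholders.]

```python
import torch
import torch.nn as nn

class DisengagementPredictor(nn.Module):
    """Predicts the probability that a human would disengage,
    given the current observation and the action the policy proposes."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        # Output is P(disengage | obs, act) in [0, 1].
        return torch.sigmoid(self.net(torch.cat([obs, act], dim=-1)))

def disengagement_reward(predictor: DisengagementPredictor,
                         obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
    """Reward is high when a human would NOT disengage.

    Contrast with behavioral cloning, where the loss is
    ||act - human_act||^2, i.e. "is this close to the human policy?".
    """
    p_disengage = predictor(obs, act)
    return 1.0 - p_disengage
```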