Lex Fridman Podcast

#431 – Roman Yampolskiy: Dangers of Superintelligent AI

4555.501 - 4577.899 Lex Fridman

These strategies can roughly be placed on a spectrum, depending on how much safety they would grant if successfully implemented. One way to do this is as follows, with a set of levels: from level zero, where no safety specification is used, to level seven, where the safety specification completely encodes all things that humans might want in all contexts. Where does this paper fall short to you?
