Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI
Lex Fridman
These strategies can roughly be placed on a spectrum depending on how much safety each would grant if successfully implemented. One way to do this is as follows, with a set of levels: from level zero, where no safety specification is used, to level seven, where the safety specification completely encodes all the things humans might want in all contexts. Where does this paper fall short, to you?