Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI
Roman Yampolskiy
Well, either they have to prove that, of course, it's possible to indefinitely control godlike superintelligent machines by humans, and ideally let us know how, or agree that it's not possible and it's a very bad idea to do it, including for them personally and their families and friends and capital.