
Lex Fridman Podcast

#431 – Roman Yampolskiy: Dangers of Superintelligent AI

5901.997 - 5920.365 Roman Yampolskiy

Well, either they have to prove that, of course, it's possible to indefinitely control godlike superintelligent machines by humans, and ideally let us know how, or agree that it's not possible and it's a very bad idea to do it, including for them personally and their families and friends and capital.
