
Decision theorist Eliezer Yudkowsky has a simple message: superintelligent AI could probably kill us all. So the question becomes: Is it possible to build powerful artificial minds that are obedient, even benevolent? In a fiery talk, Yudkowsky explores why we need to act immediately to ensure smarter-than-human AI systems don't lead to our extinction.