
Description
Decision theorist Eliezer Yudkowsky has a simple message: superintelligent AI could probably kill us all. So the question becomes: Is it possible to build powerful artificial minds that are obedient, even benevolent? In a fiery talk, Yudkowsky explores why we need to act immediately to ensure smarter-than-human AI systems don't lead to our extinction.