Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI
Lex Fridman
Do you think the approaches of the teams working on AI safety for the kind of narrow AI risks you've mentioned will be at all productive toward AI safety for AGI? Or is that a fundamentally different problem?