Eric Samvar
Podcast Appearances
I actually think, and I know you'll think I'm insane, and maybe I am, that AI is underhyped. We don't know yet if AI can build an AI Einstein, but we think that the computer can get pretty close to it. And that means that each and every one of us would have literally Einstein and Leonardo da Vinci or pretty close in your pocket on your phone. It's a pretty big deal.
Basically, there's a set of things that we shouldn't really allow. An example would be the ability for the system to fool us, to deceive us. Another one would be if it developed a desire to get access to weapons, especially nuclear weapons. Imagine that it starts to make copies of itself. It decides that it wants to propagate itself even if we unplug it.
There's a set of such things that we need to watch for in this technology. It's not capable of doing it right now, but there are signs that it might be capable in the future.
The truth is that AI, and the future, is largely going to be built by private companies. It has to do with the incentives and the money and where the talent is and how the world works. It's not going to be built in the equivalent of a Manhattan Project. So it's really important that governments understand what we're doing and keep their eye on us.
We're not arguing that we should unilaterally be able to do things without oversight. We think it should be regulated. The real fears that I have are not the ones most people talk about with AI. I talk about extreme risk. There's evidence that the models have knowledge that could allow, for example, a bad biological attack from some evil person.
And I'm sure no one listening is evil, but there must be at least one evil person in the world who could take advantage of that in a really bad way. We want to make sure that doesn't happen. Part of the reason that we're all alive today is because people in the 1950s developed a whole strategy around nuclear containment. The same is not true in computers.