Zico Kolter
But there are capabilities we think these models might enable that would lower the bar so much for some bad things, like, say, creating a zero-day exploit that takes down software across half the world. The concern, at least initially, is not that they can do this sort of thing autonomously,
but that they can lower the bar so far in the skill level required to create these things that effectively it puts them in the hands of a huge number of bad actors. And the same is true for things like biological risk or chemical risk or other things like this. And these concerns have to be taken seriously.
And they have to be things that we really do consider genuine possibilities if we start putting into everyone's hands the ability to create…
Two issues there. One is whether AI is as dangerous as nuclear weapons, and the other is what this implies about the open release of certain models. So I'll make two points on this. I think the nuclear weapon analogy is actually not a great one, because nuclear weapons have one purpose, which is to destroy things.
Maybe a better analogy is nuclear technology, period, because it has the ability to create nuclear weapons, but it also has the ability to do things like provide non-CO2-emitting power to potentially a huge number of people, right? A lot of people are currently making a bet on nuclear as the way we create carbon-free energy.
But I think the analogy to nuclear weapons in particular is often overstated precisely because AI has many good uses. Nuclear weapons arguably do one thing, and it's not considered a good use, right? So it's a very different kind of technology.
But let me get to your second point now, which is the open model debate, one that frequently plays out in discussions on AI safety. I should start off by saying I'm a fan of open source models in a general sense.
So I want to start by saying that because, honestly speaking, the open release of models has advanced my ability to study these systems. And I really say open weight rather than open source, because oftentimes these are not actually open source in the traditional way. They're actually much more like closed-source executables; you can just run them on your own computer.