Zico Kolter
Podcast Appearances
And I hope, my sincere hope would be that if one of these models does really demonstrate the ability to instantly create an exploit for any executable code, compiled code, or anything else, and we see that in a closed-source model first, we would think a little bit about whether we really want to release this model, or an equivalent model, open weight for anyone to use.
Then there are the more far-fetched scenarios: agentic AGI systems that start intentionally acting harmfully against humans, the rogue AI that decides it wants to wipe out humanity and goes about planning how to do this.
These, what seem to me, and I'll be honest here, far-flung, sci-fi-ish scenarios are often where the debates land when it comes to AI safety. I want to say two things about this. The first is that I think the vast majority of AI safety should not be about these topics.
The vast majority should be about quite practical concerns we have on making systems safer, like the kind that I've talked with you about so far. There are already massive
safety considerations and risks that are present in current systems, and that would certainly be present even in slightly more capable systems, regardless of the timeframes associated with AGI, and certainly the timeframes associated with rogue intelligent AI systems. However, I also don't want to dismiss this entirely.
The way I would put it is I am glad people are thinking about these problems, I'm glad people are thinking about the capabilities and even what I consider far-flung scenarios. They are good things to think about as, by the way, are much more immediate harms of AI systems like misinformation, like misuse of these things.
I think killing jobs is a much more immediate concern than killing humans. An example I often use here to try to bring together these two sides a bit, the "AI is taking over the world and killing us all" camp and the more skeptically minded academic folks, we'll say.
I see a path right now to a world in which, you know, a few years from now, we start integrating AI models into more and more of our software. We build them up more and more. We make these systems a little bit more autonomous in their actions.