Zico Kolter
Podcast Appearances
We do not understand how these things work internally, the possible correlated failures, the possible attack vectors, all these sorts of things. We don't understand it.
And because of this, we need to think very carefully about how we deploy these systems and how we consider safety concerns, especially when it comes to things like critical infrastructure, which I think are extremely pressing concerns. And yes, things like bio-risk as well; again, I work much less on these, but they are pressing concerns. And you don't have to believe in superintelligent systems or evil robots in order to see these as pressing concerns. AI safety is a concern right now. And we all need to come to grips with the fact that it's a concern right now and start solving the problems right now.
Yeah, it's really wild. The current systems we have already upend a massive amount of the structures we've built in place, and those will continue to be upended more and more by evolving AI technology. And these are real concerns that we have to come to terms with.
I would classify myself as an optimist when it comes to AI. I already enjoy these tools, and I'm excited about the potential things we can do with them, yes, even up to AGI. And I use the word "tool" here not pejoratively. Hopefully, AGI is a tool, right? Hopefully, AGI is a system that we still deploy to achieve our ends. I can't help but be excited about these things.
This is the culmination of a lot of the work that we in the field have been doing, and it's coming to fruition in ways that are directly beneficial for a lot of the things that I do. So I want to have these tools, and maybe this gets to the clear point here: I want to develop and improve the safety of these tools because I want to use them.
It's not that we have some moral imperative that we have to develop these tools, or that we have to develop AI and AGI. I mean, maybe there is one; maybe that's true. But that's not what motivates me to develop them, right? I want to develop them because I want to use them, and I want to be able to have them. And to reach that point, they have to be safe, right? It's a necessary condition.