Zico Kolter
So I want to start by saying that because, honestly speaking, the open source release of models, and I really say open weight, because oftentimes these are not actually open source in the traditional way. They're actually much more like closed source executables that you can just run on your own computer. Open weight models have advanced my ability to study these systems.
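To make the distinction concrete: an open weight release means anyone can download the parameters and run the model on their own hardware, even though the training data and training code stay closed. A minimal sketch of what that looks like in practice, assuming the Hugging Face transformers and accelerate libraries and access to a gated model repo (the model id here is illustrative):

```python
# A minimal sketch of running an open-weight model locally. Assumes the
# Hugging Face `transformers` and `accelerate` libraries are installed and
# that you have accepted the model's license on the Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B"  # illustrative open-weight model

# Download the weights once; after that, everything runs on your own machine.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# You hold the raw parameters, not an API key: no provider sits between you
# and the model at inference time.
inputs = tokenizer("Open weight models let researchers", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

This is what makes them "like closed source executables": you can run and probe the model freely, but the data and process that produced the weights remain private.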
They've been the primary tool by which we conduct research in academia and beyond, and they are becoming, I would argue, a critical part of the overall ecosystem of AI right now. That's number one.
Number two: if you look at the current best models out there right now, things like GPT-4, Claude 3.5, Gemini, I would not currently be all that nervous about having an open source model that was as capable as these, in terms of catastrophic effects, because these models by themselves actually aren't that dangerous. We have a good handle on them, right?
We sort of know what they're capable of. Arguably, we're already there, because Llama 3 405B is pretty close. I don't think it's quite at that level yet, but it's getting there. And, you know, this release has not yet caused some catastrophic event, because the reality is these models still have a ways to go.
Right now, to a certain extent, I think things are okay with open weight release of these models. However, there will come a time when a certain capability of these models reaches the point that should give us pause when it comes to just turning these things over to whoever, to use however they want.
And I do think there are certain levels of capability, within kind of eyesight of our current development, that raise the question: should we give this to everyone, not just to use, but to tune and specialize however they want?
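To make "tune and specialize" concrete, here is a hedged sketch of how cheaply anyone can fine-tune open weights once they hold them, using LoRA adapters via the Hugging Face peft library (the model id and hyperparameters are illustrative assumptions, not a recipe from this conversation):

```python
# A minimal sketch of specializing an open-weight model with LoRA adapters.
# Assumes the `transformers` and `peft` libraries; all values are illustrative.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B", device_map="auto"
)

# LoRA trains small adapter matrices instead of all base parameters, which is
# why specialization is cheap enough for almost anyone with a single GPU.
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of the weights

# From here, a standard training loop over any dataset the tuner chooses,
# including one that undoes the original safety training, yields a
# specialized model the original developer has no control over.
```

Once the weights are public, this kind of modification cannot be revoked, which is exactly why capability thresholds matter for release decisions.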
And I would just sort of say, I think there will be a point where I get uncomfortable with that. What is that point? Think about a model that could really analyze any code base, or even any binary executable, a website, JavaScript, anything like this, and immediately find a vulnerability it could exploit to take down a large portion of the internet or a large portion of software.