Zico Kolter
Podcast Appearances
That's not how these models actually work. These models are trained once on a very large collection of data. And if you use something like API access, your data is not going to be trained on; the model will not be retrained on it. And even if it were, that is not the same thing. The fact that a model can answer your question does not mean the model is training on it.
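A minimal PyTorch sketch of the distinction drawn here (the toy model and data are placeholders, not anything from the episode): answering a query is a forward pass that leaves the weights untouched, while training is a separate, explicit optimization step that changes them.

```python
import torch
import torch.nn as nn

# A stand-in model; hypothetical, purely to illustrate the point.
model = nn.Linear(8, 2)
x = torch.randn(1, 8)

# --- Inference (what an API call does) ---
# No gradients are computed and no weights change: the model
# simply maps your input to an output.
before = model.weight.clone()
with torch.no_grad():
    answer = model(x)
assert torch.equal(model.weight, before)  # weights untouched

# --- Training (a separate, deliberate process) ---
# Only an explicit optimization step like this updates the weights.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss = model(x).sum()
loss.backward()
optimizer.step()
assert not torch.equal(model.weight, before)  # weights changed
```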
This is honestly a very simple misunderstanding, I think, that a lot of people have a very hard time getting over. And I still see these misconceptions when I talk with companies. So we've, in some sense, maybe done a very bad job of marketing, because people don't really understand at some level that this is not, in certain use cases, any riskier than just having your data in the cloud to begin with, which they all typically do. They've all moved that way. So I think this will just happen naturally with the progression of time.
The thing that frustrates me most, honestly speaking, is the degree of certainty that some people have: either that we will definitely get there very, very soon, or, even more on the flip side, that there's absolutely no way we will ever achieve AGI with these current models because of X, Y, Z. This does actually start to irk me a little bit.
Because I personally, even as a product of AI-winter skepticism, see what's happening in these models, and I am amazed by it. And the people who have been ringing this bell for a while, saying, look, this is coming, have in many cases, in my view, been proven right. And I've updated my posterior beliefs based upon the evidence I've seen.
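"Updating posterior beliefs" is the standard Bayesian notion; as a purely hypothetical illustration (the numbers below are invented, not from the episode), Bayes' rule shows how observing evidence should move a prior:

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
# Hypothetical numbers, purely for illustration.
prior = 0.2            # prior belief that current methods can reach AGI
p_e_given_h = 0.9      # chance of seeing today's capabilities if they can
p_e_given_not_h = 0.3  # chance of seeing them anyway if they can't

evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / evidence
print(f"posterior = {posterior:.2f}")  # 0.43: belief revised upward
```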
And so what irks me the most about a lot of people's philosophy of AGI is how little, it seems, observable evidence has changed their beliefs one iota. They had certain beliefs about what it would take to get to general AI, or maybe that AGI was impossible by definition. And they have maintained those beliefs, in my view, in the face of overwhelming evidence pointing, at least, to contrary outcomes.