
(0:00) Welcoming Sam Altman to the show!
(2:28) What's next for OpenAI: GPT-5, open-source, reasoning, what an AI-powered iPhone competitor could look like, and more
(21:56) How advanced agents will change the way we interface with apps
(33:01) Fair use, creator rights, why OpenAI has stayed away from the music industry
(42:02) AI regulation, UBI in a post-AI world
(52:23) Sam breaks down how he was fired and re-hired, why he has no equity, dealmaking on behalf of OpenAI, and how he organizes the company
(1:05:33) Post-interview recap
(1:10:38) All-In Summit announcements, college protests
(1:19:06) Signs of innovation dying at Apple: iPad ad, Buffett sells 100M+ shares, what's next?
(1:29:41) Google unveils AlphaFold 3.0

Follow Sam: https://twitter.com/sama

Follow the besties:
https://twitter.com/chamath
https://twitter.com/Jason
https://twitter.com/DavidSacks
https://twitter.com/friedberg

Follow on X: https://twitter.com/theallinpod
Follow on Instagram: https://www.instagram.com/theallinpod
Follow on TikTok: https://www.tiktok.com/@all_in_tok
Follow on LinkedIn: https://www.linkedin.com/company/allinpod

Intro Music Credit: https://rb.gy/tppkzl https://twitter.com/yung_spielburg
Intro Video Credit: https://twitter.com/TheZachEffect

Referenced in the show:
https://twitter.com/EconomyApp/status/1622029832099082241
https://sacra.com/c/openai
https://twitter.com/tim_cook/status/1787864325258162239
https://openai.com/index/introducing-the-model-spec
https://twitter.com/SabriSun_Miller/status/1788298123434938738
https://www.archives.gov/founding-docs/bill-of-rights-transcript
https://twitter.com/ClayTravis/status/1788312545754825091
https://www.inc.com/bill-murphy-jr/warren-buffett-just-sold-more-than-100-million-shares-of-apple-reason-why-is-eye-opening.html
https://www.youtube.com/watch?v=snbTCWL6rxo
https://www.digitimes.com/news/a20240506PD216/apple-ev-startup-genai.html
https://www.theonion.com/fuck-everything-were-doing-five-blades-1819584036
https://blog.google/technology/ai/google-deepmind-isomorphic-alphafold-3-ai-model
I first met our next guest, Sam Altman, almost 20 years ago when he was working on a local mobile app called Loopt. We were both backed by Sequoia Capital. And in fact, we were both in the first class of Sequoia Scouts. He invested in a little-known fintech company called Stripe. I did Uber. And in that tiny experimental fund- You did Uber?
I've never heard that before. Yeah, I think so. It's possible. You've got it starting already.
You should write a book, Jacob. Maybe.
Maybe.
Rain Man, David Sacks.
And it said, we open sourced it to the fans and they've just gone crazy with it. Love you guys. Queen of Quinoa.
That tiny experimental fund that Sam and I were part of as scouts is Sequoia's highest multiple-returning fund. A couple of low-digit millions turned into over 200 million, I'm told. Really? Yeah, that's what I was told by Roelof, yeah. And he did a stint at Y Combinator, where he was president from 2014 to 2019.
In 2015, he co-founded OpenAI with the goal of ensuring that artificial general intelligence benefits all of humanity. In 2019, he left YC to join OpenAI full-time as CEO. Things got really interesting on November 30th of 2022. That's the day OpenAI launched ChatGPT. In January 2023, Microsoft invested $10 billion. In November 2023, over a crazy five-day span, Sam was fired from OpenAI.
Everybody was going to go work at Microsoft. A bunch of heart emojis went viral on X slash Twitter, and people started speculating that the team had reached artificial general intelligence. The world was going to end, and suddenly... A couple days later, he was back to being the CEO of OpenAI. In February, Sam was reportedly looking to raise $7 trillion for an AI chip project.
This after it was reported that Sam was looking to raise a billion from Masayoshi-san to create an iPhone killer with Jony Ive, the co-creator of the iPhone. All of this while ChatGPT has become better and better and a household name. It's having a massive impact on how we work and how work is getting done.
And it's reportedly the fastest product to hit 100 million users in history, in just two months. And check out OpenAI's insane revenue ramp-up. They reportedly hit $2 billion in ARR last year. Welcome to the All-In podcast, Sam Altman.
Thank you. Thank you, guys.
Sax, you want to lead us off here?
Okay, sure. I mean, I think the whole industry is waiting with bated breath for the release of GPT-5. I guess it's been reported that it's launching sometime this summer, but that's a pretty big window. Can you narrow that down? I guess, where are you in the release of GPT-5?
We take our time on releases of major new models, and I think it will be great when we do it. I think we'll be thoughtful about how we do it. Like, we may release it in a different way than we've released previous models. Also, I don't even know if we'll call it GPT-5. What I will say is, you know, a lot of people have noticed how much better GPT-4 has gotten since we've released it, and particularly over the last few months.
I think that's a better hint of what the world looks like, where it's not the one, two, three, four, five, six, seven, but you use an AI system and the whole system just gets better and better fairly continuously. I think that's both a better technological direction and easier for society to adapt to. And I assume that's where we'll head.
Does that mean that there's not going to be long training cycles and it's continuously retraining or training submodels, Sam? And maybe you could just speak to us about what might change architecturally going forward with respect to large models.
Well, I mean, one thing that you could imagine is just that you keep training a model. That would seem like a reasonable thing to me.
And we talked about releasing it differently this time. Are you thinking maybe releasing it to the paid users first or a slower rollout to get the red teams tight since now there's so much at stake? You have so many customers actually paying and you've got everybody watching everything you do. You have to be more thoughtful now, yeah?
GPT-4 is still only available to the paid users, but one of the things that we really want to do is figure out how to make more advanced technology available to free users too. I think that's a super important part of our mission.
And this idea that we build AI tools and make them super widely available, free or not that expensive, whatever it is, so that people can use them to go kind of invent the future, rather than the magic AGI in the sky inventing the future and showering it down upon us. That seems like a much better path. It seems like a more inspiring path. I also think it's where things are actually heading.
It makes me sad that we have not figured out how to make GPT-4 level technology available to free users. It's something we really want to do.
It's just very expensive, I take it.
It's very expensive.
Yeah. Chamath, your thoughts?
I think maybe the two big vectors, Sam, that people always talk about are that underlying cost and sort of the latency that's kind of rate-limited a killer app, and then I think the second is sort of the long-term ability for people to build in an open-source world versus a closed-source world. And I think the crazy thing about this space is that the open-source community is rabid.
So one example that I think is incredible is, you know, we had these guys do a pretty crazy demo for Devin, remember, like even five or six weeks ago, that looked incredible. And then some kid just published it under an open MIT license, like OpenDevin. And it's incredibly good, and almost as good as that other thing that was closed source.
So maybe we can just start with that, which is tell me about the business decision to keep these models closed source. And where do you see things going in the next couple of years?
So on the first part of your question, speed and cost, those are hugely important to us. And I don't want to give a timeline on when we can bring them down a lot because research is hard, but I am confident we'll be able to. We want to cut the latency super dramatically. We want to cut the cost really, really dramatically. And I believe that will happen.
We're still so early in the development of the science and understanding how this works. Plus, we have all the engineering tailwinds. So I don't know when we get to intelligence too cheap to meter and so fast that it feels instantaneous to us and everything else, but... I do believe we can get there for a pretty high level of intelligence. It's important to us.
It's clearly important to users, and it'll unlock a lot of stuff. On the sort of open source, closed source thing, I think there's great roles for both, I think. You know, we've open sourced some stuff. We'll open source more stuff in the future. But really, like, our mission is to build towards AGI and to figure out how to broadly distribute its benefits. We have a strategy for that.
It seems to be resonating with a lot of people. It obviously isn't for everyone, and there's, like, a big ecosystem, and there will also be open source models and people who build that way. One area that I'm particularly interested personally in open source for is I want an open source model that is as good as it can be that runs on my phone.
And that, I think, is going to, you know, the world doesn't quite have the technology for a good version of that yet. But that seems like a really important thing to go do at some point.
Will you do? Will you do that?
I don't know if we will or someone will.
What about Llama 3? Llama 3 running on a phone? Well, I guess maybe there's the 7 billion parameter version. Yeah, yeah. I don't know if that will fit on a phone or not.
That should be fittable on a phone, but I'm not sure if that one is like... I haven't played with it.
I don't know if it's good enough to kind of do the thing I'm thinking about here. So when Llama 3 got released, I think the big takeaway for a lot of people was, oh, wow, they've like caught up to GPT-4. I don't think it's equal in all dimensions, but it's like pretty... pretty close or pretty in the ballpark. I guess the question is, you know, you guys released four a while ago.
You're working on five or, you know, more upgrades to four. I mean, I think to Chamath's point about Devin, how do you stay ahead of open source? I mean, that's just like a very hard thing to do in general, right? I mean, how do you think about that?
What we're trying to do is not make the smartest set of weights that we can. What we're trying to make is this useful intelligence layer for people to use. And a model is part of that. I think we will stay pretty far ahead of, I hope we'll stay pretty far ahead of, the rest of the world on that. But there's a lot of other work around the whole system that's not just the model weights. And we'll have to build up enduring value the old-fashioned way, like any other business does. We'll have to figure out a great product and reasons to stick with it, and deliver it at a great price.
When you founded the organization, the stated goal or part of what you discussed was, hey, this is too important for any one company to own it. So therefore, it needs to be open. Then there was the switch. Hey, it's too dangerous for anybody to be able to see it. And we need to lock this down because you had some fear about that, I think. Is that accurate?
Because the cynical side is like, well, this is a capitalistic move. I'm curious what the decision was here in terms of going from open, "the world needs to see this, it's really important," to closed, "only we can see it." How did you come to that conclusion?
Part of the reason that we released ChatGPT was we want the world to see this. And we've been trying to tell people that AI is really important. And if you go back to like October of 2022, not that many people thought AI was going to be that important or that it was really happening. No. And a huge part of what we try to do is put the technology in the hands of people.
Now, again, there's different ways to do that. And I think there really is an important role to just say, like, here's the weights, have at it. But the fact that we have so many people using a free version of ChatGPT that we don't run ads on, we don't try to make money on, we just put out there because we want people to have these tools, I think has done a lot to...
provide a lot of value and teach people how to fish, but also to get the world really thoughtful about what's happening here. Now, we still don't have all the answers, and we're fumbling our way through this like everybody else, and I assume we'll change strategy many more times as we learn new things.
You know, when we started OpenAI, we had really no idea about how things were going to go, that we'd make a language model, that we'd ever make a product. We started off just... I remember very clearly that first day where we're like, well, Now we're all here. That was, you know, it was difficult to get this set up, but what happens now? Maybe we should write some papers.
Maybe we should stand around a whiteboard. And we've just been trying to like put one foot in front of the other and figure out what's next and what's next and what's next. And... I think we'll keep doing that.
Can I just replay something and just make sure I heard it right? I think what you were saying on the open source, closed source thing is, if I heard it right: all these models, independent of the business decisions you make, are going to asymptotically approach some level of accuracy. Not all, but let's just say there's four or five that are well-capitalized enough: you guys, Meta, Google, Microsoft, whomever, right? So let's just say four or five, maybe one startup. And on the open web. And then quickly, the accuracy or the value of these models will probably shift to these proprietary sources of training data that you could get that others can't, or others can get that you can't.
Is that how you see this thing evolving, where the open web gets everybody to a certain threshold and then it's just an arms race for data beyond that?
So I definitely don't think it'll be an arms race for data, because when the models get smart enough, at some point it shouldn't be about more data, at least not for training. Data may matter to make it useful. Look, the one thing that I have learned most throughout all of this is that it's hard to make confident statements a couple of years into the future about where this is all going to go.
And so I don't want to try now. I will say that I expect lots of very capable models in the world. And, you know, it feels to me like we just stumbled on a new fact of nature or science or whatever you want to call it... I mean, I don't believe this literally, but it's like a spiritual point.
You know, intelligence is just this emergent property of matter, and that's like a rule of physics or something. So people are going to figure that out. But there will be all these different ways to design the systems. People will make different choices, figure out new ideas. And I'm sure, like, you know,
Like any other industry, I would expect there to be multiple approaches and different people like different ones. Some people like iPhones, some people like an Android phone. I think there will be some effect like that.
Let's go back to that first section of just the cost and the speed. All of you guys are sort of a little bit rate-limited on literally Nvidia's throughput, right? And I think that you and most everybody else have sort of effectively announced how much capacity you can get, just because it's as much as they can spin out.
What needs to happen at the substrate so that you can actually compute cheaper, compute faster, get access to more energy? How are you helping to frame out the industry solving those problems?
We'll make huge algorithmic gains for sure, and I don't want to discount that. I'm very interested in chips and energy, but if we can make a same quality model twice as efficient, that's like we had twice as much compute. And I think there's a gigantic amount of work to be done there. And I hope we'll start really seeing those results. Other than that, the whole supply chain is very complicated.
There's logic fab capacity, there's how much HBM the world can make. There's how quickly you can get permits and pour the concrete, make the data centers, and then have people in there wiring them all up. There's finding the energy, which is a huge bottleneck. But I think when there's this much value to people, the world will do its thing. We'll try to help it happen faster.
And there's probably like I don't know how to give it a number, but there's some percentage chance where there is, as you were saying, a huge substrate breakthrough, and we have a massively more efficient way to do computing, but I don't bank on that or spend too much time thinking about it.
What about the device side? And sort of, you mentioned sort of the models that can fit on a phone. So obviously, whether that's an LLM or some SLM or something, I'm sure you're thinking about that. But then does the device itself change? I mean, does it need to be as expensive as an iPhone?
I'm super interested in this. I love like great new form factors of computing. And it feels like with every major technological advance, a new thing becomes possible. Phones are unbelievably good, so I think the threshold is very high here. I personally think an iPhone is the greatest piece of technology humanity has ever made. It's really a wonderful product. What comes after it? I don't know.
That's what I was saying. It's so good that to get beyond it, I think the bar is quite high.
Well, you've been working with Jony Ive on something, right?
We've been discussing ideas, but I don't, like, if I knew.
Is it that that it has to be more complicated or actually just much, much cheaper and simpler?
Well, almost everyone's willing to pay for a phone anyway. So if you could, like, make a way cheaper device, I think the barrier to carry a second thing or use a second thing is pretty high. So I don't think, given that we're all willing to pay for phones, or most of us are, I don't think cheaper is the answer.
Different is the answer then?
Would there be like a specialized chip that would run on the phone that was really good at powering a phone size AI model?
Probably, but the phone manufacturers are going to do that for sure. That doesn't necessitate a new device. I think you'd have to find some really different interaction paradigm that the technology enables. And if I knew what it was, I would be excited to be working on it right now.
Well, you have voice working right now in the app. In fact, I set my action button on my phone to go directly to ChatGPT's voice app, and I use it with my kids, and they love it, talking to it. It's got latency issues, but it's really great.
We'll get that better. And I think voice is a hint to whatever the next thing is. If you can get voice interaction to be really good, it feels great. I think that feels like a different way to use a computer.
But again, like we already- Let's talk about that, by the way. Like why is it not responsive and- you know, it feels like a CB, you know, like over, over. It's really annoying to use, you know, in that way. But it's also brilliant when it gives you the right answer.
We are working on that. It's so clunky right now. It's slow. It's like kind of doesn't feel very smooth or authentic or organic. Like we'll get all that to be much better.
What about computer vision? I mean, they have glasses or maybe you could wear a pendant. I mean, you take the combination of visual or video data, combine it with voice, and now the AI knows everything that's happening around you.
Super powerful to be able to like, the multimodality of saying like, hey, ChatGPT, what am I looking at? Or like, what kind of plant is this? I can't quite tell. That's another, I think, hint, but whether people want to wear glasses or hold up something when they want that,
There's a bunch of just like the sort of like societal interpersonal issues here are all very complicated about wearing a computer on your face.
We saw that with Google Glass. People got punched in the face in the Mission. Started a lot of fights.
I forgot about that. I forgot about that. So I think it's like.
What are the apps that could be unlocked if AI was sort of ubiquitous on people's phones? Do you have a sense of that or what would you want to see built?
I think what I want is just this always on like super low friction thing where I can... either by voice or by text or ideally like some other, it just kind of knows what I want, have this like constant thing helping me throughout my day that's got like as much context as possible. It's like the world's greatest assistant. And it's just this like thing working to make me better and better.
I know when you hear people talk about the AI future, they imagine there are sort of two different approaches, and they don't sound that different, but I think they're very different for how we'll design the system in practice. There's the "I want an extension of myself": I want like a ghost or an alter ego, this thing that really is me, is acting on my behalf, is responding to emails, not even telling me about it. It sort of becomes more me and is me.
And then there's this other thing, which is like, I want a great senior employee. It may get to know me very well. I may delegate it. You know, you can like have access to my email and I'll tell you the constraints, but I think of it as this like separate entity. And I personally like the separate entity approach better and think that's where we're gonna head. And so in that sense,
The thing is not you, but it's like a always available, always great, super capable assistant executive.
It's an agent in a way, like it's out there working on your behalf and understands what you want and anticipates what you want is what I'm reading into what you're saying.
I think there'd be agent-like behavior, but there's a difference between a senior employee and an agent. Like, one of the things that I like about a senior employee is they'll push back on me. They will sometimes not do something I ask, or they sometimes will say, I can do that thing if you want.
But if I do it, here's what I think would happen and then this and then that. And are you really sure? Yeah. I definitely want that kind of vibe, which not just like this thing that I give a task and it blindly does. It can reason.
Yeah. Yeah, and push back.
It can reason. It has like the kind of relationship with me that I would expect out of a really competent person that I worked with, which is different from like a sycophant.
Yeah.
The thing in that world where if you had this like Jarvis-like thing that can reason, what do you think it does to products that you use today where the interface is very valuable?
So for example, if you look at an Instacart or if you look at an Uber or if you look at a DoorDash, these are not services that are meant to be pipes that are just providing a set of APIs to a smart set of agents that ubiquitously work on behalf of 8 billion people.
What do you think has to change in how we think about how apps need to work, of how this entire infrastructure of experiences need to work in a world where you're agentically interfacing to the world?
I'm actually very interested in designing a world that is equally usable by humans and by AIs. So I... I like the interpretability of that. I like the smoothness of the handoffs. I like the ability that we can provide feedback or whatever. So, you know, DoorDash could just expose some API to my future AI assistant and they could go put the order in or whatever.
Or I could say, I could be holding my phone and I could say, okay, AI assistant, you put in this order on DoorDash, please. And I could watch the app open and see the thing clicking around and I could say, hey, no, not this. There's something about designing a world that is usable equally well by humans and AIs that I think is an interesting concept.
Same reason I'm more excited about humanoid robots than sort of robots of very other shapes. The world is very much designed for humans, and I think we should absolutely keep it that way. And a shared interface is nice.
So you see voice, chat, that modality kind of gets rid of apps. You just ask it for sushi. It knows sushi you liked before. It knows what you don't like and does its best shot at doing it.
It's hard for me to imagine that we just go to a world totally where you say like, hey, ChatGPT, order me sushi. And it says, okay, do you want it from this restaurant? What kind, what time, whatever? I think... I think visual user interfaces are super good for a lot of things.
And it's hard for me to imagine a world where you never look at a screen and just use voice mode only, but I can imagine that for a lot of things.
I mean, Apple tried with Siri. Supposedly, you can order an Uber automatically with Siri. I don't think anybody's ever done it because it's... Why would you take the risk of not putting it in your phone?
To your point, the quality is not good. But when the quality is good enough, you'll actually prefer it just because it's just lighter weight. You don't have to take your phone out. You don't have to search for your app and press it. Oh, it automatically logged you out. Oh, hold on, log back in. Oh, TFA. It's a whole pain in the ass.
You know, it's like setting a timer with Siri, I do every time because it... works really well. And it's great.
More information.
But ordering an Uber, like, I want to see the prices for a few different options, I want to see how far away it is, I want to see like, maybe even where they are on the map, because I might walk somewhere, I get a lot more information by, I think, in less time by looking at that order the Uber screen than I would if I had to do that all through the audio channel.
So idea of watching it happen. That's kind of cool.
I think there will just be, yeah, different interfaces we use for different tasks, and I think that'll keep going.
Of all the developers that are building apps and experiences on OpenAI, are there a few that stand out for you where you're like, okay, this is directionally going in a super interesting area, even if it's like a toy app. But are there things that you guys point to and say, this is really important?
I met with a new company this morning, or barely even a company, it's like two people that are going to work on a summer project trying to actually finally make the AI tutor. And I've always been interested in this space. A lot of people have done great stuff on our platform. But if someone can deliver it the way that you'd actually want...
They used a phrase I love, which is, this is going to be like a Montessori-level reinvention of how people learn things. But if you can find this new way to let people explore and learn in new ways on their own, I'm personally super excited about that. A lot of the coding-related stuff, you mentioned Devin earlier, I think that's like a super cool vision of the future.
Healthcare, I believe, should be pretty transformed by this. But the thing I'm personally most excited about is doing faster and better scientific discovery. GPT-4 is clearly not there in a big way, although maybe it accelerates things a little bit by making scientists more productive. But AlphaFold 3, yeah. That's like... But Sam... That will be a triumph.
Those are not... Like, these models are trained and built differently than the language models. I mean, to some extent, obviously there's a lot that's similar, but there's kind of a ground-up architecture to a lot of these models that are being applied to these specific problem sets, these specific applications, like chemistry interaction modeling, for example.
You'll need some of that for sure. But the thing that I think we're missing across the board for many of these things we've been talking about is models that can do reasoning. And once you have reasoning, you can connect it to chemistry simulators or whatever else.
Yeah, that's the important question I wanted to kind of talk about today was this idea of networks of models. People talk a lot about agents as if there's kind of this linear set of call functions that happen. But one of the things that arises...
in biology is networks of systems that have cross interactions that the aggregation of the system, the aggregation of the network produces an output rather than one thing calling another, that thing calling another.
Do we see like an emergence in this architecture of either specialized models or network models that work together to address bigger problem sets, use reasoning, there's computational models that do things like chemistry or arithmetic, and there's other models that do, rather than one model to rule them all that's purely generalized?
I don't know.
I don't know how much reasoning is going to turn out to be a super generalizable thing. I suspect it will, but that's more just like an intuition and a hope, and it would be nice if it worked out that way. I don't know if that's like...
But let's walk through the protein modeling example.
There's a bunch of training data, images of proteins, and then sequence data, and they build a predictive model, and they have a set of processes and steps for doing that. Do you envision that there's this artificial general intelligence or this great reasoning model that then figures out how to build that sub-model, that figures out how to solve that problem by acquiring the necessary data?
There's so many ways where that could go. Maybe it trains a literal model for it, or maybe it just knows the one big model. It can go pick what other training data it needs and ask a question and then update on that.
I guess the real question is, are all these startups going to die? Because so many startups are working in that modality, which is: go get special data and then train a new model on that special data from the ground up. And then it only does that one sort of thing, and it works really well at that one thing, better than anything else.
You know, there's like a version of this. I think you can like... already see. When you were talking about biology and these complicated networks of systems, the reason I was smiling, I got super sick recently, and I'm mostly better now, but it was just like, body got beat up, one system at a time. You can really tell, okay, it's this cascading thing, and
And that reminded me of you talking about biology: you have no idea how much these systems interact with each other until things start going wrong. And that was sort of interesting to see. But I was using ChatGPT to try to figure out what was happening, whatever, and it would say, well, I'm unsure of this one thing. And then I just pasted a paper on it into the context, without even reading the paper. And it says, oh, that was the thing I wasn't sure of. Like, now I think this instead. So that was like a small version of what you're talking about, where you can say, I don't know this thing, and you can put in more information. You don't retrain the model, you're just adding it to the context here.
So these models that are predicting protein structure, let's say, right, this is the whole basis. And now other molecules with AlphaFold 3. Can they... Yeah, I mean, is it basically a world where the best generalized model goes in and gets that training data and then figures it out on its own? And maybe you could use an example for us.
Can you tell us about Sora, your video model that generates amazing moving images, moving video? And what's different about the architecture there, whatever you're willing to share, on how that is different?
Yeah, so my... On the general thing first, my... You clearly will need specialized simulators, connectors, pieces of data, whatever. But my intuition, and again, I don't have this like backed up with science. My intuition would be if we can figure out the core of generalized reasoning, connecting that to new problem domains in the same way that humans are generalized reasoners.
would, I think, be doable.
It's like a faster unlock. I think so.
But yeah, Sora does not start with a language model. That's a model that is customized to do video. And so we're clearly not at that world yet.
Right. So just as an example, for you guys to build a good video model, you built it from scratch using, I'm assuming, some different architecture and different data. But in the future, the generalized reasoning system, the AGI, whatever system, theoretically could render that by figuring out how to do it.
Yeah, I mean, one example of this is like, okay, you know, as far as I know, all the best text models in the world are still autoregressive models, and the best image and video models are diffusion models. That's like sort of strange in some sense.
Yeah.
So there's a big debate about training data. You guys have been, I think, the most thoughtful of any company. You've got licensing deals now, FT, et cetera. And we've got to be gentle here because you're involved in a New York Times lawsuit; you weren't able to settle, I guess, an arrangement with them for training data. How do you think about fairness and fair use?
We've had big debates here on the pod. Obviously, your actions speak volumes, that you're trying to be fair by doing licensing deals. So what's your personal position on the rights of artists who create beautiful music, lyrics, books, and you taking that and then making a derivative product out of it and then monetizing it? And what's fair here? And how do we get to a world where
you know, artists can make content in the world and then decide what they want other people to do with it. Yeah. And I'm just curious, your personal belief, because I know you to be a thoughtful person on this. And I know a lot of other people in our industry are not very thoughtful about how they think about content creators.
So I think it's very different for different kinds of... I mean, look, on fair use, I think we have a very reasonable position under the current law. But I think AI is so different that, for things like art, we'll need to think about them in different ways. I would say, if you go read a bunch of math on the internet and learn how to do math, that seems unobjectionable to most people.
And then there's another set of people who might have a different opinion. Well, what if you like Actually, let me not get into that, just in the interest of not making this answer too long. So I think there's one category people are like, okay, there's generalized human knowledge.
You can kind of go, if you learn that, that's open domain or something, if you kind of go learn about the Pythagorean theorem. That's one end of the spectrum. And then I think the other extreme end of the spectrum is...
is art, and maybe even more specifically, a system generating art in the style or the likeness of another artist; that would be kind of the furthest end of that. And then there are many, many cases on the spectrum in between.
I think the conversation has been historically very caught up on training data, but it will increasingly become more about what happens at inference time, as training data becomes less valuable. What the system does accessing information in context, in real time, what happens at inference time, and what the new economic model is there, will become more debated. So if you say, like, create me a song in the style of Taylor Swift,
even if the model were never trained on any Taylor Swift songs at all, you can still have a problem, which is it may have read about Taylor Swift, it may know about her themes, Taylor Swift means something. And then the question is, should that model, even if it were never trained on any Taylor Swift song whatsoever, be allowed to do that? And if so, How should Taylor get paid? Right.
So I think there's an opt-in, opt-out in that case, first of all, and then there's an economic model. Staying on the music example, there is something interesting to look at from... the historical perspective here, which is sampling and how the economics around that work. This is not quite the same thing, but it's like an interesting place to start looking.
Sam, let me just challenge that. What's the difference in the example you're giving of the model learning about things like song structure, tempo, melody, harmony relationships, discovering all the underlying structure that makes music successful, and then building new music using training data?
and what a human does, who listens to lots of music, learns about it, and whose brain is processing and building all those same sorts of predictive models, or those same sorts of discoveries or understandings. What's the difference here? And why are you making the case that perhaps artists should be uniquely paid? This is not a sampling situation.
The AI is not outputting and it's not storing in the model the actual original song. It's learning structure.
I wasn't trying to make that point because I agree in the same way that humans are inspired by other humans. I was saying if you say generate me a song in the style of Taylor Swift.
I see. Right.
Okay.
Where the prompt leverages some artist.
I think personally that's a different case.
Would you be comfortable asking, or would you be comfortable letting a music model be trained on the whole corpus of music that humans have created, without royalties being paid to the artists whose music is being fed in? And then you're not allowed to ask artist-specific prompts.
You could just say, hey, play me a really cool pop song that's fairly modern about heartbreak, you know, with a female voice, you know?
We have currently made the decision not to do music, partly because of exactly these questions of where you draw the lines. I was meeting with several musicians I really admire recently, and I was just trying to talk about some of these edge cases. But even in a world where, let's say,
we went and paid 10,000 musicians to create a bunch of music just to make a great training set, where the music model could learn everything about song structure and what makes a good catchy beat and everything else, and only trained on that. Let's say we could still make a great music model, which maybe we could.
You know, I was kind of like posing that as a thought experiment to musicians. And they're like, well, I can't object to that on any principle basis at that point. And yet there's still something I don't like about it. Now, that's not a reason not to do it necessarily. But it is. Did you see that ad that Apple put out?
Maybe it was yesterday or something of like squishing all of human creativity down into one really thin iPad.
What was your take on it? People got really emotional about it, yeah.
Yeah.
Stronger reaction than you would think.
There's something about... I'm obviously hugely positive on AI, but there is something that I think is beautiful about human creativity and human artistic expression. And, you know, for an AI that just does better science, like, great, bring that on. But an AI that is going to do this deeply beautiful human creative expression... I think we should figure it out. It's going to happen.
It's going to be a tool that will lead us to greater creative heights. But I think we should figure out how to do it in a way that preserves the spirit of what we all care about here.
And I think your actions speak loudly. We were trying to do Star Wars characters in DALL-E, and if you ask for Darth Vader, it says, hey, we can't do that. So you've, I guess, red-teamed it or whatever you call it internally. We try.
Yeah.
Yeah, you're not allowing people to use other people's IP. So you've taken that decision. Now, if you asked it to make a Jedi bulldog, or a Sith Lord bulldog, which I did, it made my bulldogs as Sith bulldogs. So there's an interesting question about like, right?
Yeah, you know, we put out this thing yesterday called the Model Spec, where we're trying to say, here's how our model is supposed to behave. And it's very hard. It's a long document. It's very hard to specify exactly in each case where the limits should be, and I view this as a discussion that's going to need a lot more input. But these sorts of questions about,
okay, maybe it shouldn't generate Darth Vader, but the idea of a Sith Lord or a Sith-style thing or Jedi at this point is part of the culture. These are all hard decisions.
Yeah, and I think you're right. The music industry is going to consider this opportunity to make Taylor Swift songs their opportunity. Part of the four-part fair use test is who gets to capitalize on new innovations for existing art. And Disney has an argument that, hey, you know, if you're going to make Sora versions of Ahsoka or whatever, Obi-Wan Kenobi, that's Disney's opportunity.
And that's a great partnership for you to pursue.
So I think this section I would label as AI and the law. So let me ask maybe a higher level question. What does it mean when people say regulate AI? Totally. Sam, what does that even mean?
And comment on California's new proposed regulations as well if you're up for it.
I'm concerned. I mean, there's so many proposed regulations, but most of the ones I've seen on the California state things I'm concerned about. I also have a general fear of the states all doing this themselves. When people say regulate AI, I don't think... they mean one thing. I think there's like, some people are like, ban the whole thing.
Some people are like, don't allow it to be open source, or require it to be open source. The thing that I am personally most interested in is... Look, I may be wrong about this. I will acknowledge that this is a forward-looking statement, and those are always dangerous to make.
But I think there will come a time in the not-super-distant future, like, you know, we're not talking decades and decades from now, where the frontier AI systems are capable of causing significant global harm.
And for those kinds of systems, in the same way we have global oversight of nuclear weapons or synthetic bio or things that can really have a very negative impact way beyond the realm of one country, I would like to see some sort of international agency that is looking at the most powerful systems and ensuring reasonable safety testing: that these things are not going to escape and recursively self-improve or whatever.
The criticism of this is that you have the resources to cozy up, to lobby, to be involved, and you've been very involved with politicians, and then startups, which you're also passionate about and invest in, are not going to have the ability to resource and deal with this, and that this is regulatory capture. Our friend Bill Gurley did a great talk last year about it.
So maybe you could address that head on.
If the line were that we're only going to look at models that are trained on computers that cost more than $10 billion, or more than $100 billion, or whatever, I'd be fine with that. There'd be some line that'd be fine. And I don't think that puts any regulatory burden on startups.
So if you have the nuclear raw material to make a nuclear bomb, there's a small subset of people who have that. Therefore, you use the analogy of a nuclear inspector kind of situation. Yeah. I think that's interesting. Sax, you have a question?
Well, Chamath, go ahead. You had a follow-up. Can I say one more thing about that? Of course. I'd be super nervous about regulatory overreach here. I think we can get this wrong by doing way too much, or even a little too much. I think we can get this wrong by doing not enough.
Now, I mean, we have seen regulatory overstepping, or capture, just get super bad in other areas. And, you know, also maybe nothing will happen. But I do think it is part of our duty and our mission to talk about what we believe is likely to happen and what it takes to get that right.
The challenge, Sam, is that we have statute that is meant to protect people, protect society at large. What we're creating, however, is statute that gives the government rights to go in and audit code, to audit business trade secrets. We've never seen that to this degree before. Basically, the California legislation that's proposed and some of the federal legislation that's been proposed
basically requires the government to audit a model, to audit software, to audit and review the parameters and the weightings of the model. And then you need their check mark in order to deploy it for commercial or public use. And for me, it just feels like
Out of fear, and because folks have a hard time understanding this and are scared about the implications of it, they want to control it. And the only way to control it is to say, give me a right to audit before you can release it.
Yeah, and they're clueless. These people are clueless.
I mean, the way that the stuff is written, you read it, you're like going to pull your hair out because as you know better than anyone, in 12 months, none of this stuff is going to make sense anyway.
Totally. Right. Look, the reason I have pushed for an agency-based approach for kind of the big-picture stuff, and not writing it in laws, is that in 12 months it will all be written wrong. And I don't think, even if these people were true world experts, I don't think they could get it right looking out 12 or 24 months.
And these policies, which are like, we're going to audit all of your source code and look at all of your weights one by one... I think there are a lot of crazy proposals out there.
By the way, especially if the models are always being retrained all the time, if they become more dynamic.
Again, this is why I think... But, like, before an airplane gets certified, there's a set of safety tests. We put the airplane through it, and... Totally. It's different than reading all of your code.
That's reviewing the output of the model, not reviewing the insides of the model.
And so what I was going to say is, that is the kind of safety testing that I think makes sense.
How are we going to get that to happen, Sam? And I'm not just speaking for OpenAI; I speak for the industry, for humanity, because I am concerned that we could push ourselves into almost a dark-ages type of era by restricting the growth of these incredible technologies that humanity can prosper from so significantly. How do we change the sentiment and get that to happen?
Because this is all moving so quickly at the government levels. And folks seem to be getting it wrong. And I'm personally concerned.
Just to build on that, Sam, the architectural decision, for example, that Lama took is pretty interesting in that it's like, we're going to let Lama grow and be as unfettered as possible. And we have this other kind of thing that we call Lama guard that's meant to be these protective guardrails. Is that how you see the problem being solved correctly?
Or do you see that... At the current strength of models... Definitely some things are going to go wrong, and I don't want to make light of those or not take those seriously. But I don't have any catastrophic risk worries with a GPT-4 level model. And I think there's many safe ways to choose to deploy this.
Maybe we'd find more common ground if we said that, like, you know, the specific example of models that are capable, that are technically capable, even if they're not going to be used this way, of recursive self-improvement or of, you know, autonomously designing and deploying a bioweapon or something like that. Or a new model. Yeah. That was the recursive self-improvement point.
We should have safety testing on the outputs at an international level for models that have a reasonable chance of posing a threat there.
I don't think GPT-4 poses a material threat on those kinds of things, and I think there are many safe ways to release a model like this. But, you know, when significant loss of human life is a serious possibility, like airplanes or
any number of other examples where I think we're happy to have some sort of testing framework. Like I don't think about an airplane when I get on it. I just assume it's going to be safe.
Right. There's a lot of hand-wringing right now, Sam, about jobs. And you had a lot of, I think you did like some sort of a test when you were at YC about UBI.
Our results on that come out very soon. It was a five-year study that wrapped up or started five years ago. Well, there was like a beta study first and then it was like a long one that ran.
But... Well, what did you learn about that? Yeah, why'd you start it? Maybe just explain UBI and why you started it.
So we started thinking about this in 2016, kind of about the same time, started taking AI really seriously. And the theory was that the magnitude of the change that may come to society and jobs and the economy, and sort of in some deeper sense than that, like what the social contract looks like, meant that we should have many studies to study many ideas about new ways to arrange that.
I also think that I'm not a super fan of how the government has handled most policies designed to help poor people. And I kind of believe that if you could just give people money, they would make good decisions and the market would do its thing. And, you know, I'm very much in favor of lifting up the floor and reducing, eliminating poverty.
But I'm interested in better ways to do that than what we have tried for the existing social safety net and kind of the way things have been handled. And I think giving people money is not going to go solve all problems. It's certainly not going to make people happy, but it might solve some problems and it might give people a better horizon with which to help themselves.
And I'm interested in that. I think that now that we see some of the ways that AI is developing, and 2016 was a very long time ago, I wonder if there are better things to do than the traditional conceptualization of UBI. Like, I wonder if the future looks something more like universal basic compute than universal basic income.
And everybody gets like a slice of GPT-7's compute and they can use it, they can resell it, they can donate it to somebody to use for cancer research. But what you get is not dollars, but this like, slice. Yeah, you own like part of the productivity.
Right. I would like to shift to the gossip part of this.
Okay.
Gossip? What gossip? Sam, let's go back to November. What the flying f*** happened?
Um... You know, if you have specific questions, I'm happy to. Maybe at some point I'll want to talk about it.
So here's the point. What happened?
You were fired, you came back and it was palace intrigue. Did somebody stab you in the back? Did you find AGI? What's going on? This is a safe space.
I was fired. I talked about coming back. I kind of was a little bit unsure in the moment about what I wanted to do, because I was very upset. And I realized that I really loved OpenAI and the people, and that I would come back. And I kind of knew it was going to be hard. It was even harder than I thought. But I kind of was like, all right, fine. I agreed to come back.
The board took a while to figure things out. And then, you know, we were kind of trying to keep the team together and keep doing things for our customers and, you know, sort of started making other plans. Then the board decided to hire a different interim CEO. And then everybody... There were many people. Oh, my gosh.
What was that guy's name? He was there for like a Scaramucci, right? Emmett's great.
And I have nothing but good things to say about Emmett. I was here for Scaramucci. And then.
Where were you when you found the news that you'd been fired? Take me to that moment.
I was in a hotel room in Vegas for F1 weekend.
I think that's happened to you before, J. Cal. So, you're there, and you get a text and they're like, you're fired? Yeah.
I'm trying to think if I ever got fired. I don't think I've gotten fired. Yeah, I got a text. No, it's just a weird thing. Like, it's a text from who?
Actually, no, I got a text the night before. And then I got on a phone call with the board. And then that was that. And then, I mean, everything went crazy. My phone was unusable, just a nonstop vibrating thing of text messages and calls.
Basically, you got fired by tweet. That happened a few times during the Trump administration. A few cabinet appointments got tweeted out. They did call me first before tweeting.
It was nice of them. And then, you know, I kind of did a few hours of just this absolute fugue state in the hotel room, confused beyond belief, trying to figure out what to do. It was so weird. And then I flew home at maybe, I don't know, 3 p.m. or something like that. Still just, you know, crazy, nonstop, phone blowing up.
Met up with some people in person. By that evening, I was like, okay, you know, I'll just go do AGI research, and was feeling pretty happy about the future. Yeah, you have options. And then the next morning, I had this call with a couple of board members about coming back, and that led to a few more days of craziness. And then I think it got resolved.
Well, it was like a lot of insanity in between.
What percent of it was because of these nonprofit board members?
Well, we only have a nonprofit board, so it was all the nonprofit board members. The board had gotten down to six people. They removed Greg from the board and then fired me. But it was like, you know...
But I mean, like, was there a culture clash between the people on the board who had only nonprofit experience versus the people who had startup experience?
And maybe you can share a little bit about if you're willing to, the motivation behind the action, anything you can.
I think there have always been culture clashes at... Look, obviously not all of those board members are my favorite people in the world, but I have serious respect for the gravity with which they treat AGI and the importance of getting AI safety right. And even if I stringently disagree with their decision-making and actions, which I do, I have never once doubted their integrity or commitment to the sort of shared mission of safe and beneficial AGI. You know, do I think they made good decisions in the process of that, or kind of know how to balance all the things OpenAI has to get right? No. But I think the intent, the sense of the magnitude of AGI and getting that right...
Actually, let me ask you about that. So the mission of OpenAI is explicitly to create AGI, which I think is really interesting. A lot of people would say that if we create AGI, that would be like an unintended consequence of something gone wrong.
horribly wrong, and they're very afraid of that outcome. But OpenAI makes that the actual mission. Does that create more fear about what you're doing? I mean, I understand it can create motivation too, but how do you reconcile that? Well, first I'll answer the first question, then the second one. I think it does create a great deal of fear. I think a lot of the world is understandably
very afraid of AGI, or very afraid of even current AI, and very excited about it, and even more afraid and even more excited about where it's going. And we wrestle with that, but I think it is unavoidable that this is going to happen. I also think it's going to be tremendously beneficial. But we do have to navigate how to get there in a reasonable way.
And a lot of stuff is going to change, and change is pretty uncomfortable for people. So there's a lot of pieces that we got to get right.
Can I ask a different question? You have created, I mean, it's the hottest company, and you are literally at the center of the center of the center. But then it's so unique in the sense that you eschewed all of this value economically. Can you just walk us through why?
Yeah, I wish I had taken equity so I never had to answer this question.
If I could go back in time. Why don't they give you a grant now? Why doesn't the board just give you a big option grant like you deserve?
Yeah, give you five points. What was the decision back then? Why was that so important?
The decision back then, the original reason was just the structure of our nonprofit. There was something about... yeah, okay, this is like nice from a motivations perspective, but mostly it was that our board needed to be a majority of disinterested directors. And I was like, that's fine, I don't need equity right now. I kind of...
But in this weird way, now that you're running a company, yeah, it creates these weird questions of like, well, what's your real motivation?
One thing I have noticed: it's so deeply unimaginable to people to say, I don't really need more money. And I get how that sounds.
I think people think it's a little bit of an ulterior motive.
Well, yeah, yeah, yeah. No, so it assumes.
It's like, what else is he doing on the side to make money?
If I were just trying to say, I'm going to try to make a trillion dollars with OpenAI, I think everybody would have an easier time, and it would save a lot of conspiracy theories.
Sam, this is the back channel. You are a great dealmaker. I've watched your whole career. I mean, you're just great at it. You got all these connections. You're really good at raising money. You're fantastic at it. And you got this Jony Ive thing going. You're in Humane. You're investing in companies. You got the orb. You're raising $7 trillion to build fabs, all this stuff.