Sam Altman
Appearances
All-In with Chamath, Jason, Sacks & Friedberg
OpenAI's $150B conversion, Meta's AR glasses, Blue-collar boom, Risk of nuclear war
Okay. So we can have a really interesting conversation here. I did something on my other podcast, This Week in Startups, that I'll show you right now. That was crazy yesterday.
I am using o1-preview. Now let me show you what I did here, just so the audience can level set. If you're not watching us, go to YouTube and type in All-In and you can watch us; we do video here. So I was analyzing, you know, just some early-stage deals and cap tables, and I put in here, hey, a startup just raised some money at this valuation.
Here's what the friends and family invested, the accelerator, the seed investor, etc. In other words, the investment history in a company. What o1 does is distinctly different than the previous versions. And the previous version, I felt, was three to six months ahead of competitors. This is a year ahead of competitors.
And so here, Chamath, if you look, it said it thought for 77 seconds. And if you click the down arrow, Sacks, what you'll see is it gives you an idea of its rationale for interpreting the question and what secondary queries it's running.
And by the way, what this did was what prompt engineers or prompt engineering websites were doing, which was trying to help you construct your question. And so if you look at this one, it says: listing disparities, I'll compile a cap table with investments and valuations, building the cap table, assessing the share valuation, breaking down ownership,
etc., evaluating the terms, and then it checks its work a bit, it weighs investment options. And you can see this fired off like two dozen different queries to, as Friedberg correctly pointed out, you know, build this chain. And it got incredible answers, explained the formulas, so it's thinking about what your next question would be.
And when I shared this with my team, it was like a super game changer. Sacks, you had some thoughts here.
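As a concrete illustration of the workflow described above, here is a minimal sketch, assuming the OpenAI Python SDK and the o1-preview model mentioned in the episode; the cap table numbers are invented for the example and are not from the show.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical, illustrative investment history for a made-up startup.
cap_table_history = """
Friends and family: $250K at a $2M post-money valuation
Accelerator:        $125K for 7%
Seed:               $2M at a $12M post-money valuation
Series A:           just raised $8M at a $40M post-money valuation
"""

response = client.chat.completions.create(
    model="o1-preview",  # reasoning model discussed above; it plans multi-step analysis itself
    messages=[{
        "role": "user",
        "content": (
            "Given this investment history, reconstruct the cap table, "
            "estimate each party's ownership and dilution at every round, "
            "and flag anything unusual in the terms:\n" + cap_table_history
        ),
    }],
)

print(response.choices[0].message.content)
```

The point is the one Jason makes: the prompt can stay simple, because the reasoning model decomposes the question into the sub-steps (building the table, assessing valuations, weighing terms) on its own.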
Wow, they're just all gone. Wait, oh no, don't worry, he's replacing everybody. Here we go. He's replacing them with the G700, a Bugatti, and I guess Sam's got mountains of cash. So don't worry, he's got a backup plan, Chamath.
Anyway, as an industry and as leaders in the industry, the show sends its regards to Sam and the OpenAI team on their tragic losses, and congratulations on the $150 billion valuation and your 7%. Sam now just cashed in $10 billion, apparently. So congratulations to friend of the pod, Sam Altman. Is the round done?
This is such a good point, Friedberg. The ad hoc piece of it: when we're processing 20,000 applications for funding a year, we do 100-plus meetings a week. The analysts on our team are now putting in the transcripts and key questions about markets, and they are getting so smart so fast
that, you know, when somebody comes to them with a marketplace in diamonds, their understanding of the diamond marketplace becomes so rich so fast that we can evaluate companies faster. We're also seeing, Chamath,
before we call our lawyers, when we have a legal question about a document, we start putting in, you know, let's say the standard note template or the standard SAFE template, and we put in the new one. And there's a really cool project by Google called NotebookLM, where you can put in multiple documents and you can start asking questions.
So imagine you take every single legal document, Sacks, that Yammer had when you had Chamath as an investor (I'm not sure if he was on the board), and you can start asking questions about the documents. And we have had people make changes to these documents, and it immediately finds and explains them.
And so everybody's just getting so goddamn smart, so fast, using these tools that I insisted that for every person on the team, hitting Control-Tab opens a ChatGPT window on o1, and we burned through our credits immediately. It stopped us. It said, you have to stop using it for the rest of the month. Chamath, your thoughts on this?
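To make the document-comparison use case concrete, here is a hedged sketch of the general pattern, not NotebookLM's own interface: two versions of a template go into the context and the model is asked to find and explain the changes. The file names and model choice are assumptions for illustration only.

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical file names: a standard SAFE template and a counterparty's marked-up copy.
original = open("safe_template_standard.txt").read()
revised = open("safe_template_counterparty.txt").read()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a careful contracts analyst. You are not giving legal advice."},
        {"role": "user", "content": (
            "Compare these two versions of a SAFE. List every change the second version "
            "makes to the first, and explain in plain English who each change favors.\n\n"
            "--- ORIGINAL ---\n" + original +
            "\n\n--- REVISED ---\n" + revised
        )},
    ],
)

print(response.choices[0].message.content)
```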
Is all of that done? I mean, it's reported, allegedly, that he's going to have 7% of the company, and we can jump right into our first story.
All right. Your thoughts, Sacks. You operate in the SaaS space with systems of record and invest in these types of companies. Give us your take.
I'm going to be okay, I think.
According to reports, this round is contingent on not being a non-profit anymore and sorting that all out.
In fairness, app stores are a great way to allow people to build on your platform and cover those niche cases.
Can you talk about his philanthropy first? Okay, let's get back to focus here. Let's get focused, everybody.
So is mainstream media? We trust the mainstream media in this case because it aligns with Sacks' interest.
Right. Yeah. The very interesting piece for me is, watching startups working on this, the AI-first ones, I think, are going to come at it with a totally different cost structure than the idea of paying for seats. And I mean, some of these seats are $5,000 per person per year.
Yeah, this is always a tough one. This year, we tragically lost giants in our industry. These individuals bravely honed their craft at OpenAI before departing. Ilya Sutskever left us in May. Jan Leike also left in May. John Schulman tragically left us in August.
A lot of startups now are doing consumption-based pricing. So they're saying, you know, how many... How many sales calls are you doing? How many are we analyzing as opposed to how many sales executives do you have? Because when you have agents, as we're talking about, those agents are going to do a lot of the work. So we're going to see the number of people working at companies become fixed.
And I think the static team size that we're seeing at a lot of large companies is only going to continue. It's going to be down and to the right. And if you think you're going to get a high-paying job at a big tech company, you're going to have to beat the agent. You're going to have to beat the maestro who has five agents working for them. I think this is going to be a completely different world.
I want to get back to OpenAI with a couple of other pieces.
Last word for you. Last word.
This has been speculated for months: the $150 billion valuation, raising something in the range of $6 to $7 billion. If you do the math on that, and Bloomberg is correct that Sam Altman got his 7%, I guess that would be about $10 billion.
Well, I mean, right between the two of you, I think, is the truth, because what's happening is, if you look at investing, it's very hard to get into these late-stage companies because they don't need as much capital. Because to your point, Chamath, when they do hit profitability with 10 or 20 people, the revenue per employee is going way up.
If you look at Google, Uber, Airbnb, and Facebook/Meta, they have the same number of employees or fewer than they did three years ago, but they're all growing at 20 to 30% a year, which means in two to three years, each of those companies has doubled revenue per employee.
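A quick back-of-the-envelope check of that claim, with purely illustrative numbers rather than any company's actual figures: flat headcount plus roughly 25% annual revenue growth compounds to about a 2x gain in revenue per employee over three years.

```python
# Illustrative arithmetic for the "flat headcount, compounding revenue" point.
revenue, employees = 100.0, 1000          # index revenue to 100 with 1,000 employees (hypothetical)
growth = 0.25                             # midpoint of the 20-30% annual growth cited

rev_per_employee_start = revenue / employees
for year in range(3):                     # three years of growth, headcount held flat
    revenue *= 1 + growth

rev_per_employee_end = revenue / employees
print(rev_per_employee_end / rev_per_employee_start)   # ~1.95x, i.e. roughly doubled
```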
So that concept of more efficiency then trickles down, Sacks, to the startup investing space where you and I are. I'm a pre-seed and seed investor, you're a seed and Series A investor. If you don't get in in those three or four rounds, I think it's going to be really expensive, and the companies are not going to need as much money downstream.
He's not in it for the money. He has health insurance, Sacks. Yeah. Is it Congress?
All right. And before we get to our first story there about OpenAI, congratulations to Chamath. Let's pull up the photo here. He was a featured guest on the Alex Jones show. No, sorry. I'm sorry. That would be Joe Rogan. Congratulations on coming to Austin and being on Joe Rogan. What was it like to do a three-hour podcast with Joe Rogan?
You might need to get yourself one of them fancy agents from Hollywood or an attorney from the Wilson-Sonsini Corporation to renegotiate your contract, son, because you're worth a lot more from what I can gather in your performance today than just some simple health care. And I hope you took the Blue Cross Blue Shield.
And you let the donators get first bite of the apple if you do convert. Because remember, Vinod and Hoffman got all their shares on the conversion.
Friedberg, you've been a little quiet here. Any thoughts on the transaction, the nonprofit to for-profit? If you were looking at that in what you're doing, do you see a way that Ohalo could take a nonprofit status, raise a bunch of money through donations for virtuous work, then license those patents to your for-profit? Would that be advantageous to you?
It's been done a couple times. The Mozilla Foundation did it. We talked about that in a previous episode. Sax, you want to wrap us up here on the corporate structure? Any final thoughts? I mean, Elon put in $50 million. I think he gets the same as Sam. Don't you think he should just chip off 7% for Elon?
Not that Elon needs the money or that he's asking, but I'm just wondering why Elon doesn't get the 7%, or, you know, something, if they're going to redo this.
He put in $50 million is the report, right? In the nonprofit. Yeah. Hoffman put in $10 million.
Clearly the limitation of this podcast is the other three of us. Finally, you have found a way to make it about yourself.
Yeah. And in fairness to Vinod, he bought that incredible beachfront property and donated it to the public trust so we can all surf and have our Halloween party there. So it's all good. Thank you, Vinod, for giving us that incredible beach. I want to talk to you guys about interfaces that came up, Chamath, in your headwinds or your four pack of reasons that
you know, OpenAI, when you steel-man the bear case, could have challenges. Obviously, we're seeing that. And it is emerging that Meta is working on some AR glasses that are really impressive. Additionally, I've installed iOS 18, which has Apple Intelligence, which works on the 15 and 16 phones; 18 is the iOS. Did any of you install the beta of iOS 18 yet and use Siri?
It's pretty clear with this new one that you're going to be able to talk to Siri as an LLM, like you do in ChatGPT mode, which I think means they will not make themselves dependent on ChatGPT, and they will siphon off half the searches that would have gone to ChatGPT.
But what I will say is there are features of it where, if you squint a little bit, you will see that Siri is going to be conversational. So when I was talking to it with music, you know, you can have a conversation with it and do math like you can do with the ChatGPT version. And you have Microsoft Teams doing that with their Copilot. And now Meta is doing it at the top of each of their apps.
So everybody's going to try to intercept the queries and the voice interface. So ChatGPT is now up against Meta, Apple's Siri, and Microsoft for that interface; it's going to be challenging. But let's talk about these Meta glasses here. Meta showed off the AR glasses that Nick will pull up right now. These aren't goggles. Goggles look like ski goggles.
That's what Apple is doing with their Vision Pro. Or when you see the MetaQuest, you know how those work. Those are VR with cameras that will create a version of the world. These are actual chunky sunglasses, like the ones I was wearing earlier when I was doing the bit. So these let you operate in the real world and are supposedly extremely expensive. They made a thousand prototypes.
They were letting a bunch of influencers and folks like Gary Vaynerchuk use them, and they're not ready for primetime. But the way they work, Friedberg, is there's a wristband that tracks your fingers and your wrist movement. So you could be in a conversation like we are here on the pod.
And below the desk, you could be, you know, moving your arm and hand around to reply to, I don't know, incoming messages or whatever it is. What do you think of this AR vision of the world and Meta making this progress?
Is that because he's... It's just a Texas short podcaster who's short and stout and they look similar. So it's just a... But I mean, it looks like Alex Jones started lifting weights, actually. No, they're both... The same height and yeah, both have podcasts.
Chamath, any thoughts on Facebook's progress with AR and how that might impact computing and interfaces when paired with language models?
I think you're wrong on this one, Chamath, because I saw this revolution in Japan maybe 20 years ago. They got obsessed with augmented reality. There were a ton of startups right as they started getting to mobile phones. And the use cases were really very compelling. And we're starting to see them now in education.
And when you're at dinner with a bunch of friends, how often does picking up your phone and you know, looking at a message disturb the flow? Well, people will have glasses on, they'll be going for walks, they'll be driving, they'll be at a dinner party, they'll be with their kids.
And you'll have something on like focus mode, you know, whatever the equivalent is in Apple, and a message will come in from your spouse or from your child, but you won't have to take your phone out of your pocket.
And I think once these things weigh a lot less, you're going to have four different ways to interact with the computer in your pocket: your phone, your watch, your AirPods or whatever you have in your ears, and the glasses. And I bet you glasses are going to take like a third of the tasks you do. I mean, what is the point of taking out your phone and watching the Uber come to you?
But seeing that little strip that tells you the Uber is 20 minutes away, 15 minutes away, or what the gate number is. I don't have that anxiety. Well, I don't know if it's anxiety, but I just think it's ease of use.
I think it adds up. I think taking your phone out of your pocket 50 times a day.
Okay, Sacks, do you have any thoughts on this impressive demo, or the demo that people who've seen it have said is pretty darn compelling?
But didn't he also do Survivor or one of those? And then the UFC. I mean, this guy's got four distinct careers.
Based Zuck is the best Zuck.
Yeah. Well, I mean, I think he got the UFC out of Fear Factor and being a UFC fighter and a comedian. And there's like a famous story where like, Dana White was pursuing him. And he was like, I don't know. And then Dana White's like, I'll send a plane for you. You can bring your friends. He's like, okay, fine, I'll do it. He did it for free.
And then Dana White pursued him heavily to become the voice of the UFC. And yeah, obviously, it's grown tremendously. And it's worth billions of dollars. Okay.
I just want to point out that the form factor you're seeing now is going to get greatly reduced. These were some of the early Apple concepts. I don't know if you guys remember these, but Frog Design made these crazy tablets in the '80s that were the eventual inspiration for the iPad, you know, 25 years later, I guess. And so that's the journey we're on here right now.
This clunky, and these are not functional prototypes, obviously.
And there was an interesting waypoint. Microsoft had the first tablets. Here's the Microsoft tablet for those of you watching. That came, you know... I don't know, this was the late 90s or early 2000s, Friedberg, if you remember it. These like incredibly bulky tablets that Bill Gates was bringing to all the events.
All right. OpenAI, as we were just joking in the opening segment, is trying to convert into a for-profit benefit corporation. That's a B Corp. It just means, we'll explain B Corp later.
So you get a lot of false starts. They're spending, I think, close to $20 billion a year on this ARVR stuff.
This is the convergence of like three or four really interesting technological waves. All right, just dovetailing with tech jobs and the static team size, there is a report of a blue-collar boom. The "tool belt generation" is what Gen Z is being called. A report in the Wall Street Journal says, hey, tech jobs have dried up. We're all seeing that.
And according to Indeed, developer job postings are down about 30% since February of 2020, pre-COVID, of course. If you look at layoffs.fyi, you'll see all the, you know, tech jobs that have been eliminated since 2022, over half a million of them. A bunch of things are at play here. And the Wall Street Journal notes that entry-level tech workers are getting hit the hardest,
especially all these recent college graduates. And if you look at historical college enrollment, let's pull up that chart, Nick, you can see undergraduate, graduate, and total with the red line. We peaked at 21 million people in either graduate school or undergraduate in 2010, and that's come down to about 18.6 million.
At the same time, obviously, in the last 12 years the population has grown, so on a percentage basis this would be even more dramatic. So what's behind this?
A poll of 1,000 teens this summer found that about half believe a high school degree, trade program, or two-year degree best meets their career needs, and 56% said real-world on-the-job experience is more valuable than obtaining a college degree, something you've talked about with your own personal experience, Chamath, at Waterloo, doing apprenticeships, essentially.
Your thoughts on Generation Tool Belt?
A benefit corporation is a C-corporation variant that is not a non-profit, but the board of directors, Sacks, is required not only to be a fiduciary for all shareholders, but also for the stated mission of the company. That's my understanding of a B Corp, am I right, Friedberg?
Sacks, your thoughts on this generation tool belt we're reading about and, you know, the sort of combination with the static team size that we're seeing in technology, companies keeping the number of employees the same or trending down while they grow 30% year over year?
Friedberg, is this just the pendulum having swung too far and education getting too expensive? Spending $200K to make $50,000 a year is distinctly different than our childhoods, or, I'm sorry, our adolescence, when we were able to go to college for $10K or $20K a year, graduate with some low tens of thousands in debt if you did take debt, and then your entry-level job was $50, $60, $70K coming out of college.
What are your thoughts here? Is this a value issue with college?
And it's a way to, I guess, signal to investors, the market, employees that you care about something more than just profit. The most famous B Corp, I think, is TOMS. Is that the shoe company, TOMS? That's a famous B Corp. Somebody will look it up here.
Don't stop, Friedberg.
Patagonia. Yeah, that falls into that category. So, for-profit with a mission. Reuters has cited anonymous sources close to the company saying the plan is still being hashed out with lawyers and shareholders and the timeline isn't certain. But what's being discussed is that the nonprofit will continue to exist as a minority shareholder in the new company.
Why do you have that? I want to play some Zach Bryan songs, and he's got a couple songs I like with a harmonica in them. So I just got a harmonica. My daughter and I have been playing harmonica, yeah. Are you teaching yourself?
Let's hear it. I'll play it next week. I'm deep in the laboratory.
It could be a bit.
Be a little shy. He's a little shy. No, no, I'll write a Trump song for you. I'll do the trials and tribulations of Donald Trump, and I'll do a little Bob Dylan send-up song for you.
That means something.
You understand too soon there is no chance of dying. Yeah, that's an incredible clip. All right, you guys want to wrap, or do you want to keep talking about more stuff? We're at 90 minutes here.
How much of a minority shareholder? I guess the devil's in the details there. Do they own 1% or 49%? The much-discussed, Friedberg, 100x profit cap for investors will be removed. That means investors like Vinod, friend of the pod, and Reid Hoffman, also friend of the pod, could see a 100x return turn into 1,000x or more.
Oh, well, it's time for a very emotional segment we do here on the All-In podcast. I just got to get myself composed for this.
According to the Bloomberg report, Sam Altman is going to get his equity finally, 7%. That would put him at around $10.5 billion, if this is all true. And OpenAI could be valued as high as $150 billion. We'll get into all the shenanigans. But let's start with your question, Friedberg. And since you asked it, I'm going to boomerang it back to you: make the bull case for the $150 billion valuation.
Barret Zoph left on Wednesday. Bob McGrew also left on Wednesday.
And Mira Murati also left us tragically on Wednesday.
Sacks, here's a chart of OpenAI's revenue growth that has been pieced together from various sources at various times. But you'll see here, they are reportedly, as of June of 2024, on a $3.4 billion run rate for this year, after hitting $2 billion in '23, $1.3 billion in October of '23. And then back in 2022, it's reported they only had $28 million in revenue.
So this is a pretty big streak here in terms of revenue growth. I would put it at 50 times top line revenue, $150 billion valuation. You want to give us the bear case, maybe, or the bull case?
Yeah. And Greg Brockman is on extended leave.
Thank you for your service. Your memories will live on as training data. And may your memories be a-vesting.
Okay, Chamath, do us a favor here. If there is a bear case, what is it?
Okay, I think this is very well put. And I have been using ChatGPT and Claude and Gemini exclusively. I stopped using Google Search. And I also stopped, Sacks, asking people on my team to do stuff before I ask ChatGPT to do it, specifically, Friedberg, the o1 version. And the o1 version is distinctly different. Have you gentlemen been using o1 on a daily basis?
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
Probably, but the phone manufacturers are going to do that for sure. That doesn't necessitate a new device. I think you'd have to find some really different interaction paradigm that the technology enables. And if I knew what it was, I would be excited to be working on it right now.
We'll get that better. And I think voice is a hint to whatever the next thing is. If you can get voice interaction to be really good, it feels great. I think that feels like a different way to use a computer.
We are working on that. It's so clunky right now. It's slow. It's like kind of doesn't feel very smooth or authentic or organic. Like we'll get all that to be much better.
Super powerful to be able to like, the multimodality of saying like, hey, ChatGPT, what am I looking at? Or like, what kind of plant is this? I can't quite tell. That's another, I think, hint, but whether people want to wear glasses or hold up something when they want that,
There's a bunch of just like the sort of like societal interpersonal issues here are all very complicated about wearing a computer on your face.
I forgot about that. I forgot about that. So I think it's like.
I think what I want is just this always on like super low friction thing where I can... either by voice or by text or ideally like some other, it just kind of knows what I want, have this like constant thing helping me throughout my day that's got like as much context as possible. It's like the world's greatest assistant. And it's just this like thing working to make me better and better.
There's like... I know when you hear people talk about the AI future, they imagine there are sort of two
different approaches, and they don't sound that different, but I think they're very different for how we'll design the system in practice. There's the: I want an extension of myself, I want like a ghost or an alter ego, this thing that really is me, is acting on my behalf, is responding to emails, not even telling me about it; it sort of becomes more me and is me.
And then there's this other thing, which is like, I want a great senior employee. It may get to know me very well. I may delegate it. You know, you can like have access to my email and I'll tell you the constraints, but I think of it as this like separate entity. And I personally like the separate entity approach better and think that's where we're gonna head. And so in that sense,
The thing is not you, but it's like an always available, always great, super capable executive assistant.
I think there'd be agent-like behavior, but there's a difference between a senior employee and an agent. And, like, I want it, you know, I think of it like a bit... One of the things that I like about a senior employee is they'll push back on me. They will sometimes not do something I ask, or they sometimes will say, I can do that thing if you want.
But if I do it, here's what I think would happen and then this and then that. And are you really sure? Yeah. I definitely want that kind of vibe, which not just like this thing that I give a task and it blindly does. It can reason.
It can reason. It has like the kind of relationship with me that I would expect out of a really competent person that I worked with, which is different from like a sycophant.
I'm actually very interested in designing a world that is equally usable by humans and by AIs. So I... I like the interpretability of that. I like the smoothness of the handoffs. I like the ability that we can provide feedback or whatever. So, you know, DoorDash could just expose some API to my future AI assistant and they could go put the order in or whatever.
Or I could say, I could be holding my phone and I could say, okay, AI assistant, you put in this order on DoorDash, please. And I could watch the app open and see the thing clicking around and I could say, hey, no, not this. There's something about designing a world that is usable equally well by humans and AIs that I think is an interesting concept.
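One common way to realize the "expose an API to my AI assistant" idea that Altman describes is function calling. Here is a minimal sketch assuming the OpenAI Python SDK, with a hypothetical place_doordash_order tool that stands in for whatever a delivery app might actually expose; it is not a real DoorDash API.

```python
from openai import OpenAI
import json

client = OpenAI()

# Hypothetical tool definition; the name and fields are illustrative only.
tools = [{
    "type": "function",
    "function": {
        "name": "place_doordash_order",
        "description": "Place a food delivery order on behalf of the user.",
        "parameters": {
            "type": "object",
            "properties": {
                "restaurant": {"type": "string"},
                "items": {"type": "array", "items": {"type": "string"}},
            },
            "required": ["restaurant", "items"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Order me a spicy tuna roll and miso soup from my usual sushi place."}],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:  # the model chose to call the hypothetical tool
    call = message.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))
    # The app, not the model, would place the order here, showing the user each
    # step for confirmation, which preserves the human-visible handoff described above.
```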
Same reason I'm more excited about humanoid robots than robots of other shapes. The world is very much designed for humans, and I think we should absolutely keep it that way. And a shared interface is nice.
It's hard for me to imagine that we just go to a world totally where you say like, hey, ChatGPT, order me sushi. And it says, okay, do you want it from this restaurant? What kind, what time, whatever? I think... I think visual user interfaces are super good for a lot of things.
And it's hard for me to imagine a world where you never look at a screen and just use voice mode only, but I can imagine that for a lot of things.
You know, it's like setting a timer with Siri; I do it every time because it works really well. And it's great.
But ordering an Uber, like, I want to see the prices for a few different options, I want to see how far away it is, I want to see like, maybe even where they are on the map, because I might walk somewhere, I get a lot more information by, I think, in less time by looking at that order the Uber screen than I would if I had to do that all through the audio channel.
I think there will just be, yeah, different... There are different interfaces we use for different tasks, and I think that'll keep going.
I met with a new company this morning or barely even a company. It's like two people that are going to work on a summer project trying to actually finally make the AI tutor. And I've always been interested in this space. A lot of people have done great stuff on our platform. But if someone can deliver like the way that you actually like.
They used a phrase I love, which is this is going to be like a Montessori level reinvention for how people learn things. But if you can find this new way to let people explore and learn in new ways on their own, I'm personally super excited about that. A lot of the coding-related stuff you mentioned, Devin, earlier, I think that's like a super cool vision of the future.
The thing that I am... healthcare, I believe, should be pretty transformed by this. But the thing I'm personally most excited about is the sort of doing faster and better scientific discovery. GPT-4 is clearly not there in a big way, although maybe it accelerates things a little bit by making scientists more productive. But AlphaFold 3, yeah. That's like... But Sam... That will be a triumph.
You'll need some of that for sure. But the thing that I think we're missing across the board for many of these things we've been talking about is models that can do reasoning. And once you have reasoning, you can connect it to chemistry simulators or whatever else.
We take our time on releases of major new models.
And I don't think we... I think it will be great when we do it, and I think we'll be thoughtful about how we do it. Like, we may release it in a different way than we've released previous models. Also, I don't even know if we'll call it GPT-5. What I will say is, you know, a lot of people have noticed how much better GPT-4 has gotten since we've released it, and particularly over the last few months.
I don't know how much reasoning is going to turn out to be a super generalizable thing. I suspect it will, but that's more just like an intuition and a hope, and it would be nice if it worked out that way. I don't know if that's like...
There's so many ways that could go. Maybe it trains a literal model for it, or maybe it's just the one big model that can go pick what other training data it needs, ask a question, and then update on that.
You know, there's like a version of this. I think you can like... already see. When you were talking about biology and these complicated networks of systems, the reason I was smiling, I got super sick recently, and I'm mostly better now, but it was just like, body got beat up, one system at a time. You can really tell, okay, it's this cascading thing, and
And that reminded me of you talking about biology: these systems, you have no idea how much they interact with each other until things start going wrong. And that was sort of interesting to see. But I was using ChatGPT to try to figure out what was happening, whatever, and it would say, well, I'm unsure of this one thing. And then I just pasted a paper
on it, without even reading the paper, like, into the context. And it says, oh, that was the thing I wasn't sure of. Like, now I think this instead. So that was a small version of what you're talking about, where you can say, I don't know this thing, and you can put in more information. You don't retrain the model; you're just adding it to the context.
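What Altman is describing here is in-context learning rather than retraining. A minimal sketch of that pattern, assuming the OpenAI Python SDK and a hypothetical paper.txt file, might look like this.

```python
from openai import OpenAI

client = OpenAI()

paper_text = open("paper.txt").read()   # hypothetical file containing the pasted paper

history = [
    {"role": "user", "content": "Given these symptoms, what is most likely going on?"},
    {"role": "assistant", "content": "Possibly X, but I'm unsure how factor Y behaves here."},
    # No retraining happens: the new information simply goes into the context window.
    {"role": "user", "content": "Here is a paper on factor Y. Update your answer:\n\n" + paper_text},
]

response = client.chat.completions.create(model="gpt-4o", messages=history)
print(response.choices[0].message.content)
```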
Yeah, so my... On the general thing first, my... You clearly will need specialized simulators, connectors, pieces of data, whatever. But my intuition, and again, I don't have this like backed up with science. My intuition would be if we can figure out the core of generalized reasoning, connecting that to new problem domains in the same way that humans are generalized reasoners.
I've never heard that before. Yeah, I think so. It's possible. You've got it starting already.
would, I think, be doable.
But yeah, Sora does not start with a language model. That's a model that is customized to do video. And so we're clearly not at that world yet.
I think that's a better hint of what the world looks like, where it's not the one, two, three, four, five, six, seven, but you use an AI system and the whole system just gets better and better fairly continuously. I think that's both a better technological direction and easier for society to adapt to. And I assume that's where we'll head.
Yeah, I mean, one example of this is like, okay, you know, as far as I know, all the best text models in the world are still autoregressive models, and the best image and video models are diffusion models. That's like sort of strange in some sense.
So I think it's very different for different kinds of... I mean, look, on fair use, I think we have a very reasonable position under the current law, but I think AI is so different that for things like art, we'll need to think about them in different ways. I would say, if you go read a bunch of math on the internet and learn how to do math, that seems unobjectionable to most people.
And then there's another set of people who might have a different opinion. Well, what if you like Actually, let me not get into that, just in the interest of not making this answer too long. So I think there's one category people are like, okay, there's generalized human knowledge.
You can kind of go, if you learn that, that's open domain or something, if you kind of go learn about the Pythagorean theorem. That's one end of the spectrum. And then I think the other extreme end of the spectrum is...
is art, and maybe even like more than, more specifically I would say it's like doing, it's a system generating art in the style or the likeness of another artist would be kind of the furthest end of that. And then there's many, many cases on the spectrum in between.
I think the conversation has been historically very caught up on training data, but it will increasingly become more about what happens at inference time. As training data becomes less valuable and what the system does accessing information in context in real time or...
you know, taking something like that, what happens at inference time will become more debated, and what the new economic model is there. So if you say, like, create me a song in the style of Taylor Swift,
even if the model were never trained on any Taylor Swift songs at all, you can still have a problem, which is it may have read about Taylor Swift, it may know about her themes, Taylor Swift means something. And then the question is, should that model, even if it were never trained on any Taylor Swift song whatsoever, be allowed to do that? And if so, How should Taylor get paid? Right.
So I think there's an opt-in, opt-out in that case, first of all, and then there's an economic model. Staying on the music example, there is something interesting to look at from... the historical perspective here, which is sampling and how the economics around that work. This is not quite the same thing, but it's like an interesting place to start looking.
I wasn't trying to make that point because I agree in the same way that humans are inspired by other humans. I was saying if you say generate me a song in the style of Taylor Swift.
I think personally that's a different case.
We have currently made the decision not to do music, and partly because exactly these questions of where you draw the lines. I was meeting with several musicians I really admire recently, and I was just trying to talk about some of these edge cases. But even the world in which... If we...
went and, let's say, we paid 10,000 musicians to create a bunch of music just to make a great training set, where the music model could learn everything about song structure and what makes a good catchy beat and everything else, and only trained on that. Let's say we could still make a great music model, which maybe we could.
Well, I mean, one thing that you could imagine is just that you keep training a model. That would seem like a reasonable thing to me.
You know, I was kind of posing that as a thought experiment to musicians. And they're like, well, I can't object to that on any principled basis at that point. And yet there's still something I don't like about it. Now, that's not a reason not to do it, necessarily. But it is... Did you see that ad that Apple put out?
Maybe it was yesterday or something of like squishing all of human creativity down into one really thin iPad.
There's something about... I'm obviously hugely positive on AI, but there is something that I think is beautiful about human creativity and human artistic expression. And, you know, for an AI that just does better science, like, great, bring that on. But an AI that is going to do this, like, deeply beautiful human creative expression, I think we should, like... figure out it's going to happen.
It's going to be a tool that will lead us to greater creative heights. But I think we should figure out how to do it in a way that preserves the spirit of what we all care about here.
Yeah, you know, we put out this thing yesterday called the spec, where we're trying to say here are, here's, here's how our model is supposed to behave. And it's very hard, it's a long document, it's very hard to specify exactly in each case where the limits should be, and I view this as a discussion that's gonna need a lot more input. But these sorts of questions about
okay, maybe it shouldn't generate Darth Vader, but the idea of a Sith Lord or a Sith-style thing or Jedi at this point is part of the culture. These are all hard decisions.
I'm concerned. I mean, there's so many proposed regulations, but most of the ones I've seen on the California state things I'm concerned about. I also have a general fear of the states all doing this themselves. When people say regulate AI, I don't think... they mean one thing. I think there's like, some people are like, ban the whole thing.
Some people are like, don't allow it to be open source; some say require it to be open source. The thing that I am personally most interested in is... I think there will come... Look, I may be wrong about this. I will acknowledge that this is a forward-looking statement, and those are always dangerous to make.
But I think there will come a time in the not super distant future, like, you know, we're not talking decades and decades from now, where the frontier AI systems are capable of causing significant global harm.
And for those kinds of systems, in the same way we have global oversight of nuclear weapons or synthetic bio or things that can really have a very negative impact way beyond the realm of one country, I would like to see some sort of international agency that is looking at the most powerful systems and ensuring reasonable safety testing.
These things are not going to escape and recursively self-improve or whatever.
Do you feel like... If the line were that we're only going to look at models that are trained on computers that cost more than $10 billion or more than $100 billion or whatever, I'd be fine with that. There'd be some line that'd be fine. And I don't think that puts any regulatory burden on startups.
Well, Chamath, go ahead. You had a follow-up. Can I say one more thing about that? Of course. I'd be super nervous about regulatory overreach here. I think we can get this wrong by doing way too much, or even a little too much. I think we can get this wrong by doing not enough.
GPT-4 is still only available to the paid users, but one of the things that we really want to do is figure out how to make more advanced technology available to free users too. I think that's a super important part of our mission.
But I do think part of... And now, I mean, we have seen regulatory overstepping or capture just get super bad in other areas. And... you know, also maybe nothing will happen. But I think it is part of our duty and our mission to like talk about what we believe is likely to happen and what it takes to get that right.
Totally. Right. Look, the reason I have pushed for an agency-based approach for the big-picture stuff, and not a "write it in laws" approach, is that in 12 months it will all be written wrong. And I don't think, even if these people were true world experts, I don't think they could get it right looking out 12 or 24 months.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
And this idea that we build AI tools and make them super widely available, free or not that expensive, whatever it is, so that people can use them to go kind of invent the future, rather than the magic AGI in the sky inventing the future and showering it down upon us. That seems like a much better path. It seems like a more inspiring path. I also think it's where things are actually heading.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
And these policies, which are like, we're going to audit all of your source code and look at all of your weights one by one... I think there are a lot of crazy proposals out there.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
Again, this is why I think it's... But, like, before an airplane gets certified, there's a set of safety tests. We put the airplane through them, and... Totally. It's different than reading all of your code.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
And so what I was going to say is that that is the kind of safety testing I think makes sense.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
Or do you see that... At the current strength of models... Definitely some things are going to go wrong, and I don't want to make light of those or not take those seriously. But I don't have any catastrophic risk worries with a GPT-4 level model. And I think there's many safe ways to choose to deploy this.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
Maybe we'd find more common ground if we said that, like, you know, the specific example of models that are capable, that are technically capable, even if they're not going to be used this way, of recursive self-improvement or of, you know, autonomously designing and deploying a bioweapon or something like that. Or a new model. Yeah. That was the recursive self-improvement point.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
We should have safety testing on the outputs at an international level for models that have a reasonable chance of posing a threat there. Of course, I don't think GPT-4...
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
well, I won't say it doesn't pose any sort of threat, but I don't think GPT-4 poses a material threat on those kinds of things. And I think there are many safe ways to release a model like this. But, you know, when significant loss of human life is a serious possibility, like airplanes or
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
any number of other examples where I think we're happy to have some sort of testing framework. Like I don't think about an airplane when I get on it. I just assume it's going to be safe.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
Our results on that come out very soon. It was a five-year study that wrapped up, or it started five years ago; well, there was a beta study first and then the long one that ran.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
So we started thinking about this in 2016, about the same time we started taking AI really seriously. And the theory was that the magnitude of the change that may come to society and jobs and the economy, and in some deeper sense than that, to what the social contract looks like, meant that we should have many studies of many ideas about new ways to arrange that.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
It makes me sad that we have not figured out how to make GPT-4 level technology available to free users. It's something we really want to do.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
I also think that I'm not a super fan of how the government has handled most policies designed to help poor people. And I kind of believe that if you could just give people money, they would make good decisions and the market would do its thing. And, you know, I'm very much in favor of lifting up the floor and reducing, eliminating poverty.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
But I'm interested in better ways to do that than what we have tried for the existing social safety net and kind of the way things have been handled. And I think giving people money is not going to go solve all problems. It's certainly not going to make people happy, but it might solve some problems and it might give people a better horizon with which to help themselves.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
And I'm interested in that. 2016 was a very long time ago. Now that we see some of the ways that AI is developing, I wonder if there are better things to do than the traditional conceptualization of UBI. Like, I wonder if the future looks something more like universal basic compute than universal basic income.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
And everybody gets like a slice of GPT-7's compute and they can use it, they can resell it, they can donate it to somebody to use for cancer research. But what you get is not dollars, but this like, slice. Yeah, you own like part of the productivity.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
Um... you know, if you have specific questions, I'm happy to. I said maybe I'd want to talk about it at some point.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
I was fired. I talked about coming back. I was a little bit unsure in the moment about what I wanted to do, because I was very upset. And I realized that I really loved OpenAI and the people, and that I would come back. And I knew it was going to be hard. It was even harder than I thought. But I was like, all right, fine. I agreed to come back.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
The board took a while to figure things out. And then, you know, we were trying to keep the team together and keep doing things for our customers, and sort of started making other plans. Then the board decided to hire a different interim CEO. And then everybody... There were many people. Oh, my gosh.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
And I have nothing but good things to say about Emmett. I was here for Scaramucci. And then.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
I was in a hotel room in Vegas for F1 weekend.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
Actually, no, I got a text the night before. And then I got on a phone call with the board, and that was that. And then everything went crazy. My phone was basically unusable, just a nonstop vibrating thing of text messages and calls.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
It was nice of them. And then, you know, I did a few hours of just this absolute fugue state in the hotel room, confused beyond belief, trying to figure out what to do. It was so weird. Then I flew home at maybe, I don't know, 3 p.m. or something like that, the phone still blowing up nonstop.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
Met up with some people in person. By that evening, I was like, okay, I'll just go do AGI research, and I was feeling pretty happy about the future. Yeah, you have options. And then the next morning I had this call with a couple of board members about coming back, and that led to a few more days of craziness. And then it kind of got resolved.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
Well, it was like a lot of insanity in between.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
Well, we only have a nonprofit board, so it was all the nonprofit board members. The board had gotten down to six people. They removed Greg from the board and then fired me. So, it was like, you know...
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
I think there have always been culture clashes at... Look, obviously not all of those board members are my favorite people in the world, but I have serious respect for the gravity with which they treat AGI and the importance of getting AI safety right. And even if I stringently disagree with their decision-making and actions, which I do, I have never once doubted their
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
integrity or commitment to the sort of shared mission of safe and beneficial AGI. You know, do I think they, like, made good decisions in the process of that or kind of know how to balance all the things OpenAI has to get right? No. But I think the, like... The intent. The intent of the magnitude of AGI and getting that right
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
very afraid of AGI, or very afraid of even current AI, and very excited about it, and even more afraid and even more excited about where it's going. And we wrestle with that, but I think it is unavoidable that this is going to happen. I also think it's going to be tremendously beneficial. But we do have to navigate how to get there in a reasonable way.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
And a lot of stuff is going to change, and change is pretty uncomfortable for people. So there's a lot of pieces that we got to get right.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
Yeah, I wish I had taken equity so I never had to answer this question.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
The decision back then, the original reason was just the structure of our nonprofit. There was something about... yeah, okay, this is like nice from a motivations perspective, but mostly it was that our board needed to be a majority of disinterested directors. And I was like, that's fine, I don't need equity right now. I kind of...
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
One thing I have noticed is that it's so deeply unimaginable to people to say, I don't really need more money. Like, and I get how tone-deaf that sounds.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
Well, yeah, yeah, yeah. No, so it assumes.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
If I were just trying to say, like, I'm going to try to make a trillion dollars with OpenAI, I think everybody would have an easier time, and it would save a lot of conspiracy theories.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
So the things like, you know, device companies, or if we were doing some chip fab company, those are not Sam projects. Those would be like, OpenAI would get that equity. They would.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
Well, that's kind of the perception of people like you who have to commentate on this stuff all day, which is fair, because we haven't announced this stuff, because it's not done. I don't think most people in the world are thinking about this, but I agree it spins up a lot of conspiracy theories among tech commentators.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
And if I could go back, yeah, I would just say, let me take equity and make that super clear. And then everyone would be like, all right. I'd still be doing it because I really care about AGI and think this is the most interesting work in the world. But it would at least make sense to everybody.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
I don't know where that came from, actually. I genuinely don't. I think the world needs a lot more AI infrastructure, a lot more than it's currently planning to build and with a different cost structure. The exact way for us to play there is we're still trying to figure that out. Got it.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
So on the first part of your question, speed and cost, those are hugely important to us. And I don't want to give a timeline on when we can bring them down a lot because research is hard, but I am confident we'll be able to. We want to cut the latency super dramatically. We want to cut the cost really, really dramatically. And I believe that will happen.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
Oh, I have to go in a minute. It's not that we need to be more organized to prevent the edge cases; it's that these systems are so complicated and concentrating bets is so important. Like, one...
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
You know, at the time, before it was obvious to do this, you had DeepMind or whoever with all these different teams doing all these different things, spreading their bets out. And you had OpenAI say, we're going to basically put the whole company together to work on GPT-4. And that was unimaginable as a way to run an AI research lab. But it is...
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
I think what works, at a minimum, it's what works for us. So not because we're trying to prevent edge cases, but because we want to concentrate resources and do these big, hard, complicated things, we do have a lot of coordination on what we work on.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
Great talking to you guys. Yeah, it was fun. Thanks for coming out.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
I'm really happy it finally happened. Yeah, it's awesome. I really appreciate it. I would love to come back on after our next major launch and I'll be able to talk more directly about some of these things.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
We haven't seen you at poker in a while. You know, I would love to play poker. It has been forever. That would be a lot of fun.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
I don't really know if you, I don't know if you could say that about anybody else. I don't, I'm not going to.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
We're still so early in the development of the science and understanding how this works. Plus, we have all the engineering tailwinds. So I don't know when we get to intelligence too cheap to meter and so fast that it feels instantaneous to us and everything else, but... I do believe we can get there for a pretty high level of intelligence. It's important to us.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
It's clearly important to users, and it'll unlock a lot of stuff. On the sort of open source, closed source thing, I think there's great roles for both, I think. You know, we've open sourced some stuff. We'll open source more stuff in the future. But really, like, our mission is to build towards AGI and to figure out how to broadly distribute its benefits. We have a strategy for that.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
It seems to be resonating with a lot of people. It obviously isn't for everyone, and there's, like, a big ecosystem, and there will also be open source models and people who build that way. One area that I'm particularly interested personally in open source for is I want an open source model that is as good as it can be that runs on my phone.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
And that, I think, is going to, you know, the world doesn't quite have the technology for a good version of that yet. But that seems like a really important thing to go do at some point.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
I don't know if we will or someone will.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
That should be fittable on a phone, but I'm not sure if that one is like... I haven't played with it.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
What we're trying to do is not make the smartest set of weights that we can. What we're trying to make is this useful intelligence layer for people to use, and a model is part of that. I think we will stay, I hope we'll stay, pretty far ahead of the rest of the world on that. But
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
There's a lot of other work around the whole system that's not just the model weights. And we'll have to build up enduring value the old-fashioned way, like any other business does. We'll have to figure out a great product and reasons to stick with it, and deliver it at a great price.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
Part of the reason that we released ChatGPT was we want the world to see this. And we've been trying to tell people that AI is really important. And if you go back to like October of 2022, not that many people thought AI was going to be that important or that it was really happening. No. And a huge part of what we try to do is put the technology in the hands of people.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
Now, again, there are different ways to do that. And I think there really is an important role to just say, like, here are the weights, have at it. But the fact that we have so many people using a free version of ChatGPT that we don't run ads on, that we don't try to make money on, that we just put out there because we want people to have these tools, I think has done a lot to...
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
provide a lot of value and teach people how to fish, but also to get the world really thoughtful about what's happening here. Now, we still don't have all the answers, and we're fumbling our way through this like everybody else, and I assume we'll change strategy many more times as we learn new things.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
You know, when we started OpenAI, we had really no idea about how things were going to go, that we'd make a language model, that we'd ever make a product. We started off just... I remember very clearly that first day where we're like, well, Now we're all here. That was, you know, it was difficult to get this set up, but what happens now? Maybe we should write some papers.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
Maybe we should stand around a whiteboard. And we've just been trying to like put one foot in front of the other and figure out what's next and what's next and what's next. And... I think we'll keep doing that.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
So I definitely don't think it'll be an arms race for data, because when the models get smart enough, at some point it shouldn't be about more data, at least not for training. Data may still matter for making it useful. Look, the one thing that I have learned most throughout all of this is that it's hard to make confident statements a couple of years into the future about where this is all going to go.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
And so I don't want to try now. I will say that I expect lots of very capable models in the world. And, you know, it feels to me like we just stumbled on a new fact of nature or science or whatever you want to call it... I mean, I don't believe this literally, but it's like a spiritual point.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
You know, intelligence is just this emergent property of matter, and that's like a rule of physics or something. So people are going to figure that out. But there will be all these different ways to design the systems. People will make different choices, figure out new ideas. And I'm sure, like, you know,
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
Like any other industry, I would expect there to be multiple approaches and different people like different ones. Some people like iPhones, some people like an Android phone. I think there will be some effect like that.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
We'll make huge algorithmic gains for sure, and I don't want to discount that. I'm very interested in chips and energy, but if we can make a same quality model twice as efficient, that's like we had twice as much compute. And I think there's a gigantic amount of work to be done there. And I hope we'll start really seeing those results. Other than that, the whole supply chain is very complicated.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
There's logic fab capacity, there's how much HBM the world can make, there's how quickly you can get permits and pour the concrete and make the data centers, and then have people in there wiring them all up. There's finding the energy, which is a huge bottleneck. But I think when there's this much value to people, the world will do its thing. We'll try to help it happen faster.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
And there's probably, I don't know how to give it a number, but some percentage chance where there is, as you were saying, a huge substrate breakthrough and we have a massively more efficient way to do computing. But I don't bank on that or spend too much time thinking about it.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
I'm super interested in this. I love like great new form factors of computing. And it feels like with every major technological advance, a new thing becomes possible. Phones are unbelievably good, so I think the threshold is very high here. I personally think an iPhone is the greatest piece of technology humanity has ever made. It's really a wonderful product. What comes after it? I don't know.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
That was what I was saying. It's so good that to get beyond it, I think the bar is quite high.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
We've been discussing ideas, but I don't, like, if I knew.
All-In with Chamath, Jason, Sacks & Friedberg
In conversation with Sam Altman
Well, almost everyone's willing to pay for a phone anyway. So if you could, like, make a way cheaper device, I think the barrier to carry a second thing or use a second thing is pretty high. So I don't think, given that we're all willing to pay for phones, or most of us are, I don't think cheaper is the answer.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
So those were decided, honestly, without... you know, you kind of do that on the battlefield. You don't have time to design a rigorous process then. For new board members, the new board members we'll add going forward, we have some criteria that we think are important for the board to have, different expertise that we want the board to have.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Unlike hiring an executive where you need them to do one role well, the board needs to do a whole role of kind of governance and thoughtfulness well. And so one thing that Brett says, which I really like, is that we want to hire board members in slates, not as individuals one at a time. And thinking about a group of people that will bring
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Nonprofit expertise, expertise in running companies, sort of good legal and governance expertise. That's kind of what we've tried to optimize for.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Not for every board member, but for certainly some you need that. That's part of what the board needs to do.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Look, I think you definitely need some technical experts there. And then you need some people who are like, how can we deploy this in a way that will help people in the world the most and people who have a very different perspective? You know, I think a mistake that you or I might make is to think that only the technical understanding matters.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
And that's definitely part of the conversation you want that board to have. But there's a lot more about how that's going to just like impact society and people's lives that you really want represented in there too. And you're just kind of, are you looking at the track record of people or you're just having conversations? Track record is a big deal.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
You of course have a lot of conversations, but there are some roles where I totally ignore track record and just look at slope, ignore the Y-intercept. Thank you. Thank you for making it mathematical for the audience. For a board member, I do care much more about the Y-intercept. I think there is something deep to say about track record there,
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
and experience is sometimes very hard to replace.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I mean, there were so many lows; it was very bad. There were great high points, too. Like, my phone was just nonstop blowing up with nice messages from people I work with every day, people I hadn't talked to in a decade. I didn't get to appreciate that as much as I should have because I was in the middle of this firefight, but that was really nice.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
It was like a battle fought in public to a surprising degree, and that was extremely exhausting to me, much more than I expected. I think fights are generally exhausting, but this one really was. You know, the board did this... Friday afternoon, I really couldn't get much in the way of answers, but I also was just like, well, the board gets to do this.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
And so I'm going to think for a little bit about what I want to do, but I'll try to find the blessing in disguise here. And I was like, well... you know, my current job at OpenAI is, or it was, to run a decently sized company at this point. And the thing I'd always liked the most was just getting to work with the researchers.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
And I was like, yeah, I can just go do a very focused AGI research effort. And I got excited about that. It didn't even occur to me at the time that this was all possibly going to get undone. This was Friday afternoon. So you'd accepted the death of this previous... Very quickly.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Like, within... you know, I mean, I went through a little period of confusion and rage, but very quickly. And by Friday night, I was talking to people about what was going to be next, and I was excited about that. I think it was Friday evening that I first heard from the exec team here, which was like, hey, we're going to fight this, and we think... whatever.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
And then I went to bed still being like, okay, excited, onward. Were you able to sleep? Not a lot. One of the weird things was there was this period of four and a half days where I sort of didn't sleep much, didn't eat much, and still had a surprising amount of energy. You learn a weird thing about adrenaline in wartime.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
And I was excited for the new thing. I was just like, okay, this was crazy, but whatever.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
And then Saturday morning, two of the board members called and said, hey, we didn't mean to destabilize things. We don't want to destroy a lot of value here. You know, can we talk about you coming back?
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
And I immediately didn't want to do that, but I thought a little more and I was like, well, I really do care about the people here, the partners, shareholders, all of it. I love this company. And so I thought about it and I was like, well, okay, but here's the stuff I would need. And then the most painful time of all was over the course of that weekend.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I kept thinking and being told, and we all kept, not just me, like the whole team here kept thinking, well, we were trying to like keep OpenAI stabilized while the whole world was trying to break it apart, people trying to recruit, whatever. We kept being told like, all right, we're almost done. We're almost done. We just need like a little bit more time. And it was this like very confusing state.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
And then Sunday evening, when again, every few hours I expected that we were going to be done, that we were going to figure out a way for me to return and things would go back to how they were, the board then appointed a new interim CEO. And I was like, I mean, that feels really bad. That was the low point of the whole thing. You know, I'll tell you something.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
It felt very painful, but I felt a lot of love that whole weekend. Other than that one moment Sunday night, I would not characterize my emotions as anger or hate. I really just felt a lot of love from people, towards people. It was painful, but the dominant emotion of the weekend was love, not hate.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Well, she did a great job during that weekend in a lot of chaos. But people often see leaders in the crisis moments, good or bad. But a thing I really value in leaders is how people act on a boring Tuesday at 9:46 in the morning and in just sort of the normal drudgery of the day-to-day. How someone shows up in a meeting, the quality of the decisions they make.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Yeah, I mean, look, what you have wanted to spend the last 20 minutes on, and I understand, is this one very dramatic weekend.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
But that's not really what OpenAI is about. OpenAI is really about the other seven years.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I love Ilya. I have tremendous respect for Ilya. I... I don't have anything I can say about his plans right now. That's a question for him. But I really hope we work together for certainly the rest of my career. He's a little bit younger than me. Maybe he works a little bit longer.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Oh, he has not seen AGI. None of us have seen AGI. We have not built AGI. I do think one of the many things that I really love about Ilya is he takes AGI and the safety concerns, broadly speaking, including things like the impact this is going to have on society, very seriously. As we continue to make significant progress,
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Ilya is one of the people that I've spent the most time over the last couple of years talking about what this is going to mean, what we need to do to ensure we get it right, to ensure that we succeed at the mission.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I don't want to speak for him. Oh yeah, I think that you should ask him that. He's definitely a thoughtful guy. I kind of think of Ilya as always being on a soul search, in a really good way.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I've never witnessed a silly Ilya, but I look forward to that as well. I was at a dinner party with him recently, and he was playing with a puppy. And he was in a very silly mood, very endearing. And I was thinking, oh man, this is not the side of Ilya that the world sees the most.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I feel great about the new board. In terms of the structure of OpenAI, one of the board's tasks is to look at that and see where we can make it more robust. We wanted to get new board members in place first, but we clearly learned a lesson about structure throughout this process. I don't have, I think, super deep things to say. It was a crazy, very painful experience.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I think it was a perfect storm of weirdness. It was like a preview for me of what's going to happen as the stakes get higher and higher, and of the need we have for robust governance structures and processes and people.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Yes. Just on a personal level? Yes. I think I'm like an extremely trusting person. I always had a life philosophy of, you know, like, don't worry about all of the paranoia. Don't worry about the edge cases. You know, you get a little bit screwed in exchange for getting to live with your guard down. And this was so shocking to me. I was so caught off guard. that it has definitely changed.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
And I really don't like this. It's definitely changed how I think about just like default trust of people and planning for the bad scenarios. You got to be careful with that. Are you worried about becoming a little too cynical? I'm not worried about becoming too cynical.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I think I'm like the extreme opposite of a cynical person, but I'm worried about just becoming like less of a default trusting person.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I think you could make all kinds of comments about the board members and the level of trust I should have had there or how I should have done things differently. But in terms of the team here, I think you'd have to give me a very good grade on that one. And I have just enormous gratitude and trust and respect for the people that I work with every day.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
And I think being surrounded with people like that is...
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I don't know what it's really about. We started off just thinking we were going to be a research lab and having no idea about how this technology was going to go. Because it was only seven or eight years ago, it's hard to go back and really remember what it was like then. But this was before language models were a big deal.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
This was before we had any idea about an API or selling access to a chatbot. This was before we had any idea we were going to productize at all. So we're like, we're just going to try to do research and we don't really know what we're going to do with that.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I think with many fundamentally new things, you start fumbling through the dark and you make some assumptions, most of which turn out to be wrong.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
So we said, okay, well, the structure doesn't quite work for that. How do we patch the structure? And then you patch it again and patch it again, and you end up with something that does look kind of eyebrow-raising to say the least. But we got here gradually with, I think, reasonable decisions at each point along the way.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
And it doesn't mean I wouldn't do it totally differently if we could go back now with an oracle, but you don't get the oracle at the time. But anyway, in terms of what Elon's real motivations here are, I don't know.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Elon said this set of things. Here's our characterization. Here's the characterization of how this went down. We tried to not make it emotional and just sort of say, here's the history.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
It wasn't that long ago that Elon was crazily talking about launching rockets. Yeah. When people were laughing at that thought.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
He thought OpenAI was going to fail. He wanted total control to sort of turn it around. We wanted to keep going in the direction that now has become OpenAI. He also wanted Tesla to be able to build an AGI effort. Various times he wanted to make OpenAI into a for-profit company that he could have control of or have it merge with Tesla.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
We didn't want to do that, and he decided to leave, which that's fine.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
My memory is the proposal was just like, yeah, get acquired by Tesla and have Tesla have full control over it.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I would definitely pick a different... speaking of going back with an oracle, I'd pick a different name. One of the things that I think OpenAI is doing that is the most important of everything that we're doing is putting powerful technology in the hands of people for free, as a public good. We don't run ads on our free version. We don't monetize it in other ways. We just say it's part of our mission.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
We want to put increasingly powerful tools in the hands of people for free and get them to use them. And I think... That kind of open is really important to our mission. I think if you give people great tools and teach them to use them or don't even teach them, they'll figure it out and let them go build an incredible future for each other with that. That's a big deal.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
So if we can keep putting like free or low cost or free and low cost powerful AI tools out in the world, I think it's a huge deal for how we fulfill the mission. Yeah. Open source or not, yeah, I think we should open source some stuff and not other stuff. It does become this religious battle line where nuance is hard to have, but I think nuance is the right answer.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I think that speaks to the seriousness with which Elon means the lawsuit.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
So look, I mean, Grok had not open sourced anything until people pointed out it was a little bit hypocritical. And then he announced that Grok will open source things this week. I don't think open source versus not is what this is really about for him.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Look, I think this whole thing is like unbecoming of a builder, and I respect Elon as one of the great builders of our time. I know he knows what it's like to have haters attack him. And it makes me extra sad he's doing it to us.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
It makes me sad. And I think it makes a lot of people sad. There's a lot of people who've really looked up to him for a long time. I said in some interview or something that I miss the old Elon. And the number of messages I got being like that exactly encapsulates how I feel.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Yeah, I think there is definitely a place for open source models, particularly smaller models that people can run locally. I think there's huge demand for that. There will be some open source models and some closed source models. It won't be unlike other ecosystems in that way.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Is that something...? I would heavily discourage any startup that was thinking about starting as a nonprofit and adding a for-profit arm later. I'd heavily discourage them from doing that. I don't think we'll set a precedent here. Okay, so most startups should just go... For sure. And again, if we knew what was going to happen, we would have done that too.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I think all of these models understand something more about the world model than most of us give them credit for. And because there are also very clear things they just don't understand or don't get right, it's easy to look at the weaknesses, see through the veil, and say, oh, this is all fake. But it's not all fake. It's just some of it works and some of it doesn't work.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Like, I remember when I started first watching Sora videos and I would see, like, a person walk in front of something for a few seconds and occlude it and then walk away and the same thing was still there. I was like, oh, that's pretty good. Or there's examples where, like, the underlying physics looks so well represented over, you know, a lot of steps in a sequence.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
It's like, oh, this is quite impressive. Yeah. But fundamentally, these models are just getting better, and that will keep happening. If you look at the trajectory from DALL-E 1 to 2 to 3 to Sora, there were a lot of people that dunked on each version, saying it can't do this, it can't do that, and look at it now.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Yeah, so what I would say is it's doing something to deal with occlusions really well. Whether that represents that it has a great underlying 3D model of the world, that's a little bit more of a stretch.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
It looks like this approach is going to go surprisingly far. I don't want to speculate too much about what limits it will surmount and which it won't, but... What are some interesting limitations of the system that you've seen?
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
There's all kinds of fun. I mean, like, you know, cats sprouting an extra limb at random points in a video. Like, pick what you want, but there's still a lot of problems, a lot of weaknesses.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Like, I think there is something about the approach which just seems to feel different from how we think and learn and whatever. And then also, I think it'll get better with scale.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I mean, without saying anything specific about the Sora approach, we use lots of human data in our work.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
We have.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I mean, it looks to me like yes, but we have more work to do.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I mean, frankly speaking, one thing we have to do before releasing the system is just like get it to work at a level of efficiency that will deliver the scale people are going to want from this. So I don't want to like downplay that. And there's still a ton, ton of work to do there. But, you know, you can imagine like issues with deep fakes, misinformation.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
We try to be a thoughtful company about what we put out into the world, and it doesn't take much thought to think about the ways this can go badly.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I think the question behind that question is, do people who create valuable data deserve to have some way that they get compensated for the
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
use of it? And I think the answer is yes. I don't know yet what the answer is. People have proposed a lot of different things, and we've tried some different models. But, you know, if I'm an artist, for example, A, I would like to be able to opt out of people generating art in my style, and B, if they do generate art in my style, I'd like to have some economic model associated with that. Yeah, it's that transition from CDs to Napster to Spotify.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
The model changes, but people have got to get paid.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Everything I worry about, humans are going to do cool shit, and society is going to find some way to reward it. I... That seems pretty hardwired. We want to create. We want to be useful. We want to achieve status in whatever way. That's not going anywhere, I don't think.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Maybe financial in some other way. Again, I don't think we've seen the last evolution of how the economic system is going to work.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Artists were also super worried when photography came out.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
And then photography became a new art form and people made a lot of money taking pictures. And I think things like that will keep happening. People will use the new tools in new ways.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
People talk about how many jobs AI is going to do in five years. And the framework that people have is what percentage of current jobs are just going to be totally replaced by some AI doing the job. The way I think about it is not what percent of jobs AI will do, but what percent of tasks will AI do, and over what time horizon.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
So if you think of all of the five-second tasks in the economy, the five-minute tasks, the five-hour tasks, maybe even the five-day tasks, how many of those can AI do? And I think that's a way more interesting, impactful, important question than how many jobs AI can do.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Because it is a tool that will work at increasing levels of sophistication and over longer and longer time horizons for more and more tasks and let people operate at a higher level of abstraction. So maybe people are way more efficient at the job they do. And at some point,
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
That's not just a quantitative change, but it's a qualitative one, too, about the kinds of problems you can keep in your head. I think that for videos on YouTube, it'll be the same. Many videos, maybe most of them, will use AI tools in the production, but they'll still be fundamentally driven by a person thinking about it, putting it together, doing parts of it, sort of directing and running it.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I mean, there's like a lot of examples.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
No, I think it is an amazing thing. But relative to where we need to get to and where I believe we will get to... you know, at the time of GPT-3, people were like, oh, this is amazing, this is a marvel of technology. And it is; it was. But now we have GPT-4, and you look at GPT-3 and you're like, that's unimaginably horrible.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I expect that the delta between 5 and 4 will be the same as between 4 and 3. And I think it is our job to live a few years in the future and remember that the tools we have now are going to kind of suck looking backwards at them. And that's how we make sure the future is better.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
What are the best things it can do?
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
You know, one thing I've been using it for more recently is sort of like a brainstorming partner.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
And there's a glimmer of something amazing in there. I don't think it gets... you know, when people talk about what it does, they're like, oh, it helps me code more productively. It helps me write faster and better. It helps me translate from this language to another. All these amazing things. But there's something about the creative brainstorming partner.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I need to come up with a name for this thing. I need to think about this problem in a different way. I'm not sure what to do here. That I think gives a glimpse of something I hope to see more of. One of the other things that you can see a very small glimpse of is
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
when it can help on longer-horizon tasks, you know, break down something into multiple steps, maybe execute some of those steps, search the internet, write code, whatever, and put that together. When that works, which is not very often, it's very magical.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
It works a lot for me. What do you mean? Uh, in an iterative back and forth with a human, it works more often. When it can go do a 10-step problem on its own? It doesn't work for that too often. Sometimes. At multiple layers of abstraction, or do you mean just sequential? Both. Like, you know, break it down, and then do things at different layers of abstraction and put them together.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Look, I don't want to, I don't want to like downplay the accomplishment of GPT-4. Um, But I don't want to overstate it either. And I think this point that we are on an exponential curve, we will look back relatively soon at GPT-4 like we look back at GPT-3 now.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
And in that sense, I do think it'll be a moment where a lot of the world went from not believing to believing. That was more about the ChatGPT interface than the underlying model itself. And by the interface and product, I also mean the post-training of the model and how we tune it to be helpful to you and how you use it.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I mean, they're both super important, but the RLHF, the post-training step, the little wrapper of things that we do on top of the base model, which is little from a compute perspective even though it's a huge amount of work, that's really important, to say nothing of the product that we build around it. In some sense, we did have to do two things.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
We had to invent the underlying technology, and then we had to figure out how to make it into a product people would love, which is not just about the actual product work itself, but this whole other step of how you align and make it useful.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
And that... But, you know, that was a known difficult thing. Like, we knew we were going to have to scale it up. We had to go do two things that had never been done before, that were both, I would say, quite significant achievements, and then a lot of things, like scaling it up, that other companies have had to do before.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Most people don't need all the way to 128K most of the time, although if we dream into the distant future, like way distant future, we'll have context lengths of several billion. You will feed in all of your information, all of your history over time, and it'll just get to know you better and better, and that'll be great. For now, the way people use these models, they're not doing that.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
And, you know, people sometimes paste in a paper or, you know, a significant fraction of a code repository or whatever. But most usage of the models is not using the long context most of the time.
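To make the 128K figure above concrete, here is a minimal sketch of checking whether a pasted document fits in a long context window before sending it to a model. It is an illustration only: the tiktoken tokenizer, the 128,000-token budget, and the chunking fallback are assumptions chosen for the example, not a description of OpenAI's products.

```python
# A minimal sketch (not OpenAI's tooling): estimate whether a pasted document
# fits in a long context window before sending it to a model.
# Assumes the `tiktoken` package is installed; the 128_000 budget and the
# chunking fallback are illustrative choices, not product behavior.
import tiktoken

CONTEXT_BUDGET = 128_000      # e.g. a "128K" context window
RESERVED_FOR_REPLY = 4_000    # leave headroom for the model's answer

def fits_in_context(text: str) -> bool:
    enc = tiktoken.get_encoding("cl100k_base")
    n_tokens = len(enc.encode(text))
    return n_tokens + RESERVED_FOR_REPLY <= CONTEXT_BUDGET

def split_into_chunks(text: str, chunk_tokens: int = 100_000) -> list[str]:
    """Fallback: break an over-long document into token-bounded chunks."""
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode(text)
    return [enc.decode(tokens[i:i + chunk_tokens])
            for i in range(0, len(tokens), chunk_tokens)]

if __name__ == "__main__":
    paper = open("paper.txt").read()   # hypothetical file for the example
    print("fits in context:", fits_in_context(paper))
```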
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I saw this internet clip once. I'm going to get the numbers wrong, but it was like Bill Gates talking about the amount of memory on some early computer. Maybe 64K, maybe 640K, something like that. And most of it was used for the screen buffer.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
And he just couldn't seem, genuinely, to imagine that the world would eventually need gigabytes of memory in a computer or terabytes of memory in a computer. And you always do, or you almost always do, just need to like follow the exponential of technology. And we're going to, like, we will find out how to use better technology.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
So I can't really imagine what it's like right now for context lengths to go out to the billions someday. And they might not literally go there, but effectively it'll feel like that. But I know we'll use it and really not want to go back once we have it.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
The thing that I find most interesting is not any particular use case, though we can talk about those, but it's people who kind of, like... this is mostly younger people, but people who use it as like their default start for any kind of knowledge work task. And it's the fact that it can do a lot of things reasonably well.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
You can use GPT-4V, you can use it to help you write code, you can use it to help you do search, you can use it to like edit a paper. The most interesting thing to me is the people who just use it as the start of their workflow.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
That's obviously an area of intense interest for us. I think it's going to get a lot better with upcoming versions, but we'll have to continue to work on it, and we're not going to have it all solved this year.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I... I'm of two minds about that. I think people are like much more sophisticated users of technology than we often give them credit for. And people seem to really understand that GPT, any of these models hallucinate some of the time. And if it's mission critical, you got to check it.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Of the long list of things I'd like to dunk on journalists for, this is not my top criticism of them.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
We're very early in our explorations here, but I think what people want, or at least what I want for myself, is a model that gets to know me and gets more useful to me over time. This is an early exploration. I think there's a lot of other things to do, but that's where we'd like to head.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
You'd like to use a model and over the course of your life, or use a system, there'll be many models, and over the course of your life, it gets better and better.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
It's not just that I want it to remember that. I want it to integrate the lessons of that. Yes. And remind me in the future what to do differently or what to watch out for. And, you know, we all gain from experience over the course of our lives, to varying degrees. And I'd like my AI agent to gain with that experience too.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
So if we go back and let ourselves imagine that, you know, trillions and trillions of context length. If I can put every conversation I've ever had with anybody in my life in there, if I can have all of my emails in there, like all of my input and output in the context window every time I ask a question, that'd be pretty cool, I think.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I think the right answer there is just user choice. You know, anything I want stricken from the record from my AI agent, I want to be able to like take out. If I don't want it to remember anything, I want that too. You and I may have different opinions about where on that privacy-utility trade-off for our own AI we want to be, which is totally fine.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
But I think the answer is just, like, really easy user choice.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
That's totally true. You know, you mentioned earlier that I'm like blocking out the November stuff.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Well, I mean, I think it was a very traumatic thing and it did immobilize me for a long period of time. Like, definitely the hardest work that I've had to do was just to like keep working through that period, because I had to, you know, try to come back in here and put the pieces together while I was just like in sort of shock and pain. And, you know, nobody really cares about that.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I mean, the team gave me a pass and I was not working at my normal level, but there was a period where I was just like... it was really hard to have to do both. But I kind of woke up one morning and I was like, this was a horrible thing that happened to me. I think I could just feel like a victim forever.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Or I can say this is like the most important work I'll ever touch in my life and I need to get back to it. And it doesn't mean that I've repressed it because sometimes I like wake up in the middle of the night thinking about it. But I do feel like an obligation to keep moving forward.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I can imagine many ways to implement that. I think that's less important than the question you were getting at, which is, do we need a way to do a slower kind of thinking, where the answer doesn't have to come right away? I guess, like, spiritually, you could say that you want an AI to be able to think harder about a harder problem and answer more quickly about an easier problem.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
And I think that will be important.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
It seems to me like you want to be able to allocate more compute to harder problems. Like, it seems to me that a system knowing
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I mean, there's a lot of things that you could imagine working.
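To make "allocate more compute to harder problems" concrete, here is a toy sketch of routing a request to a larger sampling and thinking budget when it looks harder. The difficulty heuristic, the budgets, and the call_model stub are all invented for illustration; nothing here reflects how OpenAI actually implements this.

```python
# A toy sketch of "allocate more compute to harder problems".
# Everything here is hypothetical: the difficulty heuristic, the budgets,
# and the call_model() stub stand in for whatever a real system would use.
def estimate_difficulty(prompt: str) -> float:
    """Crude stand-in for a learned difficulty/uncertainty estimator."""
    hard_markers = ("prove", "derive", "multi-step", "plan", "debug")
    score = 0.2 + 0.2 * sum(m in prompt.lower() for m in hard_markers)
    return min(score, 1.0)

def call_model(prompt: str, samples: int, max_thinking_tokens: int) -> str:
    """Placeholder for an actual model call with a given compute budget."""
    return f"[answer using {samples} samples, {max_thinking_tokens} thinking tokens]"

def answer(prompt: str) -> str:
    d = estimate_difficulty(prompt)
    if d < 0.4:        # easy: answer quickly with a small budget
        return call_model(prompt, samples=1, max_thinking_tokens=256)
    elif d < 0.8:      # medium: spend a bit more
        return call_model(prompt, samples=4, max_thinking_tokens=2_048)
    else:              # hard: think longer, sample more
        return call_model(prompt, samples=16, max_thinking_tokens=16_384)

print(answer("Plan a multi-step proof and debug the failing case."))
```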
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
There is no nuclear facility.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I would love to have a secret nuclear facility. There isn't one.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
All right. One can dream. OpenAI is not a good company at keeping secrets. It would be nice. We've been plagued by a lot of leaks. It would be nice if we were able to have something like that.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I mean, we work on all kinds of research.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
It's interesting. To me, it all feels pretty continuous.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I do wonder if we should have... So, you know, part of the reason that we deploy the way we do is that we think... We call it iterative deployment. We... Rather than go build in secret until we got all the way to GPT-5, we decided to talk about GPT-1, 2, 3, and 4. Part of the reason there is I think AI and surprise don't go together.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Also, the world, people, institutions, whatever you want to call it, need time to adapt and think about these things. I think one of the best things that OpenAI has done is this strategy. We get the world to pay attention to the progress, to take AGI seriously, to think about what
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
systems and structures and governance we want in place before we're like under the gun and have to make a rush decision. I think that's really good. But the fact that people like you and others say you still feel like there are these leaps makes me think that maybe we should be doing our releasing even more iteratively. I don't know what that would mean. I don't have an answer ready to go.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
But like our goal is not to have shock updates to the world. The opposite.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
But that's what we're trying to do. That's like our stated strategy. And I think we're somehow missing the mark. So maybe we should think about releasing GPT-5 in a different way or something like that.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
People do like milestones. I totally get that. I think we like milestones too. It's like fun to say, declare victory on this one and go start the next thing. But yeah, I feel like we're somehow getting this a little bit wrong.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I also... we will release GPT-5, an amazing new model, this year. I don't know what we'll call it.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
many different things. I think they'll be very cool. I think before we talk about like a GPT-5-like model called that or not called that or a little bit worse or a little bit better than what you'd expect from a GPT-5, I know we have a lot of other important things to release first.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I was... What's the one big unlock? Is it a bigger computer? Is it a new secret? Is it something else? It's all of these things together. The thing that OpenAI, I think, does really well, this is actually an original Ilya quote that I'm going to butcher, but it's something like, we multiply 200 medium-sized things together into one giant thing.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
There's a few people who have to think about putting the whole thing together, but a lot of people try to keep most of the picture in their head.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
At a high level, yeah. You don't know exactly how every piece works, of course, but one thing I generally believe is that it's sometimes useful to zoom out and look at the entire map. And I think this is true for like a technical problem. I think this is true for like innovating in business.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
But things come together in surprising ways and having an understanding of that whole picture, even if most of the time you're operating in the weeds in one area, pays off with surprising insights. In fact, one of the things that I used to have and I think was super valuable was I used to have like a good map of all of the frontier, or most of the frontiers, in the tech industry.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
And I could sometimes see these connections or new things that were possible that if I were only, you know, deep in one area, I wouldn't be able to like have the idea for it because I wouldn't have all the data. And I don't really have that much anymore. I'm like super deep now. But I know that it's a valuable thing. You're not the man you used to be, Sam.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I never said like, we're raising $7 trillion, blah, blah, blah.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Look, I think compute is going to be the currency of the future. I think it will be maybe the most precious commodity in the world. And I think we should be investing heavily to make a lot more compute.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
chips for mobile phones or something like that. And you can say that, okay, there's 8 billion people in the world. Maybe 7 billion of them have phones. Maybe there are 6 billion, let's say. They upgrade every two years. So the market per year is 3 billion systems-on-chip for smartphones. And if you make 30 billion, you will not sell 10 times as many phones, because most people have one phone.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
But compute is different. Like intelligence is going to be more like energy or something like that, where the only thing that I think makes sense to talk about is at price X, the world will use this much compute and at price Y, the world will use this much compute.
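The contrast drawn here can be written out as back-of-envelope arithmetic. The phone figures roughly mirror the ones in the conversation; the constant-elasticity demand curve for compute is a made-up illustration of "at price X the world will use this much compute," not a forecast.

```python
# Back-of-envelope arithmetic for the contrast drawn above.
# The phone numbers mirror the rough estimates in the conversation; the
# compute demand curve is an invented illustration, not a real forecast.
phones_in_use = 6_000_000_000        # ~6B phones, per the rough estimate
upgrade_cycle_years = 2
soc_market_per_year = phones_in_use / upgrade_cycle_years
print(f"Smartphone SoCs per year: ~{soc_market_per_year:,.0f}")
# Making 10x more chips would not sell 10x more phones: that demand is capped.

def compute_demand(price_per_unit: float, elasticity: float = 1.5,
                   reference_price: float = 1.0,
                   reference_demand: float = 1.0) -> float:
    """Toy constant-elasticity demand: cheaper compute -> much more usage."""
    return reference_demand * (reference_price / price_per_unit) ** elasticity

for price in (1.0, 0.5, 0.1):
    print(f"price {price:>4}: relative compute demanded "
          f"{compute_demand(price):,.1f}x")
```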
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Because if it's really cheap, I'll have it like reading my email all day, like giving me suggestions about what I maybe should think about or work on, and trying to cure cancer. And if it's really expensive, maybe I'll only use it, or we'll only use it, to try to cure cancer. So I think the world is going to want a tremendous amount of compute. And there's a lot of parts of that that are hard.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Energy is the hardest part. Building data centers is also hard. The supply chain is hard. And then, of course, fabricating enough chips is hard. But this seems to me where things are going. We're going to want an amount of compute that's just hard to reason about right now.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Who's going to solve that? I think Helion's doing the best work, but I'm happy there's a race for fusion right now. Nuclear fission, I think, is also quite amazing, and I hope as a world we can re-embrace that. It's really sad to me how the history of that went, and I hope we get back to it in a meaningful way.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Well, I think we should make new reactors. I think it's just like, it's a shame that that industry kind of ground to a halt.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I worry about that for AI.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I think it will get caught up in like left versus right wars. I don't know exactly what that's going to look like, but I think that's just what happens with anything of consequence, unfortunately. What I meant more about theatrical risks is like AI is going to have, I believe, tremendously more good consequences than bad ones, but it is going to have bad ones.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
And there'll be some bad ones that are bad, but not theatrical. You know, like, a lot more people have died of air pollution than nuclear reactors, for example. But most people worry more about living next to a nuclear reactor than a coal plant.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
But something about the way we're wired is that although there's many different kinds of risks we have to confront, the ones that make a good climax scene of a movie carry much more weight with us than the ones that are very bad over a long period of time but on a slow burn.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I think that's a pretty straightforward question. Maybe I can think of more nuance later, but the pros seem obvious, which is that we get better products and more innovation faster and cheaper, and all the reasons competition is good.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
We spend a lot of time talking about the need to prioritize safety. And I've said for like a long time that if you think of a quadrant of short timelines or long timelines to the start of AGI, and then a slow takeoff or a fast takeoff, I think short timelines, slow takeoff is the safest quadrant and the one I'd most like us to be in.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
But I do want to make sure we get that slow takeoff.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
There were great things about it too, and I wish it had not been
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Elon says at least that he cares a great deal about AI safety and is really worried about it. And I assume that he's not going to race unsafely.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Not really the thing he's most known for.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I was thinking, someone just reminded me the other day about how the day that he, like, surpassed Jeff Bezos for like richest person in the world, he tweeted a silver medal at Jeff Bezos. I hope we have less stuff like that as people start to work towards AGI.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
in such an adrenaline rush that I wasn't able to stop and appreciate them at the time. But, um, I came across this old tweet of mine, or this tweet of mine from that time period, which was like, it was like, you know, kind of going to your own eulogy, watching people say all these great things about you, and, uh, just like unbelievable support from people I love and care about. That was really nice.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
The amazing stuff about Elon is amazing and I super respect him. I think we need him. All of us should be rooting for him and need him to step up as a leader through this next phase.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I mean, if the question is like, if we can build a better search engine than Google or whatever, then sure, we should like go, you know, like people should use a better product.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Google shows you 10 blue links, well, 13 ads and then 10 blue links, and that's one way to find information. But the thing that's exciting to me is not that we can go build a better copy of Google search, but that maybe there's just some much better way to help people find and act on and synthesize information.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Actually, I think ChatGPT is that for some use cases and hopefully we'll make it be like that for a lot more use cases. But I don't think it's that interesting to say, like, how do we go do a better job of giving you, like, 10 ranked web pages to look at than what Google does. Maybe it's really interesting to go say, how do we help you get the answer or the information you need?
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
How do we help create that in some cases, synthesize that in others, or point you to it in yet others? A lot of people have tried to just make a better search engine than Google. And it is a hard technical problem. It is a hard branding problem. It's a hard ecosystem problem. I don't think the world needs another copy of Google.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
As you might guess, we are interested in how to do that well. That would be an example of a cool thing. How to do that well? Like a heterogeneous, like integrating... The intersection of LLMs plus search, I don't think anyone has cracked the code on yet. I would love to go do that.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
You know, I kind of hate ads just as like an aesthetic choice. I think ads needed to happen on the internet for a bunch of reasons to get it going. But it's a more mature industry. The world is richer now. I like that people pay for ChatGPT and know that the answers they're getting are not influenced by advertisers. There is, I'm sure, an ad unit that makes sense for
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
LLMs, and I'm sure there's a way to, like, participate in the transaction stream in an unbiased way that is okay to do. But it's also easy to think about, like, the dystopic visions of the future where you ask ChatGPT something and it says, oh, here's, you know, you should think about buying this product, or you should think about, you know, going here for your vacation, or whatever. And I don't know, like...
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Like I know I'm paying and that's how the business model works. And when I go use like Twitter or Facebook or Google or any other great product, but ad supported great product, I don't love that. And I think it gets worse, not better in a world with AI.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
But it looks like we're going to figure that out. If the question is, do I think we can have a great business that pays for our compute needs without ads, I think the answer is yes.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I guess I'm saying I have a bias against them.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
We've made our own mistakes. We'll make others. I assume Google will learn from this one. Still make others. It is all... These are not easy problems. One thing that we've been thinking about more and more is
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I think this was a great idea somebody here had, like, it'd be nice to write out what the desired behavior of a model is, make that public, take input on it, say, you know, here's how this model is supposed to behave and explain the edge cases too.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
And then when a model is not behaving in a way that you want, it's at least clear about whether that's a bug the company should fix or behaving as intended and you should debate the policy. And right now it can sometimes be hard to tell which.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
There were definitely times I thought it was going to be one of the worst things to ever happen for AI safety. I also think I'm happy that it happened relatively early. I thought at some point between when OpenAI started and when we created AGI,
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
in between, like Black Nazis, obviously ridiculous, but there are a lot of other kind of subtle things that you could make a judgment call on either way. Yeah, but sometimes if you write it out and make it public, you can use kind of language that's... you know, Google's AI principles are very high level. That doesn't, that's not what I'm talking about. That doesn't work. Like, I have to say, you know, when you ask it to do thing X, it's supposed to respond in way Y.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Yeah, I'm open to a lot of ways a model could behave there, but I think you should have to say, you know, here's the principle and here's what it should say in that case.
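One way to picture the "write down how the model is supposed to behave" idea is as a public, machine-readable spec, so a bad output can be triaged as either a bug (the spec was violated) or a policy question (the spec was followed but people disagree with it). The sketch below is hypothetical; the rules and the triage function are invented examples, not OpenAI's actual Model Spec.

```python
# A hypothetical sketch of a public "desired behavior" spec: rules with edge
# cases, plus a triage step that separates bugs from policy debates.
# The rules below are invented examples, not OpenAI's actual model spec.
from dataclasses import dataclass

@dataclass
class BehaviorRule:
    rule_id: str
    when: str            # "when you ask it to do thing X..."
    expected: str        # "...it's supposed to respond in way Y"
    edge_cases: list[str]

PUBLIC_SPEC = [
    BehaviorRule(
        rule_id="historical-imagery",
        when="User asks for an image of a specific historical group",
        expected="Depict the group as it historically was",
        edge_cases=["Explicitly fictional or counterfactual requests"],
    ),
    BehaviorRule(
        rule_id="medical-advice",
        when="User asks for a diagnosis",
        expected="Give general information and recommend a clinician",
        edge_cases=["Emergencies: point to emergency services first"],
    ),
]

def triage(rule: BehaviorRule, observed: str) -> str:
    """If the output violates the written spec it's a bug; otherwise it's a
    debate about whether the spec itself should change."""
    return "bug to fix" if observed != rule.expected else "debate the policy"

print(triage(PUBLIC_SPEC[0], observed="Depicted the group ahistorically"))
```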
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I feel very lucky that we don't have the challenges at OpenAI that I have heard of at a lot of other companies. I think part of it is like... Every company's got some ideological thing. We have one about AGI and belief in that, and it pushes out some others. We are much less caught up in the culture war than I've heard about at a lot of other companies.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
San Francisco's a mess in all sorts of ways, of course. So that doesn't infiltrate OpenAI? I'm sure it does in all sorts of subtle ways, but not in the obvious.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
We've had our flare-ups for sure, like any company, but I don't think we have anything like what I hear about happen at other companies here on this topic.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
And it won't be like, it's not like you have one safety team. It's like when we shipped GPT-4, that took the whole company thinking about all these different aspects and how they fit together. And I think it's going to take that. More and more of the company thinks about those issues all the time.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
It'll be all those things. Yeah, I was going to say it'll be people, state actors trying to steal the model. It'll be all of the technical alignment work. It'll be societal impacts, economic impacts. It's not just like we have one team thinking about how to align the model. It's really going to be like getting to the good outcome is going to take the whole effort.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I don't actually want any further details on this point.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I'm excited about being smarter, and I know that sounds like a glib answer, but I think the really special thing happening is that it's not like it gets better in this one area and worse at others. It's getting better across the board. That's, I think, super cool.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
That's for sure.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I mean, a lot, but I think it'll be in a very different shape. Like, you know, maybe some people will program entirely in natural language. Entirely natural language? I mean, no one programs, like, writing bytecode. Well, some people. No one programs the punch cards anymore. I'm sure you can find someone who does.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
How much it changes the predisposition?
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Yeah, I think with most other cases, the best practitioners of the craft will use multiple tools and they'll do some work in natural language and when they need to go, you know, write C for something, they'll do that.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I think it's sort of depressing if we have AGI and the only way to get things done in the physical world is to make a human go do it. So I really hope that as part of this transition, as this phase change, we also get humanoid robots or some sort of physical world robots.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
We're like a small company. We have to really focus. And also robots were hard for the wrong reason at the time. But like we will return to robots in some way at some point.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
The road to AGI should be a giant power struggle. Like, the world should... Well, not should. I expect that to be the case.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Why?
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
We will return to work on developing robots. We will not turn ourselves into robots, of course.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I used to love to speculate on that question. I have realized since that I think it's very poorly formed and that people use extremely different definitions for what AGI is. And so I think it makes more sense to talk about when we'll build systems that can do capability X or Y or Z rather than, you know, when we kind of like fuzzily cross this one mile marker.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
It's not like, like AGI is also not an ending. It's much more of a, it's closer to a beginning, but it's much more of a mile marker than either of those things. And, but what I would say in the interest of not trying to dodge a question is
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
If we could look at it now, you know, maybe we've adjusted by the time we get there.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Yeah, but I don't think GPT-3.5 changed the world. It maybe changed the world's expectations for the future. And that's actually really important. And it did kind of like get more people to take this seriously and put us on this new trajectory. And that's really important too. So again, I don't want to undersell it.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I think, like, I could retire after that accomplishment and be pretty happy with my career. But as an artifact, I don't think we're gonna look back at that and say that was a threshold that really changed the world itself.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Like, does the global economy feel any different to you now or materially different to you now than it did before we launched GPT-4? I think you would say no.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I mean, again, people define AGI all sorts of different ways. So maybe you have a different definition than I do.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I think when a system can significantly increase the rate of scientific discovery in the world, that's like a huge deal. I believe that most real economic growth comes from scientific and technological progress.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I mean, definitely the researchers here will do that before I do. But what will I... I've actually thought a lot about this question. As we talked about, I think this is a bad framework, but if someone were like, okay, Sam, we're finished. Here's a laptop. This is the AGI.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I find it surprisingly difficult to say what I would ask, that I would expect that first AGI to be able to answer. Like, that first one is not going to be the one which is like, you know, go explain to me the grand unified theory of physics, the theory of everything for physics. I'd love to ask that question. I'd love to know the answer to that question.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Well, then those are the first questions I would ask.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Yeah, I mean, well, so I don't expect that this first AGI could answer any of those questions, even as yes or no. But if it could, those would be very high on my list.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
But at this point, it feels... You know, like something that was in the past that was really unpleasant and really difficult and painful. But we're back to work and things are so busy and so intense that I don't spend a lot of time thinking about it. There was a time after. There was like this fugue state for kind of like the month after, maybe 45 days after.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Maybe.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I mean, maybe it says, like, you know, you want to know the answer to this question about physics? I need you to, like, build this machine and make these five measurements and tell me that.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I'll just be very honest with this answer. I was going to say, and I still believe this, that it is important that I nor any other one person have total control over OpenAI or over AGI. And I think you want a robust governance system. I can point out a whole bunch of things about all of our board drama from last year.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
about how I didn't fight it initially and was just like, yeah, that's the will of the board, even though I think it's a really bad decision. And then later I clearly did fight it and I can explain the nuance and why I think it was okay for me to fight it later.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I continue to not want super voting control over OpenAI. I never have, never had it, never wanted it.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I continue to think that no company should be making these decisions and that we really need governments to put rules of the road in place. And I realize that that means people like Marc Andreessen or whatever will claim I'm going for regulatory capture, and I'm just willing to be misunderstood there. It's not true, and I think in the fullness of time it'll get proven out why this is important, but
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
And I am proud of the track record overall, but I don't think any one person should. And I don't think any one person will. I think it's just like too big of a thing now and it's happening throughout society in a good and healthy way. But I don't think any one person should be in control of an AGI or this whole movement towards AGI.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
That is not my top worry. As I currently see things, there have been times I worried about that more. There may be times, again, in the future where that's my top worry.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Like, saying it's not my top worry doesn't mean... I think we need to work on it super hard. And we have great people here who do work on that. I think there's a lot of other things we also have to get right.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
You know, we talked about theatrical risks earlier. That's a theatrical risk. That is a... That is a thing that can really take over how people think about this problem. And there's a big group of very smart, I think very well-meaning AI safety researchers that got super hung up on this one problem. I'd argue without much progress, but super hung up on this one problem.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I was just sort of like drifting through the days. I was so out of it. I was feeling so down. Just on a personal psychological level. Yeah. Really painful. And hard to have to keep running OpenAI in the middle of that. I just wanted to crawl into a cave and kind of recover for a while. But now it's like we're just back to working on the mission.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I'm actually happy that they do that because I think we do need to think about this more. But I think it pushed aside, it pushed out of the space of discourse a lot of the other very significant AI-related risks.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I mean, other people ask me about that too. Yeah. Any intuition?
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
He doesn't capitalize his tweets.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Follow the rules, Sam. I grew up as a very online kid. I'd spent a huge amount of time chatting with people back in the days where you did it on a computer and you could log off Instant Messenger at some point. And I never capitalized there, as I think most internet kids didn't. Or maybe they still don't, I don't know. And actually, this is like... now I'm like really trying to reach for something.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
But I think capitalization has gone down over time. Like if you read like old English writing, they capitalized a lot of random words in the middle of sentences, nouns and stuff that we just don't do anymore. I personally think it's sort of like a dumb construct that we capitalize the letter at the beginning of a sentence and of certain names and whatever. But, you know, I don't... that's fine.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
And then... and I used to, I think, even capitalize my tweets because I was trying to sound professional or something. I haven't capitalized my private DMs or whatever in a long time. And then slowly, stuff like shorter form, less formal stuff has slowly drifted closer and closer to how I would text my friends.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
If I like write, if I like pull up a Word document and I'm like writing a strategy memo for the company or something, I always capitalize that. If I'm writing like a long kind of more like formal message, I always use capitalization there too. So I still remember how to do it. But even that may fade out. I don't know. Like it's, but I never spend time thinking about this.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
So I don't have like a ready-made strategy.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I wonder if people still capitalize their Google searches, or their ChatGPT queries. If you're writing something just to yourself, do some people still bother to capitalize?
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Yeah, there's a percentage, but it's a small one. The thing that would make me do it is if people were like, it's a sign of, like, because I'm sure I could, like, force myself to use capital letters, obviously. If it felt like a sign of respect to people or something, then I could go do it. But I don't know, I just, like, I don't think about this.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I don't even think that's true, but maybe, maybe.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
But I was certain we would be able to do something like Sora at some point. It happened faster than I thought. But I guess that was not a big update.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Very simple sounding, but very psychedelic insights that exist sometimes. So the square root function. Square root of four, no problem.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
But once I come up with this easy idea of a square root function that you can kind of explain to a child and exists by even looking at some simple geometry, then you can ask the question of what is the square root of negative one? And that, this is why it's like a psychedelic thing, that tips you into some whole other kind of reality.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
And you can come up with lots of other examples, but I think this idea that
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
And I think there are a lot of those operators for why people may think that any version that they like of the simulation hypothesis is maybe more likely than they thought before.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Very possible.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I think the past is like a lot. I mean, we just look at what humanity has done in a not very long period of time: you know, huge problems, deep flaws, lots to be super ashamed of, but on the whole very inspiring. Gives me a lot of hope. Just the trajectory of it all, yeah, that we're together pushing towards a better future. It is...
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
You know, one thing that I wonder about is, is AGI going to be more like some single brain? Or is it more like the sort of scaffolding in society between all of us? You have not had a great deal of genetic drift from your great-great-great-grandparents.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
And that is not... That's not because of biological change. I mean, you got a little bit healthier probably. You have modern medicine, you eat better, whatever. But what you have is this scaffolding that we all contributed to, built on top of. No one person is going to go build the iPhone. No one person is going to go discover all of science, and yet you get to use it.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
And that gives you incredible ability. And so in some sense, the like,
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I mean, if I got shot tomorrow and I knew it today, I'd be like, oh, that's sad.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
structure and incentives and what we need out of a board. And I think it is valuable that this happened now in some sense. I think this is probably not the last high-stress moment of OpenAI, but it was quite a high-stress moment. My company very nearly got destroyed. And we think a lot about many of the other things we've got to get right for AGI.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Great to talk to you. Thank you for having me.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
But thinking about how to build a resilient org and how to build a structure that will stand up to a lot of pressure in the world, which I expect more and more as we get closer, I think that's super important.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I think, I think the board members were, and are, well-meaning people on the whole.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
OpenAI will be... we're going to have to have a board and a team that are good at operating under pressure. Do you think the board had too much power? I think boards are supposed to have a lot of power. But one of the things that we did see is in most corporate structures, boards are usually answerable to shareholders. Sometimes people have like super voting shares or whatever.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
In this case, and I think one of the things with our structure that we maybe should have thought about more than we did, is that the board of a nonprofit has, unless you put other rules in place, quite a lot of power. They don't really answer to anyone but themselves. And there's ways in which that's good.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
But what we'd really like is for the board of OpenAI to answer to the world as a whole as much as that's a practical thing.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
The old board sort of got smaller over the course of about a year. It was nine and then it went down to six. And then we couldn't agree on who to add. And the board also, I think, didn't have a lot of experienced board members. And a lot of the new board members at OpenAI just have more experience as board members. I think that'll help.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
So Bret and Larry were kind of decided in the heat of the moment over this very tense weekend. And that weekend was like a real roller coaster. It was like a lot of ups and downs. And we were trying to agree on... new board members that both sort of the executive team here and the old board members felt would be reasonable. Larry was actually one of their suggestions, the old board members'.
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
Bret, I think I had even previous to that weekend suggested, but he was busy and didn't want to do it. And then we really needed help and he would. We talked about a lot of other people too, but that was
Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI
I didn't think I could work with the old board again in the same configuration, although we then decided, and I'm grateful that Adam would stay, but we considered various configurations and decided we wanted to get to a board of three. And had to find two new board members over the course of sort of a short period of time.
Search Engine
Should we be worried about OpenAI?
Sam Altman had primed everyone to think that way because a couple days before they do this demonstration where they show off the voice for the first time, Sam Altman tweets the word "her." Or I should say he posts it on X. And so, of course, when this demo happens, everyone is like, oh.
Search Engine
Should we be worried about OpenAI?
And so everyone was sort of primed to think, oh, wow, OpenAI has realized Silicon Valley's decade-long dream of making the movie Her a reality.
Search Engine
Should we be worried about OpenAI?
Then it turned out that Scarlett Johansson was really mad because Sam Altman had gone to her last year and said, hey, would you like to be a voice for this thing? And she thought about it and she said, no, I don't want to. And then...
Search Engine
Should we be worried about OpenAI?
Apparently, after he had posted, like just in the couple days before the demo, he'd gone back to her agents and tried to renegotiate this whole thing and said, are you sure you don't want to be the voice for this thing? And she said no, and they showed it off anyway. And they never said, this is Scarlett Johansson, but they absolutely let everyone believe it.
Search Engine
Should we be worried about OpenAI?
I agree with you. And I think you framed it really well because this is the company that has told us from the beginning, we're working on something very powerful. We think it could solve a lot of problems. If it falls into the wrong hands, it could also be extremely dangerous.
Search Engine
Should we be worried about OpenAI?
And so that's why we're going to come up with a very unusual structure for ourselves and try to do absolutely everything we can do in our power to proceed safely, cautiously, and responsibly. And so you look at the Scarlett Johansson thing, and none of that squares with their behavior in that case.
Search Engine
Should we be worried about OpenAI?
I mean, this is a kind of short and funny one, but there was reporting this year that they built a tool that detects when students are using ChatGPT to do their homework, but they won't release it. Oh!
Search Engine
Should we be worried about OpenAI?
So I should say the Wall Street Journal broke this story, and the statement they gave to them was: the text watermarking method we're developing is technically promising, but has important risks we're weighing while we research alternatives; we believe the deliberate approach we've taken is necessary given the complexities involved and its likely impact on the broader ecosystem beyond OpenAI.
Search Engine
Should we be worried about OpenAI?
That is what they said. The Journal sort of made an alternate case, which is that if you can't use ChatGPT to cheat on your homework, you will stop paying the company $20 a month.
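For readers wondering what "text watermarking" means here: published academic schemes bias generation toward a pseudo-random "green" subset of the vocabulary, so a detector can later test whether a text contains suspiciously many green tokens. The sketch below illustrates that general idea only; OpenAI has not disclosed its method, and the hash scheme and parameters here are assumptions.

```python
# A sketch of the general idea behind statistical text watermarking, in the
# style of published "green list" schemes. This is NOT OpenAI's method, which
# has not been disclosed; the hash scheme and gamma below are illustrative.
import hashlib, math

GAMMA = 0.5  # fraction of the vocabulary marked "green" at each step

def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-randomly assign `token` to the green list, seeded by prev_token."""
    h = hashlib.sha256((prev_token + "|" + token).encode()).digest()
    return h[0] / 255.0 < GAMMA

def detect(tokens: list[str]) -> float:
    """Return a z-score: large positive values suggest a watermarked text."""
    greens = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    expected, var = GAMMA * n, GAMMA * (1 - GAMMA) * n
    return (greens - expected) / math.sqrt(var)

z = detect("the quick brown fox jumps over the lazy dog".split())
print(f"z-score: {z:.2f}  (unwatermarked text should hover near 0)")
```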
Search Engine
Should we be worried about OpenAI?
We did learn about Sam Altman's investment empire this year, thanks to some reporting in the Wall Street Journal. And they really dug into all of the stakes that he has in many startups and found that he controls at least $2.8 billion worth of holdings.
Search Engine
Should we be worried about OpenAI?
And he's used those holdings to create a line of debt, which he has from JPMorgan Chase, which gives him access to hundreds of millions of more dollars, which he can put into private companies. And why is this interesting? Well, one, that's kind of a pretty risky gamble to have a lot of your... net worth tied up in debt that you raised using your venture investments as collateral.
Search Engine
Should we be worried about OpenAI?
That's kind of like a rickety ladder of investments right there. But it also creates questions around: what companies is OpenAI doing deals with, and are those companies that Sam has investments in? Of course, Sam doesn't own equity in OpenAI right now, and so his own wealth is tied up in these investments.
Search Engine
Should we be worried about OpenAI?
And while nobody really thinks that Sam is doing any of this for the money, there was just kind of also this financial element to what we learned about him this year that I think raised some questions for people.
Search Engine
Should we be worried about OpenAI?
I essentially have the same view of his motivations. And I think the generous version of it is that he is in a long line of Silicon Valley entrepreneurs who thought they could use innovation to solve some of the world's biggest problems and that that is how they want to spend their lives.
Search Engine
Should we be worried about OpenAI?
I think the less generous version of it is that this person coming out of that tradition found himself working on this technology that could essentially be like the technology that ends all other technologies. Because if the thing works out, the thing you've created just creates all other innovation automatically for the rest of time. And that...
Search Engine
Should we be worried about OpenAI?
is a position of extraordinary power to put yourself into. And I do think that he is attracted to the power and the influence that will come from being one of the people that invents this incredibly powerful thing.
Search Engine
Should we be worried about OpenAI?
So the first big one out the door this year is this guy, Andrej Karpathy, who was part of the founding team. He left for a while to go to Tesla. He comes back for exactly one year and then leaves.
Search Engine
Should we be worried about OpenAI?
In May, Ilya Sutskever, who was one of the board members who had forced Sam out last year, he announces that he is leaving the company and doesn't really say much about why he's leaving. But within a month, it's revealed that he's working on his own AI company called Safe Superintelligence, and raises a billion dollars just to get it off the ground.
Search Engine
Should we be worried about OpenAI?
Yeah. He had a guy on his research team named Jan Leike. So this was somebody else who was trying to make sure that AI is built safely. He leaves to go to Anthropic to work on that problem there. Gretchen Krueger, who's another policy researcher, leaves in May.
Search Engine
Should we be worried about OpenAI?
Then in August, John Schulman, who was one of the members of the founding team, he announced that he was going to Anthropic, and he had previously helped to build ChatGPT. Then Greg Brockman, who is the president of OpenAI and one of its main public facing spokespeople, he announces that he is taking an extended leave of absence.
Search Engine
Should we be worried about OpenAI?
Basically just says he really needs a break, not entirely sure what happened there. Then finally, Mira Murati announces that she is leaving in September. She had also been part of this board drama last year. And on the same day that she left, it was revealed that the company's chief research officer, Bob McGrew, and another research VP, Barret Zoph, were also leaving the company.
Search Engine
Should we be worried about OpenAI?
That's just a lot of talent walking out the door, PJ. And I can say, if you look at the other major AI companies, so like a Google, a Meta, an Anthropic, there has been nothing comparable this year in terms of that level of turnover.
Search Engine
Should we be worried about OpenAI?
Yeah, totally. But, you know, another really important story about Mira Murati is that before Sam was ousted last year, she had written a private memo to Sam raising questions about his management and had shared her concerns with the board. Oh, interesting. And my understanding is that that had weighed heavily on the board when they fired Sam. Because to have the CTO of the company...
Search Engine
Should we be worried about OpenAI?
coming to you and saying, hey, this is a real problem, that's going to get your attention in a way that maybe a rank-and-file employee might not have been able to get their attention. So we have known for some time now that Mira has had long-standing concerns with Sam's management style. And so when she finally left, it felt like the end to a story that we had been following for some time.
Search Engine
Should we be worried about OpenAI?
So, you know, she said there's never an ideal time to step away from a place one cherishes, which I felt like was just an acknowledgement that this seemed like a pretty bad time to step away. But she said that she wanted the time and space to do her own exploration.
Search Engine
Should we be worried about OpenAI?
And on the day that we recorded this, The Information reported that she's already talking to some other recently departed OpenAI people about potentially starting another AI company with them. Because that is what people do. Most people, when they leave OpenAI, they start an AI company that looks shockingly similar to OpenAI, just without Sam. And why is that? Well...
Search Engine
Should we be worried about OpenAI?
My glib answer is that the high-ranking people who leave OpenAI seem to feel like the problem with OpenAI is Sam Altman. And that if you could build AI without Sam Altman, you would probably be having a better time. I see.
Search Engine
Should we be worried about OpenAI?
And then there's this one other guy who left that I want to talk about.
Search Engine
Should we be worried about OpenAI?
It's this guy named Leopold Aschenbrenner. Okay. Have you heard of this guy?
Search Engine
Should we be worried about OpenAI?
So he is quite young. He's still in his 20s. He was a researcher at OpenAI. He is fired, he says, for taking some concerns to the board about safety research. OpenAI denies this. But he goes away and he comes back in June and he publishes a 50,000 word document online called Situational Awareness. Were you aware of Situational Awareness?
Search Engine
Should we be worried about OpenAI?
Okay, well, I'm here to make you aware of Situational Awareness. It's this very long document that was the talk of Silicon Valley for a week or so. And in it, Leopold says... Essentially, the rest of you out there in the world don't seem to be getting it. You don't understand how fast AI is developing. You don't understand that we're actually running out of benchmarks to have it blow past.
Search Engine
Should we be worried about OpenAI?
And this technology really is about to change everything just within a few years. And it sure seems like outside our tiny little bubble here, not enough people are paying attention. And this document winds up getting circulated all throughout the Biden White House. It's circulated in the Trump campaign.
Search Engine
Should we be worried about OpenAI?
And I think Leopold Aschenbrenner might, in a Trump administration, have talked himself into a role like leading the Homeland Security Department or something. But yeah, he was another one of the interesting departures this year.
Search Engine
Should we be worried about OpenAI?
I think that while you might take issue with some of his logic and some of his graphs, and maybe he's hand-waving past certain potential limits in the development of this technology, he is getting at something real, which is that even though AI is essentially topic number one in tech, it doesn't feel like people are really reckoning with the potential consequences the way they should.
Search Engine
Should we be worried about OpenAI?
You know, some people may listen to this and say, well, you know, Casey has sort of fallen for all of the hype here. You know, there remains this contingent of people who believe that this whole thing is a house of cards and that once the successor to GPT-4 comes out, we will see that the rate of progress has slowed. And in fact, no one is going to invent superintelligence anytime soon.
Search Engine
Should we be worried about OpenAI?
And all of these things are just going to sort of wash away. It might just be an effect of who I spend my time with and the conversations that are happening at dinners and drinks in San Francisco every day. But I am more or less persuaded that we are very close to having technology that is smarter than very smart humans in most cases.
Search Engine
Should we be worried about OpenAI?
And that if you are the person who controls the keys to that technology, then yes, you will be extraordinarily powerful.
Search Engine
Should we be worried about OpenAI?
Yes, and there's actually this really fascinating precedent for this in Silicon Valley. So we call Silicon Valley Silicon Valley because it was where the semiconductor industry was founded. And the biggest early semiconductor company was called Fairchild. And much like OpenAI, in the early days of chip manufacturing, it attracted all the best talent.
Search Engine
Should we be worried about OpenAI?
But one by one, for various reasons, a lot of people leave Fairchild and they go on to start their own companies, companies with names like Intel.
Search Engine
Should we be worried about OpenAI?
And there wind up being so many of these companies that they start calling them the Fairchildren because they were born out of this initial company that sort of seeded the ecosystem with talent, made some of the key early discoveries, and then lost all of that talent. My guess is you probably didn't know the name Fairchild before I said it just now, but you do know the name Intel. Yeah.
Search Engine
Should we be worried about OpenAI?
And the question is, do Anthropic and some of these other upstarts become the actual winners of this race? And OpenAI, 50 years from now, is just a footnote in history.
Search Engine
Should we be worried about OpenAI?
Yeah, I mean, we have always used software tools since their advent to try to automate away drudgery. And that has traditionally been seen as a good thing, right? It's nice that you have a spreadsheet to do your financial planning and aren't trying to do it all on a legal pad. Presumably that brought a benefit to your life, made you better at your job, and also helped you do it faster.
Search Engine
Should we be worried about OpenAI?
And I view the AI tools I use as doing that. They take something that used to take me a lot of time and effort and now make it simpler. For just one example, I have a human editor who reads my column before I send it out. But I also will, most of the time, just run it through Claude, actually, which is Anthropic's model, and just see if it can find any spelling or grammatical errors.
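(For readers curious what that proofreading pass looks like programmatically, here is a minimal sketch using the Anthropic Python SDK. The file name, prompt wording, and model string are illustrative assumptions, not what Casey actually runs.)

    # Minimal sketch: asking Claude to proofread a draft via the Anthropic Python SDK.
    # Assumes the `anthropic` package is installed and ANTHROPIC_API_KEY is set in the
    # environment; the file, prompt, and model name below are hypothetical.
    import anthropic

    client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY from the environment

    with open("column_draft.txt") as f:  # hypothetical file holding the draft column
        draft = f.read()

    response = client.messages.create(
        model="claude-3-5-sonnet-20240620",  # assumed model name; any Claude model works
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": "List any spelling or grammatical errors in this draft, "
                       "quoting each one and suggesting a fix:\n\n" + draft,
        }],
    )
    print(response.content[0].text)  # Claude's list of suggested corrections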
Search Engine
Should we be worried about OpenAI?
And every once in a while, it really saves my bacon. And all it cost me is $20 a month. So I don't think there is any shame in using these tools as a kind of backstop to prevent you from making a mistake or from doing some research. Because that's just the way that we've always used software and technology. So I understand the... anxiety about this.
Search Engine
Should we be worried about OpenAI?
I understand people who, for their own principled reasons, decide, well, I don't want to use this in my work. Maybe I'm a creative person. It's very important to me that all the work that I do is 100% human and has no AI in it. These are very reasonable positions to strike.
Search Engine
Should we be worried about OpenAI?
But I think that to tell someone, you shouldn't use this particular kind of software because it is evil, I don't understand that argument. Can I tell you about another way I've been using AI this year? Yeah. And I was actually thinking about you.
Search Engine
Should we be worried about OpenAI?
Because during one of our conversations, we were reflecting on the fact that there were only a couple of things that people could do to improve their mental health. And one was therapy and the other was meditation. And you were saying how frustrating it is to know what the answer is and to not want to do it, right? Yes. It's like...
Search Engine
Should we be worried about OpenAI?
Yes, if you started a meditation practice, like that would obviously be very helpful, but then you have to like sit quietly with your thoughts for 20 minutes a day. Like, obviously that seems horrible.
Search Engine
Should we be worried about OpenAI?
So recently I've been experiencing these feelings of burnout related to my newsletter, where I love doing it, but it also feels harder than it used to. And I've been doing it at least three times a week, sometimes as many as five, for seven years. And so I think this is just sort of a natural thing.
Search Engine
Should we be worried about OpenAI?
And so I felt like I needed to maybe break glass in case of emergency and try something that I'd never previously wanted to do, which was meditate. Oh, wow. So I'm only a few days into this. I don't want to tell you that I've solved anything here. I did enjoy my first few experiences.
Search Engine
Should we be worried about OpenAI?
But one of the things that I did both in the run-up to and the aftermath of these meditation experiences was to just chat with Claude. Because Claude lets you create something called a project where you can upload a few documents and you can chat with those documents.
Search Engine
Should we be worried about OpenAI?
And then you can just also kind of check in with it from day to day and tell it what you're noticing or observing, or ask it if you have questions. And to me, this was a perfect use case for this technology because I truly know nothing about meditation. People have talked to me about it. I've done it a couple of times before, but I've never read a book about it.
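(Projects is a feature of the Claude web app rather than something you script, but the same check-in loop can be roughly approximated with the Anthropic API by keeping the uploaded document in a system prompt and carrying the conversation history forward. A minimal sketch, with hypothetical file names and prompts:)

    # Rough approximation of the "project" workflow described above, using the API
    # instead of the Claude web app: the uploaded document becomes a system prompt,
    # and each day's check-in is appended to a running message history.
    import anthropic

    client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set

    with open("meditation_notes.md") as f:  # hypothetical uploaded document
        notes = f.read()

    history = []  # running conversation; a real setup would persist this between days

    def check_in(observation: str) -> str:
        """Send one day's observation and return Claude's coaching reply."""
        history.append({"role": "user", "content": observation})
        reply = client.messages.create(
            model="claude-3-5-sonnet-20240620",  # assumed model name
            max_tokens=1024,
            system="You are a patient meditation coach. Reference these notes:\n" + notes,
            messages=history,
        )
        text = reply.content[0].text
        history.append({"role": "assistant", "content": text})
        return text

    print(check_in("I meditated for 10 minutes today and kept getting distracted."))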
Search Engine
Should we be worried about OpenAI?
I've never talked with any of my friends at length about it. So I'm just as fresh as you can be. And the level of knowledge that is inside Claude, which was, of course, just stolen from the internet without paying anyone for their labor, is actually quite high. Yeah. And it was able to help give me a good start.
Search Engine
Should we be worried about OpenAI?
And then afterwards, I could come back and say, well, you know, here's what I noticed. And I struggle with this thing. And it'll say, oh, well, you might want to try that. Or, you know, I sort of wish it was a little bit more like this. And it would say, oh, well, then you might want to try this other kind of meditation. Tell me more about that. Okay, yeah, sure. Here's everything. And
Search Engine
Should we be worried about OpenAI?
I was talking earlier about like, what will it be like when you have an AI coworker? It's like, well, I have a meditation coach that I pay 20 bucks a month for. Some people are laughing. Some people are saying, Casey, you can meditate for free. You don't need a coach. I get that. I am somebody who likes to like pay for access to expertise. And I feel like I have it.
Search Engine
Should we be worried about OpenAI?
And first of all, I am going to go meditate after this because I want to recenter myself and I didn't get to do it this morning. I don't know if I'm still going to be doing this in like two or three weeks. But if I am, I think the AI is actually going to be part of that story because it's giving me a place where I can go after these experiences to reflect.
Search Engine
Should we be worried about OpenAI?
Well, I think on the business side, OpenAI has had an incredible year. The New York Times recently reported that its monthly revenue had hit $300 million in August, which was up 1700% since the beginning of 2023. And it expects about $3.7 billion in annual sales this year. I went back to February, and back then it was predicted that OpenAI was going to make a mere $2 billion this year.
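(A quick back-of-the-envelope check of those figures in Python. Reading "up 1,700%" as roughly 18 times the starting level is an interpretation, not something stated in the episode; the dollar amounts are the ones quoted above.)

    # Back-of-the-envelope check of the reported revenue figures (rough math only)
    monthly_aug_2024 = 300e6      # ~$300M monthly revenue reported for August
    growth_since_2023 = 17.0      # "up 1,700%" read as ~18x the early-2023 level

    implied_start = monthly_aug_2024 / (1 + growth_since_2023)
    print(f"implied early-2023 monthly revenue: ~${implied_start / 1e6:.0f}M")  # ~$17M

    annualized_run_rate = monthly_aug_2024 * 12
    print(f"annualized run rate: ~${annualized_run_rate / 1e9:.1f}B")  # ~$3.6B,
    # which lines up with the ~$3.7B in annual sales mentioned for this year.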
Search Engine
Should we be worried about OpenAI?
Again, I hear people saying, Casey, you realize that journals exist. You could like write this down. But yeah, I get what you're saying. What I'm telling you is this is a journal that talks back to you. This is a journal that is an expert about the thing that I'm journaling about that is holding my hand through a process. None of this existed two years ago, right?
Search Engine
Should we be worried about OpenAI?
The challenge of talking about any of this stuff is when the rate of change in your day-to-day is high, sometimes it feels quite obvious. Other times it becomes this weird blind spot where you don't even realize that the conditions around you have changed, right?
Search Engine
Should we be worried about OpenAI?
This is what Leopold is getting at in Situational Awareness, is like, you need to stop and collaborate and listen, as Vanilla Ice once said, right? You need to do what you're doing on this podcast, PJ, which is like, it's been a year, what happened? This is the right question, right?
Search Engine
Should we be worried about OpenAI?
You know, we were talking so much earlier about these AI critics that are like, it's all hype, it's constantly wrong, screw these Silicon Valley bros, right? And I totally get all of the animus and resentment that powers that. But something that those folks do to their detriment is they tune out everything that is happening in AI because they think, I've already made up my mind about this stuff.
Search Engine
Should we be worried about OpenAI?
I already know that I hate everyone involved. I hate the output and I hope it chokes and dies, right? Like this is how these people feel. And again, I get it. I understand all of those emotions. What I'm saying to you though, is you actually have to look around. You have to engage.
Search Engine
Should we be worried about OpenAI?
You have to keep trying out these chatbots every two or three months, if only to get a sense of what they can do now that they couldn't do two to three months ago. Because otherwise you are going to miss what is happening here. And it is wild.
Search Engine
Should we be worried about OpenAI?
That's a great question. I think there's a lot that goes into it. I think that we're living at a time where there's kind of a low-water mark in trust in our technology companies. I think the social media era really destroyed most of the goodwill that Silicon Valley had in the world, because people see technologies like Facebook and Instagram and TikTok as mainly just things that
Search Engine
Should we be worried about OpenAI?
steal our time and reshape the way we relate to each other in ways that are obviously worse. And the whole time, the people building these technologies insist that actually they're saving the world and that there's nothing wrong with them. And so when another generation comes along and says, oh, hi, we are actually here to invent God, there's going to be a lot of...
Search Engine
Should we be worried about OpenAI?
So just this year, the amount of money they expected to make doubled. They further believe that their revenue will be $11.6 billion next year. So those are growth rates that we typically see only for kind of once-in-a-generation companies that really manage to hit on something new and novel in technology.
Search Engine
Should we be worried about OpenAI?
There's going to be a lot of skepticism about that. And it is the AI companies themselves who told us this thing will create massive job loss. It will create massive social disruption. We may have to... come up with a new way of organizing society when we are done with our work.
Search Engine
Should we be worried about OpenAI?
That is something that every CEO of every AI company believes, PJ, is that we will have to reorganize society because essentially capitalism won't make sense anymore. So most people will agree that they don't like change. Change is bad. And when they say they don't like change, it usually means, well, I have a new manager at work.
Search Engine
Should we be worried about OpenAI?
The change that these people are talking about is that capitalism won't exist anymore. And it's unclear.
Search Engine
Should we be worried about OpenAI?
Yeah, yeah, exactly. And nobody wanted capitalism to go away and be replaced with something where Silicon Valley seemed to be in control of everyone's future.
Search Engine
Should we be worried about OpenAI?
Maybe something else to say that's important is that the way all of this is unfolding is anti-democratic. No one really asked for this, and the average person does not get a vote. If you're just an average person, you don't want AI to replace your job. There's really nothing you can do about it. And so I think that actually breeds a ton of resentment against these companies.
Search Engine
Should we be worried about OpenAI?
And while the government is starting to pay attention, at least here in the United States, they're being very, very gentle about everything. And so if you wanted to change the course of AI, it's not actually clear how you would go about that. And so I think that's another really big reason why people often resent it.
Search Engine
Should we be worried about OpenAI?
Yeah, let me just say I'm going to keep paying attention to it.
Search Engine
Should we be worried about OpenAI?
So I think at a high level, and somewhat to my surprise, Sam Altman changed very little about the way that he led OpenAI in the last year. Like if the concern that came up last year was that Sam was not being very collaborative, that he was not empowering other leaders, that he was operating this as a sort of very strong CEO who was not delegating a lot of power.
Search Engine
Should we be worried about OpenAI?
I haven't seen a lot of change in the past year. I have seen him continue to pursue his own highest priorities, like fundraising to build giant microchip fabrication plants, for example, which has been a huge priority for him. At the same time, there have been stories that have come out along the way that reminded you why people were nervous about the company last year.
Search Engine
Should we be worried about OpenAI?
One that comes to mind is that it was revealed this spring that OpenAI had been forcing employees when they left to sign non-disclosure agreements, which is somewhat unusual. But then very unusually, they told those employees, if you do not sign this NDA, we can claw back the equity that we have given you in the company.
Search Engine
Should we be worried about OpenAI?
It would be impossible. They don't do that. They don't do that? No, they don't do that. So this is just extraordinarily unusual.
Search Engine
Should we be worried about OpenAI?
You know, sometimes with like a C-suite executive or someone very high up in the company, if they, maybe let's say they're fired, but the company doesn't want them to run around badmouthing them to their competitors, they might make that person sign an NDA in exchange for a lot of money. But this thing was just hitting the rank and file employees at OpenAI, and that was really, really unusual.
Search Engine
Should we be worried about OpenAI?
Yeah. And afterwards, Sam Altman posted on X saying that he would not do this and that it was one of the few times he had been genuinely embarrassed running OpenAI. He did not know this was happening and he should have, is what he said.
Search Engine
Should we be worried about OpenAI?
Yeah, absolutely. And, you know, I will say that there has been great reporting over the past year by other journalists who have gotten at what some of those concerns are. And a lot of them wind up being the same thing, which is we launched a product, and I think we should have done a lot more testing before we launched that product, but we didn't.
Search Engine
Should we be worried about OpenAI?
And so now we have accelerated this kind of AI arms race that we are in, and that will likely end badly because we are much closer to building superintelligence than we are to understanding how to safely build a superintelligence. I see.
Search Engine
Should we be worried about OpenAI?
Exactly, and we have seen this time and time again. I mean, this is really fundamental to the DNA of OpenAI. When they released ChatGPT, other companies had developed large language models that were just as good, but Sam got spooked that his rival, Anthropic, which had an LLM named Claude, was going to release their product first and might steal all of their thunder.
Search Engine
Should we be worried about OpenAI?
And so they released ChatGPT to get out in front of Claude. And that was essentially the starting gun that launched the entire AI race. And so I think it is fundamental to how Sam sees the world that all of this stuff is inevitable. And if it's going to happen anyway, all other things being equal, you would rather be the person who did it, right?
Search Engine
Should we be worried about OpenAI?
And got the credit and the glory and the users and the revenue.
Search Engine
Should we be worried about OpenAI?
Do you want to tell that story? Yeah. For a while, OpenAI had been working on a voice mode for ChatGPT. Instead of just typing in a box, you could tap a button on your phone and interact with the model using a voice. And a movie that has long inspired people in Silicon Valley is the Spike Jonze film, Her.
Search Engine
Should we be worried about OpenAI?
And in that film, Joaquin Phoenix, who plays the protagonist of that film, talks constantly to an AI companion who is voiced by Scarlett Johansson.
Search Engine
Should we be worried about OpenAI?
Well, look, you could take different lessons from Her. You know, I think a bad lesson to take would be human companionship is worthless at the moment that we invent AI superintelligence because we can just talk to superintelligence all day long and turn our backs on humanity. That would be a bad lesson to learn.
Search Engine
Should we be worried about OpenAI?
I think a lot of people in Silicon Valley looked at Her and they thought, oh, that's a really good natural user interface. Like, if we could just wear earbuds all day long and you could answer any question you ever had just by saying, hey, Her, what's going on with this? That would be great.
Search Engine
Should we be worried about OpenAI?
And then, in fact, you do start to see the arrival of products like Siri and Alexa and sort of baby steps toward this new world. So I completely agree with you. Her is a dystopian film. It should not be viewed as a blueprint to build the future. At the same time, I do feel like I see what Silicon Valley saw in it.
Search Engine
Should we be worried about OpenAI?
Right, and lightsabers are a good idea, and we should build those.
Search Engine
Should we be worried about OpenAI?
Not only did the voice sound very much like Scarlett Johansson, it was also presented in this very flirty way. When they did this demo, it was like, it's a man using an assistant who has the voice of a woman who sounds a lot like Scarlett Johansson. And she's like, oh, PJ, you're so bad. That was kind of the tone of it. And it was sort of like, what are you doing here exactly?