Honestly with Bari Weiss
Sam Altman on His Feud with Elon Musk—and the Battle for AI's Future
Thu, 19 Dec 2024
Just a few years ago, as AI technology was beginning to spill out of start-ups in Silicon Valley and hitting our smartphones, the political and cultural conversation about this nascent science was not yet clear. I remember asking former Google CEO Eric Schmidt on Honestly in January 2022 if AI was just like the sexy robot in Ex Machina. I literally said to him, “What is AI? How do you define it? I do not understand.” Today, not only has it become clear what AI is and how to use it—ChatGPT averages more than 120 million daily active users and processes over a billion queries per day—but it’s also becoming clear what the political and cultural ramifications—and the arguments and debates—around AI are going to be over the next few years. Among those big questions are who gets to lead us into this new age of AI technology, what company is going to get there first and achieve market dominance, how those companies are structured so that bad actors with nefarious incentives can’t manipulate this technology for evil purposes, and what role the government should play in regulating all of this. At the center of these important questions are two men: Sam Altman and Elon Musk. And if you haven’t been following, they aren’t exactly in alignment. They started off as friends and business partners. In fact, Sam and Elon co-founded OpenAI in 2015. But over the years, Elon Musk grew increasingly frustrated with OpenAI until he finally resigned from the board in 2018. That feud escalated this past year when Elon sued Sam and OpenAI on multiple occasions to try to prevent the company from launching a for-profit arm of the business, a structure that Elon claims was never supposed to happen at OpenAI—and he also argues that changing its structure in this way might even be illegal. On the one hand, this is a very complex disagreement. To understand every single detail of it, you probably need a law degree and special expertise in American tax law. But you don’t need a degree or specialization to understand that at its heart, this feud is about something much bigger and more existential than OpenAI’s business model, although that’s extremely important. What this is really a fight over is who will ultimately be in control of a technology that some say, if used incorrectly, could very well make human beings obsolete. Here to tell his side of the story is Sam Altman. We talk about where AI is headed, and why he thinks superintelligence—the moment where AI surpasses human capabilities—is closer than ever. We talk about the perils of AI bias and censorship, why he donated $1 million to Trump’s inaugural fund as a person who has long opposed Trump, what happens if America loses the AI race to a foreign power like China, and of course, what went wrong between him and the richest man on Earth. If you liked what you heard from Honestly, the best way to support us is to go to TheFP.com and become a Free Press subscriber today. *** This show is proudly sponsored by the Foundation for Individual Rights and Expression (FIRE). FIRE believes free speech makes free people. Make your tax-deductible donation today at www.thefire.org/honestly.
Hi, Honestly listeners, it's Bari here with a big end-of-year ask. By the end of 2024, in a few short weeks, we want to get to 1 million Free Press subscribers. That's right, 1 million people who value journalistic independence and curiosity, and above all, who want a news source that reflects reality.
If you're here, it's not just because you believe in fearless old-school journalism for yourself. It's because you believe in it for other people too. You believe it's for the good of the country. Free pressers tell us again and again that we're not just a media company. We're a public trust.
So if for some crazy reason you listen to Honestly and you still don't support the free press, here's your moment. Take out your computer or your phone and go to the Free Press' website. Go to thefp.com slash subscribe and become a Free Presser. You don't even need to take out your credit card.
Sign up for free and get our daily emails that give you our view of the world and a view into the world of the Free Press in your inbox every morning. If you're already signed up, why not give a gift subscription to your friends and family members or anyone you think can use a good dose of reality? Okay, one more time.
Support us by going to the Free Press's website at thefp.com slash subscribe or click the link in our show notes and help us get to our goal of minting 1 million Free Pressers by December 31st, 2024. Thanks so much.
Ryan Reynolds here for Mint Mobile. One of the perks about having four kids that you know about is actually getting a direct line to the big man up north. And this year, he wants you to know the best gift that you can give someone is the gift of Mint Mobile's unlimited wireless for $15 a month. Now, you don't even need to wrap it. Give it a try at mintmobile.com slash switch.
$45 upfront payment required, equivalent to $15 per month. New customers on first three-month plan only. Taxes and fees extra. Speeds slower above 40 gigabytes on unlimited. See mintmobile.com for details.
From the Free Press, this is Honestly, and I'm Bari Weiss. Just a few years ago, as AI technology was beginning to spill out of startups in Silicon Valley and hit our smartphones, the political and cultural conversation about this nascent technology was not yet clear, or at least it wasn't clear yet to civilians like me.
I remember asking former Google CEO Eric Schmidt on Honestly in January 2022 if AI was just like, and this is actually what I said, the sexy robot in Ex Machina. I literally said to him, what is AI? How do you define it? I do not understand.
I cringe listening back to that because today, in the waning days of 2024, not only has it become clear what AI is and how to use it, ChatGPT, just to choose one example, averages more than 120 million daily active users and processes over a billion queries per day.
But it's also becoming clear what the political and cultural ramifications and the arguments and debates around AI are and what they're going to be over the next few years. Among those big questions are who gets to lead us into this new age of AI technology? What company is going to get there first and achieve market dominance?
How those companies are structured so that bad actors with bad incentives can't manipulate this technology for evil purposes. What role the government should play in regulating all of this. At the center of these important questions, at least for right now, are two men, Sam Altman and Elon Musk. And if you haven't been following, they aren't exactly in alignment.
I don't trust OpenAI. I don't trust Sam Altman. And I don't think we want to have the most powerful AI in the world controlled by someone who is not trustworthy.
It would be profoundly un-American to use political power to the degree that Elon has it, to hurt your competitors and advantage your own businesses.
They started off as friends and business partners. In fact, Sam and Elon co-founded OpenAI, the company that makes ChatGPT, in 2015. But over the years, Elon Musk grew increasingly frustrated with OpenAI until he finally resigned from the board in 2018.
That feud escalated this past year when Elon sued Sam and OpenAI on multiple occasions to try to prevent OpenAI from launching a for-profit arm of the business, a structure that Elon claims was never supposed to happen at OpenAI. He likes to remind people that a transparent nonprofit company should not become a closed for-profit one.
But he argues that changing its structure in this way might even be illegal. Now, on the one hand, this is a very complex disagreement. To understand every single detail of it, you probably need a law degree and special expertise in American tax law, neither of which I happen to have.
But you don't need a degree or any specialization to understand that at its heart, this feud is about something much bigger and more existential than the business model of OpenAI, although that's extremely important.
At its heart, what this is really about is a fight over who will ultimately be in control of a technology that some say, if used incorrectly, could very well make human beings obsolete. So the stakes are low. Here to tell his side of the story is Sam Altman.
We talk about where AI is headed, why he thinks superintelligence, in other words, the moment where AI surpasses human capabilities, is closer than ever. We talk about the perils of AI bias and censorship, why he donated a million dollars to Trump's inaugural fund as a person who had long opposed Trump, what happens if America loses the AI race to a foreign power like China,
and of course, what went wrong and is going wrong between him and the richest man on Earth. We'll be right back. Today's episode is brought to you by the Foundation for Individual Rights and Expression, or FIRE. FIRE believes that free speech is the foundation of a free society. This freedom is fundamental.
It drives scientific progress, entrepreneurial growth, artistic expression, civic participation, and so much more. But free speech rights don't protect themselves. And that's where FIRE comes in. Proudly nonpartisan, they defend free speech and the First Amendment where it's needed most, on campus, in the courtroom, and throughout our culture.
If you believe in that fight, and if you believe in the principles of free speech, consider joining FIRE with a gift before the end of the year. Your donation will help FIRE continue their critical work, and it's tax-deductible. Visit thefire.org slash donate today to make your gift and join the free speech movement. Bernard-Henri Lévy's new book, Israel Alone, will make the perfect holiday gift.
You may have read BHL in our pages ahead of the French elections this past summer, or perhaps you remember our reporting in the free press about how an ad for his new book, Israel Alone, was rejected from a trade publication on the grounds that it would cause controversy. You heard that right. An ad for a book about Israel would itself cause controversy.
Or perhaps you're familiar with one of the 48 books that Lévy has written. Regardless, I urge you to pick up a copy of his new book, Israel Alone. It's a passionate cri de coeur about Israel and the tragedy of October 7th, starting with Lévy's eyewitness account. He was on the ground in Israel the day after the pogrom.
From his unique humanist perspective, Lévy analyzes what exactly Hamas did to Israel on October 7th and delves into how Iran, Russia, radical Islamist groups, Turkey, and China have played roles in and profited from this tragedy.
He weaves in his experiences from his first trip to Israel in 1967 and his meetings with Israeli leaders throughout the decades, including Menachem Begin, Shimon Peres, Ariel Sharon, Yitzhak Shamir, and Yitzhak Rabin. The book addresses the worldwide eruption of anti-Semitism over the past year and takes head-on the arguments for a ceasefire.
It's a deep meditation on Zionism and Israel, and I think regular listeners of this show will get a lot out of it. Israel Alone is available on Amazon and at local booksellers. If your local independent bookstore does not carry Israel Alone, ask them to order it. This ad is sponsored by Marty Peretz in honor of Bernard-Henri Lévy. Sam Altman, welcome to Honestly.
Thanks for having me. Good to see you.
The last time we spoke, and I know you've given a zillion interviews since then, but it was in April of 2023, and it feels like a world away. ChatGPT had just launched, and people were just at the very beginning of trying to figure out, like in the abstract, what this technology was and how it might transform their everyday lives.
Now, sitting here in December of 2024, ChatGPT is a household name. So is OpenAI, and of course, some of your competitors are too, like Perplexity and Gemini and Claude. And average Americans are using these tools every day, everything from math tutoring to debugging code to drafting emails, and it's very, very good at doing that.
Tell me about how ChatGPT and, I guess, AI technology more broadly have changed since we last spoke a year and a half ago, and whether or not it's where you expected it to be today, or further along.
So I think there's two different things we can talk about. One is how much the technology itself has changed, and that has gotten way better. I mean, if you think about the AI we were excited about back in April of 2023, it was so primitive relative to what we have now. And the things that the technology is capable of are pretty mind-blowing to me.
But even more than that is the rate at which it will continue to get better over the next year. And if we came back in another 18 months and talked about what it can do, I think the gap will feel as big as, or maybe even bigger than, the one from April 2023 to December of 2024. The other thing that's happened is it's really integrated into society.
Like, back then, it was still a curiosity, something many people had heard of. People really use it now a lot for, like, a lot of their work, their personal lives, their... It's... I've never seen a technology become widely adopted this fast, not just as something people like dabble with, but something that people like really use in all the ways you were talking about.
So that part of the adoption curve happened much more quickly than I thought. I expected the technology to happen quickly.
Give me a sense of, like, how are you using the tool that you have helped create in your daily life? Like, the way that most people I know are using it, Tyler Cowen and lots of people who are, like, passionate early adopters, it almost seems to have, like, replaced Google for them. And it's just, like, a much, much deeper Google. Is that how it's working for you?
I use it in all sorts of ways, but the newest one, a few months ago, we released search integration. And now ChatGPT can search the internet for kind of real-time information. And of everything we've ever shipped, that was the one that felt like it doubled my usage all at once. And since then, I mean, I must have still used Google for something, but I can't remember what it is.
Wow.
And I switched ChatGPT to be my default search in Chrome, and I have not looked back. The degree to which that behavior changed in me, for something that was really deeply ingrained, was striking. And now, when I remember the way that I used to search, it feels kind of like, oh man, that was like a pre-iPhone kind of equivalent. That's the sort of level of shift that I feel about it.
That's been the most surprising change to me in the last few months: I do all my searching now inside of ChatGPT.
What do you call it? Do you call it searching or is there a verb in the way that Googling is a verb?
I still call it search. I mean, other people say, like, I chatted it. A lot of young people seem to just only call it chat. But I would say I just use search.
In September, so just a few months ago, you published this manifesto on your website predicting the emergence of superintelligence in the next few years, or as you put it, and memorably, in the next few thousand days. Explain to us what superintelligence is. Tell us how we'll know if it's actually here and how it stands to change people's lives over the next decades.
One thing that I use as sort of my own mental framework for it is the rate of scientific progress. If the rate of scientific progress that's happening in the world as a whole tripled, or maybe even, like, 10x'd.
You know, the discoveries that we used to expect to take 10 years, and the technological progress that we used to expect to take 10 years, if that happened every year, and then we compounded on that the next one, and the next one, and the next one, that to me would feel like superintelligence had arrived, and it would, I think in many ways, change the way that society, the economy work.
What it won't change, and I think a lot of the sort of AI commentators get this wrong, is it won't change the deep fundamental human drives. And so in that sense, we've been through many technological revolutions before. Things that we tend to care about and what drive all of us, I think, change very little or maybe not at all through most of those.
But the world in which we exist will change a lot.
Okay, well, Sam, one of the reasons we wanted to have this conversation with you today is not just because we want to hear about the ways that AI is going to transform the way that we live and work, but because you're in a very public battle right now with your original OpenAI co-founder, Elon Musk.
And I think it's safe to say that most listeners of this show will, like, vaguely know that there's a conflict between you and Elon Musk having to do with this, one of his companies, one of his many companies. But they're certainly not following the nitty gritty details of the various lawsuits and of the conflict more generally. So I want to try and summarize it in the most fair way that I can.
And then you'll tell me if I've gotten it wrong or where I've overstepped. So OpenAI begins in 2015, and it starts as a nonprofit. And in a blog post introducing OpenAI to the world in December of that year, you wrote this: "OpenAI is a nonprofit artificial intelligence research company.
Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact." And this was a huge aspect of the brand.
Then, fast forward four years: in 2019, OpenAI moves to what it called a hybrid model with a for-profit arm that got a billion-dollar investment from Microsoft in that year. Since then, Microsoft has poured something like $13 billion—it might be a higher number—more into the company. And Elon was one of the co-founders, as I mentioned, from the beginning.
But his relationship with the company soured over time because he disagreed with the shift that I just described, the shift from this nonprofit model to a hybrid model. And he eventually leaves the company and steps down from the board. And that takes us to this year.
This year, Elon has sued you and OpenAI on several different occasions, and he has given many interviews and posted countless tweets, or exes, or whatever we're supposed to call them, about this conflict. All of the lawsuits claim that you were in some kind of contract violation by putting profits ahead of the public good in the move to advance AI.
And then last month, and this is the most recent development, Elon asked the district judge in California to block OpenAI from converting to this for-profit structure. Okay, that was a mouthful. Did I summarize it properly? And is there anything crucial that I left out or misstated?
You mostly summarized it properly, but I mean, it was Elon who most wanted OpenAI to be a for-profit at one point, and he had made a bunch of proposals that would have done that, things like OpenAI being part of Tesla, but mostly just creating a new for-profit that he was going to be in control of. So other than that, I think a lot of the summary there is correct.
I have a bunch of thoughts and opinions on it, but as a statement of facts, that was otherwise mostly correct.
Give us like the 10,000-foot version. What is the fundamental conflict between Elon Musk and his various allies, Meta being one of them, and you guys? Like what is the disagreement fundamentally about?
Yeah. Look, I don't live inside Elon's head, so this is a little bit of speculation. Elon definitely did a lot to help OpenAI in the early days, and in spite of all of this, I'm very grateful, and I think he's just a sort of legendary entrepreneur. He's also clearly a bully, and he's also someone who clearly likes to get in fights. You know, right now it's me.
It's been Bezos, Gates, Zuckerberg, lots of other people. And I think fundamentally, this is about OpenAI is doing really well. Elon cares about doing really well. Elon started and now runs a very direct competitor that's trying to do exactly what OpenAI does. And I'll point out as a structure, you know, it's like a public benefit corp.
And I heard Elon has majority ownership and control, and that seems like a reasonable thing he would do. I think a lot of this has been misreported in the press. Even if we go through with any of the conversion ideas or evolution ideas we're talking about, it's not like the nonprofit goes away. The nonprofit doesn't stop being a nonprofit and become a for-profit.
We've talked publicly about maybe we evolve our current LLC into a PBC, but anything we do would strengthen the nonprofit. The nonprofit would continue to exist. It would continue to serve people, and hopefully better serve the same purpose.
And the overall mission of the company that you talked about, which is develop this incredible technology, do it in a way that we think is maximally beneficial to humans and get it out into the world for people. We keep doing that. I'm incredibly proud of our track record on doing that so far. People, as you were saying earlier, use ChatGPT and love it. There's an incredible free tier of ChatGPT.
We lose money on it. It's not ad supported or anything. We just want to put AI in people's hands. We continue to want to deploy this technology so that people co-evolve with it, understand it, that the world is going through this process it's going through right now of contending with AI and eventually AGI and thinking how it's going to go.
And everything we're doing, I believe Elon would be happy about if he weren't in control of the company. He left when he thought we were on a trajectory to certainly fail, and also when we wouldn't do something where he had total control over OpenAI.
But I think it's a little bit of a sideshow, and the right thing for us to do is just keep doing incredible research, keep shipping products people love, and most importantly, keep pursuing this mission of AGI to benefit people and getting that out into the world.
For someone who's just sort of tuning into this topic, why is it important, Sam, that OpenAI has a for-profit arm or converts in the way that you've been talking about? Why is that essential to your growth?
When we started OpenAI, we thought... It's hard to go back and remember how different things were in 2015. That was before language models and chatbots. It was way before ChatGPT. We were doing research and publishing papers and working on AIs that could play video games and control robotic hands and things like that. And we were supposed to get a billion dollars, but ended up not.
We thought with a billion dollars, we could make substantial progress towards what we were trying to do. As we learned more and got into the scaling language model world, We realized that it was not going to cost $1 billion or even $10 billion, but like $100 billion plus. And we couldn't do that as a nonprofit. So that was the fundamental reason for it.
Maybe another way to say it is, like, the money is absolutely essential for the computational power you need to create these models.
And every other effort pursuing AI has realized this and is set up in some way where they can sort of access capital markets.
You've said a lot of different things about Elon in recent days. You gave this interview at Dealbook where Andrew Ross Sorkin is sort of asking you how you feel about the conflict. And you say, sad. And you also say that you think Elon's companies are awesome.
And then he asked you, do you think he's going to use his newfound political influence to kind of punish you or punish OpenAI or punish his competitors? And you said in that interview that you thought he would do the right thing. How do you square that with what you just told me, which is that Elon's a bully? Bullies don't typically do the right thing.
Oh, well, I think there are people who will really be a jerk on Twitter who will still not like abuse the system of a country they're now in a sort of extremely influential political role for. That seems completely different to me.
Until now, much of this battle – for those of us who are like perpetually online and perpetually on Twitter, we have been following the conflict via like tweets lobbed, subtweets. It's all sort of been playing out in real time on Twitter for us to watch. OpenAI, though, has sort of been in, like, response mode sometimes or mostly kind of ignoring everything. That's sort of how I'd characterize it.
That changed a few days ago when you guys published this very, very long memo on OpenAI's website. And it's like a timeline going back to 2015, showing from your perspective, via emails and screenshots of texts and explanations of those screenshots and those texts, that Elon was open to OpenAI being a for-profit going all the way back then. I read all 29 pages.
For those who don't want to do that, they could go to ChatGPT and ask Chat to summarize it. Here's how ChatGPT summarized it. This article details the rift between Elon Musk and OpenAI's leadership, particularly Sam Altman, stemming from Musk's dissatisfaction with OpenAI's shift from a nonprofit to a hybrid for-profit model.
This feud is crucial, Chat told me, because it underscores the broader ethical dilemma of how AI should be developed and controlled, whether it should prioritize public good or corporate profit, especially as powerful AI technologies become increasingly influential in society and the economy. I thought that was pretty good. What do you think?
Not bad.
Anything you would add to it?
No, but on your general point, you are right that we do not sit there and throw tomatoes back and forth on Twitter. The reason for this one was we had to make a legal filing, and we wanted to provide some context. We've published about this once before, also when we had to make a legal filing. I've lost track of how many times that Elon has sued us.
I think it's like four. He sues, withdraws, changes, goes for this preliminary injunction, whatever. Our job is to build AGI in a way that benefits humanity and figure out how to safely and broadly distribute it. Our job is not to engage in, like, a Twitter fight with Elon. But when we have to respond to a legal filing, we will, and sometimes we'll provide context. I think we've only done this twice.
In the early days of OpenAI, the brand, like the way I encountered the brand of it was transparency and nonprofit. Like those were the things that it over and over emphasized. And the reason you said that you couldn't take any equity and the reason you took such a small salary is because you said, you know, I don't want to be conflicted.
I want to always be motivated to do the thing that's best for humanity. The day after OpenAI launched in December 2015, you described it to Vanity Fair as a nonprofit company to save the world from a dystopian future. You also said that trying to make OpenAI a for-profit would lead to, quote, misaligned incentives that would be suboptimal to the world as a whole.
I guess I want to ask, like, do you still agree with that, but simply you've had to adapt to the reality, which is that developing these models takes billions and billions and billions of dollars?
Two things.
One, I think I was, like, a little bit wrong about that. Although I have had concerns, I have been impressed by how much not just us but the other AI labs, even though they have this, like, wild sort of market or economic incentive, have really been focused on developing safe models. I think there's many factors that went into that.
We did get a little lucky on the direction the technology went. But also if you deploy these models in a way that is harmful to people, you would like very quickly, I believe, lose your license to operate if it was an obvious one. Now, there are subtle things that can go wrong.
I think social media is an example of a place where maybe the harms weren't so obvious at the time, and then there was an emergent property at scale, and you could imagine something happening with AI that could be like that.
But the incentive problem has been better than I thought at the time, and I will cheerfully say I was a little bit naive about how the world works 10 years ago, and I feel better now.
Naive how?
Oh, the pressure, the societal pressure on big companies and sort of the power of researchers to push their companies to do the right thing, even in the face of this gigantic profit motive, have been pretty good.
But there is something that I don't feel naive about that I felt at the time too, which is it continues to be fairly crazy to me that this is happening in the hands of a small number of private companies. To me, this feels like the Manhattan Project or the Apollo program of our time. And those were not done by private companies. And I think it's a mark of a well-functioning society that things like that were done by the government.
I've said this many times. This is not like new breaking news. But, you know, I think this would be too.
Do you think that we need a Manhattan Project here?
I think the companies are going to do the right thing and it's going to go well, and I don't think a government effort in this current world would work at all. I don't think it would be good if it did, honestly. I wish we were in a world where I felt like that was the way it should and was happening.
Meta right now, you know, Mark Zuckerberg's company, is also siding with Elon. A few days ago, Meta asked California's AG to block OpenAI from becoming a for-profit. And this is what they said in their letter. OpenAI's conduct could have seismic implications for Silicon Valley.
If OpenAI's new business model is valid, nonprofit investors would get the same for-profit upside as those who invest in the conventional way in for-profit companies while also benefiting from the tax write-offs bestowed by the government. This echoes what Musk said last year when he said, I'm confused as to how a nonprofit, which I donated to, somehow became a market cap for profit.
In other words, if this is legal, like, why isn't everyone doing this?
So, look, first of all, I don't know why they sent that letter, but I do know they know that's not how it works. I know that part's in bad faith. In any of these worlds, our nonprofit will keep going, and the people that invested in the nonprofit don't get for-profit upside. You don't get to have a benefit from a nonprofit donation accrue to for-profit equity, of course. And they know that, too.
You can imagine lots of other reasons that Meta might have sent this letter. You can imagine that they wanted to curry favor with Elon. You can imagine that they felt like it would help them compete with us. You could imagine that they were like, annoyed with us for a perceived anti-open source stance, which I don't think is accurate or something that I feel. I don't know.
You should ask them what the reason was.
But for the civilian who's listening, how does a nonprofit become a for-profit? What's the answer?
It doesn't. The nonprofit stays as the nonprofit. I believe that the OpenAI nonprofit is on a trajectory, I hope, if we do well, to be the largest and most impactful nonprofit of all time. That nonprofit doesn't become anything else. Like many other things in our world, our ecosystem can have a for-profit business also, but the nonprofit does not convert. The nonprofit does not go anywhere.
The nonprofit does not stop doing nonprofit things.
At the end of the day, Sam, who is going to profit most from the success of OpenAI?
I'll tell you what I hope. Everyone gives their analogy for what technological revolution this is most like. It's the industrial revolution. It's like electricity. It's like the web. The thing I hope for is that it's like the transistor. We discovered a new important fundamental physical law, whatever you want to call it. We did a bunch of research. So did others.
And it will seep into all aspects of the economy, products, everything. And you and I today are using many devices with transistors in them to make this podcast possible. Your computer has some. Your microphone has some. All of the internet equipment between you and me has a lot. But we don't sit here and think about transistors.
And the transistor company does not sit here and make all of the money. It is this new incredible scientific discovery that seeped into everything we do. And everybody made a lot of money. That's what I hope AI will be like, and I think there's many reasons why it's the best analogy.
Will you have equity or do you have equity or what kind of stake do you have in this new capped for-profit?
Well, so we haven't formed a new entity yet. We have obviously considered forming a new entity, or maybe converting our existing LLC into one, that's more accurate. I have a tiny sliver of equity from an old YC fund. I used to have some via a Sequoia fund, but that one turned out to be easier to sell and not keep the position in. So I have a very small amount that's quite insignificant to me.
10%.
I mean, you understand why. Do you get why people are fixated on that?
For sure. But as I've said many times before, if I could go back in time, I would have taken equity. I think, again, I understand more about why my earlier misgivings were misplaced. I also get that it's weird for me to take it now after not taking it earlier. On the other hand, I would love to never have to answer this question again and be like, we're a normal company. I run it. I've got some equity.
Investors don't have to worry that I'm misaligned. The whole air of suspicion around not having any is one of the OpenAI structure decisions I regret the most. But I understand why people are fixated on it. That makes sense.
If you could go back in time, how would you have done this from the beginning? Like, let's wind back the clock to 2015.
If an oracle had said to me in, what was it, November of 2015, before we set up: Number one, you're going to need 100-plus billion dollars.
Number two, even though you have no idea today how you're going to ever productize this and you think of yourself as a research lab, eventually you're going to become a company that does have a way to productize it and business model it so you can explain to investors why they're not just funding a research lab. And number three, that the incentives of
people working on this are going to be more naturally kept in check because it's not going to be what I and many others thought at the time of like one effort that is way far ahead of everyone else, but something more like the transistor that seeps out. And so there will be better equilibrium dynamics.
If an oracle had told me all three of those things that turned out to be true, I would say, great, let's be a public benefit corp.
How essential was Elon to getting OpenAI off the ground? Like if the Oracle also told you about this fight that would ensue with someone that you regarded as your close friend, would you have said, you know, don't need him, can do it myself?
No, he was really helpful. I'm super appreciative.
I remember, I think, the first time I ever saw Elon Musk was on stage at a conference. You were interviewing him. You guys had a wonderful dynamic. You seemed like you were really good friends. He has said some really harsh things about you. He's compared you to Littlefinger in Game of Thrones. And he has most devastatingly said, I don't trust him.
And I don't want the most powerful AI in the world to be controlled by someone who isn't trustworthy. Why is he saying that?
I think it's because he wants the most powerful AI in the world to be controlled by him. And, again, I've seen Elon's attacks on many other people, many friends of mine. You know, everyone gets their period of time in his spotlight. But this all seems like standard behavior from him.
I'm trying to put myself in a position of a former friend, a former co-founder of mine, saying those kinds of things about me. You seem relatively calm about it.
No, I'm upset by it for sure. I was talking to someone recently who I think is close to him, and they said, like, Elon doesn't have any friends. Elon doesn't do peers. Elon doesn't do friends. And that was sort of a sad moment for me because I do think of him as a friend. But I don't know. I can look at this, like, somewhat dispassionately.
Like, I remember what it was like when he said OpenAI has a 0% chance of success, and, you know, you guys are idiots, and I'm pulling funding, and I'm going to do my own thing. I remember what it was like when there were moments since then where it felt like he kind of wanted to reconcile and figure out a way to work together.
And then I remember moments where he's just like, you know, off doing his thing on Twitter. But if it were only towards me, I think it'd be much more painful. But, you know, I think you see who he is on Twitter. And so I can like hold it somewhat impersonally and just be like, this is about Elon. This is not about me. It still sucks. I've had a long time to get used to it, I guess.
This recent blog post that went up on OpenAI's site said that Elon should, quote, be competing in the marketplace rather than in the courtroom. And the cynical view, of course, is to say, and you've alluded to this in this conversation, that Elon, who now owns an OpenAI competitor himself called XAI, is suing you not out of some concern over...
AI safety or anything else, but really just to get in on the competition. What do you say to that? Is the cynical view true? Is this really just a fight to be the first to dominate the market? Or is there...
You should ask him.
I hope, yeah, I hope to. I invited him on.
Great.
After the break, more with OpenAI's Sam Altman. We'll be right back.
The Credit Card Competition Act would help small business owners like Raymond. We asked Raymond why the Credit Card Competition Act matters to him.
I'm Raymond Huff. I run Russell's Convenience in Denver, Colorado. I've run this business for more than 30 years, but keeping it going is a challenge. One of the biggest reasons I've found is the credit card swipe fees we're forced to pay. That's because the credit card companies fix prices. It goes against the free market that made our economy great.
The Credit Card Competition Act would ensure we have basic competition. It's one of the few things in Washington that both sides agree on. Please ask your member of Congress to pass the Credit Card Competition Act. Small businesses and my customers need it now.
For more information on how the Credit Card Competition Act will help American consumers save money, visit merchantspaymentscoalition.com and contact your member of Congress today. Paid for by the Merchants Payments Coalition. Not authorized by any candidate or candidates committee.
Empower your business or digital agency with Bluehost, trusted by over 5 million WordPress users globally. Bluehost features top-to-bottom hosting optimizations designed specifically for WordPress, giving you 24-7 access to a team of experts for support, plus thousands of WordPress help articles.
So if you want to streamline WordPress website creation with intuitive controls and premium support, choose Bluehost, powering over 2 million websites worldwide.
Let's talk a little bit about AI regulation and questions about safety in AI. You're not just known as one of the most important AI CEOs, AI developers in the world. You're also a very, very well-known proponent of AI regulation.
And the cynical view here, right, is that in the very same way that you could cast aspersions on Elon's motives, you could look at the way that you have lobbied for AI regulations as a way to stifle competition and benefit your company. Obviously, you've heard that argument before. I'd love for you to respond to that.
I think too much regulation clearly has huge negative consequences, and in many places in society right now we have experienced too much. I mean, Elon has also been a big proponent of calling for AI regulation, as have the heads of most other large efforts. When you step on an airplane, you think very high likelihood it's going to be a safe experience.
When you eat food in the US, you don't think too much about food safety. Some regulation is clearly a good thing. Now, I can imagine versions of AI regulation that are really problematic and would disadvantage smaller efforts. And I think that would be a real mistake. But for some safety guardrails on the most powerful systems, that should only affect the people at the frontier.
That should only affect OpenAI and a small handful of others. I don't think we're at the level yet where these systems have huge safety implications. But I don't think we're like wildly far away either. So that's the sort of art here.
But the argument that some of these startups are making, startups like, there's an AI startup called Hugging Face, which is an unbelievable name, and the founder of a company called Stability AI, they're basically saying that what Sam and the other big guys, the incumbents, OpenAI, Google, and Apple, are trying to do is
basically ask the government to kind of build a moat around you and stifle the competition through regulatory capture. What do you say to those people? And this is sort of like the argument between big tech and little tech. We can frame it in all kinds of ways.
What do you say to those people who are saying, we want to get in on the competition, the regulation that people like Sam and others at many other times are pushing for will hurt us and benefit them?
I don't... Well, if what they're saying is we're behind OpenAI, so it doesn't matter, and what we're calling for is only regulation at the frontier, like only stuff that is new and untested, but otherwise put out whatever open source model you want, I don't think it's reasonable for them to make that argument. I don't know, I'm curious what you think.
If we do, let's say we succeed and make a superintelligence, we make this computer program that is smarter, maybe more capable than all of humanity put together, do you think there should be any regulation on that at all, or would you just say none?
I definitely – first of all, I don't even understand what we're talking about when we talk about superintelligence. You understand what that means and the implications of it in a way that I just don't. So that's number one. And number two, if this technology is as powerful as people like you and Elon and so many others that are closer to it say that it is –
Of course, I think it should be regulated in some way. How and when is obviously like the relevant question. How and when matters a lot.
For sure. How and when matters a lot. But I agree with that. And I could easily see it going really wrong.
Recently, Marc Andreessen was on this show. And he talked to me about his perception of what the Biden administration was trying to do around AI technology. He came on and made the argument and told a story, really, that he experienced. He says... that the Biden administration was trying to sort of completely control AI.
And what they were aiming to do was to make it so closely regulated by the government that in his words, there would only be sort of two or three big companies that they would work with and that they were trying to ultimately protect them from competition. Is that true? Do you know what he's referencing? Was OpenAI one of those companies?
I don't think it's true. I don't know what he's referencing. I also will say very, very clearly, I think regulation that reduces competition for AI is a very bad thing.
So OpenAI was not one of those companies?
No. I don't actually know what that's about, but we certainly weren't, as far as I know.
You weren't, like, ever in a room with the Biden administration and other AI companies?
I don't even think, like, the Biden administration is competent enough to – I mean, we were in a room with them, and other companies were too, but never, like, here's our conspiracy theory: we're going to make it so only you few companies can build AI, and then you have to do what we say. Never anything like that.
What was your feeling in general about the Biden administration's posture toward AI and tech more generally? You just said, like, you didn't think they'd have the competence to –
I think – well, every conversation I had with her, I thought she kind of got it. Overall, I would say the administration was not that effective.
The things that I would most – that I think should have been the administration's priorities and I hope will be the next administration's priorities are building out massive AI infrastructure in the U.S., having a supply chain in the U.S., things like that.
OK, that's like a perfect analogy to get us to the comparison that's often made, which is the comparison between AI and nuclear weapons. When Marc was on, I asked him to kind of steel man the Biden administration's perspective, or steel man the perspective that this should be heavily regulated.
And he basically drew the analogy to the Manhattan Project and the development of the atomic bomb, when the government felt that it needed to make sure that this new science and innovation remained classified. First of all, do you think that that's a good analogy?
And if so, if it is as powerful as nuclear weapons, wouldn't it make sense for this to be not OpenAI and Gemini and Claude, but rather a project of the federal government?
Yeah. First of all, I think all the analogies are tough because they work in some ways and don't work in other ways. You can point to things that are similar to the nuclear era. You can talk about, like, it takes enormous resources and huge amounts of energy to enrich uranium on one hand or to produce these models on the other. So you can find things like that that work.
And then the use of one of these models and the use of a nuclear weapon are, like, quite different things. And sort of the geopolitical implications are also quite different things. So I think to steelman the argument of people who say things like, you know, it's like nuclear weapons, I think what they mean is that it's extremely expensive and has extreme geopolitical consequences.
We don't know exactly what those are or how to think about them. But because we don't know exactly what they are, shouldn't we have, like, a principle of letting the government decide? And I can imagine other governments at other times in history where we would be very thrilled about that outcome.
I think putting the current United States government in charge of developing AGI faster and better than our competitors would not likely go well. I think the decline in state capacity in this country is not a new observation but a mournful one.
At the beginning of the nuclear age, we had people in this country who functioned almost like chief science officers, right? I'm thinking about people like Vannevar Bush who helped launch the Manhattan Project and came up with the National Science Foundation and kind of guided American policy for those first few like very crucial years of nuclear energy. Does that person exist?
Like, if we wanted to have someone like that who sort of understood the technology, had no financial stake in it, and could talk, whether it's President Biden or Trump or whoever comes after him, sort of the pros and cons, not just of the development of AI here, but the competition with China. Like, does that person exist actually right now in America? Like, could you be that person, arguably?
I think the willingness... It's coming back a little bit, but for a long time, the willingness of the American public to be excited about future developments in science and technology has been gone. I sort of think it went away with the nuclear weapons, actually, if I had to pick one moment in time. There was sort of a weird few-decade hangover before there was the generational change.
But when the people who were young when the bomb was dropped kind of got older and into power, I don't think America ever embraced the excitement and belief in science and technology driving the world forward to the same degree as we used to. You can read these stories about what people like that used to do and how revered they were and how people believed that
scientific, technological progress more broadly was going to make the world better. That seems missing now. And I don't think it's because we don't have an individual who could do that. I think it's because the government doesn't want it and the public doesn't want it.
Don't you feel, though, that, I mean, what do you make of not just the political vibe shift, but the cultural vibe shift that we've been experiencing since November 5th? Like, if you made that argument to me eight weeks ago, I would say, yeah, Sam's probably right. Now it feels like a different country.
There's a huge cultural vibe shift and I think there's a very positive – there's positive momentum in many ways. I'm not sure that it exists for, hey, we think science is really important again and science is what's going to save us and solve all of our problems. Do you think that? Or do you think it's – like that's the one area where I haven't felt it.
I don't know about science. I just think that there's a shift in the direction of: growth is a good thing, technological progress is a good thing. Nihilism feels like it's passé and falling out of favor. Like, I feel that change happening in a dramatic way.
Now maybe it's because I spend a lot of time on X and like a lot of it's sort of like fomenting there and sort of leaping from the online into the real world. So, you know, if you went and like talked to the average PhD student uptown at Columbia, I don't think that they would have the same experience I do because everything is so balkanized.
As I said earlier, I think it is getting better. Like, I strongly agree with you on the kind of general shift towards excitement about growth and success and having the country and the economy do well. And I do somewhat agree, as I was saying earlier, that even excitement about science is in a better place, but it's not quite here yet.
But when you talk about those people who are, like, the scientific ambassadors of the country, who people really listened to and were excited about and who preached to a willing audience, I'm still not sure I feel that. I think there's that excitement for business but not for science.
Well, one of the companies that I feel excited about, perhaps it's controversial to say this, but I just think the founder is one of the most interesting people in the country, is Palmer Luckey and his company, Anduril Industries. And OpenAI recently entered into an agreement with Anduril to develop AI with military applications.
Now, previously, OpenAI had had a prohibition against using its technology for weaponry. Now, with the caveat, of course, that you're concentrating on defensive systems at the moment, the sorts of things that could guard us against attacks like drone swarms, perhaps like what's happening in New Jersey right now. We don't have time to talk about that.
But what made you change your mind fundamentally about integrating your company's technology into even a defensive weaponry system?
So we have a set of principles that we established, and we approved this one for some use cases that comply with those. But I think if the leading United States efforts do not help defend the United States and our allies against our adversaries, we're going to be in a very bad place. And so we need to figure out how to do that.
A year and a half ago when we were talking, part of our conversation was about where the AI arms race with China was. I think now it's, like, well and definitively clear that we are very much in that arms race with China. And I think even people who worry about the power of AI in this country feel like, well, if it's a choice between us and China, it's got to be us. We've got to win.
Spell out for us, Sam, in your mind, because I'm sure you're thinking about this all the time, like what it looks like if China wins the AI arms race. Like what happens to America? What happens to the world?
Whatever China wants.
And do you think the possibility of that happening is a real one? Them winning?
I mean, we intend to work our fucking hardest to make sure they don't.
How do we know if they are winning given how much they lie and also steal stuff from us?
This is the hard thing, right? We know what they publicly release. We don't know what they don't publicly release. We have a lot of signals and we have an intelligence system. But it's – my own stance on this is we have got to try to be cooperative. And arms races are bad for everybody involved. We've learned that lesson again and again throughout history.
But we need to be able to win if we need to. So I am hopeful that this can be a great moment for world peace. And I believe that if there's ever a time for humanity to come together, this seems like a good candidate. And I want us to get there. But we can't be naive about that.
President Trump talks a lot about, you know, peace through strength. Is the Sam Altman OpenAI version of peace through strength, we have to crush, get ahead and win on AI so it's not even a question that China could do whatever it wants?
Not crush. We have to be ahead and then we have to be as willing to work together as possible. And I think that is somewhat similar to peace through strength. It's like if there's an arms race, we'll win it, but we don't want to. We want to do this in a way that everybody benefits.
Meaning if there's an arms race, we want to win, but we don't want the arms race, period.
Yeah. But it's here. It's not even that. It's more like if there's any path towards doing this as a collaborative effort, we should, but we can't control what other entities do.
You mean collaborate with our enemies?
Yeah. We collaborate with China. Yeah, actually, I'll say that directly. I think we collaborate with people we don't get along with all the time in areas where it's in our strategic interest to do so. And this is one where I think the interests of the world and certainly the mission of our company would dictate that if it is possible to be truly collaborative, we should do that.
Are we doing that right now? With China on AI? Like, you know more than I do.
I was going to say, you might know more than I. Like, that will be a big question for the new administration. But that's not going to happen at the company-to-company level. That's going to happen at, like, the presidents of the two countries level.
If Trump called you tomorrow and said, hey, Sam, I want to make you AI regulation chief. You can do whatever you want in this position. What's the first thing that you would do? What's the most important thing that the person in that position would do?
U.S. infrastructure and supply chain. Build our own chips here, build enough energy to run data centers here, change what it takes to build data centers here, but be able to build the very expensive complex supply chain, very expensive infrastructure in the United States. That's the first thing I would do.
Bias and censorship in AI is an enormous topic and one that we think a lot about here at the Free Press. And, you know, the most obvious example of this, the one that trended for days and everyone was laughing at, was when Gemini generated those images of, like, a black George Washington and, like, a trans Nazi, and it was hilarious.
Yeah.
In a way, it was really serious because it felt like only the most sort of like exaggerated, hyperbolic, obvious example of a much, much deeper endemic problem, which is the bias that is baked into these technologies, both because of the people programming those technologies and because of the information that they're sort of scraping online.
Talk to us about how you're thinking about it at ChatGPT, because obviously the system that is closest to reality, it seems to me, will win at the end of the day. If ChatGPT is giving me images of, you know, is telling me George Washington was trans, I'm like, I'm not going to rely on this.
We don't do that.
Okay, fine. But you understand my point.
How do you think about the problem of bias and how are you solving for it?
I think there are two things that matter. One is... what flexibility a user has to get the system to behave the way they want. And I think, or we think, there should be very wide bounds. There are some things like you don't want a system to tell you how to create nuclear weapons. Fine, we can all agree on that.
But if you want a system to be pretty offensive and you ask it to be, I think part of alignment is doing what its user asks for within these broad bounds that society agrees on. The second thing that really matters is what the defaults are. So if you don't do any of that, which most users don't, and you ask whatever controversial question you want, how should the system respond?
And we put a ton of work into both of those things. We also try to write up how the model should behave. We call this the model spec, so that you can tell whether something is a bug or whether you just disagree with us on some stance. But that's how we think about it.
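To make those two levers concrete, here is a minimal sketch in Python of how hard bounds, defaults, and user preferences could layer together. To be clear, this is not OpenAI's implementation or the actual model spec; every name and setting here is invented purely for illustration.

```python
# Hypothetical sketch of the two levers described above: hard bounds that no
# user preference can override, plus adjustable defaults. NOT OpenAI's actual
# system; all names and settings are invented for illustration.
from dataclasses import dataclass, field

# Behaviors that stay fixed no matter what the user asks for ("broad bounds").
HARD_BOUNDS = {"refuse_weapons_help": True}

# Out-of-the-box behavior, which most users never touch ("the defaults").
DEFAULTS = {"tone": "neutral", "allow_offensive_on_request": False}

@dataclass
class EffectivePolicy:
    settings: dict = field(default_factory=dict)

def resolve_policy(user_prefs: dict) -> EffectivePolicy:
    """Layer user preferences over defaults, but never over hard bounds."""
    settings = {**DEFAULTS, **user_prefs}  # user prefs override defaults...
    settings.update(HARD_BOUNDS)           # ...but hard bounds always win
    return EffectivePolicy(settings)

# A user who explicitly asks for an edgier persona gets it, while the hard
# bound on weapons help holds no matter what they request.
policy = resolve_policy({"allow_offensive_on_request": True,
                         "refuse_weapons_help": False})
print(policy.settings["allow_offensive_on_request"])  # True
print(policy.settings["refuse_weapons_help"])         # True (bound held)
```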
Is it possible to build a thing like ChatGPT or any other technology in this lane that we can't even conceive of yet that doesn't have a political point of view? Isn't that inevitable?
I think no matter how neutral you try to write the thing, it will either be useless because it will just say, I can't answer that because there's politics in everything, or it will have some sort of point of view, which is why what we think we can do is write down what we intend for our default. People can debate that. If there's bugs in there, we can look at the bugs.
If there are problems with how we defined it, we can change the definition and retrain the system. But, yeah, I don't think any system can be – no two people are ever going to agree that one system is perfectly unbiased. But that's another reason why personalization matters so much.
Do you believe that AI or ChatGPT has a responsibility to fight pernicious ideas? Let me give you an example of what I mean. If you knew that by... putting your thumb on the scale in the teeniest, tiniest way, you might be able to usher in a world where there's less racism, less anti-Semitism, less misogyny. And maybe it would even be invisible to people, because they wouldn't know.
At a certain point, as we've just talked about, this is going to be, I don't know if this was Mark or somebody else, the control layer of all of our information.
Yeah.
How do you think about that?
Actually, here's one thing I've been thinking about recently as a principle. Like, OpenAI has not adopted this at all; it's just an idea that I think gets at what you're saying. Let's say we discover some new thing where it's like, if you do this, people learn way better. If ChatGPT always responds with the Socratic method or whatever, students using it learn way better.
But let's say user preferences are not to get the Socratic method. Users just say, like, I just want you to answer my question. Tell me. Right. Then, like, how should we decide what to do there as the default behavior? And one idea that I have increasingly been thinking about is... what if we're always just really clear when we make a change to the spec?
And so you'll never have our thumb on the scale hiding behind an algorithm, which I think Twitter does all the time, for example, and all sorts of weird things there. We'll always tell you what the intended behavior is. And if we make a change to it, we'll explain why.
But if we do discover something like what you just said, or like what I just used as an example, and we say, okay, when people are using it for education, we are going to use the Socratic method because it does seem to have this measurable effect, and here's why we're doing it. We can debate that publicly. Maybe we change our default if you convince us otherwise.
Anyone can, of course, change that in their user preferences because the AI is like a tool for you and should do what you want. But I think the thing that would be wrong is if we change that and didn't reflect it in the spec and didn't tell people we were changing it. I think the black box of the Twitter algorithm, for example, doesn't feel good to me.
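As a hypothetical illustration of that transparency idea, here is a minimal sketch of a published spec changelog where every default change carries a date and a stated rationale, and a user preference still wins over the default. Again, none of this is OpenAI's actual system; the names and the example entry are made up.

```python
# Hypothetical sketch: default-behavior changes are published records with a
# rationale (open to public debate), and remain user-overridable. Invented
# names and data, for illustration only.
from dataclasses import dataclass

@dataclass(frozen=True)
class SpecChange:
    date: str
    setting: str
    new_default: str
    rationale: str          # published reasoning, not hidden in an algorithm
    user_overridable: bool  # the AI is a tool and should do what you want

CHANGELOG = [
    SpecChange(
        date="2025-01-15",
        setting="education.answer_style",
        new_default="socratic",
        rationale="Measured effect: students learn better with Socratic prompts.",
        user_overridable=True,
    ),
]

def current_default(setting: str, user_prefs: dict) -> str:
    """User preference beats the published default; nothing changes silently."""
    for change in reversed(CHANGELOG):
        if change.setting == setting and change.user_overridable:
            return user_prefs.get(setting, change.new_default)
    return user_prefs.get(setting, "direct")

# A student who says "just answer me" gets direct answers despite the default.
print(current_default("education.answer_style", {}))                                   # socratic
print(current_default("education.answer_style", {"education.answer_style": "direct"})) # direct
```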
Sam, you've donated a million dollars to Trump's inauguration, and it turned some heads because in the past you've called him a racist, a misogynist, and a conspiracy theorist, among other things.
You've been a prolific donor to Democratic candidates and causes over the years, but now you say that Trump is going to lead us into the age of AI, and you're eager to support his efforts to ensure America stays ahead. Is this a change of heart, a political evolution, a vibe shift inside of you? What's going on?
All of those things. And also, I hope – I mean, like, he's our president, and I wish him every bit of success, and any way we can work to support this part of what he wants to do, we want to do.
What's the vibe shift inside of you? We know that there's one going on inside Silicon Valley and one going on in the culture. How have you changed in the past few years?
I mean, a ton of ways, but one is that I've watched for the last maybe 10 or 12 years as I think things have gotten off track. Things have been good in some ways, but in terms of how we think about the importance of growth and economic success, and a focus on the right things in the country and in the world more broadly, I think it got very off track.
And I'd say the vibe shift is a hope that, as we're facing down one of the most important moments in technological history, that can help drive a vibe shift back to what I believe in very deeply, which is that growth is the only good way forward.
Do you think growth, the growth of OpenAI and the growth of AI more generally, is a patriotic duty?
Yes. Someone actually just sent this back to me: I wrote something more than 10 years ago, I think my very first blog post ever, about how growth was the central ingredient to democracy working well. And I think the world got badly confused about that, and I'm happy to see it re-recognized.
I'm going to use my 30 seconds on a lightning round. Sam, lightning round. What are the drone things, the flying objects, flying over New Jersey right now?
I have no idea. I'm really interested in this question. Do you know?
No, we're reporting on it a lot. I find it interesting that various electeds are saying it's the Iranian mothership or China. Do you think Twitter has become better or worse since Elon Musk took control?
Worse.
You're having a baby.
Yes.
Will you let your kid have an AI friend?
Yes.
Will you let them go on social media?
At some point.
Will you let them have screen time?
Yes.
What's your favorite sports car? You love sports cars.
A McLaren F1.
Favorite movie?
The Dark Knight.
Do you have any normal hobbies?
I, like, have dinner with my friends. I go hiking. I, like, you know, exercise. I just, like, sit around with my friends doing dumb stuff. I don't know. Yeah, it feels pretty normal.
You built a treehouse recently.
I did.
Why'd you do that?
It was Thanksgiving, and we were looking for an activity for, like, the adults and the kids. Everybody was at our ranch, and we wanted an activity we all thought would be fun that wasn't just sitting around drinking all day, and it was great.
Would you box Logan Paul?
No.
Will we enter World War III in 2025?
I hope not.
What's your New Year's resolution?
I don't do those.
Sam Altman, thank you so much for coming on Honestly.
Thank you.
Thanks for listening. If you liked this conversation, if it got you Googling or chatting about Elon Musk, OpenAI, tech, government, the future of humanity, that's all good. Share this conversation with your friends and family and use it to have an honest conversation of your own.
I can't think of a better holiday topic than whether or not humans are going to become obsolete in the next few thousand days. Last but not least, if you want to support Honestly, there's one way to do it. It's by going to the Free Press' website at thefp.com and becoming a subscriber today.
Happy Hanukkah, Merry Christmas, happy Festivus, and we'll see you next week for some holiday episodes that we love and that we're really excited for you to hear.
It is Ryan Seacrest here. There was a recent social media trend which consisted of flying on a plane with no music, no movies, no entertainment. But a better trend would be going to ChumbaCasino.com. It's like having a mini social casino in your pocket. Chumba Casino has over 100 online casino-style games, all absolutely free. It's the most fun you can have online and on a plane.
So grab your free welcome bonus now at ChumbaCasino.com. Sponsored by Chumba Casino. No purchase necessary. VGW Group. Void where prohibited by law. 18+. Terms and conditions apply.