A year ago, we saw a stand-off between OpenAI's non-profit board and its leader, Sam Altman. Since then, the board has been reshuffled, Altman has consolidated power, and under his leadership, some strange things have happened. If AI might change the world, and OpenAI is leading the field -- how worried should we be? We check in with tech reporter Casey Newton of the newsletter Platformer and the podcast Hard Fork. Listen to our previous OpenAI episode (https://pjvogt.substack.com/p/who-should-be-in-charge-of-ai). Support the show: searchengine.show
Today's episode is presented by SAP Business AI, revolutionary technology, real-world results. Search Engine is brought to you by Ford. As a Ford owner, there are lots of choices of where to get your vehicle serviced. You can choose to go to their place, the local dealership, your place, home, apartment, condo, your workplace, even your happy place, like your cottage on the lake.
Go to your Ford dealer or choose Ford pickup and delivery to have your vehicle picked up, serviced, and brought right back. Or choose mobile service, where a technician will come to you and do routine maintenance right on the spot. Both are complimentary and depend on your location. That's ownership built around you.
Contact your participating dealer or visit FordService.com for important details and limitations. Welcome to Search Engine, no question too big, no question too small. This week, should we be worried about OpenAI? So last fall, we reported a story about OpenAI, the leading company in artificial intelligence, led by charismatic co-founder Sam Altman.
The company was famous not just for its runaway success, but also for its unusual ethos and structure. Rather than simply being a for-profit company, it was a nonprofit in charge of a for-profit company. And that nonprofit could seemingly disable the for-profit company at any point if it decided that the company was acting in a way that was dangerous for society.
It was like a tech company with a doomsday switch built into it. A recognition both of AI's potential power to reshape society, as well as an understanding perhaps that the last round of technological innovation has not been completely wonderful for the world. Our last story was about how OpenAI's nonprofit guardians had decided that the company had in fact gone off course.
In November 2023, they deposed their own leader, suddenly and dramatically. Sam Altman is out as CEO of OpenAI, the company just announcing a leadership transition.
The godfather of ChatGPT kicked out of the company he founded.
It looks like things were over for Sam Altman until his loyalists got on board with a counter coup. Nearly every rank and file employee at the company signed a petition demanding his return.
90 percent of the company's 770 employees signed a letter threatening to leave unless the current board of directors resigned and reinstated Altman as head of OpenAI.
Finally, Microsoft, OpenAI's biggest shareholder, also stepped in, in support of Sam. Quickly thereafter, he was reinstated.
Sam Altman back as CEO of OpenAI. OpenAI posting on X that Sam Altman will now officially return as CEO. It's also overhauling the board that fired him with new directors, ending a dramatic five-day standoff that's transfixed Silicon Valley and the artificial intelligence industry.
So OpenAI's rebellious board was basically replaced with a compliant one. Sam Altman, who was temporarily deemed too dangerous to run his own company, instead consolidated power there. That was a year ago.
In the year since, OpenAI has not turned on an army of terminators to kill us all, but the company has transformed into a somewhat different-seeming institution, with lots of strange public errors in judgment along the way. We hoped to talk to someone at OpenAI for this story. They did not make anyone available for comment. So instead, I called a tech journalist I know.
Want to see something crazy?
Of course I want to see something crazy.
Okay. Oh, I guess I can only go one way.
Wait, what are you doing?
I just got a new webcam and it follows my face.
But it didn't follow your face.
Now it's not following it. Damn it!
You just stood up and sashayed out of frame, and I was trying to figure out what I was supposed to pay attention to. Casey Newton, founder and editor of the Platformer newsletter, co-host of the Hard Fork podcast, and perhaps a sometimes too-early adopter of exciting technologies. Casey's a reporter we spoke to last year when everything seemed to be exploding at OpenAI.
And he's continued covering all the strange happenings at the company since then. I wanted to talk to him not because I'm a gossip hound for Silicon Valley, but because I really wondered, if AI is a technology that can really change the world, how concerned should I be about some relatively erratic behavior from the company leading the field?
Casey was happy to fill me in on what had been going on with Sam Altman and his very valuable startup since I last wondered about these things 12 months ago.
Well, I think on the business side, OpenAI has had an incredible year. The New York Times recently reported that its monthly revenue had hit $300 million in August, which was up 1700% since the beginning of 2023. And it expects about $3.7 billion in annual sales this year. I went back to February, and back then it was predicted that OpenAI was going to make a mere $2 billion this year.
So just this year, the amount of money they expected to make nearly doubled. They further believe that their revenue will be $11.6 billion next year. So those are growth rates that we typically see only for kind of once-in-a-generation companies that really manage to hit on something new and novel in technology.
What about how are they actually running the place? Because I will tell you my perception as a person who follows this less closely than you is, I feel like I see as many stories about OpenAI tripping over its clown shoes as I do stories about how the new GPT is slightly better than the one that preceded it.
Can you give me the timeline of last year, which stories stuck out to you and how you thought about them?
So I think at a high level, and somewhat to my surprise, Sam Altman changed very little about the way that he led OpenAI in the last year. Like if the concern that came up last year was that Sam was not being very collaborative, that he was not empowering other leaders, that he was operating this as a sort of very strong CEO who was not delegating a lot of power.
I haven't seen a lot of change in the past year. I have seen him continue to pursue his own highest priorities, like fundraising to build giant microchip fabrication plants, for example, which has been a huge priority for him. At the same time, there have been stories that have come out along the way that reminded you why people were nervous about the company last year.
One that comes to mind is that it was revealed this spring that OpenAI had been forcing employees when they left to sign non-disclosure agreements, which is somewhat unusual. But then very unusually, they told those employees, if you do not sign this NDA, we can claw back the equity that we have given you in the company.
So how unusual is that? How unusual is that in tech for a tech company to say, if a person quits Facebook and then they say Facebook was a bad company, How unusual would it be for Facebook to be like, we are taking back your stock?
It would be impossible. They don't do that. They don't do that? No, they don't do that. So this is just extraordinarily unusual.
You know, sometimes with like a C-suite executive or someone very high up in the company, if they, maybe let's say they're fired, but the company doesn't want them to run around badmouthing them to their competitors, they might make that person sign an NDA in exchange for a lot of money. But this thing was just hitting the rank and file employees at OpenAI, and that was really, really unusual.
Yeah. And afterwards, Sam Altman posted on X saying that he would not do this and that it was one of the few times he had been genuinely embarrassed running OpenAI. He did not know this was happening and he should have, is what he said.
And just to like, I feel like journalists have this bias, which is like, we believe in transparency, we believe in disclosure. Sometimes I think non-journalists care less than we do because... we kind of have a rooting interest in transparency and disclosure. But it's also been really confusing, not as a reporter, but just as a human being. I don't know. There's a lot of things I worry about.
Most of them are selfish and personal. Like, what happens with OpenAI is... maybe in the top 500 or a couple hundred. But there is a part of my mind that worries about it. And when I worry about it and my prediction ledger activates, I'm always like, well, it seems like a lot of people are quitting. A lot of the people quitting work on the "let's stop this from screwing up the world" team.
But they always quit and they're like, well, we just have a difference of agreement, can't say more. And it's really confusing.
Yeah, absolutely. And, you know, I will say that there has been great reporting over the past year by other journalists who have gotten at what some of those concerns are. And a lot of them wind up being the same thing, which is we launched a product, and I think we should have done a lot more testing before we launched that product, but we didn't.
And so now we have accelerated this kind of AI arms race that we are in, and that will likely end badly because we are much closer to building superintelligence than we are to understanding how to safely build a superintelligence. I see.
It's like what I've noticed as a user of AI. I actually noticed the safeguards. The other day I saw somebody was making a meme making fun of a celebrity online. And as often happens these days, I like didn't recognize the celebrity. And I plugged the picture into ChatGPT and I was like, who's this? Which is the main way I use ChatGPT is to say, what's this?
And it was like, I don't identify human beings. I was like, okay.
Oh.
That's a rule that you're following. But what you're saying is that in these fast rollouts, smart rules like that, which would stop people from using AI in a bad way or stop AI from just deciding to do things that are bad, those might be getting overridden.
And that if all these companies are competing with each other to build the most powerful thing the fastest, one company ignoring safeguards means all the other companies ignore safeguards.
Exactly, and we have seen this time and time again. I mean, this is really fundamental to the DNA of OpenAI. When they released ChatGPT, other companies had developed large language models that were just as good, but Sam got spooked that his rival, Anthropic, which had an LLM named Claude, was going to release their product first and might steal all of their thunder.
And so they released ChatGPT to get out in front of Claude. And that was essentially the starting gun that launched the entire AI race. And so I think it is fundamental to how Sam sees the world that all of this stuff is inevitable. And if it's going to happen anyway, all other things being equal, you would rather be the person who did it, right?
And got the credit and the glory and the users and the revenue.
So that is our overarching problem here. AI developers might care about safety, but in the rush to be first in the field, the company that wins could actually be the company that cares about safety the least, which is why we are talking about worrying incidents from the industry leader, OpenAI.
So one of the incidents was this NDA incident first reported by Vox this May, and the company did backtrack on those NDAs. An OpenAI spokesperson told Vox, quote, we have never canceled any current or former employees' vested equity, nor will we if people do not sign a release or non-disparagement agreement when they exit, end quote.
A separate incident, Casey, I dug into was the Scarlett Johansson incident.
Do you want to tell that story? Yeah. For a while, OpenAI had been working on a voice mode for ChatGPT. Instead of just typing in a box, you could tap a button on your phone and interact with the model using a voice. And a movie that has long inspired people in Silicon Valley is the Spike Jonze film, Her.
And in that film, Joaquin Phoenix, who plays the protagonist of that film, talks constantly to an AI companion who is voiced by Scarlett Johansson.
Do you want to know how I work? Yeah, actually. How do you work? Well, basically, I have intuition. I mean, the DNA of who I am is based on the millions of personalities of all the programmers who wrote me. But what makes me me is my ability to grow through my experiences. So basically, in every moment, I'm evolving, just like you.
And I just wanted to say, before you even continue with your story, what is so weird about this movie being a huge inspiration to people in Silicon Valley is it is a cautionary, dystopian film. I saw this movie. This is not a joke. I saw this movie and it upset me so much at the time.
I was talking to a friend afterwards and she said, I think you should probably talk to a psychiatrist and go on antidepressants. Which I did for several years. I'm not on them any longer. I went on them because of the movie Her.
Oh, my God.
It is so strange to me that people saw this movie and were like, ah, we should have this. But anyway, they love it. They want to make it the future.
Well, look, you could take different lessons from Her. You know, I think a bad lesson to take would be human companionship is worthless at the moment that we invent AI superintelligence because we can just talk to superintelligence all day long and turn our backs on humanity. That would be a bad lesson to learn.
I think a lot of people in Silicon Valley looked at Her and they thought, oh, that's a really good natural user interface. Like, if we could just wear earbuds all day long and you could answer any question you ever had just by saying, hey, Her, what's going on with this? That would be great.
And then, in fact, you do start to see the arrival of products like Siri and Alexa and sort of baby steps toward this new world. So I completely agree with you. Her is a dystopian film. It should not be viewed as a blueprint to build the future. At the same time, I do feel like I see what Silicon Valley saw in it.
Right, you could see Star Wars and be like, oh, spaceships one person can pilot could be a good idea. It doesn't mean you're trying to build, like, TIE fighters to take over Alderaan or whatever.
Right, and lightsabers are a good idea, and we should build those.
I completely agree, and I still think about it. So... Her comes out, tech people are like, oh, it would be really good to have an AI you could talk to. That's like one lesson from the movie. Lightsabers would be good too.
And when OpenAI releases their voice agent, which is sort of, you know, a real life version of part of this movie, the thing that a lot of people notice is that one of the possible voices for the voice agent sounds quite a bit like Scarlett Johansson, the voice from the movie.
Hey, how's it going?
Hey, Rocky. I'm doing great. How about you?
I'm awesome. Listen, I got some huge news.
Oh, do tell. I'm all ears.
Well, in a few minutes, I'm going to be interviewing at OpenAI. Have you heard of them?
OpenAI? Huh? Sounds vaguely familiar. Kidding, of course. That's incredible, Rocky. What kind of interview?
Not only did the voice sound very much like Scarlett Johansson, it was also presented in this very flirty way. When they did this demo, it was like, it's a man using an assistant who has the voice of a woman who sounds a lot like Scarlett Johansson. And she's like, oh, PJ, you're so bad. That was kind of the tone of it. And it was sort of like, what are you doing here exactly?
After the product launched, a user on TikTok even asked ChatGPT itself if it believed it was a Johansson clone. Hey, is your voice supposed to be Scarlett Johansson?
No, my voice isn't designed to replicate Scarlett Johansson or any specific person.
Hilariously, the voice has never sounded more similar to Johansson's to me than when it was denying the resemblance. Casey said the company itself had also contributed to this confusion.
Sam Altman had primed everyone to think that way because a couple days before they do this demonstration where they show off the voice for the first time, Sam Altman tweets the word "her." Or I should say he posts it on X. And so, of course, when this demo happens, everyone is like, oh.
And so everyone was sort of primed to think, oh, wow, OpenAI has realized Silicon Valley's decade-long dream of making the movie Her a reality.
And then what happens?
Then it turned out that Scarlett Johansson was really mad because Sam Altman had gone to her last year and said, hey, would you like to be a voice for this thing? And she thought about it and she said, no, I don't want to. And then...
Apparently, after he had posted, like just in the couple days before the demo, he'd gone back to her agents and tried to renegotiate this whole thing and said, are you sure you don't want to be the voice for this thing? And she said no, and they showed it off anyway. And they never said, this is Scarlett Johansson, but they absolutely let everyone believe it.
A new controversy tonight in the world of artificial intelligence, as one of Hollywood's biggest movie stars says her voice was copied without her consent by one of the most powerful AI companies. Actress Scarlett Johansson claims OpenAI's ChatGPT mimicked her voice for its latest personal assistant program.
This bizarre moment led to Scarlett Johansson then making the rounds on TV, advocating for legislation to protect the intellectual property, really the identity, of actors like herself.
Obviously, we're all waiting and supporting, like, this, like, the passing of legislation to protect everybody's individual rights. And I think, you know, it's, yeah, like, we're still waiting for it, right? So, like, until this is just maybe sort of highlights, like, how vulnerable everybody is to it.
I think this was the story for me of all the stories that really stuck with me. And maybe it was because the message it gave me was a kind of impunity. And the promise, as I've understood it from OpenAI, has been exactly the opposite of impunity.
And obviously, of all the choices they make, whether they find a sound-alike voice actress and do a voice that sounds a lot like Scarlett Johansson and then kind of smudge the truth, I could see a person getting overenthusiastic and making that mistake. It's the kind of mistake a podcast would make in its first couple of years, where you're like, oh, geez, oh, God, we're really sorry.
But it seems careless. Also, this is a product where one of people's concerns is the copyright implications, where these AI companies are hoovering up a lot of people's creative work to make their products. And it just felt like what you expect from a company that doesn't care what you think and wants to do what it wants.
And I don't know if I'm overreading, but it was a moment that kind of like gave me a little bit of future nausea.
I agree with you. And I think you framed it really well because this is the company that has told us from the beginning, we're working on something very powerful. We think it could solve a lot of problems. If it falls into the wrong hands, it could also be extremely dangerous.
And so that's why we're going to come up with a very unusual structure for ourselves and try to do absolutely everything we can do in our power to proceed safely, cautiously, and responsibly. And so you look at the Scarlett Johansson thing, and none of that squares with their behavior in that case.
So that was the Scarlett Johansson incident. Casey told me about another incident, this one from this past August. Let's call that one the lazy student problem.
I mean, this is a kind of short and funny one, but there was reporting this year that they built a tool that detects when students are using ChatGPT to do their homework, but they won't release it. Oh!
How do they explain why they're not releasing it? As someone who has had to have a conversation with a teenager about why they shouldn't cheat using OpenAI and really stumbled on the part where I was like, listen... It's the wrong thing to do and you probably won't get caught. And also, yes, probably all your friends are doing it.
And then like there were several ellipses of pause while I realized the hole I'd dug myself into. Why won't they just release the homework checker?
So I should say the Wall Street Journal broke this story, and the statement they gave them was: the text watermarking method we're developing is technically promising, but has important risks we're weighing while we research alternatives; we believe the deliberate approach we've taken is necessary given the complexities involved and its likely impact on the broader ecosystem beyond OpenAI.
That is what they said. The Journal sort of made an alternate case, which is that if you can't use ChatGPT to cheat on your homework, you will stop paying the company $20 a month.
It's so funny to imagine what part of their revenue is coming from high schoolers and college kids. And also, like, I don't know, maybe there's an argument that, sort of the same way we don't need to do long division anymore, nobody needs to be able to think or reason in an essay form. But I kind of think people still need to be able to think or reason in an essay form.
I mean, maybe long division is important, too. I don't know. Right.
So if we're trying to decide if we trust OpenAI to be not just a profitable company, but also a kind of unusually ethical AI standard-bearer: their willingness to accept a bunch of grubby $20 bills from high schoolers who want to skip their homework and play more Fortnite, it's not the end of the world, but it is behavior unethical enough that you'd probably fire a babysitter over it.
Casey also told me about an additional incident that had given some people pause, the investments incident. This one had to do with Sam Altman personally, specifically the way he's been quietly spending his money, investing in companies like Stripe, Airbnb, and Reddit.
We did learn about Sam Altman's investment empire this year, thanks to some reporting in the Wall Street Journal. And they really dug into all of the stakes that he has in many startups and found that he controls at least $2.8 billion worth of holdings.
And he's used those holdings to secure a line of credit from JPMorgan Chase, which gives him access to hundreds of millions more dollars, which he can put into private companies. And why is this interesting? Well, one, that's kind of a pretty risky gamble to have a lot of your... net worth tied up in debt that you raised using your venture investments as collateral.
That's kind of like a rickety ladder of investments right there. But it also raises questions: which companies is OpenAI doing deals with, and are those companies ones that Sam has investments in? Of course, Sam doesn't own equity in OpenAI right now, and so his own wealth is tied up in these investments.
And while nobody really thinks that Sam is doing any of this for the money, there was just kind of also this financial element to what we learned about him this year that I think raised some questions for people.
I feel like one of the things where I feel a little bit disabused is I think a couple years ago, I hadn't made up my mind, but I felt very willing to entertain the possibility that Sam Altman was a very unusual kind of person, that he didn't seem to be motivated by...
accumulating wealth to the same degree as maybe other people are, that he might not be entirely motivated by accumulating power, that he might just have a vision for a technology that could be really useful or it could be really dangerous and thought he might be the best person to be a steward of that. I'm not saying I was right then. I'm not saying I was wrong then.
But like, do you feel like you have a changed or refined view of what motivates this person who has a lot of power?
Hmm.
I essentially have the same view of his motivations. And I think the generous version of it is that he is in a long line of Silicon Valley entrepreneurs who thought they could use innovation to solve some of the world's biggest problems and that that is how they want to spend their lives.
I think the less generous version of it is that this person coming out of that tradition found himself working on this technology that could essentially be like the technology that ends all other technologies. Because if the thing works out, the thing you've created just creates all other innovation automatically for the rest of time. And that...
is a position of extraordinary power to put yourself in. And I do think that he is attracted to the power and the influence that will come from being one of the people who invents this incredibly powerful thing.
After a short break, Casey already mentioned that there have been a lot of senior level departures at OpenAI. We're gonna dive deeper into who left and what they seemed to believe about the company they were quitting. Plus, we'll look at a fairly worrying warning manifesto published by an ex-OpenAI employee. That's after some ads. Today's episode is presented by SAP Business AI.
Revolutionary technology, real-world results. Hi, Bailey.
Hello, it's so nice to meet you.
I recently spoke to a listener in North Carolina who emailed us about life at her job. She was calling us from outside her office, inside her car.
My friend Zoe is lurking outside because she's the one that introduced me to the podcast. Ha ha ha! So she's looking at a leaf excitedly and giving me strange looks.
That's so funny. That's so funny. Bailey's a web designer for a company that helps promote events for clients all over the world. She told me about some of the ways she uses AI to shorten her workday.
There are some fun use cases like extending images a little bit larger in Photoshop. There's languages that are like non-Latin that have funky characters with no spaces like Japanese. And sometimes I have to do translated websites for events that are happening in, say, West Japan.
Oh, wow.
And when that happens... I have to include manual breaks for the lines because if there aren't manual breaks, the meaning will change. And so I will sometimes use AI to help me find the best breaks that can be there every possible spot.
So you're using it not for translation, but to make sure that you're not like starting a new line in a way that would totally change the meaning of what you're trying to communicate?
Yeah, exactly.
And did you learn this the hard way that the line break can change the meaning of a thing?
A little bit. I know we sent a site to a client in like a local office to someone who spoke Japanese, and it didn't go the best. They had to respond back and say like, hey, this doesn't work. This isn't going to be okay.
Bailey at first thought she was going to have to learn rudimentary Japanese, but then she realized software could actually help her out here. No need to learn a new language.
I get to spend more of my time doing the actual like creative, intensive and UX parts of web design versus the menial kind of repetitive parts.
Thank you so much for talking about this.
Yeah.
Should we wave to Zoe?
Oh, absolutely. Wait, where did she go?
She may have wandered off.
She might be hiding behind my car.
Thanks again to Bailey for chatting with us and Zoe for so expertly hiding nearby. Ready to elevate your business? With SAP Business AI, you can grow revenue, increase efficiencies, and manage risks seamlessly. It's relevant, reliable, and responsible. AI is embedded into SAP solutions to enable businesses to drive immediate business impact across their organization.
It can also help you make confident decisions based on AI grounded in business data and put AI into practice with the highest ethical, security, and privacy standards. SAP Business AI, revolutionary technology, real-world results. This episode is also brought to you by Rocket Money.
I tried out Rocket Money and I found out that I should never take a taxi cab again, and that I was signed up for a bunch of fringe streaming services I'd signed up to watch one thing on and forgotten about. So it saved me a bunch of money.
Rocket Money is a personal finance app that helps find and cancel your unwanted subscriptions, monitors your spending, and helps lower your bills so you can grow your savings. See all of your subscriptions in one place and know exactly where your money is going. For any you don't want anymore, Rocket Money can help you cancel them with a few taps.
Rocket Money will even try to negotiate lower bills for you, sometimes by up to 20%. They automatically scan your bills to find opportunities to save, then you can ask them to negotiate. They'll deal with customer service. Rocket Money has over 5 million users and has saved a total of $500 million in canceled subscriptions, saving members up to $740 a year when using all of the app's features.
Stop wasting money on things you don't use. Cancel your unwanted subscriptions by going to rocketmoney.com. That's rocketmoney.com. Welcome back to the show. So if you, like me, were at best quarter-paying attention to developments at OpenAI the past 12 months, the thing you still may have noticed was just a very unusual number of senior-level people leaving their jobs.
It was the kind of turnover you'd expect to see at a Halloween store in November, not typically at one of the most valuable new American technology companies. We've already mentioned this, but OpenAI employees were in many cases discouraged from criticizing the company. And yet, there's still been some evidence about why they left and what they saw before they did. So we're going to get into that.
This part is not so much an incident as it is a series of incidents, a trend. Let's call this bit Sudden Departures.
So the first big one out the door this year is this guy, Andrej Karpathy, who was part of the founding team. He left for a while to go to Tesla. He comes back for exactly one year and then leaves.
Okay.
In May, Ilya Sutskever, who was one of the board members who had forced Sam out last year, he announces that he is leaving the company and doesn't really say much about why he's leaving. But within a month, it's revealed that he's working on his own AI company called Safe Super Intelligence and raises a billion dollars just to get it off the ground.
Oh, wow.
Yeah. He had a guy on his research team named Jan Leike. So this was somebody else who was trying to make sure that AI is built safely. He leaves to go to Anthropic to work on that problem there. Gretchen Krueger, who's another policy researcher, leaves in May.
Then in August, John Schulman, who was one of the members of the founding team, he announced that he was going to Anthropic, and he had previously helped to build ChatGPT. Then Greg Brockman, who is the president of OpenAI and one of its main public facing spokespeople, he announces that he is taking an extended leave of absence.
Basically just says he really needs a break, not entirely sure what happened there. Then finally, Mira Murati announces that she is leaving in September. She had also been part of this board drama last year. And on the same day that she left, it was revealed that the company's chief research officer, Bob McGrew, and another research VP, Barret Zoph, were also leaving the company.
That's just a lot of talent walking out the door, PJ. And I can say, if you look at the other major AI companies, so like a Google, a Meta, an Anthropic, there has been nothing comparable this year in terms of that level of turnover.
So you have, like, huge turnover at the top of a company that, in theory, people should want to stay at because it's, like, leading the industry. It's incredibly valuable. It's the winning team. And people are walking out the door saying they don't want to play for it.
Yeah, totally. But, you know, another really important story about Mira Murati is that before Sam was ousted last year, she had written a private memo to Sam raising questions about his management and had shared her concerns with the board. Oh, interesting. And my understanding is that that had weighed heavily on the board when they fired Sam. Because to have the CTO of the company...
coming to you and saying, hey, this is a real problem, that's going to get your attention in a way that maybe a rank-and-file employee might not have been able to get their attention. So we have known for some time now that Mira has had long-standing concerns with Sam's management style. And so when she finally left, it felt like the end to a story that we had been following for some time.
And so has she said anything publicly that is very decipherable about her reason for exiting?
So, you know, she said there's never an ideal time to step away from a place one cherishes, which I felt like was just an acknowledgement that this seemed like a pretty bad time to step away. But she said that she wanted the time and space to do her own exploration.
And on the day that we recorded this, The Information reported that she's already talking to some other recently departed OpenAI people about potentially starting another AI company with them. Because that is what people do. Most people, when they leave OpenAI, they start an AI company that looks shockingly similar to OpenAI, just without Sam. And why is that? Well...
My glib answer is that the high-ranking people who leave OpenAI seem to feel like the problem with OpenAI is Sam Altman. And that if you could build AI without Sam Altman, you would probably be having a better time.
I see.
And then there's this one other guy who left that I want to talk about.
Yeah.
It's this guy named Leopold Aschenbrenner. Okay. Have you heard of this guy?
No, I've not.
So he is quite young. He's still in his 20s. He was a researcher at OpenAI. He is fired, he says, for taking some concerns to the board about safety research. OpenAI denies this. But he goes away and he comes back in June and he publishes a 50,000 word document online called Situational Awareness. Were you aware of Situational Awareness?
I was not aware of Situational Awareness.
Okay, well, I'm here to make you aware of Situational Awareness. It's this very long document that was the talk of Silicon Valley for a week or so. And in it, Leopold says... Essentially, the rest of you out there in the world don't seem to be getting it. You don't understand how fast AI is developing. You don't understand that we're actually running out of benchmarks to have it blow past.
And this technology really is about to change everything just within a few years. And it sure seems like outside our tiny little bubble here, not enough people are paying attention. And this document winds up getting circulated all throughout the Biden White House. It's circulated in the Trump campaign.
And I think Leopold Aschenbrenner might, in a Trump administration, have talked himself into a role like leading the Homeland Security Department or something. But yeah, he was another one of the interesting departures this year.
That's a crazy document. What do you make of it?
I think that while you might take issue with some of his logic and some of his graphs, and maybe he's hand-waving past certain potential limits in the development of this technology, he is getting at something real, which is that it does seem like even though AI is essentially topic number one in tech, it doesn't feel like people are really reckoning with the potential consequences the way they should.
You know, some people may listen to this and say, well, you know, Casey has sort of fallen for all of the hype here. You know, there remains this contingent of people who believe that this whole thing is a house of cards and that once the successor to GPT-4 comes out, we will see that the rate of progress has slowed. And in fact, no one is going to invent superintelligence anytime soon.
And all of these things are just going to sort of wash away. It might just be an effect of who I spend my time with and the conversations that are happening at dinners and drinks in San Francisco every day. But I am more or less persuaded that we are very close to having technology that is smarter than very smart humans in most cases.
And that if you are the person who controls the keys to that technology, then yes, you will be extraordinarily powerful.
Listening to Casey, I started to imagine a potential world where AI continues to grow at whatever pace it grows at, but where OpenAI squanders its early lead in the industry and just becomes less important over time. I wanted to know what Casey thought of this possibility.
Do you think there's a world where OpenAI becomes less important to the future of this thing and, you know, we'll end up talking more about these other companies because these other companies have absorbed so much of the talent of that place?
Yes, and there's actually this really fascinating precedent for this in Silicon Valley. So we call Silicon Valley Silicon Valley because it was where the semiconductor industry was founded. And the biggest early semiconductor company was called Fairchild. And much like OpenAI, in the early days of chip manufacturing, it attracted all the best talent.
But one by one, for various reasons, a lot of people leave Fairchild and they go on to start their own companies, companies with names like Intel.
And there wind up being so many of these companies that they start calling them the Fairchildren because they were born out of this initial company that sort of seeded the ecosystem with talent, made some of the key early discoveries, and then lost all of that talent. My guess is you probably didn't know the name Fairchild before I said it just now, but you do know the name Intel. Yeah.
And the question is, do Anthropic and some of these other upstarts become the actual winners of this race? And OpenAI, 50 years from now, is just a footnote in history.
So how much should we be worried about OpenAI? I guess the answer for now seems to be somewhat. If you think AI really could be powerful, and if you think AI safety is then important, it doesn't really seem like the incentives in a race to dominate the AI market are that well aligned. OpenAI might end up leading the field. It might end up being a Fairchild.
But it's hard to imagine how any AI company could succeed while also moving forward with an abundance of caution, at least not without some regulation. After a quick break, we're gonna switch tracks a little bit. We talked a lot about why this technology may be concerning. A lot of people agree, so much so that in some quarters of social media, you can get shamed just for using AI products.
But I am one of the people who both worries about AI and uses AI. And in the last year, as the technology has gotten much more powerful, I find I'm using it in stranger ways. When we come back, I'm gonna talk to Casey a little bit about how he thinks about the ethical concerns here, and also about the very bizarre way he's begun talking intimately with a machine.
This episode of Search Engine is also brought to you by Rosetta Stone. I'll tell you why I want to learn a new language. I'm going to Amsterdam this week, and the only Dutch words I know are really, really horrific curse words that seem funny until you say them in front of Dutch people.
A good way to learn non-offensive Dutch would be to use Rosetta Stone, the most trusted language learning program available on desktop or as an app, one that truly immerses you in the language you want to learn. Rosetta Stone is the trusted expert for 30 years with millions of users and 25 languages offered.
Spanish, French, Italian, German, Korean, Chinese, Japanese, Dutch, not just the curse words, Arabic and Polish. Don't put off learning that language. There's no better time than right now to get started. Search Engine listeners can get Rosetta Stone's lifetime membership for 50% off. Visit rosettastone.com slash search engine.
That's 50% off unlimited access to 25 language courses for the rest of your life. Redeem your 50% off at rosettastone.com slash search engine today. This episode is brought to you in part by Grammarly. Your team spends over half their time writing, and we all know how that happens. One confusing email turns into 12 confused replies and a meeting to get aligned.
Grammarly is a trusted AI writing partner that saves your company from miscommunication and all the wasted time and money that goes with it. So your team's words lead to results, and they can take on the next task, no matter where you're communicating. Grammarly can help you find the right tone at work by personalizing your writing based on audience and context.
You'll get peace of mind from Grammarly's enterprise-grade security and its business model that does not depend on selling your data. Four out of five professionals say Grammarly helps them gain buy-in and action through their communication. It helps improve the substance, not just the style of their writing, by identifying exactly what's missing.
This will help you move faster by reducing unnecessary back and forth. Join 70,000 teams and 30 million people who trust Grammarly to get results on the first try. Go to grammarly.com slash enterprise to learn more. Grammarly. Enterprise-ready AI. This episode of Search Engine is also brought to you by MUBI.
MUBI is a curated streaming service dedicated to elevating great cinema from around the globe. From iconic directors to emerging auteurs, there's always something new to discover. With MUBI, each and every film is hand-selected, so you can explore the best of cinema, streaming anytime, anywhere.
Out now from MUBI is The Substance, a Cannes prize-winning sensation, delirious, shocking, and absolutely unmissable. Demi Moore gives a career-best performance as Elisabeth Sparkle, a past-her-prime Hollywood A-lister who turns to a mysterious experimental drug in an attempt to recapture the glories of her youth.
Sensational supporting turns from Margaret Qualley and Hollywood veteran Dennis Quaid as a repellent studio exec. Critically adored, with reviews hailing the film as "an instant body horror classic" (Rolling Stone), "must be seen to be believed" (Variety), and "the best horror film of 2024" (World of Reel). Visit trythesubstance.com for showtimes and tickets.
And to stream great films at home, you can try MUBI free for 30 days at MUBI.com slash search engine. That's MUBI.com slash search engine for a whole month of great cinema for free. I'm going to try The Substance even though I'm scared of scary movies. If you watch it, shoot me an email. Welcome back to the show.
So I wanted to ask Casey about this AI question I've been personally conflicted on and remain somewhat personally conflicted on. It's the first time in my life I've seen a new digital technology that some people despise so much. They don't want to use it at all. I see people shaming each other online for using AI at all. And that feels like...
a very online response to something, but it doesn't feel like a strategy. But I also, like, understand where the impulse to shame comes from. Like, how do you square it for yourself, where people's jobs are important, people having jobs is important, and all that money just sort of getting swept into a big pile for OpenAI doesn't feel, like, totally socially advantageous? At the same time, like,
I use ChatGPT. It's not replacing anybody's job in my usage of it, but I don't think as it became more useful, there'd be a point where I would say, it's immoral for me to use it, I'm going to stop.
Yeah, I mean, we have always used software tools since their advent to try to automate away drudgery. And that has traditionally been seen as a good thing, right? It's nice that you have a spreadsheet to do your financial planning and aren't trying to do it all on a legal pad. Presumably that brought a benefit to your life, made you better at your job, and also helped you do it faster.
And I view the AI tools I use as doing that. They take something that used to take me a lot of time and effort and now make it simpler. For just one example, I have a human editor who reads my column before I send it out. But I also will, most of the time, just run it through Claude, actually, which is Anthropic's model, and just see if it can find any spelling or grammatical errors.
And every once in a while, it really saves my bacon. And all it cost me is $20 a month. So I don't think there is any shame in using these tools as a kind of backstop to prevent you from making a mistake or from doing some research. Because that's just the way that we've always used software and technology. So I understand the... anxiety about this.
I understand people who, for their own principled reasons, decide, well, I don't want to use this in my work. Maybe I'm a creative person. It's very important to me that all the work that I do is 100% human and has no AI in it. These are very reasonable positions to strike.
But I think that to tell someone, you shouldn't use this particular kind of software because it is evil, I don't understand that argument. Can I tell you about another way I've been using AI this year? Yeah. And I was actually thinking about you.
Because during one of our conversations, we were reflecting on the fact that there were only a couple of things that people could do to improve their mental health. And one was therapy and the other was meditation. And you were saying how frustrating it is to know what the answer is and to not want to do it, right? Yes. It's like...
Yes, if you started a meditation practice, like that would obviously be very helpful, but then you have to like sit quietly with your thoughts for 20 minutes a day. Like, obviously that seems horrible.
Yes.
So recently I've been experiencing these feelings of burnout related to my newsletter, where I love doing it, but it also feels harder than it has. And I've been doing it at least three times a week, sometimes as many as five, for seven years. And so I think this is just sort of a natural thing.
And so I felt like I need to maybe break glass in the case of this emergency and try something that I'd never previously wanted to do, which was meditate. Oh, wow. So I'm only a few days into this. I don't want to tell you that I've solved anything here. I did enjoy my first few experiences.
But one of the things that I did both in the run-up to and the aftermath of these meditation experiences was to just chat with Claude. Because Claude lets you create something called a project where you can upload a few documents and you can chat with those documents.
And then you can just also kind of check in with it from day to day and tell you what you're noticing or observing or if you have questions. And to me, this was a perfect use case for this technology because I truly know nothing about meditation. People have talked to me about it. I've done it a couple of times before, but I've never read a book about it.
I've never talked with any of my friends at length about it. So I'm just as fresh as you can be. And the level of knowledge that is inside Claude, which was, of course, just stolen from the internet without paying anyone for their labor, is actually quite high. Yeah. And it was able to help give me a good start.
And then afterwards, I could come back and say, well, you know, here's what I noticed, and I struggle with this thing. And it'll say, oh, well, you might want to try that. Or, you know, I sort of wish it was a little bit more like this. And it would say, oh, well, then you might want to try this other kind of meditation. Tell me more about that. Okay, yeah, sure. Here's everything. And
I was talking earlier about like, what will it be like when you have an AI coworker? It's like, well, I have a meditation coach that I pay 20 bucks a month for. Some people are laughing. Some people are saying, Casey, you can meditate for free. You don't need a coach. I get that. I am somebody who likes to like pay for access to expertise. And I feel like I have it.
And first of all, I am going to go meditate after this because I want to recenter myself and I didn't get to do it this morning. I don't know if I'm still going to be doing this in like two or three weeks. But if I am, I think the AI is actually going to be part of that story because it's giving me a place where I can go after these experiences to reflect.
Again, I hear people saying, Casey, you realize that journals exist. You could like write this down. But yeah, I get what you're saying. What I'm telling you is this is a journal that talks back to you. This is a journal that is an expert about the thing that I'm journaling about that is holding my hand through a process. None of this existed two years ago, right?
Totally.
The challenge of talking about any of this stuff is when the rate of change in your day-to-day is high, sometimes it feels quite obvious. Other times it becomes this weird blind spot where you don't even realize that the conditions around you have changed, right?
This is what Leopold is getting at in Situational Awareness, is like, you need to stop, collaborate, and listen, as Vanilla Ice once said, right? You need to do what you're doing on this podcast, PJ, which is like, it's been a year, what happened? This is the right question, right?
You know, we were talking so much earlier about these AI critics that are like, it's all hype, it's constantly wrong, screw these Silicon Valley bros, right? And I totally get all of the animus and resentment that powers that. But something that those folks do to their detriment is they tune out everything that is happening in AI because they think, I've already made up my mind about this stuff.
I already know that I hate everyone involved. I hate the output and I hope it chokes and dies, right? Like this is how these people feel. And again, I get it. I understand all of those emotions. What I'm saying to you though, is you actually have to look around. You have to engage.
You have to keep trying out these chatbots every two or three months, if only to get a sense of what they can do now that they couldn't do two to three months ago. Because otherwise you are going to miss what is happening here. And it is wild.
It is wild. To me, it's really interesting that it is, in a strange way, a tool you are using to know yourself. And I don't mean to overstate it. It is also just a journal that is talking to you and giving you pointers. But I find that interesting.
I also feel like, for whatever reason, I think because there's such a culture of we don't want to be enthusiastic about technology anymore, particularly this technology, where you don't want to end up looking like the person who was gleefully celebrating the arrival of our doom.
There's a weird lack of that. Just, like, 10 years ago, I think had this come out, there'd be a tech press that would say, here's 10 new ways you can use this. Here's how I'm using it. Nobody wants to be seen doing that, so no one's doing it. I had a thing happen a couple of days ago.
I think Sam Altman, he was retweeting someone whose suggestion was, ask your agent, from all of our interactions, what is one thing that you can tell me about myself that I may not know about myself? And I asked it this question and I got an answer and it wasn't like a fortune cookie horoscope, like vague enough that it would apply to anybody and maybe be useful anyway.
Like it was a real thing that I hadn't noticed. It was like the preponderance of your questions to me are about trying to put structure and precision around processes in your life that do not have them. You are constantly asking how long things should take and how much time to allocate. It is clearly something you're struggling with.
Wow.
Which is the kind of thing, like, a good friend would tell me. Yeah. And it is not an experience I've had with software. And I don't know, like, I find myself in a moment where I'm trying to hold everything in my head at the same time to say, these are technologies we should be skeptical of and, to your point, keep paying attention to.
And also, in the time before this possibly changes the world in ways I might not enjoy, pretty useful. Yeah.
Absolutely. Absolutely.
I mean, it's interesting because I think you're right. I think we've always used software to automate drudgery. And one way you could think of that is it does eliminate human labor. And the people who have drudgy jobs and have had drudgy jobs aren't like, I'm so glad that I've been freed to produce something else. They're upset that their sort of income is being taken from them.
Why do you think AI is the... place where these anxieties finally come to a head? Because in previous eras of software, whatever skepticism people had about it, this skepticism actually feels new to me.
That's a great question. I think there's a lot that goes into it. I think that we're living at a time where there's kind of a low-water mark in trust in our technology companies. I think the social media era really destroyed most of the goodwill that Silicon Valley had in the world, because people see technologies like Facebook and Instagram and TikTok as mainly just things that
steal our time and reshape the way we relate to each other in ways that are obviously worse. And the whole time, the people building these technologies insist that actually they're saving the world and that there's nothing wrong with them. And so when another generation comes along and says, oh, hi, we are actually here to invent God, there's going to be a lot of...
There's going to be a lot of skepticism about that. And it is the AI companies themselves who told us this thing will create massive job loss. It will create massive social disruption. We may have to... come up with a new way of organizing society when we are done with our work.
That is something that every CEO of every AI company believes, PJ, is that we will have to reorganize society because essentially capitalism won't make sense anymore. So most people will agree that they don't like change. Change is bad. And when they say they don't like change, it usually means, well, I have a new manager at work.
The change that these people are talking about is that capitalism won't exist anymore. And it's unclear.
It's so funny because everybody, I mean, speaking a little bit broadly, many people in our generation are like, I would love for capitalism to not exist anymore, by which they don't mean robots do the work now and robots are your boss and robots take all the money and you're hoping for maybe universal basic income. No one meant for capitalism to go away like this.
Yeah, yeah, exactly. And nobody wanted capitalism to go away and be replaced with something where Silicon Valley seemed to be in control of everyone's future.
Right. And so we continue to pay attention to this, because who knows whether these promises will come true, but the idea that this is socially disruptive seems like a safe bet.
Maybe something else to say that's important is that the way all of this is unfolding is anti-democratic. No one really asked for this, and the average person does not get a vote. If you're just an average person, you don't want AI to replace your job. There's really nothing you can do about it. And so I think that actually breeds a ton of resentment against these companies.
And while the government is starting to pay attention, at least here in the United States, they're being very, very gentle about everything. And so if you wanted to change the course of AI, it's not actually clear how you would go about that. And so I think that's another really big reason why people often resent it.
And it's funny, there's always a part in my mind when you see these stories of all these departures to say, okay, that's like the internal drama of a company that I do not have an internal view on. And it might matter, it might not. I would have to know more than I know to know.
But to your point, if part of the problem is that these technologies can restructure society, and we have a democratic society but the way they're restructuring it is not democratic, then the fact that even within these companies they're becoming more like monarchies does seem like something that's worth paying attention to.
Yeah, yeah, absolutely.
Casey Newton. He writes the newsletter Platformer. Go check it out. You can also listen to him every week on the podcast Hard Fork. We're going to keep using you to monitor this.
Yeah, let me just say I'm going to keep paying attention to it.
Casey, thank you.
You're welcome.
Search Engine is a presentation of Odyssey and Jigsaw Productions. It was created by me, PJ Vogt, and Sruthi Pinnamaneni, and is produced by Garrett Graham and Noah Johns. Fact-checking this week by Mary Mathis. Theme, original composition, and mixing by Armin Bazarian. Our executive producers are Jenna Weiss-Berman and Leah Reis-Dennis.
Thanks to the team at Jigsaw, Alex Gibney, Rich Perrello, and John Schmidt. And to the team at Odyssey, J.D. Crowley, Rob Morandi, Craig Cox, Eric Donnelly, Kate Rose, Matt Casey, Maura Curran, Josefina Francis, Kurt Courtney, and Hilary Shove. Thanks for listening. We'll see you next week.