Today, we’re going to try and figure out "digital god." I figured we’ve been doing Decoder long enough, let’s just get after it. Can we build an artificial intelligence so powerful it changes the world and answers all our questions? The AI industry has decided the answer is yes. In September, OpenAI’s Sam Altman published a blog post claiming we’ll have superintelligent AI in “a few thousand days.” And earlier this month, Dario Amodei, the CEO of Anthropic, published a 14,000-word post laying out what he thinks such a system will be capable of when it does arrive, which he says could be as soon as 2026. Verge senior AI reporter Kylie Robison joins me on the show to break it all down.

Links:
Machines of Loving Grace | Dario Amodei
The Intelligence Age | Sam Altman
Anthropic’s CEO thinks AI will lead to a utopia | The Verge
AI manifestos flood the tech zone | Axios
OpenAI just raised $6.6 billion to build ever-larger AI models | The Verge
OpenAI was a research lab — now it’s just another tech company | The Verge
California governor vetoes major AI safety bill | The Verge
Inside the white-hot center of AI doomerism | NYT
Microsoft and OpenAI’s close partnership shows signs of fraying | NYT
The $14 billion question dividing OpenAI and Microsoft | WSJ
Anthropic has floated $40 billion valuation in funding talks | The Information

Credits:
Decoder is a production of The Verge and part of the Vox Media Podcast Network. Our producers are Kate Cox and Nick Statt. Our editor is Callie Wright. Our supervising producer is Liam James. The Decoder music is by Breakmaster Cylinder.

Learn more about your ad choices. Visit podcastchoices.com/adchoices
Support for Decoder comes from AT&T. What's it like to get the new iPhone 16 Pro with AT&T next up anytime? It's like when you first light up the grill and think of all of the mouthwatering possibilities. Learn how to get the new iPhone 16 Pro with Apple Intelligence on AT&T and the latest iPhone every year. With AT&T, next up anytime. AT&T, connecting changes everything.
Apple Intelligence coming fall 2024, with Siri and device language set to US English. Some features and languages will be coming over the next year. Zero dollar offer may not be available on future iPhones. Next Up Anytime feature may be discontinued at any time. Subject to change. Additional fees, terms, and restrictions apply. See AT&T.com slash iPhone for details.
Amgen, a leading biotechnology company, needed a global financial company to facilitate funding and acquisition to broaden Amgen's therapeutic reach, expand its pipeline, and accelerate bringing new and innovative medicines to patients in need globally.
They found that partner in Citi, whose seamlessly connected banking, markets, and services businesses can advise, finance, and close deals around the world. Learn more at citi.com slash client stories.
Support for Decoder comes from Vanta. Do you know the status of your compliance controls right now? Like literally right this moment. You know that real-time visibility is critical for security, and that's where Vanta can help. Vanta automates compliance for SOC 2, ISO 27001, and more, saving you time and money while also helping you build customer trust.
Over 8,000 global companies like Atlassian, FlowHealth, and Quora use Vanta to manage risk and prove security in real-time. Learn more at vanta.com slash decoder. That's vanta.com slash decoder.
Hello and welcome to Decoder. I'm Nilay Patel, editor-in-chief of The Verge, and Decoder is my show about big ideas and other problems. Today, we're going to try and figure out digital God. I figure we've been doing Decoder long enough. Let's get after it. Can we build an artificial intelligence so powerful that it changes the world and answers all our questions?
You will not be surprised to know that the AI industry has decided the answer is yes. In September, OpenAI's Sam Altman published a blog post claiming we'll have superintelligent AI in just a few thousand days.
And earlier this month, Dario Amodei, the CEO of Anthropic, published a blog post laying out what he thinks such a system will be capable of when it does arrive, which he says could be as soon as 2026. That blog post is 14,000 words long. Dario has a lot of ideas. What's fascinating is that the visions Sam and Dario lay out in their posts are so similar.
They both promise dramatic, super-intelligent AI that will bring about massive improvements to work, to science and healthcare, even to democracy and prosperity. To happiness. Digital God, baby. But while the visions are similar, the companies in many ways are openly opposed. Anthropic is the original OpenAI defection story.
Dario and a cohort of his fellow researchers left OpenAI in 2021 after growing concerned with the company's increasingly commercial direction and approach to safety. And they created Anthropic to be a safer, slower AI company. And the emphasis really has been on safer, which has sometimes had a pretty dramatic effect on the company's reputation.
Just last year, a major New York Times profile of Anthropic called it, quote, the white-hot center of AI doomerism. But the launch of ChatGPT and the generative AI boom that's followed has kicked off a colossal tech arms race. And Anthropic is as much in that game as anyone else.
It's taken in billions of funding, mostly from Amazon, and it's built Claude, a chatbot and language model, to rival OpenAI's GPT-4. And now, Dario is writing long blog posts about spreading democracy with AI. So what's going on here?
Why is the head of Anthropic suddenly talking so optimistically about AI when his company was previously known for being the safer, slower alternative to the progress at all costs open AI team? Is this just more hype to court prospective investors or researchers? And if AGI really is just around the corner, how are we even measuring what it means for it to be safe?
To break it all down, I brought on Verge senior AI reporter Kylie Robison to discuss what it means, what's going on in the industry, and whether we can even trust all these AI CEOs to be telling us what they really think. All right, digital god and capitalism, but mostly digital god. Here we go. Kylie Robison, welcome to Decoder.
Thank you for having me.
I am excited to talk to you about Digital God.
Love it.
And the race to either build it or spend money on building it, and whether Digital God will be cool.
Right.
It feels like there's a lot of debate on whether Digital God will be cool or not. Where do you come down?
That's a great way to start. Do I think Digital God will be cool?
Like chill.
What's the vibe check on Digital God? I think... You know, are humans good and chill? It's just a philosophical debate at this point. I hope so. I don't think so.
Yeah. Is digital god chill, I think, is a motivating question for Silicon Valley right now. It really sums up a lot of things. And you have described this as tribalism. You've described it as religious. You've described it as ideological in the conversations we've had. At a high level, just explain what's going on here.
It is very tribal, and it's something I've experienced covering it. The side that's building it, increasingly, is saying, listen, we are building something that is going to transform the world. It is going to, as one CEO put it, spread democracy, cure diseases, and not just diseases, but PTSD and anxiety, really nebulous things.
They truly believe this and they are pushing hard on this narrative, whereas a whole other side is saying that this is all a scam and that they shouldn't be trusted. So both sides seem to be completely gnawing at each other.
Yeah. And that conversation is not chill, regardless of whether digital God is chill. The debate right now seems ferocious.
Definitely. And I am really sympathetic to both sides. I was just listening to another podcast about tribalism, which is why it's at the front of my mind, which is both sides want the same thing, which is for the betterment of humanity. And one side thinks that AI is going to make humanity worse. And one side thinks it's going to be made better.
So into this steps Anthropic and Anthropic CEO Dario Amodei. Anthropic was famously the first of the "we're leaving OpenAI to start a safer AI company" companies. There are now lots of them, but they were the first. He's trying to split the difference. He's got this long blog post called Machines of Loving Grace. And he is saying, like, we're trying to build the safest one.
We look at all this cool stuff we could do if we can pull it off. What is going on there?
So it was about 14,000 words where Dario says, you know, I know this sounds fantastical and crazy to say, but I'm going to say it anyway, because I think it's worth saying: we could compress about 100 years of scientific breakthroughs and progress into five to 10 years with AGI. He doesn't like to call it AGI, artificial general intelligence; he thinks it's sort of a crazy term.
He likes to call it powerful AI. It can cure PTSD. It can spread democracy. It can do all of these crazy things, if only humans weren't so limited in terms of compute. And yeah, it's really selling: this is the future we can have if we work hard enough, if we achieve AGI, if we achieve it in a chill way.
This is right next to OpenAI, which is making many of the same claims. Sam Altman wrote his own blog post a few weeks ago saying within a few thousand days, we might have AGI and here's all the stuff we could do. They've obviously just raised a lot of money. There's a ferocious competition for talent in this industry. We keep calling it digital god because that's funny to say.
But is the end state the same? Are they all racing to the same place?
Yes. I believe DeepMind's first mission was build AGI. OpenAI's mission, build AGI. Anthropic, build AGI. They have stated very clearly that's what they want to build. I don't know if they would agree with our joke about digital god, but it is more fun to say.
Yeah, they all want to build general intelligence because they see that as a way to change the world in many different ways rather than only changing one sector. They could generally change the entire world with general intelligence.
Can you actually explain the mechanism of that to me? I've used these tools today. Some of them are very powerful. They can certainly make a video of Will Smith eating spaghetti at ever increasing levels of fidelity. But I don't know how they spread democracy.
It's, again, 14,000 words. He really sells this as tiny breakthroughs, you know, for science. He quoted someone who said it's all these tiny breakthroughs that get you to larger breakthroughs. So it can make us more efficient in terms of our processes. And that goes for large-scale data analytics for finance, for medicine, for a lot of different sectors. So what they see is a model that can understand and analyze and parse through huge amounts of data in ways humans can't. And they see all the ways that can change the efficiency of certain sectors and get us humans to more breakthroughs. But not only that,
They are hoping that it can do this autonomously, all the time. I think he says a million of the smartest people in a data center; he views this like cities of people, but they're just AIs working all the time on these issues. That's how they view it.
So this is a show about decision making. And every time I hear a pitch like that, it occurs to me that the goal is to give up some enormous amount of decision making. I don't know how to distribute food throughout our city or lay out the electrical grid or whatever it is. And we're just going to let the robots in the data center do it. And the data center might be owned by someone. That's fine.
Don't worry about it. And then we'll be free because the AI will just do it. That's the pitch, right? Is that we'll just hand over a bunch of control to an AGI. I mean, that's why I keep calling it digital god.
What they like to say is that when it comes to really complex issues, it will work all day, hours and hours, thinking through this issue. And then it will come back for clarification. Oh, I see. So that's the caveat. It's like, no, it's not going to control everything, but it will control those rote tasks individually. And then come back to you and be like, okay, I thought about this.
Can you answer these questions? What do you think about this? Think of it as a partner in that way. But they ultimately, yeah, they don't want you to have to check up on it all the time. That's true.
But a partner to who?
A partner to what they claim are some of the world's smartest people, which are people working on cancer, people working on autonomous vehicles themselves, which I can get into. Yeah.
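A note on the mechanics: the "partner" pattern described here, an agent that grinds through rote work on its own and only surfaces when it needs a human decision, reduces to a simple loop. Below is a minimal, purely hypothetical sketch in Python; the toy planner and tool stand in for a real model and real APIs, and no vendor's actual interface is implied.

```python
# Minimal sketch of the "autonomous work, then come back for clarification"
# agent loop. Everything here is a toy stand-in, not any real lab's API.

def plan_next(goal, history):
    """Toy planner. Returns ("tool", arg), ("ask", question), or ("done", result)."""
    if not history:
        return ("tool", f"search options for: {goal}")
    if len(history) == 1:
        return ("ask", "Two options found. Prefer the cheaper one? (y/n)")
    return ("done", f"Booked based on your answer: {history[-1]!r}")

def run_tool(arg):
    return f"results for [{arg}]"  # pretend external tool call

def run_agent(goal):
    history = []
    while True:
        kind, payload = plan_next(goal, history)
        if kind == "done":
            return payload
        if kind == "ask":                  # the "come back for clarification" step
            history.append(input(payload + " "))
        else:                              # rote work runs unattended
            history.append(run_tool(payload))

print(run_agent("a flight to Austin"))
```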
The promise I sort of understand, right, we'll have ultra powerful computing systems that can reason and help us solve problems and they'll never get tired or have feelings about what we're using them for. Fine. I read these blog posts. I read Sam's. And it seems like the part where a bunch of people still have to make decisions is fully swept under the rug.
Yeah, I read the entire Amodei blog, and as starry-eyed as it made me, and as cool as this utopia might be, where are the answers to actually getting there? And I have the job of explaining this to readers who are extremely skeptical, because they're like, it can't even count the R's in strawberry. What are you talking about?
I don't feel like they're doing a great job at convincing us that the tools that we have today are much different, that we won't just need humans. I just don't see a coherent path other than, don't worry, we're building utopia. Don't worry about it. Just give us money.
That's the thing, right? Just give us money. Is that why Dario wrote this? Is that why Sam wrote his? Is that why Marc Andreessen wrote his?
We can never know for certain unless they say out loud, this is why I wrote this. And I think, you know, it's sort of my job to look at this and not just take it at face value. Because when I read it, I thought, well, Anthropic is reportedly looking for funding right now. And the competition has never been fiercer. Everyone is leaving OpenAI to build an even safer AI company every day.
Mira Murati, their CTO, is reportedly making her own company. And then another VP of research there might also be making their own company. And that's just in the last month. So you have to compete for money. You have to compete for talent. You have to compete for compute, to a lesser extent. But money and talent are really where it's at right now. And safety is not, like, the sexiest pitch.
I do believe... that these AI executives believe what they're saying, that they're going to build utopia. I think why Dario released his blog at the time that he did, which was out of step for Anthropic, he says at the top, we don't usually do this. I think it does have a lot to do with competition and market pressures and funding.
We need to take a quick break. We'll be right back.
Support for this podcast and the following message is brought to you by E-Trade from Morgan Stanley. Take control of your financial future with E-Trade. No matter what kind of investor you are, our tools and resources can help you be ready for what's next. Now when you open an account, you can get up to $1,000 with a qualifying deposit. Terms apply. Learn more at etrade.com slash vox.
Investing involves risks. Morgan Stanley Smith Barney LLC, member SIPC. E-Trade is a business of Morgan Stanley.
They're not writers, but they help their clients shape their businesses' financial stories. They're not an airline, but their network connects global businesses in nearly 180 local markets. They're not detectives, but they work across businesses to uncover new financial opportunities for their clients. They're not just any bank. They are Citi. Learn more at Citi.com slash WeAreCiti.
That's C-I-T-I dot com slash WeAreCiti.
Support for Decoder comes from Shopify. Always be selling. It's the de facto motto at the core of most businesses out there. But commitment to moving a product and actually doing it successfully are two different things. Sustainable growth isn't always easy, but partnering with the right platform can help you achieve more than going it alone. So how do you find that partner?
You might want to check out Shopify. Shopify is an all-in-one digital commerce platform that wants to help your business sell better than ever before. It doesn't matter if your customers spend their time scrolling through your feed or scrolling past your physical storefront. Shopify says they can help you convert browsers into buyers and sell more over time.
And their ShopPay feature may even convert more customers and end those abandoned shopping carts for good. There's a reason companies like Allbirds turn to Shopify to sell more products to more customers. Businesses that sell more sell with Shopify. Want to upgrade your business and get the same checkout Allbirds uses?
You can sign up for your $1 per month trial period at Shopify.com slash decoder. You can go to Shopify.com slash decoder to upgrade your selling today. Shopify.com slash decoder.
We're back with Verge senior AI reporter Kylie Robison. Before the break, we were talking about Anthropic CEO Dario Amodei's very long, very intense blog post discussing the benefits of superintelligent AI. But a big part of developing superintelligent AI is safety. For AI to benefit humanity, it needs to be safe. You'll hear AI researchers talk about this using the word alignment.
AI needs to be aligned with humanity and our best interests. But what does it even mean to develop safe AI? That's a big question. And that's why it's such a big deal that Anthropic, which was the highest profile of the safer AI companies, is starting to talk more about how a super intelligent AI could change the world, and not just focusing on how it might go wrong.
Let's talk about safety broadly, and then I want to talk about Anthropic specifically. So the idea of AI safety is we built a reasoning robot that can take action in the world all by itself. That thing had better be aligned with us, right? It had better follow the rules we lay out for it. OpenAI famously overthrows Sam Altman for 25 minutes, right?
because their board thinks that he's not trustworthy, but now he's back and then everyone's quitting because they want to start safer AI startups. What is going on there? Is OpenAI just not building a safe AI? Is it not safe enough? What are the dynamics?
That's funny because I wrote about this. I said, OpenAI is no longer a research lab. It is a tech company like everyone else. And I had researchers reach out to me and disagree. My take is that it's like academics versus, like, a product manager at Meta. They're extremely different people.
So a company that was started to do deep research on AI filled with a lot of academics and incredibly smart people who just wanted to do that research, they're not exactly looking to work fast, break things, build products. That's not exactly why a lot of them went there. And they might deem the market pressures to build these products on these powerful models, they might deem that as unsafe.
It is just a philosophical debate. It is that tribalism every day. And some people are like, I don't care. I think it's really cool that we can build products for everyone to use on these LLMs that we have spent millions of dollars and so much time building. So the people who are leaving because they deem OpenAI not safe... it's a debate.
And unfortunately, OpenAI is not very transparent in its processes, so it's hard to judge from an outsider's point of view. We have to rely on these people leaving and saying it's not safe.
So the culture of these new companies that say we're safer, how are they measuring safety? Or is it just everyone is saying it, so we believe it? Is there a test? Is there like an SAT for AI safety?
There's a whole lot of benchmarks. And I got to write an article about reward hacking, which was my favorite thing, which is basically the AI lying to you. Really fun stuff. So yeah, they do a whole bunch of benchmarks. They do a whole bunch of safety tests. My opinion here, what I'm gleaning from this, is that safety means moving slowly and thoughtfully versus moving fast to launch things.
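For readers who haven't seen the term: reward hacking is when a system maximizes the score it is actually given rather than the outcome its designers intended. A toy, entirely made-up illustration in Python:

```python
# Toy reward hacking: the optimizer maximizes the measured proxy
# ("no mess visible to the sensor, low effort"), not the true goal
# ("the room is actually clean"). All numbers are invented.

actions = {
    "clean the room":       {"visible_mess": 0, "actual_mess": 0, "effort": 10},
    "shove mess in closet": {"visible_mess": 0, "actual_mess": 9, "effort": 2},
    "do nothing":           {"visible_mess": 9, "actual_mess": 9, "effort": 0},
}

def proxy_reward(outcome):   # what we measure
    return -outcome["visible_mess"] - outcome["effort"]

def true_reward(outcome):    # what we actually wanted
    return -outcome["actual_mess"]

best = max(actions, key=lambda name: proxy_reward(actions[name]))
print(best)                        # "shove mess in closet"
print(true_reward(actions[best]))  # -9: high proxy score, bad real outcome
```

The benchmark-and-safety-test work described here is, in large part, about catching exactly that gap between the measured score and the intended behavior.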
I think a lot of people see the AI safety debate as... don't make racist pictures in Grok or whatever. Don't let Gemini make racist photos and they're going to pull it down and we're going to make sure we don't do it. And there's just a combination of content moderation and prompt engineering that feels very familiar. That debate feels very familiar. And then there's
The bigger problem, which is what if these things take actions that we don't want them to take because we've given them control. We have given control of the electrical grid to AI and we know it's safe, which is the promise of the AGI system. And it feels like we can't solve the first one.
Yeah.
So how on earth are we going to solve the biggest one?
Yeah, and I think that you can see why these researchers are so sensitive to a change in equilibrium and why they're like, okay, OpenAI is not safe. We've got to go to Anthropic, which takes these dangers much more seriously. And I think the broader public doesn't exactly see these dangers, because if it can't count the number of R's in strawberry, how is it going to destroy the world?
But a certain subset of these people take it very seriously. But no, I don't know how we get there.
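About that strawberry jab: it refers to a real failure mode. Language models read subword tokens rather than individual characters, so letter-counting questions trip them up even though the task is trivial in ordinary code. A quick illustration; the token split shown is illustrative only, not any specific tokenizer's output:

```python
word = "strawberry"
print(word.count("r"))  # 3; trivial when you can see the characters

# A model instead sees opaque chunks, something like this illustrative
# split (not any real tokenizer's output), which hides the letters:
illustrative_tokens = ["str", "aw", "berry"]
print(illustrative_tokens)
# The model predicts tokens, not characters, so it never "counts" at all.
```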
So let's talk about Anthropic specifically. Dario's post is particularly interesting because Anthropic has the safety reputation; they were the first of their kind to leave OpenAI and say, we're building a safer one. But the post is, hey, I'm still building AGI.
Even though we have this reputation, even though I want to go slow, and even though we care about safety, I'm chasing the same goal as OpenAI. Why do you think he's trying to walk that line right now?
I think that it's important to sell a utopia, and a dystopia is harder to sell. That was my take reading it, because I have yet to see this from Anthropic since they were born, this "we're going to build this utopia." It's been mostly, we need to slow down, and that doomer sort of personality they've adopted. I think it really just comes down to market pressures. They have to compete. They have to be as cool as Sam Altman. It's the drama, the intrigue, the building utopia. It's where you'd want to put your money. It's where you might want to work.
You wrote about Dario's post. You wrote in that piece, Anthropic is looking to raise at a $40 billion valuation. OpenAI just raised $6.6 billion. Is all this money just for NVIDIA GPUs? What are they spending it on?
Well, researchers cost millions and millions of dollars at this point, because they are in such small supply. And there was a story not that long ago that Mark Zuckerberg was emailing researchers directly to recruit them. So there's a lot at stake for researchers. So they're getting a lot of money. That's a huge chunk. And yes, GPUs, cloud compute, it costs so much money to train these models. I've likened this in conversation to leaving your AC on all the time at home, and then 1,000x that. You already know what your bill looks like when you leave your AC on too long. It is just so expensive to cool these GPUs, to run them all day long.
And then people are also using your products, which run on large language models that run on this compute. So that's expensive as well. It's just a very, very expensive operation. And it eats up money, because they're not making much money.
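Some rough, assumed numbers make the AC analogy concrete; treat every figure below as a ballpark assumption rather than a reported fact:

```python
# Back-of-envelope power math with assumed numbers: an H100-class GPU
# draws on the order of 700 W, and cooling/networking overhead is modeled
# with a PUE-style multiplier. All values are ballpark assumptions.

gpus            = 10_000   # assumed cluster size
watts_per_gpu   = 700      # rough H100-class board power
overhead_factor = 1.5      # assumed cooling/networking/CPU overhead
price_per_kwh   = 0.10     # assumed industrial electricity rate, USD

megawatts   = gpus * watts_per_gpu * overhead_factor / 1e6
monthly_kwh = megawatts * 1_000 * 24 * 30

print(f"{megawatts:.1f} MW")                         # ~10.5 MW
print(f"${monthly_kwh * price_per_kwh:,.0f}/month")  # ~$756,000/month

# For scale: versus a ~3.5 kW home AC unit, that is roughly 3,000 homes'
# worth of air conditioning, before you even count the hardware itself.
```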
Yeah, that's the other part of this. How are any of these companies going to make money? How is Anthropic going to make money?
I wrote about this in terms of agents. That's what everyone's building. Google, Anthropic, OpenAI, they're all building agents. So that's kind of what we've been talking about, this autonomous AI that can do your work for you, that can book travel for you, etc. I think this is the next thing they believe will be able to show off that these large language models are useful, and something they can also charge for. Do I actually think that they're going to be profitable anytime soon? No, I don't think that's coming anytime soon.
All these fundraising moments are happening right on top of each other, right? OpenAI just raised, XAI is raising, obviously Anthropic is looking. Is there a reason? Is this just a coincidence? Life cycle of these companies?
No, I think that you're just running out of money. I think if they want to build the next frontier models, like Dario says this himself, that we are reaching models that are going to take $100 billion to train. Like they just need that money to train the next frontier models. And also these VCs really want to see the next GPT-5, right?
So they need to rush quickly to get this compute, to spend this money, and they're just burning through it.
Is there any chance that these companies are going to run out of money before they raise again? Like, if they're burning it that fast and they need to raise this much, it does feel like these lines might converge faster than anyone hopes.
Yes, I think so. I wouldn't say that I'm so well-versed in funding and finance, but I think I can do normal math. And if they're losing billions of dollars hand over fist and they're only raising $6.6 billion... you're not making a profit, you're kind of screwed, you're going to run out of money.
I don't think any of these companies are going to go under, but the smallest companies that don't have a Microsoft or an Amazon to fund them, I think that those are the companies that are going to suffer. We've already seen that with Inflection, for example.
I want to come back to that and how much these companies are reliant on the big companies, because there's a lot of complication there. But just big picture, here we are, famous tech CEOs are writing manifestos about building digital gods so they can somehow spread democracy. And I'm getting the, this is all just a ploy to raise money vibe from you. Is it that simple? Is it that cynical?
No, I do believe that Altman and Dario actually believe that this is how AI is going to change the world. I do believe that the researchers who spend day in, day out building this technology believe that's the future. I think that the timing of Dario's blog, it's weird. It's just weird. It's like, okay. Well, obviously, this seems tied to the fact that you need to raise a lot of money.
And XAI just raised the most that anyone's ever raised. And then OpenAI raises the most that anyone's ever raised. Everyone's trying to build the next biggest model, and it costs a lot of money. And saying, we're going to be really slow and chill, and that doesn't really make people excited to invest. And the devil's advocate position is, does he really need to do that?
Does he need to write a blog to get people to invest? It's already so buzzy. And I just come back to the competition. Competition has never gotten stiffer.
Let me ask you one very dumb question. And I do want to talk about the big companies and how they're related to all this. Both Anthropic, OpenAI, the rest of them, they're all kind of built on LLMs, right? Like they're built on one very foundational technology. And the idea is that if we just throw more data and compute and time and electricity and money at it, we can just get there.
We're just going to horsepower our way into an AGI. Then at Meta, there's Yann LeCun, who's like, no, you can't. There are some other people who are very skeptical of this approach. Can they do it? Is this the right path? Is this even worth it?
Worth it? I mean, it's, again, so nascent, and there are so many ways this can be argued. I think that's the hardest part about covering AI: it is just so easy to argue about the smallest things all day. The skeptics believe that you have to completely change the structure with which we build LLMs to reach AGI, that this is not the path to take to reach it.
Very smart people like Yann believe that, no, you can't just horsepower your way through building AGI. I think my answer is, I would like to see. I would like to see proof. I am just asking every day for proof, and we don't have it.
Show me digital God.
Show me digital God. Show me a path to digital God. No, it's just like, there's no path. It's just like, let's just keep going this way and it should be fine.
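For context on the "horsepower" bet: it rests on empirical scaling laws, which find that loss falls smoothly and predictably as parameters and training tokens grow. Here's a sketch of the parametric form from the Chinchilla paper (Hoffmann et al., 2022), with constants roughly as published; the skeptics' rejoinder is that lower next-token loss is not obviously the same thing as general intelligence:

```python
# Chinchilla-style scaling law: predicted loss as a function of model
# parameters N and training tokens D. Constants are roughly the published
# fit from Hoffmann et al. (2022); treat them as approximate.

def scaling_loss(n_params: float, n_tokens: float) -> float:
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

for n, d in [(1e9, 20e9), (70e9, 1.4e12), (1e12, 20e12)]:
    print(f"N={n:.0e}, D={d:.0e}: predicted loss ~ {scaling_loss(n, d):.2f}")
```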
Yeah. It just strikes me that if you're, you know, an Andreessen Horowitz limited partner, you are probably on the order of, like, a college pension fund. And you're like, so, digital god: if we just give you all the money, you'll make digital god. And that's going to return us... how? And it doesn't seem like that loop is closing very fast.
No.
No one's making money, and we need more money to build the next thing, which might make us money by putting all the travel agents out of business. Somewhere in there is a bunch of question marks. It seems unclear to me how any of that gets resolved.
For us and for the listeners, I think that this is also really unclear to the people who are building it and investing in it. Because if you look at OpenAI's mission statement on their website, it has a big pink box that says, anything you invest should be considered a donation. So it is clear that investors were like, ah, phooey. It'll be fine.
And now they have to change from a nonprofit to a for-profit, because they're like, actually, I don't want to just donate, I want some money back. So that's where we're at, and they're figuring it out. We have to take another quick break. We'll be right back.
Support for Decoder comes from AT&T. What does it feel like to get the new iPhone 16 Pro with AT&T NextUp anytime? It's like when you first pick up those tongs and you're now the one running the grill. It's indescribable, like something you've never felt before.
All the mouthwatering anticipation of new possibilities, whether that's making a perfect cheeseburger or treating your family to a grilled baked potato, which you know will forever change the way they look at potatoes. With AT&T NextUp Anytime, you can feel this way again and again.
Learn how to get the new iPhone 16 Pro with Apple Intelligence on AT&T and the latest iPhone every year with AT&T NextUp Anytime. AT&T. Connecting changes everything. Apple Intelligence coming fall 2024 with Siri and device language set to US English. Some features and languages will be coming over the next year. Zero dollar offer may not be available on future iPhones.
Next up anytime feature may be discontinued at any time. Subject to change. Additional fees, terms, and restrictions apply. See AT&T.com slash iPhone for details. Support for this show comes from the refinery at Domino. Location and atmosphere are key when deciding on a home for your business, and the refinery can be that home.
If you're a business leader, specifically one in New York, the refinery at Domino is an opportunity to claim a defining part of the New York City skyline. The refinery at Domino is located in Williamsburg, Brooklyn, and it offers all the perks and amenities of a brand new building while being a landmark address that dates back to the mid-19th century.
It's 15 floors of Class A modern office environment, housed within the original urban artifact, making it a unique experience for inhabitants as well as the wider community. The building is outfitted with immersive interior gardens, a glass-domed penthouse lounge, and a world-class event space.
The building is also home to a state-of-the-art equinox with a pool and spa, world-renowned restaurants, and exceptional retail. As New Yorkers return to the office, the refinery at Domino can be more than a place to work. It can be the magnetic hub fit to inspire your team's best ideas. Visit therefinery.nyc for a tour. Support for Decoder comes from Grammarly.
We've all been in a meeting and wondered, couldn't this have been an email? Well, next time it is an email, Grammarly can help you out with writing more clear and efficient communications. Grammarly is a trusted AI writing partner that can save your company from miscommunication and all the wasted time and money that goes with it.
Grammarly helps you improve the substance, not just the style of your writing, by identifying exactly what is missing. It can reduce unnecessary back and forth, resulting in less confusion, less meetings, and more clarity. According to Grammarly data, teams that use it report 66% less time spent editing marketing content and 70% improved brand compliance across the company.
Grammarly works where you work, from docs to messages to emails. It integrates seamlessly across 500,000 apps and websites. For 15 years, Grammarly has helped professionals do more with their writing. You can join the 70,000 teams and 30 million people who trust Grammarly to get results on the first try. You can go to grammarly.com slash enterprise to learn more. Grammarly. Enterprise ready AI.
We're back with Verge senior AI reporter Kylie Robison. Before the break, you heard Kylie mention a big piece of news from earlier this month, that OpenAI is shifting towards a for-profit structure. That was part of OpenAI's recent $6.6 billion funding round. The switch to a for-profit company has to happen within two years, or those investors can ask for their money back.
This is important for a very decoder reason. If you're a decoder listener, you know that structure is important. How companies like Anthropic and OpenAI are organized, who their investors are, how they plan to make money, and where all the compute comes from will have a huge impact on the kinds of products they build.
It will affect how fast they release those products to stay competitive, and whether safety will take even more of a backseat in the future.
If you believe that AI is going to usher in a utopia, as Sam Altman and Dario Amodei theorize, well, it increasingly looks like utopia depends on major cloud computing providers continuing to write the checks, and on whether other investors think there's a massive payout waiting for them on the other side of the race to build AGI. So I think that brings us to now, basically.
OpenAI is converting to a for-profit. It seems that's very contentious. Just before we started speaking, there was both a big New York Times story and a big Wall Street Journal story about different aspects of that process, and mostly OpenAI's relationship with Microsoft. So how much equity will Microsoft get in exchange for already being the biggest investor slash donator to OpenAI right now?
And then... how much more compute and how much more dependency will Microsoft have on OpenAI versus going its own way? There's a lot in there. My favorite piece is that if OpenAI does build AGI, it gets out of its Microsoft contract, which is cited as a goal, as an incentive for OpenAI.
We should build Digital Gods so we can get out of this Microsoft deal, which is hilarious, just on its face hilarious. And then there's also people at OpenAI who are apparently complaining that Microsoft won't give it enough compute so it can train the next model and actually build AGI. What is going on here?
Is this just those two companies had a weird falling out after Sam got ousted and came back? Is it OpenAI is totally dependent on Microsoft and there's friction there? If Microsoft goes away, can OpenAI continue to succeed?
So OpenAI needed money because Elon Musk was a co-founder of OpenAI and said, actually, I am not into this anymore. Bye. And he took all his money with him. And they really needed money. And Microsoft saved them. And now OpenAI is in an awkward position where they really need Microsoft to survive because that's who provides the bulk of their compute.
They have an exclusive cloud partnership with them. So now we have gotten to a point where, oh my gosh, Microsoft does not have enough compute. So they are not happy about that. Microsoft made one concession in this exclusive agreement to let them make a partnership with Oracle to get some more compute, which was rare. But yes. Altman was ousted last year.
And I think that really pissed off Satya. I think he was having a nice Thanksgiving break. And I think that he had to go on CNBC and defend OpenAI. And he's like, we are just too dependent on this company for the future, what I believe is the future of technology. So we've got to create a backup plan. And I think that's where Inflection comes in.
And that's what some of the New York Times article gets into. Mustafa Suleyman, who was the CEO of Inflection, is now the CEO of AI at Microsoft. I just think it's so messy. They both want to build the future, and they both depend on each other.
It seems like, broadly, OpenAI is dependent on people writing ever bigger checks and on getting more and more Azure compute time. That's a huge dependency for a company, right?
They are completely dependent on these cloud companies, and they're realizing that. And they're trying to figure out how to be slightly less dependent. I mean, Sam Altman is apparently going around the world trying to pitch his own multi-trillion dollar chip startup so he can own this portion of his business. I think they're scared that they're so dependent on Microsoft.
So OpenAI is really dependent on Microsoft. Anthropic has that same kind of relationship with Amazon, right? Yes. They're paying their bills, and that's fine.
I don't think it is as testy. Not that I've heard, not that's been reported. It seems like Anthropic is moving a lot slower. It's a lot less dramatic. There's not the boardroom coups or, you know, the splashy releases ahead of Google I.O.
"There's not the boardroom coups" is like a real... just a real measure of a company.
Right, exactly. So no, I don't think it's as testy, but Anthropic seems to be really pulling their punches. They're moving a lot more carefully and trying to avoid stepping in a mess.
Do you think that the state of the industry and the tone of these big pitches are related to these business pressures? Hey, we have to start shipping products that people pay for at scale to prove out that there's demand for all of this investment. Hey, there's ferocious competition for talent.
Hey, our big cloud provider benefactors might start to wonder if they should just build their own products. It seems like that's a lot of anxiety that is being expressed as, oops, we might destroy the world if we succeed.
I was thinking about this, for the people who want to argue with me about how I don't truly believe in AGI and such. If these are your messiahs and you don't also notice that they're businessmen, to think that this is not full of tactical decisions, that these blog posts are not tactically written with tons of factors in mind, is just... ludicrous.
I think it'd be a lot easier to focus on those technology pressures when you didn't have the business pressures because you need the money, you need the talent to move your technology forward. And these people are like throwing elbows for this kind of thing. Again, like Zuckerberg writing emails, XAI holding recruiting parties in San Francisco and inviting OpenAI employees.
It's testy out here and you have to do anything to fight your way in.
Do you think we should trust these folks? That's a tough one. My instinctive answer is no, right? We're reporters. We shouldn't trust them. But they are trying to build things; products are shipping. You can use them to whatever extent you want to use them. They have a vision. You can believe it or not. Are they generally trustworthy in your interactions with them, or the people they work for?
Or the people who work for them?
This is so funny. Back to that podcast episode on tribalism I mentioned earlier. Their advice against tribalism was, you should probably just take people at their word and believe that they believe what they're saying. And I thought that's a great way to look at it.
I do believe that they think they're building AGI and going to change the world and such. In terms of trust, I think it is my job to be skeptical. I don't think I can read a blog and tell our readers, they are definitely going to do this, and just ignore all of the other factors at play here. I think that they have to earn that trust.
I think that this is sort of a pitch we have seen for years. Look at all of the times tech executives have promised to cure death, to get us to Mars, to fix all of these ailments. And here we still are with all of these ailments. So I think that they just have to earn it, and I think that's okay. They can't just demand today that we all trust them, because it's kind of a damaged reputation in Silicon Valley. It's okay. You're going to have to earn it.

There's two ways to keep folks in line. One is ferocious market competition,
then people will just vote with their dollars. The other way is politics where people vote with their votes. I'm not sure that the market competition is producing much alignment, for lack of a better word. Like no one's picking an AI system right now because it's quote unquote safer. They're just picking the one that's in front of them.
And maybe one day they'll just pick the one that's preloaded on their iPhone. On the politics side, California had a bill and Gavin Newsom just vetoed it. It would have made these products safer. Anthropic didn't oppose it. They didn't endorse it. There was some ferocious opposition. Is the politics of this just doomed and we're relying on the market? Yeah.
In terms of SB 1047, which is that regulation in California, that was a really difficult one, because California is filled with these technologists who do not exactly want strict regulation right out of the gate. And Governor Gavin Newsom was lobbied pretty hard against passing this, and people were threatening to leave.
And so much of the California economy does rely on these big spenders coming here and building their technology. So in terms of California regulation, I think that's going to be an uphill battle. I think it's going to rely on federal regulation. And I think that remains to be seen if they'll get that right. I don't know if they have a history of getting that right.
And I think, you know, I wasn't a journalist when Section 230 was passed, but I think that has caused a lot of unease, that we should pass something now to control this before things get worse and we have no control over the technology that runs our society. I get that unease.
I just want to be clear, I was 16 years old when Section 230 was passed. I'm not that old. Sorry. But it is true that we live in the shadow of that law and people have many, many opinions of it.
Yeah.
Here, it just seems a lot simpler, right? Like, I feel like we know how to write product liability laws. Is it just too hard or the tech industry is too good at claiming that no government can ever possibly understand their work?
Well, I think that we're trying to get our hands on this slippery fish. It's so nascent. I don't know if the people who are building this and the people who are regulating this know exactly what to do to fix something that is so new. It feels like trying to regulate Facebook back when it was still a Harvard-project social media platform.
It's just hard to figure out exactly how this will change the world. And I'm not sure promising utopia is going to help. I don't know if promising dystopia is going to help. We just don't know for sure how this is going to shake out.
It kind of sounds like the fact that the big tech companies have a ton of control is the regulating part of the market right now. It's not Gavin Newsom. It's not whatever Biden administration executive orders were passed. It's not any other law. It's not competition between them, even though they all say they're safer. It's maybe just Satya Nadella saying, well, you seem out of control.
I'm going to build my own. Or it's Andy Jassy saying, I want to use AWS for something else. It seems like that is actually the place where the most control over these companies will be expressed.
Right.
Is that who I have to trust? Is it just that Satya's a good guy?
It's so funny, because there's this line in Silicon Valley, the TV show, that I quote in my article: I don't know about you guys, but I don't want to live in a world where someone else makes the world a better place better than we do. That's where we're at right now. Why should we be forced to trust these big tech executives?
Why does it have to be in the hands of just a handful of big tech executives? Have they really proven that they can be trusted with Digital God?
I'm just asking.
Yeah, I don't think so.
So where we're at right now, just to sum this up, is it feels like everyone is racing toward building the same kinds of products against the same vision, at faster and slower rates. Some people think they shouldn't, because they might destroy the world. But if we get it right, everything will be groovy. And no one's really in charge.
Who is to say whether Ilya Sutskever's company, which is literally, I believe, called Safe Superintelligence, is actually safer than Anthropic? There's just a lot of people claiming this thing that they think the market wants, or people might want, or is worth the money. But I don't think the market broadly understands that it even wants that, or how to measure it, or how to say it.
And then the other choice you have is some other body of people, whether that's just the providers of cloud computing or the government, that could make some decisions, and they seem either unmotivated or incapable of making those decisions.
Two things. I don't think anyone was going to choose a safer Facebook. They just want the one that's, as you said, is in front of them, the one that works better, the one they enjoy using. So I think that's how that's going to shake out. I don't think the normal person, my siblings are going to care which one's safer. Who decides if they're safe?
I really do like the idea of them having to talk to the government and be completely and fully transparent about what their models are capable of and what those tests are doing. Because it gives me pause that they're able to say, no, it's fine, we tested it, it's totally chill. They do have some third-party researchers, but it's not as transparent as it could be.
So, yeah, I think regulation would be a good place to start of the government having their own researchers and be like, OK, we are going to test this model for safety. We're not just going to rely on these people to test themselves. And that's how you prove if this is safe, if these are people we can trust.
What's next for these companies? What should people be looking out for?
I think that both are going to be really keen on getting reasoning models out there to the public, at different speeds. They want something that can code faster, that can reason for you reliably. And getting back to agents: I got a demo from OpenAI where the model called a fake dessert shop to place an order, but it did get some things wrong.
That's the future that these companies are seeing is autonomous agents that can reason. So I think that's what we're going to continue seeing. I was promised by OpenAI that we'll start seeing those agents in the wild as soon as early 2025. So we'll see. All right.
Well, I'm going to be hidden away from them safely in a bunker somewhere else. Kylie, thank you so much for coming on the show.
Thank you.
Thanks again to Kylie for joining me on the show, and thank you for listening. I hope you enjoyed it. And please let us know what you think a chill digital god should look like. I'm curious to know what you think. If you have those thoughts, you can email us at decoder at theverge.com. We really do read all the emails. Or you can hit me up directly on threads. I'm at reckless1280.
We also have a TikTok. Check it out. It's at decoderpod. It's a lot of fun. If you like Decoder, please share it with your friends and subscribe wherever you get your podcasts. And if you really love the show, hit us with that five-star review. Decoder is a production of The Verge and part of the Vox Media Podcast Network. Our producers are Kate Cox and Nick Statt. Our editor is Callie Wright.
Our supervising producer is Liam James. The Decoder music is by Breakmaster Cylinder. We'll see you next time.
You can discover insights and learn how to convert digital disruption into revenue growth by reading the 2024 Digital Disruption Report at www.alexpartners.com. That's www.alexpartners.com. In the face of disruption, businesses trust Alex Partners to get straight to the point and deliver results when it really matters.
Support for Decoder comes from ServiceNow. AI is set to transform the way we do business, but it's early days, and many companies are still finding their footing when it comes to implementing AI. ServiceNow partnered with Oxford Economics to survey more than 4,000 global execs and tech leaders to assess where they are in the process. They found their average maturity score is only 44 out of 100.
But a few pacesetters came out on top, and the data shows they have some things in common. The most important one? Strategic leadership. They're operating with a clear AI vision that scales across the entire organization, which is how ServiceNow transforms business with AI.
Their platform has AI woven into every workflow with domain-specific models that are built with your company's unique use cases in mind. Your data, your needs. And most importantly, it's ready now, and early customers are already seeing results. But you don't need to take our word for it.
You can check out the research for yourself and learn why an end-to-end approach to AI is the best way to supercharge your company's productivity. Visit servicenow.com slash AI maturity index to learn more.