
Today, we're talking about DeepSeek, and how the open source AI model built by a Chinese startup has completely upended the conventional wisdom around chatbots, what they can do, and how much they should cost to develop. We're also talking about Stargate, OpenAI's new $500 billion data center venture that's supposed to supercharge domestic AI infrastructure. Both stand in stark contrast with one another, and represent a new, escalating front in the US-China relationship and the geopolitics of AI. Verge senior AI reporter Kylie Robison joins me to break it all down.

Links:
Why everyone is freaking out about DeepSeek | Verge
DeepSeek FAQ | Stratechery
DeepSeek: all the news about the startup that's shaking up AI stocks | Verge
OpenAI and Softbank are starting a $500 billion AI data center company | Verge
The AI spending frenzy is just getting started | Command Line
After DeepSeek, VCs face questions about AI investments | NYT
Satya Nadella on Stargate: 'All I know is I'm good for my $80 billion' | Verge
OpenAI says it has evidence DeepSeek used its model to train competitor | FT
DeepSeek sparks global AI selloff, Nvidia loses about $593 billion of value | Reuters
Four big reasons to worry about DeepSeek (and four reasons to calm down) | Platformer

Credits:
Decoder is a production of The Verge and part of the Vox Media Podcast Network. Our producers are Kate Cox and Nick Statt. Our editor is Ursa Wright. The Decoder music is by Breakmaster Cylinder.

Learn more about your ad choices. Visit podcastchoices.com/adchoices
Support for the show comes from Toyota. What do you get when you mix quality craftsmanship and reliable performance with bold design and effortless sophistication? You get a Toyota Crown. Whether it's a sleek sedan or an impressive SUV, the Toyota Crown family has the car you've been searching for, with a powerful exterior that makes you stand out and a smooth ride that keeps you grounded.
You can learn more at toyota.com slash toyotacrownfamily. Toyota, let's go places. Support for Decoder comes from Indeed. Stop struggling to get your job posts seen on other job sites. With Indeed's sponsored jobs, your post jumps to the top of the page for your relevant candidates so you can reach the people you want faster. There's no need to wait any longer.
Speed up your hiring right now with Indeed. And listeners of this show will get a $75 sponsored job credit to get your jobs more visibility at indeed.com slash decoder. Just go to indeed.com slash decoder right now and support our show by saying you heard about Indeed on this podcast. Indeed.com slash decoder. Terms and conditions apply. Hiring? Indeed is all you need.
You can upgrade your business and get the same checkout Allbirds uses with Shopify. You can sign up for your $1 per month trial period at Shopify.com slash VoxBusiness, all lowercase. Just go to Shopify.com slash VoxBusiness to upgrade your selling today. Shopify.com slash VoxBusiness.
Hello and welcome to Decoder. I'm Nilay Patel, Editor-in-Chief of The Verge, and Decoder is my show about big ideas and other problems. Today, we're talking about the only thing the AI industry and pretty much the entire tech world has been able to talk about for the last week.
DeepSeek, the AI model built by a Chinese hedge fund that's completely upended the conventional wisdom around bleeding edge AI models, what they can do, and importantly, how much they should cost to develop. DeepSeek, if you haven't played with it, looks and works a lot like ChatGPT. There's a website and a mobile app, and you can type into a little text box and have it talk back to you.
What makes it special is how it was built. On January 20th, DeepSeek released a reasoning model called R1, which came just weeks after the company's V3 model, both of which showed some very impressive AI benchmark performance. It quickly became clear that DeepSeek's models perform at the same level, or in some cases even better, than the competing models from OpenAI, Meta, and Google.
And they're totally free to use. But here's the real catch. While OpenAI's GPT-4 reportedly cost as much as $100 million to train, DeepSeek claims that it cost less than $6 million to train R1. In a matter of days, DeepSeek went viral. It became the number one app in the United States. And on Monday morning, the controversy over its underlying economics punched a hole in the stock market.
Panicked investors wiped more than a trillion dollars off tech stocks in a frenzied sell-off earlier this week, and Nvidia in particular suffered a record stock market decline of nearly $600 billion when it dropped 17% on Monday.
That's because for more than two years now, tech executives have been telling us that the path to unlocking the full potential of AI was to throw GPUs at the problem, which is to say, to spend money. Since then, scale has been king.
And scale was certainly top of mind less than two weeks ago when OpenAI CEO Sam Altman went to the White House and announced a new project called Stargate that he claims will spend $500 billion building data centers around the country to supercharge OpenAI's ability to train and deploy new models.
Altman's claim is essentially that you need to spend a lot of money to train AGI, or artificial general intelligence, however that is defined. And once you achieve AGI, however that's defined, the productivity gains across the economy will more than justify the enormous investment. DeepSeek, on the other hand, might be evidence that you don't have to spend all that money.
and that the United States' export controls on NVIDIA chips sold to China might not have been very effective at all. The aftermath of all this has been a bloodbath, to put it lightly. Venture capitalist Marc Andreessen, who has been advising the Trump White House and Trump himself, called DeepSeek AI's Sputnik moment, referencing the Soviet Union's early win in the space race.
And that does appear to be how the AI industry and global financial markets are treating it. In DeepSeek and Stargate, we have the perfect encapsulation of two competing visions for the future of AI. Stargate is closed and expensive and requires placing an ever-increasing amount of money and faith into the hands of OpenAI and its partners.
The other is scrappy and open source, but with major questions around censorship of information, data privacy practices, and whether it's truly as low cost as we're being told. The only thing that's clear is that we've entered a new phase of the AI arms race, and DeepSeek and Stargate represent more than just two different paths towards superintelligence.
They also represent a new escalating front in the US-China relationship and the geopolitics of AI. This is all becoming especially fraught as Trump continues to wreak havoc on foreign relations with new threats of tariffs on foreign semiconductors. There's a whole lot going on here, and this news cycle is moving really fast.
So to break it all down, I invited Verge senior AI reporter Kylie Robison on the show to discuss all the events in the past couple weeks and figure out where the AI industry might be headed next. Okay, DeepSeek, Stargate, and a new AI arms race. Here we go.
First, let's zoom out a bit for some context on how the AI industry got to where it is today and why DeepSeek has proven to be so disruptive to what everyone thought they knew. Since the release of ChatGPT in 2022, AI executives have been clamoring nonstop about the need for more compute, specifically the need for more NVIDIA GPUs.
But behind all that talk of compute has always been a much simpler, more powerful force. Money. Money is what buys all those NVIDIA GPUs and what pays the construction bills for new data centers, and importantly, what pays the exorbitantly high salaries of AI researchers developing frontier models.
And once those models are trained up, it also costs a fortune to run inference on them for businesses and consumers. And only some of those users are paying anything at all to access the models. If you've been following along, you know that the nascent commercial AI industry in the United States hasn't made any money yet, even though it's spending so much.
And the promise of making tons of money down the line is what's been propping up all the sky-high valuations and glossing over the steep losses of companies like OpenAI. And that all really culminated earlier this month with the big Stargate announcement: OpenAI's new joint data center venture with a handful of partners, most importantly Oracle and SoftBank.
The stated goal of Stargate is to, quote, secure American leadership in AI and to also, quote, provide strategic capability to protect the national security of America and its allies.
And while the first Stargate data center is technically underway right now in Texas, details are thin on what shape the whole project will actually take and how it's supposed to actually collect $500 billion in funding over the next four years to accomplish its goals.
The only thing we really know for sure about Stargate is that it was a huge success in generating a lot of PR, both for the Trump White House and for OpenAI. Here's Kylie.
It's supposed to generate headlines. That's what it's supposed to do. It's a joint venture between OpenAI, SoftBank, Arm, MGX, a whole bunch of partners. The goal is over four years, they will spend $500 billion on creating a bunch of new data centers or, you know, putting that infrastructure into existing data centers. The whole goal is to give OpenAI a shitload of compute.
It's light on the details, but The Information reported that there's this interesting split for the ownership of this company. I think, you know, OpenAI gets about half of the ownership. And yeah, this is meant for OpenAI, but, you know, SoftBank, Oracle, MGX, they're all investors in this company, so they should be getting a return is sort of the idea.
Of course, where the money for that compute comes from is a big unanswered question. And it's the sticking point Elon Musk used to criticize the announcement and amp up his feud with Altman. OpenAI certainly doesn't have the money, even though the venture said it would deploy $100 billion immediately to the project.
But it might not matter if the point of Stargate was to hand Trump an early tech industry win and give OpenAI yet more runway to plan for the future.
OpenAI does not have the money and that number is completely made up is my take. $500 billion is a nice big round number that's very splashy and likely it won't reach anywhere close to that commitment.
I've been talking to people about this, and people in San Francisco are typically so pro-Sam Altman that I have been shocked to hear so many people coming to me saying that this feels like complete bullshit. How is OpenAI getting this money? I don't know. Their revenue would not speak to being able to afford this project. But Sam Altman's otherworldly dealmaking abilities...
have me not doubting that they can reach $100 billion this year, but can they reach $500 billion over four years? I don't see that happening.
But again, all of this, like so much of AI over the past few years, is built around promises and now a significant amount of Trump-style showboating. Because if you can convince the people with money and now the people with money who work in and around the Trump administration, then you might just get the funding or the easing of regulatory scrutiny you need to keep the lights on for another year.
Data center measuring contest is like the key word here. They're all just trying to say my data center is bigger. And I even get told this as a journalist, like when I bring this up to PR people of Frontier AI Labs, they're like, well, we have a giant data center in the Midwest. I think it is a lot of PR. It's to show we are working on the biggest and best next thing.
This is what we're using the billions of dollars we raised for. We're using it here. You know, OpenAI is valued at $150 billion, and that is insane to me. So I think, you know, this is them saying, like, look, this is what a $150 billion valuation looks like. We have the biggest data center project in history.
Up until this last week, massive data centers were all the rage. Mark Zuckerberg even went on Facebook the same week of the Stargate announcement to remind everyone that Meta is building a $10 billion data center in Louisiana. Amazon's also planning an $11 billion AWS expansion in Georgia. There's no reason to believe yet that these companies are going to hit pause on any of these plans.
Microsoft has said it's planning to spend $80 billion this year alone on AI-related investment, some of which will go to Stargate, we think, but a lot of which is already planned expansions of its existing Azure infrastructure. And that's why DeepSeek hit so hard. It's arguably the most significant challenge to the AI status quo to date.
In particular, a challenge to the kind of funding that's driven the past few years of the generative AI boom. Because if the claims around DeepSeek are true, then it's shown that companies might not actually need all that compute. And they certainly don't need $500 billion worth of new data center infrastructure to release a competitive model.
DeepSeq released this research paper about a model called V3, which was just designed for general purpose language tasks. And it was efficient and cost effective, which made a lot of AI researchers at the time go, wow, OK, I'm surprised that China could make this with way less resources. They were able to really optimize with what they had.
And then R1 came out, and it sort of blew everyone's minds. It's a reasoning model. And it's supposed to be on par with o1, OpenAI's reasoning model that only came out a few months ago. So people really freaked out about that because usually it takes a lot longer for China to match these capabilities, but they were able to do it in just a few months.
Already, we've seen high-profile industry figures respond to DeepSeek. Sam Altman on Monday called it an, quote, impressive model and added that it was, quote, legit invigorating to have a new competitor. NVIDIA was nice about it, too, with the company releasing a statement calling DeepSeek an excellent AI advancement.
David Sacks, the new AI czar in the Trump administration, was quick to twist the moment around into a criticism of the Biden administration, posting on X that, quote, DeepSeek R1 shows the AI race will be very competitive, and President Trump was right to rescind the Biden EO, which hamstrung American AI companies, without asking whether China would do the same. He did warn, though, that the U.S. can't be complacent. Sacks might be right, but I disagree on how. DeepSeek is a success because it hyper-optimized the GPUs that we were allowing NVIDIA to sell to China, not because China didn't have regulations. Still the wrong outcome, but that doesn't mean Sacks is right.
I really underestimated how much the finance people would freak out. I think that's my tech brain. But two things happened, in my opinion, here. One, Davos happened. And Alexandr Wang, the CEO of Scale AI, which helps provide data for some of these large language models, pointed to the fact that this model existed and that it was on par with o1 per their own benchmarks.
So people noticed that. And then I think the Wall Street Journal wrote a story about DeepSeek. And that's what sent the market freaking out that possibly half a trillion dollars worth of data centers wasn't the best moat. Perhaps we have been just burning money.
We need to take a quick break. We'll be right back.
Support for the show comes from Vanta. Trust isn't just earned, it's demanded. Whether you're a startup founder navigating your first audit or a seasoned security professional scaling your GRC program, proving your commitment to security has never been more critical or more complex. That's where Vanta comes in.
Vanta says they can help businesses establish trust by automating compliance needs across over 35 frameworks like SOC 2 and ISO 27001. They also centralize security workflows, complete questionnaires up to five times faster, and proactively manage vendor risk. In short, Vanta wants to save you time and money.
And a new IDC white paper found that Vanta customers achieve $535,000 per year in benefits. And the platform pays for itself in just three months. You can join over 9,000 global companies like Atlassian, Quora, and Factory who use Vanta to manage risk and prove security in real time. For a limited time, our audience gets $1,000 off Vanta at vanta.com slash decoder.
That's V-A-N-T-A dot com slash decoder for $1,000 off.
Support for this show comes from Oracle. Even if you think it's a bit overhyped, AI is suddenly everywhere, from self-driving cars to molecular medicine to business efficiency. If it's not in your industry yet, it's coming fast. But AI needs a lot of speed and computing power. So how do you compete without costs spiraling out of control? Time to upgrade to the next generation of the cloud.
Oracle Cloud Infrastructure, or OCI. OCI is a blazing fast and secure platform for your infrastructure, database, application development, plus all your AI and machine learning workloads. OCI costs 50% less for compute and 80% less for networking, so you're saving a pile of money. Thousands of businesses have already upgraded to OCI, including Vodafone, Thomson Reuters, and Suno AI.
Right now, Oracle is offering to cut your current cloud bill in half if you move to OCI. For new U.S. customers with minimum financial commitment, offer ends March 31st. See if your company qualifies for this special offer at oracle.com. That's oracle.com.
Support for this podcast comes from Vanta. Trust isn't just earned, it's demanded. Whether you're a startup founder navigating your first audit or a seasoned security professional scaling your GRC program, proving your commitment to security has never been more critical. or more complex. That's where Vanta comes in.
Businesses use Vanta to establish trust by automating compliance needs across over 35 frameworks like SOC 2 and ISO 27001, centralized security workflows, complete questionnaires up to five times faster, and proactively manage vendor risk. Vanta not only saves you time, it can also save you money.
A new IDC white paper found that Vanta customers achieve $535,000 per year in benefits, and the platform pays for itself in just three months. You can join over 9,000 global companies like Atlassian, Quora, and Factory who use Vanta to manage risk and prove security in real time. For a limited time, our audience gets $1,000 off Vanta at vanta.com slash vox.
That's V-A-N-T-A dot com slash vox for $1,000 off.
We're back with The Verge senior AI reporter, Kylie Robison. Before the break, we were talking about some of the early panic reactions to DeepSeek and why it's become such a big deal in such a short amount of time. Part of that has to do with its capabilities, but a lot of it has to do with its cost.
DeepSeek wouldn't exist without a lot of the foundational technology developed here in the United States, including open source advancements by Meta with its Llama models and by OpenAI as well, whose technology AI startups worldwide have been reverse engineering for some time now. On Tuesday, in a pretty funny twist, OpenAI even began to complain that its intellectual property had been violated, telling the Financial Times that it has evidence that DeepSeek used its models to train its own with a technique known in the AI world as distillation.
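Distillation, at a high level, means training a smaller "student" model to imitate the output distribution of a larger "teacher." Here's a deliberately tiny, self-contained sketch of the idea in Python; the three-way output space, the fixed teacher logits, and the finite-difference training loop are illustrative stand-ins, not anything DeepSeek or OpenAI actually does at scale:

```python
import math
import random

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(p_teacher, p_student):
    # Distillation loss: how far the student's distribution is from the teacher's.
    return -sum(t * math.log(s + 1e-12) for t, s in zip(p_teacher, p_student))

# A fixed "teacher" distribution over three possible outputs (illustrative).
teacher_probs = softmax([2.0, 0.5, -1.0])

# The "student" starts from random logits and nudges them toward the teacher.
random.seed(0)
student_logits = [random.uniform(-1, 1) for _ in range(3)]

def loss(logits):
    return cross_entropy(teacher_probs, softmax(logits))

lr, eps = 0.5, 1e-5
for _ in range(500):
    # Finite-difference gradient of the loss with respect to each logit.
    grads = []
    for i in range(len(student_logits)):
        bumped = list(student_logits)
        bumped[i] += eps
        grads.append((loss(bumped) - loss(student_logits)) / eps)
    student_logits = [w - lr * g for w, g in zip(student_logits, grads)]

final = softmax(student_logits)  # ends up very close to teacher_probs
```

At frontier scale the student is a full neural network, the gradients come from backpropagation, and the soft targets come from millions of teacher-generated outputs, but the objective is the same: minimize the gap between the student's and the teacher's output distributions.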
Of course, OpenAI scraped the entire web without permission to train its initial models, so it's pretty hard to complain that its models have now been ripped off, but that's the world we live in.
Regardless, DeepSeek is proof that the closed-source approach to AI, the one that has most advantaged OpenAI up until now, is not as much of a head start as anyone once believed or built a sky-high valuation around. If a startup spun out of a Chinese hedge fund can do this with so little time and so few resources, well, maybe there's no moat around these frontier models at all.
You might have seen Meta's stock bump as well. It seemed like everyone was going down and Meta went up. And my understanding is that perhaps the market is seeing open source as a real competitor, something that closed source companies should worry about.
Some of the sub-tweets from OpenAI VCs and Sam Altman himself at the time when V3 came out were like, well, they can copy our old work, but there's no way they're going to make the best frontier models. No one can really deny that it appears that they really innovated on a few key areas that really optimized these models. And that's what's really interesting.
It's still very early, and there's no definitive takeaway on how DeepSeek might affect the AI industry over the long term. According to Kylie, the major decline in NVIDIA's share price might be much more about the market correcting an obviously overvalued stock than it is about the industry losing faith in the effectiveness of NVIDIA chips.
In her reporting this week, she says experts told her DeepSeek could actually benefit the larger model makers because they'll be able to replicate the efficiency gains on future reasoning models like o1 and R1. This is why everyone keeps talking about Jevons paradox, an economic concept that everyone seems to have learned about in the past several days and that hopefully explains everything.
Basically, it means that making a resource cheaper will lead to increased consumption of that resource. Think of it a little bit like engines and gasoline. If you make an engine more fuel efficient, it uses less gas. So it's cheaper to run, which means you might run it more and eventually use more gas than you were using to begin with.
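The engine-and-gasoline analogy can be put into numbers. This is a back-of-the-envelope sketch with made-up figures, not real fuel economy data:

```python
# Jevons paradox with illustrative numbers: a 2x more efficient engine
# halves the fuel used per mile, but if cheaper driving induces three
# times as many miles, total fuel consumption goes up, not down.
fuel_per_mile_old = 0.05          # gallons per mile (assumed)
miles_old = 10_000                # miles driven per year (assumed)
fuel_old = fuel_per_mile_old * miles_old      # 500 gallons

fuel_per_mile_new = fuel_per_mile_old / 2     # engine is twice as efficient
miles_new = miles_old * 3                     # cheaper driving, so more of it
fuel_new = fuel_per_mile_new * miles_new      # 750 gallons

# Efficiency doubled, yet total consumption rose 50 percent.
```

Swap "gallons" for "GPUs" and "miles" for "tokens" and you have the bull case for NVIDIA: cheaper training per model could mean far more models get trained.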
Everyone likes to talk about Jevons paradox, which is scale will still matter because if we're able to do this so efficiently with lower costs, that means if we do it with way more chips, then it will create better models. So you're going to see a lot of people on the Internet telling people to buy the Nvidia dip because Nvidia will still matter.
And, you know, an expert we talked to in our story was saying, you know, Nvidia was always sort of a bubble. They were always pretty surprisingly overvalued. And this perhaps is just a market correction. When I first saw this news, my take was this was the market's excuse to level it out and bring some of these valuations down to earth. I think there's two arguments to be made.
And a lot of people are peering into the crystal ball and aren't able to predict the future. So the two arguments are, you know, the frontier labs, the people spending the most money in making, you know, the biggest, most capable models are saying, look, you know, if it's more efficient, that is better for us.
On the flip side, people are saying smaller startups and enterprises that want to cut their spend on AI might look at this and be like, wow, we can do something for much cheaper. We don't have to pay OpenAI a bunch of money or Anthropic through the tokens to do the same thing. So it does level the playing field in that sense.
But at the end of the day, the people who are paying for this want the most capable model. And If they're able to get their work done just as efficiently and as well and accurately with something that's as affordable as DeepSeek, maybe they'll use that.
There's obviously a lot of privacy and security implications with DeepSeek having servers in China, but it's an open source model that can be fine-tuned, just like Llama. But yeah, it remains to be seen how this will shake out.
My friend Ben Thompson of the newsletter Stratechery made a similar argument, writing this week that DeepSeek, quote, provided a massive gift to nearly everyone and that the biggest winners are consumers and businesses who can anticipate a future of effectively free AI products and services.
According to Thompson, quote, a world of free AI is a world where product and distribution matters most, unquote. And as we talk about here on Decoder quite a bit, distribution is where big consumer tech companies like Apple and Google still have the most advantage because they control the phones and the app stores.
What is pretty obvious now, even if we don't know exactly what future outcomes will occur because of DeepSeek, is that the AI cost structures we've been operating under are going out the window. The big debate in AI last year was about diminishing returns: whether AI companies were hitting a scaling wall that was growing steeper all the time, and whether that would result in shrinking gains between new releases, no matter how many GPUs we threw at the problem. For instance, we might never see the type of performance jump we saw between GPT-2 and GPT-3 ever again.
But if DeepSeek has proven anything, it's that there's much more to AI advancement than just building the biggest supercluster, buying the most GPUs, and throwing all your money into a furnace.
I don't think it's going to change much other than people sort of asking the question that perhaps critics have been asking for a long time, which is why do we need all of these billions, sometimes trillions of dollars going into these models? It's been written for months now by investors. There's blogs about this.
We have sunk so much money into these models, and we're not exactly seeing the huge returns that we expected. We're not exactly seeing huge differences in the models like we saw right at the beginning. I think that that's just sort of normal trajectory for AI. This is all normal trajectory for chips. They're going to get cheaper and it's going to be more efficient to create these models.
So this is all like linearly very normal. I just think, again, it's this excuse to bring some of these valuations down to earth because Anthropic is valued at 60 billion. OpenAI is valued at 150 billion. And I think people are wondering how that is possible. And this is sort of an excuse to be like, look, you know, you can do it cheaper.
We need to take another quick break. We'll be right back.
It's time to review the highlights. I'm joined by my co-anchor, Snoop. Hey, what up, dog? Snoop, number one has to be getting iPhone 16 with Apple Intelligence at T-Mobile. Yeah, you should hustle down at T-Mobile like a dog chasing a squirrel, chasing a nut. Number two, at T-Mobile, families can switch and save 20% on plans plus streaming services versus the other big guys. What a deal.
Y'all giving it away too fast, T-Mobile. Slow down. Head to T-Mobile.com and get iPhone 16 on them. Yeah, you can save on wireless and streaming versus the other big guys at T-Mobile.com slash switch. Apple Intelligence requires iOS 18.1 or later.
Amazon Pharmacy presents Painful Thoughts.
The guy in front of me in the pharmacy line is halfway through an incredibly detailed 17-minute story about his gout. A story likely more painful than the gout itself.
Next time, save yourself the pain and let Amazon Pharmacy deliver your meds right to your door. Amazon Pharmacy. Healthcare just got less painful.
Okay, business leaders, are you playing defense or are you on the offense? Are you just, excuse me. Hey, I'm trying to talk business here. As I was saying, are you here just to play or are you playing to win? If you're in it to win, meet your next MVP, NetSuite by Oracle. NetSuite is your full business management system in one suite.
With NetSuite, you're running your accounting, your financials, HR, e-commerce, and more all from your online dashboard. One source of truth means every department's working from the same numbers with no data delays. And with AI embedded throughout, you're automating manual tasks, plus getting fast insights for your next move.
Whether you're competing on your home turf or looking to conquer international markets, NetSuite helps you get the W. Over 40,000 businesses have already made the move to NetSuite, the number one cloud ERP.
Right now, get the CFO's Guide to AI and Machine Learning at netsuite.com slash vox. Get this free guide at netsuite.com slash vox.
We're back with Verge senior AI reporter Kylie Robison. Before the break, we were discussing what the longer-term impacts of DeepSeek might be on the AI industry and why it's not exactly doom and gloom for the frontier model makers. In fact, according to Kylie's reporting, many of those companies might be able to benefit from the efficiency gains DeepSeek has accomplished.
There's no reason to believe right now that AI models won't continue to get bigger and more sophisticated at the cutting edge, something that will necessarily require those companies to keep buying more NVIDIA GPUs and building bigger data centers.
Yann LeCun, who is Meta's chief AI scientist and often one of the first people in the industry to call bullshit on popular narratives, says the DeepSeek freakout is, quote, woefully unjustified and based on, quote, major misunderstandings about AI infrastructure investments.
He pointed out that when AI companies and tech companies today say they're going to spend billions of dollars, a good chunk of that money is going towards inference or using the models and not just training them. And total inference costs are only going to increase as multimodal AI with video and audio becomes more popular.
And no matter what, maintaining those models once they've been trained and serving them to millions of customers costs quite a lot. Just because you can train a model cheaply doesn't mean you can run inference on it at the scale OpenAI, Google, and Anthropic do. In other words, we might need those chips and data centers after all.
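LeCun's distinction between one-off training costs and recurring inference costs is easy to sketch with arithmetic. Every figure here is a hypothetical placeholder, not any real company's economics:

```python
# Hypothetical cost model: training is paid once, inference is paid per
# request for as long as the model is in service. All numbers are
# illustrative assumptions, not reported figures.
training_cost = 6_000_000         # dollars, a famously cheap training run
cost_per_query = 0.002            # dollars per request served (assumed)
queries_per_day = 50_000_000      # assumed traffic for a hit consumer app

annual_inference = cost_per_query * queries_per_day * 365

# Inference alone comes to roughly $36.5 million a year, several times the
# training run, and it recurs every year the model stays popular.
```

Under these toy assumptions, serving the model eclipses training it within weeks, which is the core of the argument that cheap training doesn't make the data center buildout pointless.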
And if all that compute isn't being used for AI, well, it might get used for something else.
Our demands and reliance on cloud computing and AI and streaming, all of the things we do when we don't have a server in our home, you know, those demands just keep rising. We saw, like, the listener might remember Netflix, their servers absolutely melting when the Mike Tyson-Jake Paul fight happened. Like, the need for this kind of infrastructure, sure, it's important.
And in a perfect world, some of this money would be used to retrofit old data centers and focus on renewable energy for these data centers and make it all more energy efficient. I don't know if that'll be the case because Mark Zuckerberg's tweeting, we have a data center the size of Manhattan or whatever.
It depends who you ask if we need more data centers or more infrastructure, but I don't think it hurts. We clearly are becoming more reliant on needing this technology.
But that doesn't mean it's smooth sailing for OpenAI and Stargate. The timing of DeepSeek going viral almost immediately after the Stargate announcement has thrown some serious cold water on OpenAI's claims that it needs a half-trillion-dollar data center project to stay competitive.
I asked Kylie about this specifically and about whether OpenAI, with its constant concerns about cash flow, can really pull this off. And most importantly, whether they have anywhere near the amount of money they're claiming to need.
All I want to say is no, and they don't. Goodbye, the end. They don't have the money. I don't know how they survived this. I think the announcement of Stargate and Satya Nadella saying that we're good for our $80 billion and Microsoft sort of piecing out a portion of this contract and putting an end date on it for 2030, I thought all of that, for me, felt like a signal of...
I don't know how OpenAI is going to live up to the promises and hype that it has been drumming up for the last couple of years. That much will be clear at least by the end of the year. If OpenAI's revenue projections fall short, we'll all know about it. It will leak eventually. And same with Stargate. But yeah, OpenAI does not have this money. Here's the thing.
And like I said earlier, is the biggest puzzle piece that we cannot underestimate is that Sam Altman's dealmaking capabilities have been described as almost sociopathic. He knows how to get what he wants. He has known for more than a decade how to get what he wants. And that much is true here.
He was going around the world, and he was laughed out of Japan for saying he wanted $7 trillion for data centers. And that clearly shrunk a lot. I think it'll continue to shrink. But he will figure out how to get what he wants. That is a piece that cannot be underestimated here. So yeah, I think that they'll probably raise more money.
I mean, I can say with probably 99% certainty that they will raise another round this year, maybe two. xAI will raise another round and they'll brag about new data center projects.
On Tuesday, after a full day of utter chaos in the world of AI and a more than trillion-dollar wipeout in tech stocks, Sam Altman posted an unassuming selfie with Microsoft CEO Satya Nadella on his X account. It was the kind of post that I now think of as classic Altman: short, ambiguous, easy to confirm any assumption you already had.
Altman captioned the post, quote, the next phase of the Microsoft-OpenAI partnership is going to be much better than anyone is ready for. He included two exclamation points, just in case.
To me, this was clearly meant as a signal to showcase to the world that Microsoft, despite the DeepSeek noise and a pattern of keeping OpenAI at arm's length in recent announcements and public appearances, was still on board the Altman AGI train. But we don't actually know how any of this will impact Stargate.
The stars aligned perfectly for Sam Altman. If he hadn't been ousted and Microsoft didn't pull back and Trump wasn't elected, Stargate probably wouldn't have happened in the time it did. And then DeepSeek just totally nerfed it, it feels like, just publicly. I don't know if it'll have an actual impact on the project itself because, like I said, I don't see that project working out anyways.
It is not reaching $500 billion whatsoever. But DeepSeek definitely gives everyone who has the money the chance to say, are we sure we need this much money to build what you claim?
Thank you. Thank you. Decoder is a production of The Verge and part of the Vox Media Podcast Network. Our producers are Kate Cox and Nick Statt. Our editor is Ursa Wright. The Decoder music is by Breakmaster Cylinder. We'll see you next time.
Support for the show comes from AlixPartners. If you're in the tech industry, wondering how artificial intelligence is going to affect your business might seem like the new normal by now. AlixPartners is a global consulting firm dedicated to helping you navigate the changing headwinds of AI without getting lost in the noise.
Learn how your business can navigate AI while making sure your strategic initiatives are aligned by reading AlixPartners' latest technology industry insights, available at www.alixpartners.com. That's www.alixpartners.com. In the face of disruption, businesses trust AlixPartners to get straight to the point and deliver results when it really matters.
Support for the show comes from Curiosity Weekly, a new podcast from Discovery. Science is constantly evolving with new discoveries happening every week. And on Curiosity Weekly, they help us unpack the latest science and tech news with expert guests who make it all make sense.
From the research behind our brain activity when we're scrolling on social media to finding out why lefties are sometimes more musical than righties. The breakthroughs never stop. Listen to Curiosity Weekly wherever you get your podcasts.
This episode is brought to you by On Investing, an original podcast from Charles Schwab. I'm Kathy Jones, Schwab's chief fixed income strategist.
And I'm Liz Ann Sonders, Schwab's chief investment strategist. Between us, we have decades of experience studying the indicators that drive the economy and how they can have a direct impact on your investments.
We know that investors have a lot of questions about the markets and the economy, and we're here to help. So download the latest episode and subscribe at schwab.com slash on investing or wherever you get your podcasts.