The Twenty Minute VC (20VC): Venture Capital | Startup Funding | The Pitch

20VC: DeepSeek Special: Is DeepSeek a Weapon of the CCP | How Should OpenAI and the US Government Respond | Why $500BN for Stargate is Not Enough | The Future of Inference, NVIDIA and Foundation Models with Jonathan Ross @ Groq

Thu, 30 Jan 2025

Description

Jonathan Ross is the Co-Founder and CEO of Groq, providing fast AI inference. Prior to founding Groq, Jonathan started Google's TPU effort, where he designed and implemented the core elements of the original chip. Jonathan then joined Google X's Rapid Eval Team, the initial stage of the famed "Moonshots factory," where he devised and incubated new Bets (Units) for Alphabet.

The 10 Most Important Questions on DeepSeek:
How did DeepSeek innovate in a way that no other model provider has done?
Do we believe that they only spent $6M to train R1?
Should we doubt their claims on limited H100 usage?
Is Josh Kushner right that this is a potential violation of US export laws?
Is DeepSeek an instrument used by the CCP to acquire US consumer data?
How does DeepSeek being open-source change the nature of this discussion?
What should OpenAI do now? What should they not do?
Does DeepSeek hurt or help Meta, who already have their open-source efforts with Llama?
Will this market follow Satya Nadella's suggestion of Jevons Paradox?
How much more efficient will foundation models become?
What does this mean for the $500BN Stargate project announced last week?

Transcription

0.069 - 6.217 Harry Stebbings

So everyone's seen the news about DeepSeek today. Is it as big a deal as everyone is making it out to be?


6.755 - 18.758 Jonathan Ross

Yes, it is Sputnik 2.0. It is true that they spent about six million or whatever it was on the training. They spent a lot more distilling or scraping the OpenAI model.


18.958 - 34.783 Jonathan Ross

I can't speak for Sam Altman or OpenAI, but if I was in that position, I would be gearing up to open source my models in response because it's pretty clear you're gonna lose that, so you might as well try and win all the users and the love from open sourcing. Open always wins.


35.223 - 60.545 Harry Stebbings

Always. This is 20VC with me, Harry Stebbings, and today we focus on DeepSeek. As our guest put it today, this is Sputnik 2.0. And joining me for the discussion is one of the best placed in the business: Jonathan Ross, co-founder and CEO of Groq, providing fast AI inference. Prior to founding Groq, Jonathan started Google's TPU effort, where he designed and implemented the core elements of the original Google chip.


60.805 - 83.785 Harry Stebbings

But before we dive in today, here are two fun facts about our newest brand sponsor, Kajabi. First, their customers just crossed a collective $8 billion in total revenue. Wow. Second, Kajabi's users keep 100% of their earnings, with the average Kajabi creator bringing in over $30,000 per year. In case you didn't know, Kajabi is the leading creator commerce platform with an


84.005 - 104.021 Harry Stebbings

all in one suite of tools including websites, email marketing, digital products, payment processing and analytics for as low as $69 per month. Whether you are looking to build a private community, write a paid newsletter or launch a course, Kajabi is the only platform that will enable you to build and grow your online business without taking a cut of your revenue.


104.461 - 128.813 Harry Stebbings

20 VC listeners can try Kajabi for free for 30 days by going to kajabi.com forward slash 20VC. That's kajabi.com, K-A-J-A-B-I.com forward slash 20VC. Once you've built your creator empire with Kajabi, take your insights and decision-making to the next level with AlphaSense, the ultimate platform for uncovering trusted research and expert perspectives.


128.953 - 146.813 Harry Stebbings

As an investor, I'm always on the lookout for tools that really transform how I work. Tools that don't just save time, but fundamentally change how I uncover insights. That's exactly what AlphaSense does. With the acquisition of Tegus, AlphaSense is now the ultimate research platform built for professionals who need insights they can trust fast.


147.213 - 168.447 Harry Stebbings

I've used Tegus before for company deep dives right here on the podcast. It's been an incredible resource for expert insights. But now with AlphaSense leading the way, it combines those insights with premium content, top broker research, and cutting-edge generative AI. The result? A platform that works like a supercharged junior analyst, delivering trusted insights and analysis on demand.


168.907 - 189.075 Harry Stebbings

AlphaSense has completely reimagined fundamental research, helping you uncover opportunities from perspectives you didn't even know existed. It's faster, it's smarter, and it's built to give you the edge in every decision you make. To any VC listeners, don't miss your chance to try AlphaSense for free. Visit alphasense.com forward slash 20 to unlock your trial.


189.155 - 204.479 Harry Stebbings

That's alphasense.com forward slash 20. And speaking of incredible products, what comes to mind when you think about business banking? Probably not speed, ease, or growth. I'm willing to bet that's because you're not using Mercury.


204.619 - 225.13 Harry Stebbings

With Mercury, you can quickly send wires and pay bills, get access to credit sooner to hit the ground running faster, unlock capital that's designed for scaling, and see all these money moves all in one place. I speak to dozens of founders every week, and most of them are using Mercury because they're super smart and that's what you have to be using.


225.27 - 246.689 Harry Stebbings

Visit Mercury.com to experience it for yourself. Mercury is a financial technology company, not a bank. Banking services provided by Choice Financial Group, Column NA, and Evolve Bank and Trust, members of FDIC. You have now arrived at your destination. Jonathan, thank you so much for joining me today. I so appreciate you doing this emergency podcast with me.


246.989 - 266.167 Jonathan Ross

No problem. But before we start, can I just say one thing? I think you have the most amazing, unique go-to-market that I've ever seen in my life for a podcast. I've never seen this before. I think your strategy is you're literally interviewing every single audience member, forcing them to watch videos and get addicted to you.


268.16 - 285.445 Harry Stebbings

I mean, I thought you were going to say my accent, but I'm totally going to take that. That's wonderful. And yes, you're absolutely right. And do things at your own scale. But I do want to start. Obviously, everyone's just talking about DeepSeek. A little bit of context. Why are you so well-placed to speak about DeepSeek? And let's just start there for some context.


286.001 - 299.887 Jonathan Ross

Well, my background, so I started the Google TPU, the AI chip that Google uses, and in 2016 started an AI chip startup called Groq, with a Q, not with a K, that builds AI accelerator chips, which we call LPUs.


300.464 - 308.753 Harry Stebbings

Okay, so everyone's seen the news about DeepSeek today. I want to just start off by saying, is it as big a deal as everyone is making it out to be?


309.033 - 333.347 Jonathan Ross

Yes, it's Sputnik. It is Sputnik 2.0. Even more so, you know that story about how NASA spent a million dollars designing a pen that could write in space and the Russians brought a pencil. That just happened again. So it's a huge deal. Why is it such a huge deal? So up until recently, the Chinese models have been behind sort of Western models.


333.447 - 356.706 Jonathan Ross

And I say Western, including like Mistral as well and some other companies. And it was largely focused on how much compute you could get. Most people actually don't realize this. Most companies have access to roughly the same amount of data. They buy them from the same data providers and then just churn through that data with a GPU and they produce a model and then they deploy it.


357.146 - 374.883 Jonathan Ross

And they'll have some of their own data and that'll make them subtly better at one thing or another. But they're largely all the same. More GPUs, the better the model because you can train on more tokens. It's the scaling law. This model was supposedly trained on a smaller number of GPUs and a much, much tighter budget.


375.284 - 394.283 Jonathan Ross

I think the way that it's been put is less than the salary of many of the executives at Meta, and that's not true. There's an element of marketing involved in the DeepSeek release. It is true that they trained the model on approximately $6 million for the GPUs, right? They claim 2000


395.706 - 422.25 Jonathan Ross

GPUs for, I think it was 60 days, which, by the way, don't forget, was about the same amount of GPU time as the original, I believe, Llama 70B: 4,000 GPUs for 30 days. Now more recently, Meta has been training on more GPUs, but Meta hasn't been using as much good data as DeepSeek, because DeepSeek was doing reinforcement learning using OpenAI.
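
Just to sanity-check the GPU-time comparison being made here, a quick back-of-envelope calculation using the approximate figures quoted above (these are the speaker's recollections, not verified numbers):

```python
# Rough GPU-time comparison, using the approximate figures quoted in the conversation.
deepseek_gpu_days = 2_000 * 60   # ~2,000 GPUs for ~60 days (DeepSeek, as claimed)
llama_gpu_days    = 4_000 * 30   # ~4,000 GPUs for ~30 days (original Llama run, as recalled)

print(deepseek_gpu_days, llama_gpu_days)   # 120000 120000 -> about the same total GPU time
```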


422.93 - 424.911 Harry Stebbings

Is this distillation, just so I understand?


424.931 - 425.512 Jonathan Ross

Yes, exactly.


425.872 - 436.379 Harry Stebbings

And so can you just help me and help the audience understand what is distillation in this regard, and how have DeepSeek been using distillation to get better quality output through OpenAI data?


436.799 - 455.512 Jonathan Ross

It's a little bit like speaking to someone who's smarter and getting tutored by someone who's smarter. You actually do better than if you're speaking to someone who's not as knowledgeable about the area or giving you wrong answers. First of all, before we get into any of this, I need to start with the scaling laws. These are like the physics of LLMs.


456.033 - 475.707 Jonathan Ross

And there's a particular curve and the more tokens, which are sort of the syllables of an LLM, they don't match up exactly with human syllables, but kind of. So the more tokens that you train on, the better the model gets. But there's sort of these asymptotic returns where it starts trailing off.


476.167 - 493.312 Jonathan Ross

The thing about the scaling law that everyone forgets, and that's why everyone was talking about how it's like the end of the scaling law, we're out of data on the internet, there's nothing left. What most people don't realize is that assumes that the data quality is uniform. If the data quality is better, then you can actually get away with training on fewer tokens.
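
For reference, the scaling-law shape being described is usually written in the Chinchilla form below; the constants are fitted empirically and are not from this conversation. The data-quality point amounts to saying better data behaves like a larger effective token count, so you reach the same loss with fewer raw tokens.

```latex
% Chinchilla-style scaling law (illustrative form; E, A, B, alpha, beta are fitted constants)
% N = number of parameters, D = number of training tokens
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
% Higher-quality data acts like a larger effective D (or a smaller B),
% so the same loss is reachable with fewer raw tokens.
```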


493.832 - 518.674 Jonathan Ross

So going back to my background, one of the fun things that I got to witness, I wasn't directly involved, was AlphaGo. Google beat the world champion, Lee Sedol, in Go. That model was trained on a bunch of existing games. But later on, they created a new one called AlphaGo Zero, which was trained on no existing games. It just played against itself. So how do you play against yourself and win?


518.714 - 539.19 Jonathan Ross

Well, you train a model on some terrible moves. It does okay. And then you have it play against itself. And when it does better, you train on those better games. And then you keep leveling up like this, right? So you get better, better data. The better your model is when it outputs something, the better the result, the better the data.


539.61 - 551.156 Jonathan Ross

So what you do is you train a model, you use it to generate data, and then you train a model and you use it to generate data and you keep getting better and better and better. So you can sort of beat the scaling law problem.


551.816 - 574.12 Jonathan Ross

One quick hack to get past all of that stepping up is, if there's a really good model already right there, just have it generate the data and you go right up to where it is. And that's what they did. It is true that they spent about six million or whatever it was on the training. They spent a lot more distilling or scraping the OpenAI model.
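
A minimal sketch of what that kind of distillation step looks like in code, assuming you have some way to query the stronger "teacher" model and to fine-tune the smaller "student"; the function names below are placeholders for illustration, not details from the episode:

```python
# Teacher -> student distillation, schematically. `teacher_generate` and
# `finetune_student` are stand-ins for whatever API and training stack you actually use.

def teacher_generate(prompt: str) -> str:
    """Placeholder for a call to the stronger model's API."""
    return "teacher answer for: " + prompt

def finetune_student(examples: list[tuple[str, str]]) -> None:
    """Placeholder for supervised fine-tuning of the smaller model."""
    print(f"fine-tuning student on {len(examples)} teacher-labelled examples")

prompts = ["Explain Jevons paradox in one sentence.", "What is 17 * 24?"]

# 1. Have the stronger model label the prompts (the "scraping"/distillation step).
dataset = [(p, teacher_generate(p)) for p in prompts]

# 2. Train the smaller model on the teacher's outputs instead of raw web data.
finetune_student(dataset)
```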


574.517 - 583.925 Harry Stebbings

So they scrape the OpenAI model, they get this higher quality data from that and from refining it, and then they get greater, higher quality output, correct?


584.71 - 601.796 Jonathan Ross

Correct. And all that said, they did a lot of really innovative things. That's what makes it so complicated, because on the one hand, they kind of just scraped the OpenAI model. On the other hand, they came up with some unique reinforcement learning techniques that are so simple. What did they do that was so impressive?


601.876 - 607.138 Harry Stebbings

Because I think a lot of people want to just say, oh, finally, the Chinese copy and duplicate as they always have done.


607.518 - 621.904 Jonathan Ross

No, they came up with innovative stuff. But actually, the best way to describe it: have you ever taken a test where you got an answer right, and your professor marked it wrong? And then you go back to the professor and you have to argue with them and everything. And it's a pain, right?


622.438 - 642.384 Jonathan Ross

Well, if there is only one answer, and it's a very simple answer, and you say, write that answer in this box, then there is no arguing. You either get it right or not, right? So what they did was, rather than having human beings check the output and say yes or no or whatever, what they did was they said, here's the box. There's literally some code to say here's a box.


642.904 - 660.472 Jonathan Ross

I'll put the answer here and then check it. And if it's correct, we have the answer. If not, we don't. No need to involve a human. Completely automated. Can OpenAI not just do distillation on DeepSeek's model then? They don't need to because they're actually better still. They're a little bit better. They could, but why would they?
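
A toy illustration of the "answer in a box" idea: the reward is just a comparison against a known-correct answer, so no human grader is needed. The tag format and reward values below are assumptions for illustration, not DeepSeek's actual implementation:

```python
import re

# Verifiable reward, schematically: check whether the boxed answer matches the known one.
ANSWER_BOX = re.compile(r"<answer>(.*?)</answer>", re.S)

def reward(model_output: str, gold_answer: str) -> float:
    """1.0 if the boxed answer matches the known-correct answer, else 0.0."""
    match = ANSWER_BOX.search(model_output)
    if match is None:
        return 0.0                                            # no box, no credit
    return 1.0 if match.group(1).strip() == gold_answer.strip() else 0.0

print(reward("working... <answer>408</answer>", "408"))       # 1.0
print(reward("working... <answer>410</answer>", "408"))       # 0.0
```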


660.952 - 670.937 Harry Stebbings

Do we buy their GPU usage? Alex Wang, who we both know, was like, nah, they've got 50,000 H100s. Do we buy the GPU usage?


671.497 - 690.423 Jonathan Ross

Or is that questionable doubt there? I don't think you have to disbelieve it because of the quality delta. However, why would they try and smuggle in GPUs when all they'd have to do is log into any cloud provider and rent GPUs? This is like the biggest gaming hole in the whole way that export control is done.


690.683 - 708.407 Jonathan Ross

You can literally log in, you can swipe a credit card, whatever, and just like pay and get GPUs to use. So export laws are necessary then? They're good, but the problem is it's like the Maginot Line. You just go around it. So you need to like seal it up a little more. There's a little bit of room left to go here.


708.707 - 725.794 Jonathan Ross

Keep in mind, OpenAI was accidentally, effectively subsidizing the training of this model, because they were using OpenAI, right? And rumors are that OpenAI may not be completely profitable yet in terms of every token in the API; on the subscriptions maybe, but not in the API.


726.474 - 737.764 Jonathan Ross

And so each one that they generate, effectively, they were losing a little bit of money while DeepSeek was getting training data. Now, by the way, OpenAI probably still has that data. In theory, they could probably just train on it.


738.164 - 745.05 Harry Stebbings

Josh Kushner said in a tweet today, though, that this would likely be a violation of US export rules. Do you think that's not true?


745.632 - 769.521 Jonathan Ross

I'm not aware of where it would be an export issue. I do know that many people log into cloud providers and just use them from remote. One of the problems, so we actually block IP addresses from China, and I believe we might be unique in doing that. It's also a little bit fruitless because someone could just like rent a server anywhere, log into us from there. Then there's nothing we can check.


769.921 - 778.848 Harry Stebbings

You talked there about blocking IP addresses from China. There's a lot of concern about US customer data going back to China. Do you think that is a legitimate and justified concern?


779.248 - 802.565 Jonathan Ross

Yes, it's probably the most significant concern. There are other concerns, but that's probably the most significant, because people don't think about it. They're so used to using these services. When you use one of these other services, you might be shocked to hear this. When you say delete... what they do is they write 'delete' right next to your data. They don't actually delete it. They just mark it 'delete'.


802.845 - 818.954 Jonathan Ross

When you later come back and ask for your data, they give it to you with the word delete right next to it. It's still there. And these are well-meaning companies. Do you really think like the CCP doesn't have all your data and isn't going to look it up later? Some governments are more aggressive than others.


819.414 - 837.178 Jonathan Ross

And if they have access to your data, not even your data, it could be your next door neighbor's data. Your next door neighbor might put something in there that accidentally gives information away that makes you more vulnerable. Now the CCP has something. Maybe you had some package delivered and they put a complaint somewhere and whatever.


837.459 - 842.96 Jonathan Ross

Like you might not even do it yourself, but other people around you, the health data of a spouse, right?
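
The "write delete right next to your data" pattern described above is what engineers usually call a soft delete. A schematic sketch, with made-up record and field names purely for illustration:

```python
# Soft delete vs. hard delete, schematically (names are made up for illustration).
records = {42: {"user": "harry", "note": "private data", "deleted": False}}

def soft_delete(record_id: int) -> None:
    records[record_id]["deleted"] = True   # a flag is written; the data itself stays on disk

def hard_delete(record_id: int) -> None:
    del records[record_id]                 # the data is actually gone

soft_delete(42)
print(records[42])   # {'user': 'harry', 'note': 'private data', 'deleted': True}
```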


843.4 - 852.968 Harry Stebbings

Jonathan, I'm going to avoid the British indirectness. Do you think DeepSeek is an instrument that will be used by the CCP to increase control on Western democracies?


853.587 - 878.444 Jonathan Ross

Yes, but I don't think it's DeepSeek that's doing it. So you have to understand any company that operates in China and Hong Kong, the one country, two systems thing didn't quite work out as anticipated, or maybe as anticipated, but not as stated. They have no choice. In 2016, when Groq started, we decided that we were not going to do business in China. This was not a geopolitical decision.


878.484 - 900.212 Jonathan Ross

This was purely commercial. And what it was, was we kept seeing companies like Google, Meta fail over and over again, trying to win in China. The formula is actually pretty simple. You're not allowed to make net money. You're allowed to spend more money in China. But the moment that you start to become profitable or anywhere near profitable, all of a sudden there's a thumb on the scale.


900.612 - 922.55 Jonathan Ross

Companies that manufacture a lot in China and send more money to China can actually be successful there. They can sell things there. Yeah, it's a pretty simple formula. You must send more money to China than you take out. But at the same time, they also require that you hand over all data. And not only that, they also require that certain answers be in a form that they find acceptable.


922.85 - 942.877 Jonathan Ross

So, for example, one of the more common ones that you see about DeepSeek right now is when you ask about Tiananmen Square. If the temperature is low on the model (and temperature, we don't need to get into that, it's complicated, but low basically means low creativity), then it's actually going to give you an answer that basically says, I don't want to talk about that.


942.937 - 961.288 Jonathan Ross

It's a sensitive topic, but you ask it about other things that are sensitive topics elsewhere in the world. And it'll just answer. But what happens if the CCP requires that they start to say, what about TikTok? Should it be banned? Absolutely not. Here's why. And it gives you a cogent reason. That's kind of scary.
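
For anyone unfamiliar with the "temperature" he mentions above: it is the standard sampling knob where the model's scores are divided by the temperature before the softmax, so low values concentrate probability on the single most likely answer and high values flatten the distribution. A minimal sketch with illustrative numbers:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Temperature-scaled softmax, as used when sampling tokens from an LLM."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                       # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                              # scores for three candidate tokens
print(softmax_with_temperature(logits, 0.1))          # low temperature: nearly all mass on token 0
print(softmax_with_temperature(logits, 1.5))          # high temperature: flatter, more "creative"
```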


961.709 - 976.6 Harry Stebbings

Jonathan, what do we do from here? I share your concerns completely. My challenge is TikTok you can ban and shut off. They would not sell the algo. That is a closed end product that we can ban tomorrow if we really want to. Here, it's open source.


977.06 - 987.483 Jonathan Ross

Yeah. And worse. So we up until recently refused to run any Chinese models and we had to make a very difficult decision on DeepSeek. We now have it on our API at Groq.


987.903 - 990.884 Harry Stebbings

Why did you decide that you would break the rule for DeepSeek?


991.384 - 1012.19 Jonathan Ross

So what it came down to was when we saw DeepSeek become the number one app on the app store, the realization was people were going to be putting their data in there. And what we want to make sure is that you actually have an option. So we store nothing. There is no 'delete' or whatever; we just store nothing. We don't even have hard drives. We have DRAM.


1012.65 - 1032.023 Jonathan Ross

And when the power goes off, everything goes away. So we wanted to make sure that there was an alternative where when you use DeepSeek's model, your data is not going to the CCP. Well, right now, the CCP is probably going to be taking the safeties off the weapons. They're going to be like, why are you making this model open source? Please direct your data towards us.


1032.243 - 1053.52 Jonathan Ross

Go win a bunch of customers this way. But now we want the data. Right. And so they're going to change the strategy. But remember, DeepSeek is real. I mean, it's a hedge fund. They're doing this themselves and they're just influenced by the CCP. And the CCP, now that they've seen the success of this, might see it as yet another TikTok.


1053.941 - 1058.449 Harry Stebbings

My question to you is, how long is it before the US reacts to prevent this?


1059.193 - 1081.909 Jonathan Ross

One question to ask is, are we going to be talking about R1 for the next six months? And the answer is absolutely not. We might be talking about R2 and R3 and R4, but R1 was one shot. The question is, are they going to keep coming up with very interesting things? Are we going to cat and mouse it? Is everyone going to learn from this? The biggest problem is,


1082.529 - 1099.364 Jonathan Ross

this has just made it absolutely nakedly clear that the models are commoditized, right? You've been asking the question, right? Like if there was any doubt before, that doubt's over. So what is the moat? And for me, I love Hamilton Helmer's Seven Powers, right?


1099.624 - 1104.709 Harry Stebbings

One of my favorites. I do it for every single investment we do. We have to fill it out. Every single person must fill it out. So yes.


1105.169 - 1128.277 Jonathan Ross

So marketing is the art of decommoditizing your product. And the seven powers are seven great ways to decommoditize your product. Scale economies, network effects, brand, counter-positioning, cornered resource, switching costs, process power, right? The question is, who's going to do what? OpenAI, and you've got to give Sam Altman and that team credit.


1128.598 - 1148.791 Jonathan Ross

They've got amazing brand power, like no one else in this space. And that's going to serve them for a really long time. But what you see Sam trying to do is scale, right? He's trying to get scale. That's why we hear about Stargate and $500 billion, right? That's the power he would like to have, but the power he has right now is brand. And he's trying to bridge that. But what about the others?


1149.271 - 1160.96 Harry Stebbings

I'm sorry, does this news not ridicule the $500 billion announcement? At a time when we've seen increasing efficiency at a scale like never before with DeepSeek today, the $500 billion seems ridiculous.


1161.388 - 1184.544 Jonathan Ross

Actually, I don't think it's enough spending. And the reason is, so we saw this happen at Google over and over again. We do the TPU. So why did we do the TPU? The speech team trained a model. It outperformed human beings at speech recognition. This was like back in 2011, 2012. And so Jeff Dean, most famous engineer at Google, gives a presentation to the leadership team. It's two slides.


1184.665 - 1200.736 Jonathan Ross

Slide number one: good news, machine learning finally works. Slide number two: bad news, we can't afford it. And we're Google. We're going to need to double or triple our global data center footprint at probably a cost of $20 to $40 billion. And that'll get us speech recognition. Do you also want to do search and ads?


1201.197 - 1218.75 Jonathan Ross

There's always this giant mission accomplished banner every time someone trains a model. And then they start putting it into production. And then they realize, oh, this is going to be expensive. This is why we've always focused on inference. And so now think about it this way. At Google, we always ended up spending 10 to 20 times as much on the inference as the training back when I was there.


1219.151 - 1232.322 Jonathan Ross

Now the models are being given away for free. How much are we going to spend on inference? And now with the test-time compute, I've asked questions of DeepSeek where it took 18,000 intermediate tokens before it gave me the answer.
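
To put a rough number on why test-time compute matters for inference bills: if a reasoning model emits 18,000 intermediate tokens before a short answer, you pay for all of them. The per-token price and answer length below are made-up illustrative figures, not quoted rates:

```python
# Back-of-envelope cost of test-time compute (all figures illustrative, not quoted rates).
price_per_million_tokens = 2.00            # USD, assumed for illustration
reasoning_tokens = 18_000                  # intermediate "thinking" tokens, as described above
answer_tokens = 500                        # assumed length of the visible answer

cost_plain     = answer_tokens / 1e6 * price_per_million_tokens
cost_reasoning = (reasoning_tokens + answer_tokens) / 1e6 * price_per_million_tokens

print(f"plain answer:    ${cost_plain:.4f}")      # $0.0010
print(f"with reasoning:  ${cost_reasoning:.4f}")  # $0.0370, roughly 37x the tokens per query
```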


1239.626 - 1254.815 Jonathan Ross

I mean, it just makes sense, right? You don't train to become, you know, a cardiovascular surgeon for 95% of your life and then perform for 5%. It's the opposite. You train for a little and then you do it for the rest of your life.


1255.155 - 1260.658 Harry Stebbings

Do you think the U.S. should put sanctions on DeepSeek to prevent the CCP using it for data capture on U.S. citizens?


1261.294 - 1282.663 Jonathan Ross

I don't know what the solution is. There's carrot and there's stick, right? So you can either use a stick, block it. That might be effective. I don't know that the U.S. has really done that before. There's also the carrot, which is it's kind of interesting how it's being offered for free in China and not just in China, but to anyone else. And then others are doing that, too.


1282.783 - 1288.689 Jonathan Ross

Is it possible the CCP is underwriting that because they want the data? Dude, they're doing it with the car industry.


1288.99 - 1296.027 Harry Stebbings

The subsidization of Chinese cars, with BYD in particular destroying the European car market, is absolutely that.


1296.542 - 1318.239 Jonathan Ross

The thing is, we have a lesson from the Cold War, which was mutually assured destruction. The problem is we do some sort of tariff and then we do a tariff back. There needs to be some sort of automated response of like, if you do this, we will respond. If you subsidize this industry, we will automatically subsidize the equivalent industry. Just automatic.


1318.539 - 1340.926 Jonathan Ross

So don't do it because there's no benefit to you. Does the fact that it's open source, how does that change everything? It's the only reason people are using it. If it wasn't open source, it wouldn't have gotten the excitement. And open always wins. Always. Keep in mind, Linux won back when people didn't trust open source. They thought it was less secure. They thought the features were worse.


1340.946 - 1351.174 Jonathan Ross

It was more buggy. And it still won. Now people expect open to be more secure, less buggy, and have more features. So how is proprietary ever going to win?


1351.194 - 1368.569 Harry Stebbings

Everyone always says that actually distribution is one of the major advantages that ChatGPT, and hence OpenAI, has, especially over the other providers. Every single day that DeepSeek is out and is being used so pervasively, it is diminishing the value of OpenAI. Yeah, agree or disagree?


1368.949 - 1391.239 Jonathan Ross

Agree. Especially for the pricing, because they're losing their pricing power on this. I can't speak for Sam Altman or OpenAI or anything like that. But if I was in that position, I would be gearing up to open source my models in response, because it's pretty clear you're going to lose that. So you might as well try and win all the users and the love from open sourcing.


1391.678 - 1399.228 Jonathan Ross

Otherwise, like you're already at a point where you're going to be using your other powers like brand and so on. I don't know why you try and keep that internal anymore.


1399.328 - 1404.114 Harry Stebbings

Would that be possible? And would that not cannibalize that core main line of revenue?


1404.455 - 1428.498 Jonathan Ross

But how would it cannibalize it any other way? Remember, distribution, right? How many people are going to buy something because they trust Dell? People trust Dell. Dell has earned their reputation over the course of decades. Supermicro built some interesting hardware, but look at what they've been going through recently. you know, there's a pro and con, right? Cheaper, trusted.


1428.858 - 1440.961 Jonathan Ross

You've got to make a decision. OpenAI has been around for a while. Most people think of them synonymously as AI. They could just switch to DeepSeek and people would still use them. It's brand. It's one of the seven powers.


1441.442 - 1446.463 Harry Stebbings

So if you were OpenAI and Sam today, you would switch to open source and offer it for free.


1446.997 - 1465.255 Jonathan Ross

I would. And there's probably more cleverness. They could probably strike some deals before they do it or whatever, but that would be the move that I would make. And also it would be a position of strength. The only problem is the timing because if it happens right after deep seek, it looks like a response as opposed to an intentional thing. So I don't know how you do that.


1465.676 - 1472.269 Jonathan Ross

Do you not just own that it's a response? Maybe, you know, we had to respond. We're better. Let's see which model people choose.


1472.649 - 1478.871 Harry Stebbings

How do we think about Meta? Meta shares the open source values that DeepSeek espoused. Does this help or hurt Meta?


1479.432 - 1495.118 Jonathan Ross

I think one of the ways that we've been looking at LLMs is a little bit like you look at an open source project, software project like Linux or something. The thing is, Linux has switching cost. And I think what we've discovered is LLMs have no switching cost whatsoever.


1495.558 - 1502.702 Harry Stebbings

I swear the analogy to cloud doesn't hold up at all because everyone's like, oh, it's like cloud. There's going to be a couple of cool vendors and actually they're going to win.


1503.043 - 1521.754 Jonathan Ross

No, you don't really change your cloud very often. Okay, so let's start mapping the seven powers to the top tech companies. So I would say Microsoft's biggest strength is switching cost, right? You go into a room full of people and you're like, who uses Microsoft? A bunch of hands go up and you're like, who likes using Microsoft? Hands go down. It's very largely switching cost.


1522.074 - 1543.439 Jonathan Ross

So you go into, you know, Gen AI. Is that a thing that gets disrupted? You look at meta, it's network effects. They could literally give every piece of technology away for free. I am completely jealous of that because if I had that right now, I would open source everything. Because then you don't have to worry about it and you get everyone helping you.


1543.88 - 1561.597 Jonathan Ross

So I think meta is sort of, because of the network effect thing, always in a position where open source is to their advantage. It almost doesn't matter where it comes from. Now, I'm sure that they would prefer to have the Linux of LLMs, but I think the more it goes open source, the more of an advantage they have inherently.


1561.937 - 1563.778 Harry Stebbings

If you were Meta, would you do anything different?


1564.358 - 1586.07 Jonathan Ross

Meta is an amazing competitor. What they would normally do if this was some sort of proprietary social mechanism, they would try and replicate and then they would compete and they would say, come join or not. I don't think the come join works here, but the beautiful thing is all of the information for this model is available. Meta has already been doing this. They have way more compute.


1586.37 - 1596.206 Jonathan Ross

The question is, are they willing to scrape OpenAI like DeepSeek did? They've been super careful on everything that they've been doing. And so that's the disadvantage.


1596.721 - 1600.062 Harry Stebbings

Do they not put morals aside to win? This is the AI arms race.


1600.502 - 1619.026 Jonathan Ross

And I think that's going to happen. I think people will be like, you cannot lose. And so what it's done is it's changed the game, right? So, OK, let's talk about Europe for a minute. We almost forgot about Europe. It feels like with Europe, there's a lack of a willingness to take risk. There's a black mark if you get it wrong.


1619.466 - 1645.096 Jonathan Ross

Everything's about downside protection, whereas in the US it's like, that was a great effort, you failed, but I'm going to fund you again, right? So there's that difference. But when you look at the US and then you look at China, China practices R, D and T: research, development, theft. It's just part of the culture, and it's not just against Western companies, it's against each other too. The difference is, if you're a Western company,


1645.496 - 1669.356 Jonathan Ross

then the government steals from the Western company and then provides it to the Chinese companies, which is less fair. The famous stories of turning on Huawei switches and you see Cisco's logo and all the bugs, right? So is that a new paradigm? I really hope not. Like for Europe to compete with the US, Europe has to adopt a more risk on attitude.


1669.716 - 1680.428 Jonathan Ross

Does the West have to adopt a more theft on attitude? I really hope not. Like that's just like viscerally disgusting to me. I'm like literally repulsed by the idea.


1680.808 - 1688.493 Harry Stebbings

Are we not being idealistic? If you're running in a race with someone who's willing to take steroids, if you want to win, you're going to have to take steroids too.


1688.853 - 1710.546 Jonathan Ross

And then everyone is taking steroids. Whereas if no one was taking it, then everyone's healthier and you have a real competition. Yeah, it's a real problem. And the question is, can governments get involved? Here's the thing. I would love nothing more than to compete directly with Chinese companies on a fair footing. They have really smart people. DeepSeek has proven this. Really smart people.


1710.846 - 1720.251 Jonathan Ross

But when the government keeps putting its thumb on the scale, we're going to try and avoid that competition wherever we can. And now there's no avoiding it. Maybe the governments just have to get involved.


1720.651 - 1734.458 Harry Stebbings

But dude, I'm being blunt. Xi Jinping cares about one thing, power retention. And growth is the only thing that matters to him. And AI is central to that. He will do whatever it takes to win. Having some rational discourse about some rules of play is bluntly unrealistic.


1734.858 - 1755.423 Jonathan Ross

Okay, and it gets worse than that. China has a lot of advantages. The chief advantage is the number of people they have. Now, number of people is not sufficient. So you also have India. And India has an advantage from the number of people, but China has out-executed. In fact, India was asking China for some time to help build out the roads and infrastructure. They've really mastered that, right?


1755.643 - 1780.602 Jonathan Ross

But people and sort of organization, discipline, alignment, right? And so what is the concern with AI? The concern with AI is: what if an LPU or GPU becomes the equivalent of a contributor to the workforce? And you could literally just add more to the GDP by creating more chips and providing more power. Now, if that becomes the case, does China's advantage erode?


1781.022 - 1799.418 Jonathan Ross

They're concerned that in terms of workforce, the US could catch up, the West could catch up. And then at the same time, they have a huge population advantage. And this is why I so much want for Europe to get into the fight on AI. There's 500 million people who could be jumping into this.


1799.919 - 1803.281 Harry Stebbings

If you were to advise the EU today on Europe's stance, what would you say?


1803.702 - 1804.763 Jonathan Ross

So have you ever seen Station F?


1805.143 - 1807.183 Harry Stebbings

Yeah, of course. I was there last week. We hosted an event.


1807.663 - 1829.147 Jonathan Ross

So I would say by the end of this year, you should have 100 station Fs. And by the end of next year, you should have 1,000. Done. So what you're doing is you're collecting up 3,000 people and surrounding them with other risk-taking entrepreneurs. And then they're supporting each other. They're risk on. And when you surround yourself with other people who are risk on, you're going to be risk on.


1829.267 - 1831.748 Jonathan Ross

And you're going to take the entrepreneurial leap.


1832.307 - 1845.073 Harry Stebbings

What does this space look like in three years time? I'm obviously a venture capitalist for a living. All of my friends are going, oh my God, oh my God, we just lost hundreds of millions of dollars on these foundation model companies.


1845.633 - 1849.435 Jonathan Ross

How many companies are you aware of that have become incredibly successful that didn't pivot?


1850.38 - 1851.021 Harry Stebbings

Mass pivot.


1851.441 - 1873.476 Jonathan Ross

Yeah, exactly. So pivot, get over it. Just pivot. I've been talking to a lot of the LLM companies and frankly, they have some good ideas. In fact, I really like, so I watched your interview with the Suno founder. I think he saw it from the beginning, like models are going to be commoditized and that's why he's focused on the product. He got it from the beginning. What is your product?


1873.676 - 1894.505 Jonathan Ross

Not what is the model? Model is, it's a piece of machinery. It's an engine. But what is the car? What is the experience? What do you think propriety is in three years? A question I used to get asked when we were raising money a little while ago was, is AI the next internet? And I'm like, absolutely not. Because the internet is an information age technology.


1894.765 - 1917.859 Jonathan Ross

It's about duplicating data with high fidelity and distributing it. It's what the telephone does. It's what the internet does. It's what the printing press did. They're all the same technology, just at much different scale and speed and capability. Generative AI is different. It's about coming up with something contextual, creative, unique in the moment. And so the LLM is just the printing press of the generative age.


1917.899 - 1932.39 Jonathan Ross

It's the start of it. And then there's going to be all these other stages. Just imagine trying to start Uber when we didn't have mobile yet. Great, I'm going to book a trip over to here. How do I get home? You can't carry a desktop with you, right? So you need to be at the right stage.


1932.851 - 1954.949 Jonathan Ross

So when I look at Perplexity, I look at Perplexity as being perfectly positioned for the moment that the hallucination, or really confabulation, rate comes down. The moment that these models get good enough that you don't have to check the citations anymore, that's going to open up a whole set of industries. All of a sudden, you'll be able to do medical diagnoses from LLMs.


1955.309 - 1975.444 Jonathan Ross

You'll be able to do legal work from LLMs. Until then, it's like trying to create Uber before we had smartphones. It just doesn't make any sense. However, people are willing to use Perplexity today, even though you have to check the citations. So they have an actual business that gets to continue. So like they're getting to sort of ride the wave.


1975.844 - 1996.813 Jonathan Ross

And the moment that that tsunami of lack of confabulation or hallucination comes along, they're perfectly positioned. Each company has to find their own thing. And I would look at Suno as a great example of how things are being done around the product as opposed to just the models.


1996.833 - 2006.515 Harry Stebbings

Do you think it is possible to pivot when you are OpenAI or Anthropic or any of the very large providers who've ingested billions of dollars?


2007.115 - 2012.476 Jonathan Ross

Disruption happens. If you're not able to pivot now, you're not going to be able to pivot later when you get disrupted anyway.


2012.998 - 2023.126 Harry Stebbings

One would think that with commoditization of models and with cheaper inference, that actually big tech wins, right? Have you seen the stock market today? They've been hit hard.


2023.406 - 2048.376 Jonathan Ross

How do you think about that? What you see is a bunch of people who are concerned about training and the need for it. And everyone's still thinking that most of compute is training. And that there's going to be less of it because someone trained a model on 2000 GPUs and the nerfed A800 version with slower memory or whatever it is. And they're like, oh, people aren't going to need as many chips.


2048.756 - 2070.164 Jonathan Ross

But again, Jevons paradox, right? The more you bring the cost down, the more people consume. So for the last five to six decades, like clockwork, once a decade, the cost of compute has gone down 1,000x. People buy 100,000x as much compute, spending 100 times as much. So every decade, they spend 100 times as much. So you make it cheaper, they want more.
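
The decade-over-decade arithmetic behind that Jevons-paradox point, exactly as stated: if the unit cost of compute falls about 1,000x and total spend still rises about 100x, the volume of compute consumed rises about 100,000x.

```python
# Jevons-paradox arithmetic, using the ratios stated in the conversation.
cost_drop_per_decade = 1_000    # compute gets ~1,000x cheaper per unit
spend_growth         = 100      # total spend still grows ~100x

compute_consumed = cost_drop_per_decade * spend_growth
print(compute_consumed)         # 100000 -> ~100,000x as much compute consumed
```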


2070.464 - 2091.812 Jonathan Ross

What's really happening is every time one of these models gets cheaper, we see our developer count just skyrocket. And then it comes back down a little bit, but the slope is higher than when it started. Better models create more demand for inference. More demand for inference then has people going, I should train a better model. And the cycle continues.


2092.472 - 2108.909 Harry Stebbings

I just bought a shitload of NVIDIA. They dropped 16% on the thesis that the increasing efficiency means that obviously we wouldn't need as many NVIDIA chips. And I thought exactly that, which is like, you'll still need the NVIDIA inference and you'll just have much higher usage. So to me, it's the most screaming buy of the century.


2109.189 - 2112.893 Harry Stebbings

Do you share my optimism on NVIDIA given what you just said in Jevons Paradox?


2113.453 - 2132.883 Jonathan Ross

So I think over the long term, the only thing I'll say is, as Warren Buffett and Charlie Munger put it: in the short term, the market is a popularity contest; in the long term, it's a weighing machine. I can't tell you about the popularity contest, but in terms of the weighing machine part, this is a misunderstanding. It's actually more valuable thanks to DeepSeek, not less valuable.


2133.203 - 2155.317 Jonathan Ross

Okay, so Jevons paradox was actually discovered by Jevons, as recently made famous in Satya's tweet. However, I did beat him to that by quite a bit. And just as Satya likes to say that he made Google dance, I'm going to say I made Satya dance. He might take exception to that. But less than a month before he posted that, I did a cute little tweet on it.


2155.657 - 2173.545 Jonathan Ross

So what's really happening here was in the 1860s, this guy Jevons, he actually wrote a treatise on steam engines, which I guess is what you did for fun back then in England. He realized every time steam engines became more efficient, people would buy more coal, which is the paradox.


2173.966 - 2193.177 Jonathan Ross

But if you think about it from a business point of view, when the OPEX comes down, more activities come into the money. So people do more things. And so what's happened is every time we've seen the cost of tokens for a particular level of quality of models come down, We've actually seen the demand grow significantly. Price elasticity, baby.


2193.197 - 2212.454 Harry Stebbings

A lot of people point to NVIDIA's incredibly high margin status, which I'm going to butcher, I can't remember what it was in their latest release, it was something like 45 or whatever it was, but it was very, very high, and then relate it to "your margin is my opportunity." I think of it back to the seven powers and go, their margin is their defensibility.


2212.674 - 2221.065 Harry Stebbings

And it makes me really just consider the strength of their moat. Do you think your margin is my opportunity? Or do you think their defensibility is their margin?


2221.519 - 2240.475 Jonathan Ross

Today, there's this wonderful business selling mainframes with a pretty juicy margin because no one seems to want to enter that business. Training is a niche market with very high margins. And when I say niche, it's still going to be worth hundreds of billions a year. But inference is the larger market. And...


2241.095 - 2258.877 Jonathan Ross

I don't know that NVIDIA will ever see it this way, but I do think that those of us focusing on inference and building stuff specifically for that are probably the best thing that's ever happened for NVIDIA stock because we'll take on the low margin, high volume inference so that NVIDIA can keep its margins nice and high.


2259.217 - 2260.459 Harry Stebbings

Do you think the world sees this?


2261.135 - 2282.271 Jonathan Ross

No. And I was actually like, we raised some money late 2024. In that fundraise, we still had to explain to people why inference was going to be a larger business than training. Remember, this was our thesis when we started eight years ago. So for me, I struggle on why people think that training is going to be bigger. It just doesn't make sense.


2282.652 - 2285.454 Harry Stebbings

Just for anyone who doesn't know, what's the difference between training and inference?


2285.954 - 2295.919 Jonathan Ross

Training is where you create the model. Inference is where you use the model. You want to become a heart surgeon, you spend years training, and then you spend more years practicing. Practicing is inference.


2296.339 - 2303.463 Harry Stebbings

Where does efficiency go from here? Everyone was so shocked by how R1 is so much more efficient. What next?


2304.096 - 2310.98 Jonathan Ross

What you're going to see is everyone else starting to use this MoE approach. Now, there's another thing that happens here.


2311.4 - 2319.985 Harry Stebbings

And the MoE approach, just so I understand, is like the segmentation of where information goes. So it's routed to, like, the optimal part of the model.


2320.265 - 2346.316 Jonathan Ross

Yeah, so MoE stands for mixture of experts. When you use Llama 70 billion, you actually use every single parameter in that model. When you use Mixtral's 8x7B, you use two of the roughly 8B experts, but it's much smaller. And effectively, while it doesn't correlate exactly, it correlates very closely: the number of parameters effectively tells you how much compute you're performing.


2346.596 - 2368.469 Jonathan Ross

Now, if I have, let's take the R1 model. I believe it's about 671 billion parameters versus 70 billion for Llama. And there's a 405 billion dense model as well, right? But let's focus on 70 versus 671. I believe there's 256 experts, each of which is somewhere around 2 billion parameters.


2368.911 - 2386.986 Jonathan Ross

And then it picks some small number, I'm forgetting which, maybe it's like eight of those or 16 of them, whatever it is. And so it only needs to do the compute for that. That means that you're getting to skip most of it, right? Sort of like your brain, like not every neuron in your brain fires when I say something to you about the stock market, right?


2387.066 - 2409.894 Jonathan Ross

Like the neurons about, you know, playing football, right? those don't kick off, right? That's the intuition there. Previously, it was famously reported that OpenAI's GPT-4, it started off with something like 16 experts and they got it down to eight. I forget the numbers, but it started off larger and they shrunk it a little and they were smaller or whatever.


2410.234 - 2430.985 Jonathan Ross

And then what's happened with the DeepSeek model is they've gone the opposite way. They've gone to a very large number of experts. The more parameters you have, it's like having more neurons. It's easier to retain the information that comes in. And so by having more parameters, they're able to, on a smaller amount of data, get good.


2431.425 - 2447.436 Jonathan Ross

However, because it's sparse, because it's a mixture of experts, they're not doing as much computation. And part of the cleverness was figuring out how they could have so many experts so it could be so sparse so they could skip so many of the parameters.
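
Using the rough numbers cited above (671 billion total parameters, 256 experts of about 2 billion each, with something like 8 active per token), the arithmetic looks like this. These are the speaker's approximations; the real architecture also has shared layers, so the true active count is somewhat higher:

```python
# Active-parameter arithmetic for a mixture-of-experts model, using the rough
# numbers cited in the conversation (approximations, not official specs).
total_params      = 671e9   # total parameters in the MoE model
params_per_expert = 2e9     # ~2B parameters per expert
active_experts    = 8       # experts actually routed to per token, as estimated above

active_params = active_experts * params_per_expert
print(f"~{active_params / 1e9:.0f}B active of {total_params / 1e9:.0f}B total")
# -> ~16B active of 671B total: most parameters are skipped on any given token,
#    which is why a huge sparse model can be cheaper per token than a 70B dense one.
```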


2447.956 - 2464.741 Harry Stebbings

But if we take that then back to, like, that's where we are staying: how they've become so efficient, what's the next stage of that then? Experts, they can route it so efficiently. What now? So Meta recently released their Llama 3.3 70B and it outperformed their 3.1 405B. So their new 70B


2471.483 - 2489.929 Jonathan Ross

outperformed their 405. What was surprising to me, I thought they retrained it from scratch. It turns out you read the paper and they talk about how they just fine tuned. So they used a relatively small amount of data to make it much better. Again, this goes to the quality of the data. They have higher quality data. They took their old model. They trained it, got much better.


2490.25 - 2510.71 Jonathan Ross

But that 70B, that new 70B outperforms their previous 405B. What you're going to see now is now that everyone has seen this deep seek architecture, they're going to go, great, I have hundreds of thousands of GPUs. I'm now going to use a lot of them to create a lot of synthetic data. And then I'm going to train the bejesus out of this model.


2510.97 - 2534.066 Jonathan Ross

Because the other thing is, while it sort of asymptotes, the question is, on this curve, where do you stop? It depends on how many people you have doing inference. You can either make the model bigger, which makes it more expensive, and then you train it on less. Or you make it smaller, and it's cheaper to run, but you have to train it more. So DeepSeek didn't have a lot of users until recently.


2534.426 - 2548.536 Jonathan Ross

And so for them, it would have never made sense to train it a lot anyway. They would much rather have a bigger model. But now what you're going to see is all these other people either making smaller models or trying to make higher quality ones of the same size, but just training it more.
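
The trade-off he's describing, a bigger model trained less versus a smaller model trained more, really does hinge on inference volume. A sketch with entirely made-up costs, just to show the shape of the decision:

```python
# Train-vs-inference trade-off, schematically. All numbers are made up for illustration.
def total_cost(train_cost, cost_per_query, queries):
    return train_cost + cost_per_query * queries

big_model   = dict(train_cost=6e6,  cost_per_query=0.004)   # cheaper to train, pricier to serve
small_model = dict(train_cost=60e6, cost_per_query=0.001)   # over-trained, cheaper to serve

for queries in (1e6, 1e11):
    big   = total_cost(big_model["train_cost"],   big_model["cost_per_query"],   queries)
    small = total_cost(small_model["train_cost"], small_model["cost_per_query"], queries)
    print(f"{queries:.0e} queries -> big: ${big:,.0f}   small: ${small:,.0f}")
# With few users, the cheap-to-train model wins; at massive inference volume, the
# heavily trained, cheaper-to-serve model wins, which is why user count changes the choice.
```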


2549.076 - 2558.142 Harry Stebbings

We've seen DeepSeek now say, hey, now only Chinese phone numbers can register. That is the new sign-up, I think. What's happened and what is the result of that?


2558.523 - 2582.997 Jonathan Ross

So they ran out of compute. And this is the other reason why chip startups are going to do just fine, because they ran out of inference compute. You train it once, but now... So you spend money to make the model, like designing a car, but then each car you build costs you money, right? Well, each query that you serve requires hardware. Training scales with the number of ML researchers you have.


2583.518 - 2586.199 Jonathan Ross

Inference scales with the number of end users you have.


2586.619 - 2591.362 Harry Stebbings

Do you think DeepSeek are astonished by the response they've got from the global community?


2591.781 - 2613.613 Jonathan Ross

I think they marketed very well. Like you look at some of the publication and they make it sound like it's a philosophical thing. And, you know, they talk about how they spent six million on the GPUs and everyone just zoomed in on that, neglecting the fact that Llama's first model was trained on, like, I think, five million worth of GPU time. And it set the world on fire in a good way. And then


2614.213 - 2625.059 Jonathan Ross

ignoring the fact that they spent a ton generating the data and all this. They're really good at marketing. I think they were probably surprised at how well it worked, but I think this is what they were going for.


2625.459 - 2628.761 Harry Stebbings

Is there anything that I haven't asked or we haven't spoken about that we should?


2629.638 - 2632.722 Jonathan Ross

What's up with the $500 billion Stargate effort?


2633.023 - 2635.806 Harry Stebbings

Okay. What's up with the $500 billion Stargate effort?


2636.107 - 2654.944 Jonathan Ross

I've gone back and forth on that. I actually did. So Gavin Baker tweeted some math. Before I saw that tweet, I came up with very similar math. However, talking to some people in the know, some of the comments are actually that they've got it. But then you keep pressing and it's like, well, maybe, is there some cutesiness to it?


2655.284 - 2676.795 Jonathan Ross

What I think it is, is an acknowledgement that the models have been commoditized and infrastructure is what's important in terms of maintaining a lead, like scale. It's one of the seven powers. I think what you're seeing there is an attempt to move from having a cornered resource or something like that into a scale economy.


2677.115 - 2678.275 Harry Stebbings

Do you think it will work?


2678.776 - 2700.764 Jonathan Ross

I don't think you get there in a short period of time with GPUs because most of the compute is inference. And so, you know, if you're talking about building out all the power, like it's going to take time. It's infrastructure. It's CapEx. The real win here is brand. That's what I would be doubling down on. I would be like hiring the best brand firms I could. I would do a complete makeover.


2701.314 - 2705.116 Harry Stebbings

Will OpenAI have a stronger or weaker brand in three years' time?


2705.516 - 2717.262 Jonathan Ross

Much stronger. I think they're going to double down on that and they're going to focus on it. Who will lose? People who can't adapt to disruption. Anyone who just wants to keep going on a straight line and do what they were doing before is going to lose.


2717.602 - 2740.233 Jonathan Ross

And the rate of disruption is probably going to increase because going back to the analogy of LLMs being the printing press, imagine if there were a couple of smartphones left over from an ancient civilization. All of a sudden, the printing press is invented and you're like, ooh, Uber's coming. I want a position for it. I know where this is going. We are the smartphones.


2740.693 - 2758.443 Jonathan Ross

We know where generative age technology goes. And now everyone's like, well, we know how big this gets. Let's put money into it. I can't be the one who doesn't spend money on this because I know how big of an advantage it's going to be. It's like getting to add more workers to the workforce. And so I think...


2758.883 - 2764.445 Jonathan Ross

the generative age, we're going to speed run it faster than whatever comes next because we know what it looks like.


2764.925 - 2777.01 Harry Stebbings

Is there any chance we see a plateau? We saw it in self-driving, for example, where we kind of went through this desert of a lack of progression and then suddenly, all of a sudden, it came. Will we see that or will we just see this continuing dominance?


2777.574 - 2799.259 Jonathan Ross

I think with self-driving, the problem you had was the threshold. It had to be way superhuman. Because if you look at the number of miles driven by these self-driving vehicles, it's an enormous number. And the number of fatalities and incidents is lower per mile. But we have no tolerance whatsoever for them when it's a machine.


2799.619 - 2805.94 Jonathan Ross

When you're writing poetry and code, it's very different versus doing a surgery or driving a car.


2806.46 - 2812.305 Harry Stebbings

If you're Elon and X.ai, how are you feeling? And do you feel better or worse post this?

2812.726 - 2827.979 Jonathan Ross

I would probably feel both better and worse. I'd feel better about my bet on building out more hardware. I would feel worse about trying to build out my own model. Why is Elon doing that? Just pick one up off the ground. Like, why are you making your own?

2828.259 - 2843.087 Harry Stebbings

Are you excited when you look forward to the next few years, or are you quite nervous? You could say this is a time of heightened international warfare in terms of this new AI arms race: China's stealing everything, and we're forced to steal back.

2843.701 - 2867.107 Jonathan Ross

Long ago, I stopped having good days and bad days. It's just how many good things and how many bad things, right? When you run an organization, I'm both excited and nervous, and I'm excited and nervous about different things at the same time. The thing that I am most nervous about is that, unlike nuclear war, you can use AI tools to attack each other.

2867.467 - 2875.896 Jonathan Ross

Google just announced the first previously unknown zero-day exploit found by an LLM. Yeah, that's a scary one.

2876.296 - 2879.74 Harry Stebbings

So now... Why is this scary, for anyone who doesn't understand what a zero-day exploit is?

2880 - 2891.547 Jonathan Ross

So how would you like me to have access to your phone? Not ideal. How would you like the CCP to have access to your phone? Even less ideal, right? That's a nation state, and nation states have a lot of resources.

2892.008 - 2906.695 Jonathan Ross

And if they stand up a bunch of compute and start scanning for vulnerabilities in all the open source that's out there, and not even just the open source, just scanning ports on the Internet and trying to figure out if they can break in, they can automate that now. They don't need to hire people to do that.

2906.915 - 2930.027 Jonathan Ross

And now the defense has to be automated, because there's no way to keep up with automated attackers. And what happens if this gets out of control? But worse, it's not killing anyone, and it's also deniable. That's the hardest part about it, because is it really China? Is it Russia? Is it North Korea? Is it a friendly that's making it seem like it's one of them, or vice versa?
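
As a concrete illustration of what automated defense could look like, here is a minimal sketch that asks an LLM to triage a suspicious log excerpt. It assumes the openai Python package pointed at any OpenAI-compatible endpoint; the model name and environment variables are placeholders, and this is not Groq's or Google's actual tooling.

```python
# Minimal sketch: LLM-assisted triage of a suspicious log excerpt.
# Assumptions: the `openai` package is installed, LLM_API_KEY and LLM_BASE_URL
# point at an OpenAI-compatible endpoint, and the model name is a placeholder.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["LLM_API_KEY"],
    base_url=os.environ.get("LLM_BASE_URL"),  # any OpenAI-compatible inference endpoint
)

suspicious_log = """
sshd[1021]: Failed password for root from 203.0.113.7 port 52144 ssh2
sshd[1021]: Failed password for root from 203.0.113.7 port 52146 ssh2
sshd[1022]: Accepted password for admin from 203.0.113.7 port 52150 ssh2
"""

response = client.chat.completions.create(
    model="placeholder-model",  # hypothetical model name
    temperature=0,
    messages=[
        {"role": "system",
         "content": "You are a security analyst. Classify this log as benign, "
                    "suspicious, or likely compromise, and explain in two sentences."},
        {"role": "user", "content": suspicious_log},
    ],
)

print(response.choices[0].message.content)
```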

2930.387 - 2949.466 Jonathan Ross

Now you have this ability. So you go from a world where we had a cold war, because having a war was unconscionable, unthinkable because of the consequences, to now: yeah, I'm just hacking you. That could spiral out of control. I'm worried that we're going to have more back and forth. And think of it this way.

2949.806 - 2974.583 Jonathan Ross

If you are a nation state, and let's say that, Harry, you're a beacon to the venture community and you want to rally the European entrepreneurs to be risk-on, and I'm someone who doesn't want that because I don't want the competition, a country that doesn't want that. Maybe I sully your reputation. Maybe I make you persona non grata. How is that any worse than shooting someone?

2974.863 - 2995.408 Jonathan Ross

It could be worse in some ways, but you can get away with it. And so that has me nervous, really nervous, but I'm also really excited. We are seriously going to be able to innovate as fast as we can come up with ideas. Now, you're not gonna have to implement things. You're gonna be able to prompt engineer your way through things.

2995.888 - 3015.528 Jonathan Ross

Just as we moved from hardware engineers to software engineers and sped up productivity, you're now just going to be able to have a prompt engineer who doesn't even write software. One of our engineers made this app where you can just describe what you want built and it builds it. And because we're so fast, it feels instant. You just iterate and it'll build an app for you.
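
To make the "describe it and it builds it" idea concrete, here is a minimal prompt-to-code loop. It is not the internal app Ross describes, just a hedged sketch assuming the same OpenAI-compatible chat interface and a placeholder model name; fast inference is what makes a loop like this feel interactive rather than batch-like.

```python
# Minimal prompt-to-code loop: describe what you want, get code back, iterate.
# Not the app from the episode; a sketch assuming an OpenAI-compatible endpoint.
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["LLM_API_KEY"],
                base_url=os.environ.get("LLM_BASE_URL"))

messages = [{"role": "system",
             "content": "You write small, self-contained Python apps. "
                        "Return only code, no commentary."}]

print("Describe the app you want (empty line to quit).")
while True:
    request = input("> ").strip()
    if not request:
        break
    messages.append({"role": "user", "content": request})
    reply = client.chat.completions.create(
        model="placeholder-model",  # hypothetical model name
        messages=messages,
    )
    code = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": code})  # keep context for iteration
    print(code)
```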

3015.828 - 3034.064 Harry Stebbings

I just don't understand where the value accrues then, and sorry to just continue this. Because you mentioned, hey, they created this tool which allows you to prompt it and it'll build the app. I'm sure you've seen bolt.new. I'm not sure if you've seen Lovable, where it's basically ChatGPT, but for website creation. Is there value?

3034.104 - 3041.226 Harry Stebbings

And everyone was like, there's no value in these wrapper apps. Everyone's like, there's no value in these foundation models. Where the fuck is there value?

3041.626 - 3063.073 Jonathan Ross

And that's part of the exciting part. It's discovering that. But I think people will always prefer to use the highest quality, most polished product. I think there is an opportunity for artisanship, craftsmanship, and just perfecting it. Getting to a certain number of nines in the details. The Eames quote, the details aren't the details, the details are the thing.

3063.493 - 3080.193 Jonathan Ross

I used to be a little concerned with the quote, you know, "if you're not ashamed of the quality of your first release, you've waited too long," because there's a subtlety and nuance there. There's soundness and then there's completeness. What you want is an incomplete product, something that doesn't do everything. That's what you should be embarrassed about.

3080.474 - 3099.81 Jonathan Ross

But it shouldn't, like, blue-screen-of-death on you. That's not a good embarrassment, right? And so what you're going to see now is that, because it's so easy to come up with something that just kind of works, a little embarrassing but it kind of works, people are really going to value well-crafted, high-quality products.

3100.33 - 3108.119 Harry Stebbings

Jonathan, I cannot thank you enough for breaking down so many different elements for me and putting up with my basic questions. You've been fantastic.

3108.4 - 3113.326 Jonathan Ross

No problem. Have fun out there. I mean, this is a brand new age. It really is.

3115.256 - 3133.63 Harry Stebbings

I mean, what a show that was. If you want to watch the episode in full, you can find it on YouTube by searching for 20VC. That's 20VC on YouTube. But before we leave you today, here are two fun facts about our newest brand sponsor, Kajabi. First, their customers just crossed a collective... $8 billion in total revenue. Wow!

3134.13 - 3154.95 Harry Stebbings

Second, Kajabi's users keep 100% of their earnings, with the average Kajabi creator bringing in over $30,000 per year. In case you didn't know, Kajabi is the leading creator commerce platform with an all-in-one suite of tools, including websites, email marketing, digital products, payment processing, and analytics for as low as $69 per month.

3155.051 - 3179.871 Harry Stebbings

Whether you are looking to build a private community, write a paid newsletter, or launch a course, Kajabi is the only platform that will enable you to build and grow your online business without taking a cut of your revenue. 20 VC listeners can try Kajabi for free for 30 days by going to kajabi.com forward slash 20VC. That's kajabi.com, K-A-J-A-B-I.com forward slash 20VC.

3180.352 - 3198.766 Harry Stebbings

Once you've built your creator empire with Kajabi, take your insights and decision-making to the next level with AlphaSense, the ultimate platform for uncovering trusted research and expert perspectives. As an investor, I'm always on the lookout for tools that really transform how I work. Tools that don't just save time, but fundamentally change how I uncover insights.

3198.866 - 3215.2 Harry Stebbings

That's exactly what AlphaSense does. With the acquisition of Tegus, AlphaSense is now the ultimate research platform built for professionals who need insights they can trust, fast. I've used Tegus before for company deep dives right here on the podcast. It's been an incredible resource for expert insights.

3215.28 - 3230.506 Harry Stebbings

But now with AlphaSense leading the way, it combines those insights with premium content, top broker research, and cutting-edge generative AI. The result? A platform that works like a supercharged junior analyst, delivering trusted insights and analysis on demand.

3230.986 - 3251.158 Harry Stebbings

AlphaSense has completely reimagined fundamental research, helping you uncover opportunities from perspectives you didn't even know existed. It's faster, it's smarter, and it's built to give you the edge in every decision you make. To any VC listeners, don't miss your chance to try AlphaSense for free. Visit alphasense.com forward slash 20 to unlock your trial.

3251.238 - 3266.588 Harry Stebbings

That's alphasense.com forward slash 20. And speaking of incredible products, what comes to mind when you think about business banking? Probably not speed, ease, or growth. I'm willing to bet that's because you're not using Mercury.

3266.708 - 3287.192 Harry Stebbings

With Mercury, you can quickly send wires and pay bills, get access to credit sooner to hit the ground running faster, unlock capital that's designed for scaling, and see all these money moves all in one place. I speak to dozens of founders every week, and most of them are using Mercury because they're super smart and that's what you have to be using.

3287.332 - 3307.382 Harry Stebbings

Visit Mercury.com to experience it for yourself. Mercury is a financial technology company, not a bank. Banking services provided by Choice Financial Group, Column NA, and Evolve Bank & Trust, Members FDIC. As always, we so appreciate all your support and stay tuned for an incredible episode coming on Friday with the CEO of Monzo.
