We finally sit down with the man himself: Nvidia cofounder and CEO Jensen Huang. After three parts and more than seven hours of covering the company, we thought we knew everything, but, unsurprisingly, Jensen knows more. A couple of teasers: we learned that the company's initial motivation to enter the data center business came from perhaps not where you'd think, and the roots of Nvidia's platform strategy stretch back beyond CUDA all the way to the origin of the company. We also got a peek into Jensen's mindset and calculus behind "betting the company" multiple times, and his surprising feelings about whether he'd go on the founder journey again if he could rewind time. We can't think of any better way to tie a bow on our Nvidia series (for now). Tune in!

Note: Acquired hosts and guests may hold assets discussed in this episode. This podcast is not investment advice and is intended for informational and entertainment purposes only. You should do your own research and make your own independent decisions when considering any financial transactions.
I will say, David, I would love to have NVIDIA's full production team every episode. It was nice not having to worry about turning the cameras on and off and making sure that nothing bad happened myself while we were recording this.
Yeah, just the gear. I mean, the drives that came out of the camera.
All right, red cameras for the home studio starting next episode. Yeah, great. All right, let's do it.
Who got the truth?
Welcome to this episode of Acquired, the podcast about great technology companies and the stories and playbooks behind them. I'm Ben Gilbert.
I'm David Rosenthal.
And we are your hosts. Listeners, just so we don't bury the lead, this episode was insanely cool for David and I.
Yeah.
After researching NVIDIA for something like 500 hours over the last two years, we flew down to NVIDIA headquarters to sit down with Jensen himself. And Jensen, of course, is the founder and CEO of NVIDIA, the company powering this whole AI explosion. At the time of recording, NVIDIA is worth $1.1 trillion and is the sixth most valuable company in the entire world.
And right now is a crucible moment for the company. Expectations are set high. I mean, sky high. They have about the most impressive strategic position and lead against their competitors of any company that we've ever studied. But here's the question that everyone is wondering. Will NVIDIA's insane prosperity continue for years to come? Is AI going to be the next trillion-dollar technology wave?
How sure are we of that? And if so, can NVIDIA actually maintain their ridiculous dominance as this market comes to take shape? So Jensen takes us down memory lane with stories of how they went from graphics to the data center to AI, how they survived multiple near-death experiences.
He also has plenty of advice for founders, and he shared an emotional side to the founder journey toward the end of the episode.
Yeah, I got a new perspective on the company and on him as a founder and a leader just from doing this, despite the fact that, you know, we thought we knew everything before we came in. And it turned out we didn't.
Turns out the protagonist actually knows more. Yes. All right, well, listeners, join the Slack. There is incredible discussion of everything about this company, AI, the whole ecosystem, and a bunch of other episodes that we've done recently going on in there right now. So that is acquired.fm slash Slack. We would love to see you. And without further ado, this show is not investment advice.
David and I may have investments in the companies we discuss, and this show is for informational and entertainment purposes only. On to Jensen. So Jensen, this is acquired. So we want to start with story time. So we want to wind the clock all the way back to, I believe it was 1997.
You're getting ready to ship the Riva 128, which at the time was one of the largest graphics chips ever created. It is the first fully 3D-accelerated graphics pipeline for a computer. And you guys have about six months of cash left. And so you decide to do the entire testing in simulation rather than ever receiving a physical prototype.
You commission the production run sight unseen with the rest of the company's money. So you're betting it all right here on the Riva 128. It comes back, and of the 32 DirectX blend modes, it supports eight of them. And you have to convince the market to buy it, and you've got to convince developers not to use anything but those eight blend modes. Walk us through what that felt like.
The other 24 weren't that important.
Okay. So wait, wait.
First question.
Was that the plan all along? When did you realize that only eight were going to work?
I didn't learn about it until it was too late. We should have implemented all 32. But we built what we built and so we had to make the best of it. That was really an extraordinary time. Remember, Riva 128 was NV3. NV1 and NV2 were based on forward texture mapping, no triangles but curves, and it tessellated the curves.
And because we were rendering higher-level objects, we essentially avoided using Z-buffers. And we thought that that was going to be a good rendering approach, and it turns out to have been completely the wrong answer. And so what Riva 128 was, was a reset of our company. Now remember, at the time that we started the company in 1993, we were the only consumer 3D graphics company ever created.
And we were focused on transforming the PC into an accelerated PC because at the time, Windows was really a software-rendered system. And so anyways, Riva 128 was a reset of our company because by the time that we realized we had gone down the wrong road, Microsoft had already rolled out DirectX. It was fundamentally incompatible with Nvidia's architecture.
30 competitors had already shown up, even though we were the first company at the time that we were founded. So the world was a completely different place. The question was what to do as a company strategy. At that point, I would have said that we had made a whole bunch of wrong decisions. But on that day that mattered, we made a sequence of extraordinarily good decisions.
And that time, 1997, was probably NVIDIA's best moment. And the reason for that was our backs were up against the wall. We were running out of time. We were running out of money. And for a lot of employees, we were running out of hope. And the question is, what do we do? Well, the first thing that we did was we decided that, look, DirectX is now here. We're not going to fight it.
Let's go figure out a way to build the best thing in the world for it. And Riva 128 was the world's first fully hardware-accelerated pipeline for rendering 3D. And so the transform, the projection, every single element all the way down to the frame buffer was completely hardware accelerated. We implemented a texture cache.
We took the frame buffer limit to as big as physics could afford at the time. We made the biggest chip that anybody had ever imagined building. We used the fastest memories. Basically, if we built that chip, there could be nothing that could be faster.
And we also chose a cost point that was substantially higher than the highest price we thought any of our competitors would be willing to go to. If we built it right, we accelerated everything, we implemented everything in DirectX that we knew of, and we built it as large as we possibly could, then obviously nobody could build something faster than that.
Today, in a way, you kind of do that here at NVIDIA too. You were a consumer products company back then, right? It was end consumers who were going to have to pay the money to buy.
That's right. But we observed that there was a segment of the market where people were, because at the time the PC industry was still coming up and it wasn't good enough. Everybody was clamoring for the next fastest thing. And so if your performance was 10 times higher this year than what was available, there's a whole large market of enthusiasts who we believe would have gone after it.
And we were absolutely right that the PC industry had a substantially large enthusiast market that would buy the best of everything. To this day, it kind of remains true. And for certain segments of the market where the technology is never good enough, like 3D graphics, when we chose the right technology, 3D graphics is never good enough.
And we called it back then: 3D gives us sustainable technology opportunity because it's never good enough. And so your technology can keep getting better. We chose that. We also made the decision to use this technology called emulation. There was a company called IKOS. And on the day that I called them, they were just shutting the company down because they had no customers.
And I said, hey, look, I'll buy whatever inventory you have, and no promises are necessary. And the reason why we needed that emulator is because if you figure out how much money we had, if we taped out a chip and we got it back from the fab and we started working on our software,
By the time that we found all the bugs, because we did the software, then we taped out the chip again, well, we would have been out of business already. And your competitors would have caught up. Well, not to mention we would have been out of business.
Who cares? Exactly.
So if you're going to be out of business anyways, that plan obviously wasn't the plan. The plan that companies normally go through, which is build the chip, write the software, fix the bugs, tape out the new chip, so on and so forth, that method wasn't going to work.
And so the question is, if we only had six months and you get to tape out just one time, then obviously you're going to tape out a perfect chip. So I remember having a conversation with our leaders and they said, but Jensen, how do you know it's going to be perfect? And I said, I know it's going to be perfect because if it's not, we'll be out of business. And so let's make it perfect.
We get one shot. We essentially virtually prototyped the chip by buying this emulator. And Dwight and the software team wrote our software, the entire stack, and ran it on this emulator and just sat in the lab waiting for Windows to paint. It was like 60 seconds per frame or something like that. Oh, easily. I actually think that it was an hour per frame, something like that.
And so we would just sit there and watch it paint. And so on the day that we decided to tape out, I assumed that the chip was perfect. And everything that we could have tested, we tested in advance and told everybody this is it. We're going to tape out the chip. It's going to be perfect. Well, if you're going to tape out a chip and you know it's perfect, then what else would you do?
That's actually a good question. If you knew that you hit enter, you taped out a chip, and you knew it was going to be perfect, then what else would you do? Well, the answer, obviously: go to production. And marketing blitz.
Yeah, yeah. And developer relations.
Just start, kick everything off, kick everything off because you got a perfect chip. And so we got it in our head that we have a perfect chip.
How much of this was you and how much of this was like your co-founders, the rest of the company, the board? Was everybody telling you you were crazy?
No, everybody was clear we had no shot. Not doing it would be crazy. Because the last- Otherwise, you might as well go home. Yeah, you're going to be out of business anyways. So anything aside from that is crazy. So it seemed like a fairly logical thing. And quite frankly, right now, as I'm describing it, you're probably thinking, yeah, it's pretty sensible.
Well, it worked.
Yeah. And so we taped that out and went directly to production.
So is the lesson for founders out there, when you have conviction on something like the Riva 128 or CUDA, go bet the company on it. And this keeps working for you. So it seems like your lesson learned from this is, yes, keep pushing all the chips in because so far it's worked every time.
No. How do you think about that? No, no. When you push your chips in, I know it's going to work. Notice, we assume that we taped out a perfect chip. The reason why we taped out a perfect chip is because we emulated the whole chip before we taped it out. We developed the entire software stack. We ran QA on all the drivers and all the software. We ran all the games we had.
We ran every VGA application we had. And so when you push your chips in, what you're really doing is when you bet the farm, you're saying, I'm going to take everything in the future, all the risky things, and I'm going to pull it in advance. And that is probably the lesson. And to this day, everything that we can prefetch, everything in the future that we can simulate today, we prefetch it.
We talk about this a lot. We were just talking about this on our Costco episode. You want to push your chips in when you know it's going to work.
So every time we see you make a bet the company move, you've already simulated it. You know. Yeah, yeah, yeah. Do you feel like that was the case with CUDA?
Right.
And so we were already playing with the concept of how do we create an abstraction layer above our chip that is expressible in a higher-level language and higher-level expression? And how can we use our GPU for things like CT reconstruction, image processing? We were already down that path.
There was some positive feedback, and some intuitive positive feedback, that we thought general-purpose computing could be possible. And if you just looked at the pipeline of a programmable shader, it is a processor, and it is highly parallel, and it is massively threaded, and it is the only processor in the world that does that.
And so there were a lot of characteristics about programmable shading that would suggest that CUDA has a great opportunity to succeed.
And that is true if there was a large market of machine learning practitioners who would eventually show up and want to do all this great scientific computing and accelerated computing.
But at the time when you were starting to invest what is now something like 10,000 person years in building that platform, did you ever feel like, oh man, we might have invested ahead of the demand for machine learning since we're like a decade before the whole world is realizing it?
I guess yes and no. You know, when we saw deep learning, when we saw AlexNet and realized its incredible effectiveness in computer vision, we had the good sense, if you will, to go back to first principles and ask, you know, what is it about this thing that made it so successful?
When a new software technology, a new algorithm comes along and somehow leapfrogs 30 years of computer vision work, you have to take a step back and ask yourself, but why? And fundamentally, is it scalable? And if it's scalable, what other problems can it solve? And there were several observations that we made.
The first observation, of course, is that if you have a whole lot of example data, you could teach this function to make predictions. Well, what we've basically done is discovered a universal function approximator, because the dimensionality could be as high as you want it to be.
And because each layer is trained one layer at a time, there's no reason why you can't make very, very deep neural networks. Okay, so now you just reason your way through, right? Okay, so now I go back to 12 years ago. You could just imagine the reasoning I'm going through in my head, that we've discovered a universal function approximator.
In fact, we might have discovered, with a couple more technologies, a universal computer.
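(Editor's aside: to make the "universal function approximator" idea concrete, here is a minimal illustrative sketch, our code rather than NVIDIA's. A tiny one-hidden-layer network learns to fit a function purely from example input/output pairs, with no model of why the data looks the way it does.)

```python
# A toy illustration (not NVIDIA code) of a "universal function approximator":
# a small one-hidden-layer network learns y = sin(x) purely from example pairs,
# with no knowledge of the underlying formula or causality.
import numpy as np

rng = np.random.default_rng(0)

# Example data: the network only ever sees (x, y) pairs.
x = rng.uniform(-np.pi, np.pi, size=(256, 1))
y = np.sin(x)

# One hidden layer of 64 tanh units; width/depth is the capacity knob.
w1 = rng.normal(0.0, 1.0, (1, 64)); b1 = np.zeros(64)
w2 = rng.normal(0.0, 0.1, (64, 1)); b2 = np.zeros(1)

lr = 0.05
for step in range(5000):
    h = np.tanh(x @ w1 + b1)        # hidden activations
    pred = h @ w2 + b2              # network output
    err = pred - y                  # prediction error
    # Gradient descent on mean squared error, backpropagated through both layers.
    grad_w2 = h.T @ err / len(x)
    grad_b2 = err.mean(axis=0)
    dh = (err @ w2.T) * (1.0 - h ** 2)
    grad_w1 = x.T @ dh / len(x)
    grad_b1 = dh.mean(axis=0)
    w2 -= lr * grad_w2; b2 -= lr * grad_b2
    w1 -= lr * grad_w1; b1 -= lr * grad_b1

print("final mean squared error:", float((err ** 2).mean()))
```

Widen the hidden layer, add depth, and feed in more examples, and the same recipe approximates richer functions, which is the scaling intuition Jensen is describing.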
Are you paying attention to the ImageNet competition every year leading up to this?
Yeah. The reason for that is because we were already working on computer vision at the time, and we were trying to get CUDA to be a good computer vision system, but most of the algorithms that were created for computer vision weren't a good fit for CUDA. So while we were sitting there trying to figure it out, all of a sudden AlexNet shows up. So that was incredibly intriguing.
It's so effective that it makes you take a step back and ask yourself, why is that happening? So by the time that you reason your way through this, you go, well, what are the kinds of problems in the world that a universal function approximator can solve, right? Well, we know that most of our algorithms start from principled sciences, okay?
You want to understand the causality, and from the causality, you create a simulation algorithm that allows us to scale. Well, for a lot of problems, we kind of don't care about the causality. We just care about the predictability of it. Like, do I really care for what reason you prefer this toothpaste over that? I don't really care about the causality.
I just want to know that this is the one you would have predicted. Do I really care about the fundamental cause of why somebody who buys a hot dog also buys ketchup and mustard? It doesn't really matter. It only matters that I can predict it. It applies to predicting movies, predicting music. It applies to predicting, quite frankly, weather. We understand thermodynamics.
We understand radiation from the sun. We understand cloud effects. We understand oceanic effects. We understand all these different things. We just want to know whether we should wear a sweater or not. Isn't that right? And so causality for a lot of problems in the world doesn't matter. We just want to emulate the system and predict the outcome.
It can be an incredibly lucrative market. If you can predict the next best-performing feed item to serve into a social media feed, it turns out that's a hugely valuable market.
This is where I was going to go with that. I love the examples you pulled. Toothpaste, ketchup, music, movies.
When you realize this, you realize, hang on a second, a universal function approximator, a machine learning system, something that learns from examples, could have tremendous opportunities because just the number of applications is quite enormous. And everything from, obviously, we just talked about commerce all the way to science.
And so you realize that maybe this could affect a very large part of the world's industries. Almost every piece of software in the world would eventually be programmed this way. And if that's the case, then how you build a computer and how you build a chip, in fact, can be completely changed.
And realizing that, the rest of it just comes with, you know, do you have the courage to put your chips behind it?
So that's where we are today. And that's where NVIDIA is today. But I'm curious, there's a couple of years after AlexNet, and this is when Ben and I were getting into the technology industry and the venture industry ourselves.
I started at Microsoft in 2012, so right after AlexNet, but before anyone was talking about machine learning, even in the mainstream engineering community.
There were those couple of years there where to a lot of the rest of the world, these looked like science projects. Yeah. The technology companies here in Silicon Valley, particularly the social media companies, they were just realizing huge economic value out of this. The Googles, the Facebooks, the Netflixes, et cetera.
And obviously that led to lots of things, including OpenAI a couple of years later. But during those couple of years, when you saw just that huge economic value unlock here in Silicon Valley, how were you feeling during those times?
The first thought was, of course, reasoning about how we should change our computing stack. The second thought is where can we find earliest possibilities of use? If we were to go build this computer, what would people use it to do?
And we were fortunate that working with the world's universities and researchers was innate in our company because we were already working on CUDA, and CUDA's early adopters were researchers because we democratized supercomputing. You know, CUDA is not just used, as you know, for AI. CUDA is used for almost all fields of science.
Everything from molecular dynamics to imaging, CT reconstruction, to seismic processing, to weather simulations, quantum chemistry, the list goes on, right? And so the number of applications of CUDA in research was very high. And so when the time came and we realized that deep learning could be really interesting, it was natural for us to go back to the researchers
and find every single AI researcher on the planet and say, how can we help you advance your work? And that included Yann LeCun and Andrew Ng and Geoff Hinton. And that's how I met all these people. And I used to go to all the AI conferences and that's where, you know, I met Ilya Sutskever for the first time. Yeah.
And so it was really about, at that point, what are the systems that we can build and the software stacks we can build to help you be more successful to advance the research? Because at the time, it looked like a toy. But we had confidence that even GAN, the first time I met Goodfellow, the GAN was like 32 by 32. And it was just a blurry image of a cat, but how far can it go?
And so we believed in it. We believed that one, you could scale deep learning because obviously it's trained layer by layer and you could make the data sets larger and you could make the models larger. And we believe that if you made that larger and larger, it would get better and better, kind of sensible.
I think the discussions and the engagements with the researchers was the exact positive feedback system that we needed. I would go back to research. That's where it all happened.
When OpenAI was founded in 2015, that was such an important moment that's obvious today now. But at the time, I think most people, even people in tech, were like, What is this? Were you involved in it at all?
Because you were so connected to the researchers, to Ilya, taking that talent out of Google and Facebook, to be blunt, but reseeding the research community and opening it up was such an important moment. Were you involved in it at all?
I wasn't involved in the founding of it, but I knew a lot of the people there. And Elon, of course, I knew. And Pieter Abbeel was there. And Ilya was there. And we have some great employees today that were there in the beginning.
I knew that they needed this amazing computer that we were building, and we were building the first version of the DGX, which today, when you see a Hopper, it's 70 pounds, 35,000 parts, 10,000 amps. But DGX, the first version that we built, was used internally, and I delivered the first one to OpenAI, and that was a fun day.
Most of our success was aligned around, in the beginning, just about helping the researchers get to the next level. I knew it wasn't very useful in its current state, but I also believed that in a few clicks, it could be really remarkable. And that belief system came from the interactions with all these amazing researchers, and it came from just seeing the incremental progress.
At first, the papers were coming out every three months, and then papers today are coming out every day, right? So you could just monitor the arXiv papers, and I took an interest in learning about the progress of deep learning, and to the best of my ability, read these papers, and you could just see the progress happening, you know, in real time, exponentially in real time.
It even seems like within the industry, from some researchers we spoke with, it seemed like no one predicted how useful language models would become when you just increase the size of the models. They thought, oh, there has to be some algorithmic change that needs to happen.
But once you cross that 10 billion parameter mark, and certainly once you cross the 100 billion, they just magically got much more accurate, much more useful, much more lifelike. Were you shocked by that the first time you saw a truly large language model? Do you remember that feeling?
Well, my first feeling about the language model was how clever it was to just mask out words and make it predict the next word. It's self-supervised learning at its best. We have all this text, you know, I know what the answer is. I'll just make you guess it. And so my first impression of BERT was really how clever it was. And now the question is, how can you scale that?
You know, the first observation on almost anything is that it's interesting, and then you try to understand intuitively why it works. And then the next step, of course, is from first principles, how would you extrapolate that? And so obviously we knew that BERT was going to be a lot larger. Now, one of the things about these language models is that they're encoding information, isn't that right?
It's compressing information. And so within the world's languages and text, there's a fair amount of reasoning that's encoded in it. We describe a lot of reasoning things, and so if you were to say that few-step reasoning is somehow learnable from just reading things, I wouldn't be surprised. For a lot of us, we get our common sense and we get our reasoning ability by reading.
And so why wouldn't a machine learning model also learn some of the reasoning capabilities from that? And from reasoning capabilities, you could have emergent capabilities, right? Emergent abilities are intuitively consistent with reasoning. And so some of it could be predictable, but still, it's still amazing. The fact that it's sensible doesn't make it any less amazing. Right.
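(Editor's aside: the self-supervised trick Jensen describes, where the text supplies its own labels, can be sketched in a few lines. This is a toy illustration of BERT-style masking, where the model predicts the hidden word itself; the function name and mask rate are our own illustrative choices, not anything from NVIDIA or Google.)

```python
# A toy sketch of the self-supervised masking idea (BERT-style), not production code.
# Raw text supplies its own labels: hide a word, and the hidden word is the target.
import random

random.seed(0)

def make_masked_examples(sentence, mask_rate=0.3):
    """Turn one sentence into (masked input, target word) training pairs."""
    tokens = sentence.split()
    examples = []
    for i, token in enumerate(tokens):
        if random.random() < mask_rate:
            masked = tokens[:i] + ["[MASK]"] + tokens[i + 1:]
            examples.append((" ".join(masked), token))
    return examples

corpus = "the quick brown fox jumps over the lazy dog"
for masked_input, target in make_masked_examples(corpus):
    print(f"input: {masked_input!r}  ->  predict: {target!r}")
```

Because no human labeling is needed, every sentence ever written becomes training data, which is why scaling these models was such a natural next step.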
I could visualize literally the entire computer and all the modules in a self-driving car. And the fact that it's still keeping lanes makes me insanely happy. And so...
I even remember that from my first operating systems class in college when I finally figured out all the way from programming language to the electrical engineering classes bridged in the middle by that OS class. I'm like, oh, I think I understand how the Von Neumann computer works soup to nuts. And it's still a miracle. Yeah. Yeah.
Yeah, yeah. Exactly. Yeah, yeah. When you put it all together, it's still a miracle. Yeah.
Now is a great time to talk about one of our favorite companies, Statsig, and we have some tech history for you.
Yes. So in our NVIDIA Part 3 episode, we talked about how the AI research teams at Google and Facebook drove incredible business outcomes with cutting-edge ML models. And these models powered features like the Facebook News Feed, Google Ads, and the YouTube next-video recommendation, in the process transforming Google and Facebook into the juggernauts that we know today. And while we talked all about the research, we didn't touch on how these models were actually deployed.
Yeah. The most common way to deploy new models was through experimentation: A/B testing.
When the research team created a new model, product engineers would deploy the model to a subset of users and measure the impact of the model on core product metrics. Great experimentation tools transformed the machine learning development process. They de-risked releases, since each model could be released to a small set of users. They sped up release cycles; researchers could suddenly get quick feedback from real user data. And most importantly,
They created a pragmatic, data-driven culture since researchers were rewarded for driving actual product improvements. And over time, these experimentation tools gave Facebook and Google a huge edge because they really became a requirement for leading ML teams.
Yep. So now you're probably thinking, well, that's great for Facebook and Google, but my team can't build out our own internal experimentation platform. Well, you don't have to, thanks to Statsig. So Statsig was literally founded by ex-Facebook engineers who did all this. They've built a best-in-class experimentation, feature flagging, and product analytics platform that's available to anyone.
And surprise, surprise, a ton of AI companies are now using Statsig to improve and deploy their models, including Anthropic.
Yep. So whether you're building with AI or not, Statsig can help your team ship faster and make better data-driven product decisions. They have a very generous free tier and a special program for venture-backed companies, simple pricing for enterprises, and no seat-based fees. If you're in the Acquired community, there's a special offer.
You get 5 million free events a month and white glove onboarding support. So visit statsig.com slash acquired and get started on your data-driven journey. We have some questions we want to ask you. Some are cultural about NVIDIA, but others are generalizable to company building broadly.
And the first one that we wanted to ask is, we've heard that you have 40 plus direct reports and that this org chart works a lot differently than a traditional company org chart. Do you think there's something special about NVIDIA that makes you able to have so many direct reports, not worry about coddling or focusing on career growth of your executives?
And you're like, no, you're just here to do your freaking best work and the most important thing in the world, now go. A, is that correct? And B, is there something special about NVIDIA that enables that?
I don't think it's something special with NVIDIA. I think that we had the courage to build a system like this. NVIDIA is not built like a military. It's not built like the armed forces where you have generals and colonels. We're not set up like that. We're not set up in a command and control and information distribution system from the top down.
We're really built much more like a computing stack. And a computing stack, the lowest layer is our architecture, and then there's our chip, and then there's our software, and on top of it, there are all these different modules, and each one of these layers of modules are people.
And so the architecture of the company, to me, is a computer with a computing stack, with people managing different parts of the system. And who reports to whom, your title, is not related to where you are in the stack. It just happens to be that whoever is the best at running that module, on that function, on that layer, is in charge, and that person is the pilot in command.
So that's one characteristic.
Have you always thought about the company this way? Even from the earliest days.
Yeah, pretty much. The reason for that is because your organization should be the architecture of the machinery of building the product. That's what a company is. And yet everybody's company looks exactly the same, but they all build different things. How does that make any sense? Do you see what I'm saying? Yeah.
How you make fried chicken versus how you flip burgers versus how you make Chinese fried rice is different. And so why would the machinery, why would the process be exactly the same? And so it's not sensible to me that if you look at the org charts of most companies, it all kind of looks like this.
And then you have one group that's for a business and you have another for another business, you have another for another business, and they're all kind of supposedly autonomous. And so none of that stuff makes any sense to me. It just depends on what is it that we're trying to build and what is the architecture of the company that best suits to go build it. So that's number one.
In terms of the information system and how you enable collaboration, we're kind of wired up like a neural network. And the way that we say it is, there's a phrase in the company: the mission is the boss. And so we figure out what is the mission, and we go wire up the best skills and the best teams and the best resources to achieve that mission.
And it cuts across the entire organization in a way that doesn't make any sense, but it looks a little bit like a neural network.
And when you say mission, do you mean mission like... NVIDIA's mission is- Build Hopper. Yeah, okay. So it's not like "further accelerated computing." It's like, we're shipping DGX Cloud.
Build Hopper. Or somebody else's mission is to build a system for Hopper. Somebody's mission is to build CUDA for Hopper. Somebody's job is to build cuDNN for CUDA for Hopper. Somebody's job is the mission, right? So your mission is to do something.
What are the trade-offs associated with that versus the traditional structure?
The downside is the pressure on the leaders is fairly high. And the reason for that is because in a command and control system, the person who you report to has more power than you. And the reason why they have more power than you is because they're closer to the source of information than you are.
In our company, the information is disseminated fairly quickly to a lot of different people, and it's usually at a team level. So for example, just now I was in our robotics meeting,
and we're talking about certain things and we're making some decisions, and there are new college grads in a room, there's three vice presidents in a room, there's two E-staffs in a room, and at the moment that we decided together, we reasoned through some stuff, we made a decision, everybody heard it at exactly the same time. So nobody has more power than anybody else. Does it make sense?
The new college grad learned at exactly the same time as the E-staff. And so the executive staff and the leaders that work for me and myself, you earn the right to have your job based on your ability to reason through problems and helping other people succeed. And it's not because you have some privileged information that I knew the answer was 3.7 and only I knew. You know, everybody knew.
When we did our most recent episode, NVIDIA Part 3, that we just released, we did this thought exercise. Especially over the last couple of years, your product shipping cycle has been very impressive, especially given the level of technology that you are working with and the difficulty of this all. We said, could you imagine Apple shipping two iPhones a year?
And we said that for illustrative purposes.
For illustrative purposes, not to pick on Apple or whatnot.
A large tech company shipping two flagship products or their flagship product twice per year.
Yeah, or two WWDCs a year.
There seems to be something unique.
You can't really imagine that, whereas that happens here. Are there other companies, either current or historically, that you look up to, admire, maybe took some of this inspiration from?
In the last 30 years, I've read my fair share of business books. And as in everything you read, you're supposed to, first of all, enjoy it, right? Enjoy it, be inspired by it, but not to adopt it. That's not the whole point of these books. The whole point of these books is to share their experiences. And you're supposed to ask, what does it mean to me in my world?
And what does it mean to me in the context of what I'm going through? What does this mean to me in the environment that I'm in? And what does this mean to me in what I'm trying to achieve? And what does this mean to NVIDIA in the age of our company and the capability of our company? And so you're supposed to ask yourself, what does it mean to you?
And then from that point, being informed by all these different things that we're learning, we're supposed to come up with our own strategies. You know, what I just described is kind of how I go about everything. You're supposed to be inspired and learn from everybody else. And the education's free, you know? When somebody talks about a new product, you're supposed to go listen to it.
You're not supposed to ignore it. You're supposed to go learn from it. And it could be a competitor, it could be adjacent industry, it could be nothing to do with us. The more we learn from what's happening out in the world, the better. But you're supposed to come back and ask yourself, you know, what does this mean to us?
Yeah, you don't just want to imitate them.
That's right. Yeah.
I love this tee-up of learning but not imitating and learning from a wide array of sources. There's this sort of... unbelievable third element, I think, to what NVIDIA has become today, and that's the data center. It's certainly not obvious.
I can't reason from AlexNet and your engagement with the research community and social media feed recommenders to you deciding, and the company deciding, we're going to go on a five-year all-in journey on the data center. How did that happen?
Our journey to the data center happened, I would say, almost 17 years ago. I'm always being asked, I mean, what are the challenges that the company could see someday? And I've always felt that the fact that NVIDIA's technology is plugged into a computer, and that computer has to sit next to you because it has to be connected to a monitor,
that will limit our opportunity someday, because there are only so many desktop PCs to plug a GPU into. There are only so many CRTs and, at the time, LCDs that we could possibly drive. So the question is, wouldn't it be amazing if our computer doesn't have to be connected to the viewing device, that the separation of it made it possible for us to compute somewhere else?
And one of our engineers came and showed it to me one day, and it was really capturing the frame buffer, encoding it into video, and streaming it to a receiver device, separating computing from the viewing device.
In many ways, that's cloud gaming, like 18 years ago.
In fact, that was when we started GFN. We knew that GFN was going to be a journey that would take a long time because you're fighting all kinds of problems, including the speed of light. Latency everywhere you look. That's right.
For instance, GFN, GeForce Now.
GeForce Now. Yeah. Yeah, GeForce Now. We've been working on GeForce Now.
It all makes sense, your first cloud product.
That's right. GeForce Now was NVIDIA's first data center product. And our second data center product was remote graphics, putting our GPUs in the world's enterprise data centers, which then led us to our third product, which combined CUDA plus our GPU, which became a supercomputer, which then worked towards more and more and more.
And the reason why it's so important is because the disconnection between where NVIDIA's computing is done versus where it's enjoyed, if you can separate that, your market opportunity explodes. Yeah, yeah. And it was completely true. And so we're no longer limited by the physical constraints of the desktop PC sitting by your desk. And we're not limited by one GPU per person.
And so it doesn't matter where it is anymore. And so that was really the great observation.
It's a good reminder. The data center segment of NVIDIA's business, to me, has become synonymous with "how is AI going?" And that's a false equivalence. And it's interesting that you were only this ready to sort of explode in AI and the data center because you had three-plus previous products where you learned how to build data center computers. Exactly.
Even though those markets weren't these like gigantic, world-changing technology shifts the way that AI is, that's how you learned.
Yeah, that's right. You want to pave the way to future opportunities. You can't wait until the opportunity is sitting in front of you for you to reach out for it. And so you have to anticipate. Our job as CEOs is to look around corners and to anticipate where will opportunities be someday? And even if I'm not exactly sure what and when, how do I position the company to be near it?
To be just standing near the tree, and we can do a diving catch when the apple falls. You guys know what I'm saying? Yeah. But you've got to be close enough to do the diving catch.
Rewind to 2015 and OpenAI. If you hadn't been laying this groundwork in the data center, you wouldn't be powering OpenAI right now.
Yeah. But the idea that computing will be mostly done away from the viewing device, that the vast majority of computing would be done away from the computer itself, that insight was good. In fact, cloud computing, everything about today's computing, is about separation of that.
And by putting it in a data center, we can overcome this latency problem, meaning you're not going to overcome speed of light. Speed of light end-to-end is only 120 milliseconds or something like that. It's not that long.
From a data center to an internet user?
Anywhere on the planet, yeah. Oh, I see.
End-to-end, literally across the planet.
Yeah, right. So if you could solve that problem, approximately something like that, I forget the number, but 70 milliseconds, 100 milliseconds, but it's not that long. And so my point is, if you could remove the obstacles everywhere else, then speed of light should be, you know, perfectly fine. And you could build data centers as large as you like, and you can do amazing things.
And this little tiny device that we use as a computer or, you know, your TV as a computer, whatever computer, they can all instantly become amazing. And so that insight, you know, 15 years ago was a good one.
So speaking of the speed of light, InfiniBand. Yeah. David's like begging me to go here. I was having the same thought. You totally saw that InfiniBand would be way more useful, way sooner than anyone else realized. Acquiring Mellanox, I think you uniquely saw that this was required to train large language models, and you were super aggressive in acquiring that company.
Why did you see that when no one else saw that?
Well, there are several reasons for that. First, if you want to be a data center company, building the processing chip isn't the way to do it. A data center is distinguished from a desktop computer or a cell phone not by the processor in it. A desktop computer and a data center use the same CPUs, use the same GPUs, apparently; they're very close.
So it's not the chip, it's not the processing chip that describes it, but it's the networking of it, it's the infrastructure of it, it's how the computing is distributed. how security is provided, how networking is done, you know, so on and so forth. And so those characteristics are associated with Mellanox, not NVIDIA.
And so the day that I concluded that really NVIDIA wants to build computers of the future, and computers of the future are going to be data centers, embodied in data centers, and we want to be a data center-oriented company, then we really need to get into networking. And so that was one.
The second thing is the observation that whereas cloud computing started in hyperscale, which is about taking commodity components and a lot of users and virtualizing many users on top of one computer, AI is really about distributed computing, where one job, one training job, is orchestrated across millions of processors. And so it's the inverse of hyperscale, almost.
And the way that you design a hyperscale computer with off-the-shelf commodity Ethernet, which is just fine for Hadoop, it's just fine for search queries, it's just fine for all of those things. But not when you're sharding a model across multiple racks. Not when you're sharding a model across, right. And so that observation says that the type of networking you want to do is not exactly Ethernet.
And the way that we do networking for supercomputing is really quite ideal. And so the combination of those two ideas convinced me that Mellanox is absolutely the right company, because they're the world's leading high-performance networking company, and we worked with them in so many different areas in high-performance computing already. Plus, I really like the people.
The Israel team is world-class. We have some 3,200 people there now, and it was one of the best strategic decisions I'd ever made.
When we were researching particularly part three of our NVIDIA series, we talked to a lot of people and many people told us the Mellanox acquisition is one of, if not the best of all time by any technology company.
I think so too, yeah. And it's so disconnected from the work that we normally do, it was surprising to everybody.
But to frame it this way, you were standing near where the action was, so you could figure out, as soon as that apple becomes available to purchase, like, oh, LLMs are about to blow up. I'm going to need that. Everyone's going to need that. I think I know that before anyone else does.
You want to position yourself near opportunities. You don't have to be that perfect, you know. You want to position yourself near the tree. And even if you don't catch the apple before it hits the ground, so long as you're the first one to pick it up. Yeah. You want to position yourself close to the opportunities.
And so that's a lot of my work: positioning the company near opportunities and making sure the company has the skills to monetize each one of the steps along the way so that we can be sustainable.
What you just said reminds me of a great aphorism from Buffett and Munger, which is it's better to be approximately right than exactly wrong.
Yeah, there you go. Yeah, that's a good one.
That's a good one to live by. Yeah. All right, listeners, we are here to tell you about a company that literally couldn't be more perfect for this episode, Crusoe.
Yes, Crusoe, as you know by now, is a cloud provider built specifically for AI workloads and powered by clean energy. And NVIDIA is a major partner of Crusoe. Their data centers are filled with A100s and H100s. And as you probably know, with the rising demand for AI, there's been a huge surge in the need for high-performing GPUs, leading to a noticeable scarcity of NVIDIA GPUs in the market.
Crusoe has been ahead of the curve and is among the first cloud providers to offer NVIDIA's H100s at scale. They have a very straightforward strategy. Create the best AI cloud solution for customers using the very best GPU hardware on the market that customers ask for, like NVIDIA, and invest heavily in an optimized cloud software stack.
Yep. To illustrate, they already have several customers already running large-scale generative AI workloads on clusters of NVIDIA H100 GPUs, which are interconnected with 3200 gigabit InfiniBand and leveraging Crusoe's network-attached block storage solution.
And because their cloud is run on wasted, stranded, or clean energy, they can provide significantly better performance per dollar than traditional cloud providers.
Yep. Ultimately, this results in a huge win-win. They take what is otherwise a huge amount of energy waste that causes environmental harm and use it to power massive AI workloads. And it's worth noting that through their operations, Crusoe is actually reducing more emissions than they would generate.
In fact, in 2022, Crusoe captured over 4 billion cubic feet of gas, which led to the avoidance of approximately 500,000 metric tons of CO2 emissions. That's equivalent to taking about 160,000 cars off the road.
Amazing. If you, your company, or your portfolio companies could use lower cost and more performant infrastructure for your AI workloads, go to crusoecloud.com slash acquired. That's C-R-U-S-O-E cloud dot com slash acquired, or click the link in the show notes.
I want to move away from NVIDIA, if you're okay with it, and ask you some questions, since we have a lot of founders that listen to this show, sort of advice for company building. The first one is, when you're starting a startup in the earliest days, your biggest competitor is that you don't make anything people want.
Like, your company's likely to die just because people don't actually care as much as you do about what you're building. That's right. In the later days, you actually have to be very thoughtful about competitive strategy. And I'm curious, what would be your advice to companies that have product market fit, that are starting to grow, they're in interesting growing markets?
Where should they look for competition and how should they handle it?
While there are all kinds of ways to think about competition, we prefer to position ourselves in a way that serves a need that usually hasn't emerged.
I've heard you or others in NVIDIA, I think, use the phrase zero billion dollar markets.
Yeah, that's exactly right. It's our way of saying there's no market yet, but we believe there will be one. And usually when you're positioned there, everybody's trying to figure out, why are you here? Right? When we first got into automotive, it was because we believed that in the future, the car is going to be largely software.
And if it's going to be largely software, a really incredible computer is necessary. And so when we positioned ourselves there, most people, I still remember one of the CTOs told me, you know what, cars cannot tolerate the blue screen of death. I don't think anybody can tolerate that, but... But that doesn't change the fact that someday every car will be a software-defined car.
And I think, you know, 15 years later, we're largely right. And so oftentimes there's non-consumption, and we like to navigate our company there. And by doing that, by the time the market emerges, it's very likely there aren't that many competitors shaped that way. And so we were early in PC gaming, and today NVIDIA is very large in PC gaming.
We reimagined what a design workstation would be like, and today just about every workstation on the planet uses NVIDIA's technology. We reimagined how supercomputing ought to be done and who should benefit from supercomputing, that we would democratize it. And look, today, NVIDIA's accelerated computing is quite large. And we reimagined how software would be done.
And today, it's called machine learning. And how computing would be done, we call it AI. And so we reimagined these kinds of things, and tried to do that about a decade in advance. And so we spent about a decade in $0 billion markets. And today, I spend a lot of time on Omniverse, and Omniverse is a classic example of a $0 billion business.
There's like 40 customers now, something like that.
Amazon, BMW.
Yeah, no, it's cool. It's cool. So let's say you do get this great 10-year lead, but then other people figure it out, and you've got people nipping at your heels. What are some structural things that someone who's building a business can do to sort of stay ahead? You can just keep your pedal to the metal and say, we're going to outwork them and we're going to be smarter.
That works to some extent, but those are tactics. What strategically can you do to make sure that you can maintain that lead?
Oftentimes, if you created the market, you ended up having what people describe as moats. Because if you build your product right and it's enabled an entire ecosystem around you to help serve that end market, you've essentially created a platform. Sometimes it's a product-based platform, sometimes it's a service-based platform, sometimes it's a technology-based platform.
But if you were early there and you were mindful about helping the ecosystem succeed with you, you ended up having this network of networks and all these developers and all these customers who are built around you. And that network is essentially your moat. And so, I don't love thinking about it in the context of a moat,
And the reason for that is because you're now focused on building stuff around your castle. I tend to like thinking about things in the context of building a network. And that network is about enabling other people to enjoy the success of the final market. You know, that you're not the only company that enjoys it, but you're enjoying it with a whole bunch of other people, including me.
I'm so glad you brought this up because I wanted to ask you. In my mind at least, and it sounds like in yours too, NVIDIA is absolutely a platform company, of which there are very few meaningful platform companies in the world. I think it's also fair to say that when you started for the first few years, you were a technology company and not a platform company.
Every example I can think of of a company that tried to start as a platform company fails. You've got to start as a technology person. When did you think about making that transition to being a platform? Like, your first graphics cards were technology. They weren't... there was no CUDA, there was no platform.
Yeah, what you observed is not wrong. However, inside our company, we were always a platform company. And the reason for that
is because from the very first day of our company, we had this architecture called UDA. It's the UDA of CUDA.
CUDA is Compute Unified Device Architecture.
That's right. The reason for that is because what we've done, what we essentially did in the beginning, even though Riva 128 only had computer graphics, the architecture described accelerators of all kinds. And we would take that architecture and developers would program to it. In fact, NVIDIA's first strategy, business strategy, was we were going to be a game console inside the PC.
And a game console needs developers, which is the reason why NVIDIA, a long time ago, one of our first employees was a developer relations person. And so it's the reason why we knew all the game developers and all the 3D developers and we knew everything.
So wait, so was the original business plan to like- Sort of like to build DirectX.
Yeah, compete with Nintendo and Sega, like, with PCs? In fact, the original NVIDIA architecture was called Direct NV. Direct NVIDIA, yeah. And DirectX was an API that made it possible for the operating system to directly connect to hardware.
But DirectX didn't exist when you started NVIDIA, right? And that's what made your strategy run for the first couple of years.
In 1993, we had Direct NV, which in 1995 became... well, DirectX came out.
So this is an important lesson. We were always a developer-oriented company. The initial attempt was we will get the developers to build on Direct NV, and then they'll build for our chips, and then we'll have a platform. Yeah, exactly. What played out is Microsoft already had all these developer relationships, so you learned the lesson the hard way of like, Yikes, we just got to slide into that.
I mean, that's what Microsoft did back in the day. They're like, oh, that could be a developer platform. We'll take that. Thank you.
No, but they had a lot. They did it very differently, and they did a lot of things right. We did a lot of things wrong. You were competing against Microsoft in the 90s.
It's like trying to compete against NVIDIA today.
No, it's a lot different, but I appreciate that. But we were nowhere near competing with them. If you look now, when CUDA came along, there was OpenGL, there was DirectX, but there's still another extension, if you will, and that extension is CUDA. And that CUDA extension allows a chip that got paid for running DirectX and OpenGL to create an install base for CUDA.
And so that's the immediate strategy.
And is this why you were so... militant, and I think from our research, it really was you being militant, that every NVIDIA chip will run CUDA.
Yeah, if you're a computing platform, everything's got to be compatible. We are the only accelerator on the planet where every single accelerator is architecturally compatible with the others. None has ever existed. There are literally a couple of hundred million, right? 250 million, 300 million installed base of active CUDA GPUs being used in the world today.
And they're all architecturally compatible. How would you have a computing platform if, you know, NV30 and NV39 and NV40 were all different? Across 30 years, it's all completely compatible. And so that's the only non-negotiable rule in our company. Everything else is negotiable.
I mean, and I guess CUDA was a rebirth of UDA, but understanding this now, UDA going all the way back, it really is all the way back to all the chips you've ever made.
Wow.
For the record, I don't know that I helped any of the founders and CEOs that are listening. I've got to tell you, while you were asking that question, what lessons would I impart? I don't know. I mean, the characteristics of successful companies and successful CEOs, I think, are fairly well described. There are a whole bunch of them. I just think starting successful companies is insanely hard.
It's just insanely hard. And when I see these amazing companies getting built, I have nothing but admiration and respect, because I just know that it's insanely hard. And I think that everybody did many similar things. There are some good, smart things that people do. There are some dumb things that you can do. But you could do all the right, smart things and still fail.
You could do a whole bunch of dumb things, and I did many of them, and still succeed. So obviously, that's not exactly right. I think skills are the things that you can learn along the way. But at important moments, certain circumstances have to come together. And I do think that the market has to be one of the agents to help you succeed.
It's not enough, obviously, because a lot of people still fail.
Do you remember any moments in NVIDIA's history where you're like, oh, we made a bunch of wrong decisions, but somehow we got saved? Because, you know, it takes the sum of all the luck and all the skill in order to succeed.
Do you remember any moments where you're like... I actually thought that you starting with the RIVA 128 was spot on. The RIVA 128, as I mentioned, the number of smart decisions we made, which are smart to this day. How we design chips is exactly the same to this day, because gosh, you know, nobody had ever done it that way back then.
And we pulled every trick in the book out of desperation because we had no other choice. Well, guess what? That's the way things ought to be done. And now everybody does it that way. Right. Everybody does it because, why should you do things twice if you can do it once? Why tape out a chip seven times if you could tape it out one time, right?
And so it's the most efficient, the most cost-effective, the most competitive. Speed is technology, right? Speed is performance. Time to market is performance. All of those things apply. So why do things twice if you could do it once? And so with the RIVA 128 we made a lot of great decisions in how we spec products, how we think about market needs and the lack thereof, how we judge markets, and all of those.
Man, we made some amazingly good decisions. Yeah, our back was against the wall. We only had one more shot to do it, but...
Once you pull out all the stops and you see what you're capable of, why would you put stops in next time? Exactly. You're like, let's keep the stops out all the time, every time. That's right.
Is it fair to say though, maybe on the luck side of the equation, thinking back to 1997, that that was the moment where consumers tip to really, really valuing 3D graphical performance in games?
Oh yeah, so for example, luck. Let's talk about luck. If Carmack hadn't decided to use acceleration... because remember, Doom was completely software rendered. The NVIDIA philosophy was that although general-purpose computing is a fabulous thing that's going to enable software and IT and everything, we felt that there were applications that wouldn't be possible, or would be too costly, if they weren't accelerated. They should be accelerated. And 3D graphics was one of them, but it wasn't the only one. It just happens to be the first one, and a really great one. And I still remember the first times we met John, he was quite emphatic about using CPUs, and his software renderer was really good.
I mean, quite frankly, if you look at Doom, the performance of Doom was really hard to achieve, even with accelerators at the time. If you didn't filter, if you didn't have to do bilinear filtering, it did a pretty good job.
The problem with Doom, though, was you needed Carmack to program it.
Yeah, you need Carmack to program it. Exactly. It was a genius piece of code. But nonetheless, software renderers did a really good job. And if he hadn't decided to go to OpenGL and accelerate Quake, frankly, what would be the killer app that put us here?
Right.
And so Carmack and Sweeney, both between Unreal and Quake, created the first two killer applications for consumer 3D. And so I owe them a great deal.
I want to come back real quick to, you know, you said you told these stories and you're like, well, I don't know what founders can take from that.
I actually do think, you know, if you look at all the big tech companies today, perhaps with the exception of Google, they did all start, and understanding this now about you, by addressing developers, planning to build a platform and tools for developers. You know, all of them. Apple, Amazon.
Not Amazon.
Well, I guess with AWS, that's how AWS started. So I think that actually is a lesson, to your point. It won't guarantee success by any means, but it'll get you hanging around the tree if the apple falls.
Yeah. As many good ideas as we have, you don't have all the world's good ideas. And the benefit of having developers is you get to see a lot of good ideas.
Yeah. Well, as we start to drift toward the end here, we spent a lot of time on the past and I want to think about the future a little bit. I'm sure you spend a lot of time on this, being on the cutting edge of AI. We're moving into an era where software can massively amplify the impact and the value that the person using it creates, which has to be amazing for humanity in the long run. In the short term, it's going to be inevitably bumpy as we figure out what that means. As AI gets more and more powerful and better at accelerating productivity, what do you think some of the solutions are for all the displaced jobs that are going to come from it? Well, first of all, we have to keep AI safe.
And there are a couple of different areas of AI safety that are really important. Obviously, in robotics and self-driving cars, there's a whole field of AI safety, and we've dedicated ourselves to functional safety and active safety and all kinds of different areas of safety. When do you apply human in the loop? When is it okay for a human not to be in the loop? How do you get to a point where, increasingly, a human doesn't have to be in the loop, but a human is largely in the loop? In the case of information safety, obviously bias, false information, and appreciating the rights of artists and creators, that whole area deserves a lot of attention. And you've seen some of the work that we've done.
Instead of scraping the internet, we partnered with Getty and Shutterstock to create a commercially fair way of applying generative AI. In the area of large language models and the future of increasingly greater-agency AI, clearly the answer, for as long as it's sensible, and I think it's going to be sensible for a long time, is human in the loop. The ability for an AI to self-learn and improve and change out in the wild in a digital form should be avoided. We should collect the data, we should curate the data, we should train the model, we should test the model, validate the model, before we release it out into the wild again. So the human is in the loop. Yep.
There are a lot of different industries that have already demonstrated how to build systems that are safe and good for humanity. Obviously, the way autopilot works for a plane, the two-pilot system, and then air traffic control, redundancy and diversity, all of the basic philosophies of designing safe systems apply as well in self-driving cars and so on and so forth.
And so I think there's a lot of models of creating safe AI and I think we need to apply them. With respect to automation, my feeling is that, and we'll see, but it is more likely that AI is going to create more jobs in the near term. The question is, what's the definition of near term? And the reason for that is the first thing that happens with productivity is prosperity.
And prosperity: when companies get more successful, they hire more people because they want to expand into more areas. And so the question is, if you think about a company and say, okay, if we improve the productivity, then they need fewer people. Well, that's only true if the company has no more ideas, and that's not true for most companies.
If you become more productive and the company becomes more profitable, usually they hire more people to expand into new areas. And so long as we believe that there are more areas to expand into, that there are more ideas in drug discovery, more ideas in transportation, more ideas in retail, more ideas in entertainment, more ideas in technology...
So long as we believe that there are more ideas, the prosperity of the industry, which comes from improved productivity, results in hiring more people to pursue more ideas. Now, go back in history: we can fairly say that today's industry is larger than the world's industries a thousand years ago. And the reason for that is, obviously, that humans have a lot of ideas.
And I think that there are plenty of ideas yet for prosperity and plenty of ideas that can be begotten from productivity improvements, so my sense is that it's likely to generate jobs. Now, obviously, net generation of jobs doesn't guarantee that any one human doesn't get fired. I mean, that's obviously true.
And it's more likely that someone will lose a job to someone else, some other human that uses an AI. And not likely to an AI, but to some other human that uses an AI. And so I think the first thing that everybody should do is learn how to use AI so that they can augment their own productivity
And every company should augment their own productivity to be more productive so that they can have more prosperity, hire more people. And so I think jobs will change. My guess is that we'll actually have higher employment. We'll create more jobs. I think industries will be more productive.
And many of the industries that are currently suffering from a lack of labor are likely to use AI to get themselves back on their feet and get back to growth and prosperity. So I see it a little bit differently, but I do think that jobs will be affected, and I'd encourage everybody just to learn AI.
This is appropriate. There's a version of something we talk about a lot on Acquired. We call it the Moritz Corollary to Moore's Law after Mike Moritz from Sequoia.
Sequoia was the first investor in our company. Yeah, of course. Yeah.
The great story behind it is that when Mike was taking over for Don Valentine, with Doug, he was sitting and looking at Sequoia's returns. He was looking at fund three or four, I think it was four maybe, that had Cisco in it. And he was like, how are we ever going to top that? You know, Don's going to have us beat; we're never going to beat that.
And he thought about it and he realized that, well, as compute gets cheaper and it can access more areas of the economy because it gets cheaper and can get adopted more widely, well, then the markets that we can address should get bigger.
Yeah.
And AI, your argument is basically, AI will do the same thing.
Exactly. I just gave you exactly the same example. That, in fact, productivity doesn't result in us doing less. Productivity usually results in us doing more. Everything we do will be easier, but we'll end up doing more, because we have infinite ambition. The world has infinite ambition. And so if a company is more profitable, they tend to hire more people to do more.
That's true. Technology is a lever. And the place where the idea kind of falls down is that we would be satisfied.
Yeah, humans have never-ending ambition.
No, humans will always expand and consume more energy and attempt to pursue more ideas. That has always been true of every version of our species. Yeah. Over time.
Now is a great time to share something new from our friends at Blinkist and Go1 that is very appropriate to this episode.
Yes. So personal story time. I, a few weeks ago, was scouring the web to find Jensen's favorite business books, which was proving to be difficult. I really wanted Blinkist to make blinks of each of those books so you could all access them. And I think I found one or two in random articles, but that just wasn't enough.
So finally, before I gave up, as a last resort, I asked an AI chat bot, specifically Bard, to provide me a list and cite the sources of Jensen's favorite business books. And miraculously, it worked. Bard found books that Jensen had called out in public forums over the past several decades.
So if you click the link in the show notes or go to Blinkist.com slash Jensen, you can get the blinks of all five of those books, plus a few more that Jensen specifically told us about later in the episode.
Yes. And we also have an offer from Blinkist and Go1 that goes beyond personal learning. Blinkist has handpicked a collection of books related to the themes of this episode. So tech innovation, leadership, the dynamics of acquisitions. These books offer the mental models to adapt to a rapidly changing technology environment.
And just like all other episodes, Blinkist is giving acquired listeners an exclusive 50% discount on all premium content. This gives you key insights from thousands of books at your fingertips, all condensed into easy-to-digest summaries.
And if you're a founder, a team lead, or an L&D manager, Blinkist also includes curated reading lists and progress tracking features all overseen by a dedicated customer success manager to help your team flourish as you grow.
Yes. So to claim the whole free collection, unlock the 50% discount, and explore Blinkist's enterprise solution, simply visit Blinkist.com slash Jensen and use the promo code Jensen. Blinkist and their parent company, Go1, are truly awesome resources for your company and your teams as they develop from small startup to enterprise. Our thanks to them. And seriously, this offer is pretty awesome.
Go take them up on it. We have a few lightning round questions we want to ask you. And then we have a very fun... I can't think that fast.
We'll open with an easy one based on all these conference rooms we see named around here. Favorite sci-fi book?
I've never read a sci-fi book before.
No, come on.
Yeah, yeah.
What's with the obsession with Star Trek?
Oh, just, you know, I watch the TV show. Okay, favorite sci-fi TV show. Star Trek's my favorite. Yeah, Star Trek's my favorite.
I saw V'ger out there on the way in. It's a good conference room name.
V'ger's an excellent one, yeah.
Yeah.
What car is your daily driver these days? And related question, do you still have the Supra?
Oh, it's one of my favorite cars and also one of my favorite memories. You guys might not know this, but Lori and I got engaged one Christmas, and we drove back in my brand-new Supra and we totaled it. We were this close to the end. Thank God you didn't. But nonetheless, it wasn't my fault. It wasn't the Supra's fault, but it's a mark.
The one time when it wasn't the Supra's fault.
I love that car. I'm driven these days for security reasons and others, but I'm driven in the Mercedes EQS. It's a great car. Nice. Yeah, great car.
Nice. Using NVIDIA technology?
Yeah, we're the central computer. Sweet.
I know we already talked a little bit about business books, but one or two favorites that you've taken something from?
Clay Christensen, I think, that series is the best. I mean, there's just no two ways about it. And the reason for that is because it's so intuitive and so sensible. It's approachable. I read a whole bunch of them, just about all of them. I also really enjoyed Andy Grove's books. They're all really good.
Awesome. Favorite characteristic of Don Valentine?
Grumpy, but endearing. And what he said to me the last time, as he decided to invest in our company, was: if you lose my money, I'll kill you. Of course I did. And then over the course of the decades, the years that followed, when something nice was written about us in the Mercury News, it seemed like he wrote on it in crayon. You know, he'd just write over the newspaper, good job, Don, and mail it to me. I hope I've kept them, but anyway, you could tell he was a real sweetheart, but he cares about the companies. He's a special character. Yeah, he's incredible.
What is something that you believe today that 40-year-old Jensen would have pushed back on and said, no, I disagree?
There's plenty of time. Yeah, there's plenty of time. If you prioritize yourself properly and you make sure that you don't let Outlook be the controller of your time, there's plenty of time.
Plenty of time in the day, plenty of time- To do anything. To achieve things.
Yeah, to do anything. Just don't do everything. Prioritize your life, make sacrifices. Don't let Outlook control what you do every day. Notice I was late to our meeting. And the reason for that is, by the time I looked up, I thought, oh my gosh, you know, Ben and Dave are waiting, you know, it's already... We got time. Yeah, exactly.
Didn't stop this from being a great job.
No, but you have to prioritize your time really carefully and don't let Outlook determine that.
Love that. What are you afraid of, if anything?
I'm afraid of the same things today that I was in the very beginning of this company, which is letting the employees down. You have a lot of people who joined your company because they believe in your hopes and dreams and they've adopted it as their hopes and dreams. And you want to be right for them. You want to be successful for them.
You want them to be able to build a great life as well as help you build a great company and be able to build a great career. You want them to be able to enjoy all of that. And these days, I want them to be able to enjoy the things I've had the benefit of enjoying and all the great success I've enjoyed. I want them to be able to enjoy all of that.
And so I think the greatest fear is that you let them down.
What point did you realize that you weren't going to have another job? That, like, this was it?
I just, I don't change jobs. You know, if it wasn't because of Chris and Curtis convincing me to do NVIDIA, I would still be at LSI Logic today, I'm certain of it. Wow, really? Yeah, yeah. I would keep doing what I'm doing. And at the time that I was there, I was completely dedicated and focused on helping LSI Logic be the best company it could be. And I was LSI Logic's best ambassador.
I've got great friends to this day that I've known from LSI Logic. It's a company I loved then, and I love dearly today. I know exactly where it went, the revolutionary impact it had on chip design and system design and computer design. In my estimation, it's one of the most important companies that ever came to Silicon Valley, and it changed everything about how computers were made.
It put me at the epicenter of some of the most important events in the computer industry. It led me to meeting Chris and Curtis and Andy Bechtolsheim and Jon Rubinstein and some of the most important people in the world. And Frank, who I was with the other day, and, I mean, the list goes on. And so LSI Logic was really important to me, and I would still be there.
Who knows what LSI Logic would have become if I were still there, right? And so that's kind of how my mind works.
Powering the AI of the world.
Yeah, exactly. I mean, I might be doing the same thing I'm doing today.
I got the sense from remembering back to part one of our series on NVIDIA.
But until I'm fired, this is my last job. I love it.
I got the sense that LSI Logic might have also changed your perspective and philosophy about computing. Because the sense we got from the research was that, right out of school, when you first went to AMD, you believed in kind of a version of the Jerry Sanders "real men have fabs" idea, like you need to do the whole stack, you've got to do everything, and that LSI Logic changed you.
What LSI Logic did was realize that you can express
transistors and logical gates and chip functionality in high-level languages, by raising the level of abstraction into what is now called high-level design. That term was coined by Harvey Jones, who's on NVIDIA's board and whom I met way back in the early days of Synopsys. During that time, there was this belief that you could express chip design in high-level languages.
By doing so, you could take advantage of optimizing compilers and optimization logic and tools and be a lot more productive. That logic was so sensible to me, and I was 21 years old at the time, and I wanted to pursue that vision. Frankly, that idea happened in machine learning. It happened in software programming.
I want to see it happen in digital biology so that we can think about biology in a much higher level language. Probably a large language model would be the way to make it representable. That transition was so revolutionary, I thought that was the best thing that ever happened to the industry, and I was really happy to be part of it, and I was at ground zero.
And so I saw one industry change revolutionize another industry. And if not for LSI Logic doing the work that it did, and Synopsys shortly after, would the computer industry be where it is today? Yeah, it's really, really terrific. I was at the right place at the right time to see all that.
That's super cool.
Yeah.
And it sounded like the CEO of LSI Logic put a good word in for you with Don Valentine too.
I didn't know how to write a business plan.
Which it turns out is not actually important.
No, no, no. It turns out that making a financial forecast that nobody knows is going to be right or wrong is not that important. But the important things a business plan probably could have teased out... I think the art of writing a business plan ought to be much, much shorter. It forces you to condense: what is the true problem you're trying to solve?
What is the unmet need that you believe will emerge? And what is it that you're going to do that is sufficiently hard that when everybody else finds out it's a good idea, they're not going to swarm it and, you know, make you obsolete? And so it has to be sufficiently hard to do. There are a whole bunch of other skills that are involved in just, you know,
product and positioning and pricing and go-to-market and all that kind of stuff. But those are skills, and you can learn those things easily. The stuff that is really, really hard is the essence of what I described. And I did that okay, but I had no idea how to write the business plan. And I was fortunate that Wilf Corrigan was so pleased with me and the work that I did when I was at LSI Logic.
He called up Don Valentine and told Don, you know, invest in this kid, he's going to come your way. And so I was set up for success from that moment, and it got us off the ground. As long as he didn't lose the money. I think Sequoia did okay. I think we're probably one of the best investments they've ever made.
Have they held through today?
The VC partner is still on the board, Mark Stevens.
Yeah, Mark Stevens.
Yeah. Yeah. All these years. The two founding VCs are still on the board. Sutter Hill and Sequoia? Yeah, Tench Coxe and Mark Stevens. I don't think that ever happens.
Yeah.
Amazing. We are singular in that circumstance, I believe.
They've added value this whole time, been inspiring this whole time, gave great wisdom and great support. But they also... they haven't killed you yet.
No, not yet. But they've been entertained, you know, by the company, inspired by the company, and enriched by the company. And so they stayed with it, and I'm really grateful.
Well, that being our final question for you: it's 2023, the 30-year anniversary of the founding of NVIDIA.
If you were magically 30 years old again today in 2023, and you were going to Denny's with your two best friends who are the two smartest people you know, and you're talking about starting a company, what are you talking about starting?
I wouldn't do it. I know. And the reason for that is really quite simple, ignoring the company that we would start. First of all, I'm not exactly sure. The reason why I wouldn't do it, and it goes back to why it's so hard, is that building a company and building NVIDIA turned out to have been a million times harder than I expected it to be, than any of us expected it to be.
And at that time, if we had realized the pain and suffering, and just how vulnerable you're going to feel, and the challenges that you're going to endure, the embarrassment and the shame and the list of all the things that go wrong, I don't think anybody would start a company. Nobody in their right mind would do it. And I think that that's kind of the superpower of an entrepreneur.
They don't know how hard it is. And they only ask themselves, how hard can it be? And to this day, I trick my brain into thinking, how hard can it be? Because you have to. Still, when you wake up in the morning. Yeah, how hard can it be? Everything that we're doing, how hard can it be? Omniverse, how hard can it be?
I don't get the sense though that you're planning to retire anytime soon though.
No, I'm still young.
You could choose to say like, whoa, this is too hard.
The trick is still working. The trick is still working. I'm still enjoying myself immensely and I'm adding a little bit of value, but that's really the trick of an entrepreneur. You have to get yourself to believe that it's not that hard, because it's way harder than you think.
And so if I go taking all of my knowledge now and I go back and I said, I'm going to endure that whole journey again, I think it's too much. It is just too much.
Do you have any suggestions on any kind of support system or a way to get through the emotional trauma that comes with building something like this?
Family and friends and all the colleagues we have here. I'm surrounded by people who've been here for 30 years. Chris has been here for 30 years and Jeff Fisher has been here 30 years. Dwight's been here 30 years and Jonah and Brian have been here, you know, 25 some years and probably longer than that. And, you know, Joe Greco has been here 30 years.
I'm surrounded by these people that never one time gave up, and they never one time gave up on me. And that's the entire ball of wax, you know. And to be able to go home and have your family be fully committed to everything that you're trying to do, and through thick and thin they're proud of you and proud of the company. You kind of need that. You need the unwavering support of people around you.
You know, the Jim Gaithers and, you know, the Tench Coxes and Mark Stevens and, you know, Harvey Jones and all the early people of our company, the Bill Millers. They not one time gave up on the company and us. And you need that. Not kind of need that; you need that.
And I'm pretty sure that almost every successful company and entrepreneurs that have gone through some difficult challenges, they had that support system around them.
I can only imagine how meaningful that is. I mean, I know how meaningful that is in any company, but for you, I feel like the NVIDIA journey is...
particularly amplified on these dimensions, right?
And like, you know, you went through two, if not three, 80%-plus drawdowns in the public markets. To have investors who've stuck with you from day one through that must be just, like, so much support.
Yeah, yeah, it is incredible. And you hate that any of that stuff happened, and most of it, you know, most of it is out of your control. But an 80% fall, it's an extraordinary thing no matter how you look at it.
I forget exactly, but I mean, we traded down to about two, three billion dollars in market value for a while because of the decision we made in going into CUDA and all that work. And your belief system has to be really, really strong. You know, you have to really, really believe it and really, really want it. Otherwise, it's just too much to endure.
I mean, because, you know, everybody's questioning you and employees aren't questioning you, but employees have questions.
Right.
People outside are questioning you. And it's a little embarrassing. It's like, you know, when your stock price gets hit, it's embarrassing no matter how you think about it. And it's hard to explain, you know. And so there's no good answers to any of that stuff. CEOs are human and companies are built of humans and these challenges are hard to endure.
Ben had an appropriate comment on our most recent episode on you all, where we were talking about the current situation at NVIDIA. I think he said, for any other company this would be a precarious spot to be in, but for NVIDIA...
This is old hat. You guys are familiar with these large swings in amplitude.
Yeah. The thing to keep in mind at all times is: what is the market opportunity that you're engaging? That informs your size. I was told a long time ago that NVIDIA can never be larger than a billion dollars. Obviously, that was an underestimation, an under-imagination, of the size of the opportunity. It is the case that no chip company can ever be so big.
But if you're not a chip company, then why does that apply to you? And this is the extraordinary thing about technology right now: technology is a tool, and it's only so large. What's unique about our current circumstance today is that we're in the manufacturing of intelligence, we're in the manufacturing of work. That's AI.
And the world of tasks, doing work, productive, generative AI work, generative intelligent work, that market size is enormous. It's measured in trillions. One way to think about that is if you built a chip for a car, how many cars are there and how many chips would they consume? That's one way to think about that.
However, if you build a system that, whenever needed, assists in the driving of the car, what's the value of an autonomous chauffeur every now and then? And so now, obviously, the problem becomes much larger, the opportunity becomes larger. You know, what would it be like if we were to magically conjure up a chauffeur for everybody who has a car? How big is that market?
And obviously, that's a much, much larger market. And so what the technology industry, what NVIDIA has discovered, and what some others have discovered, is that by separating ourselves from being a chip company and building on top of the chip, you're now an AI company, and the market opportunity has grown by probably a thousand times.
Don't be surprised if technology companies become much larger in the future, because what you produce is something very different. And that's kind of the way to think about how large your opportunity can be, how large you can be. It has everything to do with the size of the opportunity.
Yep. Well, Jensen, thank you so much. Thank you. Woo, David. That was awesome.
So fun.
Well, listeners, we want to tell you that you should totally sign up for our email list. Of course, you get notifications when we drop a new episode, but we've added something new. We're including little tidbits that we learn after releasing each episode, including listener corrections. And we've also been sort of teasing what the next episode will be.
So if you want to play the little guessing game along with the rest of the Acquired community, sign up at acquired.fm slash email. Our huge thank-you to Blinkist, Statsig, and Crusoe. All the links are in the show notes to learn more and get the exclusive offers for the Acquired community from each of them. You should check out ACQ2, which is available on any podcast player.
These main Acquired episodes are getting longer and coming out, you know, once a month instead of once every couple weeks, so it's a little bit more of a rarity these days. We've been up-leveling our production process, and that takes time. Yes. ACQ2 has become the place to get more from David and I, and we've got some awesome episodes coming up that we are excited about.
If you want to come deeper into the Acquired kitchen, become an LP, acquired.fm slash LP. Once every couple months or so, we'll be doing a call with all of you on Zoom just for LPs to get the inside scoop of what's going on in Acquired land and get to know David and I a little bit better. And once a season, you'll get to help us pick a future episode. So that's acquired.fm slash LP.
Anyone should join the Slack, acquired.fm slash Slack. God, we've got a lot of things now, David. I know, the hamburger bar on our website is expanding. Expanding, I know. That's how you know we're becoming enterprise. We have a mega menu, a menu of menus, if you will.
What is the acquired solution that we can sell? That's true. We got to find that.
All right. With that, listeners, acquired.fm slash Slack to join the Slack and discuss this episode. Acquired.fm slash store to get some of that sweet merch that everyone is talking about. And with that, listeners, we will see you next time.
We'll see you next time.