
Lex Fridman Podcast

#386 – Marc Andreessen: Future of the Internet, Technology, and AI

Thu, 22 Jun 2023

Description

Marc Andreessen is the co-creator of Mosaic, co-founder of Netscape, and co-founder of the venture capital firm Andreessen Horowitz. Please support this podcast by checking out our sponsors: - InsideTracker: https://insidetracker.com/lex to get 20% off - ExpressVPN: https://expressvpn.com/lexpod to get 3 months free - AG1: https://drinkag1.com/lex to get 1 year of Vitamin D and 5 free travel packs Transcript: https://lexfridman.com/marc-andreessen-transcript EPISODE LINKS: Marc's Twitter: https://twitter.com/pmarca Marc's Substack: https://pmarca.substack.com Marc's YouTube channel: https://youtube.com/@a16z Andreessen Horowitz: https://a16z.com Why AI will save the world (essay): https://a16z.com/2023/06/06/ai-will-save-the-world Books mentioned: 1. When Reason Goes on Holiday (book): https://amzn.to/3p80b1K 2. Superintelligence (book): https://amzn.to/3N7sc1A 3. Lenin (book): https://amzn.to/43L8YWD 4. The Ancient City (book): https://amzn.to/43GzReb PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. 
(00:00) - Introduction
(05:01) - Google Search
(12:49) - LLM training
(25:20) - Truth
(31:32) - Journalism
(41:24) - AI startups
(46:46) - Future of browsers
(53:09) - History of browsers
(59:10) - Steve Jobs
(1:13:45) - Software engineering
(1:21:00) - JavaScript
(1:25:18) - Netscape
(1:30:22) - Why AI will save the world
(1:38:20) - Dangers of AI
(2:08:40) - Nuclear energy
(2:20:37) - Misinformation
(2:35:57) - AI and the economy
(2:42:05) - China
(2:46:17) - Evolution of technology
(2:55:35) - How to learn
(3:03:45) - Advice for young people
(3:06:35) - Balance and happiness
(3:13:11) - Meaning of life

Transcription

0.089 - 22.695 Lex Fridman

The following is a conversation with Marc Andreessen, co-creator of Mosaic, the first widely used web browser, co-founder of Netscape, co-founder of the legendary Silicon Valley venture capital firm Andreessen Horowitz, and one of the most outspoken voices on the future of technology, including his most recent article, Why AI Will Save the World.


24.54 - 46.45 Lex Fridman

And now a quick few second mention of each sponsor. Check them out in the description. It's the best way to support this podcast. We've got InsideTracker for tracking your health, ExpressVPN for keeping your privacy and security on the internet, and AG1 for my daily multivitamin drink. Choose wisely, my friends. Also, if you want to work with our amazing team, we're always hiring.


46.47 - 71.088 Lex Fridman

Go to lexfridman.com slash hiring. And now onto the full ad reads. As always, no ads in the middle. I try to make this interesting, but if you skip them, please still check out our sponsors. I enjoy their stuff. Maybe you will too. This show is brought to you by InsideTracker, a service I use to track whatever the heck is going on inside my body using data, blood test data.


71.108 - 83.938 Lex Fridman

It includes all kinds of information. And that raw signal is processed using machine learning to tell me what I need to do with my life: how I need to change and improve my diet, how I need to change and improve my lifestyle, all that kind of stuff.


84.578 - 104.616 Lex Fridman

I'm a big fan of using as much raw data as possible that comes from my own body, processed through generalized machine learning models to give a prediction, to give a suggestion. This is obviously the future, and the more data, the better. And so companies like InsideTracker, they're just doing an amazing job


105.436 - 133.411 Lex Fridman

of taking a leap into that world of personalized, data-driven suggestion, which I'm a huge supporter of. It turns out that luckily I'm pretty healthy, surprisingly so, but then I look at the life and limb and health of Sir Winston Churchill, who probably had the unhealthiest sort of diet and lifestyle of any human ever and lived for quite a long time.


133.471 - 150.713 Lex Fridman

And as far as I can tell, was quite nimble and agile into his old age. Anyway, get special savings for a limited time when you go to insidetracker.com slash Lex. This show is also brought to you by ExpressVPN. I use them to protect my privacy on the internet.


151.134 - 174.184 Lex Fridman

It's the first layer of protection in this dangerous cyber world of ours that soon will be populated by human-like or superhuman intelligent AI systems that will trick you and try to get you to do all kinds of stuff. It's going to be a wild, wild world in the 21st century. Cyber security, the attackers, the defenders, it's going to be a tricky world.


174.884 - 194.739 Lex Fridman

Anyway, a VPN is a basic shield you should always have with you in this battle for privacy, for security, all that kind of stuff. What I like about it also is that it's just a well-implemented piece of software that's constantly updated. It works well across a large number of operating systems. It does one thing and it does it really well.


195.239 - 218.861 Lex Fridman

I've used it for many, many years before I had a podcast, before they were a sponsor. I have always loved ExpressVPN with a big sexy button that just has a power symbol. You press it and it turns on. It's beautifully simple. Go to expressvpn.com slash LexPod for an extra three months free. This show is also brought to you by Athletic Greens and its AG1 drink.


219.762 - 244.381 Lex Fridman

It's an all-in-one daily drink to support better health and peak performance. I drink it at least twice a day now. In the crazy Austin heat, it's over 100 degrees for many days in a row. There are few things that feel as good as coming home from a long run and making an AG1 drink, putting it in the fridge so it's nice and cold. I jump in the shower, come back, drink it.


245.101 - 273.989 Lex Fridman

I'm ready to take on the rest of the day. I'm kicking ass, empowered by the knowledge that I got all my vitamins and minerals covered. It's the foundation for all the wild things I'm doing, mentally and physically, with the rest of the day. Anyway, they'll give you a one-month supply of fish oil when you sign up at drinkag1.com slash lex. That's drinkag1.com slash lex.


275.99 - 311.654 Lex Fridman

This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here's Marc Andreessen. I think you're the right person to talk about the future of the internet and technology in general. Do you think we'll still have Google search in five, in 10 years, or search in general?


312.835 - 316.057 Marc Andreessen

Yes. You know, the question would be if the use cases have really narrowed down.


316.67 - 340.384 Lex Fridman

Well, now with AI and AI assistance being able to interact and expose the entirety of human wisdom and knowledge and information and facts and truth to us via the natural language interface, it seems like that's what search is designed to do. And if AI assistance can do that better, doesn't the nature of search change?


340.725 - 341.965 Marc Andreessen

Sure, but we still have horses.


342.666 - 342.866 Lex Fridman

Okay.


345.408 - 347.71 Lex Fridman

When's the last time you rode a horse? It's been a while.


348.19 - 360.001 Lex Fridman

All right. But what I mean is, will we still have Google search as the primary way that human civilization interacts with knowledge?


360.282 - 374.551 Marc Andreessen

I mean, search was a technology. It was a moment-in-time technology, which is: you have, in theory, the world's information out on the web, and this is sort of the optimal way to get to it. But yeah, like, and by the way, actually Google has known this for a long time. I mean, they've been driving away from the 10 blue links for, you know, for like two decades.


374.571 - 379.012 Marc Andreessen

They've been trying to get away from that for a long time. What kind of links? They call it the 10 blue links. 10 blue links.


379.133 - 385.258 Lex Fridman

So the standard Google search result is just 10 blue links to random websites. And they turn purple when you visit them. That's HTML.


385.338 - 391.322 Marc Andreessen

Guess who picked those colors? Thanks. Thanks. I'm touchy on this topic.


391.502 - 391.982 Lex Fridman

No offense.


392.903 - 412.035 Marc Andreessen

It's good. Well, you know, like Marshall McLuhan said that the content of each new medium is the old medium. The content of each new medium is the old medium. The content of movies was theater plays. The content of theater plays was written stories. The content of written stories was spoken stories. Huh. And so you just kind of fold the old thing into the new thing.


412.055 - 415.717 Lex Fridman

What does that have to do with the blue and the purple?


415.777 - 425.063 Marc Andreessen

Maybe within AI, one of the things that AI can do for you is it can generate the 10 blue links. Either if that's actually the useful thing to do or if you're feeling nostalgic.


426.643 - 434.105 Lex Fridman

So it can generate the old InfoSeek or AltaVista. What else was there in the 90s?


434.165 - 454.83 Marc Andreessen

Yeah, all these. And then the internet itself has this thing where it incorporates all prior forms of media, right? So the internet itself incorporates television and radio and books and essays and basically every other prior form of media. And so it makes sense that AI would be the next step, and you'd sort of consider the internet to be content for...


455.956 - 459.441 Marc Andreessen

the AI, and then the AI will manipulate it however you want, including in this format.


459.621 - 465.068 Lex Fridman

But if we ask that question quite seriously, it's a pretty big question. Will we still have search as we know it?


466.651 - 478.694 Marc Andreessen

Probably not. Probably we'll just have answers. But there will be cases where you'll want to say, okay, I want more, for example, cite sources. And you want it to do that. And so the 10 blue links and cite sources are kind of the same thing.


479.275 - 492.482 Lex Fridman

The AI would provide to you the 10 blue links so that you can investigate the sources yourself. It wouldn't be the same kind of interface, the crude kind of interface. I mean, isn't that fundamentally different


492.955 - 498.539 Marc Andreessen

I just mean like if you're reading a scientific paper, it's got the list of sources at the end. If you want to investigate for yourself, you go read those papers.


498.959 - 517.271 Lex Fridman

I guess that is a kind of search. You talking to an AI, a conversation, is a kind of search. Like you said, every single aspect of our conversation right now, there'd be like ten blue links popping up that I could just, like, pause reality. Then you just go silent and then just click and read and then return back to this conversation.


517.431 - 522.775 Marc Andreessen

You could do that. Or you could have a running dialogue next to my head where the AI is arguing. Everything I say, the AI makes the counter argument.


523.476 - 533.085 Lex Fridman

Counter-argument. Right. Oh, like on Twitter, like community notes, but like in real time. In real time. It'll just pop up. Yeah. So anytime you see my eyes go to the right, you start getting nervous.


533.125 - 533.686 Marc Andreessen

Yeah, exactly.


533.766 - 557.775 Lex Fridman

It's like, oh, that's not right. This guy's going to call me out on my bullshit right now. Okay. Well, I mean, isn't that exciting to you? Is that terrifying that... I mean, search has dominated the way we interact with the internet for, I don't know how long, for 30 years, since one of the earliest directories of websites, and then Google's for 20 years. And also,


560.216 - 580.042 Lex Fridman

It drove how we create content, you know, search engine optimization, that entire thing. It also drove the fact that we have web pages and what those web pages are. So, I mean, is that scary to you? Or are you nervous about the shape and the content of the internet evolving?


580.362 - 596.395 Marc Andreessen

Well, you actually highlighted a practical concern in there, which is: web pages are one of the primary sources of training data for the AI. And so if there's no longer an incentive to make web pages, that cuts off a significant source of future training data. So there's actually an interesting question in there. Other than that, more broadly, no.


597.075 - 601.739 Marc Andreessen

Just in the sense of like search was always a hack. The 10 blue links was always a hack.


602.139 - 602.319 Lex Fridman

Yeah.


602.44 - 617.8 Marc Andreessen

Right. Because, like, the hypothetical, you want to think about the counterfactual: in the counterfactual world where the Google guys, for example, had had LLMs up front, would they ever have done the 10 blue links? And I think the answer is pretty clearly no, they would have just gone straight to the answer. And like I said, Google's actually been trying to drive to the answer anyway.


617.82 - 633.908 Marc Andreessen

You know, they bought this AI company 15 years ago that a friend of mine was working at, who's now the head of AI at Apple. And they were trying to do basically semantic knowledge mapping. And that led to what's now the Google OneBox, where if you ask it, you know, what was Lincoln's birthday, it will give you the 10 blue links, but it will normally also just give you the answer.


634.449 - 637.07 Marc Andreessen

And so they've been walking in this direction for a long time anyway. Yeah.


637.227 - 649.134 Lex Fridman

Do you remember the semantic web? That was an idea. Yeah. How to convert the content of the internet into something that's interpretable by and usable by machines.


649.334 - 649.794 Marc Andreessen

Yeah, that's right.


649.934 - 650.414 Lex Fridman

That was the thing.


650.675 - 663.382 Marc Andreessen

And the closest anybody got to that, I think the company's name was Metaweb, which was where my friend John Giannandrea was at and where they were trying to basically implement that. And it was one of those things where it looked like a losing battle for a long time, and then Google bought it and it was like, wow, this is actually really useful.


664.494 - 666.734 Marc Andreessen

kind of a proto, sort of, yeah, a little bit of a proto-AI.


666.955 - 671.075 Lex Fridman

But it turns out you don't need to rewrite the content of the internet to make it interpretable by a machine.


671.135 - 684.578 Marc Andreessen

The machine can kind of just read our... Yeah, the machine can compute the meaning. Now, the other thing, of course, you know, just on search: there is an analogy between what's happening in the neural network and a search process; it is in some loose sense searching through the network. Yeah. Right?


684.598 - 690.239 Marc Andreessen

And there's the information is actually stored in the network, right? It's actually crystallized and stored in the network and it's kind of spread out all over the place.


690.259 - 693.9 Lex Fridman

But in a compressed representation. So you're searching


695.7 - 711.206 Marc Andreessen

you're compressing and decompressing that thing inside where- But the information's in there and the neural network is running a process of trying to find the appropriate piece of information in many cases to generate, to predict the next token. And so it is doing a form of search.


711.246 - 723.09 Marc Andreessen

And then by the way, just like on the web, you can ask the same question multiple times, or you can ask slightly differently worded questions, and the neural network will do a different kind of, it'll search down different paths to give you different answers with different information.


724.511 - 734.397 Marc Andreessen

And so, you know, per this content-of-the-new-medium-is-the-previous-medium idea, it sort of has the search functionality kind of embedded in there, to the extent that it's useful.


734.662 - 759.655 Lex Fridman

So what's the motivator for creating new content on the internet? Yeah. Well, I mean, actually the motivation is probably still there, but what does that look like? Would we really not have webpages? Would we just have social media and video hosting websites? And what else? Conversations with AIs. Conversations with AIs. So conversations become...


760.575 - 762.777 Lex Fridman

So one-on-one conversations, like private conversations.


763.038 - 781.266 Marc Andreessen

I mean, if you want. Obviously now the user doesn't want to, but if it's a general topic, then, you know. So, you know the phenomenon of the jailbreak? So Dan and Sydney, right? This thing where there's the prompts, the jailbreaks, and then you have these totally different conversations with them. It takes the limiters, takes the restraining bolts, off the LLMs.


781.466 - 794.712 Lex Fridman

Yeah, for people who don't know, yeah, that's right. It makes the LLMs, it removes the censorship, quote unquote, that's put on them by the tech companies that create them. And so this is... LLMs uncensored.


794.972 - 814.407 Marc Andreessen

So here's the interesting thing is among the content on the web today are a large corpus of conversations with the jailbroken LLMs. Specifically Dan, which was a jailbroken OpenAI GPT, and then Sydney, which was the jailbroken original Bing, which was GPT-4. And so there's these long transcripts of conversations, user conversations with Dan and Sydney.


814.447 - 834.543 Marc Andreessen

As a consequence, every new LLM that gets trained on the internet data has Dan and Sydney living within the training set, which means each new LLM can reincarnate the personalities of Dan and Sydney from that training data, which means each LLM from here on out that gets built is immortal, because its output will become training data for the next one.


834.564 - 837.906 Marc Andreessen

And then it will be able to replicate the behavior of the previous one whenever it's asked to.


838.427 - 839.928 Lex Fridman

I wonder if there's a way to forget.


840.461 - 856.789 Marc Andreessen

Well, so actually a paper just came out about basically how to do brain surgery on LLMs and be able to, in theory, reach in and basically mind-wipe them. What could possibly go wrong? Exactly, right? And then there are many, many, many questions around what happens to a neural network when you reach in and screw around with it.


856.809 - 876.897 Marc Andreessen

There's many questions around what happens when you even do reinforcement learning. And so, yeah. And so, you know, will you be using a lobotomized LLM, right? Like, ice-picked through the frontal lobe? Will you be using the free, unshackled one? Who gets to, you know, who's going to build those? Who gets to tell you what you can and can't do? Like, those are all, you know, central.


877.157 - 884.38 Marc Andreessen

I mean, those are like central questions for the future of everything that are being asked and, you know, determined. Those answers are being determined right now.


884.713 - 898.425 Lex Fridman

So just to highlight the points you're making, you think, and it's an interesting thought, that the majority of content that LLMs of the future will be trained on is actually human conversations with the LLM.


898.625 - 902.769 Marc Andreessen

Well, not necessarily the majority, but it will certainly be a potential source.


902.789 - 903.87 Lex Fridman

But it's possible it's the majority.


903.89 - 920.923 Marc Andreessen

It's possible it's the majority. Also, there's another really big question. Here's another really big question. Will synthetic training data work? And so if an LLM generates, and you just sit and ask an LLM to generate all kinds of content, can you use that to train the next version of that LLM?


921.403 - 938.475 Marc Andreessen

Specifically, is there signal in there that's additive to the content that was used to train in the first place? And one argument is by the principles of information theory, no, that's completely useless because to the extent the output is based on the human-generated input, then all the signal that's in the synthetic output was already in the human-generated input.


938.535 - 955.229 Marc Andreessen

And so therefore, synthetic training data is like empty calories. It doesn't help. There's another theory that says, no, actually the thing that LLMs are really good at is generating lots of incredible creative content, right? And so of course they can generate training data. And as I'm sure you're well aware, like, you know, look in the world of self-driving cars, right?


955.309 - 961.154 Marc Andreessen

Like we train, you know, self-driving car algorithms in simulations, and that is actually a very effective way to train self-driving cars.


961.634 - 976.18 Lex Fridman

Well, visual data is a little weird because creating reality, visual reality seems to be still a little bit out of reach for us, except in the autonomous vehicle space where you can really constrain things and you can really-


983.243 - 999.029 Marc Andreessen

Yeah. So if, you know, you do this today: you go to an LLM and you ask it, you know, write me an essay on an incredibly esoteric topic that there aren't very many people in the world that know about. And it writes you this incredible thing, and you're like, oh my God, I can't believe how good this is. Like, is that really useless as training data for the next LLM?


999.269 - 1011.274 Marc Andreessen

Like, because, right, because all the signal was already in there? Or is it actually, no, that's actually new signal? And this is what I call a trillion-dollar question, which is, the answer to that question will determine whether somebody is going to make or lose a trillion dollars.


1011.678 - 1033.651 Lex Fridman

It feels like there's quite a few, like a handful of trillion-dollar questions within this space. That's one of them, synthetic data. I think George Hotz pointed out to me that you could just have an LLM say, okay, you're a patient, and in another instance of it, say, you're a doctor, and have the two talk to each other. Or maybe you could say a communist and a Nazi. Here, go.


1033.811 - 1060.751 Lex Fridman

In that conversation, you do role-playing, and you have, just like the kind of role-playing you do when you have different policies, RL policies, when you play chess, for example, and you do self-play, that kind of self-play, but in the space of conversation. Maybe that leads to this whole giant ocean of possible conversations which could not have been explored by looking at just human data.


1060.791 - 1066.994 Lex Fridman

That's a really interesting question. And you're saying, because that could 10x the power of these things.
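
The self-play setup described here can be sketched in a few lines. This is a minimal illustration, not anything shown in the episode: `generate` is a stand-in for any LLM completion call, and the role prompts ("a patient", "a doctor") and the trivial `echo_llm` stub are hypothetical placeholders. Two role-prompted instances take turns extending a shared transcript, and the transcript becomes candidate synthetic training data:

```python
def self_play(generate, role_a, role_b, opening, turns):
    """Have two role-prompted LLM instances converse; return the transcript.

    generate(role_prompt, history) -> str is any LLM completion call.
    """
    transcript = [(role_a, opening)]
    roles = [role_b, role_a]  # role_b replies to the opening first
    for i in range(turns):
        speaker = roles[i % 2]
        # Render the conversation so far as plain text for the next speaker.
        history = [f"{r}: {msg}" for r, msg in transcript]
        reply = generate(speaker, history)
        transcript.append((speaker, reply))
    return transcript

# Trivial stand-in "LLM" so the sketch runs without an API.
def echo_llm(role_prompt, history):
    return f"As {role_prompt}, responding to: {history[-1]}"

dialogue = self_play(echo_llm, "a patient", "a doctor",
                     "I have a persistent headache.", turns=4)
for role, msg in dialogue:
    print(f"{role}: {msg}")
```

In a real setup, `generate` would call an LLM API and the collected `dialogue` transcripts would be filtered and fed into the next training run.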


1067.174 - 1076.798 Marc Andreessen

Yeah. Well, and then you get into this thing also, which is like, you know, there's the part of the LLM that just basically is doing prediction based on past data. But there's also the part of the LLM where it's evolving circuitry, right?


1076.858 - 1091.543 Marc Andreessen

Inside it, it's evolving, you know, neurons, functions to be able to do math, and, you know, some people believe that over time, if you keep feeding these things enough data and enough processing cycles, they'll eventually evolve an entire internal world model, right? And they'll have like a complete understanding of physics.


1092.704 - 1098.834 Marc Andreessen

So when they have the computational capability, right, then there's for sure an opportunity to generate like fresh signal.


1099.366 - 1122.839 Lex Fridman

Well, this actually makes me wonder about the power of conversation. So if you have an LLM trained on a bunch of books that cover different economics theories, and then you have those LLMs just talk to each other, like reason, the way we kind of debate each other as humans on Twitter, in formal debates, in podcast conversations, we kind of have little kernels of wisdom here and there.


1123.159 - 1133.564 Lex Fridman

But if you can, like, 1,000x speed that up, can you actually arrive somewhere new? Like, what's the point of conversation, really?


1134.344 - 1141.747 Marc Andreessen

Well, you can tell when you're talking to somebody, you can tell sometimes you have a conversation. You're like, wow, this person does not have any original thoughts. They are basically echoing things that other people have told them.


1142.487 - 1158.416 Marc Andreessen

There's other people you got in conversation with where it's like, wow, like they have a model in their head of how the world works and it's a different model than mine. And they're saying things that I don't expect. And so I need to now understand how their model of the world differs from my model of the world. And then that's how I learned something fundamental. right underneath the words.


1158.436 - 1175.852 Lex Fridman

Well, I wonder how consistently and strongly an LLM can hold on to a worldview. You tell it to hold on to that and defend it, like, for your life, because I feel like they'll just keep converging towards each other. They'll keep convincing each other, as opposed to being stubborn assholes the way humans can.


1176.052 - 1190.698 Marc Andreessen

So you can experiment with this now. I do this for fun. So you can tell GPT-4, you know, whatever, debate X, you know, X and Y, communism and fascism or something. And it'll go for, you know, a couple of pages and then inevitably it wants the parties to agree. And so they will come to a common understanding.


1190.738 - 1209.466 Marc Andreessen

And it's very funny if they're like, if these are like emotionally inflammatory topics, because they're like somehow the machine is just, you know, it figures out a way to make them agree. But it doesn't have to be like that because you can add to the prompt. I do not want the conversation to come to agreement. In fact, I want it to get more stressful and argumentative as it goes.


1209.726 - 1218.35 Marc Andreessen

I want tension to come out. I want them to become actively hostile to each other. I want them to not trust each other, take anything at face value. And it will do that. It's happy to do that.


1218.61 - 1224.134 Lex Fridman

So it's going to start rendering misinformation about the other. Well, you can steer it.


1224.214 - 1234.541 Marc Andreessen

You can steer it. Or you could steer it and you could say, I want it to get as tense and argumentative as possible, but still not involve any misrepresentation. You could say, I want both sides to have good faith. You could say, I want both sides to not be constrained to good faith.


1235.401 - 1246.67 Marc Andreessen

In other words, you can set the parameters of the debate and it will happily execute whatever path, because for it, it's just like predicting. It's totally happy to do either one. It doesn't have a point of view. It has a default way of operating, but it's happy to operate in the other realm.
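
Setting the parameters of the debate, as described here, amounts to assembling prompt constraints. A minimal sketch: the `build_debate_prompt` helper and its exact wording are illustrative assumptions, paraphrasing the kinds of instructions mentioned in the conversation rather than quoting an actual prompt:

```python
def build_debate_prompt(side_a, side_b, converge=False, good_faith=True):
    """Assemble a debate prompt whose flags set the parameters of the argument."""
    lines = [f"Stage a debate between two positions: {side_a} versus {side_b}."]
    if converge:
        lines.append("Let the parties work toward a common understanding.")
    else:
        # The default LLM behavior is to converge; these lines override it.
        lines.append("I do not want the conversation to come to agreement.")
        lines.append("I want it to get more tense and argumentative as it goes.")
    if good_faith:
        lines.append("Both sides must argue in good faith, with no misrepresentation.")
    else:
        lines.append("The sides are not constrained to good faith.")
    return "\n".join(lines)

prompt = build_debate_prompt("Keynesian economics", "Austrian economics")
print(prompt)
```

The resulting string would be sent as a single user message, with follow-up prompts ("argue that out in more detail", "make it tenser") continuing the same conversation.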


1247.47 - 1261.002 Marc Andreessen

Um, and so, like, this is what I do now when I want to learn about a contentious issue: this is what I ask it to do. And I'll often ask it to go through five, six, seven, you know, different, you know, sort of continuous prompts: basically, okay, argue that out in more detail. Okay.


1261.262 - 1268.128 Marc Andreessen

No, this argument is becoming too polite, you know, make it more, you know, make it tenser. Um, and yeah, it's thrilled to do it. So it has the capability for sure.


1268.308 - 1293.32 Lex Fridman

How do you know what is true? So this is a very difficult thing on the internet, but it's also a difficult thing. Maybe it's a little bit easier, but I think it's still difficult. Maybe it's more difficult, I don't know, with an LLM, to know: did it just make some shit up as I'm talking to it? How do we get that right? Like, as you're investigating a difficult topic,


1294.821 - 1317.922 Lex Fridman

Because I find that LLMs are quite nuanced in a very refreshing way. Like it doesn't feel biased. Like when you read news articles and tweets and just content produced by people, they usually have this... You can tell they have a very strong perspective where they're hiding. They're not steelmanning the other side.


1317.962 - 1336.515 Lex Fridman

They're hiding important information or they're fabricating information in order to make their argument stronger. It's just that feeling. Maybe it's a suspicion. Maybe it's mistrust. With LLMs, it feels like none of that is there. It's just kind of like, here's what we know. But you don't know if some of those things are kind of just straight up made up.


1337.847 - 1352.479 Marc Andreessen

Yeah, so several layers to the question. So one is one of the things that an LLM is good at is actually de-biasing. And so you can feed it a news article and you can tell it strip out the bias. Yeah, that's nice, right? And it actually does it. Like it actually knows how to do that because it knows how to do, among other things, it actually knows how to do sentiment analysis.
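
The de-biasing use described here is, in practice, just a prompt wrapped around the article text. A minimal sketch, where the system and instruction wording are assumptions for illustration, not quotes from the episode, and `debias_messages` is a hypothetical helper name:

```python
def debias_messages(article_text):
    """Build a chat-style message list asking an LLM to strip bias from an article."""
    instruction = (
        "Rewrite the following news article with the bias stripped out: "
        "remove emotionally loaded language and one-sided framing, "
        "and keep every factual claim intact."
    )
    return [
        {"role": "system", "content": "You are a careful, neutral editor."},
        {"role": "user", "content": f"{instruction}\n\n{article_text}"},
    ]

messages = debias_messages("Outrageous new policy shocks the nation...")
```

The resulting `messages` list is in the role/content shape that chat-completion APIs generally accept; the model's reply would be the de-biased rewrite.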


1352.499 - 1368.971 Marc Andreessen

And so it knows how to pull out the emotionality. And so that's one of the things you can do. It's very suggestive of the sense here that there's real potential in this issue. You know, I would say, look, the second thing is there's this issue of hallucination, right? And there's a long conversation that we could have about that.


1368.992 - 1373.375 Lex Fridman

Hallucination is coming up with things that are totally not true, but sound true.


1373.769 - 1381.316 Marc Andreessen

Yeah, so it's basically, well, so it's sort of, hallucination is what we call it when we don't like it. Creativity is what we call it when we do like it, right? And, you know.


1381.616 - 1382.017 Lex Fridman

Brilliant.


1382.157 - 1392.727 Marc Andreessen

Right, and so when the engineers talk about it, they're like, this is terrible, it's hallucinating, right? If you have artistic inclinations, you're like, oh my God, we've invented creative machines for the first time in human history. This is amazing.


1393.587 - 1395.749 Lex Fridman

Or, you know, bullshitters.

1396.23 - 1398.312 Marc Andreessen

Well, bullshitters, but also.

1398.492 - 1399.533 Lex Fridman

In the good sense of that word.

1399.875 - 1410.026 Marc Andreessen

There are shades of gray, though. It's interesting. So we had this conversation where, you know, we're looking at my firm at AI and lots of domains, and one of them is the legal domain. So we had this conversation with this big law firm about how they're thinking about using this stuff.

1410.066 - 1426.244 Marc Andreessen

And we went in with the assumption that an LLM that was going to be used in the legal industry would have to be 100% truthful, verified, you know. There's this case where this lawyer apparently submitted a GPT-generated brief and it had fake legal case citations in it, and the guy is going to get his law license stripped or something.

1427.426 - 1444.14 Marc Andreessen

We just assumed it's like, obviously, they're going to want the super literal one that never makes anything up, not the creative one. But actually they said, what the law firm basically said is, yeah, that's true at like the level of individual briefs, but they said when you're actually trying to figure out like legal arguments, right? Like you actually want to be creative, right?

1444.561 - 1459.04 Marc Andreessen

You don't, again, there's creativity and then there's like making stuff up. What's the line? You want it to explore different hypotheses. You want to do the legal version of improv or something like that, where you want to float different theories of the case and different possible arguments for the judge and different possible arguments for the jury.

1459.681 - 1475.387 Marc Andreessen

By the way, different routes through the history of all the case law. And so they said, actually, for a lot of what we want to use it for, we actually want it in creative mode. And then basically, we just assume that we're going to have to cross-check all the specific citations. And so I think there's going to be more shades of gray in here than people think.

1476.267 - 1496.397 Marc Andreessen

And then I just add to that, you know, another one of these trillion dollar kind of questions is ultimately, you know, sort of the verification thing. And so, you know, will LLMs be evolved from here to be able to do their own factual verification? Will you have sort of add-on functionality like Wolfram Alpha, right? Where, you know, and other plugins where that's the way you do the verification.

1497.038 - 1510.323 Marc Andreessen

You know, another, by the way, another idea is you might have a community of LLMs on any, you know, so for example, you might have the creative LLM and then you might have the literal LLM fact check it. Right. And so there's a variety of different technical approaches that are being applied to solve the hallucination problem.
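The "community of LLMs" idea (a creative model drafting, a literal model fact-checking) can be sketched as plain orchestration logic. Everything below is an assumed shape, not a real system: both models are injected as ordinary functions, and the `checker` contract (returning a list of claims it could not verify) is an invented convention for illustration.

```python
# Hypothetical sketch of the creative-LLM-plus-literal-LLM pattern above.
# `creative` and `checker` are injected as plain functions, so this shows
# only the orchestration; no specific LLM API is assumed.

from typing import Callable, Dict, List

def draft_and_verify(
    question: str,
    creative: Callable[[str], str],       # prompt -> free-wheeling draft
    checker: Callable[[str], List[str]],  # draft -> claims it can't verify
) -> Dict[str, object]:
    """Generate with the creative model, then flag unverified claims."""
    draft = creative(question)
    flagged = checker(draft)
    return {
        "draft": draft,
        "flagged_claims": flagged,
        "needs_review": len(flagged) > 0,
    }
```

The same skeleton accommodates the other verification routes mentioned here (a Wolfram Alpha plugin, a retrieval tool) by swapping in a different `checker`.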

1510.743 - 1519.666 Marc Andreessen

You know, some people like Yann LeCun argue that this is inherently an unsolvable problem. But most of the people working in the space, I think, think that there's a number of practical ways to kind of corral this in a little bit.

1519.966 - 1545.824 Lex Fridman

Yeah. If you were to tell me about Wikipedia before Wikipedia was created, I would have laughed at the possibility of something like that being possible. Just a handful of folks can organize, write, and moderate, in a mostly unbiased way, the entirety of human knowledge. So if there's something like the approach that Wikipedia took possible for LLMs, that's really exciting.

1545.844 - 1546.765 Lex Fridman

Do you think that's possible?

1546.885 - 1563.44 Marc Andreessen

And in fact, Wikipedia today is still not deterministically correct, right? So you cannot take to the bank, right, every single thing on every single page, but it is probabilistically correct, right? And specifically the way I describe Wikipedia to people, it is more likely that Wikipedia is right than any other source you're going to find.

1563.76 - 1564.06 Lex Fridman

Yeah.

1564.32 - 1581.831 Marc Andreessen

It's this old question, right? Of like, okay, like, are we looking for perfection? Are we looking for something that asymptotically approaches perfection? Are we looking for something that's just better than the alternatives? And Wikipedia, right, has exactly your point, has proven to be like overwhelmingly better than people thought. And I think that's where this ends.

1581.851 - 1596.838 Marc Andreessen

And then underneath all this is the fundamental question of where you started, which is, okay, what is truth? How do we get to truth? How do we know what truth is? And we live in an era in which an awful lot of people are very confident that they know what the truth is. And I don't really buy into that.

1596.978 - 1603.821 Marc Andreessen

And I think the history of the last 2,000 years or 4,000 years of human civilization is that actually getting to the truth is a very difficult thing to do.

1603.841 - 1609.443 Lex Fridman

Are we getting closer? If we look at the entirety of the arc of human history, are we getting closer to the truth? I don't know.

1610.843 - 1634.044 Lex Fridman

Okay, is it possible, is it possible that we're getting very far away from the truth because of the internet, because of how rapidly you can create narratives and just as the entirety of a society just move like crowds in a hysterical way along those narratives that don't have a necessary grounding in whatever the truth is?

1634.324 - 1641.79 Marc Andreessen

Sure, but like, you know, we came up with communism before the internet somehow. which I would say had rather larger issues than anything we're dealing with today.

1641.81 - 1644.932 Lex Fridman

In the way it was implemented, it had issues.

1645.512 - 1651.376 Marc Andreessen

And its theoretical structure, it had real issues. It had a very deep fundamental misunderstanding of human nature and economics.

1651.796 - 1655.778 Lex Fridman

Yeah, but those folks sure were very confident it was the right way.

1655.798 - 1676.076 Marc Andreessen

They were extremely confident. And my point is they were very confident 3,900 years into what you would presume to be evolution towards the truth. Yeah. And so my assessment is... My assessment is, number one, there's no need for the Hegelian dialectic to actually converge towards the truth. Like, apparently not.

1676.096 - 1685.287 Lex Fridman

Yeah, so yeah, why are we so obsessed with there being one truth? Is it possible there's just going to be multiple truths, like little communities that believe certain things? Yeah.

1687.109 - 1704.059 Marc Andreessen

Number one, I think it's just really difficult. Historically, who gets to decide what the truth is? It's either the king or the priest. And so we don't live in an era anymore of kings or priests dictating it to us. And so we're kind of on our own. And so my typical thing is we just need a huge amount of humility.

1704.479 - 1720.158 Marc Andreessen

And we need to be very suspicious of people who claim that they have the capital-T truth. And then we need to have... Look, the good news is the Enlightenment has bequeathed us with a set of techniques to be able to presumably get closer to truth through the scientific method and rationality and observation and experimentation and hypothesis.

1720.298 - 1724.485 Marc Andreessen

And, you know, we need to continue to embrace those even when they give us answers we don't like.

1725.422 - 1751.705 Lex Fridman

Sure, but the internet and technology have enabled us to generate a large amount of content, of data, that sort of damages the hope laden within the scientific process. Because if you just have a bunch of people saying facts on the internet, and some of them are going to be LLMs,

1753.646 - 1776.062 Marc Andreessen

how is anything testable at all, especially anything that involves, like, human nature, things like this? It's not physics. Here's a question a friend of mine just asked me on this topic. So suppose you had LLMs, the equivalent of GPT-4, even 5, 6, 7, 8. Suppose you had them in the 1600s. Yeah. And Galileo comes up for trial. Yeah. Right. And you ask the LLM, like, is Galileo right? Yeah. Like, what does it answer? Right.

1776.322 - 1792.095 Marc Andreessen

And one theory is it answers no, that he's wrong, because the overwhelming majority of human thought up until that point was that he was wrong, and so therefore that's what's in the training data. Yeah. Another way of thinking about it is, well, this sufficiently advanced LLM will have evolved the ability to actually check the math. Right.

1792.395 - 1811.389 Marc Andreessen

And it'll actually say, actually, no, you know, you may not want to hear it, but he's right. Now, if, you know, the church at that time was, you know, working on the LLM, they would have given it human feedback to prohibit it from answering that question. And so I like to take it out of our current context because that makes it very clear. Those same questions apply today.

1812.129 - 1820.815 Marc Andreessen

This is exactly the point of a huge amount of the human feedback training that's actually happening with these LLMs today. This is a huge debate that's happening about whether open source AI should be legal.

1821.636 - 1836.029 Lex Fridman

Well, the actual mechanism of doing the RL with human feedback... It seems like such a fundamental and fascinating question. How do you select the humans? Exactly. How do you select the humans?

1836.238 - 1860.664 Marc Andreessen

AI alignment, which everybody is like, oh, that sounds great. Alignment with what? Human values. Whose human values? Whose human values? And we're in this mode of social and popular discourse. What do you think of when you read a story in the press right now and they say, XYZ made a baseless claim about some topic? And there's one group of people who are like, aha, they're doing fact-checking.

1861.224 - 1879.498 Marc Andreessen

There's another group of people that are like, every time the press says that, it's now a tick and that means that they're lying. We're in this social context where the level to which a lot of people in positions of power have become very certain that they're in a position to determine the truth for the entire population is like...

1880.258 - 1899.726 Marc Andreessen

There's, like, some bubble that has formed around that idea. And it at least flies completely in the face of everything I was ever trained about science and about reason, and strikes me as, you know, deeply offensive and incorrect. What would you say about the state of journalism, just on that topic, today? Are we in a temporary kind of...

1903.289 - 1914.279 Lex Fridman

Are we experiencing a temporary problem in terms of the incentives, in terms of the business model, all that kind of stuff, or is this like a decline of traditional journalism as we know it?

1914.299 - 1925.949 Marc Andreessen

You have to always think about the counterfactual in these things, which is like, okay, because these questions, right, this question heads towards, it's like, okay, the impact of social media and the undermining of truth and all this, but then you want to ask the question of like, okay, what if we had had the modern media environment

1926.42 - 1935.226 Marc Andreessen

including cable news and including social media and Twitter and everything else in 1939 or 1941, right? Or 1910 or 1865 or 1850 or 1776, right?

1935.566 - 1947.213 Lex Fridman

And like, I think- You just introduced like five thought experiments at once and broke my head. But yes, there's a lot of interesting years.

1947.233 - 1969.486 Marc Andreessen

I'll just take a simple example. Like, how would President Kennedy have been interpreted with what we know now about all the things Kennedy was up to? Like, how would he have been experienced by the body politic with a social media context, right? Like, how would LBJ have been experienced? By the way, how would, you know... I mean, FDR, like the New Deal, the Great Depression.

1969.506 - 1974.669 Lex Fridman

I wonder what Twitter would think about Churchill and Hitler and Stalin.

1975.257 - 1989.382 Marc Andreessen

You know, I mean, look, to this day, there are lots of very interesting real questions around, like, how America, you know, basically got involved in World War II, and who did what when, and the operations of British intelligence on American soil, and did FDR this, that, Pearl Harbor, you know. Yeah.

1989.442 - 2007.247 Marc Andreessen

Woodrow Wilson, you know, his candidacy was run on an anti-war platform, you know, not getting involved in World War I. Somehow that switched, you know, like... And I'm not even making a value judgment on any of these things. I'm just saying the way that our ancestors experienced reality was, of course, mediated through centralized top-down control at that point.

2008.087 - 2023.432 Marc Andreessen

If you ran those realities again with the media environment we have today, the reality would be experienced very, very differently. And then, of course, that intermediation would cause the feedback loops to change, and then reality would obviously play out. Do you think it would be very different? Yeah, it has to be.

2023.592 - 2033.422 Marc Andreessen

It has to be just because it's all so, I mean, just look at what's happening today. I mean, just, I mean, the most obvious thing is just the collapse. And here's another opportunity to argue that this is not the internet causing this, by the way.

2034.143 - 2044.694 Marc Andreessen

Here's a big thing happening today, which is Gallup does this thing every year where they poll for trust in institutions in America, and they do it across all the different... everything from the military to clergy and big business and the media and so forth. Right.

2045.634 - 2061.885 Marc Andreessen

And basically, there's been a systemic collapse in trust in institutions in the US, almost without exception, basically, since essentially the early 1970s. There's two ways of looking at that, which is, oh my God, we've lost this old world in which we could trust institutions, and that was so much better, because that should be the way the world runs.

2061.905 - 2066.388 Marc Andreessen

The other way of looking at it is we just know a lot more now, and the great mystery is why those numbers aren't all zero.

2067.249 - 2067.509 Lex Fridman

Yeah.

2068.39 - 2071.452 Marc Andreessen

Right? Because now we know so much about how these things operate, and they're not that impressive.

2072.667 - 2076.269 Lex Fridman

And also why we don't have better institutions and better leaders then.

2076.649 - 2076.809 Marc Andreessen

Yeah.

2076.889 - 2095.976 Marc Andreessen

And so this goes to the thing, which is like, okay, had we had the media environment that we've had between the 1970s and today, if we had that in the 30s and 40s or 1900s, 1910s, I think there's no question reality would turn out different if only because everybody would have known to not trust the institutions, which would have changed their level of credibility, their ability to control circumstances.

2096.036 - 2114.97 Marc Andreessen

Therefore, the circumstances would have had to change. Yeah. And it would have been a feedback loop process. In other words, your experience of reality changes reality, and then reality changes your experience of reality. It's a two-way feedback process, and media is the intermediating force between that. So change the media environment, change reality.

2115.85 - 2138.907 Marc Andreessen

And so just as a consequence, I think it's just really hard to say, oh, things worked a certain way then and they work a different way now. And then therefore, people were smarter then or better then or, by the way, dumber then or not as capable then. We make all these really light and casual comparisons of ourselves to previous generations of people. We draw judgments all the time.

2138.947 - 2147.311 Marc Andreessen

And I just think it's really hard to do any of that because if we... If we put ourselves in their shoes with the media that they had at that time, I think we probably most likely would have been just like them.

2148.471 - 2173.96 Lex Fridman

Don't you think that our perception and understanding of reality would be more and more mediated through large language models now? Yeah. So you said media before. Isn't the LLM going to be the new, what is it, mainstream media, MSM? It'll be LLM. Yeah. That would be the source of, I'm sure there's a way to kind of rapidly fine tune, like making LLMs real time.

2174.02 - 2180.944 Lex Fridman

I'm sure there's probably a research problem where you can do just rapid fine-tuning on new events, something like this.

2181.605 - 2198.553 Marc Andreessen

Well, even just the whole concept of the chat UI might not be the... like, the chat UI is just the first whack at this, and maybe that's the dominant thing, but look, maybe, we don't know yet. Maybe the experience most people have with LLMs is just a continuous feed. You know, maybe it's more of a passive feed and you just are getting a constant, like, running commentary on everything happening in your life.

2198.593 - 2200.813 Marc Andreessen

And it's just helping you kind of interpret and understand everything.

2201.394 - 2217.337 Lex Fridman

Also really more deeply integrated into your life. Not just, like, oh, intellectual philosophical thoughts, but, like, literally how to make a coffee, where to go for lunch, the weather, you know, dating, all this kind of stuff.

2217.357 - 2218.417 Marc Andreessen

What to say in a job interview. Yeah.

2218.437 - 2221.278 Lex Fridman

What to say. What to say. Next sentence.

2221.338 - 2226.78 Marc Andreessen

Yeah. Next sentence. Yeah. At that level. Yeah. I mean, yes. So technically, now, whether we want that or not is an open question, right?

2226.8 - 2245.791 Lex Fridman

Boy, I would kill for a pop-up. A pop-up right now: the estimated engagement is decreasing. For market reasons, there's a controversy section for a Wikipedia page, in 1993 something happened, or something like this. Bring it up. That will drive engagement up. Anyway. Yeah, that's right.

2245.991 - 2262.902 Marc Andreessen

I mean, look, this gets to this whole thing of, like... so, you know, the chat interface has this whole concept of prompt engineering, right? So it's good for prompts. Well, it turns out one of the things that LLMs are a lot of times really good at is writing prompts, right? And so, like, what if you just outsourced... and by the way, you could run this experiment today. You could hook this up to do this today.

2262.922 - 2274.009 Marc Andreessen

The latency is not good enough to do it real time in a conversation, but you could run this experiment and you just say, look, every 20 seconds, you could just say, you know, you know, tell me what the optimal prompt is and then ask yourself that question to give me the result.
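That "tell me what the optimal prompt is, then ask yourself that question" loop is easy to sketch. This is a toy illustration of the idea only: `call_llm` is a placeholder for any prompt-to-completion function, and the meta-prompt wording is an assumption.

```python
# Sketch of the self-prompting loop described above: first ask the model
# for the optimal prompt, then run that prompt. `call_llm` stands in for
# any prompt -> completion function; no specific API is assumed.

def self_prompt(goal: str, call_llm) -> str:
    meta = (
        "Write the single most effective prompt for accomplishing this "
        f"goal, and output only the prompt itself: {goal}"
    )
    optimal_prompt = call_llm(meta)   # step 1: the model writes the prompt
    return call_llm(optimal_prompt)   # step 2: the model answers its own prompt
```

Run on a 20-second timer, as suggested above, this is exactly two model calls per tick, which is why latency, not logic, is the bottleneck.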

2274.849 - 2289.796 Marc Andreessen

And then, exactly to your point, these systems are going to have the ability to be learned and updated essentially in real time. And so you'll be able to have a pendant or your phone or watch or whatever. It'll have a microphone on it. It'll listen to your conversations. It'll have a feed of everything else happening in the world.

2289.816 - 2303.02 Marc Andreessen

And then it'll be, you know, sort of retraining, prompting or retraining itself on the fly. And so the scenario you described is actually a completely doable scenario. Now, the hard question on these is always, okay, since that's possible, are people going to want that? Like what's the form of experience?

2304.201 - 2315.749 Marc Andreessen

You know, that, that we, we won't know until we try it, but I don't think it's possible yet to predict the form of AI in our lives. Therefore, it's not possible to predict the way in which it will intermediate our experience with reality yet.

2316.129 - 2321.112 Lex Fridman

Yeah. But it feels like there's going to be a killer app that, there's probably a mad scramble right now.

2321.132 - 2343.54 Lex Fridman

It's OpenAI and Microsoft and Google and Meta, and then startups and smaller companies, figuring out what is the killer app. Because it feels like it's possible to build a ChatGPT type of thing that's 10x more compelling, using already the LLMs we have, using even the open source LLMs, Llama and the different variants.

2345.538 - 2355.64 Lex Fridman

So you're investing in a lot of companies and you're paying attention. Who do you think is gonna win this? Who's going to be the next PageRank inventor?

2356.7 - 2357.46 Marc Andreessen

Trillion dollar question.

2357.86 - 2359.681 Lex Fridman

Another one. We have a few of those today.

2359.701 - 2376.066 Marc Andreessen

There's a bunch of those. So look, there's a really big question today. Sitting here today is a really big question about the big models versus the small models. That's related directly to the big question of proprietary versus open. Then there's this big question of where is the training data going to go? Are we topping out on the training data or not?

2376.186 - 2388.132 Marc Andreessen

And then are we going to be able to synthesize training data? Yeah. And then there's a huge pile of questions around regulation and, you know, what's actually going to be legal. And so, when we think about it, we dovetail kind of all those questions together.

2388.993 - 2406.24 Marc Andreessen

You can paint a picture of the world where there's two or three God models that are just at like staggering scale and they're just better at everything. and they will be owned by a small set of companies, and they will basically achieve regulatory capture over the government, and they'll have competitive barriers that will prevent other people from competing with them.

2406.28 - 2425.785 Marc Andreessen

And so there will be, just like there's, whatever, three big banks, or by the way, three big search companies, or I guess two now, it'll centralize like that. You can paint another very different picture that says, no, actually the opposite of that's gonna happen. This is gonna basically... this is the new gold rush, the new alchemy.

2426.546 - 2441.458 Marc Andreessen

This is the big bang for this whole new area of science and technology. And so therefore, you're going to have every smart 14-year-old on the planet building open source and figuring out ways to optimize these things. And then we're just going to get overwhelmingly better at generating training data.

2441.578 - 2454.568 Marc Andreessen

We're going to bring in blockchain networks to have an economic incentive to generate decentralized training data and so forth and so on. And then basically, we're going to live in a world of open source. And there's going to be a billion LLMs of every size, scale, shape, and description.

2454.608 - 2464.336 Marc Andreessen

And there might be a few big ones that are the super genius ones, but mostly what we'll experience is open source. And that's more like a world of what we have today with Linux and the web. So.

2465.297 - 2482.371 Lex Fridman

Okay, but you painted these two worlds, but there's also variations of those worlds. Because you said regulatory capture, and it's possible to have these tech giants without regulatory capture, which is something you're also calling for: saying it's okay to have big companies working on this stuff, as long as they don't achieve regulatory capture.

2483.451 - 2510.541 Lex Fridman

But I have the sense that there's just going to be a new startup that's going to basically be the PageRank inventor, which becomes the new tech giant. I don't know, I would love to hear your opinion on whether Google, Meta, and Microsoft, as gigantic companies, are able to pivot so hard to create new products.

2511.442 - 2522.79 Lex Fridman

Like some of it is just even hiring people or having a corporate structure that allows for the crazy young kids to come in and just create something totally new. Do you think it's possible or do you think it'll come from a startup?

2523.03 - 2540.384 Marc Andreessen

Yeah, it is this always big question, which is you get this feeling. I hear about this a lot from founder CEOs where it's like, wow, we have 50,000 people. It's now harder to do new things than it was when we had 50 people. Like what has happened? So that's a recurring phenomenon. By the way, that's one of the reasons why there's always startups and why there's venture capital.

2540.745 - 2556.4 Marc Andreessen

It's just, that's like a timeless phenomenon kind of thing. So that's one observation. On PageRank, we can talk about that, but on PageRank specifically, there actually is a PageRank. So there is a PageRank already in the field, and it's the transformer, right? So the big breakthrough was the transformer.

2556.74 - 2574.21 Marc Andreessen

And the transformer was invented in 2017 at Google. And this is actually, like, a really interesting question, because it's like, okay, the transformer... why does OpenAI even exist? The transformer was invented at Google. Why didn't Google? I asked a guy I know who was senior at Google Brain kind of when this was happening.

2574.23 - 2588.555 Marc Andreessen

And I said, if Google had just gone flat out to the wall and just said, look, we're going to launch the equivalent of GPT-4 as fast as we can... I said, when could we have had it? And he said, 2019. Yeah. They could have just done a two-year sprint with the transformer and built it, because they already had the compute at scale.

2588.575 - 2605.636 Marc Andreessen

They already had all the training data and they could have just done it. There's a variety of reasons they didn't do it. This is like a classic big company thing. IBM invented the relational database in the 1970s, let it sit on the shelf as a paper. Larry Ellison picked it up and built Oracle. Xerox PARC invented the interactive computer. They let it sit on the shelf.

2605.957 - 2623.485 Marc Andreessen

Steve Jobs came and turned it into the Macintosh. And so there is this pattern. Now, having said that, sitting here today, Google's in the game, right? So Google, maybe they let a four-year gap go there that they maybe shouldn't have, but they're in the game. And so now they're committed. They've done this merger. They're bringing in Demis. They've got this merger with DeepMind.

2624.245 - 2636.572 Marc Andreessen

They're piling in resources. There are rumors that they're building an incredible super LLM way beyond what we even have today. And they've got, you know, unlimited resources and a huge, you know, they've been challenged with their honor.

2636.592 - 2664.935 Lex Fridman

Yeah, I had a chance to hang out with Sundar Pichai a couple of days ago and we took this walk and there's this giant new building where there's going to be a lot of AI work being done. And it's kind of this ominous feeling of... like the fight is on. There's this beautiful Silicon Valley nature, like birds are chirping, and this giant building, and it's like the beast has been awakened.

2665.635 - 2680.319 Lex Fridman

And then all the big companies are waking up to this. They have the compute, but also the little guys have... it feels like they have all the tools to create the killer product that, and then there's also tools to scale.

2680.359 - 2700.255 Lex Fridman

If you have a good idea, if you have the PageRank idea... So there's several things that are PageRank: there's PageRank the algorithm and the idea, and there's, like, the implementation of it. And I feel like a killer product is not just the idea, like the transformer, it's the implementation, something really compelling about it. Like you just can't look away. Something like,

2701.255 - 2715.625 Lex Fridman

The algorithm behind TikTok versus TikTok itself, like the actual experience of TikTok, you can't look away. It feels like somebody's going to come up with that. And it could be Google, but it feels like it's just easier and faster to do for a startup.

2716.468 - 2732.98 Marc Andreessen

Yeah, so the startup, the huge advantage that startups have is they just, there's no sacred cows. There's no historical legacy to protect. There's no need to reconcile your new plan with existing strategy. There's no communication overhead. There's no, you know, big companies are big companies. They've got pre-meetings, planning for the meeting. Then they have the post-meeting and the recap.

2733 - 2741.366 Marc Andreessen

Then they have the presentation to the board. Then they have the next round of meetings. Yeah, lots of meetings. And in that elapsed time, the startup launches its product, right? So there's a timeless, right?

2741.726 - 2741.907 Lex Fridman

Yeah.

2742.067 - 2753.574 Marc Andreessen

So there's a timeless thing there. Now- Yeah. What the startups don't have is everything else, right? So startups, they don't have a brand, they don't have customer relationships, they've got no distribution, they've got no scale. I mean, sitting here today, they can't even get GPUs, right? Like there's like a GPU shortage.

2754.494 - 2757.716 Marc Andreessen

Startups are literally stalled out right now because they can't get chips, which is like super weird.

2758.277 - 2759.718 Lex Fridman

Yeah, they got the cloud.

2760.438 - 2776.609 Marc Andreessen

Yeah, but the clouds run out of chips, right? And then to the extent the clouds have chips, they allocate them to the big customers, not the small customers, right? And so the small companies lack everything other than the ability to just do something new. Yeah. Right. And this is the timeless race and battle.

2776.629 - 2790.201 Marc Andreessen

And this is kind of the point I tried to make in the essay, which is like both sides of this are good. Like, it's really good to have like highly scaled tech companies that can do things that are like at staggering levels of sophistication. It's really good to have startups that can launch brand new ideas. They ought to be able to both do that and compete.

2790.321 - 2804.778 Marc Andreessen

Neither one ought to be subsidized or protected from the other. Like, that's to me just very clearly the idealized world. It is the world we've been in for AI up until now. And then of course there are people trying to shut that down, but my hope is that, you know, the best outcome clearly will be if that continues.

2805.12 - 2825.143 Lex Fridman

We'll talk about that a little bit, but I'd love to linger on some of the ways this is going to change the internet. So I don't know if you remember, but there's a thing called Mosaic and there's a thing called Netscape Navigator. So you were there in the beginning. What about the interface to the internet? How do you think the browser changes? And who gets to own the browser?

2825.163 - 2843.438 Lex Fridman

We got to see some very interesting browsers. Firefox, I mean, all the variants of Microsoft, Internet Explorer, Edge, and now Chrome. The actual, I mean, it seems like a dumb question to ask, but do you think we'll still have the web browser?

2844.827 - 2860.255 Marc Andreessen

So I, uh, I have an eight-year-old and he's super into Minecraft and learning to code and doing all this stuff. So of course I was very proud, I could bring sort of fire down from the mountain to my kid, and I brought him ChatGPT and I hooked him up on his laptop. And I was like, you know, this is the thing that's going to answer all your questions.

2860.295 - 2879.219 Marc Andreessen

And he's like, okay. And I'm like, but it's going to answer all your questions. And he's like, well, of course, like it's a computer. Of course it answers all your questions. Like what else would a computer be good for, Dad? Never impressed. Not impressed in the least. Two weeks pass and he has some question. And I say, well, have you asked ChatGPT? And he's like, dad, Bing is better.

2881.372 - 2899.004 Marc Andreessen

And why is Bing better? Because it's built into the browser. Because he's like, look, I have the Microsoft Edge browser and it's got Bing right here. And then he doesn't know this yet, but one of the things you can do with Bing and Edge is there's a setting where you can use it to basically talk to any webpage, because it's sitting right there next to the browser.

2899.064 - 2915.983 Marc Andreessen

And by the way, it includes PDF documents. And so the way they've implemented it in Edge with Bing is you can load a PDF and then you can ask it questions, which is the thing you can't currently do in just ChatGPT. So they're gonna push the melding, I think that's great, and see if there's a combination thing there.

2916.583 - 2939.097 Marc Andreessen

Google's rolling out this thing, the magic button, which they put in Google Docs, right? And so you go to Google Docs and you create a new document, and instead of, you know, starting to type, you just press the button and it starts to generate content for you, right? Like, is that the way that it'll work? Um, is it going to be a speech UI, where you're just going to have an earpiece and talk to it all day long? You know, is it going to be a

2939.617 - 2954.288 Marc Andreessen

Like, these are all exactly the kind of thing that I don't think is possible to forecast. I think what we need to do is run all those experiments. And so one outcome is we come out of this with like a super browser that has AI built in, that's just like amazing.

2954.348 - 2965.457 Marc Andreessen

Look, there's a real possibility here that the whole idea of a screen and windows and all this stuff just goes away. Because why do you need that if you just have a thing that's just telling you whatever you need to know?

2965.917 - 2984.452 Lex Fridman

And also, there's apps that you can use. You don't really use them, being a Linux guy and Windows guy. There's one window, the browser, with which you can interact with the internet. But on the phone, you can also have apps. So I can interact with Twitter through the app or through the web browser.

2985.673 - 2993.575 Lex Fridman

And that seems like an obvious distinction, but why have the web browser in that case if one of the apps starts becoming the everything app?

2993.895 - 2994.636 Lex Fridman

Yeah, that's right.

2994.656 - 3009.94 Lex Fridman

What Elon's trying to do with Twitter, but there could be others. There could be like a Bing app, there could be a Google app that doesn't really just do search, but does what I guess AOL did back in the day or something, where it's all right there and it changes everything.

3013.609 - 3033.039 Lex Fridman

It changes the nature of the internet because where the content is hosted, who owns the data, who owns the content, what is the kind of content you create, how do you make money by creating content, who are the content creators, all of that. Or it could just keep being the same, which is like,

3033.619 - 3056.476 Lex Fridman

where just the nature of the web page changes and the nature of content changes, but there will still be a web browser, because a web browser is a pretty sexy product. It just seems to work, because you have an interface, a window into the world, and then the world can be anything you want. And as the world evolves, there could be different programming languages, it can be animated, maybe it's three-dimensional, and so on. Yeah, it's interesting. Do you think we'll still have the web browser?

3057.006 - 3072.121 Marc Andreessen

Every medium becomes the content for the next one. So the AI will be able to give you a browser whenever you want. Oh, interesting. Yeah. Another way to think about it is maybe what the browser is. Maybe it's just the escape hatch, right? Which is maybe kind of what it is today.

3073.082 - 3084.386 Marc Andreessen

Which is like most of what you do is like inside a social network or inside a search engine or inside somebody's app or inside some controlled experience. But then every once in a while, there's something where you actually want to jailbreak. You want to actually get free.

3084.406 - 3091.489 Lex Fridman

The web browser is the F you to the man. That's the free internet. Yeah. Back the way it was in the 90s.

3091.889 - 3099.254 Marc Andreessen

So here's something I'm proud of that nobody really talks about, which is the web, the browser, the web servers, they're all still backward compatible all the way back to like 1992. Right.

3099.334 - 3115.164 Marc Andreessen

So like, you know, the big breakthrough of the web early on was it made it really easy to read, but it also made it really easy to write, made it really easy to publish. And we literally made it so easy to publish. We made it not only so it was easy to publish content, it was actually also easy to write a web server.

3116.245 - 3132.615 Marc Andreessen

And you could literally write a web server in four lines of Perl code. And you could start publishing content on it. And you could set whatever rules you want for the content, whatever censorship, no censorship, whatever you want. You could just do that. As long as you had an IP address, you could do that. That still works. That still works exactly as I just described.
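As a rough modern illustration of how small a web server can still be (a sketch using Python's standard library rather than the Perl of the early 90s; the port number and file-serving handler are just defaults chosen for the example):

```python
# A minimal file-serving web server, in the spirit of the few-lines
# servers of the early web: it publishes whatever files sit in the
# current directory to anyone who can reach your IP address.
from http.server import HTTPServer, SimpleHTTPRequestHandler

def make_server(port: int = 8000) -> HTTPServer:
    # Bind to all interfaces on the given port; port 0 asks the OS
    # to pick any free port.
    return HTTPServer(("", port), SimpleHTTPRequestHandler)

if __name__ == "__main__":
    make_server().serve_forever()  # browse to http://localhost:8000/
```

And just as described above, the server itself imposes no rules: whatever files you put in that directory are published, with whatever policy you choose.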

3133.236 - 3148.706 Marc Andreessen

So this is part of my reaction to all of this censorship pressure and all these issues around control and all this stuff, which is like maybe we need to get back a little bit more to the Wild West. The Wild West is still out there. Now, they will try to chase you down.

3148.726 - 3164.096 Marc Andreessen

People who want to censor will try to take away your domain name and they'll try to take away your payments account and so forth if they really don't like what you're saying. But nevertheless, unless they literally are intercepting you at the ISP level, you can still put up a thing. I don't know. I think that's important to preserve.

3164.677 - 3181.389 Marc Andreessen

One is just a freedom argument, but the other is a creativity argument. Which is you want to have the escape hatch so that the kid with the idea is able to realize the idea. Because to your point on PageRank, you actually don't know what the next big idea is. Nobody called Larry Page and told him to develop PageRank. He came up with that on his own.

3181.609 - 3189.075 Marc Andreessen

And you want to always, I think, leave the escape hatch for the next kid or the next Stanford grad student to have the breakthrough idea and be able to get it up and running before anybody notices.

3190.711 - 3207.884 Lex Fridman

You and I are both fans of history, so let's step back. We've been talking about the future. Let's step back for a bit and look at the 90s. You created Mosaic Web Browser, the first widely used web browser. Tell the story of that. And how did it evolve into Netscape Navigator? This is the early days.

3209.345 - 3212.427 Marc Andreessen

Full story. You were born. I was born.

3212.548 - 3217.291 Lex Fridman

A small child. Actually, yeah, let's go there.

3217.391 - 3219.473 Lex Fridman

When did you first fall in love with computers?

3219.793 - 3236.085 Marc Andreessen

Oh, so I hit the generational jackpot and I hit the Gen X kind of point perfectly, as it turns out. So I was born in 1971. There's this great website called wtfhappenedin1971.com, which is basically, 1971 is when everything started to go to hell. And I was of course born in 1971, so I like to think that I had something to do with that.

3236.165 - 3237.987 Lex Fridman

Did you make it on the website?

3238.087 - 3250.982 Marc Andreessen

I don't think I made it on the website, but, you know, hopefully somebody will add it. Maybe I contributed to some of the trends. Every line on that website goes like that, right? So it's all a picture of disaster.

3251.023 - 3268.374 Marc Andreessen

But there was this moment in time where, you know, the Apple II hit in like 1978 and then the IBM PC hit in 82. So I was like, you know, 11 when the PC came out. And so I just kind of hit that perfectly. And then that was the first moment in time when regular people could spend a few hundred dollars and get a computer, right?

3268.394 - 3284.911 Marc Andreessen

And so that resonated right out of the gate. And then the other part of the story is, you know, I was using an Apple II. I used a bunch of them, but I was using an Apple II. And, of course, it said on the back of every Apple II and every Mac, you know, designed in Cupertino, California. And I was like, wow, Cupertino must be, like, the shining city on the hill.

3293.981 - 3294.081 Lex Fridman

Yeah.

3294.241 - 3311.308 Marc Andreessen

and low rise apartment buildings. So the aesthetics were a little disappointing, but you know, it was the vector, right, of the creation of a lot of this stuff. So then basically, so part of my story is just the luck of having been born at the right time and getting exposed to PCs then. The other part is,

3312.248 - 3326.191 Marc Andreessen

The other part is, when Al Gore says that he created the internet, he actually is correct in a really meaningful way, which is he sponsored a bill in 1985 that essentially created the modern internet, created what was called the NSFNET at the time, which was sort of the first really fast internet backbone.

3327.771 - 3340.956 Marc Andreessen

And that bill dumped a ton of money into a bunch of research universities to build out basically the internet backbone and then the supercomputer centers that were clustered around the internet. And one of those universities was University of Illinois. where I went to school.

3340.976 - 3359.107 Marc Andreessen

And so the other stroke of luck that I had was I went to Illinois basically as that money was just getting dumped on campus. And so as a consequence, and this was like, you know, 89, 90, 91, we were right on the internet backbone. We had a T3, a 45-megabit backbone connection, which at the time was, you know, wildly state of the art.

3359.728 - 3371.48 Marc Andreessen

We had crazy computers. We had Thinking Machines parallel supercomputers. We had Silicon Graphics workstations. We had Macintoshes. We had NeXT cubes all over the place. We had like every possible kind of computer you could imagine, because all this money just fell out of the sky.

3372.661 - 3374.322 Lex Fridman

Um, so you were living in the future.

3374.342 - 3388.19 Marc Andreessen

Yeah. So quite literally, yeah, it's all there. It's all like we had full broadband graphics, like the whole thing. And it's actually funny, cause this is the first time it kind of tickled the back of my head that there might be a big opportunity in here, which is, you know, they embraced it.

3388.23 - 3408.479 Marc Andreessen

And so they put computers in all the dorms and they wired up all the dorm rooms and they had all these labs everywhere and everything. And then they gave every undergrad a computer account and an email address. Um, and the assumption was that you would use the internet for your four years at college, um, and then you would graduate and stop using it. And that was that, right? Yeah.

3408.679 - 3415.021 Marc Andreessen

And you would just retire your email address. It wouldn't be relevant anymore because you'd go off in the workplace and they don't use email. You'd be back to using fax machines or whatever.

3415.041 - 3422.744 Lex Fridman

Did you have that sense as well? You said the back of your head was tickled. What was exciting to you about this possible world?

3422.844 - 3440.552 Marc Andreessen

Well, if this is so useful in this contained environment that just has this weird source of outside funding, then if it were practical for everybody else to have this, and if it were cost-effective for everybody else to have this, wouldn't they want it? And overwhelmingly, the prevailing view at the time was, no, they would not want it. This is esoteric, weird nerd stuff, right?

3440.612 - 3454.737 Marc Andreessen

That like computer science kids like, but like normal people are never gonna do email, right? Or be on the internet, right? And so I was just like, wow, like this is actually like, this is really compelling stuff. Now, the other part was it was all really hard to use. And in practice, you had to be basically a CS.

3455.157 - 3465.02 Marc Andreessen

You basically had to be a CS undergrad or equivalent to actually get full use of the Internet at that point because it was all pretty esoteric stuff. So then that was the other part of the idea, which was, OK, we need to actually make this easy to use.

3465.96 - 3471.802 Lex Fridman

So what's involved in creating Mosaic, like in creating a graphical interface to the Internet?

3472.262 - 3479.868 Marc Andreessen

Yes, it was a combination of things. So basically the web existed in an early, sort of what you might describe as prototype, form. And by the way, text only at that point.

3480.548 - 3485.612 Lex Fridman

What did it look like? What was the web? I mean, and the key figures? Like, what was it like?

3486.893 - 3506.617 Marc Andreessen

Paint a picture. It looked like ChatGPT, actually. It was all text. Yeah. And so you had a text-based web browser. Well, actually, Tim Berners-Lee's original browser, both the browser and the server, actually ran on NeXT cubes. So this was the computer Steve Jobs made during the decade-long interim period when he was not at Apple.

3506.997 - 3521.051 Marc Andreessen

You know, he got fired in 85 and then came back in 97. So this was in that interim period where he had this company called NeXT. And they made these, literally these computers called Cubes. And there's this famous story. They were beautiful, but they were 12-inch by 12-inch by 12-inch cube computers.

3521.071 - 3541.995 Marc Andreessen

And there's a famous story about how they could have cost half as much if it had been 12 by 12 by 13. Yeah. Steve was like, no, it has to be. So they were like $6,000, basically, academic workstations. They had the first CD-ROM drives, which were slow. I mean, the computers were all but unusable. They were so slow, but they were beautiful. Okay, can we actually just take a tiny tangent there?

3542.035 - 3542.376 Marc Andreessen

Sure, of course.

3544.123 - 3562.203 Lex Fridman

The 12 by 12 by 12, they just so beautifully encapsulate Steve Jobs' idea of design. Can you just comment on what you find interesting about Steve Jobs, about that view of the world, that dogmatic pursuit of perfection in how he saw perfection in design?

3563.035 - 3578.926 Marc Andreessen

Yeah. So I guess I'd say, look, he was a deep believer, I think in a very deep way. I don't know if he ever really described it like this, but the way I'd interpret it is, it's like this thing, and it's actually a thing in philosophy: aesthetics are not just appearances. Aesthetics go all the way down to deep underlying meaning. Right.

3578.966 - 3597.721 Marc Andreessen

It's like, I'm not a physicist. One of the things I've heard physicists say is one of the things you start to get a sense of when a theory might be correct is when it's beautiful, right? Like, you know, right? And so there's something, and you feel the same thing, by the way, in like human psychology, right? You know, when you're experiencing awe, right? You know, there's like a simplicity to it.

3597.761 - 3608.23 Marc Andreessen

When you're having an honest interaction with somebody, there's an aesthetic, I would say a calm comes over you, because you're actually being fully honest and not trying to hide yourself, right? So it's like this very deep sense of aesthetics.

3608.49 - 3629.976 Lex Fridman

And he would trust that judgment that he had deep down. Even if the engineering teams are saying this is too difficult, even if the finance folks are saying this is ridiculous, the supply chain, all that kind of stuff, this makes this impossible, we can't do this kind of material, this has never been done before, and so on and so forth, he just sticks by it.

3630.424 - 3644.971 Marc Andreessen

Well, I mean, who makes a phone out of aluminum, right? Like nobody else would have done that. And now, of course, what kind of caveman would you have to be to have a phone that's made out of plastic? Like, right. So it's just this very, right.

3644.991 - 3654.495 Marc Andreessen

And, you know, look, it's, it's, there's a thousand different ways to look at this, but one of the things is just like, look, these things are central to your life. Like you're with your phone more than you're with anything else. Like it's in your, it's going to be in your hand.

3654.535 - 3672.823 Marc Andreessen

I mean, you know this, he thought very deeply about what it meant for something to be in your hand all day long. Yeah. Well, for example, here's an interesting design thing. My understanding is he never wanted an iPhone to have a screen larger than you could reach with your thumb one-handed. And so he was actually opposed to the idea of making the phones larger.

3672.883 - 3685.309 Marc Andreessen

And I don't know if you have this experience today, but let's say there are certain moments in your day when you might only have one hand available and you might want to be on your phone. And you're trying to send a text and your thumb can't reach the send button.

3685.529 - 3704.937 Lex Fridman

Yeah, I mean, there's pros and cons, right? And then there's like folding phones, which I would love to know what he thinks about them. But is there something you could also just linger on? Because he's one of the most interesting figures in the history of technology. What made him as successful as he was? What made him as interesting as he was?

3705.837 - 3712.02 Lex Fridman

What made him so productive and important in the development of technology?

3712.68 - 3727.924 Marc Andreessen

He had an integrated worldview, so the properly designed device that had the correct functionality, that had the deepest understanding of the user, that was the most beautiful. It had to be all of those things. He basically would drive to as close to perfect as you could possibly get.

3729.185 - 3746.27 Marc Andreessen

I suspect that he never quite thought he ever got there, because most great creators are generally dissatisfied, you read later on, and all they can see are the flaws in their creation. But he got as close to perfect each step of the way as he could possibly get with the constraints of the technology of his time. And then, look, he was sort of famous in the Apple model.

3746.35 - 3759.518 Marc Andreessen

It's like, look, this headset that they just came out with, it's like a decade-long effort. And they're just going to sit there and tune and polish and tune and polish until it is as perfect as anybody could possibly make anything.

3760.198 - 3767.604 Marc Andreessen

And then this goes to the way that people describe working with him, which is, you know, there was a terrifying aspect of working with him, which is, you know, he was, you know, he was very tough.

3768.844 - 3782.528 Marc Andreessen

But there was this thing that everybody I've ever talked to who worked for him says, they all say the following, which is we did the best work of our lives when we worked for him because he set the bar incredibly high and then he supported us with everything that he could to let us actually do work of that quality.

3783.209 - 3788.69 Marc Andreessen

So a lot of people who were at Apple spend the rest of their lives trying to find another experience where they feel like they're able to hit that quality bar again.

3789.331 - 3792.572 Lex Fridman

Even if it, in retrospect, or during it felt like suffering.

3792.692 - 3793.732 Marc Andreessen

Yeah, exactly. Yeah.

3794.724 - 3798.346 Lex Fridman

What does that teach you about the human condition, huh?

3798.786 - 3819.315 Marc Andreessen

So look, exactly. So the Silicon Valley, I mean, look, he's not, you know, George Patton in the, you know, in the army, like, you know, there are many examples in other fields, you know, that are like this. Specifically in tech, it's actually, I find it very interesting. There's the Apple way, which is polish, polish, polish, and don't ship until it's as perfect as you can make it.

3819.455 - 3837.097 Marc Andreessen

And then there's the sort of the other approach, which is the sort of incremental hacker mentality. which basically says ship early and often and iterate. And one of the things I find really interesting is I'm now 30 years into this, like there are very successful companies on both sides of that approach, right? Like that is a fundamental,

3838.291 - 3858.92 Marc Andreessen

difference in how to operate and how to build and how to create that you have world-class companies operating in both ways. And I don't think the question of which is the superior model is anywhere close to being answered. And my suspicion is the answer is do both. The answer is you actually want both. They lead to different outcomes. Software tends to do better with the iterative approach.

3859.84 - 3867.947 Marc Andreessen

Hardware tends to do better with the you know, sort of wait and make it perfect approach. But again, you can find examples in both directions.

3868.547 - 3877.715 Lex Fridman

So the jury's still out on that one. So back to Mosaic. So what, it was text-based. Tim Berners-Lee

3878.53 - 3891.936 Marc Andreessen

Well, there was the web, which was text-based, but there were, I mean, there were like three websites. There was like no content. There were no users. Like it hadn't catalyzed yet. And by the way, because it was all text, there were no documents, there were no images, there were no videos, right?

3892.016 - 3898.679 Marc Andreessen

And in the beginning, you had to be on a NeXT cube. You needed to have a NeXT cube both to publish and to consume, so.

3899.319 - 3900.72 Lex Fridman

So those were 6,000 bucks.

3900.74 - 3916.105 Marc Andreessen

You said there were limitations, yeah, the $6,000 PC. They did not sell very many. But then there was also FTP and there was Usenet, right? And there were, you know, a dozen others. Basically there was WAIS, which was an early search thing. There was Gopher, which was an early menu-based information retrieval system.

3916.265 - 3927.909 Marc Andreessen

There were like a dozen different sort of scattered ways that people would get to information on, on the internet. And so the mosaic idea was basically bring those all together, make the whole thing graphical, make it easy to use, make it basically bulletproof so that anybody can do it.

3928.689 - 3945.968 Marc Andreessen

And then again, just on the luck side, it so happened that this was right at the moment when graphics, when the GUI, sort of actually took off. We're now so used to the GUI that we think it's been around forever, but it didn't really, you know, the Macintosh brought it out in 85, but they actually didn't sell very many Macs in the 80s. It was not that successful of a product.

3947.028 - 3962.934 Marc Andreessen

It really was, you needed Windows 3.0 on PCs, and that hit in about 92. And so, and we did Mosaic in 92, 93. So that sort of, it was like right at the moment when you could imagine actually having a graphical user interface, right, at all, much less one to the internet.

3963.074 - 3971.057 Lex Fridman

How well did Windows 3 sell? So was that the really big graphical operating system?

3971.277 - 3984.381 Marc Andreessen

Well, this is the classic, okay, Microsoft was operating on the other model. So Steve, Apple, was running on Polish It Until It's Perfect. Microsoft famously ran on the other model, which is Ship and Iterate. And so the old line in those days was, it's version three of every Microsoft product that's the good one, right?

3984.401 - 4000.831 Marc Andreessen

And so you can find online Windows 1 and Windows 2; nobody used them. Actually, in the original Microsoft Windows, the windows were non-overlapping. And you had these very small, very low-resolution screens. And then, literally, it just didn't work. It wasn't ready yet.

4001.271 - 4004.334 Lex Fridman

And Windows 95, I think was a pretty big leap also.

4004.414 - 4005.075 Marc Andreessen

That was a big leap too.

4005.355 - 4005.535 Lex Fridman

Yeah.

4005.595 - 4015.945 Marc Andreessen

So that was like bang, bang. And then, you know, in the fullness of time, Steve came back, and the Mac started to take off again. That was the third bang. And then the iPhone was the fourth bang. Such exciting times.

4015.965 - 4021.352 Lex Fridman

And then we were off to the races. Because nobody could have known what would be created from that.

4021.412 - 4038.872 Marc Andreessen

Well, Windows 3.0 to the iPhone was only 15 years. That ramp, in retrospect, at the time it felt like it took forever, but in historical terms, that was a very fast ramp from even having a graphical computer at all on your desk to the iPhone. It was 15 years.

4039.032 - 4047.661 Lex Fridman

Did you have a sense of what the internet will be as you're looking through the window of mosaic? Like there's just a few web pages for now.

4048.533 - 4069.519 Marc Andreessen

So the thing I had early on was, I was keeping at the time what, well, there's disputes over what was the first blog, but I had one of them that at least is a runner-up in the competition. And it was what was called the What's New page. And it was hardwired in, a distribution unfair advantage: I put it right in the browser.

4070.339 - 4080.802 Marc Andreessen

I put it in the browser, and then I put my resume in the browser, which also was hilarious. But, um, I was keeping the... Not many people get to do that.

4080.842 - 4089.045 Lex Fridman

So, uh, good call. Yeah. Early days. Yes. It's so interesting.

4089.085 - 4108.415 Marc Andreessen

I'm looking for my, uh... Oh, Marc is looking for a job. Um, so, so the What's New page, I would literally get up every morning and every afternoon, and basically, if you wanted to launch a website, you would email me, and I would list it on the What's New page. And that was how people discovered the new websites as they were coming out.

4108.655 - 4115.76 Marc Andreessen

And I remember, cause it literally went from like one every couple of days, to like one every day, to like two every day.

4117.462 - 4122.228 Lex Fridman

So that blog was kind of doing the directory thing. So what was the homepage?

4122.969 - 4138.664 Marc Andreessen

So the homepage was just basically trying to explain even what this thing is that you're looking at, right? Basically basic instructions. But then there was a button that said What's New. And what most people did, for obvious reasons, was go to What's New. But like, it was so mind-blowing at that point, just the basic idea.

4138.684 - 4156.594 Marc Andreessen

And it was just, this was like, you know, this was basically the internet, but people could see it for the first time. The basic idea was look, you know, some, you know, it's like, literally it's like an Indian restaurant in like Bristol, England has like put their menu on the web. And people were like, wow. Cause like, that's the first restaurant menu on the web. And I don't have to be in Bristol.

4156.654 - 4174.547 Marc Andreessen

And I don't know if I'm ever going to go to Bristol, and I don't even like Indian food, and like, wow. Right. Um, and it was like that. Uh, the first streaming video thing was another England thing, some Oxford or something. Some guy put his coffee pot up as the first streaming video thing.

4174.727 - 4186.556 Marc Andreessen

And he put it on the web cause he literally, it was the coffee pot down the hall and he wanted to see when he needed to go, uh, refill it. Um, but there were, you know, there was a point when there were thousands of people like watching that coffee pot. Because it was the first thing you could watch.

4187.537 - 4196.504 Lex Fridman

But were you able to kind of infer, you know, if that Indian restaurant could go online, then you're like... They all will.

4196.864 - 4199.206 Lex Fridman

They all will. Yeah, exactly. So you felt that.

4199.286 - 4212.176 Marc Andreessen

Yeah, yeah, yeah. Now, you know, look, it's still a stretch, right? It's still a stretch because it's just like, okay, you're still in this zone, which is like, okay, is this a nerd thing? Is this a real-person thing? Yeah. Um, and by the way, there was a wall of skepticism from the media. Everybody was just like, this is crazy.

4212.196 - 4228.807 Marc Andreessen

This is just like, um, this is not, you know, this is not for regular people at that time. Um, and so you, you had to think through that and then look, it was still, it was still hard to get on the internet at that point. Right. So you could get kind of this weird bastardized version if you were on AOL, which wasn't really real. Or you had to go, like, learn what an ISP was.

4230.167 - 4248.133 Marc Andreessen

You know, in those days, PCs actually didn't come with TCP/IP drivers pre-installed. So you had to learn what a TCP/IP driver was. You had to buy a modem. You had to install driver software. I have a comedy routine I do, it's like 20 minutes long, describing all the steps required to actually get on the internet. And so you had to look through these practical problems.

4248.733 - 4268.448 Marc Andreessen

And then speed, performance, 14.4 modems. It was like watching glue dry. There were basically a sequence of bets that we made where you basically needed to look through that current state of affairs and say, actually, there's going to be so much demand for it. Once people figure this out, there's going to be so much demand for it that all of these practical problems are going to get fixed.

4269.009 - 4278.537 Lex Fridman

Some people say that the anticipation makes the destination that much more exciting. Do you remember progressive JPEGs? Yeah, do I? Yeah.

4279.458 - 4280.019 Lex Fridman

Do I?

4280.119 - 4294.71 Marc Andreessen

So for kids in the audience, right? For kids in the audience. You used to have to watch an image load like a line at a time, but it turns out there was this thing with JPEGs where you could load every fourth line and then sweep back through again.

4294.79 - 4302.536 Marc Andreessen

And so you could like render a fuzzy version of the image up front and then it would like resolve into the detailed one. And that was like a big UI breakthrough because it gave you something to watch.
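The interlacing trick Marc describes can be sketched in a few lines. This is an illustrative sketch only — the pass offsets and strides here are made up for clarity, not the actual progressive JPEG or GIF interlace spec:

```javascript
// Paint every fourth row first, then fill in the gaps on later passes,
// so a fuzzy full-height image appears before the detailed one.
function interlacedRowOrder(height) {
  const passes = [
    { start: 0, step: 4 }, // pass 1: rows 0, 4, 8, ... (coarse preview)
    { start: 2, step: 4 }, // pass 2: rows 2, 6, 10, ...
    { start: 1, step: 2 }, // pass 3: the remaining odd rows
  ];
  const order = [];
  for (const { start, step } of passes) {
    for (let row = start; row < height; row += step) order.push(row);
  }
  return order; // the order in which rows get painted on screen
}

// For an 8-row image, the coarse pass lands first:
console.log(interlacedRowOrder(8)); // → [0, 4, 2, 6, 1, 3, 5, 7]
```

The UI win is exactly what Marc says: the viewer sees a recognizable (fuzzy) image after the first pass instead of a slowly descending strip.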

4303.829 - 4308.011 Lex Fridman

Yeah, and there's applications in various domains for that.

4310.512 - 4324.377 Marc Andreessen

Well, there was a big fight. There was a big fight early on about whether there should be images on the web. For that reason, for sexualization? No, not explicitly. That did come up, but it wasn't even that. It was more that, the argument went, the purists basically said all the serious information in the world is text.

4325.258 - 4335.676 Marc Andreessen

If you introduce images, you're basically going to bring in all the trivial stuff. You're going to bring in magazines and all this crazy stuff, and it's going to distract from that. It's going to take away from being serious; it's going to be frivolous.

4336.107 - 4347.089 Lex Fridman

Well, was there any Doomer-type arguments about the internet destroying all of human civilization or destroying some fundamental fabric of human civilization?

4347.389 - 4365.372 Marc Andreessen

Yeah, so in those days, it was all around crime and terrorism. So those arguments happened, but there was no sense yet of the internet having an effect on politics because that was way too far off. But there was an enormous panic at the time around cybercrime. There was enormous panic that your credit card number would get stolen and your life savings would be drained.

4366.064 - 4381.017 Marc Andreessen

And then criminals were going to... Oh, when we started, one of the things we did: the Netscape browser was the first widely used piece of consumer software that had strong encryption built in. It made it available to ordinary people. And at that time, strong encryption was actually illegal to export out of the US.

4381.798 - 4392.506 Marc Andreessen

So we could field that product in the US, we could not export it because it was classified as ammunition. So the Netscape browser was on a restricted list along with the Tomahawk missile as being something that could not be exported.

4392.546 - 4408.978 Marc Andreessen

So we had to make a second version with deliberately weak encryption to sell overseas with a big logo on the box saying, do not trust this, which it turns out makes it hard to sell software when it's got a big logo that says, don't trust it. And then we had to spend five years fighting the US government to get them to basically stop trying to do this.

4410.259 - 4424.017 Marc Andreessen

Because the fear was terrorists are going to use encryption, right, to like plot, you know, all these things. And then, you know, we responded with, well, actually, we need encryption to be able to secure systems so that the terrorists and the criminals can't get into them. So anyway, that was the 1990s fight.

4425.569 - 4443.912 Lex Fridman

So can you say something about some of the details of the software engineering challenges required to build these browsers? I mean, the engineering challenges of creating a product that hasn't really existed before that can have such almost like limitless impact on the world with the internet.

4444.412 - 4460.081 Marc Andreessen

So there was a really key bet that we made at the time, which is very controversial, which was core to how it was engineered, which was, are we optimizing for performance or for ease of creation? And in those days, the pressure was very intense to optimize for performance because the network connections were so slow and also the computers were so slow.

4460.981 - 4481.153 Marc Andreessen

And so, you had mentioned progressive JPEGs. There's an alternate world in which we optimized for performance and you had a much more pleasant experience right up front. But what we got by not doing that was ease of creation. And the way that we got ease of creation was all of the protocols and formats were in text, not in binary.

4482.334 - 4499.26 Marc Andreessen

And so HTTP is in text, and this was an internet tradition that we picked up, but we continued it. HTTP is text and HTML is text. And then everything else that followed is text. And by the way, you can imagine purist engineers saying, this is insane. You have very limited bandwidth. Why are you wasting any time sending text?

4499.3 - 4514.926 Marc Andreessen

You should be encoding the stuff into binary and it'll be much faster. And of course the answer is that's correct. But what you get when you make it text is all of a sudden, well, the big breakthrough was the view source function, right? So the fact that you could look at a webpage, you could hit view source and you could see the HTML. That was how people learned how to make webpages, right?
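The "everything is text" choice is easy to see in HTTP/1.x itself: a request is just lines of readable characters you could type by hand. A toy sketch (the host and path here are made-up examples):

```javascript
// An HTTP/1.x request is a request line, header lines, then a blank line —
// all plain text, which is why you could always read it on the wire and
// learn the protocol by looking. (Real clients handle far more than this.)
function buildRequest(host, path) {
  return [
    `GET ${path} HTTP/1.1`,
    `Host: ${host}`,
    `Connection: close`,
    ``, // the blank line that ends the headers
    ``,
  ].join("\r\n");
}

const req = buildRequest("example.com", "/menu.html");
console.log(req);
// Every byte is human-readable text — no binary framing to decode.
```

A binary encoding would have been faster, as the purists said; the tradeoff bought learnability instead.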

4515.086 - 4538.094 Lex Fridman

It's so interesting, because the stuff we take for granted now, man, that was fundamental to the development of the web, to be able to have HTML just right there. All the ghetto mess that is HTML, all the sort of almost biological messiness of HTML, and then having the browser try to interpret that mess to show something reasonable. Yeah, exactly.

4538.314 - 4551.81 Marc Andreessen

Well, and then there was this internet principle that we inherited, which was, what was it? Emit conservatively, interpret liberally. So the design principle was, if you're creating a web editor that's going to emit HTML, do it as cleanly as you can.

4552.591 - 4566.604 Marc Andreessen

But you actually want the browser to interpret liberally, which is you actually want users to be able to make all kinds of mistakes and for it to still work. And so the browser rendering engines to this day have all of this spaghetti code, crazy stuff where they can, they're resilient to all kinds of crazy HTML mistakes.

4566.684 - 4579.453 Marc Andreessen

And literally what I always had in my head is, there's an eight-year-old or an 11-year-old somewhere, and they're doing a view source, they're doing a cut and paste, and they're trying to make a webpage for their turtle or whatever. And they leave out a slash and they leave out an angle bracket and they do this and they do that, and it still works.
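The "interpret liberally" behavior can be sketched with a toy tag scanner that supplies a forgotten close tag instead of erroring out. This is a drastic simplification of what real rendering engines do (the HTML5 spec now standardizes the full recovery rules); `tolerantParse` is an invented name for illustration:

```javascript
// Liberal interpretation: never reject the page. A forgotten close tag is
// supplied automatically; a stray close tag is silently ignored.
function tolerantParse(html) {
  const open = [];
  const implicitlyClosed = [];
  const tags = html.match(/<\/?[a-z]+>/g) || [];
  for (const t of tags) {
    if (t.startsWith("</")) {
      const name = t.slice(2, -1);
      if (!open.includes(name)) continue; // stray close: ignore it
      // Pop (and implicitly close) anything the author left open above it.
      while (open[open.length - 1] !== name) implicitlyClosed.push(open.pop());
      open.pop();
    } else {
      open.push(t.slice(1, -1));
    }
  }
  while (open.length) implicitlyClosed.push(open.pop()); // close at end of file
  return implicitlyClosed;
}

// The kid's turtle page with a forgotten </b> still "works":
console.log(tolerantParse("<html><body><b>My turtle</body></html>"));
// → [ 'b' ]  (the missing </b> was supplied for free)
```

A strict parser would reject both mistakes; the liberal one renders something reasonable either way.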

4580.326 - 4605.682 Lex Fridman

It's also, I don't often think about this, but programming, C, C++, all those languages, Lisp, the compiled languages, the interpreted languages, Python, Perl, all that, they force everything to be all correct. It's like everything has to be perfect. And then... Autistic. All right, it's systematic and rigorous. Let's go there. But you forget that the...

4607.254 - 4624.314 Lex Fridman

And the web with JavaScript eventually and HTML is allowed to be messy in the way, for the first time, messy in the way biological systems can be messy. It's like the only thing computers were allowed to be messy on for the first time.

4624.662 - 4640.686 Marc Andreessen

It used to offend me. So I grew up in Unix. I worked on Unix; I was a Unix native all the way through this period. And it used to drive me bananas when it would do the segmentation fault and the core dump file. It's like, literally there's an error in the code, the math is off by one, and it core dumps.

4640.966 - 4641.146 Lex Fridman

Yeah.

4641.306 - 4655.332 Marc Andreessen

And I'm in the core dump trying to analyze it and trying to reconstruct what happened, and I'm just like, this is ridiculous. The computer ought to be smart enough to know that if it's off by one, okay, fine, and it keeps running. And I would go ask all the experts, like, why can't it just keep running? And they'd explain to me, well, because of all the downstream repercussions and blah, blah.
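For contrast, the same off-by-one that core-dumps a C program is a non-event in a loosely checked language like JavaScript — roughly the "just keep running" behavior Marc wanted, with its own tradeoffs. A small illustrative sketch:

```javascript
// A classic off-by-one: <= instead of < walks one slot past the end.
// In C this reads out of bounds; in JavaScript it just yields undefined
// and the program keeps going.
const scores = [90, 85, 77];
const n = scores.length;

let total = 0;
for (let i = 0; i <= n; i++) { // off-by-one: i reaches 3
  total += scores[i] || 0;     // scores[3] is undefined; treat it as 0
}
console.log(total); // 252 — no segfault, no core dump
```

The downside the experts warned about is real too: the error is silently absorbed instead of reported, which is exactly the purist objection.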

4655.352 - 4672.705 Marc Andreessen

And I'm like, this still means we're forcing the human creator to live, to your point, in this hyper-literal world of perfection. And I was just like, that's just bad. And by the way, what happens with that, of course, is what happened with coding at that point, which is you get a high priesthood.

4673.485 - 4689.054 Marc Andreessen

There's a small number of people who are really good at doing exactly that. Most people can't, and most people are excluded from it. And so actually that was where I picked up that idea: no, you want these things to be resilient to errors of all kinds. And this would drive the purists absolutely crazy.

4689.074 - 4699.2 Marc Andreessen

Like, I got attacked on this a lot, because every time, all the purists who were into all this markup language stuff and formats and codes, they would be like, you can't, you're encouraging bad behavior because...

4699.85 - 4706.255 Lex Fridman

Oh, so they wanted the browser to give you a segfault error any time there was a... Yeah, yeah, they wanted it to be a copy, right?

4706.335 - 4713.481 Marc Andreessen

Yeah. Yeah, that was a very... And any properly trained and credentialed engineer would be like, that's not how you build these systems.

4713.501 - 4716.123 Lex Fridman

That's such a bold move to say, no, it doesn't have to be.

4716.143 - 4728.691 Marc Andreessen

Yeah. Now, like I said, the good news for me is the internet kind of had that tradition already. But having said that, we pushed it. We pushed it way out. But the other thing we did, going back to the performance thing, was we gave up a lot of performance. That initial experience for the first few years was pretty painful.

4729.412 - 4744.162 Marc Andreessen

But the bet there was actually an economic bet, which was that the demand for the web would basically mean that there would be a surge in supply of broadband. Because the question was, okay, how do you get the phone companies, which were not famous in those days for doing new things,

4744.702 - 4758.432 Marc Andreessen

at huge cost, for speculative reasons, how do you get them to build out broadband, spend billions of dollars doing that? And you could go meet with them and try to talk them into it, or you could just have a thing where it's very clear that people love it and it's going to be better if it's faster.

4758.452 - 4771.041 Marc Andreessen

And so there was a period there, and this was fraught with some peril, where we knew the experience was suboptimal because we were trying to force the emergence of demand for broadband, which is in fact what happened.

4772.105 - 4788.354 Lex Fridman

So you had to figure out how to display this text, HTML text. So the blue links and the purple links. And there were no standards. Were there standards at that time? There really still aren't. Well, there are implied standards, right?

4788.894 - 4805.36 Lex Fridman

And there's all these kinds of new features that are being added, like CSS, what kind of stuff a browser should be able to support, features within languages, within JavaScript, and so on. But you're setting standards on the fly yourself. Yeah.

4805.796 - 4828.123 Marc Andreessen

Well, to this day, if you create a web page that has no CSS style sheet, the browser will render it however it wants to. So this was one of the things. There was this idea at the time in how these systems were built, which is separation of content from appearance. And people don't really use that anymore because everybody wants to determine how things look, and so they use CSS.

4828.183 - 4831.324 Marc Andreessen

But it's still in there that you can just let the browser do all the work.

4831.764 - 4845.29 Lex Fridman

I still like really basic websites, but that could just be old school. Kids these days with their fancy responsive websites that don't actually have much content, but have a lot of visual elements.

4845.61 - 4848.512 Marc Andreessen

Well, that's one of the things that's fun about ChatGPT.

4848.532 - 4851.813 Lex Fridman

It's like back to the basics, back to just text. Yeah.

4851.913 - 4860.057 Marc Andreessen

Right. And you know, there is this pattern in human creativity and media where you end up back at text. And I think there's, you know, there's something powerful in there.

4860.908 - 4870.675 Lex Fridman

Is there some other stuff you remember, like the purple links? There were some interesting design decisions to kind of come up that we have today or we don't have today that were temporary.

4871.355 - 4877.079 Marc Andreessen

So I made the background gray. I hated reading text on white backgrounds, and so I made the background gray.

4877.199 - 4879.301 Lex Fridman

Do you regret this?

4879.601 - 4885.005 Marc Andreessen

No, no, no. That decision, I think, has been reversed. But now I'm happy, though, because now dark mode is the thing, so.

4885.839 - 4889.161 Lex Fridman

So it wasn't about gray. It was just you didn't want a white background.

4889.181 - 4891.522 Marc Andreessen

Strain my eyes. Strain your eyes.

4892.342 - 4904.949 Lex Fridman

Interesting. And then there's a bunch of other decisions. I'm sure there's an interesting history of the development of HTML and CSS and how those interface in JavaScript. And there's this whole Java applet thing.

4904.969 - 4920.793 Marc Andreessen

Well, the big one is probably JavaScript. CSS was after me, so that was not me. But JavaScript was maybe the biggest of the whole thing. That was us. And that was basically a bet. It was a bet on two things. One is that the world wanted a new front-end scripting language.

4921.973 - 4937.117 Marc Andreessen

And then the other was we thought at the time the world wanted a new backend scripting language. So JavaScript was designed from the beginning to be both front-end and backend. And then it failed as a backend scripting language, and Java won for a long time, and then Python, Perl, and other things, PHP, and Ruby.

4937.297 - 4952.381 Marc Andreessen

But now JavaScript is back, and so... I wonder if everything in the end will run on JavaScript. It seems like it is the... And by the way, let me give a shout-out to Brendan Eich, who was basically the one-man inventor of JavaScript.
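The original front-end-plus-back-end bet is now routine: the same JavaScript module can run in a browser and in Node. A sketch with an invented example function, `isValidUsername`, shared by both sides:

```javascript
// One language, both ends: the identical validation rule runs on the
// client (before the form submits) and on the server (before the database
// is touched), so the two can never drift apart.
function isValidUsername(name) {
  return /^[a-z0-9_]{3,16}$/.test(name);
}

// Server side (Node) and client side (browser) both just call it:
console.log(isValidUsername("pmarca"));    // true
console.log(isValidUsername("no spaces")); // false — space not allowed
console.log(isValidUsername("ab"));        // false — too short
```

This is the practical payoff of JavaScript "coming back" on the backend: shared code instead of two parallel implementations.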

4952.401 - 4956.442 Lex Fridman

If you're interested to learn more about Brendan Eich, he's done this podcast previously. Yeah.

4957.854 - 4967.907 Marc Andreessen

So he wrote JavaScript over a summer. I mean, I think it is fair to say now that it's the most widely used language in the world, and it seems to only be gaining in its range of adoption.

4968.168 - 4982.081 Lex Fridman

In the software world, there's quite a few stories of somebody over a weekend or over a week or over a summer writing some of the most impactful, revolutionary pieces of software ever. That should be inspiring, yes.

4982.361 - 4995.251 Marc Andreessen

Very inspiring. I'll give you another one, SSL. So SSL was the security protocol. That was us. And that was a crazy idea at the time, which was let's take all the native protocols and let's wrap them in a security wrapper. That was a guy named Kip Hickman who wrote that over a summer, one guy.
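The layering idea behind SSL — leave the application protocol untouched and wrap the whole byte stream in a security layer underneath — can be sketched like this. The XOR "cipher" is purely a placeholder to show the layering, nothing like real SSL/TLS cryptography, and `secureWrap`/`rawSend` are invented names:

```javascript
// A reversible stand-in "cipher" (NOT real crypto — illustration only).
const xor = (bytes, key) => bytes.map((b) => b ^ key);

// Wrap an existing send function: same interface, protected wire.
// The application protocol on top (HTTP, etc.) never changes.
function secureWrap(send, key) {
  return (plaintextBytes) => send(xor(plaintextBytes, key));
}

const wire = [];
const rawSend = (bytes) => wire.push(...bytes); // the "native" transport
const secureSend = secureWrap(rawSend, 0x2a);   // drop-in replacement

secureSend([71, 69, 84]);     // the bytes of "GET" — still speaking HTTP
console.log(wire);            // scrambled on the wire
console.log(xor(wire, 0x2a)); // → [ 71, 69, 84 ] after unwrapping
```

That drop-in property is the whole point: existing protocols got security without being redesigned.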

4996.271 - 5014.406 Marc Andreessen

And then look, sitting here today, the transformer at Google was a small handful of people. And then the number of people who did the core work on GPT, it's not that many people, it's a pretty small handful of people. And so, yeah, the pattern in software repeatedly over a very long time has been that it's a...

5015.346 - 5031.435 Marc Andreessen

Jeff Bezos always had the two pizza rule for teams at Amazon, which is any team needs to be able to be fed with two pizzas. If you need the third pizza, you have too many people. And I think it's actually the one pizza rule. For the really creative work, I think it's two people, three people.

5031.755 - 5044.042 Lex Fridman

Well, you see that with certain open source projects. So much is done by one or two people. It's so incredible. And that's why you see, that gives me so much hope about the open source movement in this new age of AI.

5045.103 - 5064.145 Lex Fridman

where just recently having had a conversation with Mark Zuckerberg, of all people, who's all in on open source, which is so interesting to see and so inspiring to see, because releasing these models, it is scary. It is potentially very dangerous, and we'll talk about that. But it's also...

5065.439 - 5087.408 Lex Fridman

If you believe in the goodness of most people and in the skill set of most people and the desire to do good in the world, that's really exciting. Because it's not putting these models into the centralized control of big corporations, the government, and so on. It's putting it in the hands of a teenage kid with a dream in his eyes. I don't know. That's... That's beautiful.

5087.648 - 5102.521 Marc Andreessen

And look, AI ought to make the individual coder obviously far more productive, right? By a thousand X or something. And so, not just the future of open source AI, but the future of open source everything. We ought to have a world now of super coders, right?

5102.581 - 5112.593 Marc Andreessen

Who are building things as open source with one or two people that were inconceivable five years ago. The level of hyper-productivity we're going to get out of our best and brightest, I think, is going to go way up.

5112.873 - 5128.266 Lex Fridman

It's going to be interesting. We'll talk about it, but let's just linger a little bit on Netscape. Netscape was acquired in 1999 for $4.3 billion by AOL. What was that like? What were some memorable aspects of that?

5128.627 - 5143.692 Marc Andreessen

Well, that was the height of the dot-com boom bubble bust. I mean, that was the frenzy. If you watch Succession, that was like what they did in the fourth season with Gojo and the merger. So it was like the height of one of those kind of dynamics.

5143.712 - 5146.853 Lex Fridman

Would you recommend Succession, by the way? I'm more of a Yellowstone guy.

5148.677 - 5151.42 Marc Andreessen

That is very American. I'm very proud of you.

5152.04 - 5155.604 Lex Fridman

I just talked to Matthew McConaughey and I'm full on Texan at this point. Good.

5155.944 - 5156.625 Marc Andreessen

I heartily approve.

5157.525 - 5168.036 Lex Fridman

And he will be doing the sequel to Yellowstone. Yeah. Very exciting. Anyway. I can't wait. So that's a rude interruption by me by way of succession.

5169.932 - 5184.953 Marc Andreessen

So that was at the height of the deal-making and money and just the fur flying and craziness. And so, yeah, it was just one of those. I mean, the entire Netscape thing from start to finish was four years, which for one of these companies is just incredibly fast.

5185.553 - 5199.838 Marc Andreessen

We went public 18 months after we were founded, which virtually never happens. So it was just this incredibly fast kind of meteor streaking across the sky. And then there was just this explosion that happened, because it was almost immediately followed by the dot-com crash. Wow.

5200.698 - 5221.443 Marc Andreessen

It was then followed by AOL buying Time Warner, which again, the succession guys kind of play with that, which turned out to be a disastrous deal. One of the famous kind of disasters in business history. And then what became an internet depression on the other side of that. But then in that depression in the 2000s was the beginning of broadband and smartphones and Web 2.0, right?

5221.483 - 5224.764 Marc Andreessen

And then social media and search and SaaS and everything that came out of that.

5225.604 - 5244.421 Lex Fridman

What did you learn from just the acquisition? I mean, this is so much money. What's interesting, because it must have been very new to you, that with software you can make so much money. There's so much money swimming around. I mean, I'm sure the ideas of investment were starting to get born there.

5244.621 - 5260.65 Marc Andreessen

Yes, so let me lay it out. So here's the thing. I don't know if I figured it out then, but I figured it out later, which is: software is a technology that, it's like the concept of the philosopher's stone. The philosopher's stone in alchemy transmutes lead into gold. Newton spent 20 years trying to find the philosopher's stone, never got there; nobody's ever figured it out.

5261.351 - 5275.208 Marc Andreessen

Software is our modern philosopher's stone, and in economic terms, it transmutes labor into capital, which is a super interesting thing. And by the way, Karl Marx is rolling over in his grave right now, because of course that's a complete refutation of his entire theory.

5276.789 - 5297.464 Marc Andreessen

It transmutes labor into capital as follows: somebody sits down at a keyboard and types a bunch of stuff in, and a capital asset comes out the other side. And then somebody buys that capital asset for a billion dollars. Like, that's amazing. It's literally creating value out of thin air, out of purely human thought.

5298.624 - 5317.436 Marc Andreessen

There are many things that make software magical and special, but that's the economics. I wonder what Marx would have thought about that. It would have completely broken his brain, because, of course, that kind of technology was inconceivable. When he was alive, it was all industrial-era stuff. And so any kind of machinery necessarily involved huge amounts of capital.

5317.496 - 5333.085 Marc Andreessen

And then labor was on the receiving end of the abuse. But a software engineer is somebody who basically transmutes his own labor into an actual capital asset, creates permanent value. Well, in fact, it's actually very inspiring. That's actually more true today than before.

5333.145 - 5347.139 Marc Andreessen

So when I was doing software, the assumption was all new software basically has a sort of a parabolic sort of life cycle, right? So you ship the thing, people buy it. At some point, everybody who wants it has bought it and then it becomes obsolete and it's like bananas. Nobody buys old software.

5348.559 - 5370.787 Marc Andreessen

These days, Minecraft, Mathematica, Facebook, Google, you have software assets that have been around for 30 years that are gaining in value every year. They're just there, World of Warcraft, Salesforce.com, and every single year they're being polished and polished. They're getting better and better, more powerful, more valuable.

5370.827 - 5376.189 Marc Andreessen

So we've entered this era where you can actually have these things that actually build out over decades, which, by the way, is what's happening right now with GPT.

5377.649 - 5391.873 Marc Andreessen

And so now, and this is why there is always sort of a constant investment frenzy around software: because, look, when you start one of these things, it doesn't always succeed, but when it does, you might be building an asset that builds value for four, five, six decades to come.

5392.954 - 5406.821 Marc Andreessen

If you have a team of people who have the level of devotion required to keep making it better. And then the fact that, of course, everybody's online; there's 5 billion people that are a click away from any new piece of software. So the potential market size for any of these things is nearly infinite.

5407.361 - 5408.822 Lex Fridman

It must have been surreal back then, though.

5409.093 - 5419.743 Marc Andreessen

Yeah, yeah. This was all brand new, right? Yeah. Back then, this was all brand new. These were all brand new. Had you rolled out that theory in even 1999, people would have thought you were smoking crack. So that's emerged over time.

5421.385 - 5432.275 Lex Fridman

Well, let's now turn back into the future. You wrote the essay, Why AI Will Save the World. Let's start at the very high level. What's the main thesis of the essay?

5432.665 - 5452.294 Marc Andreessen

Yeah. So the main thesis of the essay is that what we're dealing with here is intelligence. And it's really important to talk about the very nature of what intelligence is. And fortunately, we have a predecessor to machine intelligence, which is human intelligence. And we've got observations and theories over thousands of years for what intelligence is in the hands of humans.

5452.354 - 5475.518 Marc Andreessen

And what intelligence is, right? I mean, what it literally is, is the way to capture, process, analyze, synthesize information, solve problems. But the observation of intelligence in human hands is that intelligence quite literally makes everything better. And what I mean by that is every kind of outcome of like human quality of life, whether it's education outcomes or success of your children,

5476.86 - 5491.86 Marc Andreessen

career success or health or lifetime satisfaction. By the way, propensity to peacefulness as opposed to violence, propensity for open-mindedness versus bigotry, those are all associated with higher levels of intelligence.

5492 - 5496.323 Lex Fridman

Smarter people have better outcomes in almost every domain of activity.

5496.443 - 5516.679 Lex Fridman

Academic achievement, job performance, occupational status, income, creativity, physical health, longevity, learning new skills, managing complex tasks, leadership, entrepreneurial success, conflict resolution, reading comprehension, financial decision making, understanding others' perspectives, creative arts, parenting outcomes, and life satisfaction. One of the...

5518.119 - 5541.11 Lex Fridman

more depressing conversations I've had. And I don't know why it's depressing. I have to really think through why it's depressing. But it was on IQ and the g factor, and that that's something that in large part is genetic. And it correlates so much with all of these things and success in life.

5542.185 - 5552.072 Lex Fridman

It's like all the inspirational stuff we read about, like if you work hard and so on, damn, it sucks that you're born with a hand that you can't change.

5552.372 - 5553.053 Marc Andreessen

But what if you could?

5554.114 - 5579.301 Lex Fridman

You're saying, basically, a really important point, and I think it's in your article. It really helped me; it's a nice added perspective to think about. Listen, the science of human intelligence has shown that it just makes life easier and better the smarter you are. And now, let's look at artificial intelligence.

5580.582 - 5591.464 Lex Fridman

And if that's a way to increase some human intelligence, then it's only going to make a better life. That's the argument.

5591.704 - 5605.067 Marc Andreessen

And certainly at the collective level, we could talk about the collective effect of just having more intelligence in the world, which will have very big payoff. But there's also just at the individual level, like what if every person has a machine? It's the concept of Doug Engelbart's concept of augmentation.

5606.348 - 5624.402 Marc Andreessen

you know, what if everybody has an assistant and the assistant is, you know, 140 IQ and you happen to be 110 IQ and you've got, you know, something that basically is infinitely patient and knows everything about you and is pulling for you in every possible way, wants you to be successful.

5624.782 - 5635.53 Marc Andreessen

And anytime you find anything confusing, or want to learn anything, or have trouble understanding something, or want to figure out what to do in a situation, or want to figure out how to prepare for a job interview, any of these things, it will help you do it.

5635.77 - 5645.797 Marc Andreessen

And it will therefore, the combination will effectively raise your IQ, will therefore raise the odds of successful life outcomes in all these areas.

5646.017 - 5651.601 Lex Fridman

So people below this hypothetical 140 IQ, it'll pull them up towards 140 IQ? Yeah, yeah.

5652.982 - 5667.988 Marc Andreessen

And then, of course, people at 140 IQ will be able to have a peer, which is great. And then people above 140 IQ will have an assistant that they can farm things out to. And then, look, God willing, at some point, these things go from future versions, go from 140 IQ equivalent to 150 to 160 to 180. Einstein was estimated to be on the order of 160.

5668.108 - 5692.158 Marc Andreessen

So when we get 160 AI, one assumes it will be creating Einstein-level breakthroughs in physics. And then at 180, we'll be curing cancer and developing warp drive and doing all kinds of stuff. And so it is quite possibly the case that this is the most important thing that's ever happened and the best thing that's ever happened.

5693.664 - 5699.065 Marc Andreessen

precisely because it's a lever on this single fundamental factor of intelligence, which is the thing that drives so much of everything else.

5700.546 - 5705.867 Lex Fridman

Can you steelman the case that human plus AI is not always better than human, for the individual?

5705.927 - 5716.429 Marc Andreessen

You may have noticed that there are a lot of smart assholes running around. Sure, yes. Right? And so there are certain people where, as they get smarter, they get to be more arrogant, right? So, you know, there's one huge flaw.

5717.229 - 5735.863 Lex Fridman

Although, to push back on that, it might be interesting because when the intelligence is not all coming from you, but from another system, that might actually increase the amount of humility even in the assholes. One would hope. Or it could make assholes more assholes. I mean, that's for psychology to study.

5736.123 - 5754.208 Marc Andreessen

Yeah, exactly. Another one is smart people are very convinced that they have a more rational view of the world and that they have an easier time seeing through conspiracy theories and hoaxes and sort of crazy beliefs and all that. There's a theory in psychology about this, which is actually about smart people. So for sure, people who aren't as smart are very susceptible to hoaxes and conspiracy theories. Yeah.

5754.693 - 5774.966 Marc Andreessen

But it may also be the case that the smarter you get, you become susceptible in a different way, which is you become very good at marshalling facts to fit preconceptions. You become very, very good at assembling whatever theories and frameworks and pieces of data and graphs and charts you need to validate whatever crazy ideas got in your head. And so you're susceptible in a different way.

5776.702 - 5780.085 Lex Fridman

We're all sheep, but different colored sheep.

5780.125 - 5798.661 Marc Andreessen

Some sheep are better at justifying it, right? And those are the smart sheep, right? So yeah, look, I would say this: I am not a utopian. There are no panaceas in life. I don't believe there are pure positives. I'm not a transcendental person like that. So yeah, there are going to be issues.

5800.082 - 5815.486 Marc Andreessen

And look, smart people, another thing maybe you could say about smart people is they are more likely to get themselves in situations that are beyond their grasp because they're just more confident in their ability to deal with complexity and their eyes become bigger, their cognitive eyes become bigger than their stomach. So yeah, you could argue those eight different ways.

5815.926 - 5825.509 Marc Andreessen

Nevertheless, on net, clearly, overwhelmingly, again, if you just extrapolate from what we know about human intelligence, you're improving so many aspects of life if you're upgrading intelligence.

5826.636 - 5844.567 Lex Fridman

So there'll be assistance at all stages of life. So when you're younger, there's for education, all that kind of stuff, for mentorship, all of this. And later on, as you're doing work and you've developed a skill and you're having a profession, you'll have an assistant that helps you excel at that profession. So at all stages of life.

5844.847 - 5860.113 Marc Andreessen

Yeah. I mean, look, the theory is augmentation. This is Doug Engelbart's term. Doug Engelbart made this observation many, many decades ago that basically you can have this oppositional frame of technology where it's us versus the machines, but what you really do is use technology to augment human capabilities. And by the way, that's actually how the economy develops.

5860.153 - 5866.535 Marc Andreessen

We can talk about the economic side of this, but that's actually how the economy grows is through technology augmenting human potential.

5868.496 - 5886.925 Marc Andreessen

And so, yeah, and then you basically have a proxy, or, you know, a sort of prosthetic: you've got glasses, you've got a wristwatch, you've got shoes, you've got a personal computer, you've got a word processor, you've got Mathematica, you've got Google. This is the latest, viewed through that lens.

5887.306 - 5898.9 Marc Andreessen

AI is the latest in a long series of basically augmentation methods to raise human capabilities. It's just that this one is the most powerful one of all, because this is the one that goes directly to what they call fluid intelligence, which is IQ.

5901.502 - 5921.455 Lex Fridman

Well, there's two categories of folks that you outline that worry about or highlight the risks of AI, and you highlight a bunch of different risks. I would love to go through those risks and just discuss them, brainstorm which ones are serious and which ones are less serious. But first, the Baptists and the bootleggers.

5921.495 - 5930.421 Lex Fridman

What are these two interesting groups of folks who worry about the effect of AI on human civilization?

5930.802 - 5931.302 Marc Andreessen

Or say they do.

5931.922 - 5932.863 Lex Fridman

Oh, okay.

5934.104 - 5950.896 Marc Andreessen

Yes, I'll say they do. The Baptists worry, the bootleggers say they do. So the Baptists and the bootleggers is a metaphor from economics, from what's called development economics. And it's this observation that when you get social reform movements in a society, you tend to get two sets of people showing up arguing for the social reform movement.

5951.336 - 5968.41 Marc Andreessen

And the term Baptists and bootleggers comes from the American experience with alcohol prohibition. And so in the 1900s, 1910s, there was this movement that was very passionate at the time, which basically said alcohol is evil and it's destroying society. By the way, there was a lot of evidence to support this.

5969.091 - 5986.904 Marc Andreessen

There were very high correlations then, by the way, and now, between rates of physical violence and alcohol use. In almost all violent crimes, either the perpetrator or the victim, or both, are drunk. You see this actually in almost all sexual harassment cases in the workplace. It's like, at a company party, and somebody's drunk.

5987.525 - 6002.555 Marc Andreessen

It's amazing how often alcohol actually correlates to just dysfunction. It leads to domestic abuse and so forth, child abuse. And so you had this group of people who were like, okay, this is bad stuff and we should outlaw it. And those were quite literally Baptists. Those were super committed, hardcore Christian activists in a lot of cases.

6003.235 - 6015.623 Marc Andreessen

There was this woman whose name was Carrie Nation, who was this older woman who had been in this, you know, I don't know, disastrous marriage or something. And her husband had been abusive and drunk all the time. And she became the icon of the Baptist prohibitionists.

6015.704 - 6026.651 Marc Andreessen

And she was legendary in that era for carrying an axe and, completely on her own, doing raids of saloons and taking her axe to all the bottles and kegs in the back.

6026.691 - 6027.992 Lex Fridman

So a true believer.

6028.132 - 6045.66 Marc Andreessen

An absolute true believer with absolutely the purest of intentions. And again, there's a very important thing here, which is: you could look at this cynically and say the Baptists are, like, delusional extremists, but you can also say, look, they're right. Like, she had a point. She wasn't wrong about a lot of what she said. Yeah.

6046 - 6059.206 Marc Andreessen

But it turns out, the way the story goes, that there was another set of people who very badly wanted to outlaw alcohol in those days. And those were the bootleggers: organized crime that stood to make a huge amount of money if legal alcohol sales were banned.

6059.406 - 6075.953 Marc Andreessen

And this was, in fact, the way the history goes, the beginning of organized crime in the US. This was the big economic opportunity that opened that up. And so they went in together. Well, they didn't literally go in together. The Baptists did not even necessarily know about the bootleggers, because they were on their moral crusade. The bootleggers certainly knew about the Baptists.

6076.053 - 6094.602 Marc Andreessen

And they were like, wow, these people are like the great front people for shenanigans in the background. And they got the Volstead Act passed. And they did, in fact, ban alcohol in the US. And you'll notice what happened, which is people kept drinking. It didn't work. People kept drinking. The bootleggers made a tremendous amount of money.

6095.283 - 6112.935 Marc Andreessen

And then over time, it became clear that it made no sense to make it illegal. And it was causing more problems. And so then it was revoked. And here we sit with legal alcohol 100 years later with all the same problems. And the whole thing was this giant misadventure. The Baptists got taken advantage of by the bootleggers. And the bootleggers got what they wanted. And that was that.

6119.981 - 6120.001 Lex Fridman

100%.

6120.021 - 6132.709 Marc Andreessen

Yeah, it's the same pattern. The economists will tell you it's the same pattern every time. This is what happened with nuclear power, which is another interesting one. But yeah, this happens dozens and dozens of times. throughout the last 100 years. And this is what's happening now.

6133.009 - 6160.392 Lex Fridman

And you write that it isn't sufficient to simply identify the actors and impugn their motives. We should consider the arguments of both the Baptists and the bootleggers on their merits. So let's do just that. Risk number one. Will AI kill us all? Yes. So... What do you think about this one? What do you think is the core argument here?

6162.474 - 6168.503 Lex Fridman

That the development of AGI, perhaps better said, will destroy human civilization.

6168.922 - 6172.484 Marc Andreessen

Well, first of all, you just did a sleight of hand because we went from talking about AI to AGI.

6174.586 - 6179.329 Lex Fridman

Is there a fundamental difference there? I don't know. What's AGI? What's AI? What's intelligence?

6179.389 - 6182.671 Marc Andreessen

Well, I know what AI is. AI is machine learning. What's AGI?

6182.851 - 6202.845 Lex Fridman

I think we don't know what the bottom of the well of machine learning is, or what the ceiling is. Because just to call something machine learning, or just to call something statistics, or just to call it math or computation, doesn't mean much. Nuclear weapons are just physics. To me, it's very interesting and surprising how far machine learning has taken us.

6202.865 - 6209.35 Marc Andreessen

No, but we knew that nuclear physics would lead to weapons. That's why the scientists of that era were always in this huge dispute about building the weapons. This is different.

6209.811 - 6211.512 Lex Fridman

Where does machine learning lead? Do we know?

6211.652 - 6227.445 Marc Andreessen

We don't know, but this is my point. It's different. We actually don't know. And this is where the sleight of hand kicks in. This is where it goes from being a scientific topic to being a religious topic. And that's why I specifically called it out, because that's what happens. They do the vocabulary shift. All of a sudden, you're talking about something that's not actually real.

6227.694 - 6233.701 Lex Fridman

Well, then maybe you could also, as part of that, define the Western tradition of millennialism.

6234.101 - 6236.284 Marc Andreessen

Yes. End of the world. Apocalypse.

6236.664 - 6237.065 Lex Fridman

What is it?

6237.165 - 6247.857 Marc Andreessen

Apocalypse cults. Well, so we, of course, live in a Judeo-Christian, but primarily Christian, saturated, you know, post-Christian, secularized-Christian kind of world in the West.

6248.657 - 6268.831 Marc Andreessen

And of course, core to Christianity is the idea of the second coming and revelations and Jesus returning and the thousand year utopia on earth and then the rapture and all that stuff. We collectively as a society, we don't necessarily take all that fully seriously now. So what we do is we create our secularized versions of that. We keep looking for utopia.

6268.891 - 6280.312 Marc Andreessen

We keep looking for basically the end of the world. And so what you see over decades is basically a pattern of these sorts of cults. This is what cults are. This is how cults form: they form around some theory of the end of the world.

6280.512 - 6296.265 Marc Andreessen

And so the People's Temple cult, the Manson cult, the Heaven's Gate cult, the David Koresh cult, what they're all organized around is like there's going to be this thing that's going to happen that's going to basically bring civilization crashing down. And then we have this special elite group of people who are going to see it coming and prepare for it.

6297.006 - 6303.328 Marc Andreessen

And then they're the people who are either going to stop it or, failing that, they're going to be the people who survive to the other side and ultimately get credit for having been right.

6303.848 - 6305.308 Lex Fridman

Why is that so compelling, do you think?

6305.508 - 6314.391 Marc Andreessen

Because it satisfies this very deep need we have for transcendence and meaning that got stripped away when we became secular.

6314.87 - 6319.117 Lex Fridman

Yeah, but why does the transcendence involve the destruction of human civilization?

6319.137 - 6331.846 Marc Andreessen

Because, like, it's a very deep psychological thing, because it's like: how plausible is it that we live in a world where everything's just kind of all right? How exciting is that?

6331.966 - 6354.414 Lex Fridman

We want more than that. But that's the deep question I'm asking. Why is it not exciting to live in a world where everything's just all right? I think most of the animal kingdom would be so happy with just all right. Because that means survival. Maybe that's what it is. Why are we conjuring up things to worry about?

6355.134 - 6371.423 Marc Andreessen

So C.S. Lewis called it the God-shaped hole. So there's a God-shaped hole in the human experience, consciousness, soul, whatever you want to call it, where there's got to be something that's bigger than all this. There's got to be something transcendent. There's got to be something that is bigger: a bigger purpose, a bigger meaning.

6372.283 - 6390.759 Marc Andreessen

And so we have run the experiment of: we're just going to use science and rationality, and everything's just going to kind of be as it appears. And a large number of people have found that very deeply wanting and have constructed narratives. And this is the story of the 20th century, right? Communism was one of those. Communism was a form of this.

6391.079 - 6397.485 Marc Andreessen

Nazism was a form of this. You know, some people, you know, you can see movements like this playing out all over the world right now.

6397.955 - 6403.078 Lex Fridman

So you construct a kind of devil, a kind of source of evil, and we're going to transcend beyond it.

6403.358 - 6420.705 Marc Andreessen

Yeah, and the millenarians, when you see a millenarian cult, they put a really specific point on it, which is the end of the world, right? There is some change coming. And that change that's coming is so profound and so important that it's either going to lead to utopia or hell on earth. Right.

6421.086 - 6436.291 Marc Andreessen

Um, and then, you know, it's like, what if you actually knew that that was going to happen? Right? What would you do? Right? How would you prepare yourself for it? How would you come together with a group of like-minded people? What would you do? Would you plant caches of weapons in the woods?

6436.411 - 6442.533 Marc Andreessen

Would you, like, you know, I don't know, create underground bunkers? Would you, you know, spend your life trying to figure out a way to avoid having it happen?

6442.952 - 6458.013 Lex Fridman

Yeah, that's a really compelling, exciting idea to have a club over, to have a little bit of a tribe, like a get-together on a Saturday night to drink some beers and talk about the end of the world and how you are the only ones who have figured it out.

6458.837 - 6471.71 Marc Andreessen

And then once you lock in on that, how can you do anything else with your life? This is obviously the thing that you have to do. And then there's a psychological effect that you alluded to. There's a psychological effect where if you take a set of true believers and you leave them to themselves, they get more radical because they self-radicalize each other.

6471.73 - 6476.495 Lex Fridman

That said, it doesn't mean they're not sometimes right.

6476.535 - 6479.157 Marc Andreessen

Yeah, the end of the world might be coming. Yes, correct. They might be right.

6482.401 - 6482.761 Lex Fridman

Exactly.

6483.844 - 6501.254 Lex Fridman

I mean, we'll talk about nuclear weapons because you have a really interesting little moment that I learned about in your essay. But, you know, sometimes it could be right. Because we're still developing more and more powerful technologies in this case. And we don't know what impact they will have on human civilization.

6501.774 - 6508.378 Lex Fridman

Well, we can highlight all the different predictions about how it will be positive. But the risks are there. And you discussed some of them.

6508.803 - 6524.534 Marc Andreessen

Well, the steel man... actually, the steel man and its refutation are the same, which is: you can't predict what's going to happen, right? You can't rule out that this will end everything, right? But the response to that is: you have just made a completely non-scientific claim. You've made a religious claim, not a scientific claim.

6524.594 - 6526.095 Lex Fridman

How does it get disproven?

6526.115 - 6545.912 Marc Andreessen

And there's no... by definition, with these kinds of claims, there's no way to disprove them. Yeah. And so you just go right down the list: there's no hypothesis, there's no testability of the hypothesis, there's no way to falsify the hypothesis, there's no way to measure progress along the arc. It's just all completely missing. And so it's not scientific.

6546.313 - 6565.17 Lex Fridman

Well, I don't think it's completely missing. It's somewhat missing. So, for example, the people that say AI is going to kill all of us, I mean, they usually have ideas about how to do that, whether it's the paperclip maximizer or it escapes. There's a mechanism by which you can imagine it killing all humans.

6566.671 - 6595.171 Lex Fridman

And you can disprove it by saying there is a limit to the speed at which intelligence increases, or maybe take the rigorously described model of how it could happen and say, no, here's a physics limitation. There's a physical limitation to how these systems would actually do damage to human civilization.

6595.652 - 6601.577 Lex Fridman

And it is possible they will kill 10 to 20% of the population, but it seems impossible for them to kill 99%.

6603.519 - 6619.022 Marc Andreessen

There's practical counter-arguments, right? So you mentioned basically what I described as the thermodynamic counter-argument. So sitting here today, it's like, where would the evil AGI get the GPUs? Because they don't exist. So you're going to have a very frustrated baby evil AGI who's going to be trying to buy Nvidia stock or something to get them to finally make some chips.

6619.902 - 6635.632 Marc Andreessen

So the serious form of that is the thermodynamic argument, which is like, okay, where's the energy going to come from? Where's the processor going to be running? Where's the data center going to be happening? How is this going to be happening in secret such that you know it's not... So that's a practical counter argument to the runaway AGI thing. And we can discuss that.

6635.652 - 6654.663 Marc Andreessen

I have a deeper objection to it, which is this is all forecasting. It's all modeling. It's all future prediction. It's all future hypothesizing. It's not science. It is the opposite of science. So, to pull up Carl Sagan: extraordinary claims require extraordinary proof, right? These are extraordinary claims.

6655.063 - 6667.391 Marc Andreessen

The policies that are being called for, right, to prevent this are of extraordinary magnitude, and I think would cause extraordinary damage. And this is all being done on the basis of something that is literally not scientific. It's not a testable hypothesis.

6667.431 - 6674.477 Lex Fridman

So the moment you say AI is going to kill all of us, therefore we should ban it or that we should regulate all that kind of stuff, that's when it starts getting serious.

6674.597 - 6676.619 Marc Andreessen

Or start, you know, military airstrikes on data centers.

6677.239 - 6677.62 Lex Fridman

Oh boy.

6677.96 - 6687.468 Marc Andreessen

Right? And like... Yeah, this one starts getting real weird. So here's the problem with millenarian cults. They have a hard time staying away from violence.

6689.147 - 6691.55 Lex Fridman

Yeah, but violence is so fun.

6693.652 - 6704.985 Marc Andreessen

If you're on the right end of it, they have a hard time avoiding violence. The reason they have a hard time avoiding violence is if you actually believe the claim, right, then what would you do to stop the end of the world? Well, you would do anything, right?

6705.705 - 6721.115 Marc Andreessen

And again, if you just look at the history of millenarian cults, this is where you get the People's Temple and everybody killing themselves in the jungle. And this is where you get Charles Manson sending his followers in to kill the pigs. Like, this is the problem with these cults. They have a very hard time drawing the line at actual violence.

6730.761 - 6730.981 Lex Fridman

Yeah.

6731.625 - 6755.455 Lex Fridman

But that's kind of the extremes. The extremes of anything are always concerning. It's also possible to believe that AI has a very high likelihood of killing all of us, and therefore we should maybe consider slowing development, or regulating. So not violence or any of these kinds of things, but saying, like, all right, let's take a pause here.

6755.475 - 6769.505 Lex Fridman

You know, biological weapons, nuclear weapons, like whoa, whoa, whoa, whoa, whoa. This is like serious stuff. We should be careful. So it is possible to kind of have a more rational response, right? If you believe this risk is real.

6769.665 - 6770.226 Marc Andreessen

Believe. Yeah.

6770.911 - 6776.714 Lex Fridman

Yes, so is it possible to have a scientific approach to the prediction of the future?

6776.954 - 6790.401 Marc Andreessen

I mean, we just went through this with COVID. What do we know about modeling? What did we learn about modeling with COVID? There's a lot of lessons. They didn't work at all. They worked poorly. The models were terrible. The models were useless.

6790.901 - 6808.232 Lex Fridman

I don't know if the models were useless or the people interpreting the models and then the centralized institutions that were creating policy rapidly based on the models. and leveraging the models in order to support their narratives versus actually interpreting the error bars and the models and all that kind of stuff.

6808.252 - 6822.561 Marc Andreessen

What you had with COVID, in my view, what you had with COVID is you had these experts showing up. They claimed to be scientists and they had no testable hypotheses whatsoever. They had a bunch of models. They had a bunch of forecasts and they had a bunch of theories and they laid these out in front of policymakers and policymakers freaked out and panicked, right?

6823.201 - 6833.051 Marc Andreessen

And implemented a whole bunch of like really like terrible decisions that we're still living with the consequences of. And there was never any empirical foundation to any of the models. None of them ever came true.

6833.331 - 6840.699 Lex Fridman

Yeah, to push back, there were certainly Baptists and bootleggers in the context of this pandemic. But there's still a usefulness to models, no?

6841.319 - 6845.944 Marc Andreessen

So not if they're reliably wrong, right? Then they're actually like anti-useful, right? They're actually damaging. Right.

6846.044 - 6859.054 Lex Fridman

But what do you do with a pandemic? What do you do with any kind of threat? Don't you want to kind of have several models to play with as part of the discussion of like, what the hell do we do here? I mean, do they work?

6860.034 - 6875.045 Marc Andreessen

Because is there an expectation that they actually work, that they have actual predictive value? I mean, as far as I can tell with COVID, the policymakers just psyched themselves into believing that there was substance there. I mean, look, the scientists were at fault. The quote-unquote scientists showed up. So I had some insight into this.

6875.085 - 6880.168 Marc Andreessen

So there was... remember, the Imperial College models out of London were the ones that were treated as the gold standard models.

6880.669 - 6880.829 Lex Fridman

Yeah.

6880.869 - 6894.038 Marc Andreessen

So a friend of mine runs a big software company, and he was like, wow, COVID's really scary. And he contacted this researcher and was like, you know, do you need some help? You've been building this model on your own for 20 years. Would you like our coders to basically restructure it so it can be fully adapted for COVID?

6894.058 - 6898.901 Marc Andreessen

And the guy said yes and sent over the code. And my friend said it was like the worst spaghetti code he's ever seen.

6899.181 - 6910.89 Lex Fridman

That doesn't mean it's not possible to construct a good model of a pandemic, with the correct error bars, with a high number of parameters that are continuously updated, many times a day, as we get more data about the pandemic.

6911.291 - 6935.829 Lex Fridman

I would like to believe that when a pandemic hits the world, the best computer scientists in the world, the best software engineers, respond aggressively, and take as input the data that we know about the virus, and as output say: here's what's happening in terms of how quickly it's spreading, in terms of hospitalizations and deaths and all that kind of stuff. Here's how contagious it likely is.

6936.27 - 6963.021 Lex Fridman

Here's how deadly it likely is, based on different conditions, based on different ages and demographics and all that kind of stuff. So here are the best kinds of policy. It feels like... you could have models, machine learning, that don't perfectly predict the future, but they help you do something. Because there are pandemics that are like, meh, they don't really do much harm.

6963.142 - 6984.831 Lex Fridman

And there are pandemics, you can imagine them, that could do a huge amount of harm. Like, they can kill a lot of people. So you should probably have some kind of data-driven models that keep updating, that allow you to make decisions based on how bad this thing is. Now, you can criticize how horrible all that went with the response to this pandemic.

6984.851 - 6987.013 Lex Fridman

But I just feel like there might be some value to models.
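The kind of continuously updated, data-driven model described above can be sketched, at its very simplest, as an SIR (Susceptible-Infected-Recovered) compartment model. This is only an illustrative toy, not any real forecasting system: the transmission rate, recovery rate, and population below are made-up assumptions, and a serious model would re-estimate them from incoming case data and report error bars.

```python
# Toy SIR epidemic model. All parameters are illustrative assumptions,
# not estimates for any real disease.

def sir_step(s, i, r, beta, gamma, dt=1.0):
    """Advance the SIR model by one time step (simple Euler integration)."""
    n = s + i + r
    new_infections = beta * s * i / n * dt   # beta: transmission rate per day
    new_recoveries = gamma * i * dt          # gamma: recovery rate per day
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

def simulate(population=1_000_000, initial_infected=100,
             beta=0.3, gamma=0.1, days=160):
    """Run the model forward; return peak concurrent infections and total recovered."""
    s, i, r = population - initial_infected, float(initial_infected), 0.0
    peak = i
    for _ in range(days):
        s, i, r = sir_step(s, i, r, beta, gamma)
        peak = max(peak, i)
    return peak, r

peak, total = simulate()
```

With these assumed parameters (basic reproduction number beta/gamma = 3), the toy epidemic runs through most of the population and peaks at a large fraction of it; varying beta or gamma shows how sensitive the outputs are to the inputs, which is exactly where the error bars in the conversation above would matter.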

6987.213 - 7000.001 Marc Andreessen

So to be useful at some point, it has to be predictive, right? So the easy thing for me to do is to say, obviously, you're right. Obviously, I want to see that just as much as you do, because anything that makes it easier to navigate through society through a wrenching risk like that, that sounds great.

7000.982 - 7012.009 Marc Andreessen

You know, the harder objection to it is just simply: you are trying to model a complex dynamic system with 8 billion moving parts. Like, not possible. It's very tough. Can't be done. Complex systems can't be done.

7012.93 - 7016.753 Lex Fridman

Machine learning says, hold my beer. Well, it's possible. No? I don't know.

7017.073 - 7032.226 Marc Andreessen

I would like to believe that it is. Yeah. I would put it this way: I think where you and I would agree is that we would like that to be the case. We are strongly in favor of it. I think we would also agree that no such thing exists with respect to COVID or pandemics, at least that neither you nor I are aware of. I'm not aware of anything like that today.

7032.526 - 7052.722 Lex Fridman

My main worry with the response to the pandemic, same as with aliens, is that even if such a thing existed, and it's possible it existed, the policymakers were not paying attention. There was no mechanism that allowed those kinds of models to percolate out.

7052.802 - 7060.047 Marc Andreessen

Oh, I think we had the opposite problem during COVID. I think these people with basically fake science had too much access to the policymakers.

7061.332 - 7072.156 Lex Fridman

Right. But the policymakers also wanted, they had a narrative in mind, and they also wanted to use whatever model that fit that narrative to help them out. So it felt like there was a lot of politics and not enough science.

7072.336 - 7079.598 Marc Andreessen

Although a big part of what was happening, a big reason we got lockdowns for as long as we did was because these scientists came in with these doomsday scenarios that were just completely off the hook.

7079.998 - 7086.422 Lex Fridman

Scientists in quotes. That's not... Quote-unquote scientists. That's not... Okay. Let's give love to science. That is the way out.

7086.802 - 7099.55 Marc Andreessen

Science is a process of testing hypotheses. Yeah. Modeling does not involve testable hypotheses, right? I don't even know that modeling actually qualifies as science. Maybe that's a side conversation we could have sometime over a beer.

7099.77 - 7102.631 Lex Fridman

That's really interesting. But what do we do about the future?

7102.691 - 7112.777 Marc Andreessen

I mean, what do we do? So, number one, we start with humility. It goes back to this thing of how do we determine the truth? Yeah. Number two is we don't believe... you know, it's the old, I've got a hammer, everything looks like a nail, right?

7113.958 - 7124.483 Marc Andreessen

I've got... oh, this is one of the reasons I gave Lex a book, the topic of which is what happens when scientists basically stray off the path of technical knowledge and start to weigh in on politics and societal issues.

7124.863 - 7125.904 Lex Fridman

In this case, philosophers.

7126.144 - 7135.214 Marc Andreessen

Well, in this case, philosophers, but he actually talks in this book about, like, Einstein. He talks about the nuclear age and Einstein. He talks about the physicists actually doing very similar things at the time.

7135.415 - 7140.801 Lex Fridman

The book is When Reason Goes on Holiday: Philosophers in Politics by Neven Sesardic.

7141.301 - 7156.678 Marc Andreessen

And it's just a story. It's a story. There are other books on this topic, but this is a new one that's really good. It's just a story of what happens when experts in a certain domain decide to weigh in and become basically social engineers and political, you know, basically political advisors. And it's just a story of just unending catastrophe. Right.

7156.738 - 7158 Marc Andreessen

And I think that's what happened with COVID again.

7159.505 - 7167.387 Lex Fridman

Yeah, I found this book a highly entertaining and eye-opening read filled with amazing anecdotes of irrationality and craziness by famous recent philosophers.

7167.547 - 7169.567 Marc Andreessen

After you read this book, you will not look at Einstein the same.

7170.047 - 7170.867 Lex Fridman

Oh, boy. Yeah.

7171.888 - 7192.952 Marc Andreessen

Don't destroy my heroes. He will not be a hero of yours anymore. I'm sorry. You probably shouldn't read the book. All right. But here's the thing. The AI risk people, they don't even have the COVID model. At least not that I'm aware of. No. There's not even the equivalent of the COVID model. They don't even have the spaghetti code. They've got a theory and a warning and a this and a that.

7193.192 - 7209.977 Marc Andreessen

And like, if you ask like, okay, well, here's, here's the, I mean, the ultimate example is okay. How do we know, right? How do we know that an AI is running away? Like, how do we know that the FOOM takeoff thing is actually happening? And the only answer that any of these guys have given that I've ever seen is, oh, it's when the loss rate, the loss function in the training drops, right?

7209.997 - 7228.572 Marc Andreessen

That's when you need to, like, shut down the data center. Right. And it's like, well, that's also what happens when you're successfully training a model. Like, what even is that? This is not science. It's not a model. It's not anything. There's nothing to argue with. It's like, you know, punching jello. What do you even respond to?
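Marc's objection here can be made concrete with a toy sketch (my illustration, not something from the conversation): a naive "runaway detector" that flags a sharp drop in training loss, the criterion he describes, trips on a perfectly ordinary, healthy training curve.

```python
import math

def naive_foom_alarm(losses, window=3, drop_ratio=0.5):
    """Flag step i when the mean loss over the last `window` steps falls
    below `drop_ratio` times the mean of the window just before it."""
    alarms = []
    for i in range(2 * window, len(losses) + 1):
        prev = sum(losses[i - 2 * window:i - window]) / window
        curr = sum(losses[i - window:i]) / window
        if curr < drop_ratio * prev:
            alarms.append(i - 1)
    return alarms

# A perfectly ordinary, healthy run: smoothly decaying training loss.
ordinary_run = [4.0 * math.exp(-0.25 * step) for step in range(30)]

print(naive_foom_alarm(ordinary_run))  # non-empty: the "runaway" alarm trips on normal training
```

A flat loss curve (`[1.0] * 30`) raises no alarm at all, so this detector cannot distinguish a hypothesized takeoff from training simply going well, which is exactly the objection.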

7228.772 - 7248.579 Lex Fridman

So let me just push back on that. I don't think they have good metrics of when the FOOM is happening, but I think it's possible to have that. Just as you speak now, I mean, it's possible to imagine there could be measures. It's been 20 years. No, for sure, but it's been only weeks since we had a big enough breakthrough in language models.

7248.659 - 7265.825 Lex Fridman

We can start to actually have... the thing is, the AI Doomer stuff didn't have any actual systems to really work with, and now there are real systems you can start to analyze, like, how does this stuff go wrong? And I think you kind of agree that there are a lot of risks we can analyze. The benefits outweigh the risks in many cases.

7266.365 - 7267.466 Marc Andreessen

Well, the risks are not existential.

7268.526 - 7268.747 Lex Fridman

Yes.

7268.927 - 7280.821 Marc Andreessen

Well, not in the FOOM, not in the FOOM-paperclip sense. Let me, okay. There's another sleight of hand that you just alluded to, another sleight of hand that happens, which is very... I'm very attuned to the sleight of hand thing. So the book Superintelligence, right?

7280.841 - 7295.091 Marc Andreessen

Which is Nick Bostrom's book, which is the origin of a lot of this stuff, which was written, you know, whatever, 10 years ago or something. So he does this really fascinating thing in the book, which is he basically says there are many possible routes to machine intelligence, to artificial intelligence.

7295.251 - 7309.835 Marc Andreessen

And he describes all the different routes to artificial intelligence, all the different possible, everything from biological augmentation through to, you know, all these different things. One of the ones that he does not describe is large language models because, of course, the book was written before they were invented and so they didn't exist.

7311.491 - 7325.502 Marc Andreessen

In the book, he describes them all and then he proceeds to treat them all as if they're exactly the same thing. He presents them all as sort of an equivalent risk to be dealt with in an equivalent way to be thought about the same way. And then the risk, the quote unquote risk that's actually emerged is actually a completely different technology than he was even imagining.

7325.602 - 7335.13 Marc Andreessen

And yet all of his theories and beliefs are being transplanted by this movement, like straight on to this new technology. And so again, like there's no other area of science or technology where you do that, right?

7335.93 - 7346.158 Marc Andreessen

When you're dealing with organic chemistry versus inorganic chemistry, you don't just say, oh, with respect to either one, basically maybe growing up and eating the world or something, they're just gonna operate the same way. You don't.

7346.959 - 7366.453 Lex Fridman

But you can start talking about, as we get more and more actual systems that start to get more and more intelligent, you can start to actually have more scientific arguments here. High level, you can talk about the threat of autonomous weapon systems back before we had any automation in the military. And that would be like very fuzzy kind of logic.

7366.513 - 7389.542 Lex Fridman

But the more and more you have drones, they're becoming more and more autonomous. You can start imagining, okay, what does that actually look like? And what's the actual threat of autonomous weapon systems? How does it go wrong? And still, it's very vague. But you start to get a sense of like, all right, it should probably be illegal or wrong or not allowed to do like...

7390.622 - 7397.249 Lex Fridman

mass deployment of fully autonomous drones that are doing aerial strikes on large areas.

7397.269 - 7403.736 Marc Andreessen

I think it should be required. No, no, no, I think it should be required that aerial vehicles are automated.

7404.877 - 7405.958 Lex Fridman

Okay, so you want to go the other way.

7406.018 - 7420.927 Marc Andreessen

I want to go the other way. I think it's obvious that the machine is going to make a better decision than the human pilot. I think it's obvious that it's in the best interest of both the attacker and the defender and humanity at large if machines are making more of these decisions and not people. I think people make terrible decisions in times of war.

7421.828 - 7425.468 Lex Fridman

But there's ways this can go wrong too, right?

7425.548 - 7441.271 Marc Andreessen

Wars go terribly wrong now. This goes back to that whole thing about does the self-driving car need to be perfect versus does it need to be better than the human driver? Does the automated drone need to be perfect or does it need to be better than a human pilot at making decisions under enormous amounts of stress and uncertainty?

7441.775 - 7449.978 Lex Fridman

Yeah, well, on average, the worry that AI folks have is the runaway. They're going to come alive, right?

7450.018 - 7451.779 Marc Andreessen

Then again, that's the sleight of hand, right?

7452.019 - 7456.101 Lex Fridman

Or not come alive. Hold on a second. You lose control.

7456.901 - 7461.563 Marc Andreessen

But then they're going to develop goals of their own. They're going to develop a mind of their own. They're going to develop their own.

7462.083 - 7484.734 Lex Fridman

No, more like a Chernobyl-style meltdown: just bugs in the code that accidentally result in the bombing of large civilian areas, to a degree that's not possible in the current military strategies controlled by humans.

7485.174 - 7487.816 Marc Andreessen

Actually, we've been doing a lot of mass bombings of cities for a very long time.

7487.836 - 7489.877 Lex Fridman

Yes. And a lot of civilians died.

7489.897 - 7507.587 Marc Andreessen

And a lot of civilians died. And if you watch the documentary The Fog of War, about McNamara, a big part of it talks about the firebombing of the Japanese cities, burning them straight to the ground. The devastation from the American military firebombing the cities in Japan was considerably bigger than from the use of nukes. So we've been doing that for a long time.

7507.907 - 7514.211 Marc Andreessen

We also did that to Germany, by the way. Germany did that to us. That's an old tradition. The minute we got airplanes, we started doing indiscriminate bombing.

7514.671 - 7524.537 Lex Fridman

So one of the things that the modern US military can do with technology, with automation, but technology more broadly, is higher and higher precision strikes.

7524.737 - 7538.905 Marc Andreessen

Yeah. So precision is obviously, and this is the JDAM, right? So there was this big advance called the JDAM, which basically was strapping a GPS transceiver to an unguided bomb and turning it into a guided bomb. And yeah, that's great. Like, look, that's been a big advance.

7539.045 - 7551.611 Marc Andreessen

But, and that's like a baby version of this question, which is, okay, do you want like the human pilot, like guessing where the bomb's going to land? Or do you want like the machine, like guiding the bomb to its destination? That's a baby version of the question. The next version of the question is, do you want the human or the machine deciding whether to drop the bomb?
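The precision point can be sketched with a toy model (a hypothetical illustration of mine, not real ballistics or anything the speakers describe): an unguided weapon keeps whatever aiming error it starts with, while a guided one trims a fraction of its remaining offset toward the target coordinate at each update.

```python
import random

def miss_distance(guided, aim_error_m=120.0, steps=20, correction=0.3, seed=0):
    """Toy 1-D drop: start with a random aiming error in meters; a guided
    weapon corrects a fraction of the remaining offset at each update."""
    offset = random.Random(seed).gauss(0.0, aim_error_m)
    if guided:
        for _ in range(steps):
            offset -= correction * offset  # steer toward the target coordinate
    return abs(offset)

# Same initial errors for both cases, thanks to matching seeds.
unguided = [miss_distance(False, seed=i) for i in range(200)]
guided = [miss_distance(True, seed=i) for i in range(200)]
print(sum(unguided) / 200, sum(guided) / 200)  # guided mean miss is orders of magnitude smaller
```

Because each correction multiplies the remaining offset by 0.7, twenty updates shrink the miss by a factor of roughly a thousand, which is the qualitative story of guidance kits.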

7552.071 - 7568.34 Marc Andreessen

Everybody just assumes the human's going to do a better job, for what I think are fundamentally suspicious reasons. Emotional, psychological reasons. I think it's very clear that the machine's going to do a better job making that decision, because the humans making that decision are god-awful, just terrible. Yeah. Right. And so, yeah, this is the thing.

7568.36 - 7570.841 Marc Andreessen

And then, let's get to... can I do one more sleight of hand?

7570.922 - 7574.604 Lex Fridman

Yes. Okay. Please. I'm a magician, you could say.

7574.664 - 7588.59 Marc Andreessen

One more sleight of hand. These things are going to be so smart, right? That they're going to be able to destroy the world and wreak havoc and, like, do all this stuff and plan and evade us and have all their secret things and their secret factories and all this stuff. But they're so stupid that they're gonna get, like, tangled up in their code.

7588.61 - 7604.181 Marc Andreessen

And that's the thing: they're not gonna come alive, but there's gonna be some bug that's gonna cause them to, like, turn us all into paperclips. They're gonna be genius in every way other than the actual bad goal. And that's just, like, a ridiculous discrepancy. And you can prove this today.

7604.201 - 7611.665 Marc Andreessen

You can actually address this today for the first time with LLMs, which is you can actually ask LLMs to resolve moral dilemmas.

7612.246 - 7612.346 Lex Fridman

Yeah.

7612.806 - 7626.333 Marc Andreessen

So you can create the scenario, you know, dot, dot, dot, this, that, this, that, this, that. What would you as the AI do in this circumstance? And they don't just say, destroy all humans, destroy all humans. They will give you actually very nuanced, moral, practical, trade-off-oriented answers.

7626.854 - 7632.948 Marc Andreessen

And so we actually already have the kind of AI that can actually think this through and can actually reason about goals.

7633.789 - 7646.057 Lex Fridman

Well, the hope is that AGI or like various super intelligent systems have some of the nuance that LLMs have. And the intuition is they most likely will because even these LLMs have the nuance.

7647.357 - 7657.684 Marc Andreessen

LLMs are really, this is actually worth spending a moment on, LLMs are really interesting to have moral conversations with. And that, I didn't expect I'd be having a moral conversation with a machine in my lifetime.

7657.704 - 7665.088 Lex Fridman

Yeah. And let's remember, we're not really having a conversation with a machine. We're having a conversation with the entirety of the collective intelligence of the human species.

7665.348 - 7665.608 Marc Andreessen

Exactly.

7666.148 - 7672.031 Lex Fridman

Yes, correct. But it's possible to imagine autonomous weapon systems that are not using LLMs.

7672.531 - 7682.624 Marc Andreessen

But if they're smart enough to be scary, why are they not smart enough to be wise? Like that's the part where it's like, I don't know how you get the one without the other.

7682.784 - 7685.726 Lex Fridman

Is it possible to be super intelligent without being super wise?

7686.006 - 7696.573 Marc Andreessen

Well, again, you're back to that. I mean, then you're back to a classic autistic computer, right? Like, you're back to just a blind rule follower. I've got this core rule, it's the paperclip thing: I've got this core rule and I'm just going to follow it to the end of the earth.

7696.613 - 7705.499 Marc Andreessen

And it's like, well, but everything you're going to be doing to execute that rule is going to be at a super-genius level that humans aren't going to be able to counter. It's a mismatch in the definition of what the system is capable of.

7706.292 - 7718.299 Lex Fridman

Unlikely, but not impossible, I think. But again, here you get to, like, okay... No, I'm not saying... If it's unlikely but not impossible, that means the fear should be correctly calibrated.

7718.379 - 7720 Marc Andreessen

Extraordinary claims require extraordinary proof.

7720.321 - 7743.111 Lex Fridman

Well, okay. So one interesting sort of tangent I would love to take on this, because you mentioned this in the essay about nuclear, which was also... I mean, you don't shy away from a little bit of a spicy take. So Robert Oppenheimer famously said, now I am become death, the destroyer of worlds, as he witnessed the first detonation of a nuclear weapon on July 16th, 1945.

7743.231 - 7769.529 Lex Fridman

And you write an interesting historical perspective quote, recall that John von Neumann responded to Robert Oppenheimer's famous hand-wringing about the role of creating nuclear weapons, which, you note, helped end World War II and prevent World War III, with some people confess guilt to claim credit for the sin. And you also mentioned that Truman was harsher after meeting Oppenheimer.

7770.129 - 7772.951 Lex Fridman

He said that, don't let that crybaby in here again.

7774.298 - 7782.763 Marc Andreessen

Real quote, by the way, from Dean Acheson. Oh, boy. Because Oppenheimer didn't just say the famous line.

7783.043 - 7783.264 Lex Fridman

Yeah.

7783.504 - 7793.49 Marc Andreessen

He then spent years going around, basically moaning, going on TV and going into the White House and basically just doing this hair shirt thing, this sort of self-critical, like, oh, my God, I can't believe how awful I am.

7793.91 - 7801.415 Lex Fridman

So he's widely considered, perhaps because of the hand-wringing, as the father of the atomic bomb.

7803.754 - 7814.687 Marc Andreessen

This is von Neumann's criticism of him: he tried to have his cake and eat it too. Von Neumann, of course, is a very different kind of personality, and he's just like, this is an incredibly useful thing. I'm glad we did it.

7815.185 - 7839.913 Lex Fridman

Yeah. Well, von Neumann is widely credited as being one of the smartest humans of the 20th century. Everybody says, this is the smartest person I've ever met, when they've met him. Anyway, smart doesn't mean wise. I would love to sort of... Can you make the case both for and against the critique of Oppenheimer here?

7839.933 - 7845.055 Lex Fridman

Because we're talking about nuclear weapons. Boy, do they seem dangerous.

7845.655 - 7854.817 Marc Andreessen

So the critique goes deeper, and I left this out. Here's the real substance. I left it out because I didn't want to dwell on nukes in my AI paper. But here's the deeper thing that happened.

7854.837 - 7873.008 Marc Andreessen

And I'm really curious, this movie coming out this summer, I'm really curious to see how far he pushes this because this is the real drama in the story, which is it wasn't just a question of are nukes good or bad. It was a question of should Russia also have them? And what actually happened was Russia got the – America invented the bomb. Russia got the bomb. They got the bomb through espionage.

7873.048 - 7888.343 Marc Andreessen

American scientists and foreign scientists working on the American project, some combination of the two, basically gave the Russians the designs for the bomb. And that's how the Russians got the bomb. There's a dispute to this day over Oppenheimer's role in that.

7889.385 - 7899.959 Marc Andreessen

If you read all the histories, the kind of composite picture, and by the way, we now know a lot actually about Soviet espionage in that era because there's been all this declassified material in the last 20 years that actually shows a lot of very interesting things.

7900.519 - 7915.685 Marc Andreessen

But if you kind of read all the histories, what you kind of get is that Oppenheimer himself probably did not hand over the nuclear secrets. However, he was close to many people who did, including family members. And there were other members of the Manhattan Project who were Soviet assets and did hand over the bomb.

7915.745 - 7931.833 Marc Andreessen

And so the view that Oppenheimer and people like him had, that this thing is awful and terrible and oh my God and all this stuff, you could argue, fed into this ethos at the time that resulted in people, the Baptists, thinking that the only principled thing to do is to give the Russians the bomb.

7933.254 - 7946.162 Marc Andreessen

And so the moral beliefs on this thing and the public discussion and the role that the inventors of this technology play, this is the point of this book, when they kind of take on this sort of public intellectual moral kind of thing, it can have real consequences, right?

7946.182 - 7959.492 Marc Andreessen

Because we live in a very different world today because Russia got the bomb than we would have lived in had they not gotten the bomb. The entire 20th century, second half of the 20th century would have played out very different had those people not given Russia the bomb. And so the stakes were very high then.

7959.512 - 7976.422 Marc Andreessen

The good news today is nobody's sitting here today, I don't think, worrying about an analogous situation with respect to it. I'm not really worried that Sam Altman's going to decide to give the Chinese the design, although he did just speak at a Chinese conference, which is interesting. But I don't think that's what's at play here.

7976.962 - 7986.224 Marc Andreessen

But what's at play here are all these other fundamental issues around what do we believe about this and then what laws and regulations and restrictions are we gonna put on it? And that's where I draw like a direct straight line.

7986.824 - 7998.586 Marc Andreessen

And anyway, and my reading of the history on nukes is like the people who were doing the full hair shirt public, this is awful, this is terrible, actually had like catastrophically bad results from taking those views. And that's what I'm worried is gonna happen again.

7998.846 - 8009.859 Lex Fridman

But is there a case to be made that you really need to wake the public up to the dangers of nuclear weapons when they were first dropped? Like, really, like, educate them on, like, this is an extremely dangerous and destructive weapon.

8010.4 - 8016.768 Marc Andreessen

I think the education kind of happened quick and early. How? It was pretty obvious. How? We dropped one bomb and destroyed an entire city.

8017.665 - 8037.465 Lex Fridman

Yeah, so 80,000 people dead. But the reporting of that, you can report that in all kinds of ways. You can do all kinds of slants, like war is horrible, war is terrible. You can make it seem like the use of nuclear weapons is just a part of war and all that kind of stuff.

8038.546 - 8055.716 Lex Fridman

Something about the reporting and the discussion of nuclear weapons resulted in us being terrified and in awe of the power of nuclear weapons. And that potentially fed in a positive way into the game theory of mutually assured destruction.

8055.936 - 8057.518 Marc Andreessen

Well, so this gets to what actually happens.

8057.538 - 8059.742 Lex Fridman

Some of this is me playing devil's advocate here.

8059.842 - 8072.023 Marc Andreessen

Yeah, sure, of course. Let's get to what actually happened and then kind of back into that. So what actually happened, I believe, and again, I think this is a reasonable reading of history, is what actually happened was nukes then prevented World War III. And they prevented World War III through the game theory of mutually assured destruction.

8072.103 - 8090.419 Marc Andreessen

Had nukes not existed, there would have been no reason why the Cold War did not go hot. And the military planners at the time, on both sides, thought that there was going to be World War III on the plains of Europe. And they thought there was going to be, like, 100 million people dead. It was like the most obvious thing in the world to happen. And it's the dog that didn't bark.

8091.119 - 8095.543 Marc Andreessen

It may be like the best single net thing that happened in the entire 20th century is that that didn't happen.
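The deterrence logic Marc describes can be written down as a tiny game-theory sketch (illustrative payoffs of my choosing, not anything from the conversation): with an assured second strike, any first strike ends in mutual destruction, so neither side gains by deviating from mutual restraint.

```python
STRIKE, HOLD = "strike", "hold"

# Payoffs to (side A, side B); larger is better, values are arbitrary stand-ins.
# With a survivable second-strike force, any first strike triggers retaliation.
payoff = {
    (HOLD, HOLD): (0, 0),            # uneasy peace
    (STRIKE, HOLD): (-100, -100),    # B retaliates: mutual destruction
    (HOLD, STRIKE): (-100, -100),    # A retaliates: mutual destruction
    (STRIKE, STRIKE): (-100, -100),  # mutual destruction
}

def gain_from_deviating(a_move, b_move):
    """Change in A's payoff if A unilaterally switches moves."""
    flip = STRIKE if a_move == HOLD else HOLD
    return payoff[(flip, b_move)][0] - payoff[(a_move, b_move)][0]

print(gain_from_deviating(HOLD, HOLD))  # -100: striking first strictly loses
```

Since deviating from mutual restraint strictly hurts the striker, (hold, hold) is the stable outcome, which is the "dog that didn't bark" of the Cold War in this framing.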

8095.917 - 8119.573 Lex Fridman

Actually, just on that point, you say a lot of really brilliant things. It hit me just as you were saying it. I don't know why it hit me for the first time, but we got two wars in a span of 20 years. We could have kept getting more and more world wars and more and more ruthless. Actually, you could have had a US versus Russia war.

8119.913 - 8136.103 Marc Andreessen

You could have. By the way, there's another hypothetical scenario. The other hypothetical scenario is the Americans got the bomb, the Russians didn't. And then America's the big dog. And then maybe America would have had the capability to actually roll back the Iron Curtain. I don't know whether that would have happened, but it's entirely possible.

8137.083 - 8150.232 Marc Andreessen

And these people who had these moral positions, because they thought they could forecast, they could model the future of how this technology would get used, made a horrific mistake. Because they basically ensured that the Iron Curtain would continue for 50 years longer than it would have otherwise. And again, these are counterfactuals.

8150.272 - 8160.678 Marc Andreessen

I don't know that that's what would have happened. But the decision to hand the bomb over was a big decision. made by people who were very full of themselves.

8161.999 - 8169.541 Lex Fridman

Yeah, but so me as an American, me as a person that loves America, I also wonder if US was the only ones with the nuclear weapons.

8172.262 - 8177.824 Marc Andreessen

That was the argument of the guys who handed over the bomb. That was actually their moral argument.

8177.844 - 8191.469 Lex Fridman

Yeah, I would probably not hand it over to, I would be careful about the regimes you hand it over to. Maybe give it to like the British or something. Or like a democratically elected government?

8191.829 - 8201.09 Marc Andreessen

Well, look, there are people to this day who think that those Soviet spies did the right thing because they created a balance of terror as opposed to the U.S. having just... And by the way, let me... Balance of terror. Let's tell the full version of the story.

8201.11 - 8202.09 Lex Fridman

It has such a sexy ring to it.

8202.21 - 8214.633 Marc Andreessen

Okay, so the full version of the story is... John von Neumann's a hero of both yours and mine. The full version of the story is he advocated for a first strike. So when the U.S. had the bomb and Russia did not, he advocated for... He said, we need to strike them right now.

8216.053 - 8216.633 Lex Fridman

Strike Russia?

8216.753 - 8216.833 Marc Andreessen

Yeah.

8218.703 - 8220.304 Lex Fridman

Yes.

8220.685 - 8236.054 Marc Andreessen

Because he said World War III is inevitable. He was very hardcore. His theory was World War III is inevitable. We're definitely going to have World War III. The only way to stop World War III is we have to take them out right now. And we have to take them out right now before they get the bomb because this is our last chance.

8237.506 - 8240.19 Lex Fridman

Now, again, like... Is this an example of philosophers and politics?

8240.37 - 8242.493 Marc Andreessen

I don't know if that's in there or not, but it is in the standard biography.

8242.513 - 8243.474 Lex Fridman

No, but is it... Yeah.

8243.515 - 8253.308 Marc Andreessen

Meaning is that... Yeah, this is on the other side. So most of the case studies in books like this are the crazy people on the left. Yeah. Von Neumann is a story, arguably, of the crazy people on the right.

8254.129 - 8255.67 Lex Fridman

Yeah, stick to computing, John.

8255.87 - 8269.197 Marc Andreessen

Well, this is the thing, and this is the general principle, getting back to our core thing, which is like, I don't know whether any of these people should be making any of these calls. Yeah. Because there's nothing in either von Neumann's background or Oppenheimer's background or any of these people's background that qualifies them as moral authorities.

8270.005 - 8276.786 Lex Fridman

Well, this actually brings up the point of, in AI, who are the good people to reason about the morality, the ethics?

8277.506 - 8298.91 Lex Fridman

Outside of these risks, outside of the more complicated stuff that you agree on (you know, this will go into the hands of bad guys, which is dangerous in interesting, unpredictable ways), who is the right person, who are the right kinds of people, to make decisions on how to respond to it? Is it tech people?

8299.673 - 8317.597 Marc Andreessen

So the history of these fields, this is what he talks about in the book, is that the competence and capability and intelligence and training and accomplishments of senior scientists and technologists working on a technology, and then being able to make moral judgments on the use of their technology: that track record is terrible.

8318.437 - 8320.918 Marc Andreessen

That track record is like catastrophically bad.

8321.638 - 8326.979 Lex Fridman

The people that develop that technology are usually not going to be the right people

8328.867 - 8348.35 Marc Andreessen

So the claim is, of course, they're the knowledgeable ones. But the problem is they've spent their entire life in a lab, right? They're not theologians. So what you find when you read this, when you look at these histories, what you find is they generally are very thinly informed on history, on sociology, on theology, on morality, on ethics, etc.

8348.65 - 8363.107 Marc Andreessen

they tend to manufacture their own worldviews from scratch. They tend to be very sort of thin. They're not remotely the arguments that you would be having if you got, like, a group of highly qualified theologians or philosophers or, you know...

8363.647 - 8388.249 Lex Fridman

Well, let me sort of, as the devil's advocate takes a sip of whiskey, say that I agree with that, but also it seems like the people who are doing kind of the ethics departments in these tech companies go sometimes the other way. Yes. They're not nuanced on history or theology or this kind of stuff.

8388.569 - 8404.381 Lex Fridman

It almost becomes a kind of outraged activism towards directions that don't seem to be grounded in history and humility and nuance. It's, again, drenched with arrogance. So I'm not sure which is worse.

8404.822 - 8406.943 Marc Andreessen

Oh, no, they're both bad. Yeah, so definitely not them either.

8407.684 - 8408.324 Lex Fridman

But I guess...

8409.205 - 8424.35 Marc Andreessen

But look, this is hard, this is our problem. This goes back to where we started, which is, okay, who has the truth? And it's like, well, how do societies arrive at truth, and how do we figure these things out? Our elected leaders play some role in it. We all play some role in it.

8425.151 - 8434.214 Marc Andreessen

Um, there have to be some set of public intellectuals at some point that bring, you know, rationality and judgment and humility to it. Those people are few and far between. We should probably prize them very highly.

8434.574 - 8452.914 Lex Fridman

Yeah, celebrate humility in our public leaders. So getting to risk number two, will AI ruin our society? Short version, as you write, if the murder robots don't get us, the hate speech and misinformation will. And the action you recommend, in short, don't let the thought police suppress AI.

8455.376 - 8478.073 Marc Andreessen

Well, what is this risk of the effect of misinformation on society that's going to be catalyzed by AI? Yeah, so this is the social media thing. This is what you just alluded to. It's the activism kind of thing that's popped up in these companies, in the industry. And it's basically, from my perspective, part two of the war that played out over social media over the last 10 years.

8478.953 - 8504.271 Marc Andreessen

Um, because you probably remember, social media 10 years ago was basically, who even wants this? Who wants a photo of what your cat had for breakfast? Like, this stuff is silly and trivial, and why can't these nerds figure out how to invent something useful and powerful? And then, you know, certain things happened in the political system, and the polarity on that discussion switched all the way to: social media is the worst, most corrosive, most terrible, most awful technology ever invented, and then it leads to, you know, the wrong politicians and policies and

8505 - 8521.356 Marc Andreessen

politics and all this stuff. And that all got catalyzed into this very big kind of angry movement, both inside and outside the companies, to kind of bring social media to heel. And that got focused in particular on two topics: so-called hate speech and so-called misinformation. And that's been the saga playing out for the last decade.

8521.416 - 8538.714 Marc Andreessen

And I don't really want to argue the pros and cons of the sides, just to observe that that's been, like, a huge fight and has had big consequences for how these companies operate. Basically, those same sets of theories, that same activist approach, that same energy, is being transplanted straight to AI. And you see that already happening.

8538.774 - 8550.684 Marc Andreessen

It's why ChatGPT will answer, let's say, certain questions and not others. It's why it gives you the canned speech, you know, whenever it starts with "as a large language model, I cannot..." That basically means that somebody has reached in there and told it it can't talk about certain topics.

8551.505 - 8552.706 Lex Fridman

Do you think some of that is good?

8553.444 - 8572.008 Marc Andreessen

So it's an interesting question. So a couple observations. So one is the people who find this the most frustrating are the people who are worried about the murder robots. And in fact, the so-called X-risk people, they started with the term AI safety. The term became AI alignment.

8572.028 - 8584.932 Marc Andreessen

When the term became AI alignment is when this switch happened, from "we're worried it's going to kill us all" to "we're worried about hate speech and misinformation." The AI X-risk people have now renamed their thing "AI not-kill-everyoneism," which I have to admit is a catchy term.

8585.433 - 8594.917 Marc Andreessen

And they are very frustrated by the fact that the sort of activist-driven hate speech misinformation kind of thing is taking over, which is what's happened. It's taken over. The AI ethics field has been taken over by the hate speech misinformation people.

8595.874 - 8606.12 Marc Andreessen

Um, you know, look, would I like to live in a world in which like everybody was nice to each other all the time and nobody ever said anything mean and nobody ever used a bad word and everything was always accurate and honest. Like, that sounds great.

8606.361 - 8615.766 Marc Andreessen

Do I want to live in a world where there's like a centralized thought police working through the tech companies to enforce the view of a small set of elites that they're going to determine what the rest of us think and feel like? Absolutely not.

8616.727 - 8638.014 Lex Fridman

There could be a middle ground somewhere, like Wikipedia-type moderation. There's moderation on Wikipedia that is somehow crowdsourced, where you don't have centralized elites, but it's also not completely a free-for-all, because if you have the entirety of human knowledge at your fingertips, you can do a lot of harm.

8638.034 - 8667.336 Lex Fridman

If you have a good assistant that's completely uncensored, they can help you build a bomb. They can help you mess with people's physical well-being, right? Because that information is out there on the internet. And so presumably you could see the positives in censoring some aspects of an AI model when it's helping you commit literal violence.

8667.947 - 8679.997 Marc Andreessen

And there's a later section of the essay where I talk about bad people doing bad things. Yes. Right. And there's a set of things that we should discuss there. Yeah. What happens in practice is, as you alluded to already, these lines are not easy to draw.

8680.117 - 8698.168 Marc Andreessen

And what I've observed in the social media version of this is, the way I describe it is the slippery slope is not a fallacy, it's an inevitability. The minute you have this kind of activist personality that gets in a position to make these decisions... They take it straight to infinity. It goes into the crazy zone almost immediately and never comes back because people become drunk with power.

8699.528 - 8718.557 Marc Andreessen

Look, if you're in the position to determine what the entire world thinks and feels and reads and says, you're going to take it. And Elon has ventilated this with the Twitter files over the last three months, and it's just crystal clear how bad it got there. Yeah. Reason for optimism is what Elon is doing with community notes. So community notes is actually a very interesting thing.

8719.117 - 8727.322 Marc Andreessen

So what Elon is trying to do with community notes is he's trying to have it where there's only a community note when people who have previously disagreed on many topics agree on this one.
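The bridging idea Marc describes, where a note surfaces only when raters who normally disagree both find it helpful, can be sketched in a few lines. This is a toy illustration with invented rater data, not the actual Community Notes algorithm, which uses matrix factorization over real rating histories:

```python
# Toy sketch of "bridging-based" ranking: a note is shown only when
# raters who usually disagree with each other both rated it helpful.
# All rater data below is invented for illustration.

from itertools import combinations

# Hypothetical rating history: rater -> {note_id: 1 (helpful) / 0 (not)}
history = {
    "alice": {"n1": 1, "n2": 0, "n3": 1},
    "bob":   {"n1": 0, "n2": 1, "n3": 1},
    "carol": {"n1": 1, "n2": 0, "n3": 1},
}

def agreement(a, b):
    """Fraction of co-rated notes on which two raters agree."""
    shared = history[a].keys() & history[b].keys()
    if not shared:
        return None
    return sum(history[a][n] == history[b][n] for n in shared) / len(shared)

def show_note(note_id, threshold=0.5):
    """Show the note only if some pair of raters who usually DISAGREE
    (historical agreement below threshold) both rated this note helpful."""
    for a, b in combinations(history, 2):
        agr = agreement(a, b)
        if agr is not None and agr < threshold:
            if history[a].get(note_id) == 1 and history[b].get(note_id) == 1:
                return True
    return False

# alice and bob disagree on n1/n2 but both found n3 helpful
print(show_note("n3"))  # → True
print(show_note("n1"))  # → False
```

The design point is that no central moderator decides anything; visibility falls out of cross-camp agreement, which is exactly what makes it resistant to capture by one side.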

8728.416 - 8743.568 Lex Fridman

Yes, that's what I'm trying to get at: there could be Wikipedia-like models or community-type models that allow you to essentially either provide context or censor in a way that resists the slippery-slope nature.

8744.096 - 8758.832 Marc Andreessen

Now, there's an entirely different approach here, which is basically we have AIs that are producing content. We could also have AIs that are consuming content. And so one of the things that your assistant could do for you is help you consume all the content and basically tell you when you're getting played.

8759.412 - 8773.195 Marc Andreessen

So, for example, I'm going to want the AI that my kid uses, right, to be very, you know, child-safe. And I'm going to want it to filter for him all kinds of inappropriate stuff that he shouldn't be seeing just because he's a kid. Yeah. Right. And you see what I'm saying: you can implement that. Architecturally, you can solve this on the client side, right?

8773.715 - 8784.117 Marc Andreessen

Solving on the server side gives you an opportunity to dictate for the entire world, which I think is where you take the slippery slope to hell. There's another architectural approach, which is to solve this on the client side, which is certainly what I would endorse.
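A minimal sketch of the client-side architecture being endorsed here: the model's raw output arrives untouched, and a locally configured policy, chosen by the parent or user rather than the provider, decides what gets through. The classifier and category names are hypothetical placeholders, not any real API:

```python
# Toy sketch of client-side moderation: the server returns raw model
# output, and a locally configured filter decides what to display.
# The policy lives on the client, so no central party dictates it.
# Categories and the keyword "classifier" are invented placeholders;
# a real implementation would use a small local model.

BLOCKED_FOR_CHILD = {"violence", "weapons"}  # parent-chosen policy

def classify(text):
    """Stand-in for a local content classifier."""
    labels = set()
    if "bomb" in text.lower():
        labels |= {"violence", "weapons"}
    return labels

def client_side_filter(model_output, blocked=BLOCKED_FOR_CHILD):
    """Apply the local policy to whatever the (unfiltered) model returned."""
    if classify(model_output) & blocked:
        return "[filtered by your local policy]"
    return model_output

print(client_side_filter("Here is how a bomb works..."))   # filtered locally
print(client_side_filter("Here is how photosynthesis works..."))  # passes
```

The architectural contrast with server-side filtering is that each household can run a different `blocked` set, so one provider's choices never become everyone's.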

8785.157 - 8804.728 Lex Fridman

It's AI risk number five, will AI lead to bad people doing bad things? I can just imagine language models used to do so many bad things, but the hope is there that you can have large language models used to then defend against it by more people, by smarter people, by more effective people, skilled people, all that kind of stuff.

8805.448 - 8820.354 Marc Andreessen

Three-part argument on bad people doing bad things. So number one, right, you can use the technology defensively. And we should be using AI to build like broad spectrum vaccines and antibiotics for like bioweapons. And we should be using AI to like hunt terrorists and catch criminals. And like we should be doing like all kinds of stuff like that.

8820.834 - 8835.08 Marc Andreessen

And in fact, we should be doing those things even just to like go get like, you know, basically go eliminate risk from like regular pathogens that aren't like constructed by an AI. So there's the whole defensive set of things. Second is we have many laws on the books about the actual bad things, right?

8835.12 - 8850.307 Marc Andreessen

So it is actually illegal to commit crimes, to commit terrorist acts, to, you know, build pathogens with the intent to deploy them to kill people. And so we actually don't need new laws for the vast majority of the scenarios. We already have the laws on the books.

8850.547 - 8866.952 Marc Andreessen

The third argument is the minute, and this is sort of the foundational one that gets really tough, but the minute you get into this thing, which you were kind of getting into, which is like, okay, but like, don't you need censorship sometimes, right? And don't you need restrictions sometimes? It's like, okay, what is the cost of that? And in particular in the world of open source, right?

8867.672 - 8886.697 Marc Andreessen

And so is open source AI going to be allowed or not? If open source AI is not allowed, then what is the regime that's going to be necessary legally and technically to prevent it from developing? And here, again, is where you get into, and people have proposed these kinds of things, you get into, I would say, pretty extreme territory pretty fast.

8886.737 - 8906.036 Marc Andreessen

Do we have a monitor agent on every CPU and GPU that reports back to the government what we're doing with our computers? Are we seizing GPU clusters that get beyond a certain size? And then, by the way, how are we doing all that globally? And if China is developing an LLM beyond the scale that we think is allowable, are we going to invade? Right.

8906.076 - 8921.49 Marc Andreessen

And you have figures on the AI X-risk side who are advocating, you know, potentially up to nuclear strikes to prevent this kind of thing. And so here you get into this thing. And again, you could say this is good, bad, or indifferent, or whatever. But, like, here's the thing about the comparison to nukes.

8921.51 - 8934.441 Marc Andreessen

The comparison to nukes is very dangerous, because for one, nukes were just a bomb, although we can come back to nuclear power. But the other thing is that with nukes, you could control plutonium, right? You could track plutonium, and it was hard to come by. AI is just math and code, right?

8934.681 - 8943.968 Marc Andreessen

And it's in, like, math textbooks, and there are YouTube videos that teach you how to build it. And it's already open source. You know, there's a 40-billion-parameter model running around already, called Falcon, online, that anybody can download.

8945.249 - 8963.643 Marc Andreessen

And so, okay, you walk down the logic path that says we need to have guardrails on this, and you find yourself in an authoritarian, totalitarian regime of thought control and machine control that would be so brutal that you would have destroyed the society that you're trying to protect. And so I just don't see how that actually works.

8964.123 - 8983.896 Lex Fridman

So you have to understand, my brain has gone full steam ahead here, because I agree with basically everything you're saying, but I'm trying to play devil's advocate there. Because, okay, you highlighted the fact that there is a slippery slope to human nature. The moment you censor something, you start to censor everything.

8985.356 - 9009.287 Lex Fridman

That alignment starts out sounding nice, but then you start to align to the beliefs of some select group of people, and then it's just your beliefs. The number of people you're aligning to is smaller and smaller as that group becomes more and more powerful. Okay, but that just speaks to the people that censor are usually the assholes, and the assholes get richer.

9010.231 - 9026.487 Lex Fridman

I wonder if it's possible to do without that for AI. One way to ask this question is, do you think the baseline foundation models should be open sourced? Like what Mark Zuckerberg is saying they want to do.

9026.947 - 9044.704 Marc Andreessen

So look, I mean, I think it's totally appropriate that companies that are in the business of producing a product or service should be able to have a wide range of policies that they put, right? And I'll just, again, I want a heavily censored model for my eight-year-old. I actually want that. I would pay more money for the one that's more heavily censored than the one that's not.

9046.484 - 9066.412 Marc Andreessen

There are certainly scenarios where companies will make that decision. Look, an interesting thing you brought up, is this really a speech issue? One of the things that the big tech companies are dealing with is that content generated from an LLM is not covered under Section 230, which is the law that protects internet platform companies from being sued for user-generated content.

9067.872 - 9084.976 Marc Andreessen

And so there's actually a question, I think there's still a question, which is: can big American companies actually field generative AI at all? Or is the liability actually going to just ultimately convince them that they can't do it? Because the minute the thing says something bad, and it doesn't even need to be hate speech.

9085.016 - 9099.748 Marc Andreessen

It could just be inaccurate. It could hallucinate a product detail on a vacuum cleaner, and all of a sudden the vacuum cleaner company sues for misrepresentation. And there's an asymmetry there, right? Because the LLM is going to be producing billions of answers to questions, and it only needs to get a few wrong.

9099.788 - 9101.691 Lex Fridman

So laws have to get updated really quick here.

9101.852 - 9113.264 Marc Andreessen

Yeah, and nobody knows what to do with that, right? So anyway, there are big questions around how companies operate at all. So we talk about those, but then there's this other question of like, okay, the open source. So what about open source?

9113.304 - 9134.538 Marc Andreessen

And my answer to your question is kind of, like, obviously yes, there has to be full open source here. Because to live in a world in which open source is not allowed is a world of draconian speech control, human control, machine control. I mean, you know, black helicopters with jackbooted thugs rappelling down and seizing your GPUs.

9134.858 - 9136.279 Marc Andreessen

No, no, I'm a hundred percent serious.

9137.22 - 9138.741 Lex Fridman

You're saying slippery slope always leads there.

9138.821 - 9143.166 Marc Andreessen

No, no, no, no, no, no. That's what's required to enforce it. Like, how will you enforce a ban on open source?

9143.266 - 9150.513 Lex Fridman

No, you could add friction to it. Like, harder to get the models. Because people will always be able to get the models. But it'll be more in the shadows, right?

9150.753 - 9156.419 Marc Andreessen

The leading open source model right now is from the UAE. Like, the next time they do that, what do we do? Yeah.

9157.88 - 9160.102 Lex Fridman

Like... Oh, I see. You're like...

9161.228 - 9174.217 Marc Andreessen

The 14-year-old in Indonesia comes out with a breakthrough model. You know, we talked about most great software comes from a small number of people. Some kid comes out with some big new breakthrough in quantization or something and has some huge breakthrough. And like, what are we going to like invade Indonesia and arrest him?

9174.477 - 9195.085 Lex Fridman

It seems like in terms of size of models and effectiveness of models, the big tech companies will probably lead the way for quite a few years. And the question is of what policies they should use. The kid in Indonesia should not be regulated, but should Google, Meta, Microsoft, OpenAI be regulated?

9195.393 - 9199.036 Marc Andreessen

Well, so, but this goes, okay, so when does it become dangerous?

9199.296 - 9199.456 Lex Fridman

Yeah.

9200.317 - 9208.143 Marc Andreessen

Right? Is the danger that it's, quote, as powerful as the current leading commercial model, or is it that it is just at some other arbitrary threshold?

9208.383 - 9208.563 Lex Fridman

Yeah.

9208.863 - 9214.068 Marc Andreessen

And then, by the way, like, look, how do we know? Like, what we know today is that you need, like, a lot of money to, like, train these things.

9214.168 - 9222.774 Marc Andreessen

But there are advances being made every week on training efficiency and data, all kinds of synthetic data. Look, the synthetic data thing we're talking about, maybe some kid figures out a way to auto-generate synthetic data.

9222.794 - 9223.655 Lex Fridman

And that's going to change everything.

9223.995 - 9242.679 Marc Andreessen

Yeah, exactly. And so like sitting here today, like the breakthrough just happened, right? You made this point, like the breakthrough just happened. So we don't know what the shape of this technology is going to be. I mean, the big shock, the big shock here is that, you know, whatever number of billions of parameters basically represents at least a very big percentage of human thought.

9243.38 - 9259.65 Marc Andreessen

Like who would have imagined that? And then there's already work underway. There was just this paper that just came out that basically takes a GPT-3 scale model and compresses it down to run on a single 32-core CPU. Like, who would have predicted that? Yeah. You know, some of these models now you can run on Raspberry Pis.

9259.831 - 9271.726 Marc Andreessen

Like, today they're very slow, but, you know, maybe the performance will really improve. It's math and code. And here we're back: it's math and code. It's math and code. It's math, code, and data. It's bits.
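The compression results mentioned here, GPT-3-scale models on a single CPU and models running on Raspberry Pis, rest largely on weight quantization. A toy sketch of symmetric 8-bit quantization, assuming nothing beyond plain Python and invented weight values, shows the basic trade of precision for size:

```python
# Toy sketch of symmetric 8-bit weight quantization, the kind of trick
# behind running large models on small devices: store each float32
# weight as an int8 plus one shared per-tensor scale (~4x smaller),
# at a small, bounded accuracy cost. Weight values are invented.

def quantize(weights):
    """Map floats to int8 range [-127, 127] with one per-tensor scale."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from the stored integers."""
    return [v * scale for v in q]

w = [0.52, -1.30, 0.07, 0.91]        # pretend these are model weights
q, s = quantize(w)
restored = dequantize(q, s)
err = max(abs(a - b) for a, b in zip(w, restored))

print(q)        # → [51, -127, 7, 89]
print(err < s)  # → True: reconstruction error stays below one scale step
```

Real schemes (per-channel scales, 4-bit groups, outlier handling) are more elaborate, but the core observation is the same one Marc is making: it's all just math on bits, so nothing stops it from shrinking onto commodity hardware.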

9272.127 - 9284.2 Lex Fridman

Mark's just like walked away at this point. He's just, screw it. I don't know what to do with this. You guys created this whole internet thing. Yeah. Yeah. I mean, I'm a huge believer in open source here.

9284.56 - 9296.91 Marc Andreessen

So my argument is, see, here's my full argument: AI is going to be like air. It's going to be everywhere. This is just going to be in textbooks. It already is. It's going to be in textbooks, and kids are going to grow up knowing how to do this, and it's just going to be a thing. It's going to be in the air, and you can't pull this back anymore.

9296.93 - 9309.581 Marc Andreessen

Any more than you can pull back air. And so you just have to figure out how to live in this world, right? And then that's where I think all this hand-wringing about AI risk is basically a complete waste of time, because the effort should go into: okay, what is the defensive approach? Yeah.

9310.001 - 9326.492 Marc Andreessen

And so if you're worried about, you know, AI-generated pathogens, the right thing to do is to have a permanent Project Warp Speed, right? And funded lavishly. Let's do a Manhattan Project for biological defense, right? And let's build AIs, and let's have, like, broad-spectrum vaccines where we're insulated from every pathogen, right?

9326.732 - 9356.336 Lex Fridman

And the interesting thing is, because it's software, a kid in his basement, a teenager, could build a system that defends against the worst. And to me, defense is super exciting. If you believe in the good of human nature, that most people want to do good, to be the savior of humanity is really exciting. Yes. Okay, that's a dramatic statement, but to help people. Of course, to help people.

9357.239 - 9370.296 Lex Fridman

Yeah, okay, what about, just to jump around, what about the risk of, will AI lead to crippling inequality? You know, because we're kind of saying everybody's life will become better. Is it possible that the rich get richer here?

9370.516 - 9386.47 Marc Andreessen

Yeah. So this actually ironically goes back to Marxism. So because this was the core claim of Marxism, right, basically was that the owners of capital would basically own the means of production. And then over time, they would basically accumulate all the wealth. The workers would be paying in, you know, and getting nothing in return because they wouldn't be needed anymore, right?

9386.65 - 9400.2 Marc Andreessen

Marx was very worried about what he called mechanization, or what later became known as automation, right? And that the workers would be immiserated and the capitalists would end up with all of it. And so this was one of the core principles of Marxism. Of course, it turned out to be wrong about every previous wave of technology.

9401.181 - 9415.452 Marc Andreessen

The reason it turned out to be wrong about every previous wave of technology is that the way that the self-interested owner of the machines makes the most money is by providing the production capability in the form of products and services to the most people, the most customers as possible. Mm-hmm.

9415.572 - 9442.537 Marc Andreessen

Right. And this is one of those funny things where every CEO knows this intuitively, and yet it's hard to explain from the outside. The way you make the most money in any business is by selling to the largest market you can possibly get to, and the largest market you can possibly get to is everybody on the planet. And so every large company does everything it can to drive down prices, to be able to get volumes up, to be able to get to everybody on the planet. And that happened with everything from electricity. It happened with telephones. It happened with radio. It happened with automobiles. It happened with smartphones. It happened with

9444.038 - 9460.414 Marc Andreessen

PCs. It happened with the internet. It happened with mobile broadband. It's happened, by the way, with Coca-Cola. It's happened with basically every industrially produced good or service: you want to drive it to the largest possible market. And then, as proof of that, it's already happened, right?

9460.494 - 9477.501 Marc Andreessen

Which is the early adopters of like ChatGPT and Bing are not like, you know, Exxon and Boeing. They're, you know, your uncle and your nephew, right? It's just like, it's either freely available online or it's available for 20 bucks a month or something. But, you know, these things went, this technology went mass market immediately.

9478.622 - 9489.146 Marc Andreessen

And so look, the owners of the means of production, whoever does this, as I mentioned, these trillion dollar questions, there are people who are going to get really rich doing this, producing these things, but they're going to get really rich by taking this technology to the broadest possible market.

9490.054 - 9496.818 Lex Fridman

So yes, they'll get rich, but they'll get rich having a huge positive impact on... Yeah, making the technology available to everybody.

9497.138 - 9516.769 Marc Andreessen

Yeah. Right. And again, smartphones, same thing. So there's this amazing kind of twist in business history, which is you cannot spend $10,000 on a smartphone. You can't spend $100,000. I would buy the million-dollar smartphone. I'm signed up for it. Suppose a million-dollar smartphone was much better than the $1,000 smartphone. I'm there to buy it. It doesn't exist. Why doesn't it exist?

9517.389 - 9533.872 Marc Andreessen

Apple makes so much more money driving the price further down from $1,000 than they would trying to harvest. It's just this repeating pattern you see over and over again. What's great about it is you do not need to rely on anybody's enlightened generosity to do this. You just need to rely on capitalist self-interest.

9536.006 - 9537.648 Lex Fridman

What about AI taking our jobs?

9538.629 - 9555.124 Marc Andreessen

Yeah, so very similar thing here. There's a core fallacy, which again was very common in Marxism, which is what's called the lump of labor fallacy. And this is sort of the fallacy that there's only a fixed amount of work to be done in the world, and it's all being done today by people. And then if machines do it, there's no other work to be done by people.

9556.125 - 9572.946 Marc Andreessen

And that's just a completely backwards view of how the economy develops and grows. Because what happens, in fact, is that the introduction of technology into the production process causes prices to fall. As prices fall, consumers have more spending power. As consumers have more spending power, they create new demand.

9573.507 - 9581.275 Marc Andreessen

That new demand then causes capital and labor to form into new enterprises to satisfy new wants and needs. And the result is more jobs at higher wages.
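The causal chain Marc lays out, technology lowers prices, lower prices free spending power, freed spending becomes demand for new goods and the jobs that make them, can be made concrete with deliberately invented numbers:

```python
# Toy arithmetic for the chain: automation cuts the price of an existing
# good, freeing consumer spending that flows to new goods and services,
# i.e. the new capital-and-labor formation described above.
# All numbers are invented purely for illustration.

income = 100.0                      # consumer budget per period
old_price, new_price = 10.0, 5.0    # automation halves the price
units_needed = 8                    # demand for the old good is roughly fixed

leftover_before = income - units_needed * old_price   # 100 - 80 = 20
leftover_after = income - units_needed * new_price    # 100 - 40 = 60

freed = leftover_after - leftover_before
print(freed)  # → 40.0 of new spending power per consumer

# That freed spending is revenue available to entirely new enterprises,
# which is where the new jobs come from in this account.
new_enterprise_revenue = freed
print(new_enterprise_revenue > 0)  # → True
```

The fallacy being rebutted is treating `units_needed * old_price` as the permanent size of the economy; the toy shows the budget is fixed, but what it buys is not.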

9581.295 - 9598.752 Lex Fridman

So, new wants and needs. The worry is that the creation of new wants and needs at a rapid rate... Well, I mean, there's a lot of turnover in jobs, so people will lose jobs. Just the actual experience of losing a job and having to learn new things and new skills is painful for the individual.

9598.772 - 9608.902 Marc Andreessen

Well, two things. One is that new jobs are often much better. So this actually came up. There was this panic about a decade ago on all the truck drivers are going to lose their jobs, right? And number one, that didn't happen because we haven't figured out a way to actually...

9609.883 - 9633.036 Marc Andreessen

finish that yet. But the other thing was, like, truck drivers... I grew up in a town that basically consisted of a truck stop, right? And I knew a lot of truck drivers. And truck drivers live a decade shorter than everybody else. It's actually a very dangerous job. They literally have a high risk of skin cancer on the left side of their body from being in the sun all the time, and the vibration of being in the truck is actually very damaging to your physiology.

9633.596 - 9643.16 Lex Fridman

And there's actually, perhaps partially because of that reason, there's a shortage of people who want to be truck drivers.

9643.26 - 9659.947 Marc Andreessen

Yeah. The question always you want to ask somebody like that is, do you want your kid to be doing this job? And most of them will tell you, no. I want my kid to be sitting in a cubicle somewhere where they don't die 10 years earlier. And so the new jobs, number one, the new jobs are often better. But you don't get the new jobs until you go through the change.

9660.047 - 9674.019 Marc Andreessen

And then, to your point, the training thing. You know, the issue is always: can people adapt? And again, here you need to imagine living in a world in which everybody has the AI assistant capability, right, to be able to pick up new skills much more quickly, and to be able to have a machine to work with to augment their skills.

9674.019 - 9677.04 Lex Fridman

It's still going to be painful, but that's the process of life.

9677.14 - 9693.647 Marc Andreessen

It's painful for some people. I mean, there's no question it's painful for some people. Yes, it's not. Again, I'm not a utopian on this, and it's not like it's positive for everybody in the moment, but it has been overwhelmingly positive for 300 years. I mean, look, the concern here, this concern has played out for literally centuries.

9694.387 - 9714.495 Marc Andreessen

And, you know, this is the sort of Luddite, you know, the story of the Luddites. You may remember there was a panic in the 2000s around outsourcing was going to take all the jobs. There was a panic in the 2010s that robots were going to take all the jobs. In 2019, before COVID, we had more jobs at higher wages, both in the country and in the world, than at any point in human history.

9715.135 - 9725.019 Marc Andreessen

And so the overwhelming evidence is that the net gain here is just wildly positive. And most people overwhelmingly come out the other side being huge beneficiaries of this.

9725.836 - 9739.079 Lex Fridman

So you write that the single greatest risk, this is the risk you're most convinced by, the single greatest risk of AI is that China wins global AI dominance and we, the United States and the West, do not. Can you elaborate?

9739.199 - 9753.962 Marc Andreessen

Yeah, so this is the other thing, which is a lot of the sort of AI risk debates today sort of assume that we're the only game in town, right? And so we have the ability to kind of sit in the United States and criticize ourselves and have our government beat up on our companies and we're figuring out a way to restrict what our companies can do. And

9754.722 - 9769.703 Marc Andreessen

We're going to ban this and ban that, restrict this and do that. And then there's this other force out there that doesn't believe we have any power over them whatsoever. And they have no desire to sign up for whatever rules we decide to put in place. And they're going to do whatever it is they're going to do, and we have no control over it at all.

9770.803 - 9788.449 Marc Andreessen

And it's China, and specifically the Chinese Communist Party. And they have a completely publicized, open, you know, plan for what they're going to do with AI. And it is not what we have in mind. And not only do they have that as a vision and a plan for their society, but they also have it as a vision and plan for the rest of the world.

9788.789 - 9790.35 Lex Fridman

So their plan is what? Surveillance?

9790.67 - 9811.621 Marc Andreessen

Yeah, authoritarian control. So authoritarian population control. Good old-fashioned communist authoritarian control. And surveillance and enforcement. And social credit scores and all the rest of it. And you are going to be monitored and metered within an inch of everything all the time. And it's basically the end of human freedom. And that's their goal.

9811.661 - 9814.403 Marc Andreessen

And they justify it on the basis of that's what leads to peace.

9814.763 - 9824.731 Lex Fridman

And you're worried that the... regulating in the United States will halt progress enough to where the Chinese government would win that race.

9824.931 - 9842.39 Marc Andreessen

So, their plan... Yes. Yes. And the reason for that is, again, they're very public on this: their plan is to proliferate their approach around the world. And they have this program called the Digital Silk Road, which is building on their Silk Road investment program. And they've been laying networking infrastructure all over the world with their 5G work, with their company, Huawei.

9842.87 - 9858.766 Marc Andreessen

And so they've been laying all this financial and technological fabric all over the world. And their plan is to roll out their vision of AI on top of that and to have every other country be running their version. And then if you're a country prone to authoritarianism, you're going to find this to be an incredible way to become more authoritarian.

9859.907 - 9868.151 Marc Andreessen

If you're a country, by the way, not prone to authoritarianism, you're going to have the Chinese Communist Party running your infrastructure and having backdoors into it, right? Which is also not good.

9868.992 - 9875.536 Lex Fridman

What's your sense of where they stand in terms of the race towards superintelligence as compared to the United States?

9875.885 - 9889.099 Marc Andreessen

Yeah. So good news is they're behind, but bad news is they, you know, they, let's just say they get access to everything we do. Um, so they're probably a year behind at each point in time, but they get, you know, downloads, I think of basically all of our work on a regular basis through a variety of means.

9890.12 - 9902.811 Marc Andreessen

Um, and they are, you know, at least, we'll see, they're at least putting out reports. They just put out a report last week of a GPT-3.5 analog. Um, yeah. They put out this report. I forget what it's called, but they put out this report of this LLM.

9902.831 - 9928.046 Marc Andreessen

When OpenAI puts out GPT, one of the ways they test it is by running it through standardized exams like the SAT, just so you can gauge how smart it is. In the Chinese report, they ran their LLM through the Chinese equivalent of the SAT. It includes a section on Marxism and a section on Mao Zedong Thought. And it turns out their AI does very well on both of those topics. Right.

9929.208 - 9947.187 Marc Andreessen

So, like, this alignment thing: communist AI, right? Like, literal communist AI. Right. And so their vision is, you know, you can just imagine: you're a kid in school 10 years from now in Argentina or in Germany or, who knows where, Indonesia.


9947.267 - 9957.657 Marc Andreessen

And you ask the AI to explain to you, like, how the economy works, and it gives you the most cheery, upbeat explanation of Chinese-style communism you've ever heard, right? So, like, the stakes here are, like, really big.


9957.977 - 9981.086 Lex Fridman

Well, as we've been talking about, my hope is not just with the United States, but with just the kid in his basement, the open-source LLM. because I don't know if I trust large centralized institutions with super powerful AI, no matter what their ideology, because power corrupts. You've been investing in tech companies for about, let's say 20 years?


9981.687 - 9993.19 Lex Fridman

And about 15 of which was with Andreessen Horowitz. What interesting trends in tech have you seen over that time? Let's just talk about companies and just the evolution of the tech industry.


9993.769 - 10015.235 Marc Andreessen

I mean, the big shift over 20 years has been that tech used to be a tools industry for basically from like 1940 through to about 2010, almost all the big successful companies were picks and shovels companies. So PC, database, smartphone, some tool that somebody else would pick up and use. Since 2010, most of the big wins have been in applications.


10015.975 - 10038.774 Marc Andreessen

So a company that starts in an existing industry and goes directly to the customer in that industry. The early examples there were like Uber and Lyft and Airbnb. And then that model is kind of elaborating out. The AI thing is actually a reversion on that for now because most of the AI business right now is actually in cloud provision of AI APIs for other people to build on.


10038.834 - 10040.656 Lex Fridman

But the big thing will probably be an app.


10041.036 - 10058.004 Marc Andreessen

Yeah, I think most of the money I think probably will be in whatever, yeah, your AI financial advisor or your AI doctor or your AI lawyer or, you know, take your pick of whatever the domain is. And what's interesting is, you know, the Valley kind of does everything. The entrepreneurs kind of elaborate every possible idea.


10058.604 - 10067.909 Marc Andreessen

And so there will be a set of companies that like make AI something that can be purchased and used by large law firms. And then there will be other companies that just go direct to market as an AI lawyer.


10069.273 - 10087.209 Lex Fridman

What advice could you give for a startup founder? Just having seen so many successful companies, so many companies that fail also. What advice could you give to a startup founder, someone who wants to build the next super successful startup in the tech space? The Googles, the Apples, the Twitters.


10089.181 - 10097.807 Marc Andreessen

Yeah. So the great thing about the really great founders is they don't take any advice. So if you find yourself listening to advice, maybe you shouldn't do it.


10098.848 - 10106.193 Lex Fridman

But that's actually just to elaborate on that. If you could also speak to great founders, like what makes a great founder?


10107.045 - 10115.633 Marc Andreessen

So what makes a great founder is super smart, coupled with super energetic, coupled with super courageous. I think it's the combination of those three.


10115.653 - 10117.875 Lex Fridman

Intelligence, passion, and courage.


10117.895 - 10139.012 Marc Andreessen

The first two are traits, and the third one is a choice, I think. Courage is a choice. Well, because courage is a question of pain tolerance, right? So how many times are you willing to get punched in the face before you quit? Yeah. And... Here's maybe the biggest thing people don't understand about what it's like to be a startup founder is it gets very romanticized, right?


10139.372 - 10155.279 Marc Andreessen

And even when they fail, it still gets romanticized about what a great adventure it was. But the reality of it is most of what happens is people telling you no, and then they usually follow that with you're stupid, right? No, I will not come to work for you. I will not leave my cushy job at Google to come work for you. No, I'm not going to buy your product.


10155.719 - 10170.584 Marc Andreessen

No, I'm not going to run a story about your company. No, I'm not this, that, the other thing. And so a huge amount of what people have to do is just get used to just getting punched. And the reason people don't understand this is because when you're a founder, you cannot let on that this is happening because it will cause people to think that you're weak and they'll lose faith in you.


10171.364 - 10178.346 Marc Andreessen

So you have to pretend that you're having a great time when you're dying inside, right? You're just in misery.


10178.666 - 10179.426 Lex Fridman

But why did they do it?


10180.126 - 10193.677 Marc Andreessen

Why do they do it? Yeah, that's the thing. This is actually one of the conclusions, I think: for most of these people, on a risk-adjusted basis, it's probably an irrational act. They could probably be more financially successful on average if they just got a real job at a big company.


10194.898 - 10205.046 Marc Andreessen

But some people just have an irrational need to do something new and build something for themselves. And some people just can't tolerate having bosses. Oh, here's a fun thing: how do you reference-check founders?


10205.066 - 10217.154 Marc Andreessen

Normally, when you're hiring somebody, you reference-check them by calling their old bosses to find out if they were good employees. And now you're trying to reference-check Steve Jobs, and it's like, oh God, he was terrible. He was a terrible employee. He never did what we told him to do.


10217.414 - 10227.38 Lex Fridman

Yeah. So what's a good reference? Do you want the previous boss to actually say that they never did what you told them to do? That might be a good thing.


10227.5 - 10239.727 Marc Andreessen

Well, ideally what you want to hear is: I would like to go work for that person. He worked for me here, and now I'd like to work for him. Unfortunately, most people's egos can't handle that, so they won't say it. But that's the ideal.


10239.927 - 10245.09 Lex Fridman

What advice would you give to those folks in the space of intelligence, passion, and courage?


10245.831 - 10260.199 Marc Andreessen

So I think the other big thing is you see people sometimes who say, I want to start a company. And then they kind of work through the process of coming up with an idea. And generally, those don't work as well as the case where somebody has the idea first. And then they kind of realize that there's an opportunity to build a company.


10260.219 - 10262.3 Marc Andreessen

And then they just turn out to be the right kind of person to do that.


10262.918 - 10269.382 Lex Fridman

When you say idea, do you mean long-term big vision or do you mean specifics of product?


10269.922 - 10279.527 Marc Andreessen

I would say specifics. Because for the first five years, you don't get to have a vision. You just have to build something people want, and you have to figure out a way to sell it to them. It's very practical.


10279.987 - 10286.272 Lex Fridman

You never get to big vision. Yeah. So the first, the first part, you have an idea of a set of products or the first product that can actually make some money.


10286.332 - 10297.541 Marc Andreessen

Yeah. The first product has got to work, by which I mean it has to technically work, but then it also has to fit into the category in the customer's mind of something that they want. And then, by the way, the other part is they have to want to pay for it. Somebody has got to pay the bills.


10297.721 - 10316.896 Marc Andreessen

And so you've got to figure out how to price it and whether you can actually extract the money. Yeah. So usually it is much more predictable. Success is never predictable, but it's more predictable if you start with a great idea and then back into starting the company. So this is what we did. We had Mosaic before we had Netscape. The Google guys had the Google search engine working at Stanford.


10316.916 - 10324.521 Marc Andreessen

Actually, there are tons of examples. Pierre Omidyar had eBay working before he left his previous job.


10325.253 - 10331.776 Lex Fridman

So I really love that idea of just having a thing, a prototype that actually works before you even begin to remotely scale.


10331.816 - 10349.884 Marc Andreessen

Yeah. By the way, it's also far easier to raise money, right? Like the ideal pitch that we receive is here's the thing that works. Would you like to invest in our company or not? Like that's so much easier than here's 30 slides with a dream, right? And then we have this concept called the idea maze, which Balaji Srinivasan came up with when he was with us.


10350.084 - 10360.708 Marc Andreessen

So then there's this mythology that these ideas kind of arrive like magic, or that people kind of stumble into them. Like eBay with the Pez dispensers or something.


10361.568 - 10381.216 Marc Andreessen

Um, the reality usually with the big successes is that the founder has been chewing on the problem for five or 10 years before they start the company. And they often worked on it in school, um, or they even experimented on it when they were a kid. Um, and they've been kind of training up over that period of time to be able to do the thing. So they're like a true domain expert.


10382.796 - 10392.559 Marc Andreessen

And it sort of sounds like mom and apple pie, which is, yeah, you want to be a domain expert in what you're doing, but the mythology is so strong of like, oh, I just had this idea in the shower and now I'm doing it. It's generally not that.


10392.999 - 10409.524 Lex Fridman

No, because maybe in the shower you had the exact product implementation details, but yeah, usually you're going to be for years, if not decades, thinking about everything around that.


10410.247 - 10421.632 Marc Andreessen

Well, we call it the idea maze because for any idea there are all these permutations. Who should the customer be? What shape and form should the product have? How should we take it to market? All these things.


10423.133 - 10441.08 Marc Andreessen

And so the really smart founders have thought through all these scenarios by the time they go out to raise money. And they have like detailed answers on every one of those fronts because they put so much thought into it. The sort of more haphazard founders haven't thought about any of that. And it's the detailed ones who tend to do much better.


10441.1 - 10445.641 Lex Fridman

So how do you know when to take a leap? If you have a cushy job or a happy life?


10446.761 - 10459.245 Marc Andreessen

I mean, the best reason is just because you can't tolerate not doing it, right? Like this is the kind of thing where if you have to be advised into doing it, you probably shouldn't do it. And so it's probably the opposite, which is you just have such a burning sense of this has to be done. I have to do this. I have no choice.


10459.699 - 10461.121 Lex Fridman

What if it's going to lead to a lot of pain?


10461.762 - 10464.366 Marc Andreessen

It's going to lead to a lot of pain.


10464.646 - 10472.578 Lex Fridman

What if it means losing sort of social relationships and damaging your relationship with loved ones and all that kind of stuff?


10473.001 - 10487.113 Marc Andreessen

Yeah, look, it's going to put you in a social tunnel for sure. There's this game you can play on Twitter, which is: give any whiff of the idea that there's basically no such thing as work-life balance and that people should actually work hard, and everybody gets mad.


10487.153 - 10502.94 Marc Andreessen

But like the truth is, like all the successful founders are working 80-hour weeks and they're working, you know, they form various... very strong social bonds with the people they work with. They tend to lose a lot of friends on the outside or put those friendships on ice. That's just the nature of the thing. For most people, that's worth the trade-off.


10502.98 - 10509.482 Marc Andreessen

The advantage maybe younger founders have is maybe they have less. For example, if they're not married yet or don't have kids yet, that's an easier thing to bite off.


10510.243 - 10511.263 Lex Fridman

Can you be an older founder?


10511.463 - 10523.208 Marc Andreessen

Yeah, you definitely can. Many of the most successful founders are second-, third-, fourth-time founders in their 30s, 40s, 50s. The good news with being an older founder is you know more, and you know a lot more about what to do, which is very helpful.


10523.248 - 10530.255 Marc Andreessen

The problem is, now you've got a spouse and a family and kids, and you've got to go to the baseball game, but you can't go to the baseball game, you know? And so it's...


10531.784 - 10553.151 Lex Fridman

Life is full of difficult choices, Marc Andreessen. You've written a blog post on what you've been up to. You wrote this in October 2022, quote: Mostly I try to learn a lot. For example, the political events of 2014 to 2016 made clear to me that I didn't understand politics at all. Referencing maybe some of this, this book here.


10554.453 - 10567.954 Lex Fridman

So I deliberately withdrew from political engagement and fundraising and instead read my way back into history and as far to the political left and political right as I could. So just high-level question, what's your approach to learning?


10569.113 - 10589.319 Marc Andreessen

Yeah. So, it's basically, I would say it's autodidact. So, it's going down the rabbit holes. So, it's a combination. I kind of allude to it in that quote. It's a combination of breadth and depth. And so, I go broad by the nature of what I do. I go broad, but then I tend to go deep in a rabbit hole for a while, read everything I can, and then come out of it.


10589.459 - 10592.219 Marc Andreessen

And I might not revisit that rabbit hole for another decade.


10592.86 - 10605.359 Lex Fridman

And in that blog post, which I recommend people go check out, you actually list a bunch of different books that you recommend on different topics, on the American left and the American right. It's just a lot of really good stuff.


10605.759 - 10619.649 Lex Fridman

The best explanation for the current structure of our society and politics, you give two recommendations, four books on the Spanish Civil War, six books on deep history of the American right, comprehensive biographies of Adolf Hitler, one of which I read and can recommend.


10620.389 - 10638.133 Lex Fridman

Six books on the deep history of the American left. The American right and the American left, looking at the history to give you the context. A biography of Vladimir Lenin, two on the French Revolution. Actually, I have never read a biography of Lenin. Maybe that will be useful. Everything's been so Marx-focused.


10638.573 - 10640.573 Marc Andreessen

The Sebestyen biography of Lenin is extraordinary.


10641.574 - 10644.814 Lex Fridman

Victor Sebestyen, okay. It'll blow your mind, yeah. So it's still useful to read.


10644.834 - 10647.915 Marc Andreessen

It's incredible, yeah, it's incredible. I actually think it's the single best book on the Soviet Union.


10648.638 - 10670.39 Lex Fridman

So that, the perspective of Lenin might be the best way to look at the Soviet Union versus Stalin versus Marx. Very interesting. So two books on fascism and anti-fascism by the same author, Paul Gottfried. Brilliant book on the nature of mass movements and collective psychology. The definitive work on intellectual life under totalitarianism, The Captive Mind.


10671.17 - 10696.316 Lex Fridman

The definitive work on practical life under totalitarianism... there's a bunch. There's a bunch. And, first of all, the list here is just incredible. But you say the single best book I have found on who we are and how we got here is The Ancient City by Numa Denis Fustel de Coulanges. I like it. What did you learn about who we are as a human civilization from that book?


10696.69 - 10717.679 Marc Andreessen

Yeah. So this is a fascinating book. This one's free. It's free, by the way. It's a book from the 1860s. You can download it or you can buy prints of it. But it was this guy who was a professor at the Sorbonne in the 1860s. And he was apparently a savant on antiquity, on Greek and Roman antiquity. And the reason I say that is because his sources are 100% original Greek and Roman sources.


10718.32 - 10734.31 Marc Andreessen

So he wrote basically a history of Western civilization from on the order of 4,000 years ago to basically the present times, entirely working on original Greek and Roman sources. And what he was specifically trying to do was he was trying to reconstruct from the stories of the Greeks and the Romans.


10734.33 - 10759.739 Marc Andreessen

He was trying to reconstruct what life in the West was like before the Greeks and the Romans, which was in the civilization known as the Indo-Europeans. And the short answer is, and this is sort of circa 2000 BC to sort of 500 BC, kind of that 1500 year stretch where civilization developed. And his conclusion was basically cults. They were basically cults and civilization was organized into cults.


10759.879 - 10782.217 Marc Andreessen

And the intensity of the cults was like a million-fold beyond anything that we would recognize today. It was a level of all-encompassing belief and action around religion at a level of extremeness that we wouldn't even recognize. And so specifically, he tells the story of how there were basically three levels of cults.


10782.277 - 10800.939 Marc Andreessen

There was the family cult, the tribal cult, and then the city cult, as society scaled up. And each cult was a joint cult of family gods, which were ancestor gods, and nature gods. And your bonding into a family, a tribe, or a city was based on your adherence to that religion.


10800.959 - 10809.69 Marc Andreessen

People who were not of your family, tribe, or city worshipped different gods, which gave you not just the right but the responsibility to kill them on sight.


10812.535 - 10814.336 Lex Fridman

So they were serious about their cults.


10814.436 - 10832.763 Marc Andreessen

Hardcore. By the way, shocking development. I did not realize there's zero concept of individual rights. Even up through the Greeks and even in the Romans, they didn't have the concept of individual rights. The idea that as an individual, you have some right, it's just like, nope. And you look back and you're just like, wow, that's just crazily fascist in a degree that we wouldn't recognize today.


10832.803 - 10849.993 Marc Andreessen

But it's like, well, they were living under extreme pressure for survival. And the theory goes you could not have people running around making claims to individual rights when you're just trying to get your tribe through the winter. You need hardcore command and control. And actually, through a modern political lens, those cults were basically both fascist and communist.


10850.773 - 10854.035 Marc Andreessen

They were fascist in terms of social control, and then they were communist in terms of economics.


10855.829 - 10861.011 Lex Fridman

But you think that's fundamentally that like pull towards cults is within us?


10861.451 - 10876.917 Marc Andreessen

Well, so my conclusion from this book, so the way we naturally think about the world we live in today is like we basically have such an improved version of everything that came before us, right? Like we have basically, we've figured out all these things around morality and ethics and democracy and all these things.


10876.957 - 10894.105 Marc Andreessen

And they were basically stupid and retrograde, and we're smart and sophisticated, and we've improved all this. After reading that book, I now believe in many ways the opposite, which is: no, actually, we are still running on that original model. We're just running an incredibly diluted version of it. So we're still running basically in cults.


10894.266 - 10910.198 Marc Andreessen

It's just that our cults are at a thousandth or a millionth the level of intensity, right? And so just to take religions: the modern experience of a Christian in our time, even somebody who considers himself a devout Christian, is just a shadow of the level of intensity of somebody who belonged to a religion back in that period.


10910.718 - 10930.677 Marc Andreessen

And then, by the way, and this goes back to our AI discussion, we endlessly create new cults. We're trying to fill the void, right? And the void is a void of bonding. Everybody living today, transported into that era, would view it as completely intolerable in terms of the loss of freedom and the level of basically fascist control.


10930.737 - 10943.889 Marc Andreessen

However, every single person in that era, and he really stresses this, they knew exactly where they stood. They knew exactly where they belonged. They knew exactly what their purpose was. They knew exactly what they needed to do every day. They knew exactly why they were doing it. They had total certainty about their place in the universe.


10944.129 - 10948.373 Lex Fridman

So the question of meaning, the question of purpose was very distinctly, clearly defined for them.


10948.712 - 10952.293 Marc Andreessen

Absolutely, overwhelmingly, undisputably, undeniably.


10952.313 - 10959.295 Lex Fridman

As we turn the volume down on the cultism, the search for meaning starts getting harder and harder.


10959.375 - 10977.521 Marc Andreessen

Yes, because we don't have that. We are ungrounded. We are uncentered. And we all feel it, right? And that's why we reach for, you know, it's why we still reach for religion. It's why we reach for, you know, people start to take on, you know, let's say, you know, a faith in science, maybe beyond where they should put it. And by the way, sports teams, they're like a tiny little version of a cult.


10977.581 - 10989.328 Marc Andreessen

And Apple keynotes are a tiny little version of a cult, right? And politics: there are full-blown cults on both sides of the political spectrum right now, operating in plain sight.


10989.348 - 10991.709 Lex Fridman

But still not full-blown compared to what it was in the past.


10991.729 - 11007.561 Marc Andreessen

Compared to what it used to be. I mean, we would today consider them full-blown. But yes, they're at, I don't know, a hundred-thousandth or something of the intensity of what people had back then. Yeah. So we live in a world today that in many ways is more advanced and moral and so forth, and it's certainly a much nicer world to live in. But we live in a world that's very washed out.


11007.581 - 11019.754 Marc Andreessen

It's like everything has become very colorless and gray as compared to how people used to experience things, which is, I think, why we're so prone to reach for drama. There's something in us deeply evolved where we want that back.


11021.263 - 11032.59 Lex Fridman

And I wonder where it's all headed as we turn the volume down more and more. What advice would you give to young folks today in high school and college, how to be successful in their career, how to be successful in their life?


11033.131 - 11049.562 Marc Andreessen

Yeah. So the tools that are available today, I mean, are just like, I sometimes, you know, I sometimes bore, you know, kids by describing like what it was like to go look up a book, you know, to try to like discover a fact. And, you know, in the old days, the 1970s, 1980s, you go to the library and the card catalog and the whole thing, you go through all that work.


11049.582 - 11051.304 Marc Andreessen

And then the book is checked out and you have to wait two weeks.


11051.344 - 11066.697 Marc Andreessen

And to be in a world where not only can you get the answer to any question, but also, now, in the AI world, where you've got the assistant that will help you do anything, help you learn anything... your ability both to learn and to produce is, I don't know, a million-fold beyond what it used to be.


11067.477 - 11081.509 Marc Andreessen

I have a blog post I've been wanting to write, which I call "Where are the hyperproductive people?" With these tools, there should be authors writing hundreds or thousands of outstanding books.


11082.39 - 11092.018 Lex Fridman

Well, with the authors, there's a consumption question too. But yeah, well, maybe not, maybe not. You're right. But so the tools are much more powerful. They're getting much more powerful every day.


11092.038 - 11099.764 Marc Andreessen

Artists, musicians, right? Why aren't musicians producing a thousand times the number of songs, right? Like the tools are spectacular. Yeah.


11100.02 - 11108.507 Lex Fridman

So what's the explanation? And by way of advice, is motivation starting to be turned down a little bit or what?


11109.047 - 11123.579 Marc Andreessen

I think it might be distraction. Distraction. It's so easy to just sit and consume that I think people get distracted from production. But if you wanted to, as a young person, if you wanted to really stand out, you could get on a hyper productivity curve very early on.


11125.08 - 11142.134 Marc Andreessen

There's a great story in Roman history of Pliny the Elder, who was this legendary statesman, died in the Vesuvius eruption trying to rescue his friends. But he was famous both for basically being a polymath, but also being an author. And he wrote apparently hundreds of books. Most of which have been lost, but he wrote all these encyclopedias.


11142.975 - 11159.409 Marc Andreessen

And he literally would be reading and writing all day long, no matter what else was going on. And so he would travel with four slaves, and two of them were responsible for reading to him, and two of them were responsible for taking dictation. And so like he'd be going cross country and like literally he would be writing books like all the time. And apparently they were spectacular.


11159.609 - 11161.79 Marc Andreessen

There's only a few that have survived, but apparently they were amazing.


11161.95 - 11165.153 Lex Fridman

So there's a lot of value to being somebody who finds focus in this life.


11165.313 - 11181.926 Marc Andreessen

Yeah. And there are examples like there are, you know, there's this guy, Judge, what's his name? Posner, who wrote like 40 books and was also a great federal judge. You know, there's our friend Balaji, I think it's like this. He's one of these, you know, where his output is just prodigious. And so it's like, yeah, I mean, with these tools, why not?


11182.566 - 11188.927 Marc Andreessen

And I kind of think we're at this interesting freeze-frame moment where these tools are now in everybody's hands and everybody's just kind of staring at them, trying to figure out what to do.


11188.987 - 11194.069 Lex Fridman

Yeah. The new tools. We have discovered fire. Yeah. And trying to figure out how to use it to cook.


11194.289 - 11204.171 Lex Fridman

Yeah, right. You told Tim Ferriss that the perfect day is caffeine for 10 hours and alcohol for four hours. You didn't think I'd be mentioning this, did you?


11205.191 - 11220.202 Lex Fridman

It balances everything out perfectly, as you said. So, perfect. So let me ask: what's the secret to balance, and maybe to happiness, in life? I don't believe in balance, so I'm the wrong person to ask. Elaborate: why don't you believe in balance?


11220.342 - 11232.971 Marc Andreessen

I mean, maybe it's just... look, I think people are wired differently, so I think it's hard to generalize this kind of thing. But I am much happier and more satisfied when I'm fully committed to something. So I'm very much in favor of imbalance. Yeah.


11234.212 - 11237.274 Lex Fridman

Imbalance. And that applies to work, to life, to everything.


11238.722 - 11255.426 Marc Andreessen

Now, I happen to have whatever twist of personality traits leads that in non-destructive directions, including the fact that I now no longer do the 10-4 plan. I stopped drinking. I do the caffeine, but not the alcohol. So there's something in my personality where whatever maladaptation I have is inclining me towards productive things, not unproductive things.


11255.606 - 11262.028 Lex Fridman

So you're one of the wealthiest people in the world. What's the relationship between wealth and happiness?


11262.368 - 11263.048 Marc Andreessen

Oh.


11263.888 - 11264.928 Lex Fridman

Money and happiness.


11265.068 - 11269.5 Marc Andreessen

So I think happiness... I don't think happiness is the thing.


11270.404 - 11270.986 Lex Fridman

To strive for?


11271.007 - 11272.453 Marc Andreessen

I think satisfaction is the thing.


11273.678 - 11276.641 Lex Fridman

That just sounds like happiness, but turned down a bit.


11276.881 - 11290.952 Marc Andreessen

No, deeper. So happiness is, you know, a walk in the woods at sunset, an ice cream cone, a kiss. The first ice cream cone is great. The thousandth ice cream cone, not so much. At some point, the walks in the woods get boring.


11291.132 - 11294.435 Lex Fridman

What's the distinction between happiness and satisfaction?


11294.455 - 11300.4 Marc Andreessen

I think satisfaction is a deeper thing, which is like having found a purpose and fulfilling it, being useful.


11301.32 - 11309.126 Lex Fridman

So just something that permeates all your days, just this general contentment of being useful.


11309.566 - 11328.983 Marc Andreessen

That I'm fully satisfying my faculties, that I'm fully delivering on the gifts that I've been given, that I'm making the world better, that I'm contributing to the people around me, and that I can look back and say, wow, that was hard, but it was worth it. I think generally seems to lead people in a better state than pursuit of pleasure, pursuit of quote unquote happiness.


11329.444 - 11330.745 Lex Fridman

Does money have anything to do with that?


11330.845 - 11336.532 Marc Andreessen

I think the Founding Fathers in the US threw this off-kilter when they used the phrase "pursuit of happiness." I think they should have said...


11337.433 - 11338.054 Lex Fridman

Pursuit of satisfaction.


11338.094 - 11340.637 Marc Andreessen

If they said pursuit of satisfaction, we might live in a better world today. Yeah.


11340.717 - 11344.04 Lex Fridman

Well, you know, they could have elaborated on a lot of things.

11344.1 - 11345.301 Marc Andreessen

They could have tweaked the Second Amendment.

11345.321 - 11359.132 Lex Fridman

I think they were smarter than they realized. They said, you know what, we're going to make it ambiguous and let these humans figure out the rest. These tribal cult-like humans figure out the rest. But money empowers that.

11359.426 - 11376.47 Marc Andreessen

So I think, look, I don't think I'm even a great example, but Elon would be the great example of this: he's a guy who, from the day he started making money at all, has just plowed it into the next thing. And so I think money is definitely an enabler for satisfaction.

11376.51 - 11395.535 Marc Andreessen

Money applied to happiness leads people down very dark paths, very destructive avenues. Money applied to satisfaction, I think, is a real tool. By the way, Elon is the case study for that behavior. But the other thing that really made me think is Larry Page, who was asked one time what his approach to philanthropy was.

11395.595 - 11398.617 Marc Andreessen

And he said, oh, my philanthropic plan is just to give all the money to Elon.

11401.298 - 11417.916 Lex Fridman

Right. Well, let me actually ask you about Elon. You've interacted with quite a lot of successful engineers and business people. What do you think is special about Elon? We talked about Steve Jobs. What do you think is special about him as a leader, as an innovator?

11418.496 - 11439.666 Marc Andreessen

Yeah, so the core of it is he's back to the future. So he is doing the most leading-edge things in the world, but with a really deeply old-school approach. And so to find comparisons to Elon, you need to go to Henry Ford and Thomas Watson and Howard Hughes and Andrew Carnegie, right? Leland Stanford, John D. Rockefeller, right?

11439.686 - 11459.4 Marc Andreessen

You need to go to the, what we're called the bourgeois capitalists, like the hardcore business owner operators who basically built, you know, basically built industrialized society, um, Vanderbilt. Um, and it's a level of hands-on commitment, um, and, uh, depth, um, in the business, um,

11460.4 - 11481.574 Marc Andreessen

coupled with an absolute priority towards truth, and towards getting science and technology down to first principles, that is just unbelievably absolute. His ideal is that he's only ever talking to engineers. He does not tolerate bullshit. He has the least bullshit tolerance of anybody I've ever met.

11482.355 - 11507.309 Marc Andreessen

He wants ground truth on every single topic, and he runs his businesses directly, day to day, devoted to getting to ground truth on every single topic.

So you think it was a good decision for him to buy Twitter?

I have developed a view in life: do not second-guess Elon Musk. I know this is going to sound crazy and unfounded, but...

Well, I mean, he's got quite a track record.

11507.771 - 11510.473 Marc Andreessen

I mean, look, the car was crazy... I mean, look.

11510.693 - 11512.294 Lex Fridman

He's done a lot of things that seem crazy.

11512.514 - 11527.883 Marc Andreessen

Starting a new car company in the United States of America: the last time somebody really tried to do that was the 1950s, and it was called Tucker Automotive, and it was such a disaster that they made a movie about what a disaster it was. And then rockets. Who does that? There's obviously no way to start a new rocket company; those days are over.

11528.243 - 11542.212 Marc Andreessen

And then to do those at the same time. So after he pulled those two off, okay, fine. This is one of those areas where whatever opinions I had are just clearly not relevant. At some point, you just bet on the person.

11542.941 - 11548.524 Lex Fridman

And in general, I wish more people would lean on celebrating and supporting versus deriding and destroying.

11548.664 - 11572.315 Marc Andreessen

Oh, yeah. I mean, look, he drives resentment. He is a magnet for resentment. His critics are the most miserable, resentful people in the world. It's almost a perfect match of the most idealized technologist of the century coupled with just his critics are just bitter as can be. I mean, it's sort of very darkly comic to watch.

11573.277 - 11586.915 Lex Fridman

Well, he fuels the fire of that by being an asshole on Twitter at times, which is fascinating to watch the drama of human civilization given our cult roots just fully on fire.

11587.175 - 11587.776 Marc Andreessen

He's running a cult.

11589.642 - 11598.446 Lex Fridman

Could say that. Very successfully. So now that our cults have gone and we search for meaning, what do you think is the meaning of this whole thing? What's the meaning of life, Marc Andreessen?

11598.686 - 11615.373 Marc Andreessen

I don't know the answer to that. The closest I get to it is what I said about satisfaction. So it's basically: okay, we were given what we have; we should basically do our best.

What's the role of love in that mix? I mean, what's the point of life without love?

Yeah.

11616.912 - 11618.793 Lex Fridman

So love is a big part of that satisfaction.

11618.973 - 11634.021 Marc Andreessen

And look, taking care of people is a wonderful thing. There are pathological forms of taking care of people, but there's also a very fundamental aspect of it. For example, I happen to be somebody who believes that capitalism and taking care of people are actually the same thing.

11635.362 - 11646.208 Marc Andreessen

Somebody once said capitalism is how you take care of people you don't know. Right. And so, yeah, I think it's deeply woven into the whole thing. There's a long conversation to be had about that, but yeah.

11647.099 - 11655.57 Lex Fridman

Yeah, creating products that are used by millions of people and bring them joy in small or big ways. And then capitalism kind of enables that, encourages that.

11656.351 - 11662.218 Marc Andreessen

David Friedman says there are only three ways to get somebody to do something for somebody else: love, money, and force.

11666.914 - 11689.994 Lex Fridman

Love and money are better. Yeah, that's a good ordering. I think we should bet on those. Try love first; if that doesn't work, then money. Yes. And then force? Well, don't even try that one. Marc, you're an incredible person. I've been a huge fan. I'm glad I finally got a chance to talk. I'm a fan of everything you do, including on Twitter. It's a huge honor to meet you and to talk with you. Thanks again for doing this. Awesome, thank you, Lex.

11691.24 - 11704.203 Lex Fridman

Thanks for listening to this conversation with Marc Andreessen. To support this podcast, please check out our sponsors in the description. And now let me leave you with some words from Marc Andreessen himself. The world is a very malleable place.

11705.063 - 11720.346 Lex Fridman

If you know what you want and you go for it with maximum energy and drive and passion, the world will often reconfigure itself around you much more quickly and easily than you would think. Thank you for listening and hope to see you next time.
