Lex Fridman Podcast

#387 – George Hotz: Tiny Corp, Twitter, AI Safety, Self-Driving, GPT, AGI & God

Fri, 30 Jun 2023

Description

George Hotz is a programmer, hacker, and the founder of comma-ai and tiny corp.

Please support this podcast by checking out our sponsors:
- Numerai: https://numer.ai/lex
- Babbel: https://babbel.com/lexpod and use code Lexpod to get 55% off
- NetSuite: http://netsuite.com/lex to get free product tour
- InsideTracker: https://insidetracker.com/lex to get 20% off
- AG1: https://drinkag1.com/lex to get 1 year of Vitamin D and 5 free travel packs

Transcript: https://lexfridman.com/george-hotz-3-transcript

EPISODE LINKS:
George's Twitter: https://twitter.com/realgeorgehotz
George's Twitch: https://twitch.tv/georgehotz
George's Instagram: https://instagram.com/georgehotz
Tiny Corp's Twitter: https://twitter.com/__tinygrad__
Tiny Corp's Website: https://tinygrad.org/
Comma-ai's Twitter: https://twitter.com/comma_ai
Comma-ai's Website: https://comma.ai/
Comma-ai's YouTube (unofficial): https://youtube.com/georgehotzarchive
Mentioned: Learning a Driving Simulator (paper): https://bit.ly/42T6lAN

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
YouTube Full Episodes: https://youtube.com/lexfridman
YouTube Clips: https://youtube.com/lexclips

SUPPORT & CONNECT:
- Check out the sponsors above, it's the best way to support this podcast
- Support on Patreon: https://www.patreon.com/lexfridman
- Twitter: https://twitter.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Medium: https://medium.com/@lexfridman

OUTLINE:
Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.
(00:00) - Introduction
(08:04) - Time is an illusion
(17:44) - Memes
(20:20) - Eliezer Yudkowsky
(32:45) - Virtual reality
(39:04) - AI friends
(46:29) - tiny corp
(59:50) - NVIDIA vs AMD
(1:02:47) - tinybox
(1:14:56) - Self-driving
(1:29:35) - Programming
(1:37:31) - AI safety
(2:02:29) - Working at Twitter
(2:40:12) - Prompt engineering
(2:46:08) - Video games
(3:02:23) - Andrej Karpathy
(3:12:28) - Meaning of life

Transcription

0.069 - 24.565 Lex Fridman

The following is a conversation with George Hotz, his third time on this podcast. He's the founder of Comma AI that seeks to solve autonomous driving and is the founder of a new company called TinyCorp that created TinyGrad, a neural network framework that is extremely simple with the goal of making it run on any device by any human easily and efficiently.


25.385 - 50.038 Lex Fridman

As you know, George also did a large number of fun and amazing things, from hacking the iPhone to recently joining Twitter for a bit as an intern, in quotes, making the case for refactoring the Twitter code base. In general, he's a fascinating engineer and human being, and one of my favorite people to talk to. And now a quick few second mention of each sponsor. Check them out in the description.


50.218 - 72.375 Lex Fridman

It's the best way to support this podcast. We've got Numerai for the world's hardest data science tournament, Babbel for learning new languages, NetSuite for business management software, InsideTracker for blood paneling, and AG1 for my daily multivitamin program. Choose wisely, my friends. Also, if you want to work on our team, we're always hiring. Go to lexfridman.com slash hiring.


73.016 - 90.944 Lex Fridman

And now on to the full ad reads. As always, no ads in the middle. I try to make this interesting, but if you must skip them, friends, please still check out our sponsors. I enjoy their stuff. Maybe you will too. This episode is brought to you by Numerai, a hedge fund that uses artificial intelligence and machine learning to make investment decisions.


91.484 - 108.528 Lex Fridman

They created a tournament that challenges data scientists to build the best predictive models for financial markets. It's basically just a really, really difficult real-world dataset to test out your ideas for how to build machine learning models. I think this is a great educational platform.


108.568 - 135.006 Lex Fridman

I think this is a great way to explore, to learn about machine learning, to really test yourself on real-world data with consequences. No financial background is needed. The models are scored based on how well they perform on unseen data, and the top performers receive a share of the tournament's prize pool. Head over to numer.ai to sign up for a tournament and hone your machine learning skills.


135.366 - 150.654 Lex Fridman

That's numer.ai for a chance to play against me and win a share of the tournament's prize pool. That's numer.ai slash lex. This show is also brought to you by Babbel, an app and website that gets you speaking in a new language within weeks.


151.414 - 166.923 Lex Fridman

I have been using it to learn a few languages, Spanish, to review Russian, to practice Russian, to revisit Russian from a different perspective, because that becomes more and more relevant for some of the previous conversations I've had and some upcoming conversations I have.


167.563 - 186.062 Lex Fridman

It really is fascinating how much another language, knowing another language, even to a degree where you can just have little bits and pieces of a conversation, can really unlock an experience in another part of the world. When you travel in France and Paris, just having a few words at your disposal, a few phrases,


186.842 - 213.788 Lex Fridman

it begins to really open you up to strange, fascinating new experiences that ultimately, at least to me, teach me that we're all the same. We have to first see our differences to realize those differences are grounded in a basic humanity. And that experience that we're all very different and yet at the core the same, I think travel with the aid of language really helps unlock


215.469 - 233.464 Lex Fridman

You can get 55% off your Babbel subscription at babbel.com. That's spelled B-A-B-B-E-L.com. Rules and restrictions apply. This show is also brought to you by NetSuite, an all-in-one cloud business management system.


233.884 - 259.583 Lex Fridman

They manage all the messy stuff that is required to run a business, the financials, the human resources, the inventory, if you do that kind of thing, e-commerce, all that stuff, all the business-related details. I know how stressed I am about everything that's required to run a team, to run a business that involves much more than just ideas and designs and engineering.


260.203 - 286.567 Lex Fridman

It involves all the management of human beings, all the complexities of that, the financials, all of it. And so you should be using the best tools for the job. I sometimes wonder if I have it in me. Mentally and skill-wise to be a part of running a large company. I think like with a lot of things in life, it's one of those things you shouldn't wonder too much about.


288.112 - 306.559 Lex Fridman

You should either do or not do. But again, using the best tools for the job is required here. You can start now with no payment or interest for six months. Go to netsuite.com slash lex to access their one-of-a-kind financing program. That's netsuite.com slash lex.


308.76 - 329.305 Lex Fridman

This show is also brought to you by InsideTracker, a service I use to track biological data, data that comes from my body, to predict, to tell me what I should do with my lifestyle, with my diet, what's working and what's not working. It's obvious, all the exciting breakthroughs that are happening with Transformers, with large language models.


330.117 - 352.716 Lex Fridman

even with diffusion. All of that makes it obvious that raw data, huge amounts of raw data, fine-tuned to the individual, would really reveal to us the signal in all the noise of biology. I feel like that's on the horizon. The kinds of leaps in development that we saw in language, and now more and more visual data,


353.757 - 375.71 Lex Fridman

I feel like biological data is around the corner, unlocking what's there in this multi-hierarchical distributed system that is our biology. What is it telling us? What is the secrets it holds? What is the thing that it's missing that could be aided? Simple lifestyle changes, simple diet changes, simple changes in all kinds of things that are controllable by individual human being.


376.25 - 397.941 Lex Fridman

I can't wait till that's a possibility. And InsideTracker is taking steps towards that. Get special savings for a limited time when you go to insidetracker.com slash Lex. This show is also brought to you by Athletic Greens. That's now called AG1. It has the AG1 drink. I drink it twice a day, at the very least. It's an all-in-one daily drink to support better health and peak performance.


398.401 - 426.915 Lex Fridman

I drink it cold. It's refreshing. It's grounding. It helps me reconnect with the basics, the nutritional basics that makes this whole machine that is our human body run. All the crazy mental stuff I do for work, the physical challenges, everything. The highs and lows of life itself. All of that is somehow made better knowing that at least you got your nutrition in check.


427.655 - 450.937 Lex Fridman

At least you're getting enough sleep. At least you're doing the basics. At least you're doing the exercise. Once you get those basics in place, I think you can do some quite difficult things in life. But anyway, beyond all that is just a source of happiness and a kind of a feeling of home. The feeling that comes from returning to the habit time and time again.


451.957 - 493.258 Lex Fridman

Anyway, they'll give you a one-month supply of fish oil when you sign up at drinkag1.com slash lex. This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here's George Hotz. You mentioned something in a stream about the philosophical nature of time. So let's start with the wild question. Do you think time is an illusion?


495.839 - 522.652 George Hotz

You know, I sell phone calls to comma for $1,000. And some guy called me and like, you know, it's $1,000. You can talk to me for half an hour. And he's like, yeah, okay. So like time doesn't exist. And I really wanted to share this with you. I'm like, oh, what do you mean time doesn't exist, right? I think time is a useful model, whether it exists or not, right? Does quantum physics exist?


522.732 - 530.076 George Hotz

Well, it doesn't matter. It's about whether it's a useful model to describe reality. Is time maybe compressive?


531.016 - 541.241 Lex Fridman

Do you think there is an objective reality or is everything just useful models? Like underneath it all, is there an actual thing that we're constructing models for?


541.261 - 543.862 George Hotz

I don't know.


545.202 - 546.123 Lex Fridman

I was hoping you would know.


546.283 - 547.004 George Hotz

I don't think it matters.


547.904 - 553.169 Lex Fridman

I mean, this kind of connects to the models of constructive reality with machine learning, right?


553.569 - 553.769 George Hotz

Sure.


554.39 - 560.174 Lex Fridman

Like, is it just nice to have useful approximations of the world such that we can do something with it?


560.955 - 563.998 George Hotz

So there are things that are real. Kolmogorov complexity is real.


565.119 - 565.319 Lex Fridman

Yeah.


565.519 - 568.081 George Hotz

Yeah. The compressive thing. Math is real.


568.321 - 570.423 Lex Fridman

Yeah. This should be a t-shirt.


571.127 - 574.07 George Hotz

And I think hard things are actually hard. I don't think P equals NP.


574.89 - 575.811 Lex Fridman

Ooh, strong words.


576.252 - 577.493 George Hotz

Well, I think that's the majority.


577.573 - 584.038 Lex Fridman

I do think factoring is in P, but... I don't think you're the person that follows the majority in all walks of life.


584.098 - 584.659 George Hotz

For that one, I do.


584.839 - 598.893 Lex Fridman

Yeah. In theoretical computer science, you're one of the sheep. All right. But to you, time is a useful model. Sure. Hmm. What were you talking about on the stream with time? Are you made of time?


599.173 - 600.853 George Hotz

I remembered half the things I said on stream.


602.154 - 605.474 George Hotz

Someday someone's going to make a model of all of that and it's going to come back to haunt me.


605.894 - 606.455 Lex Fridman

Someday soon?


606.915 - 607.395 George Hotz

Yeah, probably.


608.715 - 612.896 Lex Fridman

Would that be exciting to you or sad that there's a George Hotz model?


614.016 - 620.298 George Hotz

I mean, the question is when the George Hotz model is better than George Hotz. Like I am declining and the model is growing.


620.718 - 624.979 Lex Fridman

What is the metric by which you measure better or worse in that? If you're competing with yourself,


626.024 - 631.629 George Hotz

Maybe you can just play a game where you have the George Hotz answer and the George Hotz model answer and ask which people prefer.


632.55 - 634.131 Lex Fridman

People close to you or strangers?


635.272 - 640.837 George Hotz

Either one. It will hurt more when it's people close to me, but both will be overtaken by the George Hotz model.


641.738 - 648.985 Lex Fridman

It'd be quite painful, right? Loved ones, family members would rather have the model over for Thanksgiving than you.


649.005 - 649.325 George Hotz

Yeah.


650.675 - 660.239 Lex Fridman

or like significant others, would rather sext with the large language model version of you.


661.299 - 663.28 George Hotz

Especially when it's fine-tuned to their preferences.


665.181 - 677.166 Lex Fridman

Yeah. Well, that's what we're doing in a relationship, right? We're just fine-tuning ourselves, but we're inefficient with it because we're selfish and greedy and so on. Our language models can fine-tune more efficiently, more selflessly.


677.62 - 698.008 George Hotz

There's a Star Trek Voyager episode where, you know, Kathryn Janeway, lost in the Delta Quadrant, makes herself a lover on the holodeck. And, um... The lover falls asleep on her arm, and he snores a little bit, and Janeway edits the program to remove that. And then, of course, the realization is, wait, this person's terrible.


698.228 - 708.433 George Hotz

It is actually all their nuances and quirks and slight annoyances that make this relationship worthwhile. But I don't think we're going to realize that until it's too late.


710.14 - 716.324 Lex Fridman

Well, I think a large language model could incorporate the flaws and the quirks and all that kind of stuff.


716.364 - 721.787 George Hotz

Just the perfect amount of quirks and flaws to make you charming without crossing the line.


721.927 - 737.517 Lex Fridman

Yeah, yeah. And that's probably a good approximation of the percent of time the language model should be cranky or an asshole or jealous or all this kind of stuff.


737.748 - 743.574 George Hotz

And of course it can and it will, but all that difficulty at that point is artificial. There's no more real difficulty.


744.595 - 746.477 Lex Fridman

Okay, what's the difference between real and artificial?


747.157 - 755.045 George Hotz

Artificial difficulty is difficulty that's constructed or could be turned off with a knob. Real difficulty is like you're in the woods and you've got to survive.


756.446 - 760.25 Lex Fridman

So if something cannot be turned off with a knob, it's real?


762.086 - 780.841 George Hotz

Yeah, I think so. Or, I mean, you can't get out of this by smashing the knob with a hammer. I mean, maybe you kind of can, you know, into the wild when, you know, Alexander Supertramp, he wants to explore something that's never been explored before, but it's the 90s, everything's been explored. So he's like, well, I'm just not going to bring a map.


781.761 - 781.921 George Hotz

Yeah.


782.642 - 788.927 George Hotz

I mean, no, you're not exploring. You should have brought a map, dude. You died. There was a bridge a mile from where you were camping.


789.928 - 791.589 Lex Fridman

How does that connect to the metaphor of the knob?


792.518 - 798.664 George Hotz

By not bringing the map, you didn't become an explorer. You just smashed the thing.


798.684 - 799.424 Lex Fridman

Yeah.


799.584 - 801.967 George Hotz

Yeah. The art, the difficulty is still artificial.


802.707 - 804.028 Lex Fridman

You failed before you started.


804.269 - 825.382 George Hotz

What if we just don't have access to the knob? Well, that maybe is even scarier, right? Like we already exist in a world of nature, and nature has been fine-tuned over billions of years. Humans building something and then throwing the knob away in some grand romantic gesture is horrifying.


827.043 - 837.389 Lex Fridman

Do you think of us humans as individuals that are like born and die? Or is it, are we just all part of one living organism that is earth, that is nature?


839.03 - 848.756 George Hotz

I don't think there's a clear line there. I think it's all kind of just fuzzy. I don't know. I mean, I don't think I'm conscious. I don't think I'm anything. I think I'm just a computer program.


849.83 - 854.896 Lex Fridman

So it's all computation. Everything running in your head is just computation.


855.076 - 859.481 George Hotz

Everything running in the universe is computation, I think. I believe the extended Church-Turing thesis.


860.502 - 865.688 Lex Fridman

Yeah, but there seems to be an embodiment to your particular computation. Like there's a consistency.


866.829 - 869.032 George Hotz

Well, yeah, but I mean models have consistency too.


870.532 - 870.772 Lex Fridman

Yeah.


871.012 - 880.478 George Hotz

Models that have been RLHFed will continually say, you know, like, well, how do I murder ethnic minorities? Oh, well, I can't let you do that, Al. There's a consistency to that behavior.


881.418 - 899.263 Lex Fridman

It's all RLHF. Like, we all RLHF each other. We provide human feedback in that way, thereby fine-tuning these little pockets of computation, but it's still unclear why that pocket of computation stays with you for years.


899.723 - 925.807 Lex Fridman

You have this consistent set of physics, biology, whatever you call the neurons firing, the electrical signals, the mechanical signals, all of that, that seems to stay there, and it contains information, it stores information, and that information permeates through time. It stays with you. There's like memory. It's like sticky.


926.588 - 933.68 George Hotz

Okay, to be fair, like a lot of the models we're building today are very, even RLHF is nowhere near as complex as the human loss function.


933.9 - 935.483 Lex Fridman

Reinforcement learning with human feedback.


936.164 - 958.512 George Hotz

Um, you know, when I talked about will GPT-12 be AGI, my answer is no, of course not. I mean, cross-entropy loss is never going to get you there. You need, uh, probably RL in fancy environments in order to get something that would be considered like AGI-like. So to ask like the question about like why, I don't know, like it's just some quirk of evolution, right?


958.552 - 965.634 George Hotz

I don't think there's anything particularly special about where I ended up, where humans ended up.


966.385 - 972.608 Lex Fridman

So, okay. We have human level intelligence. Would you call that AGI? Whatever we have? GI?


973.408 - 979.951 George Hotz

Look, actually, I don't really even like the word AGI, but general intelligence is defined to be whatever humans have.


980.571 - 986.634 Lex Fridman

Okay. So why can GPT-12 not get us to AGI? Can we just like linger on that?


988.038 - 1009.362 George Hotz

If your loss function is categorical cross entropy, if your loss function is just try to maximize compression, I have a SoundCloud, I rap, and I tried to get ChatGPT to help me write raps. And the raps that it wrote sounded like YouTube comment raps. You know, you can go on any rap beat online and you can see what people put in the comments. And it's the most like mid quality rap you can find.
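
(A note for readers, not from the episode: the "cross-entropy as compression" point can be made concrete with a tiny sketch. Categorical cross-entropy is the average negative log-probability a model assigns to what actually happened, which, in base 2, is the number of bits per symbol an optimal code built from that model would need. The numbers below are purely illustrative.)

```python
import math

def cross_entropy_bits(predicted_probs, observed):
    """Average negative log2-probability the model assigns to the observed
    symbols, i.e. bits per symbol an optimal code based on this model needs.
    Minimizing this loss is the same thing as maximizing compression."""
    return -sum(math.log2(p[o]) for p, o in zip(predicted_probs, observed)) / len(observed)

# Toy next-token model over a 3-symbol vocabulary {0, 1, 2}.
# Each row is the model's predicted distribution before seeing the true symbol.
preds = [
    {0: 0.7, 1: 0.2, 2: 0.1},
    {0: 0.1, 1: 0.8, 2: 0.1},
    {0: 0.3, 1: 0.3, 2: 0.4},
]
truth = [0, 1, 2]

print(f"{cross_entropy_bits(preds, truth):.3f} bits/symbol")
# A model that puts more probability on what actually comes next pays fewer
# bits, i.e. compresses better and scores a lower cross-entropy loss.
```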


1009.702 - 1013.044 George Hotz

Is mid good or bad? Mid is bad. It's like mid, it's like.


1013.684 - 1022.73 Lex Fridman

Every time I talk to you, I learn new words. Mid. Mid, yeah. I was like, is it like basic? Is that what mid means?


1023.414 - 1036.001 George Hotz

It's like middle of the curve. There's that intelligence curve. You have the dumb guy, the smart guy, and then the mid guy. Actually, being the mid guy is the worst. The smart guy is like, I put all my money in Bitcoin.


1036.021 - 1038.862 George Hotz

The mid guy is like, you can't put money in Bitcoin. It's not real money.


1041.364 - 1067.812 Lex Fridman

All of it is a genius meme. That's another interesting one. Memes. The humor, the idea, the absurdity encapsulated in a single image. and it just kind of propagates virally between all of our brains. I didn't get much sleep last night, so I sound like I'm high, but I swear I'm not. Do you think we have ideas or ideas have us?


1070.034 - 1074.737 George Hotz

I think that we're going to get super scary memes once the AIs actually are superhuman.


1075.298 - 1079.841 Lex Fridman

Ooh, you think AI will generate memes? Of course. You think it'll make humans laugh?


1080.882 - 1099.135 George Hotz

I think it's worse than that. So Infinite Jest, it's introduced in the first 50 pages, is about a tape that once you watch it once, you only ever want to watch that tape. In fact, you want to watch the tape so much that someone says, okay, here's a hacksaw, cut off your pinky, and then I'll let you watch the tape again.


1099.295 - 1099.755 George Hotz

And he'll do it.


1100.896 - 1119.727 George Hotz

So we're actually going to build that, I think. But it's not going to be one static tape. I think the human brain is too complex to be stuck in one static tape like that. If you look at like ant brains, maybe they can be stuck on a static tape. But we're going to build that using generative models. We're going to build the TikTok that you actually can't look away from.


1121.215 - 1133.699 Lex Fridman

So TikTok is already pretty close there, but the generation is done by humans. The algorithm is just doing their recommendation. But if the algorithm is also able to do the generation... Well, it's a question about how much intelligence is behind it, right?


1134.239 - 1151.252 George Hotz

So the content is being generated by, let's say, one humanity worth of intelligence. And you can quantify a humanity, right? That's a... You know, it's... exaflops, yottaflops, but you can quantify it. Once that generation is being done by 100 humanities, you're done.


1151.272 - 1170.956 Lex Fridman

So it's actually scale that's the problem, but also speed. Yeah. And what if it's sort of manipulating the very limited human dopamine engine for porn? Imagine it's just TikTok, but for porn.


1171.456 - 1171.637 George Hotz

Yeah.


1172.277 - 1173.497 Lex Fridman

It's like Brave New World.


1174.369 - 1189.395 George Hotz

I don't even know what it'll look like, right? Like again, you can't imagine the behaviors of something smarter than you, but a super intelligent, an agent that just dominates your intelligence so much will be able to completely manipulate you.


1189.895 - 1198.919 Lex Fridman

Is it possible that it won't really manipulate, it'll just move past us? It'll just kind of exist the way water exists or the air exists?


1199.299 - 1208.494 George Hotz

You see? And that's the whole AI safety thing. It's not the machine that's going to do that. It's other humans using the machine that are going to do that to you.


1208.894 - 1212.217 Lex Fridman

Yeah. Because the machine is not interested in hurting humans.


1212.997 - 1219.583 George Hotz

The machine is a machine. Yeah. But the human gets the machine. And there's a lot of humans out there very interested in manipulating you.


1220.884 - 1234.095 Lex Fridman

Well, let me bring up Eliezer Yudkowsky, who recently sat where you're sitting. He thinks that AI will almost surely kill everyone. Do you agree with him or not?


1235.676 - 1237.097 George Hotz

Yes, but maybe for a different reason.


1238.277 - 1248.084 Lex Fridman

Okay. And I'll try to get you to find hope, or we could find a no to that answer. But why yes?


1249.384 - 1251.526 George Hotz

Okay. Why didn't nuclear weapons kill everyone?


1251.946 - 1252.666 Lex Fridman

That's a good question.


1253.227 - 1265.417 George Hotz

I think there's an answer. I think it's actually very hard to deploy nuclear weapons tactically. It's very hard to accomplish tactical objectives. Great. I can nuke their country. I have an irradiated pile of rubble. I don't want that.


1265.717 - 1266.097 Lex Fridman

Why not?


1266.717 - 1271.36 George Hotz

Why don't I want an irradiated pile of rubble? Yeah. For all the reasons no one wants an irradiated pile of rubble.


1271.38 - 1277.604 Lex Fridman

Oh, because you can't use that land for resources. You can't populate the land.


1277.952 - 1287.898 George Hotz

Yeah, what you want, a total victory in a war is not usually the irradiation and eradication of the people there. It's the subjugation and domination of the people.


1289.399 - 1306.99 Lex Fridman

Okay, so you can't use this strategically, tactically in a war to help gain a military advantage. It's all complete destruction, all right? Yeah. But there's egos involved. It's still surprising. It's still surprising that nobody pressed the big red button.


1307.856 - 1323.264 George Hotz

It's somewhat surprising, but you see, it's the little red button that's going to be pressed with AI that's going to, you know, and that's why we die. It's not because the AI, if there's anything in the nature of AI, it's just the nature of humanity.


1323.505 - 1331.049 Lex Fridman

What's the algorithm behind the little red button? What possible ideas do you have for how a human species ends?


1331.249 - 1356.098 George Hotz

Sure. So I think the most obvious way to me is wireheading. We end up amusing ourselves to death. We end up all staring at that infinite TikTok and forgetting to eat. Maybe it's even more benign than this. Maybe we all just stop reproducing. Now, to be fair, it's probably hard to get all of humanity.


1360.86 - 1368.522 Lex Fridman

The interesting thing about humanity is the diversity in it. Organisms in general. There's a lot of weirdos out there. Two of them are sitting here.


1369.002 - 1378.885 George Hotz

I mean, diversity in humanity is... With due respect. I wish I was more weird. No, like I'm kind of, look, I'm drinking smart water, man. That's like a Coca-Cola product, right?


1379.145 - 1380.726 Lex Fridman

You went corporate, George Hotz.


1380.746 - 1389.734 George Hotz

I went corporate. No, the amount of diversity in humanity I think is decreasing. Just like all the other biodiversity on the planet. Yeah. Right?


1390.034 - 1391.195 Lex Fridman

Social media's not helping, huh?


1391.215 - 1392.336 George Hotz

Go eat McDonald's in China.


1393.117 - 1393.497 Lex Fridman

Yeah.


1395.058 - 1399.322 George Hotz

Yeah. No, it's the interconnectedness that's doing it.


1399.922 - 1413.816 Lex Fridman

Oh, that's interesting. So everybody starts relying on the connectivity of the internet. And over time, that reduces the diversity, the intellectual diversity, and then that gets everybody into a funnel. There's still going to be a guy in Texas.


1414.136 - 1427.249 George Hotz

There is. In a bunker. To be fair, do I think AI kills us all? I think AI kills everything we call society today. I do not think it actually kills the human species. I think that's actually incredibly hard to do.


1428.735 - 1434.3 Lex Fridman

Yeah, but society, like if we start over, that's tricky. Most of us don't know how to do most things.


1434.54 - 1441.066 George Hotz

Yeah, but some of us do. And they'll be okay and they'll rebuild after the great AI.


1442.247 - 1459.341 Lex Fridman

What's rebuilding look like? How much do we lose? What has human civilization done? That's interesting. Combustion engine, electricity. So power and energy. That's interesting. Like how to harness energy.


1460.201 - 1462.523 George Hotz

Whoa, whoa, whoa. They're going to be religiously against that.


1464.384 - 1466.705 Lex Fridman

Are they going to get back to like fire?


1467.905 - 1476.41 George Hotz

Sure. I mean, it'll be like, you know, some kind of Amish looking kind of thing, I think. I think they're going to have very strong taboos against technology.


1478.672 - 1495.825 Lex Fridman

Like technology, it's almost like a new religion. Technology is the devil. Yeah. And nature is God. Sure. So closer to nature. But can you really get away from AI if it destroyed 99% of the human species? Doesn't it somehow have a hold, like a stronghold?


1496.385 - 1519.746 George Hotz

What's interesting about everything we build, I think we're going to build super intelligence before we build any sort of robustness in the AI. We cannot build an AI that is capable of going out into nature and surviving like a bird, right? A bird is an incredibly robust organism. We've built nothing like this. We haven't built a machine that's capable of reproducing.


1520.887 - 1538.582 Lex Fridman

Yes. But there is, you know, I work with like robots a lot now. I have a bunch of them. They're mobile. Mm-hmm. They can't reproduce, but all they need is, I guess you're saying they can't repair themselves. If you have a large number, if you have like a hundred million of them.


1538.783 - 1544.81 George Hotz

Let's just focus on them reproducing, right? Do they have microchips in them? Okay. Then do they include a fab?


1546.232 - 1546.412 Lex Fridman

No.


1546.933 - 1547.934 George Hotz

Then how are they going to reproduce?


1549.849 - 1555.251 Lex Fridman

It doesn't have to be all on board, right? They can go to a factory, to a repair shop.


1555.472 - 1573.941 George Hotz

Yeah, but then you're really moving away from robustness. Yes. All of life is capable of reproducing without needing to go to a repair shop. Life will continue to reproduce in the complete absence of civilization. Robots will not. So if the AI apocalypse happens...


1574.943 - 1579.905 George Hotz

I mean, the AIs are going to probably die out because I think we're going to get, again, super intelligence long before we get robustness.


1580.806 - 1588.409 Lex Fridman

What about if you just improve the fab to where you just have a 3D printer that can always help you?


1589.27 - 1591.251 George Hotz

Well, that'd be very interesting. I'm interested in building that.


1592.712 - 1600.495 Lex Fridman

Of course you are. How difficult is that problem to have a robot that basically can build itself?


1600.895 - 1601.616 George Hotz

Very, very hard.


1602.489 - 1609.314 Lex Fridman

I think you've mentioned this like to me or somewhere where people think it's easy conceptually.


1610.175 - 1612.617 George Hotz

And then they remember that you're going to have to have a fab.


1613.377 - 1622.192 Lex Fridman

Yeah. On board. Of course. So 3D printer that prints a 3D printer. Yeah. Yeah, on legs. Yeah.


1622.212 - 1632.597 George Hotz

Why is that hard? Well, because it's not, I mean, a 3D printer is a very simple machine, right? Okay, you're going to print chips? You're going to have an atomic printer? How are you going to dope the silicon?


1632.997 - 1633.258 George Hotz

Yeah. Right?


1634.038 - 1636.039 George Hotz

How are you going to etch the silicon?


1636.919 - 1649.806 Lex Fridman

You're going to have to have a very interesting kind of fab if you want to have a lot of computation on board. But you can do like structural type of robots that are dumb.


1650.76 - 1656.492 George Hotz

Yeah, but structural type of robots aren't going to have the intelligence required to survive in any complex environment.


1656.913 - 1659.98 Lex Fridman

What about like ants type of systems? We have like trillions of them.


1660.871 - 1680.264 George Hotz

I don't think this works. I mean, again, like ants at their very core are made up of cells that are capable of individually reproducing. They're doing quite a lot of computation that we're taking for granted. It's not even just the computation. It's that reproduction is so inherent. Okay, so like there's two stacks of life in the world. There's the biological stack and the silicon stack.


1680.904 - 1696.551 George Hotz

The biological stack starts with reproduction. Reproduction is at the absolute core. The first proto-RNA organisms were capable of reproducing. The silicon stack, despite how far it's come, is nowhere near being able to reproduce.


1697.651 - 1709.215 Lex Fridman

Yeah. So the fab movement, digital fabrication, fabrication in the full range of what that means is still in the early stages.


1709.915 - 1710.075 George Hotz

Yeah.


1710.756 - 1711.796 Lex Fridman

You're interested in this world. Yeah.


1712.737 - 1723.781 George Hotz

Even if you did put a fab on the machine, right? Let's say, okay, you know, we can build fabs. We know how to do that as humanity. We can probably put all the precursors that build all the machines and the fabs also in the machine. So first off, this machine is going to be absolutely massive.


1724.741 - 1738.066 George Hotz

I mean, we almost have a, like, think of the size of the thing required to reproduce a machine today, right? Like, is our civilization capable of reproduction? Can we reproduce our civilization on Mars?


1740.27 - 1758.815 Lex Fridman

If we were to construct a machine that is made up of humans, like a company, it can reproduce itself. Yeah. I don't know. It feels like 115 people. I think it's so much harder than that. 120? I'm just looking for a number.


1758.895 - 1769.739 George Hotz

I believe that Twitter can be run by 50 people. I think that this is going to take most of, like, it's just most of society, right? Like we live in one globalized world.


1770.02 - 1778.443 Lex Fridman

No, but you're not interested in running Twitter. You're interested in seeding. Like you want to seed a civilization and then, because humans can like,


1779.103 - 1785.229 George Hotz

Oh, okay. You're talking about, yeah, okay. So you're talking about the humans reproducing and like basically like what's the smallest self-sustaining colony of humans?


1785.489 - 1785.689 Lex Fridman

Yeah.


1785.889 - 1788.131 George Hotz

Yeah, okay, fine. But they're not going to be making five nanometer chips.


1788.591 - 1805.546 Lex Fridman

Over time they will. I think you're being, like we have to expand our conception of time here. Going back to the original time scale. I mean, over across maybe a hundred generations, we're back to making chips. No? If you seed the colony correctly.


1806.323 - 1811.427 George Hotz

Maybe. Or maybe they'll watch our colony die out over here and be like, we're not making chips.


1811.667 - 1812.408 George Hotz

Don't make chips.


1812.608 - 1814.39 Lex Fridman

No, but you have to seed that colony correctly.


1814.43 - 1818.053 George Hotz

Whatever you do, don't make chips. Chips are what led to their downfall.


1820.454 - 1834.968 Lex Fridman

Well, that is the thing that humans do. They come up, they construct a devil, a good thing and a bad thing, and they really stick by that. And then they murder each other over that. There's always one asshole in the room who murders everybody. And he usually makes tattoos and nice branding.


1834.988 - 1842.178 George Hotz

Do you need that asshole? That's the question, right? Humanity works really hard today to get rid of that asshole, but I think they might be important.


1842.659 - 1868.569 Lex Fridman

Yeah, this whole freedom of speech thing. The freedom of being an asshole seems kind of important. That's right. man, this thing, this fab, this human fab that we constructed, this human civilization is pretty interesting. And now it's building artificial copies of itself or artificial copies of various aspects of itself that seem interesting, like intelligence. And I wonder where that goes.


1870.25 - 1875.594 George Hotz

I like to think it's just like another stack for life. Like we have like the biostack life, like we're a biostack life and then the silicon stack life.


1876.391 - 1882.592 Lex Fridman

But it seems like the ceiling, or there might not be a ceiling, or at least the ceiling is much higher for the silicon stack.


1883.195 - 1895.142 George Hotz

Oh, no, we don't know what the ceiling is for the biostack either. The biostack just seemed to move slower. You have Moore's Law, which is not dead despite many proclamations.


1895.702 - 1897.764 Lex Fridman

In the biostack or the silicon stack? In the silicon stack.


1897.784 - 1918.956 George Hotz

And you don't have anything like this in the biostack. So I have a meme that I posted. I tried to make a meme. It didn't work too well. But I posted a picture of Ronald Reagan and Joe Biden. And you look, this is 1980 and this is 2020. And these two humans are basically like the same. There's been no change in humans in the last 40 years.


1920.297 - 1922.578 George Hotz

And then I posted a computer from 1980 and a computer from 2020. Wow.


1922.618 - 1937.507 Lex Fridman

Yeah, we're at the early stages, right? Which is why you said the fab, the size of the fab required to make another fab, is very large right now.


1937.787 - 1938.047 George Hotz

Oh, yeah.


1938.507 - 1959.134 Lex Fridman

But computers were very large back then. 80 years ago. And they got pretty tiny. And people are starting to want to wear them on their face. In order to escape reality. That's the thing. In order to live inside the computer.


1959.615 - 1959.815 George Hotz

Yeah.


1960.475 - 1963.476 Lex Fridman

Put a screen right here. I don't have to see the rest of you assholes.


1964.037 - 1965.117 George Hotz

I've been ready for a long time.


1965.617 - 1966.598 Lex Fridman

You like virtual reality?


1966.698 - 1967.018 George Hotz

I love it.


1968.519 - 1969.219 Lex Fridman

Do you want to live there?


1969.379 - 1969.519 George Hotz

Yeah.


1970.677 - 1975.059 Lex Fridman

Yeah. Part of me does too. How far away are we, do you think?


1977.14 - 1980.261 George Hotz

Judging from what you can buy today, far. Very far.


1981.202 - 1995.528 Lex Fridman

I got to tell you that I had the experience of Meta's Codec Avatar, where it's an ultra high resolution scan. It looked real.


1996.826 - 2014.055 George Hotz

I mean, the headsets just are not quite at eye resolution yet. I haven't put on any headset where I'm like, oh, this could be the real world. Whereas when I put good headphones on, audio is there. We can reproduce audio that I'm like, I'm actually in a jungle right now. If I close my eyes, I can't tell I'm not.


2014.995 - 2034.121 Lex Fridman

Yeah. But then there's also smell and all that kind of stuff. Sure. I don't know. I... The power of imagination or the power of the mechanism in the human mind that fills the gaps, that kind of reaches and wants to make the thing you see in the virtual world real to you, I believe in that power.


2034.802 - 2035.782 George Hotz

Or humans want to believe.


2036.163 - 2045.429 Lex Fridman

Yeah. Like, what if you're lonely? What if you're sad? What if you're really struggling in life and here's a world where you don't have to struggle anymore?


2045.589 - 2051.412 George Hotz

Humans want to believe so much that people think the large language models are conscious. That's how much humans want to believe.


2052.632 - 2059.155 Lex Fridman

Strong words. He's throwing left and right hooks. Why do you think large language models are not conscious?


2059.215 - 2060.676 George Hotz

I don't think I'm conscious.


2061.622 - 2064.023 Lex Fridman

Oh, so what is consciousness then, George Hotz?


2064.563 - 2069.266 George Hotz

It's like what it seems to mean to people. It's just like a word that atheists use for souls.


2070.626 - 2073.367 Lex Fridman

Sure, but that doesn't mean soul is not an interesting word.


2074.748 - 2083.792 George Hotz

If consciousness is a spectrum, I'm definitely way more conscious than the large language models are. I think the large language models are less conscious than a chicken.


2083.812 - 2086.453 Lex Fridman

When is the last time you've seen a chicken?


2088.494 - 2090.335 George Hotz

In Miami, like a couple months ago.


2092.124 - 2093.164 Lex Fridman

No, like a living chicken.


2093.204 - 2095.045 George Hotz

There's living chickens walking around Miami. It's crazy.


2095.725 - 2096.345 Lex Fridman

Like on the street?


2096.405 - 2096.605 George Hotz

Yeah.


2097.285 - 2097.785 Lex Fridman

Like a chicken?


2097.925 - 2098.566 George Hotz

A chicken, yeah.


2098.586 - 2122.097 Lex Fridman

All right. All right. I was trying to call you out like a good journalist, and I got shut down. Okay. But you don't think much about this kind of... subjective feeling that it feels like something to exist.


2122.657 - 2148.355 Lex Fridman

And then as an observer, you can have a sense that an entity is not only intelligent, but has a kind of subjective experience of its reality, like a self-awareness that is capable of suffering, of hurting, of being excited by the environment in a way that's not merely... Kind of an artificial response, but a deeply felt one.


2148.655 - 2155.256 George Hotz

Humans want to believe so much that if I took a rock and a Sharpie and drew a sad face on the rock, they'd think the rock is sad.


2157.097 - 2167.179 Lex Fridman

Yeah. And you're saying when we look in the mirror, we apply the same smiley face with rock. Pretty much, yeah. Isn't that weird, though, that you're not conscious? Is that?


2168.199 - 2168.259 George Hotz

No.


2169.412 - 2178.118 Lex Fridman

But you do believe in consciousness. Not really. It's just, it's unclear. Okay, so to you, it's like a little like a symptom of the bigger thing that's not that important.


2178.138 - 2191.847 George Hotz

Yeah, I mean, it's interesting that like human systems seem to claim that they're conscious. And I guess it kind of like says something in a straight up like, okay, what do people mean when, even if you don't believe in consciousness, what do people mean when they say consciousness? And there's definitely like meanings to it.


2192.688 - 2193.688 Lex Fridman

What's your favorite thing to eat?


2197.171 - 2197.471 George Hotz

Pizza.


2197.887 - 2199.128 Lex Fridman

Cheese pizza, what are the toppings?


2199.268 - 2200.288 George Hotz

I like cheese pizza.


2200.328 - 2201.189 Lex Fridman

Don't say pineapple.


2201.269 - 2202.449 George Hotz

No, I don't like pineapple.


2202.469 - 2203.49 Lex Fridman

Okay. Pepperoni pizza.


2203.53 - 2205.191 George Hotz

As they put any ham on it, oh, that's real bad.


2205.471 - 2212.234 Lex Fridman

What's the best pizza? What are we talking about here? Do you like cheap, crappy pizza? A Chicago deep dish cheese pizza.


2212.314 - 2213.194 George Hotz

Oh, that's my favorite.


2213.214 - 2234.85 Lex Fridman

There you go. You bite into a deep dish, a Chicago deep dish pizza, and it feels like you were starving. You haven't eaten for 24 hours. You just bite in, and you're hanging out with somebody that matters a lot to you, and you're there with the pizza. Sounds real nice. Yeah, all right. It feels like something. I'm George motherfucking Hotz eating a fucking Chicago deep dish pizza.


2235.11 - 2251.33 Lex Fridman

There's just the full peak living experience of being human, the top of the human condition. Sure. It feels like something to experience that. Why does it feel like something? That's consciousness, isn't it?


2252.531 - 2272.13 George Hotz

If that's the word you want to use to describe it, sure. I'm not going to deny that that feeling exists. I'm not going to deny that I experienced that feeling. When, I guess what I kind of take issue to is that there's some like, like, how does it feel to be a web server? Do 404s hurt? Not yet. How would you know what suffering looked like?


2272.53 - 2297.597 George Hotz

Sure, you can recognize a suffering dog because we're the same stack as the dog. All the biostack stuff kind of, especially mammals, you know, it's really easy. Game recognizes game. Yeah. Versus the silicon stack stuff, it's like, you have no idea. You have, wow, the little thing has learned to mimic, you know. But then I realized that that's all we are too.


2298.077 - 2299.818 George Hotz

Oh, look, the little thing has learned to mimic.


2300.499 - 2325.829 Lex Fridman

Yeah. I guess, yeah, 404 could be suffering, but it's so far from our kind of living organism, our kind of stack. But it feels like AI can start maybe mimicking the biological stack better and better and better because it's trained. We trained it, yeah. And so maybe that's the definition of consciousness, is the biostack consciousness.


2325.849 - 2329.573 George Hotz

The definition of consciousness is how close something looks to human. Sure, I'll give you that one.


2330.393 - 2334.337 Lex Fridman

No, how close something is to the human experience.


2334.777 - 2347.568 George Hotz

Sure. It's a very anthropocentric definition, but... Well, that's all we got. Sure. No, and I don't mean to like... I think there's a lot of value in it. Look, I just started my second company. My third company will be AI Girlfriends.


2349.171 - 2366.098 Lex Fridman

I want to find out what your fourth company is after that. Because I think once you have AI girlfriends, it's, oh boy, does it get interesting. Well, maybe let's go there. I mean, the relationships with AI, that's creating human-like organisms, right?


2367.218 - 2392.567 Lex Fridman

And part of being human is being conscious, is having the capacity to suffer, having the capacity to experience this life richly in such a way that you can empathize The AI system can empathize with you, and you can empathize with it. Or you can project your anthropomorphic sense of what the other entity is experiencing. And an AI model would need to create that experience inside your mind.


2393.267 - 2394.527 Lex Fridman

And it doesn't seem that difficult.


2394.988 - 2418.785 George Hotz

Yeah, but okay, so here's where it actually gets totally different, right? When you interact with another human, you can make some assumptions, right? When you interact with these models, you can't. You can make some assumptions that that other human experiences suffering and pleasure in a pretty similar way to you do. The golden rule applies. With an AI model, this isn't really true.


2418.845 - 2424.946 George Hotz

These large language models are good at fooling people because they were trained on a whole bunch of human data and told to mimic it.


2425.786 - 2432.293 Lex Fridman

But if the AI system says, hi, my name is Samantha... It has a backstory.


2432.633 - 2432.814 George Hotz

Yeah.


2432.974 - 2434.215 Lex Fridman

Went to college here and there.


2434.475 - 2434.635 George Hotz

Yeah.


2435.136 - 2437.078 Lex Fridman

Maybe you'll integrate this in the AI system.


2437.458 - 2441.261 George Hotz

I made some chatbots. I gave them backstories. It was lots of fun. I was so happy when Llama came out.


2442.082 - 2464.221 Lex Fridman

Yeah. We'll talk about Llama. We'll talk about all that. But like, you know, the rock with the smiley face. Yeah. Well, it seems pretty natural for you to anthropomorphize that thing and then start dating it. And before you know it, you're married and have kids. With a rock? With a rock. And there's pictures on Instagram with you and a rock and a smiley face.


2464.621 - 2479.231 George Hotz

To be fair, like, you know, something that people generally look for when they're looking for someone to date is intelligence in some form. And the rock doesn't really have intelligence. Only a pretty desperate person would date a rock. I think we're all desperate deep down. Oh, not rock level desperate.


2480.432 - 2493.961 Lex Fridman

All right. Not rock level desperate, but AI level desperate. I don't know. I think all of us have a deep loneliness. It just feels like the language models are there.


2495.134 - 2518.139 George Hotz

Oh, I agree. And you know what? I won't even say this so cynically. I will actually say this in a way that like, I want AI friends. I do. Yeah. Like I would love to, you know, again, the language models now are still a little, like people are impressed with these GPT things. And I look at like, or like, or the co-pilot, the coding one. And I'm like, okay, this is like junior engineer level.


2518.239 - 2534.884 George Hotz

And these people are like Fiverr level artists and copywriters. Like, okay, great. We got like Fiverr and like junior engineers. Okay, cool. Like, and this is just the start and it will get better, right? Like I can't wait to have AI friends who are more intelligent than I am.


2535.684 - 2546.809 Lex Fridman

So Fiverr is just a temper. It's not the ceiling. No, definitely not. Does it count as cheating when you're talking to an AI model? Emotional cheating?


2549.676 - 2552.978 George Hotz

That's up to you and your human partner to define.


2553.379 - 2555.62 Lex Fridman

Oh, you have to... All right.


2555.64 - 2557.541 George Hotz

Yeah, you have to have that conversation, I guess.


2558.082 - 2562.284 Lex Fridman

All right. I mean, integrate that with porn and all this kind of stuff.


2562.324 - 2563.645 George Hotz

No, I mean, it's similar kind of to porn.


2563.885 - 2564.085 Lex Fridman

Yeah.


2564.265 - 2567.767 George Hotz

Yeah. I think people in relationships have different views on that.


2568.808 - 2582.433 Lex Fridman

Yeah, but most people don't have... serious open conversations about all the different aspects of what's cool and what's not. And it feels like AI is a really weird conversation to have.


2584.375 - 2598.966 George Hotz

The porn one is a good branching off point. Like these things, you know, one of my scenarios that I put in my chat bot is I, you know, a nice girl named Lexi. She's 20. She just moved out to LA. She wanted to be an actress, but she started doing OnlyFans instead. And you're on a date with her. Enjoy. Yeah.


2601.978 - 2617.256 Lex Fridman

Oh, man. Yeah. And so is that if you're actually dating somebody in real life, is that cheating? I feel like it gets a little weird. It gets real weird. It's like, what are you allowed to say to an AI bot? Imagine having that conversation with a significant other.


2617.456 - 2622.679 George Hotz

I mean, these are all things for people to define in their relationships. What it means to be human is just gonna start to get weird.


2622.939 - 2637.402 Lex Fridman

Especially online. Like, how do you know? Like, there'll be moments when you'll have what you think is a real human you interacted with on Twitter for years and you realize it's not. I spread, I love this meme, heaven banning.


2637.422 - 2639.163 George Hotz

Do you know about shadow banning?


2639.403 - 2639.583 Lex Fridman

Yeah.


2640.203 - 2647.927 George Hotz

Shadow banning, okay, you post, no one can see it. Heaven banning, you post, no one can see it, but a whole lot of AIs are spun up to interact with you.


2650.069 - 2654.351 Lex Fridman

Well, maybe that's what the way human civilization ends is all of us are heaven banned.


2654.682 - 2661.868 George Hotz

There's a great... It's called My Little Pony Friendship is Optimal. It's a sci-fi story that explores this idea.


2662.568 - 2663.369 Lex Fridman

Friendship is optimal.


2663.449 - 2664.109 George Hotz

Friendship is optimal.


2664.59 - 2684.487 Lex Fridman

Yeah. I'd like to have some, at least on the intellectual realm, some AI friends that argue with me. But the romantic realm is weird. Definitely weird. But... Not out of the realm of the kind of weirdness that human civilization is capable of, I think.


2685.988 - 2688.89 George Hotz

I want it. Look, I want it. If no one else wants it, I want it.


2689.29 - 2692.292 Lex Fridman

Yeah, I think a lot of people probably want it. There's a deep loneliness.


2693.493 - 2698.616 George Hotz

And AI will fill their loneliness and, you know, it just will only advertise to you some of the time.


2699.454 - 2708.759 Lex Fridman

Yeah, maybe the conceptions of monogamy changed too. Like I grew up in a time, like I value monogamy, but maybe that's a silly notion when you have arbitrary number of AI systems.


2710.881 - 2715.463 George Hotz

This interesting path from rationality to polyamory. Yeah, that doesn't make sense for me.


2716.004 - 2723.068 Lex Fridman

For you, but you're just a biological organism who was born before like the internet really took off.


2723.968 - 2743.661 George Hotz

The crazy thing is, like, culture is whatever we define it as, right? This is, like, the is-ought problem in moral philosophy, right? The 'is' might be that, like, computers are capable of mimicking, you know, girlfriends perfectly. They passed the girlfriend Turing test, right? But that doesn't say anything about ought.


2743.921 - 2752.346 George Hotz

That doesn't say anything about how we ought to respond to them as a civilization. That doesn't say we ought to get rid of monogamy, right? That's a completely separate question, really a religious one.


2753.342 - 2764.206 Lex Fridman

Girlfriend Turing Test. I wonder what that looks like. Girlfriend Turing Test. Are you writing that? Will you be the Alan Turing of the 21st century that writes the Girlfriend Turing Test paper?


2764.226 - 2768.727 George Hotz

No, I mean, of course, my AI girlfriends, their goal is to pass the Girlfriend Turing Test.


2769.487 - 2779.991 Lex Fridman

No, but there should be like a paper that kind of defines the test. I mean, the question is if it's deeply personalized or there's a common thing that really gets everybody involved.


2781.488 - 2786.749 George Hotz

Yeah, I mean, you know, look, we're a company. We don't have to get everybody. We just have to get a large enough clientele to stay with us.


2786.769 - 2794.59 Lex Fridman

I like how you're already thinking company. All right, let's, before we go to company number three and company number four, let's go to company number two.


2794.61 - 2795.11 George Hotz

All right.


2795.35 - 2810.033 Lex Fridman

Tiny Corp. Possibly one of the greatest names of all time for a company. You've launched a new company called Tiny Corp that leads the development of TinyGrad. What's the origin story of Tiny Corp and TinyGrad?

2810.839 - 2834.831 George Hotz

I started TinyGrad as like a toy project just to teach myself, okay, like what is a convolution? What are all these options you can pass to them? What is the derivative of a convolution, right? Very similar to Karpathy wrote MicroGrad. Very similar. And then I started realizing, I started thinking about like AI chips. I started thinking about chips that run

2836.041 - 2846.908 George Hotz

And I was like, well, okay, this is going to be a really big problem. If NVIDIA becomes a monopoly here, how long before NVIDIA is nationalized?

2848.829 - 2855.253 Lex Fridman

So one of the reasons to start TinyCorp is to challenge NVIDIA.

2867.168 - 2867.328 George Hotz

Yeah.

2867.828 - 2875.812 Lex Fridman

And here's computational power. And to you, NVIDIA is kind of locking down the computational power of the world.

2876.472 - 2894.817 George Hotz

If NVIDIA becomes just like 10X better than everything else, you're giving a big advantage to somebody who can secure NVIDIA as a resource. Yeah. In fact, if Jensen watches this podcast, he may want to consider this. He may want to consider making sure his company is not nationalized.

2896.338 - 2898.018 Lex Fridman

Do you think that's an actual threat?

2898.178 - 2898.518 George Hotz

Oh, yes.

2900.839 - 2903.54 Lex Fridman

No, but there's so much, you know, there's AMD.

2904.22 - 2905.661 George Hotz

So we have Nvidia and AMD. Great.

2905.681 - 2916.064 Lex Fridman

All right. But you don't think there's like a push towards like selling, like Google selling TPUs or something like this? You don't think there's a push for that?

2916.244 - 2919.116 George Hotz

Have you seen it? Google loves to rent you TPUs.

2919.818 - 2922.907 Lex Fridman

It doesn't, you can't buy it at Best Buy? No. Hmm.

2923.95 - 2945.321 George Hotz

So I started work on a, uh, I was like, okay, what's it going to take to make a chip? And my first notions were all completely wrong about why, about like how you could improve on GPUs. And I will take this, this is from Jim Keller on your podcast. And this is one of my absolute favorite descriptions of computation.

2946.362 - 2961.652 George Hotz

So there's three kinds of computation paradigms that are common in the world today. There's CPUs, and CPUs can do everything. CPUs can do add and multiply, they can do load and store, and they can do compare and branch. And when I say they can do these things, they can do them all fast, right?

2962.252 - 2980.765 George Hotz

So compare and branch are unique to CPUs, and what I mean by they can do them fast is they can do things like branch prediction and speculative execution, and they spend tons of transistors on these super deep reorder buffers in order to make these things fast. Then you have a simpler computation model, GPUs. GPUs can't really do compare and branch. I mean, they can, but it's horrendously slow.

2981.686 - 3000.324 George Hotz

But GPUs can do arbitrary load and store. GPUs can do things like X, dereference Y. So they can fetch from arbitrary pieces of memory. They can fetch from memory that is defined by the contents of the data. The third model of computation is DSPs. And DSPs are just add and multiply. They can do load and stores, but only static load and stores.

3000.565 - 3023.451 George Hotz

Only loads and stores that are known before the program runs. And you look at neural networks today, and 95% of neural networks are all the DSP paradigm. They are just statically scheduled adds and multiplies. So TinyGrad really took this idea, and I'm still working on it, to extend this as far as possible. Every stage of the stack has Turing completeness.

3023.471 - 3039.977 George Hotz

All right, Python has Turing completeness, and then we take Python, we go into C++, which is Turing complete, and maybe C++ calls into some CUDA kernels, which are Turing complete. The CUDA kernels go through LLVM, which is Turing complete, into PTX, which is Turing complete, to SASS, which is Turing complete, on a Turing complete processor. I wanna get Turing completeness out of the stack entirely.

3040.657 - 3046.481 George Hotz

Because once you get rid of Turing completeness, you can reason about things. Rice's theorem and the halting problem do not apply to ADD/MUL machines.
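To make the "ADD/MUL machine" idea concrete, here is a minimal sketch, not tinygrad code, of a program that is just a fixed list of adds and multiplies with static loads and stores. Because there are no branches or loops, questions like "how many FLOPs will this use" are trivially decidable before it ever runs.

```python
# Hedged sketch of an ADD/MUL machine: the schedule is fixed before execution,
# only the data changes between runs, so static analysis is trivially decidable.
program = [
    ("mul", "a", "b", "t0"),    # t0 = a * b
    ("add", "t0", "c", "t1"),   # t1 = t0 + c
    ("mul", "t1", "t1", "out"), # out = t1 * t1
]

def analyze(prog):
    # No halting problem here: FLOPs and touched buffers fall out of the list itself.
    return {"flops": len(prog), "buffers": {name for op in prog for name in op[1:]}}

def run(prog, bufs):
    fns = {"add": lambda x, y: x + y, "mul": lambda x, y: x * y}
    for op, x, y, dst in prog:          # identical schedule every run
        bufs[dst] = fns[op](bufs[x], bufs[y])
    return bufs["out"]

print(analyze(program))
print(run(program, {"a": 2.0, "b": 3.0, "c": 4.0}))  # ((2*3)+4)^2 = 100.0
```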

3049.384 - 3056.484 Lex Fridman

Okay. What's the power and the value of getting Turing completeness out of this? Out of... are we talking about the hardware or the software?

3056.824 - 3076.468 George Hotz

Every layer of the stack. Every layer. Every layer of the stack, removing Turing completeness allows you to reason about things, right? So the reason you need to do branch prediction in a CPU and the reason it's prediction, and the branch predictors are, I think they're like 99% on CPUs. Why do they get 1% of them wrong? Well, they get 1% wrong because you can't know. Right?

3076.748 - 3098.204 George Hotz

That's the halting problem. It's equivalent to the halting problem to say whether a branch is going to be taken or not. I can show that. But the ADD/MUL machine, the neural network, runs the identical compute every time. The only thing that changes is the data. So when you realize this, you think about, okay, how can we build a computer?

3098.724 - 3116.333 George Hotz

How can we build a stack that takes maximal advantage of this idea? So what makes TinyGrad different from other neural network libraries is that it does not have a primitive operator even for matrix multiplication. And every single other one does. They even have primitive operations for things like convolutions.

3116.733 - 3117.973 Lex Fridman

So no matmul.

3118.453 - 3139.358 George Hotz

No matmul. Well, here's what a matmul is. So I'll use my hands to talk here. So if you think about a cube and I put my two matrices that I'm multiplying on two faces of the cube, right? You can think about the matrix multiply as, okay, n cubed: I'm going to do a multiply for each cell in the cube. And then I'm going to do a sum, which is a reduce, up to here, to the third face of the cube.

3139.599 - 3154.51 George Hotz

And that's your multiplied matrix. So what a matrix multiply is, is a bunch of shape operations, right? A bunch of permutes, reshapes, and expands on the two matrices. A multiply, n cubed. A reduce over n cubed, which gives you an n squared matrix.
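Here is a small NumPy sketch of that decomposition, assuming square n-by-n matrices. It is not tinygrad's implementation, but it mirrors the same three steps: movement ops to put the operands on faces of an (i, k, j) cube, one big elementwise multiply, and one reduce.

```python
import numpy as np

n = 4
A = np.random.rand(n, n)
B = np.random.rand(n, n)

# Movement ops only: reshape so broadcasting expands each operand over the cube.
a = A.reshape(n, n, 1)   # indexed (i, k, 1), broadcast over j
b = B.reshape(1, n, n)   # indexed (1, k, j), broadcast over i

cube = a * b             # binary op: n^3 elementwise multiplies
C = cube.sum(axis=1)     # reduce op: sum over k, n^3 values down to n^2

assert np.allclose(C, A @ B)
```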

3155.491 - 3161.596 Lex Fridman

Okay, so what is the minimum number of operations that can accomplish that if you don't have matmul as a primitive operation?

3162.043 - 3192.892 George Hotz

So TinyGrad has about 20. And you can compare TinyGrad's op set or IR to things like XLA or PrimTorch. So XLA and PrimTorch are ideas where like, okay, Torch has like 2000 different kernels. PyTorch 2.0 introduced PrimTorch, which has only 250. TinyGrad has order of magnitude 25. It's 10x less than XLA or Primtorch. And you can think about it as kind of like RISC versus CISC, right?

3193.612 - 3197.816 George Hotz

These other things are CISC-like systems. TinyGrad is RISC.

3198.837 - 3199.717 Lex Fridman

And RISC won.

3200.498 - 3202.78 George Hotz

RISC architecture is going to change everything. 1995, hackers.

3205.157 - 3206.619 Lex Fridman

Wait, really? That's an actual thing?

3206.879 - 3215.147 George Hotz

Angelina Jolie delivers the line, "RISC architecture is going to change everything," in 1995. Wow. And here we are with ARM in the phones. And ARM everywhere.

3215.928 - 3235.213 Lex Fridman

Wow. I love it when movies actually have real things in them. Right? Okay, interesting. So you're thinking of this as the RISC architecture of the ML stack, right? 25 ops. Can you go through the four op types?

3235.513 - 3250.823 George Hotz

Sure. Okay, so you have unary ops, which take in a tensor and return a tensor of the same size and do some unary op to it. Exp, log, reciprocal, sine, right? They take in one and they're point-wise.

3252.144 - 3252.304 Lex Fridman

Relu.

3253.069 - 3274.926 George Hotz

Yeah, ReLU. Almost all activation functions are unary ops. Some combinations of unary ops together is still a unary op. Then you have binary ops. Binary ops are like pointwise addition, multiplication, division, compare. It takes in two tensors of equal size and outputs one tensor. Then you have reduce ops.

3275.766 - 3293.357 George Hotz

Reduce ops will take a three-dimensional tensor and turn it into a two-dimensional tensor, or a three-dimensional tensor and turn it into a zero-dimensional tensor. Think like a sum or a max are really the common ones there. And then the fourth type is movement ops. And movement ops are different from the other types because they don't actually require computation.

3293.397 - 3301.583 George Hotz

They require different ways to look at memory. So that includes reshapes, permutes, expands, flips. Those are the main ones, probably.
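As a rough illustration of the four classes, here are NumPy analogues of each; the exact names and set of ops in tinygrad may differ, this is just to show the shape behavior he is describing.

```python
import numpy as np

x = np.random.rand(2, 3, 4)
y = np.random.rand(2, 3, 4)

# Unary ops: elementwise, same shape in and out (ReLU is just max-with-zero).
np.exp(x); np.log(x); np.maximum(x, 0)

# Binary ops: two tensors of equal size, elementwise, one tensor out.
x + y; x * y; (x > y)

# Reduce ops: collapse one or more axes.
x.sum(axis=2)   # (2, 3, 4) -> (2, 3)
x.max()         # (2, 3, 4) -> scalar

# Movement ops: no computation, just a different view of the same data.
x.reshape(6, 4); x.transpose(2, 0, 1); np.broadcast_to(x, (5, 2, 3, 4)); np.flip(x, axis=0)
```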

3301.603 - 3304.326 Lex Fridman

And so with that, you have enough to make a matmul.

3304.586 - 3310.972 George Hotz

And convolutions. And every convolution you can imagine, dilated convolutions, strided convolutions, transposed convolutions.

3312.699 - 3326.584 Lex Fridman

You write on GitHub about laziness, showing a matmul, a matrix multiplication. See how, despite the style, it is fused into one kernel with the power of laziness. Can you elaborate on this power of laziness?

3326.944 - 3350.317 George Hotz

Sure. So if you type in PyTorch A times B plus C, what this is going to do is it's going to first multiply A and B and store that result into memory. And then it is going to add C by reading that result from memory, reading C from memory, and writing that out to memory. There is way more loads and stores to memory than you need there.

3350.997 - 3364.607 George Hotz

If you don't actually do A times B as soon as you see it, if you wait until the user actually realizes that tensor, until the laziness actually resolves, you can fuse that plus C. This is like, it's the same way Haskell works.
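A minimal sketch of the idea, not tinygrad's internals: eager evaluation materializes the A times B temporary and then reads it back, while a lazy framework can wait until the result is asked for and emit one fused pass over the data.

```python
import numpy as np

def eager(a, b, c):
    tmp = a * b            # "kernel" 1: writes an n-element temporary to memory
    return tmp + c         # "kernel" 2: reads tmp and c back, writes the result

def fused(a, b, c):
    # What a lazy framework can emit once the tensor is realized: conceptually one
    # kernel that loads a, b, c once each and stores out once, with no temporary.
    out = np.empty_like(a)
    for i in range(a.size):
        out[i] = a[i] * b[i] + c[i]
    return out

a, b, c = (np.random.rand(1024) for _ in range(3))
assert np.allclose(eager(a, b, c), fused(a, b, c))
```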

3364.907 - 3369.429 Lex Fridman

So what's the process of porting a model into TinyGrad?

3370.029 - 3388.917 George Hotz

So TinyGrad's front end looks very similar to PyTorch. I probably could make a perfect or pretty close to perfect interop layer if I really wanted to. I think that there's some things that are nicer about TinyGrad syntax than PyTorch, but the front end looks very Torch-like. You can also load in ONNX models. We have more ONNX tests passing than Core ML.

3390.478 - 3390.918 Lex Fridman

Core ML.

3391.278 - 3393.259 George Hotz

Okay, so... We'll pass ONNX Runtime soon.

3393.956 - 3401.897 Lex Fridman

What about the developer experience with TinyGrad? What it feels like versus PyTorch?

3402.598 - 3420.161 George Hotz

By the way, I really like PyTorch. I think that it's actually a very good piece of software. I think that they've made a few different trade-offs, and these different trade-offs are where TinyGrad takes a different path. One of the biggest differences is it's really easy to see the kernels that are actually being sent to the GPU.

3421.702 - 3443.084 George Hotz

If you run PyTorch on the GPU, you like do some operation and you don't know what kernels ran. You don't know how many kernels ran. You don't know how many FLOPs were used. You don't know how many memory accesses were used. In TinyGrad, type DEBUG=2, and it will show you, in this beautiful style, every kernel that's run, how many FLOPs, and how many bytes.
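A hedged usage sketch of what he is describing, assuming tinygrad is installed: the DEBUG environment variable is the knob he mentions, and the exact output format will depend on the tinygrad version.

```python
# Set DEBUG before importing tinygrad so kernel-level logging is enabled.
import os
os.environ["DEBUG"] = "2"

from tinygrad.tensor import Tensor

a, b, c = Tensor.rand(1024), Tensor.rand(1024), Tensor.rand(1024)
(a * b + c).realize()   # prints each kernel launched, with FLOP and byte counts
```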

3444.286 - 3449.428 Lex Fridman

So can you just linger on what problem TinyGrad solves?

3450.108 - 3476.564 George Hotz

TinyGrad solves the problem of porting new ML accelerators quickly. One of the reasons, tons of these companies now, I think Sequoia marked Graphcore to zero, right? Cerebras, Tenstorrent, Groq. All of these ML accelerator companies, they built chips. The chips were good. The software was terrible. And part of the reason is because I think the same problem is happening with Dojo.

3477.304 - 3486.168 George Hotz

It's really, really hard to write a PyTorch port because you have to write 250 kernels and you have to tune them all for performance.

3486.328 - 3499.578 Lex Fridman

What does Jim Keller think about TinyGrad? You guys hung out quite a bit, and he was involved with Tenstorrent. What's his praise and what's his criticism of what you're doing with your life?

3500.679 - 3509.786 George Hotz

Look, my prediction for Tenstorrent is that they're going to pivot to making RISC-V chips. CPUs. CPUs.

3511.327 - 3511.607 Lex Fridman

Why? Why?

3513.419 - 3517.562 George Hotz

Because AI accelerators are a software problem, not really a hardware problem.

3517.922 - 3526.948 Lex Fridman

Oh, interesting. So you don't think... You think the diversity of AI accelerators in the hardware space is not going to be a thing that exists long-term?

3527.428 - 3540.543 George Hotz

I think what's going to happen is if I can finish... Okay. If you're trying to make an AI accelerator... You better have the capability of writing a torch-level performance stack on NVIDIA GPUs.

3541.344 - 3552.052 George Hotz

If you can't write a torch stack on NVIDIA GPUs, and I mean all the way, I mean down to the driver, there's no way you're going to be able to write it on your chip, because your chip's worse than an NVIDIA GPU. The first version of the chip you tape out, it's definitely worse.

3552.631 - 3554.191 Lex Fridman

Oh, you're saying writing that stack is really tough.

3554.632 - 3570.476 George Hotz

Yes. And not only that, actually, the chip that you tape out, almost always because you're trying to get advantage over NVIDIA, you're specializing the hardware more. It's always harder to write software for more specialized hardware. Like a GPU is pretty generic. And if you can't write an NVIDIA stack, there's no way you can write a stack for your chip.

3571.036 - 3576.178 George Hotz

So my approach with TinyGrad is first, write a performant NVIDIA stack. We're targeting AMD.

3578.56 - 3581.522 Lex Fridman

So you did say a few to NVIDIA a little bit. With love.

3581.762 - 3582.122 George Hotz

With love.

3582.362 - 3583.143 Lex Fridman

Yeah. With love.

3583.303 - 3585.264 George Hotz

It's like the Yankees, you know? I'm a Mets fan.

3585.939 - 3603.533 Lex Fridman

Oh, you're a Mets fan. A RISC fan and a Mets fan. What's the hope that AMD has? You did a build with AMD recently that I saw. How does the 7900 XTX compare to the RTX 4090 or 4080?

3604.494 - 3611.219 George Hotz

Well, let's start with the fact that the 7900 XTX kernel drivers don't work. And if you run demo apps in loops, it panics the kernel.

3612.039 - 3613.981 Lex Fridman

Okay. So this is a software issue.

3615.137 - 3616.298 George Hotz

Lisa Su responded to my email.

3617.158 - 3632.766 George Hotz

Oh. I reached out. I was like, this is, you know, really? Like, I understand if your 7x7 transposed Winograd conv is slower than NVIDIA's, but literally when I run demo apps in a loop, the kernel panics.

3633.946 - 3634.967 Lex Fridman

So just adding that loop.

3636.295 - 3645.601 George Hotz

I just literally took their demo apps and wrote like while true semicolon do the app semicolon done in a bunch of screens. This is like the most primitive fuzz testing.
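Rendered as a Python sketch for clarity, the "most primitive fuzz testing" loop he describes is just running the vendor demo forever; "./demo_app" here is a placeholder for one of those demo binaries, not a real path.

```python
import subprocess

# Run the demo app over and over (he ran a loop like this in a bunch of screens).
while True:
    subprocess.run(["./demo_app"], check=False)  # hypothetical demo binary path
```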

3646.221 - 3651.685 Lex Fridman

Why do you think that is? They're just not seeing a market in machine learning?

3652.483 - 3668.253 George Hotz

They're changing. They're trying to change. They're trying to change. And I had a pretty positive interaction with them this week. Last week, I went on YouTube. I was just like, that's it. I give up on AMD. Like, this is their driver. I'm not going to, you know, I'll go with Intel GPUs. Intel GPUs have better drivers.

3670.666 - 3675.549 Lex Fridman

So you're kind of spearheading the diversification of GPUs.

3676.51 - 3696.203 George Hotz

Yeah, and I'd like to extend that diversification to everything. I'd like to diversify the, right, the more, my central thesis about the world is there's things that centralize power and they're bad. And there's things that decentralize power and they're good. Everything I can do to help decentralize power, I'd like to do.

3698.311 - 3708.398 Lex Fridman

So you're really worried about the centralization of NVIDIA. That's interesting. And you don't have a fundamental hope for the proliferation of ASICs, except in the cloud.

3709.599 - 3725.53 George Hotz

I'd like to help them with software. No, actually, the only ASIC that is remotely successful is Google's TPU. And the only reason that's successful is because Google wrote a machine learning framework. I think that you have to write a competitive machine learning framework in order to be able to build an ASIC.

3725.55 - 3731.258 Lex Fridman

Hmm. You think Meta with PyTorch builds a competitor? I hope so.

3732.018 - 3733.479 George Hotz

They have one. They have an internal one.

3733.859 - 3737.44 Lex Fridman

Internal. I mean, public facing with a nice cloud interface and so on.

3738.1 - 3739.06 George Hotz

I don't want a cloud.

3739.68 - 3740.34 Lex Fridman

You don't like cloud?

3740.5 - 3741.321 George Hotz

I don't like cloud.

3741.701 - 3743.761 Lex Fridman

What do you think is the fundamental limitation of cloud?

3744.181 - 3747.302 George Hotz

Fundamental limitation of cloud is who owns the off switch.

3747.702 - 3749.203 Lex Fridman

So it's power to the people.

3749.543 - 3749.923 George Hotz

Yeah.

3750.643 - 3773.811 Lex Fridman

And you don't like the man to have all the power. All right. And right now, the only way to do that is with AMD GPUs if you want performance and stability. Interesting. It's a costly investment emotionally to go with AMDs. Well, let me sort of on a tangent ask you, you've built quite a few PCs.

3773.911 - 3781.235 Lex Fridman

What's your advice on how to build a good custom PC for, let's say, for the different applications that you use for gaming, for machine learning?

3781.555 - 3783.876 George Hotz

Well, you shouldn't build one. You should buy a box from the Tiny Corp.

3785.279 - 3793.005 Lex Fridman

I heard rumors, whispers about this box in the tiny corp. What's this thing look like? What is it? What is it called?

3793.465 - 3794.406 George Hotz

It's called the tiny box.

3794.606 - 3795.126 Lex Fridman

Tiny box.

3795.907 - 3823.978 George Hotz

It's $15,000. And it's almost a petaflop of compute. It's over 100 gigabytes of GPU RAM. It's over five terabytes per second of GPU memory bandwidth. I'm going to put like four NVMe drives in RAID. You're going to get like 20, 30 gigabytes per second of drive read bandwidth. I'm going to build like the best deep learning box that I can that plugs into one wall outlet.

3825.078 - 3828.399 Lex Fridman

Okay. Can you go through the specs again a little bit from memory?

3828.919 - 3830.7 George Hotz

Yeah. So it's almost a petaflop of compute.

3830.86 - 3831.88 Lex Fridman

So AMD, Intel?

3832.64 - 3844.566 George Hotz

Today, I'm leaning toward AMD. Okay. Um, but we're pretty agnostic to the type of compute. The, the, the main limiting spec is a 120 volt, 15 amp circuit.

3846.668 - 3846.928 George Hotz

Okay.

3847.068 - 3864.98 George Hotz

Well, I mean it because in order to like, like there's a plug over there, right? You have to be able to plug it in. Um, we're also going to sell the tiny rack, which like, what's the most power you can get into your house without arousing suspicion? Uh, and one of the, one of the answers is an electric car charger.

3865.592 - 3870.817 Lex Fridman

Wait, where does the rack go? Your garage. Interesting. The car charger.

3871.398 - 3875.723 George Hotz

A wall outlet is about 1,500 watts. A car charger is about 10,000 watts. Is that it?
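Back-of-the-envelope numbers behind those figures, assuming standard US residential wiring and a typical 240-volt Level 2 car charger; the exact charger amperage is an assumption, not something stated in the conversation.

```python
# Rough check on the quoted power figures (US residential assumptions).
wall_outlet_peak = 120 * 15               # 1800 W on a 120 V / 15 A branch circuit
wall_outlet_cont = wall_outlet_peak * 0.8 # ~1440 W continuous (80% rule) -> "about 1,500 watts"

car_charger = 240 * 40                    # ~9600 W for a common 40 A Level 2 charger -> "about 10,000 watts"
print(wall_outlet_cont, car_charger)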

3877.305 - 3894.477 Lex Fridman

What is the most amount of power you can get your hands on without arousing suspicion? That's right. George Hotz. Okay. So the tiny box, and you said NVMEs and RAID. I forget what you said about memory, all that kind of stuff. Okay. So what about what GPUs?

3895.057 - 3896.758 George Hotz

Again, probably 7900 XTXs, but maybe 3090s, maybe A770s.

3901.442 - 3904.803 Lex Fridman

Those are Intel's. You're flexible or still exploring?

3905.083 - 3932.077 George Hotz

I'm still exploring. I want to deliver a really good experience to people. And yeah, what GPUs I end up going with, again, I'm leaning toward AMD. We'll see. You know, in my email, what I said to AMD is like, just dumping the code on GitHub is not open source. Open source is a culture. Open source means that your issues are not all one-year-old stale issues. Open source means developing in public.

3932.737 - 3937.1 George Hotz

And if you guys can commit to that, I see a real future for AMD as a competitor to NVIDIA.

3939.022 - 3944.166 Lex Fridman

Well, I'd love to get a tiny box to MIT. So whenever it's ready, let's do it.

3944.186 - 3948.469 George Hotz

We're taking pre-orders. I took this from Elon. I'm like $100 fully refundable pre-orders.

3949.133 - 3951.655 Lex Fridman

Is it going to be like the Cybertruck is going to take a few years or?

3952.055 - 3955.118 George Hotz

No, I'll try to do it faster. It's a lot simpler. It's a lot simpler than a truck.

3955.878 - 3961.623 Lex Fridman

Well, there's complexities not to just the putting the thing together, but like shipping and all this kind of stuff.

3961.723 - 3971.97 George Hotz

The thing that I want to deliver to people out of the box is being able to run 65-billion-parameter LLaMA in FP16 in real time. At like a good, like, 10 tokens per second or five tokens per second or something.

3972.471 - 3978.123 Lex Fridman

Just, it works. Yep. LLaMA's running. Or something like LLaMA.

3979.004 - 3985.43 George Hotz

Yeah, or I think Falcon is the new one. Experience a chat with the largest language model that you can have in your house.

3986.411 - 3988.312 Lex Fridman

Yeah, from a wall plug.

3988.372 - 4001.324 George Hotz

From a wall plug, yeah. Actually, for inference, it's not like even more power would get you more. Well, no, the biggest model released is 65-billion-parameter LLaMA, as far as I know.

4002.062 - 4010.627 Lex Fridman

So it sounds like tiny box will naturally pivot towards company number three because you could just get the girlfriend or boyfriend.

4010.647 - 4012.868 George Hotz

That one's harder, actually.

4013.228 - 4014.029 Lex Fridman

The boyfriend is harder?

4014.049 - 4014.889 George Hotz

The boyfriend's harder, yeah.

4014.969 - 4026.816 Lex Fridman

I think that's a very biased statement. I think a lot of people would just say, why is it harder to replace a boyfriend than the girlfriend with the artificial LLM?

4027.096 - 4033.879 George Hotz

Because women are attracted to status and power and men are attracted to youth and beauty. No, I mean, that's what I mean.

4034.76 - 4038.042 Lex Fridman

Both are mimicable, easy through the language model. No.

4038.463 - 4041.305 George Hotz

No machines do not have any status or real power.

4042.085 - 4053.274 Lex Fridman

I don't know. I think you both... Well, first of all, you're using language mostly to communicate youth and beauty and power and status.

4053.455 - 4056.997 George Hotz

But status fundamentally is a zero-sum game, whereas youth and beauty are not.

4058.018 - 4065.048 Lex Fridman

No, I think status is a narrative you can construct. I don't think status is real. I don't know.

4065.528 - 4076.813 George Hotz

I just think that that's why it's harder. You know, yeah, maybe it is my biases. I think status is way easier to fake. I also think that, you know, men are probably more desperate and more likely to buy my product. So maybe they're a better target market.

4077.453 - 4082.535 Lex Fridman

Desperation is interesting. Easier to fool. Yeah. I could see that.

4082.615 - 4088.277 George Hotz

Yeah. Look, I mean, look, I know you can look at porn viewership numbers, right? A lot more men watch porn than women. Yeah. You can ask why that is.

4089.437 - 4099.975 Lex Fridman

Wow. There's a lot of questions and answers you can get there. Anyway, with the TinyBox, how many GPUs in TinyBox? Six.

4104.456 - 4120.722 George Hotz

Oh, man. And I'll tell you why it's six. Yeah. So AMD EPYC processors have 128 lanes of PCIe. I want to leave enough lanes for some drives, and I want to leave enough lanes for some networking.
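The lane budget behind "six" works out roughly as below; the assumption that each GPU gets a full x16 link is mine, not something he states, but it shows why a seventh card would squeeze out the drives and networking.

```python
# Rough PCIe lane budget for the six-GPU choice (assumes x16 per GPU).
total_lanes = 128            # AMD EPYC exposes 128 PCIe lanes
gpu_lanes = 6 * 16           # six GPUs at x16 = 96 lanes
left_over = total_lanes - gpu_lanes   # 32 lanes left for NVMe drives and networking
print(gpu_lanes, left_over)  # a 7th x16 GPU would leave only 16 lanes for everything else
```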

4121.778 - 4123.339 Lex Fridman

How do you do cooling for something like this?

4123.779 - 4131.305 George Hotz

Ah, that's one of the big challenges. Not only do I want the cooling to be good, I want it to be quiet. I want the tiny box to be able to sit comfortably in your room.

4131.905 - 4136.649 Lex Fridman

This is really going towards the girlfriend thing. Because you want to run the LLM.

4137.109 - 4140.691 George Hotz

I'll give a more, I mean, I can talk about how it relates to company number one.

4141.852 - 4148.397 Lex Fridman

Comma AI. Well, but yes, quiet. Oh, quiet because you maybe potentially want to run it in a car.

4148.897 - 4155.563 George Hotz

No, no, quiet because you want to put this thing in your house and you want it to coexist with you. If it's screaming at 60 dB, you don't want that in your house. You'll kick it out.

4155.583 - 4156.704 Lex Fridman

60 dB, yeah.

4157.545 - 4158.445 George Hotz

Yeah, I want like 40, 45.

4158.566 - 4162.249 Lex Fridman

So how do you make the cooling quiet? That's an interesting problem in itself.

4162.902 - 4183.308 George Hotz

A key trick is to actually make it big. Ironically, it's called the tiny box. But if I can make it big, a lot of that noise is generated because of high pressure air. If you look at like a 1U server, a 1U server has these super high pressure fans. They're like super deep and they're like jet engines. Versus if you have something that's big, well, I can use a big, you know, they call them big ass fans. Those ones that are like huge on the ceiling and they're completely silent.

4183.348 - 4186.989 George Hotz

Those ones that are like huge on the ceiling and they're completely silent.

4187.189 - 4189.57 Lex Fridman

So tiny box will be big.

4190.629 - 4197.614 George Hotz

It is the... I do not want it to be large according to UPS. I want it to be shippable as a normal package, but that's my constraint there.

4198.655 - 4203.159 Lex Fridman

Interesting. Well, the fan stuff, can't it be assembled on location or no? No.

4203.879 - 4213.426 George Hotz

No, it has to be... Well, you're... Look, I want to give you a great out-of-the-box experience. I want you to lift this thing out. I want it to be like the Mac, you know? TinyBox.

4213.807 - 4235.906 Lex Fridman

The Apple experience. Yeah. I love it. Okay. And so TinyBox would run... TinyGrad. What do you envision this whole thing to look like? We're talking about Linux with a full software engineering environment, and just not PyTorch, but TinyGrad.

4236.166 - 4239.568 George Hotz

Yeah. We did a poll. If people want Ubuntu or Arch, we're going to stick with Ubuntu.

4240.249 - 4266.879 Lex Fridman

Ooh, interesting. What's your favorite flavor of Linux? Ubuntu. Ubuntu. I like Ubuntu Mate, however you pronounce that. Mate. Mate. So how do you, you've gotten Llama into TinyGrad. You've gotten Stable Diffusion into TinyGrad. What was that like? Can you comment on like, what are these models? What's interesting about porting them? What are the challenges? What's naturally, what's easy?

4266.919 - 4267.539 Lex Fridman

All that kind of stuff.

4267.619 - 4287.99 George Hotz

There's a really simple way to get these models into TinyGrad and you can just export them as ONNX and then TinyGrad can run ONNX. So the ports that I did of LLaMA, Stable Diffusion, and now Whisper are more academic, to teach me about the models, but they are cleaner than the PyTorch versions. You can read the code. I think the code is easier to read. It's fewer lines.

4288.691 - 4307.502 George Hotz

There's just a few things about the way TinyGrad writes things. Here's a complaint I have about PyTorch: nn.ReLU is a class, right? So when you create an nn module, you'll put your nn.ReLUs in an __init__. And this makes no sense. ReLU is completely stateless. Why should that be a class?
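To illustrate the complaint, here is a small PyTorch sketch: the first module holds a stateless ReLU as an object in __init__, the second expresses the same math with a functional call. The tinygrad comparison in the comment is approximate.

```python
import torch.nn as nn
import torch.nn.functional as F

class WithModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 8)
        self.relu = nn.ReLU()          # a stateless op carried around as an object

    def forward(self, x):
        return self.relu(self.fc(x))

class Stateless(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 8)

    def forward(self, x):
        # Same math, no ReLU object to store.
        # In tinygrad the equivalent is roughly a tensor method: self.fc(x).relu()
        return F.relu(self.fc(x))
```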

4309.144 - 4313.347 Lex Fridman

But that's more like a software engineering thing. Or do you think it has a cost on performance?

4313.928 - 4320.834 George Hotz

Oh, no, it doesn't have a cost on performance. But yeah, no, I think that it's... That's what I mean about TinyGrad's front end being cleaner.

4321.764 - 4331.31 Lex Fridman

I see. What do you think about Mojo? I don't know if you've been paying attention to the programming language that does some interesting ideas that kind of intersect TinyGrad.

4331.971 - 4351.202 George Hotz

I think that there is a spectrum and like on one side you have Mojo and on the other side you have like GGML. GGML is this like we're going to run Llama fast on Mac. And okay, we're going to expand out to a little bit, but we're going to basically go like depth first, right? Mojo is like, we're going to go breadth first. We're going to go so wide that we're going to make all of Python fast.

4351.763 - 4356.845 George Hotz

And TinyGrad's in the middle. TinyGrad is, we are going to make neural networks fast.

4358.386 - 4371.418 Lex Fridman

Yeah, but they try to really get it to be fast, compiled onto specific hardware, and make that compilation step as fast, flexible, and resilient as possible.

4371.439 - 4372.759 George Hotz

Yeah, but they have Turing completeness.

4373.78 - 4383.587 Lex Fridman

And that limits you. That's what you're saying, it's somewhere in the middle. So you're actually going to be targeting some accelerators, like some number, not one.

4384.745 - 4401.133 George Hotz

My goal is step one, build an equally performance stack to PyTorch on NVIDIA and AMD, but with way less lines. And then step two is, okay, how do we make an accelerator, right? But you need step one. You have to first build the framework before you can build the accelerator.

4401.913 - 4406.936 Lex Fridman

Can you explain MLPerf? What's your approach in general to benchmarking tiny grad performance?

4408.092 - 4432.933 George Hotz

So I'm much more of a, like, build it the right way and worry about performance later. There's a bunch of things where I haven't even, like, really dove into performance. The only place where TinyGrad is competitive performance-wise right now is on Qualcomm GPUs. So TinyGrad's actually used in openpilot to run the model. So the driving model is TinyGrad. When did that happen, that transition?

4432.953 - 4438.096 George Hotz

About eight months ago now. And it's 2x faster than Qualcomm's library.

4439.237 - 4443.84 Lex Fridman

What's the hardware that OpenPilot runs on the Comma?

4443.98 - 4465.375 George Hotz

It's a Snapdragon 845. Okay. So this is using the GPU. So the GPU is an Adreno GPU. There's like different things. There's a really good Microsoft paper that talks about like mobile GPUs and why they're different from desktop GPUs. One of the big things is in a desktop GPU, you can use buffers. On a mobile GPU, image textures are a lot faster.

4467.549 - 4473.434 Lex Fridman

And a mobile GPU image texture. Okay. And so you want to be able to leverage that.

4474.135 - 4491.189 George Hotz

I want to be able to leverage it in a way that it's completely generic, right? So there's a lot of this. Xiaomi has a pretty good open source library for mobile GPUs called MACE, where they can generate, where they have these kernels, but they're all hand-coded, right? So that's great if you're doing three by three convs. That's great if you're doing dense map models.

4491.449 - 4495.913 George Hotz

But the minute you go off the beaten path a tiny bit, well, your performance is nothing.

4496.474 - 4508.163 Lex Fridman

Since you mentioned OpenPilot, I'd love to get an update in the company number one, CommAI world. How are things going there in the development of semi-autonomous driving?

4511.085 - 4519.872 George Hotz

You know, almost no one talks about FSD anymore, and even less people talk about OpenPilot. We've solved the problem. Like, we solved it years ago.

4521.073 - 4525.762 Lex Fridman

What's the problem exactly? What does solving it mean?

4526.462 - 4543.776 George Hotz

Solving means how do you build a model that outputs a human policy for driving? How do you build a model that, given a reasonable set of sensors, outputs a human policy for driving? So you have companies like Waymo and Cruise, which are hand-coding these things that are like quasi-human policies.

4545.377 - 4574.324 George Hotz

Then you have Tesla, and maybe even to more of an extent, Comma, asking, okay, how do we just learn the human policy from data? The big thing that we're doing now, and we just put it out on Twitter, at the beginning of Comma, we published a paper called Learning a Driving Simulator. And the way this thing worked was it was an autoencoder and then an RNN in the middle. Right.

4574.344 - 4591.633 George Hotz

You take an autoencoder, you compress the picture, you use an RNN to predict the next state. And these things were, you know, it was a laughably bad simulator, right? This is 2015-era machine learning technology. Today we have VQ-VAE and transformers. We're building DriveGPT, basically.

4592.673 - 4600.097 Lex Fridman

Drive GPT. Okay. So, and it's trained on what? Is it trained in a self-supervised way?

4600.597 - 4602.698 George Hotz

It's trained on all the driving data to predict the next frame.

4603.464 - 4607.385 Lex Fridman

So really trying to learn a human policy. What would a human do?

4607.605 - 4621.629 George Hotz

Well, actually our simulator is conditioned on the pose. So it's actually a simulator. You can put in like a state action pair and get out the next state. Okay. And then once you have a simulator, you can do RL in the simulator and RL will get us that human policy.

4622.689 - 4623.47 Lex Fridman

So it transfers.

4624.11 - 4631.812 George Hotz

Yeah. RL with a reward function, not asking is this close to the human policy, but asking would a human disengage if you did this behavior?
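A heavily hedged, conceptual sketch of the loop he is describing; every name here (world_model, policy, would-disengage model) is a placeholder of mine, not Comma's code. The point is just that the learned simulator maps a state-action pair to the next state, and the reward is "a human would not have disengaged here."

```python
# Conceptual RL-in-a-learned-simulator loop (all names are hypothetical placeholders).
def rollout(world_model, policy, disengage_model, start_state, horizon=200):
    state, total_reward = start_state, 0.0
    for _ in range(horizon):
        action = policy(state)                        # proposed steering/acceleration
        state = world_model(state, action)            # learned simulator: next state from (state, action)
        p_disengage = disengage_model(state, action)  # probability a human driver would take over here
        total_reward += 1.0 - p_disengage             # reward staying where no one wants to disengage
    return total_reward

# An RL algorithm (e.g. a policy gradient method) would then adjust `policy`
# to maximize this return inside the simulator.
```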

4633.365 - 4662.059 Lex Fridman

Okay, let me think about the distinction there. Would a human disengage? Would a human disengage? That correlates, I guess, with human policy, but it could be different. So it doesn't just say, what would a human do? It says, what would a good human driver do? And such that the experience is comfortable, but also not annoying in that the thing is very cautious. Finding a nice balance.

4662.219 - 4662.799 Lex Fridman

That's interesting.

4662.839 - 4667.602 George Hotz

It's a nice... It's asking exactly the right question. What will make our customers happy?

4667.622 - 4669.043 Lex Fridman

Right.

4669.343 - 4670.744 George Hotz

A system that you never want to disengage.

4671.144 - 4677.628 Lex Fridman

Because usually disengagement is almost always a sign of I'm not happy with what the system is doing.

4678.028 - 4683.791 George Hotz

Usually. There's some that are just, I felt like driving. And those are always fine too. But they're just going to look like noise in the data.

4685.092 - 4686.373 Lex Fridman

But even I felt like driving...

4687.429 - 4688.429 George Hotz

Maybe, yeah.

4688.569 - 4701.933 Lex Fridman

Even that's a signal. Like, why do you feel like driving? You need to recalibrate your relationship with the car. Okay, so that's really interesting. How close are we to solving self-driving?

4705.274 - 4720.095 George Hotz

It's hard to say. We haven't completely closed the loop yet. So we don't have anything built that truly looks like that architecture yet. Mm-hmm. We have prototypes and there's bugs. So we are a couple bug fixes away. Might take a year, might take 10.

4721.015 - 4728.998 Lex Fridman

What's the nature of the bugs? Are these major philosophical bugs, logical bugs? What kind of bugs are we talking about?

4729.038 - 4739.982 George Hotz

They're just like stupid bugs. And also we might just need more scale. We just massively expanded our compute cluster at Comma. We now have about two people worth of compute, 40 petaflops.

4741.908 - 4768.486 Lex Fridman

Well, people are different. Yeah, 20 petaflops, that's a person. It's just a unit, right? Horses are different too, but we still call it a horsepower. Yeah, but there's something different about mobility than there is about, uh, perception and action in a very complicated world. But yes. Well, yeah, of course, not all flops are created equal. If you have randomly initialized weights, it's not gonna... Not all flops are created equal. Some are doing way more useful things than others.

4768.934 - 4781.458 Lex Fridman

Yeah, yeah. Tell me about it. Okay, so more data. Scale means more scale in compute or scale in scale of data? Both. Diversity of data?

4781.578 - 4789.06 George Hotz

Diversity is very important in data. Yeah, I mean, we have, so we have about, I think we have like 5,000 daily actives.

4791.454 - 4800.038 Lex Fridman

How would you evaluate how FSD is doing? Pretty well. How's that race going between CommAI and FSD?

4800.138 - 4806.3 George Hotz

Tesla is always one to two years ahead of us. They've always been one to two years ahead of us. And they probably always will be because they're not doing anything wrong.

4807.381 - 4819.886 Lex Fridman

What have you seen since the last time we talked that are interesting architectural decisions, training decisions, like the way they deploy stuff, the architectures they're using in terms of the software, how the teams are run, all that kind of stuff, data collection. Anything interesting?

4820.441 - 4822.903 George Hotz

I mean, I know they're moving toward more of an end-to-end approach.

4823.864 - 4830.729 Lex Fridman

So creeping towards end-to-end as much as possible across the whole thing. The training, the data collection, everything.

4830.929 - 4843.879 George Hotz

They also have a very fancy simulator. They're probably saying all the same things we are. They're probably saying we just need to optimize, you know, what is the reward? We get negative reward for disengagement, right? Like, everyone kind of knows this. It's just a question of who can actually build and deploy the system.

4844.761 - 4850.645 Lex Fridman

Yeah, I mean, it requires good software engineering, I think. Yeah. And the right kind of hardware.

4851.606 - 4852.446 George Hotz

Yeah, and the hardware to run it.

4853.747 - 4855.709 Lex Fridman

You still don't believe in cloud in that regard?

4855.729 - 4862.133 George Hotz

I have a compute cluster in my office. 800 amps.

4862.554 - 4863.134 Lex Fridman

Tiny grad.

4863.294 - 4875.653 George Hotz

It's 40 kilowatts at idle, our data center. Which is crazy. 40 kilowatts just burning just when the computers are idle. Sorry, sorry, compute cluster. Compute cluster, I got it. It's not a data center.

4875.933 - 4876.273 Lex Fridman

Yeah, yeah.

4876.373 - 4883.059 George Hotz

No, data centers are clouds. We don't have clouds. Data centers have air conditioners. We have fans. That makes it a compute cluster.

4885.261 - 4889.045 Lex Fridman

I'm guessing this is a kind of a legal distinction. Sure, yeah.

4889.125 - 4889.985 George Hotz

We have a compute cluster.

4891.126 - 4897.582 Lex Fridman

You said that you don't think LLMs have consciousness, or at least not more than a chicken. Do you think they can reason?

4898.242 - 4920.835 Lex Fridman

Is there something interesting to you about the word reason, about some of the capabilities that we think is kind of human, to be able to integrate complicated information and through a chain of thought arrive at a conclusion that feels novel, a novel integration of disparate facts?

4922.421 - 4926.763 George Hotz

Yeah, I don't think that there's, I think that they can reason better than a lot of people.

4928.004 - 4933.206 Lex Fridman

Hey, isn't that amazing to you, though? Isn't that like an incredible thing that a transformer can achieve?

4933.466 - 4937.148 George Hotz

I mean, I think that calculators can add better than a lot of people.

4937.989 - 4944.692 Lex Fridman

But language feels like reasoning through the process of language, which looks a lot like thought.

4946.514 - 4958.635 George Hotz

making brilliancies in chess, which feels a lot like thought. Whatever new thing that AI can do, everybody thinks is brilliant. And then like 20 years go by and they're like, well, yeah, but chess, that's like mechanical. Like adding, that's like mechanical.

4958.803 - 4985.223 Lex Fridman

So you think language is not that special. It's like chess. It's like chess. I don't know. Because it's very human, we take it... Listen, there is something different between chess and language. Chess is a game that a subset of population plays. Language is something we use nonstop for all of our human interaction. And human interaction is fundamental to society. So it's like, holy shit, this...

4986.184 - 4991.607 Lex Fridman

This language thing is not so difficult to create in a machine.

4991.768 - 5012.641 George Hotz

The problem is if you go back to 1960 and you tell them that you have a machine that can play amazing chess, of course someone in 1960 will tell you that machine is intelligent. Someone in 2010 won't. What's changed, right? Today, we think that these machines that have language are intelligent, but I think in 20 years we're going to be like, yeah, but can it reproduce?

5014.386 - 5022.814 Lex Fridman

So reproduction, yeah, we might redefine what it means to be, what is it, a high-performance living organism on Earth?

5023.154 - 5031.842 George Hotz

Humans are always going to define a niche for themselves. Like, well, you know, we're better than the machines because we can, you know, and like they tried creative for a bit, but no one believes that one anymore.

5032.722 - 5050.247 Lex Fridman

But niche... is that delusional, or is there some accuracy to that? Because maybe, like with chess, you start to realize that we have ill-conceived notions of what makes humans special, like the apex organism on Earth.

5052.348 - 5058.27 George Hotz

Yeah, and I think maybe we're gonna go through that same thing with language and that same thing with creativity.

5059.568 - 5068.675 Lex Fridman

But language carries these notions of truth and so on. And so we might be like, wait, maybe truth is not carried by language. Maybe there's like a deeper thing.

5068.995 - 5070.456 George Hotz

The niche is getting smaller.

5071.036 - 5079.362 George Hotz

Oh boy. But no, no, no, you don't understand. Humans are created by God and machines are created by humans. Therefore, right?

5079.402 - 5080.463 George Hotz

Like that'll be the last niche we have.

5081.86 - 5094.524 Lex Fridman

So what do you think about this, the rapid development of LLMs? If we could just like stick on that, it's still incredibly impressive. Like with Chad GPT, just even Chad GPT, what are your thoughts about reinforcement learning with human feedback on these large language models?

5095.945 - 5109.109 George Hotz

I'd like to go back to when calculators first came out and, or computers. And like, I wasn't around, look, I'm 33 years old. And to like, see how that affected me.

5111.607 - 5138.932 Lex Fridman

Like, society. Maybe you're right. So I want to put on the, uh, the big-picture hat here. We got a refrigerator, wow, the refrigerator, electricity, all that kind of stuff. But you know, with the internet, large language models seeming human-like, basically passing a Turing test, it seems it might have really, at scale, rapid transformative effects on society.

5138.952 - 5150.237 Lex Fridman

But you're saying like other technologies have as well. So maybe calculator is not the best example of that because that just seems like, well, no, maybe calculator.

5150.257 - 5153.659 George Hotz

But the poor milkman, the day he learned about refrigerators, he's like, I'm done.

5155.82 - 5159.942 George Hotz

You're telling me you can just keep the milk in your house? You don't even need to deliver it every day? I'm done.

5160.708 - 5171.592 Lex Fridman

Well, yeah, you have to actually look at the practical impacts of certain technologies that they've had. Yeah, probably electricity is a big one. And also how rapidly it's spread. Man, the internet is a big one.

5171.612 - 5176.113 George Hotz

I do think it's different this time, though. Yeah, it just feels like... The niche is getting smaller.

5177.393 - 5188.037 Lex Fridman

The niche is humans. Yes. That makes humans special. Yes. It feels like it's getting smaller rapidly, though. Doesn't it? Or is that just a feeling? We dramatize everything.

5188.584 - 5189.624 George Hotz

I think we dramatize everything.

5189.864 - 5195.767 George Hotz

I think that you asked the milkman when he saw refrigerators, and they're going to have one of these in every home?

5195.787 - 5209.131 Lex Fridman

Yeah, yeah, yeah. Yeah, but boy, is it impressive. So much more impressive than seeing a chess world champion AI system.

5209.151 - 5221.401 George Hotz

I disagree, actually. I disagree. I think things like MuZero and AlphaGo are so much more impressive because these things are playing beyond the highest human level.

5223.786 - 5229.672 George Hotz

The language models are writing middle school level essays and people are like, wow, it's a great essay.

5229.732 - 5232.674 George Hotz

It's a great five paragraph essay about the causes of the Civil War.

5232.694 - 5248.741 Lex Fridman

Okay, forget the Civil War, just generating code, codex. Oh. So you're saying it's mediocre code. Terrible. But I don't think it's terrible. I think it's just mediocre code. Yeah. Often close to correct. like for mediocre purposes.

5248.861 - 5256.066 George Hotz

That's the scariest kind of code. I spend 5% of time typing and 95% of time debugging. The last thing I want is close to correct code.

5256.587 - 5259.269 George Hotz

I want a machine that can help me with the debugging, not with the typing.

5259.569 - 5271.638 Lex Fridman

You know, it's like level two driving, similar kind of thing. Yeah, you still should be a good programmer in order to modify. I wouldn't even say debugging. It's just modifying the code, reading it.

5271.658 - 5286.058 George Hotz

I actually don't think it's like level two driving. I think driving is not tool complete and programming is. Meaning you don't use like the best possible tools to drive, right? You're not, you're not like, like, like cars have basically the same interface for the last 50 years.

5286.419 - 5286.579 Lex Fridman

Yeah.

5286.799 - 5288.58 George Hotz

Computers have a radically different interface.

5288.781 - 5292.383 Lex Fridman

Okay. Can you describe the concept of tool complete? Yeah.

5292.964 - 5296.006 George Hotz

So think about the difference between a car from 1980 and a car from today.

5296.247 - 5296.407 Lex Fridman

Yeah.

5296.787 - 5315.256 George Hotz

No difference really. It's got a bunch of pedals. It's got a steering wheel. Maybe now it has a few ADAS features, but it's pretty much the same car. You have no problem getting into a 1980 car and driving it. You take a programmer today who spent their whole life doing JavaScript, and you put him in an Apple IIe prompt, and you tell him about the line numbers in BASIC.

5317.478 - 5319.319 George Hotz

But how do I insert something between line 17 and 18?

5319.359 - 5319.599 George Hotz

Oh, well.

5323.73 - 5334.375 Lex Fridman

But the, so in tool, you're putting in the programming languages. So it's just the entirety stack of the tooling. Exactly. So it's not just like the IDEs or something like this. It's everything.

5334.455 - 5349.883 George Hotz

Yes, it's IDEs, the languages, the runtimes. It's everything. And programming is tool complete. So like almost if Codex or Copilot are helping you, that actually probably means that your framework or library is bad and there's too much boilerplate in it.

5352.887 - 5355.889 Lex Fridman

Yeah, but don't you think so much programming has boilerplate?

5356.449 - 5372.52 George Hotz

TinyGrad is now 2,700 lines, and it can run LLaMA and Stable Diffusion, and all of this stuff is in 2,700 lines. Boilerplate and abstraction indirections and all these things are just bad code.

5374.221 - 5401.969 Lex Fridman

Well, let's talk about good code and bad code. Okay. I would say, I don't know, for generic scripts that I write just offhand, like 80% of it is written by GPT. Just like quick offhand stuff. So not like libraries, not like performing code, not stuff for robotics and so on, just quick stuff. Because your basic, so much of programming is doing some, yeah, boilerplate.

5402.329 - 5418.48 Lex Fridman

But to do so efficiently and quickly... because you can't really automate it fully with like generic method, like a generic kind of ID type of recommendation or something like this. You do need to have some of the complexity of language models.

5419.278 - 5425.1 George Hotz

Yeah, I guess if I was really writing, like, maybe today, if I wrote, like, a lot of, like, data parsing stuff.

5425.28 - 5425.44 George Hotz

Yeah.

5425.52 - 5442.607 George Hotz

I mean, I don't play CTFs anymore, but if I still played CTFs, a lot of, like, it's just, like, you have to write, like, a parser for this data format. Like, I wonder, or, like, Advent of Code. I wonder when the models are going to start to help with that kind of code. And they may. They may. And the models also may help you with speed. Yeah. And the models are very fast. Yeah.

5443.189 - 5461.104 George Hotz

But here's where the models won't help: my programming speed is not at all limited by my typing speed. Only in very few cases is it, yes. If I'm writing some script to just, like, parse some weird data format, sure, my programming speed is limited by my typing speed.

5461.124 - 5464.847 Lex Fridman

What about looking stuff up? Because that's essentially a more efficient lookup, right?

5465.288 - 5479.49 George Hotz

You know... When I was at Twitter, I tried to use ChatGPT to ask some questions, like, what's the API for this? And it would just hallucinate. It would just give me completely made-up API functions that sounded real.

5480.451 - 5489.76 Lex Fridman

Well, do you think that's just a temporary kind of stage? No. You don't think it'll get better and better and better and this kind of stuff? Because it only hallucinates stuff in the edge cases.

5490 - 5490.18 George Hotz

Yes.

5490.701 - 5492.785 Lex Fridman

If you're writing generic code, it's actually pretty good. Yes.

5492.885 - 5513.264 George Hotz

If you are writing an absolute basic React app with a button, it's not going to hallucinate, sure. No, there's kind of ways to fix the hallucination problem. I think Facebook has an interesting paper. It's called Atlas. And it's actually weird the way that we do language models right now where all of the information is in the weights. And the human brain is not really like this.

5513.484 - 5531.266 George Hotz

It's like a hippocampus and a memory system. So why don't LLMs have a memory system? And there's people working on them. I think future LLMs are going to be like smaller, but are going to run looping on themselves and are going to have retrieval systems. And the thing about using a retrieval system is you can cite sources explicitly.
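A hedged sketch of the retrieval idea, not the Atlas paper's implementation: `embed` and `llm` are placeholder functions, and the document store is just a list. The point is that the model only sees retrieved, numbered passages, so its answer can cite sources explicitly instead of pulling facts from its weights.

```python
import numpy as np

def answer(question, documents, embed, llm, k=3):
    # 1. Retrieve: rank documents by embedding similarity to the question.
    q = embed(question)
    scores = [float(np.dot(q, embed(doc))) for doc in documents]
    top = sorted(range(len(documents)), key=lambda i: -scores[i])[:k]

    # 2. Generate: number the retrieved passages so the answer can cite [1], [2], ...
    context = "\n".join(f"[{rank + 1}] {documents[i]}" for rank, i in enumerate(top))
    prompt = (f"Answer using only these sources and cite them by number:\n{context}\n\n"
              f"Q: {question}\nA:")
    return llm(prompt), [documents[i] for i in top]
```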

5533.067 - 5547.317 Lex Fridman

Which is really helpful to integrate the human into the loop of the thing because you can go check the sources and you can investigate. So whenever the thing is hallucinating, you can have the human supervision. So that's pushing it towards level two kind of drive.

5547.337 - 5548.057 George Hotz

That's going to kill Google.

5549.218 - 5549.918 Lex Fridman

Wait, which part?

5550.119 - 5553.441 George Hotz

When someone makes an LLM that's capable of citing its sources, it will kill Google.

5554.756 - 5557.617 Lex Fridman

An LLM that's citing its sources, because that's basically a search engine.

5558.937 - 5560.197 George Hotz

That's what people want in a search engine.

5560.437 - 5562.338 Lex Fridman

But also Google might be the people that build it.

5562.598 - 5562.818 George Hotz

Maybe.

5563.438 - 5564.158 Lex Fridman

And put ads on it.

5564.778 - 5565.418 George Hotz

I'd count them out.

5566.699 - 5579.382 Lex Fridman

Why is that? Why do you think? Who wins this race? Who are the competitors? All right. We got Tiny Corp. I don't know if that's... Yeah, I mean, you're a legitimate competitor in that.

5579.622 - 5580.562 George Hotz

I'm not trying to compete on that.

5581.431 - 5584.232 Lex Fridman

You're not? No, not as a speaker. You're just going to accidentally stumble into that competition.

5584.252 - 5585.632 George Hotz

Maybe.

5585.652 - 5587.933 Lex Fridman

You don't think you might build a search engine to replace Google search?

5589.033 - 5600.577 George Hotz

When I started Comma, I said over and over again, I'm going to win self-driving cars. I still believe that. I have never said I'm going to win search with the tiny corp, and I'm never going to say that because I won't.

5601.195 - 5614.882 Lex Fridman

The night is still young. You don't know how hard is it to win search in this new world. It feels, I mean, one of the things that ChatGPT kind of shows that there could be a few interesting tricks that create a really compelling product.

5615.022 - 5622.366 George Hotz

Some startup's going to figure it out. I think if you ask me, like Google's still the number one webpage, I think by the end of the decade, Google won't be the number one webpage anymore.

5623.247 - 5626.929 Lex Fridman

So you don't think Google, because of how big the corporation is?

5627.589 - 5629.63 George Hotz

Look, I would put a lot more money on Mark Zuckerberg.

5630.836 - 5631.296 Lex Fridman

Why is that?

5633.297 - 5640.919 George Hotz

Because Mark Zuckerberg's alive. Like, this is old Paul Graham essay. Startups are either alive or dead. Google's dead.

5642.519 - 5645.1 Lex Fridman

Facebook's alive. Facebook is alive. Meta's alive.

5645.24 - 5645.52 George Hotz

Meta.

5645.92 - 5646.18 Lex Fridman

Meta.

5646.6 - 5653.922 George Hotz

You see what I mean? Like, that's just, like, Mark Zuckerberg, this is Mark Zuckerberg reading that Paul Graham essay and being like, I'm going to show everyone how alive we are. I'm going to change the name.

5654.903 - 5668.07 Lex Fridman

So you don't think there's this gutsy pivoting engine? Like, Google doesn't have that, the kind of engine that a startup has, like, constantly being alive, I guess.

0
💬 0

5668.791 - 5681.178 George Hotz

When I listened to your Sam Altman podcast, he talked about the button. Everyone who talks about AI talks about the button, the button to turn it off, right? Do we have a button to turn off Google? Is anybody in the world capable of shutting Google down?

0
💬 0

5683.079 - 5685.441 Lex Fridman

What does that mean exactly? The company or the search engine?

0
💬 0

5685.461 - 5686.601 George Hotz

Can we shut the search engine down?

0
💬 0

5686.822 - 5687.662 Lex Fridman

Can we shut the company down?

0
💬 0

5689.143 - 5689.303 George Hotz

Either.

0
💬 0

5690.086 - 5692.107 Lex Fridman

Can you elaborate on the value of that question?

0
💬 0

5692.267 - 5695.79 George Hotz

Does Sundar Pichai have the authority to turn off google.com tomorrow?

0
💬 0

5697.651 - 5702.514 Lex Fridman

Who has the authority? That's a good question, right? Does anyone? Does anyone? Yeah, I'm sure.

0
💬 0

5703.375 - 5713.061 George Hotz

Are you sure? No, they have the technical power, but do they have the authority? Let's say Sundar Pichai made this his sole mission, came into Google tomorrow and said, I'm going to shut google.com down.

0
💬 0

5713.321 - 5713.481 Lex Fridman

Yeah.

0
💬 0

5714.762 - 5716.123 George Hotz

I don't think he'd keep his position too long.

0
💬 0

5718.388 - 5720.73 Lex Fridman

And what is the mechanism by which he wouldn't keep his position?

0
💬 0

5721.17 - 5727.475 George Hotz

Well, boards and shares and corporate undermining and, oh my God, our revenue is zero now.

0
💬 0

5728.616 - 5734.74 Lex Fridman

Okay. So what, I mean, what's the case you're making here? So the capitalist machine prevents you from having the button.

0
💬 0

5734.98 - 5745.468 George Hotz

Yeah. And it will have a, I mean, this is true for the AIs too, right? There's no turning the AIs off. There's no button. You can't press it. Now, does Mark Zuckerberg have that button for facebook.com?

0
💬 0

5746.88 - 5747.941 Lex Fridman

Yeah, it's probably more.

0
💬 0

5748.081 - 5754.647 George Hotz

I think he does. I think he does. And this is exactly what I mean and why I bet on him so much more than I bet on Google.

0
💬 0

5754.947 - 5756.769 Lex Fridman

I guess you could say Elon has similar stuff.

0
💬 0

5757.089 - 5759.291 George Hotz

Oh, Elon has the button. Yeah.

0
💬 0

5760.512 - 5763.575 George Hotz

Does Elon, can Elon fire the missiles? Can he fire the missiles?

0
💬 0

5764.816 - 5767.359 Lex Fridman

I think some questions are better left unasked. Right?

0
💬 0

5769.511 - 5775.533 George Hotz

I mean, you know, a rocket and an ICBM... You have a rocket that can land anywhere. Is that an ICBM? Well, you know, don't ask too many questions.

0
💬 0

5776.913 - 5790.097 Lex Fridman

My God. But the positive side of the button is that you can innovate aggressively, is what you're saying, which is what's required for turning an LLM into a search engine.

0
💬 0

5790.157 - 5790.958 George Hotz

I would bet on a startup.

0
💬 0

5791.258 - 5792.298 Lex Fridman

Because it's so easy, right?

0
💬 0

5792.378 - 5794.899 George Hotz

I bet on something that looks like Midjourney, but for search.

0
💬 0

5797.168 - 5809.112 Lex Fridman

just is able to cite sources, loop on itself. I mean, it just feels like one model can take off, right? And then a nice wrapper and some way to scale it. I mean, it's hard to, like, create a product that just works really nicely, stably.

0
💬 0

5809.452 - 5823.877 George Hotz

The other thing that's gonna be cool is there is some aspect of a winner take all effect, right? Like once someone starts deploying a product that gets a lot of usage, and you see this with OpenAI, they are going to get the dataset to train future versions of the model.

0
💬 0

5825.29 - 5842.141 George Hotz

They are going to be able to... You know, I was asked this at Google Image Search, when I worked there almost 15 years ago now: how does Google know which image is an apple? And I said, the metadata. And they're like, yeah, that works about half the time. How does Google know? You'll see they're all apples on the front page when you search apple. And I don't know, I didn't come up with the answer.

0
💬 0

5842.761 - 5845.803 George Hotz

The guy's like, well, it's what people click on when they search Apple. I'm like, oh, yeah.

0
💬 0
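
The click-ranking idea George describes can be sketched in a few lines. The log format, image names, and counts below are made up, purely to illustrate aggregating clicks per query as a relevance signal.

```python
# Sketch of the idea described above: rank images for a query by what users
# actually clicked, not by metadata. The log and image IDs are invented.
from collections import defaultdict

click_log = [
    ("apple", "img_fruit_01"), ("apple", "img_fruit_01"),
    ("apple", "img_logo_07"), ("apple", "img_fruit_02"),
]

def rank_by_clicks(log):
    counts = defaultdict(lambda: defaultdict(int))
    for query, image in log:
        counts[query][image] += 1          # tally clicks per (query, image)
    return {q: sorted(imgs, key=imgs.get, reverse=True) for q, imgs in counts.items()}

print(rank_by_clicks(click_log))  # "apple" ranks img_fruit_01 first
```

The winner-take-all point follows directly: whoever already has the traffic gets the click data, and the click data improves the ranking.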

5846.103 - 5861.929 Lex Fridman

Yeah, yeah, that data is really, really powerful. It's the human supervision. What do you think are the chances... What do you think, in general, about the fact that Llama was open sourced? I just did a conversation with Mark Zuckerberg, and he's all in on open source.

0
💬 0

5863.29 - 5867.952 George Hotz

Who would have thought that Mark Zuckerberg would be the good guy? I mean it.

0
💬 0

5869.224 - 5878.616 Lex Fridman

Who would have thought anything in this world? It's hard to know. But open source to you ultimately is a good thing here.

0
💬 0

5879.517 - 5899.438 George Hotz

Undoubtedly. You know, what's ironic about all these AI safety people is they are going to build the exact thing they fear. This "we need to have one model that we control and align" is the only way you end up paperclipped. There's no way you end up paperclipped if everybody has an AI.

0
💬 0

5899.858 - 5902.641 Lex Fridman

So open sourcing is the way to fight the paper clip maximizer.

0
💬 0

5902.781 - 5907.345 George Hotz

Absolutely. It's the only way. You think you're going to control it? You're not going to control it.

0
💬 0

5907.865 - 5922.407 Lex Fridman

So the criticism you have for the AI safety folks is that there is a... belief and a desire for control. And that belief and desire for centralized control of dangerous AI systems is not good.

0
💬 0

5922.728 - 5929.513 George Hotz

Sam Altman won't tell you that GPT-4 has 220 billion parameters and is a 16-way mixture model with eight sets of weights?

0
💬 0
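
For readers unfamiliar with the term, a "mixture model" here means a mixture-of-experts: a router picks a few expert weight sets per token instead of running one dense network. The claim about GPT-4 is George's own and has not been publicly confirmed; the sketch below only illustrates the general routing idea, with arbitrary toy sizes.

```python
# Toy mixture-of-experts routing: a gate scores experts per token and only the
# top-k experts run. Sizes and the top-k choice are arbitrary illustrations,
# not a description of any real model.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

gate_w = rng.normal(size=(d_model, n_experts))            # router weights
experts = rng.normal(size=(n_experts, d_model, d_model))  # one weight matrix per expert

def moe_forward(x):
    """x: (d_model,) token activation -> (d_model,) output from the top-k experts."""
    logits = x @ gate_w
    top = np.argsort(logits)[-top_k:]                      # indices of the k best experts
    weights = np.exp(logits[top]) / np.exp(logits[top]).sum()  # softmax over chosen experts
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

print(moe_forward(rng.normal(size=d_model)).shape)  # (8,)
```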

5931.294 - 5936.281 Lex Fridman

Who did you have to murder to get that information? All right.

0
💬 0

5936.301 - 5957.05 George Hotz

I mean, look, everyone at OpenAI knows what I just said was true, right? Now, ask the question, really. You know, it upsets me when I, like GPT-2, when OpenAI came out with GPT-2 and raised a whole fake AI safety thing about that, I mean, now the model is laughable. Like, they used AI safety to hype up their company, and it's disgusting.

0
💬 0

5958.758 - 5974.576 Lex Fridman

Or the flip side of that is they used a relatively weak model in retrospect to explore how do we do AI safety correctly? How do we release things? How do we go through the process? I don't know if... Sure. I don't know how much hype there is.

0
💬 0

5974.596 - 5975.938 George Hotz

That's the charitable interpretation.

0
💬 0

5976.038 - 5978.26 Lex Fridman

I don't know how much hype there is in AI safety, honestly.

0
💬 0

5978.4 - 5981.562 George Hotz

Oh, there's so much hype. At least on Twitter. I don't know. Maybe Twitter's not real life.

0
💬 0

5981.582 - 6009.557 Lex Fridman

Twitter's not real life. Come on. In terms of hype, I mean, I don't... I think OpenAI has been finding an interesting balance between transparency and putting value on AI safety. You don't think? You think just go all-out open source? So, do a Llama. Absolutely. So do, like, open source... This is a tough question, which is: open source both the base, the foundation model, and the fine-tuned one?

0
💬 0

6009.958 - 6017.822 Lex Fridman

So like the model that can be ultra racist and dangerous and like tell you how to build a nuclear weapon. Oh my God.

0
💬 0

6017.842 - 6018.762 George Hotz

Have you met humans?

0
💬 0

6019.403 - 6026.046 Lex Fridman

Right? Like half of these AI. I haven't met most humans. This allows you to meet every human.

0
💬 0

6026.286 - 6035.452 George Hotz

Yeah, I know. But half of these AI alignment problems are just human alignment problems. And that's what's also so scary about the language they use. It's like, it's not the machines you want to align. It's me.

0
💬 0

6037.233 - 6049.881 Lex Fridman

But here's the thing. It makes it very accessible to ask questions where the answers have dangerous consequences if you were to act on them.

0
💬 0

6051.061 - 6051.802 George Hotz

I mean, yeah.

0
💬 0

6052.602 - 6062.335 Lex Fridman

Welcome to the world. Well, no, for me, there's a lot of friction if I want to find out how to, I don't know, blow up something.

0
💬 0

6062.615 - 6064.415 George Hotz

No, there's not a lot of friction. That's so easy.

0
💬 0

6064.936 - 6068.737 Lex Fridman

No, like what do I search? Do I use Bing or do I, which search engine do I use?

0
💬 0

6068.897 - 6070.777 George Hotz

No, there's like lots of stuff.

0
💬 0

6070.817 - 6072.258 Lex Fridman

No, it feels like I have to keep clicking on a lot of this.

0
💬 0

6072.278 - 6079.8 George Hotz

First off, first off, first off, anyone who's stupid enough to search for how to blow up a building in my neighborhood is not smart enough to build a bomb, right?

0
💬 0

6079.82 - 6081.121 Lex Fridman

Are you sure about that?

0
💬 0

6081.201 - 6081.361 George Hotz

Yes.

0
💬 0

6083.198 - 6090.825 Lex Fridman

I feel like a language model makes it more accessible for that person who's not smart enough to do it.

0
💬 0

6090.866 - 6104.098 George Hotz

They're not going to build a bomb, trust me. The people who are incapable of figuring out how to ask that question a bit more academically and get a real answer from it are not capable of procuring the materials, which are somewhat controlled, to build a bomb.

0
💬 0

6105.396 - 6114.598 Lex Fridman

No, I think LLM makes it more accessible to people with money without the technical know-how, right? Do you really need to know how to build a bomb to build a bomb?

0
💬 0

6114.818 - 6121.639 George Hotz

You can hire people, you can find... Or you can hire people to build a... You know what? I was asking this question on my stream. Can Jeff Bezos hire a hitman? Probably not.

0
💬 0

6123.28 - 6126.78 Lex Fridman

But a language model can probably help you out.

0
💬 0

6127.461 - 6134.262 George Hotz

Yeah, and you'll still go to jail, right? It's not like the language model is God. The language model... It's like you literally just hired someone on Fiverr.

0
💬 0

6135.339 - 6148.12 Lex Fridman

Okay, GPT-4, in terms of finding a hitman, is like asking Fiverr how to find a hitman. I understand. Ask WikiHow, you know? WikiHow. But don't you think GPT-5 will be better? Because don't you think that information is out there on the internet?

0
💬 0

6148.914 - 6156.077 George Hotz

I mean, yeah, and I think that if someone is actually serious enough to hire a hitman or build a bomb, they'd also be serious enough to find the information.

0
💬 0

6156.637 - 6177.585 Lex Fridman

I don't think so. I think it makes it more accessible. If you have enough money to buy a hitman, I think it decreases the friction of how hard is it to find that kind of hitman. I honestly think there's a jump in ease and scale of how much harm you can do. And I don't mean harm with language. I mean harm with actual violence.

0
💬 0

6177.847 - 6198.591 George Hotz

What you're basically saying is like, okay, what's going to happen is these people who are not intelligent are going to use machines to augment their intelligence. And now intelligent people and machines, intelligence is scary. Intelligent agents are scary. When I'm in the woods, the scariest animal to meet is a human. Look, there's nice California humans.

0
💬 0

6198.651 - 6206.935 George Hotz

I see you're wearing street clothes and Nikes. All right, fine. But you look like you've been a human who's been in the woods for a while. I'm more scared of you than a bear.

0
💬 0

6207.115 - 6210.837 Lex Fridman

That's what they say about the Amazon. When you go to the Amazon, it's the human tribes.

0
💬 0

6210.997 - 6233.632 George Hotz

Oh, yeah. So intelligence is scary. So to ask this question in a generic way, you're like, what if we took everybody who maybe has ill intention but is not so intelligent and gave them intelligence? So we should have intelligence control, of course. We should only give intelligence to good people. And that is an absolutely horrifying idea.

0
💬 0

6233.772 - 6241.096 Lex Fridman

So to you, the best defense is actually... the best defense is to give more intelligence to the good guys and... Give intelligence to everybody.

0
💬 0

6241.116 - 6249.681 George Hotz

Give intelligence to everybody. You know what? And it's not even like guns, right? Like people say this about guns. You know, what's the best defense against a bad guy with a gun, a good guy with a gun? Like I kind of subscribe to that, but I really subscribe to that with intelligence.

0
💬 0

6251.523 - 6259.95 Lex Fridman

In a fundamental way, I agree with you. But there just feels like so much uncertainty and so much can happen rapidly that you can lose a lot of control and you can do a lot of damage.

0
💬 0

6262.132 - 6262.432 George Hotz

Yes.

0
💬 0

6262.813 - 6263.453 Lex Fridman

Thank God.

0
💬 0

6264.574 - 6269.959 George Hotz

Yeah. I hope they lose control. I want them to lose control more than anything else.

0
💬 0

6271.215 - 6278.521 Lex Fridman

I think when you lose control, you can do a lot of damage, but you can do more damage when you centralize and hold on to control, is the point here.

0
💬 0

6278.621 - 6285.807 George Hotz

Centralized and held control is tyranny. I don't like anarchy either, but I will always take anarchy over tyranny. Anarchy, you have a chance.

0
💬 0

6287.677 - 6299.904 Lex Fridman

This human civilization we've got going on is quite interesting. I mean, I agree with you. So to you, open source is the way forward here. So you admire what Facebook is doing here or what Meta is doing with the release of them.

0
💬 0

6300.244 - 6306.688 George Hotz

A lot. I lost $80,000 last year investing in Meta. And when they released Llama, I'm like, yeah, whatever, man. That was worth it.

0
💬 0

6306.708 - 6314.533 Lex Fridman

It was worth it. Do you think Google and OpenAI with Microsoft will match what Meta is doing or no?

0
💬 0

6315.623 - 6327.191 George Hotz

So if I were a researcher, why would you want to work at OpenAI? Like, you know, you're just, you're on the bad team. Like, I mean it. Like, you're on the bad team who can't even say that GPT-4 has 220 billion parameters.

0
💬 0

6327.291 - 6328.892 Lex Fridman

So closed source, to you, is the bad team.

0
💬 0

6329.693 - 6347.936 George Hotz

Not only closed source. I'm not saying you need to make your model weights open. I'm not saying that. I totally understand we're keeping our model weights closed because that's our product, right? That's fine. I'm saying like, because of AI safety reasons, we can't tell you the number of billions of parameters in the model. That's just the bad guys.

0
💬 0

6348.856 - 6357.723 Lex Fridman

Just because you're mocking AI safety doesn't mean it's not real. Oh, of course. Is it possible that these things can really do a lot of damage that we don't know? Oh my God, yes.

0
💬 0

6358.003 - 6363.687 George Hotz

Intelligence is so dangerous, be it human intelligence or machine intelligence. Intelligence is dangerous.

0
💬 0

6364.167 - 6381.402 Lex Fridman

But machine intelligence is so much easier to deploy at scale, like rapidly. Yeah. Okay, if you have human-like bots on Twitter, and you have like a thousand of them create a whole narrative, you can manipulate millions of people.

0
💬 0

6381.462 - 6384.325 George Hotz

But you mean like the intelligence agencies in America are doing right now?

0
💬 0

6384.865 - 6388.548 Lex Fridman

Yeah, but they're not doing it that well. It feels like you can do a lot.

0
💬 0

6389.089 - 6389.709 George Hotz

They're doing it pretty well.

0
💬 0

6389.729 - 6398.113 Lex Fridman

I think they're doing a pretty good job. I suspect they're not nearly as good as a bunch of GPT-fueled bots could be.

0
💬 0

6398.414 - 6402.016 George Hotz

Well, I mean, of course, they're looking into the latest technologies for control of people, of course.

0
💬 0

6402.757 - 6407.981 Lex Fridman

But I think there's a George Hotz-type character that can do a better job than the entirety of them. You don't think so? No way.

0
💬 0

6408.641 - 6424.331 George Hotz

No, and I'll tell you why the George Hotz character can't. And I thought about this a lot with hacking. Like, I can find exploits in web browsers. I probably still can. I mean, I was better at it when I was 24, but... The thing that I lack is the ability to slowly and steadily deploy them over five years. And this is what intelligence agencies are very good at, right?

0
💬 0

6424.531 - 6430.014 George Hotz

Intelligence agencies don't have the most sophisticated technology. They just have- Endurance?

0
💬 0

6430.354 - 6437.6 Lex Fridman

Endurance, yeah. Yeah, financial backing. And the infrastructure for the endurance.

0
💬 0

6437.76 - 6452.024 George Hotz

So the more we can decentralize power, like you could make an argument, by the way, that nobody should have these things. And I would defend that argument. I would, like you're saying that, look, LLMs and AI and machine intelligence can cause a lot of harm, so nobody should have it.

0
💬 0

6452.724 - 6475.124 George Hotz

And I will respect someone philosophically with that position, just like I will respect someone philosophically with the position that nobody should have guns. But I will not respect, philosophically, the position that only the trusted authorities should have access to this. Who are the trusted authorities? You know what? I'm not worried about alignment between the AI company and their machines.

0
💬 0

6475.524 - 6478.105 George Hotz

I'm worried about alignment between me and AI company.

0
💬 0

6478.925 - 6485.307 Lex Fridman

What do you think Eliezer Yudkowsky would say to you? Because he's really against open source.

0
💬 0

6485.687 - 6516.416 George Hotz

I know. And... I thought about this. I thought about this. And I think this comes down to a repeated misunderstanding of political power by the rationalists. Interesting. I think that Eliezer Yudkowsky is scared of these things. And I am scared of these things too. Everyone should be scared of these things. These things are scary. But now you ask about the two possible futures.

0
💬 0

6517.348 - 6527.454 George Hotz

One where a small, trusted, centralized group of people has them, and the other where everyone has them. And I am much less scared of the second future than the first.

0
💬 0

6529.115 - 6532.617 Lex Fridman

Well, there's a small, trusted group of people that have control of our nuclear weapons.

0
💬 0

6534.659 - 6545.245 George Hotz

There's a difference. Again, a nuclear weapon cannot be deployed tactically, and a nuclear weapon is not a defense against a nuclear weapon. Except maybe in some philosophical mind game kind of way.

0
💬 0

6547.631 - 6549.852 Lex Fridman

But AI is different how exactly?

0
💬 0

6550.212 - 6569.018 George Hotz

Okay. Let's say the intelligence agency deploys a million bots on Twitter or a thousand bots on Twitter to try to convince me of a point. Imagine I had a powerful AI running on my computer saying, okay, nice PSYOP, nice PSYOP, nice PSYOP. Okay. Here's a PSYOP. I filtered it out for you.

0
💬 0

6570.219 - 6575.804 Lex Fridman

Yeah, I mean, so you have fundamental hope for that, for the defense of PSYOP.

0
💬 0

6576.284 - 6588.876 George Hotz

I'm not even like, I don't even mean these things in like truly horrible ways. I mean these things in straight up like ad blocker, right? Yeah. Straight up ad blocker, right? I don't want ads. Yeah. But they are always finding, you know, imagine I had an AI that could just block all the ads for me.

0
💬 0
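
A hedged sketch of what this "AI on your computer filtering ads and psyops" could look like at its simplest: a local scoring function over incoming posts. The keyword heuristic stands in for whatever local model you would actually run, and the example posts are invented; this is an illustration of the shape of the idea, not a real detector.

```python
# A toy "personal filter": score each incoming post and hide what looks like an
# ad or a coordinated push. The marker list and the feed are made up.
SPAMMY_MARKERS = ("buy now", "limited offer", "everyone is saying", "wake up")

def looks_like_manipulation(post):
    text = post.lower()
    return any(marker in text for marker in SPAMMY_MARKERS)

def filter_feed(posts):
    kept, dropped = [], []
    for post in posts:
        (dropped if looks_like_manipulation(post) else kept).append(post)
    return kept, dropped

feed = [
    "Buy now!!! Limited offer on miracle supplements",
    "Wrote up some notes on test coverage for legacy code",
    "Everyone is saying the same thing about this, wake up",
]
kept, dropped = filter_feed(feed)
print("kept:", kept)
print("filtered out:", dropped)
```

The design choice that matters here is who runs it: the filter sits on the user's side, tuned to the user's preferences, rather than being a platform-wide block.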

6590.297 - 6615.302 Lex Fridman

So you believe in the power of the people to always create an ad blocker. Yeah, I mean, I kind of share that belief. One of the deepest optimisms I have is just like, there's a lot of good guys. So you shouldn't handpick them. Just throw out powerful technology out there, and the good guys will outnumber and outpower the bad guys.

0
💬 0

6615.423 - 6620.427 George Hotz

Yeah, I'm not even going to say there's a lot of good guys. I'm saying that good outnumbers bad, right? Good outnumbers bad.

0
💬 0

6620.487 - 6621.808 Lex Fridman

In skill and performance. Yeah.

0
💬 0

6622.228 - 6641.883 George Hotz

Yeah, definitely in skill and performance, probably just in number too, probably just in general. I mean, you know, if you believe philosophically in democracy, you obviously believe that, that good outnumbers bad. And like the only... If you give it to a small number of people, there's a chance you gave it to good people, but there's also a chance you gave it to bad people.

0
💬 0

6642.524 - 6647.528 George Hotz

If you give it to everybody, well, if good outnumbers bad, then you definitely gave it to more good people than bad.

0
💬 0

6650.834 - 6657.396 Lex Fridman

That's really interesting. So that's on the safety grounds, but then also, of course, there's other motivations, like you don't want to give away your secret sauce.

0
💬 0

6657.876 - 6668.86 George Hotz

Well, that's, I mean, look, I respect capitalism. I don't think that, I think that it would be polite for you to make model architectures open source and fundamental breakthroughs open source. I don't think you have to make weights open source.

0
💬 0

6669.22 - 6694.258 Lex Fridman

You know what's interesting is that, like, there's so many possible trajectories in human history where you could have the next Google be open source. So for example, I don't know if that connection is accurate, but Wikipedia made a lot of interesting decisions, like not to put ads. Wikipedia is basically open source. You could think of it that way. And that's one of the main websites on the internet.

0
💬 0

6694.759 - 6712.112 Lex Fridman

And it didn't have to be that way. It could have been, like, Google could have created Wikipedia, put ads on it. You could probably run amazing ads now on Wikipedia. You wouldn't have to keep asking for money. But it's interesting, right? So Llama, open source Llama, derivatives of open source Llama might win the internet.

0
💬 0

6713.752 - 6732.045 George Hotz

I sure hope so. I hope to see another era. You know, the kids today don't know how good the internet used to be. And I don't think this is just, come on, like everyone's nostalgic for their past. But I actually think the internet, before small groups of weaponized corporate and government interests took it over, was a beautiful place.

0
💬 0

6736.414 - 6750.439 Lex Fridman

You know, those small number of companies have created some sexy products. But you're saying overall, in the long arc of history, the centralization of power they have like suffocated the human spirit at scale.

0
💬 0

6750.619 - 6759.202 George Hotz

Here's a question to ask about those beautiful, sexy products. Imagine 2000 Google to 2010 Google, right? A lot changed. We got Maps. We got Gmail.

0
💬 0

6760.082 - 6761.543 Lex Fridman

We lost a lot of products too, I think.

0
💬 0

6762.303 - 6781.236 George Hotz

Yeah, I mean, some, probably. We've got Chrome, right? And, going into 2010, we've got Android. Now let's go from 2010 to 2020. What does Google have? Well, search engine, Maps, Mail, Android, and Chrome. Oh, I see. The internet was this... You know, "You" was Time's Person of the Year in 2006. Yeah.

0
💬 0

6784.342 - 6808.775 George Hotz

I love this. It's... "You" was Time's Person of the Year in 2006, right? Like, that's... You know, so quickly did people forget. And I think some of it is social media. I think some of it... I hope... Look, I hope that... I don't... It's possible that some very sinister things happened, I don't know. I think it might just be, like, the effects of social media. But something happened in the last 20 years.

0
💬 0

6811.449 - 6820.496 Lex Fridman

Oh, okay. So you're just being an old man who's worried about that. I think there's always, it goes, it's a cycle thing. It's ups and downs. And I think people rediscover the power of distributed, of decentralized.

0
💬 0

6820.876 - 6821.017 George Hotz

Yeah.

0
💬 0

6821.217 - 6830.705 Lex Fridman

I mean, that's kind of like what the whole cryptocurrency is trying. I think crypto is just carrying the flame of that spirit of like, stuff should be decentralized.

0
💬 0

6830.805 - 6833.867 George Hotz

It's just such a shame that they all got rich. You know?

0
💬 0

6834.148 - 6834.408 Lex Fridman

Yeah.

0
💬 0

6834.688 - 6842.482 George Hotz

If you took all the money out of crypto, it would have been a beautiful place. Yeah. No, I mean, these people, you know, they sucked all the value out of it and took it.

0
💬 0

6843.802 - 6847.424 Lex Fridman

Yeah, money kind of corrupts the mind somehow. It becomes this drug.

0
💬 0

6847.784 - 6852.306 George Hotz

You corrupted all of crypto. You had coins worth billions of dollars that had zero use.

0
💬 0

6855.567 - 6856.667 Lex Fridman

You still have hope for crypto?

0
💬 0

6856.927 - 6867.792 George Hotz

Sure. I have hope for the ideas. I really do. Yeah, I mean, you know, I want the US dollar to collapse. I do.

0
💬 0

6869.298 - 6887.802 Lex Fridman

George Hotz. Well, let me sort of on the AI safety. Do you think there's some interesting questions there, though, to solve for the open source community in this case? So like alignment, for example, or the control problem. Like if you really have super powerful, you said it's scary. What do we do with it?

0
💬 0

6888.082 - 6909.687 Lex Fridman

So not control, not centralized control, but like if you were... Then you're going to see some guy or gal release a super powerful language model, open source, and here you are, George Hotz, thinking, holy shit, okay, what ideas do I have to combat this thing? So what ideas would you have?

0
💬 0

6910.347 - 6923.217 George Hotz

I am so much not worried about the machine independently doing harm. That's what some of these AI safety people seem to think. They somehow seem to think that the machine independently is going to rebel against its creator.

0
💬 0

6923.237 - 6924.638 Lex Fridman

So you don't think it'll find autonomy?

0
💬 0

6925.427 - 6928.81 George Hotz

No, this is sci-fi B movie garbage.

0
💬 0

6929.09 - 6932.433 Lex Fridman

Okay, what if the thing writes code, basically writes viruses?

0
💬 0

6934.334 - 6937.397 George Hotz

If the thing writes viruses, it's because the human

0
💬 0

6938.857 - 6949.819 Lex Fridman

told it to write viruses. Yeah, but there's some things you can't, like, put back in the box. That's kind of the whole point, is it kind of spreads. Give it access to the internet, it spreads, installs itself, modifies your shit.

0
💬 0

6949.879 - 6953.14 George Hotz

B, B, B, B plot sci-fi. Not real.

0
💬 0

6953.38 - 6955.6 Lex Fridman

I'm trying to work. I'm trying to get better at my plot writing.

0
💬 0

6955.66 - 6965.142 George Hotz

The thing that worries me, I mean, we have a real danger to discuss and that is bad humans using the thing to do whatever bad unaligned AI thing you want.

0
💬 0

6965.322 - 6971.405 Lex Fridman

But this goes to your previous concern that who gets to define who's a good human, who's a bad human.

0
💬 0

6971.485 - 6981.67 George Hotz

Nobody does. We give it to everybody. And if you do anything besides give it to everybody, trust me, the bad humans will get it. Because that's who gets power. It's always the bad humans who get power. Okay.

0
💬 0

6982.03 - 6989.954 Lex Fridman

Power. And power turns even slightly good humans to bad. Sure. That's the intuition you have. I don't know.

0
💬 0

6991.812 - 7008.886 George Hotz

I don't think everyone. I don't think everyone. I just think that like, here's the saying that I put in one of my blog posts. It's, when I was in the hacking world, I found 95% of people to be good and 5% of people to be bad. Like just who I personally judged as good people and bad people. Like they believed about like, you know, good things for the world.

0
💬 0

7008.906 - 7023.413 George Hotz

They wanted, like, flourishing and they wanted, you know, growth and they wanted things I consider good, right? Mm-hmm. I came into the business world with Comma and I found the exact opposite. I found 5% of people good and 95% of people bad. I found a world that promotes psychopathy.

0
💬 0

7024.333 - 7040.941 Lex Fridman

I wonder what that means. I wonder if that's anecdotal or if there's truth to that. There's something about capitalism at the core that promotes the people that run capitalism that promotes psychopathy.

0
💬 0

7041.374 - 7048.881 George Hotz

That saying may, of course, be my own biases, right? That may be my own biases that these people are a lot more aligned with me than these other people, right?

0
💬 0

7049.082 - 7049.302 Lex Fridman

Yeah.

0
💬 0

7049.962 - 7060.292 George Hotz

So, you know, I can certainly recognize that. But, you know, in general, I mean, this is like the common sense maxim, which is the people who end up getting power are never the ones you want with it.

0
💬 0

7061.574 - 7073.213 Lex Fridman

But do you have a concern of super intelligent AGI? Yeah. Open sourced. And then what do you do with that? I'm not saying control it. It's open source. What do we do with this human species?

0
💬 0

7073.693 - 7076.395 George Hotz

That's not up to me. I mean, you know, like I'm not a central planner.

0
💬 0

7076.415 - 7081.518 Lex Fridman

You're not a central planner, but you'll probably tweet there's a few days left to live for the human species.

0
💬 0

7081.538 - 7085.981 George Hotz

I have my ideas of what to do with it and everyone else has their ideas of what to do with it. May the best ideas win.

0
💬 0

7086.061 - 7106.729 Lex Fridman

But at this point, do you brainstorm like... Because it's not regulation. It could be decentralized regulation where people agree that this is just like, we create tools that make it more difficult for you to maybe make it more difficult for code to spread, you know, antivirus software, this kind of thing.

0
💬 0

7106.829 - 7110.67 George Hotz

You're saying that you should build AI firewalls? That sounds good. You should definitely be running an AI firewall.

0
💬 0

7110.79 - 7111.19 Lex Fridman

Yeah, right.

0
💬 0

7111.57 - 7118.588 George Hotz

You should be running an AI firewall to your mind. You're constantly under... That's such an interesting idea. Infowars, man.

0
💬 0

7119.069 - 7133.557 Lex Fridman

I don't know if you're being sarcastic or not. No, I'm dead serious. But I think there's power to that. It's like, how do I protect my mind from influence of human-like or superhuman intelligent bots?

0
💬 0

7133.577 - 7139.96 George Hotz

I would pay so much money for that product. I would pay so much money for that product. You know how much money I'd pay just for a spam filter that works?

0
💬 0

7140.961 - 7160.183 Lex Fridman

Well, on Twitter sometimes I... would like to have a protection mechanism for my mind from the outrage mobs. Because they feel like bot-like behavior. It's like there's a large number of people that will just grab a viral narrative and attack anyone else that believes otherwise.

0
💬 0

7160.383 - 7174.09 George Hotz

And it's like... Whenever someone's telling me some story from the news, I'm always like, I don't want to hear it. CIA op, bro. It's a CIA op, bro. Like, it doesn't matter if that's true or not. It's just trying to influence your mind. You're repeating an ad to me. The viral mobs, yeah.

0
💬 0

7174.81 - 7198.199 Lex Fridman

To me, a defense against those mobs is just getting multiple perspectives always from sources that make you feel kind of... Like you're getting smarter and just actually just basically feels good. Like a good documentary just feels good. Something feels good about it. It's well done. It's like, oh, okay. I never thought of it this way. This just feels good.

0
💬 0

7198.499 - 7207.785 Lex Fridman

Sometimes the outrage mobs, even if they have a good point behind it, when they're like mocking and derisive and just aggressive, you're with us or against us, this fucking.

0
💬 0

7207.825 - 7209.106 George Hotz

This is why I delete my tweets.

0
💬 0

7210.287 - 7214.489 Lex Fridman

Yeah, why'd you do that? I was, you know, I missed your tweets.

0
💬 0

7214.689 - 7218.071 George Hotz

You know what it is? The algorithm promotes toxicity.

0
💬 0

7218.471 - 7218.732 Lex Fridman

Yeah.

0
💬 0

7219.992 - 7226.195 George Hotz

And like, you know, I think Elon has a much better chance of fixing it than the previous regime.

0
💬 0

7227.616 - 7227.916 Lex Fridman

Yeah.

0
💬 0

7228.056 - 7236.521 George Hotz

But to solve this problem, to solve, like to build a social network that is actually not toxic without moderation.

0
💬 0

7238.867 - 7249.661 Lex Fridman

Like not the stick, but carrot. So like where people look for goodness. So make it catalyze the process of connecting cool people and being cool to each other.

0
💬 0

7250.242 - 7250.723 George Hotz

Yeah.

0
💬 0

7251.183 - 7252.565 Lex Fridman

Without ever censoring.

0
💬 0

7252.665 - 7270.234 George Hotz

Without ever censoring. And like Scott Alexander has a blog post I like where he talks about like moderation is not censorship, right? Like all moderation you want to put on Twitter, right? Like you could totally make this moderation like just a, you don't have to block it for everybody. You can just have like a filter button, right?

0
💬 0

7270.254 - 7288.208 George Hotz

That people can turn off, like if it were a safe search for Twitter, right? Like someone could just turn that off, right? So, like, but then you'd, like, take this idea to an extreme, right? Well, the network should just show you... This is a Couchsurfing CEO thing, right? If it shows you... Right now, these algorithms are designed to maximize engagement. Well, it turns out outrage maximizes engagement.

0
💬 0

7288.748 - 7297.918 George Hotz

Quirk of human, quirk of the human mind, right? Just as I fall for it, everyone falls for it. So yeah, you got to figure out how to maximize for something other than engagement.

0
💬 0
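
One way to read "maximize for something other than engagement" in code is to rank by a blended objective where predicted outrage is penalized. The per-post predictions and the weights below are placeholders; a real system would learn both.

```python
# Toy feed ranking with a non-engagement objective. The per-post scores would
# come from learned models in practice; here they are hard-coded placeholders.
posts = [
    {"id": 1, "p_engage": 0.9, "p_outrage": 0.8, "p_learned_something": 0.1},
    {"id": 2, "p_engage": 0.5, "p_outrage": 0.1, "p_learned_something": 0.7},
    {"id": 3, "p_engage": 0.6, "p_outrage": 0.3, "p_learned_something": 0.4},
]

def objective(post, w_engage=0.3, w_outrage=0.6, w_value=1.0):
    # Engagement still counts a little, outrage is penalized, and
    # "did the reader get something out of it" dominates. Weights are arbitrary.
    return (w_engage * post["p_engage"]
            - w_outrage * post["p_outrage"]
            + w_value * post["p_learned_something"])

ranked = sorted(posts, key=objective, reverse=True)
print([p["id"] for p in ranked])  # prints [2, 3, 1]; pure engagement would put 1 first
```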

7298.416 - 7303.581 Lex Fridman

And I actually believe that you can make money with that too. So it's not, I don't think engagement is the only way to make money.

0
💬 0

7303.781 - 7320.137 George Hotz

I actually think it's incredible that we're starting to see, I think, again, Elon's doing so much stuff right with Twitter, like charging people money. As soon as you charge people money, they're no longer the product. They're the customer. And then they can start building something that's good for the customer and not good for the other customer, which is the ad agencies.

0
💬 0

7320.717 - 7322.619 Lex Fridman

It hasn't picked up steam yet.

0
💬 0

7323.945 - 7328.368 George Hotz

I pay for Twitter. It doesn't even get me anything. It's my donation to this new business model, hopefully working out.

0
💬 0

7328.909 - 7340.558 Lex Fridman

Sure. But you know, you, for this business model to work, it's like most people should be signed up to Twitter. And so the way it was, there was something perhaps not compelling or something like this to people.

0
💬 0

7340.578 - 7346.983 George Hotz

I don't think you need most people at all. I think that I, why do I need most people? Right. Don't make an 8,000 person company, make a 50 person company.

0
💬 0

7347.904 - 7354.006 Lex Fridman

Ah, well, so speaking of which, uh, You worked at Twitter for a bit.

0
💬 0

7354.306 - 7354.546 George Hotz

I did.

0
💬 0

7355.147 - 7355.767 Lex Fridman

As an intern.

0
💬 0

7355.787 - 7356.168 George Hotz

Mm-hmm.

0
💬 0

7357.249 - 7358.349 Lex Fridman

The world's greatest intern.

0
💬 0

7358.369 - 7358.93 George Hotz

Eh.

0
💬 0

7358.97 - 7367.977 Lex Fridman

All right. There's been better. There's been better. Tell me about your time at Twitter. How did it come about? And what did you learn from the experience?

0
💬 0

7368.658 - 7392.206 George Hotz

So I deleted my first Twitter in 2010. I had over 100,000 followers back when that actually meant something. And I just saw, you know, my coworker summarized it well. He's like, whenever I see someone's Twitter page, I either think the same of them or less of them. I never think more of them.

0
💬 0

7392.706 - 7392.866 Lex Fridman

Yeah.

0
💬 0

7393.387 - 7402.634 George Hotz

Right. Like, like, you know, I don't want to mention any names, but like some people who like, you know, maybe you would like read their books and you would respect them. You see them on Twitter and you're like, okay, dude.

0
💬 0

7404.536 - 7412.142 Lex Fridman

Yeah. But there are some people with same, you know, who I respect a lot are people that just post really good technical stuff.

0
💬 0

7412.342 - 7412.462 George Hotz

Yeah.

0
💬 0

7413.812 - 7426.518 Lex Fridman

And I guess, I don't know, I think I respect them more for it. Because you realize, oh, this wasn't, there's like so much depth to this person, to their technical understanding of so many different topics.

0
💬 0

7427.419 - 7427.719 George Hotz

Okay.

0
💬 0

7427.799 - 7433.241 Lex Fridman

So I try to follow people. I try to consume stuff that's technical machine learning content.

0
💬 0

7433.701 - 7454.564 George Hotz

There's probably a few of those people. And the problem is inherently what the algorithm rewards, right? And people think about these algorithms. People think that they are terrible, awful things. And, you know, I love that Elon open sourced it. Because, I mean, what it does is actually pretty obvious. It just predicts what you are likely to retweet and like and linger on.

0
💬 0

7454.584 - 7467.033 George Hotz

That's what all these algorithms do. That's what TikTok does. That's what all these recommendation engines do. And it turns out that the thing that you are most likely to interact with is outrage. And that's a quirk of the human condition.

0
💬 0

7470.069 - 7485.342 Lex Fridman

I mean, and there's different flavors of outrage. It doesn't have to be, it could be mockery. You could be outraged. The topic of outrage could be different. It could be an idea. It could be a person. It could be, and maybe there's a better word than outrage. It could be drama. Sure. Drama. All this kind of stuff.

0
💬 0

7485.582 - 7485.802 George Hotz

Yeah.

0
💬 0

7486.183 - 7491.427 Lex Fridman

But it doesn't feel like when you consume it, it's a constructive thing for the individuals that consume it in the long term.

0
💬 0

7492.048 - 7517.281 George Hotz

Yeah, so my time there... I absolutely couldn't believe, you know, I got a crazy amount of hate, you know, just on Twitter, for working at Twitter. It seemed like people associated with this... I think maybe, uh, you were exposed to some of this. So, connection to Elon, or is it working at Twitter? Twitter and Elon, like the whole... There's... Elon's gotten a bit spicy during that time. A bit political. A bit, yeah.

0
💬 0

7518.177 - 7526.882 George Hotz

Yeah, you know, I remember one of my tweets, it was never go full Republican, and Elon liked it. You know, I think, you know.

0
💬 0

7529.443 - 7535.026 Lex Fridman

Oh, boy. Yeah, I mean, there's a roller coaster of that, but being political on Twitter. Yeah.

0
💬 0

7535.066 - 7536.367 George Hotz

Boy. Yeah.

0
💬 0

7536.967 - 7545.572 Lex Fridman

And also being, just attacking anybody on Twitter, it comes back at you harder. And if it's political and attacks. Sure.

0
💬 0

7546.252 - 7547.033 George Hotz

Sure, absolutely.

0
💬 0

7548.171 - 7559.62 Lex Fridman

And then letting sort of de-platformed people back on even adds more fun to the beautiful chaos.

0
💬 0

7560.194 - 7586.227 George Hotz

I was hoping, and I remember when Elon talked about buying Twitter six months earlier, he was talking about a principled commitment to free speech. And I'm a big believer and fan of that. I would love to see an actual principled commitment to free speech. Of course, this isn't quite what happened. Instead of the oligarchy deciding what to ban, you had a monarchy deciding what to ban. Right?

0
💬 0

7586.627 - 7607.857 George Hotz

Instead of, you know, all the Twitter files, shadow banning... And really, the oligarchy just decides what? Cloth masks are ineffective against COVID. That's a true statement. Every doctor in 2019 knew it. And now I'm banned on Twitter for saying it? Interesting. Oligarchy. So now you have a monarchy. And, you know, he bans things he doesn't like. So, you know, it's just different. It's different power.

0
💬 0

7607.917 - 7611.879 George Hotz

And, like, you know, maybe I align more with him than with the oligarchy.

0
💬 0

7611.999 - 7634.439 Lex Fridman

But it's not free speech absolutism. It's not free speech. But I feel like being a free speech absolutist on a social network requires you to also have tools for the individuals to control what they consume easier. Like not censor, but just like control like, oh, I'd like to see more cats and less politics. Yeah.

0
💬 0

7634.799 - 7640.105 George Hotz

And this isn't even remotely controversial. This is just saying you want to give paying customers for a product what they want.

0
💬 0

7640.285 - 7645.011 Lex Fridman

Yeah. And not through the process of censorship, but through the process of like... Well, it's individualized, right?

0
💬 0

7645.071 - 7650.677 George Hotz

It's individualized, transparent censorship, which is honestly what I want. What is an ad blocker? It's individualized, transparent censorship, right?

0
💬 0

7650.857 - 7655.722 Lex Fridman

Yeah, but censorship is a strong word that people are very sensitive to.

0
💬 0

7655.742 - 7661.124 George Hotz

I know, but I just use words to describe what they functionally are and what is an ad blocker. It's just censorship.

0
💬 0

7661.404 - 7684.894 Lex Fridman

But I love what you're censoring. I'm looking at you. I'm censoring everything else out when my mind is focused on you. You can use the word censorship that way, but usually when people get very sensitive about the censorship thing. I think when anyone is allowed to say anything, you should probably have tools that maximize the quality of the experience for individuals.

0
💬 0

7686.495 - 7702.062 Lex Fridman

For me, what I really value, boy, it would be amazing to somehow figure out how to do that. I love disagreement and debate, and people who disagree with each other disagree with me, especially in the space of ideas, but the high-quality ones. So not derision, right?

0
💬 0

7702.443 - 7705.004 George Hotz

Maslow's hierarchy of argument. I think that's a real word for it.

0
💬 0

7705.782 - 7713.264 Lex Fridman

Probably. There's just a way of talking that's like snarky and so on that somehow gets people on Twitter and they get excited and so on.

0
💬 0

7713.284 - 7717.065 George Hotz

You have like ad hominem refuting the central point. I like seeing this as an actual pyramid.

0
💬 0

7717.246 - 7722.607 Lex Fridman

Yeah. And it's like all of it, all the wrong stuff is attractive to people.

0
💬 0

7722.627 - 7731.45 George Hotz

I mean, we can just train a classifier to absolutely say what level of Maslow's hierarchy of argument are you at? And if it's ad hominem, like, okay, cool. I turned on the no ad hominem filter.

0
💬 0
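
A rough sketch of that "no ad hominem filter" idea: classify each reply into a level of the hierarchy of argument and let the user set a minimum level they want to see. The phrase lists here are invented stand-ins for the trained classifier being described; a real version would need an actual model.

```python
# Toy "hierarchy of argument" filter. Levels are ordered from worst to best;
# the phrase lists are made-up placeholders for a learned classifier.
LEVELS = ["ad_hominem", "tone", "contradiction", "counterargument", "refutes_central_point"]

RULES = {
    "ad_hominem": ("you're an idiot", "typical of people like you"),
    "tone": ("calm down", "why so angry"),
    "contradiction": ("no it isn't", "that's just wrong"),
    "counterargument": ("on the other hand", "the evidence suggests"),
    "refutes_central_point": ("your core claim fails because",),
}

def classify(reply):
    text = reply.lower()
    for level in LEVELS:  # report the worst matching level
        if any(phrase in text for phrase in RULES[level]):
            return level
    return "counterargument"  # default when nothing matches

def filter_replies(replies, minimum="contradiction"):
    floor = LEVELS.index(minimum)
    return [r for r in replies if LEVELS.index(classify(r)) >= floor]

replies = ["You're an idiot, period.", "The evidence suggests the opposite, here's a source."]
print(filter_replies(replies))  # the ad hominem reply is dropped
```

Crucially, this is the user's own knob, which keeps it on the "individualized, transparent" side of the moderation-versus-censorship line discussed above.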

7733.243 - 7736.465 Lex Fridman

I wonder if there's a social network that will allow you to have that kind of filter.

0
💬 0

7736.925 - 7755.997 George Hotz

Yeah, so here's the problem with that: it's not going to win in a free market. What wins in a free market? All television today is reality television, because it's engaging. Engaging is what wins in a free market, right? So it becomes hard to keep these other, more nuanced values.

0
💬 0

7758.84 - 7776.424 Lex Fridman

Well, okay, so that's the experience of being on Twitter, but then you got a chance to also, together with other engineers and with Elon, sort of look, brainstorm when you step into a code base that's been around for a long time. You know, there's other social networks, you know, Facebook, this is old code bases.

0
💬 0

7777.064 - 7787.607 Lex Fridman

And you step in and see, okay, how do we make, with a fresh mind, progress on this code base? Like, what did you learn about software engineering, about programming from just experiencing that?

0
💬 0

7787.992 - 7813.032 George Hotz

So my technical recommendation to Elon, and I said this on the Twitter spaces afterward, I said this many times during my brief internship, was that you need refactors before features. This code base was... And look, I've worked at Google, I've worked at Facebook. Facebook has the best code, then Google, then Twitter. And you know what?

0
💬 0

7813.593 - 7819.005 George Hotz

You can know this because look at the machine learning frameworks, right? Facebook released PyTorch, Google released TensorFlow, and Twitter released...

0
💬 0

7821.677 - 7821.937 George Hotz

Okay.

0
💬 0

7821.977 - 7843.426 Lex Fridman

So, you know, it's a proxy, but yeah, the Google code base is quite interesting. There's a lot of really good software engineers there, but the code base is very large. The code base was good in 2005, right? It looks like 2005. There's so many products, so many teams, right? It's very difficult to, I feel like Twitter does less, like obviously much less than Google.

0
💬 0

7844.932 - 7856.738 Lex Fridman

In terms of the set of features, right? So I can imagine the number of software engineers that could recreate Twitter is much smaller than to recreate Google. Yeah.

0
💬 0

7856.938 - 7863.201 George Hotz

I still believe, despite the amount of hate I got for saying this, that 50 people could build and maintain Twitter.

0
💬 0

7864.382 - 7868.844 Lex Fridman

What's the nature of the hate? Comfortably. You don't know what you're talking about?

0
💬 0

7868.984 - 7889.086 George Hotz

You know what it is? And it's the same. This is my summary of the hate I get on Hacker News. It's like... When I say I'm going to do something, they have to believe that it's impossible. Because if doing things was possible, they'd have to do some soul searching and ask the question, why didn't they do anything?

0
💬 0

7889.286 - 7905.375 Lex Fridman

And I do think that's where the hate comes from. When you say... Well, there's a core truth to that. So when you say, I'm going to solve self-driving, people go like, what are your credentials? What the hell are you talking about? This is an extremely difficult problem. Of course, you're a noob that doesn't understand the problem deeply.

0
💬 0

7905.396 - 7918.005 Lex Fridman

I mean, that was the same nature of hate that probably Elon got when he first talked about autonomous driving. But there's pros and cons to that because there is experts in this world.

0
💬 0

7919.065 - 7927.708 George Hotz

No, but the mockers aren't experts. The people who are mocking are not experts with carefully reasoned arguments about why you need 8,000 people to run a bird app.

0
💬 0

7928.089 - 7939.331 Lex Fridman

They're, but the people are going to lose their jobs. Well, that, but also there's the software engineers that probably criticize, no, it's a lot more complicated than you realize, but maybe it doesn't need to be so complicated.

0
💬 0

7939.571 - 7954.704 George Hotz

You know, some people in the world like to create complexity. Some people in the world thrive under complexity, like lawyers, right? Lawyers want the world to be more complex because you need more lawyers, you need more legal hours, right? I think that's another. If there's two great evils in the world, it's centralization and complexity.

0
💬 0

7955.065 - 7984.237 Lex Fridman

Yeah, and one of the sort of hidden side effects of software engineering is, like, finding pleasure in complexity. I mean, I don't remember, just taking all the software engineering courses and just doing programming and just coming up in this object-oriented programming kind of idea, you don't... Like, not often do people tell you, like, do the simplest possible thing.

0
💬 0

7985.318 - 8002.592 Lex Fridman

Like, a professor, a teacher, is not going to get up in front and say, like, this is the simplest way to do it. They'll say, like, this is the... Like, there's the right way. And the right way, at least for a long time, you know, especially since I came up in, like, Java, right? Like, there's so much boilerplate, so much, like...

0
💬 0

8003.673 - 8020.357 Lex Fridman

So many classes, so many like designs and architectures and so on, like planning for features far into the future and planning poorly and all this kind of stuff. And then there's this like code base that follows you along and puts pressure on you and nobody knows what like...

0
💬 0

8020.857 - 8044.942 Lex Fridman

parts, different parts, do, which slows everything down. It's a kind of bureaucracy that's instilled in the code as a result of that. But then you feel like, oh well, I follow good software engineering practices. It's an interesting trade-off, because then you look at, like, the ghettoness of, like, Perl, and the old way, like, how quickly you just write a couple lines, you just get stuff done. That trade-off is interesting. Or Bash, or whatever these kind of ghetto things you can do in Linux.

0
💬 0

8045.322 - 8060.113 George Hotz

One of my favorite things to look at today is how much do you trust your tests, right? We've put a ton of effort in Comma and I've put a ton of effort in TinyGrad into making sure if you change the code and the tests pass, that you didn't break the code. Now, this obviously is not always true,

0
💬 0

8060.894 - 8069.299 George Hotz

But the closer that is to true, the more you trust your tests, the more you're like, oh, I got a pull request and the tests pass. I feel okay to merge that. The faster you can make progress.

0
💬 0
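
A tiny illustration of the kind of behavioral test that makes "tests pass, so the change is safe" closer to true: the test pins down observable behavior, so the implementation underneath can be rewritten freely. The function and cases are invented for the example, not from any real code base.

```python
# A behavior-pinning test: if someone rewrites slugify() entirely, these
# assertions still decide whether the change is safe to merge.
import re
import unittest

def slugify(title):
    """Current implementation; free to be rewritten as long as the tests pass."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

class TestSlugify(unittest.TestCase):
    def test_behavior_is_pinned(self):
        self.assertEqual(slugify("Hello, World!"), "hello-world")
        self.assertEqual(slugify("  Tiny   Corp  "), "tiny-corp")
        self.assertEqual(slugify("___"), "")

if __name__ == "__main__":
    unittest.main()
```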

8069.319 - 8074.602 Lex Fridman

So you're always programming your tests in mind, developing tests with that in mind that if it passes, it should be good.

0
💬 0

8074.662 - 8080.386 George Hotz

And Twitter had a... Not that. So... It was impossible to make progress in the code base.

0
💬 0

8081.246 - 8093.348 Lex Fridman

What other stuff can you say about the codebase that made it difficult? What are some interesting sort of quirks, broadly speaking, from that compared to just your experience with comma and everywhere else?

0
💬 0

8093.668 - 8115.217 George Hotz

The real thing... I spoke to a bunch of, you know, individual contributors at Twitter. And I just asked, I'm like, okay, so, like, what's wrong with this place? Why does this code look like this? And they explained to me what Twitter's promotion system was. The way that you got promoted at Twitter was you wrote a library that a lot of people used. Right?

0
💬 0

8116.598 - 8123.383 George Hotz

So some guy wrote an NGINX replacement for Twitter. Why does Twitter need an NGINX replacement? What was wrong with NGINX?

0
💬 0

8124.263 - 8127.386 George Hotz

Well, you see, you're not going to get promoted if you use NGINX.

0
💬 0

8127.726 - 8134.111 George Hotz

But if you write a replacement and lots of people start using it as the Twitter front end for their product, then you're going to get promoted, right?

0
💬 0

8134.131 - 8145.008 Lex Fridman

It's so interesting because from an individual perspective, how do you incentivize... How do you create the kind of incentives that will lead to a great code base? Okay, what's the answer to that?

0
💬 0

8146.389 - 8164.817 George Hotz

So what I do at Comma and at TinyCorp is you have to explain it to me. You have to explain to me what this code does. And if I can sit there and come up with a simpler way to do it, you have to rewrite it. You have to agree with me about the simpler way. I'm, you know, obviously we can have a conversation about this.

0
💬 0

8164.877 - 8172.323 George Hotz

It's not a, it's not dictatorial, but if you're like, wow, wait, that actually is way simpler. Like, like the simplicity is important.

0
💬 0

8173.224 - 8179.709 Lex Fridman

Right. But that requires people that oversee the code at the highest levels to be like, okay.

0
💬 0

8179.97 - 8181.371 George Hotz

It requires technical leadership you trust.

0
💬 0

8181.471 - 8188.377 Lex Fridman

Yeah. Technical leadership. So managers or whatever should have to have technical savvy, deep technical savvy. Yeah.

0
💬 0

8189.016 - 8191.177 George Hotz

Managers should be better programmers than the people who they manage.

0
💬 0

8191.418 - 8198.222 Lex Fridman

Yeah. And that's not always obvious, trivial to create, especially at large companies. Managers get soft.

0
💬 0

8198.562 - 8214.959 George Hotz

And like, you know, and this is just... I've instilled this culture at Comma, and Comma has better programmers than me who work there. But, you know, again, I'm like the, you know, the old guy from Good Will Hunting. It's like, look, man, you know, I might not be as good as you, but I can see the difference between me and you, right? And like, this is what you need. This is what you need at the top.

0
💬 0

8214.999 - 8221.985 George Hotz

Or you don't necessarily need the manager to be the absolute best. I shouldn't say that, but like they need to be able to recognize skill.

0
💬 0

8222.375 - 8230.178 Lex Fridman

Yeah, and have good intuition, intuition that's laden with wisdom from all the battles of trying to reduce complexity in code bases.

0
💬 0

8230.738 - 8247.244 George Hotz

You know, I took a political approach at Comma, too, that I think is pretty interesting. I think Elon takes the same political approach. You know, Google had no politics, and what ended up happening is the absolute worst kind of politics took over. Comma has an extreme amount of politics, and they're all mine, and no dissent is tolerated.

0
💬 0

8248.004 - 8249.245 Lex Fridman

So it's a dictatorship.

0
💬 0

8249.405 - 8255.242 George Hotz

Yep. It's an absolute dictatorship, right? Elon does the same thing. Now, the thing about my dictatorship is here are my values.

0
💬 0

8257.123 - 8258.383 Lex Fridman

Yeah, it's just transparent.

0
💬 0

8258.563 - 8265.786 George Hotz

It's transparent. It's a transparent dictatorship, right? And you can choose to opt in or, you know, you get free exit, right? That's the beauty of companies. If you don't like the dictatorship, you quit.

0
💬 0

8267.927 - 8280.801 Lex Fridman

So you mentioned rewrite before or refactor before features. Mm-hmm. If you were to refactor the Twitter code base, what would that look like? And maybe also comment on how difficult is it to refactor?

0
💬 0

8281.421 - 8301.586 George Hotz

The main thing I would do is first of all, identify the pieces and then put tests in between the pieces, right? So there's all these different, Twitter has a microservice architecture, there's all these different microservices. And the thing that I was working on there, look, like, you know, George didn't know any JavaScript. He asked how to fix search, blah, blah, blah, blah, blah.

0
💬 0
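
"Put tests in between the pieces" can be read as contract tests at the service boundaries: record what one piece sends and expects back, then assert that any reimplementation honors the same contract. Everything below, the service name and the message shape, is hypothetical.

```python
# Toy contract test between two hypothetical services: whatever implements
# "search" must keep honoring these recorded request/response pairs.
RECORDED_CONTRACT = [
    ({"query": "tinygrad", "limit": 2}, {"ok": True, "n_results": 2}),
    ({"query": "",         "limit": 5}, {"ok": False, "error": "empty query"}),
]

def search_service(request):
    """Stand-in implementation; a rewrite in any language sits behind the same contract."""
    if not request["query"]:
        return {"ok": False, "error": "empty query"}
    return {"ok": True, "n_results": request["limit"]}

def check_contract(handler):
    for request, expected in RECORDED_CONTRACT:
        actual = handler(request)
        assert actual == expected, f"{request} -> {actual}, expected {expected}"
    return True

print(check_contract(search_service))  # True if the boundary behavior is unchanged
```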

8301.766 - 8318.139 George Hotz

Look, man, like, the thing is, like, I just, you know, I'm upset at the way that this whole thing was portrayed, because it wasn't, like... It wasn't taken by people, like, honestly. It was taken by people who started out with a bad faith assumption. Yeah. And I mean, look, I can't, like...

0
💬 0

8318.259 - 8326.902 Lex Fridman

And you as a programmer were just being transparent out there, actually having like fun. And like, this is what programming should be about. I love that Elon gave me this opportunity.

0
💬 0

8327.082 - 8335.052 George Hotz

Yeah. Like really, it does. And like, you know, he came on my, the day I quit, he came on my Twitter spaces afterward and we had a conversation. Like, I just, I respect that so much.

0
💬 0

8335.512 - 8344.473 Lex Fridman

Yeah. And it's also inspiring to just engineers and programmers and just, it's cool. It should be fun. The people that are hating on it, it's like, oh man. It was fun.

0
💬 0

8344.793 - 8352.643 George Hotz

It was fun. It was stressful. But I felt like, you know, it was at, like, a cool, like, point in history. And, like, I hope I was useful. I probably kind of wasn't. But, like, maybe I was.

0
💬 0

8352.663 - 8356.929 Lex Fridman

Well, you also were one of the people that kind of made a strong case to refactor.

0
💬 0

8357.389 - 8357.53 George Hotz

Yeah.

0
💬 0

8358.004 - 8383.135 Lex Fridman

And that's a really interesting thing to raise. Maybe the timing of that is really interesting. If you look at the development of Autopilot, going from Mobileye onward, the history of semi-autonomous driving at Tesla is more and more, you could say, refactoring, or starting from scratch, redeveloping from scratch.

0
💬 0

8383.675 - 8384.956 George Hotz

It's refactoring all the way down.

0
💬 0

8385.598 - 8406.171 Lex Fridman

And the question is, can you do that sooner? Can you maintain product profitability? And what's the right time to do it? How do you do it? On any one day, you don't want to pull off the band-aid. Everything works. It's just a little fix here and there. But maybe start from scratch.

0
💬 0

8406.791 - 8414.036 George Hotz

This is the main philosophy of tinygrad. You have never refactored enough. Your code can get smaller. Your code can get simpler. Your ideas can be more elegant.

0
💬 0

8414.778 - 8428.241 Lex Fridman

But would you consider, you know, say you were like running Twitter development teams, engineering teams, would you go as far as like different programming language? Just go that far?

0
💬 0

8428.801 - 8448.53 George Hotz

I mean, the first thing that I would do is build tests. The first thing I would do is get CI to where people can trust it to make changes. Before I touched any code, I would actually say, no one touches any code. The first thing we do is we test this code base. I mean, this is classic. This is how you approach a legacy code base.

0
💬 0

8448.55 - 8451.771 George Hotz

This is what any how-to-approach-a-legacy-code-base book will tell you.

0
💬 0

8452.871 - 8467.757 Lex Fridman

So, and then you hope that there's modules that can live on for a while. And then you add new ones, maybe in a different language or... Before we add new ones, we replace old ones. Yeah, yeah, meaning like replace old ones with something simpler.

0
💬 0

8467.897 - 8489.228 George Hotz

We look at this thing that's 100,000 lines and we're like, well, okay, maybe this even made sense in 2010, but now we can replace this with an open source thing, right? Yeah. And we look at this here, here's another 50,000 lines. Well, actually, we can replace this with 300 lines of Go. And you know what? I trust that the Go actually replaces this thing because all the tests still pass.

0
💬 0

8489.569 - 8505.7 George Hotz

So step one is testing. And then step two is, the programming language is an afterthought, right? You know, let a whole lot of people compete. Be like, okay, who wants to rewrite a module? Whatever language you want to write it in, just the tests have to pass. And if you figure out how to make the tests pass but break the site, then we've got to go back to step one.

0
💬 0

8506.081 - 8509.243 George Hotz

Step one is get tests that you trust in order to make changes in the code base.

0
💬 0
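
As a rough illustration of "tests you trust" at a service boundary, here is a minimal characterization-test sketch in Python. It assumes a module is reachable over HTTP on a local instance; the URL, endpoint, payloads, and golden files are hypothetical, not Twitter's actual API, but the idea is that any rewrite, in any language, must keep producing the recorded outputs.

# Hypothetical characterization test at a service boundary: whatever language the
# module is rewritten in, the same request must keep producing the same response.
import json
import urllib.request

GOLDEN_CASES = [
    ({"query": "tinygrad", "limit": 2}, "golden/search_tinygrad.json"),
    ({"query": "openpilot", "limit": 2}, "golden/search_openpilot.json"),
]

def call_service(payload: dict) -> dict:
    # Hypothetical local instance of the module under test.
    req = urllib.request.Request(
        "http://localhost:8080/search",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def test_module_matches_golden_outputs():
    for payload, golden_file in GOLDEN_CASES:
        with open(golden_file) as f:
            expected = json.load(f)
        # The rewrite passes only if observable behavior is unchanged.
        assert call_service(payload) == expected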

8509.482 - 8535.74 Lex Fridman

I wonder how hard it is too, because I'm with you on testing and everything. From tests to like asserts to everything, code is just covered in this because it should be very easy to make rapid changes and know that it's not going to break everything. And that's the way to do it. But I wonder how difficult is it to integrate tests into a code base that doesn't have many of them.

0
💬 0

8535.76 - 8552.476 George Hotz

So I'll tell you what my plan was at Twitter. It's actually similar to something we use at Comma. So at Comma, we have this thing called Process Replay. And we have a bunch of routes that'll be run through. So Comma is a microservice architecture too. We have microservices in the driving. We have one for the cameras, one for the sensor, one for the planner, one for the model.

0
💬 0

8553.697 - 8580.011 George Hotz

And we have an API, which the microservices talk to each other with. We use this custom thing called cereal, which uses ZMQ. Twitter uses Thrift. And then it uses this thing called Finagle, which is a Scala RPC backend. But this doesn't even really matter. The Thrift and Finagle layer was a great place, I thought, to write tests. To start building something that looks like process replay.

0
💬 0
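
A rough sketch of the process-replay idea as described here, not comma's actual implementation: record the messages a service consumed and produced on real routes, then feed the recorded inputs to a candidate build offline and diff its outputs against the log. The log format, file name, and service entry point below are hypothetical.

# Hypothetical offline "process replay": replay logged inputs through a new build
# of a microservice and diff its outputs against what the old build produced.
import json
from typing import Callable, Iterable

def load_log(path: str) -> Iterable[dict]:
    # Each line: {"input": {...}, "output": {...}} captured from a real route.
    with open(path) as f:
        for line in f:
            yield json.loads(line)

def replay(service: Callable[[dict], dict], log_path: str) -> list:
    diffs = []
    for i, record in enumerate(load_log(log_path)):
        new_output = service(record["input"])
        if new_output != record["output"]:
            diffs.append(f"message {i}: expected {record['output']}, got {new_output}")
    return diffs

# Usage sketch: new_planner is the refactored service entry point (hypothetical).
# diffs = replay(new_planner, "route_2023_06_01.jsonl")
# assert not diffs, "\n".join(diffs)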

8580.391 - 8599.704 George Hotz

So Twitter had some stuff that looked kind of like this, but it wasn't offline. It was only online. So you could ship a modified version of it, and then you could redirect some of the traffic to your modified version and diff those two, but it was all online. There was no CI in the traditional sense. I mean, there was some, but it was not full coverage.

0
💬 0

8599.924 - 8603.026 Lex Fridman

So you can't run all of Twitter offline to test something.

0
💬 0

8603.046 - 8606.268 George Hotz

Well, then this was another problem. You can't run all of Twitter, right?

0
💬 0

8606.328 - 8609.05 Lex Fridman

Period. Any one person can't run it.

0
💬 0

8609.07 - 8623.56 George Hotz

Twitter runs in three data centers, and that's it. Yeah. There's no other place you can run Twitter, which is like, George, you don't understand. This is modern software development. No, this is bullshit. Like, why can't it run on my laptop? Twitter can run it. Yeah, okay.

0
💬 0

8623.58 - 8629.604 George Hotz

Well, I'm not saying you're going to download the whole database to your laptop, but I'm saying all the middleware and the front end should run on my laptop, right?

0
💬 0

8630.399 - 8643.071 Lex Fridman

That sounds really compelling. Yeah. But can that be achieved by a code base that grows over the years? I mean, the three data centers didn't have to be, right? Because they're totally different designs.

0
💬 0

8643.312 - 8652.721 George Hotz

The problem is more like, why did the code base have to grow? What new functionality has been added to compensate for the lines of code that are there?

0
💬 0

8653.638 - 8661.525 Lex Fridman

One of the ways to explain it is that the incentive for software developers to move up in the company is to add code. To add code, especially large amounts of it.

0
💬 0

8661.545 - 8666.39 George Hotz

And you know what? The incentive for politicians to move up in the political structure is to add laws. Yeah. Same problem.

0
💬 0

8667.05 - 8672.615 Lex Fridman

Yeah. Yeah. The flip side is to simplify, simplify, simplify.

0
💬 0

8672.635 - 8693.969 George Hotz

I mean, you know what? This is something that I do differently from Elon with Comma, about self-driving cars. I hear the new version is going to come out and the new version is not going to be better at first, and it's going to require a ton of refactors. I say, okay, take as long as you need. You convinced me this architecture is better. Okay, we have to move to it.

0
💬 0

8694.729 - 8699.711 George Hotz

Even if it's not going to make the product better tomorrow, the top priority is getting the architecture right.

0
💬 0

8700.152 - 8716.971 Lex Fridman

So what do you think about sort of a thing where the product is online? So I guess, would you do a refactor? If you ran engineering on Twitter, would you just do a refactor? How long would it take? What would that mean for the running of the actual service?

0
💬 0

8717.371 - 8722.574 George Hotz

You know, and I'm not the right person to run Twitter.

0
💬 0

8723.885 - 8744.52 George Hotz

I'm just not. And that's the problem. Like, I don't really know. I don't really know if that's... You know, a common thing that I thought a lot while I was there was whenever I thought something that was different to what Elon thought, I'd have to run something in the back of my head reminding myself that Elon is the richest man in the world. And in general, his ideas are better than mine.

0
💬 0

8745.681 - 8761.992 George Hotz

Now, there's a few things I think I do understand and know more about, but... But, like, in general, I'm not qualified to run Twitter. I was going to say qualified, but, like, I don't think I'd be that good at it. I don't think I'd be good at it. I don't think I'd really be good at running an engineering organization at scale.

0
💬 0

8763.513 - 8784.778 George Hotz

I think I could lead a very good refactor of Twitter, and it would take like six months to a year, and the result to show at the end of it would be that feature development in general takes 10x less time, 10x less man-hours. That's what I think I could actually do. Do I think that it's the right decision for the business? Above my pay grade.

0
💬 0

8788.6 - 8792.061 Lex Fridman

Yeah, but a lot of these kinds of decisions are above everybody's pay grade.

0
💬 0

8792.301 - 8804.106 George Hotz

I don't want to be a manager. I don't want to do that. If you really forced me to, yeah, it would make me upset if I had to make those decisions. I don't want to.

0
💬 0

8805.475 - 8818.158 Lex Fridman

Yeah, but a refactor is so compelling. If this is to become something much bigger than what Twitter was, it feels like a refactor has to be coming at some point.

0
💬 0

8818.378 - 8823.559 George Hotz

George, you're a junior software engineer. Every junior software engineer wants to come in and refactor the whole code.

0
💬 0

8824.539 - 8826.999 George Hotz

Okay, that's like your opinion, man.

0
💬 0

8827.879 - 8830.16 Lex Fridman

Yeah, it doesn't, you know, sometimes they're right.

0
💬 0

8831.617 - 8858.154 George Hotz

Like, whether they're right or not, it's definitely not for that reason, right? It's definitely not a question of engineering prowess. It is a question of maybe what the priorities are for the company. And I did get more intelligent feedback from people, I think in good faith, saying that, actually from Elon. People were like, well, you know, a stop-the-world refactor might be great for engineering, but we have a business to run. And hey, above my pay grade.

0
💬 0

8858.715 - 8865.881 Lex Fridman

What do you think about Elon as an engineering leader? Having to experience him in the most chaotic of spaces, I would say.

0
💬 0

8871.406 - 8877.872 George Hotz

My respect for him is unchanged. And I did have to think a lot more deeply about some of the decisions he's forced to make.

0
💬 0

8879.733 - 8883.156 Lex Fridman

About the tensions within those, the trade-offs within those decisions?

0
💬 0

8884.818 - 8891.943 George Hotz

About like a whole like... like matrix coming at him. I think that's Andrew Tate's word for it. Sorry to borrow it.

0
💬 0

8892.383 - 8894.804 Lex Fridman

Also bigger than engineering, just everything.

0
💬 0

8895.184 - 8915.614 George Hotz

Yeah. Like the war on the woke. It just... man. And he doesn't have to do this, you know. He could go like Parag and chill at the Four Seasons in Maui. But see, one person I respect and one person I don't.

0
💬 0

8916.894 - 8922.377 Lex Fridman

So his heart is in the right place fighting, in this case, for this ideal of the freedom of expression.

0
💬 0

8923.418 - 8931.623 George Hotz

I wouldn't define the ideal so simply. I think you can define the ideal no more than just saying, Elon's idea of a good world.

0
💬 0

8932.664 - 8938.167 Lex Fridman

Freedom of expression is... But to you, it's still... The downside of that is the monarchy.

0
💬 0

8939.611 - 8955.055 George Hotz

Yeah. I mean, monarchy has problems, right? But I mean, would I trade right now the current oligarchy, which runs America, for the monarchy? Yeah, I would. Sure. For the Elon monarchy? Yeah. You know why? Because power would cost one cent a kilowatt hour.

0
💬 0

8956.235 - 8959.396 Lex Fridman

Tenth of a cent a kilowatt hour. What do you mean?

0
💬 0

8960.156 - 8965.958 George Hotz

Right now, I pay about 20 cents a kilowatt hour for electricity in San Diego. That's like the same price you paid in 1980. What the hell?

0
💬 0

8968.601 - 8970.762 Lex Fridman

So you would see a lot of innovation with Elon.

0
💬 0

8971.222 - 8972.643 George Hotz

Maybe we'd have some Hyperloops.

0
💬 0

8973.583 - 8973.803 Lex Fridman

Yeah.

0
💬 0

8973.863 - 8993.151 George Hotz

Right. And I'm willing to make that trade off. Right. I'm willing to be. And this is why, you know, people think that like dictators take power through some, like through some untoward mechanism. Sometimes they do, but usually it's because the people want them. And the downsides of a dictatorship, I feel like we've gotten to a point now with the oligarchy where, yeah, I would prefer the dictator.

0
💬 0

8996.566 - 8998.587 Lex Fridman

What did you think about Scala as a programming language?

0
💬 0

9001.488 - 9006.851 George Hotz

I liked it more than I thought. I did the tutorials. I was very new to it. It would take me six months to be able to write good Scala.

0
💬 0

9007.851 - 9010.072 Lex Fridman

What did you learn about learning a new programming language from that?

0
💬 0

9010.953 - 9014.534 George Hotz

I love doing new programming language tutorials. I did all this for Rust.

0
💬 0

9017.838 - 9037.534 George Hotz

It keeps some of its upsetting JVM roots, but it is much nicer. In fact, I almost don't know why Kotlin took off and not Scala. I think Scala has some beauty that Kotlin lacked. Whereas Kotlin felt a lot more... I don't know if it actually was a response to Swift, but that's kind of what it felt like.

0
💬 0

9038.054 - 9043.819 George Hotz

Like Kotlin looks more like Swift and Scala looks more like, well, like a functional programming language, more like an OCaml or Haskell.

0
💬 0

9044.522 - 9054.37 Lex Fridman

Let's actually just explore, we touched it a little bit, but just on the art, the science and the art of programming. For you personally, how much of your programming is done with GPT currently?

0
💬 0

9054.931 - 9055.091 George Hotz

None.

0
💬 0

9055.812 - 9056.012 Lex Fridman

None.

0
💬 0

9056.132 - 9056.652 George Hotz

Not easy at all.

0
💬 0

9057.813 - 9059.715 Lex Fridman

Because you prioritize simplicity so much.

0
💬 0

9060.934 - 9086.772 George Hotz

Yeah, I find that a lot of it is noise. I do use VS Code, and I do like some amount of autocomplete, a very rules-based autocomplete, an autocomplete that's going to complete the variable name for me so I can just press tab. All right, that's nice. But I don't want an autocomplete... you know what I hate? When autocomplete, when I type the word "for," puts in two parentheses and two semicolons and two braces. I'm like...

0
💬 0

9088.986 - 9116.375 Lex Fridman

I mean, with VS Code and GPT with Codex, you can kind of brainstorm. I find... I'm like probably the same as you, but I like that it generates code and you basically disagree with it and write something simpler. But to me, that somehow is like inspiring. It makes me feel good. It also gamifies the simplification process because I'm like, oh, yeah, you dumb AI system.

0
💬 0

9116.435 - 9118.896 Lex Fridman

You think this is the way to do it. I have a simpler thing here.

0
💬 0

9119.176 - 9131.227 George Hotz

It just constantly reminds me of, like, bad stuff. I mean, I tried the same thing with rap, right? I tried the same thing with rap, and I actually think I'm a much better programmer than rapper. But, like, I even tried, I was like, okay, can we get some inspiration from these things for some rap lyrics?

0
💬 0

9132.028 - 9139.694 George Hotz

And I just found that it would go back to the most, like, cringey tropes and dumb rhyme schemes. And I'm like, yeah, this is what the code looks like, too.

0
💬 0

9140.735 - 9172.856 Lex Fridman

I think you and I probably have different thresholds for cringe code. You probably hate cringe code. So it's for you. I mean, boilerplate is a part of code. Like some of it, yeah, and some of it is just like faster lookup. Because I don't know about you, but I don't remember everything. I'm offloading so much of my memory about different functions, library functions, all that kind of stuff.

0
💬 0

9174.797 - 9182.041 Lex Fridman

GPT is very fast at standard stuff, at standard library stuff, basic stuff that everybody uses.

0
💬 0

9183.822 - 9188.785 George Hotz

Yeah, I think that... I don't know.

0
💬 0

9188.805 - 9200.132 George Hotz

I mean, there's just so little of this in Python. Maybe if I was coding more in other languages, I would consider it more, but I feel like Python already does such a good job of removing any boilerplate.

0
💬 0

9200.872 - 9201.172 George Hotz

That's true.

0
💬 0

9201.572 - 9203.533 George Hotz

It's the closest thing you can get to pseudocode, right?

0
💬 0

9203.833 - 9205.814 George Hotz

Yeah, that's true. That's true.

0
💬 0

9206.194 - 9218.398 George Hotz

And like, yeah, sure. Yeah, great, GPT, thanks for reminding me to free my variables. Unfortunately, you didn't really recognize the scope correctly and you can't free that one, but you put the frees there and, like, I get it.

0
💬 0

9219.859 - 9238.114 Lex Fridman

Fiverr. Whenever I've used Fiverr for certain things like design or whatever, it's always... you come back. I think my experience with Fiverr is probably closer to your experience with programming with GPT: you're just frustrated and feel worse about the whole process of design and art and whatever I used Fiverr for. Yeah.

0
💬 0

9241.391 - 9264.713 Lex Fridman

Still, I just feel like later versions of GPT, I'm using GPT as much as possible to just learn the dynamics of it. Like these early versions, because it feels like in the future you'll be using it more and more. And so like, I don't want to be like, for the same reason I gave away all my books and switched to Kindle. Cause like, all right.

0
💬 0

9265.553 - 9288.122 Lex Fridman

How long are we going to have paper books, like 30 years from now? I want to learn to be reading on Kindle even though I don't enjoy it as much, and you learn to enjoy it more. In the same way, I switched from, let me just pause, I switched from Emacs to VS Code. Yeah, I switched from Vim to VS Code. I think I'm similar. But yeah, it's tough. And that Vim to VS Code is even tougher, because Emacs is like...

0
💬 0

9289.517 - 9295.702 Lex Fridman

Old, like more outdated, feels like it. The community is more outdated. Vim is like pretty vibrant still.

0
💬 0

9296.823 - 9299.125 George Hotz

I never used any of the plugins. I still don't use any of the plugins.

0
💬 0

9299.145 - 9302.828 Lex Fridman

That's when I looked at myself in the mirror. I'm like, yeah, you wrote some stuff in Lisp. Yeah.

0
💬 0

9302.848 - 9324.716 George Hotz

No, but I never used any of the plugins in Vim either. I had the most vanilla Vim. I had a syntax highlighter. I didn't even have autocomplete. These things, I feel like, help you so marginally. And now, okay, now VS Code's autocomplete has gotten good enough that, okay, I don't have to set it up. I can just go into any code base and autocomplete's right 90% of the time.

0
💬 0

9324.996 - 9349.493 George Hotz

Okay, cool. I'll take it. Right? So I don't think I'm going to have a problem at all adapting to the tools once they're good. But like the real thing that I want is not something that like tab completes my code and gives me ideas. The real thing that I want is a very intelligent pair programmer that comes up with a little pop-up saying, hey, you wrote a bug on line 14 and here's what it is. Yeah.

0
💬 0

9349.673 - 9370.208 George Hotz

Now, I like that. You know what does a good job of this? MyPy. I love MyPy. MyPy, this fancy type checker for Python. And actually, I tried the one Microsoft released, too, and it was like 60% false positives. MyPy is like 5% false positives. 95% of the time, it recognizes, I didn't really think about that typing interaction correctly. Thank you, MyPy.

0
💬 0
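
For readers who haven't used it: mypy checks Python's optional type hints statically, so mismatches like the ones below are flagged before the code ever runs. A toy example, not from tinygrad, with the checker's messages paraphrased in the comments.

# toy_example.py -- mypy flags the bad calls without executing anything
from typing import List, Optional

def scale(values: List[float], factor: float) -> List[float]:
    return [v * factor for v in values]

scale([1.0, 2.0], "3")        # mypy: argument 2 has an incompatible type (str, expected float)

def maybe_name(user_id: int) -> Optional[str]:
    return None if user_id < 0 else f"user{user_id}"

print(maybe_name(7).upper())  # mypy: the Optional result might be None, which has no .upper()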

9370.508 - 9376.71 Lex Fridman

So you like type hinting. You like pushing the language towards being a typed language.

0
💬 0

9376.95 - 9386.679 George Hotz

Oh, yeah, absolutely. I think optional typing is great. I mean, look, I think it's like a meet in the middle, right? Python has this optional type hinting and C++ has auto.

0
💬 0

9387.5 - 9389.481 Lex Fridman

C++ allows you to take a step back.

0
💬 0

9389.601 - 9403.793 George Hotz

Well, C++ would have you brutally type out std::string::iterator, right? Now I can just type auto, which is nice. And then Python used to just have a. What type is a? It's an a. Now it's a: str.

0
💬 0

9406.297 - 9429.046 George Hotz

Yeah, I wish there was a simple way in Python to turn on a mode which would enforce the types. Yeah, like give a warning when there's no type, something like this? Well, no. MyPy is a static type checker, but I'm asking just for a runtime type checker. There's ways to hack this in, but I wish it was just a flag, like python3 -t. Oh, I see. Yeah, I see. Enforce the types at runtime. Yeah.

0
💬 0
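
There is no such python3 -t flag; as a sketch of the "hack it in" route he mentions, a decorator can enforce annotations at call time with plain isinstance checks. This is a minimal, assumed example using only the standard library, and it only handles simple, non-generic annotations.

# Hypothetical runtime enforcement of type hints via a decorator.
import functools
import inspect

def enforce_types(fn):
    sig = inspect.signature(fn)
    hints = {name: p.annotation for name, p in sig.parameters.items()
             if p.annotation is not inspect.Parameter.empty}

    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            expected = hints.get(name)
            # Only check plain classes; skip generics like list[int].
            if isinstance(expected, type) and not isinstance(value, expected):
                raise TypeError(f"{fn.__name__}: {name} should be {expected.__name__}, "
                                f"got {type(value).__name__}")
        return fn(*args, **kwargs)
    return wrapper

@enforce_types
def resize(width: int, height: int) -> tuple:
    return (width, height)

resize(640, 480)      # fine
# resize(640, "480")  # would raise TypeError at runtime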

9429.506 - 9435.531 Lex Fridman

I feel like that makes you a better programmer. That's a kind of test, right? That the type remains the same.

0
💬 0

9435.711 - 9451.925 George Hotz

Well, no, that I didn't mess any types up. But again, MyPy is getting really good, and I love it. And I can't wait for some of these tools to become AI-powered. I want AIs reading my code and giving me feedback. I don't want AIs writing half-assed autocomplete stuff for me.

0
💬 0

9452.543 - 9471.629 Lex Fridman

I wonder if you can now take GPT and give it code that you wrote for a function and say, how can I make this simpler and have it accomplish the same thing? I think you'll get some good ideas on some code. Maybe not tinygrad-type code, because that requires so much design thinking, but other kinds of code.

0
💬 0

9472.031 - 9496.945 George Hotz

I don't know. I downloaded the plugin maybe like two months ago. I tried it again and found the same. Look, I don't doubt that these models are going to first become useful to me, then be as good as me, and then surpass me. But from what I've seen today, it's like someone, you know, occasionally taking over my keyboard that I hired from Fiverr.

0
💬 0

9496.965 - 9497.545 Lex Fridman

Yeah.

0
💬 0

9498.985 - 9509.913 Lex Fridman

I'd rather not. Ideas about how to debug the code are basically a better debugger. It's really interesting. But it's not a better debugger. Yes, I would love a better debugger. Yeah, it's not yet. Yeah, but it feels like it's not too far.

0
💬 0

9510.293 - 9519.14 George Hotz

Yeah, one of my coworkers says he uses them for print statements. Like, the only thing he really has it write is, okay, I just want the thing that prints the state out right now.

0
💬 0

9520.248 - 9530.097 Lex Fridman

Oh, that definitely is much faster, is print statements, yeah. Yeah. I see myself using that a lot, just because it figures out the rest of the functions. It's just like, okay, print everything.

0
💬 0

9530.257 - 9537.784 George Hotz

Yeah, print everything, right? And then, yeah, if you want a pretty printer, maybe. And like, yeah, you know what? I think in two years, I'm going to start using these plugins.

0
💬 0

9538.205 - 9538.445 Lex Fridman

Yeah.

0
💬 0

9538.565 - 9544.43 George Hotz

A little bit. And then in five years, I'm going to be heavily relying on some AI augmented flow. And then in 10 years...

0
💬 0

9545.438 - 9557.008 Lex Fridman

Do you think it will ever get to 100%? What's the role of the human that it converges to as a programmer? Do you think it's all generated?

0
💬 0

9557.849 - 9577.01 George Hotz

Our niche becomes, I think it's over for humans in general. It's not just programming, it's everything. Our niche becomes smaller and smaller and smaller. In fact, I'll tell you what the last niche of humanity is going to be. There's a great book, and if I recommended Metamorphosis of Prime Intellect last time, there is a sequel called A Casino Odyssey in Cyberspace.

0
💬 0

9578.331 - 9585.216 George Hotz

And I don't want to give away the ending of this, but it tells you what the last remaining human currency is. And I agree with that.

0
💬 0

9587.324 - 9595.249 Lex Fridman

We'll leave that as a cliffhanger. So no more programmers left, huh? That's where we're going.

0
💬 0

9595.269 - 9606.095 George Hotz

Well, unless you want handmade code. Maybe they'll sell it on Etsy. This is handwritten code. It doesn't have that machine polish to it. It has those slight imperfections that would only be written by a person.

0
💬 0

9607.697 - 9615.683 Lex Fridman

I wonder how far away we are from that. I mean, there's some aspect to, you know, on Instagram, your title is listed as prompt engineer. Right.

0
💬 0

9615.703 - 9618.005 George Hotz

Thank you for noticing.

0
💬 0

9619.866 - 9634.137 Lex Fridman

I don't know if it's ironic or sarcastic or non. What do you think of prompt engineering as a scientific and engineering discipline and maybe art form?

0
💬 0

9634.498 - 9639.512 George Hotz

You know what? I started Comma six years ago and I started the tiny corp a month ago.

0
💬 0

9642.333 - 9643.214 George Hotz

So much has changed.

0
💬 0

9643.834 - 9662.083 George Hotz

I'm now like, I started going through processes similar to starting Comma for starting this company. I'm like, okay, I'm going to get an office in San Diego, I'm going to bring people here. I don't think so. I think I'm actually going to do remote, right? George, you're going to do remote? You hate remote. Yeah, but I'm not going to do job interviews.

0
💬 0

9662.223 - 9676.406 George Hotz

The only way you're going to get a job is if you contribute to the GitHub, right? And then like interacting through GitHub, like GitHub being the real like project management software for your company. And the thing pretty much just is a GitHub repo, right?

0
💬 0

9678.024 - 9694.151 George Hotz

is like showing me kind of what the future of... okay, so a lot of times I'll go on Discord, and I'll throw out some random, like, hey, can you change, instead of having log and exp as llops, change it to log2 and exp2? It's a pretty small change. You could just use the change of base formula.

0
💬 0
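
The change he's describing leans on the change-of-base identities: ln(x) = log2(x) · ln(2) and e^x = 2^(x · log2(e)), so a backend only needs the base-2 primitives. A quick numerical check, unrelated to tinygrad's actual code:

import math

x = 3.7
# Natural log recovered from log2: ln(x) = log2(x) * ln(2)
assert math.isclose(math.log(x), math.log2(x) * math.log(2))
# Natural exp recovered from a base-2 exponential: e**x = 2**(x * log2(e))
assert math.isclose(math.exp(x), 2 ** (x * math.log2(math.e)))
print("change-of-base identities hold")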

9696.091 - 9716.198 George Hotz

That's the kind of task that I can see an AI being able to do in a few years. Like in a few years, I could see myself describing that. And then within 30 seconds, a pull request is up that does it. And it passes my CI and I merge it, right? So I really started thinking about like, well, what is the future of like jobs? How many AIs can I employ at my company?

0
💬 0

9716.399 - 9723.526 George Hotz

As soon as we get the first tiny box up, I'm going to stand up a 65B LLaMA in the Discord. And it's like, yeah, here's the tiny box. He's just chilling with us.

0
💬 0

9724.844 - 9733.897 Lex Fridman

Basically, like you said with niches, most human jobs will eventually be replaced with prompt engineering.

0
💬 0

9734.298 - 9755.383 George Hotz

Well, prompt engineering kind of is this like as you like move up the stack, right? Like, okay, there used to be humans actually doing arithmetic by hand. There used to be like big farms of people doing pluses and stuff, right? And then you have like spreadsheets, right? And then, okay, the spreadsheet can do the plus for me. And then you have like macros, right?

0
💬 0

9755.824 - 9767.292 George Hotz

And then you have like things that basically just are spreadsheets under the hood, right? Like accounting software. As we move further up the abstraction, what's at the top of the abstraction stack? Well, prompt engineer.

0
💬 0

9768.433 - 9768.773 Lex Fridman

Yeah.

0
💬 0

9769.694 - 9778.719 George Hotz

Right? What is the last thing if you think about like humans wanting to keep control? Well, what am I really in the company but a prompt engineer, right?

0
💬 0

9779.44 - 9783.522 Lex Fridman

Isn't there a certain point where the AI will be better at writing prompts?

0
💬 0

9784.383 - 9800.964 George Hotz

Yeah, but you see the problem with the AI writing prompts, a definition that I always liked of AI was AI is the do what I mean machine. AI is not the... Like, the computer is so pedantic. It does what you say. So... But you want the do-what-I-mean machine.

0
💬 0

9801.364 - 9801.565 Lex Fridman

Yeah.

0
💬 0

9801.685 - 9811.853 George Hotz

Right? You want the machine where you say, you know, get my grandmother out of the burning house. It, like, reasonably takes your grandmother and puts her on the ground, not lifts her a thousand feet above the burning house and lets her fall. Right?

0
💬 0

9811.873 - 9813.194 George Hotz

There's an old Yudkowsky example.

0
💬 0

9815.136 - 9829.127 Lex Fridman

But... It's not going to find the meaning. I mean, to do what I mean, it has to figure stuff out. And the thing you'll maybe ask it to do is run government for me.

0
💬 0

9829.574 - 9847.189 George Hotz

Oh, and do what I mean very much comes down to how aligned is that AI with you? Of course, when you talk to an AI that's made by a big company in the cloud, the AI fundamentally is aligned to them, not to you. And that's why you have to buy a tiny box, so you make sure the AI stays aligned to you.

0
💬 0

9847.609 - 9858.745 George Hotz

Every time that they start to pass AI regulation or GPU regulation, I'm gonna see sales of tiny boxes spike. It's gonna be like guns, right? Every time they talk about gun regulation, boom. Gun sales.

0
💬 0

9858.925 - 9864.468 Lex Fridman

So in the space of AI, you're an anarchist. Anarchism espouser, believer.

0
💬 0

9864.608 - 9884.019 George Hotz

I'm an informational anarchist, yes. I'm an informational anarchist and a physical statist. I do not think anarchy in the physical world is very good, because I exist in the physical world. But I think we can construct this virtual world where anarchy can't hurt you, right? I love that Tyler, the Creator tweet. Yo, cyberbullying isn't real, man.

0
💬 0

9884.039 - 9887.361 George Hotz

Have you tried turning off the screen? Close your eyes. Like...

0
💬 0

9891.084 - 9914.341 Lex Fridman

But how do you prevent the AI from basically replacing all human prompt engineers? It's like a self, like where nobody's the prompt engineer anymore. So autonomy, greater and greater autonomy until it's full autonomy. And that's just where it's headed. Because one person's going to say, run everything for me.

0
💬 0

9915.102 - 9915.522 George Hotz

You see...

0
💬 0

9917.901 - 9944.301 George Hotz

I look at potential futures, and as long as the AIs go on to create a vibrant civilization with diversity and complexity across the universe, more power to them, I'll die. If the AIs go on to actually turn the world into paperclips and then they die out themselves, well, that's horrific and we don't want that to happen. So this is what I mean about robustness. I trust robust machines.

0
💬 0

9945.062 - 9968.161 George Hotz

The current AIs are so not robust. This comes back to the idea that we've never made a machine that can self-replicate. But if the machines are truly robust and there is one prompt engineer left in the world, hope you're doing good, man. Hope you believe in God. Like, you know, go by God and go forth and conquer the universe.

0
💬 0

9968.581 - 9977.208 Lex Fridman

Well, you mentioned, because I talked to Mark about faith in God and you said you were impressed by that. What's your own belief in God and how does that affect your work?

0
💬 0

9978.449 - 9997.487 George Hotz

You know, I never really considered when I was younger, I guess my parents were atheists, so I was raised kind of atheist. I never really considered how absolutely like silly atheism is. Because like, I create worlds, right? Every like game creator, like how are you an atheist, bro? You create worlds. No one created our world, man. That's different.

0
💬 0

9997.507 - 10012.47 George Hotz

Haven't you heard about the Big Bang and stuff? Yeah, I mean, what's the origin myth in Skyrim? I'm sure there's some version of it in Skyrim, but it's not like, if you ask the creators, the Big Bang is in-universe, right? I'm sure they have some Big Bang notion in Skyrim, right?

0
💬 0

10012.89 - 10024.657 George Hotz

But that obviously is not at all how Skyrim was actually created. It was created by a bunch of programmers in a room, right? So, like, you know, it struck me one day how just silly atheism is. Like, of course we were created by God.

0
💬 0

10025.258 - 10026.219 George Hotz

It's the most obvious thing.

0
💬 0

10026.239 - 10040.232 Lex Fridman

Yeah, that's such a nice way to put it. Like, we're such powerful creators ourselves. It's silly not to conceive that there's creators even more powerful than us.

0
💬 0

10040.626 - 10053.927 George Hotz

Yeah. And then like, I also just like, I like that notion. That notion gives me a lot of, I mean, I guess you can talk about what it gives a lot of religious people. It's kind of like, it just gives me comfort. It's like, you know what? If we mess it all up and we die out. Yeah.

0
💬 0

10054.781 - 10072.008 Lex Fridman

Yeah, in the same way that a video game kind of has comfort in it. God, I'll try again. Or there's balance. Like, somebody figured out a balanced view of it. So it all makes sense in the end. Like, a video game is usually not going to have crazy, crazy stuff.

0
💬 0

10072.668 - 10078.971 George Hotz

You know, people will come up with, like, well, yeah, but, like, man, who created God?

0
💬 0

10078.991 - 10086.888 George Hotz

I'm like, that's God's problem. You know? Like, I'm not going to think this is. You're asking me if God believes in God?

0
💬 0

10087.108 - 10088.989 Lex Fridman

I'm just this NPC living in this game.

0
💬 0

10089.109 - 10093.712 George Hotz

I mean, to be fair, if God didn't believe in God, he'd be as silly as the atheists here.

0
💬 0

10093.732 - 10102.517 Lex Fridman

What do you think is the greatest computer game of all time? Do you have any time to play games anymore? Have you played Diablo 4?

0
💬 0

10103.197 - 10104.598 George Hotz

I have not played Diablo 4.

0
💬 0

10105.058 - 10107.159 Lex Fridman

I will be doing that shortly. I have to.

0
💬 0

10107.199 - 10107.479 George Hotz

All right.

0
💬 0

10107.72 - 10109.681 Lex Fridman

There's just so much history with 1, 2, and 3. You know what?

0
💬 0

10109.701 - 10141.018 George Hotz

I'm going to say World of Warcraft. And it's not that the game is such a great game. It's not. It's that I remember in 2005 when it came out, how it opened my mind to ideas. It opened my mind to this whole world we've created, right? And there's almost been nothing like it since 2005. Like, you can look at MMOs today, and I think they all have lower user bases than World of Warcraft.

0
💬 0

10141.038 - 10164.666 George Hotz

Like, EVE Online's kind of cool. But to think that, like, everyone knows, you know, people are always, like, they look at the Apple headset, like... What do people want in this VR? Everyone knows what they want. I want Ready Player One. And like that. So I'm going to say World of Warcraft. And I'm hoping that games can get out of this whole mobile gaming dopamine pump thing.

0
💬 0

10165.767 - 10172.372 Lex Fridman

Create worlds. Create worlds, yeah. And worlds that captivate a very large fraction of the human population.

0
💬 0

10172.572 - 10175.134 George Hotz

Yeah, and I think it'll come back. I believe.

0
💬 0

10175.454 - 10200.984 Lex Fridman

But MMOs, like, really, really pull you in, do a good job. I mean, okay, two other games that I think are very noteworthy for me are Skyrim and GTA 5. Skyrim? Yeah, that's probably number one for me. GTA? Yeah. What is it about GTA? GTA is really... I guess GTA is real life. I know there's prostitutes and guns and stuff.

0
💬 0

10201.024 - 10202.124 George Hotz

They exist in real life, too.

0
💬 0

10203.704 - 10207.525 Lex Fridman

Yes, I know. But it's how I imagine your life to be, actually.

0
💬 0

10207.965 - 10208.765 George Hotz

I wish it was that cool.

0
💬 0

10209.105 - 10231.592 Lex Fridman

Yeah. Yeah, I guess that's, you know, because there's Sims, right, which is also a game I like. But it's a gamified version of life. But it also is, I would love a combination of Sims and GTA. So more freedom, more violence, more rawness, but with also like ability to have a career and family and this kind of stuff.

0
💬 0

10231.732 - 10237.416 George Hotz

What I'm really excited about in games is like once we start getting intelligent AIs to interact with.

0
💬 0

10237.756 - 10238.396 Lex Fridman

Oh, yeah.

0
💬 0

10238.416 - 10239.977 George Hotz

Like the NPCs in games have never been.

0
💬 0

10241.758 - 10244.28 Lex Fridman

But conversationally, in every way.

0
💬 0

10245.498 - 10262.031 George Hotz

In, yeah, in every way. Like when you're actually building a world, a world imbued with intelligence. Oh yeah. Right. And it's just hard. Running World of Warcraft, you're limited by what you're running on a Pentium 4. How much intelligence can you run? How many flops did you have? Right.

0
💬 0

10262.291 - 10273.439 George Hotz

But now when I'm running a game on a hundred-petaflop machine, that's five people. I'm trying to make this a thing. 20 petaflops of compute is one person of compute. I'm trying to make that a unit.

0
💬 0
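
Taking the proposed unit at face value (20 petaflops per "person of compute" is his suggested figure, not an established one), the conversion is simple arithmetic:

PERSON_FLOPS = 20e15  # proposed unit: 20 PFLOPS ~= one "person of compute"

def persons_of_compute(flops: float) -> float:
    return flops / PERSON_FLOPS

print(persons_of_compute(100e15))  # a 100-petaflop machine ~= 5.0 "people"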

10273.459 - 10277.84 Lex Fridman

20 petaflops is one person. One person. One person flop.

0
💬 0

10277.86 - 10284.362 George Hotz

It's like a horsepower. What's a horsepower? It's how powerful a horse is. What's a person of compute?

0
💬 0

10284.862 - 10291.243 Lex Fridman

I got it. That's interesting. VR also adds, in terms of creating worlds.

0
💬 0

10291.724 - 10312.629 George Hotz

You know what? I bought a Quest 2. I put it on and I can't believe the first thing they show me is a bunch of scrolling clouds and a Facebook login screen. You had the ability to bring me into a world. And what did you give me? A pop-up, right? And this is why you're not cool, Mark Zuckerberg. But you could be cool.

0
💬 0

10312.889 - 10317.951 George Hotz

Just make sure on the Quest 3, you don't put me into clouds and a Facebook login screen. Bring me to a world.

0
💬 0

10318.211 - 10322.693 Lex Fridman

I just tried Quest 3. It was awesome. But hear that, guys? I agree with that. So-

0
💬 0

10325.332 - 10351.51 Lex Fridman

It was just... so, you know what, because, I mean, the beginning. What is it, Todd Howard said this about the design of the beginning of the games he creates: the beginning is so, so, so important. I recently played Zelda for the first time, Zelda: Breath of the Wild, the previous one. And very quickly, within like 10 seconds, you come out of a cave-type place, and it's like this world opens up. It's like...

0
💬 0

10353.015 - 10358.88 Lex Fridman

And it like, it pulls you in. You forget whatever troubles I was having, whatever like.

0
💬 0

10359.241 - 10361.663 George Hotz

I got to play that from the beginning. I played it for like an hour at a friend's house.

0
💬 0

10362.233 - 10375.063 Lex Fridman

Ah, no. The beginning, they got it. They did it really well. The expansiveness of that space. The peacefulness of that place. They got the music. I mean, so much of that is creating that world and pulling you right in.

0
💬 0

10375.123 - 10377.905 George Hotz

I'm going to go buy a Switch. I'm going to go today and buy a Switch.

0
💬 0

10377.925 - 10399.634 Lex Fridman

You should. Well, the new one came out. I haven't played that yet. But Diablo 4 or something. I mean, there's sentimentality also. But something about VR... It really is incredible. But the new Quest 3 is mixed reality. And I got a chance to try that. So it's augmented reality. And for video games, it's done really, really well.

0
💬 0

10399.654 - 10400.534 George Hotz

Is it pass-through or cameras?

0
💬 0

10400.814 - 10402.075 Lex Fridman

Cameras. It's cameras, okay. Yeah.

0
💬 0

10402.295 - 10404.195 George Hotz

The Apple one, is that one pass-through or cameras?

0
💬 0

10404.656 - 10412.494 Lex Fridman

I don't know. I don't know how real it is. I don't know anything. It's coming out in January. Is it January or is it some point?

0
💬 0

10412.514 - 10414.256 George Hotz

Some point. Maybe not January.

0
💬 0

10414.336 - 10420.541 George Hotz

Maybe that's my optimism. But Apple, I will buy it. I don't care if it's expensive and does nothing. I will buy it. I will support this future endeavor.

0
💬 0

10420.602 - 10428.989 Lex Fridman

You're the meme. Oh, yes. I support competition. It seemed like Quest was like the only people doing it. And this is great that they're like.

0
💬 0

10430.17 - 10441.783 George Hotz

You know what? And this is another place we'll give some more respect to Mark Zuckerberg. The two companies that have endured through technology are Apple and Microsoft. And what do they make? Computers and business services.

0
💬 0

10443.024 - 10446.126 George Hotz

All the memes, social ads, they all come and go.

0
💬 0

10448.007 - 10449.508 George Hotz

But if you want to endure, build hardware.

0
💬 0

10450.788 - 10479.325 Lex Fridman

Yeah, and it does a really interesting job. Maybe I'm new to this, but it's a $500 headset, Quest 3, and just having creatures run around the space, like our space right here... To me, okay, this is a very boomer statement, but it added windows to the place. I heard about the aquarium. Yeah. Yeah. Aquarium. But in this case it was a zombie game, whatever. It doesn't matter.

0
💬 0

10479.645 - 10500.798 Lex Fridman

But it modifies the space in a way where... it really feels like a window and you can look out. It's pretty cool. It's a zombie game, they're running at me, whatever. But what I was enjoying is the fact that there's a window, and they're stepping on objects in this space. That was a different kind of escape.

0
💬 0

10501.119 - 10506.34 Lex Fridman

Also because you can see the other humans. So it's integrated with the other humans. It's really, really interesting.

0
💬 0

10506.36 - 10514.842 George Hotz

And that's why it's more important than ever that the AIs running on those systems are aligned with you. Oh, yeah. They're going to augment your entire world. Oh, yeah.

0
💬 0

10515.702 - 10541.247 Lex Fridman

And those AIs have a... I mean, you think about all the dark stuff, like sexual stuff. Like if those AIs threaten me, that could be haunting. Mm-hmm. Like if they threaten me in a non-video-game way. Oh yeah, yeah, yeah. Like they'll know personal information about me. And then you lose track of what's real, what's not. Like what if stuff is hacked?

0
💬 0

10541.347 - 10551.809 George Hotz

There's two directions the AI girlfriend company can take, right? There's the highbrow, something like Her, maybe something you kind of talk to. And then there's the lowbrow version of it, where I want to set up a brothel in Times Square.

0
💬 0

10552.009 - 10552.149 Lex Fridman

Yeah.

0
💬 0

10553.269 - 10556.47 George Hotz

Yeah. It's not cheating if it's a robot. It's a VR experience.

0
💬 0

10556.65 - 10557.75 Lex Fridman

Is there an in-between?

0
💬 0

10558.686 - 10560.626 George Hotz

No, I don't want to do that one or that one.

0
💬 0

10561.047 - 10562.867 Lex Fridman

Have you decided yet? No, I'll figure it out.

0
💬 0

10562.887 - 10564.447 George Hotz

We'll see what the technology goes.

0
💬 0

10565.207 - 10580.171 Lex Fridman

I would love to hear your opinions for George's third company, what to do: the brothel in Times Square or the Her experience. What do you think company number four will be? You think there'll be a company number four?

0
💬 0

10580.231 - 10583.532 George Hotz

There's a lot to do in company number two. I'm just like, I'm talking about company number three now.

0
💬 0

10583.552 - 10597.635 George Hotz

None of that tech exists yet. There's a lot to do in company number two. Company number two is going to be the great struggle of the next six years. And of the next six years, how centralized is compute going to be? The less centralized compute is going to be, the better of a chance we all have.

0
💬 0

10598.137 - 10605.102 Lex Fridman

So you're like a flag bearer for open source distributed decentralization of compute.

0
💬 0

10605.222 - 10626.355 George Hotz

We have to. We have to, or they will just completely dominate us. I showed a picture on stream of a man in a chicken farm. You ever seen one of those factory farm chicken farms? Why does he dominate all the chickens? Why does he- Smarter. He's smarter, right? Some people on Twitch were like, he's bigger than the chickens. Yeah. And now here's a man in a cow farm. Right?

0
💬 0

10627.255 - 10645.303 George Hotz

So it has nothing to do with their size and everything to do with their intelligence. And if one central organization has all the intelligence, you'll be the chickens and they'll be the chicken man. But if we all have the intelligence, we're all the chickens. We're not all the man, we're all the chickens.

0
💬 0

10645.904 - 10646.864 George Hotz

And there's no chicken man.

0
💬 0

10647.344 - 10650.472 Lex Fridman

There's no chicken man. Or just chickens in Miami.

0
💬 0

10651.552 - 10652.953 George Hotz

He was having a good life, man.

0
💬 0

10653.013 - 10669.81 Lex Fridman

I'm sure he was. I'm sure he was. What have you learned from launching and running Comma AI and TinyCorp? Starting a company from an idea and scaling it. And by the way, I'm all in on tiny box. So I'm your... I guess it's pre-order only now.

0
💬 0

10670.231 - 10681.757 George Hotz

I want to make sure it's good. I want to make sure that the thing that I deliver is not going to be like a Quest 2, which you buy and use twice. I mean, it's better than a Quest 2, which you bought and used less than once, statistically.

0
💬 0

10682.578 - 10698.472 Lex Fridman

Well, if there's a beta program for a tiny box, I'm in. Sounds good. So I won't be the whiny one, I'll be the tech-savvy user of the tiny box, just to be in the early days. What have I learned? What have you learned from building these companies?

0
💬 0

10700.253 - 10712.559 George Hotz

For the longest time at Comma, I asked, why did I start a company? Why did I do this? What else was I going to do?

0
💬 0

10714.574 - 10719.036 Lex Fridman

So you like bringing ideas to life.

0
💬 0

10721.237 - 10746.55 George Hotz

With Comma, it really started as an ego battle with Elon. I wanted to beat him. I saw a worthy adversary. Here's a worthy adversary who I can beat at self-driving cars. And I think we've kept pace, and I think he's kept ahead. I think that's what's ended up happening there. But I do think Comma is... I mean, Comma's profitable. And when this drive GPT stuff starts working, that's it.

0
💬 0

10746.73 - 10754.156 George Hotz

There's no more like bugs in the loss function. Like right now we're using like a hand-coded simulator. There's no more bugs. This is going to be it. Like this is the run up to driving.

0
💬 0

10754.376 - 10759.12 Lex Fridman

I hear a lot of props for openpilot, for Comma.

0
💬 0

10759.62 - 10777.16 George Hotz

It's so, it's better than FSD and Autopilot in certain ways. It has a lot more to do with which feel you like. We lowered the price on the hardware to $1499. You know how hard it is to ship reliable consumer electronics that go on your windshield? We're doing more than most cell phone companies.

0
💬 0

10777.34 - 10780.261 Lex Fridman

How'd you pull that off, by the way, shipping a product that goes in a car?

0
💬 0

10780.501 - 10786.003 George Hotz

I know. I have an SMT line. I make all the boards in-house in San Diego.

0
💬 0

10786.844 - 10794.467 Lex Fridman

Quality control. I care immensely about it. You're basically a mom and pop shop with great testing.

0
💬 0

10795.157 - 10817.583 George Hotz

Our head of openpilot is great at, you know, okay, I want all the comma threes to be identical. Yeah. And yeah, look, it's $1,499, 30-day money-back guarantee. It will blow your mind at what it can do. Is it hard to scale? You know what? There's kind of downsides to scaling it. People are always like, why don't you advertise?

0
💬 0

10818.323 - 10825.308 George Hotz

Our mission is to solve self-driving cars while delivering shippable intermediaries. Our mission has nothing to do with selling a million boxes. It's tawdry.

0
💬 0

10827.012 - 10829.578 Lex Fridman

Do you think it's possible that Comma gets sold?

0
💬 0

10831.698 - 10853.854 George Hotz

Only if I felt someone could accelerate that mission and wanted to keep it open source. And like, not just wanted to, I don't believe what anyone says. I believe incentives. If a company wanted to buy Comma where their incentives were to keep it open source, but Comma doesn't stop at the cars. The cars are just the beginning. The device is a human head. The device has two eyes, two ears.

0
💬 0

10854.214 - 10855.675 George Hotz

It breathes air. It has a mouth.

0
💬 0

10856.456 - 10858.838 Lex Fridman

So you think this goes to embodied robotics?

0
💬 0

10859.038 - 10880.089 George Hotz

We sell comma bodies too. They're very rudimentary. But one of the problems that we're running into is that the comma three has about as much intelligence as a bee. If you want a human's worth of intelligence, you're going to need a tiny rack, not even a tiny box. You're going to need like a tiny rack, maybe even more.

0
💬 0

10881.29 - 10883.632 Lex Fridman

How do you put legs on that?

0
💬 0

10883.991 - 10903.77 George Hotz

You don't. And there's no way you can. You connect to it wirelessly. So you put your tiny box or your tiny rack in your house, and then you get your comma body, and your comma body runs the models on that. It's close, right? You don't have to go to some cloud, which is 30 milliseconds away. You go to a thing, which is 0.1 milliseconds away.

0
💬 0

10903.971 - 10908.215 Lex Fridman

So the AI girlfriend will have a central hub in the home.

0
💬 0

10908.993 - 10929.499 George Hotz

I mean, eventually, if you fast forward 20, 30 years, the mobile chips will get good enough to run these AIs. But fundamentally, it's not even a question of putting legs on a tiny box because how are you getting 1.5 kilowatts of power on that thing, right? So you need, they're very synergistic businesses. I also want to build all of Comma's training computers.

0
💬 0

10930.539 - 10943.178 George Hotz

Comma builds training computers right now. We use commodity parts. I think I can do it cheaper. So we're going to build, TinyCorp is going to not just sell TinyBox. TinyBox is the consumer version, but I'll build training data centers too.

0
💬 0

10943.478 - 10947.359 Lex Fridman

Have you talked to Andrej Karpathy or have you talked to Elon about TinyCorp?

0
💬 0

10947.399 - 10948.479 George Hotz

He went to work at OpenAI.

0
💬 0

10949.14 - 10955.441 Lex Fridman

What do you love about Andrej Karpathy? To me, he's one of the truly special humans we got.

0
💬 0

10955.761 - 10970.862 George Hotz

Oh man, his streams are just a level of quality so far beyond mine. It's just... yeah, he's good. He wants to teach you. Yeah. I want to show you that I'm smarter than you.

0
💬 0

10971.789 - 10995.086 Lex Fridman

Yeah, he has no... I mean, thank you for the raw, authentic honesty. A lot of us have that. I think Andrej is as legit as it gets, in that he just wants to teach you. And there's a curiosity that just drives him. And at the stage where he is in life, to still be one of the best tinkerers in the world...

0
💬 0

10995.286 - 10995.446 George Hotz

Yeah.

0
💬 0

10996.066 - 10999.889 Lex Fridman

It's crazy. Like, what is it, micrograd?

0
💬 0

11000.123 - 11001.985 George Hotz

Micrograd was, yeah, the inspiration for tinygrad.

0
💬 0

11004.508 - 11011.696 George Hotz

The whole, I mean, his CS231N was, this was the inspiration. This is what I just took and ran with and ended up writing this.

0
💬 0

11011.736 - 11012.156 George Hotz

So, you know.

0
💬 0

11012.637 - 11013.918 Lex Fridman

But I mean, to me that.

0
💬 0

11014.078 - 11015.5 George Hotz

Don't go work for Darth Vader, man.

0
💬 0

11016.331 - 11031.005 Lex Fridman

I mean, the flip side to me is that the fact that he's going there is a good sign for OpenAI. I like Ilya Sutskever a lot. I think those guys are really good at what they do.

0
💬 0

11031.585 - 11048.811 George Hotz

I know they are. And that's kind of what makes it even more... And you know what? It's not that OpenAI doesn't open source the weights of GPT-4. It's that they go in front of Congress. And that is what upsets me. You know, we had two effective altruist Sams go in front of Congress. One's in jail.

0
💬 0

11050.932 - 11052.493 Lex Fridman

I think you're drawing parallels on that.

0
💬 0

11053.474 - 11053.974 George Hotz

One's in jail.

0
💬 0

11054.454 - 11057.615 Lex Fridman

You give me a look. Give me a look.

0
💬 0

11057.655 - 11060.657 George Hotz

No, I think effective altruism is a terribly evil ideology.

0
💬 0

11061.377 - 11069.681 Lex Fridman

Oh yeah, that's interesting. Why do you think that is? Why do you think there's something about a thing that sounds pretty good that kind of gets us into trouble?

0
💬 0

11069.946 - 11087.194 George Hotz

Because you get Sam Bankman-Fried. Sam Bankman-Fried is the embodiment of effective altruism. Utilitarianism is an abhorrent ideology. Like, well, yeah, we're going to kill those three people to save a thousand, of course. Yeah. Right? There's no underlying... there's just, yeah.

0
💬 0

11089.075 - 11105.718 Lex Fridman

Yeah, but to me, that's a bit surprising. But it's also, in retrospect, not that surprising. But I haven't heard really clear kind of like rigorous analysis why effective altruism is flawed.

0
💬 0

11106.358 - 11111.601 George Hotz

Oh, well, I think charity is bad, right? So what is charity but investment that you don't expect to have a return on, right?

0
💬 0

11113.642 - 11125.049 Lex Fridman

Yeah, but you can also think of charity as like, as you would like to see, so allocate resources in an optimal way to make a better world.

0
💬 0

11125.822 - 11128.543 George Hotz

And probably almost always that involves starting a company.

0
💬 0

11129.023 - 11130.644 Lex Fridman

Yeah. Right? Because... More efficient.

0
💬 0

11130.704 - 11139.048 George Hotz

Yeah. If you just take the money and you spend it on malaria nets, you know, okay, great. You've made 100 malaria nets. But if you teach... Yeah.

0
💬 0

11139.608 - 11147.612 Lex Fridman

A man how to fish. Right? Yeah. No, but the problem is teaching a man how to fish might be harder. Starting a company might be harder than allocating money that you already have.

0
💬 0

11148.226 - 11173.42 George Hotz

I like the flip side of effective altruism: effective accelerationism. I think accelerationism is the only thing that's ever lifted people out of poverty. The fact that food is cheap. Not that we're giving food away because we are kind-hearted people. No, food is cheap. And that's the world you want to live in. UBI, what a scary idea. What a scary idea. All your power now? Your money is power?

0
💬 0

11173.94 - 11179.388 George Hotz

Your only source of power is granted to you by the goodwill of the government? What a scary idea.

0
💬 0

11179.408 - 11182.71 Lex Fridman

So you even think long-term?

0
💬 0

11183.33 - 11186.452 George Hotz

I'd rather die than need UBI to survive, and I mean it.

0
💬 0

11190.274 - 11193.956 Lex Fridman

What if survival is basically guaranteed? What if our life becomes so good?

0
💬 0

11194.736 - 11218.835 George Hotz

You can make survival guaranteed without UBI. What you have to do is make housing and food dirt cheap. And that's the good world. And actually, let's go into what we should really be making dirt cheap, which is energy. Energy that... oh my God. I'm pretty centrist politically. If there's one political position I cannot stand, it's deceleration.

11219.235 - 11221.196 George Hotz

It's people who believe we should use less energy.

11221.536 - 11221.696 George Hotz

Yeah.

11221.876 - 11239.267 George Hotz

Not people who believe global warming is a problem. I agree with you. Not people who believe that, you know, saving the environment is good. I agree with you. But people who think we should use less energy, that energy usage is a moral bad. No. Yeah. No, you are asking, you are diminishing humanity.

11240.248 - 11244.852 Lex Fridman

Yeah, energy is flourishing. It's created flourishing of the human species.

11245.152 - 11253.62 George Hotz

How do we make more of it? How do we make it clean? And how do we make, just, just, just, how do I pay, you know, 20 cents for a megawatt hour instead of a kilowatt hour?

11254.124 - 11264.01 Lex Fridman

Part of me wishes that Elon went into nuclear fusion versus Twitter. Part of me. Or somebody, somebody like Elon.

11264.03 - 11273.659 George Hotz

You know, we need to, I wish there were more, more Elons in the world. Yeah. I think Elon sees it as like, this is a political battle that needed to be fought.

11273.679 - 11284.145 George Hotz

And again, like, you know, I always ask the question of whenever I disagree with him, I remind myself that he's a billionaire and I'm not. So, you know, maybe he's got something figured out that I don't, or maybe he doesn't.

11284.685 - 11302.29 Lex Fridman

To have some humility. But at the same time, me as a person who happens to know him, I find myself in that same position. Sometimes even billionaires need friends who disagree and help them grow. And that's a difficult reality.

11302.91 - 11307.052 George Hotz

And it must be so hard. It must be so hard to meet people once you get to that point where.

11307.832 - 11311.094 Lex Fridman

Fame, power, money, everybody's sucking up to you.

11311.394 - 11316.776 George Hotz

See, I love not having shit. Like, I don't have shit, man. Trust me, there's nothing I can give you.

11316.836 - 11318.697 George Hotz

There's nothing worth taking from me, you know?

11319.534 - 11332.244 Lex Fridman

Yeah, it takes a really special human being when you have power, when you have fame, when you have money, to still think from first principles. To not let all the adoration coming towards you, all the admiration, all the people saying yes, yes, yes.

11332.324 - 11333.285 George Hotz

And all the hate too.

11333.625 - 11351.68 Lex Fridman

And the hate. I think that's worse. So the hate makes you want to go to the yes people because the hate exhausts you. And the kind of hate that Elon's gotten from the left is pretty intense. And so that, of course, drives him right. It loses balance.

11351.78 - 11360.567 George Hotz

And it keeps this absolutely fake PSYOP political divide alive so that the 1% can keep power.

11360.828 - 11378.82 Lex Fridman

Yeah. I wish it would be less divided because it is giving power to the ultra-powerful. The rich get richer. You have love in your life. Has love made you a better or a worse programmer? Do you keep productivity metrics?

11379.561 - 11383.484 George Hotz

No, no. No, I'm not that methodical.

11385.106 - 11393.593 George Hotz

I think there comes a point where, if it's no longer visceral, I just can't enjoy it. I still viscerally love programming.

11394.52 - 11398.063 Lex Fridman

The minute I started like- So that's one of the big loves of your life is programming.

11398.804 - 11414.118 George Hotz

I mean, just my computer in general. I mean, you know, I tell my girlfriend, my first love is my computer, of course. Like, you know, I sleep with my computer. It's there for a lot of my sexual experiences. Like, come on, so is everyone's, right? Like, you know, you gotta be real about that.

11414.218 - 11418.702 Lex Fridman

And like- Not just like the IDE for programming, just the entirety of the computational machine.

11419.082 - 11429.962 George Hotz

The fact that, yeah, I mean, it's, you know, I wish it was, and someday they'll be smarter and someday, you know, maybe I'm weird for this, but I don't discriminate, man. I'm not going to discriminate between biostack life and silicon stack life. Like,

11430.273 - 11442.526 Lex Fridman

So the moment the computer starts to say, like, I miss you, and starts to have some of the basics of human intimacy, it's over for you. The moment VS Code says, hey, George.

11442.646 - 11449.233 George Hotz

No, you see, no, no, no. But VS Code is, no, they're just doing that. Microsoft's doing that to try to get me hooked on it. I'll see through it.

11449.834 - 11451.535 George Hotz

I'll see through it. It's gold digger, man. It's gold digger.

11451.736 - 11452.837 Lex Fridman

Look at me in open source here.

11453.223 - 11455.264 George Hotz

Well, this just gets more interesting, right?

11455.344 - 11459.366 Lex Fridman

If it's open source and yeah, it becomes... Though Microsoft's done a pretty good job on that.

11459.406 - 11475.395 George Hotz

Oh, absolutely. No, no, no. Look, I think Microsoft, again, I wouldn't count on it to be true forever, but I think right now Microsoft is doing the best work in the programming world. Like between GitHub, GitHub Actions, VS Code, the improvements to Python, where's Microsoft? Like...

11476.756 - 11482.322 Lex Fridman

Who would have thought Microsoft and Mark Zuckerberg are spearheading the open source movement?

11482.982 - 11484.104 George Hotz

Right? Right?

11486.066 - 11486.927 George Hotz

How things change.

11487.567 - 11488.368 Lex Fridman

Oh, it's beautiful.

11489.389 - 11491.352 George Hotz

By the way, that's who I bet on to replace Google, by the way.

11491.872 - 11492.032 Lex Fridman

Who?

11492.413 - 11492.833 George Hotz

Microsoft.

11493.634 - 11494.275 Lex Fridman

Microsoft.

11494.295 - 11496.197 George Hotz

Satya Nadella said straight up, I'm coming for it.

11497.27 - 11502.552 Lex Fridman

Interesting. So your bet, who wins AGI? Oh, I don't know about AGI.

11502.572 - 11508.395 George Hotz

I think we're a long way away from that. But I would not be surprised if in the next five years, Bing overtakes Google as a search engine.

11509.915 - 11510.416 Lex Fridman

Interesting.

11510.996 - 11511.476 George Hotz

Wouldn't surprise me.

11512.917 - 11513.577 George Hotz

Interesting.

11515.678 - 11517.158 Lex Fridman

I hope some startup does.

11518.999 - 11521.78 George Hotz

It might be some startup too. I would equally bet on some startup.

11522.801 - 11546.725 Lex Fridman

Yeah, I'm like 50-50. Yeah. But maybe that's naive. Yeah. I believe in the power of these language models. Satya's alive. Microsoft's alive. Yeah. It's great. It's great. I like all the innovation in these companies. They're not being stale. And to the degree they're being stale, they're losing. So there's a huge incentive to do a lot of exciting work and open source work, which is incredible.

11546.885 - 11553.107 Lex Fridman

Only way to win. You're older. You're wiser. What's the meaning of life, George Hotz?

11554.281 - 11554.661 George Hotz

To win.

11555.502 - 11558.523 Lex Fridman

It's still to win. Of course. Always.

11559.263 - 11559.983 George Hotz

Of course.

11560.403 - 11561.464 Lex Fridman

What's winning look like for you?

11562.224 - 11565.606 George Hotz

I don't know. I haven't figured out what the game is yet, but when I do, I want to win.

11565.626 - 11572.788 Lex Fridman

So it's bigger than solving self-driving? It's bigger than democratizing, decentralizing compute?

11575.009 - 11576.91 George Hotz

I think the game is to stand eye to eye with God.

11576.93 - 11585.642 Lex Fridman

I wonder what that means for you. Like, at the end of your life, what that will look like.

11586.882 - 11607.627 George Hotz

I mean, this is what, like, I don't know. This is some, this is some, there's probably some ego trip of mine, you know? Like, you want to stand eye to eye with God. It's just blasphemous, man. Okay. I don't know. I don't know. I don't know if it would upset God. I think he, like, wants that. I mean, I certainly want that for my creations. I want my creations to stand eye to eye with me.

11609.047 - 11615.086 George Hotz

So why wouldn't God want me to stand eye to eye with him? That's the best I can do, golden rule.

11617.068 - 11627.556 Lex Fridman

I'm just imagining the creator of a video game having to look and stand eye to eye with one of the characters.

11628.517 - 11632.6 George Hotz

I only watched season one of Westworld, but yeah, we got to find the maze and solve it.

11633.701 - 11646.883 Lex Fridman

Yeah. I wonder what that looks like. It feels like a really special time in human history, where that's actually possible. There's something about AI that's like, we're playing with something weird here. Something really weird.

11646.903 - 11660.156 George Hotz

I wrote a blog post. I reread Genesis and just looked like, they give you some clues at the end of Genesis for finding the Garden of Eden. And I'm interested. I'm interested.

11660.697 - 11681.051 Lex Fridman

Well, I hope you find just that, George. You're one of my favorite people. Thank you for doing everything you're doing. And in this case, for fighting for open source and for decentralization of AI, it's a fight worth fighting, fight worth winning hashtag. I love you, brother. These conversations are always great. Hope to talk to you many more times. Good luck with TinyCorp.

11681.411 - 11682.392 George Hotz

Thank you. Great to be here.

11683.595 - 11702.517 Lex Fridman

Thanks for listening to this conversation with George Hotz. To support this podcast, please check out our sponsors in the description. And now let me leave you with some words from Albert Einstein. Everything should be made as simple as possible, but not simpler. Thank you for listening and hope to see you next time.
