Lex Fridman Podcast
#387 – George Hotz: Tiny Corp, Twitter, AI Safety, Self-Driving, GPT, AGI & God
Fri, 30 Jun 2023
George Hotz is a programmer, hacker, and the founder of comma-ai and tiny corp. Please support this podcast by checking out our sponsors:
- Numerai: https://numer.ai/lex
- Babbel: https://babbel.com/lexpod and use code Lexpod to get 55% off
- NetSuite: http://netsuite.com/lex to get free product tour
- InsideTracker: https://insidetracker.com/lex to get 20% off
- AG1: https://drinkag1.com/lex to get 1 year of Vitamin D and 5 free travel packs

Transcript: https://lexfridman.com/george-hotz-3-transcript

EPISODE LINKS:
George's Twitter: https://twitter.com/realgeorgehotz
George's Twitch: https://twitch.tv/georgehotz
George's Instagram: https://instagram.com/georgehotz
Tiny Corp's Twitter: https://twitter.com/__tinygrad__
Tiny Corp's Website: https://tinygrad.org/
Comma-ai's Twitter: https://twitter.com/comma_ai
Comma-ai's Website: https://comma.ai/
Comma-ai's YouTube (unofficial): https://youtube.com/georgehotzarchive
Mentioned: Learning a Driving Simulator (paper): https://bit.ly/42T6lAN

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
YouTube Full Episodes: https://youtube.com/lexfridman
YouTube Clips: https://youtube.com/lexclips

SUPPORT & CONNECT:
- Check out the sponsors above, it's the best way to support this podcast
- Support on Patreon: https://www.patreon.com/lexfridman
- Twitter: https://twitter.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Medium: https://medium.com/@lexfridman

OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.
(00:00) - Introduction
(08:04) - Time is an illusion
(17:44) - Memes
(20:20) - Eliezer Yudkowsky
(32:45) - Virtual reality
(39:04) - AI friends
(46:29) - tiny corp
(59:50) - NVIDIA vs AMD
(1:02:47) - tinybox
(1:14:56) - Self-driving
(1:29:35) - Programming
(1:37:31) - AI safety
(2:02:29) - Working at Twitter
(2:40:12) - Prompt engineering
(2:46:08) - Video games
(3:02:23) - Andrej Karpathy
(3:12:28) - Meaning of life
The following is a conversation with George Hotz, his third time on this podcast. He's the founder of Comma AI that seeks to solve autonomous driving and is the founder of a new company called TinyCorp that created TinyGrad, a neural network framework that is extremely simple with the goal of making it run on any device by any human easily and efficiently.
As you know, George also did a large number of fun and amazing things, from hacking the iPhone to recently joining Twitter for a bit as an intern, in quotes, making the case for refactoring the Twitter code base. In general, he's a fascinating engineer and human being, and one of my favorite people to talk to. And now a quick few second mention of each sponsor. Check them out in the description.
It's the best way to support this podcast. We've got Numerai for the world's hardest data science tournament, Babbel for learning new languages, NetSuite for business management software, InsideTracker for blood paneling, and AG1 for my daily multivitamin program. Choose wisely, my friends. Also, if you want to work on our team, we're always hiring. Go to lexfridman.com/hiring.
And now on to the full ad reads. As always, no ads in the middle. I try to make this interesting, but if you must skip them, friends, please still check out our sponsors. I enjoy their stuff. Maybe you will too. This episode is brought to you by Numerai, a hedge fund that uses artificial intelligence and machine learning to make investment decisions.
They created a tournament that challenges data scientists to build the best predictive models for financial markets. It's basically just a really, really difficult real-world dataset to test out your ideas for how to build machine learning models. I think this is a great educational platform.
I think this is a great way to explore, to learn about machine learning, to really test yourself on real-world data with consequences. No financial background is needed. The models are scored based on how well they perform on unseen data, and the top performers receive a share of the tournament's prize pool. Head over to numer.ai/lex to sign up for a tournament and hone your machine learning skills.
That's numer.ai/lex for a chance to play against me and win a share of the tournament's prize pool. This show is also brought to you by Babbel, an app and website that gets you speaking in a new language within weeks.
I have been using it to learn a few languages, Spanish, to review Russian, to practice Russian, to revisit Russian from a different perspective, because that becomes more and more relevant for some of the previous conversations I've had and some upcoming conversations I have.
It really is fascinating how much another language, knowing another language, even to a degree where you can just have little bits and pieces of a conversation, can really unlock an experience in another part of the world. When you travel in France, in Paris, just having a few words at your disposal, a few phrases,
it begins to really open you up to strange, fascinating new experiences that ultimately, at least to me, teach me that we're all the same. We have to first see our differences to realize those differences are grounded in a basic humanity. And that experience, that we're all very different and yet at the core the same, is one I think travel with the aid of language really helps unlock.
You can get 55% off your Babbel subscription at babbel.com/lexpod. That's spelled B-A-B-B-E-L. Rules and restrictions apply. This show is also brought to you by NetSuite, an all-in-one cloud business management system.
They manage all the messy stuff that is required to run a business, the financials, the human resources, the inventory, if you do that kind of thing, e-commerce, all that stuff, all the business-related details. I know how stressed I am about everything that's required to run a team, to run a business that involves much more than just ideas and designs and engineering.
It involves all the management of human beings, all the complexities of that, the financials, all of it. And so you should be using the best tools for the job. I sometimes wonder if I have it in me. Mentally and skill-wise to be a part of running a large company. I think like with a lot of things in life, it's one of those things you shouldn't wonder too much about.
You should either do or not do. But again, using the best tools for the job is required here. You can start now with no payment or interest for six months. Go to netsuite.com/lex to access their one-of-a-kind financing program. That's netsuite.com/lex.
This show is also brought to you by InsideTracker, a service I use to track biological data, data that comes from my body, to predict, to tell me what I should do with my lifestyle, with my diet, what's working and what's not working. With all the exciting breakthroughs that are happening with transformers, with large language models, even with diffusion models, it seems obvious that huge amounts of raw data, fine-tuned to the individual, would really reveal to us the signal in all the noise of biology. I feel like that's on the horizon. The kinds of leaps in development that we saw in language, and now more and more in visual data,
I feel like biological data is around the corner, unlocking what's there in this multi-hierarchical distributed system that is our biology. What is it telling us? What are the secrets it holds? What is the thing that it's missing that could be aided? Simple lifestyle changes, simple diet changes, simple changes in all kinds of things that are controllable by the individual human being.
I can't wait till that's a possibility. And InsideTracker is taking steps towards that. Get special savings for a limited time when you go to insidetracker.com/lex. This show is also brought to you by Athletic Greens, now called AG1, and its AG1 drink. I drink it twice a day. At the very least, it's an all-in-one daily drink to support better health and peak performance.
I drink it cold. It's refreshing. It's grounding. It helps me reconnect with the basics, the nutritional basics that make this whole machine that is our human body run. All the crazy mental stuff I do for work, the physical challenges, everything. The highs and lows of life itself. All of that is somehow made better knowing that at least you got your nutrition in check.
At least you're getting enough sleep. At least you're doing the basics. At least you're doing the exercise. Once you get those basics in place, I think you can do some quite difficult things in life. But anyway, beyond all that, it's just a source of happiness and a kind of feeling of home. The feeling that comes from returning to the habit time and time again.
Anyway, they'll give you a one-month supply of fish oil when you sign up at drinkag1.com/lex. This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here's George Hotz. You mentioned something in a stream about the philosophical nature of time. So let's start with the wild question. Do you think time is an illusion?
You know, I sell phone calls at Comma for $1,000. And some guy called me, and, you know, it's $1,000, you can talk to me for half an hour. And he's like, yeah, okay. So, like, time doesn't exist, and I really wanted to share this with you. I'm like, oh, what do you mean time doesn't exist, right? I think time is a useful model, whether it exists or not, right? Does quantum physics exist?
Well, it doesn't matter. It's about whether it's a useful model to describe reality. Is time maybe compressive?
Do you think there is an objective reality or is everything just useful models? Like underneath it all, is there an actual thing that we're constructing models for?
I don't know.
I was hoping you would know.
I don't think it matters.
I mean, this kind of connects to the models of constructive reality with machine learning, right?
Sure.
Like, is it just nice to have useful approximations of the world such that we can do something with it?
So there are things that are real. Kolmogorov complexity is real.
Yeah.
Yeah. The compressive thing. Math is real.
Yeah. This should be a t-shirt.
And I think hard things are actually hard. I don't think P equals NP.
Ooh, strong words.
Well, I think that's the majority.
I do think factoring is in P, but... I don't think you're the person that follows the majority in all walks of life.
For that one, I do.
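George's claim that Kolmogorov complexity is real, "the compressive thing," can be made concrete. The true Kolmogorov complexity of a string is uncomputable, but any compressor gives an upper bound on it, which is why compressed size is often used as a practical proxy. A minimal Python sketch (the helper name and example data are mine):

```python
import random
import zlib

def compressed_size(data: bytes) -> int:
    """Length of the zlib-compressed encoding of `data`.

    Kolmogorov complexity itself is uncomputable, but a compressor gives
    an upper bound on it: structured data has a short description and
    compresses well, while random data has no shorter description.
    """
    return len(zlib.compress(data, level=9))

structured = b"ab" * 500                                   # 1000 bytes of pure pattern
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(1000))  # 1000 essentially random bytes

print(compressed_size(structured))  # far below 1000
print(compressed_size(noisy))       # close to (or above) 1000
```

The structured string compresses to a few dozen bytes while the random one barely shrinks at all, mirroring the point that incompressible things really are hard.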
Yeah. In theoretical computer science, you're one of the sheep. All right. But to you, time is a useful model. Sure. Hmm. What were you talking about on the stream with time? Are you made of time?
I don't remember half the things I said on stream.
Someday someone's going to make a model of all of that and it's going to come back to haunt me.
Someday soon?
Yeah, probably.
Would that be exciting to you or sad that there's a George Hotz model?
I mean, the question is when the George Hotz model is better than George Hotz. Like I am declining and the model is growing.
What is the metric by which you measure better or worse in that? If you're competing with yourself,
Maybe you can just play a game where you have the George Hotz answer and the George Hotz model answer and ask which people prefer.
People close to you or strangers?
Either one. It will hurt more when it's people close to me, but both will be overtaken by the George Hotz model.
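The "which answer do people prefer" game described here is the same pairwise-comparison setup used to fit reward models from human feedback, commonly via a Bradley-Terry model. A minimal Python sketch (the function name and scores are hypothetical):

```python
import math

def preference_prob(score_a: float, score_b: float) -> float:
    """Bradley-Terry model: probability a rater prefers answer A over answer B,
    given scalar quality scores. Reward models for RLHF are typically fit
    from exactly this kind of pairwise human-preference data.
    """
    return 1.0 / (1.0 + math.exp(score_b - score_a))

# Hypothetical scores: if raters value the real answer at 2.0 and the
# model's answer at 1.2, the real answer wins roughly 69% of comparisons;
# equal scores mean a coin flip.
print(preference_prob(2.0, 1.2))
print(preference_prob(1.0, 1.0))
```

The moment the model's score crosses the human's, the preference probability tips past 50%, which is the "overtaken" condition in this exchange.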
It'd be quite painful, right? Loved ones, family members would rather have the model over for Thanksgiving than you.
Yeah.
Or, like, significant others would rather sext with the large language model version of you.
Especially when it's fine-tuned to their preferences.
Yeah. Well, that's what we're doing in a relationship, right? We're just fine-tuning ourselves, but we're inefficient with it because we're selfish and greedy and so on. Our language models can fine-tune more efficiently, more selflessly.
There's a Star Trek: Voyager episode where, you know, Kathryn Janeway, lost in the Delta Quadrant, makes herself a lover on the holodeck. And the lover falls asleep on her arm, and he snores a little bit, and Janeway edits the program to remove that. And then, of course, the realization is, wait, this person's terrible.
It is actually all their nuances and quirks and slight annoyances that make this relationship worthwhile. But I don't think we're going to realize that until it's too late.
Well, I think a large language model could incorporate the flaws and the quirks and all that kind of stuff.
Just the perfect amount of quirks and flaws to make you charming without crossing the line.
Yeah, yeah. And that's probably a good approximation of the percent of time the language model should be cranky or an asshole or jealous or all this kind of stuff.
And of course it can and it will, but all that difficulty at that point is artificial. There's no more real difficulty.
Okay, what's the difference between real and artificial?
Artificial difficulty is difficulty that's constructed or could be turned off with a knob. Real difficulty is like you're in the woods and you've got to survive.
So if something cannot be turned off with a knob, it's real?
Yeah, I think so. Or, I mean, you can't get out of this by smashing the knob with a hammer. I mean, maybe you kind of can, you know, into the wild when, you know, Alexander Supertramp, he wants to explore something that's never been explored before, but it's the 90s, everything's been explored. So he's like, well, I'm just not going to bring a map.
Yeah.
I mean, no, you're not exploring. You should have brought a map, dude. You died. There was a bridge a mile from where you were camping.
How does that connect to the metaphor of the knob?
By not bringing the map, you didn't become an explorer. You just smashed the thing.
Yeah.
Yeah. The art, the difficulty is still artificial.
You failed before you started.
What if we just don't have access to the knob? Well, that maybe is even scarier, right? We already exist in a world of nature, and nature has been fine-tuned over billions of years. Humans building something and then throwing the knob away in some grand romantic gesture is horrifying.
Do you think of us humans as individuals that are like born and die? Or is it, are we just all part of one living organism that is earth, that is nature?
I don't think there's a clear line there. I think it's all kind of just fuzzy. I don't know. I mean, I don't think I'm conscious. I don't think I'm anything. I think I'm just a computer program.
So it's all computation. Everything running in your head is just computation.
Everything running in the universe is computation, I think. I believe the extended Church-Turing thesis.
Yeah, but there seems to be an embodiment to your particular computation. Like there's a consistency.
Well, yeah, but I mean models have consistency too.
Yeah.
Models that have been RLHFed will continually say, you know, like, well, how do I murder ethnic minorities? Oh, well, I can't let you do that, Hal. There's a consistency to that behavior.
It's all RLHF. Like, we all RLHF each other. We provide human feedback in that way, and thereby fine-tune these little pockets of computation. But it's still unclear why that pocket of computation stays with you for years.
You have this consistent set of physics, biology, whatever you call the neurons firing, the electrical signals, the mechanical signals, all of that, that seems to stay there, and it contains information, it stores information, and that information permeates through time. It stays with you. There's like memory. It's like sticky.
Okay, to be fair, like a lot of the models we're building today are very, even RLHF is nowhere near as complex as the human loss function.
Reinforcement learning with human feedback.
You know, when I talked about whether GPT-12 will be AGI, my answer is no, of course not. I mean, cross-entropy loss is never going to get you there. You need probably RL in fancy environments in order to get something that would be considered AGI-like. So to ask the question about why, I don't know, it's just some quirk of evolution, right?
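The cross-entropy loss George refers to is the standard next-token training objective: the model is scored only on the probability it assigns to the correct next token, which amounts to maximizing compression of the training text. A minimal Python sketch (the function name and numbers are illustrative):

```python
import math

def cross_entropy(probs: list[float], target: int) -> float:
    """Categorical cross-entropy for one prediction: -log p(correct class).

    Summed over every token of the training text, this is the objective
    being discussed: reward for assigning high probability to what
    actually comes next, nothing else.
    """
    return -math.log(probs[target])

confident = cross_entropy([0.05, 0.90, 0.05], target=1)  # right token, high probability
hedging   = cross_entropy([0.30, 0.40, 0.30], target=1)  # right token, low probability
print(confident)  # small loss
print(hedging)    # much larger loss
```

Nothing in this objective rewards planning or agency directly, which is the gap the argument points at: it only rewards prediction.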
I don't think there's anything particularly special about where I ended up, where humans ended up.
So, okay. We have human level intelligence. Would you call that AGI? Whatever we have? GI?
Look, actually, I don't really even like the word AGI, but general intelligence is defined to be whatever humans have.
Okay. So why can GPT-12 not get us to AGI? Can we just like linger on that?
If your loss function is categorical cross entropy, if your loss function is just try to maximize compression, I have a SoundCloud, I rap, and I tried to get ChatGPT to help me write raps. And the raps that it wrote sounded like YouTube comment raps. You know, you can go on any rap beat online and you can see what people put in the comments. And it's the most like mid quality rap you can find.
Is mid good or bad? Mid is bad. It's like mid, it's like.
Every time I talk to you, I learn new words. Mid. Mid, yeah. I was like, is it like basic? Is that what mid means?
It's like middle of the curve. There's that intelligence curve. You have the dumb guy, the smart guy, and then the mid guy. Actually, being the mid guy is the worst. The smart guy is like, I put all my money in Bitcoin.
The mid guy is like, you can't put money in Bitcoin. It's not real money.
All of it is a genius meme. That's another interesting one. Memes. The humor, the idea, the absurdity encapsulated in a single image, and it just kind of propagates virally between all of our brains. I didn't get much sleep last night, so I sound like I'm high, but I swear I'm not. Do you think we have ideas or ideas have us?
I think that we're going to get super scary memes once the AIs actually are superhuman.
Ooh, you think AI will generate memes? Of course. You think it'll make humans laugh?
I think it's worse than that. So Infinite Jest, it's introduced in the first 50 pages, is about a tape that, once you watch it once, you only ever want to watch that tape. In fact, you want to watch the tape so much that someone says, okay, here's a hacksaw, cut off your pinky, and then I'll let you watch the tape again.
And he'll do it.
So we're actually going to build that, I think. But it's not going to be one static tape. I think the human brain is too complex to be stuck in one static tape like that. If you look at like ant brains, maybe they can be stuck on a static tape. But we're going to build that using generative models. We're going to build the TikTok that you actually can't look away from.
So TikTok is already pretty close there, but the generation is done by humans. The algorithm is just doing their recommendation. But if the algorithm is also able to do the generation... Well, it's a question about how much intelligence is behind it, right?
So the content is being generated by, let's say, one humanity worth of intelligence. And you can quantify a humanity, right? It's some number of exaflops, yottaflops, but you can quantify it. Once that generation is being done by 100 humanities, you're done.
So it's actually scale that's the problem, but also speed. Yeah. And what if it's sort of manipulating the very limited human dopamine engine for porn? Imagine it's just TikTok, but for porn.
Yeah.
It's like Brave New World.
I don't even know what it'll look like, right? Like again, you can't imagine the behaviors of something smarter than you, but a super intelligent, an agent that just dominates your intelligence so much will be able to completely manipulate you.
Is it possible that it won't really manipulate, it'll just move past us? It'll just kind of exist the way water exists or the air exists?
You see? And that's the whole AI safety thing. It's not the machine that's going to do that. It's other humans using the machine that are going to do that to you.
Yeah. Because the machine is not interested in hurting humans.
The machine is a machine. Yeah. But the human gets the machine. And there's a lot of humans out there very interested in manipulating you.
Well, let me bring up Eliezer Yudkowsky, who recently sat where you're sitting. He thinks that AI will almost surely kill everyone. Do you agree with him or not?
Yes, but maybe for a different reason.
Okay. And I'll try to get you to find hope, or we could find a no to that answer. But why yes?
Okay. Why didn't nuclear weapons kill everyone?
That's a good question.
I think there's an answer. I think it's actually very hard to deploy nuclear weapons tactically, very hard to accomplish tactical objectives with them. Great, I can nuke their country. Now I have an irradiated pile of rubble. I don't want that.
Why not?
Why don't I want an irradiated pile of rubble? Yeah. For all the reasons no one wants an irradiated pile of rubble.
Oh, because you can't use that land for resources. You can't populate the land.
Yeah, what you want, a total victory in a war is not usually the irradiation and eradication of the people there. It's the subjugation and domination of the people.
Okay, so you can't use this strategically, tactically in a war to help gain a military advantage. It's all complete destruction, all right? Yeah. But there's egos involved. It's still surprising. It's still surprising that nobody pressed the big red button.
It's somewhat surprising, but you see, it's the little red button that's going to be pressed with AI that's going to, you know, and that's why we die. It's not because the AI, if there's anything in the nature of AI, it's just the nature of humanity.
What's the algorithm behind the little red button? What possible ideas do you have for how a human species ends?
Sure. So I think the most obvious way to me is wireheading. We end up amusing ourselves to death. We end up all staring at that infinite TikTok and forgetting to eat. Maybe it's even more benign than this. Maybe we all just stop reproducing. Now, to be fair, it's probably hard to get all of humanity.
The interesting thing about humanity is the diversity in it. Organisms in general. There's a lot of weirdos out there. Two of them are sitting here.
I mean, diversity in humanity is... With due respect. I wish I was more weird. No, like I'm kind of, look, I'm drinking smart water, man. That's like a Coca-Cola product, right?
You went corporate, George Hotz.
I went corporate. No, the amount of diversity in humanity I think is decreasing. Just like all the other biodiversity on the planet. Yeah. Right?
Social media's not helping, huh?
Go eat McDonald's in China.
Yeah.
Yeah. No, it's the interconnectedness that's doing it.
Oh, that's interesting. So everybody starts relying on the connectivity of the internet. And over time, that reduces the diversity, the intellectual diversity, and then that gets everybody into a funnel. There's still going to be a guy in Texas.
There is. In a bunker. To be fair, do I think AI kills us all? I think AI kills everything we call society today. I do not think it actually kills the human species. I think that's actually incredibly hard to do.
Yeah, but society, like if we start over, that's tricky. Most of us don't know how to do most things.
Yeah, but some of us do. And they'll be okay and they'll rebuild after the great AI.
What's rebuilding look like? How much do we lose? What has human civilization done? That's interesting. Combustion engine, electricity. So power and energy. That's interesting. Like how to harness energy.
Whoa, whoa, whoa. They're going to be religiously against that.
Are they going to get back to like fire?
Sure. I mean, it'll be like, you know, some kind of Amish looking kind of thing, I think. I think they're going to have very strong taboos against technology.
Like technology, it's almost like a new religion. Technology is the devil. Yeah. And nature is God. Sure. So closer to nature. But can you really get away from AI if it destroyed 99% of the human species? Doesn't it somehow still have a hold, like a stronghold?
What's interesting about everything we build, I think we're going to build super intelligence before we build any sort of robustness in the AI. We cannot build an AI that is capable of going out into nature and surviving like a bird, right? A bird is an incredibly robust organism. We've built nothing like this. We haven't built a machine that's capable of reproducing.
Yes. But there is, you know, I work with like robots a lot now. I have a bunch of them. They're mobile. Mm-hmm. They can't reproduce, but all they need is, I guess you're saying they can't repair themselves. If you have a large number, if you have like a hundred million of them.
Let's just focus on them reproducing, right? Do they have microchips in them? Okay. Then do they include a fab?
No.
Then how are they going to reproduce?
It doesn't have to be all on board, right? They can go to a factory, to a repair shop.
Yeah, but then you're really moving away from robustness. Yes. All of life is capable of reproducing without needing to go to a repair shop. Life will continue to reproduce in the complete absence of civilization. Robots will not. So if the AI apocalypse happens...
I mean, the AIs are going to probably die out because I think we're going to get, again, super intelligence long before we get robustness.
What about if you just improve the fab to where you just have a 3D printer that can always help you?
Well, that'd be very interesting. I'm interested in building that.
Of course you are. How difficult is that problem to have a robot that basically can build itself?
Very, very hard.
I think you've mentioned this like to me or somewhere where people think it's easy conceptually.
And then they remember that you're going to have to have a fab.
Yeah. On board. Of course. So 3D printer that prints a 3D printer. Yeah. Yeah, on legs. Yeah.
Why is that hard? Well, because it's not, I mean, a 3D printer is a very simple machine, right? Okay, you're going to print chips? You're going to have an atomic printer? How are you going to dope the silicon?
Yeah. Right?
How are you going to etch the silicon?
You're going to have to have a very interesting kind of fab if you want to have a lot of computation on board. But you can do like structural type of robots that are dumb.
Yeah, but structural type of robots aren't going to have the intelligence required to survive in any complex environment.
What about like ants type of systems? We have like trillions of them.
I don't think this works. I mean, again, like ants at their very core are made up of cells that are capable of individually reproducing. They're doing quite a lot of computation that we're taking for granted. It's not even just the computation. It's that reproduction is so inherent. Okay, so like there's two stacks of life in the world. There's the biological stack and the silicon stack.
The biological stack starts with reproduction. Reproduction is at the absolute core. The first proto-RNA organisms were capable of reproducing. The silicon stack, despite as far as it's come, is nowhere near being able to reproduce.
Yeah. So the fab movement, digital fabrication, fabrication in the full range of what that means is still in the early stages.
Yeah.
You're interested in this world. Yeah.
Even if you did put a fab on the machine, right? Let's say, okay, you know, we can build fabs. We know how to do that as humanity. We can probably put all the precursors that build all the machines and the fabs also in the machine. So first off, this machine is going to be absolutely massive.
I mean, we almost have a, like, think of the size of the thing required to reproduce a machine today, right? Like, is our civilization capable of reproduction? Can we reproduce our civilization on Mars?
If we were to construct a machine that is made up of humans, like a company, it can reproduce itself. Yeah. I don't know. It feels like 115 people. I think it's so much harder than that. 120? I'm just looking for a number.
I believe that Twitter can be run by 50 people. I think that this is going to take most of, like, it's just most of society, right? Like we live in one globalized world.
No, but you're not interested in running Twitter. You're interested in seeding. Like you want to seed a civilization and then, because humans can like,
Oh, okay. You're talking about, yeah, okay. So you're talking about the humans reproducing and like basically like what's the smallest self-sustaining colony of humans?
Yeah.
Yeah, okay, fine. But they're not going to be making five nanometer chips.
Over time they will. I think you're being, like we have to expand our conception of time here. Going back to the original time scale. I mean, over across maybe a hundred generations, we're back to making chips. No? If you seed the colony correctly.
Maybe. Or maybe they'll watch our colony die out over here and be like, we're not making chips.
Don't make chips.
No, but you have to seed that colony correctly.
Whatever you do, don't make chips. Chips are what led to their downfall.
Well, that is the thing that humans do. They come up, they construct a devil, a good thing and a bad thing, and they really stick by that. And then they murder each other over that. There's always one asshole in the room who murders everybody. And he usually makes tattoos and nice branding.
Do you need that asshole? That's the question, right? Humanity works really hard today to get rid of that asshole, but I think they might be important.
Yeah, this whole freedom of speech thing. The freedom of being an asshole seems kind of important. That's right. Man, this thing, this fab, this human fab that we constructed, this human civilization, is pretty interesting. And now it's building artificial copies of itself, or artificial copies of various aspects of itself that seem interesting, like intelligence. And I wonder where that goes.
I like to think it's just like another stack for life. Like we have like the biostack life, like we're a biostack life and then the silicon stack life.
But it seems like the ceiling, or there might not be a ceiling, or at least the ceiling is much higher for the silicon stack.
Oh, no, we don't know what the ceiling is for the biostack either. The biostack just seemed to move slower. You have Moore's Law, which is not dead despite many proclamations.
In the biostack or the silicon stack? In the silicon stack.
And you don't have anything like this in the biostack. So I have a meme that I posted. I tried to make a meme. It didn't work too well. But I posted a picture of Ronald Reagan and Joe Biden. And you look, this is 1980 and this is 2020. And these two humans are basically like the same. There's been no change in humans in the last 40 years.
And then I posted a computer from 1980 and a computer from 2020. Wow.
Yeah, we're in the early stages, right? Which is why, when you said the fab, the size of the fab required to make another fab is very large right now.
Oh, yeah.
But computers were very large back then. 80 years ago. And they got pretty tiny. And people are starting to want to wear them on their face. In order to escape reality. That's the thing. In order to live inside the computer.
Yeah.
Put a screen right here. I don't have to see the rest of you assholes.
I've been ready for a long time.
You like virtual reality?
I love it.
Do you want to live there?
Yeah.
Yeah. Part of me does too. How far away are we, do you think?
Judging from what you can buy today, far. Very far.
I got to tell you that I had the experience of Meta's Codec Avatar, where it's an ultra-high-resolution scan. It looked real.
I mean, the headsets just are not quite at eye resolution yet. I haven't put on any headset where I'm like, oh, this could be the real world. Whereas when I put good headphones on, the audio is there. We can reproduce audio so well that I'm like, I'm actually in a jungle right now. If I close my eyes, I can't tell I'm not.
Yeah. But then there's also smell and all that kind of stuff. Sure. I don't know. I... The power of imagination or the power of the mechanism in the human mind that fills the gaps, that kind of reaches and wants to make the thing you see in the virtual world real to you, I believe in that power.
Or humans want to believe.
Yeah. Like, what if you're lonely? What if you're sad? What if you're really struggling in life and here's a world where you don't have to struggle anymore?
Humans want to believe so much that people think the large language models are conscious. That's how much humans want to believe.
Strong words. He's throwing left and right hooks. Why do you think large language models are not conscious?
I don't think I'm conscious.
Oh, so what is consciousness then, George Hotz?
It's like what it seems to mean to people. It's just like a word that atheists use for souls.
Sure, but that doesn't mean soul is not an interesting word.
If consciousness is a spectrum, I'm definitely way more conscious than the large language models are. I think the large language models are less conscious than a chicken.
When is the last time you've seen a chicken?
In Miami, like a couple months ago.
No, like a living chicken.
There's living chickens walking around Miami. It's crazy.
Like on the street?
Yeah.
Like a chicken?
A chicken, yeah.
All right. All right. I was trying to call you out like a good journalist, and I got shut down. Okay. But you don't think much about this kind of... subjective feeling that it feels like something to exist.
And then as an observer, you can have a sense that an entity is not only intelligent, but has a kind of subjective experience of its reality, like a self-awareness that is capable of suffering, of hurting, of being excited by the environment in a way that's not merely... Kind of an artificial response, but a deeply felt one.
Humans want to believe so much that if I took a rock and a Sharpie and drew a sad face on the rock, they'd think the rock is sad.
Yeah. And you're saying when we look in the mirror, we apply the same smiley face as with the rock. Pretty much, yeah. Isn't that weird, though, that you're not conscious?
No.
But you do believe in consciousness. Not really. It's just, it's unclear. Okay, so to you, it's like a little like a symptom of the bigger thing that's not that important.
Yeah, I mean, it's interesting that human systems seem to claim that they're conscious, and I guess that says something. So, okay, even if you don't believe in consciousness, what do people mean when they say consciousness? There are definitely meanings to it.
What's your favorite thing to eat?
Pizza.
Cheese pizza, what are the toppings?
I like cheese pizza.
Don't say pineapple.
No, I don't like pineapple.
Okay. Pepperoni pizza.
If they put any ham on it, oh, that's real bad.
What's the best pizza? What are we talking about here? Do you like cheap, crappy pizza? A Chicago deep dish cheese pizza.
Oh, that's my favorite.
There you go. You bite into a deep dish, a Chicago deep dish pizza, and it feels like you were starving. You haven't eaten for 24 hours. You just bite in, and you're hanging out with somebody that matters a lot to you, and you're there with the pizza. Sounds real nice. Yeah, all right. It feels like something. I'm George motherfucking Hotz eating a fucking Chicago deep dish pizza.
There's just the full peak living experience of being human, the top of the human condition. Sure. It feels like something to experience that. Why does it feel like something? That's consciousness, isn't it?
If that's the word you want to use to describe it, sure. I'm not going to deny that that feeling exists. I'm not going to deny that I experienced that feeling. What I kind of take issue with is that there's some... like, how does it feel to be a web server? Do 404s hurt? Not yet. How would you know what suffering looked like?
Sure, you can recognize a suffering dog because we're the same stack as the dog. All the biostack stuff kind of, especially mammals, you know, it's really easy. Game recognizes game. Yeah. Versus the silicon stack stuff, it's like, you have no idea. You have, wow, the little thing has learned to mimic, you know. But then I realized that that's all we are too.
Oh, look, the little thing has learned to mimic.
Yeah. I guess, yeah, a 404 could be suffering, but it's so far from our kind of living organism, our kind of stack. But it feels like AI can start maybe mimicking the biological stack better and better and better because it's trained. We trained it, yeah. And so maybe that's the definition of consciousness, is the biostack consciousness.
The definition of consciousness is how close something looks to human. Sure, I'll give you that one.
No, how close something is to the human experience.
Sure. It's a very anthropocentric definition, but... Well, that's all we got. Sure. No, and I don't mean to like... I think there's a lot of value in it. Look, I just started my second company. My third company will be AI Girlfriends.
I want to find out what your fourth company is after that. Because I think once you have AI girlfriends, it's, oh boy, does it get interesting. Well, maybe let's go there. I mean, the relationships with AI, that's creating human-like organisms, right?
And part of being human is being conscious, is having the capacity to suffer, having the capacity to experience this life richly in such a way that you can empathize. The AI system can empathize with you, and you can empathize with it. Or you can project your anthropomorphic sense of what the other entity is experiencing. And an AI model would need to create that experience inside your mind.
And it doesn't seem that difficult.
Yeah, but okay, so here's where it actually gets totally different, right? When you interact with another human, you can make some assumptions. When you interact with these models, you can't. You can make the assumption that that other human experiences suffering and pleasure in a pretty similar way to the way you do. The golden rule applies. With an AI model, this isn't really true.
These large language models are good at fooling people because they were trained on a whole bunch of human data and told to mimic it.
But if the AI system says, hi, my name is Samantha... It has a backstory.
Yeah.
Went to college here and there.
Yeah.
Maybe you'll integrate this in the AI system.
I made some chatbots. I gave them backstories. It was lots of fun. I was so happy when Llama came out.
Yeah. We'll talk about Llama. We'll talk about all that. But, you know, like the rock with the smiley face. Yeah. Well, it seems pretty natural for you to anthropomorphize that thing and then start dating it. And before you know it, you're married and have kids. With a rock? With a rock. And there's pictures on Instagram of you and a rock and a smiley face.
To be fair, like, you know, something that people generally look for when they're looking for someone to date is intelligence in some form. And the rock doesn't really have intelligence. Only a pretty desperate person would date a rock. I think we're all desperate deep down. Oh, not rock level desperate.
All right. Not rock level desperate, but AI level desperate. I don't know. I think all of us have a deep loneliness. It just feels like the language models are there.