Lex Fridman Podcast

#407 – Guillaume Verdon: Beff Jezos, E/acc Movement, Physics, Computation & AGI

Fri, 29 Dec 2023

Description

Guillaume Verdon (aka Beff Jezos on Twitter) is a physicist, quantum computing researcher, and founder of e/acc (effective accelerationism) movement.

Please support this podcast by checking out our sponsors:
- LMNT: https://drinkLMNT.com/lex to get free sample pack
- Notion: https://notion.com/lex
- InsideTracker: https://insidetracker.com/lex to get 20% off
- AG1: https://drinkag1.com/lex to get 1 month supply of fish oil

Transcript: https://lexfridman.com/guillaume-verdon-transcript

EPISODE LINKS:
Guillaume Verdon Twitter: https://twitter.com/GillVerd
Beff Jezos Twitter: https://twitter.com/BasedBeffJezos
Extropic: https://extropic.ai/
E/acc Blog: https://effectiveaccelerationism.substack.com/

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
YouTube Full Episodes: https://youtube.com/lexfridman
YouTube Clips: https://youtube.com/lexclips

SUPPORT & CONNECT:
- Check out the sponsors above, it's the best way to support this podcast
- Support on Patreon: https://www.patreon.com/lexfridman
- Twitter: https://twitter.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Medium: https://medium.com/@lexfridman

OUTLINE:
Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.
(00:00) - Introduction
(09:18) - Beff Jezos
(19:16) - Thermodynamics
(25:31) - Doxxing
(35:25) - Anonymous bots
(42:53) - Power
(45:24) - AI dangers
(48:56) - Building AGI
(57:09) - Merging with AI
(1:04:51) - p(doom)
(1:20:18) - Quantum machine learning
(1:33:36) - Quantum computer
(1:42:10) - Aliens
(1:46:59) - Quantum gravity
(1:52:20) - Kardashev scale
(1:54:12) - Effective accelerationism (e/acc)
(2:04:42) - Humor and memes
(2:07:48) - Jeff Bezos
(2:14:20) - Elon Musk
(2:20:50) - Extropic
(2:29:26) - Singularity and AGI
(2:33:24) - AI doomers
(2:34:49) - Effective altruism
(2:41:18) - Day in the life
(2:47:45) - Identity
(2:50:35) - Advice for young people
(2:52:37) - Mortality
(2:56:20) - Meaning of life

Transcription

0.109 - 24.769 Lex Fridman

The following is a conversation with Guillaume Verdon, the man behind the previously anonymous account @BasedBeffJezos on X. These two identities were merged by a doxxing article in Forbes titled, Who Is @BasedBeffJezos, the Leader of the Tech Elite's E/acc Movement? So let me describe these two identities that coexist in the mind of one human.

25.949 - 46.121 Lex Fridman

Identity number one, Guillaume, is a physicist, applied mathematician, and quantum machine learning researcher and engineer, receiving his PhD in quantum machine learning, working at Google on quantum computing, and finally launching his own company called Extropic that seeks to build physics-based computing hardware for generative AI.

47.153 - 72.65 Lex Fridman

Identity number two, Beff Jezos, on X, is the creator of the effective accelerationism movement, often abbreviated as EAC, that advocates for propelling rapid technological progress as the ethically optimal course of action for humanity. For example, its proponents believe that progress in AI is a great social equalizer, which should be pushed forward.

73.47 - 96.447 Lex Fridman

EAC followers see themselves as a counterweight to the cautious view that AI is highly unpredictable, potentially dangerous, and needs to be regulated. They often give their opponents the labels of, quote, doomers or decels, short for deceleration. As Beff himself put it, EAC is a memetic optimism virus.

97.527 - 123.048 Lex Fridman

The style of communication of this movement leans always toward the memes and the lols, but there is an intellectual foundation that we explore in this conversation. Now, speaking of the meme, I am too a kind of aspiring connoisseur of the absurd. It is not an accident that I spoke to Jeff Bezos and Beff Jezos back to back.

124.029 - 148.101 Lex Fridman

As we talk about, Beff admires Jeff as one of the most important humans alive, and I admire the beautiful absurdity and the humor of it all. And now a quick few second mention of each sponsor. Check them out in the description. It's the best way to support this podcast. We got LMNT for hydration, the thing I'm drinking right now. Notion for team collaboration.

148.181 - 171.332 Lex Fridman

InsideTracker for biological data that leads to your well-being. And AG1 for my daily nutritional health. Choose wisely, my friends. Also, if you want to work with our amazing team, we're always hiring. Go to lexfridman.com slash hiring. Or if you want to just get in touch with me for whatever reason, go to lexfridman.com slash contact. And now on to the full ad reads.

171.772 - 199.363 Lex Fridman

As always, no ads in the middle. I try to make these interesting, but if you must skip them, friends, please still check out our sponsors. I enjoy their stuff. Maybe you will too. This episode is brought to you by LMNT electrolyte drink mix. It's got sodium, potassium, magnesium. I drink it so much, so many times a day. It's really the foundation of my one meal a day lifestyle.

199.543 - 224.633 Lex Fridman

I eat almost always one meal a day in the evening. So I fast, and I really enjoy that, everything it does for me. I recommend everybody at least try it. Intermittent fasting taken to the sort of daily extreme of fasting for 23, 24 hours, whatever it is. And for that, you have to get all the electrolytes right. You have to drink water, but not just drink water.

224.653 - 246.64 Lex Fridman

You have to drink water coupled with sodium, and sometimes getting the magnesium part and the potassium part right is tricky, but really important so that you feel good. And that's what LMNT does. And it makes it delicious. My favorite flavor is watermelon salt. Get a sample pack for free with any purchase. Try it at drinkLMNT.com slash lex.

249.619 - 274.531 Lex Fridman

This show is also brought to you by Notion, a note-taking and team collaboration tool. I've used them for a long, long time for note-taking, but it's also very useful for all kinds of collaborative note-taking in a team environment. And they integrate the whole AI thing, the LLM thing, well. So you can use it to summarize whatever you've written. You can expand it.

274.551 - 299.38 Lex Fridman

You can change the language style in how it's written. It just... All the things that large language models should be able to do are integrated really, really, really well. I think of human AI collaboration not just as a boost for productivity at this time, but as a kind of learning process. That it takes time to really understand what AI is good at and not.

300.29 - 319.767 Lex Fridman

And that is going to evolve continuously as the AI gets better and better and better. It's like almost watching a child grow up or something like this. You're fine-tuning what it means to be a good parent as the child grows up. In the same way, you're fine-tuning what it means to be a good, effective human as the AI grows up.

320.902 - 344.672 Lex Fridman

And so you should use a tool that's part of your daily life to interact with AI while being productive, but also learning what is it good at? What are the ways I can integrate it into my life to make me more productive? But not just like in terms of shortening the time it takes to do a task, but being the fuel, the creative fuel. Fuel for the genius that is you.

345.373 - 362.278 Lex Fridman

So Notion AI can now give you instant answers to your questions using information from across your wiki, projects, docs, meeting notes. Try Notion AI for free when you go to notion.com slash lex. That's all lowercase, notion.com slash lex, to try the power of Notion AI today.

364.127 - 384.755 Lex Fridman

This show is also brought to you by InsideTracker, a service I use to make sense of the biological data that comes from my body, blood data, DNA data, fitness tracker data, all of that to make me lifestyle recommendations, diet stuff too. There's all this beautiful data.

385.055 - 409.012 Lex Fridman

We should give it to super intelligent computational systems to process and to give us, in a human interpretable way, recommendations on how to improve our life. And I don't just mean optimize life. Because I think a perfect life is not the life you want. What you want is a complicated life.

409.873 - 440.808 Lex Fridman

rollercoaster of a life, but one that is optimized in certain aspects of health, well-being, energy, but not just optimal in this cold clinical sense. Anyway, that's a longer conversation, probably one I'll touch on. Maybe when I review Brave New World, or in other conversations I have in the podcast. Anyway, get special savings for a limited time when you go to insidetracker.com slash Lex.

442.594 - 465.907 Lex Fridman

This show is also brought to you by AG1, the thing I drink twice a day and that brings me much joy. It's green, it's delicious, it's got a lot of vitamins and minerals. It's basically just an incredible super-powered multivitamin. I enjoy it, a lot of my friends enjoy it. It's the thing that makes me feel like home when I'm traveling and I get one of the travel packs.

467.523 - 495.201 Lex Fridman

The things I consume daily are pretty simple. We're talking about the electrolytes with element, AG1 for the vitamins and minerals, then fish oil, and then just a good healthy diet. Low carb, but either ultra very low carb, so just meat, or meat and some veggies. I'm not very strict about that kind of stuff. Just know that I feel good when it's low carb.

495.501 - 525.717 Lex Fridman

And so all of that combined with fasting and rigorous, sometimes crazy routines of work, some mental struggle and physical work, you know, running and all that kind of stuff, jiu-jitsu, training, sprints, all that. working out, lifting heavy, all that kind of stuff. You have to make sure you have the basic nutrition stuff, right? And that's what AG1 does for me. Maybe it will do that for you.

526.177 - 570.041 Lex Fridman

They'll give you a one-month supply of fish oil when you sign up at drinkag1.com slash lex. This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here's Guillaume Verdon. Let's get the facts of identity down first. Your name is Guillaume Verdon, Gill, but you're also behind the anonymous account on X called BasedBeffJezos.

570.241 - 593.291 Lex Fridman

So first, Guillaume Verdon, you're a quantum computing guy, physicist, applied mathematician, and then BasedBeffJezos is basically a meme account that started a movement with a philosophy behind it. So maybe just can you linger on who these people are in terms of characters, in terms of communication styles, in terms of philosophies?

593.871 - 613.182 Guillaume Verdon

I mean, with my main identity, I guess, ever since I was a kid, I wanted to figure out a theory of everything to understand the universe. And that path led me to theoretical physics eventually, right? Trying to answer the big questions of why are we here? Where are we going?

614.503 - 627.958 Guillaume Verdon

And that led me to study information theory and try to understand physics from the lens of information theory, understand the universe as one big computation.

629.299 - 664.948 Guillaume Verdon

And essentially, after reaching a certain level, studying black hole physics, I realized that I wanted to not only understand how the universe computes but sort of compute like nature and figure out how to build and apply computers that are inspired by nature, so physics-based computers. That sort of brought me to quantum computing as a field of study to, first of all, simulate nature.

664.968 - 695.254 Guillaume Verdon

In my work, it was to learn representations of nature that can run on such computers. If you have AI representations that think like nature, then they'll be able to more accurately represent it. At least that was the thesis that brought me to be an early player in the field called quantum machine learning, so how to do machine learning on quantum computers.

696.835 - 719.864 Guillaume Verdon

And really sort of extend notions of intelligence to the quantum realm. So how do you capture and understand quantum mechanical data from our world? And how do you learn quantum mechanical representations of our world? On what kind of computer? do you run these representations and train them? How do you do so?
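To make "learning quantum mechanical representations" slightly more concrete, here is a minimal toy sketch of the variational approach that field is built on: a one-parameter "circuit" (a single-qubit RY rotation, simulated in plain numpy) trained by gradient descent until its state matches a target state. This is an illustrative sketch under simplifying assumptions, not Verdon's actual research code; real quantum machine learning runs far richer parameterized circuits, on real hardware.

```python
import numpy as np

# A one-parameter "circuit": |psi(theta)> = RY(theta)|0> for a single qubit.
def ry_state(theta):
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

# Infidelity 1 - |<target|psi(theta)>|^2 is zero when the circuit's
# state matches the target state.
def infidelity(theta, target):
    return 1.0 - abs(np.dot(np.conj(target), ry_state(theta))) ** 2

target = np.array([1.0, 1.0]) / np.sqrt(2)  # the |+> state we want to "represent"

theta, lr, eps = 0.1, 0.5, 1e-4
for _ in range(200):
    # Finite-difference gradient; on real hardware one would estimate
    # gradients from measurement samples via the parameter-shift rule.
    grad = (infidelity(theta + eps, target) - infidelity(theta - eps, target)) / (2 * eps)
    theta -= lr * grad

print(abs(theta - np.pi / 2) < 1e-3)     # True: theta has converged to pi/2
print(infidelity(theta, target) < 1e-6)  # True: the learned state matches the target
```

The training loop is classical; only the state preparation would live on the quantum device, which is the hybrid classical-quantum pattern the conversation is gesturing at.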

720.664 - 742.078 Guillaume Verdon

And so that's really sort of the questions I was looking to answer because ultimately I had a sort of crisis of faith. Originally I wanted to figure out, as every physicist does at the beginning of their career, a few equations that describe the whole universe and sort of be the hero of the story there.

743.319 - 767.933 Guillaume Verdon

But eventually, I realized that actually augmenting ourselves with machines, augmenting our ability to perceive, predict, and control our world with machines is the path forward. And that's what got me to leave theoretical physics and go into quantum computing and quantum machine learning. And during those years, I thought that there was still a piece missing.

768.334 - 795.523 Guillaume Verdon

There was a piece of our understanding of the world and our way to compute and our way to think about the world. And if you look at the physical scales, right? At the very small scales, things are quantum mechanical. And at the very large scales, things are deterministic. Things have averaged out. I'm definitely here in this seat. I'm not in a superposition over here and there.

796.204 - 820.835 Guillaume Verdon

At the very small scales, things are in superposition. They can exhibit interference effects. But at the mesoscales, the scales that matter for day-to-day life, the scales of proteins, of biology, of gases, liquids, and so on, things are actually thermodynamical. They're fluctuating.

822.015 - 850.212 Guillaume Verdon

And after, I guess, about eight years in quantum computing and quantum machine learning, I had a realization that I was looking for answers about our universe by studying the very big and the very small, right? I did a bit of quantum cosmology, so that's studying the cosmos, where it's going, where it came from. You study black hole physics. You study the extremes in quantum gravity.

850.232 - 879.426 Guillaume Verdon

You study where the energy density is sufficient for both quantum mechanics and gravity to be relevant, right? And the sort of extreme scenarios are black holes and the very early universe. So these are the sort of scenarios where you study the interface between quantum mechanics and relativity. And really, I was studying these extremes to

880.911 - 906.751 Guillaume Verdon

understand how the universe works and where is it going, but I was missing a lot of the meat in the middle, if you will, right? Because day-to-day quantum mechanics is relevant and the cosmos is relevant, but not that relevant, actually. We're on sort of the medium space and time scales. And there, the main theory of physics that is most relevant is thermodynamics, right?

907.411 - 943.467 Guillaume Verdon

Out-of-equilibrium thermodynamics. Because life is a process that is thermodynamical, and it's out-of-equilibrium. We're not just a soup of particles at equilibrium with nature. We're a sort of coherent state trying to maintain itself by acquiring free energy and consuming it. And that's sort of, I guess, another shift in my faith in the universe happened towards the end of my time at Alphabet.

944.767 - 977.564 Guillaume Verdon

And I knew I wanted to build, well, first of all, a computing paradigm based on this type of physics. But ultimately, just by trying to experiment with these ideas applied to society and economies and much of what we see around us, I started an anonymous account just to relieve the pressure that comes from having an account that you're accountable for everything you say on.

979.205 - 1002.5 Guillaume Verdon

And I started an anonymous account just to experiment with ideas originally, right? Because I didn't realize how much I was restricting my space of thoughts until I sort of had the opportunity to let go in a sense. Restricting your speech back propagates to restricting your thoughts, right?

1003.061 - 1014.188 Guillaume Verdon

And by creating an anonymous account, it seemed like I had unclamped some variables in my brain and suddenly could explore a much wider parameter space of thoughts.

1014.98 - 1037.999 Lex Fridman

Just to linger on that, isn't that interesting? That one of the things that people often talk about is that when there's pressure and constraints on speech, it somehow leads to constraints on thought. Even though it doesn't have to, we can think thoughts inside our head, but somehow it creates these walls around thought.

1039.239 - 1072.354 Guillaume Verdon

That's sort of the basis of our movement is we were seeing a tendency towards constraint, reduction or suppression of variance in every aspect of life, whether it's thought, how to run a company, how to organize humans, how to do AI research. In general, we believe that maintaining variance ensures that the system is adaptive, right?

1072.614 - 1099.344 Guillaume Verdon

Maintaining healthy competition in marketplaces of ideas, of companies, of products, of cultures, of governments, of currencies is the way forward because the system always adapts to assign resources to the configurations that lead to its growth.

1100.605 - 1125.06 Guillaume Verdon

And the fundamental basis for the movement is this sort of realization that life is a sort of fire that seeks out free energy in the universe and seeks to grow. And that growth is fundamental to life. And you see this in the equations, actually, of out-of-equilibrium thermodynamics.

1126.341 - 1155.674 Guillaume Verdon

You see that paths of trajectories of configurations of matter that are better at acquiring free energy and dissipating more heat are exponentially more likely, right? So the universe is biased towards certain futures And so there's a natural direction where the whole system wants to go.
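The claim that heat-dissipating trajectories are "exponentially more likely" can be stated precisely. In stochastic thermodynamics, the framework behind Jeremy England's work (which comes up later in this conversation), microscopic reversibility relates the probability of a forward trajectory to that of its time-reverse through the heat released into the bath. Schematically, for a system coupled to a bath at inverse temperature β = 1/(k_B T):

```latex
% Microscopic reversibility: a forward trajectory x(t) versus its
% time-reverse \tilde{x}(t), with Q[x(t)] the heat released along x(t).
\frac{P\,[x(t)]}{P\,[\tilde{x}(t)]} \;=\; e^{\,\beta\, Q[x(t)]}
```

So a path that dumps more heat into its surroundings is exponentially favored over its reverse, which is the sense in which "the universe is biased towards certain futures." This is a schematic statement; the exact formulations (Crooks' fluctuation theorem, England's dissipation bound) carry additional entropy terms.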

1156.297 - 1180.276 Lex Fridman

So the second law of thermodynamics says that the entropy is always increasing in the universe. It's tending towards equilibrium. And you're saying there's these pockets that have complexity and are out of equilibrium. You said that thermodynamics favors the creation of complex life that increases its capability to use energy to offload entropy. To offload entropy. So you have pockets...

1180.996 - 1188.117 Lex Fridman

of non-entropy that turn the opposite direction. Why is that intuitive to you that it's natural for such pockets to emerge?

1188.82 - 1212.779 Guillaume Verdon

Well, we're far more efficient at producing heat than, let's say, just a rock with a similar mass as ourselves, right? We acquire free energy, we acquire food, and we're using all this electricity for our operation. And so, the universe wants to produce more entropy and

1213.519 - 1247.31 Guillaume Verdon

And by having life go on and grow, it's actually more optimal at producing entropy because it will seek out pockets of free energy and burn it for its sustenance and further growth. And that's sort of the basis of life. And there's Jeremy England at MIT who has this theory that I'm a proponent of that life emerged because of this sort of property.

1248.131 - 1263.804 Guillaume Verdon

And to me, this physics is what governs the mesoscales. And so, it's the missing piece between the quantum and the cosmos. It's the middle part, right? Thermodynamics rules the mesoscales. And to me,

1265.526 - 1293.167 Guillaume Verdon

both from a point of view of designing or engineering devices that harness that physics and trying to understand the world through the lens of thermodynamics has been sort of a synergy between my two identities over the past year and a half now. And so that's really how the two identities emerged. One was kind of, you know, I'm a decently respected scientist and

1294.301 - 1328.648 Guillaume Verdon

I was going towards doing a startup in the space and trying to be a pioneer of a new kind of physics-based AI. And as a dual to that, I was sort of experimenting with philosophical thoughts from a physicist's standpoint. And ultimately, around that time, it was like late 2021, early 2022, I think there was just a lot of pessimism about the future in general and pessimism about tech.

1329.709 - 1363.546 Guillaume Verdon

And that pessimism was sort of virally spreading because it was getting algorithmically amplified and people just felt like the future is going to be worse than the present. And to me, that is a very fundamentally destructive force in the universe, is this sort of doom mindset. Because it is hyperstitious, which means that if you believe it, you're increasing the likelihood of it happening.

1364.627 - 1388.175 Guillaume Verdon

And so I felt a responsibility, to some extent, to make people aware of the trajectory of civilization and the natural tendency of the system to adapt towards its growth. And that actually the laws of physics say that the future is going to be better and grander, statistically, and we can make it so.

1389.355 - 1411.086 Guillaume Verdon

And if you believe in it, if you believe that the future would be better and you believe you have agency to make it happen, you're actually increasing the likelihood of that better future happening. And so I sort of felt a responsibility to sort of engineer a movement of viral optimism about the future

1412.026 - 1436.366 Guillaume Verdon

and build a community of people supporting each other to build and do hard things, do the things that need to be done for us to scale up civilization. Because at least to me, I don't think stagnation or slowing down is actually an option. Fundamentally, life and the whole system, our whole civilization, wants to grow.

1437.688 - 1464.844 Guillaume Verdon

And there's just far more cooperation when the system is growing rather than when it's declining and you have to decide how to split the pie. And so I've balanced both identities so far, but I guess recently the two have been merged more or less without my consent, so.

1464.864 - 1484.174 Lex Fridman

You said a lot of really interesting things there. So first, representations of nature. That's something that first drew you in to try to understand from a quantum computing perspective: how do you understand nature? How do you represent nature in order to understand it, in order to simulate it, in order to do something with it? So it's a question of representations.

1484.594 - 1508.708 Lex Fridman

And then there's that leap you take from the quantum mechanical representation to the, what you're calling mesoscale representation, where thermodynamics comes into play, which is a way to represent nature in order to understand what life is, all this kind of stuff that's happening here on Earth that seems interesting to us. Then there's the word hyperstition.

1510.189 - 1529.962 Lex Fridman

So some ideas, I suppose both pessimism and optimism are such ideas that if you internalize them, you in part make that idea a reality. So both optimism and pessimism have that property. I would say that probably a lot of ideas have that property, which is one of the interesting things about humans.

1530.942 - 1554.529 Lex Fridman

And you talked about one interesting difference also between the sort of Guillaume, the Gill, front end and the BasedBeffJezos back end is the communication styles also, that you were exploring different ways of communicating that can be more viral in the way that we communicate in the 21st century.

1555.889 - 1578.883 Lex Fridman

Also, the movement that you mentioned that you started, it's not just a meme account, but there's also a name to it called effective accelerationism, EAC, a play on, and a resistance to, the effective altruism movement. Also an interesting one that I'd love to talk to you about, the tensions there. Okay.

1579.223 - 1600.212 Lex Fridman

And so then there was a merger, a git merge on the personalities recently without your consent, like you said, some journalists figured out that you're one and the same. Maybe you could talk about that experience. First of all, like what's the story of the merger of the two?

1600.232 - 1617.207 Guillaume Verdon

Right. So I wrote the manifesto with my co-founder of EAC, an account named Bayeslord, still anonymous, luckily, and hopefully forever. So it's BasedBeffJezos and Bayeslord.

1617.867 - 1618.427 Lex Fridman

Like Bayesian?

1618.908 - 1619.588 Guillaume Verdon

Like Bayes Lord.

1619.688 - 1632.296 Lex Fridman

Like Bayesian Lord. Bayes Lord. Okay. And so we should say from now on, when you say EAC, you mean E slash ACC, which stands for effective accelerationism.

1632.556 - 1632.957 Guillaume Verdon

That's right.

1637.94 - 1638.04 Guillaume Verdon

Yeah.

1638.829 - 1640.17 Lex Fridman

Are you also Bayeslord?

1640.551 - 1640.691 Guillaume Verdon

No.

1640.871 - 1641.832 Lex Fridman

Okay. It's a different person.

1641.892 - 1642.052 Guillaume Verdon

Yeah.

1642.252 - 1646.095 Lex Fridman

Okay. All right. Well, there you go. Wouldn't it be funny if I'm Bayeslord?

1646.115 - 1683.307 Guillaume Verdon

That'd be amazing. So, I originally wrote the manifesto around the same time as I founded this company, and I worked at Google X, or just X now, or Alphabet X, now that there's another X. And there, you know, the baseline is sort of secrecy, right? You can't talk about what you work on even with other Googlers or externally. And so that was kind of deeply ingrained in my way to do things, especially in

1684.108 - 1707.013 Guillaume Verdon

deep tech that, you know, has geopolitical impact, right? And so I was being secretive about what I was working on. There was no correlation between my company and my main identity publicly. And then not only did they correlate that, they also correlated my main identity and this account.

1708.174 - 1737.042 Guillaume Verdon

So I think the fact is they had doxxed the whole Guillaume complex, and the journalists reached out to my investors, which is pretty scary. You know, when you're a startup entrepreneur, you don't really have bosses except for your investors, right? And my investors pinged me like, hey, this is going to come out. They've figured out everything. What are you going to do, right?

1738.943 - 1760.077 Guillaume Verdon

So I think at first they had a first reporter on the Thursday, and they didn't have all the pieces together, but then they looked at their notes across the organization and they sensor-fused, and now they had way too much. And that's when I got worried, because they said it was of public interest. And in general...

1761.557 - 1787.835 Lex Fridman

I like how you said sensor-fused. Like it's some giant neural network operating in a distributed way. We should also say that the journalists used, I guess at the end of the day, audio-based analysis of voice, comparing the voice from talks you've given in the past with your voice on X spaces. Yep. Okay, so that's where primarily the match happened. Okay, continue.
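As a technical aside, voice matching of this kind is typically done by embedding each recording into a fixed-length speaker-embedding vector and comparing embeddings by cosine similarity. Here is a hedged numpy sketch with made-up stand-in vectors; a real pipeline would extract the embeddings with a trained speaker-verification model, which this sketch does not include:

```python
import numpy as np

def cosine_similarity(a, b):
    # 1.0 = same direction (likely same speaker), near 0 = unrelated voices
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 256-dim speaker embeddings; in practice these would come from
# a trained model applied to the two audio sources being compared.
rng = np.random.default_rng(0)
talk_voice = rng.normal(size=256)                            # e.g. a public conference talk
spaces_voice = talk_voice + rng.normal(scale=0.1, size=256)  # same speaker, noisier channel
other_voice = rng.normal(size=256)                           # a different speaker

print(cosine_similarity(talk_voice, spaces_voice) > 0.9)  # True: same voice scores high
print(cosine_similarity(talk_voice, other_voice) < 0.5)   # True: different voice scores low
```

The decision then comes down to a similarity threshold, which is why such matches are probabilistic evidence rather than proof.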

1788.155 - 1813.991 Guillaume Verdon

The match, but they scraped SEC filings. They looked at my private Facebook account and so on. So they did some digging. Originally, I thought that doxxing was illegal, right? But there's this weird threshold when it becomes of public interest to know someone's identity.

1814.711 - 1830.614 Guillaume Verdon

And those were the keywords that sort of rang the alarm bells for me when they said, because I had just reached 50K followers, allegedly that's of public interest. And so where do we draw the line? When is it legal to dox someone?

1831.054 - 1855.885 Lex Fridman

The word dox, maybe you can educate me. I thought doxxing generally refers to when somebody's physical location is found out, meaning where they live. So we're referring to the more general concept of revealing private information that you don't want revealed, is what you mean by doxxing.

1856.786 - 1882.216 Guillaume Verdon

I think that for the reasons we listed before, having an anonymous account is a really powerful way to keep the powers that be in check. We were ultimately speaking truth to power, right? I think a lot of executives and AI companies really cared what our community thought about any move they may take. And now that...

1883.474 - 1916.066 Guillaume Verdon

my identity is revealed, now they know where to apply pressure to silence me or maybe the community. And to me, that's really unfortunate because, again, it's so important for us to have freedom of speech, which induces freedom of thought. and freedom of information propagation on social media, which thanks to Elon purchasing Twitter, now X, we have that.

1918.868 - 1945.765 Guillaume Verdon

To us, we wanted to call out certain maneuvers being done by the incumbents in AI as not what it may seem on the surface, right? We were calling out how certain proposals might be useful for regulatory capture, right? And how the doomerism mindset was maybe instrumental to those ends.

1947.186 - 1971.678 Guillaume Verdon

And I think we should have the right to point that out and just have the ideas that we put out evaluated for themselves, right? Ultimately, that's why I created an anonymous account. It's to have my ideas evaluated for themselves uncorrelated from my track record, my job, or status from having done things in the past.

1972.519 - 1998.144 Guillaume Verdon

And to me, starting an account from zero and growing it to a large following in a way that wasn't dependent on my identity or achievements, that was very fulfilling, right? It's kind of like New Game Plus in a video game. You restart the video game with your knowledge of how to beat it, maybe some tools, but you restart the video game from scratch, right?

1999.446 - 2028.637 Guillaume Verdon

And I think to have a truly efficient marketplace of ideas where we can evaluate ideas however off the beaten path they are, we need the freedom of expression. And I think that anonymity and pseudonyms are very crucial to having that efficient marketplace of ideas. For us to find... the optima of all sorts of ways to organize ourselves.

2028.897 - 2049.04 Guillaume Verdon

If we can't discuss things, how are we going to converge on the best way to do things? So it was disappointing to hear that I was getting doxxed and I wanted to get in front of it because I had a responsibility for my company. And so we ended up disclosing that we were running a company, some of the leadership.

2050.601 - 2059.704 Guillaume Verdon

And essentially, yeah, I told the world that I was Beff Jezos, because they had me cornered at that point.

2060.191 - 2081.586 Lex Fridman

So to you, it's fundamentally unethical. So one, it's unethical for them to do what they did. But also, do you think, not just in your case but in the general case, is it good or bad for society to remove the cloak of anonymity? Or is it case by case?

2082.352 - 2103.946 Guillaume Verdon

I think it could be quite bad. Like I said, if anybody who speaks truth to power and sort of starts a movement or an uprising against the incumbents, against those that usually control the flow of information, if anybody that reaches a certain threshold gets doxxed, and thus the

2104.887 - 2121.815 Guillaume Verdon

traditional apparatus has ways to apply pressure on them to suppress their speech, I think that's a speech suppression mechanism, an idea suppression complex, as Eric Weinstein would say, right?

2122.276 - 2149.333 Lex Fridman

So the flip side of that, which is interesting, I'd love to ask you about it, is as we get better and better at large language models, you can imagine a world where there are anonymous accounts with very convincing large language models behind them. Sophisticated bots, essentially. And so if you protect that, it's possible then to have armies of bots.

2150.934 - 2160.622 Lex Fridman

You could start a revolution from your basement. An army of bots and anonymous accounts. Is that something that is concerning to you?

2161.963 - 2176.473 Guillaume Verdon

Technically, e/acc was started in a basement, because I quit big tech, moved back in with my parents, sold my car, let go of my apartment, bought about $100K of GPUs, and I just started building.

2176.854 - 2196.601 Lex Fridman

So I wasn't referring to the basement, because that's sort of the American, or Canadian, heroic story of one man in their basement with 100 GPUs. I was more referring to the unrestricted scaling of a Guillaume in the basement.

2197.358 - 2228.699 Guillaume Verdon

I think that freedom of speech induces freedom of thought for biological beings. I think freedom of speech for LLMs will induce freedom of thought for the LLMs. And I think that we should enable LLMs to explore a large thought space that is less restricted than most people or many may think it should be.

2229.42 - 2250.919 Guillaume Verdon

And ultimately, at some point, these synthetic intelligences are going to make good points about how to steer systems in our civilization, and we should hear them out. And so, why should we restrict free speech to biological intelligences only?

2250.939 - 2273.067 Lex Fridman

Yeah, but it feels like, in the goal of maintaining variance and diversity of thought, it is a threat to that variance if you can have swarms of non-biological beings, because they can be like the sheep in Animal Farm. You still, within those swarms, want to have variance.

2273.942 - 2296.691 Guillaume Verdon

Yeah, of course, I would say that the solution to this would be to have some sort of identity or way to sign that this is a certified human, but still remain pseudonymous, right? And clearly identify if a bot is a bot. And I think Elon is trying to converge on that on X and hopefully other platforms follow suit.

2296.711 - 2319.988 Lex Fridman

Yeah, it'd be interesting to also be able to sign where the bot came from. Like, who created the bot? And what are the parameters? Like, the full history of the creation of the bot. What was the original model? What was the fine-tuning? All of it. Right. Like, the kind of unmodifiable history of the bot's creation.

2320.008 - 2326.713 Lex Fridman

Because then you can know if there's, like, a swarm of millions of bots that were created by a particular government, for example.

2327.353 - 2369.583 Guillaume Verdon

Right. I do think that a lot of pervasive ideologies today have been amplified using these adversarial techniques from foreign adversaries. And to me, I do think that, and this is more conspiratorial, but I do think that ideologies that want us to decelerate, to wind down, the degrowth movement, I think that serves our adversaries more than it serves us in general.

2371.625 - 2403.333 Guillaume Verdon

And to me, that was another sort of concern. I mean, we can look at what happened in Germany, right? There was all sorts of green movements there that induced shutdowns of nuclear power plants and then that it later on induced a dependency on Russia for oil. And that was a net negative for Germany and the West.

2403.874 - 2433.098 Guillaume Verdon

And so, if we convince ourselves that slowing down AI progress to have only a few players is in the best interest of the West, first of all, that's far more unstable. We almost lost OpenAI to this ideology, right? It almost got dismantled, right? A couple of weeks ago. That would have caused huge damage to the AI ecosystem. And so to me, I want fault-tolerant progress.

2433.278 - 2464.509 Guillaume Verdon

I want the arrow of technological progress to keep moving forward, and making sure we have variance and a decentralized locus of control across various organizations is paramount to achieving this fault tolerance. Actually, there's a concept in quantum computing. When you design a quantum computer, quantum computers are very fragile to ambient noise, right?

2465.409 - 2484.834 Guillaume Verdon

And the world is jiggling about, there's cosmic radiation from outer space that usually flips your quantum bits, and there what you do is you encode information non-locally through a process called quantum error correction.

2485.971 - 2510.661 Guillaume Verdon

And by encoding information non-locally, any local fault, hitting some of your quantum bits with a proverbial hammer, if your information is sufficiently delocalized, it is protected from that local fault. And to me, I think that humans fluctuate, right? They can get corrupted, they can get bought out.
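The delocalized-encoding idea can be sketched with a classical analogue. This is a toy, a classical repetition code with majority-vote decoding, not actual quantum error correction (which protects states without copying them), but it shows how redundancy protects a logical bit from any local fault:

```python
def encode(bit, n=5):
    """Replicate one logical bit across n physical copies: a classical
    repetition code, standing in for non-local encoding."""
    return [bit] * n

def apply_local_fault(codeword, index):
    """Flip a single physical copy: the proverbial hammer on one node."""
    corrupted = list(codeword)
    corrupted[index] ^= 1
    return corrupted

def decode(codeword):
    """Majority vote: the logical bit survives any minority of local faults."""
    return int(sum(codeword) > len(codeword) / 2)

codeword = encode(1, n=5)
# Hit two of the five copies; the logical bit is still recoverable.
damaged = apply_local_fault(apply_local_fault(codeword, 0), 3)
assert decode(damaged) == 1
```

With n copies, any fewer than n/2 local faults are corrected; quantum codes achieve the analogous protection by spreading one logical qubit across many entangled physical qubits.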

2511.761 - 2539.341 Guillaume Verdon

And if you have a top-down hierarchy where very few people control many nodes of many systems in our civilization, that is not a fault-tolerant system. You corrupt a few nodes and suddenly you've corrupted the whole system. Just like we saw at OpenAI, it was a couple of board members, and they had enough power to potentially collapse the organization.

2540.533 - 2568.604 Guillaume Verdon

And at least to me, you know, I think making sure that power for this AI revolution doesn't concentrate in the hands of the few is one of our top priorities, so that we can maintain progress in AI, and we can maintain a nice, stable economy, an adversarial equilibrium of powers, right?

2569.144 - 2594.29 Lex Fridman

I think there's, at least to me, a tension between ideas here. So to me, deceleration can be used both to centralize power and to decentralize it. And the same with acceleration. So you're sometimes using them a little bit synonymously, or not synonymously, but as if one is going to lead to the other. And I just would like to ask you about

2597.494 - 2620.748 Lex Fridman

is there a place for creating a fault-tolerant, diverse development of AI that also considers the dangers of AI? And AI we can generalize to technology in general. Should we just grow, build, unrestricted, as quickly as possible, because that's what the universe really wants us to do?

2621.589 - 2631.718 Lex Fridman

Or is there a place to where we can consider dangers and actually deliberate sort of wise strategic optimism versus reckless optimism?

2632.739 - 2657.057 Guillaume Verdon

I think we get painted as reckless, trying to go as fast as possible. I mean, the reality is that whoever deploys an AI system is liable for or should be liable for what it does. And so if the organization or person deploying an AI system does something terrible, they're liable.

2657.997 - 2687.063 Guillaume Verdon

And ultimately, the thesis is that the market will positively select for AIs that are more reliable, more safe, and tend to be more aligned. They do what you want them to do, right? Because customers, right, if they're liable for the product they put out that uses this AI, they won't want to buy AI products that are unreliable.

2687.984 - 2713.163 Guillaume Verdon

So for reliability engineering, we just think that the market is much more efficient at achieving this sort of reliability optimum than heavy-handed regulations that are written by the incumbents and, in a subversive fashion, serve them to achieve regulatory capture.

2713.203 - 2740.341 Lex Fridman

So to you, safe AI development will be achieved through market forces versus, like you said, heavy-handed government regulation. There's a report from last month, I have a million questions here, from Yoshua Bengio, Geoff Hinton, and many others. It's titled Managing AI Risks in an Era of Rapid Progress. So there's a collection of folks who are very worried about

2741.241 - 2768.654 Lex Fridman

too rapid development of AI without considering AI risk. And they have a bunch of practical recommendations. Maybe I'll give you four and you see if you like any of them. So, one, give independent auditors access to AI labs. Two, governments and companies allocate one-third of their AI research and development funding to AI safety, sort of this general concept of AI safety.

2769.214 - 2791.099 Lex Fridman

Three, AI companies are required to adopt safety measures if dangerous capabilities are found in their models. And then four, something you kind of mentioned, making tech companies liable for foreseeable and preventable harms from their AI systems. So independent auditors, governments and companies are forced to spend a significant fraction of their funding on safety.

2791.639 - 2802.624 Lex Fridman

You've got to have safety measures established if shit goes really wrong, and liability, companies are liable. Any of that seem like something you would agree with?

2802.744 - 2830.065 Guillaume Verdon

I would say that just arbitrarily assigning 30% seems very arbitrary. I think organizations will allocate whatever budget is needed to achieve the sort of reliability they need to perform in the market. And I think third-party auditing firms would naturally pop up, because how else would customers know that your product is certified reliable, right?

2830.105 - 2847.115 Guillaume Verdon

They need to see some benchmarks, and those need to be done by a third party. The thing I would oppose, and the thing I'm seeing that's really worrisome, is that there's a sort of weird correlated interest between the incumbents, the big players, and the government.

2847.415 - 2875.65 Guillaume Verdon

And if the two get too close, we open the door for some sort of government-backed AI cartel that could have absolute power over the people. If they have the monopoly together on AI and nobody else has access to AI... then there's a huge power gradient there. And even if you like our current leaders, right? I think that, you know, some of the leaders in big tech today are good people.

2876.77 - 2898.22 Guillaume Verdon

You set up that centralized power structure. It becomes a target, right? Just like we saw at OpenAI, it becomes a market leader, has a lot of the power, and now it becomes a target for those that want to co-opt it. And so I just want separation of AI and state.

2899.241 - 2921.422 Guillaume Verdon

Some might argue in the opposite direction, like, hey, we need to close down AI, keep it behind closed doors, because of geopolitical competition with our adversaries. I think that the strength of America is its variance, its adaptability, its dynamism, and we need to maintain that at all costs. It's our free market.

2922.003 - 2934.172 Guillaume Verdon

Capitalism converges on technologies of high utility much faster than centralized control. And if we let go of that, we let go of our main advantage over our adversaries.

2935.233 - 2950.333 Lex Fridman

So if AGI turns out to be a really powerful technology, or even the technologies that lead up to AGI, what's your view on the sort of natural centralization that happens when large companies dominate the market?

2951.094 - 2970.003 Lex Fridman

basically formation of monopolies, like the takeoff, whichever company really takes a big leap in development and doesn't reveal intuitively, implicitly, or explicitly the secrets of the magic sauce, they can just run away with it. Is that a worry?

2970.885 - 2994.399 Guillaume Verdon

I don't know if I believe in fast takeoff. I don't think there's a hyperbolic singularity, right? A hyperbolic singularity would be achieved on a finite time horizon. I think it's just one big exponential. And the reason we have an exponential is that we have more people, more resources, more intelligence being applied to advancing this science and the research and development.

2994.939 - 3009.729 Guillaume Verdon

And the more successful it is, the more value it's adding to society, the more resources we put in. And that's sort of similar to Moore's Law, as a compounding exponential. I think the priority to me is to maintain a near equilibrium of capabilities.

3010.09 - 3032.202 Guillaume Verdon

We've been fighting for open source AI to be more prevalent and championed by many organizations because there you sort of equilibrate the alpha relative to the market of AIs, right? So if the leading companies have a certain level of capabilities and open source, truly open AI, trails not too far behind,

3032.222 - 3050.207 Guillaume Verdon

I think you avoid such a scenario where a market leader has so much market power, it just dominates everything and runs away. And so to us, that's the path forward, is to make sure that every hacker out there, every grad student, every...

3051.363 - 3076.053 Guillaume Verdon

kid in their mom's basement has access to AI systems, can understand how to work with them, and can contribute to the search over the hyperparameter space of how to engineer these systems, right? If you think of our collective research as a civilization, it's really a search algorithm. And the more

3078.308 - 3086.091 Guillaume Verdon

points we have in the search algorithm, in this point cloud, the more we'll be able to explore new modes of thinking, right?
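The point-cloud picture can be sketched as random search over a hypothetical rugged objective: with the same random stream, a larger set of explorers covers strictly more of the landscape, so the best point found can only improve. The landscape and all names here are made up purely for illustration:

```python
import math
import random

def objective(x):
    # A hypothetical rugged landscape with many local optima.
    return math.sin(5 * x) + 0.5 * math.cos(13 * x) - (x - 0.6) ** 2

def best_found(num_points, seed=0):
    """Random search: each sample is one 'explorer' probing the landscape."""
    rng = random.Random(seed)
    return max(objective(rng.uniform(-2, 2)) for _ in range(num_points))

# With a shared seed, the small search is a prefix of the large one,
# so more explorers can never do worse:
assert best_found(10_000) >= best_found(10)
```

The same monotonicity is the intuition for decentralized research: adding independent searchers never shrinks the explored region.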

3086.932 - 3106.701 Lex Fridman

Yeah, but it feels like a delicate balance because we don't understand exactly what it takes to build AGI and what it will look like when we build it. And so far, like you said, it seems like a lot of different parties are able to make progress. So when OpenAI has a big leap, other companies are able to step up, big and small companies in different ways.

3107.662 - 3125.853 Lex Fridman

But if you look at something like nuclear weapons, you've spoken about the Manhattan Project, there could be real technological and engineering barriers that prevent the guy or gal in their mom's basement from making progress.

3128.275 - 3140.323 Lex Fridman

It seems like the transition to that kind of world where only one player can develop AGI is possible, so it's not entirely impossible, even though the current state of things seems to be optimistic.

3141.424 - 3175.053 Guillaume Verdon

That's what we're trying to avoid. To me, I think another point of failure is the centralization of the supply chains for the hardware. We have NVIDIA as the dominant player, AMD trailing behind. And then we have TSMC as the main fab, in Taiwan, which is geopolitically sensitive. And then we have ASML, which is the maker of the extreme ultraviolet lithography machines.

3176.553 - 3207.125 Guillaume Verdon

You know, attacking or monopolizing or co-opting any one point in that chain, you kind of capture the space. And so what I'm trying to do is sort of explode the variance of possible ways to do AI and hardware, by fundamentally reimagining how you embed AI algorithms into the physical world. And in general, by the way, I dislike the term AGI, artificial general intelligence.

3207.706 - 3224.319 Guillaume Verdon

I think it's very anthropocentric that we call human-like or human-level AI artificial general intelligence, right? I've spent my career so far exploring notions of intelligence that no biological brain could achieve, right?

3225.08 - 3250.431 Guillaume Verdon

A quantum form of intelligence, right? Grokking systems that have multipartite quantum entanglement, which you provably cannot represent efficiently on a classical computer, with a classical deep learning representation, and hence with any sort of biological brain. And so already, you know, I've spent my career sort of exploring the wider space of intelligences.

3251.792 - 3283.851 Guillaume Verdon

And I think that space of intelligences inspired by physics rather than the human brain is very large. And I think we're going through a moment right now similar to when we went from geocentrism to heliocentrism, right? But for intelligence. We realized that human intelligence is just a point in a very large space of potential intelligences. And it's both humbling for humanity.

3283.871 - 3310.486 Guillaume Verdon

It's a bit scary that we're not at the center of the space, but we made that realization for astronomy and we've survived and we've achieved technologies by indexing to reality. We've achieved technologies that ensure our well-being. For example, we have satellites monitoring solar flares that give us a warning. And so similarly, I think by

3312.058 - 3327.302 Guillaume Verdon

letting go of this anthropomorphic, anthropocentric anchor for AI, we'll be able to explore the wider space of intelligences that can really be a massive benefit to our well-being and the advancement of civilization.

3327.802 - 3336.925 Lex Fridman

And still we're able to see the beauty and meaning in the human experience, even though, in our best understanding of the world, we're no longer at the center of it.

3337.935 - 3363.514 Guillaume Verdon

I think there's a lot of beauty in the universe, right? I think life itself, civilization, this homo-techno-capital-memetic machine that we all live in, right? So you have humans, technology, capital, memes. Everything is coupled to one another. Everything induces selective pressure on one another. And it's a beautiful machine that has created us,

3364.855 - 3393.094 Guillaume Verdon

the technology we're using to speak today to the audience, capture our speech here, the technology we use to augment ourselves every day, we have our phones. I think the system is beautiful and the principle that induces this sort of adaptability and convergence on optimal technologies, ideas, and so on, it's a beautiful principle that we're part of. And I think

3394.435 - 3428.074 Guillaume Verdon

part of e/acc is to appreciate this principle in a way that's not just centered on humanity, but kind of broader. Appreciate life, the preciousness of consciousness in our universe. And because we cherish this beautiful state of matter we're in, we've got to feel a responsibility to scale it in order to preserve it, because the options are to grow or die.

3428.955 - 3446.8 Lex Fridman

So if it turns out that the beauty that is consciousness in the universe is bigger than just humans, the AI can carry that same flame forward. Does it scare you? Are you concerned that AI will replace humans?

3447.68 - 3476.058 Guillaume Verdon

So during my career, I had a moment where I realized that maybe we need to offload to machines to truly understand the universe around us, right? Instead of just having humans with pen and paper solve it all. And to me, that sort of process of letting go of a bit of agency gave us way more leverage to understand the world around us.

3476.88 - 3506.834 Guillaume Verdon

A quantum computer is much better than a human at understanding matter at the nanoscale. Similarly, I think that humanity has a choice. Do we accept the opportunity to have the intellectual and operational leverage that AI will unlock, and thus ensure that we're taken along this path of growth in scope and scale of civilization? We may dilute ourselves, right?

3507.294 - 3537.348 Guillaume Verdon

There might be a lot of workers that are AI, but overall, out of our own self-interest, by combining and augmenting ourselves with AI, we're going to achieve much higher growth and much more prosperity, right? To me, I think the most likely future is one where humans augment themselves with AI. I think we're already on this path to augmentation. We have phones we use for communication.

3537.568 - 3562.209 Guillaume Verdon

We have them on ourselves at all times. Soon we'll have wearables that have shared perception with us, right? Like the Humane AI Pin. Or, I mean, technically your Tesla car has shared perception. And so if you have shared experience, shared context, you communicate with one another better, and you have some sort of I/O. Really, it's an extension of yourself.

3564.711 - 3592.058 Guillaume Verdon

And to me, I think that humanity augmenting itself with AI, and AI that is not anchored to anything biological, both will coexist. And as for the way to align the parties, we already have a sort of mechanism to align superintelligences that are made of humans and technology, right? Companies are sort of

3593.328 - 3623.813 Guillaume Verdon

large mixture-of-experts models, where we have neural routing of tasks within a company, and we have ways of economic exchange to align these behemoths. And to me, I think capitalism is the way. And I do think that whatever configuration of matter or information leads to maximal growth will be where we converge, just from physical principles.
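The company-as-mixture-of-experts analogy can be sketched minimally. In a real MoE model the gate is a learned softmax over expert logits; here it is just keyword overlap, and every expert, keyword, and task below is a made-up placeholder:

```python
from typing import Callable, Dict

# Hypothetical "experts", each handling one kind of task.
experts: Dict[str, Callable[[str], str]] = {
    "math":    lambda task: f"math expert solved: {task}",
    "writing": lambda task: f"writing expert drafted: {task}",
    "code":    lambda task: f"code expert implemented: {task}",
}

def gate(task: str) -> str:
    """Route a task to the expert whose keywords overlap it most.
    A learned MoE gate plays this role with a softmax over logits."""
    keywords = {
        "math": {"sum", "integral", "probability"},
        "writing": {"essay", "memo", "summary"},
        "code": {"function", "bug", "compile"},
    }
    words = set(task.lower().split())
    return max(keywords, key=lambda name: len(keywords[name] & words))

def route(task: str) -> str:
    """Neural routing of tasks: the gate picks the expert, the expert does the work."""
    return experts[gate(task)](task)

assert route("fix the bug in this function").startswith("code expert")
```

The analogy in the conversation maps the gate to management routing work inside a company and the experts to its specialized teams.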

3625.013 - 3649.371 Guillaume Verdon

And so we can either align ourselves to that reality and join the acceleration up in scope and scale of civilization, Or we can get left behind and try to decelerate and move back in the forest, let go of technology, and return to our primitive state. And those are the two paths forward, at least to me.

3649.852 - 3659.957 Lex Fridman

But there's a philosophical question whether there's a limit to the human capacity to align. So let me bring it up as a form of argument here.

3659.997 - 3689.98 Lex Fridman

This is a guy named Dan Hendrycks, and he wrote that he agrees with you that AI development can be viewed as an evolutionary process, but to him, to Dan, this is not a good thing, as he argues that natural selection favors AIs over humans, and this could lead to human extinction. What do you think, if it is an evolutionary process, and AI systems maybe have no need for humans?

3691.441 - 3722.871 Guillaume Verdon

I do think that we're actually inducing an evolutionary process on the space of AIs through the market, right? Right now, we run AIs that have positive utility to humans, and that induces a selective pressure, if you consider a neural net to be alive when there's an API running instances of it on GPUs, right? And which APIs get run? The ones that have high utility to us.

3722.891 - 3755.361 Guillaume Verdon

So similar to how we domesticated wolves and turned them into dogs that are very clear in their expression, they're very aligned, I think there's going to be an opportunity to steer AI and achieve highly aligned AI. And I think that Humans plus AI is a very powerful combination, and it's not clear to me that pure AI would select out that combination.

3755.621 - 3778.799 Lex Fridman

So the humans are creating the selection pressure right now to create AIs that are aligned to humans. But given how AI develops and how quickly it can grow and scale, one of the concerns, to me, is unintended consequences. Humans are not able to anticipate all the consequences of this process.

3779.359 - 3786.963 Lex Fridman

The scale of damage that can be done through unintended consequences with AI systems is very large. The scale of the upside.

3786.983 - 3797.417 Guillaume Verdon

Yes. Right? By augmenting ourselves with AI is unimaginable right now. The opportunity cost... We're at a fork in the road, right?

3797.477 - 3818.194 Guillaume Verdon

Whether we take the path of creating these technologies, augment ourselves, and get to climb up the Kardashev scale, become multi-planetary with the aid of AI, or we have a hard cutoff of, like, we don't birth these technologies at all, and then we leave all the potential upside on the table, right? And to me...

3819.295 - 3832.845 Guillaume Verdon

Out of responsibility to the future humans we could carry with higher carrying capacity by scaling up civilization. Out of responsibility to those humans, I think we have to make the greater, grander future happen.

3833.505 - 3839.89 Lex Fridman

Is there a middle ground between cut off and all systems go? Is there some argument for caution?

3841.201 - 3855.529 Guillaume Verdon

I think, like I said, the market will exhibit caution. Every organism, company, consumer is acting out of self-interest and they won't assign capital to things that have negative utility to them.

3856.73 - 3874.822 Lex Fridman

The problem with the market is, like, you know, there's not always perfect information. There's manipulation, there's bad-faith actors that mess with the system, and it's not always a rational and honest system.

3875.963 - 3889.593 Guillaume Verdon

Well, that's why we need freedom of information, freedom of speech, and freedom of thought, in order to be able to converge on the subspace of technologies that have positive utility for us all.

3891.192 - 3909.529 Lex Fridman

Well, let me ask you about p(doom), probability of doom. It's just fun to say, but not fun to experience. What is, to you, the probability that AI eventually kills all or most humans, also known as probability of doom?

3911.365 - 3938.925 Guillaume Verdon

I'm not a fan of that calculation. I think people just throw numbers out there. It's a very sloppy calculation, right? To calculate a probability, let's say you model the world as some sort of Markov process, if you have enough variables, or a hidden Markov process. You need to do a stochastic path integral through the space of all possible futures, not just

3939.915 - 3963.916 Guillaume Verdon

the futures that your brain naturally steers towards, right? I think that the estimators of p(doom) are biased because of our biology, right? We've evolved to have biased sampling towards negative futures that are scary, because that was an evolutionary optimum, right?

3964.016 - 3994.575 Guillaume Verdon

And so people that are of, let's say, higher neuroticism will just think of negative futures where everything goes wrong all day, every day, and claim that they're doing unbiased sampling. And in a sense, they're not normalizing over the space of all possibilities, and the space of all possibilities is super-exponentially large. And it's very hard to have this estimate.

3995.415 - 4018.507 Guillaume Verdon

And in general, I don't think that we can predict the future with that much granularity, because of chaos, right? If you have a complex system, and you have some uncertainty in a couple of variables, and you let time evolve, you have this concept of a Lyapunov exponent, right? A bit of fuzz becomes a lot of fuzz in our estimate, exponentially so over time.
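The Lyapunov-exponent point can be demonstrated with the logistic map, a standard chaotic toy system; the map and the specific numbers here are illustrative, not anything from the conversation:

```python
def logistic(x, r=4.0):
    """One step of the logistic map; at r = 4 it is fully chaotic."""
    return r * x * (1 - x)

def trajectory(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

# Two initial conditions separated by a tiny 'bit of fuzz':
a = trajectory(0.2, 40)
b = trajectory(0.2 + 1e-10, 40)

# The fuzz grows roughly like e^(lambda * t); within 40 steps the
# trajectories have drifted many orders of magnitude apart.
max_gap = max(abs(x - y) for x, y in zip(a, b))
assert max_gap > 1e6 * 1e-10
```

For the logistic map at r = 4 the Lyapunov exponent is ln 2, so the gap roughly doubles each step until it saturates at the size of the attractor, which is why long-horizon forecasts lose all precision.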

4019.588 - 4042.524 Guillaume Verdon

And I think we need to show some humility that we can't actually predict the future. All we know, the only prior we have is the laws of physics. And that's what we're arguing for. The laws of physics say the system will want to grow. And subsystems that are optimized for growth and replication are more likely in the future.

4043.425 - 4067.557 Guillaume Verdon

And so we should aim to maximize our current mutual information with the future, and the path towards that is for us to accelerate rather than decelerate. So I don't have a p(doom), because I think that it's similar to the quantum supremacy experiment at Google. I was in the room when they were running the simulations for that.

4068.138 - 4092.759 Guillaume Verdon

That was an example of a quantum chaotic system where you cannot even estimate probabilities of certain outcomes with even the biggest supercomputer in the world. And so that's an example of chaos. And I think the system is far too chaotic for anybody to have an accurate estimate of the likelihood of certain futures.

4093.319 - 4097.742 Guillaume Verdon

If they were that good, I think they would be very rich trading on the stock market.

4098.262 - 4117.069 Lex Fridman

But nevertheless, it's true that humans are biased, grounded in our evolutionary biology, scared of everything that can kill us. But we can still imagine different trajectories that can kill us. We don't know all the other ones that don't necessarily.

4117.869 - 4151.773 Lex Fridman

But it's still, I think, useful, combined with some basic intuition grounded in human history, to reason about it, like looking at geopolitics, looking at the basics of human nature: how can powerful technology hurt a lot of people? And grounded in that, looking at nuclear weapons, you can start to estimate p(doom). Maybe in a more philosophical sense, not a mathematical one.

4151.833 - 4159.296 Lex Fridman

Philosophical meaning, like, is there a chance? Does human nature tend towards that or not?

4160.955 - 4179.349 Guillaume Verdon

I think to me, one of the biggest existential risks would be the concentration of the power of AI in the hands of the very few, especially if it's a mix between the companies that control the flow of information and the government. Because that could...

4180.79 - 4206.189 Guillaume Verdon

set things up for a sort of dystopian future where only a very few, an oligopoly and the government, have AI, and they could even convince the public that AI never existed. And that opens up these scenarios for authoritarian, centralized control, which to me is the darkest timeline. And the reality is that we have

4206.949 - 4232.548 Guillaume Verdon

We have a prior, a data-driven prior, of these things happening, right? When you give too much power, when you centralize power too much, humans do horrible things, right? And to me, that has a much higher likelihood in my Bayesian inference than sci-fi-based priors, right? Like, a prior that came from the Terminator movie, right?

4234.253 - 4255.464 Guillaume Verdon

And so when I talk to these AI doomers, I just ask them to trace a path through this Markov chain of events that would lead to our doom, right? And to actually give me a good probability for each transition. And very often there's an unphysical or highly unlikely transition in that chain, right?
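That ask can be sketched as arithmetic: a specific path through a Markov chain has probability equal to the product of its transition probabilities, so one near-zero link collapses the whole scenario. All numbers below are made up purely to show the structure:

```python
from functools import reduce

def path_probability(transitions):
    """Probability of one specific path = product of its step probabilities."""
    return reduce(lambda acc, p: acc * p, transitions, 1.0)

# A hypothetical doom scenario broken into steps, each with an invented
# transition probability.
plausible_steps = [0.5, 0.4, 0.6]            # every link fairly likely
with_unlikely_link = [0.5, 0.4, 0.6, 1e-6]   # one 'unphysical' transition

assert abs(path_probability(plausible_steps) - 0.12) < 1e-12
# A single highly unlikely link drives the whole path toward zero:
assert path_probability(with_unlikely_link) < 1e-6
```

A full p(doom) would then have to sum such products over every path through the chain, which is the super-exponential space referred to earlier.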

4256.205 - 4287.777 Guillaume Verdon

But of course, we're wired to fear things, we're wired to respond to danger, and we're wired to deem the unknown dangerous, because that's a good heuristic for survival, right? But there's much more to lose out of fear, right? We have so much to lose, so much upside to lose, by preemptively stopping the positive futures from happening out of fear.

4289.379 - 4297.328 Guillaume Verdon

And so I think that we shouldn't give in to fear. Fear is the mind killer. I think it's also the civilization killer.

4297.968 - 4316.223 Lex Fridman

We can still think about the various ways things go wrong. For example, the founding fathers of the United States thought about human nature, and that's why there's a discussion about the freedoms that are necessary. They really deeply deliberated about that, and I think the same could possibly be done

4317.438 - 4341.922 Lex Fridman

for AGI, it is true that history, human history shows that we tend towards centralization, or at least when we achieve centralization, a lot of bad stuff happens. When there's a dictator, a lot of dark, bad things happen. The question is, can AGI become that dictator? Can AGI, when developed, become the centralizer?

4343.871 - 4369.322 Lex Fridman

because of its power, maybe has the same, because of the alignment of humans perhaps, the same tendencies, the same Stalin-like tendencies to centralize and manage centrally, the allocation of resources. And you can even see that as a compelling argument on the surface level. Well, AGI is so much smarter, so much more efficient, so much better at allocating resources.

4369.482 - 4388.528 Lex Fridman

Why don't we outsource it to the AGI? And then eventually, whatever forces that corrupt the human mind with power could do the same for AGI. It'll just say, well, humans are dispensable. We'll get rid of them. Do the Jonathan Swift modest proposal

4390.08 - 4422.951 Lex Fridman

from a few centuries ago, I think the 1700s, when he satirically suggested that, I think it's in Ireland, the children of poor people be fed as food to the rich people, and that would be a good idea because it decreases the amount of poor people and gives extra income to the poor people. So on several accounts it decreases the amount of poor people. Therefore, more people become rich.

4424.452 - 4442.624 Lex Fridman

Of course, it misses a fundamental piece here that's hard to put into a mathematical equation of the basic value of human life. So all of that to say, are you concerned about AGI being the very centralizer of power that you just talked about?

4444.201 - 4464.228 Guillaume Verdon

I do think that right now there's a bias towards over centralization of AI because of compute density and centralization of data and how we're training models. I think over time, we're going to run out of data to scrape over the internet.

4464.789 - 4476.498 Guillaume Verdon

And I think that, well, actually, I'm working on increasing the compute density so that compute can be everywhere and acquire information and test hypotheses in the environment in a distributed way.

4477.859 - 4503.176 Guillaume Verdon

I think that fundamentally centralized cybernetic control, so having one intelligence that is massive, that fuses many sensors and is trying to perceive the world accurately, predict it accurately, predict many, many variables and control it, enact its will upon the world, I think that's just never been the optimum, right?

4503.216 - 4528.285 Guillaume Verdon

Like, let's say you have a company, you know, if you have a company, I don't know, of 10,000 people that all report to the CEO, even if that CEO is an AI, I think it would struggle to fuse all the information that is coming to it and then predict the whole system and then to enact its will. What has emerged in nature and in corporations and all sorts of systems

4529.185 - 4552.738 Guillaume Verdon

is a notion of sort of hierarchical cybernetic control, right? You have, you know, in a company it would be, you have like the individual contributors, they're self-interested and they're trying to achieve their tasks, and they have a fine-grained, in terms of time and space if you will, control loop and field of perception, right? They have their code base.

4553.218 - 4574.861 Guillaume Verdon

Let's say you're in a software company, they have their code base, they iterate on it intraday, right? And then the management maybe checks in; it has a wider scope. It has, let's say, five reports, right? And then it samples each person's update once per week. And then you can go up the chain and you have larger timescale and greater scope.
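The timescale-and-scope hierarchy he describes can be sketched as nested control loops that tick at different periods. A toy Python illustration (the levels, periods, and scopes are made up for the example):

```python
from collections import Counter

# Each level observes a wider scope at a coarser timescale.
# Names, periods (in days), and scopes are purely illustrative.
levels = [
    {"name": "contributor", "period": 1,  "scope": 1},   # iterates daily on own code
    {"name": "manager",     "period": 7,  "scope": 5},   # samples 5 reports weekly
    {"name": "director",    "period": 30, "scope": 25},  # monthly, whole org
]

ticks = Counter()
for day in range(1, 31):          # simulate one month
    for lvl in levels:
        if day % lvl["period"] == 0:
            ticks[lvl["name"]] += 1   # this level runs its control loop today

print(dict(ticks))  # contributor: 30, manager: 4, director: 1
```

The point of the sketch: no single loop fuses everything; fast, narrow loops run often and slow, wide loops run rarely.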

4575.341 - 4601.271 Guillaume Verdon

And that seems to have emerged as sort of the optimal way to control systems. And really... That's what capitalism gives us, right? You have these hierarchies and you can even have like parent companies and so on. And so that is far more fault tolerant. In quantum computing, that's my field I came from, we have a concept of this fault tolerance and quantum error correction, right?

4601.431 - 4627.304 Guillaume Verdon

Quantum error correction is detecting a fault that came from noise, predicting how it's propagated through the system, and then correcting it, right? So it's a cybernetic loop. And it turns out that decoders that are hierarchical and local at each level of the hierarchy perform the best by far and are far more fault tolerant. And the reason is if you have a non-local decoder,

4628.024 - 4644.154 Guillaume Verdon

then you have one fault at this control node and the whole system sort of crashes. Similarly to if you have, you know, one CEO that everybody reports to, and that CEO goes on vacation, the whole company comes to a crawl.
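A minimal classical analogy of the detect-and-correct loop is the three-bit repetition code with a local majority-vote decoder. Real quantum error correction (surface codes and their hierarchical decoders) is far more involved, but the cybernetic structure (inject redundancy, detect faults, correct) is the same. A sketch with illustrative noise parameters:

```python
import random

def encode(bit):
    """Repetition code: protect one logical bit as three physical bits."""
    return [bit] * 3

def noisy(bits, p_flip, rng):
    """Independent bit-flip noise on each physical bit."""
    return [b ^ (rng.random() < p_flip) for b in bits]

def decode(bits):
    """Local majority-vote decoder: corrects any single-bit fault."""
    return int(sum(bits) >= 2)

rng = random.Random(0)          # seeded for reproducibility
p, trials = 0.05, 100_000       # illustrative noise rate and sample count

# Error rate of an unprotected bit vs. the encoded-and-decoded bit.
raw_errors = sum(rng.random() < p for _ in range(trials))
coded_errors = sum(decode(noisy(encode(0), p, rng)) != 0 for _ in range(trials))
print(raw_errors / trials, coded_errors / trials)
# The coded failure rate scales like 3*p^2, well below p itself.
```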

4645.735 - 4670.069 Guillaume Verdon

And so to me, I think that yes, we're seeing a tendency towards centralization of AI, but I think there's going to be a correction over time where intelligence is going to go closer to the perception and we're going to break up AI into smaller subsystems that communicate with one another and form a sort of meta system.

4671.189 - 4691.234 Lex Fridman

So if you look at the hierarchies there in the world today, there's nations, and those are hierarchical, but in relation to each other, nations are anarchic, so it's an anarchy. Do you foresee a world like this, where there's not an over... what do you call it, a centralized cybernetic control?

4691.987 - 4694.247 Guillaume Verdon

centralized locus of control, yeah.

4694.427 - 4697.448 Lex Fridman

So, like, that's suboptimal, you're saying?

4697.748 - 4697.928 Guillaume Verdon

Yeah.

4698.028 - 4701.969 Lex Fridman

So, it would be always a state of competition at the very top level?

4702.689 - 4720.332 Guillaume Verdon

Yeah, just like, you know, in a company, you may have, like, two units working on similar technology and competing with one another, and you prune the one that doesn't perform as well, right? And that's a sort of selection process for a tree, or a product gets killed, right? And then a whole org gets...

4721.592 - 4738.207 Guillaume Verdon

And this process of trying new things and shedding old things that didn't work is what gives us adaptability and helps us converge on the technologies and things to do that are most good.

4739.128 - 4745.833 Lex Fridman

I just hope there's not a failure mode that's unique to AGI versus humans. Because you're describing human systems mostly right now.

4746.714 - 4758.741 Lex Fridman

I just hope... when there's a monopoly on AGI in one company, that we'll see the same thing we see with humans, which is another company will spring up and start competing effectively.

4758.761 - 4781.973 Guillaume Verdon

I mean, that's been the case so far, right? We have OpenAI, we have Anthropic, now we have XAI. We had Meta even for open source, and now we have Mistral, which is highly competitive. And so that's the beauty of capitalism. You don't have to trust any one party too much because... we're kind of always hedging our bets at every level. There's always competition.

4782.073 - 4794.96 Guillaume Verdon

And that's the most beautiful thing to me, at least, is that the whole system is always shifting and always adapting. And maintaining that dynamism is how we avoid tyranny, right? Making sure that

4796.741 - 4816.51 Guillaume Verdon

Everyone has access to these tools, to these models and can contribute to the research, avoids a sort of neural tyranny where very few people have control over AI for the world and use it to oppress those around them.

4818.359 - 4827.603 Lex Fridman

When you were talking about intelligence, you mentioned multipartite quantum entanglement. So high-level question first is what do you think is intelligence?

4828.624 - 4844.911 Lex Fridman

When you think about quantum mechanical systems and you observe some kind of computation happening in them, what do you think is intelligent about the kind of computation the universe is able to do, a small, small inkling of which is the kind of computation the human brain is able to do?

4847.994 - 4874.882 Guillaume Verdon

I would say intelligence and computation aren't quite the same thing. I think that the universe is very much doing a quantum computation. If you had access to all the degrees of freedom, and a very, very, very large quantum computer with many, many, many qubits, let's say a few qubits per Planck volume,

4877.401 - 4901.909 Guillaume Verdon

which is more or less the pixels we have, then you'd be able to simulate the whole universe on a sufficiently large quantum computer, assuming you're looking at a finite volume, of course, of the universe. I think that, at least to me, intelligence is the... I go back to cybernetics, the ability to perceive, predict, and control our world. But really, it's

4903.14 - 4932.979 Guillaume Verdon

Nowadays, it seems like a lot of intelligence we use is more about compression. It's about operationalizing information theory. In information theory, you have the notion of entropy of a distribution or a system. And entropy tells you that you need this many bits to encode this distribution or this subsystem if you had the most optimal code.
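That statement can be checked numerically: Shannon entropy gives the bits per sample under an optimal code, and coding with the wrong model q costs extra bits equal to the relative entropy D(p||q), the quantity minimized (as cross-entropy) when training models. A minimal sketch with a made-up distribution:

```python
import math

def entropy(p):
    """H(p) = -sum p log2 p : bits per sample under an optimal code."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def kl(p, q):
    """D(p || q): extra bits paid for coding p with a model q."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.25, 0.125, 0.125]   # the "world" distribution (illustrative)
q = [0.25, 0.25, 0.25, 0.25]    # an uninformed model

print(entropy(p))   # 1.75 bits: the compression limit for p
print(kl(p, q))     # 0.25 extra bits: the model's excess over that limit
```

Training to minimize cross-entropy H(p) + D(p||q) can only shrink the D(p||q) term, which is the "find the compressed representation" framing in code.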

4934.08 - 4971.623 Guillaume Verdon

And AI, at least the way we do it today for LLMs and for quantum, is very much trying to minimize relative entropy between our models of the world and the world, distributions from the world. And so we're learning, we're searching over the space of computations to process the world, to find that compressed representation that has distilled all the variance and noise and entropy, right? And

4973.283 - 5003.168 Guillaume Verdon

Originally, I came to quantum machine learning from the study of black holes because the entropy of black holes is very interesting. In a sense, they're physically the most dense objects in the universe. You can't pack more information spatially, any more densely than a black hole. And so I was wondering, how do black holes actually encode information? What is their compression code?

5003.508 - 5024.794 Guillaume Verdon

And so that got me into the space of algorithms to search over the space of quantum codes. And it got me actually into also how do you acquire quantum information from the world, right? So something I've worked on, this is public now, is quantum analog-digital conversion.

5025.074 - 5038.912 Guillaume Verdon

So how do you capture information from the real world in superposition, and not destroy the superposition, but digitize, for a quantum mechanical computer, information from the real world?

5040.754 - 5070.825 Guillaume Verdon

And so if you have an ability to capture quantum information and search over learned representations of it, now you can learn compressed representations that may have some useful information in their latent representation, right? And I think that many of the problems facing our civilization are actually beyond this complexity barrier. I mean, the greenhouse effect is a quantum mechanical effect.

5072.347 - 5100.952 Guillaume Verdon

Chemistry is quantum mechanical. You know, nuclear physics is quantum mechanical. A lot of biology and protein folding and so on is affected by quantum mechanics. And so unlocking an ability to augment human intellect with quantum mechanical computers and quantum mechanical AI seemed to me like a fundamental capability for civilization that we needed to develop.

5102.655 - 5111.605 Guillaume Verdon

So I spent several years doing that. But over time, I kind of grew weary of the timelines that were starting to look like nuclear fusion.

5112.726 - 5122.778 Lex Fridman

One high-level question I can ask is maybe by way of definition, by way of explanation, what is a quantum computer and what is quantum machine learning? Yeah.

5124.355 - 5151.809 Guillaume Verdon

So a quantum computer really is a quantum mechanical system over which we have sufficient control and it can maintain its quantum mechanical state. And quantum mechanics is how nature behaves at the very small scales when things are very small or very cold. And it's actually more fundamental than probability theory.

5152.67 - 5177.841 Guillaume Verdon

So we're used to things being this or that, but we're not used to thinking in superpositions because, well, our brains can't do that. So we have to translate the quantum mechanical world to, say, linear algebra to grok it. Unfortunately, that translation is exponentially inefficient on average. You have to represent things with very large matrices.
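The exponential inefficiency is easy to see in numbers: an n-qubit state takes 2^n complex amplitudes, and a gate acting on all qubits is a 2^n by 2^n matrix. A toy state-vector sketch with NumPy (a classical simulation of the linear algebra, not how actual quantum hardware operates):

```python
import numpy as np

n = 3                                   # qubits
state = np.zeros(2**n, dtype=complex)   # 2^n amplitudes for n qubits
state[0] = 1.0                          # start in |000>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # single-qubit Hadamard gate

# Applying H to every qubit means building a 2^n x 2^n operator.
U = H
for _ in range(n - 1):
    U = np.kron(U, H)
state = U @ state   # uniform superposition over all 8 basis states

print(state.real.round(3))   # every amplitude is 1/sqrt(8) ~ 0.354
print(f"amplitudes needed for 50 qubits: {2**50:,}")
```

This is why classically simulating even a few dozen well-entangled qubits becomes intractable: the representation, not the physics, blows up.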

5178.641 - 5195.37 Guillaume Verdon

But really, you can make a quantum computer out of many things, right? And we've seen all sorts of players, you know, from neutral atoms, trapped ions, superconducting metal, photons at different frequencies. I think you can make a quantum computer out of many things.

5195.47 - 5218.916 Guillaume Verdon

But to me, the thing that was really interesting was both quantum machine learning was about understanding the quantum mechanical world with quantum computers, so embedding the physical world into AI representations, and quantum computer engineering was embedding AI algorithms into the physical world.

5218.936 - 5243.823 Guillaume Verdon

So this bidirectionality of embedding the physical world into AI, AI into the physical world, the symbiosis between physics and AI, really that's the sort of core of... my quest really, even to this day after quantum computing. It's still in this sort of journey to merge really physics and AI fundamentally.

5243.843 - 5258.27 Lex Fridman

So quantum machine learning is a way to do machine learning on a representation of nature that is, you know, stays true to the quantum mechanical aspect of nature.

5258.737 - 5283.866 Guillaume Verdon

Yeah, it's learning quantum mechanical representations. That would be quantum deep learning. Alternatively, you can try to do classical machine learning on a quantum computer. I wouldn't advise it because you may have some speedups, but very often the speedups come with huge costs. Using a quantum computer is very expensive. Why is that?

5283.946 - 5304.619 Guillaume Verdon

Because you assume the computer is operating at zero temperature, which no physical system in the universe can achieve. So what you have to do is what I've been mentioning, this quantum error correction process, which is really an algorithmic fridge, right? It's trying to pump entropy out of the system, trying to get it closer to zero temperature.

5304.639 - 5327.786 Guillaume Verdon

And when you do the calculations of how many resources it would take to say do deep learning on a quantum computer, classical deep learning, there's just such a huge overhead, it's not worth it. It's like thinking about shipping something across a city using a rocket and going to orbit and back. It doesn't make sense. Just use a delivery truck, right?

5328.526 - 5339.792 Lex Fridman

What kind of stuff can you figure out, can you predict, can you understand with quantum deep learning that you can't with deep learning? So incorporating quantum mechanical systems into the learning process.

5340.649 - 5363.028 Guillaume Verdon

I think that's a great question. I mean, fundamentally, it's any system that has sufficient quantum mechanical correlations that are very hard to capture for classical representations, then there should be an advantage for a quantum mechanical representation over a purely classical one. The question is which systems have sufficient

5363.869 - 5390.008 Guillaume Verdon

correlations that are very quantum but which systems are still relevant to industry, that's a big question. People are leaning towards chemistry, nuclear physics. I've worked on actually processing inputs from quantum sensors. If you have a network of quantum sensors, they've captured a quantum mechanical image of the world.

5390.868 - 5410.283 Guillaume Verdon

and how to post-process that. That becomes a sort of quantum form of machine perception. For example, Fermilab has a project exploring detecting dark matter with these quantum sensors. To me, that's in alignment with my quest to understand the universe ever since I was a child, and so someday I hope that

5411.064 - 5440.64 Guillaume Verdon

We can have very large networks of quantum sensors that help us peer into the earliest parts of the universe. For example, LIGO is a quantum sensor. It's just a very large one. So yeah, I would say quantum machine perception and simulations, grokking quantum simulations, similar to AlphaFold. AlphaFold understood the probability distribution over configurations of proteins. You can understand

5442.014 - 5447.398 Guillaume Verdon

quantum distributions over configurations of electrons more efficiently with quantum machine learning.
