Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas
AMA | September 2024
Mon, 02 Sep 2024
Welcome to the September 2024 Ask Me Anything episode of Mindscape! These monthly excursions are funded by Patreon supporters (who are also the ones asking the questions). We take questions asked by Patreons, whittle them down to a more manageable number -- based primarily on whether I have anything interesting to say about them, not whether the questions themselves are good -- and sometimes group them together if they are about a similar topic. Enjoy!

Blog post with AMA questions and transcript: https://www.preposterousuniverse.com/podcast/2024/09/02/ama-september-2024/

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Hello everyone, and welcome to the September 2024 Ask Me Anything edition of the Mindscape Podcast. I'm your host, Sean Carroll. Being as it's September, of course, for us university-bound folks, that means the semester has started, the school year has started again. As I'm recording this, I haven't quite yet started teaching. That'll be tomorrow, but it is already the vibe in the air.
I've been, you know, on campus, the students are back, they're getting oriented, we've had welcome events for the different departments, and... I gotta say, I love it. It's just so romantic and beautiful.
And the only downside is that I can't help, in this day and age, being reminded that we underplay that romance, the romance of a bunch of people coming together to learn new things and to share that knowledge, right? To learn new things in the sense of doing research, but also to learn new things in the sense of going to classes and thinking about these ideas.
You know, we make fun of the idea of dorm room conversations, but dorm room conversations are super important, I think, in our lives. And for whatever reasons, lots of different reasons that I'm not going to go into right now, we tend to be a little bit more small-minded about how we think about education these days.
Either pro-education people, who try to emphasize the practical benefits of it, getting jobs, technological innovation, whatever it is, or the anti-education people, who think that universities are overly politicized, by which they often mean that universities don't share their own politics, and they don't approve of that.
And so we don't have enough blatant, straightforward celebration of the idea of learning for its own sake, learning about the universe, learning about ourselves. So I don't have anything profound to say about this, but I do think that we shouldn't forget that more high-minded aspect of learning, of being at a university. It's a very, very special place.
Universities, colleges, high schools, and whatever it is, you know, a very special place where your job is to learn new things. And of course, we can't neglect the fact that in this day and age, it's easier to keep learning forever, if you want to do that, with books, with the internet, with podcasts, with online courses, with a whole bunch of different things.
So, you know, we can't be too practical-minded about all these different things. It's okay to have a little song in our hearts about the adventure of learning more about our universe and sharing what it is that we have learned. So this AMA, of course, like all AMAs, is supported by Patreon supporters of Mindscape. You too can be a Patreon supporter.
Just go to patreon.com slash Sean M. Carroll and pledge a little bit. For those of you who don't know, Patreon, because of pressure from Apple and the Apple App Store, is going to have to change its model for charging people. So we're going to have to change from a pay-a-dollar-per-episode model to a pay-a-certain-number-of-dollars-per-month model.
I'm hoping that it's not going to be too much of a big deal. You know, back when I started Mindscape, I wasn't completely sure I'd be doing it every week. So I wanted to charge by the episode rather than by the month. But six years in, I've been doing it every week. So I think that charging by the month makes perfect sense.
When you do become a Patreon supporter, which is very easy to do, you get to participate by asking the questions that I eventually answer in the AMAs. I don't answer all of them; I try to answer the ones I have something interesting to say about. As always, many apologies to those who don't get their questions answered. We also have, after every episode, a little reflections audio.
So I record just a few minutes of me talking about what I thought about the episode that we just had. That's exclusive just for Patreon supporters. And so we appreciate it when you join on Patreon. But if you don't join, you're still a listener, that's still good too. Plenty of other places to talk about Mindscape episodes.
There's a whole subreddit, slash Sean M. Carroll, believe it or not, where you can talk about Mindscape episodes. And I am always very happy to have so many people supporting the show. So with that, let's go. Mark V asks a priority question.
Priority questions, for those who don't know, are those where once in your lifetime, every Patreon supporter gets to ask a question that I will definitely try to answer. There's so many questions I can't answer all of them, but the priority questions, I'm going to give it my best shot.
So Mark asks, imagine a scenario where Sean Carroll is born in a distant future, long after the possibility of detecting other galaxies, the cosmic microwave background, and similar phenomena have vanished. However, society has preserved uninterrupted records of earlier observations.
How would you think about cosmology when direct observations are no longer possible and only historical records remain? Would it be any different if society had not preserved those records, but instead they were suddenly rediscovered? I think that there's sort of two issues here.
One is an issue about the scientific method, and the other is an issue about how you would reconstruct truths about cosmology in a data-impoverished universe. As far as the scientific method is concerned, I don't think it would be that much different if you didn't have the CMB and other galaxies and stuff like that, and you just had historical records.
If you thought that the historical records had good reason to be accurate, and their implications were compatible with what you did see around you, then I would see no reason to doubt them, or at least I would put high credence on them and be prepared to change my mind later. It's pretty analogous to what actually happens in our universe.
In terms of photons, we do see the cosmic microwave background, but that's the surface of last scattering. That is the transition moment in the history of the universe when it became transparent after having been opaque at all previous times, which means that all of the information about previous times, at least as far as light and direct visual observations are concerned, is invisible.
We can't see what happened before the surface of last scattering, which is about 380,000 years after the Big Bang, right? There is one exception to that, because we do have data from primordial nucleosynthesis, which happens just a few seconds or minutes after the Big Bang. But that's not as detailed as maybe you would like.
You really get just a couple of data points from primordial nucleosynthesis. But the point is, we take what data we have and we try to fit it to a model. It's not that you just observe everything you want to observe ever in science. You have to have a comprehensive story, and then you have to match it to whatever data you do have. Sometimes that will be very, very hard.
Even though we have the CMB and nucleosynthesis, we still can't see what happened before the Big Bang. We don't even know if anything happened before the Big Bang. That does not stop us from thinking about it. So I think a similar kind of situation would hold if you were in a universe where the galaxies and the CMB themselves had disappeared.
The other one is, what could you possibly infer without those wonderful observations that we have? And probably not that much. You know, you could certainly make the case that there's still an arrow of time, I presume, in this hypothetical future universe.
If it's literally the future of our universe, then there will be an arrow of time for a while before we eventually reach equilibrium and the arrow of time ceases to exist. So the cosmologists in that era, epoch, might hypothesize that there had to be a lower entropy beginning in order to give you an arrow of time.
And then they might even come up with something roughly resembling what we think of as the Big Bang. I don't know. They might alternatively come up with other scenarios that are not Big Bang-like. But the point is that...
cosmologists, astronomers, other kinds of scientists, archaeologists, paleontologists, etc., are very good at figuring out quite detailed features of our universe from relatively small amounts of information, of direct observational data. So I think they would get pretty far. Kalan says, I know you are a sports fan from following the podcast.
However, do you ever feel like your views on eternalism or determinism detract from your enjoyment of sports? Personally, I kind of feel like the excitement isn't there as much if there is some already determined fact of the matter as to who's going to win. Sports is all about competition, but if determinism or eternalism are true, well, the competition is just kind of epistemic.
Sure, I don't know who's going to win, and neither do the players, but it's not up for grabs, so to speak. I guess this is just a psychological difference. I have zero feeling that if the future of the universe is determined, which of course it's not because there's quantum mechanics, but if it's fairly determined, then I should have less enjoyment of sports.
I mean, this goes hand in hand with my compatibilist views on free will, compatibilism between determinism and the existence of free will. To me the existence of free will is an epistemic matter, right? None of us is Laplace's demon, you may have heard me say before. And therefore we go through life not knowing what the future is going to hold. And that's fine.
That's actually an important part of what makes the universe interesting. I mean, think about it this way. Imagine that you were betting on something trivial like the outcomes of coin tosses, okay? Just for fun, you and your friends were tossing coins and betting on the outcomes. And imagine, for the sake of a thought experiment, that you found that interesting.
Now, instead of that, say that someone tossed a coin out of your view and recorded the outcomes, and then the outcomes were revealed one at a time, and you bet on that, right? In one case, the event hasn't happened yet, and you don't know what the outcome is going to be. In the other case, the event has already happened, and you still don't know. To me, it's identical.
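The two betting protocols in that thought experiment are easy to simulate. Here is a toy Python sketch (the function names and the round count are my own choices, not anything from the episode) showing that the bettor's win rate is the same whether each coin is tossed after the guess or all tosses were recorded in advance:

```python
# Betting on live coin tosses versus pre-recorded tosses revealed one
# at a time. From the bettor's perspective the two are statistically
# identical: the win rate is ~50% either way.
import random

def bet_live(n_rounds, rng):
    """Bettor guesses first; then the coin is tossed."""
    wins = 0
    for _ in range(n_rounds):
        guess = rng.choice(["H", "T"])
        outcome = rng.choice(["H", "T"])  # tossed after the guess
        wins += (guess == outcome)
    return wins / n_rounds

def bet_prerecorded(n_rounds, rng):
    """All tosses happen in advance; outcomes are revealed one at a time."""
    outcomes = [rng.choice(["H", "T"]) for _ in range(n_rounds)]  # already fixed
    wins = 0
    for outcome in outcomes:
        guess = rng.choice(["H", "T"])  # bettor still doesn't know the outcome
        wins += (guess == outcome)
    return wins / n_rounds

rng = random.Random(0)
print(bet_live(100_000, rng), bet_prerecorded(100_000, rng))  # both close to 0.5
```

Either way, the bettor's credence is 50/50 at the moment of betting, which is the epistemic point being made.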
It doesn't actually change my enjoyment of the situation whatsoever. You know, when the NBA, the National Basketball Association, does its lottery for which teams get to pick first in the upcoming draft, they do all of the actual lottery choosing ahead of time, and they put the answers in envelopes, and all you're watching on the TV screen is them being revealed.
but it is just as exciting to me as if they were somehow conjuring that number in real time. Okay, I'm going to group a few questions together, which we do sometimes. Andy Chaumont says, we've been observing the cosmos for less than 100,000 years.
Although we've made tremendous progress in the last 500 years in terms of understanding the nature of the universe, how can we actually trust our observations? What if our attempts to understand reality are like ants attempting to understand Seinfeld? Paul Turek says,
And Gary Miller says, at the end of your conversation with Doris Tsao, you each suggested you agreed on the emergent nature of consciousness in complex systems, but she seemed to feel that subjectiveness is necessarily a fundamental feature of reality. We'd love to hear more on your take on that. Does a conscious experience require subjectiveness as a feature of nature?
So all of these are very loosely grouped. I didn't need to group them, but I wanted to comment on the commonality of trying to understand what consciousness is. And I'm always very quick to say I do not understand what consciousness is. Not that it is un-understandable, but it is hard to understand.
It is a thing that we scientists are still working on, and it is not my area of expertise. So you shouldn't trust anything that I have to say about consciousness. Having said that, I can give you my completely uneducated, or mildly educated, opinions about it. And these questions are about, you know, what if there are higher levels of consciousness, right?
What if there is something even beyond consciousness that we don't have yet and that we're just completely clueless about? So Andy says, you know, what if our attempts to understand reality are like ants attempting to understand Seinfeld? And Paul says, what about an AGI that has this higher level of consciousness? I don't think that that's a thing.
I do not think that there are higher levels. I do believe completely that there's a possibility of better consciousness, being more conscious or, you know, being more aware of things, certainly being more rational, being better able to think about the world. That's completely 100% plausible to me. But I do think that there's a phase transition.
There's a threshold that we pass when we enter into a world of subjectivity and self-awareness. And some things don't have that, and some things do. It's not a perfectly sharp transition; you can get more and more of it. But I don't think that there are layers to it. I think it's there or it's not there, right?
There's more or less of it, but there's not sort of a series of different discrete transitions that you go through. I could be completely wrong about that. Like I said, this is not something that I have any theorems about or even very highly educated opinions about. But the slight analogy is with actual rationality and computation, where you do have this idea of Turing completeness.
You have an idea, going back not that far actually, to people like Alan Turing, that there are machines that can calculate any function that is calculable. And there are very specific definitions of what you mean by calculable, etc. But it's, again, either there or it's not. Once you've crossed that threshold, you don't get better and better at it.
I mean, well, you can get better and better at it quantitatively. You can get faster and faster, more and more accurate, but you're not learning a new thing. You're still computing that function, okay?
I suspect that consciousness is something like that, that you have it or you don't, and you might have it in degrees, but there's not a new thing toward the future that we're going to aim for someday. And in terms of Gary's question, he's asking whether conscious experience requires subjectiveness as a feature of nature. So I don't know.
You know, Doris Tsao and I had this conversation, but we didn't have that much time to get into some of the nitty-gritty about it. As most of you who've been listening to the podcast know, I don't think that there's anything over and above the known laws of physics, of atoms and molecules and forces and so forth, the core theory that describes the stuff of which we are made. But of course, at the higher emergent levels, all sorts of unanticipated features might arise. So I'm not sure exactly... I should have pushed Doris a little bit more on whether or not she was claiming to go beyond that.
But of course she's an actual neuroscientist, not a particle physicist, so she might not think in those terms at all. I think that the definition of consciousness involves subjectivity, like you have to be a subject to have consciousness, but I don't really think that it's an intrinsic feature of nature in any sense.
I think it's a higher level emergent thing that happens under the right circumstances because of the collective behavior of ordinary atoms and particles and other non-conscious things. NJTPL says, which one do you think is more weird slash interesting, dark matter or dark energy? Well, dark matter is certainly more dynamic, right?
We don't know exactly what either one of these things is, but in the case of dark energy, the thing that makes the universe accelerate, we have an overwhelmingly plausible candidate, namely Einstein's cosmological constant. And the cosmological constant is the simplest, least intricate thing you can imagine. It's literally just one number. It is the energy density of empty space.
And we've measured it. We know what it is. If the dark energy is the cosmological constant, no more observations we ever do will teach us anything more about it. Observations we do of other things might teach us about the theory that helps predict the number, but the actual knowledge about the dark energy itself would just be that number.
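For a sense of scale, that one number can be worked out from the standard relation for the critical density, ρ_c = 3H²/8πG. Here is a back-of-the-envelope Python sketch; the round input values (H₀ of about 70 km/s/Mpc, Ω_Λ of about 0.7) are my own assumptions, not figures quoted in the episode:

```python
# "It's literally just one number": estimating the dark-energy density
# from the cosmological constant, using round cosmological parameters.
import math

G = 6.674e-11        # Newton's constant, m^3 kg^-1 s^-2
MPC_IN_M = 3.086e22  # one megaparsec in meters

H0 = 70 * 1000 / MPC_IN_M                 # Hubble constant in s^-1
rho_crit = 3 * H0**2 / (8 * math.pi * G)  # critical density, kg/m^3
rho_lambda = 0.7 * rho_crit               # dark-energy density, kg/m^3

print(f"rho_lambda ~ {rho_lambda:.1e} kg/m^3")
# roughly 6e-27 kg/m^3: a few hydrogen atoms' worth of mass per cubic meter
```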
Now, if the dark energy turns out to be dynamical somehow, if it's not a constant energy density but a slowly changing energy density, then there are lots of different possibilities that open up. The dark energy could be dynamically interesting, or it could be dynamical but boring.
It could interact with other fields of nature, and that would be super duper interesting, of course. But I think that those possibilities are a little bit less likely than just the simple cosmological constant. Whereas dark matter we know is dynamical. It collects in galaxies and clusters. It has an effect on the evolution of the universe, on the evolution of galaxies and structures and things like that. So it seems overwhelmingly likely that there's more going on in the dark matter world than in the dark energy world. Not 100% certain, but it seems most likely. And again, the dark matter is dynamical, but it could be dynamical in a relatively boring way. It could just be some cold particles that don't interact with each other.
That fits the data quite well as long as you can have a theory of why it has the abundance that it does. Or there's all sorts of intricacies it could have. I mean, this is something that I myself have worked on quite a bit, different ways that dark matter can interact with other dark matter particles or with ordinary matter or with large long-range forces, things like that.
All of those are very much alive. There's no evidence for them, really, but our evidence is sufficiently weak that there's still plenty of room for us to hopefully discover them in the future.
Kim Burke says, measurements on orthogonal axes will be random. It always sounds like the quantum state specified is an attribute of the particle alone, but it strikes me this cannot be true.
An experimenter isolated from and unaware of the first measurement would have no way of measuring what state the particle was in, or distinguishing it from a random unmeasured particle which remains in a superposition of states with respect to all axes. Can a particle have a state which is in principle unmeasurable? It strikes me
that the quantum state of the particle only makes sense in relation to the apparatus on which it was measured, i.e., is actually a statement about the correlation between two systems. Am I on the right track or missing something? I would not put it the way you're putting it.
And in fact, I think that there is something very, very profound going on here, but it's sort of backwards from what you're shocked by. You're asking, can a particle have a state which is in principle unmeasurable? I would say it this way: in quantum mechanics, every state is in principle unmeasurable. That's the weirdness of quantum mechanics.
In classical mechanics, you have states of systems and you can measure them. You can be sloppy about it and measure badly or disturb the system, but you can also imagine being very, very precise and measuring without disturbing the system.
Whereas in quantum mechanics, even the gentlest of measurements can drastically disturb the system, because wave functions collapse onto specific values of whatever it is you have measured. And what that collapse means is that if the particle, the system, is in some unknown state, there is literally nothing you can do to measure it and tell you what state it was in before the measurement.
You know what state it's in after the measurement, very plausibly. That's what happens with the spin of a particle going through a magnet, right? I send a spin through the magnet; if it goes up, now I know that it's spin up. If it goes down, I know it's spin down. But I don't know what state it was in before.
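That spin-through-a-magnet story can be caricatured in a few lines of code. This is a minimal Python sketch of the Born rule, restricted to real amplitudes for simplicity; the particular states and probabilities are illustrative choices of mine, not anything from the episode:

```python
# Why a single measurement can't reveal an unknown quantum state:
# after the collapse, very different starting states can leave the
# system in exactly the same post-measurement state.
import math
import random

def measure_z(state, rng):
    """Born rule for spin along z. `state` is (amp_up, amp_down),
    real amplitudes; returns the outcome and the collapsed state."""
    p_up = state[0] ** 2
    if rng.random() < p_up:
        return "up", (1.0, 0.0)
    return "down", (0.0, 1.0)

rng = random.Random(0)

# Two quite different "unknown" states:
state_a = (math.sqrt(0.9), math.sqrt(0.1))  # mostly spin-up
state_b = (math.sqrt(0.5), math.sqrt(0.5))  # equal superposition

# A single measurement of either can return "up", after which both
# systems are in the same state (1.0, 0.0); the information about the
# pre-measurement state is gone.
print(measure_z(state_a, rng))
print(measure_z(state_b, rng))

# Statistics over many freshly prepared copies DO reveal the state,
# which is why preparing known states makes measurement useful:
frac_up = sum(measure_z(state_a, rng)[0] == "up"
              for _ in range(100_000)) / 100_000
print(round(frac_up, 2))  # close to 0.9
```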
This is a fundamental time asymmetry in quantum mechanics that you can worry about. Personally, I think that it's the same origin as the thermodynamic time asymmetry, and it's nothing really to worry about, but it's an interesting feature there. So I think that there are quantum states of particles. The thing to be impressed by is that we can prepare quantum states, right?
You can't measure a quantum state if you don't know what it is, but you can prepare a system so that you do know what it is, and then you can do the measurements on it. And of course, as always worth saying, maybe we don't understand quantum mechanics perfectly well, so we'll change our minds down the road, but that is the conventional story as we currently understand it.
Joseph Ellie, or Eli, says, I watched a clip recently of Brian Greene on Joe Rogan talking about the interplay between science and religion.
Brian offered one of the most surprisingly sympathetic views of religion I have heard from a presumably atheist scientist, examining religion more from an anthropological and evolutionary point of view and judging its usefulness in our lives based on its ability to help us understand ourselves deeply and figure out how we interact with the world and what is important to us in a way that a completely scientific worldview may never quite achieve.
I'm interested to hear your thoughts on this, specifically on if you think religion can or should be thought of as a valuable tool for living a meaningful life, rather than being a source for true facts about the world. In other words, do you think that religion or something like it can still have a place in a modern naturalistic worldview?
These questions are very hard to answer because people don't agree on what the word religion means. And especially when you say religion or something like it, that's a pretty broad bushel of ideas, right? Religion or something like it. In my book, The Big Picture, I try to make a case for naturalism, which is generally thought to be in conflict with religion.
But some religions claim to be naturalistic, right? You can buy books on naturalistic religion, or religious naturalism, for that matter. And, you know, good for them. But at some point, what are you doing with this word, religion, if it also applies to things that completely atheistic people would believe?
In the book, in The Big Picture, I say, look, it would be very, very surprising, given the fact that for thousands of years the deepest, most profound ruminations by human beings on the human condition, on what it meant to be a good person and our place in the world, were all carried out within a religious tradition, to find that those reflections were completely worthless, right?
I would be very surprised if all of those reflections were completely worthless. I think there's something to be said for thinking carefully about the Sermon on the Mount or the Ten Commandments. The difference is I don't think that there's any authority that those things have because they come from religious sources. You know, you can think carefully about the Ten Commandments.
That doesn't mean you have to agree with them, right? You can say, oh, that's a good idea, but oh, that's not a good idea. And then by doing that, you're invoking standards that are from outside the religious tradition or the religious perspective. So I'm all in favor of being inspired by religion to think about things in new ways, just like I'm in favor of being inspired by literature or philosophy or art or whatever, right? Why not be inspired by religion? But I don't think there's anything special about religion, or religious thought, or religious traditions, that gives them a privileged place in thinking about these questions. I don't think it's necessary to think in a religious way,
or even that because something comes from a religious set of ideas, it is somehow presumed to be more insightful about these very deep questions. Spencer says, So I personally don't understand how it could be possible to maintain the idea that there is no complete theory of the universe, for the simple reason that in some sense the universe itself is a complete theory of the universe, right?
We just haven't discovered everything there is to know about the universe. But the universe is doing something. Whatever it's doing, that's the theory, right? In a language that we haven't yet quite grasped. That's not to say there can't be an infinite number of fields or particles or whatever. That might very well be true.
Indeed, one of the big selling points of string theory, one of the ones that is completely ignored by the sort of popular level anti-string theory contingent, is that when you think about ultraviolet processes, when you think about, for those of you who have not heard me talk about this stuff or read Quanta and Fields, my most recent book, ultraviolet just means high energy, short distance, okay?
These are the regimes of particle physics and field theory that we can ignore in the effective field theory framework. So when you scatter particles in the deep ultraviolet, above the Planck scale, our expectation, or at least our not-completely-naive expectation, is that gravity becomes important, including all of the interactions between gravity and everything else, right?
And the infinities that you generally get from a non-renormalizable quantum field theory like general relativity blow up: not only are there infinities when you naively quantize gravity, but the infinities depend not only on the graviton, but on all the other particles as well.
So this is why a lot of people are very, very skeptical about approaches like loop quantum gravity that try to quantize gravity without including all the other particles. How in the world are you going to get the right answer when everything matters in that ultraviolet regime?
And the miracle of string theory is that indeed there are effectively an infinite number of different kinds of particles, but they are organized. They are organized into the vibrational modes of the strings. And all the infinities happily cancel each other in string theory in an apparently miraculous way.
And so you have an answer to this question of how is it possible to quantize gravity in a sensible way despite the fact that you need to know everything about all the fields in nature. String theory says we know everything about all the fields in nature. They're all vibrations of a string, OK?
So but anyway, that's just an aside to get to the fact that maybe there are effectively an infinite number of fields out there. That does not mean that there's no theory of it, right? There's an infinite number of integers, but we have a pretty good theory of the integers. It might just be like that. Qubit says, Well, there's a couple things going on here that we have to get straight.
One is the approach that I've been investigating on space from Hilbert space gets you general relativity at the end of the day in the classical infrared limit. If it didn't, we wouldn't be interested in it. In fact, I shouldn't even say that it gets you general relativity. It plausibly gets you general relativity, okay?
We don't understand enough about the approach to say that it actually succeeds in doing that. But that's what we're aiming for, because general relativity is the theory that you want to get; we have tested gravity, and it acts like general relativity in the infrared. And once that is true, then consider the quote at the end of the question:
Isn't it plausible that your approach leads to a completely new type of force that doesn't rely on an additional particle like the graviton? It's not a new type of force. It's gravity. That's the force. And it's not an additional particle. It's the particle that you get by quantizing gravity in the infrared.
So no matter what your approach is, whether it's emergent spacetime or loop quantum gravity or string theory, as long as you obey the rules of quantum mechanics and you get general relativity in the infrared limit, you will have gravitons. That doesn't mean that you start with gravitons, right? That doesn't mean that gravitons are fundamental in any sense.
Indeed, if you think about things, and this is what people doing condensed matter physics do all the time, if you think about non-fundamental systems like solids or gases or whatever, you can quantize them. And instead of getting photons, like you get by quantizing the electromagnetic field, you can get sound waves, which you then quantize to get phonons, right?
Sound waves aren't fundamental, but there are still quantized excitations of the fields and their perturbations that give rise to sound waves rippling through the medium, okay? That's what gravitons could be. Gravitons might not be fundamental, but they're still going to be there if you have quantum mechanics and general relativity.
Mikkel Pickle says, have you come across a "use it till you find a better one" method for addressing a very small risk of a very bad outcome? Is it a hard problem? It is a hard problem, but we are apparently faced with more than one in the world right now.
As an alternative, have you come across a method, or do you have a recommendation, for addressing multiple small-risk, big-consequence problems at one time? Perhaps the consideration changes when you have more than one thing in the world that seems like a small but existential risk. Yeah, I don't have a once-and-for-all perfect methodology that I favor for these questions.
I think this is a super interesting question that I have not seen anyone give a convincing answer to. Obviously, this is in part inspired by the Nate Silver conversation. And there was something in Silver's book that I thought was actually very interesting that we didn't quite get an opportunity to talk about.
When dealing with these very small probability events, he didn't quite advocate, but at least he discussed, the idea that rather than trying to think of the probability of this unlikely event happening, you think about the range of plausible probabilities.
What is the lowest probability you would put on this, and what is the highest probability you would put on this? And then deal with living in that range of uncertainty, okay? If you realize that, you know, maybe I think the probability is 1%, but it could be as low as 10 to the minus 20, then maybe your thinking changes a little bit, right?
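To make that vivid with made-up numbers (both the cost figure and the probabilities here are purely hypothetical, just for illustration):

```python
# Hypothetical catastrophe "cost" in arbitrary utility units.
cost = 1e9

# A point estimate versus the plausible range of your credence:
for p in (1e-20, 1e-8, 1e-2):
    print(f"p = {p:g}  ->  expected loss = {p * cost:g}")
```

The decision-relevant quantity, probability times consequence, swings by eighteen orders of magnitude across that range, which is why pinning down the range can matter more than any single point estimate.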
I don't think that's a sufficient answer to the question, but it is an interesting change of perspective I hadn't thought of. Two other things I'll just throw out there as things to keep in mind when thinking about these problems. One is that it's too easy in my mind to assign a small probability to an unlikely event.
and forget about all the other unlikely events that have similar probabilities, right? Say you're talking about something with a probability of 10 to the minus 8, which is a very small chance, a one-in-100-million chance, but it has huge consequences. The world would end or whatever, right?
What are other things that might happen with exactly that probability, or with even higher probability? Maybe some of them would have positive effects, right? Again, that's not a complete algorithm for dealing with these, but it is something to very much keep in mind.
And the other is sometimes, I think, again, oftentimes when we're dealing with these existential worries, that is to say worries or problems that could destroy all of life on Earth, many of them have the feature that it's not just that they seem unlikely, but that they would creep up on us, right? That they don't just... not exist today and then suddenly tomorrow they destroy all life on Earth.
That would be something I would be quite worried about because you can't sort of shift in midstream, right? But for a lot of these worries, they would actually creep up on us. You would take that very tiny probability of them happening that you started with and say, oh, you know, look, the probability is going up. We can see exactly how this is happening now. Let's try to do something about it.
And then I think that I'm much more willing to take risks if those are the probabilities we're talking about, because then we can be more clear as we gather more data about what the correct actions to take might be. Alan Lubell says, Thanks.
So just to be clear, so everyone knows what I take this question to be: okay, there's some diet that might not be what you would ordinarily eat, but you're guaranteed, maybe guaranteed is too strong, but on average you would get 20 extra years of lifespan, and it would be healthy lifespan, okay? You would actually be functioning at a high level.
It's not just like you're sitting in bed all day with an aging body; you actually have 20 more productive years. Yeah, I'd like to think that I would do that. There's always, the human condition is to struggle with short-term pleasures versus long-term investments in your happiness.
But I suppose I could try to do that because even though I get a lot of pleasure out of eating and eating a variety of foods in particular, not just the same thing every day – I also get pleasure out of other things like thinking about things and writing books and traveling and stuff like that, which I could do 20 years more of. So yeah, I think I would be tempted to try to do that.
If it were the pizza diet, just pizza and ice cream and Doritos, then I would definitely do it, no problem at all. Maybe I could get rid of the Doritos; those are more of a childhood craving that I used to have. Happily or unhappily, I put very, very small credence on the idea that eating just red meat, salt, and water would increase your lifespan.
It might do lots of things to you. Increasing your lifespan is very unlikely to be one of them. Nikola Ivanov says, Yeah, they're exactly the same kind of thing. They are mathematical constructs to describe something that is really happening. So when you say they're mathematical constructs, that's not the same as saying they don't exist.
There is something, there's some effect that is being described. And you invented these mathematical constructs to help you calculate and think about what those effects are. The real effect, whether it's the true vacuum state or scattering interactions of particles, is that there are many quantum fields that interact with each other in ways that we don't have straightforward ways of calculating.
I think it was last month's AMA, we talked a little bit about the amplitudes program for calculating scattering of particles in quantum field theories. The aspiration there is to jump over the idea of virtual particles and go right to the answer in some simpler way. So virtual particles are a tool that we use to think about these things that we're calculating.
They're a metaphor in some sense, but they are absolutely having an effect, okay? It's just the language that we use is a little bit colorful sometimes to describe how we calculate what that effect is. And that's true whether it's the vacuum or the physical interactions of scattering particles. Captain Brick says, I have a question about Bayesian reasoning.
You've mentioned a couple of times that you should never set your priors to one because that would mean no evidence could change your mind. I don't see why I shouldn't have my prior set to one, but then given some evidence update my credence to close to zero. What is so special about the prior one? Yeah, this is a good question. I could probably be more clear about explaining this.
Let's imagine that we only have two choices, right? A or not A. Those are the only choices that we have. These are the two propositions that we are going to try to attach credences to. We could go through Bayes' theorem and you could show that if the credence on A was equal to 1, then updating according to Bayes' theorem would never change you from 1. That is a true fact.
But it is easier to think about the alternative. If you set your credence on A to 1, then the fact that A and not-A form a complete set of possibilities and are mutually exclusive means that your credence for not-A has to be exactly 0.
And if you visualize Bayes' rule in your mind, the updated probability for a proposition is proportional to the prior probability of that proposition. And the great thing about numbers that are proportional to zero is that they're all zero.
So it's easier to see why if you have something for which your credence is zero, it can never change because Bayes' theorem just sets the new probability to be a number times the old probability. And that number is never infinity. So it's going to be a finite number times 0. That's still going to be 0.
So if your only two options are A and not-A, and the probability of not-A is 0, no amount of evidence will make the probability of not-A anything other than 0. And therefore, the probability of A will remain 1. That's why evidence will never help you if you're in that degenerate case.
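To see the arithmetic of that degenerate case explicitly, here is a minimal Python sketch; the function and the particular numbers are mine, purely for illustration:

```python
def bayes_update(prior_a, like_a, like_not_a):
    """Posterior credence in A given evidence E, via Bayes' theorem.

    like_a is P(E | A); like_not_a is P(E | not-A).
    """
    evidence = prior_a * like_a + (1 - prior_a) * like_not_a
    return prior_a * like_a / evidence

# A moderate prior moves when the evidence favors not-A:
print(bayes_update(0.5, 0.1, 0.9))      # 0.1

# A prior of exactly 1 never moves, however damning the evidence,
# because the not-A term is multiplied by a credence of 0:
print(bayes_update(1.0, 0.001, 0.999))  # 1.0
```

With the prior at 1, the "evidence" in the denominator collapses to the A term alone, so the ratio is always exactly 1, no matter what the likelihoods are.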
Brzozowski says, do you think that artificial intelligence can potentially experience pain or emotions or are they tied to our biology? I think I'm probably halfway in between the two allowed answers here. I see zero obstacle to some kind of artificial intelligence experiencing pain and emotions. I think I'm a physicalist about consciousness and all of those things.
There's no reason why we cannot entirely reproduce the full spectrum of thoughts and conscious experiences that human beings have in an artificial context. However, having said that, I think that modern work on artificial intelligence and simulations of people and things like that vastly underestimates the importance of our biology.
I think not only are our brains embedded in our bodies, constantly receiving sensory inputs and things like that, but we're also running on a certain metabolism, right? We need food, we need light from the sun or whatever. These needs are generally not baked into modern approaches to artificial intelligence.
You know, there's no reason to give an AI program hunger, right? Now, again, there's no reason why we can't do it either. But I think that my impression from knowing a little bit about what we do in modern AI research is that people write computer programs and they let the computer programs run, right? We're not embedding them in bodies that have needs and desires and –
things built in by the course of biological evolution that act like feelings and goals. Go way back to the podcast we did with Antonio Damasio. The feelings that he keeps emphasizing are exactly that, are these feedbacks that we're getting from our biology that have a huge role in how our brains actually work. So
My suspicion is that if you want to get something that is honestly close to what human beings experience as pain or emotions, you're going to have to somehow mimic or simulate or even just reproduce the biological aspects of our thinking, not just the computational aspects. This episode of Mindscape is sponsored by BetterHelp.
Halloween is approaching and it's time to think about what is it that scares us. But what about those fears that don't involve zombies and ghosts? For those, therapy is a great tool for facing our fears and finding ways to overcome them. Because sometimes the scariest thing is not facing our fears in the first place and holding ourselves back.
And if you've been thinking about giving therapy a try, think about BetterHelp. BetterHelp is entirely online and is designed to be convenient, flexible, and suited to your schedule. Just fill out a brief questionnaire to get matched with a licensed therapist, and you can switch therapists at any time for no additional charge. So overcome your fears with BetterHelp.
Visit betterhelp.com slash mindscape today to get 10% off your first month. That's betterhelp, H-E-L-P, dot com slash mindscape. Alex Thu says, recently my wife took an interest in natal charts, N-A-T-A-L, which appear to be an extension of astrological interpretations of life.
Natal charts seemingly offer high-fidelity information about one's personality based on relative planetary and solar positioning at the time and location of one's birth. While neither of us seriously believes in astrology's predictive powers, the results of our and our family's natal chart readouts were uniquely specific and familiar. The results were uncanny.
My question is, to what extent do you find that physical phenomena in the universe can have real effects on life on Earth, particularly on more abstract concepts such as personality? Certainly, moon phases affect tides, which affect various forms of life interaction, but to what extent can such dynamics play on moods, perspectives, beliefs, etc.?
Essentially none is the short answer to this question for two reasons. Number one, people have done studies on this. Just recently a study came out.
Whenever you are careful about it, whenever you're doing, you know, double-blind, blah, blah, blah, blah studies, there's zero relationship between astrological charts, including natal charts or whatever, and anything to do with how human beings behave or their personalities or anything.
It is far too easy to ex post facto hear some prediction or some reading or whatever and go, oh yeah, that kind of sounds like me, right? It's very well known that that is something that human beings are very bad at. That is not something we can judge. When you try to control it and be a good scientist, all the effects go away.
But the other thing is, and this is much more important to how I think about it, there's just no room for these effects in how physics works, right? The moon indeed is close enough to the Earth that its gravitational field affects tides. But there's plenty of other things here on Earth that have a much bigger impact on human beings than tides do.
The temperature in the room that you're in is much more important than the tides. I mean, unless you really think that you can sit in the room a thousand miles away from the ocean and sort of suss out what level the tides are at just by thinking about it very carefully— which, by the way, you can't. I'm just teasing.
It's completely implausible that even the tides caused by the moon have any important impact on our development, especially since, as you probably know, a human being being conceived, growing in the womb, and then being born takes nine months. So it's not like the tides, which go up and down every day, have any simple effect on that kind of thing.
And other planets and stars and things like that are just wildly far away. We know that there aren't any long-range forces that we're missing in our description of the world, at least not any that are anywhere near strong enough to affect human life, behavior, growth, development, anything like that.
So both on the basis of my priors about the laws of physics and by thinking about the experiments that have actually been done, I put next to no credence on the idea that natal charts are telling you anything about who you are. OK, now I'm going to take a whole bunch of questions on a common topic, but I'm not going to group them together into one answer.
I've just arranged things so that I will discuss common topics all at once. So this has to do, unsurprisingly, with the recent podcast with Blaise Aguera y Arcas. It's a very provocative podcast and people have a lot of questions. The questions are often very close to each other, but they're not the same.
So rather than reading them all first and then trying to answer, I will read them and try to answer one by one, but it will be easier to answer the later questions because I will have covered similar things earlier. So Sandro Stucki says, I really enjoyed your episode with Blaise Aguera y Arcas. Early on in your discussion, he noted that life doesn't seem exactly encouraged by thermodynamics.
There's something mysterious there. I was hoping his work would shed some light on this, but BFF seems to be completely irreversible with an arrow of time built in. Do you think that we can nevertheless learn something about the connection between entropy and life from his experiments?
Well, I don't think that we can learn much that's realistic about the connection between entropy and life from his experiments. So let's just back up and talk in a general way about what's going on here. When Blaise does a computer simulation, the program that he's running is embedded in the physical world, right?
It's not separate from the physical world, but you have to plug in the computer to make it happen. So it is not a closed system. It is an open system that has free energy being put into it, and that enables it to do certain manipulations, which, like Sandro says, are irreversible, right? It's not like you can go back and figure out exactly where you came from.
When you add 2 plus 2 to get 4: if someone gives you 2 plus 2, you can tell them the answer is 4, but if someone gives you 4, you don't know what was added together. It could have been 5 minus 1, right?
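Here's a one-line illustration of that many-to-one character (a toy example of my own, obviously, not anything from Blaise's actual system):

```python
# Addition is many-to-one: it discards information about its inputs.
pairs = [(2, 2), (1, 3), (5, -1), (0, 4)]
outputs = {a + b for a, b in pairs}
print(outputs)  # {4} -- four distinct inputs, one single output

# So there is no inverse function: given only the 4, the original
# pair of inputs cannot be recovered. That is irreversibility.
```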
So that is a different thing than thinking about the fundamental laws of physics, which more or less do conserve energy and do have a reversible character at the deepest levels, as we discussed in the podcast. However, that's less of a barrier than you might think, because of course we very often talk about aspects of the world that are open systems, right?
You know, like I said, that computer is part of the physical world. And if you restrict your attention to subsystems of the world that are not closed systems, you can get effective dynamics, whether it's an emergent higher-level theory or even just, you know, a theory of chemistry.
Like if you talk to chemists, chemists will often have irreversible processes that they study. And what's really going on from the physicist's point of view is that those processes are giving off radiation, right? Giving off infrared or even longer wavelength light because they're dissipative. They're losing energy to the environment. And the chemists are just not keeping track
of those photons that are being given off. So they're just keeping track of the molecules and they will find what appear in their worlds to be irreversible processes. Of course, we know that it's compatible with deeper down reversible laws, but that's not what they see because they're not studying the whole system. In some very weak sense, that's what's going on in these computer simulations.
You would like, I would like, to embed any result that you get from these computer simulations in a fuller theory that did care about entropy and dissipation and the fundamentally reversible nature of the laws of physics that we understand. But okay, they didn't do that yet. That's perfectly okay.
Now, there's another connection between entropy and life, which is a little bit less down-to-earth and mechanistic. So putting aside the fact that we have the second law of thermodynamics, etc., there is still the question of the statistical mechanics of these systems that he's looking at. So forgetting about dissipation and photons, there's still probabilistic questions we can ask.
Blaise was trying to make a claim, or at least a conjecture, let's put it that way, that given his kind of setup, where you had many different copies of these little programs and they interacted in certain ways, there's two different questions you can ask.
One is: in the space of all possible configurations of that system, which of them are doing many computations and have little subsets that reproduce themselves? And the answer is very, very few. Very few configurations actually look that way in the set of all possible configurations.
In the set of all possible random programs, most of them aren't going to have those properties. But, he said, that tiny subspace is an attractor. Now, an attractor in the world of dynamical systems is a subspace that many different trajectories flow to, okay?
And if you live in the world of reversible dynamics, what a physicist would call frictionless dynamics, you can easily prove there are no attractors, because there's a theorem, Liouville's theorem, that says that the volume of a region of phase space is conserved in such systems.
But if you have a dissipative system, if you have an open system that is not the entire closed, isolated universe, then you can have attractors and you see attractors all the time. So there can be tiny subspaces of the whole state space of the system, which are unlikely to be chosen if you just choose something randomly, but can be very likely to be an ultimate destination of the evolution.
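As a toy numerical illustration of that contrast (my own sketch, nothing to do with Blaise's actual simulations), compare a damped oscillator with a frictionless one:

```python
def step_damped(x, v, dt=0.01, gamma=0.5):
    """Semi-implicit Euler step for a damped oscillator (dissipative)."""
    v = v + (-x - gamma * v) * dt
    return x + v * dt, v

def step_frictionless(x, v, dt=0.01):
    """Semi-implicit Euler step for an undamped oscillator (conservative)."""
    v = v - x * dt
    return x + v * dt, v

# Start several damped trajectories from very different initial conditions.
for x0, v0 in [(1.0, 0.0), (-3.0, 2.0), (0.5, -4.0)]:
    x, v = x0, v0
    for _ in range(20000):
        x, v = step_damped(x, v)
    # Every damped trajectory flows to the same attractor at (0, 0)...
    assert abs(x) < 1e-3 and abs(v) < 1e-3

x, v = 1.0, 0.0
for _ in range(20000):
    x, v = step_frictionless(x, v)
# ...while the frictionless orbit keeps circulating at its original "energy".
assert 0.5 < x * x + v * v < 2.0
```

The damped system forgets where it started, which is exactly what an attractor means; the frictionless one can't do that, because its phase-space volume is conserved.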
of that system. So that was his conjecture. His conjecture was that some kind of computation is an attractor in this dynamical system sense. And so that's a kind of relationship between entropy and his computational work. And I don't know if the conjecture is true. I think it's very interesting. I don't even think that the conjecture is quite well formulated yet.
And I certainly don't know how to address it. But I think that's a super interesting question there that I would like to know more about. Connor says, your last episode with Blaise Aguera y Arcas was one of my favorites ever. At one point, you brought up how there doesn't seem to be any obvious energy or entropy or dissipation in their simulated world.
But biological life arises from the dissipation of the sun's free energy. Is it surprising that there could be life in their simulation without apparent energy dissipation? What is going on here? So hopefully you see that what I just said about Sandro's question will help us also with Connor's question. Yes, there isn't any obvious energy or entropy or dissipation.
It's not supposed to be a model of the fundamental laws of physics. It's not even supposed to be a model of chemistry, okay? So I've seen some people, not in mindscape land, but elsewhere responding to Blaise's paper, saying things like, but this isn't really chemistry. This isn't really life. To which I want to say, like, Yeah. Or they say things like, it doesn't answer all the questions we have.
Yeah. Of course it doesn't answer all the questions we have. That's not usually how science works. You know, maybe sometimes you get lucky, but usually you work in steps. So this is not in any sense a realistic model of biology or chemistry or anything like that.
It's taking one aspect of biology, the idea of information containing subsystems replicating themselves with small variations due to mutations and so forth, and asking, can that arise without it being put in? I've seen a whole nother bunch of people saying, well, this has already been looked at in work on artificial life and things like that.
But generally, and I don't know if this is always true, but certainly generally, again, as we mentioned in the podcast, that work starts with something lifelike and watches it evolve. It's usually not about the origin of this reproductive behavior from truly random initial conditions.
I mean, there could be counterexamples to that, and I know of like one or two, but there's not that many. So anyway, to Connor's question: is it surprising that there could be life in the simulation without apparent energy dissipation? As we talked about in the previous question, there is effective energy dissipation. There is irreversible dynamics, okay?
So in that sense, it is not surprising to me that there could be something lifelike in their simulation. Dennis says, I liked a lot of the recent podcast with Blaise Aguera y Arcas. At the beginning, he claims that in his experiment, replication emerges from nothing without being put in the system from the start.
That seems like cheating, considering that there is a copy instruction in the base language. This point is even kind of acknowledged later, but not as a weakening of the original claim. More generally, do you think that starting from a programming language to make computation emerge is a weak point of this approach? So again, as I alluded to before, it's not a weak point of the approach.
It's a feature of the approach. I don't mean "feature" as opposed to "bug," as if it were something positive; it's just a fact about this approach. This approach is not supposed to be realistic chemistry. It's not supposed to exactly answer all the questions you have about the origin of life. There is no analog here of a nucleotide in a DNA molecule or anything like that, right?
It's an entirely different kind of thing. It's trying to ask: do computation and replication naturally emerge from random initial conditions in certain kinds of circumstances? So of course, yes, this programming language does allow for copy as an instruction, but it doesn't naturally copy the whole program, and that's what you're looking for.
They absolutely are dealing with a context where it is possible to get the answer that you were hoping to get. And indeed, they get the answer they were hoping to get. But baby steps. It's going to be a long journey before we go from this to understanding how actual life actually formed.
But I will say, just so I can be a little bit more substantively positive, I think one of the hopes might be the following: if it's true that there is some sense in which computation is an attractor, even in this very, very toy-model, spherical-cow example that they do on the computer, maybe that makes you think that some kind of life is more ubiquitous in the universe than you thought.
Maybe it is evidence that given that the laws of physics we know allow for life because we're here, we're life, we're consistent with the laws of physics. If there's some attractor behavior to this kind of computational model, then maybe biology, chemistry, geology are more likely to make it happen than you might think.
I think that the evidence for that is pretty weak right now, but it's something we can absolutely think about more carefully. Adam Rotmill says, I enjoyed the podcast with Blaise on computational life using BFF. I also started Sarah Walker's book, Life As No One Knows It, along similar lines.
How do you think these computational approaches compare to the way Daniel Dennett references Conway's Game of Life in Freedom Evolves? Has the newer paradigm changed? I don't know too much about the Game of Life, to be honest. I mean, I know what the Game of Life is, but there's been a lot of research on it at a detailed level that I have not kept up on.
So as far as I know, there's two statements. One statement that I know is true is that Conway's Game of Life, for those of you who don't know, and you probably do know, is a two-dimensional cellular automaton, a grid of white and black squares that interact with their nearest neighbors in definite ways.
And you can build reproducing things like gliders and you can build things that make an infinite number of gliders and so on. And it has been proven that this particular cellular automaton is Turing complete.
That is to say you can construct a configuration in the game of life that will be able to be a Turing machine, that will be able to compute any computable function just as well as anything else. So that is known. What I don't know is how robust that configuration is, right? In other words, precisely this question of is that an attractor in the space of the dynamics?
I have no idea whether anyone has looked at that question in the Game of Life. It's incredibly plausible to me that you can construct Turing machines in the Game of Life, but that they're very, very fragile, and if you bump them a little bit, they break, right? And they never arise from random initial conditions. The Game of Life is not that robust by itself.
Like all the interesting stuff is not that robust. If you put random configurations down, usually they peter out, okay? They usually don't start reproducing interesting things. Now, I don't know if there's some subset of conditions that are not completely random but for which computation happens naturally. That's something that I'm not up on.
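To make the basic setup concrete, here's a minimal implementation of the Game of Life rules; the glider is the textbook self-propagating pattern, and the point is that it has to be placed deliberately, a random soup almost never assembles one:

```python
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life on an unbounded grid.

    live is a set of (row, col) coordinates of live cells.
    """
    neighbor_counts = Counter(
        (r + dr, c + dc)
        for r, c in live
        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # Birth on exactly 3 live neighbors; survival on 2 or 3.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The classic glider: it translates one cell diagonally every 4 steps.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
assert g == {(r + 1, c + 1) for r, c in glider}
```

Even a structure this small has to be set down exactly; perturb a cell and it typically decays into debris, which is the fragility being discussed above.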
Nate Wadoops says, the episode with Blaise Aguera y Arcas was outstanding. It got me wondering why we only see very limited forms of self-replication emerge from Conway's Game of Life.
For example, we often see patterns that crawl across the grid, but the soup generally converges pretty quickly to either a static image or to something that repeats after a small number of steps. Do you have any thoughts on what sorts of extensions to the rules of Conway's Game of Life might enable more interesting phenomena to emerge from a substrate of cellular automata?
Yeah, so that's a good question. That's exactly what I was getting at before: generally you converge to boring things in the Game of Life. And so one open question, and it's okay to have open questions, it's not a flaw in the paper you wrote that you didn't answer every possible open question, is to
figure out how likely it is that the kind of behavior you saw with the BFF simulations will also exist in these other simulations. It seems completely plausible to me that the Game of Life is on one side of a divide, and that divide separates automata in which computation almost never spontaneously arises.
On the other side, there's a set of automata and programming languages or whatever where computation almost always arises from random initial conditions. But that's all work to be done. That's why it's so much fun, because we don't know the answers to these questions.
Daniel Bagley says, I'm having trouble with what seems to be a teleological perspective coming from Blaze in your most recent episode. If life is computation and it was created via instructions, doesn't that imply agency or teleology? Who or what encoded instructions in the matter and information that comprises life? This seems to open the door for creationism.
Well, I hate to disappoint you, Daniel, but the door has been open to creationism for a very long time. It has been the dominant paradigm for thousands of years. We're just crawling out from under it. And we should go wherever we go, you know, whether it opens doors or not, whatever is the best theory that we can invent. However, it is absolutely not teleological in the relevant sense.
Of course, as we said, this is an experiment done on a computer and someone built the computer, someone designed the programming language, someone set up the experiment that we're running. All very, very true. That's going to be necessary in any simulation we ever do. It's going to be set up by human beings, right?
There's zero thought that that implies that our physical universe was set up by some higher intelligence. That's just a different kind of question. This model that they have is supposed to be a version of the laws of physics. Certainly not our laws of physics, absolutely not. We've already talked about the fact that their computer algorithm is irreversible rather than reversible, for example.
But there's other differences as well. And the fact that we have laws of physics does not imply that those laws were passed by a legislature or that they were invented by some cosmic autocrat or anything like that. And it doesn't matter. You know, we have to go where the science takes us. And we have to figure out why it is that life arose.
And, you know, if ultimately someday we decide that it must have been because God did it, then I will live with that. I think that my credence on that is very, very, very tiny. But I'm absolutely willing to decide that if that's where the evidence eventually points.
John Haig says, with the Blaise Agüera y Arcas podcast fresh in our minds, I have a question concerning the Briggs-Rauscher oscillating reaction. This reaction oscillates on average about 10 times. The oscillating color cycle goes from clear to amber to dark blue and then back to clear again, always ending on the dark blue stage.
Do you believe, and/or think Blaise would believe, that each oscillation is one complete complex life cycle and that each new oscillation is a form of replication? If so, how would you define death and rebirth? If not, could it all just be one life cycle with 10 stages of morphogenesis?
So yeah, if you don't know, this idea, the BR oscillation, Briggs-Rauscher oscillating reaction, this is a chemical reaction that people who study complexity sometimes get very excited about. And it's kind of interesting because you have this system. I'm not super expert on it because I don't think it's that exciting, to be honest, as I will just explain. But you have this chemical reaction.
And, you know, many chemical reactions will happen. And then at the end of the day, you equilibrate, right? Everything is smooth and more or less constant everywhere. This kind of chemical reaction goes through these oscillations. And you see patterns. And the patterns are unpredictable where exactly they will appear. But there's like stripes. And the stripes sort of curl around each other.
And they oscillate in color. They change color. So it looks kind of unpredictable and kooky and kind of structured and complex. But there's not really any information being processed there. It's actually all pretty simple.
Honestly, even though the picture looks complex, you know, it's not something where you could consider mutations or learning or any of the kinds of things that I would associate with important aspects of life. Think back, as I like to keep reminding us of prior podcasts, to Stuart Bartlett. We had a very interesting conversation about what he called "lyfe," which I think is a terrible way of pronouncing this neologism, spelled L-Y-F-E, where he and Michael Wong tried to say: look, what matters is not finding the right definition of life. It's acknowledging that life has many different aspects to it, and some of those aspects might appear in systems without the other ones, okay?
So metabolism and reproduction are some of those aspects, but so are learning and adaptation. And those kinds of things don't exist in this Briggs-Rauscher reaction. So I think it's a cool reaction, but I would not call it life in honestly any sense whatsoever. That's my view.
OK, that is enough for the moment about the origin of replication and computation. We can move on. Ahmed Hindawi says, what are your thoughts on an election system where each voter assigns a score between minus 1 and 1 to every candidate, perhaps with increments of 0.1? In this system, voters wouldn't need to normalize their scores to any specific value. I see two potential advantages.
Number one, moderate candidates could gain support from across the political spectrum, potentially outperforming more extreme polarizing candidates. And number two, it could increase voter turnout. Even if a voter is indifferent to candidate X, they might still be motivated to give candidate Y a negative score to offset someone else's positive vote.
A potential downside is that this system is more complex than a simple single choice ballot. Yeah, this is a known system called range voting or score voting. And I think that it, in theory, has a lot of advantages. For those voting theory aficionados out there, it avoids Arrow's theorem.
Kenneth Arrow proved a famous theorem saying that no voting system, under certain reasonable assumptions, can satisfy all the conditions you would want a fair voting system to satisfy. Not "keep everybody happy," of course; if you lose the election, you're not happy. Conditions like: no one person is a dictator, and if voters prefer A to B and B to C, the system prefers A to C, things like that. But one of the assumptions of Arrow's theorem is that it's an ordinal system, that is to say, you rank or vote yes or no for candidates, rather than a cardinal system, as it would be called, where cardinal means you can assign numbers, right? And this is exactly what range voting or score voting does.
And there have even been papers written saying that while people will always be somewhat unhappy when you have an election, because not everyone's candidate wins, people are least unhappy in range-voting kinds of systems. So I think it's a good idea overall.
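To make the mechanics concrete, here is a minimal sketch (hypothetical code with made-up candidate names and ballots, not any official implementation) of how a score-voting election is tallied: sum each candidate's scores across all ballots and take the maximum.

```python
# Minimal sketch of score (range) voting, as described above: every voter
# assigns each candidate a score between -1 and 1, and the candidate with
# the highest total score wins. Candidates and ballots here are invented.

def score_voting_winner(ballots):
    """ballots: list of dicts mapping candidate name -> score in [-1, 1]."""
    totals = {}
    for ballot in ballots:
        for candidate, score in ballot.items():
            if not -1.0 <= score <= 1.0:
                raise ValueError(f"score for {candidate} out of range")
            totals[candidate] = totals.get(candidate, 0.0) + score
    # Highest summed score wins (ties broken arbitrarily here).
    return max(totals, key=totals.get), totals

ballots = [
    {"X": 1.0, "Y": -1.0, "Z": 0.3},
    {"X": -0.5, "Y": 1.0, "Z": 0.0},
    {"X": 0.2, "Y": -0.8, "Z": 1.0},
]
winner, totals = score_voting_winner(ballots)
```

Notice that no normalization across a ballot is required, which is one of the advantages Ahmed mentions.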
It's still subject to the worry that people are insincere, that people vote strategically. With range voting, here is the typical worry. You might have three candidates. One is the best, and you give them a plus one. One is the worst, and you give them a minus one. And one you're indifferent to, so you want to give them a zero.
Your sincere score would be giving them a zero. Okay. But there's two worries you have. One is maybe your least favorite candidate is actually popular. So you want to maximize the difference between your least favorite candidate and everybody else, especially if your second favorite candidate is more popular than your first favorite candidate. So you might be...
pushed towards exaggerating your preference, increasing the score of your second favorite candidate to increase the distance between them and your least favorite one. But also there's the backwards worry. If the second favorite candidate is popular and so is your favorite candidate, you might be tempted to lower your score for your second favorite candidate. It's always going to be true.
that you should give your favorite candidate the highest score and your worst candidate the lowest score. But apparently there's some research, I'm not sure if this is completely reliable or not, but empirically, apparently people, when they have this voting system, do tend to try to be honest, to try to give fair scores to people rather than voting strategically.
I mean, maybe that's just because these systems are not very common, so people haven't learned to game the system. And also maybe it's not so bad to game the system. Maybe that's perfectly okay. Anyway, I do think that it would be better than our current system.
But in our current system in the United States, most jurisdictions have what is called first-past-the-post, or winner-take-all, elections, where whoever gets the most votes wins, right? Everyone who does voting theory thinks that that's the worst possible system. Except that, as Ahmed says, first-past-the-post is the simplest system.
And therefore, people worry that if you have more complicated systems, people just won't vote, or won't vote correctly. I remember there was a California election, the one that Arnold Schwarzenegger won, where you had over 100 people on the ballot. So are you going to give all of them scores? No.
You need to have some way to either have a primary or something like that. But overall, yeah, I do think that it's worth trying; you have to figure out how to improve the current system. It might help third-party candidates.
I think that in the United States presidential system, where you have one president, and that person has a lot of power, and they're basically voted on directly rather than by the assembly as in a parliamentary system, you will always have an enormous preference for a two-party system. It's not like a parliamentary system, where there can be coalitions or something like that; you vote for the president directly, and that's always going to favor two parties. And indeed, in the United States, there have always been two dominant parties. But maybe you could have, in the primaries or something like that, quirkier outcomes or
more moderation if you didn't just do first-past-the-post. Look at the most recent British elections. Britain has a parliamentary system, but it uses first-past-the-post, which basically turns up the contrast knob, right? If you have 100 districts, and in every district 51% vote for party X and 49% vote for party Y, the country is pretty evenly split, but the parliament is 100% party X, right? So that's not really a good way to make things representative of the feelings in the country. That probably would still be the case if all you did was have range voting or score voting or something like that.
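The contrast-knob point is simple arithmetic, and a toy sketch (hypothetical numbers matching the 51/49 example above) makes it explicit:

```python
# Toy first-past-the-post model: 100 identical districts, each splitting
# 51 votes for party X and 49 for party Y. FPTP awards each district's
# single seat to the plurality winner.

districts = [{"X": 51, "Y": 49} for _ in range(100)]

seats = {"X": 0, "Y": 0}
for votes in districts:
    seats[max(votes, key=votes.get)] += 1

total_votes = sum(sum(d.values()) for d in districts)
vote_share_x = sum(d["X"] for d in districts) / total_votes

# Party X gets 51% of the national vote but 100% of the seats.
```

The same arithmetic run with any uniform split above 50% gives the same one-sided parliament, which is the sense in which the system amplifies small margins.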
But there are other ways that you can have more proportional representation. So I do think that modern polarization and related problems have become big enough that, even though these systems might be more complex, they are still worth trying out. And places have tried ranked-choice voting and things like that with actual success, as far as I can tell.
So definitely worth considering. George Hampton says, I was re-listening to your July AMA while walking in the grocery store, and while answering a question about John Moffat's theory, you told a story about your undergrad studies and said it was 40 years ago. We can admit it. I'm a bit older than you and just turned 60, and I've been thinking and saying things like that for a while.
My question is, how do you feel about approaching 60 and growing older? It's been surprising to me how much I've been reflecting on my life and the world in general, and I'm wondering if you have been having similar feelings. Yeah, I've definitely been having similar feelings. And it's a cliche, and it's predictable, and it's going to happen anyway.
For those of you who are young out there, it is definitely something on your mind when you grow older. In a very real physical sense, one does not bounce back from minor injuries in one's 50s as one did in one's 20s. And there are more of them. And your trips to the doctor become a little bit more action-packed than they are when you're young.
I'm in pretty good health right now, but you know, there are things that I have, like I have to take vitamin C supplements now, which I never had to do. Very minor thing, but it's something, right? It's a reminder that the number of things you're going to have to take as time goes on only increases. And in a more existential way,
I'm at an age where, absent some dramatic increase in human longevity science, I have had more years behind me than I have ahead of me, right? There's this feeling you have when you're young that... you're preparing for life, right? You're learning things, you're getting good at things, and then you're going to someday put all these new skills to work.
And later in life, you have to think, you know, look, I got a finite amount of time left. I got to figure out exactly what I want to get done. I got to prioritize. And what I want to get done, that doesn't mean necessarily like, you know, work, writing books or whatever.
Like sometimes what you want to get done is to travel or to have a good time or to enjoy your family life or your pets or good food or whatever it is. But I do absolutely think it shifts your perspective a little bit to grow older. This is not a novel... insight on my part in any way. But it is true for those young people out there. It will happen to you too.
Redmond says, I favor a low bar on suffrage (suffrage means the right to vote): a GED, W-2, DD-214, rent receipt, or other indicia of minimum participation in society. So for those of you who are not Americans or acronym freaks, a GED is a high school equivalency diploma, and a W-2 is the tax form that reports the wages you were paid for income-tax purposes.
DD-214 is a discharge from the military, things like that. And Redmond says, why should the village idiot and town drunk get to vote? So the answer is because the village idiot and the town drunk are people too. They have absolutely just as much right to vote as anyone else does.
I think there's a fundamental mistake that some people make about the purpose of democracy, or at least what I think the purpose of democracy should be. You know, we had the conversation with Henry Farrell some time back about how democracies can be useful as ways of making decisions.
You know, there's cognitive democracy, democracy as a way to find an equilibrium, sort of analogous to how markets work in economics. But that's not really the moral or ethical case for democracy. The moral or ethical case is that people should have a voice when you have a government. You know, again, I'm not saying anything new or insightful here.
When you give some subset of your society the right to make decisions for the society as a whole, where does that right come from? From the people being governed is the theory of democracy. People have the right to speak for their own interests. It's not an IQ test. It's not something where only the intelligent or only the productive or whatever get to participate.
Every human being who is above a certain age and a citizen of the country should get the right to participate. And I buy that ethical argument. I'm entirely in favor of it.
George says, you did an episode a couple years ago that roughly ended with a suggestion that someone should consider doing a careful analysis of cosmologists' psychological profiles and how they might inform said cosmologists' tastes for cosmological models. In your field and in the wild, have you noticed or do you have a hunch about any of those potential patterns yourself?
Maybe along the lines of: messy office equals bigger, more diverse universe; clean office equals neater, more predictable universe; and so forth. Well, no, I have not noticed anything nearly that straightforward, which is exactly why I think that someone should do a careful analysis of it.
It's far too easy to have your own personal, informal, not-careful analysis be swayed by some particularly vivid examples rather than being fair to the whole data set, right? And also, my prediction is that if there is any connection between personality and scientific-theory preference, it would not be anything like that straightforward. What I'm thinking of is things like this: whenever we pick scientific theories, we don't know the right answer; we don't have the correct theory in front of us, and we have different options, especially when those different options are ill-defined, when we don't exactly know all the details of what the options are going to be. When someone says, for example, dark matter versus modified gravity,
Well, that's great, but what modified gravity? How exactly are you modifying gravity? What dark matter candidate? Where exactly does it come from and how does it behave, right? So even what you call a model is ill-defined. But you have preferences, and this is well known that scientists will prefer different hypothetical models, even though they completely admit that we don't know yet.
They will have preferences, and those preferences are based on different criteria, and the criteria cannot be objectively weighted against each other. Maybe one theory is very simple to write down, but you really kind of have to stretch to make it cover the data. Another theory is more complicated, but it fits the data very well.
One theory is very elegant in its own right but doesn't fit in very well with other theories that we understand. You know what I mean? Some theories postulate entities that we don't see; some don't. Okay, so there are many different criteria, fruitfulness and things like that. Thomas Kuhn, long after he wrote The Structure of Scientific Revolutions, wrote an attempt to defend himself from charges that he was a relativist. Kuhn, in The Structure of Scientific Revolutions, argued that there are non-epistemic factors that go into scientists' preference for one theory or another. People read this as saying that theory choice was arbitrary, and he later wrote that, no, it is not arbitrary, but it's just not an algorithm, right?
It's not just perfectly mapped from here are the data, here are the theories, here's what scientists will agree on. There's judgment that comes into it, and there's different factors that come into that judgment, and those different factors will be weighted differently. Foundations of quantum mechanics is another example where you can come up with your own different things.
My suggestion, my conjecture, was that the way different people weight these different factors might be correlated in interesting ways with their personality profiles. I have no idea whether that is true.
It was Lee Smolin, former Mindscape guest, who pointed out that people who think that computers can someday become conscious are more likely to support the Everett interpretation of quantum mechanics.
And I thought that was extremely insightful, not because those two issues are directly correlated, but because they're co-correlated with a third thing, which is: how happy are you to take a very, very simple basic structure and extrapolate it very, very far, and have confidence that eventually the extrapolation will work?
That is what happens both in physicalist theories of consciousness, where you say, I don't understand consciousness yet, but I really do have a good amount of credence in the underlying physical construction of the world, so I suspect that when we understand consciousness, we will understand it from a physicalist point of view. And likewise for many worlds, where you say, look, the
whole thing about quantum mechanics is very weird. No matter what choice you pick, I'm going to pick the choice that is the simplest model and just believe its predictions, even if those predictions involve metaphysical surprises that I wouldn't otherwise have sought out, like all these other universes.
So I think that there is a psychological makeup in how physicists and philosophers, for that matter, go about preferring theories that we haven't yet established one way or the other. Robert Ruxandrescu says, does causality really exist fundamentally, or is it just a way of talking about events that happen from our limited perspective?
I'm thinking about the Humean view of laws of physics and the idea that causes and effects are emergent properties, and if so, can we really say that we cause things to happen when we make a decision to perform an action? It's more like we witness a movie and are tricked into thinking there's causality involved when it's really not.
Well, if I understand what you're getting at, I think this is the classic example of being reluctant to think that emergent things are real. But as soon as you use the word "we," you're already attributing reality to a certain kind of emergent thing, namely human beings, right?
There is a way of thinking about the universe in which there is just the most fundamental level, and Laplace's demon would understand it perfectly well, and that's the only way of talking about the universe. But nobody really talks that way. No human being really talks that way.
If you believe that tables and chairs are real, if you believe that people are real, then there's no reason why you shouldn't think that causes are real. The idea of a cause, as I've often said, is not a concept that is anywhere to be found in the fundamental laws of physics, as far as we know. But likewise, neither are cats, and I think cats exist.
They just exist at a higher level of abstraction, and so does cause and effect. There's no discrepancy there. There's no inconsistency there. Massimo Tori says, I recently read an article by Ethan Siegel titled New Theoretical Calculation Solves the Muon G-2 Puzzle.
In the article, he discusses the observed discrepancy between the measured value of the muon's magnetic moment and the value predicted by the standard model. This discrepancy had been seen as a potential indication of new physics beyond the standard model.
However, this difference appears to be the result of a flaw in the technique used to calculate the theoretical value of G-2 rather than a flaw in the standard model itself.
While I understand that the muon interacts with other fields, such as quark fields, I'd assume that these interactions would occur only through the electromagnetic or weak force given that the muon is a lepton and interacts via these forces alone. However, the revised calculation explicitly considers the contribution of the strong force as described by QCD.
Would you clarify why the strong force would play a role in determining the muon's magnetic moment? Sure, this I can do. In fact, I'm pretty sure I did a solo episode about this. See, if I did any research before I did my AMAs, I would have looked this up ahead of time. Maybe I can just do it in real time as I'm recording. I will look for my solo episode.
Yes, episode 144 was a solo episode called Are We Moving Beyond the Standard Model? And I discussed some of these purported discrepancies, most of which involved muons, and the most promising one of which was, well, I shouldn't say it's the most promising, it's probably not, but a promising one of which was the so-called G-2 puzzle of the muon.
And g-2 is a way of talking about the magnetic moment of the muon. The muon is a spinning, charged particle, so it has a magnetic field. How symmetric is that magnetic field versus how distorted is it? That's a thing you can measure, and you can predict it on the basis of the standard model. And the point is that if the muon were just a point particle all by itself, g, the magnetic moment, would be exactly two.
And it's not. There's little corrections to that. Why? Because quantum field theory. Because a muon traveling all by itself, just like any other fundamental particle, will be constantly interacting with other fields around it. And you can think about those interactions as being described by Feynman diagrams, right? The particle is just moving, but then it spits off a photon.
So the muon going along spits off a photon, then reabsorbs it. That is a Feynman diagram that contributes to the following process. Muon becomes a muon. So the muon is just traveling along through space. But there's all these buzzing fields around it, which we discuss using Feynman diagrams. There are also diagrams where the muon spits out a photon and eventually reabsorbs it.
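That single-photon loop is in fact the famous leading correction. Writing the "anomalous" part of the magnetic moment as $a_\mu \equiv (g-2)/2$, Schwinger's one-loop QED result (the same for any charged lepton at this order) is

```latex
a_\mu \equiv \frac{g-2}{2} = \frac{\alpha}{2\pi} + \mathcal{O}(\alpha^2) \approx 0.00116
```

and it is the tiny higher-order pieces on top of this, including the strong-interaction loops, that the whole controversy is about.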
But there are higher-loop diagrams where that photon, along its trajectory, will split itself into a particle and an antiparticle and then reabsorb them, right? So there's a two-loop diagram where the muon becomes a muon plus a photon; the photon becomes, let's say, a quark and an antiquark, which then get reabsorbed to become a photon again, which is then reabsorbed by the muon.
Quarks and anti-quarks can be produced in the motion of a photon, in what we call the propagator of a photon, because quarks are charged particles. And they are also strongly interacting particles. So eventually, you will have every particle of the standard model interacting with every other particle. They might not be direct interactions, right?
There's no direct interaction between muons and quarks or gluons. But there are indirect interactions mediated by these higher loop diagrams. That's where the muon magnetic moment comes from. And it is a beastly calculation to do it because, number one, you need more than one loop. And whenever you get more than one loop in a Feynman diagram, it becomes hard to do.
But number two, you have the strong interactions. As soon as you make quarks or gluons, those particles interact strongly. So that quark-antiquark pair that can be produced by the traveling photon will itself exchange gluons back and forth, and other quark-antiquark pairs. And that turns out to be genuinely hard to calculate. And it's not just photons spitting off of the muon.
You can have Z bosons or whatever. So... The fact that the muon is a bit heavier and interacts with the Z and the Higgs more than the electron does means it's a more noticeable feature of the muon's magnetic moment, which is why it's the muon that is being looked at for this discrepancy.
But all the way back when I did this solo episode, which was in 2021, right, over three years ago, it was already clear that one of the possible discrepancies was in that theoretical calculation. We had an experiment that came up with a measured value for the muon g minus 2. And we had two different ways of doing a theoretical calculation.
One, which was a lattice calculation, where you try to discretize spacetime, put it on a computer. And the other one was with the little Feynman diagrams, pencil and paper. But of course, they also use computers to do those integrals and so forth. And I forget which one it was, but one of them agreed with the experiment and one of them disagreed.
So there was a very obvious kind of loophole: if the two theoretical predictions don't agree with each other, then you shouldn't be surprised that one of them disagrees with the experiment.
And, you know, for the last 50 years, it's always been a smart bet to say if you think you have an anomaly that points to new physics, it will probably go away unless it's like super strong and absolutely unmistakable. So it's still possible that it's out there. But, yeah, this is a very plausible explanation for that apparent experimental anomaly.
Thies Janssen says, your ideas about space being emergent from entanglement seem to have a lot in common with the basic assumptions from Penrose's conformal cyclic cosmology. You mentioned that you were not very interested in CCC. Without diving deeper into the ideas, you don't find it very convincing. That surprises me. Why is that? Well, lots of reasons.
I honestly don't see the connection between space emerging from entanglement and conformal cyclic cosmology. But my lack of interest in CCC stems from the model all by itself, for two reasons. Number one, it's not really physics; it's magic. Penrose makes a certain assumption about what the very early universe looked like, which by itself is plausible.
And then there is the future of the universe. The far future of our cosmology will look like empty space with a positive cosmological constant. And you can do a mathematical redefinition to sort of match this early condition to the late condition. But it's a mathematical redefinition, not a physical thing.
It's not, you know, the physical conditions in the early universe are super different than the physical conditions in the late universe. So Penrose just sort of conjectures that one turns into the other, and that is just not predicted by any known laws of physics. He just made it up, okay? Now, maybe there are unknown laws of physics that would make it happen.
That's great, but it's super speculative and not based on anything that fits in with anything else we understand. And number two, I don't even think it solves the important problem. The important problem, as I see it, for these theories of initial conditions is: why was the entropy low? Why is there an arrow of time?
Why is there an asymmetry between past and future? And Penrose's answer, the CCC answer, is that it's just there; it's put in by hand. There is an eternally persisting arrow of time from the far past to the far future. And again, that might be right. That might be the correct answer. But it is highly unsatisfying to me.
As we were just saying a second ago, in the realm of theories that are a little bit ill-defined, people's personal preferences are relevant to their judgments about what's likely to be true. To me, CCC might very well be true, but it would be a highly unsatisfying answer to why the initial conditions of the universe look the way they do.
Jonathan Good says, how likely are sterile neutrinos as an answer to the majority of dark matter? Well, they're possible. You know, I would say that, again, we were just talking about different criteria we use for understanding theories that are not completely fully baked yet. Dark matter has overwhelming evidence that it exists, and we know some of its properties.
It's cold, and it's largely non-interacting, and we know approximately how much of it there is, okay? That's not a lot to go on, but it's a little bit to go on. And what you want to do is sort of minimize the number of miracles that need to occur. Or sometimes we say you want to minimize the number of invocations of the tooth fairy in your theory.
So the reason why weakly interacting massive particles are so popular, have been popular for a very long time for dark matter, is that they're involved in a completely different problem other than dark matter. They're involved in whatever happens at the electroweak scale, the hierarchy problem, the Higgs boson, whatever.
These are all things, in other words, where you could have a fully comprehensive theory that explained the hierarchy problem, the mass of the Higgs, and also the dark matter, and that would be great. In particular, in a very quantitative way, there is what is called the weak miracle, or the WIMP miracle, which is that if you just have a particle that naturally
annihilates and scatters with a strength similar to what we think is there for the weak interactions of particle physics, you will tend to get approximately the right density of dark matter. So you don't need to invoke a miracle. There's plenty of models where there are stable particles with the right density. That's exactly what you want.
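For the record, the quantitative statement of the WIMP miracle is the standard thermal freeze-out estimate, in which the relic density scales inversely with the annihilation cross section, roughly

```latex
\Omega_{\rm DM} h^2 \;\approx\; \frac{3\times 10^{-27}\ {\rm cm^3\,s^{-1}}}{\langle\sigma v\rangle}
```

so a weak-scale cross section $\langle\sigma v\rangle \sim 3\times 10^{-26}\ {\rm cm^3\,s^{-1}}$ lands within an order of magnitude of the observed $\Omega_{\rm DM} h^2 \approx 0.12$, which is exactly the sense in which no miracle needs to be invoked.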
But we've looked for the WIMPs and we haven't found them yet. Maybe we will, but we haven't found them yet, which is sort of depressing. The other popular candidate are axions. And again, the reason why axions are popular, it's not because you naturally get the right density of dark matter, but the axions solve another problem. They solve what is called the strong CP problem.
Why is the CP symmetry respected by the strong interactions but not the weak interactions? And axions are part of an explanation for that. So you like that. You like it when the particle explains something else. And even though axions don't necessarily have the right density to be the dark matter, you can give them the right density to be the dark matter without too much effort.
There's a free parameter that you can pick so that it would naturally have the dark matter density, and there you go. So people are happy about that. But we haven't seen axions either. We haven't looked for them nearly as hard.
And then if it's neither of those, I think that the next couple of candidates you would take seriously are, on the one hand, sterile neutrinos (neutrinos not like the ones we already have in the standard model, but ones that don't feel the weak interactions at all; that's why they're called sterile) and, on the other hand, black holes. Why are these two candidates so good?
Well, they're things that are very closely related to things that exist. Black holes do exist. But the problem with black holes as dark matter is it's very hard to get the right abundance of black holes with the right masses so that we haven't already ruled them out. It's very hard to get the right abundance at all.
You need to invent some very weird physics in the early universe to get enough black holes to be dark matter. And then you want them to be not too heavy and not too light: not so heavy that you would have already noticed them, not so light that they would have already evaporated away, right? So it's a very weird set of circumstances, but it's allowed, and it's nice because black holes are known to exist.
Likewise, the sterile neutrinos might be involved with some theories of neutrino masses, right? So like the axions, they might be related to some known problem. It's just not very obvious they should have the right abundance to be the dark matter. So we don't know. I have not followed the latest wrinkles in what people think about their favorite dark matter models.
I think the dark matter is out there, but since I'm now old, as we've already discussed, model building in particle physics is not how I choose to spend the rest of my time, unless I really invent something that is absolutely genius. Stay tuned for that. You'll be the first to know, I promise. Beetroot says, as a European, I'm watching with anxiety your presidential elections.
I'm very preoccupied that the Democratic Party and supporting social media channels are not highlighting enough the implications of Project 2025, but instead are fixated on every blunder of Trump, which is basically everything he says or does. I've taken a deep look into the Project 2025 paper.
What really concerns me is that it's a battle plan to end democracy in the United States and turn it into a Christian nationalistic autocracy. Do you have an opinion about the Democratic strategy and this specific aspect? Well, yeah, from my internal perspective, it's more or less the opposite of what you say, which I don't know if that's a good thing or a bad thing.
So for people who don't know or for people who are listening to this 500 years from now, we are in the middle of a – not in the middle of, near the end of, a couple months from the end of a presidential election between Donald Trump and Kamala Harris, the Republican candidate and the Democratic candidate.
And a Republican think tank, the Heritage Foundation, published this document, Project 2025, which would be a roadmap for policies they would like to implement. And it's a little bit sketchy because it wasn't an official Trump campaign document.
But if you look at it, like everyone who was involved in writing it, not everyone, slight exaggeration, but there's a huge overlap between the people who wrote that document and the people who either have advised Trump or worked in his previous administration. So it's more or less them coming out in the open with a wish list: this is what we would like to do if we get back into power.
And it's full of very specific strategies for doing these things. And the interesting thing is that it was put out at a time when, as most listeners know, Joe Biden, the current president, was still the Democratic candidate. Since then, he has stepped down and Kamala Harris got nominated. And Biden was not doing very well in the polls. And the Republicans...
basically thought they were going to win. And so they put out, they were basically giving red meat to the base, as we say. They were, you know, telling everyone on their side already what awesome things they were going to do, in a way that was completely horrifying to people on the other side, the Democratic Party. And, you know, a lot of it is, you know, Beetroot is not wrong.
A lot of it is both accumulating and then consolidating power in ways that are not very democratic, small d, democratic. So if you actually read the list of policies in Project 2025 to the median American voter, they are horrified. In my mind, the Democratic Party has done an amazingly effective job at publicizing the existence of this policy document. That's a very hard thing to do.
I don't know how they succeeded, honestly, in getting as many people to know what Project 2025 is. That seems to me to be a kind of inside baseball kind of thing. But somehow they have turned it into a target. I think they've turned it into a target pretty effectively. Whether it will work or not, I don't know. But there's a long way to go. I'm not going to make a prediction about the election.
It's going to be close. There's this thing called the Electoral College, which messes everything up. There's a whole bunch of attempts to suppress the vote or disenfranchise people in different ways, or to mount legal challenges when the votes come in. So we have a long way to go. But I do think that in that particular single aspect of highlighting the dangers of Project 2025 –
the Democratic Party has done better than I would have expected. Jacob says, you've often emphasized that curiosity should drive scientific inquiry rather than just practical applications. In your view, which specific area of physics is most likely to yield the next breakthrough that will have significant impact on everyday life?
Well, I'm not going to answer this with a specific answer, because I don't have one. I don't know exactly what area of physics it will be. But I suspect that it will be an interdisciplinary area of physics, so computational physics or biological physics or something like that. Biological physics in particular, though I don't know what you'd count.
If you learn how to build a robot out of DNA – which is a very plausible thing that people are trying to do right now. Does that count as physics? Maybe, I don't know. It's probably not. But who cares? What matters is the impact it will have. What I can say, the reason why I am answering the question is I do not think that it will be what we call fundamental physics.
I do not think that improving our understanding of particle physics and gravity and things like that will have a significant impact on everyday life. The room for significant impacts on everyday life has moved up to the emergent level, the level of biophysics and biology itself. Materials science could very well have an impact, certainly building better computers, building better batteries.
There's all sorts of ways in which physics can have an impact on everyday life, but it's still going to be constructed from the same set of particles and forces that we have known in the core theory for quite a while.
Pauline Guerri says, I've gotten used to thinking of probabilities as subjective, which implies that questions such as, what was the probability the nuclear war would happen in 1962, don't make much sense, even though they're related to coherent questions such as, what did smart people think at the time? My question is, does many worlds change that?
It seems like the proportion of the wave function associated with worlds in which the nuclear war erupted is an objective thing. but are split branches of the wave function a good approximation of all the ways things could have gone? So I see where you're going, but I think that Many Worlds does not change this in any very important way.
I think that the much more interesting question, the much more relevant notion of probability is... given what classical macroscopic observers were aware of at the time, what is the best they could do in talking about what the probability would have been? You know, I went to Villanova University as an undergraduate.
And while I was there, Villanova beat Georgetown in the NCAA basketball championship game. And they were huge, huge underdogs. The Villanova Wildcats were not favored to win. Everyone thought Georgetown would win. And one way this has been stated is, if they played that game again 10 times, Georgetown would have won 9 times out of 10, or 99 times out of 100. I think those are subjective.
It's hard to exactly measure things like that and call them probabilities in any objective way. But they're real and relevant and important. And I think quantum mechanics has nothing to do with it. I don't know this for sure, but it's very, very possible that, given the classical configuration of the world that eventually led Villanova to beat Georgetown, the quantum uncertainty was tiny. There is quantum uncertainty there, and there are going to be branches of the wave function in which Georgetown wins, but those branches might typically have weights of something like 10 to the minus 20 or something like that. It's not going to be anywhere close to what you normally think of as your probability, arising just from good old classical uncertainty.
So it's not a good enough approximation to all the ways things could have gone in the sense you're asking. Now I'm going to group some questions together. Connor Kostick says, I enjoyed listening to you and David Wallace discuss Schrodinger's cat. And it seemed to me that the approach articulated in your conversation would also address the apparent conundrums of the two-slit experiment.
Is that right? Then Tejas Damania says... Thank you very much. And finally, Luke Gendrow says, it seems to me that a lot of the fundamental mysteries we're still confronted with are related to the uncertainty principle in some way, or at the very least it comes up a lot when talking about them. Are there any legitimate theoretical attempts to refute or abandon the uncertainty principle?
And if so, could you give some idea of their flavor? If not, could you describe why the proof is strong enough for there not to be? So these sound like different questions. I do get it, okay? Connor is asking about the double slit experiment and conundrums there. Tejas is asking about quantum computers thought of from the many worlds perspective, and Luke is asking about the uncertainty principle.
But the reason why I'm grouping them together is I kind of am tempted to give a similar kind of answer to all of them, which is that there are mysteries and then there are mysteries. And this is something that is very important when talking about quantum mechanics in particular, because we motivate...
thinking about quantum mechanics by talking about mysteries, by talking about puzzles or paradoxes or things like that. But then, you know, is the electron a wave or a particle, right? Things like this, questions that seem difficult to answer because you can say here is the argument, you should think of it as a particle. Here's the argument, you should think of it as a wave.
But sometimes we then figure out the answer to those puzzles, and we know what the answer is, okay? These are all cases in which I would argue we know the answers. These are not actual existing mysteries that we need to keep banging our heads against. Sometimes we answer questions. So Connor asks about the apparent conundrums of the double slit experiment.
We know what the answers are to those apparent conundrums. It's a motivation for taking quantum mechanics seriously, but quantum mechanics gives completely unambiguous predictions for what happens in the double slit experiment. The only conundrum is there is no classical way of explaining what you see. But there's absolutely a quantum mechanical way.
You can do it in many worlds perfectly well, but you can also do it in Bohmian mechanics, or for that matter, in the Copenhagen interpretation of quantum mechanics. You get the same answers for what you observe in the double-slit experiment. So straightforwardly, yes, the approach we articulated does address the apparent conundrums of the double-slit experiment, but don't
think that people are worried about the double slit experiment. It's, again, just a motivation to take quantum mechanics seriously, not an ongoing puzzle within quantum mechanics. Tejas asked about what is happening in a Hadamard gate when it puts a bit into a superposition. The many worlds perspective on this is that you should think about the quantum state, the wave function or the vector in Hilbert space or whatever you want to call it. And the quantum computer is not a reality selector. The quantum computer is an example of a physical system which obeys the Schrodinger equation, again, in the many worlds version of things. Many worlds is really just saying that there are quantum states that obey the Schrodinger equation, so quantum computers are no different than that. At the end of the quantum computation, which is just, again, a vector evolving according to the Schrodinger equation, you measure it. And that measurement, in the many worlds language, is described by decoherence.
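For concreteness, the Hadamard gate in question is just a 2x2 unitary matrix, and the squared amplitudes of the resulting superposition are the branch weights. Here's a toy numpy sketch (my own illustration, not anything specific to real quantum hardware):

```python
import numpy as np

# Hadamard gate: a 2x2 unitary matrix.
H = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2.0)

ket0 = np.array([1.0, 0.0])   # qubit prepared in |0>
psi = H @ ket0                # after the gate: (|0> + |1>) / sqrt(2)

# Squared amplitudes of the superposition: the branch weights.
weights = np.abs(psi) ** 2
print(weights)                # ~ [0.5, 0.5]

# Unitarity: H is its own inverse, so applying it again restores |0>.
print(H @ psi)                # ~ [1, 0]
```

Nothing in the gate "selects" an outcome; the state just evolves unitarily until decoherence entangles it with the environment.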
You bring the output of the quantum computation into entanglement with its environment and you branch the wave function to different places where you got different answers. So again, there's no mystery there once you accept some particular version of quantum mechanics. Likewise, finally, for Luke's question with the uncertainty principle.
It's true that back in 1927, people were worried about the implications of the uncertainty principle, but they quickly realized that those implications were true. All versions of quantum mechanics have the uncertainty principle as a bedrock feature of it. The uncertainty principle is not an axiom or an assumption, it's a theorem that you derive from the axioms of quantum theory.
So today, most people, certainly anyone who believes many worlds or Bohmian mechanics or etc., accepts the uncertainty principle as just true. It's not really very problematic, right? Now, of course, it's possible that quantum mechanics is wrong, but that's hard to imagine.
Plenty of people, from Einstein on, have been trying to come up with better theories than quantum mechanics, but none of them have succeeded yet. So I think we should just accept the uncertainty principle. John Eastman says, you say that the doomsday argument fails.
The doomsday argument, for those who haven't heard of it, is an argument that doomsday for the human race is not that far off in the future, in some way of measuring, based on statistics and the fact that the past of the human race is not that far in the past, right?
It would be unlikely to find ourselves in the first 10 to the minus 5 of the whole history of humanity, right? Or even the first 10 to the minus 3 of the whole history of humanity. So therefore, probably, the whole history of humanity does not stretch very far into the future. That's the doomsday argument.
So you say the doomsday argument fails because you are not typical, but consider the chronological list of the n humans who will ever live. Almost all the humans have fractional position encoded by an algorithm of size log2n bits. This implies their fractional position has a uniform probability density function on the interval 0 to 1, so the doomsday argument proceeds.
Surely it is likely that you are one of those humans. No, I can't agree with any of this, really, to be honest. I mean, sure, you can encode the fractional position with a string of a certain length; log2(n) is the length of the string. Yes, that is true. But there's absolutely no justification to go from that to a uniform probability density function.
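To see what the uniform-position assumption would buy you if you granted it (and the rest of this answer is about why I don't grant it), here is a toy sketch. If your fractional position f among all N humans were uniform on (0, 1), then having observed n humans so far would bound N at the 95 percent level:

```python
import random

# Toy doomsday bound: IF your fractional position f = n/N among all
# N humans were uniform on (0, 1), then with probability 0.95 you'd
# have f > 0.05, i.e. N < n / 0.05 = 20 * n.
def doomsday_bound(n_so_far, confidence=0.95):
    return n_so_far / (1.0 - confidence)

# Roughly 100 billion humans so far -> N < 2 trillion at 95 percent,
# under the disputed uniformity assumption.
print(doomsday_bound(100e9))

# Monte Carlo check that the bound has the advertised coverage.
random.seed(0)
trials, hits = 100_000, 0
N_true = 1_000_000
for _ in range(trials):
    f = random.random()              # uniform fractional position
    n = f * N_true                   # humans "observed so far"
    if N_true < doomsday_bound(n):   # does the bound contain the true N?
        hits += 1
print(hits / trials)                 # ~0.95
```

The arithmetic is fine; the whole question is whether that uniform distribution over a reference class is ever justified.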
In fact, I am absolutely sure that I am not randomly selected from a uniform probability distribution on the set of all human beings who ever existed. Because most of those human beings don't have the first name Sean, right? There you go. I am atypical in this ensemble. But where did this probability distribution purportedly come from? And why does it get set on human beings, right?
Why not living creatures? Or why not people with an IQ above or below a certain threshold? Or why not people in technologically advanced societies? Or multi-celled organisms, right? You get wildly different answers depending on which reference class you use, the set of beings in which you are purportedly typical.
You know, so that's why it's pretty easy to see that this kind of argument can't be universally correct, because there's just no good way to decide the reference class. People try. Nick Bostrom, former Mindscape guest, has put a lot of work into this, wrote a book on it, and we talked about it in our conversation, but I find all the efforts to put that distribution on completely unsatisfying.
The one possible counterexample would be if we were somehow in equilibrium, right? If somehow there was some feature of humanity where every generation was more or less indistinguishable from the previous generation. Then, within that equilibrium era, if there was a finite number of people, you might have some justification for choosing that as your reference class.
But we are clearly not in equilibrium. Things are changing around us very, very rapidly. So no era in modern human history is the same as the next era. No generation is the same. There's no reason to treat them similarly in some typicality calculation. Artem Vorostov says, I was listening to your lovely podcast with Philip Goff, and the following question emerged in my mind.
Do you believe that consciousness could emerge in a purely Newtonian world? In other words, is quantum mechanics and/or general relativity essential for such emergence? Consciousness could be an essential component of quantum mechanics, with known or yet-inconceivable laws of evolution, and its role could be to choose the branch in the linear combination of branches that are Newtonian on big scales.
So again, I do not know what consciousness is or what it requires. I see absolutely zero reason why it couldn't emerge in a purely Newtonian world. It's true that things happen in our brains that are fundamentally stochastic because of the rules of quantum mechanics. But there's very little that happens in your brain that actually depends on a single quantum mechanical event, right?
Maybe you can come up with one, but most things are big and squishy and biological and therefore described pretty well by classical mechanics. So in some sense, consciousness does emerge in a purely Newtonian world, not 100%. But I think that that is completely plausible. So I'm happy to be surprised by this, by future research, but I see no reason right now.
Steve Welton says, I enjoyed your podcast with David Goyer and I'm a big fan of the Foundation series and the novels by Isaac Asimov. In his books, Asimov describes the laws of robotics, which are intended to protect humans and humanity, which are hardwired into the positronic brains of the androids. Do you have any thoughts on the future serious applications of these or similar laws?
I'm a little surprised there isn't more discussion about the subject in the mainstream other than predictions on the likelihood of AI doom. For fun, what would you suggest for the top three to four laws if we were on the verge of creating sentient androids?
Well, I will just note that, if any of you out there have read the robot stories by Asimov, he proposes the three laws of robotics. I'm not going to remember the three laws exactly, sorry about that, but it's things like: don't allow humans to come to harm; don't allow harm to come to yourself unless preventing it would allow humans to come to harm; those kinds of things. They're trying to be fail-safes, to be preventative, to keep the robots from doing bad things, to let them do as many good things as they can without doing any bad things. But every story in the robot series of stories is about the laws failing, about pressure being put on the first law versus the second law and them being incompatible, and things like that.
And I think that that's not just dramatically interesting, but kind of a feature of this kind of attempt to be too general. I think that if we're actually going to have... Well, I should say... There's been a lot of discussion of this kind of thing.
I don't think that Asimov's laws per se are a central feature in these discussions, but there's been a lot of discussion about how to make AI and robots ethical, or how to make their values align. It's called the alignment problem in AI, and that's exactly what this refers to. But I suspect that the right way to do it is going to be much more specific than general. You know, if you try to have a law in a robot brain that says, make human beings happy, then you run the risk that they will strap humans down to tables and give them drugs that will make them happy. And that's not the intended consequence that you want. And I think that the solution to that is: don't give such vague, open-ended instructions to the robots.
Be very, very specific about what you want them to do. So I therefore have to apologize: I do not have the top three or four laws that we should give to sentient androids. Chris Gunter says, suppose you were advising Marvel on a new storyline regarding Magneto, the master of electromagnetism, and he wanted to expand his electromagnetic powers in a novel way that sounds physics-y.
What would you suggest? So again, I'm not going to give a perfect answer to this, or at least a straightforward answer. I will tell you my thoughts about Magneto, which are that, originally, I was never a big X-Men guy when I was in my comic-reading days as a youngster. My comics were Green Lantern, Doctor Strange, and Thor. Those are my favorites. Occasionally, the Fantastic Four, I suppose.
But I knew about the X-Men. There were just too many X-Men, and they kept changing who was in the team and what they could do, so I never really got into it. But Magneto was famously one of the antagonists, and in the films, he's been a big part of it. And his power is manipulating, as Chris says, electromagnetism.
And when I was a kid, I thought that was a lame power, especially if it's just magnetism. Like, there aren't that many magnets out there, right? But of course, if it's electromagnetism, which really it is supposed to be, then that turns out to be super powerful. And in the movies, you know, anytime there's metal, basically, Magneto can do whatever he wants with it. So if you're trying to imprison him, he has to be in plastic or glass or something like that. But the truth is that once you've expanded his powers to be electromagnetic, any manipulation of electromagnetic fields, that's basically anything.
He's basically omnipotent, almost, at the human scale, because everything that happens in chemistry and biology is mediated by electromagnetic fields. The very stability of matter is mediated by electromagnetic fields. I mean, individual atoms have structure partly because of the Pauli exclusion principle, but the size of the orbitals of the electrons in the atoms is entirely determined by electromagnetism. Certainly all the bonding of different atoms into molecules is determined by electromagnetism. So the idea that you could actually imprison Magneto in a plastic or glass cage is ridiculous. He could just make any matter made of atoms and molecules dissolve as soon as he wanted to.
He could instantly kill any human being if he wanted to, or even more interestingly, he could make human beings think different thoughts by changing the neurons firing in their brains, right? So you don't have to go very far to imagine that if Magneto were anywhere near realistic, he'd be far and away the most powerful antagonist you could imagine having.
Henry Jacob says, when you coarse grain a system, it seems analogous to block diagonalizing a matrix into a macro scale and micro scale component. This would mean the system is actually the product of two systems. However, most of the coarse grainings I've seen, e.g. in thermodynamics, are not of this form. It seems like we are simply ignoring the off-diagonal terms. Am I right?
And if so, is there a penalty? Well, I'm not exactly sure what the matrix is that you have in mind, and again, I'm not sure that I'm going to be addressing your concern here. But if you have a matrix, so again, for the non-mathy people out there, this is an array of numbers, let's say it's a square matrix, so it's n by n numbers.
And these matrices appear in physics all the time. The metric tensor in general relativity is a matrix. The Hamiltonian or any other operator in quantum mechanics is also a matrix.
And if you have a form where near the diagonal of the matrix you have a lot of non-zero entries, and away from the diagonal the entries are all zero, then that gives you an enormous simplification over what the matrix is trying to do. Anyway, to get back to what I was saying, that is a form of coarse graining, but it is certainly not the only form.
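To illustrate the near-diagonal point with a toy example (my own sketch, not anything Henry specified): in a block-diagonal matrix the two blocks evolve independently, and small off-diagonal terms cause leakage between them at a rate set by their size.

```python
import numpy as np

# Two sub-systems packed into one block-diagonal matrix: the
# off-diagonal blocks are exactly zero, so the blocks never mix.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # "macro" block
B = np.array([[5.0]])               # "micro" block
M = np.zeros((3, 3))
M[:2, :2] = A
M[2:, 2:] = B

# A vector supported on the first block stays in the first block.
v = np.array([1.0, -1.0, 0.0])
out = M @ v
print(out)                          # third component stays exactly zero

# Ignoring small but nonzero off-diagonal terms is an approximation;
# the leakage between blocks is of the order of those terms.
eps = 1e-3
M_pert = M.copy()
M_pert[0, 2] = M_pert[2, 0] = eps
leak = (M_pert @ v)[2]              # = eps * v[0]
print(leak)
```

So in that matrix language, the "penalty" for dropping off-diagonal terms is an error controlled by how small they actually are.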
In the case of thermodynamics, for example, I don't know what matrix it would be that you are coarse graining; I'm not exactly sure what you have in mind. But it is absolutely true that when you want to coarse grain in a useful way, to move away from the matrix language and just think more physically, you want to do two things. Coarse graining just means you're throwing away information. Rather than keeping track of every single atom in a system, you just have some macroscopic features like pressure and temperature and density and things like that. That's a coarse graining.
So you map the microstates that have all the molecules and atoms and what they're doing – to macro states where you only have incomplete information. That's what coarse graining means. And you want that to be useful in some way, which means, number one, that the coarse grained system has its own dynamics, right?
That you can predict what it will do next, at least to a certain approximation, based on the information that is left after you've done the coarse graining. But number two, it sort of plays nicely with other coarse-grained systems. So a baseball is a coarse-grained system. A bat is a coarse-grained system.
And you can tell by giving me the behavior of the bat and the behavior of the baseball what's going to happen to the system, right? So there's enough predictive power in the interactions that you have interesting non-trivial dynamics there.
If you coarse grain badly, then you might have, rather than a baseball and a bat, something like the top half of the baseball and the top half of the bat, and call that a single subsystem. There's no useful dynamics for that system, right? Daniel Dennett talks about this in his real patterns idea; if you go back, we did a podcast with Dan and we talked about real patterns.
It makes a difference how you choose to throw away information and how you choose to keep it. So it's not like we're just ignoring things willy-nilly. We're ignoring things that empirically don't need to be kept to do the job that we want to do. Matthew Wright says, in your interview with Doris Tsao, at one point you said, I'm going to go off script here.
I presume that was mostly a figure of speech, but it got me wondering about the extent to which the podcasts are scripted. Do you plan out most of your questions in advance and do the guests know more or less what you'll be asking them or is it more off the cuff? It's basically in between that. I certainly don't plan out questions in advance, but I do have a few talking points that I want to hit.
I mean, most of these guests have done work. I think all of them have done work that I think is interesting to talk about. And so I try my best to have some understanding of what work they've done. And the big worry is that the guest has something really interesting to say, and I don't ask a question that lets them say it. You know, that's what you want to avoid. And you don't want it to be a lecture.
I certainly never tell them ahead of time what questions I'm going to ask because I don't know, but I don't even give them, like, an outline or anything like that. I might say, like, okay, you have a book coming out. We're going to talk about that. But no more than that. So you're right.
And when I said going to go off script here, that was more or less to mean I'm going to go off the whole topic I thought we were talking about, right? You know, for Doris, you know, the topic was – starting with the visual cortex and moving our way up to bigger questions about consciousness.
But, you know, there's a lot of other topics that we could have talked about, and we ended up talking about some of them. You know, I do think that if I critique my own abilities as a podcaster and an interviewer, I could be better at letting the conversation wander around to places that I didn't anticipate ahead of time. So I keep trying to become better at that.
But then, again, the problem with letting the conversation wander, you might end up in an interesting place, but you might then leave out things you know are interesting that the guest has to say. So I actually do need enough structure to be able to say what I think is their most interesting stuff. Brian Rahm asks a priority question.
In a podcast earlier this year, you offered a moving appreciation of both the writing and the science of the great sci-fi author Vernor Vinge (whose name, by the way, I think I mispronounced back then; maybe I'm still mispronouncing it, sorry about that), who had just recently passed away.
My initial request is for you to expand on those brief comments on the art and science that have contributed to his legend. You could also invite someone who knew and worked closely with him. And speaking for myself and his legions of other super fans, now bereft in the certain knowledge that the fate of mankind in the unfinished Zones of Thought series will forever remain unknown.
Maybe you can find a guest who might know something of his plans for the series' conclusions. So I'm going to be very, very disappointing here in my answer to this priority question. I don't know that much about Vernor Vinge's work. I've read one book by him. I talked about that in a podcast earlier this year when we were talking about the singularity and phase transitions and so forth.
But my knowledge otherwise is very superficial, so I am not the one to expand on those comments. I basically gave you all the comments that I could. I appreciate his work very, very much. He was a thoughtful guy, clearly, one of those science fiction authors who take the future very seriously. The job of science fiction is not to predict the future, but there is a variety of good science fiction which tries to take very seriously what the future could be. You know, there's all sorts of good science fiction. Star Wars is perfectly good science fiction, but it's not trying, in any sense, to tell you what the future is going to be like. It's basically some kind of combination of a Western and a Roman gladiator epic moved to outer space, right? It's not trying to envision the implications of any major change in technology or society. But Vinge's work, and other people's work like it, is exactly that kind of thing: really thinking through the implications of coming changes.
And so, again, not to predict that this is going to be what it's like, but to let us anticipate what the possibilities are. And I think that's super important. So he was one of the greatest at that. Let me just say that. That's probably the best I can do. Ken Wolfe says...
Years ago, I read a book by George Lakoff and Rafael Nunez called Where Mathematics Comes From: How the Embodied Mind Brings Mathematics Into Being. They had an interesting take on Euler's identity, where e, the base of the natural logarithm, raised to the power of i, the square root of minus one, times pi, plus one, equals zero.
They seem to more or less reject the idea that there's anything really profound about this identity. Instead, it was simply a function of the way we plot the imaginary component of complex numbers as the y-axis on the same graph paper we use for Euclidean geometry. Are they onto something? Are they missing something? Are they onto something and missing something?
I think they're missing something, honestly. Or maybe, to be more generous, they are both onto something and missing something. You know, it reminds me a little bit, because we were just talking about the uncertainty principle, of the following claim that you will hear, and I think maybe in my youth I even made this claim myself. The uncertainty principle is completely trivial.
It's just a feature of Fourier transforms. Yeah. It's just a restatement of the fact that the axes of the momentum basis are at 45 degrees to the axes of the position basis. Okay. Those are all true statements. Even if those statements are meaningless to you, the audience member, those are true mathematical statements.
But it is absolutely untrue that the uncertainty principle is somehow a triviality. The derivation of the uncertainty principle is a triviality once you have set up an enormous amount of work to understand what position and momentum are.
from the quantum mechanical viewpoint, that there are certain kind of sets of operators, that they're canonically conjugate to each other, blah, blah, blah, blah, blah, on a vector space rather than simply coordinates on phase space as they are in classical mechanics.
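As a numerical aside (my own illustration, not from the episode): the Fourier-transform version of the statement can be checked directly. A Gaussian wave packet's spread in position and its spread in wavenumber multiply to exactly 1/2, saturating the Heisenberg bound Δx·Δk ≥ 1/2 (in units where ħ = 1).

```python
import numpy as np

# Sample a Gaussian wave packet psi(x), compute its position spread,
# then Fourier transform to momentum space and compute the momentum spread.
N, L = 4096, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

sigma = 0.7
psi = np.exp(-x**2 / (4 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)            # normalize

delta_x = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)  # position spread

# Momentum-space wave function via FFT (the linear phase from the x-offset
# does not affect |psi_k|^2).
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
dk = 2 * np.pi / L
psi_k = np.fft.fft(psi) * dx / np.sqrt(2 * np.pi)
psi_k /= np.sqrt(np.sum(np.abs(psi_k)**2) * dk)        # re-normalize

delta_k = np.sqrt(np.sum(k**2 * np.abs(psi_k)**2) * dk)

product = delta_x * delta_k  # ~0.5 for a Gaussian, the minimum allowed
```

A narrower packet in position (smaller `sigma`) comes out correspondingly wider in momentum; the product stays pinned at the bound.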
So it's an absolutely profound change of perspective, and once you make that change of perspective, this particular mathematical result is kind of trivial, okay? So it's that kind of thing. "e to the i pi is minus one" is trivial once you understand what all those symbols mean. But really, you know, you're inventing trigonometry.
You know, there's a lot of things, like what a cosine and a sine are, that go into that kind of identity. How there is a natural way of coordinatizing the set of complex numbers using trigonometry is, you know, highly non-trivial. But then once you do it, once you set it up, then it's all trivial, right? So that's the sense in which they're both onto something and missing something.
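As a one-line check (my addition, not from the episode): once all that machinery of trigonometry and complex exponentials is in place, the identity falls right out of Python's complex math.

```python
import cmath
import math

# Euler's formula: e^{i*theta} = cos(theta) + i*sin(theta).
# At theta = pi, cos(pi) = -1 and sin(pi) = 0, so e^{i*pi} = -1.
z = cmath.exp(1j * math.pi)
# z is -1 up to floating-point error, so e^{i*pi} + 1 = 0
```

The "trivial" one-liner hides the non-trivial groundwork: `cmath.exp` already encodes the series definition of the exponential extended to complex arguments.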
The end result, given all the groundwork that you've laid, is very trivial. But all that groundwork is highly non-trivial. Kilngod says, here is my odd thought that could explain dark matter and the cosmological constant. We know vacuum energy is created from virtual particles. What if creating virtual particles also results in antigravitons? Well, there's no such thing as an antigraviton.
That's just not a thing that exists, just like there's no such thing as antiphotons. I know that physicists sometimes say for every particle there's an antiparticle, but that's just kind of not true, at least not in the usual way of thinking about antiparticles.
There's two different kinds of gravitons, just like there's two different kinds of photons: one with helicity plus one and one with helicity minus one. The same thing is true for gravitons, so there are different helicity states of the graviton, but they're not in any sense antiparticles to each other. It's generally speaking charged particles that have antiparticles.
If you have a charge minus one particle like the electron, you're guaranteed to have a charge plus one antiparticle, the positron. But particles like gravitons and photons carry no conserved quantities that could be negative, so they don't really have antiparticles. Jeff Babon says, it's been very interesting listening to your views on entropy and the heat death of the universe.
I'm a biochemist and so run experiments every day where local entropy decreases, whether it's growing bacteria, synthesizing DNA, or translating proteins. This is fine because they are not closed systems and all of those processes require an input of external energy. My question is about dark energy.
If it's constantly adding energy to the universe, does that mean the universe is not a closed system? And is it conceivable that it could be harnessed to decrease local entropy in a region of space forever? So a couple things going on here. One is that you gotta be careful about entropy versus energy, right?
The universe could very well be a closed system and still have dark energy becoming more and more over time. I wrote a blog post about this once; you can Google "energy is not conserved" and you will find my blog post explaining that there's a particular way of defining energy, which is to take the energy density per cubic centimeter and multiply it by the number of cubic centimeters.
And that gives you a number that is not conserved in cosmology. That's not because dark energy is weird. It's not conserved if your universe has nothing in it but photons also. Every photon loses energy as space expands. What's really going on is that there's an interplay between the energy of the stuff, the photons or the dark energy or whatever, and the curvature of space-time.
And that interplay is a little subtle. There are just as many rules in cosmology as there are in a flat space-time where energy is conserved, but the rule is slightly different from the one you thought it was. So the universe can be closed even with dark energy, and none of that has anything to do with entropy, except very indirectly. The entropy of the universe is increasing.
That's the law, the second law of thermodynamics. But how it increases depends on details, and the existence of dark energy is an important detail. And the specific detail is that the future of the universe will be empty space with nothing in it but the cosmological constant, if the dark energy is the cosmological constant, which, as I've said, I think it probably is.
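The "density times volume" bookkeeping described above can be sketched in a toy calculation (my own illustration; `a` is the scale factor of the universe). Photon energy density dilutes as a⁻⁴, so the total photon energy in a comoving volume falls as 1/a, while a cosmological constant keeps a fixed density, so its total grows as a³. Neither is conserved.

```python
# Toy bookkeeping of "energy = density x volume" in an expanding universe.
# a is the scale factor; a comoving volume scales as a^3.
def total_energy(a, rho_0, w):
    """Total energy in a comoving volume for a fluid with equation of
    state w: energy density scales as a^(-3*(1+w))."""
    density = rho_0 * a ** (-3 * (1 + w))
    volume = a ** 3
    return density * volume

# Photons: w = 1/3, so total energy ~ 1/a (each photon redshifts).
# Cosmological constant: w = -1, so total energy ~ a^3 (constant density).
E_photons = [total_energy(a, 1.0, 1 / 3) for a in (1, 2, 4)]
E_lambda = [total_energy(a, 1.0, -1) for a in (1, 2, 4)]
```

Doubling the scale factor halves the total photon energy but multiplies the vacuum-energy total by eight, which is the point of the blog post: that particular definition of "total energy" is just not a conserved quantity in cosmology.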
Scott asks a question where he says, I've been reading about the possibility of electroweak vacuum decay. If a vacuum decay were to happen inside a black hole, would the expanding bubble be contained by the black hole? Are vacuum decays more likely to happen inside a black hole due to the high energy densities?
So the idea here is that you may have heard that there's something called the Higgs boson. The Higgs boson is the particle which is an excitation in a field, the Higgs field. And the thing about the Higgs field that is different than all the other fields that we know for sure exist in the universe is that it has a non-zero vacuum expectation value.
So what that means is: when you have a field, like the electromagnetic field or the electron field or the quark field or whatever, it is natural to imagine that the lowest energy state is when the value of the field is zero. And that's typically true, but it's not true for the Higgs.
For the Higgs field, the Higgs field could be zero, but that is a higher energy state than the Higgs field living at some large value, which is what it actually has. The Higgs field plays this important role in the electroweak theory, in the unification of electricity and magnetism with the weak nuclear force.
So it is possible that the value the Higgs field has, even though it gives a lower energy density than the field would at zero, is still not the lowest energy density it could have. If that's true, it opens up the possibility of vacuum decay. There could be a little bubble, a very, very, very, very tiny bubble, where the Higgs field takes on a much larger value than it currently has.
But once it does that, it is lower energy than the Higgs field has in the world in which we live. So the universe likes lower energy. So that bubble, if it ever forms, would grow at a tremendously fast rate, basically the speed of light or very close to the speed of light. And if this happens all over the place, it would wipe out our universe, basically speaking.
All of the laws of physics would change. We would all die. And we would not even see it coming. It would just happen almost instantaneously. So the question is from Scott, could this happen inside a black hole? And would the expanding bubble be contained by the black hole? Yes and yes. It could happen. Would it happen more? Is it more likely to happen?
I think that depends on details that are not known, let's put it that way. But it's possible that it's more likely to happen, and it would stay inside the black hole. If you think about black holes from a sophisticated point of view, what is a black hole? A black hole is a region of space-time from which nothing can escape because of the speed of light.
You would have to move faster than the speed of light to escape from the black hole. The bubble of true vacuum that is nucleated in this electroweak vacuum decay scenario expands at a certain velocity relative to some reference frame, and that velocity is constrained by the speed of light. The bubble cannot grow faster than the speed of light. Therefore, the bubble cannot escape the black hole.
This is actually pretty clear if you have read Space, Time, and Motion, my first installment in the Biggest Ideas in the Universe series, where you have pictures of what it looks like inside a black hole: the space-time diagram, where the singularity is in the future.
And if you have a little bubble that is confined to the interior of its light cone, its future light cone will hit that singularity everywhere. There's nowhere for it to escape to the outside world. So don't worry. Worry about black holes all you want, but don't worry that they're going to nucleate electroweak vacuum decay. Okay, I'm going to group a couple of questions together.
Tim Giannitsos says, great conversation with Doris Tsao about consciousness. You mentioned that you can tell if something is conscious because of how it behaves, i.e. they are aware of certain things, update mental states, etc. And Rue Phillips says... Yeah. So again, I'm going to try to emphasize here, I don't know the answer to questions like this.
I don't even pretend to have a vague theory about questions like this. I do think that consciousness is likely to be a bit of a spectrum rather than a sharp phase transition. There can be sharp phase transitions in nature, so I could be wrong about that. There could be some... you know, let's put it this way.
In the theory of random graphs, okay, a random graph is where you have some dots which are going to be nodes, and then you randomly assign edges between some pairs of nodes and not others. And as you get very, very large numbers of nodes and you increase the number of edges between them, there's a phase transition that happens: the percolation phase transition.
For a small number of edges, the components are mostly disconnected, right? You've connected two nodes together, but they remain disconnected from everything else, probably. And for a large enough number of edges, it will be the case that almost all nodes are connected together, okay?
For some fixed number of nodes, as you increase the number of edges, there's that kind of percolation phase transition. So maybe something like that is responsible or necessary for consciousness. I'm just throwing it out there as an example because I don't think it's true.
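The percolation transition he describes can be simulated directly. This is a small sketch of my own (not anything from the episode), using union-find to track connected components: below an average degree of one, the largest component is a tiny fraction of the graph; above it, a giant component containing a finite fraction of all nodes appears.

```python
import random
from collections import Counter

def largest_component_fraction(n_nodes, n_edges, seed=0):
    """Build an Erdos-Renyi-style random graph and return the fraction of
    nodes in its largest connected component (union-find, path halving)."""
    rng = random.Random(seed)
    parent = list(range(n_nodes))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for _ in range(n_edges):
        a, b = rng.randrange(n_nodes), rng.randrange(n_nodes)
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb  # merge the two components

    sizes = Counter(find(i) for i in range(n_nodes))
    return max(sizes.values()) / n_nodes

n = 20000
below = largest_component_fraction(n, n // 4)  # avg degree ~0.5: fragments
above = largest_component_fraction(n, n)       # avg degree ~2: giant component
```

Below the threshold the largest component holds well under a percent of the nodes; above it, the giant component suddenly swallows most of the graph, which is the sharp-transition behavior being contrasted with a gradual spectrum.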
I think that it's probably more likely that you gradually develop the capacities, which when they're fully developed, we recognize as consciousness.
I don't even want to give the impression, as Tim says, that I said you can tell something is conscious because of how it behaves; as longtime listeners know, I can't remember what I said in most of these podcast conversations. I'm not at all claiming that that's necessarily the case. In practice, I think that you can
pretty clearly delineate certain conscious things from certain unconscious things by doing that, but there could also be edge cases, difficult cases, counterexamples, so forth, anything like that. So I don't really know whether anything intricate enough to exhibit those behaviors is conscious, nor do I know what the fundamental requirements are. Sorry about that.
JC says, what went on with the wild cat you were feeding? Still around? So yes, good timing to ask this question. The wild cat is Puck. Puck visited us on our back porch and we sort of adopted him or her or them. Puck might be non-binary for all purposes; we don't know whether Puck is a boy or a girl. But it was almost a year ago that Puck started hanging out, and we've been very...
We've been very dedicated to making life comfortable for Puck. But what we knew we had to do was take Puck to the vet to get shots, maybe to get spayed or neutered or whatever. You don't want more stray cats out there than you need, that's for sure. So there's a certain responsibility there that we take on because we're taking care of Puck.
And part of that is you've got to take Puck to the vet. And the downside was, you know, you have to trap Puck. Puck doesn't want to go to the vet. Puck doesn't know what it means. You cannot use symbolic language and explain to the kitty that this is for their own good, even though it's something new and scary. So you have to trap the kitty, which we did.
We bought a little trap, and we were worried that Puck was too smart to fall for the trap. No need to worry about that; it was actually pretty easy to trap Puck. But as I'm recording this, Puck is in the room next to me, in a little bathroom, chilling until we take them to the vet tomorrow to get examined. And then once that happens, we will release Puck back out into the wild.
I mean, Puck is clearly not going to be a happy cat if they're confined inside, so we'll keep them on a sort of hybrid indoor-outdoor lifestyle. Hopefully, as time goes on, Puck will be more and more acclimated to us. But the worry is that Puck doesn't like us anymore, right? Because we trapped Puck and are taking them to the big bad vet.
So hopefully that's not true. We're trying to be very nice, giving Puck all the treats and saying that Puck is a pretty little kitty. So we'll see how that goes.
Kevin's Disobedience says, if we've understood quantum mechanics perfectly, do you think it would be ideal to teach quantum mechanics before classical mechanics and then have second-year students derive simple macroscopic systems from quantum field theory, or will it always be better to work backwards and quantize our intuitions?
Good question, perfectly legit question, but I think that it will always be better to quantize our intuitions, for the following reason. It's a question of emergence, right? There is a limit of quantum mechanics that looks like classical mechanics. Quantum mechanics has a broader, wider range of applicability than classical mechanics does.
Any system that can be described classically could also be described quantum mechanically. But there are some systems that you don't need quantum mechanics to talk about. I'm emphasizing this because it gets fuzzy sometimes. Sometimes you get the impression that big things obey the rules of classical mechanics and small things obey the rules of quantum mechanics.
Everything obeys the rules of quantum mechanics. It's just that for small things, you need... to use quantum mechanics, whereas for large things you have the option of using classical mechanics. And classical mechanics is in many ways much easier. I can describe a single particle or an object, a particle-like object, just using a couple numbers, right? Position and velocity.
Whereas a quantum mechanical object needs a wave function, so in principle an infinite number of numbers. Furthermore, the specific realm in which classical mechanics applies to the world is the realm of our everyday experience. It is much more intuitive, much more easily graspable.
So there's no reason not to point to classical mechanics and teach that first and then generalize it later to quantum mechanics. I think that's a very natural thing to do.
Otherwise, you know, if you didn't think that was true, then rather than teaching math in usual ways, you would start with category theory or some other very highly abstract logical theory and derive all of the implications in logical order. But that's not necessarily the best pedagogical strategy.
Schleyer says, is it fair to think of complexity in the universe as having increased in a relatively small number of steps? Specifically, for like 10 billion years, there were just clumps of stuff, and then suddenly there was life. Then 3 billion years later, suddenly there was complex life.
Is it wrong to think of these things as the most meaningful increases in the complexity of the universe that we know of? Yeah, I think that's basically right. I mean, at least let's put it this way. That is basically my view. So I've been thinking about the process of complexogenesis, how complexity comes to be in the universe. And I absolutely do think that it's a series of phase transitions.
And I even think that, and this I'm much more tentative about, but I think that tentatively, those phase transitions can be thought of as more and more sophisticated uses of information. You know, there's a way of thinking about information, a physicist's way of thinking about information, such that low entropy systems contain a lot of information.
They contain a lot of information in the sense that you know a lot about the system if it's low entropy and you know it's macrostate because there's not that many microstates it could be in. If you have a high entropy system, there's many, many microstates that look that way, so you have less information about it.
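A toy version of this counting (my own illustration): take N coins, where a macrostate is just the number of heads. Entropy in bits is log₂ of the number of microstates compatible with the macrostate, and the information you gain by learning the macrostate is the remaining N minus that entropy.

```python
import math

N = 100  # coins; a microstate is the full heads/tails sequence (N bits)

def entropy_bits(heads):
    """Entropy of the macrostate 'exactly this many heads':
    log2 of the number of compatible microstates."""
    return math.log2(math.comb(N, heads))

# Low-entropy macrostate: all tails is a single microstate, zero entropy,
# so knowing the macrostate pins down all N bits of the microstate.
info_low = N - entropy_bits(0)

# High-entropy macrostate: half heads has astronomically many microstates,
# so knowing the macrostate tells you only a few bits.
info_high = N - entropy_bits(N // 2)
```

Knowing the "all tails" macrostate is worth all 100 bits; knowing the "half heads" macrostate is worth only a few, which is the sense in which a low-entropy state is a richer information resource.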
And as the universe expands and entropy grows, you're basically using up that resource that you had in the low-entropy past, and you're able along the way to use that information in more and more specific ways. First, just to sort of locate yourself in the universe: you know, here there's a star or a planet, there there's not. But then you can find food in the universe, right?
That's a more sophisticated use of information. And then you can sort of start thinking about things. That's a yet more sophisticated use of information. So that's a vague picture. And sort of firming that up is something that I'm trying to think about how to do in a quantitative way. Tariq says, my question is related to the matter-antimatter asymmetry.
Matter-antimatter should have been created in equal quantities in the early universe, and the assumption is that unless there was an asymmetry in that process that led to matter dominating, everything should have just annihilated.
Were the matter-antimatter particles created in the early universe the same type of particles we see today, or was it a special type of particle, antiparticle, that decayed into the conventional particles we know of?
Why is it that if matter and antimatter particles were created in the quantities that would have been present in the early universe, that they couldn't have interacted in chaotic ways, even if they were created in equal quantities, such that we could have regions dominated by matter and regions dominated by antimatter, but separated by vast regions of empty space where most everything did annihilate?
Well... There's a couple questions in there. You're cheating, Tariq, because you're supposed to only ask one question, but that's okay. We'll sort of group them together. You know, we don't know what the particles were in the early universe.
They might be exactly the particles that we know and love today, but there's plenty of theories according to which, you know, theories of unification and so forth, according to which the fundamental fields of the world were rearranged into different groupings in the early universe that we would recognize now as different particles.
or that there were other fields that are just too massive and unstable for us to notice now, but played an important role in the early universe. So we don't know any of these things. Even the claim that there should have been equal amounts of matter and antimatter, I don't know if that's true or not. I don't know who decides what should have been true in the early universe.
It's a simple, obvious starting point, but we don't know for sure that it's right. I think that there are arguments in the standard model of particle physics. Here's a slightly not well-advertised fact.
In the standard model of particle physics, we've never experimentally seen violation of what we call baryon number, the number of baryons, which is basically the number of quarks minus the number of antiquarks. Quarks never turn into antiquarks or vice versa; they don't even turn into non-quarks, as far as we know.
But the standard model predicts that there should be baryon number-violating transitions. They're called sphalerons, and you can look them up. But they're supposed to be so very, very rare in the current universe that you would never notice them. But maybe they were frequent in the early universe. And so even if that's true, even if you started with unequal numbers of baryons and antibaryons—
they would have equilibrated. They would go back and forth and end up with roughly the same numbers. There's loopholes to that argument, so don't take it too seriously, but there's various arguments that indeed it would have been natural, let's put it that way, from our current perspective to have equal numbers of particles and antiparticles. So today we don't.
We have more particles than antiparticles. One of the schemes for generating that asymmetry is indeed something like you outline. There's something called leptogenesis, which arises from producing a certain kind of neutrino more than its antiparticle: super-heavy neutrinos, which then decay.
These heavy neutrinos decay into particles and antiparticles asymmetrically, and then standard model processes turn some of those leptons into baryons. Maybe that's what happened, but we honestly don't know. It's a puzzling thing because, you know, when I was a starting-out cosmologist in the '80s and '90s, a lot of people were thinking about baryogenesis, and it's tantalizingly close
to the kind of work that is experimentally testable, right? I mean, it's not, you know, weird multiple universes or anything like that. You're messing with the standard model of particle physics or nearby phenomena and asking what happens. But I think people sort of have lost a little bit of interest just because it turns out to be harder to connect it to observations than people thought.
So we don't know is the short answer for why there is that asymmetry. Your specific scenario about sort of chaotic interactions I think is just ruled out by the data, right? The data say that the early universe was pretty smooth, roughly similar numbers of particles and antiparticles in every cubic centimeter.
And as I said, even if you started out with different numbers of particles and antiparticles, they would tend to equilibrate in the early enough universe. So as a matter of fact, we look at our universe today, like look at the cosmic microwave background, there's no big empty regions, which would separate regions of matter and antimatter.
All the matter, all the particles and antiparticles that existed back then were bumping into each other. So we think that all the other galaxies that we're looking at today are matter. The whole universe that we see is just more matter than antimatter. Brent Meeker says, my friend and I are having an argument about black hole Hawking radiation and Unruh radiation. I'm just glad to hear that.
I think more people should have arguments about this kind of thing. Susskind and Lindesay describe an observer hovering above the event horizon and then refer to his acceleration as creating a Rindler horizon and Unruh radiation, which they then go on to equate with the Hawking radiation of that black hole.
They then also conclude that if the observer were freely falling into the black hole, he would not observe any radiation. Lindesay writes, a freely falling observer would not detect a horizon or temperature without violating the principle of equivalence.
This seems wrong to me since it would imply that if he were orbiting the black hole out beyond the near field, then he would see no radiation, yet he must. Hawking radiation is not a subjective experience relative to one accelerated observer, it's a real loss of energy radiated away, whether or not anyone is there to see it.
And sufficiently far away, there's no difference between a stationary observer and an orbiting observer, so which view is right? I'm glad you're asking this question. I apologize to the folks listening for whom there's a bit of technicality in there that was hard to follow. I'll try to clear it up.
But I've worried about this question a lot, and I've been recently talking to a graduate student at Harvard, Chris Shalhoub, who has been tackling this question in a very careful, quantitative way. And I think he has an answer, and I think the answer makes sense to me, so I can lay it on you. I don't think I'm giving away any secrets. The idea... Let me just explain the idea of Unruh radiation.
So you probably... Most of you have heard of Hawking radiation. Black holes give off radiation with a black body spectrum with a temperature that you can calculate.
There's an analogous and much simpler phenomenon called Unruh radiation, which Bill Unruh invented after Hawking invented Hawking radiation, because Unruh was trying to sort of simplify it down to its lowest common denominator, which is what physicists like to do. So Unruh pointed out the following. If you have...
Flat spacetime, so no black holes, no gravity for that matter, just the vacuum state of flat spacetime, empty space, okay? If you have a detector sitting there, and you turn it on and let it equilibrate, et cetera, it would not detect any particles. It's in empty space. But now you ask what happens if you have a detector that is accelerating,
accelerating at a constant rate. Don't ask me why it's accelerating; maybe it has a rocket engine or whatever, but we assume that whatever is making it accelerate does not actually interfere with the experiment. And the experiment is, you have a particle detector looking for particles.
Now, you're still in empty space, okay? There's no difference in the quantum state of the universe whether your particle detector is stationary or accelerating. But there is a difference, Unruh showed, in what the detector detects. An accelerating detector detects particles in what you thought was empty space.
And that is a feature of the relationship between the particle detector and the quantum vacuum. You can even, as Unruh does, analogize: a detector moving at constant acceleration is kind of like one sitting stationary outside a black hole horizon. There is also a horizon, called the Rindler horizon, for the accelerating observer.
So there's a close mathematical connection there. Indeed, in my general relativity textbook, as a sort of bonus chapter at the end, I talk about quantum field theory in curved spacetime, and I do this example of Unruh radiation. It's much simpler than doing Hawking radiation, which is more complicated.
So anyway, there's a rough tension because if you're standing outside the black hole and you look at it, you're supposed to see thermal radiation. If you fall into the black hole, you're supposed to see nothing because you wave your hands about the principle of equivalence or something like that. But there's an expectation you're supposed to see nothing. So what's really going on?
What happens, roughly speaking, if I'm to vastly oversimplify, is that you don't have enough time to observe Hawking radiation when you're falling into the black hole. When you're falling past the event horizon, think of it this way. We say that when you're far outside, you see Hawking radiation with a certain temperature. But what is that temperature?
Temperatures of radiation are associated with wavelengths of radiation. For any given temperature, there is a wavelength at which most of the radiation is coming out, a typical wavelength for the thermal radiation. For a black hole, the typical wavelength of the thermal radiation is roughly the size of the black hole. It's the Schwarzschild radius of the black hole, roughly speaking.
So very low-frequency, long-wavelength photons are coming out. And basically what happens is, as you fall in, your speed increases, and you would imagine that you're seeing these photons blueshifted, okay? But really what is going on is that if you have a detector that is sensitive to certain wavelengths, it becomes sensitive to bluer and bluer photons.
But there just aren't that many blue photons. It's sensitive to blue, short-wavelength photons because there's only a short period of time before you cross the event horizon. So essentially, your sensitivity window blueshifts away from where the radiation is, and even though there is radiation, you end up not seeing it.
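To put numbers on "the typical wavelength is roughly the size of the black hole" (a back-of-the-envelope estimate I'm adding, using the standard Hawking temperature formula T = ħc³/8πGMk_B and Wien's displacement law):

```python
import math

# Physical constants (SI)
hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m/s
G = 6.67430e-11          # m^3 kg^-1 s^-2
k_B = 1.380649e-23       # J/K
b_wien = 2.897771955e-3  # m K, Wien displacement constant

M_sun = 1.989e30  # kg, one solar mass

# Hawking temperature and Schwarzschild radius of a solar-mass black hole
T_H = hbar * c**3 / (8 * math.pi * G * M_sun * k_B)  # tens of nanokelvin
r_s = 2 * G * M_sun / c**2                            # about 3 km

# Peak wavelength of the thermal spectrum (Wien's law)
lam_peak = b_wien / T_H

ratio = lam_peak / r_s  # order ten: wavelength ~ size of the hole
```

The peak wavelength comes out within an order of magnitude of the Schwarzschild radius, which is why an infalling detector, whose sensitivity window is racing off to short wavelengths, runs out of time to register photons this long.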
So there actually is a consistent story, I think, that you can tell. Chris is still writing his paper, so forgive me if there are elaborations to come on that view. Eugene says, computer scientists mostly assume that P is not equal to NP, which means that there's a variety of problems for which exponential time is required to compute answers on a Turing machine.
For quantum algorithms, the equivalent of P is called BQP. You've mentioned that there is some support for the idea that locality is not fundamental in our universe, e.g. from the holographic principle.
If locality is not fundamental or computation is not bounded by the limitations of gravity, are there implications for the existence of computational engines that do not require exponential time on a wider class of problems? Or is this a nonsense question? No, this is a super good question, Eugene. This is a very, very important question. There's various interesting ways.
Scott Aaronson, former Mindscape guest, is literally the world's expert on this kind of thing. But there are various interesting ways in which, if you change the laws of physics by a little bit, you end up granting yourself powers to answer hard questions faster. If you have a time machine, for example, you can answer hard questions faster than you thought you could.
So you have complexity classes in the presence of closed timelike curves. So what you're asking is, do you change complexity classes in the presence of non-locality? And the general answer that you would expect, if it's sort of generic non-locality, is yes, you have more power than you thought you would. It's the non-generic cases that matter.
So in holography, for example, in the AdS/CFT correspondence, you have a boundary theory and a bulk theory; that's holographic. It is non-local, because anything that is happening in the bulk is described non-locally on the boundary and vice versa, okay? But nevertheless, if you consider the bulk by itself or the boundary by itself, you have two local theories.
So there's a non-local relationship between the two theories, but each theory is perfectly local. So neither theory actually gives you the capacity to do calculations any faster than you thought you would. Nevertheless, maybe there's other kinds of non-localities. Maybe there's something more subtle. Maybe the quantum gravity is actually giving you some other kinds of powers.
I do think that it's probably not an accident that it is so hard to solve NP problems. I should have said this earlier: P versus NP. P is the class of problems that are easy to solve, okay? They only take polynomial time, that's what the P stands for, which means if you have N inputs, then the difficulty of the problem, the number of steps it would take, the time it would take, scales as N to some power, okay?
All the details are kind of fuzzy and don't matter. But roughly speaking, these are easier than exponentially hard problems, problems that take, you know, e to the number of inputs to solve with any algorithm you could write down. NP problems are problems where you can check a solution very easily, but you are not guaranteed to be able to find the solution very easily.
Now, the guaranteed is playing a big role there. It's actually hard to know, given a problem, whether it is NP or whether it is polynomial, whether it's P. Let me put it that way. You can take a problem where you know it's easy to verify a solution, but it's hard to know whether it is difficult to actually find the solution.
So just a classical illustrative example of easy to check versus hard to solve is, if I take two very, very large numbers, and I multiply them together to get a third even larger number, and someone hands you just the larger number and says, factor it into two smaller numbers. Okay, if someone just says factor it, that's very hard to do, as far as we currently know.
It's hard to exactly quantify precisely how hard it is to do, but it's hard. Whereas if someone says, I think it's these two numbers that got multiplied together, then you can easily multiply them and check, right? It's much easier to check the numbers. So OK, would nonlocality help us with this? Like I said, various forms of different changes in the laws of physics do help you.
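To make that check-versus-solve asymmetry concrete, here is a minimal Python sketch; the particular primes, the trial-division approach, and the grid of numbers are all arbitrary choices for illustration, not anything from the discussion itself:

```python
# Checking a claimed factorization is a single multiplication -- fast.
def check(n, p, q):
    return p * q == n

# Finding the factors by trial division (odd n assumed): the work blows up
# as the number of digits of n grows, which is why factoring is believed hard.
def factor(n):
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 2
    return None

p, q = 1_000_003, 1_000_033  # two primes, chosen arbitrarily
n = p * q

print(check(n, p, q))  # checking the claimed factors: essentially instant
print(factor(n))       # feasible at this size, hopeless for 600-digit numbers
```

Real cryptographic moduli have hundreds of digits, where trial division, and indeed every known classical algorithm, becomes astronomically slow, while the multiplication check stays trivial.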
So I would bet that generic changes would help you. But I also feel that in the real world, those changes seem not to be within our grasp. So I'm suspecting that in the real world that will continue to be the case, that there's not going to be, in practice, any nonlocality from quantum gravity helping us to solve NP problems.
Emmett Francis says, you've done well convincing me this isn't the case, but I'm curious, what is your best steel man argument for consciousness being linked to the collapse of wave functions? Well, I think that there's two aspects here. It depends on whether you think that consciousness requires something non-physical or not.
Because I think that if wave function collapse is somehow related to consciousness, wave function collapse, whether it is done by conventional quantum mechanics or not, is still a perfectly physical thing, right? There are wave functions. They're physical, they collapse.
It's not introducing a new element into your ontology of the world, some purely mental aspect that is affecting the wave function collapse. If I have wave function collapses that are describable by some perfectly physical theory, then the only way to make them connected to consciousness is to somehow say that consciousness requires a certain kind of dynamics. I think that's what Roger Penrose thinks.
I honestly do not get it. I can repeat his argument, but I don't think it's a steel man argument because I think it's pretty weak, you know, his argument. And again, Scott Aaronson has vividly explained why this is not a good argument, but it's roughly speaking based on Gödel's theorem. This is why he wrote The Emperor's New Mind, and, you know, it's about Gödel and machines and intelligence.
And the idea is that Gödel has proven that in a sufficiently powerful formal system (I'm going to paraphrase, apologies to the experts out there) there are true things that you can't prove, if you assume that the system you're looking at is consistent in some way. And Penrose says, but I'm a mathematician, I can see the truth of these statements even though I can't prove them.
Therefore, I'm better than a computer. I'm not at least a sort of computer that obeys the kind of formal system logic that Gödel was thinking about. And that's why he needs to go beyond the ordinary laws of physics and he does so in a way that invokes the collapse of the wave function. So the short – there's various longer responses to that.
But the short answer is how do you know that your axioms are consistent? That is kind of the point. Gödel has proven you actually can't prove the consistency of the axioms within the system itself. And you might have a feeling that they are consistent, but that's not a proof of anything at all. And even a computer can sort of – guess at things that are not proven false.
So I'm not quite sure why we need to modify the laws of physics to make any of this happen. Anyway, the other idea is that consciousness must be purely non-physical. Now, that I don't... Again, I have a tough time making a steel man argument for that because I'm not—well, because I find all the arguments for it very, very weak. You know, the zombie argument or the Mary's room argument or whatever.
I think it's pretty straightforward to show why they don't work. So my steel man argument for consciousness requiring something nonphysical would ultimately, at the end of the day, just come down to consciousness being very difficult to understand. Therefore, we're going to have to go beyond what we already understand about the physical underpinnings of the natural world.
That's not a crazy argument, right? I mean, but it's not really an argument about consciousness. It's an argument about epistemology, about how well we know what goes on in the world. What is the future theory of the world going to be like?
I would grant the plausibility of an argument that says, look, in the space of all possible future theories, I just don't see that many that rely on the laws of physics as we understand them and account for consciousness, okay? Like, I don't believe that, but I can see that that is an argument, and maybe that would ultimately lead you to want to mess with the collapse of the wave function.
Kyle Kabasares says, I'm curious what your thoughts are on Neuralink implants. Would you ever consider getting one implanted within yourself if it were verifiably safe and could enhance your ability to do your research? So first note, you shouldn't call them Neuralink implants.
You know, there is this company founded by Elon Musk called Neuralink that is trying to implant brain-computer interfaces inside people's brains. But it's not the only company that is doing that, and it's not even anywhere near the furthest ahead, as far as I can tell right now. So there's a burgeoning area of brain-computer interfaces.
Some of them are what we call invasive, drilling a hole in your head and putting something inside, but mostly these days people don't want to do that, so they're looking at non-invasive BCIs, brain-computer interfaces. There are, you know, some obvious shortcomings to doing it that way, but it's way safer, so that's what's going on. Would I consider it done to myself?
Yeah, I mean, there are many, many worries that one would have about that, but there are many, many worries about automobiles or, you know... nuclear power, or a whole bunch of different things. Fire, there's many, many worries about. So sometimes one can control the dangers in an acceptable way.
And I think that, in fact, I will go further, and I will say that probably eventually everyone will have some kind of brain-computer interface. We haven't been able to talk directly about the technology of brain-computer interfaces that much here on Mindscape, but we did talk with Nita Farahany in what I thought was a very good podcast about the dangers to privacy of brain-computer interfaces.
I do think that they're coming, and I think that probably, as is often the case with new technologies, as Daron Acemoglu explained to us, they will initially very plausibly not be to the benefit of many people. They'll be to the benefit of a small number of people, and other people will suffer, but then eventually we will equilibrate, and hopefully everyone will be better off.
I think that's the optimistic scenario. BG167 says, of the papers you've published, are there any that would deserve Nobel Prize nominations if their conjectures were confirmed in experiment? And do you think that the Nobel Committee generally chooses wisely in the physics category?
Second question first, I do think the Nobel Committee generally does a pretty good job, at least in the areas that I understand. I can nitpick. I do think that the Foundations of Quantum Mechanics is an area where the Nobel Committee could do more recognizing. It gave the prize to the people who tested Bell inequalities, so that was totally deserved. That's great.
They never gave a prize to John Bell, right? He died. Because back in those days, the foundations of quantum mechanics were not thought to be all that important. Even today, the theory side of those foundations is very underrepresented. And I don't just mean people working on, like, many worlds or whatever. Nobel Prize-winning discoveries do tend to involve experimentally testable ideas, right?
Short-term experimentally testable ideas. Stephen Hawking never won the Nobel Prize. And Roger Penrose winning the Nobel Prize was actually a bit of a surprise. They sort of bent the rules to include the existence of black holes as Roger Penrose's Nobel Prize-worthy finding. Yeah, you know, okay, fine. I'm not going to argue about that.
But in quantum information theory and quantum foundations, people like Charlie Bennett or Wojciech Zurek, Yakir Aharonov, there's a bunch of physicists who've done very important work on quantum mechanics, who I think deserve the Nobel Prize. But anyway, that's not the point. I think that generally they do a pretty good job. The one other prize I think is really just calling out to be given would be
the experimenters behind the Large Hadron Collider who helped find the Higgs boson, right? We gave it to the theorists but not to the experimenters. It's very complicated because the Nobel Committee has decided no more than three people can win it at any one time. And there were thousands and thousands of people involved. So I don't know how they will –
finesse that one, but I do think that it is deserving. In terms of my papers that I've published, you know, one can get lucky with the Nobel Prize. There's plenty of examples of completely worthy, good Nobel Prizes that were given out to people who basically got lucky. They didn't even know what they were doing.
Penzias and Wilson, who discovered the cosmic microwave background, are the best examples of that. They were not looking for the cosmic microwave background. They were looking for other things, but they found it. Perfectly okay. They found it. The prize is not given for having the most IQ points. It's given for finding things, for really discovering something true about nature.
So I don't think that any paper that I've written that makes verifiable experimental predictions is like super duper clever in the way that general relativity or quantum mechanics was super duper clever. But I could get lucky.
I do have papers out there that predicted different models like violating Lorentz invariance or how dark energy could interact with gravity or with other particles, I should say. Sorry about that. Which if I get super duper lucky, that could show up. Whether that would actually merit a Nobel Prize for me, I'm a little dubious of that. Let's just put it that way. But I have made predictions.
Any one of them is unlikely to come true. But if they do come true, I'll become famous. That would be great. I would love it, prizes or not. Tyler Haley says, I have a friend who is currently getting his master's in physics and he told me something I'm having trouble getting my head around. He said that light interacts with matter but matter doesn't really interact with light.
He uses the example that you can't push a photon but a photon can push you. Can you make out what he means? He's a well-read student and clearly understands interactions are two-way events so I think he's getting at something a bit deeper. Well, I think he's just getting – he's just wrong. That's what I think. I can push photons all the time. I can put a photon through a prism.
I can bounce photons off a mirror, right? I can detect photons in a CCD camera. I don't see any problem with pushing photons in a very real way. I really am not sure what your friend is getting at. There is a true statement you can make that is sort of grammatically similar to this statement, which is that photons interact with charged particles, but photons are not themselves charged particles.
So photons directly interact with electrons and protons and so forth, but they don't directly interact with each other. That doesn't sound exactly like what your friend is getting at, but that's a true statement that I would trust. Okay, Jeff Davis says, there are a lot of unsolved problems in cosmology, the Hubble tension, the nature of dark matter, formation of supermassive black holes, etc.
Predictions are hard, especially about the future, but I wonder which you find most puzzling and most likely to require new physics to solve, and which are you most optimistic about being solved in the nearer term, in our lifetime, for example? I would be pretty optimistic about all of these being solved in my lifetime. I hope my lifetime goes on long enough for all those to be solved.
But these are three very different puzzles: the Hubble tension, the nature of dark matter, the formation of supermassive black holes. The Hubble tension is a relatively new problem, and it might just go away via better observations or better understanding of our current observations. We had Adam Riess on the podcast.
He's done an amazingly good job of establishing that to the best of our current understanding, the Hubble tension is a real thing. It's not just a silly mistake. If it's a mistake, it's a very, very subtle and interesting mistake, and they haven't been able to find it yet.
But it still could be out there, and as I've often said, the Hubble tension is not something for which there's any obvious solution. It's not like, oh, if I just add simple ingredient X, everything fixes itself. And so that decreases our credence that there is some complicated theoretical solution. That increases our credence that it is in fact some issue with the observations.
But the non-zero credence is there for both, so I really don't know what's going to happen. For the nature of dark matter, on the other hand, we've had it for decades, and we have lots of ideas, lots of good theoretical ideas that could explain it. We just don't know which one is true. So at any moment, we could get lucky and find the dark matter, and that would be it. But we might not get lucky.
We don't know. The formation of supermassive black holes is, I suspect, a much easier problem than these other ones. Supermassive black holes seem to form in the early universe a little bit sooner than most experts had expected. But, you know, it's a complicated problem. And I think that we're at the
stage now where we're throwing big supercomputer simulation resources at it, and we're getting data from JWST and other sources, so we know more about the conditions under which these supermassive black holes form. So I'm relatively optimistic that that one will be figured out fairly soon.
Gregory Kusnick says, in the August AMA, you said something along the lines of, if God exists, he's powerful enough to make me believe in him. The corollary is that he can just as easily convince you of his non-existence or indeed of any other consistent proposition that suits his purpose. It seems to me this quickly gets us into the realm of cognitive instability.
If a theory posits the existence of a being powerful enough to arbitrarily manipulate evidence, then there's no coherent way to assign a credence to that theory. Am I way off base here? I don't think that you're quite right, but I think that the point is that the word God is not by itself a theory.
You know, I've given talks where I've pointed out that theism as a general idea is by itself not well-defined. So to have a theory that you could assign credence to, you can't just say God exists. You can't even just say... there is a powerful being, okay, a being powerful enough to arbitrarily manipulate evidence, you also need to specify some details about how that being actually behaves.
Does that being have goals, right? Does that being have feelings, you know, wants, desires? So when I say that God is powerful enough to make me believe in him, I'm specifically referring to a version of God that is pretty close to the standard, traditional, monotheistic view of a being that is omnipotent, but also omniscient and omnibenevolent.
A God that cares about me and wants me to believe true things. That would give me evidence that God would not just try to trick me. Eric Wonlick says, Um, I don't know is the short answer. You need to specify whether or not I can just get into the tank and then see how long I can take it, or do I need to specify ahead of time how long I want to be in the tank?
I am not very good at this kind of thing. I know from previous experience with sensory deprivation tanks and things like that, I have a little bit of claustrophobia. In this case, it's not really about being in small areas, but about not being able to move. Like, that bugs me.
There's something primally irrational in me about the inability to move, and you did specify in the question, I cannot do anything with my hands or leave the tank. So that does, at a visceral level, bug me. And, you know, at some level, one has to just accept that one is old and one's skin has blemishes and scars and wrinkles, right? And so...
even though I can't give you a specific answer in periods of time, I don't think I would actually vote for a very long period of time spent in the tank. Sorry, if it were a period of time spent in isolation in a house where I could walk around and eat and read books, even though I couldn't talk to people or check the internet, then I'd be willing to spend much, much longer.
Marie Rouskew says, on the topic of picking the right problems to work on, how do you persuade someone, either a person or a group, what the right problem is, or shift their focus to it? I mean, in general, not necessarily in physics. I face the issue that my team usually won't focus on anything other than the easiest or the most likable thing of all the things to solve.
Yeah, that's a very good question. I actually don't know the answer to that. I haven't quite faced that problem. I mean, maybe in some ways I have. There's been times when I've had, you know, my team, my group of grad students, postdocs and whatever, and I would say, you know, we really should think about this issue. And they would go, hmm, that sounds hard and not do it.
But, you know, part of me is believing that maybe they're just right, you know. Part of me says I'm the old person here. I should just tell them what to do and they should listen to me. But another part says, you know, don't be that advisor who thinks they always know what is best. So I think that, you know, there's nothing better than honesty in these situations.
If you have a good reason why you think it would be worth it to do this harder thing, then tell them what the reason is. See if you can actually articulate the rational reason why they should do this very difficult thing, spend all their time working on this difficult problem. And maybe if you can't be very persuasive there, maybe the reasons aren't quite as good as you thought.
Stuart Hain says, in your discussion with Nate Silver, there was a mention of a 50-50 risk to lose everything or have two times plus epsilon framed in terms of utility. Viscerally, I would not risk everything for two times plus epsilon on a 50-50 bet, even though the odds say I should. This made me think that utility may not scale linearly. Any thoughts on this?
Is utility more like a log function in shape? Well, a function of what is what you have to ask. So economists know perfectly well that utility does not scale linearly with something like wealth or money or whatever, right? If you are poor and destitute on the street, $1,000 is worth a lot more to you than if you already are a billionaire. OK, that's a very, very well understood thing.
And I think that it is kind of like a log function. But of course, the actual curve is going to depend on psychology and individual idiosyncrasies and things like that. So you have to make assumptions, some assumptions to get there. But the point of Nate Silver's examples is that this is not a 50-50 bet for two times the money.
two times plus epsilon the money, it is for two times plus epsilon the utility, whatever that is, okay? So you take what utility you have for a certain amount of money and you compare getting zero of it or getting twice as much of it. That's the game you're supposed to be playing here. So you're right, the utility is not linear in money, but that's okay.
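As a quick numerical illustration of that distinction, here is a sketch comparing the two framings of the 50-50 bet; the starting wealth, the epsilon, and the log form of the utility function are all assumptions made for the example:

```python
import math

def utility(wealth):
    # Log-shaped utility, a standard economists' toy model
    return math.log(wealth)

wealth = 100_000.0
u_now = utility(wealth)

# Framing 1: a 50-50 bet on *money* -- drop to almost nothing vs. double it.
# The expected money is fine, but the expected utility is terrible, because
# log utility plunges as wealth approaches zero.
u_money_bet = 0.5 * utility(1.0) + 0.5 * utility(2 * wealth)

# Framing 2: Nate Silver's version -- a 50-50 bet on *utility* itself,
# zero utility vs. twice-plus-epsilon your current utility.
epsilon = 0.01
u_utility_bet = 0.5 * 0.0 + 0.5 * (2 * u_now + epsilon)

print(u_now)          # utility of standing pat
print(u_money_bet)    # lower: the money bet is a bad deal in utility terms
print(u_utility_bet)  # higher than u_now, by construction
```

So the visceral refusal of the money framing is exactly what a concave utility function predicts, while the utility framing is favorable by definition.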
Economists worry about utility, not about money. Ari Moody says, if extraterrestrials were advanced enough to send a signal to us, would we be able to even recognize it as an ET message? Wouldn't it be more like me trying to converse with ants? Well, I think there's a couple things here.
As I think I've already said, I do think that, you know, we have crossed, we human beings have crossed some cognitive threshold, some phase transition that lets us think symbolically and in terms of language and written symbols and transmitted symbols that is probably pretty universal. It would be my guess.
I don't place huge credence on this guess because we have no data about it, but I'm willing to think that, just like utility does not scale linearly with money, the ability to think does not scale linearly with evolutionary time, okay? I don't think that, even if human beings evolved for another billion years, they would have as many transitions in what it means to think as we have had between a billion years ago and now, okay? We will be better at thinking because we'll be better at computation and we'll figure out clever ways to solve problems, but I still think we'll be Turing complete. We'll be solving problems like a good Turing machine, just a much more efficient one.
So I see no reason to think that a message from ET would be impossible for us to decipher as if we were ants. The ants you can't converse with because ants just can't converse using symbolic languages. More importantly, though, if these really are super smart extraterrestrials, I would give them enough credit to think that they would send us a signal we could read.
If it's true that there are various higher forms of consciousness or cognition to which we humans don't have access, then either these ETs don't want to communicate with us or they understand what level we're at and they're going to send us a signal that is comprehensible to us.
Cooper says, do physicists have crackpots that tend to focus on them personally, like how people have a stalker, or do crackpots tend to blast out their papers to entire departments? Not generally entire departments, but certainly large lists of people. Some crackpots do kind of have stalkerish tendencies.
You know, I have a certain set of Gmail filters that when I get emails from some people, they get deleted right away. And I've never regretted that policy. Sometimes I go back into the trash and come across a message by accident. I go, oh, yeah, OK, that guy. But more often, crackpots will look for any feedback they can get. So yeah, they're going to send it to lots of people.
In fact, recently, I was trying to compile an email list of people for some email I want to send out to a broad group of people. And one of the best sources of email addresses was emails I was getting from crackpots. The crackpots have done a lot of research to find out who are the good people doing work on physics or philosophy or whatever it is. So yeah, many...
Many crackpots are kind of notorious in certain communities. Nate Heller says, in your emerging journey into complexity research, are you planning to focus solely on identifying universal law-like patterns akin to those in fundamental physics, or do you also intend to explore specific classes of systems and particular types of data? Well, you know, one does what makes one progress.
My predilections are absolutely to look for unifying ideas across many systems. So I would love to understand robust features of complex systems that are true for very, very different kinds of complex systems. My favorite kinds of things to understand would be true for both the human brain and the world economy. Even those are two very, very different kinds of systems.
But one takes what one can get. And if it turns out that I discover or think about something interesting that only applies to one kind of specific complex system, I'm going to think about that. We'll see where it goes. You don't get to pick where the research takes you ahead of time.
Kyle Stevens says, if eternal inflation and the infinite cosmological multiverse are true, would it then be possible to coarse grain at a large enough scale to replicate all of the subatomic behavior of our universe, e.g., where our observable universe contributes only some fraction to a subatomic particle at some massive scale?
Well, you know, anything's possible, but probably not in this particular case. And the reason is one of timescales. You know, a feature of, let's just say, a human body, okay? You have a lot of atoms in your body, and you are big compared to those atoms.
But those atoms are all bumping into each other and they're literally like attached to each other and they're interacting and trading electrons and creating new molecules and all this stuff. And the fact that the interactions happen and they happen quite rapidly is kind of important. It is kind of a big deal. On cosmological scales, things are very far apart.
And that doesn't mean just that everything slows down; it becomes literally impossible for things to interact with each other. Given the fact that we have a positive vacuum energy, distant galaxies are going to move apart from each other and never interact.
If you have a cosmological multiverse, maybe you have some kind of fractal structure to the universe on very large scales, but that fractal describes parts of space-time that never interact with each other.
So again, to the best of our current way of thinking about things, there is zero sense in which the large complicated universe is just a bigger and slower version of the small interacting universe inside matter as we know it. Hugen says, what are your credences about Claudia de Rham's theory of gravity that decreases faster than inverse square at a distance?
You know, to be honest, it's pretty low. This refers to a podcast we did with Claudia de Rham about modified gravity, massive gravity, and various extra-dimensional models that try to modify gravity both for the purposes of better understanding what is possible and impossible, but specifically for possible cosmological application to the accelerating universe and so forth.
Look, I think that it's unlikely that that approach is right, but let me be very clear. I think that it is unlikely that any known approach is right to explaining dark matter, dark energy, things like that, because there's many approaches, and they're all kind of speculative. I mean, I guess the one...
counterexample to that is that I think the dark energy itself is probably a cosmological constant. I would give more than 50% credence to that. I would not give more than 50% credence to any specific model of dark matter. I think I would put huge credence on the idea that dark matter exists, but there are many different theories of it, and we don't really know which one is on the right track.
So the fact that I give small credence to it is not a way of saying I think it's not worth thinking about. These are high-risk, high-gain kind of operations. You make a speculative idea. And this goes back to the question about my papers earlier. I would put the same exact low credence or I would put a lower credence on some of my ideas, maybe a marginally higher credence on some of them.
But you take your shot, right? You say, oh, here's an interesting idea. I don't think it's probably right, but it's possibly right. And we'll let the data decide what is right. And if I'm right, it's very, very important, right? That's the bucket into which I would put the work of Claudia and her collaborators.
Tim Converse says, Theory seems to have preceded observation in cases like the Higgs boson and black holes. What is this corresponding story for cosmic inflation? Was there any theoretical reason to expect a quickly expanding early universe, or do we just need that theory to explain our observations that space is isotropic and flat?
Here definitely the observations came first, and in particular the observational fact that our universe looks pretty smooth and isotropic and geometrically flat. These were known for a relatively long time. It wasn't until the 1970s that the puzzle was made very specific. It was Jim Peebles and Robert Dicke who pointed out that these features are puzzling, right?
So first people guessed them, and then they observed that, yeah, the guesses are more or less right. And then Peebles and Dicke pointed out that, you know, actually, these features of the universe are a little unstable. If you deviated from them a little bit, those deviations would grow in time. So they're not really as natural as you might have thought they are.
And these were dubbed the horizon and flatness problems. And so Dicke and Peebles gave, sorry, I think it was Dicke who gave some lectures at Cornell, where Alan Guth was a postdoc at the time, and Guth went to those lectures. That's where he heard about these cosmological problems. Guth was trained as a particle physicist, and he was mostly thinking about particle physics and symmetry breaking.
And in particular, he was thinking about magnetic monopoles. There's this idea that magnetic monopoles should be predicted by... theories that were very popular at the time, grand unified theories, and they're predicted in a much larger number than was consistent with the observations. So Guth was mostly thinking about how to get rid of the monopoles.
And when he invented inflation as a way to get rid of the monopoles, he instantly realized that it would also potentially solve the horizon and flatness problems that Dicke and Peebles had pointed out. So that was definitely a case where the theory came after the observations. Now,
it was an unanticipated side bonus to realize quite quickly after that, that the right kind of inflationary scenario would also explain the density perturbations in our universe, the tiny perturbations at early times that eventually grow into stars and galaxies and things like that. That was unanticipated when Guth was first thinking about it, but people quickly realized it.
And so that was a case where the theory came before the observations.
Jeremy Dittman says in Mitchell Waldrop's book on the Santa Fe Institute and the search for a theory of complexity, a section on Chris Langton describes his epiphany connecting complexity and dynamical systems as living in the transition between order and chaos, akin to the phase transition from a solid to a liquid as well as to computational classes moving from halting to undecidable to non-halting.
And all of these analogies were connected with Wolfram cellular automata classes, class four being the interesting one. From your perspective, are these concepts likely to be fundamental parts of a theory of complexity or attractive poetic analogies that don't get us very far? Or worse, are they distractors that miss the point? I will say attractive poetic analogies.
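For reference, Wolfram's "class four" label attaches to rules like elementary rule 110, which sits between orderly and chaotic behavior. A minimal simulation, with grid size, boundary conditions, and step count chosen arbitrarily for illustration, looks like this:

```python
RULE = 110  # Wolfram's canonical class-4 elementary cellular automaton

def step(cells):
    # One synchronous update of a row of 0/1 cells, wrap-around boundaries
    n = len(cells)
    new = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right  # 3-bit neighborhood
        new.append((RULE >> index) & 1)  # look up the rule's output bit
    return new

cells = [0] * 31
cells[15] = 1  # single live cell in the middle
for _ in range(12):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

Running it prints the familiar growing triangle of interacting structures that made rule 110 the poster child for complexity "at the edge of chaos."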
So as we said just a little bit ago, I think complexity, the development of complexity over cosmic time proceeds in stages and is a story that seems to me to be understood in terms of information utilization, using the resource of information that is granted to us by the low entropy of the early universe.
And the interesting features of complexity to me have to do with structure in the system and how that structure allows it to utilize information, to gather information, to store information, to take that information and use it to decide what to do next in some slightly anthropomorphic language. None of that is really there in these cellular automata models or these edge of chaos models.
Or fractals, which are another example, right? So all of these, and I'm not saying anything weird here, I think that the modern take on complexity is more about adaptive systems and hierarchies and information utilization, and less about the boundary between chaos and order.
Those are fun analogies and cool things to look at, but they are missing really important parts of the complexity story, I think. Dan Cohen says, in Quanta and Fields, you explain that it is the exclusion principle that keeps solids solid and stops matter from being compressed and not the electromagnetic force. If that is so, why doesn't the exclusion principle count as a force? It does.
Go ahead, count it. In fact, I say this in Quanta and Fields. If you read closely enough, I explain that we have a sort of traditional way of listing four fundamental forces of nature: the strong and weak nuclear forces, electromagnetism, and gravity. But that's just human language, okay?
We have realized through the development of quantum field theory that the fundamental ontology of the world is not divided into matter and forces. It is all quantum fields, and the quantum fields interact in certain ways. The thing that is universal between the strong and weak nuclear forces, electromagnetism, and gravity is that they are gauge theories. There's a symmetry underlying them that helps us account for the specific fields that exist and the way that those fields interact with each other. But there are other things, like the Higgs boson. The Higgs boson is a boson, just like the photon or the graviton, et cetera. Does the Higgs boson carry a force? The answer is: sure, if you want to call it that. It does something.
It's a field and we know how that field interacts, no problem. If you want to call it a force, go ahead. The exclusion principle makes matter solid. It is literally why when I push my hand on the table in front of me here, the table pushes back. Okay. Sometimes we call that a force. In a neutron star or a white dwarf, we talk about the degeneracy pressure.
And indeed, I think they talk about the Pauli force, because these electrons, or these neutrons, don't want to be in the same quantum states. It's just a word. The word force turns out not to be fundamental. The idea of what a force is does not map cleanly onto the fundamental nature of reality. That's okay. It's still pretty evocative.
We know what we mean, usually, so we use it, and you can decide whether something like the exclusion principle counts as a force or not. Fran Pla says, yummy French canelés you posted on Instagram. So for those of you who don't know, I do have an Instagram account. I essentially never use it. Like once every six months, I'll post something there. But we did go to France a little while back.
And in Bordeaux, they have this local delicacy called the canelé. It's all over the place, but Bordeaux is the center of it. You can't escape canelés if you're in Bordeaux. You arrive at the hotel and they give you canelés. You go to breakfast and they give you canelés. You're on the street and you go by a canelé store.
They're these beautiful little pastries with a kind of a hard crust and a custardy inside that are very, very yummy, with flavors of vanilla and rum, which don't sound very French, but Bordeaux was a major port back in the days when the trade from the West Indies started. So Bordeaux was on the receiving end of all these exotic flavors like vanilla and rum, and that's why they feature in canelés.
Anyway, I learned how to make them. Very proud of myself. And Fran is asking... Kudos to you because I've read that canelés are very hard to make. To connect with the fantastic episode number 103 with Kenji Lopez-Alt, what, in your opinion, is the most rewarding thing about cooking? And are you experimenting and rebelling on recipes more since that episode? Yeah, you know, I do like cooking.
I've always liked cooking. I've never been very expert at it. I will let others decide whether I'm any good at it, but I will absolutely say that I'm not very expert at it in the following sense. I can't whip together good dishes out of random ingredients that happen to be lying around. I'm pretty good at following recipes. I'm quite good at following recipes.
So I can make yummy things if someone gives me a good recipe for them. I don't have this intuitive, quasi-magical ability that truly good chefs have to whip up something more spontaneously or change ingredients or whatever. I am at least pleased to learn that there are other people like me. You know, there's this thing, I forget where I was reading it on the internet, but people were complaining about a certain genre of reviews online for recipes. People post their recipes online. Other people review them, give them stars. Apparently, there's a subgenre which consists of taking the recipe,
changing an ingredient to something completely different and then complaining that the recipe they made wasn't very good. Like, literally, this was a recipe for carrot cake. I don't like carrots very much, so I used kale instead of carrots, and it came out not tasting very good. Two stars, right? So it's not just me that doesn't really know how to make this work.
But anyway, it's not so much since talking to Kenji, but more since moving to Baltimore. We now have a bigger house, a bigger kitchen, and you know, I'll be very honest here, a somewhat more domestic cast of mind than we used to have living in our townhouse in LA. So I am trying to learn to cook more. I've... I use that as an excuse to buy gizmos, which I like doing.
So I have a nice cast iron wok that I bought from Made In, for example, that I love very much. A really good sharp Japanese chef's knife. Those little instant-read thermometers that you stab into things. And it's funny because, you know, I just like gizmos and gadgets in general, but what you realize by doing this, for the most part, is: holy smokes, this is super useful.
How in the world did I ever get by without an instant-read thermometer? It's one of the most useful things in the world. So I don't know whether my ability to actually cook yummy things has improved, but I am having fun trying to do it once or twice a week, trying to actually cook something. This is my lifestyle ambition these days.
Mike Gottlieb says... oh, and I should say, to anyone who's interested in the canelés in particular, they are notoriously hard to make, in the sense that if you go online and read about making canelés, you will get intimidated, for two reasons. Number one, you're told you must make them in copper molds. This is the way they are traditionally made in Bordeaux.
Individual molds made out of copper because their heat-conducting properties are very, very good. And number two, even though copper molds are very good at conducting heat, they're also very sticky. So the traditional Bordeaux thing to do is to coat the interior of the copper mold with a mixture of beeswax and butter. Okay, so number one, this is hilariously expensive.
Like, the one copper mold to make one canelé costs like 35 bucks, and a canelé is like a tiny thing. So it can get very expensive very quickly if you make a dozen canelés. That's a big investment you have to put there, especially if you don't know if it's going to work. And number two, it's a pain to get beeswax pellets and then melt them and then coat the thing, and so on and so forth.
So I did find a recipe that was very helpful that assured me that a good copper steel canelé pan that makes 20 or 12 canelés at the same time works perfectly well. That's what I used. They came out great. Don't buy the hype about the copper molds and the beeswax.
I predict that someday, if I'm like 85 years old and retired from writing books and doing physics and living a life of leisure where I get to like indulge all of my leisure time desires, I'm going to get the copper molds and I'm going to get the beeswax and I'm going to, you know, devote myself to making the world's perfect canelés. But for now, the copper steel works perfectly well.
Mike Gottlieb says, what's your take on the declining replacement population numbers in developed countries? I could not possibly care less about that. I mean, number one, because the world population is still growing. We talked about this on more than one different podcast. The world population is growing, but the rate of growth is going down.
So experts predict that the world population will peak at some foreseeable time in the future. But it'll still be bigger than it is now by quite a bit. And when I was your age, we worried that there were too many people on the Earth and the population was growing exponentially. Nothing grows exponentially other than the universe. So that was a silly worry to have.
But, you know, what is the right number of people to have in the world? I have no idea what number that is. And so I have zero worry that 10 billion people is not enough. OK, that's just not a worry that I have. The fact that it's in developed countries rather than elsewhere, you know, I hope that all countries become developed sooner rather than later.
And then maybe, yeah, families stop having babies and the population goes down. I would predict that that would also be temporary, right? I predict that there would be a new equilibrium that is reached. The world right now is not an equilibrium. Society is not an equilibrium. Technology is changing. Our lifestyles are changing. How we live on the land and in the ecosystem is changing.
So we're not close to whatever the future equilibrium is going to be. If it turns out the future equilibrium has a billion people on the earth, I'm perfectly happy with that. We're nowhere close to that right now. So my list of problems to worry about, that is not in the top 1,000. Don McKenzie says, Good.
I don't have a very good credence for either one of these, and it's not because I shouldn't. It's just because I don't, because I am not really sure what counts as a computation. There are different definitions of what a computation is. Someone like Seth Lloyd, the quantum information theorist, has a very broad definition of what counts as a computation.
So he's going to be the kind of person who says the universe is a computer. He's written a kind of interesting popular level book arguing that the universe is a quantum computer. But to get there, you basically have to say that what I mean by a computer is just anything that evolves in time, especially if it evolves according to some sort of simple rules, some kind of
thing that you could cast as an algorithm, right? In that case, lots of things are computers, but that's just so broad that I'm not quite sure what the usefulness of it is, right? The Earth is a computer, sure. The Moon is a computer, sure.
But there are multiple other sets of meanings one could attach to this that take more seriously the definition of a useful computation, in which case you have something about certain variables evolving in a certain kind of way. Some systems are, like we mentioned before, Turing-complete and some are not.
So I would very much like to have a very clear view on when something should count as a computer and when it shouldn't. I don't have that view right now. The claim that life is a computation is plausible to me, because I do think that what is interesting about life, considered as a complex system, is that it has learned to take advantage of information processing, in a way that one might profitably define to be a computation. So to me, these are good questions, not ones I have very strong feelings on right now. Russell McClellan says, in the Feynman Lectures on Physics in 1964, Feynman said, it is important to realize that in physics today, we have no knowledge of what energy is. Is this still true today in 2024? Wow, I have no idea.
I have no idea what Feynman was talking about in that quote. Usually, when Feynman says one of these provocative things, I can translate it into something I understand. But I truly don't know what he is talking about here. Maybe what he means is the following.
I know he does talk about the following fact: energy is something that we like to think is usually conserved. It's roughly speaking conserved; there are footnotes and counterexamples, or exceptions, there.
I have talked about both energy not being conserved in quantum measurement and energy not being conserved in the expansion of the universe, but let's put aside those. Let's just think about ordinary stuff here on Earth in the lab where we think energy is going to be largely conserved. There's a worry that you say, oh, it's meaningless to say there's this thing called energy that is conserved.
Energy is not a fluid, right? Energy is not a substance that moves from place to place. It's a characteristic. It's something that is dependent on other quantities of a system, like its position in space and its velocity and things like that. The reason why one might say it's meaningless to say there's a conserved thing called energy is that you have to tell me what it is.
You have to say, this thing is conserved and I'm going to call it energy, right? When Einstein says E equals mc squared, he's saying there's a whole other contribution to energy that we didn't tell you about before. Even when an object is sitting still, it has energy, its rest energy, mc squared. So the worry that Feynman does talk about is that we can always come up
with a conserved quantity just by adding more and more terms, adding more and more contributions to this thing that we call the energy. But I'm not, I certainly would not translate that into saying we have no knowledge of what energy is, if only because we have Noether's theorem.
Emmy Noether proved that when you have a symmetry of some continuum theory of physics, that symmetry will be associated with a conserved quantity. And energy is the conserved quantity associated with time translation invariance.
The fact that the laws of physics are invariant with respect to what time you apply them, they're the same laws at every moment in time, that implies that energy is conserved. I think that's a perfectly good definition of what energy is. It's the thing that is conserved because of time translation invariance. So I don't think it was true in 1964 or today. Sorry.
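For readers who want the one-line version of Noether's argument, here is a sketch in classical mechanics (my own illustration, for a single degree of freedom). If the Lagrangian has no explicit time dependence, define the energy in the standard way and differentiate:

```latex
% Time-translation invariance: L = L(q, \dot q), with no explicit t.
% Define the energy
%   E = \dot q \, \frac{\partial L}{\partial \dot q} - L .
% Then along any solution,
\frac{dE}{dt}
  = \ddot q\,\frac{\partial L}{\partial \dot q}
  + \dot q\,\frac{d}{dt}\frac{\partial L}{\partial \dot q}
  - \frac{\partial L}{\partial q}\,\dot q
  - \frac{\partial L}{\partial \dot q}\,\ddot q
  = \dot q \left( \frac{d}{dt}\frac{\partial L}{\partial \dot q}
  - \frac{\partial L}{\partial q} \right) = 0 ,
```

where the last step uses the Euler-Lagrange equation. So "energy" is precisely the conserved charge associated with time-translation invariance, which is the definition endorsed above.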
Ronald Gorin says, I still read science fiction and my favorite stories are the genre of space opera. So after rereading Space, Time and Motion and finally beginning to truly grok the concepts inside, I was dismayed by a line in chapter six, the section Simultaneity and its Discontents, which states it is safer for physics and for fiction to just exclude faster than light travel entirely.
So that's a quote from me, and I would stand by it. Anyway, Ronald says, I just finished a new release by C.J. Cherryh that seemed to do an excellent job of dealing with the vagaries of time in FTL travel. I can see where far-flung star empires might not be feasible, but it certainly seems to work at smaller distances in this book.
Any hope of working FTL into fiction so I can enjoy my space operas again? "Sorry about that" may just be the answer, and that's okay. So, no, I don't want to just say sorry about that in this case. What I want to say is: if you want to imagine there's faster-than-light travel, your job is not done.
Because in the context of ordinary relativity theory as we know it, particles either move slower than the speed of light or at the speed of light. You can imagine new kinds of particles, new kinds of substances, tachyons, that only move faster than the speed of light, okay? We have zero evidence that those things exist in the real world, but you can imagine them.
However, once you imagine them, once you imagine particles that are allowed to move faster than the speed of light, the feature of relativity that says that different reference frames, different ways of putting coordinates on space and time, are in some sense all equally good, means that if you can have a particle going faster than the speed of light, you can have a particle going backward in time. And so the worry that I was referring to in that chapter was that faster-than-light travel seems to indicate time travel.
And that's true in the context of relativity as we know it. So the point is you can't just say faster than light travel. You can't just say, oh, I can go three times faster than the speed of light. You know, in whose reference frame? That's a meaningless statement.
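The frame dependence is easy to check numerically with nothing more than the Lorentz transformation for time intervals, Δt' = γ(Δt − vΔx/c²). A small sketch (my own illustration, in units where c = 1, so speeds are fractions of the speed of light):

```python
# Why faster-than-light travel implies backward-in-time travel in relativity:
# a signal that outruns light in one frame arrives before it was sent in another.
import math

def lorentz_dt(dt, dx, v):
    """Time separation between two events in a frame moving at speed v (|v| < 1)."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (dt - v * dx)

# A signal sent at 3x the speed of light: it covers dx = 3 in time dt = 1.
dt, dx = 1.0, 3.0

print(lorentz_dt(dt, dx, 0.0))  # original frame: positive, arrives after sending
print(lorentz_dt(dt, dx, 0.5))  # boosted frame: negative, arrives BEFORE sending
```

Any observer moving faster than c²·Δt/Δx (here, faster than a third of the speed of light) sees the arrival happen before the departure, which is exactly the "in whose reference frame?" problem described above.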
And if you can go faster than the speed of light at all, you can go infinity times or even minus three times as fast as the speed of light, and that's problematic. However, okay, so you just change the rules. You change relativity. Relativity is not right. Imagine that the fundamental posit of relativity, that there is no background state of rest in the universe, is wrong.
So imagine that there is some field in the universe that actually does define a universal rest frame. People, including myself, have written physics papers about this possibility. Maybe it's true. And so maybe relativity is incomplete.
And in some incomplete theory, there is a preferred reference frame, and there's a new rule that says the actual speed that is the maximum at which you can go is 10 times the speed of light. So there is a maximum speed, but it's not the speed of light as we know it. It's a bigger thing.
You could imagine that, but all I'm saying is you have a lot of work to do to figure out a theory that accommodates that without leading to disaster. Paul Cousin says, I just read your paper, Reality as a Vector in Hilbert Space. It was super cool and exciting, especially the introduction to your work on quantum mereology.
So I haven't taken a course in quantum field theory yet, so I'm not sure I'm equipped for your paper with Ashmeet Singh. Could you tease me about what you've been able to achieve? Yeah, so the paper with Ashmeet on quantum mereology does not require any quantum field theory at all, okay? So don't worry about that. It's pure quantum mechanics all the way down.
But, you know, if I'm honest, it is super technical quantum mechanics. I kind of tried to make it less technical, but I didn't really succeed. The paper is kind of long, and there's a lot of equations there, and... It's intricate, so that's what it is. But what we're trying to achieve is the following quite modest goal, which is this.
If someone gives you a quantum mechanical system, a theory of a quantum mechanical system, what I mean by that is what is called the Hamiltonian of the theory. If any of you know the Schrödinger equation, if you know what I taught you in the book Quanta and Fields, or in the solo episode, et cetera, the Hamiltonian is what powers the Schrödinger equation.
The Hamiltonian is an operator which asks of a quantum state, what is your energy? And typically the answer will be, well, I am a superposition of many different energies and here they are, okay?
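That question-and-answer structure is literally eigendecomposition. Here is a toy sketch (my own illustration, with a made-up two-level Hamiltonian, nothing from the actual paper):

```python
# "The Hamiltonian asks a state: what is your energy?" For a generic state,
# the answer is a superposition of energy eigenstates, each with a probability.
import numpy as np

H = np.array([[1.0, 0.5],
              [0.5, 2.0]])           # a hypothetical two-level Hamiltonian

energies, eigvecs = np.linalg.eigh(H)  # eigenvalues = the allowed energies

psi = np.array([1.0, 0.0])            # some quantum state
amplitudes = eigvecs.T @ psi          # components in the energy eigenbasis

for E, c in zip(energies, amplitudes):
    print(f"energy {E:.3f} with probability {abs(c) ** 2:.3f}")
```

Because the eigenvectors form an orthonormal basis, the probabilities always sum to one: the state really is "a superposition of many different energies, and here they are."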
And so different physical systems, you know, here's an atom, there is a crystal, there is the gluon field, different physical systems have different Hamiltonians, and that defines what they do, how they evolve with time. So the quantum mereology question is, how do you know how to divide a big quantum system into subsystems,
in particular such that at least one of those subsystems seems to match up with our classical behavior that we know and love. You know, again, we said... Quantum mechanics is a superset of classical mechanics. I don't need to know quantum mechanics to predict how the Moon will go around the Earth, okay? So I can have a classical limit that describes the Moon going around the Earth.
To do that, I ignore various other things like the photons bumping off of the Moon and so forth, right? So I have the system I care about, the Moon. I also have the environment, like all the photons in the solar system. That's a division. That's what mereology is about, the relationship between wholes and parts. So usually, we go forwards. We say, I have photons. I have the Moon.
I'm going to add them together to make the whole system. The quantum mereology question is, how do you go backwards? How do you go from the whole system and say, ah, identify this as the classical behaving system. Identify that as the environment.
And we, Ashmeet and I, came up with a couple of criteria for doing that, minimizing entanglement, minimizing the spread of the wave function so that it looks relatively classical and so forth. And I think that you should be able to get the basic features of the paper, even if you don't have any quantum field theory at all. We don't even really talk about quantum field theory.
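The "minimize entanglement" criterion can be illustrated in the smallest possible setting. Here is a toy sketch (my own illustration, not the actual Carroll-Singh code): the entanglement entropy across a candidate system/environment split of a two-qubit state, where a classical-looking split is one with low entropy.

```python
# Entanglement entropy across a bipartition of a pure two-qubit state.
# A good system/environment split, in the mereology spirit, minimizes this.
import numpy as np

def entanglement_entropy(state):
    """Von Neumann entropy (in bits) of one qubit of a 4-component pure state."""
    psi = np.asarray(state, dtype=complex).reshape(2, 2)  # system x environment
    # Schmidt coefficients are the singular values of this 2x2 matrix.
    s = np.linalg.svd(psi, compute_uv=False)
    p = s ** 2
    p = p[p > 1e-12]                  # drop zero terms (0 * log 0 = 0)
    return float(-np.sum(p * np.log2(p)) + 0.0)

product = [1, 0, 0, 0]                       # |00>: a clean, unentangled split
bell = [1 / np.sqrt(2), 0, 0, 1 / np.sqrt(2)]  # (|00> + |11>)/sqrt(2)

print(entanglement_entropy(product))  # product state: zero entanglement
print(entanglement_entropy(bell))     # Bell state: maximally entangled
```

Scanning over different ways of factoring the Hilbert space and preferring the factorization with the least entanglement is the flavor of criterion described above, here shrunk to two qubits.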
It's all discrete, finite-dimensional systems. Spencer Hargis says, when Curt Jaimungal asked you to blow his mind (I was on a podcast, the Theories of Everything podcast), you tantalizingly floated the idea that the laws of physics could have evolved.
Do you suspect there's a replicator involved here which might have gotten started a little like abiogenesis, the origin of life, or the Brainfuck programs of Blaise Agüera y Arcas? If so, what would this replicator correspond to? What is the fitness it's trying to maximize? So no, in the particular scenario that I have in mind, I'm not imagining there's a replicator of any such sort.
Other people have suggested things like that. I mean, something kind of like that happens in eternal inflation in the cosmological multiverse. If you have a landscape of different possibilities, inflation can populate a multiverse where the laws of physics are very different.
More directly, Lee Smolin has come up with an idea where inside black holes you pinch off a new universe with slightly different constants of nature. Now that's much less well-defined because in the string theory case you have microscopic dynamics that predict the existence of a multiverse. Those dynamics might not be right, but at least the theory is there. Smolin is just hypothesizing.
He's saying maybe this happens, wouldn't it be cool, okay? In that case, in Smolin's case, in some sense you are passing down information from one universe to another. That is not what I have in mind. What I have in mind is more a situation where the early universe is kind of a mess where there's no interpretation of it in terms of space and time and fields.
Time maybe, but at least not space and fields and locality and things like that. And the conjecture is, and we're working on this, but the conjecture is that out of that quantum mechanical mess emerges individual branches of the wave function. And on each branch, you sort of home in on a certain set of laws of physics, okay?
So they don't evolve in the sense of changing from moment to moment in time. They evolve in the sense of emerging or coalescing out of some primordial chaos. That would be the idea. Michael Wall says, are the different dark matter theories mutually exclusive or is there compatibility in overlapping parameter space among some of them? Oh no, yeah, they're not exclusive at all.
So every dark matter theory gives you two things. It gives you the dark matter candidate. So what is the particle or black hole or whatever that is the dark matter? And then number two, it gives you a theory of the abundance of that dark matter. Where did it come from? Why do you get the certain amount of dark matter? Indeed, the first of these turns out to be way easier than the second.
It's easy to come up with an example of a neutral, stable, invisible particle. It's very hard to get the right abundance. There are a lot of constraints there. But most of the successful theories don't really pin down the abundance to any hyper-specific number. There are usually free parameters in there where, if they were a little bit different, you would get a very different abundance.
And therefore, it's simpler and therefore common to imagine that if you have a dark matter candidate that is the right one, it is the only right one, essentially, right? If it's axions, then all the dark matter is axions. If it's WIMPs, then all the dark matter are WIMPs. But it's not hard at all to imagine there's actually a cocktail, right?
Indeed, in some sense, since we know that neutrinos are massive, neutrinos are a part of the dark matter, right? We think that they're a small part of the dark matter. For one thing, the neutrinos we know and love would be hot dark matter, which does not fit the data. And for another, we can count them, and it's more or less, you know, a fraction of the energy density of 10 to the minus 4 or something like that.
Not nearly enough of what we need to be all the dark matter we see. But they're there. So, you know, there is in fact—if there's also WIMPs, let's say, then the dark matter cocktail is, you know, 25% of the energy density of the universe is WIMPs and 10 to the minus 4 of it is neutrinos. But it could easily be that—
15% of the energy density is WIMPs, and 10% is axions, and then 10 to the minus 4 is neutrinos, or anything like that. Since no one of these candidates seems inevitable, having more than one be interestingly comparable to each other seems even less likely, but who knows? We can keep an open mind about that. Ilya Lavov says, your chat with Blaise Agüera was great.
Blaise was extremely well-spoken while academically rigorous, and he and his team seemed to have achieved a deep and important scientific result very quickly. Do you have any commentary on the fact that Blaise and his work were based at Google rather than in academia? Is this fact even worth any commentary? Sure, it's worth some commentary. I mean, the zeroth-order commentary is: that is awesome.
It is great. I would like to live in a world where high-level academic research does not only happen at universities. Indeed, we clearly don't live in that world because there are research centers and think tanks, etc., like the Santa Fe Institute, but also the Perimeter Institute, the Institute for Advanced Study, and so forth. But also in commercial enterprises.
Famously, back in the heyday of Bell Labs, they were a Nobel Prize-producing factory devoted to pure research with the idea that important ideas would eventually come out, okay?
Plenty of important ideas happened, not just because corporate enterprises gave money to pie-in-the-sky research, but even because they said our applied research might be helped out if we step back and think about deep ideas. You know, Claude Shannon inventing information theory wasn't just playing around with equations.
He was saying, what is the best way to send a signal over a transatlantic cable, right? So a lot of these are driven by applications. Most of Blaise Agüera's work is in AI and in applications that literally show up on your smartphone. So Google is good enough to let people do some fraction of their work on more pie-in-the-sky stuff, and he takes advantage of that. So I think it is great.
But I guess the final thought there would be I don't think there's anything about that work that would necessitate or even go along especially well with being at Google. I think the people at universities could have done it just as well. Just so happens that they didn't. Plenty of other good work is done in universities. So the more, the merrier.
Jameson says, in one of Leonard Susskind's books on quantum mechanics, as well as in a few other popular science books by other authors, he says that quantum logic is different than classical logic. Is it true that quantum mechanics actually changes the laws of logic, or is that overstated?
So it is absolutely not true that quantum mechanics changes the laws of logic, but there is nevertheless kind of a sense in which there is something called quantum logic that is different than classical logic. It's just that both obey the rules of logic. The difference is that they are applied to different systems, okay?
Classical logic, if you want to call it that, is traditionally interpreted as Boolean logic. You have bits of information. They are on or off, yes or no, zero or one. Quantum information deals with the manipulation of qubits. Qubits are little vectors in two-dimensional complex Hilbert spaces. And you can do a little math and show that's equivalent to being a point on a sphere.
This is called the Bloch sphere, B-L-O-C-H. The Bloch sphere is the space of states of a qubit. And so a sphere, a two-dimensional sphere, has an infinite number of points on it, right? Because it's a smooth sphere. But even if you ignore that, you need to give me two numbers, two coordinates, to tell me where you are on that sphere. And they are real-valued numbers, not integers.
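The qubit-to-sphere map is easy to compute directly: the Bloch vector of a pure state is just the expectation values of the three Pauli matrices. A small sketch (my own illustration):

```python
# Map a qubit state a|0> + b|1> to its point on the Bloch sphere.
import numpy as np

paulis = {
    "x": np.array([[0, 1], [1, 0]], dtype=complex),
    "y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def bloch_vector(a, b):
    """Return (x, y, z) Pauli expectation values for the state a|0> + b|1>."""
    psi = np.array([a, b], dtype=complex)
    psi = psi / np.linalg.norm(psi)  # normalize the amplitudes
    return tuple(float(np.real(psi.conj() @ P @ psi)) for P in paulis.values())

print(bloch_vector(1, 0))  # |0>: the north pole of the sphere
print(bloch_vector(1, 1))  # |+> = (|0> + |1>)/sqrt(2): on the equator
```

Every pure qubit state lands exactly on the unit sphere (the vector has length one), which is why two real coordinates, a latitude and a longitude, suffice to specify it.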
So there's more information in a qubit than in a bit. And of course you also know that qubits can be entangled, et cetera. So the rules of logic are the same in both cases, but the system that you're manipulating to do your computations is different. That's all. Now, let's put it this way.
That is the respectable interpretation of the phrase quantum logic is different than classical logic, and certainly Lenny Susskind understands this perfectly well. There is a disreputable interpretation of that phrase, which is the following. In classical logic, there is only true and false. But in quantum logic, there is neither true nor false. You can be in a superposition of true and false.
And therefore, certain things like the law of the excluded middle are no longer true. Because if you have an electron in a box, it is not true that it's on the left-hand side or the right-hand side. It's neither and both at the same time. That's just like purposefully annoying imprecision. That is not getting you new insight.
That is just talking about quantum states as if they were classical yes-no things and then acting surprised that they're not. Of course they're not. There's no such thing as the electron on the left-hand side of the box or the right-hand side of the box, but there is such a thing as what is the wave function of the electron.
If you stop talking about these observational outcomes and start talking about what the system actually is, you find that all of your conventional rules of logic are perfectly fine. Sam Davies says, So congratulations, Sam. That's a big leap and one that will help make the world a better place, I think. So good for you. He says, Well, yeah, that's the beginning of the semester.
I should be thinking about this, right? You know, look, I'm generally bad at this. I'm generally a believer that the best way to become a better teacher is not to have someone give you advice on how to do it. There are a bunch of things you can do to help you become a better teacher, as opposed to being told how to teach well, if you see the difference.
You're the one who has to decide how to teach well, but I can give you advice on how to help decide how to teach well. Number one, of course, watch what other people do. So if you're reading a book or watching a lecture or listening to a lecture or something like that, pretend you were giving the lecture or writing the book. Imagine what you would say next.
See what is actually said by this person who you think is good at it, and then say, well, if the thing that you would have said is always the same as what they said, then good for you. You're doing well. But in the more likely event that they're different, analyze that. Think about why they're doing something different. Did they pause to tell a historical anecdote?
Did they repeat an important lesson more than once? Did they stop to give the bigger picture kind of thing? Did they give a little philosophical, whatever it is, you know? Did they do more examples than you would have done? Maybe you have a better way of doing it, but at least ask yourself, is there a reason why they were doing it that way? But even more importantly,
I think that becoming a better teacher or speaker or writer or almost anything involves a synergy between two things. Number one, paying attention to what you are doing. And number two, caring about doing it better, right? A lot of people, you know, if they need to teach or to write or whatever, have some sort of minimal standard. What is the word that the economists use? Satisficing.
Rather than optimizing and being perfect, there's a minimal level of competence that they're happy with, and once they reach that, they stop. Okay? So not being content to stop with merely adequate is the huge step to becoming a better teacher. And what that means is: ask people how you're doing. Like, I don't know, did that make sense?
Do you understand what I'm saying? Ask for feedback. Sometimes they'll give it to you, sometimes they won't. Maybe, since it's the end of a long podcast, I will say something a tiny bit self-aggrandizing here. I have told the story before, but I once gave a talk,
a popular physics talk, I think it was in the Higgs boson days, and I was chatting with a friend of mine afterward. I said, so, you know, how could I have done better? What did you think of the talk? What were the parts that were not clear? And she said, you know, I know lots of people who give talks, and you give the best talks, but you're the only one who asks me how I could do better.
And I said, well, maybe those things are correlated, right? There's no such thing as the perfect talk. You can always do better. So thinking about how you're doing, thinking about how you can do it better, asking for actual input from other people on how you can do it better, these are all super important.
So I'm not going to teach you like how to explain things, but maybe I can give advice for figuring out how to explain things and then you can do it. Okay, and then the final question of this month's AMA comes from David Maxwell. Watch any review of the new Google Pixels, and you'll hear the reviewer ask the question, what even is a photo? Often followed by, what even is reality?
Every person is asking this question. Kids are asking this question. Will generative AI help philosophy become a permanent feature of common human thinking, and can we give it a nudge? Well, wow, I would love it if that were going to be the case. Sadly, I'm going to give a slightly deflationary spin on this question. You know, philosophy starts by asking these questions. What even is a photo?
What even is reality? But it doesn't end there. And I think that one of the various huge barriers to philosophy becoming a permanent feature of common human thinking is the casual impression that philosophical questions are ones worth spending five or ten minutes bullshitting about, but not actually deserving of serious, careful investigation.
So philosophy starts with these questions, but then it's been thinking about these questions for thousands of years. And it has some opinions. It doesn't have the definitive once and for all answers. Maybe those are not going to come for another 10,000 years. I don't know. But we've learned a lot about how to talk about these questions.
And I think that there's a huge difference between an advertisement for Google Pixels raising a philosophical question and nudging the people watching that advertisement to actually think in a recognizably philosophically careful way, right? That's a whole other level of importance. I'm doing my little part, you know; I'm in favor of thinking in a philosophically careful way.
I have a podcast that a few thousand people listen to. Maybe I can nudge them into acknowledging, or at least getting the impression, that philosophy can occasionally be useful, among other ways of thinking, and maybe they'll spread that word to their friends. But it doesn't mean just going like, hey man, what's reality?
It means perhaps getting some informed opinions, some actual careful prior work, and reading it and getting to know it. What have other people thought about what reality is? What is a photo? I do think it's important. I made the joke on Bluesky the other day that this is the moment for epistemologists to finally step forward, because we can now manipulate photos in any way we want.
So the idea that a photograph is a semi-reliable piece of evidence for something that actually happened in the world is just no longer going to be true. It was true for two centuries. It was never perfectly true, because you could always manipulate photos, going back to Arthur Conan Doyle.
But it has gotten so easy now that the value of photos for establishing claims that might be contested becomes essentially zero. How do you know when you have enough evidence to believe a claim about something that happened? It's a good epistemology question. So it's time for professional philosophers to do their job. And I think that's not only true for epistemology;
I also think it's true for ethics, moral philosophy. We're going to be editing genes. We're going to have artificial intelligence that gives the impression of being conscious. There are going to be plenty of opportunities for real, serious philosophical questions to be given an airing in the public sphere. I hope that both the public and the philosophy profession are up to the challenge.
We will see. Thanks very much for supporting Mindscape. Thanks once again for listening. Talk to you next time. Bye bye.