Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas
AMA | October 2024
Mon, 07 Oct 2024
Welcome to the October 2024 Ask Me Anything episode of Mindscape! These monthly excursions are funded by Patreon supporters (who are also the ones asking the questions). We take questions asked by Patreons, whittle them down to a more manageable number -- based primarily on whether I have anything interesting to say about them, not whether the questions themselves are good -- and sometimes group them together if they are about a similar topic. Enjoy! Support Mindscape on Patreon. Blog post with questions and transcript: https://www.preposterousuniverse.com/podcast/2024/10/07/ama-october-2024/
Hello, everyone. Welcome to the October 2024 Ask Me Anything edition of the Mindscape Podcast. I'm your host, Sean Carroll. So we're in the middle of the semester here at Johns Hopkins, teaching and writing papers and all that stuff. And I wondered the other day about how do I remember how to actually pull off doing these AMAs, right?
In the middle of a semester, while I'm trying to teach and things like that. So the answer is, I have not figured this out. I have not figured out how to do it. But maybe one strategy is to make the introductions shorter, to spend less time in the intro. So I'm going to do that. I will mention very quickly that pluralism is one of the themes of today's AMA.
Strength in diversity, taking seriously other perspectives, sometimes sticking by your guns when there's a true or correct answer, but other times recognizing that we don't know all the answers and letting other people have their feelings about things. That's one thing that shows up again and again in the questions this month, whether it is refereeing scientific papers or mixing cocktails.
There are different ways to do it correctly. Reminder that the questions in the AMA are asked by Patreon supporters of the Mindscape podcast. Sometimes I'm getting emails from people who say literally, I know that AMA questions are asked by Patreon supporters, but I'm not a Patreon supporter. Here's my question anyway. Please don't do that.
We get more than enough questions from the Patreon supporters. And I do actually think that they're a pretty good representative sample of the questions we would get anyway. I don't think we're introducing a lot of biases by taking the questions from the Patreon supporters. And it's a nice little benefit you can get. The set of people who ask questions is a small subset of the Patreon supporters.
Most people just support anyway. Those people could include you. If you want to become a Patreon supporter of Mindscape, you can go to www.patreon.com slash Sean M. Carroll. And you get the ability to ask the AMA questions. You get the ability to listen without ads. And you get little reflection audio commentaries by me after every podcast. Thank you very much.
And finally, I realized that I have been giving a little bit of incorrect information about the subreddit. So some nice person, not me, set up a subreddit devoted to me and my stuff, okay? And I keep saying that it's reddit.com slash r slash Sean M. Carroll, but it's not. It's slash Sean Carroll, with no M in there. So there's a Sean Carroll subreddit.
It's only a little bit active, but every episode of the podcast gets a post that you can talk about there. You can also talk about things on the Patreon page or on the podcast page. But you can also just ask other questions or just talk to other members. I chime in there occasionally, but not very often, because it's the semester.
I need to teach courses, and I love my students, and they need to get first priority. Students and Mindscape listeners, it's you all who get priority here. So with that, let's go. OK, I need to start out by saying that originally, at the beginning of this AMA, I was answering a question about spirits, that is to say, alcoholic beverages and cocktails and things like that, not spirits like Halloween ghosts. And I warmed to my subject matter maybe a little bit more than many people will care about. So I have moved that subject to the very end of the podcast, for those of you who are not martini-pilled, as it were. And we can dive into more regular subjects.
It'll be there at the end if you listen all the way. So Anonymous asks a priority question. Priority questions, remember, are ones where the Patreon supporters get to ask one priority question in their lifetime. I haven't dealt with loopholes about if they upload their consciousness to a computer, but that hasn't happened yet. So we're thinking about ordinary biological lifetimes here.
And I will do my best to answer that one question. So Anonymous asks, should I go back to school and pursue a formal physics education and/or job? The pros of this are that I spend all my time already reading about physics and science in general. It is definitely my primary interest. The cons are that I'm too old. This is me quoting; this is not me talking, this is the anonymous questioner.
I'll be behind everyone else, and I'll be leaving a promising career. If I try this out and fail, I'll have been gone so long it would be essentially impossible to return. For background, I graduated at 19 from a top college with honors. Now I'm a 32-year-old songwriter. I wrote some hit songs and bought my dream home in LA with enough left over to live comfortably while I pay for tuition.
The people around me are celebrities and celebrity types, uninspiring, unintelligent, and most importantly, not curious about the world. Their primary concern is being perceived as high status. It's very unfulfilling, but then again, so are a lot of jobs. My dad says the only thing less likely than being a hit songwriter is being a hit physicist. He's probably right.
Curious what you would do in this situation. So as I often say for personal-advice kinds of questions, I can't actually give you the advice you want. This is a situation, if you're trying to imagine changing your career in some dramatic way, where there are no certainties, right? There are only probabilities, and the things you do don't make some good outcome entirely certain or some bad outcome entirely certain; they raise or lower the probabilities of things. So to do something risky and uncertain is just a very deeply personal decision that you're going to have to make for yourself. What I can do is talk about some of what those probabilities are. One thing is that, you know, being a physicist is hard, like you said. I don't know if it's as unlikely as being a hit songwriter. It's less unlikely to become a practicing physicist than to become a professional basketball player, for example, but it's still pretty unlikely.
The rule of thumb that I always say, and I haven't seen the statistics in years and years, but the rule of thumb for someone who gets a PhD from a top program is that maybe one in four graduates are going to become faculty members with tenure at some university at some point, which is usually what those people want to do. And first you have to get into a good graduate school, et cetera, et cetera. Physics and science, like all other careers, involve a certain amount of strategizing about balancing what you want to do personally versus what is good for your career, right?
If it goes well for you, if you're the kind of person who would be a successful physicist, it is generally true that the kinds of things that you are passionate about doing line up fairly comfortably with the kinds of things that would lead to success and jobs and things like that. If there is huge tension between the kind of physics you want to do and the kind that would actually make you employable, then that's a very big barrier that you're going to have to try to overcome, or contemplate whether or not it's worth overcoming. I don't think that being too old is a problem. I mean, it depends on how old you are. I would say that if you were 70, maybe it becomes more difficult to learn these things. 32 years old is not too old to learn physics. I don't think your brain has stopped being plastic enough to learn these things. You have to convince people. And I've known people who have entered grad school in their 30s, in their mid-30s.
It is absolutely possible. But it's hard. It's like one more little barrier. If you think about all of the things that make it difficult to get a faculty job in physics, like... Maybe you're not talented enough. Maybe you are talented enough, but you got stuck with a bad advisor. Maybe you got stuck with a good advisor, but somehow you just didn't come up with the good research programs.
Maybe you got sick in the middle of grad school. There are many things that can go wrong, many things that can basically lower your probabilities for getting the job. Yeah, being a little bit older is another thing that lowers the probability, because you have that extra work to do of convincing other people that you're serious about it; that, despite the fact that for the last 10 years you've not been in grad school when you could have been, nevertheless doing physics is what inspires you and what you're passionate about and so forth.
So maybe you can do that. Maybe you can convince people. Like I said, I've known people who've been in grad school and gotten PhDs at later stages in life. But it is one more thing to try to overcome. Having said that, physics is awesome. You know, thinking about the fundamental nature of reality as your day job and getting paid for it is a pretty amazing thing.
What I can't judge is how good you are at it, anonymous priority question asker. So I can't give you any honest opinion about that. But...
If you are good at it, if you are the kind of person who, if you get into grad school, will do the work, will impress your advisor and other faculty members, they will write you good letters, you will get papers written that other people pay attention to and move the field forward in some tangible way, then it's all good.
Then you're going to get a faculty job and you're going to be a physicist, right? I can't tell you the probability that that's going to happen. If it does, well, I want to say you'll be very happy. But of course, it also depends, personally. Some people are very happy with those kinds of jobs.
Some people realize that, ah, I just wanted to, like, sit around and think about the universe, but instead I have to, like, teach and apply for grants and supervise students and, you know, go to committee meetings. And, yeah, this is just like any other job, right? Every job has its aspects that you do because they need to be done, not because they're what you're there to do.
But, you know, physics is no different than that. The only other thing that I will say, and I'm sure this has not been a super-duper helpful answer to your question, is that I'm surprised you say the people you hang around with are uninspiring, unintelligent, and not curious about the world. I was in LA for 16, 17 years, and I found plenty of people who were super-intelligent, super-inspiring, and super-curious about the world. Maybe they were not, you know, the highest-level celebrities, but there are people in LA who are absolutely creative, absolutely curious about things, whether it's screenwriters, songwriters, movie directors. You know, it's a place full of creative, inspirational types of people. They might not be the ones you're hanging out with. So maybe there's a much easier switch that you can do in your life to just sort of find more like-minded people who are close to the area that you're already in, somehow combine your interest in being ambitious about thinking about the world with the job that you're already in the middle of.
You know, I wish that it were easier to change your career every 20 years, right? You can do it. You can change your career every 20 years if you wanted to. 20 years is enough time to establish yourself, accomplish something, and move on. It's not what we standardly do. And part of that is just because success in a career is cumulative in some ways, right? You prove yourself worthy.
It becomes easier and easier to convince your colleagues that what you're doing is worthwhile. and so forth. So what I want to say, the romantic part of me wants to say, go for it, do it, leave songwriting and become a physicist. But I am also ruthlessly practical about these things. So if you do it, do it with your eyes open, knowing what the prospects are.
Zach McKinney says, building on your reflections from the end of episode 290. So Zach is referring to the fact, for those of you who are not Patreon supporters, that at the end of every episode I do a little reflection: what I thought about the episode, sometimes closely related to what we just talked about, other times spinning off into some completely different direction. Little five-minute reflections on every episode of the podcast, and those are posted for Patreon supporters exclusively.
So Zach is referring to the reflections on the episode with Hahrie Han, and he says, what hypotheses, if any, would you make with respect to extrapolating Dr. Han's observations about the potential advantages of nested or fractal structures within institutions to the governance of emerging technologies such as AI or neurotechnology? In particular, do you see in Dr. Han's work any insights regarding the optimal balance and interplay between top-down and bottom-up approaches to the regulation of rapidly evolving high-impact technologies, both within and across organizations and jurisdictions? I do think that these kinds of considerations are super important in exactly this area. I'm not
specifically, personally, an expert on how best to alleviate harms in governing emerging technologies that are intrinsically complex. But this is exactly one of the motivations for thinking of complexity as a field of study: thinking of robust, universal principles that are recognizable between different kinds of complex structures.
Because when you have a new technology, and we hit on this quite substantially in the episode with Daron Acemoglu, you can't really predict what's going to happen, what exactly it's going to bring about, right?
It changes some of the fundamental presuppositions of society or the economy or whatever, and some bad actors can rush in there and take advantage of that and scoop up a lot of wealth to the detriment of other people, and it can be exploitative and so forth until we finally figure it out, right?
Until we finally go, oh, okay, now we need to switch things and change things up to be a little bit more equitable, right?
So it would be nice if we could use those kinds of insights, those kinds of considerations: what did we learn from other situations where things were complex and changing and hierarchical, and there were both top-down and bottom-up influences? When things are rapidly changing, they're going to be different than they were before, but maybe some of those universal, recurrent features will be important. So again, I don't know what they are. I don't know exactly how to go about doing this. This is an absolutely rich field of endeavor, and there are people who do study it. And I'm a kibitzer here. I'm just watching from the sidelines. But I do think that it is a very valuable perspective to keep in mind.
Kyle Stevens says, you often refer to brute facts in physics to which there is no further explanation. Is there any a priori reason we should prefer brute facts to either an infinite or circular explanatory chain? No, I don't really think so. For one thing, I worry a little bit about the whole idea of an explanatory chain. I'm not quite sure that that is the kind of thing that you have in
modern physics or modern ontologies trying to understand the fundamental nature of reality. That's sort of a more classical way of thinking about things. I think instead in terms of emergence and different theories offering multiple vocabularies for talking about the same underlying things going on in the world. And it's not that I insist that there are brute facts.
It's just that it seems obvious to me that there are, right? And here's the argument. The world could have been different. The world could have had different dimensions of space-time or different forces of nature. It might not have been quantum mechanical at all. It could have been completely classical. It could have been discrete. It could have been continuous.
There are many different possible worlds as far as we currently know. Maybe there is some argument that no one has ever thought of. People have certainly tried, but they have done a pitifully bad job of coming up with an argument to say that the world around us is in some sense uniquely the world as it could have been, right? That's a tough kind of thing to imagine having an argument for, given all the weird specificities of the world. I mean, you're telling me that the ratio of the mass of the electron to the mass of the muon is somehow inevitable? It couldn't have been anything else? So, I mean, maybe it needed to be that. In the early days of string theory in the 1980s, that's what people hypothesized.
But of course, what they seem to have found is, oh, actually, no, there's many, many ways that it could be. And that's not at all unique to string theory. That's true in every other attempt to unify physics that has made any progress at all. So if that's true, that there are different possible worlds, then there is a brute fact about the fact that we're in this world.
As I say in my paper on why there is something rather than nothing, people have tried to say that. And I also wrote a tiny little paper recently, you can find it on the web, on physics and the principle of sufficient reason. The principle of sufficient reason is this idea from Leibniz that everything that happens, happens for a reason. Everything that exists has a reason or cause for it existing. And I tried to make the argument that in physics, no, that's not right. I mean, there could be a cheap, trivial construal of what that means, which is just that everything obeys the laws of physics. Okay, sure, everything obeys the laws of physics.
But if it's supposed to be something deeper that says there's a reason why the laws of physics are the way they are, I'm skeptical that that's possibly true because there are other ways the laws of physics could have been, and at some point you just say this is the way it is rather than that way.
So my own personal argument is not so much that I don't want there to be an infinite or circular explanatory chain. I'm not even sure what a circular explanatory chain would mean. That might not be fair, but if there was an infinite chain, that's fine. I just don't think it's true. It's not that I think that there's some logical impossibility about it.
I just don't think that's how philosophy and physics work. Tim Falzone says, which philosophers do you think have had the most profound insights into the nature of complexity?
Is the science of complexity theory ahead of philosophy at this point, or are there useful exchanges between science and philosophy in the area? I think it's growing, actually. This is a very, very good question, and I thought about it, and I don't think I've done a good job of coming up with great philosophers who have really given profound insight into the nature of complexity. I think this is one of the areas in which philosophy might be lagging a little bit behind the science. You know, let me just mention that the Santa Fe Institute is in the process of collecting the great papers in the history of complexity science, right? Most of them are from the 20th century. I think they had a cutoff of 2000, but there aren't many from the 1800s. And they're publishing them in four books. And there's an introduction by David Krakauer, a former Mindscape guest, a 100-page introduction which surveys the history of the field and draws connections between all these different papers. And you can buy that as a separate little book. So this is my plug. You should buy that separate little book if you're at all interested in this stuff. It does a great job of sort of not only telling you about the history, but connecting different ideas that you didn't realize were otherwise there.
And there's absolutely a connection between the growth of complexity as something we think about and the growth of computers as both something to think about and as a tool for thinking about things, right? So I don't think that philosophy has studied complexity for as long as it might have.
I don't know if there are any, you know, early modern philosophers who really thought that much about complexity.
I think that, you know, as much as I am a cheerleader for philosophy, I think that complexity is a case where scientists of various sorts, computer scientists, economists, people like that, have been leading the charge to sort of think about what complexity is, and the philosophers have some catching up to do. Emergence might be one counterexample to that, or exception, I should say, to that.
Philosophers have thought about emergence very, very carefully for a long time now, and I tend to disagree with some of the things that are popular to say in the philosophical community about that, but at least they've been thinking about it, going back to the British emergentists, and maybe John Stuart Mill even wrote about these things. Again, it's very different than how I've thought about those things. But the closest that I could come to a name in answer to your question, Tim, is actually Charles Bennett, who is certainly not a professional philosopher. He is a computer scientist, a theoretical computer scientist and complexity theorist,
famous for his work in quantum computation, no-cloning theorems, teleportation, things like that. He's a brilliant guy, and he's thought a lot about complexity, and he has written an influential paper that, in fact, we recently read in the complexity reading course that Jenann Ismael and I are doing here at Johns Hopkins, where he introduced the idea of logical depth. You might have heard of Kolmogorov complexity. If you have a string of bits, the Kolmogorov complexity is roughly the length of the shortest computer program that will output that string of bits. Logical depth is, roughly, very, very roughly, I'm trying to get to other questions here, the number of computations you need to go through. So you could have a very short program, right? But you run it forever before you get the output. That has small Kolmogorov complexity, but Bennett is highlighting the number of steps that you have to go through in such a program to actually output that string of bits, and that's the logical depth. And he makes an argument that this is what matters to complexity, right?
This is what matters to being different, in some sense, from just a simple thing repeated over and over again: that you have to carefully do things in a complex system to put them together in exactly one way rather than some other way. So it takes a lot of steps to do that. This sounds like a very basic, simple idea, but it's actually very deep.
It makes connections not only with computer science, but with origin-of-life-type questions, and with larger questions about complex systems generally. And it's one of the better contributions to the very messy literature about how best to define complexity. So I think people like that have been the leading theorists of complexity.
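To make the contrast concrete, here's my own toy sketch, not from the episode, and only illustrative, since true Kolmogorov complexity is uncomputable: two generating programs of comparably short length, one of which produces its output almost instantly and one of which grinds through thousands of steps per output symbol. In Bennett's terms, the second output is logically deep.

```python
def shallow(n):
    # Short program, trivially fast: low Kolmogorov complexity and also
    # low logical depth; the output is just a repeated pattern.
    return "ab" * n

def deep(n):
    # Also a short program (so low Kolmogorov complexity), but each
    # output bit summarizes ~10,000 iterations of a chaotic map, so
    # producing the string takes many computational steps: logical depth.
    x, bits = 0.5, []
    for _ in range(n):
        for _ in range(10_000):
            x = 3.9999 * x * (1.0 - x)  # logistic map, chaotic regime
        bits.append("1" if x > 0.5 else "0")
    return "".join(bits)

print(len(shallow(50)), len(deep(100)))  # similar-length outputs, but
# deep() did ~10,000x more work per output character
```

The point of the sketch is just that description length and computation time are independent axes, and logical depth tracks the second.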
People like Geoffrey West have been very, very successful at applying complexity, and Sam Bowles and David Krakauer and others we've had on the podcast. So I do think that the philosophers have some catching up to do. I've said this before: one of the reasons why I'm changing my own research focus these days from cosmology and particle physics and gravity to things like complexity is that there's so much good work to be done. There are so many low-hanging-fruit questions yet to be answered. So I'm going to try to do that myself. Benjamin Barbrell says, how to set these statements about entropy straight?
A. Entropy is the result of a coarse-graining process, the log of the number of microstates per macrostate, and therefore depends on a subjective choice of macrostates. B. Entropy is responsible for such things as the thermodynamic arrow of time; shouldn't it then have an observer-independent definition? And C. The universe knows precisely its microstate, therefore its entropy is always zero.
Well, there's a couple things going on here. One is, of course, there's more than one definition of entropy. There's many definitions of entropy, and that's perfectly okay. That's not a mistake. It's not that some people are using the wrong definition. It's that which version of entropy you care about depends on what situation you're thinking about, okay?
So this definition that Benjamin says in A, which is the one that is engraved on Boltzmann's tombstone, that you have a coarse graining into macrostates, and then you take the logarithm of the number of microstates per macrostate. So the low entropy means there aren't many microstates that look that way. High entropy means there are many microstates that look that way, okay?
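That definition can be made concrete with a toy numerical example (my own sketch, not from the episode): take N coins, let the macrostate record only how many show heads, and count the microstates in each macrostate.

```python
from math import comb, log

def boltzmann_entropy(n_coins, n_heads):
    # Coarse-graining: the macrostate records only how many coins show
    # heads; the microstates are the particular assignments of heads to
    # coins, of which there are C(N, k).  S = log W, with k_B set to 1.
    return log(comb(n_coins, n_heads))

# "All 100 heads" is a low-entropy macrostate: exactly one microstate.
print(boltzmann_entropy(100, 100))  # 0.0
# "50 of 100 heads" is high entropy: ~1e29 microstates look that way.
print(boltzmann_entropy(100, 50))   # ~66.8
```

Choosing a different coarse-graining would assign different entropies, which is exactly the observer dependence at issue in the question, but any coarse-graining that lumps many microstates together shows the same qualitative behavior.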
That is what is relevant for statements about the arrow of time in cosmology. The relevant empirical fact is that the coarse-grained information we have about the early universe does not have that many microstates that look that way. And we have since been evolving into coarse-grained macrostates that have more and more microstates associated with them, and that is why entropy tends to increase. So in B, you are worried about it having an observer-independent definition, and in C, you're worried about Laplace's demon knowing precisely the microstate. But these are both very related things.
Laplace's demon, who knows everything about the microstate of the universe, doesn't know about entropy, doesn't know about the arrow of time. Laplace's demon just knows everything in the universe. The reason why we think there is an arrow of time, forgetting for the moment about quantum mechanics and the messiness there, but it doesn't really matter, so I'm happy to forget about it for right now.
Or rather, I should say, it's not really a different story, so I'm happy to forget about it right now. If you believe in classical deterministic microphysics, then there is no arrow of time if you know the microstate of the universe. But we don't.
And it is simply a fact that human beings and other intelligent creatures have very, very strong correlations in what they can observe about the universe. It's not that two different people observe completely different macroscopic information about things, right? We coarse-grain in the real world in very consistent ways. So it's observer-dependent in a very weak sense.
The coarse-graining of the set of all possible microstates is observer-dependent in the weak sense that, yes, we choose what to call the macrostates. We observers do that. but that's entirely okay because exactly what we're trying to explain is something that is experienced by those observers. The arrow of time, right?
When an ice cube melts in a glass of water, if I were able to observe all the molecules in their exact Newtonian position and momentum microstates, I wouldn't say that any information had come or gone or been lost or there's any arrow of time. I'm just following the microstates. The reason why I experience an arrow of time is because I don't observe those microstates.
So the fact that I need to use some coarse-graining to define the arrow of time and to define entropy is completely okay. We're using an observer-defined thing to explain an observer-defined experience. Renan Boschetti seeks a clarification of my point of view regarding AI and consciousness: I've been hearing you say the phrase, in principle, AI can be conscious, I can't see why not.
Consciousness is an emergent property of nervous systems, says Renan. That is, when neurons do what they do, consciousness emerges. It seems clear that it is only possible because we are this system, i.e., we are the neurons and fields and field oscillations. It is an emergent property of this system that when these excitations happen in this very particular way, these fields feel, that is, present qualia.
We can describe exactly how it happens, that's all. Well, just like simulating an electromagnetic field in a computer can capture its mathematical behavior in a certain regime, yet nobody ever claimed there is an electromagnetic field in the computer, why do people claim there could be consciousness in the computer?
I clearly see a fundamental limitation for AI on consciousness in the way we do it now, i.e. without recreating the real patterns in the electromagnetic field in reality as an initial ansatz, which turns out to be what perception is made of. Am I missing something? Well, I don't know if you're missing something or not.
I mean, you could be right, but it's generally not the way that I think about consciousness or the way other people think about consciousness. What you are saying would be completely true if consciousness were some kind of substance, right? If there were a consciousness field that interacted with our neurons in a certain way to make us conscious—
then you could absolutely make an argument that says that's not going to happen for a computer because it's not a nervous system. It's made of chips or whatever. It's not made of neurons and other kinds of cells. But I don't think that is what is going on in consciousness. I think that when we talk about consciousness, we're talking about a certain kind of pattern.
A certain kind of behavior, a certain kind of way of conceptualizing at a very high level, emergent level, the sort of informational relationships between different parts of a brain or something like that. In other words, forget about the word consciousness. Think about the word computation.
One of the great things about computations is that they're independent of the substrate that is computing them. Two plus three equals five, whether you do it on an abacus or on your fingers or on your computer or on your watch or whatever. Consciousness, to me and to most people who think about these things, is like that. It is a higher-level structure.
It is not dependent on some specific kind of material substrate. Again, that could be wrong, but that is the basic idea. Okay, so I'm going to group together two questions. One is from Ilya Lvov, who says energy conservation is a consequence of time translation invariance.
This, to the best of my understanding, means that the laws of physics at t0 are the same as at t1, not that they are the same forward and backward in time. Energy is not conserved in quantum measurement. Hereby, it follows that quantum measurement changes laws of physics over time. What is this change?
And the second question is from Sandro Stucchi, who says, in the September AMA, you mentioned your blog post, Energy is Not Conserved, where you explain that it's been well understood since at least the 1920s that energy is not conserved in general relativity and why that isn't a problem.
But Noether's theorem says energy conservation is a consequence of the time translation symmetry of the laws of physics. What gives? Are the laws of physics not symmetric under time translation after all? So both of these questions are about energy conservation and its relationship to time translation invariance. And they deal with subtleties in Noether's theorem.
Emmy Noether famously proved this wonderful theorem that really set the stage for a lot of modern talk about gauge theories and charges and conserved quantities and so forth.
And the way that we usually informally state the theorem is that when there is a symmetry of nature, at least a continuous symmetry (discrete symmetries involve some subtleties), but think about a continuous symmetry of nature, like time translation invariance, which, as we say, means the laws of physics or the situation or whatever are the same over time.
Or spatial translations, rotations, things like that. Noether's theorem says whenever there's such a symmetry, there is an associated conserved quantity, okay? There's something that doesn't change over time. Energy is associated with time translation invariance. Momentum is associated with spatial translations. Angular momentum is associated with rotations.
Electric charge is associated with gauge transformations, etc. Okay, but there are two subtleties here. One is, what do we mean by the laws of physics remaining invariant? So to Sandro's question, in the blog post about energy not being conserved in general relativity, as I actually encourage you to read the blog post, it says something that is relevant to what I said earlier about entropy.
The word energy has different definitions in different contexts, okay? So general relativity as a theory is completely time translation invariant. There's no special time in the universe according to general relativity. Therefore, by Noether's theorem, there should be a conserved quantity, and there is. There is a conserved quantity that you can call the energy of the universe.
It's absolutely clear how to get it in general relativity. The problem is that if you're in a closed universe, that quantity is simply zero. It doesn't matter what is happening in the universe. As long as you're in a closed universe, then you can show mathematically that no matter what's going on inside, that energy is going to be zero. So it's not a very informative fact.
It doesn't separate out certain kinds of universes from other kinds of universes. And if you're not in a closed universe, if you're in an open universe, then it becomes harder to define what this quantity is, because it depends on what's happening infinitely far away.
And in an open universe, there is an infinitely far away, but it's changing with time because the universe is expanding or something like that. Therefore, even though this thing exists, the Hamiltonian of the universe, if you want to call it that, it is not what we sort of intuitively feel in our bones as energy. It exists. You can define it, but it's not what we usually think of.
So as I say in the blog post, what we usually think of in cosmology as energy is not the energy of the gravitational field itself, which is an important contribution to the formula in general relativity; it's just the energy of the stuff inside the universe. So the matter, the radiation, the dark matter, the dark energy, all that stuff has energy and you can add it up.
But if you do that, that quantity would be conserved if the universe were not expanding. Because then conditions that the stuff in the universe are experiencing would be time translation invariant. If the universe is not expanding, the amount of stuff is the same from moment to moment.
It can change from one kind of stuff to another, but it's still starting from the same point and just doing its transformations. Whereas, because we've now defined energy in a way that does not include the gravitational field, the fact that in an expanding universe, gravity is changing over time, the universe is expanding, right? Things are getting more dilute as time goes on.
That means that, relevant to the stuff in the universe, things are not the same from moment to moment in time. They are more dense in the past, less dense in the future, et cetera. And therefore, by the standards of Noether's theorem, there is no time translation invariance in an expanding universe, because the universe is changing. That's it. Simple as that.
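Here's a toy bookkeeping sketch of my own (round numbers, nothing from the blog post itself) showing why the "stuff" energy is conserved in a static universe but not in an expanding one: matter just dilutes with volume, while each photon also redshifts.

```python
import numpy as np

a = np.linspace(1.0, 2.0, 5)     # scale factor as the universe doubles in size

volume = a**3                    # physical volume of a comoving box
rho_matter = a**-3               # matter energy density dilutes with volume
rho_radiation = a**-4            # radiation dilutes AND each photon redshifts

E_matter = rho_matter * volume        # stays 1: matter energy is conserved
E_radiation = rho_radiation * volume  # falls as 1/a: photon energy is lost

print(E_matter)                  # all ones
print(E_radiation)               # 1, 0.8, 0.667, 0.571, 0.5 (roughly)
```

If the scale factor a never changed, both totals would sit at 1 forever; it is the expansion itself that breaks the bookkeeping for radiation.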
Now, you might say, well, I don't want to define energy that way. You are welcome to define it however you like. Knock yourself out. It's a free country. To Ilya's question, there's an even more subtle aspect of Noether's theorem, which is that she proves her theorem using what is called the principle of least action.
In fact, you can prove it classically, and then you can go and show that in quantum mechanics there's a version of it that still works. But the principle of least action is based on the idea that we can consider the space of all possible paths or trajectories for some physical system, and then we can imagine the transformations that are enacted by a symmetry on those paths.
And you can see this. It's in one of the Biggest Ideas in the Universe videos if you want the details. I ended up not going into the details in the Biggest Ideas book because it was just a little too technical for a book that was already technical enough. But the point is this:
It's an assumption of Noether's theorem, not just that you have time translation invariance, but that you can define the laws of physics you care about in terms of the principle of least action.
The principle of least action says there is a quantity called the action that I can assign to any possible trajectory the system could imaginably have, and the one it physically does have in classical mechanics is the one where that action is the lowest, okay? But quantum measurement doesn't fit into that paradigm. Quantum measurement doesn't have a principle of least action.
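Schematically (my notation, one degree of freedom), the structure Noether's theorem needs looks like this: dynamics that extremize an action, from which time translation invariance implies energy conservation.

```latex
S[q] = \int L(q,\dot q,t)\,dt, \qquad
\delta S = 0 \;\Longrightarrow\;
\frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot q}\right)
  = \frac{\partial L}{\partial q}.
% Define the energy (Hamiltonian) and use the equations of motion:
H = \dot q\,\frac{\partial L}{\partial \dot q} - L,
\qquad
\frac{dH}{dt} = -\frac{\partial L}{\partial t}.
```

So energy is conserved exactly when L has no explicit time dependence, and the whole derivation presupposes that an action exists; the measurement/collapse rule is not of this form, so the theorem is silent about it.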
Quantum measurement is ill-defined, for one thing, okay? But to the extent that it's defined, it's sudden and unpredictable, and you just assert that there is something called the Born rule and the collapse postulate, okay? You collapse onto a definite state of whatever observable you were looking at, and you have a probability for doing that collapse.
But none of that is based on having an action principle or minimizing that action or anything like that. So Noether's theorem simply doesn't apply. There you go. So in both cases, you have to read the fine print on what the theorem is trying to tell you. That's often good advice anyway. Benny Spess says, I was recently reading about the development of matrix mechanics.
As I understand it, Heisenberg was concerned with manipulating properties we can measure, not whatever the underlying reality might be. The text indicated that the uncertainty principle derives from the fact that matrices do not commute. Is this just stating that the order in which you measure position and momentum matters?
Or does the noncommutativity imply a fundamental limit on what we can know about the system regardless of measurement? I think I had to translate—I'm sorry, but I had to translate your questions from English into math. I think I'm going to agree with the second version rather than the first one.
It's not just stating that the order in which you measure position and momentum matters, although it's true that the order in which you measure position and momentum does matter because they don't commute. That is true, but the word just is making me hesitant to agree with or to say yes to that question.
The only reason I'm not quite saying yes to the second question, does the noncommutativity imply a fundamental limit on what we can know about the system regardless of measurement, is because it's not about what we can know. It really isn't. It's about what states exist.
When you say a limit on what we know, you're leaving open the door, at least implicitly, to the idea that there is some fact about the system and we can just never know it. That's not what the uncertainty principle is trying to say. The phrase at the end, regardless of measurement, is on the right track. The uncertainty principle is not about measurements.
It has implications for measurement outcomes, but it's not about measurements, and it's not about what we can know. It's about what quantum states exist. And so if you say the uncertainty principle is delta x times delta p is greater than h-bar, h-bar is Planck's constant, delta x is the uncertainty that you would get in a measurement,
from measuring x, the position, and delta p is the uncertainty you would get in a measurement from measuring the momentum p, well, even if you don't measure it, that's a statement about quantum states. The statement is that there are no quantum states that have precisely defined positions and momenta, okay? There's always going to be some spread.
The quantum states that exist can never have completely predictable measurement outcomes for those things. But again, that's a feature. The fact that that's a statement about measurement outcomes is only how we get there, how we observe it, how we find out about it; it's true whether or not you ever do the measurement. It has nothing to do with the process of measuring.
Heisenberg's Uncertainty Principle has nothing to do with the idea that when you measure something, you're going in there with photons or with your fingers or anything, and you are disturbing it with your energy or momentum. That has zero to do with the Uncertainty Principle. It has to do with what states could possibly describe that system, whether you're observing it or not.
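A quick numerical sketch of my own illustrating that this is a statement about states: build a Gaussian wave packet, never "measure" anything, and the spreads computed from the state itself already satisfy (in fact saturate) delta x times delta p = h-bar over 2 (units with h-bar = 1).

```python
import numpy as np

hbar = 1.0
sigma = 0.7                                  # chosen packet width (arbitrary)

x = np.linspace(-40, 40, 2**14)
dx = x[1] - x[0]
psi = np.exp(-x**2 / (4 * sigma**2))         # Gaussian wave packet
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)        # normalize

delta_x = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)   # <x> = 0 by symmetry

# Momentum-space amplitudes via FFT; |phi(p)|^2 is the momentum distribution
p = 2 * np.pi * hbar * np.fft.fftfreq(x.size, d=dx)
phi = np.fft.fft(psi) * dx / np.sqrt(2 * np.pi * hbar)
dp = 2 * np.pi * hbar / (x.size * dx)
delta_p = np.sqrt(np.sum(p**2 * np.abs(phi)**2) * dp)

print(delta_x * delta_p)   # ~0.5, i.e. hbar/2, with no measurement anywhere
```

The two spreads come straight out of the wave function; no apparatus, no disturbance, no photons poking at the system.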
Grant Stone says, is the concept of self-locating uncertainty something that you see as being broadly applicable to questions of how subjective observers should assign credences? You wrote a paper on deriving the Born rule in many worlds where you used this epistemic separability principle for this purpose.
It seems to me that the implication of this logic working for quantum mechanics should be that any process which can be modeled from one perspective as deterministically evolving in time, but looks uncertain from the perspective of an observer who is part of that process, would have the credences of the observer quantified in the same way.
Is there something special about the kind of branching in quantum mechanics, with the states becoming orthogonal after decoherence? So I got a little lost, sorry, Grant, in the middle of the question. I'm not sure I'm going to be answering exactly what you asked, but I'll give it a shot.
For those of you who don't know, I wrote this paper with Chip Sebens a while back, deriving the Born rule in quantum mechanics, which says that the probability of getting a measurement outcome is given by the associated wave function squared. And we derived it in the context of many worlds, in the context of Everettian quantum mechanics.
And the whole project started because there's a standard argument that you can't do that, and Chip knew that argument and gave it to me, and we sort of battered it back and forth, and we came up with a reason why actually you could do it. And the idea is actually based on Lev Vaidman, who first pointed this out. He's another physicist.
And when you have branching in Everettian quantum mechanics, there will be a moment after the wave function of the universe has branched where there are two versions of the observer. Let's say it's spin up and spin down; there's an observer on the spin up branch, an observer on the spin down branch, and neither one of them knows which branch they're on.
They could try to look very, very quickly, but the branching always happens faster, okay, as a matter of empirical fact. So that's a condition of self-locating uncertainty. Neither one of those two observers knows—they might know everything there is to know about the universe. They know the entire wave function of the universe, but they don't know where they are in it, okay?
So I do think that that's something special about the kind of branching in quantum mechanics, okay? It's not because the states are orthogonal, although that's true, that's important, that's a part of it. The relevant fact is there are two copies of the person, right? That's where self-locating uncertainty is kind of unavoidable.
You know, self-locating uncertainty in general is the idea that you know everything there is to know about the universe, but there are multiple copies of people like you, and you don't know which one you are. So that might be relevant to a multiverse. It might be relevant to a recurrent universe. It might be relevant to just a good old classical universe that is so big, maybe infinitely big,
that there are things that happen over and over again, including copies of you elsewhere in space. But all of those possibilities are highly speculative and might not be there. If you're an Everettian, then this actually happens in quantum mechanics.
So I do think that it's an idea that can be relevant to other circumstances, but in Everettian quantum mechanics, that's the situation where it is sort of absolutely inevitable. And then Chip and I made the argument: how do you resolve that self-locating uncertainty? Well, you're allowed to do whatever you want. Again, free country.
But there's a uniquely sensible way to do it, and that way ends up giving you the Born Rule. By the way, I've said this before, but just for people who remain a little bit unhappy with the derivation of the Born Rule in Everett: when people talk about deriving the Born Rule, the Born Rule being that the probabilities are the wave function squared, they're not primarily concerned with the fact that the probability is given by the wave function squared. That's kind of obvious. Even though that's what you say you're deriving, there's nothing else it could have been. It's not like, well, it could have been equal to the wave function or the logarithm of the wave function or whatever. The wave function itself is a complex number.
It can't be a probability, right? So you might already guess that's something you should square to get a non-negative, non-imaginary number. And indeed, it turns out there's a theorem, Gleason's theorem, that, among other things, reminds you that the set of numbers given by the wave function squared for different measurement outcomes is a set of numbers that are between 0 and 1,
and they add up to one, and their sum is constant over time. In other words, it's exactly what you want for a probability. So once you think that you will get a probability distribution out of many worlds, the fact that it is the wave function squared is the easiest thing in the world. There's nothing else it could have been, right? If you'd said the wave function to the fourth...
OK, that's not conserved over time. So even though it is numbers between zero and one, they would not necessarily add up to one. They're not conserved over time, etc. But people worry that maybe there just isn't any association of probability with Everettian branches at all. And to me, the existence of self-locating uncertainty says, no, there clearly is.
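Here's a small numerical illustration of my own of that Gleason-style bookkeeping: for a random state, the wave-function-squared numbers form a probability distribution and keep summing to one under unitary evolution, while a rival rule like the fourth power does not.

```python
import numpy as np

rng = np.random.default_rng(42)

# A random normalized state in a 4-dimensional Hilbert space
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi = psi / np.linalg.norm(psi)

# Unitary time evolution generated by a random Hermitian Hamiltonian
h = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (h + h.conj().T) / 2
evals, evecs = np.linalg.eigh(H)
U = evecs @ np.diag(np.exp(-1j * evals)) @ evecs.conj().T
psi_later = U @ psi

born, born_later = np.abs(psi)**2, np.abs(psi_later)**2
print(born.sum(), born_later.sum())   # both 1: |psi|^2 works as a probability

fourth = ((np.abs(psi)**4).sum(), (np.abs(psi_later)**4).sum())
print(fourth)                         # generically unequal: |psi|^4 fails
```

The squared amplitudes are non-negative, bounded by one, and their sum is preserved by any unitary; that combination is exactly what you want from a probability distribution.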
Like there's so obviously a probability distribution that we need to deal with: the self-locating uncertainty. Which one am I? And so kind of the last holdout for people who don't think you can derive the Born rule in Everett is to say, well, yes, there's uncertainty, but I refuse to apply a probability distribution to it.
That's something that someone like David Albert would say, for example, or probably Tim Maudlin. The only way you can get out of Everett working completely is to say, I refuse to believe that there is any way to assign a correct probability distribution, even though the wave function squared is a pretty obvious thing to do. So anyway, that's why I think it's all in pretty good hands, actually.
This episode of Mindscape is sponsored by BetterHelp. Halloween is approaching and it's time to think about what is it that scares us. But what about those fears that don't involve zombies and ghosts? For those, therapy is a great tool for facing our fears and finding ways to overcome them. Because sometimes the scariest thing is not facing our fears in the first place and holding ourselves back.
And if you've been thinking about giving therapy a try, think about BetterHelp. BetterHelp is entirely online and is designed to be convenient, flexible, and suited to your schedule. Just fill out a brief questionnaire to get matched with a licensed therapist, and you can switch therapists at any time for no additional charge. So overcome your fears with BetterHelp.
Visit betterhelp.com slash mindscape today to get 10% off your first month. That's betterhelp, H-E-L-P dot com slash mindscape. Ben Linus says, I've been thinking about human willpower. Some people have a remarkable drive to push beyond others. Yet it seems we all give up at some point. Why do some people develop successful strategies to go further than others?
Is this simply a mental muscle that needs to be trained? I think it's a little weird that you are specifically thinking about willpower in terms of a drive to push beyond others. Willpower can be for all sorts of things. You can accomplish things that aren't necessarily better than someone else is accomplishing. Maybe you just want to do your push-ups every morning or something like that.
Who cares if other people are doing it or not? I'm not caring about pushing beyond others. I'm just trying to be a little bit healthier, right? But aside from that, again, I'm not an expert on human psychology or anything like that.
The only thing I have to offer here is the kind of interesting fact that willpower and similar human things are a relationship between our present self and our future self, right? How often have you eaten, I don't know, ice cream or something like that and regretted it later? I mean, maybe never. That's good for you in that case.
But willpower is the ability to avoid doing something that in the moment you are kind of tempted to do. Stay in bed, eat the ice cream, whatever it is. Why? Because you think that there is a payoff down the road, right? That there is a future reward that you're going to get from this. And I do think it's fascinating how the human mind carries out this kind of calculation, right?
Why isn't the human mind much better at willpower? Why isn't it optimized to make our future selves very happy? And I am sure that psychologists have thought about this. It's not a novel consideration by any means, but I don't know what the story is there. We human beings, as we talked about with Adam Bulley a while back, have a complicated relationship with time.
It's both super duper important to who we are and how we live, and it's also something that we need to think about and exert our willpower to try to successfully navigate and even master. So I'm not actually offering any very useful answers to your questions, Ben, but I do think it's an interesting thing to think about. I think it's going to be a theme in today's AMA.
I picked out a bunch of questions where I have to say, like, I don't know what the answer is, but this is a good question. So food for thought for those of you who are listening out there in podcast land. Varun Narasimachar says, Recently, there have been reports of savvy provocateurs expressing veiled threats to prominent personalities in language barely within legal free speech protection.
Although disturbing, this phenomenon has gotten me curious about the fascinating world of legal technicalities of free speech. Would you consider inviting an expert on such matters to discuss both the legal theory for its own sake and the messy praxis of protecting and regulating free speech? Yeah, it's a very important question. I completely agree that it is messy.
The only super-duper strong opinion I have about the free speech discussion is that if anyone says, oh, it's all very simple, just do it this way, they're not worth listening to. They have not thought it through very carefully, or they're just not very interested in all the nuances. I tend myself to be very, very protective of free speech as long as it means this:
If someone wants to say something and someone else wants to listen, they are welcome to do that. No one has free speech on my Blue Sky account. I can block them, right? There's no demand that I need to listen to them. That is not part of free speech. But otherwise, I'm very, very open.
If someone wants to invite some terrible, terrible person to a university to give a talk, I think they should have the ability to do that. Both the speaker and the inviter have the right to do that. But, yeah, I don't know who would be a good person to invite.
We did have one such discussion with Teresa Bejan, who's a political theorist and also a political historian at Oxford, and we had a very interesting discussion on the origins of free speech. Or, I don't know if it's the origins of free speech; I shouldn't put it that way.
Part of the prehistory of free speech was in the American colonies, in Rhode Island in particular, and the importance of the notion of civility there. And civility didn't exactly mean back then what it means now, so it was an interesting thing to dig into. But I don't know exactly who would be good to talk to about that, and I do think it's an interesting thing to get right.
The reason why I don't have anyone in mind to talk to about it is that the discussion becomes too polarized and simplistic, right? It's the reason why we don't have political candidates generally here on Mindscape. It's not because they're not interesting or smart people. It's just because there are agendas, right?
There are things that they are trying to bring about other than thinking about the true nature of reality. And here at Mindscape, we're mostly thinking about the true nature of reality.
So I'm happy to talk about the true nature of a certain modern political reality, but I have to do it in a way where I'm convinced that the person is most interested in getting at the truth rather than achieving some goal one way or another. Perry Romanowski says, How do you come up with fresh ideas to explore in a field that feels like all the major discoveries have already been made?
For instance, in physics, there seem to be only a few remaining mysteries, but it doesn't seem like there are many promising avenues that could lead to groundbreaking changes in how we understand the world. I work in chemistry, particularly in the cosmetic industry, and it sometimes feels like there isn't much left to discover.
How do you stay engaged with your discipline and find inspiration for ideas that might lead to something truly novel? Well, I do think that in every area of science, there are still major discoveries to be made, okay? But it's not that they're all equally easy. So this goes back to something I just said.
One of the reasons why I'm not thinking that much about cosmology or gravity is because the low-hanging fruit has been picked. We do have—I mean, this is a success story, not a failure story. We have really good theories—
of particle physics, of gravity, and cosmology, and that makes it hard to make progress, because it's not like we have 20 experimental results that we don't have an explanation for. When you're in that kind of situation, like we were in the 1950s and 60s in particle physics, then it's playtime for theorists, right?
Trying to come up with theoretical explanations for all these wonderful experimental results. Likewise in chemistry, I can imagine that it depends on exactly what area of chemistry you're in. But in certain simple areas of chemistry, maybe you have a pretty good basic picture and it's hard to come up with something truly fundamentally new.
The obvious thing to do, which is what I did, is to start thinking about things where we don't have a wonderful theoretical understanding, right? Where there are data coming in that are hard to explain, that are interesting and surprising. And the theoretical ideas we do have are not yet completely mature and fleshed out.
So when you say you work in the cosmetic industry, I don't know what your job is. I don't know the extent to which you have freedom to pick what you work on. And I also don't know enough about cosmetic chemistry to say whether or not there are any areas where everything is still sort of chaotic and up for grabs.
But those areas, the ones where things are still chaotic and up for grabs, are the ones where the biggest discoveries will be made. Let me mention, just because there are reasons to go into all these different things, that it's also super frustrating to work in those areas.
When you have a mature field, it might be hard to make a super important breakthrough, but it's much easier to make incremental progress. It's much easier to have a well-defined problem, ask a question, and answer it, right?
When you're in these sort of more chaotic, pre-paradigmatic areas, maybe you'll make a breakthrough, but maybe you'll just waste your time for years and years and years. That's kind of the trade-off that you have to think about. Eric Runquist says, you often use chairs as a basic example of an emergent phenomenon. At a fundamental physics level, there's no such thing as chairs.
And if you eliminated conscious creatures like us who reify the experience of certain patterns of matter into chairs, it would cease to be a valid category with which to describe reality. Is that right? And if so, if space and time were also emergent, would the same logic apply to them? So no, that is not right. That's not anyway the way that I think about it.
In my way of thinking about emergent phenomena, whether or not there are conscious creatures never has anything to do with it whatsoever. It might, as we said before in the case of Heisenberg's uncertainty principle, be very useful to conscious creatures when there are emergent phenomena. But the fact that a phenomenon is emergent is an objective one.
And it has to do with whether or not you can make predictions about what happens in the world based on dramatically incomplete information. But the information you have refers to these higher-level emergent phenomena. When someone says, you know, we have 12 guests coming, how many chairs do we need? You don't need to know the microphysical state of the world to answer that question, right?
You don't need to know the position of every atom and molecule in your neighborhood to answer that question because the idea of chairs has causal power. You know that if you're going to have 12 guests total, you're going to need 12 chairs, right? There's a high-level emergent
conversation, a vocabulary, a theoretical structure that makes sense even though you don't know all the details about the microstate. And the existence of that higher level way of talking is completely independent of the existence of conscious creatures. Of course, as I just said, it's super useful to conscious creatures to know about that.
It is an unmistakable fact about the world that we conscious creatures don't immediately apprehend the deepest level of reality. We apprehend much more directly some higher-level way of viewing reality. But that's not what makes the layer real; it's just the fact that the layer is useful to us. And space and time work exactly the same way.
And this is Dan Dennett's point in his famous paper about real patterns, which we talked about in the conversation with Dan. The word real in the phrase real patterns is doing something there, right? A chair is part of a real pattern. It doesn't have to do with subjective experiences of human beings.
A chair is a chair is a chair, whether or not there are any human beings sitting in it or thinking about it. Gilbert Rodriguez says, For example, I've read that we find fundamentally incompatible concepts or unexpected resolutions funny. But I feel like this is also a mark of good science, e.g. particle-wave duality.
Another theory suggests humor is a mechanism for pent-up emotions or tension through emotional relief. But a good physics paper makes unexpected connections that feel inevitable almost cathartic. I have in fact been brought to the brink of tears in my math classes upon understanding some beautiful equation.
Math, philosophy, and physics seem to rely on all the same elements that humor does, even beyond paradox and emotion. There's obviously an isomorphism between science and humor. But what makes it an isomorphism and not an identity, i.e., how are they fundamentally different? Good. So this is one of the questions that I promised early on I don't have a good answer to.
I just thought it was a great question. I think it's not quite obvious that there's an isomorphism between science and humor, and I think that you're sort of hinting at that. They're clearly different in some important way. And the most obvious way in which they're different is the, if you want to call it that, teleology, the function, okay?
Maybe there are some aspects of the practice of science that parallel certain aspects of the practice of humor, but the goal is a completely different thing. You know, maybe a good mathematical equation brings you to tears. I get that. I can see that. But that's not the purpose of the mathematical equation. The purpose of the joke is to make you laugh.
The purpose of the mathematical equation is to capture some logical truth. The purpose of the scientific theory and the process to get to that scientific theory is to provide some understanding of the physical world. So I would argue that the most obvious and important difference is that they're oriented toward different goals. For humor, the emotional reaction is the point.
For science and math, the emotional reaction follows along because of how human beings respond to the point. Matt Becker says, I'm curious how Einstein's field equations are used to predict the age of the universe. What observational evidence is being extrapolated to make such predictions and what variable in the equations is causing the singularity?
Well, there's a simple version of this and a more complicated version. The simple version is you model the universe as a very, very simple thing. You say the universe is completely homogeneous and isotropic. That is to say, it's the same looking in every direction. That's isotropic. And it's the same at every point. That's homogeneous.
And that simplifies things. And you say that about space, not about time, okay? So you're drawing a distinction between space and time, which is fine. It's not a feature of the deep-down equations. It's a feature of the specific arrangement of matter in the universe. And also, it need not be true. It's not actually, you know, demanded of us by any principle.
It's just a fact that in our universe, that's a pretty good approximation, especially when you look at very large scales and average out the number of galaxies and stuff like that. So once you've done that, it turns out that if the universe is homogeneous and isotropic, then at every moment of time you have three-dimensional space, and it's completely characterized by its overall curvature.
Its overall curvature can be positive, negative, or zero. So it's a number between minus infinity and infinity. That's a very simple thing to have. And then what happens to that space is it expands. So you have another quantity which is the scale factor. The scale factor tells you the relative size of the universe, of the spatial universe, at different moments of time.
So you put this all together, and Einstein's equation gives you, or reduces to, what we call the Friedmann equation, named after Alexander Friedmann, an early cosmologist. It relates the change of the scale factor over time, which is given by the Hubble constant, a-dot over a, if a is the scale factor and dot is a time derivative. The Hubble expansion parameter is a way of characterizing the rate of growth of the scale factor. And the Friedmann equation relates that to the curvature of space and to the energy density within space. So the fundamental thing you do as a starting-out cosmologist is you learn the Friedmann equation and you solve it. So you say, oh, the curvature is such and such, or the density is such and such, whether it's matter or radiation or dark energy.
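Written out, the relation being described here is the first Friedmann equation. In its standard form (units with c = 1), with a the scale factor, ρ the energy density, G Newton's constant, and k the spatial curvature:

```latex
H^2 \equiv \left( \frac{\dot{a}}{a} \right)^2 = \frac{8\pi G}{3}\,\rho - \frac{k}{a^2}
```

Positive, negative, or zero k corresponds to the three signs of overall spatial curvature mentioned a moment ago.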
And with those inputs, you solve the Friedmann equation for how the universe expands. And then, of course, in the real world, when you're a slightly more sophisticated cosmologist, you actually measure these things. You go, oh, here's how much radiation we have. Here's how much matter. Here's how much dark energy.
And then you can use your solution to the Friedmann equation to extrapolate it backward in time. And that's how you get a 14 billion year old universe. So that is your theoretical prediction. You say general relativity plus our current knowledge about the universe gives us this prediction. The Hubble constant is also important.
The current rate of expansion plus the stuff in the universe makes a prediction for how old the universe is. And then you can test that, right? It was already a pretty good success decades and decades ago, when we realized that the ages of the stars in the universe are of the same order of magnitude as the age of the universe predicted by the Friedmann equation, right?
It could have been that the stars were a thousand times older than our prediction from general relativity for the age of the universe, and that would have been a severe problem. In fact, it looked like for a while that the ages of the oldest stars were a little bit longer than the age of the universe as a whole. This problem went away when we found the dark energy in 1998.
If you include the dark energy or the cosmological constant, all else being equal, you make the universe a little bit older than you would have otherwise. And so now it all fits together very, very beautifully. So you both make the theoretical prediction and you test it. Right now, it's more or less bang on, working very, very well.
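As a rough illustration of the extrapolation described above, here is a sketch that integrates a flat Friedmann model backward in time to get an age. The function name and the round parameter values (H0 = 70 km/s/Mpc, Ωm = 0.3, ΩΛ = 0.7) are mine for illustration, not anything from the episode:

```python
from math import sqrt

def age_gyr(omega_m, omega_lambda, h0=70.0, steps=100_000):
    """Age of a flat FRW universe in Gyr, via t0 = integral_0^1 da / (a H(a)),
    with H(a) = H0 * sqrt(omega_m / a**3 + omega_lambda)."""
    h0_per_gyr = h0 / 978.0          # 1 km/s/Mpc is roughly 1/978 per Gyr
    total, da = 0.0, 1.0 / steps
    for i in range(steps):
        a = (i + 0.5) * da           # midpoint rule; integrand stays finite as a -> 0
        hubble = h0_per_gyr * sqrt(omega_m / a**3 + omega_lambda)
        total += da / (a * hubble)
    return total

print(age_gyr(1.0, 0.0))   # matter only: ~9.3 Gyr (two-thirds of a Hubble time)
print(age_gyr(0.3, 0.7))   # with dark energy: ~13.5 Gyr, older, as described
```

With these round numbers the dark-energy universe comes out a bit under the quoted 14 billion years; plugging in measured parameters closes most of that gap. The point of the toy is the qualitative one from the episode: adding a cosmological constant, all else equal, makes the universe older.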
Sergei says in your last AMA, you said there is this thing called the Electoral College which messes everything up. There's a whole bunch of attempts to suppress votes or disenfranchise people in different ways or to mount legal challenges when the votes come in. It seems like you would rather have a straight up majority vote for president.
Let's imagine that in November, Trump wins the majority vote, but Kamala wins the Electoral College and any challenge from him is shot down by the current Supreme Court. Would you then say that the current setup is robust and worked as expected? Or would you still prefer changes in the electoral and maybe judicial process, assuming this was in the realm of possibility?
It has nothing to do with who wins, okay? I don't care who wins or who doesn't win. The system is stupid, and I would like a less dumb system. Yes, I would like that. You know, historically, the Electoral College here in the United States was put in for dumb reasons.
It was largely because slave states wanted to make sure that they had a little bit more representation than they otherwise would get. But maybe you could invent, ex post facto, some kind of justification for it, if you thought of the electors as responsible, educated people who would debate whom they were going to vote for for president, and the Electoral College would actually do the voting. Maybe that would be a justifiable system. That's certainly not the system, right? That's not even close to the system. In fact, the electors are pledged to vote for certain people. So it's the opposite of that imagined system. And, you know, the current system does absolutely zero good at all.
It does great harm because it means that vast parts of the country are completely ignored in nationwide elections because they're either all red or all blue. It's only a very tiny sliver of states that get a lot of attention in presidential elections. So I don't care who wins. I don't care what the process is. I would like just a majority vote.
Chipsy says, do you think something like an ideological Turing test is a useful tool for helping people to think and argue better? Basically, such a test challenges someone to explain an idea they disagree with in terms that a person who actually believes that idea would fully agree with.
It is intended to demonstrate to yourself and to your debate partners that you understand what they actually believe rather than arguing against a weaker distorted version. If both sides in debate can articulate the other side's arguments convincingly, then they are more likely to have a fruitful exchange. Yeah, I'm a big believer in things like this.
I definitely think so, if you're having a reasonable argument. Okay, you know, gimmicks and strategies to make sure that debates and disagreements are carried out fairly are easy to game by bad actors, right? By people who aren't being sincere, people who don't have any good arguments, etc. But let's put that aside. Okay.
let's imagine that you have a sincere disagreement with someone who you respect, and maybe you're both open to thinking about things in new ways, then you should be looking for ways to understand each other better. And absolutely, the ability to articulate what the other person is saying is crucially important. It's a strategy for doing those things.
It doesn't necessarily make you more sympathetic. Like sometimes you can come up with a way of—you can figure out what to say in the terms that the people who actually disagree with you would say them, and you realize, oh my goodness, this is a terrible argument, right? But at least you're trying to understand it from the perspective of those people. And I would say it's not just true for—
disagreements between people. This is true for any time you're trying to understand why somebody would believe something, even if that is a historical figure. Why would Aristotle have ever said this particular thing, right?
That's a very useful mental exercise for people to go through, trying to understand the context, the reason why people are saying things in terms, like you say, they would agree with. Mikkel Pickle says, the conversations this month have been fantastic. I was especially motivated to spend time in self-examination after listening to Hahrie Han on making multicultural democracy work.
In particular, I've been repeating your statement that being okay with losing the vote is a big part of what makes democracy work. But now I think we should have added, believing that you will have another chance and personally identifying as a member of the democratic enterprise. What else do you think is essential, if anything? Yeah, these are good questions.
You know, I do think that personally, my... reason for being a supporter of democracy is mostly a moral one, mostly a normative one. You know, sometimes people want to say, well, let's add requirements to who can vote, like they have to be able to pass a citizenship test or something. And I think that to me, that's missing the point.
The point is not that we think that the democratic process is going to lead to the smartest decisions. We think that it's going to represent the interests of the people who are doing the voting. Now, it fails at that in a lot of cases because people don't always vote in their own interest. People don't always understand who is going to stand up for their interests, et cetera.
But they should have the right, in my view, to be able to say that. What makes it work, the most important thing that makes it work, is the buy-in, right? We agree that, okay, we are going to abide by the will of the majority, or some version of the majority in a more republican system, right? That's fine.
But whatever process we have to—this is part of, like, the physics of democracy thinking. In ordinary physics, we often coarse-grain or renormalize, right? We have a bunch of spins, and we calculate the expectation value of the spins in a certain block, and we assign a single spin to that block.
And the nice thing about physics is that you know, or maybe you know or you don't, but the correct way to coarse-grain is fixed by the physics, right? It's the way that leads to some higher-level emergent phenomena without knowing all the microstates. And some ways of coarse-graining will work and some ways won't. So in democracy, we choose how we coarse-grain, right?
We have a large number of people all with their individual interests and preferences, and we're choosing by identifying a voting system and a governmental system how to sort of coarse-grain those up into a small number of people making a relatively compact number of decisions.
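The block-spin picture maps onto this directly. Here is a toy sketch (my own illustration, not anything from the episode) of majority-rule coarse-graining on a one-dimensional chain of spins:

```python
def coarse_grain(spins, block=3):
    """Majority-rule block-spin coarse-graining of a 1D spin chain.

    Each block of `block` spins (each +1 or -1) is replaced by a single
    spin carrying the sign of the block's sum, i.e. the majority vote.
    """
    assert len(spins) % block == 0 and block % 2 == 1  # odd block size avoids ties
    out = []
    for i in range(0, len(spins), block):
        total = sum(spins[i:i + block])
        out.append(1 if total > 0 else -1)
    return out

chain = [+1, +1, -1,  -1, -1, -1,  +1, -1, +1]
print(coarse_grain(chain))  # [1, -1, 1]
```

The analogy to democracy is that each block plays the role of a district: many individual "spins" get summarized by one representative value, and different choices of block size and rule give different emergent descriptions, some more faithful than others.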
So some methods for voting systems and for representation systems will be better than others, and you should have a discussion about how to do that. But anyway, whatever that method is, I think the buy-in on the part of the voters is important. I think that you have to have a citizenry which overall thinks that democracy is a good idea.
Everyone says democracy is a good idea because they want their point of view to be represented. But you have to have people who say democracy is a good idea even when it's the other points of view that end up carrying the day. There needs to be some fairness. There needs to be some minimal requirements. That's why we have a Bill of Rights and things like that.
You have to protect people's interests against the feeling of the mob in the moment, absolutely, sure. But overall, you have to be able to take some policy defeats if you're in a democracy. I do think that, second, you need some responsibility, right? You need some responsibility as a voter to be informed, to basically know what is going on.
We live in a country, I live in a country, the United States, where voting is not mandatory, right? Some places it's mandatory, but half the people, more than half the people in most elections, just don't vote at all. I think that's okay. It's okay not to vote. That's fine.
I wish more people did vote, but I wish that the people who did vote had a better idea of what was going on, had a more realistic view of what different candidates stood for, what different policy choices, what their effects would be, all that stuff. It's not something I would ever mandate. It's not something I would ever put into law, but we can encourage it, right?
And finally, I think that you need an understanding of what it means to be active in a democracy. And what I mean by that is so much of our current discourse these days is just pointing at the other people who you disagree with and saying that they're wrong. And that maybe they are wrong, OK, on both sides. This is one of the issues that is a both sides issue. Not every issue is.
But on both sides, people just like to highlight the worst parts of the people they disagree with and mock them. And that doesn't grow your coalition, really. That doesn't help people understand why your perspective is better than the other perspective. You actually have to persuade people to be on your side. And so few people seem to be interested in that part of democracy.
That's one of the things that is making it harder and harder for democracy to work in the modern world. Part of the responsibility is voting. Part of the responsibility is bringing people into agreement. with you, right? Not just saying that they're wrong or they're right, but changing their minds or helping them make up their minds if they're not yet decided.
So I think we're going to come back to this in a later question, but democracy is work when you do it right. It's not something that happens once every four years. It's kind of a responsibility that is a good one to have. It's better than living in a dictatorship, but it's a responsibility worth taking seriously.
Nicholas Latart-Bersionic says, when you look into a telescope today, what comes to your mind first, the thought of the cosmologist or that of the philosopher? I love the idea that I'm looking into telescopes today. I have not looked into a telescope in quite a while. I was never really, by the way, a real telescope looker. You know, I was interested in cosmology from a very young age.
And so family and friends who didn't understand what I really was interested in, they roughly had the idea that I liked things in the sky, the stars, space, or whatever. So I, you know, got a telescope as a present one Christmas, et cetera. And I tried to use it, but it was never really my thing. I'm interested in the equations. I'm interested in the ideas and the concepts.
And you don't see equations when you look through the telescope. But anyway, I did spend a lot of time looking through telescopes, especially as an undergraduate astronomy major. I worked as an observatory assistant among various other part-time jobs. But these days, when I metaphorically look into a telescope... well, I understand, Nicholas, that you're not really worried about that.
Your question is, am I thinking about the universe primarily as a cosmologist or a philosopher? And the answer is neither one. You know, the answer is my whole shtick is to appreciate that those are not two separate categories. It's not that, you know, now that I'm half in the physics department and half in the philosophy department that I am—
somehow more of a philosopher than I was before, that I have switched. My interest is still the same, understanding how nature works. That is my interest. I have changed, but the change is appreciating the ways in which thinking like a philosopher helps that goal, helps move us toward better understanding how the universe works. So
When I look through the metaphorical telescope, it is not as a cosmologist or philosopher. It is as someone who wants to understand the world better, given any tools that I can lay my hands on to move closer to that goal. Steve NZ says, how does it feel to be someone whose opinions are sought out by thousands of people all over the world?
Your views are sought on such diverse subjects as cosmology, history, particle physics, pizza cats, relationships, life, the universe, and everything. When you were, say, 17 or 18 planning your career, did you ever think you would be where and what you are today? That's a good question. You know, I don't want to exaggerate the extent to which my opinions are sought after.
And also, I don't want to exaggerate the compliment that the world pays you by being a person whose opinions are sought after. Just look at the other people whose opinions are sought after. Let's just say it's a wide spectrum. There's some very admirable people whose opinions are sought after, some less admirable ones whose opinions are taken seriously, way too seriously.
So it's easy to be humble about that part of it. And also, you know, I appreciate that, yes, people ask me questions about pizza and cats and martinis and things, but that's part of just expanding the space a little bit. Really, most of the questions, as you'll notice in the AMAs, are about physics and philosophy, right? Which makes perfect sense.
If I just started a pizza podcast, I don't think it would be very, very popular. I would enjoy it. But anyway, it's hard when you're 17 or 18 to have any idea. So if I take the question to be, what was I planning when I was 17 or 18? Look, when I was 17 or 18, I wanted to be a physics major or astronomy major at university. Um, and I didn't know much about what came after that.
Like, I vaguely knew that you go to graduate school, okay? But I came from a family where there were no academics, right? And, you know, there were not a lot of books in the house that I had growing up. I went to a large public school. You know, it wasn't usual that they sent people off to eventually get PhDs and become professors. It did happen, but it didn't happen all the time.
So I didn't have any guidance. And even when I was an undergraduate at Villanova, which was a wonderful place in many ways, but there was no graduate school there in physics or astronomy. So the professors were good, but no one there did theoretical cosmology or particle physics or gravity like I wanted to do.
And there weren't any graduate students for me to get sort of wisdom from. So I just stumbled from moment to moment. I remember it took me a while just to learn that I could afford to go to graduate school, because when you go to graduate school in physics, for example, you get a stipend, right? So you either are a research assistant or a teaching assistant.
So they will list... well, we didn't have the internet in those days, right? So there was this book put out by the American Institute of Physics that listed all the graduate programs, that said who was there, what research they were doing, and things like that. And it also listed the tuition. And I would look at the tuition and go, yeah, there's no way I can pay that. This is impossible.
And my professors did help me out in explaining that, oh, no one pays the tuition in graduate school. Nobody pays graduate school tuition. It's covered somehow or another, by hook or by crook. And I certainly didn't understand that there was a system of postdocs and whatever. So anyway, all this is to say, zero intention of becoming a public figure in any way.
I did maybe imagine writing books someday. I didn't have a podcast plan in mind when I was 17 years old. But writing books always seemed like that would be a fun thing to do. In fact, if anything, my regret is that I didn't take those more outlandish plans more seriously, earlier.
It would have been even worse for my career than what it was, but I would have gotten a head start, which is always a good thing. So short answer, no. I didn't think I would be in precisely this place today, but I didn't have a great idea of where I would be either.
You know, I knew that there were people like Stephen Hawking and Steven Weinberg and Ed Witten who were doing great things in physics. I had that vague idea. You know, these are the people who were big names at the time, and I kind of wanted to be like them in some rough way.
And, you know, Stephen Hawking and Steven Weinberg, of course, did wonderfully good things in writing books and reaching out to a wider audience. Neither one of them quite ever had a podcast, though, so I have that. I have that over them. Ed asks, I saw this on Britannica.com and I have doubts about its veracity. Is any of this true? Quote,
meaning that humans are able to see more and more of the universe with the passing of time. While humans will never be able to see the entire universe from Earth, only the relatively small bubble of the observable universe, the sphere of observation is ever expanding. You know, this is something that is gesturing toward the truth in a little awkward way, okay?
So the observable universe expanding a light-year per year... that's not it. That's quantitatively not quite right. But the observable horizon, our ability to look backward in time and see some things in the universe, is getting bigger because the universe is getting older. And the thing that bounds our observable horizon is that, when you look back in time, you run into the cosmic microwave background. You run into the moment in time when the universe was opaque. And every year, that moment gets to be one year further into the past, and therefore you can see a little bit farther. Because the universe is expanding and changing its size in between now and then, it's not exactly a one-to-one map from years to light-years.
That's why the quantitative thing isn't quite right. And of course, this depends on the future history of the universe. But right now, yes, it is true. We are seeing more and more of the universe. When you think of it, though, you know, if you live 100 years, that gives you an extra 100 years of time in between you and the Big Bang.
That's not a big number compared to the 14 billion years that is roughly speaking actually already there. So it's not a very large effect. Robert Ruxandrescu says, back in 2021, after my COVID infection, my sense of smell and taste were severely affected. At some point, I had no taste or smell. They recovered a little, but in a terrible way.
Everything tasted really bad, especially eggs, onions, any type of meat, even bread. It seems like a spectrum of my taste, the sour taste, was destroyed, and everything tasted and smelled really weird, usually like rotten meat. Considering that you've mentioned things about cooking and wines quite a bit, how would such an event affect your life?
Just a little, a lot, would it be absolutely terrible? Would you be able to deal with this for the rest of your life? I'm not going to be able to give honest answers here because I think that these things are hard to predict. The literal question is, how would it affect my life? I can predict that I would be really disturbed by this if my sense of taste went away.
But how I would deal with it, how I would cope with it, is hard to say. I like to think that I would shift focus, you know, my loci of pleasure, from things that used to give me pleasure to other new things that could give me pleasure, right? Maybe I would finally become a better musician than I am now. But you don't know. Like, maybe I would just become bitter and grumpy all the time.
So I don't want to predict how good I would be at handling this kind of thing. I do think, you know, look, different people have different inputs, sensory inputs or whatever. Jennifer, my wife, is what is called a super taster. So she has a very, very sensitive palate, which is good in some ways, bad in other ways. You know, she's able to pick out notes in wine and everything, but...
There's other things, like broccoli, okay, which to me and to most people is crunchy but unobjectionable, but to her is inedibly bitter, right? She tastes bitter notes in there that other people just don't taste. Tomatoes, raw tomatoes, she can't eat. They make her throw up because of the effect they have. And so that's okay. You just don't eat those things. You eat other things, right?
If everything about your sense of smell and taste went wrong, that would definitely disturb me a lot, I'm happy to admit, but I don't know how I would go about dealing with it. I would try to be zen about the whole thing and look for pleasures elsewhere, but I can't promise you I would succeed. Sometimes we're more successful at those aspirations than others.
Joshua Hillerup says, a lot of people, including on some episodes of your podcast, have talked about why politics in the US are so polarized right now. What I don't understand is why it's polarized on a federal level so close to 50-50. Why do you think that is instead of, say, 70-30 in favor of one party or the other? You know, I think this is a sneakily good question in the sense that there are
answers to it that are not that hard to find, but I'm not sure those answers are right. So the simple answer is you're trying to win elections, right? There's something called the median voter theorem in voting theory, which says that the place you want to be as a political candidate is as close to the median voter as you can be. What that means is if you have a single spectrum of opinions—
and you're reducing complicated political feelings just to that single spectrum, and they line up from left to right, you want to be in the middle of that in terms of actual numbers of people, because if you are moving to the left or right of the middle, then your opponent can move just a little bit to the other side of you, and they will always get more than 50% of the vote.
This assumes a lot of things, the median voter theorem. It assumes that there is a single spectrum you can line everyone up on. It assumes that candidates have well-defined locations on that spectrum. It assumes that all people do is to vote for the person that is closer to them on the spectrum. All of these assumptions are false, okay?
But nevertheless, we call it a theorem because it does follow from the axioms. That's what theorems are supposed to be. Now, if you look at the people who actually run for president, it is perfectly clear that the median voter theorem doesn't work, that the actual candidates do not both try to stick as close as possible to the center, right? And why? Well, there's many different reasons.
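The mechanical claim of the theorem is easy to check in a toy simulation. This sketch (the names and numbers are mine, just for illustration) scores a head-to-head race on a one-dimensional spectrum under exactly the assumptions listed above:

```python
def vote_share(voters, cand_a, cand_b):
    """Fraction of voters closer to candidate A than to candidate B
    on a 1D opinion spectrum. Ties split evenly."""
    a_votes = 0.0
    for v in voters:
        da, db = abs(v - cand_a), abs(v - cand_b)
        if da < db:
            a_votes += 1
        elif da == db:
            a_votes += 0.5
    return a_votes / len(voters)

# A skewed electorate on a 0-10 spectrum; note the median is not the mean.
voters = [0, 1, 1, 2, 2, 3, 5, 8, 9, 9, 10]
median = sorted(voters)[len(voters) // 2]   # median position is 3

# A candidate at the median beats a rival positioned on either side:
print(vote_share(voters, median, 8))        # > 0.5
print(vote_share(voters, median, 1))        # > 0.5
```

Against this electorate, the candidate at the median beats rivals at 8 or at 1 alike; park the candidate anywhere else and a rival can slip between them and the median and win. That is the whole content of the theorem, which is exactly why the false assumptions (single spectrum, fixed positions, guaranteed turnout) matter so much in practice.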
One is that we have a party system, and the party system is an almost inevitable result of the Constitution of the U.S., which gives so much power to a single president who is popularly elected, as opposed to giving power to the parliament or whatever. So given that we have a party system, the party carries a lot of the work of choosing candidates and defining positions.
And what you end up with is people who more or less represent their party, and then the two parties are not going to be right there at the median voter level. Because it's not just a matter of getting people to prefer you, you have to get them to vote for you, right? So you have to make people enthusiastic.
You have to get them out of their seats and into the voting booth, which means you just can't be the milquetoasty centrist candidate all the time. Now, so all that makes kind of sense, right? Even though we do have political parties, etc., it makes sense from reasoning analogous to the median voter theorem that the parties should divvy up the vote about 50-50.
That's the simple answer, and I don't think it's quite right. It's too close to 50-50, honestly, especially when so many voices in either party are not really representative of moderation in any way. So, well, I don't know what the 50-50 answer is. I would like to know more about that. I do have some understanding of the polarization.
You know, back in the day, as we learned from Will Wilkinson and Ezra Klein and other people, Back in the day, there was simply less alignment between different sets of political questions in individual people. You might have people who were very typical voters, who were very liberal on one issue and very conservative on other issues.
And then what has happened over time is that both individual people and the political parties have kind of lined up, right? So if you were conservative in one thing, then you're probably conservative in something else, and likewise for being liberal, in United States sort of nomenclature. There's analogous nomenclature all over the world.
So that makes it easier to just think of people in the other party as the enemy, right? And that sort of drives people apart. You know, remember, the very first episode of Mindscape was with Carol Tavris, a social psychologist, and she and her co-author, in the book that we talked about, Mistakes Were Made (But Not by Me), have something called, what is it called?
The pyramid of choice, if I'm remembering correctly. It was a long time ago that we talked about this. But the pyramid of choice is this: there's two people, and they are both faced with the same question. And to both of them, it's like, okay, I don't really care what the answer is; 50-50, you know, it could go one way or it could go the other way.
But imagine that for whatever reason, these two people choose different answers to this question. there is a psychological feature of human beings that we do want our beliefs and choices to sort of be right. And that means not only right externally, but also in agreement with other things that we believe. So as soon as you make the choice, you start justifying it, right?
You start saying, oh, yes, that was clearly the right choice. And they trace in the book, this is why it's called the pyramid of choice, how these two people who started out with exactly identical opinions, but just by random quantum fluctuations ended up making different decisions, become less and less alike.
They sort of move apart because they're both internally justifying that original decision that they made. I suspect something like this is going on at the national level to sort of push people apart, now that their interests and political opinions line up, and now that elections in the U.S. have become more national because of technology and things like that.
I don't know how that explains the 50-50 split, which is still baffling to me, but I'm sure that there's probably a perfectly good explanation out there. So if anyone knows what it is, let us know. Clement Goers says, if you could ask an all-powerful genius one question that can only be answered with yes or no, what would your question be and why?
So this is one of those questions I'm going to read out loud, but I'm not going to answer; I want to talk about the idea of the question. I'm not into questions like this. I think that this question represents a different way of conceptualizing human knowledge than the way that I have, right? I had a good analogy for this when I first read the question. What was the analogy?
It was something like: if you could only eat one part of the pizza, the crust or the cheese or whatever, which part would it be? It's kind of missing the point of pizza, right? Pizza is on my mind because we were just talking about it a second ago.
It's the grouping and the interplay of the different ingredients that matters. For knowledge— It's the whole story that matters, right? Like if I want to know, this is not the question I would pick, but let's say, you know, I want to know, did the universe begin at the Big Bang? And so I asked the all-powerful genius and they said, yes, it did.
As opposed to, you know, having a prehistory or something like that, right? Because we honestly don't know the answer to that question. So that's very valuable information. It's sort of half of the possibility space has now been eliminated, right? But it's entirely dissatisfying.
It's a slight help in our future research programs that we now no longer have to consider certain models and can only consider other models. But we still have a huge number of models to be considered, right? So it's one bit of information. And that's just not how science or knowledge works. And I think it's a feature of science and knowledge that they don't work that way.
It's a journey that builds up a story piece by piece, and all of the story matters. So sorry, Clement, if I'm kind of being a jerk and not answering your question in the spirit in which you intended it. But I think it's emblematic. I mean, I get lots of questions, not quite like this, but along these lines. You know, what question would you have for—
Albert Einstein, or to ask God, or, you know, who would you want to invite to your dinner party of historical figures? None of these are my kind of question, I've got to say. I just don't think like that. The journey matters as well as the individual achievements along the way.
Marie Rouskew says, what, in your opinion, are the main properties making a model, theory, or equation, etc., beautiful? That's a good question. And you know, I haven't deeply theorized about this. People have sat down and really tried to think about what makes a model, theory, or equation beautiful. I'm happy to use the language.
You know, I'm happy to say that general relativity is the most beautiful physical theory ever invented. So why is that? What is it that makes it more beautiful, let's say, than the Schrödinger equation of quantum mechanics, which is beautiful in its own right, or the Lagrangian of the standard model of particle physics, which is kind of not beautiful?
It has beautiful aspects, but most people would say it's kind of clunky. Why is that? Well, look, in the standard model, there's a lot of seemingly arbitrary choices, right? There are symmetries. What are the symmetries? Oh, there's an SU(2) and a U(1) and an SU(3). Why those symmetries? There's some discrete symmetries.
There's some partial symmetries that are like approximately good but not perfectly good. There are three generations of fermions, but why not four? Why not two? And you can go on and on. And it just seems arbitrary and kind of— ad hoc, and that feeds into something, the feeling that we get that it's kind of ugly. Also, it's kind of long.
It's like a lot of separate things going on in the standard model of particle physics. Oh, look, there's the gluon, there's the Higgs field, there's the neutrinos, etc. Whereas general relativity is just Einstein's equation. You can read about it in Space, Time, and Motion, volume one of The Biggest Ideas in the Universe, or in the whole solo podcast I did explaining Einstein's equation. And it's elegant.
It's elegant in that it's very simple, it's very short, and it explains a huge amount. It was invented almost by pure thought by Albert Einstein, circa 1915. Almost pure thought: there was some empirical input to it, like the principle of equivalence, and relativity itself, so the constancy of the speed of light, mattered.
But it wasn't like we were doing enormous numbers of gravitational experiments that forced us to general relativity. Einstein was trying to come up with a very simple, beautiful reconciliation of relativity with the existence of gravity. And he kind of did it. And now over 100 years later, we have not improved on his idea.
Whereas the standard model of particle physics is not something you would have come up with by pure thought. Every single bit of input there mattered. You know, when I give my talk on volume two of The Biggest Ideas in the Universe, where I do talk about the standard model a little bit, quantum field theory, etc.,
I have a list of concepts that played a very big role in constructing the standard model, and I give a couple of names who helped us establish each concept, and it fills up the whole slide with small print, okay? There's many, many, many, many names, all of which had something to add to that, and it's all necessary, and it's all because of individual specific features of the real world.
So it is impressive without being elegant or beautiful. So something in there about the elegance or the beauty is it's simple, and once you see it, it seems almost inevitable. How could it possibly have been any other way? As a scientist, of course, you have to keep in mind that it could have been another way, okay? Simplicity is not what decides whether things are true or false.
Simplicity, elegance, beauty help us look for hypotheses, which then we test against the data, and that's what tells us whether something is true or false. Ned Grady says, I've heard recently a continually accelerating observer experiences radiation in a vacuum. But if the observer is in a vacuum, relative to what are they accelerating? Yes, you've heard correctly.
You've heard about Unruh radiation. William Unruh, who's still around, still at the University of British Columbia, did a wonderful thing in the 1970s. After Stephen Hawking showed that black holes evaporate, Unruh did what every good physicist dreams of doing, which is to distill that idea down to its essence. And, you know, he asked, what is the simplest version of this?
What is the spherical cow? And he discovered that if you have a constantly accelerating particle detector in empty flat space-time, Minkowski space, no matter or energy or anything, then in the quantum field theory vacuum, that particle detector that is constantly accelerating will tell you that it's detecting particles.
It turns out that the mathematical demonstration of that is very, very closely related to the derivation of Hawking radiation. In fact, I'm working with a graduate student right now.
He, Chris Shalhoub, who's the grad student, is doing almost all the work, but we are writing a paper that looks at what particle detectors detect when they fall into black holes in what we like to think is an unprecedentedly careful way, and we found some interesting results there. So hopefully that will be out soon. But anyway, so that's Unruh radiation.
And what you're asking is, how do we know they're accelerating? Well, that goes back to Galileo. You don't need Einstein for this. The idea of relativity, if you're either Galileo or Einstein, is that there is no preferred location in the universe. And there is no preferred velocity in the universe. But there is a preferred acceleration, namely zero.
The demonstration of this is just that if you're in a sealed spaceship and you can't look outside, you do not know where you are, and you do not know how fast you're moving. You don't feel that motion because you and the spaceship are all moving together. But you absolutely know if you're accelerating.
If you're accelerating, you're pushed against the wall or the floor of the spaceship. And if you're not accelerating, you're not. So there is a preferred set of trajectories in space-time called the inertial trajectories, those that feel no acceleration. And you are accelerating relative to those.
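For reference, the standard formula for the effect (textbook material, not quoted in the episode) ties the temperature the accelerating detector sees directly to its proper acceleration:

```latex
% Unruh temperature for a detector with constant proper acceleration a
T_{\mathrm{Unruh}} = \frac{\hbar\, a}{2\pi c\, k_B}
% \hbar: reduced Planck constant, c: speed of light, k_B: Boltzmann constant.
% Even an enormous acceleration a ~ 10^{21}\ \mathrm{m/s^2} yields only
% about 4 Kelvin, which is why the effect has never been directly observed.
```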
Paul says much has been discussed in recent times regarding the impact of AI technology on work and removing the mundane aspects of life. Much less has been said about the possible impact on our hobbies, sports and the arts. How do you see AI impacting these aspects of life that many of us hold dear? Well, I think it'll be a mixed bag.
I already confessed above that predicting the actual specific outcomes of these large-scale shifts in technological capacity is very hard to do, so I'm not going to claim to get it on the bullseye here. There's sort of the optimistic scenario and the pessimistic scenario, right? The optimistic scenario is it enables people— to do hobbies at an unprecedented level.
And I know that some people are sort of scared by this, and I think that's bad. You know, we had Grimes on the podcast a while back. The thing that I was really interested in talking about is just what's possible now with a computer. It wasn't even AI at the time; we didn't talk that much about AI. She wanted to talk about AI more than I was interested in.
So she saw it better than I did that AI would have a big impact on this. But this was a couple years ago. And just the existence of computers and synthesizers and drum machines makes it really much easier for a person without any real musical skill or training to write and create a song. You know? I think that's great. I think that's a wonderful thing. Not that many people do it.
I mean, some people do it, but it's not like, you know, before you had to learn to play the piano or the guitar or something, and now you need to learn less, but still you need to, like, learn something. And it turns out that most people are just not that into creating new songs for themselves, which is fine. So the optimistic scenario for AI is that it makes it even easier, right?
Is that you can tell the AI, look, I've had this drum track and this set of key changes, chord changes in a certain key. Write a melody that will go along with it, and it will do it. And maybe that just inspires people who might not put in all the effort to learn an instrument in the usual way to explore creative avenues to an unprecedented extent.
Again, I don't know whether it will happen, but it's certainly possible. You see people already who are making movies using AI. And that's just something that would be very, very hard to do with a pocket camera to make a sort of professional-level film. You still can't even with the AI, but it's very conceivable that you could once AIs get better.
Now, I'm putting aside – I should have said it at the start. I'm putting aside here all of the very, very important and real-world worries about AI. Number one, using up a huge amount of natural resources. Number two, polluting the environment. Number three, stealing from other artists, etc.,
Those are very good issues, but that's not the question that Paul asked, so I'm not addressing that one right now. The pessimistic side would be that people become couch potatoes, right?
Like that rather than putting in the work to learn to play the guitar or the keyboards or whatever or learn to set up a shot with proper lighting and sound design in a movie, we just, you know, let the AI do it, right? It's the WALL-E theory of the impact of AI on our lives.
You know, I don't—I get that that's a little bit of a worry, but I don't really think that's going to be a huge thing either. I'm sure that when movies came along, people said this would completely kill off live theater performances. And maybe it diminished the role that live theater performances have in our lives, but they still exist. They're still out there. People still do it.
High schoolers still put on musicals, right? Yeah. I think that people will still do art the old-fashioned way, even when newfangled ways come along. So overall, I would tend to be optimistic. I would tend to fall on the side that the AI will give us some tools that we didn't otherwise have, and it will still allow us to do things the old way if that's what we want to do.
To make that optimistic scenario come out, it is going to require thought and care and, believe it or not, regulation and rules, because this is what Daron Acemoglu warned us about. If we don't do that, then it's just going to be an extractive capitalist institution, not something that allows for a flourishing of human creativity.
Anders says, you've said you don't like smooth jazz. At what point do you think preference becomes snobbery? Well, I think it's revealing that you would even associate those two things. You know, if I said I don't like 12-tone classical music, you would not think that that was snobbery. It is true that smooth jazz is something that people are snobbish about. But I don't
ever think that liking or disliking a certain kind of art is snobbery. That's just your personal preference. Some people like more accessible kinds of art. Some people like less accessible kinds of art. Some people aren't very artistically inclined at all. This is all perfectly fine as far as I'm concerned. Snobbery is a reference to how you think about other people's interests in art.
If someone says they like country music and not classical music, and you look down upon them because of that, that would be snobbery. And I'm not into that either. So if someone else likes smooth jazz, knock yourself out. That's completely okay with me. Matthew Wright says, last month you left us on a real cliffhanger.
You just managed to trap Puck, the wild cat you've been taking care of, for a trip to the vet, but the vet visit itself had not yet happened. How did it end up going? I did. So in the call for AMA questions, what exactly was the wording? It was a month ago, so I don't remember the details.
Anyway, let's jump forward because I know that different people out there inside Patreon and outside Patreon have different amounts of background on Puck, our stray cat. As I am recording this AMA right now, Puck is sitting next to me. He's about five feet away. We're upstairs in our house here. So we trapped Puck.
took him to the vet, discovered Puck was a he, which we weren't sure about, did the operation that he needed. He got all of his shots. He got the fleas removed, et cetera. And we were told that he needed to come back for a booster shot. I forget which one it is. Maybe the feline immunodeficiency virus shot, but it can't be HIV, because that's the human immunodeficiency virus. But you know, there's a shot you need to get if you're a cat, and you need to get one shot and then a booster three weeks later. So we suspected, because Puck is, like, not super-genius level, but he's pretty clever at avoiding things, that we only had one chance ever to trap him.
So when that worked and we trapped him, we took him to the vet, we figured, okay, three weeks, we're just going to keep him inside for three weeks. We have a big room, like a playroom that we can give up in the house, put him in the playroom, feed him, kitty litter box, the whole bit. And what we suspected was that after those three weeks, we would sort of let him out again.
Like we'd take him back to the vet, do that, let him out again. Um, I will put up pictures, at least for the Patreon subscribers, you'll see pictures of Puck in his room. The truth is he just made himself at home right away. We were thinking that he would try to escape, that he would meow and express unhappiness, that he would, you know, like scratch at the windows or whatever.
He hasn't done that. Puck has sort of realized there's soft blankets here and there's food all the time, and it's warm and dry and comfortable, and he has made himself at home. He is very, very happy right now. And he doesn't want to be touched, doesn't want to be petted, but he will come right up to you, sniff your fingers, you know, nod and say, okay, you're all right, I know who you are. He will follow me around in the room, and when I come into the room to do the podcast, which is next to his room, he will follow me in, as he is right now, sitting next to me listening to me podcasting.
And he's just up there, he's looking at me. Hi there, Puckster. He's clearly much more at home and safer in here than he would be outside. Now we're thinking maybe we should just keep him inside. That raises a whole new challenge of socializing him to be friends with Ariel and Caliban. Ariel and Caliban were here first.
They get veto power, and bless their hearts, they are not sociable kitty cats. They do not want any strangers in the house. So that has not happened yet. We've been keeping all the cats apart. But even though Ariel and Caliban don't like intruders or interlopers, they're at heart good-natured, OK? And we live in a house where there's enough room.
There's different rooms where they can all spend time apart if that's what they want to do. So we are optimistic that we will adjust all the cats to being friends with each other. Now, the biggest question is how the three cats get along. The second question is, will we let Puck outdoors again? Part of Puck, I'm sure, is going to want to go back outdoors again.
But like we said, it is less safe out there. We have a yard, but he doesn't stay in the yard. He crosses the street and things like that. It's super dangerous out there. There are falcons in our neighborhood and foxes and things like that. So part of us wants to just keep him inside. Whether he will be happy with that or not, I'm not sure.
Like I said, so far he has shown zero communication to us that he would rather be outside, but we'll see how that goes once he gets the run of the house. So updates to come as they are warranted. Josh B. says, imagine an intergalactic advanced alien civilization in terms of technology, virtually unlimited source of energy and societal organization, little to no scarcity of resources.
In what way would you imagine their society being organized in terms of hierarchy and division of labor? So I don't believe in the question, honestly. When you say virtually unlimited source of energy, little to no scarcity of resources, that can't happen. I know that people like to imagine this, but I think that they're fooling themselves.
I think that they're imagining, well, how much could I personally possibly want in life? I can imagine a civilization where everyone has that much or more. But you know what? Human nature being what it is, what I suspect is that people will want more things. I want my own galaxy, right? Who's to say? Well, if you say, well, okay, we don't have that much. You can't have your own galaxy.
That is scarcity right there. If you don't believe that will happen, look at what actual super-rich people do. They buy a lot of stuff. Some are more modest than others, sure. Some are very showy about it. But there's a whole bunch of people who have a lot of resources already and still want a lot more, you know? I think that, again, it's a journey-not-a-destination kind of thing.
So whatever the organization of the society is going to be, I suspect it will not be dramatically different than ours simply because scarcity has been overcome. I do think that we can overcome poverty. We could do that right now, here on Earth, if we put our minds to it. We could absolutely distribute resources in such a way that no one was truly, truly badly off.
We choose not to do that for whatever reason, and we can talk about that. And I suspect that those reasons are deeply ingrained in human nature. Therefore, I would not be surprised if even when a society is much more advanced and wealthier, there are still vast disparities and there's still a lot of poverty out there. I don't want that to be the case.
And indeed, you know, to be fair, we have made a certain amount of progress in reducing the amount of poverty in the world. But that's a tricky thing to measure because what you're comparing against is unclear, et cetera, et cetera. But I can't say that if society reaches a level or an alien society is already at the level,
where they have enormous, enormous resources, that that would somehow mean that things are more equitable or less hierarchical. I would like to think things like that are true. I have zero reason to expect it in the real world. Connor Schaffran says, what do you think is the most misunderstood concept in modern cosmology, and why do you think it's so challenging for people to grasp?
This is a hard question, not because there are no good answers, but because there are too many good answers, depending on what you mean by the set of people who are misunderstanding it, right? I think that, you know, there's a surprising number of people on the street who barely know the universe is expanding. They don't have a grasp of cosmology up to the level of the typical Mindscape listener, much less that of a professional cosmologist, etc. So different sets of people are going to have different misconceptions. You know, I'll just name one popular one, which is that if the universe is expanding, if you already know that, then you think that it should be expanding into something.
And it's perfectly clear why that misconception exists, because whenever we visualize things, we visualize them inside the three-dimensional space that is around us. So our intuition says that things that are expanding are expanding into the space that is around them.
And Einstein, using mathematical techniques developed by Riemann and others, going back to Gauss, et cetera, figured out that the four-dimensional geometry of spacetime can be expanding or can be dynamical more generally without being embedded in any bigger thing. And to add to that, if you're a careful scientist, if someone says, what is the universe expanding into?
We can say, as far as we know, it's not expanding into anything, but it's conceivable that it is. We have zero reason to think that it is, and we have perfectly good theories that fit the data without it doing that, but maybe it is. And in some sense, there's various theories of extra dimensions in which something like that is actually happening. So I do think that, generally speaking, cosmology...
has some misconceptions because its regime of applicability is just very, very far away from the intuition that we've built up as ordinary human beings living our everyday lives. It shouldn't be surprising. It's okay for things to be misunderstood. All you should do is work harder to understand them better. Aaron Munger says, how can information be preserved with quantum indeterminacy?
Shouldn't it also work backward in time as well, and therefore make it impossible to determine a past state? It's not quantum indeterminacy. I don't really like that word. I'm not sure what that refers to. But there is, in the real world, an arrow of time associated with quantum mechanics, namely... wave functions collapse toward the future, not toward the past.
And indeed, the process of wave function collapse does not preserve information. That is true. When people talk about, you know, in the context of black holes or whatever, information being preserved, they mean other than the measurement process in quantum mechanics. This is the weird thing about quantum mechanics.
There's one set of rules for when things are not being measured, another set of rules for when they are being measured. So when they are not being measured, the rules of quantum mechanics as we understand them tell us that quantum states preserve information. But measurements seem to not preserve information. There we go.
Again, it depends on your favorite view of quantum mechanics. In Everettian quantum mechanics, the universe as a whole, the wave function of the universe, does preserve information, but we don't have access to it. We have access to one branch at a time. And the time asymmetry comes from the fact that the early universe was special.
That is exactly the same origin as the thermodynamic time asymmetry. I think we'll talk about that a little bit later in the AMA. So, in fact, if someone says, I have a quantum spin, I'm going to measure it. Oh, look, it's spin up. Tell me what its state was before I measured it. You can't. All you know is that there was a greater-than-zero contribution from spin-up, okay?
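To make that concrete, here is a minimal numerical sketch; the three example amplitude splits are hypothetical choices of mine, and the point is only that the Born rule maps many distinct prior states onto the same measurement outcome:

```python
from math import sqrt

# Born rule: a qubit a|up> + b|down> yields "up" with probability |a|^2.
# Three hypothetical, physically distinct premeasurement states:
states = {
    "50-50": (sqrt(0.5), sqrt(0.5)),
    "70-30": (sqrt(0.7), sqrt(0.3)),
    "99-1":  (sqrt(0.99), sqrt(0.01)),
}

for name, (a, b) in states.items():
    p_up = a * a  # probability of measuring spin-up
    print(f"{name}: P(up) = {p_up:.2f}")

# Every one of these states can produce the single outcome "up", so
# observing "up" only tells you |a|^2 > 0; it does not let you
# reconstruct which state you started with.
```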
You don't know whether it's 50-50 or 70-30 or 99-1 or whatever. All you know is there was some spin-up part, and probabilistically you got that. So you cannot infer the past from the present in quantum mechanics, given the information that is actually available to observers. I'm going to group the next two questions together.
Peter Kraus says, Roger Penrose has said in an interview that the cosmic microwave background has a nearly perfect blackbody spectrum, which would indicate a thermal equilibrium state, which in turn would indicate high entropy. Therefore, he assumes gravity to balance out with very low entropy so that the past hypothesis can be maintained. Hopefully I didn't misunderstand something.
What is your take on this? And Bits Plus Atoms says, I heard Brian Greene interview Roger Penrose. And Penrose says, paraphrased, an intensity versus frequency graph of the CMB is almost identical to the Planck curve representing blackbody radiation. That curve represents thermal equilibrium for a system.
This would suggest that at the birth of the CMB, the universe was in a state of thermal equilibrium, but it wasn't. The overall low entropy of the universe is due to the negative contribution of gravity.
I know you don't put high credence on Penrose's cyclical cosmology approach, but are his statements about the CMB and gravity correct in terms of the contributions to the early low entropy of the universe? You know, I give Penrose a huge amount of credit for this. I think that Roger Penrose was the first, at least among big-name physicists, to really, really appreciate the importance of the low entropy of the early universe near the Big Bang. He was saying this in the 70s. When inflation came along, he pointed out that inflation did not help solve this problem, and he's been consistent on that ever since. I didn't hear this interview that he did.
Probably Peter's interview is the same one as Bits Plus Atoms heard. I don't like this way of phrasing it. I think that this is a frequent way of talking about the early universe that is, in my mind, almost intentionally obscurantist, okay? And here's what I mean. The early universe had low entropy. That's just a fact.
Now, it is also a fact that if you take the spectrum of the cosmic microwave background, it looks like a black body, and black bodies are usually associated with high-entropy states. Therefore, you might be temporarily confused, okay? But the confusion is very, very easy to resolve. It's because the black bodies that you're used to measuring have a self-gravity, that is to say, a gravitational force exerted by one part of the object on another part of the object, that is completely negligible, right? If you have, like, an oven that is glowing like a black body, you can ignore the gravity of one part of the oven on the other part of the oven, right? So gravity is just completely ignorable. In the early universe, it's not.
Gravity is super-duper important in the early universe, and gravity is a force of nature. So this idea that the early universe looks like it has high entropy because it looks like a black body is just a bad idea, just a wrong idea. You never should have had that idea, okay?
So people somehow act like it's a big puzzle, like we have this tension here, but the tension is super-duper resolvable. And furthermore, it's absolutely not true that gravity has negative entropy. That's not what is going on at all. That's just false. So I don't know whether Roger misspoke or it was mistranslated in the interview or whatever.
The point is that gravity could, is allowed to, contribute a huge amount of positive entropy. And in the early universe, it just doesn't, because everything is smooth. The gravitational field is more or less the same from place to place.
Whereas a high entropy state would look either like a black hole or a set of black holes or a highly expanded universe, like our future of the universe where everything is very far apart from everything else. All of those would be high entropy. And gravity is just not in any of those configurations, so it's not contributing the entropy that it could contribute.
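For scale, the standard Bekenstein-Hawking formula (textbook material, not quoted in the episode) shows how much entropy gravity is allowed to contribute when it does clump into black holes:

```latex
% Bekenstein-Hawking entropy of a black hole with horizon area A
S_{\mathrm{BH}} = \frac{k_B\, c^3 A}{4 G \hbar}
% For a solar-mass black hole this is ~10^{77}\, k_B, vastly larger than
% the ~10^{58}\, k_B thermal entropy of the Sun itself. The smooth early
% universe sits similarly far below the gravitational maximum.
```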
So I think that the safe thing to say is just that the actual entropy of the early universe is low. That's a true statement. This idea that it's broken up into gravity entropy and other entropy is not anything very well defined. I mean, maybe talking in those terms will help you come up with a new theory of it. That's great.
But it's just very, very hand-wavy and suggestive, not anything rigorously defined at all. Darren Ho says, If the laws of physics govern, why would that necessarily be true? Don't the particles have to interact according to the forces of nature such that there could be such a thing that prevents such a configuration from ever happening? Is that not what entropy tells us cannot happen?
Good. I'm very glad you're asking this question, because you are channeling many physicists in the 1870s. In the 1870s, we had the second law of thermodynamics saying that entropy will increase in closed systems, and we were beginning to have a grasp on statistical mechanics from Maxwell and Boltzmann and others. And in statistical mechanics, the second law is not a law.
It's just a probabilistic statement. It's very, very likely for entropy to increase.
People rejected that, or at least had trouble with it. They thought they had a law in their hands. But, you know, experimentally you never have a law, right? It's the black swan problem: you can never verify that not only has it never happened, but that it never will happen, that entropy spontaneously goes down. All you can do is see a bunch of cases where it goes up, generalize that in your head, and say, oh, I think maybe it always goes up.
And someone else says, no, it only goes up 99.99999999% of the time, so you've never seen it. You have not experimentally distinguished those. You should be open to that possibility if there's a good reason to be. There is a subtlety with the apple example. If you have the right ingredients to make an apple out of randomly distributed atoms, so you have the right
amount of hydrogen and carbon and oxygen and so forth, it's not absolutely necessary that over time it will come to be an apple. It depends on the configuration that it is in. You want there to be the right energy, right? There better be enough energy in there to be an apple.
But there is a principle called ergodicity, which says that in a certain kind of physical system with many moving parts, the system will basically, over time— explore every part of configuration space that it is allowed to explore. By allowed, we mean we're not going to violate energy conservation or things like that.
But other than that, a typical system, not all systems are ergodic, but we think that typical systems are, they will explore every possibility. So it's not, even though we talk about randomness, and we talk about a probability, that's all from the fact that we just don't know exactly the microscopic state of the system.
Even if you put aside questions of randomness and probability, under sufficiently controlled circumstances, you can prove that the system will do everything the system can possibly do given enough time. The time scale is what is called the Poincaré recurrence time, after Henri Poincaré, and it is of order e to the power s, where s is the entropy of the system. So that is a huge time.
You know, notice I don't even have units correct in there because it doesn't matter because e to the s is going to be such a huge number. It doesn't matter whether you measure it in years or microseconds. But eventually it will happen. That number is less than infinity. So we think that the apple will spontaneously come together. Sorry if you have trouble believing that.
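To get a feel for just how huge e to the s is, here is a quick sketch (the entropy values are toy numbers chosen purely for illustration). Since e to the s overflows any floating-point representation for macroscopic entropies, it works with the base-10 logarithm instead:

```python
import math

# Toy illustration: the Poincare recurrence time scales as e^S, where S is
# the entropy in natural units (entropy divided by Boltzmann's constant).
# For macroscopic S, e^S overflows floating point, so track log10 instead.
def log10_recurrence_scale(entropy_natural):
    # log10(e^S) = S / ln(10)
    return entropy_natural / math.log(10)

# Roughly, a gram of ordinary matter has S of order 10^23 in natural units.
for S in (10.0, 1_000.0, 1e23):
    print(f"S = {S:g} -> recurrence time ~ 10^{log10_recurrence_scale(S):.3g}")
```

A number like 10 to the power 4 x 10^22 dwarfs the age of the universe (about 10^10 years) so thoroughly that the choice of units, years or microseconds, genuinely makes no difference.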
Again, very far away from our everyday experience. P. Walder says, you have published extensively on the arrow of time and the associated increase in disorder in the universe. Sarah Walker and Lee Cronin's assembly theory promotes a perspective on complexity accumulation, which seems to challenge the centrality of the second law of thermodynamics.
Do you feel there is merit in assembly theory and how the second law of thermodynamics may or may not be key to the origin of life explanations? So I do, there's two things. So Sarah Walker, former Mindscape guest, and Lee Cronin and other collaborators had this idea called assembly theory, which they put forward as a way of understanding how complex structures come to be in the universe.
Very much thinking in the back of their minds about complex molecules and ultimately life, but in principle, any kind of complex structure. And what they point out is that once you have different pieces, different tiny pieces that can be put together in big pieces in many different ways, the space of combinatoric possibilities becomes very, very large, very, very quickly.
People like Stuart Kaufman have pointed this out a very long time ago. So that's a well-known thing. So you're not going to explore all of the combinatorial possibilities. The human genome has 3 billion base pairs in it, okay? You're not even coming anywhere close in the history of the universe to exploring all the different ways to arrange 3 billion base pairs.
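As a quick back-of-the-envelope check (a sketch I'm adding; only the 3-billion figure comes from the discussion above), the number of possible sequences of 3 billion base pairs, with 4 choices per site, is 4 to the power 3 billion, a number with roughly 1.8 billion digits:

```python
import math

# Each base-pair site has 4 possibilities (A, T, C, G), so a sequence of
# N sites has 4**N possible arrangements. That number is far too big to
# compute directly, so count its decimal digits via log10 instead.
N = 3_000_000_000  # base pairs in the human genome, roughly
digits = N * math.log10(4)
print(f"4^{N} ~ 10^{digits:.3e}")  # about 10^(1.8 billion)
```

No process in the history of the universe comes close to sampling a space that large, which is the point about not exploring all the combinatorial possibilities.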
So instead, you have some way of exploring a bit of that space. And the assembly theory idea is that we focus on the ways that you can build up slightly more complex things from very simple things, and then repeat that to put slightly more complex things together, and you have a history of assembly. So you're not just randomly putting together your base pairs or whatever.
You have a particular trajectory over time that leads to particular places in the space of possible configurations. Now, all this seems perfectly reasonable and interesting and potentially useful. None of it, in my mind, has any conflict with the second law of thermodynamics. So I have mixed feelings about assembly theory.
On the one hand, I think that the approach to thinking about complexity and its accumulation is potentially very, very promising. By all means, let's think about it and take it seriously, and maybe that helps. Because again, there's a lot we don't know about complex systems, and it would be nice to know more, and maybe this is a helpful tool for doing that.
On the other hand, the advocates of this theory seem very happy to give in to the temptation to say this is somehow incompatible with what we know about physics. I have seen zero evidence that it is in any way incompatible with what we know about physics. To me, if anything, it's a perfectly natural consequence of what we know about physics.
So I kind of like where they get to and don't always like the rhetoric that they use along the way. Qubit says, The short answer is I don't see a big difference. I think that both of those would very plausibly be wrong, okay? There's a difference in what they are, and there's more of a difference than people think.
Because when you say living in an artificial but realistic world, that's a big ask. You toss it out (not you, Qubit; one tosses that out): we'll put our simulated consciousness or whatever in an artificial but realistic world. But building such a world is more complicated and tricky than people give it credit for. That's at least as hard as building the artificial consciousness in the first place, okay, which we're not close to doing right now.
Putting it in a robot and having the robot be in the real world is much more straightforward and achievable than doing an entirely artificial world. But granting that, just because I want to make sure that people understand that distinction, I don't think that there's necessarily any obvious moral distinction between these two scenarios.
Now, having said that, I also want to emphasize that, who knows? You know, as a moral constructivist, I think that morality comes from a way of sort of systematizing our preferences and intuitions about how to live good lives in the world. It's not an objectively true thing that we find by experiment or by proving theorems, okay?
So when you get to these kinds of thought experiments that are super far away from anything that we have in our intuition or experience, I'm open to different people disagreeing about where to go, as long as they all agree that we should be cautious and not too wedded to our conclusions, because we're trying to extrapolate
It's like we've always lived within a five-block-wide part of a city, and we're trying to extrapolate what things are like on a different continent or a different planet entirely. Not that we shouldn't try, but we shouldn't get overly wedded to whatever comes up, whatever our imagination comes up with. Yeah.
Sid Huff says, what is your take on the recent dramatic rise in sports betting? I've read perspectives that range from a great way to generate even more interest in sports to another nail in the coffin of America. Do you think that on balance it is a good thing, a bad thing, or something in between? I have to say I think it's bad because at the end of the day, I'm an empiricist.
I look at the data. It is causing a lot of harm. So anyway, for those of you who are not in the U.S. or whatever, who just don't pay attention to sports: when I was your age, you were not allowed to bet on sports except maybe in Nevada, okay? You literally had to go to Nevada to place a bet on a sporting event. And the major sports leagues were very, very, very concerned about
corrupting the games by letting players or coaches or whatever be involved in betting on them. Pete Rose, who was a greatly talented baseball player who recently died, was found to have bet on baseball games while he was a player and a manager, and then he was banned from baseball and prohibited from going into the Hall of Fame.
Since then, in recent years, they have realized that there's a lot of money to be made in sports betting, and so they have allowed it into their games. I say "they," but at least all the major sports in the United States have welcomed betting on their sports. If you watch a broadcast of a game.
They will give you updates on the betting lines during the game, during the broadcast, you know. And with the internet, it's just much easier to place these bets in perfectly legal ways. And the data are coming in and they're saying, yeah, people are ruining their lives. They're going bankrupt. They're not managing their bankrolls well, and so on.
So I'm torn here because I'm a big believer in letting people live their lives as they want to live their lives. I want people to be allowed to bet on sports. I do not think it should be illegal. But there have to be some guardrails if, as a matter of fact, this is ruining people's lives at an unsustainable rate, right? I don't know what those guardrails are going to be.
I think that somehow we have to make it impossible to bet too much or something like that or to lose too much, right? Everyone thinks they're going to win. Most of them are not. That's the way the numbers work, one way or another. So I think that we haven't figured out how to do it right. I hope that we can do it, but I think that we need to be a little bit more responsible about how it's done.
Robert Grenice says, I'm reading Tom Chivers' book, Everything is Predictable. He is a Bayesian apologist and makes the case for its superiority over statistical analyses focused on a p-value. I know that you are also a proponent of Bayes and wonder if it still applies in physics, which has a lot of raw data. So the question is, do you use both approaches?
And how do you decide which is best in a given situation? Yeah, I think it's actually changed. It's interesting. Someone should do a study on how the way that scientists think about statistics has changed in the last few decades. When I was a graduate student, no one talked about Bayes' theorem or Bayesian analysis.
And while I was a postdoc and junior professor, the data sets that were being looked at became increasingly bigger and more sophisticated, and we needed better statistical tools for thinking about them. And people discovered—some people already knew, of course, all along, but the wider communities discovered—
Bayesian analysis, and they became rather annoyingly evangelical about them, you know, browbeating you if you didn't use Bayesian statistics. But I think Bayesian statistics are just correct. I think it's just the right way to do it. You don't always need to do it. So the alternative to being a Bayesian is to be a frequentist, right?
To say, what we're talking about when we talk about probabilities is a summary or a shorthand for an infinite time frequency. You can imagine doing the same thing over and over and over again, and there's going to be a certain number of times it turns out A, a certain number of times it turns out B. The ratios of those give you the probabilities. And Bayesians don't say that at all.
They don't, they're not... forced to think about doing things an infinite number of times. If you say, I think the probability that Donald Trump will win the presidential election is 60%, no one has the nightmare scenario of running that an infinite number of times in their head, okay? It's a credence that you put on things. And so the Bayesians focus their attention on the likelihoods
under different hypotheses, certain experimental outcomes are more likely or less likely. And that's supposed to be objectively computable, as opposed to the priors that are your initial beliefs. And that turns out to be super important, right? If you have two theories, Theory 1 and Theory 2,
and you have data that is more likely to be predicted by Theory 2 than by Theory 1, is that good evidence that Theory 2 is true and you should just disregard Theory 1? Well, no, not if your prior was that Theory 1 was much, much, much more likely, right? And this can be made very quantitative. You can show very, very explicitly
that this can save you from incorrect conclusions in medical studies or something like that, because there are things that are less likely a priori, but if they were true, then you would get certain data. But when you get that data, it's still not enough to overcome the fact that they're just a priori less likely, right? So Bayesian analysis, I think, is just right.
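The two-theory example can be made concrete in a few lines (the priors and likelihoods here are made up for illustration). Even when the data are ten times more likely under Theory 2, a strong prior for Theory 1 keeps Theory 1 ahead:

```python
def bayes_update(prior1, prior2, like1, like2):
    # Bayes' theorem for two exhaustive hypotheses:
    # P(T_i | data) = P(data | T_i) * P(T_i) / P(data),
    # where P(data) = sum over hypotheses of likelihood times prior.
    evidence = like1 * prior1 + like2 * prior2
    return like1 * prior1 / evidence, like2 * prior2 / evidence

# Data ten times more likely under Theory 2, but Theory 1 heavily favored a priori.
p1, p2 = bayes_update(prior1=0.99, prior2=0.01, like1=0.05, like2=0.5)
print(f"P(T1 | data) = {p1:.3f}, P(T2 | data) = {p2:.3f}")
```

With these made-up numbers, the posterior for Theory 1 is still about 0.91, which is the quantitative version of the medical-studies point: a likelihood advantage for an a priori unlikely hypothesis is not automatically enough.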
It rubs people the wrong way a little bit sometimes, because the priors are subjective. If you have enough data, that doesn't matter anymore, and the priors go away. Plus, reasonable people often have priors that are actually pretty close to each other, despite the philosophical disagreements about whether they need to be or not.
Matt Grinder says, I listened to your interview with Philip Goff on panpsychism, and I agree with you that any theory of consciousness cannot contradict the laws of physics. So would the following be a way out for the panpsychist?
Every time a particle changes state by wave function collapse, a calculation must be made for the particle to decide what to do next, and this calculation involves a qualia. Over time, the calculations via qualia would have to agree with the Born rule. This seems to me not to contradict any laws of physics. Is it just an add-on to physics? Yeah, this is something that is absolutely conceivable.
People have conceived it. David Chalmers, former Mindscape guest, and his collaborator Kelvin McQueen wrote a paper that really looked at exactly this possible idea. I would say a few things. Number one, it absolutely is a change in the laws of physics, because the laws of physics as we know them now don't say that. Or I shouldn't say it's a change in the laws of physics.
We don't know the laws of physics. I should say it's a change in what we take the laws of physics right now to be, okay? Because right now we do not say that the probabilities depend on qualia in any way. If you say, yes, they do, you are suggesting that the laws of physics are different than what we think they are. It's hard to make it work.
It's hard to make it work for whatever you're saying, precisely because, long story short, like you say, over time the calculation would have to agree with the Born rule. Well, what does that mean? If the qualia are pushing it at all, you know, you have a bunch of spins that are 1 over square root of 2 spin up and 1 over square root of 2 spin down.
If somehow your qualia are making you get spin up every time, then there's some catch-up procedure later where you get a bunch of extra spin downs. It's hard to make it work that way. Number one. Number two, zero evidence for anything like that in anything that we've ever seen in either physics or neuroscience.
And number three, it would be of zero help in solving the hard problem of consciousness. The hard problem is specifically about experience, not behavior. And this theory is saying that things behave slightly differently than you would have predicted by conventional physics. So what? I mean, great. They're behaving differently.
That doesn't help you explain this thing about consciousness that proponents of the hard problem say cannot possibly be reduced to behavior. So this is why I don't spend a lot of time worrying about this stuff other than answering AMA questions.
Joseph Eli says, assuming many worlds is actually the true fundamental theory of quantum mechanics, how long do you think it will take for it to become the status quo? Do you envision one large discovery that convinces the physics community all at once, or a slow process of competing theories being falsified?
Furthermore, do you think that important discoveries in physics are potentially being hindered by a lack of support for many worlds? Yeah, you know, compatible with what I said earlier about yes-no questions from oracles in physics, there's not going to be one large discovery that convinces the physics community all at once. I don't even know what kind of discovery that would be.
But, you know, there almost never is one large discovery. Usually, when we discover things in physics, they do accumulate, and it's accumulation on both the theory side and the experimental side. There are certainly examples of times when there has been one large discovery. You know, the accelerating universe in 1998 is an example.
But you have to remember the physics community was primed for that, okay? We were already talking about the possibility. Most people didn't think it was true, but we knew it was a possibility that there was dark energy accelerating the universe. We knew what it could be. We knew that it would solve various problems if it was true.
So the time was right for one big discovery to instantly be accepted. Usually it's not like that. Usually you have to go back and forth. a little bit. When Einstein invented general relativity, people didn't believe it right away.
When they did the experimental measurements of the deflection of light by the sun during the total solar eclipse, they had had some time to mull it over, and they were ready to go, oh yeah, okay, that's it. It was not really, you know, a completely unexpected shift. So I don't think anything like that would happen in quantum mechanics either.
And it's not even—I don't even think it's a slow process of competing theories being falsified. I think that very often theories just sort of gradually fade away because they are found to be less useful, less fruitful, less well-defined than other theories, right?
You can always fix up your theory by tweaking it, by adding epicycles or whatever, but eventually it just becomes boring and not very productive. So
I think that it is absolutely possible that many worlds will be accepted by the vast, vast majority of the physics community, but it'll be a gradual process and it will be because many worlds proves to be the best way of thinking about quantum mechanics, both for known features like the apparent collapse of the wave function, the measurement problem, the Born rule, and also potential future insights.
So this is the last part of your question. I 100% think that important discoveries in physics are potentially being hindered by a lack of support for many worlds. People, by choosing not to think hard about the difficulties of quantum mechanics at the foundational level, are leaving money on the table, leaving meat on the bone, however you want to say it.
That's one of the reasons why I do it, because, you know, you want to look where other people aren't looking. You don't want to just look under the lamppost. Krzysztof Randowski says, Roughly speaking, no. I mean, technically I've considered doing things like that, but it seems like not the best use of my time and energy. The podcast is already here. The podcast exists.
There are full transcripts of every podcast. So really making a book out of them would just be like, number one, picking my favorites, which is very hard to do because that means that some people are going to be informed that they are not my favorites. Number two, editing them. Number three, getting a publisher and putting them into a book. Number four, getting the rights to reproduce them.
Like, I didn't have anyone who appears on the podcast sign a form saying that I could use their interviews for whatever I want. So it would be a thing, and I don't want to do that thing. I mean, maybe I would do it if I had infinite times and resources, but I would rather do other things with the time I have. In the meantime, hopefully everyone can find the transcripts.
I think a remarkable number of people don't even know that there are transcripts of all the Mindscape podcasts on the webpage, but they're there. The original reason for starting the Patreon was so that I could pay to get transcripts made. And that still works. That is still going on. Sam Hartzog says, why aren't most multicellular organisms warm-blooded, aka endotherms?
Intracellular processes should be taking in energy and doing work to maintain a low entropy internal state. Shouldn't that kind of thing result in waste heat more often than not, making endothermy the most natural way to make a living? Well, look, you're not asking the right person this question, okay? So I'm certainly going to advocate that you not take my answer at a high confidence level.
Let's put it that way. I can say some things that I think are true as food for thought kinds of things, but you should ask an expert about this. Yes, there is waste heat generated in any animal, cold-blooded or warm-blooded. That is a very generic feature of thermodynamics. You should expect that to happen. But that's different than what we think of as being warm-blooded or cold-blooded.
Warm-blooded versus cold-blooded is more about regulation of temperature, right? It's not just that warm-blooded animals' blood is warm. It's that their blood is kept at a relatively high temperature. It's a homeostatic kind of thing that happens for certain reasons inside the organism.
So not knowing a lot about biology, I'm guessing that there are trade-offs in resource allocation, right? It costs energy to maintain your body temperature at a certain temperature. Is it worth it? You get some benefits from doing that. Some heat is generated, but clearly, I mean, look, cold-blooded organisms are not going out of their way to refrigerate themselves. That's not what they're doing.
They're just not going out of their way to heat themselves. Cold-blooded organisms respond more dramatically to changes in the environmental temperature than warm-blooded organisms do. So maybe for some reason it's just not worth it for them to put those resources into that particular homeostatic maintenance. The other thing to keep in mind is that organisms are not intelligently designed, right?
There are accidents of... history of evolution that lock you into certain choices. And so when you find that certain animals do things a certain way, it may or may not be the best way to do things. It's a satisficing question. You do things well enough to get along and survive. You don't necessarily do things in the completely optimal way. Rob Gebeler says, No. Short answer is no.
I know that people have talked about this. Roger Penrose talks about adjacent ideas to this. But to me, it's like clearly sliding around the meaning of some of the terms in Gödel's theorem. Gödel's theorem says, and again, even this is a simplification, but very roughly it says, I have a system with some axioms and some way of turning those axioms into theorems.
And there are going to be some propositions that are true but not provable if the system is consistent, okay? So I cannot prove them, but I can't disprove them either, right? And okay, that's fine. That's not what the brain is. The brain is not an axiomatic system. I don't know.
Penrose sometimes acts as if you should—he doesn't think it's an axiomatic system either, I think, but he acts as if he thinks that you should think it is and then goes to an effort to fix it somehow because he thinks that human creativity is something that is incompatible with this kind of reasoning because of Gödel's theorem. I don't think that at all.
I just don't think that's a good model for what the brain is doing. The brain is not trying to prove theorems. The brain is trying to haphazardly and heuristically model the world around it, right? So I've never—like, people often bring up the possibility that there are some truths— that human minds are just never going to be able to reach. I see zero evidence for this.
And if there are any such truths, we're not close to reaching them. I see no evidence that we're bumping into any barriers. Some things are hard. Some questions are hard. But I am always much more impressed by how far we've gotten in understanding things than depressed about the possibility there are some places we won't get to.
Alex says, could you explain how measurement of one component of spin, e.g. the Z component, affects the results of measuring some other component of the spin, like the X component? Sure. This is, I mean, I explained this in some detail, both in Something Deeply Hidden and in Quanta and Fields. So it is out there, but it is a fundamental fact I'm happy to talk about again.
And it's very analogous to the uncertainty principle that we mentioned before. You know, the uncertainty principle from Heisenberg's original formulation is about position and momentum, x and p. And one way of coming to a derivation of the uncertainty principle is to realize that in quantum mechanics, unlike in classical mechanics, position and momentum are not independent variables.
Indeed, they are not features of the state at all. They are observables. They can give you different answers. But the point is that the wave function in quantum mechanics is a function of just position. It's not separately a function of position and momentum. There's a separate thing called the wave function in momentum space, but it is derivable from the wave function in position space.
The wave function in momentum space contains exactly the same information as the wave function in position space. It's just that the information is encoded differently in the form of the wave function. It's exactly the same thing for the spin, to get back to your question. The state of a quantum spin is an element of a two-dimensional Hilbert space.
And what that means is there are two basis vectors, and the quantum state is a superposition of these two basis vectors, a component in one direction plus a component in the other direction. And I have freedom to change the basis, right? And in any vector space, I have a freedom to change around my basis vectors. They're complex vectors, not real valued vectors.
So that's the thing you have to be careful about, but we're putting that aside for right now. And the short answer is, one way of choosing the basis vectors for your spin is spin up in the z-direction and spin down in the z-direction. So in Hilbert space, spin up and spin down are orthogonal to each other. They're not pointing opposite, they're pointing perpendicular, okay?
Because those are the two states to be in, and you're in one or the other. And so that is the entirety of the Hilbert space: alpha times spin up plus beta times spin down. So where does the spin in the x-direction come from? If you have the x-direction, maybe you can measure spin left or spin right, right? How does that fit in? You can derive it from spin up and spin down.
It's just a change of basis. Indeed, it's just a rotation of the basis by 45 degrees. So spin plus x, I'm not going to get the signs right here, so forgive me, but roughly speaking, spin plus x is 1 over the square root of 2, spin up plus spin down in the z direction. Spin minus x is 1 over square root of 2, spin up minus spin down in the z direction. So they're just related to each other.
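These relations are easy to check numerically. A minimal sketch in plain Python (real amplitudes only; the overall sign convention is the standard one, not something fixed by the discussion above):

```python
import math

# Represent spin states as (amplitude on |up_z>, amplitude on |down_z>).
s = 1 / math.sqrt(2)
up_z    = (1.0, 0.0)
plus_x  = (s,  s)   # |+x> = (|up_z> + |down_z>) / sqrt(2)
minus_x = (s, -s)   # |-x> = (|up_z> - |down_z>) / sqrt(2)

def born_probability(state, outcome):
    # Born rule with real amplitudes: probability = |<outcome|state>|^2
    amplitude = sum(a * b for a, b in zip(outcome, state))
    return amplitude ** 2

# A particle prepared spin-up along z is 50/50 when measured along x:
print(born_probability(up_z, plus_x), born_probability(up_z, minus_x))
```

Both probabilities come out to one half (up to floating-point rounding), which is exactly the maximal uncertainty described next.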
And what it means is if you measure a spin in the z-direction and you get up, you instantly know its wave function in the x-direction, and you instantly know that it's maximally uncertain.
You instantly know that it is 1 over the square root of 2 plus x plus 1 over the square root of 2 minus x. So 50-50 chance, if you were to measure in the x-direction, to get either spin left or spin right. Shane Blazier says, I just watched Meta, aka what used to be called Facebook, reveal their prototype augmented reality glasses, and they look really interesting.
It seems clear to me that this type of device will become mainstream by allowing us to be remotely present with loved ones, interact with virtual screens without looking at a phone, and more naturally interact with AI. I'm curious if you have any thoughts on this new product category, specifically how it may improve science communication with things like interactable 3D content.
You know, my track record for predicting the adoption of technology is not especially good, so I don't think you should give especially high credence to my opinions here. I'm a little skeptical on the glasses, okay? With the glasses, you know, you have to take into account human beings. Many of them, a majority of them, I don't know, don't like wearing glasses.
I used to wear glasses when I was a kid, and then I got contact lenses, and eventually I got LASIK, because it's just more convenient not to have to put glasses on. And I spend a lot of my day looking at a screen. Here I am right now reading questions off of my laptop, okay? But I want to separate that from my everyday life. I don't want to be looking at screens all the time.
I absolutely see the benefits that could happen from wearing these glasses and having augmented vision, with AI or without AI. And they absolutely have attractive-looking use cases. But if I had to guess, I would say that 100 years from now, you're not going to see greater than 50% of Americans wearing augmented reality glasses all the time.
I do still have a soft spot for virtual reality more generally. You don't even need the whole headset. You can just do it on your computer screen, right? Video games, in a very real sense, are that. Well, immersive video games are that. And I can very easily imagine... We get much more realistic things to replace Zoom meetings or things like that, where you have some virtual reality kind of thing.
Jennifer used to run a lecture series in Second Life. She would interview people. I think this was affiliated with someone. Who was it affiliated with? I'm forgetting now. But she would interview people in Second Life, which was an early VR platform. I think it's still out there. People would have their avatars and everything, and they'd sit down in a fake room,
in a virtual auditorium and people could come in and listen and so forth. And it was a lot of fun. It didn't quite catch on because it was clunky, but I can imagine something like that happening. But that's a step away from having it be glued to your face all the time or most of the time. That's my personal prediction.
Beau Parizeau says, how would you explain why neutrons, the one massive, electrically non-interacting particle we know about for sure, are not a candidate for dark matter? Plenty of reasons. So this is actually a very simple question to answer.
I'm happy to answer it, but I wanted to answer it even though many other people could do so, because it's a good excuse for driving home the kind of constraints cosmologists have to deal with when it comes to inventing candidates for dark matter. It's not an anything-goes kind of situation. The very simple reason why neutrons can't be dark matter is because they're not stable.
They decay away in a matter of minutes. Dark matter has been there for 14 billion years, so neutrons do not qualify. There you go. That's the simple answer. But the thing is, that's not the only answer. That's not the only reason why it doesn't work. Another reason is that neutrons are strongly interacting. They affect each other very much.
That means that their dynamics in a galaxy or something like that would be different than that of dark matter particles. Even though they're electrically neutral, they can bump into atoms. They can even bump into photons because they're made of charged quarks inside. So they're not completely transparent like we want a good dark matter particle to be.
They would fit into what is called strongly interacting dark matter, which was—it had a moment for a while where people were thinking about it, but I think overall it doesn't fit the data quite as well. And finally, most importantly, when you do find a stable, electrically neutral, non-interacting particle out there, you have to get the right abundance of it.
We think that there's about five times as much dark matter as ordinary matter in the universe by energy budget. So you need a theory for the production of those particles that gets you the right abundance. And that's often the hardest thing to do. I mean, neutrons, if there was an equal number of neutrons and antineutrons, they would have just decayed away a long time ago.
There would not be nearly enough of them to be the dark matter. So anyway, lots of reasons, lots of things that cosmologists know about the universe make neutrons not a good candidate and reemphasize how difficult it is to find a good candidate.
Wes Payne says, in your excellent mini-lecture on tensors at the end of the July 2024 AMA, you mentioned the polarizing effect that Gravitation by Misner, Thorne, and Wheeler has on students. This is a textbook, a famous book by Misner, Thorne, and Wheeler. I'm just wondering, what do you think of it? Did you fall in love forever, toss the book out and never read it again, or something else?
I would say something else. You know, I first came across Misner, Thorne, and Wheeler as an undergrad, when I didn't know a lot of GR but wanted to. And so I did take a look at it. It didn't teach me GR, let's put it that way. It did ruin at least one book bag that I had, one backpack. It was so heavy that my cheapo backpack fell apart carrying Misner, Thorne, and Wheeler around campus.
But, you know, it is a style, and it's an intimidating style. So I think that it could possibly be used as a textbook for teaching with a professor who told you exactly which parts of it to read. You know, it's 1,000-plus pages. Just reading from the start, you would be tearing your hair out before you ever got to Einstein's equation, okay?
Stylistically, in addition to being long, you know, it has a very noticeable style that warms the hearts of some people and throws others off. And I'm in between. You know, I get the style. I appreciate the style, just like I get Steven Weinberg's style and I get Bob Wald's style. These are all different styles. They all serve a purpose.
And, you know, there's a reason why I wrote a general relativity textbook myself. It's because I don't think that any of these styles, for me, would have been a good way to learn general relativity for the first time. They're all, you know, look, Misner and Thorne and Wheeler and Weinberg and Wald all know better about general relativity than I ever will.
But they all have such an idiosyncratic way of thinking about it that it doesn't sort of serve the common purpose. They're better for research reference books than for textbooks. So I actually decided to write a book that was –
purely devoted to being a pedagogical treatise, to teach people general relativity in a way that also didn't make any quirky assumptions about what is too hard or what is too easy. I'm like, I'm going to tell you what you need to know, not too much about what you don't need to know. I'm going to tell it to you straight. I'm not going to hide the hard parts, but I will try to make it clear.
That was the philosophy behind my book, and I don't think that any of these other books did that very well, so that's why I did it. Anyway, that was not your question, but I hope that answers the spirit of your question. Arnie says, I don't know if the odds are 1 billion to 1 or 1 trillion to 1, but why not utilize cryonics? All you've got to lose is money. Am I being ridiculous?
So I presume that you mean why not have your body frozen after you're dead to maybe be revitalized sometime in the future. So one thing is I think the odds are much lower than 1 trillion to 1. I think that pretty clearly with the current state of the art, after you're dead, you're dead. And cryonics does not preserve you in any sensible way. You're losing money. Yes, you are losing money.
You're dead, so you don't care. But maybe that money could be put to other uses other than a scam company that is pretending to keep your body frozen for a long time. So I don't think you're being ridiculous. You know, I get the calculation. If there's some chance that they will be able to revive you down the road, that would be an awfully good reward.
But I don't think the chance is non-negligible enough to make it worth considering. Edward A. Morris says, Would you be skeptical of this conclusion, or would you take it as an unsurprising confirmation of the Everettian view that since individual mutations can be caused by quantum events with non-zero weights in the wave function, the conjunction was guaranteed to happen in some branch?
No, I would be skeptical of that conclusion, because there's no guarantee that the exact way that human beings are needs to exist in any branch of the Everettian wave function, right? Or the selection effect, if there's some kind of anthropic selection that we're only going to find ourselves in branches where human life can exist.
You need to be exactly clear about what you mean by human life, etc. I'm not very firm on this, so don't take me completely solidly here. As I said before, I think I should have ordered these questions better. But as I think we're going to get to later in the AMA, I don't think we understand how to use the anthropic principle exactly well.
So in cases like this, I don't think that my answers are very firm. But I think that as a methodological matter, appealing to unlikely branches of the wave function should literally be your last resort. You know, the Bayesian prior on that one is very, very small in my mind. There's almost always going to be a larger prior for there's some reason for this, we just haven't figured it out yet.
Tarun says, I've thought of the principle of conservation of information as meaning that Laplace's demon would be able to perfectly retrodict the past based on the current state of every particle, even though in practice that knowledge is unobtainable. However, in a previous AMA, you said that even in principle, that knowledge is unobtainable due to quantum uncertainty.
In what sense then is information conserved? Well, I could have grouped this with a previous question. When we talk about Laplace's demon, we often do exactly what I did before, which is to say, let's simplify our lives and imagine the world is classical, okay? That's the world in which Laplace actually invented Laplace's demon, and in that world, Laplace's demon is very simple to explain.
Quantum mechanics comes along, and it has the idea of measurement in it, which classical mechanics didn't, and the measurements are unpredictable. So if you have quantum mechanics complete with measurements, and you say that those measurements are truly unpredictable, let's just say that for the moment, then Laplace's demon doesn't exist.
Then there is no quantum mechanical Laplace's demon, full stop. That's it. People will nevertheless say information is conserved, but secretly they mean as long as you're not doing a measurement, okay? There's yet another footnote saying that in something like pilot wave theories or many worlds, there's a sense in which information is still conserved, but you don't have access to it.
But okay, you still don't have access to it, so I don't see what good that is doing you. More importantly, for the purposes of understanding the language used by modern physicists who keep saying, who keep banging on about the conservation of information, they are intentionally excluding the collapse of the wave function when they talk about that.
So when you include that, information is just not conserved. Redmond says, while I believe human activity is warming the planet, the notion that governments can make the climate great again strikes me as laughably hubristic, with a backfire of some sort as likely as success. After decades of talk, the temperature is still rising.
Would not limited funds be better spent on adaptation rather than prevention? No. Limited funds would not be better spent on adaptation rather than prevention. There's a whole bunch of things going on here. Number one, like you say, after decades of talk, the temperature is still rising. That's because talk does not lower the temperature.
Action lowers the temperature, and as a planet, we have not taken the right actions to do it. The amazingly good thing about the climate is, roughly speaking, there's a simple thing happening with a simple solution. Of course, there are also very complicated things happening, right?
The atmosphere is a complex system, and you can sort of drive yourself batty getting into the details of exactly what's going on. But interestingly, amazingly, there is kind of a robust, simple underlying thing. We are putting greenhouse gases into the atmosphere. They are heating up the globe, and the temperature is going up. That's it, right? Everything else is downstream of that.
There's many other things going on. There's melting going on. There's changes in the frequencies of storms and whatever. Patterns of wind are shifting. But roughly speaking, there's one cause for all of this. And therefore, we can fix it. We can just stop putting those greenhouse gases into the atmosphere, and we're choosing not to do it.
It is absolutely the simplest thing, most straightforward thing, and we're just choosing not to do it. We're getting a little bit better, right? We are getting a little bit better. We're shifting to less harmful energy sources and so forth. So maybe it will be good. We've had several conversations on the podcast about the optimistic side of climate change.
And so that's absolutely something that we can do. Would limited funds be better spent on adaptation? No, absolutely not. For one thing, those funds would be enormously larger than the funds that we just, I mean, it's actually not that difficult to stop spewing greenhouse gases into the environment compared to picking up and moving, what, a billion people?
to get away from areas that are going to be devastatingly hurt by climate change. You know, if you're in the United States, you can sort of get along by saying, yeah, how bad would it be? It's already bad in regions like India and more generally in the global south, where they're both closer to the warm parts of the earth and less well-equipped to deal with these things.
And so the suffering is already beginning. Indeed, as I'm recording this here in the United States, we just had a hurricane that did tremendous damage to Asheville, North Carolina, among other places. And, you know, as hopefully everyone knows, you cannot do a one-to-one map between this particular hurricane and global climate change.
But you can do a map between the tendency to have more hurricanes and more severe ones and global climate change. And the reason why I'm mentioning this one is because Asheville, North Carolina, was literally used as an example of a place that will not be harmed by climate change before the storm hit, OK?
You can't say ahead of time that, you know, here's where to go if you don't want to be affected by climate change. You just don't know. It's just so much easier to fix the problem than to try to let the problem get worse and worse and worse and hope that you can avoid its worst consequences. I think that's a very easy choice.
Gary Miller says, if we find signs of technologically advanced alien life in the next 30 years, what signs do you think we would most likely see? Would they be light signals, spacecraft, artifacts, whatever? I think that we probably won't, is my bet. But if we did, I'm still a fan of the artifact way of doing things. The monolith hypothesis for you 2001 fans out there.
And the reason why is simply, to use the technical term, integration time. So if you send out a spacecraft to visit – if you are the aliens, OK, and you are exploring the galaxy and you send out a spacecraft to like visit another star system and then come back and tell people what you've learned, you're only going to spend a short period of time there, right?
If you send a radio signal, it's literally moving through the other star system at the speed of light, right? Whereas if you send an artifact, if you send a machine that will just sit and park itself, it can wait for potentially millions or billions of years for life to come to existence and then become more technologically advanced in that system.
So if you're a smart alien civilization, the smart way to get to know other civilizations, extraterrestrials from your point of view, in the universe is to plant little listening stations or maybe speaking stations all throughout the galaxy. So even though I don't think it's likely to happen, I think that that's the kind of way it would be most likely to happen.
Franketh Rag Kernow says, I recently stumbled upon a video of Richard Gott, who is showing a glass model of a branching inflationary multiverse. One of the branches looped back to form the main stem. Richard was explaining that a closed time loop in one of the branches could mean that the multiverse caused itself, thus avoiding the singularity of the Big Bang. How does this work?
I was under the impression that branches cannot interact. Well, yes. So two things. Number one, the branches that Richard Gott is talking about are not quantum multiverse branches. They're not Everettian branches. It's just sort of a different part of spacetime. You know, Gott is a very clever guy.
His most influential work has been in relatively down-to-earth studies of large-scale structure and things like that. But he's a creative person who has some wacky ideas out there. And this is one of his wacky ideas. It's one that never really caught on. He was focused on the idea that conventional cosmology has a singularity at the beginning. Can we get rid of it somehow?
And he has this idea of time loops, closed timelike curves at the beginning of time. It doesn't really fit in with what we know about cosmology and gravity and things like that, so not a lot of people jumped on that bandwagon. And furthermore, you know, the existence of the Big Bang singularity is probably not true.
You know, singularity is a feature of classical general relativity, and classical general relativity doesn't apply in those circumstances. So we don't know what happened at the beginning of the universe, which is why we're welcome to think of different possibilities.
I think lots of different possibilities are on the table, but I'm just not that focused on smoothing things out but still talking classically. I think that quantum mechanics is going to be very, very important in understanding what happened at what we think of as the Big Bang.
Murray Cantor says, you were quoted in a recent special issue of Quanta Magazine on treating spacetime as a continuous approximation to a deeper underlying structure. You were quoted as saying spacetime emerged from this behavior of the underlying system. Please expand on this and share your thoughts on what might be the structure of this deeper reality. Right.
If you want more details on this than I'm about to give you right now, there was an early solo episode from years ago, I don't know, five years ago, on finding gravity within quantum mechanics. I want to be clear, I don't read the articles quoting me very much, right? So I don't know exactly what I said in the issue of Quanta, which is very worth looking at.
I saw other parts of it that are very well done. So anyone out there, I recommend they check out Quanta's special interactive feature on emergent spacetime. But it's not – I would not say that I'm imagining spacetime to be a continuous approximation to a deeper underlying structure.
I mean, that's not strictly speaking false, but it's not the way that I would say it because it gives the impression of like a discrete kind of lattice, you know, imagining that space-time is made out of little blocks of a certain fixed size glued together in some way, or there's a certain minimum distance or something like that, none of which is what I have in mind.
I'm thinking of quantum mechanics, okay? So Hilbert space in quantum mechanics is smooth. It is not discrete. It could be finite dimensional or infinite dimensional. And I think it's very interesting to think of it as finite dimensional, at least for the part of the universe that we observe. And that's what I've been thinking about.
But it is not the same as having like a little lattice underlying space-time itself. So I want to clarify that one thing. So what I'm imagining, though, is that there is some quantum wave function, some quantum state with some Hamiltonian, and there is an emergent description of that system that looks like spacetime.
So emergence is a story of finding a coarse-grained, higher-level way of talking about the underlying theory where there's sort of variables that have an independent—not an independent—variables that can be defined from the underlying fundamental variables, which you don't need to know all the microscopic information for, but have a sort of existence of their own.
They can propagate, they can evolve, etc., in ways that are self-contained. The vocabulary doesn't exist, which is why I keep stumbling here. You don't want to say independent or autonomous because the higher level emergent variables are defined by the lower level variables. But you don't need to know about the lower level variables to understand what the emergent theory does. That's the idea.
So the question is, can you start with that underlying quantum, purely quantum description, and extract the classical spacetime from it? So we've written a couple papers that, you know, point in directions that try to do that. I would not say it's anywhere near far along. There's all sorts of questions we don't know the answer to. The whole thing might fail any moment.
But I do think it's a very promising way forward, and I'm hoping that more people start thinking about it. Gavin McQuillan says, You know what? No. I think all the conventional advice works. But, of course, you have to adapt the conventional advice to the context that you're in. So when you say—
your path into higher education, I'm not sure whether you mean as a student or like because you want to become a faculty member in higher education. Those are two very different things. But in either, so let's go for the commonalities here.
You know, I think that the single mistake people make when they're in college or getting their training to be professors someday is doing what is asked of them, and that's it. Like, you know, maybe having fun, going to parties or being on an extracurricular activity or whatever. But academically, they're doing what is asked of them rather than taking the initiative.
I just always advise to my students, you know, take more courses than you need to. Read more about the subject matter than is required in the course. Learn about the material in ways other than just what the course is doing for you, whether it's online or whatever. Take the initiative. Try to learn it because you want to learn it, not just because the course is forcing you to do so.
Wander outside your chosen area. Go to seminars or colloquia in other departments other than your own. Expose yourself to a very wide variety of possibilities, and then stand up and learn about them intentionally, not just because you're required to do so.
Whenever we, you know, it's the nature of the linear passage of time, but we are forced to make deep decisions about what to do for our lives at a moment when we are far too young to know much about the space of possibilities about what to do. So learning about what that space of possibilities is and moving into it with purpose and intentionality is the best possible thing you can do.
And I don't think that changes in any way because of new technologies or new systems of education being introduced. Brendan Barry says, I really enjoyed your conversation with Cari Cesarotti. However, there was one statement that I'm questioning. Cari stated if you were to build a 10 TeV muon collider, which sounds less than the LHC because that's 14, but...
Protons are composite, so a 10 TeV muon collider would be comparable to the physics for the average collision you can get out of something like a 70 or 80 TeV, if not more, proton-proton machine. So that 100 TeV number that you might hear thrown around by China and CERN would be comparable to a 14 TeV muon collider.
I understand that with a proton-proton collider, you don't get the full energy of the protons in a collision event. However, won't there be some collision events where two colliding partons possess significant portions of the total proton's momentum?
In other words, while the average hard collision energy for a 100 TeV proton collider may be 14 TeV, won't some events be closer to the 100 TeV energy? This is a great question. This is clearly a physics-informed question. I love it. So just so everyone is on the same page here, protons are composite particles, okay?
They have not only the three quarks that you know and love inside a proton, two up quarks and a down quark. But there's a whole bunch of virtual quarks, quark-anti-quark pairs, popping in and out of existence. And there's a whole bunch of gluons, virtual gluons, popping in and out of existence.
So when you smash two protons together, the things that actually collide and produce a spray of new particles are not the whole big floppy bag of protons. They are the individual pieces inside, which Richard Feynman called partons. Murray Gell-Mann was very mad at Richard Feynman for calling them that. He thought they should have just called them quarks.
But the gluons are also partons, so Feynman did know what he was doing a little bit. So the things that collide in a proton-proton collision have less energy than the proton as a whole, because that energy is spread out over all these partons. So even though you're colliding at 14 TeV at the Large Hadron Collider, it's not really 14 TeV worth of energy in each collision; that energy is spread out.
So the question from Brendan is, well, but if the things are moving around inside the proton, some are going to be moving coincidentally toward each other from one proton and the other proton, some will be moving away. Won't you get some higher energy collisions? And the short answer is no. You have to do this calculation more carefully than I'm about to intuitively do it.
But the point is that inside the proton, the kind of typical average effective velocity of a parton is not that big, right? I mean, the whole proton has a mass of about 1 GeV, which is one thousandth of a TeV in the units we're using here. And we're talking about 10 TeV and 100 TeV collisions.
So the typical momentum or energy of one of those partons is very, very tiny compared to the overall collision energy that you're getting. And what that means is that there will be fluctuations around the average energy, but they're going to be very tiny. They're not going to be relevant. You're not going to go from a typical, you know, 1 TeV worth of energy in a collision up to 100 TeV.
because of this. Just not going to happen. Or let's say it could happen, but the probability is really, really tiny. The fraction of events with that energy is going to be very, very tiny.
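The scale argument can be illustrated with a toy simulation. The effective parton-parton collision energy is sqrt(x1 x2) times the total proton-proton energy, where x1 and x2 are the momentum fractions carried by the two colliding partons; here I use a made-up steeply falling distribution for x, not real parton distribution functions, just to show how rare high-energy collisions are:

```python
import math
import random

# Toy model (NOT real parton distribution functions): draw parton momentum
# fractions x from a distribution proportional to (1 - x)**5, mimicking the
# fact that a typical parton carries only a small share of the proton's
# momentum. Effective collision energy = sqrt(x1 * x2) * (total energy).
random.seed(0)
sqrt_s = 100.0  # total proton-proton collision energy in TeV

def sample_x():
    # rejection sampling from p(x) proportional to (1 - x)**5 on (0, 1)
    while True:
        x = random.random()
        if random.random() < (1.0 - x) ** 5:
            return x

energies = [math.sqrt(sample_x() * sample_x()) * sqrt_s for _ in range(100_000)]
median = sorted(energies)[len(energies) // 2]
frac_above_50 = sum(e > 50.0 for e in energies) / len(energies)
print(f"median effective collision energy: {median:.1f} TeV")
print(f"fraction of collisions above 50 TeV: {frac_above_50:.5f}")
```

The median effective energy comes out at a small fraction of the nominal 100 TeV, and collisions anywhere near the full energy are vanishingly rare, which is the point being made above.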
And then it's not going to help that you have a lot of energy in those very tiny fraction of collisions, because the kinds of discoveries that are made at a proton-proton collider are not like, oh, here's an event that must be a new particle. You make discoveries by having many, many, many events, or at least enough of them that you can see them above the background noise, right?
The Higgs boson, you can see the plots. It's a bump around a background. So if you add one or two extra events out there near the tail of the distribution, it's not a statistically significant thing. So number one, you're not going to get super high energies. Number two, when you do get high energies, there are not going to be that many of them and you can't do much with them.
Having muons, which are elementary particles and you know exactly what their energies are, is a much more careful way of knowing that you're seeing truly high energy things. Floris Queek says, can you explain how to think of the geometry of the universe before electroweak symmetry breaking? As nothing had any mass, everything was moving at the speed of light, right?
So no rest frames, no proper time. How do I wrap my head around this concept? I think probably the first step to wrapping your head around this concept is to distinguish between the geometry of spacetime and what stuff does within spacetime. So as I was literally just teaching my class the other day, the speed of light is not special because of light.
The speed of light is special because it's the speed limit in the universe. It's the thing that remains invariant in special relativity or general relativity. Every observer measures the speed of light to be the same thing. And it provides structure on spacetime by distinguishing present, future, past, and inaccessible, right?
If you're too far away from one point in spacetime to get there other than moving faster than the speed of light, you can't get there. That's space-like separated. That's inaccessible. The question is, given that there is a speed limit in the universe, does anything move at that speed limit? And the answer is yes.
And one way of thinking about what moves at that speed limit is massless particles or certain kinds of massless waves, if you want to think of it that way. And those include electromagnetic waves and gravitational waves. They move at the maximum speed limit, which we call the speed of light, okay?
Things that are massive, like electrons, protons, et cetera, move slower than the speed of light. So they move slower than the speed limit. But even if there was nothing around, even if there were no particles in the universe, there would still be that speed limit.
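The light-cone classification described above (present, future, past, and inaccessible) can be sketched in a few lines, in units where c = 1:

```python
# Sketch of the light-cone structure: classify the separation between two
# events from the spacetime interval, in units where c = 1, using metric
# signature (-, +) for the time and space parts.
def separation(dt, dx):
    s2 = -dt**2 + dx**2  # spacetime interval squared
    if s2 < 0:
        return "timelike"    # reachable moving slower than light
    if s2 > 0:
        return "spacelike"   # inaccessible without exceeding light speed
    return "lightlike"       # reachable only at exactly the speed of light

print(separation(2.0, 1.0))  # timelike: in the causal future or past
print(separation(1.0, 2.0))  # spacelike: inaccessible
print(separation(1.0, 1.0))  # lightlike
```

Note that this classification uses only the interval, not any particle's motion, which is the sense in which the speed limit is a property of spacetime itself.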
Of course, there's always general relativity, so there's always curved spacetime and propagating gravitational waves, so that would make it a physically real thing, this speed of light limit that we would just call the speed of gravity if there were no light around. So before the electroweak symmetry breaking, the particles that we know about were all massless.
We don't – neutrinos are a special case. So forget about the neutrinos. They may or may not be. We don't understand where their masses come from. But the other particles that we know about at that scale were indeed moving close to the speed of light. So it's not true that there's no rest frames.
A rest frame can be defined by a physicist defining a rest frame, whether or not there's any stuff that is at rest with respect to that rest frame. Furthermore, if I have a box full of photons, okay, I literally have a box, a bunch of light particles bumping around inside the box, all those particles are moving at the speed of light. But there is an average amount of energy, right?
There is still a mass for the whole box. Imagine that the walls of the box themselves have no energy; I'm imagining this as a thought experiment. There's an energy that I get from the combination of all the energy of the photons inside the box, and there's a rest frame for that box, okay? There's a rest frame.
There's a frame in which the average amount of photons going to the left and the average amount of photons going to the right are equal to each other. In other frames, they wouldn't be the same. So there was a fluid, a plasma, whatever you want to call it, in the early universe that defined a rest frame with respect to that. I'm trying to be comprehensive here.
So I'm saying there needn't have been that. There still would be the concept of rest frames. But in fact, there was a fluid of particles, each individual particle moving at the speed of light, but the fluid was not moving at the speed of light. Just like the speed of the air around you is not equal to the speed of the individual air molecules. It's an average over all of them.
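The box-of-photons point can be made quantitative. In units with c = 1, a system's invariant mass is sqrt(E_total^2 - |p_total|^2), which is nonzero for a collection of massless photons whose momenta don't all point the same way. A minimal sketch:

```python
import math

# A box of massless photons still has a rest frame and an invariant mass.
# Each photon individually is massless (E = |p|), but the system as a whole
# has invariant mass m = sqrt(E_total**2 - |p_total|**2), in units with c = 1.
def invariant_mass(photons):
    """photons: list of (E, px, py, pz) four-momenta of massless particles."""
    E = sum(p[0] for p in photons)
    px = sum(p[1] for p in photons)
    py = sum(p[2] for p in photons)
    pz = sum(p[3] for p in photons)
    return math.sqrt(E**2 - px**2 - py**2 - pz**2)

left = (1.0, -1.0, 0.0, 0.0)    # photon moving in -x direction
right = (1.0, +1.0, 0.0, 0.0)   # photon moving in +x direction

print(invariant_mass([left]))          # 0.0: a single photon is massless
print(invariant_mass([left, right]))   # 2.0: the pair has mass 2E
```

The frame in which the total momentum vanishes is exactly the rest frame being described: the photons are all moving at the speed of light, but the system is not.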
So I don't think it should be that hard to wrap your head around that particular concept. You've just got to get used to the idea of spacetime having a structure independent of what happens to be in that spacetime in any one moment. David Maxwell says, mattering in proportion to their wave function squared.
As branches get thinner, their significance lessens when considering your effects on future use. It feels wrong to conclude that I'm less important than any previous me, but I can't pinpoint why. How do we think about a system of thinning worlds and not infer something negative about the existential meaningfulness of the future? I think there's two things going on here. I like the question.
It's sort of a clever taking seriously of the idea of the significance of individual worlds lessening over time. But there are two things to say. One is that if you just take literally the idea that as the worlds branch, each individual branch matters less, you also have to take seriously the idea that there are more branches.
The total amount of meaningfulness didn't decrease when the worlds branched any more than the amount of cake decreases when you cut the cake into slices, right? It's just divided up slightly differently. So what matters for the future is exactly the same, whether you're in one world or in many worlds.
The other is that from the perspective of any one person in any one of those worlds, they have thinned out in the sense that the amplitude associated with their branch of the wave function has gone down, but the whole world has thinned out. It's very analogous to thinking about conservation of energy.
Why does it seem like the whole world has the same amount of energy even if it's branched into multiple copies and its overall contribution to the energy of the wave function of the universe is much, much less? Well, you have branched and everything else is branched and you're all multiplied by this small number.
So the amount that you matter to the rest of the stuff in the world around you is just as big as it ever was because relatively it hasn't changed that much. So I see no reason to be existentially worried about the branching of the wave function.
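The cake-slicing point can be sketched numerically: with made-up Born weights, repeated branching multiplies the number of branches while the total weight, the sum of amplitudes squared, stays fixed.

```python
import math

# Sketch with made-up Born weights: each "measurement" splits every branch
# in two, multiplying its amplitude by sqrt(1/3) or sqrt(2/3). Individual
# branches thin out, but the total weight is conserved.
branches = [1.0]  # start from a single normalized branch
for _ in range(3):  # three successive binary measurements
    branches = [amp * c for amp in branches
                for c in (math.sqrt(1 / 3), math.sqrt(2 / 3))]

total_weight = sum(amp ** 2 for amp in branches)
print(len(branches))   # 8 branches after three splittings
print(total_weight)    # still approximately 1.0
```

Each branch's weight has shrunk, but so has everything else's in that branch, which is why nothing within any one world changes its relative importance.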
Massimo Tori says, could you clarify why Calabi-Yau manifolds are the preferred choice in superstring theory for describing the six compactified extra dimensions? What specific properties make them so suitable for this role? I'm not the superstring theory expert to answer this question, but this is, I think, a pretty basic one. The basic idea is just that they solve Einstein's equation.
There's one subtlety here, because Einstein's equation relates the curvature of space-time to the amount of energy density in it. And how much energy density is in the extra dimensions of space-time? That sounds like a hard question. But the simplest kind of model is to say there is zero energy density in the extra dimensions of space-time.
That doesn't mean that there's zero energy density in the universe. It just means that the energy density that we have comes in the form of particles whose wavelengths are much larger than the size scale of the extra dimensions. So basically a photon that you see in your room carries energy, but its wavelength is much, much larger than the size of the extra dimensions.
And so from the perspective of the extra dimensions, it carries zero energy. The energy is not spread across the extra dimensions. It's only spread across your three dimensions, okay? So back in the 80s, when they started thinking about string theory, they looked for solutions to the equations where there's zero energy in the extra dimensions.
And then Einstein's equations become a little bit simpler. You're just setting a certain condition on the curvature tensor of the extra dimensions. R mu nu equals zero for those experts out there. And Calabi-Yau manifolds are curled up kinds of manifolds with R mu nu equals zero. They're the kinds of things that can easily plug into the string theory equations and get a solution.
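Written out, the condition being described is just the vacuum Einstein equation: with zero stress-energy in the extra dimensions, taking the trace forces the Ricci tensor of the internal manifold to vanish.

```latex
% Einstein's equation with zero energy density in the extra dimensions:
R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} = 8\pi G\, T_{\mu\nu} = 0
% Taking the trace gives R = 0, so the condition reduces to Ricci-flatness:
R_{\mu\nu} = 0
% Calabi-Yau manifolds are compact Kahler manifolds admitting Ricci-flat
% metrics, which is why they solve these vacuum equations.
```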
These days, since the 90s, so not just these days, but for a while now, they've been thinking more generally about what are called flux compactifications, where you actually do have energy density threading the extra dimensions. And that opens up a whole new... landscape, literally, of possibilities. This is where the string theory landscape comes from.
Once you have D-branes and fluxes that might affect the geometry of the extra dimensions, you have a lot more possibilities going on. So these days we would not think of Calabi-Yau manifolds as the only possible compactification spaces. They're still there. They're still a possibility, but there are many, many, many possibilities. Linio Miziara says, No, absolutely not.
In fact, it's very much the opposite. Typically, when you branch the wave function of the universe, you do not affect every symphony ever created. When you measure the spin of an electron in your laboratory in the basement of your physics department, and you get either spin up or spin down, you're not affecting Beethoven's Fifth Symphony in any way.
There are now two branches of the wave function in which that symphony is exactly the same. Likewise, again, typically, all the laws of physics are the same. Everything is more or less the same except for that one measurement outcome. And to be super duper careful, everything that might directly be affected by that measurement outcome.
But most things in the universe are not affected by that measurement outcome. GS says, in past podcasts, I believe you said that you are a Humean, or at least you lean more toward it than anti-Humeanism, but didn't go into much detail as to why you felt this way. Could you share more of your reasoning behind being more of a Humean as opposed to an anti-Humean?
Yeah, I think that when you try to think about what the world is made of, the fundamental ontology of reality, I tend to favor the picture that gets the most for the least, right? I tend to say, like, what can we get by with as minimal ingredients out of which everything else emerges? There is kind of a personality that comes into this. You know, some people are very happy.
I want to almost say that they prefer to assign new elements of reality to sort of every kind of phenomenon that they see around them. So electrons are real, but consciousness is also real. Life is also real and not just real in the sense of an emergent higher level reality, but a fundamental reality to them. I am not that kind of person.
I think it's just more productive to say, oh, here's a very tiny set of ingredients, and we can explain everything else in terms of them. So when it comes to the laws of physics, if you say, okay, here is Einstein's equation, for example, we were just talking about it: a law of physics that explains how the universe's space-time geometry evolves throughout history.
There's two attitudes to take towards that law. One is the Humean view, which is that you just have spacetime. What exists is spacetime, and different points in spacetime are related to each other in different ways. And there's a pattern.
If you know what spacetime has done over time, you discern that all these geometric relations between different points in spacetime and different curves through it, et cetera, look, they seem to obey this equation, which is Einstein's equation. In other words, the Humean says the laws of physics are a convenient way of summarizing what the actual universe is doing.
The anti-Humean says, no, the laws of physics are the reason why the universe does that. The laws of physics play a role. The laws of physics bring the universe into existence somehow. So it's not just that the universe exists and the laws of physics summarize what it does. It's that the laws of physics exist separately and in addition to the physical universe. That's the anti-Humean view.
And to that, I want to say, well, what's the difference? How would I experimentally tell the difference between these two scenarios? Maybe you think the Humean view is just incoherent somehow, but I haven't heard any convincing argument to that effect. So I want to know, like, how do I know that these laws of physics actually exist?
I get the temptation to say that they exist, because otherwise you're stuck saying, well, isn't it a big coincidence that the universe just happens to obey these laws all the time? And I think that that's just a situation where our intuitions are getting the better of us. We don't have any strong way of saying that the universe should or should not obey some rules.
All we can say is that, as a matter of fact, it does. I don't think that that means that these rules somehow exist in some ontologically robust sense. Where are these rules? What are they? What would be the difference between them creating the universe and the universe just existing all by itself? I have very similar feelings about mathematical realism.
I tend against mathematical realism for exactly the same reason. What are the causal influences that these extra things seem to have? What are the influences of the number two or Einstein's equation on the universe, over and above summarizing what the universe does? I can't perceive anything, therefore I go Humean in these ways of thinking.
Benjamin Zand says, my question is, how do we know the laws of physics and the physical constants are the same everywhere in the observable universe? Is this an assumption, or can they be confirmed by observation? It's not an assumption. You know, science doesn't really generally work by assumptions. It works by hypotheses. You make a hypothesis. You say, maybe things are this way.
Then you test the hypothesis against the data that you have. Sometimes those tests are rather indirect. Sometimes they're super direct. But for the constants of nature, we say, okay, so to start, let's imagine that we have something like Einstein's equation that uses Newton's gravitational constant and the speed of light, and we imagine those are constant everywhere.
But then we also say, let's imagine they're not constant everywhere. How would things be different? How would we test that? How could we invent a theory where that is true? So for the constants of nature that we know, we absolutely have done an enormous number of tests to make sure that they're not different in other places. Think of it this way. Here's the rule of thumb.
If there is some assumption in physics that you could undo, and you could test the impact of undoing it, and if by making that test and finding some difference from the usual way of doing things you would become a super famous scientist and win all the Nobel Prizes, then probably people have done it already.
So for things like the speed of light, the mass of the electron, Newton's gravitational constant, it would be a super duper important physics discovery to actually detect them changing over time. So of course people have tested that, and there are all sorts of ways to test it. My favorite way is big bang nucleosynthesis. You know, when the universe was one minute old, it was a nuclear reactor.
It was fusing protons and neutrons together to make helium and other elements. And that rate of nuclear reaction depends on the masses and charges of all the elementary particles in a very specific way.
And so you can make a prediction, and then you go back and test the prediction, and the model that works is the one where the constants of nature had basically the same values back then as they do now. Norman Wickner says, is the wave function defined on all of the universes, those that exist after multiple splits at once, or is there a different wave function defined on each?
Would a wave function of all the universes be a simple weighted linear combination of wave functions of each universe? Roughly speaking, yes. I think that that's not usually the vocabulary that Everettians use. The usual vocabulary is there's just one wave function, period. That one wave function includes all of the universes, okay? The word universe is sloppy here.
It goes back to the early 70s and Bryce DeWitt calling it the many-worlds interpretation of quantum mechanics, but it's okay. It's all right to use it. It's better, we think, to call them branches of the wave function, because that makes it clear that they exist inside the wave function. When people want to know where all these other universes are located, that's a category error.
That is a mistake in reasoning because there's not a physical location for the other universes. Space and time exist within the universes, within each branch. It's not that the branches are located somewhere in space-time.
But within the approximation where you have branching that is sort of clear and clean and you can say, all right, here's one branch of the wave function, there's another branch, et cetera, then it is true, yes, that the whole wave function is just a linear combination, weighted linear combination of the wave function of each branch.
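In symbols, a standard Everettian statement matching the answer above: once branching is a good approximation, the single universal wave function decomposes as a weighted linear combination of branch wave functions,

```latex
|\Psi\rangle = \sum_i a_i \, |\psi_i\rangle , \qquad
\langle \psi_i | \psi_j \rangle \approx \delta_{ij} , \qquad
\sum_i |a_i|^2 = 1 ,
```

where each branch state is approximately orthogonal to the others and the weights encode the Born-rule amplitudes.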
Mehran Mizrahi says, in a prior episode you mentioned in relation to spin that those are the only options that we've seen. We can imagine others. We've never found a fundamental particle with any spin other than that. Nima Arkani-Hamed in his Cornell lectures in 2007 made a much stronger statement that quantum field theory constrains the possible menu to only these five values.
Plus, there can be only one spin zero and one spin two. Is he correct? What mechanism creates this constraint? So you have to be careful when you talk about spins, because there are sort of two different things going on. One is the total amount of spin, and the other is the projection of that amount onto some axis, right? Like, spin is like a vector.
It's not quite a vector, because it could be spin one half rather than spin one, I should say. But it's kind of like a vector that has a length. And when you measure it, you're projecting it onto some axis. So if you have a spin one half particle, the total spin is one half, we simply say spin one half, but there are two possible projections.
They're separated by an amount equal to one. All of these numbers are secretly multiplied by Planck's constant h-bar, but we set h-bar equal to one so we don't notice. So spin one half, the actual amount of spin is one half times h-bar, we just say a half. So it could be spin plus a half, i.e. spin up, or it could be spin minus a half.
When you have a spin one particle, when you measure its spin, you can get plus one, zero, or minus one. That's the amount of spin in the z direction. If you had a spin 3 halves particle, you could have plus 3 halves, plus 1 half, minus 1 half, or minus 3 halves. If you had a spin 2 particle, you could have plus 2, 1, 0, minus 1, minus 2, okay?
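The counting just listed can be sketched in a few lines (my own illustration, not from the episode): a spin-s particle has 2s + 1 possible projections along the measurement axis, running from +s down to -s in integer steps, all in units of hbar.

```python
from fractions import Fraction

def spin_projections(s):
    """Allowed z-axis projections for total spin s, in units of hbar.

    A spin-s particle has 2s + 1 projection states, running from +s
    down to -s in integer steps.
    """
    s = Fraction(s)
    count = int(2 * s) + 1          # 2s + 1 projection states
    return [s - k for k in range(count)]

print(spin_projections(Fraction(1, 2)))  # two states: +1/2 and -1/2
print(spin_projections(Fraction(3, 2)))  # four states: +3/2, +1/2, -1/2, -3/2
print(spin_projections(2))               # five states: +2, +1, 0, -1, -2
```

The spin-2 case gives exactly five projections, which is where the graviton's five states come from in the discussion below.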
So that's where the number 5 comes from in Nima's statement, because in quantum field theory as we currently understand it, in terms of fundamental particles, not composite particles (you can have a nucleus with whatever spin you want), the maximum total spin is two, okay? And, like Nima said, there's only one spin two particle. It's the graviton.
There are good reasons not to have multiple spin two particles. And so the five he's referring to is the five possible spin projections of a spin two graviton. Plus two, one, zero, minus one, minus two. Now, I don't know the origin of the statement that there can be only one spin zero. I don't even think that's true.
Certainly the standard model of particle physics has more than one spin zero field, so I don't think he said that. It has one spin zero Higgs boson, as we say, but that's a complex doublet. Complex numbers mean there's a real and imaginary part. Doublet means there are two of them.
So there are actually four spin zero fields in the standard model, but three of them are eaten by the W plus, W minus, and Z bosons. So we have one spin zero particle left over, but we could have more than one spin zero particle. There are reasons to think we only have one spin two particle, the graviton. Anyway, you're asking, what are those reasons?
So there's a whole set of reasons and it's complicated. I'm not going to go into great detail here, but it's a good question because it reminds us that when theoretical physicists build quantum field theories, particle physics theories, there are enormous numbers of constraints that they have to satisfy. The most basic constraint is that the energy needs to be bounded below.
There needs to be a minimum amount of energy in your quantum field theory. There needs to be, in other words, a vacuum state, a state for which the total energy is the minimum possible value. The reason for that is more or less an empirical one. If that weren't true, you could get arbitrarily large negative energies.
Whatever state you are in right now would decay into the lower energy state plus some particles. It would decay infinitely fast in some unregularized way of thinking about things, but super fast in any possible way of thinking about things. So you want a stable vacuum state. That's one very important thing.
And if you just start throwing fields around and you don't work very hard, you will end up with a quantum field theory that allows you to have particles with negative energies. And that means that your vacuum state is not stable. If you have one particle that has a negative energy, you can get an arbitrarily large negative energy just by creating a lot of those particles, right?
So that's one constraint on what you can do. Another constraint is that you don't want to have particles moving faster than the speed of light, right? And that might sound easy to do, but in fact if you just start writing down random quantum field theories with high spin particles, the fact that your spin is so large means that every kind of particle comes with many different components.
And it's going to be the case that if one of those components moves slower than the speed of light, then another one moves faster. Or if one is positive energy, another one is negative energy. Things like that. So I'm not going through all the different possibilities, but just to let you know that there are a lot of constraints in particle physics on what you can possibly do.
And one of those constraints adds up to the fact that spins are two or less overall. Ken Wolf says, But if that level of minute manual control is required by all of us in perpetuity, is that not just a sign that the government has helped itself to too many opportunities to derange people's lives without their consent?
I know your question started out really good and then kind of ran off the rails there at the end, Ken. I don't know what we're talking about with deranging people's lives without their consent. But I do think that it's perfectly accurate to characterize what Hari was saying as involving real participation in democracy.
I mean, the lesson is that democracy is not something you show up for at the ballot box once every four years. It's an ongoing process. Why should that be surprising? I think that's a very natural thing. It might be a worry that people become too busybody-ish, et cetera. It's the typical
Homeowners Association problem where people start controlling what other people can do in their houses and that's something you need to fight back against, yes.
But the very idea that the authority for governing a country with hundreds of millions of people in it is vested in the people themselves, it should not be surprising that that idea leads you to say that those people need to do some work, right? That work might involve educating themselves. It might involve talking to other people.
It might involve listening to other people, sharing their opinions back and forth, doing work to convince people. Yeah, that's absolutely going to happen. That's what life in a democracy is like. It might be more efficient to have just one person who makes all the decisions, but we have other values in addition to efficiency.
The one-person-making-all-the-choices system never actually works out well for the majority in the long run. Eric Stromquist says, a few months ago, I saw you on Robinson Erhardt's podcast, where you gave the anthropic principle as an example of a piece of philosophy that physicists tend to handle poorly. What is the right application of the anthropic principle?
I've always taken it to mean that because the values of some properties appear fine-tuned for our existence, we are justified in inferring that an ensemble actually exists, be it of universes, planets, or whatever, where different ensemble members have different values of the fine-tuned properties, and where we necessarily exist in an ensemble member
having values that allow our existence. Well, there's two things. Number one, I don't exactly think that that's the right way of stating the anthropic principle. It's very close. I would tweak it a little bit. I would not say that because some values of properties appear fine-tuned, we're justified in inferring that an ensemble actually exists. I think that's too much.
That is granting ourselves too much. I would say that the hypothetical existence of an ensemble could be a perfectly good explanation for why some properties in our observed universe appear fine-tuned. It's just a selection effect. That's the weakest version of the anthropic principle, the one asking the least of us.
If there is an ensemble, we're going to find ourselves in the part of the ensemble where we can exist, right? How can you possibly disagree with that? People manage to find ways of disagreeing with that, even though it's perfectly true right here in the solar system, right?
There's different parts of the solar system where it would be very difficult for life to exist, parts where it's very easy for life to exist. Lo and behold, we find ourselves in a part of the solar system where it is easy for life to exist. The anthropic principle at work, okay?
But if the Earth were clouded over in perpetuity and we didn't know about the rest of the solar system, I wouldn't necessarily say that we have to infer the existence of the rest of the solar system. We hypothesize it, and then we wait to see if better evidence comes along that lets us make choices between the alternatives.
You know, there's no rule in physics that says the universe has to give us answers quickly and cheaply. Sometimes we might just have to live in uncertainty for a while. Okay, but in cosmological applications of the anthropic principle (not the uncertainty principle, that's something different), here's the second point. Physicists often want to do more than that.
They want to say, if you have a certain kind of ensemble, then I want to be able to make a prediction for what typical observers will measure, right? This is what Steven Weinberg did back in the late 80s for the cosmological constant. It let him predict that a typical observer, in an ensemble where different observers saw different values of the cosmological constant, should observe something that is a small but nonzero number. And that prediction eventually turned out to be right. So in some sense, maybe he was on the right track. We still debate that. That kind of thing, where you're actually making a prediction, is harder to get right, and I don't think that we do get it right. So
When I say that physicists don't get it right, I don't mean that philosophers do get it right. So your first question is, what is the right application of the anthropic principle? I don't know. I don't think that we've thought it through very, very carefully. I take very seriously the very basic critique that you and I are not typical observers in the universe.
We know that we're not typical for all sorts of reasons. So why should we be so clueless as to forget all of our specificity and then pretend that we're typical and then remember it again and try to make a prediction? There's got to be a better way of doing it than that. And I don't know what that good way is. That's why I'm encouraging philosophers to think about it.
I'm thinking about it myself, but I don't have the final answer yet. I don't know exactly how we should do this. People like Nick Bostrom, former Mindscape guest, have thought about it and written books about it. I just find their answers completely unconvincing. I think we've got to do better. I'm going to group two questions together.
One is from Brian Gunnison who says, how does theoretical physics research contribute to real world applications and technological advancements considering that historically breakthroughs like Einstein's theory of relativity led to GPS technology and quantum mechanics enabled the development of modern electronics?
Can we quantify the impact given that approximately 28,000 physicists are employed in the US alone with theoretical physicists comprising about 10% of this workforce? Can we expect anything else soon? And then Rufus Knapp says, I was thinking about Oppenheimer the other day, and the question occurred to me, what areas of cutting edge theoretical physics currently require a security clearance?
So I think that there's a lot going on here. And one very tiny footnote: it is not accurate to say that Einstein's theory of relativity led to GPS technology. The correct thing to say is that to get GPS right, you need to understand relativity. People could invent the technology without it, but it would have been giving the wrong answers if they didn't know about relativity.
But maybe that's not even a big kind of problem. If you didn't know about relativity, you would do the experiment. You would put satellites up there, you would realize they were giving you the wrong answer, and you would figure out how to correct for it. The problem would be you wouldn't know why. You wouldn't know why it was going wrong, right? Relativity provides that answer.
But it's not like once you knew relativity, suddenly you could invent GPS. It's just that you could invent GPS that works correctly, okay? Quantum mechanics has led directly to modern electronics and other things, so I think that's a better example there. But we need to distinguish a couple things. One is the difference between fundamental physics and sort of higher level emergent physics, right?
There's a set of people working on quantum field theory and particle physics and gravity and cosmology and things like that who are doing fundamental physics. Those people get a lot of the airtime in the public sphere, but they are not the majority of physicists, not even the majority of theoretical physicists, right?
Most physicists are working on atomic physics and condensed matter physics and plasmas and biophysics and all sorts of things that are much more down to earth. So when you want to ask, you know, what good is physics doing for technology, you have to distinguish between those two sets of people.
And as I've said various times before, there was a time when the people doing fundamental physics had a huge impact on applications to technology. But that time was before 1950. Going back to, you know, Sadi Carnot and building steam engines, it was very clear that fundamental physics had technological applications, not to mention Newton and Galileo, et cetera, right?
All the way up to nuclear physics and people like Oppenheimer, right? When we were discovering radioactivity and nuclear fission and fusion, no question that those kinds of fundamental physics cutting-edge discoveries were important for technological progress.
Since the 1950s, roughly speaking, you know, you can argue about details, but since then, we have constructed a theory of fundamental physics that works well enough for all technological applications, right? We discovered new particles, like, okay, now we know about the top quark. How can we put that to work in a technological application? And the answer is we can't.
The top quark has no technological applications. Maybe someday someone will invent one, but you have to really be impressed by how difficult that would be to invent a technological application of the top quark for the simple reason that top quarks disappear in a tiny fraction of a second.
The difference between fundamental physics pre-1950s and fundamental physics post-1950s is that pre-1950s we were learning more and more about the behavior of the particles all around us, right? Nuclei and electrons and things like that existed all around us even before we understood them.
And by learning more about them, we learned more about how to manipulate them and create technology. The progress in fundamental physics since the 1950s has not been in understanding electrons better. It hasn't even really been in understanding nuclei better. It's been in discovering new particles and new aspects of quantum field theory, in understanding symmetry breaking and things like that, inflationary cosmology, dark matter, whatever, which are great, which are super important. It's what I do for a living. I am very impressed by the importance of these areas, but they're not going to lead to technological advancements, because they're talking about things that are not around us. And if you make them, they disappear.
So it's almost exactly made for not being technologically relevant.
Meanwhile, of course, the vast majority of theoretical physicists are working on things that do have something to say about the materials that are all around us, whether you're working on superconductivity or atomic transitions that are relevant to lasers and so forth, or biophysical things about molecules that are relevant to DNA.
These are not what we call fundamental physics, but they're absolutely physics, and they absolutely will have important technological and medical, for that matter, applications going forward. So to Rufus's question, what areas of cutting-edge theoretical physics currently require a security clearance? Not fundamental physics, not string theory or loop quantum gravity or anything like that.
But there are other areas. Quantum information theory, maybe? To be super direct about the answer, it never is true that an area of cutting-edge theoretical physics requires a security clearance. It might be that the kind of physics that you're doing allows you to get a security clearance and therefore know about some project, okay?
So I can absolutely do quantum information theory without a security clearance, but the government might be building some especially good quantum computer for which I would need a security clearance while doing theoretical physics. But quantum information theory is just the Schrodinger equation at the end of the day, okay? We know the Schrodinger equation. You're not inventing new fundamental physics; you're putting it to work.
And putting it to work is a very, very good thing to do, just like superconductivity or whatever. So the areas of cutting-edge theoretical physics that might require security clearance are those that involve particles we already know about, okay?
I mean, here at Johns Hopkins, we run something called the Applied Physics Lab, which is a giant laboratory that does a lot of research that requires a security clearance.
If you look up, I think I mentioned this before, but if you look up lists of universities ranked by the amount of grant money they get from the United States, Johns Hopkins is number one, has been number one for many, many years. And a lot of people think it's because of the medical school, which is part of it, but mostly it's because of the applied physics lab.
But mostly applied physics is not fundamental, cutting edge, emergent space time kind of physics, to put it that way. Dan Butler says, when talking about many worlds, sometimes you talk about discrete events like atomic decay or the detection of a photon being the cause of the branching process.
But other times you talk about how it's all just a smooth wave function evolving smoothly under the Schrodinger equation. Do you think of branching as a discrete or continuous process? Yeah, it's absolutely smooth in the sense that the wave function of the universe evolves smoothly. Branching, remember, is a higher level emergent phenomenon, okay?
We human beings find it convenient to talk about the wave function of the universe by splitting it into branches. So it's an approximation. The branches are approximately orthogonal to each other, but they're not exactly. They go from being not orthogonal at all, then under a measurement process they become pretty darn orthogonal.
Again, super duper duper close to being orthogonal to each other, so more than good enough for government work. But it is, strictly speaking, a smooth process that is just an approximation that we human beings use to discuss the universe in easily understood ways.
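As a toy illustration of that "super duper close to orthogonal" claim (my own sketch with made-up numbers, not anything from the episode): suppose each of N environment particles ends up in a state that overlaps its counterpart in the other branch by a factor cos(theta). The overlap of the two branches as a whole is then cos(theta) raised to the power N, which is driven extremely close to zero as the measurement outcome gets recorded in more and more of the environment.

```python
import math

def branch_overlap(theta, n_env):
    """Overlap of two branches whose environment records differ.

    Toy model: each of n_env environment particles contributes a
    factor cos(theta) to the inner product of the two branches,
    so the total overlap is cos(theta)**n_env.
    """
    return math.cos(theta) ** n_env

# Even a tiny per-particle difference (theta = 0.1 radians) makes the
# branches effectively orthogonal once many particles are involved.
for n in (1, 10, 100, 1000):
    print(n, branch_overlap(0.1, n))
```

This is why branching, while strictly smooth and continuous, looks for all practical purposes like a clean split into non-interacting worlds.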
Hail Zeus says, when you are asked to review a physics article for a prominent peer-reviewed journal, how do you approach completing the review? I find it interesting that articles by different authors in the same journal can at times reach rather different conclusions or even directly contradict each other, yet both are published.
I would appreciate your thoughts on what role reviewers play in this process, or even what you think a journal editor's responsibility is in such situations. I think it's a great question. I think that, number one, the general public is very misinformed about what referees do and how reviewing works.
And number two, professional scientists aren't especially in agreement about how that process should work. You know, I think of reviewing or refereeing as more or less a filter, right? It filters out the weakest things, the things that are obviously mistakes. Or it should; it tries to. Refereeing is not perfect, you know. Neither journal editors nor referees get everything right.
So just because something appears in a journal that has been peer-reviewed doesn't mean it's reliable. It increases your credence that it's reliable, but that credence shouldn't be 100%. It certainly doesn't mean it's interesting or that it's right. You know, you can be correct but not right, by which I mean you can have some equations and you can solve them, and you can solve them correctly, but the thing you're talking about doesn't apply to the real world, right? So I have a theory of dark matter that says it's this kind of particle. Someone else has a theory of dark matter that says it's a completely different particle.
We both do calculations within our specific models. We both publish papers. One of us is right in the sense maybe, hopefully, that one of us is right in describing the world. The other one is not. But they're both, you know, correct within the – I forget. I'm mixing up the words right and correct here. But within the model, within the set of assumptions, the calculations are legit.
But that doesn't mean it describes the world correctly. It would be too high a bar to say that the referee needs to perceive the reality of the cosmos before they can accept a paper. So plenty of papers are accepted provisionally, under the model that they're looking at. And also some issues are just controversial, right? Some issues, the field as a whole doesn't know the answer to.
And so you will get published papers that have contradictory results. That's part of the process. You know, the theme of today's episode is that science is a process, right? It's not a set of true-false statements that are handed down by God, okay? It's not a set of experiments that just tell us the right thing once and for all, and then we move on.
We make hypotheses, we test them, we try to figure out whether we correctly made the predictions based on those hypotheses. Sometimes we did, sometimes we didn't. We interpret what we've done; other people interpret it differently. It's a mess, but it takes time, and the process eventually makes progress. I personally, in the process of refereeing, like to be forgiving.
I end up rejecting a lot of papers because they're just bad or wrong, I think. But I tend to think that if something is saying something interesting... look, I just accepted a paper whose whole job, literally, was to argue against something that I had argued about. I'm trying to hide the specifics here, but I had written a paper.
Someone else wrote a paper saying, no, you absolutely cannot think this way, it's wrong. And I recommended that it be accepted because, even though I disagree with the conclusions of the paper, I thought that the arguments presented were very interesting and worth considering, and the issues are difficult. So maybe I'm wrong, right?
So I like to be forgiving when accepting papers in terms of the conclusions as long as they are well argued and plausible and, you know, something that I could imagine. Well, you know, maybe someday I will realize that I was wrong about this and I want that point of view out there in the published literature so people can think about it. I think that should be the standard.
Kyle Cabasares says, who or what gave you the inspiration to start writing your textbook, Spacetime and Geometry, early in your academic career as opposed to later? I noticed that the lecture notes that were eventually transformed into the textbook were on the arXiv back in 1997. Yeah, what happened with the book was, when I was still a postdoc at MIT, so my first ever postdoc.
A postdoc is a three-year position, usually something like that, and then you try at the end of the postdoc to apply for faculty jobs. Sometimes you succeed, sometimes you don't. If not, you apply for another postdoc, okay? So I was in the last year of my postdoc at MIT and applied for faculty jobs and also new postdocs.
And one of the professors at MIT in the physics department went on leave and was supposed to come back, and he said, nope, I'm not coming back. That happens sometimes. So he was supposed to teach general relativity, and they were out a professor, and so they asked me to teach general relativity as a postdoc.
Now, in fact, Ted Pine and I, Ted Pine, of course, all Mindscape listeners know as the guitarist and composer for the Mindscape theme music. When he composed it, it was not the Mindscape theme music. This is, you know, his band did this back in the 90s, and I just borrowed it because I knew that he wouldn't sue me for infringement, and I liked the songs.
So Ted was a graduate student with me in the astronomy department at Harvard, and the two of us led a course. We taught a course for our fellow graduate students in general relativity. We both took general relativity from Nick Warner, who was at MIT at that time, is now a scientist at USC.
And he taught this wonderful course in general relativity, and we loved it so much that we volunteered to teach it to our own graduate friends at the astronomy department at Harvard. The reason being that typically in those days, the GR course at Harvard was very bad, and the GR course at MIT was very good, and it was a schlep to go from the astronomy department at Harvard to MIT.
So my fellow astronomy grad students just would not have taken general relativity, which would have been a shame, so we taught it. So I was more or less ready to go because we had lecture notes. What Ted and I did was we hand wrote lecture notes and then we Xeroxed them and handed them out to the class. So we had, you know, a whole couple hundred pages of lecture notes in general relativity.
based on Nick Warner's lecture notes, but we put our own spins on them in various ways. And then when the opportunity came to teach general relativity, normally you shouldn't teach a course all by yourself when you're a postdoc because you're trying to get papers written, trying to do research, and trying to get a faculty job.
But I was so prepared for it, and they asked nicely, and it was after the application season had gone by, right? So I'd already applied for second postdocs and whatever before I started teaching general relativity. And I ended up going to the Institute for Theoretical Physics in Santa Barbara. So I said yes, and I taught it. And I was young and energetic at the time.
So not only did I rewrite our lecture notes in my own way, but I typed them all up, right? So I had these LaTeX lecture notes, and I handed them out before teaching the class. I'll never do that again. That's something you do when you're young and energetic.
And then, after the course, someone pointed out that people who liked the lecture notes were Xeroxing them and sharing them around. So I just cleaned them up a little bit and put them on the internet. So they've been on the arXiv since 1997.
So then, you know, once that happens and then you become a professor, which I did in 1999, once you have a set of lecture notes, that's halfway to a book. And so people started knocking on my door saying, do you want to make it into a book? So I talked to different publishers and eventually, in a moment of weakness, I said yes. So I turned it into a book.
I wasn't really in any way thinking about, oh, this is early in my career. This is going to hurt my career. It absolutely did hurt my career. It was the single dumbest thing I did in terms of getting tenure at the University of Chicago because you don't want to let them know that you're interested in doing things other than writing research papers.
And writing a textbook definitely is something other than writing a research paper. But if you forget about short-term careerist motives, I think it was a very good thing to do. I did it. I had more time then than I have now to do it. So I might not have done it if I had waited until later in my career. And I was young, and I would do things a little bit differently now.
But mostly I think that it was a good job. So I'm happy I did it. Peter Newell says, many worlds seems to imply that there were fewer branches of the wave function in the past and more in the future. This sounds like an arrow of time to me. Is this arrow of time somehow the same as the entropic arrow of time? Yes, I think it is. Anyway, you're right.
There were fewer branches of the wave function in the past, more in the future. And... Again, we define branches of the wave function in ways that are convenient to us human beings, so it turns out to be difficult to quite objectively define how many branches there are, how many branches you should say there are, because it depends on what purposes you're asking these questions for, okay?
But roughly speaking, in the conventional way of thinking about many worlds— The arrow of time comes from an assumption about the past state of the universe. Things were relatively unentangled back then. David Wallace, a former Mindscape guest, is the world's expert on this, so you should look up his writings about it.
Things were relatively unentangled back then, become more entangled as you go toward the future, and there's more and more branching. Why were things in that special state in the early universe? It's exactly the same question as why the universe had low entropy at early times. We don't know the answer to either one of those questions, but we know they're the same basic question.
Henry Jacobs says, many worlds, pluralism, being a good Bayesian, compatibilism: all these things seem to be cut from the same or similar cloth. They are all theories that entertain many possibilities, albeit not equally. Do you recognize this pattern in your interests, and does it extend farther than those examples? Yeah, I think that's a perfectly legit characterization of how I like to think.
I like my individual ontologies to be very simple and minimal, but I also like to be able to live in uncertainty, to recognize that there are different possibilities that are worth pursuing. I don't want to tell everyone how to live their individual lives. I want people to be able to make their own choices, and I want to think that those choices are valid.
If you want to enjoy smooth jazz, fine for you. You're able to do that. I want people to be able to live together, respecting each other's different choices and lifestyle arrangements, etc. Likewise, you know, I think it might be a bit of a stretch for many worlds. I think that many worlds, what I care about is not that there are many worlds.
What I care about is that the fundamental ontology is very, very simple. That's the overriding interest that I have in physics theories. But the pluralism about values, the being a good Bayesian about propositions about the world comes down to imagining the possibility that various different things are true
or correct, or valuable, or whatever you want to say, and that we're not absolutely sure what they are. That is absolutely a big part of my way of thinking about how we should go through life. Chillin' Like a Leovillion, that's probably my favorite handle of any Patreon supporter ever, asks the following.
The quark model sounds a bit like a collection of spin one-half fields for which up to three particles can occupy a given state rather than the usual zero or one for fermions because of the Pauli exclusion principle.
There is a color charge taking one of three values that provides an additional quantum number to distinguish the states, but you can never isolate a quark and determine if its color is the same or different from another's. The gluon fields basically exist to guarantee symmetry among color assignments.
Would such fields behave fundamentally differently from the quark-gluon model if not forbidden by the spin-statistics theorem? So I take the question to say that maybe there's some extension of the Pauli exclusion principle: rather than ordinary fermions, where there can be only zero or one fermion in every quantum state, maybe there's a different kind of particle where there could be zero, one, two, or three particles in the same quantum state. And maybe that could somehow recover the ordinary quark model.
It's very close to one of the motivations for the early quark model, which is that when Gell-Mann and his friends started thinking about quarks, they were able to predict the existence of collections of quarks, baryons, collections of three quarks that had never been observed, right? Well, actually, I shouldn't say had never been observed.
He did predict the existence of baryons that had never been observed, but he also made sense of baryons that already had been observed, and I'm not going to remember historically which was which, okay? So forgive me for that. But there was one particle called, I think, the Delta, the Delta-plus-plus. It's a baryon. To make sense of that particle in the quark model, you had to imagine three quarks, all the same, so all up quarks we would say in the current lingo, all with the same spin. They're spinning parallel to each other, because to get the spin-3/2 particle overall, you had to add up the spins.
So you had basically in that particle, you know, unless you could play games, and this is why science is hard, and this is why it's not definitive, right? You could say, okay, well, I have three spins, three quarks with the same aligned spin, but maybe they're in a higher excited state or something like that, so they're not really exactly in the same state.
But certainly the simplest, most direct thing to say, and maybe they knew this from measurements of the mass of the particle, I don't really know— But certainly the simplest thing to say would be all of the three quarks are in exactly the same quantum state with their spins aligned. OK, the same kind of quark, same spin, et cetera. That is ruled out by the Pauli exclusion principle.
You can't even have two in the same state, much less three. So they said, and I think this is the motivation, maybe there's a new quantum number. Let's call it color. The word color didn't come along until later, but maybe there's another quantum number that takes three possible values.
So these three quarks, each of which look like they're in the same quantum state in the Delta-plus-plus baryon, actually are in different states, because they take on different colors, okay? And that turns out to be correct. So that's the ordinary way of doing things, with the Pauli exclusion principle remaining unchanged.
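To make the counting concrete, here is a small toy sketch of my own (the function names are mine, nothing standard): it antisymmetrizes three quarks that differ only by a color-like label, and shows that with one possible value you get zero allowed states, while with three colors there is exactly one totally antisymmetric combination, the color singlet that rescues the Delta-plus-plus.

```python
from itertools import permutations, product

def parity(perm):
    """Sign (+1 or -1) of a permutation given as a tuple of indices,
    computed from its cycle decomposition."""
    sign = 1
    seen = set()
    for start in range(len(perm)):
        if start in seen:
            continue
        length, j = 0, start
        while j not in seen:
            seen.add(j)
            j = perm[j]
            length += 1
        if length % 2 == 0:
            sign = -sign
    return sign

def count_antisymmetric_states(n_colors, n_quarks=3):
    """Count linearly independent, totally antisymmetric states of
    n_quarks fermions that are identical except for a 'color' label
    taking n_colors values (spin, flavor, etc. all equal)."""
    count = 0
    for colors in product(range(n_colors), repeat=n_quarks):
        if colors != tuple(sorted(colors)):
            continue  # count each unordered color assignment once
        # Antisymmetrize: sum over permutations weighted by their sign.
        amplitude = {}
        for perm in permutations(range(n_quarks)):
            key = tuple(colors[p] for p in perm)
            amplitude[key] = amplitude.get(key, 0) + parity(perm)
        if any(a != 0 for a in amplitude.values()):
            count += 1
    return count

# One color value: Pauli forbids three identical quarks in one state.
print(count_antisymmetric_states(1))  # 0
# Three colors: exactly one totally antisymmetric combination, the
# epsilon_{rgb} color singlet that makes the Delta-plus-plus possible.
print(count_antisymmetric_states(3))  # 1
```

With fewer than three label values there is no way to antisymmetrize three particles at all, which is exactly the puzzle the Delta-plus-plus posed before color was introduced.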
It was only later, not much later, because people are very clever and they were thinking around, that people like Yoichiro Nambu and others started asking, well, could color, this new quantum number, again, it wasn't called color at the time, be the basis for some new symmetry, an SU(3) symmetry, and could that provide the force that holds the particles together?
And Nambu's original model wasn't on the right track, but eventually Gell-Mann and Harald Fritzsch figured out how to make it work, and QCD was invented. So that all works. It works fine. There's no reason, you know, there's no empirical experimental reason to overthrow it and replace it with something better. But you're allowed to imagine trying to replace it with something better.
In this case, for this new idea that you're proposing, if I understand it correctly, the challenge would be to get the SU(3) gauge bosons to come out right. So it's not at all obvious that if you generalize the exclusion principle to allow for zero, one, two, or three fermions in the same state, that should be associated with a gauge symmetry in any sense,
much less one that has essentially identical properties to good old quantum chromodynamics, SU(3). You know, SU(3) QCD has been tested experimentally to quite high precision, and roughly speaking, it passes all the tests with flying colors. So I suspect that if you tried to mess with something that sounded foundationally different...
Either it would somehow secretly turn out to be the same as QCD, just using different language, or it would give very different predictions and it would be hard to match what we know. Nate Wadoops says, imagine a gravitational wave or field sensor that is sensitive enough to detect an individual particle and determine which slit the particle went through.
Which of the following would you expect? A. Particle detection via gravity is sufficient to produce two stripes on the screen, or B.
the gravity sensor just sees waves passing through the slits unless something else first detects which slit the particle passed through, or C, a sufficiently sensitive gravity-based sensor is theoretically impossible, thus rendering the question nonsensical, or D, something else.
A. I'm granting you the thought-experiment license to imagine that your gravitationally based particle detector is sensitive enough. Whether or not that could be feasible in the real world, you know, it's not actually practically feasible, no, because gravity is too weak. We cannot detect the gravitational field of a single particle.
But maybe you make your particles really massive, right? Or maybe you make the slits very far apart and you imagine very, very sensitive gravity detectors. This is an engineering problem. This is a technological problem, not one of principle of physics, so I'm granting you the ability to use gravity to measure which slit the particle goes through. So that would count as a measurement.
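The point that any which-path record destroys the fringes can be shown in a toy two-path calculation of my own (all the numbers here are arbitrary, chosen only to exhibit the effect): when the two slit amplitudes add coherently, the cross term produces interference, and a perfect which-path measurement, gravitational or otherwise, multiplies that cross term by the overlap of orthogonal detector states, which is zero.

```python
import numpy as np

# Toy double slit: slits at y = +/- d/2, screen at distance L; each slit
# contributes a complex amplitude exp(i*k*r)/r at screen position x.
k, d, L = 50.0, 1.0, 20.0
x = np.linspace(-4.0, 4.0, 201)

def amplitude(x, slit_y):
    r = np.sqrt(L**2 + (x - slit_y)**2)
    return np.exp(1j * k * r) / r

a1 = amplitude(x, +d / 2)
a2 = amplitude(x, -d / 2)

# No which-path information: amplitudes add, and the cross term
# 2*Re(conj(a1)*a2) produces interference fringes.
coherent = np.abs(a1 + a2) ** 2

# Perfect which-path measurement: the particle is entangled with
# orthogonal detector states, so the cross term is multiplied by
# <D1|D2> = 0 and the path probabilities just add.
measured = np.abs(a1) ** 2 + np.abs(a2) ** 2

# Fringe contrast collapses once the which-path record exists.
print(coherent.max() - coherent.min())
print(measured.max() - measured.min())
```

The coherent pattern swings between near zero and roughly four times the single-slit intensity, while the "measured" pattern is just the smooth sum of two single-slit envelopes.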
The gravitational field of the particle is just as much a measurement as the electric field of the particle, etc. So absolutely, yes, you would ruin the interference pattern by doing that measurement. Anonymous says, what are ways that your quirky preferences for thinking about math differ from other people's quirky preferences? E.g.
some people like MTW bend over backwards to geometrically visualize covectors, or I'm more of an algebra guy and don't see what the fuss is about. Yeah, there's a rough division of mathematicians into geometers and algebraists, right? People who like drawing pictures and looking at shapes, people who like writing equations and filling in the blanks in those equations. I'm closer to a geometer.
I like pictures more than equations, but I'm not an extremist about either one. I recognize the need for equations. There's nothing like a good equation. I did once have an ambition, it's still kind of an ambition, to write a really good physics paper or even math paper proving a theorem of some sort where there were no equations. It was just done by geometric demonstration.
That would be very fun. The closest I ever came was a paper that I wrote with Alan Guth and Eddie Farhi and Ken Olum back in the 90s on restrictions on closed timelike curves in 2+1 dimensions. In that paper, the fundamental demonstration was just drawing pictures. In fact, it was drawing pictures of geodesics, or at least piecewise geodesic curves, in anti-de Sitter space.
But to set it up, you need to have a lot of equations. So we had a lot of equations in that paper. It was very far from this pristine ideal of just drawing a picture. But I think that everything is valuable. You know, again, this is the pluralist in me coming out. So even though I like pictures better than equations, I'm not going to disparage the use of a good equation.
Whereas Ted Pine, my colleague who I just mentioned, he was absolutely an equation guy. Like if you ask him what a tensor is, he would say, you know, he would visualize an equation with some slots and he'd say, yeah, you put the vectors into these slots and it gives you out a scalar quantity. And I'm thinking more geometrically than that, but I get the importance.
Azure Propagation says, I appreciate Mindscape for taking a down-to-earth approach and just letting experts talk about the details they find exciting. But I think there's also pressure to select for exciting guests and topics. I was wondering if you had considered inviting on a grad student or a postdoc just to talk shop.
I think it would be quite interesting to get a very authentic, deglamorized, day-in-the-life image of what it means to be an academic. Yeah, I mean, this is an interesting suggestion. I've certainly thought about things like that. I have had some people who are postdoc level or beginning assistant professor level on the show. There's no pressure to select for exciting guests and topics.
There is an interest that I have in learning new things. And talking about things that I think are exciting and new, right? Yeah, that's sometimes not new. Like when we're talking about the ribosome with Venki Ramakrishnan, that's old stuff in some sense, but it's new to me. So that counts as new as far as I'm concerned. And I do try to have a variety of different age groups.
If you think about the last three guests: there is Cari Cesarotti, who is a postdoc doing physics. There is Hahrie Han, who is a mid-career successful political scientist. And then there is Venki, who is a Nobel Prize-winning senior biochemist. Okay, so we do try to get different groups, different layers there, different kinds of career moments. And I do try. It's actually remarkably interesting how people are reluctant to give me the day-to-day thing. I think that they're trained to not talk about that. So, like, when I talk to an experimentalist, I always want to get a feeling for what the lab is like.
And they never want to give it to me, not that they reject or object to giving it to me, but they skip to the answers, right? And I want to know more about the details about how they got there, but I need to get better at pulling that out of people. But I will also say, look, very, very honestly, it's a skill to be a good podcast guest.
And it's a skill that people learn over time, not by training to be a good podcast guest, but just by giving talks, talking to journalists, giving talks at different levels, some popular level talks, some technical talks, maybe even appearing as a podcast guest, you know.
You might have super brilliant young people who are grad students or postdocs who are not very good at being podcast guests, you know. And very often for me, I'm asking people to be on the podcast who I actually haven't talked to in any detail in the real world before. It's the first time I'm meeting them.
So for my purposes of making sure that you all get a good podcast, it's a little bit safer for me to talk to older people because it is more likely, again, never 100% likely, but more likely that they have a slightly more polished way of getting to the point of what they are talking about. But talking about the day in the life, yeah, I mean that would be a risky thing.
It could easily degenerate, especially if it was someone who was very close to my intellectual area. It's tricky to talk at a level, to talk in a way, that addresses the concerns of people who are not at that exact specialty within science, right? So as I mentioned to Cari before we had our conversation, I have to play dumb. I'm the interviewer.
I have to say, what is a quark, even though I know what a quark is. So it's a tricky thing. I suspect the average attempt for me and a younger person to sit around and shoot the shit about the day in the life of what we do would be less compelling than you think it is. I think it would be better to have someone who has thought about discussing that stuff in a careful way.
It matters to think about it a lot, you know? If you talk to a basketball player about how to play basketball, some of them are no good at explaining it. They're super good at playing basketball or hitting a baseball or whatever. But being able to talk about something in an interesting way is a different skill set than being able to do it. And so you've got to look for the people who can do both.
Elias Aspin says, why are you a sports fan? And bonus question, why those teams? I think I am missing the fan gene. It's just never been very interesting to me to watch others doing sports, much less to root for one team over another. Of course, many people around me do enjoy spectator sports, so in a sense I realize that I'm the weird one. I'd be interested in your thoughts.
Why is it interesting to watch other people play basketball? Yeah, well, I'm going to take the pluralist line here. You're certainly welcome to ask why it is interesting, but you're under no obligation to find it interesting. That's not at all something that you should feel guilty about.
Honestly, you know, whenever someone has a hobby, whenever someone has a leisure-time activity that you don't get, whether it's watching sports or cooking or going on vacations or whatever, count your blessings, because every individual thing that you like to do takes time and money and things like that, right?
So you can save yourself a little bit of anxiety. Sports is like the worst, right? If you like travel, then you can just go to nice places and enjoy them. Sports, usually your team is not going to win the championship, right? Most sports fans' seasons end in disappointment. You're just setting yourself up
for disappointment, because only one team can win the championship out of, let's say, the 30 in the NBA. As a Philadelphia sports fan, I'm especially knowledgeable about this fact, that you have to set yourself up for disappointment. We've been doing better in the past decade or so, not in basketball, but in baseball and football at least, which I don't care about as much. So why do I like it?
Hard to say. I do think basketball is also the sport that I played the most. So I appreciate it a little bit. I was never super good at it, but I was adequate. I could hold my own, right? And when you play against people who are even a little bit better than you... I was on an intramural basketball team in graduate school.
And when we played with people who were mediocre players on the Harvard basketball team, which is not a very high-level basketball team, they so completely annihilated us. Or at least, like, one person on each of our pickup teams would just dominate everything, because they were so much faster, so much more skilled. And they're, like, Ivy League basketball players.
They're not as good as the good college basketball players who are not as good as the pros, et cetera. So there's just an appreciation for the levels of skill that exist here. Yeah. And there's something – again, I'm not an expert on this, so a good psychologist who studies it would know more than I do.
But there's something primal about rooting, about competition, right, about organized competition in the form of a game. We did talk a little bit with C. Thi Nguyen, the philosopher, a while back about gamification and how it sparks people's interest in things, whether it's sports and you're being a spectator, or whether you're playing something on your phone, right?
You're playing solitaire on your phone. The gamification, the existence of a reward, the goal that is explicitly stated, the ability to accumulate points and make progress and eventually achieve the reward, this all speaks to something primal within us. It's also – I'm very pleased that it is – controlled, right? It sort of replaces violent conflict in some very real sense.
It's completely arbitrary. You know, you're asking why those teams rather than others. It's because I grew up with those teams. In the 1970s, when I was a kid, I lived just outside Philadelphia, so rooted for the Philadelphia teams, the Phillies, the Sixers, the Eagles, and I just kept rooting for them.
My favorite was definitely the Sixers, because they had Dr. J, Julius Erving, who was just the most entertaining basketball player of all time to watch play. And they did exactly, you know, the sort of heroic journey where they kept coming close and couldn't quite make it. You know, in 1977 and then in 1980 and in '81, '82, they came very close to winning the championship.
And couldn't quite make it until they broke through in 1983. So, you know, very cathartic when they finally did win. And happily for me, the Internet has allowed me to become, to remain a fan, a very, very embedded fan in my hometown basketball team, even though I lived all over the country in the meantime. So I get enjoyment out of it.
You know, I follow who the teams are, what are the trades that we've made. Basketball I like especially because there's only five players on the team at any one time. They're not wearing big bulky uniforms and they're all asked to do the same thing. It's not like baseball where everyone is specialized. The pitcher and the catcher and the first baseman are entirely different roles.
There are slightly different roles in basketball, but everyone has to shoot the ball, pass the ball, rebound the ball, etc., you know. And you can see them, you can see their individual differences, you can get to know the people. They have personalities; some of them are jerks, some of them are awesome people. Just a couple of days ago, as I was recording this, Dikembe Mutombo passed away.
He was a basketball player, famously a good, tall defender, not a very fluid scorer, but a wonderful shot blocker and also a very vibrant personality. He played for the 76ers for just two years, but one of those years is when they made the NBA Finals in 2001. So he has a warm place in the hearts of many Philadelphia sports fans.
And he also, he was born in Africa, and he devoted his post-basketball career to being a humanitarian in Africa. He was building hospitals with the money that he made, and he was using his star power to get other people to give money to make Africa a better place. And that just warms your heart. You know, it's not that athletes are better people than others.
Some of them are really bad people, but some of them are good people. And when you're rooting for people because you want them to block a shot, and then later in life they turn out to actually be really good human beings, that makes you feel good. It's a microcosm, in a controlled, planned way, of the ins and outs of life, right? Of struggles to succeed, of setting goals and trying to achieve them.
And it's completely arbitrary. If I had been growing up in a different city, I would have rooted for a different team. That's fine. Right now, the team I root for is the Sixers. And until they do something really terrible, I'm going to root for them. And now the martini question. Rue Phillips says, what are your favorite alcoholic spirits, including any brands?
Do you drink any neat or on the rocks or do you always go for the cocktail? Yeah, sadly, I'm in a part of my life where I got to start cutting down on the drinking a little bit. I'm still allowed to do it. It's still quite healthy. But, you know, one is not as young as one used to be. So most of my drinking these days is wine. But I do enjoy cocktails and I also enjoy spirits neat upon occasion.
You know, I have this gizmo. It's not even a gizmo, it's just a styrofoam container, but it lets me make spherical ice cubes that are perfectly clear. It's a little way of freezing ice cubes in a spherical mold that pulls all the bubbles out to a reservoir below. A typical ice cube, you know, if you just make an ice cube, is filled with bubbles, and it's sort of cloudy.
I can make perfectly clear spherical ice cubes. So it's very cool to have, you know, a glass of scotch or bourbon or whatever poured over a spherical ice cube. I'm very proud of that. So I take, you know, pleasure in these little tiny aesthetic touches. But I do prefer a good cocktail to just drinking spirits neat.
You know, I didn't first discover whiskey, Scotch whisky and related American whiskeys like bourbon and rye, until, I remember, it was literally at a physics conference. It was in the 90s, in Ambleside, England, at a Cosmo conference, maybe the first Cosmo conference.
I remember it because Ambleside is in the Lake District in England, so it's technically in England, but it's up there close to Scotland. So there's a lot of Scotch drinking going on, and the conference organizers planned an event to keep the conference-goers entertained, which was a whiskey tasting.
So they invited, I mean, I'm sure they paid, or maybe they didn't pay because the company thought that they would get income from it down the road. But J&B is a major manufacturer of blended Scotches, and a representative from J&B came and gave us a whiskey tasting demonstration. And so we were all tasting whiskeys. And what a blended Scotch is, is a mix of individual single-malt Scotches.
An individual distillery will make what is called a single malt, a very specific, very unique kind of whiskey. And then a blend will try to mix and match them to make something that fits a certain flavor profile. And the guy was very entertaining. He had a thick Scottish accent. He explained that J&B brags about having 41 different single malts in there.
And he's like, yeah, maybe five of them matter. The others are just so we can say we have 41 malts in our blend. And he was very specific about, you know, like a very, very peaty, smoky thing like a Laphroaig. He insisted you have to drink it either with water or over ice. You can't just drink it straight.
Of course, I thought that this was an insult and would only ever drink it straight until I did discover later that, in fact, it's better if it's over ice or with a little bit of water. Yeah, he was just correct. He knew what he was talking about. You shouldn't try to be macho when you're trying to decide how to enjoy your spirits. So good Scotch whiskey.
And I still am macho in liking the peatiest ones when it comes to Scotch: the Laphroaig and the Lagavulin and the Talisker. There are also some very good Glenmorangies. Glenmorangie, I think that's how it's pronounced.
They have a special place in whiskey lovers' esteem because the guy who's in charge of making the whiskeys there is some kind of mad scientist who takes the scotch and puts it in different kinds of barrels, right? So there's scotch you get that was aged in sherry barrels and other scotch that is aged in bourbon barrels and whatever. And so you get all sorts of—it actually does work, right?
It sounds like it could be awful, but it's a different way to enjoy scotch than sort of the straight-up-in-your-face Laphroaig way of doing things. And I'm not so stuck up that I can't enjoy a good American whiskey. I have struggled to enjoy Irish whiskeys and Canadian whiskeys, even though I have Irish heritage. I like a good bourbon or rye more than Canadian or Irish whiskeys, I have to say.
I have a bottle of French whiskey, which is, you know, a conversation piece, because it's French whiskey, but the French are good at other things besides whiskey. I did discover that I love Armagnac more than Cognac. Cognac is good, but Armagnac is just as good, if not better, and way cheaper. And it's much more fun to search the world for a good Armagnac, a good bargain.
So that's a lot of fun. But as I said, this is all preliminary to say that cocktails are more interesting to me. I find it—I struggle in many restaurants or cocktail bars because they're all about, like, fruitiness or sweetness. They put sugar and orange juice in their cocktails. I'm just not about that. Spirit forward would be the way that I would describe my favorite way of doing cocktails.
So I go for the classics: martinis, Manhattans, Negronis, with occasional explorations of Sidecars or Corpse Revivers or things like that. And honestly, the martini is my favorite. A Manhattan is good, but it's more like a dessert than a dinner kind of cocktail. A martini, I don't know what it is, it somehow gets it exactly right.
And as I'm sure you have heard, even if you're not a martini person, there are great controversies in the world of martinis about how to make them. The basic idea of a martini is gin or vodka, mostly, a little bit of dry vermouth. So that's the green vermouth bottles if you're in the store buying them.
And then some sort of garnish, in the form of either olives or what is called a twist, a little bit of lemon peel or something like that. And then there's how you make it, all the things you can argue about: how much vodka or gin you should use versus how much vermouth, whether you should use vodka or gin at all, whether you should shake it or stir it.
What is the best kind of garnish? How should you serve it? All of these are controversies that people rage over at great length. As a pluralist, knock yourself out. Do whatever you want. But also I have my favorites, and so I will tell you what my favorites are in terms of the answers to all these questions. Number one, it's absolutely gin, not vodka. Like, what are you thinking?
I'm going to pretend that my opinions are just the objective truth for the rest of this answer, so you can translate back into my actual pluralist leanings. But what is the point of a vodka martini? I mean, vodka is great as a little cold shot when you're enjoying, I don't know, black bread and anchovies or something like that in a Siberian winter. A shot of vodka is a great thing.
I remember very clearly in Las Vegas going to a Russian restaurant and having a vodka tasting. So a vodka flight, right? Like five little glasses with a tiny amount of vodka in each of them. The point being that vodka tries to be flavorless in some approximation, but it doesn't succeed and it sort of intentionally doesn't succeed.
So different vodkas actually do taste different, and the feel of them is different, and the little tiny bit of flavor is different depending on what it's made from and so forth. So if you're doing a vodka tasting with different vodkas right in front of you, you absolutely can tell the difference between them. At least I can, and I don't have the most sensitive palate out there.
But it's still very, very subtle. If I'm going to have vodka, there are two reasons to have it. One is to have that little shot and just enjoy the pristine purity of the vodka, maybe in some environment that calls out for that. Or you're just trying to get drunk, right? You're mixing vodka with orange juice or whatever, because it's not very flavorful and you can drink a lot of it.
And I've never been interested in that whatsoever. But in a cocktail, the whole point of the cocktail is that different spirits with different flavors are interacting with each other. And vodka doesn't do that much interacting because there's not that much there. It just serves as a basis to feed you the alcohol if it's mixed with something else. So for a martini, I absolutely want gin.
Gin is, roughly speaking, and I know the real gin connoisseurs will not agree with this characterization, but roughly speaking, it's alcohol plus flavor. It's vodka plus flavor, where the flavor comes in the form of various botanicals, okay? In a typical gin, the dominant flavor among the botanicals is juniper.
But you mix and match all sorts of different botanicals, lemon peels and thyme and whatever, to make your particular kind of gin. And so different gins are actually very different, because the botanicals are very noticeable in a good gin. Shout out to local Baltimore folks here: you should go to the Remington Bottle, a little local liquor store near where I live.
It's this older couple who owns it, and they're very into it, right? So it's exactly what you want in a local store or establishment, where the people who own it are very passionate about what they do and can speak at great length about everything they sell. So the guy knew that I bought gin, and he suggested this gin to me. What is it called?
It's Tom's Gin, but it's like Old Tom's or something like that. But it's this, again, mad scientist, in this case in Vermont, who ages gin in bourbon barrels. This is just bizarre, because gin and bourbon are two very different places in spirit-flavor space, right? And so the gin that comes out is not clear; it's a slightly brownish color.
It looks like watered-down bourbon, but it tastes like somewhere in between gin and bourbon. And I'm both delighted by it and haven't yet figured out how to use it in a cocktail. So I'm challenged by that one. Old Tom's, I think it is. Anyway, the gins are very different. If you're into gin or into martinis, it is absolutely worth doing your own little gin tasting.
Buy five bottles of gin, invite some friends over, taste them all, and figure out which one you like the best. Okay, so that's one question. I guess I should say what my answer is, but I'll come back to an even better answer. The tentative answer is I kind of like, you know, a classic Bombay Sapphire, but St. George Spirits is a smaller operation that makes a bunch of good gins, and their Terroir gin is probably my favorite basic gin. St. George Terroir, if you can find that. Okay.
There are others that are very good. There's a Baltimore gin that is actually remarkably good, the Shot Tower gin. The Shot Tower is a famous tower in Baltimore where they made shot 200 years ago, in the sense of the shot you put into your musket: you need little lead pellets, so you drop molten lead from the top of a tall tower and the droplets cool into spheres as they fall. So that's the Shot Tower, and somehow they named the gin after that. I don't think it's made at the Shot Tower, but it's actually quite good.
There's a bunch of other good gins. Monkey 47 is great, a little bit expensive, but very, very individual. So I truly don't think it makes sense to talk about what the best gin is. I think it makes sense to talk about which gin you like, okay? And then there's the ratio of gin to vermouth: I think you should be able to taste the vermouth.
There's this weird idea, which I resist because I like the interplay of the spirits, that the drier the martini, the better. And dry just means less vermouth. So, you know, famously there's a competition to come up with ways of describing the least amount of vermouth, which is silly, because the least amount of vermouth is zero.
So you can just say, I don't want any vermouth, I just want to drink cold gin. Good for you if that's what you want to do. It's not a martini, but good for you if that's what you want to do. So like some people will say, I wave the vermouth bottle over the martini glass, or, you know, I bow in the direction of France because France is where the vermouth is made.
Gin is funny because gin is a very British spirit. It's from England, but the martini is not from England; they'll kick you out of a pub in England if you order a martini. It's an American drink. It combines a British thing in gin with a French thing in vermouth. So I like to be able to taste the vermouth.
I think the interplay of the vermouth and the gin is part of what makes a great martini. And I would put it at like three to one or four to one ratio of gin to vermouth. In fact, I don't carefully measure it. I'm just making it for myself. But I think that that's the ratio I would serve to somebody else. Then do you shake or do you stir it? And I go back and forth on this.
Again, this is — interestingly, there's arguments on both sides, shaking versus stirring. The argument that it's what James Bond does is not a good argument at all. James Bond was intentionally uncouth in a whole bunch of ways, and people don't get it. People have been ruined by the movies to think that James Bond is a role model of sophistication. But in many ways, he was kind of a —
unsophisticated guy. That's why he wore those sports wristwatches with a tuxedo. You should never do that. That is not a cool thing to do. And shaking, not stirring your martini is not necessarily a good thing, but there are arguments for it. So there is, again, a completely bogus argument, which is that if you shake the martini in the shaker, you'll bruise the gin somehow.
That's completely made up. You're not going to bruise the gin. Shake to your heart's content. The actual things that matter are the benefit of stirring is that the gin remains clear, right? You do not introduce tiny little bubbles into the gin.
And part of the aesthetic pleasure of the martini, once you pour it, is that perfectly clear, crystalline, almost invisible quality of the combination of gin and vermouth. So a cloudy martini is not quite as aesthetically pleasing as a clear one. But the argument for shaking is that you cool it off much more efficiently.
Stirring just makes it harder to bring it down to a low temperature. And to me, other than getting the gin and the vermouth right, the coldness of the martini is the absolute most important thing. A good martini should be as cold as it can possibly be. I've been in places (I won't mention any states, but Texas) where you can order a martini and they will serve it at room temperature.
I'm like, oh, my God, what are you even thinking about? Make your martini cold. Steak they're good at. Martinis they're not so good at in certain states of the United States. So I actually shake these days because I'm willing to give up the aesthetic pleasure of the clarity because eventually the bubbles go away anyway. I would rather have the martini be cold.
And then it's the garnish, and then I revert back to pluralism. Sometimes a twist, sometimes a regular olive, sometimes feeling a little playful, have an olive with garlic or jalapeno or blue cheese inside. All that is fine. I'm not going to be strict about that. But all that is to say I've recently perfected my martini by stepping a little bit outside of the advice I just gave you.
I discovered at the Remington Bottle, this fun little liquor store, a Japanese gin. Now, in whiskey circles, Japanese whiskey is highly esteemed; the Japanese are really good at making whiskey. I have tried it, and I don't quite like it as much as the best American bourbons or Scotch whiskies, so I'm not a connoisseur of Japanese whiskey.
But gin, I thought, okay, I'll give it a shot, because gin is about the botanicals, right? It's a very basic spirit that you then— give some oomph to by choosing the botanicals. And that sounds to me like something that Japanese people are good at, right? The subtlety of flavor profiles, et cetera. And so it turns out there's a bunch of different kinds of Japanese gin.
And the one that I got and fell in love with is called Etsu, E-T-S-U. And there's a little bit of tea leaves in there, as well as other botanicals, as well as the juniper and others. So it's a sort of Japanese take on the typical British way of making gin. And number one, it's amazing all by itself. Like it really, really works. I was skeptical, but it totally works.
And number two, there's a different local Baltimore place that sells all sorts of different kinds of bitters. (Parenthetically: bitters are alcoholic by themselves, but not very alcoholic; they're all about the flavors. It's like gin, if you reduced the actual background spirit down to almost nothing.)
So it's an intense kind of flavorful—I'm missing the right vocabulary words here, but there's a whole universe of different kinds of bitters, and you can have bitters with different kinds of flavors. So there's like straightforward bitters, like Angostura is the standard thing, but there's also orange-flavored bitters, and then you go crazy.
There's chocolate bitters and pecan bitters, and there are all sorts of Italian amari that are bitters-like but have different flavor profiles, some very astringent, almost like mouthwash, and others, you know, quite smooth and delightful. So I found a little tiny bottle of what are called Woodland Bitters, from the Portland Bitters Company.
And it gives you a flavor of being in an evergreen forest, right? You know, that very typical scent of evergreen trees all around you on a crisp fall evening, right, in the form of bitters.
And so you make a martini, three to one gin to vermouth, with this Etsu gin, which has that little bit of tea-leaf flavor in it, and then you add two drops of the Woodland Bitters, which gives you this tiny feeling of evergreen trees. Then you shake it and you pour it. And you don't want olives with this one, because the olives are a little bit overwhelming, and this is very, very subtle.
So you want the twist. So you want a little bit of the lemon peel in there. You twist it over the glass and dump it in. That's it. That is my favorite martini right now. That is the martini I envision myself drinking when I'm drinking martinis from here for the rest of my life. I don't see how you can get better than this martini.
Of course, in practice, as a pluralist, I will keep trying different things. I'm always trying different things, even though I suspect they're not as good as my best. I can't order this martini when I'm out. No one else makes it. It's my martini. But I will serve it to others, and I'm going to push it on others. We've made friends with the local bartender at our local restaurant here.
We made friends with the owners and chefs also. So we had a little cocktail tasting at our house with the bartender, and we were trading recipes, and I was showing off my martini. And sadly, the bartender, he's a great guy, but he's not really a martini guy. He likes other kinds of things, which is fine. But he did understand why, if you liked martinis, this would be the martini to like.
Thank you.