Sean Carroll

Podcast Appearances

Is there a way to verify, to measure, that we're traversing through all or some quantum fields by moving, or, on the contrary, are fields always anchored to the observer? Certainly in the way that we think about quantum field theory and what quantum field theory means, the fields exist everywhere. That's what it means to be a field.

A field is the answer to a question: at this point in space, what is the value of the field? For every single point in space, a field has a value. The electric field has a value at every point in space. It might be zero, so you might say there's no electric field here, but you don't actually mean the electric field doesn't exist there.

What you mean is that the value of the electric field is zero there. Just like when you have y as a function of x: if that function happens to cross through y equals zero, you don't say the function stops existing, you just say that its value is zero. If the temperature is zero degrees Fahrenheit, you don't say there's no temperature; you just say the value is zero.
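
To make that picture concrete, here is a minimal sketch (my own illustration, not something from the episode) of a field as a function that assigns a value to every point in space. The function name and the formula are hypothetical; the point is only that returning zero at some location is still a value, not an absence:

```python
import math

def electric_field_x(x: float, y: float, z: float) -> float:
    # Hypothetical toy field: one Cartesian component of an electric
    # field that happens to pass through zero on the plane x = 0.
    return x * math.exp(-(x**2 + y**2 + z**2))

# The field is defined at every point we ask about.
print(electric_field_x(1.0, 0.0, 0.0))   # nonzero value
print(electric_field_x(0.0, 2.0, -1.0))  # exactly 0.0 -- still defined;
                                         # zero is a value, not nonexistence
```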

Same thing with quantum fields. They exist everywhere. They're not pulled along or traveling along. You absolutely do pass through them in the sense that you pass through space and the fields are everywhere in space. I don't know how to pronounce this.

Ptmilo, P-T-M-I-L-O, Private Milo maybe, says: is your objection to the potential for large language models to exhibit more generalized and extrapolation-heavy intelligence based on deeper principles, or is it more intuitive? That is, is there anything in information theory that tells us it is impossible for locally generated interpolations of tokens to uncover patterns and sequences that are indistinguishable from conventional human-like extrapolation successes?

Well, I don't think it's based on deeper principles in the sense of a proof, which is what you're asking about. In fact, I'm open to the possibility that large language models could construct, you know, sorry, let's back up.

The large language model is optimized to give sensible-sounding answers to human beings asking questions of it. Now, it may very well turn out that in that black box of many, many layers of deep learning, the way the LLM does that is essentially to invent intelligence, to invent a model of the world, invent sort of counterfactual reasoning, invent all those things. I'm open to that possibility.
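
As a rough picture of what "optimized to give sensible-sounding answers" means, here is a minimal sketch using a toy bigram model as a stand-in for a real LLM; the corpus, names, and model are my own illustrative assumptions, not anything Carroll describes. The training signal only rewards making observed continuations probable; whether optimizing that objective can also produce a world model is exactly the open question:

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus standing in for training data.
corpus = "the field exists everywhere the field has a value everywhere".split()

# Count bigrams: how often does each word follow each other word?
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token_probs(prev: str) -> dict:
    # The model's entire "knowledge": conditional frequencies of
    # what comes next, estimated from the corpus.
    total = sum(counts[prev].values())
    return {w: c / total for w, c in counts[prev].items()}

# The objective rewards plausible continuations, nothing more.
print(next_token_probs("field"))  # {'exists': 0.5, 'has': 0.5}
```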

However, number one, I don't see why that would necessarily be the case, because that's not how you've programmed the LLM. It would have to be a case where the optimization procedure was just so successful that the LLM found it by itself, despite the fact that that's not what it was trained to do. And number two, in the data, I see no evidence of that happening, right?

It's not that LLMs don't become better and better. They're clearly becoming better and better. But they aren't perfect; they make mistakes, and they have failures. And my point has always been that the types of failures they make are precisely the types you would expect if they were not real human-like intelligence, if they were not causally mapping the world and inventing counterfactual reasoning.
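
One hedged way to picture the interpolation-versus-extrapolation distinction the question turns on (my analogy, not Carroll's): a fitted model can look flawless inside the range of its training data yet fail in a characteristic way outside it. All numbers and functions below are hypothetical:

```python
import numpy as np

# The "world" the model only ever sees locally, on [-1, 1].
x_train = np.linspace(-1, 1, 20)
y_train = np.sin(3 * x_train)

# A high-degree polynomial fit interpolates the training range very well.
coeffs = np.polyfit(x_train, y_train, deg=9)

inside, outside = 0.5, 3.0
# Inside the training range: tiny error.
print(abs(np.polyval(coeffs, inside) - np.sin(3 * inside)))
# Outside it: the fit blows up in a characteristic, diagnosable way.
print(abs(np.polyval(coeffs, outside) - np.sin(3 * outside)))
```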

So I am very open to new evidence coming in that changes my mind. That'd be super duper interesting if it were true. I just haven't seen it yet. Ed says, I know you're not an AI expert, but you have had a number of AI expert guests, so you likely have a better handle on it than I do.