Sean Carroll

Podcast Appearances

Do you have a sense as to whether there's a fundamental difference between the theory of operation of an AI LLM and that of an autocorrect feature on my phone? Is it just a massively scaled up version of this thing that is always failing to guess the next word I want, or is it doing the same thing in some utterly different way? I think it's both. It's a little bit half and half there.

Certainly, there's a spiritual connection between LLMs and autocorrect. I mean, autocorrect is not a separate kind of technology, right? Autocorrect is next token prediction. And in a very real sense, LLMs are... very, very, very souped up next token prediction. It's not just the next token. They're predicting more than that.
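
As a rough illustration of what "next token prediction" means, here is a minimal sketch in Python; the function names are purely illustrative and this is not how a phone keyboard or an LLM is actually built. It just counts which token tends to follow which in a sample text and always suggests the most frequent continuation.

from collections import Counter, defaultdict

# Toy next-token predictor (illustrative names, not any real library API):
# count bigram frequencies in a sample text, then always suggest the most
# frequent continuation of the last token seen.
def train(text):
    counts = defaultdict(Counter)
    tokens = text.lower().split()
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    # Most frequent token observed after `token`, or None if unseen.
    if token not in counts:
        return None
    return counts[token].most_common(1)[0][0]

counts = train("the cat sat on the mat and the cat slept on the rug")
print(predict_next(counts, "the"))  # -> "cat"
print(predict_next(counts, "on"))   # -> "the"

An actual LLM replaces these raw counts with a learned neural network conditioned on a long context, which is the "very, very souped up" part.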

They have some memory of what they've been talking about and things like that. There are very important, crucial distinctions between them and a simple autocorrect kind of thing. But there is that spiritual connection. So I think that there's both a similarity and a difference there.

Jesse Rimler says, I'm currently reading the wonderful new David Bentley Hart book, All Things Are Full of Gods.

It's a thoughtful and engaging philosophical treatise on consciousness and materialism written as a Platonic dialogue. Hart is religious, and I generally disagree with him. I'm guessing you would too, but it does make me think about the areas where non-materialists can find argumentative purchase.

Do you think that the irreducible experience of consciousness is one of those brute facts that allows otherwise rational thinkers the wiggle room to play around with non-scientific ideas? If I understand what you're asking, no, I do not think that. I'm not sure what the word "irreducible" means in the phrase "the irreducible experience of consciousness." There is an experience of consciousness.

How do I know it's not reducible? I don't even know necessarily what reducible means. I worry that it means different things to different people. I've been very, very clear about what I think consciousness is. I think that people obey the laws of physics.

And I think that we talk about people using a higher-level emergent vocabulary, which absolutely includes all the interior first-person consciousness talk. I don't think that there's any fundamental difference between that and the exterior talk that we use about how people are moving or talking or thinking or whatever.

So I think that there is some temptation to treat consciousness as different precisely because it is first person. There is something unique about my consciousness from my perspective, sure. But I want to understand the world comprehensively and fundamentally, and I think that by far the leading way to do that is to not treat me as all that special, including my consciousness.

It's just a higher-level emergent way of talking about the collective behavior of atoms and electrons and photons in my brain.

Polina Vino says... Computable analysis is a kind of analysis that is compatible with computability theory.

For example, we have the computable intermediate value theorem, the assertion that if f is a computable continuous function and f(a) < c < f(b) for computable reals a, b, and c, then there's a computable d with f(d) = c. Does that make sense?
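
To give that a concrete, constructive flavor, here is a minimal numerical sketch of the bisection idea behind the intermediate value theorem, written in ordinary Python floating point with an example function chosen just for illustration. The genuine computable-analysis argument is more delicate, since one cannot in general decide whether two computable reals are equal, so this is a sketch of the intuition rather than the formal proof.

# Bisection sketch for the intermediate value theorem: given continuous f
# with f(a) < c < f(b), narrow [a, b] until it pins down a point d with
# f(d) = c (here only up to floating-point tolerance; names are illustrative).
def bisect_for_value(f, a, b, c, eps=1e-12):
    assert f(a) < c < f(b)
    while b - a > eps:
        m = (a + b) / 2
        if f(m) < c:
            a = m   # c is still attained somewhere to the right of m
        else:
            b = m   # c is attained at m or somewhere to its left
    return (a + b) / 2

# Example: f(x) = x**3 on [0, 2] with c = 2, so d should be the cube root of 2.
d = bisect_for_value(lambda x: x**3, 0.0, 2.0, 2.0)
print(d, d**3)  # d ≈ 1.2599, d**3 ≈ 2.0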
