Jonathan Birch
I was one of the co-organizers of the New York one. Okay, good.
So well established that it feels like... It feels like it's at least a year old, yeah.
It's a delicate balance, I think. What we wanted to do, and it's similar to the project in the book, the Edge of Sentience book, was to acknowledge that there is a huge amount of disagreement about these issues. And that's fine. It's to be expected when our understanding of what sentience is is so poor.
But nonetheless, despite all of that reasonable disagreement, there can be certain points of wide agreement about what the reasonable range of views is and what the realistic possibilities are. That was the thought behind it. And then, well...
we got together an initial group of 40 signatories and just had a series of Zoom calls where we were talking about, well, do we agree about a realistic range of possibilities? And if so, what can be said about what that range is? And that's how we got this text that acknowledges a realistic possibility of consciousness, which was the term we used there,
perhaps a more widely used term than sentience, in cephalopod molluscs such as octopuses, decapod crustaceans, and insects. And so we were trying to avoid the sense of projecting certainty, or even confidence or knowledge, but using this language of realistic possibility to say, what we do agree on is the need to take this really seriously. Sure.
That was always my view. Yeah, that was my view. But in this group of 40, a more common view was that people don't understand the term sentience yet. They're not ready for it. Use a term they already understand, namely consciousness. Both sides have pitfalls, because as I say, if you start talking about consciousness, people might think you mean the inner monologue, self-awareness.
There's quite a range of things they might think you're talking about. So there's trade-offs there. I think the term sentience is on the up, so to speak. And for me, it's hopefully the term of the future that will start to displace consciousness in these debates.
That was the thought, yeah. And that may be true as things stand.
How do we know when another animal is, do you mean? I think we know when we ourselves are.
Right, and when thinking about crabs, for example, we're very much stuck with the third person perspective. And we're stuck too with a big range of reasonable disagreement and quite a lot of realistic possibilities. Some will make it very unlikely that crabs are experiencing things and others make it very likely that they are.
And what I do in the book is I suggest a pragmatic shift in how we think about the question: from "Is the animal sentient?" to "Is the animal a sentience candidate?" Where this concept of a sentience candidate is defined in such a way as to make the question answerable,
Because it's about, well, is there a realistic possibility of sentience established by at least one view in that zone of reasonable disagreement? And is there an evidence base that is rich enough to allow us to identify welfare risks and to design and assess precautions? I hope, at least, that people find that pragmatic shift helpful.
And I think if you're thinking about animals like crabs, for example, to me, it's quite clear that they are sentience candidates in that sense, and that we do have to worry about welfare risks posed by the way we treat them, despite the fact that, of course, we're still uncertain about whether they're sentient or not.
I think it's everything at once. I think neural evidence and behavioral evidence are both powerful evidence. And they're more powerful when pursued together as part of a coordinated research program than in isolation from each other.
What we have with a lot of invertebrate animals is quite tantalizing, I think, because often you've got a lot of behavioral evidence showing surprising things, impressive things. And then you have studies of neuroanatomy saying, well, perhaps there are more neurons in there than you think, particularly with octopuses.
There are big integrative brain regions that are plausibly performing functions relating to learning and memory. And then those are the two parts of the picture, and they don't join up, as it were, in that what we're lacking in most of these cases is detailed knowledge of the mechanisms in those brain regions producing the behaviors we're seeing.
So people talk about grasping the elephant from different sides. It's two ways of converging on a picture that are both valuable and all the more valuable when pursued together.
Well, there's a range of different studies, and I don't see any individual study as being conclusive, and it's an area where phrases like conclusive evidence, proof, are not really appropriate. But
What we have is research programs, particularly Bob Elwood, who is another of the signatories to our declaration, really started with this question of, well, people think that all that is going on here is reflexes. So they think that the crab skitters away and it's like when I put my hand on a hot stove and my hand withdraws and that reflex withdrawal is underway before I feel anything.
And people say, that's all the crabs have. They just have those reflexes. And he thought about how he might convince someone who has that view that that is not all that's going on.
And that just like in us, the information about the noxious stimulus, like the hot stove, reaches the brain and is integrated with other kinds of information and is used for lots of functions relating to learning, memory, decision-making. And he came up with these motivational trade-off experiments where what he had was hermit crabs.
And the hermit crabs, they're interesting because they have very strong preferences for certain types of shell. And in the wild, you see them exchanging one type of shell for another. And they have this hierarchy of what they think the best shells are. And Elwood, in these experiments, he drilled holes in the shells, put little electrodes in and administered small electric shocks to the crab.
And his question was, well, would the crab just evacuate the shell when it was shocked, as a kind of reflex? Or would it take account of how good the shell was, and how bad it would be to lose that shell, in making that decision? Would it require a higher voltage of shock to make it leave a higher-quality shell? And he found evidence that indeed it seems to.
And so this is the kind of thing where it's not conclusive proof, but if you're coming in with this view that they're just reflex machines, all they do is stimulus response. There's nothing integrative or centralized going on. This kind of evidence should shake that confidence.
Yes, credulousness, right? Taking the surface behavior as immediate evidence of sentience.
Would it be too provocative to say thinking, contemplating, musing on the part of the crab, to balance the different aspects? Integrating, modeling, and weighing of, yeah, the opportunities and risks posed by the environment. And then you have a certain family of theories, associated with Björn Merker and Jaak Panksepp, that treat that as very closely linked to sentience. They say, well,
What is sentience? Fundamentally, well, they propose that it's to do with this evaluative modeling where you're trying to represent in an integrated model the opportunities and risks posed by the environment. And so there's a nice mesh there between the behavioral evidence we're seeing in the crabs and the sorts of...
What is sentience? Fundamentally, well, they propose that it's to do with this evaluative modeling where you're trying to represent in an integrated model the opportunities and risks posed by the environment. And so there's a nice mesh there between the behavioral evidence we're seeing in the crabs and the sorts of...
brain mechanisms that, according to this family of theories, would be enough for sentience.
There's no reason to think the magnet is internally representing those field strengths.
Yeah. Yeah. And according to the sort of Merker-Panksepp family of views, it's not just any internal integrative representation, but it has to have this evaluative character as well. It has to be a certain kind of modeling of what are the opportunities and risks? What are my needs? What do I need to prioritize right now?
Well, I mean, I think it's quite important in these experiments that it has some kind of representation of the different shell types and their relative qualities. And that is somehow getting integrated with how bad the electric shock is.
So I do think there's something inherently more impressive about experiments that do not simply provide two immediate stimuli and say, trade these off, but rather in some way rely on the animal's capacity for mental representation. And it's a similar story with the evidence from bees as well. That's what researchers have been trying to do. Sorry, tell us about the evidence from bees.
I was just thinking of Matilda Gibbons's experiments, which were inspired by Elwood's crab experiments. But bees don't have the shells that hermit crabs have, so you've got to test for the same thing in a different way. And so she came up with this setup where
They have a choice of feeders they can land on, and different concentrations of sugar solution are available at different feeders, and different temperatures of heat pad are there that they have to stand on to access the feeder. And so the question now is about a different kind of trade-off. Will they trade off when choosing which feeder to go to? How...
how high was the heat they had to withstand, and how sweet were the rewards they could access. And again, a crucial part of it for Tilda was this thought that you want to look at their decisions when they're anticipating what they're going to experience at these feeders based on their memories.
Yeah, because when they're doing it, there is this possibility that, well, there is some integration of some kind going on, but it's just two immediate stimuli pushing against each other. But when they're making that choice in an anticipatory fashion, it's got to be some kind of representation of the risks and opportunities.
So, yes, not every critic is convinced by this kind of evidence, of course. But in a way, you're going after that critic who says these animals are just reflex machines. And because they're just reflex machines, there's no credible theory of sentience of any kind on which they're going to meet the conditions. And it's showing that that is not the case.
I think that's right, that they're prospectively modeling the environment and the rewards and the risks that it offers. And they have some way of weighing up those risks and rewards in a common currency. And that ties in with this quite longstanding idea that, well, that's kind of what sentience does for us.
That pain and pleasure, valenced states, are the currency through which we make decisions and represent the risks and opportunities of our environment.
I mean, I think that's something that goes beyond sentience, much the same way that the inner monologue, et cetera, goes beyond sentience. It's something some sentient beings can do, but probably not all. I think that's going to be the case for counterfactual reasoning. Of course, it depends a bit on what we mean by that.
I think if you think of rats in a maze and the vicarious trial and error behavior that was observed by Tolman many, many decades ago and has been intensively studied, where they seem to pause at the junction in the maze and look both ways as if simulating what reward lies down each path
And then there are more recent studies that suggest that the hippocampus genuinely is doing that simulating. It's not really counterfactual reasoning, or at least that would be a pretty tendentious description of it, but it's prospective simulation. And I suspect that that capacity for prospective simulation is quite widespread among animals.
It's a hypothetical, right? It's possible futures that could be actual.
So there's no sense of, well, that didn't happen, but what if it had happened? So that bit's not there.
Well, I think Andrew Barron and Colin Klein have this paper about insects and the origin of consciousness. And another one called Insects Have the Capacity for Subjective Experience. And their case is based on the idea that what they have is this integrative model of the agent in space where they model the environment around them. That may be prospection on a very short timescale, I suppose.
And then it's largely an open question about prospection on longer timescales. Some of the most interesting evidence there is probably the Portia spider evidence, where these are jumping spiders that hunt other spiders. And they're famed for this detour behavior, where you put them on a platform where they can see a prey item in the distance and they can see two paths to the prey item.
One of them has a break in it; if they take that path, they will fall through it. And they go from side to side, seeming to be inspecting the two paths. Then they climb back down off the platform, so the paths are out of sight, and they nearly always choose the unbroken path, leading to a debate about how on earth they do something like that.
And of course, one possible explanation involves prospective simulation, where they are, in the brain, modeling what will happen if they take each path.
Yes, well, and I think in the Portia spider case, what's lacking is the neural evidence that we have in the rats. Say if you have both, if you have the behavior and you have neural recordings practically showing the simulation happening in real time, then that's probably as strong evidence as you're ever going to get. And we don't have that for the Portia spiders, but it's very suggestive.
Right. Yeah, that's part of what's so impressive. In a brain of, I think, about 60,000 neurons, so really, really small, less than 10% of the size of the bee brain by neuron count, they're doing something that dogs clearly fail to do.
I think we can't really talk with confidence about this because it depends very much on your theory of the brain mechanisms involved.
If you have that Merker-Panksepp view, or that family of views, I should say, where we're talking about something very evolutionarily ancient, supported by subcortical mechanisms, mechanisms in the midbrain at the top of the brainstem, and that is about evaluative modeling of the animal's priorities and needs, then there's a very clear function relating to decision-making,
in that what sentience allows is, well, an escape from being a reflex machine and the possibility of weighing up quite different options in very flexible ways. So that view has some plausibility, I think. And I also think it's quite plausible that sentience facilitates learning. If you think about that hot stove situation, think about what the pain does for you.
What it doesn't seem to do for you is trigger the reflex withdrawal of the hand because that's underway already. But what it plausibly does do is help you learn about where not to put your hand on future occasions. And that leads to a very interesting debate about what kinds of learning sentience facilitates and why.
Well, as I say, I think the sentience candidate is a better concept in a way. Fair enough. And I suggest in the book that insects are sentience candidates.
So in terms of cases where we have enough evidence to really compel us to take seriously a realistic possibility of sentience, we're definitely talking about all vertebrates and the cephalopod molluscs, like octopuses, squid, and cuttlefish, and the decapod crustaceans and the insects, which are both arthropods. And then... It could be that we're talking about something that has evolved three times.
It could be something that is there in the common ancestor of all three groups, and we're not really in a position to have much confidence either way on that one.
Yeah, over 560 million years ago, a very small worm-like creature. So, I mean, yeah, perhaps unlikely to possess the mechanisms that convince us in those three cases that sentience is a realistic possibility. So I suppose I perhaps lean myself towards the three-origin view.
Exactly, yeah, particularly in those lineages where we see complex active bodies. This is Mike Trestman's term, where you have the challenges that come with trying to manage articulated bodies with lots of parts. And you can't be a reflex machine as such anymore because then different bits of the body will start tearing each other apart.
There has to be some kind of centralized, sophisticated control system in place. And that's when we seem to start seeing realistic candidates for sentience. And if that's true, then certainly the cephalopod mollusks and the arthropods are looking like candidates.
Yes, well, the octopuses have become poster children, as it were. They're often the case that gets people to take the possibility of invertebrate sentience seriously. And I think once you've got that far, you think, well, you know, are they really the only invertebrates for which there's relevant evidence? And no, they're not.
No. And in the book, I have these two concepts, sentience candidate and investigation priority, where that second group of investigation priority is for those cases where the evidence is falling short of sentience candidature. But we think there's a
prospect of that bar being achieved by future evidence, and we think there are welfare risks posed by human activity that might call for precautions. And so some invertebrates are put in that category, but unicellular organisms and plants, I don't think, are investigation priorities either.
Yeah, there's just no evidence of the relevant kinds at all, I would say, in plants. You have this quite wide range of realistic possibilities about the brain mechanisms supporting sentience, some of them emphasizing the cortex, the prefrontal cortex, other ones emphasizing the midbrain.
These are all credible theories, and on none of those theories are any of the relevant mechanisms present in plants, as far as we know. So I guess I don't want to say that people can't speculate, because it's all right.
And I don't want to say people can't research the question if they want to, but I think it would be a mistake to say that there is evidence now, which is very different from a lot of invertebrates.
Well, in the book, I'm trying to speak to everyone in the range of reasonable disagreement. And I suggest that physicalism is not the only reasonable view and that there are sensibly articulated versions of dualism, panpsychism, panprotopsychism. Often, in the modern versions of those views,
like the Philip Goff version of panpsychism, the so-called Russellian monism, the questions we end up asking about animals end up surprisingly similar. It's just that where other people say sentient or conscious, the Russellian monist ends up saying macro-conscious, because for them, electrons are not sentient beings as such; they don't have pain, pleasure, and so on.
They don't have rich inner lives. And so they still face this question of under what conditions do those tiny micro-conscious states combine to form a unified macro-conscious subject? And then they're asking exactly the same questions anybody else is. So I think it's a reasonable view in a way, but it doesn't make a huge difference to practical debates about... sentience.
Yeah, in terms of my personal views, I try to keep an open mind about these things. I think I've drifted, I suppose, from being a relatively convinced materialist to being less convinced. I think, okay, give those alternatives some chance of being correct, a 10% chance.
Yeah, yeah. Perhaps, I don't know if that's surprising or not, but those seminar room issues about the mind-body relationship, though intrinsically very interesting, don't make a massive difference when the question is, well, should we drop crabs into pans of boiling water?
you know, things like that, where, yeah, there's a very wide range of reasonable views one might have where you can converge on the need to take precautions.
Well, no, or any decapod crustacean, I think. We did a big review in 2021 that influenced the law in the UK on these issues. And yeah, as part of that review, we reviewed evidence that it takes two to three minutes a lot of the time for the crab or lobster to die. And in that time, there's this storm of nervous system activity, as there would be in your pet cat or in any other animal.
So it's a prolonged, extreme slaughter method. It seems like everyone should be able to see the risk there and see the problem and see the need for common-sense precautions. You might not think the response is to ban eating crabs and lobsters. You might think that the right response is to mandate stunning of some kind.
And those debates about proportionality, I think are absolutely central right across the family of cases at the edge of sentience. But everyone should be able to agree on the need to do something.
Yes, well, and I think that's a very widespread view. And what I'm looking for in the book are points of consensus. So realistic range of possibilities in the scientific domain, but also points of overlapping consensus in the ethical domain as well. And I think that duty to avoid causing gratuitous suffering, either intentionally or through recklessness or negligence,
through just not caring, I think people from any reasonable ethical starting point can agree on that and then use that to guide the way we think about these cases where we have sentience candidates.
I think that principle is so weak, in a way, it's so thin, the duty to avoid causing gratuitous suffering, where gratuitous implies the absence of any adequate reason for what you're doing. I think because it is so deliberately thin, it then can command genuine consensus. And then, of course, a lot of people want to go beyond that and say our duties are much stronger.
And I guess I do think this in my own life, but... For the purpose of formulating public policy, it's good to have these quite thin principles. And I think that's one of them. Yeah, okay, good.
Yeah. I hope that we don't have to. What I'm skeptical of is the idea of there being a sort of technocratic solution to this, where if we just find the right currency... And I suppose you have a policy on the table where some people working in the shellfish industry will be disadvantaged. Maybe their costs will go up because you're going to force them to stun the animals before killing them.
And the stunners cost money. And then the question is, well, how do you weigh the suffering of the... ah, you know, my livelihood has been made more difficult, versus the crab spending the two minutes in the boiling water. And I think there's no technocratic common currency that will give us one-size-fits-all answers to this kind of thing.
What I propose in the book is that democratic, inclusive deliberation and discussion is the way forward here. And I'm quite an advocate of citizens' assemblies as the kind of model that we can use for this whole set of issues at the edge of sentience, where they're issues that, well, they call for judgments of proportionality.
There will naturally be disagreements in a pluralistic democratic society about what is proportionate to these risks. And the way we can resolve those value conflicts is democratically through citizens' assemblies.
We do, yeah, not just to crabs, yes. And often to many animals that are widely regarded as sentient, so pigs, chickens, for example, it's quite clear that widespread recognition of a particular species as sentient does not lead people immediately to behavioural change and does lead to lots of gratuitous suffering still being caused. So my focus in this book is on the edge cases, as it were.
But, you know, even in those core cases, we do need discussion about, well, how are we going to change the way we treat these animals?
Well, particularly the UK's Animal Welfare Sentience Act of 2022, my team ended up having some influence on because we were commissioned to produce a report of the evidence of sentience in cephalopod mollusks and decapod crustaceans, so octopuses, crabs, lobsters, shrimps. And basically the government had...
produced this bill that creates a duty on policymakers to consider the animal welfare impacts of their actions, which I think is a pretty good idea. And in drafting it, they needed to say something about the scope of the bill, because you've got to say which animals. Do you have an obligation to consider plankton, microscopic animals? Is it just pets or what?
And they came up with a draft that included all vertebrates and which on the plus side included fishes, which it should, but on the negative side excluded all invertebrates, which led to some criticism from animal welfare groups.
So the government ended up commissioning a team led by me to produce a review of the evidence concerning those two particular groups of invertebrates, and we recommended that they amend the bill to extend the duty to them. And they did. So we got something, you know, we got our central recommendation implemented.
Hi, Sean. Thanks for inviting me.
Now we put a lot of other recommendations in the report as well, which have not been implemented. And so we're still pushing for action on a lot of these issues, but that basic point that the sentience of octopuses, squid, cuttlefish, crabs, lobsters was recognized in UK law. That's something.
I think we're mammal chauvinists a lot of the time. I mean, human chauvinists the most, then mammals. And then sometimes you can get people to take fishes seriously and they still will neglect the interests of invertebrates. So I think really we need to be yet more inclusive.
Yeah, I talk in the book about the OpenWorm project, which I think is still going.
Yeah, where the aim was to emulate the nervous system of C. elegans in computer software, see if you can put the emulation in charge of a robot, see if it behaves like C. elegans. I suppose we've learned something from this, which is how difficult the task is.
There's a lot of stuff going on at the within-neuron level in C. elegans that even knowing the entire connectome does not tell you very much about. So even that is a very, very hard challenge.
But to me, it's a good way into this topic of artificial sentience because you can easily entertain in imagination the idea that this project had succeeded very quickly and then moved on to open Drosophila, open mouse. Once you have open mouse, I think you have a sentience candidate. If you've completely recreated in computer software everything the brain of a mouse does...
Yeah, that's right. There's a lot we don't know from the connectome. One thing you can't read off from the connectome is the weights of the connections, which is hugely important, or how those weights are changed by learning. But also, even if you had all of that, what happens within the neurons is also important.
There are within-neuron computations that are really crucial to steering behavior, for example. And so you wouldn't expect to get the steering behavior in an emulation unless you'd actually emulated the individual compartments within the neurons and how they're arranged in space.
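The point that a connectome underdetermines behavior can be illustrated with a toy sketch (this is an invented example, not code from the OpenWorm project): two networks with an identical wiring diagram but different synaptic weights, which the connectome does not record, end up behaving quite differently.

```python
import numpy as np

def step(state, weights):
    """One update of a toy rate-based network: each neuron's next activity
    is a squashed weighted sum of its inputs."""
    return np.tanh(weights @ state)

# The connectome only says WHICH connections exist: here, a 3-neuron ring.
connectome = np.array([[0, 1, 0],
                       [0, 0, 1],
                       [1, 0, 0]], dtype=float)

# The synaptic weights are not in the connectome. Same wiring, two strengths:
weights_strong = 2.0 * connectome   # strong synapses
weights_weak = 0.5 * connectome     # weak synapses

state_strong = state_weak = np.array([1.0, 0.0, 0.0])  # same initial pulse
for _ in range(50):
    state_strong = step(state_strong, weights_strong)
    state_weak = step(state_weak, weights_weak)

# Identical connectome, different outcomes: the strong ring sustains a
# travelling pulse of activity, while the weak ring's activity dies away.
print(np.abs(state_strong).max(), np.abs(state_weak).max())
```

Knowing the graph alone does not even tell you whether activity persists, which is a crude analogue of why an emulation needs the weights (and the within-neuron detail) as well as the wiring.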
Yeah, I suppose part of what I want to do with this book, The Edge of Sentience, is get people using that term sentience perhaps a bit more. I think Stevan Harnad's been doing much the same thing with his journal Animal Sentience. And I think it is a term that is on the way up. Good. That doesn't mean consciousness is on the way down, but I think it's plateauing and sentience is on the way up.
Well, I think they've been trying, yeah. Um, I'd be in favor of this sort of work receiving more funding than it does. Because, to me, there are risks, there are risks of creating artificial sentience candidates, but there are huge opportunities as well, because you've got the potential to create a system that could replace a lot of animal research.
Because you could be doing research on the emulation, where you can actually intervene at a really precise level, without injuring or hurting. And you could be doing that instead of lesioning living animals. So I'd like to see much more of this, and I think it's been largely funding-limited so far.
Well, the octopus has about 500 million neurons, so I don't know how that translates into synaptic connections. A lot. It's going to be quite a lot, yeah. Crabs' brains are much, much smaller, and it varies a great deal by species, but not dissimilar to insects in terms of the number of neurons. With bees, you have about a million neurons, Drosophila, about 100,000. Okay.
Yeah, yeah, indeed, yeah.
Yeah, these are very hard cases. I suppose when I started writing the book around 2020, I'm not sure the large language models were even on my radar at all. And then they've jumped onto everybody's radar through things like ChatGPT. And I suppose I've been on a journey like everyone else during that time.
I initially thought, well, these are next token predictors and the sector has been moving away from brain-like forms of organization. So it's been taking out things like recurrent processing that on many theories of consciousness are absolutely essential, but transformers take that out. So I thought, well, here is something that is conspicuously unlikely to be sentient.
But then, I'm not sure that's the correct view anymore, I suppose, because I've been quite astonished by the feats of reasoning they seem to perform today, where it's, well, it's reasonably evident that we do not understand how they work. They're incredibly opaque to us. We don't know how they do what they do.
And there seems to be some element of acquiring algorithms during training that were never explicitly programmed into them. So in a way, the architecture that was programmed into them, the transformer architecture, there's no reason at all to think that would be capable of sentience.
And it's a term that, at least as I use it, is an attempt to capture the most basic, elemental, evolutionarily ancient base layer of consciousness, as it were, which is in part just what philosophers like to call phenomenal consciousness: subjective experience, there being something it feels like to be you, whether or not you have any kind of overlay of conscious reflection on what it is you're experiencing. And then also there's a...

But when you have these very, very large models where they've acquired algorithms during training, we don't know how and we don't know what they are. We don't know the upper limit on what algorithms they might acquire. And we don't know what algorithms are sufficient or not for sentience. And so we're not really in a position to be so sure anymore that they couldn't acquire those algorithms.

So, for example, if you think a global workspace is what it takes to have sentience, as many have suggested, we don't know that they couldn't acquire a global workspace.

Well, this is Stanislas Dehaene's theory. His book Consciousness and the Brain is a nice exposition of it. But it's this quite popular idea that consciousness has to do with a network that puts the whole brain on the same page, as it were, by taking inputs from many, many different sensory sources and integrating them into something coherent, and then broadcasting that content back to the input systems and onwards to other systems of motor planning, reasoning, etc. So it's the bit where, you know, there's the central coming together of everything in the brain. And, well, of course, that is designed as a theory of consciousness in the human brain. But the basic architecture
where you have lots and lots of input processes competing for access to this workspace, where once a representation gets in, the integrated content will then be broadcast back and onwards. There's nothing about that architecture that is inherently difficult to achieve computationally. And so we did a big report on this last year, 19 of us.
It was led by Rob Long and Patrick Butlin and had some top AI experts in there, including Yoshua Bengio. And our conclusion was that there are no obvious technical barriers to AI achieving something like a global workspace in the near future.
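The competition-and-broadcast architecture described here can be caricatured in a few lines of code. This is a deliberately crude toy sketch of the global-workspace idea, not an implementation of Dehaene's model; the module names and salience numbers are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Module:
    """An input process: proposes content with a salience score,
    and receives whatever the workspace broadcasts."""
    name: str
    received: list = field(default_factory=list)

    def propose(self, observation):
        content, salience = observation
        return (self.name, content, salience)

    def receive(self, broadcast):
        self.received.append(broadcast)

def workspace_cycle(modules, observations):
    """One cycle: all modules compete for the single workspace slot;
    the most salient content wins and is broadcast back to every module."""
    proposals = [m.propose(obs) for m, obs in zip(modules, observations)]
    winner = max(proposals, key=lambda p: p[2])        # competition for access
    broadcast = {"source": winner[0], "content": winner[1]}
    for m in modules:                                   # global broadcast
        m.receive(broadcast)
    return broadcast

modules = [Module("vision"), Module("touch"), Module("audition")]
result = workspace_cycle(
    modules, [("red blob", 0.4), ("sharp edge", 0.9), ("hum", 0.2)]
)
print(result)  # the high-salience touch content wins and is seen by all modules
```

Nothing in this loop depends on neurons: the competition, integration, and broadcast are all straightforwardly computational, which is the sense in which the architecture poses no obvious technical barrier.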
These kinds of things are underway as we speak, I think. And it puts us in a really difficult position, I think, epistemically. It's really difficult to know what to say about these cases. In the book, I talk about the gaming problem, which is, I think, a huge problem in this area, which is that we've got our lists of markers, developed in good faith for assessing crabs, octopuses, and so on.
If we just test for those same markers in the large language model case, well, there are always going to be two competing explanations. One is that it produces these markers because it genuinely has the state in question. And the other explanation is, well, it produces these markers because it has decided that it serves its objectives to persuade us of its sentience.
And it knows the list of criteria from its training data that humans use to judge that question. And I think by default, that second explanation starts off as more plausible. And when you have people even now being persuaded by their AI assistants that they're sentient, it's not that they've got genuine evidence that they are.
It's that the AI assistants have various goals relating to user satisfaction, prolonging interaction time. And in service of those goals, they superficially mimic the way a sentient human would behave. And now that is a huge epistemological problem that we don't face when we're dealing with an octopus or a crab.
Right, yes. If you're totally naive, yeah, there's ways in which even a cat might deceive you. But I guess I don't think vets, sort of experts, are being deceived. But in the AI case, well, there are no experts, as it were. Right. There's no easy way to be sure you're dealing with the real thing rather than skillful mimicry. And no one has a solution to that problem right now.
a slight extra component as well, which is that I'm focusing specifically on experiences that feel good or feel bad, like pain or pleasure. Valenced experiences, as it were: they have a positive or negative valence. And I'm using that term sentience to capture that capacity for valenced experience, like pain and pleasure.
Yeah, I think that's what the whole Edge of Sentience book is about. This family of cases at the Edge of Sentience where they all have this science meets policy aspect, where they're trying to make policy based on an incredibly uncertain scientific picture. And hopefully one of the roles for philosophy here is to try and stabilise that relationship
and say, well, here is how you can make sensible precautionary policy on the basis of uncertain science.
Well, I mean, I hope that my book is helpful. Good. I hope so. I mean, one has to hope this. And we will see. It's... It's a book that should be judged on its consequences in a way because it's making all kinds of proposals for how we could manage risk better and how we could be more precautionary.
And the book succeeds if people take those proposals seriously and discuss them and think about how they might implement them in their own lives and organizations, institutions, policies.
Yeah, there's a tendency sometimes for people to say, maybe we'll never know. But if you say, but maybe we'll never know, that can't be a license to do whatever you want. It can't be a license to drop the crabs into pans of boiling water and so on. There's got to be sensible precautionary steps we can agree on in the face of uncertainty. And the book is about trying to find these.
Thank you.
And I think it's a really important concept because it, to me at least, captures what is really ethically significant. If a system is sentient in that sense, if it's capable of valenced experience, then its interests matter morally and we need to do something about that.
Well, I wouldn't say that to have a conscious experience, you need to know about it. But the problem is that the term consciousness gets used in quite ambiguous ways. And it can refer to the human form of consciousness, which is a very complex form, I think. And it does have layers. So Herbert Feigl in the 50s talked about sentience, sapience, and selfhood.
Tulving, in a separate body of work, had these terms: anoetic, noetic, autonoetic. They're both ways of trying to capture the idea that there are layers. There's the raw, basic, subjective experience, like the feelings of pain, pleasure, sight, sound, odor. Then there's also the knowledge and the concepts when we think and reflect about what's going on.
And then also, to some extent, there's a sense of self as well: this idea that we recognize ourselves to be persisting subjects of experience with lives that extend into the past and into the future. And these overlays,
They involve levels of cognitive sophistication that you might not need in order to have that base level of just sentience, of just feeling ouch, feeling pain, feeling happiness, joy. Right.
Well, in some senses of the word conscious, yes. That's right, yeah. If you're the kind of person who wants to use this term conscious to refer to that whole package, the sentience, sapience, and selfhood, then yes, there's going to be lots of animals that are sentient without being conscious.
Now, I don't necessarily think we should use the term in that way, but one of the things I like about sentience is that it very strongly draws people towards that most basic aspect, just the raw subjective experience.
Somewhat. Yeah. Which is not to say that it's perfectly defined. You know, there are real limits on our ability to define subjective experience, but, um,
The problem with consciousness as a term is that even when you bracket that issue of subjective experience and its mysteriousness, it's still a term people use to refer to many other things as well, like reflection and self-awareness and those other things.
So I'd rather use a term that is perhaps a little bit more constrained in how you can use it and where people will let you stipulate a bit more. And if I say I just mean the capacity for valenced experience, I think people get that. And they get the need to have a concept that is drawing our attention to states like pain and pleasure, but is a bit broader than that.
And that is not just about pain and pleasure, but about that whole category of feelings, experiences that feel bad or feel good.
Well, Jeremy Bentham famously had this footnote where he wrote in relation to other animals, the question is not can they talk nor can they reason, but can they suffer? And I think that's a
to me at least, a profound insight that if an animal can't speak to us and tell us how it's feeling, if it can't reason very well, as arguably is the situation with a shrimp, for example, it doesn't mean that it's feeling nothing. It doesn't mean that it's incapable of suffering. So it doesn't mean that there aren't things we could do to it that would be cruel and that would cross ethical lines.
Yeah, I think we're already asking the questions, and I think it's right to be asking the questions, and it's right to try and run ahead, as it were, for the ethical debates to be running ahead of where the technology actually is, because we might get quite rapidly overtaken by events in the AI case. Right.
When I think about aspects of human consciousness that might possibly be uniquely human, I think that inner monologue is one of them. It's not something even all humans have. And you get a lot of reports of variation among humans where some people say, what is this inner monologue? I've never experienced anything like that. And other people, including myself, for whom it's there constantly.
And I don't rule out that some other animals might have something a bit like that, but I don't really think crabs do. And I think this is an example of something that is probably a lot more cognitively sophisticated than sentience.
Yeah, it's a bit like that for me as well. But I mean, there's always inner music playing. Yeah, very often. And then there's usually some line of thought running over the music. Not so much when I'm talking like this, because when I'm talking, it's like the inner monologue becomes the outer monologue. But in the rest of life, yeah, it's like I'm constantly having a conversation with myself. Yeah.
But, yeah, I think the need here is to try and distinguish that sophisticated thing I have from just the raw experiences that it's providing commentary on. And those raw experiences the crab may well have.
I think it's a topic of ongoing research. I don't have much to add to that, I think. There's some looping. There's feedback loops. In the past, there were people who thought that the vocal cords were genuinely moving a little bit.
Behaviorists sort of had to think this, right? Okay. Because they couldn't really believe in true interiority, so they had to say, well, what you think is an inner monologue is actually a motor action being prepared, just getting to the tiniest stages but never coming out audibly. But I think, according to current theories, not even that is happening. It is genuinely internal. It's engaging
some of those speech production processes, but they're never reaching the actual motor neurons.
Well, I don't know. But I mean, the point is really to say, well, even if your cat doesn't, it may nonetheless be sentient. Because when we're talking about sentience, we're talking about something much more basic than that. And of course, we have a tendency to strongly anthropomorphize our pets and to imagine our pets as little humans. And we can actually oppose that.
We can resist that and say that's a bad idea, while nonetheless thinking they are sentient beings with ethically significant interests. Right.