The Sentience Institute Podcast

David Gunkel on robot rights

Sentience Institute Episode 20

Robot rights are not the same thing as a set of human rights. Human rights are very specific to a singular species, the human being. Robots may have some overlapping powers, claims, privileges, or immunities that would need to be recognized by human beings, but their grouping or sets of rights will be perhaps very different.

  • David Gunkel

Can and should robots and AI have rights? What’s the difference between robots and AI? Should we grant robots rights even if they aren’t sentient? What might robot rights look like in practice? What philosophies and other ways of thinking are we not exploring enough? What might human-robot interactions look like in the future? What can we learn from science fiction? Can and should we be trying to actively get others to think of robots in a more positive light?

 David J. Gunkel is an award-winning educator, scholar, and author, specializing in the philosophy and ethics of emerging technology. He is the author of over 90 scholarly articles and book chapters and has published twelve internationally recognized books, including The Machine Question: Critical Perspectives on AI, Robots, and Ethics (MIT Press 2012), Of Remixology: Ethics and Aesthetics After Remix (MIT Press 2016), and Robot Rights (MIT Press 2018). He currently holds the position of Distinguished Teaching Professor in the Department of Communication at Northern Illinois University (USA). 

 Topics discussed in the episode:

  • Introduction (0:00)
  • Why robot rights and not AI rights? (1:12)
  • The other question: can and should robots have rights? (5:39)
  • What is the case for robot rights? (10:21)
  • What would robot rights look like? (19:50)
  • What can we learn from other, particularly non-western, ways of thinking for robot rights? (26:33)
  • What will human-robot interaction look like in the future? (33:20)
  • How artificial sentience being less discrete than biological sentience might affect the case for rights (40:45)
  • Things we can learn from science fiction for human-robot interaction and robot rights (42:55)
  • Can and should we do anything to encourage people to see robots in a more positive light? (47:55)
  • Why David pursued philosophy of technology over computer science more generally (52:01)
  • Does having technical expertise give you more credibility? (54:01)
  • Shifts in thinking about robots and AI David has noticed over his career (58:03)

Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast

Support the show

Speaker 1:

Welcome to the Sentience Institute podcast, and to our 20th episode. I'm Michael Dello-Iacovo, strategy lead and researcher at Sentience Institute. On the Sentience Institute podcast, we interview activists, entrepreneurs, and researchers about the most effective strategies to expand humanity's moral circle. Our guest for today is David Gunkel. David is an award-winning educator, scholar, and author specializing in the philosophy and ethics of emerging technology. He is the author of over 90 scholarly articles and book chapters, and has published 12 internationally recognized books including The Machine Question: Critical Perspectives on AI, Robots, and Ethics, Of Remixology: Ethics and Aesthetics After Remix, and Robot Rights. He currently holds the position of Distinguished Teaching Professor in the Department of Communication at Northern Illinois University in the USA. All right. I'm joined by David Gunkel. David, thank you so much for joining us on the Sentience Institute podcast.

Speaker 2:

Yeah, thank you for having me. Good to be here.

Speaker 1:

Great. Great to have you. So I'd like to start with a question about terminology. And I should preface this by saying I actually think I know your answer, because I heard it in your interview with Ben Byford on the Machine Ethics Podcast in 2020. But I think it's a good place for our listeners to start. So why do you call it robot rights and not artificial intelligence rights, sentience rights, or something else like that?

Speaker 2:

Yeah, so it's a really good question, and I think I can answer it with three sort of ways of directing your way of thinking about this. First of all, it's just an alliterative statement, right? Robot rights. So just the alliteration makes it easy to say. Artificial intelligence rights seems a little clumsy, and as a result, robot rights is pretty easy for a person to rattle off, and it has an alliterative, sort of poetic feel to it because of the way it rolls out of your mouth and sounds when you hear it. The second reason is that this terminology is not mine. This is terminology that came to me from other people who preceded me in this work, either legal scholars or people in philosophy or in the area of artificial intelligence and ethics, but they were the ones that began using this terminology. And so my engagement with it was to pick up the thread and carry it further. And since that was the terminology that they had already employed, I sort of inherited it. Lastly, and I think most importantly, the difference between the word robot and artificial intelligence is something that we oftentimes struggle with and try to figure out where to draw the line. Is a cloud-based application a robot? Is it just AI? You know, how do we sort these things? The important thing for me is that AI is the result of an academic workshop held in the mid-1950s at Dartmouth College. Robot, on the other hand, is the result of science fiction. It comes to us from the 1920 stage play R.U.R. by Karel Čapek, and it was really a reuse of the Czech term robota, meaning worker or slave laborer. And already in that play, which gave us not only the idea of the robot but the word robot, the robots are subjugated to human masters, and there's this uprising. So not only does that set the template for future science fiction, but it also gives us this notion of the robot as an enslaved or servant type of figure or individual. And so the robot rights idea fits in that pattern, beginning with Čapek's play, and the way in which that has developed not only in subsequent science fiction, but also in subsequent writings on the legal and the moral aspects of these technologies and the way they connect with us.

Speaker 1:

Yeah, that makes sense. I think for the rest of the interview, I'll probably use robots and maybe have that apply to AI as well, just as shorthand, but in a sense, they're not really interchangeable, I feel. Does that make sense to you? Because AI sort of brings an image of an intelligence that's not necessarily tied to a physical body, whereas robot seems to imply it's tied to a physical body. Does that sound about right?

Speaker 2:

It sounds about right, but I will say that this idea that the intelligence is disembodied is almost a kind of transcendentalist way of thinking that's almost religious. Our AIs are embodied, right? Even the cloud has a body; it's a server connected to wires and to fiber network cables and things like this. So there is an embodiment even for the so-called disembodied AI. It's just that that body doesn't look like our bodies. And so I find this to be a useful distinction, the embodiment distinction, but I also find it a bit troubling, because it leads us to think that the AI has this kind of transcendental feature to it, and it really doesn't when we talk about the resources of the earth, about power, about environmental impact and all these other things that AI certainly contributes to, carbon and climate change. And that has to do with the way it is embodied and where it is embodied.

Speaker 1:

Yeah, yeah, that makes sense. Thanks for mentioning that. So in 2018, you wrote a paper called The Other Question: Can and Should Robots Have Rights? The other question here being in reference to the main question most people focus on when they think about AI and robots, which is about how they affect us. For example, AI safety being a hot topic at the moment, that's mostly about how an AI affects humans. And the other question, I guess, is more focused on the interests of the robots themselves. So you wrote that there's an important difference between the can and the should here. Can you start by talking about that?

Speaker 2:

Right. So I'll make two distinctions, which I think will help sort this out. In moral philosophy, we distinguish the moral agent from the moral patient. A moral agent is someone who can act in such a way that is either good or bad, or is morally culpable or praiseworthy, whatever the case is. A moral patient is the recipient of that action. Now, in our world, we are both moral agents and moral patients. We can do good and bad, we can suffer good and bad, but there are some things that are moral patients and not moral agents. Animals, for example: we don't hold the dog responsible for barking at the postman, but we do hold the postman responsible for kicking and injuring the dog, right? The dog therefore can be the recipient of an action that is either good or bad. And as you said, I saw a lot of people putting a lot of the research they were doing on the side of agency. And the question was, how can we ensure that these devices, these tools, these instruments, these artifacts are employed in our world in a way that has the right outcomes, that doesn't disadvantage people, that doesn't create bad outcomes, whatever the case is. That's an agency question. My question, the other question, was, well, okay, that's great, but what is the status of these things? What is the social, moral, legal position of these devices that are increasingly intelligent and socially interactive? And how do we grapple with the moral patiency question of these machines? And then when we go to the next stage, there is the can/should distinction. And this is really derived from David Hume. David Hume says, you know, you cannot derive ought from is. It's a very famous item in David Hume's thinking, but a lot of people have picked it up and developed it since that time. The can and should question is really this: Can robots have rights? Yeah. All you have to do is make a law that says robots have rights. Now, should you do that is another question. So the ability to do so is maybe entirely feasible, and it is very easy to answer that question. The moral question, the should question, is a little more complicated. And I think that's where we get into the weeds on how we want to shape the world that we live in and how we integrate these things alongside us in our social reality.

Speaker 1:

Mm-hmm. So I'll jump the gun a little bit, just because you mentioned it, and ask: what are some reasons why robots shouldn't have rights? What are some arguments one might use?

Speaker 2:

So one of the strongest arguments is that they're machines, right? They're not people, they're not human beings, they're not animals. They're just artifacts, and therefore they are things. We have this distinction that comes to us from the Romans, from Gaius in particular, that actions that are moral or legal in nature have two kinds of objects, two kinds of entities: there are either persons or things. And so in the category of persons we put you and I, and we put corporations, and we put maybe some animals, but we don't generally put technological objects. Those are things. That's a strong argument based on this ontology that we've inherited from the Romans and the way in which our legal structures especially have operationalized this way of dividing things into persons or property. Another argument is that if we give robots some kind of moral or legal standing, we have complicated our legal system in ways that go beyond what maybe we would like to handle, and that it maybe doesn't lend anything very useful to the way that we decide these relationships. Those are usually the two big ones.

Speaker 1:

Mm. When you put those forward, it makes me think of some of the arguments I've heard in relation to why people say non-human animals shouldn't have rights: it complicates the legal system. So that sounds familiar. But what are the best arguments for why robots should have rights?

Speaker 2:

So there's a number of arguments, and I don't wanna try to be exhaustive here, but let me cover some of the important ones that have been circulating. The literature in this field has absolutely exploded in the last decade. When I started working on this back in the mid-2000s, you know, 2006 is when I started, and the first book comes out in 2012, in those early years it was really easy to keep track of who was arguing what, because the number of different arguments in circulation was pretty manageable. By the time I get to Robot Rights in 2018, this thing is spinning out of control, because a lot of people find reason to engage the question and to deliver their own sort of response to it. So let me just hit a few reasons why we might want to do this. One is directly derived from what we've learned in the animal rights experience. So in animal rights, we know Jeremy Bentham really is the pivot, right? And he said it's not, can they think, can they reason, but can they suffer? Are they sentient? And that opened up the moral circle to include things that had been previously excluded. It had been, up until that point, a very human-centric kind of moral universe. And when we start to engage in the animal question, we widen that circle to include other creatures that are non-human. And the reason why we included animals in the moral circle, whether you're following Peter Singer or Tom Regan or one of the other innovators in animal rights, is because of sentience, because of the experience the animal has of pain or pleasure. And you can see just recently with Blake Lemoine: he was talking to LaMDA, and LaMDA, he said, is sentient, which led him to believe that LaMDA needed to have rights protected for it because it was another kind of sentient creature. So we use sentience as a benchmark, and the question is, how can you tell whether an AI or a robot is sentient? Well, that takes you back to the Turing test, because you can't necessarily look inside and know exactly what it's doing. You can know some of what's going on, but really what Blake Lemoine did is learn from the behavioral evidence that was exhibited to him in the conversational interactions that he had with LaMDA. So that's one reason why people argue for giving robots rights: this notion that they'll be at some point either sentient or conscious or meet some of these other benchmarks that make something eligible for standing, status, and the need for protection. Another argument, and this comes from Kate Darling, who I think was really innovative in this area, uses Kant's indirect duties argument. Kant, unlike Bentham, was no animal rights advocate. Kant thought animals were just mechanisms, like Descartes did. But he argued you should not hurt animals, because when you do so, you debase yourself; you are corrupting your own moral character, you're corrupting your own moral education, and providing a bad example for other people. And so indirectly, you're harming somebody else if you harm an animal. And Kate Darling says, you know, this is one way of thinking about why we don't want to harm the robot: because of the example it sets, because of the way in which it could corrupt our own moral characters. And she uses the Kantian indirect duties argument to make a case for the rights of robots as a way of protecting our social mechanisms, the way that we relate to each other, either morally or legally.
A third argument for doing this, and this is more in line with where I take it in my own research, is that the properties of sentience and consciousness have traditionally been really good benchmarks for things like animal rights and items like that. But I come out of environmental ethics, and in environmental ethics, you don't harm a mountain. Dirt does not feel pain. A waterway does not experience pleasure. Nevertheless, these are part of our integral experience on this planet, and we have responsibilities to the other entities that occupy this fragile planet with us. And I think climate change is a really good example of how we can screw that up if we assume, wrongly, that these things are just raw materials that we can utilize to our benefit. And artifacts may also play a role in our social world in a way that we need to think more creatively about how we craft our moral responsibilities and our legal responsibilities to others. And that is what I'm calling a relational approach to moral status: that it's not what the thing is, but how it stands in relationship to us and how we interact with it, on a scale that treats these other entities as fellow travelers, as kin. And how we can come up with ways of integrating not just the natural objects of the environment, but also the artifacts that we create, into our moral universe, into our legal universe, in a way that makes sense for us, but also for our future generations and for the kind of environment and the kind of world we want to occupy.

Speaker 1:

That was great. Thanks David. There's a lot I wanted to mention from that. First, you mentioned Blake Lemoine and LaMDA. We actually spoke to Thomas Metzinger in our last podcast episode that came out last week, and we talked about that topic as well. So the second argument you made was that if we treat robots in a bad way, in the same way that if we treat animals in a bad way, that might have repercussions for how it affects us and how it affects humans in general. Now it sounds like even if robots are not currently sentient, even if robots maybe can never be sentient, that would still remain an argument in favor of giving robots rights. Does that sound right? Um, yes. Yeah. Great. Yeah, sure. And the last point you mentioned, it again seems to be coming back to how the way we interact with robots affects humans. I know you've spoken about the case of the Whanganui River in New Zealand being granted legal personhood rights as an analogy. And I think you said something like, it's not that people are arguing that the river is sentient, it's just that that's a tool in our current legal system to get protection. And then it's for instrumental purposes; it's about how that affects humans. So giving robots rights now might therefore be an instrumental tool in the same way. Yeah,

Speaker 2:

I think a really good example is that recently 12 states in the US made some legislative decisions and put into effect some laws that try to deal with these personal delivery robots that are being deployed on the city streets. And they decided that for the purposes of figuring out right of way, and who can go in the crosswalk and who can't go in the crosswalk and things like this, it made sense to extend to the robot the rights of a pedestrian. Now, that's not making a decision about robotic personhood. That's not making a distinction that would grant personhood to the robot. It's just saying rights are the way that we figure out how to integrate things in situations where you have competing claims, powers, privileges, or immunities coming from different actors in the social environment. And one way in which we negotiate this is by granting rights. By giving something a right, a privilege, a power, a claim, or an immunity, it has to be at least respected by someone else in that social environment. Yeah. So this is just a tool we have to try to develop ways of responding to these challenges that allow us to occupy space and work with and alongside these various things.

Speaker 1:

I think a pretty clear example of that is how corporations have rights, and they're clearly not sentient, they're clearly not persons, but in the eyes of the law, often they are treated like they are persons in a lot of

Speaker 2:

ways. Can I say, yeah, the real trouble here, I think the real point of debate and contention, is the fact that we're trying to work with two legal categories that are mutually exclusive: person or thing. And as I said before, this comes from 2,000 years ago, when Gaius developed this distinction in his own legal thinking. And our western legal systems have worked with this for a long time, and it's worked pretty well, but that's why the corporation got moved from a thing to a person: because we wanted to be able to sue it. We wanted it to be able to stand before the court as a subject and not just an object. And so this whole debate is about how we negotiate this distinction: a person, on the one hand, that is a subject before the law, and an object or thing on the other hand. And it may be the case that we just need a better moral ontology; we just might need something that gives our legal system a little more latitude with regards to the kinds of entities that can occupy these kinds of positions in our world.

Speaker 1:

Yeah. When it comes to, say, non-human animal rights, I do like Peter Singer's take on what that might look like. It's not that we're asking for non-humans to have the right to drive or to vote, for example. We're just asking that their interests are considered, their similar interests, for example the right to not be harmed. So with that in mind, do you have any thoughts about what robot rights might look like? Or what perhaps one of your most ideal scenarios might be for how that might look in practice? You've sort of talked about that a little bit, I guess. But also just given that robots may have very different interests to us, let's say in the case where they do become sentient. And I'd just like to nudge you as well to maybe mention Kamil Mamak's paper, Humans, Neanderthals, Robots and Rights. That seems relevant here as well; it's talking about moral patients versus agents. So, yeah, please talk about that a little bit.

Speaker 2:

Yeah. So, you know, this is where I think the analogy with animal rights starts to, if not break down, at least reach a limit. Mm-hmm. <affirmative> Animals, I think we can say, have interests, and we can guess pretty well what they are. Even though it's guesswork, we can pretty much figure out pretty well for ourselves, you know, what the dog has as an interest: food, a walk, whatever the case is, right? Our current technologies don't have interests, and if we're looking for interests, we might be barking up the wrong tree. At some point in the future, that's possible, but I don't want to hold out this expectation that we should wait for that to happen before we engage these questions. If that does happen, then we may find ourselves having to deal with this analogy to the animal much more directly. But I think for now, the real issue is: we have interests, and these objects are in our world, and we need to figure out how to integrate them in our moral and legal systems in a way that makes sense for us. And just responding to these devices as if they were tools or instruments doesn't seem to work. In other words, the robot is sort of in between the thing and the person, and it resists both reification and personification. And that's the problem, right? That's the challenge and the opportunity before us. How do we scale existing moral and legal systems that work with these two categories for something that seems to resist one or the other? And I think what Kamil has done in his work on the subject of moral patiency is really instructive, because he's saying, we're not looking for similitude. We're not looking for robots to be like human beings and therefore have the same rights as a human being would have. Robot rights are not the same thing as a set of human rights. Human rights are very specific to a singular species, the human being. Robots may have some overlapping powers, claims, privileges, or immunities that would need to be recognized by human beings, but their grouping or sets of rights will be perhaps very different. And I think Kamil's point is that difference actually matters here, and that if we're looking for similitude, we are going to actually paint ourselves into a corner on the subject, where we'll be really missing what's important. And I think the focus on difference, and how to integrate what is different into our current way of thinking about our relationships to other kinds of things, will actually help. And again, I think environmental ethics is a really good guide here, because we don't want to think about the water as being like us. It's not like us, right? What is the water to us, and why is it different? And how is that difference important to our way of engaging it and living alongside it?

Speaker 1:

Yeah. So to go back to that paper, there's one point I found kind of interesting. They used Neanderthals as an analogy, I guess, for robots, in that there seems to be some evidence, and I'm not really familiar, I'm just going off that paper, that Neanderthals might not have moral agency per se; they might have moral patiency. Correct me if I'm mistaken here, but they're arguing that we might want to treat Neanderthals, if they were in our current society, in the legal system, more as, say, we would a human child, in that what we do to them matters, but we might not necessarily hold them accountable or to blame for the actions they do to us. Does that sound about right, and if so, is that a reasonable analogy for how we might treat robots in a legal system?

Speaker 2:

Right. So let me say two things. One, a disclaimer: I know very little about Neanderthals <laugh>, so I'm not gonna speak directly to that. Yeah, we're on the same page there. So I can't really enlighten you in any appreciable way in that regard. But I will say, and again I'm gonna go back to environmental ethics: Thomas Birch, who was an environmental ethicist, said that, you know, what we're talking about is power. When you talk about widening the moral circle, or widening the inclusion of what is on the inside and what's on the outside, someone on the inside decides whether or not to expand the circle, right? And that is a power relationship, where those in power extend inclusion to those without power. You can see this already in previous rights expansions. Mary Wollstonecraft, who wrote the Vindication of the Rights of Woman, had to pitch her argument to the men who were in power and held the rights, because they were the ones who could expand the circle to include women in moral and legal consideration. The same with animals, right? In order to include animals, someone had to pitch an argument on behalf of the animals, but they were inside the circle to begin with; otherwise they would not have been able to make the argument for expanding that circle. And I think this is the same thing we see playing out with, you know, rights expansion beyond even animals. This is a dynamic that is very much related to power and politics, and how this plays out is really something that is in our hands, because we're the insiders. We're the ones who make these decisions. So how robots get integrated into our moral and legal systems is entirely ours to decide, and therefore we need to engage this responsibly, in ways that really adhere to our moral standards and protect our futures.

Speaker 1:

To change the topic a little bit, you've said that AI ethics is often dominated by western thinking, and that expanding the dialogue to include other ways of thinking, like, for example, indigenous thinking or animism, could be useful. In some of our research at Sentience Institute, we found that people who reported more belief that artificial beings like AIs and robots can have spirits also tended to extend more moral consideration to them, which doesn't sound that surprising, I guess, but it's an example of how maybe some other ways of thinking might actually be beneficial to bring into this discussion. So do you have any other examples of how, say, bringing indigenous thinking, animism, or other ways of thinking into the AI ethics conversation might be useful?

Speaker 2:

Yeah, so let me just say that a lot of the AI ethics discourse has been distinctly western, right? We've used consequentialism, we've used deontology, we've used virtue ethics, we've used traditions that are very much grounded in a Western European Christian sort of tradition. And there's nothing wrong with that, except that we've gotta recognize that that's not a universal position, right? That's very particular. And for people who live in the global north and have grown up with these philosophical and religious traditions, it may make sense, but the rest of the world looks at things from different perspectives and does things that do not necessarily track with what comes out of a Western experience. And so I think you're exactly right. There's ways in which we can look beyond our own way of thinking about these matters, and do so to help inform this with a more global perspective and draw on a wider range of human wisdom as a way of developing responses to this. Now, I'll caution, we gotta be careful here, because this could turn into Orientalism, right? This is one of the premier sort of colonialist gestures: you go out to the other and you take from them what you think is gonna help you in your own endeavors. And we've gotta protect against that kind of gesture. It's not about going and colonizing these other ways of thinking in order to mine from them some sort of insight that we lack in our way of doing things. It's about learning, it's about engagement, in order to be students of other ways of thinking and to learn from these other traditions how to see and engage the world in ways that will be different from what we may have grown up with and different from the standard practices that we have from our own traditions. So I'll mention just a couple of things that I think are useful here. One is, I think, African philosophies like Ubuntu. Obviously Ubuntu is not one philosophy; it's a collection or a constellation of different philosophies, but it is much more holistically oriented and less individualistic. Whereas Descartes said, I think, therefore I am, the philosophers arguing and working in the Ubuntu tradition say, you know, I am because we are. And it comes much more out of a communal kind of relationship to a wider perspective on the world. And I think that can help us, because I think a lot of the work that is done in AI ethics and even in the robot rights literature tends to be very much focused on a Cartesian subject that is sentient, that is conscious, and that becomes the unit of analysis. If you look at things from a more holistic, communal perspective, we're looking at it then in the more relational approach that I had described earlier. Another tradition I think can be really useful is looking to indigenous epistemologies and cosmologies. And again, there is no one indigenous epistemology. There is a plurality, a multiplicity, because they're very different across the world. But there are ways in which our very idea of rights is already a western concept, right? This idea of God-given rights to the individual, that's very Christian, it's very European, it's very modern. And the pre-modern sort of indigenous ways of thinking about these things look not at rights, they don't have that concept; they talk about kinship relationships, and how do we build kin with our machines?
How do we exist alongside these entities that are our tools, our servants, our instruments, in a way that doesn't turn them into a slave, that doesn't turn them into something that is beholden to us? And I think kinship relationships as developed in a lot of indigenous traditions can be a nice way to sort of complicate the rights literature that we often bring to bear on these questions. And then the third thing I will say, and this comes out of Confucianism and some research that some Confucian scholars have done recently: instead of talking about robot rights, R-I-G-H-T-S, they talk about robot rites, R-I-T-E-S. That is, the idea of a ritual, of a performance, and that the robot is engaged with us, alongside us, in performative activity. And as a result, they are engaging us in rites of social interaction, and we should put the focus not on rights, R-I-G-H-T-S, but on rites, R-I-T-E-S, as a different way of shifting the focus from an individual possession to a communal performance.

Speaker 1:

Yeah, that's interesting. Do you have any examples of how that might look in practice? What would that entail doing?

Speaker 2:

So this is what I've tried to develop, especially with this relational turn concept that I, along with Mark Coeckelbergh, have really been formulating and researching for the last decade or more. This idea is not ours alone. It comes out of environmental ethics. It comes out of STS and feminist ethics, like Karen Barad and Rosi Braidotti. But it's this idea that we need to begin to think about our moral relationships as maybe taking precedence over the individual moral entity, and that we are all born alongside others, and that we are already in that relationship prior to this extraction of our sort of identity of ourselves. So it works counter to the Cartesian way of thinking about being in the world, where Descartes is sort of isolated from others and then has to go out to others and figure out his responsibilities to others. This way of thinking is always already responsible to others, and the individual is a product of that sort of interaction. But, uh, yeah, that's a life's work right there.

Speaker 1:

Sure, sure. Thanks. Do you have any thoughts about what human-robot interaction might look like in the future? We've talked a bit about the legal context, but there would be a lot of aspects of interaction. And I guess you could answer this in the long term, where, as you say, at some point in the future robots may become sentient, but there's also the short-term answer. I mean, we have human-robot interaction now that's not necessarily related to robots being sentient. So is there anything that we can expect in the future? One thing I want to talk about as well is how much we can learn from science fiction. How much of that is lessons about what we might see, and how much of that is just fantasy?

Speaker 2:

Yeah. No, this is a really important question, because I think sometimes we think that the robot invasion is something from the future, right? We're waiting for it. The robots are gonna rise up, or they're gonna descend from the heavens with guns and, you know, bombs, and they're gonna attack us. And that's a science fiction scenario. I think the robot invasion is way less exciting, way less dramatic, even more mundane. It's like the fall of Rome. We invite these things into our world, and over a couple hundred years we wonder where the robots came from, because they have infiltrated us in very slow movements of, you know, our decisions to use a device here, to use a device there. So I think we need to look not necessarily at the big picture, long-term kinds of questions; I wanna look more immediately at where we are right now. Like, what is happening in our relationships to these devices that is maybe of interest to us, in changing our social relationships with each other in the process? So one thing we've seen recently, as being reason for both concern but also of interest, is children saying thank you to Alexa. Now, that's weird. We don't say thank you to objects, right? We don't say thank you to our automobile for getting us around town. We say thank you to persons. Some people do, yeah, mostly not. But you know, we say thank you to persons, right? And yet the abilities of these very simple digital assistants to use language bring us into a social relationship where we think that we need to be polite to the object. And there's nothing necessarily wrong with that. There's reason to think that that is part of what makes us social creatures, and that we need to be really concerned with not only what that artifact is, but how we engage it, how we respond to it. I think sometimes people try to write this off as anthropomorphism. They say, you know, this is anthropomorphism, and anthropomorphism is a dirty word because we shouldn't be doing that. I think anthropomorphism is not a bug, it's a feature. It's a feature of human sociology. We do it to each other, we do it to our animals, and we do it to our objects. So it's not a matter of yes or no with anthropomorphism; it's not a binary. It's a matter of careful and informed management. How do we want to manage the anthropomorphism that we are developing and designing in the process of creating these things? And I don't know that we have answers to those questions, but I do know we have lots of ways of engaging this question. Because we not only have the example of talking to Alexa and saying thank you; we have robot abuse studies, in which people find it very disconcerting and problematic to harm something that they're told is just like a toaster. Nevertheless, its social interactivity makes it very difficult to do these things. We can already see, in very rudimentary robotic and AI systems, ways in which we are accommodating ourselves to these objects and bringing them into our social relationships in ways that maybe don't exactly fit our human-to-human relationships, but are creating new relationships. I'm part of a new field in communication called human-machine communication. And that's because we recognize the machines are no longer the medium through which we send messages to each other. They are the thing we talk to, they are the thing we interact with.
And this, I think, raises some interesting immediate questions that we don't have to wait for until, you know, two or three decades from now, when we get sentience or AGI or whatever the heck it is.

Speaker 1:

Yeah, yeah. We talked about this a little bit with Thomas Metzinger as well. It's, I guess, kind of a social hallucination, where we might just all accept, whether it's Alexa or something else, and act like it's sentient even if it's not. One thing I wanna maybe push back a little bit on: I mean, there are some other examples of where people kind of act like something is sentient when it's not, like children with stuffed toys, for example. Or maybe in a very realistic video game, where you are, maybe not intentionally, but you're sort of forgetting that what you're interacting with is an NPC, an AI character, not a real person. So I have to ask, I guess, is that necessarily a bad thing? I mean, you mentioned before that the way we treat robots, even if they're not sentient, might actually be important because it influences how we interact with other humans as well. So is that a good thing, a bad thing? Not quite a clear answer?

Speaker 2:

So I don't think it's a good or bad thing, but it's a thing. It's a thing we have to really take seriously. We talk about suspension of disbelief. When you go to the theater or you watch a movie, the characters on screen are not real, and yet we feel for them, we engage with their emotions, and we have an experience as a result of that. And, you know, in the early years of cinema, that was something that people were worried about. Would people lose themselves in the story and, you know, exit reality and spend more time in the movies than in the real world? Well, that didn't happen. We figured it out. But that's why I say I think it's a management problem. It's a way of managing these relationships and managing these responses that we make to these devices, because that is where I think the real challenge is. I think saying yes or no is way too simplistic. We're not going to fix this by saying, don't do that. I don't think you fix a social problem by saying to people, stop doing something. Prohibition never really fixes the problem. You've gotta figure out how to engage them in a reasonable and emotionally informed response that we are able to effectively manage and that works for them.

Speaker 1:

Yeah. I actually find it a little bit amusing how you mentioned people thought that cinema was going to make people lose themselves in all these fictional worlds. I guess the example I'm most familiar with recently is virtual reality, and I guess video games in general; people had that worry. And I didn't realize there was that worry about cinema. And then I also thought, well, you could go back further and say, it's not like cinema was the first iteration of fiction. There were plays, there were books. So unless something is particularly different about this new medium, maybe that the newer mediums are more engaging, it is kind of interesting and funny for me to think about. So one example from science fiction that I wanted to get your thoughts on: in science fiction, artificial entities are often seen as being quite discrete. So, for example, often you have a robot, and that robot is sentient, and their mind is encased in that physical robot. But in reality, it might be a little bit more complex. You might have, say, a single sentient artificial entity that controls multiple different robots. And you mentioned already that it's a mistake to think about artificial intelligence as being disembodied, because it is embodied somewhere; it might just be more diffuse, more spread out in, say, different servers. So, for example, maybe for an AI that controls multiple different robots, losing an individual robot might be more like, say, losing a limb than a human dying. So in cases like this, where it's much more diffuse and hard to tell really where an AI or robot begins and ends, how might this affect the case for robot rights? Or how might this affect robot rights in practice?

Speaker 2:

So I think here corporate law provides a pretty good template, because corporations are also diffuse, right? Corporations exist in such a way that there is no one place you can go and say, that's the corporation, right? It's all over the place, and it has different manifestations. And I think if we really want to learn from that experience, I think we'll have a good template, at least for beginning to address a lot of these questions. Because I think the relationship is much more direct between AI and the corporation, because both are artifacts, both are humanly created, and both have a kind of status. In the case of the corporation, they have personhood. In the case of AI, we're now arguing whether AI should also have personhood. And again, I think oftentimes we're looking to the animal question as the template for how we decide a lot of questions regarding the moral and legal status of AI and robots. But I think the corporation may be a better analogy to follow as we try to think our way through these things.

Speaker 1:

Are there things you think we can learn from science fiction? Maybe some depictions that were useful thought experiments, or where you might think, oh, they've got it right, that looks like a plausible scenario?

Speaker 2:

Yeah, I think there's a lot we can learn from science fiction, and I appreciate the question, because I think sometimes the response to science fiction by roboticists is this kind of, yes, but no. You know, I'm interested in it, but don't go there, because science fact is way more complicated and it's not as simplistic as what you get in fiction, and we have to bracket off that fictional stuff so we can talk about the real stuff. I think science fiction does a lot of heavy lifting for us in this field. It already gave us the word robot. We wouldn't have the word robot if it wasn't for science fiction to begin with. Secondly, I don't think science fiction is about the future. Many science fiction writers and filmmakers will tell you this. Cory Doctorow is one of them. He says science fiction is not about predicting the future, it's about diagnosing the present. And so what we see in science fiction are present anxieties, present worries, present concerns projected on the screen of a possible future. And so we can see science fiction as a way of self-diagnosing our own worries and concerns, almost like a psychoanalysis of us as a species right here, right now, and what really troubles us. And so if we look at science fiction not as predictive but as diagnostic, I think we can learn a lot from our science fiction. I also think science fiction gets a lot of things right way before we get into those matters in the scientific research. So for example, already in Blade Runner, you have this analogy between real animals and robotic animals. And this whole notion of the electric sheep, which is the title from Philip K. Dick's original novella, is this idea that we are developing devices that are indistinguishable from real entities, and that we could have artificial animals and we could have natural animals. And so this idea, I think, helps us grapple with the way in which we build these analogies to other kinds of entities, the way we analogize the robot by comparing it to the animal, and learning from our relationship to animals how we relate to the robot or the AI. I also think you see in science fiction a lot of deep thinking about human-robot interaction. I mean, we already today are talking about the effects and possible social consequences of sex robots. We've already grappled with a lot of those questions in science fiction. Now, maybe we haven't got the right answers, and maybe we haven't even developed the right inquiry, but we've already seen prototyped for us the kinds of things that we should be concerned with, the kinds of directions that we should take our inquiries, so that when we do engage these questions in social reality, we are prepared to do so. Finally, I think science fiction does a lot of good work making public a lot of things about robots and AI that average people would not have access to. A lot of the research is done in proprietary labs behind closed doors, and we only hear about it once it's released to the public, and then there's either excitement or outrage, as the case may be, right? I think a lot of people, if you ask them what they know about robots, they inevitably are gonna talk about Mr. Data. They're gonna talk about Westworld, they're gonna talk about WALL-E, they're gonna talk about R2-D2. They know the robots of science fiction way before they get into the science fact. This is what's called science fiction prototyping. And I don't think that science fiction prototyping is necessarily something that is bad.
I think there's a lot of education that takes place in popular media, and if we are careful about how we create our stories, if we're careful about how we cultivate critical viewers in our young people, I think we can use this resource as a really good way of popularizing our thinking about these new challenges.

Speaker 1:

Yeah, I really like what you said about science fiction being almost like a thought experiment, which is one of the reasons why I love reading and watching science fiction so much. And I just wanna shout out as well one of my favorite science fiction series, which depicts AI in a lot of different forms: the Culture series by Iain M. Banks. I would recommend people check that out. One thing related to this that we found from some research at Sentience Institute: we found that people with a science fiction fan identity, who self-identified as being science fiction fans, that trait was correlated with people perceiving more mind in currently existing robots and AI, perceiving more mind in robots that might exist in the future, stronger beliefs that the feelings of AIs and robots would have similar value to human feelings, less moral exclusion of robots and AIs, and I could go on. But it does seem like a science fiction fan identity, or being interested in science fiction, has some positive effects. I guess it's hard to say whether that's causal, or maybe if someone has one they're likely to have the other. But that gets me thinking about what kinds of things we can do, almost like an intervention, if we were interested in moral circle expansion in the AI and robot context. What can we do? I don't mean like making people watch science fiction or something, but is there anything that you think we could do to encourage people to think about robots and AI in a more positive light? And should we be doing anything?

Speaker 2:

Yeah, no, it's, again, a really good question, and it's important, because moral circle expansion is something that is part of our evolution in both moral philosophy and in law, right? I mean, we've opened the circle to include previously excluded individuals and groups, and that's been a good thing. And so engaging people in these kinds of exercises, if you wanna call them that, I think is not only instructive for them, but it also contributes to our evolution in our thinking on these matters. As we just discussed, I think science fiction is one place where you can engage people in these questions. I know when I work with my students, one of the things that I find them to be most engaged with and most excited about is when you can take something in their popular media experience and open it up in a way that allows them to really see a lot of these things at play, and gives them some access to it. Because I think a lot of times these technological subjects and these matters seem rather inaccessible. And if you can make them accessible, by fiction, by whatever means, I think that's a really good opener to the conversation. It doesn't mean you end the conversation there, but that's where you begin to cultivate this way of thinking. I think another way to do this, and I have found this to be a direct instance in my own classroom, is by giving people access to devices, by letting them just engage with robots. You know, we have this idea of the robot petting zoo that some people put together at conferences and such, but I think this is important. I think kids are curious, especially younger kids, you know, high school and below. They want to engage these things. They want to take their curiosity and see, you know, what happens. And giving them access, I think, is crucial, because otherwise it's something that happens at Google, it's something that happens at Microsoft, and therefore it's not really a part of what they are. It's not really in their world in a way that they can make sense of it. And I think access is absolutely necessary in that area. I also think education is very key to a lot of this stuff. Again, I think we've limited access to a lot of these materials to specialists in computer science, artificial intelligence, engineering, and elsewhere. I think we've gotta open the curriculum and make this stuff broadly available. You can see already, with the release of DALL-E and the way people are using it to generate images, that we need artists to be engaged with this technology, and we need artists to help us make sense of what this creativity with AI is all about. And if we don't let artists into the conversation, we're not going to learn what we can possibly gather from the history of human artistic expression and experience. The same with music, the same with journalism, the same with any field. I think this technology is no longer able to be limited to one specific field, and we've gotta teach it across the curriculum in a way that begins early and gets the curiosity of our young learners engaged from the very early stages of their careers.

Speaker 1:

Great. Thanks for that. So just a couple of questions to sort of wrap this all up. I've noticed that you had an interest in programming from a young age, and you've actually developed internet applications; you're an established developer. But instead of pursuing computer science more generally, you followed a career in the philosophy of technology. Why do you think that is? What interests you about the philosophy of technology more so than coding itself?

Speaker 2:

Yeah, this is interesting, because I used web development as the means to pay for the bad habit of going to grad school <laugh>. But it's funny, because those two things tracked really nicely, because one was very hands-on, very practical, and the other was very heady, very theoretical, and so they sort of balanced each other out. But one thing I noticed as I was doing these things simultaneously is that the one could really speak to the other, if somebody would build the bridge: that what we were doing in programming and in developing applications could actually gain from the traditions of human learning, from epistemology, from metaphysics, from ethics, you name it. If we would only build the bridge to those philosophical traditions, we'd be able to open that conversation. And I think we've been rather successful with that, if you see how AI ethics has really exploded in the last five years. But it also goes the other direction. I think the computer and digital media and AI and robots can actually provide philosophy with some really interesting thought experiments on the one hand, but also some interesting challenges to human exceptionalism and the way we think about ourselves as being the center of the universe, and therefore the only sentient creature to, you know, exist on planet Earth, which obviously isn't true. So what I saw was this ability to use the one to inform the other, and the reason I went in one direction as opposed to the other just had to do with the fact that, it turned out, I'm a better teacher than I am a programmer <laugh>. And so I pursued the one that was gonna take me in that direction.

Speaker 1:

Yeah. Do you think your work in development has given you some credibility? Because I imagine there might be some people in the philosophy of technology who maybe aren't taken so seriously by people who actually work on artificial intelligence, machine learning, what have you. And I can think of some people who, for example, don't take AI safety very seriously, who work in the actual development of AI. They might think, these people have these ideas, but they don't really know anything about technology; they're kind of naive, is what they might say. So do you think, because you've done both, that gives you some credibility in the tech space?

Speaker 2:

I hope it does. What I will say is that it feels very dishonest to me to talk about machine learning or any other technology and not know how it works. I'll give you some examples from my own trajectory. I wrote a book on remix; it came out in 2016, I think. It took me a while to write the book, not because I couldn't write it, but because I wanted to learn to be a DJ before I wrote it. I spent all that time developing the practice because I didn't think I had the credibility to speak about this artistic practice and the technology behind it without knowing how it works, knowing the tools, and having hands-on experience with them. The same when I started to engage with AI and robotics: I knew there was no way I could speak with any credibility about machine learning, about big data, about neural networks and all these things if I hadn't built one, if I hadn't done the task at hand of actually constructing a neural network, training it on data, and seeing what it does. For my own writing, this is what allows me to speak with some conviction, with some real grounding in the technology. And that, hopefully, is communicated in the resulting text to the rest of the world: that I'm not just making this up; I come to this from the perspective of really knowing what goes on behind the scenes, and I have brought my philosophical knowledge to bear on something I have direct hands-on experience with.

Speaker 1:

I see. I've had a similar experience, in that I actually did a PhD in space science, and I have a geoscience background, and I wanted to work a little bit on, say, some longtermist ethical questions as they apply to space science. I thought that doing that might give me more credibility when I talk about these ethical problems. But in my experience, and perhaps it's too soon to say, it doesn't feel like it. I've been met with a lot of skepticism from the space science community on some of those ethical ideas. But it sounds like that's worked out better for you, so that's good to hear.

Speaker 2:

It doesn't mean that you don't get pushback; it doesn't mean that you don't get criticism. It's always a push and pull: you're always putting yourself out there and then trying to justify what you're doing to others who may be skeptical of it, especially when your ideas might be less than popular, which is often the case in academia. But I think the dialogue is crucial, and meeting people where they're at is part of building that exchange and making it work. I did have a guy on Twitter say to me at one point, "You should shut up, because you don't work in this field and you don't know what you're talking about." So I sent him the neural network that I built. I just sent him the code and said, "Here."

Speaker 1:

<laugh> That must be satisfying in a kind of <laugh> vindictive way.

Speaker 2:

It was very satisfying.

Speaker 1:

I can imagine. Well, just to bring this all together: over your career so far, do you think you've noticed any shifts in how we think about robot rights? Just to prime you, one might be a shift from people thinking about robots as moral agents to thinking about them more as moral patients. What have you seen over your career so far?

Speaker 2:

Yeah. As I said earlier, when this started for me it was really small; I could count on my fingers how many people were working on this subject, and that was it. And it's really exploded. I think the work that you've done at Sentience Institute documenting this in your literature reviews really shows this exponential increase in interest, but also in scholarship in this field. On the one hand that's very gratifying; on the other hand, it's hard to keep up <laugh>, because there's so much happening and there's a lot to read and stay on top of. But I will say that a couple of trends have emerged in this process. There has been an increasing move from this being a speculative moral subject to this being a much more pragmatic and practical legal subject. My own thinking has evolved in that way. My first book, The Machine Question, was very philosophical, very much situated in those moral traditions. My most recent book, which is going to be called Person, Thing, Robot, coming out from MIT Press next year, is much more engaged with legal philosophy and with legal practice. And that, I think, is a reflection of how the trend in the research has gone over the last decade. Another thing I've noticed is a development in bringing non-western ways of thinking about these questions into the conversation. When these questions began over a decade ago, the way in which I and others were engaging them was by leveraging the western moral and legal traditions to try to speak to the people who are building these things and developing these technologies. Over this past decade, we've seen a greater desire to engage with other ways of thinking and other ways of seeing, not as a way of doing something better or worse, but as a way of tapping into the difference we can see in human thought processes, which allows us to cultivate a relationship to the wide range of human wisdom as it has developed over time, but also over space. And the last thing I've seen, and this is very gratifying: when I started this and began to talk about robot rights as a subject matter for investigation, there were a lot of very abrasive, very triggered reactions. "How can you say that? This is just horrendous. Who would talk this way?" I had this now rather famous picture I put on Twitter of me holding a sign that said "Robot Rights Now." It really sparked an amazing, huge controversy about a decade ago, well, about five years ago, and I learned a lot in that little exchange. It was an explosion of interest, but also of pushback. I think we've seen it evolve to the point now where people are saying, yeah, we need to talk; this has got to be talked about, this has got to be grappled with. We can't just put our fingers in our ears and go "blah, blah, blah, this doesn't exist." It does exist. Laws are being made, hearings are happening. AI personhood is not something for the future; it's something that legislatures are looking at right now. And as a result of all this, taking these questions seriously and engaging with them in a way that is informed by good moral and legal thinking is absolutely crucial.
And I've seen that mature in the last five years in a way that really speaks to the fact that a lot of people have found this to be not only of interest, but also something that is crucial for us to engage with as researchers.

Speaker 1:

Great, thanks for that, David. Just to finish up, where can listeners best follow you and your work? And is there anything in particular you'd want to suggest they look at, whether it's a book or any other piece of work, especially if they're interested in this topic and want to learn more?

Speaker 2:

So you can follow me on Twitter; my handle is David_Gunkel, and you can find it very easily. My website is gunkelweb.com, and you can go there for access to texts and books and things like that. I would say right now, if this is of interest to you and you really want to jump in feet first and see what it's all about, the two books that began all this were The Machine Question: Critical Perspectives on AI, Robots, and Ethics from 2012, and Robot Rights from 2018, both published by MIT Press. You should be able to get both of them used for very cheap these days, or go to the library; they have them too. That's a pretty good way, I think, to get into this material. Because of the kind of research I do, I try to be as exhaustive as possible in documenting what people have said, where it's going, and where it's come from, and hopefully make sense of it. So it will hopefully provide people with a good guide to finding their way through this stuff and figuring out where they stand.

Speaker 1:

That's great, thanks. We'll have links to all of that in the show notes, along with everything else we've referred to in the show. So thank you again, David; really appreciate your time, and thanks for joining us.

Speaker 2:

Yeah, it's been really great to talk to you, and I appreciate the questions. As you said early on, there's a reason the Sentience Institute is interested in these questions and there's a reason I'm interested in these questions, and I think they dovetail very nicely. It was great to talk with you about these matters.

Speaker 1:

Thanks for listening. I hope you enjoyed the episode. You can subscribe to the Sentience Institute podcast on iTunes, Stitcher, or any podcast app.