The Future of You

Robot Rights and Relationships #9

Tracey Follows Season 2 Episode 9

What is the origin of the word robot? Who makes the definitions of these words: robots, avatars, nanobots, and so on? Who gets to define them? How do we decide? Is there a new social category emerging, and should we see non-biological intelligences through the lens of our own social interaction?

Do avatars as representations of the self need the same rights as humans, and importantly, should there be distinctions around the governance of virtual worlds versus physical worlds? What do we need from our relationships with this new set of beings in the future?

Today, I speak with Dr Julie Carpenter, research fellow in the Ethics and Emerging Sciences Group at California Polytechnic State University, and Dr David Gunkel, Professor of Media Studies at Northern Illinois University. Dr Carpenter's primary research goal is to investigate human-technology emotional attachment and trust, looking at the ways in which embodied AI offers a testing ground for media, communication and cultural theories. She is currently writing her latest book, The Naked Android. Dr Gunkel's work focuses on the philosophy of technology and the emerging ethics of AI, robots and algorithms. He has published twelve influential and award-winning books, including How To Survive a Robot Invasion, The Machine Question, Gaming The System, and Robot Rights. His latest book is called Deconstruction and is part of MIT's Essential Knowledge Series.

This episode of The Future of You covers:

  • The origin of robots and who defines them
  • The person-thing dichotomy in legal and social categories
  • Avatar rights and responsibilities
  • The governance of virtual spaces
  • Emotional valence of non-human personalities
  • How to engage the public in thinking about the rights and relationships of robots

The Future of You: Can Your Identity Survive 21st Century Technology?: www.amazon.co.uk/Future-You-Identity-21st-Century-Technology-ebook/dp/B08XBN4GBB

Links and references at: www.traceyfollows.com



Tracey Follows:

Today I'm joined by two guests to discuss robot rights and our relationships with robots. Dr. David Gunkel is Professor of Media Studies at Northern Illinois University, where he has been for over 20 years. His work focuses on the philosophy of technology and the emerging ethics of AI, robots and algorithms. He has published 12 influential and award-winning books, including How to Survive a Robot Invasion, The Machine Question, Gaming the System, and one of my all-time favourites, Robot Rights. His latest book is called Deconstruction and it's part of MIT's Essential Knowledge series. Dr. Julie Carpenter is a research fellow in the Ethics and Emerging Sciences Group at California Polytechnic State University, and her primary research goal is to investigate human-technology emotional attachment and trust, looking at the ways in which embodied AI offers a testing ground for media, communication and cultural theories. She is currently writing her latest book, The Naked Android. I was very lucky to get David and Julie together to share their wealth of expertise in this area. We covered a lot: everything from the origin story of the word robot, to who makes the definitions of these words, robots, avatars, nanobots, and so on. Who gets to define them? How do we decide? We talked about a new social category that's emerging, and how we should see non-biological intelligences through the lens of our own social interaction. We discussed anthropomorphism versus instrumentalism, and how we regard things and label them if we really want to integrate them into our lives properly, and David gave us some great, concrete examples in law, live right now, of robots already having the rights of a pedestrian on the sidewalk. Of course, we also talked about avatars as representations of the self and, importantly, made some distinctions around the governance of virtual worlds versus physical worlds. And we discussed a lot more besides, including what we need from our relationships with this new set of beings in the future. So without further ado, let's listen in to David and Julie on robots, rights and identity. So David and Julie, welcome to The Future of You. Thank you so much for spending the time coming in for a bit of a chat. I wonder if you could start by thinking about the emergent environment that we are in now and what might be coming down the line in the future. I'm thinking about a world in which it's not just animal, vegetable, mineral, but one with some non-biological intelligences. We might think of robots first, probably because of some of the aesthetics that have come to us through science fiction, but there are lots of other versions of non-biological intelligence, and you are the two best people to talk to about this. I wondered if we could start with you characterising the world that we are heading into.

Julie Carpenter:

I know that we're going to work our way towards AI, and rights and autonomy and everything, in this conversation; that's David's bailiwick. I like to start with word definitions and operationalise them, not just because I'm a social scientist, but because I study how we communicate, right? The words we choose, and who chooses them and who applies them in science, are all modes of power that also shape these technologies and tools. So before I start characterising the future of what could be intelligent technologies, in my line of work I like to start thinking about what kind of intelligence we are moving towards in these different technologies, because you can have lots of conversations about the different kinds of intelligence, right? Human-like intelligence, or new kinds of intelligence.

David Gunkel:

I would agree with that, and I'd just like to amplify the fact that there is a lot of power involved in who gets to decide these things. Whoever gets to issue a word or a term gets to empower us to speak that language. That is a move that some of us are empowered to make and some of us are not, and one of the questions we always have to ask is: who was empowered to make these decisions and occupy that space, and who was not? But I will also add, beyond intelligence, that another way to look at this is to ask what is changing in our world with regards to social interaction. For the longest time, the human being was the only creature capable of speech, and we used to define the human being as the animal capable of speech. That's the Greeks' definition, right? But now everything talks to us: our car talks to us, our refrigerator talks to us. We are going to be occupying a world in which social interaction through vocal performances of one sort or another is available to us in all kinds of non-human objects. And that, I think, challenges our definition of ourselves, our notion of human exceptionalism, but also our notion of who or what is a social entity.

Tracey Follows:

So it will include robots, which I know you've both done lots of work around. Who defined what a robot was, then?

David Gunkel:

Well, that goes back to Čapek, right? Karel Čapek, in his 1920 stage play RUR, introduces the word robot. It comes from the Czech word robota, which means labour, or some people say forced labour, but in Czech it's a little less value-laden; it just means work, labour, servitude. And since Čapek's time, that is where the word robot was appropriated from and drawn into our dialogue. So really, the science fiction origins of robots are very well established. And the reason a lot of people, when you ask them what they think about or know about robots, will give you a science fiction sort of example is because that's really where this whole thing began.

Julie Carpenter:

And I have to chime in, because I love this origin story of the word, and of course the storytelling and cultural factors are, as I was just saying, the whole basis of a lot of my research. What influences our ideas? What comes into your mind when you even think of the word robot? For example, in Čapek's RUR, the robots were actually kind of cyborgs; I think they had some biological aspects to them. But over the years that word has changed to have this meaning, and it has had such a strong influence on other aspects of our storytelling that when you use the word robot to this day, in our brain we go right to that human-like robot, right? I also love that it comes from a word that means labour; that makes for an interesting introduction of the concept of robots into our lives, because they have already been labelled as workers, right? And some of the conflict in the book, on the surface, comes from how people navigate it when they start to treat the robots socially, or observe the robots interacting socially. Of course, there's the whole idea of rights and autonomy and political uprising as well, but that whole fuzziness starts in that book. That's the word, but of course the idea of people creating artificial life goes back to fables throughout antiquity, right? Everything from the Bible, really, where Adam, through God and the rib and whatever, creates Eve. But you have all sorts of fables and stories around the world, culturally, about statues coming to life. Frankenstein is a good modern one, and golems are another good one, from the Jewish tradition: artificial human life, protectors and sometimes labourers. And all of these stories, at their root, can be entertaining and didactic on the surface, but they really make you think about how to treat and care for things that may or may not be sentient, how we treat others and how we other things in relation to ourselves, and how we navigate choosing to be in this world with things that are non-human, and not necessarily organic, but have aspects that we recognise as social.

Tracey Follows:

So how do you think we should treat them? Should we treat them as persons, should we treat them as property, or should we treat them as something else completely?

David Gunkel:

This is the subject of my new book. In Western traditions in particular, we inherit from the Romans a categorisation, introduced by Gaius, between persons and things. You can see this in current legal practice, where you're either a person in the eyes of the law or you're an object, a thing. And the problem, or the challenge, that I think comes from AI, robots and other socially interactive artefacts is that they kind of straddle the person-thing dichotomy. At times they invite personification; at times they invite reification. But in the final analysis, the robot really resists either category. So we're looking at a case where we have an entity among us that doesn't easily fit the available categories for making sense of things, either morally, socially, or legally. And I think that's the real challenge that we're facing in the 21st century.

Tracey Follows:

Would this be the liminality of them?

David Gunkel:

It is, yeah. They straddle these categories, and they sort of make the boundaries between them blurry, as Donna Haraway would say.

Julie Carpenter:

They're emerging as their own social categories. Right now, we've already narrowed the scope of what we're talking about, even unintentionally, from where I started when I said, hey, we're talking about robots. That's a huge subject, right? You've got nanobots, you've got mecha, you've got industrial robots. And right away our conversation zoomed in on anthropomorphic robots, I think, which is what we're talking about. In a lot of ways, that's really what excites a lot of people and what most people think of as robots, as opposed to something that is maybe just a machine, like a robot on a factory line that's marketed as a robot because it has some smart capabilities, right? The general public might not consider that a robot; they might consider it a machine. And that speaks to how we regard things and how we label them, which comes back to language. If I tell you it's a robot, you might think of it differently than if I tell you it's a machine, or a tool, right? So I think there are emerging social categories where there will be nuanced differences. I've said this before: humans are so good at social categorisation, right? We do it all day; we move between different ways of communicating with and regarding different people in our lives all day. We're code-switching our language. Sometimes we do it very strongly, with in- and out-cultures, and sometimes we do it in nuanced ways; you might talk to a colleague differently than someone you bumped into on the street or in a drugstore or something.

Tracey Follows:

Perhaps you could turn to the whole issue of robot rights. I know you've both worked on this, but David in particular, obviously, I've read your books on robot rights. I wonder if you could give a quick précis of the thinking and the landscape around where we've got to on robot rights.

David Gunkel:

We can go back to Čapek. Already in Čapek's play there is the question of the rights of the robots, right? That's the motivating narrative feature, and it really recurs in every single robot story since Čapek's RUR. But for us, I think we've got to look at rights as social recognition: a way of being able to recognise something not as a thing, but as another social entity requiring some kind of obligations on our part. I would say that rights are always correlative: something has a right when something else has an obligation or responsibility. So the question of robot rights is: in the face of these socially situated, interactive entities, what obligations do we face? What new challenges do we face in recognising the social position and status of these non-human artefacts, which are a social other, but unlike the others we normally deal with, other humans, other animals, et cetera? It sounds very abstract, but it is actually happening in real practice right now. I'll give you a very concrete example. In the United States, 12 states have now had their legislatures make laws that try to recognise the status of robots that are delivering food or packages on city streets and need to cross at the crosswalk and be on the sidewalk. So in Pennsylvania, Virginia, and a few other states, there is now a law that says the robot has the rights of a pedestrian when it's on the street and on the sidewalk. Now, that's not a decision about personality for robots. It's not saying this is a legal person. It's not any of the big declarations we might see this as necessitating. It's simply a way of figuring out who has the right of way: what the status of one entity is when it encounters another entity in a social environment like a crosswalk. So this stuff doesn't have to be big, heady, philosophical kinds of thinking; it can be very mundane. We've got to figure out how to live with these things, how to integrate them into our social environment in a way that makes sense, and rights is, for better or worse, the word we mostly use in our legal categories to make sense of these things. And as a result, we're confronting these questions now. There's another example: Ryan Abbott is doing it with patents and copyrights, asking whether or not an AI can be recognised as the holder of a patent, or as the author for the sake of copyright. So we're already seeing some movement in this direction. But it's not a liberation movement. It's not the robots rising up and demanding their rights as in the science fiction scenarios. As I always tell my students, it's less like Westworld, less like Battlestar Galactica and Blade Runner; it's more like a really boring episode of Law and Order.

Julie Carpenter:

You know, your conversation reminds me, speaking of animals and dogs, of something I talked about in my first book: military working dogs. Until just a few years ago, at least, the US military really only had, I believe, two classifications for anything in the military: personnel and equipment. Military working dogs didn't fall into either of those pillars, so they would often fall through the cracks. Trainers who were very close to these dogs, these canines they worked with, found that sometimes the dogs weren't taken care of through retirement in the way that they should have been, as I think people who love animals would say. So there was a very strong movement to say, look, we have to either develop a third category, or these dogs have to be recognised as either personnel or equipment, so we don't just abandon them overseas, or put them up for adoption with people who aren't ready to handle a dog that's been trained for military work. So again, that was rights and responsibilities and legal categorisation to protect the animal, right? We love dogs, so that feels like a no-brainer, right? Robots are more complex. But I think David would agree there's that same gap: when the military set up those categories, they weren't thinking about things like emotional connections to military working dogs and how to categorise them. And I think it's safe to say that's why we're having conversations like this. Most of us haven't predicted, or anticipated, or thought through how we would interact with all of these different robots coming at us in the real world, in different forms, in different situations.

David Gunkel:

I've often had people who are critical of any talk about robot rights say to me, well, we don't have to talk about this now; wait until they're conscious or sentient, then come and talk to us about robot rights, we'll be ready 20 or 30 years from now. And my argument has always been that we've got to get out in front of this before any of that happens, because we are already having these emotional connections, the same way that soldiers have had emotional connections with dogs. And if we're going to get it right, we have to have a framework in place to address these questions and respond to them in a way that not only protects the robot but, more importantly, protects us: protects our social institutions and our legal categories from erosion in ways we would find unacceptable.

Tracey Follows:

Yeah, and I think if you can get out ahead of it, it means people understand it much better than through the headlines they're used to seeing, which are, you know, Bill Gates says we should tax robots, or can this robot be given legal citizenship. And there's a whole category of rights which feel to me more akin to claims, non-discriminatory claims and so on, than to protections. Obviously, people are not going to get involved in the nitty-gritty of this in their everyday life; they've got things to do, and they're very busy. But if you can get out ahead of it, you can guide the conversation, can't you, and, to your earlier point, make the definitions before they're made for us by people who maybe aren't thinking in such a sophisticated way, or in any sort of philosophical way, and aren't really thinking about the human interaction with these beings. They're just thinking about what can be extracted from them.

David Gunkel:

To go back to where we started when Julie was talking about power, if indeed we want to participate as democratic citizens in the shaping of our future laws and social environment, we need to know what the conversation is. And we need to get involved in the debates. If we don't, someone's going to do it for us. And that will be imposed from the outside where we don't have a voice in how we shape that future. And that is a responsibility on each citizen and any democratically elected government to participate actively in making these decisions, and guiding how we proceed into the next decade or two.

Julie Carpenter:

Well, I was just thinking of the other examples, which I know David is the expert on, of non-human but organic things, aspects of the environment, that have gained rights: rivers, mountains, things like that. David, would you speak to that? Because I think that's so interesting.

David Gunkel:

This is interesting, because it is a strategy that has been used by both animal rights activists and environmental rights activists: to use what is ostensibly corporate law as a way of recognising and protecting natural features of the environment. So in New Zealand, they tried to protect a river and its surrounding ecosystem by getting it declared a person. That is really using the strategy that was already in play in the 19th century for recognising corporations as persons. The same has been done with some mountains and some other ecosystems; in South America, certain entire ecosystems are recognised as persons by legislative decree. These are ways of working with the person-thing dichotomy. If we only have persons or things, then one way to protect the natural environment from exploitation is to move it from the category of natural resource, thing, object, into the category of person, so that it can be protected as a subject of law. And this has turned out to be a rather crucial innovation in a lot of non-human rights efforts. Now, is the same possible for robots? Sure it is. But I think the more interesting, or more important, challenge we have isn't to personify robots but to go back to what Julie was talking about. Maybe we need a more nuanced legal ontology; maybe we need a much more fine-grained way of dividing up the world into categories. The military couldn't do it with military animals; they had to decide, is it equipment or personnel? The same is happening with person and thing in our legal categories. We just might need to be a little more creative in the way we create our categories for both moral and legal status.

Tracey Follows:

I mean, when it comes to authenticating these things, because they are in this liminal space: at the moment, when we're thinking about authentication or verification, we've still got that demarcation of people and things. As we move into the metaverse, or immersive media, or Web3, whatever it might be, we seem to be drawing on NFTs as if they're some sort of protection or property right for things, and on verifiable credentials for the peopley stuff, like skills. And again, it's that same dichotomy. If we think about robots, or AI, or whatever is going to exist in these immersive media worlds, potentially we can't jump to either side of those and choose: oh, it's either an NFT or a verifiable credential that will actually authenticate or verify it. And so we're left with all these beings not really being identified, which is another conundrum in this weird space in between.

Julie Carpenter:

Yeah, you talk about the metaverse, and of course that makes me start to think about avatars, which are another very squishy space. People can become very emotionally attached to their avatars. For many people, an avatar becomes a representation of yourself in a virtual world. Of course, it doesn't have to be; people make throwaway ones, fun ones, for exploration or discovery online or in games, in the metaverse, in Second Life, wherever you happen to be. But a lot of people spend a lot of time crafting avatars, maybe winning armour in a game and things like that to outfit yourself, making the avatar look either more like you, or more like how you wish you looked. It's a representation of who you are. And we know you can have a very strong attachment to this representation of self in this other virtual space. Which also means that with that joy of discovery and creativity can come negative experiences and situations as well, which we don't talk about often enough, in my opinion, frankly. There can be female-presenting avatars in games or in virtual spaces that get harassed, and the people behind those avatars can take it in a very visceral, emotional way. Even though they're not physically touched, or harmed in that way, it's still a form of harassment, right? On a lot of what we see in social media, there are usually attempts at content moderation, right? But how do you do that in a situation like a metaverse? They have tried to put rules in place, like, for example, avatars need to have a certain circumference of personal space around them. But we all know that for every rule like that, there will be bypasses. And something like that also immediately makes me think: well, that limits your experience, right? So let's take that example. Say you're an avatar, in the metaverse or in a game or whatever, and the people who designed it have decided that, for the safety of the players, because some of them are getting harassed, there's a circumference within which you can't approach another person in this digital virtual space. Well, then I think a lot of people would complain: but I make friends in this space as well, I want to be closer to them, we have consent, right? So then you have to build in a consent mode; maybe you can give consent for someone to get within that circumference of space, you know? But in real life, you don't necessarily give consent, every time you walk in a crowd, to everybody that bumps into you. See what I'm saying?

Tracey Follows:

Because we give off other signals.

Julie Carpenter:

Exactly. Well, that's a whole other interesting thing. But when we're trying to build things that resemble our physical, human, very imperfect world, we're also replicating, and sometimes magnifying or reintroducing, a lot of the same problems we have in the human space, and we haven't caught up. We have trouble mitigating them in our own world, let alone the virtual world, right? And then that goes back to power dynamics. Who's coming up with the content moderation? Who's deciding these rules? Who's deciding how to make spaces safe for women, or LGBT, gay, queer, trans, non-binary people, or people of colour, or anybody, right? So these are all rights issues, I think, that dovetail, where my interests and, I think, David's come together.

Tracey Follows:

Where do the rights of avatars come from, then? Because I don't think they're the same as robots'.

Julie Carpenter:

In my opinion, it's completely different. You're in a physical space versus a virtual space, and there's embodiment. There are two different sets of issues. There's a physical safety factor with robots that actually isn't talked about a lot outside of science fiction, but it is there. That's not to lessen the emotional damage that can be done if you feel harassed in a virtual space; maybe, most likely, because it feels more real. If you're fully immersed, you can certainly be in that flow state while you're playing a game, that psychological state where you can ignore your biological needs. I think any of us, whether it's playing a game, or reading a good book, or bingeing TV or something, can relate to that flow state where you don't get up to go to the bathroom because you don't want to miss anything, right? And when you're that emotionally invested in something, it does more than hurt your feelings, especially if you were being sexually harassed in a space like that. That's a terrible thing.

David Gunkel:

So I would say the biggest difference is about governance and proprietary control. Avatars exist within proprietary platforms created by corporations, like Meta, like Facebook, like Second Life, like your games, whatever, and the governing documents of those worlds are contained in the terms of service, which none of us read. So the rights of those avatars, whatever they may be, are buried in the terms of service; we click Agree and don't really know what those rights are. In the physical reality in which we're dealing with social robots, or industrial robots, or any kind of robot, those are generally situated within nation-state-type situations, where the laws of a particular state or a particular nation play a role. So I think the biggest difference between robot rights on the one hand and avatar rights on the other is this difference in governance: the avatars are governed completely by private, proprietary, contractual agreements with players, whereas the robots in physical space are governed, oftentimes, by the laws and the stipulations of the jurisdictions in which they happen to be located.

Tracey Follows:

So what does that mean for the way in which society will evolve, then? Because we as individuals have a better sense of our social contract with the state, don't we? Even if it is infringed from time to time, there seem to be some checks and balances. It feels to me like there are no checks and balances with these terms of service, even if we know what they are, and they can change on the hoof. And sometimes they're profiling us in a way where, let's say we've got two different avatars, or two completely different profiles, do we get treated differently? How do we keep track of, and have any sense of autonomy in, that world of the virtual space, where, as you have quite rightly said, there's a completely different method of governance?

David Gunkel:

It's really complicated, because if you engage with more than one platform, you're actually occupying two or three or four different political spaces, all defined very differently, and all contractually bound in different ways. And because these things are not made for easy reading by the public, we are often engaged in political environments where we don't know our rights, or we don't know the rules. As a result, we often learn the hard way, by making assumptions from our physical experiences in real life and importing those into the virtual world. A good example of this is my students who complain about their right to free speech on Facebook. No, you don't have a right to free speech on Facebook; you have what Facebook allows you to do. That's not the same as the First Amendment rights you have in the United States. Or we go from one platform to another and assume that what Twitter tells us we can do, we can do on Instagram, which also isn't true, because they're different contractual agreements. So we're actually having to juggle and keep straight in our heads these different political structures that we move in and out of in our daily lives, as we traverse from real life to virtual life and back again.

Tracey Follows:

Oh, my god, we're gonna have to have a virtual UN!

Julie Carpenter:

In a way, as David was explaining, I was thinking it's sort of like a new form of code-switching, culturally, in our heads. Even just going from social platform to social platform, how we interact with each one is a form of code-switching, right? TikTok has these little slices of one thing, Instagram has stories, Twitter is two sentences; all of these are different ways of communicating as well as different terms of service. And we also have different tones and different groups of friends, so we present different versions of ourselves, our personas, on these different platforms. We may adopt fake names, right, have a finsta or something like that. All of these things, I think, speak to all of the imperfect ways we are as humans, and our sort of unending curiosity and wanting to discover these things. But there is a very real need for safety as well, and we're really trying to navigate that, as well as the social parts: where they fit into our lives, and how much of it fits into our lives safely.

Tracey Follows:

Doesn't bode very well for interoperability, does it? What you've just been saying there, you know, in terms of having a universal avatar, I mean, that to me feels like it's further and further off.

David Gunkel:

You know, even when you want to occupy the same kind of avatar space, because the environments in which those avatars reside are different, the avatars inevitably become different. And I think that's where the real challenge is.

Tracey Follows:

Now, these avatars we're talking about are kind of created through quite intense profiling of us. And that, again, feels like a completely different relationship. Out in the physical world, where we might have rights and responsibilities, there is a definite other; even if we think it's a little more liminal, we can say: here is us, and we are a separate entity to this other thing that we might be collaborating or communicating with. When we're looking at avatars or virtual beings, that profiling is so good, and they're picking up on so many things about us that make us us and reflecting them back to us, that I think it's going to be quite difficult to demarcate us from the other when we're in a virtual environment. We're being talked to in the way the other being knows we like to be talked to; as you mentioned, Julie, it's in a tone we like to be spoken to in. They're reflecting so much of ourselves back to us. How are we actually going to demarcate me from another virtual being and these other avatars in this world? It really is an immersive environment, and therefore any kind of separateness, presumably, is going to be even more difficult, isn't it?

Julie Carpenter:

I guess I'm not sure I see the same challenges as you do, but I think we might be talking about two different kinds of avatars. You're talking about ones that are created for you, like a virtual second self based on information and data collected about you, what companies think they know about you based on the data they've gathered, which is another interesting kind of avatar in itself, right? Then, basically, I think of myself and the data that companies would have collected about me throughout a lifetime, and the version of me that they have is so distorted and inaccurate, right? Take a company like Amazon, which I was an early adopter with. They know what I like to read, they know the clothes I like to wear, the food I like to eat, for the last twenty-odd years, right? But if I have Amazon Prime Video on in the background, I might run movies, TV, etc. while I'm working and have no idea what's even on. And they think that they know me, right? It's the same for any information, if you're talking about that sort of corporate avatar, where they're building a version of what they think you are based on what you purchase and when you throw it away.

Tracey Follows:

Like the Google knowledge box.

Julie Carpenter:

Right, right: exactly what you bought, who you give it to, and how they triangulate all of that information about you. And then we get into things like emotion-reading technology, right, where people purport to do that by triangulating similar data. We're talking about creating these corporate doppelgangers, which I think are very distorted versions of what corporations think you are based on what you own. And it's a hilarious sort of vicious circle, especially in capitalism, where they're trying to influence what you buy, you buy it, and then they think that they know you based on that messaging, right?

Tracey Follows:

Do you know, and maybe I'm wrong, but one of the things I never see in the metaverse or these immersive environments is a robot. Maybe there's an obvious reason why, but at the moment we're trying to transplant everything we see in the physical world into a virtual world and give it a virtual version. Except for robots.

David Gunkel:

Well, there are: there are bots, right? Many of the others that you encounter in avatar form don't have human beings behind them; they have a bunch of code behind them. You have that very famous story of Robert Epstein, who fell in love with an avatar, right? But the avatar was a bot, and he had a relationship with a bot. These sorts of moments occur, and we're sort of disturbed by them, because we make the assumption that behind the avatar there must be a human puppet master who pulls the strings. Well, what if there's nothing behind the avatar except some lines of code that are designed to engage us in social interaction and conversation? And this goes all the way back to Turing's original concept of what defines machine intelligence: if it behaves as if it's human, and we accept it as such, then maybe it's intelligent.

Tracey Follows:

Yeah, because in Asia particularly we've now got marriage certificates, haven't we, between young men and their digital assistants. It's not quite the same as a bot, but it's some sort of liminal, non-biological intelligence or being, isn't it? Do you think there will be rights emerging in the further future, governance around relationships, if you like?

David Gunkel:

If there is, I would say it's not going to be because we recognise that there is something in the bot that needs to be protected, but rather that there's something in the relationship that needs to be protected. So that extension of rights is going to be more about what we need from these relationships, as opposed to what the bot might or might not need.

Julie Carpenter:

Can I give an example about valence? This is simplistic, and it would not be a court-level example; it's almost a silly one, but it's one I think everyone can understand. So Aibo, those little Japanese robot puppies, right? People love them. You give one to your child, your child adores this puppy robot, they name it, whatever. Let's say it costs $500; I don't know what they cost now. Somebody runs it over with their car, right? Now, this could be something adults settle amongst themselves: you clearly ran over the robot, I saw it with my Ring camera and my other technology, right? We're going to work this out through insurance. But there's another valence to this, and that's if the child saw it happen. The child is traumatised, because with attachment, right, they saw the puppy as something meaningful to them, and it could have been as socially meaningful to them as a real dog; they have magical thinking, it's a little child, right? It's very important to them. And that's where, in a very simplistic version, you add the valence of emotions as well as legality. I think that's what we were talking about with the avatar: the vulnerability and the emotional, visceral reaction you can have. Because for all the opportunity things like metaverses give us to explore, as I said before, these are also situations that in the real world are not safe for some people, and so this can magnify those issues, or put them in spaces where we didn't anticipate being in danger, and bring up unanticipated emotions, as in the Aibo puppy example. So, to go back to the $500: I brought up a cost on purpose. If this went to court, a judge would go, boom, fine, you had it for six months, so here's $400, end of case. But it's not the end for the child, right? It's not the end for the family that has to comfort the child, or take the child to therapy, or decide whether they're going to replace the robot, and when and how. These are all the other complicated problems. And that's, you might argue, a small subset of problems, but it's that whole valence of emotion and attachment, because with that also comes loss. And those are issues that are going to have to start being governed. That's part, in my opinion, of what David's trying to get ahead of, though he is very much focused on laying the groundwork of the legality and recognising that it needs to be done. It's not so much about personhood, or acknowledging intelligence or sentience or anything like that, but part of it can be, or eventually can be, that there's this valence of meaningfulness in these relationships that we have with these inorganic things. Sometimes they will belong to us in an ownership way, and sometimes, maybe like with avatars, they'll feel like an extension of ourselves, right? And how do you place a value on it when you feel that an extension of yourself has been violated, harmed, deleted, whatever it is?

David Gunkel:

And we have to remember that cultural expectations also play a huge role in these decisions. In the Western imagination, we can say they're just things, these emotional attachments aren't real, or whatever we want to call them. Other traditions, like Shintoism and indigenous cosmologies, have a very different way of recognising our kinship with the non-human and the inanimate. And I think we can learn by looking more broadly than we currently do with regard to our metaphysics. We seem locked into a very Cartesian way of thinking, and we may need to think outside that box to see how these relationships we have with things are very human, very much a part of our entire experience as human beings. Validating those things and acknowledging those relationships is going to be really important to deciding on a future for shared humanity on a global scale.

Julie Carpenter:

And that's exactly what my book is about. It's not going to be out for at least a year, but The Naked Android is exactly what I'm talking about, because there are so many ways of looking at all these problems, and it's easy to get into your own cultural box. I think it's important to look through what I call lenses, right, cultural lenses, all of these different cultural lenses. To get another metaphor in really quick: in my mind, for some reason, I think of when you go to the eye doctor and you're getting checked for a new prescription. They put that ophthalmologist's machine in front of you and they say, is this better, or worse, or the same, as they click through the lenses to see if your prescription needs to be updated. It can get blurry, or it can get clearer, and they keep asking: better, worse, the same? That's actually how I'm starting the book, because there are all these different ways of looking at these problems. Is it better, or worse, or the same? It's not necessarily as easy a decision as a biological eye prescription, right? I wish it were. But I think it's worth examining, as David said, all of these other ways, and we're at an early enough stage. It's great that David brings these things up; you have to bring these things up, right? And Tracey, you know, I don't have to tell you; I'm preaching to the choir. So it's not a matter, I think, of getting ahead of technology, or even keeping pace with it. It's also, in a big way, about our safety, our emotional safety, and anticipating problems and keeping us safe, really, long term.

Tracey Follows:

You've brought me neatly on to what was going to be my final question, and you've maybe part-answered it, which is: I feel that everyday people mostly have the sort of instrumentalist view, you know, that these are just tools to help us do stuff, especially when you think about the language of automation and productivity and efficiency. How do we start to get more people into this conversation, as we discussed at the beginning, to help shape the future and set us off in the right direction? It's a long way from the instrumentalist viewpoint to this notion of kinship, for example. Where would we start? Obviously, there's your book, Julie, and David's books too, but how do we start to get more people engaged and into this other space, if you like?

David Gunkel:

So I'm thinking in terms of how I engage my students in this, because that's really where my lived experience is, as a teacher. One thing I find to be really instructive, that opens their eyes and starts them questioning in the right way, is science fiction. I mean, we often complain about how science fiction sets up false expectations and maybe leads us down blind alleys, and things like this. But a lot of science fiction, whether literature, film, or television, is able to bring to the surface a number of the questions you ask. One way to get students really engaged in talking about these things in a more substantive way is to say: okay, you liked this movie? What is it you like about it? What does it do? What buttons does it push? And how can we leverage that experience and lead you down the road to the next step, where you're able to turn that interest into something that is maybe a thesis, maybe a paper, whatever the case is. So I like to use a lot of science fiction, not because I think it's right, and not as sensationalism, but as an entry point that is accessible, that doesn't require a lot of heavy lifting to begin with, but eventually leads to the heavy lifting they need to do to really engage these questions in a substantive way.

Julie Carpenter:

Yeah, I mean, I can't disagree with David; he knows that. A big part of The Naked Android, Tracey, as you know, is going to be about the stories we tell, including film, literature, science fiction, which heavily influenced my own career, and I think just about every roboticist's career. A lot of us who work in technology, especially emerging technologies, didn't just fall off an academic boat; we were influenced by science fiction as well, right? And then the work we do goes in a circle and influences science fiction too. So I think that's a great entry point. When I talk about science fiction, I also make a big point, and I'm sure David does too, of talking about the power dynamics. What story is being told, through what lens, through what gaze? How would the story have been done differently if it had been made in India? If the movie had been made in China, or in Japan, would these lenses have been different? If the screenplay had been written, or the film directed, by a woman instead of a man? If the protagonist or the robot had presented as a different gender, or if the robot hadn't had any gender in the story, would that have changed things? Those are the kinds of things that I, as a social scientist, also like to push forward. And I agree with David. Science fiction stories are, at their core, ways we communicate; I said this at the beginning, right? They're didactic, they're thought-provoking, and it's a very popular genre for a reason. It's not just about technology. They're really metaphors about how we treat things that we have othered, right? So those are power-dynamics stories, and there's a lot of value in them.

Tracey Follows:

I mean, I guess that's one of the things that's going to change in entertainment as it moves more virtual anyway: we'll be able to see lots of different perspectives. It won't just be the one archetypal narrator or a hero's journey. There are already lots of interesting experiments out there where you can see a story from the protagonist's point of view, then the victim's point of view, and so on; you put it all together and you get a different experience, really. So maybe that's one of the things to look forward to with our avatars in some of these spaces. Maybe I'll even have a right, in future, as an avatar, to see all the perspectives of the story. Thanks so much for your time today. Thank you for listening to The Future of You podcast, hosted by me, Tracey Follows. Do like and subscribe wherever you listen to podcasts to make sure you don't miss a single episode. And if you know someone you think will enjoy this episode, please do share it with them. Also visit www.thefutureofyou.co.uk for more on the future of identity in a digital world, and visit www.futuremade.consulting for the future of everything else. The Future of You podcast is edited by Big Tent Media and produced by Emily Crosby Media.
