The Future of You
The Future of You is the home of Tracey Follows’ ongoing work on identity, agency, and the changing relationship between systems and selves in an AI-mediated world.
This channel now brings together three strands of that work.
The Future of You podcast explores how technology is reshaping identity, from digital selves and predictive systems to automation, intimacy, trust, and human futures.
The Future of You audio series is the original 2021 book, released here chapter by chapter. It explores what Tracey came to call the technology of the self: a third dimension of identity, alongside the psychology of the self and the biology of the self. These recordings are presented as an audio archive of the original published text.
Me:chine Dialogues is a special series from The Future of You exploring identity, agency, and AI-mediated systems — where the machinable and unmachinable selves meet. It follows the emerging synthetic condition shaping who we are becoming: not man versus machine, but the meeting of selves, the part that can be copied and the part that can never be caught.
Together, these three strands trace an evolving inquiry into identity: from the digital self, to the technological self, to the Me:chine self.
Across all of them runs one continuous question: what happens to human identity when the systems around us begin to see us, sort us, predict us, generate us, and increasingly speak in our name?
Identity is becoming infrastructure for systems. This channel explores what remains of the self inside them.
Core concepts include:
Systems & Self
Identity as Infrastructure
The Technology of the Self
Me:chine — the machinable and unmachinable self
New here? Start with:
→ Me:chine Dialogues: Manifesto
→ The Future of You audio series: Chapter 1, Knowing You
→ The Future of You podcast archive
Visit:
→ Me:chine World and essays: me-chine.com
→ Podcast archive: The Future of You
→ Audio series: weekly chapters on this channel
About Tracey Follows
Tracey Follows is a futurist specialising in identity, agency, and the relationship between systems and selves in an AI-mediated world. Her work includes the frameworks Systems & Self, Identity as Infrastructure, and Me:chine, exploring the machinable and unmachinable dimensions of human identity.
The Future of You was named Best Tech Show at the Independent Podcast Awards 2023.
Her central premise: “The future is written between the system and the self.”
Follow to receive each new transmission as it is released.
The Future of You
AI, Religion and Identity #12
Today, I'm joined by Prof. Dr. Beth Singler, Assistant Professor in Digital Religion(s) at the University of Zurich and Rev. Dr. Joshua K. Smith, Pastor and Theologian, to discuss digital religion, and our spiritual identity online.
This episode of The Future of You covers:
- The trends in digital religion
- How people are developing communities and ideas about technology online
- How AI shapes and moulds what we are doing spiritually
- Our perspective bias at the dawn of a new millennium
- Anthropomorphism of AI
- Whether AI can help us become better moral beings
- Corporatist agency, personal agency, and AI politicians
- The ongoing outsourcing of our own decision making to AI
- Nihilism, apocalypticism, and the reemergence of the Satanic and occult aesthetic
The Future of You: Can Your Identity Survive 21st Century Technology?: www.amazon.co.uk/Future-You-Identity-21st-Century-Technology-ebook/dp/B08XBN4GBB
Links and references at: www.traceyfollows.com
Tracey Follows 00:20
On today's episode of The Future of You, I'm blessed by the algorithm to be joined by a pastor and an anthropologist to discuss digital religion. Dr. Beth Singler is the Assistant Professor in Digital Religions (yes, that's religions plural; we'll get into that in a moment) at the University of Zurich, and Dr. Joshua K. Smith is a pastor and theologian researching ethics around emerging technologies, including robotics. In this discussion, we cover the trends in digital religion, how people are developing communities and ideas about technology online, as well as how AI shapes and moulds what we are doing spiritually. The conversation covers our perspective bias at the dawn of a new millennium, anthropomorphism of AI, whether AI can help us become better moral beings, corporatist agency, personal agency, and AI politicians, and the ongoing outsourcing of our own decision making to AI. The focus inevitably turns to nihilism, apocalypticism, and the reemergence of the Satanic and occult aesthetic. And we even talk aliens. So join me, Beth, and Josh, for a chat about digital religion, and our spiritual identity online.
Tracey Follows 01:53
Well, hi, Beth and Josh, thank you for joining me here on The Future of You. Great to have you both here with me.
Dr. Beth Singler 01:58
It's really great to be here. And to be able to have this chat with you both.
Dr. Joshua K. Smith 02:02
Yeah, it's nice to meet you both
Tracey Follows 02:04
I've been wanting to talk to you both for ages. Now I'm right out of my comfort zone on this one, talking about religion and theology. But I'm hoping I might be able to catch up a bit on the technical digital side. For the listeners of the podcast, I wonder if you could both just tell us a little bit about what you do because I know you are both in very, very different ways involved with tracking digital religion and looking at theology and technology and how they interplay in our culture. I wonder if you could just share a little bit of your work and tell us a bit about your methods in the madness?
Dr. Joshua K. Smith 02:40
Sure. My name is Josh, I'm a pastor in Mississippi, so in North America. I primarily do that as my full-time job. But I'm also a researcher. And so I look into robotics and kind of the social ethics concerns from a theological perspective, of course, but that's kind of my area of focus. And I just kind of happened to fall into it by accident, while doing my doctoral work, and, I don't know, I felt like it chose me, I didn't choose it. So everything from war robots, sex robots, the work industry, anything that has to do with kind of the disruptive technologies and how that fits in with our ideas about morality and ethics as a society. So that's kind of what I do in my night job, and my day job is just helping people in crisis and trying to be a force for good.
Dr. Beth Singler 03:47
Well, I am the new Assistant Professor in Digital Religions at the University of Zurich. I'm an academic; I've been working in the area of AI and religion, and religion and technology, well, since I went back to do my Master's in 2010, where I started looking at people's spiritual identities online. So I'm primarily an anthropologist rather than a theologian; I'm looking at describing and theorising how people develop community and ideas around technology, quite often with a religious perspective. And then in 2016, after my PhD, I joined a postdoc position which was looking specifically at artificial intelligence. And that's been a through line of my work since then. So I look at both how religion describes and understands artificial intelligence, but also how artificial intelligence will shape and mould how we do spirituality and religiosity. Not down to a specific faith; I look at all different faith groups and the development of new religious movements and new spiritualities on the internet and in social media. Basically, that's an area of particular interest for me: AI and religion.
Tracey Follows 05:05
I remember when you and I had a conversation on Twitter about an amazing job, which was like Head of the Apocalypse or something like that, at some university department, and I think we both thought that just sounded fantastic.
Dr. Beth Singler 05:20
Oh yeah, there's certainly people who are looking at questions of apocalypse, existential risks, the concerns about what will happen with technology in particular artificial intelligence. That's probably why we were both looking at it.
Tracey Follows 05:31
Well, we'll come back to that, because we're at a fascinating time, I think, in which all of that is popping up again. What are your observations and insights around how this area is changing? What sorts of trends have you observed of late that you think are taking hold, or already have taken hold, and are having quite a big impact on various areas of our life, society, culture, etc.?
Dr. Beth Singler 05:58
Well, in terms of the technology, it's really hard to keep up with the exponential growth of applications. And a lot of those applications are not very apparent; they're invisible. There's not very much transparency in where they're being applied, and what's happening as a result of them. In the technology itself, with artificial intelligence, it's sometimes hard to understand why particular decisions and outcomes come out of the algorithmic decision-making system. So in that sense, it's hard to keep up with some of the trends and the changes, but we can pick out some thematic continuities in the discussion. We've already had mention of the apocalypse. As soon as there is the smallest, most incremental advancement in AI technology or digital technology, I immediately, as an anthropologist, see people online panicking about the end of the world, and the Terminator, and Skynet, and all this is going to go down that very familiar science fiction route. So we're always going to see those sorts of repeating patterns. And as a religious studies scholar as well, I look out for religious narratives and tropes that repeat. We've had them historically, for a long time, quite often monotheistic, Western civilization tropes, but they play out again and again in our imaginaries of artificial intelligence.
Dr. Joshua K. Smith 07:12
Yeah, I think it goes in cycles. So we're just in another cycle of challenge and change, not really anything significant. But like Beth was saying, any challenge to what we've kind of known always brings fear, and that is very much any challenge to our anthropology. I've been thinking a lot about electricity lately, and how society responded to that, and it's very much similar to how we're responding to AI and advanced robotics now. Still a lot of challenge, especially from a religious perspective; you see a lot of pessimism. But a lot of inconsistencies in how people apply those technologies, because we use them every day and incorporate them into our life. And they may be very invisible, as far as how it changes us, but humans are very inconsistent, in how they are both afraid of technology but also kind of attached to it in a lot of different ways. And so I don't think anything has changed or is new. It's just that we're in a new century; that's the only thing that's changed. And I think about this too, especially as I study theology, and Beth knows this as well: every generation has had, like, some type of vision of the apocalypse. And you know, for a long time it was the Pope, you know, as the Antichrist or whatever. And so, I don't know, I just try to be a cautious optimist, or a calm pessimist, is kind of where I fall, and try not to buy into the doom and gloom, you know, so try to be hopeful.
Tracey Follows 09:02
It's funny you mention that, because I've got it in my notes here to ask you: is there something weird about the new millennium? You know, is there something about going into a new millennium that, I don't know, creates some sort of, as you say, fear and uncertainty, but maybe even more than that, like a moral panic, especially when there's new technology around? And is it nothing to do with it? Is it just accidental, or what?
Dr. Beth Singler 09:25
I think there's certainly a perspective bias. So we are in this moment, as individuals and as communities, observing events and interpreting them as part of our own narrative. So that leads us into thinking that this is the most significant time there has ever been, because we are the ones
Tracey Follows 09:45
because we're in it,
Dr. Beth Singler 09:46
we're in it, right? So we know historically, and obviously, okay, time is also socially constructed, so the Millennium isn't the Millennium for every cultural group. But we know historically there have been numerous fears of the apocalypse. To pick up on Joshua's point about technologies being disruptive: yes, you can look historically and say, okay, the advent of electricity was a significant moment. It changed society, changed how we communicate, how we travel, all these aspects. I think what's slightly different, if not entirely new, about AI is the way that we anthropomorphize it. It's not just a technology that changes things. It's a technology that is being created intentionally by humans to interact with humans. So that interactivity makes it significantly different, I think, to things like electricity, or the Spinning Jenny of the Industrial Revolution, or the emergence of the novel: it talks back, because we are creating it to talk back. It creates things that seemingly happen on their own. So the narrative we're telling ourselves about AI in particular is that it is a person, before it could be anything like a person. And there are whole big philosophical/theological debates about that, which I can describe but am not necessarily qualified to get into the nitty gritty of. But I think it's interesting that yes, in this moment, we can tie it into where we are historically, we can tie it into subjective bias, but also we're interacting with a technology that interacts back with us in a way that other things didn't.
Tracey Follows 11:20
I think that's one of the things that makes the best science fiction the best science fiction: that it's about communications technologies, and not just, you know, sending something up into space or some sort of military technology. That's what captivates people's imaginations, isn't it? And where do you two stand on this anthropomorphization of robots, and that leading to conversations around robot rights and the way in which we will interact, as you say, Beth? I mean, do you take the instrumentalist view? Or are you a bit more, I don't know, a bit more open-minded about the way in which we might come to emote with these beings?
Dr. Joshua K. Smith 12:02
Yeah, I've been on a journey with this question for years now, thinking about the nuance here when we talk about persons. I think it is a feature of us as a creature, a creative being, that we want to see ourselves in our creation. We want to see ourselves even in the animal kingdom, in how, you know, we deal with domestic pets. And I would say even, if you really look at things like cattle, when you look in their eyes, you see a lot of beauty. Not that we know what it's like to be a cow or any other animal; you can make assumptions, but we project something onto them. And anything that squeals, or says ouch, or barks, or squirms, feels pain, we give it a new level. And I think the same thing is true of AI and robots. It's not necessarily that there's something underneath the hood or behind the face, so to speak, but that we will struggle, especially if you're more open, if your moral circle is a little bit wider than some, such as the instrumentalists'. And I tend to be sympathetic towards that, because in my religious background, in the Hebrew Scriptures, especially with my study of Judaism, there is moral consideration of animals and of creation. Because it's not that I am a conqueror of the world, like this world is given to us to conquer and dominate, but that I am a steward of the world. And so everything within the world is for me to help flourish, to make it grow, to make it blossom, to create beauty, to be creative with. And I think AI and robots certainly can be a part of that. But there's also a darker side; we all know it can be very destructive, very disruptive. And so I see our part as not necessarily saying yes or no to whether or not we give moral consideration to certain robots, qualified robots or AIs, but to make sure that, you know, it's not harming other people.
It's not socially disruptive, and some of them are very disruptive right now, and are causing harm, right? And so I see it as a way. I see personhood as a way to kind of mitigate some of those harms. It's not necessarily, for me, about making a robot a person like me; I don't think that'll ever be the case. I think we will always be biologically distinct. But just because I'm distinct from something doesn't mean I have a right to treat it however I want to, and so that is my concern for that question. Like, I'm not saying habeas corpus, all rights to robots. You know, I'm very closely related to David Gunkel's argumentation there. I mean, we disagree a lot about philosophy and different things, but I think we're in the same boat, so to speak, in that regard: there are societal benefits to granting a robot rights or considerations or whatever. Not on the same level as humans, but for the benefit of humans. And I think, yeah, we should consider it. And I think rights in general, all things on the table, should be considered; we shouldn't ever just say no, we're not ever going to consider this. And we can just look at the historical analogy. I've been reading, you know, working a lot on my project now about womanist perspectives on rights. So Black, African American rights, that movement, and you know, it's very recent. And I would still say, even now, I live in the South, there are still people that I pastor who think there's something different about a human with one phenotype versus another. And that is very much a, like, folk understanding. And I don't believe that there is any distinction there, but for them, it's very real. And so I see these connections, not just with human-to-human interaction, but with human-to-technology interaction, and how we interact with it, you know. And some of the arguments are eerily similar, and I think I have a lot of problems with some of that logic.
And if you look at the history of technology, and how the Black body has been imagined as a technology, I think there are even more problems with the language that we use to describe it. And so I'm very concerned about some of these arguments. And that's more than you asked for, but that's my heart as far as the social justice side of it, and what I'm concerned about as a theologian and pastor, about how we're using this. And sometimes a lot of people just dismiss it altogether. So yeah,
Dr. Beth Singler 17:11
I'd like to partially agree, but add some caveats as well. So absolutely, I agree with the positive of recognising historical continuities in our discussion of both non-human and human others, including, as you say, indigenous cultures. When we, quotation marks, "encountered" intelligence elsewhere, there were long debates, sometimes theologically informed, or badly theologically informed, about personhood, and we see analogies now in early discussions about AI and robot rights. My caveat is that, with our natural tendency to anthropomorphize, we can see corporate hacking of that tendency. So yes, it's valuable to have conversations about rights, because then of course you raise these issues, and you expand our consideration to say: what would happen if this could happen? But already the conversation is being pushed there by some corporate interests, who want you to focus more on the question of individual agential AIs' rights, versus what the corporation is doing when it creates the algorithm that makes the decisions, that has particular weights on it, that has outcomes for particular groups. So if you hand over agency too soon to AI, or even, as some of my work looks at, super agency, when people start to think of AI as already super intelligent, or even godlike in some ways, then it just obscures the role of humans in the machine. Humans in the corporate setting have made specific decisions on how algorithms will work. And that's not individual agency in an AI; that is corporate agency, and that needs to be highlighted as well.
Tracey Follows 18:57
Have you got any examples that you're thinking of specifically, of those sorts of corporatist versions?
Dr. Beth Singler 19:04
Well, I mean, it's sometimes there in the nebulous public discourse. So when figures, charismatic authorities like Elon Musk, start talking about the dangers of "summoning the demon" with AI, that narrative pushes this idea that it's already, somehow, in that instance, malevolent. That it's also somehow making decisions for us. And some of my ethnographic research has been on people's conceptions of AI, as I say, as more like a godlike entity, where people on social media report feeling that they've been blessed by the algorithm because of particular decisions. Say you're in a gig economy role, as a Lyft driver or an Uber driver, and you get a series of good rides and you make lots of money; the feeling that you've been blessed by the algorithm might be reported. It's not a large-scale thing; the actual corpus I looked at was very, very small. But if we skip quickly into the idea of AI as already having agency, and then beyond that some sort of form of super agency, then we obscure the fact that those corporations, like Lyft or Uber or YouTube with recommendation systems, are making decisions that benefit them within the framework of capitalism. The algorithms are designed to either garner attention or be efficient in particular ways for the purpose of making profit. So, and here I'm going to make a pun, if we start thinking of AI as prophets, we'll miss that AI is about making profits.
Dr. Joshua K. Smith 20:33
Yeah, that's good. You know, I think as Westerners, we can't think about technology without thinking about capitalism in a lot of ways. And so we think about Turk work and these ghost workers and different things. I think a lot of it has to do with, and I don't mean this facetiously, but I think a lot of people are just very ignorant about technology in general. Not that we can't use it; we could go on YouTube and find small kids who can teach you how to code things in Scratch, or C, or, you know. But as far as actually understanding some of this stuff, I don't think that we have a lot of knowledge about that. And I think that's why it's so hard, even jumping to the question of robot rights. It's just so discombobulating for a lot of people. I'm not even sure some of my people understand how rights work. And I'm not saying that if I hadn't studied it the way that I have, and others like me, that I would either. And so it's not a judgement; it's just that all these things are so complex and so esoteric to the public. And so sometimes it's overwhelming: how do we even educate people? People that don't read, people whose only concern is the pragmatics of everything: would this technology help me in my business? And so when I'm thinking about this, as far as the church, there's not much, if anything, that's, you know, trying to make an altruistic AI or platform that has no desire to manipulate. And I think about this in terms of the Replika chatbot and stuff: purposely, as Beth was talking about, manipulating the consumer to upgrade to certain features. And I think that's the biggest concern that we need to educate people about: your technology is not working for you. It is working against you in many ways.
And even the stuff that we're using right now: why does a particular platform want me to use Chrome? Why does it want me to? Because it wants to steal from me, it wants to take analytics from me, and I can't use DuckDuckGo. I can't hide from it, because it wants my information. And to a large degree, most people in the United States don't really care about that. They say, well, what do we have to hide? I'm like, you have a lot to hide, and there's a lot of value to your information. And so, just to go along with what Beth is saying, a lot of the elites in my context just don't care and don't want to know. And they're okay with the media manipulation part as long as there's a payoff and benefit. And so it's the click-agree, boilerplate-type stuff: I'm not going to read it, I don't care about it, I just want to use iTunes, so who cares? And I'm still not sure how to help people make that jump from, okay, I know I should probably care about this. Like, would you want somebody to sit and observe your children, just outside your home? Would you allow that, you know, so that they can send you an email saying, hey, Susie, or whatever, she likes these things, here's a Christmas list for you? I'm like, there are toys that do that. And, you know, I don't know. There's just so much happening in our Western mindset to kind of push that aside, while, like you say, we are totally being taken advantage of and being manipulated, either by film and these narratives that get pushed before us, you know, Skynet, Terminator. Every movie that comes out about AI or robots, somebody's gonna, you know, take advantage of you, or you should be afraid of them. But no, really, it's the creators that I'm afraid of.
It's the human behind the machine that, I know from my anthropology, is who we need to, you know, tie up or, you know, rein in, so that they can understand that it's not okay just to make money off people.
Tracey Follows 25:04
It's really interesting, because I used to think it was just the path of least resistance for most people; they just want convenience. But then I started to do quite a lot of research with, I suppose, what we'd call Gen Z, when they were in their early 20s at the time, and it was on a specific project about the future of media. What I realised was that they were outsourcing lots of decisions, more than I even realised, to, I suppose you could call it, machine learning or AI. And I remember a couple of different specific examples. There was one example of a girl who was saying that she wanted an AI or a bot to tell her what fashion most suited her. And we had this discussion about, well, isn't that the job of your best friend? And it was like, well, yes, it could be, but the AI will absolutely know; it will know what suits me. And it became very obvious to me: oh, you don't trust your own instinct in terms of what suits you. And you feel like you have to outsource it to AI, which really knows, i.e. has analysed data and knows something that you could not know from what I'd take as a data point, a gut instinct. And then there was another example of a guy who was like, well, I want the fridge to shut its door and lock itself when I've had too many calories that day. And it made me realise that they want AI to self-regulate them. And it made me wonder: do we think, maybe it's beyond just the convenience, and somehow, maybe even unconsciously, we're hoping that AI will make us good? AI will make us more moral, put us on the right path. I don't know how to express it, but that's one of the things I think I was starting to realise, and it was particular to that younger generation. But then, of course, I hadn't compared it with older generations. I don't know what your thoughts are on that: do we think we're somehow using AI to make us good?
Dr. Beth Singler 27:01
I think definitely there is a very strong narrative, and you can trace it back way before we had something that we called AI, a very strong narrative of the irrationality of humanity and the rationality of machines. That there is a way to purify all the good things about our intelligence and replicate it in a different format, so that it doesn't succumb to all those problematic messinesses that humans have. It won't get angry, and it won't get hungry, and therefore hangry. And I'm a very hungry person, so I recognise that. But, you know, this idea that there is a purified version of the human that can operate in the digital realm, and therefore can make these better decisions. Which obviously completely obscures the fact that all the data going into these systems is from humans, so it comes with all of those messinesses that we want to be rid of. But you see, you know, I've done a small amount of work on people who think that robot politicians, quotation marks, so AI decision-making systems, would be in some way preferable to human politicians, because they wouldn't be corruptible and they wouldn't make irrational choices. And that comes again partly from science fiction, which has come up a couple of times, and I have no bone to pick with science fiction. I think the bone I have to pick is when we're not clear whether we're being told a story or not. So if I go see a science fiction film, or, my favourite, Star Trek: if I go watch an episode of Star Trek: The Next Generation, Lieutenant Commander Data is presented as utterly rational, and you can see his precursor in Spock, though he wasn't an AI. Utterly rational Commander Data: that's great. That's a story.
But if I then start to perceive AI as having that purified rationality and able to do these sorts of superhuman things in terms of intelligence, then that's dangerous, because it obscures, as I say, all the problems and the mistakes that we feed into these systems with our data. And I can completely understand it. I did some schools engagement work where I talked to much younger children than Gen Z, and already at the age of sort of seven, eight, nine, they have narratives of AI and robots and what they should be able to do for them. And it is things, as you say, like making decisions, making good choices, clearing up for me, you know, being the ultimate sort of Jiminy Cricket on the shoulder, telling you what you should and shouldn't do. So an AI conscience. And that's a concern, because again, that story does obscure all the irrationalities in machines. In a lot of instances, it's better to think of them as artificial stupidity than artificial intelligence. They do make an awful lot of mistakes; they do some very impressive, successful things. But take, for instance, the whole bloom of AI art we're in at the moment, where people find it fantastic, a magic trick: you can put in a sentence and get a piece of art. But if you narrow in and look at the art, how many fingers does that person have? There are lots of problems in the data set that's coming in, not to mention the ethical issue of using artists' work without permission. But yeah, once we actually start focusing in on the decisions that are being made, we have to question if they're the perfect decisions for us, or if this is the perfect friend who can give you the perfect answer about what you should wear.
Tracey Follows 30:08
Yeah, cuz I felt like it was beyond just efficiency. It's almost like duty. I'll be dutiful if I can only download this app and use it. Josh, sorry, did you want to come in?
Dr. Joshua K. Smith 30:19
Um, yeah, I think some of this goes back to, like, the erosion of, and don't read too much into that, like, where we used to have self-regulation and, you know, we practised willpower. So, like, I think about the Frog and Toad book, the story about cookies. You know? I mean, I don't know, nothing will replace that aspect of our humanness; we have to have discipline, and there's a reason why it's hard. But also the fact that, like you and Beth were saying, if you look inside something, you look inside its code, those that make this stuff, they understand that it's not perfect. And so I try to tell people that it always will be that way. In theological language, we have what's called the Fall, right, where everything was in harmony, you know, shalom, perfect peace with God, and then humans, in their curiosity and creativity, they saw that as a prison, right? We want to go outside this fence. And so from that moment, which I think every human understands, right, even from when we're little, we want to do something opposite of what we're told, for whatever reason. So I think AI and robots are always going to be a part of that system, because they're tied to the system that we're in. So I think all things are in harmony, like all molecules are attached to one another. So we can't say that all this stuff that comes from the ground, that is connected to a fallen world, is somehow going to be perfect just because we polish it and put it in a nice plastic package. Now somehow it's different? No. I mean, things still lead towards death and destruction, decay. That's the natural progression of life. And so I try to help people understand that just because I make a robot doesn't mean that it's perfect, in any sense. In fact, most people know that when you create things, there are a lot of issues, right?
And so for every one robot I have working, I probably have ten that don't work. And sometimes you don't know why. Why did this optical sensor go out? What happened? Why did this circuit blow? There's no rhyme or reason to it. Anybody who's ever worked on a farm understands that there are some days where the pressure just affects things differently. The electronics in the houses and so on, why aren't they working? Why did that sensor not work? And now the heaters are going too high, and maybe nothing's wrong, but it's just not working right. I think about this from agriculture, because there's a push to automate farming, and I think that could be great in some ways. But most farmers are pessimistic, because they understand that yes, tractors are amazing tools and help us be more efficient in farming, but unless you've worked on one, you don't really understand that things break easily. And even now farmers are struggling because John Deere won't let you work on their tractors. So the more we integrate, the more complicated that relationship becomes. And then just my last point here: with the AI politician stuff, in my opinion it will always be a human-machine team. This desire to outsource regulation, temperance, discipline, I don't think it will work, and I'm a total pessimist about that. I look, too, at the Department of Defence, where we get all this technology and research from, and even there they understand that it will always be human-machine teamwork. So I don't envision a society where we're completely replaced. Some do, this complete utopia of robots doing everything and humans just having all this leisure. Just because of practical experience: sometimes the robots just aren't going to work well.
Sometimes the sensors aren't going to connect, things blow. Maybe we'll get to the point where they can fix themselves, but I feel like there will always be a need for us to work together. And I think our desire to separate from that is a part of our fallenness as well, and our weakness. We were made for relationship, made for connection, with humans, with creation, with animals, and we have this kind of rebellion within it, this tension. Like, I'm a pastor, okay, so my job is people. And I tell my people this all the time: I don't like people. I'm a hardcore introvert. Leave me alone with books and I'd be perfectly fine with an R2-D2 or, you know, a D-O, whatever; that, to me, would be great. But that's not, in my opinion, why I was made and created, in the lineage of what I'm a part of. And I think that is a fault. So I fight against that, and I try to help people understand that we were supposed to do this together. We can make wonderful tools that certainly, in some part, could be our friends, and perhaps even better friends, but we shouldn't forego all of our calling, and I guess our duties, in order to outsource that. And the more we outsource, the more I worry that we're subverting our original design, maybe. So when we think about the good and use that word, what is the good? Well, it depends on what the design is: what were we made to do, why are we here, what's our purpose in life? And if that is meaningful relationship and connection, then I don't think it's good design if we've made things that take us away from that. Certainly we can use technology in a good way that does fit into that good design. So that will be my challenge for the next generation coming up. And even for myself, I guess.
Tracey Follows 37:01
I hope you're right, because this is an interesting point about the AI politician. I feel like during the pandemic we came quite close to it; it was almost as if, hey, let's trial this idea that politicians can be replaced by modelling and AI and so on. It's interesting that this has come up today, because I don't know if you saw that piece, I think it was in the Atlantic, that went around Twitter like wildfire this morning. It said: let's declare a pandemic amnesty. We need to forgive one another for what we did and said when we were in the dark about COVID. And I was like, hmm, interesting. Is this about redemption, forgiveness? If so, what's been committed here? What sort of injustice has been committed? And without getting into the whole COVID thing, I would say there was lots of coercive behaviour. There were incentives being built into technologies, and into narratives, Beth, that people were being nudged by, sometimes rather unconsciously, because they wanted to be a good person and behave in a moral way and not be branded or shamed as selfish. I think a lot of people were pushed into doing something they didn't want to do, or were threatened with losing their job if they didn't carry out some medical instruction. The whole area of bodily autonomy was up for grabs. And what we saw was technology, whether it's track and trace or PCR tests, the whole technocracy, take over. And there wasn't very much space for human debate, discussion, or any humanity, actually. I felt the whole thing was really rather dehumanising. I don't know. Those are just random thoughts, because I was thinking about that Atlantic piece this morning, but I don't know if you want to speak to it.
Dr. Beth Singler 39:23
Yeah, I have seen that piece, and I've seen who it's come from as well. People are obviously picking out particular things that person said during the time and asking, why would you want an amnesty based on this? My particular interest is that in that era, which we're not out of, the era of the pandemic, we had modelling of the pandemic going on, but there were attempts to avoid dealing with that modelling, or to see it as overblown. So there was that human element there. But my particular interest was in the algorithms that were employed for predicting students' grades during the pandemic. I wasn't directly involved, but I had a small admissions role at my college at the university, so I got to talk with a lot of people who were more directly involved in dealing with the admissions problems that that caused. And you could see the frustration of very capable students who were told that they weren't capable, because their schools historically didn't have a history of sending students to Oxbridge or other excellent universities. That kind of, what I call, algorithmic thinking, that you're bound to the history of your context, bound even to your own dataset that says you've previously done this well so you can't possibly do very well in the future, has wider repercussions. It fits in as well with the modelling of the pandemic, because our knowledge of what was going to be successful was very limited in the beginning. There was a lot of discussion about how transmission happened; our scientific understanding of the virus improved, and our modelling improved as well. But some people were very wedded to that, again, algorithmic thinking: because we thought something was like this in the past, it's going to be like this in the future as well.
So we have to be very careful that we allow for disruption, and I hate that word because it's been co-opted by technological discourse, but disruptive elements in our datasets, so that we don't just see everything as history repeating itself, while also learning from the lessons of history. But if we start to think like algorithms, we're also in trouble.
Tracey Follows 41:40
It's like, what's the opposite of being blessed by the algorithm?
Dr. Beth Singler 41:44
Oh, it's being cursed by the algorithm. There are people who say that too. In the corpus I looked at, it wasn't as significant a number as "blessed by the algorithm", but it's the same idea. That's with the interpretation of AI as being the superpower that makes decisions. You could be literally cursed by the algorithm in the sense that it makes decisions based on its parameters. But the way in which that narrative was being used was more that there is almost a powerful demonic entity that's deciding to do bad things to me. So we can make that distinction between the overblown narratives and the more practical outcomes of people's personal mini-apocalypses: when, say, an insurance algorithm decides against them, or a mortgage algorithm decides against them. And that's not because it has a pure agency and consciousness; it's not deciding that it dislikes this person in particular. It's just looking at demographic groups and making very broad generalisations.
Dr. Joshua K. Smith 42:49
Yeah, I feel like that's a human feature as well, though. My father-in-law was a farmer, and he was denied a loan for the farm. There was a government initiative where they wanted to give it to a black farmer, and for three years no one applied. This was in the late 80s, early 90s. And they basically said, we won't give it to you because you're black. So looking for that particular pattern, I think that's a very human thing.
Tracey Follows 43:24
Yeah, it is, absolutely. Which is what I'm saying: we have to be careful we don't fall into that kind of thinking, but also that we don't reinforce it with our algorithmic systems. Like we were talking about earlier, with the view of AI as having perfected rationality, we're only going to reinforce those kinds of unfair systems if we think it's actually making better decisions when it's making the same kinds of decisions that we've made historically.
Dr. Joshua K. Smith 43:50
Yeah, you know, instead of using a human to force your will, you're now able to hide behind the veil of your code. And I think that's just going to be a part of it in a capitalistic economy. I think there are ways to mitigate that, but I don't know how realistic they are. I mean, I hope for them to be a reality, and I look at barristers like Jacob Turner, who's written a book called Robot Rules, which is about AI and how to regulate it. But again, it's going to depend on government agencies to ensure it happens, so I don't know how realistic it is if we're not a human-machine team, ensuring that this doesn't happen and being accountable for what the system does. It's just like in the gig economy, with, was it Uber or one of them? The AI just started detecting fraud, and it started firing people. The AI did.
Tracey Follows 45:07
Well, in the UK we had a very bad case with the Post Office and the algorithms they were using to judge whether the individuals and couples who were running post office branches, a sort of franchise, were committing fraud or not. Lots of very innocent people got accused, and there's been a whole massive court case where they've tried to get restitution for it. Yeah, same sort of idea.
Tracey Follows 45:31
I have two pressing questions I really wanted to ask you. The first one is: what about Satan? We often talk about AI having these godlike tendencies and superpowers, or omnipotence. But there seems to me, and you can put me right if I'm wrong, Beth, to be an awful lot of a kind of occult aesthetic around at the moment. And also all of this chat about conspiracies, and secret societies. What is all that about at the moment? Have I just woken up to it, or has it just been around forever?
Dr. Joshua K. Smith 46:11
Yeah, I always love that. I think what we have a tendency to do as humans, even without AI and all this, is that anything bad that happens in our life, it's like, that was the devil or whatever. I hear people say that, but theologically it's nonsense, because, at least in my theological world, everything happens under the sovereignty of God. So even Satan is under the sovereignty of God; he can only do that which God allows him to do. It just doesn't make sense to use him that way. Satan is just a catch-all for things we don't understand or don't want to wrestle with. I see it as a laziness to lump everything in that category. But like Beth was saying, there is a lot of religious language in all of this stuff, even in all the futurism, all the crazy rich white guys, and I would include Elon in that: this vision of overcoming, that technology can overcome things. I find a very different narrative in the works especially of female theologians and others, and even roboticists have a very different emphasis, on caring for society versus escaping. Anyway, you can read more about that in my work if you're interested. But yeah, I do think there's a tendency to blame things that we don't like on Satan, and we've always done that, by the way. I think it's just a part of our nature to ascribe to him anything we don't like or figure is supposed to be bad.
Dr. Beth Singler 46:11
I need another hour or two to delve into that. But obviously, as I said, there are continuities of religious narratives, tropes and images when people discuss technology. And I mentioned Elon Musk talking about how AI could be like summoning the demon. That's absolutely an aspect of it. I don't approach this theologically; perhaps Joshua would like to do that. But I look at the communities of thought around those ideas, and at some of these very strict, moral, utilitarian rationalist groups who start talking about AI as basically evolving into something demonic. It's very complicated and I won't go into it now, but in thought experiments like Roko's Basilisk, which describe the ultimate superintelligence, the singularity AI, as being capable of basically punishing people in hell forever, they draw very much on existing religious narratives: their conception of a very simplistic view of monotheism, which has space within it for Hell, and evil, and Satan and all those things. So for some people, that's a very strong reality. If you look at the overlap between groups like QAnon and some conspiracy theorists, and their views of transhumanism, which again is a whole big subject we could get into, for some people transhumanism is from Satan. This idea that through technology we can enhance the human being, we can extend our lives, we can perhaps even do away with death, is seen as a satanic purpose by some people in these kinds of groups: that it's mocking God to do any of these things. Even to create AI that was as intelligent as a human being could be seen as mocking God in some of these interpretations. So yes, absolutely, there is a lot of this strongly, biblically inspired apocalypticism, even in some groups that are very secular.
They will say they don't agree with religions, but they'll then use some of the same structures and ways of describing AI as a superintelligence as being in some ways demonic. That, to me, is fascinating, and I have written various things on it if people want to dig around in my work. But I don't, as I say, do that theologically. So Joshua, if you want to jump in and talk about Satan, that's up to you.
Dr. Joshua K. Smith 50:09
And I think that gets at, in a roundabout way, that all this stuff, all technology, all of its efforts, is in some way or another a religious effort. I think the Enlightenment has failed in that regard; you can't remove that part of our brain, of our thinking, from people. No matter what system you're in, all people have a worldview that they operate in and that they're trying to find meaning and purpose in. And I think in a lot of ways people are very religious about that, and they incorporate aspects of different thoughts and worldviews into it. So ideas about heaven and hell, words like good and bad, I'm a good person, not a bad person; you just dig around and say, what do you mean by that? And even with technology, just my personal opinion, it's not wrong to create things, as long as it's not oppressing, hurting, or enslaving another, and you can keep it accountable; there are lots of biblical principles behind that. But the biggest thing that irks me about this conversation is when people use scripture, or the Bible, or any worldview, and they don't understand that reading is an ethical project. That's where so much terrible stuff comes from: taking a book like the Bible, which is a very dangerous book if you can't read well, and then taking views about technology. That's a very dangerous combination. The Bible is dangerous just by itself, right? Lots of women and children have been killed just in the name of God. So with all this stuff, we have to be ethical readers, and we have to be very diligent about why this person is saying this. Why is this billionaire saying these certain things about AI, and now he's making an AI humanoid robot? People are much simpler than we think: it's about money, right? It's all about money.
And the Bible has something to say about that: the root of all evil is the love of money. So you can kind of see how it connects there; there's some wisdom in that. And if you're caring about your neighbour, if you're caring about creation, sustainability: we're the only creature that depends on nature and animals and plants in the way we do. Everything else could just be, but we can't, and yet here we are mining the earth for all these technologies that we may or may not need, taking advantage of different cultures and bringing them into our imperialistic endeavours. So it's a big mess, really, it's a big mess, but it's all theological, it's all spiritual. And I think at the end of the day we just have to stop saying it's Satan's fault, some abstract thing that we're pointing to, and instead look in the mirror at our own desires and deceptions and say, how am I contributing to this process?
Tracey Follows 53:22
Very good. Interesting. It makes me think of, I've got Douglas Rushkoff's book here, have you read it? Survival of the Richest. It ties into a few of those threads we've been talking about, and one of the bits of blurb on the back says: "Survival of the Richest is more than a primer on a soulless worldview pervading all aspects of life. Defying fantasies of escape, from each other, from earthliness, from Earth, Rushkoff offers something at once more realistic, more imaginative: mutual regard, responsibility and flourishing." It's kind of a funny and tragic book when you read it, but it's well worth doing; it makes those points. And it brings me to, I guess, my final question, though I'd love to speak to you for ages about this. I'll ask two questions; take whichever one you want, really, because they are connected. I was wondering whether AI will be more successful in the future in a secular society like China than in one that's perhaps Judeo-Christian, or has a tradition of Catholicism or something like that. Or the other way of thinking about the question is: fifty years from now, will we be less religious or more religious as a society?
Dr. Beth Singler 54:43
Well, I want to tackle the secular versus religious distinction, because as a critical religious studies scholar, that doesn't really hold a lot of water for me. We've been talking a lot about narratives, and the secularisation narrative is another one that emerges out of the Enlightenment. It's very connected to our view of the possibility of a pure human rationality: that religion will decline and go away. I like to say that I'm the person who's read Dan Brown's Origin five times so you don't have to, and it's the same story, that AI will destroy religion because it will make us more intelligent, more rational, more adult, more grown up. So on your question about whether AI will be more successful in a secular region: yes, I'd debate whether China is a secular region; it depends on how you frame religion. In Western culture we frame it around, as you say, the Judeo-Christian, which again needs a lot of unpacking, but the kind of monotheistic model, when actually there's a lot going on, including some of these transhumanist ideas in the groups we've been talking about, that is doing religious things without calling it religion. So I agree with the point Joshua made earlier that religion is a part of who we are; I would say we just call it different things at different times. And whether or not we call something a religion is an ideological act, which has power in it: you can be either positive or negative about calling something a religion. So to go to your second question, because I want to deal quickly with both of them: I don't think it's possible to see a decline of religion, except if you define it in very specific ways. That's where all our graphs about the decline of Christianity come in, but that's because we have a very specific model of what religion is.
So no, I think in the future religion will be just as important, in its multitudinous forms. It will be a question of whether we call it religion or not, and AI will be entangled with that, because religion is entangled with society, and so is AI.
Dr. Joshua K. Smith 56:53
I mean, I pretty much have the same opinion there. I do think that places like Japan, I know it's ostensibly more atheistic, but you really have to delve into the indebtedness to Shintoism there. And I think that has led, and will lead, them to more integration of robotics especially, where it becomes a part of the family system, in a way. So I think it just deals with,
Dr. Beth Singler 57:28
I just want to quickly pick up on Western animism, because it never gets a mention.
Dr. Joshua K. Smith 57:33
Yes, yeah.
Dr. Beth Singler 57:34
We always talk about Eastern animism, and Western animism is forgotten, but we have strong animistic roots and they are still there.
Tracey Follows 57:40
Why do we ignore them though? Why do we ignore that?
Dr. Beth Singler 57:43
Because again, it's that metanarrative, my term, of the rationalisation effect of the Enlightenment: that we, quote-unquote, "in the West" became more serious and sensible when we did away with religion, but them over there in the East, they didn't. I guess it's an othering; it's Orientalism, you could say. I'm not saying Josh was wrong; I'm saying it's to reflect back on ourselves and say, we also do these things.
Tracey Follows 58:12
We just haven't categorised it in that way, or noticed it.
Dr. Beth Singler 58:15
Exactly. Yeah. We have robot priests in Japan, and we have robot priests in Germany. We've done things in similar sorts of ways, because we're excited about robot priests.
Dr. Joshua K. Smith 58:28
No, I think it's all the same. There are a lot of First Nations in this area, and it's everywhere; it's not just in the East. I think it's a shame that we only think about it that way. It's so fascinating to me to look at these different cultures, cultures that are part of our family trees in the West. But then, like you're saying, the Enlightenment has told us we have to have this one particular perspective, that faith and reason can't possibly go together. But there's faith and reason in all things that we do. I have faith when I get on an aeroplane that it's not going to crash, that the pilot's not going to be drunk. So faith is a part of our lives; it's just how we want to talk about it and integrate it, like Beth was saying. But I do think we will slowly get to where people are in the Eastern perspective; I think we'll open up to that more, is where I was going. Japan is a little bit ahead of us, China is a little bit ahead of us. But I see this kind of melting pot happening in different societies, more like an integration of different perspectives. I've talked to people about religious things especially, and they're picking parts of Buddhism, or parts of this religion, and saying it's theirs. So, you know, I'm Baptist, and we talk a lot about our people leaving the church, and it's not that they're leaving the church: they're leaving this church, yes, but they're just picking up on different parts of religion that they maybe had never heard before. So maybe they weren't taught about animism, but they actually have a lot of those beliefs, like Beth is saying, and a lot of those practices. And then in the cultures where it is a part of their heritage, now they're becoming more atheistic.
So maybe this reversal is happening, and we're in a cycle now, I don't know. But I'm not of the perspective that religion is going anywhere, or that we're becoming less religious. If anything, I think we're becoming more religious, given some of the things that we believe about AI and especially about robots. We're just kind of hodgepodge-picking what we want to believe now, and we're no longer just fully accepting our grandparents' and our parents' religion. And I don't think that's a bad thing. I think it could be a very positive and beautiful thing for people to really delve into what they believe and to wrestle with that, not to reject their culture or anything, but to really challenge it. I mean, there are lots of things that I love that are not part of my culture and not a part of my heritage, but I've learned a lot from them, and I see a lot of beauty and wisdom in them. So I encourage all people in this. One of the things we used to do when I was a church planter in the northeast: there was this one road called Highway to Heaven, and it had a Vedic temple, a Buddhist temple with monks you could talk to, a Jewish temple, a Sikh temple, side by side by side. And we would take people there and say, look, ask them questions, whatever you've got, whatever you would want to ask me, ask them, and we'll look at their answers together and respectfully dialogue with them. And I think people always walked away from that more solidified in what they believed originally: this is what I believe. So I think there's a purpose to that, and it's great to dialogue with people who are not like you, and to be open to dialogue.
And I don't know if Beth has encountered this, but I have in these circles: not a lot of people want to talk to a theologian about some of these issues. And I'm like, why wouldn't you want to talk to an anthropologist? This person knows a lot about history and traditions and how we assimilate these ideas together. These are the people you should be talking to. Not to say anything bad about computer scientists, but I'm like, what do they know about some of these things? And they're like, well, what do you know about computer science? I'm like, I don't know, I can learn; we can dialogue about them. But there's just this tendency to put everybody in a particular box and, like Beth was saying, to other people, and I think we just need to resist that as much as possible and give everybody a face, so to speak, and a place at the table. I think the future of this is very religious, very open, a widening circle. People don't like that, though; there are a lot of people who want homogeneity and sameness. But the beauty of all this conversation is that it challenges perspectives. It forces people, especially Westerners, to deal with their, I guess, marriage to Descartes, and to really challenge that: why did you reject Aristotle so much? There are reasons to, but there are also reasons we're still married to him as well. So how are we dealing with that? Not so well.
Tracey Follows 1:03:56
Well, on that theme of exploring rather than having things prescribed for you: where do people find out more about your work? Because obviously this is a very in-depth subject. We've covered a lot today, but of course we can't cover everything; this is the tip of the iceberg, really, but it's fascinating. So where would you point people to, both of you?
Dr. Beth Singler 1:04:15
Well, I have a very comprehensive website with all my publications, my press bits and pieces, and other podcasts, so I'm very Googleable. I'm also a social media addict; that's where I do a lot of my research, that's my excuse. I am on many platforms, though I'm debating leaving Twitter, but I'm very easy to find. And hopefully in the next year I will have two books out on religion and AI, one an edited volume and one just by me, so there are things coming. But if you want to see some of my publications, my website is the best place at the moment.
Tracey Follows 1:04:52
Did I see you tweet out something today about aliens?
Dr. Beth Singler 1:04:55
Yes, I've just done a big one. I don't actually know how much of me they'll use, but I was interviewed for a BBC documentary on first contact with aliens, from the perspective of an anthropologist, thinking about how encounters with aliens are in some ways parallel to our encounters with what we perceive AI to be becoming. So there was a bit of that discussion, and the societal impact of first contact.
Dr. Joshua K. Smith 1:05:19
Joshua K. Smith is my website, with not as much material as Beth by any means; I'm still kind of new to the game, new to the table. But I have three works coming out in the next couple of years, so be on the lookout for those. I'm not really sure when and where they will be published, but I'm definitely excited about some of the publications coming out, on AI and warfare, AI and Christian theology, and AI and Christian ethics. So if that's your cup of tea, you'll be interested.
Tracey Follows 1:05:57
Absolutely. I think between the two of you, you've pretty much covered everything. Next time we do aliens, right?
Dr. Joshua K. Smith 1:06:04
Yes. Yeah,
Dr. Beth Singler 1:06:05
That's a deal.
Dr. Joshua K. Smith 1:06:06
I want to hear that. Yeah.
Tracey Follows 1:06:07
Thanks so much for your time both of you. It's been a pleasure.
Dr. Joshua K. Smith 1:06:09
Thank you, Tracey.
Dr. Beth Singler 1:06:10
Thank you very much. Thank you both.
Tracey Follows 1:06:18
Thank you for listening to The Future of You podcast, hosted by me, Tracey Follows. Do like and subscribe wherever you listen to podcasts to make sure you don't miss a single episode. And if you know someone you think will enjoy this episode, please do share it with them. Also visit www.thefutureofyou.co.uk for more on the future of identity in a digital world, and visit www.futuremade.consulting for the future of everything else. The Future of You podcast is edited by Big Tent Media and produced by Emily Crosby Media.