Episode 9: "Interview with Paula Boddington, Author of 'AI Ethics: A Textbook'"
Episode description:
In this episode I speak with Paula Boddington, Associate Professor of Philosophy and Healthcare at the Geller Institute of Ageing and Memory, University of West London. She is also the author of the newly released AI Ethics: A Textbook, published by Springer in March 2023.
AI Ethics: A Textbook:
https://www.amazon.com/Ethics-Artificial-Intelligence-Foundations-Algorithms/dp/9811993815?&_encoding=UTF8&tag=scotpansing-20
Episode script:
SCOT: Hello, everyone, and welcome back to AI QuickBits: Snackable Artificial Intelligence Content for Everyone. My name is Scot, and today I'm going to be speaking with Paula Boddington, Associate Professor of Philosophy and Healthcare at the Geller Institute of Ageing and Memory, University of West London. Paula is a philosopher by background and has done a variety of interdisciplinary work, especially in ethics. She worked in medical ethics, genetics, and genomics before starting work on AI ethics in 2015 at Oxford University, and she is the author of AI Ethics: A Textbook, published by Springer in March 2023. Paula, thank you so much for speaking with me today. How's it going?
PAULA: Oh, fine. Thanks, Scot. Thanks for having me on. It's great to talk to you.
SCOT: Absolutely. I'll start off by saying I'm one of these people who was a little sad when the word “literally” started to take on a bit of a different meaning. There's this phrase people sometimes use to describe someone's expertise: they'll say this person “literally wrote the book on something” when they maybe didn't really write a book. But we can say, “you literally wrote a book called AI Ethics: A Textbook,” which is a wonderfully simple title, and I would love to talk to you about it today. So the first thing I would ask is, why did you write the book? And who is the intended audience? Is it everyone, or students, or anyone developing AI products? Could you tell me a little bit about that?
PAULA: Okay, great. So I first started working in AI in 2015, actually, when I was at Oxford University. A group of three of us, Mike Wooldridge from the Computer Science Department, Peter Millican, who's a philosopher who also works in computing, and me, were successful in getting some funding from the Future of Life Institute. I don't know if you remember, but around that time lots of people like Elon Musk and Stephen Hawking were starting to get very worried that AI might gain too much control, and Elon Musk put a lot of money into the Future of Life Institute for projects looking at AI ethics. I was working on one, looking at how we might develop codes of ethics for AI, and in the course of doing that I realized that, of all the many practical questions in ethics I'd looked at over the course of my career, the questions in AI ethics are by far the most profound and by far the most exciting and challenging. Because, important as it might be to think about regulation and codes of ethics, as soon as we look at it, it opens up all sorts of really fundamental questions. For example, when we're trying to compare whether you should use a human being or an AI for a certain task, that raises questions about what it is to be a human being. Why might a human being be preferable to a machine? Why might a machine be better at certain tasks? We have to ask really important questions about human nature, really deep questions about ethics.
And I've also done some teaching on master's courses about philosophy and AI, looking at AI ethics. The more I did it, the more I saw the need for a book that gave an overview of the different questions and showed how lots of disparate issues line up.
So that's quite a long introduction, but as for who the audience of the book is, I suppose I share your sadness at the loss of the term “literally.” It was a bit cheeky calling it AI Ethics: A Textbook, because it makes it sound as if I have written the textbook. But one of the things I really want to make clear is that, in a sense, there's no such thing as the textbook. What I really wanted to do was write a contribution that would assist people from all different backgrounds in thinking about these issues, so that people can join in conversations, see where the links are, see where they might contribute, and see what further questions need to be asked. Because we all need to look at these questions together. So it's nothing at all like, I'm giving you the answers, what's the answer to all these ethical questions? My real concern is to try to contribute to the ongoing conversations that we need to be having.
SCOT: Fantastic. Could we cover some of the many themes regarding artificial intelligence and ethics? I could rattle off a couple, like bias or misinformation. And even within those, there's systemic bias that a human may put into the model, or bias that might creep in when they're creating an AI product, or even computational bias that comes out of the machine itself. Also misinformation. There are so many. Could you speak to some of the overall areas the book covers and the questions it asks?
PAULA: Oh, yes. Okay. So questions about bias and misinformation are, of course, really critical questions. As a philosopher, one of the things I'm most interested in is looking at the underlying issues behind them.
So, for example, when we're looking at bias in the world, a lot of the work on bias is going to be looking at how we can make certain there isn't bias against what we now know of as protected characteristics, protected groups in society. This is the kind of question I've found really interesting to discuss in class with students, because how we think of particular protected characteristics is actually really new historically, and it's really important to note that it varies from country to country. So we then have to ask why we think that certain characteristics form what a philosopher might call a natural kind.
Race and ethnicity is a really interesting one. I've used this as an illustration in class. You can see from looking at, say, the US census that how race and ethnicity have been carved up and understood has changed over the years. And if you look at different parts of the world, there are different ways of grouping race and ethnicity, which have varied depending on particular circumstances. So the question about bias is critically important, but underneath it there are whole accretions of value systems, history, and particular circumstances that need to be unearthed.
So one of the things that's really brilliant and interesting about AI ethics is that it makes us do both. We can look at the technological side, the technical details of how we might combat bias in algorithms, which I've only covered in outline in this book because I'm not a technologist, I'm a philosopher. But at the same time we also need to be asking these deeper questions. Does that make sense as an answer?
SCOT: Yeah, I actually had an example that maybe we could drill into here. I'm not saying we need to come to an answer, but speaking of different cultures and countries dealing with things differently: for example, there are people making AI applications, chatbots, that mimic dead relatives for people who are grieving. And my initial reaction, and I think that of many people in the Western world, in Europe, the United States, and so on, would be that this is maybe not healthy, this is not really helping the grieving process.
But in other countries, maybe Mexico with the Day of the Dead, or India, different cultures experience death and grieving in different ways. And I sort of asked myself, well, who am I to say? I don't really know how another culture might react. Maybe that product would be really popular in another country and wouldn't be popular in this one, or something like that. Maybe that's an example.
PAULA: Yes, that could be an example. Another example around that is that it's often said that in Japan robots are much more acceptable because of Shinto beliefs. But actually, this just goes to show how much more we need to know and how many people we need involved in this. I was talking to somebody the other day who said that a lot of the production of, and interest in, robots for elder care in Japan is going off the boil now, because it hasn't been terribly successful.
But also, we can think about it in terms of, say, American culture. You can probably tell from my accent I'm British.
SCOT: No!
PAULA: I'm British. Yes. (laughs) I think most people would agree that we as a culture don't really deal with grief terribly, terribly well. And of course, part of the reason is that, at the same time as this technology is rising, there have been huge shifts in things like religiosity, huge cultural shifts with the immigration of different groups, and really big social and demographic changes. So one of the things we would need to think about, in our two actually similar but also very different cultures, is whether, by assuming a problem is a certain way, we could introduce the technology to solve a problem that we haven't really analyzed.
I can remember, for example, when I was at college, a Canadian friend lamenting that it seemed to be standard in her country (others might disagree) that following a death your doctor might prescribe you something like Valium or some other sedative, rather than letting you go through the grief. So there are those kinds of questions about how we actually address grief.
It gets back to the question you raised so brilliantly by asking about trying to mimic the dead. What is death? How do we even understand it? How do we understand what it is for somebody not to be there anymore? I mean, somebody can fairly easily develop a technology now to do something. And sometimes you feel as if people develop their technology and then think, oh, what can we use this for? Rather than the other way around.
SCOT: Yeah. I think right now, certainly, the large language models are so good at conversation that it seems human-like.
PAULA: Yes.
SCOT: You can see already that the fact that these products can simulate emotion makes it so tempting to build things that appeal to customers, because people desire connection, emotional connection. I would think they desire positive human emotional connection. But it feels so human, and these products are coming online, and I believe that, left completely unregulated, companies will tend to push the limits of manipulating people with simulated emotions, whether that's to get them to buy products, or to profit off their loneliness, or whatever it may be. How do you feel about the fact that humans are bound to anthropomorphize these products, especially when many of the people making them will push the limits of the emotional simulation, since it seems so real? What are the ethical questions around that that you pose in the book?
PAULA: Well, now I'm trying to think about whether I actually posed them in the book, because a lot of the time I'm just raising questions. I don't know if I posed this particular question, but maybe I did, because I'm also doing work in relation to the care of people living with dementia, so in the book I use quite a lot of examples of technology for people living with dementia.
One of the reasons I did that, actually, is precisely because I think we need to look at how the human being is treated. And people living with dementia are a kind of test case, if you like, because sadly, in our societies, they're very often dehumanized. The care they receive is often really not terribly good. But there's also a huge market for technology. So a piece of technology might be, say, remote monitoring, or something that provides companionship during the day. Then there are really big questions about why the need for that arises in the first place. What's led to societies where there are so many people who are older and lonely and isolated?
There are also questions about what actually counts as real human contact, and I think we can all appreciate that from the lockdowns. It's absolutely fantastic that there you are, eight hours distant from me in California; you've just started breakfast and I'm just thinking about what I'm going to have for dinner, and we've never met before, and we can communicate. But the question is how much this is a substitute for actual communication. And I think there are going to be individual differences and individual styles here.
But actually, the pandemic has been quite a good test case for trying to do research on what actually helps to combat loneliness. In fact, it's been a kind of really good empirical experiment, one you could never get ethics approval for. You'd never get approval to isolate people in their apartments for weeks on end and have them communicate with each other through Zoom calls. But we've had this experiment, and I think we're going to find that it serves some of our needs and not others.
I don't know how you feel about that, but I personally feel that one of the things we really need to think about is the fact that we're embodied, physical human beings. We're biological creatures, and we have a need for physical contact with people. As well as talking through language, there's so much we can communicate just through the physical proximity of other people. And in fact, in relation to things like virtual reality, somebody whose work I find really interesting, and I hope I'm not going to pronounce his name incorrectly, is Jaron Lanier, who's done a lot of work in VR.
SCOT: I've heard the name.
PAULA: I think you might find him really interesting, actually. One of the things he talks about is how great VR is, but that one of the best things about it is taking the headset off and realizing how fantastic, how detailed, how amazing the actual, real world around us is.
One of the things that was really noticeable, and I don't know what it was like where you are, but over here, when the lockdown started to ease and we were allowed out, there was one day when the government said you could now go out and meet other people in groups of six. I live really close to this fantastic park in London called Battersea Park. It's the best park in London; it's on the bank of the Thames and it's three quarters of a mile wide. And I went to Battersea Park, and there were groups of six continuous from one side of the park to the other, and there were champagne bottles all over the place, and people were having parties. There was just one big party in the whole of Battersea Park. It was kind of like when you let the cows out in the spring.
So I think what we need to do is get a mature understanding of what the technology is good for, what it's not so good for, and what actual harms it might be causing. Because one of the things a lot of the technology does, of course, as you know, is hack our weaknesses. It hacks the ways in which we're easily addicted to things; it hacks our reward centers; it hacks our attention. Those are some of the things I'm really, really concerned about.
SCOT: 100%. I think that's a great way to put it. Like I was saying, the manipulation of people via something that really feels like a human talking to you is a big concern.
I think you're right about the pandemic also. It showed that lots of people experienced it in different ways and took different things from it. A lot of people enjoyed having some time back to themselves in their homes, and now that things are opening up again, they're saying, hey, I don't want to go back to going into the office five days a week. But this great experiment that, like you said, we could never have run deliberately also showed that some people were like, I've got to get back to how it was, I want to go into the office five days a week, I need to see people.
But other people, maybe it really helped them to be able to disconnect a little bit, not entirely, but to have a couple of days a week to themselves where they can stay home and work and not have to confront everyone in their life. I'm not sure. But I think it did show that there was a wide variety of reactions to the lockdowns.
PAULA: Yes. One of the many questions we need to think about with technology in general is whether it's locking us into particular ways of doing things, because when technology comes along, it can be really difficult to remember how we used to do things. You can assume that new ways of doing things are better, and forget that there are things you've lost from the past.
This isn't AI per se, but one of the struggles I have with students when they're writing essays is the very fact that they tend to compose everything on a computer. So you have just the computer screen in front of you, showing just a small section of text. And I'm always trying to convince students to try writing on a piece of paper on the table in front of them. I compose stuff on massive sheets of old-fashioned 1980s computer paper, using pencil or fountain pen, so that I can see everything in front of me. But that's just a small example of how a certain way of thinking, constructed by technology, can become an accepted part of our infrastructure.
Take the way virtually everybody's now got a smartphone, so when you walk along the streets, virtually everybody is looking at their phone instead of at the environment around them. It's those kinds of things with the technology that I think we need to be thinking about. Because you're right, people reacted differently, partly depending on who they were living with. Are you on your own, for example? It must have been the same where you are: there were lots of people stuck in flats with no balconies, with children at home they were trying to homeschool, working and getting terrible backache from sitting with a laptop on their knee while their husband is in another room and they're shouting at each other because they can't both hear. Those sorts of things are not terribly good. What we need is the capacity to think about what suits different people's circumstances, rather than thinking that we've all got to operate in a particular way.
SCOT: Yeah, the tech is not always better, even though sometimes it appears more efficient. Similarly to your writing example, even though using paper may not be the most environmentally friendly thing, when I have an article I want to read, a lot of times I'll just print it out so that I can step away from the computer and go sit in the other room with the natural sunlight hitting the paper. I find it way easier to absorb the material and enjoy the article than reading it even on the nice big monitor on my desk. Sometimes I find that a much better experience.
So, I would like to get into the book a little bit more. Another thing I would like to ask you, if you wouldn't mind, if you're comfortable: you have a book on AI ethics, but there are different ethical frameworks, and when someone takes an intro to ethics course or starts to research ethics, there are these themes that come up.
And I'm not asking you to distill an entire college course into our conversation, but if you wouldn't mind, could you explain a little of the differences between consequentialism, deontology, and virtue ethics? Those seem to be the big three. I know there's a lot more to it, but if you could go into how those overlay onto your book, I think that would be helpful for my audience.
PAULA: Yes, great. Of course, one of the big questions we have in ethics is whose ethics, and what ethics? And especially, one of the big questions we have in AI ethics is whether certain ways of doing things, coming out of certain big corporations, get imposed on everyone.
Because a lot of the AI that we've got now comes out of where you live, actually, Scot: out of big tech corporations in California. So we need to make certain that we don't impose one particular way of thinking on the rest of the world. We're really, really conscious of that, and we should be.
One of the approaches I took in the book was thinking, well, I can't possibly write a book that takes into account every single different ethical approach, because who am I to do that? I'm just one person, and I don't know it all. The approach I tried to take was thinking: let's look really deeply at ethics the way we've done it, say, broadly in the West, and try to look a little at the history of it.
That's why I put in some material about the history of AI, a little about the history of ethics, and a bit about the history of how we've looked at technology; that helps us to understand where we are and where we've come from. And in terms of ethical theory, yes, you're right. What's distilled out of current thinking in ethics, when we're trying to answer practical questions, are three main broad approaches: consequentialism, virtue ethics, and deontological ethics. They can often be used as competing theories, but very commonly they're seen as different, complementary ways of looking at ethical questions.
So, to outline really briefly: in a sense, it's quite easy to grasp the differences between these theories, because they focus on different aspects of asking ethical questions, different aspects of what matters when we're looking at ethics. Consequentialism is the one I perhaps spent the most time on in the book, partly because it fits very well with a lot of the ways people think about ethics in AI. It's perhaps the easiest to grasp, because it simply says that when we're making a moral choice, when we're making an ethical judgment, the only thing, or the most important thing, we need to look at is what's going to happen as a result of our choices. What are the consequences going to be? The simplest approach to this is what's known as utilitarianism, which started back in the 18th and 19th centuries. Utilitarians like John Stuart Mill, for example, were trying to work out the greatest happiness for the greatest number: what's going to produce the best overall consequences of our decisions?
To think of an example: if we take some question like capital punishment, the death penalty, you would say the most important question is whether, if we have capital punishment for serious crimes like murder, that is going to result in a decrease in the rate of murders, for instance. Is that going to be the best result overall? That would be an incredibly simple account of consequentialism, because there are lots and lots of different approaches to how you might actually fill out the details.
Deontological systems of ethics are rule-based systems, where certain sorts of actions are permitted or prohibited. The Ten Commandments would perhaps be the best example of that. Or there might be rules like “you shall not murder,” or systems of laws and statutes: rules against murder, rules against stealing property, rules against various kinds of sexual offenses or property offenses, offenses against the person. And certain things may be obligatory rather than prohibited.
So, for example, in some countries it's obligatory to vote; in other countries, it's up to you whether you vote or not. There are different ways of looking at it: things that you have to do, and things that you're prohibited from doing. And quite a lot of the so-called codes of ethics for AI that have been put forward at the moment are actually a mixture.
Some say that AI should benefit everybody, which is a very, very broad consequentialist approach. And some have some kind of rule-based element, for instance that you should try to make your AI systems as transparent as possible so that you can give an explanation of what's going on. So you've got both: looking at the consequences, and looking at what sort of action you're performing.
To contrast the two: you might think that if you've got a rule in favor of protecting people's individual property, you can't take somebody else's property even if you think it would benefit you, because it belongs to them. But on the other hand, there are often exceptions to that. For example, if the only way you can prevent some grave wrong, like somebody being murdered, is to take somebody else's property, then you'd probably be permitted to do that.
My favorite example of that, actually, and it was probably reported in America, is that not so long ago there was an attempted terrorist attack on London Bridge. Do you remember hearing about that? Somebody took an ornamental narwhal tusk off the wall of the building he was in, a really valuable narwhal tusk, and used it to attack the terrorist, and he managed to subdue him with it. Obviously it wasn't his property, but taking it was clearly permitted. That would be an example.
SCOT: Well, I was just going to say, as we wrap up here, I think something we can talk about that is often used in ethics discussions, the trolley problem, has a direct correlation to AI, especially when you get to self-driving cars and all of these things. The trolley problem being: a trolley is a runaway, its brakes have failed, and if nothing is done the trolley is going to kill maybe seven people on a track. Someone at a switch has the power to divert the trolley to another track, where it will still go along and maybe kill one or two people, so fewer people dead. And so the ends justifying the means would be: I'm going to take the power into my hands, and only these few people will perish, to save a larger group of people.
But as I understand it, the deontological ethics part of it is that you're taking someone else's life against their will, and that is a universal rule that should not be broken. And therein lies the problem. So if a self-driving car is in a situation where it has to choose who is going to perish, this is the trolley problem becoming real. Not just an exercise that students do in class, but something actually manifesting itself. And it could manifest on a much larger scale, not just five people versus one person; there could be many more people's lives at stake.
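[A quick note for readers who like things concrete: below is a minimal sketch, in Python, of the contrast Scot describes. The scenario, the casualty numbers, and the "no actively diverting harm" rule are illustrative assumptions, not anything from Paula's book; it just shows how a purely consequentialist chooser picks the action with the fewest expected casualties, while a deontological side-constraint can rule out actively diverting harm regardless of the tally.]
```python
# Toy trolley-style decision: a consequentialist tally versus a
# deontological side-constraint. Purely illustrative assumptions;
# this is not a real self-driving-car policy.

ACTIONS = {
    "do_nothing": {"expected_casualties": 7, "actively_diverts_harm": False},
    "divert": {"expected_casualties": 1, "actively_diverts_harm": True},
}

def consequentialist_choice(actions):
    """Pick whichever action minimizes expected casualties."""
    return min(actions, key=lambda name: actions[name]["expected_casualties"])

def deontological_choice(actions):
    """Rule out any action that actively diverts harm onto someone;
    if every option is ruled out, fall back to doing nothing."""
    permitted = [name for name in actions
                 if not actions[name]["actively_diverts_harm"]]
    return permitted[0] if permitted else "do_nothing"

print("Consequentialist picks:", consequentialist_choice(ACTIONS))  # divert
print("Deontologist picks:", deontological_choice(ACTIONS))         # do_nothing
```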
PAULA: Yes. The trolley problem has been used a lot in relation to self-driving cars, and in fact it's one of the examples I look at in the book. The MIT Moral Machine experiment has used different variations of the trolley problem. One of the interesting things in relation to cars is that the trolley problem is deliberately about trolleys, to make it as simple as possible: they're on tracks, which self-driving cars aren't. So that's the immediate complication. I use it as an example to show how we could try to approach these questions, but then also to invite the readers of the book to look at what problems and challenges there might be in actually using it as a way of thinking about things.
In the book I've got case examples. It's designed as a textbook, so there are also lots of little exercises, which you can do or not do, because I'm also hoping that the general reader might be able to work through it as well; they just give people food for thought.
But trolley problems, actually, you might know, originate from the work of Philippa Foot, a philosopher working at Somerville College in Oxford in the mid-20th century; she later went to work in America. But I also suspect she might have got the idea from another philosopher of the time, who also taught me when I was there later on: Richard Hare, who had been a prisoner of the Japanese and had worked on the Thai-Burma railway during the Second World War, and who often used to talk about his actual experience of loose trucks coming down the lines and how they responded to it.
Why am I mentioning that? It seems a bit irrelevant now. But it really struck home for me, being taught by somebody who had worked under such appallingly inhumane conditions: we can think of these as little abstract examples that are fun to discuss in the pub or over a cup of tea, but they really make a difference, and they're really important questions. One of the dangers in AI is that we can think about it very much at a distance, but it's actually there affecting our lives, and it's going to affect our lives more and more, which is why I think we all really need to try to understand and grapple with it as much as we possibly can.
SCOT: I totally agree. I think we can end it there, because if we don't, this is going to be an extremely long discussion, and I look forward to speaking with you again. But I definitely agree that it's going to affect everyone. It's not just for techies to discuss over a pint or a cup of tea, like you said. And I really appreciate you speaking with me today.
I want to say that Paula's book, AI Ethics: A Textbook, is available now on all the platforms.
And Paula, I really appreciate you spending the time with me today and I hope you have a great rest of your week.
PAULA: Thank you very much. It's been great to speak to you, Scot.
SCOT: Talk to you soon.
PAULA: Bye.