James McGrath is a professor of religion at Butler University, but it’s really his fascination with science fiction—books, movies, TV—that led him to ponder ethical and moral questions about artificial intelligence.
Can a robot have a soul?
Does it have rights?
Can we actually make a machine like that? And perhaps most important, should we?
“You watch ‘Star Trek’ and you think that could be great,” McGrath said. “You watch ‘Terminator’ and you think, ‘This is a really bad idea.’”
These are the types of issues McGrath talked about with his Butler colleague, Ankur Gupta, an associate professor of computer science and software engineering who is also a science fiction fan.
“We had some common interests, and because of our different backgrounds and disciplines, we also have very different ways of approaching these questions,” McGrath said.
That led to their current project: a series of articles that will be turned into a book about the intersection of artificial intelligence, wisdom, ethics and religion as it impacts present-day and future technology.
Much of their work so far has focused on autonomous vehicles and the ethical questions about how they’re programmed, but they’ll be exploring a variety of issues before they’re done.
McGrath talked to IBJ about some of the key questions at the intersection of ethics and AI.
Lots of people get their thoughts about AI from science fiction, from movies, from TV. And sometimes that’s scary. Are those the things people should be worrying about?
I don’t think there’s anything wrong with focusing on those questions about the AI apocalypse and, “Could they take over the world and will they try to wipe us out?” And we should perhaps learn the lesson from “The Matrix” and “Terminator” and other franchises that if we treat [AI devices] as slaves and disregard them and mistreat them and yet make them more powerful than us, then we shouldn’t be too surprised if that comes back to haunt us. But I think those things are very speculative, and even in the distant future they may or may not be technologically feasible.
There are some much more present-day and near-future questions that I think fans of science fiction may be missing out on, that ethicists and computer scientists are not talking about quite as often as you might’ve expected.
So one issue that we immediately found our attention drawn to was the question of driverless cars, autonomous vehicles. And that is a present-day example of artificial intelligence. The sort of umbrella term that we’re using is artificial wisdom.
What is it about autonomous vehicles that drew your attention?
It’s clear that we can accomplish artificial intelligence at least at a rudimentary level. We have machines that can do things that before only people could do. It’s not human-level intelligence. There’s no sentience as far as you can tell. It’s not the sci-fi AI.
What does that word mean—sentience?
Self-awareness, personhood, personality. We’re able to program machines that can do interesting things. But wisdom seems to involve those aspects of human life where values, ethics and decision-making go beyond pre-programmed choices. And really, that’s the overarching theme of our exploration: What happens at the intersection between wisdom and computing?
Can you give me an example of a difference between intelligence and wisdom?
Sure. So it takes intelligence to navigate a street, whether you’re being guided by the street markings and signs, or whether you’re using GPS software. People can do this. Machines can do it to an impressive degree.
But think about a vehicle coming upon a situation in which there’s a car coming the other way and a person has stepped out into the street without looking. Do you hit the brakes, knowing that you probably don’t have enough time to avoid hitting the person? Do you veer to the right, risking your own life to save others? Do you veer in front of the other car, thinking it will cushion the impact? Do you even have time to think about these things in that moment, and how do you weigh the moral value of those different choices?
What if you recognize the people on the crosswalk and there’s more than one person that you could potentially hit because you’re trying to avoid a collision, right? … One of the people is pregnant, one is elderly, one is a child, one you recognize as a criminal. This one’s had a full life. This one, their life is ahead of them. This one has a second life inside them that hasn’t even had a chance to be born yet. How should they be valued?
These are the kinds of questions that we don’t like to ask as human beings for understandable reasons. Should we make any distinctions? Should we value a criminal less than a law-abiding citizen? The human mind can’t even compute those things in that moment.
But, in theory, a computer that’s driving a car will just follow some rigid rules. Or could it actually maybe have some facial recognition? Can it have some calculations it can do [to choose whom to hit]? What should we be programming this machine to do?
Thinking about these things in advance and prioritizing our values is something we don’t like to do. But doing that will probably lead to a better ethical outcome for us as drivers as well as help us to figure out how to program the machines.
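To make that concrete, here is a minimal, purely illustrative sketch of what “pre-programmed choices” amount to in code: every option the vehicle could take gets scored by numeric weights that someone had to pick in advance. The scenario, the weights and the function names below are hypothetical, invented for illustration rather than drawn from any real autonomous-vehicle system.

```python
# Hypothetical illustration only: "rigid rules" mean someone must choose
# numeric weights for outcomes long before the split-second moment arrives.

# Value weights a programmer (or regulator, or owner) would have to set in
# advance. The numbers here are arbitrary placeholders, not recommendations.
OUTCOME_WEIGHTS = {
    "occupant_injured": -10.0,
    "pedestrian_injured": -10.0,
    "oncoming_car_struck": -6.0,
    "property_damage": -1.0,
}

def score(option):
    """Sum the pre-set weights for every outcome an option is predicted to cause."""
    return sum(OUTCOME_WEIGHTS[outcome] for outcome in option["predicted_outcomes"])

def choose(options):
    """Pick the option with the least-bad total score: a rigid, pre-programmed rule."""
    return max(options, key=score)

# The swerve-or-brake choices described above, reduced to data.
options = [
    {"name": "brake hard", "predicted_outcomes": ["pedestrian_injured"]},
    {"name": "veer right", "predicted_outcomes": ["occupant_injured", "property_damage"]},
    {"name": "veer into oncoming lane", "predicted_outcomes": ["oncoming_car_struck"]},
]

print(choose(options)["name"])  # whichever choice the pre-chosen weights favor
```

The point is not the particular numbers; it is that someone has to write them down, and writing them down is exactly the prioritizing of values that McGrath says we prefer to avoid.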
That’s one of the big issues at the root of artificial intelligence—the programming. That programming is done by people, and at the moment, predominantly white men. How does that impact concerns about how these artificially intelligent devices are programmed?
Our work began intersecting not so much with the question you raised about the dominance of white men, but with the question of the ethics of the programmer and the potential for bias to come through in programming. There’s been a lot of work on algorithms and software used by police forces to decide where to patrol … for bank loans, advertising, where the metric is a data set that was fed in.
In these cases, the [concern is not] the way the software is working. It’s just doing what it was told to do with the data. And so the program may not be written with bias, but the data set reflects bias. The classic example is, you take a company that’s been around for 150 years, you feed the computer software information about all the people who have been the CEO during this period and say: So what are the characteristics that make for a good CEO, based on who’s been in that role in the past? Guess what? It’s going to suggest white men. Why? Because that’s part of the data set. The data set has a bias that’s woven into it.
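A toy sketch can show how that happens even when the code itself is neutral. Everything below is invented for illustration; the records, the attributes and the “training” step are placeholders, not a real hiring model.

```python
# Hypothetical illustration of the long-established-company example: a model
# trained only on who held the CEO job in the past simply reproduces that history.
from collections import Counter

# Invented stand-in for many decades of hiring records, heavily skewed on purpose.
past_ceos = (
    [{"gender": "male", "race": "white"}] * 28
    + [{"gender": "female", "race": "white"}] * 1
    + [{"gender": "male", "race": "black"}] * 1
)

def learn_profile(records):
    """'Train' by taking the most common value of each attribute in the data."""
    profile = {}
    for attribute in records[0]:
        counts = Counter(record[attribute] for record in records)
        profile[attribute] = counts.most_common(1)[0][0]
    return profile

print(learn_profile(past_ceos))
# {'gender': 'male', 'race': 'white'} -- the bias was in the data, not the code.
```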
Then should the programmers be compensating … to find a way to filter bias from the data set? Is it appropriate for search engine programs to be counteracting human biases? For a company like Google, it’s within their purview. They’re a private company. But we’re asking not just what can they do, what might they do, what might customers want them to do, but [also] what should they do? And those are challenging questions to answer because not everybody agrees.
Can you ever come to an understanding or an agreement about what that might be? Because we all have such different values.
One possible solution [for the automated car] that’s been proposed is to let the person in the car basically have a knob they can turn or something like that [to make choices]. Do you want it to be more altruistic? So if you can spare three lives, but it’s likely to cost me my own, do that. … Or if you’ve got the altruism cranked up all the way, you might turn it down a notch as you put your family in there because you’re thinking, “OK, in this case, I want them to be safe.” These are not easy decisions to make and we don’t like making them.
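As a rough, hypothetical sketch of how such a knob might work, a single user-set number could shift how much weight the car’s decision rule puts on its occupants versus everyone outside it. The parameter name, the harm estimates and the weighting scheme here are invented for illustration, not taken from any real system.

```python
# Hypothetical sketch of the proposed "altruism knob": one user-set number
# that shifts how much the car's decision weighs its occupants versus others.

def pick_action(options, altruism=0.5):
    """altruism in [0, 1]: 0 protects occupants only, 1 protects others only.
    Each option carries an estimated harm to occupants and to everyone outside."""
    def utility(option):
        return -((1 - altruism) * option["occupant_harm"]
                 + altruism * option["others_harm"])
    return max(options, key=utility)

options = [
    {"name": "protect occupants", "occupant_harm": 0.0, "others_harm": 3.0},
    {"name": "protect pedestrians", "occupant_harm": 1.0, "others_harm": 0.0},
]

print(pick_action(options, altruism=0.9)["name"])  # cranked up: spares the pedestrians
print(pick_action(options, altruism=0.2)["name"])  # turned down, e.g. family aboard
```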
Are companies that are creating autonomous cars—or are in the AI race in general—thinking about these issues?
Yes. And it’s increasingly common for ethics … to be required in [college technology] programs. Certainly there’s an encouraging increase in the number of publications related to this. And so it is being discussed in the realm of computer programming, in the realm of ethics more than it had been even a few years ago.
Search engines are the AI tool that most of us use all the time. They raise a lot of questions about bias.
Yes. And how do you address that and what does it mean to be fair and equitable? And there’s an analogy you can make with the democratic process, right? It can be just a popularity contest. If people who are not well-informed are clicking on particular things and those things are rising in popularity and that’s what everybody is seeing when they type in these keywords, whose fault is that and what should be done about it? It’s hard to answer.
If there are people who are unhappy with the outcome of an election … people say the system is broken and we should rethink it. But is the system not working? Or is the only real solution to get people to change their values, what they are doing and how they are using the system?
And so it really does raise interesting questions, not just about computer programming, but about things that are really at the heart of the effort to be a democratic society.
Does government have a role to play?
Potentially. And that’s one suggestion that Ankur Gupta and I are actually looking for an alternative to. There was a very important book by Safiya Noble called “Algorithms of Oppression.” One suggestion she made was basically to socialize search, to nationalize it, to put it in the hands of government and make it a public good. There are a lot of questions about whether putting things in the hands of the government makes them work better or more efficiently, makes them fair. Does it depend on who’s been elected?
What I would love to come out of this … is if we could figure out how to program an AI to evaluate these different approaches to ethics and compare the outcomes. And to some extent, an AI may actually be able to run simulations of different scenarios and say, “When everybody gets a chance to fend for themselves, let’s see what happens.”•