Moral Responsibility and Decision-making Computers

© 2023 Alrawashdeh, L. All rights reserved.

Citation: Alrawashdeh, L. (2023) Moral Responsibility and Decision-making Computers. Available at: https://www.oxford-aiethics.ox.ac.uk/moral-responsibility-and-decision-making-computers


by Lina Alrawashdeh (2023)


I believe that the importance of moral responsibility places limits on the kinds of decisions that computers should make. In this essay, I argue for this in three main steps. First, I introduce a distinction between two kinds of decisions, which I call original and derivative decisions. Second, I show that moral responsibility is closely tied to original decisions. Third, I argue that, because of the importance of holding decision-makers accountable, only moral agents should be tasked with making original decisions. Since computers are not moral agents, they should not make original decisions.


It can be helpful to think of decisions as falling into one of two categories: original and derivative. Original decisions require independent, and often creative, thinking. Derivative decisions aim to implement or conform to prior original decisions and, therefore, leave less room for free-thinking. An example will help clarify the distinction. When the governing body of a particular sport creates the rules, its members are engaged in the process of making original decisions. They are dictating what the game is and how it is to be played. On the field, a referee is also tasked with making decisions. But his decisions will be of a different kind. He is not deciding for himself what the game should look like. Instead, he is trying to implement the governing body’s vision. As a result, his decisions are derivative. We see original and derivative decisions everywhere. Lawmakers, for instance, are tasked with making original decisions. Judges, whose job is to implement existing laws, are tasked with making derivative decisions. When the university sets admissions criteria, it is making original decisions. When tutors try to decide which applicants best fit the criteria, they are making derivative decisions.


It is important to note that the lines between original and derivative decisions are often blurry. Many cases involve a mixture of the two. In writing this essay, I made many original decisions: about what to say, how to structure it, what examples to use, and so on. But I also had to conform to the rules of the competition. I could not, for instance, answer a different question or exceed the word limit. In that regard, my decisions were derivative. Nonetheless, the original-derivative distinction offers a helpful (if imprecise) framework. And in most cases, we can intuitively tell whether to consider a decision as primarily original or derivative within the context being discussed.


Typically, when a decision leads to an unfavourable outcome, the person who made that decision will be responsible. But in the case of derivative decisions, responsibility often (though not necessarily) traces back to the person who made the underlying original decision. An example can help illustrate. Suppose a young woman in her early twenties goes to A&E complaining of chest pain. There is a nurse there whose job is to record people’s symptoms and to sort patients into a queue depending on urgency. The nurse does not make these decisions on a whim. Instead, he follows a set of guidelines issued by the hospital. Because of the young woman’s age and gender, she is a low-risk patient for heart problems. Therefore, the guidelines dictate that the nurse place her quite low in the queue. While waiting, the woman has a heart attack.


Who is responsible for this? It is possible that no one is. Even under the most perfect guidelines, there will always be some patients who do not receive care in time. The goal of guidelines is to minimize, not eliminate, that number. But it is also possible that this woman was a victim of flawed guidelines that overlook cases like hers. If this were the case, the nurse would not be the one responsible. He has, after all, done nothing wrong. His role is to follow the rules and guidelines as they are. In fact, were the nurse to act contrary to the rules, he would be violating his professional duties. The problem is not with the nurse’s implementation of the guidelines; it is with the guidelines themselves. Therefore, the responsibility seems to fall on the person (or people) who created the guidelines. What this shows is that, so long as the agent making derivative decisions is correctly implementing the rules (and so long as they are acting in a legitimate role), responsibility traces back to the maker of the original decision.


I understand that responsibility is a contentious concept. So, it is important to clarify how I use it. I am adopting a Strawsonian view, whereby responsibility has to do with reactive attitudes. On this account, to blame someone is to have attitudes such as resentment towards them. And to praise someone is to have attitudes such as gratitude towards them.1 Reactive attitudes are only appropriate when aimed at certain kinds of agents. Resentment can be appropriate if a person punches me, but not if my cat bites me. Those who can be proper targets for reactive attitudes are called moral agents.


My concern is that computers are not moral agents. Therefore, if a computer were to make an original decision, there would be no one to hold responsible. Not only do I think that computers are not, in their current form, moral agents; I also think that they can never be moral agents. This is because moral responsibility (and moral agency more broadly) has an epistemic component—it requires a grasp of moral reasons. It is because of this requirement that animals, children, and (more controversially) psychopaths are not considered moral agents. While we might one day create a computer that ‘knows’ moral facts, such knowledge will only be superficial. The computer can never understand the full weight of moral reasons because it cannot comprehend key facts about human life. We can know that killing is wrong, in part, because we are conscious beings and we understand the gravity of having that consciousness taken away from us. We can know that torture is wrong because we have felt pain before and we know just how unbearable and all-consuming it can be. Unless we can build a computer that understands what it is to be human—a computer that is conscious, feels pleasure and pain, experiences emotions—computers will never be moral agents.


The problem is not that computers will make morally worse decisions. The problem is that decisions made by computers are of a fundamentally different nature. When a moral agent harms me, I can say that I have been wronged, or treated unjustly, or that my rights have been violated. But when a non-moral agent harms me, none of these apply. To accuse a computer of wronging me seems to involve a misunderstanding of what it means to be wronged. It is almost as absurd as accusing a heart attack of violating a person’s right to life. Responsibility, wronging, and injustice are all tied to moral agency. Therefore, the more original decisions we allow computers to make, the less applicable these concepts will become. And these concepts are an important part of human life. In cases of serious harm, it can be empowering for the victim to look the wrongdoer in the eyes and say ‘you have wronged me.’ The ability to express to others what they have done to us, and to hold attitudes like resentment towards them, is an important part of coming to terms with what has happened and taking a step towards closure. Similarly, showing that decisions led to wronging or injustice can be a crucial part of gaining public recognition for what has happened and of prompting efforts to rectify the situation.


I understand that there might be a temptation to use computers to unburden us from some of our decision-making. They can be quicker and more efficient than us. They can have access to more information and process it faster than we do. They do not get distracted or tired. They might even be less prone to making mistakes. While these may be good reasons to put computers in charge of derivative decisions, we should resist the temptation to put them in charge of original decisions. Responsibility is far too important a part of life to sacrifice for the sake of efficiency. Computers should only make decisions when those decisions can be traced back to a person who is willing to take on responsibility for the computer’s actions.

Footnotes:
1 Strawson, P. F. (1962) ‘Freedom and Resentment’, Proceedings of the British Academy 48.