
AI and Moral Responsibility – AI Futures
Blog post


We are entering a new chapter in the digitalisation of society, advancing from machines as decision support, to machines as “decision makers” (in one sense of the word). This is made possible by the many breakthroughs in the development of artificial intelligence (AI) in the last decade. There is reason to expect that AI will be used increasingly not only for private decisions, but also for decisions made by public authorities and institutions. The implementation of AI in public decision making has raised concerns in many quarters and led to much discussion among philosophers, legal scholars, and social scientists. Who is responsible when a machine commits a serious error? According to some, the use of autonomous machines can create so-called “responsibility gaps” in the decision process.[1] A responsibility gap is when a person or an agent is responsible for an act, but cannot be held to account, resulting in a situation where no one can be held responsible. An example of such a situation would be an accident caused by a deceased person.

Legal scholars concerned about responsibility gaps argue that some machines are autonomous in the sense that they can initiate action, adapt their behaviour to new circumstances, and learn from experience. This, the argument goes, entails that such machines hold responsibility for the consequences of their actions and, consequently, that no one else is responsible for those actions. Holding a machine liable for poor decisions is of course not possible. This means that there will be cases where no one is answerable for mistakes made, thereby producing responsibility gaps.

“A responsibility gap is when a person or an agent is responsible for an act, but cannot be held to account, resulting in a situation where no one can be held responsible.”

A responsibility gap should not be confused with another, more common, situation: difficulties in establishing responsibility. If an innocent person is convicted of a crime, there are several actors that could be at fault: the police mishandling the investigation; the prosecutor intentionally presenting misleading evidence; or the judge being biased or not applying legal frameworks properly. Flawed decisions often result from mistakes by several decision makers at once, and in such cases, establishing where the responsibility lies can be difficult. A responsibility gap is another matter altogether. It refers to the absence of a subject that can be held to account.

According to those who claim that AI can create responsibility gaps, our current trajectory is bringing about a society where institutions, public and private, will produce thousands of AI-supported decisions on a daily basis. Some even claim that the many responsibility gaps generated by autonomous machines will lead to a legal paradigm shift. I would like to challenge this view by considering in more detail whether machines can be moral agents.

Moral vs. Autonomous Agents 

Some machines, for example robotic vacuum cleaners, are autonomous. My robot vacuum activates at certain times and cleans my flat of its own accord. It is equipped with sensors to map the different rooms, stores that information, and uses it when going about its task. In other words, a robot vacuum can make representations of the world, it can use its sensors to perceive the world, and it can, in some sense, learn from experience. It also has an ability, albeit limited, to solve problems by itself. A high threshold is a challenge for a low-slung machine, but it can figure out how to work around one by approaching the threshold at an angle. An agent, of course, has agency, the ability to act, and an autonomous agent is also capable of forming beliefs about the world and has at least one desire or instruction for action. My robot vacuum has something sufficiently similar to a belief about the world: it has a map of my flat. It also follows instructions that lead it to certain behaviours. In this sense, it is an autonomous agent. According to the proponents of the responsibility gap theory, this also indicates that my robot vacuum is responsible for its own actions.

However, it is important to differentiate an autonomous agent, such as my robot vacuum, from a moral agent.[2] To be accountable for its actions, an agent needs something more than merely the ability to act. In modern analytical philosophy, a moral agent is an agent that is susceptible to moral reasons, a moral reason being a proposition of the form “You should do X because of A and B.” To be susceptible, in this context, means to be able to reflect on moral reasons and, through this reflection, understand them. Let’s consider an example.

Alex says to Robin: “You should get vaccinated because it helps protect those who can’t get the vaccination.” Robin, being a moral agent, is able to reflect on this piece of advice, and understands that the purpose of the action “to get vaccinated” is to impede transmission of a pathogen in the population. Understanding the purpose, in turn, also gives an apprehension of when the advice to get vaccinated doesn’t apply. Since Robin is susceptible to moral reasoning, Alex doesn’t need to describe every possible situation in which the recommendation isn’t valid, or when it should be modified.


Chomska the rabbit. Not a moral agent.

Unlike Robin, my robot vacuum doesn’t understand anything. It performs its chores by following stepwise instructions, and its problem-solving skills are limited to working around physical obstacles. The same is true for more advanced agents, such as my rabbit Chomska. Compared to the robot vacuum, she possesses superintelligence, but Chomska is not a moral agent, because she doesn’t comprehend moral reasoning. That animals are not moral agents is taken for granted in most societies of the world. We don’t punish animals to bring them to account. An aggressive dog might be put down, but not as a punitive measure. Rather, it is done to make sure no one gets hurt, for the same reason that we would scrap a car that is no longer fit for traffic. There is, to my knowledge, no country that holds dogs legally accountable the way we do people.

The ability to discern moral reasons is not enough to make one a moral agent, according to a broad consensus within moral philosophy. An agent also needs the capacity to reflect on their own attitudes and behaviour, and, to some degree, the ability to change or endorse those attitudes and behaviours. In other words, the bar for being considered a moral agent, that is, a person who is morally accountable for their actions, is high. In conclusion, the machines of today have a long way to go before reaching anything like moral agency.

Shifting Accountability 

Why, then, is it important to establish that (so far) machines are not moral agents? Let’s look at a historical example of the introduction of autonomous machinery.

About a hundred years ago, elevators had human operators: people whose job it was to open and close the elevator doors and to operate the lever controlling the elevator. Following technical innovations, elevators became autonomous agents, operating by themselves according to certain instructions. How did this shift change our view of moral responsibility in elevator-related accidents? To my knowledge, no one cried “responsibility gap!” when automation made an entrance and elevators became autonomous agents. Instead, the accountability structure shifted. In cases where the operator traditionally would have been answerable, liability would now fall on the manufacturer (faulty design), the proprietor of the building (insufficient maintenance), or the passenger (erroneous usage). Elevators fulfil the requirements for being autonomous agents but lack the understanding needed to qualify as moral agents. This shows that a responsibility gap is not a necessary consequence of the use of autonomous machines. As long as the machine is not a moral agent, there is no responsibility gap.

“To my knowledge, no one cried ‘responsibility gap!’ when automation made an entrance and elevators became autonomous agents.”

If we regard the autonomous machines of our time as the automatic elevators of the last century, an analogous accountability shift should take place today. When decision makers in public institutions commit errors, there are already standards for establishing responsibility depending on where in the process the error occurred. Obviously, if an AI application produces an unsound decision, it makes no sense to hold the application accountable. But developers, technicians, and buyers are still potentially responsible, just as for any other product or service. If an error occurs because of mistakes on the part of the developers, they should be held accountable in the same way that car manufacturers are answerable for defects leading to traffic accidents. Decision makers in the public sphere are responsible for the machines and software used in their organisations, and for making sure that these do not undermine the capacity for making well-founded decisions. This responsibility also extends to analysing and assessing the quality of products acquired through public tenders.

In the current discourse, innovation and technological advances are often presented as driven solely by technical developments. History shows us, however, that political choices and priorities also play an important part. Politicians can refrain from acquiring certain technology, or they can direct how it is to be implemented. In Michigan, politicians chose to introduce an AI system (MiDAS) based on machine learning to identify fraudulent use of unemployment benefits. Up to 40,000 individuals were falsely accused of fraud, with hefty fines as a result. Moreover, the possibility of appeal was reduced, since many caseworkers had been laid off in the same process. The consequences for the wrongfully implicated were severe: many lost their homes, and some were even driven to suicide. This is a clear example of how politics and ideology play into the implementation of a new technology. The aim in this case was to “get tough on the cheaters,” even if this meant that some would be accused groundlessly. Sadly, politicians and public servants were not held accountable in this case. They hid behind the malfunctioning technology and dodged responsibility by making out the AI system to be the culprit.

Spelling out the difference between autonomous and moral agents gives us ammunition to counter such arguments and to make sure that machines, autonomous or not, are never used to evade responsibility.

[1] For example:

Matthias, Andreas. 2004. “The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata.” Ethics and Information Technology 6 (3): 175–83. 

Hevelke, Alexander, and Julian Nida-Rümelin. 2015. “Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis.” Science and Engineering Ethics 21 (3): 619–30.

[2] All moral agents are autonomous agents, but not all autonomous agents are moral agents.