How to Hold Mixed Human-AI Groups Responsible

Image by metamorworks from Adobe Stock

It might soon be the norm for human and AI agents to work together in groups. But who should be held responsible when such groups act badly?

By Dr David Storrs-Fox

As many readers will know, the final report into the National Health Service’s (NHS) infected blood scandal was published in May. The report describes how the NHS administered contaminated blood products to thousands of patients, despite the significant, and known, risk of infection from these products. Moreover (as the Financial Times put it), the report finds that “the British state was guilty of a ‘chilling’ and ‘pervasive’ cover-up” of the scandal. The Prime Minister at the time and the chief executive of NHS England immediately apologised on behalf of the state and the NHS. The Leader of the Opposition (now Prime Minister) commented: “Collectively, we failed to protect some of the most vulnerable in our society. Politics itself failed [the victims].”[1]

The infected blood scandal occurred through the action and inaction of many people: medical professionals, managers, politicians, and others. Their mistakes include not only the administration of contaminated products, but poor decisions concerning licensing and procurement, failures in risk assessment, deceptive and opaque communication, and failures to investigate. It is appropriate to hold many of these people responsible for their individual roles in the scandal.

However, simply assigning responsibility to individuals risks missing what the report describes as the “systemic” and “collective” nature of the wrongdoing. Moreover, the responsibility of many individuals involved might be somewhat small, given the limited size of their individual contributions, the pressures of the system they operated in, and their incomplete understanding of the situation. Still, as the report says, this point about individual responsibility “does not diminish the appalling nature of what occurred.”[2]

How should we understand the “collective” wrongdoing in such cases? And if the responsibility assignable to individuals falls short of “the appalling nature of what occurred,” is there any more responsibility to assign? Many philosophers argue that our social world is populated not only by individual human beings, but also by “group agents” composed of individual humans.[3] Like an individual human being, a group agent can (according to this widespread view) act on its own goals, and sometimes can fairly be blamed for what it does. The British state and the NHS are group agents, and are (on this view) culpable for knowingly administering the contaminated products, and for covering up their actions. Corporations provide further examples of group agency. For instance, Boeing is plausibly culpable for deceiving the US Federal Aviation Administration about safety-relevant features of its 737 MAX 8 planes, two of which were involved in devastating crashes in 2018 and 2019.[4]

How might a group agent’s culpability be acknowledged? One common way is for a representative of the group to apologise on behalf of the group, as the Prime Minister and NHS chief executive did.[5] Apologies might not be enough, though. We might demand that the group pay compensation, fines or reparations for its wrongdoing. Punishments aside, we might also simply resent, denounce and be angry at a corporation or a state for its wrongdoing.[6]

AI and mixed group wrongdoing

Group agents will, no doubt, keep behaving badly in the coming years. However, the future of group wrongdoing looks importantly different from its past. It is now a familiar thought that developments in AI will give rise to (and, to some extent, have given rise to) artificial agents, which have goals and are able to act on those goals without continuous human intervention. 

AI agency has attracted considerable attention, but it brings with it an additional kind of agency: that of mixed groups, which have as members both human and AI agents. The Boeing of the future will plausibly involve many artificial agents in roles that humans previously filled: AI engineers, mechanics, managers, logisticians, quality inspectors, and so on. Similar comments apply to the British state, the NHS, and to many other group agents. And once mixed human-AI groups begin to act in our world, some of them will act badly.

So what? Can’t we simply hold mixed groups responsible in the same way as human-only groups? I don’t think we can take that for granted. 

Here is the issue: We generally think agents are culpable only if they have some sensitivity to, or grasp of, moral matters. You might be annoyed that your dog dirtied your carpet, but most of us wouldn’t think the dog can be morally culpable or guilty in the way humans can. Why not? Dogs can’t grasp moral matters. Many recent authors have argued that AI systems – at least, in their current and near-future incarnations – also cannot grasp moral matters, and so are not culpable for their conduct.

In this respect, human-only group agents are plausibly more like humans than like dogs. They can grasp moral matters, because they are made up of agents who can. Just as Boeing builds planes through its individual members, so it can plausibly grasp moral considerations through its individual members. We might imagine those members expressing that grasp by saying, “we have acted wrongly by cutting so many corners, and by pursuing profit above all else.”[7] And Boeing could act on that grasp by (for example) hiring more experienced safety inspectors. 

But what if the Boeing of the future is composed not only of human agents, but also of AI agents: that is, if Boeing becomes a mixed group agent? In my view, that would not automatically remove Boeing’s grasp of moral matters. The remaining human members could still provide that grasp. However, I argue that an agent is culpable for a specific action only if it had the capacity to bring its moral grasp to bear on that action. A mixed group will be able to do that (I argue) only if humans play an appropriate role in bringing about the group’s action. There is, therefore, a real threat that mixed group agents will often not be culpable for their actions, however bad those actions might be. In this way, they might be like the AI agents that partially compose them.

How can we hold mixed group agents responsible? 

In my view, it is crucial that mixed groups be structured appropriately, with humans in the right places, if they are to be held responsible for their conduct. Regulators and corporate leaders should take note. Someone might wonder, though, whether it really matters if the mixed group is held responsible for its conduct. Even if such a group is, in principle, the right kind of thing to be held responsible, is it not enough simply to hold individual human beings responsible for their roles? In response, consider again the infected blood scandal. The degree of each individual’s culpability might be very small, and only in the group (the British state) do we find an agent fully culpable for the catastrophic harm caused. I think the same applies to mixed groups, too.

Alternatively, someone might suggest that whenever we have a mixed group that does something bad, we can simply say that the human members of that group are a group agent. We can then simply hold that human-only group agent responsible. Why extend responsibility to the mixed group as a whole? Here is a quick version of my answer: I deny that there will even be such a human-only group agent, for many possible mixed groups. Consider: Do all the left-handed members of your university or company compose a group agent, which does not include any of the right-handed members? Probably not. The issue is that these people are not coordinated apart from the context of the larger group. The collection of left-handed members does not have its own goals, or perform its own actions. If all the right-handed member roles were filled by AI agents, it would be hard to see how that would make the left-handed members into group agents. The mixed group might well be the only group agent present and, as such, the only group agent that can be held responsible.

Of course, I haven’t yet told you exactly which roles humans need to fill in a mixed group if the whole group is to be culpable. I will provide more details about that in a future post. I’ll also argue that mixed groups can help to solve the problem of AI “responsibility gaps,” in which AI agents do things for which (it seems) nobody can reasonably be held responsible. Where the AI agent is acting as a member of a mixed group agent, I believe the group as a whole can be held responsible for its action (which is the group’s action). In my view, this will often give us reason to require (as far as possible) that AI agents do not act except as members of mixed groups.

For now, let me close with this. It is not only because of questions about culpability or moral responsibility that we should be interested in mixed groups. The general point is that such groups contain agents (human and AI) that differ from one another in important ways. I have focused here on the thought that AI agents lack the moral grasp that humans have. However, there are plenty of other differences, the details of which will depend on the AI systems under consideration. AI agents might greatly surpass human agents in some areas and fall far short of them in others. AI agents might reason (or fail to reason) in ways that humans do not. AI agents within a group might be more easily replicable or more easily transferred to different roles. And they may be worse (or better!) than humans at winning the trust of (other) humans or communicating with them. 

Any of these differences could make mixed group agents different from human-only group agents in ways that philosophers of action should attend to. Still, mixed groups are not simply an object of theoretical interest. They are coming, so AI ethicists must seek to understand them better.

The author would like to thank Caroline Emmer De Albuquerque Green, Linda Eggert and John Tasioulas.

[1] Gross, A., et al. (2024) Rishi Sunak apologises for ‘calamity’ of infected blood scandal. Financial Times. https://on.ft.com/3wFSjbN. Accessed 31 May 2024.                           

[2] P. 6 of Langstaff, S. B. (2024) Infected Blood Inquiry: the Report (HC 569-I). https://www.infectedbloodinquiry.org.uk/sites/default/files/Volume_1.pdf. Accessed 31 May 2024.

[3] List, C. and P. Pettit (2011). Group Agency. Oxford, Oxford University Press.                              

[4] US Department of Justice (2021) Boeing Charged with 737 Max Fraud Conspiracy and Agrees to Pay over $2.5 Billion. https://www.justice.gov/opa/pr/boeing-charged-737-max-fraud-conspiracy-and-agrees-pay-over-25-billion

[5] For helpful discussion of such apologies, see Collins, S. (2022). "I, Volkswagen." The Philosophical Quarterly 72(2): 283-304. The article’s title alludes to the example of the apology Volkswagen’s CEO made for his company’s emissions scandal (https://www.youtube.com/watch?v=t7ne3cwqI4A).

[6] Tollefsen, D. (2006). "The Rationality of Collective Guilt." Midwest Studies in Philosophy 30.            

[7] Cf. Collins, S. (2023). Organizations as Wrongdoers. Oxford, Oxford University Press; Chokshi, N., et al. (2024) ‘Shortcuts Everywhere’: How Boeing Favored Speed Over Quality. The New York Times. https://www.nytimes.com/2024/03/28/business/boeing-quality-problems-speed.html. Accessed 31 May 2024.