By Professor Yuval Shany
Traditional and new AI human rights
The discourse surrounding an AI bill of rights often revolves around a core list of human rights and principles, which have already been recognized in legal and policy instruments, seeking to regulate AI systems in a manner that aligns their operation with existing human rights standards. For example, the new Council of Europe (COE) AI Framework Convention, concluded in May 2024, contains language alluding to the need to respect, when resorting to AI systems, notions such as human dignity and individual autonomy, transparency and oversight, accountability and responsibility, non-discrimination, data privacy, procedural safeguards and remedies. Article 4 of the Convention explicitly provides that: “Each Party shall adopt or maintain measures to ensure that the activities within the lifecycle of artificial intelligence systems are consistent with obligations to protect human rights, as enshrined in applicable international law and in its domestic law”.
In the same vein, the US White House 2022 Blueprint for an AI Bill of Rights calls for measures to ensure safe and effective AI systems, protection from algorithmic discrimination, data privacy rights, notice and explanation, and human alternatives, consideration and fallback, in order to ensure respect for existing civil rights and democratic values. Indeed, the first three policy directives in the Blueprint appear to overlap with existing human rights to personal security (as well as with the rights to life, health and property, and other rights whose enjoyment might be compromised by unsafe or ineffective AI systems), equality and privacy. The fourth directive appears to be rooted in general concerns about abuse of power, arbitrariness and accountability that traditionally underscore a number of substantive and procedural human rights, such as the right to seek information, the right to due process and the right to an effective remedy.
The reference in the White House Blueprint to “human alternatives, consideration and fallback” – the fifth directive – appears, however, to go beyond the existing human rights catalogue. Rather, it introduces into the Blueprint a new or emerging human right in the field of AI – the right not to be subject to an automated decision entailing important consequences, which finds expression in some regional instruments, including article 22 of the EU General Data Protection Regulation (GDPR), article 9(1)(a) of the COE Modernised Convention for the Protection of Individuals with Regard to the Processing of Personal Data (Convention 108+) and article 14(5) of the African Union Convention on Cyber Security and Personal Data Protection. Such a new or emerging human right not only protects individuals from algorithmic bias, arbitrariness and lack of accountability; it also reinforces a possible right to explanation and expands the scope of the right to democratic participation. Furthermore, it reflects concerns about the dehumanization and harm to dignity implied in having machines lacking in human feelings, such as empathy and solidarity, take important decisions relating to human wellbeing, and about the treatment of human beings by computer algorithms as “data shadows” as opposed to whole, real-life persons. (For more discussion of these concerns, see here).
Arguably, the ban on certain automated decisions taps into a strong preference held by some human beings that other human beings, and not algorithms, adopt, or at least review, important decisions concerning them. This preference also appears to be strongly connected to human dignity considerations: having the power to decide the fate of others implies a hierarchical power relationship between the decision-maker and the person impacted by the decision. (For a discussion of decision-making power and hierarchy, see here). Subjecting human beings to automated decision-making in important contexts might therefore suggest, at least in the eyes of some, that machines are considered, at some level, to be hierarchically superior to human beings. The right to opt out of automated decision-making, or to challenge it before another human being, not only offers a fairer, more just and legitimate decision-making process in the eyes of persons affected by it; it also offers a process that could vindicate the humanity of those affected by the decision and reaffirm their social status as human beings in an increasingly digital environment.
Non-decisional interactions
To be sure, significant interactions with algorithms take place not only in the context of decision-making but in other contexts as well: AI systems are increasingly used for medical diagnosis and care-giving, teaching and training, online customer service, organizational planning, and content recommendation in professional and social settings. Arguably, in some of these contexts, certain individuals may have a legitimate expectation or strong preference to interact with other humans – e.g., to be diagnosed by a human doctor and to be treated by a human nurse, to be instructed by a human tutor, to speak to a real person in a call center, to discuss their program of work with other humans and to obtain recommendations regarding professional literature or leisure activity from individuals, not algorithms. Of course, not every personal preference deserves to be recognized as a right, and, even more so, as a human right. Still, it is interesting to note that some of the moral and policy underpinnings of the aforementioned new or emerging right not to be subject to automated decisions could also apply to a new right to human-to-human interaction, should such a right be recognized in the future. The willingness of some international actors to recognize specific rights which may comprise parts of the right to human-to-human interaction (the right not to be subject to automated decisions and the right to be notified about interaction with an AI system, discussed below) suggests that an overarching right to human-to-human interaction might, in the future, also be accepted as a new digital human right.
Like the claim for a right to a human decision, the claim for a right to human-to-human interaction, including in non-decisional contexts, could be informed by concerns about bias in shaping the contours of interaction with an algorithm and its specific modalities (e.g., bias in ‘internal decisions’ adopted by the AI system in relation to its own operations, such as what level of care or support to offer different patients or pupils). It could also involve demands for transparency in reason-giving concerning specific aspects of the interaction (e.g., why certain drugs were not prescribed, why certain service options were not offered by the service-providing chatbot, etc.). There are also concerns about accountability for harm resulting from the replacement of interactions with humans by interactions with machines.
However, the key concern underlying a putative human right to human-to-human interaction appears to touch – as with automated decisions – on the deeper significance of the loss of human virtues attendant to human-to-human interaction, which features, at its highest point, feelings such as empathy, solidarity, compassion, humor and friendship. Such virtues correspond to important dimensions of the notion of human dignity, which requires that persons be treated in a manner compatible with their intrinsic worth as human beings, including as individuals whose well-being and flourishing are valuable to society. Here too, certain people may experience interaction with a computerized system as dehumanizing and disempowering, and even as a form of social deprivation (for a discussion of the possibility of a right to friendship as a derivative of the interest in not being subject to social deprivation, see here).
Of course, many persons might actually prefer to interact with an AI system (in the same way in which some persons might prefer that AI systems, and not human beings, take decisions which impact their lives). Furthermore, certain interactions with algorithmic systems can be described as trivial in nature, and, therefore, not of the kind that should fall within the scope of a right to human interaction as part of human rights law (e.g., film or shopping recommendation algorithms). A key normative question likely to arise in this regard in the future is the scope of choice that should be afforded to individuals (and societies) to control the scope and nature of their interactions with AI systems, and to designate certain interactions as sufficiently important to be covered by human rights law.
The link between AI interaction and notification rights
A first step towards exercising choice about the modalities of interaction with an AI system is knowledge that an AI system stands on the other side of the interaction. A right to know about interaction with an algorithm has already been acknowledged by the White House Blueprint – “You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you” – and by the EU Declaration, which protects “informed choices” pertaining to benefitting from AI systems, and commits to ensuring “an adequate level of transparency about the use of algorithms and artificial intelligence, and that people are empowered to use them and are informed when interacting with them”. Furthermore, recital 132 of the AI Act contains specific language supporting a duty of notification in order to address the risk of AI systems impersonating human beings:
Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems and subject to targeted exceptions to take into account the special need of law enforcement. In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect taking into account the circumstances and the context of use. When implementing that obligation, the characteristics of natural persons belonging to vulnerable groups due to their age or disability should be taken into account to the extent the AI system is intended to interact with those groups as well... (emphasis added)
Arguably, growing recognition of the right to be notified about interaction with AI systems, with a view to prohibiting impersonation and preventing deception, also lays the groundwork for a right to refuse to interact with AI systems in certain circumstances. Knowing about an interaction with an AI system without having the ability to do anything about it, such as the ability to request a switch to human-to-human interaction in appropriate cases, might be regarded as meaningless; a complementary right to opt out of the interaction is arguably needed to give the notification right effect.
A new right to human-to-human interaction – were it to eventually emerge – also sits well with the increasing tendency, in some of the instruments purporting to regulate AI, to refer to “human-centric” AI systems (see e.g., the AI Act, article 1; the EU Declaration, article 9(a); the AI Safety Summit Declaration). Placing human beings at the center of the choice whether or not to engage in interaction with AI systems is arguably a significant component of any human-centric approach to uses of AI.
Concluding remarks
To summarise, the emerging rights not to be subject to automated decisions and to be notified when interacting with an AI system, which already appear in certain AI regulatory frameworks, may be stepping stones on a future path towards the development of a new and broader right to human-to-human interaction. In fact, as the recent Schufa case before the Court of Justice of the EU shows, it may at times be difficult to draw a clear line between an algorithmic decision and other forms of interaction with an algorithm. Such a right – if it were to develop – would have to identify the types of interaction important enough to fall within its scope of application.
As with other new human rights, a moral case would need to be made in favor of such a right, coinciding with the identification of a practical policy problem that needs fixing and with broad political support for such a right. Whereas the latter has not yet materialized, it may occur as a natural extension of the aforementioned emerging rights. Moreover, the greater the extent to which human beings are replaced by AI systems in different modalities of social interaction, the greater the demand may be for having a choice – whether for reasons stemming from uneasiness with technology (similar considerations underlie the offline service options available under certain regulatory schemes) or due to a subjective belief in the superior nature of human-to-human interactions when compared to human-machine interactions. Ultimately, one can regard a right to human-to-human interaction as a norm closely related to the notion of human dignity. Being treated by a human in consequential contexts may be close to the core of what it means to be part of a human society, at least at the present transitional phase, in which human-to-human interactions are still perceived as part of the normal human condition and interactions with AI are perceived as exceptional in nature.