by Yuval Shany
Digital rights v. human rights
Growing concerns about the need to protect the interests of online users and data subjects from questionable practices undertaken by powerful actors in the technology sector, including the extensive use of databases and algorithms by private and public actors, increasingly manifest themselves in the resort to the “language of rights” in technology regulation. Instruments such as the 2016 EU GDPR (and its predecessor – the 1995 Data Protection Directive), the 2022 EU DSA and the 2018 California Consumer Privacy Act use such language to protect “data subjects”, “consumers’ rights” and “rights of the recipients of the service”. A similar trend can be identified in instruments providing a regulatory framework for the development and use of AI, such as the 2021 draft EU AI Act, which deals extensively with the impacts of new AI technology on fundamental rights, the 2022 White House Blueprint for an AI Bill of Rights, which aims at identifying normative principles for protecting the rights of the American public in the AI era, and the 2023 Interim Measures for the Management of Generative Artificial Intelligence Services, published by the PRC Cyberspace Administration, which allude to the need to respect the “lawful rights and interests of others”.
The reference to the language of rights implies that the regulation of new technology, including AI, involves the creation of new legal rights and affects the enjoyment of existing legal rights. These affected legal rights appear to include fundamental rights protected in domestic constitutional law and in regional and global human rights instruments. Still, it is increasingly claimed that the aforementioned regulatory efforts produce new digital rights, which can also give rise to new human rights (the claim that digital rights are in the process of also becoming human rights is not free from controversy; see e.g., here). Such digital human rights may include, inter alia, a right to exercise a degree of control or self-determination over personal data (e.g., data rectification, erasure, portability), a right to access the Internet and its contents, a right to explanation and a right to a human decision (also referred to as a right not to be subject to automated decision-making). The latter right is of particular interest to those following the interplay between digital rights and human rights, since it appears harder to connect to any existing human right, such as the right to privacy, freedom of expression or due process. Embedding this right in human rights law would therefore entail the creation of a brand new human right.
From the perspective of international human rights law (IHRL) theory and practice, the difference between ‘regular’ digital rights and ‘digital human rights’ boils down to three specific features: universality, inalienability and state obligations. Rights recognized under IHRL attach to every human being; their availability is not contingent on status or conduct and is, as a result, irrevocable (still, in most cases, human rights are non-absolute in nature, and the rights of one person can be restricted in order to protect the rights of other persons or important public interests). Furthermore, under IHRL, human rights entail obligations for states exercising jurisdiction over the relevant individual right holders. In the context of digital human rights, this implies a duty to protect individuals from infringement of their rights by other individuals or companies, including technology companies.
In a recent draft SSRN publication, I have argued that the transformation from ‘regular’ rights to human rights tends to be based on a certain pattern of justifications, comprising: (i) moral claims and considerations supporting the universality of the right, its inalienability and the imposition of correlative duties; (ii) a problem analysis, involving a cost-benefit assessment supportive of the introduction of the right; and (iii) the identification of broad political support for recognizing the right in question as a new human right. A review of the development of the right to a human decision against this justificatory pattern suggests that this right is well on its way to emerging as a new digital human right.
The emerging right to a human decision
The right to a human decision can be found in domestic legislation, regional instruments and international treaties. It was first captured in the 1978 French Law on Data Processing, Data Files and Individual Liberties, which banned sole reliance on automated processing of data involving personal profiling in certain decisions about human conduct. A version of that provision was adopted in the aforementioned EU Data Protection Directive, which included an individual right to opt out of automated decision-making that produces legal effects or significantly affects the individual. The Directive served as the basis for implementing legislation in the different EU member states (see e.g., here at pp. 119-126). In 2016, the Directive was replaced by the GDPR, whose Article 22(1) provides as follows:
The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.
The other paragraphs of Article 22 provide for certain exceptions to the opt-out rule, based on contractual relations, national legislation and consent. They also require that, in some cases, an alternative right to obtain human intervention in order to contest the automated decision be offered, and they exclude certain categories of personal data from the scope of the exceptions.
Significantly, legal texts recognizing a right to a human decision are found not only in Europe. A number of consumer data protection laws recently adopted at the US state level contain such a right in the context of consumer law (see e.g., here and here). In the same vein, the 2021 PRC Personal Information Protection Law contains a right to demand an explanation of, and to reject, decisions made solely by automated decision-making that have a significant impact on individual rights and interests. Moreover, the right to a human decision has also been embraced by international treaties, signed and ratified by both European and non-European countries. The 2018 Council of Europe Convention 108+ for the Protection of Individuals with regard to the Processing of Personal Data contains a right for an individual “not to be subject to a decision significantly affecting him or her based solely on an automated processing of data without having his or her views taken into consideration”. The Convention has been ratified by 28 European states and 3 non-European states, and signed by 12 additional European states and 3 non-Council of Europe states (it will enter into force upon 38 ratifications). In June 2023, another international treaty providing for a right to a human decision entered into force – the 2014 African Union Convention on Cyber Security and Personal Data Protection, which has been ratified by 15 states (and signed by 12 others).
The upshot of this short survey is that a clear picture is emerging of growing international acceptance of the right to a human decision, acceptance which traverses different regions and regional systems. At the same time, it should be noted that the specific contents of the right are still in a state of flux, and variations can be found between different laws and treaty provisions, inter alia, with respect to the types of decisions covered by the right, the actual degree of required human involvement and the precise configuration of the right as a ban on certain decisions, an opt-out right from them or a right of appeal.
The transformation into human rights law
The growing recognition of a right to a human decision does not automatically imply that the right is also emerging as a human right. Such a transformation from a digital right to a human right assumes, as mentioned above, conditions of universality, inalienability and state obligations. It also tends to be supported by a certain pattern of justifications, involving moral claims and considerations, problem analysis and broad political support. A review of normative developments pertaining to the right to a human decision, and of the justifications appended to them in the respective travaux préparatoires, legal preambles and explanatory reports, suggests that the transformation from digital right to human right is already occurring.
One significant development in this regard is the permeation of the right to a human decision into documents which are framed as ‘digital bills of rights’ or ‘charters of digital rights’ – i.e., which use constitutional framings – or which constitute part of a corpus of regional or international human rights instruments. The US Blueprint for an AI Bill of Rights recognizes five principles that should guide the design, use and deployment of AI, including the possibility of opting out of automated decisions in favor of human alternatives. Furthermore, a number of digital charters of rights or declarations of Internet rights – non-binding legal instruments introduced in Portugal (2021), Spain (2021) and Italy (2015) in order to assist domestic law-appliers in adapting domestic constitutional rights to the challenges presented by the digital age – allude to the right to a human decision. In the same vein, the 2022 EU Declaration on Digital Rights and Principles for the Digital Decade, which is aimed at assisting in upholding EU fundamental rights (including the EU Charter of Fundamental Rights), contains a commitment to human supervision of algorithmic systems that affect people’s safety and fundamental rights. Finally, Convention 108+ is described in its preamble and Article 1 as an instrument designed to contribute to respect for human rights and fundamental freedoms.
The growing cross-over of the right to a human decision from digital right to human right is further accentuated by the use of typical justifications for new human rights by law-makers proposing the inclusion of the right in national, regional or international legislation. Among the moral claims and considerations raised in favor of a right to a human decision are concerns about arbitrariness and abuse of power in automated decisions, their lack of accountability and democratic legitimacy, the inability of data subjects to participate in decision-making, fears of algorithmic discrimination and unfairness, and the incompatibility with human dignity of algorithmic decision-making that lacks empathy and substitutes ‘data shadows’ for the human beings they represent. In addition, problem-analysis justifications point to the fast-moving nature of AI technology, with its expanding reach into different areas of life and its uncertain impacts. Some areas – such as the use of facial recognition technology in law enforcement and the use of predictive AI to assist judicial decision-making – generate particular apprehension.
Given the relationship between the justifications invoked for the right to a human decision and the justifications attached to other human rights protecting core human values such as due process, political participation, equality and dignity, it appears that a new right to a human decision could fit well within the corpus of universal and inalienable human rights. The serious concerns about the impact of AI technology can also justify the imposition of limits on its development and use, and of other regulatory obligations on states (and on other potential duty holders, such as IGOs and technology companies), in order to protect individual rights. Where the justifications provided so far deviate from traditional human rights justifications is with respect to broad political support across different nations, which tends not to be asserted. Given the fast spread of digital rights instruments that recognize the right to a human decision, and their increasing acceptance by countries from both the global north and south, this is, however, likely to change.
The bottom line of this analysis is that the right to a human decision maker appears to be well on its way to recognition as a universal human right under IHRL. Such a process could be further expedited and harmonized should international standard-setting bodies, like the UN Human Rights Council or the General Assembly, adopt resolutions formally acknowledging that such a new right exists and specifying its contours and precise contents.
Suggested citation: Y. Shany, ‘From digital rights to international human rights: The emerging right to a human decision maker’, AI Ethics at Oxford Blog (11th December 2023) (available at https://www.oxford-aiethics.ox.ac.uk/blog/digital-rights-international-human-rights-emerging-right-human-decision-maker)