The need for, and feasibility of, an International AI Bill of Human Rights

Group photo of delegates at Academic Consultation, Bonavero Institute
Photo Credit: Fisher Studios

Write-up by Konrad Ksiazek, a DPhil student based at Balliol College.

On Wednesday 5 March, we held the second event of our Accelerator Fellowship Programme, co-hosted with the Bonavero Institute of Human Rights and exploring the case for a new International AI Bill of Human Rights. The day consisted of a closed academic consultation led by Professor Yuval Shany and a public panel discussion on the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, chaired by Professor Başak Çalı. This consultation will be followed by three further consultations, to be held in Geneva, Harvard and Pretoria.

The consultation started with introductory remarks from Professor Yuval Shany, highlighting the need to discuss the value, feasibility and potential content of a possible International AI Bill of Human Rights. He set the foundations for our deliberations by surveying previous human rights-based efforts to regulate AI, including the 2022 White House Blueprint for an AI Bill of Rights and initiatives by the OECD, the ACHPR, the Council of Europe, the EU and the UN, which approached the issue of AI through the lens of International Human Rights Law. Professor Shany acknowledged that the need for an AI Bill of Rights is contested. On the one hand, the digital revolution has brought about a major change in the circumstances in which human rights treaties apply. As such, it may have exposed gaps and shortcomings in existing human rights instruments, and it naturally invites more precise guidance on how to apply them. He stressed the immense value, for victims of violations, of framing the challenges of the digital age as human rights issues. On the other hand, he acknowledged that the fast-changing nature of the AI landscape makes it hard to anticipate and specify in advance the protections it may call for. He also noted that regional differences in attitudes to AI may pose a challenge to concluding a global agreement in this domain, and acknowledged the relevance of worries about human rights inflation to our debate.

The remainder of the discussion proceeded under the Chatham House Rule. The first major theme of the day concerned the normative underpinnings of the potential instrument (which may take the form of a “soft law” document). Many participants stressed the importance of reflecting on the nature and purpose of human rights as moral entitlements, to ensure that the provisions of any proposed instrument cohere with the human rights enterprise as a whole. Relatedly, several participants warned against viewing human rights as the sole appropriate framework for addressing all of the major challenges facing humanity, noting that the important task of successfully regulating AI may often go beyond simply elucidating the content of our rights. Some participants also wondered whether any future AI Bill of Rights should aim to re-articulate and specify the rights covered under existing frameworks, or whether the exercise should be seen as breaking entirely new ground and focusing on new rights.

The second major theme of our discussions concerned the appropriate form of the potential bill. One pertinent issue was how specific or general an instrument of this kind should be. Participants observed that it is easier to build broad consensus around a general text, and that, given the rapid pace of developments in this area, such an approach may better stand the test of time while giving states appropriate room for manoeuvre to reflect their specific national contexts. Some presented the Council of Europe Framework Convention on AI and Human Rights as a good compromise between these concerns, retaining sufficient generality to encourage broad appeal while providing additional specification of the relevant standards in a non-binding explanatory report. Some participants also stressed the value of the framework convention model in helping to foster and shape further debate about the standards concerned. However, others stressed that unduly general treaties might be unhelpful. Participants also noted that there is ample precedent for successfully developing corrective and implementing human rights instruments focused on specific concerns, including CEDAW, the UNCRC and the UNCRPD.

Another central theme of the consultation was feasibility. While many participants saw enormous value in developing charters of digital rights, they perceived the present political climate as a major challenge. They noted that recent efforts to promote AI inclusion have been met with significant pushback from the new US administration. Some participants argued that such developments make the prospect of enacting an AI Bill of Rights unlikely, and observed that pursuing this aim in the current climate risks undermining efforts to regulate AI through existing instruments and institutions, because it may be taken to concede the existence of a gap in their scope. One participant argued that more could be achieved by appointing UN Special Rapporteurs dedicated to this issue. Participants also wondered whether technology regulations (such as the EU AI Act), soft law and voluntary standards address the challenges of digital human rights more effectively than traditional treaties, particularly given the limited influence that treaties can exert on international corporations.

An additional subject of discussion was which rights to include in a potential AI Bill of Human Rights. The potential provisions considered by the group – largely based on existing instruments in the field – included the right of access to AI, the right to control over data, the right to a human decision, the prohibition of certain forms of impersonation, manipulation and bias, the right to algorithmic transparency and explanation, and the right to algorithmic safety and reliability. Other potential rights suggested by participants included the right to public participation in decisions about AI, the right to redress for harms caused by AI, the prohibition of social scoring and certain forms of AI surveillance, the prohibition of the use of certain forms of lethal autonomous weapons systems, and the prohibition of AI killings more generally. Some participants noted that the appropriate content of the instrument depends on whether it is conceived as a soft law instrument or a binding convention, and on whether it is intended to have regional or global reach. Others observed that the unique challenge in identifying the appropriate contents of a document of this nature is that it tries to anticipate and pre-empt future developments, whereas most existing IHRL instruments emerged in response to pre-existing challenges. Another key question was whether the obligations contained in such a document should be directed towards the individuals and corporations responsible for developing AI products, or whether they should focus on imposing positive obligations on states.

The day concluded with a public-facing event concerning the Council of Europe Framework Convention on AI and Human Rights, chaired by Professor Başak Çalı. Our panellists included Professor Mario Hernandez Ramos (Complutense University of Madrid), Dr Angela Müller (AlgorithmWatch), Professor Thompson Chengeta (Liverpool John Moores University), and Semeli Hadjiloizou (Alan Turing Institute).

Professor Hernandez Ramos reflected on his experience as the Chair of the Committee on Artificial Intelligence at the Council of Europe, discussing the origins of the Framework Convention and the negotiation process. He highlighted the impressive speed at which the agreement was developed, and expressed optimism about the future of AI regulation.

Dr Müller reflected on the virtues and shortcomings of the Convention. She noted that the very fact of its enactment is itself a significant achievement, praising it for emphasising the centrality of human rights to AI regulation. Nonetheless, she argued that it is sometimes too general and deferential to states, and that it includes unduly permissive exceptions on national security grounds. 

Professor Chengeta also argued that the Convention is commendable in part because it may encourage other regional efforts placing human rights at the centre of AI regulation. However, he argued that it suffers from certain shortcomings, noting that its provisions on non-discrimination do not carefully specify the full range of grounds of discrimination that the Convention ought to guard against, and that it does not address the issue of the use of AI for military purposes.

Finally, Semeli Hadjiloizou discussed the HUDEIRA project and its significance for the implementation of the Framework Convention.

Collage of photos with group shot of delegates and the panel