Ethics in AI Lunchtime Research Seminar, Wednesday 6th March @ 12:30pm (GMT) with Dr Veronika Fikfak (Political Science, UCL) and Professor Laurence R. Helfer (Law, Duke University)
International human rights courts and treaty bodies are increasingly turning to automated decision-making (ADM) technologies to expedite and enhance their review of individual complaints. Algorithms, machine learning, and AI offer numerous potential benefits to achieve these goals, such as improving the processing and sorting of complaints, identifying patterns in case law, enhancing the consistency of decisions, and predicting outcomes. However, these courts and quasi-judicial bodies have yet to consider the many legal, normative, and practical issues raised by the use of different types of automation technologies for these purposes.
This article offers a comprehensive and balanced assessment of the benefits and challenges of introducing ADM into international human rights adjudication. We reject the use of fully automated decision-making tools on legal, normative, and practical grounds. In contrast, we conclude that semi-automated systems, in which ADM makes recommendations that judges, treaty body members, and secretariat or registry lawyers can accept, reject, or modify, are justified provided that judicial discretion is preserved and cognitive biases are minimised. (Cont'd below)
We will run each seminar in a hybrid format, allowing audiences to join in-person or online. Please register via the link below to reserve your space.