Ethics in AI Lunchtime Seminars - Automating International Human Rights Adjudication

Speaker: Dr Veronika Fikfak (UCL) and Professor Laurence R. Helfer (Duke University)
Event date:
Event time: 12:30
Venue: Institute for Ethics in AI (please register to receive details)
Event type: Lectures and seminars
Event cost: Free
Disabled access: Yes
Booking: Required

Abstract: International human rights courts and treaty bodies are increasingly turning to automated decision-making (ADM) technologies to expedite and enhance their review of individual complaints. Algorithms, machine learning, and AI offer numerous potential benefits in pursuit of these goals, such as improving the processing and sorting of complaints, identifying patterns in case law, enhancing the consistency of decisions, and predicting outcomes. However, these courts and quasi-judicial bodies have yet to consider the many legal, normative, and practical issues raised by the use of different types of automation technologies for these purposes.

This article offers a comprehensive and balanced assessment of the benefits and challenges of introducing ADM into international human rights adjudication. We reject the use of fully automated decision-making tools on legal, normative, and practical grounds. In contrast, we conclude that semi-automated systems, in which ADM makes recommendations that judges, treaty body members, and secretariat or registry lawyers can accept, reject, or modify, are justified provided that judicial discretion is preserved and cognitive biases are minimised. Applying this framework, we find a strong case for using ADM to digitise documents and for internal case management purposes, such as assigning complaints according to expertise. We also endorse the use of facilitated ADM to make straightforward recommendations regarding registration, inadmissibility, and the calculation of damages. Conversely, we reject the use of algorithms or AI to predict whether a state has violated a human rights treaty. Between these poles, we discuss semi-automated programs that cluster similar cases together, summarise and translate key texts, and recommend relevant precedents; the accept/reject/modify workflow is sketched below. The benefits of introducing these tools to improve international human rights adjudication must be weighed against the challenges posed by two types of cognitive bias: biases inherent in the datasets on which ADM is trained, and biases arising from interactions between humans and machines.
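To make the accept/reject/modify workflow concrete, here is a minimal Python sketch of the semi-automated pattern the abstract describes: an ADM component ranks candidate precedents and proposes a recommendation, and a human reviewer retains the final word. Everything in it (the Complaint and Recommendation types, the Jaccard stand-in for a similarity model, the placeholder "register" suggestion) is a hypothetical illustration under stated assumptions, not the authors' system.

```python
# A minimal sketch (not the authors' system) of the semi-automated pattern
# described above: an ADM component proposes, a human decision-maker disposes.
# The Complaint/Recommendation types, the Jaccard stand-in for a similarity
# model, and the fixed "register" suggestion are all hypothetical.
from dataclasses import dataclass


@dataclass
class Complaint:
    case_id: str
    text: str


@dataclass
class Recommendation:
    case_id: str
    suggestion: str              # e.g. "register" or "inadmissible"
    similar_precedents: list[str]
    rationale: str               # case-specific explanation for the reviewer


def jaccard(a: str, b: str) -> float:
    """Crude lexical overlap, standing in for a real similarity model."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0


def adm_recommend(c: Complaint, precedents: dict[str, str]) -> Recommendation:
    # Rank known precedents by textual similarity to the new complaint.
    ranked = sorted(precedents, reverse=True,
                    key=lambda pid: jaccard(c.text, precedents[pid]))
    top = ranked[:3]
    return Recommendation(
        case_id=c.case_id,
        suggestion="register",   # placeholder: a real system would classify
        similar_precedents=top,
        rationale="Lexically closest precedents: " + ", ".join(top),
    )


def human_review(rec: Recommendation, decision: str,
                 final: str | None = None) -> dict:
    """The reviewer must accept, reject, or modify; discretion stays human."""
    assert decision in {"accept", "reject", "modify"}
    outcome = rec.suggestion if decision == "accept" else final
    # Record both the machine's proposal and the human's disposition so the
    # decision trail can support case-specific explanations later on.
    return {"case": rec.case_id, "adm_suggestion": rec.suggestion,
            "reviewer_decision": decision, "final_outcome": outcome,
            "rationale": rec.rationale}


if __name__ == "__main__":
    precedents = {"A v. State": "remand detention conditions article 3",
                  "B v. State": "length of civil proceedings article 6",
                  "C v. State": "lawfulness of remand detention article 5"}
    complaint = Complaint("app-0001", "complaint about remand detention conditions")
    rec = adm_recommend(complaint, precedents)
    print(human_review(rec, "accept"))
```

Keeping the machine's proposal and the human's disposition in a single record is deliberate in this sketch: it preserves judicial discretion while leaving an audit trail, which connects to the accountability framework discussed next.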

We also introduce a framework for enhancing the accountability of ADM in international human rights adjudication. This includes public review, consultations, and external oversight before automation tools are adopted, as well as systemic explanations of how the tools operate and case-specific explanations of how they have been deployed in individual cases. Concerns about whether humans can meaningfully supervise machine learning and AI programs also raise the question of revisiting the finality of international decisions made with the assistance of ADM. A sketch of these two explanation tiers follows.
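To illustrate, here is a hypothetical Python sketch of the two record types such a framework might publish: a systemic record describing an adopted tool, and a case-specific record of its use in one complaint. The schema, field names, and values are assumptions made for illustration; the article does not prescribe one.

```python
# A hypothetical sketch of the two explanation tiers: a systemic record about
# the tool (published once, after public review and external oversight) and a
# case-specific record of how the tool was used in an individual complaint.
# The schema and all values are illustrative assumptions.
from dataclasses import dataclass, asdict
import json


@dataclass
class SystemicRecord:            # one per adopted tool version
    tool: str
    version: str
    training_data: str           # dataset provenance, relevant to dataset bias
    oversight_body: str          # who reviewed the tool before adoption


@dataclass
class CaseRecord:                # one per individual complaint
    case_id: str
    tool: str
    version: str
    adm_output: str
    human_decision: str          # accept / reject / modify
    final_outcome: str


if __name__ == "__main__":
    systemic = SystemicRecord("precedent-recommender", "1.2",
                              "court's published case law, 1990-2023",
                              "independent technical audit panel")
    case = CaseRecord("app-0001", systemic.tool, systemic.version,
                      "register", "modify", "register (expedited)")
    # Publishing both tiers lets parties contest an ADM-assisted decision,
    # which bears on the finality question the abstract raises.
    print(json.dumps({"systemic": asdict(systemic), "case": asdict(case)},
                     indent=2))
```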