PUBLISHED: April 22, 2025

How AI could enable faster outcomes for international human rights cases

Professor Laurence Helfer, an expert in international adjudication, suggests reforms

Harry R. Chadwick, Sr. Distinguished Professor of Law Laurence R. Helfer

Cases that come before international human rights bodies often have an element of urgency. A family faces deportation to a country where they claim to face threats to their life or liberty. A political prisoner could be denied critical medical care or access to a lawyer while an international monitoring body decides whether she has been deprived of her right to liberty or due process. In cases where time is of the essence, justice delayed may mean justice denied.

Unfortunately, international human rights courts and treaty bodies, such as the Inter-American Court of Human Rights or the United Nations Human Rights Committee, are “drowning in cases,” said Duke Law professor Laurence R. Helfer. He said it often takes three to five years or more for complaints to be decided on the merits.

Helfer explained that these international monitoring bodies serve a threefold function. They create precedents about how different human rights should be interpreted. They hold governments to account for violations. Finally, they apply existing human rights law to new circumstances and challenges in an evolutionary way.

“These bodies are doing very important work, but they simply do not have the resources to achieve these different functions,” Helfer said. 

Enter algorithms

In a new article in the Michigan Journal of International Law, Automating International Human Rights Adjudication, Helfer, the Harry R. Chadwick, Sr. Distinguished Professor of Law, and his co-author, University College London’s Veronika Fikfak, examined how automated decision-making (ADM) could make the administration of international human rights law speedier and more effective.

Helfer, a current member of the U.N. Human Rights Committee, and Fikfak, an ad hoc judge on the European Court of Human Rights, first began thinking about using ADM in international human rights adjudication in September 2022, just a few months before OpenAI released the first version of ChatGPT. But even before today’s advanced AI systems, computer science scholars had spent the past decade publishing studies that attempt to use algorithms to predict judicial decisions.

“These predictions are useless for a judge, who is not going to plug in a bunch of variables into a program and then say, ‘Oh, now I know whether someone's rights were violated or not,’” Helfer said. “But that doesn't mean humans can't be aided by technology.” 

The two authors started by thinking broadly about the range of ADM tools available, and whether or how they might be used at each step of the adjudication process – a process they each know well as scholars and participants. Ultimately, they found several places where using ADM “could save time and money, thereby helping to do justice to individuals and their human rights claims,” Helfer said.

Use in lower-stakes cases  

Helfer said some of the low-stakes, high-reward areas where technology could assist legal practitioners include digitizing complaints and other records and making their content searchable. A decision-tree program could then evaluate these digitized records to answer simple questions: Was this petition submitted in time to make it admissible for review? What is the typical amount awarded for damages in a case like this?
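To make the idea concrete, here is a minimal sketch of the kind of decision-tree screening described above. Everything in it is a hypothetical placeholder: the record fields, the 180-day filing deadline, and the damages figures do not come from any real court or treaty body, where admissibility rules and award patterns vary.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from statistics import median

# Hypothetical fields for a digitized complaint; names are illustrative,
# not drawn from any real court's case-management system.
@dataclass
class Complaint:
    final_domestic_decision: date  # when domestic remedies were exhausted
    filed: date                    # when the petition reached the tribunal
    violation_type: str            # e.g. "right_to_housing"

def is_timely(c: Complaint, deadline_days: int = 180) -> bool:
    """First branch of the tree: was the petition filed within the
    limitation period? (180 days is a placeholder; real deadlines
    vary by treaty body.)"""
    return (c.filed - c.final_domestic_decision) <= timedelta(days=deadline_days)

def typical_damages(past_awards: list[tuple[str, float]], violation_type: str) -> float | None:
    """Second branch: report the median award in past cases of the same
    type, as a reference point for decision-makers, not a ruling."""
    amounts = [amt for vtype, amt in past_awards if vtype == violation_type]
    return median(amounts) if amounts else None

# Example triage run on a made-up petition.
petition = Complaint(date(2024, 1, 10), date(2024, 5, 1), "right_to_housing")
print(is_timely(petition))  # True: filed 112 days after the final decision
print(typical_damages([("right_to_housing", 5000.0), ("right_to_housing", 7000.0)],
                      "right_to_housing"))  # 6000.0
```

The point of such a tool is triage: it surfaces a recommendation that a registry lawyer can verify in seconds, not a binding determination.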

Helfer said a more sophisticated program could also recognize patterns in cases and group them together, so that courts and treaty bodies don’t get bogged down with repetitive cases caused by the same systemic problem. For example, a cluster of complaints accusing a country of violating the right to housing could be handled via an international class action process – labeled by the European Court of Human Rights as a “pilot judgment” procedure – in which the remedy in a leading case seeks to address the root causes of the violation and is then applied to other similar cases.
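One generic way to implement this kind of grouping is to measure textual similarity between complaints and cluster near-duplicates. The sketch below uses off-the-shelf scikit-learn tools and invented complaint summaries; the similarity threshold is an assumption that would need tuning on real filings, and nothing here reflects how any court actually manages its docket.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

# Invented complaint summaries; a real system would work from the
# digitized filings themselves.
complaints = [
    "Applicant evicted without notice; alleges violation of right to housing",
    "Family removed from social housing with no hearing; housing rights claim",
    "Journalist detained without charge; alleges arbitrary deprivation of liberty",
]

# Represent each complaint as a TF-IDF vector, then group near-duplicates
# by cosine distance.
vectors = TfidfVectorizer(stop_words="english").fit_transform(complaints).toarray()
labels = AgglomerativeClustering(
    n_clusters=None,
    distance_threshold=0.9,  # assumed cutoff; tuning requires real data
    metric="cosine",
    linkage="average",
).fit_predict(vectors)

# Complaints sharing a label are candidates for joint handling.
for label, text in zip(labels, complaints):
    print(label, text[:60])
```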

AI’s limitations

But the use of automated decision-making gets trickier the closer one gets to the process of ruling on a legal issue.

“The more discretionary a decision, the less the AI should be used to guide that decision,” Helfer said. “There's something inherent in a court, whether domestic or international, deciding on the rights of individuals, where a human needs to be involved.”

One reason is that ADM is structurally hamstrung by several well-known types of bias. For example, algorithms trained on preexisting case law are inherently backward-looking. Picture an algorithm trained on a dataset of U.S. Supreme Court abortion cases between 1973 and 2021; it would never have predicted the 2022 Dobbs decision that overturned the constitutional right to abortion. That’s a particular problem in the forward-looking realm of international human rights law, where courts and treaty bodies overrule precedents with relative frequency, although almost always to expand individual rights.
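The backward-looking problem can be reduced to a toy example with entirely synthetic labels: a model fit on decades of uniform outcomes can only reproduce them, which is the degenerate limit any classifier approaches when its training data contains no trace of a coming shift.

```python
from collections import Counter

# Stylized training set: every pre-2022 case follows the then-settled
# precedent, so the labels never vary. All data here is synthetic.
training_data = [("case_features", "precedent_upheld") for _ in range(50)]

# A majority-label predictor -- the degenerate limit any classifier
# approaches when fit on outcomes that never vary.
majority_label = Counter(label for _, label in training_data).most_common(1)[0][0]

print(majority_label)  # "precedent_upheld": the model cannot foresee a
                       # reversal, because no signal of it exists in the
                       # 1973-2021 record.
```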

Simply adding a human to the equation doesn’t necessarily solve the problem. Judges and lawyers working with algorithmically generated recommendations bring their own biases to the table. Sometimes they too readily accept an algorithm’s decision, while at other times, they discount the algorithm’s work and instead apply their own biased judgment.

A potential middle ground

One solution is to customize any automated programs used to facilitate adjudication. Helfer noted that the Constitutional Court of Colombia, a judicial body that has made extensive use of ADM, spent years in discussions with AI specialists and technologists to develop a program that is currently being used to address a crushing caseload.

Since principles in international human rights law evolve rapidly, a program should also be sensitive to changing legal and social developments. Helfer, who has worked extensively on international LGBT rights, pointed out that case law in that area has seen a major evolution over the past two or three decades, so an algorithm must be trained to apply the most recent reasoning and legal principles. Helfer and Fikfak also advised building accountability into ADM systems, such as by ensuring there is a public review process before adopting any automation tools and creating external oversight bodies that supervise the tools and their use.
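The recency point lends itself to a simple technique: weight training examples by how recently they were decided. Below is a minimal sketch under stated assumptions – the features, labels, and ten-year half-life are all synthetic placeholders – showing how older decisions can be exponentially down-weighted when fitting a model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-ins for case features, outcomes, and decision years;
# none of this reflects real human rights case data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))     # case features (placeholder)
y = rng.integers(0, 2, size=200)  # outcome labels (placeholder)
years = rng.integers(1995, 2025, size=200)

# Exponentially down-weight older decisions so the fitted model leans
# on the most recent reasoning; the ten-year half-life is a policy
# choice, not a given.
half_life = 10.0
weights = 0.5 ** ((years.max() - years) / half_life)

model = LogisticRegression().fit(X, y, sample_weight=weights)
```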

Human rights practitioners, Helfer added, should also be wary of relying too heavily on AI tools, given their strategy of using the law creatively to advance new rights.

“You have to have a certain amount of imagination to be able to construct a legal argument about something new that has never been seen before,” he explained. “At the moment, that kind of thinking, at least in my view, is still the province of the human brain.”