How to Protect Due Process Rights in the Age of AI
Duke Law’s Brandon Garrett says governments can help protect Americans’ due process rights by using interpretable, or “glass box,” AI systems
Distinguished Professor Brandon L. Garrett
Like companies and individuals, government agencies have adopted a wide range of AI tools. They can be used to identify a crime suspect or determine someone’s eligibility for health benefits. But when governments use AI models in ways that implicate our liberty or property, procedural due process rights are at risk, warns Duke Law professor Brandon L. Garrett.
“We’ve seen AI systems rolled out throughout the government and often without regulatory guardrails,” Garrett said. “I think people are rightly worried about the threats to their due process rights. And fortunately, the Due Process Clause does provide important constitutional guardrails.”
Especially concerning, he says, are “black box” AI models, which by design do not disclose the process through which they generate their output. In his recent paper Artificial Intelligence and Procedural Due Process, Garrett weaves together legal history, judicial decisions, and recent computer science research on interpretable AI to chart a path forward for how governments can take advantage of AI’s vast capabilities while still respecting due process.
“You cannot possibly protect someone’s due process rights if the government is relying on black box AI, because they do not disclose what factors the system is relying on, and certainly the person affected doesn’t know how it works,” said Garrett, the David W. Ichel Professor of Law and director of the Wilson Center for Science and Justice at Duke Law School.
The key term, defined in prior work with Duke computer science and engineering professor Cynthia Rudin, is “interpretable” AI: systems that disclose the factors and weights that actually produce their predictions. Such a model is designed to tell people how it actually makes its calculations. Garrett explains: “People are entitled to meaningful notice and an opportunity to be heard, if the government seeks to deprive them of life, liberty or property. A black box system offers no notice.”
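To make the distinction concrete, here is a purely illustrative sketch, not a system from Garrett and Rudin’s work, of what “disclosed factors and weights” means in practice. The factors, weights, and threshold below are invented for illustration; the point is that every term of the decision can be shown to the person it affects.

```python
# Illustrative "glass box" scoring model (hypothetical factors and weights).
# Because the factors and weights are explicit, the affected person can see
# exactly how each input contributed to the outcome -- the kind of notice a
# black box model cannot provide.

WEIGHTS = {
    "months_employed": 0.5,
    "dependents": 1.0,
    "prior_violations": -2.0,
}
THRESHOLD = 3.0  # hypothetical eligibility cutoff

def score(applicant: dict) -> float:
    """Sum of factor * weight; every term is inspectable."""
    return sum(WEIGHTS[f] * applicant.get(f, 0) for f in WEIGHTS)

def explain(applicant: dict) -> list[str]:
    """Per-factor contributions: the 'meaningful notice' part."""
    return [
        f"{f}: {applicant.get(f, 0)} x {w:+.1f} = {w * applicant.get(f, 0):+.1f}"
        for f, w in WEIGHTS.items()
    ]

applicant = {"months_employed": 6, "dependents": 2, "prior_violations": 1}
total = score(applicant)  # 0.5*6 + 1.0*2 + (-2.0)*1 = 3.0
decision = "eligible" if total >= THRESHOLD else "ineligible"
```

An applicant who disputes the outcome can contest a specific factor or weight, which is exactly the “opportunity to respond” that an undisclosed model forecloses.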
Trading black box for glass box
While courts are seeing a growing number of lawsuits over decisions made with the use of AI, Garrett points out that new technologies have long raised questions about due process. For example, early cases challenged computerized notice methods for “failure to provide meaningful explanations of government actions,” Garrett writes.
Today, governments are adopting AI and other algorithmic systems that can make a wide range of momentous decisions. Garrett describes a 2023 case in which Francisco Arteaga was apprehended as a robbery suspect after his image was matched to crime-scene surveillance footage by an AI-powered facial recognition system.
The trial court ordered prosecutors to provide Arteaga with information about the AI system — “the identity, design, specifications, and operation of the program or programs used for analysis, and the database or databases used for comparison” — so he could meaningfully respond. While he eventually won the appeal, Arteaga spent nearly four years in pre-trial detention fighting for information about the system that initially identified him.
“What the government does pre-trial during criminal investigations can still impact people’s rights in important ways, so it’s quite concerning if someone gets falsely arrested based on an AI hit, even if the government doesn’t try to introduce that AI hit in court,” Garrett said.
As more state actions that rely on AI are challenged in court, judges are turning to longstanding legal principles governing due process, which include the right to meaningful notice from the government and a meaningful opportunity to respond. Sometimes those disclosures have helped uncover not just a lack of notice but underlying flaws in the systems themselves. A separate concern is that AI systems are often never tested for reliability, the subject of a more recent article Garrett and Rudin are writing together. “A lot of pain and suffering and expense could have been avoided if the government actually tested these systems in advance,” Garrett noted.
Garrett is no Luddite when it comes to AI and the legal system. A mature body of computer science research shows that interpretable AI can perform just as well as black box alternatives. Working with Rudin and with Duke students, he has analyzed large datasets to identify patterns in the use of pretrial supervision and its outcomes in Durham, for example. Using interpretable AI, one can identify which factors actually provide useful information about outcomes. Garrett suggests governments can reap the benefits of technological innovation while respecting due process by using such “glass box” AI – transparent, interpretable, and testable AI models – rather than opaque black box models. And he hopes that due process considerations will one day be built into such technology as a design feature.
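As a rough illustration of “identifying which factors actually provide useful information about outcomes” (this is not the Duke team’s actual analysis, and the records below are invented), one simple, fully transparent approach is to rank candidate factors by their correlation with the outcome:

```python
# Illustrative factor screening with hypothetical data: rank candidate
# factors by the absolute correlation each has with a binary outcome.
# Factors with near-zero correlation carry little information and are
# candidates for removal from an interpretable model.
import math

def correlation(xs, ys):
    """Pearson correlation, computed with the stdlib only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

# Hypothetical records; outcome 1 = failure to appear.
records = [
    {"age": 22, "prior_missed": 3, "zip_digit": 5, "outcome": 1},
    {"age": 45, "prior_missed": 0, "zip_digit": 5, "outcome": 0},
    {"age": 30, "prior_missed": 2, "zip_digit": 2, "outcome": 1},
    {"age": 52, "prior_missed": 0, "zip_digit": 8, "outcome": 0},
    {"age": 28, "prior_missed": 1, "zip_digit": 8, "outcome": 1},
    {"age": 60, "prior_missed": 0, "zip_digit": 2, "outcome": 0},
]
outcomes = [r["outcome"] for r in records]
ranking = sorted(
    (abs(correlation([r[f] for r in records], outcomes)), f)
    for f in ("age", "prior_missed", "zip_digit")
)[::-1]  # strongest factor first
```

Unlike a black box, every step here can be disclosed, contested, and re-run by the affected party, which is the design property the article argues due process requires.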
“The choice for governments isn’t between an expensive, proprietary black box AI model or no technology at all,” Garrett explains. “The choice is between interpretable, well-tested AI that actually provides due process and black box AI, of uncertain reliability, that provides none.”