PUBLISHED: March 31, 2023

Focus on Scholarship: Professor Brandon Garrett on the use of artificial intelligence in criminal justice

In a forthcoming article, Duke Law Professor Brandon Garrett and Duke Computer Science Professor Cynthia Rudin argue that when a person's life or liberty, or public safety, is at stake, we can do better than "black box AI."

The fear that artificial intelligence could one day gain an alarming level of power over human life is increasingly prevalent as the technology develops. Recently, hundreds of scientists, businesspeople, and public figures signed an open letter warning that AI is advancing too quickly to ensure adequate guardrails are placed around its use.

But algorithmic decision-making models so complex and opaque as to be impenetrable to the public are already pervasive enough to have a name – “black box AI” – and they have been widely adopted by law enforcement and other government agencies. The name also applies to models that are shielded from public scrutiny to protect commercial interests.

In “The Right to a Glass Box: Rethinking the Use of Artificial Intelligence in Criminal Justice,” forthcoming in Cornell Law Review, Duke Law Professor Brandon Garrett and Duke Computer Science Professor Cynthia Rudin call for a different approach. They argue that when a person’s life or liberty is at stake – or the safety of the public – only systems that can be examined, interpreted, and validated should be employed. Not only is “glass box AI” required to uphold basic fairness, but by enabling lawyers and judges to challenge the technology’s inner workings, it is also more likely to be accurate, particularly in the criminal context, they write.

In this scholarship Q&A, Garrett, the L. Neil Williams, Jr. Professor of Law and director of the Wilson Center for Science and Justice, talks about the problems with black box AI in the criminal context and the implications for constitutional criminal procedure and legislative policy.

Legal scholars do not frequently co-author law review articles with computer scientists. How did this collaboration between two Duke colleagues come about?

It has been such a wonderful experience to collaborate with Cynthia Rudin. We first met when speaking to judges in the judicial master’s program at Duke Law. I had done work examining how judges use risk assessments in practice. Cynthia presented her groundbreaking work in computer science, demonstrating in a series of studies that artificial intelligence need not be a black box or unintelligible system to perform well. Cynthia has shown that simple and understandable AI can be just as accurate in a range of contexts. We shared an interest in making the case, across law, policy, and computer science, that it is crucial to get AI right in high-stakes settings like the criminal system. We wrote an op-ed together on risk assessment and a statement in response to a White House call for comments on a proposed AI Bill of Rights, and we expanded these arguments regarding AI research and criminal procedure rights in this article. Unfortunately, pressing issues concerning the deployment of AI in criminal cases are not being meaningfully addressed. Much work remains to be done.

The potential for error or bias in artificial intelligence models that we increasingly rely upon to help make decisions is well-known, but it’s particularly troubling in the criminal system, especially when the models lack transparency. What are some of the problematic applications of black box AI in the legal system?

As artificial intelligence has become an everyday presence in our lives, government agencies, including law enforcement, are increasingly deploying these new technologies rather than stepping in to safeguard our rights. Examples include the use of AI to analyze complex DNA mixtures, risk assessments used in pretrial decision-making and sentencing, and facial recognition systems used by law enforcement to identify suspects. If a forensic tool adds accuracy and value to a criminal investigation, then how it works should be disclosed. Instead, unregulated and undisclosed AI has been used in a wide range of settings. When defendants have challenged these applications, arguing that their rights are violated by the use of black box AI, judges’ rulings have so far been decidedly mixed. To be sure, law enforcement has sometimes kept black box AI out of court by using it to generate leads while not offering evidence from it at trial. Those types of subterfuges are also troubling.

We describe three types of problems with developing AI systems in general that pose particular challenges in criminal justice settings: data, validation, and interpretation. First, criminal justice data is often noisy, highly selected, incomplete, and full of errors. In a black box system, those errors cannot be detected or corrected. Second, one cannot easily validate a black box system and tell how accurate it is. Third, interpretability is particularly important in legal settings. If a system is a black box, police, lawyers, judges, and jurors cannot fairly and accurately use what they cannot understand.
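To make the contrast concrete, here is a minimal, purely hypothetical sketch in Python of what a "glass box" model can look like. The factors, weights, and caps below are invented for illustration; they are not drawn from the article or from any actual risk assessment instrument. The point is simply that every step of such a computation can be read, questioned, and validated by lawyers, judges, and jurors, unlike the internals of a black box system.

    # Purely illustrative: a hypothetical points-based "glass box" risk score.
    # Every factor, weight, and cap is visible and can be challenged in court.
    def pretrial_risk_score(prior_failures_to_appear: int,
                            prior_violent_convictions: int,
                            age_at_arrest: int) -> int:
        """Return a transparent point total; a higher total means higher assessed risk."""
        score = 0
        score += 2 * min(prior_failures_to_appear, 3)   # capped at 6 points
        score += 3 * min(prior_violent_convictions, 2)  # capped at 6 points
        if age_at_arrest < 23:                          # one point for youth
            score += 1
        return score

    # Anyone reviewing the case can trace exactly why a score was assigned:
    print(pretrial_risk_score(prior_failures_to_appear=1,
                              prior_violent_convictions=0,
                              age_at_arrest=30))  # prints 2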

You have written extensively about the problems with forensic science and the criminal system’s reliance on it, including in your most recent book, Autopsy of a Crime Lab. Are there parallels with how AI has begun to be used?

At trial, an expert witness may present AI evidence, but if the AI is a black box, the parties cannot readily vet the expert. There are strong reasons to fear that judges will not rigorously examine black box AI evidence or insist on a glass box. In the past, judges have deferentially reviewed the admissibility of expert evidence in criminal cases, even after the U.S. Supreme Court’s Daubert ruling and amendments to Federal Rule of Evidence 702 tightened the gatekeeping requirements for expert evidence. The National Academy of Sciences explained in a landmark 2009 report that, where judges have long failed to adequately scrutinize forensic evidence, scientific safeguards must be put in place by the government. There are good reasons to fear that the same issues may recur when black box AI is used in criminal cases. A criminal defendant, if indigent, may be denied funds to retain an expert to examine the methods or technology used by a prosecution expert. For black box AI, the barriers to practically challenging the evidence may be greater still, because the defendant may have no way to independently re-examine the prosecution’s use of AI.

As you point out, the use of black box AI in the courts has serious implications for the constitutional rights of criminal defendants, including under the 5th, 6th, and 14th Amendments. Why have judges been reluctant to allow it to be challenged?

One might think that, given the constitutional interests at stake, judges would require a substantial showing to justify nondisclosure of AI to the defense. Instead, while there are a few promising rulings, many judges have denied defense requests for access to information about the AI being used. For example, a Pennsylvania court rejected a defense challenge, denying the request for review by independent scientists of the underlying “proprietary” software. The court emphasized that “it would not be possible to market” the software “if it were available for free.” Developing a market for a product that serves the public interest could be a laudable goal. However, without careful validation and interpretability, one does not know whether the system is accurate in general or as used in a given case. Other courts, like the New York Court of Appeals, tolerate similar proprietary use of AI in criminal cases by concluding it is reliable, based on studies done by the corporate provider, and placing the burden on the defense to show a “particularized” need for access. Such rulings poorly apply constitutional rights and too readily assume that black box AI systems have been demonstrated to be accurate. In contrast, when judges have required the government to disclose information about AI, serious errors have come to light. This was the case with DNA software that had been used in New York City.

You and Professor Rudin propose moving to “glass box AI,” in which the predictive models underlying the system can be tested for accuracy and fairness without sacrificing performance. What would that require in the criminal context?

We write to counter the widely held myth that the use of black box AI systems is a necessary evil, despite the risk to constitutional rights, because they have a performance advantage over simpler or open systems. In a range of settings, simple glass box systems have been shown to be just as accurate. We argue that AI secrecy in the criminal system is an avoidable and poor policy choice. Instead, in the criminal system, both fairness and public safety benefit from glass box AI—and therefore, judges and lawmakers should firmly recognize a right to glass box AI in criminal cases. There should be a substantial burden on the government to justify any use of secret black box AI in settings like criminal cases.

What kind of legislative or regulatory changes are necessary to support a move to glass box AI in the criminal system, including in non-trial settings? What is the outlook for such an agenda being adopted?

A range of legal measures can ensure that black box AI is not used in the criminal system. As just discussed, far more can and should be done to apply and robustly protect the existing Bill of Rights in the U.S. Constitution, particularly when AI is used to provide evidence regarding criminal defendants. There is an unfortunate reality, however, that constitutional rights may not be enough to address these issues: they have been unevenly enforced in criminal cases, given the challenges that largely indigent defendants face in obtaining adequate discovery and the pressures to plead guilty and waive trial rights.

The legislative response to the use of black box AI in criminal cases has only just begun, and a main focus of the first wave of local and state legislation in the United States has been police use of facial recognition technology. In Europe, the AI Act, if enacted, would provide a model for regulating uses of AI in court and by law enforcement. We propose that, in the United States, legislation make glass box, or interpretable, AI mandatory, absent a compelling showing of necessity, for most uses by law enforcement agencies in criminal investigations. So long as the use of AI could result in the generation of evidence used to investigate and potentially convict a person, the system should be validated, based on adequate data, and it should be fully interpretable, so that in a criminal case, lawyers, judges, and jurors can understand how the system reached its conclusions.

 

Andrew Park is associate dean for communications, marketing, and events at Duke Law School. Reach him at andrew.park@law.duke.edu.