PUBLISHED: January 12, 2026

How to keep deepfakes out of court


Paul Grimm proposes new rules to reduce the risk of AI-generated fake content being presented to juries as real evidence

Judge Paul W. Grimm

Generative AI allows anyone with a smartphone to create voices and images that can be surprisingly realistic and startlingly convincing: fraudulent impersonators are already using deepfakes to scam Americans out of more than $16 billion a year.

Now, deepfakes could disrupt the nation’s judicial system as more AI-generated audio, video, and text is presented as potential evidence in legal proceedings, says Judge Paul W. Grimm MJS ‘16, an expert on AI and the law. He warns that courts must act as a bulwark against fake content entering the courtroom as real evidence. 

“Evidentiary standards have not been well developed for this particular use, because rules of evidence are usually technology-agnostic and change slowly,” said Grimm, who served for 25 years as a federal judge and most recently led Duke Law’s Bolch Judicial Institute. “But more people are talking about whether the existing assumptions and the rules of evidence can continue to be sufficient to protect the integrity of the fact-finding process in trials involving highly technical evidence.”

As AI platforms churn out content that even experts find virtually impossible to distinguish from the real thing, bogus evidence in the courtroom could influence juries and impact the outcome of cases, says Grimm. In a recent paper, he and co-author Maura Grossman argue for new rules governing how judges should treat AI-generated material in deciding whether it meets the threshold to be admitted as evidence at trial.

“It is no exaggeration to say that GenAI has democratized fraud,” they write. “Deepfakes … will most certainly find their way into the resolution of court cases where judges and juries will face real challenges understanding the operations and output of complex AI systems and distinguishing between what is real and what is not.” 

New rules for evolving technology 

AI content may be presented to judges, the gatekeepers of evidence, as “acknowledged” AI-generated evidence that is known and accepted by both parties to have been created or modified by AI, such as a video that has been enhanced for clarity using AI tools. Parties might still argue over whether such evidence is valid or reliable; in these cases, Grimm and Grossman say existing rules of evidence can be revised to help courts assess whether the evidence meets the accepted standards.  

A bigger challenge, they say, is “unacknowledged” AI-generated evidence, where a party proffering evidence claims it is real and an opposing party claims it is a product of AI. Current rules state that for technical evidence to be admissible, the party proffering it need only show preponderance — that the evidence is more likely than not what it is purported to be. That’s a problematic standard in the deepfake era, Grimm says.  

“The challenge under the evidence rules is that’s a low barrier. If something is 51% likely to be what you say it is, it’s 49% likely not to be,” he said. “If the issue in a case is how long someone goes to jail, or whether an important right has been infringed or not, then is mere preponderance sufficient? Or do we need a higher threshold? The key question is, if we get this wrong, what’s the consequence? And will we tolerate that amount of risk to allow this to come in?” 

If a judge does admit evidence, it’s left to juries to decide whether it truly is authentic and how much weight or credence to attach to it. And allowing juries to see fake evidence can have real consequences, Grimm says. AI technology can replicate human voices and likenesses to an astonishing degree of sophistication, and even if a voice or image is discovered to be an AI creation before a verdict is returned, it may have already prejudiced the jury. 

“Some studies say that even if you tell a jury this is bogus evidence, that it’s not authentic, they saw it,” Grimm said. “You can't get the toothpaste back in the tube. They've seen it, and even though they know it's bogus, it impacts the way in which they're processing that information.” 

Courts can’t wait 

Grimm and Grossman have proposed two new rules tailored to AI-generated evidence. One, addressing evidence whose authenticity the parties dispute, would require the challenging party to show a court that “a jury reasonably could find that the evidence had been altered or fabricated” using AI. If that showing is made, the party proffering the evidence would then have to show that its value would outweigh any prejudicial impact on a jury. The process would take place well before a jury is exposed to potentially fake evidence. 

The other, addressing evidence acknowledged by both parties as a product of AI, would require additional evidence supporting its authenticity. The draft rule “provides a ‘recipe’ lawyers can follow when preparing for trials and hearings, and that judges can refer to in ruling on evidentiary challenges,” they write. 

But changes to the Federal Rules of Evidence may take years, Grimm and Grossman acknowledge. In the meantime, courts must prepare for more AI-generated evidence and establish guidelines on the use of AI in legal proceedings, including giving parties adequate time to identify, disclose, evaluate, and challenge any AI-generated evidence well in advance of trial. More broadly, Grimm urges collaboration between attorneys, judges, technology experts, and policymakers on a set of standards flexible enough to accommodate a rapidly evolving tech landscape. 

“The problem with coming up with rules of evidence designed for specific technology is what do you do when the technology changes so fast that next year they're no good? You’re back where you started,” Grimm said. 

“We need to bring in the stakeholders and come up with performance criteria that can allow in various types of technology that have guidelines and best practices built in. Where the outcome can produce an unacceptably wrong or unfair outcome, then you should not let it come in. That's the way the final balance should be, and I think that's the arc of how this is going to go.”