PUBLISHED: May 30, 2025

AI is flooding the zone with patents. How can they be more reliable?


The growing use of AI in innovation is adding stress to a system that already fails to take reliability into account, says Duke researcher Arti Rai

Arti K. Rai

Artificial intelligence, or AI, is playing a growing role in the biosciences. It can aid radiologists in identifying abnormalities in organs. It can identify new drug regimens for rare diseases. And it can assist in innovation by generating new products, ideas, or molecules that can ultimately be patented.

But those benefits may come with a cost. AI’s ability to generate speculative ideas at a rapid clip risks overrunning the system with ideas that are granted patents without adequate scrutiny, deterring others from pursuing them, says intellectual property and innovation policy expert Arti K. Rai. She argues that the patent system already suffers from lax standards that allow a patent to be granted simply because the proposed invention “doesn’t defy the laws of physics.”

“One bad scenario is patents going to the wrong people, who do poor-quality AI work,” said Rai, the Elvin R. Latty Distinguished Professor of Law and Co-Director of The Center for Innovation Policy at Duke Law School. “As a consequence, the people who are more careful and do higher-quality work will not be able to get a patent.” She added that even if speculative ideas generated by AI are simply put into the public domain, the public availability of such speculation could, under current patent law standards, preclude patents for those who do careful work.

In “The Reliability Response to Patent Law’s AI Challenges,” forthcoming in UC Davis Law Review, Rai suggests a way forward. By emphasizing reliability, or whether an idea is likely to pan out in real life, patent doctrine can ensure that the system does not get flooded with purely speculative AI-generated ideas.

Theoretically, a well-designed patent system strikes a balance: it rewards innovation with a degree of exclusivity for an invention while not stifling innovation by granting patents too early in the process or allowing patents that are too broad. Premature or overly broad patents hinder competing parties who might be in a better position to pursue research or development of a given idea or product. A well-designed system should also ensure that speculation in the public domain doesn’t preclude patents by those who do reliable work.

In practice, though, time pressures on patent examiners and the lax doctrine that has evolved in the courts have lowered the credibility standard to the point where it can be met by nearly anything “that doesn’t claim to turn a base metal into gold,” Rai said, only half-joking. In symmetrical fashion, courts have allowed casual speculation in the public domain to preclude patents for those who later do careful work.

In her paper, Rai develops the concept of reliability as a heightened standard that would address these shortcomings. Emphasizing reliability would mean raising the bar for the various requirements a patent is technically supposed to meet, including utility, novelty, nonobviousness, and inventorship. It would also mean that unreliable prior speculation would not preclude patents.

Rai says advancements in AI highlight the danger of not raising the bar: “If AI does everything — if it comes up with the molecule, for the most part, and a human isn't doing anything — does that satisfy the novelty and nonobviousness requirements?” And if AI creates the molecule, then who is the inventor?

AI in drug development and discovery

In drug development, for example, pharmaceutical firms typically begin their search for patentable molecules within a universe of approximately 10 million compounds. But AI systems could conceivably generate and sort through a vastly larger set of compounds to suggest possible molecules for patenting. Predictive AI models, trained on data from existing drugs, have already been used by pharmaceutical scientists to predict off-target (unintended) effects of their companies’ molecules and to link clinical data with drug-target outcomes.

Rai uses the drug discovery and development process as a case study to illustrate how AI is already being used with little transparency, and why developing a meaningful standard of reliability can help the patent system properly recenter and reward human agency while still encouraging people to explore AI’s potential.

Rai assembled a data set of 40 “AI-native” firms that rely exclusively on AI for drug discovery and that also hold three or more patents or have molecules in clinical trials. She found that of 135 patents filed by those firms, just four mentioned AI use in their patent disclosures. “These firms talk about AI in their marketing materials, but if you look at the patent they don’t talk about it at all,” Rai said, likely because of concerns that “if they disclose the use of AI, there’s going to be a problem for human inventorship.”

In her view, this lack of transparency will advance neither useful science nor sound law. “The whole system is kind of messed up, in my view. It’s almost like, ‘Don’t ask, don’t tell,’” Rai said. “I don’t think that’s the way it should be. I don’t think that promotes rule-of-law values.”

Rai bolsters her argument in a separate study, “What patents on AI-derived drugs reveal,” published in Science. In an examination of patents filed by 116 AI-native drug development firms, Rai and Boston University School of Law professor Janet Freilich found that, compared with similar non-AI-powered firms, the AI-native firms did less testing on live animals, a step that gives at least some indication that a molecule has utility. Only 23% of AI-native firms conducted animal testing before obtaining their patent, compared with 47% of traditional firms. Whether a molecule performs in animal testing is “a very concrete example of what reliability might mean in the drug context,” Rai said.

Emphasizing human involvement

Rai proposes enhancing reliability by highlighting the role of the human mind, even in AI-powered drug discovery. Centering human agency and responsibility in the development of AI tools helps the system as a whole, Rai said. “It also addresses the unique challenges that AI creates for inventorship in particular, and maybe obviousness as well.”

This means firms should “show their work” by disclosing how humans shaped the AI’s performance: Did a human assemble the training data? Check the model? Error-check the output? 

In the context of drug development, “We should think about invention less as the ‘eureka moment’ in which the AI, rather than the human, comes up with a molecule, and more as the process by which the human then checks on the eureka moment to see whether it makes any sense,” Rai said.

“The human has to be involved in order to show reliability.”