Duke researchers receive Greenwall Foundation grant to address issues in AI-enabled health care delivery
Researchers at the Duke Law Center for Innovation Policy (CIP) and the Duke-Robert J. Margolis, MD, Center for Health Policy (Duke-Margolis Center) have received a one-year grant of more than $196,000 from The Greenwall Foundation to address an emerging problem in artificial intelligence (AI)-enabled health care delivery: the tension between the need for “explainability” of treatment rationales and the need to protect trade secrets in the burgeoning area of clinical decision support software innovation.
Health care professionals’ duty to promote patient welfare includes basing decisions on sound and explainable rationales, and patients, the researchers note in their grant proposal, have the right to understand the nature of and rationale for their treatments, including software-generated recommendations. But certain types of detailed explanation could facilitate reproduction of the software and thereby compromise trade secrets, a key incentive for innovation in the field.
The grant will enable the team to collect and review quantitative data on private and public investment in the AI-based software that is the focus of the project, and to conduct interviews and hold private workshops with stakeholders in the health care regulatory sector, including developers, purchasers, regulators, users, and patients. The researchers’ goals include generating recommendations to facilitate the provision of appropriate levels of explainability to health care professionals (and, ultimately, to patients) and determining what legal or private self-regulatory approaches, if any, might best be employed in doing so. They anticipate publishing their results and recommendations in leading peer-reviewed journals and will also publish a white paper and hold a public conference at Duke’s Washington office to discuss their findings.
Elvin R. Latty Professor of Law Arti Rai, co-director of CIP and a principal investigator on the grant, said the project is the first of its kind to tie explainability issues to quantitative data on current needs for trade secrecy by commercial actors. “While software development has always involved trade secrecy, the importance of trade secrecy as an innovation incentive may have increased as a consequence of challenges associated with securing and enforcing software patents,” said Rai, an internationally recognized expert in intellectual property (IP) law, innovation policy, administrative law, and health law. “For this reason, the principal regulator of AI-based software, the FDA, as well as professional organizations, providers, and insurers are actively interested in the question of how to balance explainability and trade secrecy.”
Rai is co-authoring a white paper with Gregory Daniel, PhD, MPH, deputy director for policy and clinical professor at the Duke-Margolis Center, and Christina Silcox, PhD, the center’s managing associate. The paper will serve as a background document on the legal and regulatory landscape surrounding AI-based clinical decision support software. Guillermo Sapiro, the James B. Duke Professor of Electrical and Computer Engineering at the Pratt School of Engineering, is also a principal investigator on the project. Duke-Margolis director Dr. Mark McClellan; Kimberly J. Jenkins University Professor of New Technologies Vincent Conitzer; and Associate Professor of Computer Science Cynthia Rudin serve as advisors.
“Research on how to effectively integrate artificial intelligence into health care delivery is a new and emerging area of work for the Duke-Margolis Center,” said Daniel. “By working in collaboration with Duke Law, we can move much more quickly to identify real-world policy approaches to support emerging technologies that incorporate AI in helping physicians and patients make better healthcare decisions.”
The Duke project is funded by a “Making a Difference Grant” from The Greenwall Foundation, which primarily supports research in bioethics. The foundation’s Making a Difference in Real World Bioethics Dilemmas program, launched in 2013, supports research “to help resolve an important emerging or unanswered bioethics problem in clinical care, biomedical research, public health practice, or public policy,” according to its website.