PUBLISHED: October 28, 2020

Rai co-authors guidelines on the use of AI-enabled medical software

The collaboration with the Duke-Margolis Center recommends ways to promote accountability in AI-enabled medical software while protecting intellectual property.

Elvin R. Latty Professor of Law Arti Rai has co-authored a white paper with guidance for the use of artificial intelligence (AI) in health care. Rai, the co-director of the Duke Center for Innovation Policy, collaborated on the report with colleagues at Duke University’s Robert J. Margolis, MD, Center for Health Policy (the Duke-Margolis Center).

The report, titled Trust, but Verify: Informational Challenges Surrounding AI-Enabled Clinical Decision Software, is a resource for software developers, regulators, clinicians, policy makers, and other stakeholders on how to promote innovation of safe, effective, AI-enabled medical products while communicating as transparently as possible about how and when to use them. It was released following a yearlong study by Rai and her co-authors on potential tensions between users’ need for explanation of software output and developers’ need to protect certain trade secrets in this burgeoning area.

“Stakeholders require substantial information about AI-enabled software to effectively harness its benefits and mitigate risk,” write Rai and co-authors Christina Silcox and Isha Sharma, managing associate and senior research assistant, respectively, at the Duke-Margolis Center. Their report examines where mismatches between stakeholders’ positions on information flow may exist and proposes recommendations to bridge them.

Venture capital investment in AI-enabled clinical decision software has risen sharply, and the health AI market is expected to reach $6.6 billion in 2021, up from $600 million in 2014, the report states, citing a 2017 Accenture study. The explosive growth of such technologies has prompted an ongoing reevaluation of digital medical device regulation by the Food and Drug Administration (FDA).

Clinical decision software can be used in health care to assist with, or potentially even fully automate, clinical decision-making around risk assessment, diagnosis, and treatment through rules-based or data-based algorithms. Rules-based software has long been used by providers in health care settings for tasks ranging from administrative work to diagnosis and treatment decisions. One example of a simple rules-based output the report offers is a reminder that a patient is due for a certain test or procedure based on a scheduling rule, as in the sketch below.
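To make that example concrete, here is a minimal sketch of such a scheduling rule in Python. The 12-month interval, function name, and reminder message are hypothetical illustrations, not details from the report; the point is that the logic is explicit and human-authored.

```python
from datetime import date, timedelta
from typing import Optional

# Hypothetical scheduling rule: flag a patient as due for a screening
# once more than 12 months have passed since the last one.
SCREENING_INTERVAL = timedelta(days=365)

def screening_reminder(last_screening: date, today: date) -> Optional[str]:
    """Return a reminder message if the patient is due, else None."""
    if today - last_screening >= SCREENING_INTERVAL:
        return "Patient is due for an annual screening."
    return None

# A patient last screened 14 months ago triggers the reminder.
print(screening_reminder(date(2019, 8, 1), date(2020, 10, 28)))
```

Because every branch of a rule like this can be read directly, its reasoning is fully transparent to clinicians and regulators.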

Recent advances in machine learning have led to the development of more sophisticated data-based algorithms that can assist clinicians in more complex decision-making. Nearly all stakeholders interviewed for the report said these tools can enhance workflow, positively influence health care decisions, and improve outcomes. But their use in clinical settings may entail conflicts around transparency that can hinder adoption, the report states.
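By contrast, a data-based tool learns its decision rule from historical records rather than encoding it by hand, which is part of why its output can be harder to explain. A minimal sketch follows, assuming scikit-learn and entirely synthetic data; the features, model choice, and risk score are illustrative, not drawn from the report.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: two features per patient (e.g., age, a lab
# value) and a binary outcome. Real tools train on historical records.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# The decision rule is fit to the data rather than written by hand,
# which is what makes its reasoning harder to inspect and explain.
model = LogisticRegression().fit(X, y)

new_patient = np.array([[1.2, -0.3]])
risk = model.predict_proba(new_patient)[0, 1]
print(f"Estimated risk: {risk:.2f}")
```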

“AI has the potential to streamline workflows, increase job satisfaction, reduce spending, and improve health outcomes,” write the co-authors. “Estimates show that AI can help address about 20 percent of unmet clinical demand. However, to achieve this goal and long-term success, ensuring that the right information is shared with the right stakeholder at the right time will be essential.”

The report covers a broad range of issues, addressing the unique challenges of using AI in health care, key questions and answers about AI-enabled clinical decision software, and the patent status, regulation, and adoption of this software.

Among its recommendations: a call for public disclosure about the intended use of AI-enabled decision software, including the clinical context and how the software’s recommendations should be used; the development of best practices and recommendations on evaluating and vetting new AI-enabled software products; joint monitoring, evaluation, and information-sharing by manufacturers and health systems after products are implemented; and, for products with a higher risk profile, procedures to share information that developers consider a trade secret with trusted third parties such as the FDA.

The report was released in conjunction with the preprint of an article, Accountability, Secrecy, and Innovation in AI-Enabled Clinical Decision Software, which will be published in a forthcoming issue of the Journal of Law and the Biosciences. The project is funded by a $196,000 “Making a Difference” grant from The Greenwall Foundation.