Duke Center on Law & Technology launches RAILS initiative
The Responsible AI in Legal Services network brings together industry leaders to promote the ethical and safe use of AI in legal services.
The Duke Center on Law & Technology on Wednesday launched Responsible AI in Legal Services (RAILS), a collaborative network of legal services industry experts supporting the responsible, ethical, and safe use of artificial intelligence to advance the practice of law and delivery of legal services.
The RAILS initiative aims to create guidelines and educate professionals on incorporating artificial intelligence (AI) technology into legal and legal-adjacent services, said Jeff Ward, clinical professor of law and director of the Duke Center on Law & Technology. Currently, few workable best practices exist for responsibly and ethically applying AI in a legal setting.
“From contract analysis to litigation support to access-to-justice innovation, AI technology has entered the legal services domain and has enormous potential for positive impact if it is applied responsibly,” Ward said. “We launched RAILS to provide both enthusiasm and guardrails to how AI can and should be applied in legal services.”
Shaped over recent months with the guidance of a widely representative steering committee, RAILS includes members of the judiciary, private corporations, law firms, tech providers, and non-profit organizations.
“As with any transformational technology, ethical guidance will need to be developed and RAILS will serve as an important catalyst of this transformation by fostering an open dialogue and bringing together diverse viewpoints,” said LeeAnn Black, chief operating officer at Latham & Watkins and a member of the RAILS steering committee.
In addition to increasing the general understanding of AI by identifying gaps in current research and areas that need further exploration, RAILS will develop guidelines and best practices and work to educate legal professionals, Ward said.
These guidelines will address key issues such as client confidentiality, unbiased decision-making, transparency, accountability, and more. To ensure the guidelines are practical and universally applicable, RAILS will actively seek input and feedback from a diverse range of legal professionals and technologists. Once tangible and workable solutions are identified, RAILS will create educational and training resources to help legal professionals understand AI tools, their potential applications in legal services, and the ethical considerations surrounding their use.
“The potential of AI in advancing access to justice is immense,” said steering committee member Bridget McCormack, the former Michigan Supreme Court chief justice who now serves as president and CEO of the American Arbitration Association and as strategic advisor to the Future of the Profession Initiative at the University of Pennsylvania Carey Law School.
“RAILS is a critical step towards ensuring that this technology is used not just effectively, but equitably and with a clear focus on enhancing the public’s trust in the legal system.”
Members of the initial steering committee also include Ward and Paul W. Grimm MJS ’16, the David F. Levi Professor of the Practice of Law and director of the Bolch Judicial Institute. Grimm, a retired district judge of the United States District Court for the District of Maryland, has been a prominent voice on the growing use of AI in the judicial system, particularly on the admissibility of AI-generated evidence.