Raising the Bar for Medical AI

Thought leaders, patient advocates develop guidelines for ethical use of AI in medicine
 

A computer-generated image of a digital display featuring a rotating body.

From the invention of the wheel to the advent of the printing press to the splitting of the atom, history is replete with cautionary tales of new technologies emerging before humanity was ready to cope with them.

For Zak Kohane, the chair of the Department of Biomedical Informatics in the Blavatnik Institute at Harvard Medical School, the arrival in fall 2022 of generative artificial intelligence tools like ChatGPT was one such moment.

“After going through the stages of grief, from denial to acceptance, I realized we’re on the verge of a major change,” Kohane said. “It was urgent to have a public discussion.”

In academic circles, Kohane has long been known as an AI evangelist. He has studied AI and written about its tremendous promise to change medicine for the better by doing everything from detecting novel disease syndromes to minimizing rote work, reducing medical errors, easing clinician burnout, and empowering clinical decision-making, all of which would converge to improve patient health.

So why was the news of ChatGPT’s arrival so unsettling?

“It is a mind-blowing technology, yet for now we cannot guarantee that its advice is reliably trustworthy every time,” Kohane said. “Despite their promise, ChatGPT and tools like it are immature and evolving, so we need to figure out how to trust their abilities but verify their output.”

For Kohane and like-minded colleagues, one question looms larger than others: How can we prevent harm without extinguishing the enormous potential of a promising technology?

With that urgent question in mind, Kohane convened colleagues from across the world, across disciplines, and across industries to ponder critical questions about AI in health care. The aim: to develop an ethical framework that would inform and guide policymakers and regulators.

“We have a societal obligation to develop a pathway to guide us in what is a deeply confusing situation,” Kohane told attendees.

During the last two days of October, experts in policy, patient advocacy, health care economics, AI, bioethics, and medicine pondered and debated several questions related to the safe and ethical use of artificial intelligence in medicine.

The deliberations culminated in a set of broad guiding principles published simultaneously in Nature Medicine and The New England Journal of Medicine AI, of which Kohane is editor-in-chief. These principles, the participants said, should help inform both the public discussion and eventual regulations of AI in medicine.

The overarching consensus converged on the theme of doing good while minimizing harm. Adopting medical AI will pose challenges, participants agreed, but failure to do so may pose a greater risk, especially where AI stands to yield the greatest benefits, such as in absorbing administrative rote tasks, lessening clinician stress, improving access to care, and reducing medical errors.

Who should medical AI serve?

How should regulators balance the overlapping, and sometimes diverging, interests of patients, clinicians, and institutions in the design and deployment of medical AI models?

Because incentives and interests can become misaligned, regulation should recognize this heterogeneity of interests and contexts and maximize equity of access.

Panelists agreed that medical AI models should be designed and deployed under the moral imperative not merely to avoid harm but to do good and achieve maximal benefit for the greatest number of patients.

“Patients should be viewed as the ultimate stakeholders and primary beneficiaries of medical AI,” said Tania Simoncelli, vice president of Science in Society at the Chan Zuckerberg Initiative.

And patients should be actively involved, panelists said.

“Patients need to be active participants in the process of designing, deploying, and using AI, not just the passive beneficiaries of the things that smart people do for them,” said patient advocate and activist Dave deBronkart, known as e-Patient Dave.
