No Labels? No problem!

New tool overcomes major hurdle in clinical AI design

Image: chest X-ray (agcuesta/iStock/Getty Images Plus)

Harvard Medical School scientists and colleagues at Stanford University have developed an artificial intelligence diagnostic tool that can detect diseases on chest X-rays directly from natural-language descriptions contained in accompanying clinical reports.

The work represents a major advance in clinical AI design because most current AI models require laborious human annotation of vast amounts of data before the labeled data can be fed into the model to train it.

A report on the work, published Sept. 15 in Nature Biomedical Engineering, shows that the model, called CheXzero, performed on par with human radiologists in its ability to detect pathologies on chest X-rays.

The team has made the code for the model publicly available for other researchers.

Most AI models require labeled datasets during their “training” so they can learn to correctly identify pathologies. This process is especially burdensome for medical image-interpretation tasks, since it involves large-scale annotation by human clinicians, which is often expensive and time-consuming. For instance, to label a chest X-ray dataset, expert radiologists would have to look at hundreds of thousands of X-ray images one by one and explicitly annotate each one with the conditions detected. While more recent AI models have tried to address this labeling bottleneck by learning from unlabeled data in a “pre-training” stage, they eventually require fine-tuning on labeled data to achieve high performance.

By contrast, the new model is self-supervised: it learns on its own, without the need for hand-labeled data before or after training. The model relies solely on chest X-rays and the English-language notes found in the accompanying radiology reports.
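This style of image-text self-supervision is commonly implemented with contrastive language-image pretraining: the model learns to match each X-ray to its own report, then classifies new images by scoring them against text prompts describing each condition. The sketch below is a minimal illustration of that idea, not CheXzero's released code; the toy encoders, dimensions, and prompt handling are all placeholder assumptions.

```python
# Minimal sketch of contrastive image-report pretraining plus zero-shot
# classification with text prompts. Encoders and dimensions are illustrative
# placeholders, not the architecture used by CheXzero.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyEncoder(nn.Module):
    """Stand-in encoder mapping an input vector to a shared embedding space."""
    def __init__(self, in_dim: int, embed_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, embed_dim)
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)  # unit-norm embeddings

image_encoder = ToyEncoder(in_dim=1024, embed_dim=128)  # stand-in for a vision model
text_encoder = ToyEncoder(in_dim=512, embed_dim=128)    # stand-in for a text model

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    # Similarity matrix: each X-ray should match its own report, not others'.
    logits = img_emb @ txt_emb.t() / temperature
    targets = torch.arange(len(img_emb))
    # Symmetric cross-entropy over image-to-text and text-to-image matching.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Training step on a batch of (X-ray, report) pairs -- no disease labels needed.
images = torch.randn(8, 1024)   # placeholder for preprocessed chest X-rays
reports = torch.randn(8, 512)   # placeholder for tokenized report text
loss = contrastive_loss(image_encoder(images), text_encoder(reports))
loss.backward()

# Zero-shot inference: score an X-ray against text prompts for each condition,
# e.g., embeddings of "pneumonia" vs. "no pneumonia".
prompts = torch.randn(2, 512)
with torch.no_grad():
    scores = image_encoder(images[:1]) @ text_encoder(prompts).t()
    probabilities = scores.softmax(dim=-1)  # relative match to each prompt
```

In a real system the image encoder would be a vision network over pixel data and the text encoder a transformer over tokenized reports; the key point is that the training signal comes entirely from the pairing of images with their reports, with no disease labels anywhere in the loop.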

“We’re living in the early days of the next-generation medical AI models that are able to perform flexible tasks by directly learning from text,” said study lead investigator Pranav Rajpurkar, assistant professor of biomedical informatics in the Blavatnik Institute at HMS. “Up until now, most AI models have relied on manual annotation of huge amounts of data—to the tune of 100,000 images—to achieve high performance. Our method needs no such disease-specific annotations.”

Read the full article in HMS News