AI Insights: Beating Bias in Medical AI
By Josh Miller and Ouwen Huang
Radiology Today
Vol. 23 No. 7 P. 10

Developments in medical imaging over recent years have enabled radiologists to provide earlier diagnoses and better treatments, while improving overall patient outcomes. At the heart of many of these developments is some form of AI. To create the algorithms, software developers use large amounts of medical data to train AI models to uncover and predict associations. These powerful algorithms have the potential to spot relationships in the data that would have been invisible to humans or taken a long time to discover, allowing earlier medical intervention and potentially saving lives. The benefits of these innovations aren’t universally felt, however: improvements in care for minority groups have been held back by bias within the technologies themselves.

Bias in Data
Many innovations are developed within small geographic regions, using only locally acquired data from that population. When the product is taken to a new market with a different population, its performance can change, often for the worse. The same concept applies to diseases: if you teach a technology to recognize only one specific presentation of a condition, its ability to spot subtly different variations will suffer.
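To make that failure mode concrete, here is a minimal sketch, not from the article and using entirely synthetic data, of what can happen when a model trained on one population is evaluated on another where one imaging feature relates to the disease differently. The "sites," features, and coefficients are all hypothetical:

```python
# Illustrative only: synthetic "sites" whose disease presentation differs in
# one feature, standing in for a product trained in one region and deployed
# in another. All numbers are made up for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_site(n, weights):
    """Simulate imaging features and disease labels for one site."""
    X = rng.normal(size=(n, 5))
    p = 1 / (1 + np.exp(-(X @ weights)))
    return X, (rng.random(n) < p).astype(int)

w_a = np.array([1.0, -0.5, 0.8, 0.0, 0.3])   # presentation at site A
w_b = np.array([1.0, -0.5, -0.8, 0.0, 0.3])  # one feature's effect flips at site B

X_train, y_train = make_site(4000, w_a)
X_test_a, y_test_a = make_site(1000, w_a)
X_test_b, y_test_b = make_site(1000, w_b)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = lambda X, y: roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"AUC on the training population: {auc(X_test_a, y_test_a):.2f}")
print(f"AUC on the new population:      {auc(X_test_b, y_test_b):.2f}")
```

The model looks strong on a holdout from its own population and measurably worse on the shifted one, which is exactly the gap an external validation study is designed to catch.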

Some models exacerbate existing social bias, such as gender-based differences in cardiovascular diagnostics, and some do not work effectively on specific populations, for example, skin lesion detection algorithms on nonwhite patients. This holds back the potential of medical AI, at best reducing the number of people it can help and, at worst, producing products that may harm those who don’t fit the narrow margins within which the product was developed.
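One practical defense is to audit a model's error rates per subgroup before release rather than reporting only aggregate accuracy. The sketch below is illustrative; the records, group names, and predictions are hypothetical, not a real product's output:

```python
# Illustrative sketch: auditing a model's performance per subgroup.
# `records` is a hypothetical list of (true label, predicted label, group).
from collections import defaultdict

def subgroup_report(records):
    """Compute sensitivity and specificity for each subgroup separately."""
    tallies = defaultdict(lambda: {"tp": 0, "fn": 0, "tn": 0, "fp": 0})
    for y_true, y_pred, group in records:
        t = tallies[group]
        if y_true == 1:
            t["tp" if y_pred == 1 else "fn"] += 1
        else:
            t["tn" if y_pred == 0 else "fp"] += 1
    for group, t in sorted(tallies.items()):
        sens = t["tp"] / max(t["tp"] + t["fn"], 1)
        spec = t["tn"] / max(t["tn"] + t["fp"], 1)
        print(f"{group:>8}: sensitivity={sens:.2f} specificity={spec:.2f} (n={sum(t.values())})")

# Made-up predictions: a gap like this one is the red flag to look for.
subgroup_report([
    (1, 1, "group_a"), (1, 1, "group_a"), (0, 0, "group_a"), (0, 0, "group_a"),
    (1, 0, "group_b"), (1, 1, "group_b"), (0, 0, "group_b"), (0, 1, "group_b"),
])
```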

The key to overcoming bias in medical AI lies in the data on which algorithms are developed and trained. Developers of AI, whether within academia or in a commercial start-up setting, need to be able to access vast amounts of diverse data. This can be difficult for many reasons.

Barriers to Access
Health care data are subject to necessarily tight privacy regulations, meaning that the institutions best placed to gather these data cannot easily share them. Legislation such as the General Data Protection Regulation and HIPAA is often cited as a key risk consideration by developers of medical AI. As the quantity and complexity of the necessary data grow, so, too, does the potential compliance risk.

Also, gathering, curating, and storing medical data can be incredibly costly, especially at the scale required to develop robust AI models. Some complex models require tens of thousands of data points; each one needs to be collected in a scientifically valid manner and may require expert labeling before it can be compliantly archived for upward of five years. Even with modern cloud computing, these hosting costs can soon spiral.
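A rough back-of-envelope calculation shows how quickly archival hosting alone adds up. Every figure below is a hypothetical assumption for illustration, not a quoted price or a number from the article:

```python
# Back-of-envelope sketch of archival hosting cost; every number here is a
# hypothetical assumption, not a quoted price.
n_studies = 50_000          # "tens of thousands of data points"
gb_per_study = 0.5          # assumed average size of one imaging study
price_per_gb_month = 0.02   # assumed cloud storage price, USD
months = 5 * 12             # retained for upward of five years

total_gb = n_studies * gb_per_study
cost = total_gb * price_per_gb_month * months
print(f"{total_gb:,.0f} GB stored for {months} months ≈ ${cost:,.0f}")
# ≈ $30,000 for storage alone, before collection, expert labeling, and egress.
```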

In addition, time to market is a key factor for investors and innovators, who have a limited runway for developing and deploying products. The traditional approach of large-scale validation studies can take years to design, conduct, and interpret; this not only reduces the chance of commercial success but also delays the human benefit. Unexpected delays crop up at all points during the development lifecycle and can mean the difference between success and failure. This commercial structure pressures developers to rush to market with data sets that are not as large or heterogeneous as they should be.

Best Practices
As a developer of AI innovations, how can you overcome these obstacles?

The good news is that data management methods have evolved greatly over recent years, and AI developers have more options available to them than ever before. From data intermediaries and partners to federated learning (sketched after the considerations below) and synthetic data, there are myriad sophisticated approaches to the problem. However developers choose to solve it, two key considerations should anchor their data strategy:

One: You need a large, heterogeneous data set.
When sourcing data, ask first: Is the data set large enough to be truly representative of the population for whom the product is intended, and can you prove it? A data set that represents the people you’re targeting is the only way to ensure that the models will be effective in interpreting the real-world images they need to work on.

You also need sufficient examples for the models to train on; this is not a situation where N=1 will do. A large data set gives the model enough input to learn how to accurately classify and detect the specific ground truth it is looking for.
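As one minimal way of "proving it," a developer could compare the data set's demographic mix against the intended population with a goodness-of-fit test. This sketch assumes hypothetical groups, counts, and target proportions:

```python
# Illustrative sketch: test whether a data set's demographic mix matches the
# target population. Categories and proportions are hypothetical assumptions.
from scipy.stats import chisquare

dataset_counts = [6200, 2100, 1100, 600]   # counts per demographic group
target_props   = [0.60, 0.25, 0.10, 0.05]  # intended-population proportions

n = sum(dataset_counts)
expected = [p * n for p in target_props]
stat, p_value = chisquare(dataset_counts, f_exp=expected)
print(f"chi-square={stat:.1f}, p={p_value:.3g}")
# A tiny p-value means the data set's mix differs measurably from the target
# population; passing this test is necessary but not sufficient evidence.
```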

Two: Data must be sourced responsibly and reliably.
From an ethical perspective, all data used to train AI models should be responsibly sourced. This means the data are properly anonymized, comply with all data regulations of the region from which they were gathered, and, where necessary, have been approved for use by an ethics board.
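As an illustration of the anonymization step, here is a minimal DICOM de-identification sketch using the pydicom library. A compliant pipeline must follow a full de-identification profile (e.g., DICOM PS3.15) and be validated; this only shows the shape of the task, and the tag list and file names are placeholders:

```python
# Minimal de-identification sketch with pydicom. Real-world anonymization
# requires a full, validated profile; this is only an outline of the task.
import pydicom

PHI_KEYWORDS = [
    "PatientName", "PatientID", "PatientBirthDate", "PatientAddress",
    "ReferringPhysicianName", "InstitutionName", "AccessionNumber",
]

def deidentify(in_path, out_path):
    ds = pydicom.dcmread(in_path)
    for keyword in PHI_KEYWORDS:
        if keyword in ds:        # blank the element if it is present
            setattr(ds, keyword, "")
    ds.remove_private_tags()     # vendors often tuck PHI into private tags
    ds.save_as(out_path)

deidentify("study.dcm", "study_anon.dcm")  # placeholder file names
```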

Responsible sourcing isn’t the only consideration. There should be clear responsibility for who anonymizes the data and who is liable if identifiable data are accidentally disclosed. Another useful test is whether the data subjects themselves stand to benefit from the end application.

As mentioned, complying with the relevant data regulations for the region should be standard for any data sourcing. It also means that when it comes time to submit a product to regulatory bodies, all the correct boxes have been ticked in terms of how the models have been developed.
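Returning to the federated learning option mentioned earlier: its appeal is that institutions share model updates rather than patient data. A minimal federated averaging sketch, with synthetic data and a deliberately simple linear model, might look like this; it is an illustration of the idea, not a production protocol:

```python
# Minimal federated averaging (FedAvg) sketch with NumPy: each institution
# trains locally and shares only model weights, never patient records. The
# model is plain linear regression fit by gradient steps; data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])

# Three "hospitals," each with local data that never leaves the site.
sites = []
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=200)
    sites.append((X, y))

w = np.zeros(3)  # global model: the only thing crossing institutional walls
for _ in range(100):
    local_updates = []
    for X, y in sites:
        grad = 2 * X.T @ (X @ w - y) / len(y)  # one local gradient step
        local_updates.append(w - 0.1 * grad)
    w = np.mean(local_updates, axis=0)         # server averages the weights

print("recovered weights:", np.round(w, 2))    # ≈ true_w, without pooling data
```

Only the averaged weights move between sites, which is what makes the approach attractive under the privacy constraints described earlier, although real deployments add secure aggregation and careful governance on top.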

Ultimately, medical AI will only ever be as good as the data it is trained on. To build truly representative products that help their intended populations, innovators need to think deeply about how their data are sourced and whether bias is being eliminated.

Josh Miller, CEO and cofounder of Gradient Health, holds a BS/BSE in computer science and electrical engineering from Duke University. In addition to leading the growth of Gradient Health, he continues to serve as an advisor to startups.

Ouwen Huang is chief scientific officer and cofounder of Gradient Health. He is an MD/PhD candidate and associate instructor at Duke University.