Welcome to the Federation
By Aine Cryts
Vol. 23 No. 5 P. 14
Federated learning helps diversify the data pool for AI algorithms.
The successful use of AI in radiology has one particularly important requirement: AI algorithms must be exposed to different data, patients, equipment, and protocols. That’s what allows algorithms to see the whole breadth of what’s out there in practice, says Christoph Wald, MD, PhD, FACR, chair of the department of radiology at Lahey Hospital and Medical Center in Burlington, Massachusetts.
His radiology department at Lahey Hospital and Medical Center was an early implementer of ACR AI-LAB, a software package the ACR provides to members that allows them to install and test AI on their institutions’ computers. The benefit, according to Wald: “It’s a framework in which you can participate in federated learning experiments.”
Developed by the ACR Data Science Institute, AI-LAB is a data science toolkit designed to “democratize AI by empowering radiologists to develop algorithms at their own institutions, using their own patient data, to meet their own clinical needs,” according to the ACR’s website.
Wald also serves as chair of the ACR Commission on Informatics, where he leads and coordinates informatics efforts on behalf of the ACR with volunteer physician members. Part of his responsibilities as chair include working with the ACR Data Science Institute, which empowers the advancement, validation, and implementation of AI in medical imaging for the benefit of patients, society, and the radiological professions, according to the institute’s website.
In early 2022, users of AI-LAB began participating in federated learning, which the ACR describes as a method of collaborative AI model training that doesn’t require data sharing. On a website dedicated to AI-LAB, the ACR notes that radiologists interested in participating in future AI-LAB-hosted federated learning experiments can sign up for updates at Federated Learning, AILAB (https://ailab.acr.org/Promotional/FederatedLearning).
Keith Dreyer, DO, PhD, FACR, FSIIM, chief data science officer, chief imaging information officer, and vice chairman of radiology-informatics at Massachusetts General Hospital and Brigham and Women’s Hospital in Boston, says it’s important to dispel myths associated with the use of AI in radiology. For example, he says, it’s not about having enough data. “The real issue,” he says, “is, ‘Do you have broad enough patient demographics, parameters, protocols, slice thicknesses?’”
With federated learning, the AI can work on data from various institutions, which makes the AI models more accurate, says Dreyer, who is also chief science officer with the ACR Data Science Institute.
Optimism and Caution Are Appropriate
Wald not only champions AI-LAB but also uses it with his colleagues at Lahey Hospital and Medical Center. And he’s no newcomer to the use of AI in radiology. Wald presented on the topic “Integrating AI Into the Clinical Workflow” for the ACR Bulletin Podcast in January. During the podcast, Wald discussed what his radiology team learned about integrating AI into their workflow in the triage of critical cases.
According to an interview provided to the ACR in March, his team at Lahey Hospital and Medical Center used AI algorithms, in concert with workflow orchestration software, to help triage cases; in that instance, cases with potentially critical findings were moved to the top of the worklist, thus improving patient care.
Despite his enthusiasm for AI, Wald is acutely aware of its challenges. For one, there’s “persuading our colleagues in multiple institutions to share their data,” Wald says. First, it requires a lot of work to find the data and still more effort to anonymize it, he explains. Second, a radiologist then needs to convince leaders at their health care institution to share the data “with someone else so that it’s aggregated across the institutions.”
Because of these difficulties, AI hasn’t worked as well as it could; the algorithms simply aren’t exposed to sufficient data, he adds.
Federated learning for AI helps solve these challenges because the algorithm is shifted to the data, and the data remain behind the traditional firewall where they’re safe and never leave health care institutions, Wald explains.
That leaves the algorithm to “do its thing and extract the outcomes of that interaction and use that to make the algorithm smarter,” he says. Or put differently, he says, “There’s no more need for data sharing.”
“That’s a very exciting road to go down for the future to … let algorithms see a lot of data from a lot of places, across a lot of different patients, to make [the algorithms] stronger and more generalizable,” Wald says.
Lung Conditions Provide Many Opportunities for Learning
Use of federated learning for AI could be helpful in diagnosing conditions that impact the lung, for example, Wald says. The human lung, with its large surface area that’s folded in a very complicated way, is exposed to a variety of substances, he explains. That includes pollen, car exhaust, and cigarette smoke.
These exposures can result in pneumonia, lung cancer, or other conditions. That’s an important backdrop because so many images are taken of the chest.
“We are continuously taking chest images of people that come to our practices, so we have a lot of data on people’s chests. Usually, the radiologist will make their report and they’ll say whether pneumonia is present or a heart attack is present or not.”
Take, for example, a radiologist who finds that a patient has emphysema, Wald says. That diagnosis really means that the structural tissue in the lung is getting thinned and stretched out and it doesn’t work well.
Lung disease can shorten a person’s lifespan, and it’s often present in patients whose exams are read by radiologists. “Radiologists are somewhat okay at reporting these conditions,” Wald says. “[But] sometimes they get overlooked or they’re really not reported in all detail because the reason for the exam was completely different … and we’re also really not good at estimating how bad emphysema is.”
Companies have developed algorithms to help radiologists quantify these conditions. “When I say we’re not good at estimating emphysema, [I mean] we can see it’s there. I can say it’s really bad, but I can’t really tell you how bad. With my eyes, I can’t do that. But an algorithm is not particularly challenged by that. It can actually tell you what the emphysema degree is; it’s quantifying that,” Wald says.
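The kind of quantification Wald describes is something software handles naturally. As a minimal, illustrative sketch (not any vendor’s actual product), a widely used CT emphysema index is the percentage of lung voxels with attenuation below −950 Hounsfield units, often called LAA-950; the synthetic volume below stands in for a real segmented lung CT:

```python
import numpy as np

def emphysema_index(lung_hu, threshold=-950):
    """Percentage of lung voxels below the attenuation threshold (LAA%)."""
    return 100.0 * np.mean(lung_hu < threshold)

rng = np.random.default_rng(1)
# Synthetic stand-in for a segmented lung volume: normal lung tissue
# clusters around -850 HU, while emphysematous regions approach air
# density (-1000 HU), so a low-attenuation tail signals disease.
lung = rng.normal(loc=-850, scale=50, size=(64, 64, 64))

print(f"LAA-950: {emphysema_index(lung):.1f}%")
```

The point of the sketch is the one radiologists can’t match by eye: the algorithm returns a reproducible number, not an impression of “really bad.”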
“That’s something I’m really excited about. [AI] may open up the possibility of running some or at least the suitable part of the chest imaging that we do through those algorithms to identify those patients who may benefit from additional pulmonology care or a medicine to stop interstitial lung disease or a valve that will explode in part of the lung,” he continues. “We’re in the very beginning of trying to see whether this might work. I can’t tell you yet. We’re going to have to try it out; we’ll try it out in our practice, amongst other things.”
What Radiology Departments and Groups Need to Know
Federated learning for AI works by “sharing AI models, not sensitive data, with participating institutions,” according to the ACR. The requirements for health care organizations include having an on-premise ACR Connect installation or a secure ACR Connect instance in the cloud. Interested institutions are also required to create and validate a relevant data set using tools provided by ACR. Institutions that want to participate must provide a designated point of contact, after which the experiments will be automated for participating organizations.
There’s a lot that occurs behind the scenes with AI-LAB. A dedicated microsite explains that the ACR central server passes a base model of the algorithm to the instance of ACR Connect residing at each participating institution. The base model at each institution is then trained on a previously prepared local data set, creating a new model.
New models from each institution are then passed back to the ACR central server for aggregation, resulting in a single model that’s used as the base model in the next training round. The final federated learning model on the ACR central server is the result of repeating this process many times; a local model at each participating institution is also fine-tuned on data from that institution, per the ACR.
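The send/train/aggregate cycle described above can be sketched in a few lines. This is a minimal toy illustration of the general technique (federated averaging over synthetic data), not the actual ACR Connect software; all names and the simple one-step “training” are assumptions for the sake of the example:

```python
import numpy as np

def local_update(base_weights, local_data, lr=0.1):
    """One institution trains the base model on its own data set.
    The 'model' here is a weight vector improved by a single gradient
    step on a least-squares objective -- a stand-in for real training."""
    X, y = local_data
    grad = X.T @ (X @ base_weights - y) / len(y)
    return base_weights - lr * grad

def aggregate(models, sizes):
    """Central-server step: average the returned models, weighting each
    institution by the size of its local data set (FedAvg-style)."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(models, sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Three 'institutions', each holding private data that never leaves home.
institutions = []
for n in (50, 80, 120):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    institutions.append((X, y))

weights = np.zeros(2)  # base model on the central server
for round_ in range(100):  # repeat the send/train/aggregate cycle
    local_models = [local_update(weights, d) for d in institutions]
    weights = aggregate(local_models, [len(d[1]) for d in institutions])

print(weights)  # approaches true_w without pooling any raw data
```

Only model weights cross institutional boundaries in this loop; the raw arrays stay put, which is the privacy property Wald emphasizes.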
Federated Learning Brings the Algorithm to the Data
John Kalafut, PhD, founder and principal at Asher Orion Group in Pittsburgh, tries to demystify federated learning for AI by describing it this way: “All this excitement, hubbub, about federated AI … really what these algorithms or computer codes are doing is … they’re learning patterns and trends from data sets,” he says. “Machines [are learning] these patterns. They need to be taught, told what a cat looks like, what a dog looks like, what a belly button looks like.”
Kalafut has more than 20 years of experience conceiving, designing, and bringing medical technology innovations to global markets. With clients, he focuses on AI and machine-learning product and market strategy, front-end innovation, and quality and regulatory strategies, among other projects.
He uses the consumer’s experience with a smartphone to explain how federated AI works. “Most of us are familiar with the machine learning algorithms that are used in detecting misspellings or in doing speech-to-text, like what happens on our smartphones or Google or Siri,” Kalafut says.
Setting aside privacy concerns, Kalafut concurs with Wald that AI models require data sets to work on. To illustrate the concept, he borrows from the technology used by self-driving cars. “You will typically need … a human to take those pictures and circle something on them. Like, ‘This is a stop sign for a self-driving car,’ vs ‘This is a yield sign,’” he says. “That’s how the system learns.”
Outside the realm of the person with the smartphone or the self-driving car or the radiologist at their workstation is the learning that occurs on a server with tremendous amounts of computing power in the cloud, he explains. “It’s really difficult to train algorithms on your little pocket phone.”
Federated learning for AI overcomes those challenges by bringing the code to the person’s smartphone, a car, an MRI, or an ultrasound scanner, he adds. That’s where the statistical work is done, using the data that reside on the local machine. As the ACR describes earlier in this article, Kalafut explains, this process iterates to make the AI algorithm increasingly smarter, much as a smartphone learns to recognize speech or a self-driving car learns that a stop sign is a stop sign.
What’s Next for Federated Learning in AI?
“Federated learning in AI is really in its total infancy [in radiology]. It’s embryonic,” Wald says. “It’s a technique that’s been used a lot in data science. … There are probably less than five platforms out there that I’m aware of that you could license tomorrow to just get into the game and become a site and can actually participate in this approach,” he adds.
The value the ACR brings to the table with AI-LAB is providing the federated learning system to imaging departments that share some combination of these three characteristics, per Wald: They’re not engaged in AI, they’re very cautious about AI, and “they’re not overflowing with free cash.” He adds that the ACR is making an investment of membership dollars into creating some of the infrastructure supporting federated learning for AI and then providing it to member practices.
Two of the ACR’s goals include getting practices more comfortable with the AI terminology and the logistics of participating in these types of experiments, according to Wald. “[Practices] can try it out in a safe environment, with a trusted partner rather than with a commercial entity. [The ACR] is trying to get people educated, familiarized with this approach. Federated learning is just one technique,” he says.
Wald recommends that practices realize that using federated learning for AI—and, indeed, AI in general—isn’t easy. “It’s not easy to get reliable results. You really have to get reliable results. You have to work together. The clinical users of this information have to team up with the imaging experts that they collaborate with. You know why? What’s going into the AI is what we make here in radiology. If I put the wrong ingredients into the algorithm, it ain’t working,” he cautions.
In terms of a timeline, he estimates that federated learning in AI will take off in the next two to three years. Wald expects that corporate entities will respond quickly to positive adoption and use of this model in radiology, in great part because it would significantly lower development costs and improve the generalizability of commercialized products.
As Dreyer sees it, it could take anywhere from two years to five or 10 years before the use of federated learning for AI really takes off. The wild cards here—which could dramatically shorten the timeline—are companies such as Google that have experience in AI. Being pragmatic, he observes that many of these consumer and business-to-business behemoths don’t quite understand the intricacies of health care yet.
— Aine Cryts is a health care writer based in the Boston area.