Scientists from CSIRO, Australia’s national science agency, have developed a world-first method to teach AI how to write more accurate chest X-ray reports by giving it the same information doctors use in real life. Using more than 46,000 real-world patient cases from a leading US hospital dataset, the team trained a powerful multimodal language model to generate detailed radiology reports. The results showed 17% better diagnostic insights and stronger alignment with expert radiologist reporting.
With hospitals worldwide struggling to keep pace with demand amid chronic radiologist shortages, this research could pave the way for faster, safer, and more reliable X-ray reporting in clinical settings. Until now, AI tools tasked with interpreting chest X-rays relied solely on the images themselves and the doctor’s referral, without being equipped to read the vital clues hidden in patients’ medical records. Researchers from CSIRO’s Australian e-Health Research Centre flipped that approach—combining imaging with emergency department data such as vital signs, medication history, and clinical notes to significantly improve diagnostic performance.
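To illustrate the idea, here is a minimal sketch, not CSIRO's actual code, of how emergency department context might be serialized into the text that accompanies a chest X-ray when prompting a multimodal report generator. The `EDContext` fields, `build_prompt` helper, and example values are hypothetical names invented for this example.

```python
# Minimal sketch (not the CSIRO implementation): packaging emergency
# department data alongside a chest X-ray for a multimodal report model.
from dataclasses import dataclass

@dataclass
class EDContext:            # hypothetical container for bedside data
    vital_signs: dict       # e.g. {"heart_rate": 102, "spo2": 93}
    medications: list       # recent medication history
    triage_note: str        # free-text clinical note

def build_prompt(indication: str, context: EDContext) -> str:
    """Serialize the referral and ED record into the text prompt that
    would accompany the chest X-ray image."""
    vitals = ", ".join(f"{k}: {v}" for k, v in context.vital_signs.items())
    meds = "; ".join(context.medications) or "none recorded"
    return (
        f"Indication: {indication}\n"
        f"Vital signs: {vitals}\n"
        f"Medications: {meds}\n"
        f"Triage note: {context.triage_note}\n"
        "Generate the findings section of the radiology report."
    )

# Example usage with made-up values:
ctx = EDContext(
    vital_signs={"heart_rate": 102, "spo2": 93, "temp_c": 38.1},
    medications=["furosemide", "metoprolol"],
    triage_note="Shortness of breath, worsening over two days.",
)
print(build_prompt("Dyspnoea, query pulmonary oedema", ctx))
# The prompt plus the X-ray pixels would then be passed to the
# multimodal language model (not shown here).
```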
“The AI is functioning as a diagnostic detective, and we’re equipping it with more evidence,” says lead author Aaron Nicolson, PhD. “When you combine what’s in the X-ray with what’s happening at the bedside, the AI gets more accurate, and much more useful.”
Nicolson presented his recent findings at the Association for Computational Linguistics (ACL) conference in Vienna, Austria.
“This is a practical, scalable way to help overworked clinical teams, reduce diagnostic delays, and ultimately improve outcomes for patients,” Nicolson says.
Professor Ian Scott, a research fellow at University of Queensland Digital Health Centre and clinical consultant in AI at Metro South Hospital and Health Service—one of the organizations involved in testing this new technology—sees strong potential in the approach.
“For hard-pressed radiologists confronting ever-increasing workloads, we need this type of automated multimodal technology to reduce cognitive burden, improve workflows, and allow timely and accurate reporting of chest X-rays for treating clinicians,” Scott says.
Nicolson and his team are currently trialing the technology with the Princess Alexandra Hospital in Brisbane, Australia, to explore how well the AI reporting compares with that of a human radiologist. The team is also looking for additional sites to trial the technology.
The team’s code and dataset are freely available to researchers worldwide, enabling further innovation in AI-assisted diagnostics.
Source: CSIRO
AI can scan a chest X-ray and diagnose whether an abnormality is fluid in the lungs, an enlarged heart, or cancer. But being right is not enough, says Ngan Le, PhD, an assistant professor of computer science and computer engineering at the University of Arkansas. We also need to understand how the computer reaches its diagnosis, she argues, yet most AI systems are black boxes whose “thought process” cannot be explained even by their creators.
“When people understand the reasoning process and limitations behind AI decisions, they are more likely to trust and embrace the technology,” Le says.
Le and her colleagues developed a transparent and highly accurate AI framework for reading chest X-rays called ItpCtrl-AI, which stands for interpretable and controllable artificial intelligence. The team explained their approach in “ItpCtrl-AI: End-to-End Interpretable and Controllable Artificial Intelligence by Modeling Radiologists’ Intentions,” published in Artificial Intelligence in Medicine.
The researchers taught the computer to look at chest X-rays the way a radiologist does. As radiologists reviewed chest X-rays, their gaze was recorded: both where they looked and how long they focused on a specific area. The heat map created from that eye-gaze dataset showed the computer where to search for abnormalities and which sections of the image required less attention.
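A rough sketch of that idea follows; it is not the published ItpCtrl-AI code. Dwell-weighted gaze fixations are converted into a Gaussian heat map that could then be used to steer where the model attends. The `gaze_heatmap` function, its parameters, and the sample fixations are illustrative assumptions.

```python
# Minimal sketch (not the ItpCtrl-AI code): turning recorded gaze points
# into a heat map that highlights where radiologists actually looked.
import numpy as np

def gaze_heatmap(fixations, shape=(224, 224), sigma=12.0):
    """fixations: list of (row, col, dwell_seconds) gaze fixations."""
    h, w = shape
    heat = np.zeros(shape, dtype=np.float32)
    yy, xx = np.mgrid[0:h, 0:w]
    for r, c, dwell in fixations:
        # Each fixation contributes a Gaussian weighted by dwell time,
        # so regions studied longer end up brighter in the map.
        heat += dwell * np.exp(-((yy - r) ** 2 + (xx - c) ** 2) / (2 * sigma ** 2))
    return heat / heat.max() if heat.max() > 0 else heat

# Example: one long fixation (likely abnormality) and one brief glance.
heat = gaze_heatmap([(60, 80, 2.5), (150, 150, 0.3)])
print(heat.shape, round(float(heat.max()), 2))

# During training, such a map could weight an attention or saliency loss
# so the model's focus matches where radiologists concentrated.
```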
Creating an AI framework that uses a clear, transparent method to reach conclusions—in this case, a gaze heat map—helps researchers adjust and correct the computer so it can provide more accurate results. In a medical context, transparency also bolsters the trust of doctors and patients in an AI-generated diagnosis.
“If an AI medical assistant system diagnoses a condition, doctors need to understand why it made that decision, to ensure it is reliable and aligns with medical expertise,” Le says.
A transparent AI framework also strengthens accountability, a legal and ethical concern in high-stakes areas such as medicine, self-driving vehicles, and financial markets. Because doctors know how ItpCtrl-AI works, they can take responsibility for its diagnosis.
“If we don’t know how a system is making decisions, it’s challenging to ensure it is fair, unbiased, or aligned with societal values,” Le says.
Le and her team, in collaboration with the MD Anderson Cancer Center in Houston, are now working to refine ItpCtrl-AI so it can read more complex, 3D CT scans.
Source: University of Arkansas