Getting Along With AI
By Beth W. Orenstein
Radiology Today
Vol. 26 No. 4 P. 18

Collaboration is key to optimizing the benefits that both people and technology bring to interpreting images.

AI is now its own industry. More than 700 tools have been cleared by the FDA, and more than 100 AI companies exhibited at the RSNA 2024 meeting in Chicago last November. “Surveys suggest that a majority of radiology practices are using AI,” says Curtis Langlotz, MD, PhD, a professor of radiology, medicine, and biomedical data science and senior associate vice provost for research at Stanford University. He runs the Center for Artificial Intelligence in Medicine and Imaging at Stanford.

“AI is not just here—it’s here to stay,” adds Nina Kottler, MD, MS, FSIIM, associate CMO for clinical AI at Radiology Partners, who is also affiliated with Stanford. “AI is set to become an indispensable part of modern medical imaging, driving efficiency, accuracy, and accessibility in patient care.”

There are benefits and drawbacks to using AI in radiology. AI can improve the accuracy and efficiency of radiologists, Langlotz says. It also can improve the quality of images produced by imaging devices. However, he adds, AI also can decrease efficiency.

“For example, detection algorithms require the radiologist to carefully examine any false positive alerts, and language models have problems maintaining factual correctness,” Langlotz says.

Early in its development, there were fears that AI would replace radiologists. Those fears seem to have been largely overblown. However, the ideal role of AI in radiology is still up for grabs. The focus now, as it was at RSNA 2024, seems to be: What’s the ideal AI-human interaction in imaging studies for diagnostics and treatment?

Langlotz believes that “there is a huge gap in our understanding of how humans and AI should interact.” The answer, he says, is not one-size-fits-all. The ideal AI-human interaction “likely will vary across settings,” Langlotz says. “For example, in some screening settings, it may be optimal for AI to work without any human oversight when screening studies are almost certainly normal. However, for detection and triage, AI works best when it presents its results for the human to adjudicate their value.”

Kottler agrees that the ideal human-AI interaction can vary depending on the scenario. “The optimal human-AI interaction is a seamless augmentation of both, where each contributes based on its strengths, rather than a fixed ratio,” she says. Kottler explains that there is no single “ideal” percentage for human vs AI involvement. In some instances, she says, AI might play a dominant role (eg, triage), while in others, the radiologist should lead (eg, complex or rare pathologies).

The key challenge, Kottler says, is determining when to rely more on AI and when to prioritize human expertise. “Patients do not desire to have their health care performed autonomously by AI systems,” Kottler says. “It is therefore important to create an environment in which human-AI collaboration is optimized.”

Overcoming Bias
Pranav Rajpurkar, PhD, an assistant professor at Harvard Medical School who leads a research lab focused on advancing medical AI, explains that for years the standard view was that radiologists should work alongside AI, with both examining the same images in parallel. “It seemed obvious that combining human judgment with machine precision would naturally lead to better results,” he says. However, he says, his research has revealed something surprising: The “assistive” model often underperforms.

“When radiologists review images with AI assistance, they exhibit systematic cognitive biases,” Rajpurkar says. “Sometimes they defer too much to AI suggestions (automation bias), and sometimes they discount valid AI findings (automation neglect).”

What works best, Rajpurkar believes, is a clear division of labor—what is called “role separation.” Instead of radiologists and AI doing the same task in parallel, he says, they should each take ownership of different parts of the workflow or different types of cases. “The most promising approach,” Rajpurkar says, “is what we call the ‘case allocation model.’ Rather than having AI and radiologists review every case together, cases should be routed based on their characteristics. AI might handle certain categories of cases independently, while radiologists focus on others where human expertise adds the most value.”
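The routing idea behind the case allocation model can be illustrated with a minimal sketch. The function, categories, and confidence thresholds below are hypothetical, for illustration only; they are not drawn from any deployed system or from Rajpurkar’s work.

```python
# Hypothetical sketch of a "case allocation" router: each study is sent either
# to autonomous AI handling or to a radiologist, based on case characteristics.
# The threshold (0.99) and the category names are illustrative assumptions.

def route_case(ai_confidence: float,
               suspected_rare_pathology: bool,
               study_type: str) -> str:
    """Return 'ai' for AI-handled cases, 'radiologist' otherwise."""
    if suspected_rare_pathology:
        # Complex or rare pathologies: human expertise adds the most value.
        return "radiologist"
    if study_type == "screening" and ai_confidence >= 0.99:
        # Screening studies the model is nearly certain are normal.
        return "ai"
    # Default: the radiologist reads the case.
    return "radiologist"

print(route_case(0.995, False, "screening"))  # ai
print(route_case(0.80, False, "screening"))   # radiologist
```

The point of the sketch is the division of labor: rather than both parties reviewing every image in parallel, the workflow assigns each case to whichever reader is stronger for that case type.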

It is worth noting, Rajpurkar says, that “We don’t know yet how these generalist models might transform the human-AI relationship. Our current evidence on cognitive biases comes from studying relatively narrow AI tools. Generalist models that think more holistically, like our [company’s] system that detects numerous conditions simultaneously, might completely change this equation. They could potentially enable new forms of synergy that we haven’t seen before,” he says. “This would be where the whole truly becomes greater than the sum of its parts. That’s why continued research with these advanced systems is so critical.”

Work at a2z Radiology AI, cofounded in 2024 by Rajpurkar and his father, Samir Rajpurkar, is pioneering a new approach to AI-radiologist collaboration. “Traditional radiology AI tools are narrow,” Rajpurkar explains. “They look for one specific finding such as a pulmonary nodule or brain bleed. But we are developing generalist medical AI systems that can simultaneously detect multiple conditions across an entire scan, which is more like how radiologists actually work.” The company is building comprehensive diagnostic systems capable of analyzing hundreds of clinical findings in each scan, far beyond the capabilities of conventional AI tools, Rajpurkar says. The goal is to ensure that no disease goes undetected, he says.

Pranav Rajpurkar is an author of a study published in March 2025 in Nature that found the interactive ability of modern AI algorithms also opens the doors for them to serve a broader role as “AI residents,” inspired by the workflow at academic hospitals. “Outside of solely report generation, AI assistants could improve medical education by enabling comparisons with similar images and their reports,” the study notes. “Their ability to offer real-time assistance is also relevant for enhancing clinician and patient understanding. An AI resident could enable treating clinicians, such as general practitioners, to probe details of a report in the context of the corresponding image and develop a deeper insight. Patients could query it under supervision to quickly gain a new perspective on their condition. Algorithms could even generate grounded and interactive reports that link particular sentences and findings to relevant areas of the image.”

Optimizing Collaboration
To optimize human-AI collaborations, Kottler recommends the following:

• AI findings should be overseen by specialists who can make independent decisions without blindly trusting AI results (eg, imaging findings should be overseen by radiologists who can perform image interpretation independently).
• Predeployment validation should be considered. Radiologists must understand AI’s capabilities and limitations before employing it clinically.
• There must be transparency at the point of care. “AI should provide explainability, confidence scores, and training data insights to aid interpretation,” she says.

Langlotz, who has worked with AI since the 1980s, believes the combination of AI and human interaction is better than either one alone. “Another way to say that is: AI intelligence is different, not better, than human intelligence,” he says. A few recent studies have cast doubt on that longstanding principle, with results showing that AI alone outperformed AI plus human experts. However, Langlotz says, “I believe those results highlight the need to train clinicians to use AI optimally.”

The optimal level of human-AI interaction can vary depending on the modality and the study’s purpose, the experts agree. Variability is not due to the modality itself but to the data used to train the AI model and the prevalence of the pathology being identified, Kottler says. Positive predictive value (PPV) increases with disease prevalence, while negative predictive value (NPV) decreases. “This mathematical relationship helps determine when to trust AI results,” Kottler says.

For example, “In low-prevalence settings, AI’s PPV is lower, leading to more false positives so physician oversight is critical. However, NPV remains high in low-prevalence settings, making AI more reliable for ruling out disease. Conversely, in high-prevalence settings, such as acute findings in the emergency department, PPV is higher, making positive AI findings more trustworthy than in an outpatient setting where acute pathology is less likely,” Kottler explains. “AI performance depends on the dataset it was trained on, the data being evaluated, and the clinical context in which it is applied. Understanding these factors is helpful in optimizing AI in imaging workflows.” Additional considerations include technical and workflow limitations, physician availability, physician performance in the same setting, presence of AI explainability, potential patient harm, and patient preferences, she adds.
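The prevalence effect Kottler describes follows directly from Bayes’ rule, and a short calculation makes it concrete. The 90% sensitivity and specificity figures below are assumed values for a hypothetical AI model, chosen only to show the shape of the relationship.

```python
# PPV and NPV as functions of disease prevalence, via Bayes' rule.
# Sensitivity/specificity values are illustrative assumptions.

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Probability a positive AI finding is a true positive."""
    tp = sensitivity * prevalence            # true-positive rate in population
    fp = (1 - specificity) * (1 - prevalence)  # false-positive rate
    return tp / (tp + fp)

def npv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Probability a negative AI finding is a true negative."""
    tn = specificity * (1 - prevalence)      # true-negative rate in population
    fn = (1 - sensitivity) * prevalence      # false-negative rate
    return tn / (tn + fn)

# Low-prevalence screening setting: 1% of exams have the finding.
print(round(ppv(0.90, 0.90, 0.01), 3))  # 0.083 -> most alerts are false positives
print(round(npv(0.90, 0.90, 0.01), 3))  # 0.999 -> negatives are very reliable

# High-prevalence emergency setting: 30% of exams have the finding.
print(round(ppv(0.90, 0.90, 0.30), 3))  # 0.794 -> positive alerts far more trustworthy
```

With identical model performance, a positive alert is trustworthy less than 10% of the time in the screening setting but nearly 80% of the time in the emergency setting, which is why the same tool warrants different levels of physician oversight in different contexts.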

Increasing Interaction
Kottler and Langlotz both expect to see more AI-human interaction going forward. “As AI systems improve in accuracy, especially for the long tail of less common pathologies,” Kottler says, “their role in medical imaging and diagnostics will expand.” Additionally, as systems are developed to provide clear, easy-to-consume information about AI reliability, physicians will have greater trust in AI-driven insights, they agree.

However, Langlotz notes, while AI will become a more integral part of the work of radiologists, “We all need to learn how these systems work so we know when they might be leading us in the wrong direction.”

Humans and AI systems function differently, and it is natural to anthropomorphize AI, assuming it “thinks” like a human, Kottler says. “However,” she says, “AI does not reason in the same way, and that fundamental difference is precisely what makes human-AI collaboration so powerful.”

Kottler believes it is important to balance human interaction with AI because that is what makes such collaboration powerful. At her practice, Kottler says, multiple AI models were evaluated on thousands of highly variable exams and, in that process, several key findings were identified:

• AI models can detect true pathology that physicians may miss.
• Physicians with expertise in the underlying technology identify pathology that AI does not recognize.
• When combined, AI and physicians can complement each other, enhancing diagnostic accuracy beyond what either could achieve alone.

“This potential synergy demonstrates that AI is not a replacement but an augmentation tool, the combination of which leverages the strengths of both human expertise and computational capabilities,” Kottler says. However, to fully realize AI’s potential, radiologists must establish systems that mitigate human-computer biases, ensuring that both entities interact optimally and transparently in clinical decision making.

Bigger Role for AI
In the future, Kottler adds, AI will take on more of the initial perceptive tasks such as pattern recognition and triage, while physicians will elevate their roles to focus on higher level cognitive decision making, complex cases, and personalized patient care. “This shift will ultimately improve efficiency, diagnostic accuracy, and patient outcomes while ensuring that clinicians maintain oversight and control over AI-assisted workflows,” Kottler says.

Langlotz also expects AI to play a larger role in radiology in the future. “AI will become an integral part of the work we do—every step,” he says.

Will newly trained radiologists be more likely to embrace AI than those who have been practicing without it for years? Change is challenging for everyone, regardless of age or experience, Kottler says. However, exposure to AI plays a key role, as radiologists who have had more experience with AI are generally less apprehensive and more willing to embrace it. “This fact underscores the importance of increasing physician engagement with AI tools to drive broader acceptance, including as a required component of radiology residency training,” she says.

Those who are especially resistant to change and slow to adopt new technology may struggle to keep pace with colleagues leveraging AI for greater efficiency and accuracy. “To address this, we should offer support and education, as most resistance stems from fear or uncertainty rather than fundamental opposition,” Kottler says. When given the right exposure and training, many initially skeptical physicians come to appreciate AI’s benefits, she notes.

Langlotz believes that radiologists who use AI will replace those who do not. “Younger radiologists are more receptive to new technology generally, and to AI specifically,” he says. However, he believes that “everyone from medical students to the most senior attending will need training to use these systems optimally.”

Langlotz’s sentiment highlights a broader truth, Kottler says. “Physicians who adopt AI will gain an advantage, while those who do not may struggle to keep up with evolving clinical workflows.”

— Beth W. Orenstein of Northampton, Pennsylvania, is a freelance medical writer and regular contributor to Radiology Today.