March/April 2026 Issue
AI Insights: Reimagining Radiology
By Cameron Andrews
Radiology Today
Vol. 27 No. 2 P. 5
For 30 years, diagnostic imaging has lived inside a fractured ecosystem. PACS, reporting, and worklists each evolved along separate paths, producing three loosely connected worlds. Each piece of the puzzle improved over time, but never in concert. The result is an imaging IT stack that is extraordinarily advanced in parts yet fundamentally inefficient as a whole.
Radiologists today spend as much time navigating systems as they do interpreting studies. Every handoff between platforms and every new integration slows them down, and AI tools meant to transform their workflows have only added layers of complexity. Think of the imaging IT stack as an old house that has gone through one too many renovations: It's time for a complete rebuild, reimagined from the ground up.
Most innovations and advances in radiology have only shifted the workload instead of removing it. When speech recognition replaced transcription, for example, the turnaround delay disappeared, but so did the assistant who once managed documentation, and that work fell to the radiologist. And when AI entered the field as a tool that promised to ease the pain, it simply became another login and another workflow running beside, not within, the diagnostic process. The overall system, the architecture, is still holding radiology back.
Perhaps the biggest problem of all is that PACS and RIS were never built to incorporate outside context. Each study was an isolated event with no relationship to priors, pathology, notes, or genomic data. Today, as medicine grows ever more personalized, this lack of context is alarming. Without a shared ontology or data model, systems cannot truly understand the information they manage. They can view, store, retrieve, and share it beautifully, but not connect it meaningfully. To move forward, we must stop stacking legacy systems on legacy systems and instead build a unified core: one that treats pixels not merely as pictures to display but as the primary expression of the imaging data itself.
From Pixel to Reporting
The unification that radiology needs begins at the point of interpretation. Pixel-to-reporting is not just a new feature but a whole new philosophy. It means that the images themselves directly generate structured findings in real time. In a pixel-to-reporting environment, every interaction with an image is a reporting action; the system automatically captures the context and translates it into structured language. Reporting becomes part of the viewing experience, not a separate task. Radiologists interact with pixels, and the system follows their reasoning, linking image regions, measurements, and observations into a coherent report.
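To make the idea concrete, here is a minimal sketch of how a viewer interaction might become a structured finding. Everything in it (the `Measurement` and `Finding` classes, the `draft_report` helper, the sentence template) is illustrative, not a real product API; it simply shows reporting emerging as a side effect of viewing.

```python
from dataclasses import dataclass

@dataclass
class Measurement:
    series: str      # where in the study the interaction happened, e.g., "axial CT chest"
    region: str      # anatomical region the reader is working in
    value_mm: float  # measured diameter in millimeters

@dataclass
class Finding:
    sentence: str
    source: Measurement  # structured link back to the pixels that produced the finding

def to_finding(m: Measurement) -> Finding:
    """Translate a single viewer interaction into structured report language."""
    sentence = f"{m.region.capitalize()} nodule measuring {m.value_mm:.1f} mm ({m.series})."
    return Finding(sentence=sentence, source=m)

def draft_report(measurements: list[Measurement]) -> str:
    """Findings accumulate into a draft report as the reader works; no separate dictation step."""
    return "\n".join(to_finding(m).sentence for m in measurements)

# Two measurements made while scrolling the exam:
reads = [
    Measurement(series="axial CT chest", region="right upper lobe", value_mm=6.4),
    Measurement(series="axial CT chest", region="left lower lobe", value_mm=4.1),
]
print(draft_report(reads))
```

Because each `Finding` keeps a reference to its source `Measurement`, the report stays traceable back to the image regions that generated it, which is the essence of the pixel-to-reporting link.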
But to make that possible, imaging data must be expressed in a shared language: a unified ontology that defines and relates concepts within radiology. Within such a model, the system doesn't just display images; it understands them. A chest CT angiogram, for example, is automatically recognized as a subset of a thoracic CT. AI results, prior history, and new measurements all align within the same framework. This allows the platform to automatically retrieve the correct priors, normalize terminology across reports, and make AI outputs native to the platform. It turns pixels into knowledge.
Personalization no longer comes at the cost of productivity when a workflow is unified from the ground up. Prior studies appear automatically beside the current exam, not because someone searched for them, but because the system knows their relevance. AI doesn't interrupt the reading flow; quality checks and auto-impressions run invisibly in the background. The system adapts to the reader's preferences, specialty, and habits in real time. Each interaction sharpens its understanding of how that radiologist works, creating an environment that feels faster and more human.
Radiology has accepted incremental progress for too long. But the problems we face, from fragmented data to clinician fatigue, are not incremental; they're structural. We cannot solve architectural failures with surface-level fixes. The next generation of radiology infrastructure must be unified, context-aware, and intelligent from the ground up. It must treat imaging, data, and reporting as one continuous process: a direct line from pixel to report.
— Cameron Andrews is the founder of Sirona. He studied biology and computer science at Stanford, doing his graduate work through Stanford’s Center for Artificial Intelligence in Medicine & Imaging. In addition, he spent three years working at Lux Capital, where he focused on evaluating companies at the intersection of AI and medicine.