By Beth W. Orenstein
Vol. 19 No. 5 P. 24
AI applications are poised to elevate CT workflow.
Most radiologists agree that artificial intelligence (AI) and machine learning will change their jobs significantly in the coming years. Few, if any, believe it will replace their expertise or diminish their value. Rather, most believe it will enable them to do their part in reading scans more expertly and efficiently, which ultimately will lead to better patient care. They are already seeing that scenario play out with AI applications for CT.
"Fundamentally, what artificial intelligence is doing for imaging, including CT, is taking some of these algorithms we have today for computer-aided detection and computer-aided diagnosis and making them better, smarter, and, potentially, continuously learning," says Lawrence Tanenbaum, MD, FACR, vice president, medical director, and national director of CT, MR, and advanced imaging for RadNet, Inc, the largest provider of outpatient imaging in the United States.
When it comes to CT, experts see a number of potential benefits to applying AI. One is using AI to triage CT scans with urgent findings for immediate attention. A number of such products are at or close to market. Other possible roles for AI in CT include helping with accurate and timely diagnoses, mining imaging reports using natural language processing, and reducing radiation dose—a holy grail of ionizing imaging studies.
vRad, a MEDNAX, Inc, company and a teleradiology services and telemedicine practice, manages the distribution of 16,000 patient imaging studies every 24 hours, 95% of which are emergent. As part of ongoing testing, vRad has run an AI program in the background that designated studies as having life-threatening emergent abnormalities. The data were then used to expedite the distribution and reading of those studies to its radiologists. Benjamin W. Strong, MD, CMO of vRad, says being able to run AI in the background on its CT scans has been "an absolute boon for us." The early algorithms that "we have tested have been good enough to use in that way," he says. "We find they have fair sensitivity and good specificity."
Strong says variability in sensitivity and specificity depends on the entity the algorithm is looking at, "but that's generally how they perform. From a study triage standpoint, algorithms can catch most of a given abnormality and, when they do, they are usually right." That makes AI "very useful and patient friendly," he says.
Tanenbaum agrees. In busy inpatient practices, radiologists can fall behind on reading studies, or finish the day's outpatient interpretations while exams with unexpected urgent findings sit unread. If AI elevates CT studies with possible urgent findings to the front of the queue, "That will have a big impact," he says.
While Strong is a fan of AI programs for CT triage, he's not ready to concede that the early algorithms are much help with actual diagnoses. What he's seen in the last year or so "is not yet enough for diagnostic accuracy," he says. Strong has found AI to be better at identifying findings on CT scans that present extremes in density, such as an intracranial hemorrhage or pneumothorax.
"Abnormalities that are more complex, such as vessel narrowing and occlusions, seem to be a bit trickier," Strong says.
In testing the AI triage, vRad found that it did not unnecessarily expedite too many head CTs "and clog our system," Strong says. AI has been able to reduce a CT's time to reading to about 17 minutes, which could make a significant difference in administering life-saving treatment to patients with stroke or head trauma, Strong says. Also, he notes, should the AI solution unnecessarily expedite a patient's scan, it really isn't harmful.
vRad's radiologists aren't aware that an AI algorithm flagged the scan that's next up on their worklist. "Because our worklist is so dynamic, they really don't see that it was expedited," Strong says. "They just see it as the study on their worklist that the vRad platform determined algorithmically is the best for them to read next." The system is designed that way on purpose; the AI score could prejudice the readers, and vRad doesn't want that to happen, Strong says.
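A worklist of this kind can be thought of as a priority queue in which an AI flag boosts a study's rank while remaining invisible to the reader. The sketch below illustrates the idea only; the class, field names, and weights are hypothetical and not vRad's actual platform.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Study:
    priority: float
    accession: str = field(compare=False)  # not used for ordering

class Worklist:
    """Priority worklist: AI-flagged studies jump ahead, but readers
    only ever see the next study, never the reason it was expedited."""
    def __init__(self):
        self._heap = []

    def add(self, accession, wait_minutes, ai_flagged=False):
        # Lower value = read sooner. An AI flag outweighs ordinary
        # waiting time; the weights here are purely illustrative.
        priority = -wait_minutes - (1000 if ai_flagged else 0)
        heapq.heappush(self._heap, Study(priority, accession))

    def next_study(self):
        # The reader sees only the accession number, not the AI score.
        return heapq.heappop(self._heap).accession

wl = Worklist()
wl.add("CT-001", wait_minutes=40)
wl.add("CT-002", wait_minutes=5, ai_flagged=True)  # possible hemorrhage
wl.add("CT-003", wait_minutes=25)
print(wl.next_study())  # CT-002 (the flagged study is served first)
```

Keeping the flag out of the reader-facing view, as in `next_study` above, is what prevents the AI score from prejudicing the interpretation.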
Radiology is a very "perceptive profession," Strong explains. "Radiologists need to use their expertise and conclude for themselves what they are or are not seeing on the images," he says. As far as he's concerned, at this point, AI is "just another layer of decision making in our algorithm tree."
Tanenbaum, too, has no doubt that if an algorithm recognizes something and the radiologists know it, it could influence their interpretation—in negative as well as positive ways. At present, AI algorithms may be better accepted as a double-check on the radiologists' read, he says.
Laying the Foundation
Strong remains optimistic that, as it gets better and smarter, AI software will play a bigger role in diagnostics with CT, but he doesn't believe the state of the technology is there yet. "There's a lot of talk of computers taking over and going off on their own, but I can tell you from my experience in imaging, that's a long way off," Strong says. Currently, AI is very good at answering binary questions. "Ultimately, the accumulation of enough of those binary questions will amount to a pretty comprehensive diagnostic process, but it has to be built with those small building blocks," he adds.
Tanenbaum, who serves as an advisor to Enlitic, a San Francisco-based AI software company, also sees a role for AI in CT scan acquisition. "Whenever you scan a patient, you want to make sure the area you're interested in is on the first and last slice and that you don't over- or undercover," he says. AI will be able to adjust coverage so that the scans show just the parts of the body that are necessary, and no more. Even if the concern over the dose associated with extra slices is miniscule, extra slices "offer zero value," he says. "An AI-driven scanner can improve quality and consistency."
At the very least, Tanenbaum sees machine learning algorithms eventually eliminating the need for scout scans—preliminary images obtained prior to performing the major portion of a particular study to be sure the coverage is appropriate. Tanenbaum is also optimistic that machine learning products working their way to market will bring greater dose reduction than what is provided by today's iterative reconstruction techniques.
Strong sees yet another role for AI, and that's in natural language processing. Imagine how valuable it would be, he says, if a radiologist wanted to look for cases of pulmonary embolism. A search for just the words "pulmonary embolism" would return the true positives but also reports that say "no evidence of pulmonary embolism" or "checked for pulmonary embolism"; it wouldn't narrow the results to only those cases in which the patient actually had a pulmonary embolism. AI can be used to "pull specific image sets" that can be used in radiologists' reporting, Strong says.
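The pitfall Strong describes is the classic negation problem in report mining, and rule-based approaches such as NegEx address it by checking for negation cues near the search term. A toy sketch, with a deliberately tiny cue list that is far from exhaustive:

```python
import re

# A handful of negation cues; a real system (e.g., NegEx) uses many more.
NEGATION_CUES = r"\b(no evidence of|negative for|without|ruled out|checked for)\b"

def mentions_positive(report: str, term: str) -> bool:
    """True only if `term` appears without a preceding negation cue
    in the same sentence: a toy negation-aware report search."""
    for sentence in re.split(r"[.;\n]", report.lower()):
        if term.lower() in sentence:
            # Look for a cue anywhere before the term in this sentence.
            prefix = sentence.split(term.lower())[0]
            if not re.search(NEGATION_CUES, prefix):
                return True
    return False

reports = [
    "Findings consistent with acute pulmonary embolism.",
    "No evidence of pulmonary embolism.",
    "CT checked for pulmonary embolism; study negative.",
]
hits = [r for r in reports if mentions_positive(r, "pulmonary embolism")]
print(hits)  # only the first report survives the negation filter
```

A plain keyword search would match all three reports; the negation filter keeps only the genuinely positive case.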
The FDA has recently designated AI tools as Class II medical devices, which face a lower barrier to approval than Class III devices. "I expect that in the near term we will see a lot more tools and algorithms available," Tanenbaum says. "We're already starting to see a trickle, but I won't be surprised by a flood in the near term."
Here's a look at some of the AI solutions for CT that are being tested or have recently come to market.
An Israeli startup, Aidoc unveiled its full-body AI solution at the European Congress of Radiology in Vienna at the end of February. Aidoc's AI-powered solution is integrated into radiologists' daily workflow to support their reading scans of areas such as the head, cervical spine, chest, and abdomen.
The focus of its early products is detection, especially acute findings, says Tom Valent, vice president of business development for Aidoc. "We train our algorithms to detect abnormalities that are typically acute and, in 95% of the cases, that's a strong sign that the case should be looked at fairly soon." Radiologists can use the results to prioritize their worklists and read those cases with possible acute findings sooner than they might otherwise, Valent says.
Valent says feedback from radiologists at about 15 different clinical sites who have tested the solution has been positive. "What we're seeing and hearing is that radiologists like that the algorithms point them to key slices and to which ones they should look at early." One of the value propositions of AI is speed, "And by running the AI automatically, right after the scan was performed, we begin adding value right at the worklist, which is generally the first intersection of the radiologist and the case," he says.
The AI solutions are also relevant for the read itself, Valent says. "With the extended preview and key slices we provide, we facilitate the interpretation of the scan." Aidoc's solution works with all the major PACS vendors so it can be added to radiologists' existing software and workflow without needing to change anything, Valent says. "There are no redundant clicks [and] no real estate required."
Matrix Analytics, of Denver, has a software suite that uses AI and CT to identify whether pulmonary nodules are more likely to be benign or malignant. "Without deep learning and AI, radiologists do a pretty good job of identifying patients with pulmonary nodules," says Christine Spraker, Matrix's president. However, she adds, care breaks down once nodules are identified. "Seventy percent of the time, those patients are not managed according to what the clinical guidelines say."
Matrix's AI tool uses machine automation to determine which patients are likely to have lung cancer and need treatment and which have benign or harmless nodules that can be watched carefully rather than treated. "We've combined the modalities of deep learning and machine automation in order to accomplish this task," says Akrum Al-Zubaidi, DO, FCCP, interventional pulmonologist and Matrix's founder and CMO. He says the software is most useful for nodules that are indeterminate, to help avoid PET scans, biopsies, and possibly surgeries that may not be necessary.
"Our deep learning tool is really focused on getting patients in the right bucket of risk stratification and saving unnecessary procedures," Al-Zubaidi says. "We also make sure that patients who have a high likelihood of cancer actually get the necessary procedures done."
The tool is based on the scans of more than 50,000 patients and has been externally validated at the Cleveland Clinic with more than 400 patients. The software will be continually updated and trained as more data are added, Spraker says, and the company will pursue FDA approval for additional product features.
The tool also is designed to help with patient management. "Ninety percent of patients who have a lung cancer CT scan only require a 12-month follow-up," Al-Zubaidi says. However, hospital staff are spending one to three hours manually inputting data and navigating that patient's care. "With our system, we can take it down to five to 10 minutes per year on those patients with minimal risk," he says. "That means staff has much more time to focus on the 10% of patients who are at risk and need more complex care."
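The management workflow described above amounts to mapping a model's malignancy estimate into a risk bucket with an attached follow-up plan. The sketch below is purely illustrative; the thresholds and plans are invented for the example and are not Matrix's model or any clinical guideline.

```python
def risk_bucket(malignancy_prob: float) -> dict:
    """Map a model's malignancy probability to a follow-up plan.
    Cutoffs here are illustrative only; real thresholds come from
    validated clinical guidelines, not from this sketch."""
    if malignancy_prob < 0.05:
        return {"bucket": "low", "plan": "12-month follow-up CT"}
    if malignancy_prob < 0.65:
        return {"bucket": "indeterminate", "plan": "3-month follow-up CT"}
    return {"bucket": "high", "plan": "refer for PET/biopsy workup"}

print(risk_bucket(0.02)["plan"])  # 12-month follow-up CT
```

Automating this mapping, and the associated data entry, is where the quoted time savings for low-risk patients would come from.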
In February, the FDA cleared a software application from San Francisco-based Viz.ai that uses AI to analyze CT angiography images for indicators associated with a stroke, such as a large vessel occlusion. It sits in the background, says Chris Mansi, MD, CEO and cofounder of Viz.ai. If it recognizes signs of a stroke, the application sends a text notification to a neurovascular specialist so that the patient can be treated as quickly as possible. When it comes to stroke, time is of the essence, as faster treatment may lessen the extent or progression of the stroke, Mansi says. He explains that the specialist can receive the notification on his or her smartphone or tablet and can see the images there but still must review the images on a clinical workstation. The software is meant to help triage stroke patients and has not been approved for diagnosis. Viz.ai is researching and developing AI software for detecting other conditions as well, including brain hemorrhages and cardiovascular issues.
"We're focused on AI, and our job is to make our doctors who treat and diagnose patients more efficient, thus allowing them to help their patients the best they can," Mansi says.
Enlitic is developing a deep learning tool for thoracic CT. It is suitable for the detection and characterization of lung nodules and for longitudinal monitoring of findings, among other uses. "The early lung cancer detection model should be ready by the fall," according to Sally Daub, Enlitic's CEO. "We're currently testing the model with some of our US health care partners and research institutions." Enlitic is also developing an algorithm for helping to detect brain hemorrhages on CT. The goal with these AI tools, as with others, Daub says, is to add value to the radiologist's workflow, making follow-up faster and drawing attention to findings that need more urgent attention.
— Beth W. Orenstein of Northampton, Pennsylvania, is a freelance medical writer and regular contributor to Radiology Today.