Precision Prescription: Using AI to Help Identify COVID-19 in the Lung
By Kathy Hardy
Radiology Today
Vol. 22 No. 1 P. 10

Emergency department (ED) personnel know what to look for when triaging trauma victims, with tools available to help quickly detect broken bones and internal bleeding. The diagnostic path forward isn’t as cut-and-dried with patients potentially suffering from a novel coronavirus. Introducing AI to the ED tool set of chest X-rays and CT scans may help expedite the identification of COVID-19 and differentiate its effects from other lung conditions, with the goal of properly prioritizing a patient’s next course of treatment.

“Health care providers often know the patient has COVID-19 but may need help assessing the severity of their condition,” says Ohad Arazi, CEO of Zebra Medical Vision (Zebra-Med). “Imaging AI allows you to see the progress of pneumonia in the lung and helps doctors make the decision of whether or not that patient needs to go to the ICU, needs a ventilator, or their condition is not as urgent. It helps hospitals prioritize COVID-19 patients.”

Medical professionals note that there are many variables when it comes to triaging a COVID-19 patient—symptoms, no symptoms, positive or negative tests. Factoring in any preexisting lung conditions, such as asthma or COPD, presents another challenge, particularly if a patient is unable to communicate and their family is unable to enter the ED to attest to their medical history or advocate for their care.

Chest X-rays and CTs are the modalities of choice in identifying COVID-19 pathologies in the lung. They are useful in the initial assessment of suspected pneumonia and comorbidities, including cardiovascular disease. Repeated chest X-rays can be used to assess disease progression, with chest CT providing a more detailed evaluation of potential lung complications. Therefore, AI professionals are focusing their development efforts on solutions for use with images created with those modalities.

Developers at Zebra-Med recognized the need to quickly determine which patients should be hospitalized, as well as how best to allocate ED capacity, ICU beds, and ventilators, based on disease progression. They also understood that the process must fit within the existing workflow. While lab testing has served as the first line of diagnosis during the pandemic, not everyone has access to testing, and these tests are not foolproof. Lab tests also don’t address the state of disease progression or consider the presence of preexisting lung conditions. With that impetus, the company created an AI solution for COVID-19 that leverages Zebra-Med’s algorithm to automatically detect and quantify suspected COVID-19 findings on standard contrast and noncontrast chest CTs.

“As the pandemic started, we saw the challenges they were having in places like Italy and New York City, where doctors needed to make quick decisions to quantify suspected cases of COVID-19,” Arazi says. “Efforts at that time were focused on testing and developing a vaccine. There was little out there to help doctors understand disease progression and devise a care plan.”

Zebra-Med’s AI1 platform leverages the company’s patented Ground-Glass Opacities (GGO) algorithm. GGOs, hazy areas that do not obscure the underlying structures of the lung, are a dominant imaging feature of COVID-19 pneumonia and appear on both contrast and noncontrast chest CT. AI1 segments the suspected findings and reports the affected lung volume as a percentage of the total lung volume.
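The core output Arazi describes is a ratio of two segmented volumes. As a rough, hypothetical sketch of that calculation (not Zebra-Med's implementation; the mask format and voxel size are assumptions), the affected-lung percentage could be computed like this:

```python
import numpy as np

def ggo_burden_percent(lung_mask: np.ndarray, ggo_mask: np.ndarray,
                       voxel_volume_mm3: float = 1.0) -> float:
    """Percentage of lung volume flagged as ground-glass opacity.

    lung_mask and ggo_mask are boolean 3D arrays, one element per CT voxel;
    ggo_mask is assumed to come from a separate detection/segmentation step.
    """
    lung_voxels = np.count_nonzero(lung_mask)
    if lung_voxels == 0:
        raise ValueError("Lung mask is empty")
    # Only count GGO voxels that actually fall inside the lungs.
    ggo_voxels = np.count_nonzero(ggo_mask & lung_mask)
    lung_volume = lung_voxels * voxel_volume_mm3
    ggo_volume = ggo_voxels * voxel_volume_mm3
    return 100.0 * ggo_volume / lung_volume
```

In practice, the difficult part is producing the GGO mask itself, which is what the trained detection algorithm supplies; the lung-burden percentage is then simple arithmetic.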

Algorithm Education
To train their AI model, Arazi says Zebra-Med used clinical data gathered from existing medical images and later tuned and validated the algorithm with images of confirmed COVID-19 cases. They continue to gather data to “teach” the algorithm.

“This is not a static algorithm,” he says. “As they discover new aspects of COVID-19 and we gather more image data, we can train, tune, and validate the algorithm to adapt. This is not a ‘yes’ or ‘no’ test. You need to see all the variations in the patient’s lung disease. With COVID-19, the focus is on a patient’s acute needs. AI sheds light on what’s on the periphery of the spotlight and can deal with myriad conditions in the lung.”

Susan A. Wood, PhD, president and CEO of VIDA, agrees that keeping up with lung characteristics is an important part of getting ahead in identifying COVID-19. The Coralville, Iowa, company develops software for identifying chronic lung diseases. In 2020, it received 510(k) clearance from the FDA for enhancements to its VIDA Insights solution, including deep learning-based lung and lobe segmentation algorithms designed to improve the performance of lung imaging analysis and better calculate disease probability and progression.

According to Wood, the VIDA algorithm learns how to analyze CT lung scans by incorporating information from the company’s database of disease-specific evidence gathered through their involvement in clinical and academic trials. Patients with lung disease generally end up receiving CT scans, creating a significant collection of data.

“We are able to follow the patients involved in these trials and can see progression of disease in the lung,” she says. “Also, we can see how the lung responds to treatment. We use these data to validate efficacy and incorporate this information into our intelligence engine.”

Results generated by VIDA Insights are incorporated into the existing workflow, an important consideration when decisions about a patient's condition need to be made quickly. Looking for COVID-19 in the lung is challenging, but Wood notes that other lung conditions involve a lengthy diagnostic process with numerous, often overlapping tests, which can make it complicated to generate a diagnosis and treatment plan in a timely manner.

“We generate automated disease quantifications based on where issues are located in the lung,” she says. “We can aid differential diagnosis and phenotyping of disease, providing objective information with visualizations that are incorporated into the existing workflow of radiologists and clinicians.”

Part of triaging a COVID-19 patient involves finding other conditions in the lung, which could complicate treatment and flag the patient as higher risk. Arazi says that through partnerships with facilities currently treating large numbers of COVID-19 patients, such as Apollo Hospitals in India and Northwell Health System in New York, AI solutions can use such data to learn how to track the progression of the disease.

“Development of this AI tool has piqued interest in discovery of comorbidities such as COPD and emphysema,” Arazi says.

Visualization Applications
Another AI resource for use in the identification of COVID-19 images has its basis in industrial automation. Cognex Vision Systems, an international software company headquartered in Natick, Massachusetts, has developed VisionPro Deep Learning 1.0, applying its deep learning vision tools to the identification of COVID-19 on chest X-rays and chest CT scans. According to Joerg Vandenhirtz, PhD, an AI expert and Cognex’s senior business development manager for Life Sciences OEM Europe, the solution is a logical step, taking software that inspects vehicle parts and applying it to images of lungs.

“We are dealing with images,” he says. “The addition of deep learning vision to our machine vision product portfolio basically opened up new opportunities, such as using deep learning tools to look at medical images.”

In industrial settings, Cognex Deep Learning tools solve complex manufacturing applications that are too difficult for rule-based machine vision algorithms and too fast-moving for human visual inspection to handle reliably and consistently. VisionPro DL uses deep learning models to solve part location, assembly verification, defect detection, classification, and optical character recognition applications. When applying the tools to health care, Vandenhirtz says, the process is simply a matter of training a neural network on three classes of chest X-rays or CT scans: normal, pneumonia, and COVID-19.
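VisionPro DL's internals are proprietary, but the recipe Vandenhirtz describes maps onto standard three-class transfer learning. A minimal, hypothetical sketch in Python with Keras (the directory layout, image size, and choice of EfficientNetB0 backbone are assumptions, not Cognex's implementation):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

CLASSES = ["normal", "pneumonia", "covid19"]  # the three classes described above

# Assumes chest X-rays are sorted into train/<class-name>/ subfolders.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "train", image_size=(224, 224), batch_size=32,
    label_mode="int", class_names=CLASSES)

# Pretrained backbone (EfficientNetB0 accepts raw 0-255 pixel values)
# with a small classification head trained on top.
backbone = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=(224, 224, 3))
backbone.trainable = False  # train only the new head at first

model = models.Sequential([
    backbone,
    layers.Dropout(0.3),
    layers.Dense(len(CLASSES), activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
model.save("xray_classifier.keras")
```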

“Once the neural network is trained, you apply the trained neural network to a test image dataset that the neural network never saw before and measure how often it classifies those test images correctly,” he says.
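Continuing that hypothetical sketch, the hold-out test Vandenhirtz describes is inference on images the network never saw, followed by a standard accuracy summary (file paths and the saved model name are assumptions carried over from the training sketch above):

```python
import numpy as np
import tensorflow as tf
from sklearn.metrics import classification_report, confusion_matrix

CLASSES = ["normal", "pneumonia", "covid19"]

# Test images the network has never seen, in test/<class-name>/ subfolders.
test_ds = tf.keras.utils.image_dataset_from_directory(
    "test", image_size=(224, 224), batch_size=32,
    label_mode="int", class_names=CLASSES, shuffle=False)

model = tf.keras.models.load_model("xray_classifier.keras")

# Collect ground-truth labels and the model's predicted classes.
y_true = np.concatenate([y.numpy() for _, y in test_ds])
y_pred = np.argmax(model.predict(test_ds), axis=1)

print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred, target_names=CLASSES))
```

The classification report includes per-class precision, recall, and F-scores, which is how results like the ones discussed below are typically summarized.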

VisionPro DL’s graphical training interface simplifies the task of collecting images, training the neural network, and testing it on a variety of image sets. The overall process is broken down into smaller steps, making it easier to optimize and requiring fewer training images.

“One of the big advantages of our tools over open-source tools is that our tools are super easy to use,” he says. “If you’re able to use Microsoft Office, you can use our deep learning tools. On the other hand, you have to be proficient in scripted computer languages like Python to be able to use open-source tools.”

Vandenhirtz sees the introduction of tools such as VisionPro DL to the marketplace as a way of democratizing AI, by putting AI tools in the hands of all radiologists, not just those with high-level technology experience.

“The intellectual property on how to annotate medical images is owned by the radiologists who are the subject matter experts on how to look at those medical images,” he says. “However, radiologists usually do not have a big software engineering background. Most don’t know anything about Python. So, with open-source AI tools, radiologists always need to speak to a deep learning software expert to come up with AI solutions. With our easy-to-use deep learning tools, every radiologist is able to train a neural network on their own image datasets with their own ground truth, just the same way they would train a new team member on how to look at those images.”

Accurate Vision
Vandenhirtz says users will see improved results with VisionPro DL compared with open-source tools. One metric for measuring accuracy is the F-score, the harmonic mean of a model's precision and recall, which summarizes how well its correct predictions balance against its false ones. Vandenhirtz is an author on a recent study on the identification of COVID-19 from chest X-rays using deep learning, comparing VisionPro DL with state-of-the-art, open-source neural network architectures such as ResNet, DenseNet, Inception, and Xception. VisionPro DL was tested in two settings: first by selecting the entire image as the region of interest and second by segmenting the lungs in a first step and then classifying only the segmented lungs. VisionPro DL’s F-score for the identification of COVID-19 on the entire image was 0.96; the F-score for the segmented lungs was 0.97.
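The second setting, segmenting the lungs before classifying, is a common preprocessing step: masking out everything outside the lung fields removes text markers, tubes, and shoulders that a classifier might otherwise latch onto. A rough, hypothetical sketch of the idea (the segmentation model, its output shape, and the file names are assumptions, not the study's code):

```python
import numpy as np
import tensorflow as tf

# Hypothetical models: a lung-segmentation network and the 3-class classifier.
seg_model = tf.keras.models.load_model("lung_segmenter.keras")
clf_model = tf.keras.models.load_model("xray_classifier.keras")

def classify_segmented(xray: np.ndarray) -> np.ndarray:
    """Mask everything outside the lungs, then classify the masked image.

    xray: (224, 224, 3) array in the value range the models were trained on.
    """
    batch = xray[np.newaxis, ...]
    # Assumed segmentation output shape: (1, 224, 224, 1) lung probabilities.
    lung_prob = seg_model.predict(batch)[0, ..., 0]
    lung_mask = (lung_prob > 0.5).astype(xray.dtype)
    masked = batch * lung_mask[np.newaxis, ..., np.newaxis]
    return clf_model.predict(masked)[0]  # class probabilities
```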

“We were surprised to learn that it seems easy for the trained software to differentiate between the pathologies that show up on the X-ray images,” Vandenhirtz says. “It’s in many cases more difficult for humans to figure out differences in X-ray images with different pathologies. Five radiologists can give five different opinions on these kinds of images, but, with trained software, they have another data point to aid their analysis.”

In a similar study comparing VisionPro DL with state-of-the-art, open-source deep learning architectures for identifying COVID-19 on chest CT images, the F-score was 0.99, even when the training dataset was cut by more than half, from almost 62,000 images to just 26,000.

Building Knowledge
With the introduction of new tools that increase accessibility to enhanced lung images, education will play a role in improving detection of COVID-19 and preexisting lung diseases. Australian company DetectED-X is working with several AI companies to help clinicians learn to recognize the early signs of COVID-19 on lung CT scans.

“We’re looking at AI in a novel way,” says Patrick Brennan, PhD, a professor at the University of Sydney and the CEO and cofounder of DetectED-X. “Instead of focusing on diagnosis, we’re focused on education designed to improve detection.”

Called CovED, DetectED-X’s web-based system allows clinicians anywhere to access a virtual clinical environment, at no cost, to review deidentified images and learn more about how to diagnose COVID-19 in the lung. Some images show disease; others do not. Participants view and judge a set of lung CT cases in DICOM format and mark the cases using online scoring software. Upon completion, users receive a score based on the accuracy of their findings and a certificate for CME purposes.
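The article does not detail how CovED computes its score, but conceptually the step is a comparison of a reader's case-level calls against ground truth. A purely illustrative sketch (the case-ID and metric choices are assumptions, not DetectED-X's scoring system):

```python
def reader_score(reader_calls: dict[str, bool], ground_truth: dict[str, bool]) -> dict:
    """Summarize a reader's performance over a set of cases.

    Both dicts map case ID -> True if COVID-19 findings were marked/present.
    """
    tp = fp = tn = fn = 0
    for case_id, truth in ground_truth.items():
        call = reader_calls.get(case_id, False)
        if truth and call:
            tp += 1       # correctly marked positive
        elif truth and not call:
            fn += 1       # missed positive
        elif not truth and call:
            fp += 1       # false alarm
        else:
            tn += 1       # correctly left unmarked
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn) if (tp + fn) else None,
        "specificity": tn / (tn + fp) if (tn + fp) else None,
    }
```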

Users can return to CovED and complete the process with other images, giving them an opportunity to fine-tune their skills. Brennan says this is an important feature, given the novel aspect of COVID-19, as well as the different skill sets of clinicians looking to gain further expertise in diagnosing the presence of the virus in the lung.

“When you’re teaching someone how to drive a car, one can rapidly learn what someone knows and doesn’t know and adapt the training appropriately,” he says. “When learning to diagnose COVID-19, it is no different. The system needs to be adaptive to each individual’s needs. What you are good and bad at is different from what I am good and bad at. Our system learns rapidly where an individual weakness lies, based on their interactions during current and previous sessions. Then, rather than give more of the same training, our interactive and adaptive system uses AI to build training modules using images that meet individual needs.”
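DetectED-X has not published how its adaptive module selection works; purely as an illustration, weakness-weighted case selection could look something like this hypothetical sketch, where categories the reader misses most often are sampled more heavily:

```python
import random
from collections import defaultdict

def pick_next_cases(history, case_pool, n=10):
    """Choose the next training cases, oversampling categories the reader misses.

    history: list of (category, was_correct) tuples from previous sessions.
    case_pool: dict mapping category -> list of candidate case IDs.
    """
    # Error rate per category; unseen categories default to 0.5.
    seen, missed = defaultdict(int), defaultdict(int)
    for category, was_correct in history:
        seen[category] += 1
        missed[category] += 0 if was_correct else 1
    error_rate = {c: (missed[c] / seen[c]) if seen[c] else 0.5 for c in case_pool}

    # Sample categories in proportion to error rate, with a small floor
    # so no category is excluded entirely; duplicates are possible.
    categories = list(case_pool)
    weights = [error_rate[c] + 0.05 for c in categories]
    chosen = random.choices(categories, weights=weights, k=n)
    return [random.choice(case_pool[c]) for c in chosen]
```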

Arazi says the knowledge of lung conditions gained by using AI will continue to grow even after the COVID-19 pandemic. Knowledge gained now will help identify other acute and chronic conditions and stratify their risks.

“We could determine how far along the patient is in the disease process and whether their condition is mild, moderate, or severe,” he says.

In addition, at a time when personalized medicine is becoming more prevalent, precise algorithms used in the development of AI solutions could be used to optimize patient outcomes.

“If you want precision health care, you need to examine patients more precisely,” Wood says. “With AI analysis of lung scans, you can see early onset of disease and changes in disease over time. This is a much more individualized process.”

Kathy Hardy is a freelance writer based in Pottstown, Pennsylvania. She is a frequent contributor to Radiology Today.