Web Exclusive

Which AI Solution Is Right for You?

A Practical Roadmap for Evaluating AI Vendors

By Elad Walach

“I am bombarded” is the most common comment we hear from radiologists. “There are so many AI vendors out there, and I don’t know who to partner with.”

With the explosion of AI solutions, one of the biggest challenges for radiology groups is the increasing complexity of assessing a potential AI partner. Given the media hype, it is often difficult to separate the wheat from the chaff and determine which solutions are right for you.

Having been through this evaluation process with hundreds of medical centers, I would like to share best practices from some of the most rigorous evaluations we have undergone.

Phase 1: Define Key Parameters. When embarking on the AI journey, you have to carefully evaluate two major factors: the AI solution (obviously) and the partner behind it.

The following six parameters are critical for assessing any solution and the vendor behind it:

How Well Does It Work?
Of course, if you are going to adopt an AI solution, then you have to ensure that it is accurate and robust enough to deliver what you need. I’ll discuss exactly how to assess this later on.

Value
You should be able to define what problem the AI is solving for you and whether it can deliver the outcomes needed to impact care in a meaningful and cost-effective way. Although that may seem obvious, this is sometimes the last question we are asked. Make sure to define clear performance indicators for the outcomes you hope to achieve: Is it turnaround time? Length of stay? Efficiency?

Ease of Integration
You’re probably strapped for IT resources, making it all the more critical that the AI application is easy to implement. It’s not just about the solution architecture but also about the vendor’s ability to handle your workflow. You definitely want to make sure it’s not their first rodeo.

User Experience and Workflow Integration
You need to ensure that the solution can be easily taught to your user population. The more seamless the AI, the less intrusive it will be and the easier it will be to test and implement without extensive change management.

Professionalism
Does the vendor really understand your needs? Find out whether they are prepared to customize the solution and address your specific requirements. Learn early on whether they are responsive, professional, and flexible as partners.

Are They a Good Long-Term Partner?
Most medical centers form lasting partnerships with their vendors, so before embarking on an AI partnership, ensure that the company has robust funding sources, delivers a comprehensive suite of tools, and has a clear vision so it can scale up with you over time. This is especially crucial with the AI market moving as fast as it is. You want a company that evolves with you.

Phase 2: Establish the Foundation. After assessing your needs, you will need to start evaluating the partner based on your defined parameters. The following boxes must be ticked:

Define a Clear Champion
You want a leader who is passionate about technology, can overcome the internal hurdles, and has enough clout to convince decision makers. The goal is to identify a key stakeholder who can digest all of the parameters and execute a clear, robust decision-making process.

Meet With Clinical Stakeholders
One of the first meetings the champion should organize is a clinical meeting, focused on value and workflow, with all stakeholders. During this meeting, try to determine to what degree the partner understands your needs.

Meet With IT to Assess the Ease of Integration
Ensure that testing the solution will not be resource intensive, and address potential implementation concerns early.

Review the Literature Provided by the Vendors
The key to any literature review lies in making sure the solution has been validated in multiple settings and, specifically, in settings similar to yours—eg, if you’re a community hospital testing a solution in the emergency department, look for studies from similar institutions. Most importantly, ask the vendor for any outcomes/return on investment (ROI) literature that is available. Such literature is rare but extremely helpful for assessing the solution’s value.

Key Performance Indicator (KPI) Definition Meeting
At this stage, you may want another clinical meeting. Once you’ve identified the potential value and any research projects of interest, you may want to involve additional stakeholders. The goal of this meeting is to identify the KPIs and key metrics for success.

Additional Organizational Stakeholders
You may want to get buy-in from additional stakeholders, such as finance, if you’re requesting additional budget expenditures. It’s common here to ask the vendor for their ROI model. Typically, you also want to make sure legal and security teams have given their initial green lights.

If at least one vendor has ticked all of the above boxes, you can rest assured that you’re moving forward with the proper due diligence.

In many cases, you probably want to see whether the solution actually works on your own data. If so, we recommend moving to the next phase: advanced evaluation methods.

Phase 3: Advanced Evaluation. As the AI market is still young, sometimes the existing literature and references are not enough. Therefore, most AI companies offer advanced evaluation concepts. Below are the two most common types:

Optional: Retrospective Evaluation (RE)
In our early days, we performed numerous REs. In a typical RE, you send your data to the AI vendor, who then analyzes the cases and returns the results for your assessment. This phase is useful for adding credibility to new algorithms or solutions that have not been thoroughly vetted. However, in most cases, we don’t recommend RE. While it sounds simple, it requires a fair amount of legwork: you need to select the right cases, export them, anonymize them, and send them to the vendor. You will also need to review the results and assess the sensitivity and specificity of the solution (a straightforward calculation, sketched below). Given the required effort, chances are that the RE benchmark will be limited in size, and your confidence in the RE test will be correspondingly low.

Additionally, an RE makes it difficult to evaluate the impact of the system on the all-important workflow. If there is enough supporting literature, we recommend skipping this stage.
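
If you do run an RE, the accuracy review at the end comes down to a simple confusion-matrix calculation. Below is a minimal sketch in Python, assuming your review yields one row per case with your ground-truth read and the vendor’s output; the file name and column names are hypothetical placeholders, not any vendor’s actual format.

```python
# Illustrative sketch only: sensitivity/specificity from a retrospective
# evaluation. The CSV layout and column names are hypothetical.
import csv

def sensitivity_specificity(path):
    tp = fp = tn = fn = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            truth = row["radiologist_read"] == "positive"  # your ground truth
            flagged = row["ai_flag"] == "positive"         # vendor's output
            if truth and flagged:
                tp += 1
            elif truth:
                fn += 1
            elif flagged:
                fp += 1
            else:
                tn += 1
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity, tp + fp + tn + fn

if __name__ == "__main__":
    sens, spec, n = sensitivity_specificity("re_results.csv")
    print(f"n={n}  sensitivity={sens:.2f}  specificity={spec:.2f}")
```

Keep in mind that with a small benchmark, the uncertainty around these figures is wide, which is exactly why a limited RE yields limited confidence.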

Optional: The Pilot
Finally, the rubber meets the road in the pilot stage. The key is to establish pilot KPIs from the outset and ensure that they are measurable (a simple example of such a measurement appears below). To keep the evaluation process manageable, we recommend proceeding step by step; for example, you can start with a limited set of users or algorithms for the initial test. Typically, 60 to 90 days should be enough time to evaluate most products, but this will depend on the use case, patient mix, and results. Meticulous attention should be given to parameters such as algorithm performance, both technically and clinically; workflow impact; and overall satisfaction.
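
To make “measurable” concrete, here is a minimal sketch, assuming the pilot KPI is report turnaround time and that you can export exam and report timestamps for a baseline period and for the pilot period; the file names and column names are hypothetical placeholders.

```python
# Illustrative sketch only: compare median report turnaround time (minutes)
# before and during a pilot. File layout and column names are hypothetical.
import csv
from datetime import datetime
from statistics import median

def median_tat_minutes(path):
    turnaround_times = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            completed = datetime.fromisoformat(row["exam_completed"])
            finalized = datetime.fromisoformat(row["report_finalized"])
            turnaround_times.append((finalized - completed).total_seconds() / 60)
    return median(turnaround_times)

if __name__ == "__main__":
    baseline = median_tat_minutes("baseline_period.csv")
    pilot = median_tat_minutes("pilot_period.csv")
    # A negative change means turnaround time improved during the pilot.
    print(f"median TAT: baseline {baseline:.0f} min, pilot {pilot:.0f} min, "
          f"change {pilot - baseline:+.0f} min")
```

The same pattern applies to the other KPIs from your definition meeting, such as length of stay or your chosen efficiency measure: agree on the metric, the data source, and the comparison period before the pilot starts.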

Conclusion
By the time you have completed each of these steps and the pilot, you should have a clear view of the AI partner and their capabilities. It is indeed difficult to find the perfect AI partner, but this guide should provide you with a framework that will help you to efficiently fine-tune your search.

Do you use a different framework? Is there any test here that I missed? Do you disagree vehemently with the structure that is presented? I would love to hear your thoughts!

— Elad Walach is the founder and CEO of Aidoc.