May 4, 2009
Radiology Today Interview With David J. Marichal — Speech Recognition Seeks Understanding
Vol. 10 No. 9 P. 20
Mention speech recognition (SR) technology to a radiologist, and you may get the reply, “Thank you, but no thank you.” In many cases, even that reply presumes a measure of politeness. While SR can speed reporting and decrease costs, many radiologists find the systems cumbersome and disruptive to their workflow—and subsequently balk at using them.
David J. Marichal, RT(R)(CT)(MR)(ARRT), CIIP, is chief information officer and chief operating officer at Radiology & Imaging Specialists, a Lakeland, Fla.-based imaging enterprise. He spoke with Radiology Today about how Radiology & Imaging Specialists successfully implemented SR technology during the organization’s second attempt. He also shared views about how the still-evolving technology may shape the future of reporting.
Radiology Today (RT): Tell us about your facility and the types of studies done there.
David J. Marichal: We have 20 radiologists at four imaging centers. We also cover four hospitals in the area and read for a multispecialty clinic whose radiology department leases our PACS. So among all these facilities, roughly 150,000 outpatient procedures are put through our RIS/PACS each year.
Geographically, we cover about 40 miles. It’s a very diverse practice, with a women’s imaging center and a multimodality center that has MR, CT, x-ray, and ultrasound. One site also offers nuclear medicine exams.
RT: What has your experience been with SR for reporting, and what type of system did you use?
Marichal: Overall, when we began with SR, we used a Dragon-based system, but it never really took off. The doctors didn’t like it because they felt forced to change the way they did their work. Only one of my physicians would commit to using the technology.
So then we went live with GE’s Centricity Precision Reporting, which uses what it calls “speech understanding.” There was a much smoother transition—what I like to call minimal impact for the radiologists. That was a huge key to having a successful application.
RT: What is different about the new approach?
Marichal: The technology is very intuitive. The more it’s used, the more accurate it becomes. It is very different from just matching word strings. It understands the natural flow of language, and it doesn’t require extensive training. The doctors can continue to work in the same fashion that they are used to. Because speech understanding takes context into account, it leads to a very accurate, complete document.
And since it is embedded and interfaced into the RIS, we don’t have to worry about maintenance and monitoring or the system going down. Our radiologists don’t have to wait for the information to arrive from other systems to finalize their reports.
RT: Did the radiologists buy in this time around?
Marichal: As a reference point, when we went live with PACS several years ago, we knew it would be disruptive, but the radiologists were asking for the technology, so they didn’t mind. With SR, however, they aren’t beating down the door to bring it on board. So, I knew that the system had to be much less disruptive and took the approach that the system had to let radiologists dictate the way they wanted to.
It was a very simple conversion. You can use the system and not even realize it, and they didn’t have to train in the traditional sense. M*Modal’s speech understanding technology seems to be designed to include a transcriptionist or editor on the back end. [M*Modal developed the technology used in GE Healthcare’s Centricity Precision Reporting system.]
RT: How does that affect how the radiologists view the technology?
Marichal: When the radiologists finish dictating, they can edit the report and authenticate it at that time, and some do, depending on how many items the system flags for checking. Otherwise, they can send it to transcription.
When a radiologist launches a study, the speech understanding box pops up already integrated with the RIS, so there are no interface issues to deal with between PACS and dictation. The system can be set up according to each radiologist’s preferences. It will show the text after dictation, and they can choose to edit at that point if there isn’t too much highlighted in red for correction. Or, if there is too much red, they can send it along to an editor.
RT: So how many prefer to self-edit, and how many use the editors?
Marichal: Right now, I’d estimate that out of eight radiologists who cover any given shift, two or three of them self-edit 100%, and the rest use a combination of self-editing and sending to an editor. At least I know they are using it because I sometimes get calls with questions. With the prior SR system, they simply refused to use it because they felt it slowed them down by changing the way they did things.
RT: What have you seen as the biggest difference from what one may call traditional SR?
Marichal: What is great about having a hosted solution is that it continuously learns. We have radiologists reading diverse studies at different sites, so as the radiology lexicon evolves, the system is smart enough to add all the phrases and terminology continuously, so each dictation thereafter gets easier and easier for the system to interpret.
The radiologists who helped with the pre–“go-live” simulation found that when practicing with the technology, it was learning on M*Modal’s hosted server, and it kept the radiology terminology it learned in the system during the simulation and carried it over after we went live.
RT: Where is speech technology headed?
Marichal: I see a lot of potential for this technology in the future. I really think it will help us to achieve our SR goals, not only in improving turnaround time and realizing substantial time and cost savings, but especially in enabling us to incorporate critical findings into the patient care loop.
Because the system understands the radiology lexicon, it can recognize phrases in a report—for example, “hemorrhage”—and flag them as a critical finding at the top of the report. It will also flag the report to ask, “How do you want to deal with this?” and use the RIS to create an exam task, which helps ensure that the critical finding is acted on for the patient.
Speech understanding will help elevate dictation to the next level because the system recognizes words and phrases. The radiologist can create an exam task in the report that will electronically be assigned to a “follow-up” work list. Then ancillary staff can call the appropriate physician to coordinate the needed care.
The beauty is that it’s already embedded in the RIS, so the information is automatically there to close the patient care loop.
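The critical-findings workflow described above can be pictured as a simple two-step process: scan the dictated text for critical terms, then route a flagged report to a follow-up work list. The sketch below is purely illustrative—a minimal keyword-based approximation, not M*Modal’s or GE’s actual implementation—and the term list, function names, and task fields are all assumptions.

```python
# Illustrative sketch only: a minimal keyword-based critical-findings
# flagger and follow-up task builder. The term list and task fields
# are hypothetical, not the vendor's actual logic.

CRITICAL_TERMS = {"hemorrhage", "pneumothorax", "pulmonary embolism"}

def flag_critical_findings(report_text: str) -> list:
    """Return the critical terms found in a dictated report."""
    text = report_text.lower()
    return sorted(term for term in CRITICAL_TERMS if term in text)

def create_followup_task(accession: str, findings: list) -> dict:
    """Build a RIS-style exam task routed to a 'follow-up' work list."""
    return {
        "accession": accession,
        "worklist": "follow-up",
        "findings": findings,
        "action": "call referring physician",
    }

findings = flag_critical_findings(
    "IMPRESSION: Acute intracranial hemorrhage in the left frontal lobe."
)
if findings:
    task = create_followup_task("ACC12345", findings)
```

In a real deployment the matching would be far more sophisticated—context-aware rather than simple substring search—which is exactly the distinction Marichal draws between speech understanding and word-string matching.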
RT: Why is the technology to report critical findings so important?
Marichal: “Critical findings” is a new buzzword as agencies such as [The Joint Commission] focus on evidence-based medicine, so facilities want to have that capability. Many are wondering if legislation will force us to have critical findings identified on reports and what will happen if we don’t.
Having that capability is also important in maintaining a competitive edge because, let’s face it, we’re a service-based industry. We have to have an electronic means to keep everything straight, especially in light of critical findings. And we need our referring physicians to know that we are doing everything possible to get the most comprehensive information to them in a timely manner.
Doctors are under so much pressure day in and day out to keep track of so many things, with more to do and less time to do it. I don’t think in today’s climate they can do it without an electronic system. They ask themselves, “Can it help me take care of my patients properly?” and “How can I improve the quality of care of my patients?” Radiologists can help by delivering results faster and thereby enable physicians to make their patient care decisions in a more timely, knowledgeable fashion.
RT: Electronic medical record (EMR) implementation is in the future for healthcare facilities. How might speech understanding affect EMRs?
Marichal: At our facility, through our RIS, we are able to integrate with the next-generation EMR, with results flowing from our system into the EMR.
Generally, I see a lot of potential for speech understanding in regard to the EMR, in addition to populating the EMR fields. There are discrete elements you could mine for, but that hasn’t been fully explored yet.
When you watch a physician use an EMR, you see how cumbersome the point-and-click technology is. Any physician, whether charting manually or not, repeats certain elements again and again. If you had a system to capture that data contextually, it could really streamline their process.
Adding the critical findings could also be a huge benefit, with critical results being fed right into the EMR.
RT: What about the push toward standards for structured reporting? Can speech understanding move forward in that direction?
Marichal: It has the flexibility to allow you to incorporate structured reporting in a way that works for your facility. Even at its most basic, it can plug in information from the RIS, such as history and protocols—things that are repeated again and again—so that the physician no longer has to input them. And the way the system is structured, a physician can have information automatically placed into sections of a report. For example, you could automatically send the “Findings” or “Impression” sections of the dictation to the corresponding sections of the report in a structured way. In the system’s document mode, which is analogous to a template, structure is already inherent to the report. We are working gradually toward making information flow more easily to the right areas of the report, which will decrease the time our radiologists spend populating fields while ultimately improving patient care because of the information generated.
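The template idea Marichal describes—RIS-supplied sections merged with dictated sections in a fixed order—can be sketched in a few lines. This is a hypothetical illustration, not the Centricity system’s design; the field names, section order, and sample RIS data are all assumptions.

```python
# Illustrative sketch only: pre-populating repeated report sections
# (history, protocol) from hypothetical RIS data so the radiologist
# dictates only Findings and Impression. All names are assumptions.

RIS_DATA = {
    "history": "55-year-old with chronic headache.",
    "protocol": "MRI brain without contrast.",
}

SECTION_ORDER = ["HISTORY", "PROTOCOL", "FINDINGS", "IMPRESSION"]

def build_report(dictated: dict) -> str:
    """Merge RIS-supplied sections with dictated sections in order."""
    sections = {
        "HISTORY": RIS_DATA["history"],
        "PROTOCOL": RIS_DATA["protocol"],
    }
    # Dictated sections (e.g. findings, impression) fill the rest.
    sections.update({k.upper(): v for k, v in dictated.items()})
    return "\n".join(f"{name}: {sections[name]}"
                     for name in SECTION_ORDER if name in sections)

report = build_report({
    "findings": "No acute intracranial abnormality.",
    "impression": "Normal study.",
})
```

The design point is simply that the repeated, RIS-known content never has to be dictated at all, which is where the time savings come from.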
RT: How have your physicians reacted to the new system?
Marichal: Our physicians love it, and I know that because they are using it. … To “sell” this to our radiologists, we had to prove it was something different from the speech recognition software they had used in the past and that it wouldn’t slow them down.
One area that has dramatically improved is the number of STAT requests we get from the referring physicians. The multispecialty clinic has an imaging center that runs much like a hospital, and the doctors want STAT results so they can appropriately treat patients.
At one point, it seemed all the orders were for STAT reports because the physicians weren’t receiving reports as quickly as they needed. Now, the turnaround time is so quick that we rarely get STAT orders. Even the radiologists who refuse to self-edit the reports can turn them around very quickly. It works well because with speech understanding, the text doesn’t need much correcting, and the report automatically goes to the top of the editor’s list because there is only one system organizing the workload.
RT: Down the road, where do you see the technology taking you?
Marichal: We’ve only been at this since November 2008 and have seen huge improvements already, not only in the use of the technology but also in the ease with which the radiologists complete their reports. Of course, our turnaround time and costs have improved as well, although that wasn’t a huge issue for us.
And because the system is continuously learning, it will only get better. We are always pushing to get the latest improvements. There is a lot of untapped potential. We’ve come from something the radiologists wouldn’t accept to a system that they like, that improves patient care, and that increases referring physicians’ satisfaction. And we’re able to achieve that at a high level that can still be improved upon.