Peerless Review
By Beth W. Orenstein
Radiology Today
Vol. 20 No. 6 P. 22

Peer learning is replacing peer review, with good reason.

For many years, radiology practices of all sizes have performed traditional score-based peer review to find errors and improve outcomes. The ACR’s RADPEER program, which randomly selects cases for review and has colleagues score them for accuracy from 1 (agreement) to 4 (disagreement), is the most commonly used peer review process for radiology in the United States. Few, however, find RADPEER worth their time or helpful in improving health care delivery. Radiologists find that score-based peer review not only adds little value but also can build resentment and fear in their departments.

“We review tens of thousands of imaging studies through peer review every year and have relatively little to show for it,” says Richard Sharpe Jr, MD, MBA, a breast imaging subspecialist, department value advisor, and chair of the interregional clinical practice group for radiology at Permanente Medicine in Colorado. “We really don’t have impact from that process and certainly not one that we can look to and say we made things better with it.”

Lane F. Donnelly, MD, is chief quality officer at the Lucile Packard Children’s Hospital at Stanford and Stanford Children’s Health and a professor of radiology and pediatrics as well as associate dean of maternal and child health for quality and safety at Stanford University School of Medicine. He says traditional peer review systems try to lump together three very different aims: identify system issues that can be improved, identify individual physician competency problems, and deal with individual physician behavioral issues.

“You need to have mechanisms that are put into place when any of those things comes into question, and that’s important,” Donnelly says. “But peer review has not worked well to deal with any of those issues. Most departments have been using peer review for that, for the last 15 years, and many have not addressed problems in any of those areas.”

A growing number of radiology departments across the country have found what they believe is a demonstrably better way: peer learning. Radiologists who have replaced peer review with peer learning find that their colleagues are much more engaged and interested in the process and believe it’s where their specialty is clearly headed. As a field, radiology has traditionally been slow to accept and make change, Donnelly says, “But peer learning seems to be catching on quite quickly.” Part of it, he says, is that “there is a lot of pent-up dissatisfaction with the way the old RADPEER system works. It required a lot of everyone’s time to do, and nobody saw results. But I think people have been pretty quick to adopt peer learning because it leads to education and improvement.”

More Like Case Conferences
Peer learning involves finding and discussing cases that either have errors or are “great calls” and spending the time to learn from them, rather than just scoring them. Peer learning conferences are similar to, but distinct from, interesting case conferences, says David Larson, MD, MBA, vice chair of education and clinical operations at Stanford University School of Medicine, who developed a model used at Cincinnati Children’s Hospital prior to moving to Stanford five years ago. “It’s a discussion of our misses and our great finds. If it’s a miss, we talk about what made this case difficult and what factors may have contributed to it. We discuss how a good radiologist could have potentially made the wrong call in the situation and what strategies might help prevent others from making the same mistake in the future.” The ultimate goal is for the radiologists who participate to learn from past performance—their own and others’—as well as both excellent and less than optimal performance, Larson says.

At Cincinnati Children’s, Larson ran roughly one peer learning conference every other month. “We could get through up to 30 cases in an hour,” he says. “To do that, some case discussions are going to be really short, such as a missed fracture on an X-ray, and have few talking points, whereas some cases require much more discussion, particularly examples of misinterpretation rather than missed findings.” Usually, he notes, great calls are from experienced radiologists who are very conscientious “and can differentiate normal from abnormal based on subtle findings.” Unlike in interesting case conferences, performance is the focus of the peer learning conference.

“It’s not so much about sharing interesting imaging pathology but discussing our performance in interpreting images, especially on tricky cases that may be at higher risk for error,” Larson says.

The peer learning model takes more time to manage than the peer review model, but if it is done well and radiologists find it valuable and believe they can learn from it, they are eager and willing to participate, Larson says. “If it results in improvements, then at least it is an investment in time well spent.” Larson cites Sharpe’s work at Kaiser Permanente in Denver. There, the peer learning program has gone over so well, he says, that some radiologists come in on their days off to participate. “They found it has that much value,” he says.

Permanente was one of the early adopters of peer learning, Sharpe says. Sharpe started the conversation with his team about four years ago. “Now, we organize a one-hour conference monthly and an online module for our radiologists,” he says. Radiologists who attend the in-person conference and complete the online module can get CME credits. The online module provides a self-assessment CME, “and that’s a lot harder of a CME to find. That’s a value that peer learning brings that peer review does not.”

Problematic Process
About three years ago, when Jennifer Broder, MD, became vice chair for radiology quality and safety at Lahey Hospital and Medical Center in Burlington, Massachusetts, the department was using RADPEER. “Every radiologist was randomly scored on two random cases a day,” she recalls. The computer selected the cases, but radiologists could close and skip them. “Reviewing that specific case wasn’t mandatory,” she says. Many people would opt out, especially if the case was too hard or if it had a mistake and they didn’t feel comfortable reviewing it.

“The process was problematic in many ways,” Broder says. Sometimes the cases were extremely remote—they were many years old—and sometimes the original reader was no longer with the practice. “Even if you found a mistake, you couldn’t do anything about what you found other than acknowledge it,” she says.

Peer review also had bias built into it, Broder says. “The radiologist reading the current study already had formed an opinion of the exam by the time they looked at the priors, and then they open the case up and decide what they think is going on. They already have more information than the prior radiologist had because of the current exam, so it’s biased in that they have more information and they also know who read the prior exam. There’s no way to anonymize it.”

The department’s RADPEER scores were 99% 1s, meaning that 99% of reviews were scored as concordant, with no disagreement, Broder says. “Our RADPEER scores were showing everyone was agreeing with everyone all the time, which we knew not to be the case in our practice,” she says. “We know people make mistakes and more than just 1% of the time. That’s what the literature shows.”

Broder was convinced that random reviews alone weren’t going to find all the mistakes and that radiologists clearly weren’t scoring honestly or engaging with the process. “We also found a wide variation in how people applied scores,” she says. “It was very inconsistent.”

She found that RADPEER was a significant point of anxiety for many members of the department. “If someone gave them a score other than a one, they were extremely worried about what was going to happen,” Broder says. “Would the results be sent to the credentialing office, the ACR? Who’s seeing these scores? Who’s using them? While we weren’t using the scores for anything, the anxiety about the scores was overwhelming.” The department was using RADPEER for compliance but not putting much emphasis on the results during the radiologists’ formal department reviews, she says.

A New Model
Broder attended a conference where she heard Larson talk about the peer learning model he helped develop as an alternative to the traditional format. She decided to take the idea back to Lahey and worked with colleagues to implement it. “It was clear to me that something needed to change, and David’s peer learning seemed a much better program,” she recalls.

Broder spoke with her chair and held a meeting with colleagues to discuss what they valued about peer review, what they wanted to keep, and what they wanted to throw out. “They definitely wanted to throw out judgment and wanted smaller peer review meetings and more meetings with their peers—subspecialists reviewing with other subspecialists, so neuroradiologists would review other neuroradiologists and musculoskeletal radiologists would review other [musculoskeletal] radiologists, etc.” They implemented a peer learning model, based on staff suggestions and Larson’s model, that they have found to be far more successful and more valuable than RADPEER, Broder says.

Under their peer learning model, the radiologists have the benefit of working in smaller groups of subspecialists, which feels much less intimidating and allows specialized conversations to take place, Broder says. Also, the cases are shown anonymously “so no one knows whose mistake it was.” Rather than having cases selected randomly, radiologists are encouraged to submit cases with discrepancies as well as those where the reader “made a great call,” Broder says.

Case submission is intentionally simple: with just a few clicks, radiologists can submit a case to their section head, who then chooses the best cases for conferences. “Much of the discussion is about teaching points, so there is a much higher yield,” Broder says. She has the numbers to prove it: In a 10-month period under the RADPEER model, Lahey’s radiology department reported 64 discrepancies. In a 10-month period after the switch to peer learning, the department logged 488 discrepancies and 396 great calls, with 157 cases submitted for further discussion. Since they started doing peer learning, Broder has found that about one-half of the cases submitted for discussion are great calls and one-half are discrepancies.

The department also moved from monthly whole-department, traditional morbidity and mortality conferences to quarterly, subspecialty-focused peer learning conferences so that radiologists could anonymously discuss cases in more depth. In all, 286 cases were shown in conferences under the peer learning model, compared with only 47 under the traditional morbidity and mortality model. The department shares its feedback through an internal communications system within its PACS.

Broder says the radiologists in her department clearly prefer the peer learning model. “My impression is they’re more comfortable with it. I haven’t done a formal poll to assess that, but one of the best compliments I got was from someone new to the practice,” she says. “The radiologist said, ‘I really like the way we do peer review here. I think we do it the right way,’ indicating they had come from a more judgmental place.”

When, due to a technical error, one of the radiologists wasn’t getting all of her peer learning messages, she approached Broder with her concerns. “I took her concerns as a sign that she found the feedback really valuable,” Broder says.

Lahey’s residents learn a great deal from the peer learning conferences and are being trained in an environment where mistakes are approached without judgment, Broder adds. Her only concern is that when the residents go elsewhere to practice and see that peer review is not done the same way everywhere, they will be “surprised and dismayed,” she says.

Growing Influence
Peer learning has yet to be officially recognized by certification and regulatory bodies, but Larson expects that it will be. “It is in the spirit of what these organizations have espoused,” he says. “There is some hesitation because peer learning doesn’t have the quantitative output often sought by administrators and outside entities, but the model supports learning and improvement over time, a benefit that scoring-based peer review does not.” Larson believes that, as evidence emerges, certifying bodies are starting to recognize the benefits and are warming up to the idea.

Donnelly says that when his department recently underwent a Joint Commission review, which requires that a system be in place to evaluate physicians, its staff drew attention to its peer learning model. “We highlighted our peer learning model as one of the things we’re working on, and they liked it,” he says.

Lahey had the same experience, according to Broder.

Donnelly adds that the punitive nature of peer review and its focus on evaluating physician competence stifle any movement toward improvement. For improvement to occur, the processes for assessing individual competence need to be kept separate from the evaluation of potential system issues, as they are in peer learning, he says.

If nothing else, Larson adds, peer learning builds trust among colleagues, whereas scoring-based peer review often has a toxic effect on a department’s culture. “From what I’ve seen, once radiologists have experienced the peer learning model, they support it and don’t want to go back,” Larson says. “I’ve never had anyone who, after switching from peer review to peer learning, said, ‘This is the wrong approach.’ Rather, it is almost always recognized as a worthwhile investment in radiologists’ skill development—an investment that increases the value of both the individual radiologists and the group as a whole.”

— Beth W. Orenstein of Northampton, Pennsylvania, is a freelance medical writer and regular contributor to Radiology Today.