How root cause analysis can improve patient safety

Hospitals face so many urgent tasks in safety – computerize, promote teamwork, implement evidence-based safety practices, discover unsafe conditions – that it’s hard to know where to start. If you’re struggling, I recommend that you put your Root Cause Analysis enterprise on steroids. This is what we did at UCSF Medical Center, and it was the most important change we’ve made in our safety journey. Here’s the story, a case of function following form:

RCA, like many of our approaches to patient safety, is familiar to other industries (such as engineering and aviation) but, until recently, alien to medicine. It involves the dissection of an error or a near-miss with an eye toward getting at “root causes” – the underlying system flaws (“latent conditions”) that set up the individual caregivers to cause harm. It doesn’t deny the possibility of human error, but recognizes that while a knee-jerk response that focuses on the person with the smoking gun may be satisfying, it leaves the underlying flaws in the system unaddressed – a setup for a repeat performance.

I first became aware of the power of the RCA when we launched our Quality Grand Rounds series in the Annals of Internal Medicine in 2002. Our first case in the series involved a breathtaking error: a patient received an invasive cardiac procedure intended for another patient with a similar last name. In their masterful analysis of this case, Drs. Mark Chassin (now President of the Joint Commission) and Elise Becher highlighted 17 individual problems that set up the hospital for this mishap. They wrote:

Even though many individuals made errors, none was egregious or causative by itself. Instead, the systemic problems of poor communication, dysfunctional teams, and the absence of meticulously designed and implemented identity verification procedures permitted these errors to do harm. Just as we screen asymptomatic patients for hypertension, all health care systems should assess how well communication, teamwork, and protocols are functioning. Just as treating hypertension effectively prevents strokes, addressing underlying system flaws will greatly increase the likelihood that the inevitable errors of individuals will be intercepted and prevented from causing harm.

I immediately became an RCA fan. But, in our first several years of conducting them, something seemed amiss – the exercise was not producing the impact we expected.

But why? After all, we were doing what everybody else was (and mostly is) doing. When a bad error occurred, we threw together a group of experts and senior administrators to analyze it. The group was predominantly members of our Risk Management Committee, which cast a legalistic shadow over the proceedings. We then identified the front-line participants in the case, and added them to the invite list. Needless to say, trying to get this gaggle of individuals (committee members and caregivers) in the same room was challenging, particularly when the agenda (for the caregivers) involved a painful rehash of a terrible error. (Think about how enthusiastic you are when scheduling an elective root canal and you’ll get the picture.)

Once we found a workable time slot, often more than a month after the event, the conduct of the RCA was a bit haphazard, with little systematic development of an action plan or follow-up. Although some meaningful changes flowed from these exercises, they were clearly not reaching their potential.

As so often happens in safety and quality, it takes a combination of a strong leader and external pressures to overcome inertia. I’ve introduced you previously to our Chief Medical Officer at the time, Ernie Ring, a passionate guy who wasn’t afraid to break an egg or two. Ernie was also uncomfortable with our RCA process. In mid-2007, the State of California implemented a requirement that hospitals report every serious adverse event (basically the NQF “Never Events” list) to state authorities within 30 days. Ernie sprang into action.

“I don’t think our RCA process gets us where we need to be,” I recall him saying. “We need to approach it differently.” Ernie’s a smart guy, but I’m not sure even he recognized how transformative his solution would become.

First, Ernie announced that the RCA committee (euphemistically called the “Clinical Events Oversight Committee”) would have a standing two-hour meeting each week, every Wednesday from 9-11. “Are you crazy, Ernie? Who has the time for that?” skeptics asked.

OK, I asked.

I was wrong. The decision to have a standing weekly RCA committee meeting was essential. After describing what happens at the meeting, I’ll tell you why.

Now, when we learn of an error (through informal channels or via our incident reporting system), our patient safety manager Kathy Radics does some preliminary investigation, putting together the basics of “what happened” and “who was involved.” On Thursday or Friday of the same week, an email goes out from the Chief Medical Officer’s office to all the involved participants – ranging from a ward clerk to the chair of surgery. Basically, it says that we’ll be discussing your case next Wednesday at 9. Be there. Aloha.

This is a big deal – invariably, some of the docs have surgeries or clinic visits scheduled. But the organization is signaling something vital: learning from our mistakes takes priority over virtually everything else. Yes, it may inconvenience a patient or two, but I’d argue that this prioritization is more “patient-centric” than analyzing the error after all the detailed memories of the incident have faded and dozens more patients have been subjected to unfixed system risks.

“But how do you know you’re going to have a terrible error every week?” you might logically ask. We don’t (and, in fact, we don’t have a terrible error every week, thank goodness). But this is an example of an “unpredictably predictable” phenomenon, and needs to be addressed accordingly.

By way of analogy, for the first 8 years of my hospitalist group, we had no formal plans to deal with maternity leaves, other than knowing that we needed to cover for them. So I’d get a call from a female faculty member, asking, “Hey Bob, can I talk to you for a minute this afternoon?” (I was generally the second person to know, after the spouse). Moments later, we’d begin informing a bunch of people that they were now unexpectedly scheduled for additional clinical service. Their joy over the blessed event was always tinged by the pain of the additional coverage obligation. About 3 years ago, it dawned on me (yes, I’m that thick) that, in a group of 50 young physicians (about 30 of whom are women), while one couldn’t predict which person would get pregnant and when, one could predict that 2-4 people would be pregnant every year. And so we developed a maternity leave “jeopardy” schedule, essentially hiring 1-2 additional FTEs and creating a clear and predictable coverage schedule for maternity leaves. When folks on “macro-jeopardy” are tapped for coverage, they are no longer surprised or disappointed. Everybody is happier.

Similarly, individual errors requiring RCAs are not predictable, but one can reliably predict that a 600-bed hospital will have 30-40 such errors yearly. Given this, systematizing the schedule to analyze them is essential. Knowing that the committee is available from 9-11am each Wednesday also lowers the threshold for employing the RCA technique for scary near-misses or other issues that might benefit from this kind of scrutiny. Yes, we cancel the first hour of our two-hour meeting (the hour we reserve to review new cases) every now and then… but not very often. Last year, we conducted 40 RCAs.

The committee is ably led by my hospitalist colleague and Associate CMO Adrienne Green, and includes many members of the “C-Suite” (COO, CIO, CMO, CNO), several senior faculty, a few experts in patient safety, and a couple of front-line nurses and docs. During that first hour (9-10am), all the involved caregivers are there as well. Adrienne begins the meeting with introductions, then lays out the plan and the purpose: this is a “no blame” and confidential forum; our goal is to fix system problems; we can only learn about them from front-line providers; and so on. The effect is as calming as it can be, given the meeting’s agenda.

Then we spend about 30 minutes hearing about what happened and dissecting the error, trying to understand the underlying systems factors. This is pretty fluid – although some organizations use a formal structure to ensure that no stone is left unturned, I personally find these things a bit too formulaic and prefer a more conversational forum, as long as the members of the committee are well schooled in safety science and ultimately explore all the key issues.

After coming to a shared understanding of what happened and why, we begin brainstorming fixes. We cover the usual suspects: Was this a staffing issue? Were there cultural issues such as poor communication or steep hierarchies? Does a process need to be standardized or simplified? Would a checklist help? Is there a feasible IT solution? Over the years, we have learned to be skeptical of fixes that involve “increased awareness” or “education”, instinctively favoring solutions that change the process of care or build in forcing functions. We also know that it is easy to suggest solutions while sipping a café mocha in a conference room; the true test will be whether they actually work at the point of care. After we’ve hammered out a tentative plan, Adrienne will often ask the caregivers, “Tell us why this solution, which seems right to us, won’t work in your world.” Better to hear the answer there than inflict the wrong fix.

The last 10 minutes of the hour are devoted to creating a formal action plan. What are we going to do? Who is going to do it? Do they need any new resources? (Last year, we implemented 144 action items from our 40 reviews.) A few specifics about the case are also addressed: Has the patient been informed about the error? (If not, we create a plan to do so.) Should we waive the patient’s bill? (If it was truly our fault, we do.) Does this error need to be reported to the state? (Even when it is a close call, we do that too.)

The accountable individuals are then charged with completing any further investigation and implementing the action plan, assisted by Dr. Green, Ms. Radics, and other members of our quality or risk management departments. Although they are clearly motivated by ethics and professionalism, they have another incentive: the responsible individual is scheduled to return to the committee in 1-2 months to report what actually happened.

The second hour of the committee meeting (from 10-11am) is devoted to hearing these reports, usually in 15-minute intervals. The committee members are all still there, so that the same individuals who heard the RCA of the dramatic error a few weeks earlier also review the follow-up report. When a report comes back with, “we stalled out because we encountered political problem X” or “we need to buy a $50,000 piece of equipment,” the senior administrators, who control the resources, are there – and, since they heard the case, they are much more inclined to make hard choices than they would have been had they read a bloodless report prepared by an underling. The RCA committee accepts some follow-up reports, while others are deemed incomplete (with plans for a return report after new steps are taken).

Why is this process transformative? First, the most senior leaders of the organization are devoting two hours a week to error analysis and problem solving. Their time and energy are our most precious, and scarce, resource. Moreover, my concern that busy people wouldn’t be able to find this much time has been allayed – compared with all of the stultifying meetings those of us with administrative responsibilities attend, this meeting has a unique sense of drama, immediacy, and results-orientation. People, even extraordinarily busy ones, make the effort to be there.

Second, we, the RCA committee members, are improving with experience. We have learned how to function more effectively as a committee, which solutions tend to work (and which don’t), and how to sniff out themes. Today, we might hear a terrible ER error and say, “you know, the underlying issues in that case are very similar to the ones from that case last month from the endoscopy suite.” And that changes the priority we give to the problem and its solutions.

The meeting also more broadly benefits the organization’s safety culture. Every week, nearly a dozen front-line caregivers are brought face-to-face with senior hospital leaders to discuss a terribly upsetting error. The providers wouldn’t be human if they weren’t anxious about this process, but, in my experience, most leave the meeting feeling that a) the organization is serious about safety; b) they now understand a bit more about that “systems thinking” thing; and c) the senior leaders aren’t so bad after all. The effect is to improve the safety culture and problem-solving capacity of our institution, a little bit each week. The participants become mini-ambassadors for safety when they return to their clinical homes.

The process isn’t perfect. Some organizations have included on their committees patient representatives or non-clinical experts in areas like human factors, and we haven’t done that. I recently learned of an organization that conducts some of its RCAs in the actual site of the error (the OR, the L&D suite), finding that an appreciation of the physical setting is often crucial to understanding the problem. We sometimes fix an error at its source but lack the resources to generalize the fix to similar clinical areas that may have similar risks.

But I’m quite confident that our RCA transformation has been the most powerful thing we’ve done to improve safety. While some safety fixes (such as computerization and incident reporting systems) have been less useful than I would have expected, our RCA process has been more effective – because it really is much more than an error-analysis exercise. It is an organizational-messaging, culture-changing, capacity-building process.

That sure seems like it’s worth two hours a week.

Bob Wachter is chair, American Board of Internal Medicine and professor of medicine, University of California, San Francisco. He coined the term “hospitalist” and is one of the nation’s leading experts in health care quality and patient safety. He is author of Understanding Patient Safety, Second Edition, and blogs at Wachter’s World, where this post originally appeared.