Professor James Reason is the intellectual father of the patient safety field. I remember reading his book Managing the Risks of Organizational Accidents in 1999 and having the same feeling that I had when I first donned eyeglasses: I saw my world anew, in sharper focus.
Reason’s “Swiss cheese” model, in particular – which holds that most errors in complex organizations are caused not so much by the inevitable human mistakes but rather by the organization’s incomplete layers of protection, which allow the errors to pass through on their way to causing terrible harm – was an epiphany. It is the fundamental mental model for patient safety, as central to our field as the double helix is to genetics.
Last month, I returned to England to give a couple of talks, one at a conference called “Risky Business,” the other at the UK’s National Patient Safety Congress. The former brings together many of the leading thinkers in a variety of risk-heavy fields, including aviation, nuclear power, space travel, the financial system … and healthcare. At the latter, I was asked to give the 2012 James Reason Lecture, a singular honor.
Attending both conferences, I spent a lot of the week thinking about Swiss cheese. Two particular case studies I heard stand out – one in which a series of human and technological errors combined to kill 26 men in a friendly fire accident, the other in which highly skilled pilots managed to safely land a mammoth aircraft after one of its engines exploded. Both were instructive, in different ways.
At the Risky Business conference, Scott Snook described a tragically famous friendly fire accident: the 1994 incident in which two American F-15 fighters shot down two U.S. Army UH-60 Blackhawk helicopters, killing all 26 men aboard, on a crystal-clear day in the “no fly zone” in Northern Iraq. Snook is a blunt, tough-looking, head-shaven, fast-talking New Jersey native. It’s not surprising to learn that he spent 22 years in the US Army, rising to the rank of Colonel before retiring in 2002. More surprising are his Harvard MBA, his Harvard PhD in Organization Behavior and his present job: Senior Lecturer at Harvard Business School.
The test of a conceptual framework is how often it holds true, and in every large-scale accident I’ve studied (Chernobyl, BP, 9/11, Tenerife, Bhopal, and innumerable medical errors), the Swiss cheese model fits. Such errors virtually always involve competent, caring people in hard jobs, trying to do their best with imperfect data and under various pressures, felled by glitchy pieces of technology, poor communication, and really bad karma. And there’s nearly always a cultural mindset that contributes to the whole mess, one that had existed for years and been tolerated because, well, “that’s just how things work around here.”
In the friendly fire case (I’ll make a long story relatively short; Snook wrote a whole book about the incident), two Air Force F-15s were flying “lazy eights” in the sky above Northern Iraq, as they’d been doing each day for a couple of years without excitement. Both planes carried only a single pilot and an awful lot of weaponry. There had been reports of Iraqi saber rattling in the preceding weeks, so the pilots were a bit on edge.
That morning, they had received their pre-flight briefing; it included a printout of all the coalition planes expected in the zone that day. Helicopters, particularly those from another branch of the Armed Services, were often omitted from the list, and the two Blackhawks that day were no exception.
The Blackhawks were tasked to shuttle some Kurdish civilians and service personnel from the US, Britain, Turkey, and France to a tribal meeting in the town of Zakhu, in the hills of Northern Iraq, inside the no-fly zone. They took off from an Army base in Turkey and, as always, flew low and fast; hugging the mountainous terrain was their best protection against being spotted by enemy radar.
Flying at 32,000 feet, high above both the F-15s and the Blackhawks, was an AWACS surveillance plane, a specially outfitted Boeing 707 that directs traffic over an entire region. One of the computer screens on the AWACS malfunctioned that day, so two people who usually sat next to each other (one directing traffic coming into a given zone, the other the outgoing traffic) were placed a few rows apart, where they could no longer tap on each other’s shoulders to discuss a confusing finding. The AWACS crewman on the Incoming monitor spotted the Blackhawks on his radar, but then they disappeared from the screen, probably because the helicopters had gone behind a mountain. He placed an electronic arrow on his screen to remind him of the choppers’ prior position, but such arrows were designed to disappear after a few minutes to avoid screen clutter.
The F-15s had four ways of guarding against friendly fire incidents. The first was the list of expected flights in their zone, strapped to the pilot’s thigh. By failing to list all flights – in particular the Blackhawks – one slice of Swiss cheese was breached even before the fighter planes took off. The second protection was a technology known as IFF (“Identification Friend or Foe”). Every US or allied plane has a transponder that emits a signal telling others that they are on the same side. The F-15’s rules of engagement require it to “paint” a potential target with its IFF probe; a friendly plane returns an electronic signal that says, “I’m on your team.”
But on April 14, 1994, it too proved to be more hole than cheese. The Blackhawk helicopters had their code set to the Turkey frequency when they took off, and neglected to switch to the Iraq frequency when they entered Kurdistan (the later investigation revealed that this happened frequently). This meant that when the F-15s pointed their IFF at the Blackhawks, the response that came back was “Foe.” It didn’t dawn on the F-15 pilots to try the other frequency – the one for Turkey – which would have entailed turning their dial by a single click. Layer Two was breached.
The F-15 pilots radioed the AWACS to ask if there were any friendly helicopters in the area. By this time, the electronic arrow had disappeared from the Incoming crewman’s monitor; there appeared to be “no friendlies” in the zone. Inexplicably, the crewman who had seen the Blackhawks earlier did not speak up. No one is quite sure why he, and a few other AWACS personnel who had seen the Blackhawks’ signal, stayed quiet during the minute when they could have prevented tragedy with a single word. Perhaps each expected that someone else would speak up. In any case, they failed to call off the dogs. Layer Three.
The final protection was the requirement for a visual identification (“VID,” in service lingo) of a target before attack. The F-15s swooped down to a position 1000 feet above and 500 feet to the side of the Blackhawks. Russian-built Iraqi “Hind” choppers had threatening side-ordnance hanging off the main fuselage; Blackhawks generally did not. But these particular Blackhawks had been outfitted with two side-hanging fuel tanks that, from a few thousand feet, might have looked like missiles. Moreover, despite the pilots’ extensive VID training, IDing a chopper from that distance is like determining a minivan’s make and model from five football fields away. And one wonders whether these pilots, who had been flying dull surveillance missions for years, finally had their adrenaline pumping, leading them to see what they wanted to see, a phenomenon known as confirmation bias.
All layers of Swiss cheese having been breached, the F-15 pilots both pulled their triggers. In a chilling bit of Top Gun swagger, as the second missile hit its target, Pilot 2 told Pilot 1, “Stick a fork in them, they’re done.” The pilots returned to base, getting huge atta-boys from their colleagues, and were waiting to be debriefed by their general when they looked up at a TV screen in the general’s waiting room. CNN was reporting that two Blackhawk helicopters were missing in Northern Iraq. One can only imagine how they felt at that moment.
While it would be easy to point any number of fingers, James Reason’s model teaches us that the most important thing in preventing another friendly fire incident is to identify the holes in the Swiss cheese, shrinking them as much as possible and creating enough different layers that the probability of their ever lining up again is as low as possible. (The investigators mostly did that, though they did court-martial one person, AWACS supervisor Capt. Jim Wang.)
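A rough way to see the force of that last point is with a little arithmetic (my own illustration, not anything from Reason’s book or the accident report): if the layers really were independent, the chance that every hole lines up at once would be the product of the individual failure rates, so shrinking any single hole, or adding another layer, multiplies the overall risk down. A minimal sketch, with invented numbers:

```python
# Illustrative only: a toy calculation of how independent layers of defense
# shrink the chance that every "hole" lines up at once. The per-layer failure
# probabilities are invented for illustration, not taken from the investigation.

def chance_all_layers_fail(failure_probs):
    """Probability that every defensive layer fails at the same time,
    assuming (unrealistically) that the layers fail independently."""
    p = 1.0
    for prob in failure_probs:
        p *= prob
    return p

# Four layers, as in the friendly fire case: the flight list, IFF,
# the AWACS controllers, and visual identification.
layers = [0.10, 0.05, 0.05, 0.20]  # hypothetical failure rates per layer

print(f"All four layers fail together: {chance_all_layers_fail(layers):.6f}")
# Adding a fifth independent layer, or shrinking any single hole,
# multiplies this number down further.
```

Of course, as the friendly fire case shows, the layers are rarely truly independent; a shared cultural mindset can punch correlated holes through several of them at once.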
Professor Reason also teaches us to mine cases in which things go right for their lessons. At the Risky Business conference, another speaker was Capt. David Evans, a Qantas training pilot who was on the flight deck when Engine #2 of a giant Airbus A380, carrying 440 passengers and 29 crew, exploded in November 2010.
Captain Evans walked right out of central casting – handsome, broad-shouldered, with a soothing Australian voice. A few minutes after takeoff, a hydraulic tube ruptured, spilling slippery fluid inside the engine, which caused a turbine to spin too fast and ultimately to explode, sending engine fragments at “infinite energy” in all directions. In an extraordinary bit of luck, only one 75-pound fragment plowed into the fuselage; it missed the passenger compartment by a few feet. But the explosion made a mess of the wing, knocking out not only the left inside engine but several other mission-critical systems like generators and fuel pumps.
He and the other pilots (there happened to be five on the plane that day) methodically went through their paces – they flew for two hours before landing – ticking through checklists and ascertaining the extent of the damage. While they used the technology when they could, they were also skeptical of it in the face of a catastrophic insult. For example, the computer system recognized that one wing was much lighter than the other (duh, it was missing a big chunk of a 25-ton engine) and signaled the pilots to move fuel from the heavier side to the lighter to keep the plane balanced. They wisely decided to ignore that recommendation.
On the other hand, the pilots programmed all their data into the plane’s landing app, which told them that – because they’d be coming in steeply, heavy with fuel, and with partly damaged brakes – they’d need the longest possible runway (they chose the main runway in Singapore), and the computer predicted that they’d be able to stop about 100 yards from the end of the 2-mile-long runway. The prediction was accurate, nearly to the foot, and no one was hurt.
Professor Reason’s model helps us understand why things go right, and sometimes why they go wrong. By causing us to focus more on bad systems than bad people, the model has been responsible for much of the progress we’ve made in patient safety.
In my talk at the Patient Safety Congress, I offered the audience my own thoughts on the successes, failures, surprises and epiphanies in the decade or so since the safety movement began. I was thrilled that Professor Reason, now retired, came to the talk. In addition to highlighting the central role of the Swiss cheese model, I made the point that many people took the model as support for an unblinking acceptance of “systems thinking” and “no blame” as an apt response to every error. The effort to rebalance systems thinking with accountability, often attributed to David Marx’s “Just Culture” model, is sometimes regarded as a counterpoint to Reason’s teachings.
Yet nothing could be further from the truth. I reminded the audience that in 1997 – three years before Marx first wrote about the “Just Culture” – Reason had done the same. In Managing the Risks of Organizational Accidents, he wrote:
A ‘no-blame’ culture is neither feasible nor desirable. A small proportion of human unsafe acts are egregious… and warrant sanctions, severe ones in some cases. A blanket amnesty on all unsafe acts would lack credibility in the eyes of the workforce. More importantly, it would be seen to oppose natural justice. What is needed is a just culture, an atmosphere of trust in which people are encouraged, even rewarded, for providing essential safety-related information – but in which they are also clear about where the line must be drawn between acceptable and unacceptable behavior.
I had a chance to chat with James Reason after my lecture, and he was very pleased that I had highlighted this point, since he believes that he is often misinterpreted as sugarcoating the role of bad behavior. Professor Reason asked me to sign his copy of the new edition of my book, Understanding Patient Safety, which I did, proudly.
My own interest in patient safety came from seeing terrible errors (and committing a few of my own) and learning – from James Reason – that the way I’d been taught to think about them was all wrong. Having a chance to give a lecture in Reason’s name, with him in the audience, was one of the great thrills of my career. We stand on the shoulders of giants, observed Isaac Newton, and I’ve never felt that more acutely than I did last month in a remarkable week in England.
Bob Wachter is chair, American Board of Internal Medicine and professor of medicine, University of California, San Francisco. He coined the term “hospitalist” and is one of the nation’s leading experts in health care quality and patient safety. He is author of Understanding Patient Safety, Second Edition, and blogs at Wachter’s World, where this post originally appeared.