Computerized Physician Order Entry (CPOE) affects patient safety

Recently, yet another alarming Computerized Physician Order Entry (CPOE) study made headlines.

According to Healthcare IT News, The Leapfrog Group, a staunch advocate of CPOE, is now sounding the alarm on untested CPOE: its new study “points to jeopardy to patients when using health IT.” Until now we had inconclusive studies pointing to increased, and also decreased, mortality in one hospital or another following CPOE implementation, but never an alarm from a non-profit group whose very business is improving hospital quality by encouraging CPOE adoption. This time the study involved 214 hospitals using a special CPOE evaluation tool over a period of a year and a half.

According to the brief Leapfrog report, 52% of medication errors and 32.8% of potentially fatal errors in adult hospitals did not receive appropriate warnings (42.1% and 33.9%, respectively, for pediatrics). A similar study published in the April edition of Health Affairs, using the same Leapfrog CPOE evaluation tool but only 62 hospitals, provides more insight into the results.

The hospitals in this study used systems from seven commercial vendors and one home-grown system (not identified), and, most interestingly, the CPOE vendor had very little to do with the system’s ability to provide appropriate warnings. For basic adverse events, such as drug-to-drug or drug-to-allergy interactions, an average of 61% of events across all systems generated appropriate warnings. For more complex events, such as drug-to-diagnosis or dosing, appropriate alerts were generated less than 25% of the time. The results varied significantly amongst hospitals, including hospitals using the same product. To understand the implications of these studies, we must first understand the Leapfrog CPOE evaluation tool, or “flight simulator,” as it is sometimes called.

The CPOE “simulator” administers a six-hour test. It is a web-based tool where hospitals print out a list of 10-12 test patients with pertinent profiles (age, gender, problem list, medication and allergy lists, and possibly test results). The hospital then enters these patients into its own EHR system. According to Leapfrog, this is best done by admissions staff, lab and radiology resources, and perhaps a pharmacist. Once the test patients are in the EHR, the hospital logs back into the “simulator” and prints out about 50 medication orders for those test patients, along with instructions and a paper form for recording CPOE alerts.

Once the paper artifacts are created, the hospital enters all medication orders into the EHR and records any warnings generated by the EHR on the paper form provided by the “simulator.” This step is best done by a physician with experience ordering meds in the EHR, though Leapfrog also suggests that the Chief Medical Information Officer would be a good choice for entering orders. Finally, the recorded warnings are re-entered into the Leapfrog web interface, and the tool calculates and displays the hospital’s scores.
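The scoring step described above amounts to a simple tally: for each scripted test order, did the EHR raise the appropriate warning or not? The sketch below illustrates that tally per alert category. The category names, data shape, and rounding are hypothetical; Leapfrog’s actual scoring rules are not spelled out in the report.

```python
# Hypothetical sketch of the "simulator" scoring step. The categories and
# data format are assumptions for illustration, not the real Leapfrog tool.

def score_cpoe_test(recorded_alerts):
    """recorded_alerts: one dict per scripted test order, e.g.
    {"category": "drug-drug", "alerted": True}
    Returns, per category, the percent of orders that triggered a warning."""
    by_category = {}
    for order in recorded_alerts:
        cat = order["category"]
        hits, total = by_category.get(cat, (0, 0))
        by_category[cat] = (hits + (1 if order["alerted"] else 0), total + 1)
    return {cat: round(100.0 * hits / total, 1)
            for cat, (hits, total) in by_category.items()}

# A toy run with four recorded orders in two categories:
results = score_cpoe_test([
    {"category": "drug-drug", "alerted": True},
    {"category": "drug-drug", "alerted": False},
    {"category": "dosing", "alerted": False},
    {"category": "dosing", "alerted": False},
])
```

Note what such a tally can and cannot capture: it measures whether configured decision support fired on scripted orders, and nothing about how a physician would actually behave at the keyboard.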

If the process above sounds familiar, it is probably because this is very similar to how CCHIT certifies clinical decision support in electronic prescribing. Preset test patients followed by application of test scripts are intended to verify, or in this case assess, which modules of medication decision support are activated and how the severity levels for each are configured. As Leapfrog’s disclaimer correctly states, this tool only tests the implementation, or configuration, of the system. This is a far cry from a flight simulator where pilot (physician) response is measured against simulated real life circumstances (busy ED, rounding, discharge). The only alarm the Leapfrog study is sounding, and it is an important alarm, is that most hospitals need to turn on more clinical decision support functionality.

It is not clear whether doctors will actually heed decision support warnings, or just ignore them. Since the medication orders are scripted, we have no way of knowing if, hampered by the user interface, docs without a script would end up ordering the wrong meds. And since the “simulator” is really not a simulator, we have no way of knowing if an unfriendly user interface caused the physician to enter the wrong frequency, or dose, or even the wrong medication (Leapfrog has no actual access to the EHR).

We have no indication that the system actually recorded the orders as entered, subsequently displayed a correct medication list or transmitted the correct orders to the pharmacy. We cannot be certain that a decision support module which generates appropriate alerts for the test scripts, such as duplicate therapy, will not generate dozens of superfluous alerts in other cases. We do know that alerts are overridden in up to 96% of cases, so more is not necessarily better.

Do the high scoring hospitals have a higher rate of preventing errors, or do they just have more docs mindlessly dismissing more alerts?

All in all, the Leapfrog CPOE evaluation tool is a pretty blunt instrument. However, the notion of a flight simulator for EHRs is a good one. A software package that allows users to simulate response to lifelike presentations, and scores the interaction from beginning to end, accounting for both software performance and user proficiency, would facilitate a huge Leap forward in the quality of HIT. This would be an awesome example of true innovation.

Margalit Gur-Arie is a partner at EHR pathway, LLC and Gross Technologies, Inc. She blogs at On Healthcare Technology.



  • jsmith

    One more time: EHR advocates have the wrong solution to the problem. The problem is partial cognitive failure (mistakes) on the part of health care providers. This is caused by information overload, time pressure and multitasking. The false solution given by EHR advocates is more technology, which results in even more information overload, multitasking, anxiety and cognitive failure, and adverse patient outcomes.
    Where are the psychologists in all of this? Medical software design is too important to be left to engineers.
    The outcome: money down a rathole, with little or no cost savings or quality improvement.

    • http://onhealthtech.blogspot.com/ Margalit Gur-Arie

      What engineers? Engineers would be a huge step up. Do a little Googling on educational attainment among EHR company leadership. Here and there you’ll find someone with a degree in computer science or information technology, which is not at all engineering. Here and there you will also find a token physician.
      There is an entire new profession around human-computer interaction. Perhaps they should employ more of those folks too.

      • jsmith

        Wow. That’s even scarier.

  • Marc Gorayeb, MD

    EHR: Artificial intelligence in search of a problem that the artificial intelligence can actually solve.

  • Bobby

    As a potential patient, I find this very scary!

  • http://humanfactorinmedicineandlife.blogspot.com/ Syed Ali, MD.

    I started out strongly opposed to CPOE. Having used it for the past year, I now find it extremely useful. I am a strong supporter of safety too. We have a committee of physicians who sit down every few months and evaluate any new recommendations. This is a science in evolution, so yes, there will be problems along the way. The question is whether errors using CPOE are fewer than with paper orders.

    Irfan