Health care in U.S. hospitals is suffering from three under-recognized conditions that I will refer to as “Metris,” “Severe Metris” and “Metric Shock.”
Metris occurs when a health system begins to focus more on achieving certain metrics than on improving actual patient care. Severe metris occurs when pressured providers begin to base their clinical decisions on achieving a metric rather than on their best clinical judgment. And sadly, metric shock occurs when these aberrant clinical decisions cause patient harm. In my opinion, metric shock is a silent epidemic in the U.S. health care system.
The pathogenesis of metric shock is rooted in so-called value-based (indirectly revenue-based) metrics, as well as other performance metrics. To achieve these metrics, health systems implement system-wide campaigns and bonus structures that educate staff and incentivize performance.
All levels of administration work synergistically to promote “metric success.” The unintended result is a palpable fear of underperformance and overt or covert actions to avoid “metric failures.” Pressure grows as the health system compares each facility to the others on prominent bar graphs. And managers naturally jockey to avoid critique and receive praise.
In my experience, some of the actions taken to improve metrics actually cause patient harm — harm which generally escapes measurement. Of great concern, many young medical trainees are now “growing up” in this environment that promotes metrics over patients, potentially cementing this approach into the health care system for at least another generation.
Although it should be possible to measure and incentivize quality and efficiency by using carefully chosen metrics, we should admit that it just isn’t working. Here are some of the common quality and other performance metrics that can result in metric shock, based on my experience.
1. Sepsis

The fear of “missing” sepsis and not providing the “appropriate” sepsis bundle of care is so great that many clinicians inappropriately treat non-septic patients with aggressive fluid boluses, broad-spectrum antibiotics, and unnecessary blood testing in order not to “fall out.”
This ultimately leads to unmeasured harm, like flash pulmonary edema requiring urgent diuresis. Unfortunately, the system-wide education does not teach that sepsis is not the only cause of lactic acidosis, tachycardia, leukocytosis, etc. Most likely, the best-performing sepsis docs are the ones who label and treat “everything” as sepsis, and unfortunately, they are likely the ones causing the most (unmeasured) patient harm. Deceptively, the increased diagnosis of “supposed” sepsis also lowers the percent sepsis mortality (but not the absolute number of deaths), because the denominator swells with low-risk patients who were never truly septic, creating an additional incentive to promote this practice.
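The denominator effect described above can be sketched with hypothetical numbers (all figures are invented for illustration, not drawn from any real dataset):

```python
# Hypothetical illustration: over-labeling non-septic patients as
# "sepsis" lowers the percent sepsis mortality even though the
# absolute number of deaths is unchanged.

def sepsis_mortality_rate(deaths, labeled_cases):
    """Percent mortality among patients coded as sepsis."""
    return 100.0 * deaths / labeled_cases

# Before: 200 true sepsis cases, 40 deaths.
before = sepsis_mortality_rate(40, 200)

# After: the same 40 deaths, but 100 low-risk, non-septic patients
# are now also coded as sepsis (300 labeled cases in total).
after = sepsis_mortality_rate(40, 300)

print(f"before: {before:.1f}%, after: {after:.1f}%")
# → before: 20.0%, after: 13.3%
```

The apparent “improvement” comes entirely from inflating the denominator with patients who were never at risk of dying from sepsis.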
2. ER throughput
The non-clinician notion that fast, efficient ER care represents quality care is fiercely defended, while in reality, a shorter ER length of stay is rarely achieved without cutting clinical corners. Patient care is rushed; clinical decisions are made without gathering accurate information or performing a careful physical exam; and wasteful shotgun orders, based solely on the chief complaint, become routine, replacing a history-driven, algorithm-based approach to patient care (because there’s no time). Rushed care eventually results in harm (to the patient or to the pocketbook) and unnecessary admissions.
3. C. diff colitis
One campaign, designed to reduce rates of C. diff colitis, focused primarily on screening all C. diff tests ordered and blocking studies that might yield false-positive results.
Unfortunately, the multi-layered pressure to avoid positive tests became so great that it led some doctors to treat suspected C. diff colitis empirically rather than sending the test at all, potentially treating a patient for the wrong diagnosis and/or missing an alternative one.
Celebrated as a great metric success, the improved numbers were a mirage: the only way to actually decrease the rate of true C. diff infection is through more judicious use of antibiotics, proper contact precautions, and sanitation, not through decreased testing.
4. CLABSIs and CAUTIs
It is interesting to watch clinical leaders quietly panic when a potential CLABSI or CAUTI is identified, as they huddle to review the case and identify, if possible, an alternative explanation that would exclude reporting.
Worse, I’ve assumed care for central-line patients who were febrile at some point in their hospital stay but were treated with empiric broad-spectrum antibiotics without any blood cultures ever being obtained: a flawless method for avoiding CLABSIs entirely.
CAUTIs can easily be dismissed as colonization by attributing fever and leukocytosis to an alternative diagnosis, like the omnipresent atelectasis. Alas, clever documentation and attribution go a long way toward metric success.
5. Medication reconciliation
Although medication reconciliation is very important, the metric of “100% completion” within 24 hours of admission is blind and dangerous, since anyone can “complete” the medication reconciliation without a single medication being accurate. A conscientious physician quickly realizes this when, at the time of discharge, the admission errors are exposed and, hopefully, corrected.
6. Antibiotic stewardship
Although antibiotic stewardship is critically important, the approach to antibiotic stewardship enforcement can be deceptive and dangerous. Typically, leaders report the absolute use of selected antibiotics (i.e., “we’re using too much vancomycin”), with no case-based assessment of appropriate or inappropriate use. The resultant fear of “bar graph underperformance” leads to decreased use of the selected antibiotic (even when it is the best choice), potentially causing patient harm through treatment with a less appropriate antibiotic (either a less effective or a less closely monitored one). Thus, decreased use is often celebrated even as patient harm potentially increases.
7. Length of stay
The myth that a shorter hospital length of stay is, by definition, “better” leads to rushed care and unsafe premature discharges. Although this practice may increase the readmission rate, the failure may go unnoticed at the index facility because, out of distrust, the family often takes the patient to a different facility the second time.
8. Case-mix index
The myth that a higher case-mix index (rather than an accurate CMI) is better, with peers compared directly on bar charts, leads inevitably to documentation inflation, in which every patient is intentionally made to appear sicker than they actually are. The result is misleading communication between providers and, potentially, unnecessary testing and treatment.
Ironically, in my experience, the simple act of verbally promoting the concept of a “zero harm environment” (the metric being “zero”) does not improve safety, but simply leads to less reporting of adverse events because no individual staff member or manager wants to shine a spotlight on their failures.
System-wide huddles that are established to address any and all safety issues from top to bottom are unfortunately performances in self-deception because they are founded on the faulty metrics listed above and on the fears of revealing any relative underperformance compared to peers.
I do believe that physicians can recognize high-value care when they see it, but my hope that any isolated metric will ever actually capture this high-value care is gone. It is time for value-based care programs (and other performance-based incentive programs) to make a hard stop and reassess what they are actually accomplishing. Surviving metric shock depends on it.
David M. Mitchell is a hospitalist.