The difficulty of moving the needle on patient safety

Adverse events — when bad things happen to patients because of what we as medical professionals do — are a leading cause of suffering and death in the U.S. and globally.  Indeed, as I have written before, patient safety is a major issue in American health care, and one that has gotten far too little attention. Tens of thousands of Americans die needlessly because of preventable infections, medication errors, surgical mishaps, and so forth.

As I wrote previously, according to the Office of Inspector General (OIG), when older Americans walk into a hospital, they have about a 1 in 4 chance of suffering some sort of injury during their stay.  Many of these injuries are debilitating, life-threatening, or even fatal.  Things are not much better for younger Americans.

Given the magnitude of the problem, many of us have decried the surprising lack of attention and focus on this issue from policymakers.  Well, things are changing; and while some of that change is good, some of it worries me.  Congress, as part of the Affordable Care Act, required the Centers for Medicare and Medicaid Services (CMS) to penalize hospitals that had high rates of “HACs” — hospital-acquired conditions.  CMS has done the best it can, putting together a combination of infections (as identified through clinical surveillance and reported to the CDC) and other complications (as identified through the patient safety indicators, or PSIs).  PSIs are useful: They use algorithms to identify complications coded in the billing data that hospitals send to CMS.

However, there are three potential problems with PSIs: Hospitals vary in how hard they look for complications; they vary in how diligently they code the complications they find; and although PSIs are risk-adjusted, the risk adjustment is not very good — and sicker patients generally have more complications.

So, HACs are imperfect.  But the bottom line is that every metric is imperfect.  Are HACs particularly imperfect?  Are their problems worse than those of other measures?  I think we have some reason to be concerned.

HACs: Who gets penalized?

Our team was asked by Jordan Rau of Kaiser Health News to run the numbers.  He sent along a database that listed CMS’s calculation of the HAC score for every hospital, along with the worst-scoring 25 percent, which were likely to be penalized.  So we ran some numbers, looking at characteristics of hospitals that do and do not get penalized:


These are bivariate relationships — that is, major teaching hospitals were 2.9 times more likely to be penalized than non-teaching hospitals.  These figures do not simultaneously adjust for the other characteristics because, as a policy matter, it’s the unadjusted value that matters.  If you want to understand to what degree academic hospitals are being penalized because they also happen to be large, you need multivariate analyses.  We therefore ran a multivariable model (a logistic model including each of the above variables), and even there the results are qualitatively similar, although not all of the differences remain statistically significant.

What does this mean?

So how should we interpret these data?  A simple way to think about it is this: Who is getting penalized?  Large, urban, public, teaching hospitals in the Northeast with lots of poor patients.  Who is not getting penalized?  Small, rural, for-profit hospitals in the South.  Here are the data from the multivariable model:  The chances that a large, urban, public, major teaching hospital with lots of poor patients (i.e., top quartile of the DSH index) will get the HAC penalty?  62%.  The chances that a small, rural, for-profit, non-teaching hospital in the South with very few poor patients will get the penalty?  9%.

Is that a problem?  You could make the argument that these large, Northeastern teaching hospitals are terrible places to get care — while the hospitals that are really doing it well are the small, rural, for-profit hospitals in the South.  Maybe.  I suspect this is much more about the underlying patient population and vigilance than actual safety.  Beth Israel Deaconess Medical Center (BIDMC) in Boston is one of the very few hospitals in the country with exceptionally low mortality rates across all three publicly reported conditions, and a hospital that I have written about as having great leadership and a laser-like focus on quality.  And yet, according to the HAC metric, it is being penalized as one of the hospitals with a poor record on safety.  So is Brigham and Women’s (though I’m affiliated there, so watch my bias) — a pioneer in patient safety whose chief quality and safety officer is David Bates, one of the nation’s foremost safety gurus.  So are the Cleveland Clinic and Barnes Jewish, RWJF Medical Center, LDS Hospital in Salt Lake, and Indiana University Hospital, to name a few.

So what are we to do?  Is this just whining that our metrics aren’t perfect?  Don’t we have to do something to move the needle on patient safety?  Absolutely.  But we are missing a great opportunity to do something much more useful.  Patient safety as a field has been stuck.  It’s been 15 years since the IOM’s To Err Is Human report came out — and by all accounts, progress has been painfully slow.  So I am completely on board with the sentiment behind Congress’s intent and CMS’s efforts.  We have to do something — but I think we should do something a little different.

If you look across the safety landscape, one thing becomes clear: When we have good measures, we make progress.  We have made modest improvements in hospital-acquired infections — because of tremendous work by the CDC (and its clinically based National Healthcare Safety Network), which collects good data on patient safety and feeds it back to hospitals.  We have also made some progress on surgical complications, partly because a group of hospitals has been willing to collect high-quality data and feed it back to their institutions.  But the rest of the field of patient safety?  Not so much.  What we need are good measures.  And, luckily, there is still a window of opportunity if we are willing to make patient safety a priority.

How to move forward

This gets us to the actual solution: harnessing the power of meaningful use in the electronic health records incentive program.  We need clinically based, high-quality patient safety metrics.  Electronic health records can capture these far more effectively than billing codes can.  The federal government is giving out billions of dollars to doctors and hospitals that “meaningfully use” certified EHRs.  A couple of years ago, David Classen and I wrote a piece in NEJM that outlined how the federal government, if it wanted to be serious about patient safety, could require that EHR systems measure, track, and feed back patient safety events as part of certification and the requirements for meaningful use.  The technology is there.  Lots of companies have developed adverse event monitoring tools.  It just requires someone to decide that improving patient safety is important — and that clinically based metrics are useful.

So here we are: HACs.  Well intentioned, and a step forward, I think, in the effort to make health care better.  Everyone I know thinks HACs have important limitations — but reasonable people disagree over whether those flaws make them unusable for financial incentives.  The good news is that all of us can agree that we can do much better.  And now is the time to do it.

Ashish Jha is an associate professor of health policy and management, Harvard School of Public Health, Boston, MA.  He blogs at An Ounce of Evidence and can be found on Twitter @ashishkjha.
