Rating health care: There are limits to every method

You can find reviews on almost anything; we “Yelp” restaurants before we try them, we scrutinize customer feedback when we purchase products on Amazon and we check out how many stars a dry cleaner or car mechanic has. Medicine is, for better or for worse, becoming the same way. You can Google a hospital or physician and find comments or reviews, and this worries many doctors.

Most physicians don’t like the idea of patients looking them up because they are afraid the information out there doesn’t reflect the truth. In most instances, only a handful of patients have reviewed the doctor, and usually, those comments are polarized; it’s common to see either “one star” or “five stars” without much helpful feedback. You only go online to write about someone if you were surprised and impressed or upset and angry. I’ve previously written about how we worry that we are being rated on things other than medicine, such as our facility or parking.

Unlike websites that simply solicit patient reviews, some rating systems attempt a more objective comparison of medical facilities. For example, U.S. News and World Report publishes annual lists of the “best hospitals” in different specialties. These reports, however, depend heavily on algorithms that place different emphases on patient volume, reputation, mortality and outcomes. Comparing organizations as complex as hospitals is challenging, and significant controversy exists over how much weight to give each element. As a result, I think reports like this may give us a gestalt of the most cutting-edge, efficient, safe and competent hospitals, but the specific order of the rankings is quite debatable.

At websites like HealthData.gov, the Department of Health and Human Services publishes a plethora of statistics and outcomes about physicians and hospitals. Consumers can look up a hospital’s readmission rate, a surgeon’s 30-day mortality rate and a skilled nursing facility’s Medicare cost. These publicly reported statistics motivate hospitals and physicians to engage in quality improvement and implement programs to reduce adverse outcomes. Indeed, in my work environment, I am very aware of which outcomes are being measured and how they reflect on my hospital and practice.

Although public databases with hospital outcomes may stimulate quality improvement, they may also paradoxically lead to worse health care delivery. For example, if a surgeon is worried about his 30-day mortality rate, he may choose not to operate on high-risk candidates; if a hospital is concerned about readmissions for congestive heart failure, it may keep patients in the hospital longer than necessary. I have seen surgeons and administrators make clinical decisions based on optimizing their statistics rather than on patients’ clinical conditions. This, I think, is the system backfiring; in medicine, we are supposed to take care of people to the best of our ability, not game the system to improve our scores.

Unfortunately, I don’t think there is a perfect way of comparing hospitals and knowing which doctor to see for a particular clinical condition. There are a lot of resources out there, from patient-generated feedback to U.S. News and World Report rankings to publicly reported Medicare data, but all of these have their limitations and drawbacks. For the patient or consumer, it’s important to keep these caveats in mind when we choose a doctor or hospital.

How do you choose where to get your medical care?

Craig Chen is an anesthesiology resident. This article originally appeared in The American Resident Project.
