Do satisfaction scores really measure quality care?

Does patient satisfaction matter? The answer is a resounding “yes.”

In fact, as the director of an emergency department, I have found that feedback from dissatisfied patients has provided both me and our group with an early warning about several physicians who were not performing up to our standards. Studies show a clear correlation between decreased patient satisfaction and increased medical malpractice risk, so meeting our patients’ needs is not only in their best interests, but it is also in our own.


There is a huge difference between a qualitative phone call and a quantitative survey, though. When we attempt to quantify and compare patient satisfaction scores, we take a good thing and pervert it. Patient satisfaction is an important part of medical care, but patient satisfaction rankings harm us and harm our patients.

Not too long ago, I opened a three-pound bag of M&Ms, grabbed a handful, showed them to my ten-year-old son, and asked him how many blue M&Ms were left in the bag. He furrowed his brow at me and said, “You can’t tell just from looking at a handful.” My son understands the concept, yet satisfaction survey companies apparently don’t. They routinely perform advanced statistical calculations on the results of a handful of surveys taken from many thousands of patient visits. When hospitals or contract management groups then tie physician compensation or even physician employment to monthly “numbers” that don’t come close to meeting statistical significance, they incentivize and penalize physicians for what amount to random events. A lack of statistical validity is only part of the problem, though. The larger issue is that satisfaction rankings grade physicians on inappropriate metrics.
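The sampling problem is easy to demonstrate for yourself. Here is a minimal sketch (the 30 percent blue share and the handful size of ten are illustrative assumptions, not figures from any survey) showing how wildly the estimate from a small sample can swing even when the true proportion never changes:

```python
import random

random.seed(1)

# Hypothetical bag: 30% of 1,000 pieces are blue (illustrative numbers only).
bag = ["blue"] * 300 + ["other"] * 700

# Draw several "handfuls" of 10 pieces and estimate the blue share from each.
# The true share is fixed at 30%, but each handful tells a different story.
for trial in range(5):
    handful = random.sample(bag, 10)
    estimate = handful.count("blue") / 10
    print(f"handful {trial + 1}: estimated blue share = {estimate:.0%}")
```

Monthly physician satisfaction rankings built on a handful of returned surveys have exactly this character: the swings between samples are noise, not changes in performance.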

Bedside manner is an important part of patient care, but when patients come to an emergency department with an emergency condition, first and foremost they want a competent physician who delivers quality medical care. It would be wonderful if we could measure a physician’s competence and quality and then reward the highest performers. Unfortunately, the concept of “quality” is a lot like the concept of “justice” – we know it when we see it, but no one can properly define it. Because “quality” can’t be measured, satisfaction survey companies instead take a variable that can be measured and use their “experts” to make everyone believe that this measurable variable is the most important aspect of medical care. In other words, they take a big pot of “patient satisfaction” and slap the label “quality” over the front of it – kind of like slapping the label “thermometer” on a ruler and using it to measure the temperature. Then, by showing hospital administrators and hospital boards how nearby hospitals are “performing better” on these statistically invalid metrics, patient satisfaction companies start “competition wars” and get a full-scale buy-in from their clients to see who can be the “best.”

Satisfaction survey companies prey upon hospitals’ desire to set themselves apart. This “Top 100” and that “Top 100” are plastered on billboards all over town. Satisfaction scores give hospitals yet another metric to brag about, but those scores ignore a patient’s quality of care. Medical judgment doesn’t matter as long as the numbers are at the 90th percentile.

Think about what satisfaction scores actually measure. Satisfaction scores don’t grade us on how well we treat extremely sick patients. Admitted and transferred patients don’t get our surveys. Satisfaction scores generally don’t even measure how we treat a majority of our discharged patients. We all do a pretty darn good job of communicating with our patients, and the numbers prove it. In a recent set of Press Ganey physician courtesy scores, doctors needed a mean score of 91.8 in order to be in the coveted 90th percentile. If their mean scores dropped only four points to 87.8, they found themselves in the loathsome 50th percentile. The grouping is tight, which shows that most docs are behaving quite similarly.

What separates the “good” doctor from the “bad” doctor? Their ability to please difficult patients – the patients who have unreasonable expectations or who want inappropriate medical care. Give them what they want, or with a few pencil swipes, one angry patient can drag a physician from the 99th percentile to the 10th. Think about it. Start with the scores of four patients who rated a physician with perfect “100s” and average in one patient with all “zero” ratings. The mean score falls from “100” to “80,” moving the physician from the 99th percentile to less than the 10th percentile on Press Ganey’s rankings. One patient can cause a change of 20 points, yet only four points separate the 50th percentile from the 90th? Houston, we have a problem.
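The arithmetic above is worth checking directly; a minimal sketch using the same figures:

```python
# Four perfect surveys plus one all-zero survey (figures from the text).
scores = [100, 100, 100, 100, 0]

mean = sum(scores) / len(scores)
print(mean)  # 80.0: a single angry patient moved the mean by 20 points

# Each survey in a sample of five carries a fifth of the weight,
# so one outlier can shift the mean by as much as 100 / 5 = 20 points.
max_swing = 100 / len(scores)
print(max_swing)  # 20.0
```

Set that 20-point leverage of a single survey against the four-point spread separating the 50th from the 90th percentile, and the ranking’s sensitivity to one unhappy patient becomes obvious.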

Woe is the doctor who fails to admit a patient who wants to be admitted but who does not meet admission criteria. A coughing patient won’t accept your explanation why a Z-pack won’t help him? Turn him away at your own risk. With our employment and our compensation hinging on every “5” we can get, doctors are being coerced into giving patients whatever they want, regardless of medical appropriateness. When we cater to satisfaction scores more than we cater to proper medical care, we are violating our oath, devaluing our education, and potentially harming our patients.

Patient feedback is a tremendous asset in showing us how we can make our patients happier. But when we create large spreadsheets with green, yellow, and red percentile scores comparing statistically insignificant data about an unrelated set of criteria to manufacture some grand illusion that one hospital or one physician is “better” than another, we’re losing touch with reality.

Just ask my son.

William Sullivan is an emergency physician and attorney who blogs at Dr. William Sullivan’s Med Law Chronicles.
