Why patient satisfaction surveys are riddled with problems

Nearly all physicians are now subject to patient satisfaction ratings. In my case, as for many thousands of my colleagues across the country, it is via the survey tool sold to health care facilities by the Press Ganey Company. There are also many online sites that rate physicians.

The idea is a good one: Physicians should get feedback from patients about how good a job the patients think we are doing. If nothing else, how are we to change our behavior if we never find out where our problems are? The surveys don’t measure medical competence, but they could be a good metric of another aspect of how good we are as physicians. As currently used, though, patient satisfaction surveys are riddled with problems. They don’t measure what they’re supposed to measure, and they can easily drive physician behavior the wrong way.

I’ve read the Press Ganey survey forms, and the questions they ask are all very reasonable. I’d like to see the results if all the parents of my patients filled one out. But that’s the problem. It is a fundamental principle of statistics that conclusions about the whole group (here, all the patients) are only valid if the sample you actually analyze (those who fill out the survey) is representative of that group. That doesn’t happen. Although the forms are sent out to a random sample of patients, a very nonrandom subset of them comes back. Perhaps only the patients who are happy, or only those who are unhappy, send them back; that is in fact likely. For the analysis to have any validity at all, the patients who do not return the forms must themselves be a random subset of everyone who was sent one. But a valid survey, one that works to get a very high return rate through things like follow-up calls or contacts, is much more expensive to run.
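To make the non-response problem concrete, here is a toy simulation in Python. The numbers are invented and have nothing to do with Press Ganey’s actual methodology; the only point is to show what happens when the angriest and the happiest patients are the ones most likely to mail the form back:

import random

random.seed(1)

# Toy model: every patient has a "true" satisfaction score from 1 to 5.
# Most patients are reasonably happy (weights are made up for illustration).
patients = [random.choices([1, 2, 3, 4, 5], weights=[5, 10, 20, 40, 25])[0]
            for _ in range(10000)]

def returns_survey(score):
    # Assumed response rates: the angriest and the happiest patients are the
    # most likely to send the form back; the middle mostly tosses it.
    return random.random() < {1: 0.6, 2: 0.4, 3: 0.1, 4: 0.15, 5: 0.3}[score]

respondents = [s for s in patients if returns_survey(s)]

true_mean = sum(patients) / len(patients)
survey_mean = sum(respondents) / len(respondents)

print(f"All patients:       n={len(patients)}, mean satisfaction={true_mean:.2f}")
print(f"Survey respondents: n={len(respondents)}, mean satisfaction={survey_mean:.2f}")

In a run like this the respondents’ average lands noticeably below the true average of the whole group, and nothing in the reported score itself tells you that has happened.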

There is another problem. Patient satisfaction and good medical care do not entirely overlap. It is certainly true that an experienced and skilled physician can and should deliver bad news in a way that leaves the patient feeling understood and able to accept it. But not infrequently doctors have to tell a patient that what the patient wants is not good medical care. This ranges from something as simple as not prescribing antibiotics for a viral illness, even though the patient may want them, to not prescribing narcotics to a drug-seeking patient in the emergency department. Both of these scenarios are common, and so can be the result: a dissatisfied patient. This issue, too, would be solved by getting surveys from a truly random sample of patients, since the dissatisfied antibiotic seekers and drug seekers would be washed out by all the others. But as things stand, the angry ones fill out the forms while many others toss them in the trash.

This is not a trivial issue. Recent research has strongly suggested that the most satisfied patients often don’t get the best care; they are more likely to be admitted to the hospital (an often dangerous place), and they may even have a higher death rate. The best doctors can easily have the worst patient satisfaction scores.

I don’t want you to think I am against holding physicians accountable for what we do — I’m not. Patient satisfaction is a key component of how to do that. But we must have better tools, especially since we are now tying a doctor’s income to the satisfaction score. What we do now can easily result in statistical nonsense. Any scientist will tell you that bad data are worse than no data.

For what it’s worth, I looked for my own scores on several of the big physician rating sites. Good news! I got 4 stars (excellent)! The number of reviews I could find, out of the thousands of patients I’ve seen over 35 years of practice? One: a single review. So thanks to whoever the reviewer was, but one out of many thousands doesn’t seem to be a very representative sample.

Christopher Johnson is a pediatric intensive care physician and author of Keeping Your Kids Out of the Emergency Room: A Guide to Childhood Injuries and Illnesses; Your Critically Ill Child: Life and Death Choices Parents Must Face; How to Talk to Your Child’s Doctor: A Handbook for Parents; and How Your Child Heals: An Inside Look At Common Childhood Ailments. He blogs at his self-titled site, Christopher Johnson, MD.


  • JR

    While it’s not a good idea to base decisions on only one study with no other studies to confirm the data…

    The study only compares “respondents in the highest patient satisfaction quartile” to the “lowest patient satisfaction quartile”. There are no comparisons for anyone in the middle 50% of the sample. I have a feeling it throws their numbers off.

    They conclude that the 25% least satisfied patients are the most likely to go to the ER for care, but less likely to die. If we conclude this means the bottom 25% is getting better care, then that clearly means we should all start going to the ER for routine care because it provides the best care!*

    *this statement is sarcasm.

  • Thomas D Guastavino

    Excellent points. You have to have a patient population that is motivated to get better for patient satisfaction surveys to have any validity. Secondary gain issues are not insignificant. Patient wait times, a key factor in patient satisfaction, can be greatly affected by the complexity of your patients’ problems, or by whether you are called away for an emergency. The law of unintended consequences is pushing physicians to avoid complex cases, emergency rooms, and those patients whose secondary gain issues the physician cannot safely satisfy.

  • Markus

    In addition to the sampling problem, I believe there is a serious methodological problem in treating these ratings as numerical values. Someone who gives you a four is not twice as happy with you as someone who gives you a two. The four is happy, and the two is unhappy. These things do not scale in simple arithmetical ways. And then they get averaged! If you get a four and a two, half the people dislike you, but a score of three gets reported. The whole process is as if you showed up for morning rounds and were told that your patient’s temperature averaged 38 yesterday.
    In actual usage when I was involved with it, the scores almost always came out around 4.3 for docs who varied considerably in my opinion. Most of the survey responders would give high marks mainly because that is what people do, and there would be a rare p.o.’ed person thrown in. I wanted to get rid of Press Ganey for the above reasons, but their methodology is the default across the country.
    The scores seem so precise because they are numerical and even have decimal points, but do they mean much?
    We do need a good way of measuring performance.

    • http://www.chrisjohnsonmd.com/ Chris Johnson

      I don’t think they mean much at all unless they are uniformly very high or very low.

      And yes, it also bothers me that they take these discrete survey answers and treat them as if they are continuous variables. That gaffe would get you an F in Statistics 101. But, as you say, Press Ganey is now the default. And we have bad data, which is far worse than having no data.

  • JR

    Interestingly enough, the study measures the patients’ feelings about their health care only at the start; it then tracks their health outcomes.

    They did not track the patient’s changing feelings about their health care over time.

    I have a feeling that those who rate their health care the highest at the beginning may be more likely to have greater health care needs than those who rate it as just “ok.” If you rarely see your doctor you may not feel strongly about them one way or the other. If you see someone four times a year you may have a stronger positive or negative feeling about them.

    I disagree with treating “survival” as the only thing that matters. Health care isn’t always life or death, so it can’t be judged on that one factor.