Pay for performance: Not everything that can be counted counts

There is a lot of talk these days about changing our health care system from “pay for volume” to “pay for value.”  The idea is that we currently reimburse doctors and hospitals according to the number of services delivered rather than how much those services improve our health.  This perverse system of incentives results in runaway health care spending that does not necessarily produce a healthier citizenry.

There is certainly truth in this way of thinking.  But to my mind, it very much oversimplifies the situation.  To illustrate why I believe this to be the case, I present a letter titled “Performance Indicators and Clinical Excellence” from The Lancet.

The author Chris Kenyon writes:

Attending post-intake ward rounds in various National Health Service (NHS) trusts around the UK, I am concerned that clinical expertise is being crowded out by a need to meet various key performance indicators.

In one hospital, I was told on arrival that the trust had, over a year, moved from the bottom to the top category of performing trusts. I was therefore puzzled when soon thereafter I attended a consultant post-intake ward round where a patient seen had ascending leg weakness mistakenly diagnosed by the admitting doctor as Guillain-Barré syndrome. The consultant spent little time reviewing the history of the patient (missing the history of pseudo-seizures), did not test the patient’s reflexes or power in the legs, and concluded that the patient required intravenous immune globulin. The consultant did, however, introduce all members of the ward round to the patient, check that any drug allergies were filled out on both pages of the drug chart and a checklist of 23 other items, which were ticked off or not according to the consultant’s compliance thereof. A sticker containing the 25 items ticked off was duly placed in the notes, the patient received intravenous immune globulin for their somatisation disorder and the patient contact scored 100% for audit purposes.

On post-intake ward rounds in the past, the consultants would, with a few pertinent questions and clinical findings, recognise the most likely diagnosis. This would determine a streamlined approach to the further investigations and management. Performance indicators are necessary, but with the limited time available for each consultant-patient contact, I wonder how much thought has been put into how the setting of performance targets such as this list of 25 items has crowded out the time available for clinical excellence.

In this story, the doctor had met all of the quality measures.  The auditors assessing the “value” of the doctor’s care could feel good about what he had done since the checklist of performance indicators had been completed. The only problem is that the patient was given a wrong diagnosis and, therefore, received a very expensive, unnecessary, and potentially harmful treatment.  This is illustrative of an important defect in the way that doctors’ performance is currently being measured.  These quality measures all assume the presence of an accurate diagnosis.

But, in fact, making the right diagnosis, finding the real reason for what is wrong with the patient, is one of the most challenging and patently crucial parts of helping a sick person get well.  And yet I am not aware of a quality measure that takes this important clinical activity into account.  The quality indicators focus instead on what is easy to measure: whether people’s cholesterol, blood pressure, and diabetes readings are at goal, and how many have received their age-indicated vaccines and cancer screening tests. These are no doubt important parts of care, but to reduce being a good doctor to this is a grave error.  I believe a payment method based entirely on meeting quality measures risks de-emphasizing less quantifiable yet equally vital components of the healing profession.

With all its flaws, the current fee-for-service system is often an indirect measure of quality, perhaps in some instances superior to a method based on quantifiable quality indicators.  In Austin, where I practice, there is a particular orthopedic surgeon known both in the medical community and among patients for getting very good results with his knee and hip replacements.  It, therefore, takes a long time to get an appointment and a surgery date with him.  This is not because he is looking to do more surgeries.  Indeed, he is known to turn away people seeking joint replacements if he does not believe doing so would be appropriate.  He is busy because doctors and patients know he is good at what he does.  This is the same reason that many of the best doctors’ schedules are full.  And this is something that people working to re-engineer our health care delivery system often seem to miss.

I close with words from a sign that hung in Albert Einstein’s office at Princeton: “Not everything that counts can be counted, and not everything that can be counted counts.”

James Marroquin is an internal medicine physician who blogs at his self-titled site, James Marroquin.

  • Ron Smith

    Hi, James. I’m so with you on this.

    I try to teach my nurse practitioner colleagues and nursing staff that medicine consists of two parts. There is the clinical ‘evidence’ part and the non-clinical art part.

    It is interesting how the term ‘evidence-based’ has evolved. It was not a talked-about topic thirty years ago when I started pediatrics. The medicine that I practice is ‘evidence-based’ in that I want to compile the list of effects and come up hopefully with one cause. Sometimes it’s easy and sometimes it’s not. Many times it requires the art of medicine to complete the assessment before crafting a plan.

    I posted previously about my youngest granddaughter, who is not quite a month old now. She had been fussy and irritable, and discussing it with my daughter just didn’t give the inner physician in me the peace of mind that I felt I needed. I told my daughter that I just needed to see her and hold her in my arms first. When I got home a few hours later, I held little Adelle in my great big gorilla hands for maybe a minute, feasting with my eyes. Then and only then was I sure that I felt she was really OK.

    That medical art has no means for assessment. Yet the evidence didn’t fully tell the story. Like Sulley in Monsters University, who challenged Mike Wazowski to ‘dig deep’ for the best scare, I recommend to residents, nurse practitioners, and young physicians to dig deep inside themselves.

    You may have all the evidence in front of you and still not feel good about a patient. Pay attention! It might mean you are about to miss something big! If it turns out to be nothing, you will still never regret sincere concern!

    Warmest regards,

    Ron Smith, MD
    www (adot) ronsmithmd (adot) com

    • Rob Burnside

      It’s another way of saying, “Let the force be with you” and I agree–instinct plays a huge part in many professions including, believe it or not, firefighting. After running into numerous burning buildings you learn to recognize, at a glance, which ones are likely to kill you, and you proceed with greater caution. Or, in some cases, greater alacrity. All of which makes the World Trade Center fires on 9-11 that much more amazing to me. Most of those firefighters had to know, going in, that they weren’t coming out.

  • karen3

    If you look at the literature, the error rate in diagnosis is rather high — 25-50% — with the 25% being defined as “major error” at autopsy. I’m not sure that your colleagues would be so thrilled with a system where compensation was based on being correct. In fact, there are a lot of posts bemoaning a system wherein physicians are held accountable solely for major errors.

    Given the difficulty of proving diagnoses to be “correct” without some significant passage of time, would you endorse a more rigorous error-penalizing process for major errors?

    • jimmyquin

      As you write, proving diagnoses to be correct without a significant passage of time is difficult. Indeed, I would argue that it is difficult to quantifiably assess much of what goes into being a good doctor and providing valuable care. If this is the case, the foundation of paying for performance is defective. I have no problem with incorporating quantifiable quality measures such as vaccination rates and meeting HbA1c/blood pressure/cholesterol goals into a physician’s method of compensation. But I believe a volume-based model should still play a part in physician pay. This is because, even with its limitations and room for abuse, it reflects whom patients and the medical community deem competent and worthy to care for people.

  • mtbwalt

    “pay for performance” and “pay for value” are phrases meant to distract you from the actual goal: pay less.

    They have survived as some of the fittest phrases of the regulatory capture newspeak because they implicitly put physicians and other caregivers on the defensive. Instead of acknowledging a physician as captain of the ship who knows more about his or her patients and how to care for them than anyone else, these phrases challenge a physician to prove it to an idiot beancounter’s satisfaction by jumping through hoops that bear no resemblance to any sane person’s definition of quality.

    • Rob Burnside

      Healthcare is not the only place this sort of thing has happened. As a society, we seem overly fascinated with “process” at the expense of “product.” Einstein was right, Dr. Marroquin is right, you’re right, and so is the individual who famously said, “A foolish consistency is the hobgoblin of little minds.” A little less science and a little more art, if you please!