A love-hate relationship with practice guidelines

I have a love-hate relationship with practice guidelines. Love because it is often helpful to refer to a set of evidence-based recommendations as part of clinical decision-making; hate because of all of the shortcomings of the guidelines themselves, as well as the evidence upon which they are based.

A recent piece in JAMA and the editorial that accompanied it reinforced my ambivalence.

The research report addressed a straightforward question: How often do class I recommendations change in successive editions of guidelines on the same subject from the same organization? Recall that class I recommendations are things that physicians should do for eligible patients. They are particularly important because these recommendations often form the basis for quality metrics, against which physician performance is measured, increasingly with financial consequences. It is not hard to understand why.

First, the recommendations are, by nature, definitive: if a patient meets certain criteria (e.g., has evidence of ischemic vascular disease and no allergy to aspirin), then she should get the indicated therapy or intervention (aspirin), making the quality assessment fairly straightforward. It is also generally easy to detect whether the intervention was made. Finally, it is easier to engage clinicians using quality metrics that detect underuse (the patient did not get something he should have) than overuse (the patient got a treatment or service he should not have).

The authors limited their study to guidelines published jointly by the American College of Cardiology and the American Heart Association. These are generally well-respected documents, and are often held up as models for how guidelines should be developed and promulgated. (Disclosure: I am a card-carrying fellow of both organizations.) They categorized the status of the original class I recommendations in the subsequent guideline as either retained, downgraded or reversed, or omitted.  So what did the study find?

The findings are summarized in the table below.

[Table: status of the original class I recommendations (retained, downgraded or reversed, or omitted) in the subsequent guideline]

Overall, about 9% of the recommendations were downgraded or reversed in the follow-up guideline.

I don’t know about you, but that seems like a lot to me, especially since the median time interval between the paired guidelines was 6 years. This is even more disturbing when you think about how many years it takes to develop quality metrics based on these guidelines, making it inevitable that some quality metrics will be based on discredited recommendations. The discordance of the newest cholesterol management guidelines with the widely adopted HEDIS measure for LDL management is just one example where this is already the case.

I think this is just one more reason why quality measures built around process (did you do this or that in the care of a patient) have to give way to those measuring outcomes (how well did the patient do under your care).

Ira Nash is a cardiologist who blogs at Auscultation.


  • Dr. Drake Ramoray

    “I think this is just one more reason why quality measures built around process (did you do this or that in the care of a patient) have to give way to those measuring outcomes (how well did the patient do under your care).”

    The problem isn’t guidelines, evolving guidelines, or even guidelines that disagree with each other. The ATA had thyroid nodule guidelines in 2006, revised in 2009, and an upcoming revision either this year or next. A lot has changed over those few years (for starters, we use a lot less radiation now). One of the weaknesses of the early ones and some society guidelines is the absence of any age criteria (I’m not really interested in or worried about a 90 year old with a thyroid nodule). In addition, the ATA guidelines are quite different from AACE’s, radiology’s, and even other countries’ (most notably Europe and South Korea).

    The problem is using guidelines as a cudgel to reduce physician payment or to grade physicians on how they are doing. They are guidelines, there are always exceptions to the rule, and there is certainly room for debate on many of them (lipids being the most notable, and thyroid as stated above). There is even current debate in the Endocrine world about appropriate blood pressure in diabetics, something that was literal dogma during my training.

    Societies and groups establishing guidelines is fine, and they evolve over time. Punishing physicians for not following them is where the real issue lies (well, that, and as medicine moves to being more reliant on guidelines, both in practice and in compensation, it eases the transition to healthcare being provided by non-MD providers. But that is a topic for another thread.).

    • JR

      I agree! Guidelines are very useful, but using them as a “quality measure” isn’t right.

      There are some screenings I know I will not do when I become of the age to need them, and I don’t want my provider dinged for my personal choices!

    • SteveCaley

      The issue of a guideline and what it means reaches deep into philosophy, and the understanding of what it means to practice medicine.

      As we physicians are educated, we are taught certain classical patterns to which diseases often conform. If that were all one needed to be a doctor, then there would really be no need for postgraduate training. The newly-minted MD is entirely capable of imprinting the results of guidelines.

      The remainder of training is stripping away the myth of the Form of Disease. There really is no such thing. One might look at art, and say they understand the Style of Manet, or the Vision of Picasso, but such things are absolute rubbish. One can develop a more and more sophisticated appreciation of Manet and Picasso by seeing their individual works throughout a lifetime.
      Our absurdly vain society thinks that they can take an Art History Class, and then say – ‘yes, yes – Picasso, the blue period, I know all about it.’ Of course, one does not see the actual works, but one can google them, and scrutinize them online.
      Guidelines are the Cliff’s Notes of medicine. Properly used, they can help a provider bring the acuity of one’s thoughts more swiftly to the point. They do not represent reality, any more than a Cliff’s Notes or an art history textbook represents art.
      I read somewhere that it is rare to find an undergraduate student of Classics who actually reads the works they study. Cicero is great, of course -but it’s all in Latin, and who can read that?
      Guidelines should be judged on their purpose – and the purpose they are used for is malign. They do not intend to help you care for the patient before you in a way you might not have considered. They allow for a Legally Defensible Diagnosis.
      Say a patient comes to you with eye pain, and you’re not even an ophthalmologist. If you use the principle of inherent skills and the care of the patient, you will proceed to a diagnosis that is of use to your suffering patient. However, one need only flip to the authoritative Guideline, and see that the most common cause of eye pain is, say, corneal abrasion. You patch the eye, reassure the patient, and send her off.
      When the eye is lost to acute closed-angle glaucoma and you are sued, well, you can wave the guideline and say “This told me what to do, and I was afraid to contradict it!” That may well be enough these days for a Legally Defensible Diagnosis.
      Medicine is terribly difficult, and as Maimonides said, the longer you study and practice it, the harder it gets. He said that any fool with a smattering of knowledge can practice quick and confident medicine.
      I point out that Maimonides was never Board Certified, or even licensed in his state, when he practiced 800 years ago. We have come far beyond you, Maimonides!

  • Kaya5255

    Guidelines are neither cast in concrete nor cast in bronze. They are fluid. They change depending upon circumstances.
    Providers, in my opinion, need only to do what is right and best for the consumer, based upon their assessment of the consumer, given the information they have at the time.
