Dangers of strict adherence to clinical guidelines

Medical guidelines are well on their way to becoming the Law of the Land.  Dr. Rich over at The Covert Rationing Blog featured a recent post that highlighted this point.

When a study published in the Journal of the American Medical Association showed that nearly 23% of patients receiving implantable cardioverter-defibrillators (ICDs) received them under circumstances that did not match up with the recommended “guidelines” for doing so, this breach of etiquette was characterized as being the moral equivalent of fraud:

CNN.com put it like this: “Of more than 100,000 people who received ICDs, almost 23% did not need them according to evidence-based guidelines.” As the lead investigator of the JAMA study told CNN, “It’s a lot of people who are getting defibrillators who may not need them.”

As Dr. Rich himself then observed:

“Guidelines” implies, literally, a guide, a signpost, a general set of factors that one ought to take into account when making specific decisions regarding specific individual patients. Guidelines are a strong set of recommendations which (all other things being equal) one ought to follow in the majority of cases, and when one chooses not to follow them, one ought to have a good reason for making that choice.

When the use of clinical guidelines is considered in view of this now-quaint notion, one does not expect 100% compliance. After all, patients being patients, they bring to the table lots and lots of special considerations one ought to take into account when deciding how to apply guidelines. Depending on the level of evidence upon which a certain set of guidelines were established, and considering the array of variations on the mean which patients still insist on bringing to a doctor’s notice, the optimal applicability of a given set of guidelines to a given population of patients ought to look something like a bell-shaped-curve. It is not immediately obvious, for instance, that a rate of compliance with a set of guidelines of 77.5% is simply too low. Indeed, a rate of compliance with your typical clinical guidelines well north of that number might imply, when one fully considers the matter, an abrogation of the physician’s duty to make informed clinical decisions based on ALL available evidence, including those introduced by an individual patient’s specific circumstances.

As a matter of fact, the very guidelines regarding ICDs which doctors are now accused of abusing admit that “the ultimate judgment regarding care of a particular patient must be made by the physician and the patient in light of all of the circumstances presented by that patient.”

In this light, a very striking feature of this new report is its baseline assertion that the strict following of guidelines is “evidence-based” practice, while any deviation is “non-evidence-based;” that is, by implication at least, it is good medicine vs. bad medicine. And so, “only” 77.5% of ICD implanters are practicing good medicine, and that is clearly a major concern – one for which urgent solutions should be sought.

The good Dr. Rich then went on to point out a number of circumstances under which it might be perfectly rational to alter the recommended timing of ICD implantation, such as not wishing to subject a patient who needs a pacemaker anyway to two separate procedures, evidence suggesting a higher-than-expected risk of sudden death, or even simply allowing the patient to receive the device before losing his or her insurance.

But while there are any number of “justifiable” reasons to violate clinical guidelines (just as there are probably lots of “justifiable” reasons one could come up with for not charging the suggested retail price, failing to read books on the suggested reading list, or even driving 60 in a 55 mph zone), it is at least as interesting that CNN characterized these particular guidelines as being “evidence-based”.  I asked Dr. Rich whether the ICD guidelines in question, and specifically the recommendations about implant timing that were being “violated”, really were evidence-based.  His reply?  Not really.

… the only trial designed to test ICD implantation immediately after heart attacks was the DINAMIT study in 2004. This study enrolled patients who were at particularly high overall cardiovascular risk, both arrhythmically and hemodynamically. It showed no benefit in overall survival with ICDs, even though the rate of sudden death was substantially reduced (patients died of pump failure instead).  So it was a negative trial, but again, it enrolled only a sick subset of post-heart attack patients. Its results are not generalizable.

So what we know is:

-   ICDs improve survival after [heart attacks] or heart failure when the heart’s ejection fraction [a measure of heart function - ed] is < 35%

-   But ICDs have not been tested in these patients right after heart attacks (except for the poorly-designed DINAMIT trial) or right after heart failure diagnosis

-   So the “evidence-base” that says don’t implant in these patients is not positive evidence (we tested it and it doesn’t work), but negative evidence (it hasn’t been tested)

-   Since the risk of sudden death is particularly high in these early patients, it is not unreasonable for doctors to decide to occasionally “violate” this early-implant-prohibition, in the case of individual patients who otherwise are indicated for ICDs and who appear likely to live for a substantial period of time if their sudden deaths can just be prevented.

Okay, fine.  So these particular “evidence-based” recommendations weren’t really all that evidence-based.  But surely most of them are, right?

Well, as it turns out, no.

This inconvenient truth (well, inconvenient for government bureaucrats and other folks who insist that it’s “my way or the highway” in medicine, anyway) was brought into sharp focus by the results of a recent study and an accompanying editorial published in the Archives of Internal Medicine.  The study by Lee and Vielemeyer looked at the overall basis and quality of the evidence behind 41 guidelines released by the Infectious Diseases Society of America between 1994 and 2010.  Within these guidelines they found and tabulated 4,218 individual recommendations, classifying each as falling into one of three categories:

Level I: Evidence from at least one properly randomized controlled trial;

Level II: Evidence from at least one well-designed clinical trial without randomization, case-controlled analytical studies or dramatic results from uncontrolled experiments; and

Level III: The opinions of “respected experts” or committees.

Their findings?  Only 14% of the recommendations in the guidelines (581 out of 4,218) were based upon properly randomized controlled trials, and an additional 31% were based on reasonably good Level II studies.  Over half of them (55%), however, were based on little more than “expert” opinion.  And how many authors does it take to create the average guideline?  The average number is 13, but it ranges from a low of just 4 to a high of 66.
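
To put those proportions in concrete terms, here is a minimal sketch of the arithmetic in Python. The Level I count (581) and the total (4,218) come from the study as described above; the Level II and Level III counts are back-calculated approximations from the reported 31% and 55%, so treat them as illustrative rather than exact.

    # Tabulating the reported evidence levels behind 4,218 guideline
    # recommendations. Only the Level I count is given explicitly above;
    # the other two counts are approximations inferred from the
    # reported percentages.
    counts = {
        "Level I (randomized controlled trials)": 581,
        "Level II (nonrandomized or observational studies)": 1302,  # ~31%
        "Level III (expert opinion)": 2335,                         # ~55%
    }

    total = sum(counts.values())  # 4,218
    for level, n in counts.items():
        print(f"{level}: {n} ({n / total:.1%})")

Running this prints roughly 13.8%, 30.9% and 55.4%, which round to the 14%, 31% and 55% figures reported.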

Holy cow.  So the vast majority of these recommendations are there because a dozen people think that they’re right, but don’t really know for sure?

Nor are these guidelines updated particularly often.  About half of them had been updated at some point, with an average time between updates of 6.7 years and a range of 1-15 years.  But even when the guidelines were updated, it was most often to add more recommendations based upon expert opinion, not to add new data from well-done clinical trials.

Although this particular study looked only at guidelines in the area of infectious diseases, its results echo those of another recent study that examined the basis of 7,000 recommendations made by guidelines in cardiology.  There, a median of 11% were based on data from randomized controlled trials (RCTs), while 48% were based on Level III data.

Of course, there are many things that ought to be done in medicine that will never be the subject of randomized clinical trials.  Sometimes common-sense “expertise” really is enough to do the trick.  For example, if expert opinion recommends that patients with contagious diseases be isolated from other patients, it’s pretty unlikely that there will ever be a double-blind, randomized controlled trial done to dispute it.  But as the editorial cited previously showed, using shaky clinical guidelines as a means of forcing doctors to behave in certain ways has backfired before:

For example, a quality-of-care [pay-for-performance – ed.] rule based on observational data that patients with community-acquired pneumonia receive antimicrobials within 4 hours of presentation was based on a recommendation in 2003 guidelines. One study showed that implementation of this rule resulted in an approximately 20% increase in the misdiagnosis of pneumonia and greater unnecessary exposure to antimicrobials with no decrease in mortality.

So what’s the take-home lesson then?  What are we supposed to think about all of this as patients, taxpayers, parents and children?

The most important message is that we ought to be extremely skeptical of anything or anyone who offers to evaluate the performance of any clinician based upon his or her overall level of adherence to any clinical guideline.  There are simply too many variables involved, none of which can be adequately gauged by any type of summary reporting.  Moreover, it invokes a level of confidence in the “correctness” of guidelines that is simply not justified by scientific and medical reality.  In fact, strict adherence to standardized guidelines ought to raise a red flag.

Consider this: any guideline is a numbers game.  Because guidelines cannot take into account the peculiarities of every particular patient, at least some of the recommendations in any clinical guideline will be inappropriate for some non-zero (and perhaps very substantial) percentage of cases.  If a clinician never deviates from a given guideline, it means that he’s almost certainly applying “cookbook medicine” rather than critical thinking.  Cookbook medicine is just as bad as it sounds.  It’s the application of the same recipe regardless of the clinical ingredients one is given.  Imagine having someone insist that you use a recipe for bread pudding when the ingredients you’re given consist of bread, garlic and butter.  You can follow the pudding instructions all you want, but you won’t end up with the desired result.
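
To make the arithmetic behind that “numbers game” concrete, here is a minimal sketch in Python. The 90% fit rate, the panel size and the deviation rate are illustrative assumptions, not figures from any study or guideline:

    # Toy model: assume a guideline's recommended action is appropriate
    # for 90% of the patients it covers (an assumption for illustration).
    guideline_fits = 0.90   # fraction of patients the guideline suits
    panel_size = 1000       # patients treated under the guideline

    # A clinician with 100% adherence treats everyone by the book,
    # so the poor-fit patients all get the wrong recommendation.
    mistreated_always = panel_size * (1 - guideline_fits)
    print(f"100% adherence: ~{mistreated_always:.0f} of {panel_size} "
          "patients receive an inappropriate recommendation")

    # A clinician who correctly spots and deviates for, say, 80% of the
    # poor-fit cases mistreats far fewer (assuming sound deviations).
    spotted = 0.80
    mistreated_judicious = panel_size * (1 - guideline_fits) * (1 - spotted)
    print(f"Judicious deviation: ~{mistreated_judicious:.0f} patients")

Under these assumptions, perfect adherence guarantees about 100 mistreated patients per 1,000, which is exactly the “cookbook medicine” problem described above.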

The uncomfortable fact is that anyone who treats every patient exactly the same way every time is almost certainly treating some of those patients inappropriately.  Some might even say that doing so amounts to medical malpractice.

And it’s certainly not “quality” medicine.

The second message really is a corollary of the first.  It should be illegal (and it is, at the very least, unethical) to reward or punish a clinician based upon his or her overall level of adherence to any clinical guideline.  To do so is a misuse, and indeed an abuse, of the tool.

And there is one final lesson.  All of those politicians, reporters and regulators who bandy about the term “evidence-based guidelines” almost certainly have no idea how much verifiable “evidence” those guidelines actually contain.

Doug Perednia is an internal medicine physician and dermatologist who blogs at Road to Hellth.

  • doc99

    The REAL Inconvenient Truth, Indeed.

  • http://www.pulmonarycentral.org Matt Hoffman

    Good post.
    Speaking of debates on IDSA guidelines, the ATS/IDSA guidelines for health care associated pneumonia were recently challenged in Lancet ID after the IMPACT-HAP authors argued they seemed to result in increased mortality. As you can guess, the ATS/IDSA guideline authors had something to say about that. You can read more here:
    http://www.pulmonarycentral.org/march-2011

  • http://holtzreport.com Andrew Holtz

    I need some help to understand what the problem is that this column is complaining about. Is it that a news story described the guidelines in a way that arguably overstated the underlying evidence? In other words… is the argument here that because some media reports may mis-characterize the evidence behind certain guidelines, therefore guidelines are bad? And should it really be illegal “to reward or punish a clinician based upon his or her overall level of adherence to any clinical guideline”? ANY guideline? Even those based on rock-solid evidence? If a clinician fails to adhere to infection control guidelines when placing central line catheters, should it be illegal to apply any sanctions, despite the clear hazard such practice presents to patients?

    I also wonder about the use of anecdotes pointing out the obvious fact that cases exist that don’t mesh perfectly with guidelines. Gosh, what a surprise. Of course, it would be easy to send a return volley of anecdotes about cases where ICDs or other interventions were applied in clearly inappropriate circumstances. Dueling anecdotes are of little value.

    The final slap against “All of those politicians, reporters and regulators” sounds like a call to return to some imagined past when doctors were free to practice any way they pleased and everyone else would just keep quiet and do what the doctor ordered. Comments that exude such hubris, absolutism and paternalism are not useful contributions to public discussion about the best ways to improve the quality of health care.

  • http://nostrums.blogspot.com Doc D

    It’s when “guidelines” become standards that we lose sight of the fact that patients are unique individuals, whose needs should come first.

  • Lynne

    “It is much more important to know what sort of a patient has a disease than what sort of a disease a patient has.” William Osler, still relevant after all these years. Good post.

  • http://www.BocaConciergeDoc.com Steven Reznick MD

    I am not sure which part of the excellent article by Dr. Perednia Mr. Holtz doesn’t understand. Clinical guidelines are being established with very little credible evidence in some areas. Those guidelines are being cited by politicians, insurers and the media as the gold standard for care, with deviations considered poor care and/or fraud, when in fact they may be nothing more than the clinical judgment of the patient’s physicians based on the patient’s unique situation, in an area where the guidelines were not established with evidence bearing on that unique situation. It is the same old story for clinicians: insurance companies, government agents and politicians consider all practitioners fraudulent and set the rules and regs as if all are, because they have never set up an infrastructure or checks and balances to actually catch true fraud until the perpetrators have ripped the system off for millions of dollars and moved on to a new storefront.
    Defining quality in medical care is still a very debatable, controversial issue that will take more time and discussion to begin to define. Fraud, on the other hand, is rampant because the insurers and the Federal Government have never put in the type of security that credit card companies use before they pay a bill.

  • http://holtzreport.com Andrew Holtz

    The original commentary sets up a false choice… as though the only options are extreme micromanagement or total independence. Neither extreme is reasonable. But by arguing that it should be illegal to impose any sanctions on physicians for non-adherence to any guidelines, no matter how well grounded in clinical evidence, the column pushes an absurd position.

    Unfortunately it is too easy to cite plenty of anecdotes of the errors and worse that occur when human beings operate without appropriate systems for oversight and guidance. Here’s just one fresh example: http://www.nytimes.com/2011/04/03/business/03implant.html

    Of course physicians should be able to adapt to the circumstances of individual patients. But to reject guidelines out of hand is an extreme position that fails to recognize that even physicians are not infallible.

    • http://www.roadtohellth.com Doug Perednia, M.D.

      Mr. Holtz – I appreciate what you are trying to say with respect to having “no rules”; however, the existence or absence of all of the facts is really what distinguishes clinical guidelines from clinical rules that can and should be observed by every clinician. A guideline is an attempt to suggest a course of reasonable action in the presence of uncertainty about many other factors that might be involved in a case. E.g., “if x, then y would be the best course of action 79% of the time in the presence or absence of definitive information about a, b and c.” In other words, “the available data say to do this, other things being equal”. But in the field things are rarely equal. That’s why guidelines can and should only be considered as suggestions – because the creators of the guideline could not and did not have access to facts that the actual physician and patient have in hand.
      This is very different from the rational application of definitive medical rules. E.g., “you should never give cyanide tablets to patients, because there is no known good that they can do, while they will kill virtually 100% of the people who receive them.” That is a very different thing from the guideline that “90% of the time, one will come out with a bad result if you give penicillin to a penicillin-allergic patient”. 90% is pretty “rock solid”, but what if no other antibiotics will work in this life-threatening infection? What if the patient’s religion forbids using other antibiotics, etc.?
      If society wishes to go over the particulars of every given case with a provider and reward him/her based on good or bad judgment, one could defend that course of action. I doubt it’s a good use of resources except in the rarest of competence hearings, but that’s a different matter. However, it is inexcusable to look at any provider’s gross statistics and say he should be rewarded or penalized based upon guideline compliance if we don’t know all the rest of the facts. A doctor who follows guidelines when it is inappropriate is just as guilty of bad medicine as one who does not follow them when the actions recommended are clinically appropriate.

  • Tom

    I don’t see a rejection of guidelines as the answer either, but one should recognize that guidelines are an incomplete answer, at least as they apply to patient care. To tie reimbursement to one’s compliance with guidelines that may well be inappropriate, if not poorly thought out, seems silly. The problem here is that a bureaucracy is trying to impose a measure on medical care that really isn’t suited to the measure they want to use. It should be recognized that there are many different paths to the same end, and that the use of guidelines relies on a dialogue with the patient. Doctors can’t force care on our patients anymore, and that’s what 100% compliance would require. Oh, and willful blindness to other solutions too.

  • doctor1991

    I think what the author is pointing out is that “evidence based” guidelines often don’t have much “evidence” behind them, but then become a “black and white” standard for practice, and non-adherence relegates one to being judged a “bad” doctor. Further, the power for setting these guidelines comes not from clinical outcomes in randomized trials, but from the prejudices and opinions of the people who sit on the committees that determine guidelines.