Medical guidelines are well on their way to becoming the Law of the Land. Dr. Rich over at The Covert Rationing Blog featured a recent post that highlighted this point.
When a study published in the Journal of the American Medical Association showed that nearly 23% of patients receiving implantable cardioverter-defibrillators (ICDs) received them under circumstances that did not match up with the recommended “guidelines” for doing so, this breach of etiquette was characterized as being the moral equivalent of fraud:
CNN.com put it like this: “Of more than 100,000 people who received ICDs, almost 23% did not need them according to evidence-based guidelines.” As the lead investigator of the JAMA study told CNN, “It’s a lot of people who are getting defibrillators who may not need them.”
As Dr. Rich himself then observed:
“Guidelines” implies, literally, a guide, a signpost, a general set of factors that one ought to take into account when making specific decisions regarding specific individual patients. Guidelines are a strong set of recommendations which (all other things being equal) one ought to follow in the majority of cases, and when one chooses not to follow them, one ought to have a good reason for making that choice.
When the use of clinical guidelines is considered in view of this now-quaint notion, one does not expect 100% compliance. After all, patients being patients, they bring to the table lots and lots of special considerations one ought to take into account when deciding how to apply guidelines. Depending on the level of evidence upon which a certain set of guidelines were established, and considering the array of variations on the mean which patients still insist on bringing to a doctor’s notice, the optimal applicability of a given set of guidelines to a given population of patients ought to look something like a bell-shaped-curve. It is not immediately obvious, for instance, that a rate of compliance with a set of guidelines of 77.5% is simply too low. Indeed, a rate of compliance with your typical clinical guidelines well north of that number might imply, when one fully considers the matter, an abrogation of the physician’s duty to make informed clinical decisions based on ALL available evidence, including those introduced by an individual patient’s specific circumstances.
As a matter of fact, the very guidelines regarding ICDs which doctors are now accused of abusing admit that “the ultimate judgment regarding care of a particular patient must be made by the physician and the patient in light of all of the circumstances presented by that patient.”
In this light, a very striking feature of this new report is its baseline assertion that the strict following of guidelines is “evidence-based” practice, while any deviation is “non-evidence-based;” that is, by implication at least, it is good medicine vs. bad medicine. And so, “only” 77.5% of ICD implanters are practicing good medicine, and that is clearly a major concern – one for which urgent solutions should be sought.
The good Dr. Rich then went on to point out a number of circumstances under which it might be perfectly rational to alter the recommended timing of ICD implantation, such as not wishing to subject a patient who needs a pacemaker anyway to two separate procedures, evidence suggesting a higher-than-expected risk of sudden death, or even simply allowing the patient to receive the device before they lose their insurance.
But while there are any number of “justifiable” reasons to violate clinical guidelines (just as there are probably lots of “justifiable” reasons one could come up with for not charging the suggested retail price, failing to read books on the suggested reading list, or even driving 60 in a 55 mph zone), it is at least as interesting that CNN characterized these particular guidelines as being “evidence-based”. I asked Dr. Rich whether the ICD guidelines in question, specifically the recommendations about implant timing that were being “violated,” were actually evidence-based. His reply? Not really.
… the only trial designed to test ICD implantation immediately after heart attacks was the DINAMIT study in 2004. This study enrolled patients who were at particularly high overall cardiovascular risk, both arrhythmically and hemodynamically. It showed no benefit in overall survival with ICDs, even though the rate of sudden death was substantially reduced (patients died of pump failure). So it was a negative trial, but again, enrolled only a sick subset of post-heart attack patients. Its results are not generalizable.
So what we know is:
– ICDs improve survival after [heart attacks] or heart failure when the heart’s ejection fraction [a measure of heart function – ed] is < 35%
– But ICDs have not been tested in these patients right after heart attacks (except for the poorly-designed DINAMIT trial) or right after heart failure diagnosis
– So the “evidence-base” that says don’t implant in these patients is not positive evidence (we tested it and it doesn’t work), but negative evidence (it hasn’t been tested)
– Since the risk of sudden death is particularly high in these early patients, it is not unreasonable for doctors to decide to occasionally “violate” this early-implant prohibition, in the case of individual patients who otherwise are indicated for ICDs and who appear likely to live for a substantial period of time if their sudden deaths can just be prevented.
Okay, fine. So these particular “evidence-based” recommendations weren’t really all that evidence-based. But surely most of them are, right?
Well, as it turns out, no.
This inconvenient truth (well, inconvenient for government bureaucrats and other folks who insist that it’s “my way or the highway” in medicine, anyway) was brought into sharp focus by the results of a recent study and an accompanying editorial published in the Archives of Internal Medicine. The study, by Lee and Vielemeyer, looked at the overall basis and quality of the evidence behind 41 guidelines released by the Infectious Diseases Society of America between 1994 and 2010. Within these guidelines they found and tabulated 4,218 individual recommendations, and classified each as falling into one of three categories:
Level I: Evidence from at least one properly randomized controlled trial;
Level II: Evidence from at least one well-designed clinical trial without randomization, case-controlled analytical studies or dramatic results from uncontrolled experiments; and
Level III: The opinions of “respected experts” or committees.
Their findings? Only 14% of the recommendations in the guidelines (581 out of 4,218) were based upon properly randomized controlled trials (Level I), and an additional 31% were based on pretty good Level II studies. However, over half of them (55%) were based on little more than “expert” opinion (Level III). And how many authors does it take to create the average guideline? The average number is 13, ranging from a low of just 4 to a high of 66.
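To put those percentages in concrete terms, here is a quick back-of-the-envelope check. Note that only the Level I count (581) is reported directly; the Level II and Level III counts below are back-calculated from the published percentages, so they are approximations rather than figures from the paper itself:

```python
# Back-of-the-envelope breakdown of the 4,218 IDSA recommendations.
# Only the Level I count (581) is reported directly; the Level II and
# Level III counts are back-calculated from the published percentages,
# so treat them as rough approximations.

total = 4218
level_1 = 581                                            # backed by randomized controlled trials

print(f"Level I:   {level_1:5d}  ({level_1 / total:.1%})")   # ~13.8%, i.e. the reported 14%
print(f"Level II:  {round(0.31 * total):5d}  (~31%)")         # roughly 1,300 recommendations
print(f"Level III: {round(0.55 * total):5d}  (~55%)")         # roughly 2,300 recommendations
```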
Holy cow. So the vast majority of these recommendations are there because a dozen people think that they’re right, but don’t really know for sure?
Nor are these things updated particularly often. About half of the guidelines had been updated at some point, with an average time between updates of 6.7 years and a range of 1-15 years. But even when the guidelines were updated it was most often to add more recommendations based upon expert opinions – not to add great new data from well-done clinical trials.
Although this particular study looked just at guidelines in the area of infectious diseases, its results echo those of another recent study that looked at the basis of 7,000 recommendations made by guidelines in cardiology. Here, a median of 11% were based on data from randomized controlled trials (RCTs), while 48% were based on Level III data.
Of course, there are many things that ought to be done in medicine that will never be the subject of randomized clinical trials. Sometimes common-sense “expertise” really is enough to do the trick. For example, if expert opinion recommends that patients with contagious diseases be isolated from other patients, it’s pretty unlikely that there will ever be a double-blind, randomized controlled trial done to dispute it. But as the editorial cited previously showed, using shaky clinical guidelines as a means of forcing doctors to behave in certain ways has backfired before:
For example, a quality-of-care [pay-for-performance – ed.] rule based on observational data that patients with community-acquired pneumonia receive antimicrobials within 4 hours of presentation was based on a recommendation in 2003 guidelines. One study showed that implementation of this rule resulted in an approximately 20% increase in the misdiagnosis of pneumonia and greater unnecessary exposure to antimicrobials with no decrease in mortality.
So what’s the take-home lesson then? What are we supposed to think about all of this as patients, taxpayers, parents and children?
The most important message is that we ought to be extremely skeptical of anything or anyone who offers to evaluate the performance of any clinician based upon his or her overall level of adherence to any clinical guideline. There are simply too many variables involved, none of which can be adequately gauged by any type of summary reporting. Moreover, grading doctors this way invokes a level of confidence in the “correctness” of guidelines that is simply not justified by scientific and medical reality. In fact, strict adherence to standardized guidelines ought to raise a red flag.
Consider this: any guideline is a numbers game. Because no guideline can take into account the peculiarities of every particular patient, at least some of the recommendations in any clinical guideline will be inappropriate for some non-zero (and perhaps very substantial) percentage of cases. If a clinician never deviates from a given guideline, he or she is almost certainly applying “cookbook medicine” rather than critical thinking. Cookbook medicine is just as bad as it sounds: the application of the same recipe regardless of the clinical ingredients one is given. Imagine having someone insist that you use a recipe for bread pudding when the ingredients you’re given consist of bread, garlic and butter. You can follow the pudding instructions all you want, but you won’t end up with the desired result.
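To make the “numbers game” concrete, here is a small sketch. The exception rates and caseloads are purely hypothetical, chosen only to illustrate the arithmetic, not drawn from any study:

```python
# A toy model of the guideline "numbers game".
# Assumption (purely illustrative): each patient independently has
# probability p of being a legitimate exception to the guideline.
# If a clinician sees n such patients, the chance that *none* of them
# genuinely warrants a deviation is (1 - p) ** n.

def prob_perfect_adherence_is_right(p: float, n: int) -> float:
    """Probability that zero out of n patients warrant deviating."""
    return (1 - p) ** n

for p in (0.05, 0.10, 0.20):       # hypothetical exception rates
    for n in (20, 100):            # hypothetical caseloads
        print(f"exception rate {p:.0%}, {n:3d} patients: "
              f"P(100% adherence is appropriate) = {prob_perfect_adherence_is_right(p, n):.4f}")
```

Even at a modest 5% exception rate, perfect adherence across 100 patients would be clinically appropriate well under 1% of the time; a perfect adherence score is far more likely to reflect cookbook medicine than a miraculous run of textbook patients.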
The uncomfortable fact is that anyone who treats every patient exactly the same way every time is almost certainly treating some of those patients inappropriately. Some might even say that doing so amounts to medical malpractice.
And it’s certainly not “quality” medicine.
The second message really is a corollary of the first. It should be illegal (and it is, at the very least, unethical) to reward or punish a clinician based upon his or her overall level of adherence to any clinical guideline. To do so is a misuse, and indeed an abuse, of the tool.
And there is one final lesson. All of those politicians, reporters and regulators who bandy about the term “evidence-based guidelines” almost certainly have no idea how much verifiable “evidence” those guidelines actually contain.
Doug Perednia is an internal medicine physician and dermatologist who blogs at Road to Hellth.