A few months ago marked the 20th anniversary of the launch of evidence based medicine (EBM). Now seems a good time for a retrospective. After twenty years what does EBM mean? Where has it taken us? What are the distortions and unintended consequences? You might be surprised.
What I intend to do is start with a little of the history of EBM, talk about the essential notion as originally conceived by the founders, and mention a little about the process as applied to clinical practice today. Then I’ll deal with some distortions and misconceptions as well as some surprising unintended consequences.
EBM was launched on November 4, 1992 with the publication of this paper in JAMA. I don’t recall noticing the paper coming out but when discussions of EBM as a new movement became popular I wondered what was so new about it. After all I thought I had always made clinical decisions based on evidence. I recalled so many of the clinical trials I had applied in my practice through the years. Practicing doctors for the most part, I thought, were conscientious in applying this evidence to their clinical decisions.
What I later learned was that it was not the reliance on evidence but the manner in which it was applied that was new. EBM sought to take things to a new level with a more systematic and rigorous approach to the application of research data.
Very soon after the publication of the JAMA paper, however, misconceptions began to fly. These mainly centered around the idea that EBM was “one size fits all” or “cookie cutter” medicine. That distortion persists today. More on that below. But it prompted some of the founders to publish another paper, which I regard as the second founding document, a paper in BMJ titled, Evidence based medicine: what it is and what it isn’t. In it we find what is most widely quoted today among aficionados as the essential definition:
It’s about integrating individual clinical expertise and the best external evidence …
Evidence based medicine is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients.
Note that EBM is about individual expertise and judgment and about the individual patient.
What about the process? It’s useful to take a look at that because it can be rather onerous and I suspect that not many of us today practice it in its pure form chiefly because of the time it takes. I covered the process of EBM in a couple of long posts a few years ago.
Suffice it to say here that it starts with asking a clinical question focused on your patient and converting it into a set of search terms that can be applied to a search engine such as PubMed. Citations thus retrieved are appraised according to a set of criteria after the model of the EBM pyramid. There are several variations on the basic pyramid theme, an example of which is shown here.
Using the criteria the list of citations is trimmed down to those articles that are current and deemed of sufficient quality. The final step involves taking the information thus selected and applying it to the patient’s idiosyncrasies including circumstances, preferences and values.
That last piece is often lost in discussions today when EBM is invoked to guide public policy. Herein lies one of the more pervasive distortions of EBM. Public policy discussions about the direction of health care got into high gear early in the Obama administration when the gimmicky phrase “comparative effectiveness research” entered the lexicon and debates raged about health care reform.
There were diverse arguments about the direction health care should go but many of them sought to homogenize it, citing such things as the Dartmouth Atlas data and other sources which showed variation in delivery. The idea was that through various forms of central control one could minimize the variation which after all many believed to be the enemy of quality and efficiency. Pathways, performance measures, positive incentives and negative incentives all seek to do this and many proponents try to bolster their arguments by appealing to EBM. They thereby only reflect their ignorance about what EBM is. Maybe you like central control and can make some good arguments for it but you can’t invoke EBM to support those arguments. EBM is diametrically opposed to central control. If you’ve any doubt about whether EBM concerns primarily individual decision makers and individual patients, watch this interview with David Sackett, one of the movement’s founders:
I don’t propose to argue against central control here. Maybe it has something to recommend it, but the more external forces encroach on your decision making, the less you will practice EBM. Just by definition.
But even when EBM is practiced according to its core principles there are unintended consequences. These relate to a fatal flaw of EBM found in that pyramid I referenced earlier. What I am referring to is that though not intended by its founders, application of EBM has led to numerous promotions of quackery. I have cited examples of this many times before as have David Gorski and other bloggers. It has to do with placing scientific plausibility at the bottom of the EBM pyramid, as I once explained:
… that this failure of EBM is more than just one of occasional unintended consequences. While it may have been unintended in the minds of the original founders, now it’s systematic.
The problem is illustrated by the evidence pyramids, which are at the very core of the teaching about EBM. There are numerous versions of these pyramids, and Gorski links to a few examples, but what they have in common is that they put biologic plausibility (variously termed basic science, physiologic rationale, etc) at the bottom if they include it at all.
A casual observer might think that putting basic scientific rationale at the bottom means it’s intended to be foundational (remember the old stepped care diagrams for treatment of hypertension and rheumatoid arthritis?). But that’s not the way the EBM hierarchies of evidence are designed. Instead, higher levels on the pyramid trump those below. Students of EBM are taught to start at the top of the pyramid and “drill down,” level-by-level, until they find the evidence they want, and stop there.
Kimball Atwood had a good example of how this has been applied to promote nutty ideas like homeopathy, in which occasional weak clinical trials that appear to favor homeopathy through chance variation trump overwhelming basic science evidence against it:
When this sort of evidence is weighed against the equivocal clinical trial literature, it is abundantly clear that homeopathic “remedies” have no specific, biological effects. Yet EBM relegates such evidence to “Level 5”: the lowest in the scheme. How persuasive is the evidence that EBM dismisses? The “infinitesimals” claim alone is the equivalent of a proposal for a perpetual motion machine. The same medical academics who call for more studies of homeopathy would be embarrassed, one hopes, to be found insisting upon “studies” of perpetual motion machines. Basic chemistry is still a prerequisite for medical school, as far as I’m aware.
The way we got into this fix is that the founders of EBM realized we couldn’t rely on scientific rationale alone. That had gotten us into trouble in the past. High-level, outcome-based evidence needed to be placed in the mix to address that problem. But what the EBM apologists left out was the fact that while scientific plausibility couldn’t be the sole basis for making clinical decisions, it couldn’t be ignored either. It needed to be a prerequisite.
Solutions? A group of physicians and scientists have recognized that EBM’s unintended consequence could be avoided by giving basic scientific plausibility its due place while following the other EBM principles. You can find out lots more about that at their website.
So what do you think about EBM 20 years later? The verdict is mixed but it hasn’t lived up to its promise. Maybe with the help of science based medicine (SBM) we’ll get there.
Robert Donnell is a hospitalist who blogs at Notes from Dr. RW.