Evidence-based medicine and the limitations of research

Before medical school I worked in a research lab investigating the relationship between stress and memory. As a research assistant, I dutifully administered memory tests and collected saliva samples to measure cortisol levels.

My boss sent the data to her statistician for analysis, and was thrilled to find that, although most of the stress and memory variables studied showed no connection, there was one positive finding: a connection between hippocampal volume (the part of the brain associated with memory) and lifelong stress. I helped to write the article, and it was published in a major medical journal.

It wasn’t until years later that I realized my boss was engaging in an extremely common, but scientifically misleading, practice called data mining. Instead of having one specific hypothesis (e.g., that high cortisol levels would correlate with memory loss as measured on one specific test), she looked at numerous variables associated with stress and memory in the hope that some positive result could be dug out from the heaps of data.

The problem with this approach is that if you test enough variables, you will, by chance alone, find a result that meets the somewhat arbitrary criteria for “statistical significance” that will allow your work to be published as a positive finding. For example (and this is an oversimplified example), if you flip a quarter enough times, eventually you may get heads 10 times in a row, but this doesn’t mean you’ve got a special quarter.
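To make the multiple-comparisons problem concrete, here is a minimal simulation sketch (hypothetical numbers, not the lab’s actual data): it correlates twenty pure-noise “stress” measures against a random “memory score” and counts how many cross the conventional p < 0.05 threshold. On average, about one in twenty will, even though nothing real is being measured.

```python
# Hypothetical simulation of data mining / multiple comparisons.
# None of these variables measure anything real; hits are pure chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_subjects = 50    # assumed sample size
n_variables = 20   # number of unrelated "predictors" tested

memory_score = rng.normal(size=n_subjects)  # outcome: pure noise

false_positives = 0
for i in range(n_variables):
    predictor = rng.normal(size=n_subjects)         # another pure-noise measure
    r, p = stats.pearsonr(predictor, memory_score)  # test it anyway
    if p < 0.05:
        false_positives += 1
        print(f"variable {i}: r = {r:.2f}, p = {p:.3f}  <- 'significant' by chance")

print(f"{false_positives} of {n_variables} noise variables crossed p < 0.05")
```

Run enough of these tests without correcting for the number of comparisons, and a “publishable” finding is nearly guaranteed.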

The pressure in the scientific community to publish positive results is enormous. After all, negative results don’t often advance careers or bring fame and fortune to the person who publishes them. It is not uncommon for researchers to data mine, to do statistical acrobatics, and even in more extreme cases to flat-out create fake data in order to get published (the New York Times reported earlier this year that Diederik Stapel, a Dutch social scientist and academic star, had risen to fame by faking experiments that were published in major journals for over a decade).

As Stapel admitted to the reporter writing the story, “My behavior shows that science is not holy.”

In my article on the placebo effect, I referenced Irving Kirsch, who published a paper, and then a popular book, arguing that antidepressants act only as placebos in the vast majority of people. His work seemed well-researched and well-referenced, and he even contacted the FDA to get results of unpublished trials done by pharmaceutical companies (pharmaceutical companies generally avoid publishing results that show their medications don’t work — shocking, I know).

But then I watched a lecture by two psychiatrists and researchers who went through Kirsch’s work and pointed out a few significant problems with it. Kirsch argued that antidepressants don’t work for people with mild or moderate depression. However, he misclassified mild, moderate, and severe symptoms as mild, moderate, and severe depression, when, by definition, a person must have severe symptoms in order to be diagnosed with depression at all (if you only have a few severe symptoms, you would be diagnosed with mild depression).

When these psychiatrists reanalyzed Kirsch’s data using the correct classification, they found that antidepressants worked better than placebo for people with both moderate and severe depression, and that it was only in cases of mild depression where antidepressants didn’t beat the placebo.

Sound complicated? Well, the TV show 60 Minutes agreed. After Kirsch appeared on the show telling the world that antidepressants were no better than placebo, these two psychiatrists spent over an hour on the phone with the producers explaining the issues with his research.

Apparently they listened attentively, and agreed that the issue was more complicated than Kirsch made it seem, but decided not to have them on the show to counter his claims. Statistics is a complex science, but news outlets like to push out stories that are simple and sensationalized.

So there are problems with the way research is conducted and reported, but what’s a well-intentioned healthcare practitioner to do? It doesn’t make sense to completely ignore a large body of scientific research that can help us practice medicine in a safer and more effective way. Instead of ignoring scientific research, or preaching it from the rooftops as if it were gospel, I propose a middle way.

First, let’s be honest that much of what we currently believe to be true in medicine will be disproven at some point. I remember the first time I ever heard that something I had learned as dogma in medical school was totally wrong; it was only six months after I graduated.

This reality is perhaps more true in psychiatry than in any other field, where we deal with an incredibly complex organ (the brain), and an even more complex system that is connected to the brain in a way we don’t fully understand (the mind).

Second, knowing that our knowledge base is ever evolving, let’s not be afraid to use some common sense. Do I really need a research study showing that being an empathetic and supportive physician improves outcomes to believe that it’s true? Or if a patient tells me a natural supplement helps them, should I tell them, “No, you’re wrong” just because there’s not yet a double-blinded, placebo-controlled trial on it?

Which brings me to my next point: the concept of “evidence-informed” rather than “evidence-based” medicine, which I first heard proposed at a conference on integrative medicine. Integrative medicine is a field that takes a holistic view of the patient and combines traditional, allopathic treatments with complementary and alternative approaches, such as acupuncture, homeopathy, and herbal medicine.

Skeptical allopathic doctors will often say that there’s “no proof” these alternative therapies work, but I’d argue that’s not the whole story. Instead, when evaluating the evidence base of a treatment (whether traditional or alternative), let’s consider its 1) safety, 2) tolerability, 3) cost, 4) efficacy, and 5) convenience.

Examining a treatment like chemotherapy, for example, which is potentially dangerous, expensive, and inconvenient, we should require a lot of research showing that it is effective before recommending it. But take a treatment like a multivitamin, which is safe, cheap, without side effects, and convenient. Do we really need a dozen positive studies before recommending it to our patients?

Lastly, let’s be skeptical of people or organizations with obvious agendas, no matter how much research they quote. With thousands of articles published a day, it’s easy to find a few dozen that support your agenda, whatever it may be.

Recently I read a book by the journalist Robert Whitaker called Anatomy of an Epidemic: Magic Bullets, Psychiatric Drugs, and the Astonishing Rise of Mental Illness in America. He argued, persuasively, that many psychiatric medications, most notably antipsychotics, are more harmful than helpful, and I naively assumed he was presenting an accurate and complete picture of the relevant literature.

At a few points I found myself rolling my eyes at his biased and selective stories of patients who were miraculously cured when they stopped taking their medications, but it wasn’t until I got to the chapter on lithium and bipolar disorder that I had to put down the book in disgust.

It was here that it became abundantly clear that Whitaker had selectively cited articles that supported his agenda, while leaving out dozens of well-known and well-done studies that presented a different, and more nuanced, point of view. Sigh. I guess sensationalism must sell more books than nuance.

At the opposite end of the spectrum, it’s probably wise to be skeptical of research presented and published by pharmaceutical companies. Earlier this year I was at the annual conference of the American Psychiatric Association, and during one of the breaks found myself wandering to the booth of Otsuka America Pharmaceuticals because they were offering free ice cream (yes, I’m easy to manipulate).

As I waited in line for my ice cream, I tried to avoid making eye contact with any of the reps, but one started chatting with me anyway. I told him I didn’t have much experience working with reps because they’re not allowed on campus at the academic medical center where I work.

He looked at me, aghast, and asked, “But how do you get education, then?”

I was speechless. Umm … residency? Conferences? Lectures? Articles? Books? Supervisors? Mentors? I think I’m fine on the education front, thanks.

So in sum, instead of practicing “evidence-based medicine” as if the evidence were dogma, let’s use research to move the practice of medicine forward while also remembering its limitations, including biases against publishing negative results and the agendas of the people conducting or reporting the research.

Let’s consider practicing “evidence-informed medicine” instead, where we consider factors like safety, tolerability, and cost when deciding how much evidence we need to see before recommending a treatment.

And lastly, let’s stay humble, respect how complex the body and brain are, and remember that there’s much that we still don’t know.

Elana Miller is a psychiatrist who blogs at Zen Psychiatry.



  • Ron Smith

    Hi, Elana.

    Very, very good article. Well done. Being a 30-year veteran, and thus a wizened old codger, I now *always* consider spectacular study claims suspect. Three studies in particular come to mind.

    The first was the one published about 10 or 15 years ago associating prone sleeping position in newborns and young infants with SIDS. The recommendation to sleep these children supine became standard. I was hopeful, as many were.

    Since then we have seemingly done little to eradicate SIDS other than create a subset of children requiring the services of a new, costly medical industry needed to treat the resulting plagiocephaly.

    The original study was actually done in five countries. The US was not one of those countries. Having been on almost a dozen overseas medical mission trips, I’ve seen the various ways that different cultures sleep their children.

    As time went on, I became suspicious of this recommendation. I make the recommendation but with that careful caveat. My own grandchildren did sleep prone after this same discussion with my daughter.

    The now notorious claim that the MMR vaccine causes autism is one that most know. If you don’t, you’re probably not in pediatrics (or maybe you live under a… well, you know).

    Andrew Wakefield’s apparently fraudulent research caused the Lancet to repudiate the results that they had published some 13 or 14 years earlier.

    Research should follow scientific method: hypothesis, method, observation, and analysis. Studies that make claims should be repeated by independent, unassociated parties for confirmation and further analysis.

    This brings to mind a 2009 study published with the conclusion that treating fever either before or in the 24 hours or so after vaccination results in decreased immunologic take. I am making only a cautious recommendation to parents at this point because, like I said, I’m a suspicious old codger. This study may have significance, but without independent confirmation, it may really turn out to be so much bull.

    It seems those of us dealing with the patients are sometimes just as taken in by misinformation as the patients who go to Google for their medical sources.

    Again, thanks for the great article. Warmest regards and Merry Christmas!

    Ron Smith, MD
    www (adot) ronsmithmd (adot) com

  • adh1729

    “I naively assumed he was presenting an accurate and complete picture of the relevant literature.” If Miller is such a naive psychiatrist, and so ignorant of the psychiatric literature that a journalist(!) could lead her astray — then she should stop offering opinions in public.

    “At a few points I found myself rolling my eyes at his biased and selective stories of patients who were miraculously cured when they stopped taking their medications” — Miller has apparently missed one of the main points of the book: Whitaker details, ad nauseam, how patients become dependent on psychiatric medications, and how they often worsen, markedly, when they stop the medications. He urged that they never start the medications in the first place. (The book is completely devoid of miracles, BTW.)

  • rbthe4th2

    Well balanced, well said article.

  • Jim Sweet

    Albeit a considerably smaller exaggeration than the proponents of multivitamins have engaged in.