Before medical school I worked in a research lab investigating the relationship between stress and memory. As a research assistant, I dutifully administered memory tests and collected saliva samples to measure cortisol levels.
My boss sent the data to her statistician for analysis, and was thrilled to find that despite the lack of connection between most of the variables of stress and memory studied, there was one positive finding — a connection between hippocampal volume (the part of the brain associated with memory) and lifelong stress. I helped to write the article, and it was published in a major medical journal.
It wasn’t until years later that I realized my boss was engaging in an extremely common, but scientifically misleading, practice called data mining. Instead of having one specific hypothesis (i.e. that high cortisol levels would correlate with memory loss as measured on one specific test), she looked at numerous variables associated with stress and memory with the hope that some positive result could be dug out from the heaps of data.
The problem with this approach is that if you test enough variables, you will, by chance alone, find a result that meets the somewhat arbitrary criteria for “statistical significance” that will allow your work to be published as a positive finding. For example (and this is an oversimplified example), if you flip a quarter enough times, eventually you may get heads 10 times in a row, but this doesn’t mean you’ve got a special quarter.
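The multiple-comparisons problem described above is easy to see in a quick simulation (a hypothetical sketch, not anything from the actual study). Under the null hypothesis a test's p-value is uniformly distributed, so each test has a 5 percent chance of clearing the 0.05 bar by luck; test twenty unrelated variables and a "positive finding" becomes the likely outcome:

```python
import random

random.seed(42)

def chance_of_false_positive(n_variables, alpha=0.05, n_trials=10_000):
    """Estimate the probability that at least one of n_variables
    independent null tests comes out 'significant' by chance alone."""
    hits = 0
    for _ in range(n_trials):
        # Under the null hypothesis, each p-value is uniform on [0, 1],
        # so it falls below alpha with probability alpha.
        if any(random.random() < alpha for _ in range(n_variables)):
            hits += 1
    return hits / n_trials

# One pre-registered hypothesis: about a 5% false-positive rate.
print(chance_of_false_positive(1))
# Twenty variables fished from the same data set: roughly a 64% chance
# (1 - 0.95**20) that something crosses the significance threshold.
print(chance_of_false_positive(20))
```

The analytic answer is 1 − 0.95²⁰ ≈ 0.64, which is why a single "significant" result dug out of many comparisons carries so little evidential weight.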
The pressure in the scientific community to publish positive results is enormous. After all, negative results don’t often advance careers or bring fame and fortune to the person who publishes them. It is not uncommon for researchers to data mine, to do statistical acrobatics, and even in more extreme cases to flat-out create fake data in order to get published (the New York Times reported earlier this year that Diederik Stapel, a Dutch social scientist and academic star, had risen to fame by faking experiments that were published in major journals for over a decade).
As Stapel admitted to the reporter writing the story, “My behavior shows that science is not holy.”
In my article on the placebo effect, I referenced Irving Kirsch, who published a paper, and then a popular book, arguing that antidepressants act only as placebos in the vast majority of people. His work seemed well-researched and well-referenced, and he even contacted the FDA to get results of unpublished trials done by pharmaceutical companies (pharmaceutical companies generally avoid publishing results that show their medications don’t work — shocking, I know).
But then I watched a lecture by two psychiatrists and researchers who went through Kirsch’s work and pointed out a few significant problems with it. Kirsch argued that antidepressants don’t work for people with mild or moderate depression. However, he misclassified mild, moderate, and severe symptoms as mild, moderate, and severe depression, when, by definition, a person must have severe symptoms in order to be diagnosed with depression at all (if you only have a few severe symptoms, you would be diagnosed with mild depression).
When these psychiatrists reanalyzed Kirsch’s data using the correct classification, they found that antidepressants worked better than placebo for people with both moderate and severe depression, and that it was only in cases of mild depression where antidepressants didn’t beat the placebo.
Sound complicated? Well, the TV show 60 Minutes agreed. After Kirsch appeared on the show telling the world that antidepressants were no better than placebo, these two psychiatrists spent over an hour on the phone with the producers explaining the issues with his research.
Apparently the producers listened attentively, and agreed that the issue was more complicated than Kirsch made it seem, but decided not to have the psychiatrists on the show to offer a counterpoint. Statistics is a complex science, but news outlets like to push out stories that are simple and sensationalized.
So there are problems with the way research is conducted and reported, but what’s a well-intentioned healthcare practitioner to do? It doesn’t make sense to completely ignore a large body of scientific research that can help us practice medicine in a safer and more effective way. Instead of ignoring scientific research, or preaching it from the rooftops as if it were gospel, I propose a middle way.
First, let’s be honest that much of what we currently believe to be true in medicine will be disproven at some point. I remember the first time I ever heard that something I had learned as dogma in medical school was totally wrong; it was only six months after I graduated.
This reality is perhaps more true in psychiatry than in any other field, where we deal with an incredibly complex organ (the brain), and an even more complex system that is connected to the brain in a way we don’t fully understand (the mind).
Second, knowing that our knowledge base is ever evolving, let’s not be afraid to use some common sense. Do I really need a research study showing that being an empathetic and supportive physician improves outcomes to believe that it’s true? Or if a patient tells me a natural supplement helps them, should I tell them, “No, you’re wrong” just because there’s not yet a double-blinded, placebo-controlled trial on it?
Which brings me to my next point — the concept of “evidence-informed” rather than “evidence-based” medicine, which I first heard proposed at a conference on integrative medicine. Integrative medicine is a field that takes a holistic view of the patient, and combines traditional, allopathic treatments with complementary and alternative approaches, such as acupuncture, homeopathy, herbal medicine, etc.
Skeptical allopathic doctors will often say that there’s “no proof” these alternative therapies work, but I’d argue that’s not the whole story. Instead, when evaluating the evidence base of a treatment (whether traditional or alternative), let’s consider its 1) safety, 2) tolerability, 3) cost, 4) efficacy, and 5) convenience.
Examining a treatment like chemotherapy, for example, which is potentially dangerous, expensive, and inconvenient, we should require a lot of research showing that it is effective before recommending it. But take a treatment like a multivitamin, which is safe, cheap, without side effects, and convenient. Do we really need a dozen positive studies before recommending it to our patients?
Lastly, let’s be skeptical of people or organizations with obvious agendas, no matter how much research they quote. With thousands of articles published a day, it’s easy to find a few dozen that support your agenda, whatever it may be.
Recently I read a book by the journalist Robert Whitaker called Anatomy of an Epidemic: Magic Bullets, Psychiatric Drugs, and the Astonishing Rise of Mental Illness in America. He argued, persuasively, that many psychiatric medications, most notably antipsychotics, are more harmful than helpful, and I naively assumed he was presenting an accurate and complete picture of the relevant literature.
At a few points I found myself rolling my eyes at his biased and selective stories of patients who were miraculously cured when they stopped taking their medications, but it wasn’t until I got to the chapter on lithium and bipolar disorder that I had to put down the book in disgust.
It was here that it became abundantly clear that Whitaker had selectively cited articles that supported his agenda, while leaving out dozens of well-known and well-done studies that presented a different, and more nuanced, point of view. Sigh. I guess sensationalism must sell more books than nuance.
At the opposite end of the spectrum, it’s probably wise to be skeptical of research presented and published by pharmaceutical companies. Earlier this year I was at the annual conference of the American Psychiatric Association, and during one of the breaks found myself wandering to the booth of Otsuka America Pharmaceuticals because they were offering free ice cream (yes, I’m easy to manipulate).
As I waited in line for my ice cream, I tried to avoid making eye contact with any of the reps, but one started chatting with me anyway. I told him I didn’t have much experience working with reps because they’re not allowed on campus at the academic medical center where I work.
He looked at me, aghast, and asked, “But how do you get education, then?”
I was speechless. Umm … residency? Conferences? Lectures? Articles? Books? Supervisors? Mentors? I think I’m fine on the education front, thanks.
So in sum, instead of practicing “evidence-based medicine” as if the evidence were dogma, let’s use research to move the practice of medicine forward while also remembering its limitations, including biases against publishing negative results and the agendas of the people conducting or reporting the research.
Let’s consider practicing “evidence-informed medicine” instead, where we consider factors like safety, tolerability, and cost when deciding how much evidence we need to see before recommending a treatment.
And lastly, let’s stay humble, respect how complex the body and brain are, and remember that there’s much that we still don’t know.
Elana Miller is a psychiatrist who blogs at Zen Psychiatry.