A new study confirms something we here at HealthNewsReview.org have been emphasizing for many years: Health news stories often overstate the evidence, inaccurately claiming that one thing causes another, whether it's that drinking alcohol might help you live longer, facial exercises may keep your cheeks perky, or diet soda might be a direct line to dementia.
The researchers looked at the 50 “most-shared academic articles and media articles covering them” in 2015, according to data from the NewsWhip database. Seven of the 50 studies were randomized controlled trials, the gold standard for “causal inference” in medicine (meaning one can reasonably infer that an intervention caused an outcome, though not always).
The rest were observational studies, which is what it sounds like: Observing people and then seeing what happens to them (or what happened to them, if it’s looking at data collected in the past). They are not true experiments, with a control or placebo group. Sometimes, with lots of observational data, after long-term, repeated findings in thousands of people from different studies that used terrific methodology, the evidence becomes so strong that it can make sense to change public health or medical practice on observational data alone. Smoking and lung cancer is one such case. But it’s also clear that the literature has become littered with poorly done observational studies that make causal claims that cannot be supported.
They found a “large disparity” between what the news stories said and what the research showed:
- “44% of the media articles used causal language that was stronger than the academic articles” (and many of those studies were overstated to start with).
- “58% of the media articles contained at least one substantial inaccuracy about the study.”
X “may be caused by” Y
One way that news stories can overreach is by inaccurately using language that implies X caused Y:
- “may be caused by”
- “seems to result in”
- “is caused by”
- “is due to”
In many cases, the language needed to be dialed back to better describe the research. For tips on how to do this, see:
- Observational studies: Does the language fit the evidence? Association vs. causation
- 5 tips for writing better health news headlines
Lead study author Noah Haber, a postdoctoral researcher at the University of North Carolina at Chapel Hill, said he’s always been interested in how research may or may not determine that a health exposure leads to an outcome, a process he calls “causal inference.”
At the same time, he noticed that many of his friends were sharing health news articles on Facebook and Twitter that didn’t accurately describe the research (as in “new study shows drinking red wine seems to result in people living longer”).
“This study takes that process and takes it to research-driven extremes, where we’re looking at what is being shared across all of the Internet in 2015,” he explained.
Aligns closely with our own reviews of news stories
Among the popular stories in 2015, some usual suspects made the list: diet, coffee/caffeine, pregnancy/childbirth, green space, medical devices/treatments, pets, and air pollution were toward the top. Some interesting outliers included the impact of horror movies, birth order, and weekend hospital admissions. These exposures were linked to a wide range of outcomes, including mood/mental health, cardiovascular disease, IQ, mortality and BMI (among many others). All of the data can be seen here.
Well-known institutions and journals were often the sources. For example, studies from Harvard University were covered in 18% of the news stories. The stories that were shared most often were produced by outlets we review regularly, including CBS News, the New York Times, The Guardian, Los Angeles Times and NPR.
The finding that 58% of the stories inaccurately reported the evidence closely matches our own figure of 61% for the more than 2,500 news stories we’ve reviewed and assessed on our evidence quality criterion.
Where does the misinformation start?
Haber emphasized that the study didn’t conclusively pinpoint who’s to blame for the misinformation. The published studies themselves slightly overstated the evidence, for example. And, as we’ve learned from reviewing news releases, publicity is a common source of misinformation. Haber’s work didn’t look at news releases, though he hopes to investigate that in the future.
Ideally, well-trained journalists should scrutinize the news releases and the original research to look for problems that could mislead readers. That is the role of the journalist, after all.
All of this matters, he said, because people may make health decisions based on the misinformation they’ve read, a problem we are currently exploring in our series, Patient Harms from Misleading Media.
The media attention lavished on these topics may also have other unintended consequences. For example, it may encourage more researchers to study issues that they see grabbing headlines in major news outlets, since those questions may be viewed as having greater public importance and greater potential to advance careers.
“It also crowds out a lot of the good science information and changes the landscape of what people are producing,” Haber said. “There’s a feedback loop in these things.”
Joy Victory is deputy managing editor, HealthNewsReview.org, and can be reached on Twitter @thejoyvictory.
Image credit: Shutterstock.com