Bias in top-tier academic journals

Venture capitalist Bruce Booth has a provocative post, based on his own experience, about how reproducible those papers really are – the ones that make you say, “Someone should try to start a company around that stuff”:

The unspoken rule is that at least 50% of the studies published even in top-tier academic journals – Science, Nature, Cell, PNAS, etc… – can’t be repeated with the same conclusions by an industrial lab. In particular, key animal models often don’t reproduce. This 50% failure rate isn’t a data-free assertion: it’s backed up by dozens of experienced R&D professionals who’ve participated in the (re)testing of academic findings. This is a huge problem for translational research and one that won’t go away until we address it head on.

Why such a high failure rate? Booth’s own explanation is clearly the first one to take into account – that academic labs live by results. They live by publishable, high-impact-factor-journal results, grant-renewing tenure-application-supporting results. And it’s not that there’s a lot of deliberate faking going on (although there’s always a bit of that to be found), as much as there is wishful thinking and running everything so that it seems to hang together just well enough to get the paper out. It’s a temptation for everyone doing research, especially tricky cutting-edge stuff that fails a lot of the time anyway. Hey, it did work that time, so we know that it’s real – those other times it didn’t go so smoothly, well, we’ll figure out what the problems were with those, but for now, let’s just write this stuff up before we get scooped.

Even findings that turn out to be (mostly) correct often aren’t that reproducible – at least, not reproducible enough to raise money on. Booth’s advice for people in that situation is to check things out very carefully: if the new technology is so flaky that only a few people can get it to work, it’s not ready for the bright lights yet.

He also has some interesting points on “academic bias” versus “pharma bias”. You hear a lot about the latter, to the point that some people consider any work funded by the drug industry to be de facto tainted. But everyone has biases. Drug companies want to get compounds approved, and to sell lots of them once that happens. Academic labs want to land big, impressive publications and big, impressive grants. The consequences of industrial biases and conflicts of interest can be larger, but if you’re working back at the startup stage, you’d better keep an eye on the academic ones. Both sides have to watch themselves.

Derek Lowe has worked for several major pharmaceutical companies and blogs at In the Pipeline.



  • http://www.duethealth.com/ Jennifer Shine Dyer MD, MPH

    ‘Publish or perish’, a.k.a. academic bias, is very real. As an academic author myself, I propose that replication studies (i.e., work confirming that an original paper’s experiment holds up) and even negative studies be given equal academic credit and be publishable too. That might counter the bias toward first, potentially unreproducible, positive-result studies. Just my opinion.

  • Joe

    These days, many big schools require researchers to be fully supported by grants or lose their jobs. So they MUST find positive results, even though we all know much early-stage research will inevitably not pan out.

  • http://www.healthyworcester.com Sherry Pagoto PhD

    As an academician, I just want to add that results very often run contrary to our hypotheses, and we publish them that way. A research result isn’t simply positive or null – it’s not a dichotomy, and assuming those are the only two options oversimplifies things greatly. Data always tell a complex story. In my experience, most findings are not exactly as hypothesized, and the challenge in academic research is to understand what story the data tell. Of course we have biases, but it is important to understand that we can make a career regardless of whether our hypotheses are confirmed. I should also add that it would be extremely difficult to get a grant through peer review and funded on less-than-rigorous data. You can TRUST ME on this one!

    • Joe

      “It is important to understand that we can make a career regardless of whether our hypotheses are confirmed or not” – these days, I think that to make a career you need to get a grant to support yourself. And while it’s not required, of course, confirmatory research is much more likely to be funded.

    • Kristin

      I have to say, I’m not convinced that the people running either the grant proposal process or the peer review process are as well educated as you suggest. A study that came out a year or two ago looked at reported analyses of fMRI neuroscience data, and 42% of the researchers it examined were reporting fMRI data in clearly misleading and statistically inaccurate ways. Specifically, they looked at their data before they picked their hot zones. That is the statistical equivalent of claiming you’re doing a one-tailed test but not specifying the direction until after you see the findings: you double (or more) your chances of getting a “significant” finding by widening your margin. It’s not good, but it was widespread, and this was in peer-reviewed journals.

      Until peer reviewers understand statistical concepts and apply them ethically, publication should not be understood as some sort of final verification.

      Just because the grant committee or the peer reviewers ask you questions you don’t like doesn’t mean that they understand and are capable of evaluating your research. Your ego is invested in what you do. It’s a mistake to think that researchers don’t play with the data – one of my friends in a PhD program right now is tearing her hair out because her thesis advisor is making her run dozens of tests with minor variations, without correcting for familywise error. I don’t care how sure you are about the “story”: you need to be using a Bonferroni correction or admitting that you’re not working at the standard 5% alpha level. (A short simulation after these comments sketches how big the inflation gets.)

      Ten bucks says she gets published anyway. Her thesis advisor is a heavy hitter. Psychology, at least, has admitted as a field that these are serious failings–the bias toward novel results rather than replication–and yet we’re not addressing them in any meaningful way.

  • gzuckier

    Note also the very strong bias against publishing failures to replicate; as a result, the bogus initial publication lives on and on, deceiving grad students for years and years.
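
As a footnote to Kristin’s comment above, here is a minimal simulation sketch of the two statistical problems she describes: choosing a one-tailed test’s direction after seeing the data, and running many uncorrected tests without controlling familywise error. It assumes Python with NumPy and SciPy; the sample sizes and test counts are invented for illustration and are not taken from the fMRI study or the thesis project she mentions. All simulated data are pure noise, so every "significant" result below is a false positive.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_sims, n, alpha = 5_000, 30, 0.05  # simulated experiments, samples each, nominal level

    # Problem 1: claiming a one-tailed test but picking its direction post hoc.
    # Halving the two-tailed p-value no matter which way the effect points
    # rejects twice as often as advertised.
    hits = 0
    for _ in range(n_sims):
        x = rng.normal(0.0, 1.0, n)               # no true effect
        p_two = stats.ttest_1samp(x, 0.0).pvalue  # honest two-tailed p-value
        hits += (p_two / 2) < alpha               # post-hoc one-tailed p-value
    print(f"post-hoc one-tailed false-positive rate: {hits / n_sims:.3f}")  # ~0.10, not 0.05

    # Problem 2: dozens of minor variations with no familywise-error correction,
    # versus a Bonferroni-corrected threshold of alpha / n_tests.
    n_tests = 20
    raw = bonf = 0
    for _ in range(n_sims):
        p = np.array([stats.ttest_1samp(rng.normal(0.0, 1.0, n), 0.0).pvalue
                      for _ in range(n_tests)])
        raw += (p < alpha).any()                  # any uncorrected hit counts
        bonf += (p < alpha / n_tests).any()       # Bonferroni-corrected hits
    print(f"familywise error, uncorrected: {raw / n_sims:.3f}")   # ~0.64
    print(f"familywise error, Bonferroni:  {bonf / n_sims:.3f}")  # ~0.05

Bonferroni is deliberately conservative; the point of the sketch is only that the uncorrected familywise error rate is nowhere near the advertised 5%.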
