One of my favorite new blogs this year is Retraction Watch, written by Adam Marcus and Ivan Oransky, both of whom carry substantial science editing and journalism credentials. If you’re a scientist and you’re not following it, you really should. Anyway, last week brought the retraction of another high-profile and controversial immunology paper (which happened to be the fourth retraction from Nature this year and came in the same issue as an editorial about the increase in retractions). Oransky posted a response from Tom Decoursey, an author of a study that challenged the findings of the now-retracted paper. Decoursey makes a strong case for the importance of removing wrong answers from the literature. But one thing he said struck a dissonant chord with me. Decoursey says, in reference to this particular retraction (emphasis mine):
Despite the fact that a few insiders had doubts about the Ahluwalia et al (2004) paper from the outset, and this was discussed heatedly at numerous international (specialty) meetings, it is incorrect to assume that most people knew what the real story was. Very few people are expert enough, or confident enough, to evaluate opposing claims. Even after two groups (Femling et al, 2006; Essin et al, 2007) had published papers thoroughly disproving every major conclusion reached in the Nature paper, the stock position taken by authors who published papers subsequently was, “There is controversy in the field” or “Group A says this, but Groups B & C say the opposite.” It is not clear how many papers must pile up to refute one incorrect study. Certainly the number is greater when the original study was published in Nature or Science.
I agree that it’s unclear how many “rights” negate a “wrong,” so to speak. There are so many variables in experiments (many of which aren’t actually reported in manuscripts) that it can prove quite difficult to distinguish among variation in conditions (concentrations, lots, strains, etc.), honest mistakes, and intentional misconduct. But is the burden of proof higher (e.g. a greater number of contradictory studies required) if the original finding in question is published in a GlamourMag? Should it be?
Publication in Cell, Nature, or Science doesn’t mean that the study or its peer review was more rigorous. GlamourMags reject more manuscripts because they receive so many submissions and publish so few articles (compared to society journals). The standards for novelty, impact, and breadth are (in theory) higher because of these journals’ broad audiences. However, as far as I can tell, the peer-review process is much the same as it is for lower impact factor journals. Perhaps there are more rounds of review and higher expectations about the data presented, which sometimes result in experiments being thrown together at the last minute to beat the editor’s deadline; those experiments might not be as stringently designed and controlled as one might like.
In my mind, the more important factor in determining the burden of proof is the type of study and the variables involved. Any study involving animals will generally require more evidence to refute than one based on in vitro work. The more complex the methods, and the more variables you have to account for, the more evidence we should perhaps accumulate before collectively refuting a study, at least in the absence of evidence of fabrication or falsification. It’s still important to publish contradictory results, as this case has demonstrated. But the burden of proof should be determined by the methods, not by where the study is published.
Is Decoursey describing reality or making a prescriptive statement?
I agree with biochemmebelle, however. Burden of proof shouldn’t be tied to GlamourCred. Should be related to strength of original data and strength of critique.
I suspect this is a prescriptive statement… but seeing as my telepathic skills are a little rusty, I can’t be sure 😉
I think, *on average*, it is quite possible that peer-review is at least slightly more rigorous for CNS journals than society journals.
I would like it to be true that it doesn’t matter whether you need to refute a CNS paper or an article in a journal with a lower impact factor. However, I think people in general tend to view an article in a GlamMag/CNS journal as “more proven” since it is harder to get published there. This might be a faulty (or perhaps just oversimplified) assumption, partly since acceptance has more to do with “importance for a vaster field/mechanistic insights”. I do think it is hard to generalise about the reviewing process for non-CNS journals, since they span such a huge range, so I’d refrain from making blanket statements. The prestige of being a CNS reviewer might make for more thorough refereeing than for “mag f”? [all guessing here]
I personally tend to think that the more specialized the journal, the more likely the experimental setup and analysis are in line with the field’s most common practice, and the more the reviewers will know about the quirks and specifics of that field?
I guess the most important thing would be to push for more openness about retracted articles and to make it clear that a retracted article is not suitable to cite again. It sounds obvious, but it’s intriguing how many retracted articles still float around in reviews and reference lists…
Another possible interpretation re: burden of proof for GlamourMag vs OtherMag occurred to me. Perhaps because these are high-profile pubs, which result in high-profile retractions (when they occur), you should be damn sure about your claims before you publish a refutation. But I still stand by my earlier stance that burden of proof should be dependent on the study, not the journal.
Chall, you bring up an interesting point re: transparency of retractions. This is a serious concern for scientists. Retraction Watch reported that the UK Research Integrity Office has published new guidelines for retractions, which focus on increasing transparency and visibility. Another point made elsewhere (though I can’t recall where, now) is the failure of institutions and news organizations to amend press releases or stories about published work that has since been retracted.