June 2009

SCIENCE PUBLISHING:

Increasingly Popular Research Yields Increasingly Unreliable Results

How reliable are research findings in the technical science literature? Scientists have previously demonstrated that fraud other than plagiarism is common in science, and that technical medical publishers are often indifferent to plagiarism.

Research reliability may be linked to the popularity of the research field in question. It has been proposed that the more popular the field, the less reliable the reported results.

It has been suggested that this popularity link may stem from fraud, encouraged by the pressure to publish in the face of stiff competition. An alternative possibility is simple error: when many groups are working towards the same goal, the probability that at least one of them obtains a false positive result increases.
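
To get a feel for how quickly chance alone produces this effect, here is a back-of-the-envelope sketch (my own illustration, not a calculation from the paper). If each group's experiment carries a 5% chance of yielding a false positive, the probability that at least one of twenty independent groups reports one is already about 64%:

# Back-of-the-envelope illustration (mine, not the paper's): the probability
# that at least one of n independent groups obtains a false positive, if each
# group's individual false positive rate is alpha.
def prob_any_false_positive(n_groups, alpha=0.05):
    return 1 - (1 - alpha) ** n_groups

for n in (1, 5, 20, 100):
    print(f"{n:3d} groups: {prob_any_false_positive(n):.2f}")
# prints 0.05, 0.23, 0.64 and 0.99 respectively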

Thomas Pfeiffer and Robert Hoffmann (Harvard University and Massachusetts Institute of Technology) have explored the possible link between research field popularity and erroneous research results. They have found that published research findings are more likely to be erroneous when the research field is popular, or when multiple researchers publish results on the same research question.

Choosing the research to be evaluated.

The scientists focused their efforts on thousands of published statements regarding thousands of interactions between proteins in yeast. They chose this topic for their analysis of the reliability of research findings in the technical literature for four reasons.

One is that many research fields in molecular biology and related disciplines study protein interactions. Additionally, published statements of this type (i.e., claims that two proteins interact) can be readily obtained with specialized text-mining software, or manually from the titles and abstracts of publications.
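
As a rough sketch of what the automated route might look like (the abstracts, protein names, and pattern below are my own toy examples, not the specialized software the authors refer to):

# Toy sketch of mining interaction statements from abstract text; the
# abstracts and the naive pattern are invented for illustration only.
import re

abstracts = [
    "We show that Myo1 interacts with Act1 during cytokinesis.",
    "Cln2 binds Cdc28 and promotes entry into the cell cycle.",
]

# Very naive pattern: "<protein A> interacts with / binds <protein B>"
pattern = re.compile(r"(\w+) (?:interacts with|binds) (\w+)")

for text in abstracts:
    for protein_a, protein_b in pattern.findall(text):
        print(f"statement: {protein_a} - {protein_b}")

Real text-mining tools are of course far more sophisticated, but the principle is the same: turn free text into candidate interaction statements that can be counted and checked.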

Another reason is that published statements regarding protein interactions can be tested by comparing them against data from recent high-throughput experimental techniques. These techniques are less subject to popularity-related fraud and error, because their results are generated rapidly and in bulk rather than one carefully chosen protein at a time.
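
A minimal sketch of what such a comparison could look like (the interaction lists below are invented placeholders, not the authors' actual datasets):

# Hypothetical sketch: check literature-derived interaction claims against a
# high-throughput reference set. All protein pairs here are invented examples.
high_throughput = {            # interactions observed by large-scale screens
    ("MYO1", "ACT1"),
    ("CDC28", "CLN2"),
}

literature_claims = [          # statements mined from titles and abstracts
    ("MYO1", "ACT1"),          # a widely reported interaction
    ("CDC28", "SIC1"),         # a claim the (invented) screens did not confirm
]

def normalize(pair):
    # Order each pair so that (A, B) and (B, A) count as the same interaction.
    return tuple(sorted(pair))

reference = {normalize(p) for p in high_throughput}
for claim in literature_claims:
    confirmed = normalize(claim) in reference
    print(claim, "confirmed" if confirmed else "not confirmed")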

A final reason is that some proteins are discussed in the technical science literature far more often than others, which provides the variation in popularity the analysis requires. Together, these features make protein interactions well suited for studying the veracity of published statements in relation to research field popularity.

Reliability, popularity, and multiple independent testing.

The scientists found that most statements on protein interactions in yeast appear only once in the technical science literature. Some, however, appear far more frequently; statements on the interaction between the myosin and actin proteins in yeast, for example, appear approximately 100 times.

Confirming previous hypotheses, the scientists found that the reliability of published results was negatively correlated both with the popularity of a protein and with multiple independent testing of the protein's interactions. They found the effect of multiple independent testing to be roughly ten times greater than the effect of protein popularity.
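
For readers who want to see the mechanics of such a comparison, here is a toy sketch (the numbers are invented, and the authors' actual statistical analysis is considerably more careful; see the paper itself):

# Toy illustration with invented data (not the authors' analysis): fraction of
# literature statements confirmed by high-throughput data, grouped by a crude
# popularity measure (how many publications mention the proteins involved).
from collections import defaultdict

# (publications mentioning the proteins, statement confirmed?) -- invented
statements = [
    (2, True), (3, True), (4, False), (5, True),
    (80, True), (90, False), (120, False), (150, False),
]

groups = defaultdict(lambda: [0, 0])   # popularity band -> [confirmed, total]
for popularity, confirmed in statements:
    band = "popular (>50 papers)" if popularity > 50 else "less popular"
    groups[band][0] += int(confirmed)
    groups[band][1] += 1

for band, (confirmed, total) in groups.items():
    print(f"{band}: {confirmed}/{total} statements confirmed")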

What can be done?

The scientists propose that this unsettling situation may be countered by directing research funding both to less popular fields and to independent evaluation of results after publication. I respectfully feel that the first proposal is unlikely to happen: many scientists have noted the difficulty of publishing on uncommon topics, or even of obtaining research funding for them, and neither will change quickly, given entrenched mindsets.

I also respectfully feel that the second proposal will not improve the quality of published scientific results; as the scientists note, research results are more likely to be retracted when they are published in a "high-impact" journal. Every scientist knows that uncommon topics published in relatively obscure journals are unlikely to even be read, much less corrected.

However, the research presented here demonstrates that results on even the most popular topics are likely to be erroneous, whether through fraud or simple chance. This is a subject few scientists will openly address.

Academics, funders, and publishers all share some responsibility for the prevalence of false research findings in the technical science literature. I suggest the situation would be ameliorated by reducing the competitive pressure in scientific research, conducting research more collaboratively, and publishing in more open formats in which results can be more readily compiled and reviewed.

for more information:
Pfeiffer, T., & Hoffmann, R. (2009). Large-Scale Assessment of the Effect of Popularity on the Reliability of Research. PLoS ONE, 4(6), e5996. DOI: 10.1371/journal.pone.0005996