SUMMARY: A small number of cancer disciplines dominate the most prestigious medical journals. Scientists working on other cancer types may be at a professional disadvantage, e.g. for promotion and funding.
Ghostwriters at DesignWrite, employed by the former pharmaceutical company Wyeth, systematically promoted hormone replacement therapy for unapproved purposes. The direct yearly cost of fraud in scientific research may exceed $100 million USD.
It should therefore come as no surprise that peer review, the portal through which every scientist must pass to publish his/her research, receive grant money, and get promoted, has plenty of problems. As one personal example, my postdoctoral fellowship application (through the National Institutes of Health) received a "top 20%" evaluation from one reviewer, and a "bottom 25%" evaluation from the other.
One reviewer largely ignored the text of my application, while the other quoted my text directly in his/her critique. Guess how my application turned out.
There's a glaring conflict here, one begging for analysis by a third reviewer. Unfortunately, such a conflict will almost always kill the submission (as it did in my case).
Addressing technical manuscripts rather than fellowship applications, recent research suggests that peer review may be improved by soliciting reviewer commentary rather than reviewer recommendations. In other words, extended thoughts rather than a one-off accept/reject statement may improve the fairness of peer review.
Personally, I feel that the approach adopted by PLoS ONE, accepting every manuscript that is rigorous and technically sound, may be the best one for technical manuscripts. Unfortunately, I don't see a way to improve peer review of grants and promotion applications without a fundamental change in the mindset of scientists, who, incidentally, are the source of the problems.
For scientists focused on research, decisions on whether to award a grant, or issue a promotion, are almost always based on some variant of the scientist's research "impact factor." Such metrics are some manipulation of how many times a scientist's work is cited, relative to total output, over a given length of time.
Heavy weight is also given to the "impact factor" of the journals in which the scientist publishes his/her research. In the never-ending quest to reduce the professional life of a scientist to a number, one should not forget that the process may be unfair.
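To make the arithmetic behind these metrics concrete, here is a minimal sketch, in Python, of two common variants: a researcher's average citations per paper over some window, and a journal's classic two-year impact factor. Every name and number below is invented for illustration; it is not any agency's actual evaluation formula.

```python
# Toy illustration of citation-based metrics; all numbers below are invented.

def citations_per_paper(citation_counts):
    """Average citations per paper for one researcher over some time window."""
    return sum(citation_counts) / len(citation_counts)

def two_year_impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
    """Classic journal impact factor: citations received this year to articles
    from the previous two years, divided by the number of citable items the
    journal published in those two years."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical researcher with five papers in the window.
print(citations_per_paper([12, 3, 0, 7, 25]))  # 9.4

# Hypothetical journal: 4,000 citations to 500 citable items.
print(two_year_impact_factor(4000, 500))       # 8.0
```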
Recent research by Ronan Glynn (National University of Ireland Galway) and coworkers adds weight to this note of caution. They found that a handful of cancer types heavily dominate the others in the most prestigious medical journals, indicating that scientists working on other worthy topics may be at a professional disadvantage.
Analyzing cancer in the medical literature.
The scientists searched two widely used scientific indices, PubMed and Web of Science, for published English-language peer-reviewed technical articles (and the citations each generated) on the twenty-six highest-incidence types of cancer (e.g. lung and thyroid). The search was conducted from May to August 2009, on manuscripts published throughout 2007, and found roughly 190,000 articles.
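The paper doesn't include the authors' search scripts, but for a sense of what such a query involves, here is a minimal sketch using Biopython's Entrez module to count English-language 2007 PubMed records for a few cancer terms. The search strings are simplified stand-ins for the authors' actual strategy, and the Web of Science side (a subscription database) is omitted.

```python
# Sketch only: count English-language 2007 PubMed records for a few cancer terms.
# These simplified queries stand in for the authors' actual search strategy.
from Bio import Entrez

Entrez.email = "you@example.org"  # NCBI asks for a contact address

cancer_terms = ["breast neoplasms", "lung neoplasms", "liver neoplasms"]

for term in cancer_terms:
    query = f'{term}[MeSH Terms] AND 2007[PDAT] AND english[Language]'
    handle = Entrez.esearch(db="pubmed", term=query, retmax=0)
    record = Entrez.read(handle)
    handle.close()
    print(f'{term}: {record["Count"]} articles')
```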
They organized the technical journals in which the articles were published into three categories: the top twenty medical journals (via two common measures of impact), the top twenty cancer journals (again via the two measures of impact), and the ten journals publishing the most on each cancer topic.
Representation of cancer in the medical literature.
The twenty-six cancers under study accounted for a little over 8% of the articles in both databases. Breast cancer was the most common.
Cancer topics constituted 25% of the articles published in the top twenty medical journals, by both measures of impact. Roughly two-thirds of these articles concerned one of six cancers (e.g. breast cancer again).
In other words, cancer is given disproportionate weight in the most prestigious medical journals, and several types of cancer are favored over others. The former does not seem fair to me, although the latter is open to interpretation, to be discussed shortly.
The twenty-six highest-incidence types of cancer constituted 53% or 72% of all articles published in the top twenty cancer journals, depending on the measure of impact. Roughly two-thirds of these articles concerned one of seven cancers (e.g. breast cancer, yet again).
The ten journals publishing the most on each cancer type published over 16,000 articles on cancer. Over half of these articles concerned one of six cancers (e.g., you guessed it, breast cancer).
In other words, a relatively small number of cancer types dominate the others, in both the general medical literature and the cancer specialist literature. Is this fair?
These data would seem to suggest that articles on certain cancers are given unfair prominence over others. That is in fact true, but not in the way you might expect (e.g. for breast cancer).
It turns out that breast, prostate, lung, and intestinal cancers are under-represented in the literature relative to their real-world incidence. Liver cancer, on the other hand, is over-represented relative to breast cancer by more than a factor of 40 (cancers of the central nervous system are also over-represented).
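To make "over-represented" and "under-represented" concrete: the comparison is essentially a cancer's share of the literature divided by its share of incidence. A quick sketch with placeholder numbers (not figures from the paper):

```python
# Representation ratio: share of articles divided by share of incidence.
# A ratio above 1 means a cancer gets more literature attention than its
# incidence alone would suggest; below 1 means less. Numbers are placeholders.

def representation_ratio(articles, total_articles, cases, total_cases):
    article_share = articles / total_articles
    incidence_share = cases / total_cases
    return article_share / incidence_share

# Hypothetical cancer with 2% of the articles but only 0.1% of the cases.
print(representation_ratio(articles=2_000, total_articles=100_000,
                           cases=1_000, total_cases=1_000_000))  # 20.0
```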
Implications.
Funding agencies and promotion committees should be careful to not judge a scientist's research output based on topical popularity. Unfortunately, topical popularity does worm its insidious way into common measures of research impact.
Furthermore, if a scientist wishes to find articles most relevant to his/her specialty, a search via journal impact factor may be inappropriate. This is especially true for research on types of cancer that are not popular in the medical literature.
I'd love to see an analogous study performed on the chemistry literature, e.g. to assess the formal representation of "nano" topics, which attract so much hype in chemistry today. I personally find it irritating to see that prefix appear in so many articles where it isn't particularly appropriate (and I say this as someone who did research on nanoparticle assembly).
In brief, scientists need to be careful when using various measures of "impact factor" to evaluate research productivity. I've heard many scientists complain about it, and I've read of attempts to address the issue, but almost every academic research institution continues to use it for critical professional decisions (e.g. funding and promotion).
This blog post was written on November 9, and the technical manuscript on which it is based was published on the same date.
I'd like to unofficially declare the second week of November to be:
Anyone want to join me?
NOTE: The scientists' research was funded by the National Breast Cancer Research Institute of Ireland.
Glynn, R. W., Chin, J. Z., Kerin, M. J., & Sweeney, K. J. (2010). Representation of cancer in the medical literature: a bibliometric analysis. PLoS ONE, 5(11). DOI: 10.1371/journal.pone.0013902