The purpose of placing a particular news feature on the front page of a newspaper is to gain the readers' attention. However, given the prominence of such articles, it is the responsibility of journalists and editors to put extra effort into ensuring that the articles are based on thoroughly sound source material.
It can be argued that it is especially unethical to write newspaper articles on medical and health research that has not been peer-reviewed, such as findings announced in university press releases or at medical conferences, because of the risk of imparting false knowledge to the public. It is worse still to present such news without acknowledging that the source material is preliminary.
Are newspaper journalists and editors up to the task? William Lai and Trevor Lane (University of Hong Kong, China) have shown that frequently they are not, at least with respect to medical and health news features.
Methods of evaluation.
The scientists used LexisNexis, an internet-based news archive, to obtain newspaper articles on medical and health research topics that appeared on the front page between 2000 and 2002. They restricted their search to high-circulation English-language newspapers, e.g., The New York Times and The Guardian.
The articles were then screened manually to ensure that they met the criteria for reporting on medical and health research, e.g., that they were not based on medical policies or business reports. Articles based on reports or surveys carried out by government agencies, or by transnational government or nongovernment agencies, were excluded because their source material was often difficult to determine.
The remaining articles were classified according to whether or not they reported on peer-reviewed research. If not, they were further classified according to whether or not the articles noted this fact, and whether the reported research was eventually published within the next three years.
The articles were also classified according to the level of rigor of the evidence in the original source material, and the topic of the research. All of the statistical analyses were performed by one researcher; a second researcher then randomly sampled 20% of the news stories to check the reliability of the analyses.
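The classification scheme described above can be sketched in code. The following is a hypothetical illustration only; the field names, coding categories, and sample data are my own assumptions, not the study's actual coding instrument.

```python
from dataclasses import dataclass
from collections import Counter

# Hypothetical record for one front-page medical/health news article.
# Field names are illustrative; the study's real coding scheme is richer.
@dataclass
class Article:
    peer_reviewed: bool          # source had passed peer review when the story ran
    noted_preliminary: bool      # story disclosed that the source was preliminary
    published_within_3y: bool    # source appeared in a journal within three years

def classify(articles):
    """Tally articles along the study's main branches: peer-reviewed
    vs. not; for the latter, whether the story disclosed its
    preliminary status and whether the work was ever published."""
    tally = Counter()
    for a in articles:
        if a.peer_reviewed:
            tally["peer_reviewed"] += 1
        else:
            tally["not_peer_reviewed"] += 1
            if a.noted_preliminary:
                tally["noted_preliminary"] += 1
            if not a.published_within_3y:
                tally["never_published"] += 1
    return tally

# Toy sample of three coded articles.
sample = [
    Article(True,  False, True),
    Article(False, True,  True),
    Article(False, False, False),
]
print(classify(sample))
```

The point of the sketch is simply that each story is coded once and the percentages reported in the study fall out of tallies like these.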
Strengths and limitations of the evaluation.
Consistent with the spirit of their own study, the scientists state the strengths and limitations of their analysis of medical and health coverage on the front pages of English-language newspapers. First, the sample is skewed towards newspapers published in the United States.
Additionally, the keyword set the scientists used for their news article search could have been more extensive, and the topic classification of each newspaper article was based on the abstract of the original source material, not on the remainder (bulk) of the article. A major strength of the study, however, is that it draws on a large number of newspapers and articles, so the results are very likely to be relevant at least to newspapers in the United States.
Newspaper articles based on peer-reviewed research.
The scientists found that only 57% of the news stories reported on medical and health research that had been peer-reviewed. Even among these, there were many problems.
Ninety percent of the news stories reported on the research findings without criticism (to be fair to the journalists under assessment in this research, if I question research findings, I typically do not write on them at all). Seventy-six percent incorrectly stated the study type, and 18% incorrectly cited, or did not cite, the original source material.
None of them directly stated the rigor of the evidence for the research findings. As it turns out, only 3% of the evidence came from systematic reviews of randomized controlled trials, in stark contrast with the 31% that was based on expert opinion.
The published research came from a wide range of technical journals. However, 20% of it was published in a single journal, the Journal of the American Medical Association, which may reflect the unwillingness of newspaper medical and health journalists to scour the technical research literature, or the limited financial resources available to them.
Newspaper articles based on preliminary (not peer-reviewed) research.
The news stories reporting on medical and health research that had not been peer-reviewed fared no better than those based on peer-reviewed research. Neither group reported on the rigor of the research findings.
The scientists found that only 18% of the news stories not based on peer-reviewed research mentioned that the research was preliminary. Fifty-five percent of this preliminary research remained unpublished after three years (and seemingly never will be published).
Only 1% of this research gave evidence based on systematic reviews of randomized controlled trials, again in stark contrast with the 33% that was based on expert opinion.
The research that was eventually published appeared in a wide range of technical journals, none of which dominated; the Lancet was the most common, at 5%.
What were the newspaper article topics?
The scientists found that 55% of the newspaper articles based on published research were on women's health (24%); occupational, environmental, and public health (19%); and other medical research (12%). The situation was roughly similar for the newspaper articles based on research that had not been peer-reviewed at the time but was later published.
Here, they found that 44% of the newspaper articles were on occupational, environmental, and public health (17%); other medical research (15%); and cancer (12%). In other words, topics were not dramatically skewed based on whether or not the research had been peer-reviewed, and the scientists' subsequent analyses hold across all topics.
Implications for medical and health journalism.
Many medical and health journalists at major newspapers do not use reliable source material for their news stories. It's important to remember that published technical research is not without its ethical lapses.
Technical medical editors are often indifferent to plagiarism, nonplagiaristic fraud is widespread in science, and published results from popular research topics are more likely to be erroneous than those from less popular research topics. However, research that is published can be readily assessed and scrutinized after publication; unpublished findings cannot, and may be more open to sensationalism and exaggeration.
How to act on these findings.
A possible solution to the problems facing lay-audience medical and health journalism (and science journalism as well) may be to hire technical specialists, such as those with a science PhD. If they're worth the paper on which their diploma was printed, such individuals can scour a wide range of technical journals, and fairly assess the merits of the research.
I feel that short lay-audience journalist courses in "how scientific research is conducted" will not help matters. They do not provide the extensive technical base that is often required to comprehend technical content, especially from technical articles that may possess a solid scientific base but are nevertheless garbled and somewhat disorganized.
In the meantime, lay-audience journalists and editors have a fundamental responsibility to base their reporting on the most reliable sources possible. This is a responsibility that those reporting on medical and health topics often neglect.
For more information:
Lai, W. Y. Y., & Lane, T. (2009). Characteristics of Medical Research News Reported on Front Pages of Newspapers. PLoS ONE, 4(7). DOI: 10.1371/journal.pone.0006103