PIO FORUM

by Joann Ellison Rodgers

[Editor's Note: The following reports on research assessing the process and the public impact of mass media reporting of disease-related genetic discoveries. The multi-year effort offers insights and practical tools for science communicators.]

Pearl Diving (Second of two parts)

More confident now that their assessment tool reflected what scientists, journalists, and consumers think important to include in a genetic disease story, the Hopkins group was eager to move on to actually rating stories, but realized that not all of the items in the instrument were relevant to every genetic discovery. So, with the help of individuals trained in genetics and epidemiology—but with no special knowledge of the particular discoveries the stories covered—they customized the assessment tool for each of two widely covered genetic discoveries. These were the 1994 identification of the BRCA1 gene and a 1996 report linking prostate cancer to markers on chromosome 1. The number of content items applicable to the BRCA1 stories was 30; to the prostate stories, 28.

Next, using the Lexis-Nexis database, they identified newspaper, radio, TV, and magazine stories to score. For breast cancer, there were 17 major newspaper stories, six wire service stories, four TV transcripts, and four magazine stories. For the prostate cancer linkage story, they found 12 major newspaper stories, two wire service stories, and two radio/TV transcripts.

Each of four "raters" gave a score of "1" if the story presented the information accurately, a "-1" if it had errors of commission, and a "0" if it had errors of omission. Because information considered most important in a news story (or release) is handled in the lead, each of the raters separately graded the top several graphs of any story. Two of the raters had expertise in genetics; their ratings were no different from those of the other two. Several questions were also added so the raters could address the issue of "balance," a more subjective measure than the story elements alone. The investigators defined balance as whether the story "presented limitations or risks of the discovery, whether it included critical comments of other scientists, and whether it exaggerated (or not) the relevance of the discovery for human health." The raters also evaluated the overall quality of each story (on a scale of 1 to 5, with 5 as "excellent"), but not the headlines. To gauge agreement and association, the Hopkins team used long-accepted statistical calculations, including Cohen's kappa, chi square, Kendall's tau-b, and Pearson correlations. (Don't ask me to explain these.)
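For readers who like to see the nuts and bolts, here is a minimal sketch (my own illustration, not the Hopkins team's code) of how per-item scoring of this kind, and a two-rater agreement check such as Cohen's kappa, might look in practice. The item names and ratings below are hypothetical.

```python
# A hypothetical sketch of per-item story scoring, not the published instrument.
# Ratings per content item: +1 = presented accurately, -1 = error of
# commission, 0 = error of omission (the item was left out of the story).
from collections import Counter

rater_a = {"gene_name": 1, "species_studied": 0, "funding_source": 0,
           "clinical_use": 1, "risk_of_false_positives": -1}
rater_b = {"gene_name": 1, "species_studied": 0, "funding_source": -1,
           "clinical_use": 1, "risk_of_false_positives": -1}

def story_score(ratings):
    """Net accuracy: accurate items minus errors of commission;
    omissions contribute nothing."""
    return sum(ratings.values())

def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two raters on the same items."""
    items = sorted(r1)
    n = len(items)
    observed = sum(r1[i] == r2[i] for i in items) / n
    c1 = Counter(r1[i] for i in items)
    c2 = Counter(r2[i] for i in items)
    # Agreement expected by chance, from each rater's marginal frequencies.
    expected = sum(c1[cat] * c2[cat] for cat in set(c1) | set(c2)) / n**2
    return (observed - expected) / (1 - expected)

print(story_score(rater_a))                       # 1 of a possible 5
print(round(cohens_kappa(rater_a, rater_b), 2))   # 0.71: substantial agreement
```

A kappa near 1 means the raters agree far more often than chance would predict, which is the kind of check that lets the team say the genetics experts' ratings were no different from the others'.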
Here is what they found about accuracy, balance, and content:

• Low ABC scores were for the most part due more to what was left out than to what was in the story. Raters assigned an "inaccurate" rating only five times across all stories and items. (Example: three BRCA1 stories said smoking was an environmental risk factor for breast cancer, although nothing about this linkage had been published in the literature at that time.)

• Very short stories of a few hundred words covered fewer elements on the rating sheet than longer stories. Stories over 1,000 words, however, did not make up much ground on the hard-information side, because they tended to focus on human-interest angles that sell well but add little to the "essentials." This might sound like a no-brainer, but one possible lesson for PIOs is that neither the McNugget version of news releases nor their overly long, narrative, chock-full-of-meaningless-quotes counterparts will do the best job if essentials are ignored.

• Science writers, not surprisingly, may do a better job on their ABCs than journalists not trained in or devoted to science writing.

• For both the breast and prostate cancer stories, a trio of elements was included in at least 80 percent of stories: a description of clinical use (the most-covered item), the institution where the research was done, and the proportion of people with the disease who had the gene or marker in question. Interestingly, prevention or early detection of prostate cancer was mentioned much less often than early detection and prevention of breast cancer.

• Potential clinical and theoretical applications of a discovery, and whether a mutation is generally inherited or acquired, were more likely to appear in the leads of stories than in their bodies. The name of the investigator, outside opinions, and the prevalence of the disease were mentioned in more than half the stories about both discoveries, but in the lead graphs less than half of the time. And fewer than half the stories made clear to which category(ies) of people the discovery pertained.

• Of the 17 items seen as really important by more than 50 percent of both journalists and scientists, eight were included in fewer than half the actual stories. For instance, all of the journalists and scientists suggested that the species on which the research was done (mouse, man, etc.) be reported. Only 45 percent of the stories contained this information.

• Not one of the 47 stories rated carried anything about whether the researchers conducting the study stood to benefit financially, although nearly half of all scientists and journalists think this is essential stuff. Funding source was considered very important by 29 percent of journalists and 36 percent of scientists, but was carried in only one story.

• Using the instrument, raters said that 36 percent of all the BRCA1 stories they assessed, and 25 percent of the prostate cancer stories, "exaggerated" the benefits of the discovery. For example, more than 60 percent of the BRCA1 stories and 80 percent of the prostate stories left out any mention of risks of the discovery (such as false positives on screening). On the other hand, more than 80 percent of the breast cancer and half of the prostate cancer stories were considered "balanced" when it came to using outside expert opinions. It's unlikely that PIOs will ever get the go-ahead to quote "outside" experts, particularly critical ones, in our press releases, but these findings point up the need for serious caution in how we play the benefits of our scientists' work.

• No individual story received the lowest quality rating of "1," but the raters felt that the more elements a story contained, the better its overall quality tended to be. Even so, two of the four stories that received the top score of "5," indicating excellence, used fewer than two thirds of the essential scoring items. "Raters' notes…indicated these stories rated highly because they contained enough information, were clearly written, and did not exaggerate the significance of the discovery," the Hopkins authors wrote. This finding supports the notion—not popular in all PIO shops—that PIOs who are really good science writers are a precious resource.
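For the curious, here is a similarly hypothetical sketch of how coverage figures like the 80 percent and 45 percent numbers above can be tallied. An item counts as "covered" if it drew any rating other than zero, that is, if it was not omitted; the story and item names are my inventions.

```python
# Hypothetical per-story item ratings, using the same +1 / -1 / 0 scheme.
stories = {
    "wire_story_1":  {"species_studied": 1, "funding_source": 0, "clinical_use": 1},
    "paper_story_2": {"species_studied": 0, "funding_source": 0, "clinical_use": 1},
    "tv_story_3":    {"species_studied": 1, "funding_source": 0, "clinical_use": -1},
}

for item in ["species_studied", "funding_source", "clinical_use"]:
    # An item is "covered" whenever it was not an error of omission (0),
    # even if what the story said about it was wrong (-1).
    covered = sum(1 for ratingsings in () for _ in ())  # placeholder removed below
    covered = sum(1 for ratings in stories.values() if ratings[item] != 0)
    print(f"{item}: covered in {100 * covered / len(stories):.0f}% of stories")

# -> species_studied: 67%, funding_source: 0%, clinical_use: 100%
```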
In their discussion of the findings, what the Hopkins investigators said surprised them the most was "the extent of agreement among all three groups [scientists, journalists, consumers] about what should be covered." They found especially noteworthy the fact that consumers without science training could "not understand how diseases that did not occur until adulthood could be due to inherited mutations." The Hopkins team also highlighted the fact that consumers' "insistence on having stories include evidence of the credibility of research often stemmed from their admitted inability to judge the quality of the science and consequently their reliance on trusted authorities. … [T]he more details a story provided, the more likely they were to accept the credibility of the research. They were particularly dismayed when media stories extrapolated from animal studies to human problems."

In short, in Boyce Rensberger's words in Science [289, 61], scientists need to "wade into the methods" with journalists to increase the odds that journalists will be able to evaluate the credibility of the work. But the message is just as good for PIOs. The "bottom line" is not enough when crafting news releases. Get the "process of science" in there.

Finally, the authors point out that "lack of balance took several forms." Among the most frequent problems were stories that, when describing a clinical use, failed to give a timetable for such use. "Others exaggerated the nearness of the applications [and over] half failed to mention that the discovery was applicable mainly or only to high-risk families."

The Hopkins study is not without its flaws. For instance, it's hard to know whether the non-respondents among scientists and journalists were similar to or very different from those who did respond. Web stories were not included, and very few television or radio stories were captured in the databases and therefore evaluated. But the research team, in still unpublished work, has now tested the reliability of the scoring instrument on more than 225 stories produced for print, TV, and radio. The results published in Science Communication are holding up well, as evidenced by more detailed quantitative analysis of content and balance, and by qualitative research, led by Gail Geller, which describes persistent themes from among dozens of lengthy, telling interviews with scientists and journalists.

In general, what is emerging from these follow-up studies is that while there is considerable work to do to improve the content and balance of news products (including, in my view, news releases) related to genetic discoveries and other medical and health stories, scientists and journalists tended to view each other not as combatants or clods insensitive to each other's cultures, but as "cautious collaborators." And although scientists and journalists are not always motivated by the same things in seeking information for news stories, their relationship is mutually perceived as mostly positive. Their motives, according to Geller, are certainly compatible: both groups share a common ethical goal of accurately and carefully informing the public about an enterprise each sees as important and interesting. Such knowledge can't hurt and may certainly help us as we work at the intersection of reporters in search of stories and scientists in search of publicity.
Joann Ellison Rodgers is director of media relations in the Office of Communications and Public Affairs at Johns Hopkins Medicine.