PIO Forum

Pearl Diving (First of Two Parts)

by Joann Ellison Rodgers

Finding communications research data with practical applications and implications for public information officers is about as easy as prying open an oyster shell with your thumb. But the prospect of scooping out an occasional pearl makes the effort worthwhile. Thus it was that your correspondent signed on to a small role in a multi-year research effort, led by a handful of seasoned Johns Hopkins researchers with expertise in genetics, epidemiology, bioethics, and behavioral science, to understand something about how science stories get made.

The research, funded by a multi-year grant from the National Human Genome Research Institute, set out more than five years ago to assess the process of mass media reporting of disease-related genetic discoveries and its impact on the public. To date, several papers have been published, and others, which expand on some persistent themes and data in the first major paper, are being submitted for publication. Ever mindful of journal policies and prohibitions on premature publicity of research results, I was able to write only briefly about some of the findings of that first major paper in this space last year (SW, Fall 2003). Now I can report more fully on that one, and allude to findings in others, in hopes that readers will at least check out the published work and, at best, find something applicable to our craft.

That first paper was published in Science Communication, edited by NASW member Carol Rogers. Authors included lead investigator Gail Geller, a bioethicist and behavioral scientist; Neil A. “Tony” Holtzman, emeritus professor of pediatrics at Hopkins, an authority on genetics and epidemiology and chairman of the Task Force on Genetic Testing for the NIH and DOE Working Group on Ethical, Legal and Social Implications (ELSI) of the Human Genome Project; genetics and health policy expert Barbara Bernhardt; and several others.

This paper’s key finding was the astonishingly high rate of agreement among scientists, journalists, and at least some consumers about what content elements are important to include in any story about gene-related disease discoveries (and perhaps, by extension, other disease-related stories). I don’t think I’m making a big leap here to assume that what all these major stakeholders consider important has some relevance to the products PIOs generate.

Most of the pearls, however, are to be found among the details of the researchers’ efforts to find a reliable way to study accuracy, balance, and completeness of content (the “ABCs”) in genetic disease news coverage. The investigators began with the fact that few media stories about “health, science in general, or genetics in particular” have been subjected to truly rigorous content analysis. They then set out to develop a rigorous way to assess the ABCs. Their two main criteria for a good evaluation tool were that it capture the story elements consumers with some interest in science news would like included in print or TV coverage, and the elements scientists and journalists consider essential. They also decided that, to be reliable, the instrument had to consist of measurable, objective items whose variations in use were easy to find in stories generated from the same genetic discovery, and for which independent raters would come up with the same scores.
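To make that last criterion a bit more concrete, here is a minimal sketch, in Python, of what checking whether two independent raters score the same story elements the same way might look like. It is purely my own illustration, with invented element names and scores; it is not the reliability analysis the Hopkins team actually used.

# Illustrative only: a crude way to check whether two independent raters
# score the same story elements the same way (one aspect of reliability).
# Element names and scores are invented for the example, not taken from the study.

def percent_agreement(rater_a, rater_b):
    """Share of items on which two raters gave identical scores."""
    shared = [item for item in rater_a if item in rater_b]
    if not shared:
        return 0.0
    matches = sum(1 for item in shared if rater_a[item] == rater_b[item])
    return matches / len(shared)

# 1 = element present in the story, 0 = absent
rater_a = {"species studied": 1, "funding source": 0, "need for replication": 1}
rater_b = {"species studied": 1, "funding source": 0, "need for replication": 0}

print(f"Agreement: {percent_agreement(rater_a, rater_b):.0%}")  # Agreement: 67%

A real content analysis would use a more demanding statistic than raw percent agreement, but the principle is the same: objective items on which two raters, working separately, land on the same score.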
The first “draft” of the instrument was made up of elements commonly found in original scientific papers, in media stories of five genetic discoveries by Hopkins scientists, and, last but not least, in university press releases. To refine the draft, individuals were recruited, through ads placed in the Baltimore Sun and the Baltimore Teachers Union Newsletter, to participate in a two-hour focus group to talk about “science and the media.” Paid $30 and served a light supper, 23 consumers took part: 12 females and 11 males, ranging in age from 23 to 64 years. Some 78 percent were white, the rest African American. Forty-eight percent had a college degree or higher and 14 percent a high school diploma only. Nine worked in the commercial sector, and five were school teachers. The participants were divided into two groups: those who were college educated and those who were not.

With facilitators and a court stenographer in place, participants were asked, “If a discovery of a gene associated with a disease were being reported, what would you want to have included in the report?” The focus groups were not given any hint of what the investigators already had in mind in their first draft.

Now here comes the first of those pearls. When facilitators asked consumers what should be included in any press coverage of a new genetic discovery, the consumers mentioned the same items the Hopkins team had pulled from those research papers, news releases, and news stories, but they wanted more detail. Specifically, they wanted to know such things as why the research was undertaken, in what species and over what period of time, how frequently the disease occurs in the general population, and how mutations could actually lead to disease, that is, the process by which genes do their work. Useful stuff, I think, for those of us who put research news releases into the information mix.
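Because those consumer wish-list items lend themselves to a checklist, here is another rough Python sketch: a naive way a PIO might flag which of those details a draft release never gets around to mentioning. The keyword cues are my own invention and deliberately crude; this is an illustration of the idea, not a tool from the study.

# Illustrative only: the consumer-requested details from the focus groups,
# expressed as a simple checklist a PIO might run against a draft release.
# The keyword matching is deliberately naive; it is a sketch, not a tool.

CONSUMER_REQUESTS = {
    "why the research was undertaken": ["why", "aim", "goal"],
    "species and time period studied": ["mice", "mouse", "human", "years", "months"],
    "how often the disease occurs": ["prevalence", "incidence", "1 in"],
    "how the mutation leads to disease": ["mechanism", "protein", "pathway"],
}

def missing_elements(release_text):
    """Return the consumer-requested elements the draft does not appear to mention."""
    text = release_text.lower()
    return [element for element, cues in CONSUMER_REQUESTS.items()
            if not any(cue in text for cue in cues)]

draft = "Researchers studied mice for two years to find the gene's pathway."
print(missing_elements(draft))
# ['why the research was undertaken', 'how often the disease occurs']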
To further refine the assessment tool, the investigators sent a list of 31 candidate items or elements to 15 Hopkins scientists whose genetic research had already resulted in press interviews, and to 21 journalists around the country who had written news stories on genetics and health. These individuals were told that the elements were candidate measures for evaluating breaking news stories published within three days of the report of a disease-gene link in a peer-reviewed journal. They were asked to rate each item as “essential,” “discretionary,” or “beyond the scope” of a story. Amazingly (in my humble opinion), 73.3 percent of the scientists (11 of 15) and 76.1 percent of the journalists (16 of 21) returned the survey.

In the end, the final instrument, used both to rate specific stories and to capture what scientists and journalists thought needed to be included in a story, had 38 items in six categories: 1) description of the research, 2) credibility of the research, 3) genetics and epidemiology of the gene-disease link, 4) description of the disease in all affected people, 5) description of the disease only in those in whom it can be attributed to the gene or to a linked marker, and 6) implications of the discovery.

The categories included such objectively measurable content items as whether the story described the incidents or events that led to the discovery, the species in which the discovery was made, the control groups used, the need for replication of the study, funding sources, conflicts of interest, the name of the institution and/or researcher conducting the research, opinions of researchers not involved in the study, whether the mutation was in the germ line or acquired, the cause of the mutation, the frequency of the mutations, prevalence or incidence rates of the disease, mortality rate, symptoms and signs, impact of treatment, availability of treatment, whether prevention or early detection is possible, and so on.

Here are some results of the assessment of story-element importance by scientists and journalists:
Once again, there would seem to be useful information here for PIOs as we prepare our releases and work with reporters and scientists who talk to each other with some frequency.

#

Joann Ellison Rodgers is director of media relations in the Office of Communications and Public Affairs at Johns Hopkins Medicine.

Editor’s note: In the next PIO Forum, more on how stories were rated for their ABCs and the implications for PIOs.

Reference: Mountcastle-Shah E, Tambor E, Bernhardt BA, Geller G, Rodgers J, Holtzman NA. Assessing Mass Media Reporting of Disease-Related Genetic Discoveries: Development of an Instrument and Initial Findings. Science Communication 2003;24:458-478.