by Jon Turney
In theory, it is easy to find out about science. Open publication of results is an article of faith for most researchers. In practice, it is an almost impossible task. There is so much published research - mountains of the stuff. How on earth do you know where to look? And how do you know if what you find is any good?
With so many thousands of papers prepared every week, someone has to choose. And mostly, the choice is left to a select band, the staff of a few key journals which by general consent publish the really significant new work across the sciences.
Those journals - the two weeklies Nature and Science for all the natural sciences, and a few more in medicine - now wield extraordinary influence. There is a paradox here. There is so much new, though mostly trivial, science pouring from the labs that these journals can cover only an ever-diminishing fraction of it. But it is just because the volume of research is so overwhelming that thousands of readers rely on them to filter gold from grit.
Researchers know this, of course. It is not just that these few titles have what the citation buffs call a high impact factor. They are also the way to catch the eye of policy-makers, funding agencies, even potential shareholders in companies. And helping the process along is one group of readers with the power to amplify the journals' influence still further: the journalists.
Ever noticed how good science stories always seem to break in the second half of the week? That's because the weekly journals all appear then. If a story breaks on Sunday or Monday, chances are someone has broken the embargo on the weekly press release from Nature or The Lancet, or there has been a leak of results about to appear in one of those journals. Dolly the sheep was the most memorable recent example.
Even if an early announcement is deliberate, the full details often still have to wait for a paper to appear. Few things were more awkward for the Government after the announcement in March last year that BSE might be linked to a new variant of human Creutzfeldt-Jakob disease than the two-week delay before full details appeared in The Lancet.
The BSE affair is an extreme example of the pressures on this rather precarious system of selection and validation of the cream of scientific papers. But how do the editors of these journals handle the more everyday pressures of their position as the key gatekeepers between science and the wider world? And how should we view the use of the papers they publish by journalists, who regard publication in Nature as news in itself?
Philip Campbell, editor of Nature since the end of 1995, gave his answers at a meeting last month at University College London. More than four-fifths of the papers submitted to Nature are rejected, so how do they choose? The prime criterion, he maintained, is unquestionably scientific impact. A unique result may also appeal, even if it is not especially profound science, "like the person who counted, I think it was, a million drops of water from a tap...".
Social impact comes some way behind, Campbell suggests, but is definitely a factor: "There's no question that if it's good science, and it's going to play a key role in some public issue of the time, we will take that into account." But, he insisted, Nature will not pander to journalists: social impact is not the same as news value, and news value alone will not suffice if the science is merely ordinary. "One of my editors came to me recently with a paper about a genetic link with a particular condition. I decided that we shouldn't publish it. It was clear that it was going to make a lot of news, but the reason was purely because of the interest in the condition. The science itself wasn't particularly novel."
He also has to be the final arbiter when personal animosities cloud the supposedly objective evaluations of papers from an author's scientific peers. "It's very hard to battle around the peer review system and exert some independent judgment, but one has to do it." Would-be authors should try to steer clear of such controversies, it appears, because Nature errs on the side of caution. "What happens, I think, is that we reject good papers, rather than publish bad ones. I think it's probably safer for the public, but it's not so good for the scientists."
And so the select few papers finally appear in the journal. And an even smaller selection are featured in Nature's weekly press release. This, as Tom Wilkie, former science editor of The Independent, confessed, is the real influence on the wider reporting of important science. For hard-pressed hacks, the system works perfectly. In the first place, publication of a paper in a peer-reviewed journal provides a news event. And the review process means no checking is needed. The papers are assumed already to have gone through the internal quality-control of science, so there is no real scope for journalistic inquiry.
The precious press release then answers the next two most important questions about a piece of science news: can you understand the story, and can you turn it around quickly? So the journal paper which is supposedly the reason for the story probably doesn't even get read, but a simplified version gets into the next day's newspaper without any great effort. Turning stories round fast has become the most highly prized skill, and rewriting the press release one way of achieving it.
The system, then, suits the science journals and the popular media very well. But how well does it serve the wider public? Both Campbell and Wilkie had some reservations. From Nature's point of view, the lack of competition can be worrying. "Are there enough other voices, commenting, or offering supplementary information?" wondered Campbell. Tom Wilkie, for his part, suggested that the increasingly heavy reliance of the press on the peer-reviewed journals may be happening at just the wrong time. The scientific enterprise is changing rapidly, and the journals may be the only gatekeepers, but they may not be the best.
Not until 29 August last year did Nature publish a News and Views article from David Skegg at the University of Otago in New Zealand declaring that British science relating to BSE was too little, too late. Epidemiological research, he wrote, had been carried out by too few scientists, involving too few studies and too few animals. "Laboratory research has been no more timely or adequate." He dismissed as "paltry" the public investment in BSE research between 1988 and 1991, and, in effect, accused Britain of skimping. Why, one wonders, was Nature not publishing that sort of criticism years earlier?
Equally, the press tend to neglect such issues. As Richard Horton, editor of The Lancet, put it: "What are the motivations underlying the funding of research? What are the motivations of the scientists doing the research? What creates the agenda for doing the research? These are the issues that are not played out in the media at all in this country." Relying on the journals, in other words, implies taking science as a given, simply reporting on work which is already done. That is certainly a way to help people find out some things about science. But it restricts the reporting to the things that scientists want the rest of us to know. The journals certainly select stringently, but they do not pose new questions. The problem is, with the symbiotic link between the prestige vehicles for scientific publishing and the small cadre of science correspondents, who will?