What do I mean by ‘negative results’? The term applies to studies conducted in both human and animal subjects and encompasses three different types of result:
- a truly inconclusive study with ‘no evidence of effect’, generally because the study was too small and inadequately powered (several of the small studies included in the Cochrane systematic review of stroke units fall into this category);
- a well-conducted study, which is sufficiently large to provide ‘clear evidence of no effect’, i.e. that any effect is too small to be worth pursuing either clinically or in further research (the Clots in Legs Or sTockings after Stroke (CLOTS) trial of graded compression stockings for deep vein thrombosis prevention is a good example); or
- clear evidence of harm when benefit had been expected.
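The ‘inadequately powered’ point in the first category can be made concrete with a quick calculation. The sketch below is purely illustrative (the event rates and sample sizes are invented, not taken from any trial discussed here): it approximates the power of a two-arm trial to detect a modest difference in event rates, using the normal approximation for comparing two proportions.

```python
import math

# Illustrative only: event rates and sample sizes below are invented, not
# taken from any trial discussed in the text.  Approximate power of a
# two-sided z-test comparing two proportions (normal approximation).

def power_two_proportions(p1, p2, n_per_arm):
    z_crit = 1.96  # two-sided critical value at alpha = 0.05
    se = math.sqrt((p1 * (1 - p1) + p2 * (1 - p2)) / n_per_arm)
    z = abs(p1 - p2) / se
    # Standard normal CDF via erf; the tiny opposite-tail term is ignored.
    return 0.5 * (1 + math.erf((z - z_crit) / math.sqrt(2)))

# Detecting a 25% vs 20% event rate: a 50-per-arm trial is badly underpowered.
for n in (50, 200, 1000):
    print(f"n = {n:4d} per arm: power = {power_two_proportions(0.25, 0.20, n):.2f}")
```

With these invented numbers, a 50-patient-per-arm trial has well under 20% power, so a ‘negative’ result from it says almost nothing either way; only the much larger trial approaches the conventional 80% target.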
Why should negative studies be published? The most important reason is ethics. If human subjects have given consent to participate in a clinical research study, be it a treatment trial or an observational study, they have done so in the clear understanding that the research results will in some way be of benefit to other people and contribute to scientific advance. Furthermore, these human subjects have exposed themselves to risk and inconvenience by participating in the study, and the justification for that ‘good deed’ should be that the author makes the data publicly available and ensures they are put to good use. Although animals do not give consent to participate in research, we still have an ethical duty to make the best use of the data from any animals used in such research.
There are many scientific reasons for publishing negative or neutral (i.e. uninformative) studies; chiefly, they contain valuable information, which should become part of the scientific record on the subject under study. Systematic reviews are an essential part of the research cycle. When scientists plan new research studies (clinical trials or observational studies in humans or experiments on laboratory animals), the first step should be a systematic review of the evidence. Such a review may reveal that the question has already been answered reliably, or it may indicate that a further study is justified. For example, the UK Medical Research Council and the UK Health Technology Assessment Programme require that new applications for clinical trial funding should have performed (or at least cited) an up-to-date systematic review of the subject to ensure that the new research really is justified. If, during the course of a clinical study, a large negative or neutral study is published, this might require the trial steering committee or data monitoring committee to pause for thought and consider whether the study should continue or be modified in some way. At the end of any clinical study, the results should preferably be presented in the context of all the available evidence. For clinical trials, this is a requirement of the Consolidated Standards of Reporting Trials (CONSORT) guidelines on publication of randomized clinical trials.
Systematic reviews can put a small but strikingly positive study into context. For example, when a strikingly positive small study is viewed within the totality of the available evidence, it may become evident that the result is a freak ‘lucky’ one arising from the play of chance, and not a reliable estimate of the true effect [4, 7]. The availability of the ‘negative’ studies then ensures that the one small positive (but outlying) study is interpreted appropriately. There are numerous examples in the literature where small clinical trials or small studies of genetic associations produce striking positive results (often leading to a high-profile paper in a major journal), which are not subsequently replicated when larger, more reliable (neutral or negative) studies are published [8-10].
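The ‘freak lucky result’ phenomenon is easy to demonstrate by simulation. The sketch below (all numbers invented for illustration) runs many small trials of a treatment that truly does nothing; with arms this small, chance alone occasionally produces a dramatically ‘positive’ trial.

```python
import random

random.seed(1)

# Hypothetical simulation (no real trial data): 200 small two-arm trials of
# a treatment with NO true effect.  Each arm has 25 patients with a 30%
# event rate; both arms are drawn from the same distribution.

def run_trial(n=25, event_rate=0.30):
    """One null trial: event counts in control and treated arms."""
    control = sum(random.random() < event_rate for _ in range(n))
    treated = sum(random.random() < event_rate for _ in range(n))
    return control, treated

results = [run_trial() for _ in range(200)]
# Call a trial "strikingly positive" if the treated arm had at most half
# the control arm's events (an arbitrary, illustrative threshold).
striking = sum(1 for c, t in results if c > 0 and t <= c / 2)
print(f"{striking} of 200 no-effect trials looked strikingly positive")
```

Only the handful of ‘strikingly positive’ trials would tempt anyone to write them up; the unpublished remainder are precisely the context a systematic review needs in order to recognize the outliers as chance.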
To be reliable, systematic reviews need to include all relevant randomized trials. The Stroke Unit Trialists’ Collaborative systematic review of organized inpatient stroke care very clearly demonstrates the importance of making negative trial results available for inclusion in systematic reviews. Fourteen of the 16 trials comparing a stroke unit with general medical ward care were ‘negative’ for the effect on death (they were all underpowered to detect a moderate but clinically important difference), and four were never published. However, the overall estimate of the effect from all the trials together is that stroke unit care reduces the odds of death by 17% (95% confidence interval 4–29%) (P = 0·01), a result which has helped support the introduction of stroke unit care into practice worldwide. What would have happened if all of the apparently ‘negative’ studies had remained unpublished? We might well have ‘lost’ one of the most significantly beneficial interventions for stroke!
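The arithmetic behind this kind of pooling can be sketched briefly. The four ‘trials’ below are invented for illustration (they are not the stroke-unit trials); the point is that standard fixed-effect inverse-variance pooling of log odds ratios can yield a decisive combined estimate even when every contributing trial is individually inconclusive.

```python
import math

# Illustrative sketch only: these four "trials" are invented, NOT the
# actual stroke-unit trials.  Each row is
# (deaths_treated, n_treated, deaths_control, n_control).
trials = [
    (20, 100, 26, 100),
    (15, 80, 19, 80),
    (30, 150, 37, 150),
    (11, 60, 18, 60),
]

log_ors, weights = [], []
for a, n1, c, n0 in trials:
    b, d = n1 - a, n0 - c                  # survivors in each arm
    log_or = math.log((a * d) / (b * c))   # log odds ratio for this trial
    var = 1/a + 1/b + 1/c + 1/d            # approximate variance of the log OR
    log_ors.append(log_or)
    weights.append(1 / var)

# Each trial on its own is "negative": its 95% CI crosses an OR of 1.
for log_or, w in zip(log_ors, weights):
    assert math.exp(log_or + 1.96 * math.sqrt(1 / w)) > 1

# Fixed-effect pooled estimate: inverse-variance weighted mean of log ORs.
pooled = sum(w * lo for w, lo in zip(weights, log_ors)) / sum(weights)
se = math.sqrt(1 / sum(weights))
lo95, hi95 = (math.exp(pooled + z * se) for z in (-1.96, 1.96))
print(f"pooled OR = {math.exp(pooled):.2f} (95% CI {lo95:.2f}-{hi95:.2f})")
```

Every individual confidence interval crosses 1, yet the pooled interval lies wholly below it — exactly the situation the stroke-unit review describes, and the reason unpublished ‘negative’ trials distort the pooled answer.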
There are many obstacles to publication of ‘neutral or negative’ studies; the first is the author. A study that does not have a big fat juicy P-value does not stir the author's adrenalin sufficiently to write the paper in the first place. It may not be in the commercial interest of the sponsors of negative research for the results to be published. Journals like clear stories with definite results to increase their sales; negative studies do not do much for journal income! There have been reports that some journals would publish only results from studies that are statistically significant at the P < 0·05 level. This is clearly absurd, but I suspect that the practice still continues (perhaps rather more covertly these days). In the fast-moving world of the Internet and social media, where only the most strikingly positive results gain their ‘15 minutes of fame’, it is difficult to ensure that neutral or negative results from well-conducted research will reach the public domain.
There are a number of possible solutions. Journal editors could (perhaps) be persuaded to prioritize publishing well-conducted negative research over poorly conducted positive research. There are easier alternatives: although the Journal of Negative Results in Biomedicine (http://www.jnrbm.com/) provides a rather specific destination, there are now many open-access journals. Furthermore, several grant-giving bodies, such as the UK Medical Research Council and the Wellcome Trust, expect data from research they have funded to be published in an open-access format (applications for research grants now need to incorporate the anticipated costs of such publications). For authors without such funding, many institutions now make datasets accumulated by their scientists freely available via open data repositories (the University of Edinburgh DataShare site, http://datashare.is.ed.ac.uk/, is one example).
In summary, if a study involves informed consent in humans or the use of whole animals, and has been satisfactorily conducted, it should appear in the publicly available scientific record, irrespective of its overall conclusions.