The need for publishing the silent evidence from negative trials


The history of mankind is written by the winners and, similarly, the history of psychopharmacology is written by winning drugs. And yet there is little we can learn from victories and much to be learnt from failures. Hence, the problem of publication bias, whereby positive studies are considered by editors – and, probably, by many scientific readers – as particularly appealing whilst negative studies are seen as dull and irrelevant, has to be considered not only from a legal perspective but also from an academic one. Nowadays, publication bias constitutes a solid barrier not only to our pharmacological practice but, especially, to the improvement of clinical trial design.

To begin with, the lack of publication of clinical trials with negative outcomes deprives us of the chance to improve our clinical practice by learning ‘what not to do’ – i.e. wrong titration or dose – and ‘who not to treat’ – by allowing us to identify a non-responder profile. Moreover, and perhaps worse, publication bias prevents us from learning ‘what went wrong’ in a given trial and, hence, from improving our clinical trial design. This is particularly important in the case of negative studies of drugs that have later received agency approval on the basis of other positive studies, and in the case of failed studies – where a design problem blunted the outcomes of both the experimental drug and the comparator. Hence, the distinction between negative and failed studies is important and, to make it possible, three-arm designs (including placebo, an active comparator and the experimental drug) should be prioritized (1) and subsequently published regardless of outcome.

Silent evidence versus salient evidence

With all their pros and cons (2), randomized clinical trials are still considered a sort of almost ideal experiment regarding treatment efficacy, with a relevant impact on clinical practice. A quick search through the usual tools (MEDLINE, PSYCHLIT, COCHRANE) reveals an unequivocal tendency to publish randomized clinical trials whose results favour the experimental treatment, constituting a sort of ‘salient evidence’ of the efficacy of a given drug which per se masks the ‘silent evidence’ of unpublished negative studies.

Drug companies play an outstanding role within the biomedical research field, usually performing neat and highly controlled research. The proportion of the most frequently cited articles funded by industry increases year by year (3), and we all have to acknowledge the role of drug companies as one of the main engines driving progress in biomedical research.

However, several authors have described a strong relationship between publication bias and the source of funding of a study. Allegedly, research funded by pharmaceutical companies would be more likely not to see the light of day in the event of negative results (4, 5), and industry-sponsored trials would thus be more likely to report favourable outcomes (6, 7). Perhaps one of the best-known cases of publication bias within psychiatry is that regarding the use of lamotrigine in bipolar disorder. As pointed out by Ghaemi and colleagues (8), including both negative and positive studies – something that, as the authors stress, ‘was not a voluntary act but rather because of legal judgment brought by the state of New York after a lawsuit about paroxetine use in children’ – changed the positioning of lamotrigine within the bipolar field.

Despite this, we should not forget that publication bias may be the rule rather than the exception even for those trials where, when it comes to conflicts, scientific glory or academic renown take the place of financial interests. This is the case for randomized clinical trials on the efficacy of non-pharmacological interventions. Negative studies on psychotherapies are hard to find, and the few that exist are invariably full of minor post hoc positive findings and baroque elaborations attempting, somehow, to present positive conclusions beyond the existing negative results. Moreover, the few available psychotherapy trials whose outcome does not favour the experimental intervention invariably receive much criticism from strong believers in the treatment that has failed to show efficacy (note the mild aesthetic nuisance of the word ‘believer’ when it comes to a scientific discipline). A good example of both phenomena in the same paper is the negative study on the use of cognitive-behavioural therapy (CBT) for bipolar disorders published a few years ago by Scott and colleagues (9), who found no efficacy of individual CBT for relapse prevention in a well-designed and adequately powered trial. This is one of the few non-failed, clearly negative studies on the use of psychotherapy for a severe psychiatric disorder, authored by a relevant panel of experts and with a sample far larger than in similar studies (n = 253). And yet, despite admitting the non-efficacy of CBT and failing to show any difference favouring the experimental treatment on the two main outcome measures, the authors still strained for positiveness in their conclusions, and the National Institute for Clinical Excellence guidelines report it as a positive study (10).
Moreover, the authors were brave enough to register their statistical analysis strategy beforehand (something rare and uncommon amongst psychotherapy trials) and honest enough to publish a negative study. Even then, they had to face strong criticism from colleagues within their own paradigm (11). Whilst the pharmacotherapy literature is full of debates on the impact of conflicts of interest (12), little is disclosed in the area of psychotherapy, where financial and celebrity-related conflicts can be as relevant as those involving drug manufacturers.

All this without forgetting the secular treatment paradigms that hide behind their ‘no evidence of efficacy’ to keep their therapeutic approaches safely protected from potential ‘evidence of no efficacy’. More than silent evidence, we may call this phenomenon ‘resilient non-evidence’; it constitutes, even today, one of the most spectacular and least justifiable gaps between evidence and clinical practice in psychiatry.

From silent evidence to hidden evidence: a step forward?

To deal with the problem of publication bias, and to prevent it from being used to attack the drug industry as a whole, in 2002, as a result of the Food and Drug Administration (FDA) Modernization Act, the Pharmaceutical Research and Manufacturers Association (PhRMA) published its ‘Principles on the Conduct of Clinical Trials and Communication of Clinical Trial Results’. In these principles, PhRMA companies commit to the ‘timely communication of all meaningful results of clinical trials, whether those results are positive or negative’. Making trial registration mandatory was, no doubt, another step forward for the transparency of drug companies’ research, thus reducing the amount of silent evidence from unpublished negative clinical trials. Nowadays, all registered trials and their posted results are publicly available online.

Unfortunately, the meaning of ‘timely communication’ of results admits some rather broad interpretations, ranging from an oral communication at a meeting to results being posted on a website, and does not make publication in the form of a scientific paper strictly compulsory. And even if it were compulsory, it would be difficult to ensure publication in a well-known and widely disseminated journal.

Interestingly, journal prestige and impact factor seem to predict the popularity of a scientific paper more than any other variable (13). If you want your research manuscript to become popular, you should therefore publish it in a mainstream journal. But where there is a law there is a loophole: there will always be journals small enough to publish your negative results if you do not want them to be seen.

Nor does the issue seem to improve with the launch of internet-based e-journals: as recently reported by Jakobsen and colleagues (14), author-paid open access publishing preferentially increases accessibility to studies funded by industry, which could favour the dissemination of pro-industry results.

We may call this phenomenon ‘hidden evidence’, or the art of publishing your negative results in small journals or on the remotest websites, safe from public exposure. Hidden evidence is a major source of bias in meta-analyses and systematic reviews and may make effect sizes look larger than they actually are. Although nowadays pharmaceutical companies seem keener to publish their negative studies, there is still a significant delay in the publication of studies with a larger placebo response, as opposed to those with a smaller placebo response, which were therefore more likely to be positive and to show large effects (15).
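The inflationary effect of hidden evidence on pooled effect sizes can be made concrete with a small simulation. The sketch below is purely illustrative, with invented parameters (a true standardized effect of 0.2, 50 patients per arm) rather than data from any real meta-analysis: if only trials whose results reach nominal statistical significance are published, the average effect among published trials substantially exceeds the true effect.

```python
import random
import statistics

# Illustrative simulation (invented parameters): many small two-arm trials
# of a drug with a modest true standardized effect of d = 0.2.
random.seed(42)

TRUE_EFFECT = 0.2            # true standardized mean difference
N_PER_ARM = 50               # patients per arm (hypothetical)
SE = (2 / N_PER_ARM) ** 0.5  # approximate standard error of d (~0.2)

def observed_effect():
    """One trial's observed effect: the truth plus sampling noise."""
    return random.gauss(TRUE_EFFECT, SE)

all_trials = [observed_effect() for _ in range(10_000)]

# 'Salient evidence': only trials reaching nominal significance in favour
# of the drug (z > 1.96) are published; the rest become 'silent evidence'.
published = [d for d in all_trials if d / SE > 1.96]

mean_all = statistics.mean(all_trials)       # close to the true 0.2
mean_published = statistics.mean(published)  # substantially inflated
print(f"mean over all trials:       {mean_all:.2f}")
print(f"mean over published trials: {mean_published:.2f}")
```

A meta-analysis restricted to the published trials would recover an effect of roughly 0.5 rather than the true 0.2, which is precisely how hidden evidence distorts the apparent efficacy of a treatment.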

Peer-reviewed results-blind preacceptance of publication: a possible solution

Randomized controlled trials are highly esteemed by both journals and readers, as shown by the fact that they are the second most cited study design, immediately after meta-analyses (16). Hence, outstanding scientific journals are highly predisposed to publish this type of research, whether in the regular journal or in supplements which, at least in the field of psychiatry, are robustly cited and influential (17). In other words, journals need to publish good randomized clinical trials to maintain their bibliometric status no less than companies need to publish their randomized clinical trials in good journals to achieve proper dissemination of findings and influence clinical practice. The problem comes when the choice of publication journal depends on the sign (positive or negative) of the trial, which is clearly at the origin of ‘hidden evidence’. We suggest a possible solution to this problem: results-blind preacceptance of manuscripts.

Just as making trial registration compulsory has allowed for greater control in the field, the same agencies could compel companies (and other researchers, e.g. those running trials on psychological interventions, alternative remedies or biophysical treatments) to publish the main manuscripts of randomized clinical trials – regardless of the results – in mainstream journals. And the only way to properly ensure this is to make it mandatory to submit the manuscript before the results are known. This should not represent any problem for the reviewing process; rather the opposite: sometimes results are too shiny to let us see the methods. And yet the quality of a research manuscript is better defined by its methods than by its results. In other words, when acting as journal referees, we should all first consider the quality of the methods described (including design, inclusion/exclusion criteria, main outcome measures, et cetera) rather than the results reported. At the time of trial registration, companies – or, in the case of non-industry-funded trials, the authors themselves – should already submit a draft of the manuscript. This submission would obviously include only the introduction and methods sections, which would be peer-reviewed and accepted or rejected regardless of the results, unknown at the time of submission. The journal and the authors (or company) would then agree and legally commit to publish that specific trial – once finished – in that specific journal.

This contingent acceptance would then be confirmed or not on the basis of the final fulfilment of other core elements of the study, such as the quality of recruitment, the statement of results, and their interpretation and discussion.

This proposal represents a radical switch from the current culture of publishing and the peer-review process and, of course, has some limitations and aspects that need to be improved. Authors, for instance, could still choose not to publish contingently accepted papers on trials that happened to be negative, despite the agreement, by simply lowering the final quality of the manuscript, which would lead to rejection and, hence, to silencing another piece of negative evidence. On the other hand, completing a randomized clinical trial (from design to submission) usually takes 2–3 years, and during this time editorial policy or editorial board composition may change, which could lead certain journals not to feel at ease with some preaccepted studies.

All these questions would need to be properly addressed and discussed, especially by journal editors and other experts in the field. But we still feel that this would represent a step forward. For instance, no author (or company) would risk publishing their potentially fancy results in a minor journal. This way, mainstream journals would publish randomized clinical trials which may or may not favour the experimental drug, adding a measure of transparency to the relationship between journals and drug companies and limiting the impact of silent and hidden evidence.


Francesc Colom gratefully acknowledges the support and funding of the Spanish Ministry of Health, Instituto de Salud Carlos III, CIBER-SAM. Dr Colom is also funded by the Spanish Ministry of Science and Innovation, Instituto Carlos III, through a ‘‘Miguel Servet’’ postdoctoral contract (CP08/00140) and a FIS grant (PS09/01044).

Declaration of interest

Dr Francesc Colom has served as advisor or speaker for the following companies: Astra Zeneca, Bristol-Myers, Eli-Lilly, Glaxo-Smith-Kline, MSD-Merck, Otsuka, Pfizer Inc, Sanofi-Aventis, Shire and Tecnifar, and has received research funding from the Spanish Ministry of Science and Innovation – Instituto de Salud Carlos III.

Dr Vieta has received research grants and served as consultant, advisor or speaker for the following companies: Almirall, Astra-Zeneca, Bristol-Myers Squibb, Eli Lilly, Forest Research Institute, Geodon Richter, Glaxo-Smith-Kline, Janssen-Cilag, Jazz, Lundbeck, Merck, Novartis, Organon, Otsuka, Pfizer Inc, Sanofi-Aventis, Servier, Solvay, Schering-Plough, Takeda, United Biosource Corporation, and Wyeth, research funding from the Spanish Ministry of Science and Innovation, the Stanley Medical Research Institute and the 7th Framework Program of the European Union.