
In April 2010, this journal carried an editorial entitled ‘Fraud or flawed: adverse impact of fabricated or poor quality research’ [1]. In it, Moore et al. described the difficulty in telling when trial data were fabricated, and how systematic reviews could help by demonstrating aberrations that appeared when data from different studies were presented together. Moore et al. used data from one researcher in Japan to illustrate their point, referring to a previous comment on this researcher’s data that was published 10 years earlier, with the beautifully understated title ‘Reported data on granisetron and postoperative nausea and vomiting by Fujii et al. are incredibly nice!’ [2].

The Journal received a number of responses to Moore et al.’s editorial, amongst them one from a reader who bemoaned the fact that the evidence base remained distorted by that researcher’s work, and who challenged anaesthetic journal editors to do something about it. This all happened just a few months after an unprecedented international collaboration between anaesthetic journal editors and publishers had led to the retraction of almost 90 published papers (including six from this journal) by Joachim Boldt [3]. With this in mind, I counter-challenged the correspondent to perform an analysis of Fujii’s work more in-depth than that of Kranke et al. [2]. If it suggested that Fujii’s data should be removed from the evidence base, this analysis might then be used by the same group of editors to confront Fujii and/or his institution(s). So while the correspondent set about his work, his letter remained unpublished, and the international group of editors began to discuss how the case might be handled.

That correspondent was John Carlisle, and in this issue of the Journal he presents his analysis, some 19 months, 18 iterations, two consultations with the Committee on Publication Ethics (COPE) and three statisticians later [4]. This is the piece of work that was going to be presented as evidence to Fujii’s institutions – but in the event, it was never used for this purpose, for as we reached the final stages of the manuscript’s preparation, there were rapid developments in Japan. In response to questions raised over a completely separate and coincidental concern that arose in late 2011, Fujii's institution (Toho University, Tokyo) set up an investigation that led to his dismissal in February of this year for lack of ethical approval (see Anaesthesia homepage: http://onlinelibrary.wiley.com/journal/10.1111/%28ISSN%291365-2044).

Carlisle’s analysis [4] is an extraordinary piece of work. Though necessarily based on complicated calculations, its message is easily accessible to the reader through its graphs and p values, which clearly show the degree to which Fujii’s data deviate from what would be expected. In an accompanying editorial [5], Pandit explains the methods used and their context. Every submission to Anaesthesia is screened, before it is reviewed, using specific software that detects plagiarism by looking for overlap with other published work [6]; could Carlisle’s methods similarly form the basis of a screening method for detecting fabrication?
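It is tempting to wonder what such a fabrication screen might look like. What follows is emphatically not Carlisle’s method – readers should consult [4] and [5] for that – but a minimal sketch of one intuition behind it, using invented summary statistics: in genuinely randomised trials, p values for baseline comparisons between groups should be roughly uniformly distributed between 0 and 1, whereas fabricated baselines are often ‘too nice’, crowding towards 1.

```python
# A toy sketch (not Carlisle's actual method) of one idea behind screening
# for fabricated data: baseline p values across honestly randomised trials
# should be roughly uniform on [0, 1]; values piling up near 1 suggest
# groups that are implausibly well matched. All numbers are invented.
from scipy import stats

# Hypothetical summary statistics for a baseline variable (e.g. weight, kg)
# in four two-arm trials: (mean_a, sd_a, n_a, mean_b, sd_b, n_b).
trials = [
    (61.2, 8.1, 30, 61.3, 8.0, 30),
    (58.9, 7.5, 40, 59.0, 7.6, 40),
    (62.4, 8.3, 25, 62.4, 8.2, 25),
    (60.1, 7.9, 35, 60.2, 8.0, 35),
]

# p value for the baseline difference in each trial, computed from the
# summary statistics alone (Welch's t-test).
p_values = []
for m_a, sd_a, n_a, m_b, sd_b, n_b in trials:
    _, p = stats.ttest_ind_from_stats(m_a, sd_a, n_a, m_b, sd_b, n_b,
                                      equal_var=False)
    p_values.append(p)

# Compare the collection of p values against the uniform distribution;
# a small Kolmogorov-Smirnov p value flags a suspiciously non-uniform
# (here, 'too nice') pattern across the trials.
ks_stat, ks_p = stats.kstest(p_values, 'uniform')
print('baseline p values:', [round(p, 3) for p in p_values])
print(f'KS test against uniform: D = {ks_stat:.3f}, p = {ks_p:.4f}')
```

With only four invented trials this would of course prove nothing; the power of a real analysis comes from combining probabilities across many trials and many variables, both continuous and categorical.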

Not so long ago, I described in these pages the murky world of research misconduct [6]. One question I didn’t cover in any depth is whose responsibility it is to look into suspect cases. This is explored by Wager in another accompanying editorial [7]. As she points out, institutions don’t always do as they should when approached by editors with concerns – though the institutions in both the Boldt and Fujii cases should be congratulated for their investigations. As Wager describes, we all have responsibility: from readers through editors, reviewers and publishers to co-investigators, institutions and even governments. From an editorial point of view, being able to turn to COPE for advice and support, as well as to other groups such as the World Association of Medical Editors (see http://www.wame.org/), is enormously useful, and such organisations have a valuable role in keeping up the pressure on all parties, at all levels, reminding them of their obligations.

Medicine – and, it seems, anaesthesia in particular – has seen a flurry of research misconduct scandals recently, with potentially serious implications for the wellbeing of patients who might have received treatments that were ineffective or harmful, and/or been denied those that were beneficial. A further, and possibly longer-lasting, harm is the damage to public trust that results from such cases and the publicity around them. Our specialty is still reeling from the Reuben [8] and Boldt cases [3]; the last thing we needed was another, even bigger, case. We can only hope that the rule that bad things come in threes holds true.

Such cases are deeply troubling, raising difficult questions about the framework within which research is done, the motivation and support of those who are prepared to risk so much personally and for their patients, and the reliability of the evidence upon which we rely. However, a lot has changed for the better since Kranke et al.’s letter a decade ago: there is greater awareness of the problem, stronger resolve to stamp out misconduct, and better tools for detecting wrongdoing. Carlisle deserves our gratitude for contributing to the last of these.

Acknowledgement


I am grateful to the authors of these articles and to the Editorial Board, Editorial Team, Publishers, COPE and my fellow Editors-in-Chief of other journals (in particular, Steve Shafer of Anesthesia and Analgesia and Don Miller of Canadian Journal of Anesthesia), for their support.

Competing interests


I finished my second and final term as a member of COPE’s Council in March 2012.

References
