Sir Iain Chalmers, founder of the Cochrane Collaboration, has done more than anyone else to make sure medical treatments are based on evidence – not prejudice – and that medical research is aimed at producing treatments that patients actually want. Julian Champkin spoke to him.

The logo of the Cochrane Collaboration shows diagrammatically the results of a systematic analysis of trials of drugs called corticosteroids. These drugs are given to women who are expected to give birth prematurely. They reduce the risk of their babies dying by between one-third and one-half. They were first trialled in 1972. For decades the treatment was inadequately taken up.

“It led to dead babies”, says Sir Iain Chalmers. “Tens of thousands of them. It led to families bereaved unnecessarily.” Less importantly, but still important in these financially strapped times, it led to unnecessarily high spending on intensive care. Obstetricians simply had not realised that the treatment was so effective. A systematic review of corticosteroids was not published until 1989; only then was the evidence from a large number of trials synthesised, summarised and made available in one document, in one place, for doctors to find and to read. By 1991 the benefits of the drug were very clear – and at last it began to be widely used. “It was a dramatic example of the need to create an international organisation to give doctors, other health professionals and patients the evidence they need to make informed decisions about treatments.”

The Cochrane Collaboration was envisaged by Iain Chalmers in 1991 to do exactly that.

That it is not called the Chalmers Collaboration is, I suspect, due to the man's modesty. He named it after Archie Cochrane, who inspired much of the work; Cochrane died in 1988, so was in no position to object to his colleague giving him the whole of the glory. Chalmers generally declines honours (though he was persuaded to accept a knighthood in 2000 – the Lancet reported that he accepted only because he might be able to use it as a way of getting people to take note of the things he feels passionately about). He posed for a National Portrait Gallery photograph only on the condition that he was surrounded by pictures of dozens of others who had inspired him, from James Lind (1716–94) onwards.


The Cochrane logo, illustrating a systematic analysis of corticosteroid trials. (The Cochrane logo is a registered trademark of The Cochrane Collaboration)


© Julia Fullerton-Batten/National Portrait Gallery, London

Read the Cochrane Collaboration website – or, better, talk to Chalmers – and you find, told in clear, calm language, hair-raising examples to make you wonder if we really do live in a twenty-first century that has cast out folklore and prides itself instead on scientific and rational medicine. Poor treatments, useless treatments, even harmful treatments are neither unknown nor even rare. They happen not just occasionally but become established medical practice, taught in medical schools, in the teeth of the evidence, to succeeding generations of clinicians. Like viruses, they have become self-replicating. And, as Chalmers repeatedly says, it is a betrayal of patients by the scientific community, and it is an ethical disgrace.

Chalmers has been described as the “maverick master of medical evidence”. You could also call him the systematic review man. He has probably done more than anyone in recent years (or perhaps ever) to make sure that medical treatments are based on good evidence and good statistics. Cochrane reviews do not just look at a trial of a medical treatment: they endeavour to look at all the results of all the trials that have been done on that treatment – strong results, weak results, conflicting results, results that are biased because of inadequacies in the test sample or the methods or the analysis – and synthesise them into summaries that medical staff can find easily, read easily and use with confidence. Cochrane systematic reviews are widely regarded as the gold standard of analyses of medical treatments. They keep dozens of people busy, doing, when appropriate, meta-analyses – which you could call doing the statistics on statistics. Yet Chalmers is a clinician, not a statistician. Which means that people, not statistics, are at the heart of his passion, of what drives him and his philosophy.

Which in turn has lessons, both for statisticians and for clinicians: keep sight of the ends, not the means; and look outside your own narrow ghetto of specialisation. “Statistics has its own fascinations. I used to find statisticians arguing passionately over what seemed to be different statistical theologies”, he says. “But the arguments only matter to outsiders, to patients, to people who are ill, if they affect the treatments they get offered. Statisticians should ask themselves more often whether the theologies make a blind bit of difference to the practical results.”

What research is needed?

What research is needed but remains undone? What research will benefit patients and clinicians – rather than benefiting just the researchers?

Of more than 25 000 reports published in six leading basic-science journals between 1979 and 1983, 101 included confident claims that the new discoveries had clear clinical potential. Yet only five had resulted in interventions with licensed clinical use by 2003, and only one led to the development of an intervention that is used widely.

Evidence suggests that the end users of research are much less interested in drug research than are the institutions and investigators who fund and do research.

Citation patterns show that previous research is being ignored. An analysis of clinical trials reported over four decades showed that, irrespective of the number of relevant previous trials, fewer than a quarter of previous studies (and a median of only two) had been cited in reports.

More than half of the public and charitable investment in research in the UK and the US is allocated to basic research. This long-standing funding pattern is partly a result of assertions made three decades ago by two scientists, Julius Comroe and Robert Dripps [6], who claimed that 62% of all reports judged to be essential for subsequent clinical advances were the result of basic research. However, an attempt to replicate the findings showed not only that Comroe and Dripps’ analysis was “not repeatable, reliable or valid”, but also that only 2–21% of research underpinning clinical advances could be described as basic.

Source: Chalmers et al. [3]

The basic idea of the Cochrane Collaboration is, as he puts it, that “attempts should be made to synthesise results of clinical trials in health care”. More simply, all the research that has been done on a topic so far should be taken into account when deciding which treatments are likely to work best. Revolutionary? Earth-shattering? Weird and dangerous and wrong? Many physicians seemed initially to think so. “Basically, it is blindingly obvious. Most people you stop in the street would think you are a bit barmy for asking these questions.”

The blindingly obvious is sometimes one of the hardest things to notice. What drove him to see it? Partly, as he told listeners when BBC Radio 4 devoted an episode of The Life Scientific to him in 2012 (see http://www.bbc.co.uk/programmes/b01cjwtd), it was sparked by a period as a UN doctor in the Gaza Strip: “I found that some of the things I had been told at medical school were quite clearly wrong. What was particularly annoying in retrospect was that the information that I could have used in the interests of my patients had already been published when I went out there. It just had not been brought together, presented in an understandable way and made available to me and my patients. Everything started from that.” It was reinforced when he was a young resident in a maternity hospital in the early 1970s.

“I would be called to the bedside of a woman in labour; and instead of asking myself what was the best treatment for her I would find myself asking ‘Who is this woman's consultant?’ Because all the consultants had their own favoured treatments and responses, whatever I suggested had to fit in with each of them.” To base a woman's treatment on the non-medical grounds of which consultant she happened to have been assigned to was very clearly not necessarily in her best interests – nor was it good clinical medicine.

Nor was it evidence-based decision-making: it was eminence-based decision-making; and it was standard medical practice. “Too many treatments had never been shown to do good. Or, worse, had actually been shown to do harm.” Yet still they were inflicted on patients.

It was not entirely the fault of the medical staff: “Doctors have no time to read all the research – there's just too much of it.” A headline-grabbing paper, on the other hand, heralding a new and apparently wonderfully successful treatment, but reported in isolation and perhaps taken out of context, can be all too alluring. And we all tend to believe what we were taught as students.

The idea of comparing medical treatments to see which ones actually worked was hardly new. Chalmers traces it back at least to James Lind's experiment in the 1740s, trying out treatments for scurvy on different groups of sailors. Yet synthesis, considering the new evidence in the context of the evidence that has already been gathered, is not properly recognised in medicine, nor indeed in most of science. “Physicists are best at it”, says Chalmers. “Lord Rayleigh put it well as long ago as 1884, at a meeting of the British Association for the Advancement of Science in Montreal: ‘Most credit should go to people who set the results of new research in the context of what is already known.'”

Archie Cochrane, Chalmers’ mentor and inspiration, wrote in much the same vein almost a hundred years later. Cochrane is best known for his influential book, Effectiveness and Efficiency: Random Reflections on Health Services [1], published in 1972. In 1979 he wrote: “It is surely a great criticism of our profession that we have not organised a critical summary, by speciality or subspeciality, adapted periodically, of all relevant randomised controlled trials.”

Statisticians argue over what amount to theologies. They should test whether their theologies make a blind bit of difference to the practical results

When Chalmers met Cochrane, it became an idea whose time had come (or was overdue). Whether or not inspired by Lord Rayleigh, it was happening in other disciplines, mainly in America: “At Los Alamos, for small-particle physicists, Paul Ginsparg set up arXiv to cut through the lock of too many conflicting papers in a fast-moving subject. Researchers contribute to a website; only after a stable state is reached is observation put into print”, says Chalmers. The term “meta-analysis”, the statistical process applied in most Cochrane reviews, was only introduced in 1976. “It was the American social scientist Gene Glass who introduced the term. There really weren't any social scientists to speak of in [the UK] who were concerned about synthesising results.”

Amid the welter of published studies finding in favour of this new treatment or that – and the often unpublished studies finding against them – some process of synthesising, amalgamating, forming an overview of them was badly needed: systematic review and meta-analysis, in other words. “And in the process one wanted certain things”, says Chalmers. “Respect for scientific principles is one. Another is to have a defensible search strategy for reported and unreported studies.” (The unreported studies of course are a biased subset – biased because negative results, unfavourable outcomes and generally unprofitable research too often go unpublished and unreported.) “We needed also to have a way of grading the quality of studies; and if necessary, applicable and possible, to think in terms of quantitative measures of their quality.” Then, from all that, systematic reviewers have to come to sensible conclusions.
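To give a flavour of what “doing the statistics on statistics” can involve, here is a minimal sketch of one common ingredient of such syntheses: fixed-effect, inverse-variance pooling of trial results on the log odds ratio scale. It is an illustration only, written in Python with made-up numbers, and it is not the Cochrane Collaboration's own software or methodology, which also grades study quality, assesses heterogeneity and much else.

import math

# Illustration only: hypothetical trial summaries (log odds ratios and standard
# errors), the kind of numbers a reviewer might extract from published reports.
trials = [
    ("Trial A", -0.51, 0.32),
    ("Trial B", -0.22, 0.41),
    ("Trial C", -0.68, 0.25),
]

# Fixed-effect, inverse-variance pooling: each trial is weighted by 1/variance,
# so larger, more precise trials count for more.
weights = [1.0 / se ** 2 for _, _, se in trials]
pooled = sum(w * log_or for w, (_, log_or, _) in zip(weights, trials)) / sum(weights)
se_pooled = math.sqrt(1.0 / sum(weights))

# Back-transform to an odds ratio with a 95% confidence interval.
or_low = math.exp(pooled - 1.96 * se_pooled)
or_high = math.exp(pooled + 1.96 * se_pooled)
print(f"Pooled OR {math.exp(pooled):.2f} (95% CI {or_low:.2f} to {or_high:.2f})")

A real Cochrane review goes much further – random-effects models, heterogeneity statistics, risk-of-bias grading, sensitivity analyses – but weighted pooling of this kind is the quantitative core that the forest-plot logo summarises.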

The principles seem obvious; “but textbooks do not observe them. For example, the Oxford Textbook of Medicine, in its second edition [1987], said that clot-busting drugs had not been shown to have beneficial effects. Yet five years previously such drugs had been shown to be life-saving. So many doctors did not prescribe the drugs that could have saved their patients’ lives.”

The Oxford Textbook of Medicine is for experts. Dr Spock's 1946 book, Baby and Child Care, was for every parent who could read English and was read by a huge proportion of them; throughout its first 52 years in print it outsold every other book except the Bible. “It recommended that babies should be laid to sleep on their stomachs. Now we know that doing that increases the risk of cot (crib) death. Tens of thousands of babies died needlessly because of that advice.”

These, he says, are dramatic examples of the scientific community letting people down – of patients suffering and dying as a result of the failure of that community to apply the most basic principles to what they were doing. “That was the background. It created a minority among academics who wanted to do something about it.”

The Cochrane Collaboration was launched in 1993; and once set up, it grew with extraordinary speed. Fast growth generally calls for huge funding. Part of the genius of Chalmers and his collaborators was to avoid that need. “There were lots of well-informed, generous-spirited people who wanted to be part of it. It was clear that the people who were interested in specific health problems – the clinicians and the review groups, but also the patients, and the patients' friends and families and support groups – they had to find the resources. No one was looking for enormous initial grants or looking for institutions to fund the syntheses. Instead it depended on individuals. That's why it grew so fast.” Those who did the work also found the funding or time for the work they did.

Research that leads to neither new insights nor new applications is wasted. Research that is poorly done is wasted. Research that nobody reads is wasted. Research that has been done before is wasted

Systematic reviewing involves a lot of work, as well as expertise. It involves an understanding, among many other things, of the extent of publication bias and of the perverse incentives, many and strange, that dog research and publication. Nevertheless, the Cochrane Collaboration has found tens of thousands of people willing to take on the work. Lest anyone be tempted to believe that clinical research betrayals are a thing of the past, consider the case of Tamiflu. Governments have spent billions on stockpiling it and similar anti-influenza drugs. A Cochrane review of the evidence – once that evidence was released by the drug companies – has revealed that the drugs reduce discomfort only marginally, reduce the duration of fever by perhaps half a day, and have saved, and during any epidemic probably would save, no lives at all. A quick flip through any past issue of this magazine will reveal dozens more references to the Cochrane Collaboration.

Chalmers left the Cochrane Collaboration in 2002. “I had been there for 10 years. I left when I did very deliberately, and I have no intention of looking over the shoulders of the people who are still there.” One huge international success in promoting research synthesis might be enough for a lifetime. Chalmers has hardly started. He is involved in at least two more. Systematic review à la Cochrane involves, as we have seen, research into research. But a systematic review can only analyse research that has already been done. Research tries to answer questions. But the questions it tries to answer are generally chosen by the researchers. The questions that are important to patients and clinicians are often ignored.

He gives the example of arthritic knees. There is lots of research into drugs to treat the problem. “But people who actually have arthritis in their knee don't want drugs. If you ask them, they will say they want better artificial joints, and they want better physiotherapy.” And there is far less research into physiotherapy than there is for drugs.

Similarly, research may measure treatments in terms of outcomes rated important by researchers – but not necessarily the outcomes that are important to patients, or to the clinicians who look after them. Here statisticians also need to examine their theologies. They like – or perhaps need – outcomes that can be quantified, to as many decimal places as necessary. Hard, objective, accurate data is clearly better than soft, subjective, less accurate stuff. But we may be too hung up on ‘hard’ data: “Stephen Evans put it very well. ‘Better to measure inaccurately something which is important than accurately something which is unimportant.'” A formative experience for Chalmers was a trial assessing electronic monitoring of babies during childbirth. There was concern that too much attention to the technology and too little to the woman might predispose her to unhappiness after delivery. Psychometric scales were considered briefly until a sociologist suggested just asking the women how they were feeling.

The outcomes measured might also be those seen in hospital settings – it is much easier to set up a trial in a hospital – even though GP surgeries are where the treatments are actually used in practice.

“A clinical trial should be based on three communities coming together: researchers, patients, and their clinicians. The last two should be the ones calling the shots. Too frequently they are the ones left out of the loop altogether”, he says. Researchers decide what they want to research. Consequently, research focuses on the needs of researchers. Enter what we might call Chalmers 2: the James Lind Alliance.

“There is minimal research into what users of research – notably, patients – wish to see addressed. The Lind Alliance was set up to get patients, clinicians and researchers together to identify and promote the top ten topics in their areas of concern that they wanted researchers to address. The Alliance has involved people with an interest in many different health problems – urinary incontinence, eczema, schizophrenia, and so on – bringing a bottom-up approach to shaping the research agenda, taking account of unanswered questions that they felt could help to make their lives better.”

“Alliance” is exactly the right word for it. People interested in, for example, visual impairment would include those who have the conditions, and the relatives of those who have them, just as much as those whose specialty is treating visual impairment or those whose interest is in researching it or in statistically analysing the multitude of research papers that have already been published. This is applied statistics – applied, that is, to people, and with a vengeance.

The Lind Alliance brings people together because of their shared interest, either because they have the condition or have friends or family who have it, or because they are health professionals trying to help patients. Having confirmed, from an up-to-date systematic review, that the available evidence is inadequate, they work out what research ought to be done.

To research topics that patients actually want researched? It seems yet another bizarre and revolutionary idea. And, like Cochrane systematic reviews and meta-analyses, also a blindingly obvious one; so blindingly obvious in fact that very few people in the research community seem ever to have thought of doing it before. Which is either a matter of shame, or a matter of being blindingly grateful to mavericks like Iain Chalmers for pointing it out.

His third and latest major thrust is on research waste. There is, let's face it, an awful lot of research these days. And an awful lot of it is wasted research. “It is wasted for several reasons,” he says. “Because it leads neither to new insights nor to new applications; because it doesn't take account of relevant existing evidence; because it is poorly done and invalid; because, though well done and valid and potentially useful, it is either not reported at all or reported inadequately. All this is a colossal waste of effort, and of brainpower, and of money.”

In 2009, Chalmers and Paul Glasziou [2] estimated that more than 85% of investment in medical research is wasted – and that is 85% of a very large sum indeed, some $240 billion in 2010 [3]; something like $200 billion a year, in other words. And all that research is paid for, in the end, by the public. Once again, they are the ones who are lost sight of.

“They pay for it through their taxes that fund some research; through donating to charities that fund other research; through the National Health Service, which pays more than it should for drugs that are priced far too high; and, if they are ill, they pay through receiving treatments that are not the best.”

It gets still worse. Patients are recruited into clinical trials that are not needed. More than 7000 individuals who had had a stroke were enrolled in clinical trials of a drug called nimodipine. But systematic reviews of the effects of the drug in animal studies of stroke did not identify any protective effects of nimodipine. The human trials were therefore unjustified. In 2002, 1600 patients were enrolled in a large study of endothelin receptor blockers. Had animal studies been reviewed systematically, the study would probably not have been carried out in the first place [3].

A series of papers led by Glasziou and Chalmers, published in the Lancet earlier this year, has highlighted the issue; the website http://www.researchwaste.net promotes it. “As Doug Altman [4] lamented as long ago as 1994”, Chalmers notes, “what we need is less research, better research, and research done for the right reasons.”

I am more than happy to give credit to responsible companies. There are a lot of “baddies” in the drug industry, but also some “goodies”, and they need our support

Who, then, are the villains in all of this? It is tempting, and easy, to cast Big Pharma in the big bad wolf role. Certainly lack of disclosure of commercial clinical trials has been and remains an issue, not least for those preparing systematic reviews. However, Chalmers goes only partly down that road:

“Last month I was given a lifetime award by the British Medical Journal. I have declined invitations to let my name go forward previously, but the editor, Fiona Godlee, persuaded me to agree this year, on condition that it was made clear that I didn't regard my lifetime as over. A quarter of an hour later she rang again: the award was sponsored by GlaxoSmithKline. Would that be a problem for me? It was not. Had it been any other drug company I would probably have pulled out. But Glaxo Wellcome, before it merged with SmithKline Beecham [in 2000], was the first international company to commit to registering all its trials publicly.

“After the GW merger with SmithKline Beecham to become GSK, they went into a bad period, from which they are still suffering: they were fined huge amounts in New York and elsewhere for misbehaviour. But I think the current senior management team are very good. I am more than happy to give credit to responsible companies to help bring others up to a defensible standard. There are a lot of ‘baddies’ in the drug industry, but also some ‘goodies', and they need our support. And it is important to be clear that industry could not do many of the bad things it does without the collusion of very many people in my profession – medicine.”

He is too polite to mention statisticians in that context; but what should they be doing? “Steven Julious and others from Statisticians in the Pharmaceutical Industry came out with strong statements about transparency and publication a few years ago [5]. That was very responsible. It took quite a long time for the Royal Statistical Society to take up the issue; it should reflect that lead. Statisticians should be far more involved, and far more vocal. Some are, like Doug Altman and Stephen Senn. (Note: Stephen Senn is a member of the Editorial Board of Significance. – Ed.) There are a number of ways statisticians can use their influence. They have opportunities to make a louder noise.

“For example, they should promote evidence-based research.” It is like evidence-based decision-making, but further back up the line. It is planning your research based on evidence of what research has been done before. “It's a cute term. I wish I had invented it. It is exactly what is needed. Make sure people design additional research systematically. Most normal people find it extraordinary that it isn't routine.

“That is one aspect I wish to continue bellyaching about. I will still shout from the touchline about the non-publication issue, about uncertainties in the effects of treatments, about the gaps between research and practice.

“There is loads still to be done. People need access to reliable evidence from research, relevant to their decisions in health care.

“We need to encourage the public to become better bullshit detectors. It may work best if we concentrate on schoolchildren. It is a shame so many adults haven't managed better. There have been false claims of bad effects, as with the measles–mumps–rubella vaccine; there are false claims of good effects: there is fame and money to be made from apparent successes which turn out not to be successes at all. There is no justification for any let-up at all, and that's why our website (www.testingtreatments.org) is already available in ten languages.


Photo: Theo Chalmers

“I have been shouting about these things for thirty years. I don't feel the least bit ready to give up yet.”

References

1. Cochrane, A. L. (1972) Effectiveness and Efficiency: Random Reflections on Health Services. London: Nuffield Provincial Hospitals Trust. (Reprinted in 1999 for the Nuffield Trust by the Royal Society of Medicine Press, London.)
2. Chalmers, I. and Glasziou, P. (2009) Avoidable waste in the production and reporting of research evidence. Lancet, 374(9683), 86–89.
3. Chalmers, I., Bracken, M. B., Djulbegovic, B. et al. (2014) How to increase value and reduce waste when research priorities are set. Lancet, 383(9912), 156–165.
4. Altman, D. (1994) The scandal of poor medical research. British Medical Journal, 308, 283–284.
5. Julious, S. A., Pyke, S. and Hughes, S. (2011) Best practice for statisticians in industry sponsored trials. British Medical Journal, 342, d1636.
6. Comroe, J. H. and Dripps, R. D. (1976) Scientific basis for the support of biomedical science. Science, 192, 105–111.