Keywords:

  • clinical trials;
  • evidence;
  • health operational research

Recent years have seen a growing interest in operational research as a means to support improvements in health care in low- and middle-income settings. In support of global efforts to expand access to antiretrovirals, for example, a substantial increase in operational research activities has helped to define models of care that work in resource-limited settings (Zachariah et al. 2012). Reflecting the importance of these activities, operational research has now become a standard track at international AIDS conferences (http://www.aids2012.org/Default.aspx?pageId=477), and major HIV journals have established sections dedicated to the publication of operational research findings (Anon 2010).

Yet, despite this rising appreciation of the contribution of operational research to policy and practice, many still view operational research as the ‘poor cousin’ of the randomised trial. Traditional hierarchies of evidence place case reports at the bottom of the pyramid and randomised trials (or meta-analyses of trials) at the top. While randomised trials may be the best way to come close to an unconfounded estimate of the effect size of a given intervention, they generally provide little information about how to take an intervention to scale in a given setting (Rawlins 2008).

Over the last few years, large randomised controlled trials (RCTs) have shown convincing beneficial effects of artesunate over quinine for the treatment of severe malaria (Dondorp et al. 2010), of medical male circumcision for the prevention of HIV infection (Siegfried et al. 2009), and of limiting the use of fluid boluses in the management of paediatric septic shock (Maitland et al. 2011). However, several years after the results of these trials were published, quinine remains the standard of care in most high-malaria-burden countries (Ford et al. 2011), coverage of medical male circumcision remains below 10% in most high-burden countries of southern Africa (Njeuhmeli et al. 2011), and a year after publication of the trial results, guidelines for the management of septic shock had yet to be revised in any African country (Ehrhardt & Meyer 2012). The speed with which RCT findings translate into changes in policy and practice depends on several factors, including the limited resources available to fund a new intervention, feasibility, and the values and preferences of policy makers, care providers and patients.

From an epidemiological perspective, operational research is riddled with confounding, bias, missing variables and non-random sampling, all of which make for highly unreliable statistical inferences. Such concerns will lead appraisers of evidence, using tools designed to assess comparative drug efficacy trials, to relegate operational research to the bottom of the evidence quality pyramid. This scepticism towards operational research findings is reinforced, perhaps unintentionally, by guideline development tools such as GRADE, which rank evidence derived from observational studies as being of low or very low quality (Guyatt et al. 2011a), except in rare situations where studies show large and consistent effects (Guyatt et al. 2011b).

Programme implementers, however, have a different perspective. When confronted with the results of a trial, implementers will likely be far more concerned about whether the intervention will work in their setting, with their patient population and resource constraints, than whether randomisation or allocation concealment was carried out adequately (notwithstanding the importance of such issues for trial design). For them, reports from operational research can provide valuable insights that are considered to be more reflective of ‘real life’ than the results of randomised trials in which patients were carefully selected, staff were highly motivated and additional resources were provided (Maher et al. 2012).

Operational research is therefore a critical step in the pathway from new knowledge to improved outcomes. Randomised trial data are important, for example, to demonstrate the equivalence of nurse- versus doctor-delivered antiretroviral therapy, while operational research will help define the package of training and supervision required to equip nurses to take on new clinical responsibilities in different contexts. Rather than viewing operational research as the poor cousin of randomised trials, then, the two approaches should be viewed as 'relatives' that can cooperate very productively when each is done well. Without a collective effort on the part of researchers, funders and policy makers to integrate operational research into their activities, there is a risk that many years will continue to pass between the publication of 'definitive' trial results and changes in policy and practice where they matter most.

The views expressed are those of the authors and may not necessarily represent the views of the affiliated organisations.

References

  • Anon (2010) New Focus Area: Implementation and Operational Research. Journal of Acquired Immune Deficiency Syndromes 54(4), 339.
  • Dondorp AM, Fanello CI, Hendriksen IC et al. (2010) Artesunate versus quinine in the treatment of severe falciparum malaria in African children (AQUAMAT): an open-label, randomised trial. Lancet 376, 1647-1657.
  • Ehrhardt S & Meyer CG (2012) Transfer of evidence-based medical guidelines to low- and middle-income countries. Tropical Medicine & International Health 17, 144-146.
  • Ford NP, de Smet M, Kolappa K & White NJ (2011) Responding to the evidence for the management of severe malaria. Tropical Medicine & International Health 16, 1085-1086.
  • Guyatt GH, Oxman AD, Vist G et al. (2011a) GRADE guidelines: 4. Rating the quality of evidence–study limitations (risk of bias). Journal of Clinical Epidemiology 64, 407-415.
  • Guyatt GH, Oxman AD, Sultan S et al. (2011b) GRADE guidelines: 9. Rating up the quality of evidence. Journal of Clinical Epidemiology 64, 1311-1316.
  • Maher D, Harries AD, Nachega JB & Jaffar S (2012) Methodology matters: what type of research is suitable for evaluating community treatment supporters for HIV and tuberculosis treatment? Tropical Medicine & International Health 17, 264-271.
  • Maitland K, Kiguli S, Opoka RO et al. (2011) Mortality after fluid bolus in African children with severe infection. New England Journal of Medicine 364, 2483-2495.
  • Njeuhmeli E, Forsythe S, Reed J et al. (2011) Voluntary medical male circumcision: modeling the impact and cost of expanding male circumcision for HIV prevention in eastern and southern Africa. PLoS Medicine 8, e1001132.
  • Rawlins M (2008) De testimonio: on the evidence for decisions about the use of therapeutic interventions. Lancet 372, 2152-2161.
  • Siegfried N, Muller M, Deeks JJ & Volmink J (2009) Male circumcision for prevention of heterosexual acquisition of HIV in men. Cochrane Database of Systematic Reviews, CD003362.
  • Zachariah R, Ford N, Maher D et al. (2012) Is operational research delivering the goods? The journey to success in low-income countries. Lancet Infectious Diseases 12, 415-421.