Keywords:

  • Evaluation;
  • RCT;
  • tobacco policy

We agree with Lawrence and colleagues [1] that we need the most rigorous evidence possible on the effectiveness, cost-effectiveness and equity impacts of population health interventions to reduce smoking and exposure to other risk factors for chronic diseases, but for a number of reasons we would temper their advocacy for randomized controlled trials (RCTs) as the sine qua non of population health evaluation.

First, there are limits to what RCTs can tell us even about drug treatments. They are regarded as the ‘gold standard’ for evidence of efficacy in evaluating the intended effects of treatment [2,3], because randomization increases confidence that any differences in outcome between drug and placebo are attributable to the drug. However, their results are often of uncertain relevance to routine clinical practice because: trials are often conducted in well-resourced specialist treatment settings; exclusion criteria often exclude many of the patients (e.g. those with comorbid conditions) who will receive the drug if it is approved for clinical use; and follow-up periods are typically too short to assess rarer adverse effects and those that occur only after sustained regular use. Pharmacoepidemiologists attempt to address the latter weakness by conducting post-registration observational studies of drug safety in routine clinical use [3,4].

Secondly, it is not practically or politically possible to conduct RCTs of important population level interventions such as tobacco tax increases, advertising bans and, arguably, plain packaging of cigarettes. Demands for RCT studies of their effectiveness will be heartily endorsed by the tobacco and alcohol industries as a way of ensuring that they are never implemented. It is better to evaluate the effects of plausible public health policies in each country or jurisdiction that implements them, using the best available quasi-experimental designs and analytical methods [5,6].

Thirdly, RCTs cannot be used to make societal decisions about how to allocate resources to different preventive programmes (e.g. increased tobacco taxation, advertising bans, smoking restrictions, etc.). We cannot implement multi-factorial analysis of variance (ANOVA) research designs in which nation states or communities are randomized to each of the logically possible combinations of these interventions (e.g. high or low taxation; advertising bans or not; smoking restrictions or not; plain packaging or not, etc.). In making resource allocation decisions, there is no feasible alternative to epidemiological and economic modelling (e.g. [7]) using the best estimates of the effects of different policies derived from quasi-experiments and econometric time-series analyses (e.g. evaluations of the effects of raising the minimum legal drinking age in the United States to 21 years on road crash deaths in young adults [8,9]).

Lawrence et al. also over-estimate the power and influence of public health advocates. These advocates have a limited capacity to persuade governments to introduce legislative and other regulatory interventions in a phased way, or to randomize subsets of the population to receive a public health intervention or not; nor do they command the funding needed to undertake RCTs or other large-scale policy evaluations. The current state of public health policy evaluation reflects the lack of government investment in such research and the necessity for public health researchers to evaluate opportunistically policies that have often been introduced for reasons other than their best efforts at advocacy.

We support RCTs of public health policies where they can be conducted, and where funding is available, but we believe that any approach that gives priority to RCTs over other forms of evaluation runs the risk of making the ideal the enemy of the good.

Acknowledgements


The authors were funded by an NHMRC Post-doctoral Fellowship (Coral Gartner) and an NHMRC Australia Fellowship (Wayne Hall).

References

  • 1
Lawrence D., Mitrou F., Zubrick S. Global research neglect of population-based approaches to smoking cessation: time for a more rigorous science of population health interventions. Addiction 2011; 106: 1549–54.
  • 2
    Vandenbroucke J. P. Observational research, randomised trials, and two views of medical science. PLoS Med 2008; 5: e67.
  • 3
Vandenbroucke J. P., Psaty B. M. Benefits and risks of drug treatments: how to combine the best evidence on benefits with the best data about adverse effects. JAMA 2008; 300: 2417–9.
  • 4
Institute of Medicine Committee on the Assessment of the U.S. Drug Safety System, Baciu A., Stratton K. R., Burke S. P., editors. The Future of Drug Safety: Promoting and Protecting the Health of the Public. Washington, DC: National Academies Press; 2007.
  • 5
    Murnane R. J., Willett J. B. Methods Matter: Improving Causal Inference in Educational and Social Science Research. Oxford: Oxford University Press; 2011.
  • 6
    Shadish W. R., Cook T. D., Campbell D. T. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Boston: Houghton-Mifflin; 2002.
  • 7
Cobiac L., Vos T., Doran C., Wallace A. Cost-effectiveness of interventions to prevent alcohol-related disease and injury in Australia. Addiction 2009; 104: 1646–55.
  • 8
Voas R. B., Tippetts A. S., Fell J. C. Assessing the effectiveness of minimum legal drinking age and zero tolerance laws in the United States. Accid Anal Prev 2003; 35: 579–87.
  • 9
Fell J. C., Fisher D. A., Voas R. B., Blackman K., Tippetts A. S. The relationship of underage drinking laws to reductions in drinking drivers in fatal crashes in the United States. Accid Anal Prev 2008; 40: 1430–40.