We agree with Lawrence and colleagues [1] that we need the most rigorous evidence possible on the effectiveness, cost-effectiveness and equity impacts of population health interventions to reduce smoking and exposure to other risk factors for chronic diseases, but for a number of reasons we would temper their advocacy for randomized controlled trials (RCTs) as the sine qua non of population health evaluation.

First, there are limits to what RCTs can tell us even about drug treatments. They are regarded as the ‘gold standard’ for evidence of efficacy in evaluating the intended effects of treatment [2,3], because randomization increases confidence that any differences in outcome between drug and placebo are attributable to the drug. However, their results are often of uncertain relevance to routine clinical practice because: trials are often conducted in well-resourced specialist treatment settings; entry criteria often exclude many of the patients (e.g. those with comorbid conditions) who will receive the drug if it is approved for clinical use; and follow-up periods are typically too short to detect rarer adverse effects and those that occur only after sustained regular use. Pharmacoepidemiologists attempt to address the latter weakness by conducting post-registration observational studies of drug safety in routine clinical use [3,4].

Second, it is not practically or politically possible to conduct RCTs of important population-level interventions such as tobacco tax increases, advertising bans and, arguably, plain packaging of cigarettes. Demands for RCT studies of their effectiveness will be heartily endorsed by the tobacco and alcohol industries as a way of ensuring that they are never implemented. It is better to evaluate the effects of plausible public health policies in each country or jurisdiction that implements them, using the best available quasi-experimental designs and analytical methods [5,6].

Third, RCTs cannot be used to make societal decisions about how to allocate resources among different preventive programmes (e.g. increased tobacco taxation, advertising bans, smoking restrictions). We cannot implement multi-factorial analysis of variance (ANOVA) research designs in which nation states or communities are randomized to each of the logically possible combinations of these interventions (e.g. high or low taxation; advertising bans or not; smoking restrictions or not; plain packaging or not). In making resource allocation decisions there are no feasible alternatives to epidemiological and economic modelling (e.g. [7]) using the best estimates of the effects of different policies derived from quasi-experiments and econometric time–series analyses (e.g. evaluations of the effects of raising the minimum legal drinking age in the United States to 21 years on road crash deaths in young adults [8,9]).

Lawrence et al. also over-estimate the power and influence of public health advocates. Advocates have a limited capacity to persuade governments to introduce legislative and other regulatory interventions in a phased way, or to randomize subsets of the population to receive (or not receive) a public health intervention; nor do they command the funding needed to undertake RCTs or other large-scale policy evaluations. The current state of public health policy evaluation reflects the lack of government investment in such research and the consequent need for public health researchers to evaluate opportunistically policies that have often been introduced for reasons other than advocates' best efforts.

We support RCTs of public health policies where they can be conducted, and where funding is available, but we believe that any approach that gives priority to RCTs over other forms of evaluation runs the risk of making the ideal the enemy of the good.

The authors were funded by an NHMRC Post-doctoral Fellowship (Coral Gartner) and an NHMRC Australia Fellowship (Wayne Hall).