[Commentary] FIDDLING WHILE ROME BURNS? BALANCING RIGOUR WITH THE NEED FOR PRACTICAL KNOWLEDGE
Article first published online: 5 FEB 2008
© 2008 The Author
Volume 103, Issue 3, pages 414–415, March 2008
How to Cite
GRAHAM, K. (2008), [Commentary] FIDDLING WHILE ROME BURNS? BALANCING RIGOUR WITH THE NEED FOR PRACTICAL KNOWLEDGE. Addiction, 103: 414–415. doi: 10.1111/j.1360-0443.2007.02130.x
- Issue published online: 5 FEB 2008
Keywords: evaluation methods; licensed premises; responsible beverage service
Around the world, communities are searching for and adopting strategies and programmes to address problems related to drinking in licensed premises [1–4]. Most of the programmes and strategies that are adopted are essentially unevaluated; or, if they have an evaluation component, it is often of the ‘let’s put on a show in the barn’ variety. That is, the evaluation is conducted (often on a shoestring budget) by people with no basic knowledge of how to design an evaluation in a way that will allow them to draw meaningful or valid conclusions about the programme's impact. Nevertheless, many local communities are prepared to adopt unproven approaches rather than do nothing at all.
The research community, on the other hand, demands the highest level of methodological rigour; consequently, studies are left with nothing to offer if promising approaches do not meet rigorous evaluation criteria. For example, the study by Toomey et al. (this issue) included a thoughtful, well-developed, research-based intervention with an excellently designed and implemented evaluation conducted by highly trained, skilled and experienced researchers. However, with null findings on programme effectiveness, the study could not offer any constructive recommendations for communities seeking solutions to problems related to over-serving in licensed premises. This is not intended as a critique of their exemplary research, as their standards and methods meet the universally endorsed gold standards for programme evaluation described by Cook & Campbell. Rather, their study raises the question of whether we need to take practical needs, as well as rigour, into consideration when we design evaluations of community interventions.
So how can research increase practical knowledge while still maintaining high standards of scientific rigour? The first step might be to take a more nuanced approach to conceptualizing outcomes. For example, it is possible that a significant effect of the intervention (relative to the comparison group) might have been found had the study employed a more sensitive outcome measure. The dichotomous outcome measure (served/not served) employed by Toomey et al., based on one visit by a pseudo-patron feigning intoxication, would be insensitive to minor improvements in reducing service to intoxicated people. For example, some of the non-refusal strategies that Toomey et al. refer to as ‘mild interventions, such as offering alcohol-free beverages’ [8–12] may well reduce overall intoxication levels of patrons and related harms, despite having no measurable impact on service refusal. In addition, adopting a broader view of outcomes, such as measuring problems associated with intoxication (e.g. violence, vandalism and disorder, driving, injury) as well as serving practices, might provide additional valuable information about the effects of policy-oriented approaches such as alcohol risk management (ARM).
Secondly, assessment of statistical significance may need to be reconsidered in the context of evaluations of community prevention initiatives. The problem with the ‘rigour or nothing’ approach is most apparent when an effect is found that is not big enough to meet the criterion for statistical significance. By focusing on avoiding Type 1 error, we increase the odds of committing Type 2 error, with the result that potentially useful approaches to prevention may never become known to those in the community who are seeking solutions. Meta-analyses and Cochrane reviews are useful tools for addressing the problem of interpreting significant findings (or lack thereof) from a single study, especially in fields of research where it is possible to conduct numerous trials relatively cheaply. However, there are fewer opportunities for cross-study analyses of evaluations of community interventions, which are relatively rare due to the long time-frame required and high costs of implementation. Perhaps we should tolerate a higher Type 1 error (e.g. being wrong one in 10 times rather than one in 20) when we conclude that a programme is effective, as long as there is clear evidence that there is no negative impact of the programme. Perhaps the solution for evaluation of preventive interventions is to adopt an ordinal rather than a dichotomous method for interpreting significance when results are in the predicted direction—for example:
- 1. P < 0.05 could count as ‘proven effective’ according to usual norms;
- 2. P < 0.10 and ≥ 0.05 could be counted as ‘evidence of probable effectiveness’; and
- 3. P < 0.20 and ≥ 0.10 might be classified as ‘promising and no evidence of negative effects’.
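The proposed ordinal rule can be sketched in a few lines of code. This is a minimal illustration of the three bands suggested above; the function name, arguments and label strings are my own illustrative choices, not part of any study or standard.

```python
def classify_effectiveness(p, effect_in_predicted_direction, evidence_of_harm=False):
    """Ordinal interpretation of an evaluation result, following the
    three P-value bands proposed in the commentary (illustrative only).
    The rule applies only when results are in the predicted direction
    and there is no evidence of a negative impact of the programme."""
    if not effect_in_predicted_direction or evidence_of_harm:
        return "no evidence of effectiveness"
    if p < 0.05:
        return "proven effective"
    if p < 0.10:
        return "evidence of probable effectiveness"
    if p < 0.20:
        return "promising and no evidence of negative effects"
    return "no evidence of effectiveness"
```

Note that the rule is deliberately asymmetric: any evidence of harm overrides the P-value bands, so a programme can never be labelled ‘promising’ while showing a negative impact.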
Although the lack of impact on alcohol policies supports the conclusions of Toomey et al. that the ARM programme did not have the desired effect, it is interesting that there are now two evaluations of ARM [5,14] which found immediate increases in refusal of service, although in both cases the effect did not meet the criterion of P < 0.05 for statistical significance. Taken together, the two studies provide evidence of a small but consistent effect. This is not to suggest that such training would be sufficient to eliminate all serving to intoxication, but perhaps its potential usefulness should also not be disregarded.
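The Type 1/Type 2 trade-off discussed above can be made concrete with a back-of-envelope power calculation. The sketch below uses a one-sided z-test under a normal approximation; the effect size and sample size are arbitrary illustrative numbers, not figures from either ARM evaluation.

```python
from statistics import NormalDist

def one_sided_power(effect_size, n, alpha):
    """Approximate power of a one-sided two-group z-test for a
    standardized effect (Cohen's d) with n observations per group,
    using the normal approximation."""
    nd = NormalDist()
    se = (2 / n) ** 0.5            # SE of the mean difference in d units
    z_crit = nd.inv_cdf(1 - alpha) # critical value for the chosen alpha
    return 1 - nd.cdf(z_crit - effect_size / se)

# Relaxing alpha from 0.05 to 0.10 or 0.20 raises the chance of
# detecting a true small effect (d = 0.3, n = 50 per group):
for alpha in (0.05, 0.10, 0.20):
    print(f"alpha={alpha:.2f}  power={one_sided_power(0.3, 50, alpha):.2f}")
    # power rises from roughly 0.44 to about 0.59 and 0.74
```

In other words, for a small true effect in a modestly sized trial, the conventional α = 0.05 leaves the evaluator more likely to miss the effect than to find it, which is precisely the Type 2 error concern raised above.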
Thirdly, qualitative and descriptive research needs to be included in evaluations, both to improve interventions and to understand more clearly why a programme failed to have an impact. For example, findings from studies that have examined why staff and management have difficulty refusing service to intoxicated patrons could be incorporated into the development of more effective approaches to preventing intoxication and more nuanced outcome measures. It would have been interesting in the Toomey et al. study to have asked bar managers why their staff were still serving intoxicated patrons despite participation in the training. One also wonders why adoption of policies by experimental premises was no better than the adoption rate for control premises, given that this was the focus of the intervention. Traci Toomey and her colleagues have conducted considerable work [5,14] describing explanatory and process factors, and perhaps have data addressing these issues that they will publish elsewhere.
Finally, the relevance of scientific research generally to community applications might be improved by expanding the study time-frame, including implementing long-term interventions. For example, the most successful responsible beverage service project to date, the STAD (Stockholm Prevents Alcohol and Drug Problems) project in Sweden, had a 10-year time-frame, and the impact of the programme showed a steady increase over time [16,17]. This means that we need to change the current research context in many countries, especially the time-line of grant funding, which typically does not permit this kind of long-term implementation and evaluation.
This commentary is not intended as a criticism of this well-conducted research. Rather, I have used this paper to raise the general issue of increasing the potential applicability of research findings while maintaining high methodological standards. With communities and the general public increasingly recognizing the need for evidence-based interventions, perhaps it is time for researchers to change how we design community prevention research so that we can increase the practical relevance of findings even when we are not able to demonstrate a statistically significant impact on the primary outcome measure.
- 1. Government of Alberta. Alberta Roundtable on Violence in and around Licensed Premises. 2006. Available at: http://www.gaming.gov.ab.ca/pdf/news/ALGCRoundtableReportWeb.pdf (accessed 7 January 2008).
- 2. Alcohol and Licensed Premises: Best Practice in Policing. A Monograph for Police and Policy Makers. Payneham, SA, Australia: Australasian Centre for Policing Research; 2003.
- 3. Responsible Hospitality Institute. Planning, Managing and Policing Hospitality Zones. A Practical Guide. Santa Cruz, CA: Responsible Hospitality Institute; 2006.
- 4. Alcohol Audits, Strategies and Initiatives: Lessons from Crime and Disorder Reduction Partnerships. London, UK: Communications Development Unit, Home Office; 2004.
- 6. Alcohol outlet workers and managers: focus groups on responsible service practices. J Alcohol Drug Educ 1999; 44: 60–71.
- 7. The design and conduct of quasi-experiments and true experiments in field settings. In: Dunnette M. D., editor. Handbook of Industrial and Organizational Psychology. Chicago: Rand McNally; 1976, p. 223–326.
- 8. Does server intervention training make a difference? Alcohol Health Res World 1987; 11: 64–9.