[Commentary] RESEARCH ASSESSMENTS: INSTRUMENTS OF BIAS AND BRIEF INTERVENTIONS OF THE FUTURE?
Article first published online: 13 JUL 2009
© 2009 The Author. Journal compilation © 2009 Society for the Study of Addiction
Volume 104, Issue 8, pages 1311–1312, August 2009
How to Cite
McCAMBRIDGE, J. (2009), [Commentary] RESEARCH ASSESSMENTS: INSTRUMENTS OF BIAS AND BRIEF INTERVENTIONS OF THE FUTURE?. Addiction, 104: 1311–1312. doi: 10.1111/j.1360-0443.2009.02684.x
- Issue published online: 13 JUL 2009
Keywords: brief intervention; Hawthorne effect
The possibility that research assessments themselves may positively influence drinking behaviour has long been recognized. The earliest research data were retrospective accounts of how helpful participants in treatment cohort studies found research assessments to be [1,2]. The first trials investigated this issue among dependent drinkers within treatment settings and found assessment effects to be unimportant (e.g. ). Nonetheless, the phenomenon continued to attract attention within the research community, and subsequent non-randomized treatment trials identified assessment effects that appeared substantial and important to study further [4,5]. Qualitative process studies within large multi-centre treatment trials provided data attesting to the influence of research assessments, indicating that assessment was an important component of the treatment experience, with an impact on actual behaviour change . This research interest led to a more recent randomized trial of assessment in the context of treatment, which detected smaller effects and provided greater certainty about the nature of the phenomenon and the magnitude of effects that may be expected .
Research assessment effects on alcohol consumption have also been studied within the context of brief intervention trials. At issue in these studies is marginal influence on hazardous and harmful drinking, rather than the resolution of complex and long-standing problems. The interventions themselves are generally capable of exerting small to medium effects, somewhat in contrast to the large effects evaluated in treatment studies with smaller sample sizes . The potential for assessment reactivity to bias estimates of brief intervention effectiveness is therefore greater by virtue of relative size. Assessment effects lead to underestimates of the true effects of interventions through contamination . The possibility of converting insights gained from the study of this phenomenon into instruments of behaviour change in non-treatment populations has also aroused much interest.
The study by Walters and colleagues  adds to a succession of randomized manipulations of assessment effects in recent years which have identified this double-edged promise [9,11,12]. A novel feature of this study is its evaluation of the combined impact of multiple assessments. These produce larger effects on some outcome measures than have been identified previously. Careful attention to the data from these studies, however, reveals that these effects are found inconsistently across outcome measures. It is also noteworthy that they have been identified in college student populations, and their generalizability must be called into question when one considers the lack of any such finding in the well-conducted emergency department study by Daeppen and colleagues . Possible problems with self-report are acknowledged widely, yet have been little studied so far. Although the validity of self-report data in treatment settings has been studied and found to be acceptable , it is not safe to apply these data to brief intervention contexts without dedicated study. It could be that the effects themselves are fragile, or that relevant concepts and study methods are at an early stage of evolution.
The way in which this issue has attracted attention in the alcohol field contrasts strikingly with the literatures on similar effects on other behaviours. There, somewhat isolated individual studies are much more common, yet there are clear similarities in the small effects identified on subsequent subjective questionnaire responses (e.g. ). There is also evidence of effects on direct behavioural outcomes such as blood donation  and the uptake of screening , and again there is inconsistency in these effects, with other studies finding no effects on similar outcomes . While these effects have been labelled generally as ‘assessment reactivity’ within the alcohol field, they are described differently elsewhere.
There are likely to be many component effects and many ways to conceptualize them. Conceptual work and empirical studies, whether methodologically or intervention orientated, need to recognize and take advantage of the changing landscape of brief interventions in the alcohol field. Brief interventions are becoming briefer as time pressures become more pronounced and information technologies become more prominent. While they can clearly be effective, brief interventions achieve less reduction in drinking than occurs in non-intervention control groups in the same studies, even though the extent of reduction in the latter is highly heterogeneous . More sophisticated experimental methods will continue to yield advances which cannot be attained in other ways. Qualitative studies are also recognized increasingly as valuable and complementary to the conduct of trials, and secondary research on existing studies has the capacity to produce findings which primary research cannot yield so straightforwardly.
Vulnerability to bias in the design of behavioural intervention trials self-evidently requires evaluation of the nature of the problem and exploration of the implications both for existing data and for new studies. As well as being disturbing, it is also exciting that these unwanted artefacts appear to achieve inadvertently the behaviour change aspirations of brief interventions, albeit to a more modest extent. If something as simple as a 10-item screening questionnaire  can do so, one wonders how potent questions could be that are actually designed to promote behaviour change.
I am grateful to John Cunningham who provided helpful comments on an earlier draft of this commentary.