Keywords:

  • behaviour change;
  • behavioural science;
  • complex interventions;
  • CONSORT;
  • evaluation;
  • intervention content

THE PROBLEM

How can we change a particular behaviour most effectively—such as reducing alcohol consumption? A behavioural scientist trying to answer this question would probably examine the content of the most effective alcohol consumption reduction interventions in the literature. This sounds straightforward but, because of limitations in the way that such interventions are reported, identifying the content of evaluated behaviour change interventions (BCIs) is challenging and sometimes impossible.

The descriptions included in published evaluation reports almost always fall far short of what is required for replication, and intervention manuals (or protocols) describing exactly what materials were used and exactly what was conducted are often not available from the authors of evaluation reports. For example, a study of the intervention content of a Cochrane review of audit and feedback interventions was able to obtain additional material about intervention content from only 27% (16 of 59) of the authors contacted [1]. Thus, although Consolidated Standards of Reporting Trials (CONSORT) guidelines specify that evaluators should report ‘precise details of interventions (as) . . . actually administered’[2], this is rarely achieved in behavioural science. In our experience, even when intervention manuals are available, they vary greatly in level of detail and style. This collective failure to describe clearly the experimental procedures on which published findings are based is unacceptable in other scientific communities, and should be unacceptable in addiction science. It also has serious consequences for the development and impact of our scientific endeavours to understand ‘what works and how’[3].

The variability in terminology used to describe the content of BCIs is clearly evident when comparing the descriptions of interventions provided in published evaluations. Different descriptions are employed by different researchers to refer to what appear to be near-identical procedures, and quite different combinations of techniques may be masked by vague descriptions such as ‘homework’, ‘educational materials’ or ‘behavioural counselling’. The last of these labels, for example, has been used to refer to completely different techniques: ‘educating patients’[4] and ‘feedback, self-monitoring and reinforcement’[5]. Thus, decoding BCI content descriptions and comparing content across interventions is painstaking detective work, and sometimes impossible. This highlights the need for consensual definitions of intervention techniques of the kind that are available in other sciences.

Two meta-analyses have linked the detailed content of BCIs to effectiveness [6,7]. The first investigated which of 10 intervention techniques were associated with effectiveness in promoting condom use across 17 years of intervention evaluations. Condom use has received particular attention because of increased funding of human immunodeficiency virus (HIV)-preventive interventions. Results showed that inclusion of threat-inducing messages did not enhance effectiveness in any context or for any group, suggesting that this technique is ineffective in promoting condom use. This ground-breaking research is limited, however, by its consideration of only 10 intervention techniques. The second meta-analysis, which examined the efficacy of interventions found to change intentions across a range of (mainly health) behaviours, found that use of incentives and provision of social support were more strongly associated with BCI effectiveness than any other content. It also found small to medium effects for goal setting, action planning and prompting of self-monitoring; yet none of these techniques was included in the analyses reported in the first meta-analysis. Thus, even when behavioural scientists set out to examine the link between specific BCI content and effectiveness, they focus upon different aspects of content and generate incompatible analyses of content–effectiveness relationships. Again, this highlights the importance of developing consensual definitions of BCI techniques and of their use (separately and in combination).
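
As a purely illustrative sketch (the interventions, technique codings and effect sizes below are invented placeholders, not data from the cited meta-analyses), content–effectiveness analyses of this kind typically code each evaluated intervention for the presence or absence of defined techniques and then compare effect sizes between interventions that do and do not include a given technique:

```python
# Toy illustration of a content-effectiveness analysis.
# Each intervention is coded for the presence/absence of defined behaviour
# change techniques, and mean effect sizes are compared for interventions
# with versus without a given technique. All values are hypothetical.
from statistics import mean

coded_interventions = [
    {"techniques": {"goal setting", "self-monitoring"}, "effect_size": 0.42},
    {"techniques": {"threat-inducing messages"}, "effect_size": 0.05},
    {"techniques": {"incentives", "social support"}, "effect_size": 0.55},
    {"techniques": {"goal setting", "action planning"}, "effect_size": 0.30},
]

def compare_technique(technique):
    """Return mean effect sizes for interventions with and without the technique."""
    with_t = [i["effect_size"] for i in coded_interventions if technique in i["techniques"]]
    without_t = [i["effect_size"] for i in coded_interventions if technique not in i["techniques"]]
    return mean(with_t), mean(without_t)

for t in ("goal setting", "threat-inducing messages", "incentives"):
    m_with, m_without = compare_technique(t)
    print(f"{t}: mean effect with = {m_with:.2f}, without = {m_without:.2f}")
```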

This state of affairs impedes theory testing because, if the specific techniques responsible for effectiveness cannot be identified, then causal mechanisms remain unclear [8]. In addition, variability in intervention descriptions inhibits faithful adoption of effective interventions (e.g. by health promotion agencies), thereby curtailing the contribution of BCI evaluations to evidence-based practice. For example, if a technique associated with effectiveness is not identifiable in available intervention descriptions, then adopting agencies are likely to omit this technique. If the intervention is subsequently found to be ineffective in an applied setting, this may be attributed wrongly to delivery failures rather than to (possibly unnoticed) deviations from the original content.

THE SOLUTION: TWO KEY STEPS

First, behavioural scientists need to standardize how BCIs are described in intervention manuals and in evaluation reports. Davidson et al. [9] provide a useful extension of CONSORT, proposing that the following eight descriptors of BCIs be included in evaluation reports and manuals: the content or elements of the intervention, characteristics of those delivering the intervention, characteristics of the recipients, the setting (e.g. work-site), the mode of delivery (e.g. face-to-face), the intensity (e.g. contact time), the duration (e.g. number of sessions over a given period) and adherence to delivery protocols. Clarity concerning the first descriptor, the ‘content or elements’, requires further work. For example, Abraham & Michie [10] have shown how a defined set of 26 theory-linked techniques could be identified reliably across a range of BCIs. We have recently extended this to describe over 100 different behaviour change techniques, with definitions [11]. Further development of such a nomenclature of technique definitions would provide a simplified and standardized method for describing the content of BCIs.
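
To make these eight descriptors concrete, the sketch below encodes them as a simple structured record (a hypothetical Python illustration; the field names and example values are our own and do not constitute a published reporting standard), with the ‘content’ field expressed as a list of named behaviour change techniques of the kind defined in such taxonomies [10,11]:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class InterventionDescription:
    """Structured record of a behaviour change intervention, following the
    eight descriptors proposed by Davidson et al. [9]. Field names are
    illustrative only, not a published standard."""
    content: List[str]        # named behaviour change techniques, e.g. from a taxonomy [10,11]
    provider: str             # characteristics of those delivering the intervention
    recipients: str           # characteristics of the recipients
    setting: str              # e.g. work-site, primary care
    mode_of_delivery: str     # e.g. face-to-face, internet
    intensity: str            # e.g. total contact time
    duration: str             # e.g. number of sessions over a given period
    adherence: str            # adherence to the delivery protocol

# A hypothetical alcohol-reduction intervention described in this format
example = InterventionDescription(
    content=["goal setting", "self-monitoring", "feedback on behaviour"],
    provider="practice nurses trained in brief interventions",
    recipients="adults exceeding recommended weekly alcohol limits",
    setting="primary care",
    mode_of_delivery="face-to-face",
    intensity="two 20-minute consultations",
    duration="2 sessions over 4 weeks",
    adherence="session checklists audited for fidelity",
)
```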

Secondly, standardized intervention protocols or manuals should be published alongside intervention evaluations (e.g. posted on journal websites) so that researchers and practitioners can discover how the techniques constituting the content of interventions were used in practice. This is necessary for replication, allowing scientists to accumulate evidence about intervention effects and causal mechanisms. It is also necessary for those delivering interventions, to ensure that interventions shown to be effective are delivered faithfully and that ineffective interventions are not delivered. Journals would provide a great service to science and health care if they insisted on the public availability of intervention protocols as a condition for publishing intervention evaluations.

Declarations of interest

None.

References

  1. Michie S. Designing and implementing ‘behaviour change’ interventions to improve population health. J Health Serv Res Policy 2008; in press.
  2. Moher D., Schulz K. F., Altman D. G. The CONSORT statement: revised recommendations for improving the quality of reports of parallel group randomized trials. Ann Intern Med 2001; 134: 657–62.
  3. Michie S., Abraham C. Identifying techniques that promote health behaviour change: evidence based or evidence inspired? Psychol Health 2004; 19: 29–49.
  4. Steptoe A. L., Kerry S., Rink E., Hilton S. The impact of behavioral counseling on stage of change in fat intake, physical activity, and cigarette smoking in adults at increased risk of coronary heart disease. Am J Public Health 2001; 91: 265–9.
  5. Tate D. F., Jackvony E. H., Wing R. R. Effects of internet behavioral counseling on weight loss in adults at risk for type 2 diabetes: a randomized trial. JAMA 2003; 289: 1833–6.
  6. Albarracin D., Gillette J. C., Earl A. N., Glasman L. R., Durantini M. R. A test of major assumptions about behavior change: a comprehensive look at the effects of passive and active HIV-prevention interventions since the beginning of the epidemic. Psychol Bull 2005; 131: 856–97.
  7. Webb T. L., Sheeran P. Does changing behavioral intentions engender behavior change? A meta-analysis of the experimental evidence. Psychol Bull 2006; 132: 249–68.
  8. Rothman A. J. ‘Is there nothing more practical than a good theory?’: Why innovations and advances in health behavior change will arise if interventions are used to test and refine theory. Int J Behav Nutr Phys Act 2004; 1: 11.
  9. Davidson K. W., Goldstein M., Kaplan R. M., Kaufmann P. G., Knatterud G. L., Orleans C. T. et al. Evidence-based behavioral medicine: what is it and how do we achieve it? Ann Behav Med 2003; 26: 161–71.
  10. Abraham C., Michie S. A taxonomy of behavior change techniques used in interventions. Health Psychol 2008; in press.
  11. Michie S., Johnston M., Francis J., Hardeman W., Eccles M. From theory to intervention: mapping theoretically derived behavioural determinants to behaviour change techniques. An international review. Appl Psychol 2008; in press.