
Carey et al. [1] conducted a randomized trial comparing face-to-face brief motivational interviewing (BMI) with two different computer-delivered interventions (CDIs) for college students facing sanctions for violating campus alcohol policies in the United States. The findings, in general, favour the face-to-face BMI over the two CDIs. I applaud Carey and colleagues' continued research aimed at elucidating which interventions work best for problem drinkers and for whom different interventions are effective (e.g. the gender differences observed in the current study). It can be a challenge to demonstrate intervention effects when comparing a brief intervention to a minimally active comparator, let alone when comparing it to another putatively active intervention.

I was, however, left with one primary concern after reading the study: is it time to start discussing what types of computer-delivered interventions are being employed in this brief intervention research? BMI is perhaps the most validated and researched brief intervention for problem drinkers, with a consistent track record of measurable impact [2]. CDIs are still in their infancy, but there have been enough trials to support several recently published meta-analyses [3,4]. While these analyses note limitations in the current literature, there is perhaps enough research conducted to date to divide CDIs into at least two groups: (i) those developed by modifying clinically researched and validated brief interventions for problem drinkers (e.g. the various personalized feedback interventions that could loosely be said to be descendants of the Drinker's Check-up tradition) [5]; and (ii) those that are education-orientated, stemming from a desire to promote safe (or no) drinking on American college campuses, or that are automated versions of educational programs for those same students facing sanctions for drinking on campus. My reading of this literature is that the first group of CDIs (those that trace their development from clinically based brief interventions) has generally been more likely to produce published trials reporting significant intervention effects. Certainly, it is too early to say definitively that one of these groups of CDIs is superior to the other. However, it is not too early to select a CDI that already has some evidence of efficacy (i.e. at least one published randomized trial with positive results) when setting out to conduct a trial that most readers will take as a comparison of face-to-face versus computerized brief interventions. Personally, I suspect that a well-delivered face-to-face intervention from a skilled clinician will always be superior to a computerized substitute. However, it would be good to have a test of this superiority that pits the state-of-the-art version of each candidate against the other.

References
