Kiluk et al. report that observer-rated quality of coping response following exposure to computerized skills training partially mediates the superior effectiveness of this computer-administered cognitive–behavioural therapy (CBT) over treatment as usual (TAU). This finding is noteworthy. An earlier review of potential mediators of CBT effectiveness found no empirical support for hypothesized mediators of CBT's effectiveness over TAU. This failure to identify mediators of treatment effectiveness is not unique to CBT for addictions (e.g. ), nor to behavioural treatments for other psychiatric disorders (e.g. ). This contextual perspective raises the question of what is responsible for this noteworthy exception. The most cautious response is that the finding is spurious, attributable to small sample size and the vagaries of chance. This explanation cannot be ruled out, and of course replication is essential. However, assuming a real effect, one or more of three characteristics of the study may be responsible.
First and most plausible is the way in which coping behaviours were measured. In attempting to account for the lack of evidence for CBT's active ingredients, Morgenstern & Longabaugh suggested faulty theory and/or inadequate measurement. Typically, the quantity of coping responses has been measured, as it was in the present study. However, the quality of coping response was also measured, and was found to be the superior index for accounting for the CBT/outcome relationship. As CBT theory pertains to the quality of coping, it stands to reason that a measure of quality, rather than quantity, should be more robust. Researchers testing treatment theories in general, and CBT more specifically, need to give greater consideration and emphasis to decreasing the distance between theoretical constructs and their empirical measurement in treatment/outcome studies [5–7].
A second notable feature of the study was its reliance upon more robust methods of testing for mediation. When reporting tests of mediation, earlier studies have generally relied upon the classic, but less powerful, Baron & Kenny causal-steps procedure. The steps procedure nevertheless retains value in identifying where the logic of the hypothesis breaks down.
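The contrast between the causal-steps procedure and the more powerful bootstrap test of the indirect effect can be sketched in a few lines. The following is a minimal illustration on simulated data only; the variables, effect sizes and sample size are hypothetical and are not drawn from the Kiluk et al. study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated data: x = treatment assignment (CBT vs. TAU),
# m = quality of coping response (mediator), y = outcome.
n = 200
x = rng.integers(0, 2, n).astype(float)
m = 0.5 * x + rng.normal(size=n)             # a-path: treatment -> mediator
y = 0.4 * m + 0.1 * x + rng.normal(size=n)   # b-path: mediator -> outcome, plus direct effect

def ols_slope(pred, extra, resp):
    """Slope of `pred` in an OLS regression of `resp` on [1, pred, *extra]."""
    design = np.column_stack([np.ones_like(resp), pred] + extra)
    coef, *_ = np.linalg.lstsq(design, resp, rcond=None)
    return coef[1]

def indirect_effect(x, m, y):
    a = ols_slope(x, [], m)    # effect of treatment on mediator
    b = ols_slope(m, [x], y)   # effect of mediator on outcome, controlling for treatment
    return a * b

# Percentile bootstrap confidence interval for the indirect effect a*b:
# mediation is supported when the interval excludes zero.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"indirect effect = {indirect_effect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

Because the bootstrap tests the product of the a- and b-paths directly, rather than requiring each causal step to reach significance separately, it retains power in exactly the small-sample situations typical of mediation analyses in treatment research.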
The third notable study feature was that CBT skills teaching relied upon a computer platform, CBT4CBT. Because therapist effects account for considerably more of the variance in outcome than treatment modality does, computer delivery may have allowed the active ingredients of CBT to be presented more reliably.
Each of these three features is to be commended: attention to the quality of behaviour, more robust analytical methods for testing mediation, and standardization of the treatment protocol through a computer platform. Each should be used in seeking to identify the active ingredients of treatment that embody or lead to mechanisms of change. It is not enough that treatments are found to be effective in meta-analytical reviews. Variability in observed effectiveness across studies is sufficient to question whether treatments work for the reasons offered by their proponents. Only mechanisms-of-change research can identify how treatments work, under what conditions, and for whom. Without such supporting evidence, belief in empirically supported treatment as a basis for evidence-based practice will remain a collective delusion.