Jim Orford [1] asks why methodologically rigorous evaluations of treatments in the addictions field have produced disappointing results. This commentary addresses, and disagrees with, the first of his three proposed answers: that ‘the field should stop studying named techniques and focus instead on change processes’.

In seeking to improve the effectiveness of interventions to change behaviour, three kinds of evidence are needed: (i) do they work? (effect sizes); (ii) what is it that works? (intervention techniques or active ingredients); and (iii) how do they work? (change processes or causal mechanisms). To progress the science of behaviour change, the field should start studying named techniques and change processes, and stop treatment evaluations that do not specify intervention content or measure change processes. Techniques and change processes need not be pitted against each other; indeed, it can be argued that each is needed to understand the other fully.

Take the analogy of seeking to improve omelette cooking: information about the change processes involved (e.g. how egg whites stiffen when whipped or solidify when heated) will not produce a well-cooked omelette without precise details of techniques (e.g. ingredients, how much heat, for how long, in what container). Describing interventions accurately in terms of shared and distinctive techniques and relating such descriptions to effectiveness can support or rule out potential mediating (or change) processes, thereby distinguishing between competing theoretical accounts of behaviour change.

Orford posits that the essence of psychological treatment is not a technique, but rather the positive alliance between therapist and client [1]. Again, this is a false dichotomy: such alliances are themselves the result of behaviours clustered into techniques (e.g. asking open-ended questions, summarizing what the other has said). Techniques, however, are only one part of what is required for the kind of full intervention description that is needed to make progress in understanding and improving intervention effects. Davidson et al. [2] propose that behavioural scientists should report (i) the content or elements of the intervention; (ii) characteristics of those delivering the intervention; (iii) characteristics of the recipients; (iv) the setting (e.g. work-site); (v) the mode of delivery (e.g. face to face); (vi) the intensity (e.g. contact time); (vii) the duration (e.g. number of sessions over a given period); and (viii) adherence to delivery protocols.

Orford raises the problem of ‘manualising’ interventions, which may improve intervention delivery but thereby mask individual differences in delivery [1]. This point has been made eloquently by Leventhal & Friedman [3], who argue that assessing variation in response to interventions across situations, providers and participants is a necessary part of building a theoretical understanding of the process of change. While this is true, unless one is clear that the intervention delivered is the one planned, one can make no claims about the theoretical bases of effective interventions. It does not matter whether the planned intervention is a flexible application of behavioural components or whether it is rigid and manualised: one should be clear about which it is and assess its fidelity accordingly [4].

Orford calls for more research into change processes [1]. While this is desirable, change processes are not necessarily causal determinants of the desired outcome. Theory represents an integrated summary of causal determinants and allows the accumulation of evidence about causal mechanisms across studies. There are three main reasons for advocating the use of theory in designing interventions. First, interventions are likely to be more effective if they target causal determinants of the desired outcome; this requires understanding the theoretical mechanisms of change. Secondly, theory-based interventions facilitate an understanding of what works and thus provide a basis for developing better theory across different contexts, populations and behaviours [5]. Thirdly, they provide an opportunity to test and develop theory which can, in turn, be used to design more effective interventions. Developing this use of theory will depend upon developing the science of mapping inputs (intervention techniques) onto change processes (theoretical mechanisms) [6].