Seemingly similar in meaning, ‘efficacy’ and ‘effectiveness’ express distinctly different concepts. A medical intervention is ‘efficacious’ if it works under strictly controlled (‘laboratory’) conditions, and it is ‘effective’ if it works under ‘real life’ conditions. Efficacy (or fastidious) trials test the efficacy of a therapy; effectiveness (pragmatic) studies test its effectiveness. Discussion continues over which of the two approaches is more valuable.
The main characteristics of the two types of clinical investigation are summarized in Table 1. The typical efficacy study tests whether the experimental therapy generates specific therapeutic effects, while the typical effectiveness study aims to quantify the sum of the specific effects and all other contributors to the total therapeutic effect. In theory, therefore, the effect size measured in an efficacy study is a pure product of the specific effect of the experimental treatment, whereas that of an effectiveness study can also reflect additional factors such as placebo or context effects, effects of undeclared concomitant therapies and social desirability (Fig. 1). Thus causal attribution of the clinical outcome to the treatment is usually possible in efficacy trials, while this is rarely the case in effectiveness studies.
Table 1. Main characteristics of efficacy and effectiveness studies

| Efficacy studies | Effectiveness studies |
| --- | --- |
| May have design features such as randomization and blinding | Can be randomized but are rarely blinded |
| Often use placebo controls | Rarely use placebo controls |
| Tightly defined inclusion/exclusion criteria | Less tightly defined inclusion/exclusion criteria |
| All aspects of intervention are strictly standardized | Aspects of the intervention may be only loosely defined, e.g. concomitant treatments |
| Tend to employ objective outcome measures | Tend to employ subjective outcome measures |
| Compliance usually closely monitored | Compliance usually not closely monitored |
| Bias and confounders are minimized | Bias and confounders can be substantial |
| Per protocol analysis | Intention-to-treat analysis |
| High internal validity | High external validity |
| Research question: can it work? | Research question: does it work? |
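The last analytic distinction in the table — per-protocol versus intention-to-treat — can be illustrated with a small sketch. All patients, outcomes and numbers below are invented for illustration only; they are not drawn from any real trial.

```python
from statistics import mean

# Hypothetical data for six randomized patients (invented for illustration).
# "outcome" is some improvement score; higher is better.
patients = [
    {"assigned": "drug",    "adherent": True,  "outcome": 10},
    {"assigned": "drug",    "adherent": True,  "outcome": 12},
    {"assigned": "drug",    "adherent": False, "outcome": 4},   # stopped taking the drug
    {"assigned": "placebo", "adherent": True,  "outcome": 5},
    {"assigned": "placebo", "adherent": True,  "outcome": 4},
    {"assigned": "placebo", "adherent": False, "outcome": 6},
]

def effect(group):
    """Mean outcome difference between the drug and placebo arms."""
    drug = [p["outcome"] for p in group if p["assigned"] == "drug"]
    placebo = [p["outcome"] for p in group if p["assigned"] == "placebo"]
    return mean(drug) - mean(placebo)

# Intention-to-treat: analyse every patient as randomized ('real life').
itt_effect = effect(patients)

# Per-protocol: analyse only patients who complied ('ideal' conditions).
pp_effect = effect([p for p in patients if p["adherent"]])
```

In this toy example the per-protocol estimate exceeds the intention-to-treat estimate, mirroring the point made below: an effect measured under ideal adherence can shrink once non-compliance and other ‘real life’ factors enter the analysis.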
Efficacy trials will tell us whether a treatment works in principle and under optimal, ‘laboratory’ conditions. In some cases, however, ‘real life’ situations might differ from the ideal. For instance, a drug that normalizes cholesterol levels might not be acceptable to ‘real life’ patients if it causes adverse events. Similarly, an anti-hypertensive drug might work for a very narrowly defined homogeneous group of patients (e.g. those who have no other risk factors and take no other medication), but in ‘real life’, where patients are heterogeneous, the effect could be smaller or lost altogether. Whenever there is reason to suspect that this could be the case, effectiveness trials are required to test whether the results of efficacy trials also hold true in practice.
Would it then make sense to rely on the findings of effectiveness studies in the absence of efficacy data? If the results of effectiveness trials suggest a therapeutic benefit of the experimental therapy, we would not normally be able to be sure whether this is due to the treatment itself or whether it was caused by one of the many other factors that could have contributed to the overall therapeutic effect (Fig. 1). Effectiveness trials are therefore most valuable to test whether the results of efficacy trials are applicable to ‘real life’. In the absence of efficacy data, the results of effectiveness trials can be difficult to interpret and may therefore be of little value.
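The decomposition implied by Fig. 1 can be sketched schematically. The numbers below are purely illustrative placeholders, not measured values:

```python
# Purely illustrative numbers; no real data behind them.
specific_effect = 3.0   # attributable to the treatment itself
context_effect  = 1.5   # placebo/context contributions
other_factors   = 0.5   # concomitant therapies, social desirability, etc.

# An efficacy trial is designed to isolate the specific effect;
# an effectiveness trial measures the sum of all contributions.
efficacy_estimate = specific_effect
effectiveness_estimate = specific_effect + context_effect + other_factors

# A treatment with no specific effect at all can still show an
# apparent benefit in an effectiveness trial:
placebo_only_estimate = 0.0 + context_effect + other_factors
```

The last line makes the interpretive problem concrete: a non-zero effectiveness estimate on its own cannot distinguish a genuinely efficacious treatment from one that merely rides on context effects.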
What if both efficacy and effectiveness data are available but do not agree with each other? In many conventional settings, this would correspond to situations similar to the ones outlined above: a treatment is efficacious, yet in ‘real life’ there are obstacles, e.g. problems with compliance. It is, however, also conceivable that an effectiveness trial shows a benefit while an efficacy trial does not. This scenario is frequently encountered in the area of complementary/alternative medicine. The most logical explanation would then be that the experimental treatment is devoid of specific effects but generates benefit through context effects. Such a treatment would be associated with positive outcomes without directly causing them. An example could be homoeopathy, which seems to help many patients [1, 2], while rigorous trials have repeatedly shown homoeopathic remedies to be no different from placebos [3, 4]. Is this sufficient justification for adopting such a (placebo) treatment into routine health care? We don't think so. We do not need a placebo to generate a placebo effect: even treatments with specific effects generate placebo or context effects.
In most instances, the debate about the relative value of efficacy and effectiveness trials distracts from the fact that the two address different research questions. Both types of study undoubtedly have their place. We should, however, be vigilant that effectiveness trials are not (mis)used, in the absence of convincing efficacy data, to convince policy makers of the value of treatments which have no specific effects at all.