That the simple distribution of guidelines is not sufficient to change clinician behaviour or patient outcomes is the clear message of the review by Weinmann and colleagues in this issue of the journal (1). A great deal of progress in the development of psychiatric treatment guidelines has been made in the last 20 years. Gaebel et al. (2) found 27 national guidelines on schizophrenia, and Stiegler et al. (3) assessed 61 guidelines produced by European national associations for the treatment of various psychiatric disorders; if more regional or even local work were included, the numbers would be much higher. Thus guidelines are available for many psychiatric disorders, and the procedures by which guidelines should be drawn up have been documented [e.g. (4, 5)]. Rating scales such as the Appraisal of Guidelines for Research and Evaluation instrument [AGREE, (6)] have been developed to enable the methodological quality of guidelines to be rated.
In contrast to the large number of guidelines available, Weinmann and colleagues (1) found only a few preliminary studies on the evaluation of guideline implementation. Implementing a guideline in clinical practice is the last, but a crucial, step in its development, because this is where it becomes apparent whether the guideline achieves its goal, i.e. whether it changes clinical practice and patient outcomes. Unfortunately, most studies found only modest and transient effects at best. The work by Weinmann et al. (1) certainly does not represent the final word, because guideline implementation is still in its infancy, but it does show that there is a long road ahead of us. In this context, the following text discusses whether the modest effects found are really so sobering or whether they are realistic at the present stage.
A distinction must be made between provider performance and patient outcome. For example, if antidepressant use increases only slightly despite the recommendation of a guideline, or if clinicians do not even theoretically know the guideline recommendation, such poor provider performance is certainly sobering, because even the best guideline is ineffective if it is simply not applied. Regrettably, this was often the case in the studies reviewed by Weinmann and colleagues (1). It must therefore be established how clinicians’ behaviour can best be changed. Although the authors’ suggestion to work on ‘ongoing support and feedback, specific psychological models to overcome implementation barriers or social marketing techniques’ (1) does make sense, this area is as yet quite a black box, as was shown in a broader review on guideline implementation not restricted to psychiatry (7). A clear conclusion, however, is that the simple distribution of guidelines is not sufficient to change provider performance. Even if clinicians know the guidelines, they do not necessarily apply them. For example, Hamann et al. (8) recently found that psychiatrists knew the guideline recommendation on the duration of antipsychotic relapse prevention for 75% of their patients, but reported actually giving this recommendation to only 33% of the patients; and when the patients themselves were asked, only 11% knew the correct recommendation. The authors could only speculate about the reasons why the psychiatrists shied away from explaining these rather basic principles of antipsychotic drug treatment to their patients, but the example illustrates the need to identify the barriers to guideline implementation. About 10 years ago, Kissling (9) coined the term doctors’ non-compliance to emphasize that physicians cannot expect patients to be adherent if they themselves do not apply the recommendations.
An even more concerning question is whether, even in the best case, we can currently achieve more than relatively small effect sizes in terms of patient outcomes. Firstly, Gaebel et al. (2) and Stiegler et al. (3) showed that the quality of psychiatric guidelines is currently on average only moderate, although some excellent publications do exist. If a guideline is not very good, its effects cannot be expected to be tremendous. Moreover, even the methodologically better guidelines often make only relatively vague recommendations. This probably has to do with a number of factors, such as the involvement of many stakeholders with different interests, but more specific recommendations are often simply impossible, because for many basic questions hard evidence is not yet available.
Figure 1 depicts, as an example, a few of the most fundamental decisions that must be made in the treatment of an acute schizophrenic episode:
1. When an acutely ill patient is admitted to hospital, the psychiatrist must choose among the probably more than 50 antipsychotic drugs available worldwide, but unfortunately research has not discovered reliable predictors of which drug helps which patient best. The usual recommendations (guidelines generally suggest considering the side-effects experienced in previous treatment attempts, choosing again the antipsychotic that helped in the last episode, and taking the patient's preference into account) are based on common sense, but it has not been tested whether following them actually changes outcome.
2. Hardly any sound studies are available on which sedation strategy (antipsychotics alone, and if so which antipsychotic, benzodiazepines, or a combination of both) is the most appropriate in agitated or even violent patients. Clinically relevant pragmatic studies on this important question have only recently emerged (10, 11).
3. While most guidelines recommend the use of atypical antipsychotics for negative symptoms, the effect sizes in meta-analyses were modest (12).
4. Recent reviews have rejected the dogma that there is a delayed onset of antipsychotic drug action (13, 14), but evidence on the clinically crucial question of how long we should wait before considering an antipsychotic ineffective is sparse.
5. In cases of non-response to standard doses, should we increase the dose or rather switch the antipsychotic drug?
6. After how many unsuccessful antipsychotic drugs should we try clozapine?
7. No augmentation strategy has a robust evidence base proving its efficacy in schizophrenia.
Admittedly, awareness of the lack of evidence for augmentation strategies should at least lead to a reduction of the harm caused by side-effects and drug–drug interactions resulting from irrational polypharmacy.
In long-term treatment, by contrast, there is clearer evidence on some questions, for example that people with schizophrenia need maintenance treatment with antipsychotics and that intermittent treatment is usually not an option. But some of these recommendations may be so well established that guideline implementation strategies would not detect a difference in comparison with control groups. It could be, for example, that owing to the information initiatives of pharmaceutical companies, many clinicians know the optimum doses of second-generation antipsychotics even if they have not read the guidelines. Therefore, as long as the recommendations of the guidelines remain rather soft, huge changes in patient outcomes cannot be expected from them. Accepting this situation, it is no wonder that many studies did not find significant effects of guideline implementation; as Weinmann and colleagues (1) showed, they were often underpowered to detect small differences.
Therefore, studies on simple but clinically essential questions are still needed. The quality of the guidelines must also be improved. As the development of a good guideline is a major endeavour in terms of time and cost, it has been suggested that the development process be centralized to a greater extent (15). The collection of the evidence, in particular, could be carried out at an international level, while it would then be the task of national associations to adapt the evidence to the specific conditions of their health systems. As Weinmann and colleagues (1) pointed out, the barriers hindering clinicians from applying guidelines must be better understood, and strategies for guideline implementation must be developed. But it would be a wrong conclusion from this review that guidelines are useless. There is no alternative to evidence-based medicine. We can only improve it.