As someone who has expressed concern about the quality of drug prevention research for many years, I welcome Dr Holder's concise and trenchant exposition of the fundamental problems pervading this area [1]. Holder's objective in raising these issues is to generate debate in order to strengthen the scientific base of drug prevention research. Whether the proponents of researcher-designed prevention programs will choose to engage in any such critical debate remains to be seen. Personally, I am skeptical, as I doubt the commitment of many researcher-designers to the practice of science. Indeed, I would contend that this field of research is best viewed as a form of pseudoscience, not science.

The distinction between science and pseudoscience is one of degree rather than kind, and not all forms of pseudoscience display exactly the same features [2]. There are, however, certain activities and attitudes that are common among practitioners of pseudoscience, such as hostility to criticism, encouragement of consensual thinking and an inability to accept the fallibility of cherished ideas and theories [3,4]. An emphasis on hypothesis confirmation rather than refutation is also a key characteristic of pseudoscience, and a number of the features of researcher-designed drug prevention noted by Holder (e.g. post hoc selection of outcome variables and use of one-tailed tests of statistical significance) clearly serve the function of confirming, rather than falsifying, the hypothesis that ‘Program X works’.

It is as a case study in pseudoscience that I think the literature reviewed by Holder is of most interest, as it offers insight into the consequences of abandoning hypothesis refutation in favor of hypothesis confirmation. Indeed, it provides a much better example of pseudoscience than the activities to which the term is typically applied. Most research into the practice of pseudoscience in the social sciences has focused upon psychological treatments and applied the term to two main forms of therapeutic practice. The first consists of treatments for which there exists very little, if any, empirical research demonstrating their effectiveness, but which are none the less used widely by practitioners [5]. Because it eschews research and promotes itself primarily through personal testimonies, Alcoholics Anonymous represents a good example of this form of activity [6]. The second group of psychological interventions to which the term ‘pseudoscience’ has been applied comprises those for which there are a number of empirical studies indicating absence of effect but which continue to be used by a sizable number of practitioners [7,8]. The Drug Abuse Resistance Education (DARE) program is perhaps the best example of this in the drug prevention field, as it is used widely in American schools despite numerous evaluations showing that it has little effect on drug use [6]. In each of these two types of activity, the term ‘pseudoscience’ describes the behavior of practitioners who base their clinical decisions on something other than empirical research (e.g. intuition or some view of human behavior). The term is not, however, being used to describe the practices of individuals who claim to be scientists: in the first instance few, if any, scientific studies exist, and in the second there are perfectly good scientific studies (supporting the null hypothesis of no difference) that are simply ignored by practitioners. The evaluators of the DARE program, for example, make no attempt to discount or explain away their null findings [9]. The fact that practitioners choose to ignore these findings and continue to use the program should not reflect poorly on the evaluators.

In contrast to the DARE evaluation studies, the researcher-designed prevention discussed by Holder has generated a considerable literature that rarely falsifies a hypothesis pertaining to program effectiveness, yet purports to be genuine science. This literature should be scrutinized critically before any further commitment of public funds is made to these types of intervention programs and, where possible, the data sets upon which this literature is based should be re-analyzed by independent scholars in order to ascertain the extent to which the results reported depend upon the analysis strategies employed by the original investigators. If this body of literature really is science, then those who have generated it should have no problem in sharing data with fellow scientists.

Declaration of interest