Keywords: cost–benefit; policy analysis; public health

Random student drug testing has been implemented in some states as part of comprehensive drug prevention policies. The programs have, perhaps predictably, drawn opposition as well as support, and DuPont and colleagues [1], in this issue of the Journal, summarize the literature in the area. The review makes it clear that, while there are some data to support the efficacy of these programs, the body of evidence remains limited, and that rigorous long-term evaluations will be needed to assess these programs definitively.

The absence of evidence about these programs, however, is unlikely to stem the tide of argument for and against them. Like much in the drug prevention armamentarium, the volume of debate around random student drug testing well surpasses the body of evidence, and the gap is unlikely to narrow any time soon.

Although policy standing on shaky evidence is nothing new and clouds many fields, the problem is perhaps particularly acute in drug use policy. Public engagement with the issue of drug use can create the need, or the perception of need, for policy action at a speed that outstrips the perceived need for data that might inform that policy.

While at some level this problem is intractable, it presents real challenges. Programs that aim to do good, to prevent substance use and promote health, may have unintended and negative consequences. In addition, the implementation of any program ties up always-constrained resources, and the implementation of ineffective programs limits the resources available for programs that work.

Therefore, it clearly behoves us to ensure that drug use prevention programs are effective and that the scientific community provides data that can inform the implementation of rigorously evaluated programs that achieve their stated goals, in this case the minimization of drug use and its consequent harms. It is hard to find scientists who would disagree with this and, by and large, policy-makers, informed by increasing calls for evidence-based programs, are also likely to agree [2].

Why, then, do we have this disconnect between the implementation of programs that aim to reduce drug use and its harms and the body of evidence that supports, or refutes, these programs' efficacy? What are the barriers to implementing drug use prevention programs that are solidly grounded in evidence, or that are at least implemented concurrently with rigorous evaluations?

I would suggest that we face three central challenges, and that understanding them can help us to move beyond our current practice towards more effective evidence-based policies.

First, it is worth recognizing that much policy is informed by a core set of societal values, reflecting a balance between dominant values in a pluralistic society and those of the dominant political forces at a particular point in time. Those of us in the scientific community often spend far less time than we should considering how values shape policy efforts and how, in many respects, values and social norms exert a ubiquitous influence on policy choices and the programs that are implemented. Unfortunately, failing to recognize the role of values threatens to become a ‘missing variable’ problem in any effort we make to understand the drivers of policy choices and how they may be influenced. Although recent work has begun to document how culture and norms influence substance use [3, 4], much less work has assessed systematically how these factors determine policy implementation and retention, serving as a powerful counterweight to, and in many cases a replacement for, the evidence.

Second, much policy is driven by the power of compelling ideas. It is difficult for policy makers to implement and sustain large-scale interventions that are not based on compelling ‘stories’ and that, as such, are not saleable to a broad constituency. This results in the implementation of policies that ‘make sense’ and that, by implication, cannot possibly be wrong [5]. Of course, the road to failed policy intervention is paved with good intentions, and many compelling ideas do not translate into effective policy efforts. Unfortunately, this presents science with a steep uphill climb at times. Random student drug testing, after all, sounds like a reasonable idea, building as it does upon broadly recognized practices such as random testing of athletes and random work-place testing; but that appeal does not take into account the potential for unintended consequences of these policies and, perhaps, the very real chance that such policies are ineffective. Understanding the genesis of a policy, the foundational compelling idea that drives the intervention in question, can give us an opportunity to dissect carefully the motivation behind proposed ideas and, consequently, to inform a critical analysis of these central motivations ahead of evidence that may provide a definitive assessment.

Third, policy exigencies operate on rather different time-lines than research does. Most research projects last longer than congressional terms and as long as federal presidential terms. Policy windows open and shut rapidly. Therefore, the desire on the part of those in a position to implement programs to act when the opportunity arises is understandable. Unfortunately, rapid implementation of programs almost by definition precludes rigorous assessment. Compounding the issue, program monies seldom include evaluation resources up-front, creating a time lag between program implementation and evaluation. By the time the evidence has caught up, or is catching up, with the policies implemented, programs are well under way and very much have a life of their own, making their elimination or modification structurally difficult and potentially politically unpalatable.

Random student drug testing illustrates these three challenges well. Based on a compelling idea, and driven by a deep concern with adolescent drug use, the program that has been implemented is slowly expanding without the weight of evidence behind it. In that respect, the review by DuPont and colleagues performs an important service, taking stock of the literature and offering clear prescriptions for next steps in our evaluation of this program. We would do well as a field to urge that these prescriptions be taken up quickly, both by funders who need to create the resources to facilitate this work and by researchers who should take up the funders on the challenge.

Declaration of interests

None.

References
