Keywords: cannabis; cohort; consequences; epidemiology; marijuana

Until a decade ago, a range of methodological, cultural, attitudinal and legal barriers meant that we had few or no data on patterns of cannabis use from community settings [1,2]. Views on the harms of cannabis use were generally personalized, polarized and of little use in guiding policy [3]. Prospective studies outside the clinic have now led to very different understandings of cannabis use and its consequences [4]. Despite this remarkable progress, Temple and colleagues suggest that we have already hit a ‘grass ceiling’ [5].

They argue that current conceptualization and measurement of cannabis use are major barriers to progress. It is true that community studies typically employ frequency or dependence to define use (e.g. [6–8]). There is also little doubt that other aspects of cannabis use could be measured (e.g. quantities consumed, settings of use, mode of administration and concurrent substance use), but the reasons for the limited measures available are more complex. Few recent community studies have focused solely upon cannabis. Rather, the focus has been on health and development across adolescence and young adulthood, of which cannabis use has been a small element. Future studies could certainly make greater investment in measures of cannabis use. However, there are likely to be constraints that relate, as the authors suggest, to a need to assess cannabis use in a broader social context. This has generally been the approach of recent studies, but no study can measure everything. Better measurement of social and lifestyle contexts takes time and limits the scope for measuring cannabis in detail. Designing a study to focus predominantly on cannabis use may bring further problems. Selection biases commonly come into play in studies of specific problems, so that those with the condition of interest (i.e. cannabis use) are often the least likely to participate [9,10].

The authors are critical of the different classifications of frequency of use across studies. This is indeed a barrier to comparison. Strengthening the face validity of measures makes good sense. Validity should also be tested in other ways. It would be possible to examine the concurrent validity of composite measures of frequency and dose by comparing these definitions with serum levels of tetrahydrocannabinol (THC). In time, it may also be possible to examine the predictive validity of different indicators of use in terms of health and developmental outcomes. Yet, to a large extent, the differences in measures between studies reflect a more fundamental problem: that of very different levels of use in different samples. In some important studies, the highest levels of exposure to cannabis fall well short of those that might be seen in clinical practice (e.g. [6]). Very few of the large longitudinal studies have had substantial numbers of participants using cannabis daily, and yet it is in daily users that the clearest consequences are generally found (e.g. [11]). Where base rates of use are low, advances are unlikely to come from better measurement alone. Rather, there will be a need for larger sample sizes, perhaps combined with nested designs that focus greater attention on the heavier users. The use of snowballing and internet recruitment has attracted much attention, and early findings are encouraging [12], but these samples are likely to present challenges of their own around representativeness and whether participants are prepared to commit to longitudinal study over many years.

The authors are pessimistic about the scope for meta-analysis [5]. Differences between studies in the measurement of cannabis and its possible outcomes do pose challenges for data synthesis [13], but Horwood et al. have recently demonstrated its feasibility using three Australasian studies [14]. This analysis not only strengthened the previous evidence around cannabis use and educational attainment, but also identified the importance of sex as an effect modifier. Some differences between studies may even have been an advantage: despite different measured confounders, the associations were consistent across studies, diminishing the possibility of confounding as an explanation [14].

Temple and colleagues remind us of the importance of assessing effect size, yet most recent studies report both odds ratios and the prevalence of cannabis use, from which it is possible to estimate population-attributable risks. The problem has been more that some of these estimates have seemed implausibly high (e.g. for cannabis and psychosis), raising the question of unmeasured confounding [15]. They are mistaken in suggesting that comparisons are being made only between non-users and current users in population-based studies. The most informative studies have graded use more finely, and clear distinctions have emerged in comparisons of daily, weekly, occasional and non-users (e.g. [11]). So, too, it is difficult to support the idea that all use is being seen as harmful. Increasingly, recent studies have examined low-level and remitted use, with few adverse developmental effects emerging [16,17].
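As an illustrative aside rather than a description of any particular study, the population-attributable fraction (PAF) referred to above is commonly obtained from Levin's formula, with the reported odds ratio standing in for the relative risk when the outcome is rare:

\[
\mathrm{PAF} = \frac{p\,(\mathrm{RR}-1)}{1 + p\,(\mathrm{RR}-1)},
\]

where \(p\) is the prevalence of exposure (here, cannabis use at the level of interest). With hypothetical figures of \(p = 0.10\) and \(\mathrm{RR} \approx \mathrm{OR} = 2.0\), the formula gives \(0.10/1.10 \approx 0.09\), or roughly 9% of cases attributable to exposure, on the strong assumption that the association is causal and unconfounded.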

There is a grass ceiling, but it is one that we are unlikely to breach through any single methodological advance. The next generation of cannabis studies should build on the lessons of the past decade to design studies matched to specific settings, taking into account local variations in patterns of cannabis use. Even so, community attitudes are unlikely to shift quickly, so that evidence on cannabis use and its consequences alone will remain a necessary but not sufficient basis for cannabis policy [18,19].

Acknowledgement

George Patton is supported by a Senior Principal Research Fellowship from the National Health and Medical Research Council in Australia.

References
