International Center for Alcohol Policies (ICAP)'s latest report on alcohol education: a flawed peer review process

Author

David Foxcroft
School of Health and Social Care
Oxford Brookes University
Marston Campus
Jack Straws Lane
Oxford OX3 0FL
UK
Tel: +44 (0) 1865 485283
Fax: +44 (0) 1865 485297
E-mail: david.foxcroft@brookes.ac.uk

With regard to the International Center for Alcohol Policies (ICAP) report number 16, entitled ‘Alcohol Education and its Effectiveness’ [1], I must express my disappointment and, more generally, highlight probable deficiencies in the peer review process for evidence-based reports from reputable organizations.

In November 2004 ICAP sent me a copy of the draft report and asked if I would be prepared to offer some comments. ICAP did not ask explicitly for an academic peer review of the draft report, but as the report was written in an academic style, contained many references to academic journal papers and also discussed effectiveness of prevention approaches, I decided to offer my comments as if I were undertaking a peer review of an academic paper.

Before I provided my comments I also declared to ICAP a potential conflict of interest. A Cochrane Review of alcohol misuse prevention for young people, on which I was lead author and which was published in Addiction in 2003, highlighted the potential effectiveness of the Strengthening Families Programme (SFP10–14) and researchers in my Department have subsequently been awarded funding to adapt and test the SFP10–14 for use in the United Kingdom. Some of this funding is from the alcohol industry.

The comments I made were generally critical of the substance of the ICAP report, pointing out a number of weaknesses and inconsistencies in the draft and suggesting a number of improvements. I drew particular attention to the inappropriate use of evidence. Yet the final report hardly differs from the draft, apart from some mainly cosmetic changes. Of course, other reviewers may not have raised similar concerns and I could simply be guilty of arrogance, but when many significant comments are disregarded by an organization that is expected to uphold the highest scientific standards, that causes me concern. I will not list all the specific comments I made, as they were substantial and ran to four typed sides of A4 paper. However, I feel I should point out several areas of concern that were not addressed in the final report.

The report states that life skills training approaches can be effective and that such approaches work in many different countries. My comments to ICAP pointed out that there is no good evidence to support life skills prevention approaches for alcohol, and that the cited studies of life skills are beset by methodological shortcomings, inappropriate statistical analysis, poor reporting and occasionally statistically significant findings that are of trivial importance. My comments also pointed out that this latter issue, of statistical significance as opposed to effectiveness, was problematic in two other prevention approaches highlighted in the ICAP report, SHAHRP [2] and Northland [3].

While I agreed that the SHAHRP study was well designed and that its results were interesting and, on the face of it, promising, there are also some issues with this study that need to be clarified before firm conclusions can be drawn. First, the findings as reported in several journal articles contain reporting inconsistencies in which non-significant effects are described as significant in the conclusion and summary sections. Secondly, few of the directly measured behaviours showed important changes, and it is only when a post-hoc multiplicative combination of outcome measures is compared between intervention and control groups that an apparently important effect emerges. The misuse of combined end-points is a fairly common error [4], and the implications of the researchers combining their end-point measures in this study need to be fully understood. Finally, an important issue that runs through all evaluation research is how to interpret the importance of the findings. Although results are frequently presented as statistically significant, this is not the same as importance in terms of effectiveness. In the SHAHRP study relative risk reductions are reported, which increase the perception of importance but often hide the real (absolute) impact.

I also pointed out that, while the Project Northland evidence was derived from a sophisticated statistical analysis with statistically significant results, I was concerned that the results fall into the category of studies showing statistically significant but trivially important effects. If the effect sizes shown in the growth curve modelling analyses are examined, the growth rate differences between intervention and control communities are of very questionable importance. I commented to ICAP that I was happy to be persuaded otherwise, but that at present I could see no real benefit, in terms of alcohol misuse prevention effectiveness, of the Northland approach.

The ICAP report also highlights social norming theory evidence, but as this was not included in our Cochrane systematic review of alcohol misuse prevention, I pointed out that I was not familiar with the quality of this evidence. I suspect that it does not come from controlled studies, so the potential for confounding (i.e. other factors that might account for the results) is high. I therefore suggested that the report should be more tentative about this evidence.

In fact, overall the report reads like a fairly traditional, or narrative, literature review undertaken by someone with knowledge of the field. The problem with such narrative reviews, however, is that there is good evidence from the work of Mulrow and others [5,6] that they are an unreliable source of information: their conclusions are often not based on rigorous and critical review methods and are often inconsistent with the best evidence.

In the ICAP report the final conclusions are even somewhat inconsistent with the preceding sections. One conclusion is that targeted interventions are more effective than broad-based measures, but there is no evidence in the body of the report that directly addresses or substantiates this conclusion. Another conclusion is that combined approaches have been shown to be more effective than single approaches. Again, there is no evidence in the body of the report that directly addresses or substantiates this conclusion.

The points above highlight specific concerns with the quality of this particular ICAP report that were not addressed through the usual academic peer review process. Surely, to command respect and trust, and to earn and maintain credibility in the field, the peer review process used by any organization should be independent, authoritative and influential. Those expectations have not been met in the present case. Moreover, ICAP reports are unattributed, which must detract from their authority. I do not understand why ICAP does not state who authored a particular report and, more honestly, present such reports as reflecting the opinions of their authors, rather than allowing the potentially misleading assumption that they are comprehensive, authoritative, critical and state-of-the-art reviews of the available evidence.

In the process of independent peer review there are three parties: the author(s), the reviewers and the editor(s). It is the editor's job to ensure that the author(s) take the suggestions of the reviewers into account, and indeed most journals allow reviewers the option to re-review a paper to confirm that their suggestions have been acted upon. Here, partly because authorship is not clear and the publication emerges from the organization itself, the roles of author and editor seem blurred, with no one apparently taking responsibility for ensuring that the final report has taken account of the reviewers' comments. There is a danger that such reports are seen to uphold the highest scientific standards, and that the process of peer review is assumed to be similar to that of reputable journals, when in fact the process is very different. Perhaps it is time for all organizations that issue apparently authoritative but unattributed reports, with the aim of guiding and influencing policy, to be transparent about how such reports are authored and quality controlled. In response to my concerns about the process, I am pleased to say that ICAP have recommended a number of significant changes to the way their reports will be prepared and peer reviewed in the future. Perhaps other organizations should follow their example.