Systematic reviews in public health research

A traditional narrative review is a subjective exercise in which the author draws conclusions based on an idiosyncratic selection of the literature with no explicit methods of critical appraisal, analysis or summation of data. Not surprisingly, narrative reviews are susceptible to biased and misleading conclusions and are best regarded as viewpoints or opinion pieces rather than robust summaries of evidence.

In contrast, a well-designed systematic review resembles a scientific investigation. According to the Cochrane definition, a systematic review has a clearly formulated question and uses systematic and explicit methods to identify, select and critically appraise relevant research, and to collect and analyse data from the studies that are included in the review. Meta-analysis refers to the use of statistical techniques to integrate the results of included studies, but is not always included in a systematic review.1 As in any other scientific report, the methods should be reported in sufficient detail to enable the findings to be independently verified.
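
As a minimal illustration of the statistical pooling that a meta-analysis performs, the Python sketch below applies fixed-effect inverse-variance weighting, in which each study's estimate is weighted by the inverse of its squared standard error. The effect sizes and standard errors are hypothetical and are not drawn from any study or review cited here.

```python
import math

def fixed_effect_meta_analysis(effects, std_errors):
    """Pool study effect estimates using fixed-effect inverse-variance weighting.

    effects     -- per-study effect estimates (e.g. log odds ratios)
    std_errors  -- the corresponding standard errors
    Returns the pooled estimate, its standard error and a 95% confidence interval.
    """
    weights = [1.0 / se ** 2 for se in std_errors]            # w_i = 1 / SE_i^2
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    return pooled, pooled_se, ci

# Hypothetical log odds ratios and standard errors from three small trials
effects = [-0.35, -0.10, -0.22]
std_errors = [0.20, 0.15, 0.25]

pooled, se, (lower, upper) = fixed_effect_meta_analysis(effects, std_errors)
print(f"Pooled log OR = {pooled:.3f} (SE {se:.3f}), "
      f"95% CI {lower:.3f} to {upper:.3f}")
```

A random-effects model would additionally allow for between-study variation in the true effect; the fixed-effect version is shown only because it makes the weighting principle easiest to see.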

Systematic reviews were initially used to combine information from more than one comparative study, usually randomised controlled trials (RCTs) that were individually underpowered or that, as a group, failed to produce consistent or conclusive findings. Systematic review methods have since evolved to incorporate a range of other study types, such as cohort and case-control studies. In many systematic reviews meta-analysis is not achievable or sensible given the nature of the included studies; nevertheless, descriptive overviews are being published with increasing frequency and can be used to establish the status of existing evidence on a question of interest for a planned study, or to demonstrate the lack of reliable evidence supporting an accepted idea or practice.

Having the words ‘systematic review’ in a title is no guarantee of quality or reliability. Methodological flaws can affect every step of the review process, from poor definition of the research question to inappropriate combination of the findings of biased studies. Synthesis of data from non-randomised studies, where biases may be similar across studies, can produce a meta-analytical result that is more precise yet seriously biased. Publication bias can also be amplified in a systematic review. Critical appraisal of a systematic review is every bit as important as critical appraisal of individual studies, especially as systematic reviews are often attributed greater credibility.
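
To make the ‘more precise but still biased’ point concrete, the hypothetical sketch below reuses the inverse-variance pooling shown earlier: if every included study carries the same systematic offset, adding studies shrinks the standard error of the pooled estimate but leaves the estimate centred on the biased value rather than the true effect. All numbers are invented for illustration.

```python
import math

TRUE_EFFECT = 0.0      # hypothetical true log odds ratio
SHARED_BIAS = 0.30     # hypothetical systematic bias common to every study

def pool_identically_biased_studies(n_studies, se_per_study=0.20):
    """Pool n studies that all report the same biased estimate."""
    effects = [TRUE_EFFECT + SHARED_BIAS] * n_studies   # every study is off by the same amount
    weights = [1.0 / se_per_study ** 2] * n_studies
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

for n in (2, 5, 20):
    pooled, se = pool_identically_biased_studies(n)
    print(f"{n:2d} studies: pooled estimate {pooled:.2f}, SE {se:.3f} "
          f"(true effect {TRUE_EFFECT:.2f})")
```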

A particular concern for public health researchers is that the types of study and the nature of the data generated may not obviously fit the systematic review framework. Some may feel that the methods are not suitable for answering public health questions where context is paramount and interventions are complex. There may also be a risk that the methods will constrain the questions we can ask. However, systematic reviews are not only for simple individual-level interventions, and there are many reviews that draw on a range of evidence about public health policy and practice.2 An example is the recently published meta-analysis of the influence of liquor pricing on alcohol consumption,3 which draws strong and policy-relevant conclusions from a diverse literature.

There is now a Cochrane Collaboration Public Health Review Group focusing on such reviews (http://ph.cochrane.org/) and the Cochrane Health Equity Field has been set up to encourage reviewers to explicitly assess the effects of interventions on the disadvantaged as well as on the population as a whole (http://equity.cochrane.org/welcome). An equity checklist for systematic review authors is available on this website.4 There is increasing recognition of the varied evaluation methods used in public health, and of the difficulties of applying standard quantitative methods to many important questions. This is leading to a broader and more flexible approach to the incorporation of evidence into systematic reviews, including qualitative evidence.5 It is clear that a systematic approach that aims to objectively collate and summarise all the available information is not only possible but a necessary replacement for the traditional narrative approach, where the conclusions may be unjustified. If the goal is evidence-based policy-making to achieve public health gain, systematic reviews will be a vital component of the process.

When designing a systematic review the authors have discretion to set the parameters. If they set the bar very high and include only homogeneous, methodologically high-quality studies, they may end up with nothing to report. On the other hand, setting broad criteria and ending up with sparse, heterogeneous evidence may create a real challenge in how to report it. Authors need to exercise judgement when they design a review and take advice from the excellent and detailed resources that are now available. These resources include the guidance from the Centre for Reviews and Dissemination (CRD) at the University of York6 and the Cochrane Collaboration.1,7 Standards for transparent and accurate reporting of systematic reviews, such as PRISMA (preferred reporting items for systematic reviews and meta-analyses) and MOOSE (meta-analysis of observational studies in epidemiology), are embedded in these guidelines. Journals have a key role in encouraging authors to adhere to these standards.
