
Obesity prevention programs demand high-quality evaluations

Correspondence to:
Professor Boyd Swinburn, WHO Collaborating Centre for Obesity Prevention, Faculty of Health, Medicine, Nursing, and Behavioural Sciences, Deakin University, 221 Burwood Highway, Burwood, Victoria 3125. Fax: (03) 9244 6640; e-mail: boyd.swinburn@deakin.edu.au

Abstract

Obesity prevention programs are at last underway or being planned in Australia and New Zealand. However, it is imperative that they be well evaluated so that they can contribute to continuous program improvement and add much-needed evidence to the international literature on what does and does not work to prevent obesity. Three critical components of program evaluation are especially at risk when funding comes from service delivery rather than research sources: the need for comparison groups; the need for measured height and weight; and the need for sufficient process and context information. There is an important opportunity to build collaborative mechanisms across community-based obesity prevention sites to enhance program and evaluation quality and to accelerate the translation of knowledge into practice and policy.

Obesity prevention programs are springing up in response to growing concerns about childhood obesity. This is a very welcome development following more than a decade of inaction since the epidemic was recognised in the mid-1990s. Another welcome development has been the increased emphasis on using evidence to inform public health practice, programs and policies.1 Unfortunately, knowing what works and what does not work for obesity prevention is difficult because the evidence base is so limited and the settings in which interventions have been tested are so few (mainly primary schools).2,3

The Primary Prevention Group of the Australian Childhood and Adolescent Obesity Research Network (ACAORN) is concerned that some obesity prevention programs are being planned or implemented with insufficient priority placed on appropriate evaluation designs and insufficient funding for rigorous evaluation. Expensive programs with weak evaluations waste precious resources, fail to contribute to their own quality enhancement, and fail to contribute much-needed effectiveness evidence to the literature. The same concerns have recently been raised about the United Kingdom's response to childhood obesity by the UK National Audit Office.4

The purpose of this article is to identify the main evaluation components that are at risk in Australasian intervention programs and to propose opportunities to lift the quality of evaluation of obesity prevention programs in the region.

Funding sources and priorities

Interventions tend to be funded either by research agencies (where evidence creation is primary) or by government health agencies (where service delivery is primary). For example, the Pacific OPIC Project (Obesity Prevention in Communities) is a $5.8 million research project in Australia, New Zealand, Fiji and Tonga that is funded by the Wellcome Trust, the National Health and Medical Research Council and the New Zealand Health Research Council.5 It involves measurements of 15,000 adolescents in intervention and control sites. While this project will be evidence-rich about what worked and what did not, it runs the risk of not being able to convert its programs into sustainable service delivery. On the other hand, the evaluations of large projects funded through service agencies tend to be heavily constrained by the usual 10–15% budget allocation for evaluation and have program designs that aim to maximise on-the-ground delivery. These projects run the risk of not knowing if the interventions are successful and why.

Providing funding for support and evaluation that is separate from program implementation would not only allow funding from a variety of sources (including research agencies) but would also lift evaluation from a minor afterthought to a major component alongside implementation. This approach is being used in France, where a successful obesity prevention program6 is being rolled out to about 130 municipalities7 using a funding model of one euro per capita from each municipality for on-the-ground programs and one euro per capita from a variety of other sources for support, social marketing and evaluation (J.M. Borys, personal communication).
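As a rough illustration of how this split-funding model scales, the sketch below applies the per-capita rates described above to a hypothetical municipality (the population figure is invented for illustration):

```python
# Illustration of the French split-funding model described above:
# one euro per capita from the municipality for on-the-ground programs,
# plus one euro per capita from other sources for support, social
# marketing and evaluation. The population figure is invented.

EUROS_PER_CAPITA_PROGRAM = 1.0   # municipal contribution (delivery)
EUROS_PER_CAPITA_SUPPORT = 1.0   # external contribution (support/evaluation)

population = 30_000              # hypothetical municipality

program_budget = population * EUROS_PER_CAPITA_PROGRAM
support_budget = population * EUROS_PER_CAPITA_SUPPORT

print(f"Delivery budget:           EUR {program_budget:,.0f}")
print(f"Support/evaluation budget: EUR {support_budget:,.0f}")
print(f"Evaluation/support share:  "
      f"{support_budget / (program_budget + support_budget):.0%}")
```

Under this model, support, social marketing and evaluation attract half of the total funds, in contrast to the usual 10–15% evaluation allocation noted above.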

Program evaluation

The evaluation of community-based interventions is complex because communities themselves are complex and interventions are usually more ‘organic’ than classic, clinical, investigator-controlled trials. In terms of design, selecting comparison populations is often challenging, and quasi-experimental designs, clustering effects, and long intervention durations (usually 2–3 years) add further complexity. Health promotion theories, process evaluation and program logic models are also needed to show how the proposed inputs (interventions) influence the mediators and outcomes.8,9 The community engagement needed for most obesity prevention projects can add extra dimensions of community interpretation of the findings and sharing of the knowledge gained.
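One reason clustering matters is that it sharply inflates the sample size needed to detect an effect. The sketch below uses the standard design-effect formula, DEFF = 1 + (m − 1) × ICC; the cluster size, intracluster correlation and baseline sample size are illustrative assumptions, not values from any of the programs discussed here:

```python
# Minimal sketch: how clustering inflates the required sample size.
# Standard design effect for a cluster design: DEFF = 1 + (m - 1) * ICC,
# where m is the average cluster (e.g. school) size and ICC is the
# intracluster correlation coefficient. All values are illustrative.

def design_effect(cluster_size: float, icc: float) -> float:
    """Variance inflation factor for a cluster (e.g. school-based) design."""
    return 1 + (cluster_size - 1) * icc

n_independent = 800  # children needed per arm if sampled independently (assumed)
cluster_size = 50    # assumed average number of measured children per school
icc = 0.03           # assumed within-school correlation for the outcome

deff = design_effect(cluster_size, icc)
n_clustered = n_independent * deff

print(f"Design effect: {deff:.2f}")
print(f"Children needed per arm under clustering: {n_clustered:.0f}")
print(f"Approximate schools per arm: {n_clustered / cluster_size:.0f}")
```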

Below, we highlight three critical aspects of program evaluation that create a challenge for all obesity prevention programs. While these are fundamental and uncontroversial from a research trial perspective, at the intersection of research and the delivery of health promotion programs (where community-based obesity prevention must sit at this stage) they are at risk of being lost.

The need for comparison groups

Engaging communities and schools to be comparison populations is very difficult and runs counter to a service-delivery philosophy. However, without a non-intervention comparison, it is not possible to know whether any decreases or increases in obesity prevalence in intervention areas represent a positive, negative or null effect. True experimental designs at a population level usually involve cluster randomisation by settings such as schools. Quasi-experimental designs can obtain comparison data from matched settings, regionally representative samples, or other population monitoring data. It is also possible that, within Australia, multiple intervention sites could use pooled comparison data.
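A minimal sketch of the pair-matched cluster randomisation mentioned above; the school names and pairings are invented for illustration:

```python
import random

# Minimal sketch of pair-matched cluster randomisation by school.
# Schools are first matched on characteristics such as size and
# socioeconomic profile; one school in each pair is then randomly
# assigned to intervention and the other to comparison.
# School names and pairings are invented for illustration.

matched_pairs = [
    ("School A", "School B"),
    ("School C", "School D"),
    ("School E", "School F"),
]

rng = random.Random(2007)  # fixed seed so the allocation is reproducible

allocation = {}
for pair in matched_pairs:
    intervention = rng.choice(pair)
    comparison = pair[0] if intervention == pair[1] else pair[1]
    allocation[intervention] = "intervention"
    allocation[comparison] = "comparison"

for school, arm in sorted(allocation.items()):
    print(f"{school}: {arm}")
```

Pair-matching preserves balance on known confounders while retaining the protection of randomisation, which matters when only a handful of clusters are available.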

The need for measured height and weight

Anthropometry (height, weight, and waist) provides the key outcome measures for obesity prevention interventions. Without these, it is not possible to determine obesity prevention effectiveness. Self-reported heights, weights, and behaviours are notoriously prone to bias.10,11 Anthropometry may not be needed for some efficacy interventions aimed at changing specific behaviours (such as TV viewing time or fundamental motor skills) or environments (such as school policies or neighbourhood facilities), but it is required if obesity prevention is part of the aim of a project and for population monitoring related to obesity.12
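Measured anthropometry makes the core outcome computation straightforward. A minimal sketch of the BMI calculation (BMI = weight in kg divided by height in m squared); the measurements are invented, and note that classifying children as overweight or obese requires age- and sex-specific reference cut-offs (e.g. IOTF or CDC references), which are not reproduced here:

```python
# Minimal sketch: computing BMI from measured height and weight.
# BMI = weight (kg) / height (m) ** 2.
# For children, the resulting BMI must be compared against age- and
# sex-specific reference cut-offs (e.g. IOTF or CDC) rather than adult
# thresholds; those reference tables are not reproduced here.

def bmi(weight_kg: float, height_cm: float) -> float:
    height_m = height_cm / 100
    return weight_kg / height_m ** 2

# Illustrative (invented) measurements for one child:
print(f"BMI = {bmi(weight_kg=45.0, height_cm=152.0):.1f} kg/m^2")
```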

Often the anthropometry measurements take place in a school setting and, while principals and education departments are usually very supportive of these measures, there are understandable sensitivities about measuring children. However, it is the experience of the ACAORN group that when anthropometry is conducted in a private and sensitive manner, the risk of psychological or social discomfort for the child is minimal.

The need for sufficient process evaluation and contextual information

Process evaluation, which involves analysing the program's implementation and reach across different population subgroups and related contextual factors, contributes to the interpretation of impact and outcome results.8 The frequent lack of information on implementation reach and dose hampers the ability to compare interventions and draw conclusions on the effectiveness of strategies to prevent child obesity.13 Contextual information is vital for assessing the applicability of the interventions to other places, populations, and implementation conditions.8 Without this information there is a risk of drawing the false conclusion that there was intervention failure, when really there was implementation failure.14
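A minimal sketch of two basic process measures, reach and dose delivered; the definitions follow common health promotion usage and the counts are invented for illustration:

```python
# Minimal sketch of two core process-evaluation metrics.
# Reach: proportion of the eligible population actually exposed.
# Dose delivered: amount of intervention provided (here, session counts).
# Definitions follow common health promotion usage; numbers are invented.

eligible_children = 1_200
children_exposed = 950

sessions_planned = 20
sessions_delivered = 16

reach = children_exposed / eligible_children
dose_fidelity = sessions_delivered / sessions_planned

print(f"Reach: {reach:.0%} of eligible children")
print(f"Dose delivered: {sessions_delivered}/{sessions_planned} sessions "
      f"({dose_fidelity:.0%} of planned)")
```

Reporting such measures separately for key subgroups (for example, by socioeconomic position) helps detect differential reach and distinguish implementation failure from intervention failure.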

Illustrative examples

The Fleurbaix-Laventie Ville Santé project is a childhood obesity prevention program in northern France.6 It started in 1992 with baseline anthropometry in two intervention villages and two similar comparison villages. Periodic repeat measurements showed that it took about eight years of intervention to reverse the increasing prevalence of obesity, and after 12 years there was significantly less obesity in the intervention villages than in the comparison villages. While there is confidence that the intervention was eventually effective in reducing obesity, there was virtually no process evaluation (because of a shoestring budget), so explaining why and how the program worked was not possible.

The French project at least had an outcome evaluation. By contrast, the ‘Active After School Communities’ program15 is by far Australia's most expensive program to increase physical activity and reduce obesity in children (about $200 million of federal funds over eight years). However, it has no outcome evaluation and minimal process evaluation. We will never know whether this program was effective and whether it warranted this massive investment.

The potential of Australasian programs to create the evidence

In Australia and New Zealand, there are about 20 other substantial community-based prevention programs, either under way or in the planning stages, that have the potential to prevent obesity. However, three are not taking height and weight measurements in intervention and comparison populations and six are yet to decide. These demonstration projects represent the vital first step before ‘proven’ interventions are rolled out more widely. With full evaluations, they offer an unparalleled opportunity to accelerate the development and improve the quality of the evidence base on obesity prevention and to support the rapid dissemination of the findings into policy and practice.

To fully capture this opportunity, there would need to be as much consistency as possible in evaluation approaches and instruments across sites to facilitate multi-site comparisons and meta-analyses. Collaboration mechanisms would also need to be in place to maximise interactions between sites and promote the rapid dissemination of findings into policy and practice. The Primary Prevention Group of ACAORN is advocating for a Collaboration of Community-based Obesity Prevention Sites (the CO-OPS Collaboration) to be funded to achieve these outcomes.
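If sites used consistent instruments, their effect estimates could be pooled directly. A minimal sketch of fixed-effect, inverse-variance pooling; the site estimates and standard errors are invented, and a real multi-site analysis would also assess heterogeneity and would likely use random-effects models:

```python
import math

# Minimal sketch of fixed-effect, inverse-variance pooling across sites.
# Each site contributes an effect estimate (e.g. difference in change in
# BMI z-score between intervention and comparison) and its standard error.
# Estimates below are invented; a real analysis would test heterogeneity
# and would likely use a random-effects model.

site_estimates = [   # (effect, standard error) per site
    (-0.10, 0.04),
    (-0.05, 0.06),
    (-0.12, 0.05),
]

weights = [1 / se**2 for _, se in site_estimates]
pooled = sum(w * est for (est, _), w in zip(site_estimates, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
```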

Conclusions

Given the limited evidence base on obesity prevention, funders of major obesity prevention programs are obliged to support high-quality evaluations. At a minimum, this means the inclusion of height and weight measurements in intervention and comparison groups, analyses of outcomes by demographic variables, and detailed descriptions of key intervention strategies and their intensity. Within Australasia there are already a substantial number of whole-of-community childhood obesity prevention programs under way. The funding of a structure like the proposed CO-OPS Collaboration would capture their collective strength, increase the quality and comparability of the program evaluations, and accelerate the translation of what is learned into practice and policy.
