Editorial: Context, conduct, and accessibility in scientific reporting


  • Tom O’Connor

It is typical for readers of this journal to see references to a variety of professional research and health organizations, such as the World Health Organization or a psychological association from the United States or United Kingdom. In the current issue, readers will see an unfamiliar set of acronyms and citations to works of ministries that rarely get a mention in clinical research journals. Ager et al. (2011) report in this issue on a psychosocial school-based program to improve the behavioral adjustment of children in northern Uganda. Their report adds to a growing clinical science literature that seeks to extend psychosocial curricula and clinical programs to children in war-torn, conflict-ridden, developing countries. That step signals the confidence of investigators that some existing programs, many of which were developed in radically different contexts, may benefit children whose depth of need could not have been anticipated by their developers. This journal has a history of including studies of this kind (e.g., Smith, Perrin, Yule, & Rabe-Hesketh, 2001). In the particular case of parenting treatment studies, for example, there are several kinds of efforts to test the generalizability of interventions to universal settings, to ethnically diverse samples within a western context, and to rather different cultural contexts (Baker-Henningham, Walker, Powell, & Gardner, 2009; Scott, O’Connor, et al., 2010; Scott, Sylva, et al., 2010). And there is reason to anticipate a persisting presence of like-minded studies. Ager et al.’s article exposes the benefits and challenges of this work, and raises a host of interesting questions for reviewers and editors.

First, the benefits. Children who received the school-based program, which was composed of 15 one-hour sessions that focused on topics such as safety, coping and future planning, reported greater relative gains over a 1-year period on a specially tailored index of well-being; parent reports but not teacher reports indicated significant gains over the same follow-up period. Improvements in children’s resilient behavior need to be put in the following context: many children would have experienced major trauma; the intervention was comparatively brief and carried out without the usual oversight of fidelity; the interventionists were those already placed in the school settings and not outsiders parachuted in for the purposes of the study. An equally important feature is that the study capitalized on a naturally occurring (technically man-made) experiment – a scaled-up implementation. Implications of the findings are substantial. If, as the study demonstrated, sizable gains may occur in children’s happy, responsible, and nonviolent behavior, then there may also be further benefits to children’s somatic health. Furthermore, positive changes in classrooms of children may spur improvements for the community. Ager et al. show that behavioral interventions based on sound psychological theory have broad appeal and impact. Promoting behavioral well-being is prominent in the hierarchy of needs for individuals in these settings; it is mistaken to presume that an apparently modest behavioral intervention cannot work where more ‘basic’ needs may not be reliably met.

Second, the challenges. In addition to exporting evidence-based programs, there is a need to export scientific rigor to new and different clinical proving grounds. That is, there is a need for careful evaluation no matter where the program is placed or how its funding is secured. Conducting a rigorous evaluation is difficult: it requires both a sound infrastructure and a sympathetic mindset for evidence-based practice and policy. Neither may exist or arise naturally across settings where political and humanitarian concerns may be – or may be perceived as – overwhelming. Third, the questions. Reviewers, editors, and readers may, for some articles of this sort, need to tolerate a greater degree of ambiguity and uncertainty because of a lack of information than might otherwise be tolerated. Methods sections are meant to be written in sufficient detail and clarity that the study could be replicated by a separate research group. To be sure, that is an ideal that can be difficult to attain: if every nuanced detail of daily decision-making in conducting a ‘typical’ clinical trial were provided in the text, there would be little room left for references, title pages, or face pages. Nonetheless, essential methodological details are provided and inferences, where needed, may be safely drawn, and perhaps confirmed at meetings or from e-mails. Ager et al.’s trial is anything but ‘typical’. What must get edited out in this and similar articles is the narrative of the months- or years-long journey that culminates in the methodology that is reported. Impressed readers wishing to replicate such a study would need to do much more than read the article or e-mail the authors; borrowing techniques from anthropologists would seem as basic as conducting power calculations. Faced with such challenges, the task of distilling a methods section from all the different procedures carried out is not straightforward.
Reviewers, editors and, later, readers will need to be sympathetic to these complexities and unusually modest in drawing inferences, given what was excised to conform to word limits. Other issues raise challenges as well; for example, the interpretation of a scale may have no substantial meaning outside the context in which it was developed. The broadest question for editors is whether a study with such a defined and particular context meets a generalizable scientific or clinical aim – even if the particular sample profile and measures would not appear in an article in the same journal for years to come, or ever. That is, the task is to discern whether the study has more than local interest and has a place outside a technical report for the funding agency. The article by Ager et al. does, and may help further establish the role of clinical science in humanitarian efforts across the world.

Among the other contributions to figure in this issue is a set of articles on the many faces of conduct problems in youth, from inhibitory control in young children to full-scale psychopathy in adolescents. Asscher et al.’s (2011) meta-analysis focuses on the latter. It provides effect size calculations that should generate discussion about methods and clinical strategies. The amount of online material in that article is greater than is typical for this journal, at least so far. Readers may wonder about and comment on the amount of online, off-article material. Conventions concerning what is essential material for an article are changing and already vary across journals. In different ways, the articles by Ager et al. and Asscher et al. illustrate the teamwork needed among reviewers, editors, and publishers to define what is meant by accessibility in scientific reporting. The resulting trade-offs of time, space, and money will need discussion.