Editorial: Writing (and reading) Research Reviews in child psychology and psychiatry – principles and practice, opportunities and pitfalls


As the incoming Research Reviews Editor for The Journal of Child Psychology and Psychiatry (JCPP), it is a privilege to participate in helping a flagship international journal to advance knowledge and practice through well-targeted reviews. Readers may be aware that JCPP has three kinds of review outlets. Practitioner Reviews are directed at applied questions, and are handled by Jonathan Green. Annual Research Review issues (currently under the direction of James Leckman) pull together expert reviews and commentary, often on an annual theme. Finally, the regular Research Reviews are published throughout the year (approximately every other issue). Most reviews are commissioned (invited), because this allows us to balance the topical areas year-to-year and ensure that the overall review-set is coherent. Nonetheless, we occasionally consider excellent proposals from authors that happen to coincide with that year’s goals and priorities, or which show us that we have missed a critical and timely topic in our planning.

Authors can think about how to aim a review, and readers can recognize what these reviews accomplish, by understanding the different approaches to reviews. The regular Research Reviews are brief reviews intended to fill a needed niche in our field. Two kinds of papers can fit in the regular Research Review slot. The first and most common are scholarly updates on fields that have significant literatures. These are not comprehensive, Psychological Bulletin or Psychological Review pieces. At the same time, they are not speculative Medical Hypotheses type pieces. Rather, they are intended to be succinct yet original, rigorous, and sophisticated scholarly updates on critical issues in the scientific literature on child psychology and psychiatry. Precisely because short reviews in most instances cannot pretend to be comprehensive, they require clear conceptual framing, methodological explanation of how topics and literature were selected for citation and discussion, and skillful acknowledgment of important but not-reviewed aspects of the topic literature. Good topics include maturing but rapidly evolving domains that we have not recently reviewed or updated; neglected but important domains that now have sufficient literature to justify directing a spotlight their way; and topics with controversial or unresolved questions that could benefit from more light and less heat in the form of a clear-headed summary and review. These reviews are not ordinarily intended as a platform to summarize a body of work in one laboratory (as a Child Development Monograph or Psychological Review paper might do) but, rather, should reflect the state of the field as a whole.

A second, much less common type is the idea-generating or discovery-oriented essay. These are not reviews per se. Rather, they are essays appropriate to areas in which there is very little current literature but unusual potential and importance. An effective essay will spur research, identify the most promising ideas, and perhaps even provoke controversy by challenging existing beliefs. For these kinds of essays, mastery of the relevant literature remains important, but the literature may be sufficiently thin or disparate that the essay emphasizes the conceptual logic of the author's idea or new paradigm and pushes the field to think in a new way, rather than claiming a conclusion from past literature. Recent examples of hypothesis-setting reviews published in JCPP include those by Pelphrey et al. (2011) and Forbes & Dahl (2012).

This is an era in which our field is participating in dramatic societal and technological changes that influence children’s mental health and illness. Advances in physics, biology, and computer science are rapidly changing how research is conceived and how it is done. Rapid changes in personal technology are continuing to alter how children are socialized while also opening up new opportunities for intervention. Children’s environments continue to be disrupted by changes in diet, environmental pollutants, rapid urbanization, and social upheaval. Cultural and societal changes are dramatic in many parts of the world, often with little or no local scientific data that could enable evaluation of effects on children’s mental health or illness. To inform wise practice and policy while continuing to aim for long-term improvement in how society supports its youngest members (and thus its own future), our science must be nimble yet far-sighted, confident in what it knows but equally realistic about the limits to that knowledge, particularly in regard to sampling generalizability in a diverse and rapidly changing world. Effective Research Reviews will help us do that, by mapping where we are, what we in fact know, and what is within reach as the important next step. The occasional idea-provoking essay will point out which far horizon is most worth aiming at, recognizing the arc of the larger scientific enterprise.

Likewise, the field’s conception of the purpose and method of literature review has evolved and matured, and critical readers should recognize this. Four kinds of reviews can be identified for heuristic purposes. The first is the type that was typical a generation ago: a selective, impressionistic review of an area, with literature assembled by the author to advance a point of view, much as one might once have done in a debating society. Those types of reviews should become, for the most part, a thing of the past. We know too much, now, about availability heuristics and other cognitive biases that render such post-hoc and selective reviews vulnerable to serious error. Thus, a review without an explicit methodology or approach cannot suffice to advance our field. In its stead might be the occasional new-concept piece, which is not a review at all, but a proposal or an idea that the field should consider. Rather than making claims, it makes proposals, and thus stimulates new research. It is therefore critical that readers distinguish between reviews that draw conclusions and essays that serve to provoke discussion.

Reviews that are meant to draw together literature and enable conclusions are invaluable but must now take one of the remaining forms. A systematic review follows strict rules of literature identification and selection, essentially attempting to employ sampling theory to select the appropriate set or sample of studies in order to answer a very focused question. Such reviews often attempt to answer yes/no questions that are framed as clinical decisions that clinicians must make (“Should biofeedback be recommended to patients as a treatment for smoking cessation?”). They have become common in medicine, and to a lesser extent in psychology, with the establishment of resources such as the Cochrane Collaboration (http://www.cochrane.org) as well as systematic review software that takes investigators through the steps of the review process. Recent examples of systematic reviews published in JCPP include those by Dunn et al. (2011) and Strong et al. (2011). The principal advantage of a systematic review is its objectivity. Done properly, it is not vulnerable to the author’s cognitive heuristics or biases, other than the “bias” that selected the question in the first place. The disadvantages of such reviews include their cost (they are time and labor intensive), the fact that very strict selection criteria often result in exclusion of large amounts of relevant data (leading to potentially inaccurate conclusions about a slightly broader version of their specific question), and their misapplication to questions that are not yet ready for definitive yes/no answers. However, all reviews can benefit from the systematic review principle of following an explicit method for selecting and including literature, and readers should look for this.

A quantitative meta-analysis also features an explicit method for study inclusion, but typically casts a wider net than a systematic review. Meta-analysis yields a quantitative summary effect size that seeks to quantify the size of a relationship or effect, place a confidence interval around that effect size, and in many instances determine whether the effect size is of clinical or social importance: for example, “How big is the association between ingestion of food dyes and symptoms of ADHD?”; see also Asscher (2011). To address issues of publication bias, meta-analysts typically cast a wide net, then conduct sensitivity analyses to determine whether there are important moderators of effect size or whether unpublished studies could plausibly overturn an observed conclusion. The major advantage of a meta-analysis lies in its provision of a quantitative, rather than impressionistic summary of a literature. Disadvantages include failure to capture important moderators of effects, exclusion of important data based on selection criteria, and faulty conclusions stemming from the combining of studies which, due to differences in their methodology or materials, ought not to be pooled.
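For readers less familiar with how a quantitative summary effect is produced, the core arithmetic of fixed-effect (inverse-variance) pooling can be sketched briefly. The effect sizes and variances below are purely illustrative values, not drawn from any real review:

```python
import math

# Hypothetical per-study effect sizes (e.g., standardized mean differences)
# and their sampling variances; all numbers are illustrative only.
effects = [0.30, 0.45, 0.10, 0.25]
variances = [0.04, 0.09, 0.02, 0.05]

# Fixed-effect pooling: weight each study by the inverse of its variance,
# so more precise studies contribute more to the summary.
weights = [1.0 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))

# 95% confidence interval around the summary effect.
ci = (pooled - 1.96 * se, pooled + 1.96 * se)
print(round(pooled, 3), tuple(round(x, 3) for x in ci))
```

The summary estimate and its interval, rather than a tally of "significant" versus "non-significant" studies, are what a meta-analysis reports; random-effects models extend this logic when between-study heterogeneity is expected.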

A rigorous qualitative review is less formally defined in the field; it still generally uses an explicit methodology to select and review studies, but does not engage in quantitative pooling. It seeks to characterize and draw conclusions from a literature without enforcing a clinical decision rule or estimating a formal effect size. Such reviews are well-suited to summarizing a disparate literature that has too much variation in methods for pooling to be prudent. They can be helpful in consolidating what has been done so that subsequent research is more focused and well-targeted. Their weaknesses lie in a tendency to rely on box scoring of results (overlooking, for example, that four non-significant results could, when pooled, detect a reliable population effect), in failure to quantify conclusions, and in inadvertent non-systematic (unbalanced) consideration of evidence for or against a conclusion.
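The box-scoring pitfall noted above can be made concrete with a small worked example. Using hypothetical values, suppose four studies each obtain a one-sided p of .10 (z ≈ 1.28): a box score counts zero of four as significant, yet Stouffer's method of combining z scores shows the pooled evidence is quite strong:

```python
import math

def p_from_z(z):
    # One-sided p-value from a standard-normal z score.
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

# Four hypothetical studies, each with one-sided p = .10 (z ≈ 1.2816):
# individually "non-significant", so a box score reads 0 of 4.
z_scores = [1.2816] * 4

# Stouffer's method: pooled z = sum(z_i) / sqrt(k).
pooled_z = sum(z_scores) / math.sqrt(len(z_scores))
pooled_p = p_from_z(pooled_z)
print(round(pooled_z, 3), round(pooled_p, 4))  # pooled p falls well below .05
```

Counting significant results study by study thus discards exactly the cumulative information that pooling is designed to recover.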

In short, reviews can both illuminate and mislead, depending on how well their strengths and limitations are appreciated. The central value of reviews for readers is in allowing them and the field to move forward in their own work with data in hand. Without such reviews, as a field, and as readers, we too often fall into the trap of saying “the evidence is inconclusive” when it really isn’t. At a certain point, that statement does not suffice and the burden of proof on a question shifts -- because there are substantial data, despite their inevitable limitations. Basing our thinking on some data, if the data are properly vetted, is better than dismissing the data. A good review does this.

On the other hand, we also have to gauge those situations in which a question is better answered by doing the correct experiment than by summarizing numerous flawed experiments. Further, sometimes quantifying an effect is far more sensible than attempting to gauge whether numerous underpowered studies tell us anything. Other times, pooling studies that should not be pooled can lead to faulty conclusions. It is the job of editors (and referees), as well as authors and readers, to critically appreciate when a topic has been well chosen, a question well asked, a method well applied, and hazards and benefits of an approach well understood.

It is my hope that the reviews to be published in JCPP in the coming years will inform the broad readership and enable all of us to think in more sophisticated, careful, and up-to-date ways about critical issues in the field. It is also my hope that they will stimulate new discussions and insights by others. I look forward to exchanges with authors, readers, referees and editors in the many areas of discovery in which our field is engaged.



I am grateful to Edmund Sonuga-Barke for his comments on an earlier draft of the article. J.N. has disclosed that he has no competing or potential conflicts of interest relevant to this article.