Dr Richard Henderson’s talk on bone health was one of the highlights of the 2009 Annual Meeting of the American Academy for Cerebral Palsy and Developmental Medicine (AACPDM). Toward the end of the talk Dr Henderson outlined the randomized controlled trial (RCT) that seemed to be the logical next step in evaluating interventions to enhance bone health – and then showed the complete impossibility of ever actually mounting a credible study! The barriers he identified included the huge sample size required, participant retention over time, the necessarily long duration of the study, and the impossibility of funding such a complex study.

These issues resonated with me, as I have often been frustrated by what I see as the tyranny of people’s expectation that we do RCTs to address issues in our field. As Dick Henderson showed, the requirements for an RCT are almost impossible to fulfil for the majority of big questions we need to answer. These sentiments may seem heretical coming from a clinical and health services researcher! Let me explain.

An RCT requires a very specific intervention question that can be answered with a ‘human experiment’. It demands that we focus the study in a way that is analogous to taking a high-powered view of an object with a microscope. First we need to know where to look. We must control as many elements of the clinical situation as possible, and vary one factor (i.e. the active ingredient, the intervention). We have to develop criteria for who is and is not eligible for the study; we must specify what the intervention and control groups will receive; we have to delineate what other factors will be held constant to the extent possible; and so on. These actions can all be accomplished when we explore the value of a specific short-term intervention but that is scarcely how we provide most services.

Given the life-course perspective that is increasingly our interest for children and youth with neurodisabilities, I believe that we need to take a much broader view of the issues, and celebrate the variation that exists among the young people with whom we work. We need to identify the many potential sources of variation that have an impact on what we want to know and that are almost impossible to account for in all but the largest RCTs. These can be thought of as the ‘Yes, but what about…?’ issues that arise in any discussion of whether an intervention might work to accomplish a stated goal. A few examples may illustrate the point.

Consider age. We work with children and young people with neurodisabilities at all ages, and recognize the myriad differences across ages and developmental stages. In constraining any RCT to specific age groups, we risk losing generalizability of the findings. Otherwise we face the huge challenge of recruiting samples large enough to accommodate this variation – and then we invariably analyse and report our findings within age bands!

Consider functional abilities. An intervention study must stratify participants appropriately, as one can do with people with cerebral palsy using the Gross Motor Function Classification System and Manual Ability Classification System. This in turn means that we will make important decisions that affect sample size and, again, the potential generalizability of the findings beyond the participants of that study.

Consider the outcomes to be evaluated. From among the many outcomes that can be assessed, which are of primary importance, and to whom? How do we decide? What is a relevant time-course for the outcomes? Can any RCT run long enough to inform us about the late impacts of any short-term gains observed in the trial?

I believe that we need to address the big questions in our field with research designs that take advantage of these sources of variation – trading the microscope’s high-power focus for its broader, low-power view. Prospective inception cohort studies can and should include everybody. Longitudinal studies can allow us to look at within-person change and development. We can assess relevant aspects of people’s lives with ‘living’ protocols that adapt to newly identified outcomes of interest.

For these reasons I hope people will resist the siren call of the RCT simply because it is there – and use the best designs for the ‘big’ questions we need to answer.