Clinical comparability of the new antiepileptic drugs in refractory partial epilepsy: Reply to Costa et al.

Authors

  • Sylvain Rheims (sylvain.rheims@chu-lyon.fr)

    1. Department of Functional Neurology and Epileptology, Institute for Children and Adolescents with Epilepsy (IDEE), Hospices Civils de Lyon, Lyon, France
    2. INSERM U1028, CNRS UMR5292, Lyon Neuroscience Research Center, Translational and Integrative Group in Epilepsy Research, Lyon, France

  • Emilio Perucca

    1. Clinical Pharmacology Unit, University of Pavia and Clinical Trial Centre, National Institute of Neurology IRCCS C. Mondino Foundation, Pavia, Italy

  • Philippe Ryvlin

    1. Department of Functional Neurology and Epileptology, Institute for Children and Adolescents with Epilepsy (IDEE), Hospices Civils de Lyon, Lyon, France
    2. INSERM U1028, CNRS UMR5292, Lyon Neuroscience Research Center, Translational and Integrative Group in Epilepsy Research, Lyon, France

To the Editors:

We read with interest the meta-analysis by Costa et al. (2011) of add-on placebo-controlled trials of new antiepileptic drugs (AEDs) in refractory partial epilepsy. Although we agree that the differences observed are of “relatively small magnitude to allow a definite conclusion about which new AED(s) has superior effectiveness,” we have concerns about the methodology used in the analysis, which in our view invalidates some of the conclusions.

First, at least two of the basic assumptions made to justify the validity of the meta-analysis can be questioned. One relates to clinical homogeneity, whereby it was assumed that adult and pediatric trials can be pooled because their outcomes were “not different.” In fact, we evaluated the same set of pediatric trials with the same AEDs and demonstrated that the treatment effect was significantly lower in children than in adults, primarily because of twofold higher placebo responder rates in children (Rheims et al., 2008). Therefore, pooling pediatric and adult trials could bias indirect comparisons, particularly since some of the selected AEDs were not tested in children. Another questionable assumption relates to the absence of any effect of the period in which the studies were conducted. Although the authors found a higher responder rate in the placebo group in trials of the most recent AEDs (eslicarbazepine acetate and lacosamide), they assumed that this was not related to the years in which these trials were conducted. In fact, it was recently demonstrated that response to placebo increases gradually over the years, virtually doubling between 1989 and 2009 in the same trials included in the authors’ meta-analysis (Guekht et al., 2010; Rheims et al., 2011). This observation strongly questions the comparability of effect sizes measured in studies spanning a period of 25 years.

Second, the authors combined placebo-controlled trials and head-to-head trials, all of which, according to their selection criteria, should have been blinded. In fact, at least half of the selected head-to-head randomized controlled trials were open-label trials (Specchio et al., 1999; Chmielewska & Stelmasiak, 2001; Crawford et al., 2001; Fritz et al., 2005), which challenges some of the conclusions drawn from the head-to-head trial data.

Third, the authors considered the issue of drug dosages only when a significant dose–response relationship was demonstrated by their analysis. However, there might be several reasons why many AEDs did not demonstrate a dose–response relationship in such an analysis, including lack of statistical power or inappropriate trial design. Moreover, some AEDs were investigated at higher dosages than those approved (e.g., topiramate), whereas others were tested at lower, nonefficacious dosages (e.g., pregabalin 50 mg/day). Because the authors included only AEDs that are currently marketed, it would have been more appropriate to select only dosages that have been approved. Including in the pooled comparisons dosages in excess of the approved range, or inefficacious dosages, may have biased the results of the analysis.

Finally, and most importantly, the primary efficacy comparisons were based on responder rates calculated using the last observation carried forward (LOCF) analysis. When making indirect comparisons of efficacy across drugs, this estimate can be grossly misleading. In fact, LOCF significantly overestimates the true rate of treatment success, because it allows patients who had fewer seizures but withdrew prematurely due to intolerable side effects to be classified as responders. As expected, the degree of overestimation depends on the withdrawal rate (Rheims et al., 2011). Therefore, AEDs tested at doses with poor tolerability profiles (such as topiramate 800 and 1,000 mg/day and oxcarbazepine 2,400 mg/day) show artificially inflated responder rates. By performing a meta-analysis in which we included only responders who completed the trial [still using an intention-to-treat (ITT) denominator], we obtained results that are strikingly different from those presented by the authors. In particular, topiramate, which was associated with one of the highest relative risks (RR) for responder rate using the LOCF method (RR 3.07), showed much poorer efficacy in the completers analysis (RR 2.27), particularly when the analysis was restricted to the approved dose range (RR 1.92), resulting in an efficacy rank inferior to that of several other AEDs (Rheims et al., 2011).
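The arithmetic behind this point can be made concrete with a small sketch. The code below uses entirely hypothetical numbers (not data from any of the cited trials) to show how an LOCF responder rate diverges from a completers rate computed over the same ITT denominator when early withdrawals are counted as responders:

```python
# Hypothetical illustration of how LOCF can inflate responder rates
# relative to a completers analysis with an ITT denominator.
# All numbers below are invented for illustration; they are not trial data.

def responder_rates(n_randomized, responders_locf, responders_completed):
    """Return (LOCF rate, completers-ITT rate) for one treatment arm.

    responders_locf:      patients meeting the response criterion when the
                          last observation is carried forward, including
                          those who withdrew prematurely.
    responders_completed: responders who actually completed the trial.
    Both rates keep every randomized patient in the denominator (ITT).
    """
    locf_rate = responders_locf / n_randomized
    completer_rate = responders_completed / n_randomized
    return locf_rate, completer_rate

# Suppose 100 patients are randomized to a poorly tolerated dose:
# 40 meet the responder criterion under LOCF, but 15 of those 40
# withdrew early because of adverse events.
locf, completer = responder_rates(100, 40, 40 - 15)
print(f"LOCF responder rate:           {locf:.0%}")      # 40%
print(f"Completers-ITT responder rate: {completer:.0%}")  # 25%
```

The gap between the two rates grows with the withdrawal rate among responders, which is why, as noted above, drugs tested at poorly tolerated doses gain the most from an LOCF analysis.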

Overall, we believe that some of the main conclusions reached by the authors, including those related to a greater efficacy of topiramate compared with other AEDs, are not justified and reflect the methodologic flaws discussed earlier. More generally, these methodologic issues hamper the usefulness of indirect comparisons between AEDs, and reinforce the need for improved standardization in the reporting of efficacy outcomes.

Disclosure

Sylvain Rheims has received speaker fees from Pfizer and UCB Pharma. Philippe Ryvlin has received speaker or consultant fees from Pfizer, GSK, UCB Pharma, Eisai, and Bial. Emilio Perucca received research grants from the European Union, the Italian Medicines Agency, the Italian Ministry of Health, and the Italian Ministry for Education, University and Research. He also received speaker’s or consultancy fees and/or research grants from Bial, Eisai, GSK, Johnson and Johnson, Novartis, Pfizer, Sepracor, SK Life Sciences Holdings, Supernus, UCB Pharma, Upsher-Smith, and Vertex.

We confirm that we have read the Journal’s position on issues involved in ethical publication and affirm that this report is consistent with those guidelines.
