In this issue of Arthritis & Rheumatism, Tseng et al present the final results of an important clinical trial that was initially presented at the 2003 Annual Scientific Meeting of the American College of Rheumatology (1, 2). Clinical trials in systemic lupus erythematosus (SLE) are difficult to conduct, not least because recruiting sufficient numbers of subjects is difficult. The best study designs and end point measures are still evolving, and although new funding is available, clinical trials remain inadequately resourced. Tseng et al deserve our admiration and gratitude for their efforts. Their results have relevance for patient management and for experimental therapeutics in SLE.
Their study addresses an important and longstanding clinical question in SLE, the roots of which are found in the history of the discovery of the molecular biology of SLE. From Ehrlich's concept of horror autotoxicus, to the discovery of autoantibodies to nuclear constituents, to their application in the diagnosis and subsetting of phenotypes of SLE, and to the demonstration of autoantibodies and complement products in situ, the essential roles of autoantibodies and complement make a nice story of pathogenesis. This history has stimulated the widespread and often uncritical use of measurements of antibody and complement levels to diagnose and monitor the disease, as well as the idea that these immunologic measures can be used to decide when to initiate, taper, and stop therapy (3, 4). These latter clinical conclusions have persisted despite disquieting observations that autoantibodies may be neither sufficiently specific nor sufficiently sensitive for diagnosis, and may be neither correlated with clinically “active” disease nor predictive of a flare (5–8).
In clinical research there are no perfect studies, and good studies beget new questions. The present study illustrates both principles.
The authors are appropriately circumspect about the generalizability of their findings. It is unclear which patients at the various centers were referred for screening and how representative these subjects are. The data indicate that the patients to whom the study results apply are uncommon. The study accrued subjects over a 5-year period. The authors estimate that 50% of patients were neither eligible nor referred for formal screening for the study. Only 27% of the clinically stable patients were randomized, while nearly half of those who did not experience a serologic flare were censored. Although 40–80% of patients with SLE demonstrate antibodies to double-stranded DNA (anti-dsDNA) at some time during their disease course (7–11), the proportion of these patients who have clinically quiescent disease is uncertain; it is probably a small number of individuals (12, 13).
The landmark study by Bootsma et al (14), upon which Tseng and coworkers' study builds, evaluated all available patients with SLE, but randomized those who had anti-dsDNA by blocks in order to balance the participants according to the presence or absence of a major or minor flare in the previous 2 years and by 2 immunosuppression maintenance regimens (stable treatment with corticosteroid and other immunosuppressive agents or decreasing corticosteroid dosage versus no immunosuppressive agents). The Dutch investigators showed that, compared with conventional treatment implemented after clinical relapse, early treatment of increases in the anti-dsDNA antibody titer with prednisone (30 mg/day over the maintenance dosage to a maximum of 60 mg/day) reduced the incidence of major and minor flares. This first controlled study used only the anti-dsDNA antibody titer as an indication to treat, and its criteria (a ≥25% increase in titer) were less stringent than those of the Tseng study, which used a stricter definition of serologic flare and a smaller dosage of corticosteroid. In addition, Tseng et al studied a more ethnically diverse population, one presumed to have more severe disease than Northern Europeans: the randomized arm of the current study comprised ∼46% Hispanic, 22% African American, and 17% Asian patients.
The clinical dilemmas raised by both the Bootsma and Tseng studies are also relevant to the process by which drugs for the treatment of lupus are approved by the US Food and Drug Administration. One of the most critical barriers to faster drug discovery and evaluation is the absence of validated biosurrogates for clinically significant end points (15, 16). Anti-dsDNA and complement activation products are viewed as promising biomarkers of flares for use in clinical trials, but the evidence is mixed, likely because of the different populations studied, the nature of the study protocol (whether cross-sectional or prospective), and other methodologic differences (15–17).
We suggest that the Tseng study should not be used as evidence that changes in anti-dsDNA and C3a levels are valid biosurrogates of a clinical flare. A different study design would have been required to show this. However, the study does provide a clue that these are not strong biomarkers. Of the 20 patients in the placebo group who experienced a serologic flare, 8 experienced some form of clinical flare, which corresponds to a positive predictive value of 40% for the serologic change to predict flares. (In these crude calculations, we exclude the 21 patients with serologic changes randomized to prednisone, since the intervention may have influenced clinical flare outcome.) Nevertheless, this study advances our knowledge and suggests that the search for SLE biomarkers has not ended.
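The crude positive predictive value calculation above can be sketched in a few lines (a minimal illustration only; the function name is ours, and the counts are those reported in the text — 20 placebo-group patients with a serologic flare, 8 of whom went on to a clinical flare):

```python
def positive_predictive_value(true_positives: int, predicted_positives: int) -> float:
    """PPV = patients whose predicted event occurred / all patients predicted to have the event."""
    return true_positives / predicted_positives

# Of the 20 placebo patients with a serologic flare, 8 experienced a clinical flare.
ppv = positive_predictive_value(true_positives=8, predicted_positives=20)
print(f"PPV = {ppv:.0%}")  # → PPV = 40%
```

The complement of this figure (1 − 0.40 = 0.60) is the source of the estimate, discussed below, that a prediction of impending flare may be wrong 60% of the time.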
The management of SLE balances the potential risks against the benefits. Individualizing from an average experience of groups to a single patient about whom one is trying to make a decision is both an art and a science. For a patient who has had anti-dsDNA antibodies in the past and who has had a recent elevation of anti-dsDNA and C3a levels, we would offer that patient the study by Tseng as well as the study by Bootsma as information to discuss the following options: make no change in the followup routine or treatment; monitor more frequently; or adjust treatment preemptively with the regimen used in either the Tseng or the Bootsma study. (Neither study addresses an alternative option: adding or increasing the dosage of an antimalarial drug or other medications.) Our crude estimates above indicate that prediction of an impending flare may be wrong 60% of the time, potentially exposing the patient to the long-term side effects of unneeded corticosteroids. The preemptive strategy may also not adequately control the disease. When collaborating with their physicians in making decisions about treatment, patients are likely to weigh their own experiences with a severe flare, the time since their last flare, and the likelihood of serious side effects and potential toxicity of treatment (18).
A similar discussion may also apply to devising a monitoring strategy for patients who have anti-dsDNA antibodies, whether or not the titer has recently increased. The patient's preference will be the primary determinant of what to do. If, when informed of these strategies, a patient would choose to be treated, we would monitor and respond; if the patient would choose not to accept treatment regardless of the serologic findings, we would not monitor.