Members of the ACR Ad Hoc Committee on SLE Response Criteria are as follows: Matthew H. Liang, MD, MPH, Chair, Paul Fortin, MD, MPH, Co-Chair, Matthias Schneider, MD, Co-Chair, Michal Abrahamowicz, PhD, Co-Chair, Graciela S. Alarcón, MD, MPH, Stefano Bombardieri, MD, James Balow, MD, Elizabeth Benito-Garcia, MD, MPH, Heike Bischoff-Ferrari, MD, MPH, Jill Buyon, MD, Gamal Chehab, MD, Karen Costenbader, MD, MPH, Leslie Crofford, MD (American College of Rheumatology Committee on Research Liaison), Paola de Pablo, MD, MPH, John M. Esdaile, MD, MPH, Axel Finckh, MD, MS, Rebecca Fischer-Betz, MD, Dafna Gladman, MD, Caroline Gordon, MD, Gabor Illei, MD, David Isenberg, MD, Cristoph Iking-Konert, MD, Kent Johnson, MD, Joachim Kalden, MD, Munther Khamashta, MD, PhD, Takao Koike, MD, Michael Lockshin, MD, MPH, Susan Manzi, MD, MPH, Joseph McCune, MD, Alain Meyrier, MD, Jamal Mikdashi, MD, Andrew Moore, MD, Marta Mosca, MD, Michelle Petri, MD, MPH, Charlotte Phillips, RN, MPH, Neal Roberts, Jr., MD, Peter Schur, MD, Josef Smolen, MD, E. William St.Clair, MD, and Vibeke Strand, MD.
The American College of Rheumatology response criteria for systemic lupus erythematosus clinical trials: Measures of overall disease activity
Article first published online: 4 NOV 2004
Copyright © 2004 by the American College of Rheumatology
Arthritis & Rheumatism
Volume 50, Issue 11, pages 3418–3426, November 2004
How to Cite
American College of Rheumatology Ad Hoc Committee on Systemic Lupus Erythematosus Response Criteria (2004), The American College of Rheumatology response criteria for systemic lupus erythematosus clinical trials: Measures of overall disease activity. Arthritis & Rheumatism, 50: 3418–3426. doi: 10.1002/art.20628
- Issue published online: 4 NOV 2004
- Manuscript Accepted: 10 AUG 2004
- Manuscript Received: 28 MAR 2003
- American College of Rheumatology, a Kirkland Scholar Award
- SLE Foundation of New York
- Alliance for Lupus Research
- Lupus Erythematodes Selbsthilfegemeinschaft e.V. Germany
- NIH. Grant Numbers: AR-47782, R13-AR-47584-01
- Robert B. Brigham Arthritis and Musculoskeletal Diseases Clinical Research Center
- Heinrich-Heine-University in Düsseldorf
- Arthritis Research Centre of Canada
- Arthritis Centre of Excellence
- Arthritis and Autoimmune Disease Centre at The University Health Network, University of Toronto
- Office of the Director, National Institute of Arthritis and Musculoskeletal and Skin Diseases
- Center for Advanced Methodological Support for Innovative SLE Trials (ASSIST)
Improved standards for the evaluation of therapeutic interventions in systemic lupus erythematosus (SLE) are needed. The purpose of this study by a committee of the American College of Rheumatology was to define clinically meaningful improvement, no change, or worsening in 6 existing clinical measures of SLE disease activity. This represents an important step in a disease in which some organ symptoms get better and others get worse. It is intended to help investigators develop sample size estimates based on meaningful effect sizes and to gauge the clinical relevance of any observed change in disease activity.
Medical records from 310 patients drawn from 3 sources were abstracted into a standard format. Each vignette included clinical and laboratory data obtained during 2–3 visits. Ratings on the following 6 instruments were obtained for the same patients during the visit or retrospectively: the British Isles Lupus Assessment Group (BILAG), the Systemic Lupus Erythematosus Disease Activity Index (SLEDAI), the revised Systemic Lupus Activity Measure (SLAM-R), the European Consensus Lupus Activity Measure (ECLAM), the Safety of Estrogens in Lupus Erythematosus: National Assessment (SELENA)–SLEDAI, and the Responder Index for Lupus Erythematosus (RIFLE). From this pool of vignettes, 5 common vignettes and 10 randomly selected vignettes were rated through a secure Web site by 88 international experts on SLE. The experts, who were blinded to the activity measure scores, were asked to rate each patient's clinical condition as worsened, improved, or unchanged relative to the previous visit. These ratings were transformed by statistical procedures into performance characteristic curves that related a change on a particular SLE activity measure to the physicians' agreement on whether that patient had worsened, improved, or remained the same clinically. These were discussed by the committee members, who were blinded to the actual instrument used. The committee then voted on what level of expert agreement would be used to determine clinically meaningful change.
The physician ratings on the 5 common vignettes revealed considerable variation in their clinical appraisals. Overall, the 6 SLE activity measures showed excellent separation of clinical conditions as being worsened, improved, or the same. The committee voted to take 70% agreement by physicians as the point on the performance characteristic curves at which meaningful change in a score could be identified. For each instrument, we computed the units of change required to indicate improvement or worsening.
To our knowledge, these are the first response criteria in any disease where a clinically relevant change has been determined a priori and mapped to standardized measures. This criterion should aid the clinical evaluation of new therapies, improve comparability between trials, and facilitate innovative trial designs.
The treatment of systemic lupus erythematosus (SLE) has improved dramatically, but the morbidity, long-term course, treatment-associated morbidity, and refractory subsets of SLE still impose a considerable toll on patients (1). With the remarkable advances in biology and powerful new technologies directed toward identifying targets, a burgeoning number of new therapeutic possibilities have appeared. However, the means by which these possibilities will be evaluated for their impact on target organ systems and the patient's health and well-being is in its infancy and lags behind drug development.
The conduct of clinical trials of therapeutic agents in SLE is challenged by the relatively small numbers of patients who are eligible for such trials, the heterogeneity of the disease, and the lack of reliable markers of disease activity and organ damage (2). In addition, clinical response continues to be defined on ad hoc and post hoc bases. Standardized criteria for SLE trials would have enormous advantages for testing new agents (3, 4). They would provide a common basis for comparing treatment options and would eliminate post hoc analyses of effects. Standardizing response criteria would permit both qualitative and quantitative (meta-analysis) syntheses of different clinical trials. Criteria would also permit the use of innovative clinical trial designs, which might be more efficient than the standard randomized clinical trials.
Defining a minimally important clinical difference is critical for the conduct of rigorous and interpretable clinical trials. First, lupus is a disease in which the activity can improve in some organ systems and worsen in others. Physicians evaluating the same patient may differ in their assessment of the overall disease activity; this is demonstrated graphically in the present study. Use of composite measures of overall activity permits the calculation of a summary score, and defining a minimally important clinical difference for overall activity and for individual organ systems reduces the measurement error. Second, many trials in SLE have inadequate numbers of subjects to demonstrate differences or to be definitive (“adequate statistical power”), and deciding on the number of subjects that are needed requires an estimate of an important effect size. Finally, a definition of minimally important differences allows one to interpret the clinical relevance of any observed difference in disease activity.
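The link between a predefined response criterion and trial size can be sketched with a standard two-proportion power calculation. All rates below are hypothetical and serve only to illustrate the arithmetic, not any recommendation of this report:

```python
import math
from statistics import NormalDist

def n_per_arm(p_control, p_treatment, alpha=0.05, power=0.80):
    """Approximate sample size per arm for comparing two responder proportions
    (normal-approximation formula). Here a 'responder' is a patient whose
    change in disease activity score meets a predefined response criterion."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_b = NormalDist().inv_cdf(power)          # desired power
    p_bar = (p_control + p_treatment) / 2
    numerator = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * math.sqrt(p_control * (1 - p_control)
                                   + p_treatment * (1 - p_treatment))) ** 2
    return math.ceil(numerator / (p_control - p_treatment) ** 2)

# Hypothetical responder rates: 30% on standard care vs. 50% on the new agent.
print(n_per_arm(0.30, 0.50))  # 93 patients per arm
```

Changing the assumed effect size changes the required sample size sharply, which is why an a priori, clinically meaningful difference matters for planning.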
The American College of Rheumatology (ACR) charged the Ad Hoc Committee on SLE Response Criteria with the development of criteria by which interventions can be evaluated in SLE. This work builds on the work of the Systemic Lupus International Collaborating Clinics (SLICC) and Outcome Measures in Rheumatology (OMERACT) groups, who recommended that all clinical trials in SLE include measures of cumulative organ damage, SLE disease activity, health-related quality of life, and adverse events (5). This report details the empirical work that led to the definition of a minimally important clinical difference for 6 existing instruments that have documented metric properties or have been used in clinical trials. A report on suggested criteria for evaluating the steroid-sparing ability of interventions in SLE appears elsewhere in this issue of Arthritis & Rheumatism (6). Articles describing our work on response criteria for specific target organs are in preparation.
The committee consisted of clinicians and trials methodologists from the ACR, SLICC, European League Against Rheumatism, Pan American League of Associations for Rheumatology, International League of Associations for Rheumatology, Food and Drug Administration, and OMERACT. It met at Schloss Mickeln, Heinrich-Heine-University, Düsseldorf, Germany, on May 9–12, 2002. No industry funding was accepted. All participants signed ACR Conflict of Interest statements and attested that they met the standards of ethical conduct as delineated by the ACR. Primary data analyses and all interpretations were performed blindly with regard to the identity of the individual patients and the specific instruments. After a period of open comment from persons responding to the advertisement on the ACR Web site and from consultants who were actively solicited (see Acknowledgments), this report was reviewed by the Committee on Research and endorsed by the Board of Directors of the ACR.
Identification of expert clinicians in SLE.
The committee reviewed the results of an international survey in which expert clinicians evaluated the case histories of actual SLE patients. A list of 338 “SLE expert clinicians” was assembled by inspecting the membership rolls of the SLICC, the Editorial Board of the journal Lupus, speakers and authors on SLE presenting at ACR meetings during 1997–1999, attendees of the Fifth International Conference on SLE, and the 1998 ACR Membership Directory.
To quantify the relationship between the SLE disease activity measures and the physicians' judgment of clinical changes, clinicians were instructed to evaluate a large number of actual cases and to decide whether the patient had improved, remained the same, or had worsened. These clinicians were blinded to the scores on the SLE disease activity measures. The level of expert agreement on a particular change was set by the committee (see Results). This allowed us to map a change in score on a given instrument to clinically relevant improvement, worsening, or no change.
The persons who organized the premeeting study did not participate in the study itself. Vignettes for the survey were abstracted by one physician (Andrew Moore) from the medical records of 310 patients. The case histories came from the Montreal General Hospital (n = 86), a multicenter trial of plasmapheresis and cyclophosphamide (n = 93) (7), and from European patients in a study of an SLE activity measure (n = 131) (8).
There were no data available from a clinical trial or an observational cohort in which all 6 measures were scored in real time. Therefore, to carry out the exercise, we used cases scored by different raters and, for some measures, scored retrospectively. The British Isles Lupus Assessment Group (BILAG) (9) and European Consensus Lupus Activity Measure (ECLAM) (8) scores were obtained prospectively in vignettes 180–310 and retrospectively in vignettes 1–179. The Systemic Lupus Erythematosus Disease Activity Index (SLEDAI) (10) and revised Systemic Lupus Activity Measure (SLAM-R) (11, 12) were rated prospectively by the physicians participating in each study. For all vignettes, the Safety of Estrogens in Lupus Erythematosus: National Assessment (SELENA)–SLEDAI (13) was rated retrospectively by Dr. Jill Buyon, the Responder Index for Lupus Erythematosus (RIFLE) (14) by Dr. Michelle Petri, and the BILAG by Drs. Sonya Abraham and David Isenberg. The ECLAM was rated retrospectively by Dr. Marta Mosca for vignettes 1–179. The BILAG and RIFLE are ordinal transition scales, and changes on these instruments had to be transformed into continuous data for the analyses.
Each vignette provided demographic information, history, symptoms, and/or physical findings. Unavailable information on the history, physical examination, or laboratory results was inferred from data on the SLE activity subscales. After their construction, each vignette was rescored with the SLAM-R and the SLEDAI to ensure an accurate backward and forward transformation. Vignettes 1–194 included data from the baseline, 2-month, and 6-month encounters; the rest had data from only the baseline and 2-month encounters.
Reproducibility of ratings.
To evaluate the reproducibility of the experts' ratings, 5 common vignettes were assigned to all survey respondents. To ensure a sufficient number of responses across a range of SLE activity, we divided the vignettes by 5 intervals of change in SLE activity between baseline and the 2-month evaluation, and then we randomly sampled vignettes from each group (Figure 1). Ten additional vignettes were sampled from the rest of the vignettes for each participant, and the order of presentation of the 15 vignettes was also randomized. Table 1 gives an example of a vignette.
| Vignette no. | Transition | Physician responses, % |  |  | Total no. |
|---|---|---|---|---|---|
| 54 | M0 to M2 | 6.7 | 13.3 | 80.0 | 75 |
| 54 | M2 to M6 | 6.7 | 45.3 | 48.0 | 75 |
| 89 | M0 to M2 | 1.4 | 1.4 | 97.3 | 74 |
| 89 | M2 to M6 | 14.9 | 23.0 | 62.2 | 74 |
| 109 | M0 to M2 | 12.0 | 28.8 | 58.9 | 73 |
| 109 | M2 to M6 | 11.0 | 26.0 | 63.0 | 73 |
| 137 | M0 to M2 | 37.8 | 51.4 | 10.8 | 74 |
| 137 | M2 to M6 | 6.8 | 58.1 | 35.1 | 74 |
| 167 | M0 to M2 | 45.8 | 22.2 | 31.9 | 72 |
| 167 | M2 to M6 | 5.6 | 6.9 | 87.5 | 72 |
For the Internet survey, a secure relational database was constructed. Patient data were presented chronologically, and the response could not be changed. The vignettes were presented without scores from the SLE disease activity measures.
Goals of statistical procedures.
The statistical techniques detailed in Appendix A essentially mapped a clinician's appraisal of whether there had been a meaningful change in scores on the disease activity measures. Since the physicians' assessments were often in disagreement, it is more accurate to describe the probability of agreement that a given patient had improved, experienced no change, or worsened. Statistical procedures and computer simulations were used to produce performance characteristic curves relating this probability to scores on the activity measures, as well as to estimate confidence intervals, smooth curves, and adjust for the number of vignettes that were rated by the experts.
There are no conventions for setting the level of agreement between physicians' ratings to establish quantitative categories of better, no change, and worse. The committee therefore voted on the level of agreement before the data were reviewed. Using this level of agreement, the performance characteristic curves were inspected for corresponding scores on a given disease activity measure.
Characteristics of the survey respondents.
From the initial list of 338 physicians, e-mail addresses were identified for 255 (75.4%). They were invited to participate via an ACR-issued e-mail that explained the project and gave them a user name and password. In all, 130 experts logged on to the survey between February 29 and April 4, 2000; 116 of them (78 men and 38 women; 45.5% of the 255 persons contacted by e-mail) answered the initial demographic queries. Of the nonresponders who could be contacted, 12 explained their reasons for not participating, and 19 had technical problems that were then corrected. The respondents who completed all the vignettes and gave permission to be acknowledged are listed in the Acknowledgments.
Of the 116 participants, 108 were from a teaching institution, 96 had 10 or more years of experience in managing lupus, and 24 countries were represented. The mean age of the participants was 46.3 years, compared with a mean age of 50.2 years for the nonparticipants.
A total of 88 SLE experts completed at least 1 patient vignette, and 68 of them (77%) completed all 15 vignettes that were assigned. Twenty of the experts completed only a portion of their assigned vignettes, with the number of responses per physician varying between 1 and 11. The survey yielded a total of 1,090 responses. These responses covered 232 different vignettes.
The response patterns of the 20 experts who did not complete all of the survey were screened for evidence of nonrandom, biased selection. Such a bias could occur if, for example, a physician evaluated “easier” vignettes and omitted more difficult ones. Eighteen physicians who evaluated a part of the vignettes responded in a pattern consistent with random selection; that is, they completed the first vignettes in the prescribed sequence and omitted the last vignettes. Given that the order of the vignettes in the sequence was randomized, there was no indication of a bias in these responses, and the data were retained for the final analyses. Two of the 116 physicians appeared to selectively rate vignettes, and their responses were not included.
Interphysician variation in assessments of 5 common vignettes.
All 5 common vignettes were evaluated by 68 physicians. Table 1 shows the results, which demonstrate that for some vignettes, there was impressive consistency, whereas for other vignettes, there was substantial interphysician variation. The committee discussed these vignettes in detail. Although there may have been variation based on occasional misunderstanding of the vignettes, the likely reason for the discrepancies was that clinicians differed in their weighting of manifestations, especially in circumstances where some manifestations improved and others worsened.
Relationship between changes in SLE disease activity scores and experts' assessments of overall change.
The analyses were based on 767 responses covering the transition from baseline to 2 months and on 529 responses covering the transition from 2 months to 6 months. Table 2 shows the distributions of the experts' responses used in the analyses. The relatively lower frequency of physicians reporting an increase in activity implies that the results are more precise with respect to estimating the probability of an improvement than with respect to estimating the probability of a worsening.
| Instrument | Transition | Physician responses, no. (%) |  |  | Total no. |
|---|---|---|---|---|---|
| BILAG | M0 to M2 | 179 (23.7) | 139 (18.4) | 437 (57.9) | 755 |
| BILAG | M2 to M6 | 96 (18.2) | 144 (27.2) | 289 (54.6) | 529 |
| SLEDAI | M0 to M2 | 180 (23.5) | 140 (18.3) | 447 (58.3) | 767 |
| SLEDAI | M2 to M6 | 96 (18.2) | 144 (27.2) | 289 (54.6) | 529 |
| SLAM-R | M0 to M2 | 180 (23.5) | 140 (18.3) | 447 (58.3) | 767 |
| SLAM-R | M2 to M6 | 96 (18.2) | 144 (27.2) | 289 (54.6) | 529 |
| ECLAM | M0 to M2 | 180 (23.5) | 140 (18.3) | 447 (58.3) | 767 |
| ECLAM | M2 to M6 | 95 (18.3) | 139 (26.7) | 286 (55.0) | 520 |
| SELENA–SLEDAI | M0 to M2 | 164 (25.7) | 114 (17.8) | 361 (56.5) | 639 |
| SELENA–SLEDAI | M2 to M6 | 93 (17.7) | 144 (27.4) | 289 (54.9) | 526 |
| RIFLE | M0 to M2 | 179 (23.6) | 138 (18.2) | 442 (58.2) | 759 |
| RIFLE | M2 to M6 | 95 (18.0) | 143 (27.1) | 289 (54.8) | 527 |
Figure 1 depicts hypothetical data on the performance characteristics of a given SLE activity measure. It shows the relationship between a change in disease activity score and the probability that the physicians would judge this as “improved,” “no change,” or “worsened,” with confidence intervals at selected points. The probability of “improved” is very low in the right part of the graph, where there is an increased activity score, and this probability increases rapidly when the activity score decreases. For example, a 4-point decrease in the score (indicated by the vertical dotted line) corresponds to an ∼82% chance that an expert will judge the patient as having improved, with a 12% and a 6% probability of judgments of no change and worsened, respectively. Thus, for the measure, a decrease of 4 or more points gives high confidence that the patient improved according to the overall assessment by the expert physicians. Indeed, the 95% confidence interval of 0.74–0.90 confirms that the probability of “improvement” at ΔX = −4 is at least 74%, and indicates satisfactory precision of the curves.
Figure 2 depicts the actual data for each of the instruments between baseline and 2 months. Similar plots were generated for the period between 2 months and 6 months (results not shown). These curves were used to determine the change in score on each SLE instrument that corresponded to ∼70% agreement for “better,” “no change,” and “worse.” If another level of agreement were to be chosen, the change for these categories could easily be determined from the curves.
In the absence of any standard criterion, the committee voted that if 70% of the respondents agreed that a patient's clinical condition had improved, worsened, or stayed the same, it would constitute significant agreement. Using the 70% criterion, the change in any given instrument score corresponding to a clinically important improvement or worsening was then computed from the performance characteristic curves (Table 3). All 6 instruments we studied showed good to excellent discriminatory properties and separated patients according to whether their condition had improved, worsened, or remained the same.
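Once a performance characteristic curve is fitted, reading off the cutoff is a simple lookup. The sketch below assumes a tabulated curve with entirely hypothetical probabilities, in the spirit of the worked example of Figure 1:

```python
# Hypothetical tabulated curve: change in activity score -> probability that
# an expert rates the patient "improved". Values are illustrative only; the
# actual curves come from the committee's fitted models.
curve = {0: 0.05, -1: 0.15, -2: 0.35, -3: 0.55, -4: 0.72, -5: 0.84, -6: 0.91}

def improvement_cutoff(curve, agreement=0.70):
    """Scan from the smallest to the largest decrease in score and return the
    first change whose probability of an 'improved' rating reaches the
    agreement criterion (70% by the committee's vote)."""
    for delta in sorted(curve, reverse=True):  # 0, -1, -2, ...
        if curve[delta] >= agreement:
            return delta
    return None  # criterion never reached within the tabulated range

print(improvement_cutoff(curve))  # -4 with these hypothetical values
```

The same lookup with a different `agreement` argument shows how alternative a priori cut points would translate into different score changes.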
SLE is a complex disease that is clinically challenging to evaluate, and it has a pathobiology that is incompletely understood. That notwithstanding, we were able to derive empirically determined changes in overall measures of disease activity that constitute a minimally important improvement or worsening (Table 4). We believe this is the first time in SLE or any disease that a clinically meaningful difference has been defined first and then mapped to quantitative clinical scales. Having standardized end points, even though they are imperfect, will provide practical advantages to the field as well as to lupus patients themselves. The response criterion should be tested on primary data from appropriate randomized clinical trials.
If alternative cut points are to be used, they need to be established a priori. All 6 of the disease activity measures we studied demonstrated excellent performance characteristics in separating responses, which underscores their usefulness in quantitative studies.
In ∼10% of the patients in our data set, an organ manifestation improved while others worsened. This lends support to the use of global indices and, perhaps, explains the variation in the physicians' overall appraisals of patients that was observed in the exercise. This observation also undermines the assumption that there is a single molecular target in the pathogenesis of SLE.
There are some caveats, however. For practical reasons, we had to use abstracted vignettes. Both the abstracted vignettes and the information from clinical notes represent a series of interpretations and judgments about the actual patient. Nevertheless, all participants were presented with the same information, and neither the results nor the response criteria should have been affected.
Also, by necessity, the scoring of the SLE measures we evaluated was done by different physicians. Some were done by physicians who were involved with the actual care of the patient represented in the vignette, and others were done retrospectively, using only the information contained in the vignettes. Again, this may have introduced some systematic error, but all participants saw the same information, and so, this should not have affected the results.
This exercise attempted to capture a physician's assessment of a “meaningful change.” This judgment of “meaningful” may be discordant with that of the patient (15, 16), whose appraisal may be driven by their dominant symptoms (in contrast to the most serious pathophysiology), by their priorities, and/or by the severity of their disease at baseline. The discordance between the physician's assessment and the patient's assessment of meaningful change, particularly with regard to patient-reported symptoms, should be studied. Investigators may wish to express the change in disease activity as both the absolute change and the percentage change, and to use transition questions to capture “meaningful change” in the individual patient (e.g., Have you experienced a change in your symptoms? Has this change made a difference to you? How much of a difference has this been?) (17).
Finally, the change in overall disease activity corresponding to a worsening of the patient's clinical condition would likely be larger for patients with high levels of disease activity at baseline and smaller for patients with lower levels of disease activity at baseline. Future research might explore whether examining the percentage change rather than the absolute change, or using different cutoff points for patients entering a trial with lower baseline levels of disease activity, would permit a more accurate depiction.
In summary, the committee recommended that controlled trials of therapy in SLE should use organ-specific measures, with response criteria that are defined a priori, and valid, reliable composite instruments for evaluating overall disease activity. Although composite indices reduce sample size requirements and have advantages for statistical analyses, they can, by their nature, mask worsening and responding organ systems. This makes it important to present both the overall activity and the activity in individual organs.
The 6 instruments we examined demonstrated discriminatory properties that were more than sufficient for use in clinical trials. It is likely that other validated measures would be useful as well. Investigators are urged to use one of the instruments and to calculate a sample size using the response criteria (or minimally important difference) for that particular disease activity measure. By implication, patients need to have a clinically important and sufficient level of disease activity prior to treatment in order to demonstrate a significant change (18). The choice of which activity measure to use should be based on the specific study, costs, convenience, and other factors beyond the scope of this analysis.
In addition to measuring overall disease activity, individualized organ-specific measures should be used. A priori response criteria for a specific organ/manifestation should be defined. A process was started in Düsseldorf, and the work will be the subject of future publications. There is also a need to test these criteria using actual clinical data sets. Identifying enough patients with specific organ involvement will be difficult, which means that, for all practical purposes in the foreseeable future, consensus will have to suffice.
Additional recommendations of the committee are as follows:
1. The use of an independent end-points committee whose members are blinded to treatment status could be valuable for adjudicating the status of patients and for ensuring the internal validity of a trial.
2. Procedures for ensuring the reliability and accuracy of the objective data (e.g., urinary sediment) and the subjective data collected from the subjects or clinician assessors (e.g., overall disease activity measures) in a study are an essential part of ensuring precision. The results of reliability tests that are performed during the trial should be reported.
3. A strong program of research on the identification and testing of biologic, imaging, and clinical and laboratory markers of activity and organ damage, as well as of disease activity that leads to long-term organ damage, is needed.
4. The published reporting standards for clinical trials (19) need to be supplemented by additional information in SLE trials in order to improve the quality and interpretation of the findings.
All science, in one sense, is about measurement, but not all measurement is science. Current measures of clinical phenomena are simply measures, nothing more, until the cause(s) of SLE and its subsets are elucidated. The committee acknowledges that these recommendations are but a beginning to what, in the final analysis, must be judged by new data and by their usefulness in furthering the treatment-discovery process and improving patient outcomes.
We gratefully acknowledge the invaluable contributions of Erika Chang, MSc, Elizabeth Concepcion, Kaleena Scamman, Mary Scamman, Victoria Gall, RPT, Jessica Tullar, Jennifer Akerblom, Connie Herndon, Sonya Abraham, and Roxane duBerger. Amy Miller coordinated and supported the meeting in Düsseldorf. We are indebted to Dr. Jeffrey Siegel, who attended the Düsseldorf meeting and commented on earlier drafts of the manuscript.
The following physicians participated in the Web-based survey: Sang-Cheol Bae, Gilles Boire, Larry Brent, Frank Buttgereit, Jill Buyon, Richard Cervera, Alf Cividino, Leslie Crofford, John Davis, Michal De Bandt, Raphael DeHoratius, R. H. W. Derksen, Pao-Hssii Feng, Barri Fessler, Alan Friedman, Azzudin Gharavi, Gary Gilkeson, Winfried Graninger, E. Gromnica-Ihle, Hiroshi Hashimoto, Marc Hochberg, Frederic Houssiau, Gabor Illei, Mariana Kaplan, Elizabeth Karlson, John Klippel, Masataka Kuwana, Michael Lockshin, Klaus Machold, Walter Maksymowych, Bernhard Manger, Thomas Medsger, Yair Molad, James Oates, Chaim Putterman, Rosalind Ramsey-Goldman, Morris Reichlin, John Reveille, Jane Salmon, Emilia Inoue Sato, Johann Schroeder, Robert Shmerling, Yeong Wook Song, Christof Specker, Gunnar Sturfelt, Deborah Symmons, Tsutomu Takeuchi, L. B. A. van de Putte, Carlos Vasconcelos, Asad Zoma, and Michel Zummer.
We are indebted to the committee's consultants, who reviewed earlier drafts of the manuscript. Their input sharpened the work considerably. They are Mee Leng Boey, Dimitrios Boumpas, Richard Brasington, Deh Ming Chang, Jefferson Doyle, Vern Farewell, Ellen Ginzler, Bevra Hahn, Jie Huang, Elizabeth Karlson, C. S. Lau, Joan Merrill, Ola Nived, Stanley R. Pillemer, Theresa Podrebarac, Janet Pope, Rosalind Ramsey-Goldman, Kristian Steinsson, Alan Tyndall, Dan Wallace, Michael Ward, and David Wofsy.
- 8. The European Consensus Study Group for Disease Activity in SLE. Disease activity in systemic lupus erythematosus: report of the Consensus Study Group of the European Workshop for Rheumatology Research. III. Development of a computerised clinical chart and its application to the comparison of different indices of disease activity. Clin Exp Rheumatol 1992;10:549–54.
- 14. RIFLE: Responder Index for Lupus Erythematosus [abstract]. Arthritis Rheum 2000;43:S244.
- 15. LUMINA Study Group. Systemic lupus erythematosus in three ethnic groups. XI. Sources of discrepancy in perception of disease activity: a comparison of physician and patient visual analog scale scores. Arthritis Rheum 2002;47:408–13.
- 17. Measuring clinically important changes with patient-oriented questionnaires. Med Care 2002;40 Suppl 4:II45–51.
- 23. Generalized additive models. London: Chapman & Hall; 1990.
The mean number of responses for vignettes other than the 5 common vignettes was ∼3. To minimize the excessive influence of the 5 common vignettes on the analyses, we reduced the number of their responses used in the analyses to 9, which corresponds to the 95th percentile of the distribution of the number of responses available for the other vignettes. This was achieved by random sampling of 9 of the 75 responses available for a given common vignette, with sampling performed independently for each of the 5 vignettes. The 9 responses selected for a given vignette were then retained for the analyses.
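The rebalancing step described above amounts to independent simple random sampling without replacement per vignette. A minimal sketch, with illustrative response identifiers in place of the actual survey records:

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

# Hypothetical stand-in for the survey data: each of the 5 common vignettes
# accumulated ~75 responses (identifiers here are purely illustrative).
common_responses = {v: [f"response_{v}_{i}" for i in range(75)]
                    for v in range(1, 6)}

# Draw 9 responses per common vignette, sampled independently for each
# vignette, matching the 95th percentile of responses to the other vignettes.
retained = {v: random.sample(resps, 9)
            for v, resps in common_responses.items()}
```

Sampling without replacement (`random.sample`) guarantees that the 9 retained responses per vignette are distinct actual responses, as the procedure requires.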
The main analyses determined the relationship between the change in the score on each of the 6 SLE disease activity measures and the probability that an expert would assess the patient's overall SLE activity as 1) improved (less activity), 2) unchanged, or 3) worse (more activity). Given the relatively low frequencies of the extreme responses of “much better” and “much worse,” we pooled “much better” with “better,” and we pooled “much worse” with “worse.”
The analyses recognized several sources of variation. First, in SLE, different patterns of changes in organ-specific symptoms can yield the same disease activity score, and the physicians varied in their assessments of whether the patient had responded, stayed the same, or worsened. There was also variation in the assessment of the same vignette, and therefore, the “average” probabilities (i.e., the probability that applies to the responses of a randomly selected “average” physician for a randomly selected “average” patient with a given change in score [18, 20]) were estimated. The modeling also ensured that for any change in score, the estimated probabilities of the 3 responses (i.e., worsened, no change, improved) had to sum to 1.0.
To meet these requirements, a computationally intensive approach that combined different nonparametric methods was used. The approach was based on a modified polytomous regression model using flexible regression splines (21, 22) and generalized additive models (23). The final stage of the analyses assessed the precision of the probability curves and estimated confidence intervals around the point estimates. The intervals were also adjusted for sources of interdependent observations: 1) the same physician assessed several vignettes, and 2) the same vignette was assessed by several physicians. To account for these, we used a modified bootstrap approach (24), which allowed for direct modeling of the sources of variation by repeated resampling of the original data. The 95% pointwise confidence intervals reported for a given change in score were based on the 2.5th and 97.5th percentiles of the empirical distribution of the 1,000 corresponding bootstrap-based estimates of the probability of a given response (24). Further details on the statistical approaches are available from Dr. Michal Abrahamowicz.
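As a rough illustration of the percentile-bootstrap step, and not the committee's actual procedure (which also modeled vignette-level clustering), a one-way cluster bootstrap over physicians on simulated data might look like:

```python
import random

random.seed(2004)

# Simulated stand-in for the survey data (all values hypothetical): each
# physician contributes several 0/1 ratings, where 1 means the expert judged
# the patient "improved" at some fixed change in activity score.
n_phys, per_phys, p_true = 30, 10, 0.7
data = {p: [1 if random.random() < p_true else 0 for _ in range(per_phys)]
        for p in range(n_phys)}

def prop(groups):
    # Proportion of "improved" ratings pooled over the selected clusters.
    flat = [x for g in groups for x in g]
    return sum(flat) / len(flat)

# Cluster bootstrap: resample physicians with replacement, keeping each
# physician's ratings together so within-physician correlation is respected.
boot = sorted(
    prop([data[p] for p in random.choices(range(n_phys), k=n_phys)])
    for _ in range(1000)
)
ci_low, ci_high = boot[25], boot[975]  # percentile-based 95% interval
```

Resampling whole clusters, rather than individual ratings, is what keeps the interval honest when the same physician's ratings are correlated; the two-way adjustment described in the text extends this idea to vignettes as well.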