To the Editors:

We thank de Vries et al for their interest in our study and for reiterating for readers that their prior analysis of General Practice Research Database (GPRD) data yielded a lower incidence estimate of SLE than our findings (1). An additional source of difference, beyond those they mentioned, is that we standardized for age and they did not. The disparity in estimates underscores the importance of a thorough understanding of the complexities of both the GPRD and SLE when analyzing and interpreting results based on this data set.

An important distinction between these 2 studies is that, as detailed in our article, our group conducted an analysis in accordance with the published methodology of Lewis et al (2) in order to determine the appropriate analysis time window for differentiating between prevalent and incident cases. Based on our empirical findings, we determined that patients who registered with a general practitioner (GP) after the GP contributed data to the GPRD needed a minimum of 1 year of followup in order to be eligible for the analysis. In contrast, Nightingale et al (1) chose to exclude the first 3 years of followup for all patients. They justified the 3-year window on the grounds that the “average duration of remission for SLE patients” is mean ± SD 2.3 ± 1.1 years, based on findings published by Barr et al on SLE disease activity patterns in the Hopkins Lupus Cohort (3). However, the study by Barr et al actually showed that only a minority of SLE person-years (∼16%) were characterized by a “long quiescent” pattern of disease activity, with the remaining person-time classified as “chronic active” or “relapsing–remitting,” patterns that by definition involve disease activity during a given year. The average remission duration therefore applies only to the “long quiescent” fraction of person-time. Moreover, it is unclear how even an accurate summary measure of average remission length would translate into a meaningful time window for differentiating prevalent from incident cases, since the standard of care for SLE requires medical followup during apparently quiescent periods to enable detection of subclinical disease (e.g., renal involvement). Use of a long exclusion period can further introduce bias if patients with followup of >3 years differ systematically from those with shorter followup.

As we discussed in our article, a limitation of the GPRD is that data related to ACR criteria for SLE, including autoantibody profiles, are not uniformly available; that is, the lack of such data should not be confused with a negative result. Also, certain therapeutics (e.g., cyclophosphamide) that are prescribed by hospital consultants rather than GPs are not recorded in the GPRD. Nevertheless, Nightingale et al included ACR criteria, or prescriptions for medications such as cyclophosphamide, as part of their case definition. The application of ACR criteria (assuming it was a valid approach in this setting) versus physician diagnosis could indeed account for differences in incidence estimates, since the classification criteria have limited sensitivity (<85%) when applied to external populations, as opposed to the initial test population (4, 5). For this reason, efforts are underway in the SLE community to develop a set of criteria with better performance.

We have chosen to use standard definitions that can apply to the whole data set. We believe that there is potential bias when criteria, whether ACR or free text, can only be applied to a proportion, and often a small proportion, of the database.

A final point worth recapitulating from our article is that, as external validation of our methodology, we compared our region-specific incidence estimates with those from 2 active surveillance studies based in the same geographic regions, yielding estimates remarkably similar to (and within the confidence intervals of) those of the independent studies. We estimated an age-adjusted incidence (per 100,000) of 3.56 for the West Midlands (similar to the 3.8 published for Birmingham [6]) and 4.37 for Trent (similar to the 4.0 published for Nottingham [7]).

In summary, we believe that the consistent methodology we used is likely to have resulted in more accurate estimates than those proposed by Nightingale et al, as indicated by their external validity.

Emily C. Somers PhD, ScM*, Sara L. Thomas MB, BS, MSc, PhD†, Liam Smeeth MBChB, MRCGP, MSc, PhD†, W. Marieke Schoonen PhD, MSc†, Andrew J. Hall MB, BS, MSc, PhD†, * University of Michigan, Ann Arbor, † London School of Hygiene and Tropical Medicine, London, UK.