Objective. To identify characteristics that predict a valid rheumatoid arthritis (RA) or juvenile idiopathic arthritis (JIA) diagnosis among RA- and JIA-coded individuals in the General Practice Research Database (GPRD), and to assess limitations of this type of diagnostic validation.
Methods. Four RA and 2 JIA diagnostic groups were created with differing strengths of evidence of RA/JIA (Group 1 = strongest evidence), based on RA/JIA medical codes. Individuals were sampled from each group and clinical and prescription data were extracted from anonymized hospital/practice correspondence and electronic records. American College of Rheumatology and International League of Associations for Rheumatology diagnostic criteria were used to validate diagnoses. A data-derived diagnostic algorithm that maximized sensitivity and specificity was identified using logistic regression.
Results. Among 223 RA-coded individuals, the diagnostic algorithm classified individuals as having RA if they had either an appropriate GPRD disease-modifying antirheumatic drug prescription or all 3 of the following GPRD characteristics: >1 RA code during followup, RA diagnostic Group 1 or 2, and no later alternative diagnostic code. This algorithm had >80% sensitivity and specificity when applied to a test data set. Among 101 JIA-coded individuals, the strongest predictor of a valid diagnosis was a Group 1 diagnostic code (>90% sensitivity and specificity).
Conclusion. Validity of an RA diagnosis among RA-coded GPRD individuals appears high for patients with specific characteristics. The findings are important for both interpreting results of published GPRD studies and identifying RA/JIA patients for future GPRD-based research. However, several limitations were identified, and further debate is needed on how best to validate chronic disease diagnoses in the GPRD.
The General Practice Research Database (GPRD) is an electronic database of primary care medical records for approximately 5% of the UK population. In addition to its large size (46 million person-years of anonymized data), its strengths include provision of comprehensive longitudinal prescribing and diagnostic information for all patients registered with participating practices, and generalizability: more than 98% of the UK population is registered with a general practitioner (GP), and GPRD practices are broadly representative of all practices in the UK (1). Patient data in the database include outcomes of practice consultations, hospitalizations and specialist referrals, key test results, and prescriptions. Practices are included in the GPRD only after they meet a range of data quality criteria and undergo ongoing evaluations (2).
The GPRD has proved invaluable for studying relatively uncommon rheumatic conditions, providing a statistically powerful data source that complements smaller, more expensive population-based studies. The database is used to investigate the epidemiology, natural history, and treatments for a range of rheumatic diseases (3–9). However, data are sparse on the diagnostic validity of arthritic disorders recorded in general practice.
Data in the GPRD are coded. Therefore, validating GPRD diagnoses first entails compiling a list of relevant medical codes, with or without disease-appropriate therapeutic codes. Patients with these codes are sampled and the diagnosis is validated against a gold standard, such as evidence that a hospital specialist made the diagnosis or that specific diagnostic procedures were performed. External gold standard data are usually purchased from the practices where the patients are registered; questionnaires can be sent to GPs to ascertain specific diagnostic details, or copies of anonymized hospital correspondence and practice notes can be acquired. Obtaining external data is expensive; therefore, many validations have small sample sizes. Also, some validations confine patient sampling to highly selected practices and patients.
We carried out a large (n = 400) external validation exercise of GPRD diagnoses for 2 specific rheumatic disorders: rheumatoid arthritis (RA) and juvenile idiopathic arthritis (JIA). Our aims were to 1) select patients with medical codes that represented the range of RA/JIA codes in the GPRD, 2) identify GPRD characteristics among individuals with these codes that predicted a valid RA/JIA diagnosis, and 3) use information on patient selection and practice participation to assess the limitations of this type of validation in the GPRD.
MATERIALS AND METHODS
Categorizing RA and JIA patients.
All relevant medical RA and JIA codes were identified from the GPRD coding dictionary by 3 of the authors (AJH, SLT, LS) and verified by a professor of rheumatology (CC). We ascertained patients with an RA or JIA code recorded between 1987 and 2002 during their followup (the time they were registered with a practice that was contributing data to the GPRD). These patients were subdivided into those age ≥16 or <16 years at the time of their first code (potential RA and JIA cases, respectively).
For potential RA cases, each RA medical code recorded during followup was categorized into 1 of 4 diagnostic groups, with Group 1 having the strongest evidence of RA and Group 4 having the weakest. Group 1 included codes for seropositive or erosive RA, Group 2 comprised “rheumatoid arthritis” codes (such as RA of knee), Group 3 codes were for systemic manifestations of RA (e.g., myopathy due to RA), and Group 4 codes were for seronegative RA or other weak evidence for RA. Individuals were categorized according to the highest RA group in their GPRD record; for example, an individual with a mixture of Group 1 and Group 3 RA codes was classified in Group 1. In total, 31,830 individuals had a first RA code at age ≥16 years (Group 1 = 1,998, Group 2 = 25,740, Group 3 = 525, Group 4 = 3,567).
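The "highest group" rule can be sketched in code; the mapping and code names below are hypothetical placeholders (the actual GPRD dictionary codes are not reproduced here), and the original analysis was performed in Stata, so this is purely illustrative:

```python
# Illustrative sketch of the diagnostic-group assignment described above.
# GROUP_OF_CODE keys are invented stand-ins, not real GPRD dictionary codes.
GROUP_OF_CODE = {
    "seropositive_ra": 1,            # Group 1: seropositive or erosive RA
    "erosive_ra": 1,
    "rheumatoid_arthritis_knee": 2,  # Group 2: "rheumatoid arthritis" codes
    "ra_myopathy": 3,                # Group 3: systemic manifestations of RA
    "seronegative_ra": 4,            # Group 4: seronegative RA / weak evidence
}

def ra_diagnostic_group(codes):
    """Return the strongest (lowest-numbered) diagnostic group among a
    patient's recorded RA codes."""
    return min(GROUP_OF_CODE[c] for c in codes)
```

For example, a record mixing Group 1 and Group 3 codes is classified as Group 1, matching the example in the text.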
For JIA cases, individuals age <16 years at the time of their first code were categorized into 2 groups. Those in Group 1 had codes that matched most closely the general definition or any of the 7 subtypes of the 2001 International League of Associations for Rheumatology (ILAR) revised JIA classification, and/or codes that GPs were likely to choose to represent these subtypes; for example, pauciarticular-onset juvenile chronic arthritis, RA, or psoriatic arthritis (10). However, other young GPRD patients had nonspecific arthritis codes such as juvenile polyarthritis or monarthritis, with no subsequent diagnostic code to explain this arthritis. Because these codes could also represent JIA, we categorized these patients into JIA Group 2. After categorization, 2,901 individuals were identified with a JIA code at age <16 years (Group 1 = 868, Group 2 = 2,033). Both RA and JIA code lists are available from the authors.
Our research funds allowed us to include in the study 300 RA-coded patients (80 each from Groups 1 and 4, 120 from Group 2, 20 from Group 3) and 100 JIA-coded patients (30 from Group 1, 70 from Group 2). We first excluded individuals with no available external data because they had died, left the practice, or were registered with a practice that did not take part in research studies. Only 253 (34%) of GPRD practices were listed as willing to provide copies of patient correspondence at the time of our study. We then drew a random sample from each group with an approximate 20% excess to accommodate possible additional losses after the end of study followup in 2002.
Obtaining patient correspondence.
Additional Information Services (AIS), the company that acts as an intermediary between GPRD practices and researchers, telephoned practices to check that the selected patients were still registered and that the practice was willing to participate. A standardized letter was then sent to each practice, requesting all (anonymized) correspondence and test results in the patient notes relating to the patient's arthritis.
Applying diagnostic criteria.
We used the 1987 American College of Rheumatology (ACR; formerly the American Rheumatism Association) revised RA classification criteria (both the 4 of 7 and classification tree formats) with the MacGregor et al modification, which increased diagnostic sensitivity by incorporating lifetime evidence of disease (11, 12). For JIA, we used the 2001 ILAR criteria (10). We identified factors that were likely to be recorded accurately in the database and/or were relevant to diagnostic accuracy; these included patient characteristics, RA/JIA diagnostic group, number of RA/JIA codes and relevant symptom/investigation codes in their GPRD record, evidence of each element of the ACR/ILAR criteria, timing of arthritis drug prescriptions, and alternative diagnoses for RA/JIA suggested in patients' electronic records or written correspondence. Two authors (AJH and SLT) independently extracted these data from the correspondence/computerized records onto a prepiloted case validation form. We applied the ACR/ILAR criteria, categorizing each patient as either 1) fulfilling the criteria, 2) not fulfilling the criteria, 3) indeterminate (unclear whether the criteria were fulfilled), or 4) having insufficient data to apply the criteria. Results were compared, but in order to maximize accuracy we referred all records with disagreement or uncertainty about the diagnosis to a consultant rheumatologist (CJE) for a third opinion. We also referred records where the criteria were fulfilled but an alternative diagnosis was proposed in hospital correspondence, or where criteria were not fulfilled but a diagnosis of RA/JIA was suggested in a rheumatology clinic. Referred cases were categorized (CJE) as having definite, probable, possible, or unlikely RA/JIA, or having insufficient data to make a diagnosis.
Data were analyzed using Stata, version 9 (Stata Corporation, College Station, TX). Individuals with possible RA/JIA or with insufficient data were excluded from main analyses. Remaining patients were categorized as having RA/JIA if they fulfilled the ACR/ILAR criteria on initial assessment and this decision was not overruled (by CJE), or if they had definite or probable RA/JIA on expert opinion. Similarly, individuals were classified as not having RA/JIA if they did not fulfill the diagnostic criteria and this decision was not later overruled, if they were classified as having unlikely RA/JIA on expert opinion, or if their GP reported that the patient had either been wrongly coded as having RA/JIA or had no mention of RA/JIA anywhere in their full medical record.
We compared GPRD characteristics of individuals with confirmed RA/JIA with those without RA/JIA, using cross-tabulations to assess the accuracy of each characteristic in predicting a valid diagnosis: both sensitivity (the proportion of individuals with a valid diagnosis who had the characteristic) and specificity (the proportion of individuals without a valid diagnosis who did not have the characteristic). Because each characteristic formed a categorical variable, we used logistic regression to identify discriminant functions (13). Univariable odds ratios (ORs) were obtained for each characteristic using the validated RA diagnosis as the outcome. A multivariable logistic model was set up to develop a data-derived diagnostic algorithm, i.e., the combination of GPRD characteristics that best predicted a valid diagnosis. Characteristics with an OR ≥2.0 and/or a P value less than 0.1 on univariable analysis were added sequentially to the model and kept if they retained an association with RA. The log ORs for GPRD characteristics in the final model were used to derive an RA score for each individual by assigning to the individual the value of the log OR for each characteristic present in their GPRD record, and then summing these values to produce the individual's overall score (13). We determined the cutoff score that best separated individuals without a valid diagnosis (scores below the cutoff) from those with a valid diagnosis (scores above the cutoff). All possible cutoffs were examined to identify the cutoff that maximized diagnostic sensitivity and specificity. The validity of the algorithm was first examined in the whole sample, then repeated by randomly splitting the data set in half, recreating the algorithm in one half and testing it in the second half.
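The cutoff search can be sketched as follows. This is a generic reimplementation (the original analysis used Stata), and it assumes, since the exact criterion is not stated, that "maximized diagnostic sensitivity and specificity" means maximizing their sum:

```python
def sens_spec(scores, labels, cutoff):
    """Sensitivity and specificity when scores strictly above `cutoff`
    are classified as having the diagnosis (labels: True = valid diagnosis)."""
    tp = sum(1 for s, l in zip(scores, labels) if l and s > cutoff)
    fn = sum(1 for s, l in zip(scores, labels) if l and s <= cutoff)
    tn = sum(1 for s, l in zip(scores, labels) if not l and s <= cutoff)
    fp = sum(1 for s, l in zip(scores, labels) if not l and s > cutoff)
    return tp / (tp + fn), tn / (tn + fp)

def best_cutoff(scores, labels):
    """Examine every observed score as a candidate cutoff and return the one
    maximizing sensitivity + specificity (assumed combining rule)."""
    return max(sorted(set(scores)),
               key=lambda c: sum(sens_spec(scores, labels, c)))
```

With a toy data set where valid and invalid diagnoses are perfectly separated by score, the function recovers the separating value.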
Approval for the study was obtained from the Scientific and Ethical Advisory Group of the GPRD.
RESULTS

Figure 1 shows the sampling of eligible individuals with RA or JIA codes. In total, 18,771 (54%) patients were registered with practices that did not participate in external validations of this type. An additional 4,471 patients left their GPRD practice or died during followup. We randomly sampled 479 patients from the remaining individuals; AIS identified 11 of these as having transferred out or died since 2002. The final 468 patients (349 RA, 119 JIA) came from 179 practices, with a median of 2 patients (range 1–22) per practice.
On initial contact with AIS, 146 practices reported that they could help with 348 (74%) patients, either with all requests (98 practices) or with some requests (48 practices). Twenty-four practices could not help with any patients and 9 did not respond to repeated requests. AIS did not routinely report why practices could not help. To achieve our target sample size, we randomly sampled replacement patients from the practices who said that they could help with at least 1 patient. An additional 72 patients were added to the original 348 patients, totaling 420 patients, including 319 RA patients (71% female) and 101 JIA patients (57% female) with a median age at diagnosis of 51 years (interquartile range [IQR] 42–61) and 7 years (IQR 4–10.5), respectively.
Figure 2 shows subsequent responses from practices for these 420 patients. Despite initial agreement, practices for 71 (17%) patients (61 RA, 10 JIA) did not participate: practices for 49 patients refused to provide data, 7 patients had left the practice or died, 7 patients could not be identified (see Figure 2 legend for detailed explanation), and practices for 8 patients did not respond.
Practices sent data for 323 patients and reported that for a further 26 patients (11 RA, 15 JIA), either the patient had been miscoded and did not have RA/JIA or there was no mention of RA/JIA anywhere in the patient's written records. Among the patients with data, 118 fulfilled and 102 did not fulfill the ACR/ILAR diagnostic criteria, classification was unclear for 47, and opinion was divided for 38 (Figure 2). The remaining 18 patients had insufficient data to apply the criteria; these had limited GP notes and no hospital referrals or had only recent hospital data because practices had not kept older correspondence.
We sent 97 (30%) sets of notes for further opinion. After combining results from applying the criteria, expert opinion, and GP feedback about wrongly coded patients, 125 (48%) of the 258 RA-coded patients and 26 (29%) of the 91 JIA-coded patients were classified as having RA/JIA, 99 and 56 were classified as not having RA/JIA, 13 and 2 were classified as having possible RA/JIA, and 21 and 7 were unclassifiable, respectively. The most common true diagnosis for those without RA was osteoarthritis; other diagnoses included psoriatic arthritis, polymyalgia rheumatica, cervical spondylitis, epicondylitis, rotator cuff syndrome, and (among those with seronegative RA codes) nonspecific arthralgia with a negative rheumatoid factor test. Among Group 2 JIA-coded patients, the most common alternative diagnosis was irritable hip.
The validity of specific GPRD characteristics for an RA diagnosis is shown in Table 1 (code lists available from the authors). Six characteristics had ORs ≥2.0 on univariable analysis. Of these, having >1 RA code during followup had high sensitivity and specificity (≥80%). High specificity but low sensitivity was demonstrated for 2 other variables: 1) having a disease-modifying antirheumatic drug (DMARD) prescription after the first RA GPRD code with no alternative indication for that DMARD in the previous 5 years, and 2) having an oral steroid prescription after the first RA code. Three variables (being in RA diagnostic Group 1 or 2, having 2 nonsteroidal antiinflammatory drug [NSAID] prescriptions within a 6-month period, and having no alternative diagnosis for RA in the GPRD record after the last RA code) all had high sensitivity but poor specificity. Three variables had a weaker association with a valid RA diagnosis (Table 1); age and sex were not associated with a valid diagnosis (data not shown).
Table 1. Comparison of characteristics of individuals with a valid and invalid RA diagnosis (n = 224)*
* RA = rheumatoid arthritis; OR = odds ratio; 95% CI = 95% confidence interval; NSAID = nonsteroidal antiinflammatory drug; GPRD = General Practice Research Database; DMARD = disease-modifying antirheumatic drug.
† Prescription data missing for 1 individual with an invalid diagnosis; analyses for DMARDs, steroids, and NSAIDs based on 223 individuals.
‡ Four levels of strength of evidence based on GPRD codes, where 1 = strongest evidence and 4 = weakest evidence (see Materials and Methods for detailed explanation).
§ Prescriptions after first RA code with no medical code for an alternative indication for the specific DMARD in the 5 years before the first prescription date.
Table rows include: alternative diagnosis in GPRD record before first RA code; alternative diagnosis in GPRD record after last RA code.
Results of multivariable analyses of RA-coded data are shown in Table 2. Four characteristics remained associated with a valid RA diagnosis in the final model. The log OR for an appropriate DMARD prescription was more than twice that of any other characteristic, and 87% sensitivity (95% confidence interval [95% CI] 80–93%) and 88% specificity (95% CI 80–94%) were obtained with a cutoff score (4.01) just below this log OR. Using this cutoff and the log ORs for the 4 variables, we obtained a diagnostic algorithm, whereby an individual would be classified as having RA (score >4.01) if he or she had either an appropriate GPRD DMARD prescription or no appropriate DMARD prescription but all 3 other GPRD characteristics (being in RA Group 1 or 2, having no alternative GPRD diagnosis for RA after the last RA code, and having >1 RA code during followup).
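As a sketch, the final algorithm reduces to thresholding a weighted sum of binary indicators. The coefficients below are assumptions for illustration: 1.85, 0.95, and 1.64 are the three non-DMARD log ORs quoted in the worked example accompanying Table 2, but their assignment to individual characteristics, and the exact DMARD log OR (stated only to lie just above the 4.01 cutoff), are guesses:

```python
# Assumed coefficients -- the per-characteristic assignment of the three
# published log ORs is illustrative, and the DMARD value is a placeholder
# chosen only to exceed the 4.01 cutoff.
LOG_OR = {
    "dmard_no_prior_alt_indication": 4.10,
    "diagnostic_group_1_or_2": 1.85,
    "more_than_one_ra_code": 0.95,
    "no_alt_diagnosis_after_last_ra_code": 1.64,
}
CUTOFF = 4.01

def classify_ra(record):
    """record maps characteristics to True/False; classify as RA when the
    summed log ORs of the characteristics present exceed the cutoff."""
    return sum(LOG_OR[k] for k, present in record.items() if present) > CUTOFF
```

Under these weights, an appropriate DMARD prescription alone exceeds the cutoff; without one, all three remaining characteristics are needed (1.85 + 0.95 + 1.64 = 4.44 > 4.01, whereas any pair sums to at most 3.49).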
Table 2. Multivariable analyses of the whole data set and test data set of individuals with valid and invalid rheumatoid arthritis (RA) diagnoses*
* OR = odds ratio; 95% CI = 95% confidence interval; DMARD = disease-modifying antirheumatic drug; GPRD = General Practice Research Database.
† One individual with missing therapy data excluded from analyses.
‡ Randomly selected 50% of individuals with confirmed and unlikely RA (see Materials and Methods for detailed explanation).
§ Adjusted for other variables in the table.
¶ The log ORs for each GPRD characteristic in the model were used to obtain an individual's score. For example, an individual in RA diagnostic Group 1 who had 3 RA codes during followup, no alternative diagnosis in their GPRD record after their last RA code, and no DMARD prescription would have a total score of 1.85 + 0.95 + 1.64 = 4.44. This is greater than the cutoff score of 4.01, so the individual would be classified as having RA.
# Four levels of strength of evidence based on GPRD codes, where 1 = strongest evidence and 4 = weakest evidence.
** Prescriptions after first RA code with no medical code for an alternative indication for a specific DMARD in the 5 years before the first prescription date.
Table rows include: ≥1 DMARD prescription in GPRD record with no prior alternative indication for the DMARD**; alternative diagnosis on computer after last RA code.
After randomly splitting the data in half, recreating the model and cutoff in one half and testing it in the other half, the same 4 variables remained in the final model (Table 2). The same diagnostic algorithm had a sensitivity of 84% (95% CI 73–94%) and a specificity of 86% (95% CI 72–92%) for a valid RA diagnosis when tested on the other half of the data.
We investigated alternative algorithms. Dropping the least strongly associated variable (an alternative diagnosis to RA) from the model gave an algorithm with lower specificity (79%). Using a simpler DMARD characteristic (dropping the requirement for no prior alternative indication for the DMARD) gave an algorithm of similar validity to the original (87% sensitivity, 86% specificity). A higher cutoff score of 4.03 gave an algorithm with higher specificity (91%), classifying individuals as having RA if they had either a DMARD code and any 1 of the 3 other characteristics or all 3 other characteristics; however, this algorithm could not be recreated in the test data set because the combined score from the 3 non-DMARD characteristics was less than the score for the DMARD variable (Table 2).
The major predictor of a valid JIA diagnosis was the JIA diagnostic group. In total, 25 of 26 of those classified as having JIA were in JIA Group 1 (96% sensitivity) compared with 4 of 56 of those with unlikely JIA (93% specificity); only 1 of the 53 children in Group 2 was classified as having JIA. Remaining analyses therefore focused on the 29 children in diagnostic Group 1 (Table 3). Cross-tabulations produced many zero cells and therefore, ORs were not estimated. The strongest associations with a valid JIA diagnosis were having 2 NSAID prescriptions within 6 months and having >1 JIA or other arthritis code during followup.
Table 3. Comparison of characteristics of individuals with a valid and invalid JIA diagnosis among patients with a Group 1 JIA code*
* JIA = juvenile idiopathic arthritis; NSAID = nonsteroidal antiinflammatory drug; GPRD = General Practice Research Database; DMARD = disease-modifying antirheumatic drug. Patients had a GPRD code consistent with the 2001 International League of Associations for Rheumatology revised classification of JIA.
† Fisher's exact test.
‡ Prescriptions after first JIA code with no medical code for an alternative indication for the specific DMARD in the 5 years before the first prescription date.
§ Prescriptions after first JIA code.
Table rows include: >1 specific JIA code (on different dates); >1 specific JIA code and/or nonspecific arthritis code; 2 NSAID prescriptions in GPRD within 6 months of each other; ≥1 DMARD prescription in GPRD record with no prior alternative indication for the DMARD‡; alternative diagnosis in GPRD record before first JIA code; alternative diagnosis in GPRD record after last JIA code.
Application to all RA/JIA-coded individuals.
Among the 31,830 RA-coded patients age ≥16 years in the original GPRD data set, 15,746 (49%) had an appropriate DMARD prescription, 27,738 (87%) were in RA Groups 1 or 2, 16,300 (51%) had >1 RA code during followup, and 27,184 (85%) had no alternative diagnostic code after their last RA code. In total, 19,492 (61%) RA-coded patients fulfilled the diagnostic algorithm. For JIA, there were 868 individuals with a Group 1 JIA code.
Comparison of participants and nonparticipants.
We compared GPRD characteristics of the 224 RA patients categorized as having a valid or invalid RA diagnosis with the 34 possible and unclassifiable patients and the remaining 150 RA-coded patients from the original 414 selected (excluding the 6 known to have left or died). The groups were similar with respect to most characteristics (Table 4). Those who did not participate were slightly older at the time of their first RA code. The proportion of patients with a GPRD DMARD prescription was substantially lower in the possible/unclassifiable group compared with those included in algorithm analyses, and was slightly higher in the nonparticipant group.
Table 4. Characteristics of RA patients who participated and who did not participate in the validation study*
Values are the median (interquartile range) unless otherwise specified. RA = rheumatoid arthritis; DMARD = disease-modifying antirheumatic drug.
Excludes 6 patients reported to have died or left the practice.
Comparing the 258 individuals in the validation study with the 150 who did not participate. Pearson's chi-square test was used for categorical variables; a nonparametric K-sample test was used on equality of medians for continuous data.
Prescription after first RA/juvenile idiopathic arthritis code with no alternative indication for that DMARD in the previous 5 years.
DISCUSSION

Most validations of GPRD diagnoses assess the proportion of patients with a predefined set of medical and/or therapeutic codes who have a valid diagnosis, i.e., the positive predictive value (PPV) of these codes. Often, sampling is from patients selected for a specific study, further restricting validation to those who fulfill the study's inclusion criteria. Unlike sensitivity and specificity, the PPV varies with the prevalence of the condition; the prevalence of RA (and thus the PPV) among RA-coded individuals may be higher in subpopulations with specific comorbidities or prescriptions. Therefore, results may not be generalizable to the wider GPRD population.
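This prevalence dependence follows directly from Bayes' rule, as a quick sketch shows; the 0.87/0.88 values in the note below are our algorithm's whole-sample sensitivity and specificity, and the prevalences are illustrative:

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value of a binary classification via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)
```

Holding sensitivity and specificity fixed at 0.87 and 0.88, the PPV is roughly 45% when 10% of the sampled population truly has RA, but roughly 88% at 50% prevalence, which is why a PPV measured in one subpopulation need not transfer to the wider GPRD.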
In contrast, our aim was to identify characteristics in GPRD electronic records that were associated with a valid RA/JIA diagnosis among individuals with any one of a wide range of RA or JIA codes. We used these characteristics to develop a diagnostic algorithm with optimum validity for an RA diagnosis. Applying established diagnostic gold standards for RA and JIA to patient medical data allowed us to assess the diagnosis independently from GP opinion.
To our knowledge, the only previous external validation of GPRD RA diagnoses was carried out in a study of the association between naproxen and thromboembolic events among RA patients (4). These patients were age 40–79 years with an RA code, a prescription for an NSAID, DMARD, or systemic steroid, and without specific comorbid conditions. A questionnaire was sent to GPs to verify the RA diagnosis for different subgroups of patients. Details of the questionnaire and practice response rates were not reported; the PPV of the RA code combination varied from 74% to 80%. This RA case definition could have limited specificity; our study (using stricter criteria for timing of prescriptions) found 25% specificity for RA-coded individuals with any of these prescriptions.
In JIA analyses, a Group 1 code was the major predictor of a valid diagnosis, having high sensitivity and specificity. Our search for possible JIA cases among those with other unexplained arthritis codes identified only one individual with probable JIA who had a single “arthritis” code. This suggests that JIA cases are uncommon among those without specific JIA codes, but may occur. Further characterization of a valid JIA diagnosis among those with specific codes was limited by small numbers.
It is difficult to gauge the generalizability of GPRD validation results without sufficient details of study methodology. An explicit aim of our study was to provide information on selection processes and practice response rates to assess limitations of this type of external validation. We could not randomly select patients because only a third of practices were listed as willing to participate in external validations. Among selected practices, 45% could not help with one or more patients, and practices that initially agreed subsequently did not provide data for an additional 57 patients. Some nonparticipation may have occurred because patients died or left the practices, but we did not have data to quantify this. A few practices reported that they would not photocopy extensive hospital correspondence for the remuneration offered. Among participants, we could not apply diagnostic criteria for some patients with longstanding disease because practices retained only recent correspondence.
Incomplete participation may have affected our findings in various ways. Accuracy of diagnosis and coding may differ between practices that agree or refuse to participate in research studies. Patients excluded because they died or had large case files are likely to have included those with longstanding and/or severe RA disease, with probable good evidence of the diagnosis and GPRD characteristics similar to those included in the diagnostic algorithm (e.g., an appropriate DMARD prescription). This, together with oversampling of individuals with Group 3 and 4 RA codes to assess their validity and the probable lower proportion of hospital-diagnosed RA cases compared with JIA cases, could explain the relatively low proportion of valid RA diagnoses in the validation sample. In contrast, those with insufficient data comprised those with missing hospital correspondence and those with no hospital referrals; these are likely to include both valid and invalid diagnoses. It is unclear whether misuse of particular RA codes in this study, such as seronegative RA codes for patients with arthralgia and a negative rheumatoid factor test, is found throughout GPRD practices.
Our planned sample size of 400, although large compared with many previous GPRD validations, was restricted by available research funds. Acquiring a set of patient notes is costly: a nonrefundable £70. Therefore, we focused on detailing characteristics of a valid RA diagnosis among RA-coded patients and did not assess the negative predictive value of RA diagnoses; i.e., the proportion of those without an RA code who did not have RA. This presents a formidable challenge, requiring sampling from the very large number of GPRD patients with codes for other arthritic conditions or symptoms. Few validation studies have attempted this aspect of diagnostic validity.
Our findings can be used to interpret results of published RA GPRD studies, and the diagnostic algorithm can be used or adapted to select RA patients for future GPRD-based research. Our study period finished 2 years before biologic therapies were approved for widespread use in the UK. Later use of these agents may affect the algorithm's performance, but its application to data recorded before 2002 appears valid. Reappraisal of the algorithm incorporating biologic therapies will need to wait for adequate followup and accumulation of sufficient subjects in the GPRD cohort. Our methods could also be applied to investigate diagnostic validity of other arthritic disorders in the GPRD. However, our study has highlighted several limitations of this type of external validation, and further debate is needed on how best to validate chronic disease diagnoses in general practice data.
Dr. Thomas had full access to all of the anonymized data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Study design. Thomas, Smeeth, Cooper, Hall.
Acquisition of data. Thomas, Hall.
Analysis and interpretation of data. Thomas, Edwards, Smeeth, Cooper, Hall.