Abstract


Objective

To conduct a systematic review of the literature on the validation of algorithms identifying infections in administrative data for future use in populations with rheumatic diseases.

Methods

Medline and EMBase were searched using the themes “administrative data” and “infection” between 1950 and October 2012. Inclusion criteria consisted of validation studies of administrative data identifying infections in adult populations. Article quality was assessed using a validated tool.

Results

A total of 5,941 articles were identified, 90 articles underwent detailed review, and 24 studies were included. The majority (17 of 24) examined bacterial infections and 9 examined opportunistic infections. Eighteen studies were from the US and all but 4 studies used International Classification of Diseases, Ninth Revision codes. Rheumatoid arthritis patients were studied in 6 of 24 articles. The studies on bacterial infections in general reported highly variable sensitivity and positive predictive value (PPV) for the diagnosis of infections using administrative data (sensitivity range 4.4–100%, PPV range 21.7–100%). Algorithms to identify opportunistic infections similarly had a highly variable sensitivity (range 20–100%) and PPV (range 1.3–100%). Thirteen studies compared the diagnostic accuracy of different algorithms, which revealed that strategies including a comprehensive algorithm using a greater number of diagnostic codes or codes in any position had the highest sensitivity for the diagnosis of infection. Algorithms that incorporated microbiologic or pharmacy data in combination with diagnostic codes had improved PPV for identification of tuberculosis.

Conclusion

Algorithms for identifying infections using administrative data should be selected based on the purpose of the study, with careful consideration as to whether a high sensitivity or PPV is required.


INTRODUCTION


Infections are a common comorbidity for patients with autoimmune rheumatic diseases such as rheumatoid arthritis (RA) ([1, 2]) and systemic lupus erythematosus ([3]). They may arise as a consequence of the disease itself or of its treatment, causing significant morbidity and mortality. A population approach using administrative data in the form of billing data from physician visits and hospitalizations may be used to obtain comprehensive estimates of the burden of infections in patients with rheumatic diseases. Administrative data are also useful in pharmacoepidemiologic studies for evaluating infections as potential adverse events of medications used to treat rheumatic diseases; however, validated methods for utilizing administrative data to identify infections are required to assure accurate estimates of infections in populations.

There is increasing interest among epidemiologists and policymakers in the use of administrative data to identify the burden of comorbidities in patients with rheumatic diseases for surveillance and pharmacoepidemiologic purposes. A conference was held in Montreal in February 2011 to develop consensus statements for the use of administrative data for research and disease surveillance in rheumatic diseases ([4]). In preparation for the meeting, a working group (led by DL) was tasked with conducting a series of systematic reviews of the literature to evaluate the validity of algorithms using administrative data for the identification of select comorbid conditions of interest in patients with rheumatic diseases. The primary question addressed by the series of systematic reviews was: “Can administrative health care data accurately identify the chronic conditions of interest for the purpose of using these comorbidities as covariates or as outcomes in research studies?” This systematic review examined whether administrative data can be used to accurately identify infections as covariates or as outcomes.

Box 1. Significance & Innovations

  • A variety of algorithms can be used to identify bacterial infections using administrative data with variable sensitivity and positive predictive value (PPV) depending on the infection, population studied, reference standard, and algorithm used.
  • Increasing the number of diagnostic codes in algorithms for identifying infections improves the sensitivity for identifying infections.
  • Using only the first (principal) diagnostic code generally improves the PPV.
  • There is less research on the validation of algorithms to identify opportunistic infections in administrative data and further work in this area is needed.

MATERIALS AND METHODS


Data sources and searches

A systematic literature search was conducted to identify studies reporting on the validation of infections identified using administrative health data. Two medical databases were searched: Medline (from 1950 to October 2012) and EMBase (from 1980 to October 2012). Key search themes included “administrative data” and “serious or opportunistic infections” and were described by medical subject heading terms and keywords. The search strategy is shown in Supplementary Appendix A (available in the online version of this article at http://onlinelibrary.wiley.com/doi/10.1002/acr.21959/abstract). Additionally, the references of all identified studies were hand searched to identify additional relevant literature.

Study selection

Peer-reviewed studies that reported on the validation of algorithms using administrative data to identify infections were eligible for inclusion. The focus of our review was infections requiring hospitalization and opportunistic infections. We did not attempt to capture specific individual types of infection, but rather evaluate the ability of administrative data to capture the overall risk of infections. The following criteria for inclusion were used for eligibility: original full-length articles, use of administrative health data, having performed a validation study of the infection diagnosis using a reference standard (such as chart review), and studies of adult populations evaluating serious infections (opportunistic infections or infections requiring hospitalization). Studies validating the International Classification of Diseases (ICD) prior to the ICD, Ninth Revision (ICD-9) were excluded, as were studies of acquired immunodeficiency syndrome (AIDS), nosocomial infections, or other specific infections, such as malaria, since they were not relevant to the focus of our study (Figure 1).


Figure 1. Study selection for systematic review. ICD = International Classification of Diseases.


Data extraction and quality assessment

A standardized data collection form was used to describe the methods used for validation of the infection diagnosis and to extract the results. Quality was assessed using recently published guidelines ([5]).

RESULTS


After exclusion of duplicates, 5,941 studies were found and 5,851 were excluded after a review of the abstract. A total of 90 studies were included for detailed review and 23 were retained as meeting the criteria for entry (Figure 1). Upon hand searching references, 1 additional study was identified. In total, 24 studies met our criteria for entry into the systematic review ([6-29]) (Figure 1 and Table 1). Seventeen studies examined bacterial infections, including 8 specifically examining the validation of pneumonia (Table 2). Nine studies examined opportunistic infections (Table 3).

Table 1. Characteristics of studies included in the systematic review*
Author, year (ref.) | Country | Administrative data source | Type of infection studied | Studied as comorbidity or primary disease | Population

* ICD-9-CM = International Classification of Diseases, Ninth Revision, Clinical Modification; RA = rheumatoid arthritis; ICD-10 = International Statistical Classification of Diseases and Related Health Problems, Tenth Revision; ICU = intensive care unit; UAB = University of Alabama at Birmingham; VA = Veterans Affairs; THIN = The Health Improvement Network; ICD-10-AM = Australian Modification of the ICD-10; TB = tuberculosis; EMR = electronic medical record; anti-TNF = anti–tumor necrosis factor.

Bacterial infections (general)
Curtis et al, 2007 ([7]) | US | ICD-9-CM codes; hospitalization data (physician or diagnostic test claims); large US health care organization not otherwise specified | Bacterial | Comorbidity | RA
Gedeborg et al, 2007 ([10]) | Sweden | ICD-9 and ICD-10; hospitalization data (discharge codes); Swedish Hospital Discharge Register | Sepsis, pneumonia, and central nervous system | Primary disease | ICU population
Grijalva et al, 2008 ([11]) | US | ICD-9-CM; hospitalization data (discharge codes); Tennessee Medicaid | Pneumonia, sepsis, invasive pneumococcal disease, and opportunistic mycoses | Comorbidity | RA
Landers et al, 2010 ([12]) | US | ICD-9-CM codes; hospitalization data (discharge and symptom codes and culture data); tertiary hospital in New York | Urinary tract | Primary disease | General population
Patkar et al, 2009 ([13]) | US | ICD-9; hospitalization data (discharge codes); UAB Health System | Bacterial | Comorbidity | RA
Schneeweiss et al, 2007 ([14]) | US | ICD-9-CM; hospitalization data (discharge codes); VA database, New England region | Bacterial | Primary disease | VA population

Sepsis and bacteremia (exclusively)
Madsen et al, 1998 ([15]) | Denmark | ICD-10 codes; hospitalization data (discharge codes); Aalborg Hospital | Bacteremia | Primary disease | General population
Ollendorf et al, 2002 ([16]) | US | ICD-9-CM; hospitalization data (discharge codes); 10 medical centers not otherwise specified | Sepsis | Primary disease | ICU population

Pneumonia (exclusively)
Aronsky et al, 2005 ([8]) | US | ICD-9-CM; hospitalization data on emergency room visits and hospital admissions (discharge codes); LDS Hospital, Salt Lake City, Utah | Pneumonia | Primary disease | General population
Guevara et al, 1999 ([17]) | US | ICD-9-CM; hospitalization (discharge codes); 15 acute care hospitals in 2 counties in Ohio | Streptococcal pneumonia | Primary disease | General population age ≥65 years
Jackson et al, 2003 ([18]) | US | ICD-9-CM codes; hospitalization data (discharge codes); Group Health Cooperative, Seattle, Washington | Pneumonia | Primary disease | General population age ≥65 years
Marrie et al, 1987 ([19]) | Canada | ICD-9-CM hospitalization data (admission and discharge codes); tertiary care hospital not otherwise specified | Pneumonia | Primary disease | General population
Meropol and Metlay, 2012 ([20]) | UK | THIN database codes (Read codes) | Pneumonia | Primary disease | General population
Skull et al, 2008 ([21]) | Australia | ICD-10-AM; hospitalization data (discharge codes); Royal Melbourne Hospital and Western Hospital, Footscray | Pneumonia | Primary disease | General population age ≥65 years
Van de Garde et al, 2007 ([22]) | The Netherlands | ICD-9-CM codes; hospitalization data (discharge codes); 7 hospitals | Pneumonia | Primary disease | General population
Whittle et al, 1997 ([9]) | US | ICD-9-CM; hospitalization data (discharge codes); Presbyterian University Hospital, Pittsburgh, Pennsylvania | Community-acquired pneumonia | Primary disease | General population
Yu et al, 2011 ([23]) | US | ICD-9-CM; hospitalization data (discharge codes); Group Health Cooperative, Seattle, Washington | Pneumonia | Primary disease | General population

Opportunistic infections (exclusively)
Curtis et al, 2007 ([6]) | US | ICD-9-CM; hospitalization and outpatient data sources, including physician claims, diagnostic tests, and procedure claims; large US health care organization not otherwise specified | Opportunistic (and other serious adverse events) | Comorbidity | RA and Crohn's disease patients

TB (exclusively)
Calderwood et al, 2010 ([24]) | US | ICD-9-CM; anywhere in EMR: laboratory tests, pharmacy data; Atrius Health | TB | Primary disease | General population
Fiske et al, 2012 ([25]) | US | ICD-9-CM; physician and pharmacy claims; Tennessee Medicaid | TB | Comorbidity | RA
Trepka et al, 1999 ([26]) | US | ICD-9-CM; hospitalization data (discharge codes), laboratory data; Wisconsin Office of Health Care | TB | Primary disease | General population
Winthrop et al, 2011 ([27]) | US | ICD-9-CM codes; hospitalization or outpatient data sources, pharmacy and laboratory data; Kaiser Permanente, Northern California, and Portland VA Medical Center | TB and nontuberculous mycobacteria | Comorbidity | RA (treated with anti-TNF agents)
Yokoe et al, 1999 ([28]) | US | ICD-9-CM; diagnosis, procedure, microbiology or radiology codes, and pharmacy data; Harvard Pilgrim Health Care | TB | Primary disease | General population
Yokoe et al, 2004 ([29]) | US | Pharmacy data; 3 different health plans | TB | Primary disease | General population
Table 2. Selected results from validation studies of general bacterial infections identified by administrative data*
Author, year (ref.) | RS for validation | Case definition for case identification | N | Sensitivity (95% CI), % | PPV (95% CI), %
  * RS = reference standard; 95% CI = 95% confidence interval; PPV = positive predictive value; ID = infectious disease; MD = medical doctor; ICD-9 = International Classification of Diseases, Ninth Revision; UTI = urinary tract infection; ICU = intensive care unit; ICD-10 = International Statistical Classification of Diseases and Related Health Problems, Tenth Revision; CNS = central nervous system; CXR = chest radiograph; EPR = electronic patient record; ED = emergency department; DRG = disease-related grouping; ICD-9-CM = ICD-9, Clinical Modification; CAP = community-acquired pneumonia; ICD-10-AM = Australian Modification of the ICD-10; CART = classification and regression tree analysis.

  † Because the total number of codes examined was unclear, these are the ones reported.

General bacterial infections     
Curtis et al, 2007 ([7])
  1. Chart review; data extracted were evaluated by 2 independent ID MDs
  2. Case definitions
  • ≥2 ICD-9 codes for bacterial infections, of which 1 is MD visit (in hospital)
  • Claims could be in any position
217
  • 66 (using RS 1 for a “definite” infection) and 85 (if RS 1 is “empirically treated” or “definite” infection)
  • 35 (using RS 2)
Landers et al, 2010 ([12]) | No clear reference standard; each algorithm compared against another | 7 different algorithms including combinations of the following elements: hospital discharge codes for UTI (ICD-9 code 599.0), positive urine culture, presence of fever | 2,614 with ≥1 criterion for UTI | ICD-9 code vs. fever + positive culture algorithm: 55.6 (52.7–58.5)
Patkar et al, 2009 ([13]) | Reviewer's impression on chart review | Evaluated 2 sets of ICD-9 codes, any position in claim: "comprehensive" set, "restricted" set | 162 | Definite infections: 100 (96–100) with the comprehensive set; 59 (48–69) with the restricted set
Schneeweiss et al, 2007 ([14]) | MD impression or diagnostic criteria on chart review | ≥1 ICD-9 code for:
  1. Meningitis: 320.x, 049.x
  2. Encephalitis: 323.x, 054.3
  3. Cellulitis: 681.x, 682.x
  4. Endocarditis: 421.x
  5. Pneumonia: 481.x, 482.x
  6. Pyelonephritis: 590.x
  7. Septic arthritis: 711.x
  8. Osteomyelitis: 730.0x, 730.1x, 730.2x
  9. Bacteremia: 038.x, 790.7
  10. Any of the above
  1. 8
  2. 3
  3. 23
  4. 19
  5. 23
  6. 11
  7. 29
  8. 22
  9. 22
  10. 158
Using MD impression:
  1. 88 (65–100)
  2. 66 (12–100)
  3. 74 (56–92)
  4. 74 (54–94)
  5. 70 (51–89)
  6. 73 (47–99)
  7. 73 (57–89)
  8. 95 (86–100)
  9. 91 (79–100)
  10. 80 (74–86)
Sepsis     
Gedeborg et al, 2007 ([10]) | ICU database (maintained by 2 ICU physicians) | Primary vs. secondary diagnosis, ICD-9 vs. ICD-10 (see appendix for codes; wide vs. narrow combinations numerous)
  1. CNS infections
  2. Sepsis
  3. Pneumonia
  1. 50
  2. 365
  3. 406
  • ICD-9:

    1. 95.4 (86.8–100)

    2. 45.7 (38.7–52.9)
    3. 48.1 (40.9–55.3)
  • ICD-10 not shown
Madsen et al, 1998 ([15]) | RS 1: reviewer's impression on chart review (criteria used); RS 2: bacteremia database | ICD-10 (31 unique codes included) | 83 (75 patients) vs. RS 1; 207 (186 patients) vs. RS 2 | ICD-10: septicemia: 4.4 (2.4–6.4); septicemia and sepsis: 5.9 (3.6–8.2) compared to RS 2 | Septicemia: 21.7 (12.8–30.5)
Ollendorf et al, 2002 ([16]) | Prospective trial of sepsis | ICD-9 sepsis codes in any position: 038.3, 022.3, 790.7, 038.4, 038.49, 038.40, 038.41, 054.5, 036.2, 038.2, 038.43, 003.1, 038.8, 038.9, 020.2, 038.44, 038.1, 038.0 | 122 | 75.4
Pneumonia (exclusively)     
Aronsky et al, 2005 ([8]) | 3 steps:
  1. Pneumonia chief symptom, CXR, ICD-9 admission or discharge of pneumonia, ≥1% chance of pneumonia on modeling, search for pneumonia in EPR
  2. MD review of ED and radiology
  3. Majority vote of 3 respirologists
  • Discharge diagnosis
  • Algorithm 1: 480–483, 485–487.0
  • Algorithm 2: 3.22, 21.2, 39.1, 52.1, 55.1, 73.0, 112.4, 114.0, 115.05, 115.15, 115.95, 130.4, 136.3, 480.0–480.2, 480.8–481, 482, 482.0, 482.1, 482.3, 482.4, 482.8, 482.9, 483, 484.1, 484.3, 484.5, 484.6, 484.7, 484.8, 485–487, 507.0, 510.0, 510.9, 511.1, 513.0
  • Algorithm 3: 480–483, 485–487.0, 507 as a primary diagnosis or 518.8 as a primary diagnosis and 480–483, 485–487.0 as a secondary diagnosis†
  • Algorithm 4: DRG 89, 90
  • Algorithm 5: DRG 79, 80, 89, 90
  1. 129
  2. 159
  3. 164
  4. 102
  5. 145
  1. 54.8 (47.8–61.5)
  2. 68.3 (61.6–74.4)
  3. 69.8 (63.1–75.8)
  4. 44.7 (38.0–51.7)
  5. 62.3 (55.4–68.8)
  1. 84.5 (77.3–89.7)
  2. 85.5 (79.2–90.2)
  3. 84.8 (78.5–98.5)
  4. 87.3 (79.4–92.4)
  5. 85.5 (78.9–90.3)
Grijalva et al, 2008 ([11]) | Medical chart review for Streptococcus pneumoniae (organism identification required) | Principal vs. secondary position ICD-9-CM codes:
  1. Pneumonia (480–487.0)
  2. Invasive pneumococcal disease (320.1, 038.2, 567.1)
  3. Sepsis (003.1, 036.2, 785.52, 790.7, 038)
  1. 161
  2. 7
  3. 45
  1. Any field: 84 (77–89)

    • Principal: 95 (90–98)

    • Secondary: 60 (46–74)
  2. Any field: 100
  3. Any field: 80 (65–90)

    • Principal: 100

    • Secondary: 75 (58–88)
Guevara et al, 1999 ([17])
  • Prospective, population-based study
  • Categories: definite, probable, and possible cases
  • ICD-9 codes for S pneumoniae in position 1–5 of discharge codes vs. position 1 (example for 481 shown here)
  • The following individual ICD-9 codes are described: 38.00, 38.20, 38.80, 481.0, 482.3, 486.00, and 518.81
  • They also examined combinations of codes (single and combination codes with the highest accuracy scores shown here)
4,385 (all); 240 (definite); 53 (probable); 268 (possible)
  • ICD-9 code 481:

    • Position 1: 45.4

    • Position 1–5: 58.3
  • Combination:

    • ICD-9 codes 38.2+, 481+, 38.00

    • Position 1–5: 76.67 (definite as ref. standard)
  • ICD-9 code 481:

    • Position 1: 56.8

    • Position 1–5: 59.1
  • Combination:

    • ICD-9 codes 38.2+, 481+, 38.00

    • Position 1–5: 61.33 (definite as ref. standard)
Jackson et al, 2003 ([18]) | Chart review; treating physician's impression was pneumonia
  • ICD-9 discharge codes for pneumonia 480–487.0 (unclear position) after excluding readmissions and nosocomial cases (unclear how the latter was excluded)
  • Also examined pneumococcal bacteremia, but validation results not clearly reported
2,455 | 65
Marrie et al, 1987 ([19]) | Prospective cohort of pneumonia for separate study (laboratory, clinical data)
  • Admission + discharge pneumonia diagnosis
  • ICD-9: 011.6, 021.2, 136.3, 480–487, 506, 507
  • Cohort: 105
  • ICD-9: 127
  • Match: 73
69.5 | 57
Meropol and Metlay, 2012 ([20]) | Reviewer's impression on chart review | Broad list of hospitalization codes for CAP, including organism-specific codes (59 codes total) | 59 charts available | 86 (75–94)
Skull et al, 2008 ([21])
  1. Medical record notation of “pneumonia”
  2. CXR report
  3. 1 and 2
ICD-10-AM codes J10–J18 in any position
  1. 5,098
  2. 3,345
  3. 3,343
  1. 97.8 (97.1–98.3)
  2. 89.2 (87.7–90.6)
  3. 97.8 (96.9–98.5)
  1. 96.2 (95.4–97.0)
  2. 71.4 (69.4–73.3)
  3. 68.1 (68.1–72.0)
Whittle et al, 1997 ([9]) | Chart review: "explicit criteria," "implicit review," "physician panel" | Discharge ICD-9-CM (principal position) vs. study algorithm vs. DRG | Total: 144
  • ICD-9-CM: 84
  • DRG: 74
  • Algorithm: 89
  • ICD-9-CM: 92
  • DRG: 93
  • Algorithm: 89
Van de Garde et al, 2007 ([22]) | Prospective study of pneumonia | Discharge ICD-9 codes in primary or secondary positions:
  1. Pneumococcal pneumonia (481)
  2. Pneumonia with other organism specified (482.x and 483.x)
  3. Pneumonia not otherwise specified (485–486)
293 total
  1. 40
  2. 82
  3. 171
  • Principal diagnosis code: 72.4 (all)

    1. 35.0

    2. 18.3
    3. 62.6
  • Any position: 79.5 (all)
Yu et al, 2011 ([23]) | Chart review: "definite CAP," "probable CAP" (based on report of physician opinion)
  • Numerous algorithms including the following components:

    1. Single ICD-9 codes in primary position vs. any position

    2. Additional comorbidities
    3. Procedure codes
    4. Demographics, including age, sex, length of hospital stay, season of admission, and death from admitting illness Algorithms developed using CART
  • 3,991 (total)
  • 2,491 (CAP)
  • Primary discharge code for pneumonia:

    • Age 18–64 years: 63 (59–67)

    • Age ≥65 years: 65 (63–67)
  • CART analysis:

    • Age 18–64 years: 81 (77–84)

    • Age ≥65 years: 89 (87–90)
  • Primary discharge code for pneumonia:

    • Age 18–64 years: 91 (87–94)

    • Age ≥65 years: 89 (87–91)
  • CART analysis:

    • Age 18–64 years: 84 (80–87)

    • Age ≥65 years: 82 (80–83)
Table 3. Selected results from validation studies of opportunistic infections identified by administrative data*
Author, year (ref.) | Reference standard for case validation | Case definition for case identification | N | Sensitivity (95% CI), % | PPV (95% CI), %
  * 95% CI = 95% confidence interval; PPV = positive predictive value; TB = tuberculosis; PCP = Pneumocystis jiroveci pneumonia; MD = medical doctor; ICD-9 = International Classification of Diseases, Ninth Revision; CDC = Centers for Disease Control and Prevention; PZA = pyrazinamide; AFB = acid-fast bacilli; TIMS = TB Information Management System; NTM = nontuberculous mycobacteria; VA = Veterans Affairs; KP = Kaiser Permanente.

  † This study reported on additional adverse events, including aplastic anemia, non-Hodgkin's lymphoma, and "lupus-like syndrome," and this sensitivity is for all reported adverse events (since separate sensitivities were not reported for infectious diseases).

General opportunistic infections     
Curtis et al, 2007 ([6]) | Chart review using diagnostic criteria (evidence-based abstraction form) | ≥1 diagnosis code on any type of claim (could be laboratory or diagnostic):
  1. Active TB
  2. PCP
  3. Histoplasmosis
  4. Cryptococcus
  5. Coccidioidomycosis
  1. 14
  2. 1
  3. 3
  4. 2
  5. 1
  • 18 (9–33) overall†
  • 14 cases of active TB identified by claims data, but none confirmed on chart review
  • 3 cases of histoplasmosis were confirmed and 1 case of Cryptococcus
Schneeweiss et al, 2007 ([14]) | MD impression or diagnostic criteria on chart review
  1. Total
  2. Total minus candidiasis
  3. All TB
  4. Pulmonary TB
  5. Atypical mycobacteria
  6. Candidiasis
  7. Cryptococcus
  8. Aspergillosis
  1. 69
  2. 49
  3. 22
  4. 20
  5. 10
  6. 20
  7. 5
  8. 12
  1. 58 (46–70)
  2. 73 (61–85)
  3. 73 (54–92)
  4. 80 (62–98)
  5. 70 (42–98)
  6. 20 (2–38)
  7. 100 (45–100)
  8. 67 (40–94)
Grijalva et al, 2008 ([11]) | Medical chart review (organism identification required) | Principal vs. secondary positions: ICD-9 codes 117.3, 518.6, 484.6, 112.4, 112.5, 112.81, 112.83, 112.84, 114, 117.5, 321.0, 115 (21 candidiasis, 2 Cryptococcus, 2 aspergillosis, 1 histoplasmosis) | 26
  • Any field: 62 (41–80)
  • Principal: 100
  • Secondary: 50 (27–73)
TB     
Calderwood et al, 2010 ([24]) | Chart review; TB case must fulfill CDC criteria (for PPV, used denominator of all cases captured by all algorithms for identification and not shown here)
  1. Prescription for PZA
  2. ICD-9 for TB + order for AFB
  3. ICD-9 + ≥2 anti-TB drugs
  4. Union of 1 or 2 or 3
  • 6
  • Historical cohort: 183
  1. 67 (24–94)
  2. 33 (6–76)
  3. 83 (36–99)
  4. 100 (52–100)
Confirmed TB
  1. 57 (20–88)
  2. 67 (13–98)
  3. 86 (42–99)
  4. 64 (32–88) Historical cohort: 47 (41–54)
Fiske et al, 2012 ([25]) | TIMS uses 4 criteria to confirm cases: isolation of organism, positive AFB, clinical diagnosis, and provider diagnosis
  1. Physician ICD-9 codes for TB: 010–018, V12.01, V01.1, 647.3
  2. Pharmacy claims for ≥2 anti-TB medications on the same day
  3. Both 1 and 2
  1. 449
  2. 49
  3. 8 10 (TIMS)
  1. 60 (26.2–87.8)
  2. 20 (2.5–55.6)
  3. 20 (2.5–55.6)
  1. 1.3 (0.5–2.9)
  2. 4.1 (0.5–14.3)
  3. 25 (3.2–65.1)
Trepka et al, 1999 ([26])
  • TB registry (included only cases meeting CDC definitions)
  • Chart review
ICD-9 codes 010–018 in any position | 133
  • ICD-9: 47.7
  • Laboratory data: 82.2
  • ICD-9: 38.3
  • Laboratory data: 98.9
Winthrop et al, 2011 ([27])
  • Chart review: confirmed cases met CDC criteria for TB and published criteria for NTM
  • For VA cohort, also confirmed with local TB registry
  • For TB cases (sample algorithms):

    1. TB isolated in culture

    2. ≥1 code 010–018 for TB
    3. ≥2 codes 010–018 for TB
    4. Prescription for PZA or isoniazid/rifampin and TB codes
  • For NTM:

    1. 1 code 031 for NTM disease

    2. ≥1 NTM culture
    3. 1 and 2
  • 14 (KP)
  • 22 (VA)
  • KP (TB):

    1. 79 (42–92)

    2. 100 (77–100)
    3. 71 (42–92)
    4. 93 (66–100)
  • VA (TB):

    1. 55 (32–76)

    2. 77 (55–92)
    3. 64 (41–83)
  • KP (NTM):

    1. 50 (26–74)

    2. 100 (81–100)
    3. 50 (26–74)
  • VA (NTM):

    1. 65 (53–76)

    2. 76 (65–85)
    3. 42 (31–55)
  • KP (TB):

    1. 100 (69–100)

    2. 54 (42–92)
    3. 71 (42–92)
    4. 87 (60–98)
  • VA (TB):

    1. 100 (74–100)

    2. 9 (5–14)
    3. 21 (12–32)
  • KP (NTM):

    1. 82 (48–98)

    2. 78 (56–93)
    3. 90 (56–100)
  • VA (NTM):

    1. 74 (62–85)

    2. 41 (32–50)
    3. 77 (61–89)
Yokoe et al, 1999 ([28]) | Chart review: TB defined according to CDC criteria
  • 12 combinations of codes (see study for full list)
  • 1. ≥2 anti-TB drugs
  • 2. ≥3 anti-TB drugs (these had the best sensitivity and PPV)
45
  1. 89 (76–96)
  2. 84 (71–94)
  1. 30 (22–39)
  2. 50 (38–62)
Yokoe et al, 2004 ([29]) | TB registry supplemented by chart review; all cases met the following criteria: positive TB skin test, signs and symptoms compatible with TB, and treatment with ≥2 anti-TB medications | Pharmacy data: ≥2 anti-TB medications | 244 | 36 | 33

The characteristics of the studies are shown in Table 1. The majority of studies were from the US (18 of 24) and used ICD-9 codes (20 of 24). One study used The Health Improvement Network database, 3 used International Statistical Classification of Diseases and Related Health Problems, Tenth Revision (ICD-10) codes, and 1 used pharmacy data only (Table 1). Six of the studies were in populations of RA patients.

Bacterial infections

The studies on bacterial infections in general reported highly variable sensitivities and positive predictive values (PPVs) for the diagnosis of infections using administrative data, depending on the infection, the algorithm used, the population studied, and the reference standard (Table 2). Although some studies reported specificity, it was often unclear from the methods provided whether it had been computed correctly, so we limit our discussion here to sensitivity and PPV.
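The two measures discussed throughout this review come from cross-classifying the algorithm's flags against the reference standard (e.g., chart review). As a minimal illustrative sketch (the counts and the simple normal-approximation interval below are our own, not taken from any reviewed study):

```python
# Sensitivity and PPV from a 2x2 validation table.
# tp = algorithm-positive and truly infected; fp = algorithm-positive but not
# infected; fn = truly infected but missed by the algorithm.
from math import sqrt

def sensitivity(tp: int, fn: int) -> float:
    """Proportion of reference-standard cases that the algorithm flags."""
    return tp / (tp + fn)

def ppv(tp: int, fp: int) -> float:
    """Proportion of algorithm-flagged cases that are true cases."""
    return tp / (tp + fp)

def wald_ci(p: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """Normal-approximation 95% CI for a proportion (one of several options)."""
    half = z * sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

# Hypothetical validation sample: 80 true infections, 90 flagged charts.
tp, fp, fn = 70, 20, 10
sens = sensitivity(tp, fn)       # 70/80 = 0.875
positive_pred = ppv(tp, fp)      # 70/90 ≈ 0.778
print(f"sensitivity {sens:.1%}, 95% CI {wald_ci(sens, tp + fn)}")
print(f"PPV {positive_pred:.1%}, 95% CI {wald_ci(positive_pred, tp + fp)}")
```

Note that sensitivity and PPV use different denominators (true cases vs. flagged cases), which is why the reviewed algorithms can trade one for the other.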

Nine studies compared the diagnostic accuracy of different algorithms to identify bacterial infections ([8-13, 17, 22, 23]). Table 4 shows the types of algorithms employed. The algorithms tested varied in the number of diagnostic codes used to identify a specific infection, whether the code for infection was in the first position (the most responsible diagnosis) or a secondary position, combinations of code number and position, or the combination of diagnostic codes with other types of administrative data. Since studies that compared and contrasted differing algorithms offer significant insight into the use of administrative data for identifying infections, selected examples are described here, and the remainder is shown in Table 2.

Table 4. Types of algorithms using administrative data used to identify infections
Diagnostic codes: position 1 vs. any position in the discharge abstract
Diagnostic codes: including discharge and admission codes
Diagnostic codes: using >1 code (e.g., physician code and diagnostic code)
Pharmacy data: either alone or in combination with discharge codes or culture data
Diagnostic codes in combination with additional administrative data: age, length of stay, sex, death, season of admission, comorbidity
Diagnostic codes in combination with culture data
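The algorithm families in Table 4 can be sketched schematically; the code set, claim layout, and helper function below are invented for illustration and are not drawn from any reviewed study:

```python
# Hypothetical sketch: flagging a hospitalization for pneumonia from its list
# of diagnosis codes, with the principal diagnosis assumed to be first.
PNEUMONIA_CODES = {"480", "481", "482", "483", "485", "486"}  # truncated ICD-9 roots

def has_code(claim_codes: list[str], code_set: set[str],
             principal_only: bool = False) -> bool:
    """Flag a hospitalization if a code from code_set appears in the claim.

    principal_only=True restricts matching to the first (principal) position,
    which the reviewed studies generally found raises PPV at the cost of
    sensitivity; any-position matching does the reverse.
    """
    positions = claim_codes[:1] if principal_only else claim_codes
    # Compare on the 3-digit ICD-9 root, ignoring any decimal extension.
    return any(code.split(".")[0] in code_set for code in positions)

claim = ["038.9", "486", "428.0"]  # sepsis principal, pneumonia secondary
print(has_code(claim, PNEUMONIA_CODES))                       # True  (any position)
print(has_code(claim, PNEUMONIA_CODES, principal_only=True))  # False (principal only)
```

Widening the code set or accepting any position enlarges the flagged group, which is why those choices tend to raise sensitivity while diluting PPV.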

Patkar et al ([13]) conducted a cross-sectional study of hospitalized RA patients and examined the accuracy of 2 algorithms of ICD-9 codes for identifying infections that were established based on expert consensus. One algorithm was “restrictive” and one was more inclusive, with the goal of maximizing sensitivity. ICD-9 codes could be in any position in the (hospital) discharge data. The restricted set of codes had previously been validated ([14]). The reference standard was chart review by 2 independent trained reviewers, and cases were classified based on “clinical judgment” as “no infection,” “infection empirically treated,” or “definite infection.” The study also simultaneously tested the diagnostic characteristics of a set of infection criteria for 16 types of bacterial infections based on “clinical, microbiological, laboratory and radiographic” parameters. The study concluded that the sensitivity of infections identified using the comprehensive set of ICD-9 codes was 100% (95% confidence interval [95% CI] 96–100%) compared to 59% (95% CI 48–69%) for infections defined with the restricted set, using “definite” infections as the reference standard. The specificity of infections using the comprehensive set of codes was lower compared to the restricted set using the same reference standard (40%; 95% CI 31–49% versus 81%; 95% CI 73–87%). Lastly, the study also examined the diagnostic utility of 16 infection criteria used in combination with ICD-9 codes for identifying infections and found that the combination of the two led to the greatest accuracy (PPV 96%).
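A two-step "broad codes, then confirmatory criteria" strategy of the kind Patkar et al evaluated might be sketched as follows; all names, codes, and criteria here are hypothetical placeholders, not the study's actual definitions:

```python
# Hypothetical two-step screen: a broad ICD-9 code match maximizes sensitivity,
# then a chart-level criterion (culture or fever, as stand-ins) recovers PPV.
from dataclasses import dataclass

@dataclass
class Case:
    codes: list[str]          # ICD-9 codes on the hospitalization, any position
    positive_culture: bool    # stand-ins for microbiologic/clinical criteria
    fever: bool

BROAD_INFECTION_CODES = {"038", "481", "486", "590", "682"}  # illustrative roots

def screen(case: Case) -> bool:
    """Step 1: broad code match in any position (high sensitivity)."""
    return any(c.split(".")[0] in BROAD_INFECTION_CODES for c in case.codes)

def confirm(case: Case) -> bool:
    """Step 2: additionally require a supporting criterion (higher PPV)."""
    return screen(case) and (case.positive_culture or case.fever)

suspect = Case(codes=["682.6"], positive_culture=False, fever=True)
print(screen(suspect), confirm(suspect))  # True True
```

The confirmatory step can only shrink the flagged group, mirroring the pattern in the reviewed studies: combining codes with clinical or microbiologic data improves PPV but cannot raise sensitivity above that of the broad code screen.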

Grijalva et al ([11]) examined infections in Medicaid patients with RA, including community-acquired pneumonia, invasive pneumococcal disease, sepsis, and opportunistic mycoses. They compared algorithms in which the infection was coded in the first position (principal) versus any other position in the discharge summary. A medical chart review was the gold standard and only the PPV was reported. The PPV for all diagnoses was higher when the infection was identified by a code in the principal diagnostic position (except for invasive pneumococcal disease, which had a PPV of 100% regardless of the field).

Gedeborg et al ([10]) examined central nervous system infections, sepsis, and pneumonia in an intensive care unit (ICU) population defined by a number of algorithms, using either ICD-9 or ICD-10 codes in either the primary or secondary position in the discharge abstract. For sepsis, they also compared a "wide" algorithm (a larger number of codes) with more "narrow" criteria (a smaller number of codes). The gold standard was the ICU database, which was maintained by the ICU physicians and was separate from the discharge register. The authors demonstrated that restricting the case definition (using a narrower algorithm) increased the accuracy of the algorithm, but at the expense of sensitivity. Similar findings occurred when only the principal position in the discharge abstract was used in the algorithm. They also noted that the ICD-9 and ICD-10 codes performed differently (ICD-9 was more accurate for sepsis, and ICD-10 for pneumonia).

Whittle et al ([9]) selected a random sample of hospitalized patients with an ICD-9, Clinical Modification (ICD-9-CM) diagnosis of pneumonia. They excluded patients with hospital-acquired pneumonia and patients with human immunodeficiency virus/AIDS or organ transplants. They compared 3 administrative data–based algorithms for identifying subjects with pneumonia against the reference standard of clinical chart review by blinded abstractors with medical training. The algorithms evaluated were: 1) an algorithm developed by the authors based on hospital discharge data, including ICD-9-CM codes and a patient management categories (PMC) system, which grouped patients based on related clinical diagnoses; 2) an algorithm using diagnosis-related group (DRG) classification; and 3) the presence of a pneumonia ICD-9-CM code in the principal diagnosis position of hospitalization data. The first algorithm (using PMC and ICD-9-CM codes) had a sensitivity of 89%, a specificity of 80%, and a PPV of 89%. Interestingly, algorithm 3 (using only an ICD-9-CM code for pneumonia in the principal diagnosis position) had a similar sensitivity (84%), specificity (86%), and PPV (92%) and was less complex to use. Lastly, algorithm 2 (based on DRGs) had a lower sensitivity for identifying pneumonia (74%), but had the highest PPV (93%). Some comorbidities, including lung cancer, made it more difficult to classify cases, but overall accuracy did not vary by age, number of secondary diagnoses, or vital status at discharge.
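The accuracy measures reported throughout this review (sensitivity, specificity, PPV) derive from a simple 2×2 comparison of an algorithm's flags against the chart-review reference. A minimal sketch, with hypothetical paired data:

```python
# Sketch of the standard diagnostic accuracy measures computed from paired
# boolean lists: algorithm flags vs. a chart-review reference standard.

def diagnostic_accuracy(algorithm_flags, reference_flags):
    """Return sensitivity, specificity, and PPV from paired boolean lists."""
    tp = sum(a and r for a, r in zip(algorithm_flags, reference_flags))
    fp = sum(a and not r for a, r in zip(algorithm_flags, reference_flags))
    fn = sum(not a and r for a, r in zip(algorithm_flags, reference_flags))
    tn = sum(not a and not r for a, r in zip(algorithm_flags, reference_flags))
    return {
        "sensitivity": tp / (tp + fn),  # flagged fraction of true cases
        "specificity": tn / (tn + fp),  # unflagged fraction of non-cases
        "ppv": tp / (tp + fp),          # true-case fraction of flagged records
    }

# Hypothetical example: 10 charts, algorithm flags vs. chart-review truth.
algo  = [True, True, True, False, False, True, False, False, True, False]
truth = [True, True, False, True, False, True, False, False, True, False]
m = diagnostic_accuracy(algo, truth)
```

With these 10 hypothetical charts (4 true positives, 1 false positive, 1 false negative, 4 true negatives), all three measures work out to 0.8.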

Aronsky et al ([8]) examined 5 different algorithms with specific codes to identify pneumonia using hospitalization discharge data: 3 of the algorithms utilized ICD-9 codes in varying number or position, with the third algorithm identifying severe pneumonia cases (pneumonia cases with sepsis or respiratory failure), and the last 2 algorithms used DRGs in different combinations. The authors had a complex 3-step reference standard for pneumonia, which is described in detail in the study ([8]). The authors examined emergency department patients, 73.2% of whom required hospitalization (data are presented for all patients and hospitalized patients separately). Lastly, they combined chart review with the claims-based algorithms described above to evaluate whether the patients identified by the 5 different algorithms had different features with respect to age, sex, hospitalization rate, pneumonia severity and inpatient mortality, cost, and length of stay for the subset that was hospitalized.

Algorithms 2 and 3 included a greater number of diagnostic codes and had the highest sensitivity and PPV (Table 2), but results varied between patients who required hospitalization and those who did not. In the entire sample, the algorithms had a sensitivity of 65–66% and a PPV of 80%, whereas when hospitalized patients were examined separately, algorithms 2 and 3 performed slightly better, with a sensitivity of 68–69% and a PPV of 84%. When claims-based data were combined with chart review, length of stay and costs were lower using DRG-based algorithms than with the reference standard, and mortality was slightly lower using one of the DRG algorithms (algorithm 4); the other features described above did not differ measurably between the reference standard and the patients identified by the 5 algorithms.

Yu et al ([23]) examined diagnostic codes for pneumonia combined with other types of administrative data, including demographic features (e.g., age, sex, length of stay, season), relevant comorbidities (e.g., asthma, heart failure), and procedure codes, and examined the performance of various algorithms against the gold standard of chart review using classification and regression tree (CART) analysis. They determined that the performance of the algorithms varied by age group. Overall, compared to models using only a primary discharge diagnosis code for pneumonia, the CART algorithms improved sensitivity by 18–32%, with only a small (2–7%) decrease in PPV.
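The kind of rule a CART analysis might learn can be sketched by hand. The sketch below is purely illustrative; the field names, age cutoff, and branching logic are assumptions for demonstration, not the published model. It shows how combining a secondary-position code with demographic and comorbidity features can recover cases that a primary-code-only rule misses, raising sensitivity.

```python
# Illustrative, hand-written decision-tree-style rule (NOT the published
# CART model): combine a pneumonia code with demographic/comorbidity fields.

def cart_like_rule(rec):
    """Flag pneumonia using a small tree-style rule over claims features."""
    if rec["primary_dx_pneumonia"]:
        return True  # primary-position code: flag directly
    # A secondary-position code plus supporting features recovers cases a
    # primary-code-only rule would miss, improving sensitivity.
    if rec["any_dx_pneumonia"] and (rec["age"] >= 65 or rec["heart_failure"]):
        return True
    return False

records = [
    {"primary_dx_pneumonia": True,  "any_dx_pneumonia": True,  "age": 40, "heart_failure": False},
    {"primary_dx_pneumonia": False, "any_dx_pneumonia": True,  "age": 70, "heart_failure": False},
    {"primary_dx_pneumonia": False, "any_dx_pneumonia": True,  "age": 50, "heart_failure": False},
    {"primary_dx_pneumonia": False, "any_dx_pneumonia": False, "age": 80, "heart_failure": True},
]
cart_flags = [cart_like_rule(r) for r in records]
```

The second record (secondary-position code, age ≥ 65) is flagged here but would be missed by a primary-code-only rule; the third shows that the extra branches still require supporting features, limiting the loss of PPV.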

Opportunistic infections

Opportunistic infections were examined in 9 articles ([6, 11, 14, 24-29]). Three studies examined a variety of opportunistic infections ([6, 11, 14]). In the study by Schneeweiss et al ([14]), opportunistic infections, including pulmonary tuberculosis (TB), atypical mycobacteria, candidiasis, Cryptococcus, and aspergillosis, were examined. Candidiasis had the lowest PPV (20%) and the remainder had PPVs that varied between 67% and 100%. The overall PPV for the identification of an opportunistic infection using administrative data was 58% (95% CI 46–70%), and increased to 73% (95% CI 61–85%) if Candida infections were excluded. Grijalva et al (described above) showed a high PPV for opportunistic mycoses (100%) when the first diagnostic position was used ([11]). The second article by Curtis and colleagues ([6]) reviewed adverse events, including opportunistic infections, in patients with RA or Crohn's disease treated with anti–tumor necrosis factor α agents. Patients were identified using medical and pharmacy claims data from a large US health care organization. The opportunistic events of interest, including active TB, Pneumocystis jiroveci, histoplasmosis, coccidioidomycosis, and Cryptococcus, were identified using ≥1 diagnosis code on any type of claim after the index date (including physician visits, diagnostic tests, or radiologic studies). Other adverse events captured by the study included aplastic anemia, non-Hodgkin's lymphoma, and “lupus-like syndrome.” The reference standard was medical chart review using an “evidence-based, pilot-tested data abstraction form.” The PPV of claims data for confirmed adverse events was poor overall (including opportunistic infection and other adverse events) at 18% (95% CI 9–33%). Individual PPVs for opportunistic infections were not reported. For some infections, including TB, none of the cases could be confirmed on chart review. 
Of note, the PPVs of claims from inpatient settings were overall higher than those from outpatient settings, and the PPV was higher if >1 diagnostic claim was used in the algorithm for case definition. Because there were very few infectious complications other than TB (n = 14 with TB and n = 7 other), it is not possible to comment on the PPV of specific opportunistic infections identified using administrative data.

Six studies specifically examined algorithms for identifying TB ([24-29]). Calderwood et al ([24]) developed algorithms for TB detection incorporating ICD-9 codes for TB, pharmacy data, and an order for acid-fast bacilli. They tested the algorithm in 3 separate cohorts (a development cohort, a historical cohort, and a prospective cohort; the first 2 are shown in Table 3). Although the PPV of their screening criteria for confirmed active TB was modest (64%), the PPV for physician-suspected active TB was 91%, and the algorithm achieved its aim of high sensitivity (100%). They then implemented their algorithm during 18 months of prospective followup for physician-suspected TB and demonstrated a high PPV for physician-suspected active TB (100%; only 1 case was not confirmed); however, this represents only 7 cases and further validation is required.
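A Calderwood-style screen combining the three data sources can be sketched as follows. The field names, the two-drug threshold, and the ICD-9 "011" prefix are illustrative assumptions rather than the published criteria; the sketch only shows the principle that requiring corroboration from pharmacy or microbiology data filters out records carrying a TB code alone.

```python
# Hedged sketch of a multi-source TB screen: an ICD-9 TB code must be
# accompanied by pharmacy or microbiology evidence. Field names, drug
# threshold, and code prefix are illustrative assumptions.

def tb_screen(patient):
    """Flag suspected active TB from ICD-9, pharmacy, and lab-order data."""
    has_tb_code = any(code.startswith("011") for code in patient["icd9"])
    on_tb_drugs = len(patient["tb_drugs"]) >= 2  # e.g., isoniazid + rifampin
    afb_ordered = patient["afb_order"]           # acid-fast bacilli order placed
    return has_tb_code and (on_tb_drugs or afb_ordered)

patients = [
    {"icd9": ["011.93"], "tb_drugs": ["isoniazid", "rifampin"], "afb_order": False},
    {"icd9": ["011.93"], "tb_drugs": [], "afb_order": False},  # code alone: excluded
    {"icd9": ["486"],    "tb_drugs": [], "afb_order": True},   # no TB code: excluded
]
tb_flags = [tb_screen(p) for p in patients]
```

The second patient illustrates why diagnostic codes alone overestimate TB: a code assigned during workup, with no treatment or culture order, is not flagged.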

In a recently published study by Fiske et al ([25]) of RA Medicaid patients using a TB registry as the gold standard, ICD-9 data alone grossly overestimated the number of TB cases (449 versus 10 confirmed cases in the registry); even when ICD-9 codes were combined with pharmacy data, the false-positive rate was still 75%. Trepka et al ([26]) also demonstrated that the sensitivity and PPV for discharge diagnosis for TB are low (47.7% and 38.3%, respectively).

Yokoe et al ([29]) examined pharmacy data alone for identification of TB using a definition of prescription for ≥2 anti-TB medications and similarly found very low sensitivity and PPV (36% and 33%, respectively).

Different algorithms may perform differently in different administrative databases. Winthrop et al ([27]) examined 2 different administrative data sources, a Veterans Affairs data source and data from Kaiser Permanente, and found differing accuracy of their algorithms for identification of TB and nontuberculous mycobacteria (examples of some of the algorithms are shown in Table 3). This study also demonstrated that inclusion of microbiologic evidence is a highly sensitive and accurate method for case ascertainment and that TB diagnostic codes in combination with pharmacy data were superior to TB codes alone (Table 2).

Study quality

The quality of the studies was assessed using a standardized assessment ([5]), and key features of our quality review are shown in Table 5. Overall, the studies were rated as good quality. Twenty-two studies (91.7%) specifically stated in the introduction that one of the goals of the study was disease identification and validation, and 19 studies (79.2%) reported a PPV and/or negative predictive value, with 15 studies (62.5%) reporting 95% CIs. Areas for quality improvement include the following: only 10 studies (41.7%) described the training and expertise of those reading the reference standard, only 6 studies (25%) clearly stated that the readers of the reference standard were blinded, and only 8 studies (33.3%) reported ≥4 estimates of diagnostic accuracy (Table 5).

Table 5. Results from quality assessment of validation studies of administrative data used to identify infections*
| Author, year (ref.) | Intro: states disease identification and validation as goals of study | Methods: describes disease classification | Number and training of those reading the reference standard described | Readers of the reference standard were blinded | Results: study flow diagram | Results: ≥4 estimates of diagnostic accuracy are reported | Results: for relevant subgroups additional data are presented | Results: PPV reported | Results: 95% CI reported | Discussion: applicability findings discussed |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Aronsky et al, 2005 ([8]) | Yes | ? | ? | ? | No | Yes | No | Yes | Yes | Yes |
| Calderwood et al, 2010 ([24]) | Yes | Yes | Yes | ? | No | No | No | Yes | Yes | Yes |
| Curtis et al, 2007 ([7]) (bacterial) | No | Yes | Yes | Yes | No | No | No | No | No | ? |
| Curtis et al, 2007 ([6]) (opportunistic) | Yes | Yes | ? | ? | No | No | No | Yes | Yes | Yes |
| Fiske et al, 2012 ([25]) | Yes | Yes | ? | ? | No | Yes | No | Yes | Yes | Yes |
| Gedeborg et al, 2007 ([10]) | Yes | Yes | Yes | ? | No | Yes | No | No | Yes | Yes |
| Grijalva et al, 2008 ([11]) | Yes | Yes | Yes | Yes | No | No | No | Yes | Yes | Yes |
| Guevara et al, 1999 ([17]) | Yes | Yes | ? | Yes | No | Yes | No | Yes | No | Yes |
| Jackson et al, 2003 ([18]) | No | Yes | No | ? | No | No | No | Yes | No | No |
| Landers et al, 2010 ([12]) | Yes | Yes | NA | NA | Yes | No | No | No | Yes | Yes |
| Madsen et al, 1998 ([15]) | Yes | Yes | No | ? | No | No | No | Yes | Yes | Yes |
| Marrie et al, 1987 ([19]) | Yes | ? | No | ? | Yes | No | No | Yes | No | ? |
| Meropol and Metlay, 2012 ([20]) | Yes | Yes | No | No | No | No | No | Yes | Yes | Yes |
| Ollendorf et al, 2002 ([16]) | Yes | Yes | ? | ? | No | No | No | No | No | Yes |
| Patkar et al, 2009 ([13]) | Yes | ? | Yes | ? | Yes | Yes | No | Yes | Yes | Yes |
| Schneeweiss et al, 2007 ([14]) | Yes | Yes | Yes | ? | No | No | No | Yes | Yes | Yes |
| Skull et al, 2008 ([21]) | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Trepka et al, 1999 ([26]) | Yes | Yes | Yes | ? | No | No | Yes | Yes | No | Yes |
| Van de Garde et al, 2007 ([22]) | Yes | Yes | ? | ? | No | No | Yes | No | No | Yes |
| Whittle et al, 1997 ([9]) | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes | No | Yes |
| Winthrop et al, 2011 ([27]) | Yes | Yes | No | ? | No | No | Yes | Yes | Yes | Yes |
| Yokoe et al, 1999 ([28]) | Yes | Yes | No | ? | No | No | No | Yes | Yes | Yes |
| Yokoe et al, 2004 ([29]) | Yes | Yes | ? | ? | No | No | Yes | Yes | No | Yes |
| Yu et al, 2011 ([23]) | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes | Yes | Yes |

* PPV = positive predictive value; 95% CI = 95% confidence interval; NA = not applicable.

DISCUSSION

Infectious complications are a significant cause of morbidity and mortality in rheumatic diseases, and infections are an important outcome of interest in pharmacoepidemiologic studies evaluating adverse events of medications used to treat rheumatic diseases. The results of our review therefore have important implications for researchers and policy planners using administrative data for disease surveillance. The principal finding of our study is that hospitalization administrative data have variable accuracy for identification of serious infections as outcomes or comorbidities, depending on the type of infection, source of administrative data, population studied, and algorithm used. Although we initially set out to define the most appropriate algorithms for identifying infections using administrative data, it is apparent from our review that we cannot endorse a specific algorithm, since the choice of algorithm depends on the purpose of the study (i.e., whether it is more important to maximize sensitivity or PPV). Additionally, no specific threshold values exist for accuracy measures that are defined as acceptable ([10]). We did uncover certain principles for choosing algorithms to identify infections in administrative data that are important to consider when designing a study, and we have summarized these below.

Our study has a number of limitations. First, our search strategy was designed to broadly evaluate infections (specifically, opportunistic infections and those requiring hospitalization) and their identification using administrative data. As such, we did not search medical databases with an exhaustive list of individual types of infections, and thus there may be specific infections for which validation in administrative data sets exists but was not identified by our search. Additionally, the choice of index terms for this systematic review was difficult because administrative databases are not well indexed in the literature databases, and therefore relevant studies may have been missed. A major limitation of applying the algorithms identified by our search is that the majority utilized ICD-9 codes, which will not be applicable in administrative data sets in jurisdictions using ICD-10 codes. It is also possible that the performance of diagnostic coding algorithms varies by jurisdiction depending on coding practices (this was shown in one included study that used cross-validation in a separate administrative database [27]). Finally, comparison of algorithm performance across studies was limited by the heterogeneity of reference standards, which ranged from clinical impression based on chart review to specific "evidence-based" criteria and even identification of events in prospective cohorts.

Despite these limitations, this work demonstrates some general key principles. A number of studies compared differing algorithms to identify infections and demonstrated that increasing the number of diagnostic codes for infections improves sensitivity, although often at the expense of specificity ([13]). The use of multiple data sources for identifying infections also improved accuracy. For example, using infection diagnostic codes from hospital discharge data in combination with microbiologic or pharmacy data improved sensitivity and specificity for the diagnosis of TB.

The position of the diagnostic code is also important. Using infection diagnostic codes in any position of hospitalization data improved sensitivity compared to an algorithm using only diagnostic codes in the first position ([17]); however, the latter improved PPV ([11]).
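The principal-position versus any-position distinction amounts to a one-line difference in the matching rule. A sketch with hypothetical discharge records and ICD-9 pneumonia codes:

```python
# Sketch contrasting "principal position only" with "any position" matching.
# Records and the pneumonia code list are hypothetical examples.

PNEUMONIA_CODES = {"480", "481", "482", "486"}

def match_principal(codes, target):
    """Match only the first-listed (principal) diagnosis."""
    return bool(codes) and codes[0] in target

def match_any(codes, target):
    """Match a code in any diagnosis position."""
    return any(c in target for c in codes)

discharges = [
    ["486", "428.0"],  # pneumonia in the principal position
    ["428.0", "486"],  # pneumonia in a secondary position
    ["428.0"],         # no pneumonia code
]
principal = [match_principal(d, PNEUMONIA_CODES) for d in discharges]
any_pos = [match_any(d, PNEUMONIA_CODES) for d in discharges]
```

The any-position rule captures the secondary-position case that the principal-only rule misses (higher sensitivity), while the principal-only rule excludes secondary codes that more often reflect comorbid or suspected disease (higher PPV).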

Finally, the strategy of using ICD-9 diagnostic codes to screen for infections, followed by chart review to confirm infections, leads to improved PPV ([8, 13, 17]); however, access to medical records, or even to limited patient data from discharge summaries, is not available in many centers and may be impractical for large population studies.

Although in general the algorithms presented to identify bacterial infections from administrative data had reasonable sensitivity, we identified some significant exceptions worth noting. For example, some infectious complications have a very low PPV, such as systemic candidiasis ([14]). Sepsis had highly variable estimates of accuracy ([10, 11, 15, 16]). Both sepsis and candidiasis have complex definitions and further validation is likely required prior to applying the presented algorithms in different databases.

In contrast to the available information for bacterial infections, less data were available evaluating the accuracy of administrative data to identify opportunistic infections. Furthermore, the number of patients in each study was relatively small (especially for non-TB opportunistic infections). This is likely because opportunistic infections are rare. Additionally, information about the occurrence of opportunistic infections such as TB is often maintained by additional agencies such as public health departments, which may have more accurate information on infections, but the data may not be linkable to other sources of administrative data.

Hospital records alone are an inaccurate data source for identifying TB, and the primary reason for this discrepancy may be that cases are often coded as TB during investigation when the diagnosis has not yet been proven. Use of pharmacy data to identify TB can also be problematic if patients are obtaining medications from public health departments, which are not captured in pharmacy billing databases. Our results suggest that the methods most likely to be successful in identifying opportunistic infections would require linkage to additional public health data sources for reportable diseases such as TB and/or addition of case confirmation using culture data.

Our quality assessment of validation studies using administrative data identified some deficiencies: many of the studies did not report the training and expertise of the individuals reading the reference standard, or whether they were blinded, both of which could introduce bias. Additionally, many of the studies did not report 4 or more tests of diagnostic accuracy, which has been listed as a quality criterion in validation studies of administrative data ([5]). Among studies that did report multiple measures, the methods for calculating specificity were often not adequately described, and we have therefore chosen not to report this measure. Since the quality criteria for validation studies of administrative data were only recently published, we hope that future studies of this nature will adhere more firmly to these recommendations ([5]).

In conclusion, when using administrative data to identify serious infections as outcomes or covariates, hospitalization data can be used to identify serious bacterial infections. If greater sensitivity is desired, using a more comprehensive definition including a greater number of individual infection codes and/or using a diagnostic code for infection found in any position of the claims data is recommended. Current data are not sufficient to recommend the use of administrative data to identify opportunistic infections without multiple data linkage to ensure adequate specificity.

AUTHOR CONTRIBUTIONS

All authors were involved in drafting the article or revising it critically for important intellectual content, and all authors approved the final version to be published. Dr. Barber had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Study conception and design. Barber, Lacaille, Fortin.

Acquisition of data. Barber, Fortin.

Analysis and interpretation of data. Barber, Lacaille, Fortin.

REFERENCES

Supporting Information

Additional Supporting Information may be found in the online version of this article.

ACR_21959_sm_SupplApp.doc (34K): Supplementary Data

Please note: Wiley Blackwell is not responsible for the content or functionality of any supporting information supplied by the authors. Any queries (other than missing content) should be directed to the corresponding author for the article.