A peer comparison program for the quality assurance of human papillomavirus DNA detection using the Digene Hybrid Capture II/SurePath method shows excellent analytic interlaboratory correlation †
Dr. Wilbur is a member of the TriPath Imaging, Inc. Speakers' Bureau, and his institution has received grant support from TriPath Corporation and Cytyc Corporation.
Interlaboratory peer comparison programs are quality-assurance activities mandated by the Clinical Laboratory Improvement Amendments of 1988. No commercial program designed for cytology laboratories performing only human papillomavirus (HPV) DNA testing is currently available. In this report, the authors provide the results from a self-developed program between 2 cytology laboratories.
Between 4 and 11 SurePath liquid-based cervical cytology samples were selected at each of the 2 participating laboratories each quarter and exchanged without accompanying patient information. Samples were selected to test both positive and negative high-risk HPV DNA results in roughly equivalent numbers. Samples were run with the Hybrid Capture II method using each laboratory's standard procedure. The result obtained was compared with the originating laboratory's result. Correlation was compared on an ongoing basis as a method to assess analytic performance.
Over a 3-year period, 12 exchanges took place, constituting 113 total specimens. Nine exchanges, comprising 76 specimens, had 100% correlation; in the remaining 3 exchanges, 4 of 37 specimens had discordant results. Overall, this represented a 97% correlation (109 of 113 specimens) of results between laboratories. All 4 discordant cases were reported as negative by the originating laboratory and positive by the exchange laboratory (2 originating from each laboratory).
The interlaboratory peer comparison result of 97% concordance demonstrated excellent analytic agreement between the HPV DNA-detection procedures of each laboratory. All discordant cases were “negative to positive” and were distributed equally by originating laboratory. The procedure was easily set up and provided assurance to each laboratory of ongoing performance for the detection of the HPV DNA analyte. Cancer (Cancer Cytopathol) 2007. © 2007 American Cancer Society.
The test for the detection of high-risk human papillomavirus (hrHPV) DNA is a mainstay for triage management decisions with cervical cytology specimens interpreted as atypical squamous cells of undetermined significance (ASC-US) and, more recently, for primary screening in conjunction with the Papanicolaou (Pap) test in women aged ≥30 years.1 Because ASC-US interpretations can comprise as much as 5% to 10% of all Pap tests,2 the ability to divide these women into high risk (requiring colposcopy) and low risk (requiring only repeat Pap tests) is a very important cost-savings decision point. In addition, the number of women who receive HPV testing in conjunction with cytologic screening is growing rapidly. Therefore, it is extremely important that the operating characteristics of the test used be well characterized (especially positive and negative predictive values) and that consistent quality be maintained in laboratory performance.3 Hence, the regulations of the Clinical Laboratory Improvement Amendments of 1988 (CLIA 88) require validation of methods and ongoing quality-assurance (QA) measures.4
Section 493.1213 Standard; Establishment and Verification of Method Performance Specifications: Prior to reporting patient test results, the laboratory must verify or establish, for each method, the performance specifications for the following performance characteristics: accuracy; precision; analytical sensitivity and specificity, if applicable; the reportable range of patient test results; the reference range(s) (normal values); and any other applicable performance characteristic.
To determine the operating characteristics of the hrHPV DNA assay, analytic and clinical validations should be performed to insure not only that sensitivity is taken into account but also that clinical relevance of the result can be assured.5 The latter component is most important in the context of insuring that current management guidelines (which are based on known test performance parameters) will achieve the anticipated clinical utility and, hence, patient safety and cost effectiveness. Current methods to insure these operating characteristics are the U.S. Food and Drug Administration (FDA) approval process and in-laboratory “homebrew” validations for non-FDA-approved methods.4
In addition to validation studies, CLIA 88 mandates that laboratories performing regulated tests insure ongoing technical quality once a test has been validated. In addition to the routine use of known positive and negative control specimens in a quality-control “intratest” mode, another mandated QA process to test ongoing testing quality is the proficiency testing process.6
Section 493.801 Condition; Enrollment and Testing of Samples
Each laboratory must enroll in a proficiency testing program that meets the criteria…
Section 493.831 Standard; Virology
(b) Failure to participate in a testing event is unsatisfactory performance and results in a score of 0 for the testing event.
One method of satisfying this proficiency testing requirement for laboratory analytes is through an interlaboratory peer comparison program. In this procedure, masked specimens with known results are sent to laboratories from a vendor or are passed between laboratories, and the results obtained by the test laboratory are compared with the reference results from the vendor or originating laboratory, respectively. Such programs are well established for common analytes in the clinical laboratories and often are maintained by vendors such as the College of American Pathologists (CAP)7 or the American Proficiency Institute.8 Where no such program exists, it is incumbent on the laboratory to develop and implement such programs to insure ongoing quality in test performance. This report details how that process was established at 2 laboratories. The method used is not currently FDA-approved, and no program testing this methodology is available currently from a commercial vendor.
MATERIALS AND METHODS
This protocol received approval from the Human Subjects Institutional Review Board. Specimens were selected from the routine hrHPV DNA detection assays performed in 2 noncompeting laboratories. All tests selected were SurePath liquid-based cervical cytology samples (TriPath Imaging, Burlington, NC) that used Hybrid Capture II hrHPV DNA assays (Digene, Gaithersburg, MD). A previously described method was used for test performance.9 Between 4 and 11 SurePath samples were selected at the 2 noncompeting laboratories on a quarterly basis (the final developed protocol now stipulates that the number selected in each laboratory will be 5 per quarter). Samples were selected to test both positive and negative results in roughly equivalent proportions. All selected samples were less than 3 weeks from the collection date to insure consistent results within the parameters of the manufacturer's stability data for room temperature storage of SurePath samples. The selection criterion for a negative test event was a relative light unit (RLU) score <0.5, and the criterion for a positive test was an RLU score >5. These cutoff points were selected because of concern related to the reproducibility of results in the "equivocal" range of scores near the negative to positive "cutoff" value of 1.0 RLU. There was no attempt to select patients who had a particular diagnosis; all specimens were eligible for selection, and the specimen RLU value and the adequacy of the sample were the only criteria. Samples were masked for all patient-protected information and were exchanged with the second laboratory. The samples exchanged were residual cell suspensions that had been preprocessed through the SurePath density-gradient centrifugation process. They were the same cell suspensions that had been used for primary HPV testing in the originating laboratory. The testing sample volume was 75 μL, the same volume used for the initial clinical testing event.
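The specimen-selection rule described above reduces to simple thresholding of the RLU score. The following sketch is our illustrative rendering of that rule (the function name and Python implementation are not part of the study protocol); note that clinical reporting itself still uses the 1.0 RLU negative/positive cutoff, and the 0.5/5.0 thresholds govern only eligibility for exchange.

```python
def classify_for_exchange(rlu):
    """Apply the study's specimen-selection cutoffs to a Hybrid Capture II
    relative light unit (RLU) score.

    RLU < 0.5  -> eligible as a negative challenge
    RLU > 5.0  -> eligible as a positive challenge
    otherwise  -> excluded (too close to the 1.0 negative/positive cutoff)
    """
    if rlu < 0.5:
        return "negative challenge"
    if rlu > 5.0:
        return "positive challenge"
    return "excluded (equivocal range)"
```

Deliberately excluding the equivocal band avoids scoring discordances that reflect only assay variability near the clinical cutoff rather than a true analytic problem.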
The sample volume sent to the exchange laboratory varied but was not less than 0.5 mL to insure adequate material for initial testing and for retesting if necessary. Tests for hrHPV DNA were run on the selected samples in the exchange laboratory using the identical method that was used in the originating laboratory. Results from the originating and exchange laboratories were reported as either “negative for hrHPV” or “positive for hrHPV” with a recording of the absolute “score” in RLU for each sample run in each laboratory. Results of the assays from the 2 laboratories were compared based on the “negative” or “positive” result obtained, and the findings were tabulated. Analysis of the RLU level for each specimen result was performed in discordant cases, and repeat tests were performed in these cases (where specimen volume was adequate) to examine the potential reasons for discrepancy.
RESULTS

The period of this report included results obtained over 3 years with 12 quarterly exchanges of specimens. During the study period, 113 total samples were exchanged. There were 9 exchanges, comprising 76 specimens, in which no discrepant results were obtained. There were 3 exchanges, comprising 37 specimens, that yielded 4 discordant specimens. Overall, the concordance rate was 97% (109 of 113 total challenges matched). All 4 discordant specimens showed results of "negative for hrHPV" in the originating laboratory and "positive for hrHPV" in the exchange laboratory. For a summary of the study's primary results, see Table 1. Two discordant results originated in each of the laboratories. The specifics of the discordant cases are illustrated in Table 2. Two discordant cases showed RLU values clustered around the cutoff value between negative and positive results; that is, the positive result in the exchange laboratory was just above an RLU level of 1.0. These cases showed exchange laboratory values of 1.19 RLU and 1.86 RLU. A repeat assay performed on the specimen with 1.19 RLU showed a value of 1.53 RLU; a repeat was not performed on the specimen with 1.86 RLU. One of these specimens was interpreted as ASC-US, and the other was interpreted as negative for intraepithelial lesion or malignancy (NILM). Both women have had NILM repeat cytology specimens over 2-year and 3-year follow-up periods, respectively. Two discordant cases showed RLU values in the exchange laboratory that were substantially above the negative to positive cutoff level. One result showed an RLU of 96.29, and the other showed an RLU of 7.63 in the exchange laboratory. Repeat testing was performed on both of these specimens and showed results of 118.2 RLU and 6.58 RLU, respectively. One of these cases was interpreted as ASC-US and had 2 repeat ASC-US specimens within the next year, both of which also were negative for hrHPV DNA.
The other specimen was interpreted as NILM; in follow-up, that woman has had 2 NILM repeat cervical cytology specimens. Therefore, all patient follow-up supported the original negative hrHPV DNA results. In addition, all exchange laboratory retesting was consistent with the initial exchange result.
Table 1. Summary of the Overall Results of the Current Study

| Measure | Value |
| --- | --- |
| Duration of program, y | 3 |
| No. of exchanges | 12 |
| Total no. of specimens exchanged | 113 |
| No. of exchanges with 100% concordance (%) | 9 (75) |
| No. of exchanges with discordant cases (%) | 3 (25) |
| Total no. of discordant cases (%) | 4 (3.5) |
| Overall concordance rate, % | 97 |
| No. of discordant results per laboratory | 2 |
Table 2. Discordant Results Obtained, the Result at Each Institution, the Results of Retesting, the Original Cytologic Interpretation of That Specimen, and the Follow-up Obtained

| Case | Originating laboratory, RLU | Exchange laboratory, RLU (repeat) | Cytologic interpretation | Follow-up |
| --- | --- | --- | --- | --- |
| | Laboratory 1 | Laboratory 2 | | |
| 1 | 0.17 | 96.29 (repeat, 118.2) | NILM | Negative |
| 2 | 0.17 | 1.19 (repeat, 1.53) | ASC-US | Negative |
| | Laboratory 2 | Laboratory 1 | | |
| 3 | 0.2 | 7.63 (repeat, 6.58) | ASC-US | Negative |
| 4 | 0.15 | 1.86 (repeat, ND) | NILM | Negative |

RLU indicates relative light units; NILM, negative for intraepithelial lesion or malignancy; ASC-US, atypical squamous cells of undetermined significance; ND, not done.
DISCUSSION

Patient management, safety, and cost-effectiveness issues in laboratory testing are tied to consistency of the results obtained. Therefore, it is important that laboratories insure the initial and ongoing quality of tests performed through high-level laboratory practices, which include the initial validation of methods, quality control with the use of appropriate intratest controls, and external peer comparison programs. Peer comparison programs are used commonly in anatomic and clinical laboratories to monitor performance in comparison to an accepted external "norm." Such programs exchange slides with known reference diagnoses or specimens that contain known analytes, respectively, to compare individual laboratory performance. These specimens are then processed in the laboratory in as close to a "clinically routine" fashion as is practical. With such a process, a laboratory's ongoing performance can be measured, and possible problems can be identified. This process is included in the regulations of CLIA 88 and also in the laboratory accreditation checklists of the CAP10 and the Joint Commission on Accreditation of Healthcare Organizations (JCAHO).11 Commercial products that test many common analytes are available widely from vendors such as the CAP and the American Proficiency Institute. A standard "pass" rate for such peer review challenges generally is determined by the Centers for Medicare and Medicaid Services (CMS) or, if it is not specified, then by a panel of experts for that analyte. CMS has determined that a pass rate for viral analyte detection should be set at 80%.12
Section 493.831 Standard; Virology
(a) Failure to attain an overall testing event score of at least 80 percent is unsatisfactory performance.
The CAP maintains a current peer comparison program for HPV detection; however, that program distributes samples suspended in Digene Standard Transport Media, which is an FDA-approved platform for HPV DNA testing. Because no commercially available peer comparison program is available currently for laboratories that perform hrHPV DNA testing on SurePath collected samples, our laboratories developed an in-house program and began to exchange samples.
Our data indicate that the SurePath/Hybrid Capture II assay has an excellent interlaboratory concordance rate of 97%, indicating acceptable analytic performance in each participating laboratory compared with the CMS passing score of 80%. All discrepant cases were of 1 type, that is, they changed from "negative to positive," and were distributed evenly between the laboratories. Two of the discrepancies were results clustered around the negative/positive cutoff value of 1.0 RLU and may have reflected the statistical variability of the testing process near the cutoff. A repeat on 1 of those specimens confirmed the result obtained. The other 2 discrepancies were "negative" tests in the originating laboratory that reportedly were "positive" in the exchange laboratory at a level well above the negative/positive cutoff value. Both of those samples showed correspondingly high positive results on repeat testing. In this circumstance, the possible causes of the discrepancy would include contamination of the exchange laboratory specimen, mislabeling of the specimen, or a switch between the test specimen and another specimen during some phase of the program. Based on an analysis of the cytology results from all 4 samples, which had either NILM or ASC-US interpretations, and the completely negative follow-up of all 4 women, it is likely that these indeed were hrHPV DNA-negative samples. Thus, the exchange laboratory's discordant positive results most likely arose from one of the causes described above, although the actual reason(s) could not be resolved in the current report.
It was demonstrated previously that the Digene Hybrid Capture II assay has robust reproducibility in a report from the Atypical Squamous Cells of Undetermined Significance/Low-Grade Squamous Intraepithelial Lesion Triage Study.13 The authors of that report randomly retested 5% of samples between the clinical centers from which the specimens originated and a central quality-control laboratory. They observed "good to excellent" κ statistic values for results between clinical centers and between clinical centers and the central quality-control facility, with discordant results noted in 7.84% of specimens. Such a study provides a good baseline for reproducibility performance. The current study adds the necessary component of performance over time, as required for an ongoing QA exercise.
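For laboratories that wish to summarize interlaboratory agreement with the same κ statistic, it can be computed directly from paired negative/positive calls. The following is a minimal sketch for the binary case, with hypothetical example data (not taken from this study or from the triage study cited above):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for paired binary calls (0 = negative, 1 = positive).

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected by chance from each rater's
    marginal positivity rates.
    """
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_a = sum(rater_a) / n  # fraction called positive by laboratory A
    p_b = sum(rater_b) / n  # fraction called positive by laboratory B
    p_e = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical exchange of 6 paired results with one disagreement:
lab1 = [1, 1, 0, 0, 1, 0]
lab2 = [1, 1, 0, 0, 0, 0]
print(round(cohens_kappa(lab1, lab2), 3))  # prints 0.667
```

Unlike raw percent concordance, κ discounts the agreement expected by chance, which matters when specimens are deliberately selected (as here) in roughly equal positive and negative proportions.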
An alternative to the current pilot program would be to increase the number of laboratories participating in the sample exchange and to vary the exchange patterns. Such a program would provide increased assurance of the level of performance in an individual laboratory, because the results would represent a wider peer group. It would require a larger sample volume, so that more aliquots could be prepared, as well as a more comprehensive, centralized data review.
In summary, the current study reports on the implementation of an interlaboratory peer comparison program for hrHPV DNA testing on SurePath cervical cytology samples. The program is easily set up and administered, and it illustrates the excellent analytic performance that can be achieved with this test. It provides our laboratories with ongoing QA for this analyte, as mandated by the regulations of CLIA 88 and by accrediting organizations, such as CAP and JCAHO.