Clinical impact and frequency of anatomic pathology errors in cancer diagnoses
Article first published online: 10 OCT 2005
Copyright © 2005 American Cancer Society
Volume 104, Issue 10, pages 2205–2213, 15 November 2005
Raab, S. S., Grzybicki, D. M., Janosky, J. E., Zarbo, R. J., Meier, F. A., Jensen, C. and Geyer, S. J. (2005), Clinical impact and frequency of anatomic pathology errors in cancer diagnoses. Cancer, 104: 2205–2213. doi: 10.1002/cncr.21431
- Issue published online: 31 OCT 2005
- Manuscript Accepted: 11 JUL 2005
- Manuscript Revised: 14 JUN 2005
- Manuscript Received: 29 MAR 2005
- Agency for Healthcare Research and Quality. Grant Number: HS13321-01
Keywords: diagnostic error; patient safety; interobserver agreement
BACKGROUND. To the authors' knowledge, the frequency and clinical impact of errors in the anatomic pathology diagnosis of cancer have been poorly characterized to date.

METHODS. The authors examined errors in patients who underwent anatomic pathology tests to determine the presence or absence of cancer or precancerous lesions in four hospitals. They analyzed 1 year of retrospective errors detected through a standardized cytologic–histologic correlation process (in which patient same-site cytologic and histologic specimens were compared). Medical record reviews were performed to determine patient outcomes. The authors also measured the institutional frequency, cause (i.e., pathologist interpretation or sampling), and clinical impact of diagnostic cancer errors.

RESULTS. The frequency of errors in cancer diagnosis was found to be dependent on the institution (P < 0.001) and ranged from 1.79–9.42% and from 4.87–11.8% of all correlated gynecologic and nongynecologic cases, respectively. A statistically significant association was found between institution and error cause (P < 0.001); the proportion of errors attributed to pathologic misinterpretation ranged from 5.0–50.7% (the remainder were due to clinical sampling). A statistically significant association was found between institution and assignment of the clinical impact of error (P < 0.001); the aggregated data demonstrated that for gynecologic and nongynecologic errors, 45% and 39%, respectively, were associated with harm. The pairwise kappa statistic for interobserver agreement on cause of error ranged from 0.118–0.737.

CONCLUSIONS. Errors in cancer diagnosis are reported to occur in up to 11.8% of all reviewed cytologic–histologic specimen pairs. To the authors' knowledge, little agreement exists regarding whether pathology errors are secondary to misinterpretation or poor clinical sampling of tissues and whether pathology errors result in serious harm. Cancer 2005. © 2005 American Cancer Society.
The diagnosis of many disease processes depends to a large extent on the pathologic assessment of tissues. The majority of cancer diagnoses are made on the basis of histologic or cytologic evaluation. Consequently, diagnostic pathology errors may lead to incorrect patient management plans, including delays in treatment or the implementation of incorrect treatment regimens.1, 2 The reported frequency of anatomic pathologic errors ranges from 1–43% of all specimens, and the effect of these errors is unknown.1–14 Diagnostic pathology error frequency and effect are poorly characterized, partly because of the lack of uniform measurement processes, a lack of understanding of when an error has occurred, and fear of disclosure.
Anatomic pathology errors are detected by several methods.2 The most commonly used method is secondary review, in which a second pathologist reviews slides previously examined by a first pathologist.5 Pathologists employ different types of secondary review. For example, the Clinical Laboratory Improvement Amendments of 1988 (CLIA '88) require correlation of patient material in which same-site cytologic and surgical specimens are obtained (e.g., sputum cytology and lung biopsy specimens) and the two pathologic diagnoses are discrepant (e.g., sputum is suspicious for cancer and the lung biopsy tissue is benign).15 Nearly all correlations are performed to detect potential errors in cancer diagnosis. Errors detected through correlation review may be classified as interpretive (i.e., the disease process is misclassified) or sampling (i.e., the specimen does not contain the diagnostic tissue).1 In general, interpretive errors are related to errors made in the pathology laboratory, whereas sampling errors are made either in the pathology laboratory (in tissue processing) or during tissue procurement.5
To our knowledge, a detailed study of the effect of errors in cancer diagnosis, such as those detected by cytologic-histologic (CH) correlation, is lacking. Based on CH review of nongynecologic cases performed at a single institution, Clary et al. reported that 2.3% of cytologic specimens and 0.44% of surgical specimens contained an error and 23% of errors had a marked effect on patient care.1 The current study examines the frequency and cause of anatomic pathology error at four institutions, the interinstitutional variability in assigning the cause of correlation error, and the clinical impact of anatomic pathology error on patient care.
MATERIALS AND METHODS
Background and Design
In 2002, the Agency for Healthcare Research and Quality (AHRQ) funded four institutions to: 1) share deidentified anatomic pathology diagnostic error data using a Web-based database, 2) determine baseline error frequencies detected by different methods, 3) collect patient outcome information to determine the clinical impact of diagnostic errors, 4) perform root cause analysis to derive error reduction strategies, and 5) assess the success of these error reduction strategies using both quantitative and qualitative measures.5
We have added a different error detection method in each year of the project. In 2002, we began collecting errors detected by the CH correlation process. In this study, we used the Year 2002 data to establish CH correlation error frequencies, causes, and outcomes. Each institution obtained Institutional Review Board approval for the performance of this project.
The four institutions are geographically located either in the mid-Atlantic region or the Midwestern region of the U.S.
Standardization of CH Correlation Review Process
Because CLIA '88 does not mandate how the CH process is to be performed, laboratories perform CH quite differently (Table 1), which leads to bias in error reporting.5 At the beginning of the project, we standardized the CH correlation process across the four institutions. On a monthly basis, a cytotechnologist used an existing laboratory information system program to identify all patients who had both cytology and surgical specimens from the same anatomic site that had been obtained within 6 months of each other prior to the date of review. A designated “review” pathologist selected cases in which the cytologic and surgical specimens were discrepant. The cytotechnologist then retrieved the patient slides and reports and generated a hardcopy review sheet. The review pathologist examined the material and determined the cause of error.
Table 1. Institutional CH Correlation Review Processes

| Site | Method of case retrieval | Time interval used for search | Determination method for case review | Prescreening performed by cytotechnologist | Reviewer | Arbitration method |
| --- | --- | --- | --- | --- | --- | --- |
| A | Computer search for all correlating surgical and cytology specimens followed by manual review for those anatomically correlating | 6-mo-wide search using previous month's surgical pathology specimens to search for cytology cases; search performed monthly | Two-step discrepancy | No | Designated group of three pathologists | Cases shown to original pathologist for input; final decision based on reviewer |
| B | Same | 4-mo-wide search; reviewed periodically as convenient | Any disagreement | Yes | One pathologist | None |
| C | Same | 12-mo search; cases reviewed bimonthly | Two-step disagreement | Yes | Two pathologists | None |
| D | Previous cytology case for review identified only when triggered by positive surgical specimen | Reviewed daily | Any disagreement | No | Pathologist signing out surgical case | Case shown to original cytologist if disagreement |
Definition of CH Error and Cause
We defined a discrepancy as a difference between the cytologic and histologic diagnoses.5 Because cytology and surgical diagnostic schema are somewhat different, we considered the diagnoses in a scaled categoric context to determine whether a discrepancy occurred. The categoric context was different if the specimens were gynecologic (e.g., Papanicolaou [Pap] test and cervical biopsy) or nongynecologic (e.g., lung brushing and biopsy) (Table 2). We defined a CH correlation error as at least a two-step discrepancy.5 We evaluated only two-step or greater CH correlation discrepancies because of the lack of reproducibility and the clinical import of one-step discrepancies.1, 16, 17 For example, a diagnostic error occurred if a patient's bronchial brush specimen was diagnosed as benign and the patient's lung biopsy specimen was diagnosed as nonsmall cell carcinoma. This example falls within the scope of the Institute of Medicine's definition of error because in at least one specimen, the definitive pathologic diagnosis was not reached.18
Table 2. Scaled Categoric Diagnostic Context for Gynecologic and Nongynecologic Specimens

| Step | Gynecologic cytology diagnosis | Gynecologic surgical diagnosis | Nongynecologic cytology diagnosis | Nongynecologic surgical diagnosis |
| --- | --- | --- | --- | --- |
| 0 | No evidence of intraepithelial lesion or malignancy (NIL) | Benign | Benign | Benign |
| 1 | Atypical squamous cells of undetermined significance (ASC-US) | No equivalent | Atypical | |
| 2 | Low-grade squamous intraepithelial lesion (LSIL) | Cervical intraepithelial neoplasia of type 1 (CIN 1) | Suspicious | |
| 3 | High-grade squamous intraepithelial lesion (HSIL) | Cervical intraepithelial neoplasia of type 2 or 3 (CIN 2 or CIN 3) | Malignant | Malignant |
| 4 | Invasive carcinoma | Invasive carcinoma | | |
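The two-step rule described above can be sketched as a simple ordinal comparison. The step values follow Table 2; the dictionary keys and function name below are illustrative shorthand, not the authors' implementation.

```python
# Ordinal steps for gynecologic diagnoses, following the scaled
# categoric context of Table 2. Key names are illustrative shorthand.
GYN_STEPS = {
    "NIL": 0, "Benign": 0,
    "ASC-US": 1,
    "LSIL": 2, "CIN1": 2,
    "HSIL": 3, "CIN2": 3, "CIN3": 3,
    "Invasive carcinoma": 4,
}

def is_ch_error(cytology_dx: str, surgical_dx: str, steps=GYN_STEPS) -> bool:
    """A CH correlation error is a discrepancy of at least two steps."""
    return abs(steps[cytology_dx] - steps[surgical_dx]) >= 2
```

Under this rule, a benign bronchial brush paired with a carcinoma on biopsy (a 0-versus-3 pairing) is flagged, while a one-step LSIL/CIN 2 pairing is not.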
The review pathologist microscopically examined all slides and determined whether the cytology diagnosis, the surgical diagnosis, both, or neither was in error. The pathologist then assigned a “cause” of the error, using the categories of interpretation, sampling, or both.1 An interpretation error was an error in disease categorization, and was classified further as an overcall (if the review diagnosis was categorically lower than the original diagnosis) or an undercall (if the review diagnosis was higher than the original diagnosis). A sampling error was an error in which the diagnostic material was not present on the slide, even on review. Using the above example, if the review pathologist concurred with both the original lung biopsy and brushing diagnoses, a sampling error occurred in the brushing specimen, because material diagnostic of cancer was not present on the cytology slides.
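The cause-assignment logic above can be sketched as follows, treating the review diagnosis as the reference standard; the function name, parameter names, and step encoding are illustrative assumptions, not the authors' instrument.

```python
def classify_error(original_step: int, review_step: int,
                   diagnostic_material_present: bool) -> str:
    """Assign a cause of error for one specimen, per the scheme above.

    A sampling error means the diagnostic material was absent from the
    slide even on review; an interpretation error is a miscategorization
    relative to the review diagnosis.
    """
    if not diagnostic_material_present:
        return "sampling"
    if review_step < original_step:
        return "interpretation (overcall)"
    if review_step > original_step:
        return "interpretation (undercall)"
    return "no error"
```

For the bronchial brush example: the review pathologist concurred with the benign brush diagnosis, but cancer was present on biopsy, so the brush specimen lacked diagnostic material and the call is "sampling".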
CH Correlation Data Collection
We developed a two-part CH correlation data collection instrument. The first part contained pathology items, including the date of cytology and surgical specimen collection, specimen type, original and review diagnoses, original and review pathologists and cytotechnologists, limitations in specimen quality, and causes of error. The second part contained patient management and outcomes items, including additional tests ordered, unnecessary or additional treatment protocols initiated, morbidity or mortality related to additional tests or treatments, and delays in diagnosis. We performed a clinical record review on all nongynecologic errors, all gynecologic errors that had either original or review diagnoses of high-grade squamous intraepithelial lesion (HSIL)/cervical intraepithelial neoplasia (CIN) of type 2 or greater, and a random sample of 10% of all gynecologic errors that had either original or review diagnoses of less than HSIL/CIN 2.5 A 10% review was performed on this subset because of the lower likelihood of adverse outcomes.
A data collector reviewed the pathology CH correlation logs and pathology reports to complete the first part of the instrument. An honest broker reviewed the hospital electronic and hardcopy medical records to complete the second part of the instrument. An honest broker was a clinical outcomes data collector who was the only person exposed to clinical data linked to individual patient identifiers. Use of the honest broker satisfied the Health Insurance Portability and Accountability Act (HIPAA) requirements regarding the use of medical records data for research purposes. The data then were deidentified by the honest broker, and a pathologist assessed the clinical severity of the error, using the categories shown in Table 3. We devised this scheme based on error severity schema published in the medical literature,19–22 recognizing that error severity instruments have not been specifically designed for diagnostic pathology errors. The data collector entered the deidentified case data into the Web-based patient safety database.
Table 3. Clinical Severity Categories for Diagnostic Errors

- No harm: The clinician acted regardless of an erroneous diagnosis.
  - Example: A patient had a lung mass and the clinician performed a bronchial washing and biopsy at the same time. The washing was diagnosed as malignant and the biopsy was diagnosed as benign (sampling error). The clinician acted on the malignant cytology diagnosis regardless of the surgical diagnosis.
- Near miss: The clinician intervened before harm occurred, or the clinician did not act on an erroneous diagnosis.
  - Example: A patient had a lung mass and a bronchoalveolar lavage was obtained and diagnosed as benign (sampling error). The surgeon proceeded with a therapeutic surgical procedure because the radiologic evidence supported the diagnosis of malignancy. The diagnosis on the surgical specimen was malignant.
- Minimal harm (Grade 1):
  - a. Further unnecessary noninvasive diagnostic test(s) performed (e.g., blood test or noninvasive radiologic examination).
  - b. Delay in diagnosis or therapy of ≤ 6 mos.
  - c. Minor morbidity due to (otherwise) unnecessary further diagnostic effort(s) or therapy (e.g., bronchoscopy) predicated on the presence of an (unjustified) diagnosis.
- Moderate harm (Grade 2):
  - a. Unnecessary invasive further diagnostic test(s) (e.g., tissue biopsy, re-excision, angiogram, radionuclide study, or colonoscopy).
  - b. Delay in diagnosis or therapy of > 6 mos.
  - c. Major morbidity lasting ≤ 6 mos due to (otherwise) unnecessary further diagnostic efforts or therapy predicated on the presence of an (unjustified) diagnosis.
- Severe harm (Grade 3): Loss of life, limb, or other body part, or long-lasting morbidity (lasting > 6 mos).
We analyzed the Year 2002 CH correlation errors by stratifying the errors by institution, specimen type (gynecologic vs. nongynecologic), nongynecologic specimen anatomic site, cause of error, clinical management protocol, outcome, and clinical severity. A priori sample size calculations, assuming an error frequency difference of at least 2% (the smallest difference in an error frequency that we deemed clinically significant), a nondirectional alpha of 0.05, and a power of 0.80, showed that we needed a denominator of 1398 cases per institution to detect statistical significance.
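The a priori sample size can be reproduced with the standard two-proportion formula. The baseline error frequency is not stated in the text, so the 3% value below is an assumption; the resulting n varies with the assumed baseline, and the authors' figure of 1398 implies a slightly different one.

```python
import math

# Normal quantiles for a two-sided alpha of 0.05 and power of 0.80.
Z_ALPHA = 1.959964
Z_BETA = 0.841621

def n_per_group(p1: float, p2: float) -> int:
    """Per-group sample size to detect p1 vs. p2 (two-proportion test)."""
    pbar = (p1 + p2) / 2.0
    num = (Z_ALPHA * math.sqrt(2 * pbar * (1 - pbar))
           + Z_BETA * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2)))
    return math.ceil((num / abs(p2 - p1)) ** 2)

# Assumed 3% baseline vs. a 2% absolute difference (the smallest
# clinically significant difference named in the text).
n = n_per_group(0.03, 0.05)
```

As expected, the required n falls as the detectable difference widens, which is why the 2% threshold drives the ~1400-case denominator per institution.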
Overall error frequencies were calculated for each institution using the number of CH correlation errors as the numerator. Error frequencies were calculated in two ways, using different denominators. First, we used the total number of discrepant and nondiscrepant correlating cytology and surgical specimen pairs as the denominator.1 This measure expressed error frequency in terms of the total number of cases secondarily reviewed. The percentage of correlated cases in relation to the total cytology and surgical workload varied considerably by institution. Second, we used the total number of institutional cytology cases as the denominator. This measure expressed error frequency in terms of total laboratory case workload, recognizing that the majority of cases were not reviewed.
We retrieved denominator data from the institutional laboratory information systems using query tools for the number of correlating cytology and surgical pairs and overall 2002 cytology workload. Aggregated error frequencies for the entire 2002 dataset were calculated using weighted means.
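The two denominators and the weighted-mean aggregation can be sketched as follows; the institution counts in the usage lines are hypothetical.

```python
def error_frequencies(n_errors: int, n_correlating_pairs: int,
                      cytology_workload: int) -> tuple[float, float]:
    """Error frequency per correlated CH pair and per total cytology case."""
    return (n_errors / n_correlating_pairs, n_errors / cytology_workload)

def aggregated_frequency(errors_by_institution: list[int],
                         denominators_by_institution: list[int]) -> float:
    """Weighted mean across institutions: total errors over total denominator."""
    return sum(errors_by_institution) / sum(denominators_by_institution)

# Hypothetical institution: 100 errors among 1,000 correlated pairs
# drawn from a 20,000-case cytology workload.
per_pair, per_case = error_frequencies(100, 1000, 20000)
```

The weighting matters because institutions with large workloads would otherwise be averaged on equal footing with small ones.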
We examined institutional differences in overall error frequencies, error frequencies by specimen type, cause of error (i.e., sampling or interpretation), and assessment of error severity using chi-square and Fisher exact tests. Statistical significance was assumed with a P value ≤ 0.05. All statistical analyses were performed using SPSS software (Version 11; SPSS Inc., Chicago, IL).
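The institution-by-cause comparison can be illustrated with a hand-rolled Pearson chi-square; the counts below are the gynecologic cause-of-error numbers from Table 6 (interpretation vs. sampling, cytology and surgical combined, by institution), and significance is judged against the df = 3 critical value rather than the exact P value the authors obtained from SPSS.

```python
def chi_square_statistic(table: list[list[float]]) -> float:
    """Pearson chi-square statistic for an r x c contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (obs - expected) ** 2 / expected
    return stat

# Gynecologic errors by institution (A, B, C, D), from Table 6:
# row 1 = interpretation errors, row 2 = sampling errors.
stat = chi_square_statistic([[7, 9, 218, 3], [133, 98, 240, 15]])
# df = (2-1)*(4-1) = 3; the alpha = 0.05 critical value is 7.815,
# so stat > 7.815 rejects independence of institution and cause.
```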
Pathologist Agreement on the Cause of CH Correlation Error
A confounding factor in the comparison of CH correlation errors is the interobserver variability23–27 of the review pathologists' assessment of error cause. To our knowledge, no previous studies have measured the level of pathologist agreement with regard to CH correlation error cause.
We selected a sample of 10 CH correlation errors (5 gynecologic errors with conventional Pap tests and 5 nongynecologic pulmonary errors with either bronchial brushes or bronchial washes) from each institution for review. Slides from all 40 cases were deidentified and assessed independently by the CH correlation review pathologists. Each pathologist examined the slides, recorded cytology and surgical diagnoses, and determined the cause of error. Estimates of agreement between reviewers were calculated using an unweighted kappa statistic. The agreement between the review pathologists' assignment and the original assignment at the institution and the agreement between the review pathologists' current assignments were measured. Pathologists rereviewed their own institutional cases, and the kappa statistic was used as an intraobserver measure of agreement.
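The unweighted kappa used for these agreement estimates can be sketched as follows; the two rating lists in the usage lines are hypothetical cause-of-error assignments, not study data.

```python
from collections import Counter

def unweighted_kappa(ratings_a: list[str], ratings_b: list[str]) -> float:
    """Cohen's unweighted kappa for two raters over the same cases."""
    n = len(ratings_a)
    # Observed agreement: fraction of cases rated identically.
    po = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: product of each rater's marginal frequencies.
    ca, cb = Counter(ratings_a), Counter(ratings_b)
    pe = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical cause-of-error assignments by two review pathologists.
a = ["interpretation", "interpretation", "sampling", "sampling", "both"]
b = ["interpretation", "sampling", "sampling", "sampling", "both"]
```

Because kappa discounts chance agreement, two raters who agree often but use very different marginal frequencies can still score poorly, which is relevant when interpreting the wide 0.118–0.737 pairwise range reported below.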
RESULTS

Institutional CH correlation error frequencies for the Year 2002 are shown in Table 4. Error frequencies for nongynecologic specimens were higher than for gynecologic specimens. For some institutions, more than 1 of every 10 patients who had a correlating CH specimen pair had an error in diagnosis. Contingency tables showed that gynecologic and nongynecologic error frequencies, regardless of the denominator used, were dependent on institution (P < 0.001). Compared with the other three institutions, the gynecologic error frequencies of Institution A were higher (P < 0.001). Compared with Institution C, the gynecologic and nongynecologic error frequencies of Institution B were lower (P < 0.001).
Table 4. Institutional error frequencies for gynecologic and nongynecologic specimens, calculated with denominators of correlating cases and of total cytology workload. (Table data not recoverable.)
Table 5 shows the number of institutional nongynecologic errors by anatomic site. All the institutions showed a relatively high number of nongynecologic errors associated with specimens obtained from the urinary tract and lung. A variable number of errors were detected for some specimen sites. For example, Institution C reported that 19% of correlating specimens from the pleura were associated with an error, and Institutions B and D reported no errors in pleural specimens.
Table 5. Nongynecologic Errors by Anatomic Site and Institution (for Institutions A and C, the error percentage of all correlating discrepant and nondiscrepant cases for that site is given in parentheses)

| Anatomic site | A No. (%) | B No. | C No. (%) | D No. |
| --- | --- | --- | --- | --- |
| Biliary tract | 0 (0) | 6 | 7 (9) | 0 |
| Bone/soft tissue | 0 (0) | 0 | 4 (8) | 1 |
| Brain | 0 (0) | 0 | 2 (4) | 0 |
| Breast | 3 (15) | 3 | 44 (13) | 0 |
| GI tract | 1 (4) | 2 | 6 (6) | 0 |
| Urinary tract | 17 (11) | 25 | 99 (25) | 3 |
| Liver | 0 (0) | 0 | 7 (16) | 1 |
| Lung | 48 (17) | 46 | 80 (6) | 12 |
| Lymph node | 1 (5) | 3 | 22 (16) | 3 |
| Pelvis | 0 (0) | 1 | 51 (13) | 0 |
| Peritoneum | 0 (0) | 3 | 78 (16) | 1 |
| Pleura | 4 (13) | 0 | 34 (19) | 0 |
| Thyroid | 0 (0) | 1 | 26 (11) | 2 |
The institutional causes of error are shown in Table 6. Contingency tables showed a statistically significant association between institution and error cause for both gynecologic and nongynecologic errors (P < 0.001). Institutions A and B reported significantly fewer interpretation errors and Institution C reported significantly more interpretation errors than expected. The majority of errors were attributed to cytology, rather than surgical, sampling or interpretation; Institution D never attributed an error to surgical specimen interpretation or sampling.
Table 6. Causes of Error by Institution (aggregated percentages are for interpretation versus sampling overall, cytologic and surgical combined)

| Specimen type | Cause of error | Specimen in error | A No. (%) | B No. (%) | C No. (%) | D No. (%) | Aggregated (%) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Gynecologic | Interpretation | Cytology | 4 (3) | 7 (7) | 195 (45) | 3 (17) | 40 |
| | | Surgical | 3 (2) | 2 (2) | 23 (5) | 0 (0) | |
| | Sampling | Cytology | 110 (79) | 37 (36) | 114 (27) | 15 (83) | 60 |
| | | Surgical | 23 (17) | 61 (59) | 126 (29) | 0 (0) | |
| Nongynecologic | Interpretation | Cytology | 3 (4) | 16 (16) | 163 (34) | 2 (9) | 29 |
| | | Surgical | 1 (1) | 0 | 37 (8) | 0 (0) | |
| | Sampling | Cytology | 60 (81) | 76 (76) | 321 (67) | 21 (91) | 71 |
| | | Surgical | 13 (18) | 4 (4) | 36 (8) | 0 | |
The major clinical outcomes and error severity for gynecologic and nongynecologic errors are shown in Tables 7 and 8, respectively. Contingency tables showed that for both gynecologic and nongynecologic errors, a statistically significant association existed between institution and the assignment of clinical error severity (P < 0.001). For both gynecologic and nongynecologic errors, Institution A reported significantly more no-harm events and fewer harm events, Institution B reported significantly fewer no-harm events and more harm events, and Institution D reported significantly fewer harm events than expected (P < 0.001). For nongynecologic cases alone, Institution C reported significantly fewer no-harm events (P < 0.001). Institution D reported that errors in cancer diagnosis never resulted in patient harm.
Table 7. Major Clinical Outcomes and Error Severity for Gynecologic Errors (percentage by institution)

| Outcome | A | B | C | D |
| --- | --- | --- | --- | --- |
| Patient lost to follow-up | 26.1 | 7.0 | 9.4 | 12.5 |
| Repeat Pap test | 39.8 | 52.2 | 50.0 | 12.5 |
| Colposcopy with additional sampling | 31.8 | 22.1 | 25.0 | 75.0 |
| Ancillary therapy for cancer | 0 | 0.9 | 7.0 | 0 |

| Clinical severity assessment | A | B | C | D |
| --- | --- | --- | --- | --- |
| Harm, Grade 1 | 2.8 | 48.7 | 55.1 | 0 |
| Harm, Grade 2 | 0 | 23.3 | 15.0 | 0 |
| Harm, Grade 3 | 0 | 4.9 | 0.3 | 0 |
Table 8. Major Clinical Outcomes and Error Severity for Nongynecologic Errors (percentage by institution)

| Outcome | A | B | C | D |
| --- | --- | --- | --- | --- |
| Patient lost to follow-up | 14.9 | 0 | 8.7 | 6.7 |
| No specific follow-up documented | 9.5 | 3.0 | 5.7 | 4.4 |
| Routine monitoring for malignancy | 0 | 32.0 | 60.0 | 17.4 |
| Additional cytology specimen obtained | 28.4 | 19.0 | 32.3 | 4.4 |
| Additional surgical specimen obtained | 40.5 | 37.0 | 34.2 | 17.4 |
| Ancillary cancer therapy | 52.7 | 54.0 | 44.0 | 52.2 |
| Antibiotics or other non-chemotherapy medications | 12.2 | 1.0 | 10.5 | 13.0 |

| Clinical severity assessment | A | B | C | D |
| --- | --- | --- | --- | --- |
| Harm, Grade 1 | 0 | 16.0 | 22.4 | 0 |
| Harm, Grade 2 | 0 | 45.0 | 15.1 | 0 |
| Harm, Grade 3 | 0 | 2.0 | 1.9 | 0 |
For the aggregated data, the frequency of error severity assignment for gynecologic errors was 46% for no-harm events, 8% for near-miss events, and 45% for harm events. The frequency of error severity assignment for nongynecologic errors was 55% for no-harm events, 5% for near-miss events, and 39% for harm events. If harm occurred, it generally was assessed as Grade 1 or 2.
Interobserver (40 cases) and intraobserver (10 cases) agreement between the review causes of error and the original assessments is shown in Table 9. Institution B exhibited worse agreement when reviewing its own cases than when reviewing cases from Institutions A and D. Pairwise agreement between the review pathologists is shown in Table 10 and demonstrated high variability.
Table 9. Agreement (Kappa) Between Review and Original Cause-of-Error Assignments

| Original reason for error | A | B | C | D |
| --- | --- | --- | --- | --- |
| A | 0.615 | 0.412 | 0.024 | 0.286 |

(Remaining rows not recoverable.)
Table 10. Pairwise kappa agreement between review pathologists, by slide set (A–D). (Table data not recoverable.)
DISCUSSION

It is exceedingly difficult to measure the true frequency of errors in cancer diagnosis because of the variety of detection methods used, bias, and the inability of institutions to secondarily review large case volumes.5 As part of a multiinstitutional, national effort to improve practice, we are in the process of standardizing methods, decreasing bias by sharing cases and data among institutions, and establishing more accurate error frequencies by detecting errors using multiple methods.5
In the current study, we reported cancer diagnostic error frequencies based on the CH correlation method, with the understanding that these are minimum frequencies because the majority of patient specimens are not reviewed. Assuming that our aggregated error frequencies are representative of all American laboratories, the minimum number of patients per year who have a Pap test/gynecologic histologic diagnostic error is 150,000 (assuming 50 million annual Pap tests), and that for patients who have a nongynecologic diagnostic error (assuming 5 million nongynecologic specimens) is 155,000.28 If the frequency of error were based on the denominator of correlating CH case pairs, these numbers would be 2–10 times higher.
The effect of diagnostic cancer errors on patient outcome is largely unknown.5 Clinicians often do not know when a diagnostic error has occurred and pathologists often have no knowledge of the effect. The study of errors in cancer diagnosis has been limited by the lack of taxonomy to classify error severity. We devised an error severity scale based on several factors such as morbidity, mortality, delay in diagnosis, and additional testing performed. Similar to other classification systems,19–22 we found that pathologists disagreed on the extent of harm caused by diagnostic errors.21, 29 Some institutions claimed that harm never occurred after an error in cancer diagnosis. Using aggregated clinical severity data and the number of errors calculated above, harm appears to occur in a minimum of 127,950 patients per year in the U.S. as a result of errors in the diagnosis of cancer in gynecologic and nongynecologic specimens.
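The aggregated harm estimate follows directly from the extrapolated error counts and harm rates above; this sketch reproduces the arithmetic, with all inputs taken from the text.

```python
# Estimated U.S. patients per year with a cancer diagnostic error,
# extrapolated in the text from the aggregated error frequencies.
gyn_errors = 150_000      # from ~50 million annual Pap tests
nongyn_errors = 155_000   # from ~5 million nongynecologic specimens

# Aggregated harm rates: 45% of gynecologic and 39% of nongynecologic
# errors were associated with harm.
harmed = gyn_errors * 0.45 + nongyn_errors * 0.39
print(round(harmed))  # 127950
```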
Individual institutional error frequencies differed partly because of variable clinical and laboratory practices and partly because of biases that were difficult to control. Standardization of laboratory practices is haphazard. For example, our participant laboratories differed with regard to pathologist experience, subspecialty sign-out practice, training programs, methods of preparing specimens, and presign-out quality assurance methods, all of which potentially affected error frequency. We currently are measuring these processes to identify those that may be key to better outcomes. Several authors have reported regional variations in several medical practices such as organ-specific surgery and other treatment protocols.30–35 We believe that differences in test ordering practices contribute to clinical sampling error frequencies. For example, clinicians who bypass noninvasive cytologic diagnostic techniques for more invasive surgical techniques may have a higher rate of more accurate cancer diagnoses, but with associated higher costs, morbidity, and mortality.
The results of the current study demonstrate how false-positive and false-negative cancer diagnoses affect patient outcome. In the pathology literature, false-positive diagnoses usually are attributed to interpretation failures that may be avoided if pathologists learn potential pitfalls. The pathology culture is one of individual diagnostic responsibility and errors are not attributed to poorly designed systems that may be fixed.36 False-negative diagnoses often are attributed to inherent limitations in testing, with the solution being the adoption of better tests. For example, meta-analyses have demonstrated that the mean sensitivity of the conventional Pap test is 58%,37, 38 and the growth of some new Pap test technologies is aimed at improving sensitivity. Our goal is to use error data to maximize the sensitivity and specificity of tests for cancer diagnosis based on tissue sampling. For pathology laboratories, this entails reducing diagnostic variability and designing systems that decrease the probabilities of false-positive and false-negative results. For clinical systems, this means improving test sampling.
Variability in cancer diagnosis has been shown to exist for nearly every cancer type,23–27 although to our knowledge successful interventions at decreasing variability are rare. Page et al. showed that variability in the diagnosis of breast cancer was reduced after education and the adoption of standard histologic criteria.27, 39, 40 However, experts often do not agree on the standard criteria, and without a mechanism to force agreement and adherence, decreasing diagnostic variability is difficult. The diagnostic variability measured in the current study is an expression of institutional diagnostic differences related to all organ systems. We are attempting to use standardized criteria within and across laboratories and other processes, such as telepathology, to standardize diagnoses in CH correlation. Clearly, this is a first small step in decreasing national interobserver diagnostic variability, and the entire pathology community will need to play a role in this effort.
The standardization and uniform reporting of errors in cancer diagnosis is a first step in improving safety. Additional sites have been recruited to contribute error data to further nationalize this effort. In the second phase of this project, root cause analysis was used to devise error reduction plans that were implemented in all laboratories. Reports of the success and failure of these plans are forthcoming.
REFERENCES

- 15. Department of Health and Human Services, Health Care Financing Administration. Clinical Laboratory Improvement Amendments of 1988: final rule. Federal Register. 1992;57:7146 (codified at 42 CFR §493).
- 18. Kohn LT, Corrigan JM, Donaldson MS, editors. To err is human: building a safer health system. Washington, DC: National Academy Press; 1999.
- 24. Observer variation, dysplasia grading, and HPV typing: a review. Am J Clin Pathol. 2000;114:S21–S35.
- 28. Pap test. Primary Care Consultants. Available at: http://www.pccdocs.com [accessed February 3, 2005].
- 36. Medical errors and medical narcissism. Boston: Jones & Bartlett Publishers; 2005.