Abstract

Improving the quality of oncologic pathology diagnosis is immensely important, as the overwhelming majority of the approximately 1.6 million patients who will be diagnosed with cancer in 2010 will have their diagnoses established through the pathologic interpretation of a tissue sample. Millions more patients who do not have cancer have tissue samples obtained to rule it out. The majority of studies on the quality of oncologic pathology diagnoses have focused on patient safety and have documented a variety of causes of error that occur in the clinical and pathology laboratory testing phases of diagnostic testing. The reported frequency of diagnostic error in oncologic pathology depends on several factors, such as definitions and detection methods, and ranges from 1% to 15%. The large majority of diagnostic errors do not result in severe harm, although mild to moderate harm, in the form of additional testing or diagnostic delays, occurs in up to 50% of errors. Clinical practitioners play an essential role in error reduction through several avenues, such as effective test ordering, providing accurate and pertinent clinical information, procuring high-quality specimens, providing timely follow-up on test results, communicating effectively about potentially discrepant diagnoses, and advocating second opinions on the pathology diagnosis in specific situations. CA Cancer J Clin 2010;60:139–165. © 2010 American Cancer Society, Inc.


Introduction

Almost all primary and many recurrent diagnoses of cancer are based on the pathology diagnosis. In the United States, approximately 1.6 million individuals will be diagnosed with cancer in 2010,1 and far more individuals will have pathology tissues procured to rule out cancer and will not have cancer. In the current era of healthcare reform and reorganization, the assessment of quality in all aspects of our healthcare system is critically important.2, 3 The screening, diagnosis, and management of patients with cancer form the basis of a sprawling, complex system with extensive practitioner subspecialization. With the gradual aging of the US population, Smith et al estimated that 2.3 million people will be diagnosed with cancer in 2030.1 As most of these patients will enter the healthcare system through the portal of a pathology diagnosis, it is apropos to assess the current state of quality in oncologic pathology diagnosis.

In 1990, the Institute of Medicine (IOM) defined the quality of healthcare as the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge.4 This definition addressed both population and individual healthcare needs and encompassed clinician and patient perspectives.4 The IOM further classified quality into 6 domains: safety, timeliness, effectiveness, efficiency, equity, and patient centeredness.5, 6 As most oncologic pathology quality research has heretofore focused on aspects of test performance, a large proportion of the medical literature has reported on patient safety or failures in pathology diagnostic testing and screening.

Most active and published quality improvement activities in oncologic pathology testing affect microsystems of practice or units of care delivery, which Berwick defined as Level B of a 4-level system of care.7 These published quality initiatives affect entities such as local diagnostic testing or screening services or individual laboratories. For there to be true transformation of the quality of oncologic pathology diagnostic care, change will need to occur at additional levels of healthcare, addressing the experience of patients and communities (Level A), healthcare organizations (Level C), and healthcare environments (Level D).

Given the breadth and depth of the interplay between safety and oncologic diagnostic testing and screening practice, an article on oncologic pathology diagnostic safety, by necessity, must be written from a specific perspective, such as that of the practicing clinician, pathologist, health services researcher, payer, or patient. This article is written for practicing clinicians and, as such, excludes details of some existing quality structures, including those within laboratories. Although we will focus on the practical approaches that clinical practitioners may take to contribute to the improvement of patient safety in oncologic pathology testing, we recognize that more global strategies of change affecting higher and lower levels of care are necessary to improve quality in the overall system of oncologic pathology diagnosis. We also recognize that an accurate oncologic diagnosis requires the collaboration of pathologists, oncology specialists, and other clinicians. Lastly, we chose to focus this article on the quality improvement of patient-safety systems, an activity that is driven by quality assurance, that is, the assessment of current levels of safety.

Definitions of Medical Error

An underlying theme that currently is driving the discussion of patient safety in oncologic pathology diagnosis is the lack of agreement on the definition of “diagnostic” pathology error.8–10 This lack of agreement has resulted in major differences in reported error rates and has limited the effectiveness of quality improvement activities. A contributing factor to this dilemma is the lack of acceptance of the IOM definition of a medical error as applied to pathology diagnosis.

The IOM defined a medical error as the failure of a planned action to be completed as intended or the use of a wrong plan to achieve an aim.2 This definition encompasses all types of error and does not link patient outcome to error. It is important to note that the terms error and failure do not imply blame, nor do they necessarily indicate lack of skill, negligence, or legal liability.

Pathology laboratories traditionally have considered 2 types of error: errors of accuracy and errors of precision.11 Both types of error may be incorporated into the IOM definition of error. Accuracy is the closeness of a measure to its true value, and precision is the degree to which repeated measures show the same results. Accuracy and precision often are visually portrayed with a cartoon of darts thrown at a dart board or gunshots at a target, as shown in Figure 1.12


Figure 1. These 4 pictures demonstrate 4 combinations of accuracy and precision through the use of a bull's-eye. The center of the bull's-eye represents the accuracy of a laboratory test, and an oncologic pathology test is accurate when the diagnosis hits the center (ie, correctly identifies the disease process). The closeness of the shots (or test groupings) measures precision or reproducibility. Note that a group of pathologists all could agree with a diagnosis, but the diagnosis may not be accurate (see “Not Accurate, Precise” figure). In the “Accurate, Not Precise” figure, the average of the individual scores lies in the bull's-eye. Source: http://celebrating200years.noaa.gov/magazine/tct/accuracy_vs_precision.html. Reproduced with permission from Barr JT, Silver S.


As a starting point for the study of patient safety in diagnostic pathology, a consortium of pathologists funded by the Agency for Healthcare Research and Quality (AHRQ) defined diagnostic error as the failure of a test (the planned action) to produce a diagnosis that corresponds to the actual disease state in a patient.13 This concept of error is one of accuracy. This definition has not been widely accepted by anatomic pathologists, partly because of the connotation that a diagnostic error implies an error in a pathologist's interpretation. In reality, an error in interpretation is only one of several root causes of a diagnostic error, as will be discussed later.

Precision in oncologic pathology diagnostic testing generally has been reported as the reproducibility of the pathologist's diagnostic interpretation.14, 15 The precision of the testing activities leading to and following the diagnostic interpretation has not been extensively studied. Measures of the precision of diagnostic interpretation are expressed in terms of diagnostic agreement among pathologists and include the metrics of kappa and crude agreement.16, 17 Some pathologists argue that the lack of precision, which is measured by diagnostic disagreement, is not a form of error but represents variability in practice,9, 18 similar to other forms of variation, such as the reported geographic variation in hysterectomy rates.19–21
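For readers less familiar with these agreement metrics, the relationship between crude agreement and kappa can be shown with a small worked example; the counts below are hypothetical, chosen only for arithmetic clarity. Suppose 2 pathologists independently classify 100 specimens as malignant or benign, agreeing on 40 malignant calls and 40 benign calls and disagreeing on the remaining 20 (10 in each direction). Crude agreement is the observed proportion of agreement,

$$p_o = \frac{40 + 40}{100} = 0.80,$$

and the agreement expected by chance, computed from the marginal totals (each pathologist calls 50% of specimens malignant), is

$$p_e = (0.5)(0.5) + (0.5)(0.5) = 0.50,$$

so that

$$\kappa = \frac{p_o - p_e}{1 - p_e} = \frac{0.80 - 0.50}{1 - 0.50} = 0.60.$$

Kappa therefore discounts agreement that would be expected by chance alone, which is why it can be modest even when crude agreement appears high.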

Diagnostic precision is critically important to clinicians, as will be discussed in the section on Secondary Review in Pathology. By definition, variation represents a form of error, and in clinical practice, variation suggests less than optimal care.19–21 Although we may not know the best clinical practice, disparate rates of the performance of specific procedures (eg, hysterectomy, requesting chest films) imply that one or more ways of practicing are not ideal. To use the example of variation in hysterectomy rates: geographically disparate high, moderate, and low rates cannot all be equally clinically effective for patients at equal risk, even when we do not know the optimal rate. A problem in some areas of clinical medicine is the lack of data linking the rates of performance of specific procedures with patient outcome. The same holds true for pathologists' diagnostic interpretations, as we may not know how different diagnostic schemes, or even how 2 different pathologists' diagnoses, will affect patient outcomes. However, a difference in diagnostic interpretation on the same patient specimen is problematic, especially when clinical management differs by diagnosis.

The Total Testing Process

One way to conceptualize oncologic pathology safety is in terms of a total testing process (TTP), which is a system-based framework for examining all possible interactions and activities that may affect the quality of laboratory tests.22–24 This framework allows for the design and implementation of interventions that may reduce or eliminate errors that adversely affect testing and patient-health outcomes. This framework also allows for the study of barriers and limits to quality-improvement activities. The TTP encompasses all components or steps of the cycle from the point of the clinical question to the point of clinical action. For a patient who has a lesion suspicious for cancer (based on clinical examination and diagnostic imaging), this cycle traverses from the clinical question of “Does this patient have cancer?” to the clinical action of cancer treatment and follow-up, when the diagnosis of cancer is rendered. In this regard, the TTP for oncologic pathology testing is defined by activities (Fig. 2)23 in 3 distinct phases that align with clinical workflow internal and external to the pathology laboratory as follows:

  1. Preanalytic phase: clinician test selection, test ordering, specimen procurement, patient and specimen identification, and specimen transport
  2. Analytic phase: specimen processing, preparation, immediate reporting of results, and interpretation
  3. Postanalytic phase: test-result reporting and clinician receipt, clinician interpretation of test results, and clinical action based on interpretation25
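As a minimal, hypothetical sketch (not a published instrument), the phases and steps listed above can be encoded so that any detected error is attributed to the phase containing its causal step:

```python
from enum import Enum

class Phase(Enum):
    PREANALYTIC = 1
    ANALYTIC = 2
    POSTANALYTIC = 3

# Steps of the total testing process (TTP), taken from the list above.
TTP_STEPS = {
    Phase.PREANALYTIC: [
        "test selection", "test ordering", "specimen procurement",
        "patient and specimen identification", "specimen transport",
    ],
    Phase.ANALYTIC: [
        "specimen processing", "preparation",
        "immediate reporting of results", "interpretation",
    ],
    Phase.POSTANALYTIC: [
        "test-result reporting and clinician receipt",
        "clinician interpretation of test results",
        "clinical action based on interpretation",
    ],
}

def phase_of(step: str) -> Phase:
    """Attribute an error to the TTP phase that contains its causal step."""
    for phase, steps in TTP_STEPS.items():
        if step in steps:
            return phase
    raise ValueError(f"unknown TTP step: {step!r}")

# A specimen mislabeled during procurement is a preanalytic error.
assert phase_of("patient and specimen identification") is Phase.PREANALYTIC
```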

Figure 2. The total testing process (TTP) begins with a clinical question and the care provider and patient. Each larger step in the process comprises many smaller steps, and laboratory medicine is the sum total of all testing steps. Source: Adapted with permission from Boone J. Presentation at the Institute on Critical Issues in Health Laboratory Practice: Managing for Better Health; September 23–26, 2007; Atlanta, GA: Centers for Disease Control and Prevention; and from The total testing process and its implications for laboratory administration and education. Clin Lab Manage Rev. 1994;8:526–542.


All phases in the TTP involve multiple opportunities for making clinical decisions, and some phases involve the use of highly technical skills. Years of training and practice are necessary to hone many of the skills necessary for optimal performance of work-related tasks. Many of the steps in the TTP are based on subjective assessment of criteria. In some ways, a pathologist examining a slide is similar to a cardiologist listening to heart sounds or an internist performing a physical examination.

In oncologic pathology testing, most preanalytic- and postanalytic-phase processes occur outside of the laboratory. The utility of the TTP concept lies in the linking together of all testing steps and the crossing of historical boundaries of testing-process ownership. Currently, data on the performance and problems of some steps of the TTP are well known but are lacking for other steps. In the frame of the TTP, quality improvement and error-reduction initiatives are optimized through a team approach that involves clinicians and pathologists.

The TTP and Errors by Testing Phase

Patient-safety researchers are able to report the frequency, cause, and outcome of error for only some steps of the TTP. Historically, quality improvement initiatives in laboratory medicine have focused on the analytic phase, although pathologists have focused on some aspects of the analytic process more than others. Root cause analytic studies confirm that more errors occur in the preanalytic and postanalytic phases of testing than in the analytic phase. Bonini et al reported that, for the entire field of laboratory medicine, the distribution of errors was 32% to 75% in the preanalytic phase, 13% to 32% in the analytic phase, and 9% to 31% in the postanalytic phase.26 Stroobants et al estimated that 20% of all laboratory tests were associated with an error and that greater than 85% of errors occurred in the preanalytic or postanalytic phase.25 These 2 studies concentrated on reviews and estimates of clinical pathology testing services, which, for the most part, involve automated instruments in the analytic phase. Because cancer diagnostic testing and screening by anatomic pathology involves less automation in the analytic phase, the potential for analytic error may be higher, although the frequency of error in the preanalytic and postanalytic phases may be similar.

Schiff et al reported that of 583 missed or delayed diagnoses reported by 310 clinicians at 22 institutions, 10.3% of cases were missed or delayed diagnoses of lung, breast, or colon cancer.27 Errors occurred most frequently in the testing phase (failure to order, report, and follow up a laboratory result, 44%), followed by clinician assessment errors (failure to consider or tendency to overweigh competing diagnoses, 32%), history taking (10%), physical examination (10%), and referral or consultation delays and errors (4%).

Estimates of Error Frequency and Harm in Cancer Screening and Testing

Studies of patient safety in diagnostic oncologic pathology show a tremendous heterogeneity in study design and error-reporting methods. Most studies of patient safety in oncologic pathology testing are based on single-institution data and rely on retrospective review of specific case types. Crucial to the evaluation of these studies is documentation of the specific error-detection method and standardization of the process of collecting data on errors.

In 2002, AHRQ funded 4 institutions to evaluate the frequency, cause, and outcome of diagnostic pathology errors.13 These institutions evaluated the frequency of error in diagnosing cancer by using cytologic-histologic correlation to compare histologic and cytologic diagnoses in patients who underwent both a cytologic and a histologic procedure to obtain specimen material. Histologic and cytologic sampling might have occurred at the same clinical diagnostic procedure or at different procedures performed at different times. Because cytopathology and surgical pathology diagnostic schema are somewhat different, the researchers considered diagnoses in a scaled-step categorical context to determine whether a discrepancy occurred. The researchers performed chart reviews of patients who had histologic and cytologic diagnoses that were discrepant by 2 or more steps (Table 1).13 For example, a lung fine-needle aspiration diagnosis of benign and a lung biopsy diagnosis of malignant would be considered discrepant.

Table 1. Diagnostic Steps for Gynecologic and Nongynecologic Specimens

STEP | GYNECOLOGIC: CYTOLOGY DIAGNOSIS | GYNECOLOGIC: SURGICAL DIAGNOSIS | NONGYNECOLOGIC: CYTOLOGY DIAGNOSIS | NONGYNECOLOGIC: SURGICAL DIAGNOSIS
0 | No evidence of intraepithelial lesion or malignancy (NIL) | Benign | Benign | Benign
1 | Atypical squamous cells of undetermined significance (ASC-US) | No equivalent | Atypical | Not generally used
2 | Low-grade squamous intraepithelial lesion (LSIL) | Cervical intraepithelial neoplasia 1 (CIN 1) | Suspicious | Not generally used
3 | High-grade squamous intraepithelial lesion (HSIL) | Cervical intraepithelial neoplasia 2 or 3 (CIN 2 or 3) | Malignant | Malignant
4 | Invasive carcinoma | Invasive carcinoma | Not applicable | Not applicable

Source: Reprinted with permission from Raab SS, Grzybicki DM, Janosky JE. Clinical impact and frequency of anatomic pathology errors in cancer diagnosis. Cancer. 2005;104:2205–2213.
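A minimal sketch of the scaled-step comparison is shown below, using the nongynecologic columns of Table 1; the function and threshold names are ours, introduced only to make the 2-step rule concrete.

```python
# Scaled diagnostic steps for nongynecologic specimens, from Table 1.
CYTOLOGY_STEP = {"benign": 0, "atypical": 1, "suspicious": 2, "malignant": 3}
SURGICAL_STEP = {"benign": 0, "malignant": 3}

def is_discrepant(cytology_dx: str, surgical_dx: str, threshold: int = 2) -> bool:
    """Flag a cytologic-histologic case pair whose diagnoses differ by
    `threshold` or more steps, which triggered chart review in the AHRQ studies."""
    return abs(CYTOLOGY_STEP[cytology_dx] - SURGICAL_STEP[surgical_dx]) >= threshold

# The example from the text: a benign fine-needle aspiration paired with a
# malignant biopsy differs by 3 steps and is flagged as a discrepancy.
assert is_discrepant("benign", "malignant")
assert not is_discrepant("suspicious", "malignant")  # only 1 step apart
```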

For nongynecologic pathology, the frequency of error ranged by institution from 4.87% to 11.8% of all correlating cytology and histology case pairs (P < .001).13 Across all institutions, harm occurred in 39% of all cases of error and was generally classified as low grade, consisting of unnecessary repeat testing or delays in diagnosis. Severe harm occurred in less than 2% of all cases of error. A second AHRQ study evaluated the patient safety of cervical cancer prevention services in 4 hospital systems from 1998 to 2004 in which patients underwent Pap testing followed by colposcopy with biopsy for specific Pap-test diagnoses.28 The researchers reported 5278 cytologic-histologic discrepancies (0.321% of all Pap tests procured during this time period) with approximately half of the errors occurring in the Pap-test phase and half occurring in the colposcopy phase of service. Unnecessarily repeated tests and diagnostic delays occurred in 79.8% and 63.9% of errors involving high- and low-grade lesions, respectively. The researchers reported that cervical cancer screening was highly successful in detecting squamous cell cancer (missed squamous cell cancer in only 1 of 187,786 Pap tests) but was associated with failures linked to minor or moderate harm consisting of overtreatment or unnecessary follow up.

Error Root Cause Analysis

Different methods of error root cause analysis (eg, the Eindhoven method29–31 or the Toyota method of asking the 5 whys32–34) focus on different categorizations of error, such as latent or active causes of error.35 Latent causes of error include system problems that contribute to individuals making active errors. Historically, oncologic pathology diagnostic error root cause analysis has centered on active errors of accuracy occurring in the analytic phase of testing, although a few studies have focused on errors in precision and on latent factors. The study of root cause analysis has the most meaning when error cause is linked to a quality improvement initiative. Because the TTP may be subdivided into a large number of work steps, root cause analysis often is implemented in conjunction with process mapping, in which specific process steps are identified.36–38 Figure 3 shows a high-level process map of an anatomic pathology laboratory.37 Errors in oncologic pathology diagnosis may be caused by process failures in specific accessioning steps, such as specimen mislabeling or compromise of specimen integrity.


Figure 3. This process map of an anatomic pathology laboratory shows the flow of a specimen as it traverses through an anatomic pathology laboratory. Ten large steps are identified as the specimen is accessioned, macroscopically examined, processed, and then interpreted by a pathologist. By examining the substeps of the larger steps during daily work processes, one may detect specific error-prone steps. In this process map, the flow of a specimen in the histology laboratory is characterized by excess movement and crossover, which reflect waste. The crossover points are also sites where specimen mix up may occur and may be targeted for redesign to reduce error. Source: Raab SS, Grzybicki DM, Condel JL, et al. Effect of Lean method implementation in the histopathology section in an anatomical pathology laboratory. J Clin Pathol. 2008;61:1193–1199.


Work steps may be categorized as 1) activities, or the processes that individuals perform; 2) connections, or the handoffs between individuals; and 3) pathways, or the flow of processes.39–45 Active or latent errors may be associated with each of these work steps, and root cause analysis is used to pinpoint the steps that are prone to fail. Most oncologic pathology tests consist of 200 to 300 unique steps from the time a test is ordered to the time the test result is acted upon.
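The practical weight of that step count is easy to underestimate, because step-level reliabilities compound multiplicatively. As a back-of-the-envelope illustration (our arithmetic, not a figure from the cited studies), if each of 250 steps succeeds independently with probability 0.999, the chance that a test traverses the entire process without a single step-level failure is only

$$0.999^{250} \approx 0.78,$$

that is, roughly 1 test in 5 would contain at least one failed step somewhere between ordering and clinical action.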

In oncologic pathology diagnosis, researchers have used different error-categorization schemes based on cause attributed to specific process steps.46–48 Meyer et al classified anatomic pathology diagnostic error causes as failures in patient identification (preanalytic or analytic causes), specimen quality (preanalytic or analytic causes), interpretation (analytic causes), or reporting (analytic or postanalytic causes).46 As mentioned above, root cause analyses of errors detected by cytologic-histologic correlation attribute cause to interpretation (analytic cause) or specimen procurement (preanalytic cause).49, 50 These classification schemes have examined mainly active components of error, although rare reports of system errors also have been studied.

Quality Improvement in Cancer Testing and Screening

Quality improvement initiatives vary in scope and complexity from a simple individual frontline change to a large-scale change involving multiple steps of several testing phases. Hospital and laboratory microsystems may use quality improvement systems, such as continuous quality improvement (CQI), total quality management (TQM), Six Sigma, and Lean (eg, the Toyota Production System [TPS]), to improve efficiency, another IOM quality metric, but also may introduce change to improve patient safety.11, 51 These improvement systems may fix problems in specific steps, sets of work steps, or systems. In the work-step process model, failures in early steps may result in failures in later steps.34, 37 For example, a specimen mix-up at the time of test procurement ultimately will lead to a diagnostic error (the correct diagnosis, but for the wrong patient).

In the following section, oncologic pathology diagnostic errors caused by failures in specific steps are examined and methods by which clinical practitioners and pathologists may improve the process are discussed.

Evidence for Quality Improvement Opportunities in Oncologic Diagnostic Testing and Screening

Significant improvements have occurred during the past decade in detection and treatment of malignancies in patients seen in both general and specialty oncology practices. Despite these improvements, substantial opportunities for quality enhancements in all phases of the total testing process exist. The eventual performance outcome of oncologic testing is highly dependent not only on the roles of pathologists during the analytic phase of testing but also on the roles clinical practitioners play in preanalytic and postanalytic phases of the total testing process. Opportunities in specific testing activities have been identified whereby clinicians may considerably influence the quality of the oncologic diagnostic and screening tests they use. These specific activities are discussed to provide clinicians involved in any aspect of oncologic care with evidence-based information to support the implementation of quality improvement changes in their testing-related practices. Activities that will be discussed are: 1) effective test ordering, 2) provision of pertinent clinical information with specimens submitted for testing, 3) procurement of a specimen of the highest quality possible, 4) appropriate handling and interpretation of tissues, 5) timely follow-up on test results, 6) effective communication with pathologist staff should problems or inconsistencies with final results or diagnoses exist, and 7) requesting secondary review of tissue samples when it appears to be crucial for obtaining a high-quality, valid diagnosis.

Although at least some of the activities listed above seem self-evident and consistent with current clinical intentions for every patient specimen obtained, the evidence strongly suggests that quality gaps still exist in these processes. Thus, they represent areas ripe for significant improvement in patient care. In addition, the majority of the peer-reviewed published work related to quality improvement in all diagnostic and screening testing to date has been performed by investigators outside of the United States. The international nature of the studies may impact the ability to generalize the reported findings to US oncologic testing and screening programs; however, the information presented here represents findings related to common tests and procedures, such as cervical cancer and colon cancer screening and diagnostic testing for lung lesions suspicious for malignancy. In addition, the international source of the available information clearly reveals the critical need for more research in this area by US investigators.

Effective Test Ordering

Much of the currently available evidence on physician test-ordering appropriateness and effectiveness relates to general clinical laboratory testing,52–60 although several studies have specifically addressed test ordering in oncology practice.61, 62 The development of guidelines in oncologic testing was driven partly by variability in clinical decisions made in test-ordering practice. For example, Rivera and Mehta reported the American College of Chest Physicians clinical practice guidelines for the initial diagnosis of lung cancer,62 although actual ordering of diagnostic pathology lung tests varies by clinician subspecialty.63

The general laboratory-medicine literature provides interesting and important reproducible information on interventions that appear to increase the appropriateness of laboratory test ordering that may be applied and tested on oncologic test-ordering patterns.

For example, multiple investigators have shown that educational interventions improve test-ordering patterns.52–56 Timely feedback on test ordering has been shown to be critical to the success of the process in the majority of these studies.54–56 Information technology applications (ie, use of predetermined computerized laboratory-testing schemes) also have been successful for decreasing laboratory utilization, as has the mandatory use of new test-ordering forms that adhere to national test-ordering guidelines.57–59 However, despite initial success, the latter intervention did not result in sustainable changes in physician test-ordering behavior in the cited studies.

Reports specifically addressing test-ordering appropriateness and effectiveness for cancer diagnosis have focused on colorectal cancer screening and on cervical cancer screening.64–73 In general, findings from these studies for both cancer types show a persistently high degree of variability in the application of these screening tests, despite evidence-based guidelines for both and evidence for cost effectiveness.71 The presence of a high degree of variability in screening practice patterns implies the existence of suboptimal screening and poorer outcomes for at least a portion of the screening-eligible population.

Two additional interesting and important findings in this area are that diagnostic biopsies in certain patient populations and for certain types of malignancies are more cost effective when performed percutaneously,74, 75 and that in general, physician tolerance for risk-taking is associated with test-ordering behavior.76, 77 Further studies are needed to show that the use of interventions that are designed to address these technical and behavioral factors are successful in increasing the appropriate and effective use of laboratory testing in oncology practice.

Provision of Pertinent Clinical Information with Specimens Submitted for Testing

Several observational studies that focused on a variety of malignancy types have demonstrated that the performance of pathologic and/or radiologic tests is improved when pertinent clinical information is included with the testing request.78–83 These studies have primarily used either a retrospective review or prospective observational design and focused on diagnostic testing for breast malignancies. In a recent report by investigators in Australia,80 knowledge of clinical information about the type and site of symptoms improved mammogram performance as measured by sensitivity, specificity, and receiver operating characteristic (ROC) curves. Particularly in oncologic testing, the diagnostic interpretation obtained through radiologic service testing significantly impacts the performance and/or interpretation of pathology-service testing and vice versa. In the case of diagnostic mammography, positive results are essentially always followed by a diagnostic breast biopsy, which requires interpretation by a pathologist. In this way, increased accuracy of mammography testing directly impacts the quality of patient care not only by providing the patient with a valid interpretation of the presence of disease but also by preventing the performance of an additional unnecessary diagnostic test.

A recent and related study performed in the United States illustrates the value of maximizing accurate mammography results, in part because of the “trickle down” effect mammography has on further testing.84 The investigators used Medicare claims data to estimate resource use and costs of the diagnostic workup of Medicare beneficiaries with suspected breast cancer. These authors reported that Medicare spends approximately $679 million annually on diagnostic workups for women with suspected breast cancer and that false-positive mammograms result in diagnostic costs of approximately $250 million, or roughly 40% of total costs.

Another recent observational study specifically describes the positive influence that the provision of pertinent clinical information has had on the accuracy of pathologic diagnoses.82 Focused on the interpretation of melanocytic lesion biopsies, this retrospective review of 99 atypical melanocytic lesions showed a significant increase in diagnostic agreement among 10 dermatopathology experts when pertinent clinical information was provided with the specimen for review. When given the additional clinical information and asked to re-review the slides, each of the pathologists changed his or her final pathologic diagnosis in 7 of the 99 cases.

Additional studies that demonstrate the need for pertinent clinical information in order for radiologists and pathologists to generate diagnostic interpretations with the highest sensitivity and specificity in oncologic care have focused on the specific diagnosis of urothelial carcinoma78 and breast sonographic interpretations.79, 81

All of the above recent studies support the conclusions of a systematic review of the literature published in 2004 by Loy and Irwig that evaluated results from studies aimed at describing the accuracy of diagnostic testing performed with and without clinical information.85 Their review included studies that examined the accuracy of either radiologic or pathologic diagnostic tests. Studies accepted for review were not restricted to those involving oncologic diagnostic testing, although oncologic diagnostic tests were included (eg, interpretation of bronchial brush biopsies). Two conclusions from this systematic review were that the practice of reading diagnostic tests with clinical information seems justified and that future studies should be designed to investigate the best way of providing clinical information. To date, information related to the second conclusion is still unavailable.

Of note, 2 studies we identified reported that the use of clinical information had little to no impact on the accuracy of diagnostic test results.86, 87 The first was a study that examined the usefulness of a mathematical model for predicting the outcomes of pregnancies of unknown location,86 and the second examined the impact of clinical information on the detection of early lung cancer by plain chest radiography.87

Although a smaller number of studies do describe conflicting results, the majority of studies currently available support the provision of pertinent clinical information as common practice for diagnostic testing (including oncologically related testing) due to its impact on the accuracy of test results. However, a significant barrier to realizing this process as common practice is the current lack of evidence for how this component of a crucial handoff point in oncologic care may best be consistently accomplished. The presence of clinical information may narrow the pathology differential diagnosis and reduce the cost of testing. Clinical information may optimize decisions about tissue fixation and processing. Although the presence of clinical information may result in interpretive bias, there is an absence of studies demonstrating this phenomenon in actual practice.

Procurement of a Specimen of the Highest Quality Possible

Data from several sources, such as reports of cytologic-histologic correlation error detection, indicate that the quality of specimen sample is critically important in making an accurate oncologic pathology diagnosis.13, 49, 50 Cytologic-histologic correlation data show that from 50% to 80% of errors are secondary to sampling error, although detailed root cause analysis of sampling failure has rarely been performed.88, 89

Traditionally, in cytologic-histologic correlation root cause analysis, a pathologist retrospectively reviews microscopic slides and classifies the error as interpretation-related or sampling-related. For example, in a false-negative case, if the review pathologist retrospectively identifies tumor, the error is classified as interpretation-related; if tumor is not identified, the error is classified as sampling-related. This method does not specifically focus on specimen quality and its relation to error.

Raab et al created a novel cytologic-histologic correlation root cause analytic method, known as the No Blame Box, that pathologists may use to assess false-negative cases in terms of sample quality and amount of tumor (Fig. 4).90 In an optimal cytologic specimen from a lesion that is cancerous, abundant cancer cells would be present and there would be an absence of factors that limit interpretation, such as obscuring blood or inflammation, poor fixation, or improper preparation. Limiting factors are secondary to both preanalytic factors (eg, the patient bleeds profusely during specimen procurement, and blood obscures diagnostic slide material) and analytic factors (eg, laboratory slide preparation is too thick). The No Blame Box, Figure 4, shows the assessed cause of a false-negative error in 40 patients who had cancer and a negative lung bronchial brush or wash cytology specimen. In the majority of cases, a contributing factor to error was poor specimen quality and, in many cases, interpretation also was a cause. These data indicate that the traditional method of root cause analysis (ie, either sampling or interpretation) is too superficial to determine error cause. A pathologist's false-negative misinterpretation almost always is secondary to undercalling a poor quality specimen or a specimen with only rare tumor cells. These findings were confirmed in a study by Nodit et al who performed root cause analysis on 32 false-negative lung bronchial cytology specimens and found that in 97% of cases, specimen procurement and preparation issues were major contributing factors to error.88 In only 1 case was abundant tumor overlooked.


Figure 4. This is the No Blame Box. The slides from 40 false-negative cytology errors were evaluated by a pathologist and assessed in terms of specimen quality and amount of tumor. Each oval represents the assessment of specimen quality and amount of tumor present for each of the 40 cases. The pathologist classified the majority of specimens as being of poor quality. Source: Raab SS, Stone CH, Wojcik EM, et al. Use of a new method in reaching consensus on the cause of cytologic-histologic correlation discrepancy. Am J Clin Pathol. 2006;126:836-842. ©2006 American Society for Clinical Pathology.

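The two-axis logic of the No Blame Box can be sketched as follows; the axis labels come from the figure, but the cell interpretations are our hypothetical summary of the pattern described in the text, not part of the published method.

```python
def no_blame_cell(specimen_quality: str, tumor_amount: str) -> str:
    """Place a false-negative case on the 2 axes of the No Blame Box
    (specimen quality: 'poor' or 'good'; tumor amount: 'rare' or 'abundant')
    and suggest the dominant contributing factor."""
    if specimen_quality == "poor" and tumor_amount == "rare":
        return "sampling and preparation dominant; interpretation may contribute"
    if specimen_quality == "poor":
        return "preparation dominant"
    if tumor_amount == "rare":
        return "sampling dominant"
    return "interpretation dominant: abundant tumor on an adequate slide was missed"

# Per the text, most of the 40 reviewed false negatives fall in the
# poor-quality cells; pure interpretation failures are rare.
print(no_blame_cell("poor", "rare"))
```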

The complex interplay between specimen quality and a pathologist's interpretation also has a role in errors of precision. The No Blame Box data indicated that pathologists made a variety of diagnoses on specimens that contained a limited amount of tumor: some pathologists made outright diagnoses of malignancy, whereas others rendered indeterminate or even benign diagnoses. Pathologists showed higher diagnostic reproducibility when a large amount of tumor was present on the slide.

In summary, these root cause analytic findings indicate that the majority of false-negative errors in oncologic pathology diagnostic tests occur in specimens of lower quality. In screening tests, false-positive errors also are more likely to occur in poorer quality specimens. For example, causes of rendering an indeterminate cytologic diagnosis of atypical squamous cells of undetermined significance (ASC-US), rather than a definite squamous intraepithelial lesion (SIL) or benign diagnosis, in cervical cancer Pap-test screening include the absence of a sufficient number of diagnostic cells, procurement failures, or processing failures.28

Traditional means of improving quality have been educational initiatives that focus on technical aspects of specimen procurement; often, these initiatives are used for training on new technologies. Most clinicians receive no feedback on specimen quality, or find out only at a later time that a specimen was diagnosed as unsatisfactory. Quality improvement initiatives focus on rapid and long-term feedback, change in process through self-assessment or self-awareness, or the use of best-practice protocols. Some of these initiatives are well publicized, although others have been implemented as frontline changes at single institutions as part of a more global quality improvement program.91–94

Initiatives

Use Standardized Tissue Procurement Protocols

The number, type, or location of procured specimen samples may determine whether a diagnosis of cancer is made when cancer is present. For some areas of specialty oncologic pathology testing, researchers have carefully correlated the number and location of procured specimen samples with false-negative rates. Clinical practitioners have used data from these studies to develop optimal practices of specimen procurement.

For example, the prostate biopsy for the detection of prostate cancer has evolved from a digitally guided biopsy method to the transrectal ultrasound-guided (TRUS) systematic biopsy method, which is the current standard of care.95–97 Because early stage prostate cancer is neither hypoechoic on TRUS nor palpable, random systematic biopsies are necessary for prostate cancer detection. In 1989, Hodge et al showed that 6 random, evenly distributed biopsies were optimal for prostate cancer detection, and this method became known as the traditional sextant biopsy method.98 By the mid to late 1990s, several researchers reported that the sextant biopsy strategy had a high false-negative rate, as cancers in other areas of the prostate were missed. To improve cancer detection, alternative strategies of increasing the number of biopsies and sampling other locations were proposed.99–101 Although expert consensus does not exist on the optimal strategy, most experts agree that the sextant biopsy is not adequate and recommend extended biopsy protocols involving the procurement of 10 to 14 cores.95, 102–104 For men who have had negative sextant biopsies, some practitioners recommend the use of saturation biopsies consisting of more than 20 cores.95, 105–107

Clinical practitioners in other subspecialties also have developed standardized protocols for biopsy procedures based on the number and/or location of biopsy samples. These subspecialties include cervical cancer screening involving the number and location of colposcopically directed biopsies108, 109 and colon cancer detection in patients who have inflammatory bowel disease with endoscopically directed biopsies.110

Use of Checklists

Clinical practitioners have standardized other activities, connections, and pathways in tissue-procurement steps to optimize specimen quality. For example, Sidiropoulos et al standardized steps in thyroid gland fine-needle aspiration biopsy (FNAB) sampling and slide preparation.111 In this study, before process standardization, FNABs were performed by a variety of subspecialty clinicians, with or without ultrasound guidance, using variably sized needles, different numbers of passes and smears, and different staining techniques. After standardization of these variables, the proportion of satisfactory samples increased from 67% to 89% (odds ratio [OR], 3.82; P < .0001).
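As a check on the reported effect size, the odds ratio can be approximated from the rounded adequacy proportions (the published value of 3.82 was presumably computed from the underlying counts, so the two figures differ slightly):

$$\mathrm{OR} \approx \frac{0.89 / 0.11}{0.67 / 0.33} = \frac{8.09}{2.03} \approx 4.0.$$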

Clinical practitioner use of checklists is another method used to standardize the steps in work practice.93, 112 Clinicians normally go through a stepwise procedure during Pap-test procurement: they examine the cervix, look for abnormalities, and obtain specimen material by using generally specified methods. However, with the passage of time, much of the work becomes rote, and process steps may be bypassed. The use of a checklist allows a clinician to focus attention on every step in the Pap-test procurement process. One measure of Pap-test quality is the presence of an adequately sampled transformation zone, the region where most cervical intraepithelial lesions develop. In a quality improvement study, 5 gynecologists implemented the use of a checklist, and preintervention (n = 5384 Pap tests) quality was compared with postintervention (n = 5442 Pap tests) quality.93 The clinicians showed a statistically significant decrease in Pap tests without a transformation zone component (P = .011) and also showed a 114% increase in the detection of squamous intraepithelial lesions (P = .004) after the intervention.
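The article does not state which statistical test produced these P values; a minimal sketch of one conventional way to compare pre- and postintervention proportions (a chi-square test on a 2 x 2 table) is shown below, with invented counts chosen only to match the reported denominators.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts of Pap tests lacking a transformation zone component,
# before (n = 5384) and after (n = 5442) checklist implementation.
missing_tz = [430, 350]   # invented for illustration
totals = [5384, 5442]     # denominators reported in the study
table = [[missing_tz[0], totals[0] - missing_tz[0]],
         [missing_tz[1], totals[1] - missing_tz[1]]]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, P = {p_value:.4f}")
```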

Nooh et al reported that quality-assurance audits against national performance standards (ie, checklists) could identify areas for improvement in the colposcopy services of teaching hospitals in the Cardiff and Vale Trust, University of Wales, Cardiff, South Wales, United Kingdom.113 The National Health Service Cervical Screening Programme monitors 10 auditable parameters, some of which directly affect the quality of diagnostic pathology services.114, 115 Among these parameters are obtaining a biopsy in greater than 90% of women who have a high-grade smear and obtaining a sufficient sample in 90% of cases in which a biopsy was taken. Through the use of checklists of these parameters, Nooh et al showed that hospitals could track clinic performance that, in turn, would affect pathology services. Studies evaluating the implementation of practice changes to improve these parameters have rarely been reported.

Use of Specimen Adequacy Assessments

The use of specimen adequacy assessments has been shown to improve specimen quality. Adequacy assessments are widely used in some fields of oncologic pathology, such as Pap testing,93, 116 but rarely used in a uniform manner in other fields of cytology or in surgical pathology services. Specimen adequacy statements have the highest level of impact when clinicians are able to alter practice on the basis of feedback. Our view, based on our experience in clinical practice, is that some pathologists use adequacy statements infrequently because of fear that clinicians will send their samples elsewhere should the inadequate rate be too high. In some fields of pathology diagnosis, adequacy statements have not been sufficiently standardized for widespread dissemination. Standardized specimen adequacy statements have been developed in other areas such as cervical-vaginal Pap testing and thyroid gland fine-needle aspiration, although few studies have examined the interobserver variability in the use of these statements.

Practitioners at individual institutions have developed adequacy statements for clinician feedback that have improved overall specimen quality. For example, in thyroid gland FNAB, root cause analysis has shown that some false-negative diagnoses are secondary to poor-quality specimens being diagnosed as benign because they do not meet the minimum criteria for an unsatisfactory designation, which would prompt a repeat FNAB.89 Poller et al developed an indeterminate category of diagnosis that may be used to express uncertainty in the risk of neoplasia.91, 92 Raab et al similarly developed a category that they called “nonspecific,” which was used for less than optimal specimens.94 With the introduction of these categories, both clinical groups initially reported a high frequency of use, indicating that pathologists previously were classifying indeterminate lesions as benign. Raab et al showed that the use of the nonspecific category improved the sensitivity of FNAB compared with preimplementation practice.94

Use of Immediate Feedback Services

Although adequacy statements assist clinicians in understanding specimen quality, they usually are provided later, at a time when the clinicians may no longer recall the factors that might have resulted in the procurement of a less than optimal specimen. In the ideal state, immediate feedback on specimen quality indicates, at the time of the procedure, whether additional material is needed. Immediate feedback is used in frozen-section interpretation and in FNAB for patients who have lesions suspicious for cancer.91, 94

In FNAB, immediate interpretation lowers the false-negative rate.94 Researchers have shown that immediate feedback from the FNAB service to the physician performing the biopsy improves specimen quality and decreases the number of passes necessary to obtain adequate material.117 Kocjan et al reported that the British Society for Clinical Cytology Code of Practice recommends the use of immediate assessment in FNAB, as well as other evidence-based recommendations for setting up FNAB services, taking samples, preparing slides, and classifying diagnoses.117

Appropriate Handling and Interpretation of Tissues

Optimal performance of the steps involved in the analytic phase of diagnostic testing and screening is important to the quality of oncologic pathology diagnosis. The main steps in the analytic phase of many anatomic pathology practices are shown in Table 2. From our practical experience, we know that the standardization of process substeps in anatomic pathology laboratories exhibits variability within and among laboratories, although this is not much different from the current state of institutional variability in the standardization of steps in other testing and screening phases, such as the preanalytic phase of tissue procurement (eg, lung biopsy or surgical excision).

Table 2. Substeps in Anatomic Pathology Process

Accessioning steps
  Specimen receipt in laboratory (transport hand-off): Hospital/transport/courier personnel hand off specimens to laboratory personnel.
  Identification check: Laboratory personnel check that specimen containers and requisition contain appropriate matching identifiers.
  Assignment of unique laboratory identifier: Specimens are assigned unique identifiers in laboratory information systems.

Gross examination steps
  Identification check: Laboratory personnel check that tissues and accompanying information match.
  Gross examination of specimen: Laboratory personnel visually examine specimens in terms of volume and other characteristics (color, lesions, etc). Descriptions are included in pathology reports.
  Sectioning of specimen: For larger specimens, laboratory personnel use a variety of cutting instruments to examine further the internal specimen characteristics.
  Preparation of tissues for processing: Tissues may be prepared in a variety of ways for further examination, including histologic examination and ancillary testing. For histologic examination, laboratory personnel prepare thin sections that are placed in tissue cassettes and fixed in formalin.

Processing steps (for histologic examination)
  Tissues processed: Tissues are placed in one of several types of processors that dehydrate the tissues.
  Identification check: Laboratory personnel visually match tissue cassettes received with records and evaluate cassette integrity following processing.
  Tissues embedded in paraffin: Laboratory personnel embed tissue in paraffin to create tissue blocks.
  Tissues thinly sectioned: Laboratory personnel use microtomes to thinly section the paraffin blocks. The thin sections are placed on glass slides.
  Slides stained: Hematoxylin and eosin is the preferred stain for most histologic tissue sections.
  Slides cover-slipped: A thin layer of glass or plastic is placed on top of the slide.
  Slides transported to pathologists: Slides from the same patient (case) are assembled and brought to the pathologist for interpretation.

Interpretation steps
  Identification check: Pathologists match the tissue slides and requisition information.
  Pathologists examine slides microscopically: Pathologists place slides under light microscopes and examine the tissues. Diagnostic interpretations are made using histologically observed criteria. Pathologists may choose to order ancillary tests, such as immunohistochemical tests.
  Pathologists prepare a report: Reports contain an interpretation based on findings from microscopic and gross examinations.

Reporting steps
  Reports sent to clinical providers: Reports are sent in a variety of ways, including mail, facsimile, and the Internet.
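Table 2 reads as a pipeline in which an identification check guards each handoff; the hypothetical sketch below encodes that guard pattern (the stage names follow Table 2, but the code is ours, not a laboratory information system API).

```python
from dataclasses import dataclass

@dataclass
class Specimen:
    patient_id: str     # identifier on the specimen container
    accession_id: str   # unique laboratory identifier assigned at accessioning

# Stages from Table 2 at which an identification check occurs.
CHECKED_STAGES = ["accessioning", "gross examination", "processing", "interpretation"]

def identification_check(specimen: Specimen, requisition_patient_id: str,
                         stage: str) -> None:
    """Verify that specimen and requisition identifiers match at a handoff;
    a mismatch here is where mislabeling errors are meant to be caught."""
    if specimen.patient_id != requisition_patient_id:
        raise ValueError(f"identification mismatch at {stage}: "
                         f"{specimen.patient_id!r} != {requisition_patient_id!r}")

specimen = Specimen(patient_id="P-1234", accession_id="S-0001")
for stage in CHECKED_STAGES:
    identification_check(specimen, "P-1234", stage)  # passes at every handoff
```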

Pathology organizations, governmental bodies, and individuals have helped to establish baseline levels of quality through methods such as benchmarking, accreditation, and laboratory and professional competency assessment. These methods of assessing levels of quality have focused on the evaluation of some quality measures (eg, safety, timeliness) and not others (eg, equity, effectiveness). The level of governmental regulation of analytic testing and screening far exceeds that of the regulation of all other testing phases combined.

The College of American Pathologists (CAP) has a long history of establishing baseline quality practices in some analytic testing substeps.118 The CAP and other organizations accredit laboratories through inspections that involve the assessment of quality practices in multiple substeps of practice. Table 3 provides a summary of CAP-sponsored patient safety research establishing baseline safety measurements.48, 49, 121–123 These studies describe the current level of safety practice within multiple institutions (Q-PROBES) or, more recently, the level of safety practice within institutions and across time (Q-TRACKS). Because multiple institutions were involved in data collection, these studies provide a window into the variability in practice among institutions. Because the data are institutionally self-reported, root causes of the variability, including differences in institutional quality improvement activities, are difficult to ascertain. These data sources have tended to focus on the output of the interpretive step of diagnostic testing and screening; the role of earlier failures in analytic and preanalytic steps generally was not examined.

Table 3. College of American Pathologists Q-PROBES™ and Q-TRACKS Studies of Patient Safety

YEAR | AUTHORS | PHASES INVOLVED | TITLE OF ARTICLE | FINDINGS
2008 | Raab119 | Interpretive (analytic) | The Effect of Continuous Monitoring of Cytologic-Histologic Correlation Data on Cervical Cancer Screening Performance | In this Q-TRACKS study, longer institutional participation in this yearly program was significantly associated with a higher positive predictive value of a positive Pap test (P = .01), higher Pap test sensitivity (P = .002), and higher Pap test sampling sensitivity (P = .03).
2007 | Tworek120 | Interpretive (analytic) | The Value of Monitoring Human Papillomavirus DNA Results for Papanicolaou Tests Diagnosed as Atypical Squamous Cells of Undetermined Significance. A College of American Pathologists Q-Probes Study of 68 Institutions | The median institutional percentage of human papillomavirus-positive results in women diagnosed with a Pap test of atypical squamous cells of undetermined significance was 46.8%, with 10th and 90th institutional percentiles of 18.0% and 64.0%, respectively.
2006 | Raab121 | Interpretive (analytic) | The Value of Monitoring Frozen-Section–Permanent-Section Correlation Data Over Time | In this Q-TRACKS study, longer institutional participation in this yearly program was associated with lower discordant frozen-section–permanent-section frequencies (P = .04) and lower deferred case rates (P = .04).
2006 | Valenstein48 | Preanalytic and analytic, all phases | Identification Errors Involving Clinical Laboratories. A College of American Pathologists Q-Probes Study of Patient and Specimen Identification Errors at 120 Institutions | Specimen-identification errors from clinical and anatomic laboratories were combined. The median number of identification errors per 1,000,000 billable tests was 390 (10th institutional percentile, 1291; 90th institutional percentile, 78). The authors estimated 160,000 adverse events per year as a result of misidentification of laboratory specimens.
2005 | Raab122 | Interpretive | Patient Safety in Anatomic Pathology: Measuring Discrepancy Frequencies and Causes | In this Q-PROBES study, using all types of secondary review policies, the mean and median laboratory (n = 74) discrepancy frequencies were 6.7% and 5.1%, respectively.
2000 | Jones123 | Postanalytic | Follow-Up of Abnormal Gynecologic Cytology. A College of American Pathologists Q-Probes Study of 16,132 Cases From 306 Institutions | The following percentages of women received post-Pap-test follow-up within 1 year: 85.6% with a Pap test cytologic diagnosis of carcinoma, 87.2% with a diagnosis of HSIL, and 82.7% with a diagnosis of LSIL.
1999 | Nakhleh124 | Preanalytic | Necessity of Clinical Information in Surgical Pathology. A College of American Pathologists Q-Probes Study of 771,475 Surgical Pathology Cases From 341 Institutions | A total of 5594 (0.73%) cases required additional clinical information for diagnosis (10th through 90th percentile range, 3.01% to 0.08%).
1999 | Novis125 | Interpretive | Diagnostic Uncertainty Expressed in Prostate Needle Biopsies. A College of American Pathologists Q-Probes Study of 15,753 Prostate Needle Biopsies in 332 Institutions | The median rate of diagnostic uncertainty was 6% (0% at the 10th percentile and 14% at the 90th percentile).
1998 | Nakhleh126 | All analytic phases | Amended Reports in Surgical Pathology and Implications for Diagnostic Error Detection and Avoidance. A College of American Pathologists Q-Probes Study of 1,667,547 Accessioned Cases in 359 Institutions | The median institutional amended report rate was 1.46 per 1000 cases (10th institutional percentile, 0.22; 90th percentile, 4.75 per 1000 cases).
1997 | Nakhleh127 | Reporting | Mammographically Directed Breast Biopsies. A College of American Pathologists Q-Probes Study of Clinical Physician Expectations and Specimen Handling and Reporting Characteristics in 434 Institutions | Margin status was reported in 92% of malignant cases, lesion size in 77% of reports, and tumor grade in 83% of reports.
1995 | Jones128 | Interpretive | Rescreening in Gynecologic Cytology. Rescreening of 8096 Previous Cases for Current Low-Grade and Indeterminate-Grade Squamous Intraepithelial Lesion Diagnoses. A College of American Pathologists Q-Probes Study of 323 Laboratories | Of the rescreened Pap tests, 3.5% were reclassified as squamous intraepithelial lesion or carcinoma, and 5.9% were reclassified as atypical squamous cells of undetermined significance.
1996 | Jones49 | Interpretive (and sampling) | Cervical Biopsy-Cytology Correlation. A College of American Pathologists Q-Probes Study of Over 22,439 Correlations in 348 Laboratories | The sensitivity and specificity of the cytology smear (based on correlating the Pap smear with histologic follow-up results) were 89.4% and 64.8%, respectively; 6.5% of women who had a high-grade squamous intraepithelial lesion diagnosed on the Pap smear had a benign tissue biopsy diagnosis.
1996Jones49Interpretive (and sampling)Cervical Biopsy-Cytology Correlation. A College of American Pathologists Q-Probes Study of Over 22,439 Correlations in 348 LaboratoriesThe sensitivity and specificity of the cytology smear (based on correlating the Pap smear with histologic follow-up results) was 89.4% and 64.8%, respectively; 6.5% of women who had a high-grade squamous intraepithelial lesion diagnosed on the Pap smear had a benign tissue biopsy diagnosis.
1996Gephardt129InterpretiveInterinstitutional Comparison of Frozen Section Consultations. A College of American Pathologists Q-Probes Study of 90,538 in 461 InstitutionsThe overall frozen-section–permanent-section discordance rate was 1.42%; 31.8% of discordant frozen sections occurred because of misinterpretation.
1996Gephardt130ReportingLung Carcinoma Surgical Pathology Report Adequacy. A College of American Pathologists Q-Probes Study of Over 8300 Cases From 120 InstitutionsA standard report was used in 20.8% of cases, the presence or absence of microscopic venous invasion noted in 2.6% of cases, and the presence or absence of neoplasm at the bronchial margin was noted in 90.8% of cases.
1996Gephardt131Gross examination and processingExtraneous Tissue in Surgical Pathology. A College of American Pathologists Q-Probes Study of 275 InstitutionsA contaminant (ie, extraneous tissue) was found in 2.9% of slides using a retrospective review method (the institutional 10th percentile showed that contaminant was found in 8.8% of slides and 22.0% of cases).
1996Novis132InterpretiveInterinstitutional Comparison of Frozen Section Consultation in Small Hospitals. A College of American Pathologists Q-Probes Study of 18,532 Frozen Section Consultation Diagnoses in 233 Small HospitalsThe mean frozen-section–permanent-section discordant rate was 1.8%. Of institutions that processed ≥50 frozen sections, 5.8% had a discordance rate above 7.5%, and 24.8% had a discordance rate of 0%.
1996Nakhleh133Accessioning (and preanalytic)Surgical Pathology Specimen Identification and Accessioning. A College of American Pathologists Q-Probes Study of 1,004,115 Cases From 417 Institutions.The median institutional identification and accessioning deficiency rate was 3.4% (of all institutional specimens) with a reported range of 0% to 98.6%.
1995Gephardt134ReportingInterinstitutional Comparison of Bladder Carcinoma Surgical Pathology Report Adequacy. A College of American Pathologists Q-Probes Study of 7234 Bladder Biopsies and Curettings in 268 InstitutionsIn invasive bladder cancers, the tumor type and histologic grade were provided in 99.3% and 93.6% of cases, respectively. In 18% of invasive cancers, the presence or absence of muscularis propria was not stated.
1996Jones135InterpretiveRescreening in Gynecologic Cytology. Rescreening of 3762 Previous Cases for Current High-Grade Squamous Intraepithelial Lesions and Carcinoma. A College of American Pathologists Q-Probes Study of Practice Patterns From 312 InstitutionsFor rescreened Pap smears, the overall false negative rate was 19.7%.
1992Zarbo136Interpretive, reportingInterinstitutional Assessment of Colorectal Carcinoma Surgical Pathology Report Adequacy. A College of American Pathologists Q-Probes Study of Practice Patterns From 532 Laboratories and 15,940 ReportsSummarizing data across all participants, 72.5% of institutions reported microscopic descriptions, and only 12.5% of institutions used standardized reporting formats.
1992Zarbo137InterpretiveInterinstitutional Database for Comparison of Performance in Lung Fine-Needle Aspiration CytologyIn 436 institutions, the sensitivity and specificity of lung fine-needle aspiration was 99% and 96%, respectively. The false-negative and false-positive interpretation frequency was 8% and 0.8%, respectively.
1991Zarbo138InterpretiveInterinstitutional Comparison of Performance in Breast Fine-Needle Aspiration CytologyIn 294 institutions, the sensitivity of fine-needle aspiration was 97% (10,751 satisfactory aspirates); 18% of breast aspirates were unsatisfactory; the incidence of a false-negative diagnosis was 7.1%.
1991Zarbo139InterpretiveInterinstitutional Comparison of Frozen-Section ComparisonIn 297 institutions, the concordance between frozen-section and permanent-section diagnosis was 96.5%; 40% of errors were determined to be secondary to misinterpretation.
1990Howanitz140InterpretiveThe Accuracy of Frozen-Section Diagnosis in 34 HospitalsConcordance between the frozen-section and the permanent-section diagnosis was 96.5%; 44% of errors were secondary to inappropriate sampling.
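Several of the Table 3 studies summarize performance with measures such as sensitivity, specificity, and positive predictive value derived from cytologic-histologic correlation. As a reminder of how those measures are computed, the sketch below works through a 2 x 2 correlation table; the counts are invented for illustration and are not taken from any cited study.

```python
# Hypothetical cytologic-histologic correlation counts.
# Rows: Pap result (abnormal/benign); columns: histologic follow-up (abnormal/benign).
tp, fp = 850, 330   # abnormal Pap: histology abnormal / histology benign
fn, tn = 100, 620   # benign Pap: histology abnormal / histology benign

sensitivity = tp / (tp + fn)   # abnormal cases the Pap test detected
specificity = tn / (tn + fp)   # benign cases the Pap test correctly called benign
ppv = tp / (tp + fp)           # positive predictive value of an abnormal Pap
print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}, PPV {ppv:.1%}")
```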

Laboratories have attempted to decrease oncologic-testing and screening-practice variability through the implementation of technological solutions and/or through process redesign that focuses on standardizing work steps and adopting best practices. There is a relative absence of data linking the implementation of analytic technological and/or process redesign solutions to improved patient outcomes, or even to intermediate outcomes. Quality improvement initiatives that implement technological solutions and process redesign often target quality measures such as efficiency and timeliness alongside safety measures, in a manner that makes their specific effect on error difficult to examine. Examples of published patient-safety quality improvement initiatives in several of the major steps of analytic practice are provided below.

Tissue Accessioning Steps

An error with potentially major consequences that occurs in the process of accessioning is specimen misidentification. In a Q-PROBES study, Valenstein et al reported a baseline mean institutional specimen-identification error frequency (involving both clinical and anatomic pathology laboratories) of 390 per 1,000,000 billable tests.48 Many of these errors occurred preanalytically and were detected in the accessioning step of the analytic phase; however, a proportion of these errors occurred in the accessioning step itself. Using an observational technique, Smith et al reported that poorly designed accessioning systems contributed to operator-dependent errors and near-miss events (events that, if not caught and corrected, may lead to a patient specimen being mixed up with a different patient's specimen) at frequencies of 5.5 and 0.7 per specimen, respectively.141 Zarbo and D'Angelo compared the occurrence of defects (including those related to laboratory waste and efficiency, as well as quality) before and after a quality-improvement intervention using Lean methods. They reported that, at baseline, 33% of all defects occurred in the preanalytic phase of testing and that 75% of preanalytic defects involved the accessioning step,142 a frequency similar to that reported by Raab et al.143 Process redesign of the accessioning phase has led to a reduction of process errors. After the implementation of Lean methods, Zarbo and D'Angelo reported that the frequency of accessioning-step defects (as a percentage of all preanalytic defects) decreased to 36.8%,142 similar to the findings of Smith et al.141 Process redesign has involved the implementation of continuous-flow methods and physical barriers preventing specimen mix-ups.24, 144

Bar coding and other related technologies offer an alternative solution for reducing the frequency of specimen-identification errors. Bar-coding technologies may be implemented at the accessioning step or before preanalytic steps. Zarbo et al showed that the implementation of a bar-coding technology, conjointly with workflow standardization, decreased the specimen misidentification frequency by approximately 62%, although this decrease also included error reduction at steps downstream of the accessioning step.145
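The core of such a bar-coding safeguard is an automated cross-check of patient identifiers at accessioning. The sketch below is a minimal illustration of that idea, not any vendor's implementation; the field names and the all-identifiers-must-match rule are assumptions made for illustration.

```python
# Minimal sketch: a scanned specimen label must match the requisition on all
# identifiers before a case is accessioned; any mismatch is held for manual review.
from dataclasses import dataclass

@dataclass
class Label:
    patient_name: str
    date_of_birth: str
    medical_record_number: str

def identifiers_match(specimen: Label, requisition: Label) -> bool:
    return (specimen.patient_name == requisition.patient_name
            and specimen.date_of_birth == requisition.date_of_birth
            and specimen.medical_record_number == requisition.medical_record_number)

specimen = Label("DOE, JANE", "1955-03-02", "MRN123456")
requisition = Label("DOE, JANE", "1955-03-02", "MRN123465")  # transposed digits
print("accession" if identifiers_match(specimen, requisition) else "hold for manual review")
```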

Tissue Gross Examination Steps

Errors that occur in the process of gross (macroscopic) examination include specimen misidentification, tissue loss, inappropriate sampling (eg, not sampling tumor or tumor margins, etc) and inappropriate reporting of gross findings that are used in tumor staging. As described above, bar coding and related technologies contribute to the reduction of identification errors.145 Process-improvement activities also may be used to reduce identification and other types of errors in the gross examination step.

Galvis et al showed that, compared with resident trainees, who generally are inexperienced, pathologists' assistants (PAs), who are experienced gross-examination room personnel, perform at a higher level in specific macroscopic-examination tasks, such as sampling lymph nodes for downstream histologic examination.146 PAs are trained to perform tasks such as standardized gross tissue examination and postmortem examinations. PAs are highly skilled at these tasks, and their expertise at gross examination exceeds that of many pathologists. Galvis et al showed that, when compared with resident trainees, PAs retrieved, on average, 4.7 more lymph nodes per specimen in resections involving cancer specimens of the colon and breast. For some cancer types, the number of lymph nodes sampled, as well as the number of positive nodes, is an independent predictor of outcome.147–150 Galvis et al also reported that, when compared with residents, PAs more accurately sampled cancer tissues (other than lymph nodes), as measured by tissue resubmission rates. These findings indicate that standardized protocols and experience improve the quality of cancer care in gross examinations.

Surgeons use intraoperative frozen sections to determine whether tumors are present in excised or biopsy material. Intraoperative frozen sections are a form of feedback that guides immediate patient management. Howanitz et al and other authors have reported that greater than 40% of errors in intraoperative frozen-section diagnosis result from failure to identify tumors in gross specimens.121, 129, 132, 139, 140 The implementation of standardized frozen-section techniques and gross-examination expertise in the selection of tissues for frozen section are 2 proposed solutions to reduce this error type.

Tissue Processing Steps

Errors that occur in tissue processing steps include specimen misidentification, tissue loss, and inappropriate embedding/sectioning resulting in less-than-optimal slide preparation. Lean process changes in the histology preparation steps have been reported to improve specimen quality, although reports of quality improvement initiatives involving specific steps are relatively rare. Platt et al examined specimen misidentification in tissue processing, which may consist of tissue “floaters” or contaminants131: material from another patient's specimen inappropriately placed on a specific patient's glass slide.151 Platt et al found that staining baths are one source of tissue contamination and showed that cross-contamination of blank slides may occur at a frequency of 8%. Pathologists generally recognize contaminant tissue during the interpretation step, although, based on practice experience, contaminants may cause confusion and be a source of interpretation error.

Appropriate tissue fixation is necessary for the diagnostic interpretation of light microscopic slides and ancillary tests. Wolff et al reported that 20% of human epidermal growth factor receptor 2 (HER-2) assays performed in the field were incorrect.152 The HER-2 gene is amplified in 20% to 25% of human breast cancers,153 and HER-2 amplification and overexpression are recognized as important markers for aggressive disease and are the molecular targets for specific therapies such as trastuzumab (Herceptin; Genentech, South San Francisco, CA) and lapatinib (GlaxoSmithKline, London, UK).154 Variable tissue fixation, especially ethanol exposure, and antigen-retrieval methods may lead to incorrect HER-2 immunohistochemical results.154–156 Errors may be caused if the period of formalin fixation is too short or if insufficient quantities of formalin are used.154–157 Other pitfalls related to immunohistochemical testing are nonspecific binding in tissue altered by crush artifact and cautery artifact, factors related to tissue procurement failures.154–160 The American Society of Clinical Oncology (ASCO)-CAP practice guidelines were published to address these issues.152, 154

Interpretation Steps

Errors that occur in the process of interpretation include specimen misidentification (switching one patient's slide for another) and misinterpretation of the light microscopic and/or ancillary testing findings. Reporting failures are described below. In oncologic pathology testing and screening, interpretation error has been studied far more extensively than any other error type. Diagnostic interpretation errors may be classified into the categories of slips and mistakes, and studies of cognition have helped to define the causes of some error types.

As mentioned previously, the root cause of an interpretation error generally includes a cognitive failure in conjunction with upstream failures or system failures. For example, poor-quality specimens may be overinterpreted or underinterpreted, and latent system problems include pathologist overwork, lack of experience, and lack of appropriate redundant systems. The current US medical-legal system often ignores system problems and focuses on individual culpability, an approach contrary to improving safety systems.

Error-reduction initiatives that target the cognitive component of interpretation error include the implementation of standardized diagnostic criteria, educational initiatives, and the development of redundant systems. As mentioned previously, Raab et al showed that the use of standardized diagnostic terms with criteria reduced errors in thyroid gland FNA specimens.94 The CAP and the American Society for Clinical Pathology (ASCP) provide continuing medical education (CME) to pathologists through glass-slide tests that are offered to participating laboratories.161, 162 Examining these glass slides in an educational setting allows pathologists to improve their skills and use of diagnostic criteria.

Secondary review is a form of redundancy that may be used to detect and/or prevent error. It is the practice in which at least 1 additional pathologist examines a case and renders a diagnostic interpretation that is compared with the original interpretation. Secondary review often is performed in a blinded fashion and may occur before or after a case is reported or signed out.
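The value of this redundancy can be made concrete with a back-of-the-envelope model (an illustration, not an analysis from the cited studies): if interpretive errors occur at a base rate e and an independent second reviewer catches a fraction c of them, the undetected-error rate falls to e(1 - c). The independence of the two reads is the key assumption; shared blind spots would weaken the benefit.

```python
# Residual undetected-error rate after independent secondary reviews.
def residual_error_rate(base_rate: float, catch_fraction: float, reviews: int = 1) -> float:
    return base_rate * (1 - catch_fraction) ** reviews

e = 0.05  # hypothetical 5% interpretive error rate
c = 0.80  # hypothetical fraction of errors a second reviewer detects
print(f"no review: {e:.1%}; one independent review: {residual_error_rate(e, c):.1%}")
```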

Pathologists have not standardized the process of secondary review of diagnostic interpretation.9, 13 Secondary review policies applied before finalizing the interpretation include the review of all new cases of cancer, cases involving specific organ systems with high-risk diagnoses (eg, breast or prostate core-biopsy tissues), cases being examined by inexperienced pathologists, challenging cases brought to a departmental or group consensus conference, or a percentage of all cases. Laboratories differ in their use of presign-out secondary review practices, and clinicians often are unaware of the error-reduction redundancy methods used in specific laboratories. Some pathologists report secondary review or consensus opinions in individual cases, thereby alerting clinicians that an additional check has been performed. Discrepancy data from institutional secondary review performed before sign-out are not well known, as diagnostic disagreements often are not recorded in quality assurance logs.

For cases that have been signed out, secondary review may occur on cases being presented to institutional tumor boards, through departmental quality assurance policies (such as review of a specific percentage of cases [eg, 5% review] or cases in which a prior specimen had been obtained and had a different diagnosis), by clinician request, or through external review practices (when a patient requests a second opinion or a patient is seen at a different institution for management and that institution requests review of original diagnostic slide material).

In a CAP multi-institutional study of a variety of secondary review practices, the self-reported mean and median discrepancy frequencies of 74 laboratories were 6.7% and 5.1%, respectively.122 Forty-eight percent of all discrepancies were due to a change within the same category of interpretation (eg, one tumor type was changed to another tumor type). Such changes in diagnosis can have a major impact on the clinical management of patients in many fields of oncology, such as lung cancer. Twenty-one percent of all discrepancies were due to a change across categories of interpretation (eg, a benign diagnosis was changed to a malignant diagnosis). Through self-assessed estimates, pathologists determined that the majority of discrepancies had no effect on patient care, although 5.3% of discrepancies had a moderate or marked effect. The highest frequency of discrepancy, based on the total number of cases reviewed, occurred when the reason for review was a request by a clinician: 23% of all clinician-directed reviews resulted in a diagnostic discrepancy. In cancer care, this finding is not surprising, as clinicians contact pathologists when the diagnosis does not coincide with patient signs and symptoms or with other tests.

In this CAP study, the frequencies of a diagnostic discrepancy based on the total number of cases reviewed through interdepartmental review (ie, review outside the originating department), intradepartmental review (ie, review within the originating department, such as through consensus conference), quality assurance review, and extradepartmental review were 4.8%, 7.1%, 4.3%, and 8.6%, respectively.122 Institutions are challenged in the management of a case in which a discrepancy is identified. Management depends on several factors, including the expertise of the pathologists involved, the manner in which the discrepancy was identified, and the time (eg, prereporting or postreporting) at which the discrepancy was detected. In some cases, a third expert opinion is obtained, although even experts disagree in individual cases. In the optimal situation, close communication between the pathologists and clinical practitioners is critical for care delivery.

Postsign-out secondary review recently has been used as a quality improvement tool to standardize diagnostic interpretations. Raab et al compared the effectiveness of a 5% randomized secondary review process with a focused secondary review process in which pathologists reviewed specific case types that they perceived had a higher frequency of diagnostic discrepancy.163 The study involved a hospital system that already performed subspecialty sign-out; the discrepancy detection rates for the random and focused review processes were 2.6% and 13.2%, respectively. The focused review process involved case types such as bladder biopsies, colon resections, and well differentiated lipoid neoplasm specimens. The discrepancies detected by the focused review process revealed a lack of diagnostic standardization, such as in the use of specific criteria to diagnose a well differentiated liposarcoma. This lack of standardization generally was unrecognized before the secondary review, and the findings could be used to focus efforts on standardizing procedures.

There has been little study of methods to standardize oncologic pathology diagnoses. The first step is recognizing the areas in which a problem exists, which has been done through interobserver diagnostic variability studies of varying methodologic quality.164 Kappa values of diagnostic agreement range from excellent to poor in specific areas of oncologic pathology diagnosis. In only a few areas of oncologic pathology have pathologists attempted to recalibrate diagnoses using interventions such as educational initiatives. For example, Schnitt et al showed that interpathologist diagnostic variability in the area of risk-associated ductal proliferative breast lesions could be decreased after an educational initiative involving the teaching of the diagnostic criteria of Page and colleagues.165 The work by Page and others is unusual, as entrenched camps holding different diagnostic approaches are the norm rather than the exception.165–172 For example, Elshiekh et al reported that unanimous agreement among 6 experts was seen in only 13% of cases of the follicular variant of papillary carcinoma of the thyroid gland.173 The ability to move beyond this low level of agreement is uncertain.
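For readers who want the mechanics behind the kappa statistic referenced above, the sketch below computes Cohen's kappa for 2 raters as chance-corrected agreement, (observed - expected) / (1 - expected). The ratings are invented for illustration.

```python
# Cohen's kappa for two pathologists' categorical ratings (illustrative data).
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[k] * freq_b[k] for k in freq_a) / n ** 2  # chance agreement
    return (observed - expected) / (1 - expected)

a = ["benign", "malignant", "atypical", "benign", "malignant", "benign"]
b = ["benign", "malignant", "benign", "benign", "atypical", "benign"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # 0.43: moderate agreement
```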

Grzybicki et al studied the use of a variety of educational initiatives to improve Pap test diagnoses.174 Greater diagnostic standardization was achieved in smaller groups through face-to-face meetings that encouraged dialogue and the questioning of established paradigms. Educational methods involving larger groups that used handouts and lectures produced less improvement. A remaining challenge is getting experts from different institutions to standardize a diagnosis.

Reporting Steps

The oncologic pathology report contains information on the macroscopic and microscopic examination of the patient specimen and, for some tumors, ancillary testing studies with prognostic or treatment significance. Errors that occur in the reporting step include the failure to report, or the incorrect reporting of, pathology findings that are used in tumor staging. In 2004, the American College of Surgeons Commission on Cancer mandated that 90% of pathology reports indicating a cancer diagnosis at participating centers contain all scientifically validated or regularly used data elements.175 The use of synoptic reports containing standardized information has become commonplace in surgical pathology reporting.176, 177 Pathologists use standardized reports offered by the CAP and other organizations, such as the Association of Directors of Anatomic and Surgical Pathology (ADASP). Variability in reporting remains, especially in the use of microscopic descriptions and comments.
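Mechanically, a synoptic report behaves like a checklist: a fixed set of required data elements must be present before the report is complete. The sketch below illustrates that idea; the element names are generic examples, not the CAP cancer checklist itself.

```python
# Minimal sketch of a synoptic-report completeness check (illustrative elements).
REQUIRED_ELEMENTS = ["histologic type", "histologic grade", "tumor size",
                     "margin status", "lymphovascular invasion", "pT stage", "pN stage"]

def missing_elements(report: dict) -> list:
    return [e for e in REQUIRED_ELEMENTS if not report.get(e)]

report = {"histologic type": "invasive ductal carcinoma", "histologic grade": "2",
          "tumor size": "1.8 cm", "margin status": "negative"}
print("incomplete, missing:", missing_elements(report))
```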

Timely Follow Up of Test Results

Based primarily on anecdotal clinical information, physicians have long assumed that delays in cancer diagnosis result in patient harm of various degrees. Although evidence regarding this subject remains relatively scarce, an increasing number of studies addressing the issue have been reported in the literature during the last decade. The scope of many of these studies has been limited to a descriptive documentation of the number and nature of delays178–185; however, a few investigations have examined the relation between delays and patient outcomes.186–188

Studies examining delays in diagnosis for specific lesions have focused on colon cancer, primary lung tumors, breast cancer, cervical cancer, and oral cancers. Missed opportunities for earlier diagnosis in colorectal cancer (CRC) have been shown to be relatively frequent (65% of one study cohort182), with approximately half judged to be due to systems factors. The majority of process failures occurred during provider-patient communication and in failures to follow up with individual patients or on abnormal diagnostic test results.183 The predictor of the longest delays in diagnosis for patients developing colorectal cancers was clinical or laboratory evidence of occult bleeding.183

In addition to process factors, studies focused on identifying potential barriers to initial CRC screening have shown physician knowledge (both primary care and specialty physicians) of current best practices for CRC screening and surveillance and their use to be suboptimal.180, 181 Physician knowledge and compliance with international guidelines relating to acceptable diagnostic time intervals for lung cancer also have been shown to be lacking.178

The lack of physician knowledge regarding acceptable diagnostic time intervals for lung cancer is particularly significant because lung cancer is one of the few malignancy types for which some evidence is available on the association between delays and patient outcomes. Specifically, investigators in Denmark recently reported the results of their study of diagnostic delays and the stage of lung cancer at the time of surgery.188 They found a statistically significant difference in the median diagnostic delay between the 2 groups, with higher-stage patients having longer delays. In addition, patients with lower-stage disease at surgery were more apt to have had their tumor discovered as an incidental finding than patients with higher-stage disease. It is important to recognize that stage is a surrogate endpoint, and clinical decisions should ideally be based on how care influences survival or other clinical outcomes.

Conflicting results related to breast cancer diagnostic delays recently were published by a group of US investigators in Oregon.187 Their retrospective review was designed to determine the impact of clinician-driven delays in diagnosis on breast cancer prognostic factors and survival. Although they confirmed in their sample that higher stage correlated with decreased survival, diagnostic delays of up to 36 months did not. In addition, in their sample of tumors, diagnostic delay did not correlate with other prognostic factors, such as the number of positive lymph nodes.

An important foundational paper in this area was recently published by Gandhi et al,186 who linked missed and delayed diagnoses to diagnostic errors associated with adverse patient outcomes, process breakdowns, and contributing factors. These investigators performed a retrospective review of 307 closed malpractice claims in which patients alleged a missed or delayed diagnosis in the ambulatory setting and found that 59% involved diagnostic errors that harmed patients (30% resulted in death). In 59% of the errors, cancer was the diagnosis involved, and one of the most common breakdowns in the diagnostic process was failure to create a proper follow-up plan.

Effective Communication with Pathologist Staff if Problems or Inconsistencies with Final Results or Diagnoses Exist

Evidence supporting the high value of clinician-pathologist communication for planning the best clinical management for patients with malignant disease of all stages may consistently be found in studies evaluating the impact of multidisciplinary conferences (MC) or tumor boards (TB).189–197

The contemporary approach and standard of care for patients with cancer is multimodal. Patients often receive care at a single healthcare institution containing a multidisciplinary, comprehensive cancer center. These centers include, as part of their multimodal approach to management, regular MC or TB meetings, in which consultative discussions among all physicians (including radiologists and pathologists) on the patient-care team take place, with a patient management plan as the outcome.

The specific activities that take place may differ among centers but generally involve review of previous radiologic and pathologic test results, review of outside materials by the center's specialist radiologists and pathologists, and a multidisciplinary discussion about the diagnostic and management aspects of the case. This enhanced communication among multiple specialty physicians has been shown to result in significant numbers of changes in both the type and stage of reviewed cases. The occurrence of these discussions in cancer center settings also has been shown to positively impact patients' receipt of management best practices.66–69 In some centers, patients are given the opportunity to discuss their diagnostic testing and diagnosis with the participating pathologist(s) and radiologist(s).191

One factor that has been shown to contribute to MC or TB changes in both radiologic and pathologic diagnostic test results is diagnostic interobserver variability between generalist and specialist interpretations.122–140 For breast lesions, pathologic examination variability has been shown to account for changes in approximately 8% of reviewed cases.191, 192 However, in recent studies examining the impact of MC or TB review on patient care for patients with gynecologic, pancreatic, and breast malignancies, alterations in the final management plan were made in 35%, 24%, and 52% of cases, respectively.189–191 Therefore, during face-to-face verbal communication among physicians, clinical information also appears to be exchanged, and that exchange contributes to changes in case diagnostic and prognostic information, which then may produce changes in patient management.

The studies mentioned above, as well as other previous studies, report that formal, structured forums for interdisciplinary physician-physician communication have a positive impact on oncologic diagnostic test accuracy and patient management decisions. No evidence is currently available describing the impact of individual clinician-pathologist dialogue during routine daily practice on pathologic diagnoses or clinical management plans for patients with malignancies. On the basis of the available MC and TB evidence, one would expect enhanced communication to increase diagnostic accuracy even when limited to single clinician-pathologist dialogues; however, further studies examining the diagnostic and clinical impact of interprovider communication are needed to support this expectation.

Requesting Secondary Review of Tissue Samples When It Appears to Be Critical for Obtaining a High-Quality, Valid Diagnosis

As mentioned previously, extradepartmental review is the practice of secondary slide review that takes place when a patient receives treatment at an institution different from the one where the original diagnosis of cancer was made.10, 122, 198–240 This review process also may be known as external second opinion or interinstitutional consultation. The reviewing institution is often a large tertiary referral center. Table 4 shows published data since 1990 on single-institution extradepartmental review; these studies have varied in the definition of a diagnostic discrepancy, the patient population, and the kinds of specimens examined.10, 122, 198–237 The majority of studies, but not all, include cases involving the secondary review of cancer diagnoses, and some specifically evaluated the review of cancer cases. Table 4 does not include panel review of outside diagnoses. Manion et al reported that these studies tend to examine second opinions on cases more prone to discrepancy.10 The follow-up used to adjudicate the accuracy of original and review diagnoses is variable, ranging from clinical data obtained through chart review to expert opinion and additional pathology test results.

Table 4. Interinstitutional Pathology Slide Review Studies
YEAR | AUTHORS | AREA | TOTAL CASES REVIEWED | TOTAL DISCREPANCY (%) | MAJOR DISCREPANCY (%)
2009 | Wayment199 | Urologic surgical pathology | 213 | 22 (10.3) | 18 (8.5)
2009 | Thway200 | Soft tissue surgical pathology | 349 | 93 (26.6) | 38 (10.9)
2009 | Bomeisl201 | Fine needle aspiration cytopathology | 742 | 201 (27.1) | 69 (9.3)
2009 | Lueck202 | Cytopathology | 499 | 92 (18.4) | 37 (7.4)
2008 | Manion10 | Surgical pathology | 5,629 | 639 (11.3) | 132 (2.3)
2007 | Tan203 | Thyroid fine needle aspiration cytopathology | 147 | 27 (18.4) | 8 (5.6)
2007 | Thomas204 | Prostate surgical pathology | 1,323 | 334 (25.2) | 196 (14.8)
2005 | Raab13 | Surgical pathology and cytopathology | 1,069 | 92 (8.6) | 8 (0.7)
2005 | Hamady205 | Thyroid cancer surgical pathology | 66 | 12 (18.2) | 5 (7.6)
2004 | Tsung206 | Surgical pathology | 715 | 42 (5.9) | 16 (2.2)
2004 | Nguyen207 | Prostate surgical pathology (Gleason scoring) | 602 | 265 (44) | 55 (9.1)
1999 | Kronz223 | Prostate needle biopsy | 3,251 | 87 (2.7) | 15 (0.5)
2003 | Weir209 | Surgical pathology and cytopathology | 1,522 | 68 (6.8) | 37 (2.4)
2002 | McGinnis210 | Dermatopathology (pigmented lesions) | 5,136 | 559 (10.9) | 120 (2.3)
2002 | Wetherington211 | Surgical pathology | 6,678 | 213 (3.2) | 213 (3.2)
2002 | Staradub212 | Breast cancer | 346 | 278 (80) | 27 (7.8)
2002 | Vivino213 | Labial salivary gland | 60 | 32 (53.3) | 32 (53.3)
2002 | Layfield214 | Cytopathology | 146 | 24 (16.4) | 11 (7.5)
2002 | Westra215 | Head and neck surgical pathology | 814 | 54 (6.6) | 21 (2.6)
2001 | Arbiser216 | Soft tissue surgical pathology | 266 | 85 (31.9) | 65 (24.4)
2001 | Coblentz217 | Bladder biopsy and transurethral resections | 131 | 24 (18.3) | 24 (18.3)
2001 | Hahm218 | Gastrointestinal and hepatic surgical pathology | 194 | 50 (25.8) | 14 (7.2)
2001 | Baloch219 | Cytopathology | 183 | 110 (60.1) | 28 (15.3)
2001 | Murphy220 | Urologic surgical pathology | 150 | 29 (19.3) | 14 (9.3)
2000 | Chafe221 | Gynecologic surgical pathology | 599 | 200 (33.3) | 63 (10.5)
2000 | Aldape222 | Neuropathology | 457 | 105 (23.0) | 17 (3.7)
1999 | Kronz223 | Surgical pathology | 6,171 | 86 (1.4) | 86 (1.4)
1999 | Selman224 | Gynecologic surgical pathology | 295 | 50 (16.9) | 14 (4.8)
1999 | Lee225 | Testicular surgical pathology | 208 | not reported | 12 (5.8)
1999 | Chan226 | Gynecologic surgical pathology and cytopathology | 569 | 108 (19.0) | 37 (6.5)
1998 | Wurzer227 | Prostate biopsies (Gleason scoring) | 538 | 212 (39.4) | 69 (12.8)
1998 | Jacques228 | Gynecologic surgical pathology (endometrial curettings and biopsy) | 182 | 43 (23.6) | 43 (23.6)
1998 | Jacques229 | Gynecologic surgical pathology (hysterectomy) | 76 | 24 (31.6) | 24 (31.6)
1998 | Santoso230 | Gynecologic surgical pathology | 720 | 119 (16.5) | 15 (2.1)
1997 | Sharkey231 | Urologic surgical pathology and cytopathology | 376 | 133 (35.3) | 133 (35.3)
1997 | Bruner232 | Neuropathology | 500 | 214 (42.8) | 140 (28.0)
1996 | Epstein233 | Prostate surgical pathology | 535 | 7 (1.3) | 7 (1.3)
1995 | Prescott234 | Surgical pathology | 227 | 53 (23.3) | 19 (8.3)
1995 | Abt235 | Surgical pathology and cytopathology | 777 | 71 (9.1) | 45 (5.8)
1995 | Scott236 | Neuropathology | 680 | 74 (10.9) | 74 (10.9)
1993 | Segelov237 | Testicular surgical pathology | 87 | 28 (32.0) | 10 (11.4)

The bias in these extradepartmental review studies clearly favors the accuracy of the reviewing institution, as cases from the reviewing institution are never reviewed to measure diagnostic variability. Nonetheless, these studies report a range of secondary review discrepancy frequencies. Some reports classify diagnostic discrepancies into major and minor. On the basis of the published data in Table 4, the range of total discrepant cases was 1.3% to 60.1%, and the range of discrepant cases classified as major was 0.7% to 53.3%.10, 122, 198–237 These studies represent a wide range of specimens reviewed, with some studies examining both surgical pathology and cytopathology cases and other studies examining a small subset of cases (eg, labial salivary gland). Summing across all Table 4 studies, the overall discrepant case rate was 11.4%, with a major discrepancy rate of 4.7%.10, 122, 198–237 Major discrepancies generally occurred in cases in which patient management was affected, although changes in diagnosis occurred with both major and minor discrepancies. Some authors attributed a high discrepancy frequency to the failure of pathologists to use established histologic criteria.
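The pooled rates quoted above are case weighted: total discrepancies divided by total cases summed across studies, not an average of the study percentages. The sketch below shows the computation on the first few Table 4 rows only, so its output differs from the full-table figures.

```python
# Case-weighted pooling of discrepancy rates (subset of Table 4 for illustration).
studies = [
    # (cases reviewed, total discrepancies, major discrepancies)
    (213, 22, 18),      # Wayment 2009
    (349, 93, 38),      # Thway 2009
    (742, 201, 69),     # Bomeisl 2009
    (5629, 639, 132),   # Manion 2008
]
cases = sum(c for c, _, _ in studies)
total = sum(t for _, t, _ in studies)
major = sum(m for _, _, m in studies)
print(f"pooled total discrepancy: {total / cases:.1%}; pooled major: {major / cases:.1%}")
```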

Should clinicians ask for patient slide material to be reviewed when these patients are treated at a different institution? Given the current lack of diagnostic standardization and the fact that treatment occurs at a local level, the answer that most authors give is yes. The ADASP has recommended institutional consultation as a standard of practice.241 As a follow-up to a secondary review article by Kronz et al, Time magazine recommended a second opinion for all surgical pathology diagnoses of malignancy.242 In an article published in 2000, Gupta and Layfield reported that only 50% of institutions followed ADASP guidelines for secondary review and that 38% encouraged second review.243

Kronz and Westra wrote that for some subspecialties, such as head and neck pathology, secondary review makes good clinical and risk management sense for 3 reasons.198 The authors argued that 1) the pathology of some subspecialties is diverse, complex, and difficult; 2) consultation is an essential component of multidisciplinary patient management; and 3) as treatment also has become diverse and complex, large referral hospitals contain staff with more comprehensive pathology diagnostic expertise.

Summary


Current assessments of the quality of oncologic pathology testing focus on the evaluation of the testing steps involved in ordering, procuring, processing, interpreting, and reporting, and in decision making based on pathology test results. Most errors in cancer diagnosis are related to several factors and not simply to a pathologist's interpretation. Clinical practitioners may improve the safety of oncologic pathology testing services by facilitating communication between clinical services and pathology laboratories at all levels of testing.

The CDC has sponsored several initiatives in the past decade to investigate the state of laboratory medicine, with an emphasis on patient safety.244, 245 In September 2007, the CDC convened the 2007 Institute on Critical Issues in Health Laboratory Practice: Managing for Better Health to develop an action plan for the immediate and long-term future. At the 2007 Institute, experts in laboratory medicine practice, clinicians, payers, health services researchers, and patient representatives identified gaps in the current quality of laboratory medicine. This identification is an early step in promoting research to fill these gaps and in informing laboratory medicine stakeholders of best practices. These experts identified gaps in the current knowledge of best patient safety practices for laboratory/hospital information system integration, the standardization of error measures, the effect of workforce vacancy rates on safety, communication methods at handoff points, longitudinal tracking of safety measures, and the adoption of quality improvement systems currently used in business and industry.245 Research in these areas, as well as in others such as subspecialty pathology practice models, training, and laboratory organization, is needed to improve the state of safety in all phases of oncologic pathology diagnostic testing and screening practice.

References

  • 1
    Smith BD. Future of cancer incidence in the United States: Preparing for an older, more diversified nation. AJHO Newsletter, October 9, 2009, http://www.ajho.com/future-of-cancer-incidence-in-the-united-states-preparing-for-an-older-more-diversified-nation/article/151778/. Accessed October 25, 2009.
  • 2
    Kohn LT, Corrigan JM, Donaldson MS. To Err is Human: Building a Safer Health System. Washington, DC: National Academy Press; 1999.
  • 3
    Committee on Quality and Health Care in America. Crossing the Quality Chasm: a New Health System for the 21st Century. Washington, DC: National Academy Press; 2001.
  • 4
    Richardson, WC, Corrigan, J. The IOM Quality Initiative: A Progress Report at Year Six. IOM Newsletter, Volume I, Number I, Winter 2002, 17. http://health.usf.edu/medicine/educationalaffairs/pace_files/IOM%20Quality%20Initiative.pdf. Accessed November 17, 2009
  • 5
    Institute of Medicine. Envisioning the National Health Care Quality Report. Washington, DC: National Academy Press; 2001.
  • 6
    Beal AC, Co JPT, Dougherty D, et al. Quality measures for children's health care. Pediatrics. 2004; 113: 199209.
  • 7
    Berwick DM. A user's manual for the IOM's ‘Quality Chasm’ report. Health Aff (Millwood). 2002; 21: 8090.
  • 8
    Grzybicki DM, Raab SS, Janosky JE, et al. Anatomic pathology and patient safety: it's not an error: it's a diagnostic misadventure! Am J Clin Pathol. 2008; 129: 167168.
  • 9
    Frable WJ. Surgical pathology—second reviews, institutional reviews, audits, and correlations: what's out there? Error or diagnostic variation? Arch Pathol Lab Med. 2006; 130: 620625.
  • 10
    Manion E, Cohen MB, Weydert J. Mandatory second opinion in surgical pathology referral material: clinical consequences of major disagreements. Am J Surg Pathol. 2008; 32: 732737.
  • 11
    Valenstein P. Quality Management in Clinical Laboratories. Promoting Patient Safety through Risk Reduction and Continuous Improvement. Northfield, IL: College of American Pathologists; 2005.
  • 12
    National Oceanic and Atmospheric Administration website. http://celebrating200years.noaa.gov/magazine/tct/accuracy_vs_precision.html. Accessed November 15, 2009.
  • 13
    Raab SS, Grzybicki DM, Janosky JE, et al. Clinical impact and frequency of anatomic pathology errors in cancer diagnosis. Cancer. 2005; 104: 22052213.
  • 14
    Leong AS, Braye S, Bhagwandeen B. Diagnostic ‘errors’ in anatomical pathology: relevance to Australian laboratories. Pathology. 2006; 38: 487489.
  • 15
    Raab SS, Meier FA, Zarbo RJ, et al. The “big dog” effect: variability assessing the causes of error in diagnosis of patients with lung cancer. J Clin Oncol. 2006; 24: 28082814.
  • 16
    Byrt T, Bishop J. Carlin JB. Bias, prevalence and kappa. J Clin Epidemiol. 1993; 46: 423429.
  • 17
    Nelson JC, Pepe MS. Statistical description of interrater variability in ordinal ratings. Stat Methods Med Res. 2000; 9: 475496.
  • 18
    Renshaw AA, Gould EW. Measuring errors in surgical pathology in real-life practice: defining what does and what does not matter. Am J Clin Pathol. 2007; 127: 144152.
  • 19
    Birkmeyer JD, Sharp SM, Finlayson SR, Fisher ES, Wennberg JE. Variation profiles of common surgical procedures. Surgery. 1998; 124: 917923.
  • 20
    Fisher ES, Wennberg JE. Health care quality, geographic variations, and the challenge of supply-sensitive care. Perspect Biol Med. 2003; 46: 6979.
  • 21
    Carlisle DM, Valdez RB, Shapiro MF, Brook RH. Geographic variation in rates of selected surgical procedures within Los Angeles County. Health Serv Res. 1995; 30: 2742.
  • 22
    Wolcott J, Schwartz A, Goodman C, The Lewin Group. Laboratory Medicine: A National Status Report, May 2008; https://www.futurelabmedicine.org/reports/laboratory_medicine_-_a_national_status_report_from_the_lewin_group.pdf. Accessed November 17, 2009
  • 23
    Boone J. Presentation at the Institute on Critical Issues in Health Laboratory Practice: Managing for Better Health, September 23-26, 2007. Atlanta, GA: Centers for Disease Control and Prevention.
  • 24
    Lundberg GD. How clinicians should use the diagnostic laboratory in a changing medical world. Clin Cim Acta. 1999; 280: 311.
  • 25
    Stroobants AK, Goldschmidt HM, Plebani M. Error budget calculations in laboratory medicine: linking the concepts of biological variation and allowable medical errors. Clin Chim Acta. 2003; 333: 169176.
  • 26
    Bonini P, Plebani M, Ceriotti F, Rubboli F. Errors in laboratory medicine. Clin Chem. 2002; 48: 691698.
  • 27
    Schiff GD, Hasan O, Kim S, et al. Diagnostic error in medicine: analysis of 583 physician-reported errors. Arch Intern Med. 2009; 169: 18811887.
  • 28
    Raab SS, Grzybicki DM, Zarbo RJ, et al. Frequency and outcome of cervical cancer prevention failures in the United States. Am J Clin Pathol. 2007; 128: 817824.
  • 29
    Kaplan HS, Battles JB, Van der Schaaf TW, Shea CE, Mercer SQ. Identification and classification of the causes of events in transfusion medicine. Transfusion. 1998; 38: 107181.
  • 30
    Battles JB, Shea CE. A system of analyzing medical errors to improve GME curricula and programs. Acad Med. 2001; 76: 125133.
  • 31
    Smits M, Janssen J, de Viet R, et al. Analysis of unintended events in hospitals: inter-rater reliability of constructing causal trees and classifying root causes. Int J Qual Health Care. 2009; 21: 292300.
  • 32
    Braaten JS, Bellhouse DE. Improving patient care by making small sustainable changes: a cardiac telemetry nit's experience. Nurs Econ. 2007; 25: 162166.
  • 33
    Shannon RP, Frndak D, Grunden N, et al. Using real-time problem solving to eliminate central line infections. Jt Comm J Qual Patient Saf. 2006; 32: 47987.
  • 34
    Condel JL, Sharbaugh DT, Raab SS. Error free pathology: applying lean production methods to anatomic pathology. Clin Lab Med. 2004; 24: 86599.
  • 35
    Reason J. Understanding adverse events: human factors. Qual Health Care. 1995; 4: 809.
  • 36
    Parks JK, Klien J, Frankel HL, Friese RS, Shafi S. Dissecting delays in trauma care using corporate lean six sigma methodology. J Trauma. 2008; 65: 10981104.
  • 37
    Raab SS, Grzybicki DM, Condel JL, et al. Effect of Lean method implementation in the histopathology section of an anatomical pathology laboratory. J Clin Pathol. 2008; 61: 11939.
  • 38
    Spear S, Bowen K. Decoding the DNA of the Toyota Production System. Boston, MA: Harvard Business Press; 1999.
  • 39
    Chalice RW. Stop Rising Healthcare Costs using Toyota Lean Production Methods. 38 Steps for Improvement. Milwaukee, WI: Quality Press; 2005.
  • 40
    Ohno T. Toyota Production System: Beyond Large-scale Production. Portland, OR: Productivity Press; 1988.
  • 41
    Womack JP, Jones DT, Roos D. The Machine that Changed the World. The story of Lean Production. How Japan's Secret Weapon in the Global Auto Wars Will Revolutionize Western Industry. New York, NY: Rawson Associates; 1990.
  • 42
    Ben-Tovim DI, Bassham JE, Bolch D, et al. Lean thinking across a hospital: redesigning care at the Flinders Medical Centre. Aust Health Rev. 2007; 31: 1015.
  • 43
    Napoles L, Quintana M. Developing a lean culture in the laboratory. Clin Leadersh Manag Rev. 2006; 20: E4.
  • 44
    Bryant PM, Gulling RD. Faster, better, cheaper: lean labs are the key to future survival. Clin Leadersh Manag Rev. 2006; 20: E2.
  • 45
    Jimmerson C, Weber D, Sobek DK 2nd. Reducing waste and errors: piloting lean principles at Intermountain Healthcare. Jt Comm J Qual Pat Saf. 2005; 31: 249257.
  • 46
    Meier FA, Zarbo RJ, Varney RC, et al. Amended reports: development and validation of a taxonomy of defects. Am J Clin Pathol. 2008: 130: 238246.
  • 47
    Zarbo RJ, Meier FA, Raab SS. Error detection in anatomic pathology. Arch Pathol Lab Med. 2005; 129: 12371245.
  • 48
    Valenstein PN, Raab SS, Walsh MK. Identification errors involving clinical laboratories: a College of American Pathologists Q-Probes study of patient and specimen identification errors at 120 institutions. Arch Pathol Lab Med. 2006; 130: 11061113.
  • 49
    Jones BA, Novis DA. Cervical biopsy-cytology correlation: a College of American Pathologists Q-Probes study of 22,439 correlations in 348 laboratories. Arch Pathol Lab Med. 1996; 120: 523531.
  • 50
    Clary JM, Silverman JF, Liu Y, et al. Cytohistologic discrepancies: a means to improve pathology practice and patient outcomes. Am J Clin Pathol. 2002; 117: 567573.
  • 51
    Nakhleh RE, Fitzgibbons PL. Quality Management in Anatomic Pathology. Promoting Patient Safety through Systems Improvement and Error Reduction. Northfield, IL: College of American Pathologists; 2005.
  • 52
    Mindemark M, Larsson A. Long-term effects of an education programme on the optimal use of clinical chemistry testing in primary health care. Scand J Clin Lab Invest. 2009; 69: 481486.
  • 53
    Larsson A, Biom S, Wernroth ML, Hulten G, Tryding N. Effects of an education programme to change clinical laboratory testing habits in primary care. Scand J Prim Health Care. 1999; 17: 238243.
  • 54
    Thomas RE, Croal BL, Ramsay C, Eccles M, Grimshaw J. Effect of enhanced feedback and brief educational reminder messages on laboratory test requesting in primary care: a cluster randomized trial. Lancet. 2006; 367: 19901996.
  • 55
    Gortmaker SL, Bickford AF, Mathewson HO, Dumbaugh K, Tirrell PC. A successful experiment to reduce unnecessary laboratory use in a community hospital. Med Care. 1998; 26: 631642.
  • 56
    Verstappen WHJM, Van Merode F, Grimshaw J, Dubois WI, Grol RPTM, Van der Weijden T. Comparing cost effects of two quality strategies to improve test ordering in primary care: a randomized trial. Int J Qual Health Care. 2004; 16: 391398.
  • 57
    Vardy DA, Simon T, Limoni Y, et al. The impact of structured laboratory routines in computerized medical records in a primary care service setting. J Med Syst. 2005; 29: 619626.
  • 58
    Zaat JO, van Eijk JT, Bonte HA. Laboratory test form design influences test ordering by general practitioners in The Netherlands. Med Care. 1992; 30: 189198.
  • 59
    Kahan NR, Waitman D-A, Vardy DA. Curtailing Laboratory test ordering in a managed care setting through redesign of a computerized order form. Am J Manag Care. 2009; 15: 173176.
  • 60
    Feldman BM, Stephens D, Wang EE. How should excess admission laboratory test utilization be curtailed? –paediatricians' preferences. Clin Invest Med. 1995; 18: 502509.
  • 61
    Rivera MP, Mehta AC. Initial diagnosis of lung cancer: ACCP evidence-based clinical practice guidelines (2nd edition). Chest. 2007; 132(3 Suppl ): 131S148S.
  • 62
    Holloway CM, Saskin R, Brackstone M, Paszat L. Variation in the use of percutaneous biopsy for diagnosis of breast abnormalities in Ontario. Ann Surg Oncol. 2007; 14: 29322939.
  • 63
    Grzybicki DM, Gross T, Geisinger KR, Raab SS. Estimation of performance and sequential selection of diagnostic tests in patients with lung lesions suspicious for cancer. Arch Pathol Lab Med. 2002; 126: 1927.
  • 64
    Couture MC, Nguyen CT, Alvarado BE, Velasquez LD, Zunzunegui MV. Inequalities in breast and cervical cancer screening among urban Mexican women. Prev Med. 2008; 47: 471476.
  • 65
    Fisher DA, Galanko J, Dudley TK, Shaheen NJ. Impact of comorbidity on colorectal cancer screening in the veterans healthcare system. Clin Gastroenterol Hepatol. 2007; 5: 991996.
  • 66
    Solomon D, Breen N, McNeel T. Cervical cancer screening rates in the United States and the potential impact of implementation of screening guidelines. CA Cancer J Clin. 2007; 57: 105111.
  • 67
    Koroukian SM, Xu F, Dor A, Cooper GS. Colorectal cancer screening in the elderly populstion: disparities by dual Medicare-Medicaid enrollment status. Health Serv Res. 2006; 21: 136154.
  • 68
    Madrigal de la Campa Mde L, Lazcano Ponce EC, Infante Castaneda C. Overuse of colposcopy service in Mexico. Ginecol Obstet Mex. 2005; 73: 63747.
  • 69
    Lieberman DA, Holub J, Eisen G, Kraemer D, Morris CD. Utilization of colonoscopy in the United States: results from a national consortium. Gastroentest Endosc. 2005; 62: 875883.
  • 70
    Zapka JG, Puleo E, Vickers-Lahti M, Luckmann R. Healthcare system factors and colorectal cancer screening. Am J Prev Med. 2002; 23: 2835.
  • 71
    Fletcher RH, Colditz GA, Pawlson LG, et al. Screening for colorectal cancer: the business case. Am J Manag Care. 2002; 8: 531538.
  • 72
    Bampton PA, Sandford JJ, Young GP. Applying evidence-based guidelines improves use of colonoscopy resources in patients with a moderate risk of colorectal neoplasia. Med J Aust. 2002; 176: 155157.
  • 73
    Arossi S, Ramos S, Paolino M, Sankaranarayanan R. Social inequality in Pap smear coverage: identifying under-users of cervical cancer screening in Argentina. Reprod Health Matters. 2008; 16: 5058.
  • 74
    Hatmaker AR, Donahue RM, Tarpley JL, Pearson AS. Cost-effective use of breast biopsy techniques in a Veterans health care system. Am J Surg. 2006; 192: e37-e41.
  • 75
    Wright CA, Pienaar JP, Marais BJ. Fine needle aspiration biopsy: diagnostic utility in resource-limited settings. Ann Trop Paediatr. 2008; 28: 65-70.
  • 76
    Zaat JO, van Eijk JT. General practitioners' uncertainty, risk preference, and use of laboratory tests. Med Care. 1992; 30: 846-854.
  • 77
    Nightingale SD. Risk preference and laboratory test selection. J Gen Intern Med. 1987; 2: 25-28.
  • 78
    Lopez-Beltran A, Bassi PF, Pavone-Macaluso M, Montironi R; European Society of Uropathology; Uropathology Working Group. Handling and pathology reporting of specimens with carcinoma of the urinary bladder, ureter, and renal pelvis. A joint proposal of the European Society of Uropathology and the Uropathology Working Group. Virchows Arch. 2004; 445: 103-110.
  • 79
    Baek SE, Kim MJ, Kim EK, et al. Effect of clinical information on diagnostic performance in breast sonography. J Ultrasound Med. 2009; 28: 1349-1356.
  • 80
    Houssami N, Irwig L, Simpson JM, et al. The influence of clinical information on the accuracy of diagnostic mammography. Breast Cancer Res Treat. 2004; 85: 223-228.
  • 81
    Houssami N, Irwig L, Simpson JM, et al. The influence of knowledge of mammography findings on the accuracy of breast ultrasound in symptomatic women. Breast J. 2005; 11: 167-172.
  • 82
    Ferrara G, Argenyi Z, Argenziano G, et al. The influence of clinical information in the histopathologic diagnosis of melanocytic skin neoplasms. PLoS One. 2009; 4: e5375.
  • 83
    Leslie A, Jones AJ, Goddard PR. The influence of clinical information on the reporting of CT by radiologists. Br J Radiol. 2000; 73: 1052-1055.
  • 84
    Lee DW, Stang PE, Goldberg GA, Haberman M. Resource use and cost of diagnostic workup of women with suspected breast cancer. Breast J. 2009; 15: 85-92.
  • 85
    Loy CT, Irwig L. Accuracy of diagnostic tests read with and without clinical information: a systematic review. JAMA. 2004; 292: 1602-1609.
  • 86
    Condous G, Van Calster B, Kirk E, et al. Clinical information does not improve the performance of mathematical models in predicting the outcome of pregnancies of unknown location. Fertil Steril. 2007; 88: 572-580.
  • 87
    Quekel LG, Goei R, Kessels AG, van Engelshoven JM. Detection of lung cancer on the chest radiograph: impact of previous films, clinical information, double reading, and dual reading. J Clin Epidemiol. 2001; 54: 1146-1150.
  • 88
    Nodit L, Balassanian R, Sudilovsky D, Raab SS. Improving the quality of cytology diagnosis: root cause analysis for errors in bronchial washing and brushing specimens. Am J Clin Pathol. 2005; 124: 883-892.
  • 89
    Raab SS, Vrbin CM, Grzybicki DM, et al. Errors in thyroid gland fine needle aspiration. Am J Clin Pathol. 2006; 125: 873-882.
  • 90
    Raab SS, Stone CH, Wojcik EM, et al. Use of a new method in reaching consensus on the cause of cytologic-histologic correlation discrepancy. Am J Clin Pathol. 2006; 126: 836-842.
  • 91
    Poller DN, Ibrahim AK, Cummings MH, et al. Fine-needle aspiration of the thyroid. Cancer. 2000; 90: 239-244.
  • 92
    Poller DN, Stelow EB, Yiangou C. Thyroid FNAC cytology: can we do it better? Cytopathology. 2008; 19: 4-10.
  • 93
    Raab SS, Andrew-Jaja C, Grzybicki DM, et al. Dissemination of Lean methods to improve Pap testing quality and patient safety. J Low Genit Tract Dis. 2008; 12: 103-110.
  • 94
    Raab SS, Grzybicki DM, Sudilovsky D, et al. Effectiveness of Toyota process redesign in reducing thyroid gland fine-needle aspiration error. Am J Clin Pathol. 2006; 126: 585-592.
  • 95
    Patel AR, Jones JS. Optimal biopsy strategies for the diagnosis and staging of prostate cancer. Curr Opin Urol. 2009; 19: 232-237.
  • 96
    Djavan B, Margreiter M. Biopsy standards for detection of prostate cancer. World J Urol. 2007; 25: 11-17.
  • 97
    Djavan B, Milani S, Remzi M. Prostate biopsy: who, how and when. An update. Can J Urol. 2005; 12(Suppl 1): 44-48.
  • 98
    Hodge KK, McNeal JE, Terris MK, Stamey TA. Random systematic versus directed ultrasound guided transrectal core biopsies of the prostate. J Urol. 1989; 142: 71-74.
  • 99
    Keetch DW, Catalona WJ, Smith DS. Serial biopsies in men with persistently elevated serum prostate specific antigen values. J Urol. 1994; 151: 1571-1574.
  • 100
    Babaian RJ, Toi A, Kamoi K, et al. A comparative analysis of sextant and an extended 11-core multisite directed biopsy strategy. J Urol. 2000; 163: 152-157.
  • 101
    Terris MK, Wallen EM, Stamey TA. Comparison of mid-lobe versus lateral systematic sextant biopsies in the detection of prostate cancer. Urol Int. 1997; 59: 239-242.
  • 102
    Scattoni V, Raber M, Abdollah F, et al. Biopsy schemes with the fewest cores for detecting 95% of the prostate cancers detected by a 24-core biopsy. Eur Urol. 2009; doi:10.1016/j.eururo.2009.08.011.
  • 103
    Eskew LA, Bare RL, McCullough DL. Systematic 5 region biopsy is superior to sextant method for diagnosing carcinoma of the prostate. J Urol. 1997; 157: 199-202.
  • 104
    Gore JL, Shariat SF, Miles BJ, et al. Optimal combinations of systematic sextant and laterally directed biopsies for the detection of prostate cancer. J Urol. 2001; 165: 1554-1559.
  • 105
    Borboroglu PG, Comer SW, Riffenburgh RH, Amling CL. Extensive repeat transrectal ultrasound guided prostate biopsy in patients with previous benign sextant biopsies. J Urol. 2000; 163: 158-162.
  • 106
    Jones JS, Patel A, Schoenfield L, et al. Saturation technique does not improve cancer detection as an initial prostate biopsy strategy. J Urol. 2006; 175: 485-488.
  • 107
    Stewart CS, Leibovich BC, Weaver AL, Lieber MM. Prostate cancer diagnosis using a saturation needle biopsy technique after previous negative sextant biopsies. J Urol. 2001; 166: 86-91.
  • 108
    Gage JC, Hanson VW, Abbey K, et al. Number of cervical biopsies and sensitivity of colposcopy. Obstet Gynecol. 2006; 108: 264-272.
  • 109
    Homesley HD, Jobson VW, Reish RL. Use of colposcopically directed, four-quadrant cervical biopsy by the colposcopy trainee. J Reprod Med. 1984; 29: 311-316.
  • 110
    Collins PD, Mpofu C, Watson AJ, Rhodes JM. Strategies for detecting colon cancer and/or dysplasia in patients with inflammatory bowel disease. Cochrane Database Syst Rev. 2006;(2): CD000279.
  • 111
    Sidiropoulos N, Dumont LJ, Golding AC, Quinlisk FL, Gonzalez JL, Padmanabhan V. Quality improvement by standardization of procurement and processing of thyroid fine-needle aspirates in the absence of on-site cytological evaluation. Thyroid. 2009; 19: 1049-1052.
  • 112
    Gawande A. The Checklist Manifesto: How to Get Things Right. New York, NY: Metropolitan Books; 2009.
  • 113
    Nooh A, Babburi P, Howell R. Achieving quality assurance standards in colposcopy practice: a teaching hospital experience. Aust N Z J Obstet Gynaecol. 2007; 47: 61-64.
  • 114
    Johnson EJ, Patnick J; National Co-ordinator of the NHS Cervical Screening Programme (NHSCSP). Achievable standards, benchmarks for reporting, and criteria for evaluating cervical cytopathology. Second edition including revised performance indicators. Cytopathology. 2000; 11: 212-241.
  • 115
    Luesley D, Leeson S. Colposcopy and Programme Management. NHS Cervical Screening Programme. NHSCSP Publication No. 20; 2004.
  • 116
    Solomon D, Davey D, Kurman R, et al. The Bethesda System 2001: terminology for reporting the results of cervical cytology. JAMA. 2002; 287: 2114-2119.
  • 117
    Kocjan G, Chandra A, Cross P, et al. BSCC Code of Practice—fine needle aspiration cytology. Cytopathology. 2009; 20: 283-296.
  • 118
    Lawson NS, Howanitz PJ. The College of American Pathologists, 1946-1996. Quality Assurance Service. Arch Pathol Lab Med. 1997; 121: 1000-1008.
  • 119
    Raab SS, Jones BA, Souers R, Tworek JA. The effect of continuous monitoring of cytologic-histologic correlation data on cervical cancer screening performance. Arch Pathol Lab Med. 2008; 132: 16-22.
  • 120
    Tworek JA, Jones BA, Raab S, Clary KM, Walsh MK. The value of monitoring human papillomavirus DNA results for Papanicolaou tests diagnosed as atypical squamous cells of undetermined significance: a College of American Pathologists Q-Probes study of 68 institutions. Arch Pathol Lab Med. 2007; 131: 1525-1531.
  • 121
    Raab SS, Tworek JA, Souers R, Zarbo RJ. The value of monitoring frozen section-permanent section correlation data over time. Arch Pathol Lab Med. 2006; 130: 337-342.
  • 122
    Raab SS, Nakhleh RE, Ruby SG. Patient safety in anatomic pathology: measuring discrepancy frequencies and causes. Arch Pathol Lab Med. 2005; 129: 459-466.
  • 123
    Jones BA, Novis DA. Follow-up of abnormal gynecologic cytology: a College of American Pathologists Q-Probes study of 16,132 cases from 306 laboratories. Arch Pathol Lab Med. 2000; 124: 665-671.
  • 124
    Nakhleh RE, Gephardt G, Zarbo RJ. Necessity of clinical information in surgical pathology. A College of American Pathologists Q-Probes study of 771,475 surgical pathology cases from 341 institutions. Arch Pathol Lab Med. 1999; 123: 615-619.
  • 125
    Novis DA, Zarbo RJ, Valenstein PA. Diagnostic uncertainty expressed in prostate needle biopsies. A College of American Pathologists Q-Probes study of 15,753 prostate needle biopsies in 332 institutions. Arch Pathol Lab Med. 1999; 123: 687-692.
  • 126
    Nakhleh RE, Zarbo RJ. Amended reports in surgical pathology and implications for diagnostic error detection and avoidance: a College of American Pathologists Q-Probes study of 1,667,547 accessioned cases in 359 laboratories. Arch Pathol Lab Med. 1998; 122: 303-309.
  • 127
    Nakhleh RE, Jones B, Zarbo RJ. Mammographically directed breast biopsies: a College of American Pathologists Q-Probes study of clinical physician expectations and of specimen handling and reporting characteristics in 434 institutions. Arch Pathol Lab Med. 1997; 121: 11-18.
  • 128
    Jones BA. Rescreening in gynecologic cytology. Rescreening of 3762 previous cases for current high-grade squamous intraepithelial lesions and carcinoma—a College of American Pathologists Q-Probes study of 312 institutions. Arch Pathol Lab Med. 1995; 119: 1097-1103.
  • 129
    Gephardt GN, Zarbo RJ. Interinstitutional comparison of frozen section consultations. A College of American Pathologists Q-Probes study of 90,538 cases in 461 institutions. Arch Pathol Lab Med. 1996; 120: 804-809.
  • 130
    Gephardt GN, Baker PB. Lung carcinoma surgical pathology report adequacy: a College of American Pathologists Q-Probes study of over 8300 cases from 464 institutions. Arch Pathol Lab Med. 1996; 120: 922-927.
  • 131
    Gephardt GN, Zarbo RJ. Extraneous tissue in surgical pathology: a College of American Pathologists Q-Probes study of 275 laboratories. Arch Pathol Lab Med. 1996; 120: 1009-1014.
  • 132
    Novis DA, Gephardt GN, Zarbo RJ. Interinstitutional comparison of frozen section consultation in small hospitals: a College of American Pathologists Q-Probes study of 18,532 frozen section consultation diagnoses in 233 small hospitals. Arch Pathol Lab Med. 1996; 120: 1087-1093.
  • 133
    Nakhleh RE, Zarbo RJ. Surgical pathology specimen identification and accessioning: a College of American Pathologists Q-Probes study of 1,004,115 cases from 417 institutions. Arch Pathol Lab Med. 1996; 120: 227-233.
  • 134
    Gephardt GN, Baker PB. Interinstitutional comparison of bladder carcinoma surgical pathology report adequacy. A College of American Pathologists Q-Probes study of 7234 bladder biopsies and curettings in 268 institutions. Arch Pathol Lab Med. 1995; 119: 681-685.
  • 135
    Jones BA. Rescreening in gynecologic cytology. Rescreening of 8096 previous cases for current low-grade and indeterminate-grade squamous intraepithelial lesion diagnoses—a College of American Pathologists Q-Probes study of 323 laboratories. Arch Pathol Lab Med. 1996; 120: 519-522.
  • 136
    Zarbo RJ. Interinstitutional assessment of colorectal carcinoma surgical pathology report adequacy. A College of American Pathologists Q-Probes study of practice patterns from 532 laboratories and 15,940 reports. Arch Pathol Lab Med. 1992; 116: 1113-1119.
  • 137
    Zarbo RJ, Fenoglio-Preiser CM. Interinstitutional database for comparison of performance in lung fine-needle aspiration cytology. A College of American Pathologists Q-Probe study of 5264 cases with histologic correlation. Arch Pathol Lab Med. 1992; 116: 463-470.
  • 138
    Zarbo RJ, Howanitz PJ, Bachner P. Interinstitutional comparison of performance in breast fine-needle aspiration cytology. A Q-Probe quality indicator study. Arch Pathol Lab Med. 1991; 115: 743-750.
  • 139
    Zarbo RJ, Hoffman GG, Howanitz PJ. Interinstitutional comparison of frozen-section consultation. A College of American Pathologists Q-Probe study of 79,647 consultations in 297 North American institutions. Arch Pathol Lab Med. 1991; 115: 1187-1194.
  • 140
    Howanitz PJ, Hoffman GG, Zarbo RJ. The accuracy of frozen-section diagnoses in 34 hospitals. Arch Pathol Lab Med. 1990; 114: 355-359.
  • 141
    Smith ML, Raab SS. Near-miss event rates in a traditional surgical pathology accessioning and gross examination laboratory. Mod Pathol. 2009; 22(Suppl 1): 336A.
  • 142
    Zarbo RJ, D'Angelo R. The Henry Ford production system: effective reduction of process defects and waste in surgical pathology. Am J Clin Pathol. 2007; 128: 1015-1022.
  • 143
    Raab SS, King AM, Grzybicki DM. Root cause analysis of surgical pathology identification and information defects. Mod Pathol. 2009; 22(Suppl 1): 336A.
  • 144
    Dhir R, Condel JL, Raab SS. Identification and correction of errors in the anatomic pathology gross room. Pathol Case Rev. 2005; 10: 79-82.
  • 145
    Zarbo RJ, Tuthill JM, D'Angelo R, et al. The Henry Ford Production System: reduction of surgical pathology in-process misidentification defects by bar code-specified work process standardization. Am J Clin Pathol. 2009; 131: 468-477.
  • 146
    Galvis CO, Raab SS, D'Amico F, Grzybicki DM. Pathologists' assistants practice: a measurement of performance. Am J Clin Pathol. 2001; 116: 816-822.
  • 147
    Sakata J, Shirai Y, Wakai T, Ajioka Y, Hatakeyama K. Number of positive lymph nodes independently determines the prognosis after resection in patients with gallbladder carcinoma. Ann Surg Oncol. 2010 Jan 15 [Epub ahead of print].
  • 148
    Bhatti I, Peacock O, Awan AK, Semeraro D, Larvin M, Hall RI. Lymph node ratio versus number of affected lymph nodes as predictors of survival for resected pancreatic adenocarcinoma. World J Surg. 2010 Jan 6 [Epub ahead of print].
  • 149
    Nissan A, Protic M, Bilchik A, Eberhardt J, Peoples GE, Stojadinovic A. Predictive model of outcome of targeted nodal assessment in colorectal cancer. Ann Surg. 2010 Jan 5 [Epub ahead of print].
  • 150
    Jakub JW, Russell G, et al. Colon cancer and low lymph node count: who is to blame? Arch Surg. 2009; 144: 1115-1120.
  • 151
    Platt E, Sommer P, McDonald L, Bennett A, Hunt J. Tissue floaters and contaminants in the histology laboratory. Arch Pathol Lab Med. 2009; 133: 973-978.
  • 152
    Wolff AC, Hammond ME, Schwartz JN, et al. American Society of Clinical Oncology/College of American Pathologists guideline recommendations for human epidermal growth factor receptor 2 testing in breast cancer. Arch Pathol Lab Med. 2007; 131: 18-43.
  • 153
    Slamon D, Clark G, Wong S, et al. Human breast cancer: correlation of relapse and survival with amplification of the HER-2/neu oncogene. Science. 1987; 235: 177-182.
  • 154
    Sauter G, Lee J, Bartlett JM, Slamon DJ, Press MF. Guidelines for human epidermal growth factor receptor 2 testing: biologic and methodologic considerations. J Clin Oncol. 2009; 27: 1323-1333.
  • 155
    Jacobs T, Gown A, Yaziji H, et al. Comparison of fluorescence in situ hybridization and immunohistochemistry in a cohort of 6556 breast cancer tissues. Clin Breast Cancer. 2004; 5: 633-639.
  • 156
    Jacobs T, Gown A, Yaziji H, et al. Specificity of HercepTest in determining HER-2/neu status of breast cancers using the United States Food and Drug Administration-approved scoring system. J Clin Oncol. 1999; 17: 1983-1987.
  • 157
    Khoury T, Sait S, Hwang H, et al. Delay to formalin fixation effect on breast biomarkers. Mod Pathol. 2009; 22: 1457-1467.
  • 158
    Gown AM. Current issues in ER and HER2 testing by IHC in breast cancer. Mod Pathol. 2008; 21(Suppl 2): S8-S15.
  • 159
    Carlson RW, Moench SJ, Hammond ME, et al. HER2 testing in breast cancer: NCCN Task Force report and recommendations. J Natl Compr Canc Netw. 2006; 4(Suppl 3): S1-S22.
  • 160
    Allred DC, Carlson RW, Berry DA, et al. NCCN Task Force Report: Estrogen Receptor and Progesterone Receptor Testing in Breast Cancer by Immunohistochemistry. J Natl Compr Canc Netw. 2009; 7(Suppl 6): S1-S21.
  • 161
    College of American Pathologists. CAP Education Programs. http://www.cap.org/apps/cap.portal?_nfpb=true&_pageLabel=education. Accessed January 20, 2010.
  • 162
    American Society for Clinical Pathology. Continuing Medical Education. http://www.ascp.org/FunctionalNavigation/education/CME.aspx. Accessed January 20, 2010.
  • 163
    Raab SS, Grzybicki DM, Mahood LK, et al. Effectiveness of random and focused review in detecting surgical pathology error. Am J Clin Pathol. 2008; 130: 905-912.
  • 164
    Llewellyn H. Observer variation, dysplasia grading, and HPV typing: a review. Am J Clin Pathol. 2000; 114: S21-S35.
  • 165
    Schnitt SJ, Connolly JL, Tavassoli FA, et al. Interobserver reproducibility in the diagnosis of ductal proliferative breast lesions using standardized criteria. Am J Surg Pathol. 1992; 16: 1133-1143.
  • 166
    Dalton LW, Pinder SE, Elston CE, et al. Histologic grading of breast cancer: linkage of patient outcome with level of pathologist agreement. Mod Pathol. 2000; 13: 730-735.
  • 167
    Page DL, Dupont WD, Jensen RA, Simpson JF. When and to what end do pathologists agree? J Natl Cancer Inst. 1998; 90: 88-89.
  • 168
    Rosai J. Borderline epithelial lesions of the breast. Am J Surg Pathol. 1991; 15: 209-221.
  • 169
    Ghofrani M, Tapia B, Tavassoli FA. Discrepancies in the diagnosis of intraductal proliferative lesions of the breast and its management implications: results of a multinational survey. Virchows Arch. 2006; 449: 609-616.
  • 170
    Wells WA, Carney PA, Eliassen MS, et al. Pathologists' agreement with experts and reproducibility of breast ductal carcinoma-in-situ classification schemes. Am J Surg Pathol. 2000; 24: 651-659.
  • 171
    Bethwaite P, Smith N, Delahunt B, et al. Reproducibility of new classification schemes for the pathology of ductal carcinoma in situ of the breast. J Clin Pathol. 1998; 51: 450-454.
  • 172
    Sloane JP, Amendoeira I, Apostolikas N, et al. Consistency achieved by 23 European pathologists in categorizing ductal carcinoma in situ of the breast using five classifications. European Commission Working Group on Breast Screening Pathology. Hum Pathol. 1998; 29: 1056-1062.
  • 173
    Elsheikh TM, Asa SL, Chan JK, et al. Interobserver and intraobserver variation among experts in the diagnosis of thyroid follicular lesions with borderline nuclear features of papillary carcinoma. Am J Clin Pathol. 2008; 130: 736-744.
  • 174
    Grzybicki DM, Jensen C, Geisinger KR, et al. Improving interobserver reproducibility in Pap test and cervical biopsy interpretations. Mod Pathol. 2007; 20(Suppl 2): 337A.
  • 175
    American College of Surgeons Commission on Cancer. Cancer Program Standards. Chicago, IL: American College of Surgeons; 2004.
  • 176
    Kang HP, Devine LJ, Piccoli AL, Seethala RR, Amin W, Parwani AV. Usefulness of a synoptic data tool for reporting of head and neck neoplasms based on the College of American Pathologists cancer checklists. Am J Clin Pathol. 2009; 132: 521-530.
  • 177
    Mohanty SK, Piccoli AL, Devine LJ, et al. Synoptic tool for reporting of hematological and lymphoid neoplasms based on World Health Organization classification and College of American Pathologists checklist. BMC Cancer. 2007; 7: 144.
  • 178
    Sood JD, Wong C, Bevan R, Veale A, Sivakumaran P. Delays in the assessment and management of primary lung cancers in South Auckland. N Z Med J. 2009; 122: 42-50.
  • 179
    Rash B, Martin-Hirsch P, Schneider A, et al. Resource use and cost analysis of managing abnormal Pap smears: a retrospective study in five countries. Eur J Gynaecol Oncol. 2008; 29: 225-232.
  • 180
    Zbidi I, Hazari R, Niv Y, Birkenfeld S. Colonoscopy screening and surveillance of colorectal cancer and polyps: physicians' knowledge. Isr Med Assoc J. 2007; 9: 862-865.
  • 181
    Kelly KM, Phillips CM, Jenkins C, et al. Physician and staff perceptions of barriers to colorectal cancer screening in Appalachian Kentucky. Cancer Control. 2007; 14: 167-175.
  • 182
    Wahls TL, Peleg I. Patient- and system-related barriers for the earlier diagnosis of colorectal cancer. BMC Fam Pract. 2009; 10: 65.
  • 183
    Singh H, Daci K, Petersen LA, et al. Missed opportunities to initiate endoscopic evaluation for colorectal cancer diagnosis. Am J Gastroenterol. 2009; 104: 2543-2554.
  • 184
    Sargeran K, Murtomaa H, Safavi SM, Teronen O. Delayed diagnosis of oral cancer in Iran: challenge for prevention. Oral Health Prev Dent. 2009; 7: 69-76.
  • 185
    Seoane J, Varela-Centelles PI, Walsh TF, Lopez-Cedrun JL, Vasquez I. Gingival squamous cell carcinoma: diagnostic delay or rapid invasion? J Periodontol. 2006; 77: 1229-1233.
  • 186
    Gandhi TK, Kachalia A, Thomas EJ, et al. Missed and delayed diagnoses in the ambulatory setting: a study of closed malpractice claims. Ann Intern Med. 2006; 145: 488-496.
  • 187
    Hardin C, Pommier S, Pommier RF. The relationships among clinician delay of diagnosis of breast cancer and tumor size, nodal status, and stage. Am J Surg. 2006; 192: 506-508.
  • 188
    Christensen ED, Harvald T, Jendresen M, Aggestrup S, Petterson G. The impact of delayed diagnosis of lung cancer on the stage at the time of operation. Eur J Cardiothorac Surg. 1997; 12: 880-884.
  • 189
    Gatcliffe TA, Coleman RL. Tumor board: more than treatment planning—a 1-year prospective survey. J Cancer Educ. 2008; 23: 235-237.
  • 190
    Pawlik TM, Laheru D, Hruban RH, et al. Evaluating the impact of a single-day multidisciplinary clinic on the management of pancreatic cancer. Ann Surg Oncol. 2008; 15: 2081-2088.
  • 191
    Newman EA, Guest AB, Helvie MA, et al. Changes in surgical management resulting from case review at a breast cancer multidisciplinary tumor board. Cancer. 2006; 107: 2346-2351.
  • 192
    Zarbo RJ, Nakhleh RE, Walsh M. Customer satisfaction in anatomic pathology. A College of American Pathologists Q-Probes study of 3065 physician surveys from 94 laboratories. Arch Pathol Lab Med. 2003; 127: 23-29.
  • 193
    Nguyen TD, Legrand P, Devie I, Cauchois A, Eymard JC. Qualitative assessment of the multidisciplinary tumor board in breast cancer. Bull Cancer. 2008; 95: 247-251.
  • 194
    Abraham NS, Gossey JT, Davila JA, Al-Oudat S, Kramer JK. Receipt of recommended therapy by patients with advanced colorectal cancer. Am J Gastroenterol. 2006; 101: 1320-1328.
  • 195
    Lutterbach J, Pagenstecher A, Spreer J, et al. The brain tumor board: lessons to be learned from an interdisciplinary conference. Onkologie. 2005; 28: 22-26.
  • 196
    Khalifa MA, Dodge J, Covens A, Osborne R, Ackerman I. Slide review in gynecologic oncology ensures completeness of reporting and diagnostic accuracy. Gynecol Oncol. 2003; 90: 425-430.
  • 197
    Petty JK, Vetto JT. Beyond doughnuts: tumor board recommendations influence patient care. J Cancer Educ. 2002; 17: 97-100.
  • 198
    Kronz JD, Westra WH. The role of second opinion pathology in the management of lesions of the head and neck. Curr Opin Otolaryngol Head Neck Surg. 2005; 13: 81-84.
  • 199
    Wayment RO, Bourne A, Kay P, Tarter TH. Second opinion pathology in tertiary care of patients with urologic malignancies. Urol Oncol. 2009 Jun 11 [Epub ahead of print].
  • 200
    Thway K, Fisher C. Histopathological diagnostic discrepancies in soft tissue tumours referred to a specialist centre. Sarcoma. 2009; 2009: 741975 [Epub 2009 Jun 21].
  • 201
    Bomeisl PE Jr, Alam S, Wakely PE Jr. Interinstitutional consultation in fine-needle aspiration cytopathology: a study of 742 cases. Cancer Cytopathol. 2009; 117: 237-246.
  • 202
    Lueck N, Jensen C, Cohen MB, Weydert JA. Mandatory second opinion in cytopathology. Cancer Cytopathol. 2009; 117: 82-91.
  • 203
    Tan YY, Kebebew E, Reiff E, et al. Does routine consultation of thyroid fine-needle aspiration cytology change surgical management? J Am Coll Surg. 2007; 205: 8-12.
  • 204
    Thomas CW, Bainbridge TC, Thomson TA, McGahan CE, Morris WJ. Clinical impact of second pathology opinion: a longitudinal study of central genitourinary pathology review before prostate brachytherapy. Brachytherapy. 2007; 6: 135-141.
  • 205
    Hamady ZZ, Mather N, Lansdown MR, Davidson L, Maclennan KA. Surgical pathological second opinion in thyroid malignancy: impact on patients' management and prognosis. Eur J Surg Oncol. 2005; 31: 74-77.
  • 206
    Tsung JS. Institutional pathology consultation. Am J Surg Pathol. 2004; 28: 399-402.
  • 207
    Nguyen PL, Schultz D, Renshaw AA, et al. The impact of pathology review on treatment recommendations for patients with adenocarcinoma of the prostate. Urol Oncol. 2004; 22: 295-299.
  • 208
    Kronz JD, Milord R, Wilentz R, Weir EG, Schreiner SR, Epstein JI. Lesions missed on prostate biopsies in cases sent in for consultation. Prostate. 2003; 54: 310-314.
  • 209
    Weir MM, Jan E, Colgan TJ. Interinstitutional pathology consultations. A reassessment. Am J Clin Pathol. 2003; 120: 405-412.
  • 210
    McGinnis KS, Lessin SR, Elder DE, et al. Pathology review of cases presenting to a multidisciplinary pigmented lesion clinic. Arch Dermatol. 2002; 138: 617-621.
  • 211
    Wetherington RW, Cooper HS, Al-Saleem T, et al. Clinical significance of performing immunohistochemistry on cases with a previous diagnosis of cancer coming to a national comprehensive cancer center for treatment or second opinion. Am J Surg Pathol. 2002; 26: 1222-1230.
  • 212
    Staradub VL, Messenger KA, Hao N, Wiley EL, Morrow M. Changes in breast cancer therapy because of pathology second opinions. Ann Surg Oncol. 2002; 9: 982-987.
  • 213
    Vivino FB, Gala I, Hermann GA. Change in final diagnosis on second evaluation of labial minor salivary gland biopsies. J Rheumatol. 2002; 29: 938-944.
  • 214
    Layfield LJ, Jones C, Rowe L, Gopez EV. Institutional review of outside cytology materials: a retrospective analysis of two institutions' experiences. Diagn Cytopathol. 2002; 26: 45-48.
  • 215
    Westra WH, Kronz JD, Eisele DW. The impact of second opinion surgical pathology on the practice of head and neck surgery: a decade experience at a large referral hospital. Head Neck. 2002; 24: 684-693.
  • 216
    Arbiser ZK, Folpe AL, Weiss SW. Consultative (expert) second opinions in soft tissue pathology. Analysis of problem-prone diagnostic situations. Am J Clin Pathol. 2001; 116: 473-476.
  • 217
    Coblentz TR, Mills SE, Theodorescu D. Impact of second opinion pathology in the definitive management of patients with bladder carcinoma. Cancer. 2001; 91: 1284-1290.
  • 218
    Hahm GK, Niemann TH, Lucas JG, Frankel WL. The value of second opinion in gastrointestinal and liver pathology. Arch Pathol Lab Med. 2001; 125: 736-739.
  • 219
    Baloch ZW, Hendreen S, Gupta PK, et al. Interinstitutional review of thyroid fine-needle aspirations: impact on clinical management of thyroid nodules. Diagn Cytopathol. 2001; 25: 231-234.
  • 220
    Murphy WM, Rivera-Ramirez I, Luciani LG, Wajsman Z. Second opinion of anatomical pathology: a complex issue not easily reduced to matters of right and wrong. J Urol. 2001; 165: 1957-1959.
  • 221
    Chafe S, Honore L, Pearcey R, Capstick V. An analysis of the impact of pathology review in gynecologic cancer. Int J Radiat Oncol Biol Phys. 2000; 48: 1433-1438.
  • 222
    Aldape K, Simmons ML, Davis RL, et al. Discrepancies in diagnoses of neuroepithelial neoplasms: the San Francisco Bay Area Adult Glioma Study. Cancer. 2000; 88: 2342-2349.
  • 223
    Kronz JD, Westra WH, Epstein JI. Mandatory second opinion surgical pathology at a large referral hospital. Cancer. 1999; 86: 2426-2435.
  • 224
    Selman AE, Niemann TH, Fowler JM, Copeland LJ. Quality assurance of second opinion pathology in gynecologic oncology. Obstet Gynecol. 1999; 94: 302-306.
  • 225
    Lee AH, Mead GM, Theaker JM. The value of central histopathological review of testicular tumours before treatment. BJU Int. 1999; 84: 75-78.
  • 226
    Chan YM, Cheung AN, Cheng DK, Ng TY, Ngan HY, Wong LC. Pathology slide review in gynecologic oncology: routine or selective? Gynecol Oncol. 1999; 75: 267-271.
  • 227
    Wurzer JC, Al-Saleem TI, Hanlon AL, Freedman GM, Patchefsky A, Hanks GE. Histopathologic review of prostate biopsies from patients referred to a comprehensive cancer center: correlation of pathologic findings, analysis of cost, and impact on treatment. Cancer. 1998; 83: 753-759.
  • 228
    Jacques SM, Qureshi F, Munkarah A, Lawrence WD. Interinstitutional surgical pathology review in gynecologic oncology: I. Cancer in endometrial curettings and biopsies. Int J Gynecol Pathol. 1998; 17: 36-41.
  • 229
    Jacques SM, Qureshi F, Munkarah A, Lawrence WD. Interinstitutional surgical pathology review in gynecologic oncology: II. Endometrial cancer in hysterectomy specimens. Int J Gynecol Pathol. 1998; 17: 42-45.
  • 230
    Santoso JT, Coleman RL, Voet RL, Bernstein SG, Lifshitz S, Miller D. Pathology slide review in gynecologic oncology. Obstet Gynecol. 1998; 91: 730-734.
  • 231
    Sharkey FE, Sarosdy MF. The significance of central pathology review in clinical studies of transitional cell carcinoma in situ. J Urol. 1997; 157: 68-70.
  • 232
    Bruner JM, Inouye L, Fuller GN, Langford LA. Diagnostic discrepancies and their clinical impact in a neuropathology referral practice. Cancer. 1997; 79: 796-803.
  • 233
    Epstein JI, Walsh PC, Sanfilippo F. Clinical and cost impact of second-opinion pathology. Review of prostate biopsies prior to radical prostatectomy. Am J Surg Pathol. 1996; 20: 851-857.
  • 234
    Prescott RJ, Wells S, Bisset DL, Banerjee SS, Harris M. Audit of tumour histopathology reviewed by a regional oncology centre. J Clin Pathol. 1995; 48: 245-249.
  • 235
    Abt AB, Abt LG, Olt GJ. The effect of interinstitution anatomic pathology consultation on patient care. Arch Pathol Lab Med. 1995; 119: 514-517.
  • 236
    Scott CB, Nelson JS, Farnan NC, et al. Central pathology review in clinical trials for patients with malignant glioma. A report of Radiation Therapy Oncology Group 83-02. Cancer. 1995; 76: 307-313.
  • 237
    Segelov E, Cox KM, Raghavan D, McNeil E, Lancaster L, Rogers J. The impact of histological review on clinical management of testicular cancer. Br J Urol. 1993; 71: 736-738.
  • 238
    Renshaw AA, Schultz D, Cote K, Loffredo M, Ziemba DE, D'Amico AV. Accurate Gleason grading of prostatic adenocarcinoma in prostate needle biopsies by general pathologists. Arch Pathol Lab Med. 2003; 127: 1007-1008.
  • 239
    Frable WJ. Surgical pathology—second reviews, institutional reviews, audits, and correlations: what's out there? Error or diagnostic variation? Arch Pathol Lab Med. 2006; 130: 620-625.
  • 240
    Veenhuizen KC, De Wit PE, Mooi WJ, Scheffer E, Verbeek AL, Ruiter DJ. Quality assessment by expert opinion in melanoma pathology: experience of the pathology panel of the Dutch Melanoma Working Party. J Pathol. 1997; 182: 266-272.
  • 241
    Association of Directors of Anatomic and Surgical Pathology. Consultations in surgical pathology. Am J Surg Pathol. 1993; 17: 743-745.
  • 242
    Horowitz JM. Discordant diagnosis. Time. 1999; 154: 117.
  • 243
    Gupta D, Layfield LJ. Prevalence of inter-institutional anatomic pathology slide review. Am J Surg Pathol. 2000; 24: 280-284.
  • 244
    Grzybicki DM, Shahangian S, Pollock AM, Raab SS. A summary of the deliberations on strategic planning for continuous quality improvement in laboratory medicine. Am J Clin Pathol. 2009; 131: 315-320.
  • 245
    Shahangian S. CDC institutes on critical issues in health laboratory practice (1984-1995). http://wwwn.cdc.gov/mlp/QIConference/Abstracts/Posters/Poster-%20CDC%20Institutes%20on%20Crit%20Issues.pdf. Accessed January 24, 2010.