Quality in Cancer Diagnosis


  • Stephen S. Raab MD, Corresponding author
    Professor of Pathology, Department of Pathology, University of Colorado–Denver, Aurora, Colorado
    University of Colorado–Denver, Department of Pathology, 12605 East 16th Avenue, Anschutz Inpatient Pavilion, Room 3022, Aurora, CO 80045

  • Dana M. Grzybicki MD, PhD
    Associate Professor, Rocky Vista University School of Osteopathic Medicine, Parker, Colorado


Improving the quality of oncologic pathology diagnosis is immensely important, as the overwhelming majority of the approximately 1.6 million patients who will be diagnosed with cancer in 2010 will have their diagnoses established through the pathologic interpretation of a tissue sample. Millions more patients will have tissue samples obtained to rule out cancer and will be found not to have it. The majority of studies on the quality of oncologic pathology diagnoses have focused on patient safety and have documented a variety of causes of error occurring in the clinical and pathology laboratory phases of diagnostic testing. The reported frequency of diagnostic error in oncologic pathology depends on several factors, such as definitions and detection methods, and ranges from 1% to 15%. The large majority of diagnostic errors do not result in severe harm, although mild to moderate harm, in the form of additional testing or diagnostic delays, occurs in up to 50% of errors. Clinical practitioners play an essential role in error reduction through several avenues: effective test ordering, providing accurate and pertinent clinical information, procuring high-quality specimens, following up on test results in a timely manner, communicating effectively about potentially discrepant diagnoses, and advocating second opinions on the pathology diagnosis in specific situations. CA Cancer J Clin 2010;60:139–165. © 2010 American Cancer Society, Inc.


Almost all primary and many recurrent diagnoses of cancer are based on the pathology diagnosis. In the United States, approximately 1.6 million individuals will be diagnosed with cancer in 2010,1 and far more individuals will have pathology tissues procured to rule out cancer and will not have cancer. In the current era of healthcare reform and reorganization, the assessment of quality in all aspects of our healthcare system is critically important.2, 3 The screening, diagnosis, and management of patients with cancer form the basis of a sprawling, complex system with extensive practitioner subspecialization. With the gradual aging of the US population, Smith et al estimated that 2.3 million people will be diagnosed with cancer in 2030.1 As most of these patients will enter the healthcare system through the portal of a pathology diagnosis, it is apropos to assess the current state of quality in oncologic pathology diagnosis.

In 1990, the Institute of Medicine (IOM) defined the quality of healthcare as the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge.4 This definition addressed both population and individual healthcare needs and encompassed clinician and patient perspectives.4 The IOM further classified quality into 6 domains: safety, timeliness, effectiveness, efficiency, equity, and patient centeredness.5, 6 As most oncologic pathology quality research has heretofore focused on aspects of test performance, a large proportion of the medical literature has reported on patient safety or failures in pathology diagnostic testing and screening.

Most active and published quality improvement activities in oncologic pathology testing affect microsystems of practice or units of care delivery, which Berwick defined as Level B of a 4-level system of care.7 These published quality initiatives affect entities such as local diagnostic testing or screening services or individual laboratories. For there to be true transformation of the quality of oncologic pathology diagnostic care, change will need to occur at additional levels of healthcare, addressing the experience of patients and communities (Level A), healthcare organizations (Level C), and healthcare environments (Level D).

Given the breadth and depth of the interplay between safety and oncologic diagnostic testing and screening practice, an article on oncologic pathology diagnostic safety, by necessity, must be written from a specific perspective, such as that of the practicing clinician, pathologist, health services researcher, payer, or patient. This article is written for practicing clinicians and, as such, excludes details of some existing quality structures, including those within laboratories. Although we will focus on the practical approaches that clinical practitioners may take to contribute to the improvement of patient safety in oncologic pathology testing, we recognize that more global strategies of change affecting higher and lower levels of care are necessary to improve quality in the overall system of oncologic pathology diagnosis. We recognize that an accurate oncologic diagnosis requires the collaboration of pathologists, oncology specialists, and other clinicians. Lastly, we chose to focus this article on activities surrounding quality improvement of patient-safety systems, which has been driven by the activities of quality assurance or the assessment of current levels of safety.

Definitions of Medical Error

An underlying theme that currently is driving the discussion of patient safety in oncologic pathology diagnosis is the lack of agreement on the definition of “diagnostic” pathology error.8–10 This lack of agreement has resulted in major differences in reported error rates and has limited the effectiveness of quality improvement activities. A contributing factor to this dilemma is the lack of acceptance of the IOM definition of a medical error as applied to pathology diagnosis.

The IOM defined a medical error as the failure of a planned action to be completed as intended or the use of a wrong plan to achieve an aim.2 This definition encompasses all types of error and does not link patient outcome to error. It is important to note that error or failure does not suggest blame or necessarily lack of skill, negligence, or legal liability.

Pathology laboratories traditionally have considered 2 types of error: errors of accuracy and errors of precision.11 Both types of error may be incorporated into the IOM definition of error. Accuracy is the closeness of a measure to its true value, and precision is the degree to which repeated measures show the same results. Accuracy and precision often are visually portrayed with a cartoon of darts thrown at a dart board or gunshots at a target, as shown in Figure 1.12

Figure 1.

These 4 pictures demonstrate 4 combinations of accuracy and precision through the use of a bull's-eye. The center of the bull's-eye represents the accuracy of a laboratory test, and an oncologic pathology test is accurate when the diagnosis hits the center (ie, correctly identifies the disease process). The closeness of the shots (or test groupings) measures precision or reproducibility. Note that a group of pathologists all could agree with a diagnosis, but the diagnosis may not be accurate (see “Not Accurate, Precise” figure). In the “Accurate, Not Precise” figure, the average of the individual scores lies in the bull's-eye. Source: http://celebrating200years.noaa.gov/magazine/tct/accuracy_vs_precision.html. Reproduced with permission from Barr JT, Silver S.

As a starting point for the study of patient safety in diagnostic pathology, a consortium of pathologists funded by the Agency for Healthcare Research and Quality (AHRQ) defined diagnostic error as the failure of a test (the planned action) to produce a diagnosis that corresponds to the actual disease state in a patient.13 This concept of error is one of accuracy. This definition has not been widely accepted by anatomic pathologists, partly because of the connotation that a diagnostic error implies an error in a pathologist's interpretation. In reality, an error in interpretation is only one of several root causes of a diagnostic error, as will be discussed later.

Precision in oncologic pathology diagnostic testing generally has been reported as the reproducibility of the pathologist's diagnostic interpretation.14, 15 The precision of the testing activities leading to and following the diagnostic interpretation has not been extensively studied. Measures of the precision of diagnostic interpretation are expressed in terms of diagnostic agreement among pathologists and include the metrics of kappa and crude agreement.16, 17 Some pathologists argue that the lack of precision, which is measured by diagnostic disagreement, is not a form of error but represents variability in practice,9, 18 similar to other forms of variation, such as the reported geographic variation in hysterectomy rates.19–21
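The agreement metrics named above can be made concrete with a small worked example. The sketch below is purely illustrative (the two raters, their diagnoses, and the category labels are invented, not drawn from the cited studies); it computes crude agreement and Cohen's kappa for two hypothetical pathologists reading the same 10 specimens.

```python
from collections import Counter

def crude_agreement(rater_a, rater_b):
    """Fraction of cases on which two raters give the same diagnosis."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement: (p_o - p_e) / (1 - p_e)."""
    n = len(rater_a)
    p_o = crude_agreement(rater_a, rater_b)
    # Expected agreement if raters assigned categories independently
    # at their observed marginal rates.
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_e = sum((count_a[c] / n) * (count_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical diagnoses from two pathologists on ten specimens
a = ["benign", "benign", "malignant", "atypical", "benign",
     "malignant", "benign", "atypical", "malignant", "benign"]
b = ["benign", "atypical", "malignant", "atypical", "benign",
     "malignant", "benign", "benign", "malignant", "benign"]

print(f"crude agreement: {crude_agreement(a, b):.2f}")  # 0.80
print(f"Cohen's kappa:   {cohens_kappa(a, b):.2f}")     # 0.68
```

Note that the chance-corrected kappa (0.68 here) is lower than the crude agreement (0.80), because some agreement would be expected even if the two pathologists assigned diagnoses at random according to their marginal rates.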

Diagnostic precision is critically important to clinicians, as will be discussed in the section on Secondary Review in Pathology. By definition, variation represents a form of error, and in clinical practice, variation suggests less than optimal care.19–21 Although we may not know the best clinical practice, disparate rates of performance of specific procedures (eg, hysterectomy, requesting chest films) imply that one or more ways of practicing are not ideal. Using hysterectomy rates as an example, geographically disparate high, moderate, and low rates cannot all be equally clinically effective for patients at equal risk, even when we do not know the optimal rate. A problem in some areas of clinical medicine is the lack of data linking the rates of performance of specific procedures with patient outcomes. The same holds true for pathologists' diagnostic interpretations: we may not know how different diagnostic schemes, or even 2 different pathologists' diagnoses, will affect patient outcomes. However, a difference in diagnostic interpretation on the same patient specimen is problematic, especially when clinical management differs by diagnosis.

The Total Testing Process

One way to conceptualize oncologic pathology safety is in terms of a total testing process (TTP), which is a system-based framework for examining all possible interactions and activities that may affect the quality of laboratory tests.22–24 This framework allows for the design and implementation of interventions that may reduce or eliminate errors that adversely affect testing and patient-health outcomes. This framework also allows for the study of barriers and limits to quality-improvement activities. The TTP encompasses all components or steps of the cycle from the point of the clinical question to the point of clinical action. For a patient who has a lesion suspicious for cancer (based on clinical examination and diagnostic imaging), this cycle traverses from the clinical question of “Does this patient have cancer?” to the clinical action of cancer treatment and follow-up, when the diagnosis of cancer is rendered. In this regard, the TTP for oncologic pathology testing is defined by activities (Fig. 2)23 in 3 distinct phases that align with clinical workflow internal and external to the pathology laboratory as follows:

  1. Preanalytic phase: clinician test selection, test ordering, specimen procurement, patient and specimen identification, and specimen transport
  2. Analytic phase: specimen processing, preparation, immediate reporting of results, and interpretation
  3. Postanalytic phase: test-result reporting and clinician receipt, clinician interpretation of test results, and clinical action based on interpretation25
Figure 2.

The total testing process (TTP) begins with a clinical question and the care provider and patient. Each larger step in the process comprises many smaller steps, and laboratory medicine is the sum total of all testing steps. Source: Adapted with permission from Boone J. Presentation at the Institute on Critical Issues in Health Laboratory Practice: Managing for Better Health, September 23–26, 2007. Atlanta, GA: Centers for Disease Control and Prevention. The total testing process and its implications for laboratory administration and education. Clin Lab Manage Rev 1994;8:526–542.

All phases in the TTP involve multiple opportunities for making clinical decisions, and some phases involve the use of highly technical skills. Years of training and practice are necessary to hone many of the skills necessary for optimal performance of work-related tasks. Many of the steps in the TTP are based on subjective assessment of criteria. In some ways, a pathologist examining a slide is similar to a cardiologist listening to heart sounds or an internist performing a physical examination.

In oncologic pathology testing, most preanalytic- and postanalytic-phase processes occur outside of the laboratory. The utility of the TTP concept lies in the linking together of all testing steps and the crossing of historical boundaries of testing-process ownership. Currently, data on the performance and problems of some steps of the TTP are well known but are lacking for other steps. In the frame of the TTP, quality improvement and error-reduction initiatives are optimized through a team approach that involves clinicians and pathologists.

The TTP and Errors by Testing Phase

Patient-safety researchers are able to report the frequency, cause, and outcome of error for only some steps of the TTP. Historically, quality improvement initiatives in laboratory medicine have focused on the analytic phase, although pathologists have focused on some aspects of the analytic process more than other aspects. Root cause analytic studies confirm that more errors occur in the preanalytic and postanalytic phases of testing. Bonini et al reported that for the entire field of laboratory medicine, distribution of errors was 32% to 75% in the preanalytic phase, 13% to 32% in the analytic phase, and 9% to 31% in the postanalytic phase.26 Stroobants et al estimated that 20% of all laboratory tests were associated with an error, and greater than 85% of errors occurred in the preanalytic or postanalytic phase.25 These 2 studies concentrated on reviews and estimates of clinical pathology-testing services, which, for the most part, involved automated instruments in the analytic phase. As cancer diagnostic testing and screening by anatomic pathology involves less automation in the analytic phase, the potential for error may be higher, although the frequency of error in the preanalytic and postanalytic phases may be similar.

Schiff et al reported that of 583 missed or delayed diagnoses reported by 310 clinicians at 22 institutions, 10.3% of cases were missed or delayed diagnoses of lung, breast, or colon cancer.27 Errors occurred most frequently in the testing phase (failure to order, report, and follow up a laboratory result, 44%), followed by clinician assessment errors (failure to consider or tendency to overweigh competing diagnoses, 32%), history taking (10%), physical examination (10%), and referral or consultation delays and errors (4%).

Estimates of Error Frequency and Harm in Cancer Screening and Testing

Studies of patient safety in diagnostic oncologic pathology show a tremendous heterogeneity in study design and error-reporting methods. Most studies of patient safety in oncologic pathology testing are based on single-institution data and rely on retrospective review of specific case types. Crucial to the evaluation of these studies is documentation of the specific error-detection method and standardization of the process of collecting data on errors.

In 2002, AHRQ funded 4 institutions to evaluate the frequency, cause, and outcome of diagnostic pathology errors.13 These institutions evaluated the frequency of error in diagnosing cancer by using cytologic-histologic correlation to compare histologic and cytologic diagnoses in patients who underwent both a cytologic and a histologic procedure to obtain specimen material. Histologic and cytologic sampling might have occurred at the same clinical diagnostic procedure or at different procedures performed at different times. Because cytopathology and surgical pathology diagnostic schema are somewhat different, the researchers considered diagnoses in a scaled-step categorical context to determine whether a discrepancy occurred. The researchers performed chart reviews of patients who had histologic and cytologic diagnoses that were discrepant by 2 or more steps (Table 1).13 For example, a lung fine-needle aspiration diagnosis of benign and a lung biopsy diagnosis of malignant would be considered discrepant.

Table 1. Diagnostic Steps for Gynecologic and Nongynecologic Specimens

  Step | Gynecologic Cytology (Pap Test) | Gynecologic Histology | Nongynecologic Cytology | Nongynecologic Histology
  0 | No evidence of intraepithelial lesion or malignancy (NIL) | Benign | Benign | Benign
  1 | Atypical squamous cells-undetermined significance (ASC-US) | No equivalent | Atypical | Not generally used
  2 | Low grade squamous intraepithelial lesion (LSIL) | Cervical intraepithelial neoplasia 1 (CIN 1) | Suspicious | Not generally used
  3 | High grade squamous intraepithelial lesion (HSIL) | Cervical intraepithelial neoplasia 2 or 3 (CIN 2 or 3) | Malignant | Malignant
  4 | Invasive carcinoma | Invasive carcinoma | |

  Source: Reprinted with permission from Raab SS, Grzybicki DM, Janosky JE. Clinical impact and frequency of anatomic pathology errors in cancer diagnosis. Cancer. 2005;104:2205–2213.
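The 2-step discrepancy rule used by the AHRQ consortium can be sketched in a few lines. The step values follow Table 1 for nongynecologic specimens; the function name and case data here are hypothetical, chosen only to illustrate the logic.

```python
# Diagnostic step values for nongynecologic specimens (after Table 1).
CYTOLOGY_STEPS = {"benign": 0, "atypical": 1, "suspicious": 2, "malignant": 3}
HISTOLOGY_STEPS = {"benign": 0, "malignant": 3}

def is_discrepant(cytology_dx, histology_dx, threshold=2):
    """Flag a case when the cytologic and histologic diagnoses differ
    by `threshold` or more steps on the categorical scale."""
    gap = abs(CYTOLOGY_STEPS[cytology_dx] - HISTOLOGY_STEPS[histology_dx])
    return gap >= threshold

# A benign fine-needle aspiration paired with a malignant biopsy
# differs by three steps and would trigger chart review.
print(is_discrepant("benign", "malignant"))     # True
print(is_discrepant("suspicious", "malignant")) # False (one step apart)
```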

For nongynecologic pathology, the frequency of error ranged by institution from 4.87% to 11.8% of all correlating cytology and histology case pairs (P < .001).13 Across all institutions, harm occurred in 39% of all cases of error and was generally classified as low grade, consisting of unnecessary repeat testing or delays in diagnosis. Severe harm occurred in less than 2% of all cases of error. A second AHRQ study evaluated the patient safety of cervical cancer prevention services in 4 hospital systems from 1998 to 2004 in which patients underwent Pap testing followed by colposcopy with biopsy for specific Pap-test diagnoses.28 The researchers reported 5278 cytologic-histologic discrepancies (0.321% of all Pap tests procured during this time period), with approximately half of the errors occurring in the Pap-test phase and half occurring in the colposcopy phase of service. Unnecessarily repeated tests and diagnostic delays occurred in 79.8% and 63.9% of errors involving high- and low-grade lesions, respectively. The researchers reported that cervical cancer screening was highly successful in detecting squamous cell cancer (squamous cell cancer was missed in only 1 of 187,786 Pap tests) but was associated with failures linked to minor or moderate harm consisting of overtreatment or unnecessary follow-up.

Error Root Cause Analysis

Different methods of error root cause analysis (eg, Eindhoven method29–31 or the Toyota method of asking 5 why's32–34) focus on different categorizations of error, such as latent or active causes of error.35 Latent causes of errors include system problems that contribute to individuals making active errors. Historically, oncologic pathology diagnostic error root cause analysis has centered on active errors of accuracy occurring in the analytic phase of testing, although a few studies have focused on errors in precision and latent factors. The study of root cause analysis has the most meaning when error cause is linked to a quality improvement initiative. As the TTP may be subdivided into a large number of work steps, root cause analysis often is implemented in conjunction with process mapping, where specific process steps are identified.36–38 Figure 3 shows a high-level process map of an anatomic pathology laboratory.37 Errors in oncologic pathology diagnosis may be caused by process failures in specific accessioning steps, such as specimen mislabeling or compromise of specimen integrity.

Figure 3.

This process map of an anatomic pathology laboratory shows the flow of a specimen as it traverses through an anatomic pathology laboratory. Ten large steps are identified as the specimen is accessioned, macroscopically examined, processed, and then interpreted by a pathologist. By examining the substeps of the larger steps during daily work processes, one may detect specific error-prone steps. In this process map, the flow of a specimen in the histology laboratory is characterized by excess movement and crossover, which reflect waste. The crossover points are also sites where specimen mix-ups may occur and may be targeted for redesign to reduce error. Source: Raab SS, Grzybicki DM, Condel JL, et al. Effect of Lean method implementation in the histopathology section in an anatomical pathology laboratory. J Clin Pathol. 2008;61:1193–1199.

Work steps may be categorized as 1) activities, or the processes that individuals perform; 2) connections, or the handoffs between individuals; and 3) pathways, or the process flows.39–45 Active or latent errors may be associated with each of these work steps, and root cause analysis is used to pinpoint the steps that are prone to fail. Most oncologic pathology tests comprise 200 to 300 or more unique steps from the time a test is ordered to the time the test result is acted upon.

In oncologic pathology diagnosis, researchers have used different error-categorization schemes based on cause attributed to specific process steps.46–48 Meyer et al classified anatomic pathology diagnostic error causes as failures in patient identification (preanalytic or analytic causes), specimen quality (preanalytic or analytic causes), interpretation (analytic causes), or reporting (analytic or postanalytic causes).46 As mentioned above, root cause analyses of errors detected by cytologic-histologic correlation attribute cause to interpretation (analytic cause) or specimen procurement (preanalytic cause).49, 50 These classification schemes have examined mainly active components of error, although rare reports of system errors also have been studied.

Quality Improvement in Cancer Testing and Screening

Quality improvement initiatives vary in scope and complexity from a simple individual frontline change to a large-scale change involving multiple steps of several testing phases. Hospital and laboratory microsystems may use quality improvement systems, such as continuous quality improvement (CQI), total quality management (TQM), Six Sigma, and Lean (eg, the Toyota Production System [TPS]), to improve efficiency, another IOM quality metric, but also may introduce change to improve patient safety.11, 51 These improvement systems may fix problems in specific steps, sets of work steps, or systems. In the work-step process model, failures in early steps may result in failures in later steps.34, 37 For example, a specimen mix-up at the time of test procurement ultimately will lead to diagnostic error (a correct diagnosis, but for the wrong patient).
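For readers unfamiliar with Six Sigma terminology, the conventional conversion from an observed defect (error) rate to a sigma level, including the customary 1.5-sigma long-term shift, can be sketched as follows. The 5% error rate below is a hypothetical figure chosen from within the 1% to 15% range reported earlier; the calculation is the standard Six Sigma convention, not a method from the cited studies.

```python
from statistics import NormalDist

def sigma_level(defect_rate, shift=1.5):
    """Convert a long-run defect rate to a short-term sigma level,
    applying the conventional 1.5-sigma shift."""
    yield_rate = 1.0 - defect_rate
    return NormalDist().inv_cdf(yield_rate) + shift

# A hypothetical 5% diagnostic error rate
rate = 0.05
dpmo = rate * 1_000_000  # defects per million opportunities
print(f"DPMO: {dpmo:,.0f}")                   # DPMO: 50,000
print(f"sigma level: {sigma_level(rate):.2f}") # sigma level: 3.14
```

By this convention, a laboratory with a 5% error rate operates at roughly 3 sigma; reaching the nominal Six Sigma benchmark would correspond to about 3.4 defects per million opportunities.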

In the following section, oncologic pathology diagnostic errors caused by failures in specific steps are examined and methods by which clinical practitioners and pathologists may improve the process are discussed.

Evidence for Quality Improvement Opportunities in Oncologic Diagnostic Testing and Screening

Significant improvements have occurred during the past decade in detection and treatment of malignancies in patients seen in both general and specialty oncology practices. Despite these improvements, substantial opportunities for quality enhancements in all phases of the total testing process exist. The eventual performance outcome of oncologic testing is highly dependent not only on the roles of pathologists during the analytic phase of testing but also on the roles clinical practitioners play in preanalytic and postanalytic phases of the total testing process. Opportunities in specific testing activities have been identified whereby clinicians may considerably influence the quality of the oncologic diagnostic and screening tests they use. These specific activities are discussed to provide clinicians involved in any aspect of oncologic care with evidence-based information to support the implementation of quality improvement changes in their testing-related practices. Activities that will be discussed are: 1) effective test ordering, 2) provision of pertinent clinical information with specimens submitted for testing, 3) procurement of a specimen of the highest quality possible, 4) appropriate handling and interpretation of tissues, 5) timely follow-up on test results, 6) effective communication with pathologist staff should problems or inconsistencies with final results or diagnoses exist, and 7) requesting secondary review of tissue samples when it appears to be crucial for obtaining a high-quality, valid diagnosis.

Although at least some of the activities listed above seem self-evident and consistent with current clinical intentions for every patient specimen obtained, the evidence strongly suggests that quality gaps still exist in these processes. Thus, they represent areas ripe for significant improvement in patient care. In addition, the majority of the peer-reviewed published work related to quality improvement in all diagnostic and screening testing to date has been performed by investigators outside of the United States. The international nature of the studies may impact the ability to generalize the reported findings to US oncologic testing and screening programs; however, the information presented here represents findings related to common tests and procedures, such as cervical cancer and colon cancer screening and diagnostic testing for lung lesions suspicious for malignancy. In addition, the international source of the available information clearly reveals the critical need for more research in this area by US investigators.

Effective Test Ordering

Much of the currently available evidence on physician test-ordering appropriateness and effectiveness relates to general clinical laboratory testing,52–60 although several studies have specifically addressed test ordering in oncology practice.61, 62 The development of guidelines in oncologic testing was driven partly by variability in clinical test-ordering decisions. For example, Rivera and Mehta reported the American College of Chest Physicians Clinical Practice Guidelines for the initial diagnosis of lung cancer,62 yet actual ordering of diagnostic pathology lung tests varies by clinician subspecialty.63

The general laboratory-medicine literature provides interesting and important reproducible information on interventions that appear to increase the appropriateness of laboratory test ordering that may be applied and tested on oncologic test-ordering patterns.

For example, multiple investigators have shown that educational interventions improve test-ordering patterns.52–56 Timely feedback on test ordering has been shown to be critical to the success of the process in the majority of these studies.54–56 Information technology applications (ie, use of predetermined computerized laboratory-testing schemes) also have been successful for decreasing laboratory utilization, as has the mandatory use of new test-ordering forms that adhere to national test-ordering guidelines.57–59 However, despite initial success, the latter intervention did not result in sustainable changes in physician test-ordering behavior in the cited studies.

Reports specifically addressing test-ordering appropriateness and effectiveness for cancer diagnosis have focused on colorectal cancer screening and on cervical cancer screening.64–73 In general, findings from these studies for both cancer types show a persistently high degree of variability in the application of these screening tests, despite evidence-based guidelines for both and evidence for cost effectiveness.71 The presence of a high degree of variability in screening practice patterns implies the existence of suboptimal screening and poorer outcomes for at least a portion of the screening-eligible population.

Two additional interesting and important findings in this area are that diagnostic biopsies in certain patient populations and for certain types of malignancies are more cost effective when performed percutaneously,74, 75 and that in general, physician tolerance for risk-taking is associated with test-ordering behavior.76, 77 Further studies are needed to show that the use of interventions that are designed to address these technical and behavioral factors are successful in increasing the appropriate and effective use of laboratory testing in oncology practice.

Provision of Pertinent Clinical Information with Specimens Submitted for Testing

Several observational studies that focused on a variety of malignancy types have demonstrated that the performance of pathologic and/or radiologic tests is improved when pertinent clinical information is included with the testing request.78–83 These studies have primarily used either a retrospective review or prospective observational design and focused on diagnostic testing for breast malignancies. In a recent report by investigators in Australia,80 knowledge of clinical information about the type and site of symptoms improved mammogram performance as measured by sensitivity, specificity, and receiver operating characteristic (ROC) curves. Particularly in oncologic testing, the diagnostic interpretation obtained through radiologic service testing significantly impacts the performance and/or interpretation of pathology-service testing and vice versa. In the case of diagnostic mammography, positive results are essentially always followed by a diagnostic breast biopsy, which requires interpretation by a pathologist. In this way, increased accuracy of mammography testing directly impacts the quality of patient care not only by providing the patient with a valid interpretation of the presence of disease but also by preventing the performance of an additional unnecessary diagnostic test.
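The performance measures cited in these studies (sensitivity and specificity) reduce to simple ratios on a 2-by-2 table of test results versus true disease status. The counts below are invented for illustration and do not come from the cited mammography studies.

```python
def sensitivity(tp, fn):
    """True-positive rate: the fraction of diseased cases the test flags."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: the fraction of disease-free cases the test clears."""
    return tn / (tn + fp)

# Hypothetical mammography results scored against biopsy-confirmed disease
tp, fn = 85, 15    # cancers detected vs missed
tn, fp = 900, 100  # negatives correctly cleared vs false alarms

print(f"sensitivity: {sensitivity(tp, fn):.2f}")  # sensitivity: 0.85
print(f"specificity: {specificity(tn, fp):.2f}")  # specificity: 0.90
```

In this hypothetical cohort, the 100 false positives are exactly the cases that would proceed to an unnecessary diagnostic biopsy, which is why improving the accuracy of the upstream radiologic test also reduces downstream pathology workload and cost.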

A recent and related study performed in the United States illustrates the value of maximizing accurate mammography results, in part because of the “trickle down” effect mammography has on further testing.84 The investigators used Medicare claims data to estimate resource use and costs of the diagnostic workup of Medicare beneficiaries with suspected breast cancer. These authors reported that Medicare spends approximately $679 million annually on diagnostic workups for women with suspected breast cancer and that false-positive mammograms result in diagnostic costs of approximately $250 million, 40% of total costs.

Another recent observational study specifically describes the positive influence that the provision of pertinent clinical information has had on the accuracy of pathologic diagnoses.82 Focused on the interpretation of melanocytic lesion biopsies, this retrospective review of 99 atypical melanocytic lesions showed a significant increase in diagnostic agreement among 10 dermatopathology experts when pertinent clinical information was provided with the specimen for review. Each of the pathologists changed his/her final pathologic diagnosis on 7 of the 99 cases when given additional clinical information followed by re-review.

Additional studies that demonstrate the need for pertinent clinical information in order for radiologists and pathologists to generate diagnostic interpretations with the highest sensitivity and specificity in oncologic care have focused on the specific diagnosis of urothelial carcinoma78 and breast sonographic interpretations.79, 81

All of the above recent studies support the conclusions of a systematic review of the literature published in 2004 by Loy and Irwig that evaluated results from studies aimed at describing the accuracy of diagnostic testing performed with and without clinical information.85 Their review included studies that examined the accuracy of either radiologic or pathologic diagnostic tests. Studies accepted for review were not restricted to those involving oncologic diagnostic testing, although oncologic diagnostic tests were included in the review (eg, interpretation of bronchial brush biopsies). Two conclusions from this systematic review were that the practice of reading diagnostic tests with clinical information seems justified and that future studies should be designed to investigate the best way of providing clinical information. To date, information related to their second conclusion is still unavailable.

Of note, 2 studies we identified reported that the use of clinical information had little to no impact on the accuracy of diagnostic test results.86, 87 The first was a study that examined the usefulness of a mathematical model for predicting the outcomes of pregnancies of unknown location,86 and the second examined the impact of clinical information on the detection of early lung cancer by plain chest radiography.87

Although a small number of studies describe conflicting results, the majority of currently available studies support the provision of pertinent clinical information as common practice for diagnostic testing (including oncologically related testing) because of its impact on the accuracy of test results. However, a significant barrier to making this common practice is the current lack of evidence about how this component of a crucial handoff point in oncologic care may best be accomplished consistently. The presence of clinical information may narrow the pathology differential diagnosis and reduce the cost of testing. Clinical information also may optimize decisions about tissue fixation and processing. Although the presence of clinical information could introduce interpretive bias, studies demonstrating this phenomenon in actual practice are lacking.

Procurement of a Specimen of the Highest Quality Possible

Data from several sources, such as reports of cytologic-histologic correlation error detection, indicate that the quality of the specimen sample is critically important to making an accurate oncologic pathology diagnosis.13, 49, 50 Cytologic-histologic correlation data show that 50% to 80% of errors are secondary to sampling error, although detailed root cause analysis of sampling failure has rarely been performed.88, 89

Traditionally, in cytologic-histologic correlation root cause analysis, a pathologist retrospectively reviews microscopic slides and classifies each error as interpretation-related or sampling-related. For example, in a false-negative case, if the review pathologist retrospectively identifies tumor, the error is classified as interpretation-related; if tumor is not identified, the error is classified as sampling-related. This method does not specifically focus on specimen quality and its relation to error.

Raab et al created a novel cytologic-histologic correlation root cause analytic method, known as the No Blame Box, that pathologists may use to assess false-negative cases in terms of sample quality and amount of tumor (Fig. 4).90 In an optimal cytologic specimen from a cancerous lesion, abundant cancer cells would be present, and factors that limit interpretation, such as obscuring blood or inflammation, poor fixation, or improper preparation, would be absent. Limiting factors are secondary to both preanalytic factors (eg, the patient bleeds profusely during specimen procurement, and blood obscures diagnostic slide material) and analytic factors (eg, laboratory slide preparation is too thick). The No Blame Box (Fig. 4) shows the assessed cause of a false-negative error in 40 patients who had cancer and a negative lung bronchial brush or wash cytology specimen. In the majority of cases, a contributing factor to error was poor specimen quality, and, in many cases, interpretation also was a cause. These data indicate that the traditional method of root cause analysis (ie, either sampling or interpretation) is too superficial to determine error cause. A pathologist's false-negative misinterpretation almost always is secondary to undercalling a poor-quality specimen or a specimen with only rare tumor cells. These findings were confirmed in a study by Nodit et al, who performed root cause analysis on 32 false-negative lung bronchial cytology specimens and found that in 97% of cases, specimen procurement and preparation issues were major contributing factors to error.88 In only 1 case was abundant tumor overlooked.

Figure 4.

This is the No Blame Box. The slides of 40 false-negative cytology errors were evaluated by a pathologist and assessed in terms of specimen quality and amount of tumor. Each oval represents the assessment of specimen quality and amount of tumor present for each of the 40 cases. The pathologist classified the majority of specimens as being of poor quality. Source: Raab SS, Stone CH, Wojcik EM, et al. Use of a new method in reaching consensus on the cause of cytologic-histologic correlation discrepancy. Am J Clin Pathol. 2006;126:836-842. ©2006 American Journal of Clinical Pathology ©2006 American Society for Clinical Pathology
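The No Blame Box is, in essence, a two-axis tally of cases rated on specimen quality and amount of tumor. As a minimal sketch, the bookkeeping could be done as below; the category labels and case counts here are illustrative assumptions, not the data underlying Figure 4.

```python
from collections import Counter

# Hypothetical false-negative case assessments: each case is rated on two
# ordinal axes, specimen quality and amount of tumor present on the slide.
# Labels and counts are illustrative only, not the published Figure 4 data.
cases = [
    ("poor", "rare"), ("poor", "rare"), ("poor", "moderate"),
    ("adequate", "rare"), ("poor", "absent"), ("good", "abundant"),
]

# Tally the cases into the quality-by-tumor grid.
grid = Counter(cases)
for (quality, tumor), n in sorted(grid.items()):
    print(f"quality={quality:9s} tumor={tumor:9s} n={n}")
```

A tally like this makes the central observation of the method easy to read off: most false-negative cases cluster in the poor-quality or rare-tumor cells of the grid, rather than in the good-quality, abundant-tumor cell that would indicate pure misinterpretation.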

The complex interplay between specimen quality and a pathologist's interpretation also has a role in errors of precision. The No Blame Box data indicated that pathologists made a variety of diagnoses on specimens that contained a limited amount of tumor. Some pathologists made outright diagnoses of malignancy, whereas other pathologists used indeterminate or even benign diagnoses. Pathologists had higher diagnostic reproducibility when a large amount of tumor was present on the slide.

In summary, these root cause analytic findings indicate that the majority of false-negative errors in oncologic pathology diagnostic tests occur in specimens of lower quality. In screening tests, false-positive errors also are more likely to occur in poorer quality specimens. For example, causes of rendering an indeterminate cytologic diagnosis of atypical squamous cells of undetermined significance (ASC-US), rather than a definite squamous intraepithelial lesion (SIL) or benign diagnosis, in cervical cancer Pap-test screening include the absence of a sufficient number of diagnostic cells, procurement failures, and processing failures.28

Traditionally, quality has been improved through educational initiatives that focus on the technical aspects of specimen procurement. Often these initiatives are used for training on new technologies. Most clinicians receive no feedback on specimen quality, or only learn at a later time that a specimen was diagnosed as unsatisfactory. Quality improvement initiatives focus on rapid and long-term feedback, change in process through self-assessment or self-awareness, or the use of best practice protocols. Some of these initiatives are well publicized, although others have been implemented as frontline changes at single institutions as part of a more global quality improvement program.91–94


Use Standardized Tissue Procurement Protocols

The number, type, or location of procured specimen samples may not yield a diagnosis of cancer when cancer is present. For some areas of specialty oncologic pathology testing, researchers have carefully correlated the number and location of specimen samples procured with false-negative rates. Clinical practitioners have used data from these studies to develop optimal practices of specimen procurement.

For example, the prostate biopsy for the detection of prostate cancer has evolved from a digitally guided biopsy method to a transrectal ultrasound-guided (TRUS) systematic biopsy method, which is the current standard of care.95–97 Because early stage prostate cancer is neither hypoechoic on TRUS nor palpable, random systematic biopsies are necessary for prostate cancer detection. In 1989, Hodge et al showed that 6 random, evenly distributed biopsies were optimal for prostate cancer detection, and this method became known as the traditional sextant biopsy method.98 By the mid to late 1990s, several researchers reported that the sextant biopsy strategy had a high false-negative rate because cancers in other areas of the prostate were missed. To improve cancer detection, alternative strategies of increasing the number of biopsies and sampling other locations were proposed.99–101 Although expert consensus does not exist on the optimal strategy, most experts agree that the sextant biopsy is not adequate and that extended biopsy protocols, involving the procurement of 10 to 14 cores, are recommended.95, 102–104 For men who have had negative sextant biopsies, some practitioners recommend the use of saturation biopsies consisting of more than 20 cores.95, 105–107

Clinical practitioners in other subspecialties also have developed standardized protocols for biopsy procedures based on the number and/or location of biopsy samples. These subspecialties include cervical cancer screening involving the number and location of colposcopically directed biopsies108, 109 and colon cancer detection in patients who have inflammatory bowel disease with endoscopically directed biopsies.110

Use of Checklists

Clinical practitioners have standardized other activities, connections, and pathways in tissue-procurement steps to optimize specimen quality. For example, Sidiropoulos et al standardized the steps in thyroid gland fine-needle aspiration biopsy (FNAB) sampling and slide preparation.111 In this study, before process standardization, FNABs were performed by a variety of subspecialty clinicians, with or without ultrasound guidance, using variably sized needles, different numbers of passes and smears, and different staining techniques. After standardization of these variables, the proportion of satisfactory samples increased from 67% to 89% (odds ratio [OR] = 3.82; P < .0001).
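The reported odds ratio can be approximated directly from the published proportions; the study's exact value of 3.82 was derived from the raw counts, which are not reproduced here, so the sketch below (using only the 67% and 89% satisfactory rates) yields a close but not identical figure.

```python
# Approximate the odds ratio for a satisfactory sample before vs. after
# process standardization, using only the published proportions (67% -> 89%).
# The study's exact OR of 3.82 was computed from raw counts not shown here.
pre, post = 0.67, 0.89

odds_pre = pre / (1 - pre)     # odds of a satisfactory sample before
odds_post = post / (1 - post)  # odds of a satisfactory sample after

odds_ratio = odds_post / odds_pre  # roughly 3.99, close to the reported 3.82
print(round(odds_ratio, 2))
```

The small discrepancy between this approximation and the reported value is expected: an OR computed from rounded proportions differs slightly from one computed from the underlying 2 x 2 counts.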

The use of checklists by clinical practitioners is another method of standardizing the steps in work practice.93, 112 Clinicians normally go through a stepwise procedure during Pap-test procurement: they examine the cervix, look for abnormalities, and obtain specimen material by using generally specified methods. However, with the passage of time, much of the work becomes rote, and process steps may be bypassed. The use of a checklist allows a clinician to focus attention on every step in the Pap-test procurement process. One measure of Pap-test quality is the presence of an adequately sampled transformation zone, the region where most cervical intraepithelial lesions develop. In a quality improvement study, 5 gynecologists implemented the use of a checklist, and preintervention (n = 5384 Pap tests) quality was compared with postintervention (n = 5442 Pap tests) quality.93 The clinicians showed a statistically significant decrease in Pap tests without a transformation zone component (P = .011) and a 114% increase in the detection of squamous intraepithelial lesions (P = .004) after the intervention.

Nooh et al reported that the use of quality-assurance audits of national performance standards (ie, checklists) could identify areas for improvement in the colposcopy services of teaching hospitals in the Cardiff and Vale Trust, University of Wales, Cardiff, South Wales, United Kingdom.113 The National Health Service Cervical Screening Programme monitors 10 auditable parameters, some of which directly affect the quality of diagnostic pathology services.114, 115 Among these parameters are obtaining a biopsy in greater than 90% of women who have a high-grade smear and obtaining a sufficient sample in 90% of cases in which a biopsy was taken. Through the use of checklists of these parameters, Nooh et al showed that hospitals could track clinic performance that, in turn, would affect pathology services. Studies evaluating the implementation of practice changes to improve these parameters have rarely been reported.

Use of Specimen Adequacy Assessments

The use of specimen adequacy assessments has been shown to improve specimen quality. Adequacy assessments are widely used in some fields of oncologic pathology, such as Pap testing,93, 116 but rarely used in a uniform manner in other fields of cytology or in surgical pathology services. Specimen adequacy statements have the highest level of impact when clinicians are able to alter practice on the basis of feedback. Our view, based on our experience in clinical practice, is that some pathologists use adequacy statements infrequently because of fear that clinicians will send their samples elsewhere should the inadequate rate be too high. In some fields of pathology diagnosis, adequacy statements have not been sufficiently standardized for widespread dissemination. Standardized specimen adequacy statements have been developed in other areas such as cervical-vaginal Pap testing and thyroid gland fine-needle aspiration, although few studies have examined the interobserver variability in the use of these statements.

Practitioners at individual institutions have developed adequacy statements for clinician feedback that have improved overall specimen quality. For example, in thyroid gland FNAB, root cause analysis has shown that some false-negative diagnoses are secondary to poor-quality specimens being diagnosed as benign because they do not meet the minimum criteria for an unsatisfactory designation that would prompt a repeat FNAB.89 Poller et al developed an indeterminate category of diagnosis that may be used to express uncertainty about the risk of neoplasia.91, 92 Raab et al similarly developed a category, called “nonspecific,” that was used for less than optimal specimens.94 After the introduction of these categories, both clinical groups initially reported a high frequency of use, indicating that pathologists previously had been classifying indeterminate lesions as benign. Raab et al showed that the use of the nonspecific category improved the sensitivity of FNAB compared with preimplementation practice.94

Use of Immediate Feedback Services

Although adequacy statements assist clinicians in understanding specimen quality, they usually are provided later, at a time when the clinicians may not recall the factors that might have resulted in the procurement of a less than optimal specimen. Ideally, immediate feedback on specimen quality allows the clinician to assess, at the time of the procedure, whether additional material is needed. Immediate feedback is used in frozen-section interpretation and in FNAB for patients who have lesions suspicious for cancer.91, 94

In FNAB, immediate interpretation lowers the false-negative rate.94 Researchers have shown that immediate feedback from the FNAB service to the physician performing the biopsy improves specimen quality and decreases the number of passes necessary to obtain adequate material.117 Kocjan et al reported that the British Society for Clinical Cytology Code of Practice recommends the use of immediate assessment in FNAB, along with other evidence-based recommendations for setting up FNAB services, taking samples, preparing slides, and classifying diagnoses.117

Appropriate Handling and Interpretation of Tissues

Optimal performance of the steps involved in the analytic phase of diagnostic testing and screening is important to the quality of oncologic pathology diagnosis. The main steps in the analytic phase of many anatomic pathology practices are shown in Table 2. From our practical experience, we know that the standardization of process substeps in anatomic pathology laboratories exhibits variability within and among laboratories, although this is not much different from the current state of institutional variability in the standardization of steps in other testing and screening phases, such as the preanalytic phase of tissue procurement (eg, lung biopsy or surgical excision).

Table 2. Substeps in Anatomic Pathology Process
Accessioning steps
 Specimen receipt in laboratory (transport hand-off): Hospital/transport/courier personnel hand off specimens to laboratory personnel.
 Identification check: Laboratory personnel check that specimen containers and requisition contain appropriate matching identifiers.
 Assignment of unique laboratory identifier: Specimens are assigned unique identifiers in laboratory information systems.
Gross examination steps
 Identification check: Laboratory personnel check that tissues and accompanying information match.
 Gross examination of specimen: Laboratory personnel visually examine specimens in terms of volume and other characteristics (color, lesions, etc). Descriptions are included in pathology reports.
 Sectioning of specimen: For larger specimens, laboratory personnel use a variety of cutting instruments to further examine the internal specimen characteristics.
 Preparation of tissues for processing: Tissues may be prepared in a variety of ways for further examination, including histologic examination and ancillary testing. For histologic examination, laboratory personnel prepare thin sections that are placed in tissue cassettes and fixed in formalin.
Processing steps (for histologic examination)
 Tissues processed: Tissues are placed in one of several types of processors that dehydrate the tissues.
 Identification check: Laboratory personnel visually match tissue cassettes received with records and evaluate cassette integrity following processing.
 Tissues embedded in paraffin: Laboratory personnel embed tissue in paraffin to create tissue blocks.
 Tissues thinly sectioned: Laboratory personnel use microtomes to thinly section the paraffin blocks. The thin sections are placed on glass slides.
 Slides stained: Hematoxylin and eosin is the preferred stain for most histologic tissue sections.
 Slides cover-slipped: A thin layer of glass or plastic is placed on top of the slide.
 Slides transported to pathologists: Slides from the same patient (case) are assembled and brought to the pathologist for interpretation.
Interpretation steps
 Identification check: Pathologists match the tissue slides and requisition information.
 Pathologists examine slides microscopically: Pathologists place slides under light microscopes and examine the tissues. Diagnostic interpretations are made using histologically observed criteria. Pathologists may choose to order ancillary tests, such as immunohistochemical tests.
 Pathologists prepare a report: Reports contain an interpretation based on findings from microscopic and gross examinations.
Reporting steps
 Reports sent to clinical providers: Reports are sent in a variety of ways, including mail, facsimile, and the Internet.

Pathology organizations, governmental bodies, and individuals have helped to establish baseline levels of quality through methods such as benchmarking, accreditation, and laboratory and professional competency assessment. These methods of assessing levels of quality have focused on the evaluation of some quality measures (eg, safety, timeliness) and not others (eg, equity, effectiveness). The level of governmental regulation of analytic testing and screening far exceeds that of the regulation of all other testing phases combined.

The College of American Pathologists (CAP) has a long history of establishing baseline quality practices in some analytic testing substeps.118 The CAP and other organizations accredit laboratories through inspections that involve the assessment of quality practices in multiple substeps of practice. Table 3 provides a summary of CAP-sponsored patient safety research establishing baseline safety measurements.48, 49, 121–123 These studies describe the current level of safety practice within multiple institutions (Q-PROBES) or, more recently, the level of safety practice within institutions and across time (Q-TRACKS). Because multiple institutions were involved in data collection, these studies provide a window into the variability in practice among institutions. Because the data are institutionally self-reported, root causes of the variability, including differences in institutional quality improvement activities, are difficult to ascertain. These data sources have tended to focus on the output of the interpretive step of diagnostic testing and screening; the role of earlier failures in analytic and preanalytic steps generally was not examined.

Table 3. College of American Pathologists Q-PROBES™ and Q-TRACKS Studies of Patient Safety
Year | Author | Testing Phase | Study Title | Key Finding
2008 | Raab119 | Interpretive (analytic) | The Effect of Continuous Monitoring of Cytologic-Histologic Correlation Data on Cervical Cancer Screening Performance | In this Q-TRACKS study, longer institutional participation in this yearly program was significantly associated with a higher positive predictive value of a positive Pap test (P = .01), higher Pap test sensitivity (P = .002), and higher Pap test sampling sensitivity (P = .03).
2007 | Tworek120 | Interpretive (analytic) | The Value of Monitoring Human Papillomavirus DNA Results for Papanicolaou Tests Diagnosed as Atypical Squamous Cells of Undetermined Significance. A College of American Pathologists Q-Probes Study of 68 Institutions | The median institutional percentage of human papillomavirus-positive results in women who were diagnosed with a Pap test of atypical squamous cells of undetermined significance was 46.8%, with 10th and 90th institutional percentiles of 18.0% and 64.0%, respectively.
2006 | Raab121 | Interpretive (analytic) | The Value of Monitoring Frozen-Section–Permanent-Section Correlation Data Over Time | In this Q-TRACKS study, longer institutional participation in this yearly program was associated with lower discordant frozen-section–permanent-section frequencies (P = .04) and lower deferred case rates (P = .04).
2006 | Valenstein48 | Preanalytic and analytic, all phases | Identification Errors Involving Clinical Laboratories. A College of American Pathologists Q-Probes Study of Patient and Specimen Identification Errors at 120 Institutions | Specimen-identification errors from clinical and anatomic laboratories were combined. The median number of identification errors per 1,000,000 billable tests was 390 (10th institutional percentile, 1291; 90th institutional percentile, 78). The authors estimated 160,000 adverse events per year as a result of misidentification of laboratory specimens.
2005 | Raab122 | Interpretive | Patient Safety in Anatomic Pathology: Measuring Discrepancy Frequencies and Causes | In this Q-PROBES study, using all types of secondary review policies, the mean and median laboratory (n = 74) discrepancy frequencies were 6.7% and 5.1%, respectively.
2000 | Jones123 | Postanalytic | Follow-Up of Abnormal Gynecologic Cytology. A College of American Pathologists Q-Probes Study of 16,132 Cases From 306 Institutions | The following percentages of women received post-Pap test follow-up within 1 year: 85.6% of women with a Pap test cytologic diagnosis of carcinoma, 87.2% with a diagnosis of HSIL, and 82.7% with a diagnosis of LSIL.
1999 | Nakhleh124 | Preanalytic | Necessity of Clinical Information in Surgical Pathology. A College of American Pathologists Q-Probes Study of 771,475 Surgical Pathology Cases From 341 Institutions | A total of 5594 (0.73%) cases required additional clinical information for diagnosis (10th through 90th percentile range, 3.01% to 0.08%).
1999 | Novis125 | Interpretive | Diagnostic Uncertainty Expressed in Prostate Needle Biopsies. A College of American Pathologists Q-Probes Study of 15,753 Prostate Needle Biopsies in 332 Institutions | The median rate of diagnostic uncertainty was 6% (0% at the 10th percentile and 14% at the 90th percentile).
1998 | Nakhleh126 | All analytic phases | Amended Reports in Surgical Pathology and Implications for Diagnostic Error Detection and Avoidance. A College of American Pathologists Q-Probes Study of 1,667,547 Accessioned Cases in 359 Institutions | The median institutional amended report rate was 1.46 per 1000 cases (10th institutional percentile, 0.22; 90th percentile, 4.75 per 1000 cases).
1997 | Nakhleh127 | Reporting | Mammographically Directed Breast Biopsies. A College of American Pathologists Q-Probes Study of Clinical Physician Expectations and Specimen Handling and Reporting Characteristics in 434 Institutions | In 92% of malignant cases, margin status was reported; 77% of reports contained lesion size; and 83% of reports stated tumor grade.
1995 | Jones128 | Interpretive | Rescreening in Gynecologic Cytology. Rescreening of 8096 Previous Cases for Current Low-Grade and Indeterminate-Grade Squamous Intraepithelial Lesion Diagnoses—A College of American Pathologists Q-Probes Study of 323 Laboratories | Of the rescreened Pap tests, 3.5% were reclassified as squamous intraepithelial lesion or carcinoma, and 5.9% were reclassified as atypical squamous cells of undetermined significance.
1996 | Jones49 | Interpretive (and sampling) | Cervical Biopsy-Cytology Correlation. A College of American Pathologists Q-Probes Study of 22,439 Correlations in 348 Laboratories | The sensitivity and specificity of the cytology smear (based on correlating the Pap smear with histologic follow-up results) were 89.4% and 64.8%, respectively; 6.5% of women who had a high-grade squamous intraepithelial lesion diagnosed on the Pap smear had a benign tissue biopsy diagnosis.
1996 | Gephardt129 | Interpretive | Interinstitutional Comparison of Frozen Section Consultations. A College of American Pathologists Q-Probes Study of 90,538 Cases in 461 Institutions | The overall frozen-section–permanent-section discordance rate was 1.42%; 31.8% of discordant frozen sections occurred because of misinterpretation.
1996 | Gephardt130 | Reporting | Lung Carcinoma Surgical Pathology Report Adequacy. A College of American Pathologists Q-Probes Study of Over 8300 Cases From 120 Institutions | A standard report was used in 20.8% of cases, the presence or absence of microscopic venous invasion was noted in 2.6% of cases, and the presence or absence of neoplasm at the bronchial margin was noted in 90.8% of cases.
1996 | Gephardt131 | Gross examination and processing | Extraneous Tissue in Surgical Pathology. A College of American Pathologists Q-Probes Study of 275 Institutions | A contaminant (ie, extraneous tissue) was found in 2.9% of slides using a retrospective review method (at the institutional 10th percentile, a contaminant was found in 8.8% of slides and 22.0% of cases).
1996 | Novis132 | Interpretive | Interinstitutional Comparison of Frozen Section Consultation in Small Hospitals. A College of American Pathologists Q-Probes Study of 18,532 Frozen Section Consultation Diagnoses in 233 Small Hospitals | The mean frozen-section–permanent-section discordance rate was 1.8%. Of institutions that processed ≥50 frozen sections, 5.8% had a discordance rate above 7.5%, and 24.8% had a discordance rate of 0%.
1996 | Nakhleh133 | Accessioning (and preanalytic) | Surgical Pathology Specimen Identification and Accessioning. A College of American Pathologists Q-Probes Study of 1,004,115 Cases From 417 Institutions | The median institutional identification and accessioning deficiency rate was 3.4% (of all institutional specimens), with a reported range of 0% to 98.6%.
1995 | Gephardt134 | Reporting | Interinstitutional Comparison of Bladder Carcinoma Surgical Pathology Report Adequacy. A College of American Pathologists Q-Probes Study of 7234 Bladder Biopsies and Curettings in 268 Institutions | In invasive bladder cancers, the tumor type and histologic grade were provided in 99.3% and 93.6% of cases, respectively. In 18% of invasive cancers, the presence or absence of muscularis propria was not stated.
1996 | Jones135 | Interpretive | Rescreening in Gynecologic Cytology. Rescreening of 3762 Previous Cases for Current High-Grade Squamous Intraepithelial Lesions and Carcinoma. A College of American Pathologists Q-Probes Study of Practice Patterns From 312 Institutions | For rescreened Pap smears, the overall false-negative rate was 19.7%.
1992 | Zarbo136 | Interpretive, reporting | Interinstitutional Assessment of Colorectal Carcinoma Surgical Pathology Report Adequacy. A College of American Pathologists Q-Probes Study of Practice Patterns From 532 Laboratories and 15,940 Reports | Summarizing data across all participants, 72.5% of institutions reported microscopic descriptions, and only 12.5% of institutions used standardized reporting formats.
1992 | Zarbo137 | Interpretive | Interinstitutional Database for Comparison of Performance in Lung Fine-Needle Aspiration Cytology | In 436 institutions, the sensitivity and specificity of lung fine-needle aspiration were 99% and 96%, respectively. The false-negative and false-positive interpretation frequencies were 8% and 0.8%, respectively.
1991 | Zarbo138 | Interpretive | Interinstitutional Comparison of Performance in Breast Fine-Needle Aspiration Cytology | In 294 institutions, the sensitivity of fine-needle aspiration was 97% (10,751 satisfactory aspirates); 18% of breast aspirates were unsatisfactory; the incidence of a false-negative diagnosis was 7.1%.
1991 | Zarbo139 | Interpretive | Interinstitutional Comparison of Frozen-Section Consultation | In 297 institutions, the concordance between frozen-section and permanent-section diagnoses was 96.5%; 40% of errors were determined to be secondary to misinterpretation.
1990 | Howanitz140 | Interpretive | The Accuracy of Frozen-Section Diagnosis in 34 Hospitals | Concordance between the frozen-section and the permanent-section diagnosis was 96.5%; 44% of errors were secondary to inappropriate sampling.
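Many of the accuracy figures reported in these Q-PROBES and Q-TRACKS studies derive from a standard 2 x 2 comparison of the test result against histologic follow-up. A minimal sketch of those computations, with hypothetical counts (not drawn from any study above), is:

```python
# Hypothetical 2 x 2 correlation of a screening test (eg, a Pap test)
# against histologic follow-up; all counts are illustrative only.
tp, fp = 170, 60   # test positive: disease present / disease absent
fn, tn = 20, 110   # test negative: disease present / disease absent

sensitivity = tp / (tp + fn)  # fraction of disease cases detected
specificity = tn / (tn + fp)  # fraction of disease-free cases called negative
ppv = tp / (tp + fp)          # positive predictive value
fnr = fn / (tp + fn)          # false-negative rate among disease cases

print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} "
      f"ppv={ppv:.3f} false-negative rate={fnr:.3f}")
```

Note that published studies vary in how the denominators are defined (eg, whether unsatisfactory specimens are excluded), which is one reason reported sensitivities for the same test differ across the studies in Table 3.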

Laboratories have attempted to decrease variability in oncologic testing and screening practice through technological solutions and/or process redesign focused on standardizing work steps and adopting best practices. There is a relative absence of data linking the implementation of analytic technological and/or process-redesign solutions to improved patient outcomes, or even to intermediate outcomes. Quality improvement implementations of technological solutions and process redesign often target quality measures, such as efficiency and timeliness, together with safety measures, in such a manner that error may be difficult to examine. Examples of published patient-safety quality improvement initiatives in several major steps of analytic practice are provided below.

Tissue Accessioning Steps

An error with potentially major consequences that occurs in the process of accessioning is specimen misidentification. In a Q-PROBES study, Valenstein et al reported a baseline mean institutional specimen-identification error frequency (involving both clinical and anatomic pathology laboratories) of 390 per 1,000,000 billable tests.48 Many of these errors occurred preanalytically and were detected in the accessioning step of the analytic phase; however, a proportion of these errors occurred in the accessioning step itself. By using an observational technique, Smith et al reported that poorly designed accessioning systems contributed to operator-dependent errors and near-miss events (events that, if not caught and corrected, may lead to one patient's specimen being mixed up with another patient's specimen) at frequencies of 5.5 and 0.7 per specimen, respectively.141 Zarbo and D'Angelo compared the occurrence of defects (including those related to laboratory waste and efficiency, as well as quality) before and after a quality-improvement intervention using Lean methods. They reported that, at baseline, 33% of all defects occurred in the preanalytic phase of testing and that 75% of preanalytic defects involved the accessioning step,142 a frequency similar to that reported by Raab et al.143 Process redesign of the accessioning phase has led to a reduction of process errors. After the implementation of Lean methods, Zarbo and D'Angelo reported that the frequency of accessioning-step defects (as a percentage of all preanalytic defects) decreased to 36.8%,142 similar to the findings of Smith et al.141 Process redesign has involved the implementation of continuous-flow methods and physical barriers that prevent specimen mix-ups.24, 144

Bar coding and other related technologies offer an alternative solution for reducing the frequency of specimen-identification errors. Bar-coding technologies may be implemented at the accessioning step or before preanalytic steps. Zarbo et al showed that the implementation of a bar-coding technology conjointly with workflow standardization decreased the specimen misidentification frequency by approximately 62%, although this decrease also included error reduction at steps downstream to the accessioning step.145

Tissue Gross Examination Steps

Errors that occur in the process of gross (macroscopic) examination include specimen misidentification, tissue loss, inappropriate sampling (eg, not sampling tumor or tumor margins), and inappropriate reporting of gross findings that are used in tumor staging. As described above, bar coding and related technologies contribute to the reduction of identification errors.145 Process-improvement activities also may be used to reduce identification and other types of errors in the gross examination step.

Galvis et al showed that, compared with resident trainees, who generally are inexperienced, pathologists' assistants (PAs), who are experienced gross-examination room personnel, perform at a higher level in specific macroscopic-examination tasks, such as sampling lymph nodes for downstream histologic examination.146 PAs are trained to perform tasks such as standardized gross tissue examination and postmortem examination; they are highly skilled at these tasks, and their expertise at gross examination exceeds that of many pathologists. Galvis et al showed that, in resections involving colon and breast cancer specimens, PAs retrieved, on average, 4.7 more lymph nodes per specimen than residents. For some cancer types, the number of lymph nodes sampled, as well as the number of positive nodes, is an independent predictor of outcome.147–150 Galvis et al also reported that, as measured by tissue resubmission rates, PAs more accurately sampled cancer tissues other than lymph nodes. These findings indicate that standardized protocols and experience improve the quality of gross examination in cancer care.

Surgeons use intraoperative frozen sections to determine whether tumors are present in excised or biopsy material. Intraoperative frozen sections are a form of feedback that guides immediate patient management. Howanitz et al and other authors have reported that greater than 40% of errors in intraoperative frozen-section diagnosis result from failure to identify tumors in gross specimens.121, 129, 132, 139, 140 The implementation of standardized frozen-section techniques and gross-examination expertise in the selection of tissues for frozen section are 2 proposed solutions to reduce this error type.

Tissue Processing Steps

Errors that occur in tissue processing steps include specimen misidentification, tissue loss, and inappropriate embedding or sectioning resulting in less than optimal slide preparation. Lean process changes in the histology preparation steps have been reported to improve specimen quality, although reports of quality improvement initiatives targeting specific steps are relatively rare. Platt et al examined specimen misidentification in tissue processing, which may take the form of tissue “floaters” or contaminants131: material from one patient's specimen inappropriately placed on another patient's glass slide.151 Platt et al identified staining baths as a source of tissue contamination and showed that cross-contamination of blank slides may occur at a frequency of 8%. Pathologists generally recognize contaminant tissue during the interpretation step, although, based on practice experience, contaminants may cause confusion and be a source of interpretation error.

Appropriate tissue fixation is necessary for the diagnostic interpretation of light microscopic slides and ancillary tests. Wolff et al reported that 20% of human epidermal growth factor receptor 2 (HER-2) assays performed in the field were incorrect.152 The HER-2 gene is amplified in 20% to 25% of human breast cancers,153 and HER-2 amplification and overexpression are recognized as important markers of aggressive disease and are the molecular targets of specific therapies such as trastuzumab (Herceptin; Genentech, South San Francisco, CA) and lapatinib (GlaxoSmithKline, London, UK).154 Variable tissue fixation, especially ethanol exposure, and antigen-retrieval methods may lead to incorrect HER-2 immunohistochemical results.154–156 Errors also may be caused by formalin fixation times that are too short or by the use of insufficient quantities of formalin.154–157 Other pitfalls related to immunohistochemical testing are nonspecific binding of tissue altered by crush artifact and cautery artifact, factors related to tissue procurement failures.154–160 The American Society of Clinical Oncology (ASCO)-CAP practice guidelines were published to address these issues.152, 154

Interpretation Steps

Errors that occur in the process of interpretation include specimen misidentification (switching one patient's slide for another's) and misinterpretation of the light microscopic and/or ancillary testing findings. Reporting failures are described below. Within oncologic pathology testing and screening, interpretation error has been studied far more extensively than any other error type. Diagnostic interpretation errors may be classified into the categories of slips and mistakes, and studies of cognition have helped to define the causes of some error types.

As mentioned previously, the root cause of interpretation error generally involves a cognitive failure in conjunction with upstream or system failures. For example, poor-quality specimens may be overinterpreted or underinterpreted, and latent system problems include pathologist overwork, lack of experience, and the absence of appropriate redundant systems. The current US medical-legal system often ignores system problems and focuses on individual culpability, an approach contrary to improving safety systems.

Error-reduction initiatives that target the cognitive component of interpretation error include the implementation of standardized diagnostic criteria, educational initiatives, and the development of redundant systems. As mentioned previously, Raab et al showed that the use of standardized diagnostic terms with criteria reduced errors in thyroid gland FNA specimens.94 The CAP and the American Society for Clinical Pathology (ASCP) provide continuing medical education (CME) to pathologists through glass-slide tests that are offered to participating laboratories.161, 162 Examining these glass slides in an educational setting allows pathologists to improve their skills and use of diagnostic criteria.

Secondary review is a form of redundancy and may be used to detect and/or prevent error. Secondary review is the practice in which at least 1 additional pathologist examines a case and makes a diagnostic interpretation that is compared with the original diagnostic interpretation. Secondary review often is performed in a blinded fashion and may occur before or after a case is reported or signed out.

Pathologists have not standardized the process of secondary review of diagnostic interpretation.9, 13 Secondary review policies before finalizing the interpretation include the review of all new cases of cancer, cases involving specific organ systems with high-risk diagnoses (eg, breast or prostate core-biopsy tissues), cases being examined by inexperienced pathologists, challenging cases reviewed through a departmental or group consensus conference, or a percentage of all cases. Laboratories differ in their use of presign-out secondary review practices, and clinicians often are unaware of the error-reduction redundancy methods used in specific laboratories. Some pathologists report secondary review or consensus opinions in individual cases, thereby alerting clinicians that an additional check has been performed. Discrepancy data from institutional secondary review performed before finalizing the interpretation are not well characterized, as diagnostic disagreements often are not recorded in quality assurance logs.

For cases that have been signed out, secondary review may occur on cases being presented to institutional tumor boards, through departmental quality assurance policies (such as review of a specific percentage of cases [eg, 5% review] or cases in which a prior specimen had been obtained and had a different diagnosis), by clinician request, or through external review practices (when a patient requests a second opinion or a patient is seen at a different institution for management and that institution requests review of original diagnostic slide material).

In a CAP multi-institutional study of a variety of secondary review practices, the self-reported mean and median discrepancy frequencies of 74 laboratories were 6.7% and 5.1%, respectively.122 Forty-eight percent of all discrepancies were due to a change within the same category of interpretation (eg, one tumor type was changed to another tumor type); such a change in diagnosis has a major impact on clinical management in many fields of oncology, such as lung cancer. Twenty-one percent of all discrepancies were due to a change across categories of interpretation (eg, a benign diagnosis was changed to a malignant diagnosis). By self-assessed estimates, pathologists determined that the majority of discrepancies had no effect on patient care, although 5.3% of discrepancies had a moderate or marked effect. The highest frequency of discrepancy, based on the total number of cases reviewed, occurred when the reason for review was a clinician request: 23% of all clinician-directed reviews resulted in a diagnostic discrepancy. In cancer care, this finding is not surprising, as clinicians contact pathologists when the diagnosis does not coincide with patient signs and symptoms or with other test results.

In this CAP study, the frequencies of diagnostic discrepancy, based on the total number of cases reviewed, for interdepartmental review (ie, review outside the originating department), intradepartmental review (ie, review within the originating department, such as through consensus conference), quality assurance review, and extradepartmental review were 4.8%, 7.1%, 4.3%, and 8.6%, respectively.122 Institutions are challenged by the management of a case in which a discrepancy is identified. Management depends on several factors, including the expertise of the pathologists involved, the manner in which the discrepancy was identified, and the time (eg, prereporting or postreporting) at which the discrepancy was detected. In some cases, a third expert opinion is obtained, although even experts disagree in individual cases. In the optimal situation, close communication among the pathologists and clinical practitioners is critical for care delivery.

Postsign-out secondary review recently has been used as a quality improvement tool to standardize diagnostic interpretations. Raab et al compared the effectiveness of a 5% randomized secondary review process with a focused secondary review process in which pathologists reviewed specific case types that they perceived to have a higher frequency of diagnostic discrepancy.163 The study involved a hospital system that already performed subspecialty sign-out, and the discrepancy detection rates for the random and focused review processes were 2.6% and 13.2%, respectively. The focused review process involved cases such as bladder biopsy, colon resection, and well-differentiated lipomatous neoplasm specimens. The discrepancies detected by the focused review process revealed a lack of diagnostic standardization, such as in the use of specific criteria to diagnose a well-differentiated liposarcoma. The lack of standardization generally was unrecognized before the secondary review, and its detection could be used to focus standardization efforts.

There has been little study of methods to standardize oncologic pathology diagnoses. The first step is recognizing the areas in which a problem exists, which has been done through interobserver diagnostic variability studies of variable methodologic quality.164 Kappa values of diagnostic agreement range from excellent to poor in specific areas of oncologic pathology diagnosis. In only a few areas of oncologic pathology have pathologists attempted to recalibrate diagnoses using interventions such as educational initiatives. For example, Schnitt et al showed that interpathologist diagnostic variability in the area of risk-associated ductal proliferative breast lesions could be decreased after an educational initiative teaching the diagnostic criteria of Page and colleagues.165 The work by Page and others is unusual, as entrenched diagnostic camps are the norm rather than the exception.165–172 For example, Elsheikh et al reported that unanimous agreement among 6 experts was seen in only 13% of cases of the follicular variant of papillary carcinoma of the thyroid gland.173 The ability to move beyond this low level of agreement is uncertain.
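The kappa statistic used in these variability studies corrects raw percentage agreement for the agreement two readers would reach by chance alone. A minimal sketch of Cohen's kappa for two readers follows; the six diagnoses are hypothetical, invented only to illustrate the calculation.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters (Cohen's kappa)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of cases with identical diagnoses.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical diagnoses from two pathologists on six specimens.
a = ["benign", "malignant", "benign", "atypical", "malignant", "benign"]
b = ["benign", "malignant", "atypical", "atypical", "benign", "benign"]
print(round(cohens_kappa(a, b), 3))  # -> 0.478
```

Here the two hypothetical readers agree on 4 of 6 cases (67% raw agreement), yet kappa is only about 0.48 once chance agreement is removed, which is why kappa, rather than raw agreement, is the conventional summary in interobserver studies.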

Grzybicki et al studied the use of a variety of educational initiatives to improve Pap test diagnoses.174 Greater diagnostic standardization was achieved in smaller groups through face-to-face meetings that encouraged dialogue and the questioning of established paradigms. Educational methods involving larger groups that used handouts and lectures produced less improvement. A remaining challenge is getting experts from different institutions to standardize diagnoses.

Reporting Steps

The oncologic pathology report contains information on the macroscopic and microscopic examination of the patient specimen and, for some tumors, on ancillary testing studies with prognostic or treatment significance. Errors that occur in the reporting step include the failure to report, or the incorrect reporting of, pathology findings that are used in tumor staging. In 2004, the American College of Surgeons Commission on Cancer mandated that 90% of pathology reports indicating a cancer diagnosis at participating centers contain all scientifically validated or regularly used data elements.175 The use of synoptic reports containing standardized information has become commonplace in surgical pathology reporting.176, 177 Pathologists use standardized reports offered by the CAP and by other organizations such as the Association of Directors of Anatomic and Surgical Pathology (ADASP). Variability in reporting remains, especially in the use of microscopic descriptions and comments.

Timely Follow Up of Test Results

Based primarily on anecdotal clinical information, physicians have assumed for many years that delays in cancer diagnosis result in patient harm of various degrees. Although relatively little evidence is available regarding this subject, an increasing number of studies addressing it have been reported in the literature during the last decade. The scope of many of these studies has been limited to a descriptive documentation of the number and nature of delays178–185; however, a few investigations have examined the relation between delays and patient outcomes.186–188

Studies examining delays in diagnosis for specific lesions have focused on colon cancer, primary lung tumors, breast cancer, cervical cancer, and oral cancers. Missed opportunities for an earlier diagnosis of colorectal cancer (CRC) have been shown to be relatively frequent (65% of one study cohort182), with approximately half judged to be attributable to systems factors. The majority of process failures occurred during provider-patient communication and in failures to follow up with individual patients or on abnormal diagnostic test results.183 The predictor of the longest diagnostic delays in patients developing colorectal cancer was clinical or laboratory evidence of occult bleeding.183

In addition to process factors, studies focused on identifying potential barriers to initial CRC screening have shown that physician knowledge (among both primary care and specialty physicians) of current best practices for CRC screening and surveillance, and the use of those practices, is suboptimal.180, 181 Physician knowledge of and compliance with international guidelines on acceptable diagnostic time intervals for lung cancer also have been shown to be lacking.178

The lack of physician knowledge regarding acceptable diagnostic time intervals for lung cancer is particularly significant, because lung cancer is one of the few malignancy types for which some evidence is available on the association between delays and patient outcomes. Specifically, investigators in Denmark recently reported the results of their study of diagnostic delays and the stage of lung cancer at the time of surgery.188 They found a statistically significant difference in median diagnostic delay between the 2 groups, with higher-stage patients having longer delays. In addition, patients with lower-stage disease at surgery were more apt than patients with higher-stage disease to have had their tumors discovered as incidental findings. It is important to recognize that stage is a surrogate endpoint; clinical decisions ideally should be based on how care influences survival or other clinical outcomes.

Conflicting results related to breast cancer diagnostic delays recently were published by a group of US investigators in Oregon.187 Their retrospective review was designed to determine the impact of clinician-driven delays in diagnosis on breast cancer prognostic factors and survival. Although they confirmed in their sample that higher stage correlated with decreased survival, diagnostic delays of up to 36 months did not. In addition, diagnostic delay showed no correlation with other prognostic factors, such as the number of positive lymph nodes.

An important foundational paper in this area recently was published by Gandhi et al,186 who linked missed and delayed diagnoses with diagnostic errors associated with adverse patient outcomes, process breakdowns, and contributing factors. These investigators performed a retrospective review of 307 closed malpractice claims in which patients alleged a missed or delayed diagnosis in the ambulatory setting and found that 59% involved diagnostic errors that harmed patients (30% of which resulted in death). Cancer was the diagnosis involved in 59% of the errors, and one of the most common breakdowns in the diagnostic process was the failure to create a proper follow-up plan.

Effective Communication with Pathologist Staff if Problems or Inconsistencies with Final Results or Diagnoses Exist

Evidence supporting the high value of clinician-pathologist communication for planning the best clinical management for patients with malignant disease of all stages may consistently be found in studies evaluating the impact of multidisciplinary conferences (MC) or tumor boards (TB).189–197

The contemporary approach and standard of care for patients with cancer is multimodal. Patients often receive care at a single healthcare institution that houses a multidisciplinary, comprehensive cancer center. These centers include, as part of their multimodal approach to management, regular MC or TB meetings, in which consultative discussions among all physicians on the patient-care team (including radiologists and pathologists) take place, with a patient management plan as the outcome.

The specific activities may differ among centers but generally involve review of previous radiologic and pathologic test results, review of outside materials by center specialist radiologists and pathologists, and a multidisciplinary discussion of the diagnostic and management aspects of the case. This enhanced communication among multiple specialty physicians has been shown to result in significant numbers of changes in both the type and the stage of reviewed cases. The occurrence of these discussions in cancer center settings also has been shown to positively affect patients' receipt of management best practices.66–69 In some centers, patients are given the opportunity to discuss their diagnostic testing and diagnosis with the participating pathologist(s) and radiologist(s).191

One factor that has been shown to contribute to MC- or TB-driven changes in radiologic and pathologic diagnostic test results is diagnostic interobserver variability between generalist and specialist interpretations.122–140 For breast lesions, pathologic examination variability has been shown to account for changes in approximately 8% of reviewed cases.191, 192 However, in recent studies examining the impact of MC or TB on care for patients with gynecologic, pancreatic, and breast malignancies, alterations in the final management plan were made in 35%, 24%, and 52% of cases, respectively.189–191 Therefore, during face-to-face verbal communication among physicians, clinical information also appears to be exchanged; that exchange contributes to changes in case diagnostic and prognostic information, which then may produce changes in patient management.

The studies mentioned above, as well as other previous studies, report a positive impact on oncologic diagnostic test accuracy and patient management decisions made in formal, structured forums for interdisciplinary physician-physician communication. No evidence is currently available that describes the impact of individual clinician-pathologist dialogue during routine daily practice on pathologic diagnoses or clinical management plans for patients with malignancies. On the basis of the available MC and TB evidence, one would expect an increase in diagnostic accuracy as a result of enhanced communication, even if it were limited to single clinician-pathologist dialogues. However, further studies examining the diagnostic and clinical impact of interprovider communication are needed to support this expectation.

Requesting Secondary Review of Tissue Samples when It Appears to be Critical for Obtaining a High Quality, Valid Diagnosis

As mentioned previously, extradepartmental review is the practice of secondary slide review that takes place when a patient receives treatment at an institution different from the one where the original diagnosis of cancer was made.10, 122, 198–240 This review process also is known as external second opinion or interinstitutional consultation. The reviewing institution is often a large tertiary referral center. Table 4 shows published data since 1990 on single-institution extradepartmental review; these studies have varied in the definition of a diagnostic discrepancy, the patient population, and the kinds of specimens examined.10, 122, 198–237 Most, but not all, studies include cases involving the secondary review of cancer diagnoses, and some specifically evaluated the review of cancer cases. Table 4 does not include panel review of outside diagnoses. Manion et al reported that these studies tend to examine second opinions on cases more prone to discrepancy.10 The follow-up used to adjudicate the accuracy of original and review diagnoses varies, ranging from clinical data obtained through chart review to expert opinion and additional pathology test results.

Table 4. Interinstitutional Pathology Slide Review Studies

YEAR | FIRST AUTHOR | SPECIMEN TYPE | NO. OF CASES | TOTAL DISCREPANCIES, NO. (%) | MAJOR DISCREPANCIES, NO. (%)
2009 | Wayment199 | Urologic surgical pathology | 213 | 22 (10.3) | 18 (8.5)
2009 | Thway200 | Soft tissue surgical pathology | 349 | 93 (26.6) | 38 (10.9)
2009 | Bomeisl201 | Fine needle aspiration cytopathology | 742 | 201 (27.1) | 69 (9.3)
2009 | Lueck202 | Cytopathology | 499 | 92 (18.4) | 37 (7.4)
2008 | Manion10 | Surgical pathology | 5,629 | 639 (11.3) | 132 (2.3)
2007 | Tan203 | Thyroid fine needle aspiration cytopathology | 147 | 27 (18.4) | 8 (5.6)
2007 | Thomas204 | Prostate surgical pathology | 1,323 | 334 (25.2) | 196 (14.8)
2005 | Raab13 | Surgical pathology and cytopathology | 1,069 | 92 (8.6) | 8 (0.7)
2005 | Hamady205 | Thyroid cancer surgical pathology | 66 | 12 (18.2) | 5 (7.6)
2004 | Tsung206 | Surgical pathology | 715 | 42 (5.9) | 16 (2.2)
2004 | Nguyen207 | Prostate surgical pathology (Gleason scoring) | 602 | 265 (44) | 55 (9.1)
2003 | Kronz208 | Prostate needle biopsy | 3,251 | 87 (2.7) | 15 (0.5)
2003 | Weir209 | Surgical pathology and cytopathology | 1,522 | 68 (6.8) | 37 (2.4)
2002 | McGinnis210 | Dermatopathology (pigmented lesions) | 5,136 | 559 (10.9) | 120 (2.3)
2002 | Wetherington211 | Surgical pathology | 6,678 | 213 (3.2) | 213 (3.2)
2002 | Staradub212 | Breast cancer | 346 | 278 (80) | 27 (7.8)
2002 | Vivino213 | Labial salivary gland | 60 | 32 (53.3) | 32 (53.3)
2002 | Layfield214 | Cytopathology | 146 | 24 (16.4) | 11 (7.5)
2002 | Westra215 | Head and neck surgical pathology | 814 | 54 (6.6) | 21 (2.6)
2001 | Arbiser216 | Soft tissue surgical pathology | 266 | 85 (31.9) | 65 (24.4)
2001 | Coblentz217 | Bladder biopsy and transurethral resections | 131 | 24 (18.3) | 24 (18.3)
2001 | Hahm218 | Gastrointestinal and hepatic surgical pathology | 194 | 50 (25.8) | 14 (7.2)
2001 | Baloch219 | Cytopathology | 183 | 110 (60.1) | 28 (15.3)
2001 | Murphy220 | Urologic surgical pathology | 150 | 29 (19.3) | 14 (9.3)
2000 | Chafe221 | Gynecologic surgical pathology | 599 | 200 (33.3) | 63 (10.5)
2000 | Aldape222 | Neuropathology | 457 | 105 (23.0) | 17 (3.7)
1999 | Kronz223 | Surgical pathology | 6,171 | 86 (1.4) | 86 (1.4)
1999 | Selman224 | Gynecologic surgical pathology | 295 | 50 (16.9) | 14 (4.8)
1999 | Lee225 | Testicular surgical pathology | 208 | - | 12 (5.8)
1999 | Chan226 | Gynecologic surgical pathology and cytopathology | 569 | 108 (19.0) | 37 (6.5)
1998 | Wurzer227 | Prostate biopsies (Gleason scoring) | 538 | 212 (39.4) | 69 (12.8)
1998 | Jacques228 | Gynecologic surgical pathology (endometrial curettings and biopsy) | 182 | 43 (23.6) | 43 (23.6)
1998 | Jacques229 | Gynecologic surgical pathology (hysterectomy) | 76 | 24 (31.6) | 24 (31.6)
1998 | Santoso230 | Gynecologic surgical pathology | 720 | 119 (16.5) | 15 (2.1)
1997 | Sharkey231 | Urologic surgical pathology and cytopathology | 376 | 133 (35.3) | 133 (35.3)
1997 | Bruner232 | Neuropathology | 500 | 214 (42.8) | 140 (28.0)
1996 | Epstein233 | Prostate surgical pathology | 535 | 7 (1.3) | 7 (1.3)
1995 | Prescott234 | Surgical pathology | 227 | 53 (23.3) | 19 (8.3)
1995 | Abt235 | Surgical pathology and cytopathology | 777 | 71 (9.1) | 45 (5.8)
1995 | Scott236 | Neuropathology | 680 | 74 (10.9) | 74 (10.9)
1993 | Segelov237 | Testicular surgical pathology | 87 | 28 (32.0) | 10 (11.4)

The bias in these extradepartmental review studies clearly favors the accuracy of the reviewing institution, as cases from the reviewing institution are never reviewed to measure diagnostic variability. Nonetheless, these studies report a range of secondary review discrepancy frequencies, and some classify diagnostic discrepancies as major or minor. On the basis of the published data in Table 4, total discrepant cases ranged from 1.3% to 60.1%, and discrepant cases classified as major ranged from 0.7% to 53.3%.10, 122, 198–237 The studies cover a wide range of specimens, with some examining both surgical pathology and cytopathology cases and others examining a small subset of cases (eg, labial salivary gland). Summing across all studies in Table 4, the overall discrepant case rate was 11.4%, with a major discrepancy rate of 4.7%.10, 122, 198–237 Major discrepancies generally occurred in cases in which patient management was affected, although changes in diagnosis occurred with both major and minor discrepancies. Some authors attributed a high discrepancy frequency to the failure of pathologists to use established histologic criteria.
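The summed rates above are case-weighted: total discrepant cases divided by total cases reviewed, which is not the same as the unweighted mean of the per-study percentages. A short sketch of the calculation using three rows from Table 4 (reproducing the full 11.4% and 4.7% figures would require summing all of the studies):

```python
# Pooled discrepancy rate = total discrepant cases / total cases reviewed,
# illustrated with three rows from Table 4 (Manion, Thomas, Raab).
studies = [
    # (study, cases reviewed, total discrepancies, major discrepancies)
    ("Manion 2008", 5629, 639, 132),
    ("Thomas 2007", 1323, 334, 196),
    ("Raab 2005",   1069,  92,   8),
]
cases = sum(s[1] for s in studies)
total = sum(s[2] for s in studies)
major = sum(s[3] for s in studies)
print(f"pooled total discrepancy rate: {100 * total / cases:.1f}%")  # 13.3%
print(f"pooled major discrepancy rate: {100 * major / cases:.1f}%")  # 4.2%
```

Note that the pooled rate differs from the simple mean of the three per-study rates (11.3%, 25.2%, and 8.6%; mean, 15.0%) because large series such as that of Manion et al dominate a case-weighted estimate.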

Should clinicians ask for patient slide material to be reviewed when these patients are treated at a different institution? Given the current lack of diagnostic standardization and the fact that treatment occurs at the local level, the answer most authors give is yes. The Association of Directors of Anatomic and Surgical Pathology (ADASP) recommended institutional consultation as a standard of practice.241 As a follow-up to a secondary review article by Kronz et al, Time magazine recommended a second opinion for all surgical pathology diagnoses of malignancy.242 In an article published in 2000, Gupta and Layfield reported that only 50% of institutions followed ADASP guidelines for secondary review and that 38% encouraged second review.243

Kronz and Westra wrote that for some subspecialties, such as head and neck pathology, secondary review makes good clinical and risk management sense for 3 reasons.198 The authors argued that 1) the pathology of some subspecialties is diverse, complex, and difficult; 2) consultation is an essential component of multidisciplinary patient management; and 3) as treatment also has become diverse and complex, large referral hospitals contain staff with more comprehensive pathology diagnostic expertise.


Conclusions

Assessment of the quality of oncologic pathology testing currently focuses on the evaluation of the testing steps involved in ordering, procuring, processing, interpreting, and reporting, and in decision making based on pathology test results. Most errors in cancer diagnosis are related to several factors and not simply to a pathologist's interpretation. Clinical practitioners may improve the safety of oncologic pathology testing services by facilitating communication between clinical services and pathology laboratories at all levels of testing.

The CDC has sponsored several initiatives in the past decade to investigate the state of laboratory medicine with an emphasis on patient safety.244, 245 In September 2007, the CDC convened the 2007 Institute on Critical Issues in Health Laboratory Practice: Managing for Better Health to develop an action plan for the immediate and long-term future. At the 2007 Institute, experts in laboratory medicine practice, clinicians, payers, health services researchers, and patient representatives identified gaps in the current quality of laboratory medicine. This identification is an early step toward promoting research to fill these gaps and informing laboratory medicine stakeholders about best practices. These experts identified gaps in the current knowledge of best patient safety practices in laboratory/hospital information system integration, the standardization of error measures, the effect of workforce vacancy rates on safety, communication methods at handoff points, the longitudinal tracking of safety measures, and the adoption of quality improvement systems currently used in business and industry.245 Research in these areas, as well as in others such as subspecialty pathology practice models, training, and laboratory organization, is needed to improve the state of safety in all phases of oncologic pathology diagnostic testing and screening practice.