
Keywords:

  • cervical cancer;
  • developing countries;
  • automation-assisted screening;
  • manual liquid-based cytology;
  • H&E stain;
  • cervical cell segmentation;
  • cervical cell classification

Abstract


Current automation-assisted technologies for screening cervical cancer mainly rely on automated liquid-based cytology slides with proprietary stain. This approach is not cost-efficient for developing countries. In this article, we propose the first automation-assisted system to screen cervical cancer in manual liquid-based cytology (MLBC) slides with hematoxylin and eosin (H&E) stain, which is inexpensive and more applicable in developing countries. This system consists of three main modules: image acquisition, cell segmentation, and cell classification. First, an autofocusing scheme is proposed that finds the global maximum of the focus curve by iteratively comparing image qualities at specific locations. On the autofocused images, multiway graph cut (GC) is performed globally on the a*-channel enhanced image to obtain the cytoplasm segmentation. The nuclei, especially abnormal nuclei, are robustly segmented by using GC adaptively and locally. Two concave-based approaches are integrated to split touching nuclei. To classify the segmented cells, features are selected and preprocessed to improve the sensitivity, and contextual and cytoplasmic information are introduced to improve the specificity. Experiments on 26 consecutive image stacks demonstrated a dynamic autofocusing accuracy of 2.06 μm. On 21 cervical cell images with nonideal imaging conditions and pathology, our segmentation method achieved 93% accuracy for cytoplasm and an 87.3% F-measure for nuclei, both outperforming state-of-the-art methods in accuracy. Additional clinical trials showed that both the sensitivity (88.1%) and the specificity (100%) of our system are satisfactorily high. These results prove the feasibility of automation-assisted cervical cancer screening in MLBC slides with H&E stain, which is highly desirable in community health centers and small hospitals. © 2013 International Society for Advancement of Cytometry


Introduction


Cervical cancer is the third most common cancer in women, with an estimated 529,000 new cases and 275,000 deaths worldwide in 2008 [1]. More than 90% of cervical cancers are caused by human papillomavirus infections, which cause changes to epithelial cells before the development of many types of cervical cancer. Therefore, in developed countries, screening by cytology is the most common approach to prevent cervical cancer at a precancerous stage. However, population-wide screening is unavailable in low-resource regions and is suboptimal in developing countries [2]. As a result, more than 85% of new cases and about 88% of deaths from cervical cancer occur in developing and undeveloped countries [1].

Screening of cervical cytology slides is “very labor intensive and demands that the cytotechnologist be capable of high levels of concentration for extended periods” [3]. Automation-assisted reading techniques have the potential to reduce screening errors and increase availability, especially in developing countries. Currently, two Food and Drug Administration (FDA)-approved automated reading systems [4, 5] are commercially available. However, a large, prospective randomized trial found that although these systems can increase productivity, their sensitivity is lower than that of manual reading [6, 7].

Our goal is to explore a cost-effective and highly sensitive screening technique that is more likely to reach populations in developing countries. The proposed technique combines a considered choice of preparation and staining methods, automated image acquisition, and assisted diagnosis.

Several different methods are clinically accepted for the preparation of cytological slides for cervical cancer screening. The sample may be smeared on a slide directly after collection, or it may be prepared using liquid-based cytology (LBC). LBC may be manual (MLBC), which requires only a centrifuge, a vortex mixer, and a pipettor [8], or automated (ALBC), which requires a commercial machine. Pap smears have been used for many years and are clinically well accepted. However, cells are better dispersed in an LBC slide, making it easier for automated image analysis to identify individual cells [9]. Recent studies suggest that the performance of these three methods is similar [10, 11].

Three stains are widely used for cervical cancer screening: the Papanicolaou (Pap) stain, proprietary stains (provided by commercial machines), and the hematoxylin and eosin (H&E) stain. Compared to the Pap stain, the H&E stain is much easier to prepare, lower in cost, and more consistent, and is therefore used most often in histopathology but also in cytopathology. Proprietary stains are typically more costly than the other two. Figure 1 shows typical images from a conventional smear with Pap stain, ALBC with proprietary stain, and MLBC with H&E stain. The cells in Fig. 1(c) are better dispersed than in Fig. 1(a), but with more artifacts than in Fig. 1(b).

In choosing a specific slide preparation method and staining technique, we attempted to minimize overall cost and optimize clinical outcome. Although ALBC has its advantages, MLBC does not require advanced skill and experience and can be reliably performed by a technician after simple training. Therefore, in a community health center (CHC) setting, the hardware cost of ALBC is not justified. Our informal survey showed that many CHCs and small hospitals are well prepared to perform MLBC. Finally, the H&E stain is the least expensive and simplest to use of the three stains, and has some clinical acceptance. Therefore, in our study, we chose MLBC with H&E stain.

Previous Works in Cervical Cell Segmentation

A variety of segmentation methods for cervical cells have been proposed in recent years. The majority of cytoplasm segmentation methods use one or more of the following techniques: K-means [12, 13], edge detection [14], thresholding [15, 16], and active contours [13, 17]. Most of these works are designed for images of isolated cells, especially those in the Herlev data set [18]. For segmentation in images containing multiple cells, thresholding [15, 16, 19] and level set [17] techniques have been used. For nucleus segmentation, related works can be divided into three groups: 1) single-nucleus segmentation, which utilizes contour or shape information via active contour models [13, 20], parametric fitting [21], and difference maximization [12]; 2) multiple-nuclei segmentation, which uses thresholding [17, 22], the Hough transform [15], and morphology (watershed) techniques [16, 23, 24]; 3) touching-nuclei splitting, which uses morphological erosion [22], Bayesian classification [25], and active shape models [26].

Two of the aforementioned methods [15, 16] can achieve segmentation of cytoplasm, multiple nuclei, and touching nuclei. However, these methods were developed on healthy cells rather than a mix of healthy and pathological cells. Since the size, shape, and chromatin of abnormal nuclei vary significantly, further development is needed to address the typical variations encountered in a clinical setting. So far we have found only one previous study of automated segmentation of abnormal nuclei [27], in which only very limited data were reported.

More recently, segmentation of nuclei in histological images has used adaptive thresholding combined with an active contour model [28]; this automated method is comparable with manual delineation in segmentation accuracy. The graph cut (GC) approach [29] has also become highly attractive for cell nucleus segmentation. The binarization of nuclei based on GC is addressed in Ref. [30], with results more accurate than global thresholding. Prior knowledge such as shape [31], manual annotations, and local image features [32] can be incorporated into the GC framework to allow more robust segmentation.

Previous Works in Cervical Cell Classification

Several cervical cell classification methods have been reported. These methods can be divided into two categories according to their main tasks. The first task attempts to eliminate noncellular artifacts such as debris and inflammatory cell clusters. Typically, shape, size, and intensity features [16, 24, 33, 34] are exploited to identify artifacts. For classifier training, linear discriminant analysis [33], maximum likelihood [34], and support vector machines (SVMs) [16, 24] are used. A feature thresholding technique is used in Refs. [22] and [35] for artifact elimination in Pap smears. The second task aims to classify cells as abnormal or normal. Cell feature extraction is the major area of research for this task. A wide range of feature types has emerged [3, 22, 36, 37], including optical density, size, shape, texture, contextual information, and whole-image measurements. An important contribution is Ref. [18], where a benchmark data set and 20 features were constructed for cervical cells. On this data set, the most informative features are the nucleus/cytoplasm (N/C) ratio, the brightness of nuclei and cytoplasm, and the longest linear dimension and area of nuclei, as demonstrated in Ref. [38] using a genetic algorithm combined with a nearest neighbor classifier. Recently, Ref. [39] proposed classifying segmented cell images using a linear plot of two-dimensional (2-D) Fourier and logarithmic transforms.

Most studies assume that accurate segmentations of cytoplasm and nuclei are already available. A number of studies investigated classification schemes that either refine classification results by cell patch matching [40] or directly classify abnormal cells without segmentation [41]. Recently, cell classification based only on nuclear features has been studied [42].

Our Contributions

In this article, we propose the first integrated, automation-assisted system for cervical cancer screening on H&E stained MLBC slides. The automatic selection of abnormal cells from cervical cytology specimens comprises three aspects: image acquisition, cell segmentation, and cell classification. An autofocusing method which rejects the coverslip and successfully finds the actual focal plane is introduced. A global and local scheme is proposed to segment both healthy and abnormal cervical cells. A classification framework is designed to improve the sensitivity of abnormal cell recognition and the specificity of normal cell recognition. The specific contributions of this work are:

  • Gaussian filter is used as the focus function, and a searching method based on iterative comparison of image qualities of specific locations is proposed to find the global maximum of the focus curve.
  • The global multiway GC [29] on the a*-channel enhanced images can obtain effective cytoplasm segmentation when image histograms present a nonbimodal distribution, whereas the local adaptive GC (LAGC) method can obtain accurate nucleus segmentation by combining intensity, texture, boundary, and region information.
  • Feature selection and preprocessing techniques are used to improve the sensitivity, and features which capture contextual and cytoplasmic information are introduced to improve the specificity.

Materials and Methods


Clinical Data Collection

All images used in this study were acquired using an Olympus BX41 microscope equipped with 20× objective (Olympus America, Central Valley, PA), Jenoptik ProgRes CF Color 1.4 Megapixel Camera (Jenoptik Optical Systems, Jena, Germany), and MS300 motorized stage (NJRGB, Nanjing, China). Image specifications were 24 bit RGB channels with resolution of 1,360 × 1,024 pixels.

This study included 200 cervical slides from women aged 22–64 years, collected from the Department of Pathology, People's Hospital of Nanshan District, Shenzhen, China, between January 2010 and October 2011. Among them, 53 slides were confirmed by biopsy as cervical intraepithelial neoplasia (CIN). The other 147 slides were negative for intraepithelial lesion and malignancy (NILM) cases. In addition, a number of NILM slides were collected as backup. All slides were prepared using the MLBC technique [8] and stained with H&E. The use of human material was approved by the Ethics Committee of People's Hospital of Nanshan District.

Table 1 lists the details. We randomly selected half of the CIN slides (16) and 18 NILM slides from 2010 to form the training set. For each of the 16 CIN slides, more than 5 fields, each containing at least one abnormal cell, were imaged. For the 18 NILM slides, more than 20 random fields were imaged. Abnormal cells that demonstrated a complete and clear structure within the focal plane were annotated by two experienced pathologists (Path.1 and Path.2); in case of disagreement, the cell was discarded. Normal cells were annotated by an engineer (Eng.1), who had studied cervical cytology diagnosis for one month in People's Hospital of Nanshan District, and were further validated by Path.1. These images were used for classifier training, segmentation algorithm evaluation, and parameter tuning (see the Methods and Results sections). The remaining 16 CIN and 10 NILM slides from 2010 were used in prescreening Trial 1. The 21 CIN and 22 NILM slides from 2011 were used in prescreening Trial 2. The 97 slides randomly selected from the NILM results in October 2011 were used for the rescreening trial.

Table 1. Details of the slides used in this work.

  Data set                   NILM   CIN1   CIN2   CIN3   CA
  Training set                 18      9      3      4    0
  Prescreening test set 1      10      7      4      5    0
  Prescreening test set 2      22      3     11      5    2
  Rescreening test set         97      –      –      –    –

  Note: To be more succinct, CIN2+ and CIN3+ were categorized as CIN2 and CIN3, respectively.

System Overview

Our framework consists of three main modules: image acquisition, cell segmentation, and cell classification. The image acquisition module controls a motorized microscope equipped with a digital camera to automatically acquire image sequences from cervical slides, and simultaneously performs cell segmentation and classification in order to detect abnormal cells in these images for targeted reading by pathologists.

Autofocusing

Image quality is important in our system because automatic segmentation and feature extraction algorithms cannot accurately characterize cervical cells without focused images. Generally, an autofocusing method consists of two aspects: choosing a focusing objective function and searching for the maximum of that function.

To choose a focusing function, we compared the six most accurate focusing functions listed in Refs. [43] and [44]: variance (VAR), Gaussian filter (GS03), absolute Tenenbaum gradient (ATEN), normalized variance (NVAR), Vollath-5 (VOL5), and Tenenbaum gradient (TEN). Of the six, the Gaussian filter is chosen as the focusing objective function because, in our empirical comparison, it has the smallest local maxima and drops most rapidly away from the focus. This choice accords with Ref. [44], where GS03 has the second lowest mean error, and with Ref. [43], where GS03 has the highest static accuracy and second highest dynamic accuracy. On our image stacks, all of these functions produce at least two local maxima (a bimodal focus curve): one maximum corresponds to the optimal focal plane, and the other corresponds to the surface of the coverslip, on which there are many black objects.
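Two of the compared focus measures (VAR and TEN) are simple enough to state directly; the following is an illustrative pure-NumPy sketch, not the system's implementation (the exact GS03 filter follows Refs. [43] and [44] and is omitted here):

```python
import numpy as np

def focus_variance(img):
    """VAR focus measure: variance of the image intensities."""
    return float(np.var(img))

def focus_tenengrad(img):
    """TEN focus measure: sum of squared Sobel gradient magnitudes
    (computed on the valid interior region only)."""
    img = img.astype(float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return float(np.sum(gx ** 2 + gy ** 2))
```

An in-focus field yields larger values of both measures than a defocused one, which is what the search procedure below exploits.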

Because the focus curve is not unimodal, and in particular because its second highest peak corresponds to the coverslip, traditional search algorithms such as Fibonacci search yield poor accuracy [43]. To stay away from the coverslip surface and find the optimal focal plane, an effective algorithm is designed, as described in Table 2. First, the initial focusing orientation is estimated. Then the proposed three-position and middle-position (T&M) localization procedure is performed, as illustrated in Figure 2. In the T step, the algorithm drives the z-motor of the microscope along the focusing orientation with a larger step distl to find the turning range (L, R) of the focus curve; the focal plane lies within this range. In the M step, the algorithm iteratively compares the focusing function value at the middle position pm of the current range with that at a position dists away from pm, until the two positions are close enough. The parameter distl is set to 120, slightly larger than the length of the red line in Figure 2a (this setting ensures that the searched turning range contains the maximal peak). The parameter dists is set to 1, the minimal motion distance of the z-motor. Only the positive-orientation focusing method is described here; the negative-orientation method is similar.


Figure 1. Cervical cell images from (a) conventional smear with Pap stain, (b) ALBC with proprietary stain, (c) MLBC with H&E stain. [Color figure can be viewed in the online issue which is available at wileyonlinelibrary.com]


Table 2. The proposed autofocusing algorithm.
Given: the initial position p1.
1. Estimate the focusing orientation by comparing f(p0) with f(p2), where p0 = p1 − distl and p2 = p1 + distl.
2. T step: search the turning range
   p2 ← p1, p3 ← p2 + distl
   while f(p2) < f(p3)
     p1 ← p2, p2 ← p3, p3 ← p3 + distl
   end while
   The turning range is: (L, R) ← (p1, p3)
3. M step: search the focal plane
   while R − L > 2·dists
     pm ← (L + R) / 2
     if f(pm) < f(pm + dists)
       L ← pm
     else if f(pm) = f(pm + dists)
       the focal plane is pm; stop
     else
       R ← pm + dists
   end while
Output: argmax_θ f(θ), where θ ∈ {L, R, pm}.
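The T&M procedure can be sketched in a few lines of Python. This is a minimal illustration, assuming the start position lies before the focal peak (positive orientation) and that `f` returns the focus measure at a given stage position:

```python
def tm_search(f, p1, dist_l=120, dist_s=1):
    """Sketch of the T&M search. f: focus measure as a function of stage
    position; p1: start position, assumed to lie before the focal peak
    (positive focusing orientation); dist_l/dist_s: coarse/fine steps."""
    # T step: march with the large step until the focus value turns downward.
    p2, p3 = p1, p1 + dist_l
    while f(p2) < f(p3):
        p1, p2, p3 = p2, p3, p3 + dist_l
    L, R = p1, p3  # turning range containing the peak
    # M step: shrink the range by comparing the midpoint with a nearby probe.
    pm = (L + R) // 2
    while R - L > 2 * dist_s:
        pm = (L + R) // 2
        if f(pm) < f(pm + dist_s):
            L = pm
        elif f(pm) == f(pm + dist_s):
            return pm
        else:
            R = pm + dist_s
    return max((L, R, pm), key=f)
```

On a unimodal focus curve the returned position lands within a couple of fine steps of the true peak; on the bimodal curves discussed above, the T step's monotone march is what keeps the search on the correct (focal-plane) peak.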

Cytoplasm Segmentation

After obtaining the high-quality cervical cell image with autofocusing, a multiway GC approach [29] is applied to separate the cytoplasm from the background. Specifically, the a* channel in CIE LAB color space is used for preprocessing. Then, initial segments are generated automatically by using Otsu's multiple thresholding algorithm [45, 46] on the preprocessed image. Finally, the segmentation is refined by the multiway GC method [29]. In the rest of this section, we introduce the details of cytoplasm segmentation.

Preprocessing

In actual cervical cell images, poor contrast, nonuniform staining, and noise will likely hinder cell segmentation. To enhance the contrast, we extract the a* channel in the CIE LAB color space. The use of the a* channel is based on the following reason: in H&E staining, cell regions are colored with tones of red and background regions remain colorless. This inspired us to use color to discriminate cells from background. The a* channel represents the change between red and green and is able to embody this difference. Hence, in the a* channel image, cells are obviously brighter than the background. To further enhance contrast, the a* channel image is stretched linearly from its original intensity range [Imin, Imax] to the range [0, 255]. A common and effective technique for handling noise is the median filter. It was demonstrated by Tsai et al. [12] that the median filter can eliminate both impulse and Gaussian noise in cervical smear images. In our work, a 5 × 5 median filter is applied to the contrast-enhanced images to discard noise. Figure 3b shows the result of the above preprocessing.
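The stretch and filtering steps can be sketched as follows. This is a pure-NumPy illustration: the a* channel itself would come from a standard RGB-to-CIE-LAB conversion (e.g., `skimage.color.rgb2lab`), and while the paper uses a 5 × 5 median window, this sketch uses 3 × 3 with wrapped borders for brevity:

```python
import numpy as np

def linear_stretch(img):
    """Linearly stretch intensities from [Imin, Imax] to [0, 255]."""
    img = img.astype(float)
    lo, hi = img.min(), img.max()
    if hi == lo:
        return np.zeros_like(img)
    return (img - lo) / (hi - lo) * 255.0

def median3x3(img):
    """3x3 median filter via stacked circular shifts (border pixels see
    wrapped neighbors for simplicity; the paper uses a 5x5 window)."""
    shifts = [np.roll(np.roll(img, dy, axis=0), dx, axis=1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    return np.median(np.stack(shifts), axis=0)
```

The median filter suppresses isolated impulse pixels while leaving uniform regions untouched, which is why it is preferred over linear smoothing here.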


Figure 2. Schematic of the proposed T&M algorithm. (a) T step, from p1 to p2 to p3. The blue point highlights the initial position. (b) M step, from pm to pm + dists. [Color figure can be viewed in the online issue which is available at wileyonlinelibrary.com]


Global segmentation (multiway GC)

Although the difference between cell and background is significantly enhanced by preprocessing, not all the image histograms present bimodal distribution due to the complexity of our images which include inhomogeneous illumination, nonuniform staining, and the presence of inflammation cells and debris. Therefore, a single threshold cannot successfully separate the cervical cell from the background. For example, some cytoplasm with brighter intensity tends to be classified into the background class. With this in mind, we propose to use the multiway GC approach. The output image contains four classes. The class with the lowest mean intensity (corresponding to the intensity in LAB color space) is the background. The other three classes, which contain cytoplasm, nuclei, inflammation cells, debris, etc. are integrated as foreground (cell region). Our approach is summarized below:

  • Given an image with intensity values from Imin to Imax, we compute three optimal thresholds t1, t2, and t3, with Imin ≤ t1 < t2 < t3 ≤ Imax, by applying Otsu's multiple (three-threshold) thresholding algorithm [46]. Then, the mean intensity values, c1, c2, c3, and c4, of the four classes (C1, C2, C3, and C4) are computed, where C1 = [Imin, …, t1], C2 = [t1 + 1, …, t2], C3 = [t2 + 1, …, t3], and C4 = [t3 + 1, …, Imax].
  • With c1, c2, c3, and c4, we construct a four-terminal graph, and the Potts model energy function Ep(f) as Eq. (1). Boykov et al. [29] had demonstrated that the minimization of Ep(f) can resolve the multiway cut problem,
    • E_P(f) = Σ_{p∈P} D_p(f_p) + Σ_{{p,q}∈N} u_{p,q} · T(f_p ≠ f_q)    (1)
    where f denotes the labeling, p indexes pixels, N is the set of adjacent pixel pairs, {p, q} represents a pair of pixels, and T(·) is 1 when the condition inside the parentheses is true and 0 otherwise. The first term of Eq. (1) is the data term, which is determined by the connection energies (t-links) between each pixel and each terminal of the graph. In our work, Dp(fp) is assigned as (c_fp − Ip)², where Ip is the intensity value of pixel p and c_fp is the mean intensity of the class with label fp. The second term in Eq. (1) is the pixel continuity term, which is determined by the connection energies (n-links) between neighboring pixels in the multiple-terminal graph, as defined by the Potts model parameter in Ref. [29].
  • Finally, by using an implementation of the α-expansion and fast max-flow/min-cut algorithms introduced in Refs. [29] and [47], the energy function of Eq. (1) is optimized, and the global optimal pixel category label f is obtained. Consequently, we can redivide each pixel to its new class.

Finally, the segmented binary image (after merging the three foreground subclasses) is processed by morphological opening. Other methods, such as K-means clustering [48], could replace Otsu's multiple thresholding; we chose Otsu's method because it is more efficient and showed better segmentation results in our empirical evaluation. Figures 3c–3e show examples of cytoplasm segmentation. By examining the center part of Figure 3f, we see that the segmentation is not affected by the bright illumination and dirt. However, we do not attempt to split overlapping cytoplasm, because reliable delineation of the cytoplasm boundary of each cell in the presence of heavily overlapping cells is unrealistic even for a human expert.
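For illustration, a brute-force version of Otsu's multi-level thresholding can be written as below. This is a sketch, not the implementation used in the system: intensities are quantized to a reduced number of bins so that exhaustive search over threshold triples stays cheap, and the thresholds are then mapped back to the 0–255 range:

```python
import numpy as np
from itertools import combinations

def otsu_multithreshold(img, n_thresh=3, levels=64):
    """Brute-force multi-level Otsu sketch: quantize to `levels` bins and
    pick the thresholds maximizing the between-class variance
    (via the equivalent criterion sum_k w_k * mu_k^2)."""
    q = np.clip((img.astype(float) / 256.0 * levels).astype(int), 0, levels - 1)
    hist = np.bincount(q.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()
    bins = np.arange(levels)
    best, best_t = -1.0, None
    for ts in combinations(range(1, levels), n_thresh):
        edges = (0,) + ts + (levels,)
        score = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            w = p[lo:hi].sum()
            if w > 0:
                mu = (p[lo:hi] * bins[lo:hi]).sum() / w
                score += w * mu * mu  # between-class variance up to a constant
        if score > best:
            best, best_t = score, ts
    # map quantized thresholds back to the 0-255 range
    return tuple(int(t * 256 / levels) for t in best_t)
```

With three thresholds this yields the four intensity classes used above; the class with the lowest mean then plays the role of the background terminal in the multiway GC.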

Nucleus Segmentation

The nucleus segmentation consists of a nucleus binarization algorithm and a touching-nuclei splitting method. For the nucleus binarization, we propose to use GC approach adaptively and locally, denoted as LAGC. LAGC first roughly detects the nuclei and then refines each nucleus within its local neighborhood. For the touching-nuclei splitting, we initially estimate whether a connected component of binarization result is a touching-nuclei clump or not, and then perform splitting based on concave points.

Preprocessing

Before applying the nucleus segmentation algorithm, a practical problem should be considered: if the cytoplasm is deeply stained or inflammation cells cluster together, there may be many spurious segments, which are often the major challenge in cervical cytology automation [33, 34]. To enhance the contrast between nuclei and cytoplasm, the original color image is preprocessed using the procedure designed in our previous work [49]. Briefly, we convert the original RGB color space to HSV and extract the V channel. The V channel image is then enhanced through linear stretching. Finally, a median filter with a 5 × 5 pixel mask is applied to discard noise. After the preprocessing, the nuclei become much darker, whereas the cytoplasm becomes much brighter.

Local segmentation (nucleus binarization)

In order to segment a mixture of healthy and pathological cells in an image, we propose a GC based nucleus binarization method which works in an adaptive and local paradigm, as illustrated in Figure 4. The adaptive stage roughly detects each nucleus region (white and gray objects in Fig. 4b) by applying an efficient adaptive thresholding algorithm [50, 51]. This adaptive detection aims to overcome the influence of nonuniform staining and illumination. The local stage refines each rough segment within its local neighborhood (blue rectangles in Fig. 4b) by using a Poisson distribution based GC. The goal of this local refinement is to extract nuclei with nonuniform chromatin distribution and with low intensity difference to the surrounding cytoplasm.

  • Adaptive stage. Due to the inhomogeneous illumination and nonuniform staining, it is very hard to define a global threshold without either missing some of the nuclei or segmenting non-nuclei parts. Therefore, adaptive thresholding is desirable. Trier and Jain [52] evaluated 11 thresholding algorithms on document images, and concluded that the Niblack algorithm performed the best. Niblack algorithm has also found applications in the segmentation of cell nuclei [28]. Sauvola and Pietikainen [50] proposed a new adaptive binarization method, and the benchmarking results showed that their method outperformed the others including the Niblack algorithm. Therefore, in our work, we use this method to compute a threshold t(x, y) for each pixel at location (x, y):
    • t(x, y) = m(x, y) · [1 + k · (s(x, y)/R − 1)]    (2)
    where m(x, y) and s(x, y) are the mean and standard deviation of the gray-level values within a w × w pixel window centered at (x, y), R is the maximal value of the standard deviation (set to 128), and k is a constant in the range [0.2, 0.5]. If the intensity value of a pixel is lower than t(x, y), it is segmented as nucleus. The window size w should be chosen in relation to the maximal size of the nuclei; it is set to 71 in our work. A k value of 0.3 gives good results in our work. With the integral image technique, this algorithm runs very fast [51].
  • Local stage. Based on the bounding box of each adaptive segment (with size [slength, swidth]), a subimage corresponding to a larger rectangle (with size [slength + sσ, swidth + sσ]) is first extracted from the preprocessed image (Figs. 4b and 4c). Then the subimage is stretched linearly to further enhance the contrast. The histograms of subimages are found to be bimodal, which can be modeled well by a mixture of two Poisson distributions. This modeling choice is based on the analysis in Ref. [30]. The Poisson mixture parameters are given below,
    • μ0 = (1/NF) · Σ_{p∈F} Ip,   μ1 = (1/NB) · Σ_{p∈B} Ip    (3)
    where μ0 and μ1 are the mean intensity values of the foreground and the background, respectively. NF and NB are the number of pixels in the foreground and background, respectively. h(Ip) = NIp/(NF + NB) represents the normalized image histogram, where NIp is the number of pixels with intensity value Ip. In some subimages, a weakly stained nucleus may be surrounded by some dark objects (e.g., nuclei or artifacts). Taking these dark objects as the foreground may lead to an inaccurate segmentation result. Hence, in each subimage, we exclude objects which neither belong to the foreground nor the background. Then, μ1 of each subimage is used to replace the intensity values of the excluded objects. With μ0 and μ1, we compute Poisson probabilities of the foreground and background as follows:
    • Pr(Ip | F) = (μ0^Ip · e^(−μ0)) / Ip!,   Pr(Ip | B) = (μ1^Ip · e^(−μ1)) / Ip!    (4)

Figure 3. (a) A color image with original resolution. (b) Contrast enhancement and noise removal. (c) Otsu's multiple thresholding. (d) Multiway GC. (e) Foreground merging and morphological opening. (f) The obtained cytoplasm boundary. [Color figure can be viewed in the online issue which is available at wileyonlinelibrary.com]


With these probabilities, we construct the following GC energy function:

  • E(f) = Σ_p D_p(f_p) + Σ_{{p,q}∈N} V_{p,q}(f_p, f_q)    (5)

where the first term has two possible values depending on whether the foreground or background model is used, and the second term is the pixel continuity term. They are written as follows,

  • D_p(f_p) = −ln Pr(Ip | F) if f_p is foreground, and D_p(f_p) = −ln Pr(Ip | B) if f_p is background;
    V_{p,q}(f_p, f_q) = δ(f_p, f_q) · exp(−(Ip − Iq)² / (2σ²))    (6)

where δ(·) is 0 when fp = fq, and 2 otherwise. σ is a scale factor determined by the smoothness of the nuclear chromatin, and can be set within [20, 40]; in our work, it is set to 30. The above energy function is represented as a weighted graph through GC. By seeking the minimum cut of the graph with the max-flow/min-cut algorithm [29], the optimal segmentation of the image is obtained. Note that AL-Kofahi et al. [30] also used Poisson distributions in their GC-based binarization. The difference is that in Ref. [30] the Poisson distributions are used to seek a globally optimal threshold separating the foreground and the background for GC initialization, whereas in our method the foreground and background are already known after the adaptive stage.
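The Poisson data term can be illustrated as follows. This sketch labels each pixel by likelihood alone; the full method adds the pairwise smoothness term and solves the resulting graph with max-flow/min-cut:

```python
import math
import numpy as np

def poisson_neg_log_likelihood(I, mu):
    """Data term D_p = -ln Pr(I | mu) for a Poisson model with mean mu,
    evaluated at integer intensity I (math.lgamma(I + 1) = ln I!)."""
    return mu - I * math.log(mu) + math.lgamma(I + 1)

def likelihood_label(img, mu0, mu1):
    """Label pixels foreground (1) where the foreground Poisson model
    (mean mu0) is more likely than the background model (mean mu1)."""
    d0 = np.vectorize(lambda I: poisson_neg_log_likelihood(int(I), mu0))(img)
    d1 = np.vectorize(lambda I: poisson_neg_log_likelihood(int(I), mu1))(img)
    return (d0 < d1).astype(int)
```

Pixels with intensities near the dark nuclear mean receive a small foreground cost and a large background cost, so even before the smoothness term is applied, the likelihood alone separates well-contrasted nuclei from their surroundings.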

After the refinement process, among all segments in the subimage, only the segment with the maximal overlap area with the object from the adaptive stage is retained. To reduce the computational complexity, for a given adaptive segment χ (Eq. (2)), we empirically set a condition based on one feature, roundness Fr, defined as Fr = μR/σR, where μR and σR are the mean and standard deviation of the distances between the centroid and the boundary points of χ, respectively. A segment χ satisfying Fr < 1.2 or Fr > 3 is deemed either a well-segmented nucleus or an artifact cluster, not a wrongly segmented nucleus, and needs no further refinement.

Touching-nuclei splitting and reconstruction

The ability to split touching nuclei is crucial for a fully automated cervical nucleus segmentation method. In our work, we combine two of our previously proposed concave-based methods [49, 53] to split touching nuclei. This combination integrates morphological features (geometric center and arc chord ratio) and a gradient feature (radial symmetry center [54]). Briefly, a connected region χi is deemed a touching-nuclei clump if it satisfies the following two conditions:

  • |ri − gi|_2 > T1   and   Fs > T2    (7)
    (T1 and T2 are empirically chosen thresholds, following Refs. [49, 53])

where ri is the most likely radial symmetry center, gi is the geometrical center, and |·|2 is the Euclidean distance. Fs is the shape factor, defined as Fs = L²/(4πFa), where L is the perimeter and Fa is the area. If χi satisfies only the former condition, we use the arc chord ratio based algorithm [49] to split χi instead of empirically reallocating the geometrical center as in Ref. [53]. Otherwise, χi is split using the radial symmetry based splitting algorithm in Ref. [53].

The splitting lines obtained by the above method usually cannot accurately delineate the occluded contour, which would compromise the reliability of nuclear feature extraction. In our work, the constrained ellipse fitting technique [25] is used to reconstruct the occluded contour.
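The shape factor Fs used in the touching-nuclei test can be sketched as follows. In this sketch the perimeter is approximated by counting boundary pixels, which is one of several possible estimators and is an assumption of the sketch, not the paper's exact perimeter measure:

```python
import numpy as np

def shape_factor(mask):
    """Fs = L^2 / (4 * pi * Fa): on the order of 1 for a disk, larger for
    elongated or touching blobs. The perimeter L is approximated by the
    number of mask pixels with at least one 4-neighbor outside the mask."""
    mask = mask.astype(bool)
    Fa = mask.sum()
    padded = np.pad(mask, 1)
    # interior pixels: all four 4-neighbors are inside the mask
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    L = (mask & ~interior).sum()
    return float(L) ** 2 / (4 * np.pi * float(Fa))
```

Because a clump of two touching disks has nearly twice the boundary of a single disk but less than twice its compact "ideal" boundary-to-area ratio, its Fs is markedly larger, which is exactly what condition (7) exploits.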

Cell Classification

After cytoplasm and nucleus segmentation, our task is to automatically identify abnormal cells. Although both nuclear and cytoplasmic features are useful for cervical cell identification [16, 38], recent research has validated the central role of nuclei in cancer recognition [42, 55]. More specifically, in cervical cytology diagnosis, all cervical cytology abnormalities (ASC-US, ASC-H, LSIL, HSIL, and CA) are accompanied by nuclear abnormality [56]. Therefore, our classification framework first estimates whether a test object (nuclear mask) is a potentially abnormal nucleus based on nuclear features, and then confirms its abnormality based on contextual information and an approximate cytoplasmic feature.

The whole framework includes a training, a validation, and a testing stage, indicated by different arrows and colors in Figure 5. The aim of the training stage is to learn four modules (classifiers): the artifact filters, the nucleus/artifact classifier, the abnormal/normal nucleus classifier, and the abnormal cell/hard negative classifier. Each module is learned from its corresponding dataset. The validation stage is used to collect "hard negative" objects for learning the abnormal cell/hard negative classifier. Once these modules are established, a given object in the testing stage is recognized as an abnormal cell only if it passes all four modules together with the atrophic cell filter and the context analysis; otherwise, it is an artifact or a normal cell. In the rest of this section, we describe the construction of the four modules, the atrophic cell filter, and the context analysis.


Figure 4. (a) The input image. (b) The result after preprocessing and adaptive thresholding. The white objects need the LAGC process, whereas the gray objects need not. (c) Refine some of the adaptive segments within its local neighborhood by using a Poisson distribution based GC. (d) Replace the adaptive segments by the refined results. (e) LAGC binarization result. [Color figure can be viewed in the online issue which is available at wileyonlinelibrary.com]


Artifact filters

Considering that the segmented objects (nuclear masks) may contain a large number of artifacts, we design a series of filters to quickly eliminate some of them, including dirt, graphite particles, out-of-focus objects, parts of cytoplasm, and inflammatory cells. These filters perform threshold tests on simple features. The idea of using such filters is inspired by Refs. [22] and [33].

Dirt appears gray and graphite particles tend to be black; both are eliminated by thresholding the absolute channel-wise differences in RGB color space. The boundaries of poorly focused objects are typically blurry, so we remove them by thresholding the mean boundary gradient Bgrad computed with a Sobel operator. Parts of cytoplasm tend to have irregular shapes and are removed by thresholding the shape factor Fs. Inflammatory cells, which have small dark nuclei, are eliminated by thresholding the maximum radius Rmax and the mean intensity Imean. The thresholding parameters are learned on the training set: we choose the parameters that eliminate as many artifacts as possible while retaining almost all true nuclei.
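The filter cascade can be sketched as follows. All threshold values below are illustrative placeholders, since the paper learns them from its training set, and the per-object feature dictionary is a hypothetical input format.

```python
# Hypothetical thresholds -- the paper's values are learned from data.
MAX_RGB_DIFF = 20        # dirt/graphite: near-gray or black RGB triples
MIN_BOUNDARY_GRAD = 8.0  # out-of-focus objects have blurry boundaries
MAX_SHAPE_FACTOR = 3.0   # cytoplasm fragments have irregular shapes
MIN_RADIUS = 6.0         # inflammatory cells are small ...
MAX_MEAN_INTENSITY = 60  # ... and dark

def passes_artifact_filters(obj):
    """obj: dict of simple per-object features. True means the object
    survives every filter and is passed on to Classifier 1#."""
    r, g, b = obj["mean_rgb"]
    if max(abs(r - g), abs(g - b), abs(r - b)) < MAX_RGB_DIFF:
        return False                      # gray dirt or black graphite
    if obj["boundary_grad"] < MIN_BOUNDARY_GRAD:
        return False                      # poorly focused object
    if obj["shape_factor"] > MAX_SHAPE_FACTOR:
        return False                      # irregular cytoplasm fragment
    if obj["max_radius"] < MIN_RADIUS and \
       obj["mean_intensity"] < MAX_MEAN_INTENSITY:
        return False                      # small dark inflammatory cell
    return True

nucleus = dict(mean_rgb=(90, 60, 140), boundary_grad=25.0,
               shape_factor=1.3, max_radius=15.0, mean_intensity=110)
dirt = dict(mean_rgb=(100, 102, 99), boundary_grad=25.0,
            shape_factor=1.3, max_radius=15.0, mean_intensity=110)
```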

Nucleus/artifact classifier (Classifier 1#)

The above four filters can eliminate only a certain amount of typical artifacts. To remove more artifacts (such as incorrectly segmented nuclei, mucous streams, and noncellular artifacts), a nucleus/artifact classifier is trained.

The positive training samples contain abnormal and normal nuclei collected from the training image set. Most of these samples are collected automatically by our segmentation method; the remainder, which show good image quality but are poorly segmented, are collected manually by Path.1 (using Photoshop). The first set of negative samples is collected by Path.1 on automatically segmented images. In addition, to improve the performance of the classifier, we expand the set of negative samples by using the bootstrap approach [57] to collect false positive objects.

For each training sample, a set of 18 features is automatically extracted, covering chroma, texture, size, shape, and contour, as shown in Table 3. Most of these features are a subset of the cell features used in Refs. [58] and [59]. Three of the shape features (13, 14, and 15) are commonly used to characterize tumor shape [60] but had not previously been used for cervical cell classification. To select the most informative and independent features from the feature set, quadratic mutual information (QMI) [61] and a greedy strategy are used to rank the features. Backward elimination removes features so as to maximize the QMI of the remaining feature set, and the remaining features are then input to a classifier to test classification performance. The feature set with the highest classification accuracy is chosen as the optimal feature set. We use QMI because it requires no assumption on the class densities and remains feasible for training sets on the order of thousands of samples.
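The ranking procedure can be sketched as follows, with a simple histogram mutual-information estimate standing in for the QMI of Ref. [61]; scoring features one at a time is our per-feature simplification of the paper's set-wise criterion.

```python
import numpy as np

def mutual_info(x, y, bins=8):
    """Histogram estimate of I(X;Y) in nats between a 1-D feature x and
    discrete labels y (a stand-in for quadratic mutual information)."""
    edges = np.histogram_bin_edges(x, bins)
    xd = np.digitize(x, edges[1:-1])                  # bin index 0..bins-1
    classes = {c: i for i, c in enumerate(np.unique(y))}
    joint = np.zeros((bins, len(classes)))
    for xi, yi in zip(xd, y):
        joint[xi, classes[yi]] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

def backward_eliminate(X, y, n_keep):
    """Greedily drop the least informative feature until n_keep remain."""
    kept = list(range(X.shape[1]))
    while len(kept) > n_keep:
        scores = [mutual_info(X[:, j], y) for j in kept]
        kept.pop(int(np.argmin(scores)))
    return kept

rng = np.random.default_rng(0)
y = np.repeat([0, 1], 200)
X = np.column_stack([y + 0.1 * rng.standard_normal(400),  # informative
                     rng.standard_normal(400)])           # pure noise
```

In the paper, each surviving feature subset is additionally fed to a classifier, and the subset with the highest classification accuracy wins.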

Table 3. The 18 features used for the nucleus/artifact classification

No.  Descriptor  Feature                No.  Descriptor  Feature
1    Chroma      Blue ratio             10   Shape       Roundness
2    Chroma      Red ratio              11   Shape       Elongation
3    Chroma      Average color          12   Shape       Convexity
4    Chroma      Average intensity      13   Shape       SDNRL [60]
5    Texture     Variance               14   Shape       AR [60]
6    Texture     Entropy                15   Shape       RI [60]
7    Size        Perimeter              16   Contour     Boundary intensity
8    Size        Area                   17   Contour     Boundary variance
9    Size        Longest diameter       18   Shape       Fourier descriptor

SDNRL, standard deviation of the normalized radial length; AR, area ratio; RI, roughness index.

In the cervical cell classification task, the cost of misclassifying a cell as an artifact, or an abnormal cell as a normal cell, is much higher than the cost of the reverse error. The synthetic minority oversampling technique (SMOTE) [62] is a useful technique for this class-imbalance problem: it is a feature-space preprocessing method that oversamples the minority (positive) class by creating synthetic minority-class examples. In our work, we use SMOTE to oversample the positive class.
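SMOTE can be sketched in a few lines, as a minimal re-implementation after Chawla et al. [62] rather than the paper's code:

```python
import numpy as np

def smote(X_min, n_new, k=5, seed=0):
    """Create n_new synthetic minority samples: pick a minority sample,
    pick one of its k nearest minority-class neighbours, and interpolate
    a random fraction of the way between them."""
    rng = np.random.default_rng(seed)
    dist = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=2)
    np.fill_diagonal(dist, np.inf)
    nn = np.argsort(dist, axis=1)[:, :k]       # k nearest neighbours
    synthetic = np.empty((n_new, X_min.shape[1]))
    for t in range(n_new):
        i = rng.integers(len(X_min))
        j = nn[i, rng.integers(k)]
        synthetic[t] = X_min[i] + rng.random() * (X_min[j] - X_min[i])
    return synthetic

X_min = np.random.default_rng(1).random((20, 3))   # toy minority class
syn = smote(X_min, 40, k=3)                        # 200% oversampling
```

Oversampling the positive class at 300%, as chosen later in the article, corresponds to n_new = 3 × len(X_min).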

We then combine the features to build an effective classifier. To select an optimal classifier for our task, we compare the classification performance of several learning algorithms [multilayer perceptron (MLP), AdaBoost, support vector machines (SVMs), and random forests (RF)] using a fivefold cross-validation strategy.

Abnormal/normal nucleus classifier (Classifier 2#)

The squamous epithelium can be divided into three layers: superficial, intermediate, and basal, whose nuclear sizes differ. Since most normal nuclei, especially superficial nuclei, are much smaller than abnormal nuclei, we first set an area threshold (200 pixels) that eliminates most normal nuclei while retaining all abnormal nuclei. Nuclear masks larger than this threshold are sent to the abnormal/normal nucleus classifier. The training process of this classifier is similar to that of the nucleus/artifact classifier; in this section, only the differences are described.

The positive training samples are abnormal nuclei and the negative training samples are normal nuclei, most of which come from the positive samples in the nucleus/artifact dataset. To increase the discrimination between positive and negative samples and enrich their variation patterns, we invited Path.1 to eliminate some ambiguous abnormal nuclei and some obvious normal nuclei (e.g., those with very small size). We then added some clear and typical abnormal nuclei, and collected false positive samples from the validation images to add to the negative training samples.

In feature extraction, besides the features listed in Table 3, we add five more features to characterize the shape, size, and texture of the nuclei: convex hull area (19), rectangular degree (20), shortest diameter (21), local binary pattern (LBP) [63] mean value (22), and LBP variance (23). Among them, the LBP mean value has already been used in the nucleus/artifact classification task [24], and LBP has been used in the abnormal/normal cervical cell classification task [37]. In our work, we extract the LBP mean value for training the abnormal/normal nucleus classifier, because directly using the original 59-D LBP feature would unbalance the feature set (the other features total only about 20 dimensions). It is worth mentioning that the LBP can be computed either over pixels in the nucleus region or over pixels in the bounding box of the nucleus. We compared the two approaches and found the latter clearly superior, which might be attributed to the cytoplasm characteristics contained in the bounding box being helpful for classification.
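The LBP mean value can be sketched as below, using the basic 8-neighbour (non-uniform) LBP as a simplified stand-in for the 59-D uniform LBP mentioned in the text:

```python
import numpy as np

def lbp_codes(img):
    """Basic 3x3 LBP: each interior pixel gets an 8-bit code, one bit per
    neighbour whose intensity is >= the centre pixel."""
    c = img[1:-1, 1:-1].astype(np.int32)
    codes = np.zeros_like(c)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx].astype(np.int32)
        codes |= (nb >= c).astype(np.int32) << bit
    return codes

def lbp_mean(img, mask=None):
    """Mean LBP code over the nucleus region (mask) or, if mask is None,
    over the whole bounding-box patch -- the variant found superior."""
    codes = lbp_codes(img)
    if mask is not None:
        return float(codes[mask[1:-1, 1:-1]].mean())
    return float(codes.mean())
```

On a constant patch every neighbour ties with the centre, so every bit is set and each code equals 255.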

Atrophic cell filter and context analysis

Atrophic cells are abundant when female hormone levels are low. Since atrophic cells are round, deeply stained, and larger than normal nuclei, they are very likely to be identified as abnormal nuclei by the automatic algorithm. Fortunately, atrophic cells classically have a "fried egg" appearance (a smaller round nucleus surrounded by a larger elliptical cytoplasm). Based on this prior, our atrophic cell filter is designed to seek the nucleus within the atrophic cell. First, two segmentation tools are applied: the intensity-based Otsu thresholding algorithm [45] (performed in the object region) and the boundary-based Canny detector [64]. Then, simple shape, size, and intensity features are used to detect whether the segmentation results contain a nucleus.
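The intensity-based step can be sketched with a small Otsu implementation; the toy "fried egg" patch below is illustrative, and the paper additionally runs a Canny detector and shape/size checks.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's threshold on an 8-bit patch: maximise the between-class
    variance over all candidate cut points."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    w0 = np.cumsum(p)                       # P(intensity <= t)
    mu = np.cumsum(p * np.arange(256))      # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return int(np.argmax(sigma_b))

# Toy 'fried egg' patch: a dark 5x5 'nucleus' inside brighter 'cytoplasm'.
patch = np.full((21, 21), 200, np.uint8)
patch[8:13, 8:13] = 50
t = otsu_threshold(patch)
nucleus_mask = patch <= t   # shape/size/intensity checks would follow
```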

To further reduce the false positive rate, the contextual information of nuclei in a whole slide is utilized. We observed that although most false positive nuclei have relatively dark intensities or irregular shapes, their areas are not very large. Clinical knowledge indicates that the smallest abnormal nucleus (ASC-H) is 1.5 times larger than a normal middle-layer nucleus [56]. Thus, the mean area Smean of all normal nuclei in a slide is used as a prior constraint to further refine the classification results: an object identified as an abnormal nucleus is deemed normal if its area Si < αSmean, where α is set to 1.5. Because most superficial nuclei have already been eliminated, Smean reflects the size of the middle-layer nuclei in each slide.
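The context rule then reduces to one comparison per candidate. This is a minimal sketch; `candidate_areas` and `normal_areas` are hypothetical inputs holding per-object pixel areas.

```python
def refine_by_context(candidate_areas, normal_areas, alpha=1.5):
    """Keep only candidates whose area reaches alpha times the slide's
    mean normal-nucleus area S_mean (alpha = 1.5, after the clinical rule
    that the smallest abnormal nucleus is 1.5x a middle-layer nucleus)."""
    s_mean = sum(normal_areas) / len(normal_areas)
    return [a for a in candidate_areas if a >= alpha * s_mean]
```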

Abnormal cell/hard negative classifier (Classifier 3#)

Some normal nuclei with large sizes tend to be wrongly identified as abnormal by Classifier 2#. In this article, we call these normal nuclei/cells "hard negatives". To collect hard negatives, we follow the bootstrap strategy: we run our algorithm on a validation image set and select normal cells from the screening results. Although we cannot delineate the cytoplasm boundary of every cell, in most of the 128 × 128 image patches (Fig. 6) an approximate segmentation of the cytoplasm can be obtained. These patches were cropped as 128 × 128 pixel squares centered at the geometric centers of the nuclei. The N/C ratio is then computed for classifier training. Since only one feature is extracted, a receiver operating characteristic (ROC) curve of the classification performance was generated to select the best classification threshold. Note that the cytoplasmic area is defined as the area of the current cell minus the areas of all nuclei in that cell. In the middle columns of Figures 6a and 6b, blue represents nuclei and green represents cytoplasm.
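Selecting a single N/C-ratio cutoff from the ROC curve can be sketched as follows, here via the Youden index on toy values; the paper's cutoff of 0.098 comes from its real training data.

```python
import numpy as np

def best_nc_threshold(nc_abnormal, nc_hard_negative):
    """Sweep every observed N/C ratio as a cutoff and keep the one that
    maximises sensitivity minus false-positive rate (Youden index)."""
    pos = np.asarray(nc_abnormal)
    neg = np.asarray(nc_hard_negative)
    best_t, best_j = None, -np.inf
    for t in np.unique(np.concatenate([pos, neg])):
        sens = (pos >= t).mean()       # abnormal cells: high N/C ratio
        fpr = (neg >= t).mean()
        if sens - fpr > best_j:
            best_j, best_t = sens - fpr, t
    return float(best_t)

# Toy data: abnormal cells with large N/C, hard negatives with small N/C.
t_star = best_nc_threshold([0.30, 0.40, 0.50], [0.05, 0.10, 0.15])
```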


Figure 5. The proposed flowchart for cell classification. [Color figure can be viewed in the online issue which is available at wileyonlinelibrary.com]


In the current classification framework, the N/C ratio feature is not used in Classifier 2#, since we cannot obtain an accurate cytoplasm boundary for each cell in heavily overlapping cell clumps. Therefore, we first use nuclear features to identify potentially abnormal nuclei and then use the approximate N/C ratio to refine the results. This refinement aims to retain as many potentially abnormal nuclei as possible while eliminating those with abundant cytoplasm.

Implementation of the Proposed Methods

Our methods were implemented in C++. We ran the release build on a 64-bit Windows PC with a 2.66 GHz quad-core CPU and 4 GB of RAM. The automatic image acquisition process started from the lower left part of the specimen and scanned along a round path so that a wide range of the specimen was covered. The acquisition had two modes: acquiring 100 views (about 5 min) or 200 views (about 10 min). After analyzing the acquired images, our system presents the recognition results on the computer screen. In the 100-view mode, similar to the FocalPoint™ system [9], the images that contain abnormal cells are presented. In the 200-view mode, similar to the PAPNET™ system [9], the abnormal cells are presented ranked by their abnormality (the output values of the classification methods). On average, our system displays the abnormal-like cells in approximately 1 min after the acquisition process ends.

Quantitative Assessment Methodologies

Assessment of autofocusing

To evaluate autofocusing, we conducted two experiments on a subset of the test slides. The first experiment evaluated the dynamic accuracy. One consecutive through-focus image stack was acquired from each slide, and 10 autofocusing tests were performed for each stack using 10 random initial positions. The dynamic accuracy was defined as the distance between the real focus position and the maximum position found by our method. The second experiment evaluated the satisfaction degree of autofocusing in real application. We used our method to automatically scan and acquire 200 focused images from each slide. The satisfaction degree of each image was then subjectively graded by Path.1 and Path.2 into three types: clear (the boundary, structure, and chromatin of the nuclei are clear), poor (the chromatin is blurry), and error (all characteristics are totally blurry), and the percentage of common satisfaction was calculated. In addition, the average number of autofocusing steps and the average time were calculated.
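For intuition, a focus search over a sampled focus curve can be sketched as follows. This toy local hill-climb is a much-simplified stand-in for the paper's iterative global-maximum search, and the variance-of-Laplacian score is one common sharpness measure, not necessarily the paper's criterion.

```python
import numpy as np

def sharpness(img):
    """Variance-of-Laplacian focus score: higher on sharper images,
    zero on a constant (fully defocused) patch."""
    img = img.astype(float)
    lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def hill_climb(curve, start):
    """Step toward the better neighbour on a sampled focus curve until a
    local maximum is reached."""
    i = start
    while True:
        nbrs = [j for j in (i - 1, i + 1) if 0 <= j < len(curve)]
        best = max(nbrs, key=lambda j: curve[j])
        if curve[best] <= curve[i]:
            return i
        i = best

# Unimodal toy focus curve sampled at a 2 um z-step; the dynamic accuracy
# is |found - true| * step size.
curve = [1.0, 2.5, 5.0, 9.0, 7.0, 3.0]
found = hill_climb(curve, 0)
accuracy_um = abs(found - 3) * 2.0
```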

Assessment of cell segmentation

Twenty-one cervical cell images were selected from the training set to evaluate the performance of cell segmentation; 15 of them contained abnormal cells and the other 6 were normal. The ground truth for evaluating cytoplasm and nucleus segmentation was obtained by manual delineation by Path.1. Only the subset of nuclei that could be unambiguously determined by a human expert was annotated.

The evaluation of cytoplasm segmentation was based on comparison with three other cytoplasm segmentation methods [16, 17, 19] using the mutual overlap metric [65], defined as follows:

  Acc = 2|RGT ∩ RSeg|/(|RGT| + |RSeg|) (8)

where RGT denotes the ground truth region, RSeg the segmented region, and |·| the number of pixels in a region.
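As a sketch, Acc for a pair of binary masks could be computed as below; the Dice-style form used here is our assumption about metric [65].

```python
import numpy as np

def mutual_overlap(r_gt, r_seg):
    """Acc = 2|R_GT intersect R_Seg| / (|R_GT| + |R_Seg|) for binary masks
    (assumed form of the mutual overlap metric)."""
    r_gt = r_gt.astype(bool)
    r_seg = r_seg.astype(bool)
    return 2.0 * np.sum(r_gt & r_seg) / (np.sum(r_gt) + np.sum(r_seg))

gt = np.zeros((8, 8), np.uint8); gt[2:6, 2:6] = 1    # 16 px
seg = np.zeros((8, 8), np.uint8); seg[3:7, 2:6] = 1  # 16 px, 12 shared
```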

The results of our nucleus binarization method were compared with the outputs of Li et al.'s method [13] and Al-Kofahi et al.'s method [30] using pixel-based and object-based criteria. In the pixel-based criterion, precision, recall, and F-measure were used as performance indices [15, 16]:

  P = TP/(TP + FP), R = TP/(TP + FN), F = 2PR/(P + R) (9)

where TP is the number of nucleus pixels correctly identified as nuclei, FP is the number of background pixels wrongly identified as nuclei, and FN is the number of nucleus pixels missed by the segmentation. These indices can be used not only to evaluate all nuclei in an image but also to evaluate each abnormal nucleus individually. In the object-based criterion, the nucleus detection rate and the satisfaction degree of abnormal nucleus binarization were calculated. A nucleus detection was considered a true positive (successful) if the recall R of its binarization was higher than 60%. The satisfaction degree of binarization was graded into three degrees: poor (F < 75%), acceptable (F ≥ 75%), and very accurate (F ≥ 90%), and the percentage of abnormal nuclei in each degree was calculated.
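These pixel-level indices can be sketched directly from two binary masks:

```python
import numpy as np

def pixel_prf(gt, seg):
    """Precision, recall and F-measure of Eq. (9) for binary masks."""
    gt = gt.astype(bool); seg = seg.astype(bool)
    tp = np.sum(gt & seg)     # nucleus pixels found
    fp = np.sum(~gt & seg)    # background pixels marked as nucleus
    fn = np.sum(gt & ~seg)    # nucleus pixels missed
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    return float(p), float(r), float(2 * p * r / (p + r))

gt = np.zeros((4, 4), bool); gt[:, :2] = True     # 8 nucleus pixels
seg = np.zeros((4, 4), bool); seg[:3, :2] = True  # 6 true positives
seg[:2, 2] = True                                 # 2 false positives
```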

To evaluate the touching-nuclei splitting accuracy, three types of errors were used in accordance with Refs. [30] and [53]: undersplitting, oversplitting, and encroachment errors. Furthermore, the pixel-based precision, recall, and F-measure were used to evaluate the reconstruction of correctly split touching nuclei.

Assessment of cell classification

To estimate the effectiveness of Classifier 1# and Classifier 2#, we adopt the true positive rate (TPR), the true negative rate (TNR), and the correct classification rate (CCR):

  TPR = np/Np, TNR = nn/Nn, CCR = (np + nn)/(Np + Nn) (10)

where np and nn are the numbers of samples correctly classified to the positive and negative classes, respectively, and Np and Nn are the total numbers of samples in the positive and negative classes, respectively.

Clinical evaluation

Automated screening systems can be used for prescreening or rescreening of cervical cytology slides, and both roles were evaluated in this study. The prescreening trial was conducted on the test slide sets described in detail in Table 1. Observers (Eng.1, Path.1, and Path.2) were asked to interpret the screening results produced by our system. The rescreening trial was conducted on the 97 NILM slides from October 2011. A graduate student (Stu.1) was asked to interpret the automatic screening results; the abnormal-like cells were then collected by the student for further consultation (rescreening under the microscope if needed) by pathologists.

To quantitatively evaluate the clinical trials, we utilized sensitivity (SN) and specificity (SP), the indices used by the US FDA to evaluate commercial cervical screening machines [4, 5]:

  SN = TP/(TP + FN), SP = TN/(TN + FP) (11)

where TP, TN, FN, and FP denote the numbers of true positives, true negatives, false negatives, and false positives, respectively.
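As a worked example, the SN/SP entries reported later in Table 6 follow directly from the slide counts; here, Path.1's 2011 row (21 positive slides with 2 false negatives, 22 negative slides with no false positives):

```python
def sn_sp(tp, tn, fp, fn):
    """Sensitivity and specificity as in Eq. (11)."""
    return tp / (tp + fn), tn / (tn + fp)

sn, sp = sn_sp(tp=19, tn=22, fp=0, fn=2)   # Path.1, 2011, 200 views
```

This yields SN = 19/21, about 90.5%, and SP = 100%, matching Table 6.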

Results


Performance of Autofocusing

The 26 slides in the 2010 testing set were used to evaluate the dynamic autofocusing accuracy. The search space was set to ±200 μm around the focus position, covering 201 steps with a step size of 2 μm. The 260 experimental tests showed that the dynamic autofocusing accuracy of our method was 2.06 μm. In total, 5,200 focused images were automatically acquired for evaluating the satisfaction degree of autofocusing; Path.1 and Path.2 were asked to classify these images into the three degrees according to their own observations. As a result, Path.1/Path.2 graded 5,112/5,117 images as clear, 41/37 as poor, and 47/46 as error, respectively. The common satisfaction degree was 98.2%. In addition, one autofocusing process required 14 z-motor steps (about 3 s) on average.

Performance of Cell Segmentation

In our segmentation method, two major parameters were tuned on a set of training images. The cluster number in the global segmentation was tuned on 30 training images: setting it to 3, 4, 5, and 6 yielded cytoplasm segmentation accuracies Acc of 0.89, 0.93, 0.93, and 0.91, respectively. Considering the tradeoff between accuracy and computational complexity, we chose 4 as the cluster number. The sσ in the LAGC was tuned on 30 nucleus images: varying sσ over 3, 5, 10, and 15 yielded F-measures of 0.80, 0.84, 0.88, and 0.88, respectively. Since the computational burden increases with larger cluster numbers and sσ, we suggest using a cluster number of 4 and sσ = 10. Other parameters were tuned empirically or set as in our previous studies [49, 53]. Since these parameters mainly depend on cell sizes, our method should generalize well when images are captured at the same objective magnification.

The average Acc obtained by the methods in Refs. [16], [17], and [19] and by our proposed global segmentation is 0.64, 0.68, 0.76, and 0.93, respectively; all three compared methods perform markedly worse than our cytoplasm segmentation method. In Gençtav et al.'s method [16], the only parameter is the radius of the disk of the black top-hat algorithm. We found that this method has difficulty producing consistently satisfactory results with a single value of this parameter across different images, so we used a radius of 210 pixels as set in Ref. [16]. For Harandi et al.'s method [17], the segmentation results differ with the parameter ν and the number of iterations of the active contour model. Based on our empirical tests, the number of iterations was set to 5,000 to ensure convergence of the contour, and ν was set to 0.8 in accordance with Ref. [17]. Our cytoplasm segmentation method takes 0.3 s per image on average.

Figure 7 shows a comparison of nucleus binarization results on an image patch containing two abnormal nuclei (indicated by green arrows). The result of LAGC is much better than those of the other algorithms in terms of binarization accuracy, especially for the abnormal nuclei: both abnormal nuclei are wrongly binarized by Li et al.'s method (Fig. 7b) and Al-Kofahi et al.'s method (Fig. 7c).


Figure 6. Comparison of (a) abnormal cell images and (b) “hard negatives” using N/C ratio feature. In each graph, the first through the third columns illustrate the image samples, corresponding regions of nuclei and cytoplasm, and the N/C ratio values. [Color figure can be viewed in the online issue which is available at wileyonlinelibrary.com]


Table 4 compares LAGC with the other two algorithms in terms of the average precision, recall, and F-measure of binarization for all nuclei and for abnormal nuclei. LAGC achieves F-measures of 0.873 and 0.884 on the binarization of all nuclei and of abnormal nuclei, respectively. For nucleus detection, LAGC achieves a 0.99 detection rate, whereas the methods of Refs. [13] and [30] achieve 0.80 and 0.78, respectively. Furthermore, for LAGC, 6.2%, 93.8%, and 51.6% of abnormal nucleus binarization results are poor, acceptable, and very accurate, respectively; the corresponding results are 45.3%, 54.7%, and 17.2% for Ref. [13], and 48.4%, 51.6%, and 20.3% for Ref. [30]. These three evaluations show that LAGC outperforms the other two methods. Note that in these evaluations, touching nuclei are considered as a whole object.

Table 4. Comparison of average nucleus binarization performance using the pixel-based criterion

               All nuclei                         Abnormal nuclei
Method    Precision  Recall  F-measure    Precision  Recall  F-measure
[13]      0.52       0.77    0.598        0.62       0.86    0.688
[30]      0.79       0.67    0.710        0.77       0.61    0.627
LAGC      0.85       0.90    0.873        0.88       0.91    0.884

The original version of Li et al.'s method [13] was designed for single-cervical-cell images; we extend it to process all nuclei in an image by eliminating the shape constraints on candidates and performing the radiating GVF snake on all candidates. Following Ref. [13], the parameters α, μ, β, δ, γ, and θ are set to 1, 1, 5, 0.5, 10, and 2, respectively. Al-Kofahi et al.'s method is implemented in the Farsight open source project [30]; in our comparison, their GC-based binarization algorithm is utilized.

There are in total 549 overlapping nuclei in our images. Among them, 508 (92.5%) are correctly split, 15 (2.7%) are undersplit, and 26 (4.7%) have encroachment errors. Furthermore, the average precision, recall, and F-measure of the reconstruction results are 0.92, 0.87, and 0.89, respectively.

Figure 8 shows cell segmentation results from our test image set. Our proposed methods can accurately delineate cytoplasm boundaries in H&E stained images in the presence of inhomogeneous illumination, inconsistent staining, and dirt occlusion, and achieve promising segmentation results for nuclei and touching nuclei with weak staining and nonuniform chromatin distribution. The average time cost of the whole nucleus segmentation procedure is about 1.6 s per image.


Figure 7. Comparison of nuclei binarization results with (a) ground truth, (b) Li et al.'s method [13], (c) Al-Kofahi et al.'s method [30], and (d) LAGC. The edges of binary masks are overlaid on the color images. [Color figure can be viewed in the online issue which is available at wileyonlinelibrary.com]


Performance of Cell Classification

To train Classifier 1#, a total of 2,089 nuclei and 5,223 artifacts were collected from the training set and the validation set. Classifier 2# was trained on 1,126 abnormal nuclei and 1,126 normal nuclei. As shown in Table 5, with fivefold cross-validation using different classifiers and the features ranked by QMI, the nucleus/artifact classifier and the abnormal/normal nucleus classifier achieved the highest CCR when trained with RF and MLP, respectively.

Table 5. The performance of the nucleus/artifact classifier and the abnormal/normal nucleus classifier using four classifiers

              Nuclei/artifacts, %         Abnormal/normal nuclei, %
Classifier    CCR    TPR    TNR           CCR    TPR    TNR
MLP           97.1   95.5   97.7          94.3   92.5   96.0
AdaBoost      96.4   93.0   97.8          92.7   91.5   93.9
SVM           97.1   94.3   98.2          93.8   91.8   95.9
RF            97.3   95.7   97.9          93.7   92.5   95.0

Note that the bold values in the original table represent the best performance in each column.

Figure 9a shows the ranked features evaluated with the RF classifier. The maximum CCR was achieved after eliminating four features: entropy (6), average intensity (4), variance (5), and average color (3). The best feature selected by QMI was the roughness index (RI, feature 15). Figure 9b shows the ranked features evaluated with the MLP classifier. According to this curve, five features were eliminated: average color (3), average intensity (4), boundary intensity (16), standard deviation of the normalized radial length (SDNRL, 13), and area ratio (AR, 14). The best five features selected by QMI were perimeter, longest diameter, convex hull area, LBP mean value, and area.


Figure 8. Automatic segmentation results for cervical cell images with original resolution. The upper two images contain abnormal cells, whereas the lower two are normal cases. The boundaries of the cytoplasm and the nuclei delineated by our methods are marked as yellow and green, respectively. [Color figure can be viewed in the online issue which is available at wileyonlinelibrary.com]



Figure 9. The upper and lower figures show the variation in CCR with the remaining features used in the RF classifier and MLP classifier, respectively. The maximum CCRs were achieved using 14 features and 18 features, respectively. [Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]


To further improve the TPR of Classifier 1# and Classifier 2#, the positive classes were oversampled at 50%, 100%, 200%, 300%, 400%, 500%, and 600% using SMOTE, with the number of nearest neighbors set to 5 as in Ref. [62]. Based on fivefold cross-validation on the nucleus/artifact dataset, we chose to oversample the nucleus class at 300%, which gave the highest CCR (98.0%) with relatively high TPR (99.0%) and TNR (96.9%). On the abnormal/normal nucleus dataset, we chose to oversample the abnormal nucleus class at 400%, which gave the highest CCR (94.3%) with relatively high TPR (98.0%) and TNR (90.5%).

The approximate N/C ratio feature was extracted from 1,126 abnormal nuclei and 1,176 hard negatives to train Classifier 3#. According to the ROC curve analysis, the N/C ratio threshold was set to 0.098, which simultaneously achieved a high sensitivity (98.2%) and a promising false positive rate (19%). This setting slightly decreases the system sensitivity but significantly alleviates the observer's burden of targeted reading. Among the classification steps, (1) the artifact filters, Classifier 1#, and Classifier 2# took 0.2 s on average, and (2) the atrophic cell filter, the context analysis, and Classifier 3# took 1 min on average.

Performance of Prescreening and Rescreening

To compare the prescreening performance between scanning 100 views and 200 views, we analyzed 100 views from each of the 26 test slides of 2010 and 200 views from each of the 43 test slides of 2011. The performance of human targeted interpretation of the system outputs is shown in Table 6. When analyzing 200 views, both Path.1 and Path.2 achieved relatively high sensitivities (90.5% and 85.7%, respectively) and 100% specificities; even Eng.1 achieved a 90.5% sensitivity and an 81.8% specificity. The performance when analyzing 100 views was relatively poor. In the rescreening trial, using our system, Stu.1 picked eight slides (8/97 = 8.2%) that might contain abnormal cells. After consultation with pathologists, two of these slides (2/8 = 25%) were diagnosed as abnormal.

Table 6. Prescreening trial results of our system on test slides

Year  Views  No. pos/neg  Observer  FN/FP  SN/SP, %    Speed/slide, s
2010  100    17/9         Eng.1     0/4    100/55.6    27.1
                          Path.1    1/4    94.1/55.6   37.7
2011  200    21/22        Eng.1     2/4    90.5/81.8   23.7
                          Path.1    2/0    90.5/100    34.9
                          Path.2    3/0    85.7/100    130.0

One of the 10 NILM slides from 2010 was subsequently confirmed as abnormal by pathologists; hence, the numbers of positive and negative slides from 2010 are 17 and 9, respectively. Path.2 interpreted the results very carefully and therefore took much more time. The bold values in the original table highlight the final prescreening results obtained by pathologists.

Discussion


Since our segmentation method is not specifically designed for cytoplasm, it may fail to delineate the cytoplasm boundary in red regions caused by intense illumination. However, we observe that it provides satisfactory delineation of most cytoplasm boundaries, as reflected by our classification results, where the cytoplasm delineation and the approximate N/C ratio are used to eliminate hard negatives. Furthermore, because of the speed requirement, the LAGC and the ellipse fitting process only a subset of the nuclei, which may reduce the sensitivity of abnormal nucleus recognition.

In the prescreening trial, Path.2 produced three false negative diagnoses, two of which were also overlooked by Eng.1 and Path.1 because our system could not find typical abnormal cells in those slides (within 200 views); in fact, these two slides contain too few cells. We plan to develop an automated method to assess the adequacy of cervical slides in the future. Alternatively, we could analyze more views to increase the chance of capturing abnormal cells, although this may somewhat reduce specificity.

Compared with the two FDA-approved automated screening machines, the FocalPoint™ system (SN: 84.2%, SP: 89.4%) [5] and the ThinPrep™ system (SN: 80.4%, SP: 98.8%) [4], our system achieves higher sensitivity (88.1%, the mean of Path.1 and Path.2) and higher specificity (100%). This is achieved by our proposed segmentation and classification methods, which aim at improving the sensitivity of abnormal nucleus recognition while eliminating most disturbances, including artifacts, normal cells, atrophic cells, and hard negatives. Note that the comparison is not entirely fair because of differences in test slides, observers, evaluation approach, etc. Nevertheless, our system shows the same trend as the commercial machines in terms of sensitivity and specificity, indicating that automation-assisted cervical cancer screening in H&E stained MLBC slides is feasible. A disadvantage of our system might be its lower throughput compared with commercial machines; we plan to improve the image acquisition module (and the cell segmentation and classification modules if necessary) in the future.

In conclusion, this article presented the first automation-assisted method and system for cervical cancer screening in MLBC slides with H&E staining. The autofocusing algorithm keeps the focal plane away from the coverslip surface and locks onto the actual focal plane. The multiway GC performed on the a* channel allows cytoplasm delineation under nonideal imaging conditions. The LAGC approach and the concave-point-based method enable nucleus segmentation in images with pathology and severe cell overlapping. The proposed classification framework improves sensitivity and specificity simultaneously. Experiments and clinical trials proved the feasibility of automation-assisted cervical cancer screening in MLBC slides with H&E staining. Because our approach is cost-effective, it is highly desirable in CHCs and small hospitals.
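The article's iterative focus-curve search is not reproduced here, but the kind of per-position image-quality score it maximizes can be illustrated with one common focus measure, the variance of a Laplacian response. This metric is an assumption for illustration; the system may use a different score.

```python
import numpy as np

def laplacian_variance(gray):
    """Focus score: variance of a 4-neighbor Laplacian of a grayscale image.
    In-focus images have strong edges, hence a larger response variance."""
    g = np.asarray(gray, dtype=np.float64)
    lap = (-4.0 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return float(lap.var())
```

An autofocus loop would evaluate such a score at candidate z-positions and move toward the position where it peaks, which corresponds to finding the global maximum of the focus curve.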

Acknowledgments


The authors would like to thank the engineers Jingli Li and Shuangming Zheng who took part in this work. The authors would also like to thank all clinical collaborators: Minghua Li, Ming Lin, Kaixin Wang, and Min Tan.

Literature Cited

  1. Ferlay J, Shin HR, Bray F, Forman D, Mathers C, Parkin DM. Estimates of worldwide burden of cancer in 2008: GLOBOCAN 2008. Int J Cancer 2010;127:2893–2917.
  2. Saslow D, Solomon D, Lawson HW, Killackey M, Kulasingam SL, Cain J, Garcia FA, Moriarty AT, Waxman AG, Wilbur DC, Wentzensen N, et al.; the ACS-ASCCP-ASCP Cervical Cancer Guideline Committee. American Cancer Society, American Society for Colposcopy and Cervical Pathology, and American Society for Clinical Pathology screening guidelines for the prevention and early detection of cervical cancer. CA Cancer J Clin 2012;62:147–172.
  3. Birdsong GG. Automated screening of cervical cytology specimens. Hum Pathol 1996;27:468–481.
  4. Biscotti CV, Dawson AE, Dziura B, Galup L, Darragh T, Rahemtulla A, Wills-Frank L. Assisted primary screening using the automated ThinPrep imaging system. Am J Clin Pathol 2005;123:281–287.
  5. Wilbur DC, Black-Schaffer WS, Luff RD, Abraham KP, Kemper C, Molina JT, Tench WD. The Becton Dickinson FocalPoint GS imaging system. Am J Clin Pathol 2009;132:767–775.
  6. Kitchener HC, Blanks R, Dunn G, Gunn L, Desai M, Albrow R, Mather J, Rana DN, Cubie H, Moore C, et al. Automation-assisted versus manual reading of cervical cytology (MAVARIC): A randomised controlled trial. Lancet Oncol 2011;12:56–64.
  7. Kitchener HC, Blanks R, Cubie H, Desai M, Dunn G, Legood R, Gray A, Sadique Z, Moss S; MAVARIC Trial Study Group. MAVARIC: A comparison of automation-assisted and manual cervical screening: A randomized controlled trial. Health Technol Assess 2011;15:1–170.
  8. Maksem JA, Finnemore M, Belsheim BL, Roose EB, Makkapati SR, Eatwell L, Weidmann J. Manual method for liquid-based cytology: A demonstration using 1,000 gynecological cytologies collected directly to vial and prepared by a smear-slide technique. Diagn Cytopathol 2001;25:334–338.
  9. Desai M. Role of automation in cervical cytology. Diagn Histopathol 2009;15:323–329.
  10. Davey E, Barratt A, Irwig L, Chan SF, Macaskill P, Mannes P, Saville AM. Effect of study design and quality on unsatisfactory rates, cytology classifications, and accuracy in liquid-based versus conventional cervical cytology: A systematic review. Lancet 2006;367:122–132.
  11. Lee JM, Kelly D, Gravitt PE, Fansler Z, Maksem JA, Clark DP. Validation of a low-cost, liquid-based screening method for cervical intraepithelial neoplasia. Am J Obstet Gynecol 2006;195:965–970.
  12. Tsai MH, Chan YK, Lin ZZ, Yang-Mao SF, Huang PC. Nucleus and cytoplast contour detector of cervical smear image. Pattern Recognit Lett 2008;29:1441–1453.
  13. Li K, Lu Z, Liu W, Yin J. Cytoplasm and nucleus segmentation in cervical smear images using Radiating GVF Snake. Pattern Recognit 2012;45:1255–1264.
  14. Yang-Mao SF, Chan YK, Chu YP. Edge enhancement nucleus and cytoplast contour detector of cervical smear images. IEEE Trans Syst Man Cybern Part B: Cybern 2008;38:353–366.
  15. Bergmeir C, García Silvente M, Benítez JM. Segmentation of cervical cell nuclei in high-resolution microscopic images: A new algorithm and a web-based software framework. Comput Methods Programs Biomed 2012;107:497–512.
  16. Gençtav A, Aksoy S, Önder S. Unsupervised segmentation and classification of cervical cell images. Pattern Recognit 2012;45:4151–4168.
  17. Harandi NM, Sadri S, Moghaddam NA, Amirfattahi R. An automated method for segmentation of epithelial cervical cells in images of ThinPrep. J Med Syst 2010;34:1043–1058.
  18. Jantzen J, Norup J, Dounias G, Bjerregaard B. Pap-smear benchmark data for pattern classification. In: Proc Nat Inspired Smart Inf Syst Annu Symp; 2005.
  19. Hu M, Ping X, Ding Y. Automated cell nucleus segmentation using improved snake. Int Conf Image Process 2004;4:2737–2740.
  20. Bamford P, Lovell B. Unsupervised cell nucleus segmentation with active contours. Signal Process 1998;71:203–213.
  21. Wu HS, Barba J, Gil J. A parametric fitting algorithm for segmentation of cell images. IEEE Trans Biomed Eng 1998;45:400–407.
  22. Lee SJ, Wilhelm PS, Meyer MG, Bannister WR, Kuan CL, Ortyn WE, Nelson LA, Frost KL, Hayenga JW. Cytological slide scoring apparatus. NeoPath Inc. US Pub No. 5933519; August 3, 1999.
  23. Plissiti ME, Nikou C, Charchanti A. Automated detection of cell nuclei in Pap smear images using morphological reconstruction and clustering. IEEE Trans Inf Technol Biomed 2011;15:233–241.
  24. Plissiti ME, Nikou C, Charchanti A. Combining shape, texture and intensity features for cell nuclei extraction in Pap smear images. Pattern Recognit Lett 2011;32:838–853.
  25. Jung C, Kim C, Chae SW, Oh S. Unsupervised segmentation of overlapped nuclei using Bayesian classification. IEEE Trans Biomed Eng 2010;5:2825–2832.
  26. Plissiti ME, Nikou C. Overlapping cell nuclei segmentation using a spatially adaptive active physical model. IEEE Trans Image Process 2012;21:4568–4580.
  27. Chang CW, Lin MY, Harn HJ, Harn YC, Chen CH, Tsai KH, Hwang CH. Automatic segmentation of abnormal cell nuclei from microscopic image analysis for cervical cancer screening. In: IEEE Int Conf Nano/Mol Med Eng; 2009. pp 77–80.
  28. Nielsen B, Albregtsen F, Danielsen HE. Automatic segmentation of cell nuclei in Feulgen-stained histological sections of prostate cancer and quantitative evaluation of segmentation results. Cytometry Part A 2012;81A:588–601.
  29. Boykov Y, Veksler O, Zabih R. Fast approximate energy minimization via graph cuts. IEEE Trans Pattern Anal Mach Intell 2001;23:1222–1239.
  30. Al-Kofahi Y, Lassoued W, Lee W, Roysam B. Improved automatic detection and segmentation of cell nuclei in histopathology images. IEEE Trans Biomed Eng 2010;57:841–852.
  31. Lou X, Köthe U, Wittbrodt J, Hamprecht FA. Learning to segment dense cell nuclei with shape prior. In: IEEE Conf Comput Vis Pattern Recognit; 2012. pp 1012–1018.
  32. Chang H, Han J, Borowsky A, Loss L, Gray JW, Spellman PT, Parvin B. Invariant delineation of nuclear architecture in glioblastoma multiforme for clinical and molecular association. IEEE Trans Med Imaging 2013;32:670–682.
  33. van der Laak JA, Siebers AG, Cuijpers VM, Pahlplatz MM, de Wilde PC, Hanselaar AG. Automated identification of diploid reference cells in cervical smears using image analysis. Cytometry 2002;47:256–264.
  34. Tucker JH, Rodenacker K, Juetting U, Nickolls P, Watts K, Burger G. Interval-coded texture features for artifact rejection in automated cervical cytology. Cytometry 1988;9:418–425.
  35. Malm P, Balakrishnan BN, Sujathan VK, Kumar R, Bengtsson E. Debris removal in Pap-smear images. Comput Methods Programs Biomed 2013;111:128–138.
  36. Mat-Isa NA, Mashor MY, Othman NH. An automated cervical pre-cancerous diagnostic system. Artif Intell Med 2008;42:1–11.
  37. Nanni L, Lumini A, Brahnam S. Local binary patterns variants as texture descriptors for medical image analysis. Artif Intell Med 2010;49:117–125.
  38. Marinakis Y, Dounias G, Jantzen J. Pap smear diagnosis using a hybrid intelligent scheme focusing on genetic algorithm based feature selection and nearest neighbour classification. Comput Biol Med 2009;39:69–78.
  39. Sokouti B, Haghipour S, Tabrizi AD. A pilot study on image analysis techniques for extracting early uterine cervix cancer cell features. J Med Syst 2012;36:1901–1907.
  40. Rutenberg MR, Hall TL. Automated cytological specimen classification system and method. AutoCyte North Carolina. US Patent 6327377B1; 2001.
  41. Zhang J, Liu Y. Cervical cancer detection using SVM based feature screening. Int Conf Med Image Comput Comput Assist Interv 2004;3217:873–880.
  42. Plissiti ME, Nikou C. Cervical cell classification based exclusively on nucleus features. Int Conf Image Anal Recognit 2012;7325:483–490.
  43. Liu XY, Wang WH, Sun Y. Dynamic evaluation of autofocusing for automated microscopic analysis of blood smear and pap smear. J Microsc 2007;227:15–23.
  44. Redondo R, Bueno G, Valdiviezo JC, Nava R, Cristóbal G, Déniz O, García-Rojo M, Salido J, Fernández MM, Vidal J, Escalante-Ramírez B. Autofocus evaluation for bright field microscopy pathology. J Biomed Opt 2012;17:036008-1–036008-8.
  45. Otsu N. A threshold selection method from gray-level histograms. IEEE Trans Syst Man Cybern 1979;9:62–66.
  46. Liao PS, Chen TS, Chung PC. A fast algorithm for multilevel thresholding. J Inf Sci Eng 2001;17:713–727.
  47. Boykov Y, Kolmogorov V. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. IEEE Trans Pattern Anal Mach Intell 2004;26:1124–1137.
  48. Bishop C. Pattern Recognition and Machine Learning. New York: Springer; 2007. 738 pp.
  49. Zhang L, Chen SP, Wang TF, Chen Y, Liu SX, Li MH. A practical segmentation method for automated screening of cervical cytology. In: Proc IEEE Int Conf Intell Comput Biomed Instrum; 2011. pp 140–143.
  50. Sauvola J, Pietikainen M. Adaptive document binarization. Pattern Recognit 2000;33:225–236.
  51. Shafait F, Keysers D, Breuel TM. Efficient implementation of local adaptive thresholding techniques using integral images. Proc SPIE Doc Recognit Retr XV 2008;6815:681510–681515.
  52. Trier ØD, Jain AK. Goal-directed evaluation of binarization methods. IEEE Trans Pattern Anal Mach Intell 1995;17:1191–1201.
  53. Kong H, Gurcan M, Belkacem-Boussaid K. Partitioning histopathological images: An integrated framework for supervised color-texture segmentation and cell splitting. IEEE Trans Med Imaging 2011;30:1661–1677.
  54. Loy G, Zelinsky A. Fast radial symmetry for detecting points of interest. IEEE Trans Pattern Anal Mach Intell 2003;25:959–973.
  55. Zink D, Fischer AH, Nickerson JA. Nuclear structure in cancer cells. Nat Rev Cancer 2004;4:677–687.
  56. Solomon D, Nayar R. The Bethesda System for Reporting Cervical Cytology: Definitions, Criteria, and Explanatory Notes. New York: Springer; 2004. 191 pp.
  57. Rowley HA, Baluja S, Kanade T. Neural network-based face detection. IEEE Trans Pattern Anal Mach Intell 1998;20:23–38.
  58. Martin E. Pap-smear classification. Master's thesis, Technical University of Denmark, Denmark; 2003.
  59. Rodenacker K, Bengtsson E. A feature set for cytometry on digitized microscopic images. Anal Cell Pathol 2003;25:1–36.
  60. Tsui PH, Liao YY, Chang CC, Kuo WH, Chang KJ, Yeh CK. Classification of benign and malignant breast tumors by 2-d analysis based on contour description and scatterer characterization. IEEE Trans Med Imaging 2010;28:513–522.
  61. Torkkola K. Feature extraction by non-parametric mutual information maximization. J Mach Learn Res 2003;3:1415–1438.
  62. Chawla NV, Bowyer KW, Hall LO, Kegelmeyer WP. SMOTE: Synthetic minority over-sampling technique. J Artif Intell Res 2002;16:321–357.
  63. Ojala T, Pietikainen M, Maenpaa T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans Pattern Anal Mach Intell 2002;24:971–987.
  64. Canny J. A computational approach to edge detection. IEEE Trans Pattern Anal Mach Intell 1986;8:679–698.
  65. Sonka M, Fitzpatrick JM. Handbook of Medical Imaging. Washington: SPIE; 2000. 1250 pp.