SEARCH

SEARCH BY CITATION

Keywords:

  • watershed segmentation;
  • nucleus segmentation;
  • model-based segmentation;
  • 3D image analysis;
  • confocal microscopy

Abstract

  1. Top of page
  2. Abstract
  3. MATERIALS AND METHODS
  4. RESULTS AND DISCUSSION
  5. Acknowledgements
  6. LITERATURE CITED

Background

Automated segmentation of fluorescently-labeled cell nuclei in 3D confocal microscope images is essential to many studies involving morphological and functional analysis. A common source of segmentation error is tight clustering of nuclei. There is a compelling need to minimize these errors for constructing highly automated scoring systems.

Methods

A combination of two approaches is presented. First, an improved distance transform combining intensity gradients and geometric distance is used for the watershed step. Second, an explicit mathematical model for the anatomic characteristics of cell nuclei such as size and shape measures is incorporated. This model is constructed automatically from the data. Deliberate initial over-segmentation of the image data is performed, followed by statistical model-based merging. A confidence score is computed for each detected nucleus, measuring how well the nucleus fits the model. This is used in combination with the intensity gradient to control the merge decisions.

Results

Experimental validation on a set of rodent brain cell images showed 97% concordance with the human observer and significant improvement over prior methods.

Conclusions

Combining a gradient-weighted distance transform with a richer morphometric model significantly improves the accuracy of automated segmentation and FISH analysis. Cytometry Part A 56A:23–36, 2003. © 2003 Wiley-Liss, Inc.

Segmentation of intact cell nuclei from 3D confocal microscope images is an essential capability for numerous hypothesis testing studies, especially where knowledge of morphology of cell nuclei, the distribution of fluorescence signals associated with them, and/or the organization of cells in the tissue specimen is required (1, 64, 65). Figure 1 illustrates 3D nucleus segmentation. Figure 1a is one optical slice from a 3D confocal microscope image of a small portion of the rat hippocampus. The red-colored objects in this image are the fluorescently labeled cell nuclei. The green signal indicates RNA from the immediate-early gene Arc as revealed by FISH. Arc RNA emanates from the nuclei at the site of transcription, and the processed mRNA spreads out into the cytoplasm, just outside the nuclear boundaries. Figure 1b shows the nuclear clusters obtained by intensity thresholding of the red portion of the image (representing cell nuclei), followed by connected component labeling (48). The blue lines are the boundaries of these connected components. The core problem of interest is to separate these components into smaller objects representing individual nuclei. The higher-level application of interest is quantification of the gene-transcription activity signals (displayed in green) relative to each nucleus.

thumbnail image

Figure 1. Illustrates the segmentation problem of interest. a: An optical slice from a 3D confocal microscope image of a small portion of the rat hippocampus. The panels to the right and below are y-z and x-z projections. The red-colored objects are the fluorescently-labeled cell nuclei. The green signal indicates RNA of the immediate-early gene Arc, as revealed by FISH. b: Foreground objects obtained by intensity thresholding followed by connected components labeling, which produces many clusters of cell nuclei. The segmentation task is to separate these clusters into 3D regions representing individual nuclei by exploiting a mathematical model describing expected object sizes and shapes.

Download figure to PowerPoint

There are two main categories of cell nucleus segmentation approaches: interactive and automatic. The former is based on manually delineating nuclei in sequential and orthogonal 2D slices using a computer graphics device such as a tablet or mouse, relying on the human visual system and expert judgment (2, 3, 4). Sometimes, enhanced 3D visualization capabilities are employed (56–58). However, many biological studies involve hundreds or even thousands of cell nuclei in one image stack. The principal drawbacks of manual methods are high labor costs, tedium, and slowness. Some reduction of the tedium is occasionally accomplished by the use of statistical sampling methods and stereology (54, 55, 59–61). Another drawback is the subjective nature of the analysis. This issue exhibits itself in the form of significant inter- and intraobserver variability. Furthermore, manual procedures usually just find the object count, and a considerably larger effort is needed to obtain precise object measurements. Even then, such measurements are usually based on some sort of geometric approximation and are not as accurate as the computer-automated segmentation methods described here. In contrast, automatic algorithms are much faster and cheaper by being less labor intensive, which makes it convenient to analyze a much larger number of nuclei per study. They are also much less subjective. Finally, since the images are available in digital form on the computer, they allow the researcher to visually confirm the results, revisit the data at any time, and even re-compute the results if necessary. The potential disadvantage of the segmentation algorithms is the relatively lower correctness compared to the human observer, especially for the clusters of nuclei in which it is difficult to make an accurate definition of each individual nucleus. The possible disadvantage, however, is well compensated by the other advantages noted above. Overall, for the large-scale functional quantitative studies of interest to life scientists, automated methods are essential and unavoidable.

Development of fully-automated computer algorithms for segmenting nuclei in 3D images continues to pose interesting challenges. Much of the difficulty arises from the inherent variability in biological images, the complexity of the nuclear appearance, clustering of the nuclei, and the sheer volume of the datasets (20–100 MB per stack), all combined with instrument limitations.

The goal of the present work is to develop automated and computationally efficient algorithms that improve upon previous methods (5–10). For example, edge-based algorithms are prone to errors such as the identification of noisy edges and discontinuous boundaries (35) and requiring complex postprocessing (6, 52). Region-based approaches, such as thresholding and labeling, are only suitable for images containing well-isolated objects. Other techniques, such as splitting and merging (11–14), simple region growing (15), multiple thresholding (16), and direct morphological segmentation techniques (17–22), did not produce good results, especially when the images contained a high density of cell nuclei, and each cell exhibited significant variation of size, shape, and intensity. The need to process large batches of 3D images (50–100 images per batch) at interactive computer speeds has influenced this work significantly, as will be noted throughout the article.

A specific focus of this article is the problem of disambiguating overlapping objects following an initial voxel-based delineation of the image foreground. The watershed algorithm is widely studied and used for efficient object separation (23–26, 28). It was introduced by Digabel and Lantuejoul (29), extended by Beucher (30), analyzed theoretically by Maisonneuve (31), and formally defined in terms of flooding simulations by Vincent and Soille (32). Its popularity is attributable to its high computational efficiency and ability to extend it to 3D spaces (6, 33, 34), which makes it amenable to application to data-intensive 3D confocal image stacks. We describe an algorithm that combines the attractive features of the 3D watershed algorithm with algorithms that exploit available intensity gradient based cues and the expected anatomic shape of the nuclei, using a statistical model-based approach. The overall flowchart of the proposed algorithm is shown in Figure 2.

thumbnail image

Figure 2. A flowchart overview of the main image analysis steps described in this article.

Download figure to PowerPoint

MATERIALS AND METHODS

  1. Top of page
  2. Abstract
  3. MATERIALS AND METHODS
  4. RESULTS AND DISCUSSION
  5. Acknowledgements
  6. LITERATURE CITED

Image Pre-processing Methods

Images were acquired using a Leica TCS-4D (Leica Microsystems Inc., Bannockburn, IL) confocal microscope equipped with a krypton/argon laser. Typical image size is 512 × 512 × 20, and sampling resolution along x, y, and z direction are 0.45, 0.45 and 1.0 μm, respectively. The file format is usually a series of TIFFs. One problem with the confocal image stack is the inevitable attenuation of light along the depth of the specimen (49, 63). The uneven illumination along the depth of the specimen results in spatial variation of light intensity across the image volume. Besides the optical problems, photobleaching of the specimen contributes to further degradation of the image signal (49). Photobleaching can be modeled as a first-order decay process and therefore can be corrected computationally.

Several sophisticated methods have been used to correct for depth-dependent signal attenuation (61, 62). Unfortunately, the computational needs of these methods are high enough to preclude routine use on a large scale. For the present work, a much simpler method was adopted, keeping in mind the need for computational speed to process the large numbers of images in this study. This method works by comparing each image slice with the standard intensity slice in the stack. Denote the measured image intensity as I = (I1, I2, …, In), where Ii denotes the intensity of all pixels in the ith slice. Since background pixels are not our concern, only the foreground, i.e., visible pixels, are of interest, and we allow the intensity of the foreground pixels in the image stack to be denoted Io = (Io1, Io2, …, Ion). These intensities can be estimated as Ioi = {Ix | IxIi, Ix > vt}, where vt is an intensity threshold determined from a gray-level histogram of the image, based on a measure of class separability (42). In other words, voxels that are brighter than the threshold represent an estimate of the image foreground. We denote the average intensity of foreground pixels in the ith slice as νoi = Īoi. To restore the foreground intensity, we first set a standard average intensity, which is simply taken as the maximum of voi, i.e., vos = max(voi). This number is used to scale the pixel intensity in each image slice according to the formula Ii = Ii × (vos/voi). After this scaling, the average intensity will be equal at each slice. The scaling ratio vos/voi is independent of the foreground area of the specific slice.

Noise and other artifacts present in the image degrade the segmentation result, resulting in, for example, over-segmentation. Again, notwithstanding the availability of sophisticated, but computationally intensive algorithms (40), simpler and computationally efficient methods were adopted. Median filtering (50) is widely used as a noise reduction tool. A median filter with a kernel width of three was applied to each slice of the original 3D image to suppress the effects of shot noise, which is introduced by the photomultiplier tubes of confocal microscopes.

Separation of image background and foreground regions not only defines a broad area of interest but also reduces the ambiguity in the results due to uneven and dense background. Using the same threshold, vt, it is also possible to smooth out variations in the background by setting all pixels below vt to zero, leaving pixels with brightness above vt unchanged.

Another type of preprocessing accounts for pixel misclassification errors due to factors such as imaging noise, random variations in staining, and presence of extraneous objects such as dust. For example, the thresholding process described above sometimes results in some small isolated artifactual objects, often due to the presence of dense noncellular matter in the background. To remove these artifacts, we use a minor region removal algorithm (43). First, all the connected components are identified in the thresholded image, and the sizes of all isolated objects are calculated. Objects smaller than a set threshold are considered to be artifactual, and their voxel intensity is changed to background (zero). Complementarily, it is also possible that some misclassified regions (holes) may lie entirely within an object of interest, such as a nucleus. To remove these holes, the minor region removal algorithms (43) are again applied to the binary logical complement of the thresholded gray image. In the complemented image, the background can be expected to be the largest “blob,” and holes are small island-like objects. If the objects are smaller than the predefined threshold, which is set empirically and manually by the user, assisted by the software system, then the intensity threshold vt is assigned to the pixels belonging to those objects.

Finally, since the object and background have a varying brightness level in the original image, the following problem may occur: the detected objects (region of interest in the thresholded image) boundary may be corroded, containing breaks, gulfs, or peninsulas not corresponding to the physically-correct object boundaries. To solve this problem, morphological filtering is an effective and widely used solution (22, 48). We chose to use the morphological “opening” operation on the image, which achieves shape smoothing without the possible side effects of merging two separated objects. In detail, “opening” of an image I(x,y,z) by a structuring element, also known as a kernel, and denoted K, is written as. It is defined as I(x,y,z) ○ K. It is defined as I(x,y,z) ○ K = (I(x,y,z) Θ K) ⊕ K, where Θ is the morphological erosion operator and ⊕ is the morphological dilation operator (48). Due to the fact that 3D image stacks are usually anisotropic, i.e., the sampling distance size along the axial dimension is bigger than in the radial dimension, the structuring element for dilation and erosion is chosen as a 3D kernel of size 3 × 7 × 7, as illustrated in Figure 3. Note that some small objects may appear in the final segmentation due to the opening operation. These are eliminated by postprocessing operations after segmentation, but before any further analysis such as FISH quantification.

thumbnail image

Figure 3. The quasispherical structuring element for the image preprocessing operations on anisotropic images described in Image Preprocessing Methods. b: The kernel that is applied to the optical slice in the middle, whereas (a) and (c) are applied to the slices above and below.

Download figure to PowerPoint

Connected Object Separation

As noted in the article opening paragraphs, the watershed algorithm is widely studied and used for efficient object separation. The term “watershed” comes from a graphic analogy attributed to Vincent and Soille (32). In this analogy, the gray-level image is treated as a topographic surface. It is assumed that holes have been punched in each regional minimum. If the surface is “flooded” from these holes, the water will progressively flood the “catchment basins” (i.e., the set of points on the surface whose steepest slope paths reach a given minimum) of the image. At the end of this flooding procedure, each minimum is completely surrounded by “dams,” which delimit its associated catchment basins. The set of dams obtained in this way corresponds to watersheds (called watershed surfaces in the 3D case) from a geophysical analogy, and provides a tessellation of the input image in its different catchment basins.

Several extensions of the watershed method have been described in the literature. For instance, Malpica et al. (36) proposed a method to segment nuclear clusters based on a 3D watershed algorithm, where two different image transformations and nuclear markers are presented to fit different types of clusters. This method is based on 2D image slices; nucleus cluster images are characterized by extreme complexity and variability, which limits the accuracy of 2D algorithms. Solorzano et al. (9) presented a 3D segmentation approach applied to cancerous specimens. The 3D confocal image is segmented into nuclear and background regions, and each nuclear region is classified by visual inspection. Objects classified as clusters are divided into individual nuclei using an automatic watershed algorithm.

Notwithstanding its popularity, the watershed algorithm has several limitations. These limitations arise from the fact that it relies on touching objects exhibiting a narrow “neck” in the region of contact. These “necklines” play a critical early role in estimating the number of objects in a given cluster. This process is notoriously error-prone. Considerable effort has been devoted to the design of algorithms for generating the correct set of “markers” to guide the object segmentation. The problem of determining the correct number of markers is inherently difficult, and is conceptually similar to the problem of automatically determining the number of groups in multidimensional statistical data (37–39).

The classical watershed algorithm also ignores important cues in the image. For instance, touching nuclei often exhibit prominent intensity gradients that can be interpreted and exploited to perform accurate object separation. The watershed algorithm does not have a built-in notion of object shape and size, i.e., it does not incorporate an object model that can provide additional cues for separating touching objects. Several attempts have been made to overcome some of the limitations of the watershed algorithm. One class of attempts has relied upon some form of modeling of the objects of interest, i.e., the nuclei. For instance, Roysam et al. (40), and Mackin et al. (41) modeled connected groups of nuclei as a cluster in the four-dimensional space comprised of the spatial dimensions (x,y,z), and the intensity dimension I(x,y,z). This type of model makes the weakest assumptions about the objects of interest. Ancin et al. (6) describe a more sophisticated modeling effort using a stronger set of modeling assumptions. In their work, the nuclei were modeled as blobs whose feature values (e.g., compactness and size) were within defined intervals. This model was used to verify whether or not a given image object represents a valid nucleus.

Computation of the gradient-weighted distance transform.

As noted above, the difficulty with watershed segmentation is that before applying it, one must check if the objects and their background are marked by a regional minimum, and if crest lines outline the objects. If not, one must transform the original image so that the contours to be calculated correspond to watershed lines, and the nuclear objects to catchment basins surrounded by them. To this end, two image transformations have been widely studied: distance transform and gradient transform. Distance transformation is purely geometrical, and accounts for the shape of objects. However, it is only good at dealing with regular shapes, either isolated or touching objects with bottleneck-shaped connections. The gradient transformation is intensity-based, assuming that internuclei gradients are higher than intranuclei gradients. As with all gradient-based operations, this transformation is sensitive to imaging noise, and usually results in over-segmentation. To overcome the above difficulties, we propose a combined image transformation called the “gradient-weighted distance transform”, which accounts for both geometric and intensity features.

Let I denote the preprocessed 3D image. We first compute its 3D intensity gradient, denoted G, as the difference between a pair of images, derived by dilation then erosion of the brightness value of the image. The structuring elements for dilation and erosion are chosen as shown in Figure 3. The problem of computing the 3D image gradient is a widely studied problem (51, 52). The main advantage of the morphological approach is computational efficiency, which is an important consideration for the application of interest.

In order to compute the geometric distance transform, the preprocessed image I is first divided by intensity thresholding using the automatically computed threshold vt described earlier, as follows:

  • equation image(1)

The geometric distance transform D is calculated over Ib, using the chamfer distance transform (47). This algorithm can be computed using voxel masks in three dimensions, as illustrated in Figure 4. These masks are parts of a 3 × 3 × 3 cube. Two passes over the volume Ib are carried out. The forward mask is swept over the volume left to right, top to bottom, and front to back. The backward mask is swept in the opposite direction. At each position, the sum of the local distance in each mask voxel and the value of the voxel it covers are computed, and the new value of the central voxel (labeled 0 in Fig. 4) is the minimum of these sums. In summary,

  • equation image(2)

where vi,j is the value of the pixel at position (i,j), and (k,l) is the position in the mask (the center being (0,0)). The local distance from the mask is denoted c(k,l) ∈ {d1,d2,d3,d4,d5}, and is illustrated in Figure 4. Notice that these local distances already account for the anisotropy of Ib. Specifically, the unequal voxel size δxy and δz in the radial and axial dimensions are accounted for. Note that we use the Euclidean distance here instead of the optimal weights presented in published literature (47). Based on our tests, the optimal weight does not bring us any better results in terms of the final segmentation.

thumbnail image

Figure 4. Illustrates the forward and backward 3D masks used for the chamfer distance transform described in the text (see paragraph titled Computation of the gradient-weighted distance transform, under Connected Object Separation, in Materials and Methods). In these illustrations, d1, d2, …, d5 are the local Euclidean distances to the center voxel (labeled 0), and δz,δxy are the voxel sizes along the axial and radial dimension, respectively.

Download figure to PowerPoint

The geometric distance transform D and the gradient transform G must be combined into a single representation that captures the object separation cues available in the data. One challenge in this regard is the fact that these quantities are dissimilar, i.e., they are expressed in different units, and they can be normalized differently. The final result of the combining operation should be in distance units. These conflicting requirements are met by the following formula.

  • equation image(3)

where Gmin and Gmax are the minimum and maximum values of the gradient G needed for normalization. Note that the distance value D′ is high at positions closer to the center of foreground objects, and in pixels with smaller gradient values. D′ is smaller close to the boundary of the foreground objects, or where the gradient is relatively large. Intuitively, this captures the essential object separation cue that pixels with bigger gradient values tend to be on the boundary of an isolated object, or on the boundary between two touching objects. In practice, the watershed algorithm requires the inverse of this distance transformation. This inverse is denoted T, and is computed as follows:

  • equation image(4)

where max(D′) is the global maximum within the distance images, and Sg represents a Gaussian smoothing operator. The smoothing operation is needed because the transformed image may contain tiny noise-caused intensity peaks, usually due to uneven cell staining. Before applying watershed segmentation, the background pixels obtained previously need to be set to [max(D′ + 1)], where the 1 is added to ensure that the maximum distance within the object (one) is greater than the distance value of the background.

Figure 5 illustrates the effectiveness of the combined measure in equation 3. Figure 5a shows a sample image, with the nuclei indicated in blue, and the FISH signal displayed in red. Figure 5b is a surface plot of the geometric distance D for the region indicated by the white box in Figure 5a. Figure 5c is the result of combining the geometric and gradient measures D and G as in equations 3 and 4 above. It is clear that the combined transformation in Figure 5e is effective in discriminating touching nucleus clusters that do not have the characteristic bottleneck-shaped connecting pattern.

thumbnail image

Figure 5. Illustrates the effectiveness of the combined gradient-weighted distance transform. a: A small portion of one slice from a 3D confocal image stack, which is taken from the CA1 region of the rat brain. The white box in (a) indicates the region of interest, which includes two touching cells without an apparent bottleneck connection. b: A surface plot showing a standard geometric distance map for the highlighted pair of nuclei. d: A surface plot of the combined gradient-weighted distance map for the same region. The geometric distance in (b) does not distinguish the two touching cells, leading to the segmentation result shown in (c). The gradient-weighted distance shown in (d) presents two distinct peaks corresponding to two cells, which results in the correct segmentation shown in (e).

Download figure to PowerPoint

Enhanced 3D watershed algorithm.

Unfortunately, applying the watershed algorithm to the above-described transformed image can directly lead to oversegmentation, i.e., a single nucleus may be divided into multiple fragments. Several solutions have been proposed in the literature to address this well-known problem. Some authors have proposed marker-controlled segmentation (24, 36). In this method, singular markers are defined and imposed as minima on the transformed image. From these minima, the watershed algorithm will find the crest lines in the image by simulating a flooding process (32). In general, this process is difficult. As noted in the introductory paragraphs of this article, the problem of discovering singular markers has the same conceptual level of difficulty as the well-known and unsolved problem of estimating the number of clusters in statistical cluster analysis problems. For a specific application of interest, this problem can sometimes be reduced using a priori knowledge of the solution, when available. This is not straightforward, especially when dealing with noisy images, and when the objects to be detected are complex and varied in shape, size, and intensity. This is especially true when segmenting dense nucleus clusters. Another approach presented in the literature is hierarchical segmentation (23, 28). For example, Beucher (23) defines different levels of segmentation starting from a graphical representation of the images based on the mosaic image transform. Then the hierarchical segmentation is refined by means of a new algorithm called the “waterfall algorithm,” which allows the selection of minima and catchment basins of higher significance compared to their neighborhood. This approach reduces the oversegmentation considerably.

Another type of solution proposed to the above problem requires targeted postprocessing. Postprocessing is performed to find the final contours of the objects. Specifically, some merging techniques have to be used to eliminate the oversegmentation (7). The present work builds upon this methodology. Specifically, we have used a post-watershed merging approach using object model information. The 3D watershed algorithm is simply carried out on the gradient-weighted distance image T using the immersion simulation approach described by Vincent and Soille (32), without deliberate markers (in other words, local minima become markers). The following section describes this approach in more detail.

Model-Based Object Merging Methods

After the 3D watershed algorithm has been carried out using the gradient-weighted distance transform described above, undersegmentation can be nearly eliminated, but the problem of oversegmentation remains, as illustrated in Figure 6. To overcome this problem, some type of merging mechanism has to be introduced in the postprocessing step. Several techniques have been proposed in the literature. One possible method is to make use of hysteresis thresholding to filter noisy weak contours, representing the watershed lines between small regions. As pointed out by Najman and Schmitt (28), hysteresis thresholding produces nonclosed contours and barbs in the case of watershed. Adiga and Chaudhuri (7) presented a rule-based heuristic merging technique to reduce oversegmentation, by identifying the oversegmented objects based on size, and merging them with their parent nucleus. This method represents a significant advance, but can be improved upon. Its limitations arise from the fact that merging purely based on object size is prone to error, especially when segmenting objects with great variation in size. Second, a global size threshold is not easy to set in an automated and consistent manner. Finally, the merging rule does not account for the features of other objects in the image. The present work is similar in principle, but is built upon a richer model of the objects of interest.

thumbnail image

Figure 6. The segmentation result obtained by the 3D watershed step, just prior to postprocessing. There is no undersegmentation case, but some fragmented cells exist, as indicated by yellow arrows, and numbered (42,52), (38,45), and (85,91).

Download figure to PowerPoint

Even the most sophisticated pre- and postprocessing techniques cannot overcome the inherent limitation of purely intensity-based methods, namely the assumption that segmentation can be carried out solely based on information provided by the actual image. Actually, some kind of prior related knowledge can and must be incorporated into the algorithms for automatic nucleus segmentation. That is the reason the model-based segmentation approach has been introduced. Different procedures have been proposed in the literature to approach the problem of representation and usage of prior knowledge for image analysis, such as deformable shape models (27, 46) and statistical models (44, 45).

Due to the wide variation of object shapes and presence of many touching objects, in this work we introduce a statistical modeling–based approach to break the watershed surface, and eliminate oversegmentation. The deformable-shape models are computationally slower, and thus less attractive for the application of interest.

3D object feature selection.

Statistical shape-modeling methods depend upon the availability of parametric models to describe the nucleus objects. These parameters must be selected carefully in order to accurately characterize the nucleus objects, and discriminate outliers from real nucleus objects in an effective manner. The set of parameters must be rich enough to describe complex objects. A realistic strategy for estimating these parameters is for the user to specify examples of valid and invalid nucleus objects, and to perform supervised morphometry on these objects. In practice, the tedium and labor cost of specifying these examples is high enough to limit the number of examples. This in turn forces us to limit the number of object modeling parameters. In this work, our primary training data is cell nuclei from rat brain tissue, where there are about 100 nuclei in each image. We use only a few parameters, as described below. Note that not all these features are actually used for all images. Globally-optimal feature selection is a nontrivial task, and a definitive solution is outside the scope of this article.

Let the location of the pixels in a cell nucleus denoted p = {p0, p1, …, pn–1}, where pi = {xi, yi, zi}. Their corresponding pixel intensity values are denoted v = {v0, v1, …, vn–1}. The following 3D features are readily measured.

Volume.

The volume (size) of the object, V, is the total number of voxels inside the object, i.e., V = n.

Texture.

The simplest texture measure, denoted T, is the standard deviation of intensities of all pixels inside the object

  • equation image(5)

where v̄ denotes the average nucleus intensity.

Convexity.

The convexity, S, of an object is defined as the ratio of the object volume to the volume of the convex hull of the object. The convex hull of an object can be formed by a method called Jarvis's March (53). The convexity is desired to be close to one for circular and elliptical objects, and less than one for concave objects.

Shape.

Let Q be the boundary pixels of the object. The shape feature, U, is defined as

  • equation image(6)

where | · | denotes the number of elements in a set.

To eliminate the effect of anisotropy on feature calculations, we use the following features computed from a 2D projection of the nucleus. Let p′= {p′0, p′1, …, p′k−1} denote the k pixels that belong to the projected nucleus, where p′i = (xi, yi) is the 2D location.

Circularity.

Let p̄′ denote the center of projected nucleus, then the distance between pixels p′ and the center can be described as d = ∥p′− p̄′∥. The circularity, C, is defined as

  • equation image(7)
Area.

The area, A, is the number of pixels of 2D projected nucleus, i.e., A = k.

Mean Radius.

Let R be the vector of the distance from the boundary pixels to the center p̄′, and the mean radius is defined as the average of R, i.e., .

Eccentricity.

The eccentricity, E, is defined as the ratio of the major axis to the minor axis, and can be estimated by the ratio of the maximum to minimum radius R, i.e., E = max(R)/min(R).

Statistical object model construction method.

The statistical object model is an m-dimensional Gaussian distribution defined on a vector of m features X = (x1, x2, …, xm) drawn from the list above. The distribution requires the mean, denoted and covariance matrix, denoted ΣX. These parameters are estimated from a subset Ct, of the objects produced by the watershed algorithm described above (denoted C).

The training set Ct is selected as follows. It is known that objects representing intact nuclei in these results are generally characterized by a relatively large value of volume V, convexity S, and circularity C. Based on these considerations, the training set can be constructed by placing thresholds on volume V, convexity S, and circularity C, as described below:

  • equation image(8)

where V̄,S̄, and are the mean values of object volume, convexity and circularity, and σV, σS and σC are their corresponding standard deviations respectively, t is an empirically specified parameter that sets the degree of selectivity. Note that we remove all the nuclei that are cut by the x, y, and z axes to eliminate their influence on the training object selection, also the half-presented nuclei are not of interest, i.e., when we do FISH analysis in our study, they should all be excluded from further consideration.

Based on the above Gaussian model, we can measure the confidence score for any given object c with feature X, using the Gaussian probability that the object feature fits the model, as follows (37):

  • equation image(9)
Watershed surface breaking and object merging method.

To correct the oversegmentation produced by the watershed step, it is necessary to detect and break (eliminate) the false watershed surfaces and thereby merge nucleus objects. This is guided by a merging criterion based on a merging score derived from the confidence measure described above in equation 9.

Let W denote the set of watershed surfaces that separate adjacent 3D nucleus objects. As illustrated in Figure 7a, each watershed surface wW separates two touching nuclei, denoted as cmath image and cmath image. We define the gradient of w as the average intensity gradient among all pixels in the watershed surface w, i.e., γw = (∑i∈w γi)/n, where n is the number of pixels in w. In the same manner, we define the intensity gradient γc for each nucleus object c by averaging the intensity gradients among all pixels in c. Let cw denote nucleus object formed by breaking w (in other words, merging cmath image and cmath image separated by w). Then, we have:

  • equation image(10)

Note that pixels corresponding to the watershed surface w itself should also be merged into cw. The confidence score of cw, based on equation 9 above, is called the “merging score,” and denoted Smath image in the following. Intuitively, the merging decisions are based on the following two observations: 1) The merging score Smath image should be higher than the score of either nucleus before merging, i.e., Smath image and Smath image. 2) The gradient of w should be relatively large compared with the gradient of nuclei cmath image and cmath image. This is based on assuming that intranuclear gradients are smaller than internuclear gradients, which generally holds true. With these observations in mind, we calculate the following ratios:

  • equation image(11)

The ratio Rmath image reflects the relative degree that the nuclei match the statistical model before and after merging, thus it accounts for the confidence we have on the breaking of w. The higher Rmath image is, the more confidence we have in merging cmath image and cmath image. The ratio Rmath image captures the intuition that a watershed surface with high intensity gradient is likely the boundary of two touching nuclei. The higher the Rmath image, the less likely that w represents background pixels, thus more likely that w belongs to the interior of a nucleus, rather than cmath image and cmath image being two nuclei separated by w. The above two ratios can be combined as follows into a single decision making criterion:

  • equation image(12)

where β is an empirical decision threshold (typical value 1.2).

thumbnail image

Figure 7. Illustrates two example cases encountered by the watershed surface-breaking algorithm for touching objects. a: A case that leads to merging of the two objects. b: The case where one object has multiple watershed surfaces (two in this example). In this case, there are two candidate watershed surfaces to choose from for breaking. Our algorithm prioritizes the watershed surface w that has a greater merging score cw, indicating better fit to the object model, thus the higher confidence towards its breaking.

Download figure to PowerPoint

Breaking of the watershed surface w results in the merging of two objects cmath image and cmath image. This procedure is repeated until no more watershed surfaces in W satisfy the condition in equation 12. Special attention needs to be given to nuclei that touch more than one object, as illustrated in Figure 7b. In this case, we have multiple candidate watershed surfaces to be selected for breaking. Intuitively, we must assign a higher priority to the one that has a greater merging score, i.e., break the watershed with the greatest cw value before other watershed surfaces.

Let Wc denote the watershed surfaces that are adjacent to nucleus object c, and each wWc separates c from its neighbors. The complete watershed surface breaking algorithm is described as follows. equation image

Validation of the object features.

In order to obtain a measure of discriminative capability of the selected features, we adopted Fisher's discriminant ratio criterion (FDR) (37):

  • equation image(13)

where u1, u2 are mean values of the feature in two classes, and σ1, σ2 are their corresponding standard deviations. Prior to calculating the FDR, we need to have nucleus class information available (similar to the training data set). In this work, all the nuclei identified by watershed segmentation described previously can be classified into two categories: a set of intact nuclei, which should not be merged during post-processing; and a set of nucleus fractures resulting from oversegmentation that need to be merged with their adjacent neighbors. To classify them, we first run the model-based watershed surface breaker using the features previously defined. At this stage, minimal manual editing may be performed to correct misclassifications, aided by a graphical user interface (GUI) that is described in the next section. Once we have class information for all nuclei, we can calculate FDR for each desired feature. Table 1 shows their average values, obtained by testing on a series of images.

Table 1. Discriminative Capability of Various Features as Measured by Fisher's Discriminant Ratio (FDR)
Object featureFisher discriminant ratio (FDR)
Volume (3D)0.50
Texture (3D)0.42
Convexity (3D)0.53
Shape (3D)0.37
Circularity (2D)0.33
Area (2D)0.25
Mean radius (2D)0.25
Eccentricity (2D)0.17

RESULTS AND DISCUSSION

  1. Top of page
  2. Abstract
  3. MATERIALS AND METHODS
  4. RESULTS AND DISCUSSION
  5. Acknowledgements
  6. LITERATURE CITED

The algorithms described above have been integrated into a GUI under IDL (Interactive Data Language, Research Systems, Boulder, CO), and C++. Evaluation of the proposed algorithm was carried out using a series of 3D microscope image stacks from rat brain tissue sections. A Windows 2000 computer (Pentium IV, 1.7 GHz, 512 MB memory) was used. The watershed segmentation takes less than 2 min. We pursued manual image analysis and visualization using MetaMorph software (Universal Imaging Corporation, West Chester, PA). Table 2 shows the results on several kinds of images with some different settings. In average, 97% of the nuclei have been correctly segmented in each image. The model-based watershed surface breaker algorithm performed superbly at the postprocessing stage, which merges most of the nucleus fragments, without introducing new clusters.

Table 2. Results of Comparing the Performance of the Algorithms With the Rule-Based Merging Method of Adiga et al. (7), on Several Typical 3D Rat Brain Cell Confocal Microscopy Image Stacks*
Image stackCell count generated by watershedNumber of merging operationsCorrectly segmented cellsNumber of undersegmentation errorsNumber of oversegmentation errors
ModelaRulebModelaRuleb
  • *

    The improvements result mainly from the use of a combined distance transform, and a richer model describing the morphometric characteristics of the nuclei.

  • a

    Model, the proposed statistical model-based watershed breaker.

  • b

    Rule, rule-based merging method in (7).

1216531641306
2204521553405
3223641591319
4207561523628
5176321462305

Although the 3D watershed algorithm is a direct conceptual extension of its 2D counterpart, the variety of 3D nucleus shape/intensity, presence of noise, and uneven cell staining greatly increase the complexity of the overall process. This implies that the image preprocessing plays a vital role in reducing the oversegmentation. Nevertheless, since we do not explicitly use the marker function during watershed segmentation, and treat the local minima of the transformed image as the markers, a high proportion of oversegmented nuclei are to be expected. However, the model-based watershed surface breaker effectively eliminates almost all of these oversegmented nuclei during postprocessing. One example is shown in Figure 8. The test result on five different image stacks is shown in Table 2.

thumbnail image

Figure 8. The segmentation result generated by the model-based merging procedure (i.e., the watershed surface breaker). There were 53 watershed surfaces broken, and most cases of oversegmentation were eliminated.

Download figure to PowerPoint

To further evaluate the proposed segmentation scheme, we compare the segmentation result of rule-based merging in recent prior work (7) with our proposed method. Segmentation in Adiga and Chaudhuri (7) requires the user to specify a nucleus size threshold prior to the rule-based merging. To obtain a fair comparison, we selected the size threshold in our work (7) such that the number of merging operations generated by this method equals the merging number in our method. Table 2 shows the comparison results. As can be seen, the model-based watershed breaker reduces both the nucleus clusters and oversegmented nuclei. The average accuracy is 97%. Clearly, using the nucleus size feature as the only criterion for merging during postprocessing of the watershed algorithm may be improved upon. For example, some small nuclei that are touching have a size below the specified threshold, but they have other good features, e.g., shape, so they should not be merged. On the other hand, a big nucleus might be divided into two or more fragments by watershed segmentation. Although each of these fragments is still above the specified threshold, we should merge them. The rule-based method (7) cannot handle the above cases well. The proposed model-based watershed breaker that incorporates a variety of nucleus features (both intensity and shape) is sophisticated enough to eliminate these errors.

It is worth mentioning that the model-based post-merging method can be made to incorporate a broader range of a prior knowledge than described here. The prior knowledge can potentially be derived from the image(s) under investigation or from some other similar images.

The effectiveness of the statistical model-based watershed surface breaking depends, to some degree, on two factors: 1) a good training set for object-model construction, and 2) good features for nucleus evaluation (score measurement). In this work, a very simple and intuitive training nucleus selection method was adopted for the sake of reducing manual operations. A better training set (and consequently, a better nucleus model) can be obtained by using more sophisticated approaches, or by greater human interaction. Nucleus features play an important role in our model-based postprocessing. In general, both geometric and intensity features should be exploited.

Feature selection is complicated by several factors. High variability in nucleus shapes, unsatisfactory cell staining which results in some holes inside nuclei, unsmooth boundaries, and image anisotropy are the common confounding factors. It should be kept in mind that the purpose of using nucleus features in this work is to determine whether a detected nucleus is intact or oversegmented. Ways to select a set of good features that can enhance the capability of the proposed watershed surface breaker need further study.

Our current efforts emphasize improving the accuracy of nucleus segmentation (especially the nucleus boundaries) for the purpose of FISH analysis, which is a very important task for neuroscience research. The detailed delineation of the nuclear boundary is an issue of considerable importance to the field and to accurate FISH quantitation. This issue has been studied in depth by our group and by others (66), and it continues to be a subject of ongoing study within our group. The high performance of the methods described here also set the stage for the development of large-scale automatic batch processing systems (with little or no human interaction) for cell segmentation, FISH analysis, and cell classification.

Acknowledgements

  1. Top of page
  2. Abstract
  3. MATERIALS AND METHODS
  4. RESULTS AND DISCUSSION
  5. Acknowledgements
  6. LITERATURE CITED

We thank colleagues Almira Vazarjanova, Ph.D, Monica Chawla, Ph.D, and Saurabh Roy for extensive assistance and guidance.

LITERATURE CITED

  1. Top of page
  2. Abstract
  3. MATERIALS AND METHODS
  4. RESULTS AND DISCUSSION
  5. Acknowledgements
  6. LITERATURE CITED
  • 1
    Belien JAM, van Ginkel AHM, Tekola P, Ploeger LS, Poulin NM, Baak JPA, van Diest PJ. Confocal DNA cytometry: a contour-based segmentation algorithm for automated three-dimensional image segmentation. Cytometry 2002; 49: 1221.
  • 2
    Czader M, Liljeborg A, Auer G, Porwit A. Confocal 3-dimensional DNA image cytometry in thick tissue sections. Cytometry 1996; 25: 246253.
  • 3
    Lockett SJ, Sudar D, Thompson CT, Pinkel D, Gray JW. Efficient, interactive, three-dimensional segmentation of cell nuclei in thick tissue sections. Cytometry 1998; 31: 275286.
  • 4
    Rodenacker K, Aubele M, Hutzler P, Adiga U. Groping for quantitative digital 3-D image analysis: an approach to quantitative fluorescence in situ hybridization in thick tissue sections of prostate carcinoma. Anal Cell Pathol 1997; 15: 1929.
  • 5
    Rigaut JP, Vassy J, Herlin P, Duigou F, Masson E, Briane D, Foucrier J, Carvajal-Gonzalez S, Downs AM, Mandard AM. Three dimensional DNA image cytometry by confocal scanning laser microscopy in thick tissue blocks. Cytometry 1991; 12: 511524.
  • 6
    Ancin H, Roysam B, Dufresne TE, Chesnut MM, Ridder GM, Szarowski DH, Turner JN. Advances in automated 3-D image analysis of cell populations imaged by confocal microscopy. Cytometry 1996; 25: 221234.
  • 7
    Adiga U, Chaudhuri BB. An efficient method based on watershed and rule-based merging for segmentation of 3-D histo-pathological images. Pattern Recognition 2001; 34: 14491458.
  • 8
    Tekola P, Baak JPA, van Ginkel AHM, Belien JAM, van Diest PJ, Broeckaert MAM. Three-dimensional confocal laser scanning DNA ploidy cytometry in thick histological sections. J Pathol 1996; 180: 214222.
  • 9
    Solorzano CO, Rodriguez EG, Jones A, Pinkel D, Gray JW, Sudar D, Lockett SV. Segmentation of confocal microscope images of cell nuclei in thick tissue sections, J Microscopy 1999; 193: 212226.
  • 10
    Sarti A, Solorzano CO, Lockett SJ, Malladi R. A geometric model for 3-D confocal image analysis. IEEE Trans Biomed Eng 2000; 47: 16001609.
  • 11
    Lockett SJ, O'Rand M, Rinehart C, Kaufman DG, Herman B, Jacobson K. Automated fluorescence image cytometry: DNA quantification and detection of chlamydial infections. Anal Quant Cytol 1991; 13: 2744.
  • 12
    Glasbey CA. An analysis of histogram-based thresholding algorithms. CVGIP: Graphical Models and Image Processing 1993; 55: 532537.
  • 13
    MacAulay C, Palcic B. A comparison of some quick and simple threshold selection methods for stained cells. Anal Quant Cytol Histol 1988; 10: 134138.
  • 14
    Haralick RM, Shapiro LG. Image segmentation techniques. CVGIP 1985; 29: 100133.
  • 15
    Zucker S. Region-growing: childhood and adolescence. Comput Graphics Image Process 1976; 5: 382399.
  • 16
    Kohler R. A segmentation system based on thresholding. Comput Graphics Image Process 1981; 15: 319338.
  • 17
    Visscher DW, Zarbo RJ, Greenawald KA, Crissman JD. Prognostic significance of morphological parameters and flow cytometric DNA analysis in carcinoma of the breast. Pathol Annu 1990; 25: 171210.
  • 18
    Wolf G. Use of global information and a priori knowledge for segmentation of objects: algorithms and applications. Proceedings of the SPIE 1992; 1660: 397408.
  • 19
    Ahrens P, Schleicher A, Zilles K, Werner L. Image analysis of Nissl-stained neuronal perikarya in the primary visual cortex of the rat: automatic detection and segmentation of neuronal profiles with nuclei and nucleoli. J Microscopy 1990; 157: 349365.
  • 20
    Garbay C, Chassery JM, Brugal G. An interactive region-growing process for cell image segmentation based on local color similarity and global shape criteria. Anal Quant Cytol Histol 1986; 8: 2534.
  • 21
    Lockett SJ, Herman B. Automatic detection of clustered, fluorescent-stained nuclei by digital image-based cytometry. Cytometry 1994; 17: 112.
  • 22
    Meyer F, Beucher S. Morphological segmentation. J Vis Commun Image Representation 1990; 1: 2146.
  • 23
    Beucher S. Watershed: hierarchical segmentation and waterfall algorithm. In: SerraJ, SoilleP, editors. Mathematical morphology and its applications to image processing. Dordrecht, The Netherlands: Kluwer Academic Publishers; 1994. p 6976.
  • 25
    Beucher S. The watershed transformation applied to image segmentation. Scanning Microsc 1992; 6: 299314.
  • 25
    Beucher S, Meyer F. The morphological approach to segmentation: the watershed transformation. In: Mathematical morphology in image processing. New York: Marcel Dekker Inc.; 1993.
  • 26
    Vincent L. Morphological gray scale recognition in image analysis: applications and efficient algorithms. IEEE Trans Image Process 1993; 2: 176201.
  • 27
    Cootes TF, Taylor CJ, Cooper DH, Graham J. Active shape models: their training and application. Comput Vis Image Underst 1995; 61: 3859.
  • 28
    Najman L, Schmitt M. Geodesic saliency of watershed contours and hierarchical segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence 1996; 18: 11631173.
  • 29
    Digabel H, Lantuejoul C. Iterative algorithms. ChermantJL, editor. Stuttgart, Germany: Riederer Verlag; 1987. p 8599.
  • 30
    Beucher S. Watersheds of functions and picture segmentation. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing. Paris; 1982. p 1928–1931.
  • 31
    Maisonneuve F. Sue le partage des eaux. Tech Report CMM. Paris: School of Mines; 1982.
  • 32
    Vincent L, Soille P. Watersheds in digital spaces: an efficient algorithm based on immersion simulations. IEEE Transactions on Pattern Analysis and Machine Intelligence 1991; 13: 583598.
  • 33
    Sijbers J, Scheunders P, Verhoye M, van der Linden A, van Dyck D, Raman E. Watershed-based segmentation of 3D MR data for volume quantization. Magn Reson Imaging 1997; 15: 679688.
  • 34
    Higgins WE, Ojard EJ. Interactive morphological watershed analysis for 3D medial images. Comput Med Imaging Graph 1993; 17: 387395.
  • 35
    Garbay C. Image structure representation and processing: a discussion of some segmentation methods in cytology. IEEE Transactions on Pattern Analysis and Machine Intelligence 1986; 8: 140146.
  • 36
    Malpica N, Solorzano CO, Vaquero JJ, Santos A, Vallcorba I, Garcia-Sagredo JM, Pozo F. Applying watershed algorithms to the segmentation of clustered nuclei. Cytometry 1997; 28: 289297.
  • 37
    Theodoridis S, Koutroumbas K. Pattern recognition. San Diego: Academic Press; 1999. 625 p.
  • 38
    Harner EJ, Slater PB. Identifying medical regions using hierarchical clustering. Soc Sci Med 1980; 14D: 310.
  • 39
    Filzmoster P, Baumgarner R, Moser E. A hierarchical clustering method for analyzing functional MR images. Magn Reson Imaging 1999; 17: 817826.
  • 40
    Roysam B, Ancin H, Bhattacharjya AK, Chisti MA, Seegal R, Turner JN. Algorithms for automated characterization of cell populations in thick specimens from 3-D confocal fluorescence microscopy data. J Microscopy 1994; 173: 115126.
  • 41
    Mackin RW, Roysam B, Holmes TJ, Turner JN. Automated three-dimensional image analysis of thick and overlapped clusters in cytologic preparations: application to cytologic smears. Anal Quant Cytol Histol 1993; 15: 405417.
  • 42
    Otsu N. A threshold selection method from gray-level histograms. IEEE Trans Syst Man Cybern 1979; SMC-9: 6266.
  • 43
    Hader DP. Image analysis: methods and applications. Boca Raton, FL: CRC Press; 2001. 463 p.
  • 44
    Vemuri BC, Radisavljevic A. Multiresolution stochastic hybrid shape models with fractal priors. ACM Transactions on Graphics 1994; 13: 177200.
  • 45
    Staib LH, Duncan JS. Model-based deformable surface finding for medical images. IEEE Trans Med Imaging 1996; 15: 112.
  • 46
    McInerney T, Terzopoulos D. Deformable models in medical image analysis: a survey. Med Image Anal 1996; 1: 91108.
  • 47
    Borgefors G. Distance transformations in digital images. Comput Vis Graphics Image Proc 1986; 34: 344371.
  • 48
    Haralick RM, Shapiro LG. Computer and robot vision. New York: Addison-Wesley; 1992.
  • 49
    Pawley JB. Handbook of biological confocal microscopy. New York: Plenum Press; 1995. 632 p.
  • 50
    Castleman K. Digital Image Processing. Upper Saddle River: NJ. Prentice-Hall; 1996. 667 p.
  • 51
    Zucker S, Hummel RA. A three dimensional edge operator. IEEE Trans Pattern Anal Mach Intell 1981; 3: 324331.
  • 52
    Ancin H. 3-D image processing algorithms for automated cell counting, measurement and population analysis. PhD thesis. Troy, NY: Rensselaer Polytechnic Institute; 1995.
  • 53
    Parker JR. Practical computer vision using C. New York: John Wiley & Sons; 1994. 476 p.
  • 54
    Bolender RP, Charleston JS Software for counting cells and estimating structural volumes with the optical disector and fractionator. Microsc Res Tech Vol. 25; 1993. p 314324.
  • 55
    Gundersen HJG. Stereology of arbitrary particles. J Microsc. 1986; 143: 345.
  • 56
    Marko M, Leith A, Parsons D. Three-dimensional reconstruction of cells from serial sections and whole-cell mounts using multilevel contouring of stereo micrographs. J Electron Microsc Tech 1988; 9: 395411.
  • 57
    Marko M Leith A. Contour based 3-D surface reconstruction using stereoscopic contouring and digitized images. In: KrieteA, editor. Visualization in biomedical microscopies. New York: VCH Press; 1992. p 4574.
  • 58
    Russ JC. Practical stereology. New York: Plenum Press; 1986. 196 p.
  • 59
    West MJ. New stereological methods for counting neurons. Neurobiol Aging 1993; 14: 275285.
  • 60
    West MJ. Regionally specific loss of neurons in the aging human hippocampus. Neurobiol Aging 1993; 14: 287293.
  • 61
    Liljeborg A, Czader M, Porwit A. A method to compensate for light attenuation with depth in three-dimensional DNA image cytometry using confocal scanning laser microscope. J Microscopy 1995; 177: 108114.
  • 62
    Margadant F, Leemann T, Niederer P. A precise light attenuation correction for confocal scanning microscopy with O(N4/3) computing time and O(N) memory requirements for N voxels. J Microscopy 1996; 182: 121132.
  • 63
    Strasters KC, Van der Voort HTM, Geusebroek JM, Smeulders AWM. Fast attenuation correction in fluorescence confocal imaging: a recursive approach. Bioimaging 1994; 2: 7892.
  • 64
    Guzowski JF, Worley PF. Cellular compartment analysis of temporal activity by fluorescence in situ hybridization (catFISH). In: TaylorGP, editor. Current protocols in neuroscience. New York: John Wiley & Sons; 2001. p 1.8.11.8.16.
  • 65
    Lockett SJ, Herman B. Automatic detection of clustered, fluorescent-stained nuclei by digital image-based cytometry. Cytometry 1994; 17: 112.
  • 66
    Mackin Robert W Jr, Newton Louise M, Turner James N, Roysam Badrinath. Advances in high-speed three-dimensional imaging and automated segmentation algorithms for thick and overlapped clusters in cytologic preparations: application to cervical smears. Anal Quant Cytol Histol 1998; 20: 105121.