An automatic segmentation algorithm for 3D cell cluster splitting using volumetric confocal images

Authors


Y.Y. Cai. School of Mechanical & Aerospace Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798, Singapore. Tel: +65-6790 5777; fax: +65 6791 1859; e-mail: myycai@ntu.edu.sg

Summary

With the rapid advance of three-dimensional (3D) confocal imaging technology, more and more 3D cellular images are becoming available. Segmentation of intact cells is a critical task in automated analysis and quantification of cellular microscopic images. One of the major complications in the automatic segmentation of cellular images arises from the fact that cells are often closely clustered. Several algorithms have been proposed for segmenting cell clusters, but most of them are 2D based; in other words, they are designed to segment 2D cell clusters from a single image. Such 2D segmentation methods can certainly be applied to each image slice within a 3D cellular volume to obtain segmented cell clusters, but in that case the 3D depth information within the volumetric images is not really used. Often, 3D reconstruction is conducted after slice-by-slice segmentation to build 3D cellular models from the segmented 2D cellular contours. Such a 2D-native process is not appropriate, as stacking individually segmented 2D cells or nuclei does not necessarily form correct and complete cells or nuclei in 3D. This paper proposes a novel and efficient 3D cluster splitting algorithm based on concavity analysis and interslice spatial coherence. We take advantage of the 3D boundary points detected using higher order statistics as the input contour for the 3D cluster splitting algorithm. The idea is to separate touching or overlapping cells or nuclei in a 3D-native way. Experimental results show the efficiency of our algorithm for 3D microscopic cellular images.

Introduction

Advances in laser-scanning microscopy, fluorescence labelling techniques and computer-assisted image analysis have opened new avenues for cellular exploration. The combination of these techniques holds tremendous potential for pathologists and biologists to study and understand the analytical and functional properties of cells and cellular constituents. Pathologists traditionally make a diagnostic decision by viewing a specimen under a conventional light microscope with two-dimensional (2D) observations, which is often insufficient to unravel all details of cell behaviour. Cells and nuclei are three-dimensional (3D) structures and hence 3D imaging should offer a better understanding of complex biological assemblies. It provides highly accurate, quantitative and objective feature assessment for better diagnostic decision-making that cannot be reliably achieved by 2D observations. Confocal laser scanning microscopy (CLSM) is one of the most promising stereological imaging tools, synthesizing 3D cellular images from 2D sections of tissues. Computer-aided 3D image processing and analysis will play a significant role in accurate and contextual quantification of microscopic cellular images.

Segmentation of intact cells from 3D images of thick tissue sections is a key step in the quantitative evaluation of cellular features. It can be achieved by either interactive or automatic approaches. Because manual delineation of cells and cell nuclei is very tedious and time-consuming, automatic segmentation procedures receive special attention from researchers worldwide. The development of a fully automated segmentation procedure for 3D cellular images, however, continues to pose challenges. One of the major complications in the automatic segmentation of cellular images arises from the fact that cells are often closely clustered. Cluster division to isolate individual cell objects is a critical task in the automatic evaluation of cytological and histological images to study the morphology of cells or nuclei and/or the spatial distribution of cells in the tissue specimen (Malpica et al., 1997; Pal et al., 1998; Nilsson & Heyden, 2005; Schmitt & Hasse, 2009).

Several automatic segmentation methods such as thresholding, simple region growing and edge or boundary detection have been reported in the literature for cell or nuclei segmentation. These methods provide a good separation of cells or nuclei from the background. However, they cannot be applied to isolate cells from a cluster. To address this issue, significant attempts have been made specifically towards automatic cluster splitting for cell or cell nuclei images (Vincent & Soille, 1991; Yeo et al., 1993; Ancin et al., 1995, 1996; Fernandez et al., 1995; Najman & Schmitt, 1996; Malpica et al., 1997; Wang, 1998; Solorzano et al., 1999; Adiga & Chaudhuri, 2001; Kumar et al., 2002, 2006; Ruberto et al., 2002; Lin et al., 2003; Wählby et al., 2003, 2004; Nilsson & Heyden, 2005; Gniadek & Warren, 2007; Li et al., 2007; Long et al., 2007; Bai et al., 2009; Schmitt & Hasse, 2009; Yu et al., 2009; Schmitt & Reetz, 2009; Zhong et al., 2009).

Watershed segmentation (Vincent & Soille, 1991) is one of the most valuable and popular tools. It has proven very useful in many areas of image segmentation and has often been applied for segmenting objects touching each other in an image. However, the basic watershed algorithm applied directly to the gradient magnitude image is prone to oversegmentation. To address this issue, many improved watershed methods have been proposed. One of the well-known solutions is marker-controlled or seeded watershed transformation (Ancin et al., 1995, 1996; Malpica et al., 1997; Solorzano et al., 1999; Ruberto et al., 2002; Lin et al., 2003; Wählby et al., 2003, 2004; Nilsson & Heyden, 2005; Gniadek & Warren, 2007; Schmitt & Hasse, 2009). In these methods, the watershed transformation is performed by defining a single seed or marker for each cell or nucleus using grey-scale mathematical morphological operations based on both morphological and intensity information. The drawback of seeded watershed transformation lies in defining the seed points automatically. If the image contains noise, it often ends up with more than one seed per object, leading to oversegmentation. By contrast, if the gradient of the image is not strong, it can end up with objects containing no seed, leading to undersegmentation. To solve the oversegmentation problem, a postprocessing merging step is performed after the watershed transformation (Najman & Schmitt, 1996; Adiga & Chaudhuri, 2001; Long et al., 2007). Obviously, when the image gradient within the touching cells is not strong, these algorithms may not produce satisfactory results.

Another widely used approach to cluster splitting is based on the analysis of concavities. Concavity-based algorithms offer a simple and intuitive way of clump splitting. These methods first determine the dominant points, that is, the concavities and convexities of the contour, and then define an optimal cut path or split path between concave points that minimizes a cost function (Yeo et al., 1993; Fernandez et al., 1995; Wang, 1998; Kumar et al., 2002, 2006; Schmitt & Reetz, 2009; Yu et al., 2009; Zhong et al., 2009). Yet another concavity analysis-based algorithm has been proposed, which splits the contour of touching cells based on concave points and ellipse fitting (Bai et al., 2009). Other methods such as ellipse fitting (Zhang et al., 2005), model-based approaches (Fok et al., 1996; Cong & Parvin, 2000; Lin, 2005) and graph-based approaches (Ta et al., 2009) are found in the literature. Although some of these methods are efficient, most of them are computationally intensive, time-consuming and require parameter initialization.

For 3D cellular images, analysis of the cellular structure in 3D, taking the axial (z-depth) information into account, is needed for any reliable evaluation based on quantitative measures of cellular features (Adiga & Chaudhuri, 2001). One problem is that confocal image data have better resolution in the xy direction than in the z-direction. Because of this significant blurring in the z-direction, it is especially difficult to obtain satisfactory results for a volumetric confocal data set by performing 2D cluster splitting on each planar slice without considering the spatial interslice relationship (axial information). 3D cell cluster segmentation therefore represents a significant effort in cellular image processing. Most of the proposed 3D segmentation methods for cell cluster splitting are based in some way on the watershed algorithm (Ancin et al., 1995, 1996; Solorzano et al., 1999; Adiga & Chaudhuri, 2001; Lin et al., 2003; Wählby et al., 2004; Gniadek & Warren, 2007; Long et al., 2007) and thus inherit its disadvantages. The concavity analysis-based approaches, however, provide prominent results and have been successfully applied in many domains for splitting cell clusters. All the concavity analysis-based cluster splitting approaches described earlier isolate cells or nuclei from 2D images. To segment volumetric confocal cellular images, 2D concavity analysis-based cluster splitting methods draw the split paths around each cell or nucleus in all successive 2D (xy) planes and stack the segmented planar images to form a 3D segmentation result. In this paper, a novel and efficient 3D cluster splitting algorithm based on concavity analysis and interslice spatial coherence is proposed to separate touching or overlapping cells or nuclei in volumetric confocal cellular images. Contour data generated from the image are used as the input for the proposed 3D cluster splitting algorithm. Effective local segmentation of contours is an important step for efficient cluster splitting. The contour extraction is accomplished in 3D space using a higher order statistics-based 3D boundary detection method; we thus take advantage of the boundary points detected in 3D as the input contour for the 3D cluster splitting algorithm. The entire 3D cluster splitting algorithm comprises three steps: (1) contour preprocessing, (2) concave point detection and (3) 3D split path estimation. The main goal of this work is to provide a computationally efficient 3D cell cluster splitting method based on concavity analysis.

Materials and methods

Sample preparation

Frozen sections of embryonic mouse brain are collected on slides, permeabilized in 0.3% Triton X-100 for 5 min and then washed in phosphate-buffered saline three times for 5 min. The sections are treated with RNAse (0.2 mM in phosphate-buffered saline) for 1 h at room temperature before incubation with propidium iodide (PI) (50 μg mL−1 in phosphate-buffered saline) for 15 min. The sections are mounted in Dako mounting medium after a 2 min wash in phosphate-buffered saline. A Bio-Rad MRC-600 confocal fluorescence microscope equipped with a krypton/argon laser is used for fluorescence microscopy. Z stacks are collected using Bio-Rad COMOS software. Although propidium iodide fluorescence does not fade appreciably during collection of the stacks, the images are processed to correct for any fading that might have occurred and to reduce noise and artefacts.

Contour preprocessing

3D cell boundaries or contours of confocal cellular images are obtained by a traditional edge or boundary detection procedure. We apply a 3D boundary detection method using higher order statistics developed earlier by our group (Indhumathi et al., 2009) to detect the cell contours. After boundary extraction, the boundary points are preprocessed to form a closed contour by filling holes or missed boundary points using morphological operations. The contour of a cell or of touching cells is a sequence of boundary points, and these cell contours are used as the input for the cluster splitting algorithm.
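As an illustration, the gap bridging and hole filling can be sketched with standard morphological operations. This is a minimal sketch assuming the boundary detector yields a binary 3D mask; the function name, structuring element and iteration count are illustrative and not the exact implementation.

```python
# Minimal sketch of the contour preprocessing step: bridge small gaps in the
# detected boundary and fill holes so that every cell contour becomes a closed
# region. Assumes a binary (z, y, x) boundary mask; names are illustrative.
import numpy as np
from scipy import ndimage

def close_contours(boundary_mask, closing_iterations=1):
    """Return a binary volume with boundary gaps bridged and interior holes filled."""
    closed = ndimage.binary_closing(boundary_mask, iterations=closing_iterations)
    filled = np.zeros_like(closed)
    # Fill holes slice by slice so that each optical section contains closed contours.
    for z in range(closed.shape[0]):
        filled[z] = ndimage.binary_fill_holes(closed[z])
    return filled
```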

As a first step of the cluster splitting algorithm, concave points along the contour need to be extracted. Because the original rough boundaries of the cell or nuclei objects exhibit small-scale fluctuations, concavity analysis can be complicated (i.e. excessive false concave points will be found). To overcome this problem, the boundary data are smoothed before concave point detection. Boundary smoothing could be done simply with a standard Gaussian filter; however, the purpose of smoothing is to eliminate the redundant noisy points that introduce deformities in the contour while preserving the significant locations of the boundary points, which Gaussian smoothing cannot guarantee. Polygonal approximation is an efficient and popular method for contour smoothing owing to its simplicity, good performance and low computational cost.

A simple polygonal approximation based on a progressive vertex selection method is used here. A closed digital contour can be represented by a clockwise-ordered sequence of points $C^{r} = \{p_i\}$, $p_i = (x, y, z)$, where $i = 1, \ldots, L$, $L$ is the number of points on the closed contour, $r$ is the contour region and $z$ is the slice number. To start the approximation, we first select two points $p_i$ and $p_j$, where $j = i + n$ and $n \geq 1$; here we set $n = 2$. Let $\overline{p_i p_j}$ denote the line segment connecting the points $p_i$ and $p_j$. For each point $p_k$ between $p_i$ and $p_j$, calculate the distance $d(p_k, \overline{p_i p_j})$, the perpendicular distance from the point $p_k$ to the line segment $\overline{p_i p_j}$. If there is a point $p_t$ whose distance exceeds a predetermined threshold value $D_{Th}$, that is $d(p_t, \overline{p_i p_j}) > D_{Th}$, then $p_t$ is kept as a contour point after polygonal approximation, the point $p_i$ is moved to $p_t$ and the same procedure continues through the rest of the points in the clockwise direction along the contour. Otherwise, $p_j$ is moved to the next point $p_{j+1}$ and the distance calculation and comparison are repeated until such a point $p_t$ is found or $p_j$ reaches the end point of the contour. Fig. 1(a) illustrates the procedure of the polygonal approximation algorithm. Figs 1(b) and (c) show an example of how the fluctuations in the boundary data are smoothed by polygonal approximation.

Figure 1.

Boundary smoothing with polygonal approximation. (a) Illustration of the polygonal approximation procedure. If the perpendicular distance $d(p_k, \overline{p_i p_j})$ of a contour point $p_k$ from the line segment between two points $p_i$ and $p_j$ exceeds $D_{Th}$, then the vertex $p_k$ is not removed; (b) Rough boundary points with small-scale fluctuations before polygonal approximation; and (c) Smooth boundary points after polygonal approximation.
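A minimal per-slice sketch of this progressive vertex selection is given below. It works on an ordered list of (x, y) contour points of one slice, uses n = 2 and a distance threshold d_th as in the text, and ignores the wrap-around at the closed end of the contour; all names are illustrative.

```python
# Sketch of the progressive vertex selection used for polygonal approximation.
import numpy as np

def _perpendicular_distance(p, a, b):
    """Distance from point p to the finite segment joining a and b."""
    p, a, b = np.asarray(p, float), np.asarray(a, float), np.asarray(b, float)
    ab = b - a
    denom = float(np.dot(ab, ab))
    if denom == 0.0:
        return float(np.linalg.norm(p - a))
    t = np.clip(np.dot(p - a, ab) / denom, 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def polygonal_approximation(contour, d_th=1.5, n=2):
    """Keep only the vertices of an ordered contour that deviate more than d_th."""
    kept = [0]                      # index of the current anchor point p_i
    i, j = 0, n
    while j < len(contour):
        # Check every point strictly between p_i and p_j.
        far = [k for k in range(i + 1, j)
               if _perpendicular_distance(contour[k], contour[i], contour[j]) > d_th]
        if far:
            t = far[0]              # first point deviating more than d_th is kept
            kept.append(t)
            i, j = t, t + n         # restart the segment from the kept vertex
        else:
            j += 1                  # segment still approximates the contour well
    return [contour[k] for k in kept]
```

On a closed contour, the last kept vertex is implicitly joined back to the first one.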

Concave point detection

Concave points represent important features of cellular objects and play a crucial role in cluster decomposition or splitting procedures. Many methods for the detection and classification of concave points are found in the literature. One family of methods detects concave points through convex hull analysis, using various concavity measures defined on the region bounded by the boundary arc and the convex hull chord, such as the area of the concavity region, the distance from the convex hull chord to the curve, the distance from the curve to the convex hull (Rosenfeld, 1985), the concavity degree and normalized weight of concavity (Yeo et al., 1993) and the largest perpendicular distance from the boundary arc to the convex hull (Kumar et al., 2006). Another method detects concave points by analysing the distances from contour points to the skeleton or medial axis of the cell: contour points with the shortest distance to the medial axis are taken as concave points (Mao et al., 2003; Wang, 2007). Liang (1989) and Wu & Kemeny (1992) used k-curvature to detect and measure concaveness. A method based on a circular mask has been developed to detect the concavity points together with the concavity orientation (Zhong et al., 2009). Fernandez et al. (1995) used a square mask operator to determine the concaveness measure and identify the concave points along the cell boundary.

Among the above methods for concave point detection, the work of Fernandez et al. is inspiring because of its simplicity and robustness. To evaluate the measure of concaveness, the method proposed by Fernandez et al. (1995) is adopted in this paper with some optimization. The principle of locating concave points on the cell contours is to calculate the concaveness value for each boundary point according to the following formula:

$$c^{r}_{m}(j) = \sum_{(x,\,y)\,\in\,M_j} I^{r}_{m}(x, y)$$

where $I^{r}_{m}$ is the binary image concerned, $M_j$ is a square mask of size $L \times L$ centred at $j$, $r$ is the contour region, $m$ is the slice number and $j$ is the boundary point, which runs over the contour of the cell image; that is, the concaveness of the $j$th boundary point on the contour region $r$ is calculated as the number of points of the $L \times L$ mask centred on $j$ that intersect with the binary image. To avoid uncertainties, the concaveness values of the two adjacent contour neighbours $(j-1)$ and $(j+1)$ are also added to the present concaveness value. One of the drawbacks of the concave point detection of Fernandez et al. is its sensitivity to boundary roughness. Moreover, in this method, concave points are detected by thresholding the computed concaveness values, and a single global threshold value yields many false concave points.

In our method, polygonal approximation is done first to smooth the boundary roughness, followed by calculating the concaveness measure for each boundary point along the contour. When the mask is run along the boundary of the binary image, a series of concaveness values is obtained, which we refer to as the concaveness map (CM). In the measured concaveness map, local maxima of the concaveness values are found where $j$ runs over a concavity point, as shown in Fig. 2. These local maximum points are referred to as concave points. However, there will be some false local extreme values of concaveness along the boundary. Hence, the local peak points (i.e. contour points with high concaveness value) that exceed a certain threshold value $\Delta c_{th}$ are extracted as the most representative concave points. The threshold is calculated as $\Delta c_{th} = \mu^{r}_{CM} + \beta \sigma^{r}_{CM}$, where $\mu^{r}_{CM}$ is the mean and $\sigma^{r}_{CM}$ is the standard deviation of the concaveness map of the contour region $r$, and $\beta$ is a constant ranging from 0.5 to 1.5.

Figure 2.

(a) Concave point detection using the concaveness measure based on a square mask; and (b) Concaveness value along the boundary of the contour. Red dots indicate the peak points above the threshold value; these points correspond to the concave points in (a).

To obtain the most significant concave points, one more rule is incorporated. Let $P_c(x, y)$ represent the current concave pixel, and let $P_{c+l}(x, y)$ and $P_{c-l}(x, y)$ represent the boundary pixels at a distance $l$ along the contour, where $l$ is selected as 10 pixels. The vertical distance from the concave pixel $P_c(x, y)$ to the line $\overline{P_{c-l} P_{c+l}}$ is calculated. If $P_c(x, y)$ is a real concave point, then this vertical distance should be above the tolerance range, i.e. $VD(P_c(x, y)) > tol$, where $tol$ ranges from 2 to 4.
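The concaveness map, its statistical threshold and the vertical-distance rule can be condensed into the following per-slice sketch. It assumes a 2D binary image and an ordered list of (row, column) contour points; the default mask size, β, l and tol values and all function names are illustrative only.

```python
# Sketch of the concave point detection: square-mask concaveness map,
# mu + beta*sigma thresholding and the vertical-distance rule.
import numpy as np

def concaveness_map(binary_image, contour, mask_size=5):
    """Concaveness of each contour point: object pixels inside an L x L mask
    centred on the point, plus the values of its two contour neighbours."""
    h = mask_size // 2
    padded = np.pad(binary_image, h, mode='constant')
    raw = np.array([padded[y:y + mask_size, x:x + mask_size].sum()
                    for (y, x) in contour])
    return raw + np.roll(raw, 1) + np.roll(raw, -1)   # add neighbours (j-1), (j+1)

def detect_concave_points(binary_image, contour, mask_size=5, beta=1.0, l=10, tol=3.0):
    """Return indices of contour points accepted as significant concave points."""
    cm = concaveness_map(binary_image, contour, mask_size)
    threshold = cm.mean() + beta * cm.std()           # delta_c_th = mu + beta*sigma
    pts = np.asarray(contour, dtype=float)
    n_pts = len(contour)
    concave = []
    for k in np.flatnonzero(cm > threshold):
        a, b = pts[(k - l) % n_pts], pts[(k + l) % n_pts]
        chord = b - a
        norm = np.linalg.norm(chord)
        if norm == 0:
            continue
        # Perpendicular ("vertical") distance from the candidate to the chord.
        if abs(np.cross(chord, pts[k] - a)) / norm > tol:
            concave.append(int(k))
    return concave
```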

Proposed 3D cluster splitting method

Various ad hoc methods have been addressed in the literature for cluster splitting based on the analysis of concavities, followed by defining an optimal cut path or split path between concave points that minimizes a cost function, but almost all the proposed algorithms aim to split cells or nuclei in 2D images (Yeo et al., 1993; Fernandez et al., 1995; Wang, 1998; Kumar et al., 2002, 2006; Yu et al., 2009; Schmitt & Reetz, 2009; Zhong et al., 2009). A 3D image volume can be considered as a sequence of 2D images, so the developed 2D cluster splitting algorithms can obviously be applied to segment individual cells or nuclei in each 2D slice of a 3D image. However, this 2D slice-by-slice cluster splitting of a 3D volume without considering the depth (z) information may fail simply because stacking individually segmented 2D cells or nuclei does not necessarily form complete 3D cells or nuclei. For reliable cluster splitting of confocal cell images, the images need to be analysed in 3D by considering the interslice connectivity. Hence, we propose a two-step, layer-based 3D split path selection. As a first step, we select an optical section, $z_m$, as the reference section and find the best split path between the detected concave points based on certain selection criteria (Yeo et al., 1993; Fernandez et al., 1995; Wang, 1998; Kumar et al., 2002, 2006; Schmitt & Reetz, 2009; Yu et al., 2009; Zhong et al., 2009). Secondly, we estimate the split paths for the rest of the sections using the relationship with the previous section. A flowchart of the work sequence for the proposed algorithm is shown in Fig. 3.

Figure 3.

Flow chart for the proposed 3D cluster splitting algorithm.

Reference section split path estimation.  Degradation of the fluorescent signal along the depth of the specimen owing to light attenuation is one of the frequently observed drawbacks in confocal microscopy (Pawley, 1995). Because of this attenuation, the intensity of the image slices decreases along the z-direction in 3D cellular confocal images (Sun et al., 2004) (Fig. 4). Several sophisticated methods have been studied to correct for depth-dependent signal attenuation; however, such correction methods cannot fully recover the intensity loss. Because of this intensity loss, the contour radius of the cells or cell nuclei decreases from slice to slice and the contour does not follow a regular shape in all slices. The reference slice is selected by considering these factors: it is the section after which both the intensity and the cluster contour radius start to decrease. In all the confocal cellular imaging data sets used in this paper, the first slice (i.e. the topmost slice) is the brightest layer and is considered as the reference slice.

Figure 4.

Illustration of intensity decay due to attenuation factor in confocal image stack. Zm is the reference slice, which is the brightest layer.
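Under the assumption used for the data sets in this paper, namely that the reference section is simply the brightest slice of the stack, its selection can be sketched as follows (function name illustrative):

```python
# Sketch: choose the reference section as the optical slice with the highest
# mean intensity (the topmost, brightest slice in our data sets).
import numpy as np

def select_reference_slice(volume):
    """volume is a (z, y, x) intensity stack; returns the index of the brightest slice."""
    mean_per_slice = volume.reshape(volume.shape[0], -1).mean(axis=1)
    return int(np.argmax(mean_per_slice))
```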

After selecting the reference section, concave points are detected using the concave point detection method explained in the previous section. Once the most dominant concave points are located along the cell contours, the next step is to search for splitting paths, based on certain criteria, that connect the detected concave points in an appropriate way so as to divide the clustered cells or nuclei into individual cells or nuclei.

Once the cluster objects in the reference section are divided, the split paths for the rest of the sections are detected by considering the interslice spatial coherence with the previous slice. Suppose $Z_m$ is the reference section; then the cluster objects in section $Z_{m+1}$ are divided based on their relationship with the reference section $Z_m$. Similarly, the cluster objects in all the remaining sections are divided based on their relationship with the previous slice, taken as the reference slice (e.g. for section $Z_{m+2}$, $Z_{m+1}$ is considered the reference section), thereby exploiting the depth information in 3D space.

Criteria for acceptance of a splitting path.  After finding all the concave points, appropriate splitting paths have to be detected based on certain constraints. Let the optical section $z_m$ represent the reference section, and let $\{cp_i\}_{i=1,\ldots,n}$ represent the list of concave points on the contour $C$ in the cluster region $r$ of the reference slice $z_m$, where $n$ is the number of concave points on the contour. The concave points in a cluster region are sorted according to their concaveness measure, and the concave point with the highest concaveness measure ($cp_1$) is set as the start point of a possible split path. When the first significant concavity point is selected, the splitting path starts from that point, and the remaining task is to identify the end point and thereby form a splitting path. In the proposed algorithm, the criteria illustrated in Fig. 5 are used to decide whether a splitting path is accepted. We group the basic criteria into two types:

Figure 5.

Illustrations of split path section criteria: (a) Length of the split path; (b) perimeter of the split path; (c) opposite alignment; (d) no intersection between split paths; (e) no intersection with the background; (f) single concavity.

  • (a) Type 1: Essential criteria

    • 1. Length of the split path. The Euclidean distance $D(cp_i, cp_j)$ between two concavity pixels is called the length of the split path (Fig. 5a).
    • 2. Perimeter of the split path. The minimal distance along the contour between the concavity pixels (i.e. the boundary length) is called the perimeter of the split path. In Fig. 5(b), the distances along the contour between the concave points m and n are marked in green and blue. The shorter boundary length, highlighted in green, is the perimeter for the concave pixels m and n.
  • (b) Type 2: Supportive criteria

    • 1. Opposite alignment. The concave point aligned opposite to the start point should be approximately 180° apart. A straight line is extended from the start point until it touches the boundary on the opposite side at one point, referred to as the imaginary end point. The real end point is then the concave point closest to this imaginary end point, that is, the one lying most nearly opposite the start point. In Fig. 5(c), for the concave point x, suppose both concave points y and z satisfy the essential criteria; then the opposite alignment is considered. The point x' is the imaginary end point and the point y lies close to x' (highlighted in black). Thus, the end point y is more nearly opposite to x than z is, and hence the split path in green is the valid split path.
    • 2. No intersection between split paths. If a candidate split path intersects any existing split path, it is invalid and must be omitted. In Fig. 5(d), the two lines intersect each other and hence one of them is an invalid split path. A split path is valid only if it does not intersect any split path that has already been included.
    • 3. No intersection with the background. If the split path runs through the background of the image, it must be avoided. In Fig. 5(e), the split path crosses the background to connect the end point and hence should not be selected as a valid split path.
    • 4. Single concavity. If a single concave pixel is left over in a region, an end point in the opposite direction is marked on the boundary and a split path is constructed, provided it satisfies the basic criteria (Fig. 5f).

Split algorithm to select the best split path.  The following algorithm selects the best split path between two concavity pixels. First, two thresholds are defined: (1) a distance threshold $D_{th}$ and (2) a perimeter threshold $P_{th}$, both derived from the total boundary length of the cluster contour in the region $r$ and the number $n$ of concave points in that region. Let $cp_i$ be the starting concave point. The remaining concave points $\{cp_j\}_{j=i+1,\ldots,n}$ with distance measure $D(cp_i, cp_j) < D_{th}$ are considered as possible end points and are sorted in ascending order according to the length of the split path. The minimal distance $\min\{D(cp_i, cp_j)\}$ gives the shortest split path. If the concavity pixel with the minimal distance has a perimeter greater than $P_{th}$, it is accepted as the split path and no other criteria need to be considered. Otherwise, the next concave point in the list of possible end points is checked against the basic criteria. If two possible end points are found, the degree of oppositeness is checked and the concavity pixel with a degree of oppositeness closest to 180° is chosen. It is also checked whether there is any intersection with existing split paths or any crossover with background pixels before deciding the appropriate split path. Once the appropriate split path is found, the two concavity pixels are removed from the list $\{cp_i\}_{i=1,\ldots,n}$ and the same procedure continues for the remaining concavity pixels until all split paths are found. If a single concave pixel is left over in a region, an end point in the opposite direction is marked on the boundary and a split path is constructed, provided it satisfies the basic criteria.
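The greedy pairing of concave points described above can be condensed into the following sketch. The supportive criteria (opposite alignment, no intersection with existing paths, no crossing of the background) are abstracted behind an is_valid hook, the two thresholds are passed in as parameters, and the simplified acceptance order as well as all names are illustrative rather than the exact implementation.

```python
# Sketch of split path selection between concave points of one cluster contour.
import numpy as np

def contour_perimeter(idx_a, idx_b, n_points):
    """Shortest distance along the closed contour (in points) between two vertices."""
    d = abs(idx_a - idx_b)
    return min(d, n_points - d)

def select_split_paths(concave, concaveness, contour, d_th, p_th,
                       is_valid=lambda a, b: True):
    """Greedily pair concave points into split paths.

    concave     -- indices of concave points on the closed contour
    concaveness -- concaveness value of every contour point
    contour     -- ordered (x, y) contour points
    is_valid    -- hook for the supportive criteria (oppositeness, intersections,
                   background crossing)
    """
    pts = np.asarray(contour, dtype=float)
    remaining = sorted(concave, key=lambda k: concaveness[k], reverse=True)
    paths = []
    while len(remaining) >= 2:
        start = remaining.pop(0)
        # Essential criterion 1: end point candidates closer than d_th,
        # ordered by the length of the split path.
        candidates = sorted((k for k in remaining
                             if np.linalg.norm(pts[start] - pts[k]) < d_th),
                            key=lambda k: np.linalg.norm(pts[start] - pts[k]))
        for k in candidates:
            # Essential criterion 2: the boundary length between the two points
            # must be long enough, otherwise both lie on the same indentation.
            if contour_perimeter(start, k, len(contour)) > p_th and is_valid(start, k):
                paths.append((start, k))
                remaining.remove(k)
                break
    return paths
```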

Split path estimation for the remaining slices.  Once the cluster objects in the reference section are divided, the splitting paths for the rest of the sections are detected by using the relationship with the previous slices (Fig. 4). First, the sections $Z_{m+1}$ and $Z_{m-1}$ are selected and their split paths are estimated based on the relationship with the reference slice $Z_m$. Let $\{S^{r}_{k}\}$, $k = 1, \ldots, p$, represent the list of split paths calculated on the contour $C^{r}_{z_m}$ in the cluster region $r$ of slice $z_m$, where $p$ is the number of split paths on that contour. For each split path, the split path coordinates $S^{r}_{k}((x_1, y_1, z_{m+1}), (x_2, y_2, z_{m+1}))$ for slice $Z_{m+1}$ are first marked on the contour $C^{r}_{z_{m+1}}$ based on the split path coordinates $S^{r}_{k}((x_1, y_1, z_m), (x_2, y_2, z_m))$ of section $z_m$ for the region $r$. Once the split path coordinates are marked on section $Z_{m+1}$, then for each split path the contour pixels within a width $W$ around $S^{r}_{k}((x_1, y_1, z_{m+1}), (x_2, y_2, z_{m+1}))$ that have the shortest distance are found, and that pixel pair is selected as the optimum coordinates $S^{r}_{k}((x_1, y_1, z_{m+1})^{*}, (x_2, y_2, z_{m+1})^{*})$ for the split path. One criterion that needs to be considered while drawing the split path in section $Z_{m+1}$ is that there should not be any overlap with background pixels. No other split path criteria are required. Similarly, the split paths for all the cluster objects in all the remaining sections, $Z_0 < Z_m < Z_t$, are detected by considering the previous section as the reference section (e.g. for section $Z_{m+2}$, $Z_{m+1}$ is considered the reference section), where $t$ is the total number of sections in the 3D volume.
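A sketch of how one split path is propagated from a slice to the next is given below. For each end point of the split path found on the previous slice, the contour pixel of the next slice that lies within the window W and is closest to it is selected; the additional check that the new path does not cross the background is omitted here, and all names are illustrative.

```python
# Sketch: transfer a split path from slice z to slice z+1 using interslice coherence.
import numpy as np

def propagate_split_path(prev_path, contour_next, width=5):
    """prev_path is ((x1, y1), (x2, y2)) from the previous slice; contour_next is
    the ordered (x, y) contour of the same cluster on the next slice."""
    pts = np.asarray(contour_next, dtype=float)
    new_ends = []
    for end in prev_path:
        end = np.asarray(end, dtype=float)
        # Candidate contour pixels of the next slice inside the window W around the
        # previous end point; fall back to the whole contour if the window is empty.
        in_window = pts[np.max(np.abs(pts - end), axis=1) <= width]
        candidates = in_window if len(in_window) else pts
        # Keep the candidate closest to the previous end point.
        best = candidates[np.argmin(np.linalg.norm(candidates - end, axis=1))]
        new_ends.append((float(best[0]), float(best[1])))
    return tuple(new_ends)
```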

Experimental results

Evaluation of the proposed algorithm was carried out using various 3D phantom images and 3D microscope image stacks from rat brain tissue sections. An Intel® Pentium-IV workstation equipped with a 2.40 GHz CPU and 768 MB RAM is used for developing and testing the proposed algorithm. The implementation is done in Microsoft Visual C++ on the Windows platform. In this section, to demonstrate the efficiency of our proposed 3D cluster splitting algorithm, we illustrate the results of the algorithm applied to some 3D phantom images and 3D microscopic images.

Performance on 3D phantom image

To elucidate the performance of the proposed algorithm, we have carried out validation experiments using computer-synthesized 3D phantom image stacks. To generate the 3D phantom images, we first create an image volume with 10–15 slices, each in the size range of 128 × 128 pixels to 512 × 512 pixels, and place in it cluster objects whose shapes are close to those of real cells. The ImageJ software, publicly available at http://rsb.info.nih.gov/ij/, is used to generate the 3D object clusters. No attenuation is incorporated in the 3D synthetic images, and hence the object size does not change across the slices. First, each image is processed by a boundary detection method to obtain the contours of the objects. Then, the contours are used as the input image for our algorithm to perform the cluster splitting. No preprocessing is required for these phantom images.

Fig. 6(a) shows the 3D projection of the volume rendered image stack of a synthesized cluster image with two touching objects, and Fig. 6(b) illustrates the result of our proposed 3D cluster splitting algorithm. Fig. 6(c) shows the final segmented result superimposed on the original image and colour coded: the segmented boundary voxels of each object are marked in green and the internal objects in red. The segmentation results for clusters with three touching objects and with multiple touching objects are shown in Fig. 7. Applying the proposed algorithm to the 3D synthetic multiple touching objects efficiently determines the split paths for the objects. Fig. 8 shows a synthetic cell image containing many clusters of different sizes and shapes. Fig. 8(a) shows the 3D projection of the volume rendered synthetic image stack. Fig. 8(b) shows the reference section selected from this stack, and Fig. 8(c) shows the boundary points for the reference section detected by the boundary detection procedure. The segmentation result of the proposed algorithm for the reference section is shown in Fig. 8(d). Fig. 8(e) shows the segmented result of Fig. 8(d) merged with the original image of the reference section shown in Fig. 8(b). Fig. 8(f) shows the 3D projection of the segmented result superimposed on the original image and colour coded, with the segmented boundary voxels of each object in green and the internal objects in red.

Figure 6.

(a) 3D projection of volume rendered image stacks of synthesized cluster image with two touching object; (b) Result of our proposed 3D cluster splitting algorithm; and (c) Colour-coded segmented result with boundary lines merged on to the original image in (a). Red colour represents the internal objects and green colour represents the boundary voxels.

Figure 7.

Illustration of our proposed cluster splitting algorithm results on 3D synthesized cluster image with multiple touching objects.

Figure 8.

Illustration of our proposed cluster splitting algorithm results on 3D synthetic image stack with many cluster objects of different size and shape: (a) 3D projection of volume rendered image stacks of synthesized cluster image; (b) Reference section selected from the stack of 3D synthetic image stack shown in (a); (c) Boundary points for the reference section shown in (b); (d) Automatic segmentation by our proposed algorithm on the reference section; (e) Segmented result shown in (d) merged with the original image of the reference section shown in (b); and (f) 3D projection of the segmented result superimposed on to the original image and is colour coded. Red colour represents the internal objects and green colour represents the boundary voxels.

Performance on 3D confocal microscopic images

Evaluation of the proposed algorithm was carried out using a series of 3D microscope image stacks obtained from frozen sections of embryonic mouse brain tissue. A Bio-Rad MRC-600 confocal fluorescence microscope equipped with a krypton/argon laser was used for fluorescence microscopy. Z stacks were collected using Bio-Rad COMOS software. Fig. 9 illustrates the results of the proposed algorithm, tested on an image stack showing nuclei in a section of mouse brain labelled with To-Pro-3, which bleaches rapidly. This data set comprises a stack of 28 optical slices and is an example of a cluster of two touching cell nuclei. Fig. 9(a) shows a 3D projection of the volume rendered To-Pro-3 labelled nuclei data set. Fig. 9(b) illustrates the contour of the data set shown in Fig. 9(a) obtained by the boundary detection algorithm. The contour is preprocessed using polygonal approximation and is used as the input for the cluster splitting algorithm. Fig. 9(c) shows the segmented nuclei resulting from our proposed 3D cluster splitting algorithm. The image stack is colour coded to show the contour and the split paths clearly: blue represents the boundary points and green represents the split paths generated by our cluster splitting algorithm. Fig. 9(d) illustrates the final segmented result superimposed on the original image shown in Fig. 9(a) and colour coded, with the segmented boundary voxels of each object in green and the internal objects in red.

Figure 9.

Illustration of the cluster splitting algorithm in 3D: (a) A volume rendered stack of confocal images comprising 28 optical slices of To-Pro-3 stained nuclei from a frozen section of a mouse brain; (b) Boundary surface points for the image stack shown in (a) obtained by our boundary detection algorithm using HOS; (c) Split paths generated by our proposed 3D cluster splitting algorithm. To show the split paths clearly, the image is colour coded. Blue colour represents the contour points and green colour represents the split paths generated by our cluster splitting algorithm; and (d) Final colour-coded segmentation result with boundary lines merged on to the original image in (a). Red colour represents the internal objects and green colour represents the boundary voxels.

Because of photobleaching, the intensity values of To-Pro-3 labelled nuclei for the subsequent sections decreased progressively. Visual inspection of the 3D results is done by examining each slice of the result and comparing it with the original images. Fig. 10(a(1), a(2) and a(3)) show sections #1, #5 and #12 selected from the stack showing To-Pro-3 stained nuclei shown in Fig. 9(a). Loss of fluorescence due to photobleaching of the fluorochrome is quite noticeable. Also the contour of the cell nuclei is not smooth due to attenuation. Fig. 10(b(1), b(2) and b(3)) show the contour of the image sections shown in Fig. 10(a(1), a(2) and a(3)). Fig. 10(c(1), c(2) and c(3)) show the result of our proposed cluster splitting algorithm for the images shown in Fig. 10(a(1), a(2) and a(3)). It is obvious from the result that our proposed 3D cluster splitting algorithm is able to correctly split the two touching nuclei in 3D.

Figure 10.

(a(1), a(2), a(3)) Sections #1, #5 and #12 selected from the stack of 28 confocal images shown in Fig. 9(a); (b(1), b(2), b(3)) Boundary surface points for the image slice shown in (a(1), a(2), a(3)) after contour preprocessing; and (c(1), c(2), c(3)) Split path generated by our 3D cluster splitting algorithm.

Some other examples of microscopic cell nuclei with multiple touching nuclei are shown in Fig. 11. All three image stacks show nuclei labelled with To-Pro-3 taken from a section of mouse brain. Correctly and incorrectly segmented objects are determined visually by comparing the reference sections of all three image stacks, segmented automatically by our proposed algorithm, with the manual segmentation performed by our experts. Fig. 12 illustrates the results of the proposed algorithm tested on an image stack showing To-Pro-3 labelled nuclei of different sizes and shapes obtained from a section of mouse brain. Fig. 12(a) shows the 3D projection of the volume rendered confocal image data set. Fig. 12(b) shows the reference section selected from the confocal image stack shown in Fig. 12(a), and Fig. 12(c) shows the boundary points for the reference section detected by the boundary detection procedure. The segmentation result of the proposed algorithm is shown in Fig. 12(d). Fig. 12(e) shows the segmented result of Fig. 12(d) merged with the original image of the reference section shown in Fig. 12(b). Fig. 12(f) shows the 3D projection of the segmented result superimposed on the original image and colour coded, with the segmented boundary voxels of each object in green and the internal objects in red.

Figure 11.

Illustration of our proposed cluster splitting algorithm results on 3D confocal image stack with multiple touching objects: (a(1), a(2), a(3)) Volume rendered image stack; (b(1), b(2), b(3)) Manual segmentation done on the reference section; and (c(1), c(2), c(3)) Automatic segmentation by our proposed algorithm on the reference section.

Figure 12.

Illustration of our proposed cluster splitting algorithm results on 3D confocal image stack with many cluster objects of different size and shape: (a) Volume rendered image stack of confocal images comprising 25 optical slices of To-Pro-3 stained nuclei from a frozen section of a mouse brain; (b) Reference section selected from the confocal image stack shown in (a); (c) Boundary surface points for the reference section shown in (b); (d) Automatic segmentation by our proposed algorithm on the reference section; (e) Segmented result shown in (d) merged with the original image of the reference section shown in (b); and (f) 3D projection of the segmented result superimposed on to the original image and is colour coded. Red colour represents the internal objects and green colour represents the boundary voxels.

We have tested our algorithm on several stacks of confocal images, labelled both with To-Pro-3, which bleaches rapidly, and with propidium iodide, which does not photobleach easily (as in the phantom examples), and obtained satisfactory segmentations in each case.

For all the data sets used here, we have manually counted the number of cells in each of the images and have compared it with the number of cells detected by the proposed cluster splitting based segmentation algorithm. The details of the data sets and the segmentation results are presented in Table 1.

Table 1.  Experimental results evaluation: (a) comparison of the results obtained by the proposed cluster splitting method with manual count on seven data sets used; and (b) comparison of segmentation time between the proposed 3D cluster splitting method, 2D slice based cluster splitting method and watershed segmentation method on seven 3D data sets used in this paper.
Data set         (a) Number of cells                          (b) Segmentation time (s)
                 Manual count  Automatic count  Error count   Proposed 3D cluster splitting  2D slice based cluster splitting  Watershed segmentation
Fig. 7(a(1))     3             3                0             0.140                          0.703                             0.250
Fig. 7(b(1))     10            10               0             0.313                          1.282                             0.453
Fig. 7(c(1))     10            8                2             0.296                          2.031                             0.406
Fig. 9           2             2                0             0.484                          1.703                             0.938
Fig. 11(a(1))    3             3                0             0.391                          1.125                             0.625
Fig. 11(a(2))    9             8                1             1.031                          1.905                             1.375
Fig. 11(a(3))    13            11               2             0.937                          4.465                             1.516

Comparison and evaluation

Comparison with Fernandez et al. concave point detection.  As a first step of the cluster splitting algorithm, concave points are estimated. In this section, we determine the influence of significant concave point detection on segmentation accuracy by comparing the concave point detection of Fernandez et al. with our improved significant concave point detection method. We follow the concaveness measure of Fernandez et al. to determine the concave points, but improve the method by using polygonal approximation and by considering the vertical distance measure to locate the most significant concave points. Fig. 13(a) shows the concave points detected by thresholding the computed concaveness values of all boundary points using the method of Fernandez et al., whereas Fig. 13(b) shows the result of our improved concave point detection, which locates only the most significant concave points.

Figure 13.

Comparison of concave point detection using: (a) Fernandez et al. concave point detection; and (b) Our improved concave point detection after contour smoothing by polygonal approximation and considering the vertical distance measure.

Comparison with watershed segmentation.  To evaluate the performance of the proposed 3D cluster splitting segmentation method, it is compared to the commonly used watershed segmentation method implemented in ImageJ. For the present comparison, we have chosen the confocal To-Pro-3 stained cell nuclei data set with two touching cell nuclei obtained from a frozen section of mouse brain shown in Fig. 9. Fig. 14 shows the segmentation results obtained with the two methods. For visual inspection, some of the sections are selected and the results of our 3D cluster splitting method and watershed segmentation are displayed. It is evident from the results that watershed segmentation performed in a slice-by-slice fashion, without considering the depth information, gives wrong results. In Fig. 9, there are only two touching cell nuclei. Fig. 14(a(1), a(2) and a(3)) show sections #1, #7 and #8 selected from the stack of To-Pro-3 labelled nuclei shown in Fig. 9(a). Fig. 14(b(1)) shows the result of the watershed method on section #1, which divides the cluster into two nuclei correctly. However, sections #7 and #8 (Fig. 14b(2) and b(3)) show that the cluster is divided into three regions, so the final 3D result obtained by stacking these slices is wrong. In general, watershed segmentation is also known to suffer from oversegmentation. This further demonstrates that a 2D cluster splitting algorithm applied to 3D images in a slice-by-slice fashion can produce entirely wrong results. By contrast, the results in Fig. 14(c(1), c(2) and c(3)) show the efficacy of our proposed 3D cluster splitting algorithm, which takes into account not only the 2D planar information but also the interslice spatial relationship to produce the correct segmentation result.

Figure 14.

Comparison with watershed segmentation method: (a(1), a(2), a(3)) Sections #1, #7 and #8 selected from the stack of 28 confocal image series of To-Pro-3 stained nuclei of mouse brain tissue shown in Fig. 9(a); (b(1), b(2), b(3)) Split path generated by watershed segmentation method highlighted in red colour; and (c(1), c(2), c(3)) Split path generated by our 3D cluster splitting method highlighted in red colour.

Computational efficiency.  Our 3D cluster splitting method is computationally fast compared to a 2D cluster splitting method applied to each slice of the 3D image stack. In our method, once the split paths for the reference section are estimated, the split paths for the rest of the sections are estimated quickly by comparison with the previous slice. In contrast, when a 2D slice-by-slice cluster splitting method is applied to a 3D image, the split path criteria checking has to be done in every image slice. To evaluate the performance of our 3D cluster splitting method in terms of computation speed, we have applied 2D slice-by-slice cluster splitting and watershed segmentation to the 3D images, and the segmentation times are listed in Table 1.

Discussion

Cell nuclei are often clustered, making it difficult to separate the individuals. In this paper, we have presented a 3D cluster splitting algorithm based on concavity analysis that takes advantage of interslice spatial coherence. Contour extraction of cell clusters is a preliminary requirement for the proposed cluster splitting algorithm; the contours are extracted in 3D using a higher-order statistics-based 3D boundary extraction method that considers the entire volume. The work presented here also outlines a simple but efficient contour preprocessing step, applied before concavity analysis, that smooths the fluctuations found in the contours of confocal microscopic images. A 3D split path is estimated for the entire image stack by considering the relation with adjacent slices. Validation studies using both synthesized and real 3D images have shown that this method is consistent in segmenting cluster objects. A comparative study with watershed segmentation also indicates that this method shows superior performance.

The proposed 3D cluster splitting method has been examined on several confocal data sets and promising results are obtained. Evaluation of the algorithm was carried out to establish the accuracy of the segmentation and its robustness against the photobleaching effect. As an evaluation, we compare the proposed method with the traditional and most commonly used watershed segmentation. We show that the proposed method can effectively divide whole cell nuclei with depth intensity degradation in 3D space. By contrast, 2D slice-by-slice watershed segmentation oversegments the nuclei in most of the sections, which leads to wrong results in 3D space. By adaptively finding the concave points and estimating the split path in 3D with the depth information taken into account, the method proposed in this paper considerably improves the accuracy of splitting cluster objects. In short, the proposed method shows superior performance on 3D images compared to cluster splitting done in a 2D slice-by-slice fashion. Our proposed method is also robust in the sense that it splits cluster objects of different sizes, can easily be fitted to clusters of different shapes and can be used for applications other than cell cluster segmentation.

The limitations of this work are similar to those of traditional 2D concavity-based cluster splitting. The success of the 3D cluster splitting depends on precise concave point detection: false concave points will lead to wrong split path calculations. Also, currently, for each object in a cluster the split paths are drawn from a pair of concave pixels, and those concave pixels are not considered for split path estimation for other objects in that cluster. Concavity pixel sharing between two objects in a cluster needs to be considered in future work, which will solve the problem that appeared in Fig. 7(c(1)). The major goal of this work is to extend 2D concavity-based cluster splitting to 3D in a computationally efficient manner. Overall, this method is a better alternative to watershed-based cluster splitting in 3D space. Further improvements in terms of optimizing the concave point detection and the split path checking criteria will certainly provide an improved solution for 3D cluster splitting.

Acknowledgements

M.O. is a member of the Heart & Stroke/Richard Lewar Centre of Excellence. This work is supported by grants from CIHR (MPO-36384) and from the Heart and Stroke Foundation of Ontario (T 6181) to M. Opas. Y.Y.C. thanks the support from Singapore Bio-Imaging Consortium (SBIC).
