Fully automated glioma tumour segmentation using anatomical symmetry plane detection in multimodal brain MRI

Automatic brain abnormality detection is a major challenge in medical image processing. Manual lesion delineation is susceptible to subjective errors, and therefore computer-aided preliminary screening of a lesion is necessary. This study introduces an efficient, automated algorithm for brain tumour segmentation in multimodal brain magnetic resonance imaging, based on the symmetry of brain structures in the two hemispheres. Symmetry is a vital clue for determining intensity-based lesion differences between the two hemispheres of the brain. A reliable method is proposed for extracting the cancerous region in order to improve the speed and accuracy of brain tumour segmentation. First, a symmetry plane is detected; then, using features extracted from both sides of the brain, a similarity measure for comparing the hemispheres is defined. The cancerous region is extracted using this similarity measure, and the accuracy is improved with a postprocessing operation. The algorithm is evaluated on the BRATS datasets, which include high- and low-grade glioma brain tumours. Performance indices are calculated and a comparative analysis is presented. Experimental results demonstrate accuracy close to that of manual lesion demarcation.


| INTRODUCTION
The abnormal growth of cells within the brain can cause tumours, which disrupt the cells' proper functioning and lead to life-threatening situations. A brain tumour is a group of irregular cells produced within the brain. According to the degree of malignancy, there are two types of brain tumours. Some are cancerous, or malignant; these are fast developing and harmful. Others are benign and noncancerous; these are slow growing, less harmful and treatable. Even though brain tumours do not always appear clearly in medical images, early diagnosis is vital to treatment planning. For further treatment, physicians need to quantify all parts of the tumour and its surrounding tissues.
Segmentation is a fundamental step in image analysis, understanding, interpretation and recognition. To analyze brain functionality, medical imaging provides guidance to doctors and researchers [1]. Brain tumour segmentation from multimodal magnetic resonance images (MRI) is one of the most valuable planning factors in surgery and radiotherapy.
The normal human brain presents an approximate bilateral symmetry, which is detectable in magnetic resonance imaging (MRI). Symmetry is an important clue for determining intensity-based lesion differences in the two hemispheres of the brain [2]. Although the internal structure of a pathologic brain may depart from its normal bilateral symmetry, the ideal imaginary bilateral symmetry plane remains invariant. Automatic detection and explicit representation of the symmetry plane are beneficial for image understanding in neuroradiology in many ways, including registration, tumour segmentation, lesion detection, screening and diagnosis.
Recently, several new algorithms have been proposed for symmetry plane computation in 3D brain images. The researchers in [3] proposed a method to locate the symmetry plane based on the computation of different local texture features. Kuijf et al. [4] suggested an automatic method to find the symmetry plane in MR brain images based on the KL measure. Automatic detection of the symmetry plane using curve fitting was developed in [5]. In order to determine the degree of symmetry in each axial slice, a midsagittal surface extraction method was suggested in [6] which uses fractal dimension and lacunarity as two independent factors of the voxel values. A semiautomatic method for extracting symmetry planes of the skull in brain CT images was proposed in [7]. In that method, the brain tissue is extracted for feature points in the patient's 3D brain CT images using a region growing algorithm; the oriented bounding box of the brain tissue is then obtained to build the initial symmetry plane. The authors in [8] proposed a fully automated method that relies on brain symmetry analysis, combining region-based and boundary-based segmentation methods. In another study [9], the researchers proposed a method for ischaemic lesion segmentation on axial MRI slices using symmetry axis detection combined with superpixel-based clustering; in some cases, aggregated clusters can lead to overestimation or underestimation of the injury. Several brain tumour classification techniques have been introduced in computer vision [10][11][12]. The most common features used for brain tumour segmentation are image intensities, based on the assumption that different tissues have different grey levels [13,14]. Another frequently used feature is local image texture, because it has been observed that different tumour areas exhibit different textural patterns [15].
Moreover, it has been suggested in [16] that using the frequency domain instead of the spatial domain is a good way of extracting different categories of features. Distinct characteristics of an image can be extracted in the frequency domain that are either not easy to find or not visible in spatial features. The wavelet transform (WT) is widely used in feature extraction from magnetic resonance imaging [17], since the WT provides good localization in both the spatial and spectral domains.

| Our contribution
To the best of our knowledge, the existing methods for detecting abnormality using CT and MRI need to be trained on manually segmented lesions [6][7][8][9], which is a significant obstacle, especially when datasets are large. In addition, because of the variety in shape, location and appearance of glioma tumours in patients, automatic tumour segmentation is a challenging procedure.
We introduce a new approach to segment brain abnormalities such as brain tumours in a simple way. The proposed method generates segmented structures with homogeneous tissues. It introduces a reliable curve fitting method to detect the anatomical symmetry plane of 3D MRI volume data, and it benefits from the general symmetry of brain structures in the two hemispheres. We show that our method is robust, based on symmetry analysis of the hemispheres using distance-based feature evaluation. We select optimal features using coefficients in the discrete wavelet domain. Our proposed method is described in Section 2.
We also performed qualitative analysis of the proposed method on the publicly available standard BRATS datasets. Section 3 describes the experimental results and comparative analysis.
Although our proposed method is classified as a primary diagnosis approach, its segmentation accuracy is remarkable. Unlike existing techniques that focus only on a specific disease, our proposed method can help with image understanding in neuroradiology in many ways, including registration, ischaemic stroke lesion detection, screening and diagnosis.

| METHODOLOGY
The proposed method consists of four main modules. In the first module, preprocessing operations are applied on brain MR images and the symmetry plane is found. In the second module, the features are extracted from the input images using discrete wavelet transform (DWT). The third module is segmentation in which suspicious hemisphere is detected and the primary cancerous region is extracted. Postprocessing operation and result improvement are applied in the last module.
The complete architecture of the proposed method is shown in Figure 1.

| First module: preprocessing
The first module has four steps. In the first step, the intensity values of voxels are normalized. In the second step, the symmetry plane of the input MRI is detected. In the third step, the orientation of the image is corrected, and finally a mirrored rotated image is obtained.

| Intensity normalization
One of the main disadvantages of brain MRI sequences is that the same type of tissue does not have a specific intensity. Different MRI sequences show different intensity values for the same tissue type, even within the same subject. These intensity variations make segmentation and image analysis difficult [13]. Therefore, intensity normalization is an important preprocessing step for MRI analysis [14]. Various intensity normalization methods have been proposed for medical images; in [18,19], adaptive threshold-based normalization approaches were introduced. In our method, we used one of the simplest, called Gaussian intensity normalization, which rescales the intensity values by a global linear scaling. The primary intensities are divided by the standard deviation of the entire intensity values within one scan:

I_norm = I / σ,   (1)

where I is the primary intensity value and σ is the standard deviation of the entire scan. This method assumes each sequence (such as T1, T1C, T2 and T2Flair) has the same distribution, and the normalization is done based on this assumption [20]. The intensity resolution of MRI is typically a 12-16 bit unsigned integer. By analyzing the grey-level range, we found that we can scale the intensity values into the range [0, 1024] (i.e. 10 bits per voxel) without any significant information loss.
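As a minimal sketch (not the authors' code), the two operations above can be written as follows; the 12-bit input maximum used for the rescaling is an assumption:

```python
import numpy as np

def gaussian_normalize(volume):
    """Gaussian intensity normalization (Equation (1)): divide every
    intensity by the standard deviation of the whole scan."""
    return volume.astype(np.float64) / volume.std()

def rescale_10bit(volume, in_max=4095.0):
    """Linearly map the original grey-level range onto [0, 1024]
    (10 bits per voxel). `in_max` assumes a 12-bit acquisition."""
    return volume.astype(np.float64) * (1024.0 / in_max)
```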

| Orientation correction
Due to wrong head posture during image acquisition, the head may appear oblique in MRI slices. Since we want to take advantage of the brain's symmetrical structure, we first need to find the symmetry plane of the MRI and then rotate the reference images around their symmetry plane in order to fix the brain orientation. Although brain images have a high degree of symmetry, they are not exactly symmetrical. The area between the two hemispheres of the brain is filled with cerebrospinal fluid (CSF), which forms an approximately vertical line in axial planes. In order to detect a symmetry axis, two points are needed in each slice [9]; we extract these two points automatically. Figure 2 illustrates the points and the symmetry axis of an axial slice. P_c, shown in Figure 2(c), is the centroid point of the brain. This point lies at the intersection of the major and minor axes of an ellipse computed with an ellipse fitting method [21] and fitted to the MRI slice. P_MSC is the peak of the superior sagittal sinus, located at the bottom of the external brain border. In order to find this point (i.e. the point with maximum curvature), an elliptic curve is fitted to the external boundary of the brain. This point is determined using Equation (2).
y = ±(b/a)·√(a² − x²)   (2)

e = √(a² − b²)/a   (3)

where a and b are the semi-major and semi-minor axes of the ellipse fitted to the boundary of the brain and e is its eccentricity. Therefore, by placing Equation (3) in Equation (2), P_MSC, the coordinates of the point with maximum curvature, is obtained.
The symmetry axis is the line connecting P_c to P_MSC in Figure 2(c). The angle between the symmetry axis and the vertical axis is called the offset angle θ_offset. The reference image is rotated about the symmetry axis by −θ_offset; the result is shown in Figure 2(b). Hence, for the rotated slice, a rotational correction is performed using Equations (7) and (8).
As shown in Figure 2(c), P_MSC(x, y) is the point on the major axis of the reference slice. P′_MSC(x) is the x-coordinate of the corresponding point on the rotated ellipse, whose major axis is perpendicular to the x-axis and coincides with the corrected symmetry axis. θ_offset is the rotation angle and M is the length of the major axis of the ellipse previously fitted to the rotated slice.
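Given the two detected points, the offset angle can be sketched as below; this is a hypothetical helper (the ellipse fitting itself is not shown):

```python
import numpy as np

def offset_angle(p_c, p_msc):
    """Angle (in degrees) between the line P_c -> P_MSC and the
    vertical image axis. Points are (x, y) pairs."""
    dx = p_msc[0] - p_c[0]
    dy = p_msc[1] - p_c[1]
    # Horizontal over vertical component gives the tilt from vertical.
    return np.degrees(np.arctan2(dx, dy))
```

The slice can then be de-rotated by −θ_offset, for example with `scipy.ndimage.rotate(slice_2d, -offset_angle(p_c, p_msc), reshape=False)`.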

| Mirroring
After rotation and offset correction, we flip the image about the vertical symmetry axis to obtain its mirrored version. The reference image and the mirrored one are used to compare the left and right hemispheres. Mirroring the rotated image is the last step of preprocessing and the beginning of segmentation.

| Second module: feature extraction
The second module includes three steps: feature definition, feature vector creation and similarity measurement. The output of this module is a set of feature vectors representing the similarity of the reference and mirrored rotated images.

| Feature vector creation
Feature extraction is the essential step that transforms an image into a set of features. Effective and beneficial feature set extraction is one of the most crucial and critical tasks for image classification. Several types of features are extracted from an image in order to label objects into their relevant categories. Since the intensity value of voxels is not enough for similarity determination, we define features based on DWT in order to benefit from edge information as well.
A Haar filter is applied to the original image in the vertical and horizontal directions. Figure 3(b) shows a typical output of the Haar wavelet on an MRI slice, a visualization of the four subbands known as HH, HL, LH and LL. The HH subband is noisy because it contains mostly high-frequency information. Vertical and horizontal edge features are represented by the HL and LH subbands, respectively. The LL subband contains most of the approximation information; it is a reduced version of the original image at 1/4 of the size. All four subbands are used for feature extraction.
Features are extracted from both the input and the rotated mirrored images as follows. Consider overlapping cubes of size 3 × 3 × 3 on the reference and rotated mirrored 3D images (Figure 3(a)). The central voxel of each cube is considered as a target voxel. We select three consecutive axial slices and apply the DWT on each slice separately. Then, we divide them into 3 × 3 × 3 blocks.
For each of the four subbands visualized in Figure 3, and for each target voxel at the centre of a 3 × 3 × 3 cube, the mean (μ) and standard deviation (σ) of the wavelet coefficients of its 26 connected neighbours are calculated and considered as features. This is repeated for all four subbands. Figure 3(c) shows the feature vector format. It has two main parts: the first part has 10 bits, dedicated to the target voxel intensity value. The second part contains 80 bits, divided into four partitions, each of which has 20 bits for the mean (μ) and standard deviation (σ) extracted from the LL, LH, HL and HH coefficients, respectively.

FIGURE 2 (a) Ellipse fitted to original oblique slice, (b) ellipse fitted to rotated slice, (c) original ellipse in green, centroid point P_c and maximum sinus curvature point P_MSC, major and minor axes of the original ellipse in red, symmetry axis in yellow, offset angle for orientation correction θ_offset, rotated ellipse in blue, transformed P_MSC to P′_MSC
In fact, each voxel in MRI volume is considered to be a target voxel for which the feature vector is calculated.
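The feature computation above can be sketched as follows. The Haar transform here is an unnormalized averaging variant written for self-containment (a library such as PyWavelets would normally be used), and the subband naming follows the convention in the text:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar DWT: returns LL, LH, HL, HH subbands,
    each 1/4 the area of the (even-sized) input."""
    a = img.astype(np.float64)
    # Rows: average/difference of adjacent column pairs.
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # Columns: same on adjacent row pairs.
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

def voxel_features(subband_blocks):
    """Mean and std of the 26 neighbours of the centre voxel of a
    3x3x3 block, per subband. The 10-bit target-voxel intensity is
    prepended separately, as in the feature vector format above."""
    feats = []
    for band in subband_blocks:          # each band: a 3x3x3 array
        flat = band.reshape(-1)
        neigh = np.delete(flat, 13)      # drop the centre voxel (index 13)
        feats += [neigh.mean(), neigh.std()]
    return np.array(feats)
```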

| Similarity measurement
At each voxel, feature vector comparison is carried out using the Euclidean distance. Considering F_x = [a_1, …, a_k] as the feature vector of voxel x in the reference image and F′_x = [a′_1, …, a′_k] as the feature vector of x in the mirrored image, where k is the feature vector size, the similarity measure is calculated by:

Distance = √( Σ_{i=1}^{k} (a_i − a′_i)² )   (9)

The result of comparing the two images is a similarity distance map (of the same size as the MRI), in which similar regions have values close to zero.
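Equation (9) applied over the whole volume is a one-liner in NumPy; here `feat_ref` and `feat_mir` hold one k-dimensional feature vector per voxel:

```python
import numpy as np

def similarity_distance_map(feat_ref, feat_mir):
    """Voxel-wise Euclidean distance between the feature vectors of
    the reference and mirrored volumes. Inputs have shape (..., k);
    the output has the spatial shape of the MRI."""
    return np.sqrt(np.sum((feat_ref - feat_mir) ** 2, axis=-1))
```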

| Third module: segmentation
This module consists of two main steps: suspicious hemisphere detection and primary cancerous region extraction. These two steps are applied to the similarity distance map obtained in the second module. In the first step, the hemisphere containing the abnormality is selected as follows. Since the similarity distance values obtained in the previous module may have different ranges in multimodal MRI, it is necessary to make their values uniform, so they are normalized into [0, 1]. In the normalized map, if the distance value at a coordinate is close to zero, it indicates strong similarity between the reference and mirrored images. Conversely, dissimilar regions have values close to one at the corresponding coordinates. The normalized distances are then multiplied by the corresponding intensity values of the reference image. As a result, the similar regions, whose distance values are close to zero, get lower values, and the regions with less similarity get higher values. The abnormality exists in the dissimilar regions; therefore, the suspicious hemisphere gets higher intensity values, as seen in Figure 4(b).
In the second step, the primary cancerous region is obtained by converting the result of the first step to a binary image (Figure 4(c)).
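A sketch of the two segmentation steps; the binarization threshold here is an assumption, as the paper does not state the exact value used:

```python
import numpy as np

def primary_tumour_mask(distance_map, reference, threshold=0.5):
    """Normalize the distance map into [0, 1], weight it by the
    reference intensities so dissimilar regions stand out, then
    binarize to obtain the primary cancerous region."""
    d = distance_map.astype(np.float64)
    d = (d - d.min()) / (d.max() - d.min() + 1e-12)
    weighted = d * reference
    w = (weighted - weighted.min()) / (weighted.max() - weighted.min() + 1e-12)
    return w > threshold
```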
At this stage, the segmentation result is not reliable and it needs postprocessing to improve its accuracy.

| Fourth module: postprocessing
Postprocessing can improve the overall segmentation performance by reducing the false-positive and false-negative errors. This is accomplished by relabelling the most appropriate cancerous regions among the several candidates according to the morphological operation result. The main purpose of this step is to eliminate the small objects from the foreground and show only the part of image which has a tumour. In morphological opening, erosion typically removes small objects, and the subsequent dilation tends to restore the overall shape of the objects.
Since the restoration accuracy of the dilation operation depends highly on the similarity between the type of structuring element and the shape of the object being restored, we use the opening-by-reconstruction method, which is able to restore objects completely after applying erosion. Reconstruction is a morphological transformation involving two images and a structuring element (instead of a single image and structuring element). One image, the marker, is the starting point for the transformation. The other image, the mask, constrains the transformation. The structuring element, B, defines connectivity. If G is the mask and F is the marker, the reconstruction of G from F, denoted R_G(F), is defined by the following iterative procedure:
1. Initialize h_1 to be the marker image, F.
2. Create the structuring element, B.
3. Repeat h_{k+1} = (h_k ⊕ B) ∩ G until h_{k+1} = h_k.
4. R_G(F) = h_{k+1}.
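The iterative reconstruction procedure and the resulting opening can be sketched for binary volumes with `scipy.ndimage` (a minimal version, not the authors' implementation):

```python
import numpy as np
from scipy import ndimage

def reconstruct(marker, mask, struct):
    """Binary morphological reconstruction of `mask` from `marker`:
    iterate h <- dilate(h, B) AND mask until h stops changing."""
    h = marker & mask
    while True:
        h_next = ndimage.binary_dilation(h, structure=struct) & mask
        if np.array_equal(h_next, h):
            return h
        h = h_next

def opening_by_reconstruction(img, struct):
    """Erode to remove small objects, then rebuild the surviving
    objects to their full original shape."""
    marker = ndimage.binary_erosion(img, structure=struct)
    return reconstruct(marker, img, struct)
```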

| EXPERIMENTAL RESULTS
Performance of the proposed method is evaluated on the BRATS datasets provided by the multimodal brain tumour segmentation challenge. All BRATS datasets are publicly available and include four MRI sequences (T1, T1C, T2 and T2Flair). All images are skull-stripped and resampled to 1 mm isotropic resolution, and all MRI sequences of each case are coregistered. The datasets contain high-grade (HGG) and low-grade (LGG) real glioma patients. A ground truth (GT) segmentation is available, distinguishing different labels: (1) necrosis, (2) oedema, (3) nonenhancing tumour, (4) enhancing tumour and (5) everything else. To evaluate the performance of segmentation algorithms, the different tumour subregions are grouped into three regions appropriate for clinical tasks: the whole tumour region (including all four tumour subregions), the tumour core region (including all tumour structures except oedema) and the active tumour region (including only the enhancing core subregion, defined for high-grade cases) [22]. In order to illustrate the accuracy of the proposed method, the cancerous region predicted by the algorithm (PR) was compared with the manually delineated GT. By doing so, we used the parameters in Equations (10)-(13) to measure the average slice-wise true-positive rate (sensitivity), true-negative rate (specificity), Jaccard and Dice similarity [22].
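Equations (10)-(13) are not reproduced in this excerpt; for binary masks, the four indices have the following standard definitions:

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def jaccard(pred, gt):
    """Jaccard index: intersection over union."""
    inter = np.logical_and(pred, gt).sum()
    return inter / np.logical_or(pred, gt).sum()

def sensitivity(pred, gt):
    """True-positive rate."""
    return np.logical_and(pred, gt).sum() / gt.sum()

def specificity(pred, gt):
    """True-negative rate."""
    return np.logical_and(~pred, ~gt).sum() / (~gt).sum()
```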
These parametric indices are averaged for BRATS 2015, 2017 and 2019, as shown in Table 1. Figure 8 shows three test cases and the segmentation results of the proposed method. The first is an LGG in the right hemisphere. The second test case is an HGG that appeared on the right side of the brain, and the last is an HGG developed in both hemispheres.
In order to evaluate the performance of the proposed method in comparison with state-of-the-art approaches, we compared our results with the two top categories of learning models: deep learning and generative models.
The first approach is based on deep learning and uses convolutional neural networks. Table 2 compares the accuracy of the proposed method with deep learning methods. Choudhury et al. [23] have extended some well-known architectures for 2D image classification to the problem of 3D image segmentation, and the model introduced by Albiol et al. [24] trains on three orthogonal slices in the CNN. Wang et al. [25] and Hamghalam [26] have developed 3D CNN approaches based on U-Nets and a GAN network, respectively. Guo [27] and Li et al. [28] have implemented deep learning models using cascaded U-Nets and three orthogonal slices for training, respectively. Kamnitsas et al. [29] and Tseng et al. [30] have developed convolutional neural networks based on cross-modality and three-dimensional networks, respectively. Although deep learning learns high-level features from data in an incremental manner and tends to perform well, it requires large amounts of labelled data. Table 3 compares the results of the proposed method with generative models, which offer a powerful way of learning a data distribution. Recently, generative models have achieved tremendous success. However, it is not always possible to learn the exact distribution of the training set due to the variation in shape, appearance and location of glioma tumours; a generative model therefore tries to model a distribution that is as similar as possible to the true data distribution. As a generative model, the rule-based learning method presented in Barzegar et al. [13] is a hybrid method that uses multislice 3D information of MRI sequences to define a binary neighbourhood pattern for voxel labelling. The study was extended in [14] using an ensemble learning framework: weighted grey-level differences were considered as features and tumour segmentation was performed with a combination of support vector machines in a bagging structure. Lefkovits et al. [31], Serrano-Rubio et al. [32] and Phophalia et al. [33] have proposed specific generative models that achieve comparable accuracy. Tables 2 and 3 show the comparison of our method with the above-mentioned state-of-the-art methods on the publicly available benchmark datasets (BRATS 2015, 2017 and 2019). In most references, the authors report only the Dice score in their experimental results. Our proposed method compares well with these methods. As described previously, the tumour core contains necrosis as well as enhancing and nonenhancing tumour subregions; the whole tumour region includes oedema as well. Compared to the top-ranked methods, our proposed method achieved the highest Dice score in BRATS 2017.
Since the final goal of our proposed method is its application in a clinical diagnosis support system, it is useful to compare its performance with such support systems as well. Brain tumour image analysis (BraTumIA) [34] is a 3D image analysis tool commonly used for brain MRI segmentation. MRI sequences such as T1, T1C, T2 and T2Flair are used as input to this software. The application can segment the normal structures of the brain as well as tumours such as gliomas, and it is regarded as a reliable system by clinical experts. Table 4 shows the comparison of our proposed method with the BraTumIA application; our proposed method achieves better results than the BraTumIA software.

| Parameter analysis
The orientation correction is an important step in the segmentation procedure. The basic idea is to take advantage of the brain's symmetrical structure. As mentioned in Section 2.2, due to wrong head posture during image acquisition, the head may appear oblique in MRI slices. According to our proposed pipeline, first the head's offset angle in the reference image is calculated, then the head orientation is corrected by rotating the reference image about the symmetry axis. Second, the rotated image is flipped; we call the result the mirrored image.
If the orientation correction is not applied to the reference image, the head in the mirrored image does not match the one in the reference image. Consider an offset angle α for the reference image (Figure 5(a)). This angle turns to −α in the mirrored image (Figure 5(b)). Therefore, spurious gaps appear between the two hemispheres in the corresponding distance map. This makes it difficult to precisely detect the cancerous region and its boundary (Figure 5(c)). Besides, it leads to more false positives, which decrease the segmentation accuracy. Figure 6 benchmarks the segmentation performance of the proposed method with and without the orientation correction step. In the postprocessing step, we use the opening-by-reconstruction method, which restores the original shapes of the objects. As described earlier, the structuring element used in this method considers the neighbour-voxel connectivity in a 3 × 3 × 3 matrix. We experimented with 6-, 12-, 18- and 26-connectivity in the 3D neighbourhood, as shown in Figure 7, in the postprocessing step. The 26-connectivity, which implies B is a 3 × 3 × 3 matrix of ones, gives the best accuracy. Figure 7 shows the evaluation of the Dice measure for the different structuring elements.
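Three of the tested structuring elements can be generated directly with `scipy.ndimage`; note that the 12-connected element is not produced by `generate_binary_structure` and would have to be built by hand:

```python
from scipy import ndimage

# Structuring elements for 3D morphology: connectivity 1, 2 and 3
# correspond to 6-, 18- and 26-connected neighbourhoods.
b6 = ndimage.generate_binary_structure(3, 1)
b18 = ndimage.generate_binary_structure(3, 2)
b26 = ndimage.generate_binary_structure(3, 3)  # 3 x 3 x 3 cube of ones
```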

| Complexity analysis
Since the proposed algorithm is a kind of case-based reasoning algorithm, its time complexity is dominated by the time for preparing the input images and calculating similarities. In the image preparation phase, according to Equation (5), an elliptical curve fit is computed for each image slice; assuming it takes O(1) for one slice, it takes O(m) for the input MRI volume, where m is the number of slices in the input MRI volume.
The time complexity of feature vector creation for one voxel is O(1). The complexity of similar-region extraction depends on the size of the feature vector (k) and the cost of the distance calculation for each voxel. Each feature vector includes five groups of features: the intensity value of the target voxel and the LL, HL, LH and HH subband features. As described in Section 2.2, for each voxel, two feature vectors of size k are created. The distance calculation for each voxel, according to Equation (9), is O(k). In each slice (image) of size n × n voxels, the feature vector calculation will be O(kn²); for constant k, it becomes O(n²). Since there are m slices in one MRI volume, and treating m as a constant, its feature vector calculation will be O(m·n²) = O(n²). The overall complexity for one patient MRI volume is the sum of the elliptical curve fitting, O(m), and the feature vector calculation, O(n²), that is, O(m) + O(n²) = O(n²), where n is the number of voxels along one side of a slice.

| Discussion
We evaluated the proposed method on the BRATS datasets containing LGG and HGG samples. The evaluation of the parametric indices is presented in Table 1. In the case of LGG, since the tumour has weak boundaries, the proposed method does not produce sufficiently accurate segmentation. In contrast, for HGG, we obtain a remarkable Dice score.
Our proposed method is compared to other state-of-the-art methods in Tables 2 and 3. In comparison to deep learning approaches, we achieved the best accuracy on BRATS 2019 and 2015. Additionally, the proposed method performs better than the generative models on BRATS 2019 and 2017.
One factor likely to influence the accuracy improvement is the orientation correction step; Figure 6 illustrates the importance of this step. Another effective factor is the size of the structuring element in the postprocessing step. As shown in Figure 7, the structuring element consisting of the 26-connected neighbourhood improved the performance of the opening operator in the reconstruction method.
Although the proposed morphological operation in the postprocessing step leads to a great improvement, a limitation of this approach is that, in a few cases of LGG tumours, it does not produce closed contours around the tumour (see Figure 8(c)). As mentioned in Section 2, our proposed method relies on the brain's symmetrical structure. For HGG tumours developed in both hemispheres, the marginal voxels get lower similarity scores than the interior voxels, so the predicted tumour contours are not completely closed. By applying a basic morphological dilation, the boundaries become more visible and the small holes are filled (see Figure 8(d)).
In our method, the time complexity for tumour segmentation in one slice is O(n), where n is the number of voxels in one slice. But the deep learning approaches that use convolutional neural networks have much higher computational complexity mainly because of their architecture (i.e., number of layers and number of nodes in each layer). In addition, the deep learning methods require large memory capacity and advanced high speed processors during both training and testing stages. For generative models, the most time-consuming parts are feature extraction and learning due to the large amount of data they need. In short, the required memory and time complexity vary significantly depending on the methods used in feature extraction and training steps.

| CONCLUSION
An efficient glioma brain tumour segmentation method was proposed. Low computational complexity and accurate segmentation of the cancerous region demonstrate the efficiency of the proposed approach, which is sensitive to changes in the symmetry between the brain hemispheres. The time complexity of the proposed method is lower than that of generative models and deep learning methods. Besides, by defining features in the wavelet domain, strong feature vectors are generated that reveal more detail; therefore, abnormalities and asymmetric regions are extracted more accurately. The experimental results showed that the proposed method can outperform generative models and deep learning methods.
Our proposed method has four main modules: preprocessing, feature extraction, segmentation and postprocessing. Tumour segmentation using symmetry alone depends highly on an intensity threshold for outliers, which may reduce the performance of tumour delineation. Therefore, symmetry plane detection followed by similarity comparison based on efficient context features proved to be a better strategy.
The proposed method enables automated segmentation in multimodal MRI. It was validated on the BRATS datasets, where it significantly improved the segmentation result and achieved higher accuracy. Experiments show that the model possesses the capability to learn symmetry in the tissue structures of the brain, as well as to determine whether these structures change or are absent in one hemisphere. It segments the whole tumour, including the tumour core, with relatively high accuracy for HGGs.
For our future work, we will focus on improving our approach for LGG segmentation. We also plan to define stronger deep features. Finally, it is important to test the feasibility of applying our method to other cases such as ischaemic stroke lesions.