Article

An Interval Iteration Based Multilevel Thresholding Algorithm for Brain MR Image Segmentation

1 College of Computer Science and Engineering, Changchun University of Technology, Changchun 130012, China
2 Artificial Intelligence Research Institute, Changchun University of Technology, Changchun 130012, China
3 Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China
4 College of Computer Science and Technology, Jilin University, Changchun 130012, China
* Author to whom correspondence should be addressed.
Entropy 2021, 23(11), 1429; https://doi.org/10.3390/e23111429
Submission received: 31 August 2021 / Revised: 11 October 2021 / Accepted: 26 October 2021 / Published: 29 October 2021

Abstract:
In this paper, we propose an interval iteration multilevel thresholding method (IIMT). This approach is based on the Otsu method but iteratively searches for sub-regions of the image to achieve segmentation, rather than processing the full image as a whole region. Then, a novel multilevel thresholding framework based on IIMT for brain MR image segmentation is proposed. In this framework, the original image is first decomposed using a hybrid L1-L0 layer decomposition method to obtain the base layer. Second, we use IIMT to segment both the original image and its base layer. Finally, the two segmentation results are integrated by a fusion scheme to obtain a more refined and accurate segmentation result. Experimental results show that our proposed algorithm is effective and outperforms the standard Otsu-based and other optimization-based segmentation methods.

1. Introduction

Image segmentation is a key step in image processing and image analysis [1,2,3]. The process of image segmentation refers to dividing an image into several disjoint regions based on features such as intensity, color, spatial texture, and geometric shapes, so that these features show consistency or meaningful similarity in the same region, but show obvious differences between different regions [4,5]. Image segmentation is widely used in many fields, such as computer vision, object recognition, and medical image applications [6,7].
In the field of medical research and practice, image segmentation technology can be applied to computer-aided diagnosis, clinical surgical image navigation, and image-guided tumor radiotherapy [8,9]. Segmentation of organs and their substructures from medical images can be used to quantitatively analyze clinical parameters related to volume and shape [10]. For instance, a brain MR image can be segmented into five main regions, namely, the gray matter (GM), white matter (WM), cerebrospinal fluid (CSF), the skull, and the background. In the diagnosis of brain disease, WM abnormalities are closely related to multiple sclerosis, schizophrenia, and Alzheimer's disease, while autism is related to changes in GM volume [8,11]. Central nervous system lesions and metabolic disorders of nerve cells change the properties and composition of the CSF, so when the central nervous system is damaged, examination of the CSF is an important auxiliary diagnostic method. Therefore, accurate segmentation of the different object regions in a brain MR image is considered one of the most significant tasks for clinical research and treatment.
A large number of image segmentation methods have been previously researched. In [12], Fu et al. classified image segmentation techniques into characteristic feature thresholding [13,14,15] or clustering [16,17], edge detection [18,19], and region extraction [20,21]. Other approaches include graph cut methods [22,23] and deep neural network-based methods [24]. Among the existing segmentation methods, thresholding is considered an efficient and popular technique because of its simplicity and high efficiency [25,26,27,28]. Thresholding can be classified into two groups: bi-level thresholding and multi-level thresholding [29]. The former segments an original image into two regions (foreground and background) by searching for an optimal threshold based on the gray histogram. Pixels with gray values greater than the threshold are classified as foreground, whereas pixels with gray values lower than the threshold are classified as background. When such a simple binary classification is insufficient for subsequent processing, bi-level thresholding is extended to multi-level thresholding, which partitions the image into several different regions using more thresholds [30].
He et al. proposed an efficient krill herd method to identify optimal thresholding values by maximizing three different objective functions: between-class variance, Kapur's entropy, and Tsallis entropy [29]. Lei et al. defined square rough entropy in a new form and presented a novel image thresholding method based on minimum square rough entropy [31]; the optimal threshold was selected as the value that made the roughness of the object region and the background zero. Yan et al. proposed a novel multilevel thresholding method using Kapur's entropy based on the whale optimization algorithm [32], which can overcome premature convergence and obtain the global optimal solution. Singh proposed an adaptive thresholding algorithm based on neutrosophic set theory for segmenting Parkinson's disease MR images [33]; the gray value that maximizes the neutrosophic entropy information is selected as the optimal threshold. Tarkhaneh et al. presented a differential evolution-based multilevel thresholding algorithm for MR brain image segmentation [34], in which a novel mutation scheme inspired by the Levy distribution, the Cauchy distribution, and Cotes' spiral was designed for the swarm intelligence optimization. To address the increasing complexity of optimization problems, Zhao et al. proposed an improved ant colony optimization algorithm based on a chaotic random spare strategy for multilevel thresholding [35]; the random spare strategy was applied to improve the convergence speed, and the chaotic intensification strategy was used to improve the convergence accuracy and avoid falling into a local optimum. Cai et al. proposed an iterative triclass Otsu thresholding algorithm for microscopic image segmentation [36]. In contrast to the standard Otsu method, it first segments an original image into the foreground, the background, and a third "to-be-determined (TBD)" region, based on the two class means obtained from Otsu's optimal threshold.
Then, similar processing is iteratively applied to the TBD region until a preset criterion is met. This single thresholding method performs well for weak objects and for the segmentation of fine details, but it does not extend directly to complicated medical image segmentation. Indeed, medical image segmentation is still regarded as an important yet challenging task due to the complexity of the medical images themselves, such as low tissue contrast, irregular shapes, and large location variance [37].
To improve the quality of image segmentation, we propose an interval iteration-based multilevel thresholding algorithm for brain MR images. In the algorithm, hybrid L1-L0 layer decomposition is adopted to reduce the influence of noise on the segmentation result. Traditional Otsu multilevel thresholding processes the full image as a whole region and is biased toward the class with a large variance. To overcome this problem, we extended Cai's method [36] to multilevel thresholding and propose a novel interval iteration method to identify optimal thresholds. In addition, a fusion strategy is used to integrate the different segmentation images to obtain finer segmentation results. In general, the key contributions of our work can be summarized as follows:
(1)
A hybrid L1-L0 layer decomposition method is used to obtain the base layer of the original image, which removes noise while preserving edge information in the segmentation process.
(2)
An interval iteration multilevel thresholding method is proposed in this paper. In the grayscale histogram of the original image, intervals are delimited by combinations of class means and thresholds, and Otsu single thresholding is iteratively applied within each interval.
(3)
A fusion strategy is adopted to fuse different segmentation results. It takes both spatial and intensity information into account, and makes segmentation more accurate.
The rest of this paper is organized as follows. Section 2 details the interval iteration-based multilevel thresholding method. The framework of the proposed algorithm and related processing are described in Section 3. Section 4 depicts the experiments on brain MR image segmentation including results and analysis. Finally, conclusions and future work are presented and discussed in Section 5.

2. Interval Iteration Based Multilevel Thresholding

In this section, we propose a novel multilevel thresholding algorithm based on interval iteration. The iterative process is illustrated in the following.

2.1. Otsu Method

Let $I$ be an image of size $M \times N$ with gray levels $G = \{0, 1, \ldots, 255\}$. We define $n_j$ as the number of pixels with gray level $j$, and $P_j = \frac{n_j}{M \times N}$ ($P_j \geq 0$, $j \in G$) as the probability of such pixels, where $\sum_{j=0}^{255} P_j = 1$. Assuming that $I$ is to be segmented into $K+1$ ($K \geq 1$) classes $(C_1, C_2, \ldots, C_{K+1})$ by $K$ thresholds $(t_1, t_2, \ldots, t_K)$, the Otsu method searches the histogram of $I$ for one or more thresholds that minimize the intra-class variance or, equivalently, maximize the between-class variance, i.e.,

$$\{T_1, T_2, \ldots, T_K\} = \operatorname*{arg\,max}_{0 \leq t_1 < t_2 < \cdots < t_K \leq 255} \ \sigma_B^2(t_1, t_2, \ldots, t_K)$$

If $K = 1$, this is referred to as single thresholding; otherwise, multilevel thresholding. The between-class variance $\sigma_B^2$ is calculated as follows:
$$\sigma_B^2(t_1, t_2, \ldots, t_K) = \sum_{i=1}^{K+1} \omega_i (\mu_i - \mu_T)^2 \tag{1}$$
where ω i and μ i denote the probability and mean of class Ci, respectively.
$$\omega_1 = \sum_{j=0}^{t_1} P_j, \qquad \omega_i = \sum_{j=t_{i-1}+1}^{t_i} P_j \ (i = 2, \ldots, K), \qquad \omega_{K+1} = \sum_{j=t_K+1}^{255} P_j \tag{2}$$
$$\mu_1 = \frac{\sum_{j=0}^{t_1} j P_j}{\omega_1}, \qquad \mu_i = \frac{\sum_{j=t_{i-1}+1}^{t_i} j P_j}{\omega_i} \ (i = 2, \ldots, K), \qquad \mu_{K+1} = \frac{\sum_{j=t_K+1}^{255} j P_j}{\omega_{K+1}} \tag{3}$$
$\mu_T$ represents the total mean over all $K+1$ classes:

$$\mu_T = \sum_{j=0}^{255} j P_j \tag{4}$$
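As a concrete illustration of Equations (1)–(4), the sketch below (Python with NumPy; an illustration, not the authors' implementation) finds the thresholds maximizing the between-class variance by exhaustive search, which is practical only for small K:

```python
import numpy as np
from itertools import combinations

def otsu_multilevel(hist, K):
    """Exhaustive Otsu: search K thresholds maximizing the between-class
    variance of Equation (1). hist is a 256-bin gray-level histogram."""
    P = hist / hist.sum()                       # probabilities P_j (Section 2.1)
    levels = np.arange(256)
    mu_T = (levels * P).sum()                   # total mean, Equation (4)

    best_var, best_T = -1.0, None
    for T in combinations(range(255), K):       # candidates t_1 < ... < t_K
        bounds = (0,) + tuple(t + 1 for t in T) + (256,)
        var = 0.0
        for lo, hi in zip(bounds[:-1], bounds[1:]):
            w = P[lo:hi].sum()                  # class probability omega_i, Eq. (2)
            if w > 0:
                mu = (levels[lo:hi] * P[lo:hi]).sum() / w  # class mean mu_i, Eq. (3)
                var += w * (mu - mu_T) ** 2     # accumulate Eq. (1)
        if var > best_var:
            best_var, best_T = var, T
    return list(best_T), best_var
```

The search space grows combinatorially with K, which is precisely why iterative refinement schemes such as the IIMT method below are attractive.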

2.2. Interval Iteration Based Multilevel Thresholding

2.2.1. The First Iteration

Given an original image I, we can obtain its gray histogram curve. Here, an artificial example is shown in Figure 1.
In the first iteration, traditional Otsu multilevel thresholding is performed on the original image to search for K thresholds. K + 1 class means and K initial thresholds are obtained by maximizing Equation (1). Figure 2 illustrates the results of Otsu multilevel thresholding. In Figure 2a, the K + 1 class means are denoted as μ1,i (i = 1, …, K + 1), and the K initial thresholds are denoted as T1,i (i = 1, …, K). Pixels are then classified as follows. Pixels whose gray values satisfy p ≤ μ1,1 are partitioned into class C1; pixels whose gray values satisfy q ≥ μ1,K+1 are partitioned into class CK+1. The remaining pixels are divided into K intervals [μ1,1, μ1,2], [μ1,2, μ1,3], …, [μ1,K, μ1,K+1] according to their gray values, and they are classified in the next iteration. Figure 2b shows an example of the classification. In Figure 2b, the green part denotes C1 and the yellow part represents CK+1; the part between C1 and CK+1 is determined in subsequent iterations.

2.2.2. The Second Iteration

In the second iteration (as shown in Figure 3), new thresholds T2,i (i = 1, …, K) are obtained by applying Otsu single thresholding to the K intervals [μ1,1, μ1,2], [μ1,2, μ1,3], …, [μ1,K, μ1,K+1], respectively. Furthermore, two class means μ2,2i−1, μ2,2i are obtained from T2,i in [μ1,i, μ1,i+1] (i = 1, …, K), as shown in Figure 3a. Then, classes C1 and CK+1 are updated by adding new pixels whose gray values are in the intervals [μ1,1, μ2,1] and [μ2,2K, μ1,K+1], shown as the green and yellow parts in Figure 3b, respectively. Meanwhile, pixels whose gray values are in the intervals [μ2,2, μ2,3], [μ2,4, μ2,5], …, [μ2,2K−2, μ2,2K−1] are divided into the K − 1 classes C2, …, CK, respectively (shown as the light orange part in Figure 3b).

2.2.3. The sth Iteration

In the sth (s ≥ 3) iteration, new thresholds Ts,i (i = 1, …, K) are respectively obtained by applying the Otsu method to the intervals [μs−1,1, μs−1,2], [μs−1,3, μs−1,4], …, [μs−1,2K−1, μs−1,2K] produced in the previous iteration. Two class means μs,2i−1, μs,2i are obtained from Ts,i in the interval [μs−1,2i−1, μs−1,2i] (i = 1, …, K). New pixels are then added to classes C1, C2, …, CK+1: C1 and CK+1 are expanded by adding new pixels whose gray values are in the intervals [μs−1,1, μs,1] and [μs,2K, μs−1,2K], respectively, and Ci (i = 2, …, K) is expanded by adding new pixels whose gray values are in the intervals [μs,2i−2, μs−1,2i−2] and [μs−1,2i−1, μs,2i−1]. For clarity, an example of the process to update class Ci is displayed in Figure 4. In Figure 4a, μs−1,2i−2 and μs−1,2i−1 are two class means obtained from the (s−1)th iteration, in which class Ci includes pixels whose gray values are in the interval [μs−1,2i−2, μs−1,2i−1] (shown as the purple part). In Figure 4b, μs,2i−2 and μs,2i−1 are two new class means obtained in the sth iteration. Pixels in the two intervals [μs,2i−2, μs−1,2i−2] and [μs−1,2i−1, μs,2i−1] (the two blue areas) are added to class Ci.
The above process is repeated, and the search for the rth threshold stops once the difference between two consecutive estimates is less than δ (δ > 0), i.e., |Th,r − Th−1,r| < δ. The rth optimal threshold is then set as Tr = Th,r. The whole iteration stops when all the optimal thresholds T1, T2, …, TK (as shown in Figure 5) have been found.
Algorithm 1 summarizes the framework of interval iteration-based multilevel thresholding (IIMT).
Algorithm 1. Interval iteration-based multilevel thresholding (IIMT).
Input: original image I, number of thresholds K (K ≥ 2), constant δ (δ > 0);
Output: optimal thresholds T1, T2, …, TK;
1:  Perform Otsu multilevel thresholding (maximize Equation (1)) to obtain the thresholds T1,1, T1,2, …, T1,K, the corresponding class means μ1,1, μ1,2, …, μ1,K, μ1,K+1, and the divided classes C1, CK+1;
2:  Perform Otsu single thresholding in each interval [μ1,i, μ1,i+1] (i = 1, …, K) to obtain the corresponding threshold T2,i and class means μ2,2i−1, μ2,2i; update classes C1, CK+1; obtain the divided classes C2, …, CK;
3:  for i = 1, …, K do
4:     s = 3;
5:     do
6:     {  Perform Otsu single thresholding in the interval [μs−1,2i−1, μs−1,2i] to obtain the corresponding threshold Ts,i and class means μs,2i−1, μs,2i; update the divided class Ci;
7:        s++;
8:     } while (|Ts−1,i − Ts−2,i| ≥ δ)
9:     Ti = Ts−1,i;
10:  end for
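The refinement loop of steps 3–10 can be sketched as follows (Python/NumPy). This is a simplified illustration: it tracks only thresholds and interval endpoints rather than the pixel classes Ci, and the initial thresholds and class means are assumed to come from a preceding multilevel Otsu pass (step 1):

```python
import numpy as np

def otsu_single(hist, lo, hi):
    """Standard Otsu single thresholding restricted to gray levels [lo, hi]."""
    h = hist[lo:hi + 1].astype(float)
    P = h / max(h.sum(), 1e-12)
    levels = np.arange(lo, hi + 1)
    mu_T = (levels * P).sum()
    best_var, best_t = -1.0, lo
    for t in range(lo, hi):                      # split into [lo, t] and [t+1, hi]
        n = t - lo + 1
        w1 = P[:n].sum()
        w2 = 1.0 - w1
        if w1 <= 0 or w2 <= 0:
            continue
        mu1 = (levels[:n] * P[:n]).sum() / w1
        mu2 = (levels[n:] * P[n:]).sum() / w2
        var = w1 * (mu1 - mu_T) ** 2 + w2 * (mu2 - mu_T) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def class_means(hist, lo, hi, t):
    """Means of the two classes [lo, t] and [t+1, hi] of the restricted histogram."""
    j = np.arange(256).astype(float)
    h = hist.astype(float)
    m1 = (j[lo:t + 1] * h[lo:t + 1]).sum() / max(h[lo:t + 1].sum(), 1e-12)
    m2 = (j[t + 1:hi + 1] * h[t + 1:hi + 1]).sum() / max(h[t + 1:hi + 1].sum(), 1e-12)
    return m1, m2

def iimt_refine(hist, init_thresholds, init_means, delta=0.5, max_iter=50):
    """Refine each initial threshold by iterated Otsu inside the interval
    bounded by the class means of the previous iteration; stop when the
    change between consecutive thresholds falls below delta."""
    final = []
    for i, t_prev in enumerate(init_thresholds):
        lo, hi = int(round(init_means[i])), int(round(init_means[i + 1]))
        for _ in range(max_iter):
            t = otsu_single(hist, lo, hi)
            m1, m2 = class_means(hist, lo, hi, t)
            if abs(t - t_prev) < delta:
                break
            t_prev, lo, hi = t, int(round(m1)), int(round(m2))
        final.append(t)
    return final
```

Because each refinement runs inside an interval that shrinks toward a pair of class means, the per-iteration cost stays far below an exhaustive multilevel search.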

3. The Proposed Algorithm

3.1. The Framework

The framework of the proposed algorithm is shown in Figure 6. It is illustrated as follows.
(1)
A hybrid L1L0 layer decomposition method is performed on the original image to obtain its base layer.
(2)
The original image and its base layer are segmented by the IIMT algorithm, and their segmentation results are denoted A and B, respectively.
(3)
The segmentation fusion method is applied to A and B to obtain the final segmentation result.

3.2. Hybrid L1 − L0 Layer Decomposition

Given an image $I$ of size $M \times N$, the hybrid L1-L0 layer decomposition model can be defined as follows:

$$\min_{I^B} \sum_{i=1}^{M} \sum_{j=1}^{N} \left\{ (I_{i,j}^D)^2 + \lambda_1 \sum_{k \in \{H, V\}} \left| \partial_k I_{i,j}^B \right| + \lambda_2 \sum_{k \in \{H, V\}} F(\partial_k I_{i,j}^D) \right\} \tag{5}$$
where $I^B$ and $I^D$ denote the base layer and the detail layer, respectively, with $I^D = I - I^B$. They are governed by the L1 gradient sparsity term $|\partial_k I_{i,j}^B|$ and the L0 gradient sparsity term $F(\partial_k I_{i,j}^D)$, respectively. $\partial_k$ refers to the partial derivative along the horizontal ($H$) or vertical ($V$) direction. $F$ is an indicator function, defined as:

$$F(t) = \begin{cases} 1, & \text{if } t \neq 0 \\ 0, & \text{otherwise} \end{cases} \tag{6}$$
For convenience of calculation, Equation (5) can be rewritten in matrix–vector form as follows:

$$\min_{\mathbf{b}} \left( \frac{1}{2} \|\mathbf{d}\|_2^2 + \lambda_1 \|\nabla \mathbf{b}\|_1 + \lambda_2 \mathbf{1}^T F(\nabla \mathbf{d}) \right) \tag{7}$$
where $\mathbf{b}, \mathbf{d} \in \mathbb{R}^{MN \times 1}$ denote the vectorized forms of $I^B$ and $I^D$, respectively. $\mathbf{1} \in \mathbb{R}^{2MN}$ is a vector of all ones. $\nabla = [\nabla_x^T, \nabla_y^T]^T \in \mathbb{R}^{2MN \times MN}$, where $\nabla_x$ and $\nabla_y$ represent the gradient operator matrices in the $x$ and $y$ directions, respectively. $F(\nabla \mathbf{d})$ is a binary vector, with $F$ applied element-wise.
By means of the augmented Lagrangian multiplier method, Equation (7) can be converted into the following function:

$$L(\mathbf{b}, \mathbf{d}, \mathbf{c}_1, \mathbf{c}_2, \mathbf{y}_1, \mathbf{y}_2) = \frac{1}{2}\|\mathbf{d}\|_2^2 + \lambda_1 \|\mathbf{c}_1\|_1 + \lambda_2 \mathbf{1}^T F(\mathbf{c}_2) + (\mathbf{c}_1 - \nabla\mathbf{b})^T \mathbf{y}_1 + (\mathbf{c}_2 - \nabla\mathbf{d})^T \mathbf{y}_2 + \frac{\rho}{2}\left(\|\mathbf{c}_1 - \nabla\mathbf{b}\|_2^2 + \|\mathbf{c}_2 - \nabla\mathbf{d}\|_2^2\right) \tag{8}$$
where $\mathbf{c}_1, \mathbf{c}_2 \in \mathbb{R}^{2MN}$ denote two auxiliary variables and $\mathbf{y}_1, \mathbf{y}_2$ represent the corresponding Lagrangian dual variables. The optimal solution is obtained within a few iterations (15 iterations in [38]).
After hybrid L1-L0 layer decomposition, the base layer of the original image is used for segmentation in the framework of the proposed algorithm. Figure 7 displays an example of decomposition. In Figure 7, the first column contains two original images, and the second column contains the two corresponding base layers. From Figure 7b, it can be seen that the base layers are visually smooth and eliminate some weak edges.
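For reference, the objective of Equation (5) can be evaluated directly for a candidate base layer. The sketch below (Python/NumPy) only scores a given decomposition; it is not the ADMM solver of [38], and the forward-difference discretization with replicated borders is an assumption, as the source does not specify one:

```python
import numpy as np

def hybrid_l1l0_objective(I, B, lam1, lam2):
    """Value of the hybrid L1-L0 objective (Equation (5)) for base layer B.

    D = I - B is the detail layer; gradients are forward differences with
    replicated borders (an assumed discretization)."""
    D = I - B
    def grads(X):
        gx = np.diff(X, axis=1, append=X[:, -1:])   # horizontal partial derivative
        gy = np.diff(X, axis=0, append=X[-1:, :])   # vertical partial derivative
        return gx, gy
    fidelity = (D ** 2).sum()                       # (I^D)^2 data term
    l1 = sum(np.abs(g).sum() for g in grads(B))     # L1 gradient sparsity of base
    l0 = sum((g != 0).sum() for g in grads(D))      # L0 count via indicator F, Eq. (6)
    return fidelity + lam1 * l1 + lam2 * float(l0)
```

Such an evaluator is useful for sanity-checking a solver: the objective should decrease monotonically (or nearly so) over the ADMM iterations.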

3.3. Segmentation Fusion

A segmentation fusion method [39] is adopted to fuse different segmentation results. In the process of fusion, both spatial and intensity information is taken into account. The final segmentation result after fusion is more accurate.
Let M1, M2 represent two different segmentation maps of original image I, respectively. The pixels in image I can be grouped into two different classes by comparing M1 and M2. One is named the uncontested class, in which the class labels of the pixel in M1 and M2 are the same. The other one is named the controversial class, in which the class labels of the pixel in M1 and M2 are different. Generally, the uncontested pixels do not need to be reclassified, and the controversial pixels are considered to be misclassified and thus need to be reclassified.
Assuming that $p$ is the location of a controversial pixel in image $I$, let $l(p \mid M_1) = l_a$ and $l(p \mid M_2) = l_b$ denote $p$'s two different labels in $M_1$ and $M_2$, respectively. The reclassified class label of pixel $p$ is calculated by:

$$l(p) = \begin{cases} l_a, & \text{if } \sum\limits_{q \in \Omega_p^r,\, l(q) = l_a} SIM(p,q) > \sum\limits_{q \in \Omega_p^r,\, l(q) = l_b} SIM(p,q) \\ l_b, & \text{otherwise} \end{cases} \tag{9}$$
where $\Omega_p^r$ denotes $p$'s effective neighborhood with radius $r$, and $SIM(p,q)$ refers to the similarity coefficient between $p$ and $q$, defined as:
$$SIM(p,q) = \exp\left( -\frac{Dis(p,q)^2}{2\alpha^2} - \frac{|I(p)-I(q)|^2}{2\beta^2} \right) \tag{10}$$
where $Dis(p,q)$ denotes the spatial distance between $p$ and $q$, and $I(\cdot)$ refers to the gray value of a pixel. $\alpha$ and $\beta$ are two parameters that balance the distance and intensity differences in constructing the similarity coefficient ($\alpha = 1$, $\beta = 1$ in [36]).
Figure 8 shows a simple example of segmentation fusion. In Figure 8, it can be observed that all the pixels {pij} (i, j = 1, …, 5) are partitioned into three classes l1, l2, l3. The uncontested pixels are shown in Figure 8a. Pixels p11, p12, p13, p23, p24, p51, p52, p53, p54, p55 belong to class l1. Pixels p21, p22, p31, p32, p41 belong to class l2. Pixels p15, p25, p34, p35, p44, p45 belong to class l3. The remaining pixels p14, p33, p42, p43 are controversial pixels, as shown in Figure 8b. The class labels of each controversial pixel in M1 and M2 are inconsistent. Taking pixel p14 as an example, p14's class label in map M1 is l(p14 | M1) = l1. However, it is classified into class l3 in map M2, i.e., l(p14 | M2) = l3. The four controversial pixels need to be reclassified by Equation (9). In Figure 8c, it can be seen that their final class labels are l(p14) = l3, l(p33) = l2, l(p42) = l1, l(p43) = l3. Finally, the segmentation fusion result F (Figure 8d) is obtained by combining the uncontested pixels (Figure 8a) and the reclassified pixels (Figure 8c).
Segmentation maps obtained by IIMT may contain islands or isolated holes. The fusion scheme is employed to integrate the two segmentation maps to reduce misclassification pixels. It may eliminate the islands or isolated holes to obtain a better segmentation result.
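The reclassification rule of Equations (9) and (10) can be sketched as follows (Python/NumPy). One simplifying assumption is made: the neighbour labels l(q) are read from map M1, since the source does not state which map supplies them:

```python
import numpy as np

def fuse_segmentations(I, M1, M2, r=1, alpha=1.0, beta=1.0):
    """Fuse two label maps: uncontested pixels keep their label; each
    controversial pixel takes the label (l_a or l_b) with the larger summed
    similarity, following Equations (9)-(10)."""
    H, W = I.shape
    out = M1.copy()
    for y in range(H):
        for x in range(W):
            la, lb = M1[y, x], M2[y, x]
            if la == lb:
                continue                           # uncontested pixel
            s_a = s_b = 0.0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    ny, nx = y + dy, x + dx
                    if (dy == 0 and dx == 0) or not (0 <= ny < H and 0 <= nx < W):
                        continue
                    # SIM(p, q) of Equation (10): close + similar gray -> high
                    s = np.exp(-((dy ** 2 + dx ** 2) / (2 * alpha ** 2)
                                 + (float(I[y, x]) - float(I[ny, nx])) ** 2
                                 / (2 * beta ** 2)))
                    if M1[ny, nx] == la:
                        s_a += s
                    elif M1[ny, nx] == lb:
                        s_b += s
            out[y, x] = la if s_a > s_b else lb
    return out
```

In practice the disputed pixel is pulled toward whichever of its two candidate regions it resembles, both spatially and in intensity, which is what removes small islands and isolated holes.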

4. Experimental Results and Analysis

4.1. Experimental Protocols

Transaxial MR-T2 brain images with various slices downloaded from "The Whole Brain Atlas" of Harvard Medical School (http://www.med.harvard.edu/aanlib/home.html, accessed on 17 May 2021) were used in the segmentation experiments. Owing to space limitations, the ten brain slices #022~#112 displayed in Figure 9 were chosen to demonstrate the performance of our proposed algorithm. The parameters of the proposed algorithm are listed in Table 1. All experiments were performed on a computer with an Intel(R) Core(TM) i7-7500U CPU at 2.70 GHz, 8 GB RAM, and Windows 10, using MATLAB 8.1.0.604 (R2013a).

4.2. Evaluation Measure

To quantitatively evaluate the proposed algorithm and other comparison algorithms, four objective evaluation metrics were adopted in the experiments, namely, (1) uniformity measure [39,40], (2) misclassification error [7], (3) Hausdorff distance [41], and (4) Jaccard index [42].
(1)
Uniformity measure
The uniformity measure can reflect the intensity difference of pixels in the same segmented class or in different segmented classes. It is defined as follows:
$$U = 1 - 2 \times K \times \frac{\sum_{j=1}^{K+1} \sum_{i \in S_j} \left( I_i - Ave(S_j) \right)^2}{M \times N \times (I_{max} - I_{min})^2} \tag{11}$$
where K denotes the number of thresholds; Ii represents the gray value of pixel i in original image I; Sj refers to the jth segmented class of image I; A v e ( S j ) denotes the average gray value of all pixels in Sj; M × N represents the size of image I; Imax and Imin denote the maximum gray value and the minimum gray value of pixels in image I, respectively. The values of uniformity measure U are between 0 and 1. The higher the value, the better the performance, and vice versa.
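Equation (11) can be computed directly from a segmentation label map. The sketch below (Python/NumPy) assumes the gray range term (Imax − Imin) is squared, as in the standard uniformity measure:

```python
import numpy as np

def uniformity(I, labels, K):
    """Uniformity measure U of Equation (11): one minus 2*K times the
    within-class squared deviation, normalized by image size and the
    squared gray-level range."""
    I = I.astype(float)
    within = 0.0
    for j in np.unique(labels):
        region = I[labels == j]
        within += ((region - region.mean()) ** 2).sum()  # sum over pixels of S_j
    rng = I.max() - I.min()
    return 1.0 - 2.0 * K * within / (I.size * rng ** 2)
```

When every segmented class is perfectly homogeneous, the within-class term vanishes and U reaches its maximum of 1.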
To fully assess the performance of the proposed algorithm, three common metrics in addition to the uniformity measure were used in the comparison experiments. Let R 1 denote the automatic segmentation of image I, and R 2 denote the ground-truth segmentation.
(2)
Misclassification error
Misclassification error refers to the probability of pixels being misclassified, namely, the ratio of foreground pixels incorrectly classified as background pixels and background pixels incorrectly classified as foreground pixels, to all pixels. Misclassification error is defined as:
$$ME = 1 - \frac{\left| R_1^{foreground} \cap R_2^{foreground} \right| + \left| R_1^{background} \cap R_2^{background} \right|}{\left| R_2^{foreground} \right| + \left| R_2^{background} \right|} \tag{12}$$
where R 1 f o r e g r o u n d and R 1 b a c k g r o u n d denote the foreground region and background region of R1, respectively; R 2 f o r e g r o u n d and R 2 b a c k g r o u n d denote the foreground region and background region of R2, respectively.
(3)
Hausdorff distance
The Hausdorff distance is defined as:
$$H(R_1, R_2) = \max\{ h(R_1, R_2),\ h(R_2, R_1) \} \tag{13}$$

where $h(R_1, R_2) = \max_{a_i \in R_1} \min_{b_j \in R_2} \|a_i - b_j\|$ and $h(R_2, R_1) = \max_{b_j \in R_2} \min_{a_i \in R_1} \|b_j - a_i\|$. A higher Hausdorff distance indicates a larger difference between the two segmentations $R_1$ and $R_2$; hence, a satisfactory segmentation corresponds to a low Hausdorff distance.
(4)
Jaccard index
The Jaccard index is defined as:
$$J(R_1, R_2) = \frac{|R_1 \cap R_2|}{|R_1 \cup R_2|} \tag{14}$$
The value of Jaccard index varies from 0 to 1. Higher values of J indicate better segmentation.
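For completeness, the three ground-truth-based metrics of Equations (12)–(14) can be sketched as follows (Python/NumPy; masks are boolean arrays, point sets are coordinate arrays):

```python
import numpy as np

def misclassification_error(seg, gt):
    """ME of Equation (12): fraction of pixels whose foreground/background
    assignment disagrees with the ground truth (boolean masks, True = fg)."""
    agree = np.logical_and(seg, gt).sum() + np.logical_and(~seg, ~gt).sum()
    return 1.0 - agree / gt.size

def hausdorff(A, B):
    """Symmetric Hausdorff distance of Equation (13) between two point sets
    given as (n, 2) coordinate arrays."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

def jaccard(seg, gt):
    """Jaccard index of Equation (14) for two boolean masks."""
    inter = np.logical_and(seg, gt).sum()
    union = np.logical_or(seg, gt).sum()
    return inter / union
```

The pairwise-distance Hausdorff computation is quadratic in the number of boundary points; for large contours, a KD-tree-based nearest-neighbour query is the usual optimization.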

4.3. Comparison with Otsu-Based Method

In this paper, the newly proposed segmentation algorithm (subsequently referred to as "Proposed") is based on the Otsu method. To verify its effectiveness, this subsection compares it with three Otsu-based algorithms in terms of single thresholding (K = 1) and multilevel thresholding (K = 2, 3, 4, 5). The comparison algorithms are (1) the original Otsu method (Otsu), (2) the newly proposed interval iteration multilevel thresholding method (IIMT), and (3) IIMT based on hybrid L1-L0 layer decomposition (HL-IIMT).
Figure 10 and Figure 11 display the segmentation results of the different algorithms for slice #042 and slice #082, respectively. For single-level thresholding (K = 1), it can be observed that the segmentation results obtained by the Otsu method have many fragmented small areas, such as the lower soft tissue in the first row of Figure 10a, whereas IIMT performs slightly better. The edges segmented by HL-IIMT and Proposed, however, are much clearer. In the case of K ≥ 2, Otsu and IIMT have similar segmentation effects, while HL-IIMT and Proposed are better than Otsu and IIMT in terms of edge preservation and denoising, as shown in the segmentation results in Figure 11 (K = 2, K = 4).
Table 2 shows the uniformity measure (U) values of the Proposed, HL-IIMT, IIMT, and Otsu algorithms for slice #042 and slice #082. The best evaluation results are marked in bold. The U values achieved by Proposed are the highest for both test images. To present the results more clearly, Figure 12 illustrates the comparison of U for the different algorithms based on Table 2. In Figure 12, it can be clearly seen that Proposed achieves the highest values, with HL-IIMT second, followed by IIMT and Otsu. This indicates that the novel thresholding method IIMT presented in this paper is effective, and that our proposed algorithm based on IIMT can obtain satisfactory segmentation results with clear edges and little noise.

4.4. Experimental Results on Images Containing Noise

This subsection compares segmentation results of different algorithms (Proposed, Otsu, IIMT, and HL-IIMT) on images containing noise. Figure 13 displays five images with Gaussian noise N (0, 0.001) added to images #022, #042, #062, #082, and #102, which were selected from Figure 9.
Figure 14 displays the segmentation results of images containing noise with a single level of thresholding K = 1. It can be observed that segmentation results achieved by HL-IIMT and Proposed are distinctly better than those of Otsu and IIMT, which have many isolated points. Figure 15 depicts segmentation results obtained by different algorithms with multilevel thresholding K = 4. Obviously, segmentation results of Otsu, IIMT, and HL-IIMT are seriously affected by noise, and most regions are blurred. However, the results of Proposed are better, and they have less noise and clearer edges.
A comparison of the evaluation results for different segmentation algorithms on images containing noise with K = 1, 4 is shown in Table 3, and corresponding comparison charts are given in Figure 16. In Table 3, the best results are marked in bold. It can be noted that Proposed consistently has the highest U values. For images containing noise, both the IIMT-based algorithms (HL-IIMT and Proposed) are superior to the original Otsu method in single threshold segmentation; furthermore, Proposed can achieve satisfactory results in multilevel threshold segmentation compared to the other three algorithms (IIMT, HL-IIMT, and Otsu).

4.5. Comprehensive Comparison

To comprehensively evaluate the performance of our proposed algorithm, the segmentation results of "Proposed" were compared with those of six other multilevel thresholding algorithms in this experiment, namely, the local Laplacian filtering and discrete curve evolution-based method (LLF-DCE) [39], the particle swarm optimization-based method (PSO), the bacterial foraging-based method (BF) and adaptive bacterial foraging-based method (ABF) [43], the Nelder–Mead simplex-based method (NMS), and the real coded genetic algorithm (RCGA) [40]. Brief descriptions of the seven algorithms are as follows.
(1)
Proposed
In the proposed algorithm, the initial thresholds and mean value of each class are obtained by Otsu multilevel thresholding. Then, Otsu single thresholding is iteratively performed on each interval to search for the optimal threshold in the sub-region.
(2)
LLF-DCE
In the LLF-DCE method, discrete curve evolution (DCE) is used to simplify the curve shape of the image histogram, and important points, which generally lie in peak or valley regions, are retained [39]. The gray levels corresponding to these points form a series of intervals. Then, Otsu single thresholding is performed in each interval to search for the optimal threshold.
(3)
PSO
PSO is a stochastic global optimization algorithm and simulates the foraging behavior of birds. The bird is simulated by a massless particle which has two attributes: speed and position. The optimal solution can be sought by continuously updating the speed and position.
(4)
BF
BF is a heuristic algorithm. In the process of maximizing Kapur’s entropy and between-class variance, BF is adopted to search for optimal thresholds by simulating the foraging behavior of Escherichia coli in the human gut. The behavior specifically includes four actions: chemotaxis, swarming, reproduction, and elimination-dispersal.
(5)
ABF
In the ABF method, an adaptive step size is employed in the traditional bacterial foraging method to improve the exploration and exploitation capability.
(6)
NMS
NMS is a direct search method for multi-dimensional unconstrained minimization. It is used here to optimize the maximum entropy criterion to identify the optimum thresholds.
(7)
RCGA
In the RCGA method, simulated binary crossover (SBX) is employed in crossover and mutation mechanisms of a real coded genetic algorithm. SBX is essentially adaptive, and it creates child solutions proportionally based on the difference in parent solutions. Then, the optimal thresholds are found by maximizing Kapur’s entropy.
Figure 17 depicts the segmentation results of Proposed for brain slices #022~#112 with the number of thresholds K from 2 to 5. Segmentation results with different threshold numbers show different effects: in general, the higher the level of thresholding, the better the segmentation quality. Table 4 displays the comparison of optimal threshold values obtained by different algorithms with K = 2, 3, 4, 5. The proposed algorithm and LLF-DCE are both based on the fusion scheme; the former combines two different segmentation results obtained by IIMT and HL-IIMT, and the latter combines two different segmentation results obtained by LLF-Otsu and DCE-Otsu. In Table 4, it can be seen that the final thresholds selected by the different algorithms differ from each other.
Table 5 shows the uniformity measure (U) values of the different segmentation algorithms. The best results are marked in bold. It is clear that the U value of Proposed is the highest for each test image and each level of thresholding. The proposed algorithm is superior to PSO, BF, ABF, NMS, and RCGA in most cases. Taking test image #062 as an example, in the cases of K = 2 and 4, the U values of Proposed are above 0.98, whereas the best evaluation result of the above five algorithms is merely 0.9236 (PSO, K = 4). For K = 3 and 5, the U values of Proposed are above 0.99, whereas the best results obtained by PSO, BF, ABF, NMS, and RCGA are 0.9835 (NMS, K = 5) and 0.9855 (RCGA, K = 5), and the remainder are all below 0.95. Compared to the LLF-DCE method, the evaluation values of Proposed and LLF-DCE are not significantly different, and Proposed performs slightly better for each test image.
In order to show the comprehensive performance of the proposed algorithm, Figure 18 shows the average values and standard deviations of U for the different segmentation algorithms with the number of thresholds K from 2 to 5. The average U values of the proposed algorithm are higher than those of the other comparison algorithms for each level of thresholding, which indicates superior segmentation quality. In particular, they are significantly higher than the average U values of PSO, BF, ABF, NMS, and RCGA in the cases of K = 2, 3, 4. The error bars (standard deviations) of Proposed and LLF-DCE are clearly shorter than those of the other segmentation algorithms. Figure 19 shows the comparison of the average values of the misclassification error, Hausdorff distance, and Jaccard index for the different algorithms. The proposed algorithm achieves the lowest misclassification error and Hausdorff distance, and the highest Jaccard index. In addition, LLF-DCE also performs well compared with the others.
In summary, the proposed algorithm outperforms the other segmentation algorithms in the comparison: it not only achieves good segmentation results but also exhibits excellent stability.
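As a rough illustration of the fusion idea used by the proposed algorithm (uncontested pixels are kept, controversial pixels are reclassified from their neighborhood, cf. Figure 8), the sketch below uses a simple majority vote over agreed labels within radius r; the paper's actual reclassification rule may differ:

```python
import numpy as np

def fuse_segmentations(m1, m2, r=12):
    """Fuse two label maps of the same image: where they agree the label
    is kept; where they disagree the pixel is reassigned to the label that
    is most common among agreed pixels in a (2r+1)x(2r+1) window.
    Simplified majority-vote sketch of the fusion scheme."""
    m1, m2 = np.asarray(m1), np.asarray(m2)
    fused = m1.copy()
    agree = m1 == m2
    for y, x in zip(*np.where(~agree)):
        ys = slice(max(0, y - r), y + r + 1)
        xs = slice(max(0, x - r), x + r + 1)
        labels = m1[ys, xs][agree[ys, xs]]  # agreed labels in the window
        if labels.size:
            vals, counts = np.unique(labels, return_counts=True)
            fused[y, x] = vals[np.argmax(counts)]
    return fused
```

With the radius r = 12 from Table 1, an isolated disagreeing pixel is absorbed into the label that dominates its local neighborhood.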

4.6. Experimental Results on BRATS Database

In this subsection, we applied the proposed algorithm to the BRATS (Multimodal Brain Tumor Image Segmentation Benchmark) database. The BRATS database (http://www.imm.dtu.dk/projects/BRATS2012/data.html, accessed on 25 September 2021) was compiled for the international brain tumor segmentation challenge at the MICCAI 2012 conference. It is a widely used database composed of multi-contrast brain MR scans of 25 low-grade and 25 high-grade glioma cases together with the corresponding ground truth. Each case includes four modalities—T1, T1c, T2, and FLAIR [44]—and each MR scanning sequence contains more than one hundred images. Figure 20 presents an example of brain MR images from BRATS: Figure 20a shows the original images, and the corresponding ground truth is displayed in Figure 20b.
The performance of the proposed algorithm on BRATS was compared with that of the other segmentation algorithms in terms of the uniformity measure, misclassification error, Hausdorff distance, and Jaccard index. Figure 21 shows the average evaluation values for the different algorithms. The proposed algorithm achieves excellent results for the uniformity measure and Hausdorff distance (Figure 21a,c), clearly better than those of the other algorithms. In Figure 21b,d, the proposed algorithm again performs best, followed by LLF-DCE.

5. Conclusions

In this paper, a novel multilevel thresholding algorithm based on interval iteration (named IIMT) for brain MR images is proposed. In contrast to most other multilevel thresholding methods, IIMT iteratively searches sub-regions of the image to achieve segmentation, rather than taking the original image as a whole. First, standard Otsu multilevel thresholding is performed on the original image to obtain initial thresholds and class means. Then, in each succeeding iteration, standard Otsu single thresholding is used to determine the threshold within each interval formed by the class means derived in the previous iteration. For two adjacent peaks in the gray histogram, the optimal threshold between them is considered found when the difference between the thresholds obtained in two consecutive iterations is less than a preset value; the iteration stops once all optimal thresholds are found. Furthermore, we presented an IIMT-based segmentation framework for brain MR images. The hybrid L1L0 layer decomposition method is utilized to decompose the original image and derive its base layer. IIMT is performed separately on the original image and its base layer to obtain two different segmentation results, and a fusion scheme is adopted to fuse these two results and improve the segmentation accuracy. Experimental results verified that the proposed algorithm is applicable and achieves satisfactory segmentation results. Compared to other multilevel thresholding algorithms, it produces a better visual effect; subjectively, its segmentation results have clear edges and little noise. The uniformity measure, misclassification error, Hausdorff distance, and Jaccard index objectively demonstrated its performance. The proposed algorithm segments medical images effectively and shows excellent stability and robustness for images containing noise.
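The interval iteration described above can be sketched in a few lines. This simplified illustration makes two assumptions not taken from the paper: the first-iteration multilevel Otsu is replaced by an even split of the gray range, and the single-threshold Otsu search runs directly on the histogram slice between adjacent class means:

```python
import numpy as np

def otsu_single(hist, lo, hi):
    """Single-threshold Otsu restricted to gray levels [lo, hi)."""
    p = hist[lo:hi].astype(np.float64)
    total = p.sum()
    if total == 0 or hi - lo < 2:
        return lo
    p /= total
    levels = np.arange(lo, hi)
    w = np.cumsum(p)            # class-0 probability up to each level
    m = np.cumsum(p * levels)   # class-0 cumulative mean numerator
    denom = w * (1.0 - w)
    safe = np.where(denom > 0, denom, 1.0)
    sigma_b = np.where(denom > 0, (m[-1] * w - m) ** 2 / safe, 0.0)
    return lo + int(np.argmax(sigma_b[:-1]))  # maximize between-class variance

def class_means(hist, thresholds):
    """Mean gray level of each class induced by the thresholds."""
    edges = [0] + [t + 1 for t in thresholds] + [len(hist)]
    means = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        weight = hist[lo:hi].sum()
        grays = np.arange(lo, hi)
        means.append(int((grays * hist[lo:hi]).sum() / weight) if weight else lo)
    return means

def iimt(image, K, delta=0.01, max_iter=50):
    """Interval-iteration multilevel thresholding (sketch): re-estimate
    each threshold by single Otsu inside the interval between adjacent
    class means, until every threshold moves by less than delta."""
    hist = np.bincount(np.asarray(image, np.uint8).ravel(), minlength=256)
    # Assumption of this sketch: initialise by an even split of the gray
    # range instead of a full multilevel Otsu search.
    thresholds = sorted(np.linspace(0, 255, K + 2).astype(int)[1:-1])
    for _ in range(max_iter):
        means = class_means(hist, thresholds)
        new = sorted(otsu_single(hist, means[i], means[i + 1]) for i in range(K))
        if all(abs(a - b) <= delta for a, b in zip(new, thresholds)):
            return new
        thresholds = new
    return thresholds
```

On a synthetic two-level image the returned single threshold separates the two gray values after a couple of iterations; restricting each Otsu search to the interval between adjacent class means is what keeps the per-iteration cost low compared with an exhaustive multilevel search.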
In clinical medicine, the proposed algorithm can assist doctors to diagnose diseases, locate the lesion area, and detect changes in tumor volume and size. It also can be used in pre-processing for other image processing technologies, such as image fusion.
In the future, our research can be extended in three directions. First, the interval-based threshold determination in IIMT can be incorporated into other multilevel thresholding algorithms and extended to 2D/3D Otsu or similar criteria, such as maximum entropy and minimum error. Second, more effective segmentation fusion strategies can be designed to improve the quality of medical image segmentation. Finally, deep convolutional neural networks can be applied to image segmentation; we will combine traditional segmentation techniques with deep learning models with the aim of achieving good segmentation results.

Author Contributions

Conceptualization, Y.F. and X.Z.; methodology, Y.F.; software, W.L., X.Z., Z.L. and Y.L.; validation, Y.F. and X.Z.; formal analysis, Y.F.; writing—original draft preparation, Y.F.; writing—review and editing, Y.F., W.L. and X.Z.; supervision, Y.F.; investigation, W.L.; data curation, W.L., Z.L. and Y.L.; funding acquisition, Y.F., X.Z. and G.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Youth Growth Science and Technology Plan Project of Jilin Provincial Department of Science and Technology under Grant 20210508039RQ, in part by the "Thirteenth Five-Year Plan" Scientific Research Planning Project of the Education Department of Jilin Province under Grants JJKH20200678KJ and JJKH20210752KJ, in part by the Fundamental Research Funds for the Central Universities, JLU, under Grant 93K172020K05, and in part by the National Natural Science Foundation of China under Grants 61876070 and 61801190.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

The authors would like to thank http://www.med.harvard.edu/aanlib/home.html (accessed on 17 May 2021) and http://www.imm.dtu.dk/projects/BRATS2012/data.html (accessed on 25 September 2021) for providing source medical images.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Wang, R.; Cao, S.; Ma, K.; Zheng, Y.; Meng, D. Pairwise learning for medical image segmentation. Med. Image Anal. 2020, 67, 101876.
2. Oktay, O.; Ferrante, E.; Kamnitsas, K.; Heinrich, M.; Bai, W.; Caballero, J.; Cook, S.A.; De Marvao, A.; Dawes, T.; O'Regan, D.P.; et al. Anatomically constrained neural networks (ACNNs): Application to cardiac image enhancement and segmentation. IEEE Trans. Med. Imaging 2017, 37, 384–395.
3. Wang, T.; Ji, Z.; Yang, J.; Sun, Q.; Shen, X.; Ren, Z.; Ge, Q. Label Group Diffusion for Image and Image Pair Segmentation. Pattern Recognit. 2020, 112, 107789.
4. Mărginean, R.; Andreica, A.; Dioşan, L.; Bálint, Z. Butterfly effect in chaotic image segmentation. Entropy 2020, 22, 1028.
5. Monteiro, C.; Campilho, C. Performance Evaluation of Image Segmentation. In International Conference Image Analysis and Recognition; Springer: Berlin/Heidelberg, Germany, 2006; pp. 248–259.
6. Zou, L.; Song, L.T.; Weise, T.; Wang, X.F.; Huang, Q.J.; Deng, R.; Wu, Z.Z. A Survey on Regional Level Set Image Segmentation Models based on the Energy Functional Similarity Measure. Neurocomputing 2020, 452, 606–622.
7. Apro, M.; Pal, S.; Dedijer, S. Evaluation of single and multi-threshold entropy-based algorithms for folded substrate analysis. J. Graph. Eng. Des. 2011, 2, 1–9.
8. Raki, M.; Cabezas, M.; Kushibar, K.; Oliver, A.; Lladó, X. Improving the Detection of Autism Spectrum Disorder by Combining Structural and Functional MRI Information. NeuroImage Clin. 2020, 25, 102181.
9. Yuan, Z.G.; Bi, L.X.; Hui, H.G. Survey on Medical Image Computer Aided Detection and Diagnosis Systems. J. Softw. 2018, 29, 1471–1514.
10. Tian, J.X.; Liu, G.C.; Gu, S.S. Deep Learning in Medical Image Analysis and Its Challenges. Acta Autom. Sin. 2018, 44, 401–424.
11. Dekhil, O.; Ali, M.; Haweel, R.; Elnakib, Y.; Ghazal, M.; Hajjdiab, H.; Fraiwan, L.; Shalaby, A.; Soliman, A.; Mahmoud, A.; et al. A Comprehensive Framework for Differentiating Autism Spectrum Disorder from Neurotypicals by Fusing Structural MRI and Resting State Functional MRI. Semin. Pediatric Neurol. 2020, 34, 100805.
12. Fu, S.; Mui, K. A survey on image segmentation. Pattern Recognit. 1981, 13, 3–16.
13. Wu, B.; Zhou, J.; Ji, X.; Yin, Y.; Shen, X. An ameliorated teaching-learning-based optimization algorithm based study of image segmentation for multilevel thresholding using Kapur's entropy and Otsu's between class variance. Inf. Sci. 2020, 533, 72–107.
14. Zortea, M.; Flores, E.; Scharcanski, J. A simple weighted thresholding method for the segmentation of pigmented skin lesions in macroscopic images. Pattern Recognit. 2017, 64, 92–104.
15. Ghamisi, P.; Couceiro, M.S.; Martins, F.M.L.; Benediktsson, J.A. Multilevel image segmentation based on fractional-order Darwinian particle swarm optimization. IEEE Trans. Geosci. Remote Sens. 2013, 52, 2382–2394.
16. Tan, K.S.; Isa, N.A.M. Color image segmentation using histogram thresholding-Fuzzy C-means hybrid approach. Pattern Recognit. 2011, 44, 1–15.
17. Zhang, X.; Sun, Y.; Liu, H.; Hou, Z.; Zhao, F.; Zhang, C. Improved clustering algorithms for image segmentation based on non-local information and back projection. Inf. Sci. 2021, 550, 129–144.
18. Huang, J.; You, X.; Tang, Y.Y.; Du, L.; Yuan, Y. A novel iris segmentation using radial-suppression edge detection. Signal Process. 2009, 89, 2630–2643.
19. Rampun, A.; López-Linares, K.; Morrow, P.J.; Scotney, B.W.; Wang, H.; Ocaña, I.G.; Maclair, G.; Zwiggelaar, R.; Ballester, M.A.; Macía, I. Breast pectoral muscle segmentation in mammograms using a modified holistically-nested edge detection network. Med. Image Anal. 2019, 57, 1–17.
20. Soltani-Nabipour, J.; Khorshidi, A.; Noorian, B. Lung tumor segmentation using improved region growing algorithm. Nucl. Eng. Technol. 2020, 52, 2313–2319.
21. Xu, G.; Li, X.; Lei, B.; Lv, K. Unsupervised color image segmentation with color-alone feature using region growing pulse coupled neural network. Neurocomputing 2018, 306, 1–16.
22. Siriapisith, T.; Kusakunniran, W.; Haddawy, P. Pyramid graph cut: Integrating intensity and gradient information for grayscale medical image segmentation. Comput. Biol. Med. 2020, 126, 103997.
23. Ortuño-Fisac, J.E.; Vegas-Sánchez-Ferrero, G.; Gómez-Valverde, J.J.; Chen, M.Y.; Santos, A.; McVeigh, E.R.; Ledesma-Carbayo, M.J. Automatic estimation of aortic and mitral valve displacements in dynamic CTA with 4D graph-cuts. Med. Image Anal. 2020, 65, 101748.
24. Yu, Q.; Shi, Y.; Sun, J.; Gao, Y.; Zhu, J.; Dai, Y. Crossbar-net: A novel convolutional neural network for kidney tumor segmentation in CT images. IEEE Trans. Image Process. 2019, 28, 4060–4074.
25. Bhandari, A.K.; Kumar, A.; Singh, G.K. Modified artificial bee colony based computationally efficient multilevel thresholding for satellite image segmentation using Kapur's, Otsu and Tsallis functions. Expert Syst. Appl. 2015, 42, 1573–1601.
26. Otsu, N. A threshold selection method from gray-level histograms. Automatica 1975, 11, 23–27.
27. Kapur, J.; Sahoo, P.; Wong, A. A new method for gray-level picture thresholding using the entropy of the histogram. Comput. Vis. Graph. Image Process. 1985, 29, 273–285.
28. Kittler, J.; Illingworth, J. Minimum error thresholding. Pattern Recognit. 1986, 19, 41–47.
29. Lifang, H.; Songwei, H. An efficient krill herd algorithm for color image multilevel thresholding segmentation problem. Appl. Soft Comput. 2020, 89, 106063.
30. Abd Elaziz, M.; Lu, S. Many-objectives Multilevel Thresholding Image Segmentation using Knee Evolutionary Algorithm. Expert Syst. Appl. 2019, 125, 305–316.
31. Lei, B.; Fan, J. Image thresholding segmentation method based on minimum square rough entropy. Appl. Soft Comput. 2019, 84, 105687.
32. Yan, Z.; Zhang, J.; Yang, Z.; Tang, J. Kapur's entropy for underwater multilevel thresholding image segmentation based on whale optimization algorithm. IEEE Access 2020, 9, 41294–41319.
33. Singh, P. A neutrosophic-entropy based adaptive thresholding segmentation algorithm: A special application in MR images of Parkinson's disease. Artif. Intell. Med. 2020, 104, 101838.
34. Omid, T.; Haifeng, S. An adaptive differential evolution algorithm to optimal multi-level thresholding for MRI brain image segmentation. Expert Syst. Appl. 2019, 138, 112820.
35. Zhao, D.; Liu, L.; Yu, F.; Heidari, A.A.; Wang, M.; Liang, G.; Muhammad, K.; Chen, H. Chaotic random spare ant colony optimization for multi-threshold image segmentation of 2D Kapur entropy. Knowl.-Based Syst. 2021, 216, 106510.
36. Cai, H.; Yang, Z.; Cao, X.; Xia, W.; Xu, X. A new iterative triclass thresholding technique in image segmentation. IEEE Trans. Image Process. 2014, 23, 1038–1046.
37. Zhang, J.; Shi, Y.; Sun, J.; Wang, L.; Zhou, L.; Gao, Y.; Shen, D. Interactive medical image segmentation via a point-based interaction. Artif. Intell. Med. 2021, 111, 101998.
38. Liang, Z.; Xu, J.; Zhang, D.; Cao, Z.; Zhang, L. A hybrid l1-l0 layer decomposition model for tone mapping. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4758–4766.
39. Feng, Y.; Shen, X.; Chen, H.; Zhang, X. Segmentation fusion based on neighboring information for MR brain images. Multimed. Tools Appl. 2017, 76, 23139–23161.
40. Manikandan, S.; Ramar, K.; Iruthayarajan, M.; Srinivasagan, K.G. Multilevel thresholding for segmentation of medical brain images using real coded genetic algorithm. Measurement 2014, 47, 558–568.
41. Liew, L.; Anglin, M.; Banks, W.; Sondag, M.; Ito, K.L.; Kim, H.; Chan, J.; Ito, J.; Jung, C.; Khoshab, N.; et al. A large, open source dataset of stroke anatomical brain images and manual lesion segmentations. Sci. Data 2018, 5, 180011.
42. Qian, H.; Guo, X.; Xue, I. Image Understanding Based Global Evaluation Algorithm for Segmentation. Acta Electron. Sin. 2012, 40, 1989–1995.
43. Sathya, P.; Kayalvizhi, R. Optimal segmentation of brain MRI based on adaptive bacterial foraging algorithm. Neurocomputing 2011, 74, 2299–2313.
44. Menze, H.; Jakab, A.; Bauer, S.; Kalpathy-Cramer, J.; Farahani, K.; Kirby, J.; Burren, Y.; Porz, N.; Slotboom, J.; Wiest, R.; et al. The multimodal brain tumor image segmentation benchmark (BRATS). IEEE Trans. Med. Imaging 2015, 34, 1993–2024.
Figure 1. The gray histogram curve of an original image.
Figure 2. Thresholds (T1,1, T1,2, …, T1,K) and class means (μ1,1, μ1,2, …, μ1,K, μ1,K+1) from the first iteration: (a) thresholds and class means, (b) two divided classes C1 and CK+1.
Figure 3. Thresholds (T2,1, T2,2, …, T2,K) and class means (μ2,1, μ2,2, …, μ2,2K) from the second iteration: (a) thresholds and class means, (b) K+1 divided classes C1, C2, …, CK+1.
Figure 4. An example of updating class Ci: (a) the divided class Ci in the (s-1)th iteration, (b) the updated class Ci in the sth iteration.
Figure 5. All the obtained optimal thresholds T1, T2, …, TK.
Figure 6. Framework of the proposed algorithm.
Figure 7. Original images and their corresponding base layers. (a) Original images, (b) base layers.
Figure 8. An example of segmentation fusion. M1, M2 show two different segmentation maps of an original image: (a) shows the uncontested pixels; (b) shows the controversial pixels; (c) shows the reclassified pixels; (d) shows the segmentation fusion result F.
Figure 9. MR-T2 brain slices: (a) slice #022, (b) slice #032, (c) slice #042, (d) slice #052, (e) slice #062, (f) slice #072, (g) slice #082, (h) slice #092, (i) slice #102, (j) slice #112.
Figure 10. Segmentation results obtained by different segmentation algorithms for slice #042 with number of thresholds K from 1 to 5: (a) Otsu, (b) IIMT, (c) HL-IIMT, (d) Proposed.
Figure 11. Segmentation results obtained by different segmentation algorithms for slice #082 with number of thresholds K from 1 to 5: (a) Otsu, (b) IIMT, (c) HL-IIMT, (d) Proposed.
Figure 12. Values of the uniformity measure for different segmentation algorithms with number of thresholds K from 1 to 5. (a) #042, (b) #082.
Figure 13. Images containing noise via the addition of Gaussian noise N (0, 0.001): (a) slice #022, (b) slice #042, (c) slice #062, (d) slice #082, (e) slice #102.
Figure 14. Segmentation results obtained by different segmentation algorithms for images containing noise (K = 1): (a) Otsu, (b) IIMT, (c) HL-IIMT, (d) Proposed.
Figure 15. Segmentation results obtained by different segmentation algorithms for images containing noise (K = 4): (a) Otsu, (b) IIMT, (c) HL-IIMT, (d) Proposed.
Figure 16. Values of the uniformity measure for different segmentation algorithms with number of thresholds K = 1, 4: (a) K = 1, (b) K = 4.
Figure 17. Segmentation results obtained by the proposed algorithm for brain slices #022~#112: (a1j1) display the results of 2-thresholding; (a2j2) display the results of 3-thresholding; (a3j3) display the results of 4-thresholding; (a4j4) display the results of 5-thresholding.
Figure 18. Average values and standard deviations of the uniformity measure for different segmentation algorithms with number of thresholds K from 2 to 5.
Figure 19. Comparison of evaluation results for different algorithms: (a) average misclassification error, (b) average Hausdorff distance, (c) average Jaccard index.
Figure 20. An example of original images and ground truth from BRATS: (a) original images, (b) ground truth.
Figure 21. Comparison of evaluation results for different algorithms: (a) average uniformity measure, (b) average misclassification error, (c) average Hausdorff distance, (d) average Jaccard index.
Table 1. Parameter settings of the proposed algorithm.

| Parameter Setting | Description |
| --- | --- |
| δ = 0.01 | Value that stops the iteration in IIMT |
| λ1 = 1 | Weight of the base layer for hybrid L1L0 layer decomposition |
| λ2 = 0.1λ1 | Weight of the detail layer for hybrid L1L0 layer decomposition |
| r = 12 | Radius for segmentation fusion |
| K = 1, 2, 3, 4, 5 | Number of thresholds |
Table 2. Comparison of the uniformity measure (U) for different segmentation algorithms.

| Test Image | K | Proposed | HL-IIMT | IIMT | Otsu |
| --- | --- | --- | --- | --- | --- |
| #042 | 1 | 0.9858 | 0.9818 | 0.9773 | 0.9715 |
| | 2 | 0.9855 | 0.9805 | 0.9764 | 0.9705 |
| | 3 | 0.9893 | 0.9825 | 0.9759 | 0.9694 |
| | 4 | 0.9893 | 0.9814 | 0.9709 | 0.9608 |
| | 5 | 0.9914 | 0.9831 | 0.9717 | 0.9707 |
| #082 | 1 | 0.9827 | 0.9796 | 0.9708 | 0.9670 |
| | 2 | 0.9836 | 0.9799 | 0.9716 | 0.9687 |
| | 3 | 0.9927 | 0.9823 | 0.9733 | 0.9702 |
| | 4 | 0.9869 | 0.9802 | 0.9749 | 0.9713 |
| | 5 | 0.9938 | 0.9804 | 0.9750 | 0.9714 |
Table 3. Comparison of the uniformity measure (U) for different segmentation algorithms on images containing noise.

| Test Image | K | Proposed | HL-IIMT | IIMT | Otsu |
| --- | --- | --- | --- | --- | --- |
| #022 | 1 | 0.9892 | 0.9786 | 0.9652 | 0.9569 |
| | 4 | 0.9895 | 0.9795 | 0.9672 | 0.9608 |
| #042 | 1 | 0.9817 | 0.9723 | 0.9671 | 0.9646 |
| | 4 | 0.9856 | 0.9786 | 0.9685 | 0.9571 |
| #062 | 1 | 0.9780 | 0.9702 | 0.9519 | 0.9407 |
| | 4 | 0.9833 | 0.9728 | 0.9605 | 0.9547 |
| #082 | 1 | 0.9808 | 0.9719 | 0.9591 | 0.9520 |
| | 4 | 0.9856 | 0.9786 | 0.9688 | 0.9572 |
| #102 | 1 | 0.9869 | 0.9784 | 0.9556 | 0.9503 |
| | 4 | 0.9906 | 0.9813 | 0.9685 | 0.9622 |
Table 4. Comparison of optimal threshold values obtained by applying different segmentation algorithms to the test images. (Proposed fuses the results of IIMT and HL-IIMT; LLF-DCE fuses those of LLF-Otsu and DCE-Otsu.)

| Test Image | K | IIMT | HL-IIMT | LLF-Otsu | DCE-Otsu | PSO | BF | ABF | NMS | RCGA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| #022 | 2 | 40, 96 | 34, 103 | 26, 95 | 1, 77 | 97, 184 | 96, 184 | 96, 184 | 95, 184 | 96, 184 |
| | 3 | 42, 98, 156 | 22, 69, 125 | 26, 64, 103 | 1, 3, 77 | 69, 138, 207 | 65, 131, 186 | 69, 114, 185 | 58, 116, 185 | 58, 115, 185 |
| | 4 | 20, 48, 86, 126 | 20, 60, 100, 141 | 26, 64, 91, 132 | 1, 3, 5, 79 | 83, 116, 175, 207 | 52, 99, 148, 186 | 58, 113, 174, 208 | 43, 87, 132, 185 | 44, 87, 131, 186 |
| | 5 | 28, 70, 118, 164, 228 | 17, 51, 90, 132, 178 | 14, 26, 64, 91, 132 | 1, 3, 5, 68, 79 | 76, 119, 154, 184, 214 | 44, 90, 127, 170, 208 | 43, 88, 130, 176, 208 | 44, 104, 140, 176, 214 | 44, 86, 127, 174, 208 |
| #032 | 2 | 50, 112 | 43, 115 | 25, 102 | 1, 77 | 107, 185 | 110, 185 | 110, 185 | 110, 185 | 109, 185 |
| | 3 | 28, 70, 120 | 22, 73, 124 | 25, 82, 110 | 1, 3, 79 | 74, 157, 192 | 72, 120, 198 | 81, 134, 187 | 56, 115, 186 | 53, 116, 185 |
| | 4 | 24, 66, 112, 152 | 22, 73, 124, 181 | 25, 82, 94, 151 | 1, 3, 5, 81 | 95, 125, 164, 194 | 63, 119, 173, 208 | 58, 102, 142, 190 | 39, 83, 132, 189 | 39, 84, 131, 189 |
| | 5 | 30, 74, 118, 158, 204 | 15, 47, 81, 112, 148 | 25, 56, 88, 97, 151 | 1, 3, 5, 57, 81 | 80, 112, 139, 186, 213 | 63, 101, 140, 175, 207 | 52, 87, 128, 167, 198 | 29, 75, 124, 173, 207 | 34, 78, 123, 174, 207 |
| #042 | 2 | 54, 118 | 46, 120 | 29, 111 | 37, 87 | 111, 183 | 114, 184 | 114, 184 | 113, 184 | 114, 183 |
| | 3 | 34, 82, 130 | 27, 82, 130 | 29, 69, 132 | 37, 49, 141 | 80, 148, 178 | 70, 136, 188 | 74, 130, 185 | 84, 132, 188 | 84, 132, 187 |
| | 4 | 36, 76, 112, 156 | 21, 67, 105, 149 | 29, 69, 97, 144 | 37, 48, 95, 143 | 81, 125, 164, 197 | 62, 112, 156, 194 | 50, 100, 143, 190 | 29, 76, 128, 187 | 30, 75, 127, 188 |
| | 5 | 20, 56, 90, 126, 168 | 18, 60, 94, 128, 170 | 29, 69, 78, 108, 145 | 35, 49, 77, 95, 143 | 82, 115, 142, 184, 214 | 58, 114, 151, 188, 218 | 53, 97, 144, 184, 218 | 31, 76, 126, 178, 217 | 25, 69, 114, 156, 194 |
| #052 | 2 | 58, 114 | 49, 111 | 30, 103 | 31, 89 | 119, 186 | 117, 186 | 117, 186 | 118, 185 | 118, 185 |
| | 3 | 46, 88, 130 | 31, 88, 127 | 30, 75, 111 | 31, 45, 125 | 89, 113, 187 | 102, 156, 206 | 107, 158, 204 | 109, 166, 207 | 109, 165, 203 |
| | 4 | 22, 60, 96, 134 | 19, 65, 99, 135 | 30, 75, 93, 143 | 31, 45, 79, 127 | 79, 111, 141, 208 | 93, 124, 171, 210 | 90, 129, 173, 210 | 94, 132, 175, 210 | 91, 131, 174, 209 |
| | 5 | 22, 58, 88, 120, 156 | 35, 89, 120, 152, 194 | 14, 30, 75, 93, 143 | 31, 45, 79, 100, 127 | 65, 85, 131, 162, 203 | 56, 112, 144, 175, 209 | 56, 95, 133, 167, 203 | 20, 67, 120, 167, 207 | 24, 67, 118, 166, 203 |
| #062 | 2 | 58, 120 | 51, 118 | 31, 111 | 33, 103 | 109, 186 | 119, 190 | 119, 186 | 121, 187 | 121, 187 |
| | 3 | 48, 94, 144 | 42, 97, 142 | 31, 79, 134 | 33, 45, 133 | 112, 167, 187 | 97, 133, 183 | 102, 147, 199 | 101, 148, 195 | 101, 147, 196 |
| | 4 | 42, 84, 120, 164 | 19, 67, 106, 149 | 31, 79, 96, 151 | 33, 45, 81, 135 | 85, 134, 180, 203 | 98, 140, 182, 218 | 93, 135, 175, 212 | 94, 134, 176, 211 | 94, 134, 175, 211 |
| | 5 | 28, 64, 94, 128, 170 | 27, 75, 102, 137, 179 | 17, 31, 79, 96, 151 | 33, 45, 81, 116, 135 | 99, 119, 157, 181, 203 | 73, 104, 139, 184, 213 | 79, 111, 145, 179, 212 | 28, 68, 120, 168, 208 | 20, 65, 113, 158, 200 |
| #072 | 2 | 60, 120 | 52, 120 | 32, 133 | 33, 111 | 116, 177 | 117, 179 | 117, 179 | 118, 179 | 117, 179 |
| | 3 | 54, 100, 156 | 47, 103, 156 | 32, 76, 139 | 33, 45, 139 | 96, 178, 207 | 95, 147, 202 | 99, 150, 190 | 100, 142, 188 | 99, 141, 187 |
| | 4 | 48, 86, 122, 178 | 36, 87, 122, 174 | 32, 76, 93, 155 | 33, 45, 81, 141 | 96, 124, 161, 187 | 94, 129, 173, 214 | 95, 134, 174, 214 | 100, 140, 179, 214 | 99, 140, 179, 213 |
| | 5 | 48, 84, 110, 142, 188 | 17, 62, 94, 128, 179 | 32, 68, 81, 102, 155 | 33, 45, 63, 81, 141 | 72, 112, 151, 178, 197 | 87, 109, 139, 178, 210 | 87, 119, 150, 180, 214 | 10, 64, 120, 172, 211 | 14, 64, 119, 171, 211 |
| #082 | 2 | 60, 116 | 51, 113 | 32, 110 | 37, 103 | 110, 170 | 112, 169 | 111, 170 | 112, 169 | 111, 169 |
| | 3 | 54, 102, 158 | 47, 102, 158 | 32, 83, 143 | 37, 49, 137 | 103, 136, 198 | 114, 155, 210 | 111, 155, 201 | 103, 146, 189 | 103, 146, 190 |
| | 4 | 42, 82, 116, 168 | 20, 70, 105, 158 | 32, 83, 93, 166 | 37, 49, 87, 138 | 100, 129, 167, 188 | 103, 139, 175, 214 | 99, 135, 170, 210 | 98, 134, 169, 210 | 98, 133, 169, 210 |
| | 5 | 52, 88, 118, 154, 210 | 17, 63, 92, 121, 171 | 15, 32, 83, 93, 166 | 37, 49, 87, 99, 139 | 78, 105, 151, 180, 201 | 81, 122, 150, 182, 212 | 84, 113, 146, 178, 214 | 14, 62, 115, 168, 210 | 10, 62, 107, 148, 190 |
| #092 | 2 | 58, 108 | 55, 115 | 33, 104 | 35, 101 | 109, 175 | 108, 174 | 109, 174 | 109, 173 | 109, 174 |
| | 3 | 52, 92, 134 | 46, 97, 135 | 33, 78, 109 | 35, 47, 123 | 115, 134, 178 | 107, 144, 209 | 104, 158, 207 | 106, 158, 206 | 105, 158, 206 |
| | 4 | 40, 78, 106, 144 | 19, 70, 105, 143 | 33, 78, 93, 143 | 35, 47, 81, 125 | 77, 107, 149, 194 | 100, 129, 164, 208 | 102, 138, 171, 212 | 112, 152, 186, 220 | 97, 136, 211, 173 |
| | 5 | 24, 60, 84, 110, 148 | 18, 65, 94, 120, 154 | 33, 66, 83, 104, 143 | 35, 47, 81, 92, 125 | 90, 113, 165, 185, 206 | 85, 114, 147, 175, 212 | 96, 128, 158, 186, 216 | 10, 64, 110, 160, 205 | 5, 62, 109, 159, 205 |
| #102 | 2 | 56, 108 | 53, 114 | 31, 102 | 33, 99 | 98, 166 | 108, 174 | 108, 174 | 108, 173 | 107, 174 |
| | 3 | 50, 92, 136 | 45, 100, 144 | 31, 66, 108 | 33, 47, 127 | 113, 145, 180 | 103, 148, 189 | 98, 146, 189 | 94, 142, 189 | 94, 142, 190 |
| | 4 | 56, 96, 138, 184 | 20, 70, 106, 147 | 31, 66, 94, 143 | 33, 45, 79, 127 | 84, 124, 165, 189 | 79, 122, 164, 200 | 90, 127, 164, 198 | 2, 64, 119, 173 | 1, 63, 120, 174 |
| | 5 | 50, 84, 114, 146, 182 | 19, 67, 97, 125, 158 | 31, 61, 79, 100, 143 | 31, 45, 60, 79, 127 | 99, 128, 147, 194, 218 | 81, 113, 147, 187, 220 | 82, 114, 148, 184, 218 | 9, 62, 106, 147, 190 | 1, 62, 104, 145, 189 |
| #112 | 2 | 54, 106 | 48, 121 | 25, 96 | 35, 81 | 109, 162 | 105, 165 | 105, 164 | 106, 163 | 106, 163 |
| | 3 | 34, 78, 122 | 28, 87, 138 | 25, 78, 106 | 35, 51, 137 | 104, 163, 216 | 79, 134, 180 | 71, 123, 175 | 3, 49, 145 | 1, 70, 142 |
| | 4 | 40, 74, 106, 148 | 25, 79, 119, 164 | 25, 71, 89, 148 | 35, 51, 91, 139 | 63, 130, 153, 206 | 54, 117, 156, 192 | 58, 105, 146, 182 | 4, 63, 132, 178 | 1, 65, 123, 172 |
| | 5 | 28, 66, 100, 144, 194 | 21, 64, 100, 129, 170 | 25, 49, 84, 94, 148 | 35, 51, 91, 93, 141 | 58, 128, 155, 187, 213 | 48, 112, 137, 161, 200 | 47, 108, 142, 171, 197 | 2, 44, 79, 131, 175 | 1, 49, 95, 139, 183 |
Table 5. Comparison of the uniformity measure (U) for different segmentation algorithms.

| Test Image | K | Proposed | DCE | PSO | BF | ABF | NMS | RCGA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| #022 | 2 | 0.9879 | 0.9860 | 0.9552 | 0.9569 | 0.9569 | 0.9569 | 0.9569 |
| | 3 | 0.9956 | 0.9795 | 0.9672 | 0.9708 | 0.9696 | 0.9769 | 0.9769 |
| | 4 | 0.9912 | 0.9847 | 0.9420 | 0.9765 | 0.9698 | 0.9824 | 0.9824 |
| | 5 | 0.9975 | 0.9837 | 0.9435 | 0.9786 | 0.9785 | 0.9752 | 0.9788 |
| #032 | 2 | 0.9894 | 0.9844 | 0.9368 | 0.9342 | 0.9342 | 0.9342 | 0.9342 |
| | 3 | 0.9910 | 0.9863 | 0.9619 | 0.9716 | 0.9600 | 0.9796 | 0.9801 |
| | 4 | 0.9920 | 0.9855 | 0.9144 | 0.9697 | 0.9766 | 0.9848 | 0.9848 |
| | 5 | 0.9983 | 0.9852 | 0.9422 | 0.9668 | 0.9767 | 0.9851 | 0.9843 |
| #042 | 2 | 0.9855 | 0.9823 | 0.9271 | 0.9246 | 0.9246 | 0.9246 | 0.9246 |
| | 3 | 0.9893 | 0.9826 | 0.9585 | 0.9721 | 0.9689 | 0.9548 | 0.9548 |
| | 4 | 0.9893 | 0.9853 | 0.9465 | 0.9752 | 0.9821 | 0.9865 | 0.9865 |
| | 5 | 0.9914 | 0.9893 | 0.9348 | 0.9724 | 0.9766 | 0.9845 | 0.9877 |
| #052 | 2 | 0.9882 | 0.9840 | 0.9158 | 0.9128 | 0.9128 | 0.9068 | 0.9128 |
| | 3 | 0.9907 | 0.9849 | 0.9523 | 0.9713 | 0.9673 | 0.8800 | 0.9467 |
| | 4 | 0.9892 | 0.9861 | 0.9372 | 0.9764 | 0.9834 | 0.8982 | 0.9856 |
| | 5 | 0.9933 | 0.9875 | 0.9240 | 0.9735 | 0.9782 | 0.9842 | 0.9868 |
| #062 | 2 | 0.9818 | 0.9802 | 0.9192 | 0.9047 | 0.9049 | 0.9015 | 0.9015 |
| | 3 | 0.9906 | 0.9823 | 0.8777 | 0.9135 | 0.9029 | 0.9030 | 0.9030 |
| | 4 | 0.9868 | 0.9805 | 0.9236 | 0.8856 | 0.8988 | 0.8989 | 0.8989 |
| | 5 | 0.9907 | 0.9828 | 0.8505 | 0.9527 | 0.9325 | 0.9835 | 0.9855 |
| #072 | 2 | 0.9799 | 0.9786 | 0.9068 | 0.9041 | 0.9041 | 0.9041 | 0.9041 |
| | 3 | 0.9910 | 0.9821 | 0.9034 | 0.9084 | 0.8985 | 0.8992 | 0.8992 |
| | 4 | 0.9890 | 0.9830 | 0.8809 | 0.8876 | 0.8804 | 0.8666 | 0.8666 |
| | 5 | 0.9917 | 0.9830 | 0.9531 | 0.8881 | 0.8876 | 0.9818 | 0.9825 |
| #082 | 2 | 0.9836 | 0.9791 | 0.9120 | 0.9091 | 0.9091 | 0.9091 | 0.9091 |
| | 3 | 0.9927 | 0.9837 | 0.8852 | 0.8621 | 0.8661 | 0.8849 | 0.8849 |
| | 4 | 0.9869 | 0.9830 | 0.8619 | 0.8479 | 0.8622 | 0.8695 | 0.8695 |
| | 5 | 0.9938 | 0.9860 | 0.9372 | 0.9188 | 0.9105 | 0.9854 | 0.9857 |
| #092 | 2 | 0.9893 | 0.9887 | 0.9131 | 0.9156 | 0.9131 | 0.9131 | 0.9131 |
| | 3 | 0.9948 | 0.9890 | 0.8607 | 0.8751 | 0.8827 | 0.8786 | 0.8786 |
| | 4 | 0.9904 | 0.9865 | 0.9490 | 0.8583 | 0.8514 | 0.8240 | 0.8641 |
| | 5 | 0.9932 | 0.9880 | 0.8684 | 0.8923 | 0.8401 | 0.9880 | 0.9876 |
| #102 | 2 | 0.9898 | 0.9880 | 0.9383 | 0.9250 | 0.9250 | 0.9250 | 0.9250 |
| | 3 | 0.9951 | 0.9892 | 0.8768 | 0.8977 | 0.9097 | 0.9179 | 0.9179 |
| | 4 | 0.9916 | 0.9863 | 0.9256 | 0.9410 | 0.9050 | 0.9871 | 0.9871 |
| | 5 | 0.9967 | 0.9896 | 0.8446 | 0.9180 | 0.9181 | 0.9907 | 0.9895 |
| #112 | 2 | 0.9923 | 0.9884 | 0.9356 | 0.9403 | 0.9404 | 0.9404 | 0.9404 |
| | 3 | 0.9940 | 0.9890 | 0.9147 | 0.9666 | 0.9769 | 0.9863 | 0.9890 |
| | 4 | 0.9946 | 0.9901 | 0.9751 | 0.9824 | 0.9825 | 0.9885 | 0.9896 |
| | 5 | 0.9961 | 0.9913 | 0.9735 | 0.9822 | 0.9830 | 0.9915 | 0.9914 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
