Article

Brain Tumor Segmentation Based on Bendlet Transform and Improved Chan-Vese Model

1 College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China
2 Department of Computer, Control and Management Engineering, University of Rome “La Sapienza”, Via Ariosto 25, 00185 Roma, Italy
3 Department of Industrial Engineering, University of Salerno, Via Giovanni Paolo II 132, 84084 Fisciano, Italy
* Author to whom correspondence should be addressed.
Entropy 2022, 24(9), 1199; https://doi.org/10.3390/e24091199
Submission received: 25 July 2022 / Revised: 21 August 2022 / Accepted: 24 August 2022 / Published: 27 August 2022
(This article belongs to the Special Issue Entropy and Its Applications across Disciplines III)

Abstract

Automated segmentation of brain tumors is difficult because of the variability and blurred boundaries of the lesions. In this study, we propose an automated model based on the Bendlet transform and an improved Chan-Vese (CV) model for brain tumor segmentation. Since the Bendlet system is built on the principle of sparse approximation, the Bendlet transform is applied to describe the images and map them to the feature space, thereby first obtaining the feature set. This helps to effectively explore the mapping relationship between brain lesions and normal tissues and to achieve multi-scale and multi-directional registration. Secondly, an SSIM-based region detection method is proposed to preliminarily locate the tumor region from three aspects: brightness, structure, and contrast. Finally, the CV model is solved by the Hermite-Shannon-Cosine wavelet homotopy method, and the boundary of the tumor region is delineated more accurately using the wavelet transform coefficients. We randomly selected several cross-sectional images to verify the effectiveness of the proposed algorithm and compared it with the CV, Otsu, K-FCM, and region growing segmentation methods. The experimental results showed that the proposed algorithm achieved higher segmentation accuracy and better stability.

1. Introduction

With the rapid development of computed tomography (CT) and magnetic resonance (MR) imaging techniques, the role of cross-sectional imaging in the diagnosis of brain tumors has expanded. Cross-sectional images can display diseased tissues and their locations at high resolution with good contrast, thereby aiding treatment planning. When working with cross-sectional images, one of the most complex problems is segmenting out specific tissues. Segmentation helps doctors locate the lesion more accurately and assess its severity, and is an essential and critical step in disease treatment [1]. Manual localization and segmentation of tumor regions by physicians is an expensive, time-consuming, and tedious task, and the results are not reproducible. Since cross-sectional images have low contrast, existing segmentation methods are often interfered with by bone and fat when detecting tumor contours. Clearly, brain tumor segmentation remains a perplexing task.
Brain tumor segmentation methods mainly include supervised, semi-supervised, and unsupervised approaches. Xu et al. [2] proposed a new joint motion feature learning architecture that establishes a direct correspondence between motion features and tissue properties, thereby determining the position, size, and shape of the myocardial infarction area. Zhao et al. [3] combined fully convolutional neural networks (FCNNs) and conditional random fields (CRFs) to segment brain tumor images slice by slice, training three segmentation models in axial, coronal, and sagittal views to achieve better performance. Jiang et al. [4] proposed a novel dual-stream decoding CNN architecture that devotes a separate branch to edge-stream information, which makes tumor edges clearer. Zheng et al. [5] proposed a four-dimensional (4D) deep learning model, based on three-dimensional convolution and convolutional long short-term memory (C-LSTM), which can more effectively learn the features of hepatocellular carcinoma (HCC) in multi-phase dynamic contrast-enhanced (DCE) images. In medical image-based tumor segmentation, the main problem is the shortage of labeled samples [6], from which both supervised and semi-supervised segmentation methods suffer.
Unsupervised segmentation does not require ground-truth images as a criterion to train the model. Although there are several general segmentation methods, such as histogram thresholding [7], region growing [8], CV, and statistical clustering [9], they have failed to achieve good results in the domain of brain tumor identification. Wavelet-based methods are widely used to solve difficult problems, and their effectiveness has been proven in many applications, including data compression [10], signal processing [11], image enhancement [12], image compression [13], image segmentation [14], and pattern recognition [15]. Wavelet analysis can refine and analyze complex information at multiple scales through scaling and translation operators. However, the commonly used two-dimensional wavelets are tensor products of one-dimensional wavelets, and their number of directions is limited. This lack of directionality means the wavelet transform cannot make full use of the geometric regularity of the image, so it produces a step-like approximation to smooth contours or textures. Owing to these limitations, the Curvelet [16] and Contourlet [17] were proposed and gradually applied; they increase the number of directions while maintaining the advantages of two-dimensional wavelets. Raghunandan et al. [18] set a fixed window according to the sub-band relationship of the Contourlet and used an SVM classifier to extract features in the Contourlet wavelet domain for each window to achieve text recognition. Nayak et al. [19] presented a pathological brain diagnosis process using Curvelet sub-bands and entropy features at different scales and directions. Raikar et al. [20] utilized the multi-scale representation of the Curvelet for rotator cuff disease diagnosis. In 2006, Guo et al. [21] introduced the shearing matrix and scaling matrix for geometric transformation into the wavelet transform for the first time and proposed the Shearlet transform, realizing multi-scale transformation with direction adaptation. The Shearlet transform is widely applied because it can deal with the anisotropic features of an image and capture the geometric information of edges. Sneha et al. [22] decomposed images into main information and edge-detail feature information through NSST to fuse multimodal medical images, which can help researchers study brain pathology more precisely. To accurately obtain the curvature information of an image, Amit et al. [23] proposed an improved adjustable non-subsampled Shearlet transform (ANSST) based on the Meyer window, but it still has significant disadvantages for the extraction of edge curvature information. Contour curvature is one of the important features of an image, but existing wavelet transforms lack a curvature parameter. Lessig et al. [24] added curvature parameters to the Shearlet transform, resulting in a new wavelet transform, the “Bendlet”, a second-order Shearlet that can accurately express images and identify contour curvature features. Medical cross-sectional images are piecewise smooth with obvious curvature characteristics, so the Bendlet, based on multi-scale analysis, is well suited to the analysis and processing of medical images.
Our main contributions can be summarized as follows:
(1) In this study, we propose a model that can detect the position of the tumor from a single image and delineate the tumor region by exploiting the similarity of the images themselves.
(2) The curvatures of the left and right contours of a brain tumor image are not the same, which affects the judgment of the tumor location, and existing registration algorithms are prone to wrongly mapping points in medical images. We therefore propose a registration algorithm based on the Bendlet, which obtains the curvature features of medical images and classifies them to register and correct the images.
(3) To address the problem that the CV model cannot converge to the tumor position and the segmentation is incomplete, we propose the SSIM unit detection algorithm, which roughly locates the tumor from three aspects: brightness, structure, and contrast. The CV model is then improved by the Shannon-Cosine wavelet homotopy method, which further improves the segmentation accuracy.
The rest of the paper is organized as follows. Section 2 is our experimental methodology. The experimental results and discussion are presented in Section 3, while the conclusion appears in Section 4.

2. Materials and Methods

The proposed workflow for brain tumor detection is illustrated in Figure 1 and uses the following basic steps: (1) image registration using the Bendlet transform, (2) unit localization, and (3) refined segmentation. Brain tumors are abnormal tissue growths that compress and deform the surrounding brain, so we propose a multi-scale and multi-directional registration algorithm based on the Bendlet. The Bendlet registration method can find enough feature points and establish the mapping relationship between the two images; the cross-sectional images of the brain can then be registered and corrected by a non-rigid transformation.

2.1. Bendlet Registration Techniques

Our proposed tumor segmentation method is based on the asymmetry between the two brain hemispheres. However, the contour curvatures of the two hemispheres of the brain are not the same. If the tumor is detected directly without registration, it cannot be successfully detected, as shown in Figure 2.

2.1.1. Bendlet System

The classical wavelet methods can detect singular points, have multi-resolution and localization characteristics, but cannot achieve the optimal expression of boundary curves. Shearlet introduced a shear matrix to control the direction, which is a multi-scale wavelet transform with good direction sensitivity and anisotropy. It is an extension of multi-dimensional, multi-scale, and multi-direction wavelet transform [25,26]. However, they also have disadvantages in characterizing and describing boundary curves, due to the lack of bending parameters. In medical imaging, boundary curves of tumors, bone and soft tissue segmentation curves provide valuable information, such as image structure. Curvature is an important parameter to characterize and describe these curves. Current orientation representation systems have achieved great success in extracting and characterizing boundary curves, but still fail to accurately classify curvature.
Bendlet [27] introduced bending elements as another degree of freedom based on Shearlet to approximate piecewise smooth images, known as a second-order Shearlet transform. Compared with other wavelet transforms, Bendlet transform has great advantages in image approximation. The construction of Bendlet is different from the Shearlet in the scaling operator and shearing operator.
For a > 0 and α ∈ [0, 1], the (a, α)-scaling matrix is defined in Equation (1):

$$A_{a,\alpha} := \begin{pmatrix} a & 0 \\ 0 & a^{\alpha} \end{pmatrix} \tag{1}$$
For $l \in \mathbb{N}$ and $r = (r_1, \ldots, r_l)^T \in \mathbb{R}^l$, the $l$-th order shearing operator is defined in Equation (2); for $l = 2$ we write $r = (s, b)$, where $s$ takes the role of shearing and $b$ corresponds to bending:

$$S_r^{(l)}(x) := \begin{pmatrix} 1 & \sum_{m=1}^{l} r_m x_2^{\,m-1} \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \tag{2}$$
The shearing matrix is generated by letting $l = 1$, and the operator contains the characteristics of both shearing and bending when $l = 2$. The $l$-th order $\alpha$-Shearlet system is denoted as Equation (3):

$$\mathcal{SH}_{\psi}^{(l,\alpha)}(f)(a, r, t) := \langle f, \psi_{a,r,t} \rangle \tag{3}$$

where $\psi_{a,r,t}(x) = a^{-\frac{1+\alpha}{2}}\, \psi\!\left(A_{a,\alpha}^{-1} S_r^{(l)}(x - t)\right)$. When $l = 2$, the above equation is called the second-order Shearlet transform or Bendlet transform.
Figure 3a shows some bending elements of the Bendlet system in the spatial domain. The four parameters above each image are cone, scale, shear, and bending. The Bendlet system has multi-scale characteristics ([1, 1, −1, −1], [1, 2, −1, −1]). The shear parameter controls the direction: when it changes, the orientation of the Bendlet elements changes. Figure 3b shows the multi-scale representation of medical image contours in different directions by the Bendlet system.
The decay rate of the Bendlet transform varies with a, s, b, and t, and curvatures can be classified by their different decay rates. For a small boundary radius (large curvature), only the coefficients corresponding to large bending elements decay slowly. As the radius increases and the curvature becomes smaller, the coefficients decay slowest for small bending parameters. The decay rate is lowest when the Bendlet element overlaps the boundary curve. Meanwhile, the curvature at the boundary can be obtained by matching the second-order Taylor expansion of the boundary with a circle of radius r > 0; the curvature can be calculated by Equation (4):

$$K = \frac{2|b|}{\left(1 + s^{2}\right)^{3/2}} \tag{4}$$
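As a concrete illustration of Equations (1), (2), and (4), the sketch below builds the scaling matrix, applies the $l$-th order shearing operator to a point, and evaluates the recovered boundary curvature. This is a minimal numerical sketch for intuition, not part of the authors' implementation; the function names are ours.

```python
import numpy as np

def scaling_matrix(a, alpha):
    """(a, alpha)-scaling matrix A_{a,alpha} = diag(a, a^alpha) of Eq. (1)."""
    return np.diag([a, a ** alpha])

def shear_bend(x, r):
    """Apply the l-th order shearing operator of Eq. (2) to x = (x1, x2).

    r = (r_1, ..., r_l): l = 1 is pure shearing; for l = 2, r = (s, b)
    adds a bending term b * x2**2 on top of the shear s * x2."""
    x1, x2 = x
    return (x1 + sum(r_m * x2 ** m for m, r_m in enumerate(r, start=1)), x2)

def boundary_curvature(s, b):
    """Curvature recovered from shear s and bending b, Eq. (4)."""
    return 2.0 * abs(b) / (1.0 + s ** 2) ** 1.5

# a second-order element with shear s = 2 and bending b = 3
print(shear_bend((0.0, 1.0), (2.0, 3.0)))  # the sheared-and-bent point
# a parabola x1 = 0.5 * x2**2 (s = 0, b = 0.5) has curvature 1 at its vertex
print(boundary_curvature(0.0, 0.5))
```

Note how, for l = 2, the operator moves a point along a parabola in x2, which is exactly the bending degree of freedom that lets the transform match curved boundaries.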

2.1.2. Multi-Scale and Multi-Direction Registration Method Using Bendlet

To improve the level of medical diagnosis and treatment, it is necessary to analyze several images together to obtain comparative information about the patient. We compare and analyze normal cross-sectional images with brain lesions to recognize brain tumors. Tumors will squeeze normal brain tissue, causing flexible deformation. Note that any inaccuracy in registration or bias correction stages directly affects the precision of tumor segmentation. It is necessary to find enough corresponding feature points for flexible transformation to make the two images spatially consistent.
The cross-sectional images of the brain are piecewise smooth, and common registration algorithms are prone to false connection points (as shown in Figure 4a). As we can see from the images, the tissue usually does not follow any particular direction. Throughout the image, the tissue structures continually change direction, creating curved edges. For piecewise smooth medical images, the curvature becomes an important parameter to describe and characterize. Bendlet transform can achieve curvature classification by adding bending elements.
The traditional directional wavelet can detect curvature through coefficient response values, but it only represents three directions of the image: horizontal, vertical, and diagonal. Since its elements have a square structure, a large number of coefficient responses are required to fully capture the information, which also amplifies the noise inside the image. The Bendlet transform introduces bending parameters to enable bending characteristics; when approximating a curve, the energy is concentrated in a few coefficients, so it requires fewer coefficients than ordinary wavelets to fully detect the curvature information in the image. As shown in Figure 5, to capture the curvature information of a curved region, the wavelet needs about 13 coefficients, the Shearlet needs 4, and the Bendlet needs only 2 [28].
We transformed cross-sectional images of the brain to the frequency domain via Bendlet, analyzing the transformation coefficients at each scale. When the bending elements coincided with the curvature of the image, the coefficient response was very large, and contour information could be extracted from medical images by Bendlet. We introduced bending elements in Bendlet as registration elements to describe medical images at multi-scales and multi-directions. When the bending element was consistent with the curvature of the image, the point with large coefficient response was found as the registration point. Then, stable points at different scales were selected, and feature vectors were constructed to realize image registration and correction. As shown in Figure 4, our method had more connection points and no mismatch points.

2.2. The SSIM Region Detection

Brain tumors appear as abnormal tissue structures on cross-sectional images relative to normal images. We can measure the similarity between normal images and brain tumor images in terms of brightness, structure, and contrast, and thus detect the region where the abnormal tissue is located.
The luminance comparison is:
$$l(x, y) = \frac{2\mu_x \mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1} \tag{5}$$
The contrast-based comparison is:
$$c(x, y) = \frac{2\sigma_x \sigma_y + C_2}{\sigma_x^2 + \sigma_y^2 + C_2} \tag{6}$$
Comparison of the structure can be obtained through Equation (7):
$$s(x, y) = \frac{\sigma_{xy} + C_3}{\sigma_x \sigma_y + C_3} \tag{7}$$
The three components are combined into a unique expression that is weighted with exponents α, β and γ:
$$\mathrm{SSIM}(x, y) = [l(x, y)]^{\alpha} \cdot [c(x, y)]^{\beta} \cdot [s(x, y)]^{\gamma} \tag{8}$$
where $\mu_x$ and $\mu_y$ are the means of the pixels in image blocks $x$ and $y$, $\sigma_x$ and $\sigma_y$ are the standard deviations of the pixels in $x$ and $y$, $\sigma_{xy}$ is the covariance between $x$ and $y$, and $C_1, C_2, C_3$ are constants, with

$$\mu_x = \frac{1}{HW}\sum_{i=1}^{H}\sum_{j=1}^{W} X(i,j), \qquad \sigma_x = \left(\frac{1}{HW-1}\sum_{i=1}^{H}\sum_{j=1}^{W}\big(X(i,j)-\mu_x\big)^2\right)^{\frac{1}{2}}, \qquad \sigma_{xy} = \frac{1}{HW-1}\sum_{i=1}^{H}\sum_{j=1}^{W}\big(X(i,j)-\mu_x\big)\big(Y(i,j)-\mu_y\big).$$

$\{x_1, x_2, x_3, \ldots, x_n\}$ denotes the $n$ blocks of the normal image and $\{y_1, y_2, y_3, \ldots, y_n\}$ the $n$ blocks of the brain tumor image. The SSIM value of every pair of corresponding blocks is calculated using Equation (8), giving $\mathrm{SSIM} = \{s_1, s_2, s_3, \ldots, s_n\}$. The block with the smallest SSIM value is the image block where the brain tumor is located.
Segmenting within the unit area reduces interference from other organs and improves segmentation accuracy. First, we set the degree of overlap and the sliding window to split the image into overlapping blocks. Then, the structural similarity of corresponding patches is calculated from the three aspects of brightness, structure, and contrast; the block with the smallest value is the unit where the brain tumor is located. During window sliding, if the step size is too small, the accuracy of the obtained tumor unit improves but the time complexity increases; if it is too large, detailed features are lost and the error increases. To strike a balance between computational efficiency and performance, we set the stride to 50.
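To make the unit-localization step concrete, here is a minimal Python sketch of Equations (5)-(8) and the sliding-window search. The constants C1, C2, C3, the window size, and the function names are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

C1, C2 = 6.5025, 58.5225   # assumed stabilizing constants (8-bit gray levels)
C3 = C2 / 2                # common simplifying choice

def ssim(x, y):
    """Structural similarity of two equal-sized blocks, Eqs. (5)-(8),
    with alpha = beta = gamma = 1."""
    mx, my = x.mean(), y.mean()
    sx, sy = x.std(ddof=1), y.std(ddof=1)
    sxy = ((x - mx) * (y - my)).sum() / (x.size - 1)
    l = (2 * mx * my + C1) / (mx ** 2 + my ** 2 + C1)   # luminance, Eq. (5)
    c = (2 * sx * sy + C2) / (sx ** 2 + sy ** 2 + C2)   # contrast, Eq. (6)
    s = (sxy + C3) / (sx * sy + C3)                     # structure, Eq. (7)
    return l * c * s                                    # Eq. (8)

def locate_tumor_block(normal, lesion, win=100, stride=50):
    """Slide a window over both images in parallel; the corresponding block
    pair with the smallest SSIM is taken as the unit containing the tumor."""
    best_val, best_pos = np.inf, None
    H, W = normal.shape
    for i in range(0, H - win + 1, stride):
        for j in range(0, W - win + 1, stride):
            v = ssim(normal[i:i + win, j:j + win], lesion[i:i + win, j:j + win])
            if v < best_val:
                best_val, best_pos = v, (i, j)
    return best_pos, best_val
```

With the stride of 50 used in the paper, adjacent windows overlap by half, which trades localization accuracy against run time exactly as discussed above.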

2.3. Refined Segmentation

Chan and Vese proposed the Chan-Vese (CV) model based on the regional feature information of the image. The specific energy functional is:

$$F(\phi, c_1, c_2) = \mu \int_{\Omega} \left|\nabla H(\phi(x,y))\right| dx\,dy + \lambda_1 \int_{\Omega} \left|I(x,y) - c_1\right|^2 H(\phi(x,y))\, dx\,dy + \lambda_2 \int_{\Omega} \left|I(x,y) - c_2\right|^2 \big(1 - H(\phi(x,y))\big)\, dx\,dy$$

where

$$c_1 = \frac{\int_{\Omega} I(x,y)\, H(\phi(x,y))\, dx\,dy}{\int_{\Omega} H(\phi(x,y))\, dx\,dy}, \qquad c_2 = \frac{\int_{\Omega} I(x,y)\left[1 - H(\phi(x,y))\right] dx\,dy}{\int_{\Omega} \left[1 - H(\phi(x,y))\right] dx\,dy},$$

$c_1$ and $c_2$ are the gray means of the target area and the background area, and

$$H(\phi) = \begin{cases} 1, & \phi \ge 0 \\ 0, & \phi < 0. \end{cases}$$
The level set initialization is required to segment the image by the CV model, and the average gray values c1 and c2 of the foreground and background are initially estimated according to the initialized level set. Then, each point on the level set is updated through the evolution equation. If the gray value of the current point is close to the gray average value of the foreground, the value of the corresponding level set of this point increases, otherwise it decreases. In CT and MRI images, the grayscale difference between brain tumors and background images is very small, and the boundaries are blurred. It is difficult to achieve satisfactory segmentation when directly applying the CV model to cross-sectional images for brain tumor detection. As shown in Figure 6, the CV model fails to converge to the location of the brain tumor.
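For intuition, the iteration described above can be sketched as a plain explicit gradient-descent step, with finite differences for the curvature term. This is the classical CV update with illustrative parameter values, not the wavelet-homotopy solver developed below.

```python
import numpy as np

def heaviside(phi):
    return (phi >= 0).astype(float)

def curvature(phi, tiny=1e-8):
    """div(grad(phi)/|grad(phi)|) via central differences (np.gradient)."""
    gy, gx = np.gradient(phi)            # axis 0 = y, axis 1 = x
    norm = np.sqrt(gx ** 2 + gy ** 2) + tiny
    nyy, _ = np.gradient(gy / norm)      # d/dy of the y-component
    _, nxx = np.gradient(gx / norm)      # d/dx of the x-component
    return nxx + nyy

def chan_vese_step(phi, img, mu=0.2, lam1=1.0, lam2=1.0, eps=1.0, dt=0.5):
    """One explicit gradient-descent step of the CV evolution equation."""
    H = heaviside(phi)
    c1 = (img * H).sum() / max(H.sum(), 1.0)              # mean inside
    c2 = (img * (1 - H)).sum() / max((1 - H).sum(), 1.0)  # mean outside
    delta = eps / (np.pi * (eps ** 2 + phi ** 2))         # smoothed Dirac delta
    force = (mu * curvature(phi)
             - lam1 * (img - c1) ** 2
             + lam2 * (img - c2) ** 2)
    return phi + dt * delta * force
```

Starting from a small circle inside a bright square, a few hundred such steps move the zero level set onto the square's boundary; on low-contrast cross-sectional images, however, this plain update tends to stall, which is what motivates the unit localization and the wavelet-homotopy solution.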
A PDE can be obtained by taking the variation of the CV model, and the Hermite interval Shannon-Cosine wavelet can better handle the boundary conditions in the solution of this PDE. When we solve the PDE through the multi-scale Shannon-Cosine wavelet, the wavelet coefficients can distinguish the boundary of the tumor region and thereby achieve accurate segmentation.
Variation of the CV model yields the following partial differential equation:
$$\frac{\partial \phi}{\partial t} = \delta_{\varepsilon}(\phi)\left[ \mu\, \mathrm{div}\!\left( \frac{\nabla \phi}{|\nabla \phi|} \right) - \lambda_1 \left|I_0 - c_1\right|^2 + \lambda_2 \left|I_0 - c_2\right|^2 \right]$$

$$\delta_{\varepsilon}(\phi) = \frac{\varepsilon}{\pi\left(\varepsilon^2 + \phi^2\right)}$$

where $\mathrm{div}\!\left(\nabla\phi / |\nabla\phi|\right)$ is the curvature of the contour curve and $\mathrm{div}$ is the divergence operator.
The level set function can be expressed as:
$$\phi_J^{(m,n)}(x,y,t) = \sum_{k_{01}=0}^{1}\sum_{k_{02}=0}^{1} \phi\!\left(x_{k_{01}}^{0}, y_{k_{02}}^{0}\right) w_{k_{01},k_{02}}^{0(m,n)}(x,y) + \sum_{j=0}^{J-1}\sum_{k_{11}=0}^{2^{j}-1}\sum_{k_{12}=0}^{2^{j}-1} \left[ \alpha_{j,k_{11},k_{12}}^{1}(t)\, w_{2k_{11}+1,\,2k_{12}}^{j+1(m,n)}(x,y) + \alpha_{j,k_{11},k_{12}}^{2}(t)\, w_{2k_{11},\,2k_{12}+1}^{j+1(m,n)}(x,y) + \alpha_{j,k_{11},k_{12}}^{3}(t)\, w_{2k_{11}+1,\,2k_{12}+1}^{j+1(m,n)}(x,y) \right]$$
According to interpolation wavelet transform theory, the wavelet coefficients $\alpha_{j,k_1,k_2}^{1}(t_{n+1})$, $\alpha_{j,k_1,k_2}^{2}(t_{n+1})$, $\alpha_{j,k_1,k_2}^{3}(t_{n+1})$ are the interpolation errors at the three new grid points of level $j+1$:

$$\begin{aligned} \alpha_{j,k_1,k_2}^{1}(t_{n+1}) &= \phi_J^{(m,n)}\!\left(x_{2k_1+1}^{j+1}, y_{2k_2}^{j+1}\right) - I_j \phi_J^{(m,n)}\!\left(x_{2k_1+1}^{j+1}, y_{2k_2}^{j+1}\right), \\ \alpha_{j,k_1,k_2}^{2}(t_{n+1}) &= \phi_J^{(m,n)}\!\left(x_{2k_1}^{j+1}, y_{2k_2+1}^{j+1}\right) - I_j \phi_J^{(m,n)}\!\left(x_{2k_1}^{j+1}, y_{2k_2+1}^{j+1}\right), \\ \alpha_{j,k_1,k_2}^{3}(t_{n+1}) &= \phi_J^{(m,n)}\!\left(x_{2k_1+1}^{j+1}, y_{2k_2+1}^{j+1}\right) - I_j \phi_J^{(m,n)}\!\left(x_{2k_1+1}^{j+1}, y_{2k_2+1}^{j+1}\right), \end{aligned}$$

where $I_j$ denotes interpolation from the levels up to $j$:

$$I_j \phi_J^{(m,n)}(x,y) = \sum_{k_{01}=0}^{1}\sum_{k_{02}=0}^{1} \phi\!\left(x_{k_{01}}^{0}, y_{k_{02}}^{0}\right) w_{k_{01},k_{02}}^{0(m,n)}(x,y) + \sum_{j'=0}^{j-1}\sum_{k_{11}=0}^{2^{j'}-1}\sum_{k_{12}=0}^{2^{j'}-1} \left[ \alpha_{j',k_{11},k_{12}}^{1}(t)\, w_{2k_{11}+1,\,2k_{12}}^{j'+1(m,n)}(x,y) + \alpha_{j',k_{11},k_{12}}^{2}(t)\, w_{2k_{11},\,2k_{12}+1}^{j'+1(m,n)}(x,y) + \alpha_{j',k_{11},k_{12}}^{3}(t)\, w_{2k_{11}+1,\,2k_{12}+1}^{j'+1(m,n)}(x,y) \right].$$
where $w_{k,j}$ are the interval interpolation basis functions and $\phi_n^j$ is the Shannon-Cosine wavelet [29]:

$$w_{k,j}(x) = \begin{cases} \phi(2^{j}x) + \sum_{n=1}^{N} \left(1-\frac{n}{N}\right)^{2}\left(1+n+\frac{2n}{N}\right) \phi(2^{j}x+n), & k = 0 \\ \phi(2^{j}x-1) + \sum_{n=1}^{N} n\left(1-\frac{n}{N}\right)^{2} \phi(2^{j}x+n), & k = 1 \\ \phi(2^{j}x-k), & k = 2, 3, \ldots, 2^{j}-2 \\ \phi(2^{j}x-2^{j}+1) + \sum_{n=1}^{N} n\left(1-\frac{n}{N}\right)^{2} \phi(2^{j}x-2^{j}-n), & k = 2^{j}-1 \\ \phi(2^{j}x-2^{j}) + \sum_{n=1}^{N} \left(1-\frac{n}{N}\right)^{2}\left(1+n+\frac{2n}{N}\right) \phi(2^{j}x-2^{j}-n), & k = 2^{j} \end{cases}$$

$$\phi_n^{j}(x) = \phi\!\left(x-x_n^{j}\right) = \frac{\sin\frac{\pi}{\Delta_j}\left(x-x_n^{j}\right)}{\frac{\pi}{\Delta_j}\left(x-x_n^{j}\right)} \sum_{i=0}^{m} a_{i} \cos\!\left(\frac{2 i \pi}{N}\left(x-x_n^{j}\right)\right) \left[ \chi\!\left(\left(x-x_n^{j}\right)+\frac{N}{2}\right) - \chi\!\left(\left(x-x_n^{j}\right)-\frac{N}{2}\right) \right]$$
The evolution equation can be written in the general form:

$$\frac{\partial \phi}{\partial t} = F\!\left(t, x, y, \phi, \frac{\partial \phi}{\partial x}, \frac{\partial \phi}{\partial y}, \frac{\partial^2 \phi}{\partial x^2}, \frac{\partial^2 \phi}{\partial x \partial y}, \frac{\partial^2 \phi}{\partial y^2}\right)$$

$$\frac{d \phi_J(x,y,t)}{dt} = F\!\left[t, x, y, \phi_J(x,y,t), \phi_J^{(1,0)}(x,y,t), \phi_J^{(0,1)}(x,y,t), \phi_J^{(2,0)}(x,y,t), \phi_J^{(1,1)}(x,y,t), \phi_J^{(0,2)}(x,y,t)\right]$$

For brevity, $F\!\left[t_n, x, y, \phi_J(x,y,t_n), \phi_J^{(1,0)}(x,y,t_n), \phi_J^{(0,1)}(x,y,t_n), \phi_J^{(2,0)}(x,y,t_n), \phi_J^{(1,1)}(x,y,t_n), \phi_J^{(0,2)}(x,y,t_n)\right]$ is denoted by $F^n$.
We can construct a linear homotopy model:
$$\frac{d \phi_J(x,y,t)}{dt} = (1-\varepsilon) F^{n} + \varepsilon F^{n+1}$$

where $\varepsilon(t)$ is the homotopy parameter, $\varepsilon(t) = \frac{t-t_n}{t_{n+1}-t_n}$, $t \in [t_n, t_{n+1}]$, $\varepsilon \in [0, 1]$.
According to perturbation theory, the solution of Equation (11) is:
$$\phi_J(x,y,t) = \phi_J^{0}(x,y,t) + \varepsilon\, \phi_J^{1}(x,y,t) + \varepsilon^{2} \phi_J^{2}(x,y,t) + \cdots$$
Substituting Equation (9) into Equation (10):
$$\varepsilon^{0}: \frac{d \phi_J^{0}}{dt} = F^{n}, \qquad \varepsilon^{1}: \frac{d \phi_J^{1}}{dt} = F^{n+1} - F^{n}$$
Substituting the wavelet transform coefficient into Equation (9), we have:
$$\phi_J(x,y,t_{n+1}) = \phi_J^{0}(x,y,t_n) + \frac{\Delta t}{2}\Big[ F^{n} + F\big(t_{n+1}, x, y, \phi_J^{0}(x,y,t_{n+1}), \phi_J^{0(1,0)}(x,y,t_{n+1}), \phi_J^{0(0,1)}(x,y,t_{n+1}), \phi_J^{0(2,0)}(x,y,t_{n+1}), \phi_J^{0(1,1)}(x,y,t_{n+1}), \phi_J^{0(0,2)}(x,y,t_{n+1})\big) \Big]$$
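Collecting the $\varepsilon^0$ and $\varepsilon^1$ equations, one time step amounts to an Euler predictor followed by a trapezoidal (Heun-type) corrector. Below is a scalar sketch of this step, with our own function names and a toy right-hand side, assuming the semi-discrete form $d\phi/dt = F(t, \phi)$:

```python
def homotopy_step(F, t_n, phi_n, dt):
    """One step of the linear-homotopy scheme: the eps^0 equation integrates
    F^n (explicit Euler predictor), and the final update averages F at both
    ends of the interval (trapezoidal corrector)."""
    Fn = F(t_n, phi_n)
    phi0 = phi_n + dt * Fn          # predictor: d(phi_0)/dt = F^n
    Fn1 = F(t_n + dt, phi0)         # F^{n+1} evaluated on the predictor
    return phi_n + 0.5 * dt * (Fn + Fn1)

# toy problem d(phi)/dt = -phi, whose exact solution is exp(-t)
phi, t, dt = 1.0, 0.0, 0.01
for _ in range(100):
    phi = homotopy_step(lambda t_, p: -p, t, phi, dt)
    t += dt
# phi now approximates exp(-1) with second-order accuracy
```

In the actual model, the scalar right-hand side is replaced by the wavelet-discretized CV evolution, but the time-stepping structure is the same.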
After locating the unit area where the tumor is located in Section 2.2, we can segment the tumor tissue through the improved CV model, so that the tumor tissue can be accurately detected and segmented, as shown in Figure 7.

3. Results

3.1. Performance Evaluation Metrics

We evaluated the segmentation performance by Accuracy, JSC, DSC, which are described as follows:
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$

$$\mathrm{JSC} = \frac{TP}{TP + FP + FN}$$

$$\mathrm{DSC} = \frac{2\,TP}{2\,TP + FP + FN}$$
where True Positive (TP) denotes correct identification, False Negative (FN) incorrect rejection, True Negative (TN) correct rejection, and False Positive (FP) incorrect identification, as shown in Table 1.
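The three metrics can be computed directly from binary masks; a small sketch (the function name is ours):

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Accuracy, JSC and DSC from binary predicted and ground-truth masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)        # correct identification
    tn = np.sum(~pred & ~gt)      # correct rejection
    fp = np.sum(pred & ~gt)       # incorrect identification
    fn = np.sum(~pred & gt)       # incorrect rejection
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    jsc = tp / (tp + fp + fn)
    dsc = 2 * tp / (2 * tp + fp + fn)
    return accuracy, jsc, dsc
```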
In addition, a paired t-test was performed between the proposed model and the other models in test 6 with respect to these evaluation metrics; a p-value < 0.05 was considered statistically significant.

3.2. Comparison of Segmentation Results with Different Methods

To verify the effectiveness of the proposed method, we randomly selected several images for testing. For each brain tumor cross-sectional image, we compared the results of each algorithm with the results of manual segmentation. Figure 8 shows the original images, and Figure 9 shows the visualization results obtained by the proposed algorithm, CV, K-FCM [9], Otsu [30], and the region growing algorithm [8] for brain tumor segmentation. The experimental results of the threshold algorithm were obtained by manually adjusting the threshold parameters several times; except for the algorithm in this paper, the other methods could not fully achieve automated detection and segmentation. Table 2 reports the quantitative results of the detection algorithms; higher Accuracy, JSC, and DSC indicate better prediction of the target, and the proposed method outperformed the other algorithms on all three metrics. Therefore, from the comprehensive analysis of Figure 9 and Table 2, the algorithm in this study achieved high accuracy in detecting and segmenting brain tumors. The proposed method is competitive with other classical algorithms and is expected to provide a reliable reference for clinical decision-making. In addition, as shown in Table 3, there are significant differences between the proposed model and the other models.

4. Conclusions

Owing to the variability and fuzzy boundary of brain tumor lesions, it is very challenging to develop an automated tumor detection system. We introduced Bendlet into cross-sectional images to extract the dominant features between the normal and abnormal images and realized image registration and correction. Meanwhile, the block where the brain tumor was located was assessed from three aspects of brightness, structure and contrast, which enabled driving the contour line of the improved CV model to the required boundary, even near the weak edge. The ideas presented in this work also offer a potential direction to improve detection accuracy for all types of medical diagnoses.

Author Contributions

Conceptualization, K.M.; methodology, P.C. and F.V.; writing—original draft preparation, K.M.; visualization, P.C. and F.V. All authors have read and agreed to the published version of the manuscript.

Funding

This study was funded jointly by the National Natural Science Foundation of China (grant number 61871380), Beijing Natural Science Foundation (grant number 4172034), and the Shandong Provincial Natural Science Foundation (grant number ZR2020MF019).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Some of the data supporting the findings of this study are openly available at https://www.kaggle.com/datasets/mateuszbuda/lgg-mri-segmentation (accessed on 1 March 2022). Additional data were provided by the Clinic of the Bashkir State Medical University.

Conflicts of Interest

The authors declare no conflict of interest.

References

Figure 1. Schematic of the proposed brain tumor segmentation method.
Figure 2. Tumor position detected in images without registration.
Figure 3. Bending elements of the Bendlet system and its representation of images. (a) Bending elements in the spatial domain. (b) Multi-scale representation of contours in different directions by the Bendlet system.
Figure 4. Experimental results of image registration between the two brain hemispheres. (a) Registration result of the SURF algorithm: applied to medical images, it yields few feature points and contains more mismatched points. (b) Registration result of the proposed method: it increases the number of registered points and reduces mismatched points.
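The SSIM-based region detection described in the abstract scores a patch against its mirrored counterpart in the opposite hemisphere through luminance, contrast, and structure terms; a low score flags a candidate lesion region. Below is a minimal single-window SSIM sketch using the standard SSIM constants, not the authors' implementation (which would typically apply a local sliding window):

```python
import numpy as np

def ssim_global(x, y, L=255.0):
    """Single-window SSIM between two same-sized patches, built from the
    standard luminance, contrast and structure terms."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2      # stabilising constants
    mx, my = x.mean(), y.mean()                    # luminance
    vx, vy = x.var(), y.var()                      # contrast
    cov = ((x - mx) * (y - my)).mean()             # structure
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# identical patches give SSIM = 1; a brightness shift lowers the score
patch = np.arange(64, dtype=float).reshape(8, 8)
s_same = ssim_global(patch, patch)
s_shift = ssim_global(patch, patch + 50)
```

In the detection step, patches whose mirrored-hemisphere SSIM falls below a threshold would be kept as tumor candidates.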
Figure 5. Capturing the contour curve of cross-sectional images with wavelet, Shearlet, and Bendlet. (a) A huge number of wavelet coefficients is required to detect the contour curve information. (b) Fewer Shearlet coefficients are needed to complete the detection. (c) Bendlet needs only two coefficients to fully detect the curve information.
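The point of Figure 5 — that classical wavelets spend many coefficients along a curved contour — is easy to reproduce. The sketch below implements one level of the 2D Haar transform (an illustrative stand-in, not the Bendlet transform): for a piecewise-constant disc image, the detail coefficients are nonzero only along the contour, so their count scales with the boundary length rather than staying near-constant as for Bendlets:

```python
import numpy as np

def haar2d(img):
    """One level of the 2D Haar wavelet transform (LL, LH, HL, HH bands)."""
    a = (img[0::2] + img[1::2]) / 2          # row-pair average
    d = (img[0::2] - img[1::2]) / 2          # row-pair detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    hl = (a[:, 0::2] - a[:, 1::2]) / 2
    lh = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

# piecewise-constant image with a curved (disc) contour
yy, xx = np.mgrid[:128, :128]
img = ((xx - 64) ** 2 + (yy - 64) ** 2 < 1600).astype(float)
ll, lh, hl, hh = haar2d(img)
# detail coefficients are nonzero only where a 2x2 block straddles the contour
n_detail = sum(int(np.count_nonzero(b)) for b in (lh, hl, hh))
```

Here `n_detail` grows roughly linearly with the contour length; directional systems such as Shearlets, and especially second-order Bendlets, need far fewer coefficients for the same curve.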
Figure 6. Brain tumor segmentation by the CV model under different numbers of iterations. (a) 50 iterations. (b) 300 iterations. (c) 1000 iterations.
Figure 7. Detection result of the improved CV model.
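The iterative CV evolution illustrated in Figures 6 and 7 can be sketched as a heavily simplified two-phase gradient flow. In the sketch below the curvature term is approximated by a plain Laplacian and the level set is initialised from the image itself — this is a NumPy toy, not the Hermite-Shannon-Cosine wavelet homotopy solver used in the paper:

```python
import numpy as np

def chan_vese_simplified(img, n_iter=300, mu=0.2, dt=0.5):
    """Simplified two-phase Chan-Vese: the region fitting terms drive the
    level set phi, and the curvature term is approximated by a Laplacian."""
    img = img.astype(float)
    phi = img - img.mean()                   # crude level-set initialisation
    for _ in range(n_iter):
        inside, outside = phi > 0, phi <= 0
        c1 = img[inside].mean() if inside.any() else 0.0    # mean inside
        c2 = img[outside].mean() if outside.any() else 0.0  # mean outside
        lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
               np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4 * phi)
        # gradient step on the CV energy: fitting terms + smoothing
        phi = phi + dt * (-(img - c1) ** 2 + (img - c2) ** 2 + mu * lap)
    return phi > 0

# synthetic test image: bright disc ("lesion") on a dark background
yy, xx = np.mgrid[:64, :64]
img = ((xx - 32) ** 2 + (yy - 32) ** 2 < 100).astype(float)
mask = chan_vese_simplified(img, n_iter=200)
```

As Figure 6 suggests, the quality of the zero level set depends strongly on the iteration count and on the initialisation, which is what the improved solver addresses.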
Figure 8. Examples of brain tumor images. (a) Test1. (b) Test2. (c) Test3. (d) Test4. (e) Test5. (f) Test6.
Figure 9. Comparison of the proposed method with others. (a) SRG. (b) K-FCM. (c) CV. (d) Otsu. (e) The proposed method.
Table 1. Brain tumor segmentation outcomes (confusion matrix).

                        Real Positive    Real Negative
Predicted Positive      TP               FP
Predicted Negative      FN               TN
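The metrics reported in Table 2 follow directly from these confusion-matrix counts: Accuracy = (TP + TN) / total, JSC = TP / (TP + FP + FN), and DSC = 2·TP / (2·TP + FP + FN). A small sketch (the example counts are made up for illustration):

```python
def seg_metrics(tp, fp, fn, tn):
    """Accuracy, Jaccard (JSC) and Dice (DSC) from confusion-matrix counts."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    jsc = tp / (tp + fp + fn)            # Jaccard similarity coefficient
    dsc = 2 * tp / (2 * tp + fp + fn)    # Dice similarity coefficient
    return accuracy, jsc, dsc

# hypothetical counts for a single segmented image
acc, jsc, dsc = seg_metrics(tp=90, fp=10, fn=10, tn=890)
```

Note that DSC weights true positives twice, so DSC >= JSC always holds; with heavily imbalanced images (small tumors), Accuracy stays high even for poor masks, which is why JSC and DSC are the more informative columns in Table 2.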
Table 2. Comparative results on three metrics (Accuracy, JSC, DSC) for six test images using different methods.

        Method    Accuracy    JSC       DSC
Test1   K-FCM     0.9917      0.6542    0.7910
        CV        0.9948      0.8109    0.8956
        SRG       0.9883      0.4941    0.6614
        Otsu      0.7415      0.0810    0.1498
        Ours      0.9955      0.8298    0.9070
Test2   K-FCM     0.9944      0.7744    0.8729
        CV        0.9962      0.8653    0.9278
        SRG       0.7948      0.1018    0.1849
        Otsu      0.7363      0.0847    0.1563
        Ours      0.9972      0.8993    0.9470
Test3   K-FCM     0.9591      0.0302    0.0587
        CV        0.9996      0.7856    0.8799
        SRG       0.9997      0.8297    0.8681
        Otsu      0.9132      0.0147    0.0290
        Ours      0.9999      0.9428    0.9705
Test4   K-FCM     0.9991      0.7750    0.8732
        CV        0.9992      0.8032    0.8908
        SRG       0.9986      0.6046    0.7536
        Otsu      0.9611      0.0825    0.1524
        Ours      0.9994      0.8381    0.9119
Test5   K-FCM     0.9988      0.4384    0.6096
        CV        0.9986      0.3453    0.5134
        SRG       0.9984      0.2340    0.3792
        Otsu      0.8133      0.0109    0.0215
        Ours      0.9990      0.5322    0.6947
Test6   K-FCM     0.8995      0.9276    0.9624
        CV        0.6966      0.6449    0.7841
        SRG       0.7585      0.7793    0.8759
        Otsu      0.9374      0.0017    0.0034
        Ours      0.9995      0.9285    0.9629
Table 3. Paired t-test between the proposed model and the other models (p-value).

              Accuracy       JSC           DSC
K-FCM-Ours    0.040913566    0.080455      0.10538
CV-Ours       0.17593634     0.02078       0.025113
SRG-Ours      0.080970625    0.01242       0.021317
Otsu-Ours     0.007289404    2.65 × 10−5   5.41 × 10−6
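The paired t-test behind Table 3 can be reproduced from the per-image metric values. A stdlib-only sketch that returns the t statistic and degrees of freedom (converting t to a two-sided p-value requires the t-distribution CDF, e.g. scipy.stats.t.sf); the usage example feeds in the DSC values of Ours and Otsu as listed in Table 2:

```python
import math

def paired_t_test(a, b):
    """Paired t-test on two matched samples: returns (t statistic, df)."""
    d = [x - y for x, y in zip(a, b)]        # per-image differences
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)   # sample variance
    t = mean / math.sqrt(var / n)
    return t, n - 1

# DSC values from Table 2 (Ours vs. Otsu on Test1..Test6)
ours = [0.9070, 0.9470, 0.9705, 0.9119, 0.6947, 0.9629]
otsu = [0.1498, 0.1563, 0.0290, 0.1524, 0.0215, 0.0034]
t, df = paired_t_test(ours, otsu)
```

A large positive t with df = 5 corresponds to the very small Otsu-Ours DSC p-value reported in Table 3, i.e., the improvement over Otsu is statistically significant.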
Meng, K.; Cattani, P.; Villecco, F. Brain Tumor Segmentation Based on Bendlet Transform and Improved Chan-Vese Model. Entropy 2022, 24, 1199. https://doi.org/10.3390/e24091199
