Article

An Improved Level Set Method on the Multiscale Edges

1 College of Computer Science, Sichuan University, Chengdu 610065, China
2 College of Electronics and Information Engineering, Sichuan University, Chengdu 610065, China
* Author to whom correspondence should be addressed.
Symmetry 2020, 12(10), 1650; https://doi.org/10.3390/sym12101650
Submission received: 12 August 2020 / Revised: 14 September 2020 / Accepted: 1 October 2020 / Published: 9 October 2020

Abstract
The level set method can segment symmetrical or asymmetrical objects in real images according to image features. However, the segmentation performance varies with the feature scale. To improve the segmentation effect, we propose an improved level set method on the multiscale edges, which combines the level set method with multiscale image decomposition in a unified model. In this model, the segmentation relies on multiscale edges, and the multiscale edges depend on multiscale decomposition. A novel total-variation regularization is proposed in the multiscale decomposition to preserve edges. The multiscale edges obtained by the decomposition are integrated into the segmentation process, so the object can be easily extracted at a proper scale. Experimental results indicate that this method achieves superior precision, recall and F-measure compared with related edge-based segmentation methods, and is insensitive to noise and inhomogeneous sub-regions.

1. Introduction

Image segmentation is a technique of partitioning an image into disjoint regions [1]. It is one of the most critical steps toward image understanding. Image segmentation techniques are widely applied in various fields of computer vision, such as moving object detection [2,3], object recognition [4,5] and vehicle tracking [6]. These techniques use low- and mid-level features such as intensity/color information [2], edges [4], or shape [5]. However, it is still difficult to segment real images, since the symmetrical or asymmetrical objects in real images are diverse and the background is complex and random.
According to the features used for segmentation, image segmentation methods can be divided into two categories: those based on learned features and those based on artificial features. Learned features are generally obtained by deep learning from training data, whereas artificial features are generally the low-level features of an image. Segmentation methods based on learned features can extract objects by using features learned [7] on the training set. These methods usually require a large amount of training data covering the objects that may appear in an image. In practice, the training data are limited in many specific tasks, so such methods cannot replace those based on artificial features. Fully convolutional networks (FCNs) can automatically perform image segmentation with multi-level information [8] obtained through deep learning. Compared with traditional segmentation algorithms, FCNs outperform on objects with abundant samples, but not on objects with small training sets [9]. In addition, segmentation methods based on learned features perform poorly on new objects that are not in the training set.
Compared with learned-feature-based methods, artificial-feature-based image segmentation methods can use different artificial features to extract objects without training sets. The artificial features used for segmentation are usually the low-level features of the image, such as regions [10,11,12,13,14,15], edges [16,17] and hybrid features [18,19]. The active contour model based on the level set method [20] is an efficient segmentation method using artificial features. Through the implicit representation of the object contour, it segments an image into object and background by curve evolution under constraints. According to the constraints, these models are mainly categorized into edge-based and region-based models. The region-based CV model [10], in which sub-regions are represented by their mean values, performs well on piecewise-constant images; however, region inhomogeneity and noise reduce the disparity between regional means and lead to poor segmentation results. To eliminate the negative effects of non-uniformity, a local binary fitting energy function [12] and its improved version [13] were incorporated into active contour models, but those models are not designed for noisy images. Ali [15] proposed a variational global and selective segmentation model suitable for segmenting a range of images with intensity inhomogeneity, noise, or a combination of both; however, its large amount of computation makes it inconvenient to apply.
Compared with region-based methods, edge-based methods can effectively obtain accurate results, particularly when segmenting images with sharp gradients. The edge-based level set method proposed by Li [16], which does not require re-initialization, can effectively segment images with large gradients. Furthermore, the model evolves faster by adding a signed-distance-preserving term to the energy function as a penalty, avoiding reinitialization. However, in the presence of noise and uneven intensity, this model cannot effectively segment weak edges; in addition, the edges are blurred by its Gaussian filter. To improve the edge information for segmentation, many other level set models have been presented. For example, Akram et al. [21] proposed an edge-based level set method driven by the difference of Gaussians (DoG), where the DoG serves as an edge indicator and acts as a feature-enhancement tool to sharpen the edges of an image. Khadidos et al. [22] proposed a weighted level set model based on local edge features, in which the energy terms are weighted according to their relative importance in boundary detection. However, these models still have two major limitations: (1) the segmentation results are sensitive to the initial curve; (2) it is difficult to adaptively choose a smoothing scale (the standard deviation of the Gaussian kernel). Regarding the first limitation, Yeo and Xie incorporated prior shape information into the initial curve and improved the segmentation of specific objects [17], but the shapes of objects in real images are usually unknown. To address the second, it is necessary to construct a novel smoothing model that improves the edge information for segmentation and generates multiscale edges, so that objects can be easily extracted at a proper scale.
To improve the segmentation effect for real images, we propose an improved level set method for segmentation on the multiscale edges. In the proposed model, the segmentation relies on multiscale edges, and the multiscale edges depend on multiscale decomposition. We first propose a novel variation-regularized multiscale decomposition to preserve edges, and then the multiscale edges obtained by the decomposition are merged into the segmentation process. Finally, we construct an iteration-termination condition based on a significance level of the segmentation region to extract the objects at a proper scale. The experimental results indicate that this algorithm, using the multiscale edges of an image, exhibits superior precision, recall and F-measure compared to related edge-based segmentation methods. Moreover, the proposed algorithm is insensitive to noise and inhomogeneous regions.
The rest of the paper is organized as follows. In Section 2, the proposed segmentation model based on multiscale edges is introduced. The experimental results are given in Section 3. Finally, the conclusions are made in Section 4.

2. Proposed Method

The performance of image segmentation based on the level set method depends not only on the algorithm itself but also on the edges, and the image edges are sensitive to scale. At large scales, the fine information of the edges is lost, which causes them to deviate from the actual edges; at small scales, the edges are sensitive to noise, leading to false edge detection. Therefore, it is necessary to select a proper scale for image segmentation. In this paper, we propose an improved level set method that combines the segmentation and the multiscale decomposition to extract the object at a proper scale. The overall structure of the proposed method is shown in Figure 1. The proposed model can be described by the following formula:
E(u_0, \phi) = D(u_0, u) + S(\nabla u, \phi) \qquad (1)
where u_0 : \Omega \to [0, 1] is an image with N pixels, \phi is the level set function, the set \{(x, y) \mid \phi(x, y) = 0\} denotes the segmentation curve, \{(x, y) \mid \phi(x, y) > 0\} denotes the background, and \{(x, y) \mid \phi(x, y) < 0\} denotes the object. The first term D(u_0, u) is the multiscale decomposition term, where u is the smoothed image. The second term S(\nabla u, \phi) is the segmentation term, where \nabla u is the gradient of the smoothed image.

2.1. The Multiscale Edges

The segmentation result, i.e., an optimal solution of (1), depends on the edges, which vary with the scale of u. For an image with homogeneous regions, since the color distribution within each region contrasts highly among regions, object extraction can achieve a good performance even at a fine scale, as with the raw image u_0. In practice, a smoothed image at a fine scale often contains inhomogeneity within a region, leading to pseudo edges. To improve the smoothing so as to preserve strong edges and enhance weak edges, a novel edge-preserving smoothing model is used to analyze the image, which is formulated as the following minimization problem:
D(u_0, u) = \frac{\tau}{2} \int_\Omega (u_0 - u)^2 \, dx\,dy + \int_\Omega \frac{|\nabla u|^2}{1 + |\nabla u|^2} \, dx\,dy \qquad (2)
Suppose that D ( u 0 , u ) has a minimum point. Then, it formally satisfies the Euler–Lagrange equation:
\tau (u - u_0) = \mathrm{div}\left( \frac{\nabla u}{(1 + |\nabla u|^2)^2} \right) \qquad (3)
The simple finite-difference scheme and the lagged-diffusivity fixed-point iteration are exploited to linearize the above equation at each iteration u^{(n-1)} \to u^{(n)}. If p is a member of the set \Lambda of the four pixels adjacent to the target pixel p_0 (Figure 2), u^{(n-1)}(p_0) can be updated by:
u^{(n)}(p_0) = \frac{\tau^{(n)} u_0(p_0) + \sum_{p \in \Lambda} \omega^{(n)}(p)\, u^{(n-1)}(p)}{\tau^{(n)} + \sum_{p \in \Lambda} \omega^{(n)}(p)} \qquad (4)
where \omega^{(n)}(p) is the weight coefficient of the pixel p:
\omega^{(n)}(p) = \left( 1 + |\nabla u^{(n-1)}(p)|^2 \right)^{-2} \qquad (5)
When |\nabla u^{(n)}(p)| \to 0, \omega^{(n)}(p) \to 1 and the inhomogeneous sub-regions of the image are smoothed. When |\nabla u^{(n)}(p)| \to \infty, \omega^{(n)}(p) \to 0 and u^{(n)}(p_0) \to u^{(0)}(p_0), so the edges are preserved.
The parameter \tau^{(n)} in Equation (4) trades off edge preservation against region smoothing. If \tau^{(n)} \gg \omega^{(n)}(p), then u^{(n)}(p_0) \approx u_0(p_0), which cannot optimize the edges. If \tau^{(n)} \ll \omega^{(n)}(p), then u^{(n)}(p_0) approximates the weighted mean of the adjacent pixels; the edges are blurred, and these blurred edges cause the segmentation curve to converge excessively.
A fixed \tau^{(n)} cannot adaptively harmonize edge preservation and sub-region smoothing according to the local information of the image [23,24]. To solve this problem, the 2-clustering of the target pixel and its adjacent pixels is analyzed based on their spatial relationship.
  • If the target pixel is inside (or outside) the object, the adjacent pixels should all belong to the object (or all to the background) due to region connectivity (Figure 2a).
  • If the target pixel is located on the object boundary, the adjacent pixels belong to one of the following cases.
    If one of the four adjacent pixels is located in the background, there are four kinds of situations (Figure 2b).
    If two of the four adjacent pixels are located in the background, there are six kinds of situations (Figure 2c).
As shown in Figure 2, the target pixel should belong to the sub-class that contains more pixels, so the parameter \tau^{(n)} should be set as:
\tau^{(n)} = \mathrm{median}\left\{ \left( 1 + |\nabla u^{(n-1)}(p_0)|^2 \right)^{-2},\ \omega^{(n)}(p) \right\}_{p \in \Lambda} \qquad (6)
The multiscale edges of an image can be described as:
u_0 = u^{(0)} \xrightarrow{u_0} u^{(1)} \xrightarrow{u_0} \cdots \xrightarrow{u_0} u^{(k-1)} \xrightarrow{u_0} u^{(k)} \xrightarrow{u_0} \cdots, \quad \text{with multiscale edges } |\nabla u_0|, \ldots, |\nabla u^{(k-1)}|, |\nabla u^{(k)}|, \ldots
The presence of u_0 at each step ensures that the raw image, which contains the crucial edges, is not forgotten during the smoothing. At step k, the smoothed image u^{(k)} depends on u^{(k-1)}, which causes the small-scale edges in |\nabla u^{(k-1)}| to be smeared step by step. As the number of iterations increases, the strong edges are preserved and the weak edges are enhanced.
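The fixed-point update of Equations (4)–(6) can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the function name, the edge-replicated boundary handling, and the use of np.gradient for the discrete gradient are assumptions.

```python
import numpy as np

def smooth_step(u0, u_prev):
    """One lagged-diffusivity fixed-point iteration of the edge-preserving
    smoothing model: Eq. (4) with the weights of Eq. (5) and the adaptive
    tau of Eq. (6). A sketch; boundary handling (edge replication) is assumed."""
    u = np.pad(u_prev, 1, mode="edge")          # replicate borders for 4-neighbours
    gy, gx = np.gradient(u)                     # discrete gradient of u^(n-1)
    w = (1.0 + gx**2 + gy**2) ** -2             # weights omega^(n)(p), Eq. (5)

    # weights and values of the four pixels adjacent to each target pixel p0
    w0 = w[1:-1, 1:-1]
    w_n, w_s = w[:-2, 1:-1], w[2:, 1:-1]
    w_w, w_e = w[1:-1, :-2], w[1:-1, 2:]
    u_n, u_s = u[:-2, 1:-1], u[2:, 1:-1]
    u_w, u_e = u[1:-1, :-2], u[1:-1, 2:]

    # adaptive tau^(n): median of the target weight and its four neighbours, Eq. (6)
    tau = np.median(np.stack([w0, w_n, w_s, w_w, w_e]), axis=0)

    # weighted update u^(n)(p0), Eq. (4)
    num = tau * u0 + w_n * u_n + w_s * u_s + w_w * u_w + w_e * u_e
    den = tau + w_n + w_s + w_w + w_e
    return num / den
```

In a homogeneous region the gradient vanishes, every weight equals 1, and the update leaves a constant image unchanged; at a sharp edge the weights approach 0 and u^{(n)}(p_0) stays close to its previous value, which is exactly the behavior described after Equation (5).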

2.2. The Level Set Method

In this paper, the object is extracted using the edges in the smoothed image. Assuming that each region of the smoothed image is constant or approximately constant, edge-based level set methods can obtain accurate results efficiently. The main idea is to represent the contour as the zero level set of a higher-dimensional level set function and make it evolve toward the object edges, driven by an energy function. The segmentation model using the level set method for an image u is formulated as the following minimization problem [16]:
S(\nabla u, \phi) = \lambda \int_\Omega g(|\nabla u|)\, \delta(\phi)\, |\nabla \phi| \, dx\,dy + \nu \int_\Omega g(|\nabla u|)\, H(\phi) \, dx\,dy + \frac{\mu}{2} \int_\Omega (|\nabla \phi| - 1)^2 \, dx\,dy \qquad (7)
In Equation (7), the first two terms are the external energy, which drives the zero level set toward the object boundaries, while the last term is the internal energy, which controls the penalization effect. \mu > 0 is the parameter controlling the penalization effect, and \lambda > 0 and \nu are the weights of the length and area of the segmentation curve, respectively. H(\phi) is the Heaviside function and \delta(\phi) is the Dirac measure, defined as follows:
H(\phi) = \begin{cases} 1, & \phi \ge 0 \\ 0, & \phi < 0 \end{cases}, \qquad \delta(\phi) = \frac{dH(\phi)}{d\phi} \qquad (8)
g(|\nabla u|) is the edge indicator function. In our proposed model, the edge indicator function of the gradient magnitude |\nabla u| is defined as:
g(|\nabla u|) = (1 + |\nabla u|)^{-1} \qquad (9)
Suppose that S(\nabla u, \phi) has a minimum point. By introducing an artificial temporal variable t, the steepest-descent process can be used to minimize the function S(\nabla u, \phi). Its gradient flow is:
\eta(\phi) = \frac{\partial \phi}{\partial t} = -\frac{\partial S(\nabla u, \phi)}{\partial \phi} = \lambda\, \delta(\phi)\, \mathrm{div}\left( g(|\nabla u|) \frac{\nabla \phi}{|\nabla \phi|} \right) + \nu\, g(|\nabla u|)\, \delta(\phi) + \mu \left[ \Delta \phi - \mathrm{div}\left( \frac{\nabla \phi}{|\nabla \phi|} \right) \right] \qquad (10)
where \Delta(\cdot) is the Laplacian operator.
The central difference is used to approximate \nabla \phi and, introducing a time step \Delta t, \phi^{(n-1)} can be updated by:
\phi^{(n)} = \phi^{(n-1)} + \Delta t\, \eta(\phi^{(n-1)}) \qquad (11)
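One explicit evolution step of Equations (9)–(11) can be sketched as follows. This is a hedged illustration, not the authors' code: the smoothed Dirac delta of width eps and the finite-difference operators are assumed discretizations, while the parameter defaults follow the values reported in Section 3.

```python
import numpy as np

def evolve_step(phi, u, lam=5.0, nu=3.0, mu=0.04, dt=5.0, eps=1.5):
    """One gradient-descent step of the level set evolution, Eqs. (10)-(11),
    on the smoothed image u. A sketch with assumed discretizations."""
    def grad(f):
        fy, fx = np.gradient(f)
        return fx, fy

    def div(fx, fy):
        return np.gradient(fx, axis=1) + np.gradient(fy, axis=0)

    ux, uy = grad(u)
    g = 1.0 / (1.0 + np.hypot(ux, uy))            # edge indicator, Eq. (9)

    px, py = grad(phi)
    mag = np.hypot(px, py) + 1e-10                # avoid division by zero
    nx, ny = px / mag, py / mag                   # unit normal of the level sets

    delta = (eps / np.pi) / (eps**2 + phi**2)     # smoothed Dirac delta (assumed)

    length_term = lam * delta * div(g * nx, g * ny)        # curve-length term
    area_term = nu * g * delta                             # area (balloon) term
    reg_term = mu * (div(*grad(phi)) - div(nx, ny))        # distance regularization

    return phi + dt * (length_term + area_term + reg_term)  # Eq. (11)
```

Iterating this step drives the zero level set toward locations where g is small, i.e., where the gradient magnitude of the smoothed image is large.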

2.3. Segmentation on the Multiscale Edges

The proposed image segmentation method consists of the multiscale decomposition and the segmentation. In the discrete computation of (1), the model can be simplified as follows:
\phi^* = \arg\min_{\phi} E(u_0, \phi) = \arg\min_{\phi} \left\{ D(u_0, u^{(k)}) + S(\nabla u^{(k)}, \phi^{(k)}) \right\}, \quad k = 0, 1, \ldots \qquad (12)
A series of smoothed images with different scale edges is obtained by using the multiscale decomposition model to decompose the image. Then the level set method is used to extract the object from the smoothed images, one scale at a time. At step k, the segmentation curve \phi^{(k)} is obtained by evolving on the smoothed image u^{(k)} with the level set algorithm, using \phi^{(k-1)} as the initial curve. The steps of the multiscale edge segmentation are shown in Figure 1.
In the segmentation process, since the level set algorithm uses a smoothed image with different scale edges each time, the results of successive segmentations are inconsistent. To evaluate the performance of the segmentation using the adjacent-scale edges, a significance level of the segmentation region is established, defined as follows:
S_l(k) = 1 - \frac{\mathrm{card}(A_k \cap A_{k-1})}{\max\{\mathrm{card}(A_k),\ \mathrm{card}(A_{k-1})\}} \qquad (13)
where A_k and A_{k-1} represent the objects obtained from the smoothed images with different scale edges at iterations k and k-1, with A = \{ i \mid \phi(i) \le 0,\ i = 1, 2, \ldots, N \}.
In (13), as the number of iterations k gradually increases, the strong edges are preserved and the weak edges are enhanced. The segmentation curve gradually evolves toward the contour of the object and S_l(k) gradually decreases. When the segmentation curve reaches the contour of the object, A_k and A_{k-1} are closest and S_l(k) = 0, i.e., S_l(k) is at its minimum. If the number of iterations k continues to increase, the edges of the image become blurred, the segmentation curve continues to shrink, and S_l(k) gradually increases until the segmentation curve disappears. In summary, the automatic iteration-termination condition for the segmentation should be constructed as S_l(k) = 0.
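The termination test of Equation (13) and the overall loop of Equation (12) can be sketched as below. The helper names and the callables `smooth` and `evolve` (standing in for single steps of Equations (4) and (11)) are hypothetical placeholders, not part of the paper.

```python
import numpy as np

def significance_level(phi_k, phi_prev):
    """Significance level Sl(k) of Eq. (13): one minus the overlap ratio of
    the objects A = {phi <= 0} segmented at two adjacent scales."""
    a_k, a_prev = phi_k <= 0, phi_prev <= 0
    inter = np.count_nonzero(a_k & a_prev)
    denom = max(np.count_nonzero(a_k), np.count_nonzero(a_prev))
    return 1.0 - inter / denom if denom else 1.0

def segment(u0, phi0, smooth, evolve, max_scales=50):
    """Multiscale segmentation loop of Eq. (12): at each scale k, smooth the
    image, evolve the curve with phi^(k-1) as the initial curve, and stop
    when Sl(k) = 0. `smooth` and `evolve` are assumed single-step callables."""
    u, phi = u0.copy(), phi0.copy()
    for k in range(1, max_scales + 1):
        u = smooth(u0, u)                    # next-scale image u^(k), Eq. (4)
        phi_prev, phi = phi, evolve(phi, u)  # curve evolution, Eq. (11)
        if significance_level(phi, phi_prev) == 0.0:
            break                            # termination condition Sl(k) = 0
    return phi
```

In practice an exact zero may never be reached on real data, so a small tolerance on S_l(k) would be a reasonable engineering substitute for the strict equality.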

3. Experimental Results and Analysis

The experiments were conducted using VC 6.0 on a PC with an Intel Core i5 CPU @ 3.40 GHz and 4 GB of RAM, without any particular code optimization. In the implementation of the proposed model, the parameters \lambda = 5.0, \mu = 0.04, \nu = 3.0 and \Delta t = 5.0 were used for all experiments. In addition, the proposed model is applied to single-channel images.
To test the segmentation performance of the proposed method on real images with strong or weak edges, experiments were carried out to compare it with related level set models and other edge-based segmentation models: Li's model [16], the SDREL model [19], the DCLSM model [25] and the IMST model [26]. Among these, Li's model is an edge-based level set model that uses a Gaussian filter to smooth images. The SDREL model is an improved level set model that takes the saliency map into account and combines region information with edge information. Recently, many level set methods combined with deep learning have been proposed [25,27,28]. These methods mainly use deep learning to perform a coarse segmentation first and obtain an initial mask for the level set evolution; the level set function then uses this mask as a prior shape and evolves to the contours of the objects. However, these methods do not obtain good segmentation results when the initial mask is not accurate enough or fails to cover the object. To assess the performance of the proposed model, the deep-learning-based DCLSM model was included in the comparison.
Images of different sizes, obtained from the Internet and the Berkeley segmentation database, were segmented; partial results are shown in Figure 3. On images with strong edges, the effects of the compared algorithms are almost the same (Figure 3a). For images with weak edges (Figure 3b,c), the segmentation results of this method outperform those of the other methods. Compared with the other methods, this method has three characteristics: (1) it integrates smoothing and segmentation, and the number of smoothing iterations depends on the significance level of the segmentation; (2) the weight of the target pixel is set according to local information, adaptively harmonizing edge preservation and region smoothing; (3) the weak edges are enhanced. In Li's model, the segmentation accuracy declines because the edges are blurred by Gaussian smoothing (Figure 3c). The segmentation effect of the IMST method is poor on images with weak edges.
To evaluate the segmentation performance of this method on real images with inhomogeneous regions, images of different sizes were segmented, and partial results are presented in Figure 4. In Li's model, the segmentation curve is far from the ground truth because the edges are blurred by Gaussian smoothing. The effect of this method was better than that of the other models. However, this model produced non-semantic sub-regions in Figure 4c, because the weak edges were over-enhanced. More experimental results demonstrating the segmentation performance are shown in Figure 5.
For the images of Figure 3, Figure 4 and Figure 5, the CPU time and scores of the segmentations are listed in Table 1. This method tolerates a higher computational cost to obtain better segmentation results. The computational cost of this model mainly depends on the degree of region inhomogeneity; for example, the CPU times for Figure 3a and Figure 4b were 3.715 and 7.098 s, respectively.
Compared with Li's model, the computational cost of this model was higher, because this method uses iterative smoothing while Li's model applies Gaussian smoothing only once.
Noise has a negative effect on segmentation. To test the robustness against noise, 480 × 345 images degraded with additive Gaussian noise were segmented and compared with the IMST model [26] and Li's model [16] using the same initial curve. Partial results are shown in Figure 6. For the original images, the effect of the proposed method was similar to that of the IMST model, and both performed better than Li's model. As the image quality degraded, the segmentation accuracy of this method remained higher than that of the other models. The segmentation scores are shown in Table 2. The variances of the F-measure using this method, the IMST model [26] and Li's model [16] were 0.9%, 4% and 2.3%, respectively, so the variance of this method was smaller than those of the other two models. The means of the F-measure were 0.970, 0.933 and 0.906, respectively, so the mean using the proposed method was higher than those of the other two models. Judging from the statistical mean and variance, the proposed method is not sensitive to noise.
The CPU time of the segmentation on the noisy images is shown in Table 3. Li's model uses Gaussian smoothing only once, while the proposed method uses iterative smoothing to deal with the inhomogeneous sub-regions; thus, its computational cost is higher than that of Li's model.

4. Conclusions

In this work, we propose an improved level set method for segmentation on the multiscale edges. Compared with related edge-based segmentation methods, this model improves the segmentation performance in the following aspects: (1) the multiscale edges of an image are incorporated into the segmentation model; (2) the strong edges for segmentation are preserved and the weak edges are enhanced; (3) the optimal scale edge for segmentation is selected. However, the proposed method requires the initial curve to be set manually, and its computational cost is high. In the future, our work will focus on designing an algorithm to adaptively place the initial curve adjacent to the object boundaries.

Author Contributions

Conceptualization, K.H. and Y.S.; methodology, K.H. and Y.S.; validation, K.H. and D.W.; formal analysis, Y.S.; investigation, Y.S.; resources, T.P.; data curation, D.W.; writing-original draft preparation, Y.S.; writing-review and editing, Y.S. and T.P.; visualization, Y.S.; supervision, K.H.; project administration, K.H. and D.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Sichuan Province Natural Science Foundation of China (2016JZ0014).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Fang, C.; Liao, Z.; Yu, Y. Piecewise Flat Embedding for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 41, 1470–1485. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Lin, C.-Y.; Muchtar, K.; Lin, W.-Y.; Jian, Z.-Y. Moving Object Detection through Image Bit-Planes Representation Without Thresholding. IEEE Trans. Intell. Transp. Syst. 2020, 21, 1404–1414. [Google Scholar] [CrossRef]
  3. Yoo, J.; Lee, G.-C. Moving Object Detection Using an Object Motion Reflection Model of Motion Vectors. Symmetry 2019, 11, 34. [Google Scholar] [CrossRef] [Green Version]
  4. Yoon, K.S.; Kim, W.-J. Efficient edge-preserved sonar image enhancement method based on CVT for object recognition. IET Image Process. 2019, 13, 15–23. [Google Scholar] [CrossRef]
  5. Zheng, Y.; Guo, B.; Li, C.; Yan, Y. A Weighted Fourier and Wavelet-Like Shape Descriptor Based on IDSC for Object Recognition. Symmetry 2019, 11, 693. [Google Scholar] [CrossRef] [Green Version]
  6. Maqueda, A.I.; Loquercio, A.; Gallego, G.; Garcia, N.; Scaramuzza, D. Event-based Vision meets Deep Learning on Steering Prediction for Self-driving Cars. In Proceedings of the 2018 IEEE/Cvf Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 5419–5427. [Google Scholar] [CrossRef] [Green Version]
  7. Dan, C.C.; Giusti, A.; Gambardella, L.M.; Schmidhuber, J. Deep Neural Networks Segment Neuronal Membranes in Electron Microscopy Images. Adv. Neural Inf. Process. Syst. 2012, 25, 2852–2860. [Google Scholar]
  8. Long, J.; Shelhamer, E.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440. [Google Scholar] [CrossRef] [Green Version]
  9. Shelhamer, E.; Long, J.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 640–651. [Google Scholar] [CrossRef] [PubMed]
  10. Chan, T.F.; Vese, L.A. Active contours without edges. IEEE Trans. Image Process. 2001, 10, 266–277. [Google Scholar] [CrossRef] [Green Version]
  11. Wang, D. Hybrid fitting energy-based fast level set model for image segmentation solving by algebraic multigrid and sparse field method. IET Image Process. 2018, 12, 539–545. [Google Scholar] [CrossRef]
  12. Li, C.; Kao, C.-Y.; Gore, J.C.; Ding, Z. Implicit active contours driven by local binary fitting energy. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007; pp. 1–7. [Google Scholar] [CrossRef]
  13. Peng, Y.; Liu, F.; Liu, S. Active contours driven by normalized local image fitting energy. Concurr. Comput. Pract. Exp. 2014, 26, 1200–1214. [Google Scholar] [CrossRef]
  14. Swierczynski, P.; Papiez, B.W.; Schnabel, J.A.; Macdonald, C. A level-set approach to joint image segmentation and registration with application to CT lung imaging. Comput. Med. Imaging Graph. 2018, 65, 58–68. [Google Scholar] [CrossRef]
  15. Ali, H.; Rada, L.; Badshah, N. Image Segmentation for Intensity Inhomogeneity in Presence of High Noise. IEEE Trans. Image Process. 2018, 27, 3729–3738. [Google Scholar] [CrossRef] [PubMed]
  16. Li, C.M.; Xu, C.Y.; Gui, C.; Fox, M.D. Level set evolution without re-initialization: A new variational formulation. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20–25 June 2005; pp. 430–436. [Google Scholar] [CrossRef]
  17. Yeo, S.Y.; Xie, X.; Sazonov, I.; Nithiarasu, P. Segmentation of biomedical images using active contour model with robust image feature and shape prior. Int. J. Numer. Methods Biomed. Eng. 2014, 30, 232–248. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  18. Gao, Y.; Yu, X.; Wu, C.; Zhou, W.; Wang, X.; Zhuang, Y. Accurate Optic Disc and Cup Segmentation from Retinal Images Using a Multi-Feature Based Approach for Glaucoma Assessment. Symmetry 2019, 11, 1267. [Google Scholar] [CrossRef] [Green Version]
  19. Zhi, X.-H.; Shen, H.-B. Saliency driven region-edge-based top down level set evolution reveals the asynchronous focus in image segmentation. Pattern Recognit. 2018, 80, 241–255. [Google Scholar] [CrossRef]
  20. Osher, S.; Sethian, J.A. Fronts propagating with curvature-dependent speed—Algorithms based on hamilton-jacobi Formulations. J. Comput. Phys. 1988, 79, 12–49. [Google Scholar] [CrossRef] [Green Version]
  21. Akram, F.; Garcia, M.A.; Puig, D. Active contours driven by difference of Gaussians. Sci. Rep. 2017, 7, 14984. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  22. Khadidos, A.; Sanchez, V.; Li, C.T. Weighted Level Set Evolution Based on Local Edge Features for Medical Image Segmentation. IEEE Trans. Image Process. 2017, 26, 1979–1991. [Google Scholar] [CrossRef]
  23. Aubert, G.; Kornprobst, P. Mathematical Problems in Image Processing: Partial Differential Equations and the Calculus of Variations, Applied Mathematical Sciences; Springer: Berlin, Germany, 2006. [Google Scholar]
  24. Chan, T.F.; Osher, S.; Shen, J.H. The digital TV filter and nonlinear denoising. IEEE Trans. Image Process. 2001, 10, 231–241. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  25. Kristiadi, A.; Pranowo, P. Deep Convolutional Level Set Method for Image Segmentation. J. Ict Res. Appl. 2017, 11. [Google Scholar] [CrossRef] [Green Version]
  26. Janakiraman, T.N.; Mouli, P.V.S. Image Segmentation Based on Minimal Spanning Tree and Cycles. In Proceedings of the International Conference on Computational Intelligence and Multimedia Applications, Sivakasi, Tamil Nadu, India, 13–15 December 2007; pp. 215–219. [Google Scholar]
  27. Xie, W.; Li, Y.; Ma, Y. PCNN-based level set method of automatic mammographic image segmentation. Optik 2016, 127, 1644–1650. [Google Scholar] [CrossRef]
  28. Wang, Z.; Acuna, D.; Ling, H.; Kar, A.; Fidler, S. Object Instance Annotation with Deep Extreme Level Set Evolution. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 7492–7500. [Google Scholar]
Figure 1. Processing flow of the proposed image segmentation.
Figure 2. The 2-clustering of the target pixel and its neighbors, the red circle denotes the target pixel, the white and black filled circles denote foreground and background, respectively. (a) All pixels are located in the foreground, (b) a pixel is located in the background and others in the foreground, (c) two pixels are located in the background and others in the foreground.
Figure 3. Comparison of the proposed method with the Li’s model, IMST model, DCLSM model and SDREL model on real images. Row 1: original images and initial curves, Row 2: the ground truth, Row 3: the proposed model, Row 4: Li’s model, Row 5: IMST model. Row 6: DCLSM model, Row 7: SDREL model.
Figure 4. Comparison of the proposed method with the Li’s model, IMST model, DCLSM model and SDREL model on real images with serious inhomogeneity. Row 1: original images and initial curves, Row 2: the ground truth, Row 3: the proposed model, Row 4: Li’s model, Row 5: IMST model, Row 6: DCLSM model, Row 7: SDREL model.
Figure 5. Comparison of the proposed method with Li's model, the IMST model, the DCLSM model and the SDREL model on other real images. Row 1: original images and initial curves; Row 2: ground truth; Row 3: the proposed model; Row 4: Li's model; Row 5: IMST model; Row 6: DCLSM model; Row 7: SDREL model.
Figure 6. Comparison of the proposed method with Li's model and the IMST model on noisy real images. Row 1: noisy images and initial curves; Row 2: ground truth; Row 3: the proposed model; Row 4: Li's model; Row 5: IMST model.
Table 1. Comparison of CPU time and segmentation scores on Figure 3, Figure 4 and Figure 5 (Pre., Rec., F-M and CPU-t denote precision, recall, F-measure and CPU time (s), respectively).
| Method | Metric | Fig. 3a (320 × 240) | Fig. 3b (480 × 320) | Fig. 3c (512 × 365) | Fig. 4a (220 × 213) | Fig. 4b (320 × 221) | Fig. 4c (512 × 384) | Fig. 5a (400 × 300) | Fig. 5b (480 × 320) | Fig. 5c (500 × 333) |
|---|---|---|---|---|---|---|---|---|---|---|
| This method | Pre. | 1.0 | 0.984 | 0.999 | 1.0 | 0.992 | 0.983 | 0.967 | 0.992 | 0.964 |
| | Rec. | 0.965 | 0.972 | 0.993 | 0.938 | 0.940 | 0.998 | 0.988 | 0.834 | 0.979 |
| | F-M | 0.982 | 0.978 | 0.996 | 0.968 | 0.965 | 0.990 | 0.987 | 0.906 | 0.970 |
| | CPU-t | 3.715 | 8.673 | 26.901 | 3.637 | 7.098 | 20.201 | 4.335 | 7.746 | 6.474 |
| Li's (Gaussian smoothing + level set) | Pre. | 0.985 | 0.976 | 0.959 | 0.839 | 0.930 | 0.925 | 0.812 | 0.948 | 0.789 |
| | Rec. | 0.953 | 0.915 | 0.927 | 0.758 | 0.873 | 0.989 | 0.997 | 0.853 | 0.975 |
| | F-M | 0.968 | 0.945 | 0.943 | 0.796 | 0.900 | 0.956 | 0.895 | 0.898 | 0.872 |
| | CPU-t | 2.666 | 6.894 | 13.973 | 1.834 | 5.288 | 18.471 | 2.597 | 5.523 | 4.872 |
| IMST | Pre. | 0.998 | 0.967 | 0.947 | 0.856 | 0.929 | 0.981 | 0.873 | 0.943 | 0.956 |
| | Rec. | 0.952 | 0.993 | 0.994 | 0.994 | 0.982 | 0.980 | 0.989 | 0.917 | 0.975 |
| | F-M | 0.975 | 0.980 | 0.970 | 0.920 | 0.955 | 0.981 | 0.927 | 0.930 | 0.965 |
| | CPU-t | 2.947 | 6.209 | 9.895 | 2.464 | 3.115 | 11.423 | 2.604 | 3.871 | 3.467 |
| DCLSM (CNN + level set) | Pre. | 0.982 | 0.952 | 0.963 | 0.873 | 0.931 | 0.934 | 0.890 | 0.949 | 0.938 |
| | Rec. | 0.952 | 0.984 | 0.993 | 0.984 | 0.887 | 0.979 | 0.972 | 0.884 | 0.983 |
| | F-M | 0.966 | 0.969 | 0.978 | 0.925 | 0.908 | 0.959 | 0.929 | 0.915 | 0.960 |
| | CPU-t | 2.766 | 6.577 | 11.835 | 1.576 | 5.012 | 15.347 | 2.483 | 4.973 | 4.279 |
| SDREL (saliency map + level set) | Pre. | 0.872 | 0.959 | 0.737 | 0.846 | 0.786 | 0.849 | 0.825 | 0.853 | 0.991 |
| | Rec. | 0.961 | 0.976 | 0.984 | 0.976 | 0.943 | 0.917 | 0.970 | 0.749 | 0.989 |
| | F-M | 0.914 | 0.967 | 0.843 | 0.906 | 0.857 | 0.882 | 0.891 | 0.798 | 0.990 |
| | CPU-t | 3.423 | 7.823 | 20.715 | 2.979 | 4.357 | 18.542 | 4.011 | 3.824 | 5.348 |
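The precision, recall and F-measure scores reported above are the standard pixel-wise metrics computed between a binary segmentation and its ground truth. A minimal sketch of how they are typically computed (the `segmentation_scores` helper and the toy 4 × 4 masks are illustrative, not the paper's code or data):

```python
import numpy as np

def segmentation_scores(pred, gt):
    """Pixel-wise precision, recall and F-measure of a binary
    segmentation `pred` against the ground truth `gt` (boolean arrays)."""
    tp = np.logical_and(pred, gt).sum()    # object pixels correctly labeled
    fp = np.logical_and(pred, ~gt).sum()   # background labeled as object
    fn = np.logical_and(~pred, gt).sum()   # object labeled as background
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return precision, recall, f_measure

# Toy masks: one false positive and one false negative pixel
gt = np.array([[0, 0, 1, 1],
               [0, 1, 1, 1],
               [0, 1, 1, 0],
               [0, 0, 0, 0]], dtype=bool)
pred = np.array([[0, 0, 1, 1],
                 [0, 1, 1, 0],
                 [0, 1, 1, 1],
                 [0, 0, 0, 0]], dtype=bool)
p, r, f = segmentation_scores(pred, gt)   # each equals 6/7 here
```

The F-measure is the harmonic mean of precision and recall, so a method must score well on both to score well overall, which is why it is used as the summary column in the comparisons.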
Table 2. Scores of the different algorithms on noisy images (Pre., Rec. and F-M denote precision, recall and F-measure, respectively).
| PSNR (dB) | Proposed Pre. | Proposed Rec. | Proposed F-M | IMST Pre. | IMST Rec. | IMST F-M | Li Pre. | Li Rec. | Li F-M |
|---|---|---|---|---|---|---|---|---|---|
| 23.53 | 0.999 | 0.962 | 0.980 | 0.914 | 1.000 | 0.955 | 0.959 | 0.943 | 0.951 |
| 21.94 | 0.998 | 0.962 | 0.980 | 0.915 | 0.984 | 0.948 | 0.951 | 0.943 | 0.947 |
| 20.84 | 0.997 | 0.964 | 0.980 | 0.930 | 0.998 | 0.962 | 0.955 | 0.943 | 0.949 |
| 19.99 | 0.996 | 0.965 | 0.980 | 0.884 | 1.000 | 0.938 | 0.944 | 0.942 | 0.943 |
| 19.39 | 0.992 | 0.963 | 0.977 | 0.920 | 0.996 | 0.957 | 0.945 | 0.941 | 0.943 |
| 18.77 | 0.991 | 0.965 | 0.978 | 0.870 | 1.000 | 0.930 | 0.942 | 0.939 | 0.940 |
| 18.27 | 0.983 | 0.966 | 0.974 | 0.876 | 1.000 | 0.934 | 0.939 | 0.938 | 0.938 |
| 17.84 | 0.980 | 0.967 | 0.973 | 0.934 | 0.981 | 0.957 | 0.929 | 0.940 | 0.934 |
| 16.79 | 0.973 | 0.968 | 0.970 | 0.887 | 0.991 | 0.936 | 0.881 | 0.938 | 0.909 |
| 16.23 | 0.973 | 0.968 | 0.970 | 0.915 | 0.996 | 0.954 | 0.866 | 0.936 | 0.900 |
| 15.77 | 0.972 | 0.970 | 0.971 | 0.866 | 1.000 | 0.928 | 0.854 | 0.935 | 0.893 |
| 15.36 | 0.966 | 0.970 | 0.968 | 0.848 | 0.999 | 0.917 | 0.845 | 0.937 | 0.889 |
| 14.99 | 0.965 | 0.972 | 0.968 | 0.876 | 0.985 | 0.928 | 0.838 | 0.935 | 0.884 |
| 14.40 | 0.960 | 0.972 | 0.966 | 0.856 | 1.000 | 0.922 | 0.826 | 0.933 | 0.876 |
| 13.71 | 0.957 | 0.974 | 0.965 | 0.880 | 0.977 | 0.926 | 0.912 | 0.932 | 0.922 |
| 13.33 | 0.953 | 0.974 | 0.963 | 0.798 | 0.992 | 0.884 | 0.806 | 0.933 | 0.865 |
| 12.87 | 0.953 | 0.976 | 0.964 | 0.828 | 0.989 | 0.902 | 0.790 | 0.930 | 0.854 |
| 11.77 | 0.942 | 0.978 | 0.960 | 0.869 | 1.000 | 0.930 | 0.785 | 0.934 | 0.853 |
| 11.34 | 0.925 | 0.981 | 0.952 | 0.869 | 0.999 | 0.930 | 0.763 | 0.947 | 0.845 |
| 10.92 | 0.907 | 0.984 | 0.944 | 0.805 | 0.988 | 0.877 | 0.730 | 0.954 | 0.827 |
| Original image | 0.999 | 0.962 | 0.980 | 0.941 | 0.996 | 0.968 | 0.964 | 0.943 | 0.953 |
| Mean | 0.971 | 0.970 | 0.970 | 0.880 | 0.994 | 0.933 | 0.877 | 0.939 | 0.906 |
| Variance | 0.025 | 0.006 | 0.009 | 0.039 | 0.007 | 0.023 | 0.072 | 0.006 | 0.040 |
Table 3. The comparison of CPU time (in sec) with noisy images.
| PSNR (dB) | 9~10 | 10~11 | 11~13 | 13~15 | 15~17 | 17~20 | 20~30 | Original image |
|---|---|---|---|---|---|---|---|---|
| The proposed model | 27.5~24.0 | 23.6~20.8 | 20.2~17.8 | 17.6~14.2 | 13.72~10.8 | 10.53~9.0 | 8.96~7.41 | 7.41 |
| The IMST model | 9.66~11.8 | 11.6~10.8 | 10.7~10.4 | 10.4~9.55 | 9.69~9.12 | 9.20~7.74 | 7.52~6.33 | 6.27 |
| Li's model | 23.4~20.8 | 20.5~18.5 | 18.3~16.1 | 15.8~12.1 | 11.2~8.5 | 8.43~6.53 | 6.13~5.3 | 5.26 |
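The noise levels in Tables 2 and 3 are expressed as the peak signal-to-noise ratio (PSNR) in dB of the noisy image against the clean one. A small sketch of the standard PSNR computation for 8-bit images (the `psnr` helper and the synthetic flat-gray test image are illustrative, not the paper's experimental setup):

```python
import numpy as np

def psnr(reference, noisy, peak=255.0):
    """Peak signal-to-noise ratio in dB between an 8-bit reference
    image and its noisy version; infinite for identical images."""
    diff = reference.astype(np.float64) - noisy.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# Illustrative: Gaussian noise (sigma = 20) on a flat gray image
rng = np.random.default_rng(0)
clean = np.full((64, 64), 128.0)
noisy = np.clip(clean + rng.normal(0.0, 20.0, clean.shape), 0.0, 255.0)
value = psnr(clean, noisy)   # roughly 22 dB for sigma = 20
```

Lower PSNR means heavier noise, which is why the segmentation scores in Table 2 and the CPU times in Table 3 are reported against decreasing PSNR bands.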

Share and Cite

Su, Y.; He, K.; Wang, D.; Peng, T. An Improved Level Set Method on the Multiscale Edges. Symmetry 2020, 12, 1650. https://doi.org/10.3390/sym12101650