Article

Image Segmentation Using Active Contours with Hessian-Based Gradient Vector Flow External Force

by Qianqian Qian, Ke Cheng, Wei Qian, Qingchang Deng and Yuanquan Wang
1 School of Computer Science, Jiangsu University of Science and Technology, Zhenjiang 212003, China
2 School of Electronics and Information, Jiangsu University of Science and Technology, Zhenjiang 212003, China
3 School of Artificial Intelligence, Hebei University of Technology, Tianjin 300401, China
* Authors to whom correspondence should be addressed.
Sensors 2022, 22(13), 4956; https://doi.org/10.3390/s22134956
Submission received: 7 May 2022 / Revised: 17 June 2022 / Accepted: 20 June 2022 / Published: 30 June 2022
(This article belongs to the Special Issue Smart Mobile and Sensing Applications)

Abstract

The gradient vector flow (GVF) model has been widely used in the field of computer image segmentation, and many studies have built on it to achieve better results in image processing. However, few of these models take the image structure into account. In this paper, the smoothness constraint of the GVF model is re-expressed in matrix form, and the image structure represented by the Hessian matrix is incorporated into the GVF model. With this treatment, the associated diffusion partial differential equation becomes anisotropic. The GVF model based on the Hessian matrix (HBGVF) has many advantages over other related GVF methods, such as accurate convergence to various concavities and excellent preservation of weak edges. We demonstrate these advantages through theoretical analysis and extensive comparative experiments.

1. Introduction

Image segmentation is a key step from image processing to image analysis. Traditional segmentation methods include thresholding [1], clustering [2], the active contour model [3], region growing [4], etc. Since Kass et al. proposed the snake, or active contour, model in 1988 [3], it has become one of the mainstream models for image segmentation. Generally, an active contour performs image segmentation by deforming a curve on the image plane so as to minimize a combination of internal and external energy; the internal energy keeps the curve continuous and smooth, while the external energy attracts the curve to the boundary of the object to be segmented. Therefore, the problem of finding the boundary of the segmented object can be transformed into a problem of minimizing the internal and external energy. According to the representation of the curve, active contours are divided into parametric contours and geometric contours. The parametric model uses an explicit parametric representation [3,5,6,7,8,9] and uses the image edge map to stop the evolution of the contour. Parametric models rely heavily on high gradient amplitudes to extract object boundaries and are effective only when the contrast between background and foreground is clear enough. The geometric model [10,11,12,13,14,15,16,17,18,19,20,21,22,23,24] is based on level set techniques and usually adopts specific regional homogeneity criteria to guide the evolution of the contour.
The external force plays a leading role in the evolution of the parametric snake contour, so considerable effort has been devoted to designing external forces that improve the robustness of active contours. At present, the gradient vector flow (GVF) [25] is still one of the most successful methods. It diffuses the gradient vectors from the object boundary to the rest of the image, which not only enlarges the capture range but also weakens the influence of noise to a certain extent. Due to its effectiveness, a large number of fast algorithms for the GVF model have been proposed, including vector field convolution (VFC) [26], BVF [27], GVF based on the augmented Lagrangian [28], the multigrid method for GVF [29], and efficient numerical schemes for GVF [30]. Some other efforts focus on improving the initial edge map; for example, a guided filter is employed to enhance the initial edge map [31,32], a directional edge map is devised for the GVF model [33], and, in the literature, the GVF is modified by using the initial contour position and introducing additional boundary conditions of Dirichlet type [34]. Many efforts pay attention to reformulating the energy functional of the GVF model; examples include the harmonic gradient vector flow (HGVF) [35], the harmonic surface [32,36], the 4DGVF external force field [37], NGVF [38], EPGVF [39], MGVF [40], and CN-GGVF [41]. Recently, the GVF model has also found some interesting applications, as well as some interesting work on GVF snake initialization for ultrasonic image segmentation, such as walking particles [42,43]. Very recently, Jaouen et al. proposed an image enhancement vector field based on partial differential equations (PDEs) [44] and pointed out the similarity between this vector field and gradient vector flow, which allows a natural connection between impulse filtering and a large body of work on GVF-like fields. It is important to note that deep learning currently plays a very important role in image-based applications, such as image segmentation [45,46,47,48,49], detection [50,51], and classification [52,53]; however, it needs big data for training, and the active contour is still of importance for image segmentation.
We can see that although the above works provide various ways to improve the GVF model, they do not consider the characteristics of the image structure. Ref. [54] pointed out that the Hessian method is "a method to extract the direction of image features through high-order differentiation". Inspired by this principle, we express the smoothness constraint of the GVF model in matrix form, then incorporate the Hessian matrix into the energy functional of the GVF model, and finally obtain the Hessian-based GVF, namely HBGVF. Compared with other methods, we experimentally demonstrate that the HBGVF has many advantages, such as accurately converging to various concavities while preserving weak edges. More information related to this work can be found in the literature [55,56].
The rest of this paper is arranged as follows: in the next section, we briefly review the snake model and five well-known GVF-based external forces, namely GVF [25], GGVF [57], VEF [58], NGVF [38], and CN-GGVF [41], against which the proposed method is compared in the experiments. Section 3 details the HBGVF model proposed in this paper. In Section 4, we demonstrate the advantages of the proposed model through a large number of experiments, and we finally draw conclusions in Section 5.

2. Background

2.1. Traditional Model: Active Contours

When the early active contour model was proposed, it was defined as an elastic curve $c(s) = [x(s), y(s)]$, $s \in [0, 1]$, with the following energy functional:
$$E_{snake} = \int_0^1 \frac{1}{2}\left( \alpha\, |c'(s)|^2 + \beta\, |c''(s)|^2 \right) + E_{ext}(c(s)) \, ds \qquad (1)$$
in Formula (1), $c'(s)$ and $c''(s)$ are the first and second derivatives of $c(s)$, which are positively weighted by $\alpha$ and $\beta$, respectively. $E_{ext}(c(s))$ is the image potential, which may be derived from various image features, such as edges. The Euler equation for minimizing $E_{snake}$ can be obtained by the calculus of variations as follows:
$$\alpha\, c''(s) - \beta\, c''''(s) - \nabla E_{ext} = 0 \qquad (2)$$
Formula (2) can be regarded as a force balance equation, as in reference [8]:
$$F_{int} + F_{ext} = 0 \qquad (3)$$
in Formula (3), $F_{int} = \alpha c''(s) - \beta c''''(s)$ and $F_{ext} = -\nabla E_{ext}$. The internal force $F_{int}$ keeps the snake contour smooth, while the external force $F_{ext}$ attracts the snake contour toward the desired image object.
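As a concrete illustration of Formula (3), the following is a minimal NumPy sketch (not the authors' implementation) of one explicit evolution step for a closed snake contour; the sampler `fext_interp`, the default parameter values, and the unit grid spacing are illustrative assumptions.

```python
import numpy as np

def snake_step(c, fext_interp, alpha=0.1, beta=0.1, tau=0.5):
    """One explicit step of the force balance in Formula (3) for a closed contour
    c of shape (n, 2): c <- c + tau * (alpha * c'' - beta * c'''' + F_ext(c))."""
    # Periodic finite differences approximate c'' and c'''' on the closed curve.
    d2 = np.roll(c, -1, axis=0) - 2 * c + np.roll(c, 1, axis=0)
    d4 = np.roll(d2, -1, axis=0) - 2 * d2 + np.roll(d2, 1, axis=0)
    fext = fext_interp(c)  # hypothetical sampler of the external force at the contour points
    return c + tau * (alpha * d2 - beta * d4 + fext)
```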
In an image $I$, $F_{ext}$ is often taken as the gradient of the image edge map, as follows:
$$F_{ext} = -\nabla E_{ext} = \nabla \left| \nabla (G_\sigma * I) \right|^2 \qquad (4)$$
In fact, this force is purely local, cannot take the global situation into account, and is not regular enough, so the snake cannot evolve effectively under its guidance.
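A minimal sketch of the traditional external force in Formula (4), assuming SciPy's Gaussian filter for the smoothing $G_\sigma * I$; the function name and the choice of sigma are illustrative only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def traditional_external_force(image, sigma=2.0):
    """Classical edge-based force (Formula (4)): gradient of the squared gradient
    magnitude of the Gaussian-smoothed image."""
    smoothed = gaussian_filter(image.astype(float), sigma)
    gy, gx = np.gradient(smoothed)        # gradients of G_sigma * I
    edge_map = gx ** 2 + gy ** 2          # f = |grad(G_sigma * I)|^2
    fy, fx = np.gradient(edge_map)        # force components point toward edges
    return fx, fy
```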

2.2. Gradient Vector Flow (GVF)

Due to the obvious disadvantage of the external force in Formula (4), $F_{ext}$ is replaced in the GVF model by a new vector field $v = [u(x, y), v(x, y)]$, which can be derived by minimizing the following functional:
$$E_{GVF} = \iint \mu \left( u_x^2 + u_y^2 + v_x^2 + v_y^2 \right) + |\nabla f|^2 \, |v - \nabla f|^2 \, dx\, dy \qquad (5)$$
In (5), $\mu$ is a positive weight, $f$ is the edge map of the image $I$, and $\nabla$ is the gradient operator. The newly obtained vector field is the gradient vector flow (GVF) field, which can be obtained by solving the following equations iteratively:
$$\begin{aligned} u_t &= \mu\, \Delta u - |\nabla f|^2 (u - f_x) \\ v_t &= \mu\, \Delta v - |\nabla f|^2 (v - f_y) \end{aligned} \qquad (6)$$
where $\Delta$ is the Laplacian operator. This diffusion equation is isotropic.
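The following is a minimal NumPy sketch of iterating Equation (6) explicitly; it is not the authors' Matlab code, and the default iteration count and time step are illustrative (the step must stay below the stability bound $1/(4\mu)$ mentioned later in the experiments).

```python
import numpy as np
from scipy.ndimage import laplace

def gvf(f, mu=0.2, iterations=200, dt=1.0):
    """Explicit iteration of Equation (6):
    u_t = mu * Lap(u) - |grad f|^2 (u - f_x), and likewise for v."""
    fy, fx = np.gradient(f)            # gradient of the edge map f
    mag2 = fx ** 2 + fy ** 2           # |grad f|^2
    u, v = fx.copy(), fy.copy()        # initialize the field with grad f
    for _ in range(iterations):
        u = u + dt * (mu * laplace(u) - mag2 * (u - fx))
        v = v + dt * (mu * laplace(v) - mag2 * (v - fy))
    return u, v
```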
The generalized GVF (GGVF) [57] extends the GVF by replacing $\mu$ and $|\nabla f|^2 = f_x^2 + f_y^2$ in (6) with two spatially varying functions $g(|\nabla f|) = \exp\left( -|\nabla f|^2 / k^2 \right)$ and $h(|\nabla f|) = 1 - g(|\nabla f|)$, respectively, where $k$ acts as a threshold and controls the smoothing effect. The introduction of these terms makes the GGVF snake behave better than the GVF snake on thin concavity convergence.
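Under the same assumptions as the GVF sketch above, the GGVF variant only swaps the constant weights for the spatially varying g and h defined in the text; a sketch (the smaller default time step is a conservative stability choice, since g can reach 1 in flat regions):

```python
import numpy as np
from scipy.ndimage import laplace

def ggvf(f, k=0.5, iterations=200, dt=0.25):
    """GGVF: replace mu and |grad f|^2 in Equation (6) with g(|grad f|) and h(|grad f|)."""
    fy, fx = np.gradient(f)
    g = np.exp(-(fx ** 2 + fy ** 2) / k ** 2)   # g(|grad f|) = exp(-|grad f|^2 / k^2)
    h = 1.0 - g                                  # h(|grad f|) = 1 - g(|grad f|)
    u, v = fx.copy(), fy.copy()
    for _ in range(iterations):
        u = u + dt * (g * laplace(u) - h * (u - fx))
        v = v + dt * (g * laplace(v) - h * (v - fy))
    return u, v
```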

2.3. Virtual Electric Field (VEF)

Reference [58] proposed the virtual electric field (VEF) model. In this method, each pixel in the image is regarded as a charged particle whose charge is the magnitude of the image edge map, and the virtual electric field at $(x_0, y_0)$ is obtained by summing the contributions of all other pixels in a surrounding region $D$, which is expressed by the following formula:
$$E_{VEF}(x_0, y_0) = \sum_{(x, y) \in D} \left( \frac{x_0 - x}{\left[ (x_0 - x)^2 + (y_0 - y)^2 \right]^{3/2}}, \; \frac{y_0 - y}{\left[ (x_0 - x)^2 + (y_0 - y)^2 \right]^{3/2}} \right) \cdot f(x, y) \qquad (7)$$
in Formula (7), $D = \{ (x, y) \mid -t \le x_0 - x \le t, \; -t \le y_0 - y \le t \}$ and $f$ is the magnitude of the image edge map. The fast Fourier transform (FFT) can be applied to the VEF model, so Formula (7) is usually written in convolution form, as follows:
$$E_{VEF}(x, y) = \left( \frac{x}{(x^2 + y^2)^{3/2}}, \; \frac{y}{(x^2 + y^2)^{3/2}} \right) \otimes f(x, y) \qquad (8)$$
in Formula (8), $\otimes$ represents the convolution operation.
Thanks to the use of the FFT, the VEF model can be computed in real time. In addition, the VEF model has some characteristics that are better than those of the GVF model, such as a large capture range and more sensitive convergence to concavities.
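A minimal sketch of Formula (8), assuming scipy.signal.fftconvolve for the FFT-based convolution; the window half-width t and the handling of the kernel singularity at the origin are illustrative implementation choices.

```python
import numpy as np
from scipy.signal import fftconvolve

def vef(f, t=32):
    """VEF field (Formula (8)): convolve the edge-map magnitude f with the kernel
    (x, y) / (x^2 + y^2)^(3/2) over a (2t+1) x (2t+1) window D."""
    y, x = np.mgrid[-t:t + 1, -t:t + 1].astype(float)
    r3 = (x ** 2 + y ** 2) ** 1.5
    r3[t, t] = np.inf                          # avoid division by zero at the center
    ex = fftconvolve(f, x / r3, mode="same")   # x component of E_VEF
    ey = fftconvolve(f, y / r3, mode="same")   # y component of E_VEF
    return ex, ey
```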

2.4. Gradient Vector Flow in Normal Direction (NGVF)

It was pointed out in [59] that the Laplacian operator can be decomposed into two terms, as shown below:
$$\Delta u = u_{TT} + u_{NN} \qquad (9)$$
Taking $u(x, y)$ as an example, in Formula (9), $u_{TT}$ and $u_{NN}$ are the second derivatives of $u(x, y)$ in the tangential and normal directions of the isophotes, respectively. It was pointed out in [60] that, as an interpolation operator, $u_{NN}$ performs best, $\Delta u$ second best, and $u_{TT}$ third. Regarding the diffusion process in (6) as an interpolation process, the NGVF was proposed using the optimal interpolator, as shown in the following formula:
$$\begin{aligned} u_t &= \mu\, u_{NN} - (u - f_x)\, |\nabla f|^2 \\ v_t &= \mu\, v_{NN} - (v - f_y)\, |\nabla f|^2 \end{aligned} \qquad (10)$$
where $\mu$ is a positive weight, as in (6).
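A sketch of the decomposition in Formula (9) and of one NGVF step of Formula (10), using central differences via numpy.gradient; the small eps guarding the division in flat regions and the argument names are implementation assumptions.

```python
import numpy as np

def second_directional_derivatives(u, eps=1e-8):
    """u_NN (along the gradient, i.e., normal to the isophotes) and u_TT (along
    the isophotes), with Lap(u) = u_TT + u_NN as in Formula (9)."""
    uy, ux = np.gradient(u)
    uxx = np.gradient(ux, axis=1)
    uyy = np.gradient(uy, axis=0)
    uxy = np.gradient(ux, axis=0)
    mag2 = ux ** 2 + uy ** 2 + eps
    u_nn = (ux ** 2 * uxx + 2 * ux * uy * uxy + uy ** 2 * uyy) / mag2
    u_tt = (uy ** 2 * uxx - 2 * ux * uy * uxy + ux ** 2 * uyy) / mag2
    return u_nn, u_tt

def ngvf_step(u, fx, mag2, mu=0.2, dt=1.0):
    """One explicit iteration of Formula (10) for the u component;
    fx and mag2 are the edge-map gradient and |grad f|^2 arrays."""
    u_nn, _ = second_directional_derivatives(u)
    return u + dt * (mu * u_nn - (u - fx) * mag2)
```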

2.5. Component-Normalized Generalized Gradient Vector Flow (CN-GGVF)

In the CN-GGVF model, the diffusion equations are modified into the following form:
$$\begin{aligned} u_t &= g(|\nabla f|)\left[\, g(|\nabla f|)\, u_{NN} + h(|\nabla f|)\, u_{TT} \right] - h(|\nabla f|)\,(u - f_x) \\ v_t &= g(|\nabla f|)\left[\, g(|\nabla f|)\, v_{NN} + h(|\nabla f|)\, v_{TT} \right] - h(|\nabla f|)\,(v - f_y) \end{aligned} \qquad (11)$$
where $g(|\nabla f|)$ and $h(|\nabla f|)$ are identical to those in the GGVF model, and $u_{TT}$ and $u_{NN}$ are identical to those in the NGVF model. Based on a deep analysis of the behavior of the GGVF model, Qin et al. proposed to normalize the GVF vectors in a component-wise manner, so that the CN-GGVF model can converge to deep and thin notches. The component-normalized (CN) GGVF field reads:
$$u_{CN\text{-}GGVF} = \mathrm{sign}(u) = \begin{cases} 1, & u > 0 \\ 0, & u = 0 \\ -1, & u < 0 \end{cases} \qquad (12)$$
$$v_{CN\text{-}GGVF} = \mathrm{sign}(v) = \begin{cases} 1, & v > 0 \\ 0, & v = 0 \\ -1, & v < 0 \end{cases} \qquad (13)$$
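The component normalization of Formulas (12) and (13) reduces to a sign operation on each component; a minimal sketch:

```python
import numpy as np

def component_normalize(u, v):
    """Formulas (12)-(13): replace each component by its sign, so every nonzero
    component of the CN-GGVF field has unit magnitude."""
    return np.sign(u), np.sign(v)
```

Applied, for example, to the GGVF field sketched earlier, `component_normalize(*ggvf(edge_map, k=0.5))` would yield the CN-GGVF field, with `edge_map` a hypothetical edge-map array.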

3. The HBGVF Model

3.1. Gradient Vector Flow Expressed in Matrix Form

By observing Equation (5), we first reformulate the smoothness constraint of the GVF model in matrix form. Noting that
$$u_x^2 + u_y^2 = \begin{pmatrix} u_x & u_y \end{pmatrix} \begin{pmatrix} u_x \\ u_y \end{pmatrix} = \begin{pmatrix} u_x & u_y \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} u_x \\ u_y \end{pmatrix},$$
the GVF energy can be rewritten as follows:
$$E_{GVF} = \iint \mu \left[ (\nabla u)^T W \, \nabla u + (\nabla v)^T W \, \nabla v \right] + |\nabla f|^2 \, |v - \nabla f|^2 \, dx\, dy \qquad (14)$$
in Equation (14), $W$ is the identity matrix. It can be seen from the above formula that, because of this identity matrix, the smoothness term reduces to the scalar $L_2$ norm, so the GVF model fails to take the image structure into account. We therefore replace $W$ with a matrix $D$ related to the image structure, which we will construct from the Hessian matrix, as shown below:
$$E = \iint \mu \left[ (\nabla u)^T D \, \nabla u + (\nabla v)^T D \, \nabla v \right] + |\nabla f|^2 \, |v - \nabla f|^2 \, dx\, dy \qquad (15)$$
where $D = \begin{pmatrix} a & b \\ b & c \end{pmatrix}$ is a symmetric, positive semi-definite matrix. The reconstructed model is called the Hessian-based GVF (HBGVF for short). Using the variational method, the HBGVF field can be obtained by solving the following equations:
$$\begin{aligned} u_t &= \mu \, \mathrm{div}(D \nabla u) - |\nabla f|^2 (u - f_x) \\ v_t &= \mu \, \mathrm{div}(D \nabla v) - |\nabla f|^2 (v - f_y) \end{aligned} \qquad (16)$$
in Equation (16), $\mathrm{div}$ is the divergence operator.
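A minimal sketch of one explicit update of Equation (16) for the u component, assuming the per-pixel entries a, b, c of D are already available (a sketch of their construction from the Hessian appears at the end of Section 3.2); div(D grad u) is discretized with central differences, and the argument names are illustrative.

```python
import numpy as np

def hbgvf_step(u, fx, mag2, a, b, c, mu=0.2, dt=1.0):
    """One explicit step of Equation (16):
    u <- u + dt * ( mu * div(D grad u) - |grad f|^2 (u - f_x) ),
    with the spatially varying tensor D = [[a, b], [b, c]]."""
    uy, ux = np.gradient(u)
    jx = a * ux + b * uy                     # x component of D * grad(u)
    jy = b * ux + c * uy                     # y component of D * grad(u)
    div = np.gradient(jx, axis=1) + np.gradient(jy, axis=0)
    return u + dt * (mu * div - mag2 * (u - fx))
```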

3.2. Using the Hessian Matrix to Construct Diffusion Matrix

Observing Equation (16), we can see that it is exactly the tensor-based diffusion in [61]. The Hessian method proposed in reference [54] "regards the direction of the maximum second-order directional derivative as the direction passing through the image feature, and its perpendicular direction as the direction along the image feature". Inspired by this principle, we use the Hessian matrix to construct the diffusion matrix $D$ in Equation (16). Taking image $I$ as an example, its Hessian matrix is:
$$H = \begin{pmatrix} I_{xx} & I_{xy} \\ I_{xy} & I_{yy} \end{pmatrix} \qquad (17)$$
Using the derivation in [61], the two eigenvalues of $H$, denoted $\lambda_1$ and $\lambda_2$, can be computed as follows:
$$\lambda_1 = \frac{1}{2}\left( I_{xx} + I_{yy} + \sqrt{(I_{xx} - I_{yy})^2 + 4 I_{xy}^2} \right) \qquad (18)$$
$$\lambda_2 = \frac{1}{2}\left( I_{xx} + I_{yy} - \sqrt{(I_{xx} - I_{yy})^2 + 4 I_{xy}^2} \right) \qquad (19)$$
The eigenvectors corresponding to $\lambda_1$ and $\lambda_2$ are $e_1$ and $e_2$, which are given by the following formulas:
$$e_1 = \left( 2 I_{xy}, \; I_{yy} - I_{xx} + \sqrt{(I_{xx} - I_{yy})^2 + 4 I_{xy}^2} \right) \qquad (20)$$
$$e_2 = \left( 2 I_{xy}, \; I_{yy} - I_{xx} - \sqrt{(I_{xx} - I_{yy})^2 + 4 I_{xy}^2} \right) \qquad (21)$$
Obviously, from Formulas (18) and (19), we can see that $\lambda_1 \ge \lambda_2$. It is pointed out in reference [54] that, because $\lambda_1 \ge \lambda_2$, the eigenvector $e_1$ corresponds to the largest second-order directional derivative among all directions and is therefore regarded as the direction passing through the image feature, while $e_2$ is regarded as the direction along the image feature. Using the eigenvalues and eigenvectors of the Hessian matrix derived above, we construct the diffusion matrix $D$ in Equation (16): the eigenvectors of $D$ are taken to be the eigenvectors of $H$, and its two eigenvalues, denoted $\eta_1$ and $\eta_2$, are chosen as follows:
$$\eta_1 = \frac{1}{1 + \left( |\nabla I| / K \right)^2}, \qquad \eta_2 = 1 \qquad (22)$$
where $K$ serves as a threshold. Finally, $D$ takes the following form:
$$D = \begin{pmatrix} e_1 & e_2 \end{pmatrix} \begin{pmatrix} \eta_1 & 0 \\ 0 & \eta_2 \end{pmatrix} \begin{pmatrix} e_1 & e_2 \end{pmatrix}^T \qquad (23)$$
From Formula (22) we can draw the following conclusions: (I) when $|\nabla I| \to \infty$, $\eta_1 \to 0$; that is, on image boundaries, the HBGVF field stops diffusing along the image gradient direction and diffuses only along the boundary, so noise on the image edge can be eliminated while the edge itself is preserved; (II) when $|\nabla I| \to 0$, $\eta_1 \to 1 = \eta_2$; that is, in homogeneous regions, the diffusion is isotropic, which is beneficial for noise removal.
With the above construction, the HBGVF model becomes anisotropic, so it can accurately converge to various concavities while retaining the weak edges of the image. The numerical scheme in reference [61] is used to solve the model proposed in this paper, and the Matlab source code is available to the public upon request. We note that, since the Hessian matrix and the diffusion matrix must be computed, the computation time of the proposed HBGVF model is longer than that of the original GVF model.
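A sketch of assembling the per-pixel entries of D from Formulas (17)-(23); the Gaussian pre-smoothing of the image and the fallback in perfectly flat regions are implementation assumptions not specified in the text.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_diffusion_tensor(image, K=0.1, sigma=1.0):
    """Per-pixel entries (a, b, c) of D = [e1 e2] diag(eta1, eta2) [e1 e2]^T."""
    I = gaussian_filter(image.astype(float), sigma)   # smoothing is an assumption
    Iy, Ix = np.gradient(I)
    Ixx = np.gradient(Ix, axis=1)
    Iyy = np.gradient(Iy, axis=0)
    Ixy = np.gradient(Ix, axis=0)
    root = np.sqrt((Ixx - Iyy) ** 2 + 4 * Ixy ** 2)   # discriminant in (18)-(19)
    # e1 (across the feature), Formula (20), normalized to unit length.
    e1x, e1y = 2 * Ixy, Iyy - Ixx + root
    norm = np.hypot(e1x, e1y)
    e1x = np.where(norm > 1e-8, e1x / np.maximum(norm, 1e-12), 1.0)
    e1y = np.where(norm > 1e-8, e1y / np.maximum(norm, 1e-12), 0.0)
    # e2 (along the feature) is orthogonal to e1.
    e2x, e2y = -e1y, e1x
    eta1 = 1.0 / (1.0 + (Ix ** 2 + Iy ** 2) / K ** 2)  # Formula (22), eta2 = 1
    eta2 = 1.0
    a = eta1 * e1x ** 2 + eta2 * e2x ** 2              # D[0, 0]
    b = eta1 * e1x * e1y + eta2 * e2x * e2y            # D[0, 1] = D[1, 0]
    c = eta1 * e1y ** 2 + eta2 * e2y ** 2              # D[1, 1]
    return a, b, c
```

The returned arrays can be passed directly to the hbgvf_step sketch in Section 3.1; in flat regions eta1 approaches 1, so the fallback eigenvectors simply reproduce isotropic diffusion there.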

4. Comparative Experiments

In the experimental part, we demonstrate the important characteristics of the HBGVF model by comparing it with GVF [25], GGVF [57], VEF [58], NGVF [38], and CN-GGVF [41]. We normalized the image intensities to the range [0, 1], set α and β to 0.1, and set the time step of all snakes to τ = 0.5. For an image of size M × N, the number of iterations for computing all GVF-like fields is M · N, and the time step is 1 (less than 1/(4μ)). In order to obtain a large capture range, μ is 0.2 for GVF, NGVF, and HBGVF; k is 0.5 for the GGVF and CN-GGVF; the region D for the VEF model is of size M × N; and K for the HBGVF (the threshold in Formula (22)) is 0.1, unless otherwise stated.
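For reference, the default settings stated above can be collected in a single configuration; the dictionary below is only an illustrative summary, and the key names are ours.

```python
# Default experimental settings stated above (key names are illustrative).
DEFAULT_PARAMS = {
    "alpha": 0.1,             # snake elasticity weight
    "beta": 0.1,              # snake rigidity weight
    "tau": 0.5,               # snake evolution time step
    "field_time_step": 1.0,   # GVF-like field iteration step (< 1 / (4 * mu))
    "mu": 0.2,                # GVF / NGVF / HBGVF regularization weight
    "k_ggvf": 0.5,            # threshold for GGVF and CN-GGVF
    "K_hbgvf": 0.1,           # HBGVF threshold in Formula (22)
}
```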

4.1. Common Concerns for the GVF-Like Snakes

The GVF model was originally proposed to overcome the shortcomings of the traditional gradient-based external force, such as its narrow capture range and poor convergence to concavities. Through the following experiments, we demonstrate some excellent characteristics of the HBGVF snake compared with the GVF snake, such as a large capture range, accurate convergence to concavities, and insensitivity to initialization. Figure 1 shows the convergence results of the HBGVF snake on a room image, a U-shaped image, and a subject contour, respectively. The gray dotted lines are the initial contours, and the red solid lines are the convergence results. It can be observed from the figure that the HBGVF snake converges to the U-shaped concavity and is automatically connected to the subject contour. It can also be seen from the different initializations in the figure that the HBGVF snake is insensitive to the initial contour and has a large capture range.

4.2. Convergence to Concavities

It can be seen from Figure 1 that the HBGVF snake converges well to the U-shaped concavity. Next, in order to further test the advantages of the HBGVF snake, we use three other images with different concavities and compare the HBGVF with the other GVF-like methods. Figure 2 presents the convergence results of the corresponding approaches. One can see that only the HBGVF and GGVF snakes converge correctly on all three images; the reason behind this observation is that the HBGVF model takes into account the image structure characterized by the Hessian matrix, while the GGVF model emphasizes the image structure by paying more attention to the edges through its two spatially varying weighting functions. Although the CN-GGVF model adopts the same two weighting functions as the GGVF model, the CN-GGVF snake cannot converge to the various concavities at all; the reason is that the component normalization operation changes the direction of the vector field. Taking the heart image as an example, Figure 2i presents the GGVF vector field around the entrance of the concavity. One can see that the vector field in the blue circle is approximately horizontal, and since the vectors to the left of the blue circle point downward, they drive the snake contour into the concavity. Figure 2h presents the corresponding CN-GGVF vector field, where the vectors in black and red are those before and after component normalization, respectively. It is clear that the CN-GGVF field in the yellow circle before component normalization (in black) is similar to the GGVF field; however, after component normalization, the CN-GGVF vectors (in red) point upward and push the snake contour out of the concavity. As a result, the CN-GGVF snake stops at the upper half of the concavity. This example shows that component normalization is not always beneficial to the evolution of the snake contour. The concavities in the man and cat images are semi-closed, and the CN-GGVF snake is also not good at converging to these concavities. Therefore, improper use of the weighting functions may produce the opposite effect; of course, appropriate use can greatly improve the accuracy of the model, such as the application in [62]. The GVF and VEF snakes each fail in only one case, and we will see later that the shortcoming of the VEF snake is that it does not preserve weak edges well. The NGVF snake only works well on the concavities of the man image, and because of its limited capture range, the initial contour for the cat image has to be placed very close to the cat in the bottom-left corner.

4.3. Weak Edge Preserving

Figure 3a is an example for testing the ability of the HBGVF model to retain weak edges in an image. The outer ring of the object is severely blurred in the upper right corner; refer to the edge map in Figure 3b. It can be seen that the snake contour is easily attracted to the strong edge of the inner ring. Since enlarging the capture range and preserving weak edges simultaneously are contradictory requirements, the regularization parameters are tuned as follows: μ is 0.1 for GVF, NGVF, and HBGVF; k is 0.01 for the GGVF and CN-GGVF; the size of region D for the VEF model is one twenty-fifth of that of the image; and K for the HBGVF is 0.01. One can see that the HBGVF snake preserves the weak edge well although its diffusion parameter μ is identical to those of the GVF and NGVF; the reason behind this observation is that the HBGVF model takes the image structure into account. Although the kernel size for the VEF is very small and the initial contour for the VEF snake is close to the object, the snake contour still collapses at the weak edge. The CN-GGVF and GGVF snakes also stop at the weak edge, and their convergence results are almost identical due to their similar diffusion mechanisms; compared with them, the result of the HBGVF snake is smoother, which implies that the HBGVF field is more regular.

4.4. Test Results of HBGVF Model on Real Images

In order to further highlight the comprehensive performance of the HBGVF snake, we used several real images for comparison. Figure 4 presents a gear image, in which there are more than ten semi-closed concavities labeled with order numbers.
The parameter k is 0.2 for the GGVF and 0.3 for the CN-GGVF in order to balance entering the concavities and preserving weak edges; the parameters of the other models are identical to those in Figure 1 and Figure 2. One can see that the GVF snake converges to the concavities from #0 to #9, although it collapses at the two teeth around concavity #5. The GGVF snake converges to the concavities from #0 to #8, and it seems to be good at preserving weak edges; in fact, however, there is contour entanglement, as seen in the right part of Figure 4d, which is a zoomed-in version of the blue rectangle in the left part. The VEF snake suffers from weak-edge leakage and collapses at most of the teeth; see Figure 4e. Figure 4f shows the result of the NGVF snake, which demonstrates that the NGVF snake is not good at concavity convergence, in agreement with the observation in Figure 2. The CN-GGVF snake performs similarly to the NGVF snake; see Figure 4g. Since the HBGVF takes the image structure into account, the HBGVF snake converges to the concavities from #0 to #12 except for #11. However, as shown in the right part of Figure 4h, which is a zoomed-in view of the blue rectangle in the left part, the HBGVF snake also performs poorly there; in fact, the performance in this example can be improved by decreasing the parameter K of the HBGVF.
Figure 5 presents a second real image, a flying eagle; the feathers on the wings are difficult for an active contour to extract. In order to balance extracting the feathers on the wings and enlarging the capture range, the regularization parameter μ is 0.05 for GVF, NGVF, and HBGVF; k is 0.05 for the GGVF and CN-GGVF; the size of region D for the VEF model is one sixty-fourth of that of the image; and K for the HBGVF is 0.01. The white dash-dotted lines are the initial contours and the red solid lines are the convergence results. As can be seen from Figure 5a, the GVF snake works well except for the feathers on the right wing. Figure 5b shows that the GGVF snake yields good results in extracting the feathers on both wings; however, it is trapped in a local minimum behind the tail. The result of the VEF snake is shown in Figure 5c; it is obvious that the snake contour is trapped in a local minimum and also fails to extract the feathers. The NGVF and CN-GGVF snakes are also trapped in local minima, see Figure 5d,e, respectively, and the CN-GGVF snake cannot enter the concavities formed by the feathers. On the contrary, Figure 5f shows that the HBGVF snake extracts the feathers well and is not trapped in a local minimum, which demonstrates that the HBGVF field is regular.
Figure 6 presents a medical image; for the weak edge shown in the white box in Figure 6a, the snake contour is prone to leakage, and the intensity inhomogeneity is also a difficulty. In order to balance maintaining the weak edge and overcoming the inhomogeneity, the regularization parameter μ is 0.02 for GVF and 0.03 for NGVF and HBGVF; k is 0.03 for the GGVF and 0.07 for the CN-GGVF; the size of region D for the VEF model is 1/144 of that of the image; and K for the HBGVF is 0.01. One can see that there are both weak-edge leakage and local minimum traps for the GVF, VEF, and NGVF snakes. The GGVF, CN-GGVF, and HBGVF snakes yield similar results, with neither weak-edge leakage nor local minimum traps. It is clear that the μ values for the HBGVF and NGVF are identical, and even larger than that for the GVF; nevertheless, the HBGVF snake preserves the weak edge well, and the reason behind this observation is again that the HBGVF model takes the image structure into account. Figure 7 shows more results of the HBGVF snake on real images; the initial contours are dash-dotted lines and the convergence results are solid red lines. The first row presents flowers and leaves, and the HBGVF snake extracts the objects accurately. The second row shows three eagles, where the difficulty for the HBGVF snake is similar to that in Figure 5; the HBGVF snake also yields satisfactory results. There are three medical images in the third row; in each panel, the image on the left is the original image with the initial contour, from which one can see the blurred and weak boundaries of the objects, and the result on the right shows that the HBGVF snake can satisfactorily delineate the object boundaries.

5. Conclusions

To sum up, the smoothness constraint of the GVF model is expressed in matrix form, and the image structure represented by the Hessian matrix is introduced into the GVF model; the resulting Hessian-based GVF model is abbreviated as HBGVF. The above theoretical analysis and experimental comparisons show that, compared with other GVF-based models, the HBGVF snake has many advantages, such as excellent convergence to various concavities and the preservation of weak edges. The experiments include both synthetic images and real images, and they demonstrate the excellent characteristics of the HBGVF model. The proposed HBGVF model can also be employed for other applications such as those in [63,64,65,66,67,68,69,70,71,72], and this is our next goal.

Author Contributions

Conceptualization, K.C. and Y.W.; methodology, Y.W., K.C. and Q.Q.; software, Y.W., K.C., Q.Q., W.Q. and Q.D.; validation, Q.Q., W.Q. and Q.D.; formal analysis, K.C., Q.Q., W.Q. and Q.D.; investigation, Q.Q., W.Q. and Q.D.; resources, Q.Q. and W.Q.; data curation, Q.Q. and Q.D.; writing—original draft preparation, Q.Q. and Y.W.; writing—review and editing, Q.Q. and Y.W.; visualization, Q.Q. and Q.D.; supervision, K.C. and Y.W.; project administration, K.C., Y.W. and Q.Q. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China (NSFC) (grant number 61976241) and the International Science and Technology Cooperation Plan Project of Zhenjiang (grant number GJ2021008).

Data Availability Statement

The dataset is available upon request.

Conflicts of Interest

We declare that we have no financial or personal relationships with other people or organizations that could inappropriately influence our work, and that we have no professional or other personal interest of any nature in any product, service, and/or company that could be construed as influencing the position presented in, or the review of, this manuscript. We hereby declare that the collection, analysis, and interpretation of the data in this article, as well as the writing of the report, were carried out by the authors of this article.

References

  1. Sahoo, P.K.; Soltani, S.; Wong, A.K.C.; Chen, Y.C. A Survey of Thresholding Techniques. Comput. Vis. Graph. Image Process. 1998, 41, 142–149. [Google Scholar] [CrossRef]
  2. Cai, W.L.; Chen, S.C.; Zhang, D.Q. Fast and robust fuzzy c-means clustering algorithms incorporating local information for image segmentation. Pattern Recognit. 2007, 40, 825–838. [Google Scholar] [CrossRef] [Green Version]
  3. Kass, M.; Witkin, A.; Terzopoulos, D. Snakes: Active contour models. Int. J. Comput. Vis. 1988, 1, 321–331. [Google Scholar] [CrossRef]
  4. Shih, F.Y.; Cheng, S.X. Automatic seeded region growing for color image segmentation. Image Vis. Comput. 2005, 23, 877–886. [Google Scholar] [CrossRef]
  5. Yu, S.; Lu, Y.; Molloy, D. A Dynamic-Shape-Prior Guided Snake Model With Application in Visually Tracking Dense Cell Populations. IEEE Trans. Image Process. 2019, 8, 1513–1527. [Google Scholar] [CrossRef]
  6. Zhou, S.; Li, B.; Wang, Y.; Wen, T.; Li, N. The Line- and Block-like Structures Extraction via Ingenious Snake. Pattern Recognit. Lett. 2018, 112, 324–331. [Google Scholar] [CrossRef]
  7. Nakhmani, A.; Tannenbaum, A. Self-Crossing Detection and Location for Parametric Active Contours. IEEE Trans. Image Process. 2012, 21, 3150–3156. [Google Scholar] [CrossRef] [Green Version]
  8. Zhao, S.; Li, G.; Zhang, W.; Gu, J. Automatical Intima-media Border Segmentation on Ultrasound Image Sequences using a Kalman filter snake. IEEE Access 2018, 6, 40804–40810. [Google Scholar] [CrossRef]
  9. Manno-Kovacs, A. Direction Selective Contour Detection for Salient Objects. IEEE Trans. Circuits Syst. Video Technol. 2018, 29, 375–389. [Google Scholar] [CrossRef] [Green Version]
  10. Paragios, N.; Deriche, R. Geodesic Active Contours and Level Sets for the Detection and Tracking of Moving Objects. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 266–280. [Google Scholar] [CrossRef] [Green Version]
  11. Zhu, S.C.; Yuille, A. Region Competition: Unifying Snakes, Region Growing, and Bayes/MDL for Multi-band Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 1996, 18, 884–900. [Google Scholar]
  12. Chan, T.F.; Vese, L.A. Active contours without edges. IEEE Trans. Image Process. 2001, 10, 266–277. [Google Scholar] [CrossRef] [Green Version]
  13. Brox, T.; Cremers, D. On Local Region Models and a Statistical Interpretation of the Piecewise Smooth Mumford-Shah Functional. Int. J. Comput. Vis. 2009, 84, 184–193. [Google Scholar] [CrossRef]
  14. Adam, A.; Kimmel, R.; Rivlin, E. On Scene Segmentation and Histograms-Based Curve Evolution. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 1708–1714. [Google Scholar] [CrossRef] [Green Version]
  15. Ni, K.; Bresson, X.; Chan, T.; Esedoglu, S. Local Histogram Based Segmentation Using the Wasserstein Distance. Int. J. Comput. Vis. 2009, 84, 97–111. [Google Scholar] [CrossRef] [Green Version]
  16. Zhao, W.; Xu, X.; Zhu, Y.; Xu, F. Active contour model based on local and global Gaussian fitting energy for medical image segmentation. Optik 2018, 158, 1160–1169. [Google Scholar] [CrossRef]
  17. Ge, Q.; Li, C.; Shao, W.; Li, H. A hybrid active contour model with structured feature for image segmentation. Signal Process. 2015, 108, 147–158. [Google Scholar] [CrossRef]
  18. Wang, H.; Huang, T.; Du, Y. An adaptive weighting parameter selection for improved integrated active contour model. Optik 2015, 126, 5331–5335. [Google Scholar] [CrossRef]
  19. Li, C.; Kao, C.Y.; Gore, J.C.; Ding, Z. Minimization of Region-Scalable Fitting Energy for Image Segmentation. IEEE Trans. Image Process. 2008, 17, 1940–1949. [Google Scholar]
  20. Darolti, C.; Mertins, A.; Bodensteiner, C.; Hofmann, U.G. Local region descriptors for active contours evolution. IEEE Trans. Image Process. 2008, 17, 2275–2288. [Google Scholar] [CrossRef]
  21. Zhang, K.; Song, H.; Zhang, L. Active contours driven by local image fitting energy. Pattern Recognit. 2010, 43, 1199–1206. [Google Scholar] [CrossRef]
  22. Estellers, V.; Zosso, D.; Bresson, X.; Thiran, J.P. Harmonic active contours. IEEE Trans. Image Process. 2014, 23, 69–82. [Google Scholar] [CrossRef] [Green Version]
  23. Gao, Y.; Bouix, S.; Shenton, M.; Tannenbaum, A. Sparse Texture Active Contour. IEEE Trans. Image Process. 2013, 22, 3866–3878. [Google Scholar] [CrossRef] [Green Version]
  24. Caselles, V.; Kimmel, R.; Sapiro, G. Geodesic Active Contours. Int. J. Comput. Vis. 1997, 22, 61–79. [Google Scholar] [CrossRef]
  25. Xu, C.; Prince, J. Snakes, shapes, and gradient vector flow. IEEE Trans. Image Process 1998, 7, 359–369. [Google Scholar]
  26. Li, B.; Acton, S. Active contour external force using vector field convolution for image segmentation. IEEE Trans. Image Process. 2007, 16, 2096–2106. [Google Scholar] [CrossRef] [Green Version]
  27. Sum, K.W.; Cheung, P.Y.S. Boundary vector field for parametric active contours. Pattern Recognit. 2007, 40, 1635–1645. [Google Scholar] [CrossRef]
  28. Ren, D.; Zuo, W.; Zhao, X.; Lin, Z.; Zhang, D. Fast gradient vector flow computation based on augmented Lagrangian method. Pattern Recognit. Lett. 2013, 34, 219–225. [Google Scholar] [CrossRef]
  29. Han, X.; Xu, C.; Prince, J. Fast numerical scheme for gradient vector flow computation using a multigrid method. IET Image Process. 2007, 1, 48–55. [Google Scholar] [CrossRef]
  30. Boukerroui, D. Efficient numerical schemes for gradient vector flow. Pattern Recognit. 2012, 45, 626–636. [Google Scholar] [CrossRef] [Green Version]
  31. Zhao, F.; Zhao, J.; Zhao, W.; Qu, F. Guide filter-based gradient vector flow module for infrared image segmentation. Appl. Opt. 2015, 54, 9809–9817. [Google Scholar] [CrossRef] [PubMed]
  32. Zhu, S.; Bu, X.; Zhou, Q. A Novel Edge Preserving Active Contour Model Using Guided Filter and Harmonic Surface Function for Infrared Image Segmentation. IEEE Access 2018, 6, 5493–5510. [Google Scholar] [CrossRef]
  33. Cheng, J.; Foo, S.W. Dynamic directional gradient vector flow for snakes. IEEE Trans. Image Process. 2006, 15, 1563–1571. [Google Scholar] [CrossRef] [PubMed]
  34. Ray, N.; Acton, S.T.; Ley, K. Tracking leukocytes in vivo with shape and size constrained active contours. IEEE Trans. Med. Imaging 2002, 21, 1222–1235. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  35. Wang, Y.; Jia, Y.; Liu, L. Harmonic gradient vector flow external force for snake model. Electron. Lett. 2008, 44, 105–106. [Google Scholar] [CrossRef]
  36. Wu, Y.; Wang, Y.; Jia, Y. Adaptive diffusion flow active contours for image segmentation. Comput. Vis. Image Underst. 2013, 117, 1421–1435. [Google Scholar] [CrossRef]
  37. Jaouen, V.; González, P.; Stute, S. Variational Segmentation of Vector-Valued Images With Gradient Vector Flow. IEEE Trans. Image Process. 2014, 3, 4773–4785. [Google Scholar] [CrossRef]
  38. Ning, J.; Wu, C.; Liu, S.; Yang, S. NGVF: An improved external force field for active contour model. Pattern Recognit. Lett. 2007, 28, 58–63. [Google Scholar]
  39. Li, C.; Liu, J.; Fox, M.D. Segmentation of external force field for automatic initialization and splitting of snakes. Pattern Recognit. 2005, 38, 1947–1960. [Google Scholar] [CrossRef]
  40. Ray, N.; Acton, S.T. Motion gradient vector flow: An external force for tracking rolling leukocytes with shape and size constrained active contours. IEEE Trans. Med. Imaging 2004, 23, 1466–1478. [Google Scholar] [CrossRef]
  41. Qin, L.; Zhu, C.; Zhao, Y.; Bai, H.; Tian, H. Generalized Gradient Vector Flow for Snakes: New Observations, Analysis, and Improvement. IEEE Trans. Circuits Syst. Video Technol. 2013, 23, 883–897. [Google Scholar] [CrossRef]
  42. Kirimasthong, K.; Rodtook, A.; Lohitvisate, W.; Makhanov, S.S. Automatic initialization of active contours in ultrasound images of breast cancer. Pattern Anal. Appl. 2018, 21, 491–500. [Google Scholar] [CrossRef]
  43. Rodtook, A.; Kirimasthong, K.; Lohitvisate, W. Automatic Initialization of Active Contours and Level Set Method in Ultrasound Images of Breast Abnormalities. Pattern Recognit. 2018, 79, 172–182. [Google Scholar] [CrossRef]
  44. Jaouen, V.; Bert, J.; Boussion, N.; Fayad, H.; Hatt, M.; Visvikis, D. Image enhancement with PDEs and nonconservative advection flow fields. IEEE Trans. Image Process. 2019, 28, 3075–3088. [Google Scholar] [CrossRef]
  45. Minaee, S.; Boykov, Y.; Porikli, F.; Plaza, A.; Kehtarnavaz, N.; Terzopoulos, D. Image Segmentation Using Deep Learning: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 3523–3542. [Google Scholar] [CrossRef]
  46. Wang, W.; Wu, Y.W.Y.; Li, S.; Chen, B. Quantification of Full Left Ventricular Metrics via Deep Regression Learning with Contour-Guidance. IEEE Access 2019, 7, 47918–47928. [Google Scholar] [CrossRef]
  47. Zhang, T.; Zhang, X. A Full-Level Context Squeeze-and-Excitation ROI Extractor for SAR Ship Instance Segmentation. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar] [CrossRef]
  48. Shen, W.; Xu, W.; Sun, Z.; Ma, J.; Ma, X.; Zhou, S.; Guo, S.; Wang, Y. Automatic Segmentation of the Femur and Tibia Bones from X-ray Images Based on Pure Dilated Residual U-Net. Inverse Probl. Imaging 2021, 15, 1333–1346. [Google Scholar] [CrossRef]
  49. Zhang, H.; Zhang, W.; Shen, W.; Li, N.; Chen, Y.; Li, S.; Chen, B.; Guo, S.; Wang, Y. Automatic segmentation of the left ventricle from MR images based on nested U-Net with dense block. Biomed. Signal Process. Control. 2021, 68, 102684. [Google Scholar] [CrossRef]
  50. Zhang, T.; Zhang, X. ShipDeNet-20: An Only 20 Convolution Layers and <1-MB Lightweight SAR Ship Detector. IEEE Geosci. Remote Sens. Lett. 2021, 18, 1234–1238. [Google Scholar]
  51. Zhang, T.; Zhang, X.; Shi, J.; Wei, S.; Wang, J.; Li, J.; Su, H.; Zhou, Y. Balance Scene Learning Mechanism for Offshore and Inshore Ship Detection in SAR Images. IEEE Geosci. Remote Sens. Lett. 2020, 19, 1–5. [Google Scholar] [CrossRef]
  52. Zhang, T.; Zhang, X. Squeeze-and-Excitation Laplacian Pyramid Network With Dual-Polarization Feature Fusion for Ship Classification in SAR Images. IEEE Geosci. Remote Sens. Lett. 2021, 19, 1–5. [Google Scholar] [CrossRef]
  53. Zhang, T.; Zhang, X.; Ke, X.; Liu, C.; Xu, X.; Zhan, X.; Wang, C.; Ahmad, I.; Zhou, Y.; Pan, D.; et al. HOG-ShipCLSNet: A Novel Deep Learning Network With HOG Feature Fusion for SAR Ship Classification. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–22. [Google Scholar] [CrossRef]
  54. Carmona, R.; Zhong, S. Adaptive Smoothing Respecting Feature Directions. IEEE Trans. Image Process. 1998, 7, 353–358. [Google Scholar] [CrossRef] [Green Version]
  55. Wang, Y.; Chen, W.; Yu, T.; Zhang, Y. Hessian based image structure adaptive gradient vector flow for parametric active contours. In Proceedings of the 2010 IEEE International Conference on Image Processing, Hong Kong, China, 26–29 September 2010; pp. 649–652. [Google Scholar]
  56. Cheng, K.; Xiao, T.; Chen, Q.; Wang, Y. Image segmentation using active contours with modified convolutional virtual electric field external force with an edge-stopping function. PLoS ONE 2020, 15, e0230581. [Google Scholar] [CrossRef]
  57. Xu, C.; Prince, J.L. Generalized gradient vector flow external forces for active contours. Signal Process. 1998, 71, 131–139. [Google Scholar] [CrossRef] [Green Version]
  58. Park, H.K.; Chung, M.J. External force of snake: Virtual electric field. Electron. Lett. 2002, 38, 1500–1502. [Google Scholar] [CrossRef]
  59. You, Y.; Xu, W.; Tannenbaum, A.; Kaveh, M. Behavioral analysis of anisotropic diffusion in image processing. IEEE Trans. Image Process. 1996, 5, 1539–1552. [Google Scholar]
  60. Caselles, V.; Morel, J.; Sbert, C. An axiomatic approach to image interpolation. IEEE Trans. Image Process. 1998, 7, 376–386. [Google Scholar] [CrossRef]
  61. Weickert, J. Coherence-enhancing diffusion filtering. Int. J. Comput. Vis. 1999, 31, 111–127. [Google Scholar] [CrossRef]
  62. Yan, M.; Li, S.; Chan, C.A.; Shen, Y.; Yu, Y. Mobility prediction using a weighted Markov model based on mobile user classification. Sensors 2021, 21, 1740. [Google Scholar] [CrossRef]
  63. Yu, H.; Chua, C. GVF-based anisotropic diffusion models. IEEE Trans. Image Process. 2006, 15, 1517–1524. [Google Scholar]
  64. Hassouna, M.; Farag, A. Variational curve skeletons using gradient vector flow. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 2257–2274. [Google Scholar] [CrossRef]
  65. Prasad, V.; Yegnanarayana, B. Finding axes of symmetry from potential fields. IEEE Trans. Image Process. 2004, 13, 1559–1566. [Google Scholar] [CrossRef]
  66. Battiato, S.; Farinella, G.M.; Puglisi, G. Saliency-Based Selection of Gradient Vector Flow Paths for Content Aware Image Resizing. IEEE Trans. Image Process. 2014, 23, 2081–2095. [Google Scholar] [CrossRef]
  67. Shivakumara, P.; Phan, T.; Lu, S.; Tan, C.L. Gradient vector flow and grouping based method for arbitrarily-oriented scene text detection in video images. IEEE Trans. Circuits Syst. Video Technol. 2013, 23, 1729–1739. [Google Scholar] [CrossRef]
  68. Wang, Y.; Jia, Y.; Wu, Y. Segmentation of the left ventricle in cardiac cine MRI using a shape constrained snake model. Comput. Vis. Image Underst. 2013, 117, 990–1003. [Google Scholar]
  69. Zhu, S.; Gao, J.; Li, Z. Video object tracking based on improved gradient vector flow snake and intra-frame centroids tracking method. Comput. Electr. Eng. 2014, 40, 174–185. [Google Scholar] [CrossRef]
  70. Li, Q.; Deng, T.; Xie, W. Active contours driven by divergence of gradient vector flow. Signal Process. 2016, 120, 185–199. [Google Scholar] [CrossRef]
  71. Abdullah, M.; Dlay, S.; Woo, W.; Chambers, J. Robust Iris Segmentation Method Based on a New Active Contour Force with a Noncircular Normalization. IEEE Trans. Syst. Man Cybern. Syst. 2017, 47, 3128–3142. [Google Scholar] [CrossRef] [Green Version]
  72. Miri, M.S.; Robles, V.A.; Abràmoff, M.D.; Kwon, Y.H.; Garvin, M.K. Incorporation of gradient vector flow field in a multimodal graph-theoretic approach for segmenting the internal limiting membrane from glaucomatous optic nerve head-centered SD-OCT volumes. Comput. Med. Imaging Graph. 2017, 55, 87–94. [Google Scholar] [CrossRef] [Green Version]
Figure 1. (a) Test images: room image, U image, and subject contour. (b) Convergence results with different initializations and evolutions of the HBGVF snakes.
Figure 2. Convergence to concavities. (a) Test images: heart image, man image, and cat image. Evolution and convergence results of the (b) GVF snake, (c) GGVF snake, (d) VEF snake, (e) NGVF snake, (f) CN-GGVF snake, and (g) HBGVF snake. (h) The CN-GGVF field, where the vectors in black and red are those before and after component normalization, respectively; (i) the GGVF field.
Figure 3. (a) Test image, (b) edge map. Convergence results of each model: (c) the GVF snake, (d) the GGVF snake, (e) the VEF snake, (f) the NGVF snake, (g) the CN-GGVF snake, and (h) the HBGVF snake.
Figure 4. (a) Original test metal gauge image; (b) edge map; the convergence results of each model: (c) the GVF snake, (d) the GGVF snake, (e) the VEF snake, (f) the NGVF snake, (g) the CN-GGVF snake, and (h) the HBGVF snake.
Figure 5. The convergence results of each model: (a) the GVF snake, (b) the GGVF snake, (c) the VEF snake, (d) the NGVF snake, (e) the CN-GGVF snake, and (f) the HBGVF snake. In order to balance preserving the feathers on the wings and enlarging the capture range, the regularization parameter μ is 0.05 for GVF, NGVF, and HBGVF; k is 0.05 for the GGVF and CN-GGVF; the size of region D for the VEF model is one sixty-fourth of that of the image; and K for the HBGVF is 0.01.
Figure 6. (a) Test medical image; the convergence results of each model: (b) the GVF snake, (c) the GGVF snake, (d) the VEF snake, (e) the NGVF snake, (f) the CN-GGVF snake, and (g) the HBGVF snake.
Figure 7. More examples of the convergence results of the HBGVF snake.