1. Introduction
Image segmentation is a widely studied yet still challenging subject, especially for new and emerging imaging modalities where low contrast and extremely strong noise may be present. Of course, extremely simple images with clear contrast, without noise and without blur, may be segmented by simple methods such as thresholding the image intensity values.
Real-life images inevitably have noise and low contrast, which poses a challenge for simple algorithms. Variational segmentation models generally provide more robust solutions for complex images and can be loosely categorised into two classes: edge-based and region-based models. Well-known edge-based methods include Kass et al. [1] and Caselles et al. [2]. Region-based models generally trace back to the pioneering work of Mumford–Shah (MS) [3], with simplified variants such as Chan–Vese [4,5] being the most widely used.
In the last few years, when mentioning segmentation of challenging images, we would automatically recommend machine-learning-based approaches such as UNet [6] and ResNet [7]. However, such works are data-dependent, and often, networks are tailored to a specific task. Firstly, they require training data which may not be available (or reliably available) at all. Secondly, we cannot yet conduct automatic transfer learning from one subject area to another to overcome the lack of sufficient training data; e.g., an aircraft identification network cannot be adapted to the identification of livers in medical imaging. A reliable way of overcoming the lack of sufficient training data is weakly or semi-supervised learning, which uses a small set of training data (in a supervised way) and a larger set of data without annotations (in an unsupervised way) [8,9]. Here, ‘unsupervised’ means that a suitable segmentation model is required; developing such a model is our aim.
This paper addresses the fundamental problem of how to segment low-contrast images where the image features of interest have piecewise smooth intensities. In fact, the difficulties of the two problems, namely low contrast and piecewise smooth features, are well-known challenges. Low contrast implies that edge information by way of image gradients alone is not sufficient to detect weak jumps. Moreover, many well-known models such as [4] and its variants assume that an input image has approximately piecewise constant intensities; piecewise smooth features imply that these models cannot segment such features (or a feature would be split into sub-regions, or multiple phases, according to the intensity distribution, which means that the segmentation is already incorrect). Many approximation models based on the MS model [3] can deal with segmentation of piecewise smooth features, but not necessarily with images displaying low contrast.
Therefore, this paper considers the Blake–Zisserman model [10], which can improve on the MS model [3]. The model [10] cannot be implemented directly and exactly, just as the MS model [3] was never solved directly.
The rest of the paper is organised as follows. Section 2 briefly reviews related segmentation models. Section 3 introduces our new model and a game theory reformulation to facilitate subsequent solutions; a proof of the existence of a solution of the game formulation is given. Section 4 presents our numerical algorithm for the game formulation, and Section 5 shows numerical experiments. Brief conclusions are drawn in Section 6.
2. Related Works
The above-mentioned Mumford–Shah model [3] minimises the following:

$$E_{MS}(g,\Gamma)=\frac{\lambda}{2}\int_{\Omega}(f-g)^2\,dx+\frac{\mu}{2}\int_{\Omega\setminus\Gamma}|\nabla g|^2\,dx+\mathcal{H}^1(\Gamma),\qquad(1)$$

given the input (possibly noisy) image f, where, most importantly, the segmentation is defined by the unknown boundary Γ, g is a piecewise smooth approximation of f, and H¹ denotes the Hausdorff measure (i.e., the length of the boundary). In the literature, there are many follow-up works of this model, proposed to make revised models implementable numerically, and successful results have been obtained; see [11,12,13], among others.
However, for images that have weak edges possibly buried in noise and blur, the Mumford–Shah type models may fail to capture the ‘discontinuities of second kind’ or gradient discontinuities, which may be called the staircasing effect for gradients. The Blake–Zisserman (BZ) type model [10], though less well-known and published earlier than [3], can be very useful for a class of challenging images where MS is less effective; e.g., see [14,15]. The functional of a BZ model takes the form

$$E_{BZ}(g,K_0,K_1)=\frac{\lambda}{2}\int_{\Omega}(f-g)^2\,dx+\frac{\mu}{2}\int_{\Omega\setminus(K_0\cup K_1)}|\nabla^2 g|^2\,dx+\alpha\,\mathcal{H}^1(K_0)+\beta\,\mathcal{H}^1(K_1\setminus K_0),\qquad(2)$$

where K₀ and K₁ are contained in Ω. Here, K₀ is the discontinuity set of g, and K₁∖K₀ is the discontinuity set of the gradient ∇g. As with the original formulation (1), the BZ model (2) is theoretical and not in a readily solvable form. This paper will propose an approximate and solvable model.
Our work is motivated by Cai–Chan–Zeng [12], who derived a solvable and convex model for (1). We now review this model briefly. As a first step of reformulation of (1), Cai–Chan–Zeng [12] rewrites (1) in the equivalent form

$$E(g_1,g_2,\Gamma)=\frac{\lambda}{2}\int_{\Omega_1}(f-g_1)^2\,dx+\frac{\lambda}{2}\int_{\Omega\setminus\Omega_1}(f-g_2)^2\,dx+\frac{\mu}{2}\left(\int_{\Omega_1}|\nabla g_1|^2\,dx+\int_{\Omega\setminus\Omega_1}|\nabla g_2|^2\,dx\right)+\mathcal{H}^1(\Gamma),\qquad(3)$$

where Γ is assumed to be a Jordan curve forming the boundary ∂Ω₁ of the closed domain Ω₁ ⊂ Ω. Hence, g₁ and g₂ are defined in the inside and outside of Γ, respectively. Of course, both g₁ and g₂ can be smoothly extended to the entire domain Ω. A key observation in [12], motivated by [5], is that the term H¹(Γ), which is the length of Γ, may be approximated by the total variation ∫_Ω |∇g| dx. Then, viewing the smooth functions g₁ and g₂ as a single function g, the model by [12] is the following:

$$\min_{g}\;\frac{\lambda}{2}\int_{\Omega}(f-g)^2\,dx+\frac{\mu}{2}\int_{\Omega}|\nabla g|^2\,dx+\int_{\Omega}|\nabla g|\,dx.\qquad(4)$$
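As a quick sanity check of this length-by-total-variation observation (our own illustration, not taken from [12]), the following Python snippet compares the discrete total variation of the indicator function of a square with the square's true perimeter; the grid size `n`, the side length `s` and the backward-difference discretisation are illustrative choices.

```python
import numpy as np

# Indicator function of a centred square of side s on an n-by-n grid.
n, s = 256, 120
y, x = np.mgrid[0:n, 0:n]
square = ((np.abs(x - n // 2) <= s // 2) &
          (np.abs(y - n // 2) <= s // 2)).astype(float)

# Isotropic total variation computed with backward differences.
gx = np.diff(square, axis=1, prepend=square[:, :1])
gy = np.diff(square, axis=0, prepend=square[:1, :])
tv = np.sum(np.sqrt(gx ** 2 + gy ** 2))

# For an indicator function, the coarea formula says TV = boundary length.
print(f"discrete TV: {tv:.1f}, true perimeter 4s: {4 * s}")
```

For this axis-aligned example, the two numbers agree to within about one per cent; for curved boundaries, the discrete approximation is coarser but of the same order.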
We now propose a solvable model based on the Blake–Zisserman model (2). Assume the given image is f and our approximation is g. Motivated by the work of [12], we shall respectively approximate the key quantities H¹(K₀) and H¹(K₁∖K₀) by the total variation type terms ∫_Ω |∇g| dx and ∫_Ω |∇²g| dx. Therefore, our initial minimisation model takes the form given in (5). While (5) is well-defined in terms of solvability, to facilitate the choice of coupling parameters, we now consider a game formulation. A game formulation encourages independent players to compete with each other. Here, each player is a sub-problem in an optimisation formulation; see [16]. Here, independence means that the parameters of the sub-problems do not have to rely on each other.
3. The New Model and Reformulation as a Nash Game
In this work, we are interested in a particular case of a two-player game formulation. Instead of optimising the single energy (5), we consider a game reformulation where two individuals, or ‘players’, are involved. The first player controls the variable g, and the second one will be introduced by using the idea of operator splitting [17] to reduce the high-order derivatives in (5) to first-order terms and to simplify the subsequent solution. The solution to this game is the Nash equilibrium, whose existence must be established. For important techniques and results in game theory and its connections to partial differential equations (PDEs) for other problems, the reader is directed to [18,19,20,21].
More precisely, let q be an approximation for the vector ∇g. Then, we propose our new model, approximating (5), as the two coupled energies (6) and (7), where the first player minimises (6) with respect to g and the second player minimises (7) with respect to q.
Definition 1. A pair (g*, q*) in the space W is called a Nash equilibrium for the game involving the two energies J₁ and J₂, defined on W, if

$$J_1(g^*,q^*)\le J_1(g,q^*)\quad\text{and}\quad J_2(g^*,q^*)\le J_2(g^*,q)\quad\text{for all }(g,q)\in W.$$

One could consider only the single energy J₁ + J₂ to be optimised; however, for the theoretical analysis, the ellipticity of the sum energy is not guaranteed because of the coupling term between g and q. Hence, the existence of minimisers is not straightforward. However, we emphasise that in the game formulation, the energies J₁ and J₂ are partially elliptic, i.e., with respect to the variables g and q, respectively. This is a very important property which eases the proof of the existence of a Nash equilibrium.
Proposition 1. There exists a unique Nash equilibrium for the two-player game involving the cost functionals J₁ and J₂ in (6) and (7).
Proof of Proposition 1. Since
- J₁ is partially strictly convex, partially elliptic and weakly lower semi-continuous with respect to the variable g, and
- J₂ is partially strictly convex, partially elliptic and weakly lower semi-continuous with respect to the variable q,
the proof is a straightforward and direct application of the Nash theorem [22]. □
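Numerically, a Nash equilibrium of such a game is typically sought by alternating best responses: each player minimises its own energy while the other player's variable is frozen. The following Python schematic shows this structure; it is our sketch only, and `best_response_g` and `best_response_q` are placeholders for the sub-problem solvers detailed in Section 4, not the paper's exact routines.

```python
import numpy as np

def seek_nash_equilibrium(f, best_response_g, best_response_q,
                          max_iter=100, tol=1e-4):
    """Alternating best-response loop for a two-player game.

    best_response_g(f, q) should minimise the first player's energy in g
    for fixed q; best_response_q(f, g) should minimise the second
    player's energy in q for fixed g.
    """
    g = f.copy()                      # player one starts from the image
    q = np.zeros((2,) + f.shape)      # player two approximates the vector grad(g)
    for _ in range(max_iter):
        g_new = best_response_g(f, q)     # player one responds to current q
        q = best_response_q(f, g_new)     # player two responds to updated g
        converged = np.linalg.norm(g_new - g) <= tol * max(np.linalg.norm(g), 1.0)
        g = g_new
        if converged:
            break
    return g, q
```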
4. Numerical Algorithms and Implementation
In this section, we detail the numerical algorithm to solve our game model and show how we utilise the outputs to obtain a segmentation result.
4.1. Stage One: Solution of the Main Model Using ADMM
The discretised version of our two-player game model (6) and (7) is obtained by replacing the integrals with sums over the pixel grid. The gradient operator ∇ is discretised using backward differences with zero Neumann boundary conditions.
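For concreteness, a minimal Python realisation of this discretisation might look as follows (our sketch; replicating the first row/column makes the difference across the boundary vanish, which is one standard way of imposing the zero Neumann condition).

```python
import numpy as np

def grad(u):
    """Backward-difference gradient with zero Neumann boundary conditions.

    Prepending the first row/column replicates the image across the
    boundary, so the difference there is zero (du/dn = 0 discretely).
    """
    ux = np.diff(u, axis=1, prepend=u[:, :1])  # horizontal differences
    uy = np.diff(u, axis=0, prepend=u[:1, :])  # vertical differences
    return np.stack([ux, uy])
```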
We aim to solve the coupled problem using the split-Bregman variant of the alternating direction method of multipliers (ADMM) [23], which is commonly used for problems containing ℓ₁ regularisation. In order to do this, we introduce a new splitting variable into each sub-problem and apply split-Bregman iterations to enforce the resulting constraints.
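The resulting iteration has the familiar split-Bregman structure. A generic sketch for one of the constraints, say `d = grad(g)`, follows; this is our illustration, where `d` denotes a splitting variable, and `solve_g` and `shrink_d` stand in for the sub-problem solvers detailed below.

```python
import numpy as np

def grad(u):  # backward differences, as in the sketch above
    return np.stack([np.diff(u, axis=1, prepend=u[:, :1]),
                     np.diff(u, axis=0, prepend=u[:1, :])])

def split_bregman_sweep(g, d, b, solve_g, shrink_d):
    """One split-Bregman sweep enforcing the constraint d = grad(g).

    The Bregman variable b accumulates the constraint violation, so that
    d = grad(g) holds when the iteration converges.
    """
    g = solve_g(d - b)             # quadratic sub-problem in g
    d = shrink_d(grad(g) + b)      # shrinkage sub-problem in d
    b = b + grad(g) - d            # Bregman update
    return g, d, b
```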
We detail briefly how to solve each of the sub-problems.
g sub-problem: We aim to solve the minimisation problem for fixed values of the remaining variables, which amounts to solving a linear equation for g. This can be solved by using discrete Fourier transforms.
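As an illustration of such a Fourier-domain solve, the sketch below inverts a screened Poisson type operator (λI − θΔ)g = r. This is our simplified stand-in: it assumes periodic boundary conditions so that the FFT diagonalises the operator; with the Neumann conditions used above, a discrete cosine transform plays the same role.

```python
import numpy as np

def solve_screened_poisson(rhs, lam, theta):
    """Solve (lam*I - theta*Laplacian) g = rhs with FFTs (periodic BCs)."""
    n1, n2 = rhs.shape
    k1 = 2 * np.pi * np.fft.fftfreq(n1)[:, None]
    k2 = 2 * np.pi * np.fft.fftfreq(n2)[None, :]
    # Eigenvalues of the negative discrete Laplacian on a periodic grid.
    lap_eig = (2 - 2 * np.cos(k1)) + (2 - 2 * np.cos(k2))
    g_hat = np.fft.fft2(rhs) / (lam + theta * lap_eig)
    return np.real(np.fft.ifft2(g_hat))
```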
Splitting-variable sub-problem (player one): We aim to solve this minimisation problem for fixed values of the other variables; it is solved analytically by a generalised shrinkage formula. The associated Bregman update is then applied.
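The generalised (isotropic) shrinkage used here has the standard closed form shrink(x, γ) = (x/|x|) max(|x| − γ, 0), applied pixelwise to a vector field; a minimal sketch:

```python
import numpy as np

def shrink(x, gamma):
    """Isotropic shrinkage of a vector field x of shape (2, m, n).

    Each 2-vector is shortened in norm by gamma, or set to zero when its
    norm falls below gamma.
    """
    norm = np.sqrt(np.sum(x ** 2, axis=0, keepdims=True))
    scale = np.maximum(norm - gamma, 0) / np.maximum(norm, 1e-12)  # avoid 0/0
    return scale * x
```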
q sub-problem (player two): We aim to solve the minimisation problem for fixed g, whose solution is defined by a linear equation. To find the solution q, we again apply discrete Fourier transforms.
w sub-problem: We aim to solve the minimisation problem for fixed values of the other variables, which, similar to (9), is solved by using a shrinkage formula.
4.2. Stage Two: Segmentation of f by Thresholding g
In order to acquire a segmentation result for f, we take the minimiser g from stage one and threshold it according to some suitably defined threshold parameter(s). As in [12], the advantage of this method is that changing the threshold value(s) does not require re-computation of the optimisation performed in stage one.
There are two strategies that can be employed to define the threshold(s). The first is to use the k-means algorithm, an automatic method that partitions a given input into K clusters. The second is to define the threshold value(s) manually, which generally provides better results. As the threshold values are applied after optimisation, a wide range of values can easily be tried and the best selected. In our experiments, we use manual threshold values for two-phase segmentation, whereas for multiphase segmentation with multiple threshold values, we use k-means to simplify the process.
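A minimal sketch of stage two is given below; this is our illustration, where the threshold `tau` and the cluster count `K` are example parameters, and the k-means routine is a simple one-dimensional implementation rather than a library call.

```python
import numpy as np

def threshold_two_phase(g, tau):
    """Two-phase segmentation: threshold the smooth minimiser g at tau."""
    return (g > tau).astype(np.uint8)

def threshold_multiphase(g, K, n_iter=50):
    """Multiphase segmentation via a simple 1D k-means on the values of g."""
    values = g.ravel()
    centres = np.quantile(values, (np.arange(K) + 0.5) / K)  # spread initial centres
    for _ in range(n_iter):
        labels = np.argmin(np.abs(values[:, None] - centres[None, :]), axis=1)
        for k in range(K):
            if np.any(labels == k):
                centres[k] = values[labels == k].mean()
    return labels.reshape(g.shape)
```

Because g is fixed after stage one, calling these functions with different `tau` or `K` costs almost nothing compared with re-solving the model.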
5. Numerical Results
In this section, we display some examples of the performance of our model and compare it with a number of models, namely:
CRCV: the convex relaxed Chan–Vese model [5];
CCZ: the two-stage convex variant of the Mumford–Shah model by Cai et al. [12] given in (4);
CNC: the convex non-convex segmentation by Chan et al. [24];
T-ROF: the T-ROF model by Cai et al. [25];
and also a deep learning model.
We first show some visual comparisons, where noise is added to the original image, and then later perform a quantitative analysis on a dataset. Note that all the models above (and ours), except for the CRCV model, are capable of multiphase segmentation, whereas the CRCV model (in the Chan–Vese framework) is only capable of two-phase segmentation. For this reason, in the experiments, we only include the CRCV model in two-phase examples.
5.1. Qualitative Results
In Figure 1, we show an ultrasound image, to which we add additive Gaussian noise with mean 0 and standard deviation 10. We display the outputs of all the competing models, the segmentation result overlaid on the original image, and, for all but the CRCV model, the segmentation result after thresholding (the segmentation result after thresholding is the binary output shown first). We see that the segmentation result from our model is better at segmenting the object in the image: our segmentation effectively captures the “tail” part at the top of the object, whereas the CCZ model fails to segment it well. The CRCV and CNC models segment the tail but fail to remove the noise. We note that the T-ROF model is the best competing model but does not quite segment all of the tail.
Similarly, in Figure 2, we show another two-phase segmentation example, where we take the clean image and add Gaussian noise with mean 0 and standard deviation 25. It is clear that none of the competing models performs as well as ours. Our result manages to preserve more detail in general, notably the strand at the top and the curved structure at the bottom of the image, without being susceptible to the noise.
In Figure 3, Figure 4 and Figure 5, we show some examples of multiphase segmentation on MRI images of the brain. In all cases, we add Gaussian noise with mean 0 and standard deviation 17 and use the noisy image as input to all models but the CRCV model (as this is a two-phase model only). The output is then given as input to the k-means algorithm. We show the clustering output in the final column of the relevant figures. We see that the segmentation result of our model is better at finding some of the finer edges; for example, the white matter segmentation from our model is in general more detailed than the segmentations from the competing models.
5.2. Quantitative Analysis
To assess our method quantitatively, we run our model on 20 images in the Digital Retinal Images for Vessel Extraction (DRIVE) dataset (https://drive.grand-challenge.org/, accessed 25 October 2021). We use the manual segmentation image as the clean image and add additive Gaussian noise with mean 0 and standard deviation 100 to form the input image, as shown in Figure 6, Figure 7, Figure 8 and Figure 9a,b, respectively. We display the output of the competing models and our model, as well as a deep learning model (abbreviated as DL). We trained a U-Net [6] network on 15 of the images (and used the other five as a validation set), where the noisy image served as input, and we trained with the binary cross-entropy loss function to match the clean image. The results are good; however, we lack a large dataset to obtain the impressive results that deep learning approaches usually provide.
Figure 6, Figure 7, Figure 8 and Figure 9 show four examples from the given dataset; however, we run on all 20 available images to provide some quantitative analysis. We use the DICE coefficient and the JACCARD similarity coefficient as quantitative measures to evaluate the performance of the segmentation results. Given a binary segmentation result S from a model and a ground truth segmentation G, the DICE coefficient is given as:

$$\mathrm{DICE}(S,G)=\frac{2\,|S\cap G|}{|S|+|G|}.$$

The JACCARD similarity coefficient is given as:

$$\mathrm{JACCARD}(S,G)=\frac{|S\cap G|}{|S\cup G|}.$$
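Both measures take values in [0, 1], with 1 indicating a perfect match. A direct implementation for binary masks is straightforward (our sketch):

```python
import numpy as np

def dice(seg, gt):
    """DICE coefficient of two binary masks."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(seg, gt).sum() / (seg.sum() + gt.sum())

def jaccard(seg, gt):
    """JACCARD (intersection over union) of two binary masks."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    return np.logical_and(seg, gt).sum() / np.logical_or(seg, gt).sum()
```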
In Table 1, we show the mean and standard deviation values of the DICE and JACCARD scores on the dataset. We see clearly from these results that our model is more effective than the Cai model. We note that the numerical values provided for the DL method are computed on all 20 images in the dataset, even though the DL model was trained on 15 of them. This is somewhat of an unfair comparison; however, the numerical values for our approach are still larger than the values for the DL approach despite this.
Figure 10 shows the boxplots of quantitative results on the data, for further visualisation.
6. Conclusions
In this paper, we have developed a convex relaxed game formulation of the less well-known Blake–Zisserman model in order to segment images with low contrast and strong noise. The advantages of the game formulation are that the existence of Nash equilibrium can be proved and there is less dependence on parameters for each sub-problem, i.e., parameters of each sub-problem do not rely on each other, and so can be tuned appropriately and separately. The game model was implemented using a fast split-Bregman algorithm, and numerical experiments show improvements in segmentation results over competing models, especially over the well-known Mumford–Shah type methods for low-contrast images.
Author Contributions
Conceptualisation, A.T. and K.C.; methodology, A.T.; software, L.B.; validation, L.B.; formal analysis, A.T.; investigation, L.B.; writing—original draft preparation, L.B., A.T. and K.C.; writing—review and editing, L.B., A.T. and K.C.; visualisation, L.B.; supervision, K.C.; project administration, K.C.; funding acquisition, K.C. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by EPSRC grant number EP/N014499/1.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Conflicts of Interest
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
| MS | Mumford–Shah |
| BZ | Blake–Zisserman |
| ADMM | Alternating direction method of multipliers |
References
- Kass, M.; Witkin, A.; Terzopoulos, D. Snakes: Active contour models. Int. J. Comput. Vis. 1988, 1, 321–331. [Google Scholar] [CrossRef]
- Caselles, V.; Kimmel, R.; Sapiro, G. Geodesic active contours. Int. J. Comput. Vis. 1997, 22, 61–79. [Google Scholar] [CrossRef]
- Mumford, D.B.; Shah, J. Optimal approximations by piecewise smooth functions and associated variational problems. Commun. Pure Appl. Math. 1989, 42, 577–685. [Google Scholar] [CrossRef] [Green Version]
- Chan, T.F.; Vese, L.A. Active contours without edges. IEEE Trans. Image Process. 2001, 10, 266–277. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Chan, T.F.; Esedoglu, S.; Nikolova, M. Algorithms for finding global minimizers of image segmentation and denoising models. SIAM J. Appl. Math. 2006, 66, 1632–1648. [Google Scholar] [CrossRef]
- Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-assisted Intervention, Munich, Germany, 5–9 October 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
- Hesamian, M.H.; Jia, W.J.; He, X.J.; Kennedy, P. Deep Learning Techniques for Medical Image Segmentation: Achievements and Challenges. J. Digit. Imaging 2019, 32, 582–596. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Burrows, L.; Chen, K.; Torella, F. On New Convolutional Neural Network Based Algorithms for Selective Segmentation of Images. In Proceedings of MIUA 2020, Oxford, UK, 15–17 July 2020; Communications in Computer and Information Science (CCIS); Springer: Berlin/Heidelberg, Germany, 2020; Volume 1048, pp. 93–104. [Google Scholar]
- Blake, A.; Zisserman, A. Visual Reconstruction; MIT Press: Cambridge, MA, USA, 1987. [Google Scholar]
- Ambrosio, L.; Tortorelli, V.M. Approximation of functionals depending on jumps by elliptic functionals via Γ-convergence. Commun. Pure Appl. Math. 1990, 43, 999–1036. [Google Scholar] [CrossRef]
- Cai, X.; Chan, R.; Zeng, T. A two-stage image segmentation method using a convex variant of the Mumford–Shah model and thresholding. SIAM J. Imaging Sci. 2013, 6, 368–390. [Google Scholar] [CrossRef]
- Burrows, L.; Guo, W.; Chen, K.; Torella, F. Reproducible kernel Hilbert space based global and local image segmentation. Inverse Probl. Imaging 2021, 15, 1. [Google Scholar] [CrossRef]
- Theljani, A.; Belhachmi, Z. A discrete approximation of Blake and Zisserman energy in image denoising with optimal choice of regularization parameters. Math. Methods Appl. Sci. 2021, 44, 3857–3871. [Google Scholar] [CrossRef]
- Zanetti, M.; Ruggiero, V.; Miranda, M., Jr. Numerical minimization of a second-order functional for image segmentation. Commun. Nonlinear Sci. Numer. Simul. 2016, 36, 528–548. [Google Scholar] [CrossRef]
- Theljani, A.; Habbal, A.; Kallel, M.; Chen, K. Game Theory and Its Applications in Imaging and Vision; Springer: Berlin/Heidelberg, Germany, 2021. [Google Scholar]
- Huang, Y.; Ng, M.K.; Wen, Y.W. A fast total variation minimization method for image restoration. Multiscale Model. Simul. 2008, 7, 774–795. [Google Scholar] [CrossRef]
- Habbal, A.; Petersson, J.; Thellner, M. Multidisciplinary topology optimization solved as a Nash game. Int. J. Numer. Methods Eng. 2004, 61, 949–963. [Google Scholar] [CrossRef]
- Kallel, M.; Aboulaich, R.; Habbal, A.; Moakher, M. A Nash-game approach to joint image restoration and segmentation. Appl. Math. Model. 2014, 38, 3038–3053. [Google Scholar] [CrossRef]
- Kallel, M.; Moakher, M.; Theljani, A. The Cauchy problem for a nonlinear elliptic equation: Nash-game approach and application to image inpainting. Inverse Probl. Imaging 2015, 9, 853. [Google Scholar]
- Roy, S.; Borzì, A.; Habbal, A. Pedestrian motion modelled by Fokker–Planck Nash games. R. Soc. Open Sci. 2017, 4, 170648. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Nash, J. Non-Cooperative Games. Ann. Math. 1951, 54, 286–295. [Google Scholar] [CrossRef]
- Goldstein, T.; Osher, S. The split Bregman method for L1-regularized problems. SIAM J. Imaging Sci. 2009, 2, 323–343. [Google Scholar] [CrossRef]
- Chan, R.; Lanza, A.; Morigi, S.; Sgallari, F. Convex non-convex image segmentation. Numer. Math. 2018, 138, 635–680. [Google Scholar] [CrossRef]
- Cai, X.; Chan, R.; Schönlieb, C.B.; Steidl, G.; Zeng, T. Linkage Between Piecewise Constant Mumford–Shah Model and Rudin–Osher–Fatemi Model and Its Virtue in Image Segmentation. SIAM J. Sci. Comput. 2019, 41, B1310–B1340. [Google Scholar] [CrossRef]
Figure 1.
Results from an ultrasound image: (a) Clean image. (b) Noisy image used as input to the models. (c) Output of the CRCV model. (d) CRCV contour. (e) Output of CCZ. (f) CCZ after thresholding. (g) CCZ contour. (h) Output of CNC. (i) CNC after thresholding. (j) CNC contour. (k) Output of T-ROF. (l) T-ROF after thresholding. (m) T-ROF contour. (n) Output g of our model. (o) Output of our model. (p) Output of our model. (q) Ours after thresholding. (r) Our contour.
Figure 2.
Results from a blood vessel image: (a) Clean image. (b) Noisy image used as input to the models. (c) Output of the CRCV model. (d) CRCV contour. (e) Output of CCZ. (f) CCZ after thresholding. (g) CCZ contour. (h) Output of CNC. (i) CNC after thresholding. (j) CNC contour. (k) Output of T-ROF. (l) T-ROF after thresholding. (m) T-ROF contour. (n) Output g of our model. (o) Output of our model. (p) Output of our model. (q) Ours after thresholding. (r) Our contour.
Figure 3.
MRI segmentation: (a) Clean image. (b) Noisy image used as input to the models. (c) Output of CCZ. (d) CCZ after thresholding. (e) Output of CNC. (f) CNC after thresholding. (g) Output of T-ROF. (h) T-ROF after thresholding. (i) Output g of our model. (j) Output of our model. (k) Output of our model. (l) Ours after thresholding.
Figure 4.
MRI segmentation: (a) Clean image. (b) Noisy image used as input to the models. (c) Output of CCZ. (d) CCZ after thresholding. (e) Output of CNC. (f) CNC after thresholding. (g) Output of T-ROF. (h) T-ROF after thresholding. (i) Output g of our model. (j) Output of our model. (k) Output of our model. (l) Ours after thresholding.
Figure 5.
MRI segmentation: (a) Clean image. (b) Noisy image used as input to the models. (c) Output of CCZ. (d) CCZ after thresholding. (e) Output of CNC. (f) CNC after thresholding. (g) Output of T-ROF. (h) T-ROF after thresholding. (i) Output g of our model. (j) Output of our model. (k) Output of our model. (l) Ours after thresholding.
Figure 6.
(a) Clean image. (b) Noisy image used as input to the models. (c) Output of the CRCV model. (d) CCZ after thresholding. (e) CNC after thresholding. (f) T-ROF after thresholding. (g) DL Output. (h) Output g of our model. (i) Output of our model. (j) Output of our model. (k) Ours after thresholding. (l) Our contour.
Figure 7.
(a) Clean image. (b) Noisy image used as input to the models. (c) Output of the CRCV model. (d) CCZ after thresholding. (e) CNC after thresholding. (f) T-ROF after thresholding. (g) DL Output. (h) Output g of our model. (i) Output of our model. (j) Output of our model. (k) Ours after thresholding. (l) Our contour.
Figure 8.
(a) Clean image. (b) Noisy image used as input to the models. (c) Output of the CRCV model. (d) CCZ after thresholding. (e) CNC after thresholding. (f) T-ROF after thresholding. (g) DL Output. (h) Output g of our model. (i) Output of our model. (j) Output of our model. (k) Ours after thresholding. (l) Our contour.
Figure 9.
(a) Clean image. (b) Noisy image used as input to the models. (c) Output of the CRCV model. (d) CCZ after thresholding. (e) CNC after thresholding. (f) T-ROF after thresholding. (g) DL Output. (h) Output g of our model. (i) Output of our model. (j) Output of our model. (k) Ours after thresholding. (l) Our contour.
Figure 10.
Comparison of six methods (CRCV, CCZ, CNC, T-ROF, DL, Ours): box plots of the quantitative results for the DICE (a) and JACCARD (b) scores.
Table 1.
Quantitative results from images in the DRIVE dataset. Here, we show the methods evaluated on 20 images and display the mean and standard deviation of both the DICE coefficient and the JACCARD score. Note that the DL method was trained on 15 of these 20 images.

| Method | DICE mean | DICE std | JACCARD mean | JACCARD std |
|---|---|---|---|---|
| CRCV | 0.727 | 0.0291 | 0.573 | 0.0358 |
| CCZ | 0.914 | 0.0139 | 0.843 | 0.0236 |
| CNC | 0.939 | 0.0131 | 0.884 | 0.0233 |
| T-ROF | 0.932 | 0.0083 | 0.872 | 0.0145 |
| DL | 0.946 | 0.0091 | 0.898 | 0.0163 |
| Ours | 0.950 | 0.0073 | 0.905 | 0.0133 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).