
Multiscale Joint Optimization Strategy for Retinal Vascular Segmentation

Minghan Yan, Jian Zhou, Cong Luo, Tingfa Xu and Xiaoxue Xing *
1 College of Electronic Information Engineering, Changchun University, Changchun 130012, China
2 School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(3), 1258; https://doi.org/10.3390/s22031258
Submission received: 26 December 2021 / Revised: 31 January 2022 / Accepted: 3 February 2022 / Published: 7 February 2022
(This article belongs to the Section Optical Sensors)

Abstract

The accurate segmentation of retinal vessels is of great significance for the diagnosis of diseases such as diabetes, hypertension, microaneurysms and arteriosclerosis. In order to segment more deep and small blood vessels and provide more information to doctors, a multi-scale joint optimization strategy for retinal vessel segmentation is presented in this paper. First, the Multi-Scale Retinex (MSR) algorithm is used to correct the uneven illumination of fundus images. Then, a multi-scale Gaussian matched filtering method is used to enhance the contrast of the retinal images. Next, Otsu (OTSU) multi-threshold segmentation, optimized by the Particle Swarm Optimization (PSO) algorithm, is applied to the image produced by the multi-scale matched filtering. Finally, the image is post-processed by binarization, morphological operations and edge-contour removal. Experiments are implemented on the DRIVE and STARE datasets to evaluate the effectiveness and practicability of the proposed method. Compared with other existing methods, the proposed method segments more small blood vessels while ensuring the integrity of the vascular structure, and it produces more distinct targets, higher contrast and richer detail and local features. The qualitative and quantitative analysis results show that the presented method is superior to other advanced methods.

1. Introduction

Retinal vascular image segmentation is an important topic in medical image research; it can effectively assist doctors in the rapid clinical diagnosis and treatment of cardiovascular disease, diabetes and other diseases. In recent years, many scholars have studied retinal vascular image segmentation and achieved promising results. However, due to the complexity of retinal images and the influence of noise and illumination during image acquisition, accurate retinal vascular segmentation remains a challenging task [1,2,3]. Two-dimensional color fundus images and 3D Optical Coherence Tomography (OCT) images are the images commonly used for ophthalmic diseases. OCT can provide high-resolution retinal images; however, it is expensive, its images are difficult to acquire, and they must be registered before vessel segmentation. Color fundus photography is a non-invasive and painless technique that images the inner wall of the eye at different angles using a fundus camera. More importantly, it allows direct visualization of retinal vascular lesions and other lesions such as microaneurysms, hemorrhages, neovascularization, hard exudates and cotton wool spots. Therefore, we choose color fundus images for our retinal vessel segmentation studies [4,5,6].
At present, retinal vascular segmentation methods are mainly divided into supervised and unsupervised methods. Among the supervised methods, deep learning-based algorithms are currently a hot research topic. Reference [7] proposed a retinal vessel segmentation method based on cross-modality learning, which recasts segmentation as a cross-modality data transformation from the retinal image to the vessel map. Instead of a single label for the center pixel, the network outputs the label map of all pixels in a given image patch. The reported accuracy, sensitivity and specificity are 0.9527, 0.7569 and 0.9816 on the DRIVE database and 0.9628, 0.7726 and 0.9844 on the STARE database. Reference [8] proposed a deep learning method based on a segment-level loss that places more emphasis on the thickness consistency of thin vessels during training; its average accuracy, sensitivity and specificity are 0.9542, 0.7653 and 0.9818 on DRIVE and 0.9612, 0.7581 and 0.9846 on STARE. To balance the number of identified vascular structures against how accurately a single vessel is segmented, reference [9] proposed a dynamic deep network for retinal vascular segmentation, with an average accuracy of 0.9780 on STARE. The method proposed in reference [10] is a convolutional neural network based on a simplified U-Net architecture; the network takes small patches extracted from the original image as input and is trained with a new loss function that considers the distance between each pixel and the vascular tree. Its mean sensitivity, specificity and accuracy are 0.8597, 0.9690 and 0.9563 on the DRIVE database and 0.8441, 0.9764 and 0.9635 on the STARE database.
Unsupervised segmentation methods can be divided into filter-based, clustering-based, tracking-based, morphology-based and threshold-based methods. Tolias et al. [11] first proposed an automatic retinal vascular segmentation method. Chaudhuri et al. [12] proposed a two-dimensional matched filtering method, and Odstrcilik et al. [13] proposed an improved matched filtering method. Gabor [14] first proposed the Gabor filter theory, on which Daugman [15] based a two-dimensional Gabor filter. Duits [16] applied cake filters to retinal vascular segmentation. Reference [17] presented a multi-scale Frangi filter-based algorithm. Wang et al. [18] proposed an improved morphological and OTSU retinal vascular segmentation method. In addition to these methods, blood vessel segmentation methods based on fuzzy C-means clustering (FCM) [19] and K-means [20] are also popular. Reference [21] proposed a vessel-tracking-based method to extract the retinal vasculature. Fraz et al. [22] proposed a segmentation method based on the morphological top-hat transform. Reference [23] proposed an adaptive local thresholding method based on verification-based multi-threshold probing.
Most supervised segmentation methods rely on manually designed features to model the retinal vasculature. However, manual feature design is a heuristic and laborious process that depends heavily on experience and skill, and the algorithm parameters usually need careful tuning to handle pathology, image noise and other complex situations. Unsupervised segmentation methods run quickly at low cost and can obtain good segmentation performance, but they are susceptible to noise and often fail to capture the small blood vessels that provide important information for the detection of diseases.
In this paper, a new unsupervised method is presented. We design a multi-scale joint optimization strategy for retinal vascular segmentation that segments more small blood vessels and improves segmentation accuracy. The segmentation results are beneficial for the diagnosis, screening and treatment of cardiovascular and ophthalmologic diseases. The main contributions of this paper are as follows: (1) the MSR algorithm is used to adjust image brightness and reduce noise; (2) a multi-scale Gaussian matched filtering method is proposed to enhance image contrast; (3) the PSO algorithm is used to optimize the OTSU three-threshold search, accelerating the computation and improving accuracy.
The remainder of this paper is arranged as follows. In Section 2, the proposed method framework is given, the relevant theories used in each step of the proposed method are explained and the results of each step are shown. Section 3 shows the experimental results and analysis. Section 4 summarizes the paper.

2. Proposed Methodology

2.1. Overview

The flow chart of the proposed vessel segmentation algorithm is shown in Figure 1. As shown in Figure 1, the retinal segmentation method based on the multi-scale joint optimization strategy is divided into four stages: image pre-processing, vascular feature extraction, multi-threshold image segmentation and image post-processing. First, we use MSR to adjust the brightness of the image and reduce noise, and the green channel is extracted as the base image for subsequent processing. Second, the multi-scale Gaussian matched filtering method is proposed to enhance image contrast and extract the features of the blood vessels. Then, PSO is used to optimize the OTSU three thresholds for image segmentation. Finally, the binarized image is processed by breakpoint connection, denoising and edge-contour removal. To illustrate the steps of the algorithm in detail, Figure 2 shows magnified output images of the important steps.

2.2. Image Pre-Processing

MSR Algorithm

The basic idea of Retinex [24] is that the object color perceived by the human visual system is determined by the reflection properties of the object surface and is only weakly related to the incident light. Assuming the original image is $S(x, y)$, then
$$S(x, y) = i(x, y)\, R(x, y) \quad (1)$$
where
$(x, y)$ represents a coordinate point in two-dimensional space;
$S(x, y)$ represents the original image;
$i(x, y)$ represents the illumination image;
$R(x, y)$ represents the reflectance image.
When $i(x, y)$ is removed from $S(x, y)$, the remaining $R(x, y)$ is an image free of illumination effects, as perceived by the human visual system.
As shown in Figure 1, the low contrast between the target blood vessels and the background is not beneficial for the later segmentation. Thus, we use the MSR algorithm to adjust the brightness and enhance the contrast of the retinal image. The MSR algorithm was proposed by Jobson et al. [25] and is defined in Equation (2).
$$R_{MSR}^{i} = \sum_{n=1}^{N} \omega_n R_n^{i} = \sum_{n=1}^{N} \omega_n \left( \log\left[S_i(x, y)\right] - \log\left[S_i(x, y) * F_n(x, y)\right] \right) \quad (2)$$
where
$S_i(x, y)$ is the $i$-th channel component of the original retinal color image, and $*$ denotes convolution;
$R_{MSR}^{i}$ is the reflection component of the $i$-th channel;
$R_n^{i}$ is the single-scale reflection component of the $i$-th channel at the $n$-th scale;
$F_n$ is the $n$-th-scale Gaussian surround function;
$N$ is the number of scales (to ensure that the MSR algorithm retains the advantages of both high and low scales, $N$ is generally set to 3);
$\omega_n$ is the weight of the Gaussian convolution at the $n$-th scale.
Extracting the red, green and blue channels from the MSR-processed images shows that the red channel has low contrast between the target and the background, while the blue channel is noisy. The green channel has balanced brightness, high contrast and a uniform gray distribution, and is therefore selected for subsequent processing.
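A minimal Python/NumPy sketch of this pre-processing step is given below. The scale values (15, 80, 250) are classic MSR choices and the uniform weights are assumptions; the paper does not report its exact settings, and the experiments in the paper were run in MATLAB.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_retinex(channel, sigmas=(15, 80, 250), weights=None):
    """MSR sketch of Eq. (2): log(S) - log(S * F_n), weighted over N scales."""
    weights = weights or [1.0 / len(sigmas)] * len(sigmas)
    s = channel.astype(np.float64) + 1.0           # offset avoids log(0)
    r = np.zeros_like(s)
    for w, sigma in zip(weights, sigmas):
        blurred = gaussian_filter(s, sigma)        # S_i(x,y) * F_n(x,y)
        r += w * (np.log(s) - np.log(blurred))
    # stretch the reflectance back to [0, 1] for later processing
    return (r - r.min()) / (r.max() - r.min() + 1e-12)

# green channel of an RGB fundus image (H x W x 3, uint8):
# green = multiscale_retinex(rgb[:, :, 1])
```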

2.3. Vascular Feature Extraction

In this paper, we use a Gaussian matched filter to extract the features of the retinal blood vessels. The Gaussian matched filter was first proposed by Chaudhuri et al. [12]; the Gaussian kernel function used in [12] is
$$K(x, y) = -\exp\left(-x^2 / 2\sigma^2\right), \quad |y| \le L/2 \quad (3)$$
where $L$ is the length of the Gaussian kernel, indicating the length of the vessel segments the filter can detect (we set $L = 9$), and $\sigma$ is the scale of the Gaussian kernel, representing the extent of the vascular cross-section the filter can detect. For vessels at different orientations, the Gaussian kernel must be rotated accordingly. The kernel is rotated in steps of 15° over the range from 0° to 180° ($\theta = 0°, 15°, \ldots, 165°$), constructing a total of 12 directions, and the maximum filter response at each pixel is retained. The rotation matrix is given by
$$r_i = \begin{bmatrix} \cos\theta_i & -\sin\theta_i \\ \sin\theta_i & \cos\theta_i \end{bmatrix}, \quad 0 \le \theta_i \le \pi \quad (4)$$
Suppose $p = [x, y]$ is a discrete point in the kernel function and $\theta_i$ ($0 \le \theta_i \le \pi$) is the angle of the $i$-th kernel function. The coordinates of $p$ after rotation are $\bar{p}_i = [u, v] = p\, r_i^{T}$, and the $i$-th template kernel function is
$$K_i(x, y) = -\exp\left(-u^2 / 2\sigma^2\right), \quad \bar{p}_i \in Z \quad (5)$$
where $Z$ is the template domain, with value range $Z = \{(u, v) : |u| \le 3\sigma,\ |v| \le L/2\}$. When a vessel segment is shorter than the filter length, it is approximately regarded as a straight line. If the vessel width matches the scale of the Gaussian kernel, the output value of the filter is maximal.
The filtered image is obtained by convolving the input image with the two-dimensional Gaussian kernel:
$$G(x, y) = K(x, y) * R_{MSR}^{G}(x, y) \quad (6)$$
where $G$ represents the filtered image and $R_{MSR}^{G}$ represents the input image, i.e., the green channel of the MSR result.
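A sketch of this oriented filter bank, following Eqs. (3)-(6), is given below. The zero-mean normalization inside the template follows Chaudhuri et al. [12]; the grid size and orientation handling are illustrative choices rather than the authors' exact implementation.

```python
import numpy as np
from scipy.ndimage import convolve

def matched_filter_bank(sigma, L=9, n_angles=12):
    """Build the 12 rotated Gaussian matched-filter kernels of Eq. (5)."""
    half = int(np.ceil(max(3 * sigma, L / 2)))
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    kernels = []
    for theta in np.deg2rad(np.arange(n_angles) * 180.0 / n_angles):
        # rotate coordinates into the kernel frame: p_i = p r_i^T
        u = xs * np.cos(theta) + ys * np.sin(theta)
        v = -xs * np.sin(theta) + ys * np.cos(theta)
        mask = (np.abs(u) <= 3 * sigma) & (np.abs(v) <= L / 2)
        k = np.where(mask, -np.exp(-u**2 / (2 * sigma**2)), 0.0)
        k[mask] -= k[mask].mean()          # zero mean inside the template Z
        kernels.append(k)
    return kernels

def matched_filter_response(img, sigma, L=9):
    """Per-pixel maximum response over the 12 orientations for one scale."""
    return np.max([convolve(img, k) for k in matched_filter_bank(sigma, L)],
                  axis=0)
```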

2.3.1. Multi-Scale Matching Filtering

Because retinal blood vessels differ in length, width, branching and angle, it is difficult to accurately extract vascular feature information at a single scale. Therefore, this paper uses a multi-scale matched filter to extract the characteristics of vascular images. When a large scale is used for filtering, mainly the coarsest blood vessels are extracted; when a small scale is used, the smallest blood vessels are extracted. After many experiments, we found that at $\sigma_1 = 1.9$ the main contour features of the vessels are effectively extracted, and at $\sigma_3 = 0.13$ the details of the vessels are effectively extracted. Adding an intermediate scale achieves the following effects: (1) denoising while enhancing the extraction of small vessels; (2) the width of smaller vessels is not overestimated; (3) a reasonable filter response to the blood vessels. At $\sigma_2 = 0.5$, images containing the main contour features and part of the details are obtained.
The vascular information extracted at scales $\sigma$ of 1.9, 0.5 and 0.13 is shown in Figure 3. As shown in Figure 3, most features of the retinal blood vessels are extracted, which benefits the subsequent image processing.

2.3.2. Information Fusion of Vascular Characteristics

Multi-scale matched filtering can obtain most of the retinal vascular features at different scales. In order to effectively enhance the contrast between the target blood vessels and the background and obtain better retinal vascular images, the matched filtering results at the individual scales are fused. The fusion is calculated as follows:
$$G = \omega_1 G_1 + \omega_2 G_2 + \omega_3 G_3 \quad (7)$$
where
$G$ is the fused image;
$G_1$, $G_2$ and $G_3$ are the vascular feature images at $\sigma_1 = 1.9$, $\sigma_2 = 0.5$ and $\sigma_3 = 0.13$;
$\omega_1$, $\omega_2$ and $\omega_3$ are the superposition weights of the scales.
The fusion results are shown in Figure 4. Figure 4b shows the extraction result at the large scale $\sigma_1 = 1.9$, Figure 4d shows the extraction result at the small scale $\sigma_3 = 0.13$, and Figure 4f shows the three-scale extraction result. To fully show the comparison between the single-scale and three-scale extraction methods, we magnify the details of the red regions. As can be seen from Figure 4, single-scale extraction is often unsatisfactory. With the large scale $\sigma_1 = 1.9$, many small vessels and some main vessels are lost, and the extracted vascular structure is incomplete. With the small scale $\sigma_3 = 0.13$, the extraction result suffers from vascular ruptures, poor vascular connectivity and strong noise. Compared with single-scale filtering, the multi-scale matched filtering method preserves vascular integrity, effectively extracts more small vessels, and reduces the effects of noise.
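Continuing the sketches above, Eq. (7) reduces to a weighted sum of per-scale responses. The weights below are placeholders, since the paper does not report its $\omega$ values; `green` and `matched_filter_response` come from the earlier sketches.

```python
# Weighted fusion of the three scale responses (Eq. (7)).
responses = [matched_filter_response(green, s) for s in (1.9, 0.5, 0.13)]
w = (0.4, 0.3, 0.3)                    # assumed weights, summing to 1
fused = sum(wi * ri for wi, ri in zip(w, responses))
```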

2.4. Image Segmentation

2.4.1. OTSU Algorithm

The OTSU algorithm was first proposed in 1979 [26]; it selects the optimal threshold by maximizing the between-class variance of the segmented classes. The pixels of a given image take $L$ gray levels $\{0, 1, \ldots, L-1\}$. The number of pixels at level $i$ is $n_i$, and the total number of pixels is $N = n_0 + n_1 + \cdots + n_{L-1}$. The probability of a pixel having gray value $i$ is denoted $p_i$:
$$p_i = \frac{n_i}{N}, \quad p_i \ge 0, \quad \sum_{i=0}^{L-1} p_i = 1 \quad (8)$$
The given image is divided into regions $C_0$ and $C_1$ by a threshold $t$: $C_0$ contains the gray levels $\{0, \ldots, t\}$ and $C_1$ contains the levels $\{t+1, \ldots, L-1\}$. The probability and average gray value of each region are given by Equations (9) and (10), respectively, and the total mean level $u_T$ of the original image is given by Equation (11).
$$w_0 = \sum_{i=0}^{t} p_i, \quad w_1 = \sum_{i=t+1}^{L-1} p_i \quad (9)$$
$$u_0 = \sum_{i=0}^{t} i\, p_i / w_0, \quad u_1 = \sum_{i=t+1}^{L-1} i\, p_i / w_1 \quad (10)$$
$$u_T = \sum_{i=0}^{L-1} i\, p_i \quad (11)$$
The following two relationships expressed by Equation (12) can be easily verified:
$$w_0 u_0 + w_1 u_1 = u_T, \quad w_0 + w_1 = 1 \quad (12)$$
The objective function of the OTSU method can be defined as
$$\sigma_B^2 = w_0 (u_0 - u_T)^2 + w_1 (u_1 - u_T)^2 \quad (13)$$
The optimal threshold $t^*$ is the value that maximizes this variance, $t^* = \arg\max_t \sigma_B^2(t)$. Extending the single-threshold OTSU to multiple thresholds, the best threshold combination $(t_1, t_2, \ldots, t_m)$ is the one that maximizes the multi-class between-class variance:
$$\sigma_B^2(t_1, t_2, \ldots, t_m) = w_0 (u_0 - u_T)^2 + w_1 (u_1 - u_T)^2 + \cdots + w_m (u_m - u_T)^2 \quad (14)$$
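The sketch below evaluates Eq. (14) for an arbitrary threshold combination over a 256-bin histogram; this quantity later serves as the PSO fitness in Section 2.4.3. The function name is illustrative.

```python
import numpy as np

def between_class_variance(hist, thresholds):
    """Multi-class between-class variance sigma_B^2 of Eq. (14).
    hist: 256-bin gray-level histogram; thresholds: (t1, ..., tm)."""
    p = hist.astype(np.float64) / hist.sum()
    levels = np.arange(len(p))
    u_T = (levels * p).sum()                       # total mean, Eq. (11)
    edges = [-1, *sorted(int(t) for t in thresholds), len(p) - 1]
    var = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):      # classes [lo+1 .. hi]
        w = p[lo + 1:hi + 1].sum()
        if w > 0:
            u = (levels[lo + 1:hi + 1] * p[lo + 1:hi + 1]).sum() / w
            var += w * (u - u_T) ** 2
    return var
```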

2.4.2. PSO Algorithm

The PSO algorithm [27] is a swarm intelligence algorithm inspired by bird-flock foraging; it is used to find the solution that maximizes or minimizes an objective function. In PSO, each bird is modeled as a particle with neither mass nor volume in an $N$-dimensional space, and each particle $i$ is a candidate solution. Each particle moves through the search space with a velocity, which is the distance traveled from its previous position to the current one, and is attracted toward its individual best position $pbest$ and the global best position $gbest$ (the solution of the problem). PSO is initialized with a group of random particles, namely random solutions. The velocity and position of particle $i$ in the $d$-dimensional search space are updated according to Equations (15) and (16); the specific parameter settings are shown in Table 1.
$$V_{id}^{k} = w V_{id}^{k-1} + c_1 r_1 \left( pbest_{id} - x_{id}^{k-1} \right) + c_2 r_2 \left( gbest_{d} - x_{id}^{k-1} \right) \quad (15)$$
$$x_{id}^{k} = x_{id}^{k-1} + V_{id}^{k} \quad (16)$$
where
$V_{id}^{k}$ is the $d$-th velocity component of particle $i$ at iteration $k$;
$x_{id}^{k}$ is the $d$-th position component of particle $i$ at iteration $k$;
$c_1$ and $c_2$ are acceleration constants used to adjust the learning step size;
$r_1$ and $r_2$ are random numbers in $[0, 1]$ that increase the search randomness;
$w$ is the inertia weight factor used to adjust the search range of the solution space.
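A compact PSO sketch implementing Eqs. (15) and (16), with the Table 1 settings as defaults, might look as follows. `pso_search` is an illustrative name; this is a sketch under those assumptions, not the authors' code.

```python
import numpy as np

def pso_search(fitness, dim, lo, hi,
               n_particles=40, iters=20, w=0.5, c1=2.0, c2=2.0):
    """Maximize `fitness` over [lo, hi]^dim with plain PSO."""
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n_particles, dim))    # random initial positions
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([fitness(p) for p in x])
    gbest = pbest[pbest_f.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # velocity and position updates, Eqs. (15) and (16)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([fitness(p) for p in x])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmax()].copy()
    return gbest
```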

2.4.3. OTSU Image Segmentation Based on PSO (OTSU-PSO Algorithm)

The background, target and noise of the pre-processed image lie at different gray levels. In order to obtain the best segmentation, we use multiple thresholds to divide the image into several regions of different gray levels. However, searching for an optimal threshold combination over the full gray range would take too much time. To simplify the calculation and improve the operation speed, we use the PSO algorithm to search for the optimal threshold combination. It was found experimentally that a combination of three thresholds achieves a good segmentation effect. Since the expert segmentations in the retinal image datasets are binary images, to ensure the accuracy of the evaluation index calculation, the retinal vascular image obtained by the OTSU-PSO algorithm is converted into a binary image using a single OTSU threshold, and the final result is obtained after image post-processing (see Section 2.5 for details). The final segmentation results of the OTSU-PSO algorithm are shown in Figure 5, and its specific steps are listed in Table 2.
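Putting the two pieces together, a hypothetical wiring of the sketches above (the names `pso_search`, `between_class_variance` and the 8-bit image `fused_8bit` come from those sketches, not from the paper) looks like:

```python
import numpy as np

# Histogram of the fused matched-filter response, quantized to 8 bits.
hist, _ = np.histogram(fused_8bit, bins=256, range=(0, 256))

# Each particle is a 3-dimensional threshold vector; the OTSU between-class
# variance of Eq. (14) is the fitness that PSO maximizes.
best = pso_search(lambda t: between_class_variance(hist, t),
                  dim=3, lo=0, hi=255)

# The three thresholds split the image into four gray-level regions.
labels = np.digitize(fused_8bit, sorted(best))
```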

2.5. Image Post-Processing

The image obtained by the OTSU-PSO algorithm is re-segmented to obtain the segmented retinal vascular image, which has the following problems: (1) some blood vessels are broken; (2) the field edge of the fundus camera is falsely segmented; (3) noise is enhanced along with the detailed features. In order to solve these problems and to enable comparison with the expert segmentations of the retinal image datasets, we post-process the image. The specific operation steps are as follows (a code sketch of these steps is given after the list):
1. A median filter is used to denoise the image and connect the broken blood vessels.
2. Morphological processing is used to connect regions and remove the large noise.
3. The mask image is extracted from the source retinal image, and the difference image between the source retinal image and the mask image is obtained.
4. The difference image is binarized with the OTSU algorithm, and the binary image is then dilated by morphological processing.
5. The segmented vascular image is subtracted from the expanded edge image to obtain the final output image.
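A possible realization of the five steps with SciPy/scikit-image follows. The median window, erosion/dilation iteration counts and `min_size` are assumptions (the paper reports no exact values), and the sketch approximates steps (3) and (4) by deriving the aperture edge directly from a supplied field-of-view mask.

```python
import numpy as np
from scipy.ndimage import median_filter, binary_erosion, binary_dilation
from skimage.morphology import remove_small_objects

def post_process(vessels, fov_mask, min_size=30):
    """Sketch of the five post-processing steps on a binary vessel map."""
    # (1) median filtering denoises and bridges small vessel breaks
    v = median_filter(vessels.astype(np.uint8), size=3).astype(bool)
    # (2) drop connected components whose area marks them as noise
    v = remove_small_objects(v, min_size=min_size)
    # (3)-(4) build the camera-aperture edge from the FOV mask and dilate it
    edge = fov_mask & ~binary_erosion(fov_mask, iterations=3)
    edge = binary_dilation(edge, iterations=5)
    # (5) subtract the expanded edge so rim artifacts do not count as vessels
    return v & ~edge
```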
Randomly selected images from the DRIVE dataset are used to test the PSO-based OTSU three-threshold segmentation. The effects of the multi-threshold and single-threshold segmentation methods are shown in Figure 5. As shown in Figure 5, the single-threshold segmentation loses more small blood vessels and the main vessels exhibit structural fractures. Compared with the single-threshold segmentation method, the three-threshold segmentation preserves more small blood vessels and better connectivity.

3. Results and Discussion

3.1. Experimental Environment and Datasets

All the experiments are implemented in MATLAB 2016a (MathWorks, Natick, MA, USA) on a 2.30 GHz processor with 3.8 GB of RAM. We use two publicly available datasets, the DRIVE dataset [2] and the STARE dataset [28], to evaluate the performance of the proposed method. The DRIVE dataset contains 40 color retinal images with a resolution of 565 × 584, divided into a testing set and a training set of 20 images each. The training set includes one manual segmentation completed by a single expert, and the testing set includes two manual segmentations completed by two experts. The STARE dataset contains 20 color retinal images with a resolution of 700 × 605 and two sets of manual segmentations by two experts; no separate training and test split is provided for this dataset.

3.2. Segmentation Evaluation Index

In order to better judge the segmentation performance of the model, the segmentation results must be compared with the ground truths manually marked by experts. The three most common evaluation metrics, Accuracy ($Acc$), Sensitivity ($Se$) and Specificity ($Sp$), are used to evaluate the segmentation results. $Acc$ is the ratio of correctly segmented pixels to the total number of pixels; $Se$ is the ratio of correctly segmented vascular pixels to the total number of vascular pixels; $Sp$ is the ratio of correctly segmented background pixels to the total number of background pixels. The higher the values, the better the segmentation. The three evaluation indices are defined as
$$Acc = \frac{TP + TN}{TP + FN + TN + FP} \quad (17)$$
$$Se = \frac{TP}{TP + FN} \quad (18)$$
$$Sp = \frac{TN}{TN + FP} \quad (19)$$
where
$TP$ (true positive) is the number of pixels correctly segmented as blood vessel;
$FP$ (false positive) is the number of background pixels incorrectly segmented as blood vessel;
$TN$ (true negative) is the number of pixels correctly segmented as background;
$FN$ (false negative) is the number of vessel pixels incorrectly segmented as background.
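For completeness, the three indices of Eqs. (17)-(19) can be computed from binary masks as below; this is a straightforward sketch, not tied to the paper's implementation.

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Acc, Se, Sp from a binary prediction and an expert ground truth."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)      # vessel pixels found
    tn = np.sum(~pred & ~truth)    # background pixels kept
    fp = np.sum(pred & ~truth)     # background labeled as vessel
    fn = np.sum(~pred & truth)     # vessel labeled as background
    acc = (tp + tn) / (tp + tn + fp + fn)
    se = tp / (tp + fn)
    sp = tn / (tn + fp)
    return acc, se, sp
```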
The above three metrics are based on the four basic quantities $TP$, $TN$, $FP$ and $FN$, and they assume that pixels are independent of each other; hence, they may suffer from a dependency flaw. Thus, we also adopt the Structural Similarity Measure (SSIM) proposed by Wang et al. [29] and the Structure-measure (S-measure) proposed by Fan et al. [30] to evaluate the segmentation results.

3.3. Experimental Results and Analysis

The segmentation comparison results on DRIVE and STARE datasets are shown in Figure 6 and Figure 7 and Table 3 and Table 4.
Figure 6a and Figure 7a are the original images from the DRIVE and STARE datasets. Figure 6b and Figure 7b are the segmentation results of the proposed method, displayed in red. Figure 6c and Figure 7c are the results of the first expert, displayed in green. Figure 6d and Figure 7d are the results of the second expert, also displayed in green. Figure 6e,f and Figure 7e,f show the differences between the proposed method and the segmentation results of the first and second experts, respectively. In the color scheme of Figures 6 and 7, yellow represents correctly segmented vascular pixels. As demonstrated in Figures 6 and 7, the red parts in the results on both datasets indicate that the proposed method segments more small blood vessels while ensuring the integrity of the main blood vessels.
In order to analyze the effectiveness of the proposed method, the quantitative results on the DRIVE and STARE datasets are shown in Table 3. Using the segmentation results of the first expert as the gold standard, the average specificity, sensitivity and accuracy of the method are 0.9702, 0.7577 and 0.9514 on the DRIVE dataset, and 0.9699, 0.7763 and 0.9579 on the STARE dataset.
In order to overcome the dependency flaw of the above three measures, we also use SSIM and the S-measure to evaluate the effectiveness of the proposed method; the higher the values, the better the performance. The results on the two public datasets are listed in Table 4. We calculate the SSIM and S-measure between the segmentation results of the proposed algorithm and the two expert ground-truth images and, for comparison, between the two experts' segmentation results. The proposed method achieves higher SSIM and S-measure values, indicating better segmentation results.

3.4. Comparison with Other Methods

In order to intuitively compare the retinal vessel segmentation performance, the results of this experiment are compared with those of three methods: the two-dimensional matched filter (M1), the linear tracking morphological method (M2) and the top-hat transformation (M3), as shown in Figure 8. In Figure 8, the images in the first and second rows are randomly selected from the DRIVE test set, and the images in the third and fourth rows are randomly selected from the STARE dataset. The comparison shows that our method is superior to the other three methods: it segments more small vessels while maintaining structural integrity, and its results are comparable to the experts' manual segmentations, which is beneficial for disease diagnosis.
Table 5 compares the proposed method with state-of-the-art methods on the two datasets. The comparison includes five supervised and five unsupervised algorithms, and the results of the ten methods are taken from their respective papers. The values in bold represent the performance of the proposed method.
Compared with the unsupervised methods on the DRIVE dataset, the $Sp$ of the presented method is 0.0224 lower than the maximum and its $Acc$ is 0.0031 lower than the maximum, but it obtains the highest $Se$. On the STARE dataset, our method achieves the highest $Se$ and $Acc$, while its $Sp$ is 0.0116 lower than the maximum. Compared with the supervised methods on the DRIVE dataset, the $Sp$, $Acc$ and $Se$ of the presented method are 0.0114, 0.0048 and 0.0076 lower than the respective maxima; on the STARE dataset, its $Se$ is the highest, while its $Acc$ and $Sp$ are 0.0049 and 0.0147 lower than the maxima, respectively. In general, when jointly considering the performance measures $Se$, $Sp$ and $Acc$, our approach outperforms the state-of-the-art methods on the DRIVE and STARE datasets. Our method requires less computation, achieves high accuracy and shows a certain robustness.

4. Conclusions

In this paper, we present a multiscale joint optimization strategy for retinal vascular segmentation. The multi-scale matched filtering method enhances the contrast between the target blood vessels and the background, and the PSO-based optimization strategy obtains the optimal segmentation threshold combination. In order to evaluate the effectiveness and applicability of the proposed method, experiments are implemented on the DRIVE and STARE datasets. The qualitative and quantitative analyses demonstrate that the proposed method outperforms other existing methods and has strong robustness. The segmented images of the presented method contain more small blood vessels and better preserve the integrity of the vascular structure, which is beneficial for the diagnosis of diseases. The main purpose of the retinal vessel segmentation proposed in this paper is to assist doctors in the diagnosis of cardiovascular and cerebrovascular diseases. In the future, we plan to classify fundus-related diseases, such as glaucoma and senile macular edema, based on the segmented retinal vessels. Limited by the currently available datasets, we also plan to build a new dataset containing retinal images from patients with diabetic retinopathy, glaucoma and other ophthalmic diseases, which can be used to evaluate the capability of algorithms to handle pathological images.

Author Contributions

Conceptualization, T.X. and X.X.; methodology, M.Y. and X.X.; software, M.Y. and J.Z.; validation and investigation, C.L.; writing—review and editing, M.Y., J.Z. and X.X.; supervision, T.X.; project administration, X.X. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Natural Science Foundation of China under Grant No. 61805021 and in part by funds from the Science and Technology Department of Jilin Province under Grant No. 20200401146GX.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kondermann, C.; Kondermann, D.; Yan, M. Blood vessel classification into arteries and veins in retinal images. Proc. SPIE 2007, 6512, 651247. [Google Scholar] [CrossRef]
  2. Wang, J.J.; Liew, G.; Klein, R.; Rochtchina, E.; Knudtson, M.D.; Klein, B.E.; Wong, T.Y.; Burlutsky, G.; Mitchell, P. Retinal vessel diameter and cardiovascular mortality: Pooled data analysis from two older populations. Eur. Heart J. 2007, 28, 1984–1992. [Google Scholar] [CrossRef] [PubMed]
  3. Qiu, X.-Q. Analysis of Current Research Status of Retinal Vessel Segmentation. Graph. Image 2019, 2019, 31–36. [Google Scholar]
  4. Procházka, A. Registration and Analysis of Retinal Images for Diagnosis and Treatment Monitoring. In Proceedings of the 2014 International Workshop on Computational Intelligence for Multimedia Understanding (IWCIM), Paris, France, 1–2 November 2014; pp. T5/1–T5/4. [Google Scholar]
  5. Gegundez-Arias, M.E.; Marin, D.; Ponte, B.; Alvarez, F.; Garrido, J.; Ortega, C.; Vasallo, M.J.; Bravo, J.M. A tool for automated diabetic retinopathy pre-screening based on retinal image computer analysis. Comput. Biol. Med. 2017, 88, 100–109. [Google Scholar] [CrossRef]
  6. Li, T.; Bo, W.; Hu, C.; Kang, H.; Liu, H.; Wang, K.; Fu, H. Applications of Deep Learning in Fundus Images: A Review. Med. Image Anal. 2021, 69, 101971. [Google Scholar] [CrossRef]
  7. Li, Q.; Feng, B.; Xie, L.; Liang, P.; Zhang, H.; Wang, T. A cross-modality learning approach for vessel segmentation in retinal images. IEEE Trans. Med. Imaging 2016, 35, 109–118. [Google Scholar] [CrossRef]
  8. Yan, Z.; Yang, X.; Cheng, K.-T.T. Joint Segment-Level and Pixel-Wise Losses for Deep Learning Based Retinal Vessel Segmentation. IEEE Trans. Biomed. Eng. 2018, 65, 1912–1923. [Google Scholar] [CrossRef]
  9. Khanal, A.; Estrada, R. Dynamic Deep Networks for Retinal Vessel Segmentation. Front. Comput. Sci. 2020, 2, 35. [Google Scholar] [CrossRef]
  10. Gegundez-Arias, M.E.; Marin-Santos, D.; Perez-Borrero, I.; Vasallo-Vazquez, M.J. A new deep learning method for blood vessel segmentation in retinal images based on convolutional kernels and modified U-Net model. Comput. Methods Programs Biomed. 2021, 205, 1060811. [Google Scholar] [CrossRef]
  11. Tolias, Y.; Panas, S. A fuzzy vessel tracking algorithm for retinal images based on fuzzy clustering. IEEE Trans. Med. Imaging 1998, 17, 263–273. [Google Scholar] [CrossRef]
  12. Chaudhuri, S.; Chatterjee, S.; Katz, N.; Nelson, M.; Goldbaum, M. Detection of blood vessels in retinal images using two-dimensional matched filters. IEEE Trans. Med. Imaging 1989, 8, 263–269. [Google Scholar] [CrossRef] [Green Version]
  13. Odstrcilik, J.; Kolar, R.; Budai, A.; Hornegger, J.; Jan, J.; Gazarek, J.; Kubena, T.; Cernosek, P.; Svoboda, O.; Angelopoulou, E. Retinal vessel segmentation by improved matched filtering: Evaluation on a new high-resolution fundus image database. IET Image Process. 2013, 7, 373–383. [Google Scholar] [CrossRef]
  14. Gabor, D. Theory of communication. J. Inst. Electr. Eng.-Part III Radio Commun. Eng. 1946, 93, 58. [Google Scholar] [CrossRef]
  15. Daugman, J.G. Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters. JOSA A 1985, 2, 1160–1169. [Google Scholar] [CrossRef] [PubMed]
  16. Duits, R. Perceptual Organization in Image Analysis: A Mathematical Approach Based on Scale, Orientation and Curvature. Ph.D. Thesis, Technische Universiteit Eindhoven, Eindhoven, The Netherlands, 2005; pp. 129–136. [Google Scholar]
  17. Pan, Y.; Yi, C. Retinal Vessel Segmentation Based on Multi-scale Frangi Filter. Mod. Inform. Technol. 2020, 4, 116–122. [Google Scholar]
  18. Wang, W.; Zhang, J.; Wu, W. New approach to segment retinal vessel using morphology and Otsu. Appl. Res. Comput. 2019, 36, 2228–2231. [Google Scholar] [CrossRef]
  19. Oliveira, W.S.; Teixeira, J.V.; Ren, T.I.; Cavalcanti, G.; Sijbers, J. Unsupervised Retinal Vessel Segmentation Using Combined Filters. PLoS ONE 2016, 11, e0149943. [Google Scholar] [CrossRef] [Green Version]
  20. Yavuz, Z.; Köse, C. Blood Vessel Extraction in Color Retinal Fundus Images with Enhancement Filtering and Unsupervised Classification. J. Healthc. Eng. 2017, 2017, 4897258. [Google Scholar] [CrossRef]
  21. Zhou, L.; Rzeszotarski, M.S.; Singerman, L.J.; Chokreff, J.M. The detection and quantification of retinopathy using digital angiograms. IEEE Trans. Med. Imaging 1994, 13, 619–626. [Google Scholar] [CrossRef]
  22. Fraz, M.; Barman, S.; Remagnino, P.; Hoppe, A.; Basit, A.; Uyyanonvara, B.; Rudnicka, A.; Owen, C. An approach to localize the retinal blood vessels using bit planes and centerline detection. Comput. Methods Programs Biomed. 2012, 108, 600–616. [Google Scholar] [CrossRef]
  23. Jiang, X.; Mojon, D. Adaptive local thresholding by verification-based multi threshold probing with application to vessel detection in retinal images. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 131–137. [Google Scholar] [CrossRef] [Green Version]
  24. Brainard, D.H.; Wandell, B.A. Analysis of the retinex theory of color vision. J. Opt. Soc. Am. A 1986, 3, 1651–1661. [Google Scholar] [CrossRef] [PubMed]
  25. Jobson, D.; Rahman, Z.; Woodell, G. A multiscale retinex for bridging the gap between color images and the human observation of scenes. IEEE Trans. Image Process. 1997, 6, 965–976. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  26. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef] [Green Version]
  27. Fuqi, L.; Xiaomin, L. Multi-level threshold image segmentation algorithm based on particle swarm optimization and fuzzy entropy. Appl. Res. Comput. 2019, 36, 2856–2860. [Google Scholar] [CrossRef]
  28. Hoover, A.; Kouznetsova, V.; Goldbaum, M. Locating Blood Vessels in Retinal Images by Piecewise Threshold Probing of a Matched Filter Response. IEEE Trans. Med. Imaging 2000, 19, 203–210. [Google Scholar] [CrossRef] [Green Version]
  29. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [Green Version]
  30. Fan, D.P.; Cheng, M.M.; Liu, Y.; Li, T.; Borji, A. Structure-Measure: A New Way to Evaluate Foreground Maps. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017. [Google Scholar]
  31. Dasgupta, A.; Singh, S. A Fully Convolutional Neural Network Based Structured Prediction Approach towards the Retinal Vessel Segmentation. In Proceedings of the 14th International Symposium on Biomedical Imaging (ISBI), Melbourne, Australia, 18–21 April 2017; pp. 248–251. [Google Scholar]
  32. Yang, Y.; Shao, F.; Fu, Z.; Fu, R. Discriminative dictionary learning for retinal vessel segmentation using fusion of multiple features. Signal Image Video Process. 2019, 13, 1529–1537. [Google Scholar] [CrossRef]
  33. Adapa, D.; Raj, A.N.J.; Alisetti, S.N.; Zhuang, Z.; Naik, G. A supervised blood vessel segmentation technique for digital Fundus images using Zernike Moment based features. PLoS ONE 2020, 15, e0229831. [Google Scholar] [CrossRef]
  34. Biswal, B.; Pooja, T.; Subrahmanyam, N.B. Robust retinal blood vessel segmentation using line detectors with multiple masks. IET Image Process. 2018, 12, 389–399. [Google Scholar] [CrossRef]
  35. Ben Abdallah, M.; Azar, A.T.; Guedri, H.; Malek, J.; Belmabrouk, H. Noise-estimation-based anisotropic diffusion approach for retinal blood vessel segmentation. Neural Comput. Appl. 2018, 29, 159–180. [Google Scholar] [CrossRef]
  36. Roy, S.; Mitra, A.; Roy, S.; Setua, S.K. Blood vessel segmentation of retinal image using Clifford matched filter and Clifford convolution. Multimed. Tools Appl. 2019, 78, 34839–34865. [Google Scholar] [CrossRef]
Figure 1. The flow chart of the proposed algorithm for vessel extraction.
Figure 2. Multiscale joint Optimization strategy. (a) Original color image; (b) Pre-processing result; (c) Multi-scale filtering; (d) OTSU image segmentation based on PSO (OTSU-PSO); (e) Post-processing.
Figure 3. Retinal vessel information extraction. (a) Original image; (b) $\sigma_1 = 1.9$; (c) $\sigma_2 = 0.5$; (d) $\sigma_3 = 0.13$.
Figure 4. Comparison of multi-scale and single-scale vascular extraction. (a) Original image; (b) $\sigma_1 = 1.9$; (c) detail amplification of $\sigma_1 = 1.9$; (d) $\sigma_3 = 0.13$; (e) detail amplification of $\sigma_3 = 0.13$; (f) multi-scale fusion vascular extraction image; (g) detail amplification of the multi-scale result.
Figure 5. Comparison of multi-threshold and single-threshold segmentation results. (a) Multi-scale matched filtering figure; (b) Detail amplification of Multi-scale matched filtering; (c) Single-threshold segmentation figure; (d) Three-threshold segmentation figure; (e) Detail amplification of Single-threshold segmentation; (f) Detail amplification of Three-threshold segmentation.
Figure 6. Comparison results of the proposed algorithm with that of experts on DRIVE 01_test. (a) DRIVE image; (b) Expert 1 segmentation result; (c) Expert 2 segmentation result; (d) Segmentation results of the proposed method; (e) Difference between segmentation result of proposed method and expert 1 segmentation result; (f) Difference between segmentation result of proposed method and expert 2 segmentation result.
Figure 7. Comparison results of the proposed algorithm with that of experts on STARE im0255. (a) STARE image; (b) Expert 1 segmentation result; (c) Expert 2 segmentation result; (d) Segmentation results of the proposed method; (e) Difference between segmentation result of proposed method and expert 1 segmentation result; (f) Difference between segmentation result of proposed method and expert 2 segmentation result.
Figure 8. Blood vessel segmentation in retinal images using the proposed method. (a) Original image; (b) The second expert; (c) M1; (d) M2; (e) M3; (f) Proposed algorithm.
Table 1. PSO parameter settings.

| Parameter | Value |
|---|---|
| Population size ($N$) | 40 |
| Inertia weight ($w$) | 0.5 |
| Learning constants ($c_1$, $c_2$) | $c_1 = c_2 = 2$ |
| Max. iterations ($M$) | 20 |
| Initial pulse rate ($r_1$, $r_2$) | X |

X: not a parameter value ($r_1$ and $r_2$ are drawn at random each iteration).
Table 2. OTSU-PSO algorithm.

Input: number of iterations M, population size N, dimension D.
Output: the optimal threshold combination (gbest_position(i), where i is the threshold index).
Step 1: Initialize the velocity and position of the particles, the individual extrema pbest_i and the global extremum gbest.
Step 2: Calculate the fitness value of each particle using Equation (14) and update the individual extrema pbest_i and the global extremum gbest.
Step 3: Update the velocity and position of each particle according to Equations (15) and (16).
Step 4: If the iteration stop condition is satisfied, the algorithm ends; otherwise, return to Step 2 and continue iterating until the optimal solution is found.
Table 3. Test results of the proposed algorithm on the DRIVE and STARE datasets.

| DRIVE | Acc | Se | Sp | STARE | Acc | Se | Sp |
|---|---|---|---|---|---|---|---|
| 01_test | 0.9404 | 0.8938 | 0.9450 | im0001 | 0.9424 | 0.6506 | 0.9656 |
| 02_test | 0.9567 | 0.7809 | 0.9768 | im0002 | 0.9387 | 0.5427 | 0.9650 |
| 03_test | 0.9401 | 0.7380 | 0.9625 | im0003 | 0.9572 | 0.6199 | 0.9829 |
| 04_test | 0.9535 | 0.7419 | 0.9749 | im0004 | 0.9426 | 0.8315 | 0.9455 |
| 05_test | 0.9545 | 0.7281 | 0.9779 | im0005 | 0.9475 | 0.7248 | 0.9680 |
| 06_test | 0.9445 | 0.7010 | 0.9707 | im0044 | 0.9681 | 0.7815 | 0.9681 |
| 07_test | 0.9512 | 0.7132 | 0.9751 | im0077 | 0.9596 | 0.7697 | 0.9748 |
| 08_test | 0.9500 | 0.7047 | 0.9730 | im0081 | 0.9549 | 0.6970 | 0.9757 |
| 09_test | 0.9559 | 0.7082 | 0.9777 | im0082 | 0.9696 | 0.8445 | 0.9790 |
| 10_test | 0.9560 | 0.7675 | 0.9729 | im0139 | 0.9541 | 0.7254 | 0.9731 |
| 11_test | 0.9487 | 0.7548 | 0.9678 | im0162 | 0.9669 | 0.7477 | 0.9852 |
| 12_test | 0.9553 | 0.7551 | 0.9742 | im0163 | 0.9745 | 0.8564 | 0.9838 |
| 13_test | 0.9537 | 0.6764 | 0.9837 | im0235 | 0.9660 | 0.8739 | 0.9733 |
| 14_test | 0.9527 | 0.7947 | 0.9666 | im0236 | 0.9677 | 0.8957 | 0.9734 |
| 15_test | 0.9067 | 0.8514 | 0.9110 | im0239 | 0.9571 | 0.9034 | 0.9601 |
| 16_test | 0.9617 | 0.7334 | 0.9844 | im0240 | 0.9323 | 0.9248 | 0.9326 |
| 17_test | 0.9597 | 0.6609 | 0.9872 | im0255 | 0.9675 | 0.8120 | 0.9832 |
| 18_test | 0.9655 | 0.7673 | 0.9826 | im0291 | 0.9706 | 0.7900 | 0.9775 |
| 19_test | 0.9580 | 0.8813 | 0.9650 | im0319 | 0.9707 | 0.7520 | 0.9769 |
| 20_test | 0.9628 | 0.8016 | 0.9756 | im0324 | 0.9496 | 0.7910 | 0.9542 |
| Mean | 0.9514 | 0.7577 | 0.9702 | Mean | 0.9579 | 0.7762 | 0.9699 |
Table 4. Performance metrics of the proposed method.

| Dataset | Comparison | SSIM | S-Measure |
|---|---|---|---|
| DRIVE | Proposed vs. Expert 1 | 0.7385 | 0.7982 |
| DRIVE | Proposed vs. Expert 2 | 0.6220 | 0.8069 |
| DRIVE | Expert 1 vs. Expert 2 | 0.5142 | 0.8366 |
| STARE | Proposed vs. Expert 1 | 0.7636 | 0.7986 |
| STARE | Proposed vs. Expert 2 | 0.7207 | 0.7708 |
| STARE | Expert 1 vs. Expert 2 | 0.7122 | 0.7793 |
Table 5. Comparison of the proposed method and other methods on the DRIVE and STARE datasets.

| Type | Method | Year | Acc (DRIVE) | Se (DRIVE) | Sp (DRIVE) | Acc (STARE) | Se (STARE) | Sp (STARE) |
|---|---|---|---|---|---|---|---|---|
| Supervised | Li et al. [7] | 2016 | 0.9527 | 0.7569 | 0.9816 | 0.9628 | 0.7726 | 0.9844 |
| Supervised | Dasgupta et al. [31] | 2016 | 0.9533 | 0.7569 | 0.9792 | - | - | - |
| Supervised | Yan et al. [8] | 2018 | 0.9542 | 0.7653 | 0.9801 | 0.9612 | 0.7581 | 0.9846 |
| Supervised | Yang et al. [32] | 2019 | 0.9421 | 0.7560 | 0.9696 | 0.9477 | 0.7202 | 0.9733 |
| Supervised | Adapa et al. [33] | 2020 | 0.9450 | 0.6994 | 0.9811 | 0.9486 | 0.6298 | 0.9839 |
| Unsupervised | Biswal et al. [34] | 2018 | 0.9545 | 0.7100 | 0.9700 | 0.9495 | 0.7000 | 0.9700 |
| Unsupervised | Ben et al. [35] | 2018 | 0.9389 | 0.6887 | 0.9765 | 0.9388 | 0.6801 | 0.9711 |
| Unsupervised | Wang et al. [18] | 2019 | 0.9382 | 0.5686 | 0.9926 | 0.9460 | 0.6378 | 0.9815 |
| Unsupervised | Roy et al. [36] | 2019 | 0.9295 | 0.4392 | 0.9622 | 0.9488 | 0.4317 | 0.9718 |
| Unsupervised | Yuan et al. [17] | 2020 | 0.9500 | 0.7100 | 0.9700 | - | - | - |
| | **Proposed** | **2021** | **0.9572** | **0.7798** | **0.9758** | **0.9579** | **0.7762** | **0.9699** |