Article

A Novel Single-Sample Retinal Vessel Segmentation Method Based on Grey Relational Analysis

School of Information Science and Technology, Nantong University, Nantong 226019, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(13), 4326; https://doi.org/10.3390/s24134326
Submission received: 28 May 2024 / Revised: 23 June 2024 / Accepted: 2 July 2024 / Published: 3 July 2024
(This article belongs to the Section Sensing and Imaging)

Abstract

Accurate segmentation of retinal vessels is of great significance for computer-aided diagnosis and treatment of many diseases. Because retinal vessel samples are limited and labeled samples are scarce, and because grey theory excels at problems of “few data, poor information”, this paper proposes a novel grey relational method for retinal vessel segmentation. First, a noise-adaptive discrimination filtering algorithm based on grey relational analysis (NADF-GRA) is designed to enhance the image. Second, a threshold segmentation model based on grey relational analysis (TS-GRA) is designed to segment the enhanced vessel image. Finally, a post-processing stage involving hole filling and removal of isolated pixels is applied to obtain the final segmentation output. The performance of the proposed method is evaluated with multiple measurement metrics on the publicly available DRIVE, STARE, and HRF retinal image datasets. Experimental analysis shows that the average accuracy and specificity are 96.03% and 98.51% on the DRIVE dataset and 95.46% and 97.85% on the STARE dataset. Precision, F1-score, and Jaccard index on the HRF dataset all reach high levels. The proposed method is superior to current mainstream methods.

1. Introduction

In recent years, the prevalence of diseases such as diabetic retinopathy and hypertension has been on the rise due to irregular lifestyles and unhealthy eating habits. According to the International Diabetes Federation’s latest estimates for 2021, approximately 537 million people worldwide are living with diabetes [1]. In most cases, retinal diseases are asymptomatic until they reach an advanced stage. Timely detection and treatment are therefore essential to prevent irreversible blindness, and accurate segmentation of retinal blood vessels is a key task in diagnosing these diseases.
The eyeball wall is mainly composed of three layers: the inner, middle, and outer layers. The innermost layer is the retina, which contains many light-sensitive cells responsible for photoreception, converting light signals into visual signals that are transmitted to the brain’s visual center to form images. The retina is the most sensitive area for neural information transmission. Its structure includes the optic disc, macula, fovea, and retinal blood vessels, as shown in Figure 1. The retinal vascular tree consists of the central artery, the veins, and their branches; abnormalities may include microaneurysms, hemorrhages, exudates, and cotton wool spots. The fundus is the only place where arteries, veins, and capillaries can be directly observed, reflecting the dynamics and health of the entire body’s blood circulation. Moreover, the retina is the only part of the human body whose anatomical structure can be visualized directly and non-invasively. Retinal vessel segmentation technology provides important information such as the shape, thickness, and curvature of retinal blood vessels.
A challenging problem in retinal vessel segmentation is to establish a general segmentation algorithm that is robust to the various types of noise that may occur. Broadly, algorithms can be categorized into unsupervised and supervised methods. With the development of deep learning, many new research results fall into the supervised category. Tang et al. [2] proposed a multi-proportion channel and U-Net ensemble model for blood vessel extraction. Guo [3] introduced a high-resolution hierarchical network to identify and classify features. Huang et al. [4] proposed a cascade self-attention U-Net for accurate segmentation of retinal blood vessels. Xie et al. [5] proposed a method that combines optical coherence tomography angiography (OCTA) with deep learning to achieve high-precision automatic segmentation and parameter extraction of the retinal microvasculature and the foveal avascular zone (FAZ); using these parameters, they explored their potential value in predicting Alzheimer’s disease (AD) and mild cognitive impairment (MCI). Meng et al. [6] proposed a Dual Adaptive Graph Convolutional Network (DAGCN) for weakly and semi-supervised segmentation of the optic disc and cup, leveraging geometric associations and dual consistency regularization for enhanced performance in glaucoma assessment; the model performs excellently on OD&OC segmentation and vCDR estimation, which is of significant importance for glaucoma screening and evaluation. Hao et al. [7] proposed a Voting-based Adaptive Feature Fusion multi-task network (VAFF-Net) for the simultaneous segmentation, detection, and classification of retinal structures in OCTA images, enhancing the precision of ophthalmic diagnostics. Xia et al. [8] proposed an edge-reinforced network (ER-Net) for the segmentation of 3D vessel-like structures in medical images, incorporating a reverse edge attention module (REAM), a feature selection module (FSM), and an edge-reinforced loss function to delineate crisp edges and improve segmentation accuracy. Zhao et al. [9] proposed a novel 2-D/3-D symmetry filter for the enhancement of vascular structures in multiple modality images, leveraging a weighted geometric mean to address imaging artifacts, contrast variations, and noise, thereby improving the detection and segmentation of blood vessels. Ma et al. [10] introduced a split-based coarse-to-fine vessel segmentation network for OCTA images, termed OCTA-Net, which employs a ResNeSt backbone and a two-stage training approach to segment both thick and thin retinal vessels effectively. Despite their effectiveness, these methods often require extensive training datasets, which are costly and laborious to build. Moreover, retinal blood vessel samples are limited and labeled samples are even scarcer; annotating vessels requires medical expertise and is usually performed by professional medical personnel, a time-consuming and laborious process.
Single-sample retinal vessel segmentation therefore remains of significant research value. This unsupervised approach extracts vessel and background features from fundus images without relying on label information, identifying their relationship to achieve vessel segmentation. Many classical methods exist for vessel segmentation, such as traditional matched filtering, vessel tracking, multiscale segmentation, morphology-based methods, and thresholding methods. Zhang et al. [11] proposed MF-FDOG, a blood vessel extraction method combining matched filtering with the first-order derivative of Gaussian; this method does not guarantee good vascular connectivity. Krause et al. [12] used a local Radon transform method, which has a lower computational cost but performs poorly in segmenting small blood vessels. Zhao et al. [13] proposed a segmentation method based on level sets and region growing; because the active contours tend to move towards diseased areas, segmentation of some abnormal retinal images is inaccurate.
This study focuses on single retinal vessel images, considering the limited sample size and the associated uncertainty. To address the small-sample problem and enhance the overall specificity and accuracy of the algorithm, grey theory is introduced. These improvements are not merely descriptive: they are substantiated with theoretical foundations, and their effectiveness is demonstrated through experiments. The refinements aim to significantly enhance the robustness and reliability of retinal vessel analysis in medical imaging applications, and the study provides a new theoretical framework and algorithmic tool for small-sample image analysis. The main contributions of this paper are as follows:
(1) In order to effectively reduce noise interference and strengthen the preprocessing stage for accurate retinal blood vessel segmentation, a noise-adaptive discrimination filtering algorithm based on grey relational analysis is proposed. It adaptively adjusts the filtering intensity through the grey relational degree to distinguish noise from real features, ensuring that key vascular structures are preserved without being obscured by noise. This approach retains more critical detail while achieving an improved filtering effect.
(2) A threshold segmentation model based on grey relational analysis is proposed to accurately localize vessels along their direction. This model not only detects retinal blood vessels effectively but also reduces, to a certain extent, the interference of abnormal retinal noise signals. The study discusses in depth the theoretical underpinning of the model, including how grey relational analysis quantifies the differences between blood vessels and the background, and evaluates the model’s performance in actual image segmentation.

2. Proposed Method

Grey system theory was originally proposed by Deng Julong at Huazhong University of Science and Technology in 1980 [14]. The theory was developed to address the problem of “few data, poor information”. In scientific research, “black” conventionally means that information is completely unknown and “white” that it is completely known; “grey” means that part of the information is unknown and part is known, so such information systems are called “grey systems”. Grey relational analysis is a mathematical theory that quantitatively infers unknown information by referring to known information; its essence is to judge the degree of correlation between sequences according to their similarity. Compared with traditional methods, the grey relational model has the following advantages: it does not require a large number of samples; the specific statistical laws of the system need not be known; and the independence of individual factors need not be considered. In recent years, grey relational analysis has gradually been applied to image processing with remarkable results. Ma et al. [15] integrated Deng’s traditional relational coefficient into an image edge detection algorithm, effectively merging grey relational analysis with edge detection and yielding optimal results experimentally. Zhen et al. [16] combined grey relational analysis with genetic algorithms to segment target regions effectively, demonstrating a certain noise resistance. Li et al. [17] proposed an image edge detection algorithm based on grey simplified B-mode relational analysis and realized optimal threshold selection through iteration; the algorithm adapts well to images with drastic grayscale changes, producing clear and accurate edges. These algorithms make reasonable use of various forms of grey relational analysis for image edge detection, effectively detecting texture edges.
In grey relational analysis, the most classic and widely used measure is the traditional Deng relational degree. First, to standardize the dimensions of the sequences and enhance their comparability, the data are generally normalized. Next, the system characteristic sequence (reference sequence) and the related factor sequences (comparison sequences) are established. Finally, the grey relational coefficients and the grey relational degree are computed.
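As a concrete illustration, the sketch below computes the classical Deng relational degree between two sequences in Python. The function name and the distinguishing coefficient ρ = 0.5 follow textbook convention and are not taken from this paper's implementation; the sequences are assumed to be already normalized.

```python
import numpy as np

def deng_relational_degree(x0, x1, rho=0.5):
    """Classical Deng grey relational degree between a reference
    sequence x0 and a comparison sequence x1 (assumed normalized)."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.asarray(x1, dtype=float)
    delta = np.abs(x0 - x1)                  # difference sequence
    dmin, dmax = delta.min(), delta.max()
    if dmax == 0:                            # identical sequences
        return 1.0
    # relational coefficient at each point, then average to the degree
    coeff = (dmin + rho * dmax) / (delta + rho * dmax)
    return coeff.mean()
```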
Considering that labeled samples of retinal blood vessel images are few and that uncertain noise and other factors are present, this paper exploits the strengths of grey systems on uncertainty and small-sample problems to propose a novel single-sample retinal vessel segmentation method based on grey relational analysis. Figure 2 illustrates the flowchart of the proposed method. First, because noise is discrete, a traditional grey relational filtering algorithm cannot treat all pixels uniformly without losing genuine pixel grey levels. To avoid false filtering, a novel noise-adaptive discrimination filtering algorithm based on grey relational analysis is proposed, which achieves a good filtering effect. Second, a threshold segmentation model based on grey relational coefficients is proposed to improve the traditional grey relational degree model: it avoids the pathological case in which the denominator of the Deng relational degree may be zero, and it eliminates the normalization step of the traditional Deng relational degree calculation, enhancing the stability and practicality of the model and achieving good results.

2.1. Noise-Adaptive Discrimination Filtering Algorithm Based on Grey Relational Analysis (NADF-GRA)

Since image noise typically consists of high-frequency components, the grey value of a noisy pixel tends to lie at or near the extreme values within the filtering window. If the image relational coefficient between the center pixel of the window and the median value of the neighborhood is small, the center pixel deviates from its neighborhood and can be identified as noise; otherwise, it is considered a normal pixel. In the final filtering, only the normal pixels recorded in the flag matrix are used for grey relational weighted mean filtering. If the flag matrix marks every pixel in the window as noise, the filtering window is extended. Because the valid pixels in the expanded window all lie away from the center pixel, nonlinear filtering is appropriate at this point, and a simple median filter suffices to achieve a good filtering effect. The filtering algorithm commences by distinguishing noise from normal pixels. This step is critical, as it lays the foundation for the subsequent application of our weighted averaging technique. By applying weights, normal pixels that are critical to vascular structure are given higher significance, thus preserving the integrity of retinal blood vessels. The adaptive nature of the weighting scheme allows effective suppression of the noise identified during the initial filtering stage without compromising vascular features, and the dynamic weighting of pixels ensures robust performance across images with varying noise characteristics, maintaining consistent segmentation results.
The algorithm can be divided into three stages. First, the discrimination stage uses grey relational coefficients for noise determination, recording which pixels are normal and which are noise. Second, the adaptive adjustment stage applies weighted averaging filtering based on the recorded normal pixels, thereby eliminating the interference of noise pixels; if there are no normal pixels within the filtering window, the window is expanded and simple median filtering is applied. Lastly, a calibration and enhancement stage is applied.
With $f(i,j)$ as the center pixel of a 3 × 3 filtering window, the median pixel value in the neighborhood is selected as the reference sequence, and the nine pixel values in the neighborhood form the comparison sequence:
$$X_0 = \{x_0(1), \ldots, x_0(9)\} = \{v, \ldots, v\}$$
$$X_1 = \{f(i-1, j-1), \ldots, f(i+1, j+1)\}$$
where $X_0$ is the reference sequence, $X_1$ is the comparison sequence, $x_0(1), \ldots, x_0(9)$ are the nine values of the reference sequence, $v$ is the median pixel value in the neighborhood, and $f(i-1,j-1), \ldots, f(i+1,j+1)$ are the nine pixel values of the comparison sequence.
Calculate the image grey relational coefficients between the median value of the filtering window and each pixel value in the neighborhood:
$$\gamma_k = \gamma(x_0(k), x_1(k)) = \frac{1}{1 + \left| x_0(k) - x_1(k) \right|}$$
where $\gamma_k$ is the image relational coefficient, $x_0(k)$ is the median value in the neighborhood, and $x_1(k)$ is each pixel value in the neighborhood.
The image relational coefficients calculated above are sorted in ascending order to obtain the grey image relational order.
Test whether the grey image relational coefficient of the center pixel of the filtering window ranks among the first three in the relational order. If it does, the grey value deviates from the neighborhood median: set $T(i,j) = 0$ and mark the pixel as noise; otherwise, set $T(i,j) = 1$ and mark it as a normal pixel.
$$T(i,j) = \begin{cases} 0, & \gamma_5 = \varepsilon_1 \ \text{or} \ \gamma_5 = \varepsilon_2 \ \text{or} \ \gamma_5 = \varepsilon_3 \\ 1, & \text{otherwise} \end{cases}$$
where $T(i,j)$ is the flag matrix, $\gamma_5$ is the grey image relational coefficient of the center pixel, and $\varepsilon_1, \varepsilon_2, \varepsilon_3$ are the three smallest coefficients in the relational order.
Each pixel in the image is visited in turn, from top to bottom and from left to right, yielding a matrix $T$ of elements 0 and 1 that marks the image noise information.
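The detection stage can be sketched in Python as follows. This is an illustrative re-implementation under the rules stated above, not the authors' MATLAB code; the function and variable names are assumptions.

```python
import numpy as np

def noise_flag_matrix(img):
    """NADF-GRA detection sketch: flag a pixel as noise (0) when its grey
    relational coefficient to the 3x3 neighborhood median ranks among the
    three smallest in the window; otherwise mark it normal (1)."""
    f = img.astype(np.float64)
    h, w = f.shape
    T = np.ones((h, w), dtype=np.uint8)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = f[i - 1:i + 2, j - 1:j + 2].ravel()  # comparison sequence
            v = np.median(window)                          # reference value
            gamma = 1.0 / (1.0 + np.abs(v - window))       # relational coefficients
            # the center pixel is element 4 of the flattened 3x3 window
            if gamma[4] <= np.sort(gamma)[2]:              # among three smallest
                T[i, j] = 0
    return T
```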
The above constitutes the noise-detection stage of the image; the next step is the noise-replacement stage. Starting from the upper left corner of the image, check whether the element of the flag matrix $T(i,j)$ corresponding to the center pixel $f(i,j)$ of the filtering window equals 1. If it equals 1, the current pixel is a normal pixel: its value remains unchanged, and the loop proceeds to the next pixel. If it equals 0, the corresponding point in the image is a noise point and should be filtered. In that case, count the number of normal pixels in the 3 × 3 window centered on $f(i,j)$, denoted $C$. If $C > 0$, the $C$ normal pixels are taken as the comparison sequence and the neighborhood median as the reference sequence; the image relational coefficients are calculated, and grey relational weighted mean filtering is applied:
$$f(i,j) = \frac{\sum_{h=1}^{C} \gamma_h \, f_h}{\sum_{h=1}^{C} \gamma_h}$$
where $f(i,j)$ is the center pixel, $C$ is the number of normal pixels in the neighborhood, $\gamma_h$ is the image relational coefficient, and $f_h$ is a normal pixel in the neighborhood. If $C = 0$, all pixels in the 3 × 3 window are polluted by noise, indicating a large noise block. In this case the filtering window is expanded to 5 × 5; because the outermost non-noisy pixels in the expanded window are already far from the center pixel, a simple median filter suffices to assign the center pixel. Throughout the program loop, each pixel is processed in traversal order from top to bottom and from left to right.
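Continuing the sketch, the replacement stage below applies the weighted mean above to flagged pixels and falls back to a 5 × 5 median when the whole 3 × 3 window is noise. Again, this is an illustrative reconstruction under stated assumptions, not the original code.

```python
import numpy as np

def nadf_gra_replace(img, T):
    """NADF-GRA replacement sketch: keep normal pixels, replace flagged
    pixels by the grey relational weighted mean of their normal neighbors,
    and fall back to a 5x5 median filter for all-noise windows."""
    f = img.astype(np.float64)
    out = f.copy()
    h, w = f.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if T[i, j] == 1:
                continue                                   # normal pixel: unchanged
            win = f[i - 1:i + 2, j - 1:j + 2]
            normal = win[T[i - 1:i + 2, j - 1:j + 2] == 1]  # the C normal pixels
            if normal.size > 0:                            # C > 0: weighted mean
                gamma = 1.0 / (1.0 + np.abs(np.median(win) - normal))
                out[i, j] = np.sum(gamma * normal) / np.sum(gamma)
            else:                                          # C = 0: large noise block
                out[i, j] = np.median(f[max(i - 2, 0):i + 3,
                                        max(j - 2, 0):j + 3])
    return out
```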
Finally, the algorithm enters the adaptive correction and enhancement stage: contrast-limited adaptive histogram equalization is used to effectively improve image contrast, and the similarity between image regions and blood vessels is evaluated at different scales to enhance the detection accuracy of vascular structures.

2.2. Threshold Segmentation Model Based on Grey Relational Analysis (TS-GRA)

The basic idea of grey relational analysis is to measure the similarity between the reference sequence and the comparison sequence by the grey relational degree. Image edges generally exhibit grey-level mutations to some extent, and the grey levels of these mutated pixels generally remain continuous along a specific direction or texture pattern; the grey relational degree can reflect the degree of such mutations. When detecting image edges, the mean value of the pixels in the neighborhood is set as the reference sequence. If the comparison sequence lies far from the reference sequence, an edge passes through the neighborhood, and the grey relational degree will be small; conversely, if no edge passes through, the grey relational degree will be large. By setting a threshold, edges can be found in the image.
Let the current center pixel of a 3 × 3 neighborhood window of the image be $f(i,j)$ ($i = 2, 3, \ldots, M-1$; $j = 2, 3, \ldots, N-1$). First calculate the mean value of all pixels in the window, and then set the reference sequence and comparison sequence.
The difference sequence between the reference sequence and the comparison sequence can be obtained as follows:
$$\Delta(u) = \left| x_0(u) - x_1(u) \right|, \quad u = 1, 2, \ldots, 9$$
where $\Delta(u)$ is the difference sequence, $x_0(u)$ is the reference sequence, and $x_1(u)$ is the comparison sequence.
Calculate the grey image relational coefficient:
$$\gamma_{01}(u) = \frac{1}{1 + \Delta(u)}$$
where $\gamma_{01}(u)$ is the grey image relational coefficient at each point, and $\Delta(u)$ is the difference sequence.
Calculate the grey image relational degree of the center pixel $f(i,j)$ of the neighborhood window:
$$\gamma_{01}(i,j) = \frac{1}{9} \sum_{u=1}^{9} \gamma_{01}(u)$$
where $\gamma_{01}(i,j)$ is the grey image relational degree of the center pixel of the neighborhood, and $\gamma_{01}(u)$ is the relational coefficient of each point.
The first four steps start at the top left corner of the image and proceed from left to right and top to bottom, storing the grey relational degree of each pixel, taken as the center of its neighborhood window, in a table until the last pixel in the bottom right corner has been traversed.
Find the minimum and maximum grey relational degrees over all pixels, and establish a threshold between them to decide whether the current point is an edge point or a non-edge point, that is:
$$\mathrm{Tag}(i,j) = \begin{cases} 1, & \gamma_{01}(i,j) < \theta \\ 0, & \text{otherwise} \end{cases}$$
where $\theta$ is the threshold distinguishing edge points from non-edge points, and $\mathrm{Tag}(i,j)$ is the matrix recording whether each point is an edge point.
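A compact sketch of TS-GRA under these definitions is given below. The threshold `theta` must lie between the minimum and maximum relational degrees; its value is left to the caller here rather than fixed by the paper.

```python
import numpy as np

def ts_gra_segment(img, theta):
    """TS-GRA sketch: compute the grey relational degree of each interior
    pixel against its 3x3 neighborhood mean, then tag pixels whose degree
    falls below theta as edge/vessel points (Tag = 1)."""
    f = img.astype(np.float64)
    h, w = f.shape
    degree = np.ones((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            win = f[i - 1:i + 2, j - 1:j + 2].ravel()  # comparison sequence
            delta = np.abs(win.mean() - win)           # difference sequence
            degree[i, j] = np.mean(1.0 / (1.0 + delta))
    return (degree < theta).astype(np.uint8)
```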
The final segmentation results are obtained in a post-processing stage using morphological operations. A disc-shaped structuring element is used for closing operations to fill holes in the vessels and connect broken vessels, thereby improving vascular connectivity. Small connected regions are then removed, eliminating falsely detected isolated pixels and helping suppress noise and pseudo-targets.

2.3. Contrast Adaptive Histogram Equalization (CLAHE)

Intensity inhomogeneity usually arises during retinal image acquisition, so image enhancement or inhomogeneity correction is necessary to eliminate the effects of different lighting conditions. Histogram equalization, a global enhancement method that considers the color frequency and intensity of pixels and redistributes these properties, is effective for images whose color and intensity are concentrated in a narrow band; however, it cannot handle images whose colors and intensities span the entire range of display devices. Another widely adopted global enhancement method is gamma correction, which applies a nonlinear function to the pixel values of the input image to adjust brightness, making dark areas brighter and bright areas darker to improve contrast and detail; however, the best choice of the gamma parameter depends on the image under consideration. Contrast-limited adaptive histogram equalization (CLAHE) divides the image into several small, equally sized regions and processes each individually, increasing the contrast of each small region so that the histogram of the output corresponds to the histogram specified by the distribution parameters. Adjacent regions are then combined using bilinear interpolation, which suppresses artificially induced boundaries. By limiting the contrast within each uniform area, uneven regions in retinal image analysis can be avoided while also preventing excessive noise amplification.
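For reference, a minimal CLAHE step on the green channel might look as follows with scikit-image; the file name, `kernel_size`, and `clip_limit` are illustrative assumptions, not values reported in the paper.

```python
from skimage import exposure, io

rgb = io.imread("fundus.png")   # RGB fundus image (illustrative path)
green = rgb[:, :, 1]            # green channel: best vessel contrast

# contrast-limited adaptive histogram equalization on small tiles,
# recombined internally with bilinear interpolation
enhanced = exposure.equalize_adapthist(green, kernel_size=64, clip_limit=0.02)
```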

2.4. Frangi Filtering

The Frangi filter is a filter used to enhance vessel structures, computed from the eigenvalues and eigenvectors of the Hessian matrix. Its main principle is to compute the Hessian matrix at different scales and evaluate the similarity between an image region and a vessel from the eigenvalues and eigenvectors. In addition, the Frangi filter takes the orientation of the eigenvectors into account: in vascular regions, the eigenvector direction is usually aligned with the vessel direction, while in non-vascular regions it is more random. Therefore, by computing the direction angle of the eigenvectors, the Frangi filter can further improve the detection accuracy of vascular structures. Figure 3 shows the image enhanced by the noise-adaptive discrimination filtering algorithm based on grey relational analysis, where the blood vessels are distinguishable and the vessel-background contrast is improved. Figure 3a–e display the intermediate images of fundus image preprocessing.
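A multi-scale Frangi enhancement step can be sketched with scikit-image as below. Here `enhanced` is assumed to be the CLAHE output from the previous step, and the sigma range is an illustrative assumption spanning thin to thick vessels.

```python
import numpy as np
from skimage.filters import frangi

# Hessian eigenvalue-based vesselness evaluated at several scales (sigmas);
# black_ridges=True targets dark vessels on a brighter background
vesselness = frangi(enhanced, sigmas=np.arange(1.0, 5.5, 0.5),
                    black_ridges=True)
```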

2.5. Post-Processing

The final segmentation results are obtained through the post-processing stage using morphological operations. First, a disc-shaped structuring element is created for closing operations to fill holes in the vessels and connect broken vessels, thereby improving vascular connectivity. Subsequently, connected regions with fewer than 70 pixels are eliminated from the identified vessel regions, effectively removing erroneously detected isolated pixels and assisting in the removal of noise and pseudo-targets. Figure 4 shows a comparison of images from the DRIVE dataset before and after post-processing: (a) and (c) are the images before post-processing, and (b) and (d) are the images after post-processing. Enlarging the detail before post-processing shows several falsely detected pixels mixed with vessel pixels and a large number of gaps. The post-processing operation effectively connects the disconnected vessel pixels and removes the erroneous pixels, greatly improving the accuracy and connectivity of vessel segmentation.
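The morphological post-processing can be sketched as follows; the 70-pixel area threshold follows the text, while the disk radius is an assumed value.

```python
from skimage.morphology import binary_closing, disk, remove_small_objects

def postprocess(seg, radius=2, min_area=70):
    """Close small gaps in the binary vessel map with a disc-shaped
    structuring element, then drop connected components smaller than
    min_area pixels (70 per the text above)."""
    closed = binary_closing(seg.astype(bool), disk(radius))
    return remove_small_objects(closed, min_size=min_area)
```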

3. Experimental Results and Discussion

3.1. Dataset Introduction

The datasets used in this study are DRIVE, STARE, and HRF, all commonly used to evaluate retinal vessel segmentation in fundus images. The DRIVE dataset comes from a diabetes screening program in the Netherlands; for each image, the corresponding vessels were manually segmented. Its 40 images are divided equally into a training set and a test set; because the proposed algorithm is unsupervised, the training set is not required. The STARE dataset contains 397 digital fundus images, of which only 20 have gold-standard results manually segmented by two experts; half of these fundus images contain pathological signs of varying degrees, which poses a great challenge to the robustness and accuracy of the algorithm. The High-Resolution Fundus (HRF) image database has three sets of fundus images, healthy retinas, glaucoma-affected retinas, and diabetic retinopathy retinas, with 15 images per set and 45 in total. Each image has a resolution of 3504 × 2336 pixels and an 8-bit color depth per color plane, and a binary mask and human-segmented vessel ground truth are provided for each image.

3.2. Evaluation Indicators

The proposed method evaluates the performance of the segmented output image by comparing it with the corresponding gold standard. Sensitivity (Se), specificity (Sp), accuracy (Acc), and precision (Pr) were used to evaluate the validity of the algorithm. Accuracy is the ratio of the number of correctly segmented pixels to the number of pixels in the entire image and is regarded as one of the most widely accepted measures for quantifying segmentation results. Precision is the proportion of predicted positives that are actually positive. Another way to measure the model's performance is the F1-score, the harmonic mean of precision and recall; it strikes a balance between the two and is often used to assess a model, especially on unbalanced datasets. One more frequently used measure of the segmented result is the Jaccard coefficient (JC), which measures the percentage of overlap between the segmented output and the ground truth. These evaluation indicators are defined as follows:
$$Se = \frac{TP}{TP + FN}$$
$$Sp = \frac{TN}{TN + FP}$$
$$Acc = \frac{TP + TN}{TP + TN + FP + FN}$$
$$Pr = \frac{TP}{TP + FP}$$
$$F1\text{-}score = \frac{2 \times (Se \times Pr)}{Se + Pr} = \frac{2TP}{FP + FN + 2TP}$$
$$JC = \frac{TP}{TP + FP + FN}$$
where $TP$ is the number of true-positive samples, $TN$ the number of true-negative samples, $FP$ the number of false-positive samples, and $FN$ the number of false-negative samples.
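These definitions translate directly into code; the sketch below computes all six metrics from binary masks (1 = vessel) and is an illustrative helper, not the paper's evaluation script.

```python
import numpy as np

def evaluate(seg, gt):
    """Compute Se, Sp, Acc, Pr, F1-score, and JC from binary masks."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    tp = np.sum(seg & gt)      # vessel pixels correctly detected
    tn = np.sum(~seg & ~gt)    # background correctly rejected
    fp = np.sum(seg & ~gt)     # background mislabeled as vessel
    fn = np.sum(~seg & gt)     # vessel pixels missed
    return {
        "Se":  tp / (tp + fn),
        "Sp":  tn / (tn + fp),
        "Acc": (tp + tn) / (tp + tn + fp + fn),
        "Pr":  tp / (tp + fp),
        "F1":  2 * tp / (2 * tp + fp + fn),
        "JC":  tp / (tp + fp + fn),
    }
```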

3.3. Visual Results

The experiments were carried out on an Intel Core i7 processor with 8 GB of RAM running Windows 10, with the algorithm implemented in MATLAB R2016a. The algorithm processes each image in approximately 3.5 s for preprocessing, 0.39 s for segmentation, and 1.21 s for post-processing, so the entire process typically completes in about 5 s per image, demonstrating high efficiency compared with supervised algorithms. Figure 5 depicts the threshold variation map of the noise-adaptive discrimination filtering algorithm based on grey relational analysis (NADF-GRA). Each pixel in the map corresponds to the intensity of threshold variation within a local window of the original image. Warmer colors such as red indicate higher threshold variations, whereas cooler colors such as blue indicate lower variations. Typically, regions with higher threshold variations require larger filter sizes to maintain edge sharpness, while regions with smaller variations can use smaller filters to preserve more detail. Matching the threshold strategy to the characteristics of retinal images therefore amounts to selecting appropriate thresholds.
Figure 6 shows the threshold variation curve of the threshold segmentation model based on grey relational analysis (TS-GRA). This curve illustrates the relationship between pixel grayscale levels and cumulative probability within the region of interest (retina). The horizontal axis represents grayscale levels from 0 to 255, and the vertical axis represents the cumulative probability, which denotes the percentage of pixels below each grayscale level. The optimal threshold typically resides at the steepest point of the curve, where the transition from foreground (retina) to background (surrounding area) is most pronounced. This point signifies a threshold that maximizes the segmentation accuracy between the eye and its surroundings, facilitating precise segmentation tasks.
Figure 7 shows the visualized segmentation results of the proposed method on partial images from the DRIVE, STARE and HRF datasets.
To better illustrate the advantages of the proposed method, the segmentation results under the ground truth, the traditional gray-level co-occurrence matrix (GLCM) model and the novel gray-level co-occurrence analysis adaptive (GLCAA) model are compared. Figure 8 and Figure 9 present enlarged comparisons of different models’ segmentation results of partial images from the DRIVE and STARE datasets. Panels (a)–(d), respectively, depict the original input image, the ground truth, segmentation results under the traditional GLCM model, and segmentation results under the novel GLCAA model. From the figures, it can be observed that in the segmentation results based on the traditional GLCM model, many thin blood vessels are not detected. In contrast, the segmentation results obtained using the proposed method closely resemble the ground truth, with many subtle thin blood vessels being detected. Additionally, for the STARE dataset, most retinal images contain varying degrees of pathological features. The proposed method accurately detects both thick and thin blood vessels while also effectively removing much of the noise interference.

3.4. Comparative Analysis of Objective Results

To further analyze the performance of the algorithm, these performance measures were employed in this study. Although various performance indices appear in the literature, Acc, Se, and Sp are most often used for validation; we therefore present a comparative report of the proposed work and other algorithms based on these parameters. Table 1 shows the results and average values for the 20 images of DRIVE. Table 2 shows the results and average values for the 20 images of STARE. Table 3 shows the results on HRF. Experimental results show that the average accuracy of the proposed method on DRIVE and STARE reaches 96.03% and 95.46%, respectively, which is superior to most other methods. The mean specificity is 98.51% and 97.85%, respectively, the highest specificity on the DRIVE dataset. The lower sensitivity index, however, may be due to the post-processing, which misses some very fine blood vessels in the segmented images; these fine vessel structures highly overlap with the background and are therefore difficult to distinguish. Even with the image enhancement operation, these vessels show a disconnected structure in the segmentation results, causing them to be treated as noise and removed. Nevertheless, if the post-processing stage were omitted from the proposed framework, the sensitivity index might increase, but the accuracy would be compromised.
Table 4 and Table 5 compare the experimental results of different algorithms on the DRIVE and STARE datasets, respectively, and Table 6 ranks the Pr, F1-score, and JC of the proposed work against some of the methods. The tables compare the proposed method with supervised and unsupervised methods, as well as with the traditional grey relational degree algorithm. Although the proposed method belongs to the unsupervised class, the results show that it is also superior to some supervised algorithms. The tabular data show that the level set and region growing method of [13] segments some abnormal retinal images poorly, resulting in low accuracy. The improved matched filter methods of [18] have low accuracy because they use the same original filter parameters for all images, so the filter performance is uneven across images. The method of [19] achieves high sensitivity values on the datasets: the use of Mamdani (Type-2) fuzzy rules for edge detection allows the algorithm to handle uncertainty and ambiguity in the image, leading to more accurate identification of vessel edges, and the algorithm employs Green's formula to calculate and exclude microaneurysms and other small-area formations from the final image, reducing false positives. However, its accuracy is only 86.5%; this lower accuracy may be influenced by false positives, since the high sensitivity of the algorithm can cause minor grayscale variations to be interpreted as vessel edges, misclassifying non-vessel areas as vessels. The state-of-the-art supervised method of [20] also achieves high sensitivity values: MDUNet utilizes multi-scale feature fusion, employs Dense Blocks to extract rich low-level features, and maintains high-resolution feature maps through an HR Block, which helps preserve more spatial information and enhances sensitivity. However, the accuracy of the method proposed in this paper is higher than that of this method on the DRIVE dataset, and the accuracy of these methods is poor compared with the proposed framework. Since accuracy is a balanced measure of correctly identifying both vessel and background pixels, the proposed method is considered superior among all methods. In addition, compared with the traditional grey relational degree model, the proposed method improves all three evaluation indices: on the DRIVE dataset, accuracy, sensitivity, and specificity improve by 1.58%, 9.27%, and 0.06%, respectively; on the STARE dataset, they improve by 0.9%, 7.48%, and 0.19%, respectively.
Table 6 compares precision, F1-score, and JC with a few state-of-the-art approaches. The JC is the highest among the compared methods for all the databases. The noise-adaptive discrimination filtering algorithm (NADF-GRA) effectively reduces the impact of noise, and post-processing techniques such as hole filling and removal of isolated pixels enhance vessel connectivity; these are the reasons for the outstanding JC performance. Furthermore, since unsupervised methods do not rely on labeled data, they may better capture the intrinsic structure of the data. The Pr and F1-score rank second for both datasets. Table 7 shows the results of the ablation experiments on the NADF-GRA and TS-GRA modules conducted on DRIVE and STARE. The method using the traditional GLCM filtering algorithm and GLCM segmentation model achieves Acc, Se, and Sp of 0.9445, 0.5936, and 0.9845 on DRIVE, respectively; after incorporating the NADF-GRA and TS-GRA modules, these metrics improve to 0.9603, 0.6863, and 0.9851. On STARE, Acc, Se, and Sp improve from 0.9456, 0.5957, and 0.9766 to 0.9546, 0.6705, and 0.9785. The ablation experiments show that each proposed module improves the accuracy of retinal vessel segmentation and that the combination of the two performs best. These enhancements can be attributed to NADF-GRA's ability to suppress noise while preserving vital vascular details and to TS-GRA's precision in identifying vessel edges by leveraging grey relational analysis. On the STARE dataset, the method further demonstrates its robustness; the smaller sensitivity increase compared with DRIVE may be due to the more complex nature of the STARE images, which include a variety of pathological features that can obscure fine vessels.

4. Conclusions

In this paper, a novel grey relational model for retinal vessel segmentation is proposed. The motivation is that grey system theory was developed to solve problems of “few data, poor information”, while retinal blood vessel image samples are few and subject to uncertainty. The core of the method is to measure the similarity between the reference sequence and the comparison sequence by grey relational analysis. Because the distribution of retinal vessels is discontinuous and the system is uncertain, a new grey relational noise-adaptive discrimination filtering algorithm is proposed and achieves a good filtering effect. In addition, the grey-level mutations at vessel edges generally maintain continuity along a certain direction or texture pattern, which differs greatly from the grey-level mutations caused by noise; the grey relational degree is therefore used to segment the vessels, achieving more accurate results in which many fine vessels are also well detected. The proposed method performs well in terms of connectivity between retinal vessels and is also easy to implement. Because some very fine vessels are misclassified as background, which degrades the results to some extent, we will further study finer-grained segmentation.

Author Contributions

Y.W. and H.L. contributed to the content of this paper. Material preparation, data collection and analysis were performed by Y.W. The first draft was written by Y.W. and all authors commented on the previous version. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported in part by the Nantong Science and Technology Program JC2023075, in part by the National Natural Science Foundation of China under Grant 61976120, and in part by the Postgraduate Research and Practice Innovation Program of Jiangsu Province KYCX24_3643.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data analyzed during the current study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Sun, H.; Saeedi, P.; Karuranga, S.; Pinkepank, M.; Ogurtsova, K.; Duncan, B.B.; Stein, C.; Basit, A.; Chan, J.C.N.; Mbanya, J.C.; et al. IDF Diabetes Atlas: Global, regional and country-level diabetes prevalence estimates for 2021 and projections for 2045. Diabetes Res. Clin. Pract. 2022, 183, 109119. [Google Scholar] [CrossRef]
  2. Tang, P.; Liang, Q.; Yan, X.; Zhang, D.; Gianmarc, C.; Sun, W. Multi-proportion channel ensemble model for retinal vessel segmentation. Comput. Biol. Med. 2019, 111, 103352. [Google Scholar] [CrossRef] [PubMed]
  3. Guo, S. Fundus image segmentation via hierarchical feature learning. Comput. Biol. Med. 2021, 138, 104928. [Google Scholar] [CrossRef] [PubMed]
  4. Huang, Z.; Sun, M.; Liu, Y.; Wu, J. CSAUNet: A cascade self-attention u-shaped network for precise fundus vessel segmentation. Biomed. Signal Proces. 2022, 75, 103613. [Google Scholar] [CrossRef]
  5. Xie, J.; Yi, Q.; Wu, Y.; Zheng, Y.; Liu, Y.; Macerollo, A.; Fu, H.; Xu, Y.; Zhang, J.; Behera, A.; et al. Deep segmentation of OCTA for evaluation and association of changes of retinal microvasculature with Alzheimer’s disease and mild cognitive impairment. Br. J. Ophthalmol. 2023, 108, 432–439. [Google Scholar] [CrossRef] [PubMed]
  6. Meng, Y.; Zhang, H.; Zhao, Y.; Zhao, Y.; Gao, D.; Hamill, B.; Patri, G.; Peto, T.; Madhusudhan, S.; Zheng, Y. Dual Consistency Enabled Weakly and Semi-Supervised Optic Disc and Cup Segmentation with Dual Adaptive Graph Convolutional Networks. IEEE Trans. Med. Imaging 2023, 42, 416–429. [Google Scholar] [CrossRef] [PubMed]
  7. Hao, J.; Shen, T.; Zhu, X.; Liu, Y.; Behera, A.; Zhang, D.; Chen, B.; Liu, J.; Zhang, J.; Zhao, Y. Retinal Structure Detection in OCTA Image via Voting-based Multi-task Learning. IEEE Trans. Med. Imaging 2022, 41, 3969–3980. [Google Scholar] [CrossRef] [PubMed]
  8. Xia, L.; Zhang, H.; Wu, Y.; Song, R.; Ma, Y.; Mou, L.; Liu, J.; Xie, Y.; Ma, M.; Zhao, Y. 3D Vessel-like Structure Segmentation in Medical Images by an Edge-Reinforced Network. Med. Image Anal. 2022, 82, 102581. [Google Scholar] [CrossRef] [PubMed]
  9. Zhao, Y.; Zheng, Y.; Liu, Y.; Zhao, Y.; Luo, L.; Yang, S.; Na, T.; Wang, Y.; Liu, J. Automatic 2D/3D Vessel Enhancement in Multiple Modality Images Using a Weighted Symmetry Filter. IEEE Trans. Med. Imaging 2018, 37, 438–450. [Google Scholar] [CrossRef]
  10. Ma, Y.; Hao, H.; Xie, J.; Fu, H.; Zhang, J.; Yang, J.; Wang, Z.; Liu, J.; Zheng, Y.; Zhao, Y. ROSE: A Retinal OCT-Angiography Vessel Segmentation Dataset and New Model. IEEE Trans. Med. Imaging 2021, 40, 928–939. [Google Scholar] [CrossRef]
  11. Zhang, B.; Zhang, L.; Zhang, L.; Karray, F. Retinal vessel extraction by matched filter with first-order derivative of Gaussian. Comput. Biol. Med. 2010, 40, 438–445. [Google Scholar] [CrossRef] [PubMed]
  12. Krause, M.; Alles, R.M.; Burgeth, B.; Weickert, J. Fast retinal vessel analysis. J. Real-Time Image Process. 2016, 11, 413–422. [Google Scholar] [CrossRef]
  13. Zhao, Y.; Wang, X.; Wang, X.; Frank, Y.S. Retinal vessels segmentation based on level set and region growing. Pattern Recogn. 2014, 47, 2437–2446. [Google Scholar] [CrossRef]
  14. Deng, J.L. Basis of Grey Theory; Huazhong University of Science and Technology Press: Wuhan, China, 2002; pp. 135–141. [Google Scholar]
  15. Ma, M.; Fan, Y.; Xie, S.; Hao, C.; Li, X. A Novel Algorithm of Image Edge Detection Based on Gray System Theory. J. Image Graph. 2003, 8, 1136–1139. [Google Scholar]
  16. Zhen, Z.; Gu, Z.; Liu, Y. Image segmentation based on genetic algorithms and grey relational analysis. J. Grey Syst. 2016, 28, 45–51. [Google Scholar]
  17. Li, H.; Han, Y.; Guo, J. Image edge detection based on grey relation of simplified B-mode. Infrared Technol. 2017, 2, 163–167. [Google Scholar]
  18. Nath, M.K.; Dandapat, S.; Barna, C. Automatic detection of blood vessels and evaluation of retinal disorder from color fundus images. J. Intell. Fuzzy Syst. 2020, 38, 6019–6030. [Google Scholar] [CrossRef]
  19. Orujov, F.; Maskeliunas, R.; Damasevicius, R.; Wei, W. Fuzzy based image edge detection algorithm for blood vessel detection in retinal images. Appl. Soft Comput. 2020, 94, 106452. [Google Scholar] [CrossRef]
  20. Jayachandran, A.; Kumar, S.R.; Perumal, T.S.R. Multi-dimensional cascades neural network models for the segmentation of retinal vessels in colour fundus images. Multimed. Tools Appl. 2023, 82, 42927–42943. [Google Scholar] [CrossRef]
  21. Dong, F.; Wu, D.; Guo, C.; Zhang, S.; Yang, B.; Gong, X. CRAUNet: A cascaded residual attention U-Net for retinal vessel segmentation. Comput. Biol. Med. 2022, 147, 105651. [Google Scholar] [CrossRef]
  22. Qu, Z.; Zhuo, L.; Cao, J.; Li, X.; Yin, H.; Wang, Z. TP-Net: Two-Path Network for Retinal Vessel Segmentation. IEEE J. Biomed. Health Inf. 2023, 27, 1979–1990. [Google Scholar] [CrossRef] [PubMed]
  23. Odstrcilik, J.; Kolar, R.; Budai, A.; Hornegger, J.; Jan, J.; Gazarek, J.; Kubena, T.; Cernosek, P.; Svoboda, O.; Angelopoulou, E. Retinal vessel segmentation by improved matched filtering: Evaluation on a new high-resolution fundus image database. IET Image Process. 2013, 7, 373–383. [Google Scholar] [CrossRef]
  24. Roy, S.; Mitra, A.; Roy, S.; Setua, S.K. Blood vessel segmentation of retinal image using Clifford matched filter and Clifford convolution. Multimed. Tools Appl. 2019, 78, 34839–34865. [Google Scholar] [CrossRef]
  25. Yang, J.; Huang, M.; Fu, J.; Lou, C.; Feng, C. Frangi based multi-scale level sets for retinal vascular segmentation. Comput. Methods Programs Biomed. 2020, 197, 105752. [Google Scholar] [CrossRef] [PubMed]
  26. Tian, F.; Li, Y.; Wang, J.; Chen, W. Blood vessel segmentation of fundus retinal images based on improved Frangi and mathematical morphology. Comput. Math. Methods Med. 2021, 2021, 4761517. [Google Scholar] [CrossRef] [PubMed]
  27. Huang, M.; Feng, C.; Li, W.; Zhao, D. Vessel enhancement using multi-scale space-intensity domain fusion adaptive filtering. Biomed. Signal Process. Control 2021, 69, 102799. [Google Scholar] [CrossRef]
  28. Mahapatra, S.; Agrawal, S.; Mishro, P.K.; Pachori, R.B. A novel framework for retinal vessel segmentation using optimal improved frangi filter and adaptive weighted spatial FCM. Comput. Biol. Med. 2022, 147, 105770. [Google Scholar] [CrossRef] [PubMed]
  29. Shukla, A.K.; Pandey, R.K.; Pachori, R.B. A fractional filter based efficient algorithm for retinal blood vessel segmentation. Biomed. Signal Process. Control 2020, 59, 101883. [Google Scholar] [CrossRef]
  30. Vega, R.; Sanchez-Ante, G.; Falcon-Morales, L.E.; Sossa, H.; Guevara, E. Retinal vessel extraction using lattice neural networks with dendritic processing. Comput. Biol. Med. 2015, 58, 20–30. [Google Scholar] [CrossRef]
  31. Lazar, I.; Hajdu, A. Segmentation of retinal vessels by means of directional response vector similarity and region growing. Comput. Biol. Med. 2015, 66, 209–221. [Google Scholar] [CrossRef]
  32. Aguirre-Ramos, H.; Avina-Cervantes, J.G.; Cruz-Aceves, I.; Ruiz-Pinales, J.; Ledesma, S. Blood vessel segmentation in retinal fundus images using Gabor filters, fractional derivatives, and Expectation Maximization. Appl. Math. Comput. 2018, 339, 568–587. [Google Scholar] [CrossRef]
  33. Annunziata, R.; Garzelli, A.; Ballerini, L.; Mecocci, A.; Trucco, E. Leveraging multiscale hessian-based enhancement with a novel exudate inpainting technique for retinal vessel segmentation. IEEE J. Biomed. Health Inform. 2016, 20, 1129–1138. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Structure of the retina.
Figure 2. Block diagram of the suggested approach.
Figure 3. Middle image of fundus image preprocessing: (a) the original image, (b) the green channel image, (c) the image after NADF-GRA, (d) the image after CLAHE, (e) the image after Frangi enhancement.
Figure 4. Images from the DRIVE dataset before and after post-processing: (a) and (c) are the images before post-processing. (b) and (d) are the images after post-processing.
Figure 5. Threshold variation map of NADF-GRA.
Figure 6. Threshold variation curve of TS-GRA.
Figure 7. Segmentation results of the DRIVE, STARE and HRF datasets.
Figure 8. Magnification and comparison of segmentation results of different algorithms on DRIVE: (a) the original image, (b) the ground truth, (c) segmentation results under the traditional GLCM model, (d) segmentation results under the novel GLCAA model.
Figure 9. Magnification and comparison of segmentation results of different algorithms on STARE: (a) the original image, (b) the ground truth, (c) segmentation results under the traditional GLCM model, (d) segmentation results under the novel GLCAA model.
Table 1. Performance evaluation on DRIVE.
Image     Acc      Se       Sp       Pr       F1-Score  JC
1         0.9602   0.7975   0.9762   0.7661   0.7815    0.6414
2         0.9594   0.7793   0.9794   0.8122   0.7954    0.6604
3         0.9533   0.6288   0.9892   0.8660   0.7386    0.5730
4         0.9617   0.6914   0.9891   0.8654   0.7687    0.6243
5         0.9582   0.6267   0.9925   0.8961   0.7475    0.5842
6         0.9545   0.5924   0.9936   0.9084   0.7571    0.5590
7         0.9516   0.6481   0.9821   0.7845   0.7398    0.5501
8         0.9524   0.5656   0.9888   0.8261   0.6715    0.5054
9         0.9693   0.5914   0.9895   0.8329   0.6917    0.5287
10        0.9627   0.6600   0.9898   0.8533   0.7543    0.5927
11        0.9509   0.7143   0.9742   0.7311   0.7596    0.5657
12        0.9699   0.6945   0.9847   0.8104   0.7480    0.5975
13        0.9551   0.6584   0.9872   0.8482   0.7493    0.5890
14        0.9624   0.7489   0.9811   0.7774   0.7729    0.6167
15        0.9547   0.6947   0.9747   0.6795   0.7170    0.5233
16        0.9641   0.7086   0.9841   0.8155   0.7583    0.6107
17        0.9609   0.6528   0.9893   0.8495   0.7483    0.5852
18        0.9638   0.7622   0.9811   0.7763   0.7792    0.6249
19        0.9723   0.7898   0.9888   0.8640   0.8253    0.7025
20        0.9676   0.7200   0.9873   0.8176   0.7657    0.6204
Average   0.9603   0.6863   0.9851   0.8190   0.7535    0.5928
Table 2. Performance evaluation on STARE.
Image     Acc      Se       Sp       Pr       F1-Score  JC
1         0.9338   0.6070   0.9622   0.6578   0.6981    0.5266
2         0.9328   0.4602   0.9629   0.6172   0.6663    0.5394
3         0.9243   0.7826   0.9449   0.6270   0.6922    0.5091
4         0.9444   0.4150   0.9980   0.9176   0.6326    0.5260
5         0.9593   0.7758   0.9787   0.6237   0.6926    0.5297
6         0.9453   0.4931   0.9821   0.6859   0.6541    0.5934
7         0.9481   0.6948   0.9620   0.7282   0.7391    0.5890
8         0.9521   0.6011   0.9735   0.6961   0.7377    0.5705
9         0.9595   0.6815   0.9715   0.7619   0.6912    0.5219
10        0.9584   0.5784   0.9793   0.7645   0.6545    0.5336
11        0.9591   0.8693   0.9820   0.7294   0.7971    0.5470
12        0.9712   0.8789   0.9721   0.6686   0.7538    0.6048
13        0.9495   0.7987   0.9859   0.6792   0.7401    0.5968
14        0.9597   0.7791   0.9764   0.6993   0.7534    0.5307
15        0.9692   0.7874   0.9823   0.7887   0.7639    0.6180
16        0.9591   0.7057   0.9935   0.8522   0.6614    0.5902
17        0.9530   0.6951   0.9857   0.7529   0.7975    0.5717
18        0.9761   0.6270   0.9947   0.8588   0.7272    0.5713
19        0.9765   0.6082   0.9930   0.7946   0.7016    0.5403
20        0.9612   0.5709   0.9891   0.7706   0.6632    0.4961
Average   0.9546   0.6705   0.9785   0.7337   0.7109    0.5553
Table 3. Performance evaluation on HRF.
          H (%)                     DR (%)                    G (%)
Image     Pr      F1      JC        Pr      F1      JC        Pr      F1      JC
1         81.28   81.57   69.03     76.23   72.46   60.56     72.63   72.90   54.32
2         79.50   80.29   68.85     76.59   69.07   58.93     72.17   71.75   52.12
3         78.99   79.92   49.89     69.22   66.23   53.06     73.69   70.90   50.89
4         80.99   81.02   59.89     76.88   73.74   52.69     71.61   70.91   51.41
5         82.77   75.97   63.49     76.84   74.54   57.41     72.74   69.89   51.58
6         79.89   76.20   65.48     76.19   69.49   58.44     70.40   69.63   52.96
7         82.96   82.80   69.93     76.15   75.78   59.13     72.11   70.88   50.12
8         81.64   78.26   67.16     77.82   78.45   56.33     70.88   70.79   50.58
9         80.67   76.79   69.76     79.16   69.89   60.75     69.37   69.88   49.47
10        78.91   80.21   68.23     78.61   64.36   57.55     67.81   69.10   50.25
11        80.65   78.31   62.60     77.13   79.81   59.86     70.52   71.35   52.77
12        83.70   78.71   65.55     75.06   78.85   58.08     68.93   71.83   50.96
13        79.93   76.15   59.85     79.29   76.30   60.71     69.03   70.85   50.98
14        78.18   78.68   64.94     76.71   75.44   58.29     71.68   70.67   48.52
15        79.35   78.44   68.54     74.24   72.51   55.42     69.56   69.72   50.63
Average   80.63   78.89   64.88     76.41   73.12   57.81     70.88   70.74   51.17
Table 4. Comparison of experimental results of different algorithms on DRIVE.
Methods                            Acc      Se       Sp
Supervised methods
  Tang et al. [2]                  0.9574   0.8083   0.9796
  Guo [3]                          0.9575   0.7993   0.9806
  Dong et al. [21]                 0.9586   0.7954   -
  Qu et al. [22]                   0.9629   0.8749   0.9758
  Jayachandran [20]                0.9587   0.8072   0.9803
  Zhao et al. [9]                  0.9580   0.7740   0.9790
Unsupervised methods
  Odstrcilik et al. [23]           0.9340   0.7060   0.9693
  Roy et al. [24]                  0.9295   0.4392   0.9622
  Yang et al. [25]                 0.9522   0.7181   0.9747
  Nath et al. [18]                 0.9493   0.4304   0.9024
  Tian et al. [26]                 0.9554   0.6942   0.9802
  Huang et al. [27]                0.9535   0.6650   0.9812
  Mahapatra et al. [28]            0.9605   0.7020   0.9844
  Shukla et al. [29]               0.9476   0.7015   0.9836
  Traditional GLCM model           0.9445   0.5936   0.9845
  Proposed method                  0.9603   0.6863   0.9851
Table 5. Comparison of experimental results of different algorithms on STARE.
Methods                            Acc      Se       Sp
Supervised methods
  Vega et al. [30]                 0.9189   0.8179   0.9269
  Huang et al. [4]                 0.9728   0.8304   0.9862
  Qu et al. [22]                   0.9724   0.8852   0.9820
  Jayachandran [20]                0.9694   0.9836   0.8213
  Zhao et al. [9]                  0.9570   0.7880   0.9760
Unsupervised methods
  Lazar and Hajdu [31]             0.9492   0.7248   0.9751
  Ramos et al. [32]                0.9231   0.7116   0.9454
  Roy et al. [24]                  0.9488   0.4317   0.9718
  Orujov et al. [19]               0.8650   0.8342   0.8806
  Yang et al. [25]                 0.9513   0.6713   0.9731
  Tian et al. [26]                 0.9492   0.7019   0.9771
  Huang et al. [27]                0.9537   0.7273   0.9622
  Mahapatra et al. [28]            0.9601   0.6846   0.9802
  Traditional GLCM model           0.9456   0.5957   0.9766
  Proposed method                  0.9546   0.6705   0.9785
Table 6. Comparison using more performance parameters.
Methods                     Database   Pr      F1-Score  JC
Vega et al. [30]            DRIVE      64.02   68.84     -
Nath et al. [18]            DRIVE      45.11   44.05     28.37
Orujov et al. [19]          DRIVE      34.02   38.00     55.00
                            STARE      70.15   53.35     36.17
Annunziata et al. [33]      STARE      83.31   76.82     -
Ramos et al. [32]           DRIVE      68.80   73.35     -
Shukla et al. [29]          DRIVE      51.94   59.69     -
                            STARE      46.38   55.87     -
Mahapatra et al. [28]       DRIVE      81.24   75.31     58.97
                            STARE      74.40   71.29     55.66
                            HRF (H)    80.62   78.78     64.51
                            HRF (DR)   77.32   73.86     58.61
                            HRF (G)    70.28   70.70     51.14
Traditional GLCM model      DRIVE      80.49   74.01     58.32
                            STARE      72.46   70.53     54.27
                            HRF (H)    79.91   78.10     63.58
                            HRF (DR)   75.69   72.27     56.79
                            HRF (G)    70.02   69.80     50.52
Proposed method             DRIVE      81.90   75.35     59.28
                            STARE      73.37   71.09     55.53
                            HRF (H)    80.63   78.89     64.88
                            HRF (DR)   76.41   73.12     57.81
                            HRF (G)    70.88   70.74     51.17
Table 7. Ablation experiment on DRIVE and STARE.
                                                        DRIVE                       STARE
Method     Filtering method   Segmentation method      Acc      Se       Sp        Acc      Se       Sp
Method 1   GLCM filtering     GLCM segmentation        0.9445   0.5936   0.9845    0.9456   0.5957   0.9766
Method 2                                               0.9484   0.5952   0.9844    0.9504   0.6499   0.9764
Method 3                                               0.9569   0.6019   0.9849    0.9528   0.6681   0.9780
Method 4   NADF-GRA           TS-GRA                   0.9603   0.6863   0.9851    0.9546   0.6705   0.9785