Article

Robust Detection and Modeling of the Major Temporal Arcade in Retinal Fundus Images

by Dora Elisa Alvarado-Carrillo 1, Iván Cruz-Aceves 2,*, Martha Alicia Hernández-González 3 and Luis Miguel López-Montero 3

1 Center for Research in Mathematics (CIMAT), Guanajuato 36000, GTO, Mexico
2 National Council of Science and Technology (CONACYT)-Center for Research in Mathematics (CIMAT), Guanajuato 36000, GTO, Mexico
3 High Specialty Medical Unit (UMAE), Specialties Hospital No. 1, Mexican Social Security Institute (IMSS), Leon 37320, GTO, Mexico
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(8), 1334; https://doi.org/10.3390/math10081334
Submission received: 10 March 2022 / Revised: 5 April 2022 / Accepted: 8 April 2022 / Published: 18 April 2022

Abstract:
The Major Temporal Arcade (MTA) is a critical component of the retinal structure that facilitates clinical diagnosis and monitoring of various ocular pathologies. Although recent works have addressed the quantitative analysis of the MTA through parametric modeling, their efforts are strongly based on an assumption of symmetry in the MTA shape. This work presents a robust method for the detection and piecewise parametric modeling of the MTA in fundus images. The model consists of a piecewise parametric curve with the ability to consider both symmetric and asymmetric scenarios. In an initial stage, multiple models are built from random blood-vessel points taken from the blood-vessel segmented retinal image, following a weighted-RANSAC strategy. To choose the final model, the algorithm extracts blood-vessel width and grayscale-intensity features and merges them to obtain a coarse MTA probability function, which is used to weight the percentage of inlier points for each model. This procedure promotes selecting a model based on points with high MTA probability. Experimental results on the public benchmark dataset Digital Retinal Images for Vessel Extraction (DRIVE), for which manual MTA delineations have been prepared, indicate that the proposed method outperforms existing approaches with a balanced Accuracy of 0.7067, a Mean Distance to Closest Point of 7.40 pixels, and a Hausdorff Distance of 27.96 pixels, while demonstrating competitive execution time (9.93 s per image).

1. Introduction

The analysis of vascular structures in the retina can facilitate the monitoring and diagnosis of different types of ocular pathologies. The Major Temporal Arcade (MTA) is the thickest vascular structure in the retina, composed of the superior and inferior temporal arcades [1]. In clinical practice, the visual examination of the morphological integrity of the MTA and of the angle between the two temporal arcades, also called the Temporal Arcade Angle (TAA), is used as an indicator of the severity of diabetic retinopathy, myopia, or hypertension. Consequently, the numerical modeling of the MTA plays an essential role in achieving its quantitative analysis, while improving medical diagnosis.
In the literature, a handful of works have addressed the analysis of the MTA. Oloumi et al. [2] introduced image processing techniques for the segmentation, tracking, and measurement of the MTA width in order to detect plus disease and retinopathy of prematurity. Nabi et al. [3] proposed a two-step method for the identification of the MTA: first, the retinal vessels are detected through Gabor filters; then, the Hough transform, along with graph analysis, is applied to separate the MTA. Fleming et al. [4,5] considered the MTA length as a criterion to measure the quality of fundus images, along with other elements such as the optic disc location and the macula size. The MTA was automatically determined by semi-elliptical templates in a range of sizes, using the generalized Hough transform. Fledelius and Goldschmidt [6] studied the changes in MTA geometry for patients with high myopia, finding a correlation between the increase in myopia and the reduction in the TAA. The analysis was performed manually by ophthalmologists, who located the MTA and measured the TAA.
To simplify the task of automatic tracking of MTA morphological alterations over time, Oloumi et al. [7] proposed the parameterization of the MTA by using a parabolic model. A weighted version of the Hough Transform was applied to fit a parabola on the detected vascular tree. A similar approach was presented by Oloumi et al. [8], where a dual parabolic model was designed (i.e., two parabolas adjusted to the upper and lower parts of the MTA) as a solution to the problem of MTA non-symmetry for some fundus images, especially those belonging to sick patients. These models were used to prove that the openness of the MTA decreases when patients suffer from Proliferative Diabetic Retinopathy and Retinopathy of Prematurity [9].
More recently, population-based methods have been presented as an alternative for the detection of parabolic objects in medical images. Guerrero-Turrubiates et al. [10] introduced a method that applied a Univariate Marginal Distribution Algorithm (UMDA) to approximate a parabola in retinal images. The approach creates individuals by concatenating three pixel indices in the image domain. Then, each individual is evaluated using the Hadamard product between the input image and the resulting parabola image as a fitness function. Valdez et al. [11] developed a parabola detection algorithm for MTA localization in fundus images. Instead of using the Hough transform, which is computationally expensive, a fast hybrid method was proposed that combined the UMDA algorithm with the Simulated Annealing (SA) strategy, which guided the search towards promising regions. The objective function used a segmented image of the vascular structure, weighting pixels according to their distance to the parabola vertex. These two UMDA-based methods obtained superior performance in terms of computational time compared to Hough-based methods.
In this paper, a robust method for the detection and piecewise parametric modeling of the MTA in fundus images is presented. The algorithm follows a weighted-RANSAC strategy, building multiple MTA models from random blood-vessel points taken from the blood-vessel segmented retinal image. Each model consists of a piecewise parametric curve with the ability to consider both symmetric and asymmetric scenarios. To choose the best MTA model, the method considers blood-vessel-width and foreground-location features, extracted through the Distance Transform and grayscale intensities, respectively. Both attributes are merged and used as a probability function to weight the percentage of inlier points for each model. This procedure promotes selecting a model based on points with high MTA probability.
The contributions of this work are summarized as follows:
  • Blood-vessel-width and intensity features are integrated into the MTA detection and modeling process.
  • A modeling strategy addressing both symmetric and asymmetric scenarios is presented to improve the MTA characterization.
  • A weighted-RANSAC scheme is included to robustly select an MTA model configuration.
  • A set of MTA manual delineations for the benchmark DRIVE dataset has been released for scientific purposes.
The rest of this paper is organized as follows. In Section 2, the proposed method is explained in detail. The dataset, evaluation metrics and experimental results are discussed in Section 3. Finally, in Section 4, the conclusions obtained from this work are presented.

2. Methods

The proposed method consists of the following steps: (1) pre-processing of the raw images to deal with the illumination and contrast changes expected in medical images; (2) automatic segmentation of the vascular tree to reduce the MTA search space; (3) the extraction of features from the segmentation carried out in the previous step; and (4) the construction of a numerical model of the MTA from the extracted features. Each of these steps is described in detail in this section.

2.1. Pre-Processing

Fundus images are affected by uneven illumination and low contrast, producing variations in pixel intensity [12,13,14]. In this work, the original RGB fundus image is converted to grayscale and then pre-processed through the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm, which addresses these intensity changes and improves blood-vessel segmentation accuracy by applying a re-distribution of their intensity histogram [15,16,17].
The CLAHE algorithm partitions the image into small rectangular regions and applies a histogram-based local-contrast enhancement procedure. First, the intensity occurrences that exceed a particular value in the histogram of each region in the image are truncated. Then, the truncated occurrences are redistributed uniformly over the histogram. This redistribution of occurrences results in an increase in contrast between the background and the objects of interest, while the truncation procedure aims to avoid noise amplification problems, which are typical issues of traditional histogram equalization methods. Further information on this pre-processing technique for retinal images can be found in [18,19,20].
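The clipping-and-redistribution step described above can be sketched in a simplified, single-region form (CLAHE proper repeats this per rectangular tile and blends between tiles); the function name and clip fraction below are illustrative, not part of the original method:

```python
import numpy as np

def clipped_hist_equalize(img, clip_fraction=0.05):
    """Simplified, *global* version of CLAHE's per-tile step (illustration only):
    truncate histogram bins at a limit, redistribute the clipped excess
    uniformly, then equalize with the resulting cumulative histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    limit = max(1, int(clip_fraction * img.size))
    excess = int(np.sum(np.maximum(hist - limit, 0)))   # truncated occurrences
    hist = np.minimum(hist, limit) + excess // 256      # uniform redistribution
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1.0)
    lut = np.round(255 * cdf).astype(np.uint8)          # intensity mapping
    return lut[img]
```

The truncation limit bounds how much any single intensity can be amplified, which is what avoids the noise amplification of plain histogram equalization.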

2.2. Automatic Segmentation of the Vascular Tree

The inclusion of the automatic blood-vessel segmentation step in the proposed algorithm aims to delimit the MTA search space efficiently. Hence, a good trade-off between high accuracy and computational time must be considered in order to select the adequate approach for this step.
In the literature, the problem has been widely addressed by unsupervised methods [21,22,23,24], machine learning strategies [25,26,27], and deep learning techniques [28,29,30,31]. Many recent models improve blood-vessel segmentation; however, their advances generally come with a greater number of parameters or increased numerical complexity. Moreover, these methods focus on refining specific cases, such as thin-vessel detection, which is less relevant to the MTA detection task.
Considering this context, a Convolutional Neural Network (CNN) with the U-Net architecture has been adopted to perform the task. This approach has proven highly effective for biomedical image segmentation at reasonable computational cost [32,33,34,35]. Moreover, this model constitutes an end-to-end system with learnable parameters and few hyper-parameters to calibrate.
Unlike the original four-level model with an initial depth of 64 channels, a simplified three-level model with an initial depth of 32 channels is employed here. The architecture is assembled from the contracting path (or encoder), the latent path (or bottleneck), and the expanding path (or decoder). Skip-connections are added between feature maps from encoder to decoder to propagate earlier information to deeper network layers. Figure 1 illustrates the overall design. Each level in the contracting path consists of a double 3 × 3 convolutional layer + ReLU block and a max-pooling layer. Similarly, each level in the expanding path consists of a double 3 × 3 convolutional layer + ReLU block followed by an upsampling layer. A dropout layer has been added between each block to improve generalization [36].
Moreover, a patch-based approach has been adopted in the design to increase the number of images available for training: random 48 × 48 pixel patches are extracted from the grayscale pre-processed train images and are used as input for training the model. Figure 2 shows an overview of this process. For the testing part, ordered 48 × 48 pixel patches with an overlapping of five pixels are extracted from the test images. The final prediction is obtained by averaging the predictions made over each pixel.
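The random patch sampling for training can be sketched as follows (the function name and seed are illustrative; the 48 × 48 patch size matches the text):

```python
import numpy as np

def random_patches(image, n, size=48, seed=0):
    """Extract n random size x size training patches from a 2-D
    pre-processed image, as in the patch-based approach above."""
    rng = np.random.default_rng(seed)
    H, W = image.shape
    ys = rng.integers(0, H - size + 1, n)   # top-left corners, kept in bounds
    xs = rng.integers(0, W - size + 1, n)
    return np.stack([image[y:y + size, x:x + size] for y, x in zip(ys, xs)])
```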
Binary cross entropy has been selected as the loss function to adjust the network parameters:
$$ \mathcal{L}(\theta) = -\frac{1}{m} \sum_{i=1}^{m} \left[ y_i \log \hat{y}_i + (1 - y_i) \log (1 - \hat{y}_i) \right], $$
where $\theta$ represents the parameters of the network, $m$ is the number of samples (pixels) to classify, $y_i$ is the true label of sample $i$, and $\hat{y}_i$ is its predicted label, i.e., the network output.
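A minimal NumPy version of this loss (the clipping of predictions away from 0 and 1 is a standard numerical-stability detail, not stated in the text):

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Binary cross entropy averaged over m pixels, as in the equation above."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)   # avoid log(0)
    return float(-np.mean(y_true * np.log(y_pred)
                          + (1.0 - y_true) * np.log(1.0 - y_pred)))
```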

2.3. Feature Extraction

The MTA is the thickest blood vessel that appears in the foreground of the fundus image. The presented method proposes to quantify blood-vessel thickness and its location in the foreground of the image, so that both attributes contribute to a robust MTA detection and modeling.

2.3.1. Vessel Thickness

Determining blood-vessel width is a difficult task in fundus retinal imaging due to the variety of blood-vessel widths. A quick and indirect measurement can be performed using the Distance Transform [37,38,39], since it yields a map containing each blood-vessel pixel’s distance to its nearest background pixel.
The two-dimensional Distance Transform can be formulated as follows. Let $L$ be the set of sites or pixels of a two-dimensional binary image $I \in \mathbb{R}^{N \times M}$, and let $A$ and $B$ be two non-overlapping subsets of $L$, i.e., $L = A \cup B$ and $A \cap B = \emptyset$, such that set $A$ contains all the sites $u = (x, y) \in L$ that belong to the foreground of the image and set $B$ contains all the sites $u \in L$ that belong to the background:
$$ A = \{ u \in L \mid I(u) = 1 \} $$
$$ B = \{ u \in L \mid I(u) = 0 \} $$
The Distance Transform is a function that generates a map $D \in \mathbb{R}^{N \times M}$ in which the value at site $u$ is the smallest distance from $u$ to $B$:
$$ D(u) = \min_{v \in B} d(u, v), $$
where $d(\cdot, \cdot)$ is a distance metric between two points, usually the Euclidean distance.
Through a normalization process, i.e., dividing each element in D by the maximum value, and using the segmented image of the vessels as input, the Distance Transform can be used as a metric to determine the vessel width in each position of the image, allowing for the distinction of the following cases:
  • When a pixel is not located in a blood vessel, the metric returns a value of zero.
  • When a pixel is located in the center of a thin blood vessel, the metric returns a value close to zero.
  • When a pixel is located in the center of a thick blood vessel, the metric returns a value close to one.
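A toy example reproducing the three cases above on a synthetic 3-pixel-wide vessel, using SciPy's Euclidean distance transform (the array names are illustrative):

```python
import numpy as np
from scipy import ndimage

# Toy binary segmentation: a 3-pixel-wide vertical "vessel" in columns 2-4
vessels = np.zeros((7, 7), dtype=np.uint8)
vessels[:, 2:5] = 1

D = ndimage.distance_transform_edt(vessels)  # distance to nearest background pixel
D = D / D.max()                              # normalize to [0, 1]
```

The background stays at zero, vessel-edge pixels fall near zero, and center-line pixels of the widest vessel reach one.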
The metric has a drawback in determining blood-vessel width: as the pixels get closer to the edges, their value gets closer to zero, even when they belong to thick blood vessels.
A strategy to avoid these misleading values is to preserve only the center-line pixels of the vascular structure. However, this approach alters the proportion of pixels belonging to thick blood vessels in the image, which may lead to a decrease in performance in the numerical modeling procedure.

2.3.2. Foreground Location

Notice that having an indicator of blood-vessel width is helpful to some extent in locating sites of the image that may belong to the MTA. However, considering that some thick blood vessels appearing in the background of the image could also obtain high responses in the Distance Transform, additional information is required to make a more robust MTA detection.
The blood vessels in the foreground can be identified because their intensities are visibly darker than those appearing in a second plane. Following this idea, a naive foreground location map $F \in \mathbb{R}^{N \times M}$ can be calculated by taking the complement of the normalized grayscale blood-vessel intensity image $G \in \mathbb{R}^{N \times M}$.
In the resulting map, the highest values correspond to pixels belonging to first-plane blood vessels, while lower values correspond to pixels belonging to background blood vessels.

2.3.3. MTA Probability Map

The foreground location and blood-vessel thickness feature maps are averaged in order to consider a joint contribution. The average image can be interpreted as a coarse MTA Probability Map $P \in \mathbb{R}^{N \times M}$, where each pixel value represents the probability that the corresponding position in the image belongs to the structure of the MTA. The process of obtaining $P$ is illustrated in Figure 3.
The values of this map are taken into account in the numerical modeling step, explained below.
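Under these definitions, the map P reduces to a pixel-wise average of the two feature maps, which can be sketched as follows (array names are illustrative):

```python
import numpy as np

def mta_probability_map(gray, dist_map):
    """Coarse MTA Probability Map P: average of the foreground-location map
    F = 1 - G (complement of the normalized grayscale intensities) and the
    normalized Distance Transform of the vessel segmentation."""
    G = gray.astype(float) / gray.max()
    F = 1.0 - G                                  # dark foreground vessels -> high
    D = dist_map.astype(float) / dist_map.max()  # thick vessels -> high
    return (F + D) / 2.0
```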

2.4. Numerical Modeling of the MTA

2.4.1. Piecewise Parametric Modeling

Previous works regarding the MTA curvature approximation have presented models using parabolic Hough-based techniques that assume a symmetric shape. However, the MTA is not strictly symmetrical. It may present shape alterations derived from a disease or vascular damage. To address this issue, a piecewise parametric approach using quadratic spline curves is proposed.
The spline curve is a piecewise low-order polynomial function that approximates intrinsic shapes while avoiding abrupt oscillations (Runge’s phenomenon) [40,41,42]. Consider a set of $n + 1$ ordered points or knots $x_0, x_1, \ldots, x_n$ and a specified integer $k > 0$. Let $S(x)$ be a piecewise polynomial function defined on the interval $[x_0, x_n]$ as follows:
$$ S(x) = \begin{cases} p_0(x) & x_0 \le x \le x_1 \\ p_1(x) & x_1 < x \le x_2 \\ \quad \vdots & \\ p_{n-1}(x) & x_{n-1} < x \le x_n, \end{cases} $$
where each piece $p_i$ with $0 \le i \le n-1$ is a polynomial function of degree at most $k$. To guarantee the continuity and smoothness of $S(x)$, the two polynomials $p_{i-1}$ and $p_i$ must share the values of their derivatives from order zero (i.e., the value of the function itself) up to order $m$ at knot $x_i$. Then, $S(x)$ is said to be a spline curve of degree $k$ and smoothness $C^m$, or $S \in C^m$ in the neighborhood of $x_i$.
For instance, to build a model with a quadratic spline curve $S \in C^1$ from a set of ordered pairs $u_0, u_1, \ldots, u_n$ with $u_i = (x_i, y_i)$ and $0 \le i \le n$, a function $S(x)$ as described in (5) must be defined, where the polynomial pieces $p_i$ are of the form:
$$ p_i(x) = a_i x^2 + b_i x + c_i, \quad i = 0, 1, \ldots, n-1. $$
Then, $S(x)$ interpolates the given set of points, that is:
$$ S(x_i) = y_i, \quad i = 0, 1, \ldots, n. $$
The polynomials $p_{i-1}$ and $p_i$ must interpolate the same value at knot $x_i$ to ensure continuity along the interval $[x_0, x_n]$; thus, the following expression has to be satisfied:
$$ p_{i-1}(x_i) = p_i(x_i) = y_i, \quad i = 1, 2, \ldots, n-1. $$
Furthermore, the first derivative of $S(x)$ is also required to be continuous, i.e.,
$$ p'_{i-1}(x_i) = p'_i(x_i), \quad i = 1, 2, \ldots, n-1. $$
The expressions in (7)–(9) can be used to construct an equation system for the $3n$ unknown coefficients of the polynomials $p_i$, adding one additional constraint on the first piece, typically $a_0 = 0$ (so that $p_0$ is linear), to close the system.
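A minimal sketch of this construction, assembling and solving the 3n × 3n linear system given by (7)–(9); the closing constraint a_0 = 0 used below is an assumption, and the function names are illustrative:

```python
import numpy as np

def quadratic_spline(x, y):
    """Fit an interpolating C^1 quadratic spline through knots (x_i, y_i).
    Returns coef[i] = (a_i, b_i, c_i) for piece p_i(t) = a_i t^2 + b_i t + c_i
    on [x_i, x_{i+1}]. Extra constraint: a_0 = 0 (first piece linear)."""
    n = len(x) - 1                               # number of polynomial pieces
    A = np.zeros((3 * n, 3 * n))
    rhs = np.zeros(3 * n)
    row = 0
    for i in range(n):                           # interpolation at both piece ends
        for t, v in ((x[i], y[i]), (x[i + 1], y[i + 1])):
            A[row, 3 * i:3 * i + 3] = (t * t, t, 1.0)
            rhs[row] = v
            row += 1
    for i in range(1, n):                        # C^1: p'_{i-1}(x_i) = p'_i(x_i)
        A[row, 3 * (i - 1):3 * i] = (2 * x[i], 1.0, 0.0)
        A[row, 3 * i:3 * i + 3] = (-2 * x[i], -1.0, 0.0)
        row += 1
    A[row, 0] = 1.0                              # closing constraint: a_0 = 0
    return np.linalg.solve(A, rhs).reshape(n, 3)

def spline_eval(coef, x, t):
    """Evaluate the spline at scalar t by locating its piece."""
    i = int(np.clip(np.searchsorted(x, t, side="right") - 1, 0, len(coef) - 1))
    a, b, c = coef[i]
    return a * t * t + b * t + c
```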

2.4.2. Weighted RANSAC

In the proposed method, the knots for the quadratic spline curve are randomly taken from the set of non-zero pixels of the MTA Probability Map obtained in the previous step. However, the MTA Probability Map values are noisy, since they come from an estimation based on features extracted from the segmented image. A weighted-RANSAC methodology is therefore proposed to robustly select the points that best capture the characteristics of the MTA.
The RANSAC method is a non-deterministic iterative algorithm for computing the parameters of a model $M$ given a set of $N$ points or observations that contain noise [43,44]. It consists of taking a minimum subset of points $Q = \{u_1, u_2, \ldots, u_h\}$, $h < N$, necessary to compute the model parameters, and observing the number of points that are well explained by the model, i.e., the number of points within a margin distance of the points estimated by the model (inliers). This process is repeated for a number of iterations, or until a given percentage of inliers is reached, keeping the model that best explains the dataset.
In the original RANSAC, each point inside the margin distance makes the same contribution to the inlier count. In contrast, in a weighted-RANSAC scheme, the contribution $w(u)$ of each point $u = (x, y)$, $u \in L$, is weighted according to a criterion [45]. In the proposed method, the criterion is given by the MTA Probability Map $P$:
$$ w(u) = \begin{cases} P(u) & d(u, \hat{u}) \le \epsilon \\ 0 & \text{otherwise}, \end{cases} $$
where $d(\cdot, \cdot)$ represents a distance metric between two points (in this case, the distance between point $u$ and the prediction $\hat{u}$ obtained with model $M$) and $\epsilon > 0$ is the tolerance that determines whether point $u$ is considered an inlier.
Algorithm 1 describes the weighted-RANSAC scheme in detail. This strategy prefers models supported by points located at the center of thick vessels with strong intensities. This behavior is also reinforced by the point-selection step: although a uniform probability distribution is used to choose the points acting as knots, points belonging to thick vessels outnumber those belonging to thin vessels and are therefore selected more frequently.
Algorithm 1: The weighted-RANSAC algorithm.
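The weighted inlier count of Eq. (10) can be illustrated with a simplified stand-in model (a single parabola fitted with NumPy instead of the spline, to keep the sketch short; names, iteration count, and tolerance are illustrative):

```python
import numpy as np

def weighted_ransac_parabola(points, prob_map, n_iters=200, eps=2.0, seed=1):
    """Weighted-RANSAC sketch: fit y = a x^2 + b x + c to random minimal
    samples; inlier votes are weighted by the probability map P(u)."""
    rng = np.random.default_rng(seed)
    best_score, best_model = -np.inf, None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        try:
            model = np.polyfit(sample[:, 0], sample[:, 1], 2)
        except np.linalg.LinAlgError:
            continue
        residuals = np.abs(np.polyval(model, points[:, 0]) - points[:, 1])
        weights = prob_map[points[:, 1].astype(int), points[:, 0].astype(int)]
        score = np.sum(weights[residuals <= eps])   # weighted inlier count
        if score > best_score:
            best_score, best_model = score, model
    return best_model
```

With uniform weights this reduces to the classic inlier count; with the MTA Probability Map as weights, models supported by high-probability pixels dominate the score.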

2.4.3. Constraints on Point Selection

Point selection can result in models that do not cover the entire length of the MTA. On that account, a restriction has been added to choose points from three image regions (top, middle and bottom) in balanced parts, as shown in Figure 4. Under these conditions, the search for the optimal model with RANSAC is always conducted considering points around a larger area in the image, which improves the results of the MTA modeling.

3. Results and Discussion

The proposed method was evaluated through multiple comparisons with state-of-the-art algorithms on the DRIVE dataset. First, the DRIVE dataset and its MTA manual delineations are presented. Second, the evaluation metrics are described. Third, the implementation details are explained. Finally, the comparative analysis is discussed.

3.1. Dataset and Delineation of the MTA

The DRIVE dataset [46], consisting of 40 retinal fundus images of 565 × 584 pixels, is used to evaluate the performance of the proposed method. The partitions used for training and testing were taken as recommended by the dataset authors.
The DRIVE dataset has been designed for the blood-vessel segmentation task and does not provide ground-truth images for the MTA detection problem. Hence, hand-labeled MTA annotations have been created for this work by an expert ophthalmologist of the Ophthalmology Department of the Mexican Social Security Institute (IMSS) T1-León [47]. The set of MTA manual delineations is freely available (http://personal.cimat.mx:8181/~ivan.cruz/Journals/MTA_drive.html, accessed on 11 April 2022). To the authors’ best knowledge, this dataset is the first in the literature that releases MTA manual delineations for scientific purposes.

3.2. Evaluation Metrics

The metrics considered to evaluate the closeness between the model and the MTA ground-truth are Mean Distance to Closest Point (MDCP) and Hausdorff Distance, as proposed by Oloumi et al. [7].
MDCP measures the average distance from each point of one set to the nearest point of the other set. Let $A$ and $B$ be two sets of points, where $A$ is the set of points estimated by the model and $B$ is the set of points in the ground-truth delineation. The MDCP can be defined as follows:
$$ \mathrm{MDCP}(A, B) = \frac{1}{N} \sum_{i=1}^{N} \mathrm{DCP}(a_i, B), $$
where $N$ is the cardinality of set $A$, $a_i$ its $i$-th element, and DCP is the distance to the closest point, computed as follows:
$$ \mathrm{DCP}(a_i, B) = \min_j \| a_i - b_j \|, $$
for $j = 1, 2, \ldots, M$, with $M$ the cardinality of set $B$, $b_j$ its $j$-th element, and $\| \cdot \|$ any norm operator, typically the Euclidean norm. The Hausdorff Distance also uses the previous definition of DCP to find the smallest distance from each point of $A$ to the ground-truth set $B$; however, instead of an average, it takes the maximum DCP distance:
$$ H(A, B) = \max_i \mathrm{DCP}(a_i, B). $$
In both measures, small values indicate that the model is a good fit for the ground-truth.
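The directed MDCP and Hausdorff distances, as defined above (from the model set A to the ground-truth set B), in a few lines of NumPy:

```python
import numpy as np

def dcp(a, B):
    """Euclidean distance from point a to the closest point of set B."""
    return np.min(np.linalg.norm(B - a, axis=1))

def mdcp(A, B):
    """Mean Distance to Closest Point from A to B."""
    return float(np.mean([dcp(a, B) for a in A]))

def hausdorff(A, B):
    """Directed Hausdorff Distance: the maximum DCP instead of the mean."""
    return float(np.max([dcp(a, B) for a in A]))
```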
The metrics regarding the MTA detection, taking into consideration that the model corresponds to a one-pixel wide curve, are: Precision, skeleton-based Recall and skeleton-based balanced Accuracy.
The Precision is calculated as follows:
$$ \mathrm{Pre}(M) = \frac{TP}{TP + FP}, $$
where TP (true positives) represents the number of pixels of model $M$ inside the MTA delineation and FP (false positives) represents the number of pixels of model $M$ outside the MTA delineation.
Following this idea, a skeleton-based Recall can also be defined, as shown in (15), to indicate the ratio of correct positive pixels out of the total positive pixels that a perfect model should contain.
$$ \mathrm{Rec}(M) = \frac{TP}{P}, $$
where $P$ represents the number of pixels in the skeleton of the MTA delineation, taken as the ideal number of pixels that a one-pixel-wide model should contain.
Finally, a skeleton-based balanced Accuracy is considered to measure the general performance of the model, using the following definition:
$$ \mathrm{bACC}(M) = \frac{TPR + TNR}{2}, $$
with
$$ TPR = TP / P, \qquad TNR = TN / N, $$
where $N$ (skeleton negatives) is the number of pixels that do not belong to the skeleton of the MTA delineation.
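A sketch of the three detection metrics over sets of pixel coordinates; the definition of TN used below (pixels outside both the model and the skeleton) is an assumption consistent with a one-pixel-wide model, and the function name is illustrative:

```python
def mta_detection_metrics(model, delineation, skeleton, n_pixels):
    """model, delineation, skeleton: sets of (row, col) pixel coordinates;
    n_pixels: total number of pixels in the image."""
    TP = len(model & delineation)      # model pixels inside the delineation
    FP = len(model - delineation)      # model pixels outside it
    P = len(skeleton)                  # ideal positives: skeleton pixels
    N = n_pixels - P                   # skeleton negatives
    TN = n_pixels - len(model | skeleton)   # assumed negative predictions
    precision = TP / (TP + FP)
    recall = TP / P
    bacc = (TP / P + TN / N) / 2
    return precision, recall, bacc
```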

3.3. Implementation Details

The proposed method was evaluated using the DRIVE dataset with its MTA manual delineations.
Firstly, for the segmentation part, the U-Net network was trained with 200,000 random patches of size 48 × 48 pixels, from which 180,000 were used for training and 20,000 for validation. The optimization process was performed for 150 epochs, using the Stochastic Gradient Descent (SGD) optimizer with a minibatch size of 32 patches.
Secondly, for the spline curve construction in the numerical modeling part, a configuration of five points with quadratic functions has been chosen.
Finally, since the numerical modeling part contains a stochastic component for point selection and model construction, 30 independent executions were carried out and the averaged results are reported.
All the experiments were executed on a computer with an Intel Core i5 2.4 GHz processor and 12 GB of RAM, except for the segmentation step, for which an Nvidia Tesla K80 GPU with 12 GB of RAM was used. The source code is freely available (https://github.com/dora-alvarado/robust-MTA-detection-modeling, accessed on 11 April 2022).

3.4. Comparative Analysis

The proposed method was evaluated on the DRIVE test set, consisting of 20 images of 565 × 584 pixels. In Figure 5, some examples of the MTA numerical modeling are presented.
The performance analysis was carried out considering four Hough-based state-of-the-art MTA detection methods: the Gabor-based enhancement combined with a Hough detector (Gabor+Hough) [7], the General Hough detector (General Hough) [48], the parabola detection algorithm from MIPAV software (MIPAV) [49], and the hybrid UMDA+SA parabola detector (UMDA+SA) [11].
A comparison in terms of closeness between the numerical model and the MTA manual delineation is shown in Table 1. The proposed method achieves the best values in MDCP and Hausdorff Distance, improving on the second best, Gabor+Hough [7], by more than three pixels and six pixels, respectively.
A comparison for the MTA detection task has also been made using Precision, skeleton-based Recall and skeleton-based balanced Accuracy. As shown in Table 2, the proposed method obtains the best values in the three metrics, doubling the performance of the second best (UMDA+SA [11]) in Precision.
Through the qualitative comparison presented in Figure 6, it can be inferred that the difference in performance lies in the ability of the methods to adjust to a non-symmetric MTA shape. The proposed method shows robust behavior in these scenarios, while the remaining methods diverge from the manual delineation due to their parabolic foundation.
Finally, a comparison considering execution time is reported in Table 3. The proposed method obtains the second best value, only surpassed by the hybrid method UMDA+SA [11], and with an execution time 20 times faster than the third best, Gabor+Hough [7].

4. Conclusions and Future Work

This paper proposes a new method for automatic MTA detection and modeling. The segmentation step contributes to the computational efficiency of the proposed method, reducing the search space to blood-vessel-related pixels only. Unlike previous works, based on parabolic or semi-elliptical models, the proposed method relies on a piecewise parametric function able to adequately represent both symmetric and asymmetric MTAs. Through a weighted-RANSAC scheme that takes advantage of a priori knowledge about the MTA characteristics, the algorithm makes a robust selection of the points used to build the model. The inclusion of blood-vessel-width and foreground-location estimations in the inlier count promotes selecting a model built from high-probability MTA points. The method has proven robust and efficient in the MTA modeling task, obtaining a balanced Accuracy of 0.7067, an MDCP of 7.40 pixels, a Hausdorff Distance of 27.96 pixels, and an average execution time of 9.93 s per image. These numerical results also show that the proposed method is suitable for implementation in computer-aided diagnosis systems in ophthalmology.
A future direction of this work may be to determine the MTA openness for the presented piecewise-parametric model with the aim of classifying ophthalmological alterations. The first approach to this task would be to quantify the area-under-the-curve and slope variations for the function.

Author Contributions

Conceptualization, D.E.A.-C., I.C.-A., M.A.H.-G. and L.M.L.-M.; methodology, D.E.A.-C., I.C.-A., M.A.H.-G. and L.M.L.-M.; software, D.E.A.-C. and I.C.-A.; validation, D.E.A.-C., I.C.-A., M.A.H.-G. and L.M.L.-M.; formal analysis, D.E.A.-C., I.C.-A., M.A.H.-G. and L.M.L.-M.; investigation, D.E.A.-C., I.C.-A., M.A.H.-G. and L.M.L.-M.; data curation, I.C.-A., M.A.H.-G. and L.M.L.-M.; writing—original draft preparation, D.E.A.-C. and I.C.-A.; writing—review and editing, D.E.A.-C. and I.C.-A.; visualization, D.E.A.-C. and I.C.-A.; supervision, D.E.A.-C., I.C.-A., M.A.H.-G. and L.M.L.-M.; project administration, I.C.-A. All authors have read and agreed to the published version of the manuscript.

Funding

The APC was funded by the Centro de Investigación en Matemáticas, A.C. (CIMAT).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

This work was partially supported by CONACyT, Mexico under Doctoral Studies Grant no. 626155/719327 and Project Cátedras-CONACyT No. 3150-3097.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
bACC: Skeleton-based balanced Accuracy
CLAHE: Contrast Limited Adaptive Histogram Equalization
CNN: Convolutional Neural Network
DCP: Distance to Closest Point
DRIVE: Digital Retinal Images for Vessel Extraction
FP: False Positives
IMSS: Mexican Social Security Institute
ITA: Inferior Temporal Arcade
MDCP: Mean Distance to Closest Point
MTA: Major Temporal Arcade
N: Skeleton-based Negatives
P: Positives
Pre: Precision
RANSAC: Random Sample Consensus
Rec: Skeleton-based Recall
STA: Superior Temporal Arcade
UMDA: Univariate Marginal Distribution Algorithm
TAA: Temporal Arcade Angle
TP: True Positives

References

  1. Wilson, C.; Theodorou, M.; Cocker, K.D.; Fielder, A.R. The temporal retinal vessel angle and infants born preterm. Br. J. Ophthalmol. 2006, 90, 702–704.
  2. Oloumi, F.; Rangayyan, R.M.; Casti, P.; Ells, A.L. Computer-aided diagnosis of plus disease via measurement of vessel thickness in retinal fundus images of preterm infants. Comput. Biol. Med. 2015, 66, 316–329.
  3. Nabi, F.; Yousefi, H.; Soltanian-Zadeh, H. Segmentation of major temporal arcade in angiography images of retina using generalized Hough transform and graph analysis. In Proceedings of the 2015 22nd Iranian Conference on Biomedical Engineering (ICBME), Tehran, Iran, 25–27 November 2015; pp. 287–292.
  4. Fleming, A.D.; Philip, S.; Goatman, K.A.; Olson, J.A.; Sharp, P.F. Automated assessment of diabetic retinal image quality based on clarity and field definition. Investig. Ophthalmol. Vis. Sci. 2006, 47, 1120–1125.
  5. Fleming, A.D.; Goatman, K.A.; Philip, S.; Prescott, G.J.; Sharp, P.F.; Olson, J.A. Automated grading for diabetic retinopathy: A large-scale audit using arbitration by clinical experts. Br. J. Ophthalmol. 2010, 94, 1606–1610.
  6. Fledelius, H.C.; Goldschmidt, E. Optic disc appearance and retinal temporal vessel arcade geometry in high myopia, as based on follow-up data over 38 years. Acta Ophthalmol. 2010, 88, 514–520.
  7. Oloumi, F.; Rangayyan, R.M.; Ells, A.L. Parabolic modeling of the major temporal arcade in retinal fundus images. IEEE Trans. Instrum. Meas. 2012, 61, 1825–1838.
  8. Oloumi, F.; Rangayyan, R.; Ells, A.L. Dual-parabolic modeling of the superior and the inferior temporal arcades in fundus images of the retina. In Proceedings of the 2011 IEEE International Symposium on Medical Measurements and Applications, Bari, Italy, 30–31 May 2011; pp. 1–6.
  9. Oloumi, F.; Rangayyan, R.M.; Ells, A.L. Computer-Aided Diagnosis of Retinopathy of Prematurity in Retinal Fundus Images. In Medical Image Analysis and Informatics; CRC Press: Boca Raton, FL, USA, 2017; pp. 57–83.
  10. Guerrero-Turrubiates, J.D.J.; Cruz-Aceves, I.; Ledesma, S.; Sierra-Hernandez, J.M.; Velasco, J.; Avina-Cervantes, J.G.; Avila-Garcia, M.S.; Rostro-Gonzalez, H.; Rojas-Laguna, R. Fast parabola detection using estimation of distribution algorithms. Comput. Math. Methods Med. 2017, 2017, 6494390.
  11. Valdez, S.I.; Espinoza-Perez, S.; Cervantes-Sanchez, F.; Cruz-Aceves, I. Hybridization of the Univariate Marginal Distribution Algorithm with Simulated Annealing for Parametric Parabola Detection. In Hybrid Metaheuristics for Image Analysis; Springer: Berlin/Heidelberg, Germany, 2018; pp. 163–186.
  12. Zhou, M.; Jin, K.; Wang, S.; Ye, J.; Qian, D. Color retinal image enhancement based on luminosity and contrast adjustment. IEEE Trans. Biomed. Eng. 2017, 65, 521–527.
  13. Soomro, T.A.; Khan, T.M.; Khan, M.A.; Gao, J.; Paul, M.; Zheng, L. Impact of ICA-based image enhancement technique on retinal blood vessels segmentation. IEEE Access 2018, 6, 3524–3538.
  14. Alwazzan, M.J.; Ismael, M.A.; Ahmed, A.N. A hybrid algorithm to enhance colour retinal fundus images using a Wiener filter and CLAHE. J. Digit. Imaging 2021, 34, 750–759.
  15. Pizer, S.M.; Johnston, R.E.; Ericksen, J.P.; Yankaskas, B.C.; Muller, K.E. Contrast-Limited Adaptive Histogram Equalization: Speed and Effectiveness. In Proceedings of the First Conference on Visualization in Biomedical Computing, Atlanta, GA, USA, 22–25 May 1990; IEEE Computer Society Press: Washington, DC, USA, 1990; p. 337.
  16. Sule, O.; Viriri, S.; Gwetu, M. Contrast Enhancement in Deep Convolutional Neural Networks for Segmentation of Retinal Blood Vessels. In Asian Conference on Intelligent Information and Database Systems; Springer: Berlin/Heidelberg, Germany, 2021; pp. 278–290.
  17. Arjuna, A.; Rose, R.R. Performance Analysis of Various Contrast Enhancement Techniques with Illumination Equalization on Retinal Fundus Images. In Proceedings of the 2019 International Conference on Smart Systems and Inventive Technology (ICSSIT), Tirunelveli, India, 27–29 November 2019; pp. 406–411.
  18. Ningsih, D.R. Improving Retinal Image Quality Using the Contrast Stretching, Histogram Equalization, and CLAHE Methods with Median Filters. Int. J. Image Graph. Signal Process. 2020, 12, 30.
  19. da Rocha, D.A.; Barbosa, A.B.L.; Guimarães, D.S.; Gregório, L.M.; Gomes, L.H.N.; da Silva Amorim, L.; Peixoto, Z.M.A. An unsupervised approach to improve contrast and segmentation of blood vessels in retinal images using CLAHE, 2D Gabor wavelet, and morphological operations. Res. Biomed. Eng. 2020, 36, 67–75.
  20. dos Santos, J.C.M.; Carrijo, G.A.; dos Santos Cardoso, C.F.; Ferreira, J.C.; Sousa, P.M.; Patrocinio, A.C. Fundus image quality enhancement for blood vessel detection via a neural network using CLAHE and Wiener filter. Res. Biomed. Eng. 2020, 36, 107–119.
  21. Zhou, C.; Zhang, X.; Chen, H. A new robust method for blood vessel segmentation in retinal fundus images based on weighted line detector and hidden Markov model. Comput. Methods Programs Biomed. 2020, 187, 105231.
  22. Ali, A.; Mimi Diyana Wan Zaki, W.; Hussain, A.; Haslina Wan Abdul Halim, W.; Hashim, N.; Noorshahida Mohd Isa, W. B-COSFIRE and Background Normalisation for Efficient Segmentation of Retinal Vessels. In Proceedings of the 2021 IEEE Symposium on Industrial Electronics Applications (ISIEA), Langkawi Island, Malaysia, 10–11 July 2021; pp. 1–5.
  23. Ma, Y.; Zhu, Z.; Dong, Z.; Shen, T.; Sun, M.; Kong, W. Multichannel Retinal Blood Vessel Segmentation Based on the Combination of Matched Filter and U-Net Network. BioMed Res. Int. 2021, 2021, 5561125.
  24. Khan, T.M.; Khan, M.A.; Rehman, N.U.; Naveed, K.; Afridi, I.U.; Naqvi, S.S.; Raazak, I. Width-wise vessel bifurcation for improved retinal vessel segmentation. Biomed. Signal Process. Control 2022, 71, 103169.
  25. Rodrigues, E.O.; Conci, A.; Liatsis, P. ELEMENT: Multi-modal retinal vessel segmentation based on a coupled region growing and machine learning approach. IEEE J. Biomed. Health Inform. 2020, 24, 3507–3519.
  26. Tamim, N.; Elshrkawey, M.; Abdel Azim, G.; Nassar, H. Retinal blood vessel segmentation using hybrid features and multi-layer perceptron neural networks. Symmetry 2020, 12, 894.
  27. Shi, Y.; Liu, L.; Li, F. An Adaptive Topology-enhanced Deep Learning Method Combined with Fast Label Extraction Scheme for Retinal Vessel Segmentation. In Proceedings of the 2021 14th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), Shanghai, China, 23–25 October 2021; pp. 1–6.
  28. Jin, Q.; Meng, Z.; Pham, T.D.; Chen, Q.; Wei, L.; Su, R. DUNet: A deformable network for retinal vessel segmentation. Knowl.-Based Syst. 2019, 178, 149–162.
  29. Feng, S.; Zhuo, Z.; Pan, D.; Tian, Q. CcNet: A cross-connected convolutional network for segmenting retinal vessels using multi-scale features. Neurocomputing 2020, 392, 268–276.
  30. Li, L.; Verma, M.; Nakashima, Y.; Nagahara, H.; Kawasaki, R. IterNet: Retinal Image Segmentation Utilizing Structural Redundancy in Vessel Networks. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Snowmass, CO, USA, 1–5 March 2020; pp. 3656–3665.
  31. Jiang, Y.; Liu, W.; Wu, C.; Yao, H. Multi-Scale and Multi-Branch Convolutional Neural Network for Retinal Image Segmentation. Symmetry 2021, 13, 365.
  32. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241.
  33. Yu, L.; Yang, X.; Chen, H.; Qin, J.; Heng, P.A. Volumetric ConvNets with Mixed Residual Connections for Automated Prostate Segmentation from 3D MR Images. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; pp. 66–72.
  34. Alom, M.Z.; Hasan, M.; Yakopcic, C.; Taha, T.M.; Asari, V.K. Recurrent residual convolutional neural network based on U-Net (R2U-Net) for medical image segmentation. arXiv 2018, arXiv:1802.06955.
  35. Dalmış, M.U.; Litjens, G.; Holland, K.; Setio, A.; Mann, R.; Karssemeijer, N.; Gubern-Mérida, A. Using deep learning to segment breast and fibroglandular tissue in MRI volumes. Med. Phys. 2017, 44, 533–546.
  36. Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958.
  37. Jiang, X.; Mojon, D. Adaptive local thresholding by verification-based multithreshold probing with application to vessel detection in retinal images. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 131–137.
  38. Azegrouz, H.; Trucco, E.; Dhillon, B.; MacGillivray, T.; MacCormick, I. Thickness dependent tortuosity estimation for retinal blood vessels. In Proceedings of the 2006 International Conference of the IEEE Engineering in Medicine and Biology Society, New York, NY, USA, 30 August–3 September 2006; pp. 4675–4678.
  39. Sironi, A.; Lepetit, V.; Fua, P. Multiscale centerline detection by learning a scale-space distance transform. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2697–2704.
  40. McKinley, S.; Levine, M. Cubic Spline Interpolation. Coll. Redwoods 1998, 45, 1049–1060.
  41. Dyer, S.A.; Dyer, J.S. Cubic-spline interpolation. 1. IEEE Instrum. Meas. Mag. 2001, 4, 44–46.
  42. Marsh, L.; Cormier, D. Spline Regression Models; No. 137 in Quantitative Applications in the Social Sciences; SAGE Publications: New York, NY, USA, 2001.
  43. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395.
  44. Derpanis, K.G. Overview of the RANSAC Algorithm. Image Rochester N. Y. 2010, 4, 2–3.
  45. Zhang, D.; Wang, W.; Huang, Q.; Jiang, S.; Gao, W. Matching images more efficiently with local descriptors. In Proceedings of the 2008 19th International Conference on Pattern Recognition, Tampa, FL, USA, 8–11 December 2008; pp. 1–4.
  46. Staal, J.; Abramoff, M.; Niemeijer, M.; Viergever, M.; van Ginneken, B. Ridge based vessel segmentation in color images of the retina. IEEE Trans. Med. Imaging 2004, 23, 501–509.
  47. Giacinti, D.J.; Cervantes Sánchez, F.; Cruz Aceves, I.; Hernández González, M.A.; López Montero, L.M. Determination of the parabola of the retinal vasculature using a segmentation computational algorithm. Nova Sci. 2019, 11.
  48. Sanchez, C. Parabola Detection Using Hough Transform. 2007. Available online: https://www.mathworks.com/matlabcentral/fileexchange/15841-parabola-detection-using-hough-transform (accessed on 9 March 2022).
  49. McAuliffe, M. Medical Image Processing, Analysis, and Visualization (MIPAV); National Institutes of Health: Bethesda, MD, USA, 2009.
Figure 1. The three-level U-Net architecture employed for blood-vessel segmentation. The model consists of a contracting path (encoder), a latent path (bottleneck), and an expanding path (decoder). Skip connections link feature maps from the contracting path to the corresponding stages of the expanding path.
Figure 2. U-Net network training. (a) original images; (b) CLAHE-enhanced images; (c) random patches taken from the enhanced images; (d) a 48 × 48 pixel patch; (e) the U-Net is trained on the patches; (f) a 48 × 48 pixel segmented patch is obtained.
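The patch-sampling step described in the Figure 2 caption can be sketched as follows. This is an illustrative simplification, not the authors' code; the function name and parameters are ours, and it assumes the CLAHE-enhanced image and its ground-truth vessel mask are already available as NumPy arrays.

```python
import numpy as np

def extract_random_patches(image, mask, n_patches, size=48, rng=None):
    """Sample random square patches (and matching mask patches) from an image.

    Minimal sketch of the patch-based training-data preparation shown in
    Figure 2; names and parameters are illustrative, not from the paper.
    """
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]
    patches, targets = [], []
    for _ in range(n_patches):
        # Top-left corner chosen so the full patch fits inside the image.
        y = rng.integers(0, h - size + 1)
        x = rng.integers(0, w - size + 1)
        patches.append(image[y:y + size, x:x + size])
        targets.append(mask[y:y + size, x:x + size])
    return np.stack(patches), np.stack(targets)
```

Training on small random patches rather than whole images both augments the limited DRIVE training set and keeps the network input size fixed.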
Figure 3. Feature extraction for the MTA Probability Map. From left to right: (a) CLAHE-enhanced grayscale image; (b) blood-vessel segmentation; (c) foreground-location feature map estimated from grayscale blood-vessel intensities; (d) blood-vessel thickness feature map estimated via the Distance Transform; (e) MTA Probability Map obtained by averaging the foreground-location and blood-vessel thickness feature maps.
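The fusion of the two feature maps in Figure 3 can be sketched in a few lines. This is a hedged reconstruction of the idea only: the exact normalization and intensity convention used by the authors are not given here, so the inversion of grayscale values and the min-max scaling below are our assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def mta_probability_map(enhanced_gray, vessel_mask):
    """Coarse MTA probability map as the mean of two normalized features.

    Sketch of the fusion illustrated in Figure 3 (details are assumptions):
    a foreground-location feature from grayscale vessel intensities and a
    thickness feature from the Euclidean distance transform of the mask.
    """
    vessel_mask = vessel_mask.astype(bool)
    # Feature 1: vessel pixels weighted by inverted grayscale intensity,
    # so darker (more vessel-like) pixels score higher (assumed convention).
    intensity = np.where(vessel_mask, 255.0 - enhanced_gray, 0.0)
    # Feature 2: distance to the nearest background pixel approximates the
    # local vessel half-width (thicker vessels -> larger values), so the
    # wide MTA stands out against thin peripheral vessels.
    thickness = distance_transform_edt(vessel_mask)
    # Normalize each feature to [0, 1] and average them.
    feats = []
    for f in (intensity, thickness):
        m = f.max()
        feats.append(f / m if m > 0 else f)
    return 0.5 * (feats[0] + feats[1])
```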
Figure 4. The weighted-RANSAC method for MTA modeling. The image is divided into three sections; n points are chosen from each section in balanced proportions and used to compute the spline model. The model is evaluated by its inlier count, defined from a tolerance value ϵ. Each point has the inlier weight that corresponds to its value on map P, represented here by its grayscale intensity.
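The sampling-and-scoring loop in Figure 4 can be sketched as below. This is a simplified, illustrative version under our own assumptions: it draws a single knot from each of three sections along the horizontal axis, fits a cubic spline y = f(x) through them, and scores candidates by the P-weighted inlier count; the paper's actual knot counts and spline parameterization may differ.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def weighted_ransac_spline(points, weights, n_iters=500, eps=5.0, rng=None):
    """Weighted-RANSAC selection of a spline model (sketch of Figure 4).

    `points` are (x, y) vessel-pixel coordinates and `weights` their values
    on the MTA probability map P. Illustrative simplification: one knot per
    vertical third, scored by the weighted inlier count within tolerance eps.
    """
    rng = np.random.default_rng(rng)
    xs, ys = points[:, 0], points[:, 1]
    thirds = np.array_split(np.argsort(xs), 3)  # three sections along x
    best_score, best_model = -np.inf, None
    for _ in range(n_iters):
        knot_idx = [rng.choice(sec) for sec in thirds if len(sec)]
        kx, ky = xs[knot_idx], ys[knot_idx]
        if len(np.unique(kx)) < len(kx):
            continue  # CubicSpline needs strictly increasing knot abscissae
        order = np.argsort(kx)
        model = CubicSpline(kx[order], ky[order])
        # Weighted inlier count: points whose vertical distance to the
        # curve is within eps contribute their weight on map P.
        residual = np.abs(ys - model(xs))
        score = weights[residual < eps].sum()
        if score > best_score:
            best_score, best_model = score, model
    return best_model, best_score
```

Weighting inliers by P steers the consensus toward thick, high-probability MTA pixels instead of treating every vessel pixel equally.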
Figure 5. MTA detection and modeling on the test set of retinal fundus images. ROWS: (1) 03_test; (2) 05_test; (3) 06_test; (4) 07_test; (5) 08_test; (6) 09_test. COLUMNS: (a) grayscale enhanced image; (b) MTA Probability Map; (c) top 20 models with the highest inlier count in the weighted-RANSAC algorithm; (d) best spline-based model obtained from the weighted-RANSAC algorithm: the blue curve represents the ground-truth delineation, the green curve the resulting model, and the red dots the knots used to build the spline curve.
Figure 6. MTA modeling using different approaches on a set of images from the test set. In all images, the blue line represents the ground-truth delineation and the green line the resulting model. ROWS: (a) General Hough by Sanchez [48]; (b) MIPAV by McAuliffe [49]; (c) UMDA+SA by Valdez et al. [11]; (d) proposed method. COLUMNS: (1) 03_test; (2) 05_test; (3) 06_test; (4) 07_test; (5) 08_test; (6) 09_test.
Table 1. Performance comparison of model closeness (in pixels) to the ground-truth MTA delineation on the test set. Averages over 30 runs of the MDCP and Hausdorff distances.

Method               MDCP (px.)        Hausdorff (px.)
                     Mean ± Std.       Mean ± Std.
Gabor+Hough [7]      12.10 ± 6.16      34.90 ± 16.60
General Hough [48]   31.28 ± 0.00      64.49 ± 0.00
MIPAV [49]           25.69 ± 0.00      59.91 ± 0.00
UMDA+SA [11]         30.45 ± 12.94     105.80 ± 27.54
Proposed method      7.40 ± 5.34       27.96 ± 17.66
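The two closeness measures reported in Table 1 follow standard definitions, which can be sketched directly. Note this computes the directed variant (from model points to ground-truth points); whether the paper symmetrizes the Hausdorff distance is not specified here.

```python
import numpy as np

def mdcp_and_hausdorff(model_pts, gt_pts):
    """MDCP and directed Hausdorff distance between two 2D point sets.

    Sketch of the Table 1 metrics: for each model point, take the Euclidean
    distance to its closest ground-truth point; the MDCP is the mean of
    these distances and the directed Hausdorff distance is their maximum.
    """
    # Pairwise Euclidean distances, shape (n_model, n_gt).
    diff = model_pts[:, None, :] - gt_pts[None, :, :]
    dists = np.linalg.norm(diff, axis=-1)
    closest = dists.min(axis=1)  # distance to closest point, per model point
    return closest.mean(), closest.max()
```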
Table 2. Performance comparison of MTA detection in terms of Precision (Pre), skeleton-based Recall (Rec), and skeleton-based balanced Accuracy (bACC) on the test set.

Method               Pre       Rec       bACC
General Hough [48]   0.0454    0.0338    0.5024
MIPAV [49]           0.1749    0.1785    0.5405
UMDA+SA [11]         0.2236    0.2426    0.6150
Proposed method      0.4517    0.4255    0.7067
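The standard formulas behind the Table 2 columns are shown below for reference. The paper's variants are skeleton-based (computed on thinned curves, typically with a pixel tolerance); this sketch shows only the plain confusion-matrix definitions.

```python
def detection_metrics(tp, fp, fn, tn):
    """Precision, Recall, and balanced Accuracy from confusion-matrix counts.

    Standard definitions only; the paper's skeleton-based variants apply
    these to thinned curves with a matching tolerance, which is not
    reproduced here.
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)           # true-positive rate (sensitivity)
    specificity = tn / (tn + fp)      # true-negative rate
    # Balanced accuracy averages sensitivity and specificity, which keeps
    # the score meaningful when MTA pixels are a tiny fraction of the image.
    bacc = 0.5 * (recall + specificity)
    return precision, recall, bacc
```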
Table 3. Execution-time comparison of MTA detection and modeling on the test set.

Method               Execution Time (s)
Gabor+Hough [7]      200
General Hough [48]   4.7641 (per pixel)
MIPAV [49]           230
UMDA+SA [11]         1.68
Proposed method      9.93
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
