Article

A Retinal Vessel Detection Approach Based on Shearlet Transform and Indeterminacy Filtering on Fundus Images

1 Department of Computer Science, University of Illinois at Springfield, Springfield, IL 62703, USA
2 Engineering Faculty, Department of Electrical-Electronics Engineering, Bitlis Eren University, 13000 Bitlis, Turkey
3 Technology Faculty, Department of Electrical and Electronics Engineering, Firat University, 23119 Elazig, Turkey
4 Mathematics & Science Department, University of New Mexico, 705 Gurley Ave., Gallup, NM 87301, USA
* Author to whom correspondence should be addressed.
Symmetry 2017, 9(10), 235; https://doi.org/10.3390/sym9100235
Submission received: 21 September 2017 / Revised: 9 October 2017 / Accepted: 16 October 2017 / Published: 19 October 2017
(This article belongs to the Special Issue Neutrosophic Theories Applied in Engineering)

Abstract

A fundus image is an effective tool for ophthalmologists studying eye diseases. Retinal vessel detection is a significant task in the identification of retinal disease regions. This study presents a retinal vessel detection approach using shearlet transform and indeterminacy filtering. The fundus image’s green channel is mapped into the neutrosophic domain via shearlet transform. The neutrosophic domain images are then filtered with an indeterminacy filter to reduce the indeterminacy information. A neural network classifier is employed to identify vessel pixels, with the features of the neutrosophic images as its inputs. The proposed approach is tested on two datasets, and the receiver operating characteristic curve and the area under the curve are used to evaluate the experimental results quantitatively. The area under the curve values are 0.9476 and 0.9469 for the two datasets respectively, and 0.9439 for both datasets. The comparison with other algorithms also shows that the proposed method yields the highest evaluation measure, demonstrating its efficiency and accuracy.

1. Introduction

A fundus image is an important and effective tool for ophthalmologists, who examine the eyes to diagnose various diseases such as cardiovascular disease, hypertension, arteriosclerosis and diabetes. Recently, diabetic retinopathy (DR) has become a prevalent disease and is regarded as the major cause of permanent vision loss in adults worldwide [1]. Prevention of such adult blindness necessitates the early detection of DR, which can be achieved by inspecting the changes in blood vessel structure in fundus images [2,3]. In particular, the detection of new retinal vessel growth is quite important. Experienced ophthalmologists can apply various clinical methods for the manual diagnosis of DR, which require time and consistency. Hence, automated diagnosis systems for retinal screening are in demand.
Various works have been proposed in which the authors detect retinal vessels automatically on fundus images. Soares et al. proposed a method based on two-dimensional Gabor wavelets and supervised classification to segment retinal vessels [4], which classifies pixels as vessel or non-vessel. Dash et al. presented a morphology-based algorithm to segment retinal vessels [5]. The authors used 2-D Gabor wavelets and CLAHE (contrast-limited adaptive histogram equalization) to enhance the retinal images; segmentation was achieved with geodesic operators, and the result was then refined with post-processing.
Zhao et al. introduced a methodology in which level sets and region growing were used for retinal vessel segmentation [6]. These authors also used CLAHE and 2-D Gabor filters for image enhancement. The enhanced images were further processed by an anisotropic diffusion filter to smooth the retinal images, and the vessel segmentation was finally achieved using level sets and region growing. Levet et al. developed a retinal vessel segmentation method using shearlet transform [7]. The authors introduced a term called ridgeness, which is calculated for all pixels at a given scale; hysteresis thresholding is then applied to extract the retinal vessels. Another multi-resolution approach was proposed by Bankhead et al. [8], in which the authors used wavelets. They achieved the vessel segmentation by thresholding the wavelet coefficients and further introduced an alternative approach for center line detection based on spline fitting. Staal et al. extracted the ridges in images [9]. The extracted ridges were then used to form line elements, which produced a number of image patches. After obtaining the feature vectors, a feature selection mechanism was applied to reduce the number of features, and a K-nearest-neighbors classifier was finally used for classification. Kande et al. introduced a methodology combining vessel enhancement and the spatially weighted fuzzy c-means (SWFCM) method [10]: vessel enhancement was achieved by matched filtering and the extraction of the vessels was accomplished by the SWFCM method. Chen et al. introduced a hybrid model for automatic retinal vessel extraction [11], which combined the signed pressure force function and the local intensity to construct a robust model for handling the segmentation problem under low contrast. Wang et al. proposed a supervised approach which segments the vessels in retinal images hierarchically [12]; it extracts features with a trained convolutional neural network (CNN) and uses an ensemble random forest to classify the pixels into vessel and non-vessel classes. Liskowski et al. utilized deep learning to segment the retinal vessels in fundus images [13] using two types of CNN models: one was a standard CNN architecture with nine layers and the other consisted only of convolution layers. Maji et al. introduced an ensemble-based methodology for retinal vessel segmentation [14], which used 12 deep CNN models to construct the classifier structure; the mean of the outputs of all networks was used for the final decision.
In this study, a retinal vessel detection approach is presented using shearlet transform and indeterminacy filtering. Shearlets are capable of capturing anisotropic information, which makes them effective in detecting edges, corners, and blobs where discontinuities exist [15,16,17]. Shearlets are employed to describe the vessel features and map the image into the neutrosophic domain. An indeterminacy filter is used to remove the uncertain information in the neutrosophic set. A line-like filter is also utilized to enhance the vessel regions. Finally, the vessels are identified via a neural network classifier.

2. Proposed Method

2.1. Shearlet Transform

Shearlet transformation enables image features to be analyzed in more flexible geometric structures with simpler mathematical approaches, and is able to reveal directional and anisotropic information at multiple scales [18]. In the 2-D case, the affine systems are defined as the collection:
$SH_\phi f(a, s, t) = \langle f, \phi_{a,s,t} \rangle$
$\phi_{a,s,t}(x) = |\det M_{a,s}|^{-1/2} \, \phi(M_{a,s}^{-1} x - t)$
where $SH_\phi f(a,s,t)$ is the shearlet coefficient and $\phi_{a,s,t}$ are the shearlets. $M_{a,s} = B_s A_a = \begin{pmatrix} a & \sqrt{a}\,s \\ 0 & \sqrt{a} \end{pmatrix}$, where $A_a = \begin{pmatrix} a & 0 \\ 0 & \sqrt{a} \end{pmatrix}$ is the parabolic scaling matrix and $B_s = \begin{pmatrix} 1 & s \\ 0 & 1 \end{pmatrix}$ is the shear matrix $(a > 0,\ s \in \mathbb{R},\ t \in \mathbb{R}^2)$. The scale parameter $a$ is a real number greater than zero and the shear parameter $s$ is a real number; $M_{a,s}$ is the composition of $A_a$ and $B_s$.
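To make the matrix notation concrete, here is a minimal numpy sketch of the parabolic scaling and shear matrices and their composition $M_{a,s} = B_s A_a$; the particular values of a and s are illustrative only.

```python
import numpy as np

def scaling_matrix(a):
    """Parabolic scaling matrix A_a (assumes a > 0)."""
    return np.array([[a, 0.0],
                     [0.0, np.sqrt(a)]])

def shear_matrix(s):
    """Shear matrix B_s."""
    return np.array([[1.0, s],
                     [0.0, 1.0]])

# M_{a,s} = B_s A_a, the composition used to build phi_{a,s,t}
a, s = 4.0, 0.5
M = shear_matrix(s) @ scaling_matrix(a)
print(M)                              # [[4. 1.], [0. 2.]]
print(np.linalg.det(M) ** -0.5)       # normalization factor |det M_{a,s}|^{-1/2}
```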

2.2. Neutrosophic Indeterminacy Filtering

Recently, neutrosophic theory, extended from classical fuzzy theory, has been successfully used in many applications for reducing uncertainty and indeterminacy [19]. An element g in a neutrosophic set (NS) is defined as g(T, I, F), where T identifies the true degree, I the indeterminate degree, and F the false degree in the set; T, I and F are the neutrosophic components. Previously reported studies have demonstrated that the NS plays a vital role in image processing [20,21,22].
A pixel $P(x,y)$ at location $(x,y)$ in an image is described in the NS domain as $P_{NS}(x,y) = \{T(x,y), I(x,y), F(x,y)\}$, where $T(x,y)$, $I(x,y)$ and $F(x,y)$ are the membership values belonging to the bright pixel set, the indeterminate set, and the non-white set, respectively.
In this study, the fundus image’s green channel is mapped into NS domain via shearlet feature values:
$T(x,y) = \dfrac{ST_L(x,y) - ST_{L\min}}{ST_{L\max} - ST_{L\min}}$
$I(x,y) = \dfrac{ST_H(x,y) - ST_{H\min}}{ST_{H\max} - ST_{H\min}}$
where $T$ and $I$ are the true and indeterminate membership values. $ST_L(x,y)$ is the low-frequency component of the shearlet feature at the current pixel $P(x,y)$, and $ST_{L\min}$ and $ST_{L\max}$ are the minimum and maximum values of the low-frequency component of the shearlet feature over the whole image, respectively. $ST_H(x,y)$ is the high-frequency component of the shearlet feature at the current pixel $P(x,y)$, and $ST_{H\min}$ and $ST_{H\max}$ are the minimum and maximum values of the high-frequency component over the whole image, respectively. In the proposed algorithm, only the neutrosophic components $T$ and $I$ are utilized for segmentation.
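A minimal sketch of this mapping, assuming the low- and high-frequency shearlet coefficient maps are already available as 2-D arrays (the names st_low, st_high and the eps guard against a constant image are ours):

```python
import numpy as np

def to_neutrosophic(st_low, st_high, eps=1e-12):
    """Map shearlet feature maps to the neutrosophic components T and I
    by min-max normalization over the whole image."""
    T = (st_low - st_low.min()) / (st_low.max() - st_low.min() + eps)
    I = (st_high - st_high.min()) / (st_high.max() - st_high.min() + eps)
    return T, I
```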
An indeterminacy filter (IF) is then defined using the indeterminacy membership to reduce the indeterminacy in the image. The IF is defined based on the indeterminacy value $I(x,y)$, with the kernel function:
$O_I(u,v) = \dfrac{1}{2\pi\sigma_I^2} \, e^{-\frac{u^2 + v^2}{2\sigma_I^2(x,y)}}$
$\sigma_I(x,y) = f(I(x,y)) = r\,I(x,y) + q$
where $O_I(u,v)$ is the kernel function in the local neighborhood, and $u$ and $v$ are the coordinate values within that neighborhood. $\sigma_I$ is the standard deviation of the kernel function, defined as a linear function of the indeterminate degree; $r$ and $q$ are the coefficients of this linear function, controlling the standard deviation according to the indeterminacy value. Since $\sigma_I$ becomes large at a high indeterminate degree, the IF smooths the current pixel strongly using its neighbors, while at a low indeterminate degree $\sigma_I$ is small and the IF performs less smoothing.
$T'(x,y) = T(x,y) \ast O_I(u,v) = \sum_{v=y-m/2}^{y+m/2} \; \sum_{u=x-m/2}^{x+m/2} T(x-u, y-v)\, O_I(u,v)$
where $T'$ is the result of the indeterminacy filtering and $m$ is the size of the filter kernel.
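A direct (unoptimized) sketch of this filter: each output pixel is smoothed with a Gaussian kernel whose standard deviation follows $\sigma_I(x,y) = r\,I(x,y) + q$. The values of r, q and the kernel size m are illustrative, not the paper's settings, and the kernel is normalized by its sum rather than the analytic $1/(2\pi\sigma^2)$ factor, a common practical choice.

```python
import numpy as np

def indeterminacy_filter(T, I, r=0.25, q=0.5, m=7):
    """Smooth T adaptively: high indeterminacy I -> larger sigma -> stronger smoothing."""
    pad = m // 2
    Tp = np.pad(T, pad, mode='reflect')
    offsets = np.arange(-pad, pad + 1)
    uu, vv = np.meshgrid(offsets, offsets, indexing='ij')
    T_out = np.empty_like(T, dtype=float)
    for x in range(T.shape[0]):
        for y in range(T.shape[1]):
            sigma = r * I[x, y] + q                            # sigma_I(x, y)
            kernel = np.exp(-(uu ** 2 + vv ** 2) / (2.0 * sigma ** 2))
            kernel /= kernel.sum()                             # normalized Gaussian kernel
            T_out[x, y] = (Tp[x:x + m, y:y + m] * kernel).sum()
    return T_out
```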

2.3. Line Structure Enhancement

A multiscale filter is employed on the image to enhance line-like structures [17]. The Hessian matrix of local second-order partial derivatives is computed, and a line-likeness measure is defined using its eigenvalues. This measure describes the vessel regions in the fundus images and is given as follows:
$E_n(s) = \begin{cases} 0 & \text{if } \lambda_2 > 0 \text{ or } \lambda_3 > 0 \\ \left(1 - e^{-\frac{R_A^2}{2\alpha^2}}\right) \cdot e^{-\frac{R_B^2}{2\beta^2}} \cdot \left(1 - e^{-\frac{S^2}{2c^2}}\right) & \text{otherwise} \end{cases}$
$S = \sqrt{\sum_{j \le D} \lambda_j^2}$
$R_A = \dfrac{|\lambda_2|}{|\lambda_3|}$
$R_B = \dfrac{|\lambda_1|}{\sqrt{|\lambda_2 \lambda_3|}}$
where $\lambda_k$ is the eigenvalue of the Hessian matrix with the $k$-th smallest magnitude, $D$ is the dimension of the image, and $\alpha$, $\beta$ and $c$ are thresholds that control the sensitivity of the line filter to the measures $R_A$, $R_B$ and $S$.
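This measure is the multiscale vesselness of Frangi et al. [17], for which scikit-image provides an implementation. A hedged sketch follows: T_prime stands for the indeterminacy-filtered image from the previous step, the sigmas, alpha, beta and gamma values are illustrative rather than the paper's settings, and black_ridges may need flipping depending on the vessel contrast in T′.

```python
from skimage.filters import frangi

# T_prime: 2-D float image after indeterminacy filtering (assumed available).
# Enhance tubular (vessel-like) structures across several scales.
En = frangi(T_prime, sigmas=range(1, 6), alpha=0.5, beta=0.5,
            gamma=15, black_ridges=True)
```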

2.4. Algorithm of the Proposed Approach

A retinal vessel detection approach is proposed using shearlet transform and indeterminacy filtering on fundus images. The shearlet transform is employed to describe the vessel features and map the green channel of the fundus image into the NS domain. An indeterminacy filter is used to remove the indeterminacy information in the neutrosophic set. A multiscale filter is utilized to enhance the vessel regions. Finally, the vessels are detected via a neural network classifier using the neutrosophic image and the enhanced image. The proposed method is summarized as follows:
  • Take the shearlet transform of the green channel Ig;
  • Transform Ig into the neutrosophic set domain using the shearlet transform results; the neutrosophic components are denoted as T and I;
  • Perform indeterminacy filtering on T using I; the result is denoted as T′;
  • Perform the line-like structure enhancement filter on T′ to obtain En;
  • Form the feature vector FV = [T′ I En] as the input of the neural network;
  • Train the neural network as a classifier to identify the vessel pixels;
  • Identify the vessel pixels using the classification results of the neural network.
The whole procedure is summarized in the flowchart in Figure 1.
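A sketch of how the per-pixel feature vectors FV = [T′ I En] can be assembled for the classifier; the array names are ours, and each map is assumed to have the same shape as the input image.

```python
import numpy as np

def build_feature_vectors(T_prime, I, En):
    """Stack the three feature maps into an (n_pixels, 3) matrix,
    one row per pixel, matching FV = [T' I En]."""
    return np.stack([T_prime.ravel(), I.ravel(), En.ravel()], axis=1)

# FV = build_feature_vectors(T_prime, I, En)
# labels = ground_truth_mask.ravel().astype(int)   # 1 = vessel, 0 = background
```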

3. Experimental Results

3.1. Retinal Fundus Image Datasets

In the experimental section, we test the proposed method on two publicly available datasets, namely the Digital Retinal Images for Vessel Extraction (DRIVE) and the Structured Analysis of the Retina (STARE) datasets.
The DRIVE database was obtained from a diabetic retinopathy screening program in the Netherlands. It was created to enable comparative studies on the segmentation of blood vessels in retinal images, and researchers can test their algorithms and compare their results with other studies on this database [23]. The DRIVE dataset contains 40 fundus images in total and has been divided into training and test sets [4], each containing 20 images. Each image was captured using 8 bits per color plane at 768 by 584 pixels. The field of view (FOV) of each image is circular with a diameter of approximately 540 pixels, and all images were cropped around the FOV.
The STARE (STructured Analysis of the REtina) project was designed and initiated in 1975 by Michael Goldbaum, M.D., at the University of California, San Diego. Clinical images were obtained by the Shiley Eye Center at the University of California, San Diego, and by the Veterans Administration Medical Center in San Diego [24]. The STARE dataset contains 400 raw images, 20 of which have hand-labeled blood vessel segmentation annotations [2].
In our experiment, we select the 20 images with ground-truth results in the training set of the DRIVE dataset as training samples, and use the 20 images in the DRIVE test set and 20 images from STARE for validation.

3.2. Experiment on Retinal Vessel Detection

The experimental results on DRIVE and STARE are shown in Figure 2 and Figure 3, respectively. The first columns of Figure 2 and Figure 3 show the input retinal images, the second columns show the ground-truth segmentations of the retinal vessels, and the third columns show the obtained results. We used a three-layer neural network classifier: the input layer contained three nodes, the hidden layer contained 20 nodes, and the output layer contained one node. The classifier was trained with the scaled conjugate gradient backpropagation algorithm. The learning rate was set to 0.001 and the momentum coefficient to 0.01. About 1000 iterations were used during training of the network.
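A hedged sketch of such a classifier with scikit-learn: the paper's network is a 3-20-1 architecture trained with scaled conjugate gradient backpropagation, but scikit-learn has no SCG solver, so its default 'adam' optimizer stands in here. FV_train, y_train, FV_test and image_shape are assumed to be the feature vectors, labels and test-image shape prepared earlier.

```python
from sklearn.neural_network import MLPClassifier

clf = MLPClassifier(hidden_layer_sizes=(20,),      # one hidden layer with 20 nodes
                    activation='logistic',
                    learning_rate_init=0.001,      # learning rate from the paper
                    max_iter=1000,                 # roughly 1000 training iterations
                    random_state=0)
clf.fit(FV_train, y_train)

vessel_prob = clf.predict_proba(FV_test)[:, 1]     # per-pixel vessel score
vessel_mask = (vessel_prob > 0.5).reshape(image_shape)
```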
As seen in Figure 2 and Figure 3, the proposed method obtained reasonable results in which the large vessels were detected perfectly; only a few thin vessel regions were missed. Three ROC curves are drawn to demonstrate the proposed method’s performance on the DRIVE and STARE datasets.

4. Discussion

The evaluation of the results was carried out using the receiver operating characteristic (ROC) curve and the area under the ROC curve, denoted as AUC. An AUC value approaching 1 indicates a successful classifier, whereas an AUC equal to 0 indicates an unsuccessful one.
Figure 4 and Figure 5 illustrate the ROC curves on the test sets from the DRIVE and STARE datasets, respectively. The AUC values also underline the successful results of the proposed approach on both datasets: the calculated AUC value was 0.9476 for the DRIVE dataset and 0.9469 for the STARE dataset.
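The ROC curve and AUC can be computed from the per-pixel scores with standard tooling; a minimal sketch, assuming y_true holds the flattened ground-truth labels and vessel_prob the classifier scores from the previous step:

```python
from sklearn.metrics import roc_curve, roc_auc_score

fpr, tpr, thresholds = roc_curve(y_true, vessel_prob)   # points of the ROC curve
auc_value = roc_auc_score(y_true, vessel_prob)          # area under the ROC curve
print(f"AUC = {auc_value:.4f}")
```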
We further compared the obtained results with previously published results. These comparisons for DRIVE and STARE are listed in Table 1 and Table 2, respectively.
In Table 1, Maji et al. [14] developed an ensemble learning method using 12 deep CNN models for vessel segmentation, Fu et al. [25] proposed an approach combining CNN and conditional random field (CRF) layers, and Niemeijer et al. [26] presented a vessel segmentation algorithm based on pixel classification using a simple feature vector. The proposed method achieved the highest AUC value on the DRIVE dataset, and Fu et al. [25] achieved the second highest.
In Table 2, Kande et al. [10] proposed an unsupervised fuzzy-based vessel segmentation method, Jiang et al. [2] proposed an adaptive local thresholding method, and Hoover et al. [27] combined local and region-based properties to segment blood vessels in retinal images. The proposed method also obtained the highest AUC value on the STARE dataset.
In the proposed method, no post-processing procedure is applied to the classification results from the neural network. In the future, we will employ post-processing methods to improve the quality of the vessel detection.

5. Conclusions

This study proposes a new method for retinal vessel detection. It first maps the input retinal fundus images into the neutrosophic domain via shearlet transform. The neutrosophic domain images are then filtered with two neutrosophic filters for noise reduction, followed by feature extraction and classification. The presented approach was tested on the DRIVE and STARE datasets, and the results were evaluated quantitatively. The proposed approach outperformed the compared methods in terms of both evaluation measures, and the comparison with existing algorithms confirms its high accuracy. In the future, we will employ post-processing methods to improve the quality of the vessel detection.

Acknowledgments

The authors would like to thank the editors and anonymous reviewers for their helpful comments and suggestions.

Author Contributions

Yanhui Guo, Ümit Budak, Abdulkadir Şengür and Florentin Smarandache conceived and worked together to achieve this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kanski, J.J. Clinical Ophthalmology: A Systematic Approach; Butterworth-Heinemann: London, UK, 1989. [Google Scholar]
  2. Jiang, X.; Mojon, D. Adaptive local thresholding by verification-based multithreshold probing with application to vessel detection in retinal images. IEEE Trans. Pattern Anal. 2003, 25, 131–137. [Google Scholar] [CrossRef]
  3. Walter, T.; Klein, J.C. Segmentation of color fundus images of the human retina: Detection of the optic disc and the vascular tree using morphological techniques. Med. Data Anal. 2001, 2199, 282–287. [Google Scholar] [CrossRef]
  4. Soares, J.V.; Leandro, J.J.; Cesar, R.M.; Jelinek, H.F.; Cree, M.J. Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification. IEEE Trans. Med. Imaging 2006, 25, 1214–1222. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Dash, J.; Bhoi, N. Detection of Retinal Blood Vessels from Ophthalmoscope Images Using Morphological Approach. ELCVIA 2017, 16, 1–14. [Google Scholar] [CrossRef]
  6. Zhao, Y.Q.; Wang, X.H.; Wang, X.F.; Shih, F.Y. Retinal vessels segmentation based on level set and region growing. Pattern Recognit. 2014, 47, 2437–2446. [Google Scholar] [CrossRef]
  7. Levet, F.; Duval-Poo, M.A.; De Vito, E.; Odone, F. Retinal image analysis with shearlets. In Proceedings of the Smart Tools and Apps in computer Graphics, Genova, Italy, 3–4 October 2016; pp. 151–156. [Google Scholar] [CrossRef]
  8. Bankhead, P.; Scholfield, C.N.; McGeown, J.G.; Curtis, T.M. Fast retinal vessel detection and measurement using wavelets and edge location refinement. PLoS ONE 2012, 7, e32435. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  9. Staal, J.; Abràmoff, M.D.; Niemeijer, M.; Viergever, M.A.; Van Ginneken, B. Ridge-based vessel segmentation in color images of the retina. IEEE Trans. Med. Imaging 2004, 23, 501–509. [Google Scholar] [CrossRef] [PubMed]
  10. Kande, G.B.; Subbaiah, P.V.; Savithri, T.S. Unsupervised fuzzy based vessel segmentation in pathological digital fundus images. J. Med. Syst. 2010, 34, 849–858. [Google Scholar] [CrossRef] [PubMed]
  11. Chen, G.; Chen, M.; Li, J.; Zhang, E. Retina Image Vessel Segmentation Using a Hybrid CGLI Level Set Method. Biomed. Res. Int. 2017, 2017, 1263056. [Google Scholar] [CrossRef] [PubMed]
  12. Wang, S.; Yin, Y.; Cao, G.; Wei, B.; Zheng, Y.; Yang, G. Hierarchical retinal blood vessel segmentation based on feature and ensemble learning. Neurocomputing 2015, 149(Part B), 708–717. [Google Scholar] [CrossRef]
  13. Liskowski, P.; Krawiec, K. Segmenting Retinal Blood Vessels with Deep Neural Networks. IEEE Trans. Med. Imaging 2016, 35, 2369–2380. [Google Scholar] [CrossRef] [PubMed]
  14. Maji, D.; Santara, A.; Mitra, P.; Sheet, D. Ensemble of deep convolutional neural networks for learning to detect retinal vessels in fundus images. CoRR 2016, arXiv:1603.04833. [Google Scholar]
  15. Yi, S.; Labate, D.; Easley, G.R.; Krim, H. A shearlet approach to edge analysis and detection. IEEE Trans. Image Process. 2009, 18, 929–941. [Google Scholar] [CrossRef] [PubMed]
  16. Easley, G.; Labate, D.; Lim, W.Q. Sparse directional image representations using the discrete shearlet transform. Appl. Comput. Harmon A 2008, 25, 25–46. [Google Scholar] [CrossRef]
  17. Frangi, A.F.; Niessen, W.J.; Vincken, K.L.; Viergever, M.A. Multiscale vessel enhancement filtering. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Berlin/Heidelberg, Germany, 1998; pp. 130–137. [Google Scholar] [CrossRef]
  18. Labate, D.; Lim, W.Q.; Kutyniok, G.; Weiss, G. Sparse multidimensional representation using shearlets. In Proceedings of the SPIE 5914 Wavelets XI Optics and Photonics, San Diego, CA, USA, 17 September 2005; p. 59140U. [Google Scholar] [CrossRef]
  19. Smarandache, F. A Unifying Field in Logics: Neutrosophic Logic. Neutrosophy, Neutrosophic Set, Neutrosophic Probability; American Research Press: Rehoboth, NM, USA, 2003. [Google Scholar]
  20. Guo, Y.; Xia, R.; Şengür, A.; Polat, K. A novel image segmentation approach based on neutrosophic c-means clustering and indeterminacy filtering. Neural Comput. Appl. 2016, 28, 3009–3019. [Google Scholar] [CrossRef]
  21. Guo, Y.; Şengür, A.; Ye, J. A novel image thresholding algorithm based on neutrosophic similarity score. Measurement 2014, 58, 175–186. [Google Scholar] [CrossRef]
  22. Guo, Y.; Sengur, A. A novel color image segmentation approach based on neutrosophic set and modified fuzzy c-means. Circ. Syst. Signal Process. 2013, 32, 1699–1723. [Google Scholar] [CrossRef]
  23. DRIVE: Digital Retinal Images for Vessel Extraction. Available online: http://www.isi.uu.nl/Research/Databases/DRIVE/ (accessed on 3 May 2017).
  24. STructured Analysis of the Retina. Available online: http://cecas.clemson.edu/~ahoover/stare/ (accessed on 3 May 2017).
  25. Fu, H.; Xu, Y.; Lin, S.; Wong, D.W.K.; Liu, J. Deepvessel: Retinal vessel segmentation via deep learning and conditional random field. In Proceedings of the 19th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2016, Athens, Greece, 17–21 October 2016; pp. 132–139. [Google Scholar] [CrossRef]
  26. Niemeijer, M.; Staal, J.; Van Ginneken, B.; Loog, M.; Abramoff, M.D. Comparative study of retinal vessel segmentation methods on a new publicly available database. In Proceedings of the SPIE Medical Imaging 2004: Image Processing, San Diego, CA, USA, 14–19 February 2004; Volume 5370, pp. 648–656. [Google Scholar] [CrossRef]
  27. Hoover, A.D.; Kouznetsova, V.; Goldbaum, M. Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response. IEEE Trans. Med. Imaging 2000, 19, 203–210. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Flowchart for retinal vessel detection.
Figure 2. Detection results of the proposed method on three samples randomly taken from the Digital Retinal Images for Vessel Extraction (DRIVE) dataset: (a) original images; (b) corresponding ground truth; (c) detection results.
Figure 3. Detection results of the proposed method on three samples randomly taken from the Structured Analysis of the Retina (STARE) dataset: (a) original images; (b) corresponding ground truth; (c) detection results.
Figure 4. ROC curve in the test set of DRIVE.
Figure 5. ROC curve in STARE.
Table 1. Comparison with the other algorithms on DRIVE dataset.
Method | AUC
Maji et al. [14] | 0.9283
Fu et al. [25] | 0.9470
Niemeijer et al. [26] | 0.9294
Proposed method | 0.9476
Table 2. Comparison with the other algorithms on the STARE dataset.
Method | AUC
Jiang et al. [2] | 0.9298
Hoover et al. [27] | 0.7590
Kande et al. [10] | 0.9298
Proposed method | 0.9469
