Efficient Retinal Vessel Segmentation with 78K Parameters
Abstract
1. Introduction
- DSAE-Net Architecture: A novel cascaded lightweight architecture for retinal vessel segmentation, built on a parameterized W-shaped topology that performs dual-stage feature refinement for progressive segmentation. Experiments show that DSAE-Net ranks among the top-performing models in the class of highly efficient, compact networks.
- Skeleton Distance Loss (SDL): A novel loss function that overcomes the limitations of boundary loss by leveraging vessel skeletons to handle severe class imbalance.
- Convolutional Module with Feature Attention (CMFA): A novel feature-extraction module that fuses grouped convolutions with feature-weight redistribution. Grounded in channel grouping and weight redistribution, CMFA effectively enlarges the receptive field while preserving spatial precision, easing the trade-off between compactness and performance.
- Coordinate Attention Gate (CAG): A plug-and-play module for direction-sensitive feature reweighting during fusion, effectively suppressing the noise that skip connections tend to pass through. Combined with the cascaded architecture's refinement strategy, it significantly improves model precision.
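The direction-sensitive reweighting behind CAG can be illustrated with a minimal sketch. This is not the paper's implementation: it omits the module's convolutional projections and skip-feature fusion and keeps only the core idea of coordinate attention [36], pooling along each spatial axis separately so the gate retains positional information along the other axis. Function names and shapes are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def coordinate_attention_gate(feat):
    """Direction-aware reweighting of a (C, H, W) feature map.

    Pooling over width keeps row positions; pooling over height keeps
    column positions. The two gates are broadcast back over the map,
    so each pixel is weighted by its row and column context.
    """
    pool_h = feat.mean(axis=2, keepdims=True)  # (C, H, 1): pooled over width
    pool_w = feat.mean(axis=1, keepdims=True)  # (C, 1, W): pooled over height
    attn_h = sigmoid(pool_h)                   # per-row gate in (0, 1)
    attn_w = sigmoid(pool_w)                   # per-column gate in (0, 1)
    return feat * attn_h * attn_w              # broadcast to (C, H, W)

x = np.random.randn(8, 16, 16)
y = coordinate_attention_gate(x)
print(y.shape)  # (8, 16, 16)
```

Because both gates lie in (0, 1), the module can only attenuate features, which is the sense in which it suppresses skip-connection noise.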
2. Methods
2.1. Dual U-Net Cascaded Architecture
- k denotes the network depth;
- m denotes the number of kernels in the top convolutional layer.
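Assuming k and m control a standard encoder-decoder hierarchy, the W-shaped cascade can be sketched as two U-shaped passes in sequence, the second refining the output of the first. The toy sketch below shows only the topology; convolutions, the CMFA blocks, CAG gating, and the width parameter m are omitted, and all names are illustrative.

```python
import numpy as np

def down(x):
    """2x downsampling by average pooling on a (C, H, W) array."""
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def up(x):
    """2x nearest-neighbour upsampling."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def unet_pass(x, k):
    """One U-shaped pass of depth k: encode, then decode with skips.
    In the full model each skip would pass through a CAG gate."""
    skips = []
    for _ in range(k):
        skips.append(x)
        x = down(x)
    for skip in reversed(skips):
        x = up(x) + skip  # skip connection
    return x

def w_shaped_forward(x, k=3):
    """W-shaped cascade: a second U refines the first U's output."""
    coarse = unet_pass(x, k)
    refined = unet_pass(coarse, k)  # dual-stage refinement
    return refined

x = np.random.randn(1, 32, 32)
print(w_shaped_forward(x).shape)  # (1, 32, 32)
```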
2.2. Skeleton Distance Loss
2.3. Lightweight Contextual Convolution Module
2.4. Optimization for Decoder
3. Results
3.1. Datasets
- DRIVE [37]: A foundational dataset containing 40 JPEG color fundus images (565 × 584 pixels), including 7 cases with pathologies. It provides 20 training and 20 test images; the training set was further split into 15 images for training and 5 for validation. Purpose: the primary benchmark for general segmentation performance and model comparison.
- CHASE_DB1 [38]: A dataset comprising 28 retinal images (999 × 960 pixels) captured from both eyes of 14 children. The dataset was randomly divided into 16 images for training, 6 for validation, and 6 for testing. Purpose: validating model applicability and robustness on a younger demographic cohort.
- HRF (High-Resolution Fundus) [39]: A dataset of high-resolution images (3504 × 2336 pixels) from 15 healthy subjects, 15 diabetic retinopathy patients, and 15 glaucoma patients. Images were split into 20 for training, 5 for validation, and 20 for testing. Purpose: testing performance on high-resolution imagery, where fine details such as capillaries must be resolved under varying health conditions.
- STARE [40]: A dataset of 20 retinal images (700 × 605 pixels) exhibiting a variety of pathologies, split into 12 for training, 4 for validation, and 4 for testing. Purpose: evaluating robustness and segmentation accuracy in the presence of diverse pathological manifestations.
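For reference, the splits described above can be collected into a single configuration. The dictionary structure is illustrative; the counts come from the text.

```python
# Image counts per split, as listed in Section 3.1.
SPLITS = {
    "DRIVE":     {"train": 15, "val": 5, "test": 20},
    "CHASE_DB1": {"train": 16, "val": 6, "test": 6},
    "HRF":       {"train": 20, "val": 5, "test": 20},
    "STARE":     {"train": 12, "val": 4, "test": 4},
}

for name, split in SPLITS.items():
    print(name, sum(split.values()))  # total images per dataset
```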
3.2. Details of Implementation
3.3. Experimental Results
3.3.1. Visual Comparison
3.3.2. Parametric Design Study
3.3.3. Ablation Study on Skeleton Distance Loss
3.3.4. Ablation Study
4. Conclusions and Discussion
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A
Model | Acc (%) | AUC (%) | Dice (%) | MCC (%) | HD95 |
---|---|---|---|---|---|
UNet | 94.87 ± 0.08 | 96.99 ± 0.12 | 80.00 ± 0.43 | 77.06 ± 0.45 | 37.96 ± 2.96 |
Attention UNet | 95.42 ± 0.13 | 98.04 ± 0.07 | 82.32 ± 0.09 | 79.74 ± 0.11 | 20.03 ± 1.98 |
SA-UNet | 95.44 ± 0.09 | 97.95 ± 0.12 | 82.42 ± 0.08 | 79.82 ± 0.12 | 20.84 ± 1.24 |
ConvUNeXt | 95.46 ± 0.12 | 97.76 ± 0.10 | 82.31 ± 0.20 | 79.71 ± 0.26 | 15.63 ± 2.66 |
ULite | 95.40 ± 0.27 | 97.48 ± 0.29 | 81.77 ± 0.44 | 79.14 ± 0.61 | 21.83 ± 2.92 |
DSAE-Net | 95.48 ± 0.02 | 98.00 ± 0.04 | 82.44 ± 0.10 | 79.86 ± 0.11 | 12.12 ± 2.17 |
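HD95 in the tables denotes the 95th-percentile Hausdorff distance between the predicted and ground-truth vessel boundaries; using the 95th percentile instead of the maximum makes the metric robust to a few outlier pixels. A minimal NumPy sketch, computed brute-force over two small point sets (a KD-tree would be used for full-size masks), is:

```python
import numpy as np

def hd95(pts_a, pts_b):
    """95th-percentile Hausdorff distance between two (N, 2) point sets."""
    # Pairwise Euclidean distances between every point in A and B.
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    d_ab = d.min(axis=1)  # each point in A to its nearest point in B
    d_ba = d.min(axis=0)  # each point in B to its nearest point in A
    # Symmetric: take the larger of the two directed 95th percentiles.
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))

a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[0.0, 1.0], [1.0, 1.0]])
print(hd95(a, b))  # 1.0
```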
References
- Fathi, A.; Naghsh-Nilchi, A.R. Automatic wavelet-based retinal blood vessels segmentation and vessel diameter estimation. Biomed. Signal Process. Control 2013, 8, 71–80.
- Qin, Q.; Chen, Y. A review of retinal vessel segmentation for fundus image analysis. Eng. Appl. Artif. Intell. 2024, 128, 107454.
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Advances in Neural Information Processing Systems 30; The MIT Press: Cambridge, MA, USA, 2017.
- Wang, L.; Pan, Z.; Wang, J. A Review of Reinforcement Learning Based Intelligent Optimization for Manufacturing Scheduling. Complex Syst. Model. Simul. 2021, 1, 257–270.
- Cui, Z.; Xue, F.; Cai, X.; Cao, Y.; Wang, G.; Chen, J. Detection of Malicious Code Variants Based on Deep Learning. IEEE Trans. Ind. Inf. 2018, 14, 3187–3196.
- Zhang, K.; Su, Y.; Guo, X.; Qi, L.; Zhao, Z. MU-GAN: Facial Attribute Editing Based on Multi-Attention Mechanism. IEEE/CAA J. Autom. Sin. 2021, 8, 1614–1626.
- Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. CBAM: Convolutional Block Attention Module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19.
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv 2020, arXiv:2010.11929.
- Chen, C.; Chuah, J.H.; Ali, R.; Wang, Y. Retinal Vessel Segmentation Using Deep Learning: A Review. IEEE Access 2021, 9, 111985–112004.
- Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin Transformer: Hierarchical Vision Transformer Using Shifted Windows. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, BC, Canada, 10–17 October 2021; pp. 9992–10002.
- Liu, Y.; Wang, H.; Chen, Z.; Huangliang, K.; Zhang, H. TransUNet+: Redesigning the skip connection to enhance features in medical image segmentation. Knowl.-Based Syst. 2022, 256, 109859.
- Yang, Z.; Farsiu, S. Directional Connectivity-Based Segmentation of Medical Images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 11525–11535.
- Xie, Y.H.; Huang, B.S.; Li, F. UnetTransCNN: Integrating Transformers with Convolutional Neural Networks for Enhanced Medical Image Segmentation. Front. Oncol. 2025, 15, 1467672.
- Yan, Y.; Qin, B. Research on Lung Nodule Segmentation Method Based on RSE-Vnet Convolutional Network. J. Hunan Univ. Technol. 2025, 39, 46–51.
- Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
- Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495.
- Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.P.; Yuille, A.L. Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs. arXiv 2014, arXiv:1412.7062.
- Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 834–848.
- Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 801–818.
- Sun, K.; Chen, Y.; Chao, Y.; Geng, J.; Chen, Y. A retinal vessel segmentation method based improved U-Net model. Biomed. Signal Process. Control 2023, 82, 104574.
- Li, W.; Zhang, M.; Chen, D. Fundus Retinal Blood Vessel Segmentation Based on Active Learning. In Proceedings of the 2020 International Conference on Computer Information and Big Data Applications (CIBDA), Guiyang, China, 17–19 April 2020; pp. 264–268.
- Uysal, E.; Güraksin, G.E. Computer-aided retinal vessel segmentation in retinal images: Convolutional neural networks. Multimed. Tools Appl. 2021, 80, 3505–3528.
- Galdran, A.; Anjos, A.; Dolz, J.; Chakor, H.; Lombaert, H.; Ayed, I.B. State-of-the-art retinal vessel segmentation with minimalistic models. Sci. Rep. 2022, 12, 6174.
- Kirillov, A.; Mintun, E.; Ravi, N.; Mao, H.; Rolland, C.; Gustafson, L.; Xiao, T.; Whitehead, S.; Berg, A.C.; Lo, W.Y.; et al. Segment Anything. arXiv 2023, arXiv:2304.02643.
- Ma, J.; He, Y.; Li, F.; Han, L.; You, C.; Wang, B. Segment anything in medical images. Nat. Commun. 2024, 15, 654.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Munich, Germany, 5–9 October 2015; pp. 234–241.
- Oktay, O.; Schlemper, J.; Folgoc, L.L.; Lee, M.J.; Heinrich, M.P.; Misawa, K.; Mori, K.; McDonagh, S.G.; Hammerla, N.Y.; Kainz, B.; et al. Attention U-Net: Learning Where to Look for the Pancreas. arXiv 2018, arXiv:1804.03999.
- Chen, J.; Lu, Y.; Yu, Q.; Luo, X.; Adeli, E.; Wang, Y.; Lu, L.; Yuille, A.L.; Zhou, Y. TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation. arXiv 2021, arXiv:2102.04306.
- Han, Z.; Jian, M.; Wang, G.-G. ConvUNeXt: An efficient convolution neural network for medical image segmentation. Knowl.-Based Syst. 2022, 253, 109512.
- Yin, Y.; Han, Z.; Jian, M.; Wang, G.-G.; Chen, L.; Wang, R. AMSUnet: A neural network using atrous multi-scale convolution for medical image segmentation. Comput. Biol. Med. 2023, 162, 107120.
- Dinh, B.D.; Nguyen, T.T.; Tran, T.T.; Pham, V.T. 1M parameters are enough? A lightweight CNN-based model for medical image segmentation. In Proceedings of the 2023 Asia Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Taipei, Taiwan, 31 October–3 November 2023; pp. 1279–1284.
- Valanarasu, J.M.J.; Patel, V.M. UNeXt: MLP-Based Rapid Medical Image Segmentation Network. In Proceedings of the Medical Image Computing and Computer Assisted Intervention (MICCAI), Singapore, 18–22 September 2022; pp. 23–33.
- Kervadec, H.; Bouchtiba, J.; Desrosiers, C.; Granger, E.; Dolz, J.; Ayed, I.B. Boundary loss for highly unbalanced segmentation. In Proceedings of the International Conference on Medical Imaging with Deep Learning (MIDL), London, UK, 8–10 July 2019; pp. 285–296.
- Ma, J.; Ren, X.; Tsviatkou, V.Y. A novel fully parallel skeletonization algorithm. Pattern Anal. Appl. 2022, 25, 169–188.
- Li, Y.; Yao, T.; Pan, Y.; Mei, T. Contextual Transformer Networks for Visual Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 1489–1500.
- Hou, Q.; Zhou, D.; Feng, J. Coordinate Attention for Efficient Mobile Network Design. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 13708–13717.
- Staal, J.; Abramoff, M.D.; Niemeijer, M.; Viergever, M.A.; Ginneken, B.v. Ridge-based vessel segmentation in color images of the retina. IEEE Trans. Med. Imaging 2004, 23, 501–509.
- Owen, C.G.; Rudnicka, A.R.; Nightingale, C.M.; Mullen, R.; Barman, S.A.; Sattar, N.; Cook, D.G.; Whincup, P.H. Retinal Arteriolar Tortuosity and Cardiovascular Risk Factors in a Multi-Ethnic Population Study of 10-Year-Old Children; the Child Heart and Health Study in England (CHASE). Arterioscler. Thromb. Vasc. Biol. 2011, 31, 1933–1938.
- Budai, A.; Bock, R.; Maier, A.; Hornegger, J.; Michelson, G. Robust Vessel Segmentation in Fundus Images. Int. J. Biomed. Imaging 2013, 2013, 154860.
- Hoover, A.D.; Kouznetsova, V.; Goldbaum, M. Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response. IEEE Trans. Med. Imaging 2000, 19, 203–210.
- Milletari, F.; Navab, N.; Ahmadi, S.A. V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation. In Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 565–571.
- Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015.
- Jha, D.; Smedsrud, P.H.; Riegler, M.A.; Johansen, D.; Lange, T.D.; Halvorsen, P.; Johansen, H.D. ResUNet++: An Advanced Architecture for Medical Image Segmentation. In Proceedings of the 2019 IEEE International Symposium on Multimedia (ISM), San Diego, CA, USA, 9–11 December 2019; pp. 225–2255.
- Guo, C.; Szemenyei, M.; Yi, Y.; Wang, W.; Chen, B.; Fan, C. SA-UNet: Spatial Attention U-Net for Retinal Vessel Segmentation. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; pp. 1236–1242.
- Liu, W.; Yang, H.; Tian, T.; Cao, Z.; Pan, X.; Xu, W.; Jin, Y.; Gao, F. Full-Resolution Network and Dual-Threshold Iteration for Retinal Vessel and Coronary Angiograph Segmentation. IEEE J. Biomed. Health Inform. 2022, 26, 4623–4634.
- Iqbal, A.; Sharif, M. PDF-UNet: A semi-supervised method for segmentation of breast tumor images using a U-shaped pyramid-dilated network. Expert Syst. Appl. 2023, 221, 119718.
- Zhang, W.; Qu, S.; Feng, Y. LMFR-Net: Lightweight multi-scale feature refinement network for retinal vessel segmentation. Pattern Anal. Appl. 2025, 28, 44.
- Zhang, J.; Luan, Z.; Ni, L.; Qi, L.; Gong, X. MSDANet: A multi-scale dilation attention network for medical image segmentation. Biomed. Signal Process. Control 2024, 90, 105889.
Dataset | Train | Val | Test | Image Size (Pixels) | Characteristics / Purpose |
---|---|---|---|---|---|
DRIVE [37] | 15 | 5 | 20 | 565 × 584 | General benchmark; includes pathologies |
CHASE_DB1 [38] | 16 | 6 | 6 | 999 × 960 | Images from children; demographic robustness |
HRF [39] | 20 | 5 | 20 | 3504 × 2336 | High-resolution; healthy/DR/glaucoma subjects |
STARE [40] | 12 | 4 | 4 | 700 × 605 | Diverse pathologies; robustness testing |
Model | Year | Parameters (M) | FLOPs (G) | Time (CPU) (s) * | Time (GPU) (s) * |
---|---|---|---|---|---|
UNet | 2015 | 7.76 | 54.85 | 0.97 | 0.86 |
Attention UNet | 2018 | 34.88 | 266.53 | 3.55 | 2.59 |
ResUNet++ | 2019 | 4.06 | 63.26 | 1.65 | 1.07 |
SA-UNet | 2020 | 0.48 | 10.61 | 1.02 | 0.71 |
ConvUNeXt | 2022 | 3.51 | 29.01 | 1.53 | 1.02 |
FR-UNet | 2022 | 5.72 | 235.92 | 5.10 | 3.58 |
PDF-UNet | 2023 | 10.52 | 44.74 | 1.13 | 0.77 |
ULite | 2023 | 0.88 | 3.03 | 0.86 | 0.55 |
MSDANet | 2024 | 55.55 | 650.91 | 6.02 | 2.35 |
LMFR-Net | 2025 | 0.07 | 13.97 | 1.05 | 0.70 |
DSAE-Net | 2025 | 0.08 | 4.76 | 1.00 | 0.67 |
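The parameter and FLOP budgets compared in the table above follow from per-layer counting. The sketch below shows the standard counting rules for a convolution layer, including the grouped-convolution saving that compact designs in the CMFA spirit exploit; the function names are illustrative and this is not the paper's accounting code.

```python
def conv_params(c_in, c_out, k):
    """Weight and bias count of one dense k x k convolution layer."""
    return c_in * c_out * k * k + c_out

def conv_macs(c_in, c_out, k, h, w):
    """Multiply-accumulate operations for that layer on an h x w output map."""
    return c_in * c_out * k * k * h * w

def grouped_conv_params(c_in, c_out, k, groups):
    """Grouped convolution: cross-channel connections shrink by the
    group count, a common lever in lightweight architectures."""
    return (c_in // groups) * c_out * k * k + c_out

# A dense 3x3 conv on 64 channels vs. the same layer with 8 groups:
print(conv_params(64, 64, 3))             # 36928
print(grouped_conv_params(64, 64, 3, 8))  # 4672
```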
Dataset | Model | Acc (%) | AUC (%) | Dice (%) | MCC (%) | HD95 |
---|---|---|---|---|---|---|
DRIVE | UNet * | 94.87 | 96.99 | 80.00 | 77.06 | 37.96 |
| Attention UNet * | 95.42 | 98.04 | 82.32 | 79.74 | 20.03 |
| ResUNet++ | 94.94 | 97.12 | 80.18 | 77.28 | 37.75 |
| SA-UNet * | 95.44 | 97.95 | 82.42 | 79.82 | 20.84 |
| ConvUNeXt * | 95.46 | 97.76 | 82.31 | 79.71 | 15.63 |
| FR-UNet | 95.36 | 97.72 | 81.72 | 79.06 | 26.54 |
| PDF-UNet | 95.35 | 97.80 | 81.94 | 79.27 | 24.50 |
| ULite * | 95.40 | 97.48 | 81.77 | 79.14 | 21.83 |
| MSDANet | 95.48 | 97.85 | 82.43 | 79.84 | 27.43 |
| LMFR-Net | 95.29 | 97.64 | 81.69 | 78.99 | 26.58 |
| DSAE-Net * | 95.48 | 98.00 | 82.44 | 79.86 | 12.12 |
CHASE_DB1 | UNet | 95.89 | 97.99 | 79.11 | 76.88 | 64.42 |
| Attention UNet | 96.22 | 98.45 | 81.22 | 79.27 | 34.74 |
| ResUNet++ | 96.25 | 98.48 | 81.29 | 79.33 | 26.25 |
| SA-UNet | 96.14 | 98.52 | 80.99 | 78.02 | 35.59 |
| ConvUNeXt | 96.15 | 98.41 | 80.90 | 78.91 | 36.42 |
| FR-UNet | 96.15 | 98.51 | 80.95 | 78.97 | 31.59 |
| PDF-UNet | 96.07 | 98.36 | 80.63 | 78.62 | 41.17 |
| ULite | 95.95 | 98.11 | 79.91 | 77.80 | 43.92 |
| MSDANet | 96.14 | 98.09 | 80.89 | 78.90 | 40.75 |
| LMFR-Net | 96.04 | 98.42 | 80.56 | 78.99 | 36.84 |
| DSAE-Net | 96.25 | 98.56 | 81.46 | 79.54 | 22.65 |
HRF | UNet | 96.11 | 97.44 | 78.79 | 76.65 | 170.60 |
| Attention UNet | 96.37 | 98.01 | 80.46 | 78.46 | 127.14 |
| ResUNet++ | 95.91 | 96.97 | 77.41 | 75.20 | 225.36 |
| SA-UNet | 96.29 | 97.94 | 80.10 | 78.06 | 128.25 |
| FR-UNet | 96.26 | 97.73 | 79.72 | 77.66 | 140.58 |
| ConvUNeXt | 96.21 | 97.57 | 79.41 | 77.33 | 132.63 |
| PDF-UNet | 96.27 | 97.76 | 79.81 | 77.75 | 144.42 |
| ULite | 96.04 | 97.24 | 78.35 | 76.19 | 151.50 |
| MSDANet | 96.26 | 97.70 | 79.92 | 77.86 | 150.30 |
| LMFR-Net | 96.30 | 97.84 | 80.03 | 77.98 | 147.20 |
| DSAE-Net | 96.31 | 97.90 | 80.12 | 78.10 | 126.25 |
STARE | UNet | 96.45 | 97.87 | 76.60 | 75.62 | 57.25 |
| Attention UNet | 96.64 | 97.85 | 79.02 | 77.53 | 49.75 |
| ResUNet++ | 96.61 | 98.15 | 80.01 | 78.19 | 38.38 |
| SA-UNet | 96.70 | 98.55 | 80.71 | 78.92 | 30.67 |
| ConvUNeXt | 96.80 | 98.34 | 81.53 | 79.78 | 27.00 |
| FR-UNet | 96.73 | 98.48 | 79.56 | 78.13 | 35.75 |
| PDF-UNet | 96.44 | 98.32 | 80.05 | 78.13 | 30.27 |
| ULite | 96.54 | 97.86 | 80.01 | 78.12 | 24.50 |
| MSDANet | 96.19 | 98.07 | 79.33 | 77.40 | 30.33 |
| LMFR-Net | 96.52 | 97.91 | 79.75 | 77.85 | 35.25 |
| DSAE-Net | 96.68 | 98.73 | 81.92 | 80.25 | 27.25 |
Dataset | Acc (%) | AUC (%) | Dice (%) | MCC (%) |
---|---|---|---|---|
DRIVE | 95.48 ± 0.02 | 98.00 ± 0.04 | 82.44 ± 0.11 | 79.86 ± 0.12 |
CHASE_DB1 | 96.25 ± 0.03 | 98.56 ± 0.02 | 81.46 ± 0.22 | 79.54 ± 0.23 |
HRF | 96.31 ± 0.04 | 97.90 ± 0.05 | 80.12 ± 0.08 | 78.10 ± 0.09 |
STARE | 96.68 ± 0.03 | 98.73 ± 0.02 | 81.92 ± 0.13 | 80.25 ± 0.15 |
Metric | k = 3, m = 4 | k = 3, m = 8 | k = 3, m = 16 | k = 4, m = 8 | k = 4, m = 16 | k = 4, m = 32 |
---|---|---|---|---|---|---|
Param (M) | 0.02 | 0.08 | 0.31 | 0.32 | 1.27 | 5.03 |
FLOPs (G) | 1.33 | 4.76 | 18.57 | 6.81 | 26.23 | 102.93 |
Dice (Val) (%) | 81.94 | 82.82 | 83.08 | 82.95 | 83.40 | 83.10 |
AUC (Val) (%) | 98.03 | 98.18 | 98.26 | 98.19 | 98.36 | 98.30 |
Dice (Test) (%) | 81.77 | 82.44 | 82.31 | 82.41 | 82.68 | 81.98 |
AUC (Test) (%) | 97.71 | 98.00 | 97.76 | 98.01 | 98.04 | 97.74 |
Metric | SDL Weight = 0.0 | SDL Weight = 0.3 | SDL Weight = 0.7 | SDL Weight = 1.0 |
---|---|---|---|---|
Dice (Val) (%) | 82.77 | 82.81 | 82.82 | 82.80 |
HD95 (Val) | 21.00 | 17.70 | 10.50 | 9.00 |
Dice (Test) (%) | 82.40 | 82.41 | 82.44 | 82.44 |
HD95 (Test) | 23.05 | 20.45 | 12.68 | 15.35 |
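This excerpt does not give the exact SDL formulation, so the following is only an illustrative stand-in: a soft Dice term blended with a skeleton-distance penalty through the weight swept in the table above, assuming a precomputed distance transform of the ground-truth vessel skeleton. All function names are hypothetical.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss over probability maps."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def skeleton_distance_loss(pred, skel_dist):
    """Penalise foreground probability in proportion to its distance
    from the ground-truth skeleton (illustrative stand-in for SDL).
    `skel_dist` is a precomputed distance-transform map that is zero
    on the skeleton and grows away from it."""
    return (pred * skel_dist).mean()

def total_loss(pred, target, skel_dist, w=0.7):
    """Blend the two terms via the weight swept in the ablation."""
    return (1.0 - w) * dice_loss(pred, target) + w * skeleton_distance_loss(pred, skel_dist)
```

With this form, a perfect prediction that lies on the skeleton drives both terms to zero, while false positives far from any vessel are penalised in proportion to their distance, which is how a skeleton-based term can sharpen boundaries (low HD95) without the instability of a raw boundary loss under extreme class imbalance.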
Dataset | Model | Param (M) | Acc (%) | AUC (%) | Dice (%) | MCC (%) |
---|---|---|---|---|---|---|
DRIVE | w/o CAG & CMFA | 0.06 | 94.99 | 97.11 | 81.20 | 78.26 |
| w/o CAG | 0.06 | 95.17 | 97.49 | 81.22 | 78.46 |
| w/o CMFA | 0.08 | 95.46 | 97.95 | 82.20 | 79.59 |
| Full | 0.08 | 95.48 | 98.00 | 82.44 | 79.86 |
CHASE_DB1 | w/o CAG & CMFA | 0.06 | 96.02 | 98.12 | 79.24 | 77.01 |
| w/o CAG | 0.06 | 96.22 | 98.51 | 81.16 | 79.18 |
| w/o CMFA | 0.08 | 96.24 | 98.69 | 81.44 | 79.52 |
| Full | 0.08 | 96.25 | 98.56 | 81.46 | 79.54 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Zeng, Z.; Liu, J.; Huang, X.; Luo, K.; Yuan, X.; Zhu, Y. Efficient Retinal Vessel Segmentation with 78K Parameters. J. Imaging 2025, 11, 306. https://doi.org/10.3390/jimaging11090306