ELA-Net: An Efficient Lightweight Attention Network for Skin Lesion Segmentation
Abstract
1. Introduction
- We propose a lightweight segmentation model, ELANet, which achieves efficient skin lesion segmentation with an extremely low parameter count, facilitating deployment on resource-constrained clinical devices.
- We design a bilateral residual module (BRM) that integrates seamlessly into the model; its internal atrous convolutions and attention mechanisms capture feature information comprehensively, enhancing the model's segmentation accuracy.
- We construct a multi-scale attention fusion (MAF) module that combines output streams at multiple scales, effectively mitigating the information loss caused by downsampling and capturing global context information.
- We conduct extensive experiments on three public datasets (ISIC2016, ISIC2017, and ISIC2018), achieving state-of-the-art performance and an excellent balance between accuracy and model compactness.
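To make the contributions above concrete, the sketch below shows one plausible way a BRM-style block could combine a standard convolution branch, an atrous-convolution branch, and squeeze-and-excitation style channel attention inside a residual connection. The class name, layer choices, and hyperparameters are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class BRMSketch(nn.Module):
    """Illustrative bilateral residual block: a standard branch for detail,
    a dilated (atrous) branch for context, fused with channel attention."""
    def __init__(self, channels: int, dilation: int = 2, reduction: int = 4):
        super().__init__()
        # Detail branch: ordinary 3x3 convolution.
        self.detail = nn.Conv2d(channels, channels, 3, padding=1)
        # Context branch: atrous convolution enlarges the receptive field
        # without adding parameters.
        self.context = nn.Conv2d(channels, channels, 3,
                                 padding=dilation, dilation=dilation)
        # Squeeze-and-excitation style channel attention.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        fused = self.act(self.detail(x) + self.context(x))
        # Residual connection keeps the block easy to train and lets it
        # drop into an existing backbone without changing tensor shapes.
        return x + fused * self.attn(fused)
```

Because input and output shapes match, such a block can be inserted anywhere in an encoder without further plumbing, which is what "seamless integration" requires in practice.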
2. Related Work
2.1. Skin Lesion Segmentation
2.2. Lightweight Networks
2.3. Attention Mechanisms
3. Methods
3.1. ELANet Architecture
3.1.1. Downsampling Module
3.1.2. Pyramid Pooling Module
3.1.3. Feature Fusion Module
3.2. Multiscale Attention Fusion
3.3. Bilateral Residual Module
4. Experiment
4.1. Dataset
4.2. Evaluation Metrics
4.3. Implementation and Configuration
4.4. Comparisons with State-of-the-Art Methods
4.5. Ablation Experiments
4.6. Generalization Experiment
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Siegel, R.L.; Miller, K.D.; Jemal, A. Cancer statistics, 2019. CA Cancer J. Clin. 2019, 69, 7–34.
- Esteva, A.; Kuprel, B.; Novoa, R.A.; Ko, J.; Swetter, S.M.; Blau, H.M.; Thrun, S. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017, 542, 115–118.
- Ge, Z.; Demyanov, S.; Chakravorty, R.; Bowling, A.; Garnavi, R. Skin disease recognition using deep saliency features and multimodal learning of dermoscopy and clinical images. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2017; Springer: Cham, Switzerland, 2017; Volume 20, pp. 250–258.
- Kharazmi, P.; AlJasser, M.I.; Lui, H.; Wang, Z.J.; Lee, T.K. Automated detection and segmentation of vascular structures of skin lesions seen in dermoscopy, with an application to basal cell carcinoma classification. IEEE J. Biomed. Health Inform. 2016, 21, 1675–1684.
- Siegel, R.L.; Miller, K.D.; Goding Sauer, A.; Fedewa, S.A.; Butterly, L.F.; Anderson, J.C.; Cercek, A.; Smith, R.A.; Jemal, A. Colorectal cancer statistics, 2020. CA Cancer J. Clin. 2020, 70, 145–164.
- Vestergaard, M.E.; Macaskill, P.; Holt, P.E.; Menzies, S.W. Dermoscopy compared with naked eye examination for the diagnosis of primary melanoma: A meta-analysis of studies performed in a clinical setting. Br. J. Dermatol. 2008, 159, 669–676.
- Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66.
- Khan, J.F.; Bhuiyan, S.M.A.; Adhami, R.R. Image segmentation and shape analysis for road-sign detection. IEEE Trans. Intell. Transp. Syst. 2010, 12, 83–96.
- Sheikh, Y.A.; Khan, E.A.; Kanade, T. Mode-seeking by medoid shifts. In Proceedings of the 2007 IEEE 11th International Conference on Computer Vision, Rio de Janeiro, Brazil, 14–21 October 2007; pp. 1–8.
- Chen, L.-C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 801–818.
- Zhao, H.; Shi, J.; Qi, X.; Wang, X.; Jia, J. Pyramid scene parsing network. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2881–2890.
- Zhang, H.; Dana, K.; Shi, J.; Zhang, Z.; Wang, X.; Tyagi, A.; Agrawal, A. Context encoding for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 7151–7160.
- Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, Proceedings of the 18th International Conference, Munich, Germany, 5–9 October 2015; Springer: Cham, Switzerland, 2015; Volume 18, pp. 234–241.
- Oktay, O.; Schlemper, J.; Folgoc, L.L.; Lee, M.; Heinrich, M.; Misawa, K.; Mori, K.; McDonagh, S.; Hammerla, N.Y.; Kainz, B.; et al. Attention u-net: Learning where to look for the pancreas. arXiv 2018, arXiv:1804.03999.
- Zhou, Z.; Siddiquee, M.M.R.; Tajbakhsh, N.; Liang, J. UNet++: Redesigning skip connections to exploit multiscale features in image segmentation. IEEE Trans. Med. Imaging 2019, 39, 1856–1867.
- Milletari, F.; Navab, N.; Ahmadi, S.-A. V-Net: Fully convolutional neural networks for volumetric medical image segmentation. In Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 565–571.
- Peng, Y.; Sonka, M.; Chen, D.Z. U-Net v2: Rethinking the Skip Connections of U-Net for Medical Image Segmentation. arXiv 2023, arXiv:2311.17791.
- Birkenfeld, J.S.; Tucker-Schwartz, J.M.; Soenksen, L.R.; Avilés-Izquierdo, J.A.; Marti-Fuster, B. Computer-aided classification of suspicious pigmented lesions using wide-field images. Comput. Methods Programs Biomed. 2020, 195, 105631.
- Soenksen, L.R.; Kassis, T.; Conover, S.T.; Marti-Fuster, B.; Birkenfeld, J.S.; Tucker-Schwartz, J.; Naseem, A.; Stavert, R.R.; Kim, C.C.; Senna, M.M.; et al. Using deep learning for dermatologist-level detection of suspicious pigmented skin lesions from wide-field images. Sci. Transl. Med. 2020, 13, 581.
- Strzelecki, M.H.; Strąkowska, M.; Kozłowski, M.; Urbańczyk, T.; Wielowieyska-Szybińska, D.; Kociołek, M. Skin lesion detection algorithms in whole body images. Sensors 2021, 21, 6639.
- Betz-Stablein, B.; D’Alessandro, B.; Koh, U.; Plasmeijer, E.; Janda, M.; Menzies, S.W.; Hofmann-Wellenhof, R.; Green, A.C.; Soyer, H.P. Reproducible naevus counts using 3D total body photography and convolutional neural networks. Dermatology 2022, 238, 4–11.
- Dai, D.; Dong, C.; Xu, S.; Yan, Q.; Li, Z.; Zhang, C.; Luo, N. Ms RED: A novel multi-scale residual encoding and decoding network for skin lesion segmentation. Med. Image Anal. 2022, 75, 102293.
- Xu, Q.; Ma, Z.; Na, H.E.; Duan, W. DCSAU-Net: A deeper and more compact split-attention U-Net for medical image segmentation. Comput. Biol. Med. 2023, 154, 106626.
- Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
- Yu, L.; Chen, H.; Dou, Q.; Qin, J.; Heng, P.A. Automated melanoma recognition in dermoscopy images via very deep residual networks. IEEE Trans. Med. Imaging 2016, 36, 994–1004.
- Bi, L.; Kim, J.; Ahn, E.; Kumar, A.; Feng, D.; Fulham, M. Step-wise integration of deep class-specific learning for dermoscopic image segmentation. Pattern Recognit. 2019, 85, 78–89.
- Nasr-Esfahani, E.; Rafiei, S.; Jafari, M.H.; Karimi, N.; Wrobel, J.S.; Samavi, S.; Soroushmehr, S.M.R. Dense pooling layers in fully convolutional network for skin lesion segmentation. Comput. Med. Imaging Graph. 2019, 78, 101658.
- Tang, P.; Liang, Q.; Yan, X.; Xiang, S.; Sun, W.; Zhang, D.; Coppola, G. Efficient skin lesion segmentation using separable-Unet with stochastic weight averaging. Comput. Methods Programs Biomed. 2019, 178, 289–301.
- Arora, R.; Raman, B.; Nayyar, K.; Awasthi, R. Automated skin lesion segmentation using attention-based deep convolutional neural network. Biomed. Signal Process. Control 2021, 65, 102358.
- Wu, H.; Pan, J.; Li, Z.; Wen, Z.; Qin, J. Automated skin lesion segmentation via an adaptive dual attention module. IEEE Trans. Med. Imaging 2020, 40, 357–370.
- Kumar, A.; Hamarneh, G.; Drew, M.S. Illumination-based transformations improve skin lesion segmentation in dermoscopic images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 728–729.
- Paszke, A.; Chaurasia, A.; Kim, S.; Culurciello, E. ENet: A deep neural network architecture for real-time semantic segmentation. arXiv 2016, arXiv:1606.02147.
- Romera, E.; Alvarez, J.M.; Bergasa, L.M.; Arroyo, R. ERFNet: Efficient Residual Factorized ConvNet for Real-Time Semantic Segmentation. IEEE Trans. Intell. Transp. Syst. 2017, 19, 263–272.
- Valanarasu, J.M.J.; Patel, V.M. UNeXt: MLP-based rapid medical image segmentation network. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2022; pp. 23–33.
- Ruan, J.; Xiang, S.; Xie, M.; Liu, T.; Fu, Y. MALUNet: A Multi-Attention and Light-weight UNet for Skin Lesion Segmentation. In Proceedings of the 2022 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Las Vegas, NV, USA, 6–8 December 2022; pp. 1150–1156.
- Ruan, J.; Xie, M.; Gao, J.; Liu, T.; Fu, Y. EGE-UNet: An Efficient Group Enhanced UNet for Skin Lesion Segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2023; Volume 154, pp. 481–490.
- Wang, F.; Jiang, M.; Qian, C.; Yang, S.; Li, C.; Zhang, H.; Wang, X.; Tang, X. Residual attention network for image classification. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 3156–3164.
- Jaderberg, M.; Simonyan, K.; Zisserman, A. Spatial transformer networks. In Proceedings of the 28th International Conference on Neural Information Processing Systems, Montreal, QC, Canada, 7–12 December 2015; Volume 28.
- Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 7132–7141.
- Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. CBAM: Convolutional Block Attention Module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19.
- Kaul, C.; Manandhar, S.; Pears, N. FocusNet: An attention-based fully convolutional network for medical image segmentation. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, 8–11 April 2019; pp. 455–458.
- Yu, C.; Wang, J.; Peng, C.; Gao, C.; Yu, G.; Sang, N. BiSeNet: Bilateral segmentation network for real-time semantic segmentation. In Proceedings of the European Conference on Computer Vision (ECCV 2018), Munich, Germany, 8–14 September 2018; pp. 325–341.
- Wang, Y.; Zhou, Q.; Liu, J.; Xiong, J.; Gao, G.; Wu, X.; Latecki, L.J. LEDNet: A lightweight encoder-decoder network for real-time semantic segmentation. In Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22–25 September 2019; pp. 1860–1864.
- Gao, G.; Xu, G.; Yu, Y.; He, J.; Yang, J.; Yue, D. MSCFNet: A lightweight network with multi-scale context fusion for real-time semantic segmentation. IEEE Trans. Intell. Transp. Syst. 2021, 23, 25489–25499.
- Gao, G.; Xu, G.; Li, J.; Yu, Y.; Lu, H.; Yang, J. FBSNet: A fast bilateral symmetrical network for real-time semantic segmentation. IEEE Trans. Multimed. 2022, 25, 3273–3283.
- Gutman, D.; Codella, N.C.; Celebi, E.; Helba, B.; Marchetti, M.; Mishra, N.; Halpern, A. Skin lesion analysis toward melanoma detection: A challenge at the International Symposium on Biomedical Imaging (ISBI) 2016, hosted by the International Skin Imaging Collaboration (ISIC). arXiv 2016, arXiv:1605.01397.
- Codella, N.C.; Gutman, D.; Celebi, M.E.; Helba, B.; Marchetti, M.A.; Dusza, S.W.; Kalloo, A.; Liopyris, K.; Mishra, N.; Kittler, H.; et al. Skin lesion analysis toward melanoma detection: A challenge at the 2017 International Symposium on Biomedical Imaging (ISBI), hosted by the International Skin Imaging Collaboration (ISIC). In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018; pp. 168–172.
- Tschandl, P.; Rosendahl, C.; Kittler, H. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Sci. Data 2018, 5, 180161.
- Codella, N.; Rotemberg, V.; Tschandl, P.; Celebi, M.E.; Dusza, S.; Gutman, D.; Helba, B.; Kalloo, A.; Liopyris, K.; Marchetti, M.; et al. Skin lesion analysis toward melanoma detection 2018: A challenge hosted by the International Skin Imaging Collaboration (ISIC). arXiv 2019, arXiv:1902.03368.
- Creswell, A.; Arulkumaran, K.; Bharath, A.A. On denoising autoencoders trained to minimise binary cross-entropy. arXiv 2017, arXiv:1708.08487.
- Chen, J.; Lu, Y.; Yu, Q.; Luo, X.; Adeli, E.; Wang, Y.; Lu, L.; Yuille, A.L.; Zhou, Y. TransUNet: Transformers make strong encoders for medical image segmentation. arXiv 2021, arXiv:2102.04306.
- Cao, H.; Wang, Y.; Chen, J.; Jiang, D.; Zhang, X.; Tian, Q.; Wang, M. Swin-Unet: Unet-like pure transformer for medical image segmentation. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2022; pp. 205–218.
- Wu, T.; Tang, S.; Zhang, R.; Cao, J.; Zhang, Y. CGNet: A Light-weight Context Guided Network for Semantic Segmentation. IEEE Trans. Image Process. 2020, 30, 1169–1179.
- Yu, C.; Gao, C.; Wang, Y.; Yu, G.; Shen, C.; Sang, N. BiSeNet V2: Bilateral Network with Guided Aggregation for Real-time Semantic Segmentation. Int. J. Comput. Vis. 2021, 129, 3051–3068.
- Poudel, R.P.; Liwicki, S.; Cipolla, R. Fast-SCNN: Fast semantic segmentation network. arXiv 2019, arXiv:1902.04502.
- Zhao, H.; Qi, X.; Shen, X.; Shi, J.; Jia, J. ICNet for Real-Time Semantic Segmentation on High-Resolution Images. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 405–420.
- Sun, K.; Xiao, B.; Liu, D.; Wang, J. Deep high-resolution representation learning for human pose estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 5693–5703.
- Howard, A.; Sandler, M.; Chen, B.; Wang, W.; Chen, L.-C.; Tan, M.; Chu, G.; Vasudevan, V.; Zhu, Y.; Pang, R.; et al. Searching for MobileNetV3. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1314–1324.
- Strudel, R.; Garcia, R.; Laptev, I.; Schmid, C. Segmenter: Transformer for semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 7262–7272.
- Xie, E.; Wang, W.; Yu, Z.; Anandkumar, A.; Alvarez, J.M.; Luo, P. SegFormer: Simple and efficient design for semantic segmentation with transformers. Adv. Neural Inf. Process. Syst. 2021, 34, 12077–12090.
| Dataset | Training | Validation | Testing |
|---|---|---|---|
| ISIC2016 | 900 | / | 379 |
| ISIC2017 | 2000 | 150 | 600 |
| ISIC2018 | 2594 | 100 | 1000 |
| TN3K | 2879 | / | 614 |
| Method | ISIC2016 mIoU (%) | ISIC2016 Acc (%) | ISIC2017 mIoU (%) | ISIC2017 Acc (%) | ISIC2018 mIoU (%) | ISIC2018 Acc (%) | Params (M) | FLOPs (G) |
|---|---|---|---|---|---|---|---|---|
| UNet [13] | 85.56 | 95.03 | 80.86 | 91.40 | 82.54 | 91.95 | 24.891 | 225 |
| PSPNet [11] | 86.25 | 95.62 | 72.51 | 87.85 | 76.89 | 89.41 | 46.602 | 357 |
| DeeplabV3+ [10] | 85.76 | 95.52 | 76.02 | 89.48 | 74.82 | 88.20 | 41.216 | 353 |
| CGNet [53] | 86.25 | 95.30 | 82.02 | 91.86 | 76.88 | 89.17 | 0.492 | 6.896 |
| BiseNetV2 [54] | 87.45 | 96.08 | 79.08 | 90.69 | 79.68 | 90.84 | 3.341 | 24.571 |
| Fast-SCNN [55] | 88.61 | 96.36 | 80.41 | 91.32 | 80.25 | 90.94 | 1.398 | 1.853 |
| ICNet [56] | 88.62 | 96.27 | 80.16 | 91.27 | 77.92 | 89.52 | 47.528 | 30.745 |
| ErfNet [33] | 89.76 | 96.71 | 81.82 | 91.96 | 81.25 | 91.42 | 2.082 | 29.062 |
| HRNet [57] | 89.60 | 96.69 | 79.43 | 90.94 | 78.98 | 90.45 | 9.636 | 37.157 |
| MobileNetV3 [58] | 90.12 | 96.86 | 79.80 | 91.96 | 81.20 | 91.55 | 3.282 | 8.687 |
| Segmenter [59] | 82.60 | 94.00 | 76.67 | 89.32 | 75.52 | 87.80 | 6.685 | 4.722 |
| Segformer [60] | 87.40 | 95.80 | 79.11 | 90.79 | 80.74 | 91.05 | 3.716 | 3.682 |
| TransUNet [51] | 81.45 | 93.57 | 78.43 | 90.23 | 77.36 | 89.65 | 66.815 | 32.63 |
| SwinUNet [52] | 85.29 | 94.95 | 81.84 | 91.88 | 82.86 | 92.21 | 27.145 | 5.91 |
| UNext [34] | 83.77 | 94.44 | 82.28 | 92.15 | 81.54 | 91.45 | 1.471 | 0.439 |
| MALUNet [35] | 83.30 | 94.11 | 80.93 | 91.49 | 82.65 | 91.96 | 0.175 | 0.083 |
| EGEUNet [36] | 81.33 | 93.21 | 81.02 | 91.46 | 81.51 | 91.55 | 0.053 | 0.072 |
| ELANet | 89.87 | 96.78 | 81.85 | 91.99 | 82.87 | 92.19 | 0.459 | 8.430 |
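The mIoU and Acc columns above follow the standard definitions for binary segmentation: per-class intersection over union averaged across the background and lesion classes, and the fraction of correctly labeled pixels. A minimal sketch of this computation (an assumed formulation, not necessarily the authors' exact evaluation code):

```python
import numpy as np

def miou_and_acc(pred: np.ndarray, gt: np.ndarray) -> tuple[float, float]:
    """Mean IoU over {background, lesion} and overall pixel accuracy
    for binary masks with values 0/1."""
    ious = []
    for cls in (0, 1):
        inter = np.logical_and(pred == cls, gt == cls).sum()
        union = np.logical_or(pred == cls, gt == cls).sum()
        # A class absent from both masks contributes a perfect score.
        ious.append(inter / union if union else 1.0)
    acc = (pred == gt).mean()
    return float(np.mean(ious)), float(acc)
```

For a perfect prediction both values are 1.0; averaging over both classes is what distinguishes mIoU from the lesion-only IoU some papers report.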
| Method | MAF | PPM | SA | CA | HA | mIoU (%) | Acc (%) | Params (M) | FLOPs (G) | Time (s/image) |
|---|---|---|---|---|---|---|---|---|---|---|
| Model 1 | × | × | √ | × | × | 85.63 | 95.10 | 0.270 | 6.754 | 0.0331 |
| Model 2 | × | × | × | √ | × | 87.84 | 96.06 | 0.273 | 6.679 | 0.0328 |
| Model 3 | × | × | × | × | √ | 89.18 | 96.51 | 0.271 | 6.716 | 0.0337 |
| Model 4 | √ | × | × | × | √ | 89.24 | 96.56 | 0.289 | 7.260 | 0.0362 |
| Model 5 | × | √ | × | × | √ | 89.29 | 96.56 | 0.329 | 7.158 | 0.0354 |
| Model 6 (ELANet) | √ | √ | × | × | √ | 89.87 | 96.78 | 0.459 | 8.430 | 0.0417 |
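The SA, CA, and HA columns presumably denote spatial, channel, and hybrid attention variants. As an illustration of how a hybrid variant might chain the other two, here is a CBAM-style sketch; the module name and layer choices are hypothetical, not the paper's exact design:

```python
import torch
import torch.nn as nn

class HybridAttention(nn.Module):
    """Channel attention followed by spatial attention (CBAM-style)."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Channel gate: global average pooling -> bottleneck MLP -> sigmoid.
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial gate: channel-wise mean and max squeezed to one map.
        self.spatial = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel(x)  # reweight channels
        stats = torch.cat([x.mean(1, keepdim=True),
                           x.amax(1, keepdim=True)], dim=1)
        return x * self.spatial(stats)  # reweight spatial positions
```

Applying the channel gate first and the spatial gate second is the ordering CBAM found most effective, which is a plausible reading of why the HA column outperforms SA or CA alone in the table above.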
| Method | mIoU (%) | Acc (%) | Params (M) | FLOPs (G) |
|---|---|---|---|---|
| UNet [13] | 84.94 | 96.21 | 24.891 | 225 |
| CGNet [53] | 83.17 | 93.63 | 0.492 | 6.896 |
| MobileNetV3 [58] | 82.87 | 95.31 | 3.282 | 8.687 |
| Segformer [60] | 79.10 | 84.79 | 3.716 | 3.682 |
| TransUNet [51] | 86.86 | 96.47 | 66.815 | 32.63 |
| SwinUNet [52] | 80.57 | 94.96 | 27.145 | 5.91 |
| UNext [34] | 86.64 | 96.63 | 1.471 | 0.439 |
| MALUNet [35] | 82.99 | 95.65 | 0.175 | 0.083 |
| EGEUNet [36] | 83.14 | 95.75 | 0.053 | 0.072 |
| ELANet | 85.42 | 96.30 | 0.459 | 8.430 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Nie, T.; Zhao, Y.; Yao, S. ELA-Net: An Efficient Lightweight Attention Network for Skin Lesion Segmentation. Sensors 2024, 24, 4302. https://doi.org/10.3390/s24134302