EMR-YOLO: A Multi-Scale Benthic Organism Detection Algorithm for Degraded Underwater Visual Features and Computationally Constrained Environments
Abstract
1. Introduction
- To improve the efficiency of information flow and gradient propagation, we design a lightweight downsampling convolution module (MBFDown) that employs a multi-branch structure and a cross-stage feature fusion strategy. It effectively reduces the parameter count and computational complexity while maintaining robust feature extraction capability.
- To suppress the feature representation of complex underwater backgrounds, we propose a Region-wise Two-stage Routing Attention mechanism (RTRA). By mitigating background noise, RTRA enables the detector to focus on target regions, reducing missed detections and enhancing semantic understanding and generalization in complex environments.
- To enhance the adaptive perception capability of anchor-free detection heads, we design an efficient dynamic sparse detection head (EDSHead) that integrates a unified attention mechanism with a dynamic sparse operator, improving multi-scale fusion for the spatial modeling of benthic organisms (BOs) with variable sizes and morphologies.
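As a rough illustration of the multi-branch downsampling idea behind MBFDown, the sketch below runs two parallel branches over the same feature map and fuses them by channel concatenation. This is a minimal NumPy stand-in: the actual MBFDown uses learned convolutions and a cross-stage fusion strategy, whereas the max/average pooling branches here are hypothetical placeholders chosen only to show the branch-and-fuse pattern.

```python
import numpy as np

def downsample_2x(x, mode):
    """Downsample a (C, H, W) tensor by 2 via non-overlapping 2x2 pooling."""
    c, h, w = x.shape
    blocks = x.reshape(c, h // 2, 2, w // 2, 2)
    if mode == "max":
        return blocks.max(axis=(2, 4))
    return blocks.mean(axis=(2, 4))

def multi_branch_downsample(x):
    """Toy multi-branch downsampling: two pooling branches run in parallel,
    each summarizing the same 2x2 neighbourhood differently, then the
    results are fused by concatenation along the channel axis."""
    return np.concatenate([downsample_2x(x, "max"),
                           downsample_2x(x, "avg")], axis=0)

x = np.random.rand(16, 32, 32)
y = multi_branch_downsample(x)
# spatial resolution halved; channels doubled by the two fused branches
assert y.shape == (32, 16, 16)
```

The design intuition is that concatenating complementary branch outputs preserves information that a single stride-2 operation would discard, at lower cost than a full-resolution convolution.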
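The region-wise two-stage routing idea underlying RTRA (coarse region-level routing followed by fine token-level attention, in the spirit of bi-level routing attention) can be sketched as follows. This is a simplified single-head NumPy illustration under assumed shapes, not the paper's implementation: region count, top-k, and the mean-pooled region descriptors are all illustrative choices.

```python
import numpy as np

def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def routing_attention(x, n_regions, topk):
    """Two-stage routed attention over an (N, D) token sequence.
    Stage 1 (coarse): mean-pool tokens into region descriptors, score
    region-to-region affinity, and keep only the top-k regions per
    query region. Stage 2 (fine): each query token attends solely to
    tokens gathered from its routed regions, skipping the rest."""
    n, d = x.shape
    r = n // n_regions                        # tokens per region
    regions = x.reshape(n_regions, r, d)
    desc = regions.mean(axis=1)               # (n_regions, D) descriptors
    affinity = desc @ desc.T                  # region-level routing scores
    routed = np.argsort(-affinity, axis=1)[:, :topk]   # top-k regions kept

    out = np.zeros_like(x)
    for i in range(n_regions):
        kv = regions[routed[i]].reshape(-1, d)         # gathered keys/values
        attn = softmax(regions[i] @ kv.T / np.sqrt(d)) # fine-grained attention
        out[i * r:(i + 1) * r] = attn @ kv
    return out

x = np.random.rand(64, 8)        # 64 tokens with 8-dim features
y = routing_attention(x, n_regions=8, topk=2)
assert y.shape == x.shape
```

Because each query region attends to only `topk` of the `n_regions` regions, the fine-grained attention cost drops roughly by the factor `topk / n_regions`, which is what makes this style of sparse attention attractive under the computational constraints the paper targets.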
2. Related Work
2.1. Learning with Multi-Scale and Morphological Diversity
2.2. Learning with Computational Constraints
2.3. Learning with Visual Feature Degradation
3. EMR-YOLO Scheme
3.1. Multi-Branch Fusion Downsampling
3.2. Region-Wise Two-Stage Routing Attention
3.3. Efficient Dynamic Sparse Head
4. Experimental Studies and Comparisons
4.1. Dataset and Configurations
4.2. Efficient Dynamic Sparse Head Performance
4.3. Attention Performance Comparisons
4.4. Ablation Study
4.5. Comparisons with Typical Object Detection Approaches
4.6. Visualization of Test Results
5. Discussion
5.1. MBFDown: Light Weighting and Efficient Feature Extraction
5.2. RTRA: Enhancing Underwater Background Adaptability
5.3. EDSHead: Improving Multi-Scale Target Perception Capability
6. Limitations and Future Work
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
Scheme | Blocks | Params (M) | GFLOPs | AP | AP50 | AP75
---|---|---|---|---|---|---
Baseline | — | 3.01 | 8.1 | 52.73 | 86.91 | 58.31
+DyHead | — | 5.19 | 17.6 | 53.74 | 87.83 | 60.42
+EDSHead | 1 | 4.75 | 14.6 | 53.83 | 88.29 | 60.43
+EDSHead | 2 | 6.24 | 16.2 | 53.92 | 87.71 | 60.41
+EDSHead | 3 | 7.73 | 17.8 | 53.91 | 87.86 | 60.40
+EDSHead | 4 | 9.22 | 19.3 | 53.82 | 87.21 | 60.36
+EDSHead | 5 | 10.70 | 20.3 | 53.56 | 87.22 | 60.32
+EDSHead | 6 | 12.93 | 23.6 | 49.64 | 83.20 | 54.44
Scheme | AP (Echinus) | AP (Scallop) | AP (Starfish) | AP (Holothurian) | AP | AP50 | AP75
---|---|---|---|---|---|---|---
MLCA | 92.17 | 88.78 | 93.02 | 75.17 | 53.12 | 87.29 | 59.83
GAM | 92.35 | 88.53 | 92.67 | 74.32 | 53.19 | 86.96 | 59.01
CA | 92.13 | 88.72 | 93.12 | 75.12 | 53.14 | 87.27 | 59.78
CBAM | 91.78 | 88.76 | 92.64 | 75.42 | 53.23 | 87.15 | 59.73
ECA | 92.23 | 88.91 | 92.91 | 74.23 | 52.96 | 87.07 | 59.17
RTRA | 92.26 | 89.74 | 92.95 | 75.49 | 53.31 | 87.61 | 59.84
YOLOv8 | EDSHead | MBFDown | RTRA | Params (M) | GFLOPs | AP | AP50 | AP75
---|---|---|---|---|---|---|---|---
✓ | | | | 3.01 | 8.1 | 52.73 | 86.91 | 58.31
✓ | ✓ | | | 4.75 | 14.6 | 53.83 | 88.29 | 60.43
✓ | | ✓ | | 2.53 | 7.1 | 52.90 | 87.12 | 58.91
✓ | | | ✓ | 4.01 | 14.1 | 53.32 | 87.61 | 59.84
✓ | ✓ | ✓ | | 4.31 | 13.6 | 53.67 | 87.92 | 61.12
✓ | ✓ | | ✓ | 5.49 | 15.6 | 54.57 | 88.40 | 61.27
✓ | | ✓ | ✓ | 4.01 | 14.0 | 53.34 | 87.56 | 59.66
✓ | ✓ | ✓ | ✓ | 5.05 | 14.7 | 55.06 | 88.41 | 62.43
Scheme | Backbone | Params (M) | GFLOPs | AP (Echinus) | AP (Scallop) | AP (Starfish) | AP (Holothurian) | AP | AP50 | AP75
---|---|---|---|---|---|---|---|---|---|---
Faster RCNN | VGG16 | 41.32 | 251.4 | 68.11 | 40.16 | 69.88 | 48.97 | 28.74 | 59.19 | 29.36
SSD | VGG16 | 26.44 | 32.5 | 44.39 | 34.66 | 49.81 | 30.17 | 22.06 | 39.09 | 24.81
RetinaNet | ResNet50 | 34.13 | 100.5 | 78.61 | 62.18 | 72.29 | 65.15 | 33.49 | 68.81 | 31.17
CenterNet | Hourglass104 | 11.76 | 15.6 | 83.51 | 72.48 | 77.86 | 61.62 | 38.93 | 73.87 | 36.58
FCOS | ResNet18 | 19.14 | 39.1 | 77.64 | 66.42 | 76.29 | 67.96 | 47.92 | 72.08 | 53.72
PAA | ResNet18 | 18.86 | 38.4 | 79.27 | 70.13 | 80.42 | 67.12 | 52.42 | 74.24 | 59.28
YOLOv3 | Darknet53 | 61.53 | 193.8 | 79.88 | 64.27 | 80.96 | 60.05 | 30.38 | 69.17 | 24.88
YOLOv4 | CSPDarknet53 | 52.59 | 119.8 | 84.97 | 71.94 | 82.03 | 73.66 | 33.31 | 76.84 | 31.36
YOLOv5-S | CSPDarknet53 | 7.34 | 16.6 | 84.96 | 73.14 | 83.91 | 74.69 | 44.08 | 76.71 | 42.05
YOLOv6-N | EfficientRep | 4.32 | 11.1 | 87.71 | 74.55 | 84.64 | 71.98 | 45.01 | 77.87 | 43.96
YOLOv7-Tiny | E-ELAN | 6.01 | 13.1 | 88.15 | 75.68 | 85.21 | 75.07 | 46.84 | 78.41 | 45.09
YOLOv8-N | CSPDarknet53 | 3.01 | 8.1 | 91.80 | 85.34 | 91.93 | 78.58 | 52.73 | 86.91 | 58.31
YOLOv9-T | CSPDarknet53 | 7.12 | 29.3 | 92.39 | 84.99 | 92.59 | 80.14 | 51.97 | 87.66 | 58.09
YOLOv10-S | CSPDarknet53 | 7.23 | 21.4 | 94.25 | 86.36 | 91.15 | 80.89 | 52.06 | 87.81 | 59.12
YOLOv11-S | CSPDarknet53 | 9.52 | 21.7 | 95.06 | 85.25 | 93.41 | 79.98 | 52.19 | 88.01 | 59.47
Ours | CSPDarknet53 | 5.05 | 14.7 | 93.38 | 88.78 | 93.52 | 78.92 | 55.06 | 88.41 | 62.43
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zou, D.; Zhao, S.; Zhou, J.; Liu, G.; Jiang, Z.; Xu, M.; Fu, X.; Liu, S. EMR-YOLO: A Multi-Scale Benthic Organism Detection Algorithm for Degraded Underwater Visual Features and Computationally Constrained Environments. J. Mar. Sci. Eng. 2025, 13, 1617. https://doi.org/10.3390/jmse13091617