Application of Deep Learning in Underwater Image Processing

A special issue of Journal of Marine Science and Engineering (ISSN 2077-1312). This special issue belongs to the section "Physical Oceanography".

Deadline for manuscript submissions: 10 April 2025

Special Issue Editors


Prof. Dr. Chia-Hung Yeh
Guest Editor
1. Department of Electrical Engineering, National Taiwan Normal University, Taipei 106, Taiwan
2. Department of Electrical Engineering, National Sun Yat-Sen University, Kaohsiung 80424, Taiwan
Interests: 3D reconstruction technology; deep learning; multimedia signal processing; video communication

Prof. Dr. Chua-Chin Wang
Guest Editor
Department of Electrical Engineering, National Sun Yat-Sen University, Kaohsiung 80424, Taiwan
Interests: underwater communication chip design and optimization; low-power underwater integrated circuit development; AUV chip design; integrated circuit design

Dr. Guo-Shiang Lin
Guest Editor
Department of Computer Science and Information Engineering, National Chin-Yi University of Technology, Taichung 402, Taiwan
Interests: image/video processing; machine learning; computer vision; multimedia applications

Special Issue Information

Dear Colleagues,

Underwater image processing is one of the key technologies driving advancements in fields such as marine biology, oceanography, and underwater exploration. Because underwater images serve as carriers of information, their quality significantly impacts a wide range of applications. However, capturing high-quality underwater images is challenging due to complex and uncontrollable imaging conditions. Common issues include color distortion, blurred details, low contrast and brightness, and noise. These problems hinder both human perception and practical applications. Furthermore, the unique properties of underwater imaging, such as wavelength-selective light absorption and scattering, make it difficult to achieve satisfactory results with existing in-air methods or traditional underwater image processing techniques.
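
To make the physical degradation described above concrete, the short sketch below simulates the commonly used simplified underwater image formation model, in which each color channel is attenuated and mixed with backscattered light as a function of scene distance. The attenuation and backscatter coefficients are illustrative placeholder values, not measurements.

```python
import numpy as np

def simulate_underwater(clean_rgb: np.ndarray, depth_m: np.ndarray,
                        beta=(0.45, 0.12, 0.05), backscatter=(0.02, 0.15, 0.30)):
    """Simplified underwater image formation model (illustrative coefficients):
        I_c = J_c * exp(-beta_c * d) + B_c * (1 - exp(-beta_c * d))
    where J is the in-air scene, d the per-pixel distance, beta_c the
    wavelength-dependent attenuation coefficient, and B_c the backscattered light.
    Red light (large beta) is attenuated fastest, producing the typical blue-green cast.

    clean_rgb: H x W x 3 float image in [0, 1]
    depth_m:   H x W per-pixel scene distance in metres
    """
    beta = np.asarray(beta, dtype=np.float64)
    B = np.asarray(backscatter, dtype=np.float64)
    transmission = np.exp(-beta * depth_m[..., None])    # H x W x 3
    degraded = clean_rgb * transmission + B * (1.0 - transmission)
    return np.clip(degraded, 0.0, 1.0)
```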

In recent years, deep learning has emerged as a game-changer in addressing these challenges and improving underwater image quality. These technologies provide new opportunities and insights, enhancing the applicability and reliability of underwater imaging in real-world scenarios. This Special Issue aims to bring together leading researchers and practitioners from around the world to showcase their latest findings and future directions in this dynamic field. We particularly welcome submissions on underwater image processing and analysis that leverage advanced deep learning techniques.

This Special Issue covers all aspects of underwater image processing. Topics of interest include, but are not limited to, the following:

  • Underwater image enhancement and restoration;
  • Underwater image denoising;
  • Underwater object detection and classification;
  • Underwater object tracking;
  • Underwater object recognition;
  • Underwater semantic segmentation;
  • Underwater scene understanding;
  • Underwater image depth estimation;
  • Underwater 3D modeling;
  • Underwater image synthesis and generation;
  • Generative AI for underwater image processing;
  • Underwater image quality assessment methods, including full-reference metrics, no-reference metrics, etc. (a brief code illustration follows this list).
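
To illustrate the distinction in the last item above, the sketch below contrasts a full-reference metric (PSNR, which compares a processed image against a ground-truth reference) with a simple no-reference proxy score computed from the image alone. The proxy is an illustrative stand-in, not the UCIQE or UIQM formulas used in the literature.

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, max_val: float = 1.0) -> float:
    """Full-reference metric: requires a ground-truth (reference) image."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float(10.0 * np.log10(max_val ** 2 / mse))

def no_reference_proxy(image: np.ndarray) -> float:
    """Illustrative no-reference proxy score (NOT UCIQE/UIQM): combines global
    luminance contrast with a simple colorfulness measure, both computable
    from the image alone, without any ground truth."""
    gray = image.mean(axis=2)
    contrast = float(gray.std())                        # global luminance spread
    rg = image[..., 0] - image[..., 1]                  # red-green opponent channel
    yb = 0.5 * (image[..., 0] + image[..., 1]) - image[..., 2]  # yellow-blue channel
    colorfulness = float(np.hypot(rg.std(), yb.std()) + 0.3 * np.hypot(rg.mean(), yb.mean()))
    return contrast + colorfulness
```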

Prof. Dr. Chia-Hung Yeh
Prof. Dr. Chua-Chin Wang
Dr. Guo-Shiang Lin
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, you can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Marine Science and Engineering is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning
  • machine learning
  • underwater image processing
  • underwater vision
  • underwater drones
  • Autonomous Underwater Vehicles (AUVs)
  • ocean information engineering
  • ocean observation technologies
  • artificial intelligence in the underwater environment

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research

19 pages, 8118 KiB  
Article
Research on the Identification and Classification of Marine Debris Based on Improved YOLOv8
by Wenbo Jiang, Lusong Yang and Yun Bu
J. Mar. Sci. Eng. 2024, 12(10), 1748; https://doi.org/10.3390/jmse12101748 - 3 Oct 2024
Abstract
Autonomous underwater vehicles equipped with target recognition algorithms are a primary means of removing marine debris. However, due to poor underwater visibility, light scattering by suspended particles, and the coexistence of organisms and debris, current methods have problems such as poor recognition and classification effects, slow recognition speed, and weak generalization ability. In response to these problems, this article proposes a marine debris identification and classification algorithm based on improved YOLOv8. The algorithm incorporates the CloFormer module, a context-aware local enhancement mechanism, into the backbone network, fully utilizing shared and context-aware weights. Consequently, it enhances high- and low-frequency feature extraction from underwater debris images. The proposed C2f-spatial and channel reconstruction (C2f-SCConv) module combines the SCConv module with the neck C2f module to reduce spatial and channel redundancy in standard convolutions and enhance feature representation. WIoU v3 is employed as the bounding box regression loss function, effectively managing low- and high-quality samples to improve overall model performance. The experimental results on the TrashCan-Instance dataset indicate that compared to the classical YOLOv8, the mAP@0.5 and F1 scores are increased by 5.7% and 6%, respectively. Meanwhile, on the TrashCan-Material dataset, the mAP@0.5 and F1 scores also improve, by 5.5% and 5%, respectively. Additionally, the model size has been reduced by 12.9%. These research results are conducive to maintaining marine life safety and ecosystem stability.
(This article belongs to the Special Issue Application of Deep Learning in Underwater Image Processing)
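
For readers who want a concrete starting point, the sketch below shows only a baseline workflow, assuming the Ultralytics YOLO Python interface and a hypothetical trashcan.yaml dataset configuration. The CloFormer backbone module, C2f-SCConv neck module, and WIoU v3 loss described in the paper are custom modifications and are not part of the stock package.

```python
# Hedged sketch of the baseline workflow only; the paper's architectural and
# loss modifications are not reproduced here.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # pretrained baseline detector

# "trashcan.yaml" is a hypothetical dataset config pointing at a TrashCan-style
# dataset converted to YOLO format (image paths, class names).
model.train(data="trashcan.yaml", epochs=100, imgsz=640)

metrics = model.val()                          # evaluate on the validation split
print(f"mAP@0.5 = {metrics.box.map50:.3f}")    # the metric reported in the abstract
```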

12 pages, 3550 KiB  
Article
Deep Learning Based Characterization of Cold-Water Coral Habitat at Central Cantabrian Natura 2000 Sites Using YOLOv8
by Alberto Gayá-Vilar, Alberto Abad-Uribarren, Augusto Rodríguez-Basalo, Pilar Ríos, Javier Cristobo and Elena Prado
J. Mar. Sci. Eng. 2024, 12(9), 1617; https://doi.org/10.3390/jmse12091617 - 11 Sep 2024
Abstract
Cold-water coral (CWC) reefs, such as those formed by Desmophyllum pertusum and Madrepora oculata, are vital yet vulnerable marine ecosystems (VMEs). The need for accurate and efficient monitoring of these habitats has driven the exploration of innovative approaches. This study presents a novel application of the YOLOv8l-seg deep learning model for the automated detection and segmentation of these key CWC species in underwater imagery. The model was trained and validated on images collected at two Natura 2000 sites in the Cantabrian Sea: the Avilés Canyon System (ACS) and El Cachucho Seamount (CSM). Results demonstrate the model’s high accuracy in identifying and delineating individual coral colonies, enabling the assessment of coral cover and spatial distribution. The study revealed significant variability in coral cover between and within the study areas, highlighting the patchy nature of CWC habitats. Three distinct coral community groups were identified based on percentage coverage composition and abundance, with the highest coral cover group being located exclusively in the La Gaviera canyon head within the ACS. This research underscores the potential of deep learning models for efficient and accurate monitoring of VMEs, facilitating the acquisition of high-resolution data essential for understanding CWC distribution, abundance, and community structure, and ultimately contributing to the development of effective conservation strategies.
(This article belongs to the Special Issue Application of Deep Learning in Underwater Image Processing)
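
As a rough illustration of how per-image coral cover can be derived from instance segmentation output (the authors' exact post-processing is not reproduced here), the sketch below computes the percentage of image area covered by each target class from predicted masks; the class-index mapping is assumed.

```python
import numpy as np

def percent_cover(masks: np.ndarray, class_ids: np.ndarray, species: dict) -> dict:
    """Percentage of the image area covered by each coral class.

    masks:     N x H x W boolean array of predicted instance masks
               (e.g. from a YOLOv8-seg style model).
    class_ids: length-N integer array with the predicted class of each instance.
    species:   mapping from class index to name, e.g.
               {0: "Desmophyllum pertusum", 1: "Madrepora oculata"}
               (the index assignment here is illustrative).
    """
    n, h, w = masks.shape
    cover = {}
    for cls, name in species.items():
        selected = masks[class_ids == cls]
        # Union of instance masks avoids double-counting overlapping colonies.
        union = selected.any(axis=0) if selected.size else np.zeros((h, w), dtype=bool)
        cover[name] = 100.0 * union.sum() / (h * w)
    return cover
```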

18 pages, 22988 KiB  
Article
MEvo-GAN: A Multi-Scale Evolutionary Generative Adversarial Network for Underwater Image Enhancement
by Feiran Fu, Peng Liu, Zhen Shao, Jing Xu and Ming Fang
J. Mar. Sci. Eng. 2024, 12(7), 1210; https://doi.org/10.3390/jmse12071210 - 18 Jul 2024
Abstract
In underwater imaging, achieving high-quality imagery is essential but challenging due to factors such as wavelength-dependent absorption and complex lighting dynamics. This paper introduces MEvo-GAN, a novel methodology designed to address these challenges by combining generative adversarial networks with genetic algorithms. The key innovation lies in the integration of genetic algorithm principles with multi-scale generator and discriminator structures in Generative Adversarial Networks (GANs). This approach enhances image details and structural integrity while significantly improving training stability. This combination enables more effective exploration and optimization of the solution space, leading to reduced oscillation, mitigated mode collapse, and smoother convergence to high-quality generative outcomes. By analyzing various public datasets in a quantitative and qualitative manner, the results confirm the effectiveness of MEvo-GAN in improving the clarity, color fidelity, and detail accuracy of underwater images. The results of the experiments on the UIEB dataset are remarkable, with MEvo-GAN attaining a Peak Signal-to-Noise Ratio (PSNR) of 21.2758, Structural Similarity Index (SSIM) of 0.8662, and Underwater Color Image Quality Evaluation (UCIQE) of 0.6597.
(This article belongs to the Special Issue Application of Deep Learning in Underwater Image Processing)
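
To give a flavor of how genetic-algorithm selection can be coupled with a GAN generator (a toy sketch under simplifying assumptions, not the MEvo-GAN implementation), the snippet below mutates copies of a generator and keeps the candidate with the best fitness on a validation batch; PSNR is used as a stand-in fitness, and the adversarial training loop is omitted.

```python
# Toy illustration of the evolutionary selection idea (not the authors' code).
import copy
import torch

def psnr(pred: torch.Tensor, target: torch.Tensor, max_val: float = 1.0) -> torch.Tensor:
    mse = torch.mean((pred - target) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)

def evolve_generator(generator: torch.nn.Module,
                     val_inputs: torch.Tensor, val_targets: torch.Tensor,
                     n_offspring: int = 4, noise_std: float = 1e-3) -> torch.nn.Module:
    """One selection step: Gaussian-mutate generator weights, keep the fittest copy."""
    candidates = [generator] + [copy.deepcopy(generator) for _ in range(n_offspring)]
    with torch.no_grad():
        for child in candidates[1:]:
            for p in child.parameters():
                p.add_(noise_std * torch.randn_like(p))          # Gaussian mutation
        # Fitness here is PSNR against reference images; the paper couples GA
        # selection with multi-scale adversarial training, omitted in this sketch.
        scores = torch.stack([psnr(c(val_inputs), val_targets) for c in candidates])
    return candidates[int(scores.argmax())]
```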

17 pages, 7982 KiB  
Article
Deep Dynamic Weights for Underwater Image Restoration
by Hafiz Shakeel Ahmad Awan and Muhammad Tariq Mahmood
J. Mar. Sci. Eng. 2024, 12(7), 1208; https://doi.org/10.3390/jmse12071208 - 18 Jul 2024
Abstract
Underwater imaging presents unique challenges, notably color distortions and reduced contrast due to light attenuation and scattering. Most underwater image enhancement methods first use linear transformations for color compensation and then enhance the image. We observed that linear transformation for color compensation is not suitable for certain images. For such images, non-linear mapping is a better choice. This paper introduces a unique underwater image restoration approach leveraging a streamlined convolutional neural network (CNN) for dynamic weight learning for linear and non-linear mapping. In the first phase, a classifier is applied that classifies the input images as Type I or Type II. In the second phase, we use the Deep Line Model (DLM) for Type-I images and the Deep Curve Model (DCM) for Type-II images. For mapping an input image to an output image, the DLM creatively combines color compensation and contrast adjustment in a single step and uses deep lines for transformation, whereas the DCM employs higher-order curves. Both models utilize lightweight neural networks that learn per-pixel dynamic weights based on the input image’s characteristics. Comprehensive evaluations on benchmark datasets using metrics like peak signal-to-noise ratio (PSNR) and root mean square error (RMSE) affirm our method’s effectiveness in accurately restoring underwater images, outperforming existing techniques.
(This article belongs to the Special Issue Application of Deep Learning in Underwater Image Processing)
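
The two mapping families described in the abstract can be illustrated with a minimal sketch: a per-pixel affine ("line") transform for Type-I images and an iterated quadratic curve for Type-II images. The weight maps would be predicted by the paper's lightweight CNNs, which are not reproduced here, and the specific curve form below is an assumption in the spirit of curve-based enhancement methods.

```python
import numpy as np

def apply_deep_line(image: np.ndarray, slope: np.ndarray, intercept: np.ndarray) -> np.ndarray:
    """Per-pixel affine mapping (Type-I images): out = a(x) * in + b(x).
    slope and intercept are H x W x 3 weight maps that a lightweight CNN would
    predict from the input image; color compensation and contrast adjustment
    happen in this single step."""
    return np.clip(slope * image + intercept, 0.0, 1.0)

def apply_deep_curve(image: np.ndarray, alpha: np.ndarray, iterations: int = 4) -> np.ndarray:
    """Per-pixel higher-order curve mapping (Type-II images), built by iterating
    a quadratic adjustment out = in + alpha(x) * in * (1 - in); the exact curve
    parameterization used by the paper may differ."""
    out = image
    for _ in range(iterations):
        out = np.clip(out + alpha * out * (1.0 - out), 0.0, 1.0)
    return out
```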
