Search Results (996)

Search Parameters:
Keywords = underwater imaging

25 pages, 7671 KB  
Article
Improving the Knowledge on the Distribution and Ecology of the Protected Echinoid Centrostephanus longispinus (Philippi, 1845) in the Alboran Sea
by Javier Valenzuela, Emilio González-García, Ana Mena-Torres, Adrián Martín-Taboada, Marina Gallardo-Núñez, Antonio García-Ledesma, Patricia Barcenas, José L. Rueda and Ángel Mateo-Ramírez
Diversity 2025, 17(11), 758; https://doi.org/10.3390/d17110758 - 29 Oct 2025
Viewed by 397
Abstract
Centrostephanus longispinus (Philippi, 1845) is a sea urchin widely distributed across the tropical and temperate Atlantic Ocean (including the Caribbean) and the Mediterranean Sea. Although it is present along the Alboran Sea coastline (Western Mediterranean), it is generally considered rare and is included in several conservation lists and protection conventions because its fragmented populations are threatened by seabed degradation. This study provides the first density and size distribution data for this echinoid on the circalittoral and bathyal bottoms of the Alboran Sea, aiming to relate its presence to seabed features, environmental variables, and human pressures. A series of 131 underwater image transects (62 ROV and 69 TASIFE transects) was collected during the CIRCAESAL expeditions (2021, 2023, 2024) using an ROV and a photogrammetric sledge from infralittoral to bathyal bottoms (17–856 m depth). Images were processed with OFOP software to quantify and classify individuals by size class, depth, substrate, seafloor roughness, micro-habitat, and coverage of key benthic structuring species. A total of 524 individuals of C. longispinus were detected in 13 transects, with the highest densities recorded at 48–100 m depth in rough, rocky substrates with crevices and moderate to low coverage of key benthic structuring species. Differences in habitat use were also observed across depth strata: individuals in shallower zones tend to remain hidden within crevices and structurally complex substrates, displaying more cryptic behaviour, whereas those in deeper strata rely less on refuge and occupy less complex habitats. The largest aggregations occurred near the Guadiaro Canyon, outside the “Estrecho Oriental” Special Area of Conservation (SAC), suggesting that this area may serve as a population reservoir deserving conservation. Despite these findings, ecological knowledge of C. longispinus remains limited, and future studies should address these knowledge gaps, particularly in the eastern and southern Alboran Sea.
(This article belongs to the Special Issue Deep-Sea Echinoderms of the European Seas)
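
To make the density summary concrete, here is a minimal pandas sketch of the per-stratum calculation described above; the column names and numbers are invented for illustration and are not the expedition's data or OFOP's output format.

```python
import pandas as pd

# Hypothetical per-transect counts; values are invented for illustration
# and do not reflect the CIRCAESAL data or OFOP's schema.
obs = pd.DataFrame({
    "transect": ["T01", "T01", "T02", "T03"],
    "depth_m":  [52, 60, 95, 410],
    "count":    [12, 7, 30, 1],
    "area_m2":  [250, 250, 180, 300],
})

# Bin observations into depth strata, then compute density per 100 m^2.
obs["stratum"] = pd.cut(obs["depth_m"], bins=[0, 100, 200, 900],
                        labels=["0-100 m", "100-200 m", ">200 m"])
agg = obs.groupby("stratum", observed=True)[["count", "area_m2"]].sum()
agg["density_per_100m2"] = 100 * agg["count"] / agg["area_m2"]
print(agg)
```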

23 pages, 18947 KB  
Article
IOPE-IPD: Water Properties Estimation Network Integrating Physical Model and Deep Learning for Hyperspectral Imagery
by Qi Li, Mingyu Gao, Ming Zhang, Junwen Wang, Jingjing Chen and Jinghua Li
Remote Sens. 2025, 17(21), 3546; https://doi.org/10.3390/rs17213546 - 26 Oct 2025
Viewed by 408
Abstract
Hyperspectral underwater target detection holds great potential for marine exploration and environmental monitoring. A key challenge lies in accurately estimating water inherent optical properties (IOPs) from hyperspectral imagery. To address this challenge, we propose a novel water IOP estimation network to support the interpretation of bathymetric models. We first propose an IOP physical model that describes how the concentrations of chlorophyll, colored dissolved organic matter, and detrital material influence the absorption and backscattering coefficients. Building on this foundation, we propose an innovative IOP estimation network integrating a physical model and deep learning (IOPE-IPD). This approach enables precise and physically interpretable estimation of the IOPs. Specifically, the IOPE-IPD network takes water spectra as input. The encoder extracts spectral features, while dual parallel decoders simultaneously estimate four key parameters. Based on these outputs, the absorption and backscattering coefficients of the water body are computed using the IOP physical model. Subsequently, the bathymetric model is employed to reconstruct the water spectrum. Under the constraint of a consistency loss, the retrieved spectrum is encouraged to closely match the input spectrum. To ensure the IOPE-IPD's applicability across various scenarios, multiple actual and Jerlov-simulated aquatic environments were used. Comprehensive experimental results demonstrate the robustness and effectiveness of the proposed IOPE-IPD over the compared methods.
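
The IOP model the abstract describes maps constituent concentrations to absorption and backscattering spectra. The sketch below uses common textbook bio-optical parameterizations (exponential CDOM/detrital absorption, power-law particle backscattering) with illustrative constants; it is not the authors' exact model.

```python
import numpy as np

wl = np.linspace(400, 700, 31)  # wavelength grid [nm]

def absorption(chl, a_cdom440, a_d440, s_cdom=0.014, s_d=0.011):
    # Placeholder pure-water term plus generic phytoplankton, CDOM, and
    # detrital components; all constants are illustrative.
    a_w = 0.005 + 0.004 * np.exp((wl - 550) / 80.0)
    a_ph = 0.06 * chl ** 0.65 * np.exp(-((wl - 440) / 60.0) ** 2)
    a_cdom = a_cdom440 * np.exp(-s_cdom * (wl - 440))
    a_d = a_d440 * np.exp(-s_d * (wl - 440))
    return a_w + a_ph + a_cdom + a_d

def backscattering(b_bp550, eta=1.0):
    b_bw = 0.0038 * (wl / 500.0) ** -4.32   # Morel-type pure seawater
    return b_bw + b_bp550 * (550.0 / wl) ** eta

a = absorption(chl=1.2, a_cdom440=0.05, a_d440=0.02)
b_b = backscattering(b_bp550=0.01)
u = b_b / (a + b_b)   # the ratio semi-analytical reflectance models use
```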

18 pages, 7594 KB  
Article
An Underwater Low-Light Image Enhancement Algorithm Based on Image Fusion and Color Balance
by Ruishen Xu, Daqi Zhu, Wen Pang and Mingzhi Chen
J. Mar. Sci. Eng. 2025, 13(11), 2049; https://doi.org/10.3390/jmse13112049 - 26 Oct 2025
Viewed by 427
Abstract
Underwater vehicles are widely used in underwater salvage and underwater photography. However, the processing of underwater images has always been a significant challenge. Due to low-light conditions in underwater environments, images are often affected by color casts, low visibility, and missing edge details. These issues seriously affect the accuracy of underwater object detection by underwater vehicles. To address these problems, an underwater low-light image enhancement method based on image fusion and color balance is proposed in this paper. First, color compensation and white balance algorithms are employed to restore the natural appearance of the images. The texture characteristics of these white-balanced images are then enhanced using unsharp masking (USM). Subsequently, dual-channel dehazing is applied, which improves image visibility and avoids the blocking artifacts common in traditional dark-channel dehazing. Finally, the sharpened and dehazed images are combined through multi-scale fusion to obtain the final enhanced image. In quantitative analysis, PSNR (Peak Signal-to-Noise Ratio), SSIM (Structural Similarity index), UIQM (Underwater Image Quality Measure), and UCIQE (Underwater Color Image Quality Evaluation) were 28.62, 0.8753, 0.8831, and 0.5928, respectively. The results show that the images generated by this enhancement technique have higher visibility than those of other methods, with more detail and better-preserved edge information.
(This article belongs to the Section Ocean Engineering)
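
A compressed sketch of this kind of pipeline (gray-world white balance, unsharp masking, and a naive contrast-weighted blend standing in for the paper's multi-scale fusion); the paper's color compensation and dual-channel dehazing steps are omitted, and the file paths are placeholders.

```python
import cv2
import numpy as np

img = cv2.imread("underwater.png").astype(np.float32) / 255.0  # placeholder path

# Gray-world white balance: scale each channel toward the global mean.
means = img.reshape(-1, 3).mean(axis=0)
wb = np.clip(img * (means.mean() / means), 0, 1)

# Unsharp masking (USM): add back a high-frequency residual.
blur = cv2.GaussianBlur(wb, (0, 0), sigmaX=3)
sharp = np.clip(wb + 0.8 * (wb - blur), 0, 1)

# Naive stand-in for multi-scale fusion: blend the sharpened and
# white-balanced inputs with Laplacian-contrast weights.
gray = cv2.cvtColor((sharp * 255).astype(np.uint8), cv2.COLOR_BGR2GRAY)
w = np.abs(cv2.Laplacian(gray, cv2.CV_32F))
w = w[..., None] / (w.max() + 1e-6)
fused = np.clip(w * sharp + (1 - w) * wb, 0, 1)
cv2.imwrite("enhanced.png", (fused * 255).astype(np.uint8))
```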

16 pages, 14135 KB  
Article
Underwater Image Enhancement with a Hybrid U-Net-Transformer and Recurrent Multi-Scale Modulation
by Zaiming Geng, Jiabin Huang, Xiaotian Wang, Yu Zhang, Xinnan Fan and Pengfei Shi
Mathematics 2025, 13(21), 3398; https://doi.org/10.3390/math13213398 - 25 Oct 2025
Viewed by 487
Abstract
The quality of underwater imagery is inherently degraded by light absorption and scattering, a challenge that severely limits its application in critical domains such as marine robotics and archeology. While existing enhancement methods, including recent hybrid models, attempt to address this, they often struggle to restore fine-grained details without introducing visual artifacts. To overcome this limitation, this work introduces a novel hybrid U-Net-Transformer (UTR) architecture that synergizes local feature extraction with global context modeling. The core innovation is a Recurrent Multi-Scale Feature Modulation (R-MSFM) mechanism, which, unlike prior recurrent refinement techniques, employs a gated modulation strategy across multiple feature scales within the decoder to iteratively refine textural and structural details with high fidelity. This approach effectively preserves spatial information during upsampling. Extensive experiments demonstrate the superiority of the proposed method. On the EUVP dataset, UTR achieves a PSNR of 28.347 dB, a significant gain of +3.947 dB over the state-of-the-art UWFormer. Moreover, it attains a top-ranking UIQM score of 3.059 on the UIEB dataset, underscoring its robustness. The results confirm that UTR provides a computationally efficient and highly effective solution for underwater image enhancement.
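
A toy PyTorch sketch of gated feature modulation in the spirit of R-MSFM; the published recurrent multi-scale version is more involved, so treat the module below as an assumption rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class GatedModulation(nn.Module):
    """Gate a feature update against a decoder state (illustrative only)."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1), nn.Sigmoid())
        self.update = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, decoder_feat, skip_feat):
        x = torch.cat([decoder_feat, skip_feat], dim=1)
        g = self.gate(x)                       # per-pixel gate in [0, 1]
        return g * self.update(x) + (1 - g) * decoder_feat

feat = torch.randn(1, 64, 32, 32)
skip = torch.randn(1, 64, 32, 32)
print(GatedModulation(64)(feat, skip).shape)   # torch.Size([1, 64, 32, 32])
```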

18 pages, 3445 KB  
Article
Underwater Objective Detection Algorithm Based on YOLOv8-Improved Multimodality Image Fusion Technology
by Yage Qie, Chao Fang, Jinghua Huang, Donghao Wu and Jian Jiang
Machines 2025, 13(11), 982; https://doi.org/10.3390/machines13110982 - 24 Oct 2025
Viewed by 567
Abstract
The field of underwater robotics is experiencing rapid growth, wherein accurate object detection constitutes a fundamental component. Given the prevalence of false alarms and omission errors caused by intricate subaquatic conditions and substantial image noise, this study introduces an enhanced detection framework that combines the YOLOv8 architecture with multimodal visual fusion. To counter the degraded detection performance in complex environments such as low illumination, features from visible-light images are fused with the thermal distribution features of infrared images, yielding more comprehensive image information. Furthermore, to focus precisely on crucial target regions, a Multi-Scale Cross-Axis Attention mechanism (MSCA) is introduced, which significantly enhances detection accuracy. Finally, to meet the model's lightweight requirement, an Efficient Shared Convolution Head (ESC_Head) is designed. The experimental findings reveal that the YOLOv8-FUSED framework attains a mean average precision (mAP) of 82.1%, an 8.7% improvement over the baseline YOLOv8 architecture. The proposed approach also exhibits superior detection capability relative to existing techniques while satisfying the critical requirement for real-time underwater object detection.
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)
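
As a minimal illustration of visible-infrared feature fusion, the stub below concatenates the two streams and mixes them with a 1x1 convolution; the paper's YOLOv8-FUSED pipeline and MSCA attention are considerably richer, and all names here are illustrative.

```python
import torch
import torch.nn as nn

class VisThermalFusion(nn.Module):
    """Concatenate visible and thermal feature maps, learn the mix."""
    def __init__(self, c_vis, c_ir, c_out):
        super().__init__()
        self.mix = nn.Conv2d(c_vis + c_ir, c_out, kernel_size=1)

    def forward(self, f_vis, f_ir):
        return self.mix(torch.cat([f_vis, f_ir], dim=1))

fused = VisThermalFusion(256, 256, 256)(torch.randn(1, 256, 40, 40),
                                        torch.randn(1, 256, 40, 40))
```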

20 pages, 17509 KB  
Article
Underwater Structural Multi-Defects Automatic Detection via Hybrid Neural Network
by Chunyan Ma, Zhe Chen, Huibin Wang and Guangze Shen
J. Mar. Sci. Eng. 2025, 13(11), 2029; https://doi.org/10.3390/jmse13112029 - 22 Oct 2025
Viewed by 351
Abstract
Detecting underwater structural defects is vital for hydraulic engineering safety. The diverse patterns of underwater structural defects, i.e., their morphology and scale characteristics, pose difficulties for feature representation during detection; no single feature morphology is sufficient to fully characterize the diverse types of underwater defect patterns. This paper proposes a novel hybrid neural network that enhances the feature representation of underwater structural multi-defects, which in turn improves the accuracy and adaptability of underwater detection. Three types of convolution operations are combined to build the Hybrid Aggregation Network (HanNet), enhancing the morphological representation of diverse defects. Considering the scale differences among defects, a Multi-Scale Shared Feature Pyramid (MSFP) is proposed, facilitating adaptive representation of structural defects of diverse sizes. The defect detection module leverages Adaptive Spatial-Aware Attention (ASAA) at the back end, enabling selective enhancement of salient defect features. For model training and evaluation, we build, for the first time, a sonar image dataset of underwater structural multi-defects covering a wide range of typical defect types. Experimental results demonstrate that the proposed model outperforms state-of-the-art methods, significantly improving defect detection accuracy, and provides an effective solution for detecting diverse structural defects in complex underwater environments.
(This article belongs to the Section Ocean Engineering)
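
The abstract does not name the three convolution types, so the sketch below assumes standard, dilated, and depthwise convolutions aggregated by a 1x1 fusion layer; it is a guess at the pattern, not HanNet itself.

```python
import torch
import torch.nn as nn

class HybridConvBlock(nn.Module):
    """Aggregate three convolution types (assumed, not the paper's choice)."""
    def __init__(self, c):
        super().__init__()
        self.standard = nn.Conv2d(c, c, 3, padding=1)
        self.dilated = nn.Conv2d(c, c, 3, padding=2, dilation=2)
        self.depthwise = nn.Conv2d(c, c, 3, padding=1, groups=c)
        self.fuse = nn.Conv2d(3 * c, c, kernel_size=1)

    def forward(self, x):
        feats = [self.standard(x), self.dilated(x), self.depthwise(x)]
        return self.fuse(torch.cat(feats, dim=1))

y = HybridConvBlock(32)(torch.randn(1, 32, 64, 64))
```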

22 pages, 3177 KB  
Article
RECAD: Retinex-Based Efficient Channel Attention with Dark Area Detection for Underwater Images Enhancement
by Tianchi Zhang, Qiang Liu, Hongwei Qin and Xing Liu
J. Mar. Sci. Eng. 2025, 13(11), 2027; https://doi.org/10.3390/jmse13112027 - 22 Oct 2025
Viewed by 271
Abstract
Focusing on visual target detection for Autonomous Underwater Vehicles (AUVs), this paper investigates enhancement methods for weakly illuminated underwater images, which typically suffer from blurring, color distortion, and non-uniform illumination. Although deep learning-based approaches have received considerable attention, existing methods still face limitations such as insufficient feature extraction, poor detail detection, and high computational costs. To address these issues, we propose RECAD, a lightweight and efficient underwater image enhancement method based on Retinex theory. The approach incorporates a dark-region detection mechanism to significantly improve feature extraction from low-light areas, along with an efficient channel attention module to reduce computational complexity. A residual learning strategy is adopted in the image reconstruction stage to effectively preserve structural consistency. Extensive experiments on the UIEB and LSUI benchmark datasets demonstrate that RECAD outperforms state-of-the-art models including FUnIE-GAN and U-Transformer, achieving a high SSIM of 0.91 and competitive UIQM scores (up to 3.19), improving PSNR by 3.77 dB and 0.69–1.09 dB, respectively, and attaining a leading inference speed of 97 FPS with only 0.42 M parameters, which substantially reduces computational resource consumption.
(This article belongs to the Section Ocean Engineering)
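
For context, the classical single-scale Retinex decomposition that motivates Retinex-based designs such as RECAD can be written in a few lines; RECAD itself is a learned network, so this is background, not the method.

```python
import cv2
import numpy as np

def single_scale_retinex(img, sigma=80):
    """Classical Retinex: reflectance = log(image) - log(illumination)."""
    img = img.astype(np.float32) + 1.0            # avoid log(0)
    illumination = cv2.GaussianBlur(img, (0, 0), sigma)
    reflectance = np.log(img) - np.log(illumination)
    # Stretch reflectance back to a displayable range.
    r = (reflectance - reflectance.min()) / (np.ptp(reflectance) + 1e-6)
    return (r * 255).astype(np.uint8)
```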

29 pages, 7170 KB  
Article
Two Non-Learning Systems for Profile-Extraction in Images Acquired from a near Infrared Camera, Underwater Environment, and Low-Light Condition
by Tianyu Sun, Jingmei Xu, Zongan Li and Ye Wu
Appl. Sci. 2025, 15(20), 11289; https://doi.org/10.3390/app152011289 - 21 Oct 2025
Viewed by 293
Abstract
Images acquired from near-infrared cameras can contain thermal noise, which degrades image quality, and the quality of images obtained underwater suffers from the complex hydrological environment. These issues make profile extraction in such images a difficult task. In this work, two non-learning systems are built for constructing filters using wavelet transforms combined with simple functions. They are shown to extract profiles in images acquired from a near-infrared camera and from underwater environments. Furthermore, they are useful for low-light image enhancement, edge/array detection, and image fusion. The entropy measure increases as the scale of the filters is enlarged. When processing the near-infrared images, the running time, memory usage, Signal-to-Noise Ratio (SNR), and Peak Signal-to-Noise Ratio (PSNR) are generally smaller for the Canny, Roberts, Log, Sobel, and Prewitt operators than for the Atanh and Sech filters. When processing the underwater images, these values are generally smaller for the Sobel operator than for the Atanh and Sech filters. When processing the low-light images, the Atanh filter shows the highest running time and memory usage compared to the Retinex-based filter, the Sech filter, and a matched filter. Our designed filters require little computational resource compared to learning-based ones and have the merit of being multifunctional, which may be useful for advanced imaging in the field of biomedical engineering.
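
A hedged sketch of the wavelet-plus-simple-function idea: detail coefficients are passed through a bounded nonlinearity (tanh here; the paper builds Atanh and Sech filters) and reconstructed into an edge map. All parameters are illustrative.

```python
import numpy as np
import pywt

def wavelet_profile(gray, wavelet="db2", level=2, gain=2.0):
    """Profile/edge extraction via wavelet details and a simple function."""
    coeffs = pywt.wavedec2(gray.astype(np.float32), wavelet, level=level)
    coeffs[0] = np.zeros_like(coeffs[0])   # drop approximation: keep edges only
    coeffs[1:] = [tuple(np.tanh(gain * d) for d in detail)
                  for detail in coeffs[1:]]
    edges = pywt.waverec2(coeffs, wavelet)
    return np.abs(edges)

profile = wavelet_profile(np.random.rand(128, 128))
```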

16 pages, 21685 KB  
Article
MambaUSR: Mamba and Frequency Interaction Network for Underwater Image Super-Resolution
by Guangze Shen, Jingxuan Zhang and Zhe Chen
Appl. Sci. 2025, 15(20), 11263; https://doi.org/10.3390/app152011263 - 21 Oct 2025
Viewed by 347
Abstract
In recent years, underwater image super-resolution (SR) reconstruction has increasingly become a core focus of underwater machine vision. Light scattering and refraction in underwater environments result in images with blurred details, low contrast, color distortion, and multiple visual artifacts. Despite the promising results achieved by deep learning in underwater SR tasks, global and frequency-domain information remains poorly addressed. In this study, we introduce a novel underwater SR method based on the Vision State-Space Model, dubbed MambaUSR. At its core, we design the Frequency State-Space Module (FSSM), which integrates two complementary components: the Visual State-Space Module (VSSM) and the Frequency-Assisted Enhancement Module (FAEM). The VSSM models long-range dependencies to enhance global structural consistency and contrast, while the FAEM employs the Fast Fourier Transform combined with channel attention to extract high-frequency details, thereby improving the fidelity and naturalness of reconstructed images. Comprehensive evaluations on benchmark datasets confirm that MambaUSR delivers superior performance in underwater image reconstruction.
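
The frequency-assisted pattern the abstract describes (FFT, channel attention on the spectrum, inverse FFT) can be sketched as below; layer sizes are assumptions and the published FAEM is more detailed.

```python
import torch
import torch.nn as nn

class FreqBranch(nn.Module):
    """FFT magnitude -> squeeze-excite channel attention -> inverse FFT."""
    def __init__(self, c, r=4):
        super().__init__()
        self.att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(c, c // r, 1), nn.ReLU(),
            nn.Conv2d(c // r, c, 1), nn.Sigmoid())

    def forward(self, x):
        spec = torch.fft.rfft2(x, norm="ortho")
        mag, phase = spec.abs(), spec.angle()
        mag = mag * self.att(mag)                    # reweight channels
        spec = torch.polar(mag, phase)
        return torch.fft.irfft2(spec, s=x.shape[-2:], norm="ortho")

y = FreqBranch(32)(torch.randn(1, 32, 64, 64))
```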

25 pages, 10667 KB  
Article
Adaptive Exposure Optimization for Underwater Optical Camera Communication via Multimodal Feature Learning and Real-to-Sim Channel Emulation
by Jiongnan Lou, Xun Zhang, Haifei Shen, Yiqian Qian, Zhan Wang, Hongda Chen, Zefeng Wang and Lianxin Hu
Sensors 2025, 25(20), 6436; https://doi.org/10.3390/s25206436 - 17 Oct 2025
Viewed by 578
Abstract
Underwater Optical Camera Communication (UOCC) has emerged as a promising paradigm for short-range, high-bandwidth, and secure data exchange for autonomous underwater vehicles (AUVs). UOCC performance strongly depends on exposure time and ISO sensitivity, two parameters that govern photon capture, contrast, and bit detection fidelity. However, optical propagation in aquatic environments is highly susceptible to turbidity, scattering, and illumination variability, which severely degrade image clarity and signal-to-noise ratio (SNR). Conventional systems with fixed imaging settings cannot adapt to time-varying conditions, limiting communication reliability. An earlier CNN baseline validated the feasibility of deep learning for exposure prediction but lacked environmental awareness and generalization to dynamic scenarios. To overcome these limitations, we introduce a Real-to-Sim-to-Deployment framework that couples a physically calibrated emulation platform with a Hybrid CNN-MLP Model (HCMM). By fusing optical images, environmental states, and camera configurations, the HCMM achieves substantially improved parameter prediction accuracy, reducing RMSE to 0.23–0.33. When deployed on embedded hardware, it enables real-time adaptive reconfiguration and delivers up to 8.5 dB of SNR gain, surpassing both static-parameter systems and the prior CNN baseline. These results demonstrate that environment-aware multimodal learning, supported by reproducible optical channel emulation, provides a scalable and robust solution for practical UOCC deployment in positioning, inspection, and laser-based underwater communication.
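
A toy hybrid CNN-MLP of the kind the abstract describes: an image branch and an environment-state branch are concatenated to regress exposure time and ISO. Layer sizes, the 4-dimensional environment vector, and the two-value output are assumptions, not the published HCMM configuration.

```python
import torch
import torch.nn as nn

class HybridCNNMLP(nn.Module):
    """Image features + environment features -> (exposure, ISO), normalized."""
    def __init__(self, env_dim=4):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.mlp = nn.Sequential(nn.Linear(env_dim, 32), nn.ReLU())
        self.head = nn.Linear(32 + 32, 2)

    def forward(self, img, env):
        return self.head(torch.cat([self.cnn(img), self.mlp(env)], dim=1))

pred = HybridCNNMLP()(torch.randn(1, 3, 64, 64), torch.randn(1, 4))
```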

22 pages, 18934 KB  
Article
A Graph-Aware Color Correction and Texture Restoration Framework for Underwater Image Enhancement
by Jin Qian, Bin Zhang, Hui Li and Xiaoshuang Xing
Electronics 2025, 14(20), 4079; https://doi.org/10.3390/electronics14204079 - 17 Oct 2025
Viewed by 400
Abstract
Underwater images exhibit markedly more severe visual degradation than their terrestrial counterparts, manifesting as pronounced color aberration, diminished contrast and luminosity, and spatially non-uniform haze. To surmount these challenges, we propose a graph-aware framework for underwater image enhancement (GA-UIE): a unified framework, with specialized modules for color correction and texture restoration, that explicitly utilizes the intrinsic graph information of underwater images to achieve high-fidelity color restoration and texture enhancement. The proposed algorithm is architected in three synergistic stages: (1) graph feature generation, which distills color and texture graph priors from the underwater image; (2) graph-aware enhancement, performing joint color restoration and texture sharpening under explicit graph priors; and (3) graph-aware fusion, harmoniously aggregating the graph-aware color and texture representations to yield the final visually coherent output. Comprehensive quantitative evaluations reveal that our framework achieves strong scores across a broad spectrum of metrics, including PSNR, SSIM, LPIPS, UCIQE, and UIQM, on the UIEB and U45 datasets, exceeding all compared benchmark techniques and validating the method's efficacy for underwater image enhancement.
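
The full-reference scores reported here (PSNR, SSIM) are typically computed as below with scikit-image; LPIPS, UCIQE, and UIQM need dedicated implementations and are omitted. The arrays are random stand-ins, not the paper's data.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

ref = np.random.rand(128, 128, 3)   # stand-in for a ground-truth image
out = np.clip(ref + 0.05 * np.random.randn(*ref.shape), 0, 1)

psnr = peak_signal_noise_ratio(ref, out, data_range=1.0)
ssim = structural_similarity(ref, out, channel_axis=-1, data_range=1.0)
print(f"PSNR={psnr:.2f} dB, SSIM={ssim:.4f}")
```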

22 pages, 33466 KB  
Article
Symmetry-Constrained Dual-Path Physics-Guided Mamba Network: Balancing Performance and Efficiency in Underwater Image Enhancement
by Ye Fang, Heting Sun, Yali Li, Shuai Yuan and Feng Zhao
Symmetry 2025, 17(10), 1742; https://doi.org/10.3390/sym17101742 - 16 Oct 2025
Viewed by 405
Abstract
The field of underwater image enhancement (UIE) has advanced significantly, yet it continues to grapple with persistent challenges stemming from complex, spatially varying optical degradations such as light absorption, scattering, and color distortion. These factors often impede the efficient deployment of enhancement models. Conventional approaches frequently rely on uniform processing strategies that neither adapt effectively to diverse degradation patterns nor adequately incorporate physical principles, resulting in a trade-off between enhancement quality and computational efficiency. To overcome these limitations, we propose a Dual-Path Physics-Guided Mamba Network (DPPGM), a lightweight framework designed to synergize physical optics modeling with data-driven learning. Extensive experiments on three benchmark datasets (UIEB, LSUI, and U45) demonstrate that DPPGM outperforms 13 state-of-the-art methods, achieving an exceptional balance with only 1.48 M parameters and 25.39 G FLOPs. The key to this performance is a symmetry-constrained architecture: it incorporates a dual-path Mamba module for degradation-aware processing, physics-guided optimization based on the Jaffe–McGlamery model, and compact subspace fusion, ensuring that quality and efficiency are mutually reinforced rather than competing objectives.
(This article belongs to the Section Computer)
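
The Jaffe–McGlamery model this paper builds on is often reduced to a two-term image-formation equation (direct transmission plus backscatter, with the forward-scatter term dropped); the per-channel coefficients below are illustrative, not the paper's values.

```python
import numpy as np

def degrade(J, depth_m, c=(0.30, 0.10, 0.07), B=(0.15, 0.35, 0.45)):
    """Simplified underwater image formation.

    J: clean image in [0, 1], HxWx3 (RGB); depth_m: scene range [m];
    c: per-channel attenuation [1/m]; B: veiling light (both assumed).
    """
    c = np.asarray(c); B = np.asarray(B)
    t = np.exp(-c * depth_m)          # per-channel transmission
    return J * t + B * (1.0 - t)      # direct term + backscatter term

J = np.random.rand(64, 64, 3)
I = degrade(J, depth_m=5.0)
```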

24 pages, 17690 KB  
Article
Power-Compensated White Laser Underwater Imaging Applications Based on Transmission Distance
by Weiyu Cai, Guangwang Ding, Xiaomei Liu, Xiang Li, Houjie Chen, Xiaojuan Ma and Hua Liu
Optics 2025, 6(4), 51; https://doi.org/10.3390/opt6040051 - 10 Oct 2025
Viewed by 467
Abstract
The complex aquatic environment attenuates light transmission, thereby limiting the detection range of underwater laser systems. To address the challenges of limited operational distance and significant light energy attenuation, this study investigates optimized underwater lighting and imaging using a combined tricolor RGB (red-green-blue) white laser source. First, accounting for the attenuation characteristics of water, we propose a power-compensated white laser system based on transmission distance and underwater imaging theory. Second, underwater experiments are conducted using both standard D65 white lasers and the proposed power-compensated white lasers. Finally, the theory is validated by assessing image quality metrics of the captured underwater imagery. The results demonstrate that a low-power (0.518 W) power-compensated white laser achieves a transmission distance of 5 m, meeting the requirements for a long-range, low-power imaging light source. Its capability for independent adjustment of the three-color power output fulfills the lighting demands of specific long-distance transmission scenarios. These findings confirm the advantages of power-compensated white lasers in long-range underwater detection and refine the characterization of white light for underwater illumination.
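
The power-compensation idea can be illustrated with Beer–Lambert attenuation: choose per-channel launch power so that roughly equal power survives the transmission distance. The attenuation coefficients and target power below are assumed values, not the paper's measurements.

```python
import numpy as np

# Per-channel attenuation [1/m] (illustrative clear-water ordering: red
# attenuates fastest) and a desired received power per channel [mW].
c = {"R": 0.30, "G": 0.07, "B": 0.05}
d = 5.0            # transmission distance [m]
target_mw = 20.0   # desired power arriving at distance d, per channel

# Invert Beer-Lambert: P_launch = P_target * exp(c * d).
launch = {k: target_mw * np.exp(ck * d) for k, ck in c.items()}
total_w = sum(launch.values()) / 1000.0
print(launch, f"total ~{total_w:.3f} W")
```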

17 pages, 7150 KB  
Article
DeepFishNET+: A Dual-Stream Deep Learning Framework for Robust Underwater Fish Detection and Classification
by Mahdi Hamzaoui, Mokhtar Rejili, Mohamed Ould-Elhassen Aoueileyine and Ridha Bouallegue
Appl. Sci. 2025, 15(20), 10870; https://doi.org/10.3390/app152010870 - 10 Oct 2025
Viewed by 1114
Abstract
The conservation and protection of fish species are crucial tasks for aquaculture and marine biology. Recognizing fish in underwater environments is highly challenging due to poor lighting and the visual similarity between fish and the background. Conventional recognition methods are extremely time-consuming and often yield unsatisfactory accuracy. This paper proposes a new method called DeepFishNET+. First, an Underwater Image Enhancement module was implemented for image correction. Second, a Global CNN Stream (ResNet50) and a Local Transformer Stream were implemented to generate the feature map and feature vector. Next, a feature fusion operation was performed in the Cross-Attention Feature Fusion module. Finally, YOLOv8 was used for fish detection and localization, and softmax was applied for species recognition. This new approach achieved a classification precision of 98.28% and a detection precision of 92.74%.
(This article belongs to the Special Issue Advances in Aquatic Animal Nutrition and Aquaculture)
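
A minimal stub of cross-attention fusion between a transformer token stream and a CNN feature map, in the spirit of the Cross-Attention Feature Fusion module; the dimensions and the residual connection are assumptions.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Transformer tokens (queries) attend to CNN feature-map tokens."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, tokens, feat_map):
        kv = feat_map.flatten(2).transpose(1, 2)   # (B, H*W, C)
        fused, _ = self.attn(tokens, kv, kv)
        return fused + tokens                       # residual connection

out = CrossAttentionFusion()(torch.randn(1, 49, 256),
                             torch.randn(1, 256, 14, 14))
```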

16 pages, 5781 KB  
Article
Design of an Underwater Optical Communication System Based on RT-DETRv2
by Hexi Liang, Hang Li, Minqi Wu, Junchi Zhang, Wenzheng Ni, Baiyan Hu and Yong Ai
Photonics 2025, 12(10), 991; https://doi.org/10.3390/photonics12100991 - 8 Oct 2025
Viewed by 515
Abstract
Underwater wireless optical communication (UWOC) is a key technology for ocean resource development, and its link stability is often limited by the difficulty of optical alignment in complex underwater environments. To address this difficulty, this study improves the Real-Time Detection Transformer v2 (RT-DETRv2) model. We improve the underwater light-source detection model by collaboratively designing a lightweight backbone network and deformable convolution, constructing a cross-stage local attention mechanism to reduce the number of network parameters, and introducing geometrically adaptive convolution kernels that dynamically adjust the distribution of sampling points, enhance the representation of spot-deformation features, and improve positioning accuracy under optical interference. To verify the effectiveness of the model, we constructed an underwater light-emitting diode (LED) light-spot detection dataset containing 11,390 images, covering a transmission distance of 15–40 m, a ±45° deflection angle, and three light-intensity conditions (noon, evening, and late night). Experiments show that the improved model achieves an average precision at an intersection-over-union threshold of 0.50 (AP50) of 97.4% on the test set, 12.7% higher than the benchmark model. The UWOC system built on the improved model achieves zero-bit-error-rate communication within 30 m after assisted alignment (an initial lateral offset angle of 0°–60°), and the bit-error rate remains stable in the 10⁻⁷–10⁻⁶ range at 40 m, three orders of magnitude lower than a traditional Remotely Operated Vehicle (ROV) underwater optical communication system (bit-error rate 10⁻⁶–10⁻³), verifying the improved model's strong adaptability to complex underwater environments.
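
Deformable convolution, which the improved backbone uses for geometrically adaptive sampling, is available in torchvision; the sketch below shows the usual pattern of predicting per-location offsets with a small convolution. Channel counts are illustrative.

```python
import torch
from torchvision.ops import DeformConv2d

x = torch.randn(1, 64, 32, 32)

# A plain conv predicts 2 offsets (dy, dx) per kernel tap: 2 * 3 * 3 = 18.
offset_pred = torch.nn.Conv2d(64, 2 * 3 * 3, kernel_size=3, padding=1)
deform = DeformConv2d(64, 64, kernel_size=3, padding=1)

offsets = offset_pred(x)   # (B, 18, H, W) sampling offsets
y = deform(x, offsets)     # features sampled at the deformed locations
```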
