
Underwater Vision Sensing System

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Remote Sensors".

Deadline for manuscript submissions: 25 October 2024 | Viewed by 10889

Special Issue Editors

Guest Editor
College of Information Science and Engineering, Ocean University of China, Qingdao 266100, China
Interests: underwater vision; artificial intelligence

Guest Editor
Division of Integrative Systems and Design (ISD) and Department of Computer Science and Engineering (CSE), Hong Kong University of Science and Technology, Hong Kong SAR 999077, China
Interests: underwater vision; marine vision; underwater scene understanding

Guest Editor
Faculty of Engineering, Department of Mechanical and Control Engineering, Kyushu Institute of Technology, Fukuoka 804-0015, Japan
Interests: robotics; oceanic optics; computer vision; artificial intelligence

Special Issue Information

Dear Colleagues,

Image and video sensing are essential tools for exploring and understanding the hidden world beneath the waves, yet the complex aquatic environment has prevented them from being widely deployed. Recent advances in underwater vision sensing technology have enabled researchers to develop efficient and cost-effective systems for underwater imaging. Underwater vision sensing systems can be used for a variety of applications, such as underwater surveillance and tracking, monitoring of marine habitats and species, navigation and mapping, and exploration.

The goal of this Special Issue is to introduce recent advances in underwater vision sensing systems, which involve autonomous underwater vehicles, sonar imaging, optical imaging, 3D reconstruction, automatic driving devices, sonar system optimization, the Internet of Things, security facilities, navigation systems, computer vision devices, acoustic materials, acoustic technologies, and so on. In this Special Issue, we expect to publish papers with theoretical and practical innovations in underwater vision sensing systems involving underwater optical and sonar imaging, underwater sensor design and development, image and signal processing for underwater vision sensing, image and signal analysis for object recognition and tracking, machine learning and artificial intelligence for underwater vision sensing, underwater navigation and localization, underwater communication and networking, applications of underwater vision sensing in autonomous underwater vehicles, and any other possible applications.

Topics of interest include, but are not limited to:

Underwater imaging;

Acoustic technology;

The implementation of deep learning in underwater vision systems;

Underwater 3D reconstruction;

Remotely operated vehicles;

Autonomous underwater vehicles;

Underwater stereo vision;

Underwater monitoring;

Image and signal processing in underwater vision sensing systems;

Underwater networks;

Underwater communication;

Underwater sensors;

Underwater materials;

Underwater devices;

Underwater navigation;

Underwater sensing and detection;

Underwater microscopy.

Dr. Zhibin Yu
Dr. Sai-Kit Yeung
Dr. Huimin Lu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (9 papers)


Research

21 pages, 56384 KiB  
Article
Underwater Image Enhancement Based on Luminance Reconstruction by Multi-Resolution Fusion of RGB Channels
by Yi Wang, Zhihua Chen, Guoxu Yan, Jiarui Zhang and Bo Hu
Sensors 2024, 24(17), 5776; https://doi.org/10.3390/s24175776 - 5 Sep 2024
Viewed by 559
Abstract
Underwater image enhancement technology is crucial for the human exploration and exploitation of marine resources. The visibility of underwater images is affected by visible light attenuation. This paper proposes an image reconstruction method based on the decomposition–fusion of multi-channel luminance data to enhance the visibility of underwater images. The proposed method is a single-image approach to cope with the condition that underwater paired images are difficult to obtain. The original image is first divided into its three RGB channels. To reduce artifacts and inconsistencies in the fused images, a multi-resolution fusion process based on the Laplace–Gaussian pyramid guided by a weight map is employed. Image saliency analysis and mask sharpening methods are also introduced to color-correct the fused images. The results indicate that the method presented in this paper effectively enhances the visibility of dark regions in the original image and globally improves its color, contrast, and sharpness compared to current state-of-the-art methods. Our method can enhance underwater images in engineering practice, laying the foundation for in-depth research on underwater images. Full article
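The weight-guided multi-resolution fusion at the core of this method can be sketched roughly as follows. This is a minimal NumPy illustration in which a simple box-filter pyramid stands in for the paper's Laplace–Gaussian pyramid; the function names and the generic multi-image setup are illustrative assumptions, not the authors' code:

```python
import numpy as np

def downsample(img):
    """Blur with a 3x3 box filter, then keep every second pixel."""
    pad = np.pad(img, 1, mode="edge")
    blurred = sum(
        pad[i:i + img.shape[0], j:j + img.shape[1]]
        for i in range(3) for j in range(3)
    ) / 9.0
    return blurred[::2, ::2]

def upsample(img, shape):
    """Nearest-neighbour upsampling back to the target shape."""
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]

def fuse(channels, weights, levels=3):
    """Fuse single-channel images via weighted multi-resolution pyramids:
    Laplacian pyramid of each input, Gaussian pyramid of its weight map."""
    total = sum(weights) + 1e-12           # normalise weights per pixel
    weights = [w / total for w in weights]
    fused_levels = None
    for img, w in zip(channels, weights):
        gauss_i, gauss_w = [img], [w]
        for _ in range(levels - 1):
            gauss_i.append(downsample(gauss_i[-1]))
            gauss_w.append(downsample(gauss_w[-1]))
        lap = [gauss_i[k] - upsample(gauss_i[k + 1], gauss_i[k].shape)
               for k in range(levels - 1)] + [gauss_i[-1]]
        contrib = [l * g for l, g in zip(lap, gauss_w)]
        fused_levels = contrib if fused_levels is None else [
            f + c for f, c in zip(fused_levels, contrib)]
    out = fused_levels[-1]                 # collapse coarse to fine
    for k in range(levels - 2, -1, -1):
        out = fused_levels[k] + upsample(out, fused_levels[k].shape)
    return out
```

Blending in the pyramid domain rather than per pixel is what suppresses the seams and halo artifacts the abstract mentions, since each weight transition is smoothed at a scale matched to the detail it blends.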
(This article belongs to the Special Issue Underwater Vision Sensing System)

20 pages, 27042 KiB  
Article
Redefining Accuracy: Underwater Depth Estimation for Irregular Illumination Scenes
by Tong Liu, Sainan Zhang and Zhibin Yu
Sensors 2024, 24(13), 4353; https://doi.org/10.3390/s24134353 - 4 Jul 2024
Viewed by 669
Abstract
Acquiring underwater depth maps is essential as they provide indispensable three-dimensional spatial information for visualizing the underwater environment. These depth maps serve various purposes, including underwater navigation, environmental monitoring, and resource exploration. While most of the current depth estimation methods can work well in ideal underwater environments with homogeneous illumination, few consider the risk caused by irregular illumination, which is common in practical underwater environments. On the one hand, underwater environments with low-light conditions can reduce image contrast. The reduction brings challenges to depth estimation models in accurately differentiating among objects. On the other hand, overexposure caused by reflection or artificial illumination can degrade the textures of underwater objects, which is crucial to geometric constraints between frames. To address the above issues, we propose an underwater self-supervised monocular depth estimation network integrating image enhancement and auxiliary depth information. We use the Monte Carlo image enhancement module (MC-IEM) to tackle the inherent uncertainty in low-light underwater images through probabilistic estimation. When pixel values are enhanced, object recognition becomes more accessible, allowing for a more precise acquisition of distance information and thus resulting in more accurate depth estimation. Next, we extract additional geometric features through transfer learning, infusing prior knowledge from a supervised large-scale model into a self-supervised depth estimation network to refine loss functions and a depth network to address the overexposure issue. We conduct experiments with two public datasets, which exhibited superior performance compared to existing approaches in underwater depth estimation. Full article

21 pages, 15042 KiB  
Article
SD-YOLOv8: An Accurate Seriola dumerili Detection Model Based on Improved YOLOv8
by Mingxin Liu, Ruixin Li, Mingxin Hou, Chun Zhang, Jiming Hu and Yujie Wu
Sensors 2024, 24(11), 3647; https://doi.org/10.3390/s24113647 - 4 Jun 2024
Cited by 2 | Viewed by 1182
Abstract
Accurate identification of Seriola dumerili (SD) offers crucial technical support for aquaculture practices and behavioral research of this species. However, the task of discerning S. dumerili from complex underwater settings, fluctuating light conditions, and schools of fish presents a challenge. This paper proposes an intelligent recognition model based on the YOLOv8 network called SD-YOLOv8. By adding a small object detection layer and head, our model has a positive impact on the recognition capabilities for both close and distant instances of S. dumerili, significantly improving them. We construct a convenient S. dumerili dataset and introduce the deformable convolution network v2 (DCNv2) to enhance the information extraction process. Additionally, we employ the bottleneck attention module (BAM) and redesign the spatial pyramid pooling fusion (SPPF) for multidimensional feature extraction and fusion. The Inner-MPDIoU bounding box regression function adjusts the scale factor and evaluates geometric ratios to improve box positioning accuracy. The experimental results show that our SD-YOLOv8 model achieves higher accuracy and average precision, increasing from 89.2% to 93.2% and from 92.2% to 95.7%, respectively. Overall, our model enhances detection accuracy, providing a reliable foundation for the accurate detection of fishes. Full article

23 pages, 11342 KiB  
Article
Geometric Implications of Photodiode Arrays on Received Power Distribution in Mobile Underwater Optical Wireless Communication
by Tharuka Govinda Waduge, Boon-Chong Seet and Kay Vopel
Sensors 2024, 24(11), 3490; https://doi.org/10.3390/s24113490 - 28 May 2024
Cited by 1 | Viewed by 814
Abstract
Underwater optical wireless communication (UOWC) has gained interest in recent years with the introduction of autonomous and remotely operated mobile systems in blue economic ventures such as offshore food production and energy generation. Here, we devised a model for estimating the received power distribution of diffused line-of-sight mobile optical links, accommodating irregular intensity distributions beyond the beam-spread angle of the emitter. We then used this model to conduct a spatial analysis investigating the parametric influence of the placement, orientation, and angular spread of photodiodes in array-based receivers on the mobile UOWC links in different Jerlov seawater types. It revealed that flat arrays were best for links where strict alignment could be maintained, whereas curved arrays performed better spatially but were not always optimal. Furthermore, utilizing two or more spectrally distinct wavelengths and more bandwidth-efficient modulation may be preferred for received-signal intensity-based localization and improving link range in clearer oceans, respectively. Considering the geometric implications of the array of receiver photodiodes for mobile UOWCs, we recommend the use of dynamically shape-shifting array geometries. Full article
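As a rough illustration of the link-budget reasoning behind such models, the sketch below combines Beer–Lambert attenuation with geometric spreading over the beam cone. The attenuation constants, names, and the on-axis, aligned-receiver assumption are all illustrative; the paper's model additionally handles irregular intensity profiles and receiver orientation, which are omitted here:

```python
import math

# Illustrative beam attenuation coefficients (1/m) for clearer vs. more
# turbid water types; not taken from the paper.
C_CLEAR_OCEAN = 0.15
C_COASTAL = 0.30

def received_power(p_tx, dist_m, c_atten, half_angle_rad, aperture_m2):
    """First-order budget for a diffused line-of-sight optical link:
    Beer-Lambert exponential attenuation times the fraction of the
    spread beam captured by an on-axis, aligned receiver aperture."""
    beam_area = math.pi * (dist_m * math.tan(half_angle_rad)) ** 2
    geometric = min(1.0, aperture_m2 / beam_area) if beam_area > 0 else 1.0
    return p_tx * math.exp(-c_atten * dist_m) * geometric
```

Even this crude form shows why range falls off so quickly underwater: the exponential water-type term compounds the quadratic geometric spreading, which is the trade-off the paper's photodiode-array geometries are designed around.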

13 pages, 4006 KiB  
Article
Underwater Turbid Media Stokes-Based Polarimetric Recovery
by Zhenfei Wang, Meixin Hu and Ketao Zhang
Sensors 2024, 24(5), 1367; https://doi.org/10.3390/s24051367 - 20 Feb 2024
Cited by 2 | Viewed by 1165
Abstract
Underwater optical imaging for information acquisition has always been an innovative and crucial research direction. Unlike imaging in the air medium, the underwater optical environment is more intricate. From an optical perspective, natural factors such as turbulence and suspended particles in the water cause issues like light scattering and attenuation, leading to color distortion, loss of details, decreased contrast, and overall blurriness. These challenges significantly impact the acquisition of underwater image information, rendering subsequent algorithms reliant on such data unable to function properly. Therefore, this paper proposes a method for underwater image restoration using Stokes linearly polarized light, specifically tailored to the challenges of underwater complex optical imaging environments. This method effectively utilizes linear polarization information and designs a system that uses the information of the first few frames to calculate the enhanced images of the later frames. By doing so, it achieves real-time underwater Stokes linear polarized imaging while minimizing human interference during the imaging process. Furthermore, the paper provides a comprehensive analysis of the deficiencies observed during the testing of the method and proposes improvement perspectives, along with offering insights into potential future research directions. Full article
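The linear Stokes quantities that such polarimetric recovery builds on can be computed from intensity captures at four polarizer angles. A minimal sketch follows; the 0°/45°/90°/135° acquisition scheme is a common convention assumed here, not necessarily the paper's exact setup:

```python
import numpy as np

def stokes_linear(i0, i45, i90, i135):
    """Linear Stokes parameters from four polarizer-angle intensity images."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity
    s1 = i0 - i90                        # horizontal vs. vertical preference
    s2 = i45 - i135                      # +45 vs. -45 diagonal preference
    return s0, s1, s2

def dolp_aolp(s0, s1, s2):
    """Degree and angle of linear polarization per pixel."""
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
    aolp = 0.5 * np.arctan2(s2, s1)
    return dolp, aolp
```

Scattered veiling light in turbid water tends to be partially polarized while target reflections are less so, which is why per-pixel DoLP/AoLP maps like these give a handle for separating backscatter from scene signal.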

11 pages, 914 KiB  
Article
CenterNet-Saccade: Enhancing Sonar Object Detection with Lightweight Global Feature Extraction
by Wenling Wang, Qiaoxin Zhang, Zhisheng Qi and Mengxing Huang
Sensors 2024, 24(2), 665; https://doi.org/10.3390/s24020665 - 20 Jan 2024
Viewed by 1243
Abstract
Sonar imaging technology is widely used in the field of marine and underwater monitoring because sound waves can be transmitted in elastic media, such as the atmosphere and seawater, without much interference. In underwater object detection, due to the unique characteristics of the monitored sonar image, and since the target in an image is often accompanied by its own shadow, we can use the relative relationship between the shadow and the target for detection. To make use of shadow-information-aided detection and realize accurate real-time detection in sonar images, we put forward a network based on a lightweight module. By using the attention mechanism with a global receptive field, the network can make the target pay attention to the shadow information in the global environment, and because of its exquisite design, the computational time of the network is greatly reduced. Specifically, we design a ShuffleBlock model adapted to Hourglass to make the backbone network lighter. The concept of CNN dimension reduction is applied to MHSA to make it more efficient while paying attention to global features. Finally, CenterNet’s unreasonable distribution method of positive and negative samples is improved. Simulation experiments were carried out using the proposed sonar object detection dataset. The experimental results further verify that our improved model has obvious advantages over many existing conventional deep learning models. Moreover, the real-time monitoring performance of our proposed model is more conducive to the implementation in the field of ocean monitoring. Full article

17 pages, 7978 KiB  
Article
A Novel Cone Model Filtering Method for Outlier Rejection of Multibeam Bathymetric Point Cloud: Principles and Applications
by Xiaoyang Lv, Lei Wang, Dexiang Huang and Shengli Wang
Sensors 2023, 23(17), 7483; https://doi.org/10.3390/s23177483 - 28 Aug 2023
Cited by 2 | Viewed by 1520
Abstract
The utilization of multibeam sonar systems has significantly facilitated the acquisition of underwater bathymetric data. However, efficiently processing vast amounts of multibeam point cloud data remains a challenge, particularly in terms of rejecting massive outliers. This paper proposes a novel solution by implementing a cone model filtering method for multibeam bathymetric point cloud data filtering. Initially, statistical analysis is employed to remove large-scale outliers from the raw point cloud data in order to enhance its resistance to variance for subsequent processing. Subsequently, virtual grids and voxel down-sampling are introduced to determine the angles and vertices of the model within each grid. Finally, the point cloud data was inverted, and the custom parameters were redefined to facilitate bi-directional data filtering. Experimental results demonstrate that compared to the commonly used filtering method the proposed method in this paper effectively removes outliers while minimizing excessive filtering, with minimal differences in standard deviations from human-computer interactive filtering. Furthermore, it yields a 3.57% improvement in accuracy compared to the Combined Uncertainty and Bathymetry Estimator method. These findings suggest that the newly proposed method is comparatively more effective and stable, exhibiting great potential for mitigating excessive filtering in areas with complex terrain. Full article
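The first two preprocessing stages described above, coarse statistical outlier rejection followed by voxel down-sampling, can be sketched as below. The thresholds, names, and the use of global depth statistics are illustrative assumptions; the paper's cone-model filtering stage itself is not reproduced here:

```python
import numpy as np

def remove_depth_outliers(points, k=3.0):
    """Coarse large-scale outlier pass: drop soundings whose depth (z)
    lies more than k standard deviations from the mean depth."""
    z = points[:, 2]
    mask = np.abs(z - z.mean()) <= k * z.std()
    return points[mask]

def voxel_downsample(points, voxel=1.0):
    """Replace all points falling in the same cubic voxel by their centroid,
    thinning dense swaths before model fitting."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.reshape(-1)  # flat voxel indices regardless of NumPy version
    counts = np.bincount(inv).astype(float)
    centroids = np.stack(
        [np.bincount(inv, weights=points[:, d]) / counts for d in range(3)],
        axis=1)
    return centroids
```

Running the cheap statistical pass first matters: gross outliers inflate whatever local model is fitted next, so removing them early keeps the subsequent per-grid fitting resistant to variance, as the abstract notes.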

13 pages, 5355 KiB  
Article
Densely Connected Networks with Multiple Features for Classifying Sound Signals with Reverberation
by Zhuo Chen, Dazhi Gao, Kai Sun, Xiaojing Zhao, Yueqi Yu and Zhennan Wang
Sensors 2023, 23(16), 7225; https://doi.org/10.3390/s23167225 - 17 Aug 2023
Viewed by 1036
Abstract
In indoor environments, reverberation can distort the signals received by active noise cancelation devices, posing a challenge to sound classification. Therefore, we combined three speech spectral features based on different frequency scales into a densely connected network (DenseNet) to accomplish sound classification with reverberation effects. We adopted the DenseNet structure to make the model lightweight. A dataset was created based on experimental and simulation methods, and the classification goal was to distinguish between music signals, song signals, and speech signals. Using this framework, effective experiments were conducted. It was shown that the classification accuracy of the approach based on DenseNet and fused features reached 95.90%, better than the results based on other convolutional neural networks (CNNs). The size of the optimized DenseNet model is only 3.09 MB, which is only 7.76% of the size before optimization. We migrated the model to the Android platform. The modified model can discriminate sound clips faster on Android than the network before the modification. This shows that the approach based on DenseNet and fused features can deal with sound classification tasks in different indoor scenes, and the lightweight model can be deployed on embedded devices. Full article

20 pages, 9894 KiB  
Article
Estimation of Target Motion Parameters from the Tonal Signals with a Single Hydrophone
by Kai Sun, Dazhi Gao, Xiaojing Zhao, Doudou Guo, Wenhua Song and Yuzheng Li
Sensors 2023, 23(15), 6881; https://doi.org/10.3390/s23156881 - 3 Aug 2023
Cited by 2 | Viewed by 1115
Abstract
In the shallow-water waveguide environment, the tonal signals radiated by moving targets carry modal interference and Doppler shift information. The modal interference can be used to obtain the time of the closest point of approach (tCPA) and the ratio of the range at the closest point of approach to the velocity of the source (rCPA/v). However, parameters rCPA and v cannot be solved separately. When tCPA is known, the rCPA and the v of the target can be obtained theoretically by using the Doppler information. However, when the Doppler frequency shift is small or at a low signal-to-noise ratio, there will be a strong parametric coupling between rCPA and v. In order to solve the above parameter coupling problem, a target motion parameter estimation method from tonal signals with a single hydrophone is proposed in this paper. The method uses the Doppler and modal interference information carried by the tonal signals to obtain two different parametric coupling curves. Then, the parametric coupling curves can be used to estimate the two motion parameters. Simulation experiments verified the rationality of this method. The proposed method was applied to the SWellEx-96 and speedboat experiments, and the estimation errors of the motion parameters were within 10%, which shows the method is effective in its practical applications. Full article
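The Doppler information the method exploits follows the classic received-frequency curve of a tonal source passing a fixed hydrophone at constant speed. A minimal sketch of that curve is below; the symbols and the simple first-order (waveguide-free) Doppler model are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

SOUND_SPEED = 1500.0  # nominal sound speed in seawater, m/s

def doppler_curve(t, f0, v, r_cpa, t_cpa, c=SOUND_SPEED):
    """Received frequency of a constant-speed tonal source (base frequency
    f0, speed v) that passes the hydrophone at range r_cpa at time t_cpa."""
    x = v * (np.asarray(t, dtype=float) - t_cpa)   # along-track offset
    range_rate = v * x / np.sqrt(r_cpa**2 + x**2)  # dr/dt (+ receding)
    return f0 * (1.0 - range_rate / c)
```

The shape of this S-curve near t_CPA depends on r_CPA and v only through combinations of the two, which is the coupling the abstract describes; a second, independent constraint from modal interference is what lets the parameters be separated.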
