Topic Editors

Dr. Jingchun Zhou
College of Information Science and Technology, Dalian Maritime University, Dalian 116026, China
Dr. Wenqi Ren
School of Cyber Science and Technology, Sun Yat-Sen University, Guangzhou 510275, China
Dr. Qiuping Jiang
School of Information Science and Engineering, Ningbo University, Ningbo 315211, China
Dr. Yan-Tsung Peng
Department of Computer Science, National Chengchi University, Taipei 116011, Taiwan

Applications and Development of Underwater Robotics and Underwater Vision Technology

Abstract submission deadline
30 November 2024
Manuscript submission deadline
31 January 2025
Viewed by
34807

Topic Information

Dear Colleagues,

In today’s world, the ocean is one of the most important areas for human exploration and development. Underwater vision, as a cross-disciplinary field related to underwater environments, has a wide range of applications in marine resource development, marine biology research, underwater detection and control, and other fields.

In terms of marine resource development, underwater vision technology is a valuable tool for marine oil exploration and deep-sea mineral resource development. For example, high-precision underwater vision systems on underwater robots can facilitate oil exploration and production in deep-sea environments. These robots can also be used to survey and exploit seabed mineral resources, opening up deep-sea resources for utilization.

Regarding marine biology research, underwater vision technology is useful for observing and studying marine organisms. High-definition cameras on underwater robots can capture and observe marine organisms in the ocean and can aid in the study of deep-sea life, enabling scientists to comprehend the distribution of biological communities and ecosystems in deep-sea environments.

As for underwater detection and control, underwater vision technology plays a vital role in underwater target detection, underwater 3D modeling, and more. For example, underwater robots equipped with sonars and cameras can detect and identify targets in underwater environments. Moreover, these robots can create 3D models of underwater scenes, providing a visualization tool for the detection and study of underwater environments.

Therefore, underwater vision has broad application prospects and great significance for the development of the marine field. To promote progress in this area, we are editing this Topic on underwater vision and invite experts and scholars to share both their research results and the latest developments in the field.

We welcome submissions of papers related to the following areas:
  • Underwater robot vision systems;
  • Underwater image enhancement and processing techniques;
  • Underwater object detection and recognition;
  • Underwater 3D reconstruction techniques;
  • Underwater optical imaging and laser scanning technologies;
  • Underwater physical environment modeling and simulation;
  • Underwater acoustic imaging and sonar technologies;
  • Underwater communication and networking technologies.

Dr. Jingchun Zhou
Dr. Wenqi Ren
Dr. Qiuping Jiang
Dr. Yan-Tsung Peng
Topic Editors

Keywords

  • computer vision
  • image processing
  • underwater vision
  • underwater image enhancement/restoration
  • underwater robot
  • underwater imaging

Participating Journals

Journal Name | Impact Factor | CiteScore | Launched Year | First Decision (median) | APC
Journal of Marine Science and Engineering (jmse) | 2.7 | 4.4 | 2013 | 16.9 days | CHF 2600
Machines (machines) | 2.1 | 3.0 | 2013 | 15.6 days | CHF 2400
Remote Sensing (remotesensing) | 4.2 | 8.3 | 2009 | 24.7 days | CHF 2700
Robotics (robotics) | 2.9 | 6.7 | 2012 | 17.7 days | CHF 1800
Sensors (sensors) | 3.4 | 7.3 | 2001 | 16.8 days | CHF 2600

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to enjoy the benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea with this time-stamped preprint article;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (18 papers)

23 pages, 12047 KiB  
Article
Autonomous Underwater Vehicle Navigation Enhancement by Optimized Side-Scan Sonar Registration and Improved Post-Processing Model Based on Factor Graph Optimization
by Lin Zhang, Lianwu Guan, Jianhui Zeng and Yanbin Gao
J. Mar. Sci. Eng. 2024, 12(10), 1769; https://doi.org/10.3390/jmse12101769 - 5 Oct 2024
Viewed by 702
Abstract
Autonomous Underwater Vehicles (AUVs) equipped with Side-Scan Sonar (SSS) play a critical role in seabed mapping, where precise navigation data are essential for mosaicking sonar images to delineate the seafloor’s topography and feature locations. However, the accuracy of AUV navigation based on Strapdown Inertial Navigation System (SINS)/Doppler Velocity Log (DVL) systems tends to degrade over long-term mapping, which compromises the quality of sonar image mosaics. This study addresses the challenge by introducing a post-processing navigation method for AUV SSS surveys based on Factor Graph Optimization (FGO). Specifically, the method employs an improved Fourier-based image registration algorithm to generate more robust relative position measurements. Then, by integrating these measurements with data from SINS, DVL, and surface Global Navigation Satellite System (GNSS) within the FGO framework, the approach notably enhances the accuracy of the complete trajectory for AUV missions. Finally, the proposed method has been validated through both simulations and AUV marine experiments. Full article
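
The relative position measurements here come from Fourier-based registration of overlapping side-scan images. As a rough illustration of the basic idea only (a plain phase-correlation sketch, not the authors' improved algorithm), the following Python/NumPy snippet recovers the integer translation between two overlapping grayscale swaths:

```python
import numpy as np

def phase_correlation_shift(img_a, img_b):
    """Estimate the integer (dy, dx) translation between two equally sized
    grayscale images via phase correlation in the Fourier domain."""
    Fa = np.fft.fft2(img_a)
    Fb = np.fft.fft2(img_b)
    cross_power = Fa * np.conj(Fb)
    cross_power /= np.abs(cross_power) + 1e-12        # keep phase, discard magnitude
    corr = np.fft.ifft2(cross_power).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    wrap = peak > np.array(corr.shape) / 2            # large shifts wrap around
    peak[wrap] -= np.array(corr.shape, dtype=float)[wrap]
    return peak                                        # displacement of img_a relative to img_b

# Synthetic check: shift an image by (12, -7) and recover the offset.
rng = np.random.default_rng(0)
swath = rng.random((256, 256))
shifted = np.roll(swath, shift=(12, -7), axis=(0, 1))
print(phase_correlation_shift(shifted, swath))         # ≈ [12. -7.]
```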

27 pages, 6983 KiB  
Article
DA-YOLOv7: A Deep Learning-Driven High-Performance Underwater Sonar Image Target Recognition Model
by Zhe Chen, Guohao Xie, Xiaofang Deng, Jie Peng and Hongbing Qiu
J. Mar. Sci. Eng. 2024, 12(9), 1606; https://doi.org/10.3390/jmse12091606 - 10 Sep 2024
Viewed by 929
Abstract
Affected by the complex underwater environment and the limitations of low-resolution sonar image data and small sample sizes, traditional image recognition algorithms have difficulty achieving accurate sonar image recognition. Building on YOLOv7, this research devises an innovative fast recognition model designed explicitly for sonar images, namely the Dual Attention Mechanism YOLOv7 model (DA-YOLOv7), to tackle such challenges. New modules such as the Omni-Directional Convolution Channel Prior Convolutional Attention Efficient Layer Aggregation Network (OA-ELAN), Spatial Pyramid Pooling Channel Shuffling and Pixel-level Convolution Bilateral-branch Transformer (SPPCSPCBiFormer), and Ghost-Shuffle Convolution Enhanced Layer Aggregation Network-High performance (G-ELAN-H) are central to its design, reducing the computational burden and enhancing accuracy in detecting small targets and capturing local features and crucial information. The study adopts transfer learning to deal with the lack of sonar image samples. By pre-training on the large-scale Underwater Acoustic Target Detection (UATD) dataset, DA-YOLOv7 obtains initial weights, which are then fine-tuned on the Smaller Common Sonar Target Detection Dataset (SCTD), thereby reducing the risk of overfitting commonly encountered with small datasets. The experimental results on the UATD, the Underwater Optical Target Detection Intelligent Algorithm Competition 2021 Dataset (URPC), and SCTD datasets show that DA-YOLOv7 exhibits outstanding performance, with mAP@0.5 scores reaching 89.4%, 89.9%, and 99.15%, respectively. In addition, the model maintains real-time speed while having superior accuracy and recall rates compared to existing mainstream target recognition models. These findings establish the superiority of DA-YOLOv7 in sonar image analysis tasks. Full article
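
The pre-train-then-fine-tune recipe described above can be sketched generically in PyTorch. The tiny model, checkpoint path, and backbone/head split below are illustrative assumptions, not the DA-YOLOv7 implementation:

```python
import torch
import torch.nn as nn

class TinyDetector(nn.Module):
    """Stand-in detector with explicit 'backbone' and 'head' submodules so the
    freeze/fine-tune split is visible; not the DA-YOLOv7 architecture."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(32, num_classes + 5, 1)  # per-cell class scores + box terms

    def forward(self, x):
        return self.head(self.backbone(x))

model = TinyDetector()

# 1) Initialise from weights pre-trained on the large dataset (file name is hypothetical).
# model.load_state_dict(torch.load("uatd_pretrained.pt", map_location="cpu"), strict=False)

# 2) Freeze the backbone and fine-tune the remaining layers on the small dataset.
for p in model.backbone.parameters():
    p.requires_grad = False
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9
)
print(sum(p.numel() for p in model.parameters() if p.requires_grad), "trainable parameters")
```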

18 pages, 9476 KiB  
Article
Grabbing Path Extraction of Deep-Sea Manganese Nodules Based on Improved YOLOv5
by Chunlu Cui, Penglei Ma, Qianli Zhang, Guijie Liu and Yingchun Xie
J. Mar. Sci. Eng. 2024, 12(8), 1433; https://doi.org/10.3390/jmse12081433 - 19 Aug 2024
Viewed by 803
Abstract
In an effort to enhance the efficiency and accuracy of deep-sea manganese nodule grasping behavior by a manipulator, a novel approach employing an improved YOLOv5 algorithm is proposed for the extraction of the shortest paths to manganese nodules targeted by the manipulator. The loss function of YOLOv5s has been improved by integrating a dual loss function that combines IoU and NWD, resulting in better accuracy for loss calculations across different target sizes. Additionally, substituting the initial C3 module in the network backbone with a C2f module is intended to improve the flow of gradient information while reducing computational demands. Once the geometric center of the manganese nodules is identified with the improved YOLOv5 algorithm, the next step involves planning the most efficient route for the manipulator to pick up the nodules using an upgraded elite strategy ant colony algorithm. Enhancements to the ACO algorithm consist of implementing an elite strategy and progressively decreasing the number of ants in each round. This method reduces both the number of iterations and the time required for each iteration, while also preventing the occurrence of local optimal solutions. The experimental findings indicate that the improved YOLOv5s detection algorithm boosts detection accuracy by 2.3%. Furthermore, when there are fewer than 30 target planning points, the improved algorithm requires, on average, 24% fewer iterations than the ACO algorithm to determine the shortest path. Additionally, the speed of calculation for each iteration is quicker while still providing the optimal solution. Full article
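
The dual IoU + NWD box loss can be illustrated numerically. The sketch below uses the common NWD formulation (boxes modelled as 2D Gaussians, with the Wasserstein distance mapped through an exponential); the constant c and the mixing weight alpha are illustrative assumptions, not the paper's values:

```python
import numpy as np

def iou_xywh(b1, b2):
    """IoU of two boxes given as (cx, cy, w, h)."""
    x11, y11, x12, y12 = b1[0] - b1[2] / 2, b1[1] - b1[3] / 2, b1[0] + b1[2] / 2, b1[1] + b1[3] / 2
    x21, y21, x22, y22 = b2[0] - b2[2] / 2, b2[1] - b2[3] / 2, b2[0] + b2[2] / 2, b2[1] + b2[3] / 2
    iw = max(0.0, min(x12, x22) - max(x11, x21))
    ih = max(0.0, min(y12, y22) - max(y11, y21))
    inter = iw * ih
    union = b1[2] * b1[3] + b2[2] * b2[3] - inter
    return inter / (union + 1e-9)

def nwd(b1, b2, c=12.8):
    """Normalized Wasserstein distance between boxes modelled as 2D Gaussians;
    c is a dataset-dependent constant (value here is an assumption)."""
    w2_sq = (b1[0] - b2[0]) ** 2 + (b1[1] - b2[1]) ** 2 \
            + ((b1[2] - b2[2]) ** 2 + (b1[3] - b2[3]) ** 2) / 4.0
    return np.exp(-np.sqrt(w2_sq) / c)

def dual_box_loss(pred, target, alpha=0.5):
    """Blend of IoU loss and NWD loss; alpha is an illustrative mixing weight."""
    return alpha * (1.0 - iou_xywh(pred, target)) + (1.0 - alpha) * (1.0 - nwd(pred, target))

print(dual_box_loss(np.array([50.0, 50, 10, 10]), np.array([52.0, 51, 9, 11])))
```

Because NWD saturates much more slowly than IoU for small boxes, the blended loss keeps a useful gradient even when predicted and ground-truth nodules barely overlap.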

17 pages, 13174 KiB  
Article
Enhanced YOLOv7 for Improved Underwater Target Detection
by Daohua Lu, Junxin Yi and Jia Wang
J. Mar. Sci. Eng. 2024, 12(7), 1127; https://doi.org/10.3390/jmse12071127 - 4 Jul 2024
Cited by 1 | Viewed by 1091
Abstract
To address the problems that some underwater targets are relatively small, have low contrast, and are surrounded by considerable interference, which lead to a high missed-detection rate and low recognition accuracy, a new improved YOLOv7 underwater target detection algorithm is proposed. First, the original YOLOv7 anchor frame information is updated by the K-Means algorithm to generate anchor frame sizes and ratios suitable for the underwater target dataset; second, we use the PConv (Partial Convolution) module instead of part of the standard convolution in the multi-scale feature fusion module to reduce the amount of computation and number of parameters, thus improving the detection speed; then, the existing CIoU loss function is improved with the ShapeIoU_NWD loss function, which allows the model to learn more feature information during the training process; finally, we introduce the SimAM attention mechanism after the multi-scale feature fusion module to increase attention to small feature information, which improves the detection accuracy. This method achieves an average accuracy of 85.7% on the marine organisms dataset, and the detection speed reaches 122.9 frames/s, while the number of parameters is reduced by 21% and the amount of computation by 26% compared with the original YOLOv7 algorithm. The experimental results show that the improved algorithm offers a large improvement in detection speed and accuracy. Full article
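
Anchor regeneration with K-Means can be sketched as follows. This is a generic Euclidean k-means over labelled box widths and heights with illustrative parameters; YOLO tooling typically uses a 1−IoU distance and k-means++ seeding instead:

```python
import numpy as np

def kmeans_anchors(wh, k=9, iters=50, seed=0):
    """Cluster (width, height) pairs of labelled boxes into k anchor sizes."""
    rng = np.random.default_rng(seed)
    centers = wh[rng.choice(len(wh), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(wh[:, None, :] - centers[None, :, :], axis=-1)
        assign = d.argmin(axis=1)                      # nearest anchor for each box
        for j in range(k):
            if np.any(assign == j):
                centers[j] = wh[assign == j].mean(axis=0)
    return centers[np.argsort(centers.prod(axis=1))]   # sort anchors by area

# Synthetic box sizes standing in for a labelled underwater dataset.
wh = np.abs(np.random.default_rng(1).normal([30, 40], [10, 15], size=(500, 2)))
print(kmeans_anchors(wh, k=9))
```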

12 pages, 4714 KiB  
Article
Artificial Neural Network for Glider Detection in a Marine Environment by Improving a CNN Vision Encoder
by Jungwoo Lee, Ji-Hyun Park, Jeong-Hwan Hwang, Kyoungseok Noh, Youngho Choi and Jinho Suh
J. Mar. Sci. Eng. 2024, 12(7), 1106; https://doi.org/10.3390/jmse12071106 - 29 Jun 2024
Viewed by 956
Abstract
Despite major economic and technological advances, much of the ocean remains unexplored, which has led to the use of remotely operated vehicles (ROVs) and gliders for surveying. ROVs and underwater gliders are essential for ocean data collection. Gliders, which control their own buoyancy, are particularly effective unmanned platforms for long-term observations. The traditional method of recovering the glider on a small boat is a risky operation and depends on the skill of the workers. Therefore, a safer, more efficient, and automated system is needed to recover them. In this study, we propose a lightweight artificial neural network for underwater glider detection that is efficient for learning and inference. In order to have a smaller parameter size and faster inference, a convolutional neural network (CNN) vision encoder in an artificial neural network splits an image of a glider into a number of elongated patches that overlap to better preserve the spatial information of the pixels in the horizontal and vertical directions. Global max-pooling, which computes the maximum over all the spatial locations of an input feature, was used to activate the most salient feature vectors at the end of the encoder. As a result of the inference of the glider detection models on the test dataset, the average precision (AP), which indicates the probability that an object is located within the predicted bounding box, shows that the proposed model achieves AP = 99.7%, while the EfficientDet-D2 model for comparison of detection performance achieves AP = 69.2% at an intersection over union (IOU) threshold of 0.5. Similarly, the proposed model achieves an AP of 78.9% and the EfficientDet-D2 model achieves an AP of 50.5% for an IOU threshold of 0.75. These results show that accurate prediction is possible within a wide range of recall for glider position inference in a real ocean environment. Full article
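
The encoder's patching idea (elongated, overlapping patches followed by global max-pooling of the most salient responses) can be sketched roughly. Strip width, overlap, and orientation below are illustrative assumptions, not the authors' exact configuration, and the image width is assumed to exceed the patch width:

```python
import numpy as np

def elongated_patches(img, patch_w=32, overlap=8):
    """Split an (H, W, C) image into full-height vertical strips of width patch_w
    that overlap by `overlap` pixels."""
    h, w, c = img.shape
    stride = patch_w - overlap
    starts = list(range(0, w - patch_w + 1, stride))
    if starts[-1] + patch_w < w:                 # make sure the right edge is covered
        starts.append(w - patch_w)
    return np.stack([img[:, s:s + patch_w, :] for s in starts])

def global_max_pool(features):
    """Global max-pooling over the spatial dimensions of an (N, H, W, C) tensor."""
    return features.max(axis=(1, 2))

img = np.random.rand(192, 256, 3)
patches = elongated_patches(img)
print(patches.shape, global_max_pool(patches).shape)
```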

18 pages, 5247 KiB  
Article
Sensing Pre- and Post-Ecdysis of Tropical Rock Lobsters, Panulirus ornatus, Using a Low-Cost Novel Spectral Camera
by Charles Sutherland, Alan D. Henderson, Dean R. Giosio, Andrew J. Trotter and Greg G. Smith
J. Mar. Sci. Eng. 2024, 12(6), 987; https://doi.org/10.3390/jmse12060987 - 12 Jun 2024
Cited by 2 | Viewed by 803
Abstract
Tropical rock lobsters (Panulirus ornatus) are a highly cannibalistic species with intermoult animals predominantly attacking animals during ecdysis (moulting). Rapid, positive characterisation of pre-ecdysis lobsters may open a pathway to disrupt cannibalism. Ecdysial suture line development is considered for pre-ecdysis recognition with suture line definition compared for intermoult and pre-ecdysis lobsters emerged and immerged, using white, near ultraviolet (365 nm), near infrared (850 nm), and specialty SFH 4737 broadband IR LEDs against a reference of intermoult lobsters with no suture line development. Difficulties in acquiring suture line images prompted research into pre-ecdysis characterisation from the lobster’s dorsal carapace, due to its accessibility through a culture vessel’s surface. In this study, a novel low-cost spectral camera was developed by coordinating an IMX219 image sensor, an AS7265x spectral sensor, and four SFH 4737 broadband infrared LEDs through a single-board computer. Images and spectral data from the lobster’s dorsal carapace were acquired from intermoult, pre-ecdysis, and post-ecdysis lobsters. For the first time, suture line definition was found to be enhanced under 850 nm, 365 nm, and SFH 4737 LEDs for immerged lobsters, while the 850 nm LED achieved the best suture line definition of emerged lobsters. Although the spectral camera was unable to characterise pre-ecdysis, its development was validated when a least squares regression for binary classification decision boundary successfully separated 86.7% of post-ecdysis lobsters. Achieving post-ecdysis characterisation is the first time the dorsal carapace surface has been used to characterise a moult stage for palinurid lobsters. Full article
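
The post-ecdysis separation relies on a least-squares decision boundary over spectral readings. A generic sketch follows, using synthetic 18-channel data standing in for AS7265x readings (not the study's measurements):

```python
import numpy as np

def lstsq_binary_classifier(X, y):
    """Fit a linear decision boundary by least squares on +/-1 labels.
    Returns weights w (with bias) so that sign(Xb @ w) predicts the class."""
    Xb = np.hstack([X, np.ones((len(X), 1))])    # append bias column
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.sign(Xb @ w)

# Toy spectra: 18 channels per lobster, +1 = post-ecdysis, -1 = other moult stages.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.3, 0.05, (40, 18)), rng.normal(0.5, 0.05, (40, 18))])
y = np.concatenate([-np.ones(40), np.ones(40)])
w = lstsq_binary_classifier(X, y)
print("training accuracy:", (predict(w, X) == y).mean())
```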

19 pages, 8006 KiB  
Article
An Underwater Localization Method Based on Visual SLAM for the Near-Bottom Environment
by Zonglin Liu, Meng Wang, Hanwen Hu, Tong Ge and Rui Miao
J. Mar. Sci. Eng. 2024, 12(5), 716; https://doi.org/10.3390/jmse12050716 - 26 Apr 2024
Viewed by 994
Abstract
The feature matching of near-bottom visual SLAM is affected by raised underwater sediments, resulting in tracking loss. In this paper, a novel visual SLAM system is proposed for environments with raised underwater sediments. The underwater images are first classified using a color recognition method that weights pixel locations to reduce the interference of similar colors on the seabed. An improved adaptive median filter is then proposed to filter the classified images, using the mean value of the filter window border as the discriminant condition so as to retain the original features of the image. The filtered images are finally processed by the tracking module to obtain the trajectory of the underwater vehicle and the seafloor map. Datasets of seamount areas captured in the western Pacific Ocean were processed by the improved visual SLAM system. The keyframes, mapping points, and feature point matching pairs extracted by the improved system are increased by 5.2%, 11.2%, and 4.5%, respectively, compared with those of the ORB-SLAM3 system. The improved visual SLAM system is robust to dynamic disturbances, which makes it practical for underwater vehicles operating in near-bottom areas such as seamounts and nodule fields. Full article
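
One plausible reading of the border-mean discriminant (apply the median only where the centre pixel deviates strongly from the mean of the window border, otherwise keep the original pixel) is sketched below; the window size and threshold are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def border_mean_median_filter(img, k=5, thresh=30.0):
    """Median-filter a grayscale image only where the centre pixel deviates from
    the mean of the filter-window border; otherwise keep the original value."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    out = img.astype(float).copy()
    border = np.ones((k, k), dtype=bool)
    border[1:-1, 1:-1] = False                   # mask selecting the window border
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = padded[i:i + k, j:j + k]
            if abs(win[pad, pad] - win[border].mean()) > thresh:
                out[i, j] = np.median(win)       # likely sediment speckle: smooth it
    return out

noisy = np.clip(np.random.default_rng(0).normal(128, 20, (64, 64)), 0, 255)
print(border_mean_median_filter(noisy).shape)
```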

19 pages, 15815 KiB  
Article
A Statistical Evaluation of the Connection between Underwater Optical and Acoustic Images
by Rebeca Chinicz and Roee Diamant
Remote Sens. 2024, 16(4), 689; https://doi.org/10.3390/rs16040689 - 15 Feb 2024
Cited by 1 | Viewed by 1248
Abstract
The use of Synthetic Aperture Sonar (SAS) in autonomous underwater vehicle (AUV) surveys has found applications in archaeological searches, underwater mine detection and wildlife monitoring. However, the easy confusability of natural objects with the target object leads to high false positive rates. To improve detection, the combination of SAS and optical images has recently attracted attention. While SAS data provides a large-scale survey, optical information can help contextualize it. This combination creates the need to match multimodal, optical–acoustic image pairs. The two images are not aligned, and are taken from different angles of view and at different times. As a result, challenges such as the different resolution, scaling and posture of the two sensors need to be overcome. In this research, motivated by the information gain when using both modalities, we turn to statistical exploration for feature analysis to investigate the relationship between the two modalities. In particular, we propose an entropic method for recognizing matching multimodal images of the same object and investigate the probabilistic dependency between the images of the two modalities based on their conditional probabilities. The results on a real dataset of SAS and optical images of the same and different objects on the seafloor confirm our assumption that the conditional probability of SAS images is different from the marginal probability given an optical image, and show a favorable trade-off between detection and false alarm rate that is higher than current benchmarks. For reproducibility, we share our database. Full article
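
The statistical dependence between matched optical–acoustic pairs can be probed with a generic information-theoretic estimator: the mutual information I(A;B) = H(A) − H(A|B) is non-zero exactly when the conditional distribution differs from the marginal. A histogram-based sketch on synthetic images (not the shared database, and not the paper's exact statistic):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Estimate mutual information between two equally sized grayscale images
    from their joint intensity histogram; high MI suggests the same object."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    nz = p_ab > 0
    return float((p_ab[nz] * np.log2(p_ab[nz] / (p_a @ p_b)[nz])).sum())

rng = np.random.default_rng(0)
sas = rng.random((128, 128))
optical_same = 0.7 * sas + 0.3 * rng.random((128, 128))   # statistically dependent pair
optical_other = rng.random((128, 128))                     # independent pair
print(mutual_information(sas, optical_same), mutual_information(sas, optical_other))
```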

15 pages, 4491 KiB  
Article
G-Net: An Efficient Convolutional Network for Underwater Object Detection
by Xiaoyang Zhao, Zhuo Wang, Zhongchao Deng and Hongde Qin
J. Mar. Sci. Eng. 2024, 12(1), 116; https://doi.org/10.3390/jmse12010116 - 7 Jan 2024
Cited by 1 | Viewed by 1770
Abstract
Visual perception technology is of great significance for underwater robots carrying out seabed investigation and mariculture activities. Due to the complex underwater environment, it is often necessary to enhance underwater images when detecting underwater targets with optical sensors. Most traditional methods perform image enhancement first and target detection afterwards. However, this two-stage approach greatly increases processing time in practical applications. To solve this problem, we propose a feature-enhanced target detection network, Global-Net (G-Net), which combines underwater image enhancement with target detection. Different from the traditional method of reconstructing enhanced images for target detection, G-Net integrates image enhancement and target detection. In addition, our feature map learning module (FML) can effectively extract defogging features. Test results in a real underwater environment show that G-Net not only improves the detection accuracy of underwater targets by about 5% but also has high detection efficiency, which ensures the reliability of underwater robots in seabed investigation and aquaculture activities. Full article

19 pages, 5080 KiB  
Article
Underwater Object Detection in Marine Ranching Based on Improved YOLOv8
by Rong Jia, Bin Lv, Jie Chen, Hailin Liu, Lin Cao and Min Liu
J. Mar. Sci. Eng. 2024, 12(1), 55; https://doi.org/10.3390/jmse12010055 - 25 Dec 2023
Cited by 5 | Viewed by 2958
Abstract
The aquaculture of marine ranching is of great significance for scientific aquaculture and relies on statistical information about the types and density of living marine resources. However, underwater environments are complex, and marine organisms present many small and overlapping targets, which seriously affects the performance of detectors. To overcome these issues, we improved the YOLOv8 detector. The InceptionNeXt block was used in the backbone to enhance the feature extraction capabilities of the network. Subsequently, a separate and enhanced attention module (SEAM) was added to the neck to enhance the detection of overlapping targets. Moreover, the normalized Wasserstein distance (NWD) loss was proportionally added to the original CIoU loss to improve the detection of small targets. Data augmentation methods were used during training to enhance the robustness of the network. The experimental results showed that the improved YOLOv8 achieved a mAP of 84.5%, an improvement of approximately 6.2% over the original YOLOv8. Meanwhile, there were no significant increases in the numbers of parameters and computations. This detector can be applied on platforms for seafloor observation experiments in the field of marine ranching to complete the task of real-time detection of marine organisms. Full article

14 pages, 5305 KiB  
Article
New Insights into Sea Turtle Propulsion and Their Cost of Transport Point to a Potential New Generation of High-Efficient Underwater Drones for Ocean Exploration
by Nick van der Geest, Lorenzo Garcia, Roy Nates and Fraser Borrett
J. Mar. Sci. Eng. 2023, 11(10), 1944; https://doi.org/10.3390/jmse11101944 - 9 Oct 2023
Cited by 1 | Viewed by 2378
Abstract
Sea turtles gracefully navigate their marine environments by flapping their pectoral flippers in an elegant routine to produce the hydrodynamic forces required for locomotion. The propulsion of sea turtles has been shown to occur for approximately 30% of the limb beat, with the remaining 70% employing a drag-reducing glide. However, it is unknown how the sea turtle manipulates the flow during the propulsive stage. Answering this research question is a complicated process, especially when conducting laboratory tests on endangered animals, and the animal may not even swim with its regular routine while in captivity. In this work, we take advantage of our robotic sea turtle, internally known as Cornelia, to offer the first insights into the flow features during the sea turtle’s propulsion cycle consisting of the downstroke and the sweep stroke. Comparing the flow features to the animal’s swim speed, flipper angle of attack, power consumption, thrust and lift production, we hypothesise how each of the flow features influences the animal’s propulsive efforts and cost of transport (COT). Our findings show that the sea turtle can produce extremely low COT values that point to the effectiveness of the sea turtle propulsive technique. Based on our findings, we extract valuable data that can potentially lead to turtle-inspired elements for high-efficiency underwater drones for long-term underwater missions. Full article
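
Cost of transport is commonly computed as the dimensionless ratio COT = P / (m g v). A tiny worked example with placeholder numbers (not Cornelia's measured values):

```python
# Dimensionless cost of transport: COT = P / (m * g * v), with power P in watts,
# mass m in kg, and swim speed v in m/s. The inputs below are placeholders.
def cost_of_transport(power_w, mass_kg, speed_ms, g=9.81):
    return power_w / (mass_kg * g * speed_ms)

print(cost_of_transport(power_w=8.0, mass_kg=10.0, speed_ms=0.5))  # ≈ 0.163
```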

17 pages, 3271 KiB  
Article
Underwater Image Translation via Multi-Scale Generative Adversarial Network
by Dongmei Yang, Tianzi Zhang, Boquan Li, Menghao Li, Weijing Chen, Xiaoqing Li and Xingmei Wang
J. Mar. Sci. Eng. 2023, 11(10), 1929; https://doi.org/10.3390/jmse11101929 - 6 Oct 2023
Viewed by 1167
Abstract
Underwater image translation assists in generating rare images for marine applications. However, such translation tasks are still challenging due to a lack of data, insufficient feature extraction ability, and the loss of content details. To address these issues, we propose a novel multi-scale image translation model based on style-independent discriminators and attention modules (SID-AM-MSITM), which learns the mapping relationship between two sets of unpaired images for translation. We introduce Convolutional Block Attention Modules (CBAM) to the generators and discriminators of SID-AM-MSITM to improve its feature extraction ability. Moreover, we construct style-independent discriminators so that the discrimination results of SID-AM-MSITM are not affected by image style and content details are retained. Through ablation and comparative experiments, we demonstrate that the attention modules and style-independent discriminators are reasonably introduced and that SID-AM-MSITM performs better than multiple baseline methods. Full article
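
CBAM itself is a well-known published module (channel attention followed by spatial attention). A compact PyTorch sketch of the standard formulation follows; how it is wired into SID-AM-MSITM's generators and discriminators is not shown in the abstract:

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention, then spatial attention."""
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))                   # channel stats from avg-pool
        mx = self.mlp(x.amax(dim=(2, 3)))                    # channel stats from max-pool
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)     # channel attention
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))            # spatial attention

x = torch.randn(2, 64, 32, 32)
print(CBAM(64)(x).shape)   # torch.Size([2, 64, 32, 32])
```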

15 pages, 4817 KiB  
Article
Underwater Geomagnetic Localization Based on Adaptive Fission Particle-Matching Technology
by Huapeng Yu, Ziyuan Li, Wentie Yang, Tongsheng Shen, Dalei Liang and Qinyuan He
J. Mar. Sci. Eng. 2023, 11(9), 1739; https://doi.org/10.3390/jmse11091739 - 4 Sep 2023
Cited by 3 | Viewed by 1312
Abstract
The geomagnetic field constitutes a massive fingerprint database, and its unique structure provides potential position correction information. In recent years, particle filter technology has received more attention in the context of robot navigation. However, particle degradation and impoverishment have constrained navigation systems’ performance. This paper transforms particle filtering into a particle-matching positioning problem and proposes a geomagnetic localization method based on an adaptive fission particle filter. This method employs particle-filtering technology to construct a geomagnetic matching positioning model. Through adaptive particle fission and sampling, the problem of particle degradation and impoverishment in traditional particle filtering is solved, resulting in improved geomagnetic matching positioning accuracy. Finally, the proposed method was tested in a sea trial, and the results show that the proposed method has a lower positioning error than traditional particle-filtering and intelligent particle-filtering algorithms. Under geomagnetic map conditions, an average positioning accuracy of about 546.44 m is achieved. Full article
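
A plain bootstrap particle filter for map matching illustrates the baseline that the adaptive fission scheme improves on. The toy anomaly map, noise levels, and motion model below are assumptions for the sketch, not the sea-trial setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def geomag_map(pos):
    """Toy geomagnetic anomaly map (nT) over 2D position; stands in for a real grid."""
    return 50.0 * np.sin(pos[..., 0] / 200.0) + 30.0 * np.cos(pos[..., 1] / 150.0)

def pf_step(particles, weights, velocity, dt, measured, sigma=5.0):
    """One predict/update/resample cycle of a bootstrap particle filter.
    (The paper's adaptive-fission scheme replaces this simple resampling.)"""
    particles = particles + velocity * dt + rng.normal(0, 2.0, particles.shape)   # predict
    like = np.exp(-0.5 * ((geomag_map(particles) - measured) / sigma) ** 2) + 1e-12
    weights = weights * like
    weights /= weights.sum()
    idx = rng.choice(len(particles), len(particles), p=weights)                   # resample
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

particles = rng.uniform(0, 1000, (500, 2))
weights = np.full(500, 1.0 / 500)
true_pos = np.array([400.0, 600.0])
for _ in range(20):
    true_pos = true_pos + np.array([5.0, 3.0])
    measured = geomag_map(true_pos) + rng.normal(0, 5.0)
    particles, weights = pf_step(particles, weights, np.array([5.0, 3.0]), 1.0, measured)
print("posterior mean:", particles.mean(axis=0), "true position:", true_pos)
```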

19 pages, 6136 KiB  
Article
Coupling Dilated Encoder–Decoder Network for Multi-Channel Airborne LiDAR Bathymetry Full-Waveform Denoising
by Bin Hu, Yiqiang Zhao, Guoqing Zhou, Jiaji He, Changlong Liu, Qiang Liu, Mao Ye and Yao Li
Remote Sens. 2023, 15(13), 3293; https://doi.org/10.3390/rs15133293 - 27 Jun 2023
Cited by 1 | Viewed by 1378
Abstract
Multi-channel airborne full-waveform LiDAR is widely used for high-precision underwater depth measurement. However, the signal quality of full-waveform data is unstable due to the influence of background light, dark current noise, and the complex transmission process. Therefore, we propose a nonlocal encoder block (NLEB) based on spatial dilated convolution to optimize the feature extraction of adjacent frames. On this basis, a coupled denoising encoder–decoder network is proposed that takes advantage of the echo correlation in deep-water and shallow-water channels. Firstly, full waveforms from different channels are stacked together to form a two-dimensional tensor and input into the proposed network. Then, NLEB is used to extract local and nonlocal features from the 2D tensor. After fusing the features of the two channels, the reconstructed denoised data can be obtained by upsampling with a fully connected layer and deconvolution layer. Based on the measured data set, we constructed a noise–noisier data set, on which several denoising algorithms were compared. The results show that the proposed method improves the stability of denoising by using the inter-channel and multi-frame data correlation. Full article

21 pages, 61283 KiB  
Article
Two-Branch Underwater Image Enhancement and Original Resolution Information Optimization Strategy in Ocean Observation
by Dehuan Zhang, Wei Cao, Jingchun Zhou, Yan-Tsung Peng, Weishi Zhang and Zifan Lin
J. Mar. Sci. Eng. 2023, 11(7), 1285; https://doi.org/10.3390/jmse11071285 - 25 Jun 2023
Cited by 1 | Viewed by 1340
Abstract
In complex marine environments, underwater images often suffer from color distortion, blur, and poor visibility. Existing underwater image enhancement methods predominantly rely on the U-net structure, which assigns the same weight to different resolution information. However, this approach lacks the ability to extract sufficient detailed information, resulting in problems such as blurred details and color distortion. We propose a two-branch underwater image enhancement method with an optimized original resolution information strategy to address this limitation. Our method comprises a feature enhancement subnetwork (FEnet) and an original resolution subnetwork (ORSnet). FEnet extracts multi-resolution information and utilizes an adaptive feature selection module to enhance global features in different dimensions. The enhanced features are then fed into ORSnet as complementary features, which extract local enhancement features at the original image scale to achieve semantically consistent and visually superior enhancement effects. Experimental results on the UIEB dataset demonstrate that our method achieves the best performance compared to the state-of-the-art methods. Furthermore, through comprehensive application testing, we have validated the superiority of our proposed method in feature extraction and enhancement compared to other end-to-end underwater image enhancement methods. Full article

13 pages, 2904 KiB  
Article
An Onboard Point Cloud Semantic Segmentation System for Robotic Platforms
by Fei Wang, Yujie Yang, Jingchun Zhou and Weishi Zhang
Machines 2023, 11(5), 571; https://doi.org/10.3390/machines11050571 - 22 May 2023
Cited by 2 | Viewed by 1762
Abstract
Point clouds represent an important way for robots to perceive their environments, and can be acquired by mobile robots with LiDAR sensors or underwater robots with sonar sensors. Hence, real-time semantic segmentation of point clouds with onboard edge devices is essential for robots to apprehend their surroundings. In this paper, we propose an onboard point cloud semantic segmentation system for robotic platforms to overcome the conflict between attaining high segmentation accuracy and the limited computational resources of onboard devices. Our system takes a raw sequence of point clouds as input, and outputs semantic segmentation results for each frame as well as a reconstructed semantic map of the environment. At the core of our system are a transformer-based hierarchical feature extraction module and a fusion module. The two modules are implemented with sparse tensor technologies to speed up inference. The predictions are accumulated according to Bayes’ rule to generate a global semantic map. Experimental results on the SemanticKITTI dataset show that our system achieves a +2.2% mIoU and an 18× speed improvement compared with SOTA methods. Our system is able to process 2.2 M points per second on a Jetson AGX Xavier (NVIDIA, Santa Clara, CA, USA), demonstrating its applicability to various robotic platforms. Full article
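
The per-cell fusion of semantic predictions across frames with Bayes' rule can be sketched as follows, assuming conditionally independent per-frame class probabilities (a common simplification; the system's exact map representation is not shown in the abstract):

```python
import numpy as np

def bayes_accumulate(prior, frame_probs):
    """Fuse per-class probabilities for one map cell across frames with Bayes' rule:
    posterior ∝ prior * likelihood, renormalized after each observation."""
    post = prior.copy()
    for p in frame_probs:
        post = post * p
        post = post / post.sum()
    return post

prior = np.full(3, 1.0 / 3.0)                       # 3 classes, uniform prior
frames = [np.array([0.6, 0.3, 0.1]),                # softmax outputs from successive frames
          np.array([0.5, 0.4, 0.1]),
          np.array([0.7, 0.2, 0.1])]
print(bayes_accumulate(prior, frames))              # sharpens toward class 0
```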

17 pages, 6569 KiB  
Article
An Improved YOLOv5s-Based Scheme for Target Detection in a Complex Underwater Environment
by Chenglong Hou, Zhiguang Guan, Ziyi Guo, Siqi Zhou and Mingxing Lin
J. Mar. Sci. Eng. 2023, 11(5), 1041; https://doi.org/10.3390/jmse11051041 - 13 May 2023
Cited by 7 | Viewed by 1993
Abstract
At present, sea cucumbers, sea urchins, and other seafood products have become increasingly significant in the seafood aquaculture industry. In traditional fishing operations, divers go underwater for fishing, and the complex underwater environment can harm the divers’ bodies. Therefore, the use of underwater robots for seafood fishing has become a current trend. During the fishing process, underwater fishing robots rely on vision to accurately detect sea cucumbers and sea urchins. In this paper, an algorithm for the target detection of sea cucumbers and sea urchins in complex underwater environments is proposed based on an improved YOLOv5s. The following improvements are made to YOLOv5s: (1) to enhance the feature extraction ability of the model, the gnConv-based self-attention sublayer HorBlock module is added to the backbone network; (2) to obtain the optimal hyperparameters of the model for underwater datasets, hyperparameter evolution based on a genetic algorithm is performed; (3) the underwater dataset is extended using offline data augmentation. The dataset used in the experiment was created in a real underwater environment and contains 1536 images in total, randomly divided into training, validation, and test sets in a ratio of 7:2:1. The divided dataset is input to the improved YOLOv5s network for training. The experiments show that the mean average precision (mAP) of the algorithm is 94%, a rise of 4.5% over the original YOLOv5s. The detection time increases by 4.09 ms, which is acceptable given the accuracy improvement. Therefore, the improved YOLOv5s has better detection accuracy and speed in complex underwater environments, and can provide theoretical support for the underwater operations of underwater fishing robots. Full article
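
Hyperparameter evolution with a genetic-style mutate-and-select loop can be sketched generically. The search space, mutation scale, and stand-in fitness function below are illustrative assumptions; in practice, fitness would be the validation mAP after a short training run:

```python
import random

# Toy hyperparameter evolution: mutate the current best setting and keep mutants
# that improve a fitness score (a stand-in function here, not real mAP).
SPACE = {"lr0": (1e-4, 1e-1), "momentum": (0.6, 0.98), "weight_decay": (0.0, 1e-3)}

def fitness(hp):                      # placeholder for "train briefly and return mAP"
    return -abs(hp["lr0"] - 0.01) - abs(hp["momentum"] - 0.937)

def mutate(hp, sigma=0.2):
    child = {}
    for k, (lo, hi) in SPACE.items():
        child[k] = min(hi, max(lo, hp[k] * (1 + random.gauss(0, sigma))))
    return child

best = {k: random.uniform(lo, hi) for k, (lo, hi) in SPACE.items()}
for _ in range(100):
    cand = mutate(best)
    if fitness(cand) > fitness(best):
        best = cand
print(best)
```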

28 pages, 19746 KiB  
Review
An Overview of Key SLAM Technologies for Underwater Scenes
by Xiaotian Wang, Xinnan Fan, Pengfei Shi, Jianjun Ni and Zhongkai Zhou
Remote Sens. 2023, 15(10), 2496; https://doi.org/10.3390/rs15102496 - 9 May 2023
Cited by 19 | Viewed by 7850
Abstract
Autonomous localization and navigation, as an essential research area in robotics, has a broad scope of applications in various scenarios. To widen the utilization environment and augment domain expertise, simultaneous localization and mapping (SLAM) in underwater environments has recently become a popular topic for researchers. This paper examines the key SLAM technologies for underwater vehicles and provides an in-depth discussion on the research background, existing methods, challenges, application domains, and future trends of underwater SLAM. It is not only a comprehensive literature review on underwater SLAM, but also a systematic introduction to the theoretical framework of underwater SLAM. The aim of this paper is to assist researchers in gaining a better understanding of the system structure and development status of underwater SLAM, and to provide a feasible approach to tackle the underwater SLAM problem. Full article
