Review

A Review of Underwater Robot Localization in Confined Spaces

by Haoyu Wu 1, Yinglong Chen 1,*, Qiming Yang 2, Bo Yan 1 and Xinyu Yang 1

1 Naval Architecture and Ocean Engineering College, Dalian Maritime University, Dalian 116026, China
2 AVIC Guizhou Honglin Aviation Power Control Technology Co., Ltd., Guiyang 550009, China
* Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2024, 12(3), 428; https://doi.org/10.3390/jmse12030428
Submission received: 27 December 2023 / Revised: 15 February 2024 / Accepted: 26 February 2024 / Published: 28 February 2024
(This article belongs to the Section Ocean Engineering)

Abstract

During underwater exploration, underwater robots frequently operate in confined environments such as underwater caves, sunken ships, submerged houses, and pipeline structures. Positioning in these environments is strongly disturbed: some commonly used positioning methods fail outright, and positioning systems that work well in open water show increased errors. To overcome these limitations, researchers have investigated a range of underwater positioning methods and identified those suited to confined environments. Such methods can achieve high-precision positioning without relying on assistance from external platforms and are referred to as autonomous positioning methods. Autonomous positioning methods for underwater robots mainly include SINS/DR positioning and SLAM positioning; in recent years, researchers have also developed bio-inspired autonomous positioning methods. This article introduces the positioning methods and sensors applicable to confined underwater environments and discusses research directions for robot positioning in such environments.

1. Overview

With the advancement of technology, global exploration of underwater environments has become increasingly thorough, driven by the need to develop seabed resources, study underwater ecosystems, and conduct various other underwater activities. Because the underwater environment limits the physical presence of humans, the use of underwater robots for subsea operations is an immensely attractive option. Underwater unmanned robotic systems are primarily categorized into two main types: Remotely Operated Vehicles (ROVs) and Autonomous Underwater Vehicles (AUVs) [1]. ROVs are typically controlled in real time by operators on a surface vessel via a tether, whereas AUVs operate autonomously without the need for tethered control [2]. Underwater unmanned robotic systems have a wide range of applications in open spaces, such as inspecting the exterior surfaces of ships [3] and conducting seabed mapping [4]. They also play an important role in exploring confined spaces.
Confined underwater spaces [1] typically refer to underwater environments that meet one or more of the following conditions: a limited spatial range (usually on the order of meters); obstacles formed by regular or irregular structures at the edges or interior of the environment; coverings over the top of the environment; a restricted sensing range for sensors; significant water turbidity; inadequate ambient light; indistinct environmental features; and dynamic changes in underwater structures. Typical examples include underwater caves [5], tanks [6], docks, ship hulls, submerged houses, and nuclear facilities.
In confined spaces, access restrictions and environmental limitations, such as low illumination, turbidity, and a lack of salient features, prevent the use of traditional localization systems developed for large bodies of water [7]. Several classic examples demonstrate practical applications in confined underwater spaces. To overcome the potential dangers to divers [8] and the problem of entangled underwater robot cables [9], the EU-funded UNEXMIN project developed the UX-1 AUV for exploring underwater mines. To mitigate the effects of water turbidity and ambient light on positioning, the UX-1 used a fusion of an IMU and a DVL for localization. The research team tested the UX-1 underwater in June 2018, September 2018, and March 2019, respectively, at the Kaatiala pegmatite mine, the Idrija mercury mine, and the Urgeiriça uranium mine in Portugal [10]. Of these three underwater mines, the Kaatiala pegmatite mine offered the most favorable underwater environment, with clear water and a simple mine structure, whereas the Idrija mercury mine and the Urgeiriça uranium mine both had turbid water and complex mine structures. Despite these challenging conditions, the UX-1 produced successful motion trajectories in all three trials. In the absence of absolute external references, underwater localization typically experiences drift rates of 0.1% to 3% or more [11]; in open water this still provides sufficient safety margins. In confined spaces, SLAM (Simultaneous Localization and Mapping) methods can be employed to mitigate localization drift by providing absolute positioning references for AUVs. The underwater robotics laboratory at the University of Girona, Spain, developed an AUV named Sparus, which was later commercialized as an underwater experimentation platform. A study by the Woods Hole Oceanographic Institution in collaboration with the University of Girona used the Sparus AUV, equipped with DR (Dead Reckoning) sensors and SLAM sensors, to localize an underwater cave exploration robot [12]. The study dataset covered an underwater cave named “Coves de Cala Viuda”, located in the L’Escala area of the Costa Brava, Spain. The experimental results indicated that sonar-based SLAM positioning achieved an accuracy approximately 0.5 orders of magnitude better than DR positioning, demonstrating the effectiveness of SLAM in improving positioning accuracy in confined underwater spaces. The Tallinn University of Technology in Estonia used a fusion of DR and optical-guidance positioning to explore a submerged house at the Rummu Quarry Lake test site [13]. In this study, the U-CAT AUV [14] served as the testing platform, with DR as the primary localization method, and successful navigation of the AUV was achieved through optical guidance.
Autonomous localization of underwater robots in unknown confined spaces is particularly challenging due to the influence of these unfamiliar environments. In scenarios such as underwater exploration of dense artificial infrastructures [8,9], navigation in underwater mine tunnels [10,11], mapping of natural underwater caves [12], and robot localization and navigation in nuclear facilities [13], confined underwater space positioning technology has wide applications. Many commonly used underwater localization methods are not applicable in such scenarios. Examples include baseline localization (long baseline, short baseline, ultra-short baseline), acoustic network localization, and geophysical field localization (gravity field matching, terrain matching, magnetic field matching).
Baseline localization relies on acoustic communication between the baseline and the underwater robot to determine the robot’s spatial position. However, in unknown confined underwater environments, the lack of prior knowledge about the underwater space prevents the pre-deployment of long baseline arrays on the seabed. Additionally, confined underwater environments introduce significant interference with acoustic signals, compromising the effectiveness of baseline communication for localization. Acoustic network localization requires the pre-deployment of an underwater sensor network, with the relative or absolute position of each sensor needing to be determined for underwater robot localization. However, in unknown confined underwater environments, it is impractical to deploy a large number of sensors and determine their positions in advance. Geophysical field localization involves detecting and mapping the unknown confined underwater environment in advance and then collaborating with sensors on the underwater robot to achieve localization. However, when it comes to exploring confined underwater spaces, underwater robots often operate in such environments for the first time. Therefore, traditional localization methods relying on prior map data are usually ineffective.
As a result, the quest for effective underwater robot localization methods in unknown confined spaces remains a prominent research direction, with the aim of overcoming the challenges posed by these complex and dynamic underwater environments. Several relevant review papers on underwater localization already exist. These include comprehensive overviews of underwater localization [14]; papers focusing on underwater localization sensors [15]; a review focusing on the automation of operations in confined underwater spaces [16], with a core emphasis on individual positioning methods (SLAM [17], SINS/DR [18]); and papers discussing the development of localization and navigation for underwater robots [19]. However, none of these papers provide a concentrated and detailed introduction to localization methods suitable for underwater robots in confined spaces. Therefore, this paper aims to review and organize the localization methods for underwater robots in confined spaces.
Due to limited available localization methods in confined underwater spaces, the types of sensors available for underwater robot localization are also restricted. In general, these sensors can be categorized into internal state sensors and external state sensors. Internal state sensors include inertial navigation systems (INSs) and gyroscopes. INS sensors suffer from cumulative errors that degrade accuracy over time, necessitating their combination with other sensors. Gyroscopes, on the other hand, only provide orientation information and require integration with other sensors for complete localization. External state sensors comprise optical cameras, laser rangefinders, sonar systems, multi-beam Doppler velocity loggers (DVLs), rangefinders, and depth sensors. With the exception of depth sensors, all these external state sensors can individually contribute to the complete localization of underwater robots. Depth sensors, due to their limited information gathering, typically require integration with other sensors for comprehensive localization.
Underwater robot localization methods in confined environments can typically be categorized into three fundamental approaches: SINS (Strapdown Inertial Navigation System) localization, DR (Dead Reckoning) localization, and SLAM (Simultaneous Localization and Mapping) localization.
  • SINS Localization:
    • Core Sensor: Inertial Measurement Unit (IMU), consisting of gyroscopes and accelerometers.
    • Principle: SINS localization relies on the integration of IMU data, particularly the double integration of accelerometer data to obtain the displacement vector. The accumulated displacement vector is used for underwater robot localization.
  • DR Localization:
    • Components: Odometer (mileage counter) and compass.
    • Information: The odometer provides the displacement and the compass provides the heading angle. Combining these two pieces of information allows the underwater robot to achieve at least horizontal plane localization. It is important to note that the term “odometer” here refers to a general mileage counter, not specifically the Doppler Velocity Log (DVL). For example, visual odometry can provide velocity information for underwater robots.
    • Integration: Displacement vectors can be obtained from IMU, DVL, compass, and depth sensors in practical applications. The combination of these sensors often results in higher localization accuracy and robustness in unknown confined underwater environments.
  • SLAM Localization:
    • Method: SLAM localization involves mapping the environment, extracting features at different time points, and solving for the corresponding poses to obtain the displacement vector. Continuous accumulation of the displacement vector enables underwater robot localization.
    • Types: SLAM methods using different types of sensors can be categorized into Visual SLAM, Sonar SLAM, Laser SLAM, and Multi-Sensor Fusion SLAM.
These three foundational localization methods provide different strategies for underwater robots to navigate and position themselves in confined and challenging underwater environments. The choice of method depends on factors such as the characteristics of the environment, available sensors, and desired accuracy.
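To make the dead-reckoning principle above concrete, the following minimal Python sketch integrates an odometer-style forward speed and a compass heading into a horizontal track. The sampling interval and the constant-speed example values are illustrative assumptions, not parameters from any cited system.

```python
import numpy as np

def dead_reckon_2d(speeds, headings_rad, dt, start=(0.0, 0.0)):
    """Accumulate a horizontal track from odometer speed and compass heading."""
    x, y = start
    track = [(x, y)]
    for v, psi in zip(speeds, headings_rad):
        x += v * np.cos(psi) * dt  # displacement component along north
        y += v * np.sin(psi) * dt  # displacement component along east
        track.append((x, y))
    return np.array(track)

# Example: 60 s of travel at 0.5 m/s while the heading slowly sweeps
t = np.arange(0.0, 60.0, 0.1)
path = dead_reckon_2d(0.5 * np.ones_like(t), 0.01 * t, dt=0.1)
```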
This article introduces various localization methods for confined underwater environments and discusses related development directions. Section 2 presents SINS/DR localization in confined underwater spaces, Section 3 covers SLAM localization in confined underwater spaces, and Section 4 discusses other localization methods in confined underwater spaces.

2. SINS/DR Localization

SINS (Strapdown Inertial Navigation System) and DR (Dead Reckoning) localization methods are commonly employed for autonomous localization of robots in confined underwater environments. While the localization principles for these two methods are essentially the same in open and confined environments, in confined underwater spaces, SINS and DR localization are often integrated with other localization methods to achieve higher precision and robustness. The sensors typically used in SINS/DR localization include an IMU (Inertial Measurement Unit), a DVL (Doppler Velocity Log), a compass, and a depth sensor. These sensors encompass both internal state sensors and external state sensors, with the ultimate goal of obtaining the displacement vector for the underwater robot. The characteristics of SINS and DR are shown in Table 1.

2.1. Single-Method Localization with SINS/DR

The core of modern SINS is a positioning system in which signals collected by an IMU are processed by a computer. The IMU integrates the accelerometers and gyroscopes inherent to inertial navigation: it converts the measured acceleration and angular velocity into electrical signals, which are input to a computer. The computer processes the IMU output through operations such as integration and matrix calculations, ultimately deriving the relative position of the structure rigidly attached to the IMU. Because an INS computes velocity and position by integrating inertial sensor signals, its navigation error grows without bound over time. The method has the advantages of being fully autonomous and unaffected by the operating environment, but its cumulative positioning error greatly limits the applicability of the INS for long-endurance missions [20,21]. There are various types of gyroscopes and accelerometers that make up the IMU, as illustrated in Figure 1.
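As an illustration of the double-integration principle described above, the following simplified Python sketch performs one strapdown update step: the gyro rate propagates the body-to-navigation rotation, the specific force is rotated into the navigation frame, gravity is added back, and the result is integrated twice. A real SINS additionally models Earth rate, sensor biases, and error propagation, all of which are omitted here.

```python
import numpy as np

GRAVITY_NED = np.array([0.0, 0.0, 9.81])  # gravity vector in a local NED frame (z down)

def sins_step(pos, vel, R_nb, accel_body, gyro_body, dt):
    """One simplified strapdown integration step (no Earth rate, no error modelling).

    pos, vel   : position (m) and velocity (m/s) in the navigation (NED) frame
    R_nb       : rotation matrix from the body frame to the navigation frame
    accel_body : accelerometer specific force in the body frame (m/s^2)
    gyro_body  : gyroscope angular rate in the body frame (rad/s)
    """
    # Attitude update: first-order integration of the angular rate
    wx, wy, wz = gyro_body * dt
    omega_skew = np.array([[0.0, -wz,  wy],
                           [ wz, 0.0, -wx],
                           [-wy,  wx, 0.0]])
    R_nb = R_nb @ (np.eye(3) + omega_skew)

    # Specific force rotated to the navigation frame, then gravity added back (f = a - g)
    accel_nav = R_nb @ accel_body + GRAVITY_NED

    # Double integration: acceleration -> velocity -> position
    vel = vel + accel_nav * dt
    pos = pos + vel * dt
    return pos, vel, R_nb
```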
Because a DVL is the odometer most commonly used for DR localization in confined underwater environments, this section focuses on DVL-based DR localization. A Doppler Velocity Log (DVL) measures the velocity of the underwater robot and integrates it to calculate the robot's displacement vector. The core principle of the DVL is to exploit the difference between the transmitted and received frequencies of acoustic signals from a moving body, known as the Doppler frequency shift, to determine the velocity of the moving carrier relative to the seafloor. For DR localization to determine the spatial position of the underwater robot, the DVL must be a multi-beam DVL: a single-beam DVL can only output velocity along one dimension of three-dimensional space. To obtain an accurate spatial velocity vector, the DVL needs at least three transducers whose acoustic beams reach the seabed; in practice, a four-transducer configuration is often employed for enhanced reliability, providing one redundant beam.
The DVL has two operating modes. First, when the carrying vehicle is relatively close to the seabed and within the working depth of the DVL, the transducers' acoustic signals are referenced to the seabed (bottom tracking), as shown in Figure 2. Second, when the water depth exceeds the working depth of the DVL, the acoustic signals are referenced to a water layer at a certain depth (water tracking). DVL errors arise mainly as follows: when referenced to the seabed, each beam's acoustic signal strikes different seabed topography, producing errors in the DVL velocity vector; when referenced to a water layer, any flow velocity in that layer biases the measured velocity, causing the DVL solution to drift in position. This is a navigation method based on the Doppler effect, and it has the advantage that the navigation error does not accumulate over time [22].
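The sketch below illustrates, under assumed geometry, how the per-beam Doppler shifts of a four-transducer DVL can be converted to along-beam velocities and combined into a body-frame velocity vector by least squares. The 30-degree Janus layout and the numerical example are illustrative assumptions, not values from any particular instrument.

```python
import numpy as np

# Illustrative four-beam Janus geometry: beams tilted 30 degrees from the vertical,
# looking forward, aft, starboard, and port. Rows are unit beam directions in the
# body frame; the exact angles are an assumption for this sketch.
TILT = np.deg2rad(30.0)
BEAMS = np.array([[ np.sin(TILT),  0.0,          np.cos(TILT)],
                  [-np.sin(TILT),  0.0,          np.cos(TILT)],
                  [ 0.0,           np.sin(TILT), np.cos(TILT)],
                  [ 0.0,          -np.sin(TILT), np.cos(TILT)]])

def beam_velocity_from_doppler(delta_f, f0, c=1500.0):
    """Along-beam velocity from the two-way Doppler shift: v = c * delta_f / (2 * f0)."""
    return c * delta_f / (2.0 * f0)

def body_velocity(beam_velocities):
    """Least-squares solution of BEAMS @ v_body = beam_velocities.
    Four beams over-determine the three velocity components, so one beam is redundant."""
    v_body, *_ = np.linalg.lstsq(BEAMS, np.asarray(beam_velocities), rcond=None)
    return v_body

# Example: four along-beam velocities (m/s) combined into one body-frame velocity vector
print(body_velocity([0.26, -0.24, 0.01, -0.01]))
```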
A compass measures the yaw angle of the underwater robot relative to the world coordinate system, which is crucial for the absolute localization of the underwater robot. Compasses are divided into magnetic compasses and gyrocompasses. A magnetic compass relies on the Earth's magnetic field to keep its pointer consistently aligned in the same direction; however, its accuracy is easily affected by metal structures on the carrying platform. To overcome this influence, researchers developed the gyrocompass, which uses an electrically driven, continuously rotating gyro whose horizontally mounted axis settles into a stable alignment. This provides a stable orientation reference for the carrying platform and overcomes the impact of external structures on compass accuracy.
A depth sensor measures water depth by sensing water pressure, using the known relationship between pressure and depth. Depth sensors are used in underwater localization in two ways: direct involvement in localization and assistance to localization. Direct involvement means that the depth data measured by the sensor are fused directly with data from other sensors, such as IMUs; through geometric relationships [23], this enables positioning of the carrying platform. Because depth sensors are highly precise and accumulate little error over time, they can also serve as a constraint in the localization algorithm to reduce positioning errors, despite providing only one-dimensional data [24]. In that case, the depth sensor functions as an assistive localization tool.
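As a simple illustration of the pressure-depth relationship used by such sensors, the following snippet converts gauge pressure to depth via the hydrostatic relation p = ρgh; the seawater density is a nominal assumed value.

```python
def depth_from_pressure(p_gauge_pa, rho=1025.0, g=9.81):
    """Depth (m) from gauge pressure (Pa) via the hydrostatic relation p = rho * g * h.
    rho = 1025 kg/m^3 is a nominal seawater density; use the local value if known."""
    return p_gauge_pa / (rho * g)

# Example: 2.0e5 Pa of gauge pressure corresponds to roughly 19.9 m of depth
print(depth_from_pressure(2.0e5))
```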

2.2. Localization with the SINS/DR Combination Method

DVL-based DR localization is susceptible to the quality of its underwater reference, and the localization error of SINS accumulates and diverges over time; moreover, the compass and the depth sensor each provide only one-dimensional data. Relying solely on any one of these methods therefore results in significant positioning errors. SINS/DVL-integrated navigation is one of the common navigation methods for AUVs [25,26].
Integration of SINS and DVL significantly reduces positioning errors, with the emphasis on fusion filtering algorithms. Before selecting a fusion filtering algorithm, it is essential to determine the coupling method. The most commonly used coupling methods in the fusion of SINS and DVL are non-coupling, loosely coupled, and tightly coupled methods.
Non-coupling refers to periodically replacing the SINS positioning data with the DVL positioning data at the same instant, at certain intervals, to limit the accumulation of SINS positioning errors; strictly speaking, it is not a true coupling method. Taking the loosely coupled method with Kalman filtering as an example, the SINS performs its positioning calculation independently. After the robot's state vector at a given moment is obtained, the difference between the SINS and DVL solutions is computed to form an error vector. This error vector serves as the observation vector of the Kalman filter, which outputs an optimal state estimate (as shown in Figure 3).
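The following one-dimensional Python sketch illustrates the loosely coupled scheme just described: the error state is propagated, the SINS-minus-DVL velocity difference serves as the Kalman filter observation, and the estimated error can then be fed back to correct the SINS output. The noise covariances and time step are illustrative assumptions.

```python
import numpy as np

# Minimal 1-D loosely coupled SINS/DVL error-state Kalman filter.
# State x = [position error, velocity error]; observation z = v_SINS - v_DVL.
dt = 0.1
F = np.array([[1.0, dt],
              [0.0, 1.0]])          # error propagation model
H = np.array([[0.0, 1.0]])          # only the velocity error is observed
Q = np.diag([1e-6, 1e-4])           # process noise (illustrative values)
R = np.array([[1e-2]])              # DVL measurement noise (illustrative value)

x = np.zeros(2)                     # error-state estimate
P = np.eye(2) * 1e-3                # error-state covariance

def kf_step(x, P, z):
    # Predict: propagate the error state and its covariance
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: fuse the SINS-minus-DVL velocity difference
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Usage: corrected SINS output = SINS solution minus estimated error
# x, P = kf_step(x, P, np.array([v_sins - v_dvl]))
```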
In contrast to the loosely coupled method, the tightly coupled method uses the DVL's Doppler measurement error vector as the observation vector of the Kalman filter, and the state vector additionally includes the DVL's constant Doppler offset and scale factor errors (as shown in Figure 4).
The algorithms for fusing measurements from multiple sensors include the weighted average method [27], Kalman filtering [28], multiple Bayesian estimation [29], Dempster-Shafer evidence theory [30], fuzzy logic reasoning [31], artificial neural networks [32], and particle swarm methods [33]. The traditional Kalman filtering algorithm is widely used in SINS/DVL positioning; it assumes that the measurement errors of the SINS and DVL follow Gaussian distributions and that the systems are linear in the calculation process [34]. Many scholars have improved the Kalman filtering algorithm to further reduce localization errors, producing variants such as the Unscented Kalman Filter and the Extended Kalman Filter. Xianfei Pan and Yuanxin Wu [35] used a practical EKF to integrate IMU and DVL information; their simulation results agreed closely with their analytic conclusions, and the EKF estimated DVL parameters more accurately than the IO-DVLC observer. Karmozdi et al. [36] presented a modification of the Dual Unscented Kalman Filter (DUKF) for online concurrent state and parameter estimation, and their experimental results indicated improved performance with the proposed method. Fasheng Wang and Yuejin Lin [37] proposed an improved unscented particle filter (UPF) that retains the merits of the standard particle filter and requires much less computation time than the standard UPF.
Due to interference from real-world environments, SINS and DVL are affected by non-Gaussian noise, so the measurement noise does not adhere to the Gaussian assumption above [38]. In such cases, the performance of the Kalman Filter (KF) may degrade, and it is advisable to use H∞ techniques to improve the robustness of the filter [39]. It is worth noting that the different Kalman filtering methods have their own advantages and disadvantages. The KF consumes few computational resources but is limited to linear systems [40]. The Extended Kalman Filter (EKF) is applicable to non-linear systems but consumes more computational resources [41]. The Unscented Kalman Filter (UKF) was developed from the EKF because the EKF does not converge easily for complex dynamic systems; the UKF was proposed to make the algorithm converge more readily [40]. Table 2 summarizes some characteristics of these Kalman filtering algorithms.
In confined underwater spaces, robots are often subject to environmental constraints that demand precise operation, making it crucial to minimize delays in sensor control [42,43] and output signals [44]. Because acoustic sensors, like the DVL, collect information through sound waves, they often suffer from delays [45]. The delay of the DVL can be mitigated to some extent through algorithmic approaches [46]. Sonar sensors used for sonar SLAM encounter the same problem; its most visible effect is the distortion of images from mechanical scanning sonars, which researchers have addressed by optimizing algorithms [47].

3. SLAM (Simultaneous Localization and Mapping) Localization

The SLAM (Simultaneous Localization and Mapping) localization method has seen increasing accuracy with the improvement of computer performance. SLAM localization estimates the relative positions of structures in space by acquiring environmental features through sensors. These features can include points [48], lines [49], colors [50], or depths [51]. The workflow of SLAM (as shown in Figure 5) is as follows:
  • Collection of environmental information: Gather environmental information, including features, lines, and depth.
  • Front-End Odometry: Use the collected environmental information to infer the current relative pose and position of the camera.
  • Back-End Optimization: Employ various algorithms to reduce errors in the pose estimates from the previous step. Commonly used algorithms include filtering (often Kalman filtering) and non-linear optimization (usually graph optimization).
  • Loop Closure Detection: Concurrently, loop closure detection is performed on the collected environmental information. Because navigation errors accumulate in visual SLAM, loop closure detection corrects them when the camera returns to a place with the same features, optimizing the tracked path for that period and significantly reducing drift. If this step goes wrong, for example by matching different places as the same one or failing to recognize a revisited place, serious errors can propagate into subsequent work.
  • Mapping: Generate a visualization trajectory image of the final result on the computer and create a three-dimensional model based on the collected environmental information.
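The workflow above can be summarized by the following schematic Python skeleton. Every stage is a deliberately trivial stand-in so that the loop runs end to end; the function names and their internals are illustrative placeholders, not an existing library API.

```python
import numpy as np

def extract_features(frame):
    """1. Collect environmental information (here: the raw 2-D points themselves)."""
    return np.asarray(frame)

def front_end_odometry(features, prev_features):
    """2. Infer relative motion between frames (here: displacement of the point centroid)."""
    if prev_features is None:
        return np.zeros(2)
    return features.mean(axis=0) - prev_features.mean(axis=0)

def back_end_optimize(poses):
    """3. Reduce accumulated error (a no-op stand-in for filtering or graph optimization)."""
    return poses

def detect_loop_closure(features, keyframes, threshold=0.1):
    """4. Report a loop when the current frame resembles a stored keyframe (by centroid)."""
    for idx, kf in enumerate(keyframes):
        if np.linalg.norm(features.mean(axis=0) - kf.mean(axis=0)) < threshold:
            return idx
    return None

def run_slam(frames):
    pose, poses, keyframes = np.zeros(2), [], []
    for frame in frames:
        features = extract_features(frame)
        prev = keyframes[-1] if keyframes else None
        pose = pose + front_end_odometry(features, prev)
        poses = back_end_optimize(poses + [pose])
        # Skip the most recent keyframe to avoid trivially matching the last frame
        if detect_loop_closure(features, keyframes[:-1]) is not None:
            pass  # a real system would re-optimize the whole trajectory here
        keyframes.append(features)
    return poses  # 5. mapping: the trajectory plus keyframe features form the map

# Example usage with three random point clouds
trajectory = run_slam([np.random.rand(5, 2) for _ in range(3)])
```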
The challenge for SLAM in confined underwater spaces lies in acquiring environmental information. The localization results of underwater visual SLAM are typically influenced by factors such as illumination, turbidity [52,53], and the clarity of features. In practical working environments, situations where water quality is turbid and features are not distinct are common. Special algorithms are used to mitigate the impact of the environment on localization accuracy in such cases. There are two main methods to improve the quality of underwater visual images: image restoration algorithms and image enhancement algorithms [17]. Image restoration algorithms consider the imaging principles and restore images by inversely deducing these principles, while image enhancement algorithms, disregarding imaging principles, enhance images by redistributing pixel intensities to improve contrast and color [54]. Image enhancement algorithms are usually faster and simpler than image restoration algorithms [55].
Although laser light, like visible light, undergoes scattering and absorption when propagating through water, lasers can select monochromatic light with lower backward scattering and absorption rates. Therefore, lasers can often identify features at greater distances and, to some extent, overcome the various problems associated with optical imaging in visual SLAM [56]. Sonar SLAM is less affected by the aquatic environment and is robust to water turbidity, making it a common solution for underwater AUV sensing [57]. However, sonar is affected by factors such as water salinity, temperature, pressure [58], and multi-path effects [59].
Sonar SLAM is generally more widely applicable than laser and visual SLAM, but its imaging accuracy is constrained by cost and technology, making it the least accurate of the three [60]. To enhance the robustness and accuracy of SLAM, many studies have explored fusing the various SLAM methods to achieve multi-SLAM fusion localization; this approach is discussed in Section 3.4. Table 3 summarizes the characteristics of different underwater SLAM methods.

3.1. Visual SLAM Localization

This section focuses on localization methods based on visual information. Visual SLAM localization methods include feature-based methods, direct methods, and deep learning-based methods. Feature-based SLAM uses feature extraction algorithms, such as SIFT [62], SURF [63], and ORB [64], to extract feature points and match them between adjacent frames. By leveraging geometric relationships, it obtains the rotation and translation matrices of the camera and thus determines the camera's pose. The goal of feature-based methods is to minimize the reprojection error [65], typically computed from differences between pixel coordinates. Direct methods minimize the photometric error [66], computed by subtracting pixel grayscale values; they usually rely on the assumption of photometric invariance to match two consecutive images. Deep learning methods extract high-level features from images without the need for hand-crafted feature extractors. One widely used deep learning-based approach is built on Poly-YOLO object detection; for example, Chen et al. used the Poly-YOLO model to generate motion trajectories of vessels navigating on the water surface.
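As a minimal sketch of the feature-based front end described above, the following Python example uses OpenCV's ORB detector, brute-force Hamming matching, and essential-matrix decomposition to recover the relative rotation and the translation direction (up to scale) between two frames. The camera intrinsic matrix shown is a placeholder and must be replaced by calibrated values.

```python
import cv2
import numpy as np

def relative_pose(img1, img2, K):
    """Estimate relative camera rotation R and unit-scale translation t between
    two grayscale frames using ORB features (feature-based visual odometry step)."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Match binary descriptors between adjacent frames
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Epipolar geometry: essential matrix, then recover R and t (t only up to scale)
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t

# Placeholder intrinsics (fx, fy, cx, cy); replace with a calibrated camera matrix
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
```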
Since 2000, underwater visual SLAM has gradually matured and evolved towards large-scale visual SLAM [67,68]. Subsequently, applying visual SLAM to scan underwater structures became a hot research topic, especially in studies focusing on the scan detection of ship hull surfaces [69,70]. After 2010, researchers began optimizing algorithms related to underwater visual SLAM, with achievements ranging from filtering algorithms [71] to feature extraction algorithms [72]. The application of direct visual SLAM came later, emerging around 2014 [73]. In 2015, A. Concha et al. [74] used direct visual SLAM to achieve dense modeling of underwater scenes and spatial pose localization of underwater structures. Due to the difficulty in guaranteeing underwater water quality and the high image quality requirements of direct visual SLAM, image optimization methods suitable for direct methods are also a key focus of research [75].
With deep learning becoming a popular research direction in recent years, researchers have gradually applied deep learning to visual SLAM. Deep learning methods were initially applied in land-based visual SLAM, where researchers applied deep learning methods to target recognition [76,77] and loop closure detection [78,79,80,81,82], achieving high-precision image recognition. From 2018 onwards, researchers began applying deep learning visual SLAM to the localization and navigation of underwater structures. T. Manderson et al. [83] applied deep learning visual SLAM to underwater collision avoidance for AUVs, and M. Leonardi et al. [84] used deep learning in underwater image enhancement, removing unimportant feature points from the image to leave high-quality points, thus accelerating the operation speed of underwater visual SLAM. A significant amount of work using deep learning visual SLAM has focused on loop closure detection [85,86,87,88]. This is because geometry-based visual SLAM performs poorly in the loop closure detection stage, while deep learning-based image detection technology is mature and performs excellently, greatly improving the success rate of loop closure detection. B. Teixeira et al. [89] applied deep learning visual SLAM to underwater AUVs. The visual SLAM, which fused inertial systems, demonstrated higher accuracy compared to two pure deep learning visual SLAM methods, named GeoNet [90] and SfMlearner [77], in experimental results.
The research on underwater visual SLAM mentioned above indicates that the primary focus lies in improving back-end optimization algorithms to enhance positioning accuracy, optimizing image processing methods to improve feature extraction accuracy, increasing loop closure detection accuracy to reduce positioning drift, and exploring applications of underwater visual SLAM. Research directions in underwater visual SLAM in confined spaces are generally similar. In comparison to sonar SLAM, visual SLAM in enclosed underwater spaces has lower costs but faces challenges, such as dim lighting and the impact of murky water on imaging quality.
Due to the limited space on underwater robotic platforms, it is necessary to minimize the resource consumption of underwater visual SLAM to enhance its practicality [91]. Aiming to reduce the cost of robot localization in confined underwater environments, C. Cain and A. Leonessa [92] developed a localization platform that combines a downward-facing camera and a visual rangefinder for visual SLAM; its localization performance was tested using Kalman filtering and particle filtering, achieving high accuracy. Once visual SLAM reached a certain level of localization accuracy, researchers began using it for high-resolution environmental mapping. N. Weidner et al. [93] applied ORBSLAM to the 3D reconstruction of underwater caves and diver localization, achieving good results with a front-facing camera on practical datasets. To obtain comprehensive environmental information in confined underwater environments, many research efforts use multiple cameras oriented at different angles.
E. Nocerino et al. [94] installed 12 motion cameras on an ROV to capture 360-degree environmental information; they conducted visual SLAM experiments in an underground facility and achieved accurate results. B. Joshi et al. [95] studied how camera placement in different environments affects visual SLAM localization results underwater, testing localization with a front-facing camera in underwater caves. E. Ochoa et al. [96] used four cameras installed in different directions on an underwater robot to collect environmental information; employing SLAM to build a 3D environmental model and achieve self-localization, they implemented obstacle avoidance for the robot in underwater shipwreck exploration. In the same year, B. Joshi et al. [97] used visual SLAM fused with an IMU for localization and obtained excellent results on cave and shipwreck datasets. In underwater environments, visual SLAM must not only acquire environmental information but also handle the collected point cloud data. F. Hidalgo [98] used an autonomous underwater robot localization method based on ORBSLAM2 and point cloud processing for localization in enclosed underwater rooms, validating the feasibility of this approach. D. Wu et al. [99] proposed a visual SLAM system for road tunnels and corridor environments. The system combines ORB point features and LSD line features: using the angular relationship of line projections, the authors propose a new way to calculate the reprojection error of line features and construct a reprojection error model based on point-line features, which adds an angle constraint to the line-feature reprojection and resolves the instability caused by line projection error. This type of SLAM significantly improves localization accuracy in visually challenging environments with blurred light and shadow, providing a feasible solution for robot localization in confined underwater spaces.

3.2. Sonar SLAM Localization

Due to the variable nature of underwater environments, situations with blurred underwater visibility are common, leading to a sharp increase in error for visual SLAM. Unlike visual SLAM, sonar SLAM is generally unaffected by low underwater lighting conditions and visibility issues. Additionally, the detection range of sonar sensors is much larger than that of visual SLAM. Sonar SLAM employs sonar sensors, which can be categorized based on the number of transmitted detection beams into single-beam sonar and multi-beam sonar. The imaging characteristics of different types of sonar are summarized in Table 4.
Since the 1990s, a significant amount of research has been conducted on using sonar as a sensor for localization and mapping [105,106,107]. Due to hardware performance limitations and other factors, strict sonar SLAM was not achieved in that era, and researchers at the time described their work as "localization and mapping". True sonar SLAM emerged in 2003, when Y. L. J. Ip [108] proposed a segment-based map-building approach using the Enhanced Adaptive Fuzzy Clustering algorithm (EAFC). This method aimed to extract line segments from noisy, ambiguous, and spurious sonar measurements. Additionally, a fuzzy-tuned extended Kalman filter (FT-EKF) was introduced to address model-based localization without prior knowledge of the state noise model. Owing to the superior performance of sonar in underwater detection, underwater sonar SLAM was quickly proposed and applied in real-world scenarios for the autonomous exploration of unknown environments [109,110]. During the robot's motion, sonar scans of the environment are not entirely continuous, which distorts the images collected at different time intervals (because the sound waves take time to propagate while the robot keeps moving) and ultimately reduces the accuracy of sonar SLAM.
To address the issue of sonar image distortion, D. Ribas et al. [9] introduced an algorithm for handling continuous data generated by mechanical scanning sonar. The algorithm successfully extracted features from sonar images, corrected the distortion of these features, and utilized extended Kalman filtering in related experiments to obtain the AUV’s motion path. The algorithm developed by D. Ribas and colleagues has been widely used in subsequent sonar SLAM research. In later research, J. Li et al. [47] proposed a new pose-graph SLAM algorithm using forward-looking sonar (FLS) as the sole sensor for correcting localization drift. This algorithm achieved autonomous exploration and scanning of the ship’s outer surface. Experimental results demonstrated the algorithm’s ability to robustly detect loop closures and estimate relative pose constraints from FLS, effectively minimizing drift in vehicle localization.
Inspired by underwater visual SLAM, J. Wang et al. [111] applied the optical flow method to underwater sonar SLAM. Unlike traditional sonar SLAM methods that use features similar to point-based methods, the optical flow method in sonar SLAM is more robust to noise and the absence of elevation angle. Y. Wang et al. [112] introduced a deep learning-based algorithm for multi-point sonar image 3D reconstruction. This algorithm uses multiple-point sonar images as samples for 3D reconstruction to simulate pseudo-depth images. Experimental results indicated that the proposed algorithm achieved higher 3D reconstruction accuracy compared to the A2FNet [113] and ElevateNet [114] methods.
In real confined environments, sonar SLAM localization methods are widely used. N. Fairfield et al. [115] conducted an exploration experiment using sonar SLAM in an underwater cave; the experiment used octree mapping for three-dimensional reconstruction of the environment and achieved high-precision positioning of the cave explorer using particle filtering. S. Soylu et al. [116] tested sonar SLAM with an Extended Kalman Filter (EKF) in a water tank and obtained favorable positioning results. C. White et al. [117] explored a complex ancient reservoir using six sonar mapping and localization methods; they found that FastSLAM performed well and proposed corresponding improvement strategies. A. Mallios et al. [118] demonstrated sonar SLAM using cross-registration; experiments in a fully enclosed underwater cave showed that this sonar SLAM algorithm had errors an order of magnitude smaller than DVL-based dead reckoning (DR). The combination of different sonar types can also affect the accuracy of sonar SLAM: Y. Breux and L. Lapierre [119] used two mechanical scanning sonars in a simulated exploration of underwater lava tubes, where a high-precision narrow-beam sonar compensated for the limitations of a low-precision wide-beam sonar in three-dimensional measurements, and the experiments showed higher positioning precision.

3.3. Laser SLAM Localization

Compared to laser SLAM, visual SLAM has some apparent drawbacks: (1) Visual SLAM is greatly affected by lighting conditions, making it challenging to operate under extreme lighting conditions, such as strong or weak light [120]. (2) Visual SLAM is influenced by the texture of the surrounding environment, and if the texture changes are not significant, it can lead to significant errors in visual SLAM [121]. The application of terrestrial laser SLAM has become relatively mature, undergoing the evolution from indoor to outdoor and from two-dimensional to three-dimensional [122,123,124,125,126,127].
Underwater laser SLAM faces challenges because water scatters laser light, which degrades the imaging accuracy of laser SLAM; the coherence of the laser is also affected when the laser echoes off target objects. Although water noticeably attenuates laser radar through scattering and refraction, under comparable conditions laser SLAM still has a longer maximum operating range than visual SLAM. Furthermore, compared with sonar SLAM, laser SLAM has higher imaging accuracy, which often leads to higher positioning precision.
The classification of LiDAR SLAM can be based on different working principles of the camera, as shown in Figure 6. Cameras with a laser generator as the core include laser depth cameras and LiDAR cameras. The working principle of a laser depth camera involves emitting invisible light from a laser generator to the target. The differences in the reflected invisible light from target points in different spatial coordinates can be used to calculate the three-dimensional shape of the target. Due to the various differences in the reflection of invisible light from different parts of the target, laser depth cameras can be further classified into structured light cameras and Time-of-Flight (TOF) depth cameras based on the differences in detecting reflected light. Structured light cameras describe the spatial structure of the target by detecting the phase differences of reflected light or by using triangulation. TOF depth cameras describe the spatial structure of the target by detecting the propagation time of reflected light (TOF method) and are essentially a 3D application of solid-state LiDAR. Similarly, LiDAR can be classified based on different scanning methods, including mechanical scanning LiDAR, hybrid solid-state LiDAR, and solid-state LiDAR. The main difference between structured light cameras and LiDAR lies in the requirements for the components of the structured light used, with structured light cameras often using infrared light with predetermined components.
Underwater laser SLAM utilizes a line laser scanner, similar in principle to mechanical scanning sonar, to collect environmental feature information using a rotating single line laser. G. Inglis et al. [128] explored underwater basins using line structured light laser SLAM, achieving the localization of underwater robots using the SIFT feature extraction method and EKF. M. Massot-Campos et al. [56] reduced the uncertainty in the localization of line structured light laser SLAM using the submap method and proposed a solution to the registration problem of laser point clouds. K. Himri et al. [129] applied laser 3D reconstruction to underwater laser SLAM, using a laser scanner for underwater target identification. After identifying the target, the AUV pose is estimated. M. Massot-Campos et al. [130] modified the commonly used sonar-based BPSLAM [121] to use a laser scanner to obtain depth maps. This laser SLAM no longer relies on feature matching for localization, leading to improved accuracy in positioning in low-feature underwater environments. Using this laser SLAM also ensures higher positioning accuracy in complex underwater environments where DVL positioning drift occurs. H. B. Yang et al. [131] used a line laser camera and IMU fusion for underwater robot localization, where the line laser camera provided depth information for the images. This step skips the process of calculating environmental depth using a visual camera in localization, directly obtaining depth information from the images and speeding up the localization solution.

3.4. Multi-Sensor Fusion SLAM Localization

DVLs and IMUs are dead-reckoning sensors whose localization errors accumulate gradually over time because their measurements are integrated over each time step. In contrast, SLAM-based localization relies on environmental information, so its errors stem from global drift and depend on the effectiveness of loop closure detection. Fusing DVLs and IMUs with SLAM can therefore provide complementary advantages, especially in unknown and confined environments where SLAM alone may struggle with loop closure detection. In such cases, a combination of DVL/IMU localization and SLAM fusion localization is necessary. In multi-sensor fusion SLAM localization, the robot's position estimate results from fusing data from multiple sensors, while mapping is performed independently using data from vision, sonar, and laser sensors.
The fusion algorithms used in multi-sensor fusion SLAM localization are similar to those mentioned in Section 2.2 for SINS/DR combined-sensor fusion. It is worth noting that the process by which SLAM methods (visual, sonar, or laser SLAM) infer the robot's spatial position from point cloud data is non-linear, whereas the process by which the DVL and IMU infer the robot's spatial position is linear. Appropriate algorithms therefore need to be selected for both positioning methods, and if the algorithms are not compatible, multi-threading may be needed to process the data (as shown in Figure 7).
In recent years, there have been many research achievements in the use of multi-sensor fusion SLAM localization in constrained underwater spaces. These efforts aim to install as many sensors as possible on a robot to achieve higher positioning accuracy. Multi-sensor fusion SLAM can achieve high-precision localization through loosely coupled fusion methods. S. Rahman et al. [132] fused visual, sonar, and IMU data, estimating the robot’s state by minimizing reprojection errors, IMU errors, and sonar distance errors. They validated the accuracy of this multi-data fusion SLAM using datasets related to underwater caves.
In subsequent research, S. Rahman [133] installed sonar, cameras, and an IMU on an underwater robot and integrated sensor data using a SLAM framework. The authors validated and compared the positioning accuracy of the robot in a cave-related dataset when using all sensors versus using only visual sensors with IMU. They also compared the accuracy of visual SLAM positioning methods using different approaches.
To speed up the computation speed of multi-sensor SLAM, researchers have conducted some work in this area. C. Cheng et al. [134] fused sonar, IMUs, and DVLs into the SLAM framework and used MFLS to accelerate the processing of sonar information. They verified the performance of the positioning and mapping system in a simulated maze map environment.
Compared to loosely coupled methods, non-coupling multi-sensor fusion SLAM algorithms are simpler to implement. A. Martins et al. [135] integrated a multi-beam sonar system, a rotating line laser system, an IMU, and a DVL on the UX-1 underwater robot. The multi-beam sonar system and rotating line laser system provided the robot with environmental point cloud information. After integrating the IMU and DVL, inertial navigation information was generated to correct localization errors in SLAM. They validated the positioning and mapping accuracy of this system in a cave simulation environment.

4. Other Localization Methods in Confined Spaces

The mainstream methods for localization in confined spaces have been described. This section introduces some less common methods that, in specific confined environments, can achieve high-precision localization and possess unique advantages.
Distance sensors (sonar rangefinders, laser rangefinders) are typically used for obstacle avoidance, but multiple non-rotating distance sensors can also be used to infer the robot's horizontal position; combined with a depth sensor for the vertical position, the robot's spatial location can be determined. Some researchers use laser rangefinders to measure the robot's distances in the horizontal plane and thereby determine its position inside a cage. For instance, M. Bjerkeng et al. [136] combined laser triangulation sensors with a heading compass to determine the position of a Remotely Operated Vehicle (ROV) inside an underwater cage, finding that errors mainly originated from outliers in the laser triangulation measurements. In completely enclosed environments such as caves, distance sensors also play a crucial role in robot localization. V. Preston et al. [137] used two vertically opposed sonar rangefinders to determine an underwater robot's vertical displacement in cave environments, extending two-dimensional robot localization to three dimensions. J. D. Hernandez et al. [138] installed four horizontally oriented sonar rangefinders on the Sparus II underwater robot to determine its position in the horizontal plane, while simultaneously using a depth sensor to ascertain its vertical position.
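As a hypothetical illustration of this range-plus-depth idea, the following sketch estimates the position of a wall-aligned robot inside a rectangular tank of known size from four horizontal ranges and a depth reading. The wall-alignment assumption and the tank dimensions are illustrative simplifications, not the configuration of any cited study.

```python
def position_in_rectangular_tank(r_fwd, r_aft, r_stbd, r_port, depth, length, width):
    """Hypothetical sketch: horizontal position inside a rectangular tank of known
    size from four horizontally oriented rangefinders plus a depth sensor, assuming
    the robot's axes are aligned with the tank walls. Each coordinate is estimated
    twice (from opposite walls) and the two estimates are averaged."""
    x = 0.5 * (r_aft + (length - r_fwd))   # along-tank coordinate from the aft wall
    y = 0.5 * (r_port + (width - r_stbd))  # cross-tank coordinate from the port wall
    return x, y, depth

# Example: a 10 m x 6 m tank; ranges 7 m forward, 3 m aft, 4 m starboard, 2 m port, 1.5 m deep
print(position_in_rectangular_tank(7.0, 3.0, 4.0, 2.0, 1.5, length=10.0, width=6.0))
```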
In addition to methods for precise localization using distance sensors for underwater robots, there are also some methods for fuzzy localization. These methods do not provide precise position information but offer signals similar to obstacle avoidance. D. J. F. Toal et al. [139] used optical fibers to achieve fuzzy localization for robots. The light signals propagating within the optical fibers change as they approach obstacles and, through this change, the robot can detect the approximate outline of obstacles. Some researchers also use magnetic field generators for obstacle detection, exploiting the principle that the magnetic field changes when passing through obstacles [140].
In confined underwater spaces, robots sometimes face situations with fewer obstructions. Researchers have developed autonomous localization methods that differ from traditional baseline positioning. Because polarized light resists interference underwater better than ordinary light sources, some researchers install a polarized-light matrix on the underwater platform where the robot docks, and the underwater robot estimates its own pose parameters by detecting the polarized light sources. H. Y. Cheng et al. [141] tested a robot pose estimation method for autonomous underwater robots based on a polarized-light matrix mounted on the docking station (DS); their experiments achieved a positioning error within 0.116 m over a distance of 100 m.
Bio-inspired underwater robot localization has also been a research hotspot in recent years. A significant amount of research has focused on the localization principles of fish lateral lines. Algorithms that estimate the pose of a dipole relative to a sensor array have been widely applied to estimate motion with artificial lateral lines on bionic underwater robots [142,143]. X. D. Zheng et al. [144] combined the localization of magnetic dipoles relative to the sensor array with the GRNN method, achieving precise relative positioning of magnetic dipoles using an artificial lateral line system composed of a cross-shaped sensor array. The experimental results showed good positioning accuracy within 13 cm (a distance of two body lengths). Clearly, such bionic underwater localization systems require a relatively small space to achieve high positioning accuracy. For artificial lateral line systems, there is also a navigation assistance method that relies entirely on the autonomous localization of the bionic underwater robot; a significant research direction is localization based on the measurement of water flow velocity by the artificial lateral line system.
T. Salumäe and M. Kruusmaa [145] utilized an artificial lateral line system composed of onboard pressure sensors to measure the pose information of a biomimetic fish-shaped robot in a fluid. J. F. Fuentes-Pérez et al. [146] also employed a pressure sensor array to form an artificial lateral line system for a biomimetic fish-shaped robot. They proposed a novel sampling algorithm, resulting in improved localization performance. The research on biomimetic lateral lines for fish provides a new perspective on the localization of underwater robots in confined environments. This approach, which obtains robot pose information by correlating water flow pressure and velocity, is similar to traditional inertial navigation positioning and is not affected by the complexities of underwater spaces.
Bionic robots that scan the surrounding environment within a certain range also offer unique methods for horizontal-plane localization of underwater robots. J. G. Peng et al. [147] extended the use of electrolocation for self-positioning to a technique that scans the environment with a weak electric field; by extracting frequency-domain information about the environment from the weak electric field, they achieved horizontal-plane positioning of underwater robots. Although environment scanning with a weak electric field is not yet mature, it provides another solution for autonomous underwater localization.
Researchers studying the navigation methods of some aquatic organisms found that aquatic organisms use the polarization information generated by Snell’s window under different angles of illumination to achieve self-positioning navigation [148,149,150]. Researchers applied this navigation and positioning method to the autonomous localization of underwater robots, using the angle of sunlight as a source of information for underwater polarization navigation. H. Y. Cheng et al. [151] achieved a positioning accuracy comparable to GPS navigation by combining this underwater polarization navigation with inertial navigation on an underwater robot.

5. Conclusions

This article introduces the localization methods for robots in confined underwater spaces. Acoustic, visual, laser, and inertial navigation (SINS) methods are widely used in the localization of robots in confined underwater spaces. In some specific confined environments with minimal obstructions, positioning can be achieved with the assistance of auxiliary devices. In addition to traditional localization methods, bio-inspired localization methods have been rapidly developing in recent years. Research directions for localization methods in confined underwater spaces mainly focus on multi-sensor fusion and the development of novel sensors. The research direction of multi-sensor fusion localization in confined underwater spaces can be further divided into studies on the advantages of complementary multi-sensor fusion, research on multi-sensor fusion filtering algorithms, and research on multi-sensor underwater applications.
In previous research efforts, many researchers delved into the advantages of complementary multi-sensor fusion, exploring various possible combinations of localization methods. The current research direction has gradually evolved into the use of a greater variety and quantity of sensors to improve localization accuracy. Research on multi-sensor fusion filtering algorithms involves developing algorithms that filter and fuse signals collected from multiple sensors, such as the Kalman fusion filtering algorithm for linear systems and the particle filtering algorithm for non-linear systems. Earlier studies on fusion filtering algorithms were mostly based on optimizing these two algorithms. In recent years, with the development of artificial intelligence, many fusion filtering algorithms based on AI algorithms have emerged, and multi-sensor fusion filtering algorithms have also gradually moved towards the direction of adaptive algorithms.
Research on applying multiple sensors in confined underwater spaces initially focused on detecting static environments and using them as localization references. Because underwater environments are rarely static, however, algorithms restricted to static scenes proved insufficient. To broaden the applicability of multi-sensor fusion localization, researchers incorporated the detection of dynamic targets into the localization references, which significantly improved both obstacle avoidance and localization accuracy in confined underwater spaces.
Localization information in confined underwater spaces ultimately comes from individual sensors. Because further improving the accuracy of existing sensors is difficult, there is a pressing need for new high-precision sensors suited to localization in such spaces. Localization sensors are gradually shifting from traditional acoustic, optical, electrical, and inertial devices toward bio-inspired sensors; however, bio-inspired sensors cannot yet handle high-precision localization tasks on their own. Developing new high-precision sensors therefore remains an area worthy of in-depth research.

Author Contributions

Conceptualization, Y.C.; validation, Y.C.; writing—original draft preparation, H.W.; writing—review and editing, H.W., Q.Y., B.Y. and X.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the National Natural Science Foundation of China (NSFC) under Grant 52275053, in part by the Fundamental Research Funds for the Central Universities under Grant 3132023513, in part by the National Key Research and Development Program under Grant 2021YFC2802403, and in part by the Ministry of Industry and Information Technology’s High-Tech Ship Project under Grant CBG2N21-2-1.

Data Availability Statement

All data required are included in this paper.

Conflicts of Interest

Author Qiming Yang was employed by the company AVIC Guizhou Honglin Aviation Power Control Technology Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Watson, S.; Duecker, D.A.; Groves, K. Localisation of unmanned underwater vehicles (UUVs) in complex and confined environments: A review. Sensors 2020, 20, 6203. [Google Scholar] [CrossRef]
  2. Huvenne, V.A.; Robert, K.; Marsh, L.; Lo Iacono, C.; Le Bas, T.; Wynn, R.B. ROVs and AUVs. In Submarine Geomorphology; Springer International Publishing: Cham, Switzerland, 2018; pp. 93–108. [Google Scholar]
  3. Hong, S.; Chung, D.; Kim, J.; Kim, Y.; Kim, A.; Yoon, H.K. In-water visual ship hull inspection using a hover-capable underwater vehicle with stereo vision. J. Field Robot. 2019, 36, 531–546. [Google Scholar] [CrossRef]
  4. Flögel, S.; Ahrns, I.; Nuber, C.; Hildebrandt, M.; Duda, A.; Schwendner, J.; Wilde, D. A new deep-sea crawler system—MANSIO-VIATOR. In Proceedings of the 2018 OCEANS—MTS/IEEE Kobe Techno-Oceans (OTO), Kobe, Japan, 28–31 May 2018; pp. 1–10. [Google Scholar] [CrossRef]
  5. Alvarez-Tuñón, O.; Rodríguez, A.; Jardón, A.; Balaguer, C. Underwater Robot Navigation for Maintenance and Inspection of Flooded Mine Shafts. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 1482–1487. [Google Scholar]
  6. Krejtschi, J.K. In Service Above Ground Storage Tank Inspection with a Remotely Operated Vehicle (ROV). Ph.D. Thesis, University of Glamorgan, Pontypridd, UK, 2005. [Google Scholar]
  7. Wu, Y.; Ta, X.; Xiao, R.; Wei, Y.; An, D.; Li, D. Survey of underwater robot positioning navigation. Appl. Ocean. Res. 2019, 90, 101845. [Google Scholar] [CrossRef]
  8. Martins, A.; Almeida, J.; Almeida, C.; Pereira, R.; Sytnyk, D.; Soares, E.; Matias, B.; Pereira, T.; Silva, E. MARA-A modular underwater robot for confined spaces exploration. In Proceedings of the Global Oceans 2020: Singapore—U.S. Gulf Coast, Biloxi, MS, USA, 5–30 October 2020; IEEE: Piscataway, NJ, USA, 2020. [Google Scholar]
  9. Ribas, D.; Ridao, P.; Tardós, J.D.; Neira, J. Underwater SLAM in man-made structured environments. J. Field Robot. 2008, 25, 898–921. [Google Scholar] [CrossRef]
  10. Fernandez, R.A.S.; Milošević, Z.; Dominguez, S.; Rossi, C. Motion control of underwater mine explorer robot UX-1: Field trials. IEEE Access 2019, 7, 99782–99803. [Google Scholar] [CrossRef]
  11. De Cerqueira Gava, P.D.; Jorge, V.A.M.; Júnior, C.L.N.; Adabo, G.J. AUV cruising auto pilot for a long straight confined underwater tunnel. In Proceedings of the 2020 IEEE International Systems Conference (SysCon), Montreal, QC, Canada, 24 August–20 September 2020; IEEE: Piscataway, NJ, USA, 2020. [Google Scholar]
  12. Hernández, J.D.; Istenič, K.; Gracias, N.; Palomeras, N.; Campos, R.; Vidal, E.; García, R.; Carreras, M. Autonomous underwater navigation and optical mapping in unknown natural environments. Sensors 2016, 16, 1174. [Google Scholar] [CrossRef] [PubMed]
  13. Bao, J.; Li, D.; Qiao, X.; Rauschenbach, T. Integrated navigation for autonomous underwater vehicles in aquaculture: A review. Inf. Process. Agric. 2020, 7, 139–151. [Google Scholar] [CrossRef]
  14. Dong, M.; Chou, W.; Fang, B. Underwater matching correction navigation based on geometric features using sonar point cloud data. Sci. Program. 2017, 2017, 7136702. [Google Scholar] [CrossRef]
  15. Nivedhitha, D.; Karthik, D.; Murugan, S.S. Localization Systems for Autonomous Operation of Underwater Robotic Vehicles: A Survey. In Proceedings of the OCEANS 2022—Chennai, Chennai, India, 21–24 February 2022; pp. 1–8. [Google Scholar] [CrossRef]
  16. Cong, Y.; Gu, C.; Zhang, T.; Gao, Y. Underwater robot sensing technology: A survey. Fundam. Res. 2021, 1, 337–345. [Google Scholar] [CrossRef]
  17. Zhang, S.; Zhao, S.; An, D.; Liu, J.; Wang, H.; Feng, Y.; Li, D.; Zhao, R. Visual SLAM for underwater vehicles: A survey. Comput. Sci. Rev. 2022, 46, 100510. [Google Scholar] [CrossRef]
  18. Botti, L.; Ferrari, E.; Mora, C. Automated entry technologies for confined space work activities: A survey. J. Occup. Environ. Hyg. 2017, 14, 271–284. [Google Scholar] [CrossRef]
  19. Chutia, S.; Kakoty, N.M.; Deka, D. A Review of Underwater Robotics, Navigation, Sensing Techniques and Applications. In Proceedings of the 2017 3rd International Conference on Advances in Robotics, New Delhi, India, 28 June–2 July 2017; pp. 1–6. [Google Scholar] [CrossRef]
  20. Sabet, M.T.; Mohammadi Daniali, H.; Fathi, A.; Alizadeh, E. A Low-Cost Dead Reckoning Navigation System for an AUV Using a Robust AHRS: Design and Experimental Analysis. IEEE J. Ocean. Eng. 2018, 43, 927–939. [Google Scholar] [CrossRef]
  21. Narasimhappa, M.; Mahindrakar, A.D.; Guizilini, V.C.; Terra, M.H.; Sabat, S.L. MEMS-Based IMU Drift Minimization: Sage Husa Adaptive Robust Kalman Filtering. IEEE Sens. J. 2020, 20, 250–260. [Google Scholar] [CrossRef]
  22. Morgado, M.; Oliveira, P.; Silvestre, C. Tightly coupled ultrashort baseline and inertial navigation system for underwater vehicles: An experimental validation. J. Field Robot. 2013, 30, 142–170. [Google Scholar] [CrossRef]
  23. Heo, Y.; Lee, G.H.; Kim, J. EKF-based Localization for the Underwater Structure Inspection Robot using Depth Sensor and IMU. In Proceedings of the 9th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), Daejeon, Republic of Korea, 26–29 November 2012; pp. 643–645. [Google Scholar]
  24. Zhang, H.; Song, Z. Research on multi-sensor fusion of underwater robot navigation system. In Proceedings of the 2009 IEEE International Conference on Robotics and Biomimetics (ROBIO), Guilin, China, 19–23 December 2009; pp. 1327–1330. [Google Scholar]
  25. Karmozdi, A.; Hashemi, M.; Salarieh, H.; Alasty, A. INS-DVL Navigation Improvement Using Rotational Motion Dynamic Model of AUV. IEEE Sens. J. 2020, 20, 14329–14336. [Google Scholar] [CrossRef]
  26. Li, W.; Wu, W.; Wang, J.; Wu, M. A novel backtracking navigation scheme for Autonomous Underwater Vehicles. Measurement 2014, 47, 496–504. [Google Scholar] [CrossRef]
  27. Hu, G.; Gao, S.; Zhong, Y.; Gao, B.; Subic, A. Matrix weighted multisensor data fusion for INS/GNSS/CNS integration. Proc. Inst. Mech. Eng. Part G J. Aerosp. Eng. 2016, 230, 1011–1026. [Google Scholar] [CrossRef]
  28. Sasiadek, J.Z.; Hartana, P. Sensor data fusion using Kalman filter. In Proceedings of the Third International Conference on Information Fusion, Paris, France, 10–13 July 2000; IEEE: Piscataway, NJ, USA, 2000; Volume 2. [Google Scholar]
  29. Gruyer, D.; Lambert, A.; Perrollaz, M.; Gingras, D. Experimental comparison of Bayesian positioning methods based on multi-sensor data fusion. Int. J. Veh. Auton. Syst. 2014, 12, 24–43. [Google Scholar] [CrossRef]
  30. Bai, L.; Du, C.; Chen, J. An Information Fusion Positioning Algorithm Based on Extended Dempster-Shafer Evidence Theory. In Proceedings of the 2019 International Conference on Sensing, Diagnostics, Prognostics, and Control (SDPC), Beijing, China, 15–17 August 2019; IEEE: Piscataway, NJ, USA, 2019. [Google Scholar]
  31. Stover, J.A.; Hall, D.L.; Gibson, R.E. A fuzzy-logic architecture for autonomous multisensor data fusion. IEEE Trans. Ind. Electron. 1996, 43, 403–410. [Google Scholar] [CrossRef]
  32. Barreto-Cubero, A.J.; Gómez-Espinosa, A.; Escobedo Cabello, J.A.; Cuan-Urquizo, E.; Cruz-Ramírez, S.R. Sensor data fusion for a mobile robot using neural networks. Sensors 2021, 22, 305. [Google Scholar] [CrossRef]
  33. Kim, H.; Suh, D. Hybrid particle swarm optimization for multi-sensor data fusion. Sensors 2018, 18, 2792. [Google Scholar] [CrossRef] [PubMed]
  34. Li, W.; Chen, M.; Li, Y. Key Techniques of SINS/DVL Integrated Navigation System. J. Phys. Conf. Ser. 2021, 2095, 012034. [Google Scholar] [CrossRef]
  35. Pan, X.; Wu, Y. Underwater Doppler Navigation with Self-calibration. J. Navig. 2015, 69, 295–312. [Google Scholar] [CrossRef]
  36. Karras, G.C.; Loizou, S.G.; Kyriakopoulos, K.J. On-line State and Parameter Estimation of an Under-actuated Underwater Vehicle using a Modified Dual Unscented Kalman Filter. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010. [Google Scholar]
  37. Wang, F.S.; Lin, Y.J. Improving Particle Filter with A New Sampling Strategy. In Proceedings of the 4th International Conference on Computer Science and Education, Nanning, China, 25–28 July 2009; pp. 408–412. [Google Scholar]
  38. Lu, H.; Hao, S.; Peng, Z.; Huang, G. Application of Robust High-Degree CKF Based on MCC in Integrated Navigation. Comput. Eng. Appl. 2020, 56, 257–264. [Google Scholar]
  39. Yang, B.; Xu, X.; Zhang, T.; Sun, J.; Liu, X. Novel SINS initial alignment method under large misalignment angles and uncertain noise based on nonlinear filter. Math. Probl. Eng. 2017, 2017, 5917917. [Google Scholar] [CrossRef]
  40. Khodarahmi, M.; Maihami, V. A Review on Kalman Filter Models. Arch. Comput. Methods Eng. 2023, 30, 727–747. [Google Scholar] [CrossRef]
  41. Julier, S.J.; Uhlmann, J.K. New extension of the Kalman filter to nonlinear systems. In Proceedings of the Signal Processing, Sensor Fusion, and Target Recognition VI, Orlando, FL, USA, 21–25 April 1997; International Society for Optics and Photonics: Bellingham, WA, USA, 1997. [Google Scholar]
  42. Li, J.; Hai, H.; Li, J. Real-Time Location of Underwater Robot Grasping Based on Time Delay Compensation. In Proceedings of the Advances in Guidance, Navigation and Control: Proceedings of 2020 International Conference on Guidance, Navigation and Control, ICGNC 2020, Tianjin, China, 23–25 October 2020; Springer: Singapore, 2021. [Google Scholar]
  43. Li, G.; Wu, J.; Tang, T.; Chen, Z.; Chen, J.; Liu, H. Underwater Acoustic Time Delay Estimation Based on Envelope Differences of Correlation Functions. Sensors 2019, 19, 1259. [Google Scholar] [CrossRef]
  44. Sørensen, F.F.; von Benzon, M.; Liniger, J.; Pedersen, S. A Quantitative Parametric Study on Output Time Delays for Autonomous Underwater Cleaning Operations. J. Mar. Sci. Eng. 2022, 10, 815. [Google Scholar] [CrossRef]
  45. Xiao, G.; Wang, B.; Deng, Z.; Fu, M.; Ling, Y. An Acoustic Communication Time Delays Compensation Approach for Master–Slave AUV Cooperative Navigation. IEEE Sens. J. 2017, 17, 504–513. [Google Scholar] [CrossRef]
  46. Wang, S.; Chen, L.; Gu, D.; Hu, H. An optimization based moving horizon estimation with application to localization of autonomous underwater vehicles. Robot. Auto. Syst. 2014, 62, 1581–1596. [Google Scholar] [CrossRef]
  47. Li, J.; Kaess, M.; Eustice, R.M.; Johnson-Roberson, M. Pose-Graph SLAM Using Forward-Looking Sonar. IEEE Robot. Autom. Lett. 2018, 3, 2330–2337. [Google Scholar] [CrossRef]
  48. Wu, E.; Zhao, L.; Guo, Y.; Zhou, W.; Wang, Q. Monocular vision SLAM based on key feature points selection. In Proceedings of the 2010 IEEE International Conference on Information and Automation, Harbin, China, 20–23 June 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 1741–1745. [Google Scholar]
  49. An, S.Y.; Kang, J.G.; Lee, L.K.; Oh, S.Y. SLAM with salient line feature extraction in indoor environments. In Proceedings of the 2010 11th International Conference on Control Automation Robotics & Vision, Singapore, 7–10 December 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 410–416. [Google Scholar]
  50. Zou, Y.; Eldemiry, A.; Li, Y.; Chen, W. Robust RGB-D SLAM using point and line features for low textured scene. Sensors 2020, 20, 4984. [Google Scholar] [CrossRef] [PubMed]
  51. Zhao, S.; Fang, Z. Direct depth SLAM: Sparse geometric feature enhanced direct depth SLAM system for low-texture environments. Sensors 2018, 18, 3339. [Google Scholar] [CrossRef] [PubMed]
  52. O’Byrne, M.; Ghosh, B.; Pakrashi, V.; Schoefs, F. Effects of turbidity and lighting on the performance of an image processing based damage detection technique. In Safety, Reliability, Risk and Life-Cycle Performance of Structures and Infrastructures, Proceedings of the 11th International Conference on Structural Safety and Reliability, ICOSSAR 2013, New York, USA, 16–20 June 2013; Deodatis, G., Ellingwood, B.R., Frangopol, D.M., Eds.; Taylor & Francis: Oxfordshire, UK, 2014. [Google Scholar]
  53. Sørensen, F.F.; Mai, C.; Olsen, O.M.; Liniger, J.; Pedersen, S. Commercial Optical and Acoustic Sensor Performances under Varying Turbidity, Illumination, and Target Distances. Sensors 2023, 23, 6575. [Google Scholar] [CrossRef] [PubMed]
  54. Wang, Y.; Song, W.; Fortino, G.; Qi, L.Z.; Zhang, W.; Liotta, A. An experimental-based review of image enhancement and image restoration methods for underwater imaging. IEEE Access 2019, 7, 140233–140251. [Google Scholar] [CrossRef]
  55. Schettini, R.; Corchs, S. Underwater image processing: State of the art of restoration and image enhancement methods. Eurasip J. Adv. Signal Process. 2010, 2010, 1–14. [Google Scholar] [CrossRef]
  56. Massot-Campos, M.; Oliver, G.; Bodenmann, A.; Thornton, B. Submap bathymetric SLAM using structured light in underwater environments. In Proceedings of the 2016 IEEE/OES Autonomous Underwater Vehicles (AUV), Tokyo, Japan, 6–9 November 2016; pp. 181–188. [Google Scholar] [CrossRef]
  57. Joe, H.; Cho, H.; Kim, B.; Pyo, J.; Yu, S.-C. Profiling and Imaging Sonar Fusion Based 3D Normal Distribution Transform Mapping for AUV Application. In Proceedings of the 2018 OCEANS—MTS/IEEE Kobe Techno-Oceans (OTO), Kobe, Japan, 28–31 May 2018; pp. 1–5. [Google Scholar] [CrossRef]
  58. Collings, S.; Martin, T.J.; Hernandez, E.; Edwards, S.; Filisetti, A.; Catt, G.; Marouchos, A.; Boyd, M.; Embry, C. Findings from a Combined Subsea LiDAR and Multibeam Survey at Kingston Reef, Western Australia. Remote Sens. 2020, 12, 2443. [Google Scholar] [CrossRef]
  59. Foote, K.G. Using a sonar in a different environment from that of its calibration: Effects of changes in salinity and temperature. In Proceedings of the OCEANS 2018 MTS/IEEE Charleston, Charleston, SC, USA, 22–25 October 2018; pp. 1–5. [Google Scholar] [CrossRef]
  60. Burguera, A.; González, Y.; Oliver, G. Underwater SLAM with robocentric trajectory using a mechanically scanned imaging sonar. In Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, USA, 25–30 September 2011; pp. 3577–3582. [Google Scholar] [CrossRef]
  61. Bleier, M. Underwater Laser Scanning-Refractive Calibration, Self-Calibration and Mapping for 3D Reconstruction. Ph.D. Thesis, Universität Würzburg, Würzburg, Germany, 2023. [Google Scholar]
  62. Sushama, M.; Rajinikanth, E. Face recognition using DRLBP and SIFT feature extraction. In Proceedings of the 2018 International Conference on Communication and Signal Processing (ICCSP), Chennai, India, 3–5 April 2018; pp. 994–999. [Google Scholar]
  63. Ramya, P.P.; Ajay, J. Object recognition and classification based on improved bag of features using surf and mser local feature extraction. In Proceedings of the 2019 1st International Conference on Innovations in Information and Communication Technology (ICIICT), Chennai, India, 25–26 April 2019; pp. 1–4. [Google Scholar]
  64. Chen, J.; Luo, L.; Wang, S.; Wu, H. Automatic panoramic UAV image mosaic using ORB features and robust transformation estimation. In Proceedings of the 2018 37th Chinese Control Conference (CCC), Wuhan, China, 25–27 July 2018; pp. 4265–4270. [Google Scholar]
  65. Álvarez-Tuñón, O.; Brodskiy, Y.; Kayacan, E. Monocular visual simultaneous localization and mapping: (r)evolution from geometry to deep learning-based pipelines. IEEE Trans. Artif. Intell. 2023. [Google Scholar] [CrossRef]
  66. Zhou, F.; Zhang, L.; Deng, C.; Fan, X. Improved Point-Line Feature Based Visual SLAM Method for Complex Environments. Sensors 2021, 21, 4604. [Google Scholar] [CrossRef]
  67. Saez, J.M.; Hogue, A.; Escolano, F.; Jenkin, M. Underwater 3D SLAM through entropy minimization. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Orlando, FL, USA, 15–19 May 2006; pp. 3562–3567. [Google Scholar]
  68. Mahon, I.; Williams, S.B.; Pizarro, O.; Johnson-Roberson, M. Efficient View-Based SLAM Using Visual Loop Closures. IEEE Trans. Robot. 2008, 24, 1002–1014. [Google Scholar] [CrossRef]
  69. Kim, A.; Eustice, R. Pose-graph Visual SLAM with Geometric Model Selection for Autonomous Underwater Ship Hull Inspection. In Proceedings of the IEEE RSJ International Conference on Intelligent Robots and Systems, St Louis, MO, USA, 10–15 October 2009; pp. 1559–1565. [Google Scholar]
  70. Kim, A.; Eustice, R.M. Real-Time Visual SLAM for Autonomous Underwater Hull Inspection Using Visual Saliency. IEEE Trans. Robot. 2013, 29, 719–733. [Google Scholar] [CrossRef]
  71. Burguera, A.; Bonin-Font, F.; Oliver, G. Trajectory-Based Visual Localization in Underwater Surveying Missions. Sensors 2015, 15, 1708–1735. [Google Scholar] [CrossRef] [PubMed]
  72. Negre, P.L.; Bonin-Font, F.; Oliver, G. Cluster-Based Loop Closing Detection for Underwater SLAM in Feature-Poor Regions. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 2589–2595. [Google Scholar]
  73. Engel, J.; Schops, T.; Cremers, D. LSD-SLAM: Large-Scale Direct Monocular SLAM. In Proceedings of the 13th European Conference on Computer Vision (ECCV), Zurich, Switzerland, 6–12 September 2014; pp. 834–849. [Google Scholar]
  74. Concha, A.; Drews, P.; Campos, M.; Civera, J. Real-Time Localization and Dense Mapping in Underwater Environments from a Monocular Sequence. In Proceedings of the Oceans 2015 Genova, Ctr Congressi Genova, Genova, Italy, 18–21 May 2015. [Google Scholar]
  75. Cho, Y.; Kim, A. Channel invariant online visibility enhancement for visual SLAM in a turbid environment. J. Field Robot. 2018, 35, 1080–1100. [Google Scholar] [CrossRef]
  76. Chen, W.; Qu, T.; Zhou, Y.M.; Weng, K.J.; Wang, G.; Fu, G.Q. Door recognition and deep learning algorithm for visual based robot navigation. In Proceedings of the 2014 IEEE International Conference on Robotics and Biomimetics (ROBIO), Bali, Indonesia, 5–10 December 2014; pp. 1793–1798. [Google Scholar]
  77. Zhou, T.H.; Brown, M.; Snavely, N.; Lowe, D.G. Unsupervised Learning of Depth and Ego-Motion from Video. In Proceedings of the 30th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1851–1858. [Google Scholar]
  78. Gao, X.; Zhang, T. Loop Closure Detection for Visual SLAM Systems Using Deep Neural Networks. In Proceedings of the 34th Chinese Control Conference (CCC), Hangzhou, China, 28–30 July 2015; pp. 5851–5856. [Google Scholar]
  79. Bai, D.; Wang, C.; Zhang, B.; Yi, X.; Tang, Y. Matching-range-constrained real-time loop closure detection with CNNs features. Robot. Biomim. 2016, 3, 15. [Google Scholar] [CrossRef]
  80. Xia, Y.F.; Li, J.; Qi, L.; Fan, H. Loop Closure Detection for Visual SLAM Using PCANet Features. In Proceedings of the International Joint Conference on Neural Networks (IJCNN), Vancouver, BC, Canada, 24–29 July 2016; pp. 2274–2281. [Google Scholar]
  81. Hu, H.; Zhang, Y.Z.; Duan, Q.; Hu, M.Y.; Pang, L.Z. Loop Closure Detection for Visual SLAM Based on Deep Learning. In Proceedings of the 7th IEEE Annual International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER), Honolulu, HI, USA, 31 July–4 August 2017; pp. 1214–1219. [Google Scholar]
  82. Ding, B.Y.; Liu, Z.H.; Liu, S.Z.; Wu, Q.; Wu, R.H. Stacked Denoising Auto-encoder Based Image Representation for Visual Loop Closure Detection. In Proceedings of the Chinese Automation Congress (CAC), Xian, China, 30 November–2 December 2018; pp. 369–373. [Google Scholar]
  83. Manderson, T.; Dudek, G. GPU-Assisted Learning on an Autonomous Marine Robot for Vision-Based Navigation and Image Understanding. In Proceedings of the Conference on OCEANS MTS/IEEE Charleston, Charleston, SC, USA, 22–25 October 2018. [Google Scholar]
  84. Leonardi, M.; Fiori, L.; Stahl, A. Deep learning based keypoint rejection system for underwater visual ego-motion estimation. In Proceedings of the 21st IFAC World Congress on Automatic Control—Meeting Societal Challenges, Electr Network, 11–17 July 2020; pp. 9471–9477. [Google Scholar]
  85. Burguera, A.; Bonin-Font, F. Visual Loop Detection in Underwater Robotics: An Unsupervised Deep Learning Approach. In Proceedings of the 21st IFAC World Congress on Automatic Control—Meeting Societal Challenges, Electr Network, 11–17 July 2020; pp. 14656–14661. [Google Scholar]
  86. Burguera, A. Lightweight Underwater Visual Loop Detection and Classification using a Siamese Convolutional Neural Network. In Proceedings of the 13th IFAC Conference on Control Applications in Marine Systems, Robotics, and Vehicles (CAMS), Oldenburg, Germany, 22–24 September 2021; pp. 410–415. [Google Scholar]
  87. Burguera, A.; Bonin-Font, F.; Font, E.G.; Torres, A.M. Combining Deep Learning and Robust Estimation for Outlier-Resilient Underwater Visual Graph SLAM. J. Mar. Sci. Eng. 2022, 10, 511. [Google Scholar] [CrossRef]
  88. Wang, Y.Y.; Ma, X.R.; Wang, J.; Hou, S.L.; Dai, J.; Gu, D.B.; Wang, H.Y. Robust AUV Visual Loop-Closure Detection Based on Variational Autoencoder Network. IEEE Trans. Ind. Inform. 2022, 18, 8829–8838. [Google Scholar] [CrossRef]
  89. Teixeira, B.; Silva, H.; Matos, A.; Silva, E. Deep Learning for Underwater Visual Odometry Estimation. IEEE Access 2020, 8, 44687–44701. [Google Scholar] [CrossRef]
  90. Yin, Z.C.; Shi, J.P. GeoNet: Unsupervised Learning of Dense Depth, Optical Flow and Camera Pose. In Proceedings of the 31st IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 1983–1992. [Google Scholar]
  91. Cain, C.H.; Leonessa, A. Testing vision-based sensors for enclosed underwater environments when applied to ekf slam. In Proceedings of the 5th Annual Dynamic Systems and Control Division Conference/11th JSME Motion and Vibration Conference, Fort Lauderdale, FL, USA, 17–19 October 2012; pp. 213–220. [Google Scholar]
  92. Cain, C.; Leonessa, A. Validation of underwater sensor package using feature based slam. Sensors 2016, 16, 380. [Google Scholar] [CrossRef]
  93. Weidner, N.; Rahman, S.; Li, A.Q.; Rekleitis, I. Underwater cave mapping using stereo vision. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 5709–5715. [Google Scholar]
  94. Nocerino, E.; Nawaf, M.M.; Saccone, M.; Ellefi, M.B.; Pasquet, J.; Royer, J.-P.; Drap, P. Multi-camera system calibration of a low-cost remotely operated vehicle for underwater cave exploration. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, 42, 329–337. [Google Scholar] [CrossRef]
  95. Joshi, B.; Xanthidis, M.; Roznere, M.; Burgdorfer, N.J.; Mordohai, P.; Li, A.Q.; Rekleitis, I. Underwater Exploration and Mapping. In Proceedings of the 2022 IEEE/OES Autonomous Underwater Vehicles Symposium (AUV), Singapore, 19–21 September 2022; pp. 1–7. [Google Scholar]
  96. Ochoa, E.; Gracias, N.; Istenic, K.; Bosch, J.; Cieslak, P.; García, R. Collision Detection and Avoidance for Underwater Vehicles Using Omnidirectional Vision. Sensors 2022, 22, 5354. [Google Scholar] [CrossRef]
  97. Joshi, B.; Xanthidis, M.; Rahman, S.; Rekleitis, I. High Definition, Inexpensive, Underwater Mapping. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA, 23–27 May 2022. [Google Scholar]
  98. Hidalgo, F. ORBSLAM2 and Point Cloud Processing Towards Autonomous Underwater Robot Navigation. In Proceedings of the Global Oceans 2020: Singapore—U.S. Gulf Coast, Biloxi, MS, USA, 5–30 October 2020; pp. 1–4. [Google Scholar]
  99. Wu, D.; Wang, M.E.; Li, Q.; Xu, W.P.; Zhang, T.H.; Ma, Z.H. Visual Odometry With Point and Line Features Based on Underground Tunnel Environment. IEEE Access 2023, 11, 24003–24015. [Google Scholar] [CrossRef]
  100. Handegard, N.O. An overview of underwater acoustics applied to observe fish behaviour at the Institute of Marine Research. In Proceedings of the 2013 MTS/IEEE OCEANS—Bergen, Bergen, Norway, 10–14 June 2013; pp. 1–7. [Google Scholar] [CrossRef]
  101. Pratomo, D.G. The Development of Seabed Sediment Mapping Methods: The Opportunity Application in the Coastal Waters. IOP Conf. Ser. Earth Environ. Sci. 2021, 731, 012039. [Google Scholar]
  102. Normark, W.R.; Posamentier, H.; Mutti, E. Turbidite systems: State of the art and future directions. Rev. Geophys. 1993, 31, 91–116. [Google Scholar] [CrossRef]
  103. Shen, S.; Zeng, Y.; Lai, C.; Jiang, S.; Wu, S.; Ma, S. Rapid Three-Dimensional Reconstruction of Underwater Defective Pile Based on Two-Dimensional Images Obtained Using Mechanically Scanned Imaging Sonar. Struct. Control. Health Monit. 2023, 2023, 3647434. [Google Scholar] [CrossRef]
  104. Bozma, O.; Kuc, R. Building a sonar map in a specular environment using a single mobile sensor. IEEE Trans. Pattern Anal. Mach. Intell. 1991, 13, 1260–1269. [Google Scholar] [CrossRef]
  105. Rencken, W.D. Concurrent localisation and map building for mobile robots using ultrasonic sensors. In Proceedings of the 1993 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS ‘93), Yokohama, Japan, 26–30 July 1993; Volume 2193, pp. 2192–2197. [Google Scholar]
  106. Chong, K.S.; Kleeman, L. Mobile-robot map building from an advanced sonar array and accurate odometry. Int. J. Robot. Res. 1999, 18, 20–36. [Google Scholar]
  107. Tardos, J.D.; Neira, J.; Newman, P.M.; Leonard, J.J. Robust mapping and localization in indoor environments using sonar data. Int. J. Robot. Res. 2002, 21, 311–330. [Google Scholar] [CrossRef]
  108. Ip, Y.L.J. Studies on Map Building and Exploration Strategies for Autonomous Mobile Robots (AMR); Hong Kong Polytechnic University: Hong Kong, China, 2003. [Google Scholar]
  109. Mahon, I.; Williams, S. SLAM using natural features in an underwater environment. In Proceedings of the 8th International Conference on Control, Automation, Robotics and Vision (ICARCV 2004), Kunming, China, 6–9 December 2004; pp. 2076–2081. [Google Scholar]
  110. Walter, M.; Hover, F.; Leonard, J. SLAM for ship hull inspection using exactly sparse extended information filters. In Proceedings of the 2008 IEEE International Conference on Robotics and Automation, Pasadena, CA, USA, 19–23 May 2008; pp. 1463–1470. [Google Scholar]
  111. Wang, J.; Shan, T.; Englot, B. Underwater Terrain Reconstruction from Forward-Looking Sonar Imagery. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 3471–3477. [Google Scholar]
  112. Wang, Y.; Ji, Y.; Tsuchiya, H.; Asama, H.; Yamashita, A. Learning Pseudo Front Depth for 2D Forward-Looking Sonar-based Multi-view Stereo. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022; pp. 8730–8737. [Google Scholar]
  113. Wang, Y.S.; Ji, Y.; Liu, D.Y.; Tsuchiya, H.; Yamashita, A.; Asama, H. Elevation Angle Estimation in 2D Acoustic Images Using Pseudo Front View. IEEE Robot. Autom. Lett. 2021, 6, 1535–1542. [Google Scholar] [CrossRef]
  114. DeBortoli, R.; Li, F.X.; Hollinger, G.A. ElevateNet: A Convolutional Neural Network for Estimating the Missing Dimension in 2D Underwater Sonar Images. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 4–8 November 2019; pp. 8040–8047. [Google Scholar]
  115. Fairfield, N.; Kantor, G.; Wettergreen, D. Real-time SLAM with octree evidence grids for exploration in underwater tunnels. J. Field Robot. 2007, 24, 3–21. [Google Scholar] [CrossRef]
  116. Soylu, S.; Hampton, P.; Crees, T.; Woodroffe, A.; Jackson, E. Sonar-based slam navigation in flooded confined spaces with the imotus-1 hovering auv. In Proceedings of the 2018 IEEE/OES Autonomous Underwater Vehicle Workshop (AUV), Porto, Portugal, 6–9 November 2018; pp. 1–6. [Google Scholar]
  117. White, C.; Hiranandani, D.; Olstad, C.S.; Buhagiar, K.; Gambin, T.; Clark, C.M. The Malta cistern mapping project: Underwater robot mapping and localization within ancient tunnel systems. J. Field Robot. 2010, 27, 399–411. [Google Scholar] [CrossRef]
  118. Mallios, A.; Ridao, P.; Ribas, D.; Carreras, M.; Camilli, R. Toward Autonomous Exploration in Confined Underwater Environments. J. Field Robot. 2016, 33, 994–1012. [Google Scholar] [CrossRef]
  119. Breux, Y.; Lapierre, L. Elevation angle estimations of wide-beam acoustic sonar measurements for autonomous underwater karst exploration. Sensors 2020, 20, 4028. [Google Scholar] [CrossRef]
  120. Bonin-Font, F.; Burguera, A.; Oliver, G. Imaging systems for advanced underwater vehicles. J. Marit. Res. 2011, 8, 65–86. [Google Scholar]
  121. Barkby, S.; Williams, S.B.; Pizarro, O.; Jakuba, M.V. A Featureless Approach to Efficient Bathymetric SLAM Using Distributed Particle Mapping. J. Field Robot. 2011, 28, 19–39. [Google Scholar] [CrossRef]
  122. Guivant, J.; Nebot, E.; Baiker, S. Localization and map building using laser range sensors in outdoor applications. J. Robot. Syst. 2000, 17, 565–583. [Google Scholar] [CrossRef]
  123. Surmann, H.; Nüchter, A.; Hertzberg, J. An autonomous mobile robot with a 3D laser range finder for 3D exploration and digitalization of indoor environments. Robot. Auton. Syst. 2003, 45, 181–198. [Google Scholar] [CrossRef]
  124. Pulli, K. Multiview registration for large data sets. In Proceedings of the Second International Conference on 3-D Digital Imaging and Modeling (Cat. No.PR00062), Ottawa, ON, Canada, 8 October 1999; pp. 160–168. [Google Scholar]
  125. Bosse, M.; Newman, P.; Leonard, J.; Teller, S. Simultaneous localization and map building in large-scale cyclic environments using the Atlas framework. Int. J. Robot. Res. 2004, 23, 1113–1139. [Google Scholar] [CrossRef]
  126. Garulli, A.; Giannitrapani, A.; Rossi, A.; Vicino, A. Mobile robot SLAM for line-based environment representation. In Proceedings of the 44th IEEE Conference on Decision Control/European Control Conference (CCD-ECC), Seville, Spain, 12–15 December 2005; pp. 2041–2046. [Google Scholar]
  127. Cole, D.M.; Newman, P.M. Using laser range data for 3D SLAM in outdoor environments. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Orlando, FL, USA, 15–19 May 2006; pp. 1556–1563. [Google Scholar]
  128. Inglis, G.; Smart, C.; Vaughn, I.; Roman, C. A pipeline for structured light bathymetric mapping. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura-Algarve, Portugal, 7–12 October 2012; pp. 4425–4432. [Google Scholar]
  129. Himri, K.; Pi, R.; Ridao, P.; Gracias, N.; Palomer, A.; Palomeras, N. Object Recognition and Pose Estimation using Laser scans For Advanced Underwater Manipulation. In Proceedings of the 2018 IEEE/OES Autonomous Underwater Vehicle Workshop (AUV), Porto, Portugal, 6–9 November 2018; pp. 1–6. [Google Scholar]
  130. Massot-Campos, M.; Oliver-Codina, G.; Thornton, B. Laser Stripe Bathymetry using Particle Filter SLAM. In Proceedings of the OCEANS 2019, Marseille, France, 17–20 June 2019; pp. 1–7. [Google Scholar]
  131. Yang, H.B.; Xu, Z.Z.; Jia, B.Z. An Underwater Positioning System for UUVs Based on LiDAR Camera and Inertial Measurement Unit. Sensors 2022, 22, 5418. [Google Scholar] [CrossRef]
  132. Rahman, S.; Li, A.Q.; Rekleitis, I. Sonar Visual Inertial SLAM of Underwater Structures. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018; pp. 5190–5196. [Google Scholar]
  133. Rahman, S. A Multi-Sensor Fusion-Based Underwater SLAM System. Ph.D. Thesis, University of South Carolina, Columbia, SC, USA, 2020. [Google Scholar]
  134. Cheng, C.; Wang, C.; Yang, D.; Liu, W.; Zhang, F. Underwater Localization and Mapping Based on Multi-Beam Forward Looking Sonar. Front. Neurorobotics 2022, 15, 801956. [Google Scholar] [CrossRef]
  135. Martins, A.; Almeida, J.; Almeida, C.; Dias, A.; Dias, N.; Aaltonen, J.; Heininen, A.; Koskinen, K.T.; Rossi, C.; Dominguez, S.; et al. UX 1 system design—A robotic system for underwater mining exploration. In Proceedings of the 25th IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 1494–1500. [Google Scholar]
  136. Bjerkeng, M.; Grotli, E.I.; Kirkhus, T.; Thielemann, J.T.; Amundsen, H.B.; Su, B.A.; Ohrem, S. Absolute localization of an ROV in a Fish Pen using Laser Triangulation. In Proceedings of the 31st Mediterranean Conference on Control and Automation (MED), Limassol, Cyprus, 26–29 June 2023; pp. 182–188. [Google Scholar]
  137. Preston, V.; Salumae, T.; Kruusmaa, M. Underwater confined space mapping by resource-constrained autonomous vehicle. J. Field Robot. 2018, 35, 1122–1148. [Google Scholar] [CrossRef]
  138. Hernandez, J.D.; Vidal, E.; Moll, M.; Palomeras, N.; Carreras, M.; Kavraki, L.E. Online motion planning for unexplored underwater environments using autonomous underwater vehicles. J. Field Robot. 2019, 36, 370–396. [Google Scholar] [CrossRef]
  139. Toal, D.J.F.; Flanagan, C.; Lyons, W.B.; Nolan, S.; Lewis, E. Proximal object and hazard detection for autonomous underwater vehicle with optical fibre sensors. Robot. Auton. Syst. 2005, 53, 214–229. [Google Scholar] [CrossRef]
  140. Boyer, F.; Lebastard, V.; Chevallereau, C.; Servagent, N. Underwater Reflex Navigation in Confined Environment Based on Electric Sense. IEEE Trans. Robot. 2013, 29, 945–956. [Google Scholar] [CrossRef]
  141. Cheng, H.Y.; Chu, J.K.; Zhang, R.; Gui, X.Y.; Tian, L.B. Real-Time Position and Attitude Estimation for Homing and Docking of an Autonomous Underwater Vehicle Based on Bionic Polarized Optical Guidance. J. Ocean. Univ. China 2020, 19, 1042–1050. [Google Scholar] [CrossRef]
  142. Pandya, S.; Yang, Y.; Jones, D.L.; Engel, J.; Liu, C. Multisensor processing algorithms for underwater dipole localization and tracking using MEMS artificial lateral-line sensors. Eurasip J. Appl. Signal Process. 2006, 2006, 76593. [Google Scholar] [CrossRef]
  143. Coombs, S. Nearfield detection of dipole sources by the goldfish (Carassius auratus) and the mottled sculpin (Cottus bairdi). J. Exp. Biol. 1994, 190, 109–129. [Google Scholar] [CrossRef] [PubMed]
  144. Zheng, X.D.; Zhang, Y.; Ji, M.J.; Liu, Y.; Lin, X.; Qiu, J.; Liu, G.J. Underwater Positioning Based on an Artificial Lateral Line and a Generalized Regression Neural Network. J. Bionic Eng. 2018, 15, 883–893. [Google Scholar] [CrossRef]
  145. Salumäe, T.; Kruusmaa, M. Flow-relative control of an underwater robot. Proc. R. Soc. A-Math. Phys. Eng. Sci. 2013, 469, 20120671. [Google Scholar] [CrossRef]
  146. Fuentes-Pérez, J.F.; Tuhtan, J.A.; Carbonell-Baeza, R.; Musall, M.; Toming, G.; Muhammad, N.; Kruusmaa, M. Current velocity estimation using a lateral line probe. Ecol. Eng. 2015, 85, 296–300. [Google Scholar] [CrossRef]
  147. Peng, J.G.; Zhu, Y.; Yong, T. Research on Location Characteristics and Algorithms based on Frequency Domain for a 2D Underwater Active Electrolocation Positioning System. J. Bionic Eng. 2017, 14, 759–769. [Google Scholar] [CrossRef]
  148. Shashar, N.; Hagan, R.; Boal, J.G.; Hanlon, R.T. Cuttlefish use polarization sensitivity in predation on silvery fish. Vis. Res. 2000, 40, 71–75. [Google Scholar] [CrossRef]
  149. Cartron, L.; Josef, N.; Lerner, A.; McCusker, S.D.; Darmaillacq, A.S.; Dickel, L.; Shashar, N. Polarization vision can improve object detection in turbid waters by cuttlefish. J. Exp. Mar. Biol. Ecol. 2013, 447, 80–85. [Google Scholar] [CrossRef]
  150. Waterman, T.H. Reviving a neglected celestial underwater polarization compass for aquatic animals. Biol. Rev. 2006, 81, 111–115. [Google Scholar] [CrossRef]
  151. Cheng, H.Y.; Yu, S.M.; Yu, H.; Zhu, J.C.; Chu, J.K. Bioinspired Underwater Navigation Using Polarization Patterns Within Snell’s Window. China Ocean. Eng. 2023, 37, 628–636. [Google Scholar] [CrossRef]
Figure 1. Classification of gyroscope and accelerometer.
Figure 2. Schematic diagram of DVL operating modes ((left): referenced to the seafloor, (right): referenced to a water layer).
Figure 3. IMU/DVL Loosely Coupled Localization.
Figure 4. IMU/DVL Tightly Coupled Localization.
Figure 5. Illustration of the SLAM Localization Process.
Figure 6. Classification of Laser Cameras.
Figure 7. Illustration of Multi-Sensor Fusion Processing.
Table 1. Characteristics of SINS and DR.
Method | Principle | Main Sensors | Disadvantages
SINS | Newtonian mechanics theorem | IMU | Fast accumulation of errors
DR | Doppler frequency shift | DVL | Strong susceptibility to environmental influences
Table 2. Characteristics of Kalman Filtering.
Category | Requires Gaussian Assumption | Requires System Linearity | Resource Consumption
KF | Required | Required | Low
EKF | Required | Not required | High
UKF | Required | Not required | Moderate
Table 3. Characteristics of SLAM.
Type | Medium | Detection Range | Image Precision | Influencing Factors
Visual SLAM | Visible light | Close | Medium | Illumination, turbidity, clarity of features
Sonar SLAM | Sound waves | Far | Low | Salinity, temperature, water pressure, multipath effects
Laser SLAM | Laser | Medium | High | Turbidity, multipath effects [61]
Table 4. Sonar Imaging Characteristics.
Category | Sonar Type | Imaging Dimension
Multi-beam sonar | Multi-beam echo sounder [100] | 3D
Multi-beam sonar | Forward-looking sonar [101] | 3D
Single-beam sonar | Single-beam echo sounder [102] | 1D
Single-beam sonar | Side scan sonar [103] | 2D
Single-beam sonar | Mechanical scanning sonar [104] | 2D