Article

An Underwater Multisensor Fusion Simultaneous Localization and Mapping System Based on Image Enhancement

1 Chinese Flight Test Establishment, Xi’an 710089, China
2 AVIC The First Aircraft Institute, Xi’an 710000, China
3 School of Marine Science and Technology, Northwestern Polytechnical University, Xi’an 710072, China
* Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2024, 12(7), 1170; https://doi.org/10.3390/jmse12071170
Submission received: 16 May 2024 / Revised: 2 July 2024 / Accepted: 9 July 2024 / Published: 12 July 2024
(This article belongs to the Section Ocean Engineering)

Abstract: As autonomous underwater vehicles (AUVs) are a key tool for ocean exploration, their positioning accuracy directly influences the success of subsequent missions. This study aims to develop a novel method to address the low accuracy of visual simultaneous localization and mapping (SLAM) in underwater environments, enhancing its application in the navigation and localization of AUVs. We propose an underwater multisensor fusion SLAM system based on image enhancement. First, we integrate hybrid attention mechanisms with generative adversarial networks to address the blurring and low contrast in underwater images, thereby increasing the number of feature points. Next, we develop an underwater feature-matching algorithm based on a local matcher to solve the feature tracking problem caused by grayscale changes in the enhanced image. Finally, we tightly couple the Doppler velocity log (DVL) with the SLAM algorithm to better adapt to underwater environments. The experiments demonstrate that, compared to other algorithms, our proposed method achieves reductions in both mean absolute error (MAE) and standard deviation (STD) by up to 68.18% and 44.44%, respectively, when all algorithms are operating normally. Additionally, the MAE and STD of our algorithm are 0.84 m and 0.48 m, respectively, when other algorithms fail to operate properly.

1. Introduction

Since the beginning of the twenty-first century, with land resources becoming increasingly scarce, the ocean, as a newly tapped treasure trove for humanity, has emerged as a new focus of development [1,2]. Due to the complexity of the marine environment, the cost of manual operation in this environment is high, and there are various hidden dangers. Autonomous underwater vehicles (AUVs) [3,4], owing to their autonomous navigation capabilities, can effectively replace manual underwater operations such as seabed exploration, seabed archaeology, and seabed minesweeping [5,6], and have gradually become essential tools for executing underwater tasks. The prerequisite for AUVs to perform underwater missions is the availability of a high-precision navigation and localization system. Hydroacoustic localization systems [7,8], inertial navigation and localization systems [9,10], and simultaneous localization and mapping (SLAM) technology [11,12] have been proposed as solutions to the underwater navigation challenges faced by AUVs.
The process of the hydroacoustic positioning system is summarized as the interaction between an acoustic transmitting transducer and a receiving transducer to determine the target position. As hydroacoustic positioning systems generally use acoustic waves as the communication carrier, they are affected by the underwater environment, leading to increased noise in the hydroacoustic signal, longer communication delays, and other issues that significantly impact the accuracy and real-time capabilities of navigation and positioning. Additionally, hydroacoustic positioning systems are expensive and difficult to install, further limiting their application scenarios.
Inertial navigation and positioning systems achieve a high degree of covertness and autonomy by using the acceleration and angular velocity data obtained from accelerometers and gyroscopes to perform dead reckoning, thereby determining the current position. In practical applications, this information is generally provided by inertial measurement devices. High-precision navigation equipment is expensive and generally not suitable for civilian use, while low-cost micro-electro-mechanical system inertial measurement units (MEMS IMUs) are less accurate and cannot be used independently for extended periods.
With the development of computer technology, SLAM technology with a laser or camera as the core sensor is increasingly employed. Laser SLAM [13,14], which achieves navigation and positioning by comparing the emitted and reflected laser beams, is gradually being applied in underwater environments. For example, studies [15,16] propose the use of single-photon underwater LiDAR for bathymetric measurements, with results that are highly consistent with synchronized sonar data. Visual SLAM [17,18], which determines position through image data, has the advantages of no drift and high stability, and can provide high-precision navigation and positioning for AUVs.
Currently, SLAM algorithms applied to AUVs are broadly classified into three categories: EKF-SLAM, based on the extended Kalman filter (EKF); FastSLAM, based on the particle filter (PF); and graph-SLAM, based on graph optimization.
The computation of EKF-SLAM [19] is divided into four processes: prediction, observation update, data association, and state dimension expansion. A study [20] proposed an underwater localization algorithm using acoustic signals based on EKF-SLAM, which fuses inertial sensors with the direction angle of the acoustic source obtained from Bayesian estimation for position estimation. The authors of [21] developed an enhanced EKF-SLAM that uses position error models based on side-scan sonar and the AUV inertial navigation system as input data, rather than directly using the data from the inertial sensors and side-scan sonar. EKF-SLAM inherits the advantages and disadvantages of the EKF algorithm: its principle is simple and easy to implement, but the error caused by system nonlinearity may lead to nonconvergence and high computational cost.
The basis of FastSLAM is the PF, whose core idea is to use a series of randomly drawn samples and their weights to represent the posterior probability distribution of the state. When the number of samples is sufficiently large, the true posterior distribution can be well approximated by such random sampling. Researchers [22] proposed a Rao-Blackwellized particle filter SLAM method for an AUV equipped with a slow mechanical scanning imaging sonar. Others [23] proposed a particle-filter-based bathymetric simultaneous localization and mapping (BSLAM) method with a mean trajectory map representation, where the particles retain only the current estimated position of the AUV, while all historical states of the AUV are stored in the mean trajectory map. The disadvantages of FastSLAM include particle “degeneracy” and the difficulty of finding a general selection criterion for the importance density function used to generate particles.
In graph-SLAM, the positions of an AUV are represented as nodes or vertices, and the relationships between positions constitute edges. The process is mainly divided into two parts: the front end, responsible for processing the data collected by the sensors, and the back end, which uses a nonlinear optimization algorithm to determine the robot’s position. A study [24] proposes a collaborative SLAM framework in which each AUV generates its own local map while using marginalized poses and sparsified information matrices to reduce the size of communication packets, ultimately achieving collaborative localization and mapping. In another study [25], to enhance the robustness of SLAM, a robust estimator that fuses the robot’s kinematic model with proprioceptive sensors was proposed to propagate the pose when the visual-inertial odometry (VIO) fails, ensuring proper localization. Graph-optimization-based algorithms were previously considered too time consuming to meet the real-time requirements of SLAM. However, with the emergence of efficient solvers and the rapid development of related hardware, graph-optimization-based algorithms have regained attention and become a focus of current SLAM research.
The disadvantages of underwater images [26,27], such as color deviation and blurring, risk causing the SLAM system to fail. In this study, to meet the demand for AUV underwater navigation and localization, an underwater multisensor fusion SLAM algorithm based on image enhancement was designed by integrating an underwater image enhancement module into VINS-Mono [28], which belongs to the graph-SLAM category, and fusing data from multiple sensors, namely a monocular camera, an IMU, and a Doppler velocity log (DVL).
The main contributions of this study are summarized as follows:
In this paper, an underwater image enhancement algorithm based on a generative adversarial network [29] is proposed. To improve the quality of underwater images, we designed a hybrid attention module, consisting of channel attention and spatial attention, and applied it to the generator to strengthen the enhancement effect. Additionally, we constructed a multi-term loss function to improve training efficiency.
In this paper, a multisensor fusion SLAM algorithm is proposed, based on the VINS-Mono framework, incorporating DVL, and making corresponding improvements to its measurement preprocessing, initialization, and nonlinear optimization components. Additionally, to address the impact of grayscale changes caused by image enhancement on image matching, this paper proposes an underwater image matching algorithm based on a local matcher.
In Section 2, we describe the specific algorithms used in this study. The results obtained from the system are presented in Section 3. Finally, we provide a brief discussion in Section 4 and the conclusions in Section 5.

2. Materials and Methods

In this section, we describe the programs and methods used in this study, divided into the following two aspects: First, the underwater image enhancement algorithm based on a generative adversarial network is presented. Second, the integration of DVL into the VINS-Mono algorithm and the underwater feature-matching algorithm based on the local matcher are introduced.
The algorithm proposed in this paper is based on the traditional VINS-Mono framework and consists of four parts: measurement preprocessing, initialization, nonlinear optimization, and loop closure. The system framework of the proposed algorithm is shown in Figure 1.

2.1. Underwater Image Enhancement Algorithm Based on Generative Adversarial Network

The disadvantages of underwater images, such as color cast, blurring, and low contrast, lead to a significant reduction in the number of extractable feature points, which in turn can cause the collapse of the entire SLAM system. To address this problem, this subsection proposes a generative adversarial network based on a hybrid attention mechanism to enhance underwater images. The hybrid attention module, which serves as the core part of the generator, aims to restore the color and texture of the image and consists of a channel attention module and a spatial attention module. The overall network framework is a generative adversarial network, with the adversarial network framework based on U-Net [30]. The generator network framework and hybrid attention module network framework are shown in Figure 2.

2.1.1. Channel Attention Module

The channel attention module reallocates channel resources to aid in the color restoration of images. It aggregates feature map information using both maximum pooling and average pooling and achieves adaptive selection through learned parameters, ensuring that the two pooling results have different weights to obtain better channel attention maps. The network framework is shown in Figure 3, and the specific process is detailed below:
Step 1: The input feature map $X \in \mathbb{R}^{C_{in} \times H_{in} \times W_{in}}$ undergoes two pooling operations in two pooling layers. The pooling results are then weighted by the learning parameters $\alpha$ and $\beta$ and summed element-wise with their mean to obtain the feature tensor $f_{add} \in \mathbb{R}^{C_{in} \times 1 \times 1}$. This process can be expressed by the following equation:

$$f_{add} = \alpha \otimes AvgPool(X) \oplus \beta \otimes MaxPool(X) \oplus 0.5 \otimes \big( AvgPool(X) \oplus MaxPool(X) \big) \tag{1}$$

where $AvgPool$ represents average pooling; $MaxPool$ represents max pooling; $\oplus$ denotes element-wise summation; and $\otimes$ denotes element-wise multiplication. The parameters $\alpha$ and $\beta$ belong to $(0,1)$, and the network achieves adaptive tuning of the channel weights through these two learning parameters.

Step 2: The feature tensor $f_{add}$ undergoes dimensionality reduction followed by dimensionality enhancement to achieve cross-channel information interaction, resulting in the feature tensor $f_z \in \mathbb{R}^{C_{in} \times 1 \times 1}$. The process can be expressed by the following equation:

$$f_z = \Phi_1\big(\Phi_2(f_{add})\big) = C2D_{1 \times 1}^{C_{in} \times d}\Big(\delta\big(BN\big(C2D_{1 \times 1}^{d \times C_{in}}(f_{add})\big)\big)\Big) \tag{2}$$

where $\Phi_1$ represents the dimension-ascending operation; $\Phi_2$ represents the dimension-descending operation; $C2D_{1 \times 1}^{C_{in} \times d}$ denotes a $1 \times 1$ 2D convolution with $C_{in}$ output channels and $d$ input channels; $C2D_{1 \times 1}^{d \times C_{in}}$ denotes a $1 \times 1$ 2D convolution with $d$ output channels and $C_{in}$ input channels; $BN$ represents batch normalization; and $\delta$ is the $ReLU$ activation function. The parameter $d$ is determined by the hyperparameter $r$:

$$d = \max\big( C_{in}/r,\; L \big) \tag{3}$$

In the final step, the feature tensor $f_z$ is normalized and multiplied element-wise with the input feature map $X$ to obtain the output feature map $Y \in \mathbb{R}^{C_{in} \times H_{in} \times W_{in}}$:

$$Y = \sigma(f_z) \otimes X \tag{4}$$

where $\sigma$ is the $softmax$ activation function.
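To make the channel attention computation concrete, the following is a minimal PyTorch sketch of the module described by Equations (1)–(4). The learnable weights $\alpha$ and $\beta$ and the reduction rule of Equation (3) follow the text, while the default hyperparameters r and L and other implementation details are assumptions rather than values taken from the paper.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Sketch of the channel attention module (Equations (1)-(4))."""
    def __init__(self, in_channels, r=16, L=8):
        super().__init__()
        d = max(in_channels // r, L)                     # Equation (3)
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.max_pool = nn.AdaptiveMaxPool2d(1)
        # learnable weights for the two pooling branches
        self.alpha = nn.Parameter(torch.tensor(0.5))
        self.beta = nn.Parameter(torch.tensor(0.5))
        # dimensionality reduction (Phi_2) followed by enhancement (Phi_1)
        self.reduce = nn.Sequential(
            nn.Conv2d(in_channels, d, kernel_size=1),
            nn.BatchNorm2d(d),
            nn.ReLU(inplace=True),
        )
        self.expand = nn.Conv2d(d, in_channels, kernel_size=1)
        self.softmax = nn.Softmax(dim=1)                 # normalization in Equation (4)

    def forward(self, x):
        avg = self.avg_pool(x)
        mx = self.max_pool(x)
        # Equation (1): weighted pooling results plus their mean
        f_add = self.alpha * avg + self.beta * mx + 0.5 * (avg + mx)
        # Equation (2): cross-channel interaction
        f_z = self.expand(self.reduce(f_add))
        # Equation (4): rescale the input feature map channel-wise
        return self.softmax(f_z) * x
```

For example, `ChannelAttention(64)(torch.randn(2, 64, 32, 32))` returns a tensor of the same shape with the channels re-weighted.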

2.1.2. Spatial Attention Module

The spatial attention module strengthens the network’s learning of information-rich regions, aiding in the recovery of the highly textured parts of the image, and determines the importance of the features in each part of the different channels through learning. To ensure the network pays more attention to the highly textured parts of the image, this paper proposes a feature separation algorithm that divides the features in each channel into important and minor features. The spatial attention module first uses the proposed algorithm to achieve feature separation, then applies average pooling and max pooling operations along the channel dimensions, and finally passes the two pooling results through a series of convolutional layers to obtain the final spatial attention map. The network framework is shown in Figure 4, and the specific flow is detailed below:
Step 1: The input feature map $X \in \mathbb{R}^{C_{in} \times H_{in} \times W_{in}}$ undergoes adaptive average pooling to obtain the average value of each layer. The part above the average value is defined as important features, and the part below the average value is defined as secondary features. Two tensors are then defined to store the results: one tensor sets the important features to 1 and the secondary features to 0, while the other tensor does the opposite. The two tensors are then multiplied element-wise with the input feature map to obtain the important feature map $F_1 \in \mathbb{R}^{C_{in} \times H_{in} \times W_{in}}$ and the secondary feature map $F_2 \in \mathbb{R}^{C_{in} \times H_{in} \times W_{in}}$. The Python code is shown in Algorithm 1.
Algorithm 1 Python code for feature separation.
Input: $X$; Output: $F_1$, $F_2$.

```python
import torch
import torch.nn as nn

def channel_separation(X):
    # Split each channel of X into features above its channel mean (important)
    # and features below it (secondary).
    one = torch.ones_like(X)
    zero = torch.zeros_like(X)
    avg_pool = nn.AdaptiveAvgPool2d(1)
    avg = avg_pool(X)                                     # per-channel average
    important_tensor = torch.where(X > avg, one, zero)    # mask of important features
    subimportant_tensor = 1 - important_tensor            # complementary mask
    F1 = X * important_tensor
    F2 = X * subimportant_tensor
    return F1, F2
```
Step 2: The feature maps $F_1$ and $F_2$ undergo a pooling layer, a $7 \times 7$ shared convolutional layer, and a series of nonlinear operations to obtain the corresponding spatial attention tensors $f_{SA1} \in \mathbb{R}^{1 \times H_{in} \times W_{in}}$ and $f_{SA2} \in \mathbb{R}^{1 \times H_{in} \times W_{in}}$. The process can be expressed as follows:

$$\begin{aligned} f_{SA1} &= \Phi_3\Big(C2D_{7 \times 7}^{1 \times 1}\big(\big[AvgPool(F_1);\, MaxPool(F_1)\big]\big)\Big) \\ f_{SA2} &= \Phi_3\Big(C2D_{7 \times 7}^{1 \times 1}\big(\big[AvgPool(F_2);\, MaxPool(F_2)\big]\big)\Big) \end{aligned} \tag{5}$$

where $\Phi_3$ represents a series of nonlinear operations consisting of a sequence of $BN$ layers and $ReLU$ functions, and $C2D_{7 \times 7}^{1 \times 1}$ denotes a $7 \times 7$ 2D convolution with 1 output channel and 1 input channel.

In the final step, the spatial attention tensors $f_{SA1}$ and $f_{SA2}$ are multiplied with the feature maps $F_1$ and $F_2$, respectively, and the results are summed element-wise to obtain the output feature map $Y \in \mathbb{R}^{C_{in} \times H_{in} \times W_{in}}$. The process can be expressed as follows:

$$Y = f_{SA1} \otimes F_1 \oplus f_{SA2} \otimes F_2 \tag{6}$$
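A minimal PyTorch sketch of the spatial attention module, reusing the feature separation of Algorithm 1, is given below. Note that the channel-wise average and max maps are concatenated into a two-channel input before the shared $7 \times 7$ convolution; this concatenation, and any hyperparameter not stated in the text, are assumptions of the sketch.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Sketch of the spatial attention module with feature separation (Equations (5)-(6))."""
    def __init__(self, kernel_size=7):
        super().__init__()
        # shared 7x7 convolution over the concatenated average/max maps
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)
        self.post = nn.Sequential(nn.BatchNorm2d(1), nn.ReLU(inplace=True))  # Phi_3

    @staticmethod
    def channel_separation(x):
        # Algorithm 1: split each channel into important / secondary features
        avg = nn.AdaptiveAvgPool2d(1)(x)
        important = torch.where(x > avg, torch.ones_like(x), torch.zeros_like(x))
        return x * important, x * (1.0 - important)

    def _attention(self, f):
        # channel-wise average and max pooling, shared conv, nonlinearity (Equation (5))
        pooled = torch.cat([f.mean(dim=1, keepdim=True),
                            f.max(dim=1, keepdim=True).values], dim=1)
        return self.post(self.conv(pooled))

    def forward(self, x):
        f1, f2 = self.channel_separation(x)
        # Equation (6): reweight the two branches and recombine
        return self._attention(f1) * f1 + self._attention(f2) * f2
```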

2.1.3. Loss Function

In this method, the loss function $L$ is obtained by linearly weighting the adversarial loss $L_{adv}$ [29], the L1 loss $L_1$ [31], the gradient loss $L_{gra}$, and the color bias loss $L_{col}$. The specific form of the loss function is shown in the following equation:

$$L = L_{adv} + \lambda_{g1} L_1 + \lambda_{g2} L_{gra} + \lambda_{g3} L_{col} \tag{7}$$

where the scaling factors $\lambda_{g1}$, $\lambda_{g2}$, and $\lambda_{g3}$ are set to 0.3, 0.3, and 0.4, respectively.

Among them, the gradient loss $L_{gra}$ reflects, to a certain extent, the clarity and integrity of the image boundaries, aiding in the generation of image details:

$$L_{gra} = \mathbb{E}_{x,y}\Big[ \big\| \nabla_i y - \nabla_i G(x) \big\|_1 \Big] + \mathbb{E}_{x,y}\Big[ \big\| \nabla_j y - \nabla_j G(x) \big\|_1 \Big] \tag{8}$$

where $\nabla_i$ and $\nabla_j$ are the horizontal and vertical gradient operators.

The color bias loss $L_{col}$ encourages the color distribution of the generated image to align more closely with that of the real image by measuring the degree of color bias in the generated image:

$$\begin{aligned} L_{col} &= \frac{D_y / M_y}{D_x / M_x} \\ d_a &= \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N} a, \quad d_b = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N} b, \quad D = \sqrt{d_a^2 + d_b^2} \\ M_a &= \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N} \big| a - d_a \big|, \quad M_b = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N} \big| b - d_b \big|, \quad M = \sqrt{M_a^2 + M_b^2} \end{aligned} \tag{9}$$

where $D$ denotes the average color difference of the image; $M$ denotes the center distance of the image; $a$ and $b$ denote the pixel values of the a channel and b channel of the image in Lab space; and $M$ and $N$ (as summation limits) denote the image dimensions in Lab space.
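As a concrete illustration, the following is a minimal PyTorch sketch of the non-adversarial loss terms in Equation (7). Which image plays the role of x and which of y in the color-bias ratio of Equation (9), as well as the RGB-to-Lab conversion (not shown), are assumptions of the sketch; the weights 0.3/0.3/0.4 are from the text.

```python
import torch

def gradient_loss(fake, real):
    # L_gra (Equation (8)): L1 distance between horizontal and vertical image gradients
    dx_f, dx_r = fake[..., :, 1:] - fake[..., :, :-1], real[..., :, 1:] - real[..., :, :-1]
    dy_f, dy_r = fake[..., 1:, :] - fake[..., :-1, :], real[..., 1:, :] - real[..., :-1, :]
    return (dx_f - dx_r).abs().mean() + (dy_f - dy_r).abs().mean()

def color_bias(lab):
    # Color cast factor D / M (Equation (9)) for a batch of Lab images (channels L, a, b)
    a, b = lab[:, 1], lab[:, 2]
    d_a, d_b = a.mean(dim=(1, 2)), b.mean(dim=(1, 2))
    D = torch.sqrt(d_a ** 2 + d_b ** 2)
    m_a = (a - d_a[:, None, None]).abs().mean(dim=(1, 2))
    m_b = (b - d_b[:, None, None]).abs().mean(dim=(1, 2))
    M = torch.sqrt(m_a ** 2 + m_b ** 2)
    return D / (M + 1e-8)

def generator_loss(adv_loss, fake, real, fake_lab, real_lab, lambdas=(0.3, 0.3, 0.4)):
    # Total loss of Equation (7); adv_loss is the adversarial term, and fake_lab /
    # real_lab are Lab-space versions of the generated and reference images.
    l1 = (fake - real).abs().mean()
    l_gra = gradient_loss(fake, real)
    l_col = (color_bias(fake_lab) / (color_bias(real_lab) + 1e-8)).mean()
    return adv_loss + lambdas[0] * l1 + lambdas[1] * l_gra + lambdas[2] * l_col
```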

2.2. Underwater Multisensor Fusion SLAM Algorithm

2.2.1. DVL Tightly Coupled to SLAM Algorithm

In this paper, $(\cdot)^w$ represents the world coordinate system, $(\cdot)^b$ represents the body coordinate system (defined to coincide with the IMU coordinate system), $(\cdot)^c$ represents the camera coordinate system, and $(\cdot)^d$ represents the DVL coordinate system. Meanwhile, the rotation matrix $R$ and the quaternion $q$ are both used to represent rotation.
Adding the DVL to VINS-Mono requires corresponding adjustments to the pre-integration, initialization, and nonlinear optimization. The following sections describe each of these adjustments in detail.
Based on the IMU kinematic model [32], by adding the DVL information, the kinematic IMU/DVL model in the world coordinate system can be obtained as shown in Equation (10):
$$\begin{aligned} p_{b_{k+1}}^{w} &= p_{b_k}^{w} + v_{b_k}^{w} \Delta t + \iint_{t \in [t_k, t_{k+1}]} \big( R_t^{w} ( \hat{a}_t - b_a ) - g^{w} \big)\, dt^2 \\ v_{b_{k+1}}^{w} &= v_{b_k}^{w} + \int_{t \in [t_k, t_{k+1}]} \big( R_t^{w} ( \hat{a}_t - b_a ) - g^{w} \big)\, dt \\ q_{b_{k+1}}^{w} &= q_{b_k}^{w} \odot \int_{t \in [t_k, t_{k+1}]} \frac{1}{2} \Omega( \hat{\omega}_t - b_\omega )\, q_t^{b_k}\, dt \\ p_{d_{k+1}}^{w} &= p_{d_k}^{w} + \int_{t \in [t_k, t_{k+1}]} R_t^{w} R_d^{b} ( \hat{v}_t - b_d )\, dt \end{aligned} \tag{10}$$

where $p_b^w$, $v_b^w$, and $q_b^w$ denote the position, velocity, and rotation of the IMU in the world coordinate system; $p_d^w$ is the position of the DVL in the world coordinate system; $\hat{a}_t$, $\hat{\omega}_t$, and $\hat{v}_t$ are the measured values of the accelerometer, gyroscope, and DVL, respectively; $b_a$, $b_\omega$, and $b_d$ are the corresponding zero biases; $\Delta t$ denotes the time difference between the two frames; $R_t^w$ is the rotation of the IMU in the world coordinate system at time $t$; $g^w$ is the gravitational acceleration in the world coordinate system; $q_t^{b_k}$ is the incremental rotation of the IMU relative to the $b_k$ frame at time $t$; $R_d^b$ is the IMU-DVL rotational extrinsic parameter; $\odot$ denotes quaternion multiplication; and $\Omega$ is defined as

$$\Omega(\omega) = \begin{bmatrix} -\lfloor \omega \rfloor_{\times} & \omega \\ -\omega^{T} & 0 \end{bmatrix}, \qquad \lfloor \omega \rfloor_{\times} = \begin{bmatrix} 0 & -\omega_z & \omega_y \\ \omega_z & 0 & -\omega_x \\ -\omega_y & \omega_x & 0 \end{bmatrix} \tag{11}$$
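For reference, a small numerical sketch of Equation (11) is shown below; it assumes quaternions stored with the vector part first and the scalar part last, which is the convention used by VINS-Mono but is not stated explicitly in the text.

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix [w]_x of a 3-vector w."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def Omega(w):
    """4x4 quaternion-rate matrix of Equation (11), so that q_dot = 0.5 * Omega(w) @ q
    for quaternions ordered [x, y, z, w]."""
    M = np.zeros((4, 4))
    M[:3, :3] = -skew(w)
    M[:3, 3] = w
    M[3, :3] = -w
    return M
```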
Equation (10) is multiplied by $R_w^{b_k}$ on both sides to obtain the IMU pre-integration terms $\alpha_{b_{k+1}}^{b_k}$, $\beta_{b_{k+1}}^{b_k}$, and $\gamma_{b_{k+1}}^{b_k}$ and the DVL pre-integration term $\eta_{d_{k+1}}^{b_k}$:

$$\begin{aligned} \alpha_{b_{k+1}}^{b_k} &= \iint_{t \in [t_k, t_{k+1}]} R_t^{b_k} ( \hat{a}_t - b_a )\, dt^2 \\ \beta_{b_{k+1}}^{b_k} &= \int_{t \in [t_k, t_{k+1}]} R_t^{b_k} ( \hat{a}_t - b_a )\, dt \\ \gamma_{b_{k+1}}^{b_k} &= \int_{t \in [t_k, t_{k+1}]} \frac{1}{2} \Omega( \hat{\omega}_t - b_\omega )\, \gamma_t^{b_k}\, dt \\ \eta_{d_{k+1}}^{b_k} &= \int_{t \in [t_k, t_{k+1}]} R_t^{b_k} R_d^{b} ( \hat{v}_t - b_d )\, dt \end{aligned} \tag{12}$$

where $R_w^{b_k} = \big( R_{b_k}^{w} \big)^{T}$, and $R_{b_k}^{w}$ represents the rotation of the IMU in the world coordinate system for frame $k$.
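The following is a minimal numerical sketch of how the pre-integration terms in Equation (12) could be accumulated from raw measurements between two keyframes. It uses simple Euler integration and assumes the IMU and DVL samples are already time-aligned; the actual implementation (e.g., mid-point integration and covariance propagation, as in VINS-Mono) is more involved.

```python
import numpy as np

def preintegrate(imu_meas, dvl_meas, R_d_b, b_a, b_w, b_d, dt):
    """Accumulate the IMU/DVL pre-integration terms of Equation (12).
    imu_meas: list of (accel, gyro) pairs; dvl_meas: list of body-frame DVL velocities,
    assumed time-aligned with the IMU samples. Returns (alpha, beta, R, eta), where the
    rotation matrix R stands in for the quaternion gamma."""
    alpha = np.zeros(3)      # position pre-integration
    beta = np.zeros(3)       # velocity pre-integration
    eta = np.zeros(3)        # DVL position pre-integration
    R = np.eye(3)            # rotation from the current instant to the b_k frame
    for (acc, gyro), vel in zip(imu_meas, dvl_meas):
        a = R @ (np.asarray(acc) - b_a)
        alpha += beta * dt + 0.5 * a * dt ** 2
        beta += a * dt
        eta += R @ R_d_b @ (np.asarray(vel) - b_d) * dt
        # rotation update via the exponential map of the unbiased gyro rate
        w = (np.asarray(gyro) - b_w) * dt
        angle = np.linalg.norm(w)
        if angle > 1e-12:
            k = w / angle
            K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
            R = R @ (np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K))
    return alpha, beta, R, eta
```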
The first-order Taylor series expansion of Equation (12) is given by
$$\begin{aligned} \alpha_{b_{k+1}}^{b_k} &\approx \hat{\alpha}_{b_{k+1}}^{b_k} + J_{b_a}^{\alpha} \delta b_a + J_{b_\omega}^{\alpha} \delta b_\omega \\ \beta_{b_{k+1}}^{b_k} &\approx \hat{\beta}_{b_{k+1}}^{b_k} + J_{b_a}^{\beta} \delta b_a + J_{b_\omega}^{\beta} \delta b_\omega \\ \gamma_{b_{k+1}}^{b_k} &\approx \hat{\gamma}_{b_{k+1}}^{b_k} \odot \begin{bmatrix} 1 \\ \frac{1}{2} J_{b_\omega}^{\gamma} \delta b_\omega \end{bmatrix} \\ \eta_{d_{k+1}}^{b_k} &\approx \hat{\eta}_{d_{k+1}}^{b_k} + J_{b_d}^{\eta} \delta b_d + J_{b_\omega}^{\eta} \delta b_\omega \end{aligned} \tag{13}$$

where $\hat{\alpha}_{b_{k+1}}^{b_k}$, $\hat{\beta}_{b_{k+1}}^{b_k}$, $\hat{\gamma}_{b_{k+1}}^{b_k}$, and $\hat{\eta}_{d_{k+1}}^{b_k}$ are the IMU/DVL pre-integrated measurements; $\delta b_a$, $\delta b_\omega$, and $\delta b_d$ are the zero-bias errors of the accelerometer, gyroscope, and DVL; and $J$ denotes the respective Jacobian matrix.
The estimation of the rotational extrinsic parameter $R_c^b$ and the gyroscope zero bias during initialization is consistent with that of VINS-Mono. Therefore, only the estimation of the accelerometer’s zero bias and the DVL’s zero bias is presented in this paper. Considering the scale $s$, the pose of the IMU/DVL in the world coordinate system is

$$\begin{aligned} R_{b}^{w} &= R_{c}^{w} \big( R_{c}^{b} \big)^{-1} \\ s\, p_{b}^{w} &= s\, p_{c}^{w} - R_{b}^{w} p_{c}^{b} \\ s\, p_{d}^{w} &= s\, p_{c}^{w} - R_{b}^{w} R_{d}^{b} p_{c}^{d} \end{aligned} \tag{14}$$

where $R_c^w$ and $p_c^w$ are the attitude and position of the camera in the world coordinate system; $p_c^b$ and $p_c^d$ are the translational extrinsic parameters of the IMU-camera and DVL-camera; and $R_d^b$ is the rotational extrinsic parameter of the IMU-DVL.
The state variables are selected as $\chi_0 = \big[ v_{b_0}^{b_0}, \dots, v_{b_k}^{b_k}, v_{b_{k+1}}^{b_{k+1}}, g^{w}, s \big]$; Equations (10) and (14) are combined to obtain Equation (15), which is solved to obtain the estimates of the velocities, gravity vector, and scale factor, $\hat{\chi}_0 = \big[ \hat{v}_{b_0}^{b_0}, \dots, \hat{v}_{b_k}^{b_k}, \hat{v}_{b_{k+1}}^{b_{k+1}}, \hat{g}^{w}, \hat{s} \big]$.

$$\begin{bmatrix} -I \Delta t & 0 & \frac{1}{2} R_w^{b_k} \Delta t^2 & R_w^{b_k} \big( p_{b_{k+1}}^{w} - p_{b_k}^{w} \big) \\ -I & R_w^{b_k} R_{b_{k+1}}^{w} & R_w^{b_k} \Delta t & 0 \end{bmatrix} \begin{bmatrix} v_{b_k}^{b_k} \\ v_{b_{k+1}}^{b_{k+1}} \\ g^{w} \\ s \end{bmatrix} = \begin{bmatrix} \hat{\alpha}_{b_{k+1}}^{b_k} + \hat{\eta}_{d_{k+1}}^{b_k} + \big( I - R_w^{b_k} R_{b_{k+1}}^{w} \big) \big( p_c^{b} + R_d^{b} p_c^{d} \big) \\ \hat{\beta}_{b_{k+1}}^{b_k} \end{bmatrix} \tag{15}$$
To reduce the effect of gravity on the estimated accelerometer zero bias and DVL zero bias, a gravity refinement step is required.
$$g^{w} = \| g \| \frac{\hat{g}^{w}}{\| \hat{g}^{w} \|} + \begin{bmatrix} b_1 & b_2 \end{bmatrix} \begin{bmatrix} m_1 \\ m_2 \end{bmatrix} = \| g \| \frac{\hat{g}^{w}}{\| \hat{g}^{w} \|} + \mathbf{b} \mathbf{m} \tag{16}$$

where $\| g \|$ is the known magnitude of gravity, $b_1$ and $b_2$ are the two orthogonal bases in the gravity tangent plane, and $m_1$ and $m_2$ are their respective lengths.
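A standard way to construct the two orthogonal basis vectors $b_1$ and $b_2$ of the gravity tangent plane is sketched below; this particular construction is a common choice and an assumption, since the paper does not specify one.

```python
import numpy as np

def tangent_basis(g_hat):
    """Two orthonormal vectors spanning the tangent plane of the gravity direction g_hat
    (used in Equation (16))."""
    g = g_hat / np.linalg.norm(g_hat)
    tmp = np.array([1.0, 0.0, 0.0])
    if abs(g @ tmp) > 0.9:            # avoid a nearly parallel helper vector
        tmp = np.array([0.0, 0.0, 1.0])
    b1 = np.cross(g, tmp)
    b1 /= np.linalg.norm(b1)
    b2 = np.cross(g, b1)
    return b1, b2
```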
The state variables are selected as $\chi_0 = \big[ v_{b_0}^{b_0}, \dots, v_{b_k}^{b_k}, v_{b_{k+1}}^{b_{k+1}}, \mathbf{m}, s, \delta b_a, \delta b_d \big]$. Equations (13), (15), and (16) are combined to obtain Equation (17), which is solved to obtain the estimates of the accelerometer zero bias and the DVL zero bias.

$$\begin{bmatrix} -I \Delta t & 0 & \frac{1}{2} R_w^{b_k} \mathbf{b} \Delta t^2 & R_w^{b_k} \big( p_{c_{k+1}}^{w} - p_{c_k}^{w} \big) & J_{b_a}^{\alpha} & J_{b_d}^{\eta} \\ -I & R_w^{b_k} R_{b_{k+1}}^{w} & R_w^{b_k} \mathbf{b} \Delta t & 0 & J_{b_a}^{\beta} & 0 \end{bmatrix} \begin{bmatrix} v_{b_k}^{b_k} \\ v_{b_{k+1}}^{b_{k+1}} \\ \mathbf{m} \\ s \\ \delta b_a \\ \delta b_d \end{bmatrix} = \begin{bmatrix} \hat{\alpha}_{b_{k+1}}^{b_k} + \hat{\eta}_{d_{k+1}}^{b_k} + \big( I - R_w^{b_k} R_{b_{k+1}}^{w} \big) \big( p_c^{b} + R_d^{b} p_c^{d} \big) - \frac{1}{2} R_w^{b_k} \frac{\hat{g}^{w}}{\| \hat{g}^{w} \|} \| g \| \Delta t^2 \\ \hat{\beta}_{b_{k+1}}^{b_k} - R_w^{b_k} \frac{\hat{g}^{w}}{\| \hat{g}^{w} \|} \| g \| \Delta t \end{bmatrix} \tag{17}$$
After obtaining the accelerometer’s zero bias and DVL’s zero bias, they are repropagated for all IMU/DVL preintegrations, thereby completing the initialization of the SLAM algorithm presented in this paper.
For nonlinear optimization, the cost function after adding DVL is
$$\min_{\chi} \left\{ \sum_{(l,j) \in \mathcal{C}} \Big\| r_{\mathcal{C}}\big( \hat{z}_{l}^{c_j}, \chi \big) \Big\|_{P_{l}^{c_j}}^{2} + \sum_{k \in \mathcal{B}_O} \Big\| r_{\mathcal{B}_O}\big( \hat{z}_{b_{k+1}}^{b_k}, \chi \big) \Big\|_{P_{b_{k+1}}^{b_k}}^{2} + \big\| r_p - J_p \chi \big\|^{2} \right\} \tag{18}$$

where $r_{\mathcal{C}}\big( \hat{z}_{l}^{c_j}, \chi \big)$, $r_{\mathcal{B}_O}\big( \hat{z}_{b_{k+1}}^{b_k}, \chi \big)$, and $\{ r_p, J_p \}$ are the visual residuals, the IMU/DVL pre-integration residuals, and the marginalization constraint, respectively. The visual residuals and the marginalization residuals are consistent with VINS-Mono, and the IMU/DVL pre-integration residual is

$$r_{\mathcal{B}_O}\big( \hat{z}_{b_{k+1}}^{b_k}, \chi \big) = \begin{bmatrix} \delta \alpha_{b_{k+1}}^{b_k} \\ \delta \beta_{b_{k+1}}^{b_k} \\ \delta \gamma_{b_{k+1}}^{b_k} \\ \delta \eta_{d_{k+1}}^{b_k} \\ \delta b_a \\ \delta b_\omega \\ \delta b_d \end{bmatrix} = \begin{bmatrix} R_w^{b_k} \big( p_{b_{k+1}}^{w} - p_{b_k}^{w} + \frac{1}{2} g^{w} \Delta t^2 - v_{b_k}^{w} \Delta t \big) - \hat{\alpha}_{b_{k+1}}^{b_k} \\ R_w^{b_k} \big( v_{b_{k+1}}^{w} + g^{w} \Delta t - v_{b_k}^{w} \big) - \hat{\beta}_{b_{k+1}}^{b_k} \\ 2 \Big[ \big( q_{b_k}^{w} \big)^{-1} \odot q_{b_{k+1}}^{w} \odot \big( \hat{\gamma}_{b_{k+1}}^{b_k} \big)^{-1} \Big]_{xyz} \\ R_w^{b_k} \big( p_{b_{k+1}}^{w} - p_{b_k}^{w} \big) + \big( R_w^{b_k} R_{b_{k+1}}^{w} - I \big) p_{d}^{b} - \hat{\eta}_{d_{k+1}}^{b_k} \\ b_{a_{k+1}} - b_{a_k} \\ b_{\omega_{k+1}} - b_{\omega_k} \\ b_{d_{k+1}} - b_{d_k} \end{bmatrix} \tag{19}$$

where $\delta \alpha_{b_{k+1}}^{b_k}$, $\delta \beta_{b_{k+1}}^{b_k}$, $\delta \gamma_{b_{k+1}}^{b_k}$, and $\delta \eta_{d_{k+1}}^{b_k}$ are the measurement residuals of the IMU/DVL pre-integration, and $\delta b_a$, $\delta b_\omega$, and $\delta b_d$ are the zero-bias residuals of the IMU/DVL.

2.2.2. Underwater Feature-Matching Algorithm Based on a Local Matcher

After the underwater images undergo image enhancement, although their clarity and contrast improve, the grayscale also changes to some extent, as shown in Figure 5, thus putting the optical flow algorithm [33] at risk of failure. To solve this problem, this paper proposes an underwater feature-matching algorithm based on a local matcher.
Assume that two consecutive image frames are the $c_k$ frame and the $c_{k+1}$ frame, that the corresponding IMU measurements are the $b_k$ frame and the $b_{k+1}$ frame, and that the DVL measurements are $v_k$ and $v_{k+1}$. The relative position $\Delta p$ and relative attitude $\Delta q$ between the two frames can be obtained according to the following equation:

$$\begin{aligned} \Delta p &= R_d^{c}\, \frac{ ( v_k - b_d ) + ( v_{k+1} - b_d ) }{2}\, \Delta t \\ \Delta q &= \int_{t \in [t_k, t_{k+1}]} \frac{1}{2} R_b^{c}\, \Omega( \hat{\omega}_t - b_\omega )\, q_t^{b_k}\, dt \end{aligned} \tag{20}$$

where $R_b^c$ and $R_d^c$ are the rotational extrinsic parameters of the camera-IMU and camera-DVL.
Assume that the normalized coordinates of a map point projected onto the images of the $c_k$ frame and the $c_{k+1}$ frame are $p_{c_k}$ and $p_{c_{k+1}}$, that the camera intrinsic matrix is $K$, and that the pixel coordinate corresponding to $p_{c_k}$ is $u_{c_k}$. The predicted value $\hat{u}_{c_{k+1}}$ of the pixel coordinate $u_{c_{k+1}}$ corresponding to $p_{c_{k+1}}$ can be obtained using the following equation:

$$\begin{aligned} p_{c_k} &= K^{-1} u_{c_k} \\ p_{c_{k+1}} &= \Delta R\, p_{c_k} + \Delta p \\ \hat{u}_{c_{k+1}} &= K\, p_{c_{k+1}} \end{aligned} \tag{21}$$

where $\Delta R$ is the rotation-matrix form of $\Delta q$.
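A minimal sketch of this prediction step is given below. The treatment of depth (the point is kept at unit depth before reprojection) is an assumption of the sketch, not something specified by Equation (21).

```python
import numpy as np

def predict_pixel(u_k, K, delta_R, delta_p):
    """Predict where a feature at pixel u_k in frame c_k should reappear in frame c_{k+1},
    following Equation (21); delta_R and delta_p come from the gyroscope/DVL prior of
    Equation (20)."""
    p_k = np.linalg.inv(K) @ np.array([u_k[0], u_k[1], 1.0])   # normalized coordinates
    p_k1 = delta_R @ p_k + delta_p                              # predicted point in c_{k+1}
    u_hat = K @ (p_k1 / p_k1[2])                                # reproject to pixel coordinates
    return u_hat[:2]
```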
To apply the predicted value $\hat{u}_{c_{k+1}}$ to feature matching, a local matcher with a search window was designed. First, a search window is created, which is a circular or square area centered on the predicted value $\hat{u}_{c_{k+1}}$, and its radius varies from small to large. Next, candidate features are searched within the small-radius area. If the number of candidates exceeds the preset threshold (20 features), then the 20 features with the maximum response value are retained. Otherwise, the radius of the search window is enlarged, and the above search steps are repeated until 20 candidate points are detected or the detection area reaches or exceeds the size of the entire image. Finally, the feature points in the search window are regarded as potential matching candidates, and the feature point with the closest descriptor distance is obtained by brute-force matching in a small range.
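The following sketch illustrates the local matcher described above. The candidate threshold of 20 features follows the text; the initial radius, the radius step, and the use of an L2 descriptor distance (a Hamming distance would be used for binary descriptors) are assumptions.

```python
import numpy as np

def local_match(u_pred, keypoints, descriptors, query_desc, image_size,
                r0=10.0, r_step=10.0, max_candidates=20):
    """Find the best match for a feature predicted at pixel u_pred in the next frame.
    keypoints: list of detected keypoints (e.g., cv2.KeyPoint) in the next frame;
    descriptors: array of their descriptors, aligned with keypoints;
    query_desc: descriptor of the feature being tracked."""
    r, r_max = r0, float(max(image_size))
    while True:
        # candidates inside the current search window
        idx = [i for i, kp in enumerate(keypoints)
               if np.hypot(kp.pt[0] - u_pred[0], kp.pt[1] - u_pred[1]) <= r]
        if len(idx) >= max_candidates or r >= r_max:
            break
        r += r_step                          # enlarge the window and search again
    if not idx:
        return None
    # keep the strongest responses, then brute-force match on descriptor distance
    idx = sorted(idx, key=lambda i: keypoints[i].response, reverse=True)[:max_candidates]
    dists = np.linalg.norm(descriptors[idx].astype(np.float32)
                           - query_desc.astype(np.float32), axis=1)
    return idx[int(np.argmin(dists))]
```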
Through the local matcher, even if there is a certain discrepancy in the enhancement effect between the two consecutive frames, it can still ensure that the feature points of the former image have a candidate match in the latter image. Then, after applying the RANSAC algorithm [34], a better matching result can be obtained. The flowchart of the underwater feature-matching algorithm is shown in Figure 6.

3. Results

In this study, we verified the localization accuracy and robustness of the proposed algorithm through simulation and physical experiments, using the proposed algorithm as the experimental group and VINS-Mono as the control group.

3.1. Simulation Experiments

In this study, the underwater AFRL dataset [35] was selected for simulation verification. Following the authors’ suggestion, COLMAP [36] was used to generate the reference trajectories of the dataset, which can be considered the ground truth (GT). To be more representative, the Bus subdataset, which has poorer image quality, and the Cave subdataset, which has better image quality, were selected from the AFRL dataset for the simulation experiments. Only the “left eye” information was used during the simulation. Additionally, new datasets including the DVL information were generated from the two subdatasets according to Figure 7 to validate the SLAM algorithm proposed in this paper.
The simulation experiment results are shown in Figure 8 and Figure 9, and the experimental errors, including the mean error (MEAN), standard deviation (STD), and root mean square error (RMSE), are listed in Table 1. From Figure 8 and Figure 9, it can be seen that the VINS-Mono algorithm ran normally only on the Cave subdataset, and its absolute pose error (APE) was relatively large. The APE statistics reported in the figures include max (maximum error), min (minimum error), std (same as STD), median, mean (same as MEAN), and mse (mean square error). The SLAM algorithm proposed in this paper ran normally on both underwater subdatasets. Although the APE was slightly larger on the Bus subdataset than on the Cave subdataset, it was still within a reasonable range.
From Table 1, it can be seen that, compared with the VINS-Mono algorithm, the SLAM algorithm proposed in this paper has better adaptability and robustness. On the Bus subdataset, with a total mileage of about 53 m, the VINS-Mono algorithm could not run properly, while the RMSE of the proposed SLAM algorithm was 0.29 m. On the Cave subdataset, with a total mileage of about 87 m, the RMSE of the VINS-Mono algorithm was 0.35 m, while the RMSE of the proposed SLAM algorithm was 0.16 m, which is 54.29% lower.

3.2. Physical Experiments

In this study, the algorithm proposed in this paper was validated through underwater physical experiments. A BlueROV was selected as the experimental platform, carrying an underwater camera, an IMU, a DVL, and an underwater global positioning system (GPS). The underwater experiments were divided into open-loop and closed-loop experiments, and the reference trajectories (i.e., GT) were provided by the underwater GPS.

3.2.1. Open-Loop Experiment

The total length of the underwater open-loop experimental trajectory was about 83.18 m. The experimental trajectory is shown in Figure 10. RVIZ is a 3D visualizer for the Robot Operating System (ROS) framework.
The experimental results are shown in Figure 11 and Figure 12, and the experimental errors are listed in Table 2. From Figure 11 and Figure 12, it can be observed that, in the feature-rich region, both the proposed algorithm and VINS-Mono could detect enough feature points to satisfy the operation of their respective algorithms. However, the proposed image enhancement algorithm significantly increased the number of feature points, which improved the localization accuracy to a certain extent. From Table 2, in terms of the mean absolute error (MAE) and STD, the proposed algorithm reduced these values by 1.20 m and 0.40 m, respectively, compared to those of the VINS-Mono algorithm.
Building on VINS-Mono, the proposed algorithm uses image enhancement to optimize the original image. Additionally, the DVL used for velocimetry has a certain inhibitory effect on the velocity dispersion of the IMU accelerometer, resulting in better performance.

3.2.2. Closed-Loop Experiment

The total mileage of the underwater closed-loop experimental trajectory was about 209.04 m. The experimental trajectory is shown in Figure 13.
The experimental results are shown in Figure 14 and Figure 15, and the experimental errors are listed in Table 3. From Figure 14 and Figure 15, it can be seen that, in feature-scarce regions, VINS-Mono could not run properly because it was unable to detect the required feature points. The image enhancement algorithm proposed in this paper effectively solved this problem by providing enough feature points for the algorithm, thereby guaranteeing its normal operation. From Table 3, in terms of MAE and STD, the proposed algorithm had errors of 0.84 m and 0.48 m, respectively, which are relatively small. (Note: The errors of the VINS-Mono algorithm shown in Figure 15 and Table 3 represent the errors before it failed to run.)
In the underwater closed-loop experiments, the proposed algorithm detected the loop closure and initiated relocalization and pose optimization, resulting in a small error near the starting point. Additionally, the proposed algorithm incorporates image enhancement and DVL constraints that the VINS-Mono algorithm lacks, which improved its robustness and positioning accuracy.

4. Discussion

To improve the accuracy and robustness of underwater visual SLAM, this paper proposes an underwater multisensor fusion SLAM system based on image enhancement. The system uses an underwater image enhancement algorithm based on a generative adversarial network to optimize the underwater images. An underwater feature-matching algorithm based on the local matcher is used to solve the feature-matching problem due to the grayscale change in the enhanced images. Additionally, the DVL is tightly coupled to the visual SLAM system, which ultimately reduces the accuracy error.
In the experiments, the main reason for the failure of the VINS-Mono algorithm was the low input image quality, which could not provide sufficient feature points. In contrast, the image enhancement part of the proposed algorithm effectively solves this problem. Additionally, the above optimization further improves the navigation accuracy of the proposed algorithm. In the open-loop experiment, the MAE and STD of the proposed algorithm were reduced by 68.18% and 44.44%, respectively, compared to those of VINS-Mono. In the closed-loop experiment, the MAE and STD values of the proposed algorithm were 0.84 m and 0.48 m, respectively.
The proposed algorithm is mainly suitable for areas with rich and easily distinguishable underwater features. However, in special underwater environments, AUVs sometimes move through areas with weak or repetitive textures, such as a flat, sandy seabed, resulting in insufficient or mismatched features. In this case, the proposed algorithm may fail. To solve this problem, in future research, we plan to add Kalman filtering to the algorithm as an alternative navigation scheme. We are investigating the relative navigation-accuracy confidence of the two approaches, the timing of switching between them, and the related problems faced by visual SLAM when rerunning on an existing map. Additionally, to further bolster the algorithm’s robustness, we will explore integrating a magnetometer and a depth gauge into the algorithm, which will help obtain more accurate heading angle and depth information. Because this work is still in progress, the code is not open-source.

5. Conclusions

In this paper, we proposed an underwater multisensor fusion SLAM system based on image enhancement, which uses a generative adversarial network with hybrid attention to enhance underwater images. To achieve feature matching between the enhanced images, we used the proposed local matcher, guided by the a priori information provided by the gyroscope and the DVL, to accomplish feature tracking. Meanwhile, the addition of the DVL information enhances the system’s robustness and accuracy.
Specifically, the main focus of this study was on underwater image enhancement and tight DVL coupling. Underwater image enhancement is the foundation of the entire algorithm, directly determining whether the algorithm can run normally. Its role is to process poor-quality underwater images into images that meet the system’s requirements, providing enough feature points for the algorithm. The role of tight DVL coupling is manifested in two aspects. First, it provides a priori information for the feature matching of the enhanced underwater images, increasing the speed and accuracy of the matching. Second, it provides real velocity information to optimize the initialization and nonlinear optimization of the algorithm, which in turn improves the accuracy of navigation and positioning.
To validate the proposed algorithm, we conducted AFRL dataset simulation experiments and physical experiments. The results of these experiments demonstrated that our proposed algorithm has good accuracy and underwater adaptability. Additionally, our system provides significant improvements in terms of MAE and STD over VINS-Mono.

Author Contributions

Conceptualization, Z.L. and F.Z.; methodology, Z.L.; software, Z.L.; validation, K.W. and F.Z.; formal analysis, Z.L., K.W., J.Z. and F.Z.; writing—original draft preparation, Z.L.; writing—review and editing, Z.L., K.W., J.Z. and F.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Publicly available dataset AFRL was analyzed in this study. These data can be found here: https://afrl.cse.sc.edu/afrl/resources/datasets/ (accessed on 12 April 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AUV        Autonomous Underwater Vehicle
SLAM       Simultaneous Localization and Mapping
DVL        Doppler Velocity Log
MAE        Mean Absolute Error
STD        Standard Deviation
MEMS IMU   Micro-Electro-Mechanical System Inertial Measurement Unit
EKF        Extended Kalman Filter
PF         Particle Filter
VIO        Visual-Inertial Odometry
GT         Ground Truth
MEAN       Mean Error
RMSE       Root Mean Square Error
APE        Absolute Pose Error
GPS        Global Positioning System
ROS        Robot Operating System

References

  1. Li, Y.; Takahashi, S.; Serikawa, S. Cognitive ocean of things: A comprehensive review and future trends. Wirel. Netw. 2022, 28, 917. [Google Scholar] [CrossRef]
  2. Lusty, P.A.; Murton, B.J. Deep-ocean mineral deposits: Metal resources and windows into earth processes. Elements 2018, 14, 301–306. [Google Scholar] [CrossRef]
  3. Constable, S.; Kowalczyk, P.; Bloomer, S. Measuring marine self-potential using an autonomous underwater vehicle. Geophys. J. Int. 2018, 215, 49–60. [Google Scholar] [CrossRef]
  4. Wu, Y. Coordinated path planning for an unmanned aerial-aquatic vehicle (UAAV) and an autonomous underwater vehicle (AUV) in an underwater target strike mission. Ocean. Eng. 2019, 182, 162–173. [Google Scholar] [CrossRef]
  5. Johnson-Roberson, M.; Bryson, M.; Friedman, A.; Pizarro, O.; Troni, G.; Ozog, P.; Henderson, J.C. High-resolution underwater robotic vision-based mapping and three-dimensional reconstruction for archaeology. J. Field Robot. 2017, 34, 625–643. [Google Scholar] [CrossRef]
  6. Mogstad, A.A.; Ødegård, Ø.; Nornes, S.M.; Ludvigsen, M.; Johnsen, G.; Sørensen, A.J.; Berge, J. Mapping the historical shipwreck figaro in the high arctic using underwater sensor-carrying robots. Remote Sens. 2020, 12, 997. [Google Scholar] [CrossRef]
  7. Ma, L.; Gulliver, T.A.; Zhao, A.; Zeng, C.; Wang, K. An underwater bistatic positioning system based on an acoustic vector sensor and experimental investigation. Appl. Acoust. 2021, 171, 107558. [Google Scholar] [CrossRef]
  8. Zhao, S.; Wang, Z.; Nie, Z.; He, K.; Ding, N. Investigation on total adjustment of the transducer and seafloor transponder for GNSS/Acoustic precise underwater point positioning. Ocean. Eng. 2021, 221, 108533. [Google Scholar] [CrossRef]
  9. Hsu, H.Y.; Toda, Y.; Yamashita, K.; Watanabe, K.; Sasano, M.; Okamoto, A.; Inaba, S.; Minami, M. Stereo-vision-based AUV navigation system for resetting the inertial navigation system error. Artif. Life Robot. 2022, 27, 165–178. [Google Scholar] [CrossRef]
  10. Mu, X.; He, B.; Wu, S.; Zhang, X.; Song, Y.; Yan, T. A practical INS/GPS/DVL/PS integrated navigation algorithm and its application on Autonomous Underwater Vehicle. Appl. Ocean. Res. 2021, 106, 102441. [Google Scholar] [CrossRef]
  11. Chen, H.; Huang, H.; Qin, Y.; Li, Y.; Liu, Y. Vision and laser fused SLAM in indoor environments with multi-robot system. Assem. Autom. 2019, 39, 297–307. [Google Scholar] [CrossRef]
  12. Zhao, J.; Liu, S.; Li, J. Research and implementation of autonomous navigation for mobile robots based on SLAM algorithm under ROS. Sensors 2022, 22, 4172. [Google Scholar] [CrossRef] [PubMed]
  13. Peng, H.; Zhao, Z.; Wang, L. A Review of Dynamic Object Filtering in SLAM Based on 3D LiDAR. Sensors 2024, 24, 645. [Google Scholar] [CrossRef] [PubMed]
  14. Huang, B.; Zhao, J.; Liu, J. A survey of simultaneous localization and mapping with an envision in 6g wireless networks. arXiv 2019, arXiv:1909.05214. [Google Scholar]
  15. Shangguan, M.; Weng, Z.; Lin, Z.; Lee, Z.; Shangguan, M.; Yang, Z.; Sun, J.; Wu, T.; Zhang, Y.; Wen, C. Day and night continuous high-resolution shallow-water depth detection with single-photon underwater lidar. Opt. Express 2023, 31, 43950–43962. [Google Scholar] [CrossRef] [PubMed]
  16. Shangguan, M.; Yang, Z.; Lin, Z.; Lee, Z.; Xia, H.; Weng, Z. Compact Long-Range Single-Photon Underwater Lidar with High Spatial–Temporal Resolution. IEEE Geosci. Remote. Sens. Lett. 2023, 20, 1501905. [Google Scholar] [CrossRef]
  17. Macario Barros, A.; Michel, M.; Moline, Y.; Corre, G.; Carrel, F. A comprehensive survey of visual slam algorithms. Robotics 2022, 11, 24. [Google Scholar] [CrossRef]
  18. Chen, W.; Shang, G.; Ji, A.; Zhou, C.; Wang, X.; Xu, C.; Li, Z.; Hu, K. An overview on visual slam: From tradition to semantic. Remote Sens. 2022, 14, 3010. [Google Scholar] [CrossRef]
  19. Tena Ruiz, I.J. Enhanced Concurrent Mapping and Localisation Using Forward-Looking Sonar. Ph.D. Thesis, Heriot-Watt University, Edinburgh, UK, 2001. [Google Scholar]
  20. Choi, J.; Lee, Y.; Kim, T.; Jung, J.; Choi, H.T. EKF SLAM using acoustic sources for autonomous underwater vehicle equipped with two hydrophones. In Proceedings of the OCEANS 2016 MTS/IEEE Monterey, Monterey, CA, USA, 19–23 September 2016; pp. 1–4. [Google Scholar]
  21. Wang, W.; Cheng, B. Augmented EKF based SLAM system with a side scan sonar. In Proceedings of the 2020 12th International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC), Hangzhou, China, 26–27 August 2020; IEEE: New York, NY, USA, 2020; Volume 1, pp. 71–74. [Google Scholar]
  22. Chen, L.; Yang, A.; Hu, H.; Naeem, W. RBPF-MSIS: Toward rao-blackwellized particle filter SLAM for autonomous underwater vehicle with slow mechanical scanning imaging sonar. IEEE Syst. J. 2019, 14, 3301–3312. [Google Scholar] [CrossRef]
  23. Zhang, Q.; Li, Y.; Ma, T.; Cong, Z.; Zhang, W. Bathymetric particle filter SLAM based on mean trajectory map representation. IEEE Access 2021, 9, 71725–71736. [Google Scholar] [CrossRef]
  24. Chen, F.; Zhang, B.; Zhao, Q. Multi-AUVs Cooperative SLAM Under Weak Communication. In Proceedings of the 2023 International Conference on Control, Robotics and Informatics (ICCRI), Shanghai, China, 26–28 May 2023; IEEE: New York, NY, USA, 2023; pp. 52–56. [Google Scholar]
  25. Joshi, B.; Bandara, C.; Poulakakis, I.; Tanner, H.G.; Rekleitis, I. Hybrid Visual Inertial Odometry for Robust Underwater Estimation. In Proceedings of the OCEANS 2023-MTS/IEEE US Gulf Coast, Biloxi, MS, USA, 25–28 September 2023; IEEE: New York, NY, USA, 2023; pp. 1–7. [Google Scholar]
  26. Yang, Y.F.; Qin, J.H.; Li, T. Study on the light scattering of suspended particles in seawater. J. Electron. Meas. Instrum. 2018, 32, 145–150. [Google Scholar]
  27. Zhai, C.C.; Han, X.Y.; Peng, Y.F. Research on light transmission characteristics of some inorganic salts in seawater. Laser Optoelectron. Prog. 2014, 52, 43–48. [Google Scholar]
  28. Qin, T.; Li, P.; Shen, S. VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator. IEEE Trans. Robot 2018, 34, 1004–1020. [Google Scholar] [CrossRef]
  29. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial networks. Commun. ACM 2020, 63, 139–144. [Google Scholar] [CrossRef]
  30. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; Proceedings, Part III. Springer International Publishing: Cham, Switzerland, 2015; pp. 234–241. [Google Scholar]
  31. Zhao, H.; Gallo, O.; Frosio, I.; Kautz, J. Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 2016, 3, 47–57. [Google Scholar] [CrossRef]
  32. Forster, C.; Carlone, L.; Dellaert, F.; Scaramuzza, D. On-manifold preintegration for real-time visual–inertial odometry. IEEE Trans. Robot. 2016, 33, 1–21. [Google Scholar] [CrossRef]
  33. Tamgade, S.N.; Bora, V.R. Notice of Violation of IEEE Publication Principles: Motion Vector Estimation of Video Image by Pyramidal Implementation of Lucas Kanade Optical Flow. In Proceedings of the 2009 Second International Conference on Emerging Trends in Engineering & Technology, Nagpur, India, 16–18 December 2009; IEEE: New York, NY, USA, 2009; pp. 914–917. [Google Scholar]
  34. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
  35. Rahman, S.; Quattrini Li, A.; Rekleitis, I. SVIn2: A multi-sensor fusion-based underwater SLAM system. Int. J. Robot. Res. 2022, 41, 1022–1042. [Google Scholar] [CrossRef]
  36. Schonberger, J.L.; Frahm, J.M. Structure-from-Motion Revisited. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; IEEE: New York, NY, USA, 2016; pp. 4104–4113. [Google Scholar]
Figure 1. The system framework for the proposed algorithm. The color-coded parts indicate the components that were improved in this study.
Figure 2. Network framework: (a) generator network framework; (b) hybrid attention module network framework.
Figure 3. Channel attention module network framework.
Figure 4. Spatial attention module network framework.
Figure 5. Grayscale comparison before and after underwater image enhancement.
Figure 6. Underwater feature-matching flowchart.
Figure 7. Underwater feature-matching flowchart.
Figure 8. Comparison plot of the two algorithms on the Bus subdataset: (a) trajectory comparison chart; (b) APE comparison chart.
Figure 9. Comparison plot of the two algorithms on the Cave subdataset: (a) trajectory comparison chart; (b) APE comparison chart.
Figure 10. Underwater open-loop experimental trajectory: (a) experimental satellite trajectory map (hand-painted); (b) RVIZ trajectory map.
Figure 11. Two algorithms’ feature-matching results: (a) VINS-Mono; (b) the algorithm proposed in this paper.
Figure 12. Results of underwater open-loop experiments with two algorithms: (a) trajectory comparison chart; (b) error comparison chart.
Figure 13. Underwater closed-loop experimental trajectory: (a) experimental satellite trajectory map (hand-painted); (b) RVIZ trajectory map.
Figure 14. Two algorithms’ feature-matching results: (a) VINS-Mono; (b) the algorithm in this paper.
Figure 15. Results of underwater closed-loop experiments with two algorithms: (a) trajectory comparison chart; (b) error chart of the proposed algorithm; (c) error chart for the nonfailed part of the VINS-Mono algorithm. (Due to the significant numerical difference between the errors of the proposed algorithm and the VINS-Mono algorithm, the errors are plotted separately to better illustrate the details.)
Table 1. Comparison of the errors of the two algorithms.

Subdataset        Algorithm     MEAN (m)   STD (m)   RMSE (m)
Bus subdataset    VINS-Mono *   -          -         -
                  Our           0.24       0.16      0.29
Cave subdataset   VINS-Mono     0.32       0.13      0.35
                  Our           0.14       0.06      0.16

* Indicates the algorithm could not run properly.

Table 2. Comparison of the errors of the two algorithms.

Algorithm     MAE (m)   STD (m)
VINS-Mono     1.76      0.90
Our           0.56      0.50

Table 3. Comparison of the errors of the two algorithms.

Algorithm     MAE (m)   STD (m)
VINS-Mono *   2.32      2.01
Our           0.84      0.48

* Represents the error of the nonfailed part of the VINS-Mono algorithm.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
