A Method for Recognition and Coordinate Reference of Autonomous Underwater Vehicles to Inspected Objects of Industrial Subsea Structures Using Stereo Images
Abstract
1. Introduction
2. Problem Statement and General Approach
3. Proposed Methods
3.1. Forming the Model Object
1. The operator selects one or more of the most informative views of the object from the sequence of photographs of the overview trajectory (with the respective stereo pairs of frames), which together can potentially provide recognition of the object from any position of the working (inspection) trajectory when the AUV is positioned above the object.
2. The operator fixes a rectangular area of the object's location in the seabed plane (using the VNM, which calculates point coordinates in the external CS with some accuracy). This allows a rough object search at the recognition stage (while the AUV moves along the working trajectory) before the image-based object recognition algorithm is activated.
3. The operator creates a 3D model of the object that combines the 3D models of the several views of the overview trajectory that are used. The spatial geometric elements making up the model are formed by processing the original stereo pairs of frames of these views; the processing applies algorithmic procedures with the explicit involvement of the operator in CE formation. The element types used, as noted above, are points, rectilinear segments, and macro-elements based on segment lines (such as the "corner" type). For each selected kth view of the AUV's overview trajectory:
   3.1. The point features in the left and right frames of the stereo pair are matched using the SURF detector. The points belonging to the object, specified by the operator, are filtered and selected; manual generation of additional points is possible, as is the inclusion of terrain points near the object, since the scene is static. By matching the sets of points M_2D_POINT_VkVk_LR and M_2D_POINT_VkVk_RL, a set of 3D points M_3D_POINT_VkVk_CSview_k, visible through the camera for this view, is constructed using the ray triangulation method. The points are coordinated in the CSview_k of this view of the overview trajectory.
   3.2. The operator generates a set of 3D edge lines visible through the camera in CSview_k of this view using the original frames of the stereo pair. Each spatial segment line is described by its two 3D endpoints, with pointers to the 2D images of these points in the 2D sets indicated above. The 3D reconstruction of the matched 2D images of segment lines in the model stereo pair of frames is solved in the traditional way: the endpoints are matched in the stereo pair of frames by calculating correlation estimates while scanning epipolar lines, and the 3D coordinates of the segment endpoints in CSview_k are then calculated by the ray triangulation method. The endpoints can also be matched by the operator explicitly indicating the matched images. The formed segments are coordinated in CSview_k and stored in the set M_3D_LINE_VkVk_CSview_k.
   3.3. "Corner" type CEs are formed on the basis of the set of spatial segments obtained for this view. The formed "corner" CEs are coordinated in CSview_k and stored in the set M_3D_CORNER_VkVk_CSview_k.
4. To obtain a model description independent of the AUV's CS, the operator explicitly determines the object's intrinsic CS. This CS is determined on the basis of a segment line indicated by the operator (referred to below as the base segment line) in such a way that the Z'-axis is oriented in the direction of the Z-axis of the external CS (see Figure 2); this can be achieved using data received from the AUV's standard navigation system (a minimal sketch of this frame construction is given after this list). For each used kth view of the object, a coordinate transformation matrix is calculated that links the object's CS with the CS of the camera of this view of the overview trajectory (performed in the standard way, by setting the unit vectors of one CS in the other CS). If the camera does not see the selected base segment line in one of the views, the VNM method is applied, which provides a matrix for coordinate transformation from the CS of one trajectory position into the CS of the other position (i.e., the coordinates of the base segment line are calculated in the CS of the considered view implicitly, without matching in the stereo pair of frames). Afterward, the object's intrinsic CS is built on the base segment line, the same as for the other views, and the matrix linking the CS of this view with the object's CS is calculated. Thus, a single CS of the object is built using the same spatial base segment line in all the views used. For each kth view, its own matrix of transformation into this object's CS, M(CSview_k → CSobject_n), is calculated (where n is the object's sequential number).
5. The coordinates of all three CE types of the used kth view are transformed by the calculated matrix into the object's intrinsic CS. The obtained coordinate representations are stored, respectively, in the sets M_3D_POINT_Vk_CSobject_n, M_3D_LINE_Vk_CSobject_n, and M_3D_CORNER_Vk_CSobject_n, specified in CSobject_n. Simultaneously, these coordinate representations are recorded into the respective accumulated sets representing the object's complete 3D model in CSobject_n, with all processed views taken into account: M_3D_POINT_CSobject_n, M_3D_LINE_CSobject_n, and M_3D_CORNER_CSobject_n. Note that if a CE is present in several views, it is represented in the complete model by averaged coordinates.
6. The operator explicitly determines the CS of the SPS; the CS of one of the objects is used as such. The intrinsic CSs of all objects are coordinated in the CS of the SPS. Thus, the object model is formed in two representations:
   6.1. As a set of 3D models of several of the object's views, where the model of a view is a combination of three CE sets (points, lines, and corners) specified in the CS of this view (CSview_k). The matrix for coordinate transformation from this view's CS into the object's intrinsic CS (CSobject_n) is calculated for each view.
   6.2. As a combination of sets of the three CE types (points, lines, and corners) specified in the object's intrinsic CS (CSobject_n), which represents the spatial structure of the object. Here, the set of each type is formed by summing the CEs from the several views used. Note that each edge line is linked to the face plane it belongs to; the plane's normal and its position relative to the segment line (the face on the right or left) are indicated. This information is necessary at the object recognition stage for the correct calculation of the correlation estimate when matching the images of a segment line (two of its points) in the frames of the model's stereo pair and the stereo pair of the working (inspection) trajectory: the rectangular area adjacent to the edge, used to calculate the correlation coefficient, is specified only on this plane.
3.2. Recognition of SPS Objects
3.2.1. Object Recognition by Point Features (Stage 1)
At the model formation stage (see Section 3.1), the following were obtained:
- (a) The object's intrinsic CS was built, using the direction of the Z-axis of the external CS as the Z'-axis direction.
- (b) A set of 3D points M_3D_POINT_VkVk_CSview_k was obtained in the coordinate system CSview_k from the matched 2D sets (see Section 3.1), and the matrix for coordinate transformation from CSview_k to CSobject_n was calculated. Here, object_n is the identifier of the nth object.

The recognition algorithm at a position of the working (inspection) trajectory then proceeds as follows:
1. Selection of the kth view in the model object for which the object's field of visibility best fits the potential field of visibility of the AUV's camera at the considered position of the inspection trajectory.
2. Generation and matching, using the SURF detector, of point features in the left frames of the model and working stereo pairs. The result of successful matching is the set M_2D_POINT_WVk_LL in the image im_L_object_view_k and the set S_2D_POINT_VkW_LL in the image im_L_object_work.
3. If the matching in the previous step is successful, the point features in the frames of the working stereo pair are matched using the SURF detector. The result is the set S_2D_POINT_WW_LR in the image im_L_object_work and the set S_2D_POINT_WW_RL in the image im_R_object_work.
4. The set of 3D points S_3D_POINT_WW_CSwork is built in the coordinate system CSwork from the 2D sets matched in the previous step.
5. Selection, from the set of points matched in the frames of the "model" stereo pair (the model of this view), of the subset of points also matched in the left frames of the "model" and "working" stereo pairs: subM_2D_POINT_Vk = M_2D_POINT_VkVk_LR ∩ M_2D_POINT_WVk_LL (if there are fewer than three points, we assume that the camera "does not see" the object). This operation is needed to subsequently form a 3D representation of the subset of points from the model of this object's view for which matching with the recognized points of the object was obtained in the CS of the working trajectory position.
6. Selection, from the set of points matched in the frames of the working stereo pair, of the subset of points also matched with the respective subset of the model of this object's view (subM_2D_POINT_Vk): subS_2D_POINT_W = S_2D_POINT_WW_LR ∩ S_2D_POINT_VkW_LL.
7. Formation of the subset M_3D_POINT_Vk_CSview_k (belonging to the set M_3D_POINT_VkVk_CSview_k) corresponding to the subset subM_2D_POINT_Vk.
8. Building of the subset S_3D_POINT_VkW_CSwork, belonging to the set S_3D_POINT_WW_CSwork, using the 2D subset subS_2D_POINT_W obtained in step 6.
9. Since the subsets M_3D_POINT_Vk_CSview_k and S_3D_POINT_VkW_CSwork were matched by the above operations, the matrix for coordinate transformation between the corresponding CSs, M(CSwork → CSview_k), is calculated from this set of points. A standard method is used: minimizing the sum of discrepancies by the least squares method (a minimal sketch of this estimation is given after this list).
10. The matrix for coordinate transformation from CSwork (the AUV's CS at the working-trajectory position) into CSobject_n is calculated, which provides the direct calculation of the AUV's movement in the coordinate space of the object: M(CSwork → CSobject_n) = M(CSview_k → CSobject_n) · M(CSwork → CSview_k).
11. To calculate the AUV's movement in the coordinate space of the SPS, the matrix for coordinate transformation from CSobject_n to CSSPS (known from model formation; see step 6 of Section 3.1) is used together with the matrix M(CSwork → CSobject_n): M(CSwork → CSSPS) = M(CSobject_n → CSSPS) · M(CSwork → CSobject_n).
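The least-squares estimation in step 9 is the standard rigid registration between matched 3D point sets. Below is a minimal sketch using the Kabsch/SVD solution, assuming a pure rotation-plus-translation model; the set names mirror those above, and the example data are hypothetical.

```python
import numpy as np

def rigid_transform_lsq(points_model, points_work):
    """Least-squares rigid transform between matched 3D point sets (Kabsch/SVD).

    points_model: Nx3 array, subset M_3D_POINT_Vk_CSview_k (coordinates in CSview_k).
    points_work:  Nx3 array, subset S_3D_POINT_VkW_CSwork (same point order, in CSwork).
    Returns the 4x4 matrix M(CSwork -> CSview_k) minimizing the sum of squared
    discrepancies; at least three non-collinear point pairs are required.
    """
    cm = points_model.mean(axis=0)
    cw = points_work.mean(axis=0)
    H = (points_work - cw).T @ (points_model - cm)   # 3x3 cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = cm - R @ cw
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = t
    return M

# Hypothetical matched subsets: four model points and the same points as observed
# from a working position shifted by (0.4, -0.2, 0.1).
pts_model = np.array([[0.0, 0.0, 3.0], [1.0, 0.0, 3.0],
                      [0.0, 1.0, 3.2], [1.0, 1.0, 2.9]])
pts_work = pts_model - np.array([0.4, -0.2, 0.1])
M_work_to_view = rigid_transform_lsq(pts_model, pts_work)  # recovers the shift
```

Step 10 then composes M(CSwork → CSobject_n) = M(CSview_k → CSobject_n) · M(CSwork → CSview_k), where the first factor is known from model formation.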
3.2.2. Recognition and Calculation of 3D Coordinates of “Segment Line” and “Corner” Type CEs (Stage 2)
For each segment line of the model, the following steps are performed:
1. The pointers to the 2D images of its two endpoints in the stereo pair of frames of the model are extracted.
2. The segment line is placed in the coordinate space CSwork using the matrix M(CSobject_n → CSwork) calculated at Stage 1 and is projected onto the stereo pair of frames of the working trajectory ("the segment line is projected" means projecting the segment's endpoints).
3. The coefficient of correlation between the compared images of the point in the model's stereo pair and the working stereo pair of frames is calculated for each endpoint (Figure 6); a sketch of this correlation matching follows the list. When comparing the endpoints of the segment lines, the compass orientation of the vehicle on both the model and the working trajectory is used to give the compared surrounding terrain the same orientation.
4. In the case of incomplete matching of the images, an attempt is made to find a match within a small radius, repeating the correlation calculation of step 3, under the assumption that the projection in step 2 was performed with some error.
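A minimal sketch of the endpoint correlation check (steps 2-4) follows, under simplifying assumptions: a pinhole camera with intrinsic matrix K, and plain rectangular patches around each endpoint (the method restricts the patch to the linked face plane, which is omitted here). All input names are illustrative, not the authors' API.

```python
import numpy as np

def project_point(K, M_obj_to_work, X_obj):
    """Project a 3D endpoint given in CSobject_n onto a working frame.
    K is the 3x3 camera intrinsic matrix; M_obj_to_work is the 4x4 matrix
    calculated at Stage 1 (both are assumed known here)."""
    Xc = (M_obj_to_work @ np.append(X_obj, 1.0))[:3]
    u = K @ Xc
    return u[:2] / u[2]                         # pixel coordinates (u, v)

def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches (step 3)."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_endpoint(model_patch, work_img, uv, radius=3, half=7):
    """Steps 3-4: compare the model patch with the patch at the projected
    point uv and, if needed, search a small neighborhood for the best
    correlation, allowing for projection error."""
    u0, v0 = int(round(uv[0])), int(round(uv[1]))
    best_c, best_px = -1.0, (u0, v0)
    for dv in range(-radius, radius + 1):
        for du in range(-radius, radius + 1):
            u, v = u0 + du, v0 + dv
            if v - half < 0 or u - half < 0:    # skip windows falling off the image
                continue
            patch = work_img[v - half:v + half + 1, u - half:u + half + 1]
            if patch.shape == model_patch.shape:
                c = ncc(model_patch.astype(np.float64), patch.astype(np.float64))
                if c > best_c:
                    best_c, best_px = c, (u, v)
    return best_c, best_px
```

A high returned correlation at or near the projected pixel confirms the endpoint match; a low value means the camera likely does not see this CE from the working position.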
4. Experiments
5. Discussion
- A model of an underwater object is formed on the basis of views of a previously completed survey trajectory using an automated technique with operator participation. The technique is not labor-intensive, since it excludes preliminary direct in situ measurements and the use of documentation.
- The use of several views of the survey trajectory contributes to a higher degree of object recognition at the inspection stage and, as experiments on synthetic and real data have shown, increases the accuracy of the AUV's coordinate reference to the object several-fold.
- The use of several types of geometric elements when comparing data obtained at the position of the inspection trajectory with the object model increases the reliability of object recognition despite image noise, and also increases the accuracy of the coordinate reference owing to the expanded set of points used when calculating the coordinate transformation matrix.
- Original, computationally low-cost algorithms have been developed for recognizing and matching the characteristic geometric elements used (point features, segment lines, and corners) against the model.
- Reported localization errors for a stereo + IMU approach: 39 cm with loop closure versus 66 cm without loop closure.
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
References
| View | S_3D_POINT_CSwork (Number of Points Matched to Model/Number of Points in Model) | S_3D_LINE_CSwork (Number of Lines Matched to Model/Number of Lines in Model) | S_3D_CORNER_CSwork (Number of Corners Matched to Model/Number of Corners in Model) |
|---|---|---|---|
| View 1 in CS1 | np_1/mp | nl_1/mL | nc_1/mc |
| … | … | … | … |
| View k in CSk | np_k/mp | nl_k/mL | nc_k/mc |
| … | … | … | … |
| View last in CSlast | np_last/mp | nl_last/mL | nc_last/mc |
| Scene | Image Size | Number of Matched Points/Number of Points in 3D Cloud | Our Algorithm, Time (s) | Our Algorithm, Error | ICP Algorithm, Time (s) | ICP Algorithm, Error |
|---|---|---|---|---|---|---|
| Model scene | 1200 × 900 | 26/252 | 0.1 | 32 mm | 0.14 | 41 mm |
| Real scene | 1600 × 1200 | 246/584 | 0.15 | 28 px | 0.21 | 52 px |