Binocular Vision-Based Non-Singular Fast Terminal Control for the UVMS Small Target Grasp
Abstract
1. Introduction
- (1) For underwater visual matching of blurred, low-contrast images, an improved unsharp masking algorithm is proposed to further enhance weak-texture regions. An improved ORB feature-matching method is then developed with an adaptive threshold, non-maximum suppression, and an improved random sample consensus (RANSAC) scheme: the adaptive threshold is used to extract FAST feature points; non-maximum suppression removes clustered feature points, which reduces the number of invalid feature points and saves matching time; and the improved RANSAC strengthens the elimination of mismatched points.
- (2) A novel adaptive non-singular fast terminal sliding-mode controller incorporating a barrier function is proposed to improve applicability to UVMS operations under unknown disturbances and uncertain parameters with unknown upper bounds. The barrier function is designed as a quasi-potential barrier to suppress chattering, and a feedforward strategy handles the coupling introduced by the manipulator (a simplified sketch of this idea is given after this list).
- (3) Oceanic experiments have been conducted to verify the performance of the proposed algorithm and controller.
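To make contribution (2) concrete, the following single-degree-of-freedom Python sketch combines a generic non-singular fast terminal sliding surface with a barrier-function reaching gain. It only illustrates the mechanism under assumed forms and values (the surface, the barrier gain |s|/(ε − |s|), and every gain here are assumptions); the paper's actual control law, feedforward compensation, and parameters are developed in Section 3.

```python
import numpy as np

def nftsm_barrier_reaching(e, e_dot, k1=5.0, k2=2.0, a=1.5, b=1.4, eps=0.11):
    """Single-DOF sketch: non-singular fast terminal sliding surface plus a
    barrier-function reaching gain. Forms and values are illustrative only."""
    # Generic non-singular fast terminal sliding surface.
    s = (e + k1 * np.abs(e) ** a * np.sign(e)
           + k2 * np.abs(e_dot) ** b * np.sign(e_dot))

    # Quasi-potential barrier gain: it grows as |s| approaches the band eps,
    # so no disturbance upper bound is needed, and it shrinks inside the band,
    # which limits chattering.
    s_clip = min(abs(s), 0.999 * eps)
    gain = s_clip / (eps - s_clip)

    # Reaching term only; equivalent control and the feedforward terms that
    # compensate vehicle-manipulator coupling are omitted from this sketch.
    return -gain * np.sign(s)
```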
2. Binocular Vision Matching Method for Underwater Small Object
2.1. Image Preprocessing
2.2. ORB Algorithm for Binocular Vision Matching
- (1) Scale space construction:
- (2) Candidate key points are extracted with FAST-9 and ranked with the Harris score:
- (3) The main orientation is estimated with the intensity centroid method; the moment centroid is:
- (4) rBRIEF algorithm (a baseline sketch of these four steps is given after this list):
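Steps (1)–(4) together form the standard ORB pipeline (FAST-9 detection, Harris ranking, intensity-centroid orientation, rBRIEF description), which OpenCV bundles into a single detector. A minimal baseline matching sketch for a stereo pair is shown below; the parameter values are illustrative, not those used in the paper.

```python
import cv2

def orb_stereo_match(img_left, img_right, n_features=2000):
    """Baseline ORB matching of a stereo pair (illustrative parameter values)."""
    # ORB builds the scale pyramid, detects FAST-9 corners, keeps the best by
    # Harris score, assigns an orientation with the intensity-centroid method,
    # and computes rotation-aware rBRIEF descriptors.
    orb = cv2.ORB_create(nfeatures=n_features,
                         scoreType=cv2.ORB_HARRIS_SCORE,
                         fastThreshold=20)
    kp_l, des_l = orb.detectAndCompute(img_left, None)
    kp_r, des_r = orb.detectAndCompute(img_right, None)

    # Hamming distance suits binary descriptors; cross-checking removes
    # matches that are not mutual nearest neighbours.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_l, des_r), key=lambda m: m.distance)
    return kp_l, kp_r, matches
```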
2.3. Improved ORB Matching Algorithm for Small Objects
- (1) Improvement of FAST-9 key points with an adaptive threshold:
- (2) Non-maximum suppression is applied to remove repeated points:
- (3) rBRIEF descriptor improvement:
- (4) Improved RANSAC to eliminate mismatched points (a sketch of how these improvements fit together is given after this list):
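The following Python sketch shows one way improvements (1), (2), and (4) could be wired around OpenCV: the FAST threshold is scaled with image contrast, clustered detections are thinned by non-maximum suppression on the corner response, and mismatches are rejected with a RANSAC fundamental-matrix fit. The adaptation rule, the suppression radius, and the RANSAC settings are illustrative assumptions and do not reproduce the paper's exact formulas.

```python
import cv2
import numpy as np

def adaptive_fast_threshold(img, base=20, k=0.5, floor=5):
    """Scale the FAST threshold with image contrast so weak-texture regions
    still yield corners (illustrative rule). Pass the result to
    cv2.ORB_create(fastThreshold=...)."""
    return max(floor, int(base + k * (np.std(img) - 30.0)))

def nms_keypoints(kps, radius=5.0):
    """Greedy non-maximum suppression: keep only the strongest keypoint in
    each neighbourhood to remove clustered (repeated) detections."""
    kps = sorted(kps, key=lambda kp: kp.response, reverse=True)
    kept = []
    for kp in kps:
        if all((kp.pt[0] - q.pt[0]) ** 2 + (kp.pt[1] - q.pt[1]) ** 2 > radius ** 2
               for q in kept):
            kept.append(kp)
    return kept

def ransac_filter(kp_l, kp_r, matches, thresh=1.0):
    """Reject mismatched pairs with a RANSAC fundamental-matrix fit."""
    pts_l = np.float32([kp_l[m.queryIdx].pt for m in matches])
    pts_r = np.float32([kp_r[m.trainIdx].pt for m in matches])
    _, mask = cv2.findFundamentalMat(pts_l, pts_r, cv2.FM_RANSAC, thresh, 0.99)
    if mask is None:                      # fit failed, keep the raw matches
        return matches
    return [m for m, ok in zip(matches, mask.ravel()) if ok]
```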
Algorithm 1. Binocular Vision Matching Method for Underwater Small Object
① Preprocess the images with the improved unsharp masking algorithm. ② Construct the scale space. ③ Extract candidate key points with the improved FAST-9 detector and rank them with the Harris score. ④ Estimate the main orientation with the intensity centroid method and apply non-maximum suppression to remove repeated points. ⑤ Generate rotation-invariant binary feature descriptors with the improved rBRIEF descriptor. ⑥ Eliminate mismatched points with the improved RANSAC. If the inlier ratio is clearly lower than that of the current optimal model, return to ① and repeat sampling and matching; otherwise, generate the disparity map.
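Step ① uses the improved unsharp masking algorithm of Section 2.1. For orientation only, the sketch below implements conventional unsharp masking with a simple variance-based gain that boosts weak-texture regions more strongly; the gain rule and all thresholds are assumptions, not the paper's improved algorithm.

```python
import cv2
import numpy as np

def unsharp_mask(gray, sigma=3.0, base_gain=1.0, weak_gain=2.0, var_thresh=50.0):
    """Unsharp masking of a grayscale image with a larger gain in low-variance
    (weak-texture) regions; the gain rule and thresholds are assumptions."""
    img = gray.astype(np.float32)
    blur = cv2.GaussianBlur(img, (0, 0), sigma)      # low-pass estimate
    detail = img - blur                              # high-frequency detail

    # Local variance as a weak-texture indicator.
    mean = cv2.blur(img, (9, 9))
    var = cv2.blur(img * img, (9, 9)) - mean * mean
    gain = np.where(var < var_thresh, weak_gain, base_gain)

    return np.clip(img + gain * detail, 0, 255).astype(np.uint8)
```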
3. Adaptive Non-Singular Fast Terminal Controller Design
3.1. Non-Singular Fast Terminal Sliding-Mode Controller
3.2. Adaptive Non-Singular Fast Terminal Sliding-Mode Controller
3.3. The Overall Controller with Barrier Function
4. Simulations and Experiments
4.1. Image Processing and Binocular Visual Matching Experiments
4.2. Simulations on Proposed Non-Singular Fast Terminal Sliding-Mode Controller
4.3. Oceanic Experiments on Small Object Positioning and Grasp
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
| Descriptor | Left Image Feature Points | Right Image Feature Points | Feature Point Extraction Time (ms) | Feature Point Description Time (ms) | Coarse Matching Point Pairs | Feature Point Matching Ratio |
|---|---|---|---|---|---|---|
| SIFT | 125 | 174 | 5970 | 233 | 30 | 24% |
| OFAST | 1865 | 1909 | 127 | 387 | 158 | 8.47% |
| Proposed algorithm | 1591 | 1627 | 144 | 184 | 484 | 30.42% |
| Object | Descriptor | True Value (mm) | Direct Calibration Calculation Value (mm) | Relative Error | Calibration Calculation Value after Conversion (mm) | Relative Error |
|---|---|---|---|---|---|---|
| Sea cucumber | Proposed algorithm | (90.50, −117.42, 1175.40) | (100.51, −107.53, 1303.09) | (11.06%, 8.42%, 10.86%) | (93.84, −119.79, 1186.23) | (3.69%, −2.02%, 0.921%) |
| | SIFT | | | | (94.73, −114.414, 1189.12) | (4.68%, −2.56%, 1.17%) |
| | OFAST | | | | (102.49, −108.90, 1214.28) | (13.25%, −7.25%, 3.31%) |
| Pectinid | Proposed algorithm | (94.62, 16.44, 1175.40) | (83.73, 13.96, 1259.79) | (−11.21%, −15.08%, 7.18%) | (92.57, 15.86, 1187.78) | (−2.21%, −3.52%, 1.05%) |
| | SIFT | | | | (91.97, 15.71, 1191.04) | (−2.80%, −4.46%, 1.33%) |
| | OFAST | | | | (87.11, 14.36, 1219.72) | (−7.94%, −12.64%, 3.77%) |
| Sea urchin | Proposed algorithm | (61.52, 124.56, 1175.40) | (70.89, 142.26, 1244.32) | (15.23%, 13.41%, −5.86%) | (62.45, 127.78, 1187.02) | (1.51%, 2.58%, 1.02%) |
| | SIFT | | | | (62.70, 128.63, 1190.59) | (1.91%, 3.27%, 1.29%) |
| | OFAST | | | | (64.86, 136.10, 1218.16) | (5.42%, 9.27%, 3.66%) |
| Group | Algorithm | Coarse Matching Points | Accurate Matching Points | Mismatch Elimination Time (ms) | Correct Matching Ratio |
|---|---|---|---|---|---|
| First group | RANSAC | 554 | 466 | 5.4 | 84.11% |
| | Proposed algorithm | 554 | 489 | 2.7 | 88.27% |
| Second group | RANSAC | 484 | 339 | 9.2 | 70.04% |
| | Proposed algorithm | 484 | 368 | 3.5 | 76.03% |
| Joint | a_i (m) | α_i (rad) | d_i (m) | q_i (rad) |
|---|---|---|---|---|
| 1 | 0 | π/2 | 0 | 0 |
| 2 | 0.6 | 0 | 0 | 0.5 |
| 3 | 0.3 | 0 | 0 | 0.5 |
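The D-H rows above define the manipulator kinematics. As a quick cross-check, the sketch below builds the corresponding homogeneous transforms; the classic D-H convention and the use of the listed q_i as fixed joint angles are assumptions for illustration, and may differ from the paper's exact frame assignment.

```python
import numpy as np

def dh_transform(a, alpha, d, q):
    """Homogeneous transform of one joint under the classic D-H convention."""
    ca, sa, cq, sq = np.cos(alpha), np.sin(alpha), np.cos(q), np.sin(q)
    return np.array([[cq, -sq * ca,  sq * sa, a * cq],
                     [sq,  cq * ca, -cq * sa, a * sq],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

# (a_i, alpha_i, d_i, q_i) taken row by row from the table above.
dh_rows = [(0.0, np.pi / 2, 0.0, 0.0),
           (0.6, 0.0,       0.0, 0.5),
           (0.3, 0.0,       0.0, 0.5)]

T = np.eye(4)
for a, alpha, d, q in dh_rows:
    T = T @ dh_transform(a, alpha, d, q)   # base-to-end-effector transform
```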
| Vehicle | Link 1 | Link 2 | Link 3 |
|---|---|---|---|
| 82.13 | 2.60 | 3.52 | 3.16 |
| 4.95 | 0.03 | 0 | 0 |
| 7.36 | 0.03 | 0.16 | 0.16 |
| 8.66 | 0 | 0.16 | 0.16 |
Initial Robot Position (m) | Target Position (m) | Desired Vehicle Position (m) | Desired Vehicle Attitude (rad) | Desired Manipulator Attitude (rad) |
---|---|---|---|---|
(0, 0, 0) | (1.5, 0.5, 2) | (0.9518, 0.5, 1.5845) | (0, 0.1231, 0) | (0, −0.2159, 0.0929) |