Article

Binocular Vision-Based Non-Singular Fast Terminal Control for the UVMS Small Target Grasp

1 National Key Laboratory of Science and Technology on Underwater Vehicle, Harbin Engineering University, Harbin 150001, China
2 College of Mechanical and Electrical Engineering, Heilongjiang Institute of Technology, Harbin 150050, China
* Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2023, 11(10), 1905; https://doi.org/10.3390/jmse11101905
Submission received: 8 August 2023 / Revised: 20 September 2023 / Accepted: 26 September 2023 / Published: 30 September 2023
(This article belongs to the Special Issue Subsea Robotics)

Abstract

Autonomous underwater manipulation is very important for robotic and intelligent operations in ocean engineering. However, a small target often provides limited features and leads to inaccurate visual matching. In order to improve visual measurement accuracy, this paper proposes an improved unsharp masking algorithm to further enhance the weak texture regions of blurred, low-contrast images. Moreover, an improved ORB feature-matching method with an adaptive threshold, non-maximum suppression and an improved random sample consensus is also proposed. To overcome unknown underwater disturbances and uncertain system parameters in underwater robotic manipulation, an adaptive non-singular terminal sliding mode controller with a quasi-barrier function is proposed to suppress the chattering problem and improve grasp accuracy for small targets. Oceanic experiments have been conducted to prove the performance of the proposed method.

1. Introduction

In recent years, the implementation of underwater manipulation missions through underwater robots has attracted increasing attention worldwide [1,2,3]. Their missions and applications include environmental detection, ocean engineering, structural inspections, seabed surveys, scientific exploration, mine extraction, wreck recovery, etc. [4]. However, most underwater manipulation tasks are still carried out by work-class remotely operated vehicles (ROVs) and divers [5,6]. Moreover, complicated and accurate manipulation operations depend not only on the pilots’ skills and manipulation control [7], but also on the performance of the sensor equipment and its processing algorithms [8]. Therefore, underwater autonomous or semi-autonomous operation has become one of the most important solutions for the accurate manipulation of an underwater vehicle manipulator system (UVMS) [9,10]. In order to realize autonomous manipulation, the UVMS should not only perceive the target through visual recognition, tracking and distance measurement, but also carry out motion planning and control for the multiple-DOF (degrees of freedom) UVMS [11,12]. Among these challenges, binocular vision distance measurement and manipulation control are very important for the integration and coordination between sensing and motion [13,14,15].
Regarding binocular vision distance measurement for manipulation guidance [16,17], the underwater scenario is influenced by scattering and absorption with a loss of color detail [18]. The distance measurement process includes binocular vision calibration [19], rectification [20] and matching [21]. In order to realize vision-guided underwater missions, Guanying Huo et al. [22] presented a binocular vision-based underwater target detection and 3D reconstruction system. Guided by target detection, the article improved the semi-global binocular matching method by matching the valid target area and optimizing the basic disparity map. Although many articles have focused on binocular vision [23,24], fast and precise visual matching for a small object is usually hampered by multiple refractions and ineffective imaging in the underwater environment. Stereo matching is one of the major steps in determining depth [21], but conventional image matching strategies are less effective because of underwater refractive distortions [25]. Yaqian Li et al. [26] proposed an adaptive window based on mean-shift segmentation to alleviate distortion. In order to realize vision-guided manipulation for small objects, fast and precise visual matching is very important for distance measurement. Precise distance measurement requires picking up enough feature points for correct matching in real time. With standard oriented FAST (features from accelerated segment test) [27] detecting the key points, the ORB (oriented FAST and rotated BRIEF (binary robust independent elementary features)) [28] method is a very fast and efficient binary descriptor. However, small objects in the underwater environment often include limited features and cause unstable matching. A. Osipov et al. [29] achieved high-precision detection and classification through improved YOLOv4-tiny and CNN algorithms.
In order to realize motion control for underwater robots [30], Satja Sivčev et al. [17] presented a vision-based kinematic control method for a work-class ROV. Bent Oddvar Arnesen Haugaløkken et al. [31] developed sliding mode kinematic control for UVMS grasping on the basis of monocular vision. In order to alleviate control chattering and ensure finite-time convergence, Mingxue Cai et al. [32] proposed an improved nonsingular terminal sliding mode control for an underwater biomimetic vehicle manipulator system. Li and Huang et al. [33] proposed a nonlinear adaptive model with a disturbance observer to estimate camera parameters and improve visual servoing convergence. In comparison with other sliding mode control methods, nonsingular terminal sliding mode control has become increasingly popular for high-precision robotic control [34]. However, these control methods assume prior knowledge of the disturbance upper bound, which may not be easily obtained due to model complexity and uncertainties. Mohamed Boukattaya et al. [35] proposed an adaptive nonsingular fast terminal sliding-mode control that estimates the unknown upper bound for uncertain dynamical systems; the adaptive design concept in that work is excellent and has inspired our own design. In addition, for the vision-based manipulation control of a UVMS, the field of view is limited due to the short visible distance; moreover, vehicle navigation sensors, such as the magnetic compass and Doppler velocity log (DVL), cannot ensure high-accuracy, small-boundary reciprocating motions over the long term. The controller should further suppress vibration problems to keep the target in sight. Therefore, a barrier function is formulated and combined with the adaptive design concept [35] to develop control laws that are more suitable for UVMS operations.
The main contribution of this paper includes the following points:
(1)
On the underwater visual matching of blurred and low-contrast images, an improved unsharp masking algorithm is proposed to further enhance the weak texture region. The improved ORB feature-matching method is developed with an adaptive threshold, non-maximum suppression and an improved random sample consensus (RANSAC). The adaptive threshold is adopted to extract FAST feature points; non-maximum suppression is performed to remove feature point blocks, which reduces the number of invalid feature points and saves matching time; and the improved RANSAC algorithm strengthens the elimination of mismatched points.
(2)
A novel adaptive nonsingular terminal sliding mode controller incorporating a barrier function is proposed to enhance its applicability to UVMS operations, specifically regarding unknown disturbances and uncertain parameters with unknown upper bounds. The barrier term is designed with a quasi-barrier function to suppress chattering, and a feedforward strategy is employed to handle the coupling of the robotic arm.
(3)
Oceanic experiments have been conducted to prove the performance of the proposed algorithm and controller.
The rest of this article is organized as follows. The binocular vision matching method for underwater small objects will be outlined in Section 2. Section 3 will investigate the adaptive non-singular fast terminal controller. The results of our simulations and experiments will be discussed in Section 4. Section 5 will provide our conclusions.

2. Binocular Vision Matching Method for Underwater Small Object

Binocular vision can provide three-dimensional distance information for manipulation control [36]. Since underwater visual images are usually gray, blurred and of reduced contrast [37,38], it is very important to enhance the texture during image preprocessing. With the target position information obtained from binocular vision, the UVMS can realize position-based visual servoing control for the manipulation. Figure 1 presents the block diagram of the system described in this paper.

2.1. Image Preprocessing

In order to preserve edge information and eliminate image noise for underwater small-object visual matching, the fast guided filter algorithm was applied. The linear relationship between the output image $q_i$ and the guiding image $g_i$ in the window $\omega_k$ is:

$$q_i = a_k g_i + b_k, \quad i \in \omega_k$$

To minimize the difference between the output image $q_i$ and the input image $p_i$, the cost function of the window coefficients can be expressed as:

$$E(a_k, b_k) = \sum_{i \in \omega_k} \left[ (a_k g_i + b_k - p_i)^2 + \epsilon a_k^2 \right]$$

where the linear coefficients are $a_k = \dfrac{\mathrm{cov}(p_k, g_k)}{\sigma_k^2 + \epsilon}$ and $b_k = \bar{p}_k - \mu_k a_k$, with $\epsilon$ the regularization parameter, $\mu_k$ and $\sigma_k^2$ the mean and variance of the guiding image in $\omega_k$, and $\bar{p}_k$ the mean of the input image in $\omega_k$.
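As a concrete illustration of this filtering step, a minimal guided filter sketch is given below. It assumes grayscale floating-point images and a plain NumPy box filter; the window radius r and regularization eps are illustrative values rather than the parameters used in this paper, and the subsampled "fast" variant is omitted for brevity.

```python
import numpy as np

def box_filter(img, r):
    """Mean over a (2r+1)x(2r+1) window, computed with cumulative sums."""
    padded = np.pad(img, r, mode="edge")
    c = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))            # zero row/column so window sums become simple differences
    w = 2 * r + 1
    s = c[w:, w:] - c[:-w, w:] - c[w:, :-w] + c[:-w, :-w]
    return s / (w * w)

def guided_filter(p, g, r=8, eps=1e-3):
    """Guided filter q_i = a_k g_i + b_k, minimizing (a_k g_i + b_k - p_i)^2 + eps * a_k^2."""
    mean_g, mean_p = box_filter(g, r), box_filter(p, r)
    cov_gp = box_filter(g * p, r) - mean_g * mean_p    # cov(p_k, g_k)
    var_g = box_filter(g * g, r) - mean_g ** 2         # sigma_k^2
    a = cov_gp / (var_g + eps)                         # a_k
    b = mean_p - a * mean_g                            # b_k
    # average the per-window coefficients covering each pixel, then form the output
    return box_filter(a, r) * g + box_filter(b, r)
```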
To further enhance the weak texture region, an improved unsharp masking algorithm has been proposed. After evaluating the high-frequency component, enhancing the edge and texture details and establishing the gain functions, the enhanced image can be expressed as:

$$F(x_T, y_T) = I_T(x_T, y_T) + K_T \lambda_T(x_T, y_T) \times D_T(x_T, y_T)$$

where

$$D_T(x_T, y_T) = \begin{cases} I_T(x_T, y_T)\left(1 - \dfrac{1}{2\pi \sigma_T^2}\right), & \text{if } D_T(x_T, y_T) \le \mathrm{Max}\,D_T(x_T, y_T) \\ 0, & \text{otherwise} \end{cases}$$

$I_T(x_T, y_T)$ is the original image, $\lambda_T(x_T, y_T)$ is the gain function, and $K_T$ is the coefficient that tunes the intensity of the high-frequency information.

$$\lambda_T(x_T, y_T) = \beta_T \times L_C(x_T, y_T) + (1 - \beta_T) \times L_v(x_T, y_T)$$

$$L_C(x_T, y_T) = \frac{\sum_{k_T=0}^{L_T-1} \mathrm{sgn}(k_T)}{G_{Lc}}, \qquad L_v(x_T, y_T) = \frac{1}{G_{Lv}} \left[ \frac{1}{m \times n} \sum_{i=x-(m-1)/2}^{x+(m-1)/2} \; \sum_{j=y-(n-1)/2}^{y+(n-1)/2} \left( f_T(i,j) - \bar{f}_T \right)^2 \right]^{1/2}$$

where $L_C(x_T, y_T)$ is the local complexity and $G_{Lc}$ the global complexity; $L_v(x_T, y_T)$ is the local standard deviation and $G_{Lv}$ the global standard deviation; $\bar{f}_T$ is the mean of the neighborhood $\Omega(m \times n)$ centered at $(x_T, y_T)$; and $\beta_T$ is the weight between the local standard deviation and the local complexity.
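A compact sketch of the adaptive unsharp masking idea is given below. It assumes a Gaussian low-pass for the high-frequency component and uses only the local standard deviation, normalized by its global value, as a simplified gain term; the local complexity weighting is omitted, and the parameter values are illustrative rather than those used in the experiments.

```python
import cv2
import numpy as np

def adaptive_unsharp(img, k=1.5, sigma=2.0, win=7, beta=0.5):
    """Enhance weak-texture regions: F = I + k * lambda(x, y) * D(x, y)."""
    f = img.astype(np.float32)
    low = cv2.GaussianBlur(f, (0, 0), sigma)            # low-pass component
    d = f - low                                          # high-frequency component D
    mean = cv2.blur(f, (win, win))                       # local mean over a win x win window
    mean_sq = cv2.blur(f * f, (win, win))
    local_std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))   # local standard deviation L_v
    g_lv = float(local_std.mean()) + 1e-6                # global standard deviation G_Lv
    lam = beta + (1.0 - beta) * local_std / g_lv         # simplified gain lambda(x, y)
    out = f + k * lam * d
    return np.clip(out, 0, 255).astype(np.uint8)
```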

2.2. ORB Algorithm for Binocular Vision Matching

In order to pick up feature points and guarantee matching speed for real time grasp, the ORB algorithm [38] was improved. The ORB algorithm includes four steps:
(1)
Scale space construction:
Gaussian convolution is applied so that feature points can be extracted with scale invariance. Octave and intra-octave layers are constructed by down-sampling the original image.
(2)
Candidate key points are detected with FAST-9 and ranked through the Harris response:
For every candidate, 16 pixels on a circle around the point are examined. If more than 3/4 of them are brighter or darker than the center by the threshold, the candidate is accepted as a feature point. Harris response values are then calculated, and the top N responding key points are kept.
(3)
Estimate the major orientation with the intensity centroid method. The moment centroid is:

$$C_T = \left( \frac{m_{10}}{m_{00}}, \frac{m_{01}}{m_{00}} \right)$$

The main orientation from the feature point to the centroid is:

$$\theta_T = \arctan(m_{01} / m_{10})$$
(4)
RBRIEF algorithm:
The RBRIEF algorithm is applied to generate feature descriptors in a rotation-invariant binary form. The BRIEF descriptor is composed of 256 pairs of 5 × 5 sub-windows. The descriptor uses the centroid direction of the feature point to determine the main BRIEF direction.
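For reference, the standard ORB detection and binary-descriptor matching pipeline that the four steps above describe can be exercised with OpenCV as follows; the feature count, the FAST threshold and the ratio-test constant are illustrative settings, not the ones tuned in this paper.

```python
import cv2

def orb_match(img_left, img_right, n_features=1000):
    """Detect ORB key points on a stereo pair and match their binary descriptors."""
    orb = cv2.ORB_create(nfeatures=n_features, fastThreshold=20)
    kp_l, des_l = orb.detectAndCompute(img_left, None)
    kp_r, des_r = orb.detectAndCompute(img_right, None)
    # Hamming distance suits binary BRIEF descriptors; keep the two nearest neighbors
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    knn = matcher.knnMatch(des_l, des_r, k=2)
    # ratio test to discard ambiguous matches
    good = [m for m, n in knn if m.distance < 0.75 * n.distance]
    return kp_l, kp_r, good
```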

2.3. Improved ORB Matching Algorithm for Small Objects

Although the ORB algorithm is very fast, its matching performance is still affected by blurred images. This section improves the ORB algorithm in the key-point selection process, the RBRIEF descriptor and the RANSAC stage in order to improve the underwater matching performance for small objects and to eliminate mismatched points.
(1)
Improvement of FAST-9 key points:
For the underwater environment with uneven illumination, an adaptive threshold $k_{\varepsilon d}$ is adopted to improve the key-point selection process:

$$k_{\varepsilon d} = a \times \left( \frac{1}{n}\sum_{i=1}^{n} g_i^{\max} - \frac{1}{n}\sum_{i=1}^{n} g_i^{\min} \right)$$

In the $L \times L$ neighborhood of a feature point, the maximum gray value $g_i^{\max}$ and the minimum gray value $g_i^{\min}$ are picked up so that the threshold is controlled by the gray-level difference.
(2)
Apply non-maximum suppression to remove repeated points:
Feature points tend to congregate. The more a feature point is repeated, the more feature description is required and the more time is consumed for feature matching, which is disadvantageous for object positioning. After comparing the FAST values between feature points and their neighbors over two pyramid layers, non-maximum suppression is applied. The FAST value is expressed as the sum of absolute differences of the pixel gray values:

$$v_s = \sum_{i=1}^{n_f} \left| g_p - g_i \right|$$

where $n_f$ is the number of feature points and $g_p$ is the extreme gray value.
(3)
RBRIEF descriptor improvement:
The purpose of the improvement of the RBRIEF descriptor is to achieve accurate description, strong distinguishability and a reduced time requirement. The norms of three pixel blocks are compared instead of the gray values of two single pixels in order to improve the descriptor robustness. Three 7 × 7 blocks $p_{t,a}$, $p_{t,1}$ and $p_{t,2}$ are selected in the 48 × 48 square neighborhood of a feature point to obtain each bit value $B_T(W_T, S_t)$ of the descriptor:

$$B_T(W_T, S_t) = \begin{cases} 1, & \left\| p_{t,a} - p_{t,1} \right\|_F^2 > \left\| p_{t,a} - p_{t,2} \right\|_F^2 \\ 0, & \text{otherwise} \end{cases}$$

In order to obtain a large number of pixel block combinations, a supervised learning method has been applied to group the pixel blocks. The training data set is first constructed from local image descriptor data; a Harris operator is applied to extract initial feature points and to match the corresponding feature points between different images. A total of 3 × 56k pixel blocks are generated from each feature point neighborhood to build the descriptor data, and the 50k bit series are obtained from the data set after matching. If the absolute correlation between a candidate pixel block position and all the previously selected positions is greater than the threshold, the candidate is eliminated. The 256-bit binary descriptor is established at last.
(4)
Improved RANSAC to eliminate mistake matching points:
The traditional RANSAC algorithm selects four matching pairs randomly, which leads to an unstable number of iterations. This paper sorts the matching pairs according to their quality and obtains the homography matrix from the top four pairs, in order to solve for the optimal homography matrix and improve the matching success rate. The quality of a matching pair is calculated as:

$$\gamma_T = 1 - d_{\min} / d_{\min 2}$$

where $d_{\min}$ and $d_{\min 2}$ are the distances to the nearest and the second-nearest neighbor of the matching pair.
It is unnecessary to use all the matching pairs for each iteration of the homography matrix; one can randomly select half of all the matching pairs for the calculation. If the inlier ratio is clearly lower than that of the current optimal model, the model is abandoned to save iteration time. The complete procedure is summarized in Algorithm 1.
Algorithm 1. Binocular Vision Matching Method for Underwater Small Object
① Image preprocessing with improved unsharp masking algorithm.
② Construct scale space.
③ Pick up candidate key points through improved FAST-9, ascertain key points through Harris sort.
④ Estimate the major orientation with the intensity centroid method, and apply non-maximum suppression to remove repeated points.
⑤ Generate feature descriptors in rotation-invariant binary form through the improved RBRIEF descriptor.
⑥ Eliminate mismatched points through the improved RANSAC.
If the inlier ratio is clearly lower than that of the current optimal model, go to ① and repeat the sampling and matching.
Otherwise, generate the disparity map.
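A simplified sketch of steps ① to ⑥ is given below, assuming key points, descriptors and two-nearest-neighbor matches are already available (for example from the ORB sketch above). The adaptive FAST threshold, the match quality γ and the quality-sorted homography check follow the formulas of this section; helper names such as adaptive_fast_threshold are illustrative and do not reproduce the authors' implementation.

```python
import cv2
import numpy as np

def adaptive_fast_threshold(gray, a=0.15, win=31):
    """k_ed = a * (mean of local gray maxima - mean of local gray minima)."""
    kernel = np.ones((win, win), np.uint8)
    g_max = cv2.dilate(gray, kernel)          # local maximum gray value per neighborhood
    g_min = cv2.erode(gray, kernel)           # local minimum gray value per neighborhood
    return a * (float(g_max.mean()) - float(g_min.mean()))

def quality_sorted_homography(kp_l, kp_r, knn_matches, reproj_thresh=3.0):
    """Sort matches by gamma = 1 - d_min/d_min2, seed H from the best four pairs, check inliers."""
    scored = sorted(((1.0 - m.distance / (n.distance + 1e-9), m) for m, n in knn_matches),
                    key=lambda t: t[0], reverse=True)
    pts_l = np.float32([kp_l[m.queryIdx].pt for _, m in scored])
    pts_r = np.float32([kp_r[m.trainIdx].pt for _, m in scored])
    H, _ = cv2.findHomography(pts_l[:4], pts_r[:4], 0)     # model from the top four pairs
    half = max(4, len(scored) // 2)                        # verify on only half of the pairs
    proj = cv2.perspectiveTransform(pts_l[:half].reshape(-1, 1, 2), H).reshape(-1, 2)
    inlier_ratio = float(np.mean(np.linalg.norm(proj - pts_r[:half], axis=1) < reproj_thresh))
    return H, inlier_ratio   # resample and repeat if the ratio is clearly below the best model
```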

3. Adaptive Non-Singular Fast Terminal Controller Design

3.1. Non-Singular Fast Terminal Sliding-Mode Controller

The dynamics model of the UVMS can be described with the Euler–Lagrange equations of motion as:

$$M_\zeta \dot{v} + C_\zeta v + D_\zeta v + G_\zeta + \tau_{\zeta un} = \tau_{\zeta ctrl} + \tau_{\zeta mani}$$

where $v$ denotes the velocity vector of the vehicle; $M_\zeta \in \mathbb{R}^{(6+n)\times(6+n)}$ is the inertia matrix including added mass, with $M_\zeta = M_0 + \Delta M_\zeta$ and $\Delta M_\zeta$ the uncertainty of $M_\zeta$; $n$ is the number of degrees of freedom of the manipulator; $C_\zeta \in \mathbb{R}^{(6+n)\times(6+n)}$ is the centrifugal and Coriolis matrix, with $C_\zeta = C_0 + \Delta C_\zeta$ and $\Delta C_\zeta$ the uncertainty of $C_\zeta$; $D_\zeta \in \mathbb{R}^{(6+n)\times(6+n)}$ denotes the dissipative drag matrix caused by fluid viscosity, with $D_\zeta = D_0 + \Delta D_\zeta$ and $\Delta D_\zeta$ the uncertainty of $D_\zeta$; $G_\zeta \in \mathbb{R}^{(6+n)\times 1}$ is the restoring (gravitational and buoyancy) term, with $G_\zeta = G_0 + \Delta G_\zeta$ and $\Delta G_\zeta$ the uncertainty of $G_\zeta$; $M_0$, $C_0$, $D_0$ and $G_0$ are the known terms; $\tau_{\zeta un} \in \mathbb{R}^{(6+n)\times 1}$ is the combination of external disturbances and model uncertainties; $\tau_{\zeta mani}$ denotes the coupling forces between the vehicle and the manipulator [10]; and $\tau_{\zeta ctrl}$ denotes the input vector of control forces.
The specific form of $M_\zeta$ is [39]:

$$M_\zeta = \begin{bmatrix} M_V & M_{Vm}^T(q) \\ M_{Vm}(q) & M_m(q) \end{bmatrix}$$

where $M_V$ represents the inertia matrix of the vehicle body; $M_m(q) \in \mathbb{R}^{n\times n}$ represents the inertia matrix of the manipulator (including added mass); $M_{Vm}(q) \in \mathbb{R}^{n\times 6}$ represents the coupling matrix of the system inertia; and the vectors $M_{Vm}^T(q)\ddot{q}$ and $M_{Vm}(q)\dot{v}$ represent the coupling forces (moments) exerted, due to inertia, by the manipulator on the vehicle body and by the vehicle body on the manipulator, respectively.
In Equation (11), the specific form of $C_\zeta$ is [39]:

$$C_\zeta = \begin{bmatrix} C_V(v) & C_{Vm}^T(v, \dot{q}) \\ C_{Vm}(v, \dot{q}) & C_m(q, \dot{q}) \end{bmatrix}$$

where $C_V(v)$ represents the Coriolis force matrix of the vehicle body; $C_m(q, \dot{q}) \in \mathbb{R}^{n\times n}$ represents the Coriolis force matrix of the manipulator (including added mass); $C_{Vm}(v, \dot{q}) \in \mathbb{R}^{n\times 6}$ represents the coupling matrix of the system Coriolis forces; and the vectors $C_{Vm}^T(v, \dot{q})\dot{q}$ and $C_{Vm}(v, \dot{q})v$ represent the coupling forces (moments) exerted, due to Coriolis effects, by the manipulator on the vehicle body and by the vehicle body on the manipulator, respectively.
In Equation (11), the specific form of $D_\zeta$ is [39]:

$$D_\zeta = \begin{bmatrix} D_V(v) & 0 \\ 0 & D_m(q, \dot{q}) \end{bmatrix}$$

where $D_V(v)$ represents the damping matrix of the vehicle body and $D_m(q, \dot{q}) \in \mathbb{R}^{n\times n}$ represents the damping matrix of the manipulator.
For the transformation from the inertial coordinate frame of the vehicle to the task space of the manipulator end effector, one has:

$$\dot{\eta} = J(\eta) v, \qquad M_\eta \ddot{\eta} + C_\eta \dot{\eta} + D_\eta \dot{\eta} + G_\eta + \tau_{\eta un} = \tau_{\eta ctrl} + \tau_{\eta mani}$$

where

$$M_\eta = J^{-T} M_\zeta J^{-1}, \qquad \tau_{\eta mani} = J^{-T} \tau_{\zeta mani}$$

$$C_\eta = J^{-T} \left[ C_\zeta - M_\zeta J^{-1} \dot{J} \right] J^{-1}, \qquad G_\eta = J^{-T} G_\zeta, \qquad \tau_{\eta un} = J^{-T} \tau_{\zeta un}$$
For the manipulation control, dynamic Equation (15) can be described as a second order system:
$$\dot{x}_1 = x_2, \qquad \dot{x}_2 = f(x) + d(t) + b(x) u(t)$$

where $x_1$ and $x_2$ are the system state vectors; $f(x)$ and $b(x)$ are nonlinear functions; $d(t)$ is the external disturbance vector; and $u(t)$ is the control input vector. The physical layout of the UVMS thruster positions studied in this article is shown in Figure 2, and the mathematical model diagram is shown in Figure 3.
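As an illustration of how the dynamics are cast into this second-order form, a small forward-dynamics sketch is given below. The matrices M, C, D, G and the force vectors are assumed to be supplied by an external model; the sign convention follows the equation of motion reconstructed above, and the function names are illustrative.

```python
import numpy as np

def uvms_acceleration(M, C, D, G, tau_ctrl, tau_mani, tau_un, v):
    """Solve M v_dot + C v + D v + G + tau_un = tau_ctrl + tau_mani for v_dot."""
    rhs = tau_ctrl + tau_mani - C @ v - D @ v - G - tau_un
    return np.linalg.solve(M, rhs)

def second_order_state(M, C, D, G, tau_mani, tau_un, x1, x2, tau_ctrl):
    """State-space form x1_dot = x2, x2_dot = f(x) + d(t) + b(x) u(t) used by the controller."""
    x1_dot = x2
    x2_dot = uvms_acceleration(M, C, D, G, tau_ctrl, tau_mani, tau_un, x2)
    return x1_dot, x2_dot
```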
The horizontal thrusters 1, 2, 3 and 4 and the fore and aft vertical thrusters 5 and 6 are arranged as shown in the figure; they generate the control forces $\tau_{\eta ctrl}$ in surge (forward and backward movement), sway (lateral movement), heave (ascent and descent), pitch and yaw. The control force $\tau_{\eta ctrl}$ and the thruster forces $u(t)$ are related as shown in Equation (19):

$$\tau_{\eta ctrl} = B_v u(t)$$

In the equation, $B_v$ is referred to as the thruster control matrix, and its specific form is given by Equation (20):

$$B_v = \begin{bmatrix} \sin\beta & \sin\beta & \sin\beta & \sin\beta & 0 & 0 \\ \cos\beta & \cos\beta & \cos\beta & \cos\beta & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & x_5 & x_6 \\ y_1 & y_2 & y_3 & y_4 & 0 & 0 \end{bmatrix}$$

In the equation, $\beta$ is the thruster inclination angle illustrated in Figure 3, and $(y_1, y_2, y_3, y_4, x_5, x_6)$ are the distances from the coordinate origin to the thruster positions.
After obtaining the desired control forces with the control algorithm, it is necessary to calculate the thrust provided by each thruster. The conversion relationship is given by Equation (21), where $B_v^+$ is referred to as the pseudoinverse of the thruster control matrix:

$$B_v^+ = B_v^T \left( B_v B_v^T \right)^{-1}, \qquad u(t) = B_v^+ \tau_{ctrl}$$
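A small thrust-allocation sketch following Equations (19)–(21) is given below; it assumes the five controlled degrees of freedom are surge, sway, heave, pitch and yaw, and the vectoring angle, per-thruster signs and moment arms in the usage lines are placeholders rather than the vehicle's actual geometry.

```python
import numpy as np

def thruster_matrix(beta, sway_sign, y, x5, x6):
    """B_v maps the six thruster forces to surge, sway, heave, pitch and yaw."""
    s, c = np.sin(beta), np.cos(beta)
    B = np.zeros((5, 6))
    B[0, :4] = s                           # surge components of the four vectored thrusters
    B[1, :4] = c * np.asarray(sway_sign)   # sway components (signs encode the mounting)
    B[4, :4] = y                           # yaw moment arms y_1..y_4 (signed)
    B[2, 4:] = 1.0                         # heave from the two vertical thrusters
    B[3, 4:] = [x5, x6]                    # pitch moment arms
    return B

def allocate(B, tau_ctrl):
    """u = B_v^+ tau_ctrl with the right pseudoinverse B^T (B B^T)^-1."""
    return B.T @ np.linalg.inv(B @ B.T) @ tau_ctrl

# usage with placeholder geometry:
# B = thruster_matrix(np.deg2rad(45), [1, -1, 1, -1], [0.3, -0.3, -0.3, 0.3], 0.25, -0.25)
# u = allocate(B, np.array([10.0, 0.0, 5.0, 0.0, 1.0]))
```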
In order to realize manipulation control, the nonsingular fast terminal sliding-mode surface is [40]:

$$s = e + \frac{1}{\beta} \dot{e}^{p/q}$$

where $\beta > 0$, $p$ and $q$ are positive odd numbers; $e = x_1 - x_d$, and $x_d$ is the reference vector. One has the equivalent control law:

$$u = b^{-1}(x) \left[ \ddot{x}_d - f(x) + \beta \frac{q}{p} \dot{e}^{2 - p/q} \mathrm{sign}(\dot{e}) + (\lambda + L_g) \mathrm{sign}(s) \right]$$

where $\| d(t) \| \le L_g$. From (16) and (23), one has the controller for UVMS cruising:

$$\tau_{\zeta ctrl} = J^T M_\eta(\eta) \left[ \ddot{x}_d - f(x) + \beta \frac{q}{p} \dot{e}^{2 - p/q} \mathrm{sign}(\dot{e}) + (\lambda + L_g) \mathrm{sign}(s) \right]$$

where $\tau_{\zeta mani} = 0$, since the manipulator stays in a fixed posture. Meanwhile, the controller for the manipulation process can be described as:

$$\tau_{\eta ctrl} = J^T \bar{M}_\eta(\eta) \left[ \ddot{x}_d - \bar{f}(x) + \beta \frac{q}{p} \dot{e}^{2 - p/q} \mathrm{sign}(\dot{e}) + (\lambda + L_g) \mathrm{sign}(s) \right] - \tau_{\eta mani}$$

where $\bar{M}_\eta$ and $\bar{f}(x)$ denote the task-space inertia matrix and the nonlinear term during the manipulation process.
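A per-axis sketch of the sliding surface and terminal sliding mode law above is given below. Fractional powers are written as |ė|^(...) sign(ė) so that they remain real, and, for this sketch only, the tracking error is taken as reference minus state so that the law as printed is stabilizing on a simple double integrator; the gains are placeholders rather than the values used later in the paper.

```python
import numpy as np

def ntsmc(e, e_dot, xdd_ref, f_x, beta=2.0, p=5, q=3, lam=0.4, L_g=6.0):
    """Nonsingular fast terminal SMC command for one axis (e = reference - state)."""
    s = e + (1.0 / beta) * np.sign(e_dot) * np.abs(e_dot) ** (p / q)   # sliding surface
    u = (xdd_ref - f_x
         + beta * (q / p) * np.abs(e_dot) ** (2.0 - p / q) * np.sign(e_dot)
         + (lam + L_g) * np.sign(s))
    return u, s
```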

3.2. Adaptive Non-Singular Fast Terminal Sliding-Mode Controller

Since the manipulation process has to consider the coupling effects between the vehicle and the manipulator, $\bar{M}_\eta$ and $\bar{f}(x)$ are different from $M_\eta$ and $f(x)$, respectively.
For the practical application of (24) and (25), one should consider model uncertainties. The combination of uncertainties and disturbance can be defined as:

$$\tau_{\zeta un} = \Delta M_\zeta \dot{v} + \Delta C_\zeta v + \Delta D_\zeta v + \Delta G_\zeta + d(t)$$

If we assume

$$\left\| \tau_{\zeta un} \right\| = \left\| \Delta f(x) + d(t) \right\| < \delta$$

where $\Delta f(x)$ denotes the model uncertainties of the UVMS and $\delta$ represents the upper bound of $\tau_{\zeta un}$, and if we suppose the control input $\tau_{\zeta ctrl}$ does not contain an acceleration signal, then $\delta$ can be composed of the position and velocity feedback as:

$$\delta = d_0 + d_1 \| e \| + d_2 \| \dot{e} \|^2$$

where $d_0$, $d_1$ and $d_2$ are positive constants.
From (24), the controller without considering disturbances can be equivalent to:
$$\tau_{eq} = J^T M_\eta(\eta) \left[ \ddot{x}_d - f(x) + \beta \frac{q}{p} \dot{e}^{2 - p/q} \mathrm{sign}(\dot{e}) \right]$$
The reaching law associated with $\delta$ can be designed as:

$$\dot{s} = -k_s s - (\delta + \mu_s) \mathrm{sign}(s)$$

where $k_s$ is a positive constant and $\mu_s$ is a vector of very small parameters. In order to satisfy the sliding condition under disturbance and uncertainties, the switching controller can be designed as:

$$\tau_{sw}(t) = J^T M_\eta(\eta) \left[ k_s s + (\delta + \mu_s) \mathrm{sign}(s) \right] = J^T M_\eta(\eta) \left[ k_s s + \left( d_0 + d_1 \| e \| + d_2 \| \dot{e} \|^2 + \mu_s \right) \mathrm{sign}(s) \right]$$
The adaptive switching controller is:
$$\tau_{asw}(t) = J^T M_\eta(\eta) \left[ k_s s + \left( \hat{d}_0 + \hat{d}_1 \| e \| + \hat{d}_2 \| \dot{e} \|^2 + \mu_s \right) \mathrm{sign}(s) \right]$$
Therefore, the adaptive nonsingular fast terminal sliding-mode controller is designed as:
$$\tau_{ANTSMC} = J^T M_\eta(\eta) \left[ \ddot{x}_d - f(x) + \beta \frac{q}{p} \dot{e}^{2 - p/q} \mathrm{sign}(\dot{e}) \right] + J^T M_\eta(\eta) \left[ k_s s + \left( \hat{d}_0 + \hat{d}_1 \| e \| + \hat{d}_2 \| \dot{e} \|^2 + \mu_s \right) \mathrm{sign}(s) \right]$$
where $\tilde{d}_0 = \hat{d}_0 - d_0$, $\tilde{d}_1 = \hat{d}_1 - d_1$ and $\tilde{d}_2 = \hat{d}_2 - d_2$ are the adaptation errors. $\hat{d}_0$, $\hat{d}_1$ and $\hat{d}_2$ are updated through:

$$\dot{\hat{d}}_0 = \lambda_0 \| s \| \frac{1}{\beta} \frac{p}{q} \| \dot{e} \|^{p/q - 1}, \qquad \dot{\hat{d}}_1 = \lambda_1 \| s \| \frac{1}{\beta} \frac{p}{q} \| \dot{e} \|^{p/q - 1} \| e \|, \qquad \dot{\hat{d}}_2 = \lambda_2 \| s \| \frac{1}{\beta} \frac{p}{q} \| \dot{e} \|^{p/q - 1} \| \dot{e} \|^2$$

where $\lambda_0$, $\lambda_1$ and $\lambda_2$ are arbitrary positive tuning parameters.
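A sketch of the bound-adaptation step is given below; it integrates the update laws above with a simple Euler step, and the gains, the exponent factor and the initial estimates are placeholders consistent with the reconstructed equations rather than the authors' exact implementation.

```python
import numpy as np

def update_adaptive_gains(d_hat, s, e, e_dot, dt, lambdas=(0.1, 0.1, 0.1), beta=2.0, p=5, q=3):
    """Euler integration of the adaptation laws for d_hat_0, d_hat_1, d_hat_2."""
    d0, d1, d2 = d_hat
    common = np.linalg.norm(s) * (1.0 / beta) * (p / q) * np.linalg.norm(e_dot) ** (p / q - 1.0)
    d0 += lambdas[0] * common * dt
    d1 += lambdas[1] * common * np.linalg.norm(e) * dt
    d2 += lambdas[2] * common * np.linalg.norm(e_dot) ** 2 * dt
    # estimated disturbance bound used in the adaptive switching term
    delta_hat = d0 + d1 * np.linalg.norm(e) + d2 * np.linalg.norm(e_dot) ** 2
    return (d0, d1, d2), delta_hat
```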

3.3. The Overall Controller with Barrier Function

In the vision-based manipulation process, the vehicle should hold its position so that the target can be kept in the field of view. However, the external hydrodynamic disturbances and the coupling effects between the vehicle and the manipulator make the control system oscillate back and forth and apt to lose the target.
In order to further suppress the vibration and ensure controller convergence in finite time, a barrier function was designed. The barrier function $K_{BF}(\sigma)$ is an even, positive function (see Figure 4). The interval parameter $\varepsilon$ is defined with $\varepsilon > 0$. The domain of the barrier function is $\sigma \in (-\varepsilon, \varepsilon)$ and its range is $K_{BF}(\sigma) \in (0, \infty)$. The barrier function satisfies:

$$K_{BF}(\sigma) = 0 \ \text{when} \ \sigma = 0, \qquad \lim_{\sigma \to \pm\varepsilon} K_{BF}(\sigma) = +\infty$$

The quasi-barrier function is proposed from the barrier function for the adaptation controller and vibration suppression:

$$\tilde{K}_{BF}(\sigma) = K_{BF}(\sigma) \ \text{when} \ -\tilde{\varepsilon} < \sigma < \tilde{\varepsilon}, \qquad \tilde{K}_{BF}(\sigma) = 1 \ \text{when} \ \sigma \le -\tilde{\varepsilon} \ \text{or} \ \sigma \ge \tilde{\varepsilon}$$

The basic barrier function and the quasi-barrier function can be defined as:

$$K_{BF}(\sigma) = \frac{\bar{L} \, \| \sigma \|}{\varepsilon - \| \sigma \|}, \qquad \tilde{K}_{BF}(\sigma) = \frac{\bar{L} \, \| \mathrm{sat}_{\tilde{\varepsilon}}(\sigma) \|}{\varepsilon - \| \mathrm{sat}_{\tilde{\varepsilon}}(\sigma) \|}$$

where $\lim_{\sigma \to \pm\tilde{\varepsilon}} \tilde{K}_{BF}(\sigma) = 1$, $\bar{L} = \dfrac{\varepsilon - \tilde{\varepsilon}}{\tilde{\varepsilon}}$, and the saturation function is defined as $\mathrm{sat}_{\tilde{\varepsilon}}(\sigma) = \begin{cases} \sigma, & \| \sigma \| < \tilde{\varepsilon} \\ \tilde{\varepsilon}\, \mathrm{sign}(\sigma), & \| \sigma \| \ge \tilde{\varepsilon} \end{cases}$. Therefore, the quasi-barrier functions concerning the control errors and the sliding surface can be described as:

$$\tilde{K}_{BF1}(\dot{e}) = \frac{\bar{L}_1 \, \| \mathrm{sat}_{\tilde{\varepsilon}_1}(\dot{e}) \|}{\varepsilon_1 - \| \mathrm{sat}_{\tilde{\varepsilon}_1}(\dot{e}) \|}, \qquad \tilde{K}_{BF2}(s) = \frac{\bar{L}_2 \, \| \mathrm{sat}_{\tilde{\varepsilon}_2}(s) \|}{\varepsilon_2 - \| \mathrm{sat}_{\tilde{\varepsilon}_2}(s) \|}$$

where $\varepsilon_1$ and $\varepsilon_2$ are the bounds of the control errors and the sliding surface, respectively; $\bar{L}_1$ and $\bar{L}_2$ are the gains of $\tilde{K}_{BF1}(\dot{e})$ and $\tilde{K}_{BF2}(s)$; and $\mathrm{sat}_{\tilde{\varepsilon}_1}(\dot{e})$ and $\mathrm{sat}_{\tilde{\varepsilon}_2}(s)$ are the corresponding saturation functions, with

$$\bar{L}_1 = \frac{\varepsilon_1 - \tilde{\varepsilon}_1}{\tilde{\varepsilon}_1}, \qquad \bar{L}_2 = \frac{\varepsilon_2 - \tilde{\varepsilon}_2}{\tilde{\varepsilon}_2}, \qquad \mathrm{sat}_{\tilde{\varepsilon}_1}(\dot{e}) = \begin{cases} \dot{e}, & \| \dot{e} \| < \tilde{\varepsilon}_1 \\ \tilde{\varepsilon}_1 \mathrm{sign}(\dot{e}), & \| \dot{e} \| \ge \tilde{\varepsilon}_1 \end{cases}, \qquad \mathrm{sat}_{\tilde{\varepsilon}_2}(s) = \begin{cases} s, & \| s \| < \tilde{\varepsilon}_2 \\ \tilde{\varepsilon}_2 \mathrm{sign}(s), & \| s \| \ge \tilde{\varepsilon}_2 \end{cases}$$
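The quasi-barrier gain can be written directly from these definitions; in the sketch below, eps and eps_tilde play the roles of ε and ε̃, the gain is zero at the origin, grows as the argument approaches ε̃, and saturates to 1 outside the ε̃ band.

```python
import numpy as np

def quasi_barrier(sigma, eps, eps_tilde):
    """Quasi-barrier gain: barrier-shaped inside (-eps_tilde, eps_tilde), equal to 1 outside."""
    L_bar = (eps - eps_tilde) / eps_tilde          # chosen so the gain reaches 1 at the band edge
    sig = np.clip(sigma, -eps_tilde, eps_tilde)    # saturation sat_eps_tilde(sigma)
    return L_bar * np.abs(sig) / (eps - np.abs(sig))

# example: quasi_barrier(0.0, 0.98, 0.9) is 0, and the gain reaches 1 once |sigma| >= 0.9
```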
Therefore, from the frame of the overall controller in Figure 5, the overall controller is designed as:
$$\tau_{\zeta ctrl} = J^T M_\eta(\eta) \left[ \ddot{x}_d - f(x) + \beta \frac{q}{p} | \dot{e} |^{2 - p/q} \tilde{K}_{BF1}(\dot{e}) \, \mathrm{sign}(\dot{e}) + \left( \hat{d}_0 + \hat{d}_1 \| e \| + \hat{d}_2 \| \dot{e} \|^2 + \mu_s \right) \tilde{K}_{BF2}(s) \, \mathrm{sign}(s) + k_s s \right] \quad \text{(cruising)}$$

$$\tau_{\zeta ctrl} = J^T \bar{M}_\eta(\eta) \left[ \ddot{x}_d - \bar{f}(x) + \beta \frac{q}{p} | \dot{e} |^{2 - p/q} \tilde{K}_{BF1}(\dot{e}) \, \mathrm{sign}(\dot{e}) + \left( \hat{d}_0 + \hat{d}_1 \| e \| + \hat{d}_2 \| \dot{e} \|^2 + \mu_s \right) \tilde{K}_{BF2}(s) \, \mathrm{sign}(s) + k_s s \right] - \tau_{\zeta mani} \quad \text{(manipulating)}$$
Theorem 1.
For the control system (40), with the sliding surface chosen as (26) and the barrier function defined as (38), the control system converges to the equilibrium point in finite time.
Proof. 
The Lyapunov function candidate based on the barrier function is selected as:

$$V_{BF} = \| s \| + \varepsilon_2^{-1} \tilde{L}_2 \tilde{K}_{BF2}(s)$$

Differentiating $V_{BF}$ with respect to time, one has:

$$\dot{V}_{BF} = \frac{s}{\| s \|} \dot{s} + \frac{\tilde{L}_2^2 \, \mathrm{sign}(s)}{(\varepsilon_2 - \| s \|)^2} \dot{s}$$

Splitting $\dot{V}_{BF}$ as in Equation (43),

$$\dot{V}_{BF} = \dot{V}_{BF1} + \dot{V}_{BF2}$$

where

$$\dot{V}_{BF1} = \frac{s}{\| s \|} \dot{s} = \frac{s}{\| s \|} \left( \dot{e} + \frac{1}{\beta} \frac{p}{q} \dot{e}^{p/q - 1} \ddot{e} \right), \qquad \dot{V}_{BF2} = \frac{\tilde{L}_2^2 \, \mathrm{sign}(s)}{(\varepsilon_2 - \| s \|)^2} \dot{s} = \frac{\tilde{L}_2^2 \, \mathrm{sign}(s)}{(\varepsilon_2 - \| s \|)^2} \left( \dot{e} + \frac{1}{\beta} \frac{p}{q} \dot{e}^{p/q - 1} \ddot{e} \right)$$
From (33) and (34), one has:

$$\dot{V}_{BF1} = \frac{s}{| s |} \dot{s} = \mathrm{sign}(s) \left[ \dot{e} + \frac{1}{\beta} \frac{p}{q} \dot{e}^{p/q - 1} \ddot{e} \right] = \mathrm{sign}(s) \left[ \dot{e} + \frac{1}{\beta} \frac{p}{q} \dot{e}^{p/q - 1} \left( \ddot{x}_d - \ddot{x} \right) \right]$$
$$= \mathrm{sign}(s) \left[ \dot{e} + \frac{1}{\beta} \frac{p}{q} \dot{e}^{p/q - 1} \left( \ddot{x}_d - f(x) + \tau_{\zeta un} - \ddot{x}_d + f(x) - \beta \frac{q}{p} | \dot{e} |^{2 - p/q} \tilde{K}_{BF1}(\dot{e}) \, \mathrm{sign}(\dot{e}) \right) - \frac{1}{\beta} \frac{p}{q} \dot{e}^{p/q - 1} \left( \hat{d}_0 + \hat{d}_1 \| e \| + \hat{d}_2 \| \dot{e} \|^2 + \mu_s \right) \tilde{K}_{BF2}(s) \, \mathrm{sign}(s) - \frac{1}{\beta} \frac{p}{q} \dot{e}^{p/q - 1} k_s s \right]$$
$$= \mathrm{sign}(s) \left[ \dot{e} \tilde{K}_{BF1}(\dot{e}) + \frac{1}{\beta} \frac{p}{q} \dot{e}^{p/q - 1} \tau_{\zeta un} \right] - \frac{1}{\beta} \frac{p}{q} \dot{e}^{p/q - 1} \left( \hat{d}_0 + \hat{d}_1 \| e \| + \hat{d}_2 \| \dot{e} \|^2 + \mu_s \right) \tilde{K}_{BF2}(s) - \frac{1}{\beta} \frac{p}{q} \dot{e}^{p/q - 1} k_s | s |$$

and

$$\dot{V}_{BF2} = \frac{\tilde{L}_2^2 \, \mathrm{sign}(s)}{(\varepsilon_2 - \| s \|)^2} \dot{s} = \frac{\tilde{L}_2^2 \, \mathrm{sign}(s)}{(\varepsilon_2 - \| s \|)^2} \left[ \dot{e} \tilde{K}_{BF1}(\dot{e}) + \frac{1}{\beta} \frac{p}{q} \dot{e}^{p/q - 1} \tau_{\zeta un} \right] - \frac{1}{\beta} \frac{p}{q} \dot{e}^{p/q - 1} \frac{\tilde{L}_2^2}{(\varepsilon_2 - \| s \|)^2} \left( \hat{d}_0 + \hat{d}_1 \| e \| + \hat{d}_2 \| \dot{e} \|^2 + \mu_s \right) \tilde{K}_{BF2}(s) - \frac{1}{\beta} \frac{p}{q} \dot{e}^{p/q - 1} \frac{\tilde{L}_2^2}{(\varepsilon_2 - \| s \|)^2} k_s \| s \|$$

If we define $\zeta_1 = \frac{1}{\beta} \frac{p}{q} \dot{e}^{p/q - 1} \left( \hat{d}_0 + \hat{d}_1 \| e \| + \hat{d}_2 \| \dot{e} \|^2 + \mu_s \right)$ and $\zeta_2 = \dot{e} \tilde{K}_{BF1}(\dot{e}) + \frac{1}{\beta} \frac{p}{q} \dot{e}^{p/q - 1} \tau_{\zeta un}$, one has:

$$\dot{V}_{BF} = \dot{V}_{BF1} + \dot{V}_{BF2} = \mathrm{sign}(s) \zeta_2 - \zeta_1 \tilde{K}_{BF2}(s) - \frac{1}{\beta} \frac{p}{q} \dot{e}^{p/q - 1} k_s s + \frac{\tilde{L}_2^2}{(\varepsilon_2 - \| s \|)^2} \left[ \mathrm{sign}(s) \zeta_2 - \zeta_1 \tilde{K}_{BF2}(s) - \frac{1}{\beta} \frac{p}{q} \dot{e}^{p/q - 1} k_s s \right]$$

Since $\mathrm{sign}(s) \zeta_2 \le \| \zeta_2 \|$, one has:

$$\dot{V}_{BF} \le -\zeta_1 \tilde{K}_{BF2}(s) - \frac{\tilde{L}_2^2}{(\varepsilon_2 - \| s \|)^2} \zeta_1 \tilde{K}_{BF2}(s) + \Delta_1 - \frac{1}{\beta} \frac{p}{q} \dot{e}^{p/q - 1} k_s \| s \| - \frac{1}{\beta} \frac{p}{q} \dot{e}^{p/q - 1} \frac{\tilde{L}_2^2}{(\varepsilon_2 - \| s \|)^2} k_s \| s \|$$

where $\Delta_1 = \| \zeta_2 \| + \frac{\tilde{L}_2^2}{(\varepsilon_2 - \| s \|)^2} \| \zeta_2 \|$ and $\Delta_1 \ge 0$. Thus, one has:

$$\dot{V}_{BF} \le -\zeta_1 \frac{\tilde{L}_2^2}{(\varepsilon_2 - \| s \|)^2} \| s \| - \frac{1}{\beta} \frac{p}{q} \dot{e}^{p/q - 1} k_s \| s \| - \frac{1}{\beta} \frac{p}{q} \dot{e}^{p/q - 1} \frac{\tilde{L}_2^2}{(\varepsilon_2 - \| s \|)^2} k_s \| s \| - \zeta_1 \frac{\tilde{L}_2}{\varepsilon_2 - \| s \|} \| s \| + \Delta_1$$
$$= -\zeta_1 \frac{\tilde{L}_2^2}{(\varepsilon_2 - \| s \|)^2} \| s \| - \frac{1}{\beta} \frac{p}{q} \dot{e}^{p/q - 1} k_s \| s \| - \frac{1}{\beta} \frac{p}{q} \dot{e}^{p/q - 1} \frac{\tilde{L}_2^2}{(\varepsilon_2 - \| s \|)^2} k_s \| s \| - \frac{1}{\varepsilon_2 - \| s \|} \left( \tilde{L}_2 \zeta_1 \| s \| - (\varepsilon_2 - \| s \|) \Delta_1 \right)$$
$$= -\zeta_1 \frac{\tilde{L}_2^2}{(\varepsilon_2 - \| s \|)^2} \| s \| - \frac{1}{\beta} \frac{p}{q} \dot{e}^{p/q - 1} k_s \| s \| - \frac{1}{\beta} \frac{p}{q} \dot{e}^{p/q - 1} \frac{\tilde{L}_2^2}{(\varepsilon_2 - \| s \|)^2} k_s \| s \| - \frac{\Delta_1}{\varepsilon_2 - \| s \|} \left( \frac{\tilde{L}_2 \zeta_1}{\Delta_1} \| s \| + \| s \| - \varepsilon_2 \right)$$

where $\frac{\tilde{L}_2 \zeta_1}{\Delta_1} \| s \| + \| s \| - \varepsilon_2 \ge 0$. If $\| s \| \ge \varepsilon_2$, one has $\frac{\tilde{L}_2 \zeta_1}{\Delta_1} \| s \| + \| s \| - \varepsilon_2 > 0$. If $\| s \| \le \varepsilon_2$, one has $\Delta_1 = 0$ and $\dot{V}_{BF} < 0$. Therefore, the control law (32) is asymptotically stable. □

4. Simulations and Experiments

4.1. Image Processing and Binocular Visual Matching Experiments

In order to verify the proposed algorithm, underwater images from the tank and the oceanic environment have been processed with the fast guided filter and the improved unsharp masking algorithm. From the processing results, the proposed algorithm better preserves image edge information and smooths background details. The comparison of the image processing algorithms is shown in Figure 6.
To prove the robustness and accuracy of the visual matching algorithm of this paper, the classical SIFT [41] (scale-invariant feature transform) and ORB have been compared with it on feature point extraction and matching (see Figure 7 and Figure 8). The feature points extracted by the SIFT algorithm are comparatively sparse, resulting in few matching pairs on the small objects. For the ORB algorithm, unclear margins and textures can result in few or unstable matching pairs. In contrast, the feature points extracted by the proposed algorithm yield dense and accurate matching pairs with strong distinguishability. Regarding matching efficiency (see Table 1), the non-maximum suppression and the improved RBRIEF descriptor remove repeated and ineffective matching points, which reduces the time required and improves the matching accuracy. The matching efficiency of the improved method is significantly better than that of the ORB algorithm and slightly better than that of the SIFT algorithm. In terms of real-time performance, the algorithm proposed in this paper takes slightly longer than the ORB algorithm in the feature point extraction process. Although the proposed algorithm improves the feature description compared with ORB, increasing the algorithm complexity, the time consumed in the final feature point description process is still smaller than that of the ORB algorithm. Therefore, in terms of overall real-time performance, the algorithm proposed in this paper is superior to both the ORB algorithm and the SIFT algorithm.
To prove the underwater visual localization accuracy of the proposed method in this paper, the localization results of the three algorithms were compared with the ground truth values obtained in the underwater experiment, as shown in Table 2. For the three tested targets: sea cucumber, scallop, and sea urchin, the improved algorithm proposed in this paper shows significantly higher accuracy than ORB and SIFT algorithms, and the coordinate values after transformation by this algorithm are within the allowable error range.
To prove the effect of the improved RANSAC algorithm on mismatch elimination, Figure 9 illustrates the comparison between the RANSAC and the improved RANSAC algorithm in the matching map. From Figure 9 and Table 3, the iteration time of RANSAC is clearly unstable, with fewer accurate matching points than the improved RANSAC algorithm of this paper. The results show that the proposed algorithm is more accurate, with more correct matching points.

4.2. Simulations on Proposed Non-Singular Fast Terminal Sliding-Mode Controller

This numerical simulation is for the autonomous grasping task of the underwater vehicle manipulator system after discovering a target. This paper focuses on the control problem of target grasping for underwater robots based on visual-assisted localization. It mainly addresses the issues of pose accuracy and frequency during the target grasping process of the underwater vehicle manipulator system (UVMS), as well as the kinematic planning and UVMS dynamic control for the target grasping task after obtaining the relative distance between the robot and the target.
The simulated process of target grasping for the UVMS based on visual-assisted localization is as follows. When the target is detected by the visual system, the initial accurate value is obtained from binocular ranging and the desired pose is obtained for both the vehicle and the manipulator; subsequent binocular ranging values are used by the localization system. Based on the current position state and a set target or reference position, the vehicle position error $e$ and the sliding mode surface $s$ are calculated. Using the position error $e$, the error derivative $\dot{e}$ and the sliding mode surface $s$, the adaptive update law is obtained. $\tilde{K}_{BF1}(\dot{e})$ is calculated from the error derivative $\dot{e}$ and a predefined $\varepsilon_1$, and $\tilde{K}_{BF2}(s)$ from the sliding mode surface $s$ and a predefined $\varepsilon_2$. The joint drivers rotate the individual joints of the robotic arm, and the coupling forces $\tau_{\zeta mani}$ are calculated. The UVMS coordination controller for target grasping based on the quasi-barrier function is then obtained from the error $e$, the sliding mode surface $s$ and the quasi-barrier functions $\tilde{K}_{BF1}(\dot{e})$ and $\tilde{K}_{BF2}(s)$, and the thrust allocation is used to calculate the required thrust for each thruster.
In order to analyze the proposed non-singular fast terminal sliding-mode controller on the UVMS, the non-singular fast terminal sliding-mode controller (NTSMC) of Section 3.1, the adaptive non-singular fast terminal sliding-mode controller (ANTSMC) of Section 3.2 and the quasi-barrier-function-based adaptive non-singular fast terminal sliding-mode controller (QBF-ANTSMC) of Section 3.3 have been compared. The oceanic experiment platform with a three-DOF manipulator was used for the simulations. The Denavit–Hartenberg parameters of the manipulator are listed in Table 4. The mass and center of gravity position of the UVMS are displayed in Table 5, and the dynamic parameters are displayed in Table 6. The corresponding controller parameters are set out in Table 7, Table 8 and Table 9. Figure 10 illustrates the single-DOF control results in the longitudinal, vertical and heading directions.
Since the motion control of UVMS sometimes requires multiple degrees of freedom to move together, this paper conducts simulation experiments on multiple degrees of freedom of UVMS. The desired trajectory for the simultaneous motion of four degrees of freedom in the simulation process is as follows:
$$\eta_d = \left[ r \sin(wt), \; r - r\cos(wt), \; r \sin(wt), \; 0, \; 0, \; wt \right]^T$$

where $r = 2$ and $w = \frac{2\pi}{40} \ \mathrm{rad/s}$. We conducted simulation experiments to control the UVMS system using three different methods: NTSMC, ANTSMC and QBF-ANTSMC.
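To make the simulated procedure concrete, a minimal single-axis closed-loop skeleton is sketched below. It reuses the ntsmc, update_adaptive_gains and quasi_barrier sketches from Section 3, tracks one sinusoidal component of the reference above, and replaces the full (6+n)-DOF UVMS model with a double integrator plus a placeholder current disturbance, so it only illustrates the order of the computations, not the reported results.

```python
import numpy as np

r, w, dt, T = 2.0, 2.0 * np.pi / 40.0, 0.01, 40.0
x, x_dot = 0.0, 0.0                       # single-axis stand-in for one vehicle DOF
d_hat = (0.0, 0.0, 0.0)                   # adaptive bound estimates
history = []

for k in range(int(T / dt)):
    t = k * dt
    x_ref = r * np.sin(w * t)             # one component of eta_d
    xd_ref = r * w * np.cos(w * t)
    xdd_ref = -r * w ** 2 * np.sin(w * t)
    e, e_dot = x_ref - x, xd_ref - x_dot  # error taken as reference minus state

    u, s = ntsmc(e, e_dot, xdd_ref, f_x=0.0, lam=0.0, L_g=0.0)       # equivalent-control part
    d_hat, delta_hat = update_adaptive_gains(d_hat, s, e, e_dot, dt)
    u += (delta_hat + 0.01) * quasi_barrier(s, 0.98, 0.9) * np.sign(s)  # barrier-weighted switching

    disturbance = 0.3 * np.sin(0.5 * t)   # placeholder current disturbance
    x_dot += (u + disturbance) * dt       # double-integrator plant stand-in
    x += x_dot * dt
    history.append((t, x_ref, x))
```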
Figure 11 presents the comparison of the vehicle's three-dimensional trajectory tracking. From the results, the vehicle can reach the desired position in finite time with the NTSMC controller, but an upper limit of the disturbance has to be assumed; moreover, this upper limit must be set accurately to eliminate the preliminary tracking errors. Under ANTSMC control, the vehicle can reach the desired position in finite time without assuming a disturbance upper limit, but still with significant chattering. In contrast, QBF-ANTSMC obtains a trajectory that coincides with the desired trajectory, which means that the proposed controller achieves accurate control results and suppresses the chattering phenomenon.
By utilizing binocular vision to obtain target information and acquiring the desired pose of the robot, subsequent simulation experiments are conducted for trajectory tracking. The initial position of the robot, target position and desired pose are shown in Table 10 for this simulation. Figure 12 and Figure 13 illustrate the position-based manipulation control process.

4.3. Oceanic Experiments on Small Object Positioning and Grasp

Based on the effective numerical simulation using physical parameter data, we conducted oceanic experiments to verify the effectiveness of the proposed binocular vision algorithm and controller. A UVMS platform (see Figure 14) equipped with a three-joint manipulator, binocular vision, a DVL, a magnetic compass, four horizontal vector thrusters and two vertical thrusters was used. The experiments were performed in the oceanic environment of the Chinese Yellow Sea region close to Zhangzidao Island, Liaoning Province.
Figure 15 illustrates the process of the oceanic experiments (see Figure 15a). At first, the UVMS cruises along the planned trajectory and looks for the sea organism target. When the target has been detected, the UVMS determines the relative distance to the target from the binocular visual matching of the proposed improved ORB algorithm. Then, the vehicle begins to float toward the target according to the binocular matching and dead-reckoning feedback. Figure 15b shows the current disturbance measured by the DVL during the capture process, which is about 0.5 knots in a single direction. Figure 15c–g illustrate the approach and capture control process of the vehicle and the end effector; the proposed QBF-ANTSMC controller realizes accurate and stable control under the complicated current disturbance and suppresses the chattering phenomenon. Figure 15h illustrates the process from the diver's perspective with a GoPro underwater camera.

5. Conclusions

In order to realize vision-based grasp control for small targets, this study has proposed an improved binocular visual measurement method and a novel QBF-ANTSMC for UVMS position-based manipulation control. First, an improved unsharp masking algorithm has been proposed to further enhance the weak texture regions of blurred, low-contrast images. Secondly, an improved ORB feature-matching method has been developed with an adaptive threshold, non-maximum suppression and an improved random sample consensus. Thirdly, an adaptive non-singular terminal sliding mode controller with a quasi-barrier function has been proposed to handle unknown disturbances with unknown bounds and to suppress the chattering problem. Oceanic experiments with binocular vision measurement and QBF-ANTSMC manipulation control were conducted to prove the performance of the proposed algorithm and controller. However, due to the complexity of the underwater environment, there are still certain errors in the binocular ranging values. In the next step, we will improve the accuracy of binocular visual ranging by studying new algorithms. The simulation environment and the sea trial environment for the control algorithm in this article were relatively flat seabed environments, but obstacles often exist in many situations. In the next step, we will also improve the control algorithm for situations where obstacles exist around the target object.

Author Contributions

Conceptualization, T.J. and H.H.; data curation, Y.S., Z.Z. and X.H.; formal analysis, L.L. and X.H.; funding acquisition, H.H., H.Q. and X.C.; investigation, Y.S.; methodology, T.J. and L.L.; project administration, H.H. and H.Q.; resources, H.H., H.Q. and X.C.; validation, Z.Z.; writing—original draft, T.J.; writing—review and editing, T.J. and Y.S. All authors have read and agreed to the published version of the manuscript.

Funding

This project is supported by the National Natural Science Foundation of China (No. U21A20490, 52025111, 61633009, 51779058), and funded by Joint guidance project of Heilongjiang Natural Science Foundation (LH2020F026, YQ2020E033). All of the support provided is highly appreciated.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Prats, M.; Ribas, D.; Palomeras, N.; García, J.C.; Nannen, V.; Wirth, S.; Fernández, J.J.; Beltrán, J.P.; Campos, R.; Ridao, P.; et al. Reconfigurable AUV for intervention missions: A case study on underwater object recovery. Intell. Serv. Robot. 2012, 5, 19–31. [Google Scholar] [CrossRef]
  2. Carrera, A.; Palomeras, N.; Hurtós, N.; Kormushev, P.; Carreras, M. Cognitive system for autonomous underwater intervention. Pattern Recognit. Lett. 2015, 67, 91–99. [Google Scholar] [CrossRef]
  3. Yang, L.; Zhao, S.; Wang, X.; Shen, P.; Zhang, T. Deep-Sea Underwater Cooperative Operation of Manned/Unmanned Submersible and Surface Vehicles for Different Application Scenarios. J. Mar. Sci. Eng. 2022, 10, 909. [Google Scholar] [CrossRef]
  4. Prats, M.; García, J.C.; Wirth, S.; Ribas, D.; Sanz, P.J.; Ridao, P.; Gracias, N.; Oliver, G. Multipurpose Autonomous Underwater Intervention: A Systems Integration Perspective. In Proceedings of the 20th Mediterranean Conference on Control & Automation (MED), Barcelona, Spain, 3–6 July 2012; pp. 1379–1384. [Google Scholar]
  5. Choi, J.-K.; Yokobiki, T.; Kawaguchi, K. ROV-Based Automated Cable-Laying System: Application to DONET2 Installation. IEEE J. Ocean. Eng. 2018, 43, 665–676. [Google Scholar] [CrossRef]
  6. Wang, Y.; Wang, S.; Wei, Q.; Tan, M.; Zhou, C.; Yu, J. Development of an Underwater Manipulator and Its Free-Floating Autonomous Operation. IEEE/ASME Trans. Mechatron. 2016, 21, 815–824. [Google Scholar] [CrossRef]
  7. Razzanelli, M.; Casini, S.; Innocenti, M.; Pollini, L. Development of a Hybrid Simulator for Underwater Vehicles with Manipulators. IEEE J. Ocean. Eng. 2020, 45, 1235–1251. [Google Scholar] [CrossRef]
  8. Lynch, B.; Ellery, A. Efficient Control of an AUV-Manipulator System: An Application for the Exploration of Europa. IEEE J. Ocean. Eng. 2014, 39, 552–570. [Google Scholar] [CrossRef]
  9. Youakim, D.; Cieslak, P.; Dornbush, A.; Palomer, A.; Ridao, P.; Likhachev, M. Multirepresentation, Multiheuristic A* search-based motion planning for a free-floating underwater vehicle-manipulator system in unknown environment. J. Field Robot. 2020, 37, 925–950. [Google Scholar] [CrossRef]
  10. Huang, H.; Tang, Q.; Li, H.; Liang, L.; Li, W.; Pang, Y. Vehicle-Manipulator System Dynamic Modeling and Control for Underwater Autonomous Manipulation. Multibody Syst. Dyn. 2017, 41, 367–390. [Google Scholar] [CrossRef]
  11. Rizzini, D.L.; Kallasi, F.; Aleotti, J.; Oleari, F.; Caselli, S. Integration of a stereo vision system into an autonomous underwater vehicle for pipe manipulation tasks. Comput. Electr. Eng. 2016, 58, 560–571. [Google Scholar] [CrossRef]
  12. Ridao, P.; Carreras, M.; Ribas, D.; Sanz, P.J.; Oliver, G. Intervention AUVs: The next challenge. Annu. Rev. Control. 2015, 40, 227–241. [Google Scholar] [CrossRef]
  13. Taryudi; Wang, M.-S. Eye to hand calibration using ANFIS for stereo vision-based object manipulation system. Microsyst. Technol. 2018, 24, 305–3177. [Google Scholar] [CrossRef]
  14. Chang, J.-W.; Wang, R.-J.; Wang, W.-J.; Huang, C.-H. Implementation of an Object-Grasping Robot Arm Using Stereo Vision Measurement and Fuzzy Control. Int. J. Fuzzy Syst. 2015, 17, 193–205. [Google Scholar] [CrossRef]
  15. Chang, W.-C. Robotic assembly of smartphone back shells with eye-in-hand visual servoing. Robot. Comput. Manuf. 2018, 50, 102–113. [Google Scholar] [CrossRef]
  16. Peñalver, A.; Pérez, J.; Fernández, J.; Sales, J.; Sanz, P.; García, J.; Fornas, D.; Marín, R. Visually-guided manipulation techniques for robotic autonomous underwater panel interventions. Annu. Rev. Control. 2015, 40, 201–211. [Google Scholar] [CrossRef]
  17. Sivčev, S.; Rossi, M.; Coleman, J.; Dooly, G.; Omerdić, E.; Toal, D. Fully automatic visual servoing control for work-class marine intervention ROVs. Control. Eng. Pract. 2018, 74, 153–167. [Google Scholar] [CrossRef]
  18. Lin, Y.H.; Shou, K.P.; Huang, L.J. The initial study of LLS-based binocular stereo-vision system on underwater 3D image reconstruction in the laboratory. J. Mar. Sci. Technol. 2017, 22, 513–532. [Google Scholar] [CrossRef]
  19. Li, T.; Liu, C.; Liu, Y.; Wang, T.; Yang, D. Binocular stereo vision calibration based on alternate adjustment algorithm. Opt.-Int. J. Light Electron Opt. 2018, 173, 13–20. [Google Scholar] [CrossRef]
  20. Hu, Y.; Chen, Q.; Feng, S.; Tao, T.; Asundi, A.; Zuo, C. A new microscopic telecentric stereo vision system- Calibration, rectification, and three-dimensional reconstruction. Opt. Lasers Eng. 2019, 113, 14–22. [Google Scholar] [CrossRef]
  21. Park, J.-S.; Kim, H.-E.; Kim, H.-Y.; Lee, J.; Kim, L.-S. A vision processor with a unified interest point detection and matching hardware for accelerating stereo matching algorithm. IEEE Trans. Circuits Syst. Video Technol. 2016, 26, 2328–2343. [Google Scholar] [CrossRef]
  22. Huo, G.; Wu, Z.; Li, J.; Li, S. Underwater Target Detection and 3D Reconstruction System Based on Binocular Vision. Sensors 2018, 18, 3570. [Google Scholar] [CrossRef] [PubMed]
  23. Negahdaripour, S.; Firoozfam, P. An ROV Stereovision System for Ship-Hull Inspection. IEEE J. Ocean. Eng. 2006, 31, 551–564. [Google Scholar] [CrossRef]
  24. Ttofis, C.; Kyrkou, C.; Theocharides, T. A Low-Cost Real-Time Embedded Stereo Vision System for Accurate Disparity Estimation Based on Guided Image Filtering. IEEE Trans. Comput. 2016, 65, 2678–2693. [Google Scholar] [CrossRef]
  25. Zhuang, S.; Zhang, X.; Tu, D.; Zhang, C.; Xie, L. A standard expression of underwater binocular vision for stereo matching. Meas. Sci. Technol. 2020, 31, 115012. [Google Scholar] [CrossRef]
  26. Li, Y.; Zhang, Y.; Li, H.; Zhang, W.; Zhang, Q. Epipolar geometry and stereo matching algorithm for underwater fish-eye images. Int. J. Adv. Robot. Syst. 2018, 15, 1729881418764715. [Google Scholar] [CrossRef]
  27. Rosten, E.; Porter, R.; Drummond, T. Faster and Better: A Machine Learning Approach to Corner Detection. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 105–119. [Google Scholar] [CrossRef]
  28. Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the 2011 IEEE International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2564–2571. [Google Scholar]
  29. Osipov, A.; Shumaev, V.; Ekielski, A.; Gataullin, T.; Suvorov, S.; Mishurov, S.; Gataullin, S. Identification and Classification of Mechanical Damage during Continuous Harvesting of Root Crops Using Computer Vision Methods. IEEE Access 2022, 10, 28885–28894. [Google Scholar] [CrossRef]
  30. Li, D.; Du, L. AUV Trajectory Tracking Models and Control Strategies: A Review. J. Mar. Sci. Eng. 2021, 9, 1020. [Google Scholar] [CrossRef]
  31. Haugaløkken, B.O.A.; Skaldebø, M.B.; Schjølberg, I. Monocular vision-based gripping of objects. Robot. Auton. Syst. 2020, 131, 103589. [Google Scholar] [CrossRef]
  32. Cai, M.; Wang, S.; Wang, Y.; Wang, R.; Tan, M. Coordinated Control of Underwater Biomimetic Vehicle–Manipulator System for Free Floating Autonomous Manipulation. IEEE Trans. Syst. Man Cybern. Syst. 2020, 51, 4793–4803. [Google Scholar] [CrossRef]
  33. Li, J.; Huang, H.; Xu, Y.; Wu, H.; Wan, L. Uncalibrated visual servoing for underwater vehicle manipulator systems with an eye in hand configuration camera. Sensors 2019, 24, 5469. [Google Scholar] [CrossRef] [PubMed]
  34. Chen, H.-T.; Song, S.-M.; Zhu, Z.-B. Robust Finite-time Attitude Tracking Control of Rigid Spacecraft under Actuator Saturation. Int. J. Control. Autom. Syst. 2018, 16, 1–15. [Google Scholar] [CrossRef]
  35. Boukattaya, M.; Mezghani, N.; Damak, T. Adaptive nonsingular fast terminal sliding-mode control for the tracking problem of uncertain dynamical systems. ISA Trans. 2018, 77, 1–19. [Google Scholar] [CrossRef]
  36. Kong, S.; Fang, X.; Chen, X.; Wu, Z.; Yu, J. A NSGA-II-Based Calibration Algorithm for Underwater Binocular Vision Measurement System. IEEE Trans. Instrum. Meas. 2020, 69, 794–803. [Google Scholar] [CrossRef]
  37. Lwin, K.N.; Minami, M.; Mukada, N.; Myint, M.; Yamada, D.; Yanou, A.; Matsuno, T.; Saitou, K.; Godou, W.; Sakamoto, T. Visual Docking against Bubble Noise with 3-D Perception Using Dual-Eye Cameras. IEEE J. Ocean. Eng. 2020, 45, 247–270. [Google Scholar] [CrossRef]
  38. Marani, G.; Yuh, J. Introduction to Autonomous Manipulation; Springer: Berlin/Heidelberg, Germany, 2014. [Google Scholar]
  39. Antonelli, G. Underwater Robots; Springer: Berlin/Heidelberg, Germany, 2003. [Google Scholar]
  40. Chen, Y.; Yan, L.; Xu, B.; Liu, Y. Multi-Stage Matching Approach for Mobile Platform Visual Imagery. IEEE Access 2019, 7, 160523–160535. [Google Scholar] [CrossRef]
  41. Kamyshova, G.; Osipov, A.; Gataullin, S.; Korchagin, S.; Ignar, S.; Gataullin, T.; Terekhova, N.; Suvorov, S. Artificial Neural Networks and Computer Vision’s-Based Phytoindication Systems for Variable Rate Irrigation Improving. IEEE Access 2022, 10, 8577–8589. [Google Scholar] [CrossRef]
Figure 1. The process of binocular vision-based manipulation control.
Figure 2. Physical layout diagram of the thruster positions.
Figure 3. Mathematical model diagram of the thruster positions.
Figure 4. Basic barrier function and quasi-barrier function.
Figure 5. The frame of the overall controller.
Figure 6. Comparisons of the image processing algorithms.
Figure 7. Comparison of results for feature point extraction.
Figure 8. Comparison of results for feature point matching.
Figure 9. Comparison of results for accurate feature point matching.
Figure 10. The single-dimensional DOF control results.
Figure 11. Three-dimensional control results for comparison.
Figure 12. Manipulation simulation process.
Figure 13. Manipulation simulation result.
Figure 14. The UVMS platform in the oceanic experiment.
Figure 15. Oceanic experiment results.
Table 1. Time required and numbers for feature point extraction.

Descriptors | Left Map Feature Points | Right Map Feature Points | Feature Point Extraction Time (ms) | Feature Point Description Time (ms) | Coarse Matching Point Pairs | Feature Point Matching Ratio
SIFT | 125 | 174 | 5970 | 233 | 30 | 24%
OFAST | 1865 | 1909 | 127 | 387 | 158 | 8.47%
Proposed algorithm | 1591 | 1627 | 144 | 184 | 484 | 30.42%
Table 2. Comparison of image target coordinates.

Object | Descriptors | True Value (mm) | Direct Calibration Calculation Value (mm) | Relative Error | Calibration Calculation Value after Conversion (mm) | Relative Error
Sea cucumber | Proposed algorithm | (90.50, −117.42, 1175.40) | (100.51, −107.53, 1303.09) | (11.06%, 8.42%, 10.86%) | (93.84, −119.79, 1186.23) | (3.69%, −2.02%, 0.921%)
Sea cucumber | SIFT | (90.50, −117.42, 1175.40) | — | — | (94.73, −114.414, 1189.12) | (4.68%, −2.56%, 1.17%)
Sea cucumber | OFAST | (90.50, −117.42, 1175.40) | — | — | (102.49, −108.90, 1214.28) | (13.25%, −7.25%, 3.31%)
Pectinid | Proposed algorithm | (94.62, 16.44, 1175.40) | (83.73, 13.96, 1259.79) | (−11.21%, −15.08%, 7.18%) | (92.57, 15.86, 1187.78) | (−2.21%, −3.52%, 1.05%)
Pectinid | SIFT | (94.62, 16.44, 1175.40) | — | — | (91.97, 15.71, 1191.04) | (−2.80%, −4.46%, 1.33%)
Pectinid | OFAST | (94.62, 16.44, 1175.40) | — | — | (87.11, 14.36, 1219.72) | (−7.94%, −12.64%, 3.77%)
Sea urchin | Proposed algorithm | (61.52, 124.56, 1175.40) | (70.89, 142.26, 1244.32) | (15.23%, 13.41%, −5.86%) | (62.45, 127.78, 1187.02) | (1.51%, 2.58%, 1.02%)
Sea urchin | SIFT | (61.52, 124.56, 1175.40) | — | — | (62.70, 128.63, 1190.59) | (1.91%, 3.27%, 1.29%)
Sea urchin | OFAST | (61.52, 124.56, 1175.40) | — | — | (64.86, 136.10, 1218.16) | (5.42%, 9.27%, 3.66%)
Table 3. Comparisons between the proposed and the RANSAC algorithm.

Group | Algorithm | Coarse Matching Points | Accurate Matching Points | Time to Eliminate Mismatches (ms) | Ratio of Correct Matching Points
First group | RANSAC | 554 | 466 | 5.4 | 84.11%
First group | Proposed algorithm | 554 | 489 | 2.7 | 88.27%
Second group | RANSAC | 484 | 339 | 9.2 | 70.04%
Second group | Proposed algorithm | 484 | 368 | 3.5 | 76.03%
Table 4. Denavit–Hartenberg parameters of the manipulator.

Joint | a_i (m) | α_i (rad) | d_i (m) | q_i (rad) | a_i (m)
1 | 0 | π/2 | 0 | q_1 | 0
2 | 0.60 | 0 | 0 | q_2 | 0.5
3 | 0.30 | 0 | 0 | q_3 | 0.5
Table 5. Mass and center of gravity position of the UVMS.

m (kg) | x_g (mm) | y_g (mm) | z_g (mm)
82.13 | 3.04 × 10² | 2.79 × 10² | 3.19 × 10²
Table 6. Main dynamic parameters of the system.

Parameter | Vehicle | Link 1 | Link 2 | Link 3
M (kg) | 82.13 | 2.60 | 3.52 | 3.16
I_xx (kg·m²) | 4.95 | 0.03 | 0 | 0
I_yy (kg·m²) | 7.36 | 0.03 | 0.16 | 0.16
I_zz (kg·m²) | 8.66 | 0 | 0.16 | 0.16
Table 7. Parameters of NTSMC.

q | p | β | λ | L_g
3 | 5 | 2 | 0.4 | 6
Table 8. Parameters of ANTSMC.

μ_0 = 0.1 | μ_1 = 0.1 | μ_2 = 0.1
k_1 = 20 | k_2 = 20 | k_3 = 20
k_4 = 200 | k_5 = 200 | k_6 = 200
η_1 = 0.11 | η_2 = 0.11 | η_3 = 0.11
η_4 = 1 | η_5 = 1 | η_6 = 1
Table 9. Parameters of QBF-ANTSMC.

ε_1 | ε̃_1 | ε_2 | ε̃_2
0.98 | 0.9 | 0.98 | 0.7
Table 10. The task information for the simulation experiment.

Initial Position of the Robot (m) | Target Position (m) | Desired Vehicle Position (m) | Desired Vehicle Attitude (rad) | Desired Manipulator Attitude (rad)
(0, 0, 0) | (1.5, 0.5, 2) | (0.9518, 0.5, 1.5845) | (0, 0.1231, 0) | (0, −0.2159, 0.0929)