Article

Target Capturing Control for Space Robots with Unknown Mass Properties: A Self-Tuning Method Based on Gyros and Cameras

State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150000, China
*
Authors to whom correspondence should be addressed.
Sensors 2016, 16(9), 1383; https://doi.org/10.3390/s16091383
Submission received: 24 June 2016 / Revised: 24 August 2016 / Accepted: 25 August 2016 / Published: 30 August 2016
(This article belongs to the Special Issue Advanced Robotics and Mechatronics Devices)

Abstract:
Capturing a satellite with a free-floating space robot remains a challenging task because of the non-fixed base and the unknown mass properties. In this paper, gyro and eye-in-hand camera data are adopted as an alternative means of solving this problem. For this improved system, a new modeling approach that reduces the complexity of system control and identification is proposed. With the newly developed model, the space robot is equivalent to a ground-fixed manipulator system. Accordingly, a self-tuning control scheme is applied to handle this control problem with unknown parameters. To determine the controller parameters, an estimator based on the least-squares technique is designed to identify the unknown mass properties in real time. The proposed method is tested on a credible three-dimensional ground verification experimental system, and the experimental results confirm the effectiveness of the proposed control scheme.

1. Introduction

With the development of space technology, modern space missions are becoming much more complex than ever before, including tasks such as on-orbit refueling, damaged module replacement, target satellite capture, etc. To complete such challenging missions, many attempts have been made to design special robots for space applications. The free-floating space robot was invented as a particular kind of robotic system for on-orbit satellite recovery and maintenance. It consists of a free-flying satellite and robotic manipulators mounted on it, so that it can perform on-orbit approaching and target operating tasks. To extend the service life, the position and attitude of the carrier satellite are left totally uncontrolled during operations. There have been several successful free-flying space robots, such as Engineering Test Satellite VII (ETS-VII), the first space robot project to autonomously capture an on-orbit target [1,2,3], and the Orbital Express program [4,5,6], etc. Other free-floating space robot projects have also been carried out as described in [7,8].
The kinematics, dynamics and control of a free-floating space robot are much more complex than those of its ground counterparts because of the free-floating base satellite. Since the base satellite is free-floating, any movement of the robot arm may change the base satellite position and attitude through the dynamical interactions between the manipulator and the satellite carrier [9,10]. To solve this problem, a modeling technique was proposed by Umetani and Yoshida [11]. They introduced the momentum conservation law into the space robot kinematics formulation and put forward the concept of the generalized Jacobian matrix. Another successful modeling technique, known as the Virtual Manipulator (VM) technique, was proposed by Vafa and Dubowsky [12]. They simplified the kinematics equation by decoupling the translational degrees of freedom of the system centroid. The VM technique sets up a virtual ground-fixed manipulator to describe the motions of the free-floating space robot. Dubowsky and Papadopoulos compared the structure of the motion equations for space robots and ground-fixed robots in [13]. They concluded that if the carrier satellite attitude can be measured or calculated, almost all ground robot control strategies can be applied to a free-floating space robot. Based on these models and concepts, a number of control methods have been proposed for free-floating space robots. A variable structure control strategy was presented by Fang [14], where a neural network controller is used as the dynamic compensator. Huang et al. studied the tethered space robot (TSR) and presented several trajectory planning and control methods [15,16,17]. To stabilize the TSR during the capture impact with the target, Huang derived the impact dynamic model for target capturing, and an adaptive robust controller was designed accordingly [18].
Because the collision during capture and the original rotation of the target lead to tumbling of the tethered space robot-target combination, a robust adaptive backstepping controller was designed to stabilize the system after the target is captured [19]. Huang also investigated spacecraft attitude takeover control for extending the lifetime of fuel-exhausted spacecraft [20]. He designed a reconfigurable control system that handles the attitude control problem under mass property changes. Pathak proposed a robust overwhelming control method for space robots and verified it by numerical simulations [21]. Tsuchiya studied the satellite attitude dynamics of space robots and proposed an attitude control scheme based on reaction wheels [22]. Rastegari and Moosavian suggested a multiple impedance control approach for free-flying space robots to track the path and tune the internal forces at the same time [23]. Zarafshan and Moosavian investigated the dynamics of space robots with flexible elements and proposed a hybrid suppression control method [24]. Considering that pure motion control is not applicable to satellite-capturing tasks, sensor-based control methods are also applied in space robot engineering. One of the most effective sensor-based robot control strategies is visual servoing. Instead of the "look-then-move" mode, which combines the visual sensor and the robotic system in an open-loop fashion, visual servoing introduces a visual feedback control loop to increase the accuracy of the overall system. The basic conceptual framework of visual servoing for robotic manipulators is introduced in Hutchinson's article [25], where the two major classes of visual servoing systems are discussed in detail. Chaumette and Hutchinson described the basic approaches and advanced techniques of visual servoing in [26,27], where the performance and stability issues of the two visual servoing schemes were discussed.
Visual servoing techniques have also been applied to space robot engineering by several scholars, as reported in [28,29,30,31].
Although many achievements have been made in space robot modeling and control, several issues still need to be considered in practical engineering. One practical problem is that the mass properties of the carrier satellite keep changing due to fuel consumption, solar panel adjustments, carrying a captured payload of undetermined mass, etc. These unknown properties are critical and are recognized as a challenging problem in space robot target capture control. Since few efforts have been made to design controllers for space robots with unknown satellite mass properties, the undetermined properties generally need to be identified in advance. Yoshida and Abiko presented an approach for determining space robot inertia parameters [32]. Their identification algorithm is based on the conservation of momentum and the gravity gradient torque. In this method, only the reaction wheel motion rates need to be measured. Ma and Dang proposed another method to identify the carrier satellite inertia properties [33]. By using the linear and angular satellite velocities, the issue is treated as a linear identification problem. According to the existing literature, the unknown satellite mass properties are generally determined based on the conservation-of-momentum principle. The main advantages of such identification methods are: (1) they don't consume any fuel, because the properties are estimated from manipulator motions; (2) since accelerations and forces aren't directly involved in the calculation, in theory only velocities need to be measured, which are generally less noisy. However, several difficulties must still be considered in identifying the satellite mass, which plays a very important role in space robot dynamics. One difficulty is that the carrier-satellite mass has quite low sensitivity to the angular motions of the satellite, which means it can't be identified from reaction wheel motion rates or gyro signals alone.
The other problem is that although the satellite mass can be identified from the linear velocities of the carrier satellite, with no acceleration or force directly involved in the computation, in practice the linear velocities can't be measured as easily as the angular velocities. In engineering, linear velocities are usually integrated from accelerometer data, which introduces drift errors. Based on these considerations, new identification methods that handle these issues are needed.
In this paper, a new identification strategy is proposed to estimate the unknown mass properties without computing the linear velocity of the carrier satellite. The eye-in-hand camera signals, which are commonly used in space robot target capture, are adopted together with the gyro data to identify both the satellite mass and the centroid position. Because the unknown mass properties are estimated in real time, a self-tuning control scheme is applied to handle this capturing control problem with unknown parameters. The whole approach is established on a new space robot model that reduces the computational complexity. The rest of this paper is organized as follows: Section 2 describes the new space robot modeling method proposed in this paper; Section 3 designs the self-tuning scheme and presents the new identification method; Section 4 introduces the ground verification experimental system and gives the experimental results; and the results are discussed in Section 5.

2. Free-Floating Space Robot Kinematics Modeling

The space robot system, consisting of a carrier satellite and a space manipulator mounted on it, is shown in Figure 1.
$a_i$ is the position vector from the $i$th joint of the robotic arm to the centroid of the $i$th link. $b_i$ is the position vector from the centroid of the $i$th link to the $(i+1)$th joint. $k_i$ is the rotation vector of the $i$th joint. $r_0$ represents the position of the carrier satellite. $b_0$ is the position vector from the carrier satellite centroid to the first joint.
According to Figure 1, the position of the space robot end-effector can be determined as:
$$P_e = r_0 + \sum_{i=1}^{n} a_i + \sum_{i=0}^{n} b_i \qquad (1)$$
where $r_0$ is not a constant vector, because the carrier satellite is free-floating in space. The centroid position of the space robot system can be expressed as:
$$r_g = \frac{\sum_{i=1}^{n} m_i r_i + m_0 r_0}{M} \qquad (2)$$
where $m_i$ is the mass of the $i$th link; $m_0$ is the mass of the carrier satellite; $r_i$ is the centroid position of the $i$th link; $M$ is the total mass of the space robot system. They can be obtained by:
$$r_i = r_0 + \sum_{k=0}^{i-1} b_k + \sum_{k=1}^{i} a_k \qquad (3)$$
$$M = \sum_{i=0}^{n} m_i \qquad (4)$$
Assuming that no external force acts on the space robot, the centroid position of the space robot system does not change. By Equations (2)-(4), the position of the carrier satellite can be determined as:
$$r_0 = r_g - \frac{1}{M}\sum_{i=1}^{n}\Big(\sum_{k=i}^{n} m_k\Big) a_i - \frac{1}{M}\sum_{i=0}^{n-1}\Big(\sum_{k=i+1}^{n} m_k\Big) b_i \qquad (5)$$
According to Equation (5), a hypothetical manipulator can be set up to describe the linear motion of the carrier satellite. The length vector from the $i$th joint of the hypothetical manipulator to joint $i+1$ is defined as:
$$l_i = \begin{cases} \gamma\, b_i^* & i = 0 \\ \gamma\,(a_i^* + b_i^*) & 0 < i < n \\ \gamma\, a_i^* & i = n \end{cases} \qquad (6)$$
where:
$$\begin{cases} a_i^* = \Big(\sum_{k=i}^{n} m_k\Big) a_i \\ b_i^* = \Big(\sum_{k=i+1}^{n} m_k\Big) b_i \\ \gamma = -\dfrac{1}{M} \end{cases} \qquad (7)$$
Attaching the hypothetical manipulator to the centroid of the space robot system, the end position of the hypothetical manipulator can be calculated as:
$$P_e' = r_g + \gamma \sum_{i=1}^{n} a_i^* + \gamma \sum_{i=0}^{n-1} b_i^* = r_0 \qquad (8)$$
According to Equation (8), the carrier satellite position can be obtained from the motion of the hypothetical manipulator. In this case, the hypothetical manipulator is referred to as the carrier manipulator. The position of the space robot end-effector can then be expressed as:
$$P_e = r_0 + \sum_{i=1}^{n} a_i + \sum_{i=0}^{n} b_i = P_e' + L_{0e} \qquad (9)$$
where $L_{0e}$ is the position vector from the carrier satellite centroid to the space robot end-effector. If the carrier satellite is considered as the 0th link, the space robot itself can be regarded as another hypothetical manipulator mounted on the end of the carrier manipulator. Since it performs the same operational motions as the space manipulator, it is referred to as the service manipulator. Consequently, the free-floating space robot system is equivalent to a hypothetical ground-fixed robotic system consisting of two manipulators, as described in Figure 2.
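As a numerical sanity check of this equivalence, the sketch below (Python; a two-link arm with invented masses and link vectors, not values from the paper) builds the mass-weighted link vectors of Equation (7), reproduces the satellite position from the system centroid as in Equation (8), and verifies via Equations (2)-(3) that the mass-weighted centroid of all bodies indeed returns $r_g$:

```python
import numpy as np

# Illustrative two-link system (n = 2); masses and geometry are invented.
m = np.array([100.0, 10.0, 10.0])   # m_0 (satellite), m_1, m_2
M = m.sum()                          # Eq. (4)
gamma = -1.0 / M                     # Eq. (7); sign chosen so Eq. (8) holds

a = {1: np.array([0.5, 0.0, 0.0]), 2: np.array([0.4, 0.0, 0.0])}
b = {0: np.array([0.3, 0.0, 0.0]), 1: np.array([0.5, 0.0, 0.0]),
     2: np.array([0.4, 0.0, 0.0])}
n = 2

# Eq. (7): mass-weighted link vectors of the carrier manipulator
a_star = {i: m[i:].sum() * a[i] for i in range(1, n + 1)}
b_star = {i: m[i + 1:].sum() * b[i] for i in range(0, n)}

r_g = np.zeros(3)                    # system centroid, constant (no external force)
# Eq. (8): the carrier manipulator reproduces the satellite position
r_0 = r_g + gamma * (sum(a_star.values()) + sum(b_star.values()))

# Cross-check via Eqs. (2)-(3): mass-weighted centroid of all bodies
r_1 = r_0 + b[0] + a[1]
r_2 = r_0 + b[0] + a[1] + b[1] + a[2]
centroid = (m[0] * r_0 + m[1] * r_1 + m[2] * r_2) / M
```

With these numbers, `centroid` recovers $r_g$ to machine precision, confirming that the carrier-manipulator construction is consistent with momentum conservation.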
According to Equation (9), the velocity of the service manipulator end, which represents the space robot end-effector, can be expressed as:
$$v_e = \dot{P}_e = \dot{P}_e' + \dot{L}_{0e} \qquad (10)$$
where:
$$\dot{L}_{0e} = \sum_{i=1}^{n} \dot{a}_i + \sum_{i=0}^{n} \dot{b}_i \qquad (11)$$
$$\dot{P}_e' = \gamma \sum_{k=1}^{n} \dot{a}_k^* + \gamma \sum_{k=0}^{n-1} \dot{b}_k^* \qquad (12)$$
and $\dot{a}_i$ as well as $\dot{b}_i$ can be determined by:
$$\begin{cases} \dot{a}_i = \omega_i \times a_i \\ \dot{b}_i = \omega_i \times b_i \end{cases} \qquad (13)$$
where $\omega_i$ is the angular velocity of the $i$th link, described as:
$$\omega_i = \omega_0 + \sum_{k=1}^{i} k_k \dot{\theta}_k \qquad (14)$$
and $\omega_0$ is the satellite angular velocity; $\dot{\theta}_k$ is the velocity of the $k$th joint angle. Substituting Equations (13) and (14) into Equation (11) gives:
$$\dot{L}_{0e} = \sum_{i=1}^{n} \Big[\Big(\omega_0 + \sum_{k=1}^{i} k_k\dot{\theta}_k\Big) \times a_i\Big] + \sum_{i=1}^{n} \Big[\Big(\omega_0 + \sum_{k=1}^{i} k_k\dot{\theta}_k\Big) \times b_i\Big] + \omega_0 \times b_0 = \omega_0 \times \Big(\sum_{i=0}^{n} b_i + \sum_{i=1}^{n} a_i\Big) + \sum_{i=1}^{n} \Big[k_i \times \Big(\sum_{k=i}^{n} b_k + \sum_{k=i}^{n} a_k\Big)\dot{\theta}_i\Big] \qquad (15)$$
Defining $L_{ie}$ as the position vector from the $i$th joint to the service manipulator end, it can be determined as:
$$L_{ie} = \sum_{k=i}^{n} a_k + \sum_{k=i}^{n} b_k \quad (n \ge i \ge 1) \qquad (16)$$
According to Equations (15) and (16), $\dot{L}_{0e}$ is derived as:
$$\dot{L}_{0e} = \omega_0 \times L_{0e} + \sum_{i=1}^{n} \big[k_i \times L_{ie}\,\dot{\theta}_i\big] = (J_r - \tilde{b}_0)\,\omega_0 + J_m \dot{\theta} \qquad (17)$$
where:
$$\begin{cases} J_r = -\tilde{L}_{1e} \\ J_m = \big[\,k_1 \times L_{1e}\ \cdots\ k_i \times L_{ie}\ \cdots\ k_n \times L_{ne}\,\big] \end{cases}$$
According to Equations (7) and (13), $\dot{a}_k^*$ and $\dot{b}_k^*$ can be written as:
$$\begin{cases} \dot{a}_k^* = \omega_k \times \Big(\sum_{q=k}^{n} m_q\Big) a_k = \omega_k \times a_k^* \\ \dot{b}_k^* = \omega_k \times \Big(\sum_{q=k+1}^{n} m_q\Big) b_k = \omega_k \times b_k^* \end{cases} \qquad (18)$$
Substituting Equations (14) and (18) into Equation (12), we have:
$$\dot{P}_e' = \gamma \sum_{k=1}^{n}\Big[\Big(\omega_0 + \sum_{i=1}^{k} k_i\dot{\theta}_i\Big) \times a_k^*\Big] + \gamma \sum_{k=1}^{n-1}\Big[\Big(\omega_0 + \sum_{i=1}^{k} k_i\dot{\theta}_i\Big) \times b_k^*\Big] + \gamma\,\omega_0 \times b_0^* = \gamma\,\omega_0 \times \Big(\sum_{k=0}^{n-1} b_k^* + \sum_{k=1}^{n} a_k^*\Big) + \gamma \sum_{i=1}^{n}\Big[k_i \times \Big(\sum_{k=i}^{n-1} b_k^* + \sum_{k=i}^{n} a_k^*\Big)\dot{\theta}_i\Big] \qquad (19)$$
Defining $L_{ie}'$ as the position vector from the $i$th joint to the end of the carrier manipulator, it can be expressed as:
$$L_{ie}' = \gamma L_{ie}^* \qquad (20)$$
where:
$$L_{ie}^* = \begin{cases} a_n^* & i = n \\ \displaystyle\sum_{k=i}^{n} a_k^* + \sum_{k=i}^{n-1} b_k^* & 0 < i < n \\ \displaystyle\sum_{k=1}^{n} a_k^* + \sum_{k=0}^{n-1} b_k^* & i = 0 \end{cases}$$
According to Equations (19) and (20), $\dot{P}_e'$ can be further derived as:
$$\dot{P}_e' = \omega_0 \times L_{0e}' + \sum_{i=1}^{n}\big[k_i \times L_{ie}'\,\dot{\theta}_i\big] = (J_r' - \gamma M_m \tilde{b}_0)\,\omega_0 + J_m' \dot{\theta} \qquad (21)$$
where:
$$\begin{cases} J_r' = -\tilde{L}_{1e}' \\ J_m' = \big[\,k_1 \times L_{1e}'\ \cdots\ k_i \times L_{ie}'\ \cdots\ k_n \times L_{ne}'\,\big] \end{cases}$$
and $M_m$ is the total mass of the manipulator, given by:
$$M_m = \sum_{i=1}^{n} m_i \qquad (22)$$
Defining the Jacobian matrices $J_r^*$ and $J_m^*$ as:
$$\begin{cases} J_r^* = -\tilde{L}_{1e}^* = \dfrac{1}{\gamma} J_r' \\ J_m^* = \big[\,k_1 \times L_{1e}^*\ \cdots\ k_i \times L_{ie}^*\ \cdots\ k_n \times L_{ne}^*\,\big] = \dfrac{1}{\gamma} J_m' \end{cases} \qquad (23)$$
as a result, $\dot{P}_e'$ also takes the following form:
$$\dot{P}_e' = \gamma\big[(J_r^* - M_m \tilde{b}_0)\,\omega_0 + J_m^* \dot{\theta}\big] \qquad (24)$$
In Equation (24), the unknown satellite mass is contained only in the parameter $\gamma$. According to Equations (17) and (24), the differential kinematics equation of the free-floating space robot translational motion can be expressed as:
$$v_e = \big[J_r + \gamma J_r^* - (1 + \gamma M_m)\tilde{b}_0\big]\omega_0 + (J_m + \gamma J_m^*)\dot{\theta} \qquad (25)$$
where $J_m$ and $J_m^*$ have exactly the same form as the Jacobian matrices of ground-fixed manipulators.
According to Equation (25), two linear velocities are defined as:
$$\begin{cases} v_{eo} = J_m \dot{\theta} \\ v_{ec} = J_m^* \dot{\theta} \end{cases} \qquad (26)$$
where $v_{eo}$ as well as $v_{ec}$ can be considered as the linear velocity of a ground-fixed manipulator. Consequently, the end-effector velocity of the space robot can be further expressed as:
$$v_e = v_{eo} + \omega_0 \times L_{1e} + \gamma\,(v_{ec} + \omega_0 \times L_{1e}^*) + \omega_0 \times b_0 + \gamma M_m\, \omega_0 \times b_0 \qquad (27)$$
The main advantage of Equation (27) is that the unknown satellite mass properties are not involved in the calculation of $v_{eo}$ and $v_{ec}$; only two parameters, $\gamma$ and $b_0$, relating to the undetermined satellite mass and centroid, remain to be determined. Accordingly, in this paper $\gamma$ is estimated instead of the satellite mass itself. $m_0$ can still be determined from Equations (4) and (7) if necessary.
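The decomposition of Equation (27) can be written directly in code. The sketch below (Python; all numeric values and names are invented for illustration) emphasizes the point made above: $v_{eo}$ and $v_{ec}$ come from ordinary ground-fixed-style Jacobians, and the unknowns enter only through the scalar $\gamma$ and the vector $b_0$:

```python
import numpy as np

def end_effector_velocity(v_eo, v_ec, omega0, L1e, L1e_star, b0, gamma, M_m):
    """Eq. (27): linear velocity of the space robot end-effector.

    v_eo and v_ec (Eq. (26)) are Jacobian-times-joint-rate products that do
    not depend on the unknown mass properties; gamma and b0 carry all the
    uncertainty."""
    return (v_eo + np.cross(omega0, L1e)
            + gamma * (v_ec + np.cross(omega0, L1e_star))
            + np.cross(omega0, b0)
            + gamma * M_m * np.cross(omega0, b0))

# With a non-rotating base (omega0 = 0), Eq. (27) collapses to v_eo + gamma*v_ec.
v = end_effector_velocity(np.array([0.1, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                          np.zeros(3), np.zeros(3), np.zeros(3),
                          np.zeros(3), -0.01, 20.0)
```

The zero-base-rate special case makes the affine dependence on $\gamma$ easy to see, which is exactly what the estimator in Section 3.4 exploits.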
The proposed modeling method describes the carrier satellite translational motion with a hypothetical manipulator fixed at the centroid of the robotic system, and treats the free-floating space robot as an equivalent ground-fixed manipulator system. Compared with the existing modeling methods presented in [11] and [12], this new method has the following advantages:
(1)
Since the free-floating space robot system is kinematically equivalent to the hypothetical robotic system, the new space robot model is computationally simple, because only ground-fixed robot kinematics are involved in the calculations.
(2)
In the new translational-motion equations, the undetermined carrier satellite mass, which is a challenge for parameter estimation as suggested in [32], enters only through a constant factor, namely $\gamma$. Accordingly, this new modeling method is more convenient for identifying the satellite mass properties.

3. Self-Tuning Control Designing

Because the mass properties of the carrier satellite change throughout its service life, precise target capturing with a free-floating space robot is considered a challenging problem. To improve the control performance, a self-tuning target capturing control scheme is applied by adopting the eye-in-hand camera and gyros. The self-tuning control concept is based on the certainty equivalence principle. By coupling a motion controller with an online parameter estimator, the self-tuning controller can identify the unknown properties while controlling the robot. Accordingly, the controller parameters are determined by the estimates instead of the unknown true values.
Because the dynamic and kinematic parameters of the space manipulator mounted on the carrier satellite are all constants, the satellite mass properties can be estimated in real time by end-effector translations and satellite rotations. As a consequence, the proposed space robot self-tuning control scheme is as shown in Figure 3.
The self-tuning control operation is described as follows: the motion controller plans the space robot end-effector motion based on the current relative position and attitude after transforming the camera signals to the inertial frame. At each time instant, a set of property estimates, identified by the parameter estimator from the past joint motions and the sensor data of the eye-in-hand camera and gyros, is sent to the inverse kinematics calculator. Based on these estimated parameters and the desired end-effector motion, the joint trajectories are planned by the space robot inverse kinematics calculator. The free-floating space robot joints follow this control input and generate a new output, updating the input data of the parameter estimator and the motion controller.
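One cycle of this loop can be sketched as follows (Python; the gain matrix, the numeric values, and the trivial inverse-kinematics stub are placeholders of my own, not the paper's modules, which are developed in Sections 3.1-3.4):

```python
import numpy as np

K = 0.5 * np.eye(6)   # feedback gain matrix of Eq. (32), placeholder values

def one_cycle(dP, dPhi, v_t, gamma_hat, b0_hat, solve_ik):
    """One control cycle of Figure 3: camera-derived pose error -> desired
    end-effector twist -> joint rates, using the *estimated* mass properties."""
    e = np.concatenate([dP, dPhi])
    # Motion controller, Eq. (33): desired end-effector twist
    twist_d = -K @ e + np.concatenate([v_t, np.zeros(3)])
    # Inverse kinematics with the current estimates (Eq. (37)); stubbed here
    return solve_ik(twist_d, gamma_hat, b0_hat)

# Trivial IK stand-in just to exercise the data flow of the loop
theta_dot = one_cycle(np.array([0.1, 0.0, 0.0]), np.zeros(3), np.zeros(3),
                      -1.0 / 120.0, np.zeros(3),
                      lambda twist, g, b: twist[:3])
```

In the real scheme the `solve_ik` slot is filled by the inverse kinematics calculator of Section 3.3, and `gamma_hat`, `b0_hat` are refreshed by the estimator of Section 3.4 every cycle.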

3.1. Kinematics Calculator Designing

Because the outputs of the eye-in-hand camera are the relative position and attitude expressed in the end-effector frame, a kinematics calculator is applied to transform the camera signals to the inertial frame. The relative position and attitude in the inertial frame are computed as:
$$\begin{cases} \Delta P = A_e(\theta, \psi)\, \Delta P^e \\ \Delta \Phi = A_e(\theta, \psi)\, \Delta \Phi^e \end{cases} \qquad (28)$$
where $A_e$ is the rotation matrix; $\Delta P^e$ and $\Delta \Phi^e$ are the relative position and attitude in the end-effector frame; $\psi$ is the satellite attitude. Based on the Euler axis/angle representation, $\Delta \Phi$ is expressed as:
$$\Delta \Phi = \chi \rho \qquad (29)$$
where $\chi$ is the Euler rotation angle and $\rho$ is the Euler equivalent axis.
The linear end-effector velocity, which serves as the parameter estimator input, is also computed by this calculator as:
$$v_e = \Delta \dot{P} + v_t \qquad (30)$$
where $v_t$ is the linear velocity of the target satellite, which is assumed to be known or identified by other approaches.
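A minimal sketch of this calculator follows (Python; the finite-difference approximation of $\Delta\dot{P}$ and all variable names are assumptions of mine, not the paper's implementation):

```python
import numpy as np

def kinematics_calculator(A_e, dP_cam, chi, rho_cam, v_t, dP_prev, dt):
    """Transform eye-in-hand camera outputs to the inertial frame.

    A_e: rotation matrix from the end-effector frame to the inertial frame;
    dP_cam: relative position measured in the end-effector frame;
    chi, rho_cam: Euler angle and equivalent axis of the relative attitude."""
    dP = A_e @ dP_cam                 # Eq. (28), position part
    dPhi = chi * (A_e @ rho_cam)      # Eqs. (28)-(29), attitude part
    v_e = (dP - dP_prev) / dt + v_t   # Eq. (30), dP-dot by finite differences
    return dP, dPhi, v_e

# Example with an identity end-effector attitude and a 4 Hz camera (dt = 0.25 s)
dP, dPhi, v_e = kinematics_calculator(np.eye(3), np.array([1.0, 0.0, 0.0]),
                                      0.2, np.array([0.0, 0.0, 1.0]),
                                      np.zeros(3), np.array([0.9, 0.0, 0.0]), 0.25)
```

Differentiating the camera signal numerically, as done here, is one plausible realization of Equation (30); any smoother velocity estimate would serve the same role.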

3.2. Motion Controller Designing

Defining the relative pose as the control error of the free-floating space robot, it is written as:
$$e = \begin{bmatrix} \Delta P \\ \Delta \Phi \end{bmatrix} \qquad (31)$$
The ideal feedback response of $e$ is designed as follows:
$$K e + \dot{e} = 0 \qquad (32)$$
where $K$ is the matrix of control gains reflecting the performance specifications. According to Equations (31) and (32), and ignoring the target rotation, the motion controller of the free-floating space robot is given as:
$$\begin{bmatrix} v_d \\ \omega_d \end{bmatrix} = -K \begin{bmatrix} \Delta P \\ \Delta \Phi \end{bmatrix} + \begin{bmatrix} v_t \\ 0 \end{bmatrix} \qquad (33)$$
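Under the ideal response of Equation (32), each error component decays exponentially at a rate set by the corresponding diagonal entry of $K$. A quick discrete-time check (Python; the gains and initial error are invented):

```python
import numpy as np

# Forward-Euler simulation of e_dot = -K e (Eq. (32)): with a diagonal K,
# component i decays roughly as exp(-K[i, i] * t).
K = np.diag([1.0, 1.0, 1.0, 0.5, 0.5, 0.5])       # illustrative gains
e = np.array([0.2, -0.1, 0.3, 0.1, 0.0, -0.05])   # initial pose error
dt = 0.01
for _ in range(1000):                              # simulate 10 s
    e = e + dt * (-K @ e)
```

After 10 s the position components (gain 1.0) have shrunk by roughly $e^{-10}$ and the attitude components (gain 0.5) by roughly $e^{-5}$, which is how $K$ encodes the desired approach speed.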

3.3. Inverse Kinematics Calculating

According to Equation (25), the carrier satellite angular velocity must be known to perform the inverse kinematics calculations. There are several ways to obtain the satellite angular velocity. One way is to substitute the angular momentum conservation equation into the kinematics equation; this, however, requires cumbersome computations. Another way is to measure the angular motion directly with gyros, which is especially convenient because almost all satellites are equipped with gyroscopes. A further advantage of adopting gyro information is that it becomes unnecessary to identify the satellite inertia tensor matrix for calculating the inverse kinematics. Defining the measured satellite angular velocity as $\bar{\omega}_0$, the angular velocity of the space robot end-effector is calculated as:
$$\omega_e = \bar{\omega}_0 + J_\omega \dot{\theta} \qquad (34)$$
where:
$$J_\omega = \big[\,k_1\ \cdots\ k_i\ \cdots\ k_n\,\big] \qquad (35)$$
According to Equations (25) and (34), the differential kinematics equation of the free-floating space robot is:
$$\begin{bmatrix} v_e \\ \omega_e \end{bmatrix} = \begin{bmatrix} J_r + \gamma J_r^* - (1 + \gamma M_m)\tilde{b}_0 \\ E \end{bmatrix} \bar{\omega}_0 + \begin{bmatrix} J_m + \gamma J_m^* \\ J_\omega \end{bmatrix} \dot{\theta} \qquad (36)$$
Consequently, the desired manipulator joint velocities can be obtained as:
$$\dot{\theta}_d = \begin{bmatrix} J_m + \gamma J_m^* \\ J_\omega \end{bmatrix}^{-1} \left( \begin{bmatrix} v_d \\ \omega_d \end{bmatrix} - \begin{bmatrix} J_r + \gamma J_r^* - (1 + \gamma M_m)\tilde{b}_0 \\ E \end{bmatrix} \bar{\omega}_0 \right) \qquad (37)$$
By Equation (37), once the mass properties, namely $\gamma$ and $b_0$, are determined, the space manipulator joint motions can be planned.
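For a 6-DOF arm, Equation (37) reduces to a 6-by-6 linear solve. The sketch below (Python; random matrices stand in for the actual Jacobians, which the paper computes from the arm geometry) implements it and can be checked for consistency against the forward map of Equation (36):

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix: skew(v) @ x == np.cross(v, x)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def joint_rates(v_d, w_d, w0_meas, Jm, Jm_star, Jr, Jr_star,
                b0_hat, gamma_hat, M_m, J_omega):
    """Eq. (37): desired joint rates from the desired twist, the gyro-measured
    base rate, and the *estimated* mass properties (n = 6 assumed)."""
    J = np.vstack([Jm + gamma_hat * Jm_star, J_omega])        # 6 x 6
    A_lin = Jr + gamma_hat * Jr_star - (1 + gamma_hat * M_m) * skew(b0_hat)
    base = np.vstack([A_lin, np.eye(3)]) @ w0_meas            # base-motion term
    return np.linalg.solve(J, np.concatenate([v_d, w_d]) - base)
```

Running the forward kinematics of Equation (36) on a known joint-rate vector and feeding the resulting twist back through `joint_rates` recovers that vector, which is a convenient unit test for any implementation of this calculator.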

3.4. Mass Property Estimating

In this section, a real-time estimator is proposed to identify the unknown mass properties involved in the inverse kinematics calculation. According to Equation (37), only $\gamma$ and $b_0$, which indicate the system total mass and the satellite centroid position respectively, are to be estimated. Defining the linear velocity of the end-effector as a function of $\gamma$ and $b_0$ gives:
$$v_e = f(\gamma, b_0) \qquad (38)$$
Note that the unknown parameters $\gamma$ and $b_0$ appear nonlinearly in Equation (38), so linear identification techniques cannot be applied directly.
The estimation error is defined as:
$$e = \hat{v}_e - v_e \qquad (39)$$
where $v_e$ is the true value of the space robot end-effector velocity obtained by the eye-in-hand camera, and $\hat{v}_e$ is the value calculated with the estimated mass properties $\hat{\gamma}$ and $\hat{b}_0$. Defining the temporarily identified mass properties as $\hat{\gamma}_{\mathrm{tem}}$ and $\hat{b}_{0\mathrm{tem}}$, according to Equation (38), $e$ can be linearized as:
$$e = \Big[ f(\hat{\gamma}_{\mathrm{tem}}, \hat{b}_{0\mathrm{tem}}) + \frac{\partial f(\hat{\gamma}_{\mathrm{tem}}, \hat{b}_{0\mathrm{tem}})}{\partial \gamma}\, \Delta\hat{\gamma} + \frac{\partial f(\hat{\gamma}_{\mathrm{tem}}, \hat{b}_{0\mathrm{tem}})}{\partial b_0}\, \Delta\hat{b}_0 \Big] - v_e \qquad (40)$$
where:
$$\begin{cases} \Delta\hat{\gamma} = \hat{\gamma} - \hat{\gamma}_{\mathrm{tem}} \\ \Delta\hat{b}_0 = \hat{b}_0 - \hat{b}_{0\mathrm{tem}} \end{cases} \qquad (41)$$
According to Equation (27), $\partial f(\hat{\gamma}_{\mathrm{tem}}, \hat{b}_{0\mathrm{tem}})/\partial \gamma$ can be simply determined as:
$$\frac{\partial f(\hat{\gamma}_{\mathrm{tem}}, \hat{b}_{0\mathrm{tem}})}{\partial \gamma} = v_{ec} + \omega_0 \times L_{1e}^* + M_m\, \omega_0 \times \hat{b}_{0\mathrm{tem}} \qquad (42)$$
Defining a new robot linear velocity $v^*$ as:
$$v^* = v_{ec} + \omega_0 \times L_{1e}^* + M_m\, \omega_0 \times b_0 \qquad (43)$$
Equation (42) can be further written as:
$$\frac{\partial f(\hat{\gamma}_{\mathrm{tem}}, \hat{b}_{0\mathrm{tem}})}{\partial \gamma} = v^*(\hat{b}_{0\mathrm{tem}}) \qquad (44)$$
Substituting Equation (27) into Equation (38), $\partial f(\hat{\gamma}_{\mathrm{tem}}, \hat{b}_{0\mathrm{tem}})/\partial b_0$ can be determined as:
$$\frac{\partial f(\hat{\gamma}_{\mathrm{tem}}, \hat{b}_{0\mathrm{tem}})}{\partial b_0} = \big(1 + M_m \hat{\gamma}_{\mathrm{tem}}\big)\, \tilde{\omega}_0 \qquad (45)$$
Defining a robot angular velocity $\omega^*$ as:
$$\omega^* = \big(1 + M_m \gamma\big)\, \omega_0 \qquad (46)$$
Equation (45) can be further expressed as:
$$\frac{\partial f(\hat{\gamma}_{\mathrm{tem}}, \hat{b}_{0\mathrm{tem}})}{\partial b_0} = \tilde{\omega}^*(\hat{\gamma}_{\mathrm{tem}}) \qquad (47)$$
Substituting Equations (44) and (47) into Equation (40), the estimation error can be calculated as:
$$e = f(\hat{\gamma}_{\mathrm{tem}}, \hat{b}_{0\mathrm{tem}}) + v^*(\hat{b}_{0\mathrm{tem}})\, \Delta\hat{\gamma} + \tilde{\omega}^*(\hat{\gamma}_{\mathrm{tem}})\, \Delta\hat{b}_0 - v_e \qquad (48)$$
Defining the end-effector velocity calculated with the temporarily identified mass properties as $\hat{v}_{e\mathrm{tem}}(\hat{\gamma}_{\mathrm{tem}}, \hat{b}_{0\mathrm{tem}})$, Equation (48) can be further expressed as:
$$e = \hat{v}_{e\mathrm{tem}}(\hat{\gamma}_{\mathrm{tem}}, \hat{b}_{0\mathrm{tem}}) - v_e + v^*(\hat{b}_{0\mathrm{tem}})\, \Delta\hat{\gamma} + \tilde{\omega}^*(\hat{\gamma}_{\mathrm{tem}})\, \Delta\hat{b}_0 \qquad (49)$$
According to Equation (41), once the parameter errors $\Delta\hat{\gamma}$ and $\Delta\hat{b}_0$ are determined, the estimated mass properties $\hat{\gamma}$ and $\hat{b}_0$ follow immediately. Since the number of parameters to be identified is larger than the number of equations described by Equation (49), $\Delta\hat{\gamma}$ and $\Delta\hat{b}_0$ can't be determined by simply setting the estimation error vector to zero. To solve this problem, additional constraints need to be introduced.
Employing the least-squares technique, the total estimation error over the time interval $(t - \Delta t,\ t)$ is defined as:
$$Q = \int_{t-\Delta t}^{t} e(r)^{\mathrm{T}} e(r)\, \mathrm{d}r \qquad (50)$$
where $\Delta t$ is the length of the integral interval. To minimize the total estimation error, the first-order partial derivatives of $Q$ are set to zero:
$$\begin{cases} \dfrac{\partial Q}{\partial \Delta\hat{\gamma}} = 2 \displaystyle\int_{t-\Delta t}^{t} e(r)^{\mathrm{T}}\, \frac{\partial e(r)}{\partial \Delta\hat{\gamma}}\, \mathrm{d}r = 0 \\ \dfrac{\partial Q}{\partial \Delta\hat{b}_0} = 2 \displaystyle\int_{t-\Delta t}^{t} \Big(\frac{\partial e(r)}{\partial \Delta\hat{b}_0}\Big)^{\mathrm{T}} e(r)\, \mathrm{d}r = 0_{3\times 1} \end{cases} \qquad (51)$$
Substituting Equations (44), (47) and (48) into Equation (51), the following equations hold:
$$\begin{cases} \displaystyle\int_{t-\Delta t}^{t} v^{*\mathrm{T}} v^*\, \mathrm{d}r\; \Delta\hat{\gamma} + \int_{t-\Delta t}^{t} v^{*\mathrm{T}} \tilde{\omega}^*\, \mathrm{d}r\; \Delta\hat{b}_0 = \int_{t-\Delta t}^{t} v^{*\mathrm{T}} \big(v_e - \hat{v}_{e\mathrm{tem}}\big)\, \mathrm{d}r \\ \displaystyle\int_{t-\Delta t}^{t} \tilde{\omega}^* v^*\, \mathrm{d}r\; \Delta\hat{\gamma} + \int_{t-\Delta t}^{t} \tilde{\omega}^* \tilde{\omega}^*\, \mathrm{d}r\; \Delta\hat{b}_0 = \int_{t-\Delta t}^{t} \tilde{\omega}^* \big(v_e - \hat{v}_{e\mathrm{tem}}\big)\, \mathrm{d}r \end{cases} \qquad (52)$$
Defining the parameter error vector as:
$$X = \begin{bmatrix} \Delta\hat{\gamma} \\ \Delta\hat{b}_0 \end{bmatrix} \qquad (53)$$
according to Equation (52), the identification equation can be formed as:
$$A X = B \qquad (54)$$
where:
$$A = \begin{bmatrix} \int_{t-\Delta t}^{t} v^{*\mathrm{T}} v^*\, \mathrm{d}r & \int_{t-\Delta t}^{t} v^{*\mathrm{T}} \tilde{\omega}^*\, \mathrm{d}r \\ \int_{t-\Delta t}^{t} \tilde{\omega}^* v^*\, \mathrm{d}r & \int_{t-\Delta t}^{t} \tilde{\omega}^* \tilde{\omega}^*\, \mathrm{d}r \end{bmatrix} \qquad (55)$$
$$B = \begin{bmatrix} \int_{t-\Delta t}^{t} v^{*\mathrm{T}} \big(v_e - \hat{v}_{e\mathrm{tem}}\big)\, \mathrm{d}r \\ \int_{t-\Delta t}^{t} \tilde{\omega}^* \big(v_e - \hat{v}_{e\mathrm{tem}}\big)\, \mathrm{d}r \end{bmatrix} \qquad (56)$$
According to Equation (54), the parameter error vector can be determined by:
$$X = A^{-1} B \qquad (57)$$
Finally, the estimated mass properties $\hat{\gamma}$ and $\hat{b}_0$ can be computed according to Equations (41), (53) and (57).
Note that the estimated mass properties are initialized with the nominal mass and centroid position in this work. After that, the temporarily identified mass properties must be updated in real time for more accurate estimates. In addition, the length of the integral interval has to be chosen carefully according to the system properties and the practical mission requirements, because the condition number of the matrix $A$ can become very large, and $A$ can even be singular, when $\Delta t$ is too small. On the other hand, $\Delta t$ cannot be too large either: although the velocities $v^*$ and $\omega^*$ are simple to calculate, longer integral intervals require more computation, which works against on-line application. Because the tuning process is based on feedback from eye-in-hand camera images, which are usually quite noisy, the identification error is investigated to quantify the impact of the measurement errors. An upper-bound estimate of the identification error is given in Appendix A of this paper.
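A discrete-time sketch of Equations (50)-(57) follows (Python; replacing the window integrals by sums over sampled instants is my own approximation, and the $\tilde{\omega}^*$ rows are assembled with the opposite sign to Equations (55)-(56), which leaves the solution unchanged because those rows are negated on both sides of Equation (52)):

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def ls_update(samples, gamma_tem, b0_tem, M_m):
    """One least-squares correction, Eqs. (50)-(57), with the integrals
    approximated by sums over one window. Each sample is a tuple
    (v_ec, omega0, L1e_star, v_e_meas, v_hat_tem)."""
    A = np.zeros((4, 4))
    B = np.zeros(4)
    for v_ec, w0, L1e_star, v_e, v_hat in samples:
        v_star = v_ec + np.cross(w0, L1e_star) + M_m * np.cross(w0, b0_tem)  # Eq. (43)
        w_tilde = (1.0 + M_m * gamma_tem) * skew(w0)                         # Eqs. (46)-(47)
        phi = np.hstack([v_star[:, None], w_tilde])   # 3 x 4 regressor of Eq. (49)
        r = v_e - v_hat                               # measured minus predicted
        A += phi.T @ phi                              # cf. Eq. (55)
        B += phi.T @ r                                # cf. Eq. (56)
    X = np.linalg.solve(A, B)                         # Eq. (57)
    return gamma_tem + X[0], b0_tem + X[1:]           # Eq. (41)
```

Because each sample contributes three scalar equations for the four unknowns in $X$, at least two samples with differing base rates are needed before $A$ becomes invertible, which mirrors the discussion of the window length $\Delta t$ above.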

4. Ground Testing Based on Hardware-in-the-Loop Simulation System

Generally, space robot target capture methods are tested with ground test systems [34,35,36]. Thus a hardware-in-the-loop simulation system, illustrated in Figure 4, is employed to simulate the satellite capture process. Two industrial robots are used to reproduce the space robot motions and the target satellite, respectively: industrial robot A represents the space robot motions, and the target satellite is mounted on industrial robot B.
To mimic the weightless condition of space, which is quite difficult to achieve in a ground environment, a space robot dynamics simulation program is established for calculating the space robot motions under microgravity. Since a carrier satellite was not available, the gyro data are also output by this program. Moreover, because the actual space robot is likewise unavailable, a joint electronic simulator employing the same electronic interface as real space robot joints is used to simulate the robotic joint dynamics. The eye-in-hand camera system and the space robot central controller are both real.
The space robot hardware-in-the-loop simulation system consists of two industrial robots, the space robot motion controller, the inverse kinematics calculator, an electronic simulator, the space robot dynamics simulation system, the kinematic equivalence module, the eye-in-hand camera system and the mass property estimator. The space robot motion controller, inverse kinematics calculations and the proposed mass property estimation algorithm are all realized by the space robot central controller. The block diagram of designed experimental system is shown in Figure 5.
In the satellite capturing experimental system, the needed linear and angular velocities are calculated through the relative position and attitude measured by the vision system. Meanwhile, the mass property estimator determines the unknown parameters from the past joint trajectories and sensor information in real time. Then, the joint motions are planned based on the estimations and the desired end-effector motion. According to the joint motions, the electronic simulator determines the output torques of the joints. Then the motions of the space robot are simulated by the dynamic simulation system. Finally, the relative motions between the space robot and the target satellite are demonstrated by the industrial robots based on kinematic equivalence.
The instruction cycle of the designed space robot central controller is 250 ms. The control cycle of the joint electronic simulator is 25 ms. The required measurement accuracies of the eye-in-hand camera system are 1 mm in distance and 1 deg in orientation, and the measurement frequency is 4 Hz. A picture of the laboratory setup is shown in Figure 6. The space robot kinematic and dynamic parameters are listed in Table 1.
In this test, the space robot is required to move its end-effector from the initial position to the target satellite with a suitable posture. The nominal mass properties of the carrier satellite are defined as:
$$\begin{cases} m_{0\mathrm{nom}} = 680\ \mathrm{kg} \\ b_{0\mathrm{nom}} = \big[\,500\ \ 0\ \ 800\,\big]^{\mathrm{T}}\ \mathrm{mm} \end{cases} \qquad (58)$$
Although the satellite mass is not needed in the self-tuning control scheme, it is computed from $\hat{\gamma}$ to verify the estimator. According to Equations (4) and (7), the estimated satellite mass is:
$$\hat{m}_0 = -\frac{1}{\hat{\gamma}} - M_m \qquad (59)$$
Accordingly, the property estimation errors are computed as follows:
$$\begin{cases} \delta m_0 = \hat{m}_0 - m_0 \\ \delta b_0 = \hat{b}_0 - b_0 \end{cases} \qquad (60)$$
To validate the proposed control scheme, both the closed-loop responses and the parameter estimation errors are tested by the ground experimental system. The experimental results are shown in Figure 7 and Figure 8.
The closed-loop responses are shown in Figure 7. It is seen that the space robot end-effector approaches the target satellite smoothly with the desired attitude, illustrating the effectiveness of the proposed target capture control method. The mass property estimation errors are shown in Figure 8. They suggest that although the nominal mass properties are initialized with obvious errors, the estimated mass properties gradually approach the real values.
As the first attempt at estimating the satellite mass properties from eye-in-hand camera signals, the proposed method can be compared with existing identification methods as follows:
(1)
Compared with the propulsion-based methods presented in [37], the proposed method does not use thrusters, so no satellite fuel is consumed.
(2)
Unlike the direct torque-sensing method proposed in [38], this proposed method doesn’t demand any torque or acceleration measurements, not only in theory but also in engineering.
(3)
Compared with the method based on measuring the reaction wheel motion rates presented in [32], which has difficulty estimating the satellite mass, the proposed method identifies both satellite mass and centroid position by adopting eye-in-hand camera signals and gyro information.
(4)
Compared with the method presented in [33], which estimates the satellite mass properties base on sensing the satellite rotation and translation, this proposed method doesn’t require the linear velocity of the carrier satellite, which is usually integrated from accelerometer data and brings drifting errors.

5. Conclusions

Target satellite capture is a challenging problem, especially for space robots with unknown mass properties. Since most existing approaches to space robot motion control require accurate property values, new efforts are needed to handle control problems involving unknown parameters. In this paper, gyro and camera signals are adopted to improve the control performance. For this improved system, a novel space robot modeling technique is proposed. With this newly established model, the free-floating space robot is equivalent to a ground-fixed manipulator system, which simplifies the problem. Accordingly, a self-tuning target capturing controller is designed that takes the unknown parameters into account. The control parameters are determined in real time by an estimator based on the least-squares technique. The experimental results suggest that the designed target capturing controller is effective. Because the proposed method does not demand accurate satellite mass properties, it can be applied when, for example, the fuel consumption is unknown or an undetermined payload is carried. In further research, the proposed method has the potential to be applied to identifying other space robot parameters, such as manipulator link lengths, to cope with contingent requirements.

Acknowledgments

This work is supported by a grant from the National Program on Key Research Program (No. 2013CB733105).

Author Contributions

H.L. posed the challenging target capturing control problem of free-floating space robots with unknown mass properties based on project needs; Z.L. designed the space robot modeling technique and the self-tuning control scheme, including the mass property estimation algorithm; Z.L. and B.W. conceived and designed the experiments; Z.L. and B.W. analyzed the data; Z.L. contributed the software tools; Z.L. wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

The measured velocity data is expressed as:
$$\begin{cases} \bar{\boldsymbol{v}}_e = \boldsymbol{v}_e + \Delta\boldsymbol{v}_e \\ \bar{\boldsymbol{\omega}}_0 = \boldsymbol{\omega}_0 + \Delta\boldsymbol{\omega}_0 \end{cases}$$
where $\Delta\boldsymbol{v}_e$ and $\Delta\boldsymbol{\omega}_0$ are the measurement errors. According to Equations (43) and (46), the defined velocities are computed as:
$$\begin{cases} \bar{\boldsymbol{v}} = \boldsymbol{v}_{ec} + \left[\boldsymbol{L}_{0e}(\hat{\boldsymbol{b}}_0^{tem})\right]^{\times}\bar{\boldsymbol{\omega}}_0 \\ \bar{\boldsymbol{\omega}} = \left[1 + M_m\hat{\gamma}^{tem}\right]\bar{\boldsymbol{\omega}}_0 \end{cases}$$
According to Equation (54), the identification equation is determined as:
$$\bar{\boldsymbol{A}}\bar{\boldsymbol{X}} = \bar{\boldsymbol{B}}$$
where:
$$\bar{\boldsymbol{A}} = \boldsymbol{A} + \Delta\boldsymbol{A} = \begin{bmatrix} \int_{t-\Delta t}^{t}\bar{\boldsymbol{v}}^T\bar{\boldsymbol{v}}\,d\tau & \int_{t-\Delta t}^{t}\left(\bar{\boldsymbol{\omega}}^{\times}\bar{\boldsymbol{v}}\right)^T d\tau \\ \int_{t-\Delta t}^{t}\bar{\boldsymbol{\omega}}^{\times}\bar{\boldsymbol{v}}\,d\tau & \int_{t-\Delta t}^{t}\bar{\boldsymbol{\omega}}^{\times}\bar{\boldsymbol{\omega}}^{\times}\,d\tau \end{bmatrix}$$
$$\bar{\boldsymbol{B}} = \boldsymbol{B} + \Delta\boldsymbol{B} = \begin{bmatrix} \int_{t-\Delta t}^{t}\bar{\boldsymbol{v}}^T\left(\bar{\boldsymbol{v}}_e - \boldsymbol{v}_e^{nom}\right) d\tau \\ \int_{t-\Delta t}^{t}\bar{\boldsymbol{\omega}}^{\times}\left(\bar{\boldsymbol{v}}_e - \boldsymbol{v}_e^{nom}\right) d\tau \end{bmatrix}$$
and $\bar{\boldsymbol{X}}$ is the identification result with error $\delta\boldsymbol{X}$; $\Delta\boldsymbol{A}$ and $\Delta\boldsymbol{B}$ are the errors of $\boldsymbol{A}$ and $\boldsymbol{B}$ caused by the measurement errors $\Delta\boldsymbol{v}_e$ and $\Delta\boldsymbol{\omega}_0$. The error matrices $\Delta\boldsymbol{A}$ and $\Delta\boldsymbol{B}$ are expressed as follows:
$$\begin{cases} \Delta\boldsymbol{A} = \varepsilon_A\boldsymbol{E}_A \\ \Delta\boldsymbol{B} = \varepsilon_B\boldsymbol{E}_B \end{cases}$$
where:
$$\begin{cases} \varepsilon_A = \lVert\Delta\boldsymbol{A}\rVert \\ \varepsilon_B = \lVert\Delta\boldsymbol{B}\rVert \\ \lVert\boldsymbol{E}_A\rVert = \lVert\boldsymbol{E}_B\rVert = 1 \end{cases}$$
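The windowed integrals in $\bar{\boldsymbol{A}}$ and $\bar{\boldsymbol{B}}$ can be approximated from sampled velocity data. The following numpy sketch is illustrative, not the authors' implementation: the function name, the rectangular-rule integration, and the 4×4 block layout (a scalar unknown stacked above a 3-vector unknown) are assumptions based on the block structure above.

```python
import numpy as np

def tilde(a):
    # Cross-product (skew-symmetric) matrix: tilde(a) @ b == np.cross(a, b).
    x, y, z = a
    return np.array([[0.0, -z, y],
                     [z, 0.0, -x],
                     [-y, x, 0.0]])

def build_identification_system(v_bar, w_bar, dv_e, dt):
    # Accumulate the 4x4 matrix A and 4-vector B over one window of N samples
    # (rows of v_bar, w_bar, dv_e), approximating each integral by a
    # rectangular sum with step dt. dv_e is the deviation v_e_meas - v_e_nom.
    A = np.zeros((4, 4))
    B = np.zeros(4)
    for v, w, d in zip(v_bar, w_bar, dv_e):
        wxv = np.cross(w, v)
        A[0, 0] += (v @ v) * dt                 # integral of v.T v
        A[0, 1:] += wxv * dt                    # integral of (w x v).T
        A[1:, 0] += wxv * dt                    # integral of w x v
        A[1:, 1:] += tilde(w) @ tilde(w) * dt   # integral of the skew products
        B[0] += (v @ d) * dt
        B[1:] += np.cross(w, d) * dt
    return A, B

# The identification result is then the least-squares solution of A X = B:
# X, residual, rank, sv = np.linalg.lstsq(A, B, rcond=None)
```

By construction the accumulated matrix is symmetric, so the least-squares problem is well behaved whenever the motion within the window is sufficiently exciting.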
In accordance with the theory presented in [33], the upper bound of the error for the identification problem represented by Equation (A3) can be estimated by:
$$\lVert\delta\boldsymbol{X}\rVert \le \lVert\boldsymbol{A}^{-1}\rVert^2\,\lVert\boldsymbol{r}\rVert\,\varepsilon_A + \lVert\boldsymbol{A}^{-1}\rVert\left(\lVert\boldsymbol{X}\rVert\varepsilon_A + \varepsilon_B\right) + N_1(\varepsilon)$$
and the upper bound of the relative error in the identified parameters is:
$$\frac{\lVert\delta\boldsymbol{X}\rVert}{\lVert\boldsymbol{X}\rVert} \le \left(\kappa(\boldsymbol{A})\right)^2\frac{\varepsilon_A}{\lVert\boldsymbol{A}\rVert}\zeta + \kappa(\boldsymbol{A})\left(\frac{\varepsilon_A}{\lVert\boldsymbol{A}\rVert} + \frac{\varepsilon_B}{\lVert\boldsymbol{B}\rVert}\gamma\right) + N_2(\varepsilon)$$
where $\boldsymbol{r}$ is the residual vector and $\kappa(\boldsymbol{A})$ is the condition number; $N_1(\varepsilon)$ and $N_2(\varepsilon)$ represent the second- and higher-order terms of $\varepsilon$, which can be practically ignored. These quantities are computed as follows:
$$\begin{cases} \boldsymbol{r} = \boldsymbol{B} - \boldsymbol{A}\boldsymbol{X} \\ \kappa(\boldsymbol{A}) = \lVert\boldsymbol{A}^{-1}\rVert\,\lVert\boldsymbol{A}\rVert \\ \zeta = \dfrac{\lVert\boldsymbol{r}\rVert}{\lVert\boldsymbol{A}\rVert\,\lVert\boldsymbol{X}\rVert} \\ \gamma = \dfrac{\lVert\boldsymbol{B}\rVert}{\lVert\boldsymbol{A}\rVert\,\lVert\boldsymbol{X}\rVert} \end{cases}$$
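The relative-error bound can be evaluated directly with matrix norms. A minimal sketch follows; the choice of Frobenius norms and the function name are assumptions, and the second- and higher-order term $N_2(\varepsilon)$ is dropped.

```python
import numpy as np

def relative_error_bound(A, B, X, eps_A, eps_B):
    # First-order upper bound on ||dX|| / ||X||; A must be square and
    # invertible here, and eps_A, eps_B bound the perturbations of A and B.
    r = B - A @ X                                  # residual vector
    nA, nB, nX = np.linalg.norm(A), np.linalg.norm(B), np.linalg.norm(X)
    kappa = np.linalg.norm(np.linalg.inv(A)) * nA  # condition number kappa(A)
    zeta = np.linalg.norm(r) / (nA * nX)
    gamma = nB / (nA * nX)
    return (kappa**2 * (eps_A / nA) * zeta
            + kappa * (eps_A / nA + (eps_B / nB) * gamma))
```

For a consistent system ($\boldsymbol{r} = 0$) the first term vanishes, and the bound reduces to the familiar condition-number amplification of the data perturbations.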
To determine the error bounds, $\Delta\boldsymbol{A}$ and $\Delta\boldsymbol{B}$ should be computed in terms of the measurement errors $\Delta\boldsymbol{v}_e$ and $\Delta\boldsymbol{\omega}_0$. Substituting Equations (A1) and (A2) into Equations (A4) and (A5), $\Delta\boldsymbol{A}$ and $\Delta\boldsymbol{B}$ are expressed as:
$$\Delta\boldsymbol{A} = \begin{bmatrix} \int_{t-\Delta t}^{t}\Delta\boldsymbol{v}^T\left(2\bar{\boldsymbol{v}} - \Delta\boldsymbol{v}\right) d\tau & \int_{t-\Delta t}^{t}\left(\Delta\boldsymbol{\omega}^{\times}\bar{\boldsymbol{v}} + \bar{\boldsymbol{\omega}}^{\times}\Delta\boldsymbol{v} - \Delta\boldsymbol{\omega}^{\times}\Delta\boldsymbol{v}\right)^T d\tau \\ \int_{t-\Delta t}^{t}\left(\Delta\boldsymbol{\omega}^{\times}\bar{\boldsymbol{v}} + \bar{\boldsymbol{\omega}}^{\times}\Delta\boldsymbol{v} - \Delta\boldsymbol{\omega}^{\times}\Delta\boldsymbol{v}\right) d\tau & \int_{t-\Delta t}^{t}\left(\bar{\boldsymbol{\omega}}^{\times}\Delta\boldsymbol{\omega}^{\times} + \Delta\boldsymbol{\omega}^{\times}\bar{\boldsymbol{\omega}}^{\times} - \Delta\boldsymbol{\omega}^{\times}\Delta\boldsymbol{\omega}^{\times}\right) d\tau \end{bmatrix}$$
$$\Delta\boldsymbol{B} = \begin{bmatrix} \int_{t-\Delta t}^{t}\left(\bar{\boldsymbol{v}}^T\Delta\boldsymbol{v}_e + \Delta\boldsymbol{v}^T\delta\boldsymbol{v}_e - \Delta\boldsymbol{v}^T\Delta\boldsymbol{v}_e\right) d\tau \\ \int_{t-\Delta t}^{t}\left(\bar{\boldsymbol{\omega}}^{\times}\Delta\boldsymbol{v}_e + \Delta\boldsymbol{\omega}^{\times}\delta\boldsymbol{v}_e - \Delta\boldsymbol{\omega}^{\times}\Delta\boldsymbol{v}_e\right) d\tau \end{bmatrix}$$
where:
$$\begin{cases} \Delta\boldsymbol{v} = \bar{\boldsymbol{v}} - \boldsymbol{v} = \tilde{\boldsymbol{L}}_{0e}\Delta\boldsymbol{\omega}_0 \\ \Delta\boldsymbol{\omega} = \bar{\boldsymbol{\omega}} - \boldsymbol{\omega} = \left[1 + M_m\hat{\gamma}^{tem}\right]\Delta\boldsymbol{\omega}_0 \\ \delta\boldsymbol{v}_e = \bar{\boldsymbol{v}}_e - \boldsymbol{v}_e^{nom} \end{cases}$$
Ignoring the second-order terms, Δ A and Δ B are rewritten as functions of Δ ω 0 and Δ v e :
$$\Delta\boldsymbol{A} = \begin{bmatrix} \int_{t-\Delta t}^{t}\left(2\bar{\boldsymbol{v}}^T\tilde{\boldsymbol{L}}_{0e}\right)\Delta\boldsymbol{\omega}_0\,d\tau & \int_{t-\Delta t}^{t}\left\{\left[\tilde{\bar{\boldsymbol{\omega}}}\tilde{\boldsymbol{L}}_{0e} - \left(1 + M_m\hat{\gamma}^{tem}\right)\tilde{\bar{\boldsymbol{v}}}\right]\Delta\boldsymbol{\omega}_0\right\}^T d\tau \\ \int_{t-\Delta t}^{t}\left[\tilde{\bar{\boldsymbol{\omega}}}\tilde{\boldsymbol{L}}_{0e} - \left(1 + M_m\hat{\gamma}^{tem}\right)\tilde{\bar{\boldsymbol{v}}}\right]\Delta\boldsymbol{\omega}_0\,d\tau & \left(1 + M_m\hat{\gamma}^{tem}\right)\int_{t-\Delta t}^{t}\left[\tilde{\bar{\boldsymbol{\omega}}}\Delta\tilde{\boldsymbol{\omega}}_0 + \left(\tilde{\bar{\boldsymbol{\omega}}}\Delta\tilde{\boldsymbol{\omega}}_0\right)^T\right] d\tau \end{bmatrix}$$
$$\Delta\boldsymbol{B} = \begin{bmatrix} \int_{t-\Delta t}^{t}\left(\bar{\boldsymbol{v}}^T\Delta\boldsymbol{v}_e + \delta\boldsymbol{v}_e^T\tilde{\boldsymbol{L}}_{0e}\Delta\boldsymbol{\omega}_0\right) d\tau \\ \int_{t-\Delta t}^{t}\left(\tilde{\bar{\boldsymbol{\omega}}}\Delta\boldsymbol{v}_e - \left(1 + M_m\hat{\gamma}^{tem}\right)\delta\tilde{\boldsymbol{v}}_e\Delta\boldsymbol{\omega}_0\right) d\tau \end{bmatrix}$$
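The first-order expressions above rely on the standard skew-matrix identities $\tilde{\boldsymbol{a}}\boldsymbol{b} = \boldsymbol{a}\times\boldsymbol{b} = -\tilde{\boldsymbol{b}}\boldsymbol{a}$ and $(\tilde{\boldsymbol{a}}\tilde{\boldsymbol{b}})^T = \tilde{\boldsymbol{b}}\tilde{\boldsymbol{a}}$. A quick numerical check of these identities (illustrative only, not part of the original derivation):

```python
import numpy as np

def tilde(a):
    # Skew-symmetric cross-product matrix of a 3-vector.
    x, y, z = a
    return np.array([[0.0, -z, y],
                     [z, 0.0, -x],
                     [-y, x, 0.0]])

rng = np.random.default_rng(0)
a, b = rng.standard_normal(3), rng.standard_normal(3)

ok_cross = np.allclose(tilde(a) @ b, np.cross(a, b))        # tilde(a) b = a x b
ok_anti = np.allclose(tilde(a) @ b, -tilde(b) @ a)          # a x b = -(b x a)
ok_transpose = np.allclose((tilde(a) @ tilde(b)).T, tilde(b) @ tilde(a))
```

The transpose identity is what makes the last diagonal block of $\Delta\boldsymbol{A}$ symmetric, as required for the symmetric identification matrix.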

References

1. Yoon, W.; Goshozono, T.; Kawabe, H.; Kinami, M.; Tsumaki, Y.; Uchiyama, M.; Oda, M.; Doi, T. Model-based space robot teleoperation of ETS-VII manipulator. IEEE Trans. Robot. Autom. 2004, 20, 602–612.
2. Yoshida, K.; Hasizume, K.; Nenchev, D.N.; Inaba, N.; Oda, M. Control of a space manipulator for autonomous target capture-ETS-VII flight experiments and analysis. Guid. Navig. Control 2000.
3. Yoshida, K.; Hashizume, K.; Abiko, S. Zero reaction maneuver: Flight validation with ETS-VII space robot and extension to kinematically redundant arm. In Proceedings of the IEEE International Conference on Robotics & Automation, Seoul, Korea, 21–26 May 2001; IEEE Computer Society: Washington, DC, USA.
4. Motaghedi, P. On-orbit performance of the Orbital Express Capture System. In Proceedings of the SPIE—The International Society for Optical Engineering, Orlando, FL, USA, 15 April 2008; SPIE: Bellingham, WA, USA.
5. Ogilvie, A.; Allport, J.; Hannah, M.; Lymer, J. Autonomous satellite servicing using the Orbital Express demonstration manipulator system. In Proceedings of the 9th International Symposium on Artificial Intelligence, Robotics and Automation in Space, Hollywood, SC, USA, 25–29 February 2008; IEEE Computer Society: Washington, DC, USA.
6. Pinson, R.; Howard, R.; Heaton, A. Orbital Express Advanced Video Guidance Sensor: Ground Testing, Flight Results and Comparisons. In Proceedings of the AIAA Guidance, Navigation and Control Conference and Exhibit, Honolulu, HI, USA, 18–21 August 2008; AIAA: Reston, VA, USA.
7. Jin, M.; Yang, H.; Xie, Z.; Sun, K.; Liu, H. The Ground-based Verification System of Visual Servoing Control for a Space Robot. In Proceedings of the IEEE International Conference on Mechatronics and Automation, Takamatsu, Japan, 4–7 August 2013; IEEE Computer Society: Washington, DC, USA.
8. Liang, B.; Li, C.; Xue, L.; Qiang, W. A Chinese Small Intelligent Space Robotic System for On-Orbit Servicing. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9–15 October 2006; IEEE Computer Society: Washington, DC, USA.
9. Dubowsky, S.; Papadopoulos, E. The kinematics, dynamics and control of free-flying and free-floating space robotics system. IEEE Trans. Robot. Autom. 1993, 9, 531–543.
10. Oda, M.; Ohkami, Y. Coordinated control of spacecraft attitude and space manipulators. Control Eng. Pract. 1997, 5, 11–21.
11. Umetani, Y.; Yoshida, K. Resolved motion rate control of space manipulators with generalized jacobian matrix. IEEE Trans. Robot. Autom. 1989, 5, 303–314.
12. Vafa, Z.; Dubowsky, S. The kinematics and dynamics of Space manipulators: The virtual manipulator approach. Int. J. Robot. Res. 1990, 9, 3–21.
13. Papadopoulos, E.; Dubowsky, S. On the nature of control algorithms for free-floating space manipulators. IEEE Trans. Robot. Autom. 1991, 7, 750–758.
14. Fang, Y.; Zhang, W.; Ye, X. Variable Structure Control for Space Robots Based on Neural Networks. Int. J. Adv. Robot. Syst. 2014, 11.
15. Huang, P.; Xu, X.; Meng, Z. Optimal Trajectory Planning and Coordinated Tracking Control Method of Tethered Space Robot Based on Velocity Impulse. Int. J. Adv. Robot. Syst. 2014, 11.
16. Meng, Z.; Huang, P. An Effective Approach Control Scheme for the Tethered Space Robot System. Int. J. Adv. Robot. Syst. 2014, 11.
17. Wang, D.; Huang, P.; Cai, J.; Meng, Z. Coordinated control of tethered space robot using mobile tether attachment point in approaching phase. Adv. Space Res. 2014, 54, 1077–1091.
18. Huang, P.; Wang, D.; Meng, Z.; Zhang, F.; Liu, Z. Impact dynamic modelling and adaptive target capturing control for tethered space robots with uncertainties. IEEE/ASME Trans. Mechatron. 2016, 21, 2260–2271.
19. Huang, P.; Wang, D.; Meng, Z.; Zhang, F. Adaptive postcapture backstepping control for tumbling tethered space robot–target combination. J. Guid. Control Dyn. 2016, 39, 150–155.
20. Huang, P.; Wang, M.; Meng, Z.; Zhang, F.; Liu, Z.; Chang, H. Reconfigurable spacecraft attitude takeover control in post-capture of target by space manipulators. J. Frankl. Inst. 2016, 353, 1985–2008.
21. Pathak, P.M.; Kumar, R.P.; Mukherjee, A.; Dasgupta, A. A scheme for robust trajectory control of space robots. Simul. Model. Pract. Theory 2008, 16, 1337–1349.
22. Tsuchiya, K. Breakwell Memorial Lecture: Attitude dynamics of satellite—From spinning satellite to space robot. Acta Astronaut. 2008, 62, 131–139.
23. Rastegari, R.; Moosavian, S.A.A. Multiple impedance control of space free-flying robots via virtual linkages. Acta Astronaut. 2010, 66, 748–759.
24. Zarafshan, P.; Moosavian, S.A.A. Dynamics modeling and Hybrid Suppression Control of space robots performing cooperative object manipulation. Commun. Nonlinear Sci. Numer. Simulat. 2013, 18, 2807–2824.
25. Hutchinson, S.; Hager, G.D.; Corke, P.I. A tutorial on visual servo control. IEEE Trans. Robot. Autom. 1996, 12, 651–670.
26. Chaumette, F.; Hutchinson, S. Visual servo control. I. Basic approaches. IEEE Robot. Autom. Mag. 2006, 13, 82–90.
27. Chaumette, F.; Hutchinson, S. Visual servo control. II. Advanced approaches. IEEE Robot. Autom. Mag. 2007, 14, 109–118.
28. Rouleau, G.; Rekleitis, I.; L’Archeveque, R.; Martin, E. Autonomous capture of a tumbling satellite. J. Field Robot. 2007, 24, 275–296.
29. Sabatini, M.; Monti, R.; Gasbarri, P.; Palmerini, G. Adaptive and robust algorithms and tests for visual-based navigation of a space robotic manipulator. Acta Astronaut. 2013, 83, 65–84.
30. Sabatini, M.; Monti, R.; Gasbarri, P.; Palmerini, G. Deployable space manipulator commanded by means of visual-based guidance and navigation. Acta Astronaut. 2013, 83, 27–43.
31. Dong, G.; Zhu, Z.H. Position-based visual servo control of autonomous robotic manipulators. Acta Astronaut. 2015, 115, 291–302.
32. Yoshida, K.; Abiko, S. Inertia Parameter Identification for a Free-flying Space Robot. AIAA Guid. Navig. Control Conf. Exhib. 2002, 38, 1–8.
33. Ma, O.; Dang, H.; Pham, K. On-Orbit Identification of Inertia Properties of Spacecraft Using a Robotic Arm. J. Guid. Control Dyn. 2008, 31, 1761–1771.
34. Ma, O.; Wang, J.; Misra, S.; Liu, M. On the validation of SPDM task verification facility. J. Robot. Syst. 2004, 21, 219–235.
35. Xu, W.; Liu, Y.; Liang, B.; Xu, Y.; Qiang, W. Autonomous Path Planning and Experiment Study of Free-floating Space Robot for Target Capturing. J. Intell. Robot. Syst. 2008, 51, 303–331.
36. Boge, T.; Ma, O. Using Advanced Industrial Robotics for Spacecraft Rendezvous and Docking Simulation. In Proceedings of the IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; IEEE Computer Society: Washington, DC, USA.
37. Rackl, W.; Lampariello, R.; Albu-Schäffer, A. Parameter identification methods for free-floating space robots with direct torque sensing. In Proceedings of the IFAC Symposium on Automatic Control in Aerospace, Würzburg, Germany, 2–6 September 2013; Elsevier: Amsterdam, The Netherlands.
38. Xu, W.; Hu, Z.; Zhang, Y.; Wang, Z.; Wu, X. A practical and effective method for identifying the complete inertia parameters of space robots. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Hamburg, Germany, 28 September–2 October 2015; IEEE Computer Society: Washington, DC, USA.
Figure 1. Space robot system.
Figure 2. Equivalent robotic system.
Figure 3. Block diagram of space robot self-tuning control scheme.
Figure 4. Kinematic equivalence diagram of hardware-in-the-loop simulation system.
Figure 5. Block diagram of space robot ground test system.
Figure 6. Laboratory with the space robot ground experimental system.
Figure 7. Closed-loop responses of space robot system: (a) Position errors; (b) Angle errors.
Figure 8. Property estimation errors: (a) estimation error of $m_0$; (b) estimation error of $b_0$.
Table 1. Kinematic and dynamic parameters of space robot.

| Parameter (unit) | Base | Pole 1 | Pole 2 | Pole 3 | Pole 4 | Pole 5 | Pole 6 |
|---|---|---|---|---|---|---|---|
| M (kg) | 648 | 1.5 | 9.6 | 1.5 | 9.0 | 1.5 | 10.5 |
| a_x (mm) | 0 | 0 | −493.5 | 0 | 289 | 0 | −112 |
| a_y (mm) | 0 | 0 | 56 | −123 | −123 | 127 | 0 |
| a_z (mm) | 0 | 120 | 0 | 0 | 0 | 0 | 0 |
| b_x (mm) | 539 | 0 | −493.5 | 123 | 333 | −123 | −123 |
| b_y (mm) | 51 | 89 | −56 | 0 | 12 | 0 | 0 |
| b_z (mm) | 813 | 0 | 0 | 0 | 0 | 0 | 0 |
| I_xx (kg·m²) | 198 | 3.12×10⁻³ | 3.71×10⁻² | 3.47×10⁻³ | 0.82 | 3.42×10⁻³ | 0.91 |
| I_yy (kg·m²) | 198 | 3.12×10⁻³ | 1.92 | 3.47×10⁻³ | 0.64 | 3.42×10⁻³ | 0.91 |
| I_zz (kg·m²) | 198 | 3.12×10⁻³ | 1.92 | 3.47×10⁻³ | 0.67 | 3.42×10⁻³ | 0.11 |

Share and Cite

Li, Z.; Wang, B.; Liu, H. Target Capturing Control for Space Robots with Unknown Mass Properties: A Self-Tuning Method Based on Gyros and Cameras. Sensors 2016, 16, 1383. https://doi.org/10.3390/s16091383