1. Introduction
Space robotic manipulators have become essential components in various space missions, including on-orbit servicing, debris removal, and in-space assembly of large structures, thanks to their high technological readiness level [1]. They offer distinct advantages by performing tasks that are too time-consuming, risky, or costly for human astronauts [2]. Typically, a space robotic manipulator system comprises a base spacecraft equipped with one or more robotic manipulators. A notable example is the Canadarm on the International Space Station (see Figure 1) [3].
The primary challenge in on-orbit servicing and debris removal missions is the dynamic control of a free-floating robot manipulator interacting with a free-floating or tumbling target in microgravity, especially during the capture and post-capture stabilization phases [1]. Achieving a safe or “soft” capture, in which the robot grasps a free-floating malfunctioning spacecraft or non-cooperative debris (hereafter referred to as the target) without pushing it away, is critical. Simultaneously, it is essential to stabilize the attitude of the base spacecraft that carries the robot. Once the spacecraft–target combination is safely captured and stabilized, on-orbit servicing can proceed, or the debris can be relocated to a graveyard orbit or deorbited for re-entry into the Earth’s atmosphere within 25 years [4]. Given the high-risk nature of these operations, in which the robot must physically contact the target, it is imperative that the capture system and associated control algorithms be rigorously tested and validated on Earth before deployment in space [5].
Experimental validation is challenging for any research, especially in space applications, where access to space for testing is limited and prohibitively expensive. Ground-based testing and validation of a space robot’s dynamic responses and associated control algorithms during contact with an unknown 3D target in microgravity presents additional complexities. Various technologies have been proposed in the literature to mimic microgravity environments on Earth.
The most commonly used method is the air-bearing testbed. For example, Figure 2A,B shows the air-bearing testbeds in the authors’ laboratory at York University [6] and at the Polish Academy of Sciences [7], respectively. While these testbeds provide an almost frictionless, zero-gravity environment, they are restricted to planar motion and cannot emulate the 6DOF motion of space robots in three-dimensional space.
Another approach uses active suspension systems for gravity compensation [8] and can achieve 6DOF motion. However, these systems can become unstable due to coupled vibrations between the space manipulator and the suspension system, and accurately identifying and compensating for kinetic friction within the tension control system remains a significant challenge. Neutral buoyancy, as discussed in Ref. [9], simulates microgravity by submerging a space robotic arm in a pool or water tank to test 6DOF motions. However, this method requires custom-built robotic arms to prevent water damage, and the effects of fluid damping and added inertia can skew the results, particularly when the system’s dynamic behavior is significant.
The closest approximation to a zero-gravity environment on Earth is achieved through parabolic flights by aircraft [10] or drop towers [11]. However, parabolic flights offer a limited microgravity duration of approximately 30 s, which is insufficient to fully test the robotic capture process of a target. Furthermore, the cost of these flights is high, and the available testing space in the aircraft cabin is restricted. Drop towers offer even shorter microgravity periods, often less than 10 s depending on the tower’s height, with an even more constrained testing space [11].
The integration of computer simulation and hardware implementation has emerged as a highly effective method for validating space manipulator capture missions. The hardware-in-the-loop (HIL) system combines mathematical and mechanical models, utilizing a hardware robotic system to replicate the dynamic behavior of a simulated spacecraft and space manipulator.
Figure 2C shows the HIL testbed at the Shenzhen Space Technology Center in China [12]. Figure 2D illustrates the European Proximity Operations Simulator at the German Aerospace Center [13]. The Canadian Space Agency (CSA) has also developed a sophisticated HIL simulation system, the SPDM (special purpose dexterous manipulator) task verification facility, shown in Figure 2E, which simulates the dynamic behavior of a space robotic manipulator performing maintenance tasks on the ISS [14]. This facility is regarded as a formal verification tool. Figure 2F displays the dual-robotic-arm testbed at the Research Institution of Intelligent Control and Testing at Tsinghua University [15]. Figure 2G shows the manipulator task verification facility (MTVF) developed by the China Academy of Space Technology [16,17].
Figure 2. (A) Air-bearing testbed at York University [6], (B) air-bearing testbed at the Polish Academy of Sciences [7], (C) HIL testbed at Shenzhen Space Technology Center [12], (D) European Proximity Operations Simulator at the German Aerospace Center [13], (E) CSA SPDM task verification facility [14], (F) dual-robotic-arm testbed at Tsinghua University [15,16], (G) MTVF at the China Academy of Space Technology [16].
Notably, the existing space robotic testing facilities feature a simple capture interface designed to achieve pose alignment between the robotic end-effector and the target. As summarized in Figure 3, this basic interface is inadequate for comprehensive capture, sensing feedback, and post-capture stabilization of the target. These tasks require a sophisticated interface, such as a multi-finger gripper integrated with a suite of sensors.
This need motivates the current work to expand space robotic experimental validation technology. One of the major contributions of this work is a more sophisticated capture interface: a three-finger gripper equipped with a camera, a force/torque sensor, and tactile sensors. These enhancements enable active gravity compensation and precise contact force detection, significantly improving the system’s capabilities for experimental validation in space robotics.
2. Design of Hardware-in-the-Loop Testbed
The proposed robotic HIL testbed comprises two key sub-systems, as shown in Figure 4. The testbed includes two identical 6DOF industrial robots. A mock-up target is affixed to the end-effector of one robot (Robot B), while a robotic finger gripper is mounted on the end-effector of the second robot (Robot A) with a camera in the eye-in-hand configuration. The gripper can autonomously track, approach, and grasp the target, guided by real-time visual feedback from the camera. Two computers were used to simulate the dynamic motion of the free-floating spacecraft-borne robot and the free-floating/tumbling target in a zero-gravity environment based on multibody dynamics and contact models. The 6DOF relative dynamic motions of the space robotic end-effector and the target (depicted as blue components in the figure) were first simulated in 3D space and, subsequently, converted into control inputs for the robotic arms (represented as red components) to replicate the motions in the physical testbed.
The dynamic simulation program of the mock-up satellite and space robot can represent all known environmental disturbance forces in orbit, as required by the experiment. These include gravitational harmonic perturbations (J2, J3, J4, …), lunar and solar gravitational perturbations, atmospheric drag, magnetic forces and torques due to eddy currents in the satellite and spacecraft robot, and solar radiation pressure.
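For illustration, the following minimal Python sketch computes the dominant J2 zonal-harmonic perturbation acceleration in an Earth-centered inertial frame. The constants are standard WGS-84 values; the function name and its placement in the simulation loop are illustrative assumptions, not the testbed’s actual code.

```python
import numpy as np

# Standard Earth constants (WGS-84 values)
MU = 3.986004418e14   # gravitational parameter [m^3/s^2]
RE = 6378137.0        # equatorial radius [m]
J2 = 1.08262668e-3    # second zonal harmonic [-]

def j2_acceleration(r_eci):
    """Perturbing acceleration [m/s^2] due to the J2 zonal harmonic,
    for a position vector r_eci [m] in an Earth-centered inertial frame."""
    x, y, z = r_eci
    r = np.linalg.norm(r_eci)
    k = -1.5 * J2 * MU * RE**2 / r**5
    zr2 = 5.0 * z**2 / r**2
    return k * np.array([x * (1.0 - zr2),
                         y * (1.0 - zr2),
                         z * (3.0 - zr2)])
```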
The relative motion between the target and the gripper is 6DOF. If the target were fixed or its motion known in advance, a single 6DOF industrial robot would suffice to emulate the relative motion. For a non-cooperative, unknown target, however, the motion must be detected and tracked in real time, so the relative 6DOF motion is not available in advance. Therefore, two industrial FANUC robotic manipulators were used in the current experimental system to reflect this nature and to test complex capture scenarios in which both the target and the manipulator are in motion. This setup allowed us to:
- Simulate the tumbling motion or drift of the target independently, which is a common scenario in space debris removal or satellite capture missions.
- Test control algorithms designed to track and capture the tumbling target, even as the target rotates or moves independently.
- Simulate post-capture stabilization, where both the target and the manipulator are dynamically affected by contact forces, which is critical for safe and successful operations in space missions.
2.1. Robotic Manipulator System
In the HIL configuration, the 6DOF dynamic motions of both the free-floating target and the end-effector of the space robotic manipulator (shown in blue in Figure 4) were achieved using two FANUC M-20iD/25 robotic manipulators [18], as shown in Figure 5A. These robots offer 6DOF motion, enabling precise control of the end-effector’s pose (3DOF translational and 3DOF rotational). The end-effector has a payload capacity of 25 kg and a reach of 1831 mm. Each robotic arm is independently controlled by a dedicated computer.
The robotic arm identified as robot B in Figure 4 was equipped with a mock-up target attached to its end-effector and was designed to replicate the target’s free-floating motion in space. An ATI force/torque sensor, shown in Figure 5C and depicted in green in Figure 4, was mounted between the end-effector and the mock-up target. This sensor measured both the forces and torques generated by the weight of the mock-up target and any contact forces during a capture event. These measurements, combined with the contact forces measured by the tactile sensors on the gripper, provided a comprehensive assessment of the contact loads on the target. These data were then fed into the dynamic model of the target to simulate its disturbed motion, whether in a free-floating state before capture or in a disturbed state after being captured by the gripper. The simulated target motion was subsequently converted into joint commands, which were fed into the robotic manipulator for execution.
The FANUC industrial robot, designed for high-precision manufacturing environments, was engineered to execute predefined paths with exceptional repeatability (±0.02 mm) [18]. To enable dynamic path tracking, as necessitated by a moving target or gripper, an interface was established between the FANUC robot’s control box and an external computer via an Ethernet connection using the FANUC Stream Motion protocol. This protocol allows independent control of the robot’s six joints by streaming discretized joint commands, generated by a Python program, from the external computer directly to the FANUC control box. These joint commands must be transmitted and received within a strict timestep of 4–8 ms, and the commanded positions must be within reachable limits. With this interface, all joints of the robotic arm could be controlled to follow the simulated path, as described below.
The motion of the end-effector of the space manipulator was planned in MATLAB/Simulink; similarly, the motion of the target was predicted in the same simulation environment. In the pre-grasping phase, once a new target pose had been acquired by the computer vision system, the trajectory of the space manipulator’s end-effector was planned accordingly under microgravity conditions. Although the simulation could run faster than real time on the PC, the FANUC manipulator requires discrete commands at 0.004 s intervals, so the simulated trajectories were output at this time step to ensure that the FANUC robots’ end-effectors precisely followed the simulated trajectories of both the target and the space manipulator’s end-effector (EE). This was done by (i) extracting the simulated 6DOF poses of the space manipulator’s end-effector (gripper) and the target under microgravity conditions, (ii) converting each 6DOF pose into joint angle commands for the 6DOF FANUC robots by inverse kinematics, and (iii) commanding the FANUC robots to move the gripper and the target along the simulated trajectories as if they were moving in microgravity.
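A minimal sketch of steps (i) and (ii) follows: the simulated trajectory is resampled onto the 4 ms command grid, and each pose is passed through an inverse kinematics solver. The `inverse_kinematics` helper is a placeholder for the robot-specific solver, and angle wrapping in the interpolation is ignored for brevity.

```python
import numpy as np

DT_ROBOT = 0.004  # FANUC command interval [s]

def resample_poses(t_sim, poses_sim, t_end):
    """Linearly interpolate simulated 6DOF poses (N x 6 array:
    x, y, z, roll, pitch, yaw) onto the fixed 4 ms command grid.
    Euler-angle wrapping is ignored in this sketch."""
    t_grid = np.arange(0.0, t_end, DT_ROBOT)
    return np.column_stack([np.interp(t_grid, t_sim, poses_sim[:, i])
                            for i in range(6)])

def poses_to_joint_commands(poses, inverse_kinematics):
    """Convert each 6DOF pose to a 6-element joint command.
    `inverse_kinematics` stands in for the robot-specific IK solver."""
    return [inverse_kinematics(p) for p in poses]
```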
In the post-grasping phase, the forces and torques at the interfaces between the FANUC end-effectors and the target and gripper were measured by the ATI sensors. Following the procedure outlined in the next section, i.e., Equations (5) and (6), gravity compensation was applied to the force and torque measurements, yielding the net contact forces and torques acting on the target and gripper. These contact forces and torques were then input into the dynamic models of the target and the space manipulator to simulate their disturbed motions at the next time step. Based on these new positions, the joint commands for the FANUC manipulators were recalculated and sent to the control systems to move the target and gripper accordingly, as described in the pre-grasping phase. This process introduces a delay, as noted in [19], due to the time needed to record the forces and torques and apply gravity compensation before each simulation time step. This delay was addressed through the FANUC control system.
In the hardware implementation, the two FANUC robots were synchronized at each simulation time step: before moving to the next set of simulated positions, both robots had to reach the positions from the previous time step, regardless of any delay. This synchronization ensured consistent coordination between the robots at each time step, continuing until the end-effector captured the target. A flowchart summarizing the process from simulation to real-world motion is presented in Figure 6.
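The synchronization rule can be sketched as follows, assuming hypothetical `read_joints`/`send_joints` interfaces to the two robot controllers; the tolerance value is illustrative, not a calibrated testbed parameter.

```python
import time
import numpy as np

TOL = 1e-3  # joint-angle tolerance for "setpoint reached" (illustrative)

def step_synchronized(robot_a, robot_b, q_cmd_a, q_cmd_b):
    """Advance both robots by one simulation time step: wait until each
    robot has reached its previous setpoint, then issue the new commands.
    `read_joints`, `send_joints`, and `last_cmd` stand in for the real
    controller interfaces and are assumptions, not FANUC API calls."""
    while not (np.allclose(robot_a.read_joints(), robot_a.last_cmd, atol=TOL)
               and np.allclose(robot_b.read_joints(), robot_b.last_cmd, atol=TOL)):
        time.sleep(0.004)  # poll at the 4 ms command tick
    robot_a.send_joints(q_cmd_a)
    robot_b.send_joints(q_cmd_b)
    robot_a.last_cmd, robot_b.last_cmd = q_cmd_a, q_cmd_b
```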
The robot on the right in Figure 4 (robot A) simulated the motion of a free-floating space robotic manipulator under microgravity. This robot was equipped with a Robotiq gripper (see Figure 5B) mounted via an ATI force/torque sensor. The gripper featured three individually actuated fingers, each consisting of three links. However, only the base link was directly controllable in two directions; the other two links were passively adaptive, designed to conform to the varying surface profiles of the target.
Similar to robot B, robot A was controlled by an external computer to replicate the gripper’s motion extracted from the microgravity simulation. In the pre-capture phase, the simulated motions of the space manipulator’s end-effector (gripper) under microgravity conditions were extracted from the simulation, converted into joint angle commands for the 6DOF FANUC robot by inverse kinematics, and input into the FANUC control system so that the gripper moved along the simulated trajectory as if in microgravity. During the capture phase, when contact occurred, the ATI sensor measured the resultant forces and torques caused by the gripper contacting the target, while the tactile sensors on the gripper measured the contact forces. These data provided a detailed assessment of the contact loads on both the target and the robotic manipulator. The gravity effect on the ATI sensor measurements was then compensated for by the procedure in Section 2.2. The net contact forces and torques were used in the simulation program to calculate the disturbed motions of the gripper and the target due to the contact. These motions were input into the FANUC robotic control algorithms to move the gripper and the target as if operating in orbit. The implementation procedure is shown in Figure 4.
2.2. Active Gravity Compensation
Active gravity compensation was achieved by the ATI force/torque sensor mounted between the end-effector of robot B and the target. Prior to contact with the gripper, the sensor measured the force and moment exerted by gravity as follows:

$$\mathbf{F}_s = \mathbf{T}\,\mathbf{W}, \quad (1)$$

$$\mathbf{M}_s = \mathbf{r}_g \times \left(\mathbf{T}\,\mathbf{W}\right), \quad (2)$$

where $\mathbf{T}$ is the transformation matrix from the inertial frame to the local frame fixed to the sensor, $\mathbf{W}$ is the target’s weight vector in the inertial frame, and $\mathbf{r}_g$ is the position vector of the target’s center of gravity in the sensor’s local frame, as shown in Figure 7. Both $\mathbf{W}$ and $\mathbf{r}_g$ were known from the design of the target, while the transformation matrix $\mathbf{T}$ was determined from the kinematics of robot B based on the joint angle measurements from the robotic control system.
Upon contact between the target and the gripper, the force and moment measured by the ATI sensor changed to:

$$\mathbf{F}_s = \mathbf{T}\,\mathbf{W} + \mathbf{F}_c, \quad (3)$$

$$\mathbf{M}_s = \mathbf{r}_g \times \left(\mathbf{T}\,\mathbf{W}\right) + \mathbf{r} \times \mathbf{F}_c, \quad (4)$$

where $\mathbf{F}_c$ is the true contact force acting on the target and $\mathbf{r}$ is the location of the contact point in the sensor’s local frame, which can be determined through computer vision. From Equations (3) and (4), the contact force $\mathbf{F}_c$ and moment $\mathbf{M}_c$ acting at the center of mass of the free-floating target were calculated as:

$$\mathbf{F}_c = \mathbf{F}_s - \mathbf{T}\,\mathbf{W}, \quad (5)$$

$$\mathbf{M}_c = \left(\mathbf{r} - \mathbf{r}_g\right) \times \mathbf{F}_c. \quad (6)$$
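A minimal sketch of this gravity compensation is given below, assuming the rotation matrix T is supplied by robot B’s forward kinematics. The weight vector uses the 7.3 kg target mass from Section 2.5, while the center-of-gravity offset is an illustrative value, not the measured one.

```python
import numpy as np

W = np.array([0.0, 0.0, -7.3 * 9.81])  # target weight in the inertial frame [N]
R_G = np.array([0.0, 0.0, 0.15])       # center-of-gravity offset in the sensor
                                       # frame [m] (illustrative value)

def contact_wrench(f_meas, m_meas, T, r_contact):
    """Remove the gravity contribution from the ATI force/torque reading
    (Equations (5) and (6)) and return the net contact force and the
    moment about the target's center of mass."""
    f_gravity = T @ W                     # Eq. (1): gravity in the sensor frame
    f_c = f_meas - f_gravity              # Eq. (5): net contact force
    m_c = np.cross(r_contact - R_G, f_c)  # Eq. (6): moment about center of mass
    return f_c, m_c
```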
Finally, $\mathbf{F}_c$ and $\mathbf{M}_c$ were input into the dynamic model of the target to calculate the motion caused by the contact disturbance. This disturbed motion was then mapped into robotic joint angle commands, allowing robot B (holding the target) to mimic the motion of the target. Snapshots of the target’s motion with gravity compensation are shown in Figure 8. It should be noted that an external force, in lieu of a contact force, was applied in the test in Figure 8.
The resultant contact force and torque at the gripper side, which were later input into the dynamic simulation of the free-floating motion of the space robot and its base spacecraft, can be obtained in two ways: taken from the measurement at the target side based on Equations (5) and (6) via network communication, or calculated directly from the force and torque measured by the ATI sensor mounted between the gripper and robot A’s end-effector together with the data from the tactile sensors on the gripper fingers. At the moment of contact and during the post-contact phase, the configuration of the gripper fingers relative to the gripper base (and hence the ATI sensor) was known, and the tactile sensor data indicated which finger made contact. From these data, the resultant contact force and torque acting on the gripper were calculated by subtracting the gravity-induced force and torque from the measurement.
While the gripper was moving, there was potential for disturbances caused by the gripper’s own motion. However, after subtracting the orbital motion, the free-floating spacecraft moved slowly with minimal acceleration, so gravitational effects introduced very little disturbance into the measurements. Another source of disturbance in the ATI measurements was noise; the noise floor was calibrated in advance and then filtered out by the data acquisition algorithms.
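A simple sketch of this noise handling follows: the static bias is estimated from unloaded readings and subtracted, and a first-order low-pass filter suppresses the residual noise. The smoothing factor is an illustrative assumption, not a calibrated testbed value.

```python
import numpy as np

def calibrate_noise_floor(samples):
    """Average a batch of unloaded sensor readings (N x 6 force/torque
    samples) to estimate the static bias of the ATI sensor."""
    return np.mean(samples, axis=0)

def make_lowpass(alpha=0.1):
    """First-order IIR low-pass filter; alpha is an illustrative
    smoothing factor."""
    state = {"y": None}
    def f(x):
        x = np.asarray(x, dtype=float)
        state["y"] = x if state["y"] is None else (1 - alpha) * state["y"] + alpha * x
        return state["y"]
    return f

# usage: wrench = lowpass(raw_reading - bias)
```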
2.3. Capturing Interface
To grasp a moving or potentially tumbling target attached to robot B in Figure 4, the Robotiq three-finger adaptive gripper was employed as the capturing interface. The gripper specifications are detailed in Table 1. Robotiq, the manufacturer of the adaptive gripper, is based in Lévis, Quebec, Canada.
The Robotiq three-finger adaptive gripper was selected for the following compelling reasons:
Flexibility: as an underactuated gripper, the Robotiq three-finger gripper is capable of adapting to the shape of the target being grasped, providing exceptional flexibility and reliability. This versatility makes it ideal for various applications, from grasping irregularly shaped targets to performing delicate manipulation tasks.
Repeatability: the gripper is capable of providing high repeatability with a precision better than 0.05 mm, making it well-suited for tasks requiring precise grasping and manipulation. Some example movements are shown in Figure 9.
Payload capacity: capable of handling payloads up to 10 kg, the gripper is suitable for applications involving the manipulation of heavy targets.
Compatibility: the gripper is compatible with most industrial robots and supports control via EtherNet/IP, transmission control protocol/internet protocol (TCP/IP), or Modbus remote terminal unit (RTU), facilitating seamless integration with existing robotic systems (see the control sketch after this list).
Grasping modes: the gripper offers four pre-set grasping modes (scissor, wide, pinch, and basic), providing a wide range of grasping setups for grasping different targets.
Grasping force and speed: with a maximum grasping force of 60 N and selectable grasping modes, the gripper allows users to choose the most appropriate settings for the specific task, ensuring safe and effective handling of targets of varying shapes and sizes.
Overall, the high level of flexibility, repeatability, and payload capacity of the Robotiq gripper makes it an ideal choice for the current application.
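As a sketch of the network control path mentioned above, the following uses a generic Modbus TCP client (pymodbus) to issue a simplified grasp command. The IP address, register offset, and command-word packing are placeholders to be replaced with the values in Robotiq’s instruction manual, not verified settings.

```python
from pymodbus.client import ModbusTcpClient

GRIPPER_IP = "192.168.1.11"  # placeholder gripper address
REG_COMMAND = 0              # placeholder register offset; consult the
                             # Robotiq 3-finger gripper instruction manual

def grasp(position=255, speed=128, force=64):
    """Send a simplified activate-and-close command over Modbus TCP.
    The register packing below is illustrative, not the documented map."""
    client = ModbusTcpClient(GRIPPER_IP)
    client.connect()
    # byte-packed action request (activate + go-to), then position/speed/force
    words = [0x0900, (position << 8) | speed, force << 8]
    client.write_registers(REG_COMMAND, words)
    client.close()
```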
2.4. Sensing System
The sensing system in the HIL testbed included two ATI force/torque sensors (manufactured by ATI Industrial Automation, headquartered in Apex, NC, USA), custom tactile sensors (Figure 10a) mounted on each link of the gripper’s fingers, and an Intel® RealSense™ depth camera D455 (Figure 10b), manufactured by Intel Corporation, headquartered in Santa Clara, CA, USA. The tactile sensors, constructed from thin-film piezoresistive pressure sensors, were affixed to the finger links of the gripper. These sensors measured the normal contact forces between the gripper and the target, ensuring the safety of both components. To improve the accuracy of the sensor data, the sensors were incorporated into 3D-printed plastic adaptors that conformed to the contours of the fingers. These adaptors maintained consistent contact and ensured that forces exerted on the gripper’s fingers were accurately recorded and fed back into the control loop. This feedback was instrumental for controlling the fingers with greater precision and for improving the gripper’s ability to grasp the target securely.
The Intel RealSense camera determined the target’s relative pose by either photogrammetry or AI-enhanced computer vision algorithms. This information was fed into the robotic control algorithm, which guided the robot to autonomously track and grasp the target. Furthermore, six high-resolution (2K) TV cameras were strategically positioned in the testing room to monitor the test from six angles and provide ground truth data (see Figure 11).
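One possible form of this pose estimation pipeline is sketched below, pairing the RealSense color stream with OpenCV’s PnP solver. The 3D model points use the corners of one face of the 30 cm cube target from Section 2.5, and the 2D feature detection front end (photogrammetry or the AI detector) is omitted; this is an illustrative sketch, not the testbed’s actual vision code.

```python
import numpy as np
import cv2
import pyrealsense2 as rs

# Corners of one face of the 30 cm cube target (model frame, metres)
MODEL_POINTS = np.array([[-0.15, -0.15, 0.0], [0.15, -0.15, 0.0],
                         [0.15, 0.15, 0.0], [-0.15, 0.15, 0.0]], np.float32)

def estimate_pose(image_points, K):
    """Solve for the target pose from the detected 2D corner locations
    (supplied by the vision front end, which is omitted here)."""
    ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS, image_points, K, None)
    return rvec, tvec  # rotation (Rodrigues vector) and translation [m]

# Stream setup and camera intrinsics straight from the device
pipeline = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
profile = pipeline.start(cfg)
intr = (profile.get_stream(rs.stream.color)
        .as_video_stream_profile().get_intrinsics())
K = np.array([[intr.fx, 0, intr.ppx], [0, intr.fy, intr.ppy], [0, 0, 1]])
```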
2.5. Mock-Up Target Satellite
The mock-up target used in this testbed was a scaled-down model of a typical satellite. Its physical appearance and dimensions were scaled down to fit the workspace and allow the computer vision algorithm to track its pose. The inertial properties of the scaled model did not impact the experimental hardware setup, as long as they remained within the payload limits of the FANUC manipulator. The mock-up target was therefore a 30 cm cube made of 1/8 inch (3.175 mm) thick aluminum plates, bolted directly to the ATI force/torque sensor, which was in turn mounted on the end-effector of the robotic manipulator. To mimic the appearance of a real satellite for training the AI-enhanced computer vision algorithm, the cube was wrapped in thermal blankets that replicated the light-reflective properties of real satellites in space.
Additional components, including a dummy thrust nozzle, coupling ring, and solar panel, were attached to the mock satellite, as shown in Figure 12. The dummy thrust nozzle and coupling ring were 3D-printed from acrylonitrile butadiene styrene (ABS), and the panel (30 cm × 90 cm × 0.635 cm) was made from a polymethyl methacrylate sheet.
The total mass of the mock-up satellite was 7.3 kg, and its principal moments of inertia were [Ixx, Iyy, Izz] = [0.18, 0.18, 0.19] kg·m².
The illumination in the testing room was adjustable to simulate a space-like environment, for example, using a single light source to mimic sunlight. This setup was designed to train the AI-enhanced computer vision algorithm to accurately recognize the target’s pose in a simulated space environment.
2.6. Computer System
Two desktop PCs were employed to control the pair of FANUC robotic manipulators. The first PC controlled the robot interacting with the mock-up target. This computer executed open-loop forward control to move the mock-up target as if it were in orbit. During a capture event, the resultant contact forces, after compensating for gravity, were input into the simulation program to calculate the target’s disturbed motion in real time. The motion was then converted into robotic joint angle commands by inverse kinematics, which were transmitted to the robot’s control box to move the target as if it were in a microgravity environment. Because this task did not require high computing power, a Dell XPS 8940 desktop PC (Dell Technologies, Round Rock, TX, USA) was employed.
The second PC was responsible for collecting measurement data from the camera, tactile sensors, and force/torque sensors, simulating the 6DOF motion of the space robot, and ensuring that the robotic manipulator and gripper replicated the gripper’s motion in a microgravity environment. During a capture event, this computer controlled the robotic manipulator and gripper to synchronize the gripper’s motion with the target. This process demanded high computing power, so a Lambda™ GPU desktop deep learning workstation was selected for this purpose. The integrated HIL testbed is shown in Figure 13.