Article

Development of 6DOF Hardware-in-the-Loop Ground Testbed for Autonomous Robotic Space Debris Removal

Department of Mechanical Engineering, York University, 4700 Keele Street, Toronto, ON M3J 1P3, Canada
* Author to whom correspondence should be addressed.
Aerospace 2024, 11(11), 877; https://doi.org/10.3390/aerospace11110877
Submission received: 15 August 2024 / Revised: 2 October 2024 / Accepted: 11 October 2024 / Published: 25 October 2024
(This article belongs to the Special Issue Space Mechanisms and Robots)

Abstract
This paper presents the development of a hardware-in-the-loop ground testbed featuring active gravity compensation via software-in-the-loop integration, specially designed to support research in autonomous robotic removal of space debris. The testbed replicates six-degree-of-freedom (6DOF) maneuvering to accurately simulate the dynamic behaviors of free-floating robotic manipulators and free-tumbling space debris under microgravity conditions. It incorporates two industrial 6DOF robotic manipulators, a three-finger robotic gripper, and a suite of sensors, including cameras, force/torque sensors, and tactile sensors. This setup provides a robust platform for testing and validating technologies for autonomous tracking, capture, and post-capture stabilization within the context of active space debris removal missions. Preliminary experimental results have demonstrated advancements in motion control, computer vision, and sensor fusion. The facility is positioned to become an essential resource for the development and validation of space robotic manipulators, offering substantial improvements to the effectiveness and reliability of autonomous capture operations in space missions.

1. Introduction

Space robotic manipulators have become essential components in various space missions, including on-orbit servicing, debris removal, and in-space assembly of large structures, thanks to their high technology readiness level [1]. They offer distinct advantages by performing tasks that are too time-consuming, risky, or costly for human astronauts [2]. Typically, a space robotic manipulator system comprises a base spacecraft equipped with one or more robotic manipulators. A notable example is the Canadarm on the International Space Station (see Figure 1) [3].
The primary challenge in on-orbit servicing and debris removal missions is the dynamic control of a free-floating robotic manipulator interacting with a free-floating or tumbling target in microgravity, especially during the capture and post-capture stabilization phases [1]. Achieving a safe or “soft” capture, where the robot grasps a free-floating malfunctioning spacecraft or non-cooperative debris (hereafter referred to as the target) without pushing it away, is critical. Simultaneously, it is essential to stabilize the attitude of the base spacecraft that carries the robot. Once the spacecraft–target combination is safely captured and stabilized, on-orbit servicing can proceed, or the debris can be relocated to a graveyard orbit or deorbited for re-entry into the Earth’s atmosphere within 25 years [4]. Given the high-risk nature of these operations, in which the robot must physically contact the target, it is imperative that the capture system and associated control algorithms be rigorously tested and validated on Earth before deployment in space [5].
Experimental validation for any research is never without difficulties, especially in space applications where access to space for testing is limited and prohibitively expensive. Ground-based testing and validation of a space robot’s dynamic responses and associated control algorithms during contact with an unknown 3D target in microgravity presents additional complexities. Various technologies have been proposed in the literature to mimic microgravity environments on Earth.
The most commonly used method is the air-bearing testbed. For example, Figure 2A,B shows air-bearing testbeds in the authors’ laboratory at York University [6] and the Polish Academy of Sciences [7], respectively. While these testbeds provide an almost frictionless and zero-gravity environment, they are restricted to planar motion and cannot emulate the 6DOF motion of space robots in three-dimensional space.
Another approach uses active suspension systems for gravity compensation [8], which can achieve 6DOF motion. However, these systems can become unstable due to coupled vibrations between the space manipulator and the suspension system, and accurately identifying and compensating for kinetic friction within the tension control system remains a significant challenge. Neutral buoyancy, as discussed in Ref. [9], simulates microgravity by submerging a space robotic arm in a pool or water tank to test 6DOF motions. However, this method requires custom-built robotic arms to prevent damage from water exposure. Moreover, the effects of fluid damping and added inertia can skew the results, particularly when the system’s dynamic behavior is significant.
The closest approximation to a zero-gravity environment on Earth is achieved through parabolic flights by aircraft [10] or drop towers [11]. However, parabolic flights offer a limited microgravity duration of approximately 30 s, which is insufficient to fully test the robotic capture process of a target. Furthermore, the cost of these flights is high, and the available testing space in the aircraft cabin is restricted. Drop towers offer even shorter microgravity periods, often less than 10 s depending on the tower’s height, with an even more constrained testing space [11].
The integration of computer simulation and hardware implementation has emerged as a highly effective method for validating space manipulator capture missions. The hardware-in-the-loop (HIL) system combines mathematical and mechanical models, utilizing a hardware robotic system to replicate the dynamic behavior of a simulated spacecraft and space manipulator.
Figure 2C shows the HIL testbed at the Shenzhen Space Technology Center in China [12]. Figure 2D illustrates the European Proximity Operations Simulator at the German Aerospace Center [13]. The Canadian Space Agency (CSA) has also developed a sophisticated HIL simulation system—the SPDM (special purpose dexterous manipulator) task verification facility, shown in Figure 2E, which simulates the dynamic behavior of a space robotic manipulator performing maintenance tasks on the ISS [14]. This facility is regarded as a formal verification tool. Figure 2F displays the dual-robotic-arm testbed at the Research Institution of Intelligent Control and Testing at Tsinghua University [15]. Figure 2G shows the manipulator task verification facility (MTVF) developed by the China Academy of Space Technology [16,17].
Figure 2. (A) Air-bearing testbed at York University [6], (B) air-bearing testbed at the Polish Academy of Sciences [7], (C) HIL testbed at Shenzhen Space Technology Center [12], (D) European proximity operations simulator at the German Aerospace Center [13], (E) CSA SPDM task verification facility [14], (F) dual robotic testbed at Tsinghua University [15,16], (G) MTVF at the China Academy of Space Technology [16].
Notably, the existing space robotic testing facilities feature a simple capture interface, which is designed to achieve pose alignment between the robotic end-effector and the target. As summarized in Figure 3, this basic interface is inadequate for comprehensive capture, sensing feedback, and post-capture stabilization of the target. These tasks require a sophisticated interface, such as a multi-finger gripper integrated with a suite of sensors.
This need motivates the current work to expand the space robotic experimental validation technology. One of the major contributions of this work is the more complex capture interface. It features a three-finger gripper equipped with a camera, a torque/force sensor, and tactile sensors. These enhancements enable active gravity compensation and precise contact force detection, significantly improving the system’s capabilities for experimental validation in space robotics.

2. Design of Hardware-in-the-Loop Testbed

The proposed robotic HIL testbed comprises two key sub-systems, as shown in Figure 4. The testbed includes two identical 6DOF industrial robots. A mock-up target is affixed to the end-effector of one robot (robot B), while a three-finger robotic gripper is mounted on the end-effector of the second robot (robot A), with a camera in an eye-in-hand configuration. The gripper can autonomously track, approach, and grasp the target, guided by real-time visual feedback from the camera. Two computers simulate the dynamic motion of the free-floating spacecraft-borne robot and the free-floating/tumbling target in a zero-gravity environment based on multibody dynamics and contact models. The 6DOF relative dynamic motions of the space robotic end-effector and the target (depicted as blue components in the figure) were first simulated in 3D space and subsequently converted into control inputs for the robotic arms (represented as red components) to replicate the motions in the physical testbed.
The dynamic simulation program of the mock-up satellite and space robot can represent all known environmental disturbance forces in orbit, as required by the experiment. These include gravitational harmonic perturbations (J2, J3, J4, …), lunar and solar gravitational perturbations, atmospheric drag, magnetic forces and torques due to eddy currents in the satellite and the spacecraft-borne robot, and solar radiation pressure.
The relative motion between the target and the gripper is 6DOF. If the target is fixed or its motion is known in advance, a single 6DOF industrial robot suffices to emulate the relative 6DOF motion. For a non-cooperative, unknown target, however, the motion must be detected and tracked in real time, so the relative 6DOF motion is not available in advance. Therefore, two industrial FANUC robotic manipulators were used in the current experimental system to reflect this nature and test complex capture scenarios where both the target and the manipulator are in motion. This setup allowed us to:
- Simulate the tumbling motion or drift of the target independently, which is a common scenario in space debris removal or satellite capture missions.
- Test control algorithms designed to track and capture the tumbling target, even as the target rotates or moves independently.
- Simulate post-capture stabilization, where both the target and the manipulator are dynamically affected by contact forces, which is critical for safe and successful operations in space missions.

2.1. Robotic Manipulator System

In the HIL configuration, the 6DOF dynamic motions of both the free-floating target and the end-effector of the space robotic manipulator (shown in blue in Figure 4) were reproduced using two FANUC M-20iD/25 robotic manipulators [18], as shown in Figure 5A. These robots offer 6DOF motion, enabling precise control of the end-effector’s pose (3DOF translational and 3DOF rotational movements). Each robot has a payload capacity of 25 kg at the end-effector and a reach of 1831 mm, and each arm is independently controlled by a dedicated computer.
The robotic arm identified as robot B in Figure 4 carried a mock-up target on its end-effector and was designed to replicate the target’s free-floating motion in space. An ATI Industrial Automation force/torque sensor, shown in Figure 5C and depicted in green in Figure 4, was mounted between the end-effector and the mock-up target. This sensor measured the forces and torques generated by the weight of the mock-up target as well as any contact forces during a capture event. These measurements, combined with the contact forces measured by the tactile sensors on the gripper, provided a comprehensive assessment of the contact loads on the target. These data were then fed into the dynamic model of the target to simulate its disturbed motion, whether in a free-floating state before capture or in a disturbed state after being captured by the gripper. The simulated target motion was subsequently converted into joint commands, which were fed into the robotic manipulator for execution.
The FANUC industrial robot, designed for high-precision manufacturing environments, was engineered to execute predefined paths with exceptional repeatability (±0.02 mm) [18]. To enable dynamic path tracking, necessitated by a moving target or gripper, an interface was established between the FANUC robot’s control box and an external computer via an Ethernet connection, utilizing the FANUC StreamMotion protocol. This protocol allows independent control of the robot’s six joints by streaming discretized joint commands, generated in Python, from the external computer directly to the FANUC control box. These joint commands must be transmitted and received within a strict timestep of 4–8 ms, and the commanded positions must be within reachable limits. With this interface, all joints of the robotic arm could be driven by the simulated path as follows.
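To illustrate the timing constraint, the sketch below shows a fixed-rate joint-command streaming loop in Python. The IP address, port, and packet layout are hypothetical placeholders; the actual StreamMotion packet format is defined by FANUC’s documentation and is not reproduced here.

```python
# A minimal sketch of a fixed-rate joint-command streaming loop in the spirit
# of the StreamMotion interface. Endpoint and packet layout are hypothetical.
import socket
import struct
import time

ROBOT_IP, ROBOT_PORT = "192.168.1.100", 60015   # placeholder endpoint
DT = 0.004                                       # 4 ms command interval

def stream_joint_commands(trajectory):
    """Send one six-joint command per 4 ms tick.

    `trajectory` is an iterable of 6-tuples of joint angles."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    next_tick = time.monotonic()
    for seq, joints in enumerate(trajectory):
        # Illustrative packet: a sequence number plus six float64 angles.
        packet = struct.pack("<I6d", seq, *joints)
        sock.sendto(packet, (ROBOT_IP, ROBOT_PORT))
        next_tick += DT
        # Sleep just enough to stay inside the strict 4-8 ms window.
        time.sleep(max(0.0, next_tick - time.monotonic()))
```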
The motion of the end-effector of the space manipulator was planned in a MATLAB/Simulink simulation, and the motion of the target was predicted in the same environment. In the pre-grasping phase, once a new target pose had been acquired by the computer vision system, the trajectory of the space manipulator’s end-effector was planned accordingly under microgravity conditions. The simulation could run faster than real time on the PC, but the FANUC manipulator requires discrete commands at 0.004 s intervals. Therefore, the simulated trajectories were output at this time step so that the FANUC robots’ end-effectors (EE) precisely followed the simulated trajectories of both the target and the space manipulator’s EE. This was done by (i) extracting the 6DOF poses of the space manipulator’s end-effector (gripper) and of the target under microgravity conditions from the simulation, (ii) converting each 6DOF pose into joint angle commands for the 6DOF FANUC robotic manipulators through inverse kinematics, and (iii) commanding the FANUC robots to move the gripper and the target along the simulated trajectories, as if they were moving in microgravity.
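A minimal sketch of steps (i)–(iii) is given below, assuming the simulated poses are available as arrays; the helper names and the inverse kinematics solver are illustrative placeholders, not the testbed’s actual code.

```python
# Sketch of steps (i)-(iii): resample simulated 6DOF poses onto the 4 ms
# command grid and convert them to joint commands. `ik_solver` stands in
# for the FANUC M-20iD/25 inverse kinematics.
import numpy as np

DT = 0.004  # FANUC command interval [s]

def resample_poses(t_sim, poses_sim):
    """Step (i): interpolate simulated poses (x, y, z, roll, pitch, yaw)
    onto the fixed 0.004 s command grid, one column per pose component."""
    t_cmd = np.arange(t_sim[0], t_sim[-1], DT)
    cols = [np.interp(t_cmd, t_sim, poses_sim[:, k]) for k in range(6)]
    return t_cmd, np.column_stack(cols)

def poses_to_joint_commands(poses, ik_solver):
    """Step (ii): map each 6DOF pose to six joint angles via inverse
    kinematics; step (iii) streams the result to the control box."""
    return np.array([ik_solver(pose) for pose in poses])
```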
In the post-grasping phase, the forces and torques at the interfaces between the FANUC end-effectors and the target and gripper were measured by the ATI sensors. Following the procedure outlined in the next section, i.e., Equations (5) and (6), gravity compensation was applied to the force and torque measurements, yielding the net contact forces and torques acting on the target and gripper. These contact loads were then input into the dynamic models of the target and space manipulator to simulate their disturbed motions at the next time step. Based on these new positions, the joint commands for the FANUC manipulators were recalculated and sent to the control systems to move the target and gripper accordingly, as described in the pre-grasping phase. This process introduces a delay, as noted in [19], due to the time needed to record the forces and torques and apply gravity compensation before each simulation time step. This delay was addressed through the FANUC control system.
In the hardware implementation, the two FANUC robots were synchronized at each simulation time step. Before moving to the next set of simulated positions, both robots first had to reach the positions from the previous time step, regardless of any delay. This synchronization ensured consistent coordination between the robots at each time step, continuing until the end-effector captured the target. A flowchart summarizing the process from simulation to real-world motion is presented in Figure 6.
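The synchronization logic can be sketched as a simple polling barrier, as below; the joint-feedback callbacks are hypothetical stand-ins for whatever feedback channel the robot controllers expose.

```python
# Sketch of the per-time-step synchronization barrier between the two
# robots; `read_joints_a`/`read_joints_b` are hypothetical feedback hooks.
import time
import numpy as np

TOL = 1e-3  # joint-angle tolerance, illustrative

def wait_until_reached(read_joints_a, read_joints_b, target_a, target_b):
    """Block until both robots reach the previous step's commanded joints."""
    while True:
        err_a = np.max(np.abs(np.asarray(read_joints_a()) - target_a))
        err_b = np.max(np.abs(np.asarray(read_joints_b()) - target_b))
        if err_a < TOL and err_b < TOL:
            return
        time.sleep(0.001)  # poll until both targets are reached
```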
The robot on the right in Figure 4 (robot A) simulated the motion of a free-floating space robotic manipulator under microgravity. This robot was equipped with a Robotiq gripper (see Figure 5B) mounted via an ATI force/torque sensor. The gripper featured three individually actuated fingers, each consisting of three links. Only the base link of each finger was directly controllable, in two directions; the other two links were passively adaptive, designed to conform to the varying surface profiles of the target.
Similar to robot B, robot A was controlled by an external computer to replicate the gripper’s motion extracted from the simulation of a microgravity environment. In the pre-capture phase, the simulated motions of the space manipulator’s end-effector (gripper) under microgravity conditions were extracted from the simulation, converted into joint angle commands for the 6DOF FANUC robotic manipulator by inverse kinematics, and executed by the FANUC robot, so that the gripper moved as if in microgravity. During the capture phase, when contact occurred, the ATI sensor measured the resultant forces and torques caused by the gripper contacting the target, while the tactile sensors on the gripper measured the contact forces. These data provided a detailed assessment of the contact loads on both the target and the robotic manipulator. The gravity effect on the ATI sensor measurements was then compensated for by the procedure in Section 2.2. The net contact forces and torques were used in the simulation program to calculate the disturbed motions of the gripper and the target due to the contact. These motions were input into the FANUC robotic control algorithms to move the gripper and the target as if they were operating in orbit. The implementation procedure is shown in Figure 4.

2.2. Active Gravity Compensation

Active gravity compensation was achieved by the ATI force/torque sensor mounted between the end-effector of robot B and the target. Prior to contact with the gripper, the sensor measured the force and moment exerted by gravity as follows:
$\mathbf{F}_0 = \mathbf{T}\mathbf{W}$ (1)
$\mathbf{M}_0 = \mathbf{r}_0 \times (\mathbf{T}\mathbf{W})$ (2)
where $\mathbf{T}$ is the transformation matrix from the inertial frame to the local frame fixed to the sensor, $\mathbf{W}$ is the target’s weight vector in the inertial frame, and $\mathbf{r}_0$ is the position vector of the target’s center of gravity in the sensor’s local frame, as shown in Figure 7. Both $\mathbf{W}$ and $\mathbf{r}_0$ were known from the design of the target, while the transformation matrix $\mathbf{T}$ was determined from the kinematics of robot B based on the joint angle measurements from the robotic control system.
Upon contact between the target and the gripper, the force and moment measured by the ATI sensor changed to:
$\mathbf{F} = \mathbf{T}\mathbf{W} + \mathbf{F}_c$ (3)
$\mathbf{M} = \mathbf{r}_0 \times (\mathbf{T}\mathbf{W}) + \mathbf{r} \times \mathbf{F}_c$ (4)
where $\mathbf{F}_c$ is the true contact force acting on the target and $\mathbf{r}$ is the location of the contact point, which can be determined through computer vision.
From Equations (3) and (4), the contact force $\mathbf{F}_c$ and moment $\mathbf{M}_c$ acting at the center of mass of the free-floating target were calculated as:
$\mathbf{F}_c = \mathbf{F} - \mathbf{T}\mathbf{W}$ (5)
$\mathbf{M}_c = (\mathbf{r} - \mathbf{r}_0) \times \mathbf{F}_c = \mathbf{M} - \mathbf{r}_0 \times (\mathbf{T}\mathbf{W} + \mathbf{F}_c)$ (6)
Finally, $\mathbf{F}_c$ and $\mathbf{M}_c$ were input into the dynamic model of the target to calculate the motion caused by the contact disturbance. This disturbed motion was then mapped into the robotic joint angle commands, allowing robot B (holding the target) to mimic the motion of the target. Snapshots of the target’s motion with gravity compensation are shown in Figure 8. It should be noted that an external force, in lieu of a contact force, was applied in the test shown in Figure 8.
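As a concrete illustration, the following Python sketch implements Equations (5) and (6); the frames, numbers, and variable names in the usage example are illustrative assumptions rather than recorded testbed values.

```python
import numpy as np

def gravity_compensate(F_meas, M_meas, T, W, r0, r):
    """Recover net contact loads on the target from raw ATI readings,
    following Equations (5) and (6).
    F_meas, M_meas : measured force/torque in the sensor frame
    T  : transformation from the inertial frame to the sensor frame
    W  : target weight vector in the inertial frame
    r0 : target centre of gravity in the sensor frame
    r  : contact point location in the sensor frame (from vision)"""
    TW = T @ W
    Fc = F_meas - TW                       # Eq. (5)
    Mc = M_meas - np.cross(r0, TW + Fc)    # Eq. (6)
    return Fc, Mc

# Example with gravity only (no contact): recovered contact loads are zero.
T = np.eye(3)                              # illustrative alignment
W = np.array([0.0, 0.0, -7.3 * 9.81])      # 7.3 kg mock-up target
r0 = np.array([0.0, 0.0, 0.15])            # illustrative CG offset
F0, M0 = T @ W, np.cross(r0, T @ W)        # Eqs. (1) and (2)
print(gravity_compensate(F0, M0, T, W, r0, r=np.zeros(3)))  # (~0, ~0)
```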
The resultant contact force and torque at the gripper side, which were later input into the dynamic simulation of the free-floating motion of the space robot and its base spacecraft, could be obtained in two ways: taken from the target-side measurement based on Equations (5) and (6) via network communication, or calculated directly from the force and torque measured by the ATI sensor mounted between the gripper and robot A’s end-effector together with the data from the tactile sensors on the gripper fingers. At the moment of contact and during the post-contact phase, the configuration of the gripper fingers relative to the base of the gripper, and hence to the ATI sensor, was known. The tactile sensor data indicated which finger made contact. Based on these data, the resultant contact force and torque acting on the gripper could be calculated by subtracting the gravity-induced force and torque from the measurement.
While the gripper was moving, its motion could in principle disturb the measurements. However, after subtracting the orbital motion, the free-floating spacecraft moved slowly with minimal acceleration, so the resulting disturbance to the measurements was very low. Another source of disturbance in the ATI measurements was noise. The noise floor was calibrated in advance and then filtered out by the data acquisition algorithms.
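One simple possibility for such filtering, shown below as an assumption rather than the testbed’s actual algorithm, is a deadband that zeroes readings within the pre-calibrated noise floor.

```python
# Deadband filter: suppress readings whose magnitude is below the calibrated
# noise floor; pass larger readings through unchanged. Illustrative only.
import numpy as np

def deadband_filter(samples, noise_floor):
    """Zero out force/torque samples within the calibrated noise floor."""
    samples = np.asarray(samples, dtype=float)
    return np.where(np.abs(samples) > noise_floor, samples, 0.0)

# Example: a 0.5 N noise floor removes sensor chatter around zero.
print(deadband_filter([0.2, -0.4, 3.1, -2.7], noise_floor=0.5))
```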

2.3. Capturing Interface

To grasp a moving or potentially tumbling target attached to robot B in Figure 4, the Robotiq three-finger adaptive gripper was employed as the capturing interface. The gripper specifications are detailed in Table 1. Robotiq, the manufacturer of the adaptive gripper, is based in Lévis, Quebec, Canada.
The Robotiq three-finger adaptive gripper was selected for the following compelling reasons:
Flexibility: As an underactuated gripper, the Robotiq three-finger gripper adapts to the shape of the target being grasped, providing exceptional flexibility and reliability. This versatility makes it ideal for various applications, from grasping irregularly shaped targets to performing delicate manipulation tasks.
Repeatability: The gripper provides high repeatability with a precision better than 0.05 mm, making it well suited for tasks requiring precise grasping and manipulation. Some movements are shown in Figure 9.
Payload capacity: Capable of handling payloads up to 10 kg, the gripper is suitable for applications involving the manipulation of heavy targets.
Compatibility: The gripper is compatible with most industrial robots and supports control via EtherNet/IP, transmission control protocol/internet protocol (TCP/IP), or Modbus remote terminal unit (RTU), facilitating seamless integration with existing robotic systems (a hedged connection sketch follows at the end of this subsection).
Grasping modes: The gripper offers four pre-set grasping modes (scissor, wide, pinch, and basic), providing a wide range of grasping setups for different targets.
Grasping force and speed: With a maximum grasping force of 60 N and selectable grasping modes, the gripper allows users to choose the most appropriate settings for the specific task, ensuring safe and effective handling of targets of varying shapes and sizes.
Overall, the high level of flexibility, repeatability, and payload capacity of the Robotiq gripper makes it an ideal choice for the current application.
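The sketch below shows one plausible way to command the gripper from Python over Modbus TCP using the pymodbus library (import path per pymodbus 3.x). The IP address, register offset, and byte packing are illustrative placeholders; the real register map is documented by Robotiq and wrapped by the robotiq3f_py package [25].

```python
# Hedged sketch of a Modbus TCP gripper command; register layout is
# hypothetical and must be replaced with Robotiq's documented map.
from pymodbus.client import ModbusTcpClient

GRIPPER_IP = "192.168.1.11"   # placeholder address
ACTION_REGISTER = 0           # placeholder register offset

def close_gripper(position=255, speed=128, force=64):
    """Send a single activate-and-close request (illustrative encoding)."""
    client = ModbusTcpClient(GRIPPER_IP)
    client.connect()
    request = [
        0x0900,                   # hypothetical activate + go-to bits
        (position << 8) | speed,  # target position and speed bytes
        force << 8,               # grasping force byte
    ]
    client.write_registers(ACTION_REGISTER, request)
    client.close()
```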

2.4. Sensing System

The sensing system in the HIL testbed included two ATI force/torque sensors (ATI Industrial Automation, Apex, NC, USA), custom tactile sensors (Figure 10a) mounted on each link of the gripper’s fingers, and an Intel® RealSense™ depth camera D455 (Intel Corporation, Santa Clara, CA, USA). The tactile sensors, constructed from thin-film piezoresistive pressure sensors, were affixed to the finger links of the gripper. These sensors measured the normal contact forces between the gripper and the target, ensuring the safety of both components. To improve the accuracy of the sensor data, the sensors were incorporated into 3D-printed plastic adaptors that conformed to the contours of the fingers. These adaptors maintained consistent contact and ensured that forces exerted on the gripper’s fingers were accurately recorded and fed back into the control loop. This feedback was instrumental for controlling the fingers with greater precision and for improving the gripper’s ability to grasp the target securely.
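As an illustration of how such piezoresistive readings might enter the control loop, the sketch below converts raw ADC counts to an estimated normal force and flags which finger links are in contact; the calibration coefficients and threshold are invented for the example, as each physical pad would be calibrated against known loads.

```python
# Hypothetical tactile calibration: map raw ADC counts to normal force via
# a fitted polynomial, then report which finger links are in contact.
import numpy as np

CAL_COEFFS = [2.1e-6, 3.4e-3, 0.0]  # invented quadratic fit [N per count]

def tactile_force(adc_counts):
    """Convert one raw ADC reading into an estimated normal force [N]."""
    return float(np.polyval(CAL_COEFFS, adc_counts))

def links_in_contact(adc_readings, threshold_n=0.5):
    """Report which finger links currently press on the target."""
    return [i for i, c in enumerate(adc_readings)
            if tactile_force(c) > threshold_n]
```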
The Intel RealSense camera determined the target’s relative pose by photogrammetry or by AI-enhanced computer vision algorithms. This information was fed into the robotic control algorithm, which guided the robot to autonomously track and grasp the target. Furthermore, six high-resolution (2K) TV cameras were strategically positioned in the testing room to monitor the test from six angles and provide ground truth data (see Figure 11).

2.5. Mock-Up Target Satellite

The mock-up target used in this testbed was a scaled-down model of a typical satellite. Its physical appearance and dimensions were scaled down to fit the workspace and allow the computer vision algorithm to track its pose. The inertial properties of the scaled model did not impact the experimental hardware setup, provided they remained within the payload limits of the FANUC manipulator. The mock-up target was therefore a 30 cm cube made of 1/8 inch thick aluminum plates, bolted directly to the ATI force/torque sensor, which was in turn mounted on the end-effector of the robotic manipulator. To mimic the appearance of a real satellite for training the AI-enhanced computer vision algorithm, the cube was wrapped in thermal blankets that replicated the light-reflective properties of real satellites in space.
Additional components, including a dummy thrust nozzle, a coupling ring, and a solar panel, were attached to the mock satellite, as shown in Figure 12. The dummy thrust nozzle and coupling ring were 3D-printed from acrylonitrile butadiene styrene (ABS), and the panel (30 cm × 90 cm × 0.635 cm) was made from a polymethyl methacrylate sheet.
The total mass of the mock-up satellite was 7.3 kg, and its principal moments of inertia were [Ixx, Iyy, Izz] = [0.18, 0.18, 0.19] kg·m².
The illumination condition in the testing room was adjustable to simulate a space-like environment, such as using a single light source to mimic the sunlight. This setup was designed to train the AI-enhanced computer vision algorithm to accurately recognize the target’s pose in a simulated space environment.

2.6. Computer System

Two desktop PCs were employed to control the pair of FANUC robotic manipulators. The first PC controlled the robot interacting with the mock-up target. This computer executed open-loop forward control to move the mock-up target as if it were in orbit. During a capture event, the resultant contact forces, after compensating for gravity, were input into the simulation program to calculate the target’s disturbed motion in real time. The motion was then converted into robotic joint angle commands by inverse kinematics, which were transmitted to the robot’s control box to emulate the target as if it were in a microgravity environment. This task did not require high computing power; hence, a Dell XPS 8940 desktop PC (Dell Technologies, Round Rock, TX, USA) was employed.
The second PC was responsible for collecting measurement data from the camera, tactile sensors, and force/torque sensors, simulating the 6DOF motion of the space robot, and ensuring that the robotic manipulator and gripper replicated the gripper’s motion in a microgravity environment. In the event of a capture, this computer controlled the robotic manipulator and gripper to synchronize the gripper’s motion with the target. This process demanded high computing power; accordingly, a Lambda™ GPU desktop PC (deep learning workstation) was selected for this purpose. The integrated HIL testbed is shown in Figure 13.

3. Preliminary Experimental Results

This section details some preliminary experimental results obtained for each subsystem of the HIL testbed.

3.1. Multiple Angle Camera Fusion

Preliminary tests were conducted to establish ground truth data for capture validation using six TV cameras. As shown in Figure 11, the motions of the two robotic manipulators were recorded from six different angles, enabling comprehensive monitoring of the tracking, capture, and post-capture stabilization control.

3.2. Computer Vision

The AI-based YOLOv5, an open-source object detection algorithm created by Ultralytics (Frederick, MD, USA) [20], was employed to recognize the features of interest and then track the relative pose and motion of the target using the Intel depth camera input, as shown in Figure 10b. First, the algorithm was trained to identify the target by bounding key features (e.g., nozzle, coupling ring, solar panel) at different viewing angles and distances and under different illumination conditions, as illustrated in Figure 14A. Next, the algorithm was trained to estimate the target’s pose by analyzing these features.
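A minimal detection sketch using the public YOLOv5 torch.hub interface is shown below; the custom weights file and class names are assumptions standing in for the authors’ trained model.

```python
# Feature detection with YOLOv5 via torch.hub. The weights file
# 'target_features.pt' (trained on nozzle/coupling-ring/solar-panel images)
# is an assumed placeholder for the authors' trained model.
import torch

model = torch.hub.load("ultralytics/yolov5", "custom",
                       path="target_features.pt")

def detect_features(frame):
    """Run one RGB frame through YOLOv5; return labelled box centers."""
    results = model(frame)
    boxes = results.xyxy[0].cpu().numpy()  # [x1, y1, x2, y2, conf, class]
    return [
        (model.names[int(cls)], (x1 + x2) / 2, (y1 + y2) / 2, conf)
        for x1, y1, x2, y2, conf, cls in boxes
    ]
```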
To validate the vision system, a camera was positioned above the gripper in an eye-in-hand configuration, as shown in Figure 13. The target was then moved 0.5 m away from the camera along the camera’s optical axis, 0.5 m downward, and 0.5 m sideways. During this movement, the computer vision system continuously estimated and recorded the target’s pose in real time. In this test, the target underwent no rotational movement.
The target’s true position was determined by the FANUC robot controller. Figure 14B shows a comparison between the target’s position as determined by the FANUC robot and the YOLO algorithm. The results indicate that the position error was minimal when the target was moved away along the camera’s optical axis. However, as the target moved away from the camera’s optical axis, the error increased to 3 cm, representing approximately 6% of the displacement. This error arose because the nozzle appeared skewed in the camera’s view, causing the center of the bounding box (representing the nozzle’s position estimated by YOLO) to deviate from the actual nozzle center (see the magenta line in Figure 14B). Currently, the YOLO algorithm does not address this issue, highlighting the need for future research to minimize such errors.

3.3. Motion Equivalence

The 6DOF dynamic model of the free-floating space manipulator and the target in 3D space has been detailed in our previous works [21,22]. The model comprises a six-joint serial manipulator mounted on a 6DOF base spacecraft. Figure 15 shows a skeletal representation of the manipulator and the target in the simulation. Their motions in 3D space were simulated in MATLAB/Simulink based on the dynamic models in [21,22]. Only the poses of the end-effector and the target were extracted from the simulation and converted into joint commands for execution by the FANUC robots. These commands were continuously updated from the simulation at 0.004 s intervals to ensure that the FANUC robots accurately replicated the motions of the space manipulator’s end-effector and of the target. This motion equivalence concept is shown in Figure 16 and described below.
In this motion equivalence, the base frames of industrial robots A ({A}) and B ({B}) were fixed to the ground with respect to the inertial frame ({I}), while the base frame of the space manipulator ({S}) was free-floating with respect to the inertial frame ({I}).
The vector $\mathbf{r}_T$ represents the target pose in the inertial frame, while the vectors $\mathbf{r}_{EE}$ and $\mathbf{r}_S$ represent the poses of the space manipulator’s end-effector and floating base, respectively, in the inertial frame. The homogeneous transformation matrix between two frames is denoted by $\mathbf{T}$. The subscripts EE and T refer to the end-effector and target, respectively, while the subscripts A and B refer to the base frames of robots A and B.
The space manipulator’s motion was reproduced by industrial robot A, and the target’s motion was replicated by robot B. Once the path of the space manipulator’s gripper pose was generated through computer simulation, the gripper’s pose relative to frame {A} was calculated using Equation (7):
$\mathbf{T}_{EE}^{A} = (\mathbf{T}_A)^{-1}\mathbf{T}_{EE}$ (7)
Then, the desired joint angles for industrial robot A ($\Theta_A^d$) were calculated using its inverse kinematic equation, presented in Equation (8):
$\Theta_A^d = f_A^{-1}(\mathbf{T}_{EE}^{A})$ (8)
where $f_A$ is the direct kinematic equation of robot A, calculated using the Denavit–Hartenberg (DH) parameters of the FANUC M-20iD/25 robotic manipulator [18], as shown in Table 2.
For the target, its path was calculated in frame {B} in the simulation. The simulated path was realized by industrial robot B using Equation (9):
$\mathbf{T}_{T}^{B} = (\mathbf{T}_B)^{-1}\mathbf{T}_{T}$ (9)
Finally, the desired joint angles for industrial robot B ($\Theta_B^d$) were calculated by its inverse kinematic equation, as seen in Equation (10):
$\Theta_B^d = f_B^{-1}(\mathbf{T}_{T}^{B})$ (10)
where $f_B$ is the direct kinematic equation of robot B, calculated in a similar manner to that used for robot A.
The Denavit–Hartenberg (DH) parameters are a standardized set of four parameters used to describe the kinematic configuration of a robot manipulator, specifically modeling the relationship between consecutive joints and links [21]. For a manipulator with joints i = 1, …, n, the four parameters are: (i) the joint angle $\theta_i$, the angle between the $x_i$ and $x_{i+1}$ axes measured about the $z_i$ axis; (ii) the link length $a_i$, the distance between the axes of two consecutive joints along their common normal; (iii) the link offset $d_i$, the distance along the $z_i$ axis from the origin of one link frame to the next; and (iv) the link twist $\alpha_i$, the angle between the $z_i$ axis of the current joint and the $z_{i+1}$ axis of the next joint, measured about the $x_i$ axis.
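A minimal sketch of the direct kinematics $f_A$ built from these parameters is given below, assuming the standard DH convention and the (a, d, α) values of Table 2; the authors’ actual solver may use a different convention.

```python
import numpy as np

def dh_transform(theta, a, d, alpha):
    """Homogeneous transform between consecutive link frames from the four
    Denavit-Hartenberg parameters (standard convention assumed)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# (a, d, alpha) rows from Table 2 for the FANUC M-20iD/25.
DH_TABLE = [
    (0.0,   0.075, np.pi / 2),
    (0.0,   0.84,  np.pi),
    (0.0,   0.215, np.pi / 2),
    (-0.89, 0.0,   np.pi / 2),
    (0.0,   0.0,   np.pi / 2),
    (-0.09, 0.0,   np.pi),
]

def forward_kinematics(joint_angles):
    """Direct kinematics: chain the six link transforms to obtain the
    end-effector pose in the robot base frame (Equation (8) inverts this)."""
    T = np.eye(4)
    for theta, (a, d, alpha) in zip(joint_angles, DH_TABLE):
        T = T @ dh_transform(theta, a, d, alpha)
    return T
```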

3.4. Target Motion

The target motion simulation was based on the tumbling rates of the GOES-8 geostationary satellite [23] and Envisat [24]. Specifically, a drift rate of −0.01 m/s along the x-axis and tumbling rates of 0.28 deg/s, 2.8 deg/s, and 0.28 deg/s about the roll, pitch, and yaw axes of frame {B}, respectively, were used to model the target’s motion. This motion was then converted into joint angle commands by the inverse kinematics of robot B.
The kinematics and dynamics of the target were considered to accurately predict its tumbling motion in the simulation. Starting from the initial angular velocity, the subsequent motion was determined by integrating the dynamic and kinematic equations throughout the simulation. From the simulation, only the position and orientation data were used to calculate joint commands (angles) via inverse kinematics. These commands were executed by the FANUC robot to represent the target motion as if it were in space, and they were continuously updated from the simulation in real time so that the FANUC robot precisely replicated the target’s motion. Figure 17 shows the desired joint commands fed into the robotic control box and the actual joint positions achieved by the robot at each time step. The target successfully replicated the behavior of tumbling space debris. Snapshots of the target motion are shown in Figure 18.
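The torque-free tumbling used here can be sketched as the integration of Euler’s rigid-body equations together with quaternion attitude kinematics, using the inertia values of Section 2.5 and the initial rates above; the solver settings are illustrative.

```python
# Torque-free tumbling: Euler's rigid-body equations plus quaternion
# (x, y, z, w) attitude kinematics, integrated with SciPy.
import numpy as np
from scipy.integrate import solve_ivp

I = np.array([0.18, 0.18, 0.19])        # principal inertia [kg·m²]
w0 = np.deg2rad([0.28, 2.8, 0.28])      # roll/pitch/yaw rates [rad/s]
q0 = np.array([0.0, 0.0, 0.0, 1.0])     # initial attitude quaternion

def rigid_body(t, s):
    """State s = [wx, wy, wz, qx, qy, qz, qw]."""
    w, q = s[:3], s[3:]
    w_dot = -np.cross(w, I * w) / I     # Euler's equations, diagonal inertia
    wx, wy, wz = w
    Omega = np.array([[0.0,  wz, -wy,  wx],
                      [-wz, 0.0,  wx,  wy],
                      [ wy, -wx, 0.0,  wz],
                      [-wx, -wy, -wz, 0.0]])
    q_dot = 0.5 * Omega @ q             # quaternion kinematics
    return np.concatenate([w_dot, q_dot])

# Integrate one minute of motion, sampled on the 4 ms command grid; the
# -0.01 m/s x-axis drift is simply added to the position channel afterwards.
sol = solve_ivp(rigid_body, (0.0, 60.0), np.concatenate([w0, q0]),
                t_eval=np.arange(0.0, 60.0, 0.004), rtol=1e-9, atol=1e-12)
```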
Forces and torques measured by the ATI force/torque sensor were recorded during the tumbling motion of the target. Figure 19 illustrates the measured forces in the x, y, and z directions, while Figure 20 presents the measured torques about the x, y, and z axes. These measurements served as a baseline to account for the gravitational effects and were used to extract the true contact force when contact with the gripper occurred. Notably, the force and torque peaks observed in Figure 19 and Figure 20 were direct results of the discrete nature of the StreamMotion command updates provided by the FANUC protocol. As the control system updated the manipulator motion at discrete intervals, small but sudden changes in the target’s velocity and position led to transient peaks in the measured forces and torques. These peaks were approximately several newtons (N) in force and around one newton-meter (Nm) in torque. In the future, we will explore new control protocols from FANUC to smooth out these peaks.

3.5. Pre-Grasping Gripper Motion

After the computer vision system retrieved the target’s pose relative to the gripper, the gripper’s path was planned based on the capture criteria and then converted into joint angle commands for robot A, which maneuvered the gripper toward the target [25].
The gripper’s motion was controlled using position-based visual servoing. Once the target pose was estimated from the camera data, the relative pose error between the gripper (equipped with the eye-in-hand camera) and the target was computed. This error signal was fed into a control algorithm, which continuously adjusted the gripper’s movement to minimize the error. This process ensured that the gripper approached and aligned with the target precisely, compensating for any positional or angular deviations during the task.
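A minimal proportional position-based visual servoing step is sketched below; the gains and the axis-angle error extraction are illustrative, not the controller actually used on the testbed.

```python
# One proportional PBVS step: drive the gripper pose toward the target pose.
# Assumes both poses are expressed in the same frame; gains are illustrative.
import numpy as np

K_P, K_W = 0.5, 0.3  # translational / rotational gains [1/s]

def pbvs_step(p_grip, R_grip, p_target, R_target):
    """Return commanded linear and angular velocities from the pose error."""
    v_cmd = K_P * (p_target - p_grip)        # reduce position error
    R_err = R_target @ R_grip.T              # rotation still to perform
    # Axis-angle extraction from the error rotation matrix.
    angle = np.arccos(np.clip((np.trace(R_err) - 1.0) / 2.0, -1.0, 1.0))
    if abs(angle) < 1e-8:
        w_cmd = np.zeros(3)
    else:
        axis = np.array([R_err[2, 1] - R_err[1, 2],
                         R_err[0, 2] - R_err[2, 0],
                         R_err[1, 0] - R_err[0, 1]]) / (2.0 * np.sin(angle))
        w_cmd = K_W * angle * axis
    return v_cmd, w_cmd
```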
In this demonstration, the generalized Jacobian matrix was used to control the space manipulator in the simulation. However, in future experiments, the space manipulator will be controlled using neural networks [21,22].
Figure 21 shows the desired joint commands, derived from the motion equivalence, which were input into robot A to achieve the capture, alongside the actual joint positions achieved by robot A at each time step. Figure 22 shows the robot’s Cartesian positions. Snapshots of this motion are shown in Figure 23.
The gripper was guided by the computer vision system, which obtained the relative pose of the target’s capture feature (in this case, a nozzle) during the approach phase. The complete simultaneous capture motion is shown in Figure 24. Figure 25 shows the camera’s view of the target, the bounding box, and the estimated depth, with the numbers in white indicating the distance from the camera.
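For reference, the sketch below shows how a pixel (such as a bounding-box center) and its measured depth can be deprojected to a 3D point with the RealSense SDK; the stream settings are typical defaults, not necessarily those used in the experiments.

```python
# Deproject a pixel and its depth into a 3D camera-frame point with the
# RealSense SDK; stream settings are typical defaults, illustrative only.
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

def pixel_to_point(u, v):
    """Deproject pixel (u, v), e.g., a YOLO box center, to [x, y, z] in m."""
    frames = pipeline.wait_for_frames()
    depth_frame = frames.get_depth_frame()
    depth = depth_frame.get_distance(u, v)  # metres along the pixel ray
    intrin = depth_frame.profile.as_video_stream_profile().intrinsics
    return rs.rs2_deproject_pixel_to_point(intrin, [u, v], depth)
```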
As the gripper approached the target, the capture feature could move out of the camera’s field of view or be blocked by the gripper at close proximity, as shown in Figure 25. Currently, the controller relies on the last known pose estimate and continues moving toward that pose. Future work will involve integrating additional eye-to-hand cameras (see Figure 10), in addition to the eye-in-hand camera, to maintain continuous target tracking by computer vision.

4. Conclusions

This study described the development of a hardware-in-the-loop ground testbed featuring active gravity compensation via software-in-the-loop integration, specially designed to support research in autonomous robotic removal of space debris. Preliminary experimental results demonstrated the system’s capabilities in computer vision, path planning, force/torque sensing, and active gravity compensation via software-in-the-loop, enabling a fully zero-gravity-emulated target capture mission on Earth.

Author Contributions

Conceptualization, A.A.A., B.B. and Z.H.Z.; methodology, A.A.A., B.B. and Z.H.Z.; investigation, A.A.A. and B.B.; resources, Z.H.Z.; writing—original draft preparation, A.A.A., B.B. and Z.H.Z.; writing—review and editing, Z.H.Z.; visualization, A.A.A. and B.B.; supervision, Z.H.Z.; project administration, Z.H.Z.; funding acquisition, Z.H.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Canadian Space Agency, Flights and Fieldwork for the Advancement of Science and Technology grant number 19FAYORA14, by the Natural Sciences and Engineering Research Council of Canada, discovery grant number RGPIN-2024-06290, and by the Collaborative Research and Training Experience Program, grant number 555425-2021.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Flores-Abad, A.; Ma, O.; Pham, K.; Ulrich, S. A review of space robotics technologies for on-orbit servicing. Prog. Aerosp. Sci. 2014, 68, 1–26.
  2. Larouche, B.P.; Zhu, Z.H. Autonomous robotic capture of the non-cooperative target using visual servoing and motion predictive control. Auton. Robot. 2014, 37, 157–167.
  3. Available online: https://www.asc-csa.gc.ca/eng/blog/2021/04/16/canadarm2-celebrates-20-years-on-international-space-station.asp (accessed on 9 June 2023).
  4. Kessler, D.J.; Johnson, N.L.; Liou, J.C.; Matney, M. The Kessler syndrome: Implications to future space operations. Adv. Astronaut. Sci. 2010, 137, 47–62.
  5. Yang, F.; Dong, Z.H.; Ye, X. Research on equivalent tests of dynamics of on-orbit soft contact technology based on on-orbit experiment data. J. Phys. Conf. Ser. 2018, 1016, 012013.
  6. Santaguida, L.; Zhu, Z.H. Development of air-bearing microgravity testbed for autonomous spacecraft rendezvous and robotic capture control of a free-floating target. Acta Astronaut. 2023, 203, 319–328.
  7. Rybus, T. Obstacle avoidance in space robotics: Review of major challenges and proposed solutions. Prog. Aerosp. Sci. 2018, 101, 31–48.
  8. Jia, J.; Jia, Y.; Sun, S. Preliminary design and development of an active suspension gravity compensation system for ground verification. Mech. Mach. Theory 2018, 128, 492–507.
  9. Carignan, C.R.; Akin, D.L. The reaction stabilization of on-orbit robots. IEEE Control Syst. Mag. 2000, 20, 19–33.
  10. Zhu, Z.H.; Kang, J.; Bindra, U. Validation of CubeSat tether deployment system by ground and parabolic flight testing. Acta Astronaut. 2021, 185, 299–307.
  11. Xu, W.; Liang, B.; Xu, Y. Survey of modeling, planning, and ground verification of space robotic systems. Acta Astronaut. 2011, 68, 1629–1649.
  12. Xu, W.; Liang, B.; Xu, Y.; Li, C.; Qiang, W. A ground experiment system of free-floating robot for capturing space target. J. Intell. Robot. Syst. 2007, 48, 187–208.
  13. Boge, T.; Ma, O. Using advanced industrial robotics for spacecraft rendezvous and docking simulation. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 1–4.
  14. Rouleau, G.; Rekleitis, I.; L’Archeveque, R.; Martin, E.; Parsa, K.; Dupuis, E. Autonomous capture of a tumbling satellite. J. Field Robot. 2007, 24, 275–296.
  15. Liu, H.; Liang, B.; Wang, X.; Zhang, B. Autonomous path planning and experiment study of free-floating space robot for spinning satellite capturing. In Proceedings of the 2014 13th International Conference on Control Automation Robotics & Vision (ICARCV), Singapore, 10–12 December 2014; pp. 1573–1580.
  16. Qian, L.; Xiao, X.; Mou, F.; Wu, S.; Ma, W.; Hu, C. Study on a numerical simulation of a manipulator task verification facility system. In Proceedings of the 2018 IEEE International Conference on Mechatronics and Automation (ICMA), Changchun, China, 5–8 August 2018; pp. 2132–2137.
  17. Mou, F.; Xiao, X.; Zhang, T.; Liu, Q.; Li, D.; Hu, C.; Ma, W. A HIL simulation facility for task verification of the Chinese space station manipulator. In Proceedings of the 2018 IEEE International Conference on Mechatronics and Automation (ICMA), Changchun, China, 5–8 August 2018; pp. 2138–2144.
  18. Available online: https://www.fanucamerica.com/products/robots/series/m-20/m-20id-25 (accessed on 24 July 2023).
  19. Qi, C.; Li, D.; Ma, W.; Wei, Q.; Zhang, W.; Wang, W.; Hu, Y.; Gao, F. Distributed delay compensation for a hybrid simulation system of space manipulator capture. IEEE/ASME Trans. Mechatron. 2021, 27, 2367–2378.
  20. Jocher, G. YOLOv5, Version 7.0; Computer Software; Ultralytics: Frederick, MD, USA, 2022.
  21. Ali, A.A.; Shi, J.-F.; Zhu, Z.H. Path planning of 6-DOF free-floating space robotic manipulators using reinforcement learning. Acta Astronaut. 2024, 224, 367–378.
  22. Ali, A.A.; Zhu, Z.H. Reinforcement learning for path planning of free-floating space robotic manipulator with collision avoidance and observation noise. Front. Control Eng. 2024, 5, 1394668.
  23. Cognion, R.L. Rotation rates of inactive satellites near geosynchronous earth orbit. In Proceedings of the Advanced Maui Optical and Space Surveillance Technologies Conference, Maui, HI, USA, 9–12 September 2014; Maui Economic Development Board: Kihei, HI, USA, 2014.
  24. Kucharski, D.; Kirchner, G.; Koidl, F.; Fan, C.; Carman, R.; Moore, C.; Dmytrotsa, A.; Ploner, M.; Bianco, G.; Medvedskij, M.; et al. Attitude and spin period of space debris Envisat measured by satellite laser ranging. IEEE Trans. Geosci. Remote Sens. 2014, 52, 7651–7657.
  25. Beigomi, B. robotiq3f_py, Version 2.0.0; Computer Software. Available online: https://github.com/baha2r/robotiq3f_py (accessed on 24 July 2023).
Figure 1. Space robotic manipulator—Canadarm [3].
Figure 3. (a) Shenzhen Space Technology Center [12], (b) German Aerospace Center [13], (c) China Academy of Space Technology [15], and (d) Tsinghua University [16].
Figure 4. Schematic of dual-robot HIL testbed.
Figure 5. (A) FANUC manipulator, (B) robotic gripper, (C) ATI force/load sensor.
Figure 6. Flowchart illustrating how the simulated motion can be applied to real-world execution.
Figure 7. ATI sensor frame.
Figure 8. The target’s free-floating motion disturbed by an external force.
Figure 9. Robotiq three-finger gripper movement.
Figure 10. (a) Tactile sensor, (b) Intel camera.
Figure 11. Full-scale monitoring of all angles.
Figure 12. Mock-up satellite and components.
Figure 13. The 6DOF hardware-in-the-loop ground testbed.
Figure 14. (A) YOLO feature recognition, (B) AI computer vision for target tracking result.
Figure 15. Skeletal representation of the 6DOF simulation environment.
Figure 16. Motion equivalence.
Figure 17. Joint positions of robot B to deliver the target motion.
Figure 18. Mock-up satellite tumbling in space.
Figure 19. ATI force/load sensor: force values.
Figure 20. ATI force/load sensor: torque values.
Figure 21. Robot A/gripper joint positions.
Figure 22. Robot A/gripper Cartesian positions.
Figure 23. Gripper capture of target.
Figure 24. Full debris capture mission.
Figure 25. Camera’s field of view and bounding box during the pre-capture phase.
Table 1. Robotiq three-finger adaptive gripper specifications.
Gripper opening: 0–155 mm
Gripper weight: 2.3 kg
Object diameter for encompassing: 20–155 mm
Maximum recommended payload (encompassing grip): 10 kg
Maximum recommended payload (fingertip grip): 2.5 kg
Grip force (fingertip grip): 30–70 N
Table 2. FANUC industrial robot DH parameters [18].
Kinematics | θ [rad] | a [m] | d [m] | α [rad]
Joint 1 | θ1 | 0 | 0.075 | π/2
Joint 2 | θ2 | 0 | 0.84 | π
Joint 3 | θ3 | 0 | 0.215 | π/2
Joint 4 | θ4 | −0.89 | 0 | π/2
Joint 5 | θ5 | 0 | 0 | π/2
Joint 6 | θ6 | −0.09 | 0 | π