Article

Joint Iterative Satellite Pose Estimation and Particle Swarm Optimization

1
International Academy of Aviation Industry, King Mongkut’s Institute of Technology Ladkrabang, Bangkok 10520, Thailand
2
State Key Laboratory of Remote Sensing Science, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
3
National Astronomical Observatory of China, Chinese Academy of Sciences, Beijing 100101, China
4
Department of Geography, Faculty of Social Sciences, Srinakharinwirot University, Bangkok 10110, Thailand
5
School of Engineering, King Mongkut’s Institute of Technology Ladkrabang, Bangkok 10520, Thailand
*
Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(4), 2166; https://doi.org/10.3390/app15042166
Submission received: 7 January 2025 / Revised: 14 February 2025 / Accepted: 17 February 2025 / Published: 18 February 2025

Abstract
Satellite pose estimation (PE) is crucial for space missions and orbital maneuvering. High-accuracy satellite PE can reduce risk, enhance safety, and help autonomous systems achieve the objectives of close-proximity and docking operations by reducing the need for manual control in the future. This article presents a joint iterative satellite PE and particle swarm optimization (PE-PSO) method. The PE-PSO method uses the number of batches from the satellite PE process as the number of particles and keeps the number of PE epochs as the number of PSO epochs. The objective function of PSO is the training function of the implemented network; the output of the previous objective-function evaluation is used to update the particle positions, which serve as the inputs of the current training function. The PE-PSO method is tested on synthetic Soyuz satellite image datasets acquired from the Unreal Rendered Spacecrafts On-Orbit Datasets (URSOs) under different preset hyperparameters. The proposed method significantly reduces the incurred loss, especially during the batch-processing operation of each epoch. The results illustrate the accuracy improvement attained by the PE-PSO method over the epochs, while its time consumption does not differ noticeably from that of the conventional method. In addition, PE-PSO achieves better performance by reducing the mean position estimation error by 13.1% and the mean orientation estimation error on the testing dataset by 29.1% when starting from the pretrained weights of Common Objects in Context (COCO). PE-PSO also improves the accuracy obtained with the Soyuz_hard-based weights by 7.8% and 0.3% in terms of the mean position and mean orientation estimation errors, respectively.

1. Introduction

Satellite pose estimation (PE) plays a crucial role in space technology and is often applied in spacecraft formation flying (SFF) [1], close proximity operations [2], relative navigation [3], on-orbit maintenance (OOM) [4], on-orbit servicing (OOS) [5], and active space debris removal [6]. The precision of vision-based PE methods depends on several factors, including the solar illumination conditions and the diffuse reflection of the Earth’s background, which together form a complex spatial environment. Because small satellite volumes constrain the available lighting hardware and favor monocular cameras with low power consumption [7], space object sensing technology imposes lower payload requirements on satellites but is more often applied in special on-orbit environments. Vision-based navigation, which is built on RGB image detection technology, has advantages in terms of high accuracy and robustness and has recently attracted the attention of several scholars [8,9,10,11]. Most of the existing research has presented different methods or techniques with the same objective of enhancing PE performance. The challenge in this research problem is the lighting environment in space, a constraint that affects camera imaging and the relationship between the spatial position information of a satellite and its orientation characteristics. Therefore, some researchers are working on using different kinds of cameras for relative noncooperative space target PE; examples include visual cameras [12], infrared cameras [13], light detection and ranging (LiDAR) sensors [14], and time-of-flight (TOF) cameras [15,16]. Owing to the limited amount of light in space, active sensors are generally more robust than passive ones. Therefore, LiDAR and TOF cameras have attracted widespread attention from researchers. For example, Ref. [15] presented the projection model of a TOF camera and proposed a new calibration model.
Three modules, namely a TOF data processing module, a registration module, and a model mapping module, were applied to overcome the above problems with the smallest position and orientation errors.
However, vision-based navigation methods built on RGB images have increased in popularity since the development of artificial intelligence. Noncooperative object PE based on optical images can be described as a six-degree-of-freedom (6-DOF) PE problem in computer vision. The research conducted in [17,18] focused on applying key points for PE; their key-point detection methods translate information about the object of interest by utilizing perspective-n-point (PnP) and random sample consensus (RANSAC) algorithms. Additionally, the PnP and RANSAC frameworks have been implemented [19] to enhance satellite PE performance through a space target landmark regression network, which uses a proposed deep 1D landmark representation that encodes the horizontal and vertical pixel coordinates of a landmark as two independent 1D vectors. Ref. [20] demonstrated a binocular PE model based on a self-supervised transformer network (STN) by generating training samples under various conditions and then implementing a combination of the SIFT and a convolutional neural network (CNN) for each generated feature; the feedforward network in the transformer of their proposed method was replaced with a global average pooling layer. Another study implemented GoogLeNet with some modifications and, by deploying a novel loss function, achieved better performance than the existing approaches [10]. FilterformerPose [21] is a PE network that has a CNN as its backbone and feeds its features to a translation and orientation regression network; the authors devised a filter-based transformer encoder model and constructed a hypernetwork-like design based on a self-attention filtering mechanism. This approach can remove noise and generate adaptive weight information for the deep learning model. Ref. [22] also presented a deep learning approach for satellite PE under the condition of knowing a 3D model of the satellite. They proposed a CNN combined with 3D prior knowledge, expressed as a 3D model in the form of a point cloud, and used a special loss function more suitable for satellite PE tasks to attain desirable performance.
Many studies have examined the joint use of two techniques [23,24,25,26,27], which yields better results in every iteration and can improve the final accuracy of the combined approach. The benefit of this iterative technique is that the first process can obtain precise results before passing them to the second process, which means that the model parameters can be updated on the fly.
In the vision-based learning domain, many papers have discussed input image uncertainty [23,28,29,30,31]. Researchers have presented different ways to overcome such uncertainty or to perform weakly supervised object detection. For the first time, Ref. [23] proposed a joint or cross-task enforcement method between weakly supervised object detection and segmentation tasks with a multitask learning scheme; this approach uses the failure patterns of each sub-method to complement the learning process of the other. Refs. [28,29,30] presented multiple-instance learning (MIL) methods with additional techniques to improve the accuracy of weakly supervised detection. The intersecting-vectors method [31] uses binocular vision to reconstruct a straight line and, based on the direction vector and intersection of that line, solves the pose of the measured target in the measurement coordinate system to obtain the initial PE value.
Optimization algorithms have been widely used in several approaches. Ref. [32] introduces distributed optimization methods for multiagent systems by using partial information as an objective function, and the results show the effectiveness of the design. In the context of unmanned aerial vehicle (UAV) research, a gradient-free optimization algorithm has been implemented for the cooperative source-seeking problem using a group of quadrotors equipped with sensors for source scalar field measurement under limited communications [33].
Particle swarm optimization (PSO) was introduced in 1995 by Kennedy and Eberhart [34]. It has been implemented in several fields of study, including PE. Ref. [35] applied PSO to solve the ambiguity introduced by uncertainty in the locations of landmarks, on the basis of the assumption that each pose hypothesis is represented by a particle. The quaternion-based PSO (Q-PSO) approach was presented in [36]; it fits a 3D model of the target object to depth images. The fitness function of this method is based on depth information only, and the quaternion formulation avoids singularities and the need for conversions between different rotation representations. PSO has also been used to adjust the regularization and kernel parameters of a support vector machine [37] to improve the accuracy of facial detection. Ref. [38] applied PSO to impose kinematic constraints on the results of CNNs for hierarchical hybrid hand PE. PSO was also used to introduce a cooperative target for PE with monocular vision [39], whereby the mathematical relationship between the distributed configurations of cooperative targets and the measurement accuracy was derived on the basis of the PnP principle.
In the experiment conducted herein, a novel joint iterative PE and PSO method is introduced with the aim of improving the accuracy of satellite PE, which has great benefits for the development of space technology such as close proximity operations, active space debris removal, and on-orbit servicing. The main contributions of this work are as follows.
(1)
A new end-to-end satellite PE method is proposed; this approach jointly iterates between deep learning for the PE framework and PSO. This can improve the effectiveness of the model in terms of translating the position and orientation information of the target satellite;
(2)
The design of the position parameters of the PSO algorithm, which is effectively integrated with deep learning in the PE framework, reduces the error induced by complex input images, and improves the accuracy of the satellite pose inference process;
(3)
The proposed method is verified by testing it in different downstream computer vision tasks.
The rest of the paper is organized as follows. Section 2 presents the related works, briefly explaining the deep learning technique for estimating spacecraft poses from photorealistic renderings [11] and the PSO algorithm. Section 3 describes the utilized materials and the proposed method, namely the joint iterative method combining deep learning-based spacecraft PE from photorealistic renderings and PSO, for improving the accuracy of the constructed model. The results are illustrated in Section 4. Conclusions and suggestions for future work are presented in Section 5 and Section 6, respectively.

2. Related Works

2.1. Deep Learning-Based Spacecraft Pose Estimation from Photorealistic Renderings

The deep learning framework that is applied in the experiment of this work is derived from [11]. This deep learning framework involves end-to-end PE, the drawback of which is that it does not handle multiple objects concurrently. The end-to-end PE method involves feeding images into the model, which directly computes the pose information of interest. Another method is based on the key points of the target object. However, the end-to-end PE method should be employed whenever possible to mitigate the computational overhead and inference inefficiency of other approaches.
This deep learning framework performs pose estimation based on orientation soft classification, which enables modeling orientation ambiguity as a mixture of Gaussians, as proposed in [11], by performing continuous orientation estimation via classification with soft assignment coding [40]. The framework uses a ResNet architecture with pretrained weights as its network backbone. The last fully connected layer and the global average pooling layer of the conventional network were removed to retain spatial feature resolution, effectively leaving only one pooling layer in the second layer. The global pooling layer was replaced by one extra 3 × 3 convolution with a stride of 2 (a bottleneck layer) to compress the CNN features. For 3D location estimation, the framework adopts a simple regression branch with two fully connected layers. Its loss function, developed specifically for this framework, minimizes the relative error instead of the Euclidean distance. Other studies have presented mathematical models for the loss function of spacecraft PE [10,22]. This framework has also been studied for alternative modifications or adjustments [21] because of its low number of pooling layers and good accuracy–complexity tradeoff, which are the highlights of ResNet architectures.

2.2. Particle Swarm Optimization

PSO is a metaheuristic gradient-free optimization method based on the population dynamics of particles moving with velocities that depend on both the globally optimal and the particlewise optimal solutions. PSO was introduced by Kennedy and Eberhart [34] and has been shown to be effective in solving optimization problems, even in high-dimensional search spaces. Although PSO has been implemented in various studies, alternatives are still being developed to achieve higher performance; for example, researchers have modified the structure of PSO, such as its swarm size or topology, to construct improved approaches such as hybrid PSO methods [37,41,42,43].
The PSO algorithm is constructed from particles that simulate individuals in a group, each with two parameters, namely, a position and a velocity. Assuming that the particles have a dimensionality of N, the position of the i-th particle is denoted as X_i, and its velocity is V_i. The objective function is important in PSO, as it fits each particle and yields the corresponding fitness value. Let pbest_i denote the best position found so far by particle i, whose current position x_i represents the experience of the particle’s actions. gbest is defined as the best position found by all the particles in the entire group, i.e., the best position among the pbest values. PSO uses both pbest and gbest to determine the next action in the next iteration.
The PSO process begins with random solutions and then iterates to obtain the optimal solution. Each particle is updated under the conditions of pbest and gbest via Equations (1) and (2), as follows:
v_i = c_1 × rand(B) × (pbest_i − x_i) + c_2 × rand(B) × (gbest_i − x_i)    (1)
x_i = x_i + v_i    (2)
where i = 1, …, N, N is the total number of particles, and rand(B) is defined as a random-number function that returns random floats in the half-open interval [0.0, 1.0) with a size equal to the batch size of the input images (B) in the PE algorithm, which is implemented by the deep learning architecture in Section 2.1. c_1 is the cognitive (individual) learning factor, c_2 is the social (group) learning factor, and x_i is the current position. Equation (1) combines the attraction toward the best point of the particle itself and the best point of the group; this combination determines the next action. A better v_i, which also retains the magnitude and direction of the last velocity, is defined by Equation (3) as shown below:
v_i = ω × v_i + c_1 × rand(B) × (pbest_i − x_i) + c_2 × rand(B) × (gbest_i − x_i)    (3)
where ω is a nonnegative number that is used to define the weight of the PSO process, which affects the convergence speed and convergence precision of the PSO algorithm.
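As a concrete illustration, the particle update of Equations (1)–(3) can be sketched for a single particle as follows (a minimal pure-Python sketch; the coefficient values w, c1, and c2 are illustrative defaults, not values taken from this paper):

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO update for a single particle, following Equations (1)-(3).

    x, v, pbest, and gbest are lists of equal length (the particle
    dimension). Each dimension draws its own random factors, mirroring
    the elementwise rand() terms in the equations.
    """
    new_x, new_v = [], []
    for xj, vj, pj, gj in zip(x, v, pbest, gbest):
        r1, r2 = random.random(), random.random()  # rand() in [0.0, 1.0)
        # Equation (3): inertia + cognitive pull + social pull
        vj_new = w * vj + c1 * r1 * (pj - xj) + c2 * r2 * (gj - xj)
        new_v.append(vj_new)
        new_x.append(xj + vj_new)                  # Equation (2)
    return new_x, new_v
```

With c_1 = c_2 = 0 the update reduces to pure inertia, which is a quick way to sanity-check the implementation.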

3. Materials and Methods

3.1. Materials

The dataset employed in the current experiment is derived from the Unreal Rendered Spacecrafts On-Orbit Datasets (URSOs). An example of the Soyuz satellite is presented in Figure 1. This dataset consists of synthetic data from the Unreal Engine, which allows the addition of realistic lighting, shadows, and other features for image generation purposes. Figure 1b explains the axis notation for a moving spacecraft frame: green represents the Y-axis, red the X-axis, and blue the Z-axis. The URSO is not only realistic but also varied, covering a range of spacecraft such as space probes and satellites. The implemented dataset has a total of 5000 RGB images with resolutions of 1280 × 960 pixels and ground truths in the [x y z q1 q2 q3 q0] format for all images, where x, y, z are the magnitudes of the relative distance along the X-, Y-, and Z-axes, respectively. The orientation is presented as a quaternion, where q0 represents the real part and q1, q2, q3 are the components of the vector part. The PE process also uses the quaternion as an input. However, the final results need to be converted to Euler angles for ease of understanding, and the rotation sequence is an important part of such a calculation.
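For illustration, a ground-truth row in the [x y z q1 q2 q3 q0] format described above can be split into its position and quaternion parts as follows (a hypothetical helper written for this discussion; `parse_pose` is not part of the URSO release):

```python
def parse_pose(row):
    """Split a URSO ground-truth row [x, y, z, q1, q2, q3, q0] into a
    position tuple and a quaternion with the real (scalar) part first.

    The file stores the real part q0 last, so it is moved to the front
    to obtain the conventional (q0, q1, q2, q3) ordering.
    """
    x, y, z, q1, q2, q3, q0 = row
    position = (x, y, z)            # relative distance along X, Y, Z
    quaternion = (q0, q1, q2, q3)   # real part first, then vector part
    return position, quaternion
```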
The dataset possesses various position and orientation ranges with several background images, which is important for a deep learning method to learn features under various conditions. The Soyuz_easy data package is separated into 4000 training images, 500 testing images, and 500 evaluation images.

3.2. Joint Iterative Pose Estimation and Particle Swarm Optimization (PE-PSO) Method

A dynamic sample weighting strategy is one important concept that affects deep learning model performance [29]. The PE-PSO method adopts the concepts of dynamic sample weighting and joint iteration as used in [26]. The PSO algorithm is deployed to estimate the sample weights on the fly in every epoch of deep learning-based spacecraft pose estimation. The deep learning-based spacecraft PE method from photorealistic renderings described in Section 2.1 serves as both the pose estimation method (PE) and the objective function of the PSO algorithm, while it benefits from PSO by receiving updated sample weights in every epoch. This manner encourages an improvement of the final pose estimation model.
The proposed method, PE-PSO, iterates between the PE and PSO processes. For the optimization problem, a slack variable (ξ) is added to an inequality constraint to transform it into an equality constraint. Assuming that the batch size of the input training batch is i, the modification of the slack variable (ξ), which is applied as the position of the k-th particle (Position_k) in the PSO algorithm, is the following vector:
Position_k = [ξ_0  ξ_1  …  ξ_i]    (4)
where the subscript k denotes the particle index, whose maximum value is the number of batches in the deep learning-based PE scheme. This implies that particle k has a position Position_k whose members are modifications of the slack variable (ξ), equal in number to the input training batch size (i).
The workflow and pseudocode of the PE-PSO method are presented in Figure 2 and Algorithm 1, respectively. In Figure 2, Position_k is updated via the PSO process, which is depicted by the red dotted line. The objective function for PSO is defined as the PE training function explained in Section 2.1, which returns the position error (Error_{position,k}), orientation error (Error_{orientation,k}), and total error (Error_{total,k}). These errors are expressed as equations in the accuracy assessment section. The loss Loss_k implemented in PSO is the average of these three errors, defined as follows:
Loss_k = (Error_{total,k} + Error_{position,k} + Error_{orientation,k}) / 3    (5)
In the experiment, the PE process is trained with an input consisting of a batch of four images ( B = 4 ) concatenated with the positions of the particles ( P o s i t i o n k ). Thus, the input for each processing iteration during batch training is expressed as follows:
input = [batch_images  Position_k]^T    (6)
In every iteration of PE, PSO recalculates the values contained in Position_k, which are part of the input. This concept is similar to the sample weight used in a general deep learning algorithm. However, unlike a predefined sample weight that is fixed before PE processing, Position_k is calculated on the fly in every epoch based on the PSO concepts and processes. Therefore, these values change to track the minimum loss of the objective function at every time step.
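The concatenation in Equation (6) can be sketched as follows (a pure-Python illustration with flattened image vectors; `build_input` is a hypothetical helper, and the real pipeline operates on image tensors):

```python
def build_input(batch_images, position_k):
    """Form the PE training input of Equation (6).

    Each of the B samples in the batch is paired with its entry of
    Position_k, so every sample carries the slack value recalculated
    by PSO on the fly. batch_images is a list of B flattened image
    vectors; shapes here are illustrative only.
    """
    # One slack-variable entry per sample in the batch
    assert len(batch_images) == len(position_k)
    return [img + [xi] for img, xi in zip(batch_images, position_k)]
```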
The velocity of a particle is computed via the following equation:
Velocity_{k,t} = ω × Velocity_{k,t−1} + c_1 r_1 (pbest_{k,t} − Position_{k,t−1}) + c_2 r_2 (gbest_{k,t} − Position_{k,t−1})    (7)
where c_1 is the local learning coefficient, c_2 is the global learning coefficient, and k and t indicate the particle of interest and the epoch, respectively. The weight (ω) is an important parameter for governing motion during PSO: it controls how much of the particle’s previous velocity is retained in the current velocity. A larger weight (ω) encourages global exploration, whereas a smaller one promotes local exploitation. Typical values of ω range between 0.4 and 0.9; however, it can also be dynamically adjusted during the optimization process to balance global and local exploration [44]. In this experiment, varied weights (ω) are tested, as presented in Section 4, to seek a suitable value. r_1 and r_2 are random matrices whose sizes equal the batch size (i). pbest_{k,t} is the local best value, whereas gbest_{k,t} is the global best value.
The velocity conditions of the population or a particle are as follows:
Velocity_max = 0.1 (UB − LB),  Velocity_min = −Velocity_max    (8)
where U B and L B are the upper bound and lower bound of the velocity, which are set as 20 and −20, respectively, in the current experiment.
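The velocity limits of Equation (8) amount to elementwise clamping, which might be implemented as follows (a sketch using the bounds stated above; the function name is illustrative):

```python
def clamp_velocity(v, lb=-20.0, ub=20.0):
    """Apply the velocity limits of Equation (8).

    Velocity_max = 0.1 * (UB - LB) and Velocity_min = -Velocity_max;
    with UB = 20 and LB = -20 as in the experiment, the limits are +/-4.
    """
    v_max = 0.1 * (ub - lb)   # 4.0 for the bounds used here
    v_min = -v_max
    return [min(max(vj, v_min), v_max) for vj in v]
```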
Algorithm 1 and the workflow of the proposed method (shown in Figure 2) clearly depict the above processes. The proposed method begins with initial parameters for PE, such as the learning rate, number of epochs, number of batches, and batch size. PSO also has initial hyperparameters, namely, the positions and weights of the particles, the inertia weight damping ratio, the local learning coefficient, the global learning coefficient, and the lower and upper bounds of the population velocity. The key parameters of the joint (PE-PSO) method are the numbers of PE and PSO epochs. Moreover, it is important to set the population size, or swarm size, equal to the number of batches. In each epoch, PSO computes the velocity via Equation (7) under the constraint imposed by Equation (8) and updates the corresponding position. The updated position is then applied as part of the input (as in Equation (6)) of the objective function to train the PE model of the spacecraft. The loss value is calculated via Equation (5) as the average of all losses involved in the training process, i.e., the total loss, location loss, and orientation loss. Consequently, the algorithm checks the local (personal) and global best losses and updates them whenever the new loss is lower than the previously stored value. The weight (ω) is adjusted by the inertia weight damping ratio (ω_damp) in every epoch. The PE-PSO process continues until the maximum number of epochs is reached.
Algorithm 1: The PE-PSO method
Input: number of epochs, number of batches, batch size (i), weight (ω), inertia weight damping ratio (ω_damp), local learning coefficient (c_1), global learning coefficient (c_2), lower bound (LB) and upper bound (UB) of the population velocity, learning rate.
Output: The PE-PSO model.
for t < epochs:
  for k < number of batches:
    • Prepare the input and build batch logs for PE;
    • Calculate the particle velocity (Velocity_k) via Equation (7);
    • Check for and apply velocity limits via Equation (8);
    • Update the position of the particle (Position_k);
    • Compute Loss_k via Equation (5);
    • Check and update the positions of the local and global best particles.
  Update the weight as ω = ω × ω_damp
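Under the stated assumptions, the loop of Algorithm 1 can be sketched in Python with a generic `objective` standing in for the PE training function of Section 2.1 (all coefficient values here are illustrative; in the real method, each objective evaluation trains the deep network on a batch and returns the averaged loss of Equation (5)):

```python
import random

def pe_pso(objective, n_batches, dim, epochs,
           w=0.5, w_damp=0.5, c1=1.5, c2=1.5, lb=-20.0, ub=20.0):
    """Sketch of Algorithm 1: swarm size equals the number of batches,
    and each particle's position plays the role of the slack-variable
    vector of Equation (4)."""
    v_max = 0.1 * (ub - lb)            # velocity limits, Equation (8)
    v_min = -v_max
    pos = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(n_batches)]
    vel = [[0.0] * dim for _ in range(n_batches)]
    pbest = [p[:] for p in pos]
    pbest_loss = [objective(p) for p in pos]
    g = min(range(n_batches), key=pbest_loss.__getitem__)
    gbest, gbest_loss = pbest[g][:], pbest_loss[g]
    for _t in range(epochs):
        for k in range(n_batches):
            for j in range(dim):
                r1, r2 = random.random(), random.random()
                vel[k][j] = (w * vel[k][j]                     # Equation (7)
                             + c1 * r1 * (pbest[k][j] - pos[k][j])
                             + c2 * r2 * (gbest[j] - pos[k][j]))
                vel[k][j] = min(max(vel[k][j], v_min), v_max)  # Equation (8)
                pos[k][j] += vel[k][j]
            loss = objective(pos[k])   # PE training step in the real method
            if loss < pbest_loss[k]:   # update local best
                pbest[k], pbest_loss[k] = pos[k][:], loss
                if loss < gbest_loss:  # update global best
                    gbest, gbest_loss = pos[k][:], loss
        w *= w_damp                    # inertia weight damping
    return gbest, gbest_loss
```

Replacing `objective` with a cheap test function (e.g., a sum of squares) is a convenient way to verify the swarm logic before attaching the expensive PE training step.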

3.3. Accuracy Assessment

Both the conventional model and the proposed model produce numerical outputs as their spacecraft poses (Poses), which are given as seven-column datasets expressed as follows:
Poses = [x, y, z, q1, q2, q3, q0]    (9)
This vector is separated into two parts, namely, a position part ([x, y, z]) and an attitude or orientation part ([q0, q1, q2, q3]) that is expressed as a quaternion. x, y, z represent the magnitudes along the X-, Y-, and Z-axes, respectively. Regarding the orientation, q0 is the real part of the quaternion, and q1, q2, q3 are the components of the vector part.
The error computation is also separated into position error and orientation error components, which are calculated via the Euclidean distance measure, as described in the following equations.
Error_position = ‖x_i − x_gt‖    (10)
Error_orientation = |q_i − q_gt|    (11)
where x_i represents the position part of the spacecraft pose (Poses) and x_gt is the corresponding ground truth. This corresponds to the formula used to calculate the orientation error, where q_i is the output predicted by the PE model and q_gt is the ground truth for the orientation part. The combination of Error_position and Error_orientation is the total error (Error_total), represented in the following equation:
Error_total = Error_position + Error_orientation    (12)
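The three error terms of Equations (10)–(12) can be computed from a predicted and a ground-truth pose vector as follows (a sketch that reads Equation (11) as the norm of the quaternion difference; the paper does not spell out its exact implementation):

```python
import math

def pose_errors(pred, gt):
    """Position, orientation, and total errors per Equations (10)-(12).

    pred and gt are 7-vectors in the [x, y, z, q1, q2, q3, q0] format.
    The position error is the Euclidean distance between the position
    parts; the orientation error is the Euclidean norm of the
    elementwise quaternion difference.
    """
    e_pos = math.dist(pred[:3], gt[:3])                                   # Eq. (10)
    e_ori = math.sqrt(sum((p - g) ** 2 for p, g in zip(pred[3:], gt[3:])))  # Eq. (11)
    return e_pos, e_ori, e_pos + e_ori                                    # Eq. (12)
```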
These three kinds of errors are used when training the PE model with both the conventional and the proposed methods. Moreover, the position error is reported in meters, whereas for the orientation error, the quaternions are transformed to Euler angles in the aerospace sequence for model performance representation purposes.
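Since the orientation errors are reported in Euler angles, a standard quaternion-to-Euler conversion in the aerospace (ZYX, yaw–pitch–roll) sequence might look as follows (a textbook formula; the authors’ exact conversion routine is not given in the paper):

```python
import math

def quat_to_euler_zyx(q0, q1, q2, q3):
    """Convert a unit quaternion (real part q0) to Euler angles, in
    degrees, in the aerospace ZYX (yaw-pitch-roll) sequence.

    The asin argument is clamped to [-1, 1] to guard against rounding
    near the pitch singularity.
    """
    roll = math.atan2(2 * (q0 * q1 + q2 * q3), 1 - 2 * (q1 * q1 + q2 * q2))
    pitch = math.asin(max(-1.0, min(1.0, 2 * (q0 * q2 - q3 * q1))))
    yaw = math.atan2(2 * (q0 * q3 + q1 * q2), 1 - 2 * (q2 * q2 + q3 * q3))
    return tuple(math.degrees(a) for a in (yaw, pitch, roll))
```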

4. Results

4.1. Training Results

First, the experiment runs for one epoch and 100 batches to evaluate the trend of the PE-PSO method. Figure 3a, Figure 3b and Figure 3c show the Error_total, Error_position, and Error_orientation values produced during the first 100 batches of training in the first iteration, respectively. The conventional method and the PE-PSO method are compared with ω = 1 and ω_damp = 0.9 and the Common Objects in Context (COCO) weight, with the ResNet50 architecture used as the backbone. At the beginning, the PE-PSO method yields high values for every error metric. However, these errors gradually decrease as the number of batches increases. Therefore, the hypothesis is that the PSO algorithm can work jointly with the PE process.
In the next experiments, five test cases are implemented, which are summarized as shown in Table 1.
The weight (ω) and weight damping ratio (ω_damp) are deployed to conduct the PSO processes, with different values used for each test case. Test case number 1 is the conventional method, which runs without the joint iterative PSO algorithm. Test cases 2 to 5 are joint processing methods with the PSO algorithm that apply a ω_damp of 0.5 and vary the weight (ω) over 0.5, 0.3, 0.2, and 0.1, respectively. All the experiments are implemented with 100 epochs, each epoch includes 1000 training batches, the ResNet50 structure is used as the backbone, and the model runs on a desktop computer equipped with an NVIDIA GeForce RTX 4070 GPU possessing 16 GB of VRAM.
The experiment is performed with two types of pretrained weights: (1) the COCO weight and (2) the Soyuz_hard weight [11]. Error_position, Error_orientation, and Error_total, which are calculated via Equations (10), (11), and (12), respectively, are presented in Figure 4. The results obtained under different training conditions exhibit different performances. For the COCO pretrained weight, the Error_total of the conventional method is worse than those of the other methods, whereas PE-PSO with ω = 0.3 and ω_damp = 0.5 performs best, as shown in Figure 4a. Figure 4c shows the Error_position results. Although the graph does not clearly verify that the PE-PSO method outperforms the original method, Table 2 presents a slight improvement. Corresponding to Error_total in Figure 4e, the PE-PSO method with ω = 0.3 and ω_damp = 0.5 provides the best Error_orientation results, whereas the conventional method has the highest orientation error.
Table 2 and Table 3 present the average accuracies achieved over 100 epochs of the training process with the COCO pretrained weight and the Soyuz_hard pretrained weight, respectively. Test cases 2 to 5, which are joint processes with the PSO algorithm, tend to yield better results than the conventional method does for both pretrained weights. For the COCO pretrained weight, test case number 3, with a weight (ω) of 0.3, performs best in every error aspect, with a decrease of 3.6% in Error_total. Under the Soyuz_hard pretrained weight, test case number 4 attains the optimal performance for Error_total and Error_orientation. The proposed method outperforms the conventional method by 0.1% in terms of both types of error, whereas test case number 3 improves Error_position by 3.3% over that of the conventional method. Although Figure 4b, Figure 4d and Figure 4f, which plot the Error_total, Error_position, and Error_orientation induced with the Soyuz_hard pretrained weight, respectively, do not clearly reveal this outstanding performance, the improvement is reflected by the numerical values in Table 3, as described above. In addition, the computational times of the conventional method and the PE-PSO method do not differ noticeably, as detailed in Table 2 and Table 3.
Table 2 shows that there is no clear pattern for setting the optimal weight (ω) and inertia weight damping ratio (ω_damp). Large or small parameter values do not always result in the best performance. The results depend on proper values, which are affected by several factors, such as the dataset and the hyperparameters.
Moreover, the mean and standard deviation (σ) of each modified element of the slack variable (ξ) mentioned in Equation (4) are examined, as presented in Table 4 and Figure 5, by implementing test case 3. In Figure 5, batch numbers 400, 600, and 800 are chosen for plotting the behavior of the modified slack variables. The Y-axis represents the modifications of the slack variables ξ1, ξ2, ξ3, and ξ4 in Figure 5a, Figure 5b, Figure 5c, and Figure 5d, respectively, while the X-axis represents the number of epochs. The figure clearly shows that, as training progresses, the values of these variables stabilize and vary less than at the beginning of the iteration process. The behavior in Figure 5 corresponds to the standard deviation values (σ) in Table 4: σ decreases with time, and the smallest value is obtained at epoch 100, the last epoch of the experiment. This finding supports the higher accuracy attained in the final case. However, the standard deviation (σ) of the first slack variable element (ξ1) does not comply with this assumption because the number of iterations is not sufficient. The tradeoff between iteration time and accuracy must therefore be defined carefully to attain appropriate performance.
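The per-epoch statistics in Table 4 are simple summary statistics over one slack element's per-batch modifications within an epoch. A minimal sketch (with synthetic histories; the variable names and values are illustrative, not the paper's data):

```python
import statistics

def slack_stats(modifications):
    """Mean and sample standard deviation of one slack element's
    per-batch modifications within a single epoch."""
    return statistics.mean(modifications), statistics.stdev(modifications)

# Synthetic histories: the batch-to-batch spread shrinks from epoch 1
# to epoch 100, mirroring the decreasing sigma reported in Table 4.
epoch_1 = [0.9, 8.1, -6.2, 5.3]
epoch_100 = [-1.1, -0.9, -1.0, -1.1]
print(slack_stats(epoch_1))
print(slack_stats(epoch_100))
```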

4.2. Testing Results

In the testing results, the mean position error is converted to meters, whereas the orientation error is converted to Euler angles for ease of understanding and comparison [11]. Table 5 and Table 6 present the testing results obtained by evaluating 500 images with the COCO pretrained weight and the Soyuz_hard pretrained weight, respectively. The testing results are reported as mean position estimation errors and mean orientation estimation errors. The values in the tables indicate that test case 3, the PE-PSO method with a weight (ω) of 0.3 and a weight damping ratio (ω_damp) of 0.5, achieves the best performance. It reduces the position error from 1.3053 to 1.1346, an improvement of 13.1%, and improves the mean orientation estimation error under the COCO pretrained weight by 29.1%. Moreover, the PE-PSO method enhances the performance achieved with the Soyuz_hard pretrained weight by reducing the mean position estimation error and the mean orientation estimation error by 7.8% and 0.3%, respectively. The values listed in Table 5 and Table 6 confirm that the PE-PSO method can improve the accuracy of satellite PE whenever it is deployed with proper weight (ω) and weight damping ratio (ω_damp) values.
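As a sanity check, the reported percentages follow from the tabulated errors as a relative reduction (a sketch; the rounding convention to one decimal place is an assumption):

```python
def improvement(baseline, proposed):
    """Relative error reduction of the proposed method, in percent."""
    return 100.0 * (baseline - proposed) / baseline

# Mean position error under the COCO pretrained weight (Table 5):
print(round(improvement(1.3053, 1.1346), 1))  # 13.1
# Mean position error under the Soyuz_hard pretrained weight (Table 6):
print(round(improvement(0.6950, 0.6410), 1))  # 7.8
```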

4.3. Robustness Analysis

K-fold cross-validation is implemented on the Soyuz_easy dataset, with 1000 randomly selected samples, to assess the robustness of PE-PSO against the conventional method. The selected dataset is split into 10 groups of 100 samples each, and the ten groups are deployed in a cross-validation scheme. If K denotes the index of the testing group, with the remaining groups serving as the training dataset, then K ranges from 1 to 10. The COCO pretrained weight is used for the robustness analysis, with a weight (ω) of 0.3 and a weight damping ratio (ω_damp) of 0.5, which gave the best performance in the previous section. The robustness analysis is performed over 30 epochs, with the results presented in Table 7, Table 8 and Table 9. Although the orientation and total errors, presented in Table 8 and Table 9, do not show a significant improvement, the PE-PSO method demonstrates a reduction in position error: the mean position error across all 10 test cases is reduced by 1.5%, with the largest reduction observed at K = 8, where it is mitigated by 2.8%. Moreover, the overall trend of the robustness analysis favors the PE-PSO method, particularly with further iterations, as highlighted in the previous section.
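The 10-fold protocol described above can be sketched as follows (a minimal illustration; the shuffling seed and index handling are assumptions, not taken from the paper's code):

```python
import random

def kfold_split(n_samples, k_folds, seed=0):
    """Shuffle sample indices and cut them into k equally sized folds."""
    indices = list(range(n_samples))
    random.Random(seed).shuffle(indices)
    size = n_samples // k_folds
    return [indices[i * size:(i + 1) * size] for i in range(k_folds)]

folds = kfold_split(1000, 10)
for k in range(10):                                    # K = 1..10 in the paper
    test_idx = folds[k]                                # 100 test samples
    train_idx = [i for j, f in enumerate(folds) if j != k for i in f]
    assert len(test_idx) == 100 and len(train_idx) == 900
```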

5. Conclusions

The vision-based navigation field has advanced with research on and development of artificial intelligence, which can be applied to space technology. For example, vision-based navigation is applied to satellite PE because of the high precision required by close-proximity, on-orbit servicing, and active space debris removal operations. Two types of vision-based approaches are available: key point-based estimation and end-to-end estimation. Key point-based satellite PE extracts key points from images via different techniques and then inputs them into a deep learning model to conduct the PE process. The drawback of this method is that its accuracy depends on the performance of the implemented key-point extraction technique, while its advantage is a reduction in the required processing time. In end-to-end satellite PE, the given RGB images are directly input into a deep learning model for the PE process, so interference from an extraction method is avoided because the original input source is used. However, this method requires more computational time than the first approach and imposes a substantial hardware overhead.
PE-PSO, a joint iterative deep learning-based PE and PSO method, is proposed as a novel method whose objective is to improve the accuracy of satellite PE. The PE-PSO method adds slack variable modifications to the input data contained in every training batch of the PE algorithm. The modified slack variable is sized to match the batch size implemented in the PE algorithm and is updated on the fly based on the PSO algorithm.
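The on-the-fly update follows the standard PSO equations of Kennedy and Eberhart, with each particle corresponding to one batch's slack variable and the network's training loss acting as the objective. A minimal sketch (the acceleration coefficients c1 = c2 = 2.0 and the scalar, list-based representation are illustrative assumptions, not the paper's implementation):

```python
import random

def pso_step(positions, velocities, pbest, gbest, omega, c1=2.0, c2=2.0, rng=None):
    """One standard PSO update: the new velocity mixes inertia, attraction
    to each particle's personal best, and attraction to the global best."""
    rng = rng or random.Random(0)
    new_pos, new_vel = [], []
    for x, v, pb in zip(positions, velocities, pbest):
        r1, r2 = rng.random(), rng.random()
        v_new = omega * v + c1 * r1 * (pb - x) + c2 * r2 * (gbest - x)
        new_vel.append(v_new)
        new_pos.append(x + v_new)
    return new_pos, new_vel

# A particle already sitting at both its personal and the global best stays
# put; the other particle is pulled toward the best-known solutions.
pos, vel = pso_step([1.0, 4.0], [0.0, 0.0], [1.0, 2.0], gbest=1.0, omega=0.3)
```

After each epoch the inertia weight ω is multiplied by the damping ratio ω_damp, which is why the slack variable modifications in Figure 5 settle as training progresses.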
The results are presented in terms of the error reduction achieved in each epoch. In this experiment, three types of errors are analyzed: the position estimation error, the orientation estimation error, and the total estimation error, which combines the position and orientation errors. PE-PSO performs well by mitigating the mean position estimation error and the mean orientation estimation error on the testing dataset by 13.1% and 29.1%, respectively, based on the COCO pretrained weight. With respect to the Soyuz_hard pretrained weight, PE-PSO achieves 7.8% and 0.3% better mean position estimation error and mean orientation estimation error reductions, respectively, than the conventional method does. The PE-PSO process depends on the weight and weight damping ratio, which are variables of the PSO algorithm; therefore, it is important to select proper values for these parameters.

6. Future Work

This research concentrates on an optimal method for achieving precise satellite PE. Future work will expand in two directions. First, the satellite PE method will be tested in other environments, such as the Moon, to benefit future space exploration endeavors. Second, satellite movements will be predicted based on satellite PE; for example, an alternative state estimation method could use the PE-PSO method as a measurement model for the position and orientation of a satellite, which would be useful for satellite operations.

Author Contributions

Conceptualization, P.K. and C.C.; methodology, P.K.; software, P.K.; validation, P.K.; formal analysis, P.K.; investigation, P.K.; resources, P.K.; data curation, P.K.; writing—original draft preparation, P.K., W.B., L.T. and P.B.; writing—review and editing, P.K.; visualization, P.K.; supervision, P.K., C.C., Y.Z. and P.B.; project administration, P.K.; funding acquisition, P.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research is the result of the project entitled “Impact of air traffic on air quality by Artificial Intelligent Grant No. RE-KRIS/FF67/012” by King Mongkut’s Institute of Technology Ladkrabang, which has received funding support from the NSRF.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Acknowledgments

The authors would like to acknowledge the administrative and technical support received from Satang Space Company Limited, the Air-Space Control Optimization and Management Laboratory (ASCOM-LAB), the International Academy of Aviation Industry, and King Mongkut’s Institute of Technology Ladkrabang, which contributed suggestions and infrastructure during the experiments.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Dony, N.-A.; Dong, W. Distributed Robust Formation Flying and Attitude Synchronization of Spacecraft. J. Aerosp. Eng. 2021, 34, 04021015. [Google Scholar] [CrossRef]
  2. Pasqualetto Cassinis, L.; Fonod, R.; Gill, E.; Ahrns, I.; Fernandez, J. CNN-Based Pose Estimation System for Close-Proximity Operations Around Uncooperative Spacecraft. In Proceedings of the AIAA Scitech 2020 Forum, Orlando, FL, USA, 6–10 January 2020. [Google Scholar] [CrossRef]
  3. Xu, J.; Song, B.; Yang, X.; Nan, X. An Improved Deep Keypoint Detection Network for Space Targets Pose Estimation. Remote Sens. 2020, 12, 3857. [Google Scholar] [CrossRef]
  4. Liu, X.; Wang, H.; Chen, X.; Chen, W.; Xie, Z. Position Awareness Network for Noncooperative Spacecraft Pose Estimation Based on Point Cloud. IEEE Trans. Aerosp. Electron. Syst. 2023, 59, 507–518. [Google Scholar] [CrossRef]
  5. Li, Y.; Zhang, A. Observability analysis and autonomous navigation for two satellites with relative position measurements. Acta Astronaut. 2019, 163, 77–86. [Google Scholar] [CrossRef]
  6. De Jongh, W.C.; Jordaan, H.W.; Van Daalen, C.E. Experiment for pose estimation of uncooperative space debris using stereo vision. Acta Astronaut. 2020, 168, 164–173. [Google Scholar] [CrossRef]
  7. Opromolla, R.; Vela, C.; Nocerino, A.; Lombardi, C. Monocular-Based Pose Estimation Based on Fiducial Markers for Space Robotic Capture Operations in GEO. Remote Sens. 2022, 14, 4483. [Google Scholar] [CrossRef]
  8. Bechini, M.; Lavagna, M.; Lunghi, P. Dataset generation and validation for spacecraft pose estimation via monocular images processing. Acta Astronaut. 2023, 204, 358–369. [Google Scholar] [CrossRef]
  9. Park, T.; Märtens, M.; Lecuyer, G.; Izzo, D.; D’Amico, S. SPEED+: Next Generation Dataset for Spacecraft Pose Estimation across Domain Gap. In Proceedings of the 2022 IEEE Aerospace Conference (AERO), Big Sky, MT, USA, 5–12 March 2022. [Google Scholar] [CrossRef]
  10. Phisannupawong, T.; Kamsing, P.; Torteeka, P.; Channumsin, S.; Sawangwit, U.; Hematulin, W.; Jarawan, T.; Somjit, T.; Yooyen, S.; Delahaye, D.; et al. Vision-Based Spacecraft Pose Estimation via a Deep Convolutional Neural Network for Noncooperative Docking Operations. Aerospace 2020, 7, 126. [Google Scholar] [CrossRef]
  11. Proença, P.F.; Gao, Y. Deep Learning for Spacecraft Pose Estimation from Photorealistic Rendering. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 6007–6013. [Google Scholar]
  12. Volpe, R.; Palmerini, G.B.; Sabatini, M. A passive camera based determination of a non-cooperative and unknown satellite’s pose and shape. Acta Astronaut. 2018, 151, 805–817. [Google Scholar] [CrossRef]
  13. Pasqualetto Cassinis, L.; Fonod, R.; Gill, E. Review of the robustness and applicability of monocular pose estimation systems for relative navigation with an uncooperative spacecraft. Prog. Aerosp. Sci. 2019, 110, 100548. [Google Scholar] [CrossRef]
  14. Zhu, W.; She, Y.; Hu, J.; Wang, B.; Mu, J.; Li, S. A hybrid relative navigation algorithm for a large–scale free tumbling non–cooperative target. Acta Astronaut. 2022, 194, 114–125. [Google Scholar] [CrossRef]
  15. Sun, D.; Hu, L.; Duan, H.; Pei, H. Relative Pose Estimation of Non-Cooperative Space Targets Using a TOF Camera. Remote Sens. 2022, 14, 6100. [Google Scholar] [CrossRef]
  16. Yan, Z.; Wang, H.; Ze, L.; Ning, Q.; Lu, Y. A pose estimation method of space non-cooperative target based on ORBFPFH SLAM. Optik 2023, 286, 171025. [Google Scholar] [CrossRef]
  17. Chen, H.; Wang, P.; Wang, F.; Tian, W.; Xiong, L.; Li, H. EPro-PnP: Generalized End-to-End Probabilistic Perspective-n-Points for Monocular Object Pose Estimation. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 2771–2780. [Google Scholar]
  18. Xu, W.; Yan, L.; Hu, Z.; Liang, B. Area-oriented coordinated trajectory planning of dual-arm space robot for capturing a tumbling target. Chin. J. Aeronaut. 2019, 32, 2151–2163. [Google Scholar] [CrossRef]
  19. Liu, S.; Zhu, X.; Cao, Z.; Wang, G. Deep 1D Landmark Representation Learning for Space Target Pose Estimation. Remote Sens. 2022, 14, 4035. [Google Scholar] [CrossRef]
  20. Sun, Q.; Pan, X.; Ling, X.; Wang, B.; Sheng, Q.; Li, J.; Yan, Z.; Yu, K.; Wang, J. A Vision-Based Pose Estimation of a Non-Cooperative Target Based on a Self-Supervised Transformer Network. Aerospace 2023, 10, 997. [Google Scholar] [CrossRef]
  21. Ye, R.; Wang, L.; Ren, Y.; Wang, Y.; Chen, X.; Liu, Y. FilterformerPose: Satellite Pose Estimation Using Filterformer. Sensors 2023, 23, 8633. [Google Scholar] [CrossRef] [PubMed]
  22. Qiao, S.; Zhang, H.; Meng, G.; An, M.; Xie, F.; Jiang, Z. Deep-Learning-Based Satellite Relative Pose Estimation Using Monocular Optical Images and 3D Structural Information. Aerospace 2022, 9, 768. [Google Scholar] [CrossRef]
  23. Shen, Y.; Ji, R.; Wang, Y.; Wu, Y.; Cao, L. Cyclic Guidance for Weakly Supervised Joint Detection and Segmentation. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 697–707. [Google Scholar]
  24. Insom, P.; Liu, R.; Duan, R.; Hou, Y.; Boonsrimuang, P. Joint iterative channel estimation and decoding under pulsed radio frequency interference condition. In Proceedings of the 16th International Conference on Advanced Communication Technology, Pyeong Chang, Republic of Korea, 16–19 February 2014; pp. 983–988. [Google Scholar]
  25. Insom, P.; Insom, P.; Boonsrimuang, P. Joint iterative channel estimation and decoding under impulsive interference condition. In Proceedings of the 2016 18th International Conference on Advanced Communication Technology (ICACT), Pyeong Chang, Republic of Korea, 31 January–3 February 2016. [Google Scholar]
  26. Insom, P.; Cao, C.; Boonsrimuang, P.; Liu, D.; Saokarn, A.; Yomwan, P.; Xu, Y. A Support Vector Machine-Based Particle Filter Method for Improved Flooding Classification. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1943–1947. [Google Scholar] [CrossRef]
  27. Insom, P.; Cao, C.; Boonsrimuang, P.; Bao, S.; Chen, W.; Ni, X. A support vector machine-based particle filter for improved land cover classification applied to MODIS data. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 775–778. [Google Scholar]
  28. Wan, F.; Liu, C.; Ke, W.; Ji, X.; Jiao, J.; Ye, Q. C-MIL: Continuation Multiple Instance Learning for Weakly Supervised Object Detection. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019. [Google Scholar]
  29. Li, X.; Yi, S.; Zhang, R.; Fu, X.; Jiang, H.; Wang, C.; Liu, Z.; Gao, J.; Yu, J.; Yu, M.; et al. Dynamic sample weighting for weakly supervised object detection. Image Vis. Comput. 2022, 122, 104444. [Google Scholar] [CrossRef]
  30. Tang, P.; Wang, X.; Bai, X.; Liu, W. Multiple Instance Detection Network with Online Instance Classifier Refinement. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 3059–3067. [Google Scholar]
  31. Li, Y.; Yan, Y.; Xiu, X.; Miao, Z. An Uncertainty Weighted Non-Cooperative Target Pose Estimation Algorithm, Based on Intersecting Vectors. Aerospace 2022, 9, 681. [Google Scholar] [CrossRef]
  32. Liu, T.; Qin, Z.; Hong, Y.; Jiang, Z.P. Distributed Optimization of Nonlinear Multiagent Systems: A Small-Gain Approach. IEEE Trans. Autom. Control 2022, 67, 676–691. [Google Scholar] [CrossRef]
  33. Jin, Z.; Li, H.; Qin, Z.; Wang, Z. Gradient-Free Cooperative Source-Seeking of Quadrotor Under Disturbances and Communication Constraints. IEEE Trans. Ind. Electron. 2025, 72, 1969–1979. [Google Scholar] [CrossRef]
  34. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; pp. 1942–1948. [Google Scholar]
  35. Brauer, J.; Hübner, W.; Arens, M. Particle swarm optimization on low dimensional pose manifolds for monocular human pose estimation. In Optics and Photonics for Counterterrorism, Crime Fighting and Defence IX and Optical Materials and Biomaterials in Security and Defence Systems Technology X; SPIE: Bellingham, WA, USA, 2013. [Google Scholar] [CrossRef]
  36. Rosa, S.; Toscana, G.; Bona, B. Q-PSO: Fast Quaternion-Based Pose Estimation from RGB-D Images. J. Intell. Robot. Syst. 2018, 92, 465–487. [Google Scholar] [CrossRef]
  37. Li, S. Global Face Pose Detection Based on an Improved PSO-SVM Method. In Proceedings of the 2020 International Conference on Aviation Safety and Information Technology, Weihai, China, 14–16 October 2020; pp. 549–553. [Google Scholar]
  38. Ye, Q.; Yuan, S.; Kim, T.-K. Spatial Attention Deep Net with Partial PSO for Hierarchical Hybrid Hand Pose Estimation. arXiv 2016, arXiv:1604.03334. [Google Scholar]
  39. Lei, J.; Wang, J.; Shi, J.; Xu, G.; Cheng, Y. Configuration optimization method of cooperative target for pose estimation with monocular vision. Opt. Eng. 2024, 63, 023102. [Google Scholar] [CrossRef]
  40. Liu, L.; Wang, L.; Liu, X. In defense of soft-assignment coding. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2486–2493. [Google Scholar]
  41. Felzenszwalb, P.; McAllester, D.; Ramanan, D. A discriminatively trained, multiscale, deformable part model. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8. [Google Scholar]
  42. Deng, W.; Yao, R.; Zhao, H.; Yang, X.; Li, G. A novel intelligent diagnosis method using optimal LS-SVM with improved PSO algorithm. Soft Comput. 2019, 23, 2445–2462. [Google Scholar] [CrossRef]
  43. Sahab, M.G.; Toropov, V.; Gandomi, A.H. A Review on Traditional and Modern Structural Optimization. In Metaheuristic Applications in Structures and Infrastructures; Elsevier: Amsterdam, The Netherlands, 2013; pp. 25–47. [Google Scholar]
  44. Rao, S.S. Modern Methods of Optimization. In Engineering Optimization; John Wiley and Sons: Hoboken, NJ, USA, 2009; Chapter 13; pp. 693–736. [CrossRef]
Figure 1. The Unreal Rendered Spacecrafts On-Orbit Datasets (URSOs). (a) Example image (Image 1333_rgb.png) of the Soyuz_easy dataset. (b) Axis notation for a moving spacecraft frame [11].
Figure 2. The workflow of the PE-PSO method.
Figure 3. (a) Total loss, (b) position loss, and (c) orientation loss produced over 100 batches of training in the first epoch.
Figure 4. The errors induced by the conventional method and by the PE-PSO method with the hyperparameters of test cases 2 to 5 in Table 1. The Error_total values obtained with (a) the COCO weight and (b) the Soyuz_hard weight; the Error_position values obtained with (c) the COCO weight and (d) the Soyuz_hard weight; and the Error_orientation values obtained with (e) the COCO weight and (f) the Soyuz_hard weight.
Figure 5. Modifications of the slack variables (a) ξ1, (b) ξ2, (c) ξ3, and (d) ξ4 under the 400, 600, and 800 training batches in each epoch of the PE-PSO method.
Table 1. Summary of test cases and training conditions.

| Test Case No. | Training Condition |
|---|---|
| 1 | The conventional method |
| 2 | The PE-PSO method with ω = 0.5 and ω_damp = 0.5 |
| 3 | The PE-PSO method with ω = 0.3 and ω_damp = 0.5 |
| 4 | The PE-PSO method with ω = 0.2 and ω_damp = 0.5 |
| 5 | The PE-PSO method with ω = 0.1 and ω_damp = 0.5 |
Table 2. The average errors over 100 epochs of the training process with the COCO weight.

| Test Case No. | Error_total | Improvement | Error_position | Improvement | Error_orientation | Improvement | Computation Time (s) |
|---|---|---|---|---|---|---|---|
| 1 | 6.6347 | - | 0.0976 | - | 6.5372 | - | 59,131.9780 |
| 2 | 6.6226 | 0.2% | 0.0994 | −1.9% | 6.5232 | 0.2% | 60,345.0513 |
| 3 | **6.3971** | **3.6%** | **0.0951** | **2.5%** | **6.3020** | **3.6%** | 59,141.1049 |
| 4 | 6.4770 | 2.4% | 0.0966 | 1.0% | 6.3803 | 2.4% | 60,297.9563 |
| 5 | 6.6306 | 0.1% | 0.0957 | 1.9% | 6.5349 | 0.0% | 59,284.5769 |

Bold values indicate the best result.
Table 3. The average errors over 100 epochs of the training process with the Soyuz_hard weight.

| Test Case No. | Error_total | Improvement | Error_position | Improvement | Error_orientation | Improvement | Computation Time (s) |
|---|---|---|---|---|---|---|---|
| 1 | 5.5181 | - | 0.0494 | - | 5.4687 | - | 23,546.6051 |
| 2 | 5.5152 | 0.1% | 0.0493 | 0.2% | 5.4659 | 0.1% | 23,292.7730 |
| 3 | 5.5140 | 0.1% | **0.0477** | **3.3%** | 5.4662 | 0.0% | 23,654.2353 |
| 4 | **5.5119** | **0.1%** | 0.0493 | 0.2% | **5.4626** | **0.1%** | 23,548.0132 |
| 5 | 5.5144 | 0.1% | 0.0487 | 1.3% | 5.4657 | 0.1% | 23,614.5915 |

Bold values indicate the best result.
Table 4. The mean and standard deviation (σ) of each modified element of the slack variable (ξ) in epochs 1, 50, and 100.

| | Epoch 1 Mean | Epoch 1 σ | Epoch 50 Mean | Epoch 50 σ | Epoch 100 Mean | Epoch 100 σ |
|---|---|---|---|---|---|---|
| ξ1 | 0.9910 | 7.2056 | 0.6607 | 1.2528 | −1.0213 | 1.4733 |
| ξ2 | 0.9119 | 6.7317 | 1.5065 | 1.4761 | 1.0754 | 1.2136 |
| ξ3 | −1.3849 | 6.2698 | −4.3527 | 1.4546 | −4.7126 | 1.1934 |
| ξ4 | 1.6168 | 7.3208 | 4.0583 | 1.2990 | 4.0312 | 1.2177 |
Table 5. The mean position and orientation estimation errors for the testing dataset using the COCO pretrained weight.

| Test Case No. | Mean Position Estimation Error | Mean Orientation Estimation Error |
|---|---|---|
| 1 | 1.3053 | 29.7798 |
| 2 | 1.2267 | 25.5245 |
| 3 | **1.1346** | **21.1206** |
| 4 | 1.2475 | 21.4514 |
| 5 | 1.2375 | 27.2040 |

Bold values indicate the best result.
Table 6. The mean position and orientation estimation errors for the testing dataset using the Soyuz_hard pretrained weight.

| Test Case No. | Mean Position Estimation Error | Mean Orientation Estimation Error |
|---|---|---|
| 1 | 0.6950 | 4.7611 |
| 2 | **0.6036** | 4.8252 |
| 3 | 0.6410 | **4.7488** |
| 4 | 0.8222 | 5.1743 |
| 5 | 0.8534 | 4.7671 |

Bold values indicate the best result.
Table 7. The mean position error (Error_position) for cross-validation.

| K | Conventional | PE-PSO | Improvement (%) |
|---|---|---|---|
| 1 | 0.2035 | 0.2011 | 1.2% |
| 2 | 0.1839 | 0.1827 | 0.7% |
| 3 | 0.1982 | 0.1971 | 0.6% |
| 4 | 0.1869 | 0.1826 | 2.3% |
| 5 | 0.2016 | 0.1981 | 1.7% |
| 6 | 0.1874 | 0.1889 | −0.8% |
| 7 | 0.1864 | 0.1818 | 2.5% |
| 8 | 0.1855 | 0.1804 | 2.8% |
| 9 | 0.1826 | 0.1784 | 2.3% |
| 10 | 0.1873 | 0.1842 | 1.7% |
| Mean | 0.1903 | 0.1875 | 1.5% |
Table 8. The mean orientation error (Error_orientation) for cross-validation.

| K | Conventional | PE-PSO | Improvement (%) |
|---|---|---|---|
| 1 | 9.5549 | 9.5286 | 0.3% |
| 2 | 9.4215 | 9.4259 | 0.0% |
| 3 | 9.4670 | 9.4544 | 0.1% |
| 4 | 9.4421 | 9.4345 | 0.1% |
| 5 | 9.5356 | 9.5201 | 0.2% |
| 6 | 9.4632 | 9.4113 | 0.6% |
| 7 | 9.4887 | 9.4882 | 0.0% |
| 8 | 9.4406 | 9.4363 | 0.1% |
| 9 | 9.4922 | 9.4835 | 0.1% |
| 10 | 9.4552 | 9.4557 | 0.0% |
| Mean | 9.4761 | 9.4639 | 0.1% |
Table 9. The mean total error (Error_total) for cross-validation.

| K | Conventional | PE-PSO | Improvement (%) |
|---|---|---|---|
| 1 | 9.7583 | 9.7298 | 0.3% |
| 2 | 9.6054 | 9.6086 | 0.0% |
| 3 | 9.6652 | 9.6515 | 0.1% |
| 4 | 9.6290 | 9.6171 | 0.1% |
| 5 | 9.7371 | 9.7181 | 0.2% |
| 6 | 9.6506 | 9.6002 | 0.5% |
| 7 | 9.6751 | 9.6700 | 0.1% |
| 8 | 9.6261 | 9.6167 | 0.1% |
| 9 | 9.6749 | 9.6620 | 0.1% |
| 10 | 9.6425 | 9.6399 | 0.0% |
| Mean | 9.6664 | 9.6517 | 0.2% |
Kamsing, P.; Cao, C.; Zhao, Y.; Boonpook, W.; Tantiparimongkol, L.; Boonsrimuang, P. Joint Iterative Satellite Pose Estimation and Particle Swarm Optimization. Appl. Sci. 2025, 15, 2166. https://doi.org/10.3390/app15042166