Article

Star Image Prediction and Restoration under Dynamic Conditions

1 Key Laboratory of Micro-Inertial Instrument and Advanced Navigation Technology, Ministry of Education, School of Instrument Science and Engineering, Southeast University, Nanjing 210096, China
2 School of Electrical Engineering and Automation, Qilu University of Technology, Jinan 250353, China
* Author to whom correspondence should be addressed.
Sensors 2019, 19(8), 1890; https://doi.org/10.3390/s19081890
Submission received: 24 March 2019 / Accepted: 16 April 2019 / Published: 20 April 2019
(This article belongs to the Special Issue Visual Sensors)

Abstract

The star sensor is widely used in spacecraft attitude control systems for attitude measurement. However, under highly dynamic conditions, frame loss and smearing of the star image may appear and result in decreased accuracy or even failure of star centroid extraction and attitude determination. To improve the performance of the star sensor under dynamic conditions, a gyroscope-assisted star image prediction method and an improved Richardson-Lucy (RL) algorithm based on an ensemble back-propagation neural network (EBPNN) are proposed. First, to address the frame loss problem of the star sensor, a prediction model of the star spot position is derived from the angular rates of the gyroscope, taking the distortion of the star sensor lens into account. Second, to restore the smeared star image, the point spread function (PSF) is calculated from the angular velocity of the gyroscope. Then, the EBPNN is used to predict the number of iterations required by the RL algorithm to complete the star image deblurring. Finally, simulation experiments are performed to verify the effectiveness and real-time performance of the proposed algorithm.

1. Introduction

Along with the development of navigation technology, the requirements for spacecraft attitude measurement are becoming increasingly demanding [1,2]. In general, star sensors and gyroscopes are often used in spacecraft to measure attitude information. The star sensor is regarded as the most accurate attitude-measuring device under stable conditions [3]. However, under dynamic conditions, frame loss and blurring of the star image may occur, which leads to decreased accuracy or even failure of star centroid extraction and attitude determination. Therefore, the star sensor can maintain good performance under dynamic conditions only if the frame loss and blurring problems of the star image are solved. Because gyroscopes have relatively high measurement accuracy and excellent dynamic performance over short periods, using the gyroscope to assist in improving the dynamic performance of the star sensor has become a hot topic [4,5,6,7,8,9].
During spacecraft motion, due to external interference and the limitations of the star sensor, the star sensor is prone to frame loss, which can result in a lack of coherence in moving image tracking and even the loss of key motion features. Therefore, eliminating the frame loss error has become a research hotspot in the field of image processing. Currently, the primary methods for eliminating the frame loss error include approaches based on the support vector machine (SVM) [10,11], iterative error compensation [12,13], and adaptive minimum-error threshold segmentation [14]. These methods eliminate the interference noise in the image and compensate for the frame loss error, but they still cannot avoid the frame loss itself. To overcome the shortcomings of the above methods, a method for eliminating the frame loss by using a moving image-tracking model is presented in [15]. The frame loss of the star image is mainly determined by the exposure time and readout time of the star sensor [2]; therefore, in [16,17,18], parallel processing is used to overlap the exposure time and readout time and thus reduce the frame loss of the star image. In [19,20], the authors used image intensifiers to increase the sensitivity of the image sensor, thereby reducing the occurrence of frame loss in star sensors. In [21], Wang et al. proposed using field programmable gate arrays (FPGAs) to improve the processing ability of the star sensor and reduce the readout time. Yu et al. [22] proposed a method to reduce the occurrence of frame loss by using an intensified star sensor. Although FPGAs and image intensifiers can help the star sensor reduce the occurrence of frame loss, the additional FPGA and image intensifier increase the weight and power consumption of the star sensor and limit its application in micro-spacecraft.
The motion blur of the star image is another important factor that affects the dynamic performance of the star sensor. To improve the dynamic performance of the star sensor, extensive research has been conducted in the field of image processing, especially on star image deblurring algorithms [23]. According to whether the point spread function (PSF) is known or not, deblurring methods can be classified into two typical forms: blind image deblurring (BID) with an unknown PSF, and non-blind image deblurring (NBID) with a known PSF [24]. Mathematically, NBID is an inverse problem, and NBID algorithms have good real-time performance. Currently, most BID algorithms perform blur kernel estimation and image deblurring simultaneously and recursively to approach the sharp image [25,26,27,28,29,30]; therefore, BID methods have poor real-time performance. Because star sensors are widely used in spacecraft, the real-time requirements are high. Therefore, we study an NBID algorithm for star image deblurring.
Two problems should be solved in the process of restoring the blurred star image: how to determine the blur kernel, and which deblurring method to use. The gyroscope can measure the angular rates of the carrier and is easy to integrate, and the blur kernel parameters (blur angle and blur length) can be calculated from the angular rate information output by the gyroscope. In this paper, a gyroscope is therefore used to assist in the calculation of the blur kernel. For star image deblurring, there are two commonly used NBID algorithms. One is the Wiener filter [31,32]. Quan et al. [31] proposed a Wiener filter based on the optimal window technique for recovering the blurred star image. Ma et al. [32] proposed an improved one-dimensional Wiener filtering method for star image deblurring. Although both methods have good real-time performance, they amplify the noise in the image. The other is the Richardson–Lucy (RL) algorithm, which can effectively suppress the noise in the deblurred star image [33,34]. However, the RL algorithm does not provide an iterative convergence criterion, and the optimal number of iterations must be found through repeated trials, which is time-consuming. If a large number of blurred star images must be processed, this drawback cannot be ignored.
In this paper, to address the shortcomings of the above methods and further improve the performance of the star sensor under highly dynamic conditions, we propose a gyroscope-assisted star image prediction method and an improved RL non-blind deblurring algorithm. In the star image prediction method, considering the second-order distortion of the star sensor lens, a prediction model between the angular rates of the gyroscope and the position of the star spot is established. For the improved RL algorithm, we first analyze the point spread function (PSF) model of the star sensor under different motion conditions; then, an ensemble back-propagation neural network (EBPNN) prediction model based on an improved bagging method is constructed to predict the number of iterations required by the conventional RL algorithm, which overcomes the disadvantage of the traditional RL algorithm that the number of iterations must be set manually.
The rest of this paper is organized as follows. In Section 2, we introduce the star image prediction model in the case of the frame loss of the star image. The improved RL algorithm is described in Section 3. In Section 4, simulation results are shown to demonstrate the effectiveness of our method. Finally, we give a conclusion in Section 5.

2. Prediction Model of the Star Image

The star sensor is a vision sensor that can be used to measure the attitude of a spacecraft [35]. To obtain high-precision attitude information of the spacecraft, we must ensure that the image sensor of the star sensor outputs the star image continuously. Due to the highly dynamic motion of the spacecraft, frame loss of the star image often occurs. Therefore, it is especially important to ensure that the star sensor can still output high-precision attitude information when frames of the star image are lost. In this section, we show how to predict the position of the star spot from the angular rates of the gyroscope in the presence of distortion of the star sensor lens. In Figure 1, the star sensor obtains the direction vector of the navigation star in the celestial inertial coordinate system by observing the stars on the celestial sphere. At time $t$, the attitude matrix of the star sensor in the celestial coordinate system is $A(t)$, the star sensor can detect the direction vector $v_i$ of the navigation star in the celestial coordinate system, and its image vector can be represented as $W_i$ in the star sensor coordinate system. The image coordinate of the principal point of the star sensor lens is $(x_0, y_0)$, and the measured coordinates of the navigation star $S_i$ on the image plane are $(x_i, y_i)$. Since the optical lens of the star sensor mainly exhibits second-order radial distortion, the ideal (distortion-corrected) image coordinates $(x_i', y_i')$ of the navigation star $S_i$ can be expressed as,
$$\begin{cases} x_i' - x_0 = (x_i - x_0)(1 + k_x r^2), \\ y_i' - y_0 = (y_i - y_0)(1 + k_y r^2), \end{cases}$$
where $r = \sqrt{(x_i - x_0)^2 + (y_i - y_0)^2}$, and $k_x$ and $k_y$ represent the second-order radial distortion coefficients in the X and Y directions, respectively.
Assuming that the focal length of the star sensor is $f$, the direction vector $W_i$ can be given by
$$W_i = \frac{1}{\sqrt{[(x_i - x_0)(1 + k_x r^2)]^2 + [(y_i - y_0)(1 + k_y r^2)]^2 + f^2}} \begin{bmatrix} (x_i - x_0)(1 + k_x r^2) \\ (y_i - y_0)(1 + k_y r^2) \\ f \end{bmatrix}.$$
According to the attitude matrix $A(t)$ of the star sensor, the relationship between the vectors $W_i$ and $v_i$ can be obtained,
$$W_i = A(t)\, v_i,$$
where the attitude matrix $A(t)$ can be solved by the N-vector method, the TRIAD method, the QUEST method, the q-method, or the least-squares method [36]. In this paper, we use the angular velocity information of the gyroscope to calculate the attitude matrix $A(t)$.
In Figure 2, $O_S XYZ$ represents the star sensor coordinate system, $O_C uv$ represents the image plane coordinate system, the projection of the principal point $O_S$ of the star sensor lens onto the image plane is $O_C$, and $O_C O_S$ coincides with the principal optical axis of the star sensor lens, with a length equal to the focal length $f$. $w_x$, $w_y$ and $w_z$ represent the three-axis angular rates of the star sensor at instant $t$, which can be measured by the gyroscope. $P$ denotes the position of the navigation star on the star image at instant $t$, $O_C P$ denotes the direction vector of the navigation star in the star sensor coordinate system, and the star spot $P$ shifts to $P'$ at instant $t + \Delta t$. According to Equation (3), the direction vectors $O_S P$ and $O_S P'$ can be expressed as,
$$\begin{cases} O_S P = W_i(t) = A(t)\, v_i, \\ O_S P' = W_i(t + \Delta t) = A(t + \Delta t)\, v_i, \end{cases}$$
where $A(t + \Delta t) = A_t^{t+\Delta t} A(t)$, and $A(t + \Delta t)$ denotes the attitude matrix at instant $t + \Delta t$.
$$A_t^{t+\Delta t} = I - \big(w(t)\times\big)\Delta t = I - \begin{bmatrix} 0 & -w_z(t) & w_y(t) \\ w_z(t) & 0 & -w_x(t) \\ -w_y(t) & w_x(t) & 0 \end{bmatrix}\Delta t = \begin{bmatrix} 1 & w_z(t)\Delta t & -w_y(t)\Delta t \\ -w_z(t)\Delta t & 1 & w_x(t)\Delta t \\ w_y(t)\Delta t & -w_x(t)\Delta t & 1 \end{bmatrix},$$
where $\big(w(t)\times\big)$ represents the cross-product (skew-symmetric) matrix of the star sensor angular rate vector $w(t)$.
According to Equations (4) and (5), the relationship between $W_i(t)$ and $W_i(t + \Delta t)$ can be obtained,
$$W_i(t + \Delta t) = A_t^{t+\Delta t}\, W_i(t),$$
where $W_i$ can be calculated from the star image. According to Equations (1) and (6), we obtain the position prediction model as follows,
$$\begin{cases} x_i(t+\Delta t) = \dfrac{(x_i(t)-x_0)(1+k_x r^2) + x_0 + \big[(y_i(t)-y_0)(1+k_y r^2)+y_0\big] w_z(t)\Delta t + f\, w_y(t)\Delta t}{\big[\big((x_i(t)-x_0)(1+k_x r^2)+x_0\big) w_y(t)\Delta t + \big((y_i(t)-y_0)(1+k_y r^2)+y_0\big) w_x(t)\Delta t\big]/f + 1}, \\[2ex] y_i(t+\Delta t) = \dfrac{(y_i(t)-y_0)(1+k_y r^2)+y_0 - \big[(x_i(t)-x_0)(1+k_x r^2)+x_0\big] w_z(t)\Delta t - f\, w_x(t)\Delta t}{\big[\big((x_i(t)-x_0)(1+k_x r^2)+x_0\big) w_y(t)\Delta t + \big((y_i(t)-y_0)(1+k_y r^2)+y_0\big) w_x(t)\Delta t\big]/f + 1}. \end{cases}$$
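For illustration, a minimal sketch of how Equation (7) could be evaluated for a single star spot is given below; the function name, the parameter values, and the choice of expressing the focal length in pixels are our own illustrative assumptions, not code from the paper.

```python
import numpy as np

def predict_star_spot(x, y, w, dt, f, x0, y0, kx, ky):
    """Propagate one star spot (x, y) forward by one gyro sample (Equation (7) sketch).

    w        : (wx, wy, wz) star sensor angular rates in rad/s
    dt       : prediction interval in s
    f        : focal length expressed in pixels
    (x0, y0) : principal point; kx, ky: second-order radial distortion coefficients
    """
    wx, wy, wz = w
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    # Distortion-corrected image coordinates (Equation (1))
    xc = (x - x0) * (1.0 + kx * r2) + x0
    yc = (y - y0) * (1.0 + ky * r2) + y0
    # Common denominator from the small-rotation attitude update (Equations (5)-(7))
    denom = (xc * wy * dt + yc * wx * dt) / f + 1.0
    x_next = (xc + yc * wz * dt + f * wy * dt) / denom
    y_next = (yc - xc * wz * dt - f * wx * dt) / denom
    return x_next, y_next

# Example: 10 deg/s about each axis, 10 ms prediction interval,
# focal length 49 mm with 20 um pixels, i.e. 2450 pixels (values from Section 4.1)
x1, y1 = predict_star_spot(300.0, 250.0, np.deg2rad([10.0, 10.0, 10.0]), 0.01,
                           f=2450.0, x0=256.0, y0=256.0, kx=1e-9, ky=1e-9)
```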

3. Improved Star Image Deblurring Algorithm

Generally, establishing a PSF under a specific motion is the key to star image recovery. In this section, first, we analyze the PSF model of the blurred star image caused by the rotation of the star sensor around the optical axis and non-optical axis and calculate the PSF in the corresponding motion condition through the angular velocity information of the gyroscope. Then, we introduce an improved RL algorithm to recover the blurred star image.

3.1. Motion Blur Model of the Star Image

To better recover the blurred star image, the primary task is to obtain the PSF. Therefore, it is necessary to analyze the mechanism of star image blurring. The star sensor is a navigation device that acquires the attitude by utilizing star observations. Because the star sensor photographs a dark sky background, the exposure time needs to be increased appropriately in order to capture enough navigation stars in the star image. If the star sensor undergoes a wide range of motion during the exposure time, the same star will be imaged at different locations on the star image, which results in blurring of the star image. Mathematically, the model of star image blurring can be written as,
$$g(x,y) = f(x,y) \otimes h(x,y) + n(x,y),$$
where $f(x,y)$, $g(x,y)$, and $h(x,y)$ denote the sharp star image, the blurred star image, and the PSF, respectively; $\otimes$ represents the two-dimensional convolution operator, and $n(x,y)$ denotes the image noise.
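As a minimal sketch of Equation (8), the snippet below blurs a synthetic star image with a placeholder PSF using SciPy's two-dimensional convolution; the image, kernel, and noise level are illustrative stand-ins rather than values from the paper.

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)

# Synthetic "sharp" star image: dark background with two bright star spots
f_img = np.zeros((64, 64))
f_img[20, 30] = 255.0
f_img[40, 12] = 180.0

# Placeholder horizontal motion-blur PSF of length 9 pixels
h = np.zeros((9, 9))
h[4, :] = 1.0 / 9.0

# Equation (8): blurred image = sharp image convolved with the PSF, plus noise
n = rng.normal(0.0, 1.0, f_img.shape)
g_img = convolve2d(f_img, h, mode="same", boundary="symm") + n
```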
Because different motion types of the star sensor produce different PSFs, the PSF is important for describing the model of the blurred star image. Since the distance from the navigation star to the Earth is much larger than the distance from the star sensor to the Earth, linear motion has little effect on the star image blur, and this effect can be ignored. Therefore, we mainly analyze the model of the blurred star image generated by angular motion.
In Figure 3a, the star image blur caused by the angular motion is shown. Since the exposure time of the star sensor is short, the angular velocity of the star sensor can be considered constant during the exposure time. Moreover, the star sensor coordinate system is assumed to coincide with the body-fixed frame. Figure 3b shows the model of the blurred star image generated by the star sensor rotating around the X-axis; the initial angle between the starlight direction and the principal optical axis of the star sensor is $\alpha$, and the projection of the navigation star is $P$ in the star image. When the star sensor rotates clockwise around the X-axis at an angular velocity $w_x$, during the exposure time $\Delta t$ the rotation angle is $\Delta\alpha = w_x \Delta t$, and the star spot moves from $P$ to $P'$ in the image plane. The geometric relationship between $P$ and $P'$ is,
$$L_{PP'} = f\,[\tan(\alpha + \Delta\alpha) - \tan\alpha]\,/\,d_{ccd},$$
where $L_{PP'}$ represents the distance from $P$ to $P'$ in pixels, $d_{ccd}$ denotes the pixel size, and $f$ is the focal length of the star sensor.
Because the exposure time of the star sensor is short, $\Delta\alpha$ is quite small, and the first-order Taylor expansion of $\tan(\alpha + \Delta\alpha)$ can be used.
$$\tan(\alpha + \Delta\alpha) \approx \tan\alpha + (\tan\alpha)'\,\Delta\alpha = \tan\alpha + \frac{\sin^2\alpha + \cos^2\alpha}{\cos^2\alpha}\,\Delta\alpha = \tan\alpha + (\tan^2\alpha + 1)\,\Delta\alpha.$$
Substituting Equation (10) into (9), we have
$$L_{PP'} = f\,(\tan^2\alpha + 1)\,\Delta\alpha\,/\,d_{ccd}.$$
In general, the rotational motion characteristics of the star sensor in the $O_S X$ and $O_S Y$ directions are the same. As shown in Figure 3c, during the exposure time $\Delta t$ the star sensor rotates clockwise around the Y-axis at an angular velocity $w_y$, the rotation angle is $\Delta\alpha = w_y \Delta t$, the star spot shifts along the $u$ axis in the image plane, and its translation can be obtained as
$$L_{PP'} = f\,(\tan^2\alpha + 1)\,\Delta\alpha\,/\,d_{ccd}.$$
When the star sensor rotates around the X-axis and Y-axis with angular rates $w_x$ and $w_y$, respectively, after the exposure time $\Delta t$ the rotation angle of the star sensor is $\Delta\alpha = w_{xy}\Delta t = \sqrt{w_x^2 + w_y^2}\,\Delta t$, and the translation of the star spot is
$$L_{PP'} = f\,(\tan^2\alpha + 1)\,\Delta\alpha\,/\,d_{ccd}.$$
In general, when the star sensor rotates around the cross bore-sight directions (the $O_S X$ and $O_S Y$ directions), the blur kernel angle $\theta$ of the star image can be given by
$$\theta = \arctan\!\left[\frac{\tan(\alpha + w_x \Delta t) - \tan\alpha}{\tan(\alpha + w_y \Delta t) - \tan\alpha}\right].$$
Then, the PSF of the blurred star image is expressed as [37,38],
$$h_1(x,y) = \begin{cases} 1/L_{PP'}, & \text{if } y/x = \sin|\theta|/\cos|\theta|,\ 0 \le x \le L_{PP'}\cos|\theta|, \\ 0, & \text{otherwise}. \end{cases}$$
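To make Equations (11)–(15) concrete, the sketch below computes a blur length for an assumed rotation about the X-axis and rasterises a normalised linear-motion kernel; the rasterisation helper, the smear direction, and all numerical values are our own illustrative choices rather than the authors' implementation.

```python
import numpy as np

def linear_motion_psf(length_px, theta_rad):
    """Rasterise a normalised linear-motion blur kernel with the given
    length (pixels) and direction, in the spirit of Equation (15)."""
    n = max(int(np.ceil(abs(length_px))), 1)
    h = np.zeros((2 * n + 1, 2 * n + 1))
    xs = np.linspace(0.0, length_px * np.cos(theta_rad), n)
    ys = np.linspace(0.0, length_px * np.sin(theta_rad), n)
    for x, y in zip(xs, ys):
        h[int(round(y)) + n, int(round(x)) + n] = 1.0
    return h / h.sum()

# Blur length for a rotation about the X-axis (Equation (11)), using the
# star sensor parameters from Section 4.1 (f = 49 mm, pixel size 20 um)
f_len, d_ccd = 49e-3, 20e-6                 # focal length [m], pixel size [m]
alpha = np.deg2rad(5.0)                     # starlight angle to the boresight
d_alpha = np.deg2rad(10.0) * 0.01           # w_x times the exposure time
L = (f_len / d_ccd) * (np.tan(alpha) ** 2 + 1.0) * d_alpha
h1 = linear_motion_psf(L, theta_rad=np.deg2rad(90.0))   # smear assumed along v
```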
In Figure 3d, the star sensor rotates clockwise around the Z-axis at an angular rate $w_z$, and the point $P(u, v)$ moves on a circle with $O_C$ as the center and $r = \sqrt{u^2 + v^2}$ as the radius. The rotation angle of the star sensor is $\Delta\alpha = w_z \Delta t$ during the exposure time $\Delta t$. Since the exposure time of the star sensor is short, the arc $PP'$ can be approximated by the chord $PP'$. Inspired by reference [39], the motion of the star spot can be regarded as a uniform linear motion on the focal plane. The displacements of the star spot in the directions of the X- and Y-axis can be expressed as,
$$\begin{cases} PP'_u \approx v\,w_z\,\Delta t, \\ PP'_v \approx u\,w_z\,\Delta t. \end{cases}$$
The star image blur kernel angle $\theta$ and the blur length $PP'$ are given by
$$\theta = \arctan(PP'_u / PP'_v),$$
$$PP' = \sqrt{PP_u'^2 + PP_v'^2} = \sqrt{v^2 w_z^2 \Delta t^2 + u^2 w_z^2 \Delta t^2} = |w_z|\,\Delta t\, r.$$
According to the geometric relation in Figure 3d,
$$\tan\alpha = r\,d_{ccd}\,/\,f.$$
Substituting Equation (19) into Equation (18), Equation (18) can be rewritten as
$$PP' = |w_z|\,\Delta t\, f \tan\alpha\,/\,d_{ccd}.$$
Therefore, when the star sensor rotates around the Z-axis, the PSF of the blurred star image is expressed as,
$$h_2(x,y) = \begin{cases} 1/PP', & \text{if } y/x = \sin|\theta|/\cos|\theta|,\ 0 \le x < PP'\cos|\theta|, \\ 0, & \text{otherwise}. \end{cases}$$
In summary, according to Equations (15) and (21), the model of the multiple-blurred star image is given by
$$g(x,y) = f(x,y) \otimes h_1(x,y) \otimes h_2(x,y) + n(x,y),$$
where $h_1(x,y)$ and $h_2(x,y)$ need to be calculated from the angular velocities $w_x$, $w_y$ and $w_z$ of the star sensor. In this paper, we use a gyroscope to provide the angular velocity $[w_{bx}\ w_{by}\ w_{bz}]$ of the spacecraft. Therefore, the angular velocity $[w_x\ w_y\ w_z]$ of the star sensor is expressed as,
$$[w_x, w_y, w_z]^T = C_b^s\,[w_{bx}, w_{by}, w_{bz}]^T,$$
where $C_b^s$ denotes the rotation matrix from the body coordinate system to the star sensor coordinate system. Because the star sensor is fixed on the spacecraft, $C_b^s$ can be calibrated in advance.
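A brief sketch of Equations (22) and (23): the body-frame gyro rates are mapped into the star sensor frame with the calibrated mounting matrix, and the overall kernel is obtained by convolving the two PSFs. The mounting matrix and the tiny hand-made kernels below are placeholders, not values from the paper.

```python
import numpy as np
from scipy.signal import convolve2d

# Equation (23): rotate body-frame gyro rates into the star sensor frame
C_b_s = np.eye(3)                           # mounting matrix, calibrated offline
w_body = np.deg2rad([10.0, 10.0, 5.0])      # gyro output in the body frame
w_x, w_y, w_z = C_b_s @ w_body              # rates seen by the star sensor

# Equation (22): the overall blur kernel is the convolution of h1 and h2.
# Tiny hand-made kernels stand in for the PSFs derived in Section 3.1.
h1 = np.array([[0.0, 0.0, 0.0],
               [0.5, 0.5, 0.0],
               [0.0, 0.0, 0.0]])
h2 = np.array([[0.0, 0.5, 0.0],
               [0.0, 0.5, 0.0],
               [0.0, 0.0, 0.0]])
h_total = convolve2d(h1, h2, mode="full")
h_total /= h_total.sum()                    # keep the combined kernel normalised
```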
After obtaining the PSF, the NBID algorithm is used to recover the blurred star image.

3.2. Richardson-Lucy (RL) Algorithm

NBID algorithms include both linear and nonlinear algorithms. The most common linear NBID algorithms include the inverse filtering algorithm, the Wiener filtering algorithm, and the least-squares algorithm [3]. Compared with linear NBID algorithms, nonlinear NBID algorithms are better at suppressing noise and preserving image edge information. Currently, the RL algorithm [40] is the most widely used nonlinear iterative restoration algorithm. The RL algorithm is a deconvolution algorithm derived from the maximum a posteriori probability estimate. This method assumes that the noise in the image follows a Poisson distribution, and the likelihood probability of the image is
$$p(g|f) = \prod_{x,y} \frac{\big(f(x,y)\otimes h(x,y)\big)^{g(x,y)}\, e^{-\big(f(x,y)\otimes h(x,y)\big)}}{g(x,y)!},$$
where $(x,y)$ denotes the pixel coordinate, $g(x,y)$ represents the blurred image, $h(x,y)$ denotes the PSF, and $\otimes$ denotes the two-dimensional convolution operator.
To obtain the maximum likelihood solution of the sharp image $f(x,y)$, we minimize the energy function
$$E(f) = \sum_{x,y} \Big\{ \big(f(x,y) \otimes h(x,y)\big) - g(x,y)\,\log\big(f(x,y) \otimes h(x,y)\big) \Big\}.$$
By differentiating $E(f)$ and normalizing the blur kernel $h(x,y)$, the RL algorithm iteratively updates the image by
$$f^{n+1}(x,y) = \left[\frac{g(x,y)}{f^{n}(x,y)\otimes h(x,y)} \otimes h(-x,-y)\right] f^{n}(x,y),$$
where n represents the iteration number.
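A compact sketch of the RL update in Equation (26) is given below; the flat initial estimate, the flipped-kernel correlation, and the small epsilon guard against division by zero are standard implementation choices of ours, not details specified in the paper.

```python
import numpy as np
from scipy.signal import convolve2d

def richardson_lucy(g, h, n_iter, eps=1e-12):
    """Plain Richardson-Lucy deconvolution (sketch of Equation (26)).

    g : blurred image (float array), h : normalised PSF, n_iter : iteration count.
    """
    f_est = np.full_like(g, g.mean())               # flat, non-negative initial guess
    h_flip = h[::-1, ::-1]                          # h(-x, -y)
    for _ in range(n_iter):
        denom = convolve2d(f_est, h, mode="same", boundary="symm")
        ratio = g / np.maximum(denom, eps)          # g / (f_n convolved with h)
        f_est = f_est * convolve2d(ratio, h_flip, mode="same", boundary="symm")
    return f_est
```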
The RL algorithm has two important properties [40]: non-negativity and energy preservation. It constrains the estimated values of the sharp image to be non-negative and preserves the total energy of the image during the iterations, so the RL algorithm performs well in star image deblurring. However, the RL algorithm does not provide an iterative convergence criterion, and the optimal number of iterations must be found through repeated trials, which is time-consuming. This shortcoming cannot be ignored if a large number of blurred star images have to be processed. Therefore, it is necessary to study an improved RL algorithm that sets the number of iterations automatically.

3.3. Improved RL Algorithm

To overcome the shortcomings of the RL algorithm, we propose an improved RL algorithm, whose flow diagram is shown in Figure 4. First, we set the parameters of the star sensor, including the field of view, focal length, star magnitude limit, resolution of the star image, etc. We use these parameters to simulate a large number of sharp star images and the corresponding blurred star images. Second, according to the angular rates output by the gyroscope, we calculate the PSF of each blurred star image, use the RL algorithm to deblur the star image, and record the optimal number of iterations used. The optimal number of iterations and the sum of the magnitudes of the Fourier coefficients (SUMFC) of the PSF of the blurred star image are used to train the ensemble back-propagation neural network (EBPNN) [41]. After the training is completed, the optimal-iteration-number prediction model of the RL algorithm is obtained. Finally, when the navigation system is in use, the PSF of the blurred star image is obtained according to the angular velocity of the gyroscope, and the SUMFC of the PSF is used as the input of the prediction model. The star image is then deblurred with the number of RL iterations output by the prediction model. In particular, the EBPNN prediction model, which is based on the improved bagging method, uses the SUMFC of the PSF of the blurred star image as its input when predicting the number of iterations.
We use different PSFs to blur the sharp star image (Figure 5a). The relationship between the SUMFC of the PSFs and the corresponding number of iterations required by the RL algorithm is shown in Figure 6. There is a clearly non-linear relationship between them, which motivates the use of the EBPNN to predict the optimal number of iterations of the RL algorithm.
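The input feature of the EBPNN is the SUMFC of the PSF; one plausible way to compute such a feature is sketched below, under our reading of the acronym as the sum of the magnitudes of the Fourier coefficients of the zero-padded kernel.

```python
import numpy as np

def sumfc(psf, fft_size=(64, 64)):
    """Sum of the magnitudes of the Fourier coefficients of the PSF,
    used here as the scalar input feature of the EBPNN (our interpretation)."""
    return float(np.sum(np.abs(np.fft.fft2(psf, s=fft_size))))
```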
The performance of a single back-propagation (BP) neural network is limited: it takes a long time to train, and its objective function easily falls into a local minimum. Therefore, we use an integration strategy based on the improved bagging method to combine single neural networks. The bagging method [42] is based on re-sampling and bootstrap techniques. Bootstrap sample sets $D_i\ (i = 1, 2, \ldots)$ are drawn from the original training set $D$, the size of each bootstrap sample set is the same as that of the original training set, and each bootstrap sample set trains a single BP neural network. The bagging method increases the diversity of the neural networks by re-selecting the training set, thereby improving the generalization ability and prediction accuracy of the EBPNN.
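A minimal sketch of the bagging step described above: each bootstrap set $D_i$ is drawn with replacement from the original set $D$ and has the same size as $D$; the feature and label arrays below are illustrative placeholders (SUMFC values and optimal RL iteration counts).

```python
import numpy as np

def bootstrap_sets(X, y, n_sets=3, seed=0):
    """Draw n_sets bootstrap training sets (D_1, D_2, ...) from (X, y),
    each the same size as the original set, sampled with replacement."""
    rng = np.random.default_rng(seed)
    sets = []
    for _ in range(n_sets):
        idx = rng.integers(0, len(X), size=len(X))
        sets.append((X[idx], y[idx]))
    return sets

# Placeholder data: X = SUMFC feature, y = optimal RL iteration count per sample
X = np.random.rand(1708, 1)
y = np.random.randint(5, 60, size=1708)
D1, D2, D3 = bootstrap_sets(X, y)
```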
To further improve the prediction accuracy of the ensemble neural network, we introduce a just-in-time learning algorithm to optimize the sample sets $D_i\ (i = 1, 2, \ldots)$ obtained by the bagging method. Consider two input samples $x_i$ and $x_q$, where $x_q$ is the currently acquired input sample and $x_i$ is a training sample in $D_i\ (i = 1, 2, \ldots)$. The distance and angle between them can be calculated by the following equations.
$$\begin{cases} d(x_i, x_q) = \| x_i - x_q \|_2, \\ \theta(x_i, x_q) = \arccos \dfrac{x_i^T x_q}{\| x_i \|_2 \, \| x_q \|_2}. \end{cases}$$
The similarity between $x_i$ and $x_q$ is
$$S(x_i, x_q) = \alpha\, e^{-d(x_i, x_q)} + (1 - \alpha)\, \cos\big(\theta(x_i, x_q)\big),$$
where $\alpha$ is a weighting factor; the larger the value of $S(x_i, x_q)$, the higher the similarity between $x_i$ and $x_q$.
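Equations (27) and (28) combine a distance term and an angle term into a single similarity score; a small sketch for feature vectors stored as 1-D NumPy arrays is shown below (the negative exponent on the distance follows Equation (28), and the cosine is computed directly rather than via arccos).

```python
import numpy as np

def similarity(x_i, x_q, alpha=0.5):
    """Distance/angle similarity between two feature vectors
    (sketch of Equations (27) and (28))."""
    d = np.linalg.norm(x_i - x_q)                           # Euclidean distance
    cos_theta = float(x_i @ x_q) / (np.linalg.norm(x_i) * np.linalg.norm(x_q))
    return alpha * np.exp(-d) + (1.0 - alpha) * cos_theta
```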
We select the $k$ samples most similar to the currently acquired sample $x_q$ from the training sample set $D_i\ (i = 1, 2, \ldots)$ and arrange them in descending order of similarity to form a new sample set.
$$\begin{cases} D_i = \{(x_{1,i}, y_{1,i}), (x_{2,i}, y_{2,i}), \ldots, (x_{k,i}, y_{k,i})\}, \quad i = 1, 2, \ldots, \\ S(x_1, x_q) > S(x_2, x_q) > \cdots > S(x_k, x_q), \end{cases}$$
where $y_{k,i}$ denotes the expected output value corresponding to $x_{k,i}$ in the training set $D_i\ (i = 1, 2, \ldots)$.
Therefore, the local modeling problem is transformed into an optimization problem.
$$J(\delta) = \min_{\delta} \sum_{i=1}^{k} \big(y_i - \hat{y}(\delta, x_i)\big)^2\, S(x_i, x_q).$$
Minimizing $J(\delta)$ yields the model parameter $\delta$ at the current moment, and the local model is then obtained:
$$y_q = \hat{y}(\delta, x_q).$$
In particular, we find that the computational complexity of the EBPNN model increases with the number of BPNN sub-models, but the prediction accuracy does not always increase with it and sometimes even decreases. Therefore, after weighing the computational complexity and prediction accuracy of the EBPNN model, we use three BP neural network sub-models to construct the EBPNN model. As shown in Figure 7, three BP neural networks are trained on different sample sets $D_i\ (i = 1, 2, 3)$, and the integrated prediction model is obtained by aggregating the three BP neural networks. When the EBPNN is used for prediction, we integrate the outputs of the individual networks by a weighted method and take the integrated result as the output of the EBPNN. In the process of integrating the outputs of the BP neural networks, we first calculate the average training errors $e_i\ (i = 1, 2, 3)$ of the three sub-models on their respective training sample sets. Then, we construct a weighting vector $w$ of dimension $1 \times n$, where $n$ equals the number of BP neural network sub-models, so $n = 3$ and $w_i = 1/e_i\ (i = 1, 2, 3)$. Finally, we calculate the prediction results of the three sub-models for the input data $x_q$ by Equation (31) and form a $1 \times 3$ output vector $y_q$. The final prediction result of the EBPNN is expressed as,
$$y = \frac{w\, y_q^T}{\sum_{i=1}^{3} w_i}.$$
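A sketch of the error-weighted aggregation in Equation (32); the sub-model errors and outputs below are placeholder numbers, not values from the paper.

```python
import numpy as np

# Average training errors e_i of the three sub-networks (placeholder values)
sub_model_errors = np.array([0.8, 1.2, 1.0])
w = 1.0 / sub_model_errors                   # w_i = 1 / e_i

# Predicted iteration counts of the three sub-networks for the same input x_q
y_q = np.array([23.0, 25.0, 24.0])

# Equation (32): weighted combination of the sub-network outputs
y_pred = float(w @ y_q / w.sum())
```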
To verify the effectiveness of the EBPNN prediction model, we analyzed the accuracy of the iteration numbers estimated by the model. In the training stage of the EBPNN model, each BP neural network adopts a three-layer structure; the numbers of nodes in the input layer, hidden layer, and output layer are 1, 10 and 1, respectively, and the sigmoid function is used as the activation function. The original training set $D$ contains 1708 samples. Figure 8 shows the number of iterations predicted by the EBPNN model compared with the optimal number of iterations. The number of iterations estimated by the EBPNN almost coincides with the optimal number of iterations, and the error between them is small. Therefore, the performance of the EBPNN prediction model meets our requirements.
After the EBPNN predicts the number of iterations, we use the improved RL algorithm to obtain the sharp star image, and we can then accurately estimate the attitude information through star image segmentation, star extraction, star identification, star matching, and other operations [43].

4. Simulation Results and Analysis

To prove the effectiveness of the star image prediction method and the improved RL algorithm in a highly dynamic environment, we compare and analyze the prediction accuracy of the star spot and the accuracy of the attitude estimation before and after star image deblurring in the following subsections.

4.1. Star Image Prediction Experiment

In this section, to validate the star image prediction method, we need to simulate the star images acquired by the star sensor at different times. In the star image simulation, we determine the position of the navigation star in the star image based on the bore-sight direction of the star sensor and the right ascension and declination of the navigation star. Since the star sensor is fixed on the spacecraft, it obtains different star images as the spacecraft moves. We assume that the exposure time of the star sensor is 0.01 s, the field of view is 20° × 20°, the image sensor size is 865 pixels × 865 pixels, the pixel size is 20 μm, and the focal length is 49 mm, and we select the stars brighter than magnitude 3 in the Yale Bright Star Catalogue as the guide star catalog. We use these parameters and the spacecraft trajectory to simulate the images at different times and use them as the ground truth of the star image. With the above parameters, the resolution of the simulated star image is 865 × 865; to speed up the processing, we crop a 512 × 512 region as the star image to be processed. The simulated trajectory of the spacecraft is shown in Figure 9, and 1500 consecutive frames of star images are simulated; the first and the 1500th frame star images are shown in Figure 10.
To validate the star image prediction method, we predict the star image based on the first frame star image and the angular velocity of the gyroscope and compare it with the ground truth of the star image. Figure 11a,b show the ground truth of the 1500th frame star image and the 1500th frame star image predicted by the proposed algorithm. To demonstrate the accuracy of the prediction algorithm more intuitively, Table 1 lists the centroid coordinates of the star spots in the real and predicted 1500th frame star images, where $(x, y)$ represents the centroid coordinate of the star spot in the real star image, $(x', y')$ is the centroid coordinate of the predicted star spot, and $\Delta x$ and $\Delta y$ represent the differences between the horizontal and vertical coordinates of the true star spot and the predicted star spot, respectively. As seen from Table 1, the maximum errors of the horizontal and vertical coordinates of the star spots predicted by our method within 15 s are 0.89 and 0.50 pixels, respectively.
To further analyze the prediction algorithm, starting from the first frame star image shown in Figure 10a, we successively predicted the star positions in 1500 star images and analyzed the mean value of the estimation error of the star spot positions in each predicted star image. As shown in Figure 12, the mean value of the coordinate errors of the predicted star spots increases with the number of predicted frames, but the mean errors stay within a small range. Therefore, in the case of short-term frame loss, the proposed method can achieve an accurate prediction of the star spot.

4.2. Experiments on Star Image Deblurring

In this section, we present some examples to validate the proposed gyro-assisted improved RL algorithm. First, we analyze the blurring of the star image when the star sensor rotates around the X-axis, the Y-axis, the Z-axis, the X- and Y-axes, and the three axes simultaneously. Then, we add Gaussian white noise with zero mean and variance 0.01 to the blurred star images. Finally, the blurred star image is deblurred by the proposed algorithm, and we compare the deblurred star image with the original sharp star image. Figure 13 shows the magnified original star image, the blurred star image caused by the star sensor rotating around the X- and Y-axes ($w_x = 10°/s$, $w_y = 10°/s$), the deblurred star image, and the gray distribution of the star spot in each of them. As can be seen from Figure 13, the gray value of the star spot in the blurred star image decreases significantly; after deblurring, the smearing phenomenon is obviously suppressed, and the gray value and gray distribution of the star spot are closer to those in the original star image.
The star sensor is an attitude measurement device. To reflect the deblurring performance of the proposed algorithm more intuitively, we compare the attitude information of the spacecraft estimated from the star image before and after deblurring. The star image observed by the star sensor at a certain time is shown in Figure 14. First, we apply an angular motion blur to the observed star image; then, we use the proposed algorithm and the automatic iterative RL algorithm to deblur the star image and compare the attitude information estimated from the deblurred images. The automatic iterative RL algorithm calculates the mean square error (MSE) of the currently restored image while automatically increasing the number of iterations and compares it with the MSE of the image restored in the previous iteration. If the MSE of the currently restored image is higher than that of the previous iteration, the previous iteration number is considered the optimal number of iterations, and the corresponding restored image is the optimal restoration result. The attitude estimation results are shown in Table 2, Table 3, Table 4, Table 5 and Table 6, where "Fail" indicates that the attitude information of the spacecraft cannot be estimated from the star image because the degree of blurring is too high.
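For comparison, a sketch of the automatic iterative RL baseline is given below. The paper does not spell out which reference image the MSE is computed against, so the stopping criterion here (MSE between the re-blurred estimate and the observed blurred image) is only one plausible reading; the iteration itself is the plain RL update of Equation (26).

```python
import numpy as np
from scipy.signal import convolve2d

def auto_iterative_rl(g, h, max_iter=200, eps=1e-12):
    """Automatic iterative RL baseline (sketch): stop when the MSE criterion
    stops improving.  MSE is taken here between the re-blurred estimate and
    the observed blurred image g -- one plausible reading of the criterion."""
    f_est = np.full_like(g, g.mean())
    h_flip = h[::-1, ::-1]
    best_mse, best_f = np.inf, f_est
    for _ in range(max_iter):
        denom = convolve2d(f_est, h, mode="same", boundary="symm")
        f_est = f_est * convolve2d(g / np.maximum(denom, eps), h_flip,
                                   mode="same", boundary="symm")
        reblurred = convolve2d(f_est, h, mode="same", boundary="symm")
        mse = float(np.mean((reblurred - g) ** 2))
        if mse >= best_mse:          # stopped improving: keep the previous result
            break
        best_mse, best_f = mse, f_est
    return best_f
```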
From Table 2, Table 3, Table 4, Table 5 and Table 6, it can be seen that the attitude estimation fails when the angular velocity of the star sensor rotating around the X-axis, the Y-axis, the Z-axis, the X- and Y-axes, and all three axes exceeds $w_x = 25°/s$, $w_y = 30°/s$, $w_z = 25°/s$, $w = [20, 20, 0]°/s$ and $w = [15, 15, 15]°/s$, respectively. After the blurred star image is restored by the proposed algorithm and the automatic iterative RL algorithm, the maximum angular velocity at which the attitude can still be estimated is extended to $w_x = 75°/s$, $w_y = 75°/s$, $w_z = 80°/s$, $w = [75, 75, 0]°/s$, and $w = [55, 55, 55]°/s$, respectively. The two methods have similar performance, and the attitude errors are kept within a small range. This is because, as the angular velocity of the star sensor increases, the blur extent of the star image grows and the gray value of the star spot decreases significantly. When the gray value of a blurred star falls below the threshold used for star image segmentation, the blurred star can hardly be detected. However, after restoration of the blurred star image, the gray value of the star spot is increased and the gray distribution of the star spot is closer to the true distribution, so the star spot can also be extracted under highly dynamic conditions. Finally, the attitude of the spacecraft can be estimated from these star spots.
To verify the real-time performance of the proposed algorithm in the case of Gaussian noise, we use the proposed algorithm and the automatic iterative RL algorithm to restore the blurred star image caused by the star sensor rotating around the Z-axis and compare the time consumed by the two methods. As shown in Figure 15, the real-time performance of the proposed algorithm is significantly better than that of the iterative RL algorithm. This is mainly because the proposed algorithm uses the ensemble neural network based on the improved bagging method to quickly predict the number of iterations required by the RL algorithm, whereas the iterative RL algorithm must iterate step by step to find the required number of iterations.
Second, for the case where the blurred star image is contaminated by Poisson noise, we present the deblurring performance of the proposed method and compare it with the automatic iterative RL algorithm. Figure 16 shows the magnified original star image, the blurred star image caused by the star sensor rotating around the X- and Y-axes ($w_x = w_y = 10°/s$), the deblurred star image, and the gray distribution of the star spot in the case of Poisson noise. Combined with Table 7, Table 8, Table 9, Table 10 and Table 11, we can see that in the case of Poisson noise, the attitude estimation fails when the angular velocity of the star sensor rotating around the X-axis, the Y-axis, the Z-axis, the X- and Y-axes, and the three axes exceeds $w_x = 40°/s$, $w_y = 35°/s$, $w_z = 35°/s$, $w = [30, 30, 0]°/s$ and $w = [15, 15, 15]°/s$, respectively. After the blurred star image is restored by the proposed algorithm and the automatic iterative RL algorithm, the maximum angular velocity at which the attitude can still be estimated is extended to $w_x = 160°/s$, $w_y = 160°/s$, $w_z = 170°/s$, $w = [120, 120, 0]°/s$, and $w = [80, 80, 80]°/s$, respectively; the two methods have similar performance, and the attitude errors are kept within a small range. Figure 17 shows the real-time performance of the proposed algorithm and the iterative RL algorithm when dealing with the blurred star image caused by the star sensor rotating around the Z-axis; the result shows that the real-time performance of our algorithm is better than that of the iterative RL algorithm when the degree of blurring of the star image is large.
In summary, the proposed method and the iterative RL algorithm significantly improve the dynamic performance of the star sensor and have similar performance. However, the real-time performance of our algorithm is better than the iterative RL algorithm, especially in the case of Gaussian white noise.

5. Conclusions

In this paper, we improve the dynamic performance of the star sensor by using a star image prediction method and a star image deblurring method. Taking into account the distortion of the star sensor lens, we use the information provided by the star sensor and the gyroscope to establish a star spot prediction model. In addition, for the blurred star image problem, we propose an improved Richardson-Lucy (RL) algorithm based on the EBPNN.
Experimental results demonstrate that the proposed methods are effective in improving the dynamic performance of the star sensor. The maximum error of the star image prediction algorithm is 0.89 pixels in 15 s and the attitude errors calculated from the star image restored by the improved RL algorithm can be kept in a small range. Compared with the iterative RL algorithm, the improved RL algorithm proposed in this paper has better real-time performance.

Author Contributions

X.C. designed and conceived this study; D.L. and C.S. performed the experiments and wrote the paper; D.L. and X.L. developed the program used in the experiment; X.C. reviewed and edited the manuscript. All authors read and approved this manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (No. 61873064, 51375087), Transformation Program of Science and Technology Achievements of Jiangsu Province (No. BA2016139), Postgraduate Research & Practice Innovation Program of Jiangsu Province (No. KYCX18_0073) and Scientific Research Foundation of Graduate School of Southeast University (No. YBPY1931).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Inamori, T.; Hosonuma, T.; Ikari, S.; Saisutjarit, P.; Sako, N.; Nakasuka, S. Precise attitude rate estimation using star images obtained by mission telescope for satellite missions. Adv. Space Res. 2015, 55, 1199–1210. [Google Scholar] [CrossRef]
  2. Zhang, S.; Xing, F.; Sun, T.; You, Z.; Wei, M. Novel approach to improve the attitude update rate of a star tracker. Opt. Express 2018, 26, 5164–5181. [Google Scholar] [CrossRef]
  3. Sun, T.; Xing, F.; You, Z.; Wang, X.; Li, B. Deep coupling of star tracker and MEMS-gyro data under highly dynamic and long exposure conditions. Meas. Sci. Technol. 2014, 25, 085003. [Google Scholar] [CrossRef]
  4. Lu, J.; Lei, C.; Yang, Y. A dynamic precision evaluation method for the star sensor in the stellar-inertial navigation system. Sci. Rep. 2017, 7, 4356. [Google Scholar] [CrossRef]
  5. Tan, W.; Dai, D.; Wu, W.; Wang, X.; Qin, S. A comprehensive calibration method for a star tracker and gyroscope units integrated System. Sensors 2018, 18, 3106. [Google Scholar] [CrossRef] [PubMed]
  6. Ma, L.; Zhan, D.; Jiang, G.; Fu, S.; Jia, H.; Wang, X.; Huang, Z.; Zheng, J.; Hu, F.; Wu, W.; et al. Attitude-correlated frames approach for a star sensor to improve attitude accuracy under highly dynamic conditions. Appl. Opt. 2015, 54, 7559–7566. [Google Scholar] [CrossRef]
  7. Yan, J.; Jiang, J.; Zhang, G. Dynamic imaging model and parameter optimization for a star tracker. Opt. Express 2016, 24, 5961–5983. [Google Scholar] [CrossRef]
  8. Gao, Y.; Qin, S.; Jiang, G.; Zhou, J. Dynamic smearing compensation method for star centroding of star sensors. In Proceedings of the IEEE Conference on Metrology for Aerospace, Florence, Italy, 21–23 June 2016; pp. 221–226. [Google Scholar]
  9. Jiang, J.; Yu, W.; Zhang, G. High-accuracy decoupling estimation of the systematic coordinate errors of an INS and intensified high dynamic star tracker based on the constrained least squares method. Sensors 2017, 17, 2285. [Google Scholar] [CrossRef]
  10. Sharma, V.K.; Mahapatra, K.K. Visual object tracking based on sequential learning of SVM parameter. Digit. Signal Process. 2018, 79, 102–115. [Google Scholar] [CrossRef]
  11. Shi, R.; Wu, G.; Kang, W.; Wang, Z.; Feng, D. Visual tracking utilizing robust complementary learner and adaptive refiner. Neurocomputing 2017, 260, 367–377. [Google Scholar] [CrossRef]
  12. Danelljan, M.; Hager, G.; Khan, F.S.; Felsberg, M. Learning spatially regularized correlation filters for visual tracking. In Proceedings of the IEEE Conference on Computer Vision, Santiago, Chile, 11–18 December 2015; pp. 4310–4318. [Google Scholar]
  13. Fan, Z.; Ji, H.; Zhang, Y. Iterative particle filter for visual tracking. Signal Process. Image Commun. 2015, 36, 140–153. [Google Scholar] [CrossRef]
  14. Jin, J.; Yan, L. Adaptive image tracking algorithm based on improved particle filter and sparse representation. J. Comput. Appl. Softw. 2014, 31, 152–155. [Google Scholar]
  15. Wu, L. Research on method of eliminating frame error in moving image tracking. Comput. Simul. 2016, 198–201. [Google Scholar]
  16. Zhong, H.; Yang, M.; Lu, X. Increasing update rate for star sensor by pipelining parallel processing method. Opt. Precis. Eng. 2009, 17, 2230–2235. [Google Scholar]
  17. Mao, X.; Liang, W.; Zheng, X. A parallel computing architecture based image processing algorithm for star sensor. J. Astronaut. 2011, 32, 613–619. [Google Scholar]
  18. Zhou, Q.; Mao, X.; Zhang, Q. A image processing algorithm with marker for high-speed and multi-channel star sensor. J. Harbin Inst. Technol. 2016, 48, 119–124. [Google Scholar]
  19. Katake, A.B. Modeling, Image Processing and Attitude Estimation of High Speed Star Sensors. Ph.D. Thesis, Texas A&M University, College Station, TX, USA, August 2006. [Google Scholar]
  20. Katake, A. StarCam SG100: A high-update rate, high-sensitivity stellar gyroscope for spacecraft. In Proceedings of the Conference on Sensors, Cameras, and Systems for Industrial/Scientific Applications XI, San Jose, CA, USA, 19–21 January 2010; pp. 1–10. [Google Scholar]
  21. Wang, X.; Wei, X.; Fan, Q.; Li, J.; Wang, G. Hardware implementation of fast and robust star centroid extraction with low resource cost. IEEE Sens. J. 2015, 15, 4857–4865. [Google Scholar] [CrossRef]
  22. Yu, W.; Jiang, J.; Zhang, G. Star tracking method based on multiexposure imaging for intensified star trackers. Appl. Opt. 2017, 56, 5961–5971. [Google Scholar] [CrossRef]
  23. Wang, S.; Zhang, S.; Ning, M.; Zhou, B. Motion blurred star image restoration based on MEMS gyroscope aid and blur kernel correction. Sensors 2018, 18, 2662. [Google Scholar] [CrossRef]
  24. Zhu, H.; Deng, L.; Bai, X.; Li, M.; Cheng, Z. Deconvolution methods based on φ HL regularization for spectral recovery. Appl. Opt. 2015, 54, 4337–4344. [Google Scholar] [CrossRef] [PubMed]
  25. Liu, G.; Chang, S.; Ma, Y. Blind image deblurring using spectral properties of convolution operators. IEEE Trans. Image Process. 2014, 23, 5047–5056. [Google Scholar] [CrossRef]
  26. Ren, W.; Cao, X.; Pan, J.; Guo, X.; Zuo, W.; Yang, M. Image deblurring via enhanced low-rank prior. IEEE Trans. Image Process. 2016, 25, 3426–3437. [Google Scholar] [CrossRef] [PubMed]
  27. Ma, L.; Zeng, T. Image deblurring via total variation based structured sparse model selection. J. Sci. Comput. 2016, 67, 1–19. [Google Scholar] [CrossRef]
  28. Lu, Q.; Zhou, W.; Fang, L.; Li, H. Robust blur kernel estimation for license plate images from fast moving vehicles. IEEE Trans. Image Process. 2016, 25, 2311–2323. [Google Scholar] [CrossRef] [PubMed]
  29. Chen, S.; Shen, H. Multispectral image out-of-focus deblurring using interchannel correlation. IEEE Trans. Image Process. 2015, 24, 4433–4445. [Google Scholar] [CrossRef] [PubMed]
  30. Xue, F.; Blu, T. A novel SURE-based criterion for parametric PSF estimation. IEEE Trans. Image Process. 2015, 24, 595–607. [Google Scholar] [CrossRef] [PubMed]
  31. Quan, W.; Zhang, W. Restoration of motion-blurred star image based on Wiener filter. In Proceedings of the IEEE Conference on Intelligent Computation Technology and Automation, Shenzhen, China, 28–29 March 2011; pp. 691–694. [Google Scholar]
  32. Ma, X.; Xia, X.; Zhang, Z.; Wang, G.; Qian, H. Star image processing of SINS/CNS integrated navigation system based on 1DWF under high dynamic conditions. In Proceedings of the IEEE Conference on Position, Location and Navigation Symposium, Savannah, GA, USA, 28–29 March 2011; pp. 514–518. [Google Scholar]
  33. Jiang, J.; Huang, J.; Zhang, G. An accelerated motion blurred star restoration based on single image. IEEE Sens. J. 2017, 17, 1306–1315. [Google Scholar] [CrossRef]
  34. Ma, L.; Bernelli-Zazzera, F.; Jiang, G.; Wang, X.; Huang, Z.; Qin, S. Region-confined restoration method for motion-blurred star image of the star sensor under dynamic conditions. Appl. Opt. 2016, 55, 4621–4631. [Google Scholar] [CrossRef]
  35. Anderson, E.H.; Fumo, J.P.; Erwin, R.S. Satellite ultraquiet isolation technology experiment (SUITE). In Proceedings of the IEEE Conference on Aerospace, Big Sky, MT, USA, 25 March 2000; pp. 299–313. [Google Scholar]
  36. Rad, A.M.; Nobari, J.H.; Nikkhah, A.A. Optimal attitude and position determination by integration of INS, star tracker, and horizon sensor. IEEE Aerosp. Electron. Syst. Mag. 2014, 29, 20–33. [Google Scholar] [CrossRef]
  37. Moghaddam, M.E.; Jamzad, M. Blur identification in noisy images using radon transform and power spectrum modeling. In Proceedings of the 12th IEEE International Workshop on Systems, Signals and Image Processing, Chalkida, Greece, 22–24 September 2005; pp. 347–352. [Google Scholar]
  38. Aizenberg, I.; Paliy, D.V.; Zurada, J.M. Blur identification by multilayer neural network based on multivalued neurons. IEEE Trans. Neural Netw. 2008, 19, 883–898. [Google Scholar] [CrossRef]
  39. Liu, C.; Hu, L.; Liu, G.; Yang, B.; Li, A. Kinematic model for the space-variant image motion of star sensors under dynamical conditions. Opt. Eng. 2015, 54, 063104. [Google Scholar] [CrossRef]
  40. Yang, H.; Huang, H.; Lai, S. A novel gradient attenuation Richardson–Lucy algorithm for image motion deblurring. Signal Process. 2014, 103, 399–414. [Google Scholar] [CrossRef]
  41. Tao, H.; Lu, X. Smoky vehicle detection based on multi-feature fusion and ensemble neural networks. Multimed. Tools Appl. 2018, 77, 32153–32177. [Google Scholar] [CrossRef]
  42. Breiman, L. Bagging predictors. Mach. Learn. 1996, 24, 123–140. [Google Scholar] [CrossRef]
  43. Wu, X.; Wang, X. Multiple blur of star image and the restoration under dynamic conditions. Acta Astronaut. 2011, 68, 1903–1913. [Google Scholar]
Figure 1. Star image model of the star sensor.
Figure 2. Prediction model of the star spot.
Figure 3. Motion blur star image model. (a) Blurred star image generated by the angular motion; (b) blurred star image generated by the rotation of the star sensor around the X-axis; (c) blurred star image generated by the rotation of the star sensor around the Y-axis; (d) blurred star image generated by the rotation of the star sensor around the Z-axis.
Figure 4. Flow diagram of the improved Richardson-Lucy (RL) algorithm for star image deblurring.
Figure 5. Original sharp star image and its gray distribution. (a) Original sharp star image; (b) gray distribution of star spot.
Figure 6. The relationship between the magnitude of Fourier coefficients (SUMFC) of the point spread function (PSF) and the corresponding optimal number of iterations.
Figure 7. The ensemble back-propagation neural network.
Figure 8. Comparison between the optimal number of iterations and the estimated number of iterations.
Figure 9. The spacecraft trajectory. (a) Three-dimensional trajectory of spacecraft; (b) projection of the spacecraft trajectory on the surface of the Earth.
Figure 10. Star image simulation result. (a) The first frame star image; (b) the 1500th frame star image.
Figure 11. True star image versus predicted star image. (a) The true value of the 1500th frame star image; (b) the 1500th frame star image predicted by the proposed algorithm.
Figure 12. Predicted star position error.
Figure 13. The magnified star image and the gray distribution of the star spot in the case of Gaussian white noise. (a) The magnified original star image; (b) gray value distribution of the star spot in the original star image; (c) the magnified blurred star image ($w_x = w_y = 10°/s$); (d) gray value distribution of the star spot in the blurred star image ($w_x = w_y = 10°/s$); (e) the magnified deblurred star image ($w_x = w_y = 10°/s$); (f) gray value distribution of the star spot in the deblurred star image ($w_x = w_y = 10°/s$).
Figure 14. Star spots observed by a star sensor.
Figure 15. Comparison of running time between the proposed method and the iterative RL method in the case of Gaussian noise.
Figure 16. The magnified star image and the gray level distribution of the star spot in the case of Poisson noise. (a) The magnified original star image; (b) gray level distribution of the star spot in the original star image; (c) the magnified blurred star image ($w_x = w_y = 10°/s$); (d) gray level distribution of the star spot in the blurred star image ($w_x = w_y = 10°/s$); (e) the magnified deblurred star image ($w_x = w_y = 10°/s$); (f) gray level distribution of the star spot in the deblurred star image ($w_x = w_y = 10°/s$).
Figure 17. Comparison of running time between the proposed method and the iterative RL method in the case of Poisson noise.
Table 1. Comparison of the coordinates between the ideal and the predicted star spots in the 1500th star image.

| Star Number | Ideal x | Ideal y | Predicted x′ | Predicted y′ | Δx | Δy |
|---|---|---|---|---|---|---|
| 1 | 24 | 63.50 | 23.25 | 63.62 | 0.75 | −0.12 |
| 2 | 46.5 | 19 | 45.62 | 19.25 | 0.87 | −0.25 |
| 3 | 54.50 | 371.50 | 54.23 | 371.76 | 0.26 | −0.26 |
| 4 | 277 | 336.50 | 276.60 | 336.60 | 0.39 | −0.10 |
| 5 | 290.50 | 305 | 290.11 | 305.27 | 0.38 | −0.27 |
| 6 | 336.50 | 502.71 | 336.50 | 502.71 | 0 | 0 |
| 7 | 386.50 | 340 | 386 | 339.64 | 0.50 | 0.35 |
| 8 | 400.50 | 170 | 400 | 169.78 | 0.50 | 0.21 |
| 9 | 420.71 | 409.50 | 420.50 | 409.00 | 0.21 | 0.50 |
| 10 | 425.72 | 225.88 | 425.40 | 225.60 | 0.32 | 0.28 |
| 11 | 431.64 | 351 | 431.50 | 350.71 | 0.14 | 0.28 |
| 12 | 455 | 130.50 | 454.23 | 130.23 | 0.76 | 0.26 |
| 13 | 486.40 | 131.60 | 485.50 | 131.50 | 0.89 | 0.09 |
Table 2. Comparison of attitude estimation in the case of Gaussian white noise (vary $w_x$). Attitude errors (arc-second) are listed for the blurred star image, the star image restored by our method, and the star image restored by the iterative RL algorithm.

| $w_x$ (deg/s) | Blurred: Pitch | Blurred: Yaw | Blurred: Roll | Our Method: Pitch | Our Method: Yaw | Our Method: Roll | Iterative RL: Pitch | Iterative RL: Yaw | Iterative RL: Roll |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 14.61 | 14.59 | 10.28 | 14.61 | 14.59 | 10.28 | 14.61 | 14.59 | 10.28 |
| 5 | 14.63 | 13.02 | 2.33 | 14.61 | 14.59 | 10.28 | 14.61 | 14.59 | 10.28 |
| 10 | 141.49 | 159.97 | 17.98 | 32.05 | 17.39 | 5.26 | 58.82 | 22.07 | 4.94 |
| 15 | 26.76 | 4.71 | 7.07 | 24.77 | 11.80 | 5.09 | 14.61 | 14.59 | 10.28 |
| 20 | 181.60 | 136.94 | 5.69 | 24.77 | 11.80 | 5.09 | 24.04 | 19.74 | 0.04 |
| 25 | Fail | Fail | Fail | 31.08 | 67.23 | 5.44 | 48.33 | 69.82 | 0.37 |
| 35 | Fail | Fail | Fail | 43.03 | 6.38 | 5.03 | 14.61 | 14.59 | 10.28 |
| 45 | Fail | Fail | Fail | 14.61 | 14.59 | 10.28 | 14.61 | 14.59 | 10.28 |
| 55 | Fail | Fail | Fail | 14.61 | 14.59 | 10.28 | 31.08 | 67.23 | 5.44 |
| 65 | Fail | Fail | Fail | 64.17 | 34.57 | 10.08 | 75.65 | 45.95 | 4.80 |
| 75 | Fail | Fail | Fail | Fail | Fail | Fail | Fail | Fail | Fail |
Table 3. Comparison of attitude estimation in the case of Gaussian white noise (vary w_y). Attitude errors are listed as Pitch / Yaw / Roll in arc-seconds.

| w_y (deg/s) | Blurred Star Image | Restored by Our Method | Restored by Iterative RL Algorithm |
|---|---|---|---|
| 1 | 31.08 / 67.23 / 5.44 | 31.08 / 67.23 / 5.44 | 31.08 / 67.23 / 5.44 |
| 5 | 115.63 / 61.76 / 52.20 | 43.03 / 6.38 / 5.03 | 43.03 / 6.38 / 5.03 |
| 10 | 11.50 / 76.89 / 2.08 | 14.61 / 14.59 / 10.28 | 14.61 / 14.59 / 10.28 |
| 15 | 11.67 / 61.47 / 6.26 | 14.61 / 14.59 / 10.28 | 24.77 / 11.80 / 5.09 |
| 20 | 154.24 / 149.17 / 10.11 | 136.04 / 113.32 / 0.70 | 82.09 / 103.43 / 0.50 |
| 25 | 104.74 / 147.65 / 0.63 | 14.61 / 14.59 / 10.28 | 43.60 / 79.70 / 20.88 |
| 30 | Fail | 220.53 / 211.73 / 8.91 | 56.75 / 92.69 / 9.77 |
| 40 | Fail | 80.44 / 65.31 / 4.72 | 81.32 / 67.23 / 5.44 |
| 50 | Fail | 49.10 / 34.30 / 4.87 | 53.03 / 33.80 / 5.03 |
| 60 | Fail | 131.35 / 108.60 / 30.05 | 134.43 / 105.09 / 30.27 |
| 70 | Fail | 58.07 / 57.61 / 0.29 | 80.44 / 65.31 / 4.72 |
| 75 | Fail | Fail | Fail |
Table 4. Comparison of attitude estimation in the case of Gaussian white noise (vary w_z). Attitude errors are listed as Pitch / Yaw / Roll in arc-seconds.

| w_z (deg/s) | Blurred Star Image | Restored by Our Method | Restored by Iterative RL Algorithm |
|---|---|---|---|
| 1 | 14.61 / 14.59 / 10.28 | 14.61 / 14.59 / 10.28 | 14.61 / 14.59 / 10.28 |
| 5 | 103.24 / 73.51 / 12.20 | 4.38 / 40.73 / 5.28 | 14.61 / 14.59 / 10.28 |
| 10 | 60.43 / 55.49 / 25.06 | 14.61 / 14.59 / 10.28 | 14.61 / 14.59 / 10.28 |
| 15 | 157.89 / 162.40 / 15.28 | 14.61 / 14.59 / 10.28 | 14.61 / 14.59 / 10.28 |
| 20 | 84.80 / 136.06 / 3.85 | 14.61 / 14.59 / 10.28 | 14.61 / 14.59 / 10.28 |
| 25 | Fail | 14.61 / 14.59 / 10.28 | 14.61 / 14.59 / 10.28 |
| 35 | Fail | 14.61 / 14.59 / 10.28 | 34.94 / 19.01 / 19.09 |
| 45 | Fail | 30.03 / 66.27 / 20.87 | 14.61 / 14.59 / 10.28 |
| 55 | Fail | 14.61 / 14.59 / 10.28 | 91.86 / 76.68 / 4.65 |
| 65 | Fail | 75.67 / 96.89 / 0.44 | 112.50 / 111.72 / 16.04 |
| 75 | Fail | 92.57 / 106.52 / 5.68 | 170.85 / 140.59 / 4.22 |
| 80 | Fail | Fail | Fail |
Table 5. Comparison of attitude estimation in the case of Gaussian white noise (vary w_x and w_y). Attitude errors are listed as Pitch / Yaw / Roll in arc-seconds.

| w_x (deg/s) | w_y (deg/s) | Blurred Star Image | Restored by Our Method | Restored by Iterative RL Algorithm |
|---|---|---|---|---|
| 1 | 1 | 31.30 / 28.17 / 16.23 | 14.61 / 14.59 / 10.28 | 14.61 / 14.59 / 10.28 |
| 5 | 5 | 203.95 / 191.12 / 22.83 | 13.14 / 30.54 / 0.11 | 14.61 / 14.59 / 10.28 |
| 10 | 10 | 29.66 / 18.84 / 4.72 | 14.61 / 14.59 / 10.28 | 14.61 / 14.59 / 10.28 |
| 15 | 15 | 335.46 / 369.20 / 31.02 | 14.61 / 14.59 / 10.28 | 14.61 / 14.59 / 10.28 |
| 20 | 20 | Fail | 14.61 / 14.59 / 10.28 | 14.61 / 14.59 / 10.28 |
| 30 | 30 | Fail | 58.82 / 22.07 / 4.94 | 56.75 / 92.69 / 9.77 |
| 40 | 40 | Fail | 14.61 / 14.59 / 10.28 | 4.68 / 39.04 / 15.47 |
| 50 | 50 | Fail | 49.10 / 34.30 / 4.87 | 41.74 / 9.47 / 10.16 |
| 60 | 60 | Fail | 14.61 / 14.59 / 10.28 | 31.08 / 67.23 / 5.44 |
| 70 | 70 | Fail | 126.33 / 125.57 / 0.77 | 120.24 / 97.62 / 0.61 |
| 75 | 75 | Fail | Fail | Fail |
Table 6. Comparison of attitude estimation in the case of Gaussian white noise (vary w_x, w_y and w_z). Attitude errors are listed as Pitch / Yaw / Roll in arc-seconds.

| w_x (deg/s) | w_y (deg/s) | w_z (deg/s) | Blurred Star Image | Restored by Our Method | Restored by Iterative RL Algorithm |
|---|---|---|---|---|---|
| 1 | 1 | 1 | 66.08 / 27.77 / 16.17 | 14.61 / 14.59 / 10.28 | 14.61 / 14.59 / 10.28 |
| 5 | 5 | 5 | 56.02 / 46.97 / 37.18 | 14.61 / 14.59 / 10.28 | 14.61 / 14.59 / 10.28 |
| 10 | 10 | 10 | 101.42 / 84.97 / 12.21 | 58.82 / 22.07 / 4.94 | 31.08 / 67.23 / 5.44 |
| 15 | 15 | 15 | Fail | 5.46 / 41.82 / 20.65 | 14.61 / 14.59 / 10.28 |
| 25 | 25 | 25 | Fail | 83.89 / 119.70 / 9.66 | 42.49 / 49.42 / 25.86 |
| 35 | 35 | 35 | Fail | 80.51 / 43.55 / 20.27 | 176.13 / 2014.12 / 29.58 |
| 45 | 45 | 45 | Fail | 148.94 / 104.19 / 29.98 | 132.07 / 131.26 / 30.66 |
| 55 | 55 | 55 | Fail | Fail | Fail |
Table 7. Comparison of attitude estimation in the case of Poisson noise (vary w_x). Attitude errors are listed as Pitch / Yaw / Roll in arc-seconds.

| w_x (deg/s) | Blurred Star Image | Restored by Our Method | Restored by Iterative RL Algorithm |
|---|---|---|---|
| 1 | 14.61 / 14.59 / 10.28 | 14.61 / 14.59 / 10.28 | 14.61 / 14.59 / 10.28 |
| 20 | 32.73 / 32.90 / 15.31 | 14.61 / 14.59 / 10.28 | 14.61 / 14.59 / 10.28 |
| 35 | 47.34 / 86.91 / 27.38 | 14.61 / 14.59 / 10.28 | 15.05 / 16.10 / 14.61 |
| 40 | Fail | 14.61 / 14.59 / 10.28 | 14.61 / 14.59 / 10.28 |
| 55 | Fail | 108.82 / 137.30 / 5.82 | 65.28 / 94.19 / 26.05 |
| 70 | Fail | 81.34 / 73.45 / 5.61 | 2.64 / 12.00 / 20.58 |
| 85 | Fail | 5.77 / 37.94 / 0.10 | 33.45 / 76.93 / 15.70 |
| 100 | Fail | 14.61 / 14.59 / 10.28 | 24.77 / 11.80 / 5.09 |
| 115 | Fail | 32.61 / 11.32 / 30.77 | 58.32 / 14.25 / 15.18 |
| 130 | Fail | 77.30 / 40.39 / 20.17 | 155.73 / 162.04 / 11.18 |
| 155 | Fail | 43.03 / 6.38 / 5.03 | 115.90 / 122.36 / 10.87 |
| 160 | Fail | Fail | Fail |
Table 8. Comparison of attitude estimation in the case of Poisson noise (vary w_y). Attitude errors are listed as Pitch / Yaw / Roll in arc-seconds.

| w_y (deg/s) | Blurred Star Image | Restored by Our Method | Restored by Iterative RL Algorithm |
|---|---|---|---|
| 1 | 14.61 / 14.59 / 10.28 | 14.61 / 14.49 / 10.28 | 14.61 / 14.59 / 10.28 |
| 15 | 79.44 / 29.91 / 24.85 | 14.61 / 14.49 / 10.28 | 48.33 / 69.82 / 0.37 |
| 30 | 142.86 / 135.25 / 5.44 | 31.08 / 67.23 / 4.87 | 14.61 / 14.59 / 10.28 |
| 35 | Fail | 14.61 / 14.59 / 10.28 | 14.61 / 14.59 / 10.28 |
| 60 | Fail | 29.23 / 21.90 / 20.51 | 89.29 / 132.34 / 16.03 |
| 75 | Fail | 31.08 / 67.23 / 5.44 | 35.21 / 85.90 / 26.01 |
| 90 | Fail | 14.61 / 14.59 / 10.28 | 58.07 / 57.61 / 0.29 |
| 105 | Fail | 134.41 / 104.39 / 24.90 | 154.70 / 146.49 / 6.03 |
| 120 | Fail | 136.04 / 113.32 / 0.70 | 82.09 / 103.43 / 0.50 |
| 135 | Fail | 102.04 / 79.48 / 0.54 | 127.91 / 105.30 / 14.66 |
| 155 | Fail | 119.10 / 74.69 / 0.48 | 137.62 / 93.05 / 14.73 |
| 160 | Fail | Fail | Fail |
Table 9. Comparison of attitude estimation in the case of Poisson noise (vary w_z). Attitude errors are listed as Pitch / Yaw / Roll in arc-seconds.

| w_z (deg/s) | Blurred Star Image | Restored by Our Method | Restored by Iterative RL Algorithm |
|---|---|---|---|
| 1 | 14.61 / 14.59 / 10.28 | 14.61 / 14.59 / 10.28 | 14.61 / 14.59 / 10.28 |
| 15 | 124.07 / 82.00 / 14.15 | 14.61 / 14.59 / 10.28 | 15.01 / 14.94 / 0.09 |
| 30 | 171.49 / 129.07 / 26.77 | 14.61 / 14.59 / 10.28 | 14.61 / 14.59 / 10.28 |
| 35 | Fail | 14.61 / 14.59 / 10.28 | 14.61 / 14.59 / 10.28 |
| 45 | Fail | 14.61 / 14.59 / 10.28 | 4.38 / 40.73 / 5.28 |
| 60 | Fail | 14.61 / 14.59 / 10.28 | 49.10 / 34.30 / 4.87 |
| 75 | Fail | 14.52 / 58.08 / 20.74 | 28.99 / 7.05 / 15.25 |
| 90 | Fail | 101.47 / 129.93 / 4.50 | 115.88 / 64.16 / 9.72 |
| 105 | Fail | 22.90 / 22.86 / 25.87 | 88.02 / 109.29 / 15.91 |
| 120 | Fail | 31.08 / 67.23 / 5.44 | 33.47 / 54.99 / 0.21 |
| 150 | Fail | 15.34 / 15.26 / 0.09 | 14.61 / 14.59 / 10.28 |
| 165 | Fail | 87.48 / 43.19 / 14.98 | 75.17 / 74.57 / 9.85 |
| 170 | Fail | Fail | Fail |
Table 10. Comparison of attitude estimation in the case of Poisson noise (vary w_x and w_y). Attitude errors are listed as Pitch / Yaw / Roll in arc-seconds.

| w_x (deg/s) | w_y (deg/s) | Blurred Star Image | Restored by Our Method | Restored by Iterative RL Algorithm |
|---|---|---|---|---|
| 1 | 1 | 40.61 / 18.59 / 22.13 | 14.61 / 14.59 / 10.28 | 14.61 / 14.59 / 10.28 |
| 15 | 15 | 51.70 / 33.10 / 21.44 | 14.61 / 14.59 / 10.28 | 14.61 / 14.59 / 10.28 |
| 25 | 25 | 372.32 / 260.78 / 13.32 | 14.61 / 14.59 / 10.28 | 49.10 / 34.30 / 4.87 |
| 30 | 30 | Fail | 14.61 / 14.59 / 10.28 | 43.08 / 6.38 / 5.03 |
| 45 | 45 | Fail | 14.71 / 14.46 / 10.27 | 14.71 / 14.46 / 10.27 |
| 60 | 60 | Fail | 43.60 / 79.70 / 20.88 | 43.03 / 6.38 / 5.03 |
| 75 | 75 | Fail | 80.44 / 65.31 / 4.72 | 75.75 / 118.93 / 16.02 |
| 90 | 90 | Fail | 52.75 / 59.83 / 25.98 | 58.82 / 22.07 / 4.94 |
| 105 | 105 | Fail | 58.82 / 22.07 / 4.94 | 60.44 / 25.31 / 4.72 |
| 115 | 115 | Fail | 138.30 / 159.19 / 0.82 | 138.30 / 159.19 / 0.82 |
| 120 | 120 | Fail | Fail | Fail |
Table 11. Comparison of attitude estimation in the case of Poisson noise (vary w_x, w_y and w_z). Attitude errors are listed as Pitch / Yaw / Roll in arc-seconds.

| w_x (deg/s) | w_y (deg/s) | w_z (deg/s) | Blurred Star Image | Restored by Our Method | Restored by Iterative RL Algorithm |
|---|---|---|---|---|---|
| 1 | 1 | 1 | 59.74 / 47.73 / 20.42 | 14.61 / 14.59 / 10.28 | 14.61 / 14.59 / 10.28 |
| 5 | 5 | 5 | 67.70 / 46.17 / 16.68 | 14.61 / 14.59 / 10.28 | 14.61 / 14.59 / 10.28 |
| 10 | 10 | 10 | 88.61 / 125.45 / 16.57 | 14.61 / 14.59 / 10.28 | 14.61 / 14.59 / 10.28 |
| 15 | 15 | 15 | Fail | 14.61 / 14.59 / 10.28 | 80.44 / 65.31 / 4.72 |
| 25 | 25 | 25 | Fail | 17.43 / 10.02 / 5.16 | 40.75 / 11.52 / 15.49 |
| 35 | 35 | 35 | Fail | 53.52 / 24.12 / 25.38 | 53.52 / 24.12 / 25.38 |
| 45 | 45 | 45 | Fail | 93.01 / 128.85 / 36.48 | 150.77 / 171.60 / 0.99 |
| 55 | 55 | 55 | Fail | 87.19 / 79.21 / 0.45 | 97.60 / 67.91 / 30.26 |
| 65 | 65 | 65 | Fail | 42.59 / 42.40 / 0.16 | 47.73 / 83.74 / 10.54 |
| 75 | 75 | 75 | Fail | 42.34 / 71.39 / 41.44 | 42.34 / 71.39 / 41.44 |
| 80 | 80 | 80 | Fail | Fail | Fail |
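Tables 2–11 are easiest to compare by the largest angular rate each processing variant tolerates before attitude determination fails. The short sketch below is only an illustrative, reader-side helper and not part of the published method; it encodes a few rows transcribed from Table 2, marks "Fail" entries as None, and reports the maximum rate at which each column still yields a valid attitude.

```python
# Hedged sketch: summarize the dynamic range implied by Tables 2-11.
# Each row: (angular rate in deg/s, blurred-image errors, errors after the
# proposed restoration, errors after the iterative RL restoration);
# errors are (pitch, yaw, roll) in arc-seconds and None marks "Fail".

from typing import Optional, Sequence, Tuple

Errors = Optional[Tuple[float, float, float]]
Row = Tuple[float, Errors, Errors, Errors]

# A few rows transcribed from Table 2 (Gaussian noise, varying w_x) as sample input.
TABLE2: Sequence[Row] = [
    (1.0,  (14.61, 14.59, 10.28), (14.61, 14.59, 10.28), (14.61, 14.59, 10.28)),
    (20.0, (181.60, 136.94, 5.69), (24.77, 11.80, 5.09), (24.04, 19.74, 0.04)),
    (25.0, None, (31.08, 67.23, 5.44), (48.33, 69.82, 0.37)),
    (65.0, None, (64.17, 34.57, 10.08), (75.65, 45.95, 4.80)),
    (75.0, None, None, None),
]

def max_valid_rate(rows: Sequence[Row], column: int) -> float:
    """Largest angular rate (deg/s) at which the chosen column is not 'Fail' (None)."""
    valid = [rate for rate, *results in rows if results[column] is not None]
    return max(valid) if valid else float("nan")

if __name__ == "__main__":
    for name, col in (("blurred star image", 0),
                      ("restored by our method", 1),
                      ("restored by iterative RL", 2)):
        print(f"{name}: attitude available up to {max_valid_rate(TABLE2, col):g} deg/s")
```

On the sample rows above, the helper reports 20 deg/s for the blurred image and 65 deg/s for both restoration methods, mirroring the failure thresholds visible in Table 2.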
