
On-Orbit Geometric Calibration from the Relative Motion of Stars for Geostationary Cameras

1 Key Laboratory of Intelligent Infrared Perception, Chinese Academy of Sciences, Shanghai 200083, China
2 CAS Key Laboratory of Infrared System Detection and Imaging Technology, Shanghai Institute of Technical Physics, Shanghai 200083, China
3 University of Chinese Academy of Sciences, Beijing 100049, China
4 Hangzhou Institute for Advanced Study, University of Chinese Academy of Sciences, Hangzhou 310024, China
* Author to whom correspondence should be addressed.
Sensors 2021, 21(19), 6668; https://doi.org/10.3390/s21196668
Submission received: 12 September 2021 / Revised: 30 September 2021 / Accepted: 2 October 2021 / Published: 7 October 2021
(This article belongs to the Section Navigation and Positioning)

Abstract:
Affected by vibrations during launch and thermal shocks in the orbital environment, the geometric positioning model of a remote sensing camera measured on the ground will drift, degrading the geometric accuracy of imagery and requiring recalibration. Conventional methods adopt ground control points (GCPs) or stars as references for on-orbit geometric calibration. However, unavoidable cloud coverage and imperfect extraction algorithms make it extremely difficult to collect enough high-precision GCPs to correct the misalignment of the camera, especially for geostationary satellites; in addition, the number of observed stars is often inadequate for calibrating the relative installation of the camera. In view of these problems, we propose a novel on-orbit geometric calibration method using the relative motion of stars for geostationary cameras. First, a geometric calibration model is constructed based on the optical system structure. Then, we analyze the relative motion transformation of the observed stars. The stellar trajectory and the auxiliary ephemeris are used to obtain the corresponding object vectors for correcting the associated calibration parameters iteratively. Experimental results on data from a geostationary experimental satellite demonstrate that the positioning errors corrected by the proposed method can be within ±2.35 pixels. The approach effectively calibrates the camera and improves the positioning accuracy while avoiding the influence of cloud cover and overcoming the strong dependence on the number of observed stars.

1. Introduction

At present, geostationary remote sensing cameras (RSCs) are widely used in earth observation and space surveillance, such as monitoring ocean change, meteorological events and natural disasters [1,2,3]. Geostationary RSCs can continuously monitor and quickly revisit any location within the satellite's field of regard, offering functionality not covered by low earth orbit satellites [4]. Precise orientation parameters of the camera, including the principal point, the focal length and the camera installation matrix with respect to the satellite-body coordinate system, are preset in ground-based laboratories before launch, which helps establish a geometric positioning model for direct georeferencing [5,6,7]. Nevertheless, owing to launch vibration and variations in the spatial thermal environment, the positioning model inevitably changes, ultimately reducing the geometric accuracy [8,9]. Therefore, on-orbit geometric calibration with references such as ground control points (GCPs), coastlines and stars [10,11,12,13,14] is an essential prerequisite for ensuring high-precision satellite imagery [15,16,17,18]. However, for geostationary RSCs, limited by cloud coverage and the accuracy of the corresponding extraction algorithms, it is not easy to ensure the availability of a large number of accurate GCPs through earth observation; additionally, star-based calibration is usually limited by the number of observed stars. Hence, an efficient geometric calibration method free from the weather and the number of stars is urgently needed.
Up to now, a large number of studies have been conducted on on-orbit calibration to improve the positioning accuracy of RSCs. Early published work commonly focused on constructing positioning models from GCPs in calibration test sites [19]. Notably, using 33 GPS-surveyed GCPs in the Denver test site, the in-flight field angle map of the IKONOS camera was calibrated to reduce the interior orientation systematic errors, and the residual errors were within ±1 pixel [20,21,22]. Based on globally distributed test sites, Valorge et al. [23] estimated the line-of-sight (LOS) biases of the Haute Résolution Géométrique carried by the SPOT-5 satellite and corrected the misalignment between the instruments and the attitude and orbit control subsystem (AOCS) reference frame. Similarly, using a viewing geometry model, given ephemeris and attitude data, precise camera geometry and datum transformation, Radhadevi et al. [24] addressed the in-flight calibration of IRS-P6, consisting of alignment calibration of individual sensors and calibration between the sensors, which requires as many GCPs as possible for stability and reliability. Furthermore, using the calibration fields located in Denver and Lunar Lake, the planimetric accuracy of GeoEye reached 3 m (RMS) [25,26,27]. However, relying heavily on high-precision references from the calibration sites and on the number of manually selected GCPs, the methods above usually turn out to be inefficient and unstable.
To reduce the dependence on calibration sites and improve efficiency, automatic GCP extraction methods based on geographic references, including the digital orthophoto map (DOM) and the digital elevation model (DEM), together with the associated feature matching technology, have been proposed to acquire abundant GCPs. Using the correlation method [28] to automatically match Orbita hyperspectral satellite images with the corresponding DOM images of the Hubei area, Jiang et al. [29] carried out the geometric calibration of Zhuhai-1 with 2102 extracted GCPs, and the calibration accuracy was better than 0.5 pixels. To implement the automatic geometric calibration of GF-4, Wang et al. [30] adopted the SIFT algorithm to extract GCPs from the DOM and DEM references of Landsat 8 and ASTER GDEM (GDEM2), and the matching accuracy was declared to be better than 0.3 pixels. Using the GPU-ASIFT automatic GCP extraction algorithm, Dong et al. [31] obtained thousands of GCPs from Landsat 8 images and the AW3D30 DSM to perform the on-orbit geometric calibration of GF-4, and the final calibration accuracy was within 1.19 pixels. To correct geo-referencing errors of KMSS-2 images of the Meteor-M No. 2-2 satellite, Zhukov et al. [32] performed geometric calibration based on a bank of Landsat GCPs; in most cases, the errors after correction did not exceed 60 m. Combining rational polynomial coefficient (RPC) model-based forward and inverse transformation with DEM data extraction, Ye et al. [33] designed and implemented the automatic orthorectification system GF1AMORS, and the experiments showed that the automatic orthorectification process exhibited good accuracy and stability in both mountainous and plain terrain. Seo et al.
[34] presented the direct geo-referencing model of the KOMPSAT-3A AEISS-A, using GCPs/image control points (ICPs) and bundle adjustment to correct the distortion with under 0.5-pixel accuracy, and the image data were then provided to users. Using Gaofen-7 satellite data, Liu et al. [35] constructed a geometric imaging model of the area array footprint camera and proposed a coarse-to-fine "LPM-SIFT + phase correlation" matching strategy for the automatic extraction of calibration control points. Compared with the calibration result obtained using a small number of manually collected control points, the root mean square error (RMSE) of the residuals of the control points improved from half a pixel to 1/3 of a pixel, and the RMSE of the same-orbit checkpoints in the image space improved from 1 pixel to 0.7 pixels. Li et al. [14] proposed an accurate geometric texture-based GCP extraction approach for the thermal infrared remote sensing images of Landsat 8 and GLS 2000, with absolute matching errors in the sample and line directions of 0.50 and 0.47 pixels. Similarly, coastlines can also be considered efficient references for on-orbit calibration [36]. Using GCPs obtained from the global self-consistent hierarchical high-resolution shoreline and coastline template matching, Chen et al. [37] developed an on-orbit installation matrix calibration approach for the navigation of the advanced geostationary radiation imager (AGRI) on FY-4A, with a navigation error of 1.3 pixels. Although the GCP extraction approaches above are conducive to automatic calibration, they obviously depend heavily on the cloud coverage of the images and the distribution of GCPs.
However, due to the unpredictable cloud coverage and certain features changing greatly compared with the reference image, eligible remote sensing images cannot always be obtained in real time, which results in great difficulties for the conventional methods in performing immediate calibration for urgent positioning requirements.
To avoid the GCP restrictions, Delvit et al. [38] proposed an auto-reverse method for the geometric calibration of Pleiades-HR using a pair of images taken on the same orbit in inverse directions. Although this method works well without external references, it is not applicable to satellites lacking such extreme agility.
Additionally, being independent of GCPs, star-based geometric calibration, unaffected by eclipse and interfering daylight, is also a promising and effective approach [39]. Kim et al. [40] proposed a geometric calibration method using stellar sources in an earth observation satellite, which can help monitor the geographic location accuracy of satellite images; numerical simulation verified its effectiveness. Using an ensemble of star field images, Christian et al. [41] proposed a geometric calibration of the Orion optical navigation camera and verified the method through numerical experiments. With stars as the reference points, Fourest et al. performed the geometric calibration of Pleiades-HR [42]. Likewise, Li et al. [4] constructed a rigorous stellar-based geometric positioning model for geostationary cameras and proposed a thermal deformation positioning error correction method with an accuracy within ±1.9 pixels. Processing the star maps from the camera and star sensors for star coordinate acquisition, Guan et al. [43] developed a camera-star sensor installation calibration method for the Luojia 1-01 satellite, which achieved a positioning accuracy better than 800 m. Although the star-based method has advantages over conventional GCP-based methods, it is impracticable when only a few stars, or even a single star, appear in the camera's field of view, because inadequate stars leave too few references for estimating the calibration parameters.
As mentioned above, to avoid the restrictions of GCPs and of the observed stars, in this paper we propose a novel on-orbit geometric calibration method using the relative motion of observed stars for geostationary cameras. Thanks to the relative motion between the observed stars and the camera, the stellar trajectories from consecutive multi-frames are used to calculate abundant object vectors (OVs) for correcting the calibration parameters iteratively, which effectively overcomes the limitation on the number of observed stars. Section 2 elaborates the preprocessing of the stellar trajectory, the proposed geometric calibration method based on relative motion, and the solution of the method. Section 3 presents the experiments and results with on-orbit observation data. Section 4 discusses the findings of the study. Finally, the conclusions are summarized in Section 5.

2. Methodology

2.1. Preprocessing of Stellar Trajectory

For the observed stars, the prediction of stellar trajectories can be performed according to stellar constellations and satellite attitudes. Continuous observations of stars are completed by a two-dimensional pointing mirror. We obtain a series of star images by controlling the optical axis of the camera so that the stars move from left to right in the field of view (FoV).
Firstly, for each star image, the centroid of the star needs to be determined accurately. As shown in Equation (1), the centroids of the star images, generally distributed in multiple pixels, are acquired through the widely used traditional centroid extraction method [44]. The gray value of the pixel is considered to be the weight of the corresponding position for computing the center of the target.
$$
x_0 = \frac{\sum_{x \in W} \sum_{y \in W} x \cdot G(x, y)}{\sum_{x \in W} \sum_{y \in W} G(x, y)}, \qquad
y_0 = \frac{\sum_{x \in W} \sum_{y \in W} y \cdot G(x, y)}{\sum_{x \in W} \sum_{y \in W} G(x, y)}, \tag{1}
$$
where $W$ is the target window, $G(x, y)$ is the gray value of the pixel at $(x, y)$, and $(x_0, y_0)$ is the centroid position of the target.
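As a minimal illustration, the gray-weighted centroid of Equation (1) can be sketched in a few lines of NumPy; the window extraction and background thresholding that precede it are omitted here.

```python
import numpy as np

def star_centroid(window: np.ndarray) -> tuple[float, float]:
    """Gray-weighted centroid of a star spot inside a target window.

    `window` is a 2-D array of gray values G(x, y); the returned
    (x0, y0) is in window-local pixel coordinates, per Equation (1).
    """
    total = window.sum()
    if total == 0:
        raise ValueError("empty window: no signal to centroid")
    # Pixel coordinate grids for the weighted sums in Equation (1).
    ys, xs = np.mgrid[0:window.shape[0], 0:window.shape[1]]
    x0 = (xs * window).sum() / total
    y0 = (ys * window).sum() / total
    return float(x0), float(y0)
```

For a symmetric star spot the centroid lands on the brightest pixel; for an asymmetric one it shifts sub-pixel toward the brighter wing, which is exactly what enables sub-pixel trajectory fitting later on.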
Subsequently, with the relative motion of stars, the star trajectory can be obtained from multiple consecutive images. Theoretically, the trajectory should be a smooth curve. However, affected by the disturbance of the satellite platform, the instability of the pointing mirror, and errors in discrete sampling, the actual trajectory generally presents as a series of irregular scattered points [45], which introduces centroid position errors and affects the subsequent positioning accuracy. Therefore, to improve the accuracy of the star's position, a smoothing spline is adopted to fit the trajectory of the scattered star imaging points. The model can be described as
$$
S(g) = p \sum_i w_i \left( y_i - g(x_i) \right)^2 + (1 - p) \int \left( \frac{d^2 g}{d x^2} \right)^2 dx, \tag{2}
$$
where $p$ is the smoothing parameter defined in $[0, 1]$, $w_i$ is the weight of each point, and $g$ is the smoothing spline chosen to minimize Equation (2).
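As an illustrative sketch, SciPy's `make_smoothing_spline` solves a penalized problem of the same family as Equation (2); note that it takes a roughness weight `lam` rather than the normalized parameter $p \in [0, 1]$ (for identity weights the two minimizers coincide when lam = (1 − p)/p). The trajectory below is synthetic, not the paper's data.

```python
import numpy as np
from scipy.interpolate import make_smoothing_spline

# Jittered track points along a smooth path (stand-in for a stellar trajectory).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 40)
y_true = 0.5 * x + 0.2 * np.sin(x)          # ideal smooth trajectory
y_obs = y_true + rng.normal(scale=0.05, size=x.size)  # platform jitter

# Penalized smoothing spline: small lam follows the data closely,
# large lam enforces smoothness (the second-derivative penalty).
spline = make_smoothing_spline(x, y_obs, lam=1e-3)
y_fit = spline(x)
```

Tuning the smoothing parameter trades fidelity to the scattered points against suppression of the jitter; the paper's fitting errors of about ±0.03 pixels (Section 3.1) correspond to a fairly light smoothing.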

2.2. Geometric Calibration Model

The geometric positioning model of the camera establishes the relationship between the image point in the focal plane and the corresponding object in the geodetic coordinate system. Figure 1 shows the diagram of the geometric positioning model of the geostationary camera. During operation, the satellite continuously adjusts its attitude to make the camera face the earth for observation. Due to the high altitude and the large observable range of the geostationary camera, the star observation could be realized by controlling the camera to point to the deep space with a two-dimensional pointing mirror.
The rigorous geometric positioning model of the camera can be constructed as
$$
\begin{bmatrix} \cos\delta\cos\sigma \\ \cos\delta\sin\sigma \\ \sin\delta \end{bmatrix}
= \lambda \, R_{oce} \, R_{so}(pitch, roll, yaw) \, R_{cs}(\alpha, \beta, \gamma) \, R_{ref}(\varphi)
\begin{bmatrix} dx & 0 & \Delta x \\ 0 & dy & \Delta y \\ 0 & 0 & f \end{bmatrix}
\begin{bmatrix} u - u_0 \\ v_0 - v \\ 1 \end{bmatrix}, \tag{3}
$$
where $(u, v)$ is the pixel coordinate of the image point, $(u_0, v_0)$ is the pixel coordinate of the principal point O, $\Delta x$ and $\Delta y$ are the distortions of the image point in the x and y directions of the image plane coordinate system, $dx$ and $dy$ are the dimensions of a pixel in the x and y directions, respectively, and $f$ is the focal length of the optical system. As shown in Figure 2, $R_{ref}(\varphi)$ denotes the reflection matrix of the pointing mirror, $\varphi$ is the intersection angle between the optical axis and the normal of the pointing mirror, $\alpha$, $\beta$, and $\gamma$ are the three installation angles of the camera relative to the three axes of the satellite body coordinate system, $R_{cs}(\alpha, \beta, \gamma)$ is the corresponding installation matrix, $pitch$, $roll$, and $yaw$ are the pitch, roll and yaw angles of the satellite body coordinate system relative to the orbital coordinate system, $R_{so}(pitch, roll, yaw)$ is the corresponding transformation matrix, $R_{oce}$ is the transformation matrix from the orbital coordinate system to the celestial coordinate system, and $\lambda$ is a scale factor.
In Equation (3), the transformation from the pixel coordinate system to the camera coordinate system and the transformation from the camera coordinate system to the celestial coordinate system can be described as the interior positioning model and exterior positioning model, respectively. Practically, the interior positioning model is often affected by the errors including the detector translation, the lens distortion, and the principal distance deviation, while the exterior positioning model is affected by the orbit measurement error, the attitude measurement error, and the camera installation error. Therefore, it is necessary to consider the influences of various error sources so as to construct a suitable calibration model.
In view of the complex interior error sources and the strong coupling between the interior parameters [6,31], a third-order polynomial is adopted to model the tangent of directional angles of the detector to avoid excessive over-parameterization [30], and the interior calibration model can be expressed as
$$
\begin{bmatrix} \tan\psi_x(u, v) \\ \tan\psi_y(u, v) \\ 1 \end{bmatrix}
= \begin{bmatrix} x_c / f \\ y_c / f \\ 1 \end{bmatrix}
= \frac{1}{f} \begin{bmatrix} x_c \\ y_c \\ f \end{bmatrix}
= \frac{1}{f} \begin{bmatrix} dx & 0 & \Delta x \\ 0 & dy & \Delta y \\ 0 & 0 & f \end{bmatrix}
\begin{bmatrix} u - u_0 \\ v_0 - v \\ 1 \end{bmatrix}, \tag{4}
$$
$$
\begin{bmatrix} \tan\psi_x(u, v) \\ \tan\psi_y(u, v) \end{bmatrix}
= \begin{bmatrix}
a_0 + a_1 u + a_2 v + a_3 u v + a_4 u^2 + a_5 v^2 + a_6 u^2 v + a_7 u v^2 + a_8 u^3 + a_9 v^3 \\
b_0 + b_1 u + b_2 v + b_3 u v + b_4 u^2 + b_5 v^2 + b_6 u^2 v + b_7 u v^2 + b_8 u^3 + b_9 v^3
\end{bmatrix}, \tag{5}
$$
where $\psi_x(u, v)$ and $\psi_y(u, v)$ are the directional angles of the image point $(u, v)$, and $a_0, \ldots, a_9, b_0, \ldots, b_9$ are the interior parameters.
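For concreteness, one row of the third-order polynomial model of Equation (5) can be evaluated as below; the ten coefficients are placeholders to be estimated by the calibration, not real camera values.

```python
import numpy as np

def tan_psi(u: float, v: float, coeffs) -> float:
    """Evaluate one row of the interior model of Equation (5).

    `coeffs` holds the ten interior parameters (a0..a9 or b0..b9);
    returns tan(psi) for the directional angle of pixel (u, v).
    """
    c = np.asarray(coeffs, dtype=float)
    # Monomial basis in the order a0, a1 u, a2 v, a3 uv, a4 u^2, a5 v^2,
    # a6 u^2 v, a7 u v^2, a8 u^3, a9 v^3 from Equation (5).
    terms = np.array([1.0, u, v, u * v, u**2, v**2,
                      u**2 * v, u * v**2, u**3, v**3])
    return float(c @ terms)
```

Because the model is linear in the coefficients, fitting $a_0, \ldots, a_9$ from a set of (pixel, directional-angle) pairs reduces to the same least-squares machinery used in Section 2.3.2.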
Then, to reduce the computational complexity, a generalized installation matrix R i n s of the camera is introduced as
$$
R_{ins} = R_{cs}(\alpha, \beta, \gamma) \, R_{ref}
= \begin{bmatrix} A_1 & A_2 & A_3 \\ B_1 & B_2 & B_3 \\ C_1 & C_2 & C_3 \end{bmatrix}, \tag{6}
$$
where $\alpha$, $\beta$, $\gamma$ are the exterior parameters, and $A_1, A_2, A_3, B_1, B_2, B_3, C_1, C_2, C_3$ are the elements of the generalized installation matrix $R_{ins}$.
Subsequently, based on Equations (3)–(6), the geometric calibration model is constructed as
$$
\begin{bmatrix} \cos\delta\cos\sigma \\ \cos\delta\sin\sigma \\ \sin\delta \end{bmatrix}
= \lambda \, R_{oce} \, R_{so}(pitch, roll, yaw) \, R_{ins}
\begin{bmatrix} \tan\psi_x^c(u, v) \\ \tan\psi_y^c(u, v) \\ 1 \end{bmatrix}, \tag{7}
$$
In the calibration model, both $R_{oce}$ and $R_{so}$ can be calculated from the attitude and ephemeris of the satellite. The exterior parameters $X_E$ describe the synthesis of the reflection matrix and the installation matrix and compensate for the installation angle error and measurement error. Similarly, the interior parameters $X_I$ describe and compensate for the internal distortion of the camera. Using the stellar track points, the exterior and interior parameters can be computed iteratively. Clearly, the accuracy of the exterior and interior parameters determines the accuracy of the calibration model.

2.3. Model Solution Method

2.3.1. Relative Motion Transformation

To determine the calibration parameters, sufficient references, generally geographic data, are usually required to solve the model. In this paper, the celestial coordinates of the stars are used to solve the model parameters. Given the star's coordinates and the ephemeris, the OVs are introduced to represent the incident vector of the star in the satellite body coordinate system as
$$
OV = \begin{bmatrix} T_x \\ T_y \\ T_z \end{bmatrix} = R_{os} \, R_{ceo} \, P_s, \tag{8}
$$
where $P_s = \left[ \cos\delta\cos\sigma \;\; \cos\delta\sin\sigma \;\; \sin\delta \right]^T$ is the orientation vector of the star in the celestial coordinate system.
In the conventional star-based calibration methods [39,42], according to Equation (8), the OVs of the stars in a single frame can be expressed as $OV_i = R_{os} R_{ceo} P_s^i \ (i = 1, 2, \ldots, n)$, where $n$ is the number of stars and $P_s^i$ is the orientation vector of the $i$-th observed star in the celestial coordinate system. Plenty of OVs are the crucial input for solving the positioning model iteratively. However, if there are only a few stars, or even a single star, in one frame, the obtained OVs are insufficient to support the subsequent calculation.
To address this problem, we propose a method to obtain multiple OVs through the relative motion of the stars. As shown in Figure 3, at position Pos1, $p_1$ on the focal plane FP1 is the image point of the star S. In the satellite body coordinate system $O_{s1}X_{s1}Y_{s1}Z_{s1}$, $\overline{LOS_1}^{P_1}$ is the emergent LOS of $p_1$, and $\overline{OV_1}^{P_1}$ is the OV of S. After the satellite reaches position Pos2, $p_2$ on the focal plane FP2 is the image point of S, and $p_1'$ lies at the same focal-plane position as $p_1$ did on FP1. In the satellite body coordinate system $O_{s2}X_{s2}Y_{s2}Z_{s2}$, $\overline{LOS_2}^{P_2}$ is the emergent LOS of $p_2$, $\overline{LOS_1}^{P_2}$ is the emergent LOS of $p_1'$, and $\overline{OV_2}^{P_2}$ is the OV of S. During operation, $O_2x_2y_2$ and $O_{s2}X_{s2}Y_{s2}Z_{s2}$ correspond to $O_1x_1y_1$ and $O_{s1}X_{s1}Y_{s1}Z_{s1}$, respectively. According to the imaging geometry, each OV coincides with its corresponding LOS. Therefore, we can obtain
$$
\overline{OV_1}^{P_1} = \overline{LOS_1}^{P_1}, \qquad \overline{OV_2}^{P_2} = \overline{LOS_2}^{P_2}, \tag{9}
$$
Since the interior and exterior parameters of the imaging system can be regarded as invariant during operation, the emergent LOS of an image point at a fixed position on the focal plane is unchanged, namely $\overline{LOS_1}^{P_1} = \overline{LOS_1}^{P_2}$. Assume a virtual star S′ in the object space such that $\overline{OV'}^{P_2} = \overline{OV_1}^{P_1}$, where $\overline{OV'}^{P_2}$ is the OV of S′ in $O_{s2}X_{s2}Y_{s2}Z_{s2}$. From the relations above, it is easy to prove that $\overline{OV'}^{P_2} = \overline{LOS_1}^{P_2}$, which means that $p_1'$ can be regarded as the image point of S′ according to the imaging geometry principle. In other words, the stellar track points $p_2$ and $p_1'$ can be considered as the images of two different stars, S and S′, taken at the same time.
Then, we need to compute the values of the OVs of $p_1$ and $p_2$. In Equation (8), it can be shown that $R_{ceo}$ is determined by the instantaneous position vector $P(t)$ and velocity vector $V(t)$ of the satellite, and $R_{os}$ also varies continuously owing to the change of the three attitude angles. Thus, the OVs of $p_1$ and $p_2$ can be obtained from the different $R_{ceo}$ and $R_{os}$ calculated at positions Pos1 and Pos2, respectively.
On the basis of the theory above, multiple OVs can be obtained through the relative motion of the star. As shown in Figure 4, during operation the position of the star relative to the satellite changes all the time, while over a short period the interior and exterior parameters of the camera can be considered invariant. Since the camera geometry remains unchanged, the stellar trajectory generated by images taken at different times can be regarded as the image of multiple stars observed at the same time. Therefore, using enough stellar track points, we construct the OVs as $OV_i = R_{os}^i R_{ceo}^i P_s \ (i = 1, 2, \ldots, m)$, where $m$ is the number of track points.
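A schematic of this construction in NumPy is given below. The per-epoch rotation matrices $R_{os}$ and $R_{ceo}$ would in practice be derived from the attitude and ephemeris; here they are plain inputs, so the sketch only shows the bookkeeping of one OV per track point.

```python
import numpy as np

def star_unit_vector(ra_rad: float, dec_rad: float) -> np.ndarray:
    """P_s = [cos d cos a, cos d sin a, sin d]^T in the celestial frame."""
    return np.array([np.cos(dec_rad) * np.cos(ra_rad),
                     np.cos(dec_rad) * np.sin(ra_rad),
                     np.sin(dec_rad)])

def object_vectors(R_os_list, R_ceo_list, p_s: np.ndarray) -> list:
    """One OV per track point: OV_i = R_os(t_i) @ R_ceo(t_i) @ P_s.

    R_os_list / R_ceo_list are per-epoch rotation matrices from the
    attitude and ephemeris (hypothetical inputs in this sketch); a
    single star thus yields as many OVs as there are track points.
    """
    return [R_os @ R_ceo @ p_s for R_os, R_ceo in zip(R_os_list, R_ceo_list)]
```

Because the rotations differ at each epoch, even a single observed star supplies $m$ distinct references, which is the key to overcoming the star-count limitation.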

2.3.2. Model Solving

According to the different calculation order of the calibration parameters, the geometric calibration methods are mainly divided into three types: overall calibration, first exterior calibration and then interior calibration, and first interior calibration and then exterior calibration. To reduce the correlation between the parameters and improve the accuracy of the interior calibration parameters, this paper adopts a stepwise calibration method [30] to calculate the calibration parameters. First, the exterior parameters are estimated. Then, the interior parameters are estimated in the reference camera frame determined by the estimated exterior parameters.
According to the equations above, we transform Equation (7) into
$$
\begin{bmatrix} \tan\psi_x^c(u, v) \\ \tan\psi_y^c(u, v) \\ 1 \end{bmatrix}
= \lambda \begin{bmatrix} A_1 & B_1 & C_1 \\ A_2 & B_2 & C_2 \\ A_3 & B_3 & C_3 \end{bmatrix}
\begin{bmatrix} T_x \\ T_y \\ T_z \end{bmatrix}, \tag{10}
$$
Then, Equation (10) can be rearranged into Equation (11) for the calibration.
$$
S = \frac{A_1 T_x + B_1 T_y + C_1 T_z}{A_3 T_x + B_3 T_y + C_3 T_z} - \tan\psi_x^c(u, v), \qquad
L = \frac{A_2 T_x + B_2 T_y + C_2 T_z}{A_3 T_x + B_3 T_y + C_3 T_z} - \tan\psi_y^c(u, v), \tag{11}
$$
where S and L are the residual expressions in the horizontal direction and vertical direction, respectively.
In terms of exterior calibration, we initialize the interior and exterior parameters with the on-ground calibration parameters and then, for each stellar track point, linearize Equation (11) to construct the error equation (12) as
$$
V_E = A \, \Delta X_E - L_E, \qquad P_E = E, \tag{12}
$$
where $A$ is the coefficient matrix of Equation (12), $\Delta X_E$ is the correction of the exterior parameters, $L_E$ is the error vector calculated with the current interior and exterior parameters, and $P_E$ is the identity weight matrix. Using the stellar track points, $\Delta X_E$ can be estimated by the least-squares method as
$$
\Delta X_E = \left( A^T P_E A \right)^{-1} A^T P_E L_E, \tag{13}
$$
Then, exterior parameters X E could be updated as
$$
X_E = X_E + \Delta X_E, \tag{14}
$$
$X_E$ is updated iteratively until $\|\Delta X_E\| \le \varepsilon$, where $\varepsilon$ is a preset small positive threshold for the exterior calibration.
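A compact sketch of this iterative least-squares update (Equations (12)-(14), with identity weights) is shown below. The Jacobian and residual functions are user-supplied stand-ins for the linearized calibration model, not the paper's actual implementation; the same loop applies to the interior calibration with $B$, $L_I$ and $\xi$.

```python
import numpy as np

def iterate_exterior(x0, jacobian, residual, eps=1e-8, max_iter=50):
    """Gauss-Newton loop of Equations (12)-(14) with identity weights.

    `jacobian(x)` returns the coefficient matrix A and `residual(x)`
    the error vector L_E for the current parameters x; both are
    placeholder models standing in for the linearized Equation (11).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        A = jacobian(x)
        L = residual(x)
        # Normal equations: dx = (A^T A)^-1 A^T L, per Eq. (13) with P_E = E.
        dx = np.linalg.solve(A.T @ A, A.T @ L)
        x = x + dx                      # Eq. (14)
        if np.linalg.norm(dx) <= eps:   # convergence threshold epsilon
            break
    return x
```

For a purely linear residual the loop converges in a single step to the ordinary least-squares solution, which is a convenient way to sanity-check an implementation before plugging in the full nonlinear model.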
For the interior calibration, the modified $X_E$ above is inserted into Equation (11), and, for each stellar track point, Equation (11) is linearized to construct the error equation (15) as
$$
V_I = B \, \Delta X_I - L_I, \qquad P_I = E, \tag{15}
$$
where $B$ is the coefficient matrix of Equation (15), $\Delta X_I$ is the correction of the interior parameters, $L_I$ is the error vector calculated with the current interior and exterior parameters, and $P_I$ is the identity weight matrix. Then, we can obtain the correction of the interior parameters by the least-squares method as
$$
\Delta X_I = \left( B^T P_I B \right)^{-1} B^T P_I L_I, \tag{16}
$$
Interior parameters X I could be updated as
$$
X_I = X_I + \Delta X_I, \tag{17}
$$
$X_I$ is updated iteratively until $\|\Delta X_I\| \le \xi$, where $\xi$ is a preset small positive threshold for the interior calibration.

2.3.3. Representation of Error

On the basis of the geometric calibration model and its solution above, we have obtained the calibration parameters $X_E$ and $X_I$. To evaluate the calibration accuracy, the calibration model with the computed parameters inserted is used to compute the celestial coordinates of the star corresponding to each image point, which are then compared with the theoretical coordinates derived from the star catalog.
The absolute positioning errors of the right ascension and declination directions are respectively defined as
$$
RA_{error} = \sigma' - \sigma, \qquad DE_{error} = \delta' - \delta, \tag{18}
$$
where $\sigma'$ and $\delta'$ are the practical right ascension and declination of the star computed by the calibration model, $\sigma$ and $\delta$ are the right ascension and declination of the corresponding star determined from the Wide-field Infrared Survey Explorer catalog according to the observation plan, and $RA_{error}$ and $DE_{error}$ are the absolute positioning errors in the right ascension and declination directions, respectively. According to Equation (18), the unit of $RA_{error}$ and $DE_{error}$ is degrees or arcseconds, while positioning error is usually expressed in pixels or meters. To express the positioning error more intuitively, based on the angle of view and the size of the detector, the units of $RA_{error}$ and $DE_{error}$ are transformed from degrees to pixels as:
$$
Error_{pixel} = Error_{deg} \,/\, A \,/\, S, \tag{19}
$$
where $Error_{deg}$ and $Error_{pixel}$ are the positioning errors in degrees and pixels, respectively, $A$ is the angle of view of a single detector response element with respect to the optical system, and $S$ is the size of the detector, which is 1024 pixels for the detector used in the experiments of this paper.

3. Experiment and Results

Following the theory above, we evaluated the proposed method on real short-wave infrared star observation data from the staring camera of a geostationary experimental satellite. Detailed information on the satellite is shown in Table 1. We collected the observations from August 2 to August 21 as the data sets; the specific experimental results are shown below.

3.1. Trajectory Fitting Results

Among the many stellar trajectories collected, the trajectory of the star FYID1070664 observed on 2 August is taken as an example to illustrate the experimental results of the curve fitting.
The fitting results and the distribution of fitting errors are shown in Figure 5a,b, respectively. As shown in Figure 5a, the fitting result is a smooth curve, and most of the track points are centered on the curve. Figure 5b shows that the fitting errors are generally within ±0.03 pixels. Moreover, Table 2 reveals that the sum of squared errors (SSE) is close to 0 and the coefficient of determination (R-square) is close to 1, which illustrates the effectiveness of the fitting model. Thus, the deviation of the collected stellar trajectory from the ideal trajectory, caused by satellite platform jitter, the pointing error of the pointing mirror and the sampling error, can be reduced by the curve fitting. The fitted curve describes the positions of the stellar track points more accurately, thereby improving the subsequent calibration accuracy.

3.2. Results of Positioning Errors

In the process of calibration, the more track points there are, the more accurate the estimated calibration parameters will be. To improve the performance of the proposed method, stellar trajectories with more than 25 track points were selected for the experiments. For each trajectory, five track points were randomly selected as test data for the positioning errors, and the remaining points were used to calculate the parameters.
Due to the great fluctuation in the thermal environment around the geostationary satellite, the resulting space thermal deformation significantly changes the installation angles of the remote sensing camera and ultimately affects the positioning accuracy. Li et al. [4] found that the positioning error caused by the space thermal deformation is periodic, with a period of approximately one day. Therefore, to ensure the stability of the experiment, the data collected at a particular time (from 11:25 to 11:45 in this paper) on 20 consecutive days were picked for the experiments.
According to the calibration model, we calculated the positioning errors determined by the initial parameters calibrated in the laboratory and by the calibration parameters obtained through the proposed method, respectively. As shown in Table 3, compared to the initial right ascension positioning accuracy of −6.795 pixels and the initial declination positioning accuracy of −21.004 pixels, the geometric positioning errors after calibration are greatly reduced. The positioning accuracy after correction is approximately ±0.85 pixels, and thus we subsequently experiment with more data to further verify the positioning accuracy of the calibration model.
The 437 stellar trajectories of the whole 20-day data set were used to verify the accuracy of the model. As shown in Figure 6, the positioning errors are concentrated in the middle and scattered around, suggesting a normal distribution. To explore this property, we analyzed the probability distribution of the positioning errors and fitted it with a normal distribution function. The results are shown in Figure 7 and Table 4: the positioning errors in both the right ascension and declination directions stably obey a normal distribution. $RA_{error}$ falls within ±2.24 pixels with a probability of 95.44% ($2\sigma$), and $DE_{error}$ falls within ±2.35 pixels with a probability of 95.44% ($2\sigma$). The experimental results on real on-orbit satellite data demonstrate that the proposed approach corrects the periodic positioning errors caused by space thermal deformation.
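As a sanity check of the 2σ reasoning above, the following sketch fits a normal distribution to positioning errors with `scipy.stats.norm.fit`; the errors here are synthetic (the real 437-trajectory data are not reproduced), so the scale of 1.12 pixels is an illustrative assumption chosen to give a 2σ bound near ±2.24 pixels.

```python
import numpy as np
from scipy import stats

# Synthetic positioning errors standing in for the real measurements.
rng = np.random.default_rng(1)
errors = rng.normal(loc=0.0, scale=1.12, size=2000)  # pixels

# Maximum-likelihood fit of a normal distribution, as in Figure 7 / Table 4.
mu, sigma = stats.norm.fit(errors)

# Fraction of samples inside mu +/- 2 sigma; for a true normal
# distribution this approaches 95.44%.
within_2sigma = np.mean(np.abs(errors - mu) <= 2 * sigma)
```

The same fit applied to the real error samples would yield the ±2.24-pixel ($RA_{error}$) and ±2.35-pixel ($DE_{error}$) bounds quoted in the text.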

4. Discussion

By analyzing the experimental results of the data collected at a specific time on 20 consecutive days, the mean absolute positioning error after calibration was reduced from 21.004 pixels to 0.85 pixels. Subsequently, 437 stellar trajectories were used to verify the accuracy of the method. The results show that the positioning errors in the right ascension and declination directions corrected by the proposed method stably obey a normal distribution and fall within ±2.35 pixels. The distribution of positioning errors also explains why the minimum absolute error in Table 3 is about 0.1 pixels while the maximum is about 3.0 pixels: because the track points used for calibration were selected randomly, the parameter estimates vary with the selected points, which in turn affects the positioning error. Most absolute positioning errors fall within 2.35 pixels; occasionally they reach about 3 pixels.
Furthermore, to analyze the influence of the number of track points on the calibration accuracy, experiments with 10 to 25 calibrating track points were carried out. As shown in Figure 8, the positioning error decreases as the number of track points increases. Once more than 20 calibrating track points are available, the reduction slows and the error stabilizes. This behavior follows from the least-squares estimation of the parameters: when few track points are available, the accuracy of the algorithm depends heavily on the sample size, so adding track points noticeably improves the parameter estimates and thereby the calibration accuracy. Beyond a certain number, the accuracy of the algorithm levels off. Therefore, reducing the number of track points degrades the positioning accuracy.
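The trend in Figure 8 reflects a general property of least-squares estimation: parameter variance shrinks as the number of observations grows, with diminishing returns once the sample is large. A small illustration on a synthetic two-parameter linear model (the model and the numbers are hypothetical stand-ins, not the paper's calibration model):

```python
import numpy as np

rng = np.random.default_rng(42)
true_params = np.array([0.5, -1.2])  # hypothetical calibration parameters

def estimation_error(n_points: int) -> float:
    """Least-squares fit from n noisy observations; return parameter error norm."""
    x = rng.uniform(-1.0, 1.0, size=n_points)
    A = np.column_stack([x, np.ones_like(x)])  # design matrix of the linear model
    y = A @ true_params + rng.normal(scale=0.05, size=n_points)
    est, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.linalg.norm(est - true_params))

# Average over repeated trials so the trend is visible despite the noise:
# the mean error shrinks roughly like 1/sqrt(n) and flattens for larger n.
for n in (10, 15, 20, 25):
    print(n, round(float(np.mean([estimation_error(n) for _ in range(200)])), 4))
```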
It should be noted that, given the characteristics of the normal distribution, the positioning accuracy of the model is also affected by other random errors. Random measurement errors, such as orbit and attitude measurement errors, may be among the key factors that determine the calibration accuracy of this method. In addition, the accuracy of star centroid extraction and of trajectory fitting also directly affects the positioning accuracy of the model; both merit further study.

5. Conclusions

For a geostationary remote sensing camera, the orbital heat flux and the shock and vibration during launch inevitably change the installation structures between the camera and the satellite platform, ultimately reducing the positioning accuracy, so recalibration is required. Nevertheless, traditional on-orbit calibration methods based on GCPs or stars are often hindered by cloud coverage or limited by the number of observed stars. To address these problems, this paper presents a novel on-orbit geometric calibration method based on the relative motion of stars for geostationary cameras. A geometric calibration model is first constructed from the optical system structure, and the relative motion transformation of the observed stars is then analyzed. On this basis, we adopt the stellar trajectory and the auxiliary ephemeris to obtain sufficient input object vectors (OVs) for estimating the calibration parameters iteratively.
The proposed method is verified with on-orbit measurement data. Experimental results demonstrate that the positioning model is well calibrated by the proposed approach and that the geometric accuracy of the remote sensing images is significantly improved. The calibration accuracy improves as the number of track points increases. Although this method is proposed for geostationary cameras, it is likely to be suitable for other remote sensing cameras (RSCs) because of the similar spatial relative motion relationship between the satellite and the target stars.

Author Contributions

Conceptualization, L.J. and X.L.; methodology, L.J.; software, L.Y. (Lan Yang); validation, X.L., L.Y. (Lin Yang) and Z.H.; formal analysis, L.L.; investigation, X.L.; resources, L.L.; data curation, L.Y. (Lan Yang); writing—original draft preparation, L.J.; writing—review and editing, L.J.; visualization, L.Y. (Lin Yang); supervision, F.C.; project administration, Z.H.; funding acquisition, F.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the CASEarth Minisatellite Thermal Infrared Spectrometer Project of the Shanghai Institute of Technical Physics, Chinese Academy of Sciences, grant number XDA19010102.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing not applicable.

Acknowledgments

The authors are grateful to the staff at CAS Key Laboratory of Infrared System Detection and Imaging Technology, who gave their time and expertise. The authors gratefully acknowledge funding from the CASEarth Minisatellite Thermal Infrared Spectrometer Project of the Shanghai Institute of Technical Physics, Chinese Academy of Sciences. Finally, the authors would like to thank the editor and anonymous reviewers for their thoughtful comments and constructive suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Asaoka, S.; Nakada, S.; Umehara, A.; Ishizaka, J.; Nishijima, W. Estimation of spatial distribution of coastal ocean primary production in Hiroshima Bay, Japan, with a geostationary ocean color satellite. Estuar. Coast. Shelf Sci. 2020, 244, 106897. [Google Scholar] [CrossRef]
  2. Bhatt, R.; Doelling, D.R.; Haney, C.; Spangenberg, D.A.; Scarino, B.; Gopalan, A. Clouds and the Earth’s Radiant Energy System strategy for intercalibrating the new-generation geostationary visible imagers. J. Appl. Remote Sens. 2020, 14, 032410. [Google Scholar] [CrossRef]
  3. Miura, T.; Nagai, S. Landslide Detection with Himawari-8 Geostationary Satellite Data: A Case Study of a Torrential Rain Event in Kyushu, Japan. Remote Sens. 2020, 12, 1734. [Google Scholar] [CrossRef]
  4. Li, X.; Yang, L.; Su, X.; Hu, Z.; Chen, F. A Correction Method for Thermal Deformation Positioning Error of Geostationary Optical Payloads. IEEE Trans. Geosci. Remote Sens. 2019, 57, 7986–7994. [Google Scholar] [CrossRef]
  5. Cao, J.; Yuan, X.; Gong, J. In-Orbit Geometric Calibration and Validation of ZY-3 Three-Line Cameras Based on CCD-Detector Look Angles. Photogramm. Rec. 2015, 30, 211–226. [Google Scholar] [CrossRef]
  6. Wang, M.; Cheng, Y.; Tian, Y.; He, L.; Wang, Y. A New On-Orbit Geometric Self-Calibration Approach for the High-Resolution Geostationary Optical Satellite GaoFen4. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1670–1683. [Google Scholar] [CrossRef]
  7. Liu, T.; Burner, A.W.; Jones, T.W.; Barrows, D.A. Photogrammetric techniques for aerospace applications. Prog. Aerosp. Sci. 2012, 54, 1–58. [Google Scholar] [CrossRef]
  8. Jacobsen, K. Issues and methods for in-flight and on-orbit calibration. In Post-Launch Calibration of Satellite Sensors: Proceedings of the International Workshop on Radiometric and Geometric Calibration, December 2003, Mississippi, USA; ISPRS Book Series 2; CRC Press: Boca Raton, FL, USA, 2004; pp. 83–92. [Google Scholar]
  9. Zhou, G.; Jiang, L.; Liu, N.; Li, C.; Li, M.; Sun, Y.; Yue, T. Advances and perspectives of on-orbit geometric calibration for high-resolution optical satellites. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015; pp. 2123–2126. [Google Scholar]
  10. Mulawa, D. On-orbit geometric calibration of the OrbView-3 high resolution imaging satellite. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2004, 35, 1–6. [Google Scholar]
  11. De Lussy, F.; Greslou, D.; Dechoz, C.; Amberg, V.; Delvit, J.M.; Lebegue, L.; Blanchet, G.; Fourest, S. Pleiades HR in Flight Geometrical Calibration: Location and Mapping of the Focal Plane. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, 39, 519–523. [Google Scholar] [CrossRef] [Green Version]
  12. Greslou, D.; de Lussy, F.; Delvit, J.M.; Dechoz, C.; Amberg, V. Pleiades-Hr Innovative Techniques for Geometric Image Quality Commissioning. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, 39, 543–547. [Google Scholar] [CrossRef] [Green Version]
  13. Wang, Y.; Wang, M.; Zhu, Y. On-Orbit Calibration of Installation Parameter of Multiple Star Sensors System for Optical Remote Sensing Satellite with Ground Control Points. Remote Sens. 2020, 12, 1055. [Google Scholar] [CrossRef] [Green Version]
  14. Li, X.; Hu, Z.; Jiang, L.; Yang, L.; Chen, F. GCPs Extraction With Geometric Texture Pattern for Thermal Infrared Remote Sensing Images. IEEE Geosci. Remote Sens. Lett. 2020, 1–5. [Google Scholar] [CrossRef]
  15. Wang, T.; Zhang, Y.; Zhang, Y.; Zhang, Z.; Xiao, X.; Yu, Y.; Wang, L. A Spliced Satellite Optical Camera Geometric Calibration Method Based on Inter-Chip Geometry Constraints. Remote Sens. 2021, 13, 2832. [Google Scholar] [CrossRef]
  16. Li, J.; Liu, Z. Camera Geometric Calibration Using Dynamic Single-Pixel Illumination with Deep Learning Networks. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 2550–2558. [Google Scholar] [CrossRef]
  17. Radhadevi, P.; Müller, R.; D’Angelo, P.; Reinartz, P. In-flight Geometric Calibration and Orientation of ALOS/PRISM Imagery with a Generic Sensor Model. Photogramm. Eng. Remote Sens. 2011, 77, 531–538. [Google Scholar] [CrossRef]
  18. Yang, B.; Pi, Y.; Li, X.; Yang, Y. Integrated geometric self-calibration of stereo cameras onboard the ZiYuan-3 satellite. ISPRS J. Photogramm. Remote Sens. 2020, 162, 173–183. [Google Scholar] [CrossRef]
  19. Tang, Y.; Wei, Z.; Wei, X.; Li, J.; Wang, G. On-Orbit Calibration of Installation Matrix between Remote Sensing Camera and Star Camera Based on Vector Angle Invariance. Sensors 2020, 20, 5667. [Google Scholar] [CrossRef] [PubMed]
  20. Grodecki, J.; Dial, G. IKONOS Geometric Accuracy Validation; International archives of photogrammetry remote sensing and spatial information sciences; Leibniz University Hannover Institute of Photogrammetry and GeoInformation: Hannover, Germany, 2012; p. 34. [Google Scholar]
  21. Helder, D.; Coan, M.; Patrick, K.; Gaska, P. IKONOS geometric characterization. Remote Sens. Environ. 2003, 88, 69–79. [Google Scholar] [CrossRef]
  22. Dial, G.; Bowen, H.; Gerlach, F.; Grodecki, J.; Oleszczuk, R. IKONOS satellite, imagery, and products. Remote Sens. Environ. 2003, 88, 23–36. [Google Scholar] [CrossRef]
  23. Valorge, C.; Meygret, A.; Lebègue, L. 40 years of experience with SPOT in-flight Calibration. In Post-Launch Calibration of Satellite Sensors; CRC Press: Boca Raton, FL, USA, 2004. [Google Scholar]
  24. Radhadevi, P.V.; Solanki, S.S. In-flight geometric calibration of different cameras of IRS-P6 using a physical sensor model. Photogramm. Rec. 2008, 23, 69–89. [Google Scholar] [CrossRef]
  25. Crespi, M.; Colosimo, G.; De Vendictis, L.; Fratarcangeli, F.; Pieralice, F. GeoEye-1: Analysis of Radiometric and Geometric Capability. In International Conference on Personal Satellite Services; Springer: Berlin/Heidelberg, Germany, 2010; pp. 354–369. [Google Scholar]
  26. Aguilar, M.A.; del Mar Saldana, M.; Aguilar, F.J. Assessing geometric accuracy of the orthorectification process from GeoEye-1 and WorldView-2 panchromatic images. Int. J. Appl. Earth Obs. Geoinf. 2013, 21, 427–435. [Google Scholar] [CrossRef]
  27. Aguilar, M.A.; Aguilar, F.J.; Saldaña, M.D.M.; Fernández, I. Geopositioning Accuracy Assessment of GeoEye-1 Panchromatic and Multispectral Imagery. Photogramm. Eng. Remote Sens. 2012, 78, 247–257. [Google Scholar] [CrossRef]
  28. Leprince, S.; Barbot, S.; Ayoub, F.; Avouac, J.-P. Automatic and Precise Orthorectification, Coregistration, and Subpixel Correlation of Satellite Images, Application to Ground Deformation Measurements. IEEE Trans. Geosci. Remote Sens. 2007, 45, 1529–1558. [Google Scholar] [CrossRef] [Green Version]
  29. Jiang, Y.; Wang, J.; Zhang, L.; Zhang, G.; Li, X.; Wu, J. Geometric Processing and Accuracy Verification of Zhuhai-1 Hyperspectral Satellites. Remote Sens. 2019, 11, 996. [Google Scholar] [CrossRef] [Green Version]
  30. Wang, M.; Cheng, Y.; Chang, X.; Jin, S.; Zhu, Y. On-orbit geometric calibration and geometric quality assessment for the high-resolution geostationary optical satellite GaoFen4. ISPRS J. Photogramm. Remote Sens. 2017, 125, 63–77. [Google Scholar] [CrossRef]
  31. Dong, Y.; Fan, D.; Ma, Q.; Ji, S.; Ouyang, H. Automatic on-orbit geometric calibration framework for geostationary optical satellite imagery using open access data. Int. J. Remote Sens. 2019, 40, 6154–6184. [Google Scholar] [CrossRef]
  32. Zhukov, B.S.; Kondratieva, T.V.; Polyanskiy, I.V. Correction of automatic image georeferencing for the KMSS-2 multispectral satellite imaging system on board Meteor-M No. 2-2 satellite. Sovrem. Probl. Distantsionnogo Zondirovaniya Zemli Kosm. 2021, 18, 75–81. [Google Scholar] [CrossRef]
  33. Ye, S.; Zhang, C.; Wang, Y.; Liu, D.; Du, Z.; Zhu, D. Design and implementation of automatic orthorectification system based on GF-1 big data. Trans. Chin. Soc. Agric. Eng. 2017, 33 (Suppl. 1), 266–273. [Google Scholar]
  34. Seo, D.; Hong, G.; Jin, C. KOMPSAT-3A direct georeferencing mode and geometric Calibration/Validation. In Proceedings of the 36th Asian Conference on Remote Sensing: Fostering Resilient Growth in Asia, ACRS 2015, Quezon City, Philippines, 24–28 October 2015. [Google Scholar]
  35. Liu, L.; Xie, J.; Tang, X.; Ren, C.; Chen, J.; Liu, R. Coarse-to-Fine Image Matching-Based Footprint Camera Calibration of the GF-7 Satellite. Sensors 2021, 21, 2297. [Google Scholar] [CrossRef] [PubMed]
  36. Takenaka, H.; Sakashita, T.; Higuchi, A.; Nakajima, T. Geolocation Correction for Geostationary Satellite Observations by a Phase-Only Correlation Method Using a Visible Channel. Remote Sens. 2020, 12, 2472. [Google Scholar] [CrossRef]
  37. Chen, B.; Li, X.; Zhang, G.; Guo, Q.; Wu, Y.; Wang, B.; Chen, F. On-orbit installation matrix calibration and its application on AGRI of FY-4A. J. Appl. Remote Sens. 2020, 14, 024507. [Google Scholar] [CrossRef]
  38. Delvit, J.M.; Greslou, D.; Amberg, V.; Dechoz, C.; Delussy, F.; Lebegue, L.; Latry, C.; Artigues, S.; Bernard, L. Attitude assessment using Pleiades-HR capabilities. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, 39, 525–530. [Google Scholar] [CrossRef] [Green Version]
  39. Lee, S.; Shin, D. On-Orbit Camera Misalignment Estimation Framework and Its Application to Earth Observation Satellite. Remote Sens. 2015, 7, 3320–3346. [Google Scholar] [CrossRef] [Green Version]
  40. Kim, H.; Kim, E.; Chung, D.-W.; Seo, O.-C. Geometric Calibration Using Stellar Sources in Earth Observation Satellite. Trans. Jpn. Soc. Aeronaut. Space Sci. 2009, 52, 104–107. [Google Scholar] [CrossRef]
  41. Christian, J.A.; Benhacine, L.; Hikes, J.; D’Souza, C. Geometric Calibration of the Orion Optical Navigation Camera using Star Field Images. J. Astronaut. Sci. 2016, 63, 335–353. [Google Scholar] [CrossRef]
  42. Fourest, S.; Kubik, P.; Lebègue, L.; Déchoz, C.; Lacherade, S.; Blanchet, G. Star-Based Methods for Pleiades HR Commissioning. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, 39, 513–518. [Google Scholar] [CrossRef] [Green Version]
  43. Guan, Z.; Jiang, Y.; Wang, J.; Zhang, G. Star-Based Calibration of the Installation Between the Camera and Star Sensor of the Luojia 1-01 Satellite. Remote Sens. 2019, 11, 2081. [Google Scholar] [CrossRef] [Green Version]
  44. Wang, Z.J.; Ma, K. Study on the precision of feature point centroid extraction based on imaging angle. Laser J. 2018, 39, 90–99. [Google Scholar]
  45. Liu, Y.; Yang, L.; Chen, F.-S. Multispectral registration method based on stellar trajectory fitting. Opt. Quantum Electron. 2018, 50, 189. [Google Scholar] [CrossRef]
Figure 1. Diagram of the geometric positioning model of the geostationary RSC. Op-u-v, O-x-y, Oc-XcYcZc, Os-XsYsZs, Oo-XoYoZo and Oce-XceYceZce are the pixel coordinate system, the image coordinate system, the camera coordinate system, the satellite body coordinate system, the orbital coordinate system, and the celestial coordinate system, respectively. The x-axis and y-axis are parallel to the u-axis and v-axis, respectively. The Xc-axis and Yc-axis are parallel to the x-axis and y-axis, respectively. Os and Oo are located at the centroid of the satellite. Xs points in the flight direction of the satellite, Ys is along the horizontal axis of the satellite, and Zs is determined by the right-hand rule. Xo points in the direction of satellite motion, Zo points to the center of the earth, and Yo is determined by the right-hand rule. Xce points to the vernal equinox γ0, Yce is perpendicular to Xce in the equatorial plane, and Zce is perpendicular to the equatorial plane and points to the celestial pole.
Figure 2. Schematic of the imaging system of the camera. N is the normal vector, and φ is the intersection angle between the optical axis and the normal of the pointing mirror. The exit vector shown corresponds to the principal point of the camera.
Figure 3. The schematic diagram of relative transformation.
Figure 4. Schematic diagram of the geometric transformation of the trajectory. (a) The stellar trajectory obtained by snapping the star S at the positions P1, P2, P. (b) The image of the stars S1, S2, S3 taken at the position P.
Figure 5. Curve fitting of stellar trajectory. (a) The fitting results of smoothing spline method. (b) The distribution of the fitting errors.
Figure 6. Distribution of the positioning error.
Figure 7. (a) Probability distribution of RAerror. (b) Probability distribution of DEerror.
Figure 8. Influence analysis for the number of track points on the positioning accuracy.
Table 1. Parameters of the experiment satellite.
Items | Detailed Parameters
Orbit altitude | 36,000 km
Focal length | 1250 mm (short-wave infrared)
Array sensor information | 1024 × 1024 HgCdTe
Pixel size | 25 μm (short-wave infrared)
Accuracy of attitude measurements | 1 × 10−4 °/s
Table 2. Analysis of fitting accuracy.
Item | SSE | R-Square | RMSE
Orbit altitude | 0.003856 | 0.9776 | 0.01728
SSE is the sum of squared errors, R-Square is the coefficient of determination of the model, and RMSE is the root-mean-squared error. The closer SSE is to 0 and R-Square is to 1, the better the fit.
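The three statistics in the footnote can be computed as follows. This is a generic sketch; RMSE here is taken as the root of the mean squared residual, one common convention (some tools divide by the residual degrees of freedom instead):

```python
import numpy as np

def fit_metrics(y_true: np.ndarray, y_fit: np.ndarray):
    """Return (SSE, R-Square, RMSE) for a curve fit."""
    residuals = y_true - y_fit
    sse = float(np.sum(residuals ** 2))                  # sum of squared errors
    sst = float(np.sum((y_true - y_true.mean()) ** 2))   # total sum of squares
    r_square = 1.0 - sse / sst                           # coefficient of determination
    rmse = float(np.sqrt(np.mean(residuals ** 2)))       # root-mean-squared error
    return sse, r_square, rmse
```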
Table 3. Positioning errors before and after calibration.
Date | Initial RAerror/Pixel | Initial DEerror/Pixel | Calibrated RAerror/Pixel | Absolute Error/Pixel | Calibrated DEerror/Pixel | Absolute Error/Pixel
2nd August | −10.384 | −20.247 | −0.616 | 0.616 | −0.080 | 0.08
3rd August | −5.081 | −20.430 | −0.086 | 0.086 | 0.099 | 0.099
4th August | −6.402 | −20.508 | −0.484 | 0.484 | 2.892 | 2.892
5th August | −5.181 | −20.624 | −0.464 | 0.464 | 1.233 | 1.233
6th August | −5.336 | −20.563 | −1.210 | 1.21 | −0.233 | 0.233
7th August | −5.877 | −20.557 | −0.017 | 0.017 | −0.127 | 0.127
8th August | −7.975 | −20.208 | −0.110 | 0.11 | −0.008 | 0.008
9th August | −4.686 | −20.414 | −0.063 | 0.063 | 0.027 | 0.027
10th August | −5.713 | −21.664 | 1.642 | 1.642 | −0.736 | 0.736
11th August | −8.704 | −20.375 | −0.568 | 0.568 | 0.468 | 0.468
12th August | −4.839 | −21.312 | −0.100 | 0.1 | −0.446 | 0.446
13th August | −5.371 | −21.283 | −0.271 | 0.271 | −0.437 | 0.437
14th August | −8.423 | −20.874 | −1.571 | 1.571 | 0.025 | 0.025
15th August | −8.366 | −21.446 | −2.038 | 2.038 | 0.984 | 0.984
16th August | −5.023 | −22.626 | 1.076 | 1.076 | 2.793 | 2.793
17th August | −7.060 | −21.679 | −0.280 | 0.28 | −1.177 | 1.177
18th August | −8.879 | −20.206 | 3.712 | 3.712 | 1.505 | 1.505
19th August | −7.201 | −22.269 | 0.511 | 0.511 | 1.013 | 1.013
20th August | −7.743 | −21.209 | −0.888 | 0.888 | 1.864 | 1.864
21st August | −7.658 | −22.182 | −1.128 | 1.128 | 0.921 | 0.921
Mean | −6.795 | −21.004 | −0.148 | 0.84175 | 0.529 | 0.8534
Table 4. Analysis of the probability distribution for the positioning error.
Error | CL | mu | muci | sigma | sigmaci
RAerror | 95% | −0.0611 | (−0.1661, 0.0440) | 1.1187 | (1.0492, 1.1982)
DEerror | 95% | 0.0234 | (−0.0870, 0.1338) | 1.1754 | (1.1024, 1.2589)
CL is the confidence level. mu and sigma are the estimated values of the mean and the standard deviation, respectively. muci and sigmaci are the confidence intervals of mu and sigma, respectively.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

MDPI and ACS Style

Jiang, L.; Li, X.; Li, L.; Yang, L.; Yang, L.; Hu, Z.; Chen, F. On-Orbit Geometric Calibration from the Relative Motion of Stars for Geostationary Cameras. Sensors 2021, 21, 6668. https://doi.org/10.3390/s21196668
