**1. Introduction**

With the accelerating development of aeronautics and space exploration, many countries have launched advanced Earth observation satellites in recent years. Among their payloads, the linear array push-broom sensor is the most widely carried. The resulting volume of Earth observation data has had a great impact on production and daily life. However, the imaging process of these sensors is easily influenced by a variety of errors, including thermal deformation error, optical distortion error, satellite position and attitude measurement errors, and so on [1,2]. Consequently, the raw images contain many distortions and cannot be directly used in subsequent applications. The errors that produce image distortion can be divided into two classes: long-period errors, including assembling error, optical distortion error, and thermal distortion error, which are constant or slowly varying [3]; and short-period errors, including satellite position and attitude measurement errors and attitude jitter error. If these errors are insufficiently characterized or left uncorrected, significant distortion of the raw image can result. Therefore, geometric calibration of the sensor is necessary to eliminate image distortion before the raw images can be applied.

The imaging process projects a point on the surface of the Earth onto the sensor's focal plane [4]. The imaging models used to describe this process fall into two types [5]. The first is the rigorous imaging model, which consists of a chain of coordinate transformations. Many rigorous imaging models have been proposed for a variety of sensors in previous research [1,2,6,7]. However, imagery vendors often do not provide users with the necessary details of advanced Earth observation satellites, such as the precise sensor parameters, the sensor work mode, or the satellite orbit. This type of model therefore has many limitations in practice. The other type is the generalized imaging model, of which the main examples are the rational function model (RFM), the direct linear transformation model, and the polynomial model. The RFM is widely applied in sensor calibration because of its simple form and high precision compared with the other models [5]. However, the RFM places strict requirements on the distribution and number of ground control points (GCPs) in each frame, and its coefficients must be computed frame by frame. The RFM effectively calibrates image distortion caused by long-period errors, such as thermal distortion error, optical distortion error, and assembling error, but its performance deteriorates in the presence of short-period errors such as attitude jitter [8]. The majority of advanced Earth observation satellites experience attitude jitter, which results from attitude control operations, structural dynamics, and so on [9]. Previous studies [8,9] suggest three classic ways to handle satellite attitude jitter. The first is to employ an advanced hardware device, such as an angular displacement sensor, to obtain high-resolution measurement data. However, this is infeasible for many satellites due to economic and technological restrictions. The second is to estimate attitude jitter from multispectral parallax images; this method is unsuitable for satellites that are not equipped with multispectral sensors. The third is to use high-precision GCPs in each scene; the effectiveness of this method depends on the distribution and number of GCPs [8]. However, it is difficult to extract many high-precision GCPs in each scene, especially when water or cloud coverage is present.

To address the above problems, this paper proposes a geometric calibration method that uses sparse recovery to remove linear array push-broom sensor bias. The errors in the imaging process are approximated by equivalent bias angles [10], and the equivalent bias angles of the image sequence are recovered by compressive sensing. The traditional error-solving problem of sensor calibration is thus transformed into a signal recovery problem.

### **2. Sensor Geometric Calibration Model**

### *2.1. Sensor Geometric Calibration Modeling*

Over the past decade, the linear array push-broom sensor has served as a main payload on most advanced Earth observation satellites. Its imaging process projects a point on the surface of the Earth, such as a GCP, onto the sensor's focal plane [4]. The basic flow diagram of this process is shown in Figure 1. The rigorous imaging model for the linear array push-broom sensor is presented as follows [6,7,11].

$$\begin{bmatrix} X - X_F \\ Y - Y_F \\ Z - Z_F \end{bmatrix} = m \times \mathbf{R}_{\mathrm{ECI}}^{\mathrm{ECF}} \times \mathbf{R}_{\mathrm{orb}}^{\mathrm{ECI}}(\theta_\Omega, \theta_i, \theta_\omega) \times \mathbf{R}_{\mathrm{body}}^{\mathrm{orb}}(\phi, \varepsilon, \psi) \times \mathbf{R}_{\mathrm{sen}}^{\mathrm{body}}(\varphi_X, \varphi_Y, \varphi_Z) \times \mathbf{M}_{\mathrm{mir}}(\theta_0, \theta_c) \times \begin{bmatrix} 0 \\ y \\ -f \end{bmatrix} \tag{1}$$

where $(X, Y, Z)$ represents the projected position in the Earth-centered fixed (ECF) coordinate system, $y$ represents the image column position in the focal plane coordinate system, $(X_F, Y_F, Z_F)$ represents the satellite position in the ECF system, $m$ represents a scale factor, $f$ represents the focal length, $\mathbf{R}_{\mathrm{ECI}}^{\mathrm{ECF}}$ denotes the rotation from the Earth-centered inertial (ECI) to the ECF coordinate system, $\theta_\Omega, \theta_i, \theta_\omega$ represent the orbital elements of the satellite (right ascension of the ascending node, inclination, and true perigee angle), $\mathbf{R}_{\mathrm{orb}}^{\mathrm{ECI}}$ denotes the rotation, given by the orbital elements, from the satellite orbit to the ECI coordinate system, $\phi, \varepsilon, \psi$ represent the Euler angles of the satellite attitude, $\mathbf{R}_{\mathrm{body}}^{\mathrm{orb}}$ denotes the rotation, given by the attitude angles, from the satellite body to the satellite orbit coordinate system, $\varphi_X, \varphi_Y, \varphi_Z$ represent the assembling angles of the sensor, $\mathbf{R}_{\mathrm{sen}}^{\mathrm{body}}$ denotes the rotation, given by the assembling angles, from the sensor to the satellite body coordinate system, $\theta_0, \theta_c$ represent the sensor pointing angles (the mirror assembling angle and the mirror rotation angle), and $\mathbf{M}_{\mathrm{mir}}$ denotes the rotation, given by the pointing angles, from the pointing to the sensor coordinate system.

**Figure 1.** Basic flow diagram of the imaging process for the linear array push-broom sensor.

Compared with their true values, the imaging parameters usually contain errors that cannot be distinguished individually in the imaging process. The long-period errors (thermal distortion error, optical distortion error, and assembling error) and the short-period errors (satellite orbit error and attitude error) of the imaging process are approximated by the equivalent bias angles. The sensor geometric calibration model is presented as [1,2,10]

$$\begin{bmatrix} X - X_F \\ Y - Y_F \\ Z - Z_F \end{bmatrix} = m \times \mathbf{R}_{Eq}(\alpha, \beta, \theta) \times \mathbf{R}_{\mathrm{ECI}}^{\mathrm{ECF}} \times \mathbf{R}_{\mathrm{orb}}^{\mathrm{ECI}} \times \mathbf{R}_{\mathrm{body}}^{\mathrm{orb}} \times \mathbf{R}_{\mathrm{sen}}^{\mathrm{body}} \times \mathbf{M}_{\mathrm{mir}} \times \begin{bmatrix} 0 \\ y \\ -f \end{bmatrix} \tag{2}$$

where $\alpha, \beta, \theta$ represent the equivalent bias angles and $\mathbf{R}_{Eq}(\alpha, \beta, \theta)$ denotes the equivalent rotation matrix

$$\mathbf{R}_{Eq}(\alpha,\beta,\theta) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & \sin\alpha \\ 0 & -\sin\alpha & \cos\alpha \end{pmatrix} \begin{pmatrix} \cos\beta & 0 & -\sin\beta \\ 0 & 1 & 0 \\ \sin\beta & 0 & \cos\beta \end{pmatrix} \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix} \tag{3}$$
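To make the composition concrete, Equation (3) can be evaluated numerically as the product of the three elementary rotations. The sketch below is an illustrative implementation; the function name is ours, not from the paper.

```python
import numpy as np

def r_eq(alpha, beta, theta):
    """Equivalent bias rotation matrix of Eq. (3): a rotation about X,
    then Y, then Z, multiplied in the order written in the equation."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    ct, st = np.cos(theta), np.sin(theta)
    rx = np.array([[1, 0, 0], [0, ca, sa], [0, -sa, ca]])
    ry = np.array([[cb, 0, -sb], [0, 1, 0], [sb, 0, cb]])
    rz = np.array([[ct, -st, 0], [st, ct, 0], [0, 0, 1]])
    return rx @ ry @ rz

# For zero bias angles the matrix reduces to the identity.
R = r_eq(0.0, 0.0, 0.0)
```

As expected for a composition of rotations, the result is always orthogonal with determinant one.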

### *2.2. Error Modeling*

All on-orbit satellites experience an environment that cycles between cooling and heating as they orbit the Earth and pass in and out of eclipse. The thermal distortion error caused by this cycle is the largest error in the imaging process [12], and its period equals the orbital period of the satellite. Thermal distortion errors appear in three directions, with the error in the direction toward the Sun being the largest. The optical distortion error and assembling error are constant biases or long-period variation errors [4]. Attitude jitter is also a primary cause of degraded sensor performance. The satellite position and attitude measurement errors, which are Gaussian, can be effectively removed during compressive sensing recovery; therefore, they are not included in the bias angle model. According to the characteristics of these errors, the equivalent bias angles can be described by constant errors plus a set of sinusoidal period components, expressed as [4,12]

$$\begin{cases} \alpha(t) = \varsigma_{\alpha} + \sum_{i=1}^{n} A_{\alpha i} \sin(2\pi f_{\alpha i} t + \zeta_{\alpha i}) \\ \beta(t) = \varsigma_{\beta} + \sum_{i=1}^{n} A_{\beta i} \sin(2\pi f_{\beta i} t + \zeta_{\beta i}) \\ \theta(t) = \varsigma_{\theta} + \sum_{i=1}^{n} A_{\theta i} \sin(2\pi f_{\theta i} t + \zeta_{\theta i}) \end{cases} \tag{4}$$

where $\varsigma_{\alpha}, \varsigma_{\beta}, \varsigma_{\theta}$ represent the constant errors, $A_{\alpha i}, A_{\beta i}, A_{\theta i}$ represent the amplitudes of the period components, $f_{\alpha i}, f_{\beta i}, f_{\theta i}$ represent the frequencies of the period components, and $\zeta_{\alpha i}, \zeta_{\beta i}, \zeta_{\theta i}$ represent the phases of the period components.
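As an illustration of the error model in Equation (4), one bias-angle channel can be simulated as a constant plus a sum of sinusoids. The parameter values below are hypothetical, not values from the paper.

```python
import numpy as np

def bias_angle(t, const, amps, freqs, phases):
    """One equivalent bias angle channel per Eq. (4): a constant error
    plus a sum of sinusoidal period components."""
    t = np.asarray(t, dtype=float)
    out = np.full_like(t, const)
    for a, f, z in zip(amps, freqs, phases):
        out += a * np.sin(2 * np.pi * f * t + z)
    return out

# Illustrative values: one slow orbital-period component plus a
# higher-frequency jitter-like component (units arbitrary).
t = np.linspace(0, 6000, 1024)
alpha = bias_angle(t, const=1e-5, amps=[2e-5, 5e-6],
                   freqs=[1 / 5900, 0.5], phases=[0.0, 1.0])
```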

### **3. Geometric Calibration by Using Sparse Recovery**

### *3.1. Compressive Sensing*

As a new theory in signal processing, compressive sensing was proposed in 2004 and developed rapidly in the following years. Its core idea involves two aspects. The first is the sparsity of the signal, meaning that the majority of its elements are zero or very small [13]. The other is sampling incoherence; in other words, the measurement matrix is required to satisfy the restricted isometry property (RIP) [14]. The traditional Nyquist–Shannon sampling theorem states that the sampling frequency must exceed the Nyquist frequency to reconstruct a signal without distortion. A sparse signal, however, can be exactly recovered by compressive sensing from incomplete measurements taken far below the Nyquist sampling rate [13]. In practice, most signals are not sparse in their original domain. Fortunately, such signals can be converted into sparse representations by a sparsifying transform, such as the discrete Fourier transformation (DFT), the discrete cosine transformation, or a wavelet transformation, and then recovered by compressive sensing. Because of its advantages in handling large-scale compressible data, compressive sensing is widely used in radio communication, array signal processing, medical imaging, and other fields.
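A minimal numerical illustration of the sparsity idea: a signal built from two sinusoids is dense in the time domain, but its DFT concentrates nearly all energy in a few bins. The signal and frequencies below are arbitrary examples.

```python
import numpy as np

# A signal that is dense in time but sparse in the frequency domain:
# two sinusoids with integer frequencies on N samples.
N = 256
t = np.arange(N)
x = np.sin(2 * np.pi * 5 * t / N) + 0.5 * np.sin(2 * np.pi * 20 * t / N)

# Its DFT has significant energy in only four bins (two conjugate
# pairs); this is the sparsity compressive sensing exploits.
X = np.fft.fft(x)
large = np.sum(np.abs(X) > 1.0)
```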

### *3.2. Procedure of Proposed Method*

The geometric calibration method consists of five primary steps: (a) periodic wavy pattern recognition; (b) sparse GCP image shift calculation; (c) dense GCP image shift calculation; (d) equivalent bias angle recovery; and (e) image calibration. The procedure of the proposed method is shown in Figure 2.

**Figure 2.** Flowchart of the proposed method.

### 3.2.1. Periodic Wavy Pattern Recognition

The first step is to recognize periodic wavy patterns in the raw image sequence frame by frame. If a raw image exhibits periodic wavy patterns, which are caused by short-period errors such as attitude jitter, the subsequent process of Step 3 is necessary for that frame. Periodic wavy pattern recognition consists of three processes. First, a line or edge detection algorithm is used to extract roads, dikes, and coastlines from the raw images. Then, the smoothness of the extracted lines or edges is evaluated. Finally, the existence of periodic wavy patterns in the image is determined. The details of this step can be found in [15].

### 3.2.2. Sparse GCP Image Shift Calculation

This step comprises three main processes. First, a handful of images with good imaging conditions are randomly selected from the raw image sequence, excluding images with periodic wavy patterns, to serve as the measurement scenes. Then, a small number of high-precision GCPs are extracted and matched from each measurement scene. The sampling interval of these GCPs is set relatively large, so they are referred to as sparse GCPs in this paper. The latitude and longitude ranges of each measurement scene are calculated to determine whether GCPs are present in the scene; the GCP templates used in the extraction process are obtained from Shuttle Radar Topography Mission data. The details of this process can be found in [16]. Finally, we calculate the ideal position of each GCP in the focal plane coordinates using Equation (2) with the equivalent bias angles set to zero. The real positions of the GCPs are measured from the raw images, and the image shift of each sparse GCP is calculated as

$$\begin{cases} \Delta x_i = x_i - x_{ci} \\ \Delta y_i = y_i - y_{ci} \end{cases} \tag{5}$$

where $(\Delta x_i, \Delta y_i)$ is the image shift between the real position $(x_i, y_i)$ and the ideal position $(x_{ci}, y_{ci})$ of the $i$th GCP.

### 3.2.3. Dense GCP Image Shift Calculation

The raw images with periodic wavy patterns are selected as the measurement scenes, and a number of densely distributed GCPs are extracted and matched from each measurement scene. The sampling interval of these GCPs is set relatively small, so they are referred to as dense GCPs. The exact number of GCPs is determined by the sparsity and length of the recovered signal, the recovery algorithm, and so on. Generally speaking, the number of measurements can be far below the length of the recovered signal [14]. The rest of this step follows the same process as Step 2.

### 3.2.4. Equivalent Bias Angle Recovery

The key task in this step is signal fusion. Since the compressive sensing method can only recover a 1-D column vector, the 3-D equivalent bias angle signals must be merged into a single 1-D time-varying signal in time order. The problem of solving for 3-D signal errors is thus transformed into a 1-D signal recovery problem. After obtaining the image shifts of the GCPs, the remaining tasks of this step are to select a suitable sparse basis and recovery algorithm and to construct a suitable measurement matrix to exactly recover the 1-D time-varying signal. The 3-D signals are then separated from the recovered 1-D signal. The details of this work are presented in the following sections.
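The time-ordered merging of the three channels can be sketched as a simple interleaving; the exact interleaving convention below is an assumption for illustration.

```python
import numpy as np

def merge_angles(alpha, beta, theta):
    """Interleave the three bias-angle channels into one 1-D vector in
    time order: [a_1, b_1, th_1, a_2, b_2, th_2, ...] (length 3N)."""
    return np.column_stack([alpha, beta, theta]).ravel()

def split_angles(x):
    """Inverse of merge_angles: separate the three channels again."""
    m = x.reshape(-1, 3)
    return m[:, 0], m[:, 1], m[:, 2]

a = np.array([1.0, 2.0])
b = np.array([3.0, 4.0])
th = np.array([5.0, 6.0])
x = merge_angles(a, b, th)   # time-ordered 1-D vector of length 3N
```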

### 3.2.5. Image Calibration

This step involves two processes: calculating the calibrated positions and resampling the raw image. Once the equivalent bias angles are recovered, the calibrated image positions are easily calculated using Equation (2) with the imaging parameters. The image resampling can be completed using bilinear interpolation: the gray values of the calibrated image are obtained from the calibrated image positions and the raw image.
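Bilinear resampling can be sketched as follows; the function and the toy image are illustrative only.

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Gray value at a fractional position (x = row, y = column) by
    bilinear interpolation, as used to resample the raw image onto the
    calibrated grid. Positions are clipped to the image bounds."""
    h, w = img.shape
    x = np.clip(x, 0, h - 1.001)
    y = np.clip(y, 0, w - 1.001)
    x0, y0 = int(x), int(y)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[x0, y0] +
            dx * (1 - dy) * img[x0 + 1, y0] +
            (1 - dx) * dy * img[x0, y0 + 1] +
            dx * dy * img[x0 + 1, y0 + 1])

img = np.array([[0.0, 10.0], [20.0, 30.0]])
v = bilinear_sample(img, 0.5, 0.5)   # midpoint of the four pixels: 15.0
```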

### *3.3. Key Issues of the Proposed Method*

If the equivalent bias angles are considered to be errors in an ideal situation, the estimation of equivalent bias angles is a typical adjustment of indirect observations. The function model of the adjustment of these indirect observations is expressed as

$$\mathbf{L} = \mathbf{B}\mathbf{X} + \mathbf{d} \tag{6}$$

where **L** is the observation, **B** is the measurement matrix, **X** is the estimated parameter, and **d** is an offset constant.

Generally, the estimated parameter is expressed as

$$\mathbf{X} = \mathbf{X}\_0 + \mathbf{\hat{x}} \tag{7}$$

where $\mathbf{X}_0$ is the ideal value of the estimated parameter and $\hat{\mathbf{x}}$ is the correction of the estimated parameter. Let

$$\mathbf{L}\_1 = \mathbf{L} - \mathbf{L}\_0 = \mathbf{L} - (\mathbf{B}\mathbf{X}\_0 + \mathbf{d}) \tag{8}$$

where $\mathbf{L}_0$ is the ideal observation value.

According to Equations (6) and (7), Equation (8) can be simplified as

$$\mathbf{L}_1 = \mathbf{B}\hat{\mathbf{x}} \tag{9}$$

where $\hat{\mathbf{x}}$ is the equivalent bias angle vector, $\mathbf{L}_1$ is the image shift of the GCPs, and $\mathbf{B}$ is the measurement matrix. When the relation between $\mathbf{L}$ and $\mathbf{X}$ is nonlinear ($\mathbf{L} = F(\mathbf{X})$), the partial derivative of $F(\mathbf{X})$ must be taken to obtain the measurement matrix $\mathbf{B}$.

### 3.3.1. Measurement Matrix and Measurement Equation

### (a) Measurement variables

According to Equation (8), $\mathbf{L}_1$ is determined by $\mathbf{L}$ and $\mathbf{L}_0$, where $\mathbf{L}$ contains the real positions and $\mathbf{L}_0$ the ideal positions of the GCPs when $M$ GCPs are extracted and matched from the raw image sequence. $\mathbf{L}$ and $\mathbf{L}_0$ are expressed as

$$\mathbf{L} = \begin{bmatrix} x_1 & y_1 & x_2 & y_2 & \cdots & x_M & y_M \end{bmatrix}^\mathrm{T}, \quad \mathbf{L}_0 = \begin{bmatrix} x_{c1} & y_{c1} & x_{c2} & y_{c2} & \cdots & x_{cM} & y_{cM} \end{bmatrix}^\mathrm{T} \tag{10}$$

According to Equation (5), $\mathbf{L}_1$ is expressed as

$$\begin{aligned} \mathbf{L}_1 &= \begin{bmatrix} \Delta x_1 & \Delta y_1 & \Delta x_2 & \Delta y_2 & \cdots & \Delta x_M & \Delta y_M \end{bmatrix}^\mathrm{T} \\ &= \begin{bmatrix} x_1 - x_{c1} & y_1 - y_{c1} & x_2 - x_{c2} & y_2 - y_{c2} & \cdots & x_M - x_{cM} & y_M - y_{cM} \end{bmatrix}^\mathrm{T} \end{aligned} \tag{11}$$

where $\mathbf{L}_1$ is a $2M \times 1$ vector.

### (b) Measurement equation

According to Equation (9), the measurement equation of one GCP is expressed as

$$\begin{bmatrix} \Delta x_m & \Delta y_m \end{bmatrix}^\mathrm{T} = \mathbf{B}_{mn} \begin{bmatrix} \alpha_n & \beta_n & \theta_n \end{bmatrix}^\mathrm{T} \tag{12}$$

where $[\alpha_n \ \beta_n \ \theta_n]^\mathrm{T}$ are the equivalent bias angles at the imaging time of the $m$th GCP and $\mathbf{B}_{mn}$ is the $2 \times 3$ measurement matrix.

The measurement equation for all $M$ GCPs is expressed as

$$\mathbf{L}_1 = \mathbf{B}\hat{\mathbf{x}}: \quad \begin{bmatrix} \Delta x_1 \\ \Delta y_1 \\ \Delta x_2 \\ \Delta y_2 \\ \vdots \\ \Delta x_M \\ \Delta y_M \end{bmatrix} = \begin{bmatrix} \mathbf{B}_{11} & \mathbf{0} & \cdots & \mathbf{0} \\ \mathbf{0} & \mathbf{B}_{22} & \cdots & \mathbf{0} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{0} & \mathbf{0} & \cdots & \mathbf{B}_{MM} \end{bmatrix} \begin{bmatrix} \alpha_1 \\ \beta_1 \\ \theta_1 \\ \alpha_2 \\ \beta_2 \\ \theta_2 \\ \vdots \\ \alpha_N \\ \beta_N \\ \theta_N \end{bmatrix} \tag{13}$$

where $\mathbf{L}_1$ is a $2M \times 1$ measurement vector, $\mathbf{B}$ is a $2M \times 3N$ measurement matrix, and $\hat{\mathbf{x}}$ is a $3N \times 1$ merged equivalent bias angle vector.
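One way to picture the $2M \times 3N$ structure is to place each GCP's $2 \times 3$ block in the columns of its time sample. The sketch below assumes this per-GCP time indexing; the helper name and indexing convention are illustrative, not from the paper.

```python
import numpy as np

def build_measurement_matrix(blocks, times, N):
    """Assemble a 2M x 3N measurement matrix in the spirit of Eq. (13).
    blocks[m] is the 2x3 block of the m-th GCP and times[m] is the
    time-sample index its bias angles occupy in the merged 3N vector."""
    M = len(blocks)
    B = np.zeros((2 * M, 3 * N))
    for m, (blk, n) in enumerate(zip(blocks, times)):
        B[2 * m:2 * m + 2, 3 * n:3 * n + 3] = np.asarray(blk)
    return B

# Two GCPs observed at time samples 0 and 2, with N = 4 samples total.
blocks = [np.ones((2, 3)), 2 * np.ones((2, 3))]
B = build_measurement_matrix(blocks, times=[0, 2], N=4)  # shape (4, 12)
```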

### (c) Linearization of measurement equation

The other important issue is obtaining the elements of the measurement matrix **B**. To reduce the complexity of the algorithm, the measurement equation of each GCP must be linearized before recovering the signal. According to Equation (2), the relationship between the estimated parameters and the measurements is nonlinear; therefore, a partial derivative of Equation (2) is needed to obtain **B**. Because this derivative is difficult to obtain analytically, a numerical method is used to estimate the elements of **B**. The elements $b_{11}^{m}$ and $b_{21}^{m}$ of $\mathbf{B}_{mn}$ are taken as examples to introduce the details of the solution.


With the other bias angles fixed, the $\alpha$ angle is perturbed by $i \times \Delta\gamma$ ($i \in [1, I]$), and the GCP position $(x_i, y_i)$ is recalculated with Equation (2). The position offsets relative to the unperturbed position $(x_0, y_0)$ are

$$\begin{cases} \Delta x_i = x_i - x_0 \\ \Delta y_i = y_i - y_0 \end{cases} \quad i \in [1, I] \tag{14}$$

Then $b_{11}^{m}$ and $b_{21}^{m}$ are calculated as follows; the other elements of $\mathbf{B}_{mn}$ can be calculated in the same way:

$$\begin{cases} b_{11}^{m} = \frac{1}{I} \sum_{i=1}^{I} \frac{\Delta x_i}{i \times \Delta \gamma} \\ b_{21}^{m} = \frac{1}{I} \sum_{i=1}^{I} \frac{\Delta y_i}{i \times \Delta \gamma} \end{cases} \tag{15}$$
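The averaging of finite-difference slopes in Equation (15) can be sketched as follows; `project` stands in for the projection of Equation (2) and is a hypothetical callable.

```python
import numpy as np

def estimate_b11_b21(project, I=5, dgamma=1e-6):
    """Numerically estimate b11 and b21 per Eq. (15): perturb the alpha
    bias angle by i * dgamma, re-project the GCP via the hypothetical
    callable project(alpha) -> (x, y), and average the slopes
    Delta_x / (i * dgamma) and Delta_y / (i * dgamma)."""
    x0, y0 = project(0.0)
    b11 = b21 = 0.0
    for i in range(1, I + 1):
        xi, yi = project(i * dgamma)
        b11 += (xi - x0) / (i * dgamma)
        b21 += (yi - y0) / (i * dgamma)
    return b11 / I, b21 / I

# Sanity check on a linear toy model x = 3*alpha, y = -2*alpha, where
# the exact partial derivatives are 3 and -2.
b11, b21 = estimate_b11_b21(lambda a: (3.0 * a, -2.0 * a))
```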

### 3.3.2. Sparse Basis and Sparse Representation

The equivalent bias angle signals based on Equation (4) are not sparse in the time domain. According to previous studies [8,17], the DFT converts the equivalent bias angle signals into a sparse representation. The simulated signals based on Equation (4) are displayed in Figure 3a. The three curves in Figure 3a represent the simulated three-dimensional equivalent bias angle signals with different constant errors and period components: the red curve indicates the X direction, the blue curve the Y direction, and the black curve the Z direction. The results of the 1-D DFT are displayed in Figure 3b.

**Figure 3.** (**a**) Simulated 3-D equivalent bias angles signals. (**b**) Results of 1-D DFT for the 1-D equivalent bias angle signal.

To facilitate recovery, the 3-D equivalent bias angle signals are merged into a 1-D signal in time order. The 1-D merged signal and the results of its 1-D DFT are displayed in Figure 4.

**Figure 4.** (**a**) 1-D merged signal. (**b**) Results of the 1-D DFT.

Figure 3b shows that the 3-D equivalent bias angle signals are sparse in the frequency domain, and Figure 4b shows that the 1-D merged signal is also sparse in the frequency domain. Therefore, we select the 1-D DFT basis as the sparse basis **Ψ** in the proposed method; the other sparse bases we tested all performed worse. The sparse basis **Ψ** is defined as

$$\mathbf{\Psi} = \frac{1}{N} \begin{bmatrix} e^{j2\pi(0 \times 0)/N} & e^{j2\pi(1 \times 0)/N} & \cdots & e^{j2\pi((N-1) \times 0)/N} \\ e^{j2\pi(0 \times 1)/N} & e^{j2\pi(1 \times 1)/N} & \cdots & e^{j2\pi((N-1) \times 1)/N} \\ \vdots & \vdots & \ddots & \vdots \\ e^{j2\pi(0 \times (N-1))/N} & e^{j2\pi(1 \times (N-1))/N} & \cdots & e^{j2\pi((N-1) \times (N-1))/N} \end{bmatrix} \tag{16}$$
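The basis of Equation (16) can be built directly: synthesizing a signal from its forward-DFT coefficients with this basis reproduces the time-domain samples, and a sinusoidal signal has only a few nonzero coefficients. The example signal is illustrative.

```python
import numpy as np

def dft_basis(N):
    """Sparse basis of Eq. (16): Psi[n, k] = exp(j*2*pi*n*k/N) / N,
    so that x = Psi @ f where f holds the forward-DFT coefficients."""
    n = np.arange(N)
    return np.exp(2j * np.pi * np.outer(n, n) / N) / N

N = 64
Psi = dft_basis(N)
x = np.sin(2 * np.pi * 4 * np.arange(N) / N)   # one sinusoid
f = np.fft.fft(x)                              # its sparse representation
x_rec = (Psi @ f).real                         # synthesis reproduces x
```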

### 3.3.3. Signal Recovery

It is obvious from Equation (12) that the 2-D measurement equation used to solve for the 3-D equivalent bias angles of one GCP is an ill-posed problem. Traditional methods assume that the equivalent bias angles are unchanged within a frame period and use classical optimal estimation methods (such as least squares estimation, Bayes estimation, or maximum likelihood estimation) to combine a number of GCPs and estimate the equivalent bias angles of each frame. However, this assumption is unreasonable when the equivalent bias angles contain short-period errors. Because $M \ll N$, Equation (13) is highly underdetermined, and the traditional methods fail in this situation. Fortunately, it is possible to exactly recover $\hat{\mathbf{x}}$ using compressive sensing if $\hat{\mathbf{x}}$ has a sparse representation in a suitable basis.

The measurement equation of the $M$ GCPs in Equation (13) is rewritten as

$$\mathbf{L}_1 = \mathbf{B}\hat{\mathbf{x}} = \mathbf{B}\mathbf{\Psi}\mathbf{f} = \overline{\mathbf{B}}\mathbf{f} \tag{17}$$

where $\mathbf{\Psi}$ is the $3N \times 3N$ sparse basis, $\mathbf{f}$ is the $3N \times 1$ sparse representation with $\hat{\mathbf{x}} = \mathbf{\Psi}\mathbf{f}$, and $\overline{\mathbf{B}} = \mathbf{B}\mathbf{\Psi}$ is the $2M \times 3N$ sensing matrix. In Equation (17), $\mathbf{L}_1$ is the product of $\overline{\mathbf{B}}$ and $\mathbf{f}$ [13]. The 1-D DFT basis satisfies the RIP [17]. Because the GCPs are selected randomly, the measurement matrix $\mathbf{B}$ satisfies the RIP, and therefore the sensing matrix $\overline{\mathbf{B}}$ also satisfies the RIP. We use the matching pursuit algorithm to recover the exact values of $\mathbf{f}$ because of its low computational complexity [18,19]. The merged equivalent bias angle signal $\hat{\mathbf{x}}$ is then obtained as

$$\hat{\mathbf{x}} = \mathbf{\Psi}\mathbf{f} \tag{18}$$
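The paper recovers **f** with the matching pursuit algorithm [18,19]; the sketch below implements the closely related orthogonal matching pursuit variant on a toy random sensing matrix, which is illustrative only and not the paper's setup.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select the k columns of
    the sensing matrix A most correlated with the residual, then solve
    a least-squares fit on the selected support."""
    residual = y.astype(complex)
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.conj().T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    f = np.zeros(A.shape[1], dtype=complex)
    f[support] = coef
    return f

# Toy demo: a 2-sparse coefficient vector observed through 16 of 32
# random Gaussian measurements (values illustrative).
rng = np.random.default_rng(0)
A = rng.standard_normal((16, 32))
f_true = np.zeros(32)
f_true[3], f_true[17] = 1.5, -2.0
y = A @ f_true
f_hat = omp(A, y, k=2)
```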

### **4. Experiments and Analysis**

In this section, two experiments were conducted to compare the proposed method with the RFM method. The first experiment employs image data from the Hyperion instrument of Earth Observing-1 (EO-1) and compares the performance of the two methods in estimating the long-period components of the equivalent bias angles. The second experiment, which uses image data from the Panchromatic Remote-sensing Instrument for Stereo Mapping (PRISM) of the Advanced Land Observing Satellite (ALOS), shows that the proposed method performs better in estimating the short-period components. The details of the experiments are described below.

### *4.1. Hyperion Data Experiment*

### 4.1.1. Data Description

Hyperion, the main payload of EO-1, is a typical linear array push-broom sensor. A total of 100 Hyperion frame images acquired on 7 June 2002 were selected as the experimental data. For the proposed method, we selected 40 frames with evenly distributed GCPs from the experimental data as the measurement scenes. A measurement scene used in the proposed method is shown in Figure 5a, with the red crosses denoting the GCPs. The remaining frames served as verification scenes, each containing 50 evenly distributed random check points (RCPs). One verification scene is shown in Figure 5b, with the blue triangles denoting the RCPs. For the RFM method, we extracted GCPs in each frame; the verification scenes were the same as for the proposed method. The measurement and verification scene of the RFM method is shown in Figure 5c. The GCPs of the measurement scenes were used as the inputs for both methods, and the RCPs in the verification scenes were used to evaluate the calibration performance of both methods. To compare the methods adequately, we designed 10 test cases with different numbers of GCPs (5–50) and recorded the calibration results. The GCPs were distributed evenly in this experiment. The original size (3129 × 256 pixels) of the Hyperion images was changed for presentation in Figure 5.

**Figure 5.** (**a**) Measurement scene and (**b**) verification scene of the proposed method. (**c**) Measurement and verification scene of the rational functional model (RFM) method (30 GCPs and 50 random check points (RCPs)).
