Article

2OC: A General Automated Orientation and Orthorectification Method for Corona KH-4B Panoramic Imagery

1 Engineering Center, Chinese Academy of Surveying and Mapping (CASM), Beijing 100036, China
2 The First Institute of Photogrammetry and Remote Sensing, Ministry of Natural Resources, Xi’an 710054, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(21), 5116; https://doi.org/10.3390/rs15215116
Submission received: 26 August 2023 / Revised: 12 October 2023 / Accepted: 18 October 2023 / Published: 26 October 2023

Abstract: Due to a lack of geographic reference information, complex panoramic camera models, and intricate distortions, including radiometric and geometric distortions and land cover changes, it is challenging to apply the large number (800,000+) of high-resolution Corona KH-4B panoramic images from the 1960s and 1970s to surveying-related tasks. This limitation hampers their significant potential in environmental remote sensing, urban planning, and other applications. This study proposes a method called 2OC for the automatic and accurate orientation and orthorectification of Corona KH-4B images, based on generalized control information from reference images such as Google Earth orthophotos. (1) For the Corona KH-4B panoramic camera, we propose an adaptive focal length variation model that ensures accuracy and consistency. (2) We introduce a robust multi-source remote sensing image matching algorithm, which includes an accurate primary orientation estimation method, a multi-threshold matching enhancement strategy based on scale, orientation, and texture (MTE), and a model-guided matching strategy. These techniques extract high-accuracy generalized control information for Corona images with significant geometric distortions and numerous weak-texture areas. (3) A time-iterative Corona panoramic digital differential rectification method is proposed. The orientation and orthorectification results for KH-4B images from multiple regions, including the United States, Russia, Austria, Burkina Faso, and Beijing, Chongqing, Gansu, and the Qinghai–Tibet Plateau in China, demonstrate that 2OC is not only automatic but also state-of-the-art in generality and accuracy. Specifically, the standard deviation of the orientation is less than 2 pixels, the mosaic error of orthorectified images is approximately 1 pixel, and the standard deviation of ground checkpoints is better than 4 m. In addition, 2OC can extend time series analyses with data from 1962 to 1972, benefiting fields such as environmental remote sensing and archaeology.

1. Introduction

The Corona program, a reconnaissance satellite initiative aimed at acquiring military strategic and weapons intelligence, collected over 860,000 images of the Earth’s surface [1,2]. The highest-quality images from the Corona program, the KH-4B images [3], have a resolution of 1.8 m and can be used for the identification and mapping of historic road systems [4], architectural structures [5], and historic landscapes [6]. They are also valuable for studying urban expansion [7], creating historical land cover maps [8,9], reconstructing historical ecological data [10], and assessing glacier area changes [11]. Nevertheless, the original KH-4B images lack georeferencing and suffer from intricate panoramic geometric distortions, preventing their direct use in scientific research. Moreover, because of complex distortions, such as radiometric distortion, geometric distortion, land cover changes, and weak texture, the current georeferencing process for KH-4B imagery relies on expensive and time-consuming manual rectification, significantly limiting its practical utility.
In recent years, some scholars [12] have attempted to fit the Corona panoramic camera model with traditional frame camera models to correct the panoramic distortions. However, the rectification accuracy of these methods is low because of the significant differences between the camera models. To address this issue, Jacobsen et al. [13] proposed a perspective frame camera model that incorporates panoramic geometric transformation terms; applied at large scale, it achieved a mean orientation error of approximately 11.4 pixels. Lauer et al. [14] used a fisheye camera model to fit the panoramic distortions of Corona images, achieving a planimetric positioning accuracy of 17 m on local Corona images. Additionally, Sohn et al. [15] proposed using a second-order rational function model (RFM) to georectify Corona KH-4B images; however, the limited number of manually extracted generalized control points resulted in low accuracy. Sohn et al. [15] also proposed two rigorous mathematical models: a modified projection model based on the frame camera model and a projection model with time-dependent exterior orientation elements. The latter, a complex and rigorous panoramic camera model, achieved sub-2-pixel orientation accuracy on a local region with around 30 manually extracted generalized control points. However, this camera model cannot adapt to focal length variations, and our experiments show that the official design focal length is not always optimal for all image orientations, leading to increased errors and even orientation failure. Furthermore, the aforementioned approaches share a significant drawback: the need for manual extraction of generalized control points [16]. Bhattacharya et al. [20], for example, manually selected generalized control points and used the Remote Sensing software package Graz (RSG) to generate a Corona KH-4B digital elevation model (DEM). We therefore aim to extract match points between KH-4B images and reference images automatically using image matching techniques. However, traditional image matching techniques (such as SIFT [17], SURF [18], and UR-SIFT [19]) are unable to extract stable and accurate control information from Corona images because of the complex distortions arising from differences in satellite orbits, sensor characteristics, and large temporal gaps. In summary, existing methods improve orientation accuracy through enhanced mathematical models but are only suitable for small datasets because of their reliance on manually extracted control information.
This study proposes a universal method, named 2OC, for the automatic orientation and orthorectification of Corona KH-4B images. First, a 14-parameter panoramic mathematical model and a time-iterative orthorectification technique are introduced to correct the panoramic geometric distortion and fit the focal length variations of Corona panoramic images. Second, to address the complex distortions of Corona images, a robust image-matching method is proposed to extract correspondences between Corona images and reference images, generating generalized control points. Specifically: (1) a robust feature matching algorithm, NIFT, is proposed to estimate the transformation between images under radiometric and geometric distortions; (2) a multi-threshold matching enhancement strategy, MTE, is developed to optimize the distribution and quantity of generalized control points in areas with weak texture and land cover changes, improving the overall accuracy of the control information; and (3) a model-guided matching strategy is introduced to reduce the impact of panoramic distortions.
The main contributions of this study are as follows:
  • We propose an automatic orientation and orthorectification method (2OC) for Corona KH-4B images. To validate its effectiveness, we apply 2OC to a large number of multi-regional KH-4B images with various sources of references, and the detailed information is given in Table 1. The orientation accuracy is better than 2 pixels, the stitching error of orthorectified images is approximately 1 pixel, and the ground checkpoints have an RMSE accuracy better than 4 m. This demonstrates that 2OC is capable of processing KH-4B images with different regions, terrains, and image distortions using multiple reference images.
  • We propose a 14-parameter Corona panoramic camera model and a time-iterative panoramic orthorectification method. First, to address the focal length and panoramic distortions of KH-4B images, we propose a 14-parameter panoramic camera model. Second, to solve the back-projection from ground coordinates to image coordinates, we propose a novel time-iterative panoramic orthorectification method.
  • We introduce a robust control information extraction algorithm for extracting match points between KH-4B images and reference images to generate control information. To overcome land cover changes and geometric distortions (rotation, scale, panoramic), we propose a robust feature matching algorithm, a multi-threshold matching enhancement strategy (MTE) based on local texture, scale, and orientation, and a model-guided matching strategy.

2. Related Work

In this section, we first provide an overview of current methods for processing Corona images and then briefly review the matching techniques for multi-source remote sensing images.

2.1. Geometric Processing of the Corona Images

Non-panoramic camera model-based methods: Altmaier et al. [12] directly used frame-based photogrammetric software (ERDAS IMAGINE OrthoBASE Pro) to process Corona images, obtaining digital surface models with elevation and planimetric accuracies of 10 m and 3 m, respectively. Casana and Cothren [21] also used this approach to process Corona images of the Middle East and achieved orientation errors of approximately 5 pixels on small images of less than 5000 × 5000 pixels. Nita et al. [22] used match points between panoramic images for relative orientation, followed by absolute orientation with manually extracted generalized control points, to generate DSMs and orthorectified images with a planimetric error of around 14 m. Rizayeva et al. [8] applied the method of [22] to produce orthorectified images with a resolution of 2.5 m and achieved a planimetric error of 16.3 ± 10.4 m. Furthermore, Bhattacharya et al. [20] used the Remote Sensing software package Graz (RSG) to process KH-4B images, obtaining a triangulation error of approximately 2.5 pixels. Jacobsen et al. [13] proposed a perspective frame camera model incorporating panoramic transformation terms, but it exhibited a high orientation standard deviation of 11.4 pixels. Overall, non-panoramic camera models, although simple, suffer from low accuracy.
Panoramic camera model-based methods: To better fit the Corona KH-4B panoramic camera, Sohn et al. [15] proposed two approaches: (1) modifying the frame camera model by analyzing the differences between the frame and panoramic imaging transformation equations, and (2) developing a time-dependent panoramic camera model by analyzing the panoramic imaging process and accounting for camera and platform motion. With manually extracted generalized control points, they achieved an orientation accuracy of approximately 1.5 pixels on small-scale KH-4B images. Additionally, Shin and Schenk [23] proposed a simplified panoramic camera model, assuming that during exposure the interior parameters only undergo motion along the sensor direction and the exterior parameters only along the flight direction; they obtained a height error of approximately 12 m from Corona stereo pairs. Although the fisheye camera model differs from the Corona panoramic camera model, Lauer et al. [14] applied it to Corona images and achieved a planimetric accuracy of approximately 17 m. These panoramic models improve orientation accuracy through refined mathematical models, but their reliance on manually extracted control information restricts them to small datasets.

2.2. Image Matching Techniques

In recent years, scale-invariant feature transform (SIFT) [17] has been widely used as a classical local feature extraction algorithm for image registration in remote sensing. However, the non-linear intensity distortion between multi-temporal and multi-sensor remote sensing images severely degrades the performance of SIFT. Therefore, Ye et al. proposed some region-based matching algorithms, such as HOPC [24] and CFOG [25], which have been successfully applied to multi-sensor image registration. These methods rely on prior information about the image position. To address this issue, Li et al. proposed feature-matching algorithms, namely RIFT [26] and LNIFT [27], which do not depend on prior location information. These algorithms exhibit good performance in combating non-linear radiometric distortion but have limited robustness regarding rotation and scale distortion. Additionally, with the rapid development of deep learning techniques, Ghuffar et al. [28] employed the deep model Superglue [29], designed for matching natural scene images, to automatically extract control information from Landsat images, achieving a sub-pixel level of median error. However, it cannot adapt well to the unique complex distortions in KH-4B images. Therefore, the current image-matching techniques are not robust enough to handle the complex distortions of KH-4B images for orientation and rectification tasks.
Based on this, 2OC is proposed for the orientation and orthorectification of Corona KH-4B images with complex distortions. First, a 14-parameter panoramic mathematical model and a digital differential orthorectification method are proposed to effectively fit the panoramic and focal length distortions of KH-4B images. Second, a robust image-matching algorithm is developed for the automatic extraction of control information. Extensive experimental results demonstrate that 2OC achieves orientation accuracy better than 2 pixels, with a mosaic error of approximately 1 pixel for orthorectified images and a median error of less than 4 m for ground checkpoints.

3. Corona KH-4B Image Processing

The 2OC pipeline consists of multiple modules: a panoramic camera model, image orientation, orthorectification, and a generalized control information extraction algorithm. The detailed workflow is shown in Figure 1. First, the images are downsampled (❶), and a feature-matching algorithm is used to estimate the transformation matrix between the KH-4B image and the reference image (❷). Second, an image pyramid is constructed, and template matching together with the multi-threshold matching enhancement strategy (❸) is employed to obtain reliable match points for generating generalized control information (❹). Then, image orientation (❺) is performed, and model-guided matching (❻) is used to re-optimize the generalized control information and the image orientation results (❼). Finally, the KH-4B orthorectified images are obtained via orthorectification based on iterative scanning time (❽).

3.1. Introduction of the Corona Images

As shown in Table 2, the Corona missions include KH-1, KH-2, KH-3, KH-4, KH-4A, and KH-4B. Under U.S. Executive Order 12951 [30], these images were released to the National Archives and Records Administration (NARA) and the U.S. Geological Survey (USGS) on 23 February 1995. A complete panoramic image measures approximately 70 × 745 mm, and the USGS scanned the film at resolutions of 7 or 14 μm. Because of the large film size, each image was divided into four overlapping sections for scanning, labeled a, b, c, and d, generating four sub-images.
KH images exhibit complex distortions, making them hard to process. (1) The Corona KH-4 camera rotates steadily in the across-track direction through a scan angle of 70° while sequentially exposing the static film, producing a series of instantaneous strip images with significant panoramic distortions. (2) The KH-4B images may experience varying levels of film deformation. Fortunately, additional markings, the panoramic geometry (PG) stripes, can assist in evaluating this deformation. Specifically, during photography, lamps mounted on the lens trace straight lines at the edges of the image, as shown in Figure 2; the PG stripes therefore bend with film deformation. However, KH-4 and KH-4A images do not include PG stripes, featuring instead shrinkage marks and format center indicators. (3) The scanning process was not precisely calibrated, so accurate sub-image stitching requires estimating rotation and translation components from the overlapping areas of adjacent image blocks. A previous study [28] showed that there are varying levels of block-wise deformation within the images, which can result in incorrect sub-image transformations; these errors accumulate during stitching and affect the overall accuracy. Therefore, we process the sub-images separately before stitching, which avoids stitching errors and corrects scanning errors, given that the interior and exterior orientation elements of the image orientation compensate for the rotation and translation of the sub-images.

3.2. The Imaging Model of Corona KH-4B Panoramic Cameras

The imaging process of a KH-4B camera is depicted in Figure 3. While the satellite moves swiftly along its orbit, the camera rapidly rotates to sequentially expose the static film. This dynamic process results in time-varying exterior orientation elements of the camera. To better fit the imaging procedure of the KH-4B panoramic camera, a 14-parameter mathematical model is proposed. This model includes 12 exterior orientation elements associated with time, the dynamic correction parameter, and the image focal length. The derivation of the imaging model at any arbitrary time t is provided below.
(1) Exterior orientation of the KH-4B panoramic camera at time t

First, the change in the exterior orientation elements, including the position coordinates $\{X_s^t, Y_s^t, Z_s^t\}$ and the attitude angles $\{\omega_t, \varphi_t, \kappa_t\}$, caused by the satellite motion can be expressed by the following equations, assuming that they are linearly related to the time $t$:

$X_s^t = X_s^0 + X_s^1 t$ (1)

$Y_s^t = Y_s^0 + Y_s^1 t$ (2)

$Z_s^t = Z_s^0 + Z_s^1 t$ (3)

$\omega_t = \omega_0 + \omega_1 t$ (4)

$\varphi_t = \varphi_0 + \varphi_1 t$ (5)

$\kappa_t = \kappa_0 + \kappa_1 t$ (6)

$t = \dfrac{x_p}{L}$ (7)

where $\{X_s^0, Y_s^0, Z_s^0, \omega_0, \varphi_0, \kappa_0\}$ and $\{X_s^t, Y_s^t, Z_s^t, \omega_t, \varphi_t, \kappa_t\}$ are the exterior orientation elements at times 0 and $t$, with $t \in [0, 1]$ after normalization; $\{X_s^1, Y_s^1, Z_s^1, \omega_1, \varphi_1, \kappa_1\}$ are the variation coefficients with respect to $t$; $x_p$ is the horizontal coordinate of the instantaneous image on the panoramic image; and $L$ is the length of the film.
Second, we introduce the change in the exterior orientation caused by the camera rotation along the cross-track direction with $t$, which is illustrated in Figure 3b and can be expressed by Equation (8):

$\alpha = \dfrac{x_p}{f}$ (8)

where $\alpha$ is the rotation angle in the cross-track direction and $f$ is the camera focal length. Therefore, the exterior orientation elements of the camera at $t$ are $\{X_s^t, Y_s^t, Z_s^t, \omega_t, \varphi_t + \alpha, \kappa_t\}$.
(2) The imaging model of instantaneous strip images

Given that the instantaneous strip image at time $t$ is extremely narrow, it satisfies the following collinearity equation:

$\begin{bmatrix} 0 \\ y_p \\ -f \end{bmatrix} = s R_\alpha R_t \begin{bmatrix} X - X_s^t \\ Y - Y_s^t \\ Z - Z_s^t \end{bmatrix}$ (9)

$R_\alpha = \begin{bmatrix} \cos\alpha & 0 & \sin\alpha \\ 0 & 1 & 0 \\ -\sin\alpha & 0 & \cos\alpha \end{bmatrix}$ (10)

where $s$ is the scale factor, and $R_\alpha$ and $R_t$ are the rotation matrices caused by the camera rotation and the satellite motion, respectively; $R_t$ can be obtained from $\omega_t, \varphi_t, \kappa_t$ in Equations (4)–(6). $(X, Y, Z)$ are the coordinates of a ground point, and $y_p$ is the vertical coordinate of the corresponding image point.

Furthermore, the rapid motion of the camera along the track direction produces dynamic deformation in the vertical direction during exposure. To account for this, we employ a displacement $y_{IMC}$ to mitigate the deformation, expressed by Equation (11):

$y_{IMC} = P f \sin\alpha \cos\omega_t$ (11)

$P = \dfrac{V}{H\delta}$ (12)

where $V$ and $H$ are the satellite’s velocity and orbital altitude, respectively, and $\delta$ is the angular velocity of the lens in the cross-track direction. The imaging model can then be written as:

$\begin{bmatrix} 0 \\ y_p + y_{IMC} \\ -f \end{bmatrix} = s R_\alpha R_t \begin{bmatrix} X - X_s^t \\ Y - Y_s^t \\ Z - Z_s^t \end{bmatrix}$ (13)

Multiplying both sides on the left by $R_\alpha^T$ gives:

$\begin{bmatrix} f\sin\alpha \\ y_p + y_{IMC} \\ -f\cos\alpha \end{bmatrix} = s R_t \begin{bmatrix} X - X_s^t \\ Y - Y_s^t \\ Z - Z_s^t \end{bmatrix}$ (14)
If we define $N_x$, $N_y$, and $N_z$ as follows:

$N_x = r_{11}(X - X_s^t) + r_{12}(Y - Y_s^t) + r_{13}(Z - Z_s^t)$ (15)

$N_y = r_{21}(X - X_s^t) + r_{22}(Y - Y_s^t) + r_{23}(Z - Z_s^t)$ (16)

$N_z = r_{31}(X - X_s^t) + r_{32}(Y - Y_s^t) + r_{33}(Z - Z_s^t)$ (17)

where $r_{ij}$ is the element in the $i$-th row and $j$-th column of the rotation matrix $R_t$, then the scale factor $s$ can be eliminated by dividing the first and second rows of Equation (14) by its third row:

$\tan\alpha = -\dfrac{N_x}{N_z}$ (18)

$y_p + y_{IMC} = -f\cos\alpha\,\dfrac{N_y}{N_z}$ (19)
Then, the panoramic camera coordinates $(x_p, y_p)$ follow from the collinearity equations:

$x_p = f\tan^{-1}\left(-\dfrac{N_x}{N_z}\right)$ (20)

$y_p = -Pf\sin\alpha\cos\omega_t - f\cos\alpha\,\dfrac{N_y}{N_z}$ (21)

In summary, the relationship between the ground coordinates $(X, Y, Z)$ and the panoramic camera coordinates $(x_p, y_p)$ is modeled with the following 14 parameters: the camera’s initial exterior orientation parameters $\{X_s^0, Y_s^0, Z_s^0, \omega_0, \varphi_0, \kappa_0\}$, the coefficients of their linear variation over $t$, $\{X_s^1, Y_s^1, Z_s^1, \omega_1, \varphi_1, \kappa_1\}$, the dynamic deformation coefficient $P$, and the camera focal length $f$.
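To make the model concrete, the following is a minimal sketch, in Python with NumPy, of the forward projection defined by Equations (1)–(8) and (15)–(21). It is illustrative rather than the authors’ implementation; in particular, the angle convention in `rotation_matrix` is an assumption that may differ from the paper’s.

```python
# Minimal sketch (not the authors' code) of the forward projection:
# ground point (X, Y, Z) -> panoramic coordinates (x_p, y_p) at scan time t.
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """R_t from the three time-dependent angles (radians); convention assumed."""
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(omega), -np.sin(omega)],
                   [0, np.sin(omega), np.cos(omega)]])
    Ry = np.array([[np.cos(phi), 0, np.sin(phi)],
                   [0, 1, 0],
                   [-np.sin(phi), 0, np.cos(phi)]])
    Rz = np.array([[np.cos(kappa), -np.sin(kappa), 0],
                   [np.sin(kappa), np.cos(kappa), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx

def project(params, ground, t):
    """Apply Equations (1)-(6) for the pose at t, then (15)-(21)."""
    Xs0, Ys0, Zs0, w0, p0, k0, Xs1, Ys1, Zs1, w1, p1, k1, P, f = params
    Xs, Ys, Zs = Xs0 + Xs1 * t, Ys0 + Ys1 * t, Zs0 + Zs1 * t
    w, p, k = w0 + w1 * t, p0 + p1 * t, k0 + k1 * t
    Nx, Ny, Nz = rotation_matrix(w, p, k) @ (np.asarray(ground, float)
                                             - np.array([Xs, Ys, Zs]))
    alpha = np.arctan2(-Nx, Nz)                      # Equation (18)
    x_p = f * alpha                                  # Equation (20)
    y_p = (-P * f * np.sin(alpha) * np.cos(w)
           - f * np.cos(alpha) * Ny / Nz)            # Equation (21)
    return x_p, y_p
```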

3.3. Automated Orientation of Corona KH-4B Images Based on Generalized Control

We orient the KH-4B images by solving for the 14 parameters using generalized control points. Specifically, the image matching technique proposed in Section 4 is employed to automatically obtain a large number of well-distributed match points between the KH-4B image and the reference image. Let a pair of match points have coordinates $(x_i, y_i)$ in the KH-4B image and $(x_i', y_i')$ in the reference image. Using the geographic information of the reference image, the object coordinates $(X_i, Y_i)$ corresponding to $(x_i', y_i')$ can be obtained, and the elevation $Z_i$ can be interpolated from a digital elevation model (DEM). This yields the generalized control information $(x_i, y_i, X_i, Y_i, Z_i)$.
Given that the imaging model has 14 parameters and each generalized control point can provide two equations, at least seven points are required to solve the parameters using Equations (20) and (21). When more generalized control points are available, these parameters can be solved using a least squares adjustment, which will enhance the calculation accuracy and reliability. Additionally, as the collinearity equations are non-linear, they must be linearized and require relatively accurate initial parameters.
However, Corona KH-4B images provide neither orientation parameters nor auxiliary information from which they could be calculated, such as the principal point coordinates, lens distortion coefficients, reference coordinates, satellite position, satellite velocity, and satellite attitude. As described in Section 3.1, we orient each sub-image separately instead of the whole image. Although the principal point of a sub-image deviates from the original photographic principal point, this deviation is compensated by the exterior orientation elements. In this study, the initial values of $X_s^0$ and $Y_s^0$ are set to the average geographic coordinates $(X_i, Y_i)$ of all generalized control points, $Z_s^0$ is set to 170,000 m based on the satellite orbit, and $\omega_0$ is set to −15° or 15° according to the forward- or backward-looking perspective. The focal length $f$ is set to 0.609602 m, the default value provided by [28], and the remaining nine parameters are set to 0.
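As a worked illustration of this adjustment, the sketch below solves the 14 parameters by Gauss–Newton iteration with a forward-difference numerical Jacobian. It reuses the hypothetical `project` function from the sketch in Section 3.2; the step sizes and convergence threshold are assumptions, not the authors’ settings.

```python
# Hedged sketch: linearized least-squares estimation of the 14 parameters.
# gcps is a list of (x_img, y_img, X, Y, Z); L is the film length in the
# same units as x_img; params0 holds the initial values described above.
import numpy as np

def solve_orientation(params0, gcps, L, iters=20):
    params = np.asarray(params0, dtype=float)
    for _ in range(iters):
        residuals, J = [], []
        for x_obs, y_obs, X, Y, Z in gcps:
            t = x_obs / L                      # Equation (7): scan time from x
            x_c, y_c = project(params, (X, Y, Z), t)
            residuals += [x_obs - x_c, y_obs - y_c]
            row_x, row_y = [], []
            for j in range(14):                # forward-difference Jacobian
                dp = np.zeros(14)
                dp[j] = 1e-6 * max(1.0, abs(params[j]))
                x_d, y_d = project(params + dp, (X, Y, Z), t)
                row_x.append((x_d - x_c) / dp[j])
                row_y.append((y_d - y_c) / dp[j])
            J += [row_x, row_y]
        delta, *_ = np.linalg.lstsq(np.asarray(J), np.asarray(residuals),
                                    rcond=None)
        params += delta
        if np.linalg.norm(delta) < 1e-8:       # converged
            break
    return params
```

Note that each control point contributes two observation equations, so at least seven points are needed before the system becomes solvable, matching the count given above.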

3.4. Orthorectification of KH-4B Panoramic Images Based on Iteration over Time t

Orthorectification is the process of mapping image points from the panoramic KH-4B image to the orthophoto. First, we calculate the ground coordinates $(X, Y, Z)$ corresponding to an orthophoto image point $(x_o, y_o)$. Then, we compute the panoramic coordinates $(x_p, y_p)$ of the ground point $(X, Y, Z)$ based on the imaging model and the solved 14 parameters. However, solving for $x_p$ requires the exposure time $t$ and the exterior orientation elements at $t$, which are unknown. To resolve this circular dependency, we formulate the problem as the minimization of an objective function $E$, which describes the error between $(x_p, y_p)$ and the panoramic image coordinates $(x, y)$ computed from $(X, Y, Z)$, $t(x_p)$, and $\alpha(x_p)$:

$t(x_p) = \dfrac{x_p}{L}$ (22)

$\alpha(x_p) = \dfrac{x_p}{f}$ (23)

$E = \left(x_p - x(t(x_p), \alpha(x_p), X, Y, Z)\right)^2 + \left(y_p - y(t(x_p), \alpha(x_p), X, Y, Z)\right)^2$ (24)

where $x_p$ is the horizontal image coordinate, $L$ is the film length, and $t(x_p)$ and $\alpha(x_p)$ are the scan time and rotation angle of the instantaneous strip image at $x_p$, respectively. $(X, Y, Z)$ are the ground point coordinates, and $x(t(x_p), \alpha(x_p), X, Y, Z)$ and $y(t(x_p), \alpha(x_p), X, Y, Z)$ are the image coordinates calculated from the time $t(x_p)$ and the ground point $(X, Y, Z)$. To find the image coordinates that minimize $E$, this study adopts an iterative approach that updates the exterior orientation elements together with the image coordinates.
The specific steps of the orthorectification process are as follows:
(1) We create a grid for the orthorectified image based on the coverage of the KH-4B image and the desired resolution $s$, and interpolate the elevation $Z$ of each grid point from the DEM.
(2) For each ground point, we initialize $t$ to 0.5, $x_p$ to half the film length, and $y_p$ to 0.
(3) We first calculate $\{X_s^t, Y_s^t, Z_s^t, \omega_t, \varphi_t, \kappa_t\}$ and $\alpha$ from $t$ and the solved 14 parameters. Then, we calculate the terms $x(t(x_p), \alpha(x_p), X, Y, Z)$ and $y(t(x_p), \alpha(x_p), X, Y, Z)$ of Equation (24) according to Equations (20) and (21). Finally, we update $t$ according to $x(t(x_p), \alpha(x_p), X, Y, Z)$. This step is repeated until $E$ is minimized; the process is summarized in Algorithm 1.
(4) We interpolate the grayscale value at $(x_p, y_p)$ on the KH-4B image and assign it to the orthorectified image.
Algorithm 1: The specific steps of orthorectification
Input: ground point coordinates $(X, Y, Z)$; the solved 14 parameters $\{X_s^0, Y_s^0, Z_s^0, \omega_0, \varphi_0, \kappa_0, X_s^1, Y_s^1, Z_s^1, \omega_1, \varphi_1, \kappa_1, P, f\}$; film length $L$.
Output: $(x_p, y_p)$.
1. Initialization: $x_0 \leftarrow L/2$; $y_0 \leftarrow 0$; $t_1 \leftarrow 0.5$; $E \leftarrow \infty$; $i \leftarrow 1$.
2. While $(E \geq 1)$ and $(i < 50)$ do
     $X_s^{t_i}, Y_s^{t_i}, Z_s^{t_i}, \omega_{t_i}, \varphi_{t_i}, \kappa_{t_i}, \alpha_i \leftarrow t_i$
     $(x_i, y_i) \leftarrow f(X_s^{t_i}, Y_s^{t_i}, Z_s^{t_i}, \omega_{t_i}, \varphi_{t_i}, \kappa_{t_i}, \alpha_i, P, f, X, Y, Z)$ // $f(\cdot)$ denotes Equations (20) and (21)
     $E \leftarrow \sqrt{(x_i - x_{i-1})^2 + (y_i - y_{i-1})^2}$
     $t_{i+1} \leftarrow x_i / L$; $(x_p, y_p) \leftarrow (x_i, y_i)$
     $i \leftarrow i + 1$
   End
3. Return: $(x_p, y_p)$
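For concreteness, a minimal Python sketch of Algorithm 1 follows, again assuming the hypothetical `project` helper (and NumPy import) from the sketch in Section 3.2; the one-pixel tolerance mirrors the E < 1 stopping condition.

```python
# Sketch of the time-iterative back-projection: ground -> panoramic image.
def backproject(params, ground, L, tol=1.0, max_iter=50):
    x_prev, y_prev = L / 2.0, 0.0          # initialization: t = 0.5
    for _ in range(max_iter):
        t = x_prev / L                     # Equation (22)
        x_cur, y_cur = project(params, ground, t)
        E = np.hypot(x_cur - x_prev, y_cur - y_prev)
        x_prev, y_prev = x_cur, y_cur
        if E < tol:                        # converged to within one pixel
            break
    return x_prev, y_prev
```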

4. Extraction of Generalized Control Information

Differences in satellite orbit, sensors, and acquisition times result in radiometric, rotational, and scale distortions and changes in ground features between KH-4B images and reference images. To address this issue, this section proposes a robust algorithm for extracting generalized control information.

4.1. Feature Matching of the Corona Image and the Reference Image

4.1.1. Multiscale GU-FAST Feature Detection

To address radiometric distortion and feature point clustering, a detector called GU-FAST is proposed. GU-FAST first detects edges using the Sobel operator [31] and then applies the FAST algorithm with a low threshold to extract $N$ corner points ($N > 2M$, where $M$ is the number of feature points to be detected), which are sorted by their Harris [32] scores. Next, within a suppression radius of $\sqrt{wh/(4M)}$, where $w$ and $h$ are the width and height of the image, respectively, GU-FAST searches for neighboring points and removes them from the set. Finally, the top $M$ key points with the largest responses are retained. To handle scale distortion, a scale space is constructed following [33], and GU-FAST corners are detected at multiple scales.
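The sketch below illustrates the GU-FAST idea with standard OpenCV building blocks; the FAST threshold, the Harris parameters, and the use of the Sobel gradient-magnitude map are assumptions made for illustration.

```python
# Illustrative sketch of GU-FAST: uniformly distributed, Harris-ranked corners.
import cv2
import numpy as np

def gu_fast(img_gray, M=500):
    # Edge response via the Sobel operator [31]
    gx = cv2.Sobel(img_gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(img_gray, cv2.CV_32F, 0, 1)
    edges = cv2.convertScaleAbs(np.hypot(gx, gy))
    # Low-threshold FAST yields N > 2M candidate corners
    kps = cv2.FastFeatureDetector_create(threshold=5).detect(edges)
    # Rank candidates by their Harris [32] response
    harris = cv2.cornerHarris(np.float32(img_gray), 2, 3, 0.04)
    kps.sort(key=lambda k: -harris[int(k.pt[1]), int(k.pt[0])])
    # Radius suppression: drop neighbours of stronger points, keep top M
    radius2 = img_gray.shape[0] * img_gray.shape[1] / (4.0 * M)  # squared radius
    kept = []
    for k in kps:
        if all((k.pt[0] - q.pt[0]) ** 2 + (k.pt[1] - q.pt[1]) ** 2 > radius2
               for q in kept):
            kept.append(k)
            if len(kept) == M:
                break
    return kept
```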

4.1.2. Rotation-Invariant Feature Description

Image rotation and non-linear radiometric distortion pose inevitable challenges when matching KH-4B images against references with non-linear intensity changes, since traditional feature description methods cannot handle such changes and are sensitive to rotation. To address this, a feature descriptor based on multi-directional features is proposed, which uses a multiscale, multi-directional Log–Gabor filter to construct multi-directional structural features (MR):

$MR(x, y, o) = \sum_s A_{so}(x, y)$ (25)

$Norm(x, y) = \sqrt{\sum_o MR(x, y, o)^2}$ (26)

where $A_{so}$ is the Log–Gabor filter response at scale $s$ and orientation $o$, and $Norm(x, y)$ is the norm value.
A. Primary Orientation Estimation
Since the initial orientation of the Log–Gabor filter is fixed, the layer order of MR is highly sensitive to rotation distortion. To address this, a primary orientation estimation algorithm based on the weighted norm of the multi-directional filter responses is proposed. The algorithm proceeds as follows:
(1) Extract the norm values in a circular area around the feature point and apply Gaussian weighting to the area.
(2) Identify evenly distributed sectors of equal size with overlapping regions within the circular area, as shown in Figure 4a. Specifically, we create the first sector with a central angle of $\theta_2$ degrees and rotate it sequentially by $\theta_1$ degrees clockwise, so that adjacent sectors overlap by $(\theta_2 - \theta_1)$ degrees. In this study, we set $\theta_1$ and $\theta_2$ to 5 and 30, respectively, obtaining 72 sectors in total.
(3) Calculate the sum of the weighted norms of all pixels within each sector.
(4) Take the orientation of the central axis of the sector with the largest norm sum as the primary orientation, according to the following equations:

$T = (N_{max} - 1)\,\theta_1 + \dfrac{\theta_2}{2}$ (27)

$PO = \begin{cases} T, & T < 360 \\ T - 360, & T \geq 360 \end{cases}$ (28)

where $N_{max}$ is the index of the sector with the maximum norm sum, and $PO$ is the primary orientation within the range [0°, 360°).
Additionally, the orientation of the central axis of the sector with the second-largest norm sum is taken as a secondary primary orientation if its norm exceeds 70% of the maximum; a feature descriptor is also built with this secondary orientation. A sketch of the sector-based estimation follows.
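The compact sketch below assumes a precomputed, Gaussian-weighted norm patch `wnorm` centered on the feature point (built from Equation (26)); the grid construction is an illustrative choice rather than the authors’ implementation.

```python
# Sketch of primary orientation estimation (steps (1)-(4), Equations (27)-(28)).
import numpy as np

def primary_orientation(wnorm, theta1=5.0, theta2=30.0):
    h, w = wnorm.shape
    yy, xx = np.mgrid[0:h, 0:w]
    ang = np.degrees(np.arctan2(yy - h / 2.0, xx - w / 2.0)) % 360.0
    rad = np.hypot(yy - h / 2.0, xx - w / 2.0)
    inside = rad <= min(h, w) / 2.0            # circular support region
    n_sectors = int(360 / theta1)              # 72 overlapping sectors
    sums = np.empty(n_sectors)
    for i in range(n_sectors):
        start = i * theta1                     # sector [start, start + theta2)
        in_sector = (ang - start) % 360.0 < theta2
        sums[i] = wnorm[in_sector & inside].sum()
    n_max = int(np.argmax(sums)) + 1           # 1-based index, as in Eq. (27)
    T = (n_max - 1) * theta1 + theta2 / 2.0
    return T % 360.0                           # PO, Equation (28)
```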
B. Feature Descriptor Construction
Note that once the primary orientation of each feature point has been obtained, the layer order of the MR feature is adjusted according to the primary orientation, as shown in Table 3. Furthermore, because of the symmetry of the multi-directional filters, the layer order of MR is the same for primary orientations of θ and θ + 180 degrees. For example, the third layer is moved to the first position if the primary orientation falls within the intervals 50–75° or 230–255°.
After that, the process of feature description is described as follows. First, multiple sampling points (12 directions, 3 concentric circles) are determined within the neighborhood of the feature point, as shown in Figure 5a. Second, for each sampling point, the multi-directional filter features (MR) within the circular neighborhood are weighted and summed using a Gaussian kernel, resulting in an o-dimensional sampling vector (Figure 5b). Finally, as shown in Figure 5c, starting from the primary orientation, the sampling vectors of each sampling point are concatenated clockwise to form a complete feature descriptor.

4.1.3. Feature Matching

We first perform pairwise matching using the nearest neighbor distance ratio (NNDR) [17] on each level of the image pyramid to obtain initial matches. Then, mismatches are removed using the fast sample consensus (FSC) method [34]. Subsequently, the matches from all pyramid levels are aggregated, and FSC is applied again to the fused match set. This yields the final feature-matching results, from which the transformation between the image pair is estimated. A simplified sketch of the per-level step is given below.
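In the sketch, the descriptors are assumed to be float32 arrays, and FSC is replaced by OpenCV’s RANSAC-based homography estimation purely for illustration.

```python
# Sketch of NNDR matching plus outlier removal for one pyramid level.
import cv2
import numpy as np

def nndr_match(desc1, desc2, pts1, pts2, ratio=0.8):
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(desc1, desc2, k=2)
            if m.distance < ratio * n.distance]          # NNDR test [17]
    if len(good) < 4:                                    # too few for a model
        return np.empty((0, 2)), np.empty((0, 2)), None
    src = np.float32([pts1[m.queryIdx] for m in good])
    dst = np.float32([pts2[m.trainIdx] for m in good])
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    if mask is None:
        return src, dst, None
    inliers = mask.ravel().astype(bool)
    return src[inliers], dst[inliers], H
```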

4.2. Pyramid-Wise Template Matching

To further improve the matching performance, a pyramid-wise template matching strategy is applied. First, the reference image is resampled based on the estimated transformation matrix. Then, two image pyramids are constructed for the reference image and the KH-4B image, respectively. Finally, a template matching algorithm called CFOG [25] is employed for precise matching at each layer of pyramid images. The detailed process is as follows.
1. At the top level of the pyramid, corner features of the KH-4B image gradient map are extracted using the GU-FAST algorithm.
2. The corner points detected on the KH-4B image are mapped to the reference image based on the transformation matrix, and template matching is performed using CFOG [25].
3. The matches are mapped to the next level of the image pyramid according to the resolution ratio between pyramid levels.
4. Steps 1–3 are repeated until the original resolution is reached (see the sketch below).
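In the schematic sketch below, `template_match` is a hypothetical stand-in for CFOG [25] (any dense similarity matcher with the same interface could be substituted), and isotropic pyramid scaling is assumed.

```python
# Sketch of pyramid-wise template matching (steps 1-4 above).
def pyramid_refine(kh4b_pyr, ref_pyr, pts_top, transform, template_match):
    """Pyramids are lists of images ordered from coarse (0) to fine (-1)."""
    pts_kh = list(pts_top)
    pts_ref = [transform(p) for p in pts_kh]   # map corners to the reference
    for level in range(len(kh4b_pyr)):
        kh_img, ref_img = kh4b_pyr[level], ref_pyr[level]
        # Refine each predicted reference location by template matching
        pts_ref = [template_match(kh_img, ref_img, pk, pr)
                   for pk, pr in zip(pts_kh, pts_ref)]
        if level < len(kh4b_pyr) - 1:
            # Map matches to the next (finer) level by the resolution ratio
            s = kh4b_pyr[level + 1].shape[0] / kh_img.shape[0]
            pts_kh = [(x * s, y * s) for x, y in pts_kh]
            pts_ref = [(x * s, y * s) for x, y in pts_ref]
    return pts_kh, pts_ref
```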

4.3. Multi-Threshold Matching Enhancement

Local changes in the scene are an inevitable problem in matching multi-temporal remote sensing images, which can cause template matching to fall into local optima, producing erroneous matches. This significantly reduces the accuracy of control information and may even lead to failure in image orientation. Traditional match filtering methods struggle to eliminate these unreliable matches due to the following reasons: (1) The global reprojection error thresholds tend to be large due to the complex imaging model of KH-4B images, even when the matching points are completely correct. (2) Setting similarity thresholds becomes challenging due to non-linear radiometric distortions. (3) The panoramic camera model fails to converge when relatively low-accuracy generalized control points (16–32 m) from low-resolution layers in the image pyramid are used.
Aiming at eliminating the incorrect matches accurately, this study proposes a multi-threshold matching enhancement strategy (MTE) based on the scale and rotation change in a group of local feature points. Upon jointly considering a wide range of feature points, the unreliable points located in the change areas can be effectively removed, and more correct matches are found. The specific steps are as follows.
For a matching pair $P_k$ in the KH-4B image and $P_r$ in the reference image, we detect feature points with the FAST operator in a local area (~500 pixels) around $P_k$, obtaining a group of feature points. Considering that the scale and rotation differences between the KH-4B image and the reference image have already been roughly eliminated, the offset of each newly detected point from $P_k$ can be applied directly to $P_r$ to predict its corresponding point on the reference image. Specifically, for a new feature point $P_i$, its initial corresponding point is $P_i' = P_r - P_k + P_i$. After that, we use template matching to optimize the initial matching points and estimate a local transformation matrix from the refined matches, from which the local scale change and rotation angle can be calculated. We set the scale and rotation thresholds to 20% and 5°, respectively, based on the largest differences plausible between KH-4B and reference imagery. If the local scale and rotation differences are within these limits, we consider the local structure unchanged and retain the new matches; otherwise, the land cover is considered changed, and the new matches are discarded. A sketch of this acceptance test follows.
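The sketch below uses OpenCV’s partial-affine (similarity) estimator for the local transformation; the function name and array layout are illustrative.

```python
# Sketch of the MTE acceptance test: keep local matches only if the local
# similarity transform stays within the scale (20%) and rotation (5 deg) limits.
import cv2
import numpy as np

def mte_accept(local_kh, local_ref, max_scale=0.20, max_rot_deg=5.0):
    """local_kh/local_ref: Nx2 float32 arrays of refined local matches."""
    A, _ = cv2.estimateAffinePartial2D(local_kh, local_ref)
    if A is None:
        return False
    # For a similarity transform A = [[s*cos, -s*sin, tx], [s*sin, s*cos, ty]]
    scale = np.hypot(A[0, 0], A[1, 0])
    rot = abs(np.degrees(np.arctan2(A[1, 0], A[0, 0])))
    return abs(scale - 1.0) <= max_scale and rot <= max_rot_deg
```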

4.4. Model-Guided Matching

After obtaining sufficient, relatively high-accuracy matches from the above matching process, we solve for the 14 parameters using the imaging model described in Section 3.2 and orthorectify the KH-4B image. At this point, the notable geometric distortions caused by the intricate imaging process have been significantly reduced. We then rematch the orthorectified image against the reference image to further improve accuracy. As shown in Figure 6, we first retain the previously obtained matching points in the reference image as feature points and discard their previous correspondences on the KH-4B image. Second, we project the feature points onto the KH-4B image using the calculated imaging parameters. Finally, we take the feature points as input and employ template matching between the reference image and the coarsely orthorectified image to refine the matching results. The projected point on the KH-4B image from the second step and the adjusted feature point on the reference image from the last step form a pair of generalized control points. A sketch of this projection-and-refine loop is given below.
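The minimal sketch below reuses the hypothetical `backproject` (Section 3.4) and `template_match` helpers from the earlier sketches; it is illustrative only.

```python
# Sketch of model-guided rematching: project reference-side points with the
# solved 14 parameters, then refine by template matching.
def model_guided_rematch(params, ref_pts_geo, ref_img, ortho_img, L,
                         template_match):
    gcps = []
    for (X, Y, Z), ref_pt in ref_pts_geo:
        kh_pt = backproject(params, (X, Y, Z), L)      # onto the KH-4B image
        ref_refined = template_match(ortho_img, ref_img, kh_pt, ref_pt)
        gcps.append((kh_pt, ref_refined))              # generalized control pair
    return gcps
```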

5. Experiments and Results

To thoroughly evaluate the accuracy and generalization of 2OC, KH-4B images (listed in Table 4) from different locations with diverse terrain features and complex distortions were used as validation data. The evaluation was conducted by quantifying the accuracy of orientation and orthorectification.

5.1. The Accuracy of Orientation

We first assess the accuracy of image orientation, evaluated using the orientation residuals and the recovered attitude parameters. All obtained correspondences between the KH image and the reference orthophoto are used as generalized control points. The root mean square error (RMSE) of the generalized control points is calculated as follows:

$m = \sqrt{\dfrac{\sum_i \left( (x_i - P(x_i))^2 + (y_i - P(y_i))^2 \right)}{n}}$ (29)

where $(x_i, y_i)$ are the image coordinates of a generalized control point, $P(x_i)$ and $P(y_i)$ are the image coordinates calculated using the panoramic camera model, $n$ is the number of generalized control points, and $m$ is the RMSE of the residuals.
Figure 7 shows the generalized control points required for DS1101-1069DF090b image orientation, extracted using 2OC. Overall, the 2OC approach offers a quantity and distribution of generalized control points that manual extraction methods cannot achieve. In the zoomed-in view, the precision of these generalized control points reaches a level comparable to human recognition accuracy. The experimental results in Table 5 indicate the following: (1) The overall accuracy is better than 2 pixels, with significantly higher accuracy in the central image blocks than in the edge blocks, which may be attributed to severe panoramic distortion and film deformation at the edges. (2) Model-guided matching effectively reduces the impact of panoramic distortion on the accuracy of the control information, improving the orientation accuracy by 30% to 45%. For comparison with the state-of-the-art work [28], we conducted experiments on the DS1117-2071DF008 dataset. The results show that the generalized control points extracted by 2OC outperform those of [28] in quantity and distribution. The orientation accuracies (in pixels) are a (1.54), b (1.17), c (1.30), and d (1.56), all better than the 1.94 pixels reported in [28].
Furthermore, to verify the generalization capability, we applied 2OC to the KH-4B images in Table 5 and presented the orthorectified images and mean errors of orientation in Figure 8. The overall accuracy is better than 2 pixels. The Gansu Province, known for its Loess Plateau, exhibits characteristics of weak texture, repetitive patterns, and significant speckle noise. Beijing and Vermont, USA, represent cases with land cover changes and radiometric distortions. The snow-covered region in Tibet leads to overexposure and severe non-linear radiometric distortion. The KH-4B images in Burkina Faso exhibit considerable noise due to camera and terrain factors. The area near the Ob River in Russia has numerous small lakes, where frozen water bodies in the KH-4B images are overexposed. In contrast, the water bodies in the reference image are underexposed, resulting in severe radiometric distortion. Moreover, changes in the lake edges over a 50-year period are observed. Figure 9 provides a detailed demonstration of complex distortions, including non-linear radiometric distortion, land cover changes, image noise, weak texture, cloud cover, and repetitive patterns. These results demonstrate that 2OC exhibits high generalization capability and can handle KH-4B images with various terrain types and complex distortions.
Table 6 lists the detailed attitude parameters for the DS1101-1069DF090 dataset. The attitude parameters vary between sub-images because of their different principal point coordinates, but they should follow certain patterns. Here, X represents the east–west direction, Y the north–south direction, and Z the plumb-line direction. The true scan time for a full panoramic image is approximately 0.36 s (84,000 pixels), so the scan time for a sub-image is approximately 0.154 s (36,000 pixels) or 0.103 s (24,000 pixels). In this experiment, however, the scan time of each sub-image is normalized to 1. Therefore, $Y_s^1$ should be around 1.2 km or 0.8 km per unit of normalized time (equivalent to a satellite velocity of approximately 7.8 km s⁻¹), with an orbital altitude of around 170 km. $\varphi_0$ represents the rotation around the Y-axis, i.e., the camera’s scanning angle. $\omega_0$ represents the rotation around the X-axis and should be around −15°. However, the experimental results vary between −10° and −20°, possibly because the satellite’s attitude was not strictly aligned with the plumb line, producing a deviation in the camera’s scanning direction. $\kappa_0$ represents the rotation around the Z-axis (plumb-line direction), and a deviation of approximately 10° can be observed from the distribution of generalized control points on the Google image in Figure 7d. The dynamic correction parameter $P$ is larger for a and d than for b and c, which may be attributed to more severe film deformation at the ends of the film.

5.2. The Accuracy of Orthorectification

In this section, we present the registration checkerboard image between DS1101-1069DF092c and the reference image, as shown in Figure 10, and test the accuracy of the orthophoto generated using the proposed model. Orthorectified images are closely related to attitude parameters, and their accuracy reflects the accuracy of the attitude parameters. For a comprehensive evaluation, two metrics, the mosaic error and the error of the generalized control points, are applied. Specifically, we evaluate the mosaic accuracy based on three aspects: standard deviation (SD), maximum value (Max), and mean value (Mean) of the differences between the coordinate match points located in the overlapping regions of adjacent sub-image blocks. Table 7 provides detailed information accounting for the mosaic accuracy between the orthorectified image blocks of DS1101-1069DF089 and DS1101-1069DF090.
According to Table 7, the mosaic errors in the orthorectified images are mostly within 1 pixel, with the maximum value ranging from 1 to 4 pixels and the average value being better than 1 pixel. The mosaic errors at the edges of the image blocks are larger than those in the middle, which is consistent with the image orientation accuracy. This indicates that the proposed 14-parameter panoramic camera model accurately reverts the panoramic imaging process and can be used for orthorectification.
To obtain accurate ground checkpoints, the sub-images were first stitched into a complete orthorectified image using the recovered georeferencing information. Then, 43 ground checkpoints were manually selected; Figure 11 shows their distribution. The RMSE of the ground checkpoints is 3.89 m in the X-direction and 3.29 m in the Y-direction; the corresponding mean errors are 2.69 m and 2.52 m, respectively.

6. Discussion

This section provides a detailed discussion of three critical steps in 2OC that may significantly affect the orientation and orthorectification accuracy, which are the PG stripe correction accuracy, the stability of the feature matching algorithm (NIFT), and the robustness of multi-threshold matching enhancement (MTE) to resist complex distortions.

6.1. The Impact of PG Stripe Correction on Orientation Accuracy

As described in Section 3.2, estimating rotation and translation components using the overlapping regions between adjacent image blocks may be affected by local image deformations, leading to stitching errors and a decrease in orientation accuracy. To simultaneously avoid stitching errors and scanning errors, this study adopts a method of processing individual image blocks before stitching. This is because the rotation and translation components of the image blocks are compensated for in the attitude parameters, including the angular and linear elements.
As shown in Table 8, the accuracy improvement from PG stripe correction for compensating image distortion is not as significant as that reported in [28]. We therefore constructed the PG stripe curves for each sub-image and found that this is mainly because the PG stripe curves of the sub-images are almost linear (as shown in Figure 12, Figure 13 and Figure 14), resembling a simple image rotation; hence, they can be compensated by the angular elements of the attitude parameters. However, in the stitched complete image (with rotation and translation), as depicted in Figure 15, the PG stripe becomes more intricate, likely because of estimation bias in the rotation component. Consequently, PG stripe correction is significant for approaches that stitch the image first and correct afterwards [28].

6.2. The Evaluation of NIFT

To evaluate the performance of NIFT, we created a dataset consisting of 30 pairs of downsampled KH-4B images and reference images (as shown in Table 9) and compared NIFT with SIFT [17], LNIFT [27], and RIFT [26]. For the comparative methods, the default parameters provided by their authors were used. For NIFT, the scale factor, orientation factor, descriptor orientations, and number of concentric circles were set to {4, 6, 12, 3}, respectively. The success rate (SR) and the number of correct matches (NCM) were used as evaluation metrics. SR is calculated as:

$SR = \dfrac{N_s}{N_t}$ (30)

where $N_s$ is the number of successfully matched image pairs and $N_t$ is the total number of image pairs; SR thus reflects the robustness of the matching algorithm.
Table 10 gives the detailed matching results for the four algorithms. Among them, SIFT, as a classical method, achieved a success rate of only 6.6%. The LNIFT algorithm obtained a high NCM of 108, but its SR was only 3.3%. RIFT achieved the same NCM as LNIFT but higher robustness, with an SR of 53%. In comparison, NIFT successfully matched all image pairs and achieved more than three times the NCM of the other algorithms.
To further evaluate the robustness of NIFT against rotation, we selected two representative images from Dataset 3 and rotated them in steps of 5° over the range [0°, 360°], creating 73 image pairs with different rotational distortions. The matching results in Figure 16 show that although the NCM fluctuates to some extent because of the multi-directional filtering features, the algorithm still obtains more than 190 correct matches at every angle.

6.3. The Evaluation of MTE

To evaluate the effectiveness of the multi-threshold matching enhancement strategy (MTE), we designed a series of experiments. For simplicity, 2OC without any matching enhancement strategy is referred to as $2OC_a$; 2OC using a strategy that eliminates incorrect points based on the reprojection error is referred to as $2OC_b$, where the reprojection error threshold is adaptively set to three times the mean error; and 2OC enhanced with MTE is referred to as $2OC_c$. All three methods were applied to the same dataset, which consists of the DS1101-1069DF090 Corona image, a Google Earth orthophoto, and 30 m SRTM DEM data of the same area.
The experimental results in Table 11 show no significant difference between the three methods on sub-images a and b, with an accuracy difference of less than 1 pixel. This is because regions a and b are mountainous areas with little land cover change and only a few low-quality generalized control points. On sub-image c, $2OC_a$ and $2OC_b$ achieved an accuracy of 3–4 pixels, while $2OC_c$ had an error of only 1.3 pixels. This is mainly due to the presence of two substantially changed rivers in region c, which produce a number of low-quality generalized control points and a large number of matching errors during image orientation. On sub-image d, where land cover changes are the most pronounced and low-quality generalized control points dominate, both $2OC_a$ and $2OC_b$ failed, while $2OC_c$ achieved an error of 1.4 pixels. This indicates the following: (1) Complex distortions in the images indeed generate a large number of low-reliability match points, severely degrading the accuracy of image orientation and even preventing convergence. (2) The multi-threshold matching enhancement strategy effectively reduces the proportion of low-quality match points by eliminating low-reliability matches and adding high-reliability ones, allowing the panoramic camera model estimation to converge correctly.

7. Conclusions

This study presents a method, referred to as 2OC, for the orientation and orthorectification of Corona KH-4B images. First, to eliminate the focal length and panoramic distortions in KH-4B images, a 14-parameter panoramic mathematical model and a time-iterative orthorectification method are proposed. Second, to counter complex distortions (radiometric/geometric distortions, weak texture, land cover changes, etc.) and automatically extract control information, a robust generalized control information extraction algorithm is proposed. Specifically, this comprises the feature matching algorithm NIFT with its maximum-sector-norm-based primary orientation estimation, the multi-threshold matching enhancement strategy (MTE) based on local texture, scale, and orientation, and the model-guided matching strategy.
Next, the generalization of 2OC is validated using KH-4B images with diverse terrain features (plateaus, glaciers, plains, hills, basins, etc.) and complex distortions from different locations worldwide (USA, Russia, Ethiopia, Burkina Faso, China) (Table 5). The results demonstrate an orientation accuracy of better than 2 pixels, an orthorectification mosaic accuracy of approximately 1 pixel, and a planimetric accuracy of better than 4 m. Furthermore, detailed ablation and comparative experiments on the proposed NIFT, MTE, and model-guided matching modules highlight their significant contributions to the generalization of 2OC: for example, they allow temporal differences of up to 50 years between the reference and KH-4B images, non-linear radiometric distortion, rotational distortion over [0°, 360°), scale distortion up to 1:4, and local land cover changes. Additionally, the rationality of processing sub-images separately and the impact of PG stripe correction are analyzed in detail.
Finally, the application of 2OC is not limited to handling Corona KH-4B images. By replacing the image mathematical model, it can be applied to a wider range of historical remote sensing images with complex multi-source and multi-temporal differences. In future studies, we will (1) attempt to use the rational function model commonly used in the remote sensing field to rectify KH-4B images for handling other historical remote sensing images with significant temporal differences and (2) explore the combination of manual features and deep learning features to extract control information more rapidly.

Author Contributions

Conceptualization, Z.H., Y.L. and L.Z.; methodology, Z.H., Y.L. and L.Z.; validation, Z.H., Y.L. and L.Z.; formal analysis, Z.H. and Y.S.; investigation, X.H. and Z.H.; data curation, Z.H., H.A. and Y.S.; writing—original draft preparation, Z.H. and Y.L.; writing—review and editing, Z.H., Y.L., C.Z. and L.Z.; supervision, Y.L. and L.Z.; project administration, Y.L.; funding acquisition, Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Basic scientific research project of the Chinese Academy of Surveying and Mapping (CASM), grant number AR2305.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dashora, A.; Lohani, B.; Malik, J.N. A repository of earth resource information—CORONA satellite programme. Curr. Sci. 2007, 92, 926–932. [Google Scholar]
  2. Madden, F. The CORONA camera system: Itek’s contribution to world security. J. Br. Interplanet. Soc. 1999, 52, 379–396. [Google Scholar]
  3. Cloud, J. Imaging the World in a Barrel: CORONA and the Clandestine Convergence of the Earth Sciences. Soc. Stud. Sci. 2001, 31, 231–251. [Google Scholar] [CrossRef]
  4. Ur, J. CORONA satellite photography and ancient road networks: A northern Mesopotamian case study. Antiquity 2003, 77, 102–115. [Google Scholar] [CrossRef]
  5. Casana, J. Global-Scale Archaeological Prospection using CORONA Satellite Imagery: Automated, Crowd-Sourced, and Expert-led Approaches. J. Field Archaeol. 2020, 45, S89–S100. [Google Scholar] [CrossRef]
  6. Philip, G.; Donoghue, D.; Beck, A.; Galiatsatos, N. CORONA satellite photography: An archaeological application from the Middle East. Antiquity 2002, 76, 109–118. [Google Scholar] [CrossRef]
  7. Watanabe, N.; Nakamura, S.; Liu, B.; Wang, N. Utilization of Structure from Motion for processing CORONA satellite images: Application to mapping and interpretation of archaeological features in Liangzhu Culture, China. Archaeol. Res. Asia 2017, 11, 38–50. [Google Scholar] [CrossRef]
  8. Rizayeva, A.; Nita, M.D.; Radeloff, V.C. Large-area, 1964 land cover classifications of Corona spy satellite imagery for the Caucasus Mountains. Remote Sens. Environ. 2023, 284, 113343. [Google Scholar] [CrossRef]
  9. Narama, C.; Shimamura, Y.; Nakayama, D.; Abdrakhmatov, K. Recent changes of glacier coverage in the western Terskey-Alatoo range, Kyrgyz Republic, using Corona and Landsat. Ann. Glaciol. 2006, 43, 223–229. [Google Scholar] [CrossRef]
  10. Andersen, G.L. How to detect desert trees using corona images: Discovering historical ecological data. J. Arid Environ. 2006, 65, 491–511. [Google Scholar] [CrossRef]
  11. Narama, C.; Kääb, A.; Duishonakunov, M.; Abdrakhmatov, K. Spatial variability of recent glacier area changes in the Tien Shan Mountains, Central Asia, using Corona (~1970), Landsat (~2000), and ALOS (~2007) satellite data. Glob. Planet. Change 2010, 71, 42–54. [Google Scholar] [CrossRef]
  12. Altmaier, A.; Kany, C. Digital surface model generation from CORONA satellite images. ISPRS J. Photogramm. Remote Sens. 2002, 56, 221–235. [Google Scholar] [CrossRef]
  13. Jacobsen, K. Calibration and Validation of Corona kh-4b to Generate Height Models and Orthoimages. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, 2020, 151–155. [Google Scholar] [CrossRef]
  14. Lauer, B. Exploiting Space-Based Optical and Radar Imagery to Measure and Model Tectonic Deformation in Continental Areas. Ph.D. Thesis, Université Paris Cité, Paris, France, 2019. [Google Scholar]
  15. Sohn, H.G.; Kim, G.-H.; Yom, J.-H. Mathematical modelling of historical reconnaissance CORONA KH-4B Imagery. Photogramm. Rec. 2004, 19, 51–66. [Google Scholar] [CrossRef]
  16. Dashora, A.; Sreenivas, B.; Lohani, B.; Malik, J.N.; Shah, A.A. GCP collection for corona satellite photographs: Issues and methodology. J. Indian Soc. Remote Sens. 2006, 34, 153–160. [Google Scholar] [CrossRef]
  17. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  18. Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. Speeded-Up Robust Features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359. [Google Scholar] [CrossRef]
  19. Sedaghat, A.; Mokhtarzade, M.; Ebadi, H. Uniform Robust Scale-Invariant Feature Matching for Optical Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2011, 49, 4516–4527. [Google Scholar] [CrossRef]
  20. Bhattacharya, A.; Bolch, T.; Mukherjee, K.; King, O.; Menounos, B.; Kapitsa, V.; Neckel, N.; Yang, W.; Yao, T. High Mountain Asian glacier response to climate revealed by multi-temporal satellite observations since the 1960s. Nat. Commun. 2021, 12, 4133. [Google Scholar] [CrossRef]
  21. Casana, J.; Cothren, J. Stereo analysis, DEM extraction and orthorectification of CORONA satellite imagery: Archaeological applications from the Near East. Antiquity 2008, 82, 732–749. [Google Scholar] [CrossRef]
  22. Nita, M.D.; Munteanu, C.; Gutman, G.; Abrudan, I.V.; Radeloff, V.C. Widespread forest cutting in the aftermath of World War II captured by broad-scale historical Corona spy satellite photography. Remote Sens. Environ. 2018, 204, 322–332. [Google Scholar] [CrossRef]
  23. Shin, S.W.; Schenk, T. Rigorous Modeling of the First Generation of the Reconnaissance Satellite Imagery. J. Remote Sens. 2008, 24, 223–233. [Google Scholar]
  24. Ye, Y.; Shan, J.; Bruzzone, L.; Shen, L. Robust Registration of Multimodal Remote Sensing Images Based on Structural Similarity. IEEE Trans. Geosci. Remote Sens. 2017, 55, 2941–2958. [Google Scholar] [CrossRef]
  25. Ye, Y.X.; Bruzzone, L.; Shan, J.; Bovolo, F.; Zhu, Q. Fast and Robust Matching for Multimodal Remote Sensing Image Registration. IEEE Trans. Geosci. Remote Sens. 2019, 57, 9059–9070. [Google Scholar] [CrossRef]
  26. Li, J.; Hu, Q.; Ai, M. RIFT: Multi-Modal Image Matching Based on Radiation-Variation Insensitive Feature Transform. IEEE Trans. Image Process. 2020, 29, 3296–3310. [Google Scholar] [CrossRef] [PubMed]
  27. Li, J.; Xu, W.; Shi, P.; Zhang, Y.; Hu, Q. LNIFT: Locally Normalized Image for Rotation Invariant Multimodal Feature Matching. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14. [Google Scholar] [CrossRef]
  28. Ghuffar, S.; Bolch, T.; Rupnik, E.; Bhattacharya, A. A Pipeline for Automated Processing of Declassified Corona KH-4 (1962–1972) Stereo Imagery. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14. [Google Scholar] [CrossRef]
  29. Sarlin, P.-E.; DeTone, D.; Malisiewicz, T.; Rabinovich, A. SuperGlue: Learning feature matching with graph neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 4938–4947. [Google Scholar]
  30. Woolsey, R.J. CORONA and the Intelligence Community. Stud. Intell. 1996, 39, 14. [Google Scholar]
  31. Kanopoulos, N.; Vasanthavada, N.; Baker, R.L. Design of an image edge detection filter using the Sobel operator. IEEE J. Solid-State Circuits 1988, 23, 358–367. [Google Scholar] [CrossRef]
  32. Harris, C.G.; Stephens, M.J. A Combined Corner and Edge Detector. In Proceedings of the Alvey Vision Conference, Manchester, UK, 31 August–2 September 1988. [Google Scholar]
  33. Leutenegger, S.; Chli, M.; Siegwart, R.Y. BRISK: Binary Robust invariant scalable keypoints. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2548–2555. [Google Scholar]
  34. Wu, Y.; Ma, W.; Gong, M.; Su, L.; Jiao, L. A Novel Point-Matching Algorithm Based on Fast Sample Consensus for Image Registration. IEEE Geosci. Remote Sens. Lett. 2015, 12, 43–47. [Google Scholar] [CrossRef]
Figure 1. The flowchart of 2OC. M represents the transformation matrix between images, and GCP is the abbreviation for generalized control points.
Figure 2. Schematic of film and PG reference data in KH-4B missions. All dimensions are in meters unless otherwise stated.
Figure 3. The imaging of KH-4B images. (a) The imaging process; (b) the imaging geometric relationship.
Figure 4. The estimation process of the primary orientation with θ1 = 5, θ2 = 30. (a) computes weighted norm sums for the pixels in each sector, while (b) identifies the sector with the highest norm value and assigns its central axis orientation as the primary orientation.
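As a concrete illustration of the sector voting that Figure 4 depicts, the following is a minimal sketch, assuming θ1 is the angular step between candidate sector axes and θ2 the sector width; the function name and the use of precomputed gradient fields are illustrative choices, not taken from the paper.

```python
# A sketch of sector-based primary-orientation estimation (Figure 4).
# gx, gy are image gradient fields, e.g., from Sobel filtering.
import numpy as np

def primary_orientation(gx, gy, theta1=5.0, theta2=30.0):
    """Return the dominant gradient orientation of a patch, in degrees."""
    mag = np.hypot(gx, gy)                       # gradient norm per pixel
    ang = np.degrees(np.arctan2(gy, gx)) % 360.0 # gradient orientation per pixel
    centers = np.arange(0.0, 360.0, theta1)      # candidate sector axes
    sums = []
    for c in centers:
        # angular distance of each pixel's orientation to the sector axis
        d = np.abs((ang - c + 180.0) % 360.0 - 180.0)
        in_sector = d <= theta2 / 2.0
        sums.append(mag[in_sector].sum())        # weight by gradient norm
    # axis of the strongest sector is taken as the primary orientation
    return centers[int(np.argmax(sums))]
```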
Figure 5. The pipeline of feature description. (a) shows sampling point distribution and numbering; (b) illustrates the construction of the sampling vector for point (3,1); (c) demonstrates feature vector construction by concatenating sampling vectors.
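A minimal sketch of the concatenation step illustrated in Figure 5, under the assumption that each sampling point contributes one pooled value per oriented gradient layer (for instance, the six MR layers of Table 3); the pooling scheme and names here are hypothetical.

```python
# A sketch of the Figure 5 descriptor layout under the stated assumptions.
import numpy as np

def describe(mag_layers, points, radius=3):
    """mag_layers: oriented gradient-norm images (e.g., six MR layers);
    points: (row, col) sampling points at least `radius` px from the border."""
    vecs = []
    for (r, c) in points:
        # one pooled value per orientation layer -> the point's sampling vector
        v = [layer[r - radius:r + radius + 1, c - radius:c + radius + 1].sum()
             for layer in mag_layers]
        vecs.append(v)
    desc = np.asarray(vecs, dtype=float).ravel()  # concatenate sampling vectors
    return desc / (np.linalg.norm(desc) + 1e-12)  # normalize for L2 matching
```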
Figure 6. Model-guided image matching. First, transfer the reference image's feature points to the panoramic image. Then, apply template matching between the orthorectified local KH image block and the reference image. Finally, use these matching results as generalized control points.
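A minimal sketch of the template-matching step in Figure 6, assuming the local KH-4B block has already been orthorectified by the current model and that the predicted point lies far enough from the reference-image border; refine_gcp and the 0.7 acceptance threshold are illustrative choices, not the paper's implementation.

```python
# A sketch of model-guided match refinement via normalized cross-correlation.
import cv2
import numpy as np

def refine_gcp(ortho_block, ref_image, ref_xy, search=50):
    """Template-match a locally orthorectified KH block inside a reference window.
    ortho_block: small orthorectified KH-4B patch (same dtype as ref_image);
    ref_xy: predicted (x, y) of the block's upper-left corner in the reference."""
    x, y = ref_xy
    h, w = ortho_block.shape
    win = ref_image[y - search:y + search + h, x - search:x + search + w]
    score = cv2.matchTemplate(win, ortho_block, cv2.TM_CCOEFF_NORMED)
    _, best, _, (dx, dy) = cv2.minMaxLoc(score)
    if best < 0.7:                       # reject weak correlations
        return None
    # refined reference position of the block's upper-left corner, plus score
    return (x - search + dx, y - search + dy, best)
```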
Figure 7. The matches, i.e., the generalized control points, between the DS1101-1069DF090b image (c) and the reference Google Earth orthophoto (d). Green dots indicate the control points, and red numbers indicate the point numbers. (a,b,e,f) are the corresponding zoomed-in areas in (c,d).
Figure 8. The orientation accuracy in various areas. The red numbers in the yellow squares represent the orientation mean square error of each panoramic image. (a) The orthophoto of Russia; (b) The orthophoto of Gansu, China; (c) The orthophoto of Beijing, China; (d) The orthophoto of Vermont, USA; (e) The partial map of Russia; (f) The partial map of the USA; (g) The world map; (h) The China map; (i) The orthophoto of Chongqing, China; (j) The orthophoto of Arizona, USA; (k) The Burkina Faso map; (l) The Ethiopia map; (m) The orthophoto of Burkina Faso; (n) The orthophoto of Ethiopia; (o) The orthophoto of the Qinghai–Tibet Plateau.
Figure 9. The registration checkerboard of the KH-4B orthophoto and the reference image with complex image contents, where the image with a red dot is the KH-4B orthophoto. (a–c) are located on the Qinghai–Tibet Plateau and in Beijing, China, with large NID; (d,e) are located in America and Beijing with land cover change; (f,h) are both located in Burkina Faso with noise; (g) is located in Gansu Province, China, with few textures; (i) is located in Ethiopia with large cloud coverage; (j) is located in Chongqing, China, with repetitive textures.
Figure 10. The registration results of the KH-4B image DS1101-1069DF092c and the reference image. (a–c) show the zoomed-in registration results of unchanged mountains; (d) shows the zoomed-in registration result of changed rivers; (e) shows the registration checkerboard of the KH-4B orthophoto and the reference image; (f) shows the zoomed-in registration result of the changed mountain area; (g,h) show the zoomed-in registration results of the changed plain area, where rivers have been transformed into farmland and roads have been altered; (i) shows the zoomed-in registration result of partially unchanged mountainous regions where the rivers have experienced minor changes.
Figure 11. The orthophoto of DS1101-1069DF089, along with 42 checkpoints marked with red-cross dots.
Figure 12. The PG stripe curves of the four sub-images, a, b, c, d, of DS1101-1069DF089.
Figure 13. The PG stripe curves of the four sub-images, a, b, c, d, of DS1101-1069DF090.
Figure 14. The PG stripe curves of the four sub-images, a, b, c, d, of DS1105-1071DF141.
Figure 15. The PG stripe curve of the stitched panoramic image DS1105-1071DF141.
Figure 16. The NCM curve of NIFT under various rotational distortions.
Table 1. The details of the KH-4B images and the reference images used in the experiments.

| Region | Terrain | Reference Image | Distortion |
|---|---|---|---|
| USA, Russia, Ethiopia, Burkina Faso, and China's Beijing, Chongqing, Gansu, and the Qinghai–Tibet Plateau | Loess plateau, glacier, plain, hill, high mountain | Google Earth, Bing, and ArcGIS | Temporal disparities (50 years), scale variations (1:5), rotational disparities (0–360 degrees), radiometric differences, and local land cover alterations |
Table 2. The detailed information of the KH images.

| Mission Designators | Film Size | Micron Size | File Size |
|---|---|---|---|
| KH-1; KH-2; KH-3 | 70 × 745 mm | 12 micron (1800 dpi) | 80 MB (4 files) |
| KH-4; KH-4A; KH-4B | 70 × 745 mm | 7 micron (3600 dpi) | 319 MB (4 files) |
Table 3. The relationship between the layer order of MR and the primary orientation.

| Primary Orientation | The Order of Layers of MR |
|---|---|
| 350–15°, 170–195° | {0°, 30°, 60°, 90°, 120°, 150°} |
| 20–45°, 200–225° | {30°, 60°, 90°, 120°, 150°, 0°} |
| 50–75°, 230–255° | {60°, 90°, 120°, 150°, 0°, 30°} |
| 80–105°, 260–285° | {90°, 120°, 150°, 0°, 30°, 60°} |
| 110–135°, 290–315° | {120°, 150°, 0°, 30°, 60°, 90°} |
| 140–165°, 320–345° | {150°, 0°, 30°, 60°, 90°, 120°} |
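One way to reproduce the cyclic shifts of Table 3 programmatically is sketched below; the +10° offset used to match the table's interval boundaries is an inference from the listed ranges, not a value stated in the paper.

```python
# A sketch of the Table 3 layer reordering: the six oriented layers are
# cyclically shifted so the layer nearest the primary orientation
# (modulo 180 degrees) leads the list.
def layer_order(primary_deg):
    layers = [0, 30, 60, 90, 120, 150]
    start = int(((primary_deg + 10.0) % 180.0) // 30.0)  # leading-layer index
    return layers[start:] + layers[:start]

assert layer_order(10) == [0, 30, 60, 90, 120, 150]    # row "350-15, 170-195"
assert layer_order(200) == [30, 60, 90, 120, 150, 0]   # row "20-45, 200-225"
```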
Table 4. The details of the multi-region Corona images and reference images.

| Region | Site | Serial Number | Geomorphic Type | Resolution (m) | Reference Source | Reference Resolution (m) |
|---|---|---|---|---|---|---|
| United States | Vermont | DS11161030DF009-10 | Forest, Grassland, Lakes | ~1.8 | Google Earth | 2.3 |
| United States | Arizona | DS11162161DA012-14 | Plateau, Basin, Plain | ~1.8 | Google Earth | 4.2 |
| Russia | Khanty-Mansi | DS11102201DA033-34 | Plain, Lakes | ~1.8 | Bing | 4.2 |
| China | Beijing | DS11011069DF089-94 | Plain, Mountains | ~1.8 | Google Earth | 4.2 |
| China | Chongqing | DS11142119DF045-49 | Hills, Mountains | ~1.8 | Google Earth | 2.3 |
| China | Gansu | DS11162297DA009-10 | Loess Plateau | ~1.8 | Google Earth | 4.2 |
| China | Qinghai–Tibet Plateau | DS11122265DA091-92 | Glacier, Plateau | ~1.8 | Google Earth | 4.2 |
| Ethiopia | South West Shewa | DS11022203DA071-73 | Plateau | ~1.8 | ArcGIS | 4.2 |
| Burkina Faso | Centre-Sud | DS10451058DF045 | Plateau | ~2.75 | ArcGIS | 4.2 |

Distortions shared by all image pairs: land cover change, radiometric distortion, low/repetitive texture, image noise, scale distortion, rotation distortion, panoramic geometry, and cloud occlusion.
Table 5. The RMSE of orientation.

| KH-4B Scene | Sub-image | RMSE, First Stage (pixels) | RMSE, Second Stage (pixels) |
|---|---|---|---|
| DS1101-1069DF089 | a | 3.50 | 1.9 |
| | b | 2.1 | 1.2 |
| | c | 2.1 | 1.19 |
| | d | 2.3 | 1.3 |
| DS1101-1069DF090 | a | 2.88 | 1.9 |
| | b | 2 | 1.2 |
| | c | 1.9 | 1.3 |
| | d | 2.4 | 1.4 |
Table 6. The 14 parameters of the four sub-images of DS1101-1069DF090.

| Parameter | a | b | c | d |
|---|---|---|---|---|
| Xs0 (m) | 4,564,318.00 | 4,564,053.000 | 4,564,089.500 | 4,563,850 |
| Xs1 (m) | 473.785004 | 1010.521729 | 552.231567 | 1008.983704 |
| Ys0 (m) | 384,394.15625 | 386,881.375000 | 387,326.562500 | 391,244.68750 |
| Ys1 (m) | −796.56854 | −1383.837524 | −1283.782349 | −909.642029 |
| Zs0 (m) | 169,839.609375 | 170,634.375000 | 170,839.453125 | 169,477.796875 |
| Zs1 (m) | −812.061096 | −605.189026 | −630.709961 | −196.461411 |
| ω0 (deg) | −11.1166419 | −13.7147451 | −16.6276366 | −19.6613908 |
| ω1 (deg) | 0.07700553 | 0.0135791 | −0.02675713 | −0.2566851 |
| φ0 (deg) | 28.2159374 | 13.3082628 | −4.44374614 | −23.0435035 |
| φ1 (deg) | 0.19194086 | 0.35964561 | 0.37299553 | −0.37213609 |
| κ0 (deg) | −11.274399 | −9.7904165 | −9.5870738 | −10.9570732 |
| κ1 (deg) | −0.014932P: see note | | | |
| P | −0.014932 | 0.017280 | 0.006359 | −0.001736 |
| f (m) | 0.6025 | 0.6028 | 0.609602 | 0.6029 |
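Reading the 0/1 subscripts in Table 6 as constant and rate terms suggests a linear-in-scan-time exterior orientation, e.g., Xs(t) = Xs0 + Xs1·t and ω(t) = ω0 + ω1·t, with P a distortion coefficient and f the focal length. The sketch below evaluates the parameters of sub-image b under that assumption; the linear model and the class layout are illustrative, not a statement of the paper's exact formulation.

```python
# A sketch of evaluating Table 6 parameters as a linear-in-time model.
from dataclasses import dataclass

@dataclass
class PanoramicParams:
    Xs0: float; Xs1: float; Ys0: float; Ys1: float; Zs0: float; Zs1: float
    omega0: float; omega1: float; phi0: float; phi1: float
    kappa0: float; kappa1: float; P: float; f: float

    def at(self, t):
        """Exterior orientation (position in m, attitude in deg) at scan time t."""
        pos = (self.Xs0 + self.Xs1 * t,
               self.Ys0 + self.Ys1 * t,
               self.Zs0 + self.Zs1 * t)
        att = (self.omega0 + self.omega1 * t,
               self.phi0 + self.phi1 * t,
               self.kappa0 + self.kappa1 * t)
        return pos, att

# Sub-image b of DS1101-1069DF090 (values from Table 6):
b = PanoramicParams(4564053.0, 1010.521729, 386881.375, -1383.837524,
                    170634.375, -605.189026, -13.7147451, 0.0135791,
                    13.3082628, 0.35964561, -9.7904165, 0.06073353,
                    0.017280, 0.6028)
print(b.at(0.5))  # exterior orientation at mid-scan
```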
Table 7. The detailed mosaic errors of DS1101-1069DF089 and DS1101-1069DF090.

| Corona Scene | Seam | Metric | X (pixels) | Y (pixels) |
|---|---|---|---|---|
| DS1101-1069DF089 | a-b | SD | 1.2762 | 1.3553 |
| | | Max | 3.7658 | 4.3370 |
| | | Mean | 0.9806 | 1.1052 |
| | b-c | SD | 0.5611 | 0.7644 |
| | | Max | 1.4606 | 1.7897 |
| | | Mean | 0.4570 | 0.6427 |
| | c-d | SD | 0.8033 | 0.8411 |
| | | Max | 2.8138 | 1.9079 |
| | | Mean | 0.6324 | 0.6927 |
| DS1101-1069DF090 | a-b | SD | 0.4865 | 0.5811 |
| | | Max | 1.5006 | 1.5650 |
| | | Mean | 0.3814 | 0.4690 |
| | b-c | SD | 0.4614 | 0.8178 |
| | | Max | 1.5528 | 1.9974 |
| | | Mean | 0.3712 | 0.6845 |
| | c-d | SD | 0.8547 | 0.5226 |
| | | Max | 2.4385 | 1.7930 |
| | | Mean | 0.6790 | 0.4120 |
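The SD/Max/Mean rows of Table 7 are per-axis statistics of the tie-point offsets measured across each seam. A minimal sketch under that reading, assuming signed x/y residuals as input (the random stand-in data are for demonstration only):

```python
# A sketch of the per-axis seam statistics reported in Table 7.
import numpy as np

def mosaic_stats(residuals):
    """residuals: (N, 2) signed x/y offsets (pixels) of tie points on a seam."""
    r = np.asarray(residuals, dtype=float)
    return {"SD": r.std(axis=0, ddof=1),     # standard deviation per axis
            "Max": np.abs(r).max(axis=0),    # largest absolute offset
            "Mean": np.abs(r).mean(axis=0)}  # mean absolute offset

seam_ab = np.random.default_rng(0).normal(0.0, 1.0, size=(100, 2))  # stand-in
print(mosaic_stats(seam_ab))
```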
Table 8. The orientation accuracy (RMSE, in pixels) for the sub-images of DS1101-1069DF089 and DS1101-1069DF090 with/without film deformation adjustment based on the PG stripe.

| KH-4B Scene | Sub-image | Without PG | With PG |
|---|---|---|---|
| DS1101-1069DF089 | a | 2.1 | 1.9 |
| | b | 1.34 | 1.2 |
| | c | 1.29 | 1.19 |
| | d | 1.4 | 1.3 |
| DS1101-1069DF090 | a | 2.2 | 1.9 |
| | b | 1.3 | 1.2 |
| | c | 1.4 | 1.3 |
| | d | 1.5 | 1.4 |
Table 9. The details of the dataset used in feature matching.

| Size (pixels) | Resolution (m) | Number | Differences |
|---|---|---|---|
| 500 × 200 ~ 1400 × 600 | 64~128 | 30 | Radiometric distortion, land cover change, scale/rotation |
Table 10. The compared matching results.

| Metric | SIFT | LNIFT | RIFT | NIFT |
|---|---|---|---|---|
| NCM | 20 | 108 | 109 | 373 |
| SR (%) | 6.6 | 3.3 | 53.3 | 100 |
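Under the usual definitions, NCM counts matches whose reprojection error against a ground-truth transform is below a pixel threshold, and SR is the percentage of image pairs matched successfully. A sketch under those assumed definitions (the 3-pixel and 5-match thresholds are illustrative):

```python
# A sketch of the NCM and SR metrics under common (assumed) definitions.
import numpy as np

def ncm(pts_src, pts_dst, H, thresh=3.0):
    """Number of correct matches: reprojection error below thresh pixels."""
    src = np.hstack([np.asarray(pts_src, float), np.ones((len(pts_src), 1))])
    proj = src @ np.asarray(H, float).T
    proj = proj[:, :2] / proj[:, 2:3]            # dehomogenize
    err = np.linalg.norm(proj - np.asarray(pts_dst, float), axis=1)
    return int((err < thresh).sum())

def success_rate(ncm_per_pair, min_matches=5):
    """SR: percentage of pairs with at least min_matches correct matches."""
    ok = sum(n >= min_matches for n in ncm_per_pair)
    return 100.0 * ok / len(ncm_per_pair)
```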
Table 11. The RMSE of orientation (in pixels) for DS1101-1069DF090 for three methods. a, b, c, and d represent the sub-images of DS1101-1069DF090.

| Method | a | b | c | d |
|---|---|---|---|---|
| 2OC-a | 2.6 | 1.3 | 4.2 | - |
| 2OC-b | 2.5 | 1.3 | 3.1 | - |
| 2OC-c | 1.9 | 1.2 | 1.3 | 1.4 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
