Article

Wide Swath Stereo Mapping from Gaofen-1 Wide-Field-View (WFV) Images Using Calibration

1 State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
2 Collaborative Innovation Center of Geospatial Technology, Wuhan University, Wuhan 430079, China
3 Department of Remote Sensing and Photogrammetry and Center of Excellence in Laser Scanning Research, Finnish Geospatial Research Institute, 02430 Masala, Finland
* Author to whom correspondence should be addressed.
Sensors 2018, 18(3), 739; https://doi.org/10.3390/s18030739
Submission received: 25 November 2017 / Revised: 23 February 2018 / Accepted: 25 February 2018 / Published: 1 March 2018
(This article belongs to the Section Remote Sensors)

Abstract: The development of Earth observation systems has changed the nature of survey and mapping products, as well as the methods for updating maps. Among optical satellite mapping methods, the multiline array stereo and agile stereo modes are the most common methods for acquiring stereo images. However, differences in temporal resolution and spatial coverage limit their application. To address this issue, our study takes advantage of the wide spatial coverage and high revisit frequency of wide swath images, aiming to verify the feasibility of stereo mapping with the wide swath stereo mode and to reach a reliable stereo accuracy level using calibration. In contrast with classic stereo modes, the wide swath stereo mode is characterized by both a wide spatial coverage and a high temporal resolution and is capable of obtaining a wide range of stereo images over a short period. In this study, Gaofen-1 (GF-1) wide-field-view (WFV) images, with a total imaging width of 800 km, a multispectral resolution of 16 m and a revisit period of four days, are used for wide swath stereo mapping. To acquire a high-accuracy digital surface model (DSM), the nonlinear system distortion in the GF-1 WFV images is detected and compensated for in advance. With proper calibration, the elevation accuracy of the wide swath stereo mode of the GF-1 WFV images improves from 103 m to 30 m for a DSM, meeting the demands for 1:250,000 scale mapping and rapid topographic map updates and showing improved efficacy for satellite imaging.

1. Introduction

By the end of the twentieth century, a series of major breakthroughs had been made in the fields of space technology and information technology, resulting in significant changes to the fields of surveying and mapping. The development of Earth observation systems continues to change the nature of survey and mapping products as well as the methods for updating maps. Thus, satellite images have become another important source of information in addition to aerial photogrammetry. Among the optical satellite mapping methods, the multiline array stereo mode and agile stereo mode are undoubtedly two of the most common methods for acquiring stereo images.
As shown in Figure 1a, the multiline array stereo mode uses multiline array cameras to image the surface and acquire multiple images at different angles, baselines, and overlapping areas. Because this method acquires strip images along a track, it is capable of surveying and mapping a wide area. The SPOT-5 HRS camera [1,2,3,4] and the Terra ASTER camera [5,6] use the two-line array stereo mode, whereas the Ziyuan-3 triple linear-array camera [7,8] and the MappingSatellite-1 camera [9,10,11] adopt the three-line array stereo mode. However, due to the narrow width (generally less than 50 km) of the multiline array, the revisit period may be up to two or three months, which results in a low temporal resolution. In short, the multiline array stereo mode has a wide spatial coverage and a low temporal resolution.
As shown in Figure 1b, the agile stereo mode uses one camera to observe the same area at different angles and forms a stereo image pair to obtain stereo information. This mode typically acquires two or more observations of the same area at different angles using attitude maneuvers about the satellite pitch or roll axis. Relying on high satellite agility, the agile stereo mode can make rapid stereo observations of an area (generally within a few seconds along a track or a few hours across a track). IKONOS [12], GeoEye [13], QuickBird-2 [14], WorldView [15], SPOT-6 and 7 [16], and Pleiades [17,18,19] use this stereo mode for surveying and mapping. However, because of its reliance on satellite agility, the agile stereo mode cannot easily acquire a complete strip of stereo images covering a broad area and can only focus on a limited area, such as an urban area. In short, the agile stereo mode has a high temporal resolution and a narrow spatial coverage.
Thus, the conflict between the temporal resolution and spatial coverage of these two modes limits many remote sensing applications, such as rapid updates of medium scale topographic maps and global change detection, which often require both wide spatial coverage and high temporal resolution. In June 2009 and October 2011, the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) provided two versions of the Global Digital Elevation Model (GDEM). Although the ASTER GDEM achieves a global 30-m resolution, meeting the demand of a 1:250,000 scale topographic map, the data have a poor temporal resolution, making it difficult to rapidly update maps and detect global change.
In this study, we show how the wide field and short revisit period of images in the wide swath stereo mode can address this issue. This mode typically acquires two or more observations of the same area from different orbits. As shown in Figure 1c, the wide swath stereo mode uses one camera to observe the same area from different orbits and forms a stereo image pair using the WFV of the camera, without requiring attitude agility. Compared with classic stereo modes, the wide swath stereo mode relies on a wide swath (e.g., 800 km) and can rapidly obtain numerous stereo observations of a certain area (generally within a few days) as well as provide a wide coverage for survey and mapping purposes. In short, the wide swath stereo mode has both a wide spatial coverage and a high temporal resolution, which can meet the demands of rapidly updating maps and detecting global changes.
At present, high-resolution wide swath images remain uncommon because of limitations in satellite camera hardware, so the Gaofen-1 (GF-1) wide-field-view (WFV) images, with their total swath width of 800 km, multispectral resolution of 16 m and revisit period of four days [20,21], are used to implement the wide swath stereo mode. In addition, lens calibration for correcting radial distortion has been used in the generation of digital surface models (DSMs) from SPOT-5 imagery [22], and the GF-1 WFV images contain nonlinear system errors. Calibration is therefore essential for computing accurate GF-1 3D stereo models and DSMs. In this paper, we first present the key processes behind the wide swath stereo mode with calibration. We then use GF-1 WFV experiments to demonstrate the improvement in DSM accuracy after calibration and the validity of the wide swath stereo mode, which is our research purpose. Finally, we present a discussion and concluding remarks.

2. Materials and Methods

2.1. Overview of GF-1 WFV

GF-1 is the first satellite of the Chinese high-resolution Earth observation system. The main purpose of GF-1 is to make major technological breakthroughs, such as those in optical remote sensing technology (high-spatial, multispectral, and high-temporal resolutions), multi-image mosaic and fusion technology, high-precision and high-stability attitude control technology, high-reliability low-orbit satellite technology, and high-resolution data processing and application technology [20].
The GF-1 satellite design parameters are shown in Table 1. The satellite has a sun synchronous orbit and is equipped with two high-resolution (HR) cameras and four WFV cameras. The nadir resolution of the HR panchromatic camera is 2 m, and that of the HR multispectral camera is 8 m. The total swath of the HR cameras is 60 km, and thus, the revisit period is typically 41 days. The nadir resolution of the WFV camera is 16 m over a total swath of 800 km, and this camera has a revisit period of 4 days.
In this study, we use the WFV cameras. The field design of the GF-1 WFV cameras is shown in Figure 2. The field of view (FOV) of the camera is 16.44°, and the overlap FOV between adjacent cameras is 0.44°. The angle between the center sights of WFV-1 and WFV-4 is up to 48°. By taking the wide swath characteristics into account, it is possible to apply WFV-1 and WFV-4 to stereo mapping.
However, because the primary goals of the GF-1 WFV cameras are land and resource surveys, the nonlinear system errors of the images, especially the distortion error, were less of a consideration in the camera design and data processing. These nonlinear system errors seriously affect stereo mapping, so calibration should be applied to the WFV cameras in advance to acquire distortion-free images. Then, an analysis of the intersection accuracy between WFV-1 and WFV-4 should be performed to demonstrate the feasibility of the image acquisition. Finally, the processing procedure for wide swath stereo mapping using GF-1 WFV images must be specified.

2.2. Calibration

To acquire a high-accuracy digital surface model (DSM), the nonlinear system distortion in the GF-1 WFV images should be detected and compensated for in advance. Traditional calibration methods usually require a high-accuracy geometric calibration field (GCF) that covers the entire image across the satellite path to acquire sufficient ground control points (GCPs) [17,23,24]. However, due to the wide swath size of the GF-1 WFV images, it is difficult to obtain enough GCPs from the GCF to cover all rows in one GF-1 WFV image, especially when considering the high construction costs and site constraints of the GCF.
Huang et al. [25] propose a multicalibration image method to solve the GF-1 WFV image calibration problem. In this method, the calibration images are collected at different times, and their different rows are covered by the GCF. Then, the GCPs covering all the rows can be obtained and can be used with the modified calibration model to detect distortion. Experiments show that this method can increase the GF-1 WFV image orientation accuracy from several pixels to 1 pixel, thereby eliminating nearly all the nonlinear distortion. In this study, we use this method to detect and correct the GF-1 WFV-1 and WFV-4 images.
The calibration model for the linear-array sensor is established based on [7]:
$$\begin{bmatrix} X_S \\ Y_S \\ Z_S \end{bmatrix} = \begin{bmatrix} X(t) \\ Y(t) \\ Z(t) \end{bmatrix} + m\,R(t)\,R_U(t) \begin{bmatrix} x + \Delta x \\ y + \Delta y \\ 1 \end{bmatrix} \quad (1)$$
where $[X(t), Y(t), Z(t)]^T$ indicates the satellite position with respect to the geocentric Cartesian coordinate system, and $R(t)$ is the rotation matrix converting the body coordinate system to the geocentric Cartesian coordinate system. Both of these parameters are functions of time and are, therefore, functions of the scan lines. Here, $[x + \Delta x, y + \Delta y, 1]^T$ represents the ray direction when the z-coordinate is fixed at a value of 1 in the body coordinate system. Furthermore, $m$ denotes the unknown scaling factor, and $[X_S, Y_S, Z_S]^T$ is the ground position of the pixel in the geocentric Cartesian coordinate system. $R_U(t)$ is the offset matrix that compensates for the exterior errors, and $(\Delta x, \Delta y)$ denotes the interior distortion in image space.
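As an illustration only, the following minimal sketch (Python with NumPy; the helper names such as sat_pos and R_body2ecef are hypothetical, not from the paper) evaluates Equation (1) for a single pixel, mapping a distortion-corrected ray in the body frame to a ground point for a given scale factor m.

```python
import numpy as np

def ground_point(sat_pos, R_body2ecef, R_U, x, y, dx, dy, m):
    """Evaluate Equation (1) for one pixel.

    sat_pos     : (3,) satellite position [X(t), Y(t), Z(t)] in the geocentric frame (m)
    R_body2ecef : (3, 3) rotation matrix R(t), body frame -> geocentric Cartesian frame
    R_U         : (3, 3) offset matrix compensating the exterior errors
    x, y        : image-space ray components (the z-component is fixed at 1)
    dx, dy      : interior distortion corrections (Delta x, Delta y)
    m           : unknown scale factor along the viewing ray
    """
    ray_body = np.array([x + dx, y + dy, 1.0])            # corrected ray in the body frame
    return sat_pos + m * (R_body2ecef @ R_U @ ray_body)   # ground position [X_S, Y_S, Z_S]
```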
$R_U(t)$ can be expanded by introducing additional variables [26,27,28]:
$$R_U(t) = \begin{bmatrix} \cos\varphi & 0 & -\sin\varphi \\ 0 & 1 & 0 \\ \sin\varphi & 0 & \cos\varphi \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\omega & -\sin\omega \\ 0 & \sin\omega & \cos\omega \end{bmatrix} \begin{bmatrix} \cos\kappa & -\sin\kappa & 0 \\ \sin\kappa & \cos\kappa & 0 \\ 0 & 0 & 1 \end{bmatrix} \quad (2)$$
where $\omega$, $\varphi$ and $\kappa$ are rotations about the X, Y, and Z axes of the body coordinate system, respectively, and should be estimated to eliminate the exterior errors. Note that images collected at different times have different exterior errors; thus, the number of $R_U$ matrices corresponds to the number of images.
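For illustration, the offset matrix of Equation (2) can be assembled from the three angles as in the sketch below; the rotation sign convention is an assumption here, since the text only states that ω, φ and κ are rotations about the body X, Y and Z axes.

```python
import numpy as np

def offset_matrix(omega, phi, kappa):
    """Compose the offset matrix R_U from rotations about the X, Y, Z axes (Equation (2))."""
    c, s = np.cos, np.sin
    R_phi = np.array([[c(phi), 0.0, -s(phi)],
                      [0.0,    1.0,  0.0   ],
                      [s(phi), 0.0,  c(phi)]])        # rotation about Y
    R_omega = np.array([[1.0, 0.0,       0.0      ],
                        [0.0, c(omega), -s(omega)],
                        [0.0, s(omega),  c(omega)]])  # rotation about X
    R_kappa = np.array([[c(kappa), -s(kappa), 0.0],
                        [s(kappa),  c(kappa), 0.0],
                        [0.0,       0.0,      1.0]])  # rotation about Z
    return R_phi @ R_omega @ R_kappa
```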
As mentioned above, the multicalibration images are collected at different times and have different exterior errors (the installation errors may be the same) but the same interior error. The strong correlation between the exterior and interior errors inevitably influences the interior error recovered from the different calibration images. The interior error in image space therefore varies with the calibration images and is difficult to fit using the classical fifth-order polynomial model [25].
The additional parameters $c_j$, $d_j$, $e_j$, $f_j$ are introduced, and the modified polynomial model can be written as [25]
$$\begin{cases} \Delta x = a_0 + a_1 s + a_2 s^2 + \cdots + a_i s^i + c_j + d_j s, \\ \Delta y = b_0 + b_1 s + b_2 s^2 + \cdots + b_i s^i + e_j + f_j s, \end{cases} \quad 0 \le i \le 5,\ 2 \le j \le n \quad (3)$$
where the variables $a_0, a_1, \ldots, a_i$ and $b_0, b_1, \ldots, b_i$ describe the distortion; $s$ is the image coordinate across the track; $n$ represents the number of calibration images; and $c_j$, $d_j$, $e_j$, $f_j$ represent the modified parameters of each calibration image (except for the base image). Note that the images collected at different times have the same distortion.
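A minimal sketch of how the modified polynomial model of Equation (3) might be evaluated for a single across-track coordinate; the function and argument names are illustrative, and the per-image terms default to zero for the base image.

```python
import numpy as np

def interior_distortion(s, a, b, c_j=0.0, d_j=0.0, e_j=0.0, f_j=0.0):
    """Evaluate Equation (3) at across-track image coordinate s.

    a, b       : polynomial coefficients a_0..a_i and b_0..b_i (i <= 5)
    c_j .. f_j : per-image modified parameters (zero for the base image)
    """
    powers = np.array([s ** k for k in range(len(a))])   # [1, s, s^2, ..., s^i]
    dx = float(np.dot(a, powers)) + c_j + d_j * s
    dy = float(np.dot(b, powers)) + e_j + f_j * s
    return dx, dy
```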
Based on Equations (1)–(3), the functional relationship between the image point coordinates and the parameters can be written compactly as Equation (4):
$$\begin{aligned} x &= x(a_0, \ldots, a_i, b_0, \ldots, b_i, \omega, \varphi, \kappa, c_j, d_j, e_j, f_j), \\ y &= y(a_0, \ldots, a_i, b_0, \ldots, b_i, \omega, \varphi, \kappa, c_j, d_j, e_j, f_j), \end{aligned} \quad 0 \le i \le 5,\ 2 \le j \le n \quad (4)$$
Equation (4) is the basic calibration model of the proposed method.
By taking partial derivatives of Equation (4) and linearizing, the error equation can be written simply as:
$$V = A\,t - L \quad (5)$$
where $t = [\mathrm{d}a_0, \ldots, \mathrm{d}a_i, \mathrm{d}b_0, \ldots, \mathrm{d}b_i, \mathrm{d}c_2, \mathrm{d}d_2, \mathrm{d}e_2, \mathrm{d}f_2, \ldots, \mathrm{d}c_n, \mathrm{d}d_n, \mathrm{d}e_n, \mathrm{d}f_n, \mathrm{d}\omega_1, \mathrm{d}\varphi_1, \mathrm{d}\kappa_1, \ldots, \mathrm{d}\omega_n, \mathrm{d}\varphi_n, \mathrm{d}\kappa_n]^T$ represents the corrections to the calibration parameters of the images, $A$ is the coefficient matrix of the error equation, and $L$ is the constant vector. Equation (5) is the basic error equation of the method.
The corresponding normal equation of Equation (5) is:
$$A^T A\,t = A^T L \quad (6)$$
The correction $t$ to the calibration parameters then follows from the normal Equation (6) as:
$$t = (A^T A)^{-1} A^T L \quad (7)$$
After the correction to the calibration parameters t is calculated, the calibration parameters can be updated.
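In code, the update of Equations (5)–(7) amounts to one linear least-squares solve per iteration. The sketch below assumes the design matrix A and the constant vector L have already been assembled from the partial derivatives of Equation (4); iteration continues until the corrections become negligible.

```python
import numpy as np

def update_calibration(A, L, t_current):
    """One iteration of the calibration adjustment, Equations (5)-(7).

    A         : (m, k) coefficient matrix of the linearized error equation V = A t - L
    L         : (m,)   constant vector
    t_current : (k,)   current calibration parameter vector
    """
    # Solving the least-squares problem directly is numerically safer than
    # explicitly forming (A^T A)^-1 A^T L as in Equation (7).
    t_corr, *_ = np.linalg.lstsq(A, L, rcond=None)
    return t_current + t_corr
```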

2.3. 3D Stereo Model and Analysis

The stereo partners for the 3D stereo model are made up of the GF-1 WFV-1 and WFV-4 images from different orbits with common coverage (Figure 3). Corresponding points acquired by the semi-global matching (SGM) method [29] enable reconstruction of the 3D location of each object point on the terrain. Forward intersection is performed via an iterative least squares adjustment using 2n observation equations (for n stereo partners) [30,31]. In this research on Gaofen-1, n is 2, so four equations are established per stereo tie point to derive the three object space coordinates (planimetry and elevation). The initial values for the object space coordinates are derived from an affine transformation using the corner coordinates given by the image provider, and the initial height is taken as the mean height of the area under investigation. Convergence is normally achieved after several iterations.
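As a simplified illustration of forward intersection (the paper uses an iterative adjustment on the full observation equations, which is not reproduced here), the sketch below intersects n viewing rays, each given by a satellite position and a ray direction in the geocentric frame, in a single linear least-squares step.

```python
import numpy as np

def intersect_rays(origins, directions):
    """Least-squares intersection of n >= 2 viewing rays (simplified forward intersection).

    origins    : (n, 3) ray origins, e.g. satellite positions (m)
    directions : (n, 3) ray directions (need not be normalized)
    Returns the 3D point minimizing the sum of squared distances to all rays.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)   # projector onto the plane orthogonal to the ray
        A += M
        b += M @ p
    return np.linalg.solve(A, b)
```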
According to [32,33], the ratio $R$ between the vertical accuracy and the horizontal accuracy can be written as follows:
$$R = \frac{v_{error}}{h_{error}} = \frac{H}{S} \quad (8)$$
where $h_{error}$ represents the horizontal error, $v_{error}$ represents the vertical error, $S$ is the baseline length, and $H$ is the flight height. Thus, the vertical error can be calculated from the horizontal error as follows:
$$v_{error} = \frac{H}{S}\,h_{error} \quad (9)$$
The flight height of GF-1 is 644.5 km, while the baseline between WFV-1 and WFV-4 is approximately 600 km. The calibration accuracy $e_c$ is approximately 1 pixel, and the corresponding point matching accuracy $e_m$ is approximately 0.5 pixels. Because the nadir resolution $res_{nad}$ is approximately 16 m, the resolution $res$ of WFV-1 and WFV-4 is determined by Equation (10), considering the swing angle $\theta$ (half of the angle between the two camera center sights):
$$res = \frac{res_{nad}}{\cos^2\theta} \quad (10)$$
Thus, the horizontal and vertical errors for WFV-1 and WFV-4 are as follows:
$$h_{error} = (e_c + e_m)\,res = (e_c + e_m)\,\frac{res_{nad}}{\cos^2\theta} = (1 + 0.5) \times \frac{16}{\cos^2 24^\circ} \approx 28.8\ \mathrm{m}, \qquad v_{error} = \frac{H}{S}\,h_{error} = \frac{644.5}{600} \times 28.8 \approx 30.9\ \mathrm{m} \quad (11)$$
According to this stereo analysis, the planimetric and height accuracies for the GF-1 WFV-1 and WFV-4 cameras correspond to approximately 29 m and 31 m, respectively. As the calibration and matching accuracies are approximate values, the stereo accuracy is merely a reference value that differs slightly from the actual value at each pixel.
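The a priori accuracy estimate of Equations (8)–(11) can be reproduced with a few lines of arithmetic using the values stated above.

```python
import numpy as np

H = 644.5e3              # flight height (m)
S = 600.0e3              # baseline between WFV-1 and WFV-4 (m)
e_c, e_m = 1.0, 0.5      # calibration and matching accuracies (pixels)
res_nad = 16.0           # nadir resolution (m)
theta = np.deg2rad(24.0) # swing angle: half of the 48 deg between the two center sights

res = res_nad / np.cos(theta) ** 2   # off-nadir ground resolution, Equation (10)
h_error = (e_c + e_m) * res          # planimetric error, ~28.8 m
v_error = (H / S) * h_error          # elevation error, ~30.9 m
print(f"res = {res:.1f} m, planimetric = {h_error:.1f} m, elevation = {v_error:.1f} m")
```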

2.4. Processing Procedure

The process for wide swath stereo mapping using GF-1 WFV images is shown in Figure 4. There are three main processes: calibration, orientation using GCPs, and DSM generation. The calibration and the orientation using GCPs are applied to the raw WFV-1 and WFV-4 images, whereas the DSM generation uses the WFV-1 and WFV-4 images after orientation.
First, the calibration method in [25] is used to detect and correct the systematic nonlinear distortion error and to acquire post-calibration images. Note that distortion detection is performed only once for each camera during the calibration process, and the calibration parameters can then be used continuously to compensate subsequent images.
Then, the orientation using the GCPs is applied based on the affine model, which is the most common orientation model, resulting in post-orientation images. This process must be performed on each image because the exterior orientation errors differ from image to image. The orientation process eliminates most of the random errors in the images.
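As a sketch of this step, the following fits an eight-parameter affine orientation model (in the spirit of [34,35]) from four or more GCPs by linear least squares; the exact parameterization used in the paper is not spelled out, so this form is an assumption.

```python
import numpy as np

def fit_affine_orientation(gcp_object, gcp_image):
    """Fit an 8-parameter affine orientation model from >= 4 GCPs.

    gcp_object : (n, 3) object-space coordinates (X, Y, Z) of the GCPs
    gcp_image  : (n, 2) corresponding image coordinates (sample, line)
    Assumed model: sample = p0*X + p1*Y + p2*Z + p3,  line = p4*X + p5*Y + p6*Z + p7
    """
    n = gcp_object.shape[0]
    G = np.hstack([gcp_object, np.ones((n, 1))])                    # (n, 4) design matrix
    p_sample, *_ = np.linalg.lstsq(G, gcp_image[:, 0], rcond=None)  # p0..p3
    p_line, *_ = np.linalg.lstsq(G, gcp_image[:, 1], rcond=None)    # p4..p7
    return p_sample, p_line
```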
Finally, the calibration parameters for the camera distortion and the exterior orientation parameters from the affine model are passed to the DSM generation step. The SGM method [29] is used to acquire the corresponding points, which are intersected into a point cloud, and the DSM is then generated from the point cloud. Note that the WFV-1 and WFV-4 images must have some overlap when using this method.
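Purely to illustrate dense matching, the snippet below runs OpenCV's semi-global block matcher (a widely used implementation related to SGM [29]) on a hypothetical epipolar-rectified, 8-bit WFV-1/WFV-4 pair; the file names and parameters are placeholders, not values from the paper.

```python
import cv2
import numpy as np

left = cv2.imread("wfv1_rectified.tif", cv2.IMREAD_GRAYSCALE)    # hypothetical rectified pair
right = cv2.imread("wfv4_rectified.tif", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,   # must be a multiple of 16; depends on terrain relief
    blockSize=5,
    P1=8 * 5 * 5,         # smoothness penalty for small disparity changes
    P2=32 * 5 * 5,        # smoothness penalty for large disparity changes
    uniquenessRatio=10,
)
# StereoSGBM returns fixed-point disparities scaled by 16
disparity = matcher.compute(left, right).astype(np.float32) / 16.0
```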

3. Experiments

3.1. Datasets

3.1.1. Calibration Images

To acquire the calibration parameters, we collected several calibration images. Detailed information on the WFV-1 and WFV-4 calibration images is listed in Table 2 and Table 3, respectively. The GCPs were acquired via the method introduced in [25] using the GCF, and the sample range represents the start and end image columns (across the track) covered by the GCF.

3.1.2. Stereo Mapping Images

Scene 068316 (WFV-1) and scene 112159 (WFV-4), covering the Shanxi province in China, were collected as stereo mapping images. Detailed information on the stereo mapping images is listed in Table 4, and the spatial coverages of the stereo mapping images are shown in Figure 5. Figure 5 shows that the overlap across the tracks of the stereo mapping images is up to 60%.

3.2. Geometry Calibration

The calibration parameters are calculated according to Huang et al. [25]. In [25], the residual errors before and after the compensation for the distortions of the calibration images using the GCPs from the GCF demonstrate that all the distortions have been corrected and the calibration parameters are effective for the calibration images.
After the calibration parameters are calculated, it is important to verify whether they can be applied to other validation images. Because the goal is to validate the effect of the calibration parameters in compensating for the camera distortion, the affine model based on four GCPs was adopted as the exterior orientation model, removing other errors caused by exterior elements [34,35].
Because the GCF has a limited extent and the swath width of a single GF-1 WFV camera reaches 200 km, the check points (CPs) from the GCF can only cover some rows of each image. Thus, the exterior orientation would absorb some interior errors and influence the orientation accuracy of the whole image. Considering the resolution of the GF-1 WFV cameras (16 m) and the horizontal positioning accuracy of Google Earth (less than three meters) [36,37], it is proper and feasible to manually extract corner points or feature points from Google Earth as CPs to evaluate the orientation accuracies and illustrate the influence of the compensation.
As shown in Table 5, the maximum orientation errors without the calibration parameters are approximately 5.5 pixels for the WFV-1 camera and approximately 11 pixels for the WFV-4 camera. The orientation accuracies without the calibration parameters are only about two pixels for the WFV-1 camera and approximately five pixels for the WFV-4 camera. These errors are partly the result of the distortion in the original scenes. When the original scenes are compensated with the calibration parameters, the maximum orientation errors are reduced to less than two pixels for both cameras. The orientation accuracy after the calibration is consistently better than one pixel, and the accuracies of scenes 125567, 061400 and 112159 in particular are reduced to approximately 0.5 pixels, illustrating that the calibration provides effective distortion compensation for the WFV-1 and WFV-4 cameras.

3.3. Orientation Accuracy of Stereo Images

The orientation errors of the stereo scenes 068316 and 112159 are shown in Table 5. Before calibration, the maximum error is up to 2.7 pixels for scene 068316 (WFV-1) and 9.2 pixels for scene 112159 (WFV-4), and the root mean square (RMS) error is up to 1.4 pixels for scene 068316 (WFV-1) and 5.2 pixels for scene 112159 (WFV-4). After calibration, the maximum error is reduced to 1.0 pixel for scene 068316 (WFV-1) and 1.1 pixels for scene 112159 (WFV-4), and the RMS error is reduced to 0.6 pixels for scene 068316 (WFV-1) and 0.5 pixels for scene 112159 (WFV-4).
In addition, the orientation residual plots before and after the calibration of scenes 068316 and 112159 are shown in Figure 6. Before calibration, as shown in Figure 6a,b, the plots show that the four corners are more accurate than the other regions because the affine model with four GCPs cannot completely absorb the higher-order distortion effects, especially in the middle region. After calibration, as shown in Figure 6c,d, it can be seen that the accuracy level is consistently approximately one pixel, and the residual errors are random. In short, the nonlinear system error has been eliminated after the calibration, and the images are undistorted images whose residual system errors can be absorbed by the affine model with four GCPs.
Thus, the orientation accuracy has been improved after calibration, and the results after orientation can be used to generate the DSM.

3.4. Stereo Mapping

3.4.1. Digital Surface Model (DSM) Generation

The calibration parameters for compensating camera distortion and exterior orientation parameters from the affine model based on four GCPs are used in DSM generation. To compare the accuracies before and after calibration, the SGM method is used on the stereo scenes 068316 and 112159 to generate a large number of corresponding points. Then, the corresponding points are intersected via forward intersection to generate a point cloud. Finally, the point cloud is directly transformed into a DSM with no filtering.
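A minimal sketch of this gridding step, assuming the intersected point cloud is available as an (n, 3) array: each DSM cell simply takes the mean height of the points falling inside it, with no filtering, as described above.

```python
import numpy as np

def grid_point_cloud(points, cell_size):
    """Rasterize an intersected point cloud into a simple DSM grid (no filtering).

    points    : (n, 3) array of (X, Y, Z) ground points from forward intersection
    cell_size : DSM ground sampling distance, in the same units as X and Y
    Empty cells remain NaN; occupied cells hold the mean height of their points.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    col = ((x - x.min()) / cell_size).astype(int)
    row = ((y.max() - y) / cell_size).astype(int)
    dsm = np.full((row.max() + 1, col.max() + 1), np.nan)
    count = np.zeros_like(dsm)
    for r, c, h in zip(row, col, z):
        if np.isnan(dsm[r, c]):
            dsm[r, c], count[r, c] = h, 1
        else:
            dsm[r, c] = (dsm[r, c] * count[r, c] + h) / (count[r, c] + 1)
            count[r, c] += 1
    return dsm
```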
Figure 7a,b show the DSM generation results before and after calibration, respectively. Although there are a few incorrect results due to the poor radiometric quality, a complete terrain surface is obtained for most areas. Thus, it is possible to use wide swath images in stereo mapping. In other words, the DSM generation results in Figure 7 verify the feasibility of stereo mapping with the wide swath stereo mode.

3.4.2. Elevation Accuracy Validation

To validate the elevation accuracy, we introduce a high-accuracy GCF whose horizontal and elevation accuracies are 1.6 m and 1.5 m, respectively. The coverage of the GCF is shown in Figure 8. Due to the wide swath of GF-1 WFV, the GCF covers only part of the DSM area. We adopt two analysis methods to verify the accuracy: a profile analysis along the red line in Figure 8, and a global analysis to calculate all elevation errors.
Figure 9 is the elevation profile plot along the red line, containing the real elevation, the elevation before calibration, and the elevation after calibration. The elevation after calibration closely resembles the real elevation, while the difference between the elevation before calibration and the real elevation gradually increases with the pixel sample number. This behavior is more obvious in Figure 10, which plots the elevation error before and after calibration relative to the real elevation. The figure shows that the elevation before calibration starts within a few meters of the real elevation, but the larger the pixel sample number, the greater the difference between the elevation before calibration and the real elevation, by as much as a hundred meters. The reason for the behavior in Figure 9 and Figure 10 is that the nonlinear system error in the image has been eliminated by the calibration. In short, the profile analysis in Figure 9 and Figure 10 demonstrates the DSM accuracy improvement after calibration and shows that the calibration results in a relatively good DSM.
Figure 11, Figure 12 and Table 6 present the results of a global analysis of all elevation errors. Figure 11 shows the elevation error compared with the high-accuracy GCF. The figure shows that there is a systematic error in the DSM before calibration, whereas the elevation error is random in the DSM after calibration. This contrast demonstrates that the calibration detects and compensates for the nonlinear system error in the image, resulting in improved DSM accuracy.
The corresponding statistical plot of the elevation error is shown in Figure 12, and the elevation error statistics are shown in Table 6. Before calibration, there is one peak (at 120 m) in the statistical plot, corresponding to a mean elevation error of 95.927 m and an RMS error of 103.850 m. This systematic deviation is consistent with the gradually increasing elevation error observed in the profile analysis, and the plot follows a biased normal distribution. After calibration, there is only one peak, at zero meters; the mean elevation error is approximately 4.107 m, the RMS error is 30.116 m, and the plot follows a more standard normal distribution. In general, the global statistical analysis in Figure 12 and Table 6 further indicates that the calibration brings an obvious drop in elevation error, that is, a significant improvement in DSM accuracy.
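The statistics reported in Table 6 follow directly from the per-point elevation differences against the GCF; a small helper of the kind sketched below (names are illustrative) produces them.

```python
import numpy as np

def elevation_error_stats(dsm_heights, reference_heights):
    """Summarize elevation errors against the reference GCF, as in Table 6.

    Both inputs are 1-D arrays of heights (m) sampled at the same check locations.
    """
    err = np.asarray(dsm_heights) - np.asarray(reference_heights)
    return {
        "min": float(err.min()),
        "max": float(err.max()),
        "mean": float(err.mean()),
        "std": float(err.std()),
        "rms": float(np.sqrt(np.mean(err ** 2))),
    }
```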
In addition, the 30 m elevation accuracy is consistent with the stereo analysis result (31 m), meeting the demands of 1:250,000 scale mapping and rapid topographic map updates. The remaining elevation error is mainly the result of the low resolution and poor radiometric quality rather than of the wide swath stereo mapping mode itself. Considering the 16 m nadir resolution and the poor radiometric quality, the elevation accuracy is significantly improved after calibration.

4. Conclusions

This paper proposes a wide swath stereo mapping method that is characterized by both a wide spatial coverage and a high temporal resolution. Compared with classical stereo modes, the wide swath stereo mode is capable of obtaining a wider range of stereo images over a short time period. The GF-1 WFV images, with a total swath of 800 km, a multispectral resolution of 16 m and a revisit period of four days, are used in the experiments. The nonlinear system errors in the GF-1 WFV images are detected and compensated for in advance, and the calibration brings a significant improvement in DSM accuracy. The results show that the wide swath stereo mode of the GF-1 WFV images can reach an elevation accuracy of 30 m for a DSM with proper calibration, which meets the demands of 1:250,000 scale mapping and rapid topographic map updates and demonstrates the feasibility and efficacy of this mode for satellite imaging.
Moreover, given the limited nadir resolution of 16 m and the poor radiometric quality of the GF-1 WFV images, the 30 m elevation accuracy is still relatively low, although it is significantly improved after calibration. We suggest that, using higher resolution wide swath images of improved radiometric quality, the wide swath stereo mapping mode will deliver better results with proper calibration.

Acknowledgments

This study was supported in part by the National Key Research and Development Program of China under project No. 2016YFE0202300. The dataset was downloaded from an open source repository at http://www.rscloudmart.com. The authors would like to thank the editors and anonymous reviewers for their constructive suggestions.

Author Contributions

Shoubin Chen and Wenchao Huang wrote the paper and conducted the experiments. Jingbin Liu guided the structure of the paper. Ruizhi Chen checked the paper and gave some suggestions.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Michalis, P.; Dowman, I. A rigorous model and dem generation for SPOT5-HRS. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2004, 35, 410–415. [Google Scholar]
  2. Hashemian, M.S.; Abootalebi, A.; Kianifar, F. Accuracy evaluation of dem generated from SPOT5 HRS imageries. In Proceedings of the XXth ISPRS Congress, Commission I, Istanbul, Turkey, 12–23 July 2004. [Google Scholar]
  3. Bouillon, A. SPOT5 HRG and HRS first in-flight geometric quality results. In Proceedings of the International Symposium on Remote Sensing, Crete, Greece, 22–27 September 2002; pp. 212–223. [Google Scholar]
  4. Berthier, E.; Toutin, T. SPOT5-HRS digital elevation models and the monitoring of glacier elevation changes in north-west canada and south-east alaska. Remote Sens. Environ. 2008, 112, 2443–2454. [Google Scholar] [CrossRef]
  5. Toutin, T. Three-dimensional topographic mapping with aster stereo data in rugged topography. IEEE Trans. Geosci. Remote Sens. 2002, 40, 2241–2247. [Google Scholar] [CrossRef]
  6. Hirano, A.; Welch, R.; Lang, H. Mapping from aster stereo image data: Dem validation and accuracy assessment. ISPRS J. Photogramm. Remote Sens. 2003, 57, 356–370. [Google Scholar] [CrossRef]
  7. Tang, X.; Zhang, G.; Zhu, X.; Pan, H.; Jiang, Y.; Zhou, P.; Wang, X. Triple linear-array image geometry model of ziyuan-3 surveying satellite and its validation. Int. J. Image Data Fusion 2013, 4, 33–51. [Google Scholar] [CrossRef]
  8. Deren, L. China’s first civilian three-line-array stereo mapping satellite: Zy-3. Acta Geod. Cartogr. Sin. 2012, 41, 317–322. [Google Scholar]
  9. Wang, R.X.; Hu, X.; Wang, X.Y.; Yang, J.F. The construction and application of mapping satellite-1 engineering. J. Remote Sens. 2012, 16, 2–5. [Google Scholar]
  10. Songming, L.I.; Yan, L.I. Mapping satellite-1 transmission type photogrammetric and remote sensing satellite. J. Remote Sens. 2012, 16, 10–16. [Google Scholar]
  11. Yong, F.U.; Zou, S. Evaluation of the location accuracy of the mapping satellite-1 stereo image. J. Remote Sens. 2012, 16, 94–97. [Google Scholar]
  12. Dial, G.; Bowen, H.; Gerlach, F.; Grodecki, J.; Oleszczuk, R. Ikonos satellite, imagery, and products. Remote Sens. Environ. 2003, 88, 23–36. [Google Scholar] [CrossRef]
  13. Crespi, M.; Colosimo, G.; Vendictis, L.D.; Fratarcangeli, F.; Pieralice, F. Geoeye-1: Analysis of radiometric and geometric capability. In International Conference on Personal Satellite Services; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  14. Liedtke, J. Quickbird-2 System Description and Product Overview. Available online: http://calval.cr.usgs.gov/wordpress/wp-content/uploads/16Liedtk.pdf (accessed on 25 February 2018).
  15. Dolloff, J.; Settergren, R. An assessment of worldview-1 positional accuracy based on fifty contiguous stereo pairs of imagery. Photogramm. Eng. Remote Sens. 2010, 76, 935–943. [Google Scholar] [CrossRef]
  16. ASTRIUM. SPOT 6 & SPOT 7 Imagery User Guide. Available online: http://www.astrium-geo.com/en/147-spot-6-7-satellite-imagery (accessed on 25 February 2018).
  17. Greslou, D.; Delussy, F. Geometric Calibration of Pleiades Location Model. Available online: http://www.isprs.org/proceedings/XXXVI/part1/Papers/PS2-18.pdf (accessed on 25 February 2018).
  18. Delevit, J.M.; Greslou, D.; Amberg, V.; Dechoz, C.; De Lussy, F.; Lebegue, L.; Latry, C.; Artigues, S.; Bernard, L. Attitude assessment using pleiades-HR capabilities. In Proceedings of the ISPRS—International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Melbourne, Australia, 25 August–1 September 2012; Volume XXXIX-B1, pp. 525–530. [Google Scholar]
  19. Kubik, P.; Lebègue, L.; Fourest, S.; Delvit, J.M.; Lussy, F.D.; Greslou, D.; Blanchet, G. First in-flight results of pleiades 1a innovative methods for optical calibration. In Proceedings of the International Conference on Space Optics—ICSO 2012, Ajaccio, Corsica, France, 9–12 October 2012. [Google Scholar]
  20. Bai, Z.G. Technology characters of GF-1 satellite. Aerosp. China 2013, 8, 5–9. [Google Scholar]
  21. Lu, C.L.; Wang, R.; Yin, H. Gf-1 satellite remote sensing characters. Spacecr. Recovery Remote Sens. 2014, 35, 67–73. [Google Scholar]
  22. Toutin, T. Generation of dsms from spot-5 in-track hrs and across-track hrg stereo data using spatiotriangulation and autocalibration. ISPRS J. Photogramm. Remote Sens. 2006, 60, 170–181. [Google Scholar] [CrossRef]
  23. Leprince, S.; Musé, P.; Avouac, J.P. In-flight ccd distortion calibration for pushbroom satellites based on subpixel correlation. IEEE Trans. Geosci. Remote Sens. 2008, 46, 2675–2683. [Google Scholar] [CrossRef]
  24. Bouillon, A.B.E.; Lebegue, L. SPOT Image Quality Performances. Available online: http://www2.geo-airbusds.com/files/pmedia/public/r438_9_spot_quality_performances_2013.pdf (accessed on 25 February 2018).
  25. Huang, W.; Zhang, G.; Tang, X.; Li, D. Compensation for distortion of basic satellite images based on rational function model. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2016, 9, 5767–5775. [Google Scholar] [CrossRef]
  26. Xu, J.; Hou, M.; Yu, J.; Zeng, Y. Study of CBERS CCD camera bias matrix calculation and its application. Spacecr. Recovery Remote Sens. 2004, 4, 004. [Google Scholar]
  27. Radhadevi, P.V.; Solanki, S.S. In-flight geometric calibration of different cameras of irs-p6 using a physical sensor model. Photogramm. Rec. 2008, 23, 69–89. [Google Scholar] [CrossRef]
  28. Yuan, X.; Yu, X. Calibration of angular systematic errors for high resolution satellite imagery. Acta Geod. Cartogr. Sin. 2012, 41, 385–392. [Google Scholar]
  29. Hirschmüller, H. Stereo processing by semiglobal matching and mutual information. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 30, 328–341. [Google Scholar] [CrossRef] [PubMed]
  30. Grodecki, J. Mathematical model for 3D feature extraction from multiple satellite images described by RPCs. In Proceedings of the ASPRS Annual Conference Proceedings, Denver, CO, USA, 23–28 May 2004. [Google Scholar]
  31. Lehner, M.; Müller, R.; Reinartz, P. Stereo evaluation of cartosat-1 data on test site 5—First DLR results. In Proceedings of the 2006 Symposium Geospatial Databases for Sustainable Development, Goa, India, 27–30 September 2006; Volume 36. [Google Scholar]
  32. Zhang, Y. Analysis of precision of relative orientation and forward intersection with high-overlap images. Ed. Board Geomat. Inf. Sci. Wuhan Univ. 2005, 30, 126–130. [Google Scholar]
  33. Zhang, J.; Hu, A. Method and precision analysis of multi-baseline photogrammetry. Geomat. Inf. Sci. Wuhan Univ. 2007, 32, 847–851. [Google Scholar]
  34. Fraser, C.S.; Hanley, H.B. Bias compensation in rational functions for ikonos satellite imagery. Photogramm. Eng. Remote Sens. 2003, 69, 53–58. [Google Scholar] [CrossRef]
  35. Fraser, C.S.; Yamakawa, T. Insights into the affine model for high-resolution satellite sensor orientation. ISPRS J. Photogramm. Remote Sens. 2004, 58, 275–288. [Google Scholar] [CrossRef]
  36. Farah, A.; Algarni, D. Positional accuracy assessment of googleearth in riyadh. Artif. Satell. 2014, 49, 101–106. [Google Scholar] [CrossRef]
  37. Pulighe, G.; Baiocchi, V.; Lupia, F. Horizontal accuracy assessment of very high resolution google earth images in the city of rome, italy. Int. J. Digit. Earth 2015, 9, 342–362. [Google Scholar] [CrossRef]
Figure 1. Three different stereo mapping modes. (a) Multiline array cameras attain multiple images at different angles, baselines, and overlapping areas; (b) one camera observes the same area from different angles in the same orbit by attitude maneuvers; (c) one camera observes the same area from different orbits without requiring attitude agility.
Figure 2. Field design of the GF-1 WFV cameras. The field of view (FOV) of a single camera is 16.44°, and the overlap FOV between adjacent cameras is 0.44°.
Figure 3. Forward intersection method of the stereo partners. Partners are made up of the GF-1 WFV-1 and WFV-4 images from different orbits with common coverage. The angle between the center sights of WFV-1 and WFV-4 is up to 48°.
Figure 4. Wide swath stereo mapping process using the GF-1 WFV images. Step 1: calibration corrects for the systematic nonlinear distortion and produces post-calibration images. Step 2: orientation determines the elements of absolute orientation of the two images ('GCPs' stands for ground control points). Step 3: dense matching acquires the corresponding points and a digital surface model (DSM) is generated from the point cloud.
Figure 5. Coverage of the stereo mapping images.
Figure 6. Orientation errors for the stereo mapping images. The red triangles in the four corners stand for the four ground control points (GCPs) used in the affine model (the exterior orientation model). The red points stand for the check points (CPs) used for the orientation accuracy assessment. The longer the red arrow, the larger the orientation residual. In (a,b), the four corners are more accurate than the middle region. In (c,d), the residual errors become smaller and random because the calibration eliminates the nonlinear system error.
Figure 7. DSM generation results (a) before and (b) after calibration.
Figure 8. Geometric calibration field (GCF) spatial coverage.
Figure 9. Elevation profile plot along the red line in Figure 8.
Figure 10. Elevation error before and after calibration.
Figure 11. Elevation error with the high-accuracy GCF. (a) A systematic error in the DSM before calibration, (b) the error is random in the DSM after calibration.
Figure 12. Statistical plot of the elevation error. The peaks lie at 120 m before calibration and at 0 m after calibration, and the plot tends toward a more standard normal distribution after calibration.
Table 1. Gaofen-1 satellite design parameter table.

Item | Area | Technology Performance
Orbit | Orbit Type | Sun Synchronous Circle Orbit
Orbit | Average Height | 644.5 km
Orbit | Descending Time | 10:30 a.m.
Orbit | Regression Period | 41 days
Orbit | Revisit Characteristic | No swing: WFV camera 4 days, HR camera 41 days; Swing: HR camera 4 days
High-Resolution (HR) Camera (PAN/MS) | Nadir Resolution | PAN < 2 m; MS < 8 m
High-Resolution (HR) Camera (PAN/MS) | Swath | ≈60 km
High-Resolution (HR) Camera (PAN/MS) | Field of View | ≈6.4°
Wide-Field-View (WFV) Camera | Nadir Resolution | <16 m
Wide-Field-View (WFV) Camera | Swath | >800 km
Wide-Field-View (WFV) Camera | Pixel Size | 0.0065 mm
Wide-Field-View (WFV) Camera | Focal Length | 270 mm
Wide-Field-View (WFV) Camera | Field of View | >48°
Attitude Control | Control Mode | Three-Axis Stability, Ground Orientation
Attitude Control | Pointing Accuracy | <0.1° (three axes, 3σ)
Attitude Control | Stability | <5 × 10⁻⁴ °/s (three axes, 3σ)
Attitude Control | Maneuverability | Swing: ±25°
Table 2. Details of the GF-1 WFV-1 calibration images.

Scene ID | Camera | Area | Imaging Date | Number of GCPs | Sample Range (Pixel)
068316 | WFV-1 | Shanxi | 10 August 2013 | 15,800 | 6300–9000
108244 | WFV-1 | Shanxi | 7 November 2013 | 18,057 | 10,200–12,000
125565 | WFV-1 | Shanxi | 27 November 2013 | 19,459 | 3200–5700
126740 | WFV-1 | Shanxi | 5 December 2013 | 14,551 | 500–2700
Table 3. Details of the GF-1 WFV-4 calibration images.

Scene ID | Camera | Area | Imaging Date | Number of GCPs | Sample Range (Pixel)
061400 | WFV-4 | Shanxi | 30 July 2013 | 2754 | 0–1800
101766 | WFV-4 | Shanxi | 23 October 2013 | 13,099 | 4700–7000
108857 | WFV-4 | Shanxi | 8 November 2013 | 12,791 | 6000–8500
113764 | WFV-4 | Shanxi | 20 November 2013 | 16,361 | 2000–4500
120856 | WFV-4 | Henan | 28 November 2013 | 1410 | 8500–11,000
Table 4. Details of the stereo mapping images.

Scene ID | Camera | Area | Swing Angle (°) | Imaging Date
068316 | WFV-1 | Shanxi | 27.611 | 10 August 2013
112159 | WFV-4 | Shanxi | −27.611 | 16 August 2013
Table 5. Orientation accuracy before and after calibration for the validation images (units: pixels).

Camera | Scene ID | Number of CPs | Status | Line | Sample | Max. | Min. | RMS
WFV-1 | 068316 | 20 | Ori. | 0.916 | 1.069 | 2.692 | 0.207 | 1.410
WFV-1 | 068316 | 20 | Cal. | 0.430 | 0.437 | 0.991 | 0.130 | 0.613
WFV-1 | 079476 | 28 | Ori. | 0.840 | 1.921 | 5.538 | 0.512 | 2.097
WFV-1 | 079476 | 28 | Cal. | 0.646 | 0.635 | 1.788 | 0.088 | 0.906
WFV-1 | 125567 | 26 | Ori. | 0.966 | 1.721 | 3.173 | 0.541 | 1.973
WFV-1 | 125567 | 26 | Cal. | 0.384 | 0.433 | 1.072 | 0.079 | 0.579
WFV-1 | 132279 | 26 | Ori. | 0.790 | 1.991 | 4.922 | 0.249 | 2.142
WFV-1 | 132279 | 26 | Cal. | 0.525 | 0.505 | 1.198 | 0.054 | 0.728
WFV-4 | 061400 | 30 | Ori. | 1.879 | 5.551 | 11.121 | 0.084 | 5.861
WFV-4 | 061400 | 30 | Cal. | 0.369 | 0.453 | 1.188 | 0.040 | 0.584
WFV-4 | 112159 | 26 | Ori. | 1.552 | 4.942 | 9.252 | 0.222 | 5.180
WFV-4 | 112159 | 26 | Cal. | 0.422 | 0.393 | 1.118 | 0.084 | 0.577

Ori.: original, Cal.: calibration.
Table 6. Elevation error statistics before and after calibration (units: meters). The elevation accuracy improves from 103 m to 30 m.

Item | Min. | Max. | Mean | STD | RMS
Before Calibration | −27.098 | 201.809 | 95.927 | 39.786 | 103.850
After Calibration | −94.076 | 101.343 | 4.107 | 29.835 | 30.116

STD: standard deviation, RMS: root mean square.
