Article

Pre-Processing of Inner CCD Image Stitching of the SDGSAT-1 Satellite

1 Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
2 Key Laboratory of Technology in Geo-Spatial Information Processing and Application Systems, Institute of Electronics, Chinese Academy of Sciences, Beijing 100190, China
3 University of Chinese Academy of Sciences, Beijing 101408, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(19), 9693; https://doi.org/10.3390/app12199693
Submission received: 2 August 2022 / Revised: 19 September 2022 / Accepted: 21 September 2022 / Published: 27 September 2022
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

Spliced optical satellite cameras suffer from low stitching accuracy under the influence of various factors, which can greatly restrict their applications. Most previous studies have focused on the geometric precision of stitched images, which is influenced by the stitching consistency and the relationships between different inner CCD (Charge-Coupled Device) images. The stitching accuracy is therefore of great significance in multiple CCD image production. Traditionally, the line-time normalization method has been applied for inner CCD image stitching based on the designed line-time under the assumption of uniform sampling during imaging. However, misalignment between the designed and actual line-times, affected by various factors, can lead to image distortion. This paper therefore investigates the performance of different normalization methods that use the actual line-time to produce stitched images with higher geometric performance. First, the geometric distortions caused by misalignments between the designed and actual line-times are analyzed to show the differences in sampling rate and step-points. To overcome the distortions introduced by the fitting error of the designed line-time, three fine normalization methods based on the actual line-time, called the scene-based, block-based, and line-based line-time normalization methods, are introduced and compared with the traditional method. The scene-based and block-based methods fit the actual line-time section by section, while the line-based method builds the relationships between adjacent inner CCD images line by line. Images obtained from the Sustainable Development Goals Satellite 1 (SDGSAT-1) are used for verification of the different methods. Comparing the designed line-time normalization method with the three fine actual line-time normalization methods, the stitching accuracy reaches about 0.8, 0.56, 0.5, and 0.45 pixels, respectively, and the time consumption is about 5.5 s, 4.9 s, 5.4 s, and 58.9 s, respectively. The block-based actual line-time normalization method used in practice therefore provides a good balance between running time and accuracy. In the future, we intend to find a new way to improve the efficiency of the line-based line-time normalization method in order to produce stitched images with higher geometric consistency and accuracy.

1. Introduction

The Sustainable Development Goals Satellite 1 (SDGSAT-1), launched in November 2021, is specially designed for the “Big Earth Data Science Engineering” project supported by the Chinese Academy of Sciences. There are three sensors assembled on the platform: an infrared thermal sensor, a night-time light sensor, and a multispectral sensor. The infrared thermal sensor is sensitive to temperature variations of 0.2 °C, which is helpful for mapping energy distributions and the interaction between human activities and nature. The night-time light dataset, collected on a 1000:1 scale, is useful for detailed analysis of global human settlement patterns. The multispectral sensor covers seven channels and can warn of changes in water quality in rivers, lakes, and oceans [1]. The imaging swath is about 300 km wide, allowing global coverage within 11 days. By combining multiple sources and all-day observation from the SDGSAT-1 satellite, it is possible to characterize the traces of human activities and provide exclusive data support for Sustainable Development Goal (SDG) indicators that represent the interaction between humankind and nature. Table 1 provides the main characteristics of the SDGSAT-1 satellite.
A significant change in earth observation research has been brought about by the use of Time-Delayed and Integration Charge-Coupled Device (TDI-CCD) sensors [2]. Remote sensing images produced by TDI-CCDs have the benefits of adequate exposure, small volume, and good mobility [3]. Standard products offered by image vendors include stitched images, which are necessary due to the restricted field of view (FOV) of single CCD sensors. Figure 1 illustrates typical layouts of staggered CCD elements on the IKONOS, WorldView-2, Pleiades, and ZiYuan-3 (ZY-3) satellite platforms. As shown in Figure 1a,b, multiple inner CCDs can be staggered mechanically in two or more lines, which is the simplest and most practical method. To reduce the influence of the imaging time delay between different inner CCDs, the imaging sensors on the Pleiades satellite are arranged in an arc, as shown in Figure 1c. A semi-transparent mirror is applied on the focal plane of the ZY-3 satellite, projecting all inner CCD images into the same plane, as shown in Figure 1d. As for the SDGSAT-1 satellite, its multiple inner CCDs are arranged in a layout similar to the design of the ZY-3 satellite.
Influenced by various factors, stitched products derived from inner CCD images often suffer from low accuracy and obvious distortions [4,5]. As such, the stitching accuracy between adjacent inner CCD images is a critical factor in the geometric performance of stitched images. A large number of investigations based on images from SPOT-5, QuickBird, ALOS/PRISM, ZY-3, and other satellite platforms have been carried out for multiple CCD image processing. The orientation of time-dependent single CCD lines requires that the requirements on calibration, attitude accuracy, and the time interval for recording attitude information all be satisfied. Three phases of processing for SPOT-5 images were carried out in 2002, including CCD distortion calibration and relative camera orientation [6]. In [7], the authors used IKONOS images to generate digital surface models (DSMs); the accuracy of the produced DSMs reached about 2 m after calibration of the interior parameters. Similarly, different combinations of stereo and tri-stereo Pleiades images were applied for the production of DSMs in [8]. After a detailed analysis, ref. [9] found that the stitching accuracy of IKONOS imagery greatly influenced the reconstruction of Digital Elevation Models (DEMs); other investigations have come to the same conclusion using a variety of experimental datasets [10,11].
To obtain stitched images with satisfactory geometric performance, ref. [12] applied a simple translation model for the calibration of stitched images from the IRS-1C satellite. Considering that only three inner CCDs are assembled on the focal plane, the stitching accuracy reached the sub-pixel level; however, even a precise affine transformation model cannot be expected to achieve much higher accuracy on this dataset [13]. In [14], the authors evaluated the performance of stitched images obtained from multiple CCD sensors on the IKONOS satellite. A pseudo-orientation model was proposed for the calibration of standard level 1 products, and the suitability of the rational function model (RFM) was investigated as well. The authors of [15] proposed a generic model for the calibration of stitched imagery from the ALOS/PRISM satellite. Due to insufficient calibration of the joint error between inner CCD images, systematic errors could not be completely corrected. In [16], the authors analyzed the geometric performance of TianHui-1 (TH-1) satellite imagery and proposed an Equivalent Frame Photo (EFP) model for the calibration of line-matrix CCD images. They were able to improve the accuracy of the TH-1 imagery from about 170 m to 11.8 m. With the aim of producing a stitched lunar map, [17] proposed an automatic stitching method for 2C level data based on an image registration and geolocation model. In [18,19], the authors proposed an improved geometric calibration model for the production of stitched images using datasets from the HaiYang-1C satellite. The authors of [20] proposed an integral imaging model based on inter-chip geometry constraints.
Table 2 summarizes the main characteristics of traditional multiple CCD image stitching methods, which are divided into two categories: object-space methods and image-space methods. A pseudo-rigorous model is necessary to build the relationship between inner CCD images and stitched images for object-space methods, while image-space methods depend on the normalization of imaging sampling intervals for each inner CCD image. In 2009, ref. [21] systematically analyzed the error sources of stitched multiple CCD images from both the along-track direction and the cross-track direction. An object-space correction method was proposed for the calibration of multiple CCD images. Later, [22] proposed a rigorous geometric sensor model for multiple CCD image stitching. By separating each CCD image, a rigorous sensor model with scaling, rotation, and translation parameters was established for the resampling of stitched images. Similarly, [3] proposed an object-space reprojection model to generate a stitched image by indirect geometric image rectification using ZY-1 satellite images, and [23] used penalized splines to model the inner CCD distortions of WorldView-3 imagery. A pseudo-geolocation model is necessary to build the relationship between the inner CCD images and the stitched products; however, extra fitting errors are introduced during reprojection, and computational complexity is relatively high.
In contrast, image-space methods use multiple transformation models for the registration of normalized inner CCD images. The authors of [12] used an image-space transformation model for the calibration of stitched images. In 2010, ref. [24] proposed a structure-based image-matching method to build a mosaic of multiple CCD CBERS-02B satellite images. In [25], the authors analyzed the impact of the correction method on integration time. The authors of [26] proposed a method for SIFT-based stitching of image-space inner CCD images using TH-1 strip imagery. They were able to improve accuracy by introducing a piece-wise linear stitching and logically adaptive image enhancement method for the stitching area. In most of the above-mentioned cases, stitched images are produced using a designed line-time, which is inconsistent with the actual imaging state. This approach introduces distortion during image-space resampling.
Nowadays, most researchers are interested in optimizing the pseudo-rigorous model to obtain better stitching accuracy; image-space line-time normalization methods are usually included in this process to produce normalized adjacent inner CCD images. This kind of method eliminates geometric distortions based on reprojected object-space coordinates and the calibration of internal and external parameters. However, in a rigorous model the calibration of these parameters depends on ground control information, and the generality of these parameters needs to be checked periodically. Moreover, errors in external DEMs can introduce extra fitting errors.
To obtain stitched images with high geometric performance with lower computational complexity, in this paper we analyze deficiencies in the stitching accuracy of adjacent CCD images caused by the misalignment of non-uniform sampling. We investigate different line-time normalization methods developed using the actual line-time, namely, scene-based, block-based, and line-based line-time normalization methods, in order to eliminate stitching errors between adjacent inner CCD images. Experiments with multi-spectral images obtained from the SDGSAT-1 satellite are used to verify the stitching accuracy based on automatically extracted tie-points between adjacent inner CCD images and on the visual effect.
The main contributions of this paper are:
  • A close analysis of misalignment between designed and actual line-times;
  • The introduction of three fine actual line-time normalization methods to overcome geometric distortions caused by the fitting error of designed line-times;
  • A practical demonstration of block-based actual line-time normalization methods to achieve a balance between running time and stitching accuracy.

2. Study Site and Datasets

2.1. Study Site

Raw multi-spectral images obtained from the SDGSAT-1 satellite covering Nanjing, Linyi, Yinchuan and Lanzhou cities in China were used for the verification of multiple CCD image stitching, as shown in Figure 2a. The coverage varies a great deal, ranging from urban areas with residential and commercial buildings to desert and mountainous areas with poor conditions. The elevation of the experimental area ranges from several metres to thousands of metres. The great difference in coverage and terrain offers a varied landscape in terms of geometric complexity.
The first image was obtained on 18 November 2021, two weeks after the launch of the SDGSAT-1 satellite. The coverage area in Figure 2b is around Nanjing City, consisting of dense buildings and coastlines. The second image, in Figure 2c, was obtained on 23 November 2021; its coverage consists mainly of plains and small hills. The last two images, in Figure 2d,e, were obtained on 25 November 2021 and consist of large-scale desert and mountainous areas.

2.2. Datasets

The experimental dataset contained four multi-spectral images and two night-time images. Each image was stitched from eight inner CCD images to expand the imaging swath at a 10 m resolution. The satellite was designed with agile maneuvering imaging capability in order to improve its observational coverage. The fused images obtained using multiple channels can be applied in the monitoring of energy consumption, residential development patterns, and coastal port environments.

3. Methodology

Figure 3 shows the workflow of this paper. To obtain images with high geometric accuracy, this paper investigates the performance of different fine line-time normalization methods in image-space. First, the imaging line-times of all inner CCDs were normalized by setting a nominal reference. After normalization, a homogeneous CCD-line was produced through pre-processing of the installation error. In each homogeneous CCD-line, all inner CCD images share the same imaging time in object-space. Finally, the phase congruency method was applied to register the overlapping pixels between adjacent inner CCD images.

3.1. Traditional Line-Time Normalization

The multi-level integration method is usually applied in TDI-CCD sensors for the collection of sufficient echo energy, which can produce images with very high resolution. Therefore, the combination of multiple inner CCD images can achieve earth observation with a wide swath and high ground resolution. For the SDGSAT-1, eight inner CCDs are placed in a staggered arrangement on two substrates at regular intervals, as shown in Figure 4a. The projections of different inner CCDs are discontinuous, leading to misalignments between adjacent raw inner CCD images. Figure 4b,c illustrates the imaging mode of multiple CCD sensors and its projections in object-space. To meet the requirements of mapping applications, it is critical to obtain imagery with high geometric stitching accuracy [3,27].
Figure 5a,b show the rigorous model for processing multiple CCD images in object-space and image-space, respectively. Traditionally, a homogeneous CCD line is produced based on the relationships between adjacent chips in object-space. Supposing that $p_1(x_1, y_1)$ and $p_2(x_2, y_2)$ are the corresponding image-space points of the same object $P(X, Y, Z)$ in inner CCD images 1 and 2, we can derive the relationship between these two inner CCD images as follows:
$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} X_{S1} \\ Y_{S1} \\ Z_{S1} \end{bmatrix} + R_1 \cdot \begin{bmatrix} x_1 \\ y_1 \\ -f \end{bmatrix}, \qquad \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} X_{S2} \\ Y_{S2} \\ Z_{S2} \end{bmatrix} + R_2 \cdot \begin{bmatrix} x_2 \\ y_2 \\ -f \end{bmatrix}$$
where $(X_{S1}, Y_{S1}, Z_{S1})$ and $(X_{S2}, Y_{S2}, Z_{S2})$ are the GPS antenna centers of the two inner CCD sensors, $R_1$ and $R_2$ are the corresponding rotation matrices, and $f$ is the focal length.
With the help of ground control information and constraints between adjacent inner CCDs, the internal and external geometric parameters can be calibrated. After that, an equivalent rigorous model can be established by projecting all inner CCD images into a pseudo-imaging plane. In contrast, the image-space method produces stitched images without considering the relationships in object-space. For a single object P on the ground, the projected points in different inner CCD images may differ from each other due to the staggered installation. Supposing that P 1 and P 2 in Figure 5b are the corresponding image-space projections of the same object, adjacent inner CCD images can be stitched with an affine transformation model based on the designed line-time. The scaling difference is caused by the misalignment in the sampling rate of each inner CCD during imaging, while staggered installation leads to the offsets. To produce a seamless stitched image, both the scale and offset parameters should be calibrated carefully.
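The affine model mentioned above is not written out in this section; for concreteness, a common two-dimensional form, mapping slave coordinates $(x_s, y_s)$ to reference coordinates $(x_r, y_r)$, is the following (the parameter names $a_0, a_1, a_2, b_0, b_1, b_2$ are illustrative, not the paper's notation):

$$x_r = a_0 + a_1 x_s + a_2 y_s, \qquad y_r = b_0 + b_1 x_s + b_2 y_s$$

Here $a_1$ and $b_2$ absorb the scale differences caused by mismatched sampling rates, while $a_0$ and $b_0$ absorb the offsets caused by the staggered installation.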
The determination of scale parameters of different inner CCD images is closely related to the line-time during flight. Traditionally, a designed line-time is used for the normalization of image-space coordinates. The sampling rate, which is usually considered as the reciprocal of the line-time, remains unchanged for a certain period, as shown in Figure 6a. Therefore, the track dataset is separated into several parts; the imaging period of each CCD can be represented as follows:
$$T_s = T_0, \qquad T_e = T_0 + \sum_{j=1}^{n} \left( l_e^{\,j} - l_s^{\,j} \right) \cdot t_j$$
where $T_s$ (equal to $T_0$) and $T_e$ represent the start and end times of the whole dataset, $n$ is the number of segments separated by the step-points of the whole dataset, $l_s^{\,j}$ and $l_e^{\,j}$ are the start and end line indexes in the $j$th segment, and $t_j$ is the corresponding designed line-time (sampling interval) of that segment.
Assuming that the number of inner CCDs is $N$ and that the largest number of scan lines among all inner CCD images is $R_N$, the normalized sampling interval $f_N$ can be calculated by
$$f_N = \frac{T_e - T_s}{R_N}$$
Generally, the imaging period for all inner CCDs is nearly the same, while the sampling rates differ from each other. Therefore, we can derive the imaging time after normalization for the nth row of the normalized image as follows:
$$T_c = f_N \cdot n + T_0$$
In addition, the original imaging time of each scan line can be obtained from the raw image data before normalization, and the normalized image can then be obtained by interpolating the raw data based on the relationship between the original and normalized imaging times of each scan line.
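As a concrete illustration of Equations (2)∼(4), the following sketch (Python with NumPy; the segment metadata and image array are hypothetical stand-ins for the real ancillary records) resamples a raw inner CCD image to the normalized sampling rate:

```python
import numpy as np

def raw_line_times(seg_lengths, seg_line_times, t0=0.0):
    """Imaging time of every raw scan line (Equation (2)): the designed
    line-time is constant inside each segment and changes at step-points."""
    dt = np.repeat(seg_line_times, seg_lengths)              # per-line interval
    return t0 + np.concatenate(([0.0], np.cumsum(dt)[:-1]))  # time of each line

def normalize_designed(image, seg_lengths, seg_line_times, n_rows_ref):
    """Resample one inner CCD image to the uniform interval f_N (Eqs. (3)-(4))."""
    t_raw = raw_line_times(seg_lengths, seg_line_times)
    t_s = t_raw[0]
    t_e = t_s + float(np.dot(seg_lengths, seg_line_times))   # end time (Eq. (2))
    f_n = (t_e - t_s) / n_rows_ref                           # Eq. (3)
    t_norm = t_s + f_n * np.arange(n_rows_ref)               # time of row n (Eq. (4))
    # Fractional raw-row position of each normalized row, then linear interpolation
    rows = np.interp(t_norm, t_raw, np.arange(len(t_raw)))
    lo = np.clip(np.floor(rows).astype(int), 0, image.shape[0] - 2)
    w = (rows - lo)[:, None]
    return (1.0 - w) * image[lo, :] + w * image[lo + 1, :]
```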
Initially, pixels located in different segments may have different sampling intervals in the along-track direction due to changes in the line-time. After normalization of the line-time, all inner CCD images are resampled with the same imaging frequency, as shown in Figure 6b. This method was developed under the assumption that the line-time in each segment remains stable, which reflects an ideal design. In fact, the sampling intervals in image-space of different scan lines differ considerably from each other, leading to stretching and compression differences between pixels.

3.2. Fine Line-Time Normalization

The pixel intervals between each scan line differ due to the instability of the satellite platform. Figure 7 and Figure 8 illustrate an example of the line-time difference for the tested Image Data 1. As shown in Figure 7a, the differences between the designed line-time (orange line) and the actual line-time (blue line) are noticeable, especially when there is interference. Figure 7b illustrates the difference between the actual and designed line-times for each scan line. The maximum difference can reach 200 ns, a difference of about 1.4% relative to the standard line-time of 1440 ns per scan line. Figure 8 illustrates the details of the designed line-time (orange line) and the actual line-time (blue line) of the first 500 scan lines of Image Data 1. The red rectangles represent the step-points of the designed line-time, which are inconsistent with the changes in the actual line-time. The step-points of the actual and designed line-times are not synchronous; the same conclusion can be drawn from the other datasets. This is the reason that image-space calibration models based on designed line-time normalization suffer from image distortion in sub-CCD image stitching. Furthermore, most object-space models cannot provide more accurate results if the rigorous sensor model is established based on a designed line-time.
Usually, the satellite position and attitude parameters are recorded sparsely. The interpolation of position and attitude introduces fitting errors during the reprojection of image-space pixels. Furthermore, the reprojection process introduces image distortion due to the unstable geolocation error of the satellite platform. Therefore, the object-space inner CCD image stitching method is influenced by various factors in the establishment of a pseudo-rigorous geolocation model. For stitching developed in image-space, the design of mismatched imaging rates leads to results in which the sampling intervals between adjacent scan lines are not the same, which results in local image distortion if not calibrated correctly. As shown in Figure 9, the accumulated time error between the actual and designed line-times is about 0.5 ms for around 10,000 scan lines. Assuming that the average speed of the satellite platform is about 7.6 km/s, the resulting ground difference can reach 4 m. This distortion is consequential for images with very high resolution and cannot be neglected. Considering that the ground resolution for multispectral images obtained from the SDGSAT-1 satellite is 10 m with a swath of 300 km, a mismatched imaging line-time introduces pixel-level distortions in standard image products. It is noteworthy that line-time mismatching is an essential issue in image distortion, significantly compromising the geometric performance and further applications of these products.
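The size of this effect can be checked directly from the line-time records. A minimal sketch, assuming the per-line actual and designed intervals are available as arrays and an approximate ground speed of 7.6 km/s:

```python
import numpy as np

def accumulated_ground_offset(actual_lt, designed_lt, ground_speed=7600.0):
    """Accumulated line-time error and its equivalent along-track ground offset.

    actual_lt, designed_lt : per-line intervals in seconds (from the metadata)
    ground_speed           : assumed sub-satellite ground speed in m/s (~7.6 km/s)
    """
    dt = np.cumsum(np.asarray(actual_lt) - np.asarray(designed_lt))
    return dt, dt * ground_speed  # e.g., 0.5 ms of error maps to about 4 m
```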
In this paper, different fine line-time normalization methods, namely, scene-based, block-based, and line-based line-time normalization, are investigated based on the actual line-time. Compared with Figure 6a, Figure 10a illustrates that the sampling distance of the raw data varies in object-space. The actual line-time is recorded in the metafile for each scan line. As shown in Figure 7, the step-points of the actual and designed line-times for each inner CCD image are not synchronous, and the actual line-time changes randomly around the designed line-time. Therefore, applying the actual line-time rather than the designed line-time is of great significance and can improve the accuracy of image stitching. Usually, the sub-CCD located in the center or at the edge of the imaging plane is selected as the reference. For the SDGSAT-1 satellite, the reference image is the raw inner CCD image located at the edge of each camera.
For better consistency between adjacent inner CCD images, different fine line-time normalization methods can be applied to resample all slave images. The misalignment between adjacent inner CCD images caused by the staggered installation is treated as a systematic error, which can be calibrated according to the design of the imaging sensor. After calibration, the start lines of each sub-CCD image are geometrically rectified, as shown in Figure 4c.
The line-based line-time normalization method is the finest among all investigated models. Suppose that $LT_1^i$ is the line-time of the $i$th line of the normalized (reference) inner CCD image, $LT_2^j$ is the corresponding actual line-time of the $j$th row of the denormalized slave image, $P_1^i$ and $P_2^j$ represent the pixel values of the $i$th and $j$th rows of the normalized and denormalized sub-CCD images, respectively, and $\Delta x_{i,j}$ is the systematic row offset between the two images caused by the staggered installation. Then, the normalized image can be resampled as follows:
$$P_1^i = \frac{LT_2^{\,j+1+\Delta x_{i,j}} - LT_1^i}{LT_2^{\,j+1+\Delta x_{i,j}} - LT_2^{\,j+\Delta x_{i,j}}} \cdot P_2^{\,j+\Delta x_{i,j}} + \left( 1 - \frac{LT_2^{\,j+1+\Delta x_{i,j}} - LT_1^i}{LT_2^{\,j+1+\Delta x_{i,j}} - LT_2^{\,j+\Delta x_{i,j}}} \right) \cdot P_2^{\,j+1+\Delta x_{i,j}}, \quad \mathrm{if}\ LT_2^{\,j+\Delta x_{i,j}} \le LT_1^i < LT_2^{\,j+1+\Delta x_{i,j}}$$
All slave images can be normalized according to Equation (5). After line-based line-time normalization, the pixels of slave images are stretched, compressed, or blended between adjacent lines; however, their relationship with the actual line-time is preserved. In particular, Equation (5) reduces to
$$P_1^i = P_2^{\,j+\Delta x_{i,j}}, \qquad \mathrm{if}\ LT_1^i = LT_2^{\,j+\Delta x_{i,j}}$$
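A minimal sketch of this line-by-line resampling (Equations (5) and (6)), with np.interp performing the two-line weighting; the line-time arrays and the offset delta_x are hypothetical inputs:

```python
import numpy as np

def line_based_normalize(slave, lt_ref, lt_slave, delta_x=0):
    """Resample a slave inner CCD image line by line (Equations (5)-(6)).

    slave    : 2D slave image (rows x cols)
    lt_ref   : cumulative actual line-times of the reference image, LT_1^i
    lt_slave : cumulative actual line-times of the slave image, LT_2^j
    delta_x  : systematic row offset from the staggered installation
    """
    # Fractional slave row whose imaging time matches each reference row;
    # np.interp performs the two-line weighting of Equation (5)
    rows = np.interp(lt_ref, lt_slave, np.arange(len(lt_slave))) + delta_x
    lo = np.clip(np.floor(rows).astype(int), 0, slave.shape[0] - 2)
    w = (rows - lo)[:, None]
    # When LT_1^i equals LT_2^{j+dx}, w = 0 and the row is copied (Equation (6))
    return (1.0 - w) * slave[lo, :] + w * slave[lo + 1, :]
```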
Therefore, all slave sub-CCD images are refined against the same reference image for consistency. Typically, resampling each scan line is time-consuming. According to the statistical results shown in Figure 7 and Figure 8, the actual time intervals between every two scan lines are relatively stable over brief periods. Therefore, a block-based line-time normalization method is introduced based on the actual line-time. The whole dataset is divided into several segments according to the step-points, and the average imaging frequency in each segment can be obtained as follows:
$$freq_i = \frac{LT^{\,i+1} - LT^{\,i}}{R_i}$$
where $LT^{\,i}$ and $LT^{\,i+1}$ are the line-times at the boundaries of the $i$th segment and $R_i$ represents the total number of rows in the $i$th segment.
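A sketch of the segment-wise averaging of Equation (7); the step-point detection threshold is an assumed value, as the paper does not specify how step-points are detected:

```python
import numpy as np

def block_average_intervals(line_times, step_threshold=50e-9):
    """Average sampling interval of each block for Equation (7).

    line_times     : per-line actual intervals in seconds
    step_threshold : jump treated as a step-point (assumed value)
    """
    # Step-points: indices where the interval jumps by more than the threshold
    steps = np.where(np.abs(np.diff(line_times)) > step_threshold)[0] + 1
    blocks = np.split(np.asarray(line_times), steps)
    # The mean interval of a block equals (LT^{i+1} - LT^i) / R_i in Equation (7)
    return [float(b.mean()) for b in blocks]
```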
A more simplified method is the scene-based line-time normalization method, which is built on the assumption that the line-time is relatively stable within each scene. In this method, only the start line and end line of each scene are interpolated; the internal rows are calculated from the average imaging frequency. Therefore, the raw image is resized as a whole.

3.3. Inner CCD Image Matching

Usually, image matching methods are divided into two categories, feature-based methods and area-based methods [29]. Feature-based methods include the SIFT method [30] as well as other methods that depend on detecting distinctive features such as points, contours, and edges. In contrast, area-based methods use mutual information [31] or cross-correlation [32] to estimate the geometric transformation by optimizing a similarity measure. For remote sensing images, feature-based methods often suffer from high computational costs. In this paper, the area-based normalized cross-correlation (NCC) method based on phase congruency is applied to estimate the transformation between adjacent sub-CCD images.
For optical images, the phase congruency model is established based on an additive Gaussian noise model as follows:
$$P_c(x,y) = \frac{\sum_k E_{\theta_k}(x,y)}{\sum_n \sum_k A_{n,\theta_k}(x,y) + \epsilon}, \qquad E_{\theta_k}(x,y) = \sqrt{F_{\theta_k}^2(x,y) + H_{\theta_k}^2(x,y)}$$
with
$$F_{\theta_k}(x,y) = \sum_n e_{n,\theta_k}(x,y) = \sum_n I(x,y) \cdot M_n^e, \qquad H_{\theta_k}(x,y) = \sum_n o_{n,\theta_k}(x,y) = \sum_n I(x,y) \cdot M_n^o$$
where $E_{\theta_k}(x,y)$ is the local energy, $A_{n,\theta_k}$ is the amplitude at scale $n$ and orientation angle $\theta_k$, $\epsilon$ is a small preset constant that avoids ill-conditioned equations, $I(x,y)$ represents the pixel value at location $(x,y)$, and $M_n^e$ and $M_n^o$ are the even and odd components of the 2D filters at scale $n$.
After the transformation of the raw sub-CCD images, the resulting structural images avoid the instability caused by differences in brightness. Therefore, the NCC model is applied for the extraction of tie points:
$$\rho = \frac{\sum \left( w(u,v) \cdot PC_I(x-u, y-v) - \mu_1 \right) \left( w(u,v) \cdot PC_T(x,y) - \mu_2 \right)}{\sqrt{\sum \left( w(u,v) \cdot PC_I(x-u, y-v) - \mu_1 \right)^2} \cdot \sqrt{\sum \left( w(u,v) \cdot PC_T(x,y) - \mu_2 \right)^2}}$$
where $\rho$ is the correlation coefficient, $(u,v)$ represents the image-space coordinate in the sliding window, $(x,y)$ is the central point of the mask, and $PC_I$ and $PC_T$ represent the intensity of the reference and slave image, respectively. The definitions of $w$, $\mu_1$, and $\mu_2$ are as follows:
$$w(u,v) = e^{(x-u)^2 + (y-v)^2}, \qquad \mu_1 = \frac{1}{N} \sum \left( w(u,v) \cdot PC_I(x-u, y-v) \right), \qquad \mu_2 = \frac{1}{N} \sum \left( w(u,v) \cdot PC_T(x-u, y-v) \right)$$
where e is introduced as an adjustable coefficient to balance the geometric differences caused by image distortions and N is the size of the mask.
Usually, the overlapping area of inner CCD images is several tens of pixels. Therefore, the size of the mask is set as half the width of the overlapping area. In each window, the center points with the largest correlation coefficients are regarded as candidate points. With the help of the random sample consensus (RANSAC) algorithm, outliers are removed, allowing the geometric performance between adjacent inner CCD images in both the image row and column directions to be improved.
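A compact sketch of this matching step: plain NCC over a search window on the phase congruency images (the weighting of Equation (10) is omitted for brevity; it can be multiplied into the patches beforehand). The window size and search radius are assumptions:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches (Equation (9))."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_match(pc_ref, pc_slave, center, mask_size, search=5):
    """Offset of the slave window around `center` that maximizes NCC.

    pc_ref, pc_slave : phase congruency images of the overlap area
    mask_size        : half the width of the overlapping area, as described above
    search           : search radius in pixels (assumed value)
    """
    r, c = center
    h = mask_size // 2
    tmpl = pc_ref[r - h:r + h + 1, c - h:c + h + 1]
    best = (-1.0, (0, 0))
    for du in range(-search, search + 1):
        for dv in range(-search, search + 1):
            win = pc_slave[r + du - h:r + du + h + 1, c + dv - h:c + dv + h + 1]
            if win.shape == tmpl.shape:
                best = max(best, (ncc(tmpl, win), (du, dv)))
    return best  # (correlation coefficient, (row offset, column offset))
```

Candidate points gathered this way are then filtered with RANSAC, as described below.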

4. Experimental Results

Four multi-spectral images obtained from the SDGSAT-1 satellite are used to verify the performance of the different line-time normalization methods. Figure 11 illustrates an overview of the tested image dataset, acquired over different coverages.
Line-time mismatching is one of the main error sources in image distortion. Traditionally, a designed line-time is used as the reference in image resampling. Slave images are resampled by interpolation of the designed line-time according to Equations (2)∼(5), under the assumption that it changes evenly with time. In fact, the actual line-time does not completely coincide with the designed line-time, leading to different resampling intervals in object-space. Therefore, images resampled using the traditional line-time normalization method may suffer from geometric dislocation in image-space.
Figure 12, Figure 13, Figure 14 and Figure 15 show the stitching results of the four images collected from the SDGSAT-1 satellite in different scenes and coverages. Distinct views are selected from each image for the comparison of the different line-time normalization methods. The inner CCD images in Figure 12a,e, Figure 13a,e, Figure 14a,e and Figure 15a,e illustrate the stitched results of eight different scenes, where the normalized inner CCD images are resampled using the designed line-time and stitched directly. For scenes located in the front part of a whole dataset, the misalignment is not obvious. With the accumulation of mismatched line-time errors, however, the stitching error can easily be detected, and the distortions in these images are obvious, as in the examples shown in Figure 12a,e. As mentioned in Section 3.1, this image distortion is caused by the accumulation of mismatches between the actual and designed line-times.
Figure 12b–d,f–h shows the results of the line-based, block-based, and scene-based actual line-time normalization methods, respectively. Compared to the results obtained using the traditional designed line-time normalization method, these results show better geometric consistency between adjacent inner CCD images. The line-based results in Figure 12b,f show the best performance among the alternatives, while the scene-based line-time normalization method is not as stable.
As can be seen in all subfigures shown in Figure 13 and Figure 15, the investigated actual line-time normalization methods can achieve results as good as the traditional line-time normalization method. The accumulated difference between the actual and designed line-time is not noticeable. Therefore, the stitching error between adjacent sub-CCD images is usually at the sub-pixel level regardless of the method.
For the results in Figure 14b–d,f–h, pixel-level stitching errors can be detected when using the scene-based line-time normalization method. Usually, a whole dataset is separated into different scenes according to the design of standard products. For the scene-based line-time normalization method, the recorded line-time is considered constant over the imaging period of a scene. In Figure 14d,h, the discrepancy is obvious due to the instability of the actual line-time: the actual line-time changes line by line under the influence of various factors, meaning that additional errors are introduced during the fitting process. In the block-based method, by contrast, the actual line-time is considered unchangeable within each block, and an average resampling rate is used for the resampling of each inner CCD image. Better results are obtained when using the block-based line-time normalization method, as shown in Figure 14c,g. In the block-based line-time normalization method, scan lines located at step-points are recorded as the boundaries of each block and the normalized position of each scan line is fitted using a linear transformation. Therefore, the block-based normalization results reduce the inconsistencies between different sub-CCD images.
Traditional designed line-time normalization, scene-based line-time normalization, and block-based line-time normalization can all be regarded as global or local fittings of the actual line-time. These fittings introduce distortions caused by the misalignment between the normalized line-time and the actual line-time, which influences the geometric performance of standard products. Therefore, a line-based actual line-time normalization method is considered. In this experiment, the inner CCD image located at the edge of each camera is selected as the reference image and all slave images are resampled by interpolation of the actual line-time of each inner CCD image, following Equations (5) and (6). The results in Figure 14b,f illustrate that no obvious dislocations between adjacent sub-CCD imagery can be identified. Overall, the line-based line-time normalization method achieves the best stitching accuracy of the different methods.
Table 3 illustrates the root mean square error (RMSE) of the stitching error between adjacent sub-CCD images using the different line-time normalization methods, where X, Y, and P represent the error in the line direction, sample direction, and plane in image-space, respectively. After calibration of the systematic error, tie points are extracted for the verification of stitching accuracy. The stitching accuracy of all experimental datasets processed with the traditional line-time normalization method is around 0.8 pixels, while the results obtained using the other three methods are around 0.5 pixels. The line-based line-time normalization method achieves the best accuracy, better than 0.5 pixels for all scenes.
Figure 16a–d shows the statistical results of the stitching error for the different methods. The blue line, representing the traditional normalization method based on the designed line-time, varies greatly across the eight adjacent sub-CCD images. In comparison, the curves in red, green, and purple in Figure 16a–d represent the results obtained using the actual scene-based, block-based, and line-based line-time normalization methods, which are much smoother in terms of the stitching error between adjacent sub-CCD images. The purple line is located at the lowest point in most cases, indicating that the line-based line-time normalization method performs the best. Moreover, Table 4 compares the execution times of the different methods. The execution times of the traditional, scene-based, and block-based line-time normalization methods are almost the same, each producing a standard image in about five seconds. In comparison, the computational complexity of the line-based line-time normalization method is about one order of magnitude higher than the others, increasing the processing time to about 58 s per image. Though the line-based line-time normalization method achieves the best stitching accuracy among all methods, its time consumption is much higher. The stitching accuracy of the block-based actual line-time normalization method is similar to that of the line-based method while being much less time-consuming. Therefore, the block-based line-time normalization method is selected in the processing system to achieve a balance between time and accuracy.

5. Discussion and Conclusions

Generally, inner CCD image stitching methods can be divided into two categories: image-space methods and object-space methods. Object-space methods apply a pseudo-rigorous imaging model to the stitched imagery. First, the geolocation parameters (interior and exterior) of each inner CCD image are calibrated to achieve relative orientation. Tie points extracted by image matching algorithms are utilized to develop the transformation model; to simplify the relationships between adjacent CCD images, two-dimensional affine transformation models are adopted under the assumption of limited geometric distortions. After that, a pseudo-geolocation model is developed by fitting the geometric relationships between the stitched image and each inner CCD image. Image-space pixels are projected to object-space coordinates, and ground information is sometimes included to guarantee the geolocation accuracy. The parameters of the pseudo-rigorous model are fitted using the projected points, and the inner CCD image-space coordinates can then be reprojected into the stitched images.
In contrast, image-space methods define transformation models between adjacent inner CCD images to fit the geometric relationship of the corresponding image coordinates. Usually, an affine transformation model is utilized with line-time normalization. Based on the designed line-time of the imaging system, the imaging period is divided into several parts based on the assumption of a uniform sampling rate.
Nowadays, most researchers are interested in the development of object-space methods, as image-space methods based on the designed line-time lead to geometric distortions that are hard to calibrate. As for object-space methods, the pseudo-rigorous model developed in object-space can be directly applied for subsequent geometric processing of the stitched products, while the final stitching results rely on the quality of image-space normalization. However, there are three main limitations of object-space methods: (1) the development of the pseudo-rigorous model involves the projection and reprojection of image-space coordinates between the inner CCD images and the stitched images, which consumes a great deal of time and resources; (2) calibration of the internal and external parameters needs to be performed periodically and is based on ground control information, which is hard to obtain; and (3) the performance of the stitched images is determined by the accuracy of external DEMs, as elevation errors introduce extra errors during projection and reprojection.
Aiming to find a more efficient method for the production of stitched images with less geometric distortion and higher accuracy, this paper investigates the geometric performance of line-time normalization methods based on the actual line-time. By eliminating the difference between the designed and actual line-times, three line-time normalization methods based on the actual line-time are introduced for fine normalization of adjacent inner CCD images. The development of the scene-based and block-based line-time normalization methods is similar to that of the traditional line-time normalization method: datasets are divided into segments according to large step-points of the actual line-time, and the sampling intervals are treated as constant during normalization. However, the actual line-time is rather unstable, and the fitting process leads to discrepancies between inner CCD images. Therefore, the line-based actual line-time normalization method is introduced for fine rectification of inner CCD images line by line. We analyzed experimental results using multiple images obtained from the SDGSAT-1 satellite with various coverages. The results indicate that the line-based line-time normalization method achieves the best stitching accuracy; however, its time consumption is greatly increased due to high computational complexity. The block-based actual line-time normalization method can achieve similar accuracy to the line-based method with a much lower execution time. Therefore, in practice the block-based line-time normalization method is able to achieve a good balance between accuracy and time.
Compared with traditional methods, the actual line-time normalization methods investigated here can avoid geometric distortions caused by misalignment between adjacent inner CCD images. Furthermore, these actual line-time normalization methods can avoid the projection and reprojection of image-space coordinates, greatly simplifying the stitching process. Though the stitching accuracy of the line-based line-time normalization method is the highest, in the follow-up research the efficiency of this method needs to be further analyzed and improved in terms of computational complexity.

Author Contributions

Conceptualization, N.J., F.W., J.Z. and H.Y.; Data curation, N.J., J.Z. and B.C.; Formal analysis, N.J. and F.W.; Funding acquisition, F.W. and H.Y.; Investigation, N.J., B.C. and J.Z.; Methodology, N.J. and F.W.; Project administration, N.J., F.W., B.C. and H.Y.; Resources, F.W. and B.C.; Writing—original draft, N.J. and J.Z.; Writing—review and editing, N.J., F.W. and H.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the From 0 to 1 Original Innovative Project of the Chinese Academy of Sciences Frontier Science Research Program under Grant No. ZDBS-LY-JSC036, by the National Natural Science Foundation of China under Grant No. 61901439, and by the Strategic Priority Research Program of the Chinese Academy of Sciences under Grant No. XDA19010401.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
SDGSAT-1: Sustainable Development Goals Satellite 1
TDI: Time-Delayed and Integration
CCD: Charge-Coupled Device
FOV: Field of View
DEM: Digital Elevation Model
ZY-3: ZiYuan-3
TH-1: TianHui-1
AOI: Area of Interest
EFP: Equivalent Frame Photo
RFM: Rational Function Model
NCC: Normalized Cross-Correlation
RANSAC: Random Sample Consensus
RMSE: Root Mean Square Error

References

1. Kloiber, S.M.; Brezonik, P.L.; Olmanson, L.G.; Bauer, M.E. A procedure for regional lake water clarity assessment using Landsat multispectral data. Remote Sens. Environ. 2002, 82, 38–47.
2. Meng, W.; Zhu, S.; Zhu, B.; Bian, S. The research of TDI-CCDs imagery stitching using information mending algorithm. Int. Symp. Photoelectron. Detect. Imaging: Imaging Sens. Appl. Int. Soc. Opt. Photonics 2013, 8908, 89081C.
3. Tang, X.; Hu, F.; Wang, M.; Pan, J.; Jin, S.; Lu, G. Inner FoV stitching of spaceborne TDI CCD images based on sensor geometry and projection plane in object space. Remote Sens. 2014, 6, 6386–6406.
4. Weser, T.; Rottensteiner, F.; Willneff, J.; Poon, J.; Fraser, C.S. Development and testing of a generic sensor model for pushbroom satellite imagery. Photogramm. Rec. 2008, 23, 255–274.
5. Cao, B.; Qiu, Z.; Zhu, S.; Meng, W.; Mo, D.; Cao, F. A solution to RPCs of satellite imagery with variant integration time. Surv. Rev. 2016, 48, 392–399.
6. Breton, E.; Bouillon, A.; Gachet, R.; Delussy, F. Pre-flight and in-flight geometric calibration of SPOT5 HRG and HRS images. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2002, 34, 20–25.
7. Baltsavias, E.; Li, Z.; Eisenbeiss, H. DSM generation and interior orientation determination of IKONOS images using a testfield in Switzerland. Photogramm. Fernerkund. Geoinf. 2006, 2006, 41.
8. Panagiotakis, E.; Chrysoulakis, N.; Charalampopoulou, V.; Poursanidis, D. Validation of Pleiades Tri-Stereo DSM in urban areas. ISPRS Int. J. Geo-Inf. 2018, 7, 118.
9. Zhang, L.; Gruen, A. Multi-image matching for DSM generation from IKONOS imagery. ISPRS J. Photogramm. Remote Sens. 2006, 60, 195–211.
10. Li, Z.; Gruen, A. Automatic DSM generation from linear array imagery data. Proc. ISPRS 2004, 298, 12–23.
11. Fraser, C.; Yamakawa, T. Insights into the affine model for high-resolution satellite sensor orientation. ISPRS J. Photogramm. Remote Sens. 2004, 58, 275–288.
12. Jacobsen, K. Geometric and information potential of IRS-1C PAN-images. IEEE Int. Geosci. Remote Sens. Symp. 1999, 1, 428–430.
13. Baltsavias, E.P.; Pateraki, M.N.; Zhang, L. Radiometric and geometric evaluation of Ikonos GEO images and their use for 3D building modelling. In Joint Workshop of ISPRS Working Groups I/2, I/5 and IV/7 on High Resolution Mapping from Space, Hannover, Germany, 19–21 September 2001; Institute of Geodesy and Photogrammetry: Zurich, Switzerland, 2001.
14. De Lussy, F.; Kubik, P.; Greslou, D.; Pascal, V.; Gigord, P.; Cantou, J.P. PLEIADES-HR image system products and geometric accuracy. In Proceedings of the International Society for Photogrammetry and Remote Sensing Workshop; Hannover, Germany, 2005; Volume 1720.
15. Weser, T.; Rottensteiner, F.; Willneff, J.; Fraser, C. A generic pushbroom sensor model for high-resolution satellite imagery applied to SPOT 5, QuickBird and ALOS data sets. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2007, 36, 6.
16. Wang, J.; Wang, R.; Hu, X.; Su, Z. The on-orbit calibration of geometric parameters of the Tian-Hui 1 (TH-1) satellite. ISPRS J. Photogramm. Remote Sens. 2017, 124, 144–151.
17. Li, Z.; Ye, M.; Cai, Z.; Tang, Z. Automatic stitching method for Chang'E-2 CCD images of the Moon. J. Earth Sci. 2017, 28, 168–179.
18. Cao, J.; Zhang, Z.; Jin, S.; Chang, X. Geometric stitching of a HaiYang-1C ultra violet imager with a distorted virtual camera. Opt. Express 2020, 28, 14109–14126.
19. Cao, J.; Zhou, N.; Shang, H.; Ye, Z.; Zhang, Z. Internal Geometric Quality Improvement of Optical Remote Sensing Satellite Images with Image Reorientation. Remote Sens. 2022, 14, 471.
20. Wang, T.; Zhang, Y.; Zhang, Y.; Zhang, Z.; Xiao, X.; Yu, Y.; Wang, L. A Spliced Satellite Optical Camera Geometric Calibration Method Based on Inter-Chip Geometry Constraints. Remote Sens. 2021, 13, 2832.
21. Hu, F.; Jin, S. An Algorithm for Mosaicking Non-Collinear TDI CCD Chip Images Based on Reference Plane in Object-Space. ISDE6 Proceedings, 2009. Available online: https://www.cas.cn/zt/hyzt/gjszdqhy/hyrc/200909/W020090903365491299972.pdf (accessed on 1 August 2022).
22. Pan, J.; Hu, F.; Wang, M.; Jin, S.; Li, G. An inner FOV stitching method for non-collinear TDI CCD images. Acta Geod. Cartogr. Sin. 2014, 43, 1165.
23. Pan, H.; Huang, T.; Zhou, P.; Cui, Z. Self-calibration dense bundle adjustment of multi-view Worldview-3 basic images. ISPRS J. Photogramm. Remote Sens. 2021, 176, 127–138.
24. Shi-wei, L.; Tuan-jie, L.; Hong-qi, W. Image Mosaic for TDI-CCD Push-broom Camera Image Based on Image Matching. Remote Sens. Technol. Appl. 2009, 24, 374–378.
25. Cao, B.; Zhu, S.; Zhenge, Q.; Meng, W.; Zhao, Y. Analysis of Integration Time Adjustment's Impact on Parallax and Its Correction. J. Geomat. Sci. Technol. 2015, 32, 610–614.
26. Cheng, J.; Mu, C.; Li, Y.; Jiang, Y. Technical Study on Improving Stitching Accuracy for TH-1 Satellite Strip Images. Spacecr. Recovery Remote Sens. 2018, 39, 84–92.
27. Poli, D.; Toutin, T. Review of developments in geometric modelling for high resolution satellite pushbroom sensors. Photogramm. Rec. 2012, 27, 58–73.
28. Weican, M.; Shulong, Z.; Wen, C.; Yongfeng, Z.; Xiang, G.; Fanzhi, C. Establishment and optimization of rigorous geometric model of push-broom camera using TDI CCD arranged in an alternating pattern. Acta Geod. Cartogr. Sin. 2015, 44, 1340.
29. Joglekar, J.; Gedam, S.S. Area based image matching methods—A survey. Int. J. Emerg. Technol. Adv. Eng. 2012, 2, 130–136.
30. Ma, W.; Wen, Z.; Wu, Y.; Jiao, L.; Gong, M.; Zheng, Y.; Liu, L. Remote sensing image registration with modified SIFT and enhanced feature matching. IEEE Geosci. Remote Sens. Lett. 2016, 14, 3–7.
31. Suri, S.; Reinartz, P. Mutual-information-based registration of TerraSAR-X and Ikonos imagery in urban areas. IEEE Trans. Geosci. Remote Sens. 2009, 48, 939–949.
32. Hasan, M.; Pickering, M.R.; Jia, X. Robust automatic registration of multimodal satellite images using CCRE with partial volume interpolation. IEEE Trans. Geosci. Remote Sens. 2012, 50, 4050–4061.
Figure 1. Focal plane arrangement of different satellite platforms: (a) IKONOS imaging plane, (b) WorldView-2 imaging plane, (c) Pleiades imaging plane (letters A–E index the inner CCD sensors), (d) ZY-3 imaging plane.
Figure 2. Geometric distribution and overviews of the experimental datasets: (a) geometric distribution of the experimental dataset (Nos. 1∼4 accompanied by red stars indicate the locations of the four multi-spectral images, referred to as Image Data 1∼Image Data 4), (b) Image Data 1, (c) Image Data 2, (d) Image Data 3, (e) Image Data 4.
Figure 3. Workflow of the proposed method.
Figure 4. Illustration of the imaging principles of multiple CCD stitching [28]: (a) imaging principle of multiple CCDs in the focal plane, (b) imaging mode, (c) projections.
Figure 5. Rigorous sensor model of stitched CCD imagery: (a) rigorous model in object-space, (b) rigorous model in image-space.
Figure 6. Illustration of the traditional line-time normalization method: (a) inner CCD image before traditional line-time normalization, (b) inner CCD image after traditional line-time normalization.
Figure 7. Time differences between the actual and designed line-times: (a) comparison between the designed and actual line-times of Image Data 1, (b) time differences between the designed and actual line-times of Image Data 1.
Figure 8. Detailed differences between the actual and designed line-times of the first 1000 scan lines of Image Data 1.
Figure 9. Accumulated time error between the actual and designed line-times.
Figure 10. Illustration of the fine line-time normalization method: (a) sub-CCD image before fine line-time normalization, (b) sub-CCD image after fine line-time normalization.
Figure 11. Illustration of the detailed coverage of each dataset: (a) urban area in Image Data 1, (b) plains area in Image Data 2, (c) desert area in Image Data 3, (d) mountainous area in Image Data 4.
Figure 12. Stitching results for Image Data 1 with different methods: (a,e) traditional line-time normalization, (b,f) line-based line-time normalization, (c,g) block-based line-time normalization, (d,h) scene-based line-time normalization.
Figure 13. Stitching results for Image Data 2 with different methods: (a,e) traditional line-time normalization, (b,f) line-based line-time normalization, (c,g) block-based line-time normalization, (d,h) scene-based line-time normalization.
Figure 14. Stitching results for Image Data 3 with different methods: (a,e) traditional line-time normalization, (b,f) line-based line-time normalization, (c,g) block-based line-time normalization, (d,h) scene-based line-time normalization.
Figure 15. Stitching results for Image Data 4 with different methods: (a,e) traditional line-time normalization, (b,f) line-based line-time normalization, (c,g) block-based line-time normalization, (d,h) scene-based line-time normalization.
Figure 16. Geometric error between adjacent CCDs of different datasets: (a) Image Data 1, (b) Image Data 2, (c) Image Data 3, (d) Image Data 4.
Table 1. Designed characteristics of the SDGSAT-1 satellite.

SDGSAT-1 Satellite | Num of CCDs | Ground Resolution | Num of Channels | Imaging Type
Multispectral Sensor | 8 | 10 m | 7 | Pushbroom
Nighttime Sensor | 8 | 10/40 m | 5 | Pushbroom
Infrared Thermal Sensor | 4 | 30 m | 3 | Whiskbroom
Table 2. Main characteristics of the traditional methods.

Methods | Key Issues | Detailed Contributions
Object-Space | Pseudo-Rigorous Model | 1. Developing a reference plane; 2. Local correction for large error areas; 3. Segmenting the AOI into regular grids
Image-Space | Line-Time Normalization | 1. Determination of dynamic offset; 2. Piecewise linear stitching and logically adaptive image enhancement; 3. Integration time adjustment
Table 3. Stitching error between adjacent sub-CCD images (pixels). Columns give the traditional, line-based, block-based, and scene-based line-time normalization methods.

Dataset | Direction | Traditional | Line-Based | Block-Based | Scene-Based
Image Data 1 | X | 0.55 | 0.18 | 0.21 | 0.18
Image Data 1 | Y | 0.55 | 0.43 | 0.45 | 0.56
Image Data 1 | P | 0.84 | 0.49 | 0.53 | 0.55
Image Data 2 | X | 0.66 | 0.18 | 0.18 | 0.49
Image Data 2 | Y | 0.47 | 0.42 | 0.42 | 0.32
Image Data 2 | P | 0.83 | 0.48 | 0.48 | 0.62
Image Data 3 | X | 0.66 | 0.17 | 0.17 | 0.39
Image Data 3 | Y | 0.38 | 0.40 | 0.44 | 0.29
Image Data 3 | P | 0.85 | 0.46 | 0.50 | 0.54
Image Data 4 | X | 0.56 | 0.17 | 0.17 | 0.32
Image Data 4 | Y | 0.41 | 0.37 | 0.41 | 0.31
Image Data 4 | P | 0.73 | 0.43 | 0.46 | 0.51
Table 4. Time consumption of different stitching methods (seconds). Columns give the traditional, scene-based, block-based, and line-based line-time normalization methods.

Dataset | Traditional | Scene-Based | Block-Based | Line-Based
Image Data 1 | 5.60 | 4.72 | 5.22 | 56.67
Image Data 2 | 5.42 | 4.97 | 5.63 | 59.74
Image Data 3 | 5.33 | 5.03 | 5.41 | 60.53
Image Data 4 | 5.68 | 4.88 | 5.27 | 58.47
Average | 5.51 | 4.90 | 5.38 | 58.85
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
