Article

An Improved Mapping with Super-Resolved Multispectral Images for Geostationary Satellites

Qian Xuesen Laboratory of Space Technology, China Academy of Space Technology, Beijing 100094, China
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(3), 466; https://doi.org/10.3390/rs12030466
Submission received: 9 January 2020 / Revised: 28 January 2020 / Accepted: 29 January 2020 / Published: 2 February 2020
(This article belongs to the Special Issue Quality Improvement of Remote Sensing Images)

Abstract
Super-resolution (SR) technology has shown great potential for improving the performance of the mapping and classification of multispectral satellite images. However, it is very challenging to solve ill-conditioned problems such as mapping for remote sensing images due to the presence of complicated ground features. In this paper, we address this problem by proposing a super-resolution reconstruction (SRR) mapping method called the mixed sparse representation non-convex high-order total variation (MSR-NCHOTV) method in order to accurately classify multispectral images and refine object classes. Firstly, MSR-NCHOTV was employed to reconstruct high-resolution images from low-resolution time-series images obtained from the Gaofen-4 (GF-4) geostationary orbit satellite. Secondly, a support vector machine (SVM) method was used to classify the results of the SRR of the GF-4 geostationary orbit satellite images. Two sets of GF-4 satellite image data were used for experiments, and the MSR-NCHOTV SRR results obtained using these data were compared with the SRR results obtained using the bilinear interpolation (BI), projection onto convex sets (POCS), and iterative back projection (IBP) methods. The sharpness of the SRR results was evaluated using the gray-level variation between adjacent pixels, and the signal-to-noise ratio (SNR) of the SRR results was evaluated using a measurement method for high spatial resolution remote sensing images. For example, compared with the values obtained using the BI method, the average sharpness and SNR of the five bands obtained using the MSR-NCHOTV method were higher by 39.54% and 51.52%, respectively, and the overall accuracy (OA) and Kappa coefficient of the classification results obtained using the MSR-NCHOTV method were higher by 32.20% and 46.14%, respectively. These results showed that the MSR-NCHOTV method can effectively improve image clarity, enrich image texture details, enhance image quality, and improve image classification accuracy. Thus, the effectiveness and feasibility of using the proposed SRR method to improve the classification accuracy of remote sensing images were verified.

1. Introduction

Geostationary orbit remote sensing satellites have the advantages of high temporal resolution and large imaging width, which enables them to continuously cover large areas. Consequently, images from geostationary orbit satellites are frequently utilized in meteorology, environmental protection, fire monitoring, and other remote sensing applications [1,2]. The application of such images is becoming increasingly popular due to continuous worldwide efforts in developing a new generation of geostationary satellite sensors. Therefore, geostationary orbit remote sensing satellites are now producing an unprecedented amount of earth observation data. For example, recently, the new generations of the GOES-R satellite series in North America [3,4], the Himawari-8/9 satellite in Japan [4,5], and the Fengyun-4A (FY-4A) satellite [6,7] and the GF-4 satellite in China [8,9] have been successfully launched, and the MTG-I satellite in Europe is scheduled for launch in 2021 [10]. Images from geostationary orbit remote sensing satellites offer an opportunity to use super-resolution reconstruction (SRR) technology to further improve image spatial resolution and mapping.
It is extremely difficult to obtain high spatial resolution images from geostationary orbit remote sensing satellites due to the limitations of optical imaging payload technology, the physical constraints of the imaging instruments themselves, the high altitude of the satellites, and the large distance to the imaging object (which is dozens of times that of low-earth-orbit satellite images). Spatial resolution is of key importance for the utility of satellite-based earth observing systems [11]. Multispectral satellite images with a high spatial resolution can provide detailed and accurate information for land-use and land-cover mapping [12]. The factors that affect the spatial resolution of multispectral sensors include the orbit altitude, the speed of the platform, the instantaneous field of view, and the revisit cycle. Studies have investigated ways to improve the spatial resolution of images from geostationary orbit remote sensing satellites via post-processing, since it is very difficult to upgrade multiple imaging devices once a satellite has been launched.
Methods for the mapping of remote sensing images include hard classification and soft classification techniques [13]. Traditional hard classification techniques generally cannot effectively classify mixed pixels in land cover. Techniques such as maximum likelihood classification (MLC) [14], support vector machines (SVM) [15], and random forest classifiers (RFC) [16] can all effectively handle mixed pixels by estimating the fractional abundance of land cover classes in each mixed pixel. However, these methods are limited by the distribution of the spatial information of land cover. Research on super-resolution mapping (SRM) has developed rapidly in recent years. SRM is a super-resolution soft classification based on the Markov random field theorem, in which the fractional images are first allocated randomly under computational constraints [17], and the initialized results are then optimized by changing the spatial arrangement of subpixels to gradually approach a certain objective; this optimization can be driven by a pixel-swapping algorithm based on neighboring pixel values [18], by minimizing the perimeter of the images [19], or by geostatistical indicators based on SRM [20], in order to generate a realistic spatial structure on the refined thematic map. Due to the limited performance of the soft classification of low-resolution image datasets, the SRM method has high algorithmic complexity and slow operational efficiency. Therefore, artificial intelligence algorithms, such as genetic algorithms [21], particle swarm optimization [22], and sparse algorithms [23,24], have not been fully employed in the processing of images from geostationary orbit multispectral remote sensing satellites.
In this study, the accuracy of mapping based on images from geostationary orbit remote sensing satellites was improved using super-resolution reconstruction (SRR). The employed method differs from the SRM approach in that it can increase the number of informative pixels by exploiting the mismatch in ground texture information from image to image. These reconstructed higher spatial resolution images can then be classified without any limitation on the classification techniques adopted. SRR methods have been widely used for the processing of remote sensing satellite images and include (1) frequency-domain methods, (2) combined spatial-domain and frequency-domain methods, and (3) deep-learning-based methods. The frequency-domain method is simple to implement and can be parallelized; however, its classification result is poor. Spatial-domain reconstruction methods comprise non-uniform sampling interpolation, the iterative back projection (IBP) method [25], projection onto convex sets (POCS) methods [26], the maximum a posteriori (MAP) algorithm [27], the total variation (TV) algorithm, convolutional neural network algorithms [28,29], etc. Although these methods are relatively simple and easy to implement, they are not suitable for the practical classification of satellite images. In recent years, a large number of SRR methods based on sparse representation have been developed. However, in the frequency domain, specific transformations can only provide sparse representations for specific types of input signals, and thus it is difficult to develop a general image sparse representation method. Some researchers are currently working on finding a universal solution to the ill-posed inverse problem of imaging. For example, an overlapping group sparsity total variation (OGSTV) model [30] was used to restore damaged images and was demonstrated to be highly effective in reducing the staircase effect.
Additionally, other researchers proposed a model based on the alternating direction method of multipliers (ADMM) to regularize TV and showed it to be very effective for removing salt-and-pepper noise, although not random noise [31]. Meanwhile, an SRR method based on dictionary learning and sparse representation has been proposed [32]; however, this method requires a relatively large number of training samples.
In this paper, we adopt a novel SRR method called the mixed sparse representation non-convex high-order total variation (MSR-NCHOTV) method to achieve the fine classification of images from geostationary orbit remote sensing satellites. To the best of our knowledge, there has been no previous attempt to formulate a solution for a mixed sparse representation model for such classification. The sparsity of the image spatial domain model and OGSTV and NCHOTV regularizers were employed for the removal of staircase artifacts from geostationary orbit remote sensing satellite images. To effectively handle the subproblems and constraints, we adopted the ADMM algorithm to improve the quality of the reconstruction of sparse signals as well as the computing speed of the SRR algorithm. We used low-resolution GF-4 images of sequential frames obtained from the same scene over a short time to ensure the robustness of the algorithm. We improved the spatial resolution of the images by integrating several multispectral GF-4 images acquired on different dates. Overall, it is shown that compared with a bilinear interpolation (BI) SRR method, the MSR-NCHOTV method obtained a better classification and clustering outcome, both visually and numerically.
The remainder of this paper is organized as follows. In Section 2, the methodology of the MSR-NCHOTV SRR method is briefly reviewed. In Section 3, the experimental data and the pretreatment of GF-4 satellite images are described. In Section 4, the classification results are presented and compared to demonstrate the effectiveness of the proposed method. Finally, conclusions are presented in Section 5.

2. Methodology

2.1. Degradation Model of Remote Sensing Images

GF-4 satellite images are likely to suffer from geometric distortion, blurring, and additional noise. In order to reconstruct the images with increased clarity and higher resolution, it is necessary to eliminate many factors that affect image quality, such as blur, platform drift/jitter, and noise. The degradation model of the original remote sensing image is described by the following mathematical expression [33]:
$$ y_i = D H_i M_i u + f_i, \quad i = 1, 2, \ldots, K \tag{1} $$
where u is an ideal multispectral dataset obtained by sampling a continuous scene at high resolution; y_i is a low-resolution multispectral image recorded by sensor i; K is the number of low-resolution multispectral images; M_i is a warping matrix representing the geometric distortion caused by vibration of the satellite platform and air turbulence; H_i is a blur matrix representing the imaging system's modulation transfer function and radiometric distortion, for example that due to atmospheric scattering and absorption; D is a subsampling matrix reflecting the difference between the expected resolution and the actual resolution of the sensor; and f_i represents additive noise with a zero-mean Gaussian distribution. The goal of SRR is to reconstruct the original high-resolution image u from a series of low-resolution images. This process is carried out through image post-processing for each spectral band, with the subsequent inversion of the high-resolution image based on the established degradation model.
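To make the degradation model of Formula (1) concrete, the following minimal numpy sketch simulates one low-resolution frame y_i = D H_i M_i u + f_i. The warp M_i is reduced to a cyclic shift, the blur H_i to a separable Gaussian with periodic boundaries, and the function names (`degrade`, `_gauss_blur`) are illustrative, not from the original implementation.

```python
import numpy as np

def _gauss_blur(img, sigma):
    """Separable Gaussian blur with periodic (wrap) boundaries,
    standing in for the blur matrix H_i."""
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    out = img.astype(float)
    for axis in (0, 1):
        out = np.moveaxis(out, axis, 0)
        padded = np.concatenate([out[-r:], out, out[:r]], axis=0)  # wrap padding
        out = np.stack([np.convolve(padded[:, j], k, mode="same")[r:-r]
                        for j in range(padded.shape[1])], axis=1)
        out = np.moveaxis(out, 0, axis)
    return out

def degrade(u, shift=(1, 0), sigma=1.0, factor=2, noise_std=0.01, rng=None):
    """Simulate one low-resolution frame y_i = D H_i M_i u + f_i."""
    rng = np.random.default_rng(0) if rng is None else rng
    warped = np.roll(u, shift, axis=(0, 1))     # M_i: warp (cyclic-shift stand-in)
    blurred = _gauss_blur(warped, sigma)        # H_i: system blur
    down = blurred[::factor, ::factor]          # D: subsampling
    return down + rng.normal(0.0, noise_std, down.shape)  # + f_i (zero-mean Gaussian)
```

In an SRR setting, several such frames with different shifts play the role of the low-resolution GF-4 sequence from which u is recovered.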

2.2. Mixed Sparse Representation Based on Non-Convex Higher-Order Total Variation

In order to find the ill-conditioned solution of Formula (1) for SRR of remote sensing images, the MSR-NCHOTV method with group sparsity for the SRR model is derived as follows:
$$ \min_u \; \frac{\lambda}{2} \| D H_i M_i u - y_i \|_2^2 + \phi(\nabla u) + \omega \| \nabla^2 u \|_p^p + T_c(u) + K \| s \|_p^p \tag{2} $$
where ϕ(·) and ‖·‖_p^p correspond to the overlapping group sparse regularizer and the non-convex norm regularizer, respectively; the parameter p (0 < p < 1) guarantees the non-convexity of the function; the variables λ (λ > 0) and ω (ω > 0) are regularization parameters that control the data fidelity term and the non-convex high-order regularizer, respectively; s represents the point components in the image domain during the algorithm iteration (when the normalized image pixel value is less than 0, s = 0; when the image pixel value is greater than 1, s = 1); ∇ and ∇² are the first- and second-order gradient difference operators, respectively, assuming periodic boundary conditions for the remote sensing image [34]; K denotes the weight of the contribution of the image spatial domain model; and T_c(u) is a characteristic (indicator) function on the interval C = [0, 1024] which constrains the pixel values of the image to lie between 0 and 1024. This type of constraint is called a box constraint and has been shown to improve image quality in image restoration. The characteristic function for the set C is defined as [35]:
$$ T_c(x) = \begin{cases} 0, & x \in C, \\ +\infty, & x \notin C, \end{cases} \tag{3} $$
In order to solve the constrained separable optimization problem of Formula (2), the ADMM method [31] is employed to convert Formula (2) into the following representation:
$$ \min \; \frac{\lambda}{2} \| D H_i M_i u - y_i \|_2^2 + \phi(v) + \omega \| w \|_p^p + T_c(z) + K \| r \|_p^p \quad \text{s.t.} \quad v = \nabla u, \; w = \nabla^2 u, \; z = u, \; r = s. \tag{4} $$
The augmented Lagrangian function is defined as follows:
$$ \begin{aligned} \mathcal{A}(r, s, u, v, w, z; \mu_1, \mu_2, \mu_3, \mu_4) = {} & \frac{\lambda}{2} \| D H_i M_i u - y_i \|_2^2 + \phi(v) + \omega \| w \|_p^p + T_c(z) + K \| s \|_p^p \\ & - \mu_1^{\mathrm{T}} (v - \nabla u) + \frac{\rho_1}{2} \| v - \nabla u \|_2^2 - \mu_2^{\mathrm{T}} (w - \nabla^2 u) + \frac{\rho_2}{2} \| w - \nabla^2 u \|_2^2 \\ & - \mu_3^{\mathrm{T}} (z - u) + \frac{\rho_3}{2} \| z - u \|_2^2 - \mu_4^{\mathrm{T}} (r - s) + \frac{\rho_4}{2} \| r - s \|_2^2, \end{aligned} \tag{5} $$
where ρ_i (ρ_i > 0), i = 1, …, 4, are penalty (regularization) parameters associated with the quadratic penalty terms ‖v − ∇u‖₂², ‖w − ∇²u‖₂², ‖z − u‖₂², and ‖r − s‖₂²; the variables μ1, μ2, μ3, and μ4 are the Lagrange multipliers associated with the constraints v = ∇u, w = ∇²u, z = u, and r = s, respectively. The objective is to find a critical point of 𝒜 by alternately minimizing 𝒜 with respect to r, s, u, v, w, z, μ1, μ2, μ3, and μ4. The ADMM method is used to find the optimal solution of each subproblem step by step. Next, we discuss the optimization strategies used to solve each subproblem.
(1) Fixing r, s, v, w, and z, solve for u. The u-subproblem is a least-squares problem with the following form (Formula (6)):
$$ u^{k+1} = \arg\min_u \; \frac{\lambda}{2} \| D H_i M_i u - y_i \|_2^2 - \mu_1^{\mathrm{T}} (v^k - \nabla u) + \frac{\rho_1}{2} \| v^k - \nabla u \|_2^2 - \mu_2^{\mathrm{T}} (w^k - \nabla^2 u) + \frac{\rho_2}{2} \| w^k - \nabla^2 u \|_2^2 - \mu_3^{\mathrm{T}} (z^k - u) + \frac{\rho_3}{2} \| z^k - u \|_2^2. \tag{6} $$
Formula (6) has the following closed-form solution:
$$ u^{k+1} = \left( \lambda (D H_i M_i)^{\mathrm{T}} (D H_i M_i) + \rho_1 \nabla^{\mathrm{T}} \nabla + \rho_2 (\nabla^2)^{\mathrm{T}} \nabla^2 + \rho_3 I \right)^{-1} \left( \lambda (D H_i M_i)^{\mathrm{T}} y_i - \nabla^{\mathrm{T}} \mu_1 + \rho_1 \nabla^{\mathrm{T}} v - (\nabla^2)^{\mathrm{T}} \mu_2 + \rho_2 (\nabla^2)^{\mathrm{T}} w - \mu_3 + \rho_3 z \right) \tag{7} $$
Under periodic boundary conditions for the image u, the matrices (DH_iM_i)^T(DH_iM_i), ∇^T∇, and (∇²)^T∇² are all block circulant with circulant blocks [35], so the linear system in Formula (7) can be efficiently solved in the Fourier domain using a 2D fast Fourier transform (FFT).
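As an illustration of this FFT-domain solve, the sketch below inverts a simplified version of the system in Formula (7), keeping only the first-order difference term and the identity term; the blur and second-order operators enter the denominator in exactly the same way through their transfer functions. `psf2otf` and `solve_u` are hypothetical helper names.

```python
import numpy as np

def psf2otf(kernel, shape):
    """Zero-pad a small kernel to `shape`, circularly shift it so its
    centre sits at (0, 0), and return its 2D DFT (the transfer function
    of the corresponding circulant operator)."""
    pad = np.zeros(shape)
    kh, kw = kernel.shape
    pad[:kh, :kw] = kernel
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.fft.fft2(pad)

def solve_u(b, rho1, rho3, shape):
    """Solve (rho1 * grad^T grad + rho3 * I) u = b in the Fourier
    domain; |dx|^2 + |dy|^2 is the eigenvalue map of grad^T grad under
    periodic boundary conditions."""
    dx = psf2otf(np.array([[1.0, -1.0]]), shape)   # horizontal difference
    dy = psf2otf(np.array([[1.0], [-1.0]]), shape) # vertical difference
    denom = rho1 * (np.abs(dx) ** 2 + np.abs(dy) ** 2) + rho3
    return np.real(np.fft.ifft2(np.fft.fft2(b) / denom))
```

Because the difference operators annihilate constant images, a constant right-hand side is simply scaled by 1/ρ₃, which makes the sketch easy to sanity-check.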
(2) Fixing r, s, u, w, and z, solve for v. The v-subproblem has the following form:
$$ v^{k+1} = \arg\min_v \; \frac{\rho_1}{2} \| v - \nabla u^{k+1} \|_2^2 - \mu_1^{\mathrm{T}} (v - \nabla u^{k+1}) + \phi(v) = \arg\min_v \; \frac{\rho_1}{2} \left\| v - \left( \nabla u^{k+1} + \frac{\mu_1^k}{\rho_1} \right) \right\|_2^2 + \phi(v) \tag{8} $$
This v-subproblem is an OGSTV denoising problem [30] where the OGSTV regularizer is defined as follows:
$$ \phi(v) = \sum_i \left[ \sum_{j=0}^{K-1} | v(i+j) |^2 \right]^{\frac{1}{2}} \tag{9} $$
where K is the size of the group, which can be seen as a contiguous window of K samples starting at index i.
We adopted the majorization–minimization (MM) strategy to solve the OGSTV problem in Formula (8). In each iteration of the MM algorithm, a convex quadratic problem is minimized, yielding a convergent solution to Formula (8).
(3) Fixing r, s, u, v, and z, solve for w. The w-subproblem is as follows:
$$ w^{k+1} = \arg\min_w \; \frac{\rho_2}{2} \| w - \nabla^2 u^{k+1} \|_2^2 - \mu_2^{\mathrm{T}} (w - \nabla^2 u^{k+1}) + \omega \| w \|_p^p = \arg\min_w \; \frac{\rho_2}{2} \left\| w - \left( \nabla^2 u^{k+1} + \frac{\mu_2^k}{\rho_2} \right) \right\|_2^2 + \omega \| w \|_p^p, \tag{10} $$
Formula (10) is a non-convex second-order TV denoising problem. In this study, the iterative reweighted l1 (IRL1) algorithm proposed in [36] was adopted, which solves a weighted l1 problem in each iteration. The problem in Formula (10) can thus be approximated by a weighted l1 problem as in [37]. Letting x^{k+1} = ∇²u^{k+1} + μ₂^k/ρ₂, the weighted l1 problem is:
$$ w^{k+1} = \arg\min_w \; \frac{\rho_2}{2} \| w - x^{k+1} \|_2^2 + \sum_i t_i | w_i | \tag{11} $$
where t_i is given by [35]:
$$ t_i = \omega p \left( | w_i^k | + \varepsilon \right)^{p-1}, \tag{12} $$
where ε is a small positive number to avoid division by zero. The weighted l1 problem in Formula (11) can now be solved using the one-dimensional shrinkage operator as follows:
$$ w^{k+1} = \operatorname{shrink}\!\left( x^{k+1}, \frac{t_i}{\rho_2} \right) = \max\!\left\{ | x^{k+1} | - \frac{t_i}{\rho_2}, \, 0 \right\} \cdot \operatorname{sign}(x^{k+1}), \tag{13} $$
where shrink (·) denotes the one-dimensional shrinkage operator.
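The shrinkage operator itself fits in a few lines; this is a generic soft-thresholding sketch, not code from the authors:

```python
import numpy as np

def shrink(x, t):
    """One-dimensional soft-shrinkage operator:
    shrink(x, t) = max(|x| - t, 0) * sign(x)."""
    return np.maximum(np.abs(x) - t, 0.0) * np.sign(x)
```

In the w-update, x would be the array ∇²u^{k+1} + μ₂^k/ρ₂ and t the per-element IRL1 weight divided by ρ₂; values whose magnitude falls below the threshold are set exactly to zero, which is what enforces sparsity.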
(4) Fixing r, s, u, v, and w, solve for z. The z-subproblem reads as follows:
$$ z^{k+1} = \arg\min_z \; \frac{\rho_3}{2} \| z - u^{k+1} \|_2^2 - \mu_3^{\mathrm{T}} (z - u^{k+1}) + T_c(z) = \arg\min_z \; \frac{\rho_3}{2} \left\| z - \left( u^{k+1} + \frac{\mu_3^k}{\rho_3} \right) \right\|_2^2 + T_c(z). \tag{14} $$
The z-subproblem is a projection which ensures that the pixel values stay in the desired range of [0, 1024] (i.e., 10-bit quantization) [38]. It has the following closed-form solution:
$$ z^{k+1} = \operatorname{proj}_C\!\left( u^{k+1} + \frac{\mu_3^k}{\rho_3} \right) = \min\!\left( 1024, \, \max\!\left( u^{k+1} + \frac{\mu_3^k}{\rho_3}, \, 0 \right) \right), \tag{15} $$
where proj_C(·) is the projection operator onto the set C.
(5) Fixing r, u, v, w, and z, solve for s. The s-subproblem has the following form:
$$ s^{k+1} = \arg\min_s \; \frac{\rho_4}{2} \| r^{k+1} - s \|_2^2 - \mu_4^{\mathrm{T}} (r^{k+1} - s) + K \| s \|_p^p = \arg\min_s \; \frac{\rho_4}{2} \left\| r^{k+1} + \frac{\mu_4^k}{\rho_4} - s \right\|_2^2 + K \| s \|_p^p, \tag{16} $$
The s-subproblem (Formula (16)) has the same form as the w-subproblem and is solved in the same way, using the IRL1 reweighting and one-dimensional shrinkage described above.
(6) Fixing s, u, v, w, and z, solve for r. The r-subproblem has the following form:
$$ r^{k+1} = \arg\min_r \; \frac{\rho_4}{2} \| r - s^{k+1} \|_2^2 - \mu_4^{\mathrm{T}} (r - s^{k+1}) = \arg\min_r \; \frac{\rho_4}{2} \left\| r + \frac{\mu_4^k}{\rho_4} - s^{k+1} \right\|_2^2, \tag{17} $$
which is a simple least-squares problem; setting its derivative to zero gives the closed-form solution r^{k+1} = s^{k+1} − μ₄^k/ρ₄.
(7) Finally, the Lagrange multipliers μ1, μ2, μ3, and μ4 are updated as follows:
$$ \begin{aligned} \mu_1^{k+1} &= \mu_1^k + \rho ( \nabla u^{k+1} - v^{k+1} ) \\ \mu_2^{k+1} &= \mu_2^k + \rho ( \nabla^2 u^{k+1} - w^{k+1} ) \\ \mu_3^{k+1} &= \mu_3^k + \rho ( u^{k+1} - z^{k+1} ) \\ \mu_4^{k+1} &= \mu_4^k + \rho ( s^{k+1} - r^{k+1} ) \end{aligned} \tag{18} $$
For ρ, we used the updating rule proposed in [31], which reduces the number of iterations required. The convergence of the iterative solution can be partly guaranteed by the above non-convex ADMM method. In each iteration, by projecting the residual into each domain, the algorithm keeps the large-amplitude components and sets the small-amplitude coefficients to zero, which conforms to l1-norm minimization. As the iterations progress, the residuals become progressively smaller. In each iteration, different types of structural components are reconstructed in the sparse total variation domain and the image spatial domain, respectively. When a suitable number of iterations is reached, the algorithm terminates and the SRR image is obtained.
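The alternating structure of the solver can be sketched as follows. This toy version keeps only the data-fidelity term and the box-constraint split (z = u), with the v-, w-, and s-subproblems omitted, so it is a structural sketch under simplifying assumptions rather than the full MSR-NCHOTV algorithm; `admm_box` is an illustrative name.

```python
import numpy as np

def admm_box(y, lam=1.0, rho=1.0, lo=0.0, hi=1024.0, iters=200):
    """Toy ADMM loop: minimise lam/2 * ||u - y||^2 subject to the box
    constraint T_C via the split z = u. The other auxiliary variables
    of the full model plug into the same alternating pattern."""
    u = y.astype(float).copy()
    z = np.zeros_like(u)   # auxiliary variable for the constraint z = u
    d = np.zeros_like(u)   # scaled Lagrange multiplier
    for _ in range(iters):
        u = (lam * y + rho * (z - d)) / (lam + rho)  # u-subproblem (least squares)
        z = np.clip(u + d, lo, hi)                   # z-subproblem (projection onto C)
        d = d + u - z                                # multiplier update
    return z
```

For this simplified objective the iterates converge to the clipped input, which makes the splitting-plus-multiplier mechanics easy to verify before the harder regularized subproblems are added.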

2.3. Classification of Multispectral Images using MSR-NCHOTV

The classification of multispectral remote sensing images has always been a key part of remote sensing research, and the accurate recognition and classification of multiple image types remains a challenge in the field.
Due to the low spatial resolution and the large number of mixed pixels in multi-spectral images from geostationary orbit satellites, some targets may be difficult to identify, which complicates image classification. Moreover, the SRR-based classification used in this study is not restricted by features, texture information, or the spatial resolution of images and is applicable to all geostationary orbit remote sensing images. Therefore, we employed the SRR method as the preprocessing step of image classification. Thus, by combining different pieces of high-frequency image information from sequential frames of low-resolution geostationary orbit remote sensing satellites, high-resolution images were reconstructed using the MSR-NCHOTV SRR method, and these high-resolution images were then classified.

2.4. Sub-Voxel-Level Joint Registration Between Image Bands

The GF-4 satellite adopts filter-switching technology in its remote sensing imaging system, which introduces a time delay of several seconds between the imaging of its spectral bands. Research [39] on the stability of this time interval, performed by selecting different images and 22 targets in sequential frames, showed that the imaging time interval between bands of GF-4 image data is about 7.9 s. The difference in imaging time between spectral bands causes the target to shift between channels. Additionally, the acquired images suffer serious geometric deformation and low image quality due to the influence of satellite orbit drift, satellite platform wobble, atmospheric disturbance, and changes in the ground surface. Therefore, a method for the registration of remote sensing images is needed to improve the SRR accuracy.
Fine registration between remote sensing image bands is the most important step in completing image SRR. Intensity-based image registration methods tend to have high accuracy but low robustness.
The scale-invariant feature transform (SIFT) and speeded-up robust features (SURF) are effective image feature point matching algorithms; however, they are computationally complex, and their real-time performance is poor. Rublee et al. showed that oriented FAST and rotated BRIEF (ORB) has strong robustness as well as a high data-processing speed (one order of magnitude faster than the SURF algorithm and two orders of magnitude faster than the SIFT algorithm) [40,41], which can partially compensate for the poor robustness of the intensity-based registration method [42,43]. This study adopts a sub-voxel-level high-precision image registration algorithm based on a combination of ORB feature extraction and an intensity-based registration method [44,45]. The CUDA programming language was utilized to accelerate this registration algorithm and meet the registration requirements of GF-4 remote sensing images. The registration process of the algorithm is shown in Figure 1.
During the rough registration stage, after selecting the reference image and the image to be registered, the ORB algorithm was used for feature extraction, and the random sample consensus (RANSAC) algorithm was used to eliminate wrong matching points [43]. To ensure the operational efficiency of the algorithm, only a small number of high-precision, evenly distributed matching point pairs need to be selected during feature-point extraction and outlier elimination, since the ORB algorithm mainly performs the rough registration. After the matching point pairs were obtained, the homography matrix was calculated to generate the rough registration result by image transformation. In the fine registration stage, the reference image and the coarsely registered image were resampled, and an image Gaussian pyramid was constructed. The cross-correlation information in the image area was then calculated. Due to the large amount of computation required for this process, high-speed GPU parallel processing was adopted to accelerate the implementation. The image was fine-tuned layer by layer through the Gaussian pyramid via iterative looping, and an accurate image registration was finally obtained.
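The outlier-rejection idea of the rough registration stage can be illustrated with a translation-only RANSAC sketch. The ORB keypoint detection and homography fitting of the actual pipeline are omitted here; the matched point pairs are synthetic, and `ransac_translation` is a hypothetical name.

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=1.0, rng=None):
    """Estimate a global translation from matched point pairs while
    rejecting wrong matches, mirroring the RANSAC outlier-elimination
    step (the real pipeline fits a homography to ORB matches)."""
    rng = np.random.default_rng(0) if rng is None else rng
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))                   # minimal sample: one pair
        t = dst[i] - src[i]                          # candidate translation
        inliers = np.linalg.norm(dst - (src + t), axis=1) < tol
        if inliers.sum() > best.sum():
            best = inliers
    t = (dst[best] - src[best]).mean(axis=0)         # refit on the consensus set
    return t, best
```

Because each hypothesis needs only one correspondence for a pure translation, even a handful of good matches dominates a few gross outliers, which is why the rough stage can afford to keep only a small, well-distributed set of point pairs.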

3. Experimental Data and Pretreatment

3.1. Experimental Data

This study used image datasets from the GF-4 geostationary orbit remote sensing satellite. This is China’s first high-resolution geostationary orbit optical imaging satellite and the world’s most sophisticated high-resolution geostationary orbit remote sensing satellite. The GF-4 satellite was launched on 29 December 2015 from the Xichang Satellite Launch Center in Sichuan Province, Southwestern China, and is equipped with a CMOS plane array optical remote sensing camera. The GF-4 satellite has a high temporal resolution, high spatial resolution, and large imaging width. It has a ground spatial resolution of 50 m, an image width of 500 × 500 km, an orbital height of 36,000 km, and allows high-frequency, all-weather, continuous, long-term observation of large areas [8,44].

3.2. Research Area

In this study, two sets of 10 consecutive GF-4 satellite image frames with different shooting times and covering different regions were selected. The spatial resolution of these experimental data is 50 m, and the shooting interval is one frame per minute.
Dataset 1 covered the Binhai New area of Tianjin City, covering the intersection between the Shandong Peninsula and the Liaodong Peninsula from 38°40′~39°00′N and 117°20′~118°00′E. The data were acquired at 10:40 a.m. on 24 August 2018. The Binhai New area has 153 km of coastline, a land area of 2270 km2, and a sea area of 3000 km2. The climate characteristics of the area have components of a continental warm temperate zone monsoon climate and a maritime climate.
Dataset 2 was acquired at 09:00 a.m. on 25 June 2016 and covered the Wendeng City district, Weihai City, Shandong Province. The Wendeng City district, located in the east of the Shandong Peninsula from 36°52′~37°23′N and 121°43′~122°19′E, covers a total area of 1645 km2 and has 155.88 km of coastline. The district has a continental monsoon climate with four distinct seasons.
According to the imaging mode of the GF-4 geostationary orbit staring satellite, a pixel window of 1024 × 1024 was selected from each group of images for the study area in order to improve the classification efficiency of the satellite images, as shown in Figure 2.
In this study, an SVM classification method was adopted to verify the classification accuracy of the proposed method. In the classification process, high-resolution Google Earth images of the two study areas were used to verify the correctness of the selection of the classification training samples.

3.3. Experimental Process

GF-4 satellite images have a wide coverage, and the region of interest (ROI) varies from frame to frame. In this paper, methods for cropping GF-4 satellite images are divided into manual and automatic methods. The same ROI was selected in each GF-4 image frame for cropping; the size of each frame is 10,240 × 10,240 pixels. Although the position of each frame is offset relative to the other frames, the cropping of the ROI is not affected since each frame is the same size. Firstly, we selected an image frame and recorded the X and Y coordinates of the top-left corner of the cropping area; we then entered the length and width of the area to be cropped into a cropping algorithm implemented in Matlab 2019a (MathWorks, Natick, MA, USA) and the C++ programming language, finally obtaining the required ROI.
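The cropping step itself reduces to slicing the same window out of each frame; `crop_stack` is an illustrative name, and the coordinate convention (row-major arrays, top-left origin) is an assumption:

```python
import numpy as np

def crop_stack(frames, x, y, width, height):
    """Cut the same ROI (top-left corner (x, y)) out of every frame in
    a time series; because all frames share the same 10,240 x 10,240
    size, one set of coordinates suffices for the whole sequence."""
    return np.stack([f[y:y + height, x:x + width] for f in frames])
```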
Due to the influence of various factors in the satellite remote sensing imaging process, the image acquisition time is inconsistent between spectral bands. By conducting band registration between Band 1 and Bands 2, 3, 4, and 5 of the same frame of a GF-4 satellite image, it was found that the offset between different bands of each frame is one or more pixels, corresponding to a difference of tens to hundreds of meters on the ground. The results of the band registration accuracy evaluation of a single frame of a GF-4 satellite image are shown in Table 1.
Pixel offset errors between the bands of the GF-4 satellite greatly reduce the data quality of the GF-4 satellite images, and thus, greatly affect the subsequent processing and application of GF-4 satellite data. Therefore, we adopted a joint registration method based on ORB feature extraction and intensity to improve the accuracy of the SRR. As shown in Figure 3, the red and green double images of the registered image are completely superposed, thus greatly improving the quality of the GF-4 satellite image.

4. Experimental Results and Analysis

4.1. Evaluation of Image Quality

In order to obtain higher-resolution satellite images, the SRR of GF-4 satellite images was carried out using the MSR-NCHOTV algorithm, which was run in an Ubuntu 16.04 Linux environment. In order to verify the effectiveness of the algorithm, SRR was also carried out on the same GF-4 satellite images using other SRR algorithms, namely BI [24], the POCS method [25], and the IBP method [26]. Image evaluation included subjective and objective evaluation. The objective evaluation involved determining the resolution, image sharpness, root-mean-square error, signal-to-noise ratio (SNR), point spread function (PSF), modulation transfer function (MTF), and the classification accuracy of the super-resolved results. In this study, the sharpness and SNR of the super-resolved images were used to evaluate the accuracy of the SRR results.
Image sharpness, also known as image average gradient, is obtained by calculating the intensity gradient for every pixel in the image. In this study, the image sharpness was evaluated based on the change in gray level between adjacent pixels following the method proposed by [46]. The higher the image blur value, the lower the image sharpness. The formula for image blur is as follows:
$$ \mathrm{blur}F = \max( b\_F_{ver}, \; b\_F_{hor} ) \tag{19} $$
where b_Fhor represents the change in gray level of the neighboring pixel in the horizontal direction, b_Fver represents the change in gray level of the neighboring pixel in the vertical direction, and blurF represents the maximum change in gray level of the neighboring pixels. Firstly, the image was reblurred using a Gaussian filter, and the image quality was then evaluated by calculating the changes in gray level between neighboring pixels before and after reblurring. If the change is large, it indicates that the distortion caused by image blurring is small; otherwise, it indicates that the image distortion is large.
SNR is an important characteristic parameter used to evaluate the quality of remote sensing images. It helps data providers and users to better predict image quality and improves their ability to extract information from remote sensing images. The SNR reflects the ratio of signal to noise in the image; the higher the value, the better the quality of the remote sensing image. In this study, the method for measuring the SNR of high spatial resolution remote sensing images proposed in [47] was used, which divides the data into a region with a homogeneous SNR and a region with a heterogeneous SNR. For the homogeneous area, the local standard deviation was calculated by sliding a small window across the whole area. All of the local standard deviations were sorted from small to large, and the average value of the largest 90% of the local standard deviations was taken as the estimate of the noise level ( σ s ). For the estimation of the noise level in the inhomogeneous area, the small n × n window was moved over the area one pixel at a time, and the local standard deviation and local mean noise level were calculated after each slide. The local standard deviations falling into each subinterval were then sorted, and the average value of the smallest 10% of the local standard deviations was taken as the estimate of the noise level in that subinterval; the remaining local standard deviations were considered to be caused by edges, texture, and other features. The image SNR estimated by this method can be expressed by Formula (20):
$$\widehat{SNR} = \frac{S}{\sigma_s} \qquad (20)$$
where S indicates the signal strength and σ s represents the noise level. To improve the accuracy of the SNR evaluation, the size of the sliding window was set to 10 × 10 pixels during the estimation process.
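The homogeneous-area branch of the estimator described above can be sketched as follows. This is a simplified sketch, not the authors' implementation: the function name is hypothetical, the 10 × 10 window matches the one stated in the text, and `keep=0.9` encodes the "largest 90% of local standard deviations" rule.

```python
import numpy as np

def estimate_snr_homogeneous(region, win=10, keep=0.9):
    """Sketch of the homogeneous-area SNR estimate in the spirit of
    [47]: slide a win x win window over the region, collect the local
    standard deviations, sort them from small to large, and average
    the largest `keep` fraction as the noise level sigma_s. The SNR
    is then the mean signal divided by sigma_s (Formula (20))."""
    region = np.asarray(region, dtype=np.float64)
    h, w = region.shape
    stds = np.array([region[i:i + win, j:j + win].std()
                     for i in range(h - win + 1)
                     for j in range(w - win + 1)])
    stds.sort()                                           # small to large
    sigma_s = stds[int((1.0 - keep) * len(stds)):].mean()  # largest 90%
    signal = region.mean()                                 # signal strength S
    return signal / sigma_s
```

For a synthetic flat patch with mean 100 and Gaussian noise of standard deviation 5, the estimate is close to 100/5 = 20.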
The results of the SRR were classified using SVM. The classification accuracy was quantitatively evaluated using overall accuracy (OA) and the Kappa coefficient.
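The classification and accuracy-assessment step can be sketched as follows. The synthetic 5-band samples, class means, sample counts, and SVM hyperparameters below are illustrative assumptions, not values taken from the paper; only the overall workflow (SVM classification, then OA and Kappa on held-out reference points) mirrors the text.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Illustrative stand-in for the paper's workflow: each row is a pixel's
# band values (here synthetic 5-band samples), and the labels play the
# role of visually interpreted reference points.
rng = np.random.default_rng(42)
n_per_class, n_bands = 60, 5
classes = ["vegetation", "water", "buildings"]

X = np.vstack([rng.normal(loc=10.0 * k, scale=2.0, size=(n_per_class, n_bands))
               for k in range(len(classes))])
y = np.repeat(np.arange(len(classes)), n_per_class)

# train on half the labeled samples, validate on the rest
idx = rng.permutation(len(y))
train, test = idx[: len(y) // 2], idx[len(y) // 2:]

clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X[train], y[train])
pred = clf.predict(X[test])

oa = accuracy_score(y[test], pred)          # overall accuracy (OA)
kappa = cohen_kappa_score(y[test], pred)    # Kappa coefficient
print(f"OA = {oa:.2%}, Kappa = {kappa:.2f}")
```

With well-separated synthetic classes the OA and Kappa are both close to 1; on real SRR imagery the spread of values across methods is what Tables 4 and 8 report.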

4.2. Experiment 1

Experiment 1 used Dataset 1 (see Section 3.2). The area covered by this dataset contains a large number of buildings and extensive vegetation. The data in this area were reconstructed using the BI, POCS, IBP, and MSR-NCHOTV methods. As can be seen in Figure 4, the SRR based on MSR-NCHOTV is superior to the SRR based on the POCS method, completely suppressing blocky step artifacts without changing the original image information and retaining the edge details. In the SRR based on BI, the image edges are relatively fuzzy, while the SRR based on POCS exhibits serious pixel offsets due to registration error and, because it does not consider the prior probability of the image, produces an overall sharpening of the image. The subjective effect of reconstruction using IBP is superior to that obtained using the BI and POCS algorithms; however, the local details are fuzzy. These results show that the MSR-NCHOTV algorithm proposed in this paper achieves the best SRR of the four studied algorithms.
Using the image quality evaluation metrics of sharpness and SNR, the results of the four SRR algorithms were objectively analyzed and evaluated. Table 2 and Table 3 show the sharpness and SNR obtained using these four algorithms, respectively. The blur value of each band is largest for the BI algorithm (Table 2). The POCS and IBP methods achieved similar, intermediate blur values. The blur value obtained for the MSR-NCHOTV method is the smallest of the four methods, meaning that its image sharpness is the highest and its image quality is superior to that obtained using the other three methods. As can be seen in Table 3, the SNR of each band of the BI results is lower than that of the other three methods, while the SNR of each band of the POCS results is slightly higher than that of the IBP method. The SNR of each band of the MSR-NCHOTV results is the largest of the four methods. These results show that the edge and texture information of complex terrain is improved after SRR using MSR-NCHOTV. Figure 5a,b shows the changes in image sharpness and SNR obtained using the four SRR methods.
The SRR results obtained using the four different methods were classified by SVM. By visual interpretation of high-resolution Google Earth satellite images of the same area, the main types of features in the SRR images were first confirmed: vegetation, water, buildings, soil, and beach. Then, based on the relative distribution of each object category in the image, 512 sample points were selected from the MSR-NCHOTV SRR image using a stratified random sampling method. The distribution of the sample points is shown in Figure 6. The real land-cover categories of these sample points were determined by visual interpretation of the Google Earth images, and the classification accuracy of the various methods was thereby obtained. Figure 7 shows the classification results of the four methods. From the figure, it can be seen that there is more texture information in the MSR-NCHOTV SRR image, especially edge information for water and roads. As shown in Table 4, the OA and Kappa coefficient of the MSR-NCHOTV SRR image were 92.75% and 0.91, respectively, both significantly higher than those of the SRR images obtained using the three other methods. Table 5 shows the producer’s accuracy (PA) and user’s accuracy (UA) of the SRR images obtained using the four methods for each land-cover category. The PA and UA of the MSR-NCHOTV SRR image are, for almost all categories, significantly higher than those of the three other methods, which demonstrates that the MSR-NCHOTV method obtains a higher classification accuracy.
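The proportional stratified sampling of validation points described above can be sketched as follows. The function name and the allocation rule (a share of points proportional to each class's area, with at least one point per class) are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def stratified_sample(class_map, n_total, rng=None):
    """Pick validation pixels by stratified random sampling: each
    class in the classified map receives a share of n_total roughly
    proportional to its area, with at least one point per class.
    Returns a list of (row, col) indices. A sketch, not the authors'
    implementation."""
    rng = np.random.default_rng(rng)
    labels, counts = np.unique(class_map, return_counts=True)
    shares = np.maximum(1, np.round(n_total * counts / counts.sum()).astype(int))
    points = []
    for lab, n in zip(labels, shares):
        rows, cols = np.nonzero(class_map == lab)
        pick = rng.choice(len(rows), size=min(n, len(rows)), replace=False)
        points.extend(zip(rows[pick], cols[pick]))
    return points
```

The true land-cover category at each sampled location would then be assigned by visual interpretation of the reference imagery, as described in the text.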

4.3. Experiment 2

Experiment 2 used Dataset 2 (see Section 3.2). In Experiment 2, we used BI, POCS, IBP, and MSR-NCHOTV to perform image SRR, obtaining the images shown in Figure 8. From the figure, it can be seen that the image obtained using BI is fuzzy, and the one obtained using POCS is superior to that obtained using IBP in terms of image clarity and texture detail. In the image obtained using MSR-NCHOTV, details of roads, vegetation, beach, and other complex textures are more clearly visible than in the images obtained using the other three methods. Table 6 and Table 7 give quantitative evaluations of the sharpness and SNR, respectively, of the four SRR results. As shown in Table 6, the blur value of the image obtained using the MSR-NCHOTV method is the smallest of the four methods, and its SNR is the largest (Table 7). Figure 9 shows the sharpness and SNR of the SRR images for the separate bands. All of the above evidence shows that, of the studied SRR methods, the MSR-NCHOTV method performs the best.
Combined with the visual interpretation of Google Earth satellite images, we first identified the main types of land cover in the images: buildings, vegetation, water, farmland, and beach. A stratified random sampling method was used to select 509 sample points. Then, visual interpretation of the Google Earth images was used to determine the real land-cover categories of these sample points, from which the classification accuracy of the various methods was obtained. Figure 10 shows the distribution of the sample points. Figure 11a–d shows the mapping results of the four methods. As shown in Figure 11d, in the classification results obtained using the MSR-NCHOTV method, the texture edges are smoother and the details more abundant than in the classification results obtained using the three other models. Overall, compared with the images obtained using the three other SRR methods, the image obtained using the MSR-NCHOTV method is visually more similar to the original image. Table 8 shows the OA and Kappa coefficient for the classification of the four SRR images. For the SRR image obtained using the MSR-NCHOTV method, the OA is 93.39% and the Kappa coefficient is 0.91, both higher than the values obtained for the classifications using the SRR images produced by the three other methods. As shown in Table 9, the PA and UA of buildings, vegetation, water, farmland, and beach for the image obtained using the MSR-NCHOTV method are, for almost all categories, higher than those obtained using the other three methods.

5. Discussion

In this paper, it is shown that the proposed MSR-NCHOTV SRR method can significantly improve the spatial resolution and sharpness of images and provide abundant texture information and detail. Compared with the SRR results obtained using the BI method [24], the sharpness and SNR of the five bands of the GF-4 satellite images obtained using the MSR-NCHOTV SRR method were higher by an average of 39.54% and 51.52%, respectively. Compared with the SRR results obtained using the POCS method [25], the sharpness and SNR of the five bands of the GF-4 satellite images obtained using the MSR-NCHOTV SRR method were higher by an average of 11.86% and 43.63%, respectively. Compared with the SRR results obtained using the IBP method [26], the sharpness and SNR of the five bands of the GF-4 satellite images obtained using the MSR-NCHOTV SRR method were higher by an average of 18.00% and 40.27%, respectively (Table 10).
Table 11 shows a comparison of the OA and Kappa coefficient values of the classification results obtained using the two groups of experimental data, in which the values are expressed as percentage differences relative to the values obtained for the classification based on the image obtained using the BI method [24]. As shown in Table 11, for both experiments, the average OA and Kappa coefficient values obtained using the MSR-NCHOTV method are higher than those obtained using the POCS method [25] and IBP method [26]; compared to the values obtained using the BI method [24], the average values of OA and the Kappa coefficient obtained using the MSR-NCHOTV method are higher by 32.20% and 46.14%, respectively.
From the above comparative analysis, it can be seen that the sharpness and SNR of the SRR satellite images, and the OA and Kappa coefficient of the classification results, obtained using the MSR-NCHOTV method are all higher than those obtained using the BI, IBP, or POCS methods. The MSR-NCHOTV SRR method has obvious advantages in eliminating step artifacts and preserving image texture details. Moreover, the method is not restricted by the image category, the image classification method, or the hardware environment. Therefore, the MSR-NCHOTV SRR method has potential as a preprocessing stage for multispectral image classification: it not only improves the spatial resolution of the images but also generates richer information and higher-quality super-resolution images. The method can also be used for the deblurring, feature extraction, fusion, and denoising of remote sensing images. However, the MSR-NCHOTV SRR method involves complicated parameter selection and requires multiple iterations, which reduces its operational efficiency. In the future, we will optimize the MSR-NCHOTV SRR method to improve the robustness of the algorithm, reduce its computational load, and shorten its running time, and we will further improve its operational efficiency by using a faster GPU. Additionally, we will apply this method to the k nearest neighbors (KNN) [48], multi-layer extreme learning machine-based autoencoder (MLELM-AE) [49], and fuzziness and spectral angle mapper-based active learning (FSAM-AL) [50] classification methods, and we will use the method to achieve the SRR of remote sensing videos.

6. Conclusions

In this study, a method based on the MSR-NCHOTV algorithm was first used to perform SRR of images from the GF-4 geostationary satellite, thereby improving their spatial resolution and clarity. Then, an SVM method was used to classify the land cover in the reconstructed images. The experimental results show that the overall accuracy and Kappa coefficient of the classification obtained using the SRR image produced by the MSR-NCHOTV method are higher than those obtained using the SRR images produced by the BI, POCS, or IBP methods, which demonstrates the feasibility of the proposed MSR-NCHOTV method. The method is not limited to the classification of ground objects or the extraction of image information; it can additionally be used in image denoising, image restoration, and other applications. The MSR-NCHOTV method can greatly improve the recognition rate and accuracy of target detection and is of great significance for the use of geostationary orbit satellite data in disaster reduction and prevention, meteorological early warning, and military operations. However, since the implementation of this method is time-consuming, we will attempt to reduce its calculation time and improve its efficiency in the future.

Author Contributions

Conceptualization, X.Y.; methodology, X.Y.; validation, X.Y.; formal analysis, X.Y., F.L.; investigation, X.Y.; writing—original draft preparation, X.Y.; writing-review and editing, X.Y., L.X., X.L.; visualization, M.L.; supervision, N.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the National Key Research and Development Projects (No. 2016YFB0501301) and the National Natural Science Foundation of China (61773383).

Acknowledgments

GF-4 satellite data were provided by the First Institute of Oceanography Ministry of Natural Resources, Qingdao, China.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations and Variables

The following abbreviations are used in this manuscript:
SR: Super-resolution
SRR: Super-resolution reconstruction
MSR-NCHOTV: Mixed sparse representation non-convex high-order total variation
SVM: Support vector machine
BI: Bilinear interpolation
POCS: Projection onto convex sets
IBP: Iterative back projection
MLC: Maximum likelihood classification
RFC: Random forest classifier
SRM: Super-resolution mapping
TV: Total variation
OGSTV: Overlapping group sparsity total variation
ADMM: Alternating direction method of multipliers
y_i: Low-resolution multispectral image recorded by sensor i
u: Ideal multispectral dataset obtained by sampling a continuous scene at high resolution
D: Subsampling matrix reflecting the difference between the expected resolution and the actual resolution of the sensor
ϕ(u): Overlapping group sparse regularizer
K: Weights for the contribution of the image spatial domain model
T_c(u): Characteristic (indicator) function
ρ: Penalty (regularization) parameter
μ: Lagrange multipliers associated with the constraints
FFT: Fast Fourier transform
SIFT: Scale-invariant feature transform
SURF: Speeded-up robust features
ORB: Oriented FAST and rotated BRIEF
RANSAC: Random sample consensus
SNR: Signal-to-noise ratio
KNN: K nearest neighbors
MLELM-AE: Multi-layer extreme learning machine-based autoencoder
FSAM-AL: Fuzziness and spectral angle mapper-based active learning
OA: Overall accuracy

References

  1. Lindsey, D.T.; Nam, S.; Miller, S.D. Tracking oceanic nonlinear internal waves in the Indonesian seas from geostationary orbit. Remote Sens. Environ. 2018, 208, 202–209. [Google Scholar] [CrossRef]
  2. Fang, L.; Zhan, X.; Schull, M.; Kalluri, S.; Laszlo, I.; Yu, P.; Carter, C.; Hain, C.; Anderson, M. Evapotranspiration Data Product from NESDIS GET-D System Upgraded for GOES-16 ABI Observations. Remote Sens. 2019, 11, 2639. [Google Scholar] [CrossRef] [Green Version]
  3. Kim, Y.; Hong, S. Deep Learning-Generated Nighttime Reflectance and Daytime Radiance of the Midwave Infrared Band of a Geostationary Satellite. Remote Sens. 2019, 11, 2713. [Google Scholar] [CrossRef] [Green Version]
  4. He, T.; Zhang, Y.; Liang, S.; Yu, Y.; Wang, D. Developing Land Surface Directional Reflectance and Albedo Products from Geostationary GOES-R and Himawari Data: Theoretical Basis, Operational Implementation, and Validation. Remote Sens. 2019, 11, 2655. [Google Scholar] [CrossRef] [Green Version]
  5. Bessho, K.; Date, K.; Hayashi, M.; Ikeda, A.; Imai, T. An introduction to Himawari-8/9—Japan’s new-generation geostationary meteorological satellites. J. Meteorol. Soc. Jpn. Ser. II 2016, 94, 151–183. [Google Scholar] [CrossRef] [Green Version]
  6. Fan, S.; Han, W.; Gao, Z.; Yin, R.; Zheng, Y. Denoising Algorithm for the FY-4A GIIRS Based on Principal Component Analysis. Remote Sens. 2019, 11, 2710. [Google Scholar] [CrossRef] [Green Version]
  7. Yang, L.; Gao, X.; Li, Z.; Jia, D.; Jiang, J. Nowcasting of Surface Solar Irradiance Using FengYun-4 Satellite Observations over China. Remote Sens. 2019, 11, 1984. [Google Scholar] [CrossRef] [Green Version]
  8. Zhang, T.; Ren, H.; Qin, Q.; Sun, Y. Snow Cover Monitoring with Chinese Gaofen-4 PMS Imagery and the Restored Snow Index (RSI) Method: Case Studies. Remote Sens. 2018, 10, 1871. [Google Scholar] [CrossRef] [Green Version]
  9. Chang, X.; He, L. System Noise Removal for Gaofen-4 Area-Array Camera. Remote Sens. 2018, 10, 759. [Google Scholar] [CrossRef] [Green Version]
  10. Zhang, P.; Lu, Q.; Hu, X.; Gu, S.; Yang, L.; Min, M.; Chen, L.; Xu, N.; Sun, L.; Bai, W.; et al. Latest progress of the Chinese meteorological satellite program and core data processing technologies. Adv. Atmos. Sci. 2019, 36, 1027–1045. [Google Scholar] [CrossRef]
  11. Tao, Y.; Muller, J.P. Super-Resolution Restoration of MISR Images Using the UCL MAGiGAN System. Remote Sens. 2019, 11, 52. [Google Scholar] [CrossRef] [Green Version]
  12. Almeida, C.A.; Coutinho, A.C.; Esquerdo, J.C.D.M.; Adami, M.; Venturieri, A.; Diniz, C.G.; Dessay, N.; Durieux, L.; Gomes, A.R. High spatial resolution land use and land cover mapping of the Brazilian Legal Amazon in 2008 using Landsat-5/TM and MODIS data. Acta Amaz. 2016, 46, 291–302. [Google Scholar] [CrossRef]
  13. Chen, Y.; Ge, Y.; Heuvelink, G.B.; Hu, J.; Jiang, Y. Hybrid constraints of pure and mixed pixels for soft-then-hard super-resolution mapping with multiple shifted images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2040–2052. [Google Scholar] [CrossRef]
  14. Jain, A.D.; Makris, N.C. Maximum Likelihood Deconvolution of Beamformed Images with Signal-Dependent Speckle Fluctuations from Gaussian Random Fields: With Application to Ocean Acoustic Waveguide Remote Sensing (OAWRS). Remote Sens. 2016, 8, 694. [Google Scholar] [CrossRef] [Green Version]
  15. Chatziantoniou, A.; Psomiadis, E.; Petropoulos, G.P. Co-Orbital Sentinel 1 and 2 for LULC Mapping with Emphasis on Wetlands in a Mediterranean Setting Based on Machine Learning. Remote Sens. 2017, 9, 1259. [Google Scholar] [CrossRef] [Green Version]
  16. Zhang, Y.; Cao, G.; Li, X.; Wang, B.; Fu, P. Active Semi-Supervised Random Forest for Hyperspectral Image Classification. Remote Sens. 2019, 11, 2974. [Google Scholar] [CrossRef] [Green Version]
  17. Li, L.; Chen, Y.; Xu, T.; Liu, R.; Shi, K.; Huang, C. Super-resolution mapping of wetland inundation from remote sensing imagery based on integration of back-propagation neural network and genetic algorithm. Remote. Sens. Environ. 2015, 164, 142–154. [Google Scholar] [CrossRef]
  18. Butt, A.; Shabbir, R.; Ahmad, S.S.; Aziz, N. Land use change mapping and analysis using Remote Sensing and GIS: A case study of Simly watershed, Islamabad, Pakistan. Egypt. J. Remote Sens. Space Sci. 2015, 18, 251–259. [Google Scholar] [CrossRef] [Green Version]
  19. De Philippis, G.; Lamboley, J.; Pierre, M.; Velichkov, B. Regularity of minimizers of shape optimization problems involving perimeter. J Math Pure Appl. 2018, 109, 147–181. [Google Scholar] [CrossRef] [Green Version]
  20. Shi, Z.; Li, P.; Jin, H.; Tian, Y.; Chen, Y.; Zhang, X. Improving Super-Resolution Mapping by Combining Multiple Realizations Obtained Using the Indicator-Geostatistics Based Method. Remote Sens. 2017, 9, 773. [Google Scholar] [CrossRef] [Green Version]
  21. Tong, X.; Xu, X.; Plaza, A.; Xie, H.; Pan, H.; Cao, W.; Lv, D. A new genetic method for subpixel mapping using hyperspectral images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 4480–4491. [Google Scholar] [CrossRef]
  22. He, D.; Zhong, Y.; Feng, R.; Zhang, L. Spatial-temporal sub-pixel mapping based on swarm intelligence theory. Remote. Sens. 2016, 8, 894. [Google Scholar] [CrossRef] [Green Version]
  23. Feng, R.; Zhong, Y.; Xu, X.; Zhang, L. Adaptive sparse subpixel mapping with a total variation model for remote sensing imagery. IEEE Trans. Geosci. Remote Sens. 2016, 54, 2855–2872. [Google Scholar] [CrossRef]
  24. Xu, X.; Tong, X.; Plaza, A.; Zhong, Y.; Zhang, L. Joint sparse sub-pixel mapping model with endmember variability for remotely sensed imagery. Remote Sens. 2017, 9, 15. [Google Scholar] [CrossRef] [Green Version]
  25. Yoo, J.S.; Kim, J.O. Noise-Robust Iterative Back-Projection. IEEE Trans. Image Process 2019, 29, 1219–1232. [Google Scholar] [CrossRef] [PubMed]
  26. Zhang, H.; Yang, Z.; Zhang, L.; Shen, H. SRR for multi-angle remote sensing images considering resolution differences. Remote Sens. 2014, 6, 637–657. [Google Scholar] [CrossRef] [Green Version]
  27. Li, F.; Xin, L.; Guo, Y.; Gao, J.; Jia, X. A framework of mixed sparse representations for remote sensing images. IEEE Trans. Geosci. Remote Sens. 2016, 55, 1210–1221. [Google Scholar] [CrossRef]
  28. Sun, L.; Zhan, T.; Wu, Z.; Xiao, L.; Jeon, B. Hyperspectral Mixed Denoising via Spectral Difference-Induced Total Variation and Low-Rank Approximation. Remote Sens. 2018, 10, 1956. [Google Scholar] [CrossRef] [Green Version]
  29. He, Z.; Liu, L. Hyperspectral Image Super-Resolution Inspired by Deep Laplacian Pyramid Network. Remote Sens. 2018, 10, 1939. [Google Scholar] [CrossRef] [Green Version]
  30. Liu, J.; Huang, T.Z.; Liu, G.; Wang, S.; Lv, X.G. Total variation with overlapping group sparsity for speckle noise reduction. Neurocomputing 2016, 216, 502–513. [Google Scholar] [CrossRef]
  31. Chen, Y.; Huang, T.Z.; Deng, L.J.; Zhao, X.L. Group sparsity based regularization model for remote sensing image stripe noise removal. Neurocomputing 2017, 267, 95–106. [Google Scholar] [CrossRef]
  32. Shao, Z.; Wang, L.; Wang, Z.; Deng, J. Remote Sensing Image Super-Resolution Using Sparse Representation and Coupled Sparse Autoencoder. IEEE J. STARS. 2019, 12, 2663–2674. [Google Scholar] [CrossRef]
  33. Yang, X.; Li, F.; Xin, L.; Zhang, N.; Lu, X.; Xiao, H. Finer scale mapping with super-resolved GF-4 satellite images. In Proceedings of the Image and Signal Processing for Remote Sensing XXV, Strasbourg, France, 9–11 September 2019; Volume 11155, p. 111550A. [Google Scholar]
  34. Wu, C.; Tai, X.C. Augmented Lagrangian method, dual methods, and split Bregman iteration for ROF, vectorial TV, and high order models. SIAM J. Image Sci. 2010, 3, 300–339. [Google Scholar] [CrossRef] [Green Version]
  35. Wang, Y.; Yang, J.; Yin, W.; Zhang, Y. A New Alternating Minimization Algorithm for Total Variation Image Reconstruction. SIAM J. Imaging Sci. 2008, 1, 248–272. [Google Scholar] [CrossRef]
  36. Wu, J.-Y.; Huang, L.-C.; Yang, M.-H.; Chang, L.-H.; Liu, C.-H. Enhanced Noisy Sparse Subspace Clustering via Reweighted L1-Minimization. In Proceedings of the 2018 IEEE 28th International Workshop on Machine Learning for Signal Processing (MLSP), Aalborg, Denmark, 17–20 September 2018. [Google Scholar]
  37. Adam, T.; Paramesran, R. Hybrid non-convex second-order total variation with applications to non-blind image deblurring. Signal Image Video Process. 2019, 14, 115–123. [Google Scholar] [CrossRef]
  38. Condat, L. Discrete total variation: New definition and minimization. SIAM J. Image Sci. 2017, 10, 1258–1290. [Google Scholar] [CrossRef] [Green Version]
  39. Bao, H.; Li, Z.L.; Chai, F.M.; Yang, H.S. Filter wheel mechanism for optical remote sensor in geostationary orbit. Opt. Precis. Eng. 2015, 23, 11. [Google Scholar]
  40. Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011. [Google Scholar]
  41. Tareen, S.A.K.; Saleem, Z. A comparative analysis of sift, surf, kaze, akaze, orb, and brisk. In Proceedings of the 2018 International Conference on Computing, Mathematics and Engineering Technologies (iCoMET), Sukkur, Pakistan, 3–4 March 2018. [Google Scholar]
  42. Nasihatkon, B.; Fejne, F.; Kahl, F. Globally optimal rigid intensity based registration: A fast fourier domain approach. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 5936–5944. [Google Scholar]
  43. Acosta, B.M.T.; Heiligenstein, X.; Malandain, G.; Bouthemy, P. Intensity-based matching and registration for 3D correlative microscopy with large discrepancies. In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018; pp. 493–496. [Google Scholar]
  44. Yang, A.; Zhong, B.; Wu, S.; Liu, Q. Radiometric Cross-Calibration of GF-4 in Multispectral Bands. Remote Sens. 2017, 9, 232. [Google Scholar] [CrossRef] [Green Version]
  45. Hossein-Nejad, Z.; Nasri, M. A-RANSAC: Adaptive random sample consensus method in multimodal retinal image registration. Biomed. Signal Process. 2018, 45, 325–338. [Google Scholar] [CrossRef]
  46. Crété, F.; Dolmiere, T.; Ladret, P.; Nicolas, M. The blur effect: Perception and estimation with a new no-reference perceptual blur metric. In Proceedings of the Human Vision and Electronic Imaging XII, San Jose, CA, USA, 29 January–1 February 2007; Volume 6492, p. 64920. [Google Scholar]
  47. Jiehai, C.; Yanchen, B. A method for measuring signal-to-noise ratio of high spatial resolution remote sensing images. Remote Sens. Technol. Appl. 2015, 30, 469–475. [Google Scholar]
  48. Ahmad, M.; Protasov, S.; Khan, A.M.; Hussain, R.; Khattak, A.M.; Khan, W.A. Fuzziness-based active learning framework to enhance hyperspectral image classification performance for discriminative and generative classifiers. PLoS ONE 2018, 13, e0188996. [Google Scholar] [CrossRef] [PubMed]
  49. Ahmad, M.; Khan, A.; Mazzara, M.; Distefano, S. Multi-layer Extreme Learning Machine-based Autoencoder for Hyperspectral Image Classification. In Proceedings of the 14th International Conference on Computer Vision Theory and Applications (VISAPP’19), Prague, Czech Republic, 25–27 February 2019; pp. 75–82. [Google Scholar]
  50. Ahmad, M.; Khan, A.; Khan, A.M.; Mazzara, M.; Distefano, S.; Sohaib, A.; Nibouche, O. Spatial prior fuzziness pool-based interactive classification of hyperspectral images. Remote Sens. 2019, 11, 1136. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Workflow of the sub-pixel-level high-precision image registration algorithm based on a combination of oriented FAST and rotated BRIEF (ORB) feature extraction and an intensity-based registration process. RANSAC: random sample consensus.
Figure 2. The two study areas used in this work.
Figure 3. Comparison of an original GF-4 image (a) and the band-registered image (b).
Figure 4. Results of the super-resolution reconstruction (SRR) for Experiment 1 using the (a) bilinear interpolation (BI), (b) iterative back projection (IBP), (c) projection onto convex sets (POCS), and (d) mixed sparse representation non-convex high-order total variation (MSR-NCHOTV) methods. Panels with the suffixes 1, 2, and 3 show local detail magnifications of the corresponding panel in the left column.
Figure 5. Image sharpness (a) and SNR (b) for Experiment 1.
Figure 6. Distribution of the 512 sample points which were selected from the SRR image for Experiment 1 that was obtained using the proposed MSR-NCHOTV algorithm.
Figure 7. Classification results for the SRR images for Experiment 1 obtained using (a) BI, (b) POCS, (c) IBP, (d) MSR-NCHOTV.
Figure 8. The SRR images for Experiment 2 obtained using (a) BI, (b) POCS, (c) IBP, (d) MSR-NCHOTV. Panels with the suffixes 1, 2, and 3 show local detail magnifications of the corresponding panel in the left column.
Figure 9. Image sharpness (a) and SNR (b) for Experiment 2.
Figure 10. Distribution of the 509 sample points which were selected from the SRR image for Experiment 2 that was obtained using the proposed MSR-NCHOTV algorithm.
Figure 11. Classification results for the SRR images for Experiment 2 which were obtained using (a) BI, (b) POCS, (c) IBP, (d) MSR-NCHOTV.
Table 1. Results of the band registration accuracy evaluation of a single frame of a Gaofen-4 (GF-4) satellite image.
| Reference Band | Registration Band | X Direction Mean Value (Pixels) | Y Direction Mean Value (Pixels) | Valid Points | Spatial Resolution (m) | X Direction Distance (m) | Y Direction Distance (m) |
|---|---|---|---|---|---|---|---|
| Band 1 | Band 2 | 1.39 | 1.81 | 70 | 50 | 69.35 | 90.27 |
| Band 1 | Band 3 | 1.68 | 0.88 | 73 | 50 | 83.92 | 44.02 |
| Band 1 | Band 4 | 1.83 | 1.14 | 72 | 50 | 91.71 | 56.85 |
| Band 1 | Band 5 | 3.41 | 1.39 | 66 | 50 | 170.62 | 69.51 |
Table 2. Evaluation results for image sharpness.
| Method | Band 1 | Band 2 | Band 3 | Band 4 | Band 5 |
|---|---|---|---|---|---|
| BI | 0.60 | 0.61 | 0.66 | 0.48 | 0.65 |
| POCS | 0.47 | 0.38 | 0.42 | 0.34 | 0.51 |
| IBP | 0.48 | 0.39 | 0.44 | 0.28 | 0.51 |
| MSR-NCHOTV | 0.45 | 0.36 | 0.41 | 0.24 | 0.46 |
Note: BI: bilinear interpolation; POCS: projection onto convex sets; IBP: iterative back projection; MSR-NCHOTV: mixed sparse representation non-convex high-order total variation.
Table 3. Evaluation results for image signal-to-noise ratio (SNR).
| Method | Band 1 | Band 2 | Band 3 | Band 4 | Band 5 |
|---|---|---|---|---|---|
| BI | 115.41 | 229.24 | 162.92 | 124.69 | 128.81 |
| POCS | 167.38 | 288.34 | 212.09 | 155.30 | 138.26 |
| IBP | 166.71 | 285.37 | 210.86 | 132.18 | 136.48 |
| MSR-NCHOTV | 192.56 | 384.29 | 267.71 | 247.88 | 203.01 |
Table 4. Results of the overall accuracy (OA) and Kappa coefficient of the four SRR methods.
| Method | OA (%) | Kappa |
|---|---|---|
| BI | 68.04 | 0.59 |
| POCS | 72.94 | 0.65 |
| IBP | 88.82 | 0.86 |
| MSR-NCHOTV | 92.75 | 0.91 |
Table 5. Results of the producer’s accuracy (PA) and user’s accuracy (UA) of the four SRR methods.
| Land Use Type | BI PA (%) | BI UA (%) | POCS PA (%) | POCS UA (%) | IBP PA (%) | IBP UA (%) | MSR-NCHOTV PA (%) | MSR-NCHOTV UA (%) |
|---|---|---|---|---|---|---|---|---|
| Buildings | 64.23 | 64.23 | 65.04 | 72.07 | 92.68 | 86.36 | 94.31 | 89.23 |
| Vegetation | 93.91 | 73.47 | 96.52 | 76.55 | 98.26 | 93.39 | 100 | 95.83 |
| Water | 76.09 | 78.95 | 81.16 | 84.85 | 94.2 | 99.24 | 93.48 | 100 |
| Soil | 16.87 | 37.84 | 33.73 | 49.12 | 59.04 | 83.05 | 74.7 | 92.54 |
| Beach | 80.39 | 58.57 | 80.39 | 63.08 | 92.16 | 70.15 | 100 | 79.69 |
Table 6. Evaluation results for image sharpness for Experiment 2.
| Method | Band 1 | Band 2 | Band 3 | Band 4 | Band 5 |
|---|---|---|---|---|---|
| BI | 0.49 | 0.44 | 0.51 | 0.38 | 0.65 |
| POCS | 0.34 | 0.31 | 0.33 | 0.28 | 0.35 |
| IBP | 0.40 | 0.39 | 0.41 | 0.32 | 0.39 |
| MSR-NCHOTV | 0.28 | 0.27 | 0.30 | 0.25 | 0.29 |
Table 7. Evaluation results for image SNR for Experiment 2.
| Method | Band 1 | Band 2 | Band 3 | Band 4 | Band 5 |
|---|---|---|---|---|---|
| BI | 123.61 | 209.96 | 143.56 | 35.23 | 118.13 |
| POCS | 114.35 | 131.30 | 136.53 | 42.34 | 215.51 |
| IBP | 167.91 | 134.80 | 165.86 | 62.28 | 260.93 |
| MSR-NCHOTV | 351.50 | 304.09 | 382.63 | 247.88 | 340.64 |
Table 8. The OA and Kappa coefficient of the classification for Experiment 2 using the SRR images obtained using the four methods.
| Method | OA (%) | Kappa |
|---|---|---|
| BI | 72.92 | 0.66 |
| POCS | 79.17 | 0.74 |
| IBP | 83.33 | 0.79 |
| MSR-NCHOTV | 93.40 | 0.91 |
Table 9. The PA and UA of the classification obtained using the SRR images for Experiment 2 for each land cover category.

| Land Cover Category | BI PA (%) | BI UA (%) | POCS PA (%) | POCS UA (%) | IBP PA (%) | IBP UA (%) | MSR-NCHOTV PA (%) | MSR-NCHOTV UA (%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Buildings | 58.33 | 66.67 | 95.83 | 92.00 | 100.00 | 100.00 | 88.89 | 100.00 |
| Vegetation | 75.51 | 88.10 | 81.63 | 88.89 | 77.55 | 88.37 | 98.57 | 83.13 |
| Water | 60.47 | 70.27 | 72.09 | 62.00 | 69.77 | 73.17 | 88.12 | 100.00 |
| Farmland | 90.00 | 66.67 | 97.50 | 84.78 | 92.50 | 84.09 | 95.12 | 93.41 |
| Beach | 75.00 | 71.05 | 52.78 | 73.08 | 86.11 | 77.50 | 100.00 | 88.89 |
Table 10. The percentage improvements in the sharpness and SNR of the SRR image obtained using the MSR-NCHOTV method relative to those of the SRR images obtained using the BI, POCS, and IBP methods.

| Metric | SRR Method | Band 1 (%) | Band 2 (%) | Band 3 (%) | Band 4 (%) | Band 5 (%) | Average (%) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Image Sharpness | BI | 33.93 | 39.81 | 39.53 | 42.11 | 42.31 | 39.54 |
| Image Sharpness | POCS | 10.96 | 9.08 | 5.73 | 20.06 | 13.47 | 11.86 |
| Image Sharpness | IBP | 18.13 | 19.23 | 16.83 | 18.09 | 17.72 | 18.00 |
| Image SNR | BI | 52.45 | 35.65 | 50.81 | 67.75 | 50.94 | 51.52 |
| Image SNR | POCS | 40.28 | 40.90 | 42.55 | 60.14 | 34.31 | 43.63 |
| Image SNR | IBP | 32.83 | 40.71 | 38.95 | 60.78 | 28.09 | 40.27 |
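The SNR rows of Table 10 can be reproduced from Tables 3 and 7 by expressing the gain as a fraction of the MSR-NCHOTV value and averaging the two experiments; this convention (rather than dividing by the baseline) is inferred here because it matches the published entries. The band 1 comparison against BI checks out to two decimal places:

```python
def snr_gain(msr: float, base: float) -> float:
    """Percentage gain of MSR-NCHOTV over a baseline, expressed as a
    fraction of the MSR-NCHOTV value (the convention that reproduces
    the SNR entries of Table 10)."""
    return 100.0 * (msr - base) / msr

# Band 1, MSR-NCHOTV vs BI (SNR values from Tables 3 and 7):
exp1 = snr_gain(192.56, 115.41)   # Experiment 1: ~40.07%
exp2 = snr_gain(351.50, 123.61)   # Experiment 2: ~64.83%
avg = 0.5 * (exp1 + exp2)         # ~52.45%, matching Table 10
```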
Table 11. The percentage improvements in the OA and Kappa coefficient for the classifications obtained using the POCS, IBP, and MSR-NCHOTV methods, relative to the values obtained using the BI method.

| Method | Experiment 1 OA (%) | Experiment 1 Kappa (%) | Experiment 2 OA (%) | Experiment 2 Kappa (%) | Average OA (%) | Average Kappa (%) |
| --- | --- | --- | --- | --- | --- | --- |
| POCS | 7.20 | 10.77 | 8.57 | 11.93 | 7.89 | 11.35 |
| IBP | 30.55 | 45.14 | 14.29 | 20.03 | 22.42 | 32.59 |
| MSR-NCHOTV | 36.31 | 53.64 | 28.08 | 38.64 | 32.20 | 46.14 |
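Unlike Table 10, the entries of Table 11 are expressed relative to the BI baseline. The Experiment 1 OA column can be reproduced from Table 4 (small discrepancies in the second decimal place are consistent with the OA values in Table 4 being rounded):

```python
def rel_gain(value: float, baseline: float) -> float:
    """Percentage improvement relative to the BI baseline,
    the convention used in Table 11."""
    return 100.0 * (value - baseline) / baseline

# Experiment 1 OA values from Table 4 (BI baseline = 68.04%):
pocs = rel_gain(72.94, 68.04)   # ~7.20, as reported
ibp = rel_gain(88.82, 68.04)    # ~30.54 vs the reported 30.55
msr = rel_gain(92.75, 68.04)    # ~36.32 vs the reported 36.31
```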

Share and Cite

MDPI and ACS Style

Yang, X.; Li, F.; Xin, L.; Lu, X.; Lu, M.; Zhang, N. An Improved Mapping with Super-Resolved Multispectral Images for Geostationary Satellites. Remote Sens. 2020, 12, 466. https://doi.org/10.3390/rs12030466
