Article

Super-Resolution Reconstruction of Remote Sensing Images of the China–Myanmar Pipeline Based on Generative Adversarial Network

1 College of Safety and Ocean Engineering, China University of Petroleum (Beijing), Beijing 102249, China
2 CNPC International Pipeline Company, Beijing 102206, China
3 College of Artificial Intelligence, China University of Petroleum (Beijing), Beijing 102249, China
4 Key Laboratory of Oil and Gas Safety and Emergency Technology, Ministry of Emergency Management, Beijing 102249, China
* Author to whom correspondence should be addressed.
Sustainability 2023, 15(17), 13068; https://doi.org/10.3390/su151713068
Submission received: 12 June 2023 / Revised: 20 July 2023 / Accepted: 25 August 2023 / Published: 30 August 2023

Abstract

The safety monitoring and early warning of overseas oil and gas pipelines are essential for China, as these pipelines serve as significant energy channels. The route of the China–Myanmar pipeline in Myanmar is mountainous, covered with vegetation, has a changeable climate and abundant rain, and is prone to disasters; manual route inspection is therefore dangerous and inefficient. Satellite remote sensing technology has an advantage over traditional ground surveys due to its large coverage and long-distance capabilities and can thus aid in monitoring the safety of oil and gas transportation. To improve the resolution of remote sensing data, in this paper we propose a Nonlocal dense receptive field generative adversarial network, using remote sensing images of the Muse section of the China–Myanmar pipeline as data. Based on the super-resolution generative adversarial network (SRGAN), we use a dense residual structure to increase the network depth and introduce the Softsign activation function to mitigate saturation and gradient vanishing. To extract deep features of the image, we propose a residual-in-residual nonlocal (RRNL) dense block and add a receptive field block (RFB) mechanism at the end to extract global features. Four loss functions are combined to improve the stability of model training and the quality of the reconstructed images. The experimental results show that the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) of the reconstructed remote sensing images of the Muse section reach 30.20 dB and 0.84. Compared to conventional methods and generic deep neural networks, the proposed approach achieves improvements of 8.33 dB and 1.41 dB in PSNR and of 21.7% and 5.9% in SSIM. The reconstructed images exhibit clearer textures and are more legible to the human eye. This method achieves super-resolution reconstruction of remote sensing images for the Muse section of the China–Myanmar pipeline, enhances image details, and significantly improves the efficiency of satellite remote sensing monitoring.

1. Introduction

Oil and natural gas are of paramount importance to a multitude of sectors, encompassing industry, agriculture, national defense, and transportation. These resources are closely linked to daily necessities, such as food, clothing, housing, and transportation. Pipeline transportation is the main mode of liquefied petroleum gas (LPG) transportation: as the demand for oil and gas increases, so does the volume of pipeline transportation, making the safe operation of oil and gas pipelines particularly important. Both pre-construction route selection surveys and post-operation environmental monitoring of transportation pipelines require a dedicated allocation of resources to ensure the smooth functioning of the pipelines. For the China–Myanmar pipeline, safety monitoring faces multiple challenges because the route traverses mountainous regions with dense vegetation, abundant rainfall, a variable climate, and frequent ground hazards. Satellite remote sensing technology has matured considerably in recent years and is widely used in disaster monitoring, environmental monitoring, marine monitoring, resource exploration, crop estimation, surveying and mapping, military applications, and other fields. For pre-construction pipeline surveys and post-operation monitoring, it has proven more advantageous than traditional ground surveys: it provides images of larger scope and more recent acquisition times, which can be uploaded to relevant platforms as basic data. Consequently, it improves the visibility and timeliness of transport pipeline surveys and monitoring while meeting the requirements of large scope and long distance, reducing engineering difficulty and workload and enhancing the safety of oil and gas pipeline transportation. Spatial resolution refers to the capability of a remote sensing image to discern or display the smallest spatial details; a higher spatial resolution means that the image has a higher level of detail and can distinguish smaller surface objects. However, because the satellite is so far from the ground, the resolution of the captured images is often insufficient to support continuous enlargement, resulting in blurred content and low-quality close-up views. Therefore, image reconstruction technology can be used to improve the resolution and quality of satellite images so as to better support monitoring of the pipeline area.
Traditional super-resolution methods include algorithms based on interpolation, reconstruction, and learning. Interpolation-based algorithms, such as Bicubic and Lanczos resampling [1], are simple and fast but can be less effective around image discontinuities such as textures and edges and may produce jagged artifacts. Reconstruction-based methods, such as iterative back projection [2] and convex set projection [3], can effectively alleviate jaggedness, but restoring a realistic scene remains challenging. Learning-based methods apply to samples with small data volumes and require artificially designed complex features, such as the K-neighborhood method that adds a Markov network to a neural network algorithm [4] and sparse coding networks that sparsely represent low-resolution (LR) image patches in combination with high-resolution (HR) image patch dictionaries.
With the development of deep learning, deep neural networks have become a popular choice for image super-resolution. Convolutional approaches such as the super-resolution convolutional neural network (SRCNN), efficient sub-pixel convolutional neural network (ESPCN), residual channel attention networks (RCAN), and the multi-scale feature interaction network (MSFIN) have driven the development of this field [5,6,7,8]. SRCNN [5] was the first deep learning network for super-resolution; it is data-driven and computationally intensive, and its small receptive field prevents it from extracting global features. Shi et al. proposed a sub-pixel convolution layer in ESPCN [6] to implement the image upscaling step. RCAN [7] applied the attention mechanism to super-resolution networks for the first time. MSFIN [8] is a lightweight method for migrating complex super-resolution algorithms to mobile devices. Methods such as very deep super-resolution (VDSR), the enhanced deep super-resolution network (EDSR), and the multi-scale residual network (MSRN) apply residual networks to this task [9,10,11]. VDSR [9] used residual networks for the first time. EDSR [10] removes unnecessary modules and compacts the model while obtaining better results. MSRN [11] implements adaptive detection of image features at different scales. SRGAN [12] introduced generative adversarial networks into the field for the first time and proposed a perceptual loss to make the generated images more realistic. The enhanced super-resolution generative adversarial network (ESRGAN) [13] modifies the network structure and loss function of SRGAN, removes artifacts, and enhances sharpness and edge information. Prajapati et al. proposed the unsupervised single image super-resolution network (USISResNet) [14], an unsupervised learning algorithm that uses an objective learning function based on the mean opinion score (MOS). Zhang et al. proposed a blind super-resolution generative adversarial network (BSRGAN) [15] to design a degradation model more applicable to real-world images.
In the field of remote sensing, the super-resolution process for remote sensing images can be summarized as follows: (1) acquiring low-resolution experimental images according to a simulated image degradation model; (2) aligning the images and obtaining motion estimation parameters; (3) performing image reconstruction to obtain reconstructed images; and (4) calculating evaluation indexes [16]. Li et al. proposed a generalized hidden Markov tree model [17] for the super-resolution of remote sensing images. Wang et al. [18] proposed an iterative optimization method for super-resolution processing of remote sensing images based on maximum a posteriori probability estimation. Rohith et al. [19] used a two-level discrete wavelet transform decomposition combined with a sub-pixel panning technique to obtain HR remote sensing images with enhanced details. Zhang et al. proposed a super-resolution-related support vector regression [20] method to convert low-resolution images into high-resolution images. Jiang et al. [21] proposed an adversarial generative network for enhancing boundary information in remote sensing images and obtained better results. Regarding deep learning in remote sensing for oil and gas transportation safety, Kang et al. proposed a self-supervised spectral–spatial transformer network (SSTNet) [22] for mapping oil spills in remote sensing images.
At present, most image super-resolution work applying deep learning is based on landscape, character, animal, or cartoon images, which are not suitable for some specific scenes. During training, deep networks often suffer from difficult convergence and vanishing gradients. In the context of super-resolution for remote sensing images, it is crucial to give special consideration to factors such as contextual information, long-range dependencies, and non-local similarity. Therefore, based on the safety study of the Muse section of the China–Myanmar oil and gas transportation pipeline, a remote sensing image dataset of the Muse section of the China–Myanmar pipeline is established in this paper, and a Nonlocal [23] dense receptive field adversarial model is proposed to realize super-resolution reconstruction of remote sensing images of the pipeline. We use a dense residual structure to increase the network depth, introduce the Softsign activation function to mitigate saturation and gradient vanishing, propose an RRNL dense receptive field module to extract deep image features, and introduce an RFB [24] mechanism to extract global features of the image so that the quality of reconstructed remote sensing images can be improved.

2. Data and Methods

2.1. Constructing Remote Sensing Image Dataset of the China–Myanmar Pipeline

The data used in the experiments in this paper were remote sensing images of the Muse section of the China–Myanmar pipeline in Myanmar collected from Google Earth; 1100 remote sensing images of 512 × 512 pixels were obtained after processing. The remote sensing image dataset of the China–Myanmar pipeline was established by dividing the data into training and validation sets at a ratio of 10:1, as shown in Figure 1. The proposed algorithm takes the LR image as the training input and the corresponding HR image as the reference, as shown in Figure 2. The LR images were obtained by quadruple downsampling, where (a1), (b1), (c1), (d1) are the HR images and (a2), (b2), (c2), (d2) are the LR images.
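As an illustration of this preprocessing step, the following sketch builds ×4 bicubic-downsampled LR counterparts of the 512 × 512 HR crops and splits the image list 10:1 into training and validation sets. The file layout and helper names are assumptions for illustration, not the authors' released code.

```python
import random
from pathlib import Path

from PIL import Image

SCALE = 4  # super-resolution factor used in the paper


def make_lr_image(hr_path: Path, lr_dir: Path) -> None:
    """Create the x4 bicubic-downsampled LR counterpart of one 512 x 512 HR crop."""
    hr = Image.open(hr_path).convert("RGB")
    w, h = hr.size
    lr = hr.resize((w // SCALE, h // SCALE), Image.BICUBIC)  # 512 x 512 -> 128 x 128
    lr.save(lr_dir / hr_path.name)


def split_dataset(hr_dir: Path, ratio: int = 10):
    """Split the HR image list into training and validation sets at ratio:1."""
    paths = sorted(hr_dir.glob("*.png"))
    random.shuffle(paths)
    n_val = len(paths) // (ratio + 1)  # e.g. 100 of the 1100 images go to validation
    return paths[n_val:], paths[:n_val]
```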

2.2. Remote Sensing Image Super-Resolution Network of China–Myanmar Pipeline

A generative adversarial network [25] is a generative model consisting of two networks, a generator and a discriminator: the data generated by the generator are fed into the discriminator, which tries to distinguish real samples from generated ones, and the two networks are trained in an adversarial game until the generated data become difficult to distinguish from real data. This study used a generative adversarial network to reconstruct HR remote sensing images along the Muse section of the China–Myanmar pipeline. Based on SRGAN, a dense residual structure was used to increase the depth of the network, and the Softsign activation function was introduced to mitigate saturation and gradient vanishing. The proposed RRNL dense receptive field module was designed to extract deep image features, and the RFB mechanism was introduced at the end to extract global features of the image, improving the training stability of the model and the quality of the reconstructed images.
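The alternating update behind this adversarial game can be sketched as follows. The function and variable names are placeholders rather than the authors' implementation, and the discriminator objective follows the common SRGAN-style form of minimizing 1 − D(x_real) + D(x_fake).

```python
def train_step(generator, discriminator, opt_g, opt_d, lr_img, hr_img, g_loss_fn):
    # 1) Discriminator update: score real HR images high and generated SR images low.
    sr_img = generator(lr_img).detach()          # detach so only D is updated here
    d_loss = (1 - discriminator(hr_img)).mean() + discriminator(sr_img).mean()
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Generator update: produce SR images the discriminator scores as real,
    #    using the combined generator loss (adversarial + perceptual + content + TV).
    sr_img = generator(lr_img)
    g_loss = g_loss_fn(sr_img, hr_img, discriminator(sr_img))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```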

2.2.1. Generator Design

The generative network structure of the nonlocal dense receptive field adversarial model presented in this paper was based on the SRGAN generator framework and consists of three main parts: global feature extraction, local feature extraction, and image reconstruction, as shown in Figure 3.
The LR image obtained by downsampling is first passed through a 9 × 9 convolutional layer that extracts global features; a Residual block and a dense receptive field network composed of three RRNL blocks in series then extract local features. The Residual block is similar to the original SRGAN structure but, following ESRGAN, removes the batch normalization (BN) layer to reduce artifacts and improve network generalization. The RRNL block consists of a residual-in-residual dense block (RRDB) [13] connected in series with a Nonlocal [23] structure. Each RRDB module consists of three residual stacking blocks that are densely interconnected to increase network capacity, as shown in Figure 4b,c; β in Figure 4b is the weighting factor for the residual operations in the RRDB and defaults to 0.2. The Nonlocal structure quickly captures long-range dependencies by directly computing the relationship between any two positions through non-local operations. The Nonlocal structure is shown in Figure 4a. The input X is processed by three 1 × 1 convolutional layers, yielding three branches: query, key, and value. After reshaping, the query and key branches are multiplied to obtain an (hw) × (hw) relationship matrix, and a softmax then produces the attention map that describes the relationship between each position and all other positions. The attention map is multiplied with the reshaped value branch, so the features of each point are related to those of all other points, providing global context information. The RFB mechanism is introduced at the end to simulate the receptive field of human vision and enhance the feature extraction capability of the network. The RFB structure evolved from Inception [26] by substituting sparse connections for dense links and introducing three additional dilated convolutional layers, as shown in Figure 5, where (3 × 3 conv, rate = 1/3/5) denotes the three additional dilated convolutions. The shortcut in the RFB design, taken from ResNet [27], is a technique for alleviating vanishing gradients and model degradation when training deep neural networks: skip connections directly connect the output of one layer to the input of subsequent layers, making it easier for gradients to propagate and for the network to train. Through the generator network, the output is obtained as a super-resolution (SR) image.
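To make the RRNL structure concrete, the following PyTorch sketch assembles an RRDB (three densely connected blocks with residual scaling β = 0.2) followed by a non-local attention block. Channel widths, growth rate, and layer counts are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn


class DenseBlock(nn.Module):
    """Five densely connected 3x3 convolutions with residual scaling (ESRGAN-style)."""
    def __init__(self, ch: int = 64, growth: int = 32, beta: float = 0.2):
        super().__init__()
        self.beta = beta
        self.convs = nn.ModuleList(
            nn.Conv2d(ch + i * growth, growth if i < 4 else ch, 3, padding=1)
            for i in range(5)
        )
        self.act = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x):
        feats, out = [x], x
        for i, conv in enumerate(self.convs):
            out = conv(torch.cat(feats, dim=1))
            if i < 4:
                out = self.act(out)
                feats.append(out)
        return x + self.beta * out                      # residual scaling by beta = 0.2


class NonLocalBlock(nn.Module):
    """Embedded-Gaussian non-local attention over all spatial positions."""
    def __init__(self, ch: int = 64):
        super().__init__()
        inter = ch // 2
        self.query = nn.Conv2d(ch, inter, 1)
        self.key = nn.Conv2d(ch, inter, 1)
        self.value = nn.Conv2d(ch, inter, 1)
        self.out = nn.Conv2d(inter, ch, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)    # (b, hw, c')
        k = self.key(x).flatten(2)                      # (b, c', hw)
        v = self.value(x).flatten(2).transpose(1, 2)    # (b, hw, c')
        attn = torch.softmax(q @ k, dim=-1)             # (b, hw, hw) attention map
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                          # residual connection


class RRNL(nn.Module):
    """RRDB (three dense blocks in residual-in-residual form) followed by a non-local block."""
    def __init__(self, ch: int = 64, beta: float = 0.2):
        super().__init__()
        self.beta = beta
        self.rrdb = nn.Sequential(DenseBlock(ch), DenseBlock(ch), DenseBlock(ch))
        self.attention = NonLocalBlock(ch)

    def forward(self, x):
        return self.attention(x + self.beta * self.rrdb(x))
```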

2.2.2. Discriminator Design

The discriminator was used to distinguish the real HR image from the fake image generated by the generator. Figure 6 shows the structure of the discriminator used in this paper, which consists of eight feature extraction blocks mainly composed of convolutional (CN), BN, and LeakyReLU layers. In each feature extraction block, the convolutional layers use a kernel size of 3, padding of 1, and a stride of 1 or 2, increasing the number of feature maps from 64 to 1024. At the end of the discriminator, an adaptive average pooling layer and the LeakyReLU function produce the final discrimination result.
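A minimal sketch of such a discriminator is given below; the exact channel progression from 64 to 1024 and the stride pattern are assumptions based on the description above, not the authors' code.

```python
import torch.nn as nn


def conv_block(in_ch: int, out_ch: int, stride: int) -> nn.Sequential:
    """One feature extraction block: 3x3 convolution + BN + LeakyReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.2, inplace=True),
    )


class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        widths = [64, 128, 128, 256, 256, 512, 512, 1024]   # 64 -> 1024 feature maps
        blocks, in_ch = [], 3
        for i, out_ch in enumerate(widths):
            blocks.append(conv_block(in_ch, out_ch, stride=1 if i % 2 == 0 else 2))
            in_ch = out_ch
        self.features = nn.Sequential(*blocks)
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                         # adaptive average pooling
            nn.Conv2d(1024, 1, kernel_size=1),
            nn.LeakyReLU(0.2, inplace=True),
        )

    def forward(self, x):
        return self.head(self.features(x)).flatten(1)        # one score per image
```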

2.2.3. Loss Function

This network loss function consists mainly of two parts, namely generator loss and discriminator loss [12], with the discriminator loss specified as follows:
$\mathrm{DLoss} = \mathrm{Max}\left[D\left(x_{\mathrm{real}}\right) - 1 - D\left(x_{\mathrm{fake}}\right)\right]$
$x_{\mathrm{fake}} = G(z)$
where x real is the input real image information, x fake is the reconstructed image generated by the generator, and z is the incoming noise.
The generator loss function is specified as follows:
$\mathrm{GLoss} = \mathrm{Min}\left(\lambda_{a} L_{a} + \lambda_{p} L_{p} + \lambda_{i} L_{i} + \lambda_{tv} L_{tv}\right)$
Among them, λ_a, λ_p, λ_i, and λ_tv are constant coefficients set to 0.001, 0.006, 1, and 2 × 10⁻⁸, respectively. L_a, L_p, L_i, and L_tv are the adversarial loss [12], perceptual loss [28], image mean square error loss, and total variation loss [29], respectively, defined in Equations (4)–(7):
$L_{a} = -\mathbb{E}_{x_{\mathrm{real}}}\left[\log\left(1 - D\left(x_{\mathrm{real}}, x_{\mathrm{fake}}\right)\right)\right] - \mathbb{E}_{x_{\mathrm{fake}}}\left[\log D\left(x_{\mathrm{fake}}, x_{\mathrm{real}}\right)\right]$
where E_{x_real} and E_{x_fake} denote the mean values taken over the real remote sensing images and the reconstructed images in a batch, respectively, and D(x_real, x_fake) denotes the relativistic output of the discriminator D, which measures the difference between the real image and the reconstructed image.
$L_{p} = \frac{1}{N}\sum_{i}^{N}\left\|\mathrm{Vgg}\left(x_{\mathrm{fake}}^{i}\right) - \mathrm{Vgg}\left(x_{\mathrm{real}}^{i}\right)\right\|_{1}$
where x_fake^i denotes the i-th reconstructed image, x_real^i denotes the i-th real image, Vgg(·) denotes the loss network, which here is the pre-trained VGG-19 [30] network, and N is the training batch size.
$L_{i} = \frac{1}{N}\sum_{i}^{N}\left\|x_{\mathrm{fake}}^{i} - x_{\mathrm{real}}^{i}\right\|_{1}$
In image super-resolution reconstruction, using the mean square error (MSE) as the loss makes it possible to obtain a high PSNR as well as good values of other quality evaluation metrics.
$L_{tv} = \sum_{i,j}\left[\left(x_{i,j-1} - x_{i,j}\right)^{2} + \left(x_{i+1,j} - x_{i,j}\right)^{2}\right]$
where x_{i,j} is the pixel in the i-th row and j-th column of the image, x_{i,j−1} is the pixel to its left, and x_{i+1,j} is the pixel below it; the total variation loss is the sum of the squared differences between each pixel and its left and bottom neighbors.
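The following sketch combines the four loss terms with the weights given above (λ_a = 0.001, λ_p = 0.006, λ_i = 1, λ_tv = 2 × 10⁻⁸). The adversarial term and VGG-19 feature maps are assumed to be computed elsewhere, and the code is illustrative rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

LAMBDA_A, LAMBDA_P, LAMBDA_I, LAMBDA_TV = 1e-3, 6e-3, 1.0, 2e-8


def total_variation(x: torch.Tensor) -> torch.Tensor:
    """Sum of squared differences between each pixel and its horizontal/vertical neighbor."""
    dh = (x[..., :, 1:] - x[..., :, :-1]) ** 2
    dv = (x[..., 1:, :] - x[..., :-1, :]) ** 2
    return dh.sum() + dv.sum()


def generator_loss(adv_loss, vgg_fake, vgg_real, sr, hr):
    l_p = F.l1_loss(vgg_fake, vgg_real)   # perceptual loss on VGG-19 feature maps
    l_i = F.l1_loss(sr, hr)               # image content loss
    l_tv = total_variation(sr)            # total variation regularizer
    return LAMBDA_A * adv_loss + LAMBDA_P * l_p + LAMBDA_I * l_i + LAMBDA_TV * l_tv
```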

2.3. Evaluation Indicators

The metrics PSNR and SSIM were selected to evaluate the quality of the reconstructed images.
PSNR is a measure of image quality representing the ratio of the maximum possible signal power to the power of the distorting noise that affects its representation accuracy; it is obtained through the MSE, whose formula is shown below:
$\mathrm{MSE} = \frac{1}{mn}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\left[I(i,j) - K(i,j)\right]^{2}$
where I and K are two m × n monochrome images, one of which is a noisy approximation of the other. The specific formula for PSNR is shown below:
$\mathrm{PSNR} = 10\log_{10}\left(\frac{\mathrm{MAX}_{I}^{2}}{\mathrm{MSE}}\right) = 20\log_{10}\left(\frac{\mathrm{MAX}_{I}}{\sqrt{\mathrm{MSE}}}\right)$
where MAX_I is the maximum possible pixel value of the image, which is 255 if each sample is represented by 8 bits; thus, the smaller the MSE, the larger the PSNR. A higher PSNR indicates better image quality and more accurate image analysis and enhancement, while a lower PSNR indicates degraded image quality, loss of information, and less reliable analysis results.
SSIM is a measure of image similarity computed between two images, x and y, typically a distortion-free reference image and the recovered image. The SSIM formula is as follows:
$\mathrm{SSIM}(x,y) = \left[l(x,y)\right]^{\alpha}\left[c(x,y)\right]^{\beta}\left[s(x,y)\right]^{\gamma}$
where α, β, γ are greater than 0; l(x,y), c(x,y), s(x,y) are brightness comparison, contrast comparison, and structure comparison, respectively, as follows:
$l(x,y) = \frac{2\mu_{x}\mu_{y} + c_{1}}{\mu_{x}^{2} + \mu_{y}^{2} + c_{1}}$
$c(x,y) = \frac{2\sigma_{x}\sigma_{y} + c_{2}}{\sigma_{x}^{2} + \sigma_{y}^{2} + c_{2}}$
$s(x,y) = \frac{\sigma_{xy} + c_{3}}{\sigma_{x}\sigma_{y} + c_{3}}$
where μ_x and μ_y are the mean values of x and y, σ_x and σ_y are the standard deviations of x and y, σ_xy is the covariance of x and y, and c_1, c_2, and c_3 are constants that prevent the denominators from being zero. In practical engineering, α = β = γ = 1 and c_3 = c_2/2 are generally taken, so SSIM simplifies to the following form:
$\mathrm{SSIM}(x,y) = \frac{\left(2\mu_{x}\mu_{y} + c_{1}\right)\left(2\sigma_{xy} + c_{2}\right)}{\left(\mu_{x}^{2} + \mu_{y}^{2} + c_{1}\right)\left(\sigma_{x}^{2} + \sigma_{y}^{2} + c_{2}\right)}$
A larger SSIM (which ranges from 0 to 1) between the output image and the distortion-free image indicates better image quality.
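As a worked example of the two metrics, the following sketch computes PSNR for 8-bit images (MAX_I = 255) and the simplified SSIM using whole-image statistics; windowed SSIM implementations instead average this quantity over local patches. This is standard usage, not code from the paper.

```python
import numpy as np


def psnr(ref: np.ndarray, rec: np.ndarray) -> float:
    """PSNR for 8-bit images (MAX_I = 255), computed through the MSE."""
    mse = np.mean((ref.astype(np.float64) - rec.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)


def ssim_global(x: np.ndarray, y: np.ndarray, k1=0.01, k2=0.03, L=255.0) -> float:
    """Simplified SSIM evaluated with whole-image (global) statistics."""
    x, y = x.astype(np.float64), y.astype(np.float64)
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
```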

3. Experiment and Discussion

3.1. Training Equipment and Parameter Settings

The experiments in this paper were conducted using the Windows 11 operating system and the PyTorch deep learning framework; the experimental environment details are provided in Table 1. The super-resolution scale was set to 4, the batch size for each round of training-data loading was 4, and the NAdam optimizer was used with betas = (0.9, 0.999), eps = 1 × 10⁻⁸, weight_decay = 0, and momentum_decay = 0.004. The learning rate was set to 0.0002 for iterations 1–650, 0.0001 for iterations 651–1200, and 0.00005 beyond 1200 iterations, for a total of 2000 iterations.
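This configuration can be reproduced with the following hedged sketch. The placeholder generator stands in for the RRNL-SRGAN generator, and a MultiStepLR schedule with milestones at 650 and 1200 and a decay factor of 0.5 yields the listed rates of 0.0002, 0.0001, and 0.00005.

```python
import torch

# Placeholder module standing in for the RRNL-SRGAN generator.
generator = torch.nn.Sequential(torch.nn.Conv2d(3, 64, 3, padding=1))

optimizer = torch.optim.NAdam(
    generator.parameters(), lr=2e-4, betas=(0.9, 0.999),
    eps=1e-8, weight_decay=0, momentum_decay=0.004,
)
# 0.0002 for iterations 1-650, 0.0001 for 651-1200, 0.00005 afterwards.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[650, 1200], gamma=0.5)

for iteration in range(2000):
    # ... forward pass, loss computation, loss.backward(), optimizer.step() ...
    scheduler.step()
```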

3.2. Experimental Result

In this paper, the performance of a non-local dense receptive field generative adversarial model for remote sensing image reconstruction was tested on the remote sensing image dataset of the Muse section of the China-Myanmar pipeline. Comparative experiments were conducted with Bicubic, Nearest, SRCNN, and SRGAN algorithms.
For objective metrics, the specific experimental results are shown in Table 2. On the Muse pipeline remote sensing image dataset, the RRNL-GAN reconstruction model outperformed the Bicubic, Nearest, SRCNN, and SRGAN methods by 8.33 dB, 5.66 dB, 1.41 dB, and 0.21 dB in PSNR, and by 0.078, 0.217, 0.059, and 0.006 in SSIM, respectively. These results suggest that the proposed model is more effective at super-resolution of remote sensing images than the other models, with final PSNR and SSIM values of 30.20 dB and 0.840, respectively.
To provide a more intuitive understanding of the subjective effect, this paper demonstrates the reconstruction results of the Nonlocal dense receptive field adversarial model on remote sensing images of the Muse pipeline. Figure 7 and Figure 8 show the comparative results of remote sensing image reconstruction of the pipeline by the various super-resolution algorithms: (a)–(c) are the input images, (a1)–(c1) the images reconstructed by the bicubic algorithm, (a2)–(c2) by the nearest algorithm, (a3)–(c3) by SRCNN, (a4)–(c4) by SRGAN, and (a5)–(c5) by the proposed model. The reconstructed images containing houses and buildings are visually compared in Figure 7. Figure 7a shows a remote sensing image including a barrel, a house, and a green area. The edge of the barrel is very blurred in (a1), (a2), and (a3), while (a4) and (a5) restore the edge information better; additionally, the grid division of the roof and the edge area of the barrel have a clearer and more realistic texture in (a5) than in (a4). Figure 7b shows a remote sensing image of a living site and a house: (b1), (b2), and (b3) are quite blurred for the texture of the site, (b4) and (b5) restore the white texture of the site more realistically, and (b5) renders the edge of the house more clearly than (b4). Figure 7c contains remote sensing images of houses and roads, among which (c5) is the clearest, with distinctive boundaries.
The visual comparison of the reconstructed images of the river and green area is shown in Figure 8. The proposed algorithm has better texture extraction capability than the other algorithms. Figure 8a is a remote sensing image including puddles and fields; compared to (a1), (a2), and (a3), the boundaries between the water surface and the land in (a4) and (a5) are more distinct, and the proposed algorithm (a5) reconstructs the texture of the cultivated field more realistically and clearly than (a4). Figure 8b is a remote sensing image including puddles and green areas; the reconstruction effect in (b5) is clearly better than in the previous four images. Figure 8c is a remote sensing image of terraces; the bicubic, nearest, and SRCNN algorithms produced reconstructed images with only average detail texture, and the results were too smooth, giving a visual effect inferior to the images reconstructed by the proposed model, whereas image (c5) restores the layered structure of the terraces more realistically.

3.3. Ablation Experiments

This paper introduces the Residual block1, Residual block2, RRDB, RRNL, Nonlocal, and RFB modules to compare the results of quadruple image super-resolution and to experimentally analyze the feasibility of the algorithm along the Muse pipeline. The Residual block1 module consists of a convolutional (CN) layer, a BN layer, and the PReLU activation function. The Residual block2 module includes five CN layers with the LeakyReLU activation function. RRDB combines Residual block2 with dense concatenation. Nonlocal consists of CN layers with a Softmax activation function. RFB consists of several BasicCn modules with the ReLU activation function, where the BasicCn module mainly consists of a CN layer, a BN layer, and the ReLU activation function. RRNL consists of RRDB and Nonlocal. The details are shown in Table 3.
Table 4 shows the specific network structures and experimental results for these modules, which form six models for comparison tests. When using one Residual block1 module, three RRNL modules, and one RFB module, the experiment reached maxima of 30.20 dB in PSNR and 0.840 in SSIM, exceeding the lowest values by 1.18 dB and 0.041, respectively. The comparison of specific results can be seen in Figure 9. Figure 10 shows the trend of the loss value in the generator part of the network during the experiments: experiment 1 (orange curve), experiment 2 (green curve), and experiment 3 (red curve) still show obvious spike fluctuations after many iterations and do not reach stable convergence. The model scheme finally adopted in this paper (blue curve) converges fastest, is relatively stable with small fluctuations, and reaches the lowest value, so this network model is the most desirable.

4. Conclusions

The China–Myanmar natural gas pipeline serves as a vital conduit for China's energy investments. It passes through Muse, Myanmar, a region with rugged topography and abundant rainfall that is prone to disasters around the pipeline. Manual inspection is difficult and carries a high risk factor, and once an accident occurs it causes huge losses. To monitor the safe operation of this pipeline section, remote sensing data along the pipeline can be obtained through satellite remote sensing technology, realizing visible and timely surveying and monitoring of a large-scale, long-distance transportation pipeline. This paper is committed to improving the resolution of remote sensing data along the pipeline and has completed the following work:
  • The generative adversarial network is applied to the super-resolution reconstruction of remote sensing images; the SRGAN-based network is improved by replacing the original generator structure with a dense residual structure, increasing the network depth so that it better fits the features of remote sensing images;
  • The Softsign activation function has been implemented, addressing the problems of network saturation and gradient disappearance;
  • The RRNL dense receptive field module is proposed, which enhances gradient propagation and is combined with a Nonlocal mechanism to extract deep features of the image without being limited to neighboring points; finally, the RFB mechanism is introduced to extract global image features;
  • By considering the adversarial loss, perceptual loss, image mean square error loss, and total variation loss during training, the network training is more stable.
According to the final experimental results, the proposed method demonstrates an improvement of 8.33 dB in PSNR and 0.217 in SSIM over the traditional method, and 1.41 dB in PSNR and 0.059 in SSIM over the common deep neural network, for super-resolution reconstruction of remote sensing images of the China–Myanmar pipeline. The reconstructed images also exhibit textures that are more visible to the human eye. Future research should continue to explore and solve the problem of detail loss in the super-resolution images and carry out land subsidence prediction work so as to better prevent pipeline safety accidents.

Author Contributions

Y.J.: Data curation and Conceptualization. Q.R.: Software and Writing—Original Draft. Y.R.: Visualization. H.L.: Formal analysis. S.D.: Project administration. Y.M.: Writing—Review & Editing. All authors have read and agreed to the published version of the manuscript.

Funding

China National Petroleum Corporation Limited—China University of Petroleum (Beijing) Strategic Cooperation Science and Technology Project: Research and Application of Key Technologies for the Integrity of the “Belt and Road” Overseas Long Distance Pipeline (ZLZX2020-05).

Institutional Review Board Statement

The study did not require ethical approval.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data that has been used are confidential.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Duchon, C.E. Lanczos Filtering in One and Two Dimensions. J. Appl. Meteorol. Climatol. 1979, 18, 1016–1022. [Google Scholar] [CrossRef]
  2. Qin, F.; He, X.; Wu, W.; Yang, X. Image super-resolution reconstruction based on sub-pixel registration and iterative back projection. In International Conference on Information Computing and Automation; University of Electric Power Technology: Hefei, China, 2008; pp. 277–280. [Google Scholar]
  3. Stark, H.; Oskoui, P. High-resolution image recovery from image-plane arrays, using convex projections. J. Opt. Soc. America. A Opt. Image Sci. 1989, 6, 1715–1726. [Google Scholar] [CrossRef] [PubMed]
  4. Freeman, W.T.; Pasztor, E.C. Learning low-level vision. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; Volume 2, pp. 1182–1189. [Google Scholar]
  5. Dong, C.; Loy, C.C.; He, K.; Tang, X. Learning a Deep Convolutional Network for Image Super-Resolution. In Proceedings of the Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, 6–12 September 2014; Proceedings, Part IV 13. Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T., Eds.; Springer International Publishing: Cham, Switzerland, 2014; pp. 184–199. [Google Scholar]
  6. Shi, W.; Caballero, J.; Huszar, F.; Totz, J.; Aitken, A.P.; Bishop, R.; Rueckert, D.; Wang, Z. Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 27–30 June 2016; pp. 1874–1883. [Google Scholar]
  7. Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; Fu, Y. Image Super-Resolution Using Very Deep Residual Channel Attention Networks. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 294–310. [Google Scholar]
  8. Wang, Z.; Gao, G.; Li, J.; Yu, Y.; Lu, H. Lightweight Image Super-Resolution with Multi-Scale Feature Interaction Network. In Proceedings of the 2021 IEEE International Conference on Multimedia and Expo (ICME), Shenzhen, China, 5–9 July 2021; pp. 1–6. [Google Scholar]
  9. Kim, J.; Lee, J.K.; Lee, K.M. Accurate Image Super-Resolution Using Very Deep Convolutional Networks. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1646–1654. [Google Scholar]
  10. Lim, B.; Son, S.; Kim, H.; Nah, S.; Lee, K.M. Enhanced Deep Residual Networks for Single Image Super-Resolution. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 21–26 July 2017; pp. 1132–1140. [Google Scholar]
  11. Li, J.; Fang, F.; Mei, K.; Zhang, G. Multi-scale Residual Network for Image Super-Resolution. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 527–542. [Google Scholar]
  12. Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 21–26 July 2017; pp. 105–114. [Google Scholar]
  13. Wang, X.; Yu, K.; Wu, S.; Gu, J.; Liu, Y.; Dong, C.; Qiao, Y.; Loy, C.C. ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; Leal-Taixé, L., Roth, S., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 63–79. [Google Scholar]
  14. Prajapati, K.; Chudasama, V.; Patel, H.; Upla, K.; Ramachandra, R.; Raja, K.; Busch, C. Unsupervised Single Image Super-Resolution Network (USISResNet) for Real-World Data Using Generative Adversarial Network. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 14–19 June 2020; pp. 1904–1913. [Google Scholar]
  15. Zhang, K.; Liang, J.; Gool, L.V.; Timofte, R. Designing a Practical Degradation Model for Deep Blind Image Super-Resolution. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, BC, Canada, 10–17 October 2021; pp. 4771–4780. [Google Scholar]
  16. Lu, W.; Wang, J. Survey of super resolution processing method of remote sensing image. Sci. Surv. Mapp. 2016, 41, 53–58, 69. [Google Scholar]
  17. Li, F.; Jia, X.; Fraser, D.; Lambert, A. Super Resolution for Remote Sensing Images Based on a Universal Hidden Markov Tree Model. IEEE Trans. Geosci. Remote Sens. 2010, 48, 1270–1278. [Google Scholar]
  18. Suyu, W.; Li, Z.; Xiaoguang, L. Spectral imagery super resolution by using of a high resolution panchromatic image. In Proceedings of the 2010 3rd International Conference on Computer Science and Information Technology, Chengdu, China, 9–11 July 2010; pp. 220–224. [Google Scholar]
  19. Rohith, G.; Vasuki, A. A Novel approach to super resolution image reconstruction algorithm from low resolution panchromatic images. In Proceedings of the 2015 3rd International Conference on Signal Processing, Communication and Networking (ICSCN), Chennai, India, 26–28 March 2015; pp. 1–8. [Google Scholar]
  20. Zhang, H.; Huang, B. Scale conversion of multi sensor remote sensing image using single frame super resolution technology. In Proceedings of the 2011 19th International Conference on Geoinformatics, Shanghai, China, 24–26 June 2011; pp. 1–5. [Google Scholar]
  21. Jiang, K.; Wang, Z.; Yi, P.; Wang, G.; Lu, T.; Jiang, J. Edge-Enhanced GAN for Remote Sensing Image Superresolution. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5799–5812. [Google Scholar] [CrossRef]
  22. Kang, X.; Deng, B.; Duan, P.; Wei, X.; Li, S. Self-Supervised Spectral–Spatial Transformer Network for Hyperspectral Oil Spill Mapping. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–10. [Google Scholar] [CrossRef]
  23. Wang, X.; Girshick, R.; Gupta, A.; He, K. Non-local Neural Networks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7794–7803. [Google Scholar]
  24. Liu, S.; Huang, D.; Wang, Y. Receptive Field Block Net for Accurate and Fast Object Detection. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 404–419. [Google Scholar]
  25. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Nets. In Proceedings of the 28th Conference on Neural Information Processing Systems (NIPS), Montreal, BC, Canada, 8–13 December 2014; pp. 2672–2680. [Google Scholar]
  26. Szegedy, C.; Wei, L.; Yangqing, J.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
  27. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  28. Johnson, J.; Alahi, A.; Fei-Fei, L. Perceptual Losses for Real-Time Style Transfer and Super-Resolution. In Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Leibe, B., Matas, J., Sebe, N., Welling, M., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 694–711. [Google Scholar]
  29. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 1992, 60, 259–268. [Google Scholar] [CrossRef]
  30. Simonyan, K.; Zisserman, A.J.C. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
Figure 1. Remote sensing image dataset of the China–Myanmar pipeline.
Figure 2. Comparison of the original images and the quadruple-downsampled images. (a1), (b1), (c1), (d1) HR images; (a2), (b2), (c2), (d2) LR images.
Figure 3. Structure of RRNL-SRGAN generator network.
Figure 4. RRNL module structure diagram. (a) Non-local module structure, (b) RRDB module structure, (c) Dense block structure.
Figure 5. RFB mechanism structure diagram.
Figure 6. Structure of RRNL-SRGAN discriminator network.
Figure 7. Comparison of visual effects of reconstructed images containing houses and buildings. (a) a remote-sensing image including a barrel, house, and green area; (b) a remote-sensing image of a living site and a house; (c) a remote-sensing image containing houses and roads; (a1–c1) processed by Bicubic; (a2–c2) processed by Nearest; (a3–c3) processed by SRCNN; (a4–c4) processed by SRGAN; (a5–c5) processed by Ours.
Figure 8. Comparison of the visual effects of the reconstructed images containing the river and green area. (a) a remote-sensing image including puddles and fields; (b) a remote-sensing image including puddles and green areas; (c) a remote-sensing image of terraces; (a1–c1) processed by Bicubic; (a2–c2) processed by Nearest; (a3–c3) processed by SRCNN; (a4–c4) processed by SRGAN; (a5–c5) processed by Ours.
Figure 9. Comparison of ablation experimental results.
Figure 10. Comparison of loss value results.
Table 1. Experimental environment.
Name | Version
Operating system | Windows 11
GPU | NVIDIA GeForce GTX 3080
Video memory | 10 GB
CUDA | 11.6
Python | 3.9
PyTorch | 1.12.0
Table 2. Mean values of PSNR and SSIM under different methods.
Method | PSNR (dB) | SSIM
Bicubic | 21.87 | 0.762
Nearest | 24.54 | 0.623
SRCNN | 28.79 | 0.781
SRGAN | 29.99 | 0.834
Ours | 30.20 | 0.840
Table 3. Composition of the modules of the ablation experiment.
Module Name | Module Composition
Residual block1 | CN + BN + PReLU + CN + BN
Residual block2 | CN + CN + CN + CN + CN + LeakyReLU
RRDB | Residual block2 + Residual block2 + Residual block2
Nonlocal | CN + CN + Softmax + CN + CN
BasicCn | CN + (BN) + ReLU
RFB | 3 × BasicCn + 3 × BasicCn + 4 × BasicCn + BasicCn + BasicCn + ReLU
RRNL | RRDB + Nonlocal
Table 4. Comparison of the results of ablation experiments.
Experiment Name | Model | PSNR (dB) | SSIM
Exp. 1 | Residual block1(3) + RRDB(2) + Nonlocal | 29.02 | 0.799
Exp. 2 | Residual block1(2) + RRDB(3) + Nonlocal | 29.03 | 0.800
Exp. 3 | RRDB(5) + Nonlocal | 29.02 | 0.799
Exp. 4 | Residual block1(1) + Residual block2(1) + RRDB(3) + Nonlocal | 29.89 | 0.831
Exp. 5 | Residual block1(1) + Residual block2(1) + RRDB(3) + RFB | 29.90 | 0.830
Exp. BEST | Residual block1(1) + RRNL(3) + RFB | 30.20 | 0.840
