Article

Blind UAV Images Deblurring Based on Discriminative Networks

1 State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
2 School of Software Engineering, Huazhong University of Science and Technology, Wuhan 430074, China
* Author to whom correspondence should be addressed.
Sensors 2018, 18(9), 2874; https://doi.org/10.3390/s18092874
Submission received: 22 June 2018 / Revised: 3 August 2018 / Accepted: 27 August 2018 / Published: 31 August 2018
(This article belongs to the Special Issue High-Performance Computing in Geoscience and Remote Sensing)

Abstract

Unmanned aerial vehicles (UAVs) have become an important technology for acquiring high-resolution remote sensing images. Because most UAV optical imaging systems operate in environments affected by vibrations, the optical axis motion and image plane jitter caused by these vibrations easily blur UAV images. In this paper, we propose a blind UAV image deblurring method based on a discriminative model: a classifier trained to distinguish blurred from sharp UAV images is embedded into the maximum a posteriori framework as a regularization term that continually constrains the ill-posed blind deblurring problem toward sharper UAV images. In deblurring experiments on both simulated and real UAV images, the proposed method delivers sharper images of various ground objects than the compared methods.

1. Introduction

Unmanned aerial vehicles (UAVs) represent a quickly evolving technology that has gained attention as a remote sensing tool across a variety of scientific fields. In contrast to traditional aircraft or satellite platforms, UAVs offer lower-cost flight missions, so they are often employed to produce various photogrammetric and remote sensing products when cost is a factor. However, UAVs rarely provide a stable camera platform, because they are affected by wind, turbulence, sudden operator inputs, and in-flight movements of the aircraft, which can result in blurred images. These blurs impede the visual analysis and interpretation of the data and can lower the accuracy of automatic photogrammetric processing algorithms.
Up to now, many deblurring methods for remote sensing images have been proposed. Li et al. [1] proposed a remote sensing image deblurring method based on grid computation with distributed processing. Papa et al. [2] used a technique that projects images onto convex sets to establish a priori information in a restoration algorithm for satellite images. Zhao et al. [3] put forward a model containing both total variation and sparsity regularization terms to deblur and unmix hyperspectral data. Li et al. [4] used a short-exposure, motion-blur-free image together with an accurately exposed blurred image to obtain the blur kernel as a priori knowledge for restoring sharp remote sensing images. Mastriani [5] combined a wavelet-domain noise reduction technique with a Kohonen self-organizing map learning approach to deblur SAR images. Shen et al. [6] used the Huber-Markov prior model to regularize both the image and the blur parameters for the deblurring of remote sensing images. Berisha et al. [7] constructed an optimal Kronecker preconditioner and used spectral data from an isolated star to estimate multiple point spread functions for joint deblurring and sparse unmixing of hyperspectral image datasets. Liao et al. [8] used a PCA transform to separate the information content of a hyperspectral image from the noise and employed a total variation method to jointly denoise and deblur hyperspectral images. Ma et al. [9], building on compressed sensing theory, provided a decoding algorithm based on the Poisson singular integral and iterative curvelet thresholding to correct blur in remote sensing images. Palsson et al. [10] used a Wiener filter to deblur the images produced by component substitution and multi-resolution analysis pansharpening methods. Xie et al. [11] designed an intersect direction iteration algorithm and proposed a total variation restoration model for remote sensing image restoration. Wang et al. [12] estimated the parameters of the blurred image based on the Bayesian principle to derive a remote sensing image deblurring algorithm. Tang et al. [13] used displacement vectors to build a prior point spread function and proposed an image deblurring method for remote sensing images based on local temporal compressive sensing. Chen et al. [14] constructed a point spread function by high-precision motion estimation for remote sensing image deblurring. He et al. [15] used a salient edge selection method based on relative total variation to predict sharp edge information and proposed a deblurring method for remote sensing images. Abrahams et al. [16] chose the optimal Gaussian width to estimate a symmetric Gaussian point spread function and proposed a way of mitigating blurring in the Defense Meteorological Satellite Program's nighttime lights images. Cao et al. [17] proposed a deblurring method for remote sensing images based on the relationship between the dark channel and convolution. Dong et al. [18] employed the standard Richardson-Lucy algorithm with a piecewise local regularization term and combined it with a residual deconvolution method to restore remote sensing images. Jidesh et al. [19] provided a level-set-driven anisotropic diffusion model, formulated within a non-local regularization framework, for deblurring SAR images.
Traditional image deblurring can be divided into blind and non-blind deconvolution. Non-blind image deconvolution can be carried out in various ways, but these methods all require additional knowledge. Yu et al. [20] detailed a deblurring method for remote sensing images that built a multi-scale image pyramid based on local region selection. Xu et al. [21] used the orbit and camera parameters to estimate the extent of lunar image motion blur, estimated the image motion value from the blurred lunar image with a small crater detection scheme, and adopted a regularization method to deblur the lunar remote sensing images. Other researchers gained additional information through a variety of methods, including fluttering shutters [22], color-channel-dependent exposure times [23], and video cameras [24].
Blind image deconvolution uses only the blurred image, without additional information, to complete the task of deriving a sharp image. In recent years, much progress has been made in blind image deconvolution [25,26,27,28,29,30,31]. Cho et al. [32] used a shock filter together with a bilateral filter to predict sharp edges and then selected the salient edges for kernel estimation. Xu et al. [33] proposed an effective mask computation algorithm to adaptively select useful edges for kernel estimation and introduced an iterative support detection method to refine the blur kernel. Shan et al. [34] proposed a piecewise continuous function to fit the natural image gradient distribution. Many image priors have also been introduced that favor clean images over blurred ones. Krishnan et al. [35] used a hyper-Laplacian distribution to approximate the natural image distribution. Wang et al. [36] employed image edge information as a prior for blind motion deblurring. Dong et al. [37] constructed prior knowledge from the Fields of Experts model for blind image deconvolution. Michaeli et al. [38] exploited internal patch recurrence to recover the underlying blur kernel. Pan et al. [39] utilized a dark channel prior for blind image deblurring. Xu et al. [40] proposed an L0-regularized prior for image deblurring.
Convolutional neural networks (CNNs) have emerged as a promising method to automatically learn deep feature representations from images and have achieved remarkable results in image processing [41,42,43,44,45,46,47,48]. Generative Adversarial Networks (GANs) have been proposed to synthesize realistic images by effectively learning the distribution of training images [49]. In order to distinguish between real images and generated images, the discriminative ability of a GAN is constantly enhanced through training. GANs [50,51,52,53,54], whose optimization methods are continually being improved, have also been widely applied to image generation and classification.
In the proposed method, we exploit the powerful discriminative ability of GANs to establish networks that effectively distinguish between blurred and sharp UAV images. We take the trained networks as a regularization term in the maximum a posteriori framework, which, as prior information, is used to continuously optimize the blind deblurring of UAV images and obtain better deblurring results. In order to make the discriminative model provide more useful prior information for UAV image deblurring during training, we input blurred UAV images of different blur intensities into the model so that the trained model becomes more robust when faced with differently blurred UAV images. The results of deblurring experiments with real blurred UAV images show that the proposed method yields sharper deblurring results than the other methods tested.

2. Background

The blur process of UAV images can be generally modeled as Equation (1):
$$B = L \otimes f + n \tag{1}$$
where B is the blurred UAV image, L is the latent image, f is the blur kernel, ⊗ denotes convolution, and n is the image noise. Blind UAV image deblurring is an ill-posed problem because the number of unknowns exceeds the number of observations. An observed blurred image provides only limited constraints on the solution, so there are many possible sharp images consistent with the observed blurred image, and regularization is required to solve the problem. In order to solve this problem, prior knowledge about both UAV images and blur kernels is essential.
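As an illustration of this formation model, the following sketch (not part of the original paper; the kernel shape and noise level are arbitrary choices) simulates Equation (1) by convolving a sharp image with a blur kernel and adding noise:

```python
import numpy as np
from scipy.signal import fftconvolve

def blur_image(latent, kernel, noise_sigma=0.01):
    """Simulate Equation (1): B = L (convolved with) f + n, for a grayscale image in [0, 1]."""
    blurred = fftconvolve(latent, kernel, mode="same")     # L convolved with f
    noise = noise_sigma * np.random.randn(*latent.shape)   # additive noise n
    return np.clip(blurred + noise, 0.0, 1.0)

# Hypothetical horizontal motion-blur kernel of length 9
kernel = np.zeros((9, 9))
kernel[4, :] = 1.0 / 9.0
```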
Many methods estimate the latent image L and the blur kernel f from the blurred image B based on Equation (1), which can be expressed as Equation (2) [39,40]:
$$\min_{L,f}\; \|L \otimes f - B\|_2^2 + \gamma \|f\|_2^2 + \mu \|\nabla L\|_0 + \lambda P(L) \tag{2}$$
where the first term enforces that the convolution of the deblurred image with the blur kernel should be similar to the observation; the second term regularizes the solution of the blur kernel; the third term is the L0 gradient prior used as a regularization term [40]; and the fourth term measures the sparsity of the latent image prior. The critical element of this framework is the latent image prior: a clearer latent UAV image is more helpful when minimizing Equation (2) to solve the ill-posed problem.

3. Proposed Method

3.1. The Process of Obtaining Image Prior

In order to better solve the blur problem in UAV images, we present a new method to learn an image prior based on discriminative networks. The GAN [49] is a deep learning method based on CNNs that adopts a min-max adversarial, game-theoretic optimization framework; its powerful image generation ability comes from continually updating its ability to discriminate between true and false images, as shown in Equation (3):
$$\min_G \max_D \; \mathbb{E}_{x \sim p_r}\left[\log D(x)\right] + \mathbb{E}_{\tilde{x} \sim p_g}\left[\log\left(1 - D(\tilde{x})\right)\right] \tag{3}$$
where p_r is the distribution of real images, p_g is the distribution of generated images, and D(·) is the output of the discriminative model.
Through continuous training, the probability distribution of the images generated by the generative model becomes indistinguishable from the probability distribution of true images, which makes it possible to generate clearer UAV images from blurred ones. Meanwhile, continuous training also leads to continuous enhancement of the classification ability of the discriminative model. Because such an image prior responds strongly to blurred images and weakly to clear images, we can use the effective discriminative ability of the GAN as an image prior: the trained classifier acts as a regularization term on the latent image for UAV image deblurring.

3.1.1. The Structure of Discriminative Networks

The GAN-based networks take an image as input and output a probability indicating whether the input image is a blurred UAV image. In order to make the proposed networks provide a better image prior, images with different blur levels and sizes are input into the networks. The proposed networks consist of the generative model G and the discriminative model D, as shown in Figure 1.
We construct the generative model G from six residual blocks, each consisting of two convolutional layers and two batch normalization layers [55], followed by a skip connection [56]. The residual block [57] alleviates vanishing gradients in deeper networks. Batch normalization pulls the distribution of the layer inputs back toward a standard normal distribution, which makes training faster and easier. The specific settings of each layer of the generative model G are as follows:
C(r,64) → C(64)B(r)C(64)B(r)SC → … (six residual blocks in total) … → C(64)B(r)C(64)B(r)SC → C(64) → C(t,3)
where C(r,64) denotes a convolutional layer with 64 feature maps and a ReLU activation function; C(64)B(r)C(64)B(r)SC represents a residual block, in which B(r) is a batch normalization layer with ReLU activation and SC denotes a skip connection (there are six residual blocks in total); and C(t,3) represents a convolutional layer with three feature maps and a Tanh activation function. The proposed discriminative model D is shown in the bottom part of Figure 1. The specific settings of each layer of the discriminative model are as follows:
C(lr,64) → C(128)BN(lr) → C(256)BN(lr) → C(512)BN(lr) → C(1024)BN(lr) → C(2048)BN(lr) → SC → GA → SM
where lr denotes the LeakyReLU activation function; C(128)BN(lr) denotes a convolutional layer with 128 feature maps followed by batch normalization with LeakyReLU activation; GA is the global average pooling layer, which converts the feature maps into a single score; and SM indicates the sigmoid non-linear function. The number of feature maps is gradually increased to 2048, which is a suitable width for the pretrained VGG networks [58].
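For reference, a minimal TensorFlow/Keras sketch of a discriminator following the layer pattern above is given below; the filter widths follow the description, while details such as the LeakyReLU slope, padding, and the final dense layer are assumptions rather than the exact configuration used in the paper:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_discriminator(input_shape=(320, 320, 3)):
    """Sketch of discriminative model D: conv/BN/LeakyReLU stack,
    global average pooling (GA), and a sigmoid (SM) blur-probability output."""
    x_in = layers.Input(shape=input_shape)
    x = layers.Conv2D(64, 4, strides=2, padding="same")(x_in)   # C(lr,64)
    x = layers.LeakyReLU(0.2)(x)
    for filters in (128, 256, 512, 1024, 2048):                 # C(...)BN(lr) stack
        x = layers.Conv2D(filters, 4, strides=2, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.LeakyReLU(0.2)(x)
    x = layers.GlobalAveragePooling2D()(x)                      # GA: maps -> vector
    out = layers.Dense(1, activation="sigmoid")(x)               # SM: probability of "blurred"
    return tf.keras.Model(x_in, out)
```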

3.1.2. The Loss Function

The primary aim of the proposed method is to obtain an effective classification network, so, when facing the min-max adversarial game-theoretic problem, we can fix the generative model G and optimize the discriminative model D. We optimize the proposed networks via the loss in Equation (4):
$$S(\rho) = -\frac{1}{M}\sum_{i=1}^{M}\left[ z_i^t \log\left(z_i\right) + \left(1 - z_i^t\right)\log\left(1 - z_i\right) \right] \tag{4}$$
where M is the number of input images, ρ denotes the parameters optimized by the proposed networks, z_i = p(y_i; ρ) is the output of the classifier indicating the probability that the i-th input image is blurred, and z_i^t is the corresponding label; we set z^t = 0 for sharp images and z^t = 1 for blurred UAV images.
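Written out as code, the loss in Equation (4) is a standard binary cross-entropy over a batch; the sketch below (with an assumed numerical-stability clipping constant) illustrates it:

```python
import tensorflow as tf

def classifier_loss(z_true, z_pred, eps=1e-7):
    """Binary cross-entropy of Equation (4): z_true are the blur labels
    (1 = blurred, 0 = sharp), z_pred are classifier outputs in (0, 1)."""
    z_pred = tf.clip_by_value(z_pred, eps, 1.0 - eps)
    return -tf.reduce_mean(z_true * tf.math.log(z_pred)
                           + (1.0 - z_true) * tf.math.log(1.0 - z_pred))
```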

3.2. Deblurring the UAV Images

After adding the GAN-based image prior, the objective function in Equation (2) for deblurring UAV images is converted into Equation (5):
$$\min_{L,f}\; \|L \otimes f - B\|_2^2 + \gamma \|f\|_2^2 + \mu \|\nabla L\|_0 + \lambda d(L) \tag{5}$$
The deblurring process is modeled as an optimization problem solved by alternately updating the latent image L and the blur kernel f, so we separate Equation (5) into the following Equations (6) and (7):
$$\min_{L}\; \|L \otimes f - B\|_2^2 + \mu \|\nabla L\|_0 + \lambda d(L) \tag{6}$$
$$\min_{f}\; \|L \otimes f - B\|_2^2 + \gamma \|f\|_2^2 \tag{7}$$

3.2.1. Estimating the Latent Image

Informed by existing methods [40,59], during the optimization of Equation (6) we use the half-quadratic splitting L0 minimization method and introduce the auxiliary variables j and k, which correspond to the image and the image gradients, respectively. Thus, the objective function can be rewritten as Equation (8):
$$\min_{L,j,k}\; \|L \otimes f - B\|_2^2 + \theta \|\nabla L - k\|_2^2 + \omega \|L - j\|_2^2 + \mu \|k\|_0 + \lambda d(j) \tag{8}$$
where θ and ω are penalty parameters. We can solve Equation (8) by alternately minimizing over L, j, and k while fixing the other variables, thus avoiding the non-convexity of directly minimizing ‖∇L‖_0 and d(L).
The latent image L can be solved efficiently by fixing j and k. The solution for L is obtained by solving Equation (9) during each iteration:
$$\min_{L}\; \|L \otimes f - B\|_2^2 + \theta \|\nabla L - k\|_2^2 + \omega \|L - j\|_2^2 \tag{9}$$
Equation (9) is a least squares minimization problem, whose solution can be obtained in closed form via FFTs, as shown in Equation (10):
$$L = \mathcal{F}^{-1}\!\left( \frac{\overline{\mathcal{F}(f)}\,\mathcal{F}(B) + \omega\,\mathcal{F}(j) + \theta\left[\overline{\mathcal{F}(\nabla_h)}\,\mathcal{F}(k_h) + \overline{\mathcal{F}(\nabla_v)}\,\mathcal{F}(k_v)\right]}{\overline{\mathcal{F}(f)}\,\mathcal{F}(f) + \omega + \theta\left(\overline{\mathcal{F}(\nabla_h)}\,\mathcal{F}(\nabla_h) + \overline{\mathcal{F}(\nabla_v)}\,\mathcal{F}(\nabla_v)\right)} \right) \tag{10}$$
where F(·) and F^{-1}(·) denote the Fast Fourier Transform (FFT) and the inverse FFT, respectively; the overline denotes the complex conjugate; k = (k_h, k_v) are the image gradients in the horizontal and vertical directions; and ∇_h and ∇_v indicate the horizontal and vertical differential operators, respectively.
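A minimal single-channel sketch of the frequency-domain update in Equation (10) is shown below; the psf2otf-style padding of the kernel and the finite-difference filters are standard choices assumed here, not details taken from the paper:

```python
import numpy as np

def psf2otf(psf, shape):
    """Pad a small filter to image size, circularly shift its center to the
    origin, and take the FFT (standard psf2otf behavior)."""
    pad = np.zeros(shape)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    for axis, size in enumerate(psf.shape):
        pad = np.roll(pad, -(size // 2), axis=axis)
    return np.fft.fft2(pad)

def solve_latent(B, f, j, kh, kv, omega, theta):
    """Closed-form update of the latent image L (Equation (10)) via FFTs."""
    Ff = psf2otf(f, B.shape)
    Fh = psf2otf(np.array([[1.0, -1.0]]), B.shape)    # horizontal difference operator
    Fv = psf2otf(np.array([[1.0], [-1.0]]), B.shape)  # vertical difference operator
    num = (np.conj(Ff) * np.fft.fft2(B) + omega * np.fft.fft2(j)
           + theta * (np.conj(Fh) * np.fft.fft2(kh) + np.conj(Fv) * np.fft.fft2(kv)))
    den = (np.conj(Ff) * Ff + omega
           + theta * (np.conj(Fh) * Fh + np.conj(Fv) * Fv))
    return np.real(np.fft.ifft2(num / den))
```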
Given the latent image L, we solve j and k separately with Equations (11) and (12):
$$\min_{j}\; \omega \|L - j\|_2^2 + \lambda d(j) \tag{11}$$
$$\min_{k}\; \theta \|\nabla L - k\|_2^2 + \mu \|k\|_0 \tag{12}$$
In order to solve Equation (11), we use back-propagation to compute the derivative of d(j) and update j by gradient descent, as shown in Equation (13):
$$j^{(s+1)} = j^{(s)} - \varphi\left[ \omega\left(j^{(s)} - L\right) + \lambda\, \nabla d\!\left(j^{(s)}\right) \right] \tag{13}$$
where s denotes the iteration index, φ denotes the step size, and ∇d(j^(s)) denotes the derivative of d(·) with respect to j^(s), computed by back-propagation through the discriminative networks.
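The gradient step of Equation (13) can be sketched as follows, using automatic differentiation to obtain the derivative of the classifier output d(·) with respect to j; the step size and weights are placeholders:

```python
import tensorflow as tf

def update_j(j, L, discriminator, omega, lam, step=0.1):
    """One gradient-descent step of Equation (13) on the auxiliary image j."""
    j = tf.convert_to_tensor(j, dtype=tf.float32)   # (H, W, C) auxiliary image
    L = tf.convert_to_tensor(L, dtype=tf.float32)   # current latent image
    with tf.GradientTape() as tape:
        tape.watch(j)
        d_j = tf.reduce_mean(discriminator(j[None, ...]))  # classifier prior d(j)
    grad_d = tape.gradient(d_j, j)                   # derivative of d(j) via back-propagation
    return j - step * (omega * (j - L) + lam * grad_d)
```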
Equation (12) is a pixel-wise minimization problem; thus, we solve it based on [60], as shown in Equation (14):
$$k = \begin{cases} \nabla L, & |\nabla L|^2 \geq \dfrac{\mu}{\theta} \\ 0, & \text{otherwise} \end{cases} \tag{14}$$
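A sketch of the pixel-wise solution in Equation (14): gradient entries whose squared magnitude falls below μ/θ are set to zero (horizontal and vertical gradient planes assumed):

```python
import numpy as np

def update_k(grad_h, grad_v, mu, theta):
    """Pixel-wise hard threshold of Equation (14) on the image gradients."""
    magnitude_sq = grad_h ** 2 + grad_v ** 2
    mask = magnitude_sq >= (mu / theta)   # keep only sufficiently strong gradients
    return grad_h * mask, grad_v * mask
```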

3.2.2. Estimating Blur Kernel

As in existing methods [29,60], kernel estimation based on image gradients has been shown to be more accurate when L is given. Therefore, we estimate the blur kernel from image gradients by solving Equation (15):
$$\min_{f}\; \|\nabla L \otimes f - \nabla B\|_2^2 + \gamma \|f\|_2^2 \tag{15}$$
Equation (15) can also be solved efficiently with FFTs within an image pyramid [61]. After obtaining f, the negative elements are set to 0 and f is normalized so that the sum of its elements is 1. We alternately solve Equations (6) and (15) over several iterations at each pyramid level. Algorithm 1 summarizes this coarse-to-fine solution.
Algorithm 1: Blur kernel estimation
Input: Blurred image B
Output: Blur kernel f
Initialize f with the result from the coarser pyramid level.
while iteration i ≤ 5 do
 Solve for the latent image L using Equation (6).
 Solve for the blur kernel f using Equation (15).
end while
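A high-level sketch of this coarse-to-fine alternation is given below; the pyramid scale factor, the initial kernel, and the solver callables solve_latent and solve_kernel (standing in for Equations (6) and (15)) are placeholders, not the paper's exact settings:

```python
import cv2
import numpy as np

def estimate_kernel_pyramid(blurred, solve_latent, solve_kernel,
                            kernel_size=25, levels=5, iters=5):
    """Coarse-to-fine alternation of Algorithm 1 (sketch).

    solve_latent(B, f) -> L stands in for Equation (6);
    solve_kernel(L, B, shape) -> f stands in for Equation (15)."""
    # Image pyramid from coarse to fine (scale factor 0.75 is an assumption).
    pyramid = [blurred]
    for _ in range(levels - 1):
        pyramid.append(cv2.resize(pyramid[-1], None, fx=0.75, fy=0.75))
    pyramid = pyramid[::-1]

    f = np.full((3, 3), 1.0 / 9.0)                     # simple initial kernel
    for B in pyramid:
        f = cv2.resize(f, (kernel_size, kernel_size))  # carry kernel to this level
        f = np.clip(f, 0, None); f /= f.sum()
        for _ in range(iters):
            L = solve_latent(B, f)                     # Equation (6): latent image update
            f = solve_kernel(L, B, f.shape)            # Equation (15): kernel update
            f = np.clip(f, 0, None)                    # set negative elements to 0
            f /= f.sum()                               # normalize kernel to sum to 1
    return f
```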

4. Experiments

4.1. Training Details of Discriminative Networks

In order to obtain a latent image prior for UAV image deblurring, we constructed a training dataset of UAV images to learn the GAN-based classification networks. The training dataset comprised two parts. One part was acquired by a CW-30 UAV in Yangjiang City (Guangdong Province, China); the camera was a SWDC with a focal length of 50 mm, the size of each full image was 8206 × 6078 pixels, and the flying height was 600–800 m. The other part was acquired by a CW-30 in Guiyang City (Guizhou Province, China); the camera was an H5D-50 with a focal length of 50 mm, the size of each full image was 8176 × 6132 pixels, and the flying height was 600–800 m. For convenience during training, we cropped the full images into smaller images using Photoshop. In order to obtain blurred UAV images, we added realistic synthesized blur of different sizes and intensities to 1000 sharp UAV images of 320 × 320 pixels. The discriminative networks were trained on an Nvidia GRID M60-8Q (8 GB) GPU using the TensorFlow framework, and the number of training iterations was 80 k. Due to the limited memory of the computer, the batch size of our experiment was 1. We used Adam as the optimization algorithm and set the initial learning rate to 0.002, which was decreased by a factor of 10 every 100 epochs.
During training, all strides in the generative model G were set to 1, and all convolution kernels were 3 × 3 except for the last, which was 1 × 1. In the discriminative model D, the first six layers used 4 × 4 kernels with a stride of 2; the next layer used 1 × 1 kernels with a stride of 1; and the last two layers used 3 × 3 kernels with a stride of 1. In both the generative model G and the discriminative model D, all convolutions used edge padding.
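The optimizer settings described above (Adam, initial learning rate 0.002, decayed by a factor of 10 every 100 epochs, batch size 1) could be configured as in the following sketch; the steps-per-epoch value is a placeholder that depends on the dataset size:

```python
import tensorflow as tf

initial_lr = 0.002
steps_per_epoch = 1000   # placeholder: depends on dataset size with batch size 1

# Decrease the learning rate by a factor of 10 every 100 epochs.
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=initial_lr,
    decay_steps=100 * steps_per_epoch,
    decay_rate=0.1,
    staircase=True)

optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)
```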

4.2. Comparison and Qualitative Evaluation

After the training of the GAN-based networks converged, the trained model was used as the latent image prior. In all the experiments, we set μ = 0.005, λ = 0.005, and γ = 2. We evaluated the proposed algorithm on synthetically blurred UAV images and real blurred UAV images. The testing dataset was photographed in other mapping areas and was acquired by a CW-10 UAV in Wuhan City (Hubei Province, China); the camera was an ILCE-7R with a focal length of 28 mm, the size of each full image was 7360 × 4916 pixels, and the flying height was 400–600 m. One part of the testing dataset consisted of originally sharp UAV images, to which we artificially added synthetic blur of different sizes and intensities, as shown in Figure 2. The other part of the testing dataset consisted of real blurred UAV images. The proposed method was compared qualitatively and quantitatively with method [17], method [59], and method [38]. We chose method [17] because it is related to remote sensing image deblurring and therefore comparable; methods [59] and [38] are widely cited references closely related to our work, so they serve as valuable baselines for this paper.

4.2.1. Comparison of the Synthetic Blurred UAV Image Results

In Figure 2, we provide the experimental results for four UAV images of different ground objects to which synthetic blur of different sizes and intensities was added. The labels a1–d1 indicate the real clean UAV images, the labels a2–d2 indicate the synthetically blurred UAV images, and the labels a3–d3 indicate the deblurred UAV images from method [17]. The labels a4–d4 indicate the deblurred UAV images from method [59], the labels a5–d5 indicate the deblurred UAV images from method [38], and the labels a6–d6 indicate the deblurred UAV images from the proposed method. In each image, small white and blue rectangles identify areas that are enlarged for examination in the larger white and blue rectangles inserted into each series of images.
Figure 3 shows the estimated blur kernels for the deblurred UAV images in Figure 2, which allows visual comparison of the blur kernel estimates. The blur kernels obtained by the proposed method contain more continuous blobs and appear more plausible upon visual comparison.
A qualitative visual comparison of the images in Figure 2 shows that the proposed method restores more distinct ground edges and more detailed ground object textures than the other tested methods. Meanwhile, the deblurred UAV images obtained by the proposed method are closer to the true ground objects.
In order to quantitatively evaluate the proposed method, we employ the Structural Similarity Index (SSIM) [62] and the frequency-domain Image Quality Measure (FM) [63] for comparative evaluation. FM is a sharpness measure for blurred images computed in the frequency domain using Equation (16); the specific details of the FM algorithm can be found in [63]:
$$\mathrm{FM} = \frac{T_H}{M \times N} \tag{16}$$
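As a reference implementation, the sketch below computes FM following the formulation in [63], where T_H is the number of components of the centered Fourier spectrum whose magnitude exceeds a threshold (taken here as 1/1000 of the maximum, as commonly used) and M × N is the image size; SSIM is computed with scikit-image for comparison:

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def fm_measure(image):
    """Frequency-domain sharpness measure FM = T_H / (M * N) (Equation (16))."""
    F = np.fft.fftshift(np.fft.fft2(image))          # centered Fourier spectrum
    magnitude = np.abs(F)
    threshold = magnitude.max() / 1000.0             # threshold assumed as in [63]
    T_H = np.count_nonzero(magnitude > threshold)    # strong frequency components
    M, N = image.shape
    return T_H / (M * N)

# SSIM between a deblurred image and the ground truth (grayscale floats in [0, 1]):
# score = ssim(ground_truth, deblurred, data_range=1.0)
```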
Table 1 lists the SSIM values of the deblurred UAV images obtained by the several deblurring methods. In Table 1, row 1 denotes the tested methods and column 1 denotes the four UAV images of different ground objects; columns 2–5 give the SSIM values of the deblurred images obtained by method [17], method [59], method [38], and the proposed method, respectively. Table 2 is organized in the same way as Table 1, but reports the FM values for the tested methods.
As can be seen in Table 1 and Table 2, the proposed method achieves the highest SSIM and FM values, which is consistent with the visual quality of the deblurred UAV images.

4.2.2. Comparison of the Real Blurred UAV Image Results

We evaluated the proposed method on the real blurred UAV images, consisting of five UAV images with different ground objects. Figure 4 shows the deblurred results for several real UAV images processed with the proposed algorithm and other test methods.
It can be seen that the deblurred images output by the proposed algorithm contain clearer ground object textures and fewer artifacts. Figure 5 shows the estimated blur kernels for the deblurred UAV images in Figure 4 for visual inspection.
Table 3 presents the FM values of the deblurred UAV images obtained by the several deblurring methods. In Table 3, row 1 denotes the tested methods and column 1 denotes the five real UAV images of different ground objects; columns 2–5 give the FM values of the deblurred images obtained by method [17], method [59], method [38], and the proposed method, respectively. As can be observed in Table 3, the proposed method achieves the highest FM values, which is consistent with the visual quality of the real deblurred UAV images.
Better deblurred images tend to restore the local features of the UAV images more completely. If a matching method based on local features is used to match a deblurred image with a sharp image of the same region, better matching results should indicate better deblurring, so we also compare the deblurring results from the perspective of image matching. Because the UAV platform used to acquire the images carries multiple cameras (left-view, front-view, right-view, back-view, and down-view), images of the same mapping area can be obtained from different angles. To further verify our method, the deblurred UAV images produced by the several methods are matched with the sharp images from different cameras over the same mapping areas, and the results of the matching experiments are used to compare the deblurring results.
SIFT [64] is a well-known image matching algorithm based on local image features. The quality of the matching results often reflects the similarity of the local characteristics of the two images, and the number of matching pairs is the standard for judging the quality of the matching results. Therefore, in this article, we compare the several deblurring methods using the number of matching point pairs obtained with SIFT.
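A sketch of this comparison is given below: SIFT keypoints are matched between a deblurred image and the corresponding sharp image with OpenCV, and the number of matches surviving Lowe's ratio test (the 0.7 threshold is an assumption) is used as the score:

```python
import cv2

def count_sift_matches(deblurred_gray, sharp_gray, ratio=0.7):
    """Count SIFT matches between a deblurred image and a sharp reference image."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(deblurred_gray, None)
    kp2, des2 = sift.detectAndCompute(sharp_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    # Lowe's ratio test keeps only distinctive matches.
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    return len(good)
```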
Figure 6 shows the matching results between sharp UAV images from different camera views and the deblurred UAV images of the same region produced by the four tested methods. Table 4 reports the number of correct matching pairs for Figure 6. It can be observed that the deblurred UAV images generated by the proposed method obtain more correct matching pairs than those of the other methods.
The comparative experiments with real blurred UAV images and synthetically blurred UAV images show that the proposed method obtains better deblurring results. In the comparative experiments with synthetic data, the proposed method obtained the highest SSIM and FM values. In the comparative experiments with real data, our method better restored the local features of real blurred UAV images and produced clearer ground object textures.

5. Conclusions

In this paper, we utilized a trained discriminative model as a classifier that can effectively distinguish between blurred and clear UAV images for UAV image deblurring. The trained classifier is used as prior image information to continuously optimize the blind deblurring of UAV images and obtain better deblurring results. In comparative experiments on simulated and real blurred UAV images, the proposed method obtained better results for various ground objects. In the process of training the discriminative model, we input UAV images with different blur intensities, which makes the data-driven image prior more applicable when deblurring real UAV images.

Author Contributions

R.W. executed all the analyses and wrote most of the paper; G.M. and R.W. reviewed the content and offered substantial improvements to this paper; Q.Q. and Q.S. developed and coordinated the project and helped in the interpretation of the results; J.H. shaped significant ideas for the experiments of this paper.

Funding

This work was supported by the National Key R&D Program of China (2018YFB10046).

Acknowledgments

We are thankful to Stephen C. McClure for freely providing English editing of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Li, S.; Zhu, G.; Ge, P. Remote Sensing Image Deblurring Based on Grid Computation. Int. J. Min. Sci. Technol. 2006, 16, 409–412. [Google Scholar] [CrossRef]
  2. Papa, J.; Mascarenhas, N.; Fonseca, L.; Bensebaa, K. Convex restriction sets for CBERS-2 satellite image restoration. Int. J. Remote Sens. 2008, 29, 443–458. [Google Scholar] [CrossRef]
  3. Zhao, X.; Wang, F.; Huang, T.; Ng, M.; Plemmons, R. Deblurring and Sparse Unmixing for Hyperspectral Images. IEEE Trans. Geosci. Remote Sens. 2013, 51, 4045–4058. [Google Scholar] [CrossRef]
  4. Li, Q.; Dong, W.; Xu, Z.; Feng, H.; Chen, Y. Motion Blur Suppression Method Based on Dual Mode Imaging for Remote Sensing Image. Spacecr. Recover. Remote Sens. 2013, 34, 86–92. [Google Scholar]
  5. Mastriani, M. Denoising based on Wavelets and Deblurring via Self-Organizing Map for Synthetic Aperture Radar Images. arXiv, 2016; arXiv:1608.00274. Available online: https://arxiv.org/abs/1608.00274 (accessed on 22 June 2018).
  6. Shen, H.; Du, L.; Zhang, L.; Gong, W. A Blind Restoration Method for Remote Sensing Images. IEEE Geosci. Remote Sens. Lett. 2012, 9, 1037–1047. [Google Scholar] [CrossRef]
  7. Berisha, S.; Nagy, J.; Plemmons, R. Deblurring and Sparse Unmixing of Hyperspectral Images Using Multiple Point Spread Functions. SIAM J. Sci. Comput. 2015, 37, 389–406. [Google Scholar] [CrossRef]
  8. Liao, W.; Goossens, B.; Aelterman, J.; Luong, H.; Pižurica, A. Hyperspectral image deblurring with PCA and total variation. In Proceedings of the 5th Workshop on Hyperspectral Image & Signal Processing: Evolution in Remote Sensing, Gainesville, FL, USA, 26–28 June 2013; pp. 1–4. [Google Scholar]
  9. Ma, J.; Dimet, F. Deblurring From Highly Incomplete Measurements for Remote Sensing. IEEE Trans. Geosci. Remote Sens. 2009, 47, 792–802. [Google Scholar] [CrossRef] [Green Version]
  10. Palsson, F.; Sveinsson, J.; Ulfarsson, M.; Benediktsson, J. MTF-Based Deblurring Using a Wiener Filter for CS and MRA Pansharpening Methods. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 2255–2269. [Google Scholar] [CrossRef]
  11. Xie, M.; Yan, F. Half-blind remote sensing image restoration with partly unknown degradation. In Proceedings of the Seventh International Conference on Electronics and Information Engineering, Nanjing, China, 17–18 September 2016; p. 1032213. [Google Scholar]
  12. Wang, Z.; Geng, G. Investigation on Deblurring of Remote Sensing Images Using Bayesian Principle. In Proceedings of the Fourth International Conference on Image and Graphics (ICIG 2007), Washington, DC, USA, 22–24 August 2007; pp. 160–163. [Google Scholar]
  13. Tang, C.; Chen, Y.; Feng, H.; Xu, Z.; Li, Q. Motion Deblurring based on Local Temporal Compressive Sensing for Remote Sensing Image. Opt. Eng. 2016, 55, 093106. [Google Scholar] [CrossRef]
  14. Chen, Y.; Wu, J.; Xu, Z.; Li, Q.; Feng, H. Image deblurring by motion estimation for remote sensing. In Proceedings of the Satellite data compression, communications, and processing VI, San Diego, CA, USA, 3–5 August 2010; Volume 7810, p. 78100U. [Google Scholar]
  15. He, Y.; Liu, J.; Liang, Y. An Improved Robust Blind Motion Deblurring Algorithm for Remote Sensing Images. In Proceedings of the International Symposium on Optoelectronic Technology and Application Conference, Beijing, China, 9–11 May 2016; p. 101573A. [Google Scholar]
  16. Abrahams, A.; Oram, C.; Lozano-Gracia, N. Deblurring DMSP nighttime lights: A new method using Gaussian filters and frequencies of illumination. Remote Sens. Environ. 2018, 210, 242–258. [Google Scholar] [CrossRef]
  17. Cao, S.; Tan, W.; Xing, K.; He, H.; Jiang, K. Dark channel inspired deblurring method for remote sensing image. J. Appl. Remote Sens. 2018, 12, 015012. [Google Scholar] [CrossRef]
  18. Dong, W.; Feng, H.; Xu, Z.; Li, Q. A piecewise local regularized Richardson–Lucy algorithm for remote sensing image deconvolution. Opt. Laser Technol. 2011, 43, 926–933. [Google Scholar] [CrossRef]
  19. Jidesh, P.; Balaji, B. Deep Learning for Image Denoising. Int. J. Remote Sens. 2018. [Google Scholar] [CrossRef]
  20. Yu, C.; Tan, H.; Chen, Q. Multi-Scale Image Deblurring Based on Local Region Selection and Image Block Classification. In Proceedings of the 2017 International Conference on Wireless Communications, Networking and Applications WCNA2017, Shenzhen, China, 20–22 October 2017; pp. 256–260. [Google Scholar]
  21. Xu, Z.; Ye, P.; Cui, G.; Feng, H.; Li, Q. Image Restoration for Large-motion Blurred Lunar Remote Sensing Image. Soc. Photo-Opt. Instrum. Eng. 2016, 10025, 1025534. [Google Scholar] [CrossRef]
  22. Raskar, R.; Agrawal, A.; Tumblin, J. Coded Exposure Photography: Motion Deblurring using Fluttered Shutter. ACM Trans. Graph. 2006, 25, 795–804. [Google Scholar] [CrossRef]
  23. Lelégard, L.; Delaygue, E.; Brédif, M.; Vallet, B. Detecting and Correcting Motion Blur from Images Shot with Channel-Dependent Exposure Time. In Proceedings of the ISPRS Congress 2012, Melbourne, Australia, 25 August–1 September 2012; pp. 341–346. [Google Scholar]
  24. Tai, Y.; Du, H.; Brown, M.; Lin, L. Correction of Spatially Varying Image and Video Motion Blur Using a Hybrid Camera. IEEE Trans. Image Process. 2010, 32, 1012–1028. [Google Scholar] [CrossRef] [Green Version]
  25. Ioffe, S.; Szegedy, S. Good Image Priors for Non-blind Deconvolution. In Proceedings of the ECCV 2014, Zurich, Switzerland, 6–12 September 2014; pp. 231–246. [Google Scholar]
  26. Levin, A.; Weiss, Y.; Durand, F.; Freeman, W. Efficient marginal likelihood optimization in blind deconvolution. In Proceedings of the CVPR 2011, Colorado Springs, CO, USA, 20–25 June 2011; pp. 2657–2664. [Google Scholar]
  27. Cho, H.; Wang, J.; Lee, S. Text Image Deblurring Using Text-Specific Properties. In Proceedings of the ECCV 2012, Florence, Italy, 7–13 October 2012; pp. 524–537. [Google Scholar]
  28. Cho, H.; Wang, J.; Lee, S. Handling Outliers in Non-Blind Image Deconvolution. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 495–502. [Google Scholar]
  29. Zhao, J.; Feng, H.; Xu, Z.; Li, Q. An Improved Image Deconvolution Approach Using Local Constraint. Opt. Laser Technol. 2012, 44, 421–427. [Google Scholar] [CrossRef]
  30. Oliveira, J.; Figueiredo, M.; Bioucas-Dias, J. Parametric Blur Estimation for Blind Restoration of Natural Images: Linear Motion and Out-of-Focus. IEEE Trans. Image Process. 2013, 23, 466–477. [Google Scholar] [CrossRef] [PubMed]
  31. Amizic, B.; Spinoulas, L.; Molina, R.; Katsaggelos, A. Compressive blind image deconvolution. IEEE Trans. Image Process. 2013, 22, 3994–4006. [Google Scholar] [CrossRef] [PubMed]
  32. Cho, H.; Lee, S. Fast Motion Deblurring. ACM Trans. Graph. 2009, 28, 1–8. [Google Scholar] [CrossRef]
  33. Xu, L.; Jia, J. Two-Phase Kernel Estimation for Robust Motion Deblurring. In Proceedings of the ECCV 2010, Heraklion, Greece, 5–11 September 2010; pp. 157–170. [Google Scholar]
  34. Shan, Q.; Jia, J.; Agarwala, A. High-quality Motion Deblurring from a Single Image. ACM Trans. Graph. 2008, 27, 1–10. [Google Scholar] [CrossRef]
  35. Krishnan, D.; Fergus, R. Fast Image Deconvolution using Hyper-Laplacian Priors. In Proceedings of the 23rd Annual Conference on Neural Information Processing Systems 2009, Vancouver, BC, Canada, 7–10 December 2009; pp. 1033–1041. [Google Scholar]
  36. Wang, J.; Lu, K.; Wang, Q.; Jia, J. Kernel Optimization for Blind Motion Deblurring with Image Edge Prior. Math. Probl. Eng. 2012, 2012, 243–253. [Google Scholar] [CrossRef]
  37. Dong, W.; Feng, H.; Xu, Z.; Li, Q. Blind image deconvolution using the Fields of Experts prior. Opt. Commun. 2012, 258, 5051–5061. [Google Scholar] [CrossRef]
  38. Michaeli, T.; Michal, I. Blind Deblurring Using Internal Patch Recurrence. In Proceedings of the ECCV 2014, Zurich, Switzerland, 6–12 September 2014; pp. 783–798. [Google Scholar]
  39. Pan, J.; Sun, D.; Pfister, H.; Yang, M. Blind Image Deblurring Using Dark Channel Prior. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1628–1636. [Google Scholar]
  40. Xu, L.; Zheng, S.; Jia, J. Unnatural L0 Sparse Representation for Natural Image Deblurring. In Proceedings of the CVPR2013, Washington, DC, USA, 23–28 June 2013; pp. 1107–1114. [Google Scholar]
  41. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Proceedings of the International Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; Volume 60, pp. 1097–1105. [Google Scholar] [CrossRef]
  42. Kim, J.; Lee, J.; Lee, K. Accurate Image Super-Resolution Using Very Deep Convolutional Networks. In Proceedings of the CVPR2016, Las Vegas, NV, USA, 27–30 June 2016; pp. 1646–1654. [Google Scholar]
  43. Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2242–2251. [Google Scholar]
  44. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. In Proceedings of the CVPR2014, Washington, DC, USA, 23–28 June 2014; pp. 580–587. [Google Scholar]
  45. Mao, X.J.; Shen, C.; Yang, Y.B. Image Restoration Using Very Deep Convolutional Encoder-Decoder Networks with Symmetric Skip Connections. In Proceedings of the 29th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain, 5–10 December 2016. [Google Scholar]
  46. Zhang, K.; Zuo, W.; Chen, Y.; Meng, D.; Zhang, L. Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising. IEEE Trans. Image Process. 2017, 26, 3142–3155. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  47. Dong, C.; Deng, Y.; Chen, C.; Tang, X. Compression Artifacts Reduction by a Deep Convolutional Network. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV2015), Santiago, Chile, 7–13 December 2015; pp. 576–584. [Google Scholar]
  48. Fiandrotti, A.; Fosson, S.; Ravazzi, C.; MagliAmizic, E. GPU-accelerated algorithms for compressed signals recovery with application to astronomical imagery deblurring. Int. J. Remote Sens. 2017, 39, 3994–4006. [Google Scholar] [CrossRef]
  49. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D. Generative Adversarial Networks. Adv. Neural Inf. Process. Syst. 2014, 3, 2672–2680. [Google Scholar]
  50. Radford, A.; Metz, L.; Chintala, S. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. arXiv, 2015; arXiv:1511.06434. Available online: https://arxiv.org/abs/1511.06434 (accessed on 22 June 2018).
  51. Huang, X.; Li, Y.; Poursaeed, O.; Hopcroft, J.; Belongie, S. Stacked Generative Adversarial Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR2017), Honolulu, HI, USA, 21–26 July 2017; pp. 1866–1875. [Google Scholar]
  52. Wu, H.; Zheng, Z.; Zhang, Z.; Huang, K. GP-GAN: Towards Realistic High-Resolution Image Blending. arXiv, 2017; arXiv:1703.07195v1. Available online: https://arxiv.org/abs/1703.07195 (accessed on 22 June 2018).
  53. Gorijala, M.; Dukkipati, A. Image Generation and Editing with Variational Info Generative Adversarial Networks. arXiv, 2017; arXiv:1701.04568v1. Available online: https://arxiv.org/abs/1701.04568 (accessed on 22 June 2018).
  54. Mogren, O. C-RNN-GAN: Continuous recurrent neural networks with adversarial training. In Proceedings of the NIPS 2016, Barcelona, Spain, 5–10 December 2016. [Google Scholar]
  55. Ioffe, S.; Szegedy, S. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In Proceedings of the 32nd International Conference on Machine Learning (ICML), Lille, France, 2015; pp. 448–456. [Google Scholar]
  56. He, K.; Zhang, X.; Ren, R.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  57. He, K.; Zhang, X.; Ren, R.; Sun, J. Identity Mappings in Deep Residual Networks. In Proceedings of the European Conference on Computer Vision (ECCV2016), Amsterdam, The Netherlands, 22–29 October 2016; pp. 630–645. [Google Scholar]
  58. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv, 2014; arXiv:1409.1556. Available online: https://arxiv.org/abs/1409.1556 (accessed on 22 June 2018).
  59. Pan, J.; Hu, Z.; Su, Z.; Yang, M. Deblurring Text Images via L0-Regularized Intensity and Gradient Prior. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2901–2908. [Google Scholar]
  60. Xu, L.; Lu, C.; Xu, Y.; Jia, J. Image smoothing via L0 gradient minimization. ACM Trans. Graph. 2011, 30, 1–12. [Google Scholar] [CrossRef]
  61. Köhler, R.; Hirsch, M.; Mohler, B.; Schölkopf, B.; Harmeling, S. Recording and playback of camera shake: Benchmarking blind deconvolution with a real-world database. In Proceedings of the 2012 European conference on Computer Vision, Florence, Italy, 7–13 October 2012; pp. 27–40. [Google Scholar]
  62. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
  63. De, K.; Masilamani, V. Image deblurring by motion estimation for remote sensing. Procedia Eng. 2013, 64, 149–158. [Google Scholar] [CrossRef]
  64. Lowe, D. Distinctive image features from scale-invariant key points. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
Figure 1. An overview of the proposed discriminative classifier. The networks include generative model G and discriminative model D.
Figure 2. Test results from the four deblurring methods for UAV images (the proportional scale of the figure is 49%), including several ground objects. In each group of images, the 1st images (a1–d1) are the ground truth; the 2nd images (a2–d2) are the synthetically blurred images; and the 3rd images (a3–d3) present the deblurring results from method [17]. The 4th images (a4–d4) present the deblurring results from method [59]; the 5th images (a5–d5) present the deblurring results from method [38]; and the 6th images (a6–d6) present the deblurring results from the proposed method.
Figure 3. The estimated blur kernels of the deblurred UAV images shown in Figure 2. The images (a1–a4) are the estimated blur kernels of (a3–a6) in Figure 2, the images (b1–b4) are the estimated blur kernels of (b3–b6) in Figure 2, the images (c1–c4) are the estimated blur kernels of (c3–c6) in Figure 2, and the images (d1–d4) are the estimated blur kernels of (d3–d6) in Figure 2.
Figure 4. Testing results of several deblurring methods for real blurred UAV images (the proportional scale of figure is 49%). In each group of images, the 1st images are the original blurred UAV images; the 2nd images present the deblurring results from method [17]; and the 3rd images present the deblurring results from method [59]. The 4th images present the deblurring results from method [38]; the 5th images present the deblurring results from the proposed method.
Figure 5. The estimated blur kernels for the deblurred UAV images in Figure 4. The images in column 1 are the estimated blur kernels of the UAV images deblurred by method [17], the images in column 2 are those of method [59], the images in column 3 are those of method [38], and the images in column 4 are those of our method.
Figure 6. In each matching image (the proportional scale of the figure is 30%), the left image is the sharp UAV image and the right image is the deblurred UAV image. The 1st images (a1–e1) present the SIFT matching results between the deblurred UAV images from method [17] and the sharp UAV images. The 2nd images (a2–e2) present the SIFT matching results between the deblurred UAV images from method [59] and the sharp UAV images. The 3rd images (a3–e3) present the SIFT matching results between the deblurred UAV images from method [38] and the sharp UAV images; and the 4th images (a4–e4) present the SIFT matching results between the deblurred UAV images from the proposed method and the sharp UAV images.
Table 1. Quantitative measurement results using SSIM on synthetic blurred UAV testing images.
Images                Method [17]    Method [59]    Method [38]    Ours
a                     0.8776         0.8752         0.8548         0.8824
b                     0.8749         0.8709         0.8492         0.8782
c                     0.8185         0.8157         0.7832         0.8232
d                     0.8298         0.8285         0.7853         0.8342
Average results 1     0.8518         0.8496         0.8181         0.8577
1 The average results of 50 synthetic blurred UAV test images.
Table 2. Quantitative measurement results using FM on synthetic blurred UAV testing images.
Images    Method [17]    Method [59]    Method [38]    Ours
a         0.0581         0.0522         0.0497         0.0624
b         0.0479         0.0385         0.0413         0.0545
c         0.0665         0.0541         0.0592         0.0712
d         0.0526         0.0495         0.0436         0.0597
Table 3. Quantitative measurement results using FM on real blurred UAV testing images.
Images    Method [17]    Method [59]    Method [38]    Ours
a         0.0322         0.0276         0.0241         0.0392
b         0.0257         0.0188         0.0239         0.0336
c         0.0349         0.0283         0.0217         0.0424
d         0.0276         0.0214         0.0158         0.0297
e         0.0495         0.0473         0.0394         0.0561
Table 4. The number of correct matching pairs, obtained by SIFT, between the deblurred images and the corresponding sharp UAV images.
Images    Method [17]    Method [59]    Method [38]    Ours
a         99             91             84             115
b         44             45             40             57
c         72             67             64             81
d         13             11             7              16
e         88             81             77             93
