Article

Single Image Rain Removal Based on Deep Learning and Symmetry Transform

1 School of Electronic and Information Engineering, Hebei University of Technology, Tianjin 300401, China
2 Department of Electronic and Optical Engineering, Army Engineering University Shijiazhuang Campus, Shijiazhuang 050003, China
3 College of Electronic Information Engineering, Hebei University, Baoding 071000, China
* Author to whom correspondence should be addressed.
Symmetry 2020, 12(2), 224; https://doi.org/10.3390/sym12020224
Submission received: 13 November 2019 / Revised: 17 December 2019 / Accepted: 25 December 2019 / Published: 3 February 2020

Abstract

Rain, as an inevitable weather condition, degrades acquired images. To address this problem, a single-image rain removal algorithm based on deep learning and symmetric transformation is proposed. Because of important characteristics of the wavelet transform, such as symmetry, orthogonality, flexibility, and compact support, it is used to remove rain from a single image. The image is denoised by wavelet decomposition, thresholding, and wavelet reconstruction, and the denoised raindrop image is transformed from RGB space to YUV (luma-chroma) space to obtain its brightness and color components. A deep network then learns the residual between the brightness component of the raindrop source image and that of the ideal rain-free image. The learned residual and the brightness component are superposed, the reconstructed image is restored to RGB space by the inverse YUV transform, and the final rain-free color image is obtained. After training, the optimal network parameters are found, yielding a convolutional neural network that effectively removes rain streaks. Experimental results show that, compared with other algorithms, the proposed algorithm achieves the highest peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), indicating better image quality after rain removal.

1. Introduction

Rain is a common weather phenomenon. Raindrops may disrupt outdoor computer vision systems, blur the acquired image, destroy its original details and features, and degrade its visual quality. Research on image rain removal therefore has significant value and wide application: in fields such as image enhancement, target recognition, and target tracking, rain removal helps improve the overall accuracy of downstream algorithms. In recent years, image rain removal has been widely studied, and the quality of the results continues to improve.
A large number of rain removal algorithms have been proposed. A typical single-image approach treats raindrops as a special kind of high-frequency noise and then filters them out using image decomposition or raindrop recognition. Lin et al. proposed a single-image rain removal algorithm based on morphological component analysis [1]. It exploited the strength of sparse representation in image denoising, built an overcomplete dictionary of the image's high-frequency components through dictionary learning, and then reconstructed a rain-free image through sparse coding. However, information was lost when sparse representation restored the high-frequency part of the image, which tended to blur the local texture of the repaired image. Zhong et al. proposed a discriminative sparse coding algorithm with classification ability [2,3] and used it to improve the accuracy of raindrop detection. Although these algorithms improved the results to some extent, their rain removal was still limited because they used only shallow image features: when the image contained objects similar in structure and direction to raindrops, it was difficult to eliminate the raindrops while preserving the original structure.
Deep learning, in the form of deep convolutional neural networks, uses deep network structures that can express complex high-level image features and has achieved excellent results in image recognition, image rain removal, image restoration, and other fields [4,5]. However, deep convolutional networks often contain many convolution layers, and their efficiency is generally low. To remove raindrops from a single image quickly and effectively while retaining complete image information, this paper introduces the wavelet transform into a deep convolutional network and proposes a new single-image raindrop removal algorithm based on deep learning and the symmetric wavelet transform. By learning the residual between symmetric image databases with and without rain, the algorithm largely avoids the network degradation caused by increasing the depth of a convolutional neural network. The wavelet transform has the important properties of symmetry, orthogonality, flexibility, and compact support [6]. This paper uses the symmetry of the wavelet transform to denoise the image, combining time-domain and frequency-domain characteristics to represent the image as a series of wavelet coefficients that can be efficiently compressed and stored. Compared with a traditional deep convolutional neural network, the deep-learning wavelet transform reduces the number of training parameters, converges faster, and restores images more effectively.

2. Materials and Algorithms

2.1. Image Denoising Algorithm Based on Wavelet Transform

2.1.1. Characteristics of Wavelet Transform

After two-dimensional wavelet decomposition of an image f(x, y), four sub-images LL_j, LH_j, HL_j and HH_j can be obtained, representing the low-frequency, horizontal high-frequency, vertical high-frequency and diagonal high-frequency sub-images on scale 2^j respectively. LL_j can in turn be decomposed into one low-frequency component and three high-frequency components, so after N decomposition levels, one low-frequency component and 3N high-frequency components are obtained [7,8]. The key characteristics of the wavelet transform are: after decomposition, the low-frequency region reflects the basic contour of the image, while noise and image details are mainly distributed in the high-frequency region, so denoising can be carried out on the high-frequency wavelet coefficients. Among the high-frequency coefficients, noise is concentrated in coefficients with small amplitude and image details in coefficients with large amplitude: coefficients with small absolute value contain mostly noise, coefficients with large absolute value contain little noise, and the distribution of the superposed high-frequency coefficients is close to Gaussian.
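The one-level decomposition into the LL, LH, HL and HH subbands can be sketched with the Haar wavelet, a simple symmetric and orthogonal wavelet. The paper does not state which wavelet function it uses, so Haar is an illustrative choice; the sketch also assumes an image with even dimensions.

```python
import numpy as np

def haar_decompose_2d(img):
    """One-level 2D Haar wavelet decomposition.

    Returns the four subbands: LL (approximation), LH (horizontal detail),
    HL (vertical detail), and HH (diagonal detail), each half the input size.
    """
    img = img.astype(float)
    # 1D Haar along rows: pairwise average (low-pass) and difference (high-pass).
    lo_r = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi_r = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # The same 1D Haar step along columns of each row result.
    ll = (lo_r[0::2, :] + lo_r[1::2, :]) / 2.0
    lh = (lo_r[0::2, :] - lo_r[1::2, :]) / 2.0
    hl = (hi_r[0::2, :] + hi_r[1::2, :]) / 2.0
    hh = (hi_r[0::2, :] - hi_r[1::2, :]) / 2.0
    return ll, lh, hl, hh

img = np.arange(16.0).reshape(4, 4)
ll, lh, hl, hh = haar_decompose_2d(img)
print(ll.shape)  # (2, 2): each subband is half the input size in each dimension
```

Applying the same decomposition again to LL yields the next level, so N levels give one low-frequency subband and 3N high-frequency subbands, as the text describes.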

2.1.2. Denoising Process

The noisy image is modeled as Formula (1):
S(i, j) = f(i, j) + n(i, j),  i, j = 1, 2, …, N
where f(i, j) is the original signal and n(i, j) is white Gaussian noise with zero mean. Applying a multi-layer wavelet transform to Formula (1) gives Formula (2):
W_S(i, j) = W_f(i, j) + W_n(i, j),  so  W_f(i, j) = W_S(i, j) − W_n(i, j)
According to Formula (2), once the wavelet coefficients W_f(i, j) of the original image are obtained, the original image can be recovered by the inverse wavelet transform [9,10,11,12]. The main idea of wavelet-based denoising (obtaining W_f(i, j)) is: a threshold T is set for the high-frequency wavelet coefficients; coefficients larger than T are preserved as important coefficients, and coefficients smaller than T are set to zero. The main threshold selection algorithms are: (1) the hard threshold T = σ√(2 ln N), where σ is the noise variance and N is the image size; (2) the soft threshold T = c_i σ √(2 log((j + 1)/j)) · 2^((J − j)/2), (j = j_0, j_1, …, J), where i takes the values 1, 2 and 3, representing the HH, LH and HL subband images respectively, c_i is a constant, σ is the variance of the image noise, and J is the number of wavelet decomposition layers. With the above algorithm, most of the noise wavelet coefficients can be removed. The specific steps of wavelet denoising are: (1) wavelet decomposition: the image is decomposed by an N-level wavelet transform with a chosen wavelet function; (2) thresholding: a threshold function is selected and applied to the high-frequency wavelet coefficients of each layer; the choice of threshold function is the most critical problem; (3) wavelet reconstruction: the processed wavelet coefficients are transformed by the inverse wavelet transform to restore the image.
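The three denoising steps (decomposition, thresholding, reconstruction) can be sketched end to end. This minimal version uses a one-level Haar transform and the hard threshold T = σ√(2 ln N); the paper's multi-level transform with subband-dependent soft thresholds would follow the same pattern, and the test image here is purely illustrative.

```python
import numpy as np

def haar_denoise(noisy, sigma):
    """One-level Haar wavelet denoising with a hard threshold.

    Mirrors the three steps in the text: (1) wavelet decomposition,
    (2) thresholding of the high-frequency subbands, (3) reconstruction.
    """
    x = noisy.astype(float)
    # (1) Decompose: low/high-pass along rows, then along columns.
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0
    ll = (lo[0::2] + lo[1::2]) / 2.0
    lh = (lo[0::2] - lo[1::2]) / 2.0
    hl = (hi[0::2] + hi[1::2]) / 2.0
    hh = (hi[0::2] - hi[1::2]) / 2.0
    # (2) Hard threshold T = sigma * sqrt(2 ln N) on the detail subbands only;
    # the low-frequency LL subband (image contour) is left untouched.
    t = sigma * np.sqrt(2.0 * np.log(x.size))
    lh, hl, hh = (np.where(np.abs(b) > t, b, 0.0) for b in (lh, hl, hh))
    # (3) Reconstruct: invert the column step, then the row step.
    lo2 = np.empty_like(lo)
    hi2 = np.empty_like(hi)
    lo2[0::2], lo2[1::2] = ll + lh, ll - lh
    hi2[0::2], hi2[1::2] = hl + hh, hl - hh
    out = np.empty_like(x)
    out[:, 0::2], out[:, 1::2] = lo2 + hi2, lo2 - hi2
    return out

rng = np.random.default_rng(0)
clean = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64)) * 255
noisy = clean + rng.normal(0, 20, clean.shape)
denoised = haar_denoise(noisy, sigma=20)
print(np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2))  # True
```

Because the smooth test image has tiny detail coefficients while the noise spreads evenly across subbands, thresholding the detail subbands lowers the mean square error against the clean image.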
Through this algorithm, a denoised single image is obtained. Starting from the denoised image, the rain is then removed using deep learning.

2.2. Raindrop Removal Model Based on Deep Learning

The core of single-image raindrop removal based on deep learning is to design an effective end-to-end mapping model between the raindrop image and the rain-free image, so as to repair the raindrop regions [13]. VDSR (very deep networks for super-resolution) is a feedforward deep convolutional neural network. While inheriting the advantages of deep convolutional networks, it largely avoids their tendency to overfit as the number of layers increases, efficiently explores and captures various image features, and shows excellent performance in image restoration tasks [14,15]. Applying a very deep network to single-image raindrop removal can therefore effectively improve repair performance. The very deep network is composed of several convolution layers and is used to learn the residual image between the raindrop image and the ideal rain-free image, which makes it convenient to repair the raindrop image. The structure of the network used in this paper is shown in Figure 1.
The algorithm consists of four parts: YUV (luma-chroma) spatial transformation, deep feature learning, image reconstruction, and inverse YUV transformation [16]. The YUV spatial transformation converts the denoised raindrop image I from RGB space to YUV space to obtain its brightness component Y_I and color components {U_I, V_I}. Deep feature learning takes the brightness component Y_I of the denoised raindrop source image and the brightness component Y_J of the ideal rain-free image J, and learns the residual F(Y_I) = Y_J − Y_I, where F(Y_I) represents the network mapping. In image reconstruction, the learned residual image F(Y_I) and the brightness component Y_I are superposed to obtain the brightness component Ŷ_J of the reconstructed image. Finally, the inverse YUV transform restores the reconstructed image to RGB space, yielding the final rain-free color image J.

2.2.1. YUV Spatial Transformation

Figure 2 shows the brightness and color components, in YUV space, of a rain-free image and a raindrop image of the same scene: Figure 2a is the rain-free image, Figure 2b is its brightness component, and Figure 2c,d are its color components; Figure 2e is the raindrop image, Figure 2f is its brightness component, and Figure 2g,h are its color components.
Comparing Figure 2b with Figure 2f, Figure 2c with Figure 2g, and Figure 2d with Figure 2h, it is easy to see that the raindrops in Figure 2e mainly appear in the high-resolution brightness component (Figure 2f) and have little effect on the low-resolution color components. Therefore, this algorithm removes rain only from the brightness component of the source image. This reduces both the amount of data to be processed and the network parameters to be learned, and, by leaving the color components untouched, effectively retains more of the source image's color information.
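The forward and inverse YUV transforms can be sketched as follows. The BT.601 conversion matrix used here is an assumption, since the paper only specifies "YUV (luma chroma)"; any invertible luma-chroma transform would play the same role.

```python
import numpy as np

# BT.601 RGB -> YUV conversion matrix (an assumption; the paper only says "YUV").
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.147, -0.289,  0.436],
                    [ 0.615, -0.515, -0.100]])

def rgb_to_yuv(img):
    """Split an H x W x 3 RGB image into luminance Y and chrominance U, V."""
    yuv = img.astype(float) @ RGB2YUV.T
    return yuv[..., 0], yuv[..., 1], yuv[..., 2]

def yuv_to_rgb(y, u, v):
    """Inverse transform: recombine Y, U, V back into an RGB image."""
    yuv = np.stack([y, u, v], axis=-1)
    return yuv @ np.linalg.inv(RGB2YUV).T

rgb = np.random.default_rng(1).uniform(0, 255, (4, 4, 3))
y, u, v = rgb_to_yuv(rgb)
restored = yuv_to_rgb(y, u, v)     # only Y would be de-rained; U, V pass through
print(np.allclose(restored, rgb))  # True: the transform pair is lossless
```

In the full pipeline, only the Y channel would be passed through the de-raining network before the inverse transform, which is why the color information of the source image is preserved.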

2.2.2. Deep Feature Learning

The very deep network for image restoration performs multi-layer convolution and nonlinear mapping on the brightness component Y_I of the raindrop image I and learns its residual with respect to the brightness component Y_J of the ideal rain-free image J, that is, F(Y_I) = Y_J − Y_I. Replacing the traditional deep convolutional network's learning of Y_J with the learning of F(Y_I) not only avoids overfitting as the network depth increases, but also makes the whole network easier to optimize and greatly improves efficiency.
The convolution layer, with its local connectivity and weight sharing, is the core of the image-restoration raindrop removal model. Taking a single image I as an example, the brightness component Y_I of the network input is first passed through convolution layer Conv.1 to obtain the feature map F_1(Y_I):
F_1(Y_I) = max(0, W_1 * Y_I + B_1)
where F_1 is the output of the first convolution layer, W_1 is the weight of the layer-1 network, B_1 is the bias vector of the corresponding layer-1 neurons, and "*" denotes convolution. The size of W_1 is c × f_1 × f_1 × n_1, where c is the number of channels in the input image (only the Y channel is processed, so c = 1), f_1 is the size of a single filter, and n_1 is the number of filters.
To extract deep image features, the network depth is increased by nonlinear mapping between convolution layers: the n_1 feature maps output by convolution layer Conv.1 are used as the input of the next convolution layer and convolved with one or more kernels to generate one or more outputs:
F_l(Y_I) = max(0, W_l * F_{l−1} + B_l)
where F_l is the output of the l-th (l = 2, 3, …, D − 1) convolution layer, F_{l−1} is the output of the previous layer (the output of one layer is the input of the next), W_l is the weight of the l-th convolution layer, and B_l is its bias, whose dimension matches the number of convolution kernels in that layer.
To reconstruct the residual image F(Y_I) between the raindrop source image Y_I and the rain-free image Y_J, the residual feature blocks extracted by the convolution layers must be aggregated; that is, the feature maps are convolved and filtered in the traditional mean-value fashion:
F(Y_I) = W_D * F_{D−1}(Y_I) + B_D
The residual image F(Y_I) and the brightness component Y_I are superposed to obtain the brightness component Ŷ_J of the reconstructed image:
Ŷ_J = F(Y_I) + Y_I
Through these convolution operations, the weights and biases are optimized, which both simplifies the network and makes up for the lack of a feedforward process in sparse coding methods. The forward parameter transfer of the nonlinear mapping helps select deep image features. Moreover, because the input raindrop image and the output rain-free image share most of their information, most values of the residual F(Y_I) are close to 0, which speeds up the convergence of the whole network during training.
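As a rough sketch of Equations (3)–(6), the forward pass below stacks convolution layers with ReLU on all but the last, then adds the skip connection Ŷ_J = F(Y_I) + Y_I. It is simplified to one channel per layer with 3×3 filters; the paper's actual network uses n_l filters per layer and would be trained in a deep-learning framework.

```python
import numpy as np

def conv2d(x, w, b):
    """'Same'-size 2D convolution (cross-correlation, as in deep-learning
    frameworks) of a single-channel image with one small filter."""
    k = w.shape[0] // 2
    xp = np.pad(x, k)
    out = np.zeros_like(x)
    for i in range(w.shape[0]):
        for j in range(w.shape[1]):
            out += w[i, j] * xp[i:i + x.shape[0], j:j + x.shape[1]]
    return out + b

def residual_forward(y_in, weights, biases):
    """Eqs. (3)-(6): conv+ReLU layers, a linear last conv, then the skip
    connection Y_hat = F(Y_I) + Y_I."""
    f = y_in
    for w, b in zip(weights[:-1], biases[:-1]):
        f = np.maximum(0.0, conv2d(f, w, b))       # Eqs. (3)/(4): max(0, W*F + B)
    residual = conv2d(f, weights[-1], biases[-1])  # Eq. (5): no ReLU on last layer
    return residual + y_in                         # Eq. (6): Y_hat = F(Y_I) + Y_I

rng = np.random.default_rng(0)
y_i = rng.uniform(0, 1, (16, 16))                      # luminance of a rain image
ws = [rng.normal(0, 0.001, (3, 3)) for _ in range(3)]  # Gaussian init, std 0.001
bs = [0.0, 0.0, 0.0]
y_hat = residual_forward(y_i, ws, bs)
print(y_hat.shape)  # (16, 16): the output has the same size as the input
```

With untrained near-zero weights the predicted residual is tiny, so the output is close to the input, which is exactly the property that makes residual learning easy to optimize.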

2.2.3. Network Training Algorithm

To obtain the mapping from a rain image to a rain-removed image, the deep convolutional neural network must be trained to find the optimal value Q* of the network parameters Q = {W_i1, W_i2, W_2, W_3, B_i1, B_i2, B_2, B_3}, where Q denotes the network parameters, M the original rain image, and S the rain-free image. Because it is difficult to find large numbers of real rain/rain-free image pairs, Photoshop is used to make simulated rain images for the training set. From the data set, 300 rain-free original images are selected, rain streaks of different directions and sizes are added to each image to make 20 variants, and a data set containing 3500 rain images in total is formed. Of these, 3200 rain images and their corresponding rain-free originals are used as the network training set, and the remaining 300 images are used for testing on simulated rain.
For the training objective, the mean square error between the rain-removed image Ŝ_i and the original image S_i is taken as the loss function, and stochastic gradient descent is used to minimize it and obtain the optimal parameters. The mean square error is given by Equation (7):
L(Q) = (1/n) Σ_{v=1}^{n} ‖h_P(M_v) − S_v‖²
where M_v is a rain image, S_v is the corresponding rain-free image, n is the number of training samples, and h_P denotes the network's restoration function, optimized by gradient descent.
The standard back-propagation method is used to train the network to minimize the target loss function. The updating of the network weight parameters follows Equation (8):
Γ_{v+1} = 0.9 Γ_v − η ∂L(Q)/∂Q,  Q_{v+1} = Q_v + Γ_{v+1}
where v is the iteration number, η is the learning rate, Γ is the momentum term, and ∂L(Q)/∂Q is the gradient of the loss with respect to the network parameters. The weight parameters of each convolution layer are randomly initialized from a Gaussian distribution with mean 0 and standard deviation 0.001. All networks are trained in a deep convolutional neural network framework.
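The momentum update used to train the network (momentum factor 0.9, learning rate η) can be sketched on a toy problem. The one-parameter least-squares objective below, with its closed-form gradient, is purely illustrative; only the update rule itself follows the text.

```python
import numpy as np

def sgd_momentum_step(q, gamma, grad, lr=0.001, momentum=0.9):
    """One update: Gamma_{v+1} = 0.9 * Gamma_v - lr * dL/dQ,
    then Q_{v+1} = Q_v + Gamma_{v+1}."""
    gamma = momentum * gamma - lr * grad
    return q + gamma, gamma

# Toy problem: fit q to minimize L(q) = mean((m*q - s)^2); gradient is closed-form.
rng = np.random.default_rng(0)
m = rng.normal(size=100)
s = 3.0 * m                       # ground truth: q* = 3
q, gamma = 0.0, 0.0
for _ in range(500):
    grad = np.mean(2 * m * (m * q - s))
    q, gamma = sgd_momentum_step(q, gamma, grad, lr=0.01)
print(round(q, 3))  # converges close to 3.0
```

The momentum term accumulates past gradients, which damps oscillation and accelerates convergence along consistent descent directions; in the real network the same update is applied to every weight and bias tensor.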

3. Results

3.1. Denoising Experiment of Wavelet Transform

The proposed algorithm is used to process a noisy image. Figure 3 shows a rain image corrupted by a mixture of salt-and-pepper noise with density 0.1 and white Gaussian noise with standard deviation 20; Figure 4 shows the denoised image produced by the proposed algorithm.
As Figure 4 shows, the proposed algorithm removes the mixed noise well, demonstrating that the denoising step preserves fine detail and adapts to different noise types.

3.2. Simulated Rain Image

The 300 simulated rain images are used as the experimental test set (data source: National Centers for Environmental Information, https://www.ncdc.noaa.gov/data-access), and the algorithms from reference [16] are compared with the proposed algorithm. Some results are shown in Figure 5. The two dictionary-learning sparse-coding algorithms, ID and DSC, blur the original image details and remove the rain streaks incompletely. The LP algorithm improves on both points, but there is still a gap compared with the proposed algorithm.
Figure 5a shows the unprocessed original picture, and Figure 5b–e show the images processed by the ID, DSC, LP, and proposed algorithms respectively. As Figure 5 shows, compared with the algorithms of reference [16], the proposed algorithm effectively removes the rain streaks while retaining the details of the original image and avoiding the loss of feature information. This is because the reference algorithms use only low-level feature information of the original image: when the image contains texture similar in direction to the rain streaks, they mistakenly process non-rain objects, blurring image details. The convolutional neural network, by contrast, uses high-level feature information to distinguish rain streaks from non-rain objects accurately, removing the streaks while maintaining the original details.
To further demonstrate the effectiveness of the proposed algorithm, the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) between the 300 simulated rain images and the original rain-free images are calculated. PSNR is the most widely used objective image-quality metric, while SSIM better reflects the subjective perception of the human eye; together they give a more complete evaluation of the simulated-rain experiments.
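PSNR, as reported in the tables below, can be computed directly from the mean square error; this sketch assumes 8-bit images, i.e. a peak value of 255.

```python
import numpy as np

def psnr(reference, result, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a result:
    PSNR = 10 * log10(peak^2 / MSE)."""
    mse = np.mean((reference.astype(float) - result.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((8, 8), 100.0)
out = ref + 10.0                 # constant error of 10 gray levels -> MSE = 100
print(round(psnr(ref, out), 2))  # 28.13 dB, since 10*log10(255^2/100) ~ 28.13
```

Higher PSNR means the de-rained result is closer to the rain-free reference; SSIM would complement this with a structure-aware comparison.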
Table 1 and Table 2 show, respectively, the results for the images in Figure 5 and the means over the 300 simulated rain images. Both tables show that, compared with the other three algorithms, the proposed algorithm achieves the highest PSNR and SSIM, again confirming its effectiveness.

3.3. Experiment on Real Rain Image

The training and test sets used to train the convolutional neural network consist of simulated rain images. To show that the algorithm works equally well on real rain images, it is applied to real rain images denoised by the wavelet transform of Section 2.1, and compared with the ID, DSC and LP algorithms. The results are shown in Figure 6: the proposed algorithm has a better visual effect and greater advantages in removing rain streaks while preserving the original image information.
Figure 6a shows the unprocessed original picture, and Figure 6b–e show the images processed by the ID, DSC, LP, and proposed algorithms respectively. As Figure 6 shows, the ID algorithm leaves some raindrops unremoved and produces blurry repaired texture; the DSC algorithm loses some high-frequency information and removes raindrops poorly; with the LP algorithm, raindrops remain clearly visible, the image color changes, and the overall image is distorted. Although the proposed convolutional neural network is trained on synthetic rain data, it still performs well on real rain images, effectively removing rain streaks even when they are prominent.
The blind image quality index (BIQI) is chosen to evaluate the algorithms objectively. BIQI evaluates a result image when the original reference image is unknown; the lower the index, the better the image quality.
Table 3 shows that the BIQI of the simulated rain images is 36.59, of the ID method 42.45, of the DSC method 34.68, and of the LP method 29.32, while the BIQI of the proposed algorithm is only 27.81. The proposed method thus has the lowest index value and the best rain removal performance.

4. Discussion

In this paper, building on wavelet-transform image denoising, a deep convolutional neural network algorithm is proposed that achieves a better image restoration effect.
(1)
The wavelet-transform denoising experiments show that the proposed method removes mixed noise well, confirming that the denoising algorithm preserves fine detail and adapts to different noise types.
(2)
The simulated rain image experiments show that the proposed method effectively removes rain streaks while retaining the original image details and avoiding the loss of feature information. This is because the algorithm builds on a traditional deep convolutional neural network and incorporates the wavelet transform. When the image contains textures close to the rain-streak direction, algorithms relying on low-level features incorrectly process non-rain objects and blur image details; compared with such algorithms, the proposed algorithm has a better visual effect and greater advantages in removing rain streaks while retaining original image information.
(3)
The real rain image experiments show that, although the proposed convolutional neural network model is trained on synthetic rain data, it still achieves a good rain removal effect on real rain images, effectively removing rain streaks even when they are prominent.
Based on these results, the images produced by the proposed algorithm are clearer and smoother after rain removal, the raindrops are almost unrecognizable, and the image colors are unchanged. Compared with the other algorithms, the results are less disturbed by noise, and the PSNR and SSIM reach the highest values. The reason is that the proposed algorithm builds on a traditional deep convolutional neural network and adds a wavelet transform to denoise the image before rain removal. The blind image quality index also shows that the proposed method has the best rain removal effect.

5. Conclusions

In this paper, a single-image rain removal algorithm based on deep learning and the wavelet transform is proposed. Building on wavelet-transform denoising, a deep convolutional neural network is trained that achieves a better restoration effect. Experiments show that the algorithm performs at the current state of the art and is especially suited to rain removal in high-noise environments: the original image is first denoised, and the rain is then removed, on both simulated and real rain images. Compared with current algorithms, the proposed algorithm has clear advantages at a given noise level, and the image quality after rain removal is good. However, the algorithm is specialized to rain-affected images, which limits the general applicability of the model and leaves room for improvement in future work.

Author Contributions

Q.Y., M.Y. and Y.X. designed the methods and concepts of this article. Q.Y. and Y.X. completed the experimental analysis section. Q.Y., M.Y., Y.X. and S.C. finished writing this article. All authors have read and agreed to the published version of the manuscript.

Funding

This work was granted by the Natural Science Foundation of Hebei Province, China (Grant No. F2019202381, F2019202464), and National Natural Science Foundation of China (Grant No. 61806071).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Lin, X.W.; Zeng, H.Q.; Hou, J.H.; Zhu, J.Q.; Cai, C.H. Multi-detail Convolutional Neural Networks for Single Image Rain Removal. J. Signal Process. 2019, 35, 460–465.
2. Zhong, F.; Yang, B. Novel Single Image Raindrop Removal Algorithm Based on Deep Learning. Comput. Sci. 2018, 45, 283–287.
3. Liu, W.; Zha, Z.J.; Wang, Y. p-Laplacian Regularized Sparse Coding for Human Activity Recognition. IEEE Trans. Ind. Electron. 2016, 63, 5120–5129.
4. Pu, C.C.R.; Hong, J.C. Research on remote sensing image detection based on deep convolution neural network and significant image. Autom. Instrum. 2018, 12, 50–53, 57.
5. Guo, J.C.; Guo, H.; Chunle, G. Single image rain removal based on multi-scale convolutional neural network. J. Harbin Inst. Technol. 2018, 50, 185–191.
6. Jiang, Q.; Cai, R.J.; Ye, W.J.; Liu, Y.J.; Li, H.T.; He, W.X.; Liu, F. Video Image Denoising Technology Based on U-GAN Neural Network. Mod. Comput. 2018, 635, 37–40.
7. Jing, F.; Zhou, L.W.; Lu, W.G. Design of Power Quality Monitoring Terminal Based on ADALINE Neural Network. J. Power Supply 2017, 15, 118–125.
8. Gao, W.; Wang, W. A tight neighborhood union condition on fractional (g, f, n', m)-critical deleted graphs. Colloq. Math. 2017, 149, 291–298.
9. Li, Y.; Chi, Y. Off-the-Grid Line Spectrum Denoising and Estimation with Multiple Measurement Vectors. IEEE Trans. Signal Process. 2016, 64, 1257–1269.
10. Jiang, T.X.; Huang, T.Z.; Zhao, X.L. FastDeRain: A Novel Video Rain Streak Removal Method Using Directional Gradient Priors. IEEE Trans. Image Process. 2018, 28, 1.
11. Hao, D.; Li, Q.; Li, C. Single-image-based rain streak removal using multidimensional variational mode decomposition and bilateral filter. J. Electron. Imaging 2017, 26, 013020.
12. Pnevmatikakis, E.A.; Soudry, D.; Gao, Y. Simultaneous Denoising, Deconvolution, and Demixing of Calcium Imaging Data. Neuron 2016, 89, 285–299.
13. Zhu, L.; Pan, Y.; Wang, J. Affine Transformation Based Ontology Sparse Vector Learning Algorithm. Appl. Math. Nonlinear Sci. 2017, 2, 111–122.
14. Zhu, Z.C.; Li, B.; Fang, S. A restoration algorithm for rain video image. J. Hefei Univ. Technol. Nat. Sci. 2011, 34, 1011–1014.
15. Fu, X.; Huang, J.; Ding, X. Clearing the Skies: A Deep Network Architecture for Single-Image Rain Removal. IEEE Trans. Image Process. 2017, 2944–2956.
16. Yang, F.; Zhang, Z.W.; Xu, K. A New Adaptive Embedded Manifold Denoising Algorithm for Video Motion Object Segmentation. J. Jilin Univ. Sci. Ed. 2017, 55, 169–176.
Figure 1. Schematic diagram of the raindrop removal algorithm based on VDSR (very deep networks for super-resolution).
Figure 2. Comparison of YUV (luma-chroma) components between rain-free images and raindrop images.
Figure 3. Rain image corrupted by mixed salt-and-pepper noise and Gaussian noise.
Figure 4. The rain image after denoising.
Figure 5. Experimental results on simulated rain images.
Figure 6. Experimental results on real rain images.
Table 1. PSNR (peak signal-to-noise ratio) of different algorithms.

Images                    300 Virtual Images   Airplane   Bridge
ID                        30.98                29.75      27.71
DSC                       32.66                36.72      35.07
LP                        36.06                34.88      36.91
Algorithm of this paper   37.15                39.25      39.83
Table 2. SSIM (structural similarity) of different algorithms.

Images                    300 Virtual Images   Airplane   Bridge
ID                        0.88                 0.81       0.89
DSC                       0.95                 0.98       0.98
LP                        0.95                 0.95       0.98
Algorithm of this paper   0.97                 0.99       0.99
Table 3. BIQI (blind image quality index) of different methods.

Method                    Street View
Simulated rain images     36.59
ID                        42.45
DSC                       34.68
LP                        29.32
Algorithm of this paper   27.81

Share and Cite

Yang, Q.; Yu, M.; Xu, Y.; Cen, S. Single Image Rain Removal Based on Deep Learning and Symmetry Transform. Symmetry 2020, 12, 224. https://doi.org/10.3390/sym12020224
