**5. Conclusions**

In this paper, we have proposed a single-image de-raining model based on conditional generative adversarial networks and the Pix2Pix framework. The model consists of two neural networks: a generator network that maps rainy images to de-rained images, and a discriminator network that distinguishes real de-rained images from generated ones. Several performance metrics were used to evaluate the new model on both synthesized and real-world image data. The evaluations showed that the proposed CGANet model outperformed state-of-the-art image de-raining methods, establishing it as a promising approach for single-image de-raining.
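To make the adversarial setup concrete, the following is a minimal NumPy sketch of a Pix2Pix-style objective: the generator is trained with an adversarial term plus a λ-weighted L1 fidelity term, while the discriminator is trained to separate real pairs from generated ones. This is an illustrative sketch under assumed defaults (e.g. λ = 100, the value used in the original Pix2Pix work), not the exact loss implementation of the proposed CGANet model.

```python
import numpy as np

def bce(pred, target):
    """Element-wise binary cross-entropy, averaged over all entries."""
    eps = 1e-12  # avoid log(0)
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(np.mean(-(target * np.log(pred)
                           + (1.0 - target) * np.log(1.0 - pred))))

def generator_loss(d_fake, fake_img, target_img, lam=100.0):
    """Adversarial term (G wants D to output 1 on its fakes)
    plus lam-weighted L1 distance to the ground-truth de-rained image."""
    adv = bce(d_fake, np.ones_like(d_fake))
    l1 = float(np.mean(np.abs(fake_img - target_img)))
    return adv + lam * l1

def discriminator_loss(d_real, d_fake):
    """D should output 1 on real de-rained images and 0 on generated ones."""
    return 0.5 * (bce(d_real, np.ones_like(d_real))
                  + bce(d_fake, np.zeros_like(d_fake)))
```

In training, the two losses are minimized alternately: one step updates the discriminator's weights with `discriminator_loss`, the next updates the generator's with `generator_loss`, so the two networks improve against each other.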

This paper focuses on image de-raining; however, the proposed model generalizes readily to other image-to-image translation problems in different domains. In future work, the loss function could be further optimized by incorporating more comprehensive components that capture both local and global perceptual information.

**Author Contributions:** Conceptualization, P.H., D.A., and R.N.; methodology, P.H., and R.N.; investigation, P.H.; data curation, P.H.; writing—original draft preparation, P.H.; writing—review and editing, R.N., D.A., D.D.S., and N.C.; supervision, D.A., D.D.S., and N.C. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Data and source code for the experiments are publicly available at https://github.com/prasadmaduranga/CGANet (accessed on 11 December 2020).

**Acknowledgments:** This work was supported by a La Trobe University Postgraduate Research Scholarship.

**Conflicts of Interest:** The authors declare no conflict of interest.
