AI Radar Sensor: Creating Radar Depth Sounder Images Based on Generative Adversarial Network
Abstract
1. Introduction
- Generating synthetic radar images and their corresponding labels with a modified cycle-consistent adversarial network (CycleGAN); a minimal sketch of the modified cycle loss appears after this list.
- Testing and evaluating the generated synthetic imagery with both qualitative and quantitative similarity indices.
- Testing the generated images for data augmentation in training a contour detection algorithm.
- Collecting a novel dataset of radar imagery from the Arctic and Antarctic.
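The modification replaces the L1 (absolute-error) cycle-consistency term of the original CycleGAN [21] with an L2 (squared-error) term; the tables later in this section compare the two. Below is a minimal PyTorch sketch of the L2 variant, assuming two generators G_AB and G_BA; the function name and the weight lambda_cyc = 10 are illustrative placeholders, not the authors' code.

```python
import torch.nn as nn

mse = nn.MSELoss()  # squared-error penalty in place of CycleGAN's L1 cycle loss

def l2_cycle_loss(G_AB, G_BA, real_A, real_B, lambda_cyc=10.0):
    """Cycle-consistency term: map A -> B -> A and B -> A -> B, then
    penalize the squared reconstruction error in both directions."""
    rec_A = G_BA(G_AB(real_A))  # reconstruct the domain-A batch
    rec_B = G_AB(G_BA(real_B))  # reconstruct the domain-B batch
    return lambda_cyc * (mse(rec_A, real_A) + mse(rec_B, real_B))
```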
2. Related Work
3. Materials And Methods
3.1. Formulation
3.2. Network
4. Experimental Results
4.1. Dataset
4.2. Qualitative Results
4.3. Quantitative Results: Survey
4.4. Quantitative Results: Similarity Metrics
4.5. Cycle Loss Evaluation
4.6. Quantitative Results: Improving Edge Detection
5. Conclusions
Author Contributions
Acknowledgments
Conflicts of Interest
References
- Crandall, D.J.; Fox, G.C.; Paden, J.D. Layer-finding in radar echograms using probabilistic graphical models. In Proceedings of the 21st International Conference on Pattern Recognition (ICPR), Tsukuba, Japan, 11–15 November 2012; IEEE: Hoboken, NJ, USA, 2012; pp. 1530–1533. [Google Scholar]
- Lee, S.; Mitchell, J.; Crandall, D.J.; Fox, G.C. Estimating bedrock and surface layer boundaries and confidence intervals in ice sheet radar imagery using MCMC. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; IEEE: Hoboken, NJ, USA, 2014; pp. 111–115. [Google Scholar]
- Mitchell, J.E.; Crandall, D.J.; Fox, G.C.; Rahnemoonfar, M.; Paden, J.D. A semi-automatic approach for estimating bedrock and surface layers from multichannel coherent radar depth sounder imagery. In Proceedings of the SPIE Remote Sensing, Dresden, Germany, 23–26 September 2013; International Society for Optics and Photonics: Bellingham, WA, USA, 2013; p. 88921E. [Google Scholar]
- Rahnemoonfar, M.; Yari, M.; Fox, G.C. Automatic polar ice thickness estimation from SAR imagery. In Proceedings of the SPIE Defense + Security, Baltimore, MD, USA, 17–21 April 2016; International Society for Optics and Photonics: Bellingham, WA, USA, 2016; p. 982902. [Google Scholar]
- Rahnemoonfar, M.; Fox, G.C.; Yari, M.; Paden, J. Automatic Ice Surface and Bottom Boundaries Estimation in Radar Imagery Based on Level-Set Approach. IEEE Trans. Geosci. Remote Sens. 2017, 55, 5115–5122. [Google Scholar] [CrossRef]
- Rahnemoonfar, M.; Abbassi, A.; Paden, J.; Fox, G.C. Automatic ice thickness estimation in radar imagery based on charged particle concept. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Fort Worth, TX, USA, 23–28 July 2017. [Google Scholar]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25; Curran Associates, Inc.: Red Hook, NY, USA, 2012; pp. 1097–1105. [Google Scholar]
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
- Sheppard, C.; Rahnemoonfar, M. Real-time scene understanding for UAV imagery based on deep convolutional neural networks. In Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017; IEEE: Hoboken, NJ, USA, 2017; pp. 2243–2246. [Google Scholar]
- Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar]
- Hariharan, B.; Arbeláez, P.; Girshick, R.; Malik, J. Simultaneous Detection and Segmentation. In Computer Vision—ECCV 2014; Proceedings of the 13th European Conference, Zurich, Switzerland, 6–12 September 2014; Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T., Eds.; Springer International Publishing: Cham, Switzerland, 2014; Part VII; pp. 297–312. [Google Scholar]
- Rahnemoonfar, M.; Sheppard, C. Deep count: Fruit counting based on deep simulated learning. Sensors 2017, 17, 905. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Rahnemoonfar, M.; Sheppard, C. Real-time yield estimation based on deep learning. In Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping II; International Society for Optics and Photonics: Bellingham, WA, USA, 2017; Volume 10218, p. 1021809. [Google Scholar]
- Rahnemoonfar, M.; Dobbs, D.; Yari, M.; Starek, M.J. DisCountNet: Discriminating and counting network for real-time counting and localization of sparse objects in high-resolution UAV imagery. Remote Sens. 2019, 11, 1128. [Google Scholar] [CrossRef] [Green Version]
- Kamangir, H.; Rahnemoonfar, M.; Dobbs, D.; Paden, J.; Fox, G.C. Detecting ice layers in Radar images with deep hybrid networks. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Valencia, Spain, 22–27 July 2018. [Google Scholar]
- Farabet, C.; Couprie, C.; Najman, L.; LeCun, Y. Learning Hierarchical Features for Scene Labeling. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1915–1929. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Mostajabi, M.; Yadollahpour, P.; Shakhnarovich, G. Feedforward semantic segmentation with zoom-out features. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 3376–3385. [Google Scholar]
- Rahnemoonfar, M.; Murphy, R.; Miquel, M.V.; Dobbs, D.; Adams, A. Flooded area detection from UAV images based on densely connected recurrent neural networks. In Proceedings of the 2018 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Valencia, Spain, 22–27 July 2018; IEEE: Hoboken, NJ, USA, 2018; pp. 3743–3746. [Google Scholar]
- Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. In Advances in Neural Information Processing Systems; NIPS: San Diego, CA, USA, 2014; pp. 2672–2680. [Google Scholar]
- Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017. [Google Scholar] [CrossRef] [Green Version]
- Radford, A.; Metz, L.; Chintala, S. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. arXiv 2015, arXiv:1511.06434. [Google Scholar]
- Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-to-image translation with conditional adversarial networks. In Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR, Honolulu, HI, USA, 21–26 July 2017. [Google Scholar] [CrossRef] [Green Version]
- Ganguli, S.; Garzon, P.; Glaser, N. GeoGAN: A Conditional GAN with Reconstruction and Style Loss to Generate Standard Layer of Maps from Satellite Images. arXiv 2019, arXiv:1902.05611. [Google Scholar]
- Liu, Y.; Chen, W.; Liu, L.; Lew, M.S. SwapGAN: A Multistage Generative Approach for Person-to-Person Fashion Style Transfer. IEEE Trans. Multimed. 2019, 21, 2209–2222. [Google Scholar] [CrossRef] [Green Version]
- Wu, L.; Wang, Y.; Shao, L. Cycle-consistent deep generative hashing for cross-modal retrieval. IEEE Trans. Image Process. 2019, 28, 1602–1612. [Google Scholar] [CrossRef]
- Shah, M.; Chen, X.; Rohrbach, M.; Parikh, D. Cycle-Consistency for Robust Visual Question Answering. arXiv 2019, arXiv:1902.05660. [Google Scholar]
- Qiao, T.; Zhang, W.; Zhang, M.; Ma, Z.; Xu, D. Ancient Painting to Natural Image: A New Solution for Painting Processing. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV), Waikoloa Village, HI, USA, 7–11 January 2019; IEEE: Hoboken, NJ, USA, 2019; pp. 521–530. [Google Scholar]
- Almahairi, A.; Rajeswar, S.; Sordoni, A.; Bachman, P.; Courville, A.C. Augmented CycleGAN: Learning Many-to-Many Mappings from Unpaired Data. arXiv 2018, arXiv:1802.10151. [Google Scholar]
- Che, T.; Li, Y.; Jacob, A.P.; Bengio, Y.; Li, W. Mode regularized generative adversarial networks. arXiv 2016, arXiv:1612.02136. [Google Scholar]
- Kodali, N.; Abernethy, J.; Hays, J.; Kira, Z. On Convergence and Stability of GANs. arXiv 2017, arXiv:1705.07215. [Google Scholar]
- Arjovsky, M.; Chintala, S.; Bottou, L. Wasserstein GAN. arXiv 2017, arXiv:1701.07875. [Google Scholar]
- Gulrajani, I.; Ahmed, F.; Arjovsky, M.; Dumoulin, V.; Courville, A. Improved Training of Wasserstein GANs. arXiv 2017, arXiv:1704.00028. [Google Scholar]
- Shrivastava, A.; Pfister, T.; Tuzel, O.; Susskind, J.; Wang, W.; Webb, R. Learning from simulated and unsupervised images through adversarial training. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017. [Google Scholar] [CrossRef] [Green Version]
- Kim, T.; Cha, M.; Kim, H.; Lee, J.K.; Kim, J. Learning to Discover Cross-Domain Relations with Generative Adversarial Networks. arXiv 2017, arXiv:1703.05192. [Google Scholar]
- Wang, P.; Patel, V.M. Generating high quality visible images from SAR images using CNNs. In Proceedings of the 2018 IEEE Radar Conference (RadarConf18), Oklahoma City, OK, USA, 23–27 April 2018; IEEE: Hoboken, NJ, USA, 2018; pp. 570–575. [Google Scholar]
- Merkle, N.; Fischer, P.; Auer, S.; Müller, R. On the possibility of conditional adversarial networks for multi-sensor image matching. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017; pp. 2633–2636. [Google Scholar] [CrossRef] [Green Version]
- Frid-Adar, M.; Klang, E.; Amitai, M.; Goldberger, J.; Greenspan, H. Synthetic Data Augmentation using GAN for Improved Liver Lesion Classification. arXiv 2018, arXiv:1801.02385. [Google Scholar]
- Ben-Cohen, A.; Klang, E.; Raskin, S.P.; Soffer, S.; Ben-Haim, S.; Konen, E.; Amitai, M.M.; Greenspan, H. Cross-Modality Synthesis from CT to PET using FCN and GAN Networks for Improved Automated Lesion Detection. Eng. Appl. Artif. Intell. 2019, 78, 186–194. [Google Scholar] [CrossRef] [Green Version]
- Zhu, Y.; Aoun, M.; Krijn, M.; Vanschoren, J. Data augmentation using conditional generative adversarial networks for leaf counting in arabidopsis plants. In Proceedings of the British Machine Vision Conference, Newcastle, UK, 3–6 September 2018. [Google Scholar]
- Wu, E.; Wu, K.; Cox, D.; Lotter, W. Conditional Infilling GANs for Data Augmentation in Mammogram Classification. In Image Analysis for Moving Organ, Breast, and Thoracic Images; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2018; pp. 98–106. [Google Scholar] [CrossRef] [Green Version]
- Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Identity mappings in deep residual networks. arXiv 2016, arXiv:1603.05027. [Google Scholar]
- Ulyanov, D.; Vedaldi, A.; Lempitsky, V. Instance normalization: The missing ingredient for fast stylization. arXiv 2016, arXiv:1607.08022. [Google Scholar]
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [Green Version]
- Ridgeway, K.; Snell, J.; Roads, B.; Zemel, R.S.; Mozer, M.C. Learning to generate images with perceptual similarity metrics. arXiv 2015, arXiv:1511.06409. [Google Scholar]
- Kahaki, S.M.M.; Nordin, M.J.; Ashtari, A.H.; Zahra, S.J. Invariant feature matching for image registration application based on new dissimilarity of spatial features. PLoS ONE 2016, 11, e0149710. [Google Scholar]
- Borji, A. Pros and Cons of GAN Evaluation Measures. arXiv 2018, arXiv:1802.03446. [Google Scholar] [CrossRef] [Green Version]
- Korhonen, J.; You, J. Peak signal-to-noise ratio revisited: Is simple beautiful? In Proceedings of the Fourth International Workshop on Quality of Multimedia Experience, Yarra Valley, Australia, 5–7 July 2012; pp. 37–38. [Google Scholar] [CrossRef]
- Wang, Z.; Bovik, A.C. Mean squared error: Love it or leave it? A new look at Signal Fidelity Measures. IEEE Signal Process. Mag. 2009, 26, 98–117. [Google Scholar] [CrossRef]
- Yao, S.; Lin, W.; Ong, E.; Lu, Z. Contrast signal-to-noise ratio for image quality assessment. In Proceedings of the IEEE International Conference on Image Processing 2005, Genova, Italy, 11–14 September 2005; Volume 1, pp. 397–400. [Google Scholar] [CrossRef]
- Xie, W.; Noble, J.A.; Zisserman, A. Microscopy Cell Counting with Fully Convolutional Regression Networks. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 2018, 6, 283–292. [Google Scholar] [CrossRef]
- Kahaki, S.; Nordin, M.; Ashtari, A. Contour-based corner detection and classification by using mean projection transform. Sensors 2014, 14, 4126–4143. [Google Scholar] [CrossRef] [Green Version]
- Grigorescu, C.; Petkov, N.; Westenberg, M.A. Contour detection based on nonclassical receptive field inhibition. IEEE Trans. Image Process. 2003, 12, 729–739. [Google Scholar] [CrossRef]
| Loss Function | SSIM (Avg) | SSIM (Min) | SSIM (Max) | PSNR (Avg, dB) | PSNR (Min, dB) | PSNR (Max, dB) |
|---|---|---|---|---|---|---|
| L1 Cycle Loss [21] | 0.68 | 0.016 | 0.79 | 19.15 | 4.67 | 24.44 |
| L2 Cycle Loss | 0.72 | 0.57 | 0.80 | 20.93 | 17.78 | 23.32 |
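The SSIM [45] and PSNR scores above measure, respectively, structural similarity and pixel-level fidelity between generated and real echograms. A minimal sketch of how such scores can be computed with scikit-image, assuming 2-D grayscale inputs; the function name is illustrative:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def similarity_scores(real: np.ndarray, fake: np.ndarray):
    """Return (SSIM, PSNR in dB) for a real/synthetic echogram pair.

    Both inputs are assumed to be 2-D grayscale arrays with the same
    shape and dtype (e.g., uint8 in [0, 255]).
    """
    ssim = structural_similarity(real, fake)
    psnr = peak_signal_noise_ratio(real, fake)
    return ssim, psnr
```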
| Loss Function | Precision | Recall | F1 | F2 |
|---|---|---|---|---|
| L1 Cycle Loss [21] | 0.007 | 0.008 | 0.007 | 0.008 |
| L2 Cycle Loss | 0.04 | 0.1 | 0.04 | 0.048 |
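The F1 and F2 columns are both F-beta scores: F1 uses beta = 1, while F2 uses beta = 2 and therefore weights recall more heavily than precision. A minimal definition (with an illustrative function name):

```python
def f_beta(precision: float, recall: float, beta: float = 1.0) -> float:
    """Weighted harmonic mean of precision and recall.

    beta = 1 gives the standard F1 score; beta = 2 gives the F2 score,
    which weights recall twice as much as precision.
    """
    if precision == 0.0 and recall == 0.0:
        return 0.0
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
```

Note that F-scores are typically averaged per test image, so the averages reported in such tables need not equal the harmonic mean of the averaged precision and recall columns.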
| Hyperparameters | SSIM | PSNR (dB) | Precision | Recall | F1 | F2 |
|---|---|---|---|---|---|---|
| , , , R = 0.0002 | 0.75 | 23.69 | 0.048 | 0.058 | 0.05 | 0.06 |
| , , , R = 0.0002 | 0.82 | 25.71 | 0.33 | 0.49 | 0.38 | 0.43 |
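The learning rate R = 0.0002 corresponds to the Adam optimizer [42]. A sketch of the optimizer setup under that rate; the beta values are the widely used GAN defaults, assumed here rather than quoted from the paper (the loss-weight entries in the table's first column are likewise not specified above):

```python
import torch
import torch.nn as nn

# Stand-in modules; the actual models are the CycleGAN generators and
# discriminators described in Section 3.2.
generator = nn.Conv2d(1, 64, kernel_size=3, padding=1)
discriminator = nn.Conv2d(1, 1, kernel_size=3, padding=1)

# R = 0.0002 is taken from the table above; betas=(0.5, 0.999) is the
# common GAN convention and an assumption, not a value from the paper.
opt_G = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_D = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))
```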
| Dataset Used | Number of Train/Test Samples | Precision | Recall | F1 Score | F2 Score |
|---|---|---|---|---|---|
| Real | 20,463/8,769 | 0.518 | 0.586 | 0.507 | 0.534 |
| Synthetic | 20,463/8,769 | 0.417 | 0.500 | 0.451 | 0.478 |
| Mixture 1 | 40,926/17,538 | 0.575 | 0.660 | 0.590 | 0.621 |
| Mixture 2 | 30,694/17,538 | 0.522 | 0.528 | 0.506 | 0.510 |
| Mixture 3 | 20,463/8,769 | 0.172 | 0.136 | 0.139 | 0.134 |
| Mixture 4 | 30,694/8,769 | 0.394 | 0.680 | 0.463 | 0.551 |
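The Mixture rows pool real and GAN-generated echograms before training the contour detector; Mixture 1, for instance, doubles both the training and test sets relative to the Real row. A hypothetical PyTorch sketch of such pooling (tensor shapes and set sizes are placeholders):

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Placeholder (image, contour-label) pairs; in the table above the real
# training set alone holds 20,463 echograms.
real = TensorDataset(torch.randn(128, 1, 256, 256), torch.zeros(128, 1, 256, 256))
synthetic = TensorDataset(torch.randn(128, 1, 256, 256), torch.zeros(128, 1, 256, 256))

# Augmentation as in the Mixture rows: pool both sources into one training set.
mixture = ConcatDataset([real, synthetic])
loader = DataLoader(mixture, batch_size=16, shuffle=True)
```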