A High-Quality Hybrid Mapping Model Based on Averaging Dense Sampling Parameters
Abstract
1. Introduction
2. Related Work
3. Methods
3.1. Cyclic Adversarial Generation Network Based on SWAD Optimization Method
3.2. Network Optimization
4. Experiments
4.1. Dataset and Hyperparameters
4.2. Integrity Testing of the SWAD Method on the GAN Network
4.3. Evaluation Indicators
4.4. Experiment Results
4.4.1. Experiment Results of Prevailing and Proposed Methods on Google Maps Dataset
- Convergence is deemed to be reached once the amplitude of the loss curve no longer exceeds 10% of the maximum loss value, and the training round at which this first occurs is recorded as the convergence time.
- We count the oscillations whose amplitude equals or exceeds 10% of the maximum loss value to quantify how strongly training oscillates (a sketch of both measurements follows this list).
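The sketch below is a minimal illustration of how both measurements can be taken from a recorded loss history; the function name, the 100-round amplitude window, and the per-round sampling are our own assumptions rather than details taken from the paper.

```python
import numpy as np

def convergence_and_oscillation(losses, window=100, threshold=0.10):
    """Estimate the convergence round and oscillation count of a loss curve.

    losses:    per-round loss values recorded during training.
    window:    number of rounds over which the amplitude is measured (assumed).
    threshold: amplitude limit as a fraction of the maximum loss (10% here).
    """
    losses = np.asarray(losses, dtype=float)
    limit = threshold * losses.max()

    # Convergence: first round after which the peak-to-peak amplitude of the
    # loss within the sliding window stays at or below the 10% limit.
    convergence_round = None
    for t in range(len(losses) - window + 1):
        segment = losses[t:t + window]
        if segment.max() - segment.min() <= limit:
            convergence_round = t
            break

    # Oscillation count: number of per-round loss changes whose magnitude
    # equals or exceeds the 10% limit.
    oscillation_count = int(np.sum(np.abs(np.diff(losses)) >= limit))

    return convergence_round, oscillation_count
```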
4.4.2. Style Transfer Result on Google Maps Dataset
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Huang, C.; Mees, O.; Zeng, A.; Burgard, W. Visual language maps for robot navigation. In Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA), London, UK, 29 May–2 June 2023; pp. 10608–10615.
- Huang, C.; Mees, O.; Zeng, A.; Burgard, W. Audio visual language maps for robot navigation. arXiv 2023, arXiv:2303.07522.
- Mao, J.H.; Yang, J.; Shao, R.P.; Wang, W.Z. Research on the construction of a BIM-based model for cross-floor indoor navigation maps. In Frontiers in Civil and Hydraulic Engineering; CRC Press: Boca Raton, FL, USA, 2023; Volume 1, pp. 372–378.
- Grigorescu, S.; Trasnea, B.; Cocias, T.; Macesanu, G. A survey of deep learning techniques for autonomous driving. J. Field Robot. 2020, 37, 362–386.
- Jiang, Z.; Zhang, X.; Wang, P. Grid-Map-Based Path Planning and Task Assignment for Multi-Type AGVs in a Distribution Warehouse. Mathematics 2023, 11, 2802.
- Yamaguchi, T.; Kuwano, A.; Koyama, T.; Okamoto, J.; Suzuki, S.; Okuda, H.; Saito, T.; Masamune, K.; Muragaki, Y. Construction of brain area risk map for decision making using surgical navigation and motor evoked potential monitoring information. Int. J. Comput. Assist. Radiol. Surg. 2023, 18, 269–278.
- Zhang, Q.; Liu, X. Robot indoor navigation point cloud map generation algorithm based on visual sensing. J. Intell. Syst. 2023, 32, 20220258.
- Tanwar, J.; Sharma, S.K.; Mittal, M. Designing obstacle’s map of an unknown place using autonomous drone navigation and web services. Int. J. Pervasive Comput. Commun. 2023, 19, 154–169.
- Ma, L.; Liu, Y.; Zhang, X.; Ye, Y.; Yin, G.; Johnson, B.A. Deep learning in remote sensing applications: A meta-analysis and review. ISPRS J. Photogramm. Remote Sens. 2019, 152, 166–177.
- Salvo, C.; Vitale, A. A Remote Sensing Method to Assess the Future Multi-Hazard Exposure of Urban Areas. Remote Sens. 2023, 15, 4288.
- Wang, L.; Gao, R.; Li, C.; Wang, J.; Liu, Y.; Hu, J.; Li, B.; Qiao, H.; Feng, H.; Yue, J. Mapping Soybean Maturity and Biochemical Traits Using UAV-Based Hyperspectral Images. Remote Sens. 2023, 15, 4807.
- Jing, Y.; Yang, Y.; Feng, Z.; Ye, J.; Yu, Y.; Song, M. Neural style transfer: A review. IEEE Trans. Vis. Comput. Graph. 2019, 26, 3365–3385.
- Wang, P.; Li, Y.; Vasconcelos, N. Rethinking and improving the robustness of image style transfer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 124–133.
- Ayyalasomayajula, R.; Arun, A.; Wu, C.; Sharma, S.; Sethi, A.R.; Vasisht, D.; Bharadia, D. Deep learning based wireless localization for indoor navigation. In Proceedings of the 26th Annual International Conference on Mobile Computing and Networking, London, UK, 21–25 September 2020; pp. 1–14.
- Fernández, C.; Munoz-Bulnes, J.; Fernández-Llorca, D.; Parra, I.; Garcia-Daza, I.; Izquierdo, R.; Sotelo, M.A. High-level interpretation of urban road maps fusing deep learning-based pixelwise scene segmentation and digital navigation maps. J. Adv. Transp. 2018, 2018, 2096970.
- Golroudbari, A.A.; Sabour, M.H. Recent Advancements in Deep Learning Applications and Methods for Autonomous Navigation–A Comprehensive Review. arXiv 2023, arXiv:2302.11089.
- Lee, Y.W.; Kim, J.S.; Park, K.R. Ocular Biometrics with Low-Resolution Images Based on Ocular Super-Resolution CycleGAN. Mathematics 2022, 10, 3818.
- Xu, C.; Shu, J.; Zhu, G. Multi-Feature Dynamic Fusion Cross-Domain Scene Classification Model Based on Lie Group Space. Remote Sens. 2023, 15, 4790.
- Singh, S.P.; Jaggi, M. Model fusion via optimal transport. Adv. Neural Inf. Process. Syst. 2020, 33, 22045–22055.
- Cha, J.; Chun, S.; Lee, K.; Cho, H.C.; Park, S.; Lee, Y.; Park, S. SWAD: Domain generalization by seeking flat minima. Adv. Neural Inf. Process. Syst. 2021, 34, 22405–22418.
- Li, J.; Hong, D.; Gao, L.; Yao, J.; Zheng, K.; Zhang, B.; Chanussot, J. Deep learning in multimodal remote sensing data fusion: A comprehensive review. Int. J. Appl. Earth Obs. Geoinf. 2022, 112, 102926.
- Ghassemian, H. A review of remote sensing image fusion methods. Inf. Fusion 2016, 32, 75–89.
- Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1125–1134.
- Gatys, L.; Ecker, A.S.; Bethge, M. Texture synthesis using convolutional neural networks. Adv. Neural Inf. Process. Syst. 2015, 28, 262–270.
- Gatys, L.A.; Ecker, A.S.; Bethge, M. Image style transfer using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 2414–2423.
- Johnson, J.; Alahi, A.; Fei-Fei, L. Perceptual losses for real-time style transfer and super-resolution. In Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Proceedings, Part II; Springer: Cham, Switzerland, 2016; pp. 694–711.
- Luan, F.; Paris, S.; Shechtman, E.; Bala, K. Deep photo style transfer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4990–4998.
- Li, Y.; Fang, C.; Yang, J.; Wang, Z.; Lu, X.; Yang, M.H. Universal style transfer via feature transforms. Adv. Neural Inf. Process. Syst. 2017, 30, 385–395.
- Li, Y.; Liu, M.Y.; Li, X.; Yang, M.H.; Kautz, J. A closed-form solution to photorealistic image stylization. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 453–468.
- Yoo, J.; Uh, Y.; Chun, S.; Kang, B.; Ha, J.W. Photorealistic style transfer via wavelet transforms. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 9036–9045.
- Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2223–2232.
- Li, B.; Li, P.; Liu, B.; Li, M. A High-Precision Underwater Target Detection Method Based on Cascade Neural Network and Edge Computing. CN116758406A, 15 September 2023.
- Izmailov, P.; Podoprikhin, D.; Garipov, T.; Vetrov, D.; Wilson, A.G. Averaging weights leads to wider optima and better generalization. arXiv 2018, arXiv:1803.05407.
- Son, D.M.; Kwon, H.J.; Lee, S.H. Enhanced Night-to-Day Image Conversion Using CycleGAN-Based Base-Detail Paired Training. Mathematics 2023, 11, 3102.
- Krstanović, L.; Popović, B.; Janev, M.; Brkljač, B. Feature Map Regularized CycleGAN for Domain Transfer. Mathematics 2023, 11, 372.
- Chen, H.; Lundberg, S.; Lee, S.I. Checkpoint ensembles: Ensemble methods from a single training process. arXiv 2017, arXiv:1710.03282.
- Guo, H.; Jin, J.; Liu, B. Stochastic weight averaging revisited. Appl. Sci. 2023, 13, 2935.
- Garipov, T.; Izmailov, P.; Podoprikhin, D.; Vetrov, D.P.; Wilson, A.G. Loss surfaces, mode connectivity, and fast ensembling of DNNs. Adv. Neural Inf. Process. Syst. 2018, 31, 8789–8798.
- Huang, G.; Li, Y.; Pleiss, G.; Liu, Z.; Hopcroft, J.E.; Weinberger, K.Q. Snapshot ensembles: Train 1, get M for free. arXiv 2017, arXiv:1704.00109.
- Neklyudov, K.; Molchanov, D.; Ashukha, A.; Vetrov, D. Variance networks: When expectation does not meet your expectations. arXiv 2018, arXiv:1803.03764.
- Mandt, S.; Hoffman, M.D.; Blei, D.M. Stochastic gradient descent as approximate Bayesian inference. arXiv 2017, arXiv:1704.04289.
- LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Robbins, H.; Monro, S. A stochastic approximation method. Ann. Math. Stat. 1951, 22, 400–407.
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
- Obukhov, A.; Krasnyanskiy, M. Quality assessment method for GAN based on modified metrics inception score and Fréchet inception distance. In Software Engineering Perspectives in Intelligent Systems: Proceedings of the 4th Computational Methods in Systems and Software 2020; Springer: Cham, Switzerland, 2020; Volume 14, pp. 102–114.
- Chong, M.J.; Forsyth, D. Effectively unbiased FID and inception score and where to find them. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 6070–6079.
| Phase | Source | Image Size | Structure |
| --- | --- | --- | --- |
| Integrity Testing | MNIST | | |
| Experiment | Google Maps | | |
| Hyperparameters | SGD ¹ | Adam ¹ | SWAD ¹ | SWA ² | PSWA ² | SGD ² | SWAD ² |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Learning rate | - | - | 1 × 10 | 1 × 10 | 2 × 10 | 2 × 10 | 2 × 10 |
| Batch size | - | - | 64 | 2 | 2 | 2 | 2 |
| Number of epochs | - | - | 100 | 100 | 100 | 100 | 100 |
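For context on how dense parameter sampling differs from ordinary SWA in this setting, the following PyTorch-style sketch averages the generator weights at every iteration inside a chosen range instead of once per epoch. It is only an illustration under our own assumptions (`generator`, `optimizer`, `loss_fn`, and the start/end iterations are hypothetical), not the authors' released training code.

```python
import torch

def train_generator_with_dense_averaging(generator, dataloader, optimizer,
                                          loss_fn, start_iter=1000, end_iter=5000):
    """Wrap a generator training loop with SWAD-style dense weight averaging."""
    # AveragedModel keeps a running average of the wrapped model's parameters.
    averaged = torch.optim.swa_utils.AveragedModel(generator)

    iteration = 0
    for real_a, real_b in dataloader:          # assumed paired (A, B) batches
        optimizer.zero_grad()
        loss = loss_fn(generator, real_a, real_b)
        loss.backward()
        optimizer.step()

        # Dense sampling: accumulate the weights of *every* iteration in the
        # averaging range, rather than one snapshot per epoch as in vanilla SWA.
        if start_iter <= iteration <= end_iter:
            averaged.update_parameters(generator)
        iteration += 1

    return averaged  # averaged.module holds the averaged generator
```

In the full cycle-consistent setup, the discriminators and the second generator would be updated in the same loop; they are omitted here to keep the sketch focused on the averaging step.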
| Methods | FID | Decreasing Rate of FID ¹ | IS-A | IS-B |
| --- | --- | --- | --- | --- |
| SWA | 194.970 | 0 | 4.328 | 2.334 |
| SWA (generators and discriminators) ² | 331.962 | −70.3% | 3.897 | 2.097 |
| PSWA | 246.689 | −26.5% | 4.756 | 1.796 |
| SGD | 89.147 | 54.3% | 3.696 | 3.004 |
| SWAD | 86.274 | 55.8% | 3.892 | 3.008 |
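The footnote defining the decreasing rate of FID is not reproduced above, but the tabulated values are consistent with a rate computed relative to the SWA baseline, for example for SWAD:

$$\text{Decreasing rate of FID} = \frac{\mathrm{FID}_{\mathrm{SWA}} - \mathrm{FID}_{\mathrm{SWAD}}}{\mathrm{FID}_{\mathrm{SWA}}} = \frac{194.970 - 86.274}{194.970} \approx 55.8\%,$$

so positive values indicate an improvement over SWA and negative values a degradation.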
| Methods | Convergence Rounds | Oscillation Counts |
| --- | --- | --- |
| SWA | 42 k | 40 |
| SWA (generators and discriminators) | 35 k | 35 |
| PSWA | 48 k | 51 |
| SGD | 36 k | 46 |
| SWAD | 26 k | 36 |
Share and Cite
Yi, F.; Li, W.; Huang, M.; Du, Y.; Ye, L. A High-Quality Hybrid Mapping Model Based on Averaging Dense Sampling Parameters. Appl. Sci. 2024, 14, 335. https://doi.org/10.3390/app14010335