Hierarchical Feature Fusion and Enhanced Attention Mechanism for Robust GAN-Generated Image Detection
Abstract
1. Introduction
2. Related Work
2.1. Spatial-Based Fake-Image Detection
2.2. Frequency-Based Fake Image Detection
2.3. Development of GAN
3. Methodology
3.1. Overview of the Proposed Network
3.2. Multi-Type Stepwise Pooling (MSP)
3.3. Attention Enhancement Module (AEM)
Algorithm 1: Attention Filtering
4. Experimental Results
4.1. Experimental Setup
4.2. Detection Performance
4.3. Visualization
4.3.1. Fake-Image CAM Visualization
4.3.2. Real-Image CAM Visualization
4.4. Ablation Studies
4.4.1. Effectiveness of Different Components
4.4.2. Effectiveness of AF
4.4.3. Impact of p
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- He, Y.; Yu, N.; Keuper, M.; Fritz, M. Beyond the Spectrum: Detecting Deepfakes via Re-Synthesis. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21, Montreal, QC, Canada, 19–27 August 2021; Zhou, Z.H., Ed.; International Joint Conferences on Artificial Intelligence Organization: San Francisco, CA, USA, 2021; pp. 2534–2541. [Google Scholar] [CrossRef]
- Liu, Z.; Qi, X.; Torr, P.H. Global texture enhancement for fake face detection in the wild. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 8060–8069. [Google Scholar]
- Yu, Y.; Ni, R.; Zhao, Y. Mining generalized features for detecting ai-manipulated fake faces. arXiv 2020, arXiv:2010.14129. [Google Scholar]
- Jeong, Y.; Kim, D.; Min, S.; Joe, S.; Gwon, Y.; Choi, J. Bihpf: Bilateral high-pass filters for robust deepfake detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2022; pp. 48–57. [Google Scholar]
- Jeong, Y.; Kim, D.; Ro, Y.; Choi, J. Frepgan: Robust deepfake detection using frequency-level perturbations. In Proceedings of the AAAI Conference on Artificial Intelligence, Online, 22 February–1 March 2022; Volume 36, pp. 1060–1068. [Google Scholar]
- Wang, S.Y.; Wang, O.; Zhang, R.; Owens, A.; Efros, A.A. CNN-generated images are surprisingly easy to spot… for now. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 8695–8704. [Google Scholar]
- Chai, L.; Bau, D.; Lim, S.N.; Isola, P. What makes fake images detectable? Understanding properties that generalize. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Proceedings, Part XXVI 16. Springer: Berlin/Heidelberg, Germany, 2020; pp. 103–120. [Google Scholar]
- Yu, N.; Davis, L.S.; Fritz, M. Attributing fake images to gans: Learning and analyzing gan fingerprints. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 7556–7566. [Google Scholar]
- Li, L.; Bao, J.; Zhang, T.; Yang, H.; Chen, D.; Wen, F.; Guo, B. Face X-Ray for More General Face Forgery Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020. [Google Scholar]
- Frank, J.; Eisenhofer, T.; Schönherr, L.; Fischer, A.; Kolossa, D.; Holz, T. Leveraging frequency analysis for deep fake image recognition. In Proceedings of the International Conference on Machine Learning, Virtual, 13–18 July 2020; pp. 3247–3258. [Google Scholar]
- Hulzebosch, N.; Ibrahimi, S.; Worring, M. Detecting CNN-generated facial images in real-world scenarios. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 13–19 June 2020; pp. 642–643. [Google Scholar]
- Cazenavette, G.; Sud, A.; Leung, T.; Usman, B. FakeInversion: Learning to Detect Images from Unseen Text-to-Image Models by Inverting Stable Diffusion. arXiv 2024, arXiv:2406.08603. [Google Scholar]
- Wißmann, A.; Zeiler, S.; Nickel, R.M.; Kolossa, D. Whodunit: Detection and Attribution of Synthetic Images by Leveraging Model-specific Fingerprints. In Proceedings of the 3rd ACM International Workshop on Multimedia AI against Disinformation, Phuket, Thailand, 10–14 June 2024; MAD ’24. pp. 65–72. [Google Scholar] [CrossRef]
- Uhlenbrock, L.; Cozzolino, D.; Moussa, D.; Verdoliva, L.; Riess, C. Did You Note My Palette? Unveiling Synthetic Images Through Color Statistics. In Proceedings of the 2024 ACM Workshop on Information Hiding and Multimedia Security, Baiona, Spain, 24–26 June 2024; IH&MMSec ’24; pp. 47–52. [Google Scholar] [CrossRef]
- Tan, C.; Liu, H.; Zhao, Y.; Wei, S.; Gu, G.; Liu, P.; Wei, Y. Rethinking the Up-Sampling Operations in CNN-Based Generative Network for Generalizable Deepfake Detection. In Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 16–22 June 2024; pp. 28130–28139. [Google Scholar] [CrossRef]
- He, Z.; Chen, P.Y.; Ho, T.Y. RIGID: A Training-free and Model-Agnostic Framework for Robust AI-Generated Image Detection. arXiv 2024, arXiv:2405.20112. [Google Scholar]
- Rossler, A.; Cozzolino, D.; Verdoliva, L.; Riess, C.; Thies, J.; Nießner, M. Faceforensics++: Learning to detect manipulated facial images. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1–11. [Google Scholar]
- Li, Y.; Chang, M.C.; Lyu, S. In ictu oculi: Exposing ai created fake videos by detecting eye blinking. In Proceedings of the 2018 IEEE International Workshop on Information Forensics and Security (WIFS), Hong Kong, China, 11–13 December 2018; pp. 1–7. [Google Scholar]
- Cozzolino, D.; Thies, J.; Rössler, A.; Riess, C.; Nießner, M.; Verdoliva, L. Forensictransfer: Weakly-supervised domain adaptation for forgery detection. arXiv 2018, arXiv:1812.02510. [Google Scholar]
- Marra, F.; Gragnaniello, D.; Verdoliva, L.; Poggi, G. Do gans leave artificial fingerprints? In Proceedings of the 2019 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR), San Jose, CA, USA, 28–30 March 2019; pp. 506–511. [Google Scholar]
- Bayar, B.; Stamm, M.C. A deep learning approach to universal image manipulation detection using a new convolutional layer. In Proceedings of the 4th ACM Workshop on Information Hiding and Multimedia Security, Vigo, Spain, 20–22 June 2016; pp. 5–10. [Google Scholar]
- Zhao, T.; Xu, X.; Xu, M.; Ding, H.; Xiong, Y.; Xia, W. Learning self-consistency for deepfake detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 15023–15033. [Google Scholar]
- Durall, R.; Keuper, M.; Keuper, J. Watch your up-convolution: Cnn based generative deep neural networks are failing to reproduce spectral distributions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 7890–7899. [Google Scholar]
- Bergmann, S.; Moussa, D.; Brand, F.; Kaup, A.; Riess, C. Forensic analysis of AI-compression traces in spatial and frequency domain. Pattern Recognit. Lett. 2024, 180, 41–47. [Google Scholar] [CrossRef]
- Zhang, Y.; Xu, X. Diffusion Noise Feature: Accurate and Fast Generated Image Detection. arXiv 2023, arXiv:2312.02625. [Google Scholar]
- Bappy, J.H.; Simons, C.; Nataraj, L.; Manjunath, B.; Roy-Chowdhury, A.K. Hybrid lstm and encoder–decoder architecture for detection of image forgeries. IEEE Trans. Image Process. 2019, 28, 3286–3300. [Google Scholar] [CrossRef] [PubMed]
- Masi, I.; Killekar, A.; Mascarenhas, R.M.; Gurudatt, S.P.; AbdAlmageed, W. Two-branch recurrent network for isolating deepfakes in videos. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Proceedings, Part VII 16. Springer: Berlin/Heidelberg, Germany, 2020; pp. 667–684. [Google Scholar]
- Qian, Y.; Yin, G.; Sheng, L.; Chen, Z.; Shao, J. Thinking in frequency: Face forgery detection by mining frequency-aware clues. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 86–103. [Google Scholar]
- Mirza, M.; Osindero, S. Conditional generative adversarial nets. arXiv 2014, arXiv:1411.1784. [Google Scholar]
- Radford, A.; Metz, L.; Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv 2015, arXiv:1511.06434. [Google Scholar]
- Odena, A. Semi-supervised learning with generative adversarial networks. arXiv 2016, arXiv:1606.01583. [Google Scholar]
- Brock, A.; Donahue, J.; Simonyan, K. Large Scale GAN Training for High Fidelity Natural Image Synthesis. arXiv 2018, arXiv:1809.11096. [Google Scholar]
- Karras, T.; Aila, T.; Laine, S.; Lehtinen, J. Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv 2017, arXiv:1710.10196. [Google Scholar]
- Karras, T.; Laine, S.; Aila, T. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 4401–4410. [Google Scholar]
- Karras, T.; Laine, S.; Aittala, M.; Hellsten, J.; Lehtinen, J.; Aila, T. Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 8110–8119. [Google Scholar]
- Sauer, A.; Karras, T.; Laine, S.; Geiger, A.; Aila, T. Stylegan-t: Unlocking the power of gans for fast large-scale text-to-image synthesis. In Proceedings of the International Conference on Machine Learning, Honolulu, HI, USA, 23–29 July 2023; pp. 30105–30118. [Google Scholar]
- Kang, M.; Zhu, J.Y.; Zhang, R.; Park, J.; Shechtman, E.; Paris, S.; Park, T. Scaling up gans for text-to-image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 10124–10134. [Google Scholar]
- Trevithick, A.; Chan, M.; Takikawa, T.; Iqbal, U.; Mello, S.D.; Chandraker, M.K.; Ramamoorthi, R.; Nagano, K. What You See is What You GAN: Rendering Every Pixel for High-Fidelity Geometry in 3D GANs. In Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 16–22 June 2024; pp. 22765–22775. [Google Scholar]
- Lei, B.; Yu, K.; Feng, M.; Cui, M.; Xie, X. DiffusionGAN3D: Boosting Text-guided 3D Generation and Domain Adaptation by Combining 3D GANs and Diffusion Priors. In Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 16–22 June 2024; pp. 10487–10497. [Google Scholar] [CrossRef]
- Pan, X.; Tewari, A.; Leimkühler, T.; Liu, L.; Meka, A.; Theobalt, C. Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold. In Proceedings of the ACM SIGGRAPH 2023 Conference Proceedings, Los Angeles, CA, USA, 6–10 August 2023. SIGGRAPH ’23. [Google Scholar] [CrossRef]
- Boroujeni, S.P.H.; Razi, A. IC-GAN: An Improved Conditional Generative Adversarial Network for RGB-to-IR image translation with applications to forest fire monitoring. Expert Syst. Appl. 2024, 238, 121962. [Google Scholar] [CrossRef]
- Huang, N.; Gokaslan, A.; Kuleshov, V.; Tompkin, J. The GAN is dead; long live the GAN! A Modern GAN Baseline. In Proceedings of the Thirty-Eighth Annual Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 9–15 December 2024. [Google Scholar]
- Tan, C.; Zhao, Y.; Wei, S.; Gu, G.; Wei, Y. Learning on gradients: Generalized artifacts representation for gan-generated images detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 12105–12114. [Google Scholar]
- Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2223–2232. [Google Scholar]
- Choi, Y.; Choi, M.; Kim, M.; Ha, J.W.; Kim, S.; Choo, J. Stargan: Unified generative adversarial networks for multi-domain image-to-image translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8789–8797. [Google Scholar]
- Park, T.; Liu, M.Y.; Wang, T.C.; Zhu, J.Y. Semantic image synthesis with spatially-adaptive normalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 2337–2346. [Google Scholar]
- Yu, F.; Seff, A.; Zhang, Y.; Song, S.; Funkhouser, T.; Xiao, J. Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv 2015, arXiv:1506.03365. [Google Scholar]
- Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 2015, 115, 211–252. [Google Scholar] [CrossRef]
- Liu, Z.; Luo, P.; Wang, X.; Tang, X. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 3730–3738. [Google Scholar]
- Lin, T.Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft coco: Common objects in context. In Proceedings of the Computer Vision—ECCV 2014: 13th European Conference, Zurich, Switzerland, 6–12 September 2014; Proceedings, Part V 13. Springer: Berlin/Heidelberg, Germany, 2014; pp. 740–755. [Google Scholar]
| Method | Training Classes | ProGAN Acc | ProGAN AP | StyleGAN Acc | StyleGAN AP | StyleGAN2 Acc | StyleGAN2 AP | BigGAN Acc | BigGAN AP | CycleGAN Acc | CycleGAN AP | StarGAN Acc | StarGAN AP | GauGAN Acc | GauGAN AP | Deepfake Acc | Deepfake AP | Mean Acc | Mean AP |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Wang [6] | 1 | 50.4 | 63.8 | 50.4 | 79.3 | 68.2 | 94.7 | 50.2 | 61.3 | 50.0 | 52.9 | 50.0 | 48.2 | 50.3 | 67.6 | 50.1 | 51.5 | 52.5 | 64.9 |
| Frank [10] | 1 | 78.9 | 77.9 | 69.4 | 64.8 | 67.4 | 64.0 | 62.3 | 58.6 | 67.4 | 65.4 | 60.5 | 59.5 | 67.5 | 69.0 | 52.4 | 47.3 | 65.7 | 63.3 |
| Durall [23] | 1 | 85.1 | 79.5 | 59.2 | 55.2 | 70.4 | 63.8 | 57.0 | 53.9 | 66.7 | 61.4 | 99.8 | 99.6 | 58.7 | 54.8 | 53.0 | 51.9 | 68.7 | 65.0 |
| BiHPF [4] | 1 | 82.5 | 81.4 | 68.0 | 62.8 | 68.8 | 63.6 | 67.0 | 62.5 | 75.5 | 74.2 | 90.1 | 90.1 | 73.6 | 92.1 | 51.6 | 49.9 | 72.1 | 72.1 |
| FrePGAN [5] | 1 | 95.5 | 99.4 | 80.6 | 90.6 | 77.4 | 93.0 | 63.5 | 60.5 | 59.4 | 59.9 | 99.6 | 100.0 | 53.0 | 49.1 | 70.4 | 81.5 | 74.9 | 79.3 |
| LGrad [43] | 1 | 99.4 | 99.9 | 96.0 | 99.6 | 93.8 | 99.4 | 79.5 | 88.9 | 84.7 | 94.4 | 99.5 | 100.0 | 70.9 | 81.8 | 66.7 | 77.9 | 86.3 | 92.7 |
| Ours | 1 | 99.1 | 99.9 | 91.9 | 99.4 | 93.3 | 99.5 | 79.8 | 89.5 | 87.2 | 94.2 | 98.0 | 99.9 | 71.0 | 76.9 | 70.9 | 82.0 | 86.4 | 92.7 |
| Wang [6] | 2 | 64.6 | 92.7 | 52.8 | 82.8 | 75.7 | 96.6 | 51.6 | 70.5 | 58.6 | 81.5 | 51.2 | 74.3 | 53.6 | 86.6 | 50.6 | 51.5 | 57.3 | 79.6 |
| Frank [10] | 2 | 85.7 | 81.3 | 73.1 | 68.5 | 75.0 | 70.9 | 76.9 | 70.8 | 86.5 | 80.8 | 85.0 | 77.0 | 67.3 | 65.3 | 50.1 | 55.3 | 75.0 | 71.2 |
| Durall [23] | 2 | 79.0 | 73.9 | 63.6 | 58.8 | 67.3 | 62.1 | 69.5 | 62.9 | 65.4 | 60.8 | 99.4 | 99.4 | 67.0 | 63.0 | 50.5 | 50.2 | 70.2 | 66.4 |
| BiHPF [4] | 2 | 87.4 | 87.4 | 71.6 | 74.1 | 77.0 | 81.1 | 82.6 | 80.6 | 86.0 | 86.6 | 93.8 | 80.8 | 75.3 | 88.2 | 53.7 | 54.0 | 78.4 | 79.1 |
| FrePGAN [5] | 2 | 99.0 | 99.9 | 80.8 | 92.0 | 72.2 | 94.0 | 66.0 | 61.8 | 69.1 | 70.3 | 98.5 | 100.0 | 53.1 | 51.0 | 62.2 | 80.6 | 75.1 | 81.2 |
| LGrad [43] | 2 | 99.8 | 100.0 | 94.8 | 99.7 | 92.4 | 99.6 | 82.5 | 92.4 | 85.9 | 94.7 | 99.7 | 99.9 | 73.7 | 83.2 | 60.6 | 67.8 | 86.2 | 92.2 |
| Ours | 2 | 99.6 | 100.0 | 94.2 | 99.6 | 86.7 | 99.2 | 89.0 | 95.5 | 85.7 | 94.8 | 99.7 | 100.0 | 78.8 | 85.1 | 54.7 | 60.4 | 86.1 | 91.8 |
| Wang [6] | 4 | 91.4 | 99.4 | 43.8 | 91.4 | 76.4 | 97.5 | 52.9 | 73.3 | 72.7 | 88.6 | 63.8 | 90.8 | 63.9 | 92.2 | 51.7 | 62.3 | 67.1 | 86.9 |
| Frank [10] | 4 | 90.3 | 85.2 | 74.5 | 72.0 | 73.0 | 71.4 | 88.7 | 86.0 | 75.5 | 71.2 | 99.5 | 99.5 | 69.2 | 77.4 | 60.7 | 49.1 | 78.9 | 76.5 |
| Durall [23] | 4 | 81.1 | 74.4 | 54.4 | 52.6 | 66.8 | 62.0 | 60.1 | 56.3 | 69.0 | 64.0 | 98.1 | 98.1 | 61.9 | 57.4 | 50.2 | 50.0 | 67.7 | 64.4 |
| BiHPF [4] | 4 | 90.7 | 86.2 | 76.9 | 75.1 | 76.2 | 74.7 | 84.9 | 81.7 | 81.9 | 78.9 | 94.4 | 94.4 | 69.5 | 78.1 | 54.4 | 54.6 | 78.6 | 77.9 |
| FrePGAN [5] | 4 | 99.0 | 99.9 | 80.7 | 89.6 | 84.1 | 98.6 | 69.2 | 71.1 | 71.1 | 74.4 | 99.9 | 100.0 | 60.3 | 71.7 | 70.9 | 91.9 | 79.4 | 87.2 |
| LGrad [43] | 4 | 99.9 | 100.0 | 94.8 | 99.9 | 96.0 | 99.9 | 82.9 | 90.7 | 85.3 | 94.0 | 99.6 | 100.0 | 72.4 | 79.3 | 58.0 | 67.9 | 86.1 | 91.5 |
| Ours | 4 | 99.9 | 100.0 | 96.0 | 99.9 | 93.5 | 99.7 | 84.4 | 93.1 | 88.5 | 96.1 | 99.7 | 100.0 | 74.4 | 83.2 | 59.5 | 70.6 | 87.0 | 92.8 |
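For reference, the Acc and AP columns in the table above are the standard binary-classification metrics: accuracy of thresholded real/fake predictions, and average precision over the ranked detection scores. The sketch below illustrates both metrics in plain Python; the labels and scores are made-up placeholders, not the paper's data, and the helper names are our own.

```python
def accuracy(y_true, y_score, threshold=0.5):
    """Fraction of samples whose thresholded score matches the label (1 = fake, 0 = real)."""
    preds = [1 if s >= threshold else 0 for s in y_score]
    return sum(p == t for p, t in zip(preds, y_true)) / len(y_true)

def average_precision(y_true, y_score):
    """Step-wise AP: sum over ranked samples of precision times recall increment."""
    ranked = sorted(zip(y_score, y_true), key=lambda x: -x[0])  # highest score first
    n_pos = sum(y_true)
    tp, ap, prev_recall = 0, 0.0, 0.0
    for i, (_, label) in enumerate(ranked, start=1):
        if label == 1:
            tp += 1
            recall = tp / n_pos
            ap += (recall - prev_recall) * (tp / i)  # precision at rank i
            prev_recall = recall
    return ap

# Toy detector outputs: 1 = GAN-generated, 0 = real
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_score = [0.9, 0.8, 0.4, 0.2, 0.1, 0.6, 0.7, 0.3]
print(f"Acc: {accuracy(y_true, y_score)*100:.1f}%, "
      f"AP: {average_precision(y_true, y_score)*100:.1f}%")  # Acc: 75.0%, AP: 95.0%
```

Reported values in the table are these quantities in percent, averaged over each test set.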
| Method | FAM | AEM | Acc | AP |
|---|---|---|---|---|
| A | | | 86.1 | 91.5 |
| B | * | | 85.8 | 91.9 |
| C | | * | 86.7 | 93.2 |
| D | * | * | 87.0 | 92.8 |
p | Acc | AP |
---|---|---|
0.4 | 86.4 | 91.6 |
0.6 | 87.0 | 92.8 |
0.8 | 85.3 | 90.9 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhang, W.; Cui, S.; Zhang, Q.; Chen, B.; Zeng, H.; Zhong, Q. Hierarchical Feature Fusion and Enhanced Attention Mechanism for Robust GAN-Generated Image Detection. Mathematics 2025, 13, 1372. https://doi.org/10.3390/math13091372