Dynamic Programming-Based White Box Adversarial Attack for Deep Neural Networks
Abstract
1. Introduction
1.1. Motivation
1.2. Contributions
1.3. Organization
2. Related Work
2.1. Adversarial Attack
2.2. Gradient Based Adversarial Approach
2.3. Fixed Length Disturbance Based Adversarial Approach
2.4. Content Aware Image Resizing
2.5. Relevance
3. Methodology
3.1. Introduction
- Fixed L0 norm (or Hamming distance)—In this approach, the total number of pixels that may be perturbed is constrained, with no constraint on the strength of those perturbations.
- Fixed LK norm—In this approach, all pixels may change, subject to an overall constraint on the accumulated strength of the perturbations (both budgets are illustrated in the sketch below).
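To make the two budgets concrete, here is a minimal sketch (ours, not the paper's) that measures a hypothetical perturbation `delta` under both constraints; the L2 norm stands in for the accumulated-strength (LK) budget.

```python
import numpy as np

rng = np.random.default_rng(0)
delta = rng.normal(0.0, 1.0, size=(32, 32, 3))  # hypothetical CIFAR-10-shaped perturbation

# Fixed L0 budget: number of perturbed pixels (a pixel counts once even if
# all three of its colour channels change).
l0 = int(np.count_nonzero(np.abs(delta).sum(axis=-1)))

# Fixed-strength budget: an overall cap on accumulated perturbation
# magnitude, illustrated here with the L2 norm.
l2 = float(np.linalg.norm(delta))

print(f"pixels touched (L0): {l0}, accumulated strength (L2): {l2:.2f}")
```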
3.2. Perturbation Rule: 0–255 Approach
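The section title suggests that the perturbation rule of Equation (3) drives each selected sub-pixel to an extreme of its 8-bit range. The exact equation is not reproduced here, so the sketch below is an assumption-laden illustration: a positive loss gradient pushes the sub-pixel to 255 and a negative one to 0.

```python
import numpy as np

def perturb_0_255(pixel, grad_sign):
    """Hedged reading of the 0-255 rule: push each sub-pixel (colour
    channel) to the extreme of its 8-bit range indicated by the sign of
    the loss gradient, maximising the effect of a single pixel change."""
    out = pixel.copy()
    out[grad_sign > 0] = 255  # increasing this channel increases the loss
    out[grad_sign < 0] = 0    # decreasing this channel increases the loss
    return out

# One RGB pixel and the gradient signs of its three sub-pixels.
print(perturb_0_255(np.array([120, 64, 200]), np.array([1, -1, 1])))
# -> [255   0 255]
```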
3.3. Sub-Pixel Scoring Method
3.3.1. A Preliminary Score Measure for Pixel Selection
3.3.2. Improving the Score
Adding Gradient Magnitudes
Vectorising the Score
Combining Sub-Pixel Scores
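Since the score equations themselves (referred to as (9) below) are not reproduced here, the following sketch only illustrates the general shape such a measure could take: per-channel (sub-pixel) gradient magnitudes combined into one saliency score per pixel. The aggregation by summation is our assumption.

```python
import numpy as np

def spsm_scores(grad):
    """Hypothetical sub-pixel score: combine the gradient magnitudes of a
    pixel's colour channels into a single per-pixel saliency score.
    grad: (H, W, C) loss gradient w.r.t. the input image."""
    return np.abs(grad).sum(axis=-1)  # (H, W) matrix of pixel scores

grad = np.random.default_rng(1).normal(size=(32, 32, 3))
print(spsm_scores(grad).shape)  # (32, 32)
```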
3.4. Dynamic Programming Based SPSM Attack—DPSPSM
- Select the pixel i in the last row such that DP(i) is maximum, and make it the current pixel.
- While the top row has not been reached:
  - a. Perturb the sub-pixels of the current pixel using (3).
  - b. Move to the parent pixel P(i) in the row above.
Algorithm 1: Dynamic Programming Based SPSM Attack—DPSPSM
Input: CIFAR-10 image dataset containing images X and labels Y; a VGG16 CNN
Output: Perturbed images
- Step 1: Generate Sign of Gradient Matrix (see the sketch after this list)
- Step 2: Generate Sub-Pixel Score Matrix
- Step 3: Generate Seam Using Dynamic Programming
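Step 1 computes the sign of the loss gradient with respect to the input, as in FGSM [13]. Below is a minimal PyTorch sketch, assuming a VGG16 classifier and a CIFAR-10-sized input; the model weights and the label here are placeholders.

```python
import torch
import torch.nn.functional as F
import torchvision

model = torchvision.models.vgg16(num_classes=10).eval()  # untrained placeholder

x = torch.rand(1, 3, 32, 32, requires_grad=True)  # CIFAR-10-sized input
y = torch.tensor([3])                             # hypothetical true label

loss = F.cross_entropy(model(x), y)
loss.backward()

sign_matrix = x.grad.sign()    # entries in {-1, 0, +1}; Step 1's output
grad_magnitude = x.grad.abs()  # magnitudes feed the Step 2 score matrix
```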
3.4.1. Seam Generation Process
- Initialization: For the first row, DP(i) is simply the SPSM score of the pixel since there are no rows above it.
- Recurrence Relation: For each subsequent row, DP(i), given by (10), is the sum of the SPSM score of the i-th pixel from (9) and the maximum DP value among the three adjacent pixels in the row above (see the sketch after this list).
- Parent Tracking: P(i), using (11), stores the index of the pixel in the row above that has the maximum DP value.
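A sketch of the table-filling step under the rules above; `spsm` is the (H, W) score matrix produced in Step 2, and edge columns clamp to the two available neighbours.

```python
import numpy as np

def build_dp(spsm):
    """Fill the DP table and parent map for a vertical seam, following the
    initialization, recurrence, and parent-tracking rules above."""
    H, W = spsm.shape
    dp = spsm.astype(float).copy()        # first row: DP(i) = SPSM score
    parent = np.zeros((H, W), dtype=int)
    for r in range(1, H):
        for c in range(W):
            lo, hi = max(c - 1, 0), min(c + 1, W - 1)  # 3 neighbours, clamped
            best = lo + int(np.argmax(dp[r - 1, lo:hi + 1]))
            parent[r, c] = best                        # P(i): best parent above
            dp[r, c] += dp[r - 1, best]                # DP(i) recurrence
    return dp, parent
```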
3.4.2. Seam Selection
- Start from the Bottom: Select the pixel in the last row with the highest DP(i).
- Trace Back: Use P(i) to trace back to the top row, perturbing the sub-pixels of each selected pixel along the way (sketched after Step 4).
- Step 4: Make Changes to Seam Selection
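Putting Section 3.4.2 and Step 4 together, the sketch below traces the maximum-score seam from bottom to top and perturbs every pixel on it. It reuses `build_dp` and the gradient-sign matrix from the earlier sketches, and applies our reading of the 0-255 rule in place of Equation (3).

```python
import numpy as np

def perturb_seam(image, dp, parent, grad_sign):
    """Trace the best vertical seam from the bottom row to the top and
    apply the 0-255 rule to every pixel on it."""
    out = image.copy()
    H = dp.shape[0]
    c = int(np.argmax(dp[-1]))       # start: highest DP(i) in the last row
    for r in range(H - 1, -1, -1):   # trace back towards the top row
        out[r, c][grad_sign[r, c] > 0] = 255
        out[r, c][grad_sign[r, c] < 0] = 0
        if r > 0:
            c = int(parent[r, c])    # follow P(i) into the row above
    return out

# Composing the sketches (g is the loss gradient w.r.t. the image x):
# x_adv = perturb_seam(x, *build_dp(spsm_scores(g)), np.sign(g))
```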
4. Results
4.1. Dataset
4.2. Results
Attack Name | Success Rate (%) |
---|---|
SPSM Attack (K = 10) | 10 |
SPSM Attack (K = 20) | 21.5 |
SPSM Attack (K = 30) | 32.5 |
DPSPSM Attack (Proposed Attack) | 30.45 |
One Pixel Attack [32] | 31.40 |
FGSM [13] | 90.93 |
5. Discussion, Limitations and Future Work
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Expansion |
---|---|
CNN | Convolutional Neural Network |
DE | Differential Evolution |
DL | Deep Learning |
DNN | Deep Neural Network |
DP | Dynamic Programming |
DPSPSM | Dynamic Programming based Sub-Pixel Score Method Attack |
FGSM | Fast Gradient Sign Method |
ML | Machine Learning |
SPSM | Sub-Pixel Score Method |
References
1. Deng, L.; Yu, D. Deep Learning: Methods and Applications. Found. Trends Signal Process. 2014, 7, 197–387.
2. Hoy, M.B. Alexa, Siri, Cortana, and More: An Introduction to Voice Assistants. Med. Ref. Serv. Q. 2018, 37, 81–88.
3. Zhai, S.; Chang, K.-H.; Zhang, R.; Zhang, M. DeepIntent: Learning Attentions for Online Advertising with Recurrent Neural Networks. In KDD'16: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016.
4. Zhang, H.; Cao, X.; Ho, J.K.L.; Chow, T.W.S. Object-Level Video Advertising: An Optimization Framework. IEEE Trans. Ind. Inform. 2017, 13, 520–531.
5. Elkahky, A.M.; Song, Y.; He, X. A Multi-View Deep Learning Approach for Cross Domain User Modeling in Recommendation Systems. In Proceedings of the 24th International Conference on World Wide Web, Florence, Italy, 18–22 May 2015.
6. Cheng, H.-T.; Koc, L.; Harmsen, J.; Shaked, T.; Chandra, T.; Aradhye, H.; Anderson, G.; Corrado, G.; Chai, W.; Ispir, M.; et al. Wide & Deep Learning for Recommender Systems. In Proceedings of the 1st Workshop on Deep Learning for Recommender Systems—DLRS 2016, Boston, MA, USA, 15 September 2016.
7. Wang, H.; Wang, N.; Yeung, D.-Y. Collaborative Deep Learning for Recommender Systems. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Sydney, Australia, 10–13 August 2015.
8. Ker, J.; Wang, L.; Rao, J.; Lim, T. Deep Learning Applications in Medical Image Analysis. IEEE Access 2018, 6, 9375–9389.
9. Greenspan, H.; van Ginneken, B.; Summers, R.M. Guest Editorial Deep Learning in Medical Imaging: Overview and Future Promise of an Exciting New Technique. IEEE Trans. Med. Imaging 2016, 35, 1153–1159.
10. Suzuki, K. Overview of deep learning in medical imaging. Radiol. Phys. Technol. 2017, 10, 257–273.
11. Young, T.; Hazarika, D.; Poria, S.; Cambria, E. Recent Trends in Deep Learning Based Natural Language Processing. IEEE Comput. Intell. Mag. 2018, 13, 55–75.
12. Cambria, E.; White, B. Jumping NLP Curves: A Review of Natural Language Processing Research. IEEE Comput. Intell. Mag. 2014, 9, 48–57.
13. Goodfellow, I.; Shlens, J.; Szegedy, C. Explaining and Harnessing Adversarial Examples. arXiv 2015, arXiv:1412.6572.
14. Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; Fergus, R. Intriguing properties of neural networks. arXiv 2013, arXiv:1312.6199.
15. Yuan, X.; He, P.; Zhu, Q.; Li, X. Adversarial Examples: Attacks and Defenses for Deep Learning. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 2805–2824.
16. Athalye, A.; Carlini, N.; Wagner, D. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples. arXiv 2018, arXiv:1802.00420.
17. Kurakin, A.; Goodfellow, I.; Bengio, S. Adversarial examples in the physical world. arXiv 2017, arXiv:1607.02533.
18. Carlini, N.; Wagner, D. Towards Evaluating the Robustness of Neural Networks. In Proceedings of the 2017 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA, 22–24 May 2017; pp. 39–57. Available online: https://ieeexplore.ieee.org/document/7958570 (accessed on 15 January 2024).
19. Moosavi-Dezfooli, S.-M.; Fawzi, A.; Frossard, P. DeepFool: A simple and accurate method to fool deep neural networks. arXiv 2016, arXiv:1511.04599.
20. Wiyatno, R.; Xu, A. Maximal Jacobian-based Saliency Map Attack. arXiv 2018, arXiv:1808.07945.
21. Moosavi-Dezfooli, S.-M.; Fawzi, A.; Fawzi, O.; Frossard, P. Universal Adversarial Perturbations. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 86–94. Available online: https://arxiv.org/abs/1610.08401 (accessed on 15 January 2024).
22. Kurakin, A.; Goodfellow, I.; Bengio, S. Adversarial Machine Learning at Scale. arXiv 2017, arXiv:1611.01236.
23. Liu, Y.; Chen, X.; Liu, C.; Song, D. Delving into Transferable Adversarial Examples and Black-box Attacks. arXiv 2017, arXiv:1611.02770.
24. Papernot, N.; McDaniel, P.; Goodfellow, I. Transferability in Machine Learning: From Phenomena to Black-Box Attacks using Adversarial Samples. arXiv 2016, arXiv:1605.07277.
25. Tramèr, F.; Kurakin, A.; Papernot, N.; Goodfellow, I.; Boneh, D.; McDaniel, P. Ensemble Adversarial Training: Attacks and Defenses. arXiv 2020, arXiv:1705.07204.
26. Papernot, N.; McDaniel, P.; Wu, X.; Jha, S.; Swami, A. Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks. arXiv 2016, arXiv:1511.04508.
27. Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; Vladu, A. Towards Deep Learning Models Resistant to Adversarial Attacks. arXiv 2019, arXiv:1706.06083.
28. Papernot, N.; McDaniel, P.; Goodfellow, I.; Jha, S.; Celik, Z.B.; Swami, A. Practical Black-Box Attacks against Machine Learning. arXiv 2017, arXiv:1602.02697.
29. Xu, W.; Evans, D.; Qi, Y. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks. In Proceedings of the 2018 Network and Distributed System Security Symposium, San Diego, CA, USA, 18–21 February 2018.
30. Zantedeschi, V.; Nicolae, M.-I.; Rawat, A. Efficient Defenses Against Adversarial Attacks. arXiv 2017, arXiv:1707.06728.
31. Shamir, A.; Safran, I.; Ronen, E.; Dunkelman, O. A Simple Explanation for the Existence of Adversarial Examples with Small Hamming Distance. arXiv 2019, arXiv:1901.10861.
32. Su, J.; Vargas, D.V.; Sakurai, K. One Pixel Attack for Fooling Deep Neural Networks. IEEE Trans. Evol. Comput. 2019, 23, 828–841.
33. Avidan, S.; Shamir, A. Seam carving for content-aware image resizing. ACM Trans. Graph. 2007, 26, 10.
34. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Commun. ACM 2017, 60, 84–90.
35. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2015, arXiv:1409.1556.
36. Saon, G.; Kuo, H.-K.J.; Rennie, S.; Picheny, M. The IBM 2015 English Conversational Telephone Speech Recognition System. arXiv 2015, arXiv:1505.05899.
37. Sutskever, I.; Vinyals, O.; Le, Q.V. Sequence to Sequence Learning with Neural Networks. arXiv 2014, arXiv:1409.3215.
38. Deng, L.; Li, J.; Huang, J.-T.; Yao, K.; Yu, D.; Seide, F.; Seltzer, M.; Zweig, G.; He, X.; Williams, J.; et al. Recent advances in deep learning for speech research at Microsoft. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013. Available online: https://ieeexplore.ieee.org/abstract/document/6639345 (accessed on 15 January 2024).
39. van den Oord, A.; Dieleman, S.; Zen, H.; Simonyan, K.; Vinyals, O.; Graves, A.; Kalchbrenner, N.; Senior, A.; Kavukcuoglu, K. WaveNet: A Generative Model for Raw Audio. arXiv 2016, arXiv:1609.03499.
40. Zhang, X.-L.; Wu, J. Deep Belief Networks Based Voice Activity Detection. IEEE Trans. Audio Speech Lang. Process. 2013, 21, 697–710.
41. Zhang, X.; Zhao, J.; LeCun, Y. Character-level Convolutional Networks for Text Classification. arXiv 2016, arXiv:1509.01626.
42. Severyn, A.; Moschitti, A. Learning to Rank Short Text Pairs with Convolutional Deep Neural Networks. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval—SIGIR '15, Santiago, Chile, 9–13 August 2015.
43. Kong, X.; Yu, F.; Yao, W.; Cai, S.; Zhang, J.; Lin, H. Memristor-induced hyperchaos, multiscroll and extreme multistability in fractional-order HNN: Image encryption and FPGA implementation. Neural Netw. 2024, 171, 85–103.
44. Xiao, X.; Zhou, X.; Yang, Z.; Yu, L.; Zhang, B.; Liu, Q.; Luo, X. A comprehensive analysis of website fingerprinting defenses on Tor. Comput. Secur. 2024, 136, 103577.
45. Biggio, B.; Corona, I.; Maiorca, D.; Nelson, B.; Šrndić, N.; Laskov, P.; Giacinto, G.; Roli, F. Evasion Attacks against Machine Learning at Test Time. In Advanced Information Systems Engineering; Springer: Berlin/Heidelberg, Germany, 2013; pp. 387–402.
46. Huang, L.; Joseph, A.D.; Nelson, B.; Rubinstein, B.I.P.; Tygar, J.D. Adversarial machine learning. In Proceedings of the 4th ACM Workshop on Security and Artificial Intelligence—AISec '11, Chicago, IL, USA, 21 October 2011.
47. Eykholt, K.; Evtimov, I.; Fernandes, E.; Li, B.; Rahmati, A.; Xiao, C.; Prakash, A.; Kohno, T.; Song, D. Robust Physical-World Attacks on Deep Learning Models. arXiv 2018, arXiv:1707.08945.
48. Sharif, M.; Bhagavatula, S.; Bauer, L.; Reiter, M.K. Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security—CCS'16, Vienna, Austria, 24–28 October 2016.
49. Goswami, G.; Ratha, N.; Agarwal, A.; Singh, R.; Vatsa, M. Unravelling Robustness of Deep Learning based Face Recognition Against Adversarial Attacks. arXiv 2018, arXiv:1803.00401.
50. Athalye, A.; Engstrom, L.; Ilyas, A.; Kwok, K. Synthesizing Robust Adversarial Examples. arXiv 2018, arXiv:1707.07397.
51. Hu, W.; Tan, Y. Generating Adversarial Malware Examples for Black-Box Attacks Based on GAN. arXiv 2017, arXiv:1702.05983.
52. Rozsa, A.; Rudd, E.M.; Boult, T.E. Adversarial Diversity and Hard Positive Generation. arXiv 2016, arXiv:1605.01775.
53. Papernot, N.; McDaniel, P.; Jha, S.; Fredrikson, M.; Celik, Z.B.; Swami, A. The Limitations of Deep Learning in Adversarial Settings. arXiv 2015, arXiv:1511.07528.
54. Simonyan, K.; Vedaldi, A.; Zisserman, A. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. arXiv 2013, arXiv:1312.6034.
55. Krizhevsky, A. Learning Multiple Layers of Features from Tiny Images. Technical Report, University of Toronto, Toronto, ON, Canada, 2009. Available online: https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf (accessed on 15 January 2024).
56. Snoek, J.; Larochelle, H.; Adams, R.P. Practical Bayesian Optimization of Machine Learning Algorithms. arXiv 2012, arXiv:1206.2944.
Aspect | SPSM | One Pixel Attack | DPSPSM | FGSM |
---|---|---|---|---|
Methodology | Sub-Pixel Score Method | Differential Evolution (DE) | Dynamic Programming (DP) and gradient change | Gradient-based approach |
Perturbation Strategy | Modifies top-K pixels based on SPSM score | Modifies a single pixel using iterative optimization | Modifies a vertical seam using a structured approach | Modifies all pixels using gradient information |
Computational Overhead | Moderate due to selection and modification of top-K pixels | High due to numerous iterations required by DE | Lower due to direct computation using DP and gradients | Low due to single forward pass |
Efficiency | Efficient, but may scatter perturbations, which reduces overall efficiency | Computationally intensive, requiring multiple iterations | Computationally efficient, leveraging gradient information directly | Highly efficient, leveraging gradient information directly |