An Adversarial Example Generation Algorithm Based on DE-C&W
Abstract
1. Introduction
- A generic adversarial example generation algorithm based on DE-C&W is proposed. It uses the DE algorithm to preprocess the data and the Adam algorithm to optimize the C&W loss function, improving both the universality and the success rate of adversarial example attacks;
- The improved DE algorithm preprocesses the initial examples, reducing the initial search dimension of the C&W algorithm and improving the accuracy of locating effective attack points, which enhances query efficiency while preserving the attack success rate;
- The scaling factor is adaptively adjusted based on the two individuals that generate the differential vector, and the fitness function adaptively controls the crossover probability factor of the DE algorithm, improving the global optimization ability and accelerating convergence. A new mutation strategy that balances global exploration and local exploitation further improves the accuracy of locating effective attack points, enhancing both the effectiveness and the efficiency of adversarial example attacks;
- The C&W loss function is redefined and its gradient computation improved, so that black-box attacks can be mounted by querying only the probability values of the model's labels, which improves the portability of the adversarial examples. The loss is further optimized with the Adam algorithm, which is less susceptible to local minima and converges faster, speeding up the search for the optimal perturbation;
- Comparative experiments show that the algorithm reduces the average number of queries and the attack cost while maintaining the success rate of adversarial example attacks.
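The redefined loss and query-only gradient described in the contributions can be sketched roughly as follows. This is an illustrative sketch, not the paper's exact formulation: `cw_blackbox_loss` uses the standard C&W margin applied to class probabilities, and `estimate_gradient` is a plain symmetric finite-difference estimator; both names are ours.

```python
import numpy as np

def cw_blackbox_loss(probs, target, kappa=0.0):
    """C&W-style margin loss computed from class probabilities only:
    drive the target-class probability above every other class by kappa."""
    other = np.max(np.delete(probs, target))
    return max(other - probs[target], -kappa)

def estimate_gradient(loss_fn, x, h=1e-3):
    """Symmetric finite-difference gradient estimate, usable when the model
    can only be queried for its outputs (no backpropagation access)."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e.flat[i] = h
        grad.flat[i] = (loss_fn(x + e) - loss_fn(x - e)) / (2 * h)
    return grad
```

When the target class already dominates, the margin is clipped at −kappa, so the loss stops rewarding further perturbation; this is what lets the optimizer trade off distortion against confidence.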
2. Related Work
3. Adversarial Example Generation Based on DE-C&W
3.1. Optimization of C&W Attack
3.2. Adam Optimization Algorithm
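For reference, the standard Adam update (Kingma and Ba) used to optimize the loss can be written as a minimal sketch; hyperparameter defaults are the usual ones from the Adam paper, not values from this article.

```python
import numpy as np

def adam_step(theta, grad, state, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; `state` carries the running moment estimates."""
    state["t"] += 1
    state["m"] = beta1 * state["m"] + (1 - beta1) * grad       # 1st moment (mean)
    state["v"] = beta2 * state["v"] + (1 - beta2) * grad ** 2  # 2nd moment (uncentered)
    m_hat = state["m"] / (1 - beta1 ** state["t"])             # bias correction
    v_hat = state["v"] / (1 - beta2 ** state["t"])
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps)

# toy run: minimize f(x) = x^2 starting from x = 5 (gradient is 2x)
theta = np.array([5.0])
state = {"m": np.zeros(1), "v": np.zeros(1), "t": 0}
for _ in range(2000):
    theta = adam_step(theta, 2 * theta, state, lr=0.1)
```

The bias-corrected, per-coordinate step size is what makes Adam less sensitive to poorly scaled gradients than plain gradient descent, which is the property the paper exploits when searching for the optimal perturbation.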
3.3. Differential Evolution Algorithm
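The classic DE/rand/1/bin scheme (Storn and Price) that this section builds on can be sketched as follows; the population size, F, and CR values are illustrative defaults, not the paper's experimental settings.

```python
import numpy as np

def differential_evolution(fitness, bounds, pop_size=20, F=0.5, CR=0.9,
                           generations=100, seed=0):
    """Classic DE/rand/1/bin: mutation with one scaled difference vector,
    binomial crossover, and greedy one-to-one selection."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([fitness(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # three distinct individuals, all different from the target i
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i],
                                    3, replace=False)
            mutant = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True    # take at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = fitness(trial)
            if f_trial <= fit[i]:              # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = int(np.argmin(fit))
    return pop[best], fit[best]

# toy run: minimize the sphere function on [-5, 5]^3
x_best, f_best = differential_evolution(lambda x: float(np.sum(x ** 2)),
                                        [(-5, 5)] * 3)
```

Because DE needs only fitness evaluations, never gradients, it fits the black-box attack setting where the model can only be queried.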
3.4. Improved Differential Evolution Algorithm
- Optimization of scaling factor F
- Optimization of crossover probability CR
- Improvement in mutation strategy
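One plausible reading of the first two adaptations listed above, as a sketch: the exact control formulas are defined in the paper, and `adaptive_F` and `adaptive_CR` below are illustrative stand-ins that only capture the stated intent (F driven by the two difference-vector individuals, CR driven by fitness).

```python
def adaptive_F(f_r2, f_r3, f_min=0.4, f_max=0.9, eps=1e-12):
    """Scaling factor driven by the two individuals forming the difference
    vector: similar fitness values give a small F (fine local steps),
    dissimilar values give a large F (bigger exploratory steps)."""
    ratio = abs(f_r2 - f_r3) / (abs(f_r2) + abs(f_r3) + eps)
    return f_min + (f_max - f_min) * min(ratio, 1.0)

def adaptive_CR(f_i, f_best, f_worst, cr_min=0.1, cr_max=0.9, eps=1e-12):
    """Fitness-controlled crossover probability: individuals far from the
    current best accept more mutant genes, while near-best individuals are
    perturbed conservatively (assumes minimization)."""
    ratio = (f_i - f_best) / (f_worst - f_best + eps)
    return cr_min + (cr_max - cr_min) * ratio
```

Either function would replace the fixed F or CR inside the DE inner loop, recomputed per individual and per generation.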
3.5. A Generic Algorithm for Adversarial Example Generation Based on DE-C&W
4. Experiments and Analysis of Results
4.1. Experimental Environment
4.2. Experimental Parameter Settings
4.3. Experimental Results and Analysis
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Jiang, W.; He, Z.; Zhan, J.; Pan, W.; Adhikari, D. Research progress and challenges on application-driven adversarial examples: A survey. ACM Trans. Cyber-Physical Syst. 2021, 5, 39.
- Serban, A.; Poll, E.; Visser, J. Adversarial examples on object recognition: A comprehensive survey. ACM Comput. Surv. 2020, 53, 1–38.
- Zhang, J.; Li, C. Adversarial examples: Opportunities and challenges. IEEE Trans. Neural Netw. Learn. Syst. 2019, 31, 2578–2593.
- Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; Fergus, R. Intriguing properties of neural networks. arXiv 2013, arXiv:1312.6199.
- Goodfellow, I.J.; Shlens, J.; Szegedy, C. Explaining and harnessing adversarial examples. arXiv 2014, arXiv:1412.6572.
- Mao, X.; Chen, Y.; Wang, S.; Su, H.; He, Y.; Xue, H. Composite adversarial attacks. In Proceedings of the AAAI Conference on Artificial Intelligence, Online, 2–9 February 2021; pp. 8884–8892.
- Cui, W.; Li, X.; Huang, J.; Wang, W.; Wang, S.; Chen, J. Substitute model generation for black-box adversarial attack based on knowledge distillation. In Proceedings of the 2020 IEEE International Conference on Image Processing, Online, 25–28 October 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 648–652.
- Zhang, C.; Benz, P.; Karjauv, A.; Kweon, I.S. Data-free universal adversarial perturbation and black-box attack. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Online, 11–17 October 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 7868–7877.
- Kurakin, A.; Goodfellow, I.J.; Bengio, S. Adversarial examples in the physical world. In Artificial Intelligence Safety and Security; Chapman and Hall: London, UK; CRC: Boca Raton, FL, USA, 2018; pp. 99–112.
- Moosavi-Dezfooli, S.M.; Fawzi, A.; Frossard, P. DeepFool: A simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2574–2582.
- Xiao, C.; Li, B.; Zhu, J.Y.; He, W.; Liu, M.; Song, D. Generating adversarial examples with adversarial networks. arXiv 2018, arXiv:1801.02610.
- Brendel, W.; Rauber, J.; Bethge, M. Decision-based adversarial attacks: Reliable attacks against black-box machine learning models. arXiv 2017, arXiv:1712.04248.
- Chen, P.Y.; Zhang, H.; Sharma, Y.; Yi, J.; Hsieh, C.-J. ZOO: Zeroth Order Optimization Based Black-Box Attacks to Deep Neural Networks Without Training Substitute Models; ACM: New York, NY, USA, 2017; pp. 15–26.
- Carlini, N.; Wagner, D. Towards evaluating the robustness of neural networks. In Proceedings of the 2017 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA, 22–26 May 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 39–57.
- Tu, C.C.; Ting, P.; Chen, P.Y.; Liu, S.; Zhang, H.; Yi, J.; Hsieh, C.-J.; Cheng, S.-M. AutoZOOM: Autoencoder-based zeroth order optimization method for attacking black-box neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 742–744.
- Su, J.; Vargas, D.V.; Sakurai, K. One pixel attack for fooling deep neural networks. IEEE Trans. Evol. Comput. 2019, 23, 828–841.
- Da, Q.; Zhang, G.Y.; Li, S.Z.; Liu, Z.C.; Wang, W.S. Gradient-free adversarial attack algorithm based on differential evolution. Int. J. Bio-Inspired Comput. 2023, 22, 217–226.
- Xie, C.; Zhang, Z.; Zhou, Y.; Bai, S.; Wang, J.; Ren, Z.; Yuille, A. Improving transferability of adversarial examples with input diversity. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 2730–2739.
- Huang, L.F.; Zhuang, W.Z.; Liao, Y.X.; Liu, N. Black-box adversarial attack method based on evolution strategy and attention mechanism. J. Softw. 2021, 32, 3512–3529.
- Hu, W.; Tan, Y. Generating adversarial malware examples for black-box attacks based on GAN. Data Min. Big Data 2022, 1745, 409–423.
- Yang, B.; Zhang, H.; Li, Z.; Zhang, Y.; Xu, K.; Wang, J. Adversarial example generation with AdaBelief optimizer and crop invariance. Appl. Intell. 2023, 53, 2332–2347.
- Liu, Y.; Chen, X.; Liu, C.; Song, D. Delving into transferable adversarial examples and black-box attacks. arXiv 2016, arXiv:1611.02770.
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
- Jais, I.K.M.; Ismail, A.R.; Nisa, S.Q. Adam optimization algorithm for wide and deep neural network. Knowl. Eng. Data Sci. 2019, 2, 41–46.
- Zhang, Z. Improved Adam optimizer for deep neural networks. In Proceedings of the 2018 IEEE/ACM 26th International Symposium on Quality of Service (IWQoS), Banff, AB, Canada, 4–6 June 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–2.
- Storn, R.; Price, K. Differential evolution: A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359.
- Das, S.; Suganthan, P.N. Differential evolution: A survey of the state-of-the-art. IEEE Trans. Evol. Comput. 2010, 15, 4–31.
- Zaharie, D. Critical values for the control parameters of differential evolution algorithms. In Proceedings of the MENDEL 2002, 8th International Conference on Soft Computing, Brno, Czech Republic, 21 May 2002; pp. 62–67.
- Gamperle, R. A parameter study for differential evolution. In Advances in Intelligent Systems, Fuzzy Systems, Evolutionary Computation; Harvard University Press: Cambridge, MA, USA, 2002; pp. 293–298.
- Ali, M.M.; Törn, A. Optimization of carbon and silicon cluster geometry for Tersoff potential using differential evolution. In Optimization in Computational Chemistry and Molecular Biology: Local and Global Approaches; Springer: Berlin/Heidelberg, Germany, 2000; pp. 287–300.
- Gao, X.; Jiang, B.; Zhang, S. On the information-adaptive variants of the ADMM: An iteration complexity perspective. J. Sci. Comput. 2018, 76, 327–363.
- Sun, G.; Yang, B.; Yang, Z.; Xu, G. An adaptive differential evolution with combined strategy for global numerical optimization. Soft Comput. 2020, 24, 6277–6296.
- Fan, Q.; Yan, X. Self-adaptive differential evolution algorithm with zoning evolution of control parameters and adaptive mutation strategies. IEEE Trans. Cybern. 2015, 46, 219–232.
- Elsayed, S.M.; Sarker, R.A.; Essam, D.L. An improved self-adaptive differential evolution algorithm for optimization problems. IEEE Trans. Ind. Inform. 2012, 9, 89–99.
- Ghosh, A.; Das, S.; Chowdhury, A.; Giri, R. An improved differential evolution algorithm with fitness-based adaptation of the control parameters. Inf. Sci. 2011, 181, 3749–3765.
- Das, S.; Abraham, A.; Chakraborty, U.K.; Konar, A. Differential evolution using a neighborhood-based mutation operator. IEEE Trans. Evol. Comput. 2009, 13, 526–553.
- Fan, H.Y.; Lampinen, J. A trigonometric mutation operation to differential evolution. J. Glob. Optim. 2003, 27, 105–129.
- Shafiq, M.; Gu, Z. Deep residual learning for image recognition: A survey. Appl. Sci. 2022, 12, 8972.
- Reyad, M.; Sarhan, A.; Arafa, M. A modified Adam algorithm for deep neural network optimization. Neural Comput. Appl. 2023, 35, 17095–17112.
Attack Type | Attack Methods | Accuracy/Advantages | Query Frequency | Limitations
---|---|---|---|---
White Box | FGSM | High success rate; benchmark for performance | No queries needed | Single-step, less precise perturbations.
 | BIM | Higher accuracy due to iterative perturbations | No queries needed | Computationally intensive due to iterations.
 | DeepFool | Generates smaller, effective perturbations | No queries needed | Manual perturbation setting; limited to single models.
 | AdvGAN | Generates adversarial examples via GAN training | No queries needed | Requires GAN training; resource-heavy.
Black Box | Boundary Attack | Efficient adversarial example generation | High | High computational cost; requires many queries.
 | ZOO | Gradient-free; broad applicability | High | Slow optimization; no gradient access.
 | AutoZOOM | Improved optimization via autoencoder | Medium | Computationally intensive; relies on autoencoder modeling.
 | One Pixel Attack | Minimal perturbation (1 pixel modified) | High | Minimal perturbation; limited applicability.
 | Gradient-Free DE | Reduced query count with differential evolution | Low | Requires fitness function design; elimination mechanism complexity.
 | Momentum Iterative | Enhanced transferability and success rate | Medium | Combines multiple methods; increased complexity.
 | Covariance Matrix | Reduced query count; adaptive strategy | High | Requires adaptive evolution strategy; depends on an attention mechanism.
 | MalGAN | Near-zero detection rate for malware | High (specific) | Specific to malware; ineffective against retraining defenses.
 | ABI-FGM&CIM | High transferability and attack success rate | High | Requires AdaBelief optimizer tuning; crop-invariance adaptation needed.
 | Ours (DE-C&W) | High transferability and attack success rate | Low | Not yet validated on more complex deep learning models or larger datasets.
Layer Name | Network Structure
---|---
Conv 1 | kernel size: (5,5); out channels: 6; stride: 1
Max pooling 1 | kernel size: (2,2); stride: 2
Conv 2 | kernel size: (5,5); out channels: 16; stride: 1
Max pooling 2 | kernel size: (2,2); stride: 2
Conv 3 | kernel size: (5,5); out channels: 120; stride: 1
Fully connected 1 | 120 × 84
Fully connected 2 | 84 × 10; softmax
Layer Name | Network Structure
---|---
Conv 1 | kernel size: (7,7); out channels: 64; stride: 2
Max pooling 1 | kernel size: (3,3); stride: 2
Bottleneck 1 | output size: 32 × 32 × 16
Bottleneck 2 | output size: 16 × 16 × 32
Bottleneck 3 | output size: 8 × 8 × 64
Average pooling | output size: 1 × 1
Fully connected | 1000-d fc; softmax
Model | CIFAR-10 | MNIST
---|---|---
LeNet | 91.37% | 98.29%
ResNet | 92.15% | 98.05%
Custom | 96.38% | 98.12%
Dataset | Model | Algorithm | Success Rate (%) | Average Time (s) | Average Number of Queries
---|---|---|---|---|---
CIFAR-10 | LeNet | ZOO | 96.76 | >600 | >1000
 | | One Pixel | 84.10 | 0.59 | 324.49
 | | DE-C&W | 94.31 | 1.74 | 69.31
 | ResNet | ZOO | 89.54 | >600 | >1000
 | | One Pixel | 80.91 | 0.70 | 347.65
 | | DE-C&W | 90.77 | 1.99 | 71.10
 | Custom | ZOO | 94.93 | >600 | >1000
 | | One Pixel | 83.28 | 0.62 | 327.10
 | | DE-C&W | 92.95 | 1.98 | 69.62
MNIST | LeNet | ZOO | 93.26 | 272.72 | >1000
 | | One Pixel | 89.24 | 0.23 | 227.36
 | | DE-C&W | 97.28 | 1.02 | 81.98
 | ResNet | ZOO | 90.02 | 300.01 | >1000
 | | One Pixel | 84.71 | 0.37 | 241.66
 | | DE-C&W | 93.70 | 1.10 | 84.13
 | Custom | ZOO | 92.26 | 285.80 | >1000
 | | One Pixel | 86.28 | 0.24 | 234.11
 | | DE-C&W | 95.02 | 1.79 | 83.30
Dataset | Model | ZOO | One Pixel | DE-C&W | ZOO | One Pixel | DE-C&W
---|---|---|---|---|---|---|---
CIFAR-10 | LeNet | 3.06 | 1.35 | 1.46 | 2.55 | 2.58 | 1.98
 | ResNet | | | | | |
 | Custom | | | | | |
MNIST | LeNet | 1.35 | 1.88 | 1.48 | | |
 | ResNet | | | | | |
 | Custom | | | | | |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhang, R.; Wu, Q.; Wang, Y. An Adversarial Example Generation Algorithm Based on DE-C&W. Electronics 2025, 14, 1274. https://doi.org/10.3390/electronics14071274