Using Artificial Intelligence to Improve Security in the Software Development Cycle: Techniques, Challenges and Opportunities

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Algorithms for Multidisciplinary Applications".

Deadline for manuscript submissions: 31 July 2024

Special Issue Editors


Guest Editor
Instituto Superior de Engenharia de Lisboa, INESC-ID Lisboa, R. Conselheiro Emídio Navarro 1, 1959-007 Lisboa, Portugal
Interests: cloud computing; edge computing; computer security; programming languages; resource virtualization; distributed algorithms

Guest Editor
Computer Science Department, ISEL-Lisbon School of Engineering, 1049-001 Lisboa, Portugal
Interests: data analytics; spatio-temporal analysis; visualisation; machine learning

Guest Editor
Department of Electronic Engineering, Telecommunications and Computers at the ISEL, Lisbon School of Engineering, 1049-001 Lisboa, Portugal
Interests: bioinformatics; biomechanics; computational mathematics; numerical analysis and soft tissues

Special Issue Information

Dear Colleagues,

Software must be robust, trustworthy, reliable, and secure. However, traditional approaches to achieving these characteristics rely mostly on manual and time-consuming techniques, such as code reviews and penetration testing. These approaches frequently fail to address the complexity and scale of modern software systems adequately, resulting in limited coverage and late detection of problems and vulnerabilities. Artificial intelligence (AI) and machine learning (ML) algorithms, in particular generative artificial intelligence, can play a significant role in software development: generating robust and reliable code, designing tests and attack vectors, identifying vulnerabilities and flaws arising from service composition, or treating machine learning models themselves as code.

This Special Issue invites researchers and practitioners to contribute their original research, methodologies, and case studies on the application of AI and ML algorithms in enhancing security. We particularly encourage submissions in, but not limited to, the following broad areas:

  • AI and ML to improve security in software development in general, including software supply chains, addressing vulnerability detection, the identification of malicious code, and the prevention of supply chain attacks. This Special Issue aims to address the pressing concern of software supply chain security.
  • The use of generative adversarial networks (GANs) in the software development cycle, contributing novel GAN architectures, training methods, and evaluation metrics for improving the security and quality of software in general.
  • The application of code generation and language models to tasks such as automated code review, vulnerability detection, refactoring, and program synthesis, as well as leveraging language models for security tasks, reflecting the significant advances in deep learning in this area.
  • Challenges and opportunities of integrating machine learning operations (MLOps) practices into the software development cycle. Topics include model versioning, reproducibility, scalability, and continuous integration and deployment, with a focus on enhancing security in effectively managing ML models.

Dr. José Simão
Dr. Nuno Datia
Dr. Matilde Pato
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • software development cycle
  • artificial intelligence
  • software security
  • software supply chain security
  • generative adversarial networks
  • language models
  • machine learning operations

Published Papers (2 papers)


Research

19 pages, 4613 KiB  
Article
Impacting Robustness in Deep Learning-Based NIDS through Poisoning Attacks
by Shahad Alahmed, Qutaiba Alasad, Jiann-Shiun Yuan and Mohammed Alawad
Algorithms 2024, 17(4), 155; https://doi.org/10.3390/a17040155 - 11 Apr 2024
Abstract
The rapid expansion and pervasive reach of the internet in recent years have raised concerns about evolving and adaptable online threats, particularly with the extensive integration of Machine Learning (ML) systems into our daily routines. These systems are increasingly becoming targets of malicious attacks that seek to distort their functionality through poisoning. Such attacks aim to warp the intended operations of these services, deviating them from their true purpose. Poisoning renders systems susceptible to unauthorized access, enabling illicit users to masquerade as legitimate ones and compromising the integrity of smart technology-based systems like Network Intrusion Detection Systems (NIDSs). It is therefore necessary to continue studying the resilience of deep learning network systems under poisoning attacks, specifically attacks that interfere with the integrity of data conveyed over networks. This paper explores the resilience of deep learning (DL)-based NIDSs against untethered white-box attacks. More specifically, it introduces a poisoning attack technique designed especially for deep learning, which injects various amounts of altered instances into training datasets at diverse rates and then investigates the attack's influence on model performance. We observe that increasing injection rates (from 1% to 50%) and random amplified distribution only slightly affected the overall performance of the system, as measured by accuracy (0.93) at the end of the experiments. However, the other measures, such as PPV (0.082), FPR (0.29), and MSE (0.67), indicate that the data manipulation poisoning attacks do impact the deep learning model. These findings shed light on the vulnerability of DL-based NIDSs under poisoning attacks and emphasize the importance of securing such systems against these sophisticated threats, for which defense techniques should be considered. Our analysis, supported by experimental results, shows that the generated poisoned data significantly impact model performance and are hard to detect.
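
The poisoning setup described in this abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the random forest stands in for their DL-based NIDS, the synthetic features stand in for network-flow data, and the poison_labels helper is a hypothetical name. The sketch only shows the general pattern of flipping a chosen fraction of training labels and measuring the effect on accuracy and false positive rate.

# Minimal illustration of data poisoning at a chosen injection rate
# (a sketch of the general idea, not the paper's implementation).
import numpy as np
from sklearn.ensemble import RandomForestClassifier  # stand-in for a DL-based NIDS
from sklearn.metrics import accuracy_score, confusion_matrix

def poison_labels(y_train, rate, rng):
    """Flip the labels of a fraction `rate` of the training samples."""
    y_poisoned = y_train.copy()
    n_poison = int(rate * len(y_train))
    idx = rng.choice(len(y_train), size=n_poison, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # binary benign/attack labels
    return y_poisoned

rng = np.random.default_rng(0)
# Synthetic stand-in for network-flow features and benign/attack labels.
X = rng.normal(size=(2000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
X_train, X_test, y_train, y_test = X[:1500], X[1500:], y[:1500], y[1500:]

for rate in (0.0, 0.01, 0.10, 0.50):  # injection rates spanning the 1%-50% range studied
    model = RandomForestClassifier(random_state=0)
    model.fit(X_train, poison_labels(y_train, rate, rng))
    pred = model.predict(X_test)
    tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
    fpr = fp / (fp + tn)
    print(f"rate={rate:.2f}  acc={accuracy_score(y_test, pred):.3f}  FPR={fpr:.3f}")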

15 pages, 1395 KiB  
Article
Evolutionary Approaches for Adversarial Attacks on Neural Source Code Classifiers
by Valeria Mercuri, Martina Saletta and Claudio Ferretti
Algorithms 2023, 16(10), 478; https://doi.org/10.3390/a16100478 - 12 Oct 2023
Abstract
As the prevalence and sophistication of cyber threats continue to increase, the development of robust vulnerability detection techniques becomes paramount in ensuring the security of computer systems. Neural models have demonstrated significant potential in identifying vulnerabilities; however, they are not immune to adversarial attacks. This paper presents a set of evolutionary techniques for generating adversarial instances to enhance the resilience of neural models used for vulnerability detection. The proposed approaches leverage an evolution strategy (ES) algorithm that uses the output of the neural network to be deceived as its fitness function. Starting from existing instances, the algorithm evolves individuals, represented by source code snippets, by applying semantic-preserving transformations while using the fitness to invert their original classification. This iterative process facilitates the generation of adversarial instances that can mislead the vulnerability detection models while maintaining the original behavior of the source code. The significance of this research lies in its contribution to the field of cybersecurity by addressing the need for enhanced resilience against adversarial attacks in vulnerability detection models. The evolutionary approach provides a systematic framework for generating adversarial instances, allowing for the identification and mitigation of weaknesses in AI classifiers.
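
The evolutionary loop summarised in this abstract can be sketched in a few lines. This is a simplified illustration rather than the paper's code: the two transformations and the classifier_score function are hypothetical stand-ins for the semantic-preserving mutations and for the neural classifier whose output serves as the fitness, and a plain (1+lambda) evolution strategy replaces the authors' specific ES variant.

# Sketch of an evolution strategy that mutates a code snippet with
# semantic-preserving edits until a (mock) classifier changes its decision.
import random

def rename_identifier(snippet):
    """Semantic-preserving rename of a (hypothetical) identifier."""
    return snippet.replace("buf", f"tmp_{random.randint(0, 99)}")

def insert_dead_code(snippet):
    """Append a no-op statement that does not change behavior."""
    return snippet + "\n    _unused = 0;"

TRANSFORMS = [rename_identifier, insert_dead_code]

def classifier_score(snippet):
    """Stand-in for the 'vulnerable' probability returned by the target model.
    In the paper this is the output of the neural classifier under attack."""
    return max(0.0, 1.0 - 0.05 * snippet.count("tmp_") - 0.02 * snippet.count("_unused"))

def evolve(snippet, generations=50, offspring=8):
    parent, parent_score = snippet, classifier_score(snippet)
    for _ in range(generations):
        children = [random.choice(TRANSFORMS)(parent) for _ in range(offspring)]
        best = min(children, key=classifier_score)  # minimize the 'vulnerable' score
        if classifier_score(best) < parent_score:
            parent, parent_score = best, classifier_score(best)
        if parent_score < 0.5:  # decision flipped, adversarial instance found
            break
    return parent, parent_score

seed = "int copy(char *buf, char *src) {\n    strcpy(buf, src);\n    return 0;\n}"
adversarial, score = evolve(seed)
print(score)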
