AI in Cybersecurity

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (15 May 2023) | Viewed by 47119

Special Issue Editors


Guest Editor
Department of Electrical Engineering and Computer Science, Texas A&M University-Kingsville, Kingsville, TX 78363, USA
Interests: computer vision; machine learning; artificial intelligence; pattern recognition; biomedical engineering; biomedical signal and image processing; bioinformatics

Guest Editor
Department of Electrical Engineering and Computer Science, Texas A&M University-Kingsville, Kingsville, TX 78363, USA
Interests: bioinformatics; computational biology; machine learning; pattern recognition; data mining and analysis

Guest Editor
Department of Electrical Engineering and Computer Science, Texas A&M University-Kingsville, Kingsville, TX 78363, USA
Interests: parallel distributed systems; networking; storage systems; cluster and grid computing; real-time systems; fault-tolerant computing; performance evaluation; dynamic resource management; network security

Guest Editor
Department of Electrical Engineering and Computer Science, Texas A&M University-Kingsville, Kingsville, TX 78363, USA
Interests: object-oriented programming; mobile development

Special Issue Information

Dear Colleagues,

Cyber defense and security is now an essential field of computer science due to the ever-increasing threats and attacks on computer infrastructure. Machine learning and artificial intelligence methods are applicable to the detection of cyber threats, including malware analysis, intrusion detection, and injection attacks. Cyber defense also spans applications such as firewall configuration, packet sniffing, network analysis, and network traffic monitoring. This Special Issue welcomes papers on any of the abovementioned or related topics that use or develop machine learning and artificial intelligence algorithms.

Dr. Ayush Goyal
Dr. Avdesh Mishra
Dr. Mais Nijim
Dr. David Hicks
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • cybersecurity
  • cyber defense
  • cyber intelligence
  • machine learning
  • artificial intelligence
  • intrusion detection
  • malware analysis

Published Papers (12 papers)


Research

Jump to: Review

13 pages, 2824 KiB  
Article
Energy Efficient Load-Balancing Mechanism in Integrated IoT–Fog–Cloud Environment
by Meenu Vijarania, Swati Gupta, Akshat Agrawal, Matthew O. Adigun, Sunday Adeola Ajagbe and Joseph Bamidele Awotunde
Electronics 2023, 12(11), 2543; https://doi.org/10.3390/electronics12112543 - 05 Jun 2023
Cited by 4 | Viewed by 1646
Abstract
The Internet of Things (IoT) and cloud computing have revolutionized the technological era and have impacted our lives to a great extent. The traditional cloud model faces a variety of complications with the colossal growth of IoT and cloud applications, such as network instability, reduced bandwidth, and high latency. Fog computing, which brings IoT devices and cloud computing closer together, is utilized to address these problems. Hence, to enhance system, process, and data performance, fog nodes are deployed to disperse the load on cloud servers, which helps reduce delay time and network traffic. Firstly, in this article, we highlight the various IoT–fog–cloud models for distributing the load uniformly. Secondly, an efficient solution is provided, using fog computing, for balancing load among fog devices. A performance evaluation of the proposed mechanism against existing techniques shows that the proposed strategy improves performance, energy consumption, throughput, and resource utilization while reducing response time. Full article
(This article belongs to the Special Issue AI in Cybersecurity)
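As a toy illustration of the load-dispersion idea in this abstract (not the authors' actual mechanism, which also accounts for energy consumption and response time), a greedy least-loaded assignment of tasks to fog nodes might look like:

```python
def assign_tasks(tasks, nodes):
    """Greedily assign each task to the fog node with the lowest current load.

    tasks: list of task costs; nodes: dict mapping node name -> current load
    (mutated in place). Returns a mapping of task index -> node name.
    Illustrative sketch only.
    """
    placement = {}
    for i, cost in enumerate(tasks):
        target = min(nodes, key=nodes.get)  # pick the least-loaded node
        nodes[target] += cost               # account for the new task's load
        placement[i] = target
    return placement
```

A scheduler like this spreads load evenly across fog nodes; the paper's mechanism layers energy-awareness on top of this basic balancing step.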

18 pages, 2184 KiB  
Article
Artificial Intelligence-Based Cyber Security in the Context of Industry 4.0—A Survey
by Antonio João Gonçalves de Azambuja, Christian Plesker, Klaus Schützer, Reiner Anderl, Benjamin Schleich and Vilson Rosa Almeida
Electronics 2023, 12(8), 1920; https://doi.org/10.3390/electronics12081920 - 19 Apr 2023
Cited by 20 | Viewed by 16297
Abstract
The increase in cyber-attacks impacts the performance of organizations in the industrial sector, exploiting the vulnerabilities of networked machines. The increasing digitization and technologies present in the context of Industry 4.0 have led to a rise in investments in innovation and automation. However, there are risks associated with this digital transformation, particularly regarding cyber security. Targeted cyber-attacks are constantly changing and improving their attack strategies, with a focus on applying artificial intelligence in the execution process. Artificial intelligence-based cyber-attacks can be used in conjunction with conventional technologies, generating exponential damage in organizations in Industry 4.0. The increasing reliance on networked information technology has increased the cyber-attack surface. In this sense, studies aimed at understanding the actions of cyber criminals, in order to develop knowledge for cyber security measures, are essential. This paper presents a systematic literature review to identify publications on artificial intelligence-based cyber-attacks and to analyze them to derive cyber security measures. The goal of this study is to use literature analysis to explore the impact of this new threat, aiming to provide the research community with insights to develop defenses against potential future threats. The results can be used to guide the analysis of cyber-attacks supported by artificial intelligence. Full article
(This article belongs to the Special Issue AI in Cybersecurity)

18 pages, 802 KiB  
Article
Codeformer: A GNN-Nested Transformer Model for Binary Code Similarity Detection
by Guangming Liu, Xin Zhou, Jianmin Pang, Feng Yue, Wenfu Liu and Junchao Wang
Electronics 2023, 12(7), 1722; https://doi.org/10.3390/electronics12071722 - 04 Apr 2023
Cited by 2 | Viewed by 2693
Abstract
Binary code similarity detection calculates the code similarity of a pair of binary functions or files through a defined computation and decision procedure. It is a fundamental task in the field of computer binary security. Traditional similarity detection methods usually use graph matching algorithms, but these methods have poor performance and unsatisfactory effects. Recently, graph neural networks have become an effective method for analyzing graph embeddings in natural language processing. Although these methods are effective, the existing methods still do not sufficiently learn the information in the binary code. To solve this problem, we propose Codeformer, an iterative model of a graph neural network (GNN)-nested Transformer. The model uses a Transformer to obtain an embedding vector for each basic block and uses the GNN to update the embedding vector of each basic block of the control flow graph (CFG). Codeformer iteratively executes basic block embedding to learn abundant global information and finally uses the GNN to aggregate all the basic blocks of a function. We conducted experiments on the OpenSSL, Clamav and Curl datasets. The evaluation results show that our method outperforms the state-of-the-art models. Full article
(This article belongs to the Special Issue AI in Cybersecurity)

17 pages, 2842 KiB  
Article
Integrated Feature-Based Network Intrusion Detection System Using Incremental Feature Generation
by Taehoon Kim and Wooguil Pak
Electronics 2023, 12(7), 1657; https://doi.org/10.3390/electronics12071657 - 31 Mar 2023
Viewed by 1152
Abstract
Machine learning (ML)-based network intrusion detection systems (NIDSs) depend entirely on the performance of machine learning models. Therefore, many studies have been conducted to improve the performance of ML models. Nevertheless, relatively few studies have focused on the feature set, which significantly affects the performance of ML models. In addition, features are generated by analyzing data collected after the session ends, which requires a significant amount of memory and a long processing time. To solve this problem, this study presents a new session feature set to improve the existing NIDSs. Current session-feature-based NIDSs are largely classified into NIDSs using a single-host feature set and NIDSs using a multi-host feature set. This research merges two different session feature sets into an integrated feature set, which is used to train an ML model for the NIDS. In addition, an incremental feature generation approach is proposed to eliminate the delay between the session end time and the integrated feature creation time. The improved performance of the NIDS using integrated features was confirmed through experiments. Compared to a NIDS based on ML models using existing single-host feature sets and multi-host feature sets, the NIDS with the proposed integrated feature set improves the detection rate by 4.15% and 5.9% on average, respectively. Full article
(This article belongs to the Special Issue AI in Cybersecurity)
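The incremental feature generation described in this abstract — updating features as packets arrive so they are ready the instant a session ends, rather than in a post-hoc pass over collected data — can be sketched as follows. The specific features here (packet count, byte count, mean packet size) are illustrative assumptions, not the paper's actual feature set:

```python
class SessionFeatures:
    """Incrementally maintained per-session features (toy example).

    Each arriving packet updates running statistics in O(1), so the
    feature vector is available immediately at session end with no
    delay and no per-packet buffering.
    """

    def __init__(self):
        self.packets = 0
        self.total_bytes = 0

    def update(self, packet_len):
        """Fold one packet's length into the running statistics."""
        self.packets += 1
        self.total_bytes += packet_len

    @property
    def mean_packet_size(self):
        return self.total_bytes / self.packets if self.packets else 0.0
```

The design choice is the usual streaming trade-off: keeping only running aggregates bounds memory per session, at the cost of restricting features to those computable incrementally.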

22 pages, 5339 KiB  
Article
A Semantic Learning-Based SQL Injection Attack Detection Technology
by Dongzhe Lu, Jinlong Fei and Long Liu
Electronics 2023, 12(6), 1344; https://doi.org/10.3390/electronics12061344 - 12 Mar 2023
Cited by 6 | Viewed by 4444
Abstract
Over the years, injection vulnerabilities have been at the top of the Open Web Application Security Project Top 10 and are one of the most damaging and widely exploited types of vulnerabilities against web applications. Structured Query Language (SQL) injection attack detection remains a challenging problem due to the heterogeneity of attack payloads, the diversity of attack methods, and the variety of attack patterns. It has been demonstrated that no single model can guarantee adequate security to protect web applications, and it is crucial to develop an efficient and accurate model for SQL injection attack detection. In this paper, we propose synBERT, a semantic learning-based detection model that explicitly embeds sentence-level semantic information from SQL statements into an embedding vector. The model learns representations that can be mapped to SQL syntax tree structures, as evidenced by visualization work. We gathered a wide range of datasets to assess the classification performance of synBERT, and the results show that our approach outperforms previously proposed models. Even on brand-new, previously unseen data, accuracy can reach 90% or higher, indicating that the model has good generalization performance. Full article
(This article belongs to the Special Issue AI in Cybersecurity)

17 pages, 1538 KiB  
Article
A Machine Learning-Based Intrusion Detection System for IoT Electric Vehicle Charging Stations (EVCSs)
by Mohamed ElKashlan, Mahmoud Said Elsayed, Anca Delia Jurcut and Marianne Azer
Electronics 2023, 12(4), 1044; https://doi.org/10.3390/electronics12041044 - 20 Feb 2023
Cited by 14 | Viewed by 3687
Abstract
The demand for electric vehicles (EVs) is growing rapidly. This requires an ecosystem that meets the user’s needs while preserving security. The rich data obtained from electric vehicle stations are powered by the Internet of Things (IoT) ecosystem. This is achieved through the use of electric vehicle charging station management systems (EVCSMSs). However, the risks associated with cyber-attacks on IoT systems are also increasing at the same pace. To help find malicious traffic, intrusion detection systems (IDSs) play a vital role in traditional IT systems. This paper proposes a classifier algorithm for detecting malicious traffic in the IoT environment using machine learning. The proposed system uses a real IoT dataset derived from real IoT traffic. Multiple classification algorithms are evaluated. Results were obtained on both binary and multiclass traffic models. Using the proposed algorithm in the IoT-based IDS engine that serves electric vehicle charging stations will bring stability and eliminate a substantial number of cyberattacks that may disturb day-to-day activities. Full article
(This article belongs to the Special Issue AI in Cybersecurity)

18 pages, 2610 KiB  
Article
Anomaly Detection of Zero-Day Attacks Based on CNN and Regularization Techniques
by Belal Ibrahim Hairab, Heba K. Aslan, Mahmoud Said Elsayed, Anca D. Jurcut and Marianne A. Azer
Electronics 2023, 12(3), 573; https://doi.org/10.3390/electronics12030573 - 23 Jan 2023
Cited by 2 | Viewed by 2351
Abstract
The rapid development of cyberattacks in the field of the Internet of Things (IoT) introduces new security challenges regarding zero-day attacks. Intrusion-detection systems (IDS) are usually trained on specific attacks to protect the IoT application, but attacks that are still unknown to the IDS (i.e., zero-day attacks) remain a challenge and a concern for users’ data privacy and security in those applications. Anomaly-detection methods usually depend on machine learning (ML)-based methods. Under the ML umbrella are classical ML-based methods, which are known to have low prediction quality and detection rates on data that they have not been trained on. Deep learning (DL)-based methods, especially convolutional neural networks (CNNs) with regularization methods, address this issue, giving better prediction quality on unknown data and avoiding overfitting. In this paper, we evaluate and show that CNNs have a better ability than classical ML to detect zero-day attacks generated from non-bot attackers. We use classical ML, plain CNN, and regularized CNN classifiers (L1- and L2-regularized). The training data consist of normal traffic data and DDoS attack data, as DDoS is the most common attack in the IoT. To give the full picture of this evaluation, the testing phase of these classifiers includes two scenarios, each with a different attack distribution: one with backdoor attacks and the other with scanning attacks. The testing results prove that the regularized CNN classifiers still perform better than the classical ML-based methods in detecting zero-day IoT attacks. Full article
(This article belongs to the Special Issue AI in Cybersecurity)
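The L1/L2 regularization this abstract refers to amounts to adding a weight penalty to the training loss, which shrinks weights toward zero and curbs overfitting. A framework-agnostic sketch (the function name and interface are hypothetical, not the paper's code):

```python
import numpy as np

def regularized_loss(base_loss, weights, l1=0.0, l2=0.0):
    """Add L1 and/or L2 penalties to a base loss value.

    l1 scales the sum of absolute weights (encourages sparsity);
    l2 scales the sum of squared weights (encourages small weights).
    Illustrative only.
    """
    w = np.asarray(weights, dtype=float)
    return base_loss + l1 * np.abs(w).sum() + l2 * (w ** 2).sum()
```

In a CNN framework the same effect is obtained by attaching per-layer weight regularizers, but the underlying objective is this penalized loss.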

20 pages, 3057 KiB  
Article
A Novel Virus Capable of Intelligent Program Infection through Software Framework Function Recognition
by Wang Guo, Hui Shu, Yeming Gu, Yuyao Huang, Hao Zhao and Yang Li
Electronics 2023, 12(2), 460; https://doi.org/10.3390/electronics12020460 - 16 Jan 2023
Viewed by 1286
Abstract
Viruses are one of the main threats to the security of today’s cyberspace. With the continuous development of virus and artificial intelligence technologies in recent years, the intelligentization of virus technology has become a trend, and studying and combating intelligent viruses is of urgent significance. In this paper, we design a new type of confirmatory (proof-of-concept) virus, from the attacker’s perspective, that can intelligently infect software frameworks. We target structured software and use BCSD (binary code similarity detection) to identify the framework. By incorporating a software framework functional structure recognition model into the virus, the virus can intelligently recognize software framework functions in executable files. This paper evaluates BCSD models suitable for a virus to carry and constructs a lightweight BCSD model with a knowledge distillation technique. We also propose a software framework functional structure recognition algorithm, which effectively reduces the recognition precision’s dependence on the BCSD model. Finally, this study discusses future research directions for intelligent viruses. This paper aims to provide a reference for research on detection technology for possible intelligent viruses, so that focused and effective defense strategies can be proposed and the technical system of malware detection can be reinforced. Full article
(This article belongs to the Special Issue AI in Cybersecurity)

13 pages, 569 KiB  
Article
DFSGraph: Data Flow Semantic Model for Intermediate Representation Programs Based on Graph Network
by Ke Tang, Zheng Shan, Chunyan Zhang, Lianqiu Xu, Meng Qiao and Fudong Liu
Electronics 2022, 11(19), 3230; https://doi.org/10.3390/electronics11193230 - 08 Oct 2022
Viewed by 1499
Abstract
With the improvement of software copyright protection awareness, code obfuscation technology plays a crucial role in protecting key code segments. As obfuscation technology becomes more complex and diverse, it has spawned a large number of malware variants, which can easily evade detection by anti-virus software. Malicious code detection mainly depends on binary code similarity analysis. However, existing software analysis technologies struggle to deal with increasingly complex obfuscation technologies. To solve this problem, this paper proposes a new obfuscation-resilient program analysis method based on the data flow transformation relationships of the intermediate representation and a graph network model. In our approach, we first construct the data transformation graph (DTG) based on LLVM IR. Then, we design a novel intermediate language representation model based on graph networks, named DFSGraph, to learn the data flow semantics from the DTG. DFSGraph can detect the similarity of obfuscated code by extracting the semantic information of the program data flow without deobfuscation. Extensive experiments prove that our approach is more accurate than existing deobfuscation tools when searching for similar functions in obfuscated code. Full article
(This article belongs to the Special Issue AI in Cybersecurity)

17 pages, 1663 KiB  
Article
Position Distribution Matters: A Graph-Based Binary Function Similarity Analysis Method
by Zulie Pan, Taiyan Wang, Lu Yu and Yintong Yan
Electronics 2022, 11(15), 2446; https://doi.org/10.3390/electronics11152446 - 05 Aug 2022
Viewed by 2130
Abstract
Binary function similarity analysis evaluates the similarity of functions at the binary level to aid program analysis, and is popular in many fields, such as vulnerability detection, binary clone detection, and malware detection. Graph-based methods have relatively good performance in practice, but currently they cannot capture similarity in terms of the graph position distribution and lose information in graph processing, which leads to low accuracy. This paper presents PDM, a graph-based method that increases the accuracy of binary function similarity detection by considering position distribution information. First, an enhanced Attributed Control Flow Graph (ACFG+) of a function is constructed from its control flow graph, assisted by instruction embedding and data flow analysis. Then, the ACFG+ is fed to a graph embedding model using the CapsGNN and DiffPool mechanisms, which enriches the information available in graph processing by considering the position distribution. The model outputs the corresponding embedding vector, and the similarity between different function embeddings is calculated using the cosine distance. Similarity detection is completed in a Siamese network. Experiments show that, compared with VulSeeker and PalmTree+VulSeeker, PDM consistently obtains three and two times higher accuracy, respectively, in binary function similarity detection and can detect up to six times more results in vulnerability detection. When compared with state-of-the-art tools, PDM has Top-5, Top-10, and Top-20 ranking results comparable to those of BinDiff, Diaphora, and Kam1n0, and significant advantages in the Top-50, Top-100, and Top-200 detection results. Full article
(This article belongs to the Special Issue AI in Cybersecurity)
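The cosine comparison between function embeddings that this abstract (and the Codeformer entry above) relies on is a standard operation; a minimal sketch:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors.

    Returns 1.0 for vectors pointing in the same direction and 0.0 for
    orthogonal ones; cosine distance is 1 minus this value.
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Because the measure ignores vector magnitude, two binary functions whose embeddings differ only in scale are still judged identical, which is why it is the usual choice for comparing learned embeddings.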

16 pages, 3394 KiB  
Article
A Watermarking Optimization Method Based on Matrix Decomposition and DWT for Multi-Size Images
by Lei Wang and Huichao Ji
Electronics 2022, 11(13), 2027; https://doi.org/10.3390/electronics11132027 - 28 Jun 2022
Cited by 16 | Viewed by 1743
Abstract
Image watermarking is a key technology for copyright protection, and better balancing the invisibility and robustness of algorithms is a challenge. To tackle this challenge, a watermarking optimization method based on matrix decomposition and discrete wavelet transform (DWT) for multi-size images is proposed. The DWT, Hessenberg matrix decomposition (HMD), singular value decomposition (SVD), particle swarm optimization (PSO), Arnold transform and logistic mapping are combined for the first time in a single image watermarking optimization algorithm. The multi-level decomposition of the DWT adapts the method to multi-size host images; the Arnold transform, logistic mapping, HMD and SVD enhance security and robustness; and PSO optimizes the scaling factor to balance invisibility and robustness. Simulation results show that the PSNRs of the proposed method are higher than 44.9 dB without attacks and the NCs are higher than 0.98 under various attacks. Compared with existing works, the proposed method shows high robustness against attacks such as noise, filtering and JPEG compression; in particular, its NC values are at least 0.44% higher under noise attacks. Full article
(This article belongs to the Special Issue AI in Cybersecurity)
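The PSNR and NC figures quoted in this abstract are the standard metrics for watermark invisibility and robustness, respectively. They can be computed as follows (a sketch assuming 8-bit images; note that the exact NC definition varies between papers):

```python
import numpy as np

def psnr(original, distorted, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-shaped images.

    Higher is better; identical images give infinity.
    """
    o = np.asarray(original, dtype=float)
    d = np.asarray(distorted, dtype=float)
    mse = np.mean((o - d) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def nc(w1, w2):
    """Normalized correlation between embedded and extracted watermarks.

    1.0 means the watermark survived the attack perfectly.
    """
    a = np.asarray(w1, dtype=float).ravel()
    b = np.asarray(w2, dtype=float).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

PSNR above roughly 40 dB is conventionally taken as visually imperceptible distortion, which is why the paper's 44.9 dB figure indicates good invisibility.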

Review

Jump to: Research

24 pages, 956 KiB  
Review
DDoS Attack Detection in IoT-Based Networks Using Machine Learning Models: A Survey and Research Directions
by Amal A. Alahmadi, Malak Aljabri, Fahd Alhaidari, Danyah J. Alharthi, Ghadi E. Rayani, Leena A. Marghalani, Ohoud B. Alotaibi and Shurooq A. Bajandouh
Electronics 2023, 12(14), 3103; https://doi.org/10.3390/electronics12143103 - 17 Jul 2023
Cited by 4 | Viewed by 5865
Abstract
With the emergence of technology, the use of IoT (Internet of Things) devices in people’s lives is increasing. Such devices can benefit the average individual, who does not necessarily have technical knowledge. The IoT can be found in home security and alarm systems, smart fridges, smart televisions, and more. Although small Internet-connected devices have numerous benefits and can help enhance people’s efficiency, they can also pose a security threat. Malicious actors often attempt to find new ways to exploit resources, and IoT devices are a perfect candidate for such exploitation due to the huge volume of active devices. This is particularly true for Distributed Denial of Service (DDoS) attacks, which exploit a massive number of devices, such as IoT devices, to act as bots and send fraudulent requests to services, thus obstructing them. To identify and detect whether such attacks have occurred in a network, there must be a reliable detection mechanism based on adequate techniques. The most common technique for this purpose is artificial intelligence, which involves the use of Machine Learning (ML) and Deep Learning (DL) to help identify cyberattacks. ML models involve algorithms that use structured data to learn, predict outcomes, and identify patterns. The goal of this paper is to review selected studies and publications relevant to the topic of DDoS detection in IoT-based networks using machine learning. It offers a wealth of references for academics looking to define or expand the scope of their research in this area. Full article
(This article belongs to the Special Issue AI in Cybersecurity)
