
AI Technology for Cybersecurity and IoT Applications

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Internet of Things".

Deadline for manuscript submissions: 30 April 2024

Special Issue Editors


Prof. Dr. Jun Wu
Guest Editor
Graduate School of Information, Production and Systems, Waseda University, Shinjuku City 169-8050, Japan
Interests: Internet of Things; artificial intelligence; data privacy; blockchains; 5G/6G

Dr. Qianqian Pan
Guest Editor
Department of Systems Innovation, The University of Tokyo, Tokyo 113-0033, Japan
Interests: network intelligence; Internet of Things; next-generation communication; privacy preservation

Special Issue Information

Dear Colleagues,

Artificial intelligence (AI) technology is emerging in the cybersecurity and Internet of Things (IoT) fields with great promise. The continuous emergence of novel, stealthy, and complex cyber-attacks, such as advanced persistent threats (APTs), fuels the demand for intelligent discovery and prevention of cybersecurity threats. To counter these complex threats, AI technology for cybersecurity encompasses the construction of dynamic cyber-attack models, intelligent defense, and fine-grained privacy preservation. AI technologies for IoT, in turn, can be clustered into intelligent environment sensing, edge computing, and communications. AI technology for IoT supports the intelligent management and efficient control of heterogeneous IoT sensors during data collection, as well as edge computing for decentralized big data. In addition, novel IoT communications (e.g., terahertz in 6G) are envisioned to be implemented and deployed through AI-enabled allocation and scheduling technologies. Despite recent advances in AI technology, its applications to both cybersecurity and IoT remain open problems that call for timely study.

This Special Issue focuses on the new challenges, technologies, solutions, and applications in the field of AI technology for cybersecurity and IoT. Potential topics include, but are not limited to:

  1. AI architectures and models for cyber-attack/threat sensing, classification, and detection;
  2. Cybersecurity defense theory and methodology inspired by AI;
  3. AI-driven software and hardware security technologies;
  4. Privacy and learning protection for AI models;
  5. AI-driven blockchain technologies;
  6. Intelligent sensing paradigm design in IoT;
  7. AI-based organization, orchestration, and optimization for IoT;
  8. Edge computing framework for IoT using AI technology;
  9. Resource allocation and scheduling for IoT based on AI;
  10. AI algorithms for terahertz and configurable communications in IoT.

Prof. Dr. Jun Wu
Dr. Qianqian Pan
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • hardware security
  • software security
  • data security
  • privacy preserving
  • blockchain
  • Internet of Things
  • edge computing and intelligence
  • terahertz
  • smart sensing

Published Papers (5 papers)

Research

17 pages, 9322 KiB  
Article
Research on Fault Detection by Flow Sequence for Industrial Internet of Things in Sewage Treatment Plant Case
by Dongfeng Lei, Liang Zhao and Dengfeng Chen
Sensors 2024, 24(7), 2210; https://doi.org/10.3390/s24072210 - 29 Mar 2024
Abstract
Classifying the flow subsequences of sensor networks is an effective approach to fault detection in the Industrial Internet of Things (IIoT). Traditional fault detection algorithms identify exceptions from a single abnormal data point and pay no attention to factors such as electromagnetic interference, network delay, and sensor sampling delay. This paper focuses on fault detection based on consecutive abnormal points. We propose a fault detection algorithm composed of a sequence-state generation module based on unsupervised learning (SSGBUL) and an integrated encoding sequence classification (IESC) module. First, we built a network module based on unsupervised learning to encode the flow sequences of the different network cards in the IIoT gateway and then combined the multiple code sequences into one integrated sequence. Next, we classified the integrated sequence by comparing it with the encoded fault types. The results obtained from three IIoT datasets of a sewage treatment plant show that the accuracy of the SSGBUL–IESC algorithm exceeds 90% with a subsequence length of 10, which is significantly higher than the accuracies of the dynamic time warping (DTW) algorithm and the time series forest (TSF) algorithm. The proposed algorithm meets the classification requirements of fault detection for the IIoT.
(This article belongs to the Special Issue AI Technology for Cybersecurity and IoT Applications)
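
To make the pipeline described in the abstract concrete, here is a minimal, hypothetical sketch of its overall shape: discrete encoding of per-network-card flow subsequences, concatenation into an integrated sequence, and nearest-match classification against encoded fault types. The threshold encoder, the Hamming-distance matcher, and all names are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: encode per-card flow subsequences into discrete
# states, concatenate them, and match against encoded fault types.
import numpy as np

def encode_flow(subseq: np.ndarray, thresholds=(0.5,)) -> list[int]:
    """Map a raw flow subsequence to discrete states (a crude stand-in for
    the unsupervised SSGBUL encoder)."""
    return [int(np.digitize(v, thresholds)) for v in subseq]

def classify(integrated: list[int], fault_codes: dict[str, list[int]]) -> str:
    """Assign the fault type whose code sequence is closest in Hamming distance."""
    return min(fault_codes,
               key=lambda k: sum(a != b for a, b in zip(integrated, fault_codes[k])))

# Two network cards, subsequence length 10 (the length used in the evaluation)
card_a = np.array([0.1, 0.2, 0.9, 0.8, 0.7, 0.1, 0.0, 0.9, 0.8, 0.2])
card_b = np.array([0.0, 0.1, 0.8, 0.9, 0.6, 0.2, 0.1, 0.7, 0.9, 0.1])
integrated = encode_flow(card_a) + encode_flow(card_b)  # integrated code sequence

fault_codes = {"normal": [0] * 20,
               "pump_fault": [0, 0, 1, 1, 1, 0, 0, 1, 1, 0] * 2}  # hypothetical
print(classify(integrated, fault_codes))  # -> "pump_fault"
```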

21 pages, 2841 KiB  
Article
Detection of Malicious Threats Exploiting Clock-Gating Hardware Using Machine Learning
by Nuri Alperen Kose, Razaq Jinad, Amar Rasheed, Narasimha Shashidhar, Mohamed Baza and Hani Alshahrani
Sensors 2024, 24(3), 983; https://doi.org/10.3390/s24030983 - 02 Feb 2024
Abstract
Embedded system technologies are increasingly being incorporated into manufacturing, smart grids, industrial control systems, and transportation systems. However, the vast majority of today's embedded platforms lack built-in security features, which makes such systems highly vulnerable to a wide range of cyber-attacks. Specifically, they are vulnerable to malware injection code that targets the power distribution system of an ARM Cortex-M-based microcontroller chipset (ARM, Cambridge, UK). Through hardware exploitation of the clock-gating distribution system, an attacker can disable or activate various subsystems on the chip, compromising the reliability of the system during normal operation. This paper proposes an Intrusion Detection System (IDS) capable of detecting clock-gating malware deployed on ARM Cortex-M-based embedded systems. To enhance the robustness and effectiveness of our approach, we fully implemented, tested, and compared six IDSs, each employing a different methodology: K-Nearest Neighbors, Random Forest, Logistic Regression, Decision Tree, Naive Bayes, and Stochastic Gradient Descent. Each of these IDSs was designed to identify and categorize various variants of clock-gating malware deployed on the system, and we analyzed their detection accuracy against various types of clock-gating malware injection code. Power consumption data collected from the chipset during normal operation and malware code injection attacks were used for model training and validation. Our simulation results show that the proposed IDSs, particularly those based on K-Nearest Neighbors and Logistic Regression, achieve high detection rates, with some reaching 0.99. These results underscore the effectiveness of our IDSs in protecting ARM Cortex-M-based embedded systems against clock-gating malware.
(This article belongs to the Special Issue AI Technology for Cybersecurity and IoT Applications)
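
As a rough illustration of the comparison described in the abstract, the following sketch trains the six named classifier families on synthetic stand-ins for per-window power-consumption features. The feature layout, labels, and train/test split are assumptions, not the authors' dataset or pipeline.

```python
# Hedged sketch: the six IDS classifier families compared on synthetic data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))     # stand-in power features per time window
y = rng.integers(0, 4, size=1000)  # e.g., normal + three malware variants

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "K-Nearest Neighbors": KNeighborsClassifier(),
    "Random Forest": RandomForestClassifier(random_state=0),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Naive Bayes": GaussianNB(),
    "Stochastic Gradient Descent": SGDClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: accuracy = {accuracy_score(y_te, model.predict(X_te)):.3f}")
```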

22 pages, 1017 KiB  
Article
On-Demand Centralized Resource Allocation for IoT Applications: AI-Enabled Benchmark
by Ran Zhang, Lei Liu, Mianxiong Dong and Kaoru Ota
Sensors 2024, 24(3), 980; https://doi.org/10.3390/s24030980 - 02 Feb 2024
Abstract
The development of emerging information technologies, such as the Internet of Things (IoT), edge computing, and blockchain, has triggered a significant increase in IoT application services and data volume. Ensuring satisfactory service quality for diverse IoT application services with limited network resources has become an urgent issue. Generalized processor sharing (GPS), a central resource-scheduling mechanism for guiding differentiated services, is a key technology for implementing on-demand resource allocation. Performance prediction for GPS is a crucial step that aims to capture the actually allocated resources using various queue metrics. Some methods (mainly analytical ones) have attempted to establish upper and lower bounds or approximate solutions. Recently, artificial intelligence (AI) methods, such as deep learning, have been designed to assess performance under self-similar traffic. However, the methods proposed in the literature have been developed for specific traffic scenarios with predefined constraints, limiting their real-world applicability. Furthermore, the absence of a benchmark in the literature leads to unfair performance prediction comparisons. To address these drawbacks, an AI-enabled performance benchmark with comprehensive traffic-oriented experiments showcasing the performance of existing methods is presented. Specifically, three types of methods are employed: traditional approximate analytical methods, traditional machine learning-based methods, and deep learning-based methods. Various traffic flows with different settings are then collected, and intricate experimental analyses at both the feature and method levels are conducted under different traffic conditions. Finally, insights from the experimental analysis that may benefit the future performance prediction of GPS are derived.
(This article belongs to the Special Issue AI Technology for Cybersecurity and IoT Applications)
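
For context (background from the classical fluid model of Parekh and Gallager, not notation taken from this paper): GPS assigns each flow i a weight φ_i, and any flow continuously backlogged over an interval receives at least its weighted share of service, which yields a guaranteed rate on a link of capacity C:

```latex
% Classical GPS fluid-model guarantee. W_i(\tau,t): service received by flow i
% over (\tau,t] while continuously backlogged; \phi_i: its weight; C: capacity.
\[
  \frac{W_i(\tau,t)}{W_j(\tau,t)} \;\ge\; \frac{\phi_i}{\phi_j}
  \quad\Longrightarrow\quad
  g_i \;=\; \frac{\phi_i}{\sum_j \phi_j}\, C .
\]
```

Predicting the performance actually delivered beyond this guarantee, under bursty or self-similar traffic, is the problem the benchmark targets.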

23 pages, 3362 KiB  
Article
CTSF: An Intrusion Detection Framework for Industrial Internet Based on Enhanced Feature Extraction and Decision Optimization Approach
by Guangzhao Chai, Shiming Li, Yu Yang, Guohui Zhou and Yuhe Wang
Sensors 2023, 23(21), 8793; https://doi.org/10.3390/s23218793 - 28 Oct 2023
Abstract
The traditional Transformer model primarily employs a self-attention mechanism to capture global feature relationships, potentially overlooking local relationships within sequences and thus weakening its modeling of local features. The Support Vector Machine (SVM), for its part, often requires the joint use of feature selection algorithms or model optimization methods to achieve maximum classification accuracy. Addressing the issues in both models, this paper introduces a novel network framework, CTSF, specifically designed for Industrial Internet intrusion detection. CTSF effectively addresses the limitations of traditional Transformers in extracting local features while compensating for the weaknesses of SVM. The framework comprises a pre-training component and a decision-making component. The pre-training section consists of a CNN and an enhanced Transformer, designed to capture both local and global features from input data while reducing feature dimensionality. The improved Transformer also decreases certain training parameters within CTSF, making it more suitable for the Industrial Internet environment. The classification section is composed of an SVM, which receives initial classification data from the pre-training phase and determines the optimal decision boundary. The proposed framework is evaluated on an imbalanced subset of the X-IIoTID dataset, which represents Industrial Internet data. Experimental results demonstrate that, with the SVM using both "linear" and "rbf" kernel functions, CTSF achieves an overall accuracy of 0.98875 and effectively discriminates minor classes, showcasing the superiority of the framework.
(This article belongs to the Special Issue AI Technology for Cybersecurity and IoT Applications)
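
The two-stage shape of CTSF, a dimensionality-reducing front end feeding an SVM decision stage, can be sketched as follows. PCA stands in for the paper's CNN plus enhanced Transformer, and the data are synthetic; only the "linear"/"rbf" kernel comparison mirrors the abstract.

```python
# Hedged sketch of a pre-training stage (placeholder) feeding an SVM.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 64))    # stand-in for X-IIoTID-style records
y = rng.integers(0, 5, size=2000)  # multi-class intrusion labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for kernel in ("linear", "rbf"):   # the two kernels evaluated in the paper
    clf = make_pipeline(StandardScaler(), PCA(n_components=16), SVC(kernel=kernel))
    clf.fit(X_tr, y_tr)            # SVM draws the decision boundary on reduced features
    print(f"{kernel}: accuracy = {clf.score(X_te, y_te):.3f}")
```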

11 pages, 731 KiB  
Article
MCW: A Generalizable Deepfake Detection Method for Few-Shot Learning
by Lei Guan, Fan Liu, Ru Zhang, Jianyi Liu and Yifan Tang
Sensors 2023, 23(21), 8763; https://doi.org/10.3390/s23218763 - 27 Oct 2023
Cited by 1
Abstract
With the development of deepfake technology, deepfake detection has received widespread attention. Although some deepfake forensics techniques have been proposed, they remain very difficult to apply in real-world scenarios, owing to differences among deepfake technologies and the compression or editing of videos during propagation. Considering the sample imbalance inherent in few-shot deepfake detection, we propose a multi-feature channel domain-weighted framework based on meta-learning (MCW). To obtain outstanding cross-database detection performance, the framework improves a meta-learning network in two ways: it strengthens the model's feature extraction by combining the RGB-domain and frequency-domain information of the image, and it strengthens the model's generalization by assigning meta weights to channels of the feature map. The MCW framework addresses the poor detection performance and insufficient resistance to data compression that existing algorithms exhibit on samples generated by unknown algorithms. Experiments were set in zero-shot and few-shot scenarios, simulating real-world deepfake detection environments, and nine detection algorithms were selected for comparison. The results show that the MCW framework outperforms the other algorithms in both cross-algorithm and cross-dataset detection. The framework generalizes and resists compression with low-quality training images and across different generation-algorithm scenarios, and it shows better fine-tuning potential in few-shot learning scenarios.
(This article belongs to the Special Issue AI Technology for Cybersecurity and IoT Applications)
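
A toy illustration of the abstract's joint RGB-domain and frequency-domain feature idea (not the MCW meta-learning network itself): concatenate simple pixel statistics with FFT-magnitude band statistics. All choices below are assumptions made for illustration.

```python
# Hedged sketch: a joint RGB + frequency-domain descriptor for one image.
import numpy as np

def rgb_freq_features(img: np.ndarray) -> np.ndarray:
    """img: HxWx3 float array in [0, 1]. Returns a small joint feature vector."""
    rgb_means = img.reshape(-1, 3).mean(axis=0)            # RGB domain: channel means
    gray = img.mean(axis=2)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))  # centered magnitude spectrum
    h, w = spectrum.shape
    low_band = np.zeros_like(spectrum, dtype=bool)
    low_band[h // 4: 3 * h // 4, w // 4: 3 * w // 4] = True  # center = low frequencies
    return np.concatenate([rgb_means,
                           [spectrum[low_band].mean(),       # low-frequency energy
                            spectrum[~low_band].mean()]])    # high-frequency energy

img = np.random.default_rng(0).random((64, 64, 3))
print(rgb_freq_features(img))  # 5-dimensional descriptor to feed a detector
```

Artifacts of many generation pipelines concentrate in high-frequency bands, which is the usual motivation for adding the frequency view alongside RGB.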
