Search Results (131)

Search Parameters:
Keywords = IoT traffic types

20 pages, 1257 KB  
Article
Detecting AI-Generated Network Traffic Using Transformer–MLP Ensemble
by Byeongchan Kim, Abhishek Chaudhary and Sunoh Choi
Appl. Sci. 2025, 15(21), 11338; https://doi.org/10.3390/app152111338 - 22 Oct 2025
Viewed by 322
Abstract
The rapid growth of generative artificial intelligence (AI) has enabled diverse applications but also introduced new attack techniques. Similar to deepfake media, generative AI can be exploited to create AI-generated traffic that evades existing intrusion detection systems (IDSs). This paper proposes a Dual Detection System to detect such synthetic network traffic in the Message Queuing Telemetry Transport (MQTT) protocol widely used in Internet of Things (IoT) environments. The system operates in two stages: (i) primary filtering with a Long Short-Term Memory (LSTM) model to detect malicious traffic, and (ii) secondary verification with a Transformer–MLP ensemble to identify AI-generated traffic. Experimental results show that the proposed method achieves an average accuracy of 99.1 ± 0.6% across different traffic types (normal, malicious, and AI-generated), with nearly 100% detection of synthetic traffic. These findings demonstrate that the proposed dual detection system effectively overcomes the limitations of single-model approaches and significantly enhances detection performance. Full article
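The two-stage decision flow described in this abstract can be sketched as follows. The scoring functions stand in for the LSTM and Transformer–MLP models, whose details the abstract does not give, so all names and thresholds here are illustrative assumptions:

```python
# Hypothetical sketch of the paper's two-stage flow: a primary filter flags
# malicious traffic, and a secondary ensemble checks flagged flows for
# AI-generated traffic. The stand-in scoring functions replace the LSTM and
# Transformer-MLP models, which are not specified beyond the abstract.

def primary_filter(flow, malicious_score, threshold=0.5):
    """Stage 1: stand-in for the LSTM malicious-traffic filter."""
    return malicious_score(flow) >= threshold

def secondary_verifier(flow, ensemble_scores, threshold=0.5):
    """Stage 2: stand-in for the Transformer-MLP ensemble; averages
    member scores to decide whether flagged traffic is AI-generated."""
    avg = sum(score(flow) for score in ensemble_scores) / len(ensemble_scores)
    return avg >= threshold

def classify(flow, malicious_score, ensemble_scores):
    """Route a flow through both stages, returning one of the three
    traffic classes named in the abstract."""
    if not primary_filter(flow, malicious_score):
        return "normal"
    if secondary_verifier(flow, ensemble_scores):
        return "ai-generated"
    return "malicious"
```

A usage example with toy scorers: a flow with a low maliciousness score is passed through as normal; a flagged flow is split into malicious versus AI-generated by the ensemble average.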

23 pages, 3141 KB  
Article
Machine Learning-Assisted Cryptographic Security: A Novel ECC-ANN Framework for MQTT-Based IoT Device Communication
by Kalimu Karimunda, Jean de Dieu Marcel Ufitikirezi, Roman Bumbálek, Tomáš Zoubek, Petr Bartoš, Radim Kuneš, Sandra Nicole Umurungi, Anozie Chukwunyere, Mutagisha Norbelt and Gao Bo
Computation 2025, 13(10), 227; https://doi.org/10.3390/computation13100227 - 26 Sep 2025
Viewed by 585
Abstract
The Internet of Things (IoT) has emerged as a transformative technology, enabling ubiquitous connectivity between devices and revolutionizing traditional lifestyles through smart automation. As IoT systems proliferate, securing device-to-device communication and server–client data exchange has become crucial. This paper presents a novel security framework that integrates elliptic curve cryptography (ECC) with artificial neural networks (ANNs) to enhance the Message Queuing Telemetry Transport (MQTT) protocol. Our study evaluated multiple machine learning algorithms, with the ANN demonstrating superior performance in anomaly detection and classification. The hybrid approach not only encrypts communications but also employs the optimized ANN model to detect and classify anomalous traffic patterns. The proposed model demonstrates robust security features, successfully identifying and categorizing various attack types with 90.38% accuracy while maintaining message confidentiality through ECC encryption. Notably, the framework retains the lightweight characteristics essential for IoT devices, making it especially relevant for resource-constrained environments. To our knowledge, this is the first implementation of an integrated ECC-ANN approach for securing MQTT-based IoT communications, offering a promising solution for next-generation IoT security requirements. Full article
(This article belongs to the Section Computational Engineering)

17 pages, 1133 KB  
Article
Spatio-Temporal Recursive Method for Traffic Flow Interpolation
by Gang Wang, Yuhao Mao, Xu Liu, Haohan Liang and Keqiang Li
Symmetry 2025, 17(9), 1577; https://doi.org/10.3390/sym17091577 - 21 Sep 2025
Viewed by 425
Abstract
Traffic data sequence imputation plays a crucial role in maintaining the integrity and reliability of transportation analytics and decision-making systems. With the proliferation of sensor technologies and IoT devices, traffic data often contain missing values due to sensor failures, communication issues, or data processing errors. Effectively interpolating these missing values is necessary to ensure the correctness of downstream tasks. Compared with other data, traffic flow monitoring data show significant temporal and spatial correlations, yet most existing methods have not fully integrated both types of correlation. In this work, we introduce the Temporal–Spatial Fusion Neural Network (TSFNN), a framework designed to address missing data recovery in transportation monitoring by jointly modeling spatial and temporal patterns. The architecture incorporates a temporal component, implemented with a Recurrent Neural Network (RNN), to learn sequential dependencies, alongside a spatial component, implemented with a Multilayer Perceptron (MLP), to learn spatial correlations. For performance validation, the model was benchmarked against several established methods. Using real-world datasets with varying missing-data ratios, TSFNN consistently delivered more accurate interpolations than all baseline approaches, highlighting the advantage of combining temporal and spatial learning within a single framework. Full article
(This article belongs to the Section Computer)
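As a rough illustration of the idea behind TSFNN (fusing a temporal estimate with a spatial one), here is a deliberately simplified stand-in that replaces the RNN with linear interpolation along each sensor's series and the MLP with a mean over neighboring sensors. The fusion weight and function names are assumptions, not the paper's method:

```python
# Simplified, hypothetical stand-in for TSFNN's core idea: produce a
# temporal estimate and a spatial estimate for a missing reading, then
# fuse them with a fixed weight. The real model learns both components.

def impute(series, neighbors, t, alpha=0.5):
    """Estimate series[t] (None marks a gap) from temporal and spatial cues.

    series    : one sensor's readings over time (None = missing)
    neighbors : readings of spatially adjacent sensors at time t
    alpha     : fusion weight between temporal and spatial estimates
    """
    # Temporal estimate: midpoint of the nearest observed readings
    # before and after t (a crude substitute for the RNN).
    prev = next((series[i] for i in range(t - 1, -1, -1) if series[i] is not None), None)
    nxt = next((series[i] for i in range(t + 1, len(series)) if series[i] is not None), None)
    if prev is not None and nxt is not None:
        temporal = (prev + nxt) / 2
    else:
        temporal = prev if prev is not None else nxt

    # Spatial estimate: mean of observed neighbor readings
    # (a crude substitute for the MLP).
    observed = [v for v in neighbors if v is not None]
    spatial = sum(observed) / len(observed) if observed else None

    if temporal is None:
        return spatial
    if spatial is None:
        return temporal
    return alpha * temporal + (1 - alpha) * spatial
```

When only one cue is available the sketch falls back to it, mirroring the intuition that either correlation alone still carries signal.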

19 pages, 5116 KB  
Article
Development and Evaluation of a Novel IoT Testbed for Enhancing Security with Machine Learning-Based Threat Detection
by Waleed Farag, Xin-Wen Wu, Soundararajan Ezekiel, Drew Rado and Jaylee Lassinger
Sensors 2025, 25(18), 5870; https://doi.org/10.3390/s25185870 - 19 Sep 2025
Viewed by 707
Abstract
The Internet of Things (IoT) has revolutionized industries by enabling seamless data exchange between billions of connected devices. However, the rapid proliferation of IoT devices has introduced significant security challenges, as many of these devices lack robust protection against cyber threats such as data breaches and denial-of-service attacks. Addressing these vulnerabilities is critical to maintaining the integrity and trust of IoT ecosystems. Traditional cybersecurity solutions often fail in dynamic, heterogeneous IoT environments due to device diversity, limited computational resources, and inconsistent communication protocols, which hinder the deployment of uniform and scalable security mechanisms. Moreover, there is a notable lack of realistic, high-quality datasets for training and evaluating machine learning (ML) models for IoT security, limiting their effectiveness in detecting complex and evolving threats. This paper presents the development and implementation of a novel physical smart office/home testbed designed to evaluate ML algorithms for detecting and mitigating IoT security vulnerabilities. The testbed replicates a real-world office environment, integrating a variety of IoT devices, such as different types of sensors, cameras, smart plugs, and workstations, within a network generating authentic traffic patterns. By simulating diverse attack scenarios including unauthorized access and network intrusions, the testbed provides a controlled platform to train, test, and validate ML-based anomaly detection systems. Experimental results show that the XGBoost model achieved a balanced accuracy of up to 99.977% on testbed-generated data, comparable to 99.985% on the benchmark IoT-23 dataset. Notably, the SVM model achieved up to 96.71% accuracy using our testbed data, outperforming its results on IoT-23, which peaked at 94.572%. 
The findings demonstrate the testbed’s effectiveness in enabling realistic security evaluations and its ability to generate real-world datasets, highlighting its potential as a valuable tool for advancing IoT security research. This work contributes to the development of more resilient and adaptive security frameworks, offering valuable insights for safeguarding critical IoT infrastructures against evolving threats. Full article
(This article belongs to the Section Internet of Things)
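The testbed results above are reported as balanced accuracy, i.e. the mean of per-class recall, which is robust to the class imbalance typical of intrusion datasets. A minimal pure-Python version of the metric:

```python
# Balanced accuracy = average of per-class recall. With heavily skewed
# benign/attack class ratios, plain accuracy can look high even when a
# model misses the rare attack class; balanced accuracy does not.

from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    correct = defaultdict(int)  # per-class count of correct predictions
    total = defaultdict(int)    # per-class count of samples
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    recalls = [correct[c] / total[c] for c in total]
    return sum(recalls) / len(recalls)
```

For example, with three benign samples (two classified correctly) and one attack sample (classified correctly), the per-class recalls are 2/3 and 1, giving a balanced accuracy of 5/6 rather than the raw accuracy of 3/4.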

59 pages, 3591 KB  
Review
Efficient Caching Strategies in NDN-Enabled IoT Networks: Strategies, Constraints, and Future Directions
by Ala’ Ahmad Alahmad, Azana Hafizah Mohd Aman, Faizan Qamar and Wail Mardini
Sensors 2025, 25(16), 5203; https://doi.org/10.3390/s25165203 - 21 Aug 2025
Viewed by 955
Abstract
Named Data Networking (NDN) represents a significant shift within the information-centric networking (ICN) paradigm, moving beyond current IP-based infrastructure by retrieving data based on its name rather than on the host’s location. This paradigm shift is especially beneficial in Internet of Things (IoT) settings, where information sharing is a critical challenge because millions of IoT devices generate enormous traffic. In-network content caching is another key characteristic of NDN used in IoT: it enables data to be stored within the network so that IoT devices can obtain the intended content from nearby caching nodes, which in turn minimizes latency and bandwidth consumption. However, effective caching solutions must be developed, since cache management is complicated by the constantly changing topology of IoT networks and the constrained capabilities of IoT devices. This paper gives an overview of caching strategies in NDN-based IoT systems. It examines six strategy types: popularity-based, freshness-aware, collaborative, hybrid, probabilistic, and machine learning-based, evaluating their performance against demands such as content preference, cache updating, and power consumption. By analyzing various caching policies and their performance characteristics, this paper provides valuable insights for researchers and practitioners developing caching strategies in NDN-based IoT networks. Full article
(This article belongs to the Section Internet of Things)
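As an illustration of one surveyed strategy family, here is a hypothetical sketch of a popularity-based cache with freshness awareness: eviction removes the least-requested name, and entries older than a freshness lifetime are treated as stale misses. The class and parameter names are illustrative, not taken from any surveyed scheme:

```python
# Hypothetical popularity-based + freshness-aware content store sketch.
# Names are looked up by content name (as in NDN); the least popular
# name is evicted when the store is full, and stale entries are dropped.

import time

class PopularityCache:
    def __init__(self, capacity, lifetime=60.0, clock=time.monotonic):
        self.capacity = capacity
        self.lifetime = lifetime      # freshness window in seconds
        self.clock = clock            # injectable for testing
        self.store = {}               # name -> (data, stored_at)
        self.hits = {}                # name -> request count (popularity)

    def get(self, name):
        self.hits[name] = self.hits.get(name, 0) + 1
        entry = self.store.get(name)
        if entry is None:
            return None               # cache miss
        data, stored_at = entry
        if self.clock() - stored_at > self.lifetime:
            del self.store[name]      # stale content: treat as a miss
            return None
        return data

    def put(self, name, data):
        if name not in self.store and len(self.store) >= self.capacity:
            # Evict the least popular cached name.
            victim = min(self.store, key=lambda n: self.hits.get(n, 0))
            del self.store[victim]
        self.store[name] = (data, self.clock())
```

Injecting the clock keeps the freshness logic testable without real waiting, a useful pattern when simulating cache behavior.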

14 pages, 653 KB  
Article
Industrial Internet of Things Intrusion Detection System Based on Graph Neural Network
by Siqi Yang, Wenqiang Pan, Min Li, Mingyong Yin, Hao Ren, Yue Chang, Yidou Liu, Senyao Zhang and Fang Lou
Symmetry 2025, 17(7), 997; https://doi.org/10.3390/sym17070997 - 24 Jun 2025
Cited by 2 | Viewed by 1946
Abstract
Deep learning greatly improves the detection efficiency for abnormal traffic through autonomous learning and effective extraction of data features. Among such approaches, Graph Neural Networks (GNNs) fit the features of abnormal traffic effectively by aggregating the features and structural information of network nodes. However, GNN performance in the industrial Internet of Things (IIoT) domain remains insufficient. Because the asymmetry of IIoT traffic data is greater than that of the traditional Internet, a detection method with a high detection rate is needed. Many current algorithms over-emphasize optimizing the graph neural network model itself while ignoring the resource heterogeneity caused by device diversity in IIoT networks and the differing traffic characteristics caused by multi-protocol communication, so a universal GNN may not be fully applicable. Therefore, a novel intrusion detection framework incorporating graph neural networks is developed for IIoT systems. To match the distributed character of the IIoT, mini-batch sampling is designed to support data parallelism and accelerate training. Because IIoT traffic has strong real-time characteristics, packets concentrated in short time windows carry many feature attributes with high redundancy due to inter-feature correlation; this paper therefore models temporal correlation and designs a new model accordingly. The performance of the proposed GIDS model is evaluated on several benchmark datasets, including BoT-IoT, ACI-IoT-2023, and OPCUA. The results show that the method performs well on both binary and multiclass classification tasks. Binary classification accuracy is 93.63%, 97.34%, and 100%, with F1 scores of 94.34%, 97.68%, and 100.00%, respectively. Multiclass classification accuracy is 92.34%, 93.68%, and 99.99%, with F1 scores of 94.55%, 94.12%, and 99.99%, respectively. Experimental measurements show that the model effectively exploits the natural distribution of network traffic in both temporal and spatial dimensions, achieving better detection results. Full article
(This article belongs to the Section Computer)
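The mini-batch sampling mentioned in this abstract can be illustrated with a toy neighbor-sampling routine: each batch takes a set of seed nodes plus at most `fanout` sampled neighbors per seed, so batches stay bounded and can be processed in parallel. This is a generic sketch, not the paper's GIDS implementation:

```python
# Generic neighbor-sampling sketch for mini-batch GNN training: bound the
# per-seed neighborhood so each batch fits in memory and batches are
# independent units of work for data parallelism.

import random

def sample_minibatch(adj, seeds, fanout, rng=None):
    """adj: node -> list of neighbors; returns the induced node set
    (seeds plus up to `fanout` sampled neighbors per seed)."""
    rng = rng or random.Random(0)
    nodes = set(seeds)
    for s in seeds:
        neigh = adj.get(s, [])
        nodes.update(rng.sample(neigh, min(fanout, len(neigh))))
    return nodes
```

In a real pipeline the returned node set would be materialized into a subgraph whose features are fed to one GNN replica per batch.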

23 pages, 6982 KB  
Article
An Efficient and Low-Delay SFC Recovery Method in the Space–Air–Ground Integrated Aviation Information Network with Integrated UAVs
by Yong Yang, Buhong Wang, Jiwei Tian, Xiaofan Lyu and Siqi Li
Drones 2025, 9(6), 440; https://doi.org/10.3390/drones9060440 - 16 Jun 2025
Viewed by 705
Abstract
Unmanned aerial vehicles (UAVs), owing to their flexible coverage expansion and dynamic adjustment capabilities, hold significant application potential across various fields. With the emergence of urban low-altitude air traffic dominated by UAVs, the integrated aviation information network combining UAVs and manned aircraft has evolved into a complex space–air–ground integrated Internet of Things (IoT) system. The application of 5G/6G network technologies, such as cloud computing, network function virtualization (NFV), and edge computing, has enhanced the flexibility of air traffic services based on service function chains (SFCs), while simultaneously expanding the network attack surface. Compared to traditional networks, the aviation information network integrating UAVs exhibits greater heterogeneity and demands higher service reliability. To address the failure issues of SFCs under attack, this study proposes an efficient SFC recovery method for recovery rate optimization (ERRRO) based on virtual network functions (VNFs) migration technology. The method first determines the recovery order of failed SFCs according to their recovery costs, prioritizing the restoration of SFCs with the lowest costs. Next, the migration priorities of the failed VNFs are ranked based on their neighborhood certainty, with the VNFs exhibiting the highest neighborhood certainty being migrated first. Finally, the destination nodes for migrating the failed VNFs are determined by comprehensively considering attributes such as the instantiated SFC paths, delay of physical platforms, and residual resources. Experiments demonstrate that the ERRRO performs well under networks with varying resource redundancy and different types of attacks. Compared to methods reported in the literature, the ERRRO achieves superior performance in terms of the SFC recovery rate and delay. Full article
(This article belongs to the Special Issue Space–Air–Ground Integrated Networks for 6G)
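The recovery ordering that ERRRO applies (cheapest failed SFC restored first, then highest neighborhood certainty migrated first among each SFC's failed VNFs) can be sketched as below. The cost and certainty values are taken here as given inputs, whereas the paper derives them from instantiated paths, platform delay, and residual resources:

```python
# Hypothetical sketch of ERRRO's two-level ordering as described in the
# abstract: sort failed SFCs by ascending recovery cost, and within each
# SFC sort failed VNFs by descending neighborhood certainty.

def recovery_plan(failed_sfcs):
    """failed_sfcs: list of dicts like
    {"name": ..., "cost": ..., "vnfs": [(vnf_id, certainty), ...]}
    Returns an ordered plan: [(sfc_name, [vnf_id, ...]), ...]."""
    plan = []
    for sfc in sorted(failed_sfcs, key=lambda s: s["cost"]):
        vnfs = sorted(sfc["vnfs"], key=lambda v: v[1], reverse=True)
        plan.append((sfc["name"], [vnf_id for vnf_id, _ in vnfs]))
    return plan
```

The third step of the method, choosing destination nodes for each migration, would consume this plan; it is omitted here because it depends on the platform attributes the paper models.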

25 pages, 1352 KB  
Systematic Review
Systematic Review of Graph Neural Network for Malicious Attack Detection
by Sarah Mohammed Alshehri, Sanaa Abdullah Sharaf and Rania Abdullrahman Molla
Information 2025, 16(6), 470; https://doi.org/10.3390/info16060470 - 2 Jun 2025
Cited by 3 | Viewed by 5862
Abstract
As cyberattacks continue to rise alongside the rapid expansion of digital systems, effective threat detection remains a critical yet challenging task. While several machine learning approaches have been proposed, the use of graph neural networks (GNNs) for cyberattack detection has not yet been systematically explored in depth. This paper presents a systematic literature review (SLR) that analyzes 28 recent academic studies published between 2020 and 2025, retrieved from major databases including IEEE, ACM, Scopus, and Springer. The review focuses on evaluating how GNN models are applied in detecting various types of attacks, particularly those targeting IoT environments, web services, phishing, and network traffic. Studies were classified based on the type of dataset, GNN model architecture, and attack domain. Additionally, key limitations and future research directions were extracted and analyzed. The findings provide a structured comparison of current methodologies and highlight gaps that warrant further exploration. This review contributes a focused perspective on the potential of GNNs in cybersecurity and offers insights to guide future developments in the field. Full article

26 pages, 10537 KB  
Article
Development of a Low-Cost Traffic and Air Quality Monitoring Internet of Things (IoT) System for Sustainable Urban and Environmental Management
by Lorand Bogdanffy, Csaba Romuald Lorinț and Aurelian Nicola
Sustainability 2025, 17(11), 5003; https://doi.org/10.3390/su17115003 - 29 May 2025
Cited by 2 | Viewed by 1568
Abstract
In this research, we present the development and validation of a compact, resource-efficient (low-cost, low-energy), distributed, real-time traffic and air quality monitoring system. Deployed since November 2023 in a small town that relies on burning various fuels and waste for winter heating, the system comprises three IoT units that integrate image processing and environmental sensing for sustainable urban and environmental management. Each unit uses an embedded camera and sensors to process live data locally, which are then transmitted to a central database. The image processing algorithm counts vehicles by type with over 95% daylight accuracy, while air quality sensors measure pollutants including particulate matter (PM), equivalent carbon dioxide (eCO2), and total volatile organic compounds (TVOCs). Data analysis revealed fluctuations in pollutant concentrations across monitored areas, correlating with traffic variations and enabling the identification of pollution sources and their relative impacts. Recorded daily average PM10 levels reached up to eight times the safe 24 h limit in winter, when traffic volumes were low, indicating a strong link to household heating. This work provides a scalable, cost-effective approach to traffic and air quality monitoring, offering actionable insights for urban planning and sustainable development. Full article

21 pages, 2229 KB  
Article
A Deep Learning Approach for Multiclass Attack Classification in IoT and IIoT Networks Using Convolutional Neural Networks
by Ali Abdi Seyedkolaei, Fatemeh Mahmoudi and José García
Future Internet 2025, 17(6), 230; https://doi.org/10.3390/fi17060230 - 22 May 2025
Cited by 3 | Viewed by 1419
Abstract
The rapid expansion of the Internet of Things (IoT) and industrial Internet of Things (IIoT) ecosystems has introduced new security challenges, particularly the need for robust intrusion detection systems (IDSs) capable of adapting to increasingly sophisticated cyberattacks. In this study, we propose a novel intrusion detection approach based on convolutional neural networks (CNNs), designed to automatically extract spatial patterns from network traffic data. Leveraging the DNN-EdgeIIoT dataset, which includes a wide range of attack types and traffic scenarios, we conduct comprehensive experiments to compare the CNN-based model against traditional machine learning techniques, including decision trees, random forests, support vector machines, and K-nearest neighbors. Our approach consistently outperforms baseline models across multiple performance metrics—such as F1 score, precision, and recall—in both binary (benign vs. attack) and multiclass settings (6-class and 15-class classification). The CNN model achieves F1 scores of 1.00, 0.994, and 0.946, respectively, highlighting its strong generalization ability across diverse attack categories. These results demonstrate the effectiveness of deep-learning-based IDSs in enhancing the security posture of IoT and IIoT infrastructures, paving the way for intelligent, adaptive, and scalable threat detection systems. Full article

23 pages, 1402 KB  
Article
Adaptive Scheduling in Cognitive IoT Sensors for Optimizing Network Performance Using Reinforcement Learning
by Muhammad Nawaz Khan, Sokjoon Lee and Mohsin Shah
Appl. Sci. 2025, 15(10), 5573; https://doi.org/10.3390/app15105573 - 16 May 2025
Cited by 1 | Viewed by 867
Abstract
Cognitive sensors are embedded in home appliances and other surrounding devices to create a connected, intelligent environment that provides pervasive and ubiquitous services. These sensors frequently generate massive amounts of data with many redundant and repeated bit values. Cognitive sensors are always resource-constrained, and if a careful strategy is not applied at deployment time, sensors become disconnected, degrading the system’s performance in terms of energy, reconfiguration, delay, latency, and packet loss. To address these challenges and maintain a connected network, a system is needed that evaluates the content of sensed data values and dynamically switches sensor states based on their function. In this article, we propose a reinforcement learning-based mechanism called “Adaptive Scheduling in Cognitive IoT Sensors for Optimizing Network Performance using Reinforcement Learning (ASC-RL)”. The proposed scheme uses three types of parameters for reinforcement learning: internal parameters (states), environmental parameters (sensed values), and history parameters (energy levels, roles, and the number of state switches), and derives a function for the state-changing policy. Based on this policy, sensors adjust and adapt to different energy states. These states minimize extensive sensing, reduce costly processing, and lessen frequent communication. The proposed scheme reduces network traffic and optimizes network performance in terms of network energy. The evaluation considers joint Gaussian distributions and event correlations, deriving results for signal strength, noise, prediction accuracy, and energy efficiency under a combined reward score. Through comparative analysis, ASC-RL enhances the overall system’s performance by 3.5% in detection and transition probabilities. The false alarm probability is reduced to 25.7%, the transmission success rate is increased by 6.25%, and the energy efficiency and reliability threshold are increased by 35%. Full article
(This article belongs to the Collection Trends and Prospects in Multimedia)
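A toy version of the state-switching policy described above might look like the following Q-learning sketch. The state names, reward shape, and hyperparameters are illustrative assumptions rather than ASC-RL's actual design:

```python
# Hypothetical sketch of an RL-driven sensor state switcher: a sensor keeps
# a Q-value per operating state and updates it from a reward that would, in
# ASC-RL's terms, trade detection quality against energy cost. The state
# names and hyperparameters are illustrative only.

import random

STATES = ["sense", "relay", "sleep"]

class StatePolicy:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1, rng=None):
        self.q = {s: 0.0 for s in STATES}   # one Q-value per state
        self.alpha = alpha                  # learning rate
        self.gamma = gamma                  # discount factor
        self.epsilon = epsilon              # exploration probability
        self.rng = rng or random.Random(0)

    def choose(self):
        """Epsilon-greedy choice of the next operating state."""
        if self.rng.random() < self.epsilon:
            return self.rng.choice(STATES)
        return max(self.q, key=self.q.get)

    def update(self, state, reward):
        """Standard Q-learning style update toward the observed reward."""
        best_next = max(self.q.values())
        self.q[state] += self.alpha * (reward + self.gamma * best_next - self.q[state])
```

After repeated positive rewards for a low-energy state, the greedy choice converges to that state, mirroring how the paper's policy steers sensors toward energy-saving modes.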

22 pages, 3040 KB  
Article
Diverse Machine Learning-Based Malicious Detection for Industrial Control System
by Ying-Chin Chen, Chia-Hao Cheng, Tzu-Wei Lin and Jung-San Lee
Electronics 2025, 14(10), 1947; https://doi.org/10.3390/electronics14101947 - 10 May 2025
Viewed by 676
Abstract
The digital transformation of manufacturing through OT, IoT, and AI integration has created extensive networked sensor ecosystems, introducing critical cybersecurity vulnerabilities at IT-OT interfaces. This might particularly challenge the detection component of the NIST cybersecurity framework. To address this concern, the authors designed a diverse machine learning-based intrusion detection system framework for industrial control systems (DICS). DICS implements a sophisticated dual-module architecture. The screening analysis module initially categorizes network traffic as either unidentifiable or recognized packets, while the classification analysis module subsequently determines specific attack types for identifiable traffic. When unrecognized zero-day attack traffic accumulates in a buffer and reaches a predetermined threshold, the agile training module incorporates these patterns into the system, which enables continuous adaptation. During experimental validation, the authors rigorously assess dataset industrial relevance and strategically divide the datasets into four distinct groups to accurately simulate diverse network traffic patterns characteristic of real industrial environments. Moreover, the authors highlight the system’s alignment with IEC 62443 requirements for industrial control system security. In conclusion, the comprehensive analysis demonstrates that DICS delivers superior detection capabilities for malicious network traffic in industrial settings. Full article
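The agile-training trigger described in this abstract, where unrecognized traffic accumulates in a buffer until a threshold fires retraining, can be sketched as follows. The class and callback names are illustrative assumptions:

```python
# Hypothetical sketch of DICS's agile-training trigger: packets the
# screening module cannot identify are buffered, and once the buffer
# reaches a predetermined threshold, a retraining callback consumes them
# so the system can adapt to zero-day traffic.

class ZeroDayBuffer:
    def __init__(self, threshold, retrain):
        self.threshold = threshold
        self.retrain = retrain    # callback: list of packets -> None
        self.buffer = []

    def observe(self, packet, recognized):
        """Feed one screened packet; buffer it only if unrecognized."""
        if recognized:
            return
        self.buffer.append(packet)
        if len(self.buffer) >= self.threshold:
            self.retrain(list(self.buffer))   # hand off a snapshot
            self.buffer.clear()
```

Passing a snapshot (`list(self.buffer)`) rather than the live list keeps the retraining job isolated from packets that arrive while it runs.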

27 pages, 7212 KB  
Article
Multi-View Intrusion Detection Framework Using Deep Learning and Knowledge Graphs
by Min Li, Yuansong Qiao and Brian Lee
Information 2025, 16(5), 377; https://doi.org/10.3390/info16050377 - 1 May 2025
Cited by 3 | Viewed by 1192
Abstract
Traditional intrusion detection systems (IDSs) rely on static rules and one-dimensional features and have difficulty dealing with zero-day attacks and highly concealed threats; furthermore, mainstream deep learning models cannot capture the correlation between multiple views of an attack because of their single perspective. This paper proposes a knowledge graph-enhanced multi-view deep learning framework that integrates network traffic, host behavior, and semantic relationships, and evaluates the impact of a secondary fusion strategy on feature fusion to identify the optimal multi-view model configuration. The primary objective is to verify the superiority of multi-view feature fusion and to determine whether incorporating knowledge graphs (KGs) can further enhance model performance. First, we introduce the knowledge graph (KG) as one feature view and neural networks as additional views, forming a multi-view feature fusion strategy that emphasizes the integration of spatial and relational features. The KG contributes relational features that are combined with spatial features extracted by the neural networks, enabling a more comprehensive representation of attack patterns through the synergy of both feature types. Second, building on this foundation, we propose a two-level fusion strategy: during the representation learning of spatial features, primary fusion is performed within each view, followed by secondary fusion with the relational features from the KG, thereby deepening and broadening feature integration. These strategies improve the model’s expressive power and detection performance and demonstrate strong generalization and robustness across three datasets, including TON_IoT and UNSW-NB15, marking a contribution of this study. In experimental evaluation, the F1 scores of multi-view models outperformed single-view models across all three datasets. Specifically, the F1 score of the multi-view approach (Model 6) improved by 10.57% on the TON_IoT Network+Win10 dataset compared with the best single-view model, while improvements of 5.53% and 3.21% were observed on the TON_IoT Network and UNSW-NB15 datasets. Among feature fusion strategies, the secondary fusion strategy (Model 6) outperformed primary fusion (Model 5). Furthermore, incorporating KG-based relational features as a separate view improved model performance, a finding validated by ablation studies. Experimental results show that the deep fusion strategy over multi-dimensional data overcomes the limitations of traditional single-view models, enables collaborative multi-dimensional analysis of network attack behaviors, and significantly enhances detection capabilities in complex attack scenarios. This approach establishes a scalable multimodal analysis framework for intelligent cybersecurity, advancing intrusion detection beyond traditional rule-based methods toward semantic understanding. Full article
(This article belongs to the Special Issue Intrusion Detection Systems in IoT Networks)
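As a rough illustration of the two-level fusion described in the abstract above, the sketch below performs primary fusion of two spatial views and then secondary fusion with a KG relational view. All view names, dimensions, and the use of plain concatenation as the fusion operator are illustrative assumptions, not details taken from the paper.

```python
import random

random.seed(0)

def rand_vec(dim):
    # Stand-in for a learned embedding of the given dimension.
    return [random.random() for _ in range(dim)]

# Hypothetical per-sample feature vectors from three views.
traffic_view = rand_vec(8)   # spatial features, e.g. from a traffic encoder
host_view    = rand_vec(8)   # spatial features, e.g. from a host-behavior model
kg_view      = rand_vec(6)   # relational features from the knowledge graph

# Primary fusion: merge the spatial views first.
spatial_fused = traffic_view + host_view          # 16-dim

# Secondary fusion: combine the fused spatial representation with the
# KG relational view before it reaches a classifier head.
final_repr = spatial_fused + kg_view              # 22-dim

print(len(final_repr))
```

The point of the two stages is that spatial views are homogenized first, so the relational KG features enter as a distinct modality rather than being diluted in a single flat concatenation.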

20 pages, 3197 KB  
Article
Research on Intrusion Detection Method Based on Transformer and CNN-BiLSTM in Internet of Things
by Chunhui Zhang, Jian Li, Naile Wang and Dejun Zhang
Sensors 2025, 25(9), 2725; https://doi.org/10.3390/s25092725 - 25 Apr 2025
Cited by 4 | Viewed by 3849
Abstract
With the widespread deployment of Internet of Things (IoT) devices, their complex network environments and open communication modes have made them prime targets for cyberattacks. Traditional Intrusion Detection Systems (IDSs) face challenges in handling complex attack types, data imbalance, and feature extraction difficulties in IoT environments, so accurately detecting abnormal IoT traffic has become increasingly critical. To address the limitation of single models in comprehensively capturing the diverse features of IoT traffic, this paper proposes a hybrid CNN-BiLSTM-Transformer model, which better handles complex features and long-sequence dependencies in intrusion detection. To address class imbalance, the Borderline-SMOTE method is introduced to enhance the model's ability to recognize minority-class attack samples. To tackle redundant features in the original dataset, a comprehensive feature selection strategy combining XGBoost, Chi-square (Chi2), and Mutual Information is adopted to ensure the model focuses on the most discriminative features. Experimental validation demonstrates that the proposed method achieves 99.80% accuracy on the CIC-IDS 2017 dataset and 97.95% accuracy on the BoT-IoT dataset, significantly outperforming traditional intrusion detection methods and demonstrating its efficiency and accuracy in detecting abnormal traffic in IoT environments. Full article
(This article belongs to the Section Internet of Things)
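One simple way to combine several scoring criteria, as in the feature selection strategy mentioned above, is rank aggregation. The sketch below uses made-up placeholder scores standing in for XGBoost importance, Chi-square, and mutual information values; the paper's actual combination rule may differ.

```python
# Hypothetical per-feature scores from three selection criteria
# (placeholders, not values from the paper or any dataset).
xgb_score  = [0.40, 0.05, 0.30, 0.10, 0.15]
chi2_score = [12.0, 1.50, 9.00, 3.00, 4.50]
mi_score   = [0.80, 0.10, 0.70, 0.20, 0.30]

def ranks(scores):
    # Rank features so that the highest score gets rank 0.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    r = [0] * len(scores)
    for rank, idx in enumerate(order):
        r[idx] = rank
    return r

# Average the per-criterion ranks; a low mean rank means the feature is
# consistently discriminative across all three criteria.
mean_rank = [
    (a + b + c) / 3.0
    for a, b, c in zip(ranks(xgb_score), ranks(chi2_score), ranks(mi_score))
]
selected = sorted(range(len(mean_rank)), key=lambda i: mean_rank[i])[:3]
print(selected)  # indices of the 3 features kept
```

Rank averaging is scale-free, which matters here because Chi2, mutual information, and tree-based importances live on incomparable numeric scales.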

18 pages, 1372 KB  
Article
Resource Allocation in 5G Cellular IoT Systems with Early Transmissions at the Random Access Phase
by Anastasia Daraseliya, Eduard Sopin, Vyacheslav Begishev, Yevgeni Koucheryavy and Konstantin Samouylov
Sensors 2025, 25(7), 2264; https://doi.org/10.3390/s25072264 - 3 Apr 2025
Viewed by 1052
Abstract
While the market for massive machine-type communications (mMTC) is evolving at an unprecedented pace, the standardization bodies, including 3GPP, are lagging behind in standardizing truly 5G-grade cellular Internet-of-Things (CIoT) systems. As an intermediate solution, an early data transmission mechanism that encapsulates data into the preambles has recently been proposed for 4G/5G Narrowband IoT (NB-IoT) technology and is also expected to become part of future CIoT systems. The aim of this paper is to propose a model for CIoT systems with and without early transmission functionality and to assess the optimal distribution of resources between the random access and data transmission phases. To this end, the developed model captures both phases explicitly, as well as the different traffic compositions in the downlink and uplink directions. Our numerical results demonstrate that early transmission functionality drastically decreases the delay of uplink packets, by 20–40%, even in the presence of downlink traffic sharing the same set of resources. However, it also affects the optimal share of resources allocated to the random access and data transmission phases. As a result, the optimal performance of 5G mMTC technologies with or without early transmission mode can only be attained if dynamic resource allocation is implemented. Full article
(This article belongs to the Section Internet of Things)
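The delay benefit of early transmissions described above can be sketched with a toy model: when the payload rides on the preamble, the grant and separate data phases are skipped. All step durations below are illustrative placeholders, not values from the paper or from 3GPP specifications.

```python
# Toy delay model contrasting random access with and without early data
# transmission (EDT). Durations in milliseconds are made-up placeholders.
PREAMBLE_MS = 2.0   # Msg1: preamble (carries the payload when EDT is used)
RAR_MS      = 3.0   # Msg2: random access response
GRANT_MS    = 3.0   # Msg3/Msg4: connection setup and uplink grant
DATA_MS     = 4.0   # scheduled uplink data transmission

def uplink_delay(edt: bool) -> float:
    if edt:
        # With EDT, the small uplink payload is encapsulated in the
        # preamble, so the grant and separate data phases are skipped.
        return PREAMBLE_MS + RAR_MS
    return PREAMBLE_MS + RAR_MS + GRANT_MS + DATA_MS

baseline = uplink_delay(edt=False)
with_edt = uplink_delay(edt=True)
print(f"{baseline} ms -> {with_edt} ms")
```

The magnitude of the gain in this sketch depends entirely on the assumed step durations; the paper's 20–40% figure comes from its full queueing model, not from arithmetic like this.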
