Search Results (364)

Search Parameters:
Keywords = internet topologies

49 pages, 1694 KB  
Review
Analysis of Deep Reinforcement Learning Algorithms for Task Offloading and Resource Allocation in Fog Computing Environments
by Endris Mohammed Ali, Jemal Abawajy, Frezewd Lemma and Samira A. Baho
Sensors 2025, 25(17), 5286; https://doi.org/10.3390/s25175286 - 25 Aug 2025
Abstract
Fog computing is increasingly preferred over cloud computing for processing tasks from Internet of Things (IoT) devices with limited resources. However, placing tasks and allocating resources in distributed and dynamic fog environments remains a major challenge, especially when trying to meet strict Quality of Service (QoS) requirements. Deep reinforcement learning (DRL) has emerged as a promising solution to these challenges, offering adaptive, data-driven decision-making in real-time and uncertain conditions. While several surveys have explored DRL in fog computing, most focus on traditional centralized offloading approaches or emphasize reinforcement learning (RL) with limited integration of deep learning. To address this gap, this paper presents a comprehensive and focused survey on the full-scale application of DRL to the task offloading problem in fog computing environments involving multiple user devices and multiple fog nodes. We systematically analyze and classify the literature based on architecture, resource allocation methods, QoS objectives, offloading topology and control, optimization strategies, DRL techniques used, and application scenarios. We also introduce a taxonomy of DRL-based task offloading models and highlight key challenges, open issues, and future research directions. This survey serves as a valuable resource for researchers by identifying unexplored areas and suggesting new directions for advancing DRL-based solutions in fog computing. For practitioners, it provides insights into selecting suitable DRL techniques and system designs to implement scalable, efficient, and QoS-aware fog computing applications in real-world environments.
(This article belongs to the Section Sensor Networks)
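To make the offloading decision loop concrete, here is a minimal tabular Q-learning sketch, a deliberately simplified stand-in for the deep RL agents the survey classifies: an agent learns which fog node should receive a task under an invented latency model. The state space, reward, and environment dynamics are illustrative assumptions, not taken from any surveyed paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FOG, N_STATES = 3, 4           # fog nodes (actions) and coarse load states (assumed)
Q = np.zeros((N_STATES, N_FOG))  # Q-table: load state x offloading action
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(state, action):
    """Toy environment: latency grows with a per-node bias (hypothetical)."""
    latency = rng.uniform(1, 3) + 0.5 * action
    reward = -latency                      # minimizing latency = maximizing reward
    return rng.integers(N_STATES), reward  # next (random) load state

state = 0
for _ in range(5000):
    # epsilon-greedy action selection over fog nodes
    action = rng.integers(N_FOG) if rng.random() < eps else int(Q[state].argmax())
    next_state, reward = step(state, action)
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print("preferred fog node per load state:", Q.argmax(axis=1))
```

In the DRL systems the survey covers, the Q-table is replaced by a neural network so the agent can cope with the much larger state spaces of real fog deployments.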

29 pages, 919 KB  
Article
DDoS Defense Strategy Based on Blockchain and Unsupervised Learning Techniques in SDN
by Shengmin Peng, Jialin Tian, Xiangyu Zheng, Shuwu Chen and Zhaogang Shu
Future Internet 2025, 17(8), 367; https://doi.org/10.3390/fi17080367 - 13 Aug 2025
Abstract
With the rapid development of technologies such as cloud computing, big data, and the Internet of Things (IoT), Software-Defined Networking (SDN) is emerging as a new network architecture for the modern Internet. SDN separates the control plane from the data plane, allowing a central controller, the SDN controller, to quickly direct the routing devices within the topology to forward data packets, thus providing flexible traffic management for communication between information sources. However, traditional Distributed Denial of Service (DDoS) attacks still significantly impact SDN systems. This paper proposes a novel dual-layer strategy capable of detecting and mitigating DDoS attacks in an SDN network environment. The first layer of the strategy enhances security by using blockchain technology to replace the SDN flow table storage container in the northbound interface of the SDN controller. Smart contracts are then used to process the stored flow table information. We employ the time window algorithm and the token bucket algorithm to construct the first-layer strategy to defend against obvious DDoS attacks. To detect and mitigate less obvious DDoS attacks, we design a second-layer strategy that uses a composite data feature correlation coefficient calculation method and the Isolation Forest algorithm from unsupervised learning to perform binary classification, thereby identifying abnormal traffic. We conduct experimental validation using the publicly available DDoS dataset CIC-DDoS2019. The results show that using this strategy in the SDN network reduces the average deviation of round-trip time (RTT) by approximately 38.86% compared with the original SDN network without this strategy. Furthermore, DDoS attack detection reaches an accuracy of 97.66% and an F1 score of 92.2%. Compared with other similar methods, under comparable detection accuracy, deploying our strategy in small-scale SDN network topologies provides faster DDoS attack detection with less fluctuation in detection time. This indicates that the strategy can effectively identify DDoS attacks without affecting the stability of data transmission in the SDN network environment.
(This article belongs to the Special Issue DDoS Attack Detection for Cyber–Physical Systems)
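As a rough illustration of the second-layer classifier described above, the sketch below runs scikit-learn's IsolationForest on synthetic flow features; the features, their distributions, and the contamination rate are assumptions for demonstration, not the paper's CIC-DDoS2019 feature pipeline.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical flow features: packet rate, byte rate, flow duration
normal = rng.normal(loc=[100, 5e4, 2.0], scale=[10, 5e3, 0.5], size=(950, 3))
attack = rng.normal(loc=[900, 4e5, 0.1], scale=[50, 2e4, 0.05], size=(50, 3))
X = np.vstack([normal, attack])

# contamination = assumed share of anomalous flows
clf = IsolationForest(n_estimators=100, contamination=0.05, random_state=0).fit(X)
labels = clf.predict(X)  # +1 = normal, -1 = anomalous
print("flows flagged as DDoS-like:", int((labels == -1).sum()))
```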

23 pages, 650 KB  
Article
Exercise-Specific YANG Profile for AI-Assisted Network Security Labs: Bidirectional Configuration Exchange with Large Language Models
by Yuichiro Tateiwa
Information 2025, 16(8), 631; https://doi.org/10.3390/info16080631 - 24 Jul 2025
Abstract
Network security courses rely on hands-on labs where students configure virtual Linux networks to practice attack and defense. Automated feedback is scarce because no standard exists for exchanging detailed configurations—interfaces, bridging, routing tables, iptables policies—between exercise software and large language models (LLMs) that could serve as tutors. We address this interoperability gap with an exercise-oriented YANG profile that augments the Internet Engineering Task Force (IETF) ietf-network module with a new network-devices module. The profile expresses Linux interface settings, routing, and firewall rules, and tags each node with roles such as linux-server or linux-firewall. Integrated into our LiNeS Cloud platform, it enables LLMs to both parse and generate machine-readable network states. We evaluated the profile on four topologies—from a simple client–server pair to multi-subnet scenarios with dedicated security devices—using ChatGPT-4o, Claude 3.7 Sonnet, and Gemini 2.0 Flash. Across 1050 evaluation tasks covering profile understanding (n = 180), instance analysis (n = 750), and instance generation (n = 120), the three LLMs answered correctly in 1028 cases, yielding an overall accuracy of 97.9%. Even with only minimal follow-up cues (≤3 turns), rather than handcrafted prompt chains, analysis tasks reached 98.1% accuracy and generation tasks 93.3%. To our knowledge, this is the first exercise-focused YANG profile that simultaneously captures Linux/iptables semantics and is empirically validated across three proprietary LLMs, attaining 97.9% overall task accuracy. These results lay a practical foundation for artificial intelligence (AI)-assisted security labs where real-time feedback and scenario generation must scale beyond human instructor capacity.
(This article belongs to the Special Issue AI Technology-Enhanced Learning and Teaching)
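To suggest the shape of the machine-readable state being exchanged, here is a hypothetical instance loosely modeled on the described augmentation of ietf-network; every field name below is an illustrative guess, not the published network-devices schema.

```python
import json

instance = json.loads("""
{
  "network": {
    "node": [
      {"node-id": "fw1", "role": "linux-firewall",
       "interfaces": [{"name": "eth0", "ipv4": "10.0.0.1/24"}],
       "iptables": [{"chain": "FORWARD", "policy": "DROP"}]},
      {"node-id": "srv1", "role": "linux-server",
       "interfaces": [{"name": "eth0", "ipv4": "10.0.0.2/24"}]}
    ]
  }
}
""")

# An LLM tutor can parse instances like this one, or generate them for
# new exercise scenarios, which is the bidirectional exchange at issue.
for node in instance["network"]["node"]:
    print(node["node-id"], "->", node["role"])
```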

25 pages, 1984 KB  
Article
Intra-Domain Routing Protection Scheme Based on the Minimum Cross-Degree Between the Shortest Path and Backup Path
by Haijun Geng, Xuemiao Liu, Wei Hou, Lei Xu and Ling Wang
Appl. Sci. 2025, 15(15), 8151; https://doi.org/10.3390/app15158151 - 22 Jul 2025
Abstract
As the Internet continues to develop, users expect ever greater network stability and availability. Despite constant efforts to prevent them, network failures remain unavoidable, so strengthening the network's ability to withstand failures is crucial, and intra-domain routing protection schemes with high fault protection rates have become an important research topic and are the subject of this study. This study aims to enhance network resilience and service continuity in the event of failures by proposing innovative routing protection strategies. Existing methods such as Loop-Free Alternate (LFA) and Equal-Cost Multi-Path (ECMP) fall short in fast fault detection, fault response, and fault recovery, exhibiting long recovery times, limited protection coverage, and restrictive requirements on network topology. To address these issues, this article proposes a new intra-domain routing protection scheme based on the minimum cross-degree backup path. Its core idea is to find the backup path with the minimum degree of intersection with the optimal path, thereby avoiding potential fault areas and minimizing the impact of faults on the rest of the network. Comparative analysis and performance evaluation show that the scheme provides a higher fault protection rate and more reliable routing protection than traditional methods, especially in complex networks. The proposed scheme exhibits a high fault protection rate across multiple topologies, reaching a rate of 1 on real topologies. It also controls path length well, with a path stretch of 1.06 on the real topology Ans. The mean intersection value is 0 in the majority of the topologies, implying virtually no common edges between the backup and optimal paths, which effectively mitigates the risk of single-point failure.
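The core idea lends itself to a small sketch: compute the shortest path, then choose the backup path sharing the fewest edges with it. The brute-force enumeration and toy graph below are illustrative only; the paper's algorithm and evaluation topologies are not reproduced here.

```python
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("s", "a", 1), ("a", "t", 1), ("s", "b", 2),
    ("b", "t", 2), ("a", "b", 1), ("s", "t", 5),
])

primary = nx.shortest_path(G, "s", "t", weight="weight")
primary_edges = {frozenset(e) for e in zip(primary, primary[1:])}

def cross_degree(path):
    """Number of edges a path shares with the primary (optimal) path."""
    return sum(frozenset(e) in primary_edges for e in zip(path, path[1:]))

# Enumerate simple paths and pick the one with minimum cross-degree,
# breaking ties by length -- viable only on small toy graphs.
backup = min(
    (p for p in nx.all_simple_paths(G, "s", "t") if p != primary),
    key=lambda p: (cross_degree(p), len(p)),
)
print("primary:", primary, "backup:", backup, "shared edges:", cross_degree(backup))
```

A cross-degree of 0, as in the paper's mean intersection results, means a failure on the primary path cannot simultaneously break the backup.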

16 pages, 2354 KB  
Proceeding Paper
Design and Implementation of a Passive Optical Network for a Small Town
by Fatima Sapundzhi, Boyko Zarev, Slavi Georgiev, Snezhinka Zaharieva, Metodi Popstoilov and Meglena Lazarova
Eng. Proc. 2025, 100(1), 40; https://doi.org/10.3390/engproc2025100040 - 15 Jul 2025
Abstract
The increasing demand for high-speed internet and advanced digital services necessitates the deployment of robust and scalable broadband infrastructure, particularly in smaller urban and rural areas. This paper presents the design and implementation of a passive optical network (PON) based on the gigabit-capable passive optical network (GPON) standard to deliver fiber-to-the-home (FTTH) services in a small-town setting. The proposed solution prioritizes cost-effectiveness, scalability, and minimal energy consumption by leveraging passive splitters and unpowered network elements. We detail the topology planning, splitter architecture, installation practices, and technical specifications that ensure efficient signal distribution and future network expansion. The results demonstrate the successful implementation of an optical access infrastructure that supports high-speed internet, Internet Protocol television (IPTV), and voice services while maintaining flexibility for diverse urban layouts and housing types.
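A PON design like this one stands or falls on its optical power budget; the sketch below runs a rough downstream budget check with typical catalog figures for attenuation, splitter insertion loss, and receiver sensitivity, none of which are the paper's actual link design values.

```python
# Rough GPON downstream power-budget check (illustrative numbers only)
tx_power_dbm = 3.0                        # OLT transmit power
rx_sens_dbm = -28.0                       # ONT receiver sensitivity (Class B+)
fiber_km, fiber_loss_db_km = 4.0, 0.35    # feeder + drop length at 1490 nm
splitter_loss_db = {8: 10.5, 16: 13.8, 32: 17.1}  # typical insertion losses
splices_db, margin_db = 1.0, 3.0          # connector/splice loss and safety margin

for ratio, split_db in splitter_loss_db.items():
    loss = fiber_km * fiber_loss_db_km + split_db + splices_db + margin_db
    rx = tx_power_dbm - loss
    verdict = "OK" if rx >= rx_sens_dbm else "FAIL"
    print(f"1:{ratio} split -> received {rx:.1f} dBm ({verdict})")
```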

15 pages, 1213 KB  
Article
A Lightweight Certificateless Authenticated Key Agreement Scheme Based on Chebyshev Polynomials for the Internet of Drones
by Zhaobin Li, Zheng Ju, Hong Zhao, Zhanzhen Wei and Gongjian Lan
Sensors 2025, 25(14), 4286; https://doi.org/10.3390/s25144286 - 9 Jul 2025
Abstract
The Internet of Drones (IoD) overcomes the physical limitations of traditional ground networks with its dynamic topology and 3D spatial flexibility, playing a crucial role in various fields. However, eavesdropping and spoofing attacks in open channel environments threaten data confidentiality and integrity, posing significant challenges to IoD communication. Existing foundational schemes in IoD primarily rely on symmetric cryptography and digital certificates. Symmetric cryptography suffers from key management challenges and static characteristics, making it unsuitable for IoD's dynamic scenarios. Meanwhile, elliptic curve-based public key cryptography is constrained by high computational complexity and certificate management costs, rendering it impractical for resource-limited IoD nodes. This paper leverages the low computational overhead of Chebyshev polynomials to address the limited computational capability of nodes, proposing a certificateless public key cryptography scheme. Through the semigroup property, it constructs a lightweight authentication and key agreement protocol with identity privacy protection, resolving the security and performance trade-off in dynamic IoD environments. Security analysis and performance tests demonstrate that the proposed scheme resists various attacks while reducing computational overhead by 65% compared to other schemes. This work not only offers a lightweight certificateless cryptographic solution for IoD systems but also advances the engineering application of Chebyshev polynomials in asymmetric cryptography.
(This article belongs to the Special Issue UAV Secure Communication for IoT Applications)
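The key agreement rests on the semigroup property of Chebyshev polynomials, T_a(T_b(x)) = T_ab(x) = T_b(T_a(x)), which enables a Diffie-Hellman-style exchange. The toy sketch below demonstrates only that property over a prime modulus; the parameters are illustrative, and the actual protocol additionally provides certificateless authentication and identity privacy.

```python
def cheb(n, x, p):
    """T_n(x) mod p via the recurrence T_k = 2x*T_(k-1) - T_(k-2)."""
    t0, t1 = 1, x % p
    if n == 0:
        return t0
    for _ in range(n - 1):
        t0, t1 = t1, (2 * x * t1 - t0) % p
    return t1

p, x = 2**31 - 1, 123456             # public modulus and base point (toy values)
a, b = 7919, 104729                  # private keys of drone and ground station
A, B = cheb(a, x, p), cheb(b, x, p)  # exchanged public values

# both sides derive the same secret: T_a(T_b(x)) = T_ab(x) = T_b(T_a(x))
assert cheb(a, B, p) == cheb(b, A, p)
print("shared secret:", cheb(a, B, p))
```

A production implementation would evaluate T_n in O(log n) steps with a matrix or doubling method; the linear recurrence here is just the simplest correct form.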

15 pages, 1529 KB  
Article
Peak Age of Information Optimization in Cell-Free Massive Random Access Networks
by Zhiru Zhao, Yuankang Huang and Wen Zhan
Electronics 2025, 14(13), 2714; https://doi.org/10.3390/electronics14132714 - 4 Jul 2025
Abstract
With the vigorous development of Internet of Things technologies, Cell-Free Radio Access Network (CF-RAN), leveraging its distributed coverage and single/multi-antenna Access Point (AP) coordination advantages, has become a key technology for supporting massive Machine-Type Communication (mMTC). However, under the grant-free random access mechanism, this network architecture faces the problem of information freshness degradation due to channel congestion. To address this issue, a joint decoding model based on logical grouping architecture is introduced to analyze the correlation between the successful packet transmission probability and the Peak Age of Information (PAoI) in both single-AP and multi-AP scenarios. On this basis, a global Particle Swarm Optimization (PSO) algorithm is designed to dynamically adjust the channel access probability to minimize the average PAoI across the network. To reduce signaling overhead, a PSO algorithm based on local topology information is further proposed to achieve collaborative optimization among neighboring APs. Simulation results demonstrate that the global PSO algorithm can achieve performance closely approximating the optimum, while the local PSO algorithm maintains similar performance without the need for global information. It is especially suitable for large-scale access scenarios with wide area coverage, providing an efficient solution for optimizing information freshness in CF-RAN.
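To illustrate how a swarm can tune the access probability, here is a minimal PSO over a stand-in PAoI proxy (the inverse per-device success rate of slotted random access); the objective is an assumption for demonstration, since the paper derives PAoI analytically from its joint decoding model.

```python
import numpy as np

rng = np.random.default_rng(1)
N_DEV = 50  # contending devices under one AP (single-AP toy case)

def paoi(q):
    """Stand-in PAoI proxy: inverse success rate q*(1-q)^(N-1) per slot."""
    return 1.0 / (q * (1 - q) ** (N_DEV - 1) + 1e-12)

# minimal particle swarm over the scalar access probability q in (0, 1)
n_particles, iters, w, c1, c2 = 20, 100, 0.7, 1.5, 1.5
pos = rng.uniform(0.001, 0.999, n_particles)
vel = np.zeros(n_particles)
best_p = pos.copy()
best_v = np.array([paoi(q) for q in pos])
g = best_p[best_v.argmin()]                    # global best

for _ in range(iters):
    r1, r2 = rng.random(n_particles), rng.random(n_particles)
    vel = w * vel + c1 * r1 * (best_p - pos) + c2 * r2 * (g - pos)
    pos = np.clip(pos + vel, 0.001, 0.999)
    vals = np.array([paoi(q) for q in pos])
    better = vals < best_v
    best_p[better], best_v[better] = pos[better], vals[better]
    g = best_p[best_v.argmin()]

print(f"PSO access probability ~ {g:.4f} (analytic optimum 1/N = {1/N_DEV:.4f})")
```

The paper's local variant would run an update like this per AP using only neighboring APs' information in place of the global best.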

23 pages, 2431 KB  
Article
SatScope: A Data-Driven Simulator for Low-Earth-Orbit Satellite Internet
by Qichen Wang, Guozheng Yang, Yongyu Liang, Chiyu Chen, Qingsong Zhao and Sugai Chen
Future Internet 2025, 17(7), 278; https://doi.org/10.3390/fi17070278 - 24 Jun 2025
Abstract
The rapid development of low-Earth-orbit (LEO) satellite constellations has not only provided global users with low-latency and unrestricted high-speed data services but also presented researchers with the challenge of understanding dynamic changes in global network behavior. Unlike geostationary satellites and terrestrial internet infrastructure, LEO satellites move at a relative velocity of 7.6 km/s, leading to frequent alterations in their connectivity status with ground stations. Given the complexity of the space environment, current research on LEO satellite internet primarily focuses on modeling and simulation. However, existing LEO satellite network simulators often overlook the global network characteristics of these systems. We present SatScope, a data-driven simulator for LEO satellite internet. SatScope consists of three main components: space segment modeling, ground segment modeling, and network simulation configuration, providing researchers with an interface to interact with these models. Utilizing both space and ground segment models, SatScope can configure various network topology models, routing algorithms, and load balancing schemes, thereby enabling the evaluation of optimization algorithms for LEO satellite communication systems. We also compare SatScope's fidelity, lightweight design, scalability, and openness against other simulators. Based on our simulation results using SatScope, we propose two metrics—ground node IP coverage rate and the number of satellite service IPs—to assess the service performance of single-layer satellite networks. Our findings reveal that during each network handover, on average, 38.94% of nodes and 83.66% of links change.
(This article belongs to the Section Internet of Things)
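The closing statistic suggests a simple churn metric; the sketch below assumes change is measured as the symmetric difference between consecutive topology snapshots, normalized by their union (SatScope's exact definition may differ).

```python
def churn(nodes_a, links_a, nodes_b, links_b):
    """Fraction of nodes/links differing between two snapshots (assumed metric)."""
    node_change = len(nodes_a ^ nodes_b) / len(nodes_a | nodes_b)
    link_change = len(links_a ^ links_b) / len(links_a | links_b)
    return node_change, link_change

snap1_nodes = {"sat1", "sat2", "gs1"}
snap1_links = {frozenset(p) for p in [("sat1", "sat2"), ("sat2", "gs1")]}
snap2_nodes = {"sat2", "sat3", "gs1"}
snap2_links = {frozenset(p) for p in [("sat2", "sat3"), ("sat3", "gs1")]}

n, l = churn(snap1_nodes, snap1_links, snap2_nodes, snap2_links)
print(f"nodes changed: {n:.0%}, links changed: {l:.0%}")
```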

23 pages, 3558 KB  
Article
Research on High-Reliability Energy-Aware Scheduling Strategy for Heterogeneous Distributed Systems
by Ziyu Chen, Jing Wu, Lin Cheng and Tao Tao
Big Data Cogn. Comput. 2025, 9(6), 160; https://doi.org/10.3390/bdcc9060160 - 17 Jun 2025
Abstract
With the demand for workflow processing driven by edge computing in the Internet of Things (IoT) and cloud computing growing at an exponential rate, task scheduling in heterogeneous distributed systems has become a key challenge for meeting real-time constraints in resource-constrained environments. Existing studies attempt to balance time constraints, energy efficiency, and system reliability in Dynamic Voltage and Frequency Scaling (DVFS) environments. This study proposes a two-stage collaborative optimization strategy that systematically addresses these multi-objective optimization challenges through innovative algorithm design and theoretical analysis. First, based on a reliability-constrained model, we propose a topology-aware dynamic priority scheduling algorithm (EAWRS). This algorithm constructs a node priority function by incorporating in-degree/out-degree weighting factors and critical path analysis to enable multi-objective optimization. Second, to address the time-varying reliability characteristics introduced by DVFS, we propose a Fibonacci search-based dynamic frequency scaling algorithm (SEFFA). This algorithm effectively reduces energy consumption while ensuring task reliability, achieving sub-optimal processor energy adjustment. Together, EAWRS and SEFFA address the challenge of dynamic DAG-based scheduling in heterogeneous multi-core processor systems in IoT environments. Experimental evaluations at various scales show that, compared with the three most advanced scheduling algorithms, the proposed strategy reduces energy consumption by an average of 14.56% (up to 58.44% under high-reliability constraints) and shortens the makespan by 2.58–56.44% while strictly meeting reliability requirements.
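As a sketch of the SEFFA ingredient named above, the code below runs a Fibonacci search for the minimizer of a unimodal energy curve and then enforces a reliability floor; the energy and transient-fault models are invented placeholders, not the paper's DVFS formulation.

```python
import math

W, R_MIN = 1e9, 0.999   # task size in cycles and reliability floor (assumed)

def energy(f):
    """Assumed model: dynamic C*W*f^2 plus static power over runtime W/f."""
    return 1e-28 * W * f**2 + 0.1 * W / f

def reliability(f):
    """Assumed transient-fault model: fault rate rises as frequency drops."""
    lam = 1e-9 * 10 ** (2 * (2e9 - f) / 2e9)
    return math.exp(-lam * W / f)

def fib_min(f, a, b, n=25):
    """Fibonacci search for the minimizer of a unimodal f on [a, b]."""
    F = [1, 1]
    for _ in range(n):
        F.append(F[-1] + F[-2])
    x1, x2 = a + F[n - 2] / F[n] * (b - a), a + F[n - 1] / F[n] * (b - a)
    f1, f2 = f(x1), f(x2)
    for i in range(1, n - 1):
        if f1 < f2:                              # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = a + F[n - i - 2] / F[n - i] * (b - a)
            f1 = f(x1)
        else:                                    # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + F[n - i - 1] / F[n - i] * (b - a)
            f2 = f(x2)
    return (a + b) / 2

f_star = fib_min(energy, 5e8, 2e9)               # search 0.5-2.0 GHz
while reliability(f_star) < R_MIN:               # repair step: raise frequency
    f_star *= 1.05
print(f"frequency {f_star / 1e9:.2f} GHz, "
      f"energy {energy(f_star):.3f} J, reliability {reliability(f_star):.6f}")
```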

28 pages, 2413 KB  
Article
A Performance Evaluation for Software Defined Networks with P4
by Omesh A. Fernando, Hannan Xiao, Joseph Spring and Xianhui Che
Network 2025, 5(2), 21; https://doi.org/10.3390/network5020021 - 11 Jun 2025
Abstract
The exponential growth in the number of devices connected via the internet has led to the need for granular programmability to achieve increased performance, resilience, and reduced latency and jitter. Software Defined Networking (SDN) and Programming Protocol-independent Packet Processing (P4) are designed to introduce programmability into the control plane and the data plane of networks, respectively. Despite their individual potential and capabilities, the performance of combining SDN and P4 remains underexplored. This study presents a comprehensive evaluation of SDN with data plane programmability using P4 (SDN+P4) against traditional SDN with Open vSwitch (SDN+OvS), testing the hypothesis that combining SDN and P4 strengthens control and data plane programmability and offers improved management and adaptability, providing a platform for faster packet processing with reduced jitter, loss, and processing overhead. Mininet was employed to emulate three distinct topologies: multi-path, grid, and transit-stub. Various traffic types were transmitted to assess performance metrics across the three topologies. Our results demonstrate that SDN+P4 significantly outperforms SDN+OvS due to parallel processing, flexible parsing, and reduced overhead. The evaluation demonstrates the potential of SDN+P4 to provide a more resilient and stringent service with improved network performance for the future internet and its heterogeneous applications.
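For readers who want to reproduce the emulation setup, the sketch below builds a small multi-path topology in Mininet with an external SDN controller. It reflects only the SDN+OvS baseline (the P4 variant would swap in BMv2 software switches), the controller address is a placeholder, and running it requires root privileges and a Mininet install.

```python
#!/usr/bin/env python
from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.cli import CLI

net = Mininet(controller=None)
net.addController('c0', controller=RemoteController,
                  ip='127.0.0.1', port=6653)     # external SDN controller
h1, h2 = net.addHost('h1'), net.addHost('h2')
s1, s2, s3, s4 = (net.addSwitch(s) for s in ('s1', 's2', 's3', 's4'))
net.addLink(h1, s1)
net.addLink(s1, s2); net.addLink(s2, s4)         # path A
net.addLink(s1, s3); net.addLink(s3, s4)         # path B
net.addLink(s4, h2)

net.start()
net.pingAll()    # succeeds once the controller installs forwarding rules
CLI(net)         # e.g. run "iperf h1 h2" to measure throughput per path
net.stop()
```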

17 pages, 68021 KB  
Article
A Low-Power Differential Temperature Sensor with Chopped Cascode Transistors and Switched-Capacitor Integration
by Junyi Yang, Thomas Gourousis, Mengting Yan, Ruyi Ding, Ankit Mittal, Milin Zhang, Francesco Restuccia, Aatmesh Shrivastava, Yunsi Fei and Marvin Onabajo
Electronics 2025, 14(12), 2381; https://doi.org/10.3390/electronics14122381 - 11 Jun 2025
Abstract
Embedded differential temperature sensors can be utilized to monitor the power consumption of circuits, taking advantage of the inherent on-chip electrothermal coupling. Potential applications range from hardware security to linearity, gain/bandwidth calibration, defect-oriented testing, and compensation for circuit aging effects. This paper introduces the use of on-chip differential temperature sensors as part of a wireless Internet of Things system. A new low-power differential temperature sensor circuit with chopped cascode transistors and switched-capacitor integration is described. This design approach leverages chopper stabilization in combination with a switched-capacitor integrator that acts as a low-pass filter such that the circuit provides offset and low-frequency noise mitigation. Simulation results of the proposed differential temperature sensor in a 65 nm complementary metal-oxide-semiconductor (CMOS) process show a sensitivity of 33.18 V/°C within a linear range of ±36.5 m°C and an integrated output noise of 0.862 mVrms (from 1 to 441.7 Hz) with an overall power consumption of 0.187 mW. Considering a figure of merit that involves sensitivity, linear range, noise, and power, the new temperature sensor topology demonstrates a significant improvement compared to state-of-the-art differential temperature sensors for on-chip monitoring of power dissipation.
(This article belongs to the Special Issue Advances in RF, Analog, and Mixed Signal Circuits)
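The offset and low-frequency noise rejection claimed here comes from chopping; the numerical toy below shows the principle: modulate the input, amplify (so the offset enters at baseband), demodulate, and low-pass filter. Gain, offset, and frequencies are arbitrary values, not the sensor's.

```python
import numpy as np

fs, f_chop, f_sig = 100_000, 5_000, 50          # sample, chop, signal rates (Hz)
t = np.arange(0, 0.1, 1 / fs)
signal = 1e-3 * np.sin(2 * np.pi * f_sig * t)   # small differential input
chop = np.sign(np.sin(2 * np.pi * f_chop * t))  # chopping square wave

amplified = 100 * (signal * chop) + 0.5         # gain 100 plus amplifier offset
demod = amplified * chop                        # signal returns to baseband;
                                                # offset is shifted to f_chop
k = fs // f_chop                                # one chop period per window
lp = np.convolve(demod, np.ones(k) / k, mode="same")  # crude low-pass average
print("recovered amplitude ~", round((lp.max() - lp.min()) / 2, 4))  # ~0.1
```

In the actual circuit the switched-capacitor integrator plays the role of this low-pass stage while also setting the sensor's bandwidth.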

18 pages, 1289 KB  
Article
Topology-Aware Anchor Node Selection Optimization for Enhanced DV-Hop Localization in IoT
by Haixu Niu, Yonghai Li, Shuaixin Hou, Tianfei Chen, Lijun Sun, Mingyang Gu and Muhammad Irsyad Abdullah
Future Internet 2025, 17(6), 253; https://doi.org/10.3390/fi17060253 - 8 Jun 2025
Abstract
Node localization is a critical challenge in Internet of Things (IoT) applications. The DV-Hop algorithm, which relies on hop counts for localization, assumes that network nodes are uniformly distributed and estimates actual distances between nodes based on the number of hops. However, in practical IoT networks, node distribution is often non-uniform, leading to complex and irregular topologies that significantly reduce the localization accuracy of the original DV-Hop algorithm. To improve localization performance in non-uniform topologies, we propose an enhanced DV-Hop algorithm using Grey Wolf Optimization (GWO). First, the impact of non-uniform node distribution on hop count and average hop distance is analyzed. A binary Grey Wolf Optimization algorithm (BGWO) is then applied to develop an optimal anchor node selection strategy. This strategy eliminates anchor nodes with high estimation errors and selects a subset of high-quality anchors to improve the localization of unknown nodes. Second, in the multilateration stage, the traditional least-squares method is replaced by a continuous GWO algorithm to solve the distance equations with higher precision. Simulation results show that the proposed GWO-enhanced DV-Hop algorithm significantly improves localization accuracy in non-uniform topologies.
(This article belongs to the Special Issue Convergence of IoT, Edge and Cloud Systems)
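For orientation, the sketch below renders the baseline DV-Hop that the paper improves: hop counts by BFS, per-anchor average hop distance, and a linearized least-squares position solve (the stage the paper replaces with GWO). The jittered-grid layout and parameters are assumptions for a self-contained demo.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(3)
# deterministic toy layout: 6x5 jittered grid of sensor nodes
grid = np.array([(20.0 * i, 20.0 * j) for i in range(6) for j in range(5)])
pos = grid + rng.uniform(-5, 5, grid.shape)

G = nx.Graph()
G.add_nodes_from(range(len(pos)))
for i in range(len(pos)):
    for j in range(i + 1, len(pos)):
        if np.linalg.norm(pos[i] - pos[j]) <= 32:   # radio range (assumed)
            G.add_edge(i, j)

anchors, unknown = [0, 4, 25, 29], 12               # corner anchors, one target
hops = {a: nx.single_source_shortest_path_length(G, a) for a in anchors}

# per-anchor average hop distance: true anchor-anchor distances over hop counts
hop_dist = {
    a: sum(np.linalg.norm(pos[a] - pos[b]) for b in anchors if b != a)
       / sum(hops[a][b] for b in anchors if b != a)
    for a in anchors
}
d = np.array([hop_dist[a] * hops[a][unknown] for a in anchors])

# linearized least squares: subtract the last anchor's range equation
P = pos[anchors]
A = 2 * (P[:-1] - P[-1])
b = ((P[:-1] ** 2).sum(1) - (P[-1] ** 2).sum()) - (d[:-1] ** 2 - d[-1] ** 2)
est, *_ = np.linalg.lstsq(A, b, rcond=None)
print("true:", pos[unknown].round(1), "estimate:", est.round(1))
```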

20 pages, 2304 KB  
Article
Resilient Topology Reconfiguration for Industrial Internet of Things: A Feature-Driven Approach Against Heterogeneous Attacks
by Tianyu Wang, Dong Li, Bowen Zhang, Xianda Liu and Wenli Shang
Entropy 2025, 27(5), 503; https://doi.org/10.3390/e27050503 - 7 May 2025
Abstract
This paper proposes a feature-driven topology reconfiguration framework to enhance the resilience of Industrial Internet of Things (IIoT) systems against heterogeneous attacks. By dynamically partitioning IIoT into subnetworks based on localized attack features and reconstructing each subnetwork with tailored topologies, our framework significantly improves connectivity and communication efficiency. Evaluations on a real-world dataset (Tech-Routers-RF) characterizing IIoT topologies with 2113 nodes show that under diverse attack scenarios, connectivity and communication efficiency improve by more than 70% and 50%, respectively. Leveraging information entropy to quantify the trade-off between structural diversity and connection predictability, our work bridges adaptive network design with real-world attack dynamics, offering a scalable solution for securing large-scale IIoT deployments.
(This article belongs to the Special Issue Spreading Dynamics in Complex Networks)
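One concrete way information entropy can score the diversity/predictability trade-off is the Shannon entropy of a topology's degree distribution, sketched below; the paper's exact estimator over its reconfigured subnetworks may differ.

```python
import math
from collections import Counter

def degree_entropy(degrees):
    """Shannon entropy (bits) of a degree distribution."""
    counts, n = Counter(degrees), len(degrees)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

star = [4, 1, 1, 1, 1]   # hub-and-spoke: low structural diversity
ring = [2, 2, 2, 2, 2]   # regular ring: fully predictable, H = 0
mesh = [3, 2, 4, 2, 3]   # mixed degrees: higher diversity
for name, deg in [("star", star), ("ring", ring), ("mesh", mesh)]:
    print(f"{name}: H = {degree_entropy(deg):.3f} bits")
```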

7 pages, 8590 KB  
Proceeding Paper
Design and Implementation of Environmental Monitoring System Using Flask-Based Web Application
by Rong-Tai Hong
Eng. Proc. 2025, 92(1), 37; https://doi.org/10.3390/engproc2025092037 - 29 Apr 2025
Abstract
A low-cost, real-time environmental monitoring system is proposed in this study. The system integrates Internet of Things (IoT) technology and a web application based on the Flask micro-framework. A star topology of Bluetooth devices is adopted to connect the master server and multiple sensor nodes. The system employs a Raspberry Pi 4 Model B as the master server running the web application and Arduino UNO boards as the sensor nodes connected to multiple sensors and actuators. Since the sensor data must be collected consecutively and continuously in real time, multiple tasks are executed simultaneously; therefore, thread-based parallelism is used. The proposed system enables real-time environmental monitoring with low maintenance costs by leveraging the micro-framework web application and ad hoc network. Furthermore, the proposed system is scalable, as its components are commercial off-the-shelf commodities available on the market, and the number of sensor nodes and sensors can be increased based on the requirements of the desired system.
(This article belongs to the Proceedings of 2024 IEEE 6th Eurasia Conference on IoT, Communication and Engineering)
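A minimal skeleton of the described server pattern, a background polling thread feeding a Flask endpoint, is sketched below; the random reading is a placeholder standing in for the Bluetooth link to the Arduino nodes, and the route name is illustrative.

```python
import threading
import time
import random
from flask import Flask, jsonify

app = Flask(__name__)
latest = {}                  # shared state: node id -> last reading
lock = threading.Lock()

def poll_sensors():
    """Background task: stands in for reading sensor nodes over Bluetooth."""
    while True:
        reading = {"temp_c": round(random.uniform(20, 30), 1),
                   "ts": time.time()}
        with lock:
            latest["node1"] = reading
        time.sleep(2)

@app.route("/api/readings")
def readings():
    with lock:
        return jsonify(latest)

if __name__ == "__main__":
    threading.Thread(target=poll_sensors, daemon=True).start()
    app.run(host="0.0.0.0", port=5000)
```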

32 pages, 7418 KB  
Article
Real-Time Large-Scale Intrusion Detection and Prevention System (IDPS) CICIoT Dataset Traffic Assessment Based on Deep Learning
by Samuel Kofi Erskine
Appl. Syst. Innov. 2025, 8(2), 52; https://doi.org/10.3390/asi8020052 - 11 Apr 2025
Cited by 1
Abstract
This research utilizes machine learning (ML), and especially deep learning (DL), techniques for efficient feature extraction of intrusion attacks. We use DL for feature learning and a machine learning multilayer perceptron (MLP) as the intrusion detection (IDS) and intrusion prevention (IPS) system (IDPS) method, deploying the two together as DLMLP. DLMLP improves the detection of all intrusion attack features in the CICIoT2023 dataset, an Internet of Things (IoT) device dataset from the Canadian Institute for Cybersecurity (CIC). Our proposed deep learning multilayer perceptron intrusion detection and prevention system model (DLMIDPSM) provides an intrusion detection and prevention system topology (IDPST), which we use to capture, analyze, and prevent all intrusion attacks in the dataset. Moreover, the DLMIDPSM employs a combination of artificial neural networks (ANNs), convolutional neural networks (CNNs), and recurrent neural networks (RNNs). The aim is a robust real-time intrusion detection and prevention system model: the DLMIDPSM predicts, detects, and prevents intrusion attacks in the CICIoT2023 IoT dataset with an accuracy above 85% and a precision of 99%. By comparison, deep learning and machine learning models in the literature based on decision trees (DT) and support vector machines (SVM) achieve a detection and prevention accuracy of 81% with only 72% precision. Furthermore, this research project breaks new ground by combining machine learning and deep learning models with IDPS capability (ML and DLMIDPSMs). We train, validate, and test the ML and DLMIDPSMs on the CICIoT2023 dataset, achieving higher accuracy and precision than the other deep learning models discussed above, as reflected in the confusion matrix's high attack detection and prevention values.
(This article belongs to the Special Issue Advancements in Deep Learning and Its Applications)
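As a shape reference for the MLP stage, here is a compact scikit-learn intrusion classifier trained on synthetic flow features; it is a stand-in only, since the real pipeline loads CICIoT2023 records and couples the MLP with the deep-learning feature extraction described above.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import classification_report

rng = np.random.default_rng(7)
X_benign = rng.normal(0.0, 1, (2000, 10))   # synthetic stand-in features
X_attack = rng.normal(1.5, 1, (400, 10))
X = np.vstack([X_benign, X_attack])
y = np.array([0] * 2000 + [1] * 400)        # 0 = benign, 1 = attack

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0))
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te),
                            target_names=["benign", "attack"]))
```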