Search Results (863)

Search Parameters:
Keywords = packet losses

15 pages, 1428 KB  
Article
A Decision Tree Regression Algorithm for Real-Time Trust Evaluation of Battlefield IoT Devices
by Ioana Matei and Victor-Valeriu Patriciu
Algorithms 2025, 18(10), 641; https://doi.org/10.3390/a18100641 - 10 Oct 2025
Abstract
This paper presents a novel gateway-centric architecture for context-aware trust evaluation in Internet of Battle Things (IoBT) environments. The system is structured across multiple layers, from embedded sensing devices equipped with internal modules for signal filtering, anomaly detection, and encryption, to high-level data processing in a secure cloud infrastructure. At its core, the gateway evaluates the trustworthiness of sensor nodes by computing reputation scores based on behavioral and contextual metrics. This design offers operational advantages, including reduced latency, autonomous decision-making in the absence of central command, and real-time responses in mission-critical scenarios. Our system integrates supervised learning, specifically Decision Tree Regression (DTR), to estimate reputation scores using features such as transmission success rate, packet loss, latency, battery level, and peer feedback. The results demonstrate that the proposed approach ensures secure, resilient, and scalable trust management in distributed battlefield networks, enabling informed and reliable decision-making under harsh and dynamic conditions.
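
As a rough illustration of the DTR trust pipeline described in the abstract, the sketch below fits a scikit-learn DecisionTreeRegressor on synthetic node observations. The feature ranges, the ground-truth scoring formula, and the trust threshold are illustrative assumptions, not the paper's actual setup.

```python
# Minimal sketch of gateway-side trust scoring with Decision Tree Regression.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Synthetic training data: one row per node observation.
# Columns: transmission success rate, packet loss rate, latency (ms),
# battery level, mean peer feedback -- the feature set named in the abstract.
X = rng.uniform(
    low=[0.5, 0.0, 1.0, 0.0, 0.0],
    high=[1.0, 0.5, 200.0, 1.0, 1.0],
    size=(500, 5),
)
# Hypothetical ground-truth reputation: rewards success and peer feedback,
# penalizes loss and latency.
y = 0.4 * X[:, 0] - 0.3 * X[:, 1] - 0.001 * X[:, 2] + 0.1 * X[:, 3] + 0.2 * X[:, 4]

model = DecisionTreeRegressor(max_depth=6, min_samples_leaf=10)
model.fit(X, y)

# At runtime the gateway scores each sensor node from its latest metrics.
node_metrics = np.array([[0.97, 0.02, 35.0, 0.80, 0.9]])
reputation = model.predict(node_metrics)[0]
print(f"estimated reputation: {reputation:.3f}",
      "-> trusted" if reputation > 0.3 else "-> quarantine")
```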

23 pages, 8816 KB  
Article
Error Correction in Bluetooth Low Energy via Neural Network with Reject Option
by Wellington D. Almeida, Felipe P. Marinho, André L. F. de Almeida and Ajalmar R. Rocha Neto
Sensors 2025, 25(19), 6191; https://doi.org/10.3390/s25196191 - 6 Oct 2025
Viewed by 233
Abstract
This paper presents an approach to error correction in wireless communication systems, with a focus on the Bluetooth Low Energy standard. Our method uses the redundancy provided by the cyclic redundancy check and leaves the transmitter unchanged. The approach has two components: an error-detection algorithm that validates data packets and a neural network with reject option that classifies signals received from the channel and identifies bit errors for later correction. This design localizes and corrects errors and reduces transmission failures. Extensive simulations were conducted, and the results demonstrated promising performance. The method achieved correction rates of 94–98% for single-bit errors and 54–68% for double-bit errors, which reduced the need for packet retransmissions and lowered the risk of data loss. When applied to images, the approach enhanced visual quality compared with baseline methods. In particular, we observed improvements in visual quality for signal-to-noise ratios between 9 and 11 dB. In many cases, these enhancements were sufficient to restore the integrity of corrupted images.
(This article belongs to the Section Internet of Things)
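
The receive-side loop might look like the sketch below: hard-decide confident bits, treat low-confidence (rejected) bits as candidate error positions, and validate flips against the CRC. zlib.crc32 stands in for BLE's 24-bit CRC, and the reject band and flip budget are illustrative assumptions.

```python
# Sketch of a reject-option correction loop: bits the classifier is unsure
# about become candidate error positions, and flips are validated via CRC.
import itertools, zlib
import numpy as np

def crc_ok(bits: np.ndarray, expected: int) -> bool:
    # CRC-32 as a stand-in for BLE's CRC-24.
    return zlib.crc32(np.packbits(bits).tobytes()) == expected

def correct_packet(bit_probs, expected_crc, reject_band=(0.2, 0.8), max_flips=2):
    """bit_probs[i] = classifier's P(bit i == 1); rejected bits are candidates."""
    hard = (bit_probs > 0.5).astype(np.uint8)
    if crc_ok(hard, expected_crc):
        return hard  # packet already valid
    lo, hi = reject_band
    rejected = np.where((bit_probs > lo) & (bit_probs < hi))[0]  # low confidence
    for k in range(1, max_flips + 1):
        for combo in itertools.combinations(rejected, k):
            trial = hard.copy()
            trial[list(combo)] ^= 1
            if crc_ok(trial, expected_crc):
                return trial  # corrected without retransmission
    return None  # give up -> request retransmission

# Toy usage: a clean packet, its CRC, then one low-confidence bit error.
payload = np.random.randint(0, 2, 64).astype(np.uint8)
crc = zlib.crc32(np.packbits(payload).tobytes())
probs = payload.astype(float) * 0.9 + 0.05       # confident soft values
probs[10] = 0.55 if payload[10] == 0 else 0.45   # flipped, low-confidence bit
print("corrected:", correct_packet(probs, crc) is not None)
```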

29 pages, 2319 KB  
Article
Research on the Development of a Building Model Management System Integrating MQTT Sensing
by Ziang Wang, Han Xiao, Changsheng Guan, Liming Zhou and Daiguang Fu
Sensors 2025, 25(19), 6069; https://doi.org/10.3390/s25196069 - 2 Oct 2025
Viewed by 397
Abstract
Existing building management systems face critical limitations in real-time data integration, primarily relying on static models that lack dynamic updates from IoT sensors. To address this gap, this study proposes a novel system integrating MQTT over WebSocket with Three.js visualization, enabling real-time sensor-data binding to Building Information Models (BIM). The architecture leverages MQTT’s lightweight publish-subscribe protocol for efficient communication and employs a TCP-based retransmission mechanism to ensure 99.5% data reliability in unstable networks. A dynamic topic-matching algorithm is introduced to automate sensor-BIM associations, reducing manual configuration time by 60%. The system’s frontend, powered by Three.js, achieves browser-based 3D visualization with sub-second updates (280–550 ms latency), while the backend utilizes SpringBoot for scalable service orchestration. Experimental evaluations across diverse environments—including high-rise offices, industrial plants, and residential complexes—demonstrate the system’s robustness: fire alarms are triggered within 2.1 s (22% faster than legacy systems), availability remains at 98.2% under 30% packet loss, and facility managers rate the system 4.6/5 for satisfaction. This work advances intelligent building management by bridging IoT data with interactive 3D models, offering a scalable solution for emergency response, energy optimization, and predictive maintenance in smart cities.
(This article belongs to the Section Intelligent Sensors)
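
A minimal sketch of the sensor-to-frontend path, assuming a paho-mqtt 1.x-style client; the broker host, WebSocket path, topic scheme, and payload fields are hypothetical placeholders, not the paper's configuration.

```python
# Publishing sensor readings toward a BIM frontend over MQTT/WebSocket.
import json, time
import paho.mqtt.client as mqtt

client = mqtt.Client(transport="websockets")  # paho-mqtt 1.x constructor;
                                              # 2.x adds a callback-API argument
client.ws_set_options(path="/mqtt")           # typical broker WebSocket endpoint
client.connect("broker.example.local", 8083)  # hypothetical broker host/port
client.loop_start()

for _ in range(10):
    reading = {"sensorId": "floor3-temp-07", "type": "temperature",
               "value": 22.4, "ts": time.time()}
    # QoS 1 gives broker-level at-least-once delivery; the paper layers a
    # TCP-based retransmission mechanism on top for ~99.5% reliability.
    client.publish("hq-tower/floor3/temp-07", json.dumps(reading), qos=1)
    time.sleep(1.0)

client.loop_stop()
```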

18 pages, 2031 KB  
Article
The Impact of Security Protocols on TCP/UDP Throughput in IEEE 802.11ax Client–Server Network: An Empirical Study
by Nurul I. Sarkar, Nasir Faiz and Md Jahan Ali
Electronics 2025, 14(19), 3890; https://doi.org/10.3390/electronics14193890 - 30 Sep 2025
Viewed by 292
Abstract
IEEE 802.11ax (Wi-Fi 6) technologies provide high capacity, low latency, and increased security. While many network researchers have examined Wi-Fi security issues, the security implications of 802.11ax have not yet been fully explored. Therefore, in this paper, we investigate how security protocols (WPA2, WPA3) affect TCP/UDP throughput in IEEE 802.11ax client–server networks using a testbed approach. Through an extensive performance study, we analyze how the transport-layer protocol (TCP/UDP), the internet protocol version (IPv4/IPv6), and the operating system (MS Windows and Linux) affect system performance under each security protocol. The impact of packet length on system performance is also investigated. The obtained results show that WPA3 offers greater security and its impact on TCP/UDP throughput is insignificant, highlighting the robustness of WPA3 encryption in maintaining throughput even in secure environments. With WPA3, UDP offers higher throughput than TCP, and IPv6 consistently outperforms IPv4 in terms of both TCP and UDP throughput. Linux outperforms Windows in all scenarios, especially with larger packet sizes and IPv6 traffic. These results suggest that WPA3 provides optimized throughput performance in both Linux and MS Windows in 802.11ax client–server environments. Our research provides insights into security issues in Gigabit Wi-Fi that can help network researchers and engineers contribute further towards developing greater security for next-generation wireless networks.
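
For intuition about the throughput measurement itself, here is a self-contained UDP probe in the spirit of such testbeds: a receiver counts bytes over a fixed window and reports Mb/s. Host, port, packet size, and duration are placeholders; an actual study would typically use dedicated tools such as iperf.

```python
# Minimal loopback UDP throughput measurement.
import socket, threading, time

HOST, PORT, PACKET, DURATION = "127.0.0.1", 50007, 1400, 2.0  # 1400 B fits one frame

def receiver(result):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((HOST, PORT))
    sock.settimeout(0.5)
    total, t_end = 0, time.time() + DURATION
    while time.time() < t_end:
        try:
            data, _ = sock.recvfrom(65535)
            total += len(data)          # count only bytes actually delivered
        except socket.timeout:
            pass
    result["mbps"] = total * 8 / DURATION / 1e6

def sender():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = b"x" * PACKET
    t_end = time.time() + DURATION
    while time.time() < t_end:
        sock.sendto(payload, (HOST, PORT))

result = {}
rx = threading.Thread(target=receiver, args=(result,)); rx.start()
time.sleep(0.1)   # let the receiver bind first
sender(); rx.join()
print(f"UDP throughput: {result['mbps']:.1f} Mb/s")
```

The gap between offered load and what the receiver actually counts is exactly the UDP loss such studies report alongside throughput.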

35 pages, 3558 KB  
Article
Realistic Performance Assessment of Machine Learning Algorithms for 6G Network Slicing: A Dual-Methodology Approach with Explainable AI Integration
by Sümeye Nur Karahan, Merve Güllü, Deniz Karhan, Sedat Çimen, Mustafa Serdar Osmanca and Necaattin Barışçı
Electronics 2025, 14(19), 3841; https://doi.org/10.3390/electronics14193841 - 27 Sep 2025
Viewed by 385
Abstract
As 6G networks become increasingly complex and heterogeneous, effective classification of network slicing is essential for optimizing resources and managing quality of service. While recent advances demonstrate high accuracy under controlled laboratory conditions, a critical gap exists between algorithm performance evaluation under idealized conditions and their actual effectiveness in realistic deployment scenarios. This study presents a comprehensive comparative analysis of two distinct preprocessing methodologies for 6G network slicing classification: Pure Raw Data Analysis (PRDA) and Literature-Validated Realistic Transformations (LVRTs). We evaluate the impact of these strategies on algorithm performance, resilience characteristics, and practical deployment feasibility to bridge the laboratory–reality gap in 6G network optimization. Our experimental methodology involved testing eleven machine learning algorithms—including traditional ML, ensemble methods, and deep learning approaches—on a dataset comprising 10,000 network slicing samples (expanded to 21,033 through realistic transformations) across five network slice types. The LVRT methodology incorporates realistic operational impairments including market-driven class imbalance (9:1 ratio), multi-layer interference patterns, and systematic missing data reflecting authentic 6G deployment challenges. The experimental results revealed significant differences in algorithm behavior between the two preprocessing approaches. Under PRDA conditions, deep learning models achieved perfect accuracy (100% for CNN and FNN), while traditional algorithms ranged from 60.9% to 89.0%. However, LVRT results exposed dramatic performance variations, with accuracies spanning from 58.0% to 81.2%. Most significantly, we discovered that algorithms achieving excellent laboratory performance experience substantial degradation under realistic conditions, with CNNs showing an 18.8% accuracy loss (dropping from 100% to 81.2%), FNNs experiencing an 18.9% loss (declining from 100% to 81.1%), and Naive Bayes models suffering a 34.8% loss (falling from 89% to 58%). Conversely, SVM (RBF) and Logistic Regression demonstrated counter-intuitive resilience, improving by 14.1 and 10.3 percentage points, respectively, under operational stress, demonstrating superior adaptability to realistic network conditions. This study establishes a resilience-based classification framework enabling informed algorithm selection for diverse 6G deployment scenarios. Additionally, we introduce a comprehensive explainable artificial intelligence (XAI) framework using SHAP analysis to provide interpretable insights into algorithm decision-making processes. The XAI analysis reveals that Packet Loss Budget emerges as the dominant feature across all algorithms, while Slice Jitter and Slice Latency constitute secondary importance features. Cross-scenario interpretability consistency analysis demonstrates that CNN, LSTM, and Naive Bayes achieve perfect or near-perfect consistency scores (0.998–1.000), while SVM and Logistic Regression maintain high consistency (0.988–0.997), making them suitable for regulatory compliance scenarios. In contrast, XGBoost shows low consistency (0.106) despite high accuracy, requiring intensive monitoring for deployment.

This research contributes essential insights for bridging the critical gap between algorithm development and deployment success in next-generation wireless networks, providing evidence-based guidelines for algorithm selection based on accuracy, resilience, and interpretability requirements. Our findings establish quantitative resilience boundaries: algorithms achieving >99% laboratory accuracy exhibit 58–81% performance under realistic conditions, with CNN and FNN maintaining the highest absolute accuracy (81.2% and 81.1%, respectively) despite experiencing significant degradation from laboratory conditions.
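
The LVRT idea — degrading a clean lab dataset with market-driven class imbalance, interference-like noise, and systematic missingness — can be sketched as below. Column names, noise and missingness rates, and the way the 9:1 ratio is applied are illustrative assumptions.

```python
# Sketch of "realistic transformations" applied to a clean lab dataset.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

def to_realistic(df, label_col, majority, imbalance=9.0,
                 noise_sd=0.05, missing_rate=0.03):
    # 1) Class imbalance: keep the majority slice, subsample the rest to ~9:1.
    maj = df[df[label_col] == majority]
    mino = df[df[label_col] != majority]
    mino = mino.sample(n=min(len(mino), max(1, int(len(maj) / imbalance))),
                       random_state=42)
    out = pd.concat([maj, mino]).sample(frac=1.0, random_state=42)  # shuffle
    # 2) Interference-like additive noise on numeric features.
    num = out.select_dtypes("number").columns
    noise = rng.normal(0.0, noise_sd, size=(len(out), len(num)))
    out[num] = out[num] + noise * out[num].std().values
    # 3) Systematic missingness (e.g., a flaky telemetry exporter).
    mask = rng.random((len(out), len(num))) < missing_rate
    out[num] = out[num].mask(mask)
    return out

# Toy usage with two slice types; the real data has five slices and QoS
# features such as Packet Loss Budget, Slice Jitter, and Slice Latency.
df = pd.DataFrame({"packet_loss_budget": rng.random(1000),
                   "slice_jitter": rng.random(1000),
                   "slice": ["eMBB"] * 900 + ["URLLC"] * 100})
print(to_realistic(df, "slice", "eMBB")["slice"].value_counts())
```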

32 pages, 1432 KB  
Review
A Review of Multi-Microgrids Operation and Control from a Cyber-Physical Systems Perspective
by Ola Ali and Osama A. Mohammed
Computers 2025, 14(10), 409; https://doi.org/10.3390/computers14100409 - 25 Sep 2025
Viewed by 329
Abstract
Developing multi-microgrid (MMG) systems provides a new paradigm for power distribution systems with a higher degree of resilience, flexibility, and sustainability. The inclusion of communication networks as part of MMG is critical for coordinating distributed energy resources (DERs) in real time and deploying energy management systems (EMS) efficiently. However, communication quality of service (QoS) parameters such as latency, jitter, packet loss, and throughput play an essential role in MMG control and stability, especially in highly dynamic and high-traffic situations. This paper presents a focused review of MMG systems from a cyber-physical viewpoint, particularly the challenges and implications of communication network performance for energy management. The reviewed literature covers control strategies, communication infrastructure models, cybersecurity challenges, and co-simulation platforms. We identify research gaps, including, but not limited to, the need for scalable, real-time cyber-physical systems; the limited research examining communication QoS under realistic conditions and traffic; and the lack of integrated cybersecurity strategies for MMGs. We suggest future research opportunities addressing these gaps to enhance the resiliency, adaptability, and sustainability of modern cyber-physical MMGs.

15 pages, 1698 KB  
Article
AI-Driven Energy-Efficient Data Aggregation and Routing Protocol Modeling to Maximize Network Lifetime in Wireless Sensor Networks
by R. Arun Chakravarthy, C. Sureshkumar, M. Arun and M. Bhuvaneswari
NDT 2025, 3(4), 22; https://doi.org/10.3390/ndt3040022 - 25 Sep 2025
Viewed by 262
Abstract
This work presents an artificial intelligence-driven, energy-aware data aggregation and routing protocol for wireless sensor networks (WSNs) with the primary objective of extending overall network lifetime. The proposed scheme leverages reinforcement learning in conjunction with deep Q-networks (DQNs) to adaptively optimize both Cluster Head (CH) selection and routing decisions. An adaptive clustering mechanism is introduced wherein factors such as residual node energy, spatial proximity, and traffic load are jointly considered to elect suitable CHs. This approach mitigates premature energy depletion at individual nodes and promotes balanced energy consumption across the network, thereby enhancing node sustainability. For data forwarding, the routing component employs a DQN-based strategy to dynamically identify energy-efficient transmission paths, ensuring reduced communication overhead and reliable sink connectivity. Performance evaluation, conducted through extensive simulations, utilizes key metrics including network lifetime, total energy consumption, packet delivery ratio (PDR), latency, and load distribution. Comparative analysis with baseline protocols such as LEACH, PEGASIS, and HEED demonstrates that the proposed protocol achieves superior energy efficiency, higher packet delivery reliability, and lower packet losses, while adapting effectively to varying network dynamics. The experimental outcomes highlight the scalability and robustness of the protocol, underscoring its suitability for diverse WSN applications including environmental monitoring, surveillance, and Internet of Things (IoT)-oriented deployments.
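
A sketch of the adaptive CH election described above: each node is scored on residual energy, distance to the sink, and queued traffic, and the best-scoring node becomes CH. The fixed weights and linear scoring form are assumptions standing in for what the DQN-based scheme actually learns.

```python
# Cluster-head election weighing residual energy, proximity, and load.
import numpy as np

rng = np.random.default_rng(7)
N, SINK = 20, np.array([50.0, 50.0])

nodes = {
    "pos":    rng.uniform(0, 100, size=(N, 2)),
    "energy": rng.uniform(0.2, 1.0, size=N),   # residual energy (fraction)
    "load":   rng.uniform(0.0, 1.0, size=N),   # queued traffic (normalized)
}

def elect_cluster_head(nodes, w_energy=0.5, w_dist=0.3, w_load=0.2):
    dist = np.linalg.norm(nodes["pos"] - SINK, axis=1)
    dist_norm = dist / dist.max()
    # High residual energy is good; distance to sink and load are penalties.
    score = w_energy * nodes["energy"] - w_dist * dist_norm - w_load * nodes["load"]
    return int(np.argmax(score)), score

ch, score = elect_cluster_head(nodes)
print(f"elected CH: node {ch} (score {score[ch]:.3f})")
# Re-running the election each round as energies deplete rotates the CH role,
# which is what balances consumption across the network.
```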

22 pages, 858 KB  
Systematic Review
Network Data Flow Collection Methods for Cybersecurity: A Systematic Literature Review
by Alessandro Carvalho Coutinho and Luciano Vieira de Araújo
Computers 2025, 14(10), 407; https://doi.org/10.3390/computers14100407 - 24 Sep 2025
Viewed by 354
Abstract
Network flow collection has become a cornerstone of cyber defence, yet the literature still lacks a consolidated view of which technologies are effective across different environments and conditions. We conducted a systematic review of 362 publications indexed in six digital libraries between January 2019 and July 2025, of which 51 met PRISMA 2020 eligibility criteria. All extraction materials are archived on OSF. NetFlow derivatives appear in 62.7% of the studies, IPFIX in 45.1%, INT/P4 or OpenFlow mirroring in 17.6%, and sFlow in 9.8%, with totals exceeding 100% because several papers evaluate multiple protocols. In total, 17 of the 51 studies (33.3%) tested production links of at least 40 Gbps, while others remained in laboratory settings. Fewer than half reported packet-loss thresholds or privacy controls, and none adopted a shared benchmark suite. These findings highlight trade-offs between throughput, fidelity, computational cost, and privacy, as well as gaps in encrypted-traffic support and GDPR-compliant anonymisation. Most importantly, our synthesis demonstrates that flow-collection methods directly shape what can be detected: some exporters are effective for volumetric attacks such as DDoS, while others enable visibility into brute-force authentication, botnets, or IoT malware. In other words, the choice of telemetry technology determines which threats and anomalous behaviours remain visible or hidden to defenders. By mapping technologies, metrics, and gaps, this review provides a single reference point for researchers, engineers, and regulators facing the challenges of flow-aware cybersecurity.
(This article belongs to the Section ICT Infrastructures for Cybersecurity)

20 pages, 7858 KB  
Article
Optimizing CO2 Monitoring: Evaluating a Sensor Network Design
by Kenia Elizabeth Sabando-Bravo, Marlon Navia and Jorge Luis Zambrano-Martinez
J. Sens. Actuator Netw. 2025, 14(5), 93; https://doi.org/10.3390/jsan14050093 - 19 Sep 2025
Viewed by 495
Abstract
In the present work, a sensor network design for monitoring carbon dioxide (CO2) pollution in Portoviejo City, Ecuador, is evaluated through a methodology that combines simulation and physical implementation. This methodology involves the development and evaluation of two scenarios: an initial [...] Read more.
In the present work, a sensor network design for monitoring carbon dioxide (CO2) pollution in Portoviejo City, Ecuador, is evaluated through a methodology that combines simulation and physical implementation. This methodology involves the development and evaluation of two scenarios: an initial scenario (A), developed through both physical implementation and simulation, and another simulation scenario (B). Both simulated scenarios are created using CupCarbon version 6.51 software. In these scenarios, the functionality of Wireless Sensor Networks (WSNs) is analyzed by implementing the LoRaWAN communication technology. Furthermore, the MQ-135 sensor is used to obtaining data on the PPM of (CO2) in order to examine the areas that concentrate the most significant amount of this atmospheric pollutant. The proposed networks are evaluated using the packet loss metric during data transmission. After implementation, analysis, and respective evaluation, it can be concluded that the network simulated in Scenario B is suitable for monitoring (CO2) and other pollutants that can be analyzed within the urban environment. Full article
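
The evaluation metric itself is simple: the packet loss ratio between what a node sent and what the gateway received. A minimal sketch:

```python
# Packet loss ratio from per-node transmit and receive counters.
def packet_loss_ratio(sent: int, received: int) -> float:
    """Fraction of packets lost in transit; 'sent' is the node's own counter."""
    if sent == 0:
        return 0.0
    return (sent - received) / sent

# e.g., a LoRaWAN node that transmitted 1440 readings, of which 1396 arrived:
print(f"{packet_loss_ratio(1440, 1396):.2%}")   # -> 3.06%
```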

20 pages, 3181 KB  
Article
Integrating Reinforcement Learning and LLM with Self-Optimization Network System
by Xing Xu, Jianbin Zhao, Yu Zhang and Rongpeng Li
Network 2025, 5(3), 39; https://doi.org/10.3390/network5030039 - 16 Sep 2025
Viewed by 694
Abstract
The rapid expansion of communication networks and increasingly complex service demands have presented significant challenges to the intelligent management of network resources. To address these challenges, we have proposed a network self-optimization framework integrating the predictive capabilities of the Large Language Model (LLM) with the decision-making capabilities of multi-agent Reinforcement Learning (RL). Specifically, historical network traffic data are converted into structured inputs to forecast future traffic patterns using a GPT-2-based prediction module. Concurrently, a Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm leverages real-time sensor data—including link delay and packet loss rates collected by embedded network sensors—to dynamically optimize bandwidth allocation. This sensor-driven mechanism enables the system to perform real-time optimization of bandwidth allocation, ensuring accurate monitoring and proactive resource scheduling. We evaluate our framework in a heterogeneous network simulated using Mininet under diverse traffic scenarios. Experimental results show that the proposed method significantly reduces network latency and packet loss, as well as improves robustness and resource utilization, highlighting the effectiveness of integrating sensor-driven RL optimization with predictive insights from LLMs.
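
The sensor-driven loop can be caricatured as follows: per-link delay and loss readings feed a negative-cost reward, and bandwidth shares drift toward the links under pressure. The reward weights and the proportional update rule are assumptions standing in for the MADDPG agents' learned policies, not the paper's algorithm.

```python
# Toy reward and bandwidth reallocation driven by link delay/loss sensors.
import numpy as np

def reward(delay_ms, loss_rate, w_delay=0.01, w_loss=5.0):
    # Negative cost: agents are trained to drive both delay and loss down.
    return float(-(w_delay * delay_ms + w_loss * loss_rate).sum())

def reallocate(shares, delay_ms, loss_rate, lr=0.2):
    congestion = 0.01 * delay_ms + 5.0 * loss_rate   # per-link pressure signal
    target = congestion / congestion.sum()           # pressured links get capacity
    shares = (1 - lr) * shares + lr * target         # smooth toward the target split
    return shares / shares.sum()

shares = np.array([1 / 3] * 3)               # three links, equal initial split
delay = np.array([12.0, 48.0, 90.0])         # ms, from embedded network sensors
loss = np.array([0.001, 0.01, 0.04])         # packet loss rates
print("reward:", reward(delay, loss))
print("new shares:", np.round(reallocate(shares, delay, loss), 3))
```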

18 pages, 3524 KB  
Article
Efficient Multi-Topology Failure Tolerance Mechanism in Polymorphic Network
by Ziyong Li, Bai Lin, Wenyu Jiang and Le Tian
Electronics 2025, 14(18), 3573; https://doi.org/10.3390/electronics14183573 - 9 Sep 2025
Viewed by 337
Abstract
Enhancing the failure tolerance ability of networks is crucial, as node or link failures are common occurrences in the field. Current fault tolerance schemes are divided into reactive and proactive schemes. A reactive scheme requires detection and repair after the failure occurs, which may lead to long network interruptions; a proactive scheme can reduce recovery time through preset backup paths, but requires additional resources. To address the long recovery times or high overhead of current failure tolerance schemes, the Polymorphic Network adopts field-definable network baseline technology that supports diversified addressing and routing capabilities, making it possible to implement more complex and efficient failure tolerance schemes. Inspired by this, we propose an efficient Multi-topology Failure Tolerance mechanism in Polymorphic Network (MFT-PN). MFT-PN embeds a failure recovery function into the packet processing logic by leveraging the fully programmable characteristics of the network elements, improving failure recovery efficiency. The backup path information is pushed into the header of the failed packet to reduce flow table storage overhead. Meanwhile, MFT-PN introduces the concept of multi-topology routing by constructing multiple logical topologies, with each topology adopting a different failure recovery strategy. We then design a multi-topology loop-free link backup algorithm to calculate the backup path for each topology, providing extensive coverage of different failure scenarios. Experimental results show that, compared with existing strategies, MFT-PN reduces resource overhead by over 72% and the packet loss rate by over 59%, while effectively coping with multiple failure scenarios.
(This article belongs to the Section Networks)
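
The per-topology backup computation can be sketched with networkx: for each protected link, the backup is the shortest path computed with that link removed, which is the route that would be pushed into a failed packet's header. The 5-node graph and the per-topology link sets are illustrative, not the paper's algorithm.

```python
# Per-topology loop-free backup paths: shortest path avoiding the failed link.
import networkx as nx

G = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 4)])

def backup_paths(G, links_to_protect):
    """Per protected link, an alternate path avoiding that link (None if cut)."""
    table = {}
    for u, v in links_to_protect:
        H = G.copy()
        H.remove_edge(u, v)
        try:
            table[(u, v)] = nx.shortest_path(H, u, v)
        except nx.NetworkXNoPath:
            table[(u, v)] = None  # the link is a bridge in this topology
    return table

# Two logical topologies protecting different link sets, as in
# multi-topology routing.
topo_a = [(0, 1), (1, 2)]
topo_b = [(3, 4), (1, 4)]
for name, topo in (("A", topo_a), ("B", topo_b)):
    print(name, backup_paths(G, topo))
```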

27 pages, 2027 KB  
Article
Comparative Analysis of SDN and Blockchain Integration in P2P Streaming Networks for Secure and Reliable Communication
by Aisha Mohmmed Alshiky, Maher Ali Khemakhem, Fathy Eassa and Ahmed Alzahrani
Electronics 2025, 14(17), 3558; https://doi.org/10.3390/electronics14173558 - 7 Sep 2025
Viewed by 603
Abstract
Rapid advancements in peer-to-peer (P2P) streaming technologies have significantly impacted digital communication, enabling scalable, decentralized, and real-time content distribution. Despite these advancements, challenges persist, including dynamic topology management, high latency, security vulnerabilities, and unfair resource sharing (e.g., free riding). While software-defined networking (SDN) and blockchain individually address aspects of these limitations, their combined potential for comprehensive optimization remains underexplored. This study proposes a distributed SDN (DSDN) architecture enhanced with blockchain support to provide secure, scalable, and reliable P2P video streaming. We identified research gaps through critical analysis of the literature and systematically compared traditional P2P, SDN-enhanced, and hybrid architectures across six performance metrics: latency, throughput, packet loss, authentication accuracy, packet delivery ratio, and control overhead. Simulations with 200 peers demonstrate that the proposed hybrid SDN–blockchain framework achieves a latency of 140 ms, a throughput of 340 Mbps, an authentication accuracy of 98%, a packet delivery ratio of 97.8%, a packet loss ratio of 2.2%, and a control overhead of 9.3%, outperforming state-of-the-art solutions such as NodeMaps, the reinforcement learning-based routing framework (RL-RF), and hybrid content delivery network–P2P networks (CDN-P2P). This work establishes a scalable and attack-resilient foundation for next-generation P2P streaming.
(This article belongs to the Section Computer Science & Engineering)

23 pages, 999 KB  
Article
Decentralized and Network-Aware Task Offloading for Smart Transportation via Blockchain
by Fan Liang
Sensors 2025, 25(17), 5555; https://doi.org/10.3390/s25175555 - 5 Sep 2025
Viewed by 1118
Abstract
As intelligent transportation systems (ITSs) evolve rapidly, the increasing computational demands of connected vehicles call for efficient task offloading. Centralized approaches face challenges in scalability, security, and adaptability to dynamic network conditions. To address these issues, we propose a blockchain-based decentralized task offloading framework with network-aware resource allocation and tokenized economic incentives. In our model, vehicles generate computational tasks that are dynamically mapped to available computing nodes—including vehicle-to-vehicle (V2V) resources, roadside edge servers (RSUs), and cloud data centers—based on a multi-factor score considering computational power, bandwidth, latency, and probabilistic packet loss. A blockchain transaction layer ensures auditable and secure task assignment, while a proof-of-stake (PoS) consensus and smart-contract-driven dynamic pricing jointly incentivize participation and balance workloads to minimize delay. In extensive simulations reflecting realistic ITS dynamics, our approach reduces total completion time by 12.5–24.3%, achieves a task success rate of 84.2–88.5%, improves average resource utilization to 88.9–92.7%, and sustains >480 transactions per second (TPS) with a 10 s block interval, outperforming centralized/cloud-based baselines. These results indicate that integrating blockchain incentives with network-aware offloading yields secure, scalable, and efficient management of computational resources for future ITSs.
(This article belongs to the Special Issue Feature Papers in the Internet of Things Section 2025)
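
A multi-factor offload score of the kind the abstract describes might be sketched as below: higher compute and bandwidth help, latency hurts, and the expected packet loss discounts the whole candidate. The weights, normalizations, and loss-discount form are assumptions for illustration.

```python
# Scoring candidate computing nodes for a vehicle's offload decision.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    gflops: float      # available compute
    mbps: float        # link bandwidth to the vehicle
    latency_ms: float
    p_loss: float      # probabilistic packet loss on the link

def offload_score(n: Node, w=(0.4, 0.3, 0.3)) -> float:
    wc, wb, wl = w
    base = wc * n.gflops / 100 + wb * n.mbps / 1000 - wl * n.latency_ms / 100
    return base * (1.0 - n.p_loss)   # lossy links devalue the candidate

candidates = [
    Node("v2v-peer", gflops=20,  mbps=100,  latency_ms=5,  p_loss=0.05),
    Node("rsu-edge", gflops=60,  mbps=500,  latency_ms=15, p_loss=0.01),
    Node("cloud-dc", gflops=400, mbps=1000, latency_ms=80, p_loss=0.002),
]
best = max(candidates, key=offload_score)
# With this compute-heavy weighting the cloud wins; a latency-critical task
# would weight delay more and land on the RSU or a V2V peer instead.
print(best.name, f"score={offload_score(best):.3f}")
```

In the full framework this assignment would then be recorded on-chain and priced by the smart contract, which is what keeps workloads balanced.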

23 pages, 2216 KB  
Article
An Adaptive Application-Aware Dynamic Load Balancing Framework for Open-Source SD-WAN
by Teodor Petrović, Aleksa Vidaković, Ilija Doknić, Mladen Veinović and Živko Bojović
Sensors 2025, 25(17), 5516; https://doi.org/10.3390/s25175516 - 4 Sep 2025
Viewed by 1133
Abstract
Traditional Software-Defined Wide Area Network (SD-WAN) solutions lack adaptive load-balancing mechanisms, leading to inefficient traffic distribution, increased latency, and performance degradation. This paper presents an Application-Aware Dynamic Load Balancing (AADLB) framework designed for open-source SD-WAN environments. The proposed solution enables dynamic traffic routing based on real-time network performance indicators, including CPU utilization, memory usage, connection delay, and packet loss, while considering application-specific requirements. Unlike conventional load-balancing methods, such as Weighted Round Robin (WRR), Weighted Fair Queuing (WFQ), Priority Queuing (PQ), and Deficit Round Robin (DRR), AADLB continuously updates traffic weights based on application requirements and network conditions, ensuring optimal resource allocation and improved Quality of Service (QoS). The AADLB framework leverages a heuristic-based dynamic weight assignment algorithm to redistribute traffic in a multi-cloud environment, mitigating congestion and enhancing system responsiveness. Experimental results demonstrate that compared to these traditional algorithms, the proposed AADLB framework improved CPU utilization by an average of 8.40%, enhanced CPU stability by 76.66%, increased RAM utilization stability by 6.97%, slightly reduced average latency by 2.58%, and significantly enhanced latency consistency by 16.74%. These improvements enhance SD-WAN scalability, optimize bandwidth usage, and reduce operational costs. Our findings highlight the potential of application-aware dynamic load balancing in SD-WAN, offering a cost-effective and scalable alternative to proprietary solutions.
(This article belongs to the Section Sensor Networks)
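
A sketch of the heuristic weight update behind such a framework: each WAN path's health is an exponentially weighted moving average over live metrics (CPU, memory, delay, loss), and traffic is split in proportion to the resulting weights. The metric scaling constants and the smoothing factor are illustrative assumptions, not AADLB's actual coefficients.

```python
# EWMA-smoothed path weights driving an SD-WAN traffic split.
import numpy as np

ALPHA = 0.3  # EWMA smoothing: how fast weights react to new measurements

def path_health(cpu, mem, delay_ms, loss):
    """1.0 = perfect path; each term subtracts a normalized penalty."""
    return max(0.05, 1.0 - 0.25 * cpu - 0.15 * mem - 0.004 * delay_ms - 8.0 * loss)

def update_weights(weights, samples):
    health = np.array([path_health(*s) for s in samples])
    weights = (1 - ALPHA) * weights + ALPHA * health  # smoothing avoids flapping
    return weights / weights.sum()                    # traffic share per path

weights = np.array([0.5, 0.5])   # two WAN links, equal starting split
# (cpu, mem, delay_ms, loss) sampled from each path's edge device:
samples = [(0.9, 0.6, 80.0, 0.02), (0.3, 0.4, 25.0, 0.001)]
for _ in range(5):               # a few monitoring intervals
    weights = update_weights(weights, samples)
print("traffic split:", np.round(weights, 3))
```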

20 pages, 5966 KB  
Article
Formation Control of Multiple UUVs Based on GRU-KF with Communication Packet Loss
by Juan Li, Rui Luo, Honghan Zhang and Zhenyang Tian
J. Mar. Sci. Eng. 2025, 13(9), 1696; https://doi.org/10.3390/jmse13091696 - 2 Sep 2025
Viewed by 405
Abstract
In response to the problem of degraded collaborative control performance in unmanned underwater vehicles (UUVs) under communication packet loss, a GRU-KF method for multi-UUV control that integrates a gated recurrent unit (GRU) and a Kalman filter (KF) is proposed. First, a UUV feedback linearization model and a current model are established, and a leader–follower-based multi-UUV controller is designed, using a radial basis function (RBF) neural network to counteract the uncertainty effects in the model. For scenarios involving packet loss in multi-UUV collaborative communication, the GRU network extracts historical temporal features to enhance the system's adaptability to communication uncertainties, while the KF performs state estimation and error correction. The simulation results show that, compared to compensation by the GRU network alone, the proposed method significantly reduces the jitter level and convergence time of errors, enabling the formation to exhibit good robustness and accuracy in communication packet loss scenarios.
(This article belongs to the Section Ocean Engineering)
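
The GRU-KF fusion can be sketched in one dimension: when a packet from the leader is lost, a learned predictor (here a simple extrapolation stub standing in for the trained GRU) supplies a pseudo-measurement, and the Kalman filter fuses it to keep the state estimate smooth. The constant-velocity model, noise levels, and 30% loss rate are illustrative assumptions.

```python
# 1-D Kalman filter whose measurements are backfilled by a predictor on loss.
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity transition
H = np.array([[1.0, 0.0]])                   # we only measure position
Q, R = np.eye(2) * 1e-3, np.array([[0.05]])  # process / measurement noise

def gru_predict(history):
    """Stand-in for the trained GRU: extrapolates from recent measurements."""
    return 2 * history[-1] - history[-2]      # linear extrapolation

x, P = np.array([0.0, 1.0]), np.eye(2)
history, rng = [0.0, 0.1], np.random.default_rng(1)

for k in range(2, 50):
    x, P = F @ x, F @ P @ F.T + Q                        # KF predict
    lost = rng.random() < 0.3                            # 30% packet loss
    z = gru_predict(history) if lost else k * dt + rng.normal(0, 0.05)
    y = z - (H @ x)[0]                                   # innovation
    S = H @ P @ H.T + R
    K = (P @ H.T) / S[0, 0]                              # gain (scalar measurement)
    x = x + (K * y).ravel()
    P = (np.eye(2) - K @ H) @ P
    history.append(z)

print("final position estimate:", round(x[0], 3), "truth:", round(49 * dt, 3))
```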
