Search Results (395)

Search Parameters:
Keywords = packet loss rate

23 pages, 1004 KB  
Article
A Lightweight IDS Based on Blockchain and Machine Learning for Detecting Physical Attacks in Wireless Sensor Networks
by Maytham S. Jabor, Aqeel S. Azez, José Carlos Campelo and Alberto Bonastre
Sensors 2026, 26(6), 1961; https://doi.org/10.3390/s26061961 - 20 Mar 2026
Abstract
Wireless sensor networks (WSNs) are vulnerable to physical attacks in which adversaries gain partial or full control of sensor nodes, compromising the integrity of the network. Conventional security mechanisms impose excessive computational overhead and are not well suited to resource-constrained WSN devices. This paper proposes a lightweight, two-layer intrusion detection system (IDS) that integrates blockchain (BC) technology with machine learning for physical attack detection in WSNs. The first layer employs a lightweight BC protocol among cluster heads (CHs) and the base station (BS) to detect data integrity violations through hash-based consensus. The second layer applies an artificial neural network (ANN) at the base station to detect attacks that bypass blockchain verification, without imposing any processing load on sensor nodes. Simulation experiments on a 100-node WSN demonstrate that the combined system achieves 97.42% accuracy and 98.35% recall, outperforming five established classifiers and both standalone components. The system sustains detection rates above 99.98% under 30 simultaneous attackers and maintains reliable operation under packet loss conditions up to 10%. Full article
(This article belongs to the Special Issue Privacy and Cybersecurity in IoT-Based Applications)
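The first layer's hash-based integrity check is easy to illustrate. A minimal sketch (illustrative only, not the authors' protocol; SHA-256 and the chaining scheme are our assumptions):

```python
import hashlib

def chain_hash(prev_hash: str, payload: str) -> str:
    """Hash-link one reading to the previous block (SHA-256)."""
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

def build_chain(readings):
    """A cluster head builds a lightweight hash chain over its readings."""
    h = "0" * 64  # genesis value
    chain = []
    for r in readings:
        h = chain_hash(h, r)
        chain.append(h)
    return chain

def verify_chain(readings, chain):
    """The base station recomputes the chain; any mismatch flags tampering."""
    h = "0" * 64
    for r, expected in zip(readings, chain):
        h = chain_hash(h, r)
        if h != expected:
            return False
    return True

readings = ["t=21.3", "t=21.4", "t=21.2"]
chain = build_chain(readings)
assert verify_chain(readings, chain)        # intact data passes
tampered = ["t=21.3", "t=99.9", "t=21.2"]   # physical-attack-style modification
assert not verify_chain(tampered, chain)    # integrity violation detected
```

Attacks that forge a consistent chain would bypass this check, which is the role the paper assigns to the second-layer ANN at the base station.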

14 pages, 4757 KB  
Article
Design and Implementation of an IoT-Based Low-Power Wearable EEG Sensing System for Home-Based Sleep Monitoring
by Ya Wang, Jun-Bo Chen and Yu-Ting Chen
Sensors 2026, 26(6), 1803; https://doi.org/10.3390/s26061803 - 12 Mar 2026
Abstract
Long-term home-based sleep monitoring requires wearable sensing devices that strictly balance signal precision with power constraints. This study presents the design and implementation of a low-noise, low-power wearable single-channel electroencephalography (EEG) system for automatic sleep staging. The hardware architecture integrates a TI ADS1298 analog front-end with an STM32F4 microcontroller, utilizing differential sampling and hardware-based filtering to effectively suppress power-line interference and baseline drift. System-level testing demonstrates an average power consumption of approximately 150.85 mW, enabling over 24.6 h of continuous operation on a 1000 mAh battery, which meets the requirements for overnight monitoring. To achieve accurate staging without draining the wearable’s battery, we adopted and deployed a lightweight deep learning model, SleePyCo, on the cloud backend. This architecture was specifically optimized for our edge–cloud collaborative execution, which combines contrastive representation learning with temporal dependency modeling. Validation on the ISRUC dataset yielded an overall accuracy of 79.3% ± 3.0%, with a notable F1-score of 88.3% for Deep Sleep (N3). Furthermore, practical field trials involving 10 healthy subjects verified the system’s engineering stability, achieving a valid data rate exceeding 97% and a Bluetooth packet loss rate of only 0.8%. These results confirm that the proposed hardware–software co-designed system provides a robust, energy-efficient IoMT sensing solution for daily sleep health management. Full article
(This article belongs to the Section Wearables)
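The reported battery life follows directly from the measured draw. A quick check, assuming a nominal 3.7 V Li-ion cell (the cell voltage is our assumption, not stated in the abstract):

```python
# Figures from the abstract; the 3.7 V nominal cell voltage is assumed.
capacity_mAh = 1000.0
nominal_voltage_V = 3.7
avg_power_mW = 150.85

energy_mWh = capacity_mAh * nominal_voltage_V   # 3700 mWh available
runtime_h = energy_mWh / avg_power_mW           # ~24.5 h, consistent with the reported 24.6 h
print(f"runtime ≈ {runtime_h:.1f} h")
```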

16 pages, 3234 KB  
Article
Flexible Vis/NIR Wireless Sensing and Estimation with DeepEnsemble Learning for Pork
by Maoyuan Yin, Daixin Liu, Hongyan Yang, Xiaoshuang Shi, Guan Xiong, Min Zhang, Tianyu Zhu, Lingling Chen, Ruihua Zhang and Xinqing Xiao
Agriculture 2026, 16(6), 650; https://doi.org/10.3390/agriculture16060650 - 12 Mar 2026
Abstract
The rapid chilling and aging stages following pork slaughter represent a critical window for determining final physicochemical quality and flavor development. To address the destructive nature of conventional meat quality assessment methods and the limitations of rigid spectral probes when applied to irregular biological surfaces, this study developed and validated a wireless monitoring system integrating a flexible visible/near-infrared (VIS/NIR) sensing array with ensemble learning algorithms. The proposed system enables non-destructive, continuous monitoring of pork quality during cold-chain storage. A DeepEnsemble regression model based on a stacking framework was constructed by integrating Partial Least Squares Regression (PLSR), Support Vector Regression (SVR), and Extreme Gradient Boosting (XGBoost) to predict pH, moisture content, and total amino acid concentration. During a 26 h dynamic aging experiment, the proposed model achieved coefficients of determination (R2) of 0.9019, 0.9687, and 0.9600 for pH, moisture content, and total amino acids, respectively, with prediction performance exceeding that of individual regression models. The wireless transmission module maintained stable data communication under low-temperature and high-humidity conditions (−20 °C and 0–4 °C), with packet loss rates below 0.1%. These results indicate that the proposed system can effectively capture the dynamic evolution of pork quality during aging and provides a practical non-destructive approach for intelligent pork quality evaluation, cold-chain monitoring, and digital management of meat supply chains. Full article
(This article belongs to the Section Agricultural Product Quality and Safety)
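The stacking idea behind the DeepEnsemble model can be sketched with toy base learners standing in for PLSR, SVR, and XGBoost (everything below is illustrative; real stacking fits the meta-learner on out-of-fold base predictions):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (toy stand-in for a base regressor)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def mse(pred, ys):
    return sum((p - y) ** 2 for p, y in zip(pred, ys)) / len(ys)

# toy data: a quality attribute vs. one spectral-band intensity
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

# level-0 ("base") models produce predictions...
a, b = fit_line(xs, ys)
base1 = [a * x + b for x in xs]            # linear model
base2 = [sum(ys) / len(ys)] * len(ys)      # naive mean model

# ...which the level-1 ("meta") learner combines; here a convex weight
# chosen by grid search over the same data, for brevity
best_w = min((w / 100 for w in range(101)),
             key=lambda w: mse([w * p + (1 - w) * q for p, q in zip(base1, base2)], ys))
stacked = [best_w * p + (1 - best_w) * q for p, q in zip(base1, base2)]
assert mse(stacked, ys) <= min(mse(base1, ys), mse(base2, ys))
```

The stacked predictor can never do worse than its best base model on the data the meta-weight was fit on, which is the mechanism behind the ensemble's reported edge over the individual regressors.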

55 pages, 3447 KB  
Article
A Microservices-Based Solution with Hybrid Communication for Energy Management in Smart Grid Environments
by Artur F. S. Veloso, José V. Reis and Ricardo A. L. Rabelo
Sensors 2026, 26(5), 1714; https://doi.org/10.3390/s26051714 - 9 Mar 2026
Abstract
The increasing variability of residential demand, combined with the expansion of distributed generation and electric vehicles, has introduced new challenges to the stability of Smart Grids (SGs). Centralized management models lack the flexibility required to operate under these conditions, reinforcing the need for scalable and data-driven architectures. This study proposes an energy management solution based on microservices, supported by hybrid communication in Low Power Wide Area Networks (LPWAN), integrating Long Range Wide Area Network (LoRaWAN) and LoRaMESH to enhance connectivity, local resilience, and reliability in data acquisition for Internet of Things (IoT) and Demand Response (DR) applications. A prototype composed of a Smart Meter (SM), a Data Aggregation Point (DAP), and a Concentrator (CON) was evaluated in a controlled environment, achieving Packet Delivery Rates above 97%, an average RSSI of −92 dBm, and a Signal-to-Noise Ratio close to 9 dB, validating the robustness of the hybrid communication. At a larger scale, data from 5567 households in the Low Carbon London (LCL) project were used to generate representative Load Profiles (LPs) through seven aggregation and clustering techniques, consistently identifying the 18:00–21:00 interval as the critical peak, with demand reaching up to 42% above the daily average. Fourteen load shifting algorithms were evaluated, and the Hybrid Adaptive Algorithm based on Intention and Resilience (HAAIR), proposed in this work, achieved the best overall performance with a 1.83% peak reduction, US$65.40 in cost savings, a reduction of 60 kg of CO2, a Comfort Loss Index of 0.04, resilience of 9.5, and reliability of 0.98. The results demonstrate that the integration of hybrid LPWAN communication, modular microservice-based architecture, and adaptive DR strategies driven by Artificial Intelligence (AI) represents a promising pathway toward scalable, resilient, and energy-efficient SGs. Full article
(This article belongs to the Special Issue LoRa Communication Technology for IoT Applications—2nd Edition)

25 pages, 2414 KB  
Article
Communication Bicasting for Improving Throughput and Fairness in Multihomed Networks Using QUIC with BBRv3
by Tomoya Kawana, Rei Nakagawa and Nariyoshi Yamai
Telecom 2026, 7(2), 29; https://doi.org/10.3390/telecom7020029 - 4 Mar 2026
Abstract
When devices equipped with multiple wireless network interfaces access the Internet via Wi-Fi, 4G, and 5G, external factors such as radio interference can increase packet loss rates, resulting in reduced communication speed. To address this issue, two approaches exist: the use of Bottleneck Bandwidth and Round-trip propagation time (BBR), a congestion control algorithm designed to mitigate the impact of packet loss and bicasting in multihomed networks. Bicasting in multihomed networks exploits multiple network paths by transmitting identical packets simultaneously over different networks, thereby reducing effective packet loss and mitigating throughput reduction. In this paper, we introduce a novel network architecture that effectively operates in lossy networks by combining bicasting with BBR. By utilizing QUIC and OpenFlow, the proposed architecture enables the construction of a multihomed network that is independent of the operating system (OS), allowing flexible configuration of congestion control algorithms. Furthermore, the introduction of a QUIC proxy enables the use of existing server-side applications without requiring any modifications. Using the proposed multihomed network, we evaluate communication performance for unicasting and bicasting under varying packet loss rates, and we also analyze fairness with competing Transmission control protocol (TCP) flows. The results indicate that the combination of BBRv3 and bicasting achieves fivefold higher throughput than TCP unicasting at a 1% packet loss rate while preserving fairness with competing TCP flows. Full article
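Why bicasting suppresses the effective loss rate: a packet is lost only if both copies are lost, so a 1% per-path rate drops to roughly 0.01%. A minimal simulation under an independent-loss assumption:

```python
import random

rng = random.Random(42)
N, p = 100_000, 0.01            # packets sent and per-path loss rate

def arrives() -> bool:
    """One transmission over a lossy path."""
    return rng.random() >= p

uni_lost = sum(1 for _ in range(N) if not arrives())
# bicasting: the packet survives if either copy arrives
bi_lost = sum(1 for _ in range(N) if not (arrives() or arrives()))
print(f"unicast loss ≈ {uni_lost / N:.4%}, bicast loss ≈ {bi_lost / N:.4%}")
```

With independent paths the expected bicast loss is p², i.e. about 0.0001 here, which is why the residual loss barely triggers BBR's loss response.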

30 pages, 2011 KB  
Article
Buffering and Adaptive Coding for Flooding with Randomized Network Coding on Multi-Hop Wireless Broadcasting
by Youji Fukuta, Yoshiaki Shiraishi, Masanori Hirotomo and Masami Mohri
Sensors 2026, 26(5), 1594; https://doi.org/10.3390/s26051594 - 3 Mar 2026
Abstract
Broadcast-based flooding in wireless ad hoc networks is subject to the broadcast storm problem, characterized by excessive transmissions, collisions, and link losses. While randomized network coding (RNC) enhances resilience against packet losses, efficient buffer management and adaptive transmission strategies are essential. This paper proposes novel buffering mechanisms and adaptive coding strategies to improve data unit reception rates in RNC-based broadcast flooding. Our buffering mechanism combines Last-In-First-Out (LIFO) and Least Recently Used (LRU) discard policies. When buffers are full, it prioritizes the discarding of stale, incomplete buffers based on elapsed time since the last coded block arrival, thereby overcoming First-In-First-Out (FIFO) limitations that prematurely discard buffers before sufficient coded blocks have accumulated. Our adaptive coding dynamically adjusts transmitted coded packets based on data unit duplication rates without inter-node coordination, reducing blocks during high duplication and increasing them under difficult reception conditions. Simulation experiments using OMNeT++ and INET framework for Vehicular Ad Hoc Networks demonstrate that LIFO+LRU buffering significantly increases the received data units and prevents redundant reception, while adaptive coding further improves reception rates under challenging conditions. Full article
(This article belongs to the Section Sensor Networks)
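The LIFO+LRU discard policy can be sketched as follows (a simplified model of the idea, not the authors' implementation; buffer contents, timestamps, and capacity are illustrative):

```python
class RNCBufferPool:
    """Sketch of the LIFO+LRU-style discard: when the pool is full, evict the
    incomplete buffer whose last coded-block arrival is stalest, rather than
    the first-created (FIFO) one."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.buffers = {}   # data_unit_id -> {"blocks": [...], "last_arrival": t}

    def add_block(self, unit_id, block, now: float):
        if unit_id not in self.buffers and len(self.buffers) >= self.capacity:
            stalest = min(self.buffers, key=lambda k: self.buffers[k]["last_arrival"])
            del self.buffers[stalest]
        buf = self.buffers.setdefault(unit_id, {"blocks": [], "last_arrival": now})
        buf["blocks"].append(block)
        buf["last_arrival"] = now

pool = RNCBufferPool(capacity=2)
pool.add_block("A", b"b1", now=0.0)
pool.add_block("B", b"b2", now=1.0)
pool.add_block("A", b"b3", now=2.0)   # fresh coded block refreshes A; B is now stalest
pool.add_block("C", b"b4", now=3.0)   # pool full: evicts stale B, keeps active A
assert set(pool.buffers) == {"A", "C"}
```

Under FIFO, buffer "A" would have been evicted first even though it was still accumulating coded blocks, which is exactly the failure mode the paper's policy avoids.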

15 pages, 5848 KB  
Article
A Software Defined Radio Implementation of Non-Orthogonal Multiple Access with Reliable Decoding via Error Correction
by Dipanjan Adhikary and Eirini Eleni Tsiropoulou
Future Internet 2026, 18(3), 128; https://doi.org/10.3390/fi18030128 - 2 Mar 2026
Abstract
Non-orthogonal multiple access (NOMA) has been identified as one of the key technologies for 6G capacity and latency gains. However, existing implementation challenges of the NOMA technique, related to carrier, timing, and phase offsets, successive interference cancellation (SIC) error propagation, packet loss dynamics, and host-to-SDR processing jitter, create obstacles in the practical implementation of NOMA. This paper bridges the gap between theory and hardware by introducing a complete two-user NOMA transmit–receive chain on a low-cost ADALM-Pluto software defined radio (SDR) platform. The proposed implementation integrates matched filtering, offset estimation and correction, SIC with waveform reconstruction and subtraction, and reliability reinforcement via rate-1/2 convolutional coding with Viterbi decoding. We have performed a complete validation of the proposed design in both downlink and uplink modes. We collected data regarding the packet-level and system-related metrics, such as end-to-end latency, bit error rate (BER), and success rate. Moreover, we demonstrate the implementation of the uplink NOMA without the need for expensive GPS-disciplined oscillators by leveraging the Pluto Rev-C dual-transmit channels that share a common oscillator. We present detailed experimental results at 915 MHz with BPSK modulation for the downlink performance, and also show a full implementation of the uplink NOMA. We observe excellent reliability for the downlink setup and good reliability for the uplink system. Full article
(This article belongs to the Special Issue State-of-the-Art Future Internet Technology in USA 2026–2027)

20 pages, 682 KB  
Article
ARQ-Enhanced Short-Packet NOMA Communications with STAR-RIS
by Zhipeng Wang, Jin Li, Shuai Zhang and Dechuan Chen
Telecom 2026, 7(2), 25; https://doi.org/10.3390/telecom7020025 - 2 Mar 2026
Abstract
To address the rigorous requirements of ultra-reliable low-latency communication (URLLC) in beyond 5G/6G networks, we propose an innovative architecture combining the automatic repeat request (ARQ) protocol with a simultaneously transmitting and reflecting reconfigurable intelligent surface (STAR-RIS) to enhance short-packet non-orthogonal multiple access (NOMA) communications. Specifically, the retransmission mechanism provided by ARQ is utilized to mitigate packet errors stemming from practical system imperfections, i.e., imperfect channel state information (ipCSI), imperfect successive interference cancellation (ipSIC), and hardware impairments. Using the analytical foundation provided by finite blocklength (FBL) theory, expressions for two key performance metrics, i.e., the average block error rate (BLER) and effective throughput, are derived for two NOMA users. Simulation results validate the analytical derivations and demonstrate that the ARQ scheme provides significant reliability gains for each user and achieves synergistic gain with STAR-RIS technology. In addition, the effective throughput exhibits a peak at an optimal blocklength, balancing the reliability gain from a longer blocklength against the spectral efficiency loss from a lower coding rate. This optimal blocklength decreases with more STAR-RIS elements, as improved channel conditions reduce the need for long blocklengths. Full article
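The first-order reliability gain from ARQ is simple to state: assuming independent block errors across retransmission rounds, the residual BLER after K rounds is the single-shot BLER raised to the K-th power. A worked example with an assumed single-shot BLER (not a value from the paper):

```python
def residual_bler(eps: float, rounds: int) -> float:
    """A block remains in error only if every ARQ round fails,
    assuming independent errors across retransmissions."""
    return eps ** rounds

eps = 0.1   # assumed single-shot BLER for illustration
for k in (1, 2, 3):
    print(f"{k} round(s): residual BLER ≈ {residual_bler(eps, k):.0e}")
```

In practice errors are correlated through ipCSI, ipSIC, and hardware impairments, which is why the paper derives the FBL expressions rather than relying on this independence bound.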

15 pages, 2735 KB  
Article
IBPS—A Novel Integrated Battery Protection System Based on Novel High-Precision Pressure Sensing
by Meiya Dong, Biaokai Zhu, Fangyong Tan and Gang Liu
Electronics 2026, 15(5), 1013; https://doi.org/10.3390/electronics15051013 - 28 Feb 2026
Abstract
Nowadays, thermal runaway accidents involving lithium batteries in new energy vehicles and energy storage power stations occur frequently, with battery deformation pressure as the core precursor signal. Traditional battery protection schemes suffer from limitations, including wired connections, limited real-time remote monitoring, and insufficient sensing accuracy, rendering them unable to meet the safety monitoring needs of large-scale battery modules. Therefore, a high-precision pressure-sensing battery protection system based on the Internet of Things has been developed. This paper selects a MEMS high-precision pressure sensor with an accuracy of ±0.1 kPa to design an IoT sensing node based on the STM32L431 and LoRa/Wi-Fi 6, integrating pressure sensing and wireless communication. It proposes a sliding-average filtering and wavelet denoising algorithm, as well as a temperature-compensation calibration model, to optimize sensing accuracy. Additionally, it constructs a hierarchical early warning model based on pressure thresholds. The experiment demonstrates that the sensor achieves a detection accuracy of 99.2%, a response delay of less than 50 ms, a transmission packet loss rate of less than 0.5%, an end-to-end delay of less than 200 ms, and an early warning accuracy rate of 99.2% under battery overcharge/overtemperature conditions. The innovation of this study lies in the first integration of high-precision pressure sensing and IoT communication for battery protection. A low-power IoT sensing node tailored for battery aging scenarios has been designed, validating the novel application value of IoT sensing in the safety monitoring of new energy equipment. This system fills a gap in IoT pressure-sensing technology for battery protection, enabling practical applications and serving as a reference for implementing integrated sensing and communication technology. Full article
(This article belongs to the Special Issue IoT Sensing and Generalization)
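The pipeline of a sliding-average filter feeding a hierarchical threshold warning can be sketched as below (window size and pressure thresholds are illustrative, not the paper's calibrated values):

```python
from collections import deque

class PressureMonitor:
    """Sliding-average filter feeding a hierarchical pressure-threshold warning.
    Thresholds and window size are illustrative only."""
    LEVELS = [(110.0, "critical"), (105.0, "warning"), (0.0, "normal")]

    def __init__(self, window: int = 3):
        self.samples = deque(maxlen=window)

    def update(self, kpa: float):
        self.samples.append(kpa)
        avg = sum(self.samples) / len(self.samples)
        level = next(name for thr, name in self.LEVELS if avg >= thr)
        return avg, level

m = PressureMonitor(window=3)
for v in (101.0, 101.2, 100.8):
    avg, level = m.update(v)
assert level == "normal"
avg, level = m.update(118.0)   # raw reading 118 would register "critical"
assert level == "warning"      # the smoothed value crosses only the lower tier
```

Smoothing before thresholding is what keeps a single noisy sample from triggering the top warning tier; the paper additionally applies wavelet denoising and temperature compensation before this stage.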

15 pages, 836 KB  
Article
Lightweight Adaptive Reinforcement Learning-Based TCP Congestion Control for Multi-Hop Ad Hoc Networks
by Hai Li and Zhe Xin
Electronics 2026, 15(5), 947; https://doi.org/10.3390/electronics15050947 - 25 Feb 2026
Abstract
Ad hoc networks are characterized by flexible deployment and multi-hop communication, which has facilitated their growing prevalence in diverse applications. However, the TCP protocol exhibits substantial performance degradation in multi-hop ad hoc networks with dynamic topologies. To address this issue, this paper proposes TCP-RLA, a lightweight adaptive reinforcement learning-based TCP congestion control algorithm. It predicts network state variations and leverages a deep Q-network (DQN) with a rule-assisted discrete action space to adaptively tune the congestion window. This design boosts convergence speed and reduces computational complexity, making it well-suited for resource-constrained ad hoc nodes. Simulation results demonstrate that, compared with two reinforcement learning-based algorithms (GVegas and Orca), TCP-RLA achieves an average throughput improvement of 36.1% and 43.3%, an average round-trip time (RTT) reduction of 13.1% and 47.9%, and an average packet loss rate (PLR) reduction of 33.3% and 50%, respectively. Full article
(This article belongs to the Section Networks)
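The rule-assisted discrete action space can be illustrated with a tabular Q-learner standing in for the paper's DQN (the states, the three multiplicative actions, and the loss back-off rule below are our simplifications, not TCP-RLA itself):

```python
import random

ACTIONS = {0: 0.5, 1: 1.0, 2: 1.25}   # multiplicative cwnd adjustments (illustrative)

class RuleAssistedQ:
    """Tiny tabular Q-learner with a safety rule that overrides exploration."""
    def __init__(self, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
        self.q = {}   # (state, action) -> value
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.rng = random.Random(seed)

    def act(self, state, loss_seen: bool) -> int:
        if loss_seen:
            return 0              # rule assist: always back off on loss
        if self.rng.random() < self.eps:
            return self.rng.randrange(len(ACTIONS))
        return max(ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

    def learn(self, s, a, reward, s2):
        best_next = max(self.q.get((s2, a2), 0.0) for a2 in ACTIONS)
        old = self.q.get((s, a), 0.0)
        self.q[(s, a)] = old + self.alpha * (reward + self.gamma * best_next - old)

agent = RuleAssistedQ()
cwnd, state = 10.0, "low_rtt"
a = agent.act(state, loss_seen=False)
cwnd *= ACTIONS[a]
agent.learn(state, a, reward=1.0, s2="low_rtt")
assert agent.act("any_state", loss_seen=True) == 0   # the rule overrides learning
```

Constraining the action space and hard-coding the loss response is one way to get the faster convergence and lower compute cost the paper attributes to its rule-assisted design.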

18 pages, 714 KB  
Article
LoRa-Based IoT Multi-Hop Architecture for Smart Vineyard Monitoring: Simulation Framework and System Design
by Chiara Suraci, Pietro Zema, Giuseppe Marrara, Angelo Tropeano, Alessandro Campolo, Mariateresa Russo and Giuseppe Araniti
Sensors 2026, 26(4), 1112; https://doi.org/10.3390/s26041112 - 9 Feb 2026
Abstract
The growing interest in precision agriculture has led, in recent years, to an increase in the adoption of Internet of Things (IoT) technologies in the service of smart agriculture to optimize agricultural production processes through the monitoring of environmental conditions and prevent food loss. This work stems from research conducted as part of the Tech4You project, where the enabling digital technologies developed in Spoke 6 contribute to the advanced solutions envisaged by Spoke 3 to facilitate the transition to a sustainable agrifood system. In particular, we present the design and evaluation of a multi-hop Device-to-Device (D2D) communication architecture that leverages Long Range (LoRa) technology, specifically designed for monitoring vineyards in the context of passito wine production. The proposed framework addresses the challenge of monitoring mobile containers for grapes during the drying phase, a critical stage in which inadequate temperatures and humidity can promote the growth of fungi and the formation of mycotoxins. The integration of simulation-based performance evaluation with a multi-layer system architecture is presented in this work. The objective is to compare the performance of different routing strategies in choosing data forwarding paths to the gateway. The simulation results show that the proposed routing strategy, which is based on learning but also focuses on energy consumption, offers good performance. In particular, it achieves packet delivery rates of over 92% and preserves over 95% of active nodes after 2 h of operation. Energy-aware routing strategies also perform well compared to those that only consider the distance from the destination, but overall, the proposed strategy achieves a better trade-off on the metrics analyzed. Full article
(This article belongs to the Special Issue 5G/6G Networks for Wireless Communication and IoT—2nd Edition)
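An energy-aware forwarding choice of the kind compared here can be sketched as a weighted score over progress toward the gateway and residual energy (the scoring function, weights, and node data are illustrative, not the proposed learning-based strategy):

```python
def choose_next_hop(current, neighbors, alpha=0.5):
    """Score forwarding candidates by progress toward the gateway plus residual
    energy; alpha trades the two off. All values are illustrative."""
    def score(n):
        progress = current["dist_to_gw"] - n["dist_to_gw"]   # positive = closer
        return alpha * progress + (1.0 - alpha) * n["energy"]
    forwarders = [n for n in neighbors if n["dist_to_gw"] < current["dist_to_gw"]]
    return max(forwarders, key=score) if forwarders else None

node = {"dist_to_gw": 100.0}
neighbors = [
    {"id": "A", "dist_to_gw": 60.0, "energy": 10.0},
    {"id": "B", "dist_to_gw": 55.0, "energy": 1.0},   # closer, but nearly drained
]
assert choose_next_hop(node, neighbors, alpha=1.0)["id"] == "B"  # distance-only
assert choose_next_hop(node, neighbors, alpha=0.1)["id"] == "A"  # energy-aware
```

Sparing nearly drained relays is what keeps more than 95% of nodes alive after two hours in the reported simulations, at the cost of occasionally longer paths.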

21 pages, 4405 KB  
Article
Performance Benchmarking of 5G SA and NSA Networks for Wireless Data Transfer
by Miha Pipan, Marko Šimic and Niko Herakovič
J. Sens. Actuator Netw. 2026, 15(1), 18; https://doi.org/10.3390/jsan15010018 - 2 Feb 2026
Abstract
This paper presents test results of the performance comparison of 5G standalone (SA) and non-standalone (NSA) networks in the context of gathering data from remote sensors and machines. The study evaluates key network characteristics such as latency, throughput, jitter and packet loss (for the UDP protocol only) using standardized tests to gain insights into the impact of these factors on real-time and data-intensive communication. In addition, a range of communication protocols including OPC UA, Modbus, MQTT, AMQP, CoAP, EtherCAT and gRPC were tested to assess their efficiency, scalability and suitability with different send data sizes. By conducting experiments in a controlled hardware environment, we have analyzed the impact of the 5G architecture on protocol behavior and measured the transmission performance at different data sizes and connection configurations. Particular attention is paid to protocol overhead, data transfer rates and responsiveness, which are crucial for industrial automation and IoT deployments. The results show that SA networks consistently offer lower latency and more stable performance, making them preferable where robust and low-latency data transfer is essential. In contrast, lightweight IoT protocols such as MQTT and CoAP demonstrate reliable operation in both SA and NSA environments due to their low overhead and adaptability. These insights are equally important for time-critical industrial protocols such as EtherCAT and OPC UA, where stability and responsiveness are crucial for automation and control. The study highlights current limitations of 5G networks in supporting both remote sensing and industrial use cases, while providing guidance for selecting the most suitable communication protocols depending on network infrastructure and application requirements. Moreover, the results indicate directions for configuring and optimizing future 5G networks to better meet the demands of remote sensing systems and Industry 4.0 environments. Full article
(This article belongs to the Section Communications and Networking)
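Two of the measured characteristics have standard estimators: RFC 3550's smoothed interarrival jitter and sequence-number-based loss counting. A sketch of both (our choice of estimators, not necessarily the paper's test tooling):

```python
def interarrival_jitter(transit_ms):
    """RFC 3550-style smoothed interarrival jitter from one-way transit times (ms)."""
    j = 0.0
    for prev, cur in zip(transit_ms, transit_ms[1:]):
        j += (abs(cur - prev) - j) / 16.0   # exponential smoothing, gain 1/16
    return j

def udp_loss_rate(received_seqs):
    """Loss fraction from received sequence numbers (assumes no reordering or wrap)."""
    expected = max(received_seqs) - min(received_seqs) + 1
    return 1.0 - len(set(received_seqs)) / expected

transit = [10.0, 12.0, 11.0, 15.0, 10.5]
print(f"jitter ≈ {interarrival_jitter(transit):.3f} ms")
print(f"loss   = {udp_loss_rate([0, 1, 2, 4, 5]):.1%}")   # seq 3 missing
```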

30 pages, 22347 KB  
Article
Enhancing V2V Communication by Parsimoniously Leveraging V2N2V Path in Connected Vehicles
by Songmu Heo, Yoo-Seung Song, Seungmo Kang and Hyogon Kim
Sensors 2026, 26(3), 819; https://doi.org/10.3390/s26030819 - 26 Jan 2026
Abstract
The rapid proliferation of connected vehicles equipped with both Vehicle-to-Vehicle (V2V) sidelink and cellular interfaces creates new opportunities for real-time vehicular applications, yet achieving ultra-reliable communication without prohibitive cellular costs remains challenging. This paper addresses reliable inter-vehicle video streaming for safety-critical applications such as See-Through for Passing and Obstructed View Assist, which require stringent Service Level Objectives (SLOs) of 50 ms latency with 99% reliability. Through measurements in Seoul urban environments, we characterize the complementary nature of V2V and Vehicle-to-Network-to-Vehicle (V2N2V) paths: V2V provides ultra-low latency (mean 2.99 ms) but imperfect reliability (95.77%), while V2N2V achieves perfect reliability but exhibits high latency variability (P99: 120.33 ms in centralized routing) that violates target SLOs. We propose a hybrid framework that exploits V2V as the primary path while selectively retransmitting only lost packets via V2N2V. The key innovation is a dual loss detection mechanism combining gap-based and timeout-based triggers leveraging Real-Time Protocol (RTP) headers for both immediate response and comprehensive coverage. Trace-driven simulation demonstrates that the proposed framework achieves a 99.96% packet reception rate and 99.71% frame playback ratio, approaching lossless transmission while maintaining cellular utilization at only 5.54%, which is merely 0.84 percentage points above the V2V loss rate. This represents a 7× cost reduction versus PLR Switching (4.2 GB vs. 28 GB monthly) while reducing video stalls by 10×. These results demonstrate that packet-level selective redundancy enables cost-effective ultra-reliable V2X communication at scale. Full article
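The dual loss detection mechanism (gap-based for immediate response, timeout-based for coverage) can be sketched on RTP-style sequence numbers as follows (the timeout value and API are illustrative, not the paper's parameters):

```python
class DualLossDetector:
    """Sketch of gap + timeout loss detection over RTP sequence numbers.
    Flagged packets would be requested for retransmission via the V2N2V path."""
    def __init__(self, timeout_ms: float = 20.0):
        self.timeout_ms = timeout_ms
        self.next_seq = 0
        self.last_arrival_ms = 0.0
        self.lost = set()

    def on_packet(self, seq: int, now_ms: float):
        # gap-based trigger: a jump in sequence numbers flags losses immediately
        for missing in range(self.next_seq, seq):
            self.lost.add(missing)
        self.next_seq = max(self.next_seq, seq + 1)
        self.last_arrival_ms = now_ms

    def on_tick(self, now_ms: float):
        # timeout-based trigger: prolonged silence flags the next expected packet,
        # covering tail losses that produce no observable gap
        if now_ms - self.last_arrival_ms > self.timeout_ms:
            self.lost.add(self.next_seq)
            self.next_seq += 1
            self.last_arrival_ms = now_ms

d = DualLossDetector()
d.on_packet(0, 1.0)
d.on_packet(2, 3.0)      # gap: seq 1 flagged without waiting
assert 1 in d.lost
d.on_tick(30.0)          # silence past the timeout: seq 3 flagged
assert 3 in d.lost
```

Requesting only these flagged packets over the cellular path is what keeps the reported cellular utilization within a fraction of a percentage point of the V2V loss rate.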

23 pages, 3420 KB  
Article
Design of a Wireless Monitoring System for Cooling Efficiency of Grid-Forming SVG
by Liqian Liao, Jiayi Ding, Guangyu Tang, Yuanwei Zhou, Jie Zhang, Hongxin Zhong, Ping Wang, Bo Yin and Liangbo Xie
Electronics 2026, 15(3), 520; https://doi.org/10.3390/electronics15030520 - 26 Jan 2026
Abstract
The grid-forming static var generator (SVG) is a key device that supports the stable operation of power grids with a high penetration of renewable energy. The cooling efficiency of its forced water-cooling system directly determines the reliability of the entire unit. However, existing wired monitoring methods suffer from complex cabling and limited capacity to provide a full perception of the water-cooling condition. To address these limitations, this study develops a wireless monitoring system based on multi-source information fusion for real-time evaluation of cooling efficiency and early fault warning. A heterogeneous wireless sensor network was designed and implemented by deploying liquid-level, vibration, sound, and infrared sensors at critical locations of the SVG water-cooling system. These nodes work collaboratively to collect multi-physical field data—thermal, acoustic, vibrational, and visual information—in an integrated manner. The system adopts a hybrid Wireless Fidelity/Bluetooth (Wi-Fi/Bluetooth) networking scheme with electromagnetic interference-resistant design to ensure reliable data transmission in the complex environment of converter valve halls. To achieve precise and robust diagnosis, a three-layer hierarchical weighted fusion framework was established, consisting of individual sensor feature extraction and preliminary analysis, feature-level weighted fusion, and final fault classification. Experimental validation indicates that the proposed system achieves highly reliable data transmission with a packet loss rate below 1.5%. Compared with single-sensor monitoring, the multi-source fusion approach improves the diagnostic accuracy for pump bearing wear, pipeline micro-leakage, and radiator blockage to 98.2% and effectively distinguishes fault causes and degradation tendencies of cooling efficiency. 
Overall, the developed wireless monitoring system overcomes the limitations of traditional wired approaches and, by leveraging multi-source fusion technology, enables a comprehensive assessment of cooling efficiency and intelligent fault diagnosis. This advancement significantly enhances the precision and reliability of SVG operation and maintenance, providing an effective solution to ensure the safe and stable operation of both grid-forming SVG units and the broader power grid. Full article
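The feature-level weighted fusion step in the middle layer of such a framework can be sketched as follows. This is a minimal illustration, not the paper's implementation: the sensor names, feature vectors, and reliability weights are hypothetical values chosen for the example.

```python
import numpy as np

def weighted_feature_fusion(features, weights):
    """Fuse per-sensor feature vectors using normalized reliability weights.

    features: dict mapping sensor name -> 1-D feature vector (equal lengths)
    weights:  dict mapping sensor name -> non-negative reliability weight
    """
    names = list(features)
    w = np.array([weights[n] for n in names], dtype=float)
    w = w / w.sum()  # normalize so the weights sum to 1
    stacked = np.stack([features[n] for n in names])  # (n_sensors, n_features)
    return (w[:, None] * stacked).sum(axis=0)  # weighted sum per feature

# Hypothetical feature vectors for the four sensor types in the abstract
feats = {
    "vibration": np.array([0.8, 0.1]),
    "acoustic":  np.array([0.6, 0.3]),
    "infrared":  np.array([0.4, 0.7]),
    "level":     np.array([0.5, 0.2]),
}
wts = {"vibration": 0.3, "acoustic": 0.2, "infrared": 0.3, "level": 0.2}
fused = weighted_feature_fusion(feats, wts)
```

The fused vector would then feed the final fault-classification layer; in practice the weights could be learned or set from per-sensor diagnostic reliability.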
(This article belongs to the Section Industrial Electronics)
43 pages, 9485 KB  
Article
Dynamic Task Allocation for Multiple AUVs Under Weak Underwater Acoustic Communication: A CBBA-Based Simulation Study
by Hailin Wang, Shuo Li, Tianyou Qiu, Yiqun Wang and Yiping Li
J. Mar. Sci. Eng. 2026, 14(3), 237; https://doi.org/10.3390/jmse14030237 - 23 Jan 2026
Viewed by 522
Abstract
Cooperative task allocation is a critical enabler for multi-Autonomous Underwater Vehicle (AUV) missions, but existing approaches often assume reliable communication, an assumption that rarely holds in real underwater acoustic environments. We study the performance and robustness of the Consensus-Based Bundle Algorithm (CBBA) for multi-AUV task allocation under realistically degraded underwater communication conditions with dynamically appearing tasks. We develop an integrated simulation framework that incorporates a Dubins-based kinematic model with minimum-turning-radius constraints, a configurable underwater acoustic communication model (range, delay, packet loss, and bandwidth), and a full implementation of an improved CBBA, complemented by 3D trajectory and network-topology visualization. We define five communication regimes, ranging from ideal fully connected networks to severe conditions with short range and high packet loss. Within these regimes, we assess CBBA on task allocation quality (total bundle value and task completion rate), convergence behavior (iterations and convergence rate), and communication efficiency (message delivery rate, average delay, and network connectivity), with additional metrics on the number of conflicts during dynamic task reallocation. Our simulation results indicate that CBBA maintains near-optimal performance under good and moderate conditions but degrades significantly when connectivity becomes intermittent. To address the frequent task conflicts that arise under very poor conditions, we introduce a local-communication-based conflict resolution strategy: neighborhood-limited information exchange, negotiation within task areas, and decentralized local decisions. The proposed strategy significantly reduces the occurrence of conflicts and improves task completion under stringent communication constraints. This provides practical design insights for deploying multi-AUV systems over weak underwater acoustic networks. Full article
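The kind of configurable acoustic link model the abstract describes (range cutoff, random packet loss, propagation delay) can be sketched in a few lines. This is an illustrative assumption, not the paper's model: the function name, parameter values, and the nominal 1500 m/s sound speed are chosen for the example.

```python
import random

def acoustic_link(distance_m, max_range_m, loss_prob, sound_speed=1500.0):
    """Return (delivered, delay_s) for one message over a simple acoustic link.

    Messages beyond max_range_m always fail; otherwise a packet is dropped
    with probability loss_prob. Propagation delay uses the nominal speed of
    sound in water (~1500 m/s).
    """
    if distance_m > max_range_m:
        return False, None
    if random.random() < loss_prob:
        return False, None
    return True, distance_m / sound_speed

# Empirical delivery rate over 1000 trials at 800 m with 30% packet loss
random.seed(0)
delivered = sum(acoustic_link(800.0, 1000.0, 0.3)[0] for _ in range(1000))
rate = delivered / 1000.0  # expected to be close to 0.7
```

Sweeping `max_range_m` and `loss_prob` over a grid of values is one way to realize the five communication regimes the study defines, from fully connected to severely degraded.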
(This article belongs to the Special Issue Dynamics and Control of Marine Mechatronics)
