Network, Volume 6, Issue 1 (March 2026) – 18 articles

Cover Story: Accurate cellular traffic forecasting is vital for proactive resource allocation and efficient 5G/6G operation, yet it remains challenging due to strong spatial heterogeneity across cells and long-term temporal dependencies. Existing approaches either model cells independently or rely on attention- or graph-heavy spatiotemporal architectures whose computational overhead limits real-time deployment. We introduce Hierarchical SpatioTemporal Mamba, which integrates frame-wise convolutional spatial encoding with Mamba-based state-space temporal modeling and lightweight temporal attention in a hierarchical design. It outperforms baselines, reducing error by up to 47.3% while remaining 45× smaller than contemporary models, making it a lightweight, robust solution for real-time predictive network management.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
20 pages, 2673 KB  
Article
TAFL-UWSN: A Trust-Aware Federated Learning Framework for Securing Underwater Sensor Networks
by Raja Waseem Anwar, Mohammad Abrar, Abdu Salam and Faizan Ullah
Network 2026, 6(1), 18; https://doi.org/10.3390/network6010018 - 19 Mar 2026
Viewed by 324
Abstract
Underwater Acoustic Sensor Networks (UASNs) are pivotal for environmental monitoring, surveillance, and marine data collection. However, their open and largely unattended operational settings, constrained communication capabilities, limited energy resources, and susceptibility to insider attacks make it difficult to achieve safe, secure, and efficient collaborative learning. Federated learning (FL) offers a privacy-preserving method for decentralized model training but is inherently vulnerable to Byzantine threats and malicious participants. This paper proposes TAFL-UWSN, a trust-aware FL framework designed to improve security, reliability, and energy efficiency in UASNs by incorporating trust evaluation directly into the FL process. The goal is to mitigate the impact of adversarial nodes while maintaining model performance in low-resource underwater environments. TAFL-UWSN integrates continuous trust scoring based on packet forwarding reliability, sensing consistency, and model deviation. Trust scores are used to weight or filter model updates at both the node level and the edge layer, where Autonomous Underwater Vehicles (AUVs) act as mobile aggregators. A trust-aware federated averaging algorithm is implemented, and extensive simulations are conducted in a custom Python-based environment, comparing TAFL-UWSN to standard FedAvg and Byzantine-resilient FL approaches under various attack conditions. TAFL-UWSN achieved a model accuracy exceeding 92% with up to 30% malicious nodes while maintaining a false positive rate below 5.5%. Communication overhead was reduced by 28%, and energy usage per node dropped by 33% compared to baseline methods. The TAFL-UWSN framework demonstrates that integrating trust into FL enables secure, efficient, and resilient underwater intelligence, validating its potential for broader application in distributed, resource-constrained environments. Full article
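The trust-weighted aggregation at the heart of this framework can be illustrated in a few lines. The sketch below is a minimal, hypothetical rendering of the idea, assuming trust scores in [0, 1] blended from the three components the abstract names (forwarding reliability, sensing consistency, model deviation); the weights, threshold, and function names are illustrative, not the authors' implementation.

```python
# Minimal sketch of trust-weighted federated averaging (not the authors' code).
# Assumed: trust in [0, 1]; updates below a trust threshold are filtered out.
import numpy as np

def trust_score(forwarding: float, consistency: float, deviation: float,
                weights=(0.4, 0.3, 0.3)) -> float:
    """Blend the three trust components; high model deviation is penalized."""
    w_f, w_c, w_d = weights
    return w_f * forwarding + w_c * consistency + w_d * (1.0 - deviation)

def trust_aware_fedavg(updates: list[np.ndarray], trusts: list[float],
                       threshold: float = 0.5) -> np.ndarray:
    """Weight each node's model update by its trust; drop low-trust nodes."""
    kept = [(u, t) for u, t in zip(updates, trusts) if t >= threshold]
    if not kept:
        raise ValueError("no trusted updates to aggregate")
    total = sum(t for _, t in kept)
    return sum(u * (t / total) for u, t in kept)

# Example: three nodes, one adversarial (high deviation -> low trust, filtered).
ups = [np.ones(4), np.ones(4), 10 * np.ones(4)]
ts = [trust_score(0.9, 0.9, 0.1), trust_score(0.8, 0.9, 0.2),
      trust_score(0.9, 0.2, 0.9)]
print(trust_aware_fedavg(ups, ts))  # stays at the honest updates: [1 1 1 1]
```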

20 pages, 315 KB  
Systematic Review
Green Scheduling and Task Offloading in Edge Computing: A Systematic Review
by Adriana Rangel Ribeiro, Ana Clara Santos Andrade, Gabriel Leal dos Santos, Guilherme Dinarte Marcondes Lopes, Edvard Martins de Oliveira, Adler Diniz de Souza and Jeremias Barbosa Machado
Network 2026, 6(1), 17; https://doi.org/10.3390/network6010017 - 16 Mar 2026
Viewed by 278
Abstract
This paper presents a Systematic Literature Review (SLR) on green scheduling and task offloading strategies for energy optimization in edge computing environments. The evolution of low-latency, high-performance applications has driven the widespread adoption of distributed computing paradigms such as Edge Computing, Fog-Cloud architectures, and the Internet of Things (IoT). In this context, Mobile Edge Computing (MEC) is often combined with Unmanned Aerial Vehicles (UAVs) to extend computational capabilities to areas with limited infrastructure, bringing processing closer to the data source to reduce latency and improve scalability. Nevertheless, these systems encounter substantial energy-related challenges, particularly in battery-powered or resource-constrained environments. To address these concerns, green computing strategies, especially energy-efficient scheduling and task offloading, have emerged as promising approaches to optimize energy usage in edge environments. Green scheduling optimizes task allocation to minimize energy consumption, whereas offloading redistributes workloads from resource-constrained devices to edge or cloud servers. Increasingly, these techniques are enhanced through artificial intelligence (AI) and machine learning (ML), enabling adaptive and context-aware decision-making in dynamic environments. The review synthesizes the most widely adopted strategies for energy-efficient scheduling and task offloading in edge computing, highlighting their impact on sustainability and performance. The analysis provides a comprehensive view of the state of the art, examines how architectural contexts influence energy-aware decisions, and highlights the role of AI/ML in enabling intelligent and sustainable edge systems. The findings reveal current research gaps and outline future directions to advance the development of robust, scalable, and environmentally responsible computing infrastructures. Full article
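The scheduling-versus-offloading decision surveyed here ultimately rests on an energy comparison between local execution and transmission. A worked toy example of that canonical trade-off follows; the energy models and constants are textbook-style assumptions, not values drawn from the reviewed studies.

```python
# Canonical local-vs-offload energy comparison: run the task on-device if CPU
# energy is lower than the radio energy needed to ship its input upstream.
# All constants are illustrative assumptions.
KAPPA = 1e-27          # effective switched capacitance (J per cycle per Hz^2)

def e_local(cycles: float, f_hz: float) -> float:
    """Dynamic CPU energy model: kappa * cycles * f^2."""
    return KAPPA * cycles * f_hz ** 2

def e_offload(bits: float, rate_bps: float, p_tx_w: float) -> float:
    """Transmit energy to upload the task input at the given uplink rate."""
    return p_tx_w * bits / rate_bps

task_cycles, task_bits = 5e8, 2e6              # 0.5 Gcycles, 250 kB input
local = e_local(task_cycles, f_hz=1.5e9)       # ~1.13 J on-device
remote = e_offload(task_bits, rate_bps=20e6, p_tx_w=0.5)   # ~0.05 J uplink
print("offload" if remote < local else "local",
      round(local, 3), round(remote, 3))
```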

23 pages, 1246 KB  
Article
Accuracy of Fiber Propagation Evaluation Using Phenomenological Attenuation and Raman Scattering Models in Multiband Optical Networks
by Giuseppina Maria Rizzi and Vittorio Curri
Network 2026, 6(1), 16; https://doi.org/10.3390/network6010016 - 12 Mar 2026
Viewed by 248
Abstract
The constant growth of IP data traffic, driven by sustained annual increases surpassing 26%, is pushing current optical transport infrastructures towards their capacity limits. Since the deployment of new fiber cables is economically demanding, ultra-wideband transmission is emerging as a promising cost-effective solution, enabled by multi-band amplifiers and transceivers spanning the entire low-loss window of standard single-mode fibers. In this scenario, accurately modeling the frequency-dependent fiber parameters is essential to reliably predict optical signal propagation. In particular, the combined impact of attenuation variations with frequency and inter-channel stimulated Raman scattering (SRS) fundamentally shapes the power evolution of wide wavelength division multiplexing (WDM) combs and directly affects nonlinear interference (NLI) generation, as well as the amount of amplified spontaneous emission (ASE) noise. In this work, we review a set of analytical approximations, based on phenomenological approaches, for frequency-dependent attenuation and Raman scattering gain, and analyze their impact on achieving an effective balance between computational efficiency and physical fidelity. Through extensive analyses performed with the open-source software GNPy (version 2.12, Telecom Infra Project) on an optical line system exploring multi-band scenarios spanning C+L+S, C+L+E, and U-to-E transmission, we demonstrate that the proposed approximations reproduce the reference SRS power evolution and NLI profiles with root mean square errors (RMSEs) consistently below 0.03 dB, and down to the 10⁻³–10⁻² dB range for the most accurate configurations. Although the current implementation does not yet provide a direct reduction in computational time, the proposed framework lays the groundwork for future developments toward closed-form or semi-analytical solutions, enabling more efficient modeling and optimization of ultra-wideband optical transmission. Full article
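For readers unfamiliar with the phenomenological models being compared, the sketch below implements a first-order (triangular) inter-channel SRS approximation over a single span, a common simplification in this literature. The gain slope, effective length, and comb layout are assumed values; this is not the GNPy configuration evaluated in the paper.

```python
# Triangular (first-order) inter-channel SRS approximation over one span:
# power flows from higher-frequency to lower-frequency channels. All numbers
# are illustrative assumptions, not fitted fiber parameters.
import numpy as np

def srs_tilt_db(freqs_thz, powers_w, cr=0.028, l_eff_km=20.0):
    """Per-channel SRS gain/loss (dB) under the triangular Raman-gain model.

    cr: Raman gain slope [1/(W km THz)]; l_eff_km: span effective length.
    A channel sitting below the comb's center of power is amplified.
    """
    tilt = np.zeros_like(freqs_thz)
    for i, fi in enumerate(freqs_thz):
        transfer = cr * l_eff_km * np.sum(powers_w * (freqs_thz - fi))
        tilt[i] = 10 * np.log10(np.exp(transfer))   # net exponential transfer
    return tilt

freqs = np.linspace(186.0, 196.0, 11)          # THz, a C+L-like comb
powers = np.full_like(freqs, 1e-3)             # 0 dBm per channel
print(srs_tilt_db(freqs, powers).round(3))     # low freqs gain, high freqs lose
```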

21 pages, 2699 KB  
Article
Investigation of Underground Communication Quality Using Distributed Antenna Systems Considering Radio-Frequency Signal Propagation Characteristics in Almaty Metro Tunnels
by Askar Abdykadyrov, Moldir Kuatova, Nurzhigit Smailov, Zhandos Dosbayev, Sunggat Marxuly, Maxat Mamadiyarov, Ainur Kuttybayeva, Nurlan Kystaubayev and Amirkhan Bekmurza
Network 2026, 6(1), 15; https://doi.org/10.3390/network6010015 - 10 Mar 2026
Viewed by 274
Abstract
This study investigates radio-frequency signal propagation in underground metro tunnels with a focus on distributed antenna system (DAS) deployment. Deterministic simulations were performed using Altair WinProp 2024.1 (ProMan) with a 3D ray-tracing engine (GO + UTD) at 2.4 GHz in a reinforced concrete tunnel model of 900 m length. Two antenna configurations (B3: 8 dBi directional; B8: 5 dBi wide-beam) were evaluated under identical geometric and material conditions. Results show that path loss varies from 42 to 65 dB over 850 m, with estimated attenuation exponents lower than free-space values due to quasi-waveguide effects. The B3 configuration provides higher near-field received power (up to −7.5 dBm) but exhibits stronger attenuation over long distances. In contrast, the B8 configuration ensures a more uniform spatial power distribution and a reduced path-loss growth rate beyond 500 m. The findings confirm that the antenna radiation pattern significantly influences underground communication performance and demonstrate the engineering suitability of distributed antenna systems for stable metro tunnel coverage. Full article
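The quantities discussed (attenuation exponent, reference loss) follow the standard log-distance path-loss form. The sketch below illustrates it; choosing PL0 = 42 dB and n = 0.8 roughly reproduces the 42–65 dB span quoted in the abstract, but these values are illustrative assumptions, not the paper's fitted results.

```python
# Log-distance path-loss sketch: PL(d) = PL0 + 10 n log10(d/d0).
# n < 2 (here 0.8, an assumption) reflects tunnel quasi-waveguide gain.
import numpy as np

def path_loss_db(d_m, pl0_db=42.0, n=0.8, d0_m=1.0):
    """Path loss in dB at distance(s) d_m from the antenna."""
    return pl0_db + 10.0 * n * np.log10(np.asarray(d_m) / d0_m)

def received_power_dbm(tx_dbm, gain_dbi, d_m, **kw):
    """Link budget: transmit power plus antenna gain minus path loss."""
    return tx_dbm + gain_dbi - path_loss_db(d_m, **kw)

dists = np.array([50, 200, 500, 850])
print(path_loss_db(dists).round(1))            # ~55.6 ... 65.4 dB
# e.g. an 8 dBi directional antenna (B3-like case) at 20 dBm transmit power
print(received_power_dbm(20, 8, dists).round(1))
```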

11 pages, 581 KB  
Article
Experimental Study of Alien Crosstalk Limits in Densely Bundled Commodity 10GBASE-T Ethernet Cables
by Aleksei Demin, Viktoriia Vasileva and Dmitrii Chaikovskii
Network 2026, 6(1), 14; https://doi.org/10.3390/network6010014 - 9 Mar 2026
Viewed by 284
Abstract
In the realm of high-speed Ethernet networks, alien crosstalk (AXT) significantly undermines the integrity and efficiency of data transmission. While existing works mostly focus on modeling and physical-layer mitigation techniques such as PAM16/DSQ128 modulation and LDPC coding, there is a lack of experimental evidence on how severe AXT affects commodity 10GBASE-T equipment in realistic, densely cabled installations. In this study, we assemble and evaluate an experimental testbed that emulates a highly adverse AXT environment by tightly bundling up to seven 60 m twisted-pair Ethernet cables and using only off-the-shelf 10GBASE-T network cards. We quantitatively characterize how increasing cable density leads to automatic speed downgrades, connection failures, and non-linear saturation of the aggregate throughput, and relate these effects to the observed link quality on individual ports. Our results demonstrate that, even in the presence of standard crosstalk mitigation and error-correction mechanisms, severe AXT can force commodity 10GBASE-T links to fall back from 10 Gbit/s to 1 Gbit/s or below. Based on these findings, we derive practical guidelines for dense-cabling deployments and identify key requirements for experimental testbeds that can more reliably quantify AXT severity and its impact on commodity 10GBASE-T link stability (rate fallback and link loss) under realistic conditions. Full article

23 pages, 2990 KB  
Article
Forecasting-Aware Digital Twin Calibration for Reliable Multi-Horizon Traffic Prediction
by Zeyad AlJundi, Taqwa A. Alhaj, Fatin A. Elhaj, Inshirah Idris and Tasneem Darwish
Network 2026, 6(1), 13; https://doi.org/10.3390/network6010013 - 6 Mar 2026
Viewed by 474
Abstract
Digital twin systems are becoming an important tool in intelligent transportation management, as they provide simulation-based environments for monitoring, analyzing, and predicting traffic behavior. However, the predictive performance of traffic digital twins is often limited by the quality and temporal consistency of sensor-level data generated from microscopic simulations. Most current calibration methods focus mainly on matching macroscopic traffic indicators, such as vehicle count and speed, without explicitly addressing the requirements of multi-horizon forecasting. This creates a gap between achieving realistic simulations and building reliable predictive models. This research proposes a forecasting-aware digital traffic twin framework that integrates microscopic SUMO simulation, controlled sensor-level observation modeling through geometric misalignment and noise injection, behavioral calibration, and deep temporal forecasting within a unified end-to-end structure. Unlike traditional calibration approaches, the proposed Genetic Algorithm (GA)-based method reformulates calibration as a multi-step predictive optimization task. Simulation parameters are optimized by minimizing the forecasting error produced by a lightweight proxy sequence model embedded within the calibration loop. In this way, calibration moves beyond simple statistical matching and instead emphasizes temporal learnability and forecasting stability, enabling the digital twin to generate traffic patterns more suitable for long-term prediction. Based on the calibrated traffic time series, both convolutional and recurrent deep learning models are evaluated under single-step and multi-step forecasting scenarios. To further examine generalizability, external validation is performed using the real-world PEMS-BAY dataset. The experimental findings demonstrate that forecasting-aware calibration reduces macroscopic traffic signal errors by around 50% for vehicle count and around 40% for average speed, improves temporal stability, and significantly enhances forecasting accuracy across both short-term and long-term horizons. Full article
(This article belongs to the Special Issue Emerging Trends and Applications in Vehicular Ad Hoc Networks)
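The calibration-as-forecasting-optimization idea can be sketched with a toy genetic algorithm whose fitness is the proxy model's forecasting error. Everything below (the placeholder fitness, parameter bounds, GA settings) is an assumption for illustration; it is not the paper's SUMO/GA pipeline.

```python
# Toy GA that searches simulation parameters by minimizing a proxy
# forecaster's error instead of matching summary statistics directly.
import random

def forecast_error(params):
    """Placeholder fitness: in the real loop this would run the simulator
    with `params`, fit a lightweight proxy sequence model on the output
    series, and return its multi-step error."""
    target = (2.0, 0.5)  # pretend ground-truth behavioral parameters
    return sum((p - t) ** 2 for p, t in zip(params, target))

def ga_calibrate(pop_size=30, gens=50, bounds=((0.5, 4.0), (0.1, 2.0))):
    pop = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=forecast_error)
        elite = pop[: pop_size // 2]                 # keep the best half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]          # crossover
            child = [min(max(c + random.gauss(0, 0.1), lo), hi)  # mutation
                     for c, (lo, hi) in zip(child, bounds)]
            children.append(child)
        pop = elite + children
    return min(pop, key=forecast_error)

print(ga_calibrate())  # converges toward the (assumed) target parameters
```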

20 pages, 10112 KB  
Article
Satellite Backhaul for Extending Connectivity in Rural Remote Areas: Deployment and Performance Assessment
by Souhaima Stiri, Maria Rita Palattella, Juan David Niebles Castano and Christos Politis
Network 2026, 6(1), 12; https://doi.org/10.3390/network6010012 - 24 Feb 2026
Viewed by 693
Abstract
Limited terrestrial network coverage in rural and remote areas constitutes a significant barrier to the digital transformation of the agricultural sector. Smart and precision farming applications, ranging from conventional environmental monitoring systems to advanced Digital Twin solutions, rely on the reliable transmission of sensor data, images, and video streams from geographically isolated farms. Such data-intensive services cannot be effectively supported without a robust communication infrastructure. Non-Terrestrial Networks (NTNs), particularly satellite systems, offer both narrowband and broadband connectivity, enabling the transmission of low-rate sensor measurements, as well as high-throughput multimedia data from the field. This paper presents an experimental performance evaluation of two satellite backhauling solutions: a Geostationary Earth Orbit (GEO) system provided by SES and a Low Earth Orbit (LEO) system from Starlink. The networks were first deployed and tested in a laboratory environment and subsequently validated in an operational agricultural field setting. Their performance is benchmarked against a terrestrial cellular network to assess their suitability for supporting advanced agricultural applications. The performance assessment results indicate that both satellite backhauling solutions are reliable and capable of meeting the bandwidth and latency requirements of delay-tolerant agricultural applications. In addition to the technical evaluation, this work presents a cost–benefit analysis that further underscores the advantages of NTN-based solutions. Despite higher initial expenditures, they provide extended coverage in remote areas and enable cost sharing across multiple users, improving overall economic viability. Full article

32 pages, 6235 KB  
Article
Beyond Attention: Hierarchical Mamba Models for Scalable Spatiotemporal Traffic Forecasting
by Zineddine Bettouche, Khalid Ali, Andreas Fischer and Andreas Kassler
Network 2026, 6(1), 11; https://doi.org/10.3390/network6010011 - 13 Feb 2026
Viewed by 621
Abstract
Traffic forecasting in cellular networks is a challenging spatiotemporal prediction problem due to strong temporal dependencies, spatial heterogeneity across cells, and the need for scalability to large network deployments. Traditional cell-specific models incur prohibitive training and maintenance costs, while global models often fail to capture heterogeneous spatial dynamics. Recent spatiotemporal architectures based on attention or graph neural networks improve accuracy but introduce high computational overhead, limiting their applicability in large-scale or real-time settings. We propose HiSTM (Hierarchical SpatioTemporal Mamba), a spatiotemporal forecasting architecture built on state-space modeling. HiSTM combines spatial convolutional encoding for local neighborhood interactions with Mamba-based temporal modeling to capture long-range dependencies, followed by attention-based temporal aggregation for prediction. The hierarchical design enables representation learning with linear computational complexity in sequence length and supports both grid-based and correlation-defined spatial structures. Cluster-aware extensions incorporate spatial regime information to handle heterogeneous traffic patterns. Experimental evaluation on large-scale real-world cellular datasets demonstrates that HiSTM outperforms strong baselines in accuracy. On the Milan dataset, HiSTM reduces MAE by 29.4% compared to STN, while achieving the lowest RMSE and highest R² score among all evaluated models. In multi-step autoregressive forecasting, HiSTM maintains 36.8% lower MAE than STN and 11.3% lower than STTRE at the 6-step horizon, with a 58% slower error accumulation rate compared to STN. On the unseen Trentino dataset, HiSTM achieves a 47.3% MAE reduction over STN and demonstrates better cross-dataset generalization. A single HiSTM model outperforms 10,000 independently trained cell-specific LSTMs, demonstrating the advantage of joint spatiotemporal learning. HiSTM maintains best-in-class performance with up to 30% missing data, outperforming all baselines under various missing-data scenarios. The model achieves these results while being 45× smaller than PredRNN++, 18× smaller than xLSTM, and maintaining a competitive inference latency of 1.19 ms, showcasing its effectiveness for scalable 5G/6G traffic prediction in resource-constrained environments. Full article
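The linear-time claim rests on state-space recurrences: one constant-size state update per time step instead of pairwise attention. The sketch below is a plain discrete linear SSM scan, not the HiSTM architecture; the matrices and traffic series are arbitrary stand-ins.

```python
# Minimal linear state-space scan to illustrate why Mamba-style temporal
# modeling is O(T) in sequence length. Not the HiSTM architecture.
import numpy as np

def ssm_scan(x, A, B, C):
    """h_t = A h_{t-1} + B x_t ;  y_t = C h_t  (single feature channel)."""
    T = len(x)
    h = np.zeros(A.shape[0])
    y = np.empty(T)
    for t in range(T):              # one update per step, constant state size
        h = A @ h + B * x[t]
        y[t] = C @ h
    return y

rng = np.random.default_rng(0)
d = 8                                   # state dimension
A = 0.9 * np.eye(d)                     # stable, long-memory dynamics
B, C = rng.normal(size=d), rng.normal(size=d)
traffic = rng.random(96)                # e.g. one day of 15-min load samples
print(ssm_scan(traffic, A, B, C)[:5].round(3))
```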

53 pages, 3104 KB  
Article
Auditing Inferential Blind Spots: A Framework for Evaluating Forensic Coverage in Network Telemetry Architectures
by Mehrnoush Vaseghipanah, Sam Jabbehdari and Hamidreza Navidi
Network 2026, 6(1), 9; https://doi.org/10.3390/network6010009 - 29 Jan 2026
Viewed by 647
Abstract
Network operators increasingly rely on abstracted telemetry (e.g., flow records and time-aggregated statistics) to achieve scalable monitoring of high-speed networks, but this abstraction fundamentally constrains the forensic and security inferences that can be supported from network data. We present a design-time audit framework that evaluates which threat hypotheses become non-supportable as network evidence is transformed from packet-level traces to flow records and time-aggregated statistics. Our methodology examines three evidence layers (L0: packet headers, L1: IP Flow Information Export (IPFIX) flow records, L2: time-aggregated flows), computes a catalog of 13 network-forensic artifacts (e.g., destination fan-out, inter-arrival time burstiness, SYN-dominant connection patterns) at each layer, and maps artifact availability to tactic support using literature-grounded associations with MITRE Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK). Applied to backbone traffic from the MAWI Day-In-The-Life (DITL) archive, the audit reveals selective inference loss: Execution becomes non-supportable at L1 (due to loss of packet-level timing artifacts), while Lateral Movement and Persistence become non-supportable at L2 (due to loss of entity-linked structural artifacts). Inference coverage decreases from 9 to 7 out of 9 evaluated ATT&CK tactics, while coverage of defensive countermeasures (MITRE D3FEND) increases at L1 (7 → 8 technique categories) then decreases at L2 (8 → 7), reflecting a shift from behavioral monitoring to flow-based controls. The framework provides network architects with a practical tool for configuring telemetry systems (e.g., IPFIX exporters, P4 pipelines) to reason about and provision the minimum forensic coverage. Full article
(This article belongs to the Special Issue Advanced Technologies in Network and Service Management, 2nd Edition)
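The audit logic reduces to set operations: which tactics remain supportable given the artifacts each evidence layer preserves. A toy rendering follows, with abbreviated artifact and tactic names standing in for the paper's 13-artifact ATT&CK catalog.

```python
# Toy version of the audit: artifacts available per evidence layer are mapped
# to the tactics they can support; support is lost as layers aggregate.
# Names are abbreviated illustrations, not the paper's full catalog.
ARTIFACTS = {
    "L0": {"pkt_timing", "syn_flags", "fanout", "flow_5tuple"},  # packets
    "L1": {"syn_flags", "fanout", "flow_5tuple"},                # flow records
    "L2": {"fanout"},                                            # aggregated
}
TACTIC_NEEDS = {
    "Execution": {"pkt_timing"},
    "Lateral Movement": {"flow_5tuple", "syn_flags"},
    "Discovery": {"fanout"},
}

def supportable(layer: str) -> set[str]:
    """A tactic is supportable iff all artifacts it needs survive the layer."""
    have = ARTIFACTS[layer]
    return {t for t, need in TACTIC_NEEDS.items() if need <= have}

for layer in ("L0", "L1", "L2"):
    print(layer, sorted(supportable(layer)))
# Execution drops at L1 (timing lost); Lateral Movement drops at L2.
```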

18 pages, 615 KB  
Article
DOTSSA: Directed Acyclic Graph-Based Online Trajectory Simplification with Stay Areas
by Masaharu Hirota
Network 2026, 6(1), 8; https://doi.org/10.3390/network6010008 - 29 Jan 2026
Viewed by 341
Abstract
Devices equipped with the Global Positioning System (GPS) generate massive volumes of trajectory data on a daily basis, imposing substantial computational, network, and storage burdens. Online trajectory simplification reduces redundant points in a streaming manner while preserving essential spatial and temporal characteristics. A representative method in this line of research is Directed acyclic graph-based Online Trajectory Simplification (DOTS). However, DOTS does not preserve stay-related information and can incur high computational cost. To address these limitations, we propose Directed acyclic graph-based Online Trajectory Simplification with Stay Areas (DOTSSA), a fast online simplification method that integrates DOTS with an online stay area detection algorithm (SA). In DOTSSA, SA continuously monitors movement patterns to detect stay areas and segments the incoming trajectory accordingly, after which DOTS is applied to the extracted segments. This approach ensures the preservation of stay areas while reducing computational overhead through localized DAG construction. Experimental evaluations on a real-world dataset show that, compared with DOTS, DOTSSA can reduce compression time, while achieving comparable compression ratios and preserving key trajectory features. Full article
(This article belongs to the Special Issue Advanced Technologies in Network and Service Management, 2nd Edition)
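The stay-area test that drives DOTSSA's segmentation is a radius-plus-dwell-time criterion. The sketch below shows the classic offline form of that test (thresholds and distance model assumed); the paper's SA component applies the same idea online.

```python
# Classic stay-point scan: grow a window while points remain near its anchor;
# a long enough dwell becomes a stay area and a segment boundary before
# simplification. Thresholds are assumptions, not DOTSSA's parameters.
import math

def dist_m(p, q):
    """Rough planar distance in metres between (lat, lon, t) points."""
    dy = (p[0] - q[0]) * 111_320
    dx = (p[1] - q[1]) * 111_320 * math.cos(math.radians(p[0]))
    return math.hypot(dx, dy)

def stay_points(points, dist_thresh_m=50.0, time_thresh_s=300):
    stays, i, n = [], 0, len(points)
    while i < n - 1:
        j = i + 1
        while j < n and dist_m(points[i], points[j]) <= dist_thresh_m:
            j += 1
        if points[j - 1][2] - points[i][2] >= time_thresh_s:
            stays.append((points[i][2], points[j - 1][2]))  # stay interval
            i = j                                           # jump past it
        else:
            i += 1
    return stays

pts = [(35.0, 139.0, t) for t in range(0, 600, 60)] + [(35.01, 139.0, 700)]
print(stay_points(pts))   # one stay covering the first ten minutes
```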

19 pages, 1248 KB  
Article
Round-Trip Time Estimation Using Enhanced Regularized Extreme Learning Machine
by Hassan Rizky Putra Sailellah, Hilal Hudan Nuha and Aji Gautama Putrada
Network 2026, 6(1), 10; https://doi.org/10.3390/network6010010 - 29 Jan 2026
Viewed by 627
Abstract
Reliable Internet connectivity is essential for latency-sensitive services such as video conferencing, media streaming, and online gaming. Round-trip time (RTT) is a key indicator of network performance and is central to setting the retransmission timeout (RTO); inaccurate RTT estimates may trigger unnecessary retransmissions or slow loss recovery. This paper proposes an Enhanced Regularized Extreme Learning Machine (RELM) for RTT estimation that improves generalization and efficiency by incorporating a bidirectional log-step heuristic to select the regularization constant C. Unlike manual tuning or fixed-range grid search, the proposed heuristic explores C on a logarithmic scale in both directions (×10 and /10) within a single loop and terminates using a tolerance–patience criterion, reducing redundant evaluations without requiring predefined bounds. A custom RTT dataset is generated using Mininet with a dumbbell topology under controlled delay injections (1–1000 ms), yielding 1000 supervised samples derived from 100,000 raw RTT measurements. Experiments follow a strict train/validation/test split (6:1:3) with training-only standardization/normalization and validation-only hyperparameter selection. On the controlled Mininet dataset, the best configuration (ReLU, 150 hidden neurons, C = 10²) achieves R² = 0.9999, MAPE = 0.0018, MAE = 966.04, and RMSE = 1589.64 on the test set, while maintaining millisecond-level runtime. Under the same evaluation pipeline, the proposed method demonstrates competitive performance compared to common regression baselines (SVR, GAM, Decision Tree, KNN, Random Forest, GBDT, and ELM), while maintaining lower computational overhead within the controlled simulation setting. To assess practical robustness, an additional evaluation on a public real-world WiFi RSS–RTT dataset shows near-meter accuracy in LOS and mixed LOS/NLOS scenarios, while performance degrades markedly under dominant NLOS conditions, reflecting physical-channel limitations rather than model instability. These results demonstrate the feasibility of the Enhanced RELM and motivate further validation on operational networks with packet loss, jitter, and path variability. Full article
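The bidirectional log-step heuristic is described precisely enough to sketch: candidates C·10^k and C/10^k are tried in a single loop, and the search stops once improvements stay below a tolerance for a patience window. In the sketch, val_error is a placeholder for training and validating the RELM at a given C; the defaults are assumptions.

```python
# Bidirectional log-step search for the regularization constant C, with a
# tolerance-patience stopping rule, as outlined in the abstract.
def select_c(val_error, c0=1.0, tol=1e-4, patience=2, max_steps=12):
    best_c, best_e = c0, val_error(c0)
    stale = 0
    for k in range(1, max_steps + 1):
        improved = False
        for c in (c0 * 10.0 ** k, c0 / 10.0 ** k):   # both directions at once
            e = val_error(c)
            if best_e - e > tol:
                best_c, best_e, improved = c, e, True
        stale = 0 if improved else stale + 1
        if stale >= patience:                        # tolerance-patience stop
            break
    return best_c

# Toy check with a stand-in validation error minimized near C = 100:
print(select_c(lambda c: (c - 100.0) ** 2))   # -> 100.0
```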

22 pages, 2627 KB  
Article
FANET Routing Protocol for Prioritizing Data Transmission to the Ground Station
by Kaoru Takabatake and Tomofumi Matsuzawa
Network 2026, 6(1), 7; https://doi.org/10.3390/network6010007 - 14 Jan 2026
Viewed by 800
Abstract
In recent years, with the improvement of unmanned aerial vehicle (UAV) performance, various applications have been explored. In environments such as disaster areas, where existing infrastructure may be damaged, alternative uplink communication for transmitting observation data from UAVs to the ground station (GS) is critical. However, conventional mobile ad hoc network (MANET) routing protocols do not sufficiently account for GS-oriented traffic or the highly mobile UAV topology. This study proposes a flying ad hoc network (FANET) routing protocol that introduces a control option called GS flood, in which the GS periodically disseminates routing information, enabling each UAV to efficiently acquire fresh source routes to the GS. Evaluation using NS-3 in a disaster scenario confirmed that the proposed method achieves a higher packet delivery ratio and practical latency compared to representative MANET routing protocols, namely DSR, AODV, and OLSR, while operating with fewer control IP packets than existing methods. Furthermore, although the multihop throughput between UAVs and the GS in the proposed method plateaued at approximately 40% of the physical-layer maximum, it demonstrated performance exceeding realistic satellite uplink capacities ranging from several hundred kbps to several Mbps. Full article
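The GS flood mechanism can be illustrated as sequence-numbered flooding with route recording: each UAV that hears a fresh flood stores the accumulated path back to the GS and rebroadcasts with itself prepended. The message fields and topology below are toy assumptions, not the NS-3 implementation.

```python
# Toy GS-flood sketch: sequence numbers suppress stale or duplicate floods;
# each node records a fresh source route back to the ground station (GS).
class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.last_seq = -1
        self.route_to_gs = None          # hop list, nearest hop first

    def on_flood(self, seq, path, neighbors):
        """path: route from this node back to the GS."""
        if seq <= self.last_seq:         # stale/duplicate flood: drop
            return
        self.last_seq = seq
        self.route_to_gs = path
        for nb in neighbors(self.node_id):   # rebroadcast one hop outward
            nb.on_flood(seq, [self.node_id] + path, neighbors)

a, b = Node("uav-a"), Node("uav-b")
topo = {"uav-a": [b], "uav-b": []}           # GS reaches a; a reaches b
a.on_flood(seq=1, path=["GS"], neighbors=lambda nid: topo[nid])
print(b.route_to_gs)   # ['uav-a', 'GS'] -- b learned a route via a
```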

30 pages, 6746 KB  
Article
Securing IoT Networks Using Machine Learning-Resistant Physical Unclonable Functions (PUFs) on Edge Devices
by Abdul Manan Sheikh, Md. Rafiqul Islam, Mohamed Hadi Habaebi, Suriza Ahmad Zabidi, Athaur Rahman bin Najeeb and Mazhar Baloch
Network 2026, 6(1), 6; https://doi.org/10.3390/network6010006 - 12 Jan 2026
Cited by 1 | Viewed by 722
Abstract
The Internet of Things (IoT) has transformed global connectivity by linking people, smart devices, and data. However, as the number of connected devices continues to grow, ensuring secure data transmission and communication has become increasingly challenging. IoT security threats arise at the device level due to limited computing resources, mobility, and the large diversity of devices, as well as at the network level, where the use of varied protocols by different vendors introduces further vulnerabilities. Physical Unclonable Functions (PUFs) provide a lightweight, hardware-based security primitive that exploits inherent device-specific variations to ensure uniqueness, unpredictability, and enhanced protection of data and user privacy. Additionally, modeling attacks against PUF architectures are difficult to mount because of the random and unpredictable physical variations inherent in their design, which make it nearly impossible for attackers to accurately replicate their unique responses. This study collected approximately 80,000 Challenge Response Pairs (CRPs) from a Ring Oscillator (RO) PUF design to evaluate its resilience against modeling attacks. The predictive performance of five machine learning algorithms, i.e., Support Vector Machines, Logistic Regression, Artificial Neural Networks with a Multilayer Perceptron, K-Nearest Neighbors, and Gradient Boosting, was analyzed, and the results showed an average accuracy of approximately 60%, demonstrating the strong resistance of the RO PUF to these attacks. The NIST statistical test suite was applied to the CRP data of the RO PUF to evaluate its randomness quality. The p-values from the 15 statistical tests confirm that the CRP data exhibit true randomness, with most values exceeding the 0.01 threshold and supporting the null hypothesis of randomness. Full article
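The modeling-attack protocol follows a standard recipe: split the CRPs, fit a classifier, and read resistance off the test accuracy (near-chance accuracy means the attack fails). The sketch below uses synthetic random CRPs in place of the paper's 80,000 RO-PUF measurements and only one of the five evaluated classifiers.

```python
# Modeling-attack evaluation sketch with synthetic CRPs: if a classifier
# cannot predict responses from challenges, the PUF resists the attack.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.integers(0, 2, size=(80_000, 32))     # 32-bit challenges (assumed)
y = rng.integers(0, 2, size=80_000)           # ideal (unlearnable) responses

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"attack accuracy: {clf.score(X_te, y_te):.3f}")  # ~0.5 => attack fails
```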

25 pages, 2414 KB  
Article
Enhanced Wireless Sensor Network Lifetime Using EGWO-Optimized Neural Network Approach
by Mohamad Nurkamal Fauzan, Rendy Munadi, Sony Sumaryo and Hilal Hudan Nuha
Network 2026, 6(1), 5; https://doi.org/10.3390/network6010005 - 4 Jan 2026
Viewed by 570
Abstract
Efficient clustering is essential for reducing energy consumption and extending the operational lifetime of Wireless Sensor Networks. Classical protocols such as LEACH, PEGASIS, HEED, and EEHC frequently exhibit unbalanced energy usage, resulting in early node failures and reduced communication reliability. This study introduces an Enhanced Grey Wolf Optimization-based Neural Network (EGWO-NN) designed to adaptively select cluster heads by continuously optimizing decision parameters according to real-time network conditions. The proposed method is evaluated against four benchmark protocols using statistical comparisons of node survivability, transmission energy, and communication performance. Results show that EGWO-NN sustains significantly more alive nodes per round, with strong statistical differences compared with LEACH, PEGASIS, HEED, and EEHC (t = 18.27, 9.94, 18.91, 18.93; p < 10⁻²²). Transmission energy analysis similarly indicates significant improvements across all pairwise tests (|t| = 4.12–46.34; p < 10⁻⁴), supported by an overall ANOVA result (F = 14.74, p = 1.42 × 10⁻¹⁰). EGWO-NN also enhances data delivery, outperforming baseline protocols in both packets sent and Packet Delivery Ratio, with highly significant differences (t = 17.62–19.75 and 11.25–22.89). These findings demonstrate that EGWO-NN provides a robust and scalable approach for improving energy efficiency and communication reliability in WSNs. Full article
(This article belongs to the Special Issue Advanced 6G Networks for the Internet of Things)
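For context, the sketch below shows a minimal canonical Grey Wolf Optimizer update (the search mechanism EGWO-NN builds on), applied to a toy fitness function; it omits the paper's enhancements, neural network, and WSN energy model.

```python
# Canonical GWO: wolves move toward the three best solutions (alpha, beta,
# delta) with a linearly decaying exploration coefficient. Toy fitness only.
import numpy as np

def gwo(fitness, dim=3, wolves=20, iters=100, lo=0.0, hi=1.0, seed=1):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (wolves, dim))
    for t in range(iters):
        a = 2.0 * (1 - t / iters)                   # 2 -> 0 over the run
        idx = np.argsort([fitness(x) for x in X])
        leaders = [X[j].copy() for j in idx[:3]]    # alpha, beta, delta
        for i in range(wolves):
            cand = []
            for leader in leaders:
                A = a * (2 * rng.random(dim) - 1)
                C = 2 * rng.random(dim)
                cand.append(leader - A * np.abs(C * leader - X[i]))
            X[i] = np.clip(np.mean(cand, axis=0), lo, hi)
    return X[np.argmin([fitness(x) for x in X])]

# Toy fitness: prefer decision parameters near (0.3, 0.6, 0.2)
print(gwo(lambda x: np.sum((x - np.array([0.3, 0.6, 0.2])) ** 2)).round(2))
```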

16 pages, 2553 KB  
Article
Evaluating AES-128 Segment Encryption in Live HTTP Streaming Under Content Tampering and Packet Loss
by Bzav Shorsh Sabir and Aree Ali Mohammed
Network 2026, 6(1), 4; https://doi.org/10.3390/network6010004 - 31 Dec 2025
Viewed by 652
Abstract
One of the main sources of entertainment is live video streaming platforms, which allow viewers to watch video streams in real time. However, given the increasing demand for high-quality content, the vulnerability of streaming systems to cyberattacks makes it crucial to implement strong security mechanisms without sacrificing performance. Safeguarding video streams against cyberthreats such as content tampering and interception is therefore a top priority, while still maintaining robustness against network fluctuations. Two distinct scenarios are proposed to test AES-128 encryption in securing HTTP live streaming segments against content tampering and for resilience to packet loss. Results show that AES-128 encryption provides confidentiality and successfully prevents meaningful manipulation of the video content, confirming its reliability. Segment encryption does not significantly alter packet-loss-induced playback behavior compared to unencrypted streaming under the tested conditions: performance analysis shows no significant difference in data loss for up to 4% network packet loss relative to unencrypted segments. Full article
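HLS segment protection conventionally uses AES-128 in CBC mode with PKCS#7 padding, which the sketch below demonstrates on a single transport-stream packet using the Python cryptography library. Key delivery, playlist signaling, and the paper's streaming pipeline are out of scope here.

```python
# AES-128-CBC segment encryption with PKCS#7 padding, the standard HLS
# segment-encryption scheme. Key/IV handling is simplified for illustration.
import os
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_segment(segment: bytes, key: bytes, iv: bytes) -> bytes:
    padder = padding.PKCS7(128).padder()               # pad to AES block size
    data = padder.update(segment) + padder.finalize()
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return enc.update(data) + enc.finalize()

key, iv = os.urandom(16), os.urandom(16)               # 128-bit key and IV
ct = encrypt_segment(b"\x47" + b"\x00" * 187, key, iv) # one 188-byte TS packet
print(len(ct))   # 192: ciphertext padded to a 16-byte multiple
```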

26 pages, 749 KB  
Article
Adaptive Real-Time Risk and Impact Assessment for 5G Network Security
by Dionysia Varvarigou, Kostas Lampropoulos, Spyros Denazis and Paris Kitsos
Network 2026, 6(1), 3; https://doi.org/10.3390/network6010003 - 24 Dec 2025
Viewed by 847
Abstract
The expansion of 5G networks has led to larger attack surfaces due to more applications and use cases, more IoT connections, and the distributed 5G system architecture. Existing security frameworks often lack the ability to perform real-time, context-aware risk assessments that are specifically adapted to dynamic 5G environments. In this paper, we present an integrated framework that combines Snort intrusion detection with a risk and impact assessment model to evaluate threats in real time. By correlating intrusion alerts with contextual risk metrics tied to 5G core functions, the framework prioritizes incidents and supports timely mitigation. Evaluation in a controlled testbed shows the framework’s stability, scalability, and effective risk classification, thereby strengthening cybersecurity for next-generation networks. Full article
(This article belongs to the Special Issue Cybersecurity in the 5G Era)
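One simple way to realize the alert-to-risk correlation the framework describes is to weight alert severity by the criticality of the targeted 5G core function. The sketch below is a toy prioritization along those lines; the function names, weights, and alert fields are assumptions, not the paper's model.

```python
# Toy prioritization of Snort-style alerts by a contextual risk score:
# risk = severity x criticality of the 5G core function the target serves.
CRITICALITY = {"AMF": 1.0, "UPF": 0.9, "NRF": 0.7, "edge-app": 0.4}

def risk(alert: dict) -> float:
    return alert["severity"] * CRITICALITY.get(alert["target_nf"], 0.5)

alerts = [
    {"sid": 1001, "severity": 0.6, "target_nf": "edge-app"},
    {"sid": 2002, "severity": 0.5, "target_nf": "AMF"},
]
for a in sorted(alerts, key=risk, reverse=True):
    print(a["sid"], round(risk(a), 2))   # the AMF-facing alert is handled first
```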

34 pages, 11111 KB  
Review
Multi-Level Multi-Technology Underwater Networks: Challenges and Opportunities for Marine Monitoring
by A. Rehman and L. Galluccio
Network 2026, 6(1), 2; https://doi.org/10.3390/network6010002 - 24 Dec 2025
Viewed by 1111
Abstract
Underwater networks are crucial for monitoring the marine ecosystem, enabling data collection to support the preservation and protection of natural resources. Among the various technologies available, acoustic and optical communications stand out for their superior performance in underwater environments. Acoustic technologies are suitable for long-range communications, typically operating over hundreds of meters up to several kilometers, albeit with low data rates ranging from a few hundred bps to a few tens of kbps. In contrast, optical technologies excel in providing high data rates, often between 1 and 10 Mbps, but only over short distances (e.g., 50 m) in controlled conditions. To leverage the strengths of these technologies, recent research has proposed multi-modal underwater systems; however, these solutions generally rely on single-level or at most dual-level architectures, limiting the benefits of a structured hierarchical approach. In this review paper, after discussing related work on multi-technology acoustic and optical networks, we highlight relevant design guidelines for multi-technology, multi-level underwater architectures, explicitly considering three layers: a deep acoustic layer, an intermediate optical layer, and an upper RF-enabled surface layer. For illustration, we also discuss a proof of concept (PoC) of such a hierarchical architecture under development at the University of Catania, Italy, in the Area Marina Isole dei Ciclopi natural reserve. The PoC includes optical nodes capable of transmitting up to 10 Mbps over short ranges and acoustic nodes (both software-defined and conventional) supporting rates of tens of kbps over hundreds of meters while adapting to network conditions, interconnected through hybrid multi-technology nodes deployed across the three network levels. By assigning specific technologies to appropriate layers, the architecture enhances scalability, robustness, and adaptability to dynamic underwater conditions. This design strategy not only improves data transmission efficiency but also ensures seamless operation across diverse marine scenarios, making it an effective solution for a wide range of underwater monitoring applications. Full article

25 pages, 3648 KB  
Article
Authentication and Authorisation Method for a Cloud Side Static IoT Application
by Jose Alvarez, Matheus Santos, David May and Gerard Dooly
Network 2026, 6(1), 1; https://doi.org/10.3390/network6010001 - 19 Dec 2025
Viewed by 528
Abstract
IoT applications are increasingly common, yet they often rely on expensive, externally managed authentication services. This paper introduces a novel, self-contained authentication method for IoT applications which leverages fog computing principles to lower operational costs and infrastructure complexity. The proposed system, fogauth, combines device serial numbers with cryptographically generated UUIDs to establish secure identification without third-party services. A static cloud-side architecture coupled with a lightweight, locally hosted API enables secure authentication through object-storage operations. Performance testing demonstrates comparable security performance to commercial cloud-based authentication while reducing long-term operational costs and maintaining latency at below 2 minutes in production conditions. fogauth provides a scalable and economically viable alternative for companies seeking to reduce cloud dependency and minimize long-term costs associated with IoT application security. To support reproducibility, a complete open-source implementation and validation dataset are provided, allowing independent replication and extension of the system. Full article
(This article belongs to the Special Issue Convergence of Edge Computing and Next Generation Networking)
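A self-contained way to bind a device serial number to a cryptographically generated UUID, as the abstract describes, is to fold an HMAC of the serial into a UUID. The sketch below shows one such derivation; since fogauth's internals are not given in the abstract, the key-derivation choice and names here are assumptions.

```python
# Deriving a stable, unguessable device identifier from a serial number plus
# a provisioning secret: HMAC-SHA256 truncated into a UUID. An illustrative
# construction, not fogauth's published implementation.
import hashlib
import hmac
import uuid

def device_uuid(serial: str, secret: bytes) -> uuid.UUID:
    digest = hmac.new(secret, serial.encode(), hashlib.sha256).digest()
    return uuid.UUID(bytes=digest[:16])      # fold the MAC into 128 bits

secret = b"per-deployment provisioning key"  # assumed shared with the cloud
print(device_uuid("SN-0012345", secret))
# The cloud side recomputes the same UUID to authenticate the device without
# any third-party identity service.
```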
