Telecom, Volume 7, Issue 2 (April 2026) – 17 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; the PDF is the official version. To view a paper in PDF format, click its "PDF Full-text" link and open it with the free Adobe Reader.
24 pages, 444 KB  
Article
A Novel IoT Security Framework Combining X25519 with NIST Lightweight Ascon Encryption and Hybrid Transform-Domain Steganography
by Mohammed Al Saleh, Rima Shbaro and Joseph Azar
Telecom 2026, 7(2), 40; https://doi.org/10.3390/telecom7020040 (registering DOI) - 8 Apr 2026
Abstract
This paper aims to secure sensitive data generated by IoT devices by introducing a lightweight hybrid approach that combines steganography and cryptography. While classical cryptography offers confidentiality guarantees, the visibility of the produced ciphertexts keeps them at risk of traffic analysis, which could reveal communication patterns. Although some studies use Curve25519-based protocols, ECC paired with RDWT, or VLSB-based steganography, there is no complete approach tailored to IoT devices that combines cryptographic and steganographic methods. Our proposed scheme addresses this gap by integrating X25519 with Elligator 2 for efficient key exchange, using Ascon-AEAD128 for encryption, and finally hiding the encrypted payload within cover images using hybrid DWT-DCT steganography. When compared to similar hybrid approaches, our method achieves better performance, with results showing high imperceptibility, low computational overhead, and good resistance to noise. The cryptographic-steganographic combination adopted by our proposed framework improves confidentiality, integrity, and resistance to detection in resource-constrained IoT systems. Full article
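As a hedged illustration of the key-exchange step named in this abstract, the sketch below performs an X25519 exchange and derives a 128-bit session key. It uses the third-party `cryptography` package plus an HKDF derivation step, both of which are assumptions of this sketch; the paper's Elligator 2 encoding, Ascon-AEAD128 encryption, and DWT-DCT embedding are not reproduced.

```python
# Sketch of the X25519 key-agreement step only; the "cryptography"
# package and the HKDF step are assumptions of this illustration.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# Each party generates an ephemeral key pair.
device_priv = X25519PrivateKey.generate()
gateway_priv = X25519PrivateKey.generate()

# Exchange public keys and derive the same shared secret on both sides.
secret_device = device_priv.exchange(gateway_priv.public_key())
secret_gateway = gateway_priv.exchange(device_priv.public_key())
assert secret_device == secret_gateway

# Derive a 128-bit session key suitable for a lightweight AEAD cipher.
session_key = HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
                   info=b"iot-session").derive(secret_device)
```

Both parties arrive at the same 16-byte key, which could then feed a lightweight AEAD cipher such as Ascon.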

31 pages, 3744 KB  
Article
Propagation Analysis of 4G/5G Mobile Networks Along Railway Lines: Implications for FRMCS Deployment in Latvia (2025)
by Aleksandrs Ribalko, Elans Grabs, Aleksandrs Madijarovs, Armands Lahs, Toms Karklins, Anna Karklina, Aleksandrs Romanovs, Ernests Petersons, Lilita Gegere and Aleksandrs Ipatovs
Telecom 2026, 7(2), 39; https://doi.org/10.3390/telecom7020039 - 3 Apr 2026
Viewed by 204
Abstract
This paper investigates the quality of mobile network coverage along the Riga–Tukums railway corridor with a focus on the performance of 4G and 5G technologies. Ensuring reliable mobile connectivity along suburban railway corridors remains a significant technical challenge due to mixed forest–urban propagation conditions, macro-cell-dominated LTE infrastructure, mobility-induced channel variability, and fluctuating passenger density. Unlike high-speed railway environments that are extensively studied in dedicated 5G-R scenarios, suburban railway systems often rely on existing macro-cell deployments, where coverage continuity, signal quality stability, and capacity constraints must be addressed simultaneously. This study presents a measurement-based evaluation of 4G and 5G radio performance along the Riga–Tukums railway corridor under real operational conditions (50–90 km/h). Classical propagation models (Okumura–Hata and COST231-Hata) are quantitatively validated using MAE and RMSE metrics, followed by correlation analysis between RSSNR and QoS indicators. A theoretical Doppler sensitivity assessment (80–200 km/h) is conducted to evaluate mobility robustness across LTE and 5G frequency bands. Mobility transition regions and handover-related time windows are geometrically estimated, and passenger density-based capacity modeling is applied to assess throughput degradation under peak occupancy scenarios. Based on these results, a multi-layer network planning strategy integrating 700 MHz macro coverage, 1700 MHz capacity enhancement, and 3500 MHz 5G NR deployment is proposed. The optimization strategy resulted in an estimated 22–28% increase in stable service coverage in previously weak-signal zones and demonstrated that propagation model deviations remain within ranges comparable to recent railway studies (≈15–25 dB RMSE). These findings provide a structured framework for suburban railway communication optimization and support the gradual modernization of railway infrastructure toward FRMCS-ready architectures. The study illustrates the applicability of modern modelling tools for assessing and improving mobile communication systems and contributes to the broader development of digital infrastructure within Latvia’s transport sector. Full article
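For orientation, the two textbook quantities this abstract leans on can be computed directly. The sketch below gives the standard Okumura–Hata urban (small/medium city) form and the maximum Doppler shift f_d = v·f/c; all numeric inputs are illustrative assumptions, not values from the paper.

```python
import math

def okumura_hata_db(f_mhz, h_base_m, h_mobile_m, d_km):
    """Okumura-Hata median path loss (dB), urban small/medium-city form.
    Valid roughly for 150-1500 MHz, so it covers the 700 MHz layer
    discussed above but not the 3500 MHz NR band."""
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_mobile_m \
         - (1.56 * math.log10(f_mhz) - 0.8)
    return (69.55 + 26.16 * math.log10(f_mhz)
            - 13.82 * math.log10(h_base_m) - a_hm
            + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(d_km))

def doppler_shift_hz(speed_kmh, f_mhz):
    """Maximum Doppler shift f_d = v * f / c."""
    return (speed_kmh / 3.6) * (f_mhz * 1e6) / 3e8
```

At 90 km/h on a 700 MHz carrier the maximum Doppler shift is only about 58 Hz, which is one reason suburban-speed corridors are far more forgiving than the 80-200 km/h range assessed in the paper.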

18 pages, 2574 KB  
Article
A Comparative Benchmark of Scale-Up and Scale-Out MIMO Architectures for 5G and Prospective 6G Networks
by Samuel Otero Rebolo and Victor Monzon Baeza
Telecom 2026, 7(2), 38; https://doi.org/10.3390/telecom7020038 - 3 Apr 2026
Viewed by 167
Abstract
The evolution toward prospective sixth-generation (6G) wireless networks is expected to significantly increase user density, bandwidth demand, and architectural complexity, reinforcing the need for scalable multiple-input multiple-output (MIMO) deployments. In this context, two fundamentally different design strategies have emerged: scaling up centralized antenna arrays and scaling out distributed cooperative infrastructures. This paper presents a system-level comparative benchmark of scale-up and scale-out MIMO architectures, evaluated under identical operating conditions across three representative downlink deployments: centralized Massive MIMO, centralized XL-Massive MIMO, and distributed Cell-Free MIMO. All architectures are assessed under identical urban channel conditions, transmit power, bandwidth, and traffic assumptions, considering sub-6 GHz (3.5 GHz) and millimeter-wave (28 GHz) frequency bands as proxies for 5G and prospective 6G operation. A unified Monte Carlo simulation framework is employed to jointly evaluate aggregate throughput, spectral efficiency, coverage performance, interference behavior, and energy efficiency over a wide range of user densities and service radii. The results highlight the distinct architectural trade-offs between centralized and distributed deployments: XL-Massive MIMO maximizes aggregate throughput and spatial reuse in dense hotspot scenarios, whereas Cell-Free MIMO provides superior coverage uniformity and improved energy efficiency in wide-area deployments. By isolating the impact of architectural scaling under consistent assumptions, the presented benchmark offers quantitative guidance for 6G network design and deployment planning. Full article
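A minimal sketch of the Monte Carlo idea behind such benchmarks, assuming a single-antenna Rayleigh-fading link and stdlib Python only; the paper's full multi-antenna channel, interference, and deployment models are not reproduced here.

```python
import math
import random

def avg_spectral_efficiency(mean_snr_linear, trials=100_000, seed=1):
    """Monte Carlo estimate of E[log2(1 + SNR)] in bits/s/Hz for
    Rayleigh fading, where the channel power gain |h|^2 ~ Exp(1)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        gain = rng.expovariate(1.0)  # exponential power gain
        total += math.log2(1.0 + mean_snr_linear * gain)
    return total / trials
```

A full system-level benchmark repeats this averaging over random user positions and architecture-specific SINR models rather than a fixed mean SNR.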

25 pages, 3352 KB  
Article
Protecting HWSNs from Super Adversaries with Robust Certificateless Signcryption
by Parichehr Dadkhah, Parvin Rastegari, Mohammad Dakhilalian, Phil Yeoh, Mingzhong Wang, Shahrzad Saremi, Rania Shibl, Yassine Himeur and Wathiq Mansoor
Telecom 2026, 7(2), 37; https://doi.org/10.3390/telecom7020037 - 1 Apr 2026
Viewed by 208
Abstract
Healthcare Wireless Sensor Networks (HWSNs) have attracted significant attention due to their vital role in the diagnosis, monitoring, and treatment of diseases. By continuously collecting patients’ physiological data and enabling remote medical services, these networks can greatly improve the quality of healthcare. However, the inadequate handling of security and privacy issues poses serious risks to patients. In this context, signcryption schemes are essential cryptographic primitives that simultaneously provide authentication, confidentiality, and data integrity with a low overhead. Recently, Deng et al. proposed a certificateless signcryption (CL-SC) scheme for HWSNs and proved its security in the standard model. In this paper, we demonstrate that their scheme is insecure under an enhanced adversarial model, where a super Type II adversary, which is a malicious key generation center, can replace the system’s master public key using the master secret key under its control, and subsequently forge valid signcryptions on arbitrary messages on behalf of a sensor node. To address this vulnerability, we propose an enhanced CL-SC scheme based on elliptic curve cryptography (ECC). Under the hardness assumptions of the Elliptic Curve Decisional Diffie–Hellman Problem (ECDDHP) and the Computation Attack Algorithm (CAA), the proposed scheme achieves confidentiality and existential unforgeability against both super Type I and super Type II adversaries in the standard model. Performance analysis further shows that our scheme is efficient and well suited for resource-constrained HWSN environments. Full article

23 pages, 3338 KB  
Article
Improving the Energy Efficiency of Radio Access Networks by Using an Adaptive URLLC Slot Structure Within the 5G Advanced Architecture
by Anastasia V. Ermakova and Oleg V. Varlamov
Telecom 2026, 7(2), 36; https://doi.org/10.3390/telecom7020036 - 1 Apr 2026
Viewed by 263
Abstract
As mobile networks evolve toward Beyond 5G and 6G architectures, energy efficiency and sustainability have become increasingly critical due to growing traffic volumes, denser base station deployments, and the rising number of connected devices. Supporting Ultra-Reliable Low-Latency Communication (URLLC) services is particularly challenging, as their stringent requirements for both high reliability and minimal latency can lead to a significant increase in energy consumption within the radio access network. This paper examines slot structure mechanisms for concurrently servicing URLLC and enhanced Mobile Broadband (eMBB) traffic within the 5G Advanced framework, with a focus on improving energy efficiency and optimizing radio resource utilization. We propose an adaptive algorithm for managing radio interface time resources, which dynamically allocates sub-slots based on current network load and radio channel conditions. The system model is implemented in Simulink and incorporates URLLC and eMBB traffic generation, signal-to-noise ratio estimation, and a priority-based scheduling mechanism. Simulation results demonstrate that the proposed approach meets URLLC latency and reliability requirements while reducing redundant transmissions and enhancing the energy efficiency of the radio access network. These findings position the proposed method as a promising solution for the design of energy-efficient, next-generation mobile networks. Full article
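To make the sub-slot idea concrete, here is a toy priority rule in Python; the function name and pre-emption policy are hypothetical illustrations, not the authors' adaptive algorithm, which also accounts for network load and channel conditions.

```python
def allocate_subslots(n_subslots, urllc_queue, embb_queue):
    """Toy priority scheduler: URLLC packets pre-empt eMBB within a
    slot (hypothetical rule for illustration only). Returns a list of
    (traffic_type, packet) pairs, one per sub-slot."""
    plan = []
    for _ in range(n_subslots):
        if urllc_queue:                      # latency-critical first
            plan.append(("URLLC", urllc_queue.pop(0)))
        elif embb_queue:                     # then best-effort broadband
            plan.append(("eMBB", embb_queue.pop(0)))
        else:
            plan.append(("idle", None))      # idle sub-slots permit micro-sleep,
                                             # the energy-saving lever discussed above
    return plan
```

An energy-aware variant would additionally resize or merge idle sub-slots based on the measured signal-to-noise ratio, as the paper's Simulink model does.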

20 pages, 3392 KB  
Article
AI-Driven Reliability in 6G Networks: Enhancing QoE of Real-World Video Streaming
by Christos Betzelos, Dimitrios Uzunidis, Anastasios Vetsos and Panagiotis A. Karkazis
Telecom 2026, 7(2), 35; https://doi.org/10.3390/telecom7020035 - 30 Mar 2026
Viewed by 369
Abstract
This paper advances user-centric Artificial Intelligence (AI) frameworks for reliability in fifth-generation and beyond (B5G) networks by examining their use in high-demand services such as video streaming. The proposed framework can leverage multi-layer monitoring across the edge–cloud continuum, application-layer metrics, and 5G core performance data to evaluate reliability through Quality of Experience (QoE) optimization. Results demonstrate that improved frame delivery can be achieved via dynamic resource prediction and proactive resource allocation. The study validates the framework’s scalability in dynamic workload conditions, emphasizing its role in mission-critical video services. Full article

24 pages, 3066 KB  
Article
Enhancing Network Traffic Monitoring Through eXplainable Artificial Intelligence Methodologies
by Cătălin-Eugen Bucur, Georgiana Crihan, Anamaria Rădoi, Elena-Grațiela Robe-Voinea and Iustin-Nicolae Moroșan
Telecom 2026, 7(2), 34; https://doi.org/10.3390/telecom7020034 - 23 Mar 2026
Viewed by 394
Abstract
In the contemporary digital landscape, AI (Artificial Intelligence) has emerged as a pivotal tool in enhancing the defense technologies developed across the entire network infrastructure. As reliance on AI-based decision-making has grown, so has the need for interpretability, transparency, and trustworthiness, leading to the development and integration of XAI (eXplainable Artificial Intelligence). This research paper provides a comprehensive overview of the current state of the art in XAI approaches that can be effectively implemented for network traffic monitoring, especially in critical digital infrastructures. The main contribution of this research article consists of the comparative analysis of the XAI SHAP (Shapley Additive Explanation) method applied to different datasets obtained from real-time network traffic monitoring, utilizing several representative parameters, which demonstrates the performance, vulnerabilities, and limitations of the proposed method, as well as the security implications for system resources from a cybersecurity perspective. Experimental results show that Ethernet networks offer higher predictability and clearer decision boundaries. Consequently, they are a safer solution for deployment in sensitive network architectures. In contrast, BYOD (Bring Your Own Device) Wi-Fi environments exhibit greater randomness. Full article
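The SHAP method referenced above approximates Shapley values at scale; on a toy model they can be computed exactly from the definition. The sketch below is illustrative only (pure Python, with absent features replaced by baseline values, an assumption mirroring SHAP's common background-data convention).

```python
import itertools
import math

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at input x relative to a
    baseline: phi_i = sum over subsets S not containing i of
    |S|!(n-|S|-1)!/n! * (f(S ∪ {i}) - f(S)), where features outside
    the subset take their baseline values."""
    n = len(x)
    def eval_subset(present):
        z = [x[i] if i in present else baseline[i] for i in range(n)]
        return f(z)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for subset in itertools.combinations(others, r):
                w = (math.factorial(r) * math.factorial(n - r - 1)
                     / math.factorial(n))
                phi[i] += w * (eval_subset(set(subset) | {i})
                               - eval_subset(set(subset)))
    return phi
```

By the efficiency property, the values sum to f(x) minus f(baseline), which is exactly the "additive explanation" that makes per-feature attributions auditable in traffic-monitoring models.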

17 pages, 376 KB  
Article
Challenges in Digitalization for Holistic and Transparent Supply Chains During Crises
by Larry Wigger and Anthony Vatterott
Telecom 2026, 7(2), 33; https://doi.org/10.3390/telecom7020033 - 20 Mar 2026
Viewed by 505
Abstract
COVID-19 supply-chain disruptions clearly illustrated deficiencies in central coordination. Meaningful improvement in the central coordination of supply chains will require transparency into resource stocks and flows. The latest technologies, such as 5G, blockchain, and IoT, are primed to provide this transparency for collaboration during crises. This will improve agility and service, reduce inventory and enable reverse logistics benefits. Furthermore, transparent global networks can allow a more inclusive and equitable distribution of critical supply, yielding quicker resolution during crises. However, many challenges exist that suggest further delay in the adoption of a holistic and transparent digitalized supply chain. This paper explores the most recent pandemic with attention to the limiting factors at all levels of emergent global crisis response. Full article
(This article belongs to the Special Issue Digitalization, Information Technology and Social Development)

30 pages, 4114 KB  
Article
TricP: A Novel Approach for Human Activity Recognition Using Tricky Predator Optimization Based on Inception and LSTM
by Palak Girdhar, Muslem Al-Saidi, Prashant Johri, Deepali Virmani, Hussein Taha and Oday Ali Hassen
Telecom 2026, 7(2), 32; https://doi.org/10.3390/telecom7020032 - 19 Mar 2026
Viewed by 282
Abstract
Human Activity Recognition (HAR) is a pivotal research area for applications such as automated surveillance, smart homes, security, healthcare, and human behavior analysis. Traditional machine-learning approaches often rely on manual feature engineering, which can limit generalization. Although deep learning has improved HAR through automatic representation learning, achieving high detection performance under computational constraints remains challenging. This paper proposes an efficient HAR framework that combines deep learning with hybrid optimization. Surveillance videos are first decomposed into frames, and a keyframe selection stage identifies distinctive frames to reduce redundancy and computational cost while preserving informative content. Motion and appearance features are then extracted using Histogram of Oriented Optical Flow (HOOF) and a ResNet-101 model, respectively, and concatenated into a unified feature representation. Classification is performed using an Inception-based Long Short-Term Memory (Incept-LSTM) network, which is fine-tuned via the proposed Tricky Predator Optimization (TricP) over a restricted, low-dimensional parameter vector. TricP is inspired by predator poaching behavior and the social dynamics of Latrans to enhance exploration and exploitation during search. Experiments on the UCF-Crime dataset show that the proposed method achieves 96.84% specificity, 92.16% sensitivity, and 93.62% accuracy. Full article
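The reported figures (specificity, sensitivity, accuracy) follow from standard confusion-matrix definitions, sketched below; the counts used are hypothetical, not taken from the UCF-Crime experiments.

```python
def classification_metrics(tp, tn, fp, fn):
    """Standard metrics from confusion-matrix counts:
    specificity = TN/(TN+FP), sensitivity (recall) = TP/(TP+FN),
    accuracy = (TP+TN)/total."""
    specificity = tn / (tn + fp)
    sensitivity = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return specificity, sensitivity, accuracy
```

Reporting all three together is informative for surveillance data, where anomalous activities are rare and accuracy alone can mask a poor sensitivity.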

32 pages, 2273 KB  
Review
Comprehensive Survey on Autonomous Disaster Reconnaissance: A Comparative Analysis of UAVs and UGVs
by Harishik Dev Singh Jamwal and Saurabh Singh
Telecom 2026, 7(2), 31; https://doi.org/10.3390/telecom7020031 - 16 Mar 2026
Viewed by 515
Abstract
Autonomous platforms are critical for accelerating disaster response by delivering situational awareness and search-and-rescue support without exposing human operators to risk. However, practitioners face significant challenges in selecting and implementing robust software on vendor-constrained, immutable hardware. This paper provides a comprehensive survey contrasting the capabilities of two complementary unmanned platforms: Unmanned Aerial Vehicles (UAVs) and Unmanned Ground Vehicles (UGVs). We analyze state-of-the-art software blueprints for perception, navigation, and coordination under the constraints of fixed hardware. Key contributions include a comparative analysis of mission suitability, a synthesis of emerging machine learning algorithms for robust navigation, and an identification of critical research gaps. While recent works have advanced specific algorithms, a comprehensive survey comparing software-driven approaches on fixed-hardware UAVs and UGVs is lacking, a gap this paper aims to fill. Our analysis reveals that the sim-to-real transfer gap, the absence of standardised disaster benchmarks, and limited explainability of deep-reinforcement-learning policies remain the most critical barriers to field deployment. We conclude with a prioritised research roadmap that groups open challenges into short-term (1–2 year) and long-term (3–5+ year) directions. Full article

17 pages, 21262 KB  
Article
On the Effect of the Time Step in Discrete-Time Framework Analysis
by Mario E. Rivero-Ángeles, Izlian Y. Orea-Flores, Iclia Villordo Jiménez and Yesenia E. Gonzalez-Navarro
Telecom 2026, 7(2), 30; https://doi.org/10.3390/telecom7020030 - 10 Mar 2026
Viewed by 212
Abstract
In classic communication systems, signals and data were mostly continuous in time, such as voice (fixed and mobile telephony, and radio systems) and video signals (television services). Conversely, in modern communication systems, most signals are packet-based (text and images in messaging services and social media), and even continuous-time data has to be converted into discrete-time form, such as video and voice services that are now discretized to be sent over packet-based communication systems. However, these classic communication systems were analyzed, studied, and designed using continuous-time analysis, such as the classic Erlang-B formula. This classic analysis can still be used in modern systems, but a discrete-based framework provides a seamless analysis and yields more accurate results. In this work, the effect of the system’s elementary time step is analyzed, and guidelines for its selection are provided to adequately analyze continuous-time systems within a discrete-time framework. To demonstrate the utility of the discretization and to consider these guidelines, we developed a mathematical analysis based on a discrete-time Markov chain to study a system with a buffer capacity under conventional and bursty traffic, which is commonly found in Internet of Things applications. The derived formulas allow us to quantify system performance under a discrete framework. This, in turn, allows us to provide some relevant guidelines for the elementary time step selection to adequately analyze continuous-time systems under a discrete-time framework. Full article
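The classic continuous-time Erlang-B formula mentioned in this abstract has a well-known numerically stable recursion, sketched here for reference with illustrative inputs.

```python
def erlang_b(traffic_erlangs, n_servers):
    """Blocking probability from the Erlang-B formula, via the
    standard stable recursion:
    B(0) = 1,  B(k) = A*B(k-1) / (k + A*B(k-1))."""
    b = 1.0
    for k in range(1, n_servers + 1):
        b = traffic_erlangs * b / (k + traffic_erlangs * b)
    return b
```

For example, 2 Erlangs offered to 2 servers gives a blocking probability of 0.4; the discrete-time Markov-chain framework in the paper recovers such continuous-time results in the limit of a sufficiently small elementary time step.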

25 pages, 2414 KB  
Article
Communication Bicasting for Improving Throughput and Fairness in Multihomed Networks Using QUIC with BBRv3
by Tomoya Kawana, Rei Nakagawa and Nariyoshi Yamai
Telecom 2026, 7(2), 29; https://doi.org/10.3390/telecom7020029 - 4 Mar 2026
Viewed by 412
Abstract
When devices equipped with multiple wireless network interfaces access the Internet via Wi-Fi, 4G, and 5G, external factors such as radio interference can increase packet loss rates, resulting in reduced communication speed. To address this issue, two approaches exist: the use of Bottleneck Bandwidth and Round-trip propagation time (BBR), a congestion control algorithm designed to mitigate the impact of packet loss, and bicasting in multihomed networks. Bicasting in multihomed networks exploits multiple network paths by transmitting identical packets simultaneously over different networks, thereby reducing effective packet loss and mitigating throughput reduction. In this paper, we introduce a novel network architecture that effectively operates in lossy networks by combining bicasting with BBR. By utilizing QUIC and OpenFlow, the proposed architecture enables the construction of a multihomed network that is independent of the operating system (OS), allowing flexible configuration of congestion control algorithms. Furthermore, the introduction of a QUIC proxy enables the use of existing server-side applications without requiring any modifications. Using the proposed multihomed network, we evaluate communication performance for unicasting and bicasting under varying packet loss rates, and we also analyze fairness with competing Transmission Control Protocol (TCP) flows. The results indicate that the combination of BBRv3 and bicasting achieves fivefold higher throughput than TCP unicasting at a 1% packet loss rate while preserving fairness with competing TCP flows. Full article
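The core arithmetic behind bicasting's gain is simple: if losses on the two paths are independent (an assumption of this sketch, not a claim from the paper), a packet is lost only when both copies are lost, so a 1% per-path loss rate becomes an effective 0.01%.

```python
def bicast_effective_loss(p1, p2):
    """Effective loss rate when identical packets are sent on two
    paths with independent loss probabilities p1 and p2: a packet is
    lost only if both copies are lost."""
    return p1 * p2
```

This two-orders-of-magnitude reduction in effective loss is what lets loss-tolerant congestion control such as BBR sustain high throughput on lossy wireless paths.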

18 pages, 2162 KB  
Article
Blockchain-Enabled Decentralized End Hopping for Proactive Network Defense
by Shenghan Luo, Fangxiao Li, Leyi Shi and Dawei Zhao
Telecom 2026, 7(2), 28; https://doi.org/10.3390/telecom7020028 - 4 Mar 2026
Viewed by 462
Abstract
As network attack methods continue to evolve, flooding attacks remain a major threat that causes network paralysis and service disruption. Statically configured systems are particularly vulnerable, as attackers can exploit reconnaissance information to launch large-scale attacks, while conventional defense mechanisms often fail under high-intensity traffic. To address this problem, this paper introduces Moving Target Defense (MTD) within a decentralized framework and proposes a blockchain-based decentralized End Hopping system. The system employs the Practical Byzantine Fault Tolerance (PBFT) consensus protocol for dynamic controller election and incorporates a disaster recovery mechanism, which eliminates single points of failure while ensuring reliable controller transitions and rapid service restoration. Experimental results demonstrate that the proposed system achieves satisfactory performance in terms of availability, effectiveness, and security, providing a practical approach to constructing robust proactive defense networks. Full article

22 pages, 824 KB  
Article
Security Improvement for UAV-Assisted Integrated Sensing, Communication, and Jamming Networks
by Lin Shi, Chuansheng Yan, Dingcheng Yang, Yu Xu, Fahui Wu and Huabing Lu
Telecom 2026, 7(2), 27; https://doi.org/10.3390/telecom7020027 - 3 Mar 2026
Viewed by 505
Abstract
We propose an unmanned aerial vehicle (UAV)-assisted integrated sensing, communication, and jamming (U-ISJC) framework, in which a multifunctional UAV first detects the sensing target to obtain sensing information, and subsequently transmits the information to communication users via a unified beam in the presence of multiple eavesdroppers. To avoid functional conflicts, a time slot frame structure is designed for the UAV’s multifunctional capabilities, enabling communication, sensing, and jamming tasks within each timeslot. The time slot allocation factor dynamically adjusts based on the UAV’s flight trajectory for efficient UAV resource utilization. Additionally, to prevent security rate leakage caused by eavesdroppers, a jamming beam is added to serve both jamming and sensing functions. Our objective is to maximize the worst-case total secure data transmission rate by jointly optimizing sub-time slot allocation, beamforming, and UAV trajectory. To address this problem, we propose a joint optimization algorithm that adopts the concave–convex procedure (CCCP) technique and semi-definite relaxation (SDR), under the block coordinate descent (BCD) framework. The simulation results show that compared with the baseline scheme, the proposed algorithm substantially improves the communication security rate while ensuring the quality of communication and sensing. Full article

20 pages, 4118 KB  
Article
Optimization of Sum-Rate for Downlink Transmission in Hybrid RIS-Assisted MISO Systems
by Wei Pang and Ying Zhang
Telecom 2026, 7(2), 26; https://doi.org/10.3390/telecom7020026 - 3 Mar 2026
Viewed by 289
Abstract
Reconfigurable intelligent surfaces (RISs) hold promising technical prospects for 6G wireless communications to enhance system capacity, coverage and sum-rate. Unlike existing studies deploying only passive or active RISs, this paper adopts a novel hybrid RIS architecture that optimally allocates the number of active and passive elements. Under fixed quantities of both RIS element types in the fixed hybrid RIS, it simultaneously increases the number of base station antennas and served users, focusing on solving rate optimization for hybrid RIS-assisted MISO systems deployed in various scenarios. This paper establishes a fundamental model for hybrid RIS reflection signals. To better characterize the performance of the proposed hybrid RIS architecture, an optimization problem is formulated to maximize the sum-rate of the hybrid RIS-assisted multi-user, multiple-input, single-output (MU-MISO) system. An efficient algorithm is proposed combining fractional programming (FP), alternating optimization, and Lagrange duality transformation. Simulation results demonstrate that with hybrid RIS assistance, the system’s sum-rate gain increases by 49.1% and 40%, respectively, compared to systems with only active RIS deployment. This achieves higher sum-rate gains at lower power consumption. Full article

20 pages, 682 KB  
Article
ARQ-Enhanced Short-Packet NOMA Communications with STAR-RIS
by Zhipeng Wang, Jin Li, Shuai Zhang and Dechuan Chen
Telecom 2026, 7(2), 25; https://doi.org/10.3390/telecom7020025 - 2 Mar 2026
Abstract
To address the rigorous requirements of ultra-reliable low-latency communication (URLLC) in beyond 5G/6G networks, we propose an innovative architecture combining automatic repeat request (ARQ) protocol with a simultaneously transmitting and reflecting reconfigurable intelligent surface (STAR-RIS) to enhance short-packet non-orthogonal multiple access (NOMA) communications. [...] Read more.
To address the rigorous requirements of ultra-reliable low-latency communication (URLLC) in beyond-5G/6G networks, we propose an innovative architecture that combines the automatic repeat request (ARQ) protocol with a simultaneously transmitting and reflecting reconfigurable intelligent surface (STAR-RIS) to enhance short-packet non-orthogonal multiple access (NOMA) communications. Specifically, the retransmission mechanism provided by ARQ is utilized to mitigate packet errors stemming from practical system imperfections, i.e., imperfect channel state information (ipCSI), imperfect successive interference cancellation (ipSIC), and hardware impairments. Using the analytical foundation provided by finite blocklength (FBL) theory, expressions for two key performance metrics, the average block error rate (BLER) and the effective throughput, are derived for two NOMA users. Simulation results validate the analytical derivations and demonstrate that the ARQ scheme provides significant reliability gains for each user and achieves a synergistic gain with STAR-RIS technology. In addition, the effective throughput peaks at an optimal blocklength that balances the reliability gain of a longer blocklength against the spectral efficiency loss of a lower coding rate. This optimal blocklength decreases as the number of STAR-RIS elements grows, since improved channel conditions reduce the need for long blocklengths. Full article
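FBL analyses of this kind typically build on the normal approximation of the block error rate (a standard result; the symbols below are generic and not the paper's notation): for blocklength \(m\), a payload of \(N\) information bits, and instantaneous SINR \(\gamma\),

\[
\varepsilon(\gamma) \approx Q\!\left(\frac{C(\gamma) - N/m}{\sqrt{V(\gamma)/m}}\right),
\qquad
C(\gamma) = \log_2(1+\gamma),
\qquad
V(\gamma) = \left(\log_2 e\right)^2\left(1 - \frac{1}{(1+\gamma)^2}\right),
\]

where \(Q(\cdot)\) is the Gaussian tail function, \(C\) the Shannon capacity, and \(V\) the channel dispersion. Averaging \(\varepsilon(\gamma)\) over the fading of the cascaded STAR-RIS channel gives the average BLER, which makes the trade-off explicit: a larger \(m\) shrinks the dispersion penalty \(\sqrt{V/m}\) but lowers the coding rate \(N/m\)'s spectral efficiency.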
35 pages, 633 KB  
Article
Bi-Objective Optimization for Scalable Resource Scheduling in Dense IoT Deployments via 5G Network Slicing Using NSGA-II
by Francesco Nucci and Gabriele Papadia
Telecom 2026, 7(2), 24; https://doi.org/10.3390/telecom7020024 - 2 Mar 2026
Abstract
The proliferation of Internet of Things (IoT) devices demands efficient resource management in fifth-generation (5G) networks, particularly through network slicing mechanisms supporting massive machine-type communications (mMTCs). This paper addresses IoT connectivity in 5G network slicing through a bi-objective optimization framework balancing operational costs [...] Read more.
The proliferation of Internet of Things (IoT) devices demands efficient resource management in fifth-generation (5G) networks, particularly through network slicing mechanisms supporting massive machine-type communications (mMTCs). This paper addresses IoT connectivity in 5G network slicing through a bi-objective optimization framework that balances operational costs against quality-of-service (QoS) requirements across heterogeneous 5G network slices. The proposed approach employs a tailored Non-dominated Sorting Genetic Algorithm II (NSGA-II) incorporating domain-specific constraints, including device priorities, slice isolation requirements, radio resource limitations, and battery capacity. Through extensive simulations on scenarios with up to 5000 devices, our method generates diverse Pareto-optimal solutions, achieving hypervolume improvements of 8–13% over multi-objective DRL, 15–28% over single-objective DRL baselines, and 22–41% over heuristic approaches, while maintaining computational scalability suitable for real-time network management (sub-2 min execution). Validation with real-world traffic traces from operational deployments confirms the algorithm's robustness under realistic burstiness and temporal patterns, with only 7% performance degradation versus synthetic traffic, within expected simulation–reality gaps. This work provides a practical framework for IoT resource scheduling in current 5G and future Beyond-5G (B5G) telecommunications infrastructures. Full article
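The core of NSGA-II is fast non-dominated sorting, which partitions candidate solutions into Pareto fronts. A minimal self-contained Python sketch is shown below; the cost/QoS-penalty pairs are hypothetical toy values, not the paper's slicing model, and both objectives are assumed to be minimized.

```python
def dominates(a, b):
    # a dominates b if it is no worse in every (minimized) objective
    # and strictly better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_nondominated_sort(points):
    """Return Pareto fronts as lists of indices; front 0 is non-dominated."""
    n = len(points)
    dominated_by = [[] for _ in range(n)]  # indices each solution dominates
    dom_count = [0] * n                    # how many solutions dominate each index
    fronts = [[]]
    for p in range(n):
        for q in range(n):
            if p == q:
                continue
            if dominates(points[p], points[q]):
                dominated_by[p].append(q)
            elif dominates(points[q], points[p]):
                dom_count[p] += 1
        if dom_count[p] == 0:
            fronts[0].append(p)
    i = 0
    while fronts[i]:
        nxt = []
        for p in fronts[i]:
            for q in dominated_by[p]:
                dom_count[q] -= 1
                if dom_count[q] == 0:  # only dominated by earlier fronts
                    nxt.append(q)
        fronts.append(nxt)
        i += 1
    return fronts[:-1]  # drop the trailing empty front

# Toy (cost, QoS-penalty) pairs, both minimized:
pts = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0), (4.0, 4.0)]
print(fast_nondominated_sort(pts))  # → [[0, 1, 2], [3], [4]]
```

In the full algorithm, this sort is combined with crowding-distance selection at each generation to keep the Pareto front both accurate and diverse, which is what the hypervolume metric in the abstract measures.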