1. Introduction
The rapid expansion of the Internet of Things (IoT) has fundamentally transformed data-driven decision-making in various sectors, facilitating the seamless integration of sensors, actuators, and analytics platforms. Among the numerous IoT communication protocols available, Low-Power Wide Area Network (LPWAN) technologies such as Sigfox, NB-IoT, and Ingenu RPMA [1,2,3] stand out owing to their design for long-range, low-power communication. These protocols are particularly well-suited for IoT applications, as they enable devices to transmit small data packets over extensive distances while consuming minimal energy, thereby optimizing operational efficiency and extending the lifespan of battery-operated devices. Long-Range Wide Area Network (LoRaWAN) has emerged as a cornerstone technology for low-power, long-range applications, exploiting its proprietary chirp spread spectrum (CSS) modulation at the physical layer to achieve robust signal penetration and energy efficiency [4]. LoRaWAN’s star-of-stars architecture, in which end devices (EDs) transmit data to gateways (GWs) that relay information to a centralized network server (NS), supports deployments spanning tens of kilometers with minimal infrastructure [5]. These features have made LoRaWAN indispensable in smart cities (e.g., waste management, traffic control) [6], precision agriculture (e.g., soil moisture monitoring) [7], and the industrial Internet of Things (IIoT) (e.g., predictive maintenance) [8,9]. However, its adoption in mission-critical domains such as military operations, healthcare monitoring, and disaster response demands rethinking its default “best-effort” data transmission model to accommodate heterogeneous traffic with varying urgency levels.
In military scenarios, the Internet of Military Things (IoMT) represents a significant technological advancement in modern defense operations, enabling the real-time exchange of data among sensors, actuators, and decision-making systems. These networks enhance situational awareness, resource management, and tactical decision-making, all of which are critical in military environments where rapid and informed responses can significantly impact mission success. IoMT networks rely on LoRaWAN to monitor troop vitals, track equipment status, and detect security breaches in remote border regions [10]. Similarly, healthcare systems utilize wearable LoRaWAN sensors to transmit real-time physiological data, such as heart rate and blood oxygen levels, for remote patient monitoring [11]. These applications generate a mix of high-priority alerts (such as cardiac emergencies and unauthorized border crossings) and routine telemetry, necessitating dynamic prioritization to ensure that critical data are delivered without delay. However, traditional LoRaWAN implementations assume uniform, periodic transmissions, which is insufficient for dynamic, priority-based applications that require adaptive data flow control, leading to inefficiencies.
A major challenge in military border security is ensuring timely and reliable communication in remote and high-risk environments. Sensors deployed along borders detect unauthorized crossings, environmental threats, and troop movements, necessitating a data prioritization mechanism to ensure that critical alerts are transmitted promptly. Similarly, the healthcare monitoring of military personnel requires the frequent transmission of high-priority physiological data, such as vital signs and emergency alerts. While LoRaWAN is suitable for low-data-rate applications, these use cases introduce challenges related to network congestion, latency, and energy consumption, especially when handling a mix of high- and low-priority data.
Existing LoRaWAN network simulation models, such as those in ns-3, primarily support fixed-interval data transmissions, which limits their applicability in IoMT scenarios that require adaptive transmission rates and priority-based traffic management. Current solutions lack mechanisms for dynamically adjusting transmission parameters based on the importance of real-time data, which is crucial for enhancing efficiency and reliability in military and healthcare applications. This study addresses these gaps by introducing a priority-based flow control (PFC) protocol, which optimizes network performance by prioritizing critical data transmissions while maintaining energy efficiency. A traffic generator model is developed using the Kronecker delta function to mathematically express the conditional increase in priority based on sensor readings. We employed a simulation-based approach to model and evaluate priority-based traffic for IoMT applications.
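To illustrate the traffic generator concept, the following minimal C++ sketch expresses the conditional priority escalation with a Kronecker-delta-style indicator: the priority is raised only when a sensor reading crosses a predefined threshold. The threshold values, priority levels, and function names here are illustrative assumptions rather than the exact implementation.

```cpp
#include <cstdint>
#include <iostream>

// Illustrative priority levels (assumed, not normative):
// 0 = routine telemetry, 1 = important, 2 = critical alert.
enum class Priority : uint8_t { Routine = 0, Important = 1, Critical = 2 };

// Kronecker-delta-style indicator: returns 1 only when the
// "threshold exceeded" condition holds, 0 otherwise.
inline int DeltaExceeds(double reading, double threshold)
{
    return (reading > threshold) ? 1 : 0;
}

// Conditional priority escalation: the base priority is increased
// by the indicator, so a reading above the warning threshold raises
// a routine packet to Important, and above the critical threshold
// to Critical.
Priority AssignPriority(double reading, double warnThreshold, double criticalThreshold)
{
    int level = 0;
    level += DeltaExceeds(reading, warnThreshold);      // delta(reading > warn)
    level += DeltaExceeds(reading, criticalThreshold);  // delta(reading > critical)
    return static_cast<Priority>(level);
}

int main()
{
    // Hypothetical heart-rate reading (bpm) against assumed thresholds.
    double heartRate = 132.0;
    Priority p = AssignPriority(heartRate, 100.0, 140.0);
    std::cout << "Assigned priority: " << static_cast<int>(p) << '\n'; // prints 1
    return 0;
}
```

In the traffic generator, such an indicator would gate both the priority field attached to each packet and the choice of transmission interval.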
Beyond military and healthcare use cases, the proposed protocol has implications for smart cities, where emergency alerts (e.g., fire detection) are prioritized over routine environmental data; for IIoT, where equipment failure warnings in automated factories must be transmitted promptly; and for environmental monitoring, where flood or wildfire alerts from remote sensor networks are expedited. By integrating these advancements, our work bridges the gap between LoRaWAN’s low-power design and the rigorous demands of mission-critical IoT, providing a scalable framework for managing heterogeneous data traffic.
To the best of our knowledge, such a framework has not previously been developed in the literature. The principal contributions of this work are summarized as follows:
Design and implementation of an enhanced ns-3 LoRaWAN simulation module that supports event-driven and priority-based traffic modeling.
Development of a priority-based flow control (PFC) algorithm that dynamically adjusts data transmission rates based on sensor priority levels.
Proposal of an adaptive priority-based communication protocol to improve the efficiency of LoRaWAN in military and healthcare applications.
A comprehensive evaluation of the proposed framework through simulation, demonstrating improvements in packet delivery ratio, energy consumption, and network reliability.
The remainder of this article is organized as follows:
Section 2 provides a technological overview of LoRaWAN and its applicability to IoMT.
Section 3 reviews the related literature.
Section 4 describes the design and implementation of the enhanced sensor data generator.
Section 5 provides the system model.
Section 6 presents the proposed schemes.
Section 7 presents and discusses the simulation results.
Section 8 concludes the article with future directions.
2. Technological Overview
LoRaWAN, a trademark closely associated with LoRa, belongs to the family of low-power wide-area network (LPWAN) technologies for the Internet of Things (IoT). It connects large numbers of low-cost end devices (EDs) with low data rates, long range, and long battery life, making it suitable for IoT applications with varying levels of QoS across industries such as smart agriculture, smart metering, smart cities, and smart healthcare [6,7,11,12]. Unlike NB-IoT [13], which operates in licensed cellular spectrum, and Sigfox [1], which relies on a proprietary operator-managed network, LoRaWAN operates in the unlicensed Industrial, Scientific, and Medical (ISM) band. LoRa employs chirp spread spectrum (CSS) modulation at the physical (PHY) layer, which provides the highest receiver sensitivity and the lowest power consumption among comparable LPWAN technologies [4]. CSS enables the demodulation of data packets at a low signal-to-noise ratio (SNR) when lower data rates are used. EDs sense the environment and communicate with the network server (NS) via the gateway (GW). Depending on the distance from the gateway and the propagation conditions, the transmission parameters are set, namely the spreading factor (SF), transmission power (TP), bandwidth (BW), and coding rate (CR). These transmission parameters have a direct impact on energy consumption [14].
LoRaWAN employs the adaptive data rate (ADR) scheme, an essential mechanism that regulates these transmission parameters to optimize resource allocation. The key objective of the ADR scheme is to maximize network capacity by ensuring that EDs continuously transmit with optimal transmission parameters. Because ED battery life is limited and charging or replacing batteries may be impossible in harsh environments, energy efficiency must be considered to avoid degrading the lifetime of a LoRaWAN network. The work in [15] details the network architecture and key features such as the adaptive data rate and device classes, and it highlights LoRaWAN’s strengths in energy efficiency and wide coverage. Our earlier work in [16,17] optimized ADR for energy efficiency. Simulation tools were employed to deploy and evaluate our LoRaWAN network. We reviewed the LoRaWAN simulators available in the literature, such as [18,19,20], and settled on an open-source simulation tool that accurately models the behavior of LoRaWAN networks, including signal propagation, interference, collisions, data rates, and network congestion. We sought features that enable easy customization of network topology and node behavior, allowing the modification of parameters such as transmission power, spreading factor, data rate, and network configuration. The ns-3 simulation tool was selected as the implementation platform. Several LoRaWAN modules have been implemented within ns-3, such as those in [5,21,22,23]. The LoRaWAN module by [5], available in [24], was selected because of the supportive community, user forums, and documentation associated with it. The frequent updates, ongoing maintenance, and bug fixes provided by the module’s developers ensure compatibility with new protocol versions and improvements in simulation capabilities, making it a suitable choice. However, the existing model has limitations. The current Periodic Sender Application only allows fixed-interval data transmission, failing to capture the dynamic and often unpredictable data patterns of austere security and healthcare applications. The model also does not allow the prioritization of critical data, such as emergency alerts or urgent patient updates, which is crucial in security and healthcare scenarios.
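As a concrete illustration of how the fixed-interval sender can be generalized, the sketch below outlines the interval-selection logic of an event-driven sender in which the time to the next uplink depends on the current priority level. The interval values and names are illustrative assumptions; in our implementation this logic is embedded in an ns-3 application derived from the module’s sender application.

```cpp
#include <chrono>
#include <cstdint>
#include <iostream>

// Assumed priority levels and illustrative transmission intervals;
// the actual values are configuration parameters of the application.
enum class Priority : uint8_t { Routine = 0, Important = 1, Critical = 2 };

// Returns the interval until the next scheduled uplink for a given
// priority: critical data are sent immediately, important data at a
// shorter interval, and routine telemetry at the default long interval.
std::chrono::seconds NextInterval(Priority p)
{
    switch (p) {
        case Priority::Critical:  return std::chrono::seconds(0);     // send now
        case Priority::Important: return std::chrono::seconds(300);   // assumed 5 min
        case Priority::Routine:
        default:                  return std::chrono::seconds(1200);  // assumed 20 min
    }
}

int main()
{
    // Hypothetical walk through the three priority classes.
    const Priority cases[] = {Priority::Routine, Priority::Important, Priority::Critical};
    for (Priority p : cases) {
        std::cout << "Priority " << static_cast<int>(p)
                  << " -> next uplink in " << NextInterval(p).count() << " s\n";
    }
    return 0;
}
```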
While LoRaWAN is generally well-suited for applications with low data rate requirements, like many border security use cases, healthcare monitoring often demands higher data rates, leading to several potential effects on LoRaWAN’s performance:
Higher data rates in LoRaWAN require the use of smaller spreading factors, which decrease receiver sensitivity and therefore require higher transmission power for successful communication. This increased transmission power translates directly into higher energy consumption by the end devices. This is a critical consideration in remote and austere environments, where EDs may rely on batteries with a limited lifespan.
LoRaWAN exploits the orthogonality of different spreading factors to allow multiple EDs to transmit concurrently without interference. However, fewer orthogonal channels are available when EDs transmit at higher data rates. This can lead to increased collisions and congestion, reducing the overall network capacity and the number of devices that can be effectively supported.
Higher data rates imply a shorter time on air for individual packets (a time-on-air sketch follows this list). However, in the context of healthcare monitoring, where more frequent transmissions might be needed, the overall airtime used by these transmissions could still increase, potentially leading to higher latency for other applications sharing the LoRaWAN infrastructure.
In LoRaWAN, data rate and coverage have an inherent trade-off. Lower data rates provide a more extended range, while higher data rates limit the range. If healthcare monitoring applications demand higher data rates, it could impact the achievable coverage area for these applications within a LoRaWAN deployment.
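The airtime trade-off can be made concrete with the LoRa time-on-air formula from the Semtech SX1272/SX1276 datasheets. The sketch below computes packet airtime for a given spreading factor and payload; the payload size and radio settings are illustrative assumptions.

```cpp
#include <algorithm>
#include <cmath>
#include <iostream>

// LoRa time-on-air (seconds) following the Semtech SX1272/SX1276
// datasheet formula. Parameters: sf = spreading factor (7..12),
// bwHz = bandwidth in Hz, payloadBytes = PHY payload size, crDenom =
// coding-rate offset (1 for 4/5 ... 4 for 4/8), preambleSymbols =
// programmed preamble length; explicit header and CRC enabled, with
// low-data-rate optimization for SF11/SF12 at 125 kHz.
double TimeOnAir(int sf, double bwHz, int payloadBytes, int crDenom,
                 int preambleSymbols = 8, bool explicitHeader = true,
                 bool crcOn = true)
{
    const double tSym = std::pow(2.0, sf) / bwHz;              // symbol duration
    const bool lowDataRateOpt = (sf >= 11 && bwHz <= 125000.0); // DE flag
    const int ih = explicitHeader ? 0 : 1;
    const int de = lowDataRateOpt ? 1 : 0;
    const int crc = crcOn ? 1 : 0;

    const double num = 8.0 * payloadBytes - 4.0 * sf + 28.0 + 16.0 * crc - 20.0 * ih;
    const double den = 4.0 * (sf - 2.0 * de);
    const double payloadSymbols =
        8.0 + std::max(std::ceil(num / den) * (crDenom + 4.0), 0.0);

    const double tPreamble = (preambleSymbols + 4.25) * tSym;
    return tPreamble + payloadSymbols * tSym;
}

int main()
{
    // Illustrative comparison: 20-byte payload, 125 kHz, coding rate 4/5.
    for (int sf = 7; sf <= 12; ++sf) {
        std::cout << "SF" << sf << ": "
                  << TimeOnAir(sf, 125000.0, 20, 1) * 1000.0 << " ms\n";
    }
    return 0;
}
```

The roughly 20-fold airtime difference between SF7 and SF12 for the same payload underlies the capacity and latency effects discussed above.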
In this work, we attempted to address these challenges by developing an improved sensor data sender application and a priority-based communication protocol. We contribute to realizing dependable and robust LoRaWAN solutions for these critical applications.
3. Related Literature
The military sector has long relied on diverse wireless communication technologies, each tailored to specific operational requirements. While LoRaWAN offers distinct advantages for low-power, long-range sensor networks in IoMT applications, it is important to contextualize its role alongside established military-grade wireless solutions. Mobile Ad Hoc Networks (MANETs), such as those employing the Optimized Link State Routing Protocol (OLSR) [25], form the backbone of tactical communication systems. These self-configuring networks excel in dynamic environments where infrastructure is absent or compromised, enabling peer-to-peer connectivity among soldiers, vehicles, and command centers. MANETs support high mobility and robust data throughput, making them ideal for real-time voice and video transmission. However, their energy-intensive routing protocols and scalability limitations in large-area deployments [26,27] render them less suitable for the low-power, wide-area sensor networks where LoRaWAN thrives. Tactical Cognitive Radio Networks (CRNs) represent a paradigm shift in military communications, enabling dynamic access to underutilized spectrum to avoid jamming and interception. These networks are particularly valuable in electronic warfare scenarios, where maintaining secure and reliable links is paramount. Studies have demonstrated the ability of CRNs to maintain connectivity even in contested spectral environments [28], but their computational complexity and power consumption exceed the capabilities of resource-constrained LoRaWAN end devices. Mesh networks support advanced applications, such as real-time situational awareness and drone control, with throughput orders of magnitude higher than LoRaWAN [29,30]. However, their infrastructural requirements and energy demands limit their use for persistent, wide-area sensor monitoring, precisely the area where LoRaWAN’s energy efficiency and scalability prove advantageous.
Recent advances in wireless sensor networks have established LPWAN technologies such as LoRaWAN as energy-efficient solutions for low-data-rate applications. The growing adoption of LoRaWAN in mission-critical applications has highlighted the need for intelligent priority-based traffic management. Current approaches to data prioritization in IoT networks vary significantly in their design philosophies, performance characteristics, and suitability for different deployment scenarios. While LoRaWAN’s adoption in healthcare monitoring is well documented [31], recent studies highlight its growing role in mission-critical domains such as military operations [32], border security [33], and industrial automation [34]. These applications require adaptive data flow control to handle heterogeneous traffic, where high-priority alerts, such as unauthorized border crossings and equipment failures, must coexist with routine telemetry. The simplest approach to traffic management employs static priority assignments, where data categories are predefined based on application requirements. For instance, in healthcare monitoring systems, vital signs may be permanently classified as high-priority while routine telemetry receives low priority. While our focus is on the prioritization of data transmission, certain applications, such as those in the military and healthcare sectors, also require consideration of security aspects [35]. LoRaWAN utilizes AES-128 encryption to secure the payload with the AppSKey and ensures message integrity and authenticity with the NwkSKey. However, security remains an active research area, with studies focusing on enhancing resilience under adversarial attacks and evolving threat mitigation strategies [36,37,38].
The work in [39] presents a novel Data Transmission Protocol using Priority Approach (DTP-PA) designed for low-rate wireless sensor networks (LR-WSNs) with heterogeneous traffic. The authors addressed challenges in transmitting priority-based traffic in multi-hop sensor networks while minimizing energy consumption and delay. The proposed solution comprises three algorithms that dynamically adjust reporting rates based on decision intervals specified by the sink node. It prioritizes packet scheduling based on hop count and data priority to ensure the timely delivery of critical packets while also optimizing buffer occupancy and data flow by adjusting reporting rates in response to network conditions. Simulation and testbed evaluations demonstrated significant performance improvements in priority packet delivery and overall network throughput compared to traditional methods. The innovative scheduling prioritizes long-distance, high-priority packets, reducing delays for critical applications. The algorithms minimize energy consumption through dynamic reporting rates and distributed decision-making. While the protocol reduces delay for priority packets, comprehensive delay constraints for heterogeneous traffic flows are not addressed, and the solution focuses on multi-hop networks, limiting its direct applicability to other network architectures. This method benefits from straightforward implementation and predictable network behavior, making it suitable for basic IoT deployments with consistent traffic patterns. However, its rigidity becomes problematic in dynamic environments where data criticality may change rapidly. During emergency situations in military or healthcare scenarios, the inability to dynamically reprioritize data flows can result in either excessive resource allocation to non-critical transmissions or insufficient bandwidth for urgently needed information. Moreover, fixed-priority schemes often fail to account for changing network conditions, such as congestion or node mobility, which can potentially exacerbate performance degradation during peak loads or topological changes.
More sophisticated solutions, such as the Priority-Based Energy-Efficient Routing Protocol (PEERP), attempt to address these limitations through conditional routing strategies [40]. The authors present an innovative routing protocol for healthcare systems that utilize IoT. The proposed PEERP protocol classifies health information into two categories: emergency situation (P1) and vital health data (P2). Critical data, P1, are transmitted using direct communication for minimal delay, while less time-sensitive data, P2, are sent via multi-hop communication to optimize energy consumption. PEERP introduces a cost-based mechanism to select forwarder nodes based on residual energy and communication distance, thereby ensuring balanced energy usage and prolonging the network’s lifetime. This work supports delay-sensitive (emergency) and energy-sensitive (continuous monitoring) applications, ensuring broad applicability. While effective in small-scale simulations, PEERP’s performance in large-scale IoT deployments with a greater number of nodes remains unexplored. A slightly higher path loss than ATTEMPT in prolonged scenarios may impact reliability in specific applications. The accuracy of forwarder node selection heavily depends on the cost function parameters, which may need to be tuned for different environments. This approach demonstrates particular effectiveness in healthcare applications, where it balances the need for rapid emergency response with energy conservation for routine monitoring. The protocol incorporates a cost-based forwarding mechanism that considers both residual energy and communication distance when selecting relay nodes, theoretically prolonging network lifetime. The binary priority classification proves inadequate for complex scenarios requiring finer granularity in urgency levels, such as distinguishing between life-threatening emergencies and important but non-critical alerts.
A novel cache replacement technique for industrial IoT applications was developed in [10]. The approach integrates a periodic popularity prediction with content size caching to optimize data transmission and reduce latency in IIoT environments. The method assigns values to cached content based on popularity, size, and time update characteristics. When cache replacement is needed, the least valuable information is removed first. Simulation results demonstrated that the proposed technique improves cache hit rates and reduces transmission delay compared to classical caching algorithms, including GDS, MPC, LRU, FIFO, and LFU. The proposed approach achieved a 15.3% higher hit rate than GDS, 17.3% higher than MPC, 20.1% higher than LRU, 22.3% higher than FIFO, and 24.8% higher than LFU in scenarios involving 350 different information categories. The study’s findings are applicable to various fields, including supply chain management, smart manufacturing, automation, energy optimization, intelligent logistics, and e-healthcare applications. A Priority-Based Energy-Efficient Metaheuristic Routing Approach for Smart Healthcare Systems (SHS) is proposed in [41]. This approach utilizes a hybrid Duty-Cycled Ant Colony Optimization Routing (DC-ACOP) mechanism to optimize data transmission in IoT-enabled smart healthcare systems. Their primary focus was reducing transmission delay and energy consumption while ensuring prioritized data delivery for critical healthcare applications. They proposed the DC-ACOP method, which integrates dynamic duty cycling, where sensor nodes activate their communication units only on demand to save energy. The method integrates priority-based routing, where data packets are categorized based on criticality using the IP packet’s Type of Service (ToS) field. This ensures that high-priority healthcare data are transmitted first. They employed metaheuristic optimization, specifically Ant Colony Optimization (ACO), which determines efficient routing paths dynamically based on parameters such as residual energy, mobility, and path length. The ACO-based routing approach requires additional processing, which may not be ideal for resource-constrained sensor nodes. While practical, assigning priority labels and dynamically adjusting routing behavior adds protocol complexity.
The article entitled “Reducing Operational Expenses of LoRaWAN-Based Internet of Remote Things Applications” [42] focuses on optimizing scheduled transmissions in LoRaWAN networks that rely on LEO satellite links instead of terrestrial backhauls. To minimize operational costs—specifically the airtime billed by satellite operators—the authors proposed a cost-efficient transmission scheduling algorithm that organizes when and how end devices send data through LoRaWAN gateways to satellites. Simulation results demonstrated that this scheduling approach significantly reduced airtime and, therefore, operational expenses, without compromising data delivery reliability. The approach assumes predictable and timely satellite pass availability, which may not be practical in dynamic or emergency scenarios. The solution relies on strategically placed LoRaWAN gateways with satellite connectivity, which may pose logistical or financial barriers in certain contexts. The work in [10] addresses challenges in IIoT sensing networks, particularly the uncertainty in data transmission caused by limited resources and dynamic environments. The authors propose a priority-based transmission algorithm (PBTA) that enhances data certainty by prioritizing data packets based on the urgency and importance of the IIoT application. The drawback is that the computational overhead caused by the priority calculation and dynamic slot allocation may introduce processing delays, especially in low-power IIoT devices.
A priority-driven transmission model for wearable sensors in healthcare systems is developed in [43]. The model employs a Type-2 Fuzzy Logic System (FLS) for real-time congestion detection and utilizes active queue management (AQM) to dynamically regulate transmission rates. Their key focus is on reducing congestion and minimizing transmission delay while ensuring reliable data delivery. The proposed method integrates selective decision modes, including congestion mitigation and perfect queuing, to enhance network efficiency. The fixed-priority assignment is a drawback, as there is no priority reassignment mechanism to adjust to changing patient conditions. The CBT-based slot allocation may become inefficient if network traffic experiences unexpected spikes. The study evaluates its approach through simulations in OMNeT++, demonstrating improved queue utilization (37.18%), higher success rates (3.67%), and a reduced transmission delay (23.69%) compared to existing methods. The study in [44] introduces LoRa-REP, a redundancy-based transmission mechanism designed to enhance communication reliability and resilience in LoRaWAN networks for mixed-criticality applications. Traditional LoRaWAN suffers from high failure probabilities due to its simple ALOHA MAC protocol, which lacks built-in mechanisms for prioritizing transmissions and ensuring reliable retransmission. The proposed LoRa-REP solution addresses these limitations by replicating critical messages across multiple spreading factors, reducing transmission failures. The authors implemented message replication using multiple SF values, ensuring orthogonality and minimizing interference. They utilized virtual nodes to mimic independent devices, achieving backward compatibility with existing LoRaWAN deployments. The optimization of uplink and downlink transmission scheduling was implemented to minimize transaction delays. Experimental results using a real-world LoRaWAN testbed showed that LoRa-REP reduced the failure probability of critical transactions from 78% (single SF) to below 2.5%, significantly improving reliability in mixed-criticality scenarios. The drawback of this solution is increased energy consumption, as it requires multiple transmissions for message redundancy, potentially draining the battery of battery-powered devices more quickly. While LoRa-REP improves reliability for a few critical nodes, excessive redundant transmissions may increase network congestion in dense deployments. The multiple SF replications consume more transmission slots, reducing overall network capacity. The SF replication strategy is fixed and is not dynamically adjusted based on network conditions. Furthermore, the approach demonstrates poor scalability as the number of critical nodes increases, with the network capacity diminishing rapidly due to the multiplicative effect of redundant transmissions.
These limitations make pure redundancy-based solutions impractical for large-scale deployments with mixed criticality requirements. Unlike the static priority assignments, our work employs dynamic thresholds that continuously evaluate sensor data against clinically or operationally significant parameters. This real-time assessment, implemented through Kronecker delta functions, enables automatic priority escalation when measurements exceed predefined thresholds. We propose a priority-based protocol that considers energy efficiency.
5. System Model
Having developed a more realistic data generator, we implemented it in our border control application scenario, where fixed and mobile end devices were used by the military to secure the border.
Figure 3 is a NetAnim output of the ns-3 simulation [46] showing the proposed two-dimensional network topology area of the US–Mexico border. The small red dots along the US side of the borderline denote the static LoRaWAN EDs deployed for border security operations, and the militant icons denote the mobile EDs. The network consists of a network server, gateways, static EDs monitoring the environment, and mobile body-sensor EDs monitoring the vitals of the patrolling soldiers. We considered a LoRaWAN network using European regional parameters with confirmed traffic transmissions for Class A end devices. The network uses LoRa modulation with a fixed bandwidth of 125 kHz and consists of three GWs positioned within 5 km of the border to provide coverage (Table 1). The EDs are heterogeneous, that is, mobile and static, and are placed randomly with a uniform distribution over the coverage area. The simulation tool mimics the SX1301 digital baseband chip for the GW and the SX1272 [47,48] for the ED transceiver. The network spans a topographical area of 10 km by 20 km. The EDs generate packets with a fixed payload for a given SF at different application data intervals, regardless of proximity to the GW. In our algorithm, the NS uses the average SNR value [49,50] of the previous four packets sent by an ED to approximate the link quality, whereas the standard ADR protocol uses the maximum SNR value of the last twenty packets; this saves computational cost [16]. We utilized the log-distance propagation path loss model [51], which estimates signal strength decay over distance, considering factors such as attenuation and interference. The system model also incorporates the interference that simultaneous uplink transmissions cause on a given uplink transmission. A transmitted packet is received or dropped based on the sensitivity values provided in Table 2.
Therefore, an SF should be allocated to an ED such that the received signal strength remains above the receiver sensitivity, which is ensured by the link-margin condition in (1):

$$\overline{SNR} \geq SNR_{min}(SF) + M_{dev} \quad (1)$$

where $\overline{SNR}$ denotes the average SNR of the packets in the ReceivedPacketList, $SNR_{min}(SF)$ is the minimum SNR threshold for the allocated SF, and $M_{dev}$ is the device margin.
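A minimal sketch of this allocation rule is shown below, assuming the per-SF demodulation thresholds commonly quoted for LoRa (approximately −7.5 dB at SF7 down to −20 dB at SF12) and a configurable device margin; the exact thresholds and margin used in the simulation are configuration parameters.

```cpp
#include <deque>
#include <iostream>
#include <map>
#include <numeric>

// Assumed minimum demodulation SNR (dB) per spreading factor, as
// commonly quoted for LoRa receivers; treat these as illustrative.
const std::map<int, double> kSnrMinDb = {
    {7, -7.5}, {8, -10.0}, {9, -12.5}, {10, -15.0}, {11, -17.5}, {12, -20.0}};

// Average SNR of the last few received packets (the ReceivedPacketList).
double AverageSnr(const std::deque<double>& receivedSnrDb)
{
    return std::accumulate(receivedSnrDb.begin(), receivedSnrDb.end(), 0.0) /
           static_cast<double>(receivedSnrDb.size());
}

// Selects the smallest SF whose SNR requirement plus the device margin
// is satisfied by the averaged link quality (Equation (1)); falls back to SF12.
int AllocateSf(const std::deque<double>& receivedSnrDb, double deviceMarginDb)
{
    const double avg = AverageSnr(receivedSnrDb);
    for (const auto& [sf, snrMin] : kSnrMinDb) {
        if (avg >= snrMin + deviceMarginDb) {
            return sf; // smallest (fastest) SF satisfying the condition
        }
    }
    return 12;
}

int main()
{
    // Hypothetical SNRs of the previous four packets from one ED.
    std::deque<double> lastFour = {-9.0, -8.0, -10.5, -9.5};
    std::cout << "Allocated SF: " << AllocateSf(lastFour, 3.0) << '\n'; // prints 9
    return 0;
}
```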
The received signal power at the GW (in dB) is given in (2):

$$P_{rx,i} = P_{tx,i} + G - PL_i \quad (2)$$

where $P_{tx,i}$ is the transmit power of the $i$th ED, $G$ is the antenna gain, and $PL_i$ is the path loss.
The path loss propagation is given by the log-distance model in (3):

$$PL_i = PL(f_c, d_0) + 10\,\gamma\,\log_{10}\!\left(\frac{d_i}{d_0}\right) \quad (3)$$

where $d_i$ is the distance between the $i$th ED and the gateway, $\gamma$ is the path loss exponent (3.76), and $PL(f_c, d_0)$ is the reference loss at reference distance $d_0$ evaluated at the carrier frequency $f_c$ (868.1 MHz).
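A brief numerical sketch of Equations (2) and (3) follows; the antenna gain, reference loss, and reference distance are assumed values for illustration, and the sensitivity figure mirrors values typically listed for SF12 at 125 kHz.

```cpp
#include <cmath>
#include <iostream>

// Log-distance path loss (dB): reference loss at d0 plus the
// distance-dependent term with exponent gamma (3.76 in our model).
double PathLossDb(double distanceM, double refLossDb, double refDistM, double gamma)
{
    return refLossDb + 10.0 * gamma * std::log10(distanceM / refDistM);
}

// Received power at the gateway (Equation (2)): transmit power plus
// antenna gain minus path loss, all in dB/dBm.
double ReceivedPowerDbm(double txPowerDbm, double antennaGainDb, double pathLossDb)
{
    return txPowerDbm + antennaGainDb - pathLossDb;
}

int main()
{
    // Illustrative link budget for an ED 1 km from the gateway.
    const double refLossDb = 31.23;  // assumed reference loss at d0 = 1 m, 868.1 MHz
    const double gamma     = 3.76;   // path loss exponent used in the model
    const double pl  = PathLossDb(1000.0, refLossDb, 1.0, gamma);
    const double prx = ReceivedPowerDbm(14.0, 0.0, pl); // 14 dBm TP, 0 dBi gain

    // The packet is receivable only if Prx exceeds the SF-dependent sensitivity.
    const double sensitivitySf12Dbm = -137.0; // typical SF12/125 kHz figure
    std::cout << "Path loss: " << pl << " dB, Prx: " << prx << " dBm, "
              << (prx > sensitivitySf12Dbm ? "received" : "dropped") << " at SF12\n";
    return 0;
}
```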
We assumed a simple energy consumption model comprising four states: transmit, idle, receive, and sleep. The energy model links each of the aforementioned states with a different voltage and current utilization, as shown in Table 3. We monitored the energy usage of each node throughout the simulation period to determine the network’s overall energy consumption. The model calculates the device’s energy consumption and estimates the ED’s battery life. The total energy consumption for each ED is given by (4):

$$E_{T} = E_{tx} + E_{rx} + E_{idle} + E_{sleep} \quad (4)$$

where $E_{tx}$ is the energy consumed when the ED is transmitting a packet, $E_{rx}$ is the energy consumed when the ED is receiving an incoming packet, $E_{idle}$ is the energy consumed when listening for incoming packets, and $E_{sleep}$ is the energy consumed when the ED is in sleep mode.
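For illustration, the sketch below evaluates Equation (4) from per-state voltage, current, and residence times; the numerical values are placeholders rather than the Table 3 settings.

```cpp
#include <array>
#include <iostream>

// One radio state of the simple energy model: supply voltage (V),
// current draw (A), and total time spent in the state (s).
struct RadioState {
    const char* name;
    double voltageV;
    double currentA;
    double timeS;
    double EnergyJ() const { return voltageV * currentA * timeS; } // E = V * I * t
};

int main()
{
    // Placeholder per-state figures for one ED over the simulation
    // period (NOT the Table 3 values; those are set in the simulation).
    std::array<RadioState, 4> states = {{
        {"transmit", 3.3, 0.028,     120.0},    // E_tx
        {"receive",  3.3, 0.0112,     60.0},    // E_rx
        {"idle",     3.3, 0.0014,    600.0},    // E_idle
        {"sleep",    3.3, 0.0000015, 85620.0}   // E_sleep (remainder of the day)
    }};

    // Equation (4): total energy is the sum over the four states.
    double totalJ = 0.0;
    for (const auto& s : states) {
        std::cout << s.name << ": " << s.EnergyJ() << " J\n";
        totalJ += s.EnergyJ();
    }
    std::cout << "E_T = " << totalJ << " J\n";
    return 0;
}
```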
7. Results and Discussion
An evaluation of the proposed algorithm is presented in this section based on the results derived from the simulations. The performance analysis of the LoRaWAN network focuses primarily on four key metrics: confirmed packet success rate (CPSR), uplink packet delivery ratio (UL-PDR), total energy consumption (ET), and interference ratio (IR). In the first evaluation scenario, we varied the number of static border EDs while maintaining a fixed number of BSN EDs (50), an application data interval of 60 s for the mobile BSN EDs, and 1200 s for the fixed border EDs. In the second evaluation scenario, we varied the number of BSN EDs while maintaining a constant application data interval for both BSN EDs and border EDs and a fixed number of static border EDs. The main findings of our study indicate that our proposed protocols, PFC and PFC_DCDU, performed significantly better in terms of packet delivery ratio than the case where no flow control was used, while also demonstrating reduced interference rates and energy consumption.
7.1. Performance with Varying Number of Static Border Sensor EDs
This section presents an analysis of the impact of increasing the number of border sensor nodes (No.BorderEDs) on various network performance metrics for different data flow control mechanisms, as illustrated in Figure 4. The evaluation compares three data flow control mechanisms: No Flow Control (NFC), priority-based flow control (PFC), and PFC with Dynamic CDU (PFC_DCDU). This analysis considers a 60-second data interval for mobile EDs and a 1200-second data interval for static EDs.
7.1.1. No.BorderEDs vs. UL_PDR (Uplink Packet Delivery Ratio)
The uplink packet delivery ratio is the ratio of the number of packets successfully received at the GW to the number of packets generated by the ED. The results indicate that an increase in the number of border sensor nodes results in a decrease in the uplink packet delivery ratio, primarily due to network congestion and increased packet collisions. However, the implementation of flow control mechanisms significantly mitigates the decline. NFC exhibits a substantial decrease in UL_PDR, dropping to approximately 0.75 at 400 nodes, which highlights severe congestion and frequent packet losses. PFC and PFC_DCDU maintain high reliability (0.95) up to 300 nodes but experience a slight decline at 400 nodes due to increased contention for network resources. The vertical bars on the graphs indicate the range between the lowest and highest recorded values from the multiple simulation runs, and the spread of the data points around the mean. The error bars indicate high variability in NFC, suggesting unstable packet delivery performance due to retransmissions, whereas PFC_DCDU exhibits smaller error margins, demonstrating consistent reliability. A marginal improvement in UL_PDR between PFC and PFC_DCDU is attributed to the dynamic confirmed data update mechanism.
7.1.2. No.BorderEDs vs. CPSR (Confirmed Packet Success Rate)
The confirmed packet success rate is the probability that the transmitted uplink packets and their corresponding downlink packets are received by the network server and the ED, respectively, in at least one of the available transmission attempts. The analysis of the results reveals that NFC experiences a sharp decline in CPSR, from 0.65 to 0.45, as node density increases, indicating severe network degradation due to congestion and retransmission overhead. PFC and PFC_DCDU maintain CPSR values above 0.9 up to 300 nodes, with a minor decline at 400 nodes, suggesting that the intelligent scheduling of packets preserves network efficiency under high node densities. The error bar analysis shows that PFC_DCDU has smaller variations, reinforcing its robustness and stability in high-density deployments.
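A small sketch of how these two metrics can be computed from simulation counters is given below; the counter names are hypothetical stand-ins for the trace sources collected during the runs.

```cpp
#include <iostream>

// Hypothetical per-run counters gathered from the simulation traces.
struct RunCounters {
    unsigned generatedUplinks;      // packets generated by all EDs
    unsigned receivedAtGateway;     // uplinks successfully received at any GW
    unsigned confirmedTransactions; // confirmed uplinks whose ACK reached the ED
    unsigned confirmedAttempts;     // uplinks sent in confirmed mode
};

// UL_PDR: fraction of generated uplinks that reached a gateway.
double UplinkPdr(const RunCounters& c)
{
    return static_cast<double>(c.receivedAtGateway) / c.generatedUplinks;
}

// CPSR: fraction of confirmed uplinks for which both the uplink and the
// corresponding downlink ACK succeeded within the allowed attempts.
double ConfirmedPacketSuccessRate(const RunCounters& c)
{
    return static_cast<double>(c.confirmedTransactions) / c.confirmedAttempts;
}

int main()
{
    // Illustrative numbers only.
    RunCounters c{10000, 9450, 9100, 9800};
    std::cout << "UL_PDR = " << UplinkPdr(c)
              << ", CPSR = " << ConfirmedPacketSuccessRate(c) << '\n';
    return 0;
}
```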
7.1.3. No.BorderEDs vs. Energy Consumption
The total energy consumption comprises the energy utilized by all the EDs. Energy efficiency is a critical factor in IoMT networks.
Figure 4 illustrates the relationship between node density and energy consumption, showing that NFC leads to excessive energy consumption, exceeding 2000 J at 400 nodes, primarily due to inefficient retransmissions and packet losses. PFC and PFC_DCDU exhibit significantly lower energy consumption owing to effective congestion control and prioritization mechanisms that reduce unnecessary transmissions. The error bars for NFC are relatively large, reflecting the inconsistent nature of energy expenditure due to unpredictable retransmissions. PFC_DCDU exhibits the smallest error bars, suggesting predictable and controlled energy utilization, which is crucial for energy-constrained applications.
7.1.4. No.BorderEDs vs. Lost Packets Due to Transmission Issues
Packet loss due to transmission failures is a key indicator of network performance under heavy traffic. NFC results in an exponential increase in lost packets, exceeding 2500 at 400 nodes, making it unsuitable for large-scale deployments. PFC and PFC_DCDU significantly reduce packet losses, maintaining a stable and efficient transmission environment. The error bar analysis indicates that NFC experiences substantial fluctuations, signifying unreliable network performance. PFC_DCDU achieves the lowest packet loss with minimal variance, reinforcing its efficiency in handling increasing node density.
7.2. Performance at Varying Mobile Body Sensor EDs
The analysis depicted in Figure 5 provides insights into the effectiveness of different flow control strategies under varying numbers of body sensor nodes while keeping border sensor node parameters fixed. While the performance gains achieved by the PFC_DCDU model may not be substantial in every context, they are nevertheless impactful in scenarios characterized by a high node density, indicating its potential for optimizing system performance in such environments.
7.2.1. No.BodyEDs vs. Uplink Packet Delivery Ratio
UL_PDR, as depicted in the top-left graph, demonstrates the proportion of successfully received packets at the gateway. As the number of body sensor nodes increases, UL_PDR declines, particularly in the case of the “No Flow Control” approach. This degradation is attributed to network congestion, increased collisions, and excessive retransmissions, leading to packet loss. The priority-based flow control (PFC) and PFC with Dynamic CDU (PFC_DCDU) exhibit superior performance, maintaining a high UL_PDR above 95%. The effectiveness of these mechanisms is attributed to their ability to manage transmission scheduling and data prioritization, thereby reducing congestion and enhancing reliability.
7.2.2. No.BodyEDs vs. Confirmed Packet Success Rate
The CPSR, shown in the top-right graph, represents the successful reception rate of high-priority packets. The “No Flow Control” approach exhibits a sharp decline in CPSR as the number of body sensor nodes increases, dropping from approximately 0.9 to nearly 0.5. This indicates that without an intelligent flow control mechanism, critical data are severely impacted by network congestion. On the other hand, both PFC and PFC_DCDU demonstrate robustness in maintaining a CPSR above 90%. The slight decline observed in these schemes is attributed to increasing network contention; however, the proactive traffic management employed ensures that critical packets receive a higher transmission priority, thereby sustaining reliability.
7.2.3. No.BodyEDs vs. Energy Consumption
The bottom-left graph illustrates energy consumption across different flow control strategies. The “No Flow Control” scheme results in significantly higher energy consumption, increasing steeply with the number of body sensor nodes. This is due to excessive retransmissions, idle listening, and unnecessary packet forwarding, resulting in the rapid depletion of node batteries. Conversely, PFC and PFC_DCDU consume substantially less energy, with PFC_DCDU demonstrating the lowest energy usage. This efficiency is because the dynamic CDU optimizes transmission intervals and adaptively controls duty cycles, reducing redundant transmissions and conserving energy.
7.2.4. No.BodyEDs vs. Lost Packets Due to Transmission Issues
The bottom-right graph provides insights into the number of packets lost due to transmission failures. The “No Flow Control” strategy suffers from severe packet losses, escalating dramatically with increasing body sensor nodes. This further corroborates the earlier findings that network congestion and excessive retransmissions lead to deteriorating performance. In contrast, PFC and PFC_DCDU significantly mitigate packet losses, with PFC_DCDU performing slightly better. These mechanisms’ proactive congestion management and intelligent transmission scheduling ensure more efficient channel utilization, reducing the likelihood of packet drops.
7.3. Performance at Varying Static Border Sensor Data Intervals
Figure 6 presents the results, which provide insights into the proposed protocol’s performance under varying static border sensor data intervals in terms of key performance metrics, including UL_PDR, CPSR, energy consumption, and lost packets due to transmission issues. The analysis compares the three data flow control mechanisms.
7.3.1. Impact on Uplink Packet Delivery Ratio
The figure illustrates the relationship between the border sensor data interval and UL_PDR. The results indicate that UL_PDR improves with increasing data intervals, showing a consistent upward trend. Notably, the No Flow Control exhibits significantly lower UL_PDR values compared to the flow control mechanisms, indicating its inefficiency in handling network congestion. Both PFC and PFC_DCDU demonstrate superior performance, with PFC_DCDU marginally outperforming PFC. This suggests that the dynamic CDU mechanism further enhances data packet delivery reliability by effectively managing network resources.
7.3.2. Impact on Confirmed Packet Success Rate
The CPSR results follow a trend similar to UL_PDR, where increasing data intervals lead to improved confirmed packet success rates. The No Flow Control scheme performs significantly worse, stabilizing below 0.6, whereas both PFC and PFC_DCDU maintain a CPSR above 0.85, reflecting their ability to manage packet transmissions efficiently. The slight advantage of PFC_DCDU over standard PFC suggests that dynamic CDU mechanisms further optimize confirmed packet delivery.
7.3.3. Impact on Energy Consumption
The energy consumption results exhibit a downward trend, implying that increasing the data interval reduces energy expenditure across all schemes. This reduction is most pronounced for the No Flow Control scheme, which initially consumes the highest energy (above 1000 J at the smallest interval) but gradually decreases as the interval increases. The PFC and PFC_DCDU schemes consistently exhibit significantly lower energy consumption. Among them, PFC_DCDU achieves the lowest energy usage, reinforcing its efficiency in balancing transmission needs and energy expenditure. These results confirm that flow control strategies improve network longevity by reducing unnecessary retransmissions and collisions.
7.3.4. Impact on Lost Packets Due to Transmission Issues
The bottom-right plot quantifies the number of packets lost due to transmission failures. The No Flow Control mechanism exhibits the highest packet loss, with over 1600 packets lost at the smallest interval. As the interval increases, packet loss decreases; however, it remains significantly higher than the flow-controlled approaches. Both PFC and PFC_DCDU maintain substantially lower lost packet counts, with PFC_DCDU showing a slight edge over standard PFC. These findings suggest that flow control mechanisms—especially those incorporating dynamic CDU—effectively mitigate packet loss by optimizing transmission efficiency.
The results conclusively demonstrate that employing a data flow control mechanism significantly improves network reliability, reduces energy consumption, and enhances packet delivery efficiency. Among the evaluated approaches, the PFC_DCDU scheme consistently outperforms the others, achieving optimal trade-offs between energy efficiency and data reliability. The findings emphasize the necessity of adaptive flow control techniques in large-scale IoT deployments, particularly in mission-critical applications such as health monitoring and emergency response systems.
7.4. Priority-Based Performance Metrics
This section comprehensively evaluates throughput performance under the priority-based flow control (PFC) protocol, examining how priority levels (Pr = 0, 1, and 2) influence network efficiency and reliability. Throughput is measured as the number of successful packets delivered per minute per gateway, taking into account collision rates, transmission intervals, and confirmed delivery mechanisms.
Table 4 summarizes the analysis of the simulation results.
For high-priority traffic (Pr = 2), the protocol achieves near-real-time delivery (12–15 packets per minute per gateway) with consistent sub-100 ms latency, even in dense deployments of up to 400 nodes. This is accomplished through immediate transmission, prioritized channel access, and the use of minimal spreading factors (SF7). Medium-priority traffic (Pr = 1) maintains a balance between data freshness and network efficiency, delivering 3–5 packets per minute with controlled latency under 5 min, while the dynamic 300 s transmission interval adapts to network load. Low-priority traffic (Pr = 0) operates with intentional packet dropping (15–20%) during congestion, which preserves higher-priority throughput and stabilizes overall network interference. The results confirm that PFC’s tiered approach successfully meets the distinct requirements of critical, important, and background data flows.
The protocol’s performance remains robust across different network densities and mobility scenarios. Even at peak loads, the Pr = 2 throughput shows only minimal degradation (≤10% at 400 nodes), while Pr = 1 maintains reliable service through adaptive intervals and PFC_DCDU’s confirmed delivery mechanism. The strategic dropping of Pr = 0 packets during congestion prevents network overload while still capturing 90% of sensor trends. These results validate that PFC’s design choices, including priority-specific transmission intervals, dynamic spreading factor selection, and controlled packet dropping, collectively optimize the trade-offs between latency, throughput, and energy efficiency. The protocol’s ability to maintain stratified quality-of-service levels makes it particularly suitable for mixed-criticality IoMT applications where both urgent alerts and routine monitoring must coexist on resource-constrained networks.
7.5. Generalization to Alternative Topologies
While the PFC_DCDU protocol is designed for LoRaWAN’s star-of-stars topology, its priority-based flow control principles could extend to other IoT network architectures, albeit with distinct trade-offs. In mesh networks, PFC’s interval-based prioritization could integrate with multi-hop routing to enforce end-to-end priority handling, although latency may increase for high-priority packets due to hopping. Cluster-tree topologies naturally align with PFC’s tiered priorities, where cluster heads can locally schedule traffic. However, bottlenecks near the root may require dual-homing for critical nodes. For MANETs, dynamic topology changes would necessitate cross-layer coordination to maintain priority queues across transient links, potentially combining PFC’s intervals with location-aware priority assignment. In cellular IoT, PFC could be mapped to existing QoS classes, although energy costs may increase for low-priority devices. Hybrid topologies such as mesh-star would require gateway-based priority translation to ensure consistent behavior across segments. Across all architectures, PFC’s core mechanism—dynamic interval tuning based on priority—remains applicable; however, its efficacy depends on topology-specific adaptations to address challenges such as resource allocation fairness, priority inversion, and mobile node management. This flexibility suggests PFC could serve as a template for priority-aware IoT communication beyond LoRaWAN, with implementation suggestions guided by deployment conditions.