1. Introduction
Digital transformation is one of the most prominent topics in applied technology, affecting multiple industries, including manufacturing, logistics, and utilities. Building on connected devices, the Industrial Internet of Things (IIoT) plays a key role in enabling new use cases, such as zero-defect manufacturing, human–machine collaboration, and smart metering, among other innovative applications. However, one of the main challenges lies in scaling up IIoT solutions while ensuring robustness and quality of service. In this context, wireless communication that enables reliable data exchange in the harsh environments of industrial scenarios is a key enabler for digital transformation. Given these requirements, LoRaWAN is a suitable candidate for setting up the aforementioned wireless network.
LoRa and LoRaWAN are protocols for Low Power Wide Area Networks (LPWAN). In terms of the OSI layers, at layer 1 (the physical layer) we find the LoRa communication technology [1]. This technology allows point-to-point communication between devices. LoRa devices are characterized by long range and minimal power consumption. To achieve this, LoRa uses a spread spectrum technique, in which the transmitted signal occupies more bandwidth than is theoretically necessary; this allows multiple signals to be received in parallel, given that they are transmitted at different rates. LoRa uses the ISM band, although the technology can operate on any frequency below 1 GHz, so, as long as emission regulations are respected, any person or company can use it without a license. LoRa thus usually operates in the 433 MHz, 868 MHz (Europe), and 915 MHz (America) bands, depending on the country or region. These regulations are important, as they restrict not only the channels but also the transmission power and the duty cycle allowed for devices in the network [2]. LoRaWAN [3] corresponds to layer 2 and is also known as the MAC (Media Access Control) layer. LoRaWAN is responsible for joining different LoRa devices and managing their connection parameters, e.g., channels, bandwidth, and data encryption.
Besides LoRaWAN, there are other LPWAN technologies on the market [4], such as Sigfox, which charges a fee per connected device per year, uses proprietary technology, and imposes strong restrictions on devices, which can only transmit up to 140 packets of 12 bytes per day; or NB-IoT, which relies on public cellular network operators. In contrast, all the elements of a LoRaWAN network can be deployed as a private end-to-end network, avoiding the use of public infrastructure, and LoRaWAN is an open specification, which allows modifications and special configurations that can be of interest for industrial applications.
LoRaWAN is a good fit for selected IIoT use cases. It supports deploying hundreds of nodes with extended range and low energy consumption at a reduced network infrastructure cost, and it works in the sub-GHz band, a different band from Wi-Fi, Bluetooth, and other popular wireless technologies, avoiding interference from those networks and offering better noise resistance. The standard includes an Adaptive Data Rate (ADR) mechanism that helps adapt to channel conditions by changing devices’ data rate (DR) transmission parameters, which reduces congestion in dense areas and can help save battery.
Nevertheless, the current trend focuses mainly on the optimization and enhancement of uplink traffic, that is, from nodes (end devices, or ED) to the gateway (GW). This derives from the fact that this type of network is mostly used for simple telemetry, although it has potential for much more. Many applications and use cases are appearing that present a chance to profit from the downlink part of the communication, that is, from the GW (or rather, the application server) to the end devices. These use cases include relaying information or data-analysis results in zero-defect manufacturing to alert personnel on the plant floor where 2.4 GHz communications are banned for security and safety reasons. In that scenario, users with end devices may have some mobility, or be spread over a large area that would otherwise require several access points and GWs. This is presented in the Zero-Defect Manufacturing Platform (ZDMP) [5] project, where devices need to receive maintenance alerts coming from a data analysis system, and the proposed LoRaWAN network is used to support data acquisition and alarm events because the company’s policies restrict wireless communications to sub-gigahertz bands.
Once the technology and potential use cases have been selected, two challenges remain, and these are covered by this article. On the one hand, deploying a high number of nodes often requires some planning to ensure the network will operate as required. For this task, simulators are handy tools that reproduce a given scenario virtually. In this work, the popular ns3 simulator was chosen, as it includes many models and libraries for LoRaWAN, some of them tested and used during this work [6]. On the other hand, the aforementioned ADR mechanism is only implemented for the uplink stage of communication, while the use cases under study aim to use optimized downlink traffic.
The article first introduces previous related work and summarizes the key aspects and issues of the technology. Subsequently, it proposes a mechanism called DROB (downlink rate optimization for class B), which is an implementation of a modified ADR for downlink. Next, the behaviour of DROB is analysed, considering not only the Packet Delivery Ratio (PDR) but also other key indicators derived from a scheduled MAC, such as discarded events in devices’ buffers and energy consumption. To reproduce these scenarios as faithfully as possible, the simulator was enhanced with the inclusion of external and localized interference sources, which also served to highlight how the proposed mechanism would respond to these situations in real life.
2. Related Work
Simulations must reproduce real-world conditions as closely as possible so that results can be extrapolated correctly. However, related studies fail to consider the special noise or interference sources that may appear in these harsh environments (machines, motors, etc.) beyond regular baseline channel noise. The study presented in [7] shows that Nakagami models can fairly model the baseline channel fading that LoRa experiences due to the environment. Another interesting case study, with practical measurements, can be found in [8]. The work to model individual interference sources was inspired by [9,10]. This article analyses the impact of this type of interference by modelling and simulating noise sources in the same scenario as the network.
A good starting point for ADR mechanisms, how they work, and interesting proposals from the scientific community is the review of ADR-related work and the state of the art in [11], where the authors present recent solutions and challenges. Among other recent relevant work, [12] presents a new adaptive strategy for a smart metering application focusing on the uplink, and [13] introduces a new MAC protocol called FCA-LoRa, based on LoRaWAN, which addresses the downlink stage but relies on multiple GWs to achieve its results. The authors of [14] analyse the possibility of optimizing performance by modifying the SF used, but instead of the ADR mechanism, the gradient projection method was chosen. Numerical results are obtained for 0 to 10,000 nodes using SF7, SF8, and SF9 and compared to theoretical limits. This case shows that packet length can be a consideration in optimizing performance, but it lacks downlink traffic analysis.
Regarding the scope of simulators, [15,16] show simulations performed with ns3 for LoRaWAN technology. The authors of [15] analyse an indoor industrial scenario with LoRaWAN class A devices. Simulations were performed with 10 to 1000 nodes within a radius of 200 m, comparing the probability of success with constant spreading factors (SFs). Battery consumption was also compared with other technologies such as IEEE 802.15.4. The authors of [16] studied LoRaWAN class B devices, distributed within a radius of 6100 m. Two simulations were performed to study downlink performance, one with a 1% and the other with a 10% duty cycle channel limitation. However, this work does not consider the impact of enabling ADR, as the DR was kept static in both uplink and downlink during the simulations.
Other simulation tool options can also be found in the literature. A LoRa-MAB simulator is analysed in [17]. The performance of LoRaWAN class A during uplink was evaluated with 100 nodes, with the SF selectable from 7 to 12, distributed in a radius of 4.5 km under a 1% duty cycle limitation. In this case, the algorithm used improved LoRaWAN performance in terms of successful packets and energy consumption, but connection times were excessively long.
In [18], the FADR algorithm is presented using the LoRaSim simulator. Simulations were performed for 100 to 4000 nodes within radii of 100 to 3200 m. This mechanism controlled the transmission power but could not completely eliminate collisions. In [19], the ADR mechanism is analysed using the LoRaWANSim simulator, with a 1% duty cycle and a radius of 670 m. The results show the network behaviour for different numbers of nodes, but this work did not analyse class B device operation.
Some initial tests and simulations with ns3 can be found in [20]; while that study addressed ADR for downlink, it lacked a realistic scenario model to validate different channel condition changes, and the ADR implementation itself could be improved by changing the key statistics that trigger its operations.
3. Materials and Methods
3.1. Technology Introduction
The LoRaWAN standard, as introduced in the previous section, can be found in [3], and information on the LoRa physical layer can be found in [1]. This section introduces the most relevant aspects of the technology so that the explanation of the proposed enhancements is understandable. LoRaWAN defines three operation modes that a node can select to transmit or receive data:
- Class A describes the original LoRaWAN mode of communication. Communication is bidirectional, with the limitation that nodes can only receive a downlink packet from the GW if they have previously sent an uplink message, which is a great limitation for some applications. However, battery savings in this class are greater than in the other classes.
- Class B is bidirectional, but in this case nodes can also receive in scheduled time slots. When operating in class B, nodes can send uplink messages following the class A mechanism (i.e., sending data at any time the duty cycle allows), but they do not need to rely on the class A RX windows for downlink, as the GW establishes a schedule, reserving slots for all class B nodes in the network. Other temporal parameters can be seen in Figure 1. This mechanism requires every node to be synchronized with the GW, which is achieved using periodic beacons broadcast by the GW every 128 s. The time window between beacons is divided into slots that can be assigned to nodes, enabling downlink messages without the need for a previous uplink packet.
- Class C keeps the radio interface permanently active and allows bidirectional communication so long as the duty cycle is respected, at the cost of consuming more energy than any other operation mode.
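As a concrete illustration of the class B timing described above, the following minimal Python sketch computes a node's ping-slot times within one beacon period. The constants follow the usual class B values (128 s beacon period, 2.120 s reserved time, 4096 slots of 30 ms); the pseudo-random slot offset, which the specification derives from an AES computation, is simply passed in here for clarity.

```python
# Sketch of LoRaWAN class B ping-slot timing. Constants are the usual
# class B values; the slot offset is normally derived from AES, but it
# is passed in directly in this simplified illustration.
BEACON_PERIOD = 128.0    # s, time between beacons
BEACON_RESERVED = 2.120  # s, reserved after each beacon
SLOT_LEN = 0.030         # s, duration of one ping slot
NB_SLOTS = 4096          # ping slots per beacon period

def ping_slot_times(ping_nb, ping_offset):
    """Times (s, relative to the beacon) of a node's ping slots.

    ping_nb: pings per beacon period (1, 2, 4, ..., 128)
    ping_offset: pseudo-random offset in slots
    """
    ping_period = NB_SLOTS // ping_nb  # slots between consecutive pings
    return [BEACON_RESERVED + (ping_offset + n * ping_period) * SLOT_LEN
            for n in range(ping_nb)]

# e.g. a node with one ping per beacon period at offset 100 listens
# roughly 5.12 s after each beacon
slots = ping_slot_times(1, 100)
```

With more pings per period the slots spread evenly across the beacon window, which is why the number of distinct slots, and hence of schedulable nodes, is finite.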
This work focuses on downlink traffic, which needs to be decoupled from the uplink stage. This means the GW does not have to wait until a message from a node is received to send a message to that node, as occurs in class A. Furthermore, class C drains device batteries faster and is more affected by collisions than class B, which introduces scheduling to reduce this problem. Therefore, class B is the selected mode of operation, because it fits the application requirements better than the others. Nevertheless, class B also introduces challenges that need to be considered: scalability and performance. Scheduled channel access means nodes have defined slots, and therefore chances of receiving data, which prevents collisions with other nodes, but it also means there is a limited number of available slots, so when the number of nodes grows, this schedule can become a problem. There are several events that can hinder the operation of a class B node:
- The selected pingslot has already been assigned to another node, which happens when the number of nodes is higher than the number of available pingslots (situation 1 in Figure 2).
- The assigned pingslot is already in use, because the previous downlink transmission is still active (situation 2 in Figure 2).
- The assigned pingslot is too close to the previously active one (even if that transmission has finished) and cannot be used due to duty cycle regulations (situation 3 in Figure 2).
Figure 2 above shows a temporal representation of a class B beacon period in a network where N possible nodes are receiving from the GW. Nodes 1, i, j, and N represent any four nodes of the network. Node 1 receives successfully, while node i does not receive because it has the same assigned timeslot as node 1. The packet for node j is also discarded, as it occurs while the transmission for node 1 is still in progress. Lastly, after sending the packet to node 1, the GW must respect the duty cycle, which means any outgoing packet for another node, such as node N, is also discarded. Beacons can also be lost due to the duty cycle: even with beacon guards and reserved times, if the last active pingslot is too close to the beacon's sending time, the duty cycle limitation can cancel the transmission. Losing beacons is a problem because nodes can desynchronize from the network (thus returning to class A operation), and it also affects the pingslot selection algorithm of each node.
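The three discard situations above can be condensed into the checks a gateway would apply before using a node's pingslot. The following Python sketch is purely illustrative: all names and the simplified duty-cycle model are ours, not part of the LoRaWAN specification.

```python
def can_transmit(slot_time, node, assigned, tx_end_prev, dc_free_at):
    """Check whether the GW may use this pingslot for `node`.

    assigned:    dict mapping slot_time -> node that claimed the slot first
    tx_end_prev: time at which the previous downlink transmission ends
    dc_free_at:  earliest transmit time allowed by the duty-cycle rule
    """
    if assigned.get(slot_time, node) != node:
        return False, "slot assigned to another node"       # situation 1
    if slot_time < tx_end_prev:
        return False, "previous downlink still on the air"  # situation 2
    if slot_time < dc_free_at:
        return False, "blocked by duty-cycle silence time"  # situation 3
    return True, "ok"
```

Each failed check corresponds to one discarded packet in the GW buffer, which is the metric analysed later in Section 4.3.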
3.2. DROB: Adaptive Downlink Rate Optimization for Class B
The LoRaWAN specification features the Adaptive Data Rate (ADR), a mechanism that allows resources to be saved while improving lower quality communications. This mechanism optimizes airtime and energy consumption in end devices, although the decision of whether to activate it remains with the devices.
Put simply, when the mechanism is activated, devices start a transmission counter; if they do not receive acknowledgements, the counter is increased until the device chooses either to increase the transmission power, if possible, or, if the power is already at the maximum allowed, to decrease the data rate (DR), which is equivalent to increasing the spreading factor (SF). On the gateway side, the GW registers the SNR (signal-to-noise ratio) of the packets received from each device. After receiving sufficient samples (typically 20, although this can be altered), the SNR margin is computed by comparing the maximum registered SNR against the SNR required to receive packets at each data rate, plus a predefined margin to ensure reliable operation. With these values, the GW can inform each device, via a MAC command, whether it can increase its data rate. The flow chart in Figure 4 summarizes how this mechanism operates.
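For illustration, the gateway-side computation just described can be sketched as follows. This is a simplified version of the common reference ADR algorithm, not the article's exact implementation: the per-DR SNR limits are the usual demodulation floors for 125 kHz LoRa, `margin_db` is the configurable installation margin, and the power-reduction step that follows once the maximum DR is reached is omitted.

```python
# Simplified gateway-side ADR decision. SNR_REQ holds the usual
# demodulation-floor values (dB) per data rate; margin_db is the
# configurable installation margin. Illustrative only.
SNR_REQ = {0: -20.0, 1: -17.5, 2: -15.0, 3: -12.5, 4: -10.0, 5: -7.5}

def adr_decision(snr_samples, current_dr, margin_db=10.0, max_dr=5):
    """Return the DR the gateway would command after enough SNR samples."""
    snr_max = max(snr_samples)
    margin = snr_max - SNR_REQ[current_dr] - margin_db
    steps = int(margin // 3)          # one step per 3 dB of spare margin
    dr = current_dr
    while steps > 0 and dr < max_dr:  # spend the steps raising the DR
        dr += 1
        steps -= 1
    return dr
```

A device reporting a best SNR well above the floor for its current DR is thus moved to a faster (higher) DR, shortening its airtime.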
The proposal for downlink traffic follows the same logic but considers some key points, introduced as part of previous work in [14]: the minimum DR the GW can use is set to DR3, because lower DRs lengthen airtimes beyond the strict pingslot timing that enables synchronised and scheduled transmissions, which would mean changing timing parameters and might prevent communication. Additionally, power increments before changing the DR are not considered, as the GW is not limited by batteries and always operates at the maximum allowed power, since it must reach as far as possible and to multiple destinations. This is different in a node, which only needs to adjust its power to reach the GW and usually operates under battery limitations.
Considering these assumptions, the GW computes the same statistics for each node, and when the pingslot assigned to a given node is reached, the GW uses the DR it has calculated to send the downlink packet. For this to work, nodes need to send information to the GW during uplink, as the received SNR and pingslot acknowledgements are required to perform the ADR operation. This information can be sent as piggybacked data in regular uplink messages or in downlink acknowledgements, whichever happens first. The GW also tracks the packet loss ratio (PLR) for each node, that is, the ratio of lost downlink packets, updating this value periodically via a sliding window algorithm; whenever the PLR value for a given node exceeds a limit, called PLR_max, selected according to the requirements of the application, the GW informs the node that the following packets will be sent at a decreased DR.
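A minimal sketch of this per-node bookkeeping could look as follows. The class name and structure are ours and purely illustrative; the window size (the C_x parameter discussed in Section 4.2) and PLR_max are the tunable parameters, and the symmetric DR-increase path is omitted for brevity.

```python
from collections import deque

class DrobNodeState:
    """Per-node sliding window of downlink outcomes, as DROB tracks them."""

    def __init__(self, window=20, plr_max=0.25, dr=5, min_dr=3):
        self.acks = deque(maxlen=window)  # True = delivered, False = lost
        self.plr_max = plr_max
        self.dr = dr
        self.min_dr = min_dr

    def report(self, delivered):
        """Record one downlink outcome; return the DR to use next."""
        self.acks.append(delivered)
        if len(self.acks) < self.acks.maxlen:
            return self.dr                 # not enough samples yet
        plr = 1 - sum(self.acks) / len(self.acks)
        if plr > self.plr_max and self.dr > self.min_dr:
            self.dr -= 1                   # slow down: more robust DR
            self.acks.clear()              # restart stats after the change
        return self.dr
```

Clearing the window after a DR change keeps the statistics tied to the current configuration, which is one way to avoid triggering further changes on stale samples.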
Figure 4 shows the flow chart describing the operation of the proposed DROB mechanism.
This implementation differs from the decreasing method of a traditional ADR mechanism, but it allows a faster reaction to changes in the channel, adapting the DR to the changing conditions of the environment while ensuring stability, without triggering too many unneeded changes.
3.3. Interference in Industrial Scenarios
Interference occurs due to adverse EM interactions that may disturb connectivity between devices and their respective GW. In LoRa, two types of interference sources can be found: other LoRa signals, when devices use exactly the same set of transmission parameters (DR, BW, channel); and non-LoRa signals. While the former is already considered in the ns3 simulator as a possible event during normal operation, external sources sharing the same frequency band and coexisting in the scenario were not implemented directly, only as background channel noise.
Channel noise is often modelled as additive white Gaussian noise (AWGN), a basic noise model used to reproduce the effect of random processes that occur in nature. The term white derives from the idea that the noise has uniform power across the system's frequency band (just as the colour white has uniform emissions at all frequencies of the visible spectrum), and Gaussian from the fact that the noise signal's PDF (probability density function) follows a Gaussian distribution.
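As a minimal numerical illustration of this definition (not taken from the article), AWGN with a given noise power can be generated from zero-mean Gaussian samples whose variance equals that power:

```python
import random

def add_awgn(signal, noise_power):
    """Add zero-mean white Gaussian noise to a list of samples.

    For a zero-mean process, power equals variance, so the standard
    deviation of the noise samples is sqrt(noise_power).
    """
    sigma = noise_power ** 0.5
    return [s + random.gauss(0.0, sigma) for s in signal]

# e.g. a constant unit signal corrupted by -20 dB of noise (power 0.01)
noisy = add_awgn([1.0] * 1000, noise_power=0.01)
```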
However, in real life, and especially in industrial scenarios, interference sources can appear due to the characteristics of machinery or electronic devices present on the plant floor, adding even more disturbing components to the received signal (as seen in Figure 5). Motors, conveyor belts, electricity generators, and other elements can produce high power RF signals that hinder communications and can saturate the channel. Therefore, for a LoRaWAN receiver listening for significant data (checking whether the bytes received represent a preamble, in order to keep listening for the full packet), such an interference signal can partially or completely mask the real network packet transmitted by a LoRaWAN source (either GW or node). Even with the protection mechanisms LoRaWAN provides, if the interference signal is too strong, the expected signal cannot be recovered, leading to packet losses.
For this work, these interference sources were added to the ns3 simulator modules by representing them as devices that can transmit “packets” of configurable length, power, and periodicity, but without information that the network devices can interpret as LoRaWAN data. This means that these sources are not limited by duty cycles, maximum allowed power, or packet size limitations. Furthermore, interference sources can be physically located at specific points of the scenario, as they would be in real life, so the location and power of their signal determine their impact on the operation of the deployed network.
Figure 6 shows the temporal evolution of the interference sources’ output signal, with a fixed power of 28 dBm and a duration of 120 s (t_on), while the periodicity is randomized between 220 and 600 s (t_off). This signal represents a strong, repetitive interference source meant to test saturation of the simulated network, in order to validate the implementation of the interference sources. With this signal configuration, the network is, as expected, unable to operate, because the power and duration of the interference prevent receivers from recovering valid packets.
Once validated, the interferers are configured with signals that, according to the consulted sources, can be expected to appear in practice, with t_on in the order of hundreds of milliseconds and with lower power levels. The choice of interference signals that represent real sources assumes that these sources appear during shorter periods of time, often matching plant-floor operation events such as personnel shifts, for example two or three times per day. During these periods of heavier noise due to interference, the signal has a t_on duration of 150 ms, which is within the range of measured values found in the related work introduced in previous sections. The signal also has a higher periodicity during these periods, chosen as a random distribution between t_on and the time required to send N packets of minimum size (so that the DROB mechanism can gather sufficient statistics to operate), considering the duty cycle regulations.
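The burst timing described above can be sketched as follows. This is a simplified stand-in for the ns3 module, with the function name and the uniform t_off distribution as our assumptions; the parameter values shown are those of the saturation test.

```python
import random

def interference_schedule(t_on, t_off_min, t_off_max, horizon):
    """Return (start, end) intervals during which the interferer transmits.

    Bursts of fixed duration t_on are separated by a randomized off time
    drawn uniformly from [t_off_min, t_off_max], up to the time horizon.
    """
    t, bursts = 0.0, []
    while t < horizon:
        bursts.append((t, t + t_on))
        t += t_on + random.uniform(t_off_min, t_off_max)
    return bursts

# saturation configuration from the text: 120 s bursts, 220-600 s gaps
bursts = interference_schedule(120.0, 220.0, 600.0, horizon=24 * 3600)
```

The realistic configuration would use the same generator with t_on = 0.150 s and gaps derived from the packet airtimes and duty cycle, as described above.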
Figure 7 shows the described distribution of high frequency noise periods with a close-up view of 60 s inside one of those periods.
3.4. Scenario Configuration
The simulations were configured to reproduce the characteristics of an industrial plant over a period of 24 h. The dimensions of 2000 m × 500 m correspond to four large, connected warehouses or rooms forming an industrial plant, with a height of 10 m. This scenario is based on an automotive production plant that is being used for related projects. The devices are deployed in a rectangular grid with the GW at the bottom-left corner (at the x = 0, y = 0 coordinate). Four additional elements represent the interference sources, aligned vertically and horizontally, one in each room. This distribution can be seen in Figure 8.
In this scenario, the four interference sources, one per room, introduce high-frequency noise during given periods of time, matching worker and process-line turns and shifts. This behaviour was modelled as three 1 h periods of high frequency interference, with the sources introducing more frequent signals, as described in Section 3.3. The rest of the time (21 h), the interference sources introduce spurious signals at a lower frequency. This test configuration (the combination of high and low frequency noise periods) will be called the “real scenario” from now on, as opposed to the saturated scenario of Figure 6, which was used to validate the impact of the interference sources.
The chosen payload size is 100 bytes (13 bytes of LoRaWAN protocol headers + 87 bytes of data), which is enough to represent a simple binary message with a short text instruction or measurements with a timestamp. The preamble length is eight symbols, the default indicated in the standard. For this configuration, the corresponding airtimes and duty cycle restrictions are shown in Table 1.
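Airtimes like those in Table 1 can be reproduced with the usual LoRa time-on-air formula. The sketch below follows the Semtech formula; it assumes the EU868 mappings DR5 = SF7 and DR3 = SF9 at 125 kHz, coding rate 4/5, explicit header, CRC enabled, and no low-data-rate optimization, which may differ from the exact settings used in the simulations.

```python
import math

def lora_airtime(payload_bytes, sf, bw=125e3, cr=1, preamble=8,
                 explicit_header=True, ldro=False):
    """LoRa time on air (s) per the Semtech formula, uplink CRC assumed on.

    cr: coding-rate index, 1 -> 4/5 ... 4 -> 4/8
    """
    t_sym = (2 ** sf) / bw                       # symbol duration
    t_preamble = (preamble + 4.25) * t_sym       # preamble duration
    h = 0 if explicit_header else 1
    de = 1 if ldro else 0
    n_payload = 8 + max(
        math.ceil((8 * payload_bytes - 4 * sf + 28 + 16 - 20 * h)
                  / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return t_preamble + n_payload * t_sym

# 100-byte frame (13 B of headers + 87 B of data) as in the simulations
t_dr5 = lora_airtime(100, sf=7)  # DR5, roughly 0.17 s
t_dr3 = lora_airtime(100, sf=9)  # DR3, roughly 0.55 s
```

Under a 1% duty cycle, each airtime implies a silence period of 99 times the time on air on the same channel, which is the restriction the GW must respect between downlink packets.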
The rest of the configuration parameters involved in network operation, including the interference signal configuration, can be found in Table 2.
4. Results
4.1. Effect of Interference Sources on Network PDR
The first step in validating the interference sources is to check their effect on the interference power received by each node, as the sources are located throughout the scenario. Figure 9 shows the maximum power received from interference at each node's receiver during the 24 h simulation, with the implemented interference sources activated or deactivated (the saturation and real scenario tests). The positions of the nodes in the scenario can be derived from Figure 8. The mean values in dBm during the simulation are not shown, as they differ little between the configuration with realistic interferers (short pulses with the same power as the network's signals) and an interference-less scenario: the contribution of the four external interference sources is diluted by that of the other 199 nodes around the map (that is, intra-network interference). As expected, introducing interference sources that saturate the channel, with a higher power level and long duration, raises the average above the realistic and noiseless cases.
Focusing on the maximum values of received interference power, the configurations with external interference activated inject more power than the noiseless network, especially for the furthest nodes, where the noise contributions add up along the way. In the saturated scenario, peaks are distributed around nodes closer to the interference sources and become higher for nodes in the middle rooms, as these are near more interference sources than nodes on the outermost sides of the plant floor and thus have more chances to overhear long interference signals. There are two additional considerations of significance for interpreting these results. First, in terms of received power, the configured DR parameter does not have an impact on the results, because the DR only affects the time on air and the SNR margin needed to recover data; the effect of this parameter is therefore seen in the number of lost packets, not in the raw value of received power. Second, and relatedly, the raw noise power is not directly related to packet loss, as packet loss also depends on the power of the received information packet and on the SNR margin that enables the node to recover this information.
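The remark that noise contributions "add up along the way" refers to summing powers in linear units rather than in dB; a small helper (illustrative, not from the simulator) makes the point:

```python
import math

def dbm_to_mw(p_dbm):
    """Convert a power in dBm to milliwatts."""
    return 10 ** (p_dbm / 10)

def sum_powers_dbm(powers_dbm):
    """Combine several power contributions given in dBm.

    Powers add in linear units (mW), so the contributions are converted,
    summed, and converted back to dBm.
    """
    return 10 * math.log10(sum(dbm_to_mw(p) for p in powers_dbm))

# two equal -100 dBm contributions combine to about -97 dBm (+3 dB)
combined = sum_powers_dbm([-100.0, -100.0])
```

This is why a node overheard by several interferers at once registers a noticeably higher maximum interference power than one in range of a single source.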
Therefore, the real effect of these external interference sources can be seen by analysing the number of packets lost to interference during the simulation. Figure 10 shows the results of the 24 h network simulations. Configurations without external interference sources show little or no packet loss during downlink, which is consistent with theoretical results for class B operation in LoRaWAN; this case is not shown in the figure, as 0% of packets were destroyed. However, when the interference sources were activated, packet losses increased across all nodes in the network, and were higher for nodes near the interference sources. In the saturated scenario, almost all packets were affected by interference and losses were very high for every configuration, so these results are not shown in the figure either. Regarding the real scenario, it is notable that the DR3 configuration, which is supposed to be more robust to noise than DR5 (its SNR margin is greater and its packets' on-air time is longer, so the receiver has more time to recover the packet), showed a higher percentage of destroyed packets. This is due to the longer time on air a packet needs to be received at DR3. DR3 usually fares better against LoRaWAN intra-network interference, as other nodes' packets take a shorter or equal time to be transmitted and the power received from the interferer stays within the SNR margin; however, when external sources that transmit during much longer periods and with higher power are included, spending more time on air only gives the interferer a greater chance to cause a packet loss. This count of destroyed packets refers only to downlink traffic, which, as described, is scheduled, so in theory the only sources that can cause interference are the external ones introduced in this scenario. Additionally, DROB achieved values between those of DR3 and DR5. These results were expected after 24 h, as the nodes spent time configured among DR3, DR4, and DR5.
4.2. Effect of the DROB Mechanism on Packet Delivery Ratio
Another way to check the real effect of interference signals on the network, while observing the impact of the proposed DROB mechanism, is the packet delivery ratio (PDR) of the downlink traffic. This represents the success rate in receiving packets; therefore, with higher interference, a lower PDR is expected. It is worth noting that the impact of the interference can differ depending not only on the interference sources but also on the configuration of the nodes and GW, which is where the DROB mechanism tries to optimize resources according to the environment variables.
Given that the DROB mechanism is tuneable and the number of samples used to perform its operations is configurable (the notation C_x identifies the number of samples, i.e., the size of the window used), the simulation was first performed varying this parameter in order to select the optimal value and check its impact; the resulting candidate was then tested against the fixed DR configurations. Values of 20, 10, and 5 samples were used. Values above 20 made the mechanism slow to react, as it waited to receive sufficient samples; conversely, lower values meant reacting more quickly but could introduce too much variability and cause instability in the network, as the configuration changed too often, hence the lower limit of five. Table 3 shows that this parameter does not significantly affect the results. In any case, C_20 was selected, as it showed the best overall results and kept the sampling at 20, following the recommendations for the standard's ADR. This configuration will be referred to simply as DROB.
Once the DROB configuration was selected, Figure 11 shows how the network responded in terms of PDR. These simulations kept the DR fixed at DR3 and DR5, which are the limits for downlink in class B operation (due to restrictions on pingslot duration), and then activated DROB, which dynamically selected the DR used for each downlink packet sent.
These results were obtained after 24 h of simulation and show that the proposed DROB mechanism achieved results close to the DR5 configuration, while keeping more nodes above 75% PDR. If the analysis focuses on the 1 h periods of high frequency interference, the results for the DROB mechanism show an improvement over the other configurations, as it adapted to channel conditions on the nodes that suffered packet loss while keeping the DR as high as possible on the rest. By choosing the best DR configuration for each node as channel conditions changed over time, DROB achieved, during the periods of more frequent noise, better PDR results than both the fastest (DR5) and the most robust (DR3) configurations. It was also able to send more packets than DR3 and to keep more nodes above 75% PDR than the other configurations.
Table 4 shows the overall and partial results for the whole network. Additionally, Figure 12 shows the average PDR per room (50 nodes each), where it can be seen that more distant nodes suffered more noise and interference and therefore benefited more from the DROB mechanism.
Furthermore, the DR configuration changes for each node are summarized in Figure 13, which shows the number of DR increases and decreases (both representing a change in DR) for each node during the 24 h simulation. The plot of DR changes during a 1 h interval with high frequency interference shows that the DROB mechanism reduced the DR during that time to adapt to channel conditions: it is during those periods that the mechanism triggered DR-down operations, while in other periods, when possible, it triggered DR-up operations.
4.3. Effect of DROB on the Class B Gateway Buffer
When analysing the packets sent by the GW, there was a clear difference depending on the DR used, caused by the longer or shorter airtime needed to transmit a packet. The impact of airtime was greater in the downlink stage, as the GW needed to transmit to many nodes in short periods of time and therefore ran into strong duty cycle limitations. Other events can also occur in the output buffer of the GW when operating in class B, as explained in previous sections: finding the scheduled pingslot still busy (again, due to longer airtimes), or even two nodes having the same slot assigned (which can happen when the number of nodes exceeds the available slots). In that regard, as can be seen in
Figure 14, the configuration that achieved the best results was DR5, which requires the shortest airtime, while the proposed DROB remained in an intermediate position, as the GW used the entire range of available DRs during the simulation. Focusing on raw numbers, it may seem that keeping the DR fixed at DR5 is the best option. Nevertheless, the number of nodes (relevant to scalability), the distance between nodes and the GW, the distance between nodes (density), and application requirements can greatly impact network operation. The proposed mechanism can be further refined to adapt differently to channel changes, but the results for this scenario show that it works under a set of parameters chosen to stress the system and trigger DR changes more often, where its benefits can be even higher. The main goal of this study was to provide a tool that enables some degree of adaptability to varying network conditions without a priori knowledge and pre-planning.
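The airtime gap between DRs and its duty-cycle consequence for the GW can be illustrated with the standard LoRa time-on-air formula from the Semtech SX127x datasheet. The mapping DR3 = SF9 and DR5 = SF7 at 125 kHz follows the EU868 regional parameters; the 20-byte payload and 1% duty cycle are assumed example values, not figures from the simulations.

```python
import math

def lora_airtime_s(sf, payload_bytes, bw_hz=125_000, cr=1, preamble=8,
                   crc=True, explicit_header=True):
    """Time-on-air of one LoRa packet (Semtech SX127x datasheet formula)."""
    t_sym = (2 ** sf) / bw_hz
    de = 1 if (sf >= 11 and bw_hz == 125_000) else 0  # low-data-rate optimization
    ih = 0 if explicit_header else 1
    num = 8 * payload_bytes - 4 * sf + 28 + 16 * int(crc) - 20 * ih
    n_payload = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble + 4.25) * t_sym + n_payload * t_sym

# EU868 downlink DRs used in the simulations: DR3 = SF9, DR5 = SF7 (125 kHz).
toa_dr3 = lora_airtime_s(9, 20)   # ~0.185 s
toa_dr5 = lora_airtime_s(7, 20)   # ~0.057 s

# Under a 1% duty cycle, the maximum packets/hour the GW may send on a sub-band:
def max_per_hour(toa):
    return int(3600 * 0.01 / toa)
```

With these assumptions the GW can send roughly three times as many downlink packets per hour at DR5 as at DR3, which is why longer airtimes translate into busy pingslots and buffer discards.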
4.4. Effect of DROB on Energy Usage
Changing the DR used during communication is one of the parameters that most impacts energy consumption in a LoRaWAN network, because it translates directly into the time that the node’s radio interface must be enabled (either for transmission or for reception). For this analysis only the downlink stage was considered, and the focus was on the nodes, as the GW was assumed to be mains-powered. When nodes receive packets, the overhead power of class B operation is a function of:
- The time-on-air of the beacon, which is region-specific (SF9/125 kHz);
- The periodicity of the ping slots (one ping slot per beacon period in this case);
- The ping slot DR, which allows the calculation of the time-on-air of the packet;
- Whether a packet is being sent during the assigned pingslot or not.
Leaving aside the beacons, which are constant for all nodes, the energy ERX consumed by each device in the network during class B reception can be calculated with Equation (1):
ERX = IRX · V · (Σi tairtimeDRi · NDRi + tpreamble · Nemptyslots)		(1)
where:
- IRX is the current consumption, and V is the voltage;
- tairtimeDRi is the time-on-air of the packet for the DR used;
- NDRi is the number of packets received with that DR;
- tpreamble is the time the radio is on listening to detect whether there is an incoming packet;
- Nemptyslots is the number of ping slots in which the node did not detect incoming packets.
As expected,
Figure 15 shows how DR5, due to the reduced airtime required, achieved the best results even while sending more messages, with DROB balanced between the two fixed configurations. As explained before, this also means that if channel or scenario conditions degraded to the point that some nodes could no longer receive at DR5, DROB would reduce the DR only as much as needed, keeping it as high as possible and therefore consuming less power than a fixed DR3 configuration while offering the same level of robustness.
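The receive-energy model of Equation (1) can be sketched directly in code. The radio figures (IRX = 11 mA, V = 3.3 V), the airtimes, the packet counts, and the preamble-listen time below are illustrative assumptions for a 24 h run, not the values used to produce Figure 15.

```python
def class_b_rx_energy_j(i_rx_a, v, airtime_per_dr_s, n_packets_per_dr,
                        t_preamble_s, n_empty_slots):
    """E_RX = I_RX * V * (sum_i t_airtime(DR_i) * N_DR_i + t_preamble * N_emptyslots)."""
    t_rx = sum(airtime_per_dr_s[dr] * n for dr, n in n_packets_per_dr.items())
    t_rx += t_preamble_s * n_empty_slots  # slots opened but no packet detected
    return i_rx_a * v * t_rx

# Illustrative example with assumed radio figures (I_RX = 11 mA, V = 3.3 V).
airtimes = {3: 0.185, 5: 0.057}   # s; SF9 and SF7 at 125 kHz, ~20 B payload
received = {3: 100, 5: 400}       # packets received per DR over the run
e = class_b_rx_energy_j(0.011, 3.3, airtimes, received, 0.013, 175)
```

Because the DR3 airtime is roughly three times the DR5 airtime, receiving the same number of packets at DR3 costs roughly three times the energy, which is why a mechanism that keeps the DR as high as the channel allows also minimizes receive energy.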
5. Discussion and Conclusions
The implementation of interference sources and the proposed DROB mechanism lead to relevant conclusions. On the one hand, simulations often fail to capture relevant effects that are only reproducible in real-life testbeds. Even when using channel and noise models, real applications can introduce unexpected elements that are hard to account for during simulation. Therefore, beyond improving the simulator with these external interference sources, the mechanism provided is of interest because it addresses cases where channel conditions change frequently and greatly impact network operation.
From a superficial analysis of the results, two choices can seem optimal: selecting a fixed DR5, which allows sending more packets and uses less energy, or selecting DR3 to ensure the best PDR results, at the cost of sending fewer messages due to duty cycle restrictions. It should be noted that the tested configuration was selected to be very challenging for class B operation, as the GW tried to send one packet per node per beacon cycle (128 s), which caused the large number of buffer discard events. In a real application, the GW would probably need to send much less frequently, greatly reducing the impact on buffer events. In any case, it is a good test of situations such as bursty traffic, when the GW needs to send large quantities of data (OTA firmware updates, configurations, etc.).
The main goal of the proposed mechanism was to adapt to variable conditions, which implies working at the highest DR when possible but using lower DRs when the channel presented too much interference. The results achieved are therefore expected, with the performance of DROB falling between the DR5 and DR3 limits. The results during the periods of high-frequency interference and noise are promising. This also implies that DROB can be expected to provide larger gains in other scenarios, for instance, in more densely populated or larger areas and distances (note that, for this initial analysis with total control of environment variables, the proposed scenario was uniformly distributed). Since performance depends on the conditions of the scenario and the network deployed, it is key that DROB achieves stable, adaptive results and can be used in a wide range of scenarios. By using this type of reactive and autonomous mechanism, the need for deployment pre-planning tools is reduced.
Therefore, the next steps for refining this mechanism include applying it to a wider range of conditions and scenarios, taking into consideration scalability and application requirements for reducing GW buffer events, and optimizing the duty cycle and scheduling.