1. Introduction
LoRa is one of the low-power wide-area network (LPWAN) technologies. The characteristics of LPWAN include long-range communication, short packets, low power, and low bandwidth [1]. As a result, LoRa can enable energy-efficient connectivity for many widely dispersed end-point IoT devices. In many IoT applications, such as environmental monitoring and smart cities, the LoRa gateway acts as a bridge that relays messages from end devices to the cloud processing center. As the Internet of Things has grown, so has the number of applications that require real-time, low-latency responses. Due to the huge amount of data generated by IoT end applications, effectively reducing the latency of latency-sensitive IoT applications is currently a major challenge for LoRa gateways. In addition, LoRa gateways provide data transfer services between end-point IoT devices and cloud computing; this information is subsequently analyzed and stored in the cloud. Consequently, the LoRa gateway’s computation and storage capabilities are underutilized, and the system is exposed to potential security risks. The challenge is therefore to enhance both the efficiency and the security of the LoRa system.
With the development of IoT and communication technology, more and more sensor devices are connected to the IoT in various ways, and a large amount of data is generated as a result. This massive amount of data causes several issues, including excessive latency, and necessitates larger bandwidth. Transmitting data between cloud centers and IoT devices results in significant delays. Edge computing can deliver services at the network’s edge to minimize latency and provide edge data processing power and security [2]. Edge computing became popular as a result of the installation of servers at the edge [3]. Edge computing employs a decentralized strategy that allows application processing and task execution to occur locally, which reduces network traffic. In edge computing, access to resources is provided at the network’s edge [4], close to end devices, which has a significant influence on bandwidth and minimizes communication latency [5]. With the development of edge computing technology, part of the processing traditionally performed in the cloud can be migrated to the LoRa gateway. This decentralized strategy may also help prevent the falsification of data when applied to the LoRa gateway. Consequently, an EC-assisted LoRa gateway is anticipated to enhance both computing performance and security.
Job scheduling in the LoRa gateway using edge computing is an important research topic. A lot of research has focused on the job scheduling of edge computing resources in recent years. Nonetheless, the majority of past efforts have encountered two significant limitations, namely a lack of practicability and a lack of generalizability. Existing simulation-based systems, for instance, are inapplicable to the majority of real-world circumstances. Most present solutions are appropriate for a limited range of IoT applications but cannot be generalized to compute-intensive and time-sensitive applications. Moreover, several algorithms rely on sophisticated offloading procedures that demand a considerable amount of decision time, rendering these schemes impractical. The main research contributions of this paper are as follows.
First, we built a LoRa gateway with edge computing on the Zynq SoC platform, which is called the EC-assisted LoRa gateway.
Then, based on the characteristics of the LoRa network, we used a delay-aware algorithm with high-availability edge computing to monitor and handle edge node failures in a distributed manner. On the one hand, we want a versatile gateway that can run locally or together with existing network server systems. On the other hand, in order to build resource-concentrated and latency-aware IoT applications, we developed a decentralized method for grouping edge nodes in a distributed pattern.
In addition, in order to improve the network reliability of the IoT in the distributed deployment mode, we constructed distributed edge node groups for the end nodes in the network and performed maintenance operations such as detection, repair, and replacement on each node in a group.
Finally, in order to enable the new LAA algorithm to run on the LoRa gateway, we modified the ARM-based embedded Linux operating system in the LoRa gateway to generate an image.ub that is compatible with third-party libraries and plugins. The experimental results indicated that the EC-assisted LoRa gateway can achieve the same performance while using fewer resources than a traditional LoRa gateway.
The rest of the paper is organized as follows. Section 2 discusses related work, and Section 3 describes the design of the EC-assisted LoRa gateway. Then, the experiments and their results are elaborated on in Section 4. Section 5 concludes this article.
2. Related Work
For the growth of IoT services, efficiency and security challenges become crucial. A total of 75.44 billion devices will be linked globally by 2025 as a result of the expansion of 5G and the fast development of the IoT, generating vast quantities of data [6]. The fast growth of IoT devices, as well as the massive data traffic generated at the network’s edge, has imposed new strains on the current centralized cloud computing architecture due to bandwidth and resource constraints. As a possible option, edge computing has garnered considerable interest. Furthermore, edge computing is closer to the edge side of the IoT than cloud computing, can provide computing services close to the end devices, and can therefore provide data pre-processing and part of the machine learning model training services [7]. Incorporating edge computing into the LoRa gateway remains conceptually and technically challenging owing to the diverse needs and limited resources of IoT devices.
2.1. LoRa Gateway and System
As described in Section 1, LoRa communication exhibits distinctive characteristics, such as simple and cost-effective networking, robust interference resistance, and unparalleled advantages in large-scale low-power networking and IoT services in remote areas, where cellular base stations are inaccessible. The fundamental structure of a LoRa communication system is illustrated in Figure 1, where end devices transmit data packets to gateways, which in turn route them to network servers via either cellular networks or Ethernet. The focus of this paper is investigating how to leverage distributed computing to maximize the gateway capabilities at the edge for data processing, and thereby alleviate the burden on cloud computing centers. The hardware design of both the terminal devices and the LoRa gateway can be customized to accommodate different types of sensor interfaces and service requirements.
At the LoRa communication level, several implementations of LoRa systems were presented in [8], including XisLoRa, ChirpStack, and The Things Network (TTN). The communication architecture design proposed in [8], which targeted different applications for uplink and downlink, provided inspiration for the design approach in this article.
In the field of distributed computing, Danish et al. [9] and Lin et al. [10] both proposed applying blockchain technology to LoRa networks. However, their approaches deploy the blockchain in the cloud, which does not reduce the load on the cloud but rather increases its processing workload, and the LoRa gateway’s edge computing capabilities remain unused. Liu et al. [11] proposed a LoRa system based on edge computing which migrates some of the access control command instructions from the cloud server to the LoRa gateway, thus effectively relieving the burden of cloud processing. Ozyilmaz and Yurdakul [12] proposed a LoRa system based on blockchain technology in which devices manage their data according to their computing and storage capacities: high-capability LoRa gateways can download the whole blockchain, whereas low-capability LoRa gateways can only obtain block headers.
Regarding the distributed applications of LoRa, most of the existing literature (e.g., [13,14]) focuses on utilizing deep learning techniques for processing the collected data on LoRa gateways and end devices, with applications in areas such as fall detection and agriculture. However, there is relatively little research on massive uplink end device access and multi-task downlink publication to end devices using LoRa. LoRa gateways typically only provide data relay services between end devices and cloud services, with limited utilization of resources beyond their communication functions. To use edge computing in LoRa gateways, the system architecture of LoRa gateways must be refactored, and the computing resources must be evaluated.
2.2. Edge Computing Network
To support creative IoT applications for edge devices and allow the success of the EC-assisted IoT paradigm, the academic community and industry have suggested a vast array of EC designs and technologies. In this category are cloudlets (small servers), vehicular (or portable) EC (VEC), and edge clouds. These technologies primarily aid in the deployment of applications in challenging environments with rapid temporal variation. In addition, there are mobile edge computing (MEC) and mobile cloud computing (MCC) technologies that enable extensive computing applications to be realized for local IoT smart devices by offloading a significant share of the computation from the devices to nearby edge or cloud servers.
Due to the diverse nature of the resources accessible to edge nodes, IoT applications confront several obstacles, including the need for flexibility, low latency, high bandwidth, error-handling capabilities, and capacity. Computing at the network’s edge offers adaptive resources that allow distributed computing and safeguard data from errors typical of a centralized system. Some research, such as on cloudlets [15], femtoclouds [16], and edge computing, has emphasized the incorporation of mobile device resources. When a device detects edge support, it transfers the majority of its processing to a cloudlet rather than building task-specific components [17]. The concept of cluster computing in femtoclouds necessitates centralized management by expert controllers [18]. By distributing specialized servers to satisfy end users’ needs in a particular place, edge computing makes this feasible. The mechanisms for clustering-based methods are detailed in [19,20,21].
The previously mentioned studies have tended toward a centralized strategy for the structure and administration of different resources, such as operating systems and applications [22]. To meet task deadlines, a decentralized method is necessary. This decentralized strategy delivers resources at the network’s edge [23].
When an IoT application is operating on a collection of edge networks, it is crucial for the edge computing to be dependable and fault-tolerant [24]. Due to the vast range of edge devices, networks, and data processing methodologies, it is a significant challenge to create reliable network services and effective fault-tolerant solutions in edge computing networks. Refs. [25,26,27] have conducted research on methods for fault detection and correction in edge nodes.
As we have seen, many researchers have attempted to deploy distributed computing methods in LoRa gateways and use various techniques to address the edge computing node failure problem, but these methods typically use centralized control for error handling, which often results in significant resource waste and communication delays [26]. In contrast to previous studies, the main significance of this study lies in leveraging LoRa gateways to share the distributed computing load, and in providing an efficient method for accessing and managing large-scale LoRa nodes as distributed edge nodes, which effectively reduces the burden on cloud computing processing and system energy consumption. As a result, this study provides a good edge computing approach for wireless IoT systems that utilize large-scale edge sensing LoRa nodes in applications such as smart forestry and agriculture.
3. Design of the EC-Assisted LoRa Gateway
3.1. LoRa Gateway Hardware System Design Based on Zynq SoC
The LoRa gateway hardware system is based on a Xilinx Zynq-7000 processor. The Zynq SoC architecture combines the software programmability of a dual-core ARM Cortex-A9 processor with the hardware programmability of conventional FPGA logic, providing unparalleled system performance, flexibility, and scalability. The communication module of the LoRa gateway uses the LoRa SX1278 baseband chip, which features a small size, low power consumption, long transmission distance, and high interference immunity. The LoRa gateway can receive, process, store, and forward the data of end devices. Assigning edge computing capability to the LoRa gateway effectively enhances data processing capability and reduces data transmission delays. The proposed EC-assisted LoRa gateway hardware system architecture is shown in Figure 2.
As shown in Figure 2, the main modules connected to the PS resources include the sensor interface, the system debugging interface, the storage module, the communication module, the Ethernet interface, etc. The storage module mainly includes QSPI FLASH, DDR3, and eMMC, which store the running memory, system files, application files, and data files. Uplink wireless communication in the system is via 3G, 4G, or LTE modules, which connect to the network server, while downlink wireless communication relies on four LoRa modules, which connect to numerous end devices. The communication interface between the LTE module and the Zynq is USB, and the communication interface between the LoRa modules and the Zynq is UART pass-through. Furthermore, the system also includes interfaces for sensor information acquisition and debugging. In order to connect to multiple serial devices, we utilized multiple AXI UARTLITE IP core ports in our system. We developed the software for multi-threaded design, communication interface development, and distributed computing using the Xilinx Software Development Kit (SDK) environment. The program was compiled with the appropriate tools to generate an executable file that was subsequently executed on the Zynq platform.
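For illustration only, the following Python sketch shows one way a gateway process could poll several UART-attached LoRa modules under the embedded Linux image; the device nodes (/dev/ttyUL0–/dev/ttyUL3), baud rate, and polling scheme are assumptions and do not reproduce the SDK-based implementation described above.

    import serial  # pyserial, assumed to be available in the embedded Linux image

    # Hypothetical device nodes for the four AXI UARTLITE ports (one per LoRa module)
    LORA_PORTS = ["/dev/ttyUL0", "/dev/ttyUL1", "/dev/ttyUL2", "/dev/ttyUL3"]
    BAUD_RATE = 115200  # assumed UART speed of the LoRa module pass-through

    def open_lora_ports():
        # One serial handle per LoRa module connected through UART pass-through
        return [serial.Serial(p, BAUD_RATE, timeout=0.1) for p in LORA_PORTS]

    def poll_frames(ports):
        # Poll each LoRa module in turn and yield any bytes that have arrived
        while True:
            for idx, port in enumerate(ports):
                frame = port.read(256)  # LoRa payloads are at most 255 bytes
                if frame:
                    yield idx, frame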
Algorithmic IP cores for data processing were integrated into the PL side based on edge requirements to further improve the efficiency of edge computing. However, the end device access and management computations in this article ran on the dual-core ARM Cortex-A9 on the PS side.
3.2. Edge Computing in the LoRa Gateway
Using the LoRa gateway with edge computing capabilities, two modules were transferred from the network server to the LoRa gateway: the network control (NC) module, which processes application packet data, and the join server (JS) module, which manages the join operations of the end devices. Through these modules, the LoRa gateway generates and stores contextual data for the end devices. Conversely, the LoRa gateway requires certain contextual data to perform application packet processing tasks.
As shown in Figure 3, when the LoRa gateway receives a join request from an end device, it first verifies the legitimacy of the request. Then, the LoRa gateway creates session data for the end device, including the DevAddr, two session keys, and some metadata. The LoRa gateway requests and creates a new block with a transaction containing these data. In the meantime, the LoRa gateway must produce and transmit a join accept message to the end device. The end device’s successful acceptance of the join accept message shows that the join operation was successful. Because the NC module used for application data processing is placed in the LoRa gateway, this processing can be moved from the network server to the LoRa gateway.
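A minimal sketch of this join handling is shown below, assuming random session-key generation and an in-memory session table; the DevAddr assignment, key derivation, and join accept format are simplified placeholders rather than the gateway's actual implementation.

    import os
    import time

    sessions = {}  # DevAddr -> session context stored by the EC-assisted gateway

    def handle_join_request(dev_eui):
        # Create session data for a verified end device and build a join accept message
        dev_addr = os.urandom(4)       # simplified DevAddr assignment
        nwk_s_key = os.urandom(16)     # network session key (placeholder derivation)
        app_s_key = os.urandom(16)     # application session key (placeholder derivation)
        sessions[dev_addr] = {
            "dev_eui": dev_eui,
            "nwk_s_key": nwk_s_key,
            "app_s_key": app_s_key,
            "joined_at": time.time(),  # metadata kept with the session
        }
        # In the described system, the session is also recorded in a new block/transaction
        return b"\x20" + dev_addr      # minimal stand-in for the join accept message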
Uplink Data Processing: When an end device reports application data, the NC processing module in the LoRa gateway divides the application packet into three parts: metadata, encrypted data, and the message integrity code (MIC) value. Then, the NC module checks the integrity of the application data by extracting the DevAddr field from the metadata and querying the session data of the end device to verify the authenticity of the MIC. If the MIC value of the application packet does not match, the packet is rejected. After the verification is passed, the LoRa gateway transmits the end device’s encrypted application data to the network server. At the same time, the LoRa gateway sends an acknowledgment message (ACK) to the end device, confirming that the application packet has been successfully received.
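The MIC check can be sketched as follows, assuming a LoRaWAN-1.0-style AES-CMAC computed with the network session key over the frame; the field offsets and the omission of the B0 block are simplifications for illustration, not the exact frame layout used by the gateway.

    from cryptography.hazmat.primitives.cmac import CMAC
    from cryptography.hazmat.primitives.ciphers import algorithms

    def verify_uplink(frame, sessions):
        # Split an uplink frame into metadata, encrypted payload, and MIC, then check the MIC
        metadata, payload, mic = frame[:8], frame[8:-4], frame[-4:]  # assumed layout
        dev_addr = metadata[1:5]                                     # DevAddr field (assumed offset)
        session = sessions.get(dev_addr)
        if session is None:
            return None                          # unknown device: reject the packet
        cmac = CMAC(algorithms.AES(session["nwk_s_key"]))
        cmac.update(metadata + payload)          # full LoRaWAN additionally prepends a B0 block
        if cmac.finalize()[:4] != mic:
            return None                          # MIC mismatch: drop the packet
        return payload                           # encrypted data forwarded to the network server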
Downlink Data Processing: When sending data to the end device, the network server first creates a new connection and encrypts the application data to be sent down. The encrypted data are then sent from the network server to the LoRa gateway. The LoRa gateway encapsulates the downlink application data packets in its NC module and generates and calculates the corresponding MIC value. When the end device reports data, the LoRa gateway obtains the target information of the end device and sends the downlink application packets to the end device through the wireless channel of the LoRa module.
LoRa gateways deployed near the end devices can improve the efficiency of cloud processing in the system by means of edge computing. In particular, the LoRa gateway handles the parsing and encapsulation of application packets from end devices and the MIC calculations, which are no longer handled by the central cloud server. On the one hand, the computing resources in the LoRa gateway can be fully utilized to relieve the computing pressure on the cloud server; on the other hand, the LoRa gateway can perform pre-processing operations on the application data reported by the end devices, such as MIC verification, without affecting the processing of the cloud server or the transmission of the application data.
3.3. Distributed Task Execution Mode in the LoRa Gateway
The symbols applied in this paper are shown in Table 1.
We characterize a task by $T_i = (C_i, D_i, d_i)$, where $e_i$ denotes the $i$th edge node with the execution request, $C_i$ denotes the workload, $D_i$ denotes the size of the task-related data, and $d_i$ denotes the deadline of the task. The task $T_i$ can be divided into subtasks without any limitations. These subtasks may be undertaken concurrently on several edge nodes without a specific order. The significant equation variables are shown in Table 1. To process a subtask in the edge node network and ensure a fair procedure, the required resources must be provided within a CPU time ($t_{i,j}$) that completes the task faster than its objective. For the sake of this paper, we consider each task's required CPU time as a resource. One subtask of the task $T_i$ requires $C_i$ millions of instructions (MI); the remaining subtasks of $T_i$ are then loaded. On the basis of the processing speed $s_j$, the CPU time ($t_{i,j}$) required for processing the subtask of $T_i$ on the edge node $e_j$ of the network is shown below:
$$ t_{i,j} = \frac{C_i}{s_j}. $$
For a task $T_i$ consisting of $n$ subtasks, the LoRa gateway processes the subtasks on $m$ edge nodes in a distributed manner. $R_i(t)$ denotes the resources (CPU time) required to process the task at the edge nodes at time $t$, and is given by:
$$ R_i(t) = \sum_{j=1}^{m} t_{i,j}. $$
$R_i$ denotes the resources required to complete all the tasks of a single edge node, $A_i(t)$ denotes the resources allocated to the tasks processed by the edge node at the current moment $t$, and the remaining required resources $R_i^{rem}(t)$ are denoted as follows:
$$ R_i^{rem}(t) = R_i - A_i(t). $$
The current resource demand at time $t$ is given by $R_i(t)$. For the $i$th task at time $t$, $A_i(t)$ describes all resources that are currently accessible. The resources employed in the edge network distribution process associated with the cloud are defined through the degree of fairness $F_i(t)$ of the $i$th task at time $t$:
$$ F_i(t) = \frac{A_i(t)}{R_i(t)}. $$
A degree of fairness $F_i(t) = 1$ indicates that all needed network resources have been allotted to perform the $i$th task at time $t$; a value of less than 1 indicates an unfair execution.
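As a purely illustrative example (the numbers are not taken from the experiments), suppose that at time $t$ a task still demands $R_i(t) = 400$ ms of CPU time but the group has only been able to allocate $A_i(t) = 300$ ms:
$$ F_i(t) = \frac{A_i(t)}{R_i(t)} = \frac{300\ \text{ms}}{400\ \text{ms}} = 0.75 < 1, $$
so the task is being executed unfairly and the organizer node should recruit additional available edge nodes into the group.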
For competent administration, we propose that significant jobs at the edge be spread over a group $G_k$, where $k$ denotes the number of groups and $e$ denotes the edge nodes. $G_k$ is associated with a group job that has to be executed. As an organizer, $e_o$ represents the edge node that will receive and process the task $T_i$. The organizer edge node will interact with the other edge nodes in the group to complete the assigned task and provide the output to the end nodes.
In the edge node group, each edge node acquires the available LoRa gateway resources based on the task. Our proposed latency-aware algorithm (LAA) executes in a decentralized way. An edge node joins the edge node group when it accepts handling a task, so that the action may be completed in distributed mode. To enable distributed processing of the task $T_i$, the organizer node $e_o$ broadcasts to the edge environment to establish a group $G_k$.
Our proposed LAA algorithm is shown in Algorithm 1. First, in steps 3–7, the organizer node $e_o$ broadcasts a grouping message for adding edge nodes to the pair queue $Q$. After the grouping is completed, in steps 9–15, the LoRa gateway performs the tasks on the edge node elements added to the queue $Q$. Then, in steps 18–21, the nearby edge nodes in the grouping are monitored and are included in the group when they become available. Throughout the process, the task processing is cyclically performed on all edge nodes that join the group.
Algorithm 1. Pseudocode of the LAA algorithm.
Algorithm LAA: Distributed Latency-Aware Task Processing Algorithm
Input: $T_i$, $C_i$, $D_i$, $d_i$, $e_o$, set of neighboring edge nodes;
Output: processing result of $T_i$
1:  for each task $T_i$ do
2:      $Q \leftarrow \emptyset$;
3:      $e_o$ sends a request to join $G_k$;
4:      response $\leftarrow$ neighborhood edge nodes
5:      for each $e_j$ in response do
6:          $Q \leftarrow Q \cup \{e_j\}$
7:      end for
8:      while $T_i$ is not completed do
9:          if $Q \neq \emptyset$ then
10:             for each $e_j \in Q$ do
11:                 if $e_j$ is available then
12:                     pull a subtask of $T_i$
13:                     execute the subtask on $e_j$
14:                     update $R_i(t)$ and $A_i(t)$
15:                 end if
16:             end for
17:         else
18:             for each neighboring edge node $e_j$ do
19:                 if $e_j$ is available then
20:                     send a grouping message to $e_j$
21:                     $Q \leftarrow Q \cup \{e_j\}$
22:                 end if
23:             end for
24:         end if
25:     end while
26:     return the result to the end node
27: end for
28: return
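To make the control flow of Algorithm 1 concrete, the following Python sketch mirrors its three phases (grouping, distributed execution of subtasks, and recruiting available neighbors). The node-discovery and execution calls (broadcast_join, task.split, neighbors, is_available, run_subtask, send_group_message) are hypothetical placeholders, not the gateway's actual API.

    import time
    from collections import deque

    def laa(task, organizer, deadline_s):
        # Latency-aware distributed processing of one task, loosely following Algorithm 1
        start = time.time()
        queue = deque(organizer.broadcast_join(task))         # steps 3-7: group neighboring edge nodes
        subtasks = deque(task.split())                        # independent subtasks, no ordering required
        results = []
        while subtasks and time.time() - start < deadline_s:  # steps 8-25: run until done or deadline
            if queue:
                node = queue.popleft()
                if node.is_available():                       # node has spare CPU time for a subtask
                    results.append(node.run_subtask(subtasks.popleft()))
                queue.append(node)                            # keep cycling over the group members
            else:
                for node in organizer.neighbors():            # steps 18-21: recruit available neighbors
                    if node.is_available():
                        organizer.send_group_message(node)
                        queue.append(node)
        return results                                        # delivered back to the requesting end node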
4. Experiments
4.1. Experimental Setup
The LoRa gateway was built on an FPGA-based embedded hardware system using an industrial-grade Xilinx Zynq board with a dual-core Cortex-A9 SoC. This hardware makes edge computing feasible. Our system comprised four LoRa gateways and two network servers in the central cloud. Many customizable client applications were deployed on the server and could make direct connections to the LoRa gateway. The end devices were deployed for experimental testing using the SX1278-based LoRa communication module. An end device can typically connect multiple sensors for data collection and upload the collected data to the LoRa gateway.
In Experiment 1 (testing join requests from end devices), 100–2200 simulated end devices continually submitted join requests to the LoRa gateways. Each LoRa gateway controlled between 25 and 550 end devices. Each end device operated at random intervals ranging between 10 min and 2 h. If the end device received and accepted the join accept message, the join operation was deemed successful; if no acceptance message had been received after 300 s, the join request failed. The registration messages for end devices are usually very short, and a large number of device accesses can create pressure on the edge computing system in terms of registration information parsing.
In Experiment 1, the evaluation metric used to reflect the real-time performance of the gateway during the registration of a large number of devices was the time it takes for the end node to register with the network service.
Experiment 2 (testing application data upload from end devices) emulated between 100 and 2000 end devices. Each LoRa gateway could support between 25 and 400 end devices. All end devices had already completed the join procedure and could therefore upload data normally. Each end device selected a time interval of 15–20 s for continuous data reporting. The network server sent an ACK for each packet to signal that the uplink data had been received properly. If the end device did not receive an ACK within 30 s after the upload, the transmission of the application data packet was deemed unsuccessful. After the time interval or a failure, the end device continued to transmit. A large number of end device data packets were transmitted with a full load (payload of 255 bytes), and by adopting the design proposed in this paper, the performance of the edge gateway was fully utilized.
Experiment 2 utilized two performance indicators: system throughput and CPU utilization. System throughput refers to the amount of data successfully transmitted per unit of time by network devices or ports (the maximum data rate that devices can receive and forward without dropping frames); it is a measured value used to assess network performance. CPU utilization is the percentage of CPU resources occupied by running programs, representing the load of the machine at a specific time. Throughput and CPU utilization at their limits can assess whether the LoRa network’s overall processing and forwarding capabilities have reached their maximum, and whether the fairness of network transmission is ensured.
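For reference, the two indicators reduce to simple ratios over a measurement window; the sketch below (with hypothetical counter names) only restates the definitions and is not the measurement code used in the experiments.

    def throughput_bps(bytes_delivered, interval_s):
        # System throughput: data successfully received and forwarded per unit of time
        return 8 * bytes_delivered / interval_s        # bits per second over the window

    def cpu_utilization(busy_time_s, interval_s, num_cores):
        # CPU utilization: share of available CPU time occupied by the running programs
        return busy_time_s / (interval_s * num_cores)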
Experiment 3 (testing the EC-assisted LoRa gateway versus the traditional LoRa gateway) simulated the operation of 100–2200 end devices. During the experiment, four LoRa gateways were deployed to provide data processing services for the end devices. Each LoRa gateway could interact with the end devices in its coverage area and obtain the application information reported by the end devices. Then, a comparative experiment was carried out using the conventional LoRa system technique. In a traditional LoRa gateway system, the JS and NC modules are deployed in a central cloud implementation. As seen in Figure 4, the central cloud must thus handle all application packages. Our comparison demonstrated the enhanced performance of the LoRa gateway with EC-assisted support.
In Experiment 3, we evaluated the overall network processing and forwarding capability of the LoRa gateway using the metric of bandwidth occupation, which represents the efficiency of receiving and sending messages on the gateway’s bandwidth per second.
In conclusion, the evaluation encompassed the gateway’s capacity for accommodating large-scale LoRa nodes and processing their uplink data, and the enhancement in bandwidth utilization achieved by EC-assisted LoRa gateways compared to conventional gateways. Through these experiments, the beneficial effects of the proposed approach were assessed.
4.2. Results and Analysis
Figure 5 provides statistics on the processing time for access requests. The black dashed line represents the maximum acceptable delay for an end device to receive and accept the join accept message; it may be adjusted to meet specific needs. The join request was deemed unsuccessful if this time threshold was exceeded. Once the number of end devices reached 2000, the mean latency began to exceed the threshold. According to the findings, with a 5 s constraint and 1800 end devices, nearly 75% of end devices could successfully receive and accept the message.
Figure 6 depicts the system throughput and CPU utilization of the LoRa gateway and network server while processing application packets from the end devices. The throughput continued to increase until the number of end devices reached 1200, due to underutilized resources. At the same time, CPU usage increased. Once the number of end devices approached 1200, the system throughput stabilized and decreased slightly. This was because as the number of end devices continued to increase, the system became overloaded and the consumption of various resource switches caused a degradation in system performance, thus affecting system throughput. Similarly, the variation in CPU utilization reflected this. The NC module in the LoRa gateway consumed about seven CPU cores. The rest of the cores were occupied by modules used for the operating system and signal processing. However, the processing burden on the network server was alleviated, as the packet processing was now handled by the LoRa gateway with edge computing capabilities.
As the number of end devices increases, the LoRa gateway and network server demand more CPU resources. Because all packets, including invalid ones, were processed and checked by the LoRa gateway, the network server’s CPU usage decreased compared to the prior LoRa system, in which the network server was responsible for handling all packets. As demonstrated in Figure 7, in addition to conserving CPU resources, the bandwidth of the transmission link between the LoRa gateway and the network server was also preserved. Compared to the conventional LoRa system, 1000 end devices required about 41.1% less bandwidth, and the gap continued to expand linearly as the number of end devices increased. One of the advantages of conserving resources is that the saved resources may be used for more genuine packets, protecting the system from invalid or malicious traffic. We built edge intelligence algorithms into the LoRa gateway that were able to validate the legitimacy of end devices and perform specific data processing, effectively reducing the transmission bandwidth of the LoRa network and the energy consumption.
5. Conclusions
This article proposed the design and implementation of a LoRa gateway with edge computing, called the “EC-assisted LoRa gateway”. An EC-assisted LoRa gateway prototype was developed on an embedded hardware system, and we proposed a new distributed computing model using the latency-aware algorithm (LAA) for task processing at the edge nodes in the system, which effectively reduced the latency of task processing and improved the reliability and stability of the network. Experiments demonstrated the efficacy of our approaches, with the proposed LAA algorithm achieving the best improvement in edge node performance. We also showed the highest performance the EC-assisted LoRa gateway is capable of achieving: compared to a conventional LoRa system, the EC-assisted LoRa gateway reduces CPU usage and bandwidth consumption without compromising system throughput.
In our future work, we plan to further advance research in edge computing. The current proposal did not address issues related to energy efficiency or the detection and repair of failures in edge nodes. In the future, these issues may be considered for achieving high network availability in the context of IoT ecosystems. Furthermore, the experiments presented in this paper were conducted using a simulated LoRa transmission interface, time, and rate, and not with actual massive LoRa end devices. Therefore, future experiments will focus more on the deployment of actual massive wireless modules and data testing to extend the research to a wider range of wireless transmission scenarios.