1. Introduction
With the rapid growth of wireless services beyond fifth-generation (5G) technology and the proliferation of innovative applications, a massive number of devices have been deployed to collect and share data to achieve their full potential; this is known as the Internet of Things (IoT) [
1,
2]. IoT technology plays a key role in modern wireless communication systems. IoT networks are expected to realize features like local decision making and/or remote monitoring and control by utilizing sensory mechanisms in real-time applications by deploying a massive number of devices [
3]. IoT technologies, such as smart cities, vehicular communication, autonomous driving, and remote health care, demand a high amount of data traffic for wireless networks and are growing explosively [
2]. In this context, providing reliable wireless access for the massive IoT devices and the diversification of data traffic is essential to the realization of 5G IoT networks and has attracted considerable attention from academia and industry [
2,
4]. Cellular-based IoT networks are considered the most promising potential solution to provide last-mile connectivity for IoT devices due to their cost-effective deployments and guaranteed services, such as high scalability, diversity, and security, which do not need massive additional infrastructure deployments and significant changes in current cellular networks [
2,
5].
In cellular-based IoT networks, massive IoT devices initially access the cellular base bastion (BS) for network connections directly or through IoT gateways by performing a random access (RA) procedure [
5]. However, the massive number of IoT devices simultaneously access and uplink the transmission of small data packets and even need to frequently forward the real-time updating status information. Then, preamble collision among IoT devices is inevitable, and as such, improving the access mechanism of existing cellular systems for better quality of service (QoS) with less power consumption of IoT devices is one of the key challenges for cellular-based IoT networks [
2,
5,
6].
In IoT networks, a broad range of IoT services are enabled by massive amounts of IoT devices to monitor environmental factors such as temperature and humidity [
4,
7]. In this case, IoT devices need to process different data traffic streams for various receivers, where each traffic stream has different performance metrics, such as delay, throughput, or AoI, in characterizing the timeliness [
1,
2,
4]. Then, there is an unavoidable need for designing innovative data-collection mechanisms for cellular-based IoT networks to enhance efficiency and scalability [
7]. Due to the delay-tolerant and uplink-preferable characteristics of IoT traffic with small data packets, the main access technology to request the channel resources in the uplink transmission is the contention-based RA [
2]. Meanwhile, different IoT services have different QoS requirements based on the properties of the traffic data and functionality. Then, providing access mechanisms with satisfactory network performance for a massive number of IoT devices and the corresponding variable network services is still a key challenge. However, IoT devices repeatedly generate updated information regarding the environmental factors being observed and transmit them to the BSs [
3]. Due to the properties of IoT networks, the purpose of frequent updating of the information status is to keep the information as fresh as possible, which means the delay is as minimal as possible. In [
8], the authors introduced the idea of the Age of Information (AoI), which measures the freshness of information, and they discussed the possibility of utilizing the efficient design of freshness-aware IoT networks. Then, a series of works utilized the results of [
8] by characterizing the temporal mean of AoI or other freshness-related metrics for various variations in queuing models. In [
4], the authors applied the concept of AoI to measure the freshness of information at the BSs regarding the random processes monitored by IoT devices, while [
1] studied the AoI of K-tier cellular-based IoT networks while considering weighted path loss association policy and fractional power control strategy. However, artificial designs, like utilizing timing advance (TA) information to avoid collisions, can result in unfairness, which is not considered in most existing studies.
In this paper, we propose a multi-tier cellular-based IoT network with a queuing theory-based theoretic model and utilize a traffic-prioritization scheduling scheme to investigate minimum packet transmission delay to meet particular application latency requirements on AoI. Firstly, a multi-tier hierarchical cellular-based IoT network is modeled by utilizing Voronoi tessellation. Secondly, data packets from each IoT device are allocated different priorities based on their AoI in real time. A multiple access protocol is designed for data packets to access the uplink channel and the data packet with the highest priority without the regulatory requirement for listen-before-talk (LBT), while the others have carrier-sense multiple access with a collision-avoidance (CSMA/CA) mechanism. Thirdly, the mean packet-transmission delay is obtained for different traffic packets by analyzing the M/G/1 queue in each AP. Fourthly, a mean packet total delay minimization problem by optimally allocating network resources for a two-tier cellular-based IoT network is given and solved by utilizing the gradient descent method with the bisection method. The numerical results demonstrate the superior effectiveness of the proposed mechanism and algorithm, which not only achieve the minimum mean packet delay but also give consideration to delay-sensitive traffic based on the AoI metric.
The remaining parts of this paper are structured as follows. In
Section 2, we review the related literature. In
Section 3, we model the system for a cellular-based IoT network with multi-tier APs and investigate the queueing model in each AP to achieve mean packet delay for packets with different priorities. In
Section 4, we present the problem formulation to minimize mean total packet delay in a multi-tier cellular-based IoT network and solve it. In
Section 5, we assess the effectiveness of the proposed mechanism by the mean packet delay in a two-tier cellular-based IoT network by showing numerical results, followed by conclusions in
Section 6.
2. Related Literature
In [
9], a queuing-theory-based model that allows for cross-layered optimization was proposed to investigate the possibility of leveraging multi-RAT to reduce transmission delay without losing the requisite QoS and maintaining the freshness of the received information via the AoI metric in the multi-hop IoT networks. Meanwhile, previous research [
10] proposed a three-dimensional resource allocation algorithm for maximizing the total throughput in a cognitive radio IoT network with simultaneous wireless information and power transfer. In [
11], the authors maximized the energy efficiency and spectral efficiency of the space–air–ground IoT networks by optimizing subchannel selection, power control, and UAV position deployment. Refs. [
12,
13,
14] investigated the resource allocation in cellular IoT networks, which revealed the rising attention of researchers given to next-generation cellular networks and IoT techniques. The authors in [
15] maximized the energy efficiency in a UAV-based network that collects information from smart devices, sensor devices, and IoT devices by optimizing UAV trajectory, power allocation, and time slot assignment. However, as the most distinguishing feature, diverse QoS requirements, especially the strict data packet latency at difficulty scenarios with multi-hop transmissions, are not comprehensively investigated in the existing literature, which requires a more elaborately designed optimization mechanism based on the AoI metric, and this need motivates this paper. In [
16], authors studied the resource allocation optimization mechanism to minimize mean packet transmission delay in a three-dimensional cellular network with multi-layer UAV networks. Furthermore, in [
17], data packets were classified into incumbent packets and relayed packets based on the data source locations. Packet delays for various packet classes at each layer of a multi-layer UAV network were investigated, and minimum total packet delay was achieved by optimally allocating spectrum and power resources among layers of the UAV network. However, all the relayed packets in each UAV for relaying to the macrocell base station were allocated with the same priority to be served in the queuing model, which is less efficient in utilizing limited radio resources with guaranteed network performance, especially for the packets that already experienced large delay that accumulated at the previous relaying UAV layers. Meanwhile, multiple-hop relaying causes a delay penalty to packets, which is fatal to delay-sensitive packets in life-limited UAV networks. Then, it is essential to investigate and guarantee packet delay performance for all the relayed packets in the hierarchical multi-tier networks based on the AoI metric, which are the main contributions and innovations of this paper. Furthermore, we extend the technologies into a different but more general scenario, which is the cellular-based IoT networks, and it can be extended to the scenarios for heterogeneous data with different delay requirements based on the preference of the network designer.
3. System Model
In this paper, we investigate a cellular-based IoT network with multi-tier APs, which is denoted by a graph
. In
,
is the set of
M MBSs,
is the set of
A APs, and
is the set of
D IoT devices. The nodes in
,
and
follow Poisson point process (PPP) distribution with intensities
,
, and
. The IoT devices transmit data packets to the corresponding MBS through several AP relays due to the limited transmission range in IoT networks. Then, as shown in
Figure 1, each AP tier in the network consists of the APs with the same number of transmission counts from the MBS.
3.1. Channel Model
In cellular-based IoT networks, the communication environment is complex, and it is impossible to achieve ideal LOS communication channels for all the IoT devices, even with the utilization of multi-tier APs to extend network coverage and improve network connectivity. To achieve reliable analysis results for general communication scenarios, this paper models propagation channels by considering LOS and non-LOS components along with their occurrence probabilities separately in (
1). The LOS connection probability is decided by the communication environment, height, and density of surrounding buildings, and the elevation angle between the IoT devices and APs along locations of them [
18]. Depending on the LOS or non-LOS connection between the IoT device and AP, the received power
at node
from node
j is shown as
where
is an additional attenuation factor due to the non-LOS connection and
.
This paper utilizes the Rayleigh fading model in non-LOS paths for both desired signal and interference to achieve reliable analysis results in general communication scenarios, which leads to a random channel power gain following an exponential distribution with unit mean. Then the received power at node
from node
j,
, is shown as [
16]
where power transmitted from node
j to node
is the
. The
is a random variable that follows an exponential distribution with mean
to account for the multi-path fading. Path loss exponent is
.
is the distance between node
j and node
, where
and
denote locations of node
j and node
in the Voronoi tessellation [
19].
The transmission rate
from node
j to node
is
where the channel bandwidth between
j and
is
with the co-channel interference and white Gaussian noise are
and
.
3.2. Multi-Tier Cellular-Based IoT Network Model
In a cellular-based IoT network, the expected number of APs covered by one MBS can be given in the Poisson–Voronoi (PV) system, as shown in [
19,
20]
where the radius of MBS cell coverage is
. By utilizing the same methodology, the expected number of IoT devices in one AP coverage
.
With modifications in (
4), the expected total length of connections between APs and MBS is as shown as
and expected total length of connections from device to AP is
.
The expected average length of connections between APs and MBS can be turned into the radius of MBS cell coverage, which can be given by utilizing the properties of Voronoi properties mentioned above:
Then, the mean number of tiers of APs covered by one MBS is
where
is the distance between neighboring tiers of APs, which is determined by the minimum transmission range needed to avoid co-channel interference.
By taking the same procedure as shown in (
4), mean number of APs at the
ith tier of AP network is represented as [
20]
where
.
3.3. Queueing Model in Multi-Tier Cellular-Based IoT Network
As shown in the multi-tier cellular-based IoT network model, each AP at the
ith tier receives packets not only directly from the IoT device served by it but also from APs at the
th tier for relaying. To utilize the limited resources efficiently while considering the QoS requirement of the delay-sensitive IoT devices and the corresponding applications, we utilize different access protocols for different data traffic streams in the queueing model. The priority of each data packet is shown as
where
are the data packets in the waiting list to access the channel. The data packet that has the highest priority is named
, and the others with different sequence numbers from
, to indicate the priorities.
Data packet with the highest priority in the queueing model immediately occupies the channel as soon as the channel is sensed to be idle without the LBT procedure, while the other packets follow the CSMA/CA mechanism in the same batch. After the packet is successfully transmitted, the data packet with the highest priority among all the remaining data packets that not have been successfully transmitted yet follows the CSMA/CA mechanism in the next sensing process. In this paper, we name the data packet with the highest priority as a packet with priority and the other packets as packets without priority.
For the data received from the APs at the th tier, the priority of each packet is decided by the location of the AP that the generating device is attached to, which is because the packet generated by the device attached to the APs far from the ith tier already experienced delay, which makes it more emergent to give priority to it to satisfy the QoS requirement by utilizing a shorter contention window size for these packets than that of the packets near the ith tier.
3.4. Mean Packet Delay of Packet with Priority
3.4.1. Without Packets in the Channel
The packet service time of packets with priority is determined by the status of the packets without priority. Under the average fraction of time that packets without priority do not occupy the channel, packets with priority in the queueing instantly occupy the channel, and the packet service time in this case is as shown as [
17]
where there is no packet with priority occupying the channel at the moment.
is the channel occupancy time in service of packets with priority, and that for packets without priority is
.
and
follow the exponential distribution with
and
, respectively. Packet arrival rates for packets with priority and packets without priority are
and
, which follow Poisson distribution, respectively.
On the other hand, the packet service time of packets with priority with existing packets with priority occupying the channel is given by
where
is the residual busy period of packets with priority, whose Laplace–Stieltjes transform (LST) is as shown as [
21]
The LSTs of
based on the
are expressed as follows:
where
is a root of quadratic equation.
The mean packet service time of packets with priority in this case is
3.4.2. With Packets in the Channel
In addition, under the average fraction of time that packets without priority occupy the channel, packets with priority wait until the channel is released and occupy the channel as soon as the channel is sensed to be idle. The packet service time in this case is given by
where
is the residual service time of
. It is worth mentioning that this case is based on the fact that it is impossible to have packets with priority occupying the channel under the assumptions in this case.
Then, based on the fundamental theorem, the packet service time of packets with priority is shown as
where
is achieved by utilizing the memoryless property of exponential distribution.
The LST of
can be given as
3.5. Mean Packet Delay of Packets without Priority
3.5.1. Without Packets with Priority in the Channel
By taking the analysis procedure as shown in the previous section for packets with priority, the packet service time of packets without priority at the
ith tier of the multi-tier cellular-based IoT network can be achieved based on the status of packets with priority. In the case that packets with priority do not take the channel, the packet service time of packets without priority is [
17]
where
is the time for successful completion of the distributed coordination function (DCF) inter-frame space (DIFS) without any packets with priority arriving. We also assume that there are no packets without priority occupying the channel in this case at the moment. However, if the channel is occupied during the duration of the DIFS, a new DIFS restarts after the channel is idle again, which can be given as a summation of stopped time for the time between the first DIFS starts and the last DIFS duration
with a busy period of packets with priority
. Then the
analyzed above for packets without priority with CSMA/CA mechanism is [
22], as shown in (
19).
In (
19), the
is
and
is the probability that there are arriving packets within the
kth DIFS attempt duration.
The LST of
and
are as shown as follows:
and
where the
is a root of cubic equation.
Then, the LST of
is shown as
is the time for the backoff procedure in the CSMA/CA mechanism. After detecting the channel is not occupied for the DIFS duration, the station initiates the transmission only if the channel remains idle for an additional random time duration
, which is determined by the contention window size and is given by
where
is the backoff interval and
is the number of arriving packets with priority during
.
is the number of arriving packets without priority transmitted from devices served by APs at the
th tier.
Then, the LST of
can be given by
On the other hand, the packet service time of packets without priority with existing packets without priority occupying the channel is given by
where
is the residual busy period of packets without priority occupying the channel, whose LST is shown as
The LSTs of
are expressed as follows:
where
is the packets without priority occupying the channel.
is a root of cubic equation.
Then, the mean packet service time of packets without priority in this case is shown as
3.5.2. With Packets with Priority in the Channel
The packet service time of packets without priority in the case that packets with priority occupy the channel is presented as
It is also worth mentioning that this case is based on the fact that it is impossible to have packets without priority occupying the channel under the assumptions in this case.
Then based on the fundamental theorem, the packet service time of packets without priority is shown as
5. Numerical Results
In this paper, we present a cellular-based IoT network with two-tier APs in one MBS cell coverage with dimensions 1 km × 1 km × 1 km. The distance between two tiers of APs is 50 m. In the simulation, , and are 1, 1000, and 10,000 per km, respectively. The total spectrum and power for APs in the network are 1000 MHz and 100 W, respectively. The step size to update and is . The system parameter is s, and the maximum is s by taking slot time as with the contention window size as 8.
Figure 2 presents variation trends of the total packet transmission delay
as spectrum allocation ratio
for different fixed power allocation ratios
. Meanwhile, the minimum
for each
reveals the optimal value of
, which is
as shown with the vertical dotted lines. Based on the facts shown in the figure, we can observe that the value of
is decreasing as the
keeps increasing, which is due to the fact that more transmission power can compensate the shortage of spectrum resource to achieve the same minimum total packet delay.
Figure 3 shows the variation trend of the total packet transmission delay
as power allocation ratio
for various fixed values of spectrum allocation ratio
. Also, the minimum
for each
reveals the optimal value of
, which is
as shown with the vertical dotted lines. The value of
is decreasing as the
keeps increasing, which reflects the same conclusion as shown in
Figure 2 and cross-proves the correctness of the proposed scheme and algorithm. On the other hand, it presents a performance comparison of the proposed mechanism and the spectrum resource allocation with the Monte Carlo simulation. As shown in the figure, the proposed mechanism obtains better minimum mean total packet delay with
than the
is any other values obtained by using the Monte Carlo simulation. Meanwhile, the performance comparison of the proposed scheme with the spectrum allocation based on the round-robin scheduling mechanism with
by the total packet transmission delay is given in
Figure 4. In the round-robin scheduling mechanism, each tier in the multi-tier cellular-based IoT network alternatively utilizes all the network resources, which results in a single-hop transmission network and accumulates all the network traffic. It also shows superior performance than the round-robin scheduling in the figure.
Based on the results shown in
Figure 2 and
Figure 3,
Figure 5 describes the value of optimal
and optimal
, which collaboratively reveal the minimum
as shown in the third dimension in the figure. In the figure, the value of
is decreasing with
increasing, and the value of
is decreasing when
is increasing, as shown in
Figure 2 and
Figure 3. With the condition mentioned above that
must be satisfied,
cannot be less than
in
Figure 3 and
Figure 5.
With
Figure 5 as a basis,
Figure 6 describes the mean delay
at the first tier of the network experienced by the packets transmitted from devices served by APs at the first tier of the multi-tier cellular-based IoT network. The optimal values of
,
, and the optimal value of
,
, are constant at the maximum values to achieve the minimum
, as shown in
Figure 6, on account of more spectrum and power resources leading to the smaller delay to packets from devices attached to the APs in the first tier with constant packets arrival rate and higher priority than the other packets.
On the other hand,
Figure 7 exhibits the mean delay
at the second tier of the network experienced by the packets transmitted from devices served by APs at the second tier of multi-tier cellular-based IoT networks. The optimal value of
,
, and the optimal value of
,
, is constant at the minimum values to obtain the minimum
, as shown in
Figure 7, for more spectrum and power resources.
Figure 8 shows the mean delay
at the first tier of the network experienced by the packets transmitted from devices served by APs at the second tier of the multi-tier cellular-based IoT networks for relaying. The optimal value of
,
, is constant at the maximum value, which is due to the relaying delay being the dominant factor in mean total packet delay and needs enough resources to guarantee the QoS performance due to the large AoI. The optimal value of
,
decreases from the maximum value to the minimum value as
increases because the maximum power resource is needed to obtain the minimum mean relaying delay when the spectrum resource cannot be guaranteed, as the spectrum increases until it reaches the value where the minimum mean total packet delay can be guaranteed with any ratio of power resource allocation, which changes to minimum value sharply, and the remaining part is allocated to the second tier of the IoT network. It also shows that the mean total packet delay is more sensitive to
compared to
, because the mean delay for relaying is more sensitive to
, as shown.