
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

We consider a typical body area network (BAN) setting in which sensor nodes send data to a common hub regularly on a TDMA basis, as defined by the emerging IEEE 802.15.6 BAN standard. To reduce transmission losses caused by the highly dynamic nature of the wireless channel around the human body, we explore

Body Area Networks (BAN) are an emerging technology that has attracted attention from researchers in academia and industry due to its significant potential in many different areas including health care, sports, military, and entertainment. A typical BAN application involves a number of low-power sensing devices, operating on or around the human body, that collect sensor data, possibly processing it locally, and transmit the information to a central device, known as a

The current draft proposal of the IEEE BAN Task Group puts forth a TDMA-based approach as the most appropriate MAC solution to achieve the desired energy efficiency [

A TDMA mechanism typically involves splitting time into super-frames, or rounds. At the start of each round, all the nodes turn their radios on to listen for the beacon packet, transmitted by the hub to convey important network management information and assist with time synchronization. This fact can be exploited to implement a variable TDMA schedule by informing all the nodes (either within the beacon packet itself or in a separate packet transmitted immediately after the beacon) of the slot allocations for the upcoming round. Thus, any beacon-based TDMA MAC protocol (including the upcoming 802.15.6 standard [

Our contribution in this work is as follows. As a first step, we formulate the optimal variable scheduling problem based on a simple, two-state (on/off) Gilbert model of wireless links, and propose a number of scheduling strategies for the hub node based only on its observation of nodes' transmission outcomes (success or failure). Through simulation, based on Gilbert parameter ranges extracted from a set of experimental Received Signal Strength Indicator (RSSI) traces [

The rest of the paper is structured as follows. Section 2 discusses the related work, followed by Section 3 that presents our system and wireless channel model and formulates the slot scheduling problem. Sections 4 and 5 present our proposed scheduling solutions with and without the observation of actual RSSI values, respectively. Finally, Section 6 concludes the paper.

The idea of dynamic assignment of transmission slots has been previously studied in the context of the 802.15.4 communication standard, where the network coordinator is able to vary the allocation of Guaranteed Transmission Slots (GTS) on a per-round basis and convey it to the sensor nodes in a periodic beacon packet. Initially, it was noted that the simple GTS allocation mechanism proposed in the standard is ineffective and leads to low bandwidth utilization, as large parts of the GTS remain unused [

All of the above works focus on application requirements and traffic characteristics when performing slot assignments, rather than the state of the wireless channel, which in a BAN can sometimes be highly volatile yet at other times can have a coherence time of up to 400 ms [

A variant of variable scheduling of TDMA slots that accounts for wireless link states has been studied in the context of TDMA-based cellular networks. A good discussion of the framework and survey of proposed algorithms in this context, usually referred to as

We consider a typical medical application of a Body Area Network, such as monitoring of vital life signs (e.g., blood pressure, heart rate, respiratory rate or temperature) [

In our scenario, we set the data generation interval to be equal to the length of a superframe; in other words, each sensor node has one new data sample per round, which it transmits to the hub. Once the transmission attempt is completed, the sensor immediately goes to sleep as an energy-saving measure, regardless of the success of the data delivery. In general, an ARQ mechanism employing acknowledgments from the hub and retransmissions by the sensor could be used to increase the data delivery success rate. However, we do not consider such a mechanism in this work, as our focus is on the delivery rate improvements that can be achieved through variable scheduling alone, without expending additional energy for retransmissions (the performance of variable scheduling and its benefits in the presence of retransmissions are discussed in our separate study [

In order to gain initial insight into the design of efficient variable scheduling, we assume that each wireless link between a sensor node and the hub evolves according to a discrete Markov process, in which each state is classified as either “good” (allowing transmissions to be received successfully) or “bad” (in which no reception is possible). A similar Markov model for wireless links in BAN environments has been suggested by the IEEE BAN Task Group [

We denote the transition probabilities between the bad and the good state and vice versa by

If the probability of a link to be in the good state in a certain slot is denoted by

For the specific cases where a link is initially known to be in the bad state (

Note that
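The state evolution described above can be sketched in code. The symbol names below are our own assumptions (the text's notation is not fully recoverable): `p` denotes the per-slot bad-to-good transition probability and `q` the good-to-bad one, giving a steady-state good probability of `p / (p + q)`.

```python
def good_prob(t, p, q, initially_good):
    """Probability that a Gilbert-model link is in the good state t slots
    after an observation, given the observed state.

    Assumed notation (illustrative): p = P(bad -> good) per slot,
    q = P(good -> bad) per slot.
    """
    pi = p / (p + q)            # steady-state good probability
    lam = 1.0 - p - q           # memory term (second eigenvalue)
    start = 1.0 if initially_good else 0.0
    return pi + (start - pi) * lam ** t
```

Note how, for `0 < p + q < 1`, the probability decays monotonically toward the steady state from a known-good observation and rises monotonically from a known-bad one; this monotonicity is the property the scheduling lemmas below rely on.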

We emphasize that the simple Markov model outlined above is only used to develop an initial set of scheduling techniques that are evaluated in Section 4, while the strategies in Section 5 are designed and evaluated on experimental RSSI traces [

In order to obtain a set of realistic ranges for steady state and volatility values, we have used traces from a publicly available data set that contains experimental RSSI measurements from devices strapped to human subjects performing everyday activities over the course of several days [

For our numerical evaluations in Section 4, the values of the per-link Gilbert model parameters

We now consider the slot assignment strategy by the hub assuming that the wireless links evolve according to the Gilbert model described in the previous section. We define an _{i}

However, while not entirely unrealistic, a full knowledge of all the link states by the hub would incur a significant communication and energy overhead at the start of a TDMA round, as each sensor node would then need to actively sample its wireless channel (e.g., with the transmission of a probe). Accordingly, we are particularly interested in scheduling strategies that only use information already available without additional probing—namely, the outcome (success or failure) of the transmission by each sensor in the previous round. If we denote the number of slots elapsed since the transmission of node

It is interesting to note that repeated slot assignments seeking to maximize (4) in every round may not necessarily lead to the long-term optimal performance. This is due to the fact that the information available to the hub depends on the scheduling decisions taken in the previous round; and hence we refer to a permutation that maximizes (4) as a

For the initial consideration of the optimal scheduling strategies, we assume that the statistical parameters

We initially consider optimal scheduling under the assumption that full information about the current channel states is available to the scheduling algorithm at the start of each TDMA round. As explained above, this can in theory be achieved by probing all the links before making a scheduling decision, at the cost of an unacceptably high time and energy overhead imposed on the sensor nodes. We therefore emphasize that the assumption of full information is only used in this subsection to obtain an upper-bound reference, and it will be relaxed thereafter.

If the channel state information is always available to the scheduler, then the optimal long-term performance is achieved simply by maximizing the rate of successful transmissions at each round independently. Thus, the optimization target is given by _{i}

We point out that well-known algorithms for solving the maximum-weight matching problem exist, requiring O(n^3) time in general [
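As an illustration of the per-round problem, the sketch below finds the slot assignment maximizing the expected number of successes by exhaustive search. It assumes the scheduler knows, for each link i and slot s, the probability that the link is good in that slot; brute force is for small n only, and a Hungarian-algorithm solver would replace the enumeration in an O(n^3) implementation.

```python
from itertools import permutations

def best_schedule(good_prob_matrix):
    """Find the slot permutation maximizing the expected number of
    successful transmissions. good_prob_matrix[i][s] is the probability
    that link i is good if it transmits in slot s. Exhaustive search is
    for illustration only (O(n!)); the same maximum-weight matching is
    solvable in O(n^3) with the Hungarian algorithm."""
    n = len(good_prob_matrix)
    best, best_val = None, -1.0
    for perm in permutations(range(n)):     # perm[i] = slot given to link i
        val = sum(good_prob_matrix[i][perm[i]] for i in range(n))
        if val > best_val:
            best, best_val = list(perm), val
    return best, best_val
```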

To that end, we proceed by establishing a basic property of the optimal solution.

If _{i}

Lemma (1) captures the intuition based on a fundamental property of the function

Accordingly, we define the following scheduling approaches that will be evaluated further:

(O(n^2) complexity).

(O(n^3) complexity).

The rationale behind the

It is interesting to point out that the gain achieved by variable scheduling declines for extreme values of outage threshold. Indeed, when the threshold is low, the average volatility and steady state of all links are also low (as it was shown in

As explained earlier, requiring the nodes to actively sample their channels at the start of each round would impose an unacceptably high overhead, in terms of both time and energy. We now move away from that assumption and discuss scheduling algorithms that rely only on the outcome of the communication during the previous round, which is available anyway and does not require additional effort to obtain.

We first point out that, despite the delayed information, it is still true that the probability of being in a good state continues to monotonically decrease over time for any link that was known to be good in the previous round, and, conversely, monotonically increase for any link that was bad. Consequently, the result of Lemma (1) continues to hold; in other words, it is still always better to schedule all the links in the “good” group ahead of the “bad”. However, it is no longer true that, if all the links have identical Markov transition parameters, then the ordering within each group does not matter. Our next result explains this fact in detail and presents the optimal transmission schedule in this case.

In other words, the order of the links in the “good” group should be reversed while the order of the links in the “bad” group should be kept the same as in the last round.

As a result of Lemma (2), we add another strategy to our repertoire: schedule all links that were good in the previous round ahead of the bad ones; and within the good group only, invert the order of the transmissions from the last round. We refer to this approach as the
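The strategy just described can be sketched as follows; the node identifiers and the `succeeded` mapping are illustrative.

```python
def next_schedule(prev_order, succeeded):
    """Build the next round's transmission order from the previous round's
    order and per-node outcomes, per the strategy above: nodes that
    succeeded transmit first, in *reversed* previous order; nodes that
    failed follow, keeping their previous relative order."""
    good = [n for n in prev_order if succeeded[n]]
    bad = [n for n in prev_order if not succeeded[n]]
    return list(reversed(good)) + bad
```

The inversion within the good group places the node observed good most recently (i.e., latest in the previous round) into the earliest slot, where its good-state probability has decayed the least.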

The graph in

However, the most interesting finding is that the performance of the newly introduced

As mentioned earlier, when the scheduling is not based on the instantaneous channel state information at the start of the round, it is no longer necessarily true that a scheduling strategy that maximizes the expected number of successful links in each round will also achieve the best

Consider the following artificial example to illustrate this effect. Assume

The previous section explored scheduling strategies based only on the binary outcome (

Since the Gilbert model discussed in Section 3 does not capture the fluctuations of RSSI value, in this section we move away from any particular model and evaluate the scheduling strategies using the RSSI traces directly. We emphasize that the design of our algorithms is still based on the same insights from previous sections,

We proceed by examining how the probability of successful transmission evolves over time for different RSSI values, thus establishing a connection with the insights gained in the previous section.

Once again, we start with the simple case where the information (namely, RSSI readings) from all links is available to the scheduler at the beginning of the round. Recall that in the corresponding scenario based on the Gilbert model, we introduced three scheduling strategies:

As the next step, we aim to find out how the newly introduced

Despite the above fact, the RSSI information can still be useful for scheduling decisions based on the previous round outcomes, and we devised a strategy to combine the strengths of

Similar to our previous strategies, the hub defines two groups of nodes: those that should be scheduled as early in the round as possible (the “Early” group), and, conversely, as late in the round as possible (the “Late” group). Previously, the “Late” group corresponded to the subset of nodes with bad links and the “Early” group only contained those with good links, but now we allow more flexibility. More specifically, the steps taken by the

Initially, the nodes are equally split between the two groups in a random manner.

Every round, each node is moved to the opposite group unless it failed to transmit in the previous round, in which case it is forced to the “Late” group.

Slots are assigned by increasing RSSI order in the “Early” group and by decreasing order in the “Late” group. If a failure happened in the previous round, that node is assigned a large negative RSSI value, implying that it will be scheduled as late as possible.
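The steps above can be sketched as follows; the sentinel RSSI value and the data structures are our own assumptions for illustration.

```python
def rssi_schedule(groups, rssi, failed, FAIL_RSSI=-1e9):
    """One round of the group-based RSSI scheduler described above.

    groups: dict node -> 'early'/'late', the grouping from the previous
    round; rssi: last observed RSSI per node; failed: set of nodes whose
    previous transmission failed. Returns (new grouping, slot order).
    """
    new_groups = {}
    for node, g in groups.items():
        if node in failed:
            new_groups[node] = 'late'                   # failures forced to Late
        else:
            new_groups[node] = 'late' if g == 'early' else 'early'
    # Failed nodes get a very low sentinel RSSI so they end up last.
    eff = {n: (FAIL_RSSI if n in failed else rssi[n]) for n in groups}
    early = sorted((n for n in new_groups if new_groups[n] == 'early'),
                   key=lambda n: eff[n])                # increasing RSSI
    late = sorted((n for n in new_groups if new_groups[n] == 'late'),
                  key=lambda n: eff[n], reverse=True)   # decreasing RSSI
    return new_groups, early + late
```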

Note that if the RSSI reading of each node remains constant across consecutive rounds, then the algorithm will behave exactly as simple

To improve reliability in Body Area Networks, we have presented a framework for variable TDMA scheduling, where transmission slots are assigned to nodes based on information about their wireless links, with the goal of minimizing transmission failures due to bad channel states. Based on a two-state Gilbert link model, we have developed the simple yet effective

Furthermore, we have conducted additional evaluations directly on an experimental set of RSSI traces, reinforcing the main scheduling principles and extending the

We highlight that all the techniques proposed in this paper operate by varying the transmission schedule of sensor nodes only, and the performance improvements are achieved without consuming any additional energy (e.g., by retransmissions) or increasing the latency of incoming packets. It is worth noting that in a separate work [

Our study has aimed to demonstrate the benefits of variable scheduling at a proof-of-concept level. There are many practical issues that remain to be addressed in the design of a full MAC protocol incorporating the ideas of this paper and [

Consider an allocation _{i}_{i}

We observe that both links improve their success probabilities as a result of the swap. Consequently, the assignment

The first and second claims of the lemma are essentially a repetition of Lemma (1) and follow trivially from the monotonicity of _{i}_{j}

Consider an allocation _{i}_{i}_{j}

Since the parameters of all links are identical, we can denote _{i}_{j}_{i}_{j}

The proof of the fourth lemma claim follows a very similar derivation to the above, and is omitted for brevity.

Typical ranges for steady state and volatility.

Performance based on link state information at the start of each round.

Performance based on link state information from the previous round.

Probability of successful transmission based on most recent RSSI reading, with attenuation thresholds of 80 dB and 85 dB.

Performance based on RSSI reading at the start of each round.

Performance based on RSSI reading from the previous round.