#### **3. Preliminaries**

#### *3.1. System Model*

A sensor network is modeled as a graph *G* = (*V*, *E*) consisting of a set of vertices, *V* = {1, 2, ..., *N*}, representing the sensor nodes, and a set of edges, *E* ⊆ *V* × *V*, representing the two-way communication links among the nodes. We refer to the nodes located within the wireless range of a node *i* ∈ *V* as the neighbors of node *i*, denoted by *V*(*i*) = {*j* ∈ *V* | {*i*, *j*} ∈ *E*}. Each node has a unique ID number (≥ 1) as well as a hardware clock. For clarity, we assume that the network topology is fixed and that communication between nodes is fully reliable. We further assume that each node is equipped with an omnidirectional antenna with a fixed wireless range of radius *r* meters, and that the sink node knows the neighbors of every node in the network [21].
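For concreteness, the following minimal Python sketch shows one way to represent such a graph as adjacency lists and to read off the neighbor set *V*(*i*); the topology used here is illustrative and not taken from the paper.

```python
from collections import defaultdict

def build_graph(edges):
    """Build adjacency lists from a list of undirected edges {i, j}."""
    adj = defaultdict(set)
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)  # links are two-way, so record both directions
    return adj

# V(i): the neighbors of node i, i.e., all j with {i, j} in E
edges = [(1, 2), (1, 3), (2, 4), (3, 4)]
adj = build_graph(edges)
print(adj[1])  # V(1) = {2, 3}
```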

#### *3.2. Message Format*

LPSRS uses two types of messages: one for the scheduling process and one for the synchronization process. The scheduling message includes the following fields: the destination ID (0xFFFF for broadcast), the source ID (*SrcID*), the message type (*Sch*), the number of scheduling messages (*MesN*), the list of reference nodes (*RefNodes*), their scheduled time slots (*SchSlots*), and a sequence number (*SeqN*).

Meanwhile, the synchronization message carries the fields needed by the periodic synchronization process described in Section 4.2.


#### **4. LPSRS Algorithm**

LPSRS aims to decrease power consumption while synchronizing the whole WSN. It uses a scheduling methodology that covers as many nodes as possible with a small number of reference nodes. LPSRS consists of two processes: (1) the scheduling process and (2) the synchronization process. The scheduling process runs only during the configuration phase or when the network topology changes significantly. It offers a low-power way to guarantee collision-free synchronization over a tree-based network topology. Meanwhile, the synchronization process is repeated periodically with an interval *Psyn* to keep the time offset of all nodes within an allowable threshold.

#### *4.1. Scheduling Process*

In this section, we explain in detail the scheduling algorithm, which aims to reduce communication costs while ensuring that all nodes in a tree-based network topology are completely covered. The suggested solution assigns each reference node a distinct time slot, ensuring that synchronization operations are performed at different times and thus avoiding collisions.

We first describe the core idea of the scheduling algorithm. Since finding the minimum set of connected reference nodes is NP-hard [32], we use the breadth-first search (BFS) method to approximate the minimum set of reference nodes. BFS traverses the graph level by level, with a total time complexity of *O*(*V* + *E*) [33]. Since the sink node (a predefined node) has complete knowledge of the network topology, it uses BFS to determine the level of each node in the network. As shown in Figure 1 and Algorithm 1, the search begins at the sink node and visits all neighbors at the current depth before moving on to the nodes at the next depth level (lines 8 to 12). The sink also determines all reference nodes, *RefNodes*, and their scheduled time slots, *SchSlots* (lines 13 to 18). Here, *RefNodes* are the nodes responsible for spreading time messages to all nodes in the network. To prevent selecting the same reference node twice, *Parentflag*[*i*] is set to *true* (line 15). Next, all reference nodes and their associated time slots are listed in the scheduling message, which is then broadcast to all nodes in the network. Each reference node then uses its assigned time slot to complete the synchronization process [21].

**Figure 1.** Flow chart of the scheduling process.

**Algorithm 1**: LPSRS pseudo-code for the scheduling process (sink node only).

**Initial**: *Q* ← ∅, *CoverNodesflag*[ ] ← *false*, *cnt* ← 0, *NodeParent*[ ] ← ∅, *Parentflag*[ ] ← *false*
1. *s* ← *sink node*
2. *RefNodes*(1) ← *s*
3. *SchSlots*(1) ← 0
4. *CoverNodesflag*[*s*] ← *true*
5. *enqueue*(*Q*, *s*)
6. **while** (*Q* ≠ ∅) **do**
7. *current i* ← *dequeue*(*Q*) // Remove *i* from the queue
8. **for** (*every edge* (*i*, *j*) ∈ *E*) **do**
9. **if** *CoverNodesflag*[*j*] == *false* **then**
10. *CoverNodesflag*[*j*] ← *true*
11. *NodeParent*[*j*] ← *i*
12. *enqueue*(*Q*, *j*)
13. **if** *Parentflag*[*i*] == *false* **then**
14. *RefNodes*(*cnt*) ← *i*
15. *Parentflag*[*i*] ← *true*
16. *SchSlots*(*cnt*) ← *cnt*
17. *cnt* ← *cnt* + 1
18. **end if**
19. **end if**
20. **end for**
21. **end while**
22. send scheduling message <0xFFFF, *s*, *Sch*, *MesN*, *RefNodes*, *SchSlots*, *SeqN*>
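For readers who prefer executable code, the following Python sketch mirrors Algorithm 1 using the adjacency-list representation from Section 3.1; the function name and return format are ours, not part of LPSRS.

```python
from collections import deque

def schedule(adj, sink):
    """BFS-based reference-node selection, mirroring Algorithm 1."""
    covered = {sink}               # CoverNodesflag
    is_reference = set()           # Parentflag
    parent = {}                    # NodeParent
    ref_nodes, sch_slots = [], []  # RefNodes, SchSlots
    q = deque([sink])
    while q:                       # lines 6-21
        i = q.popleft()
        for j in adj[i]:           # every edge (i, j) in E
            if j not in covered:   # lines 9-12: grow the BFS tree
                covered.add(j)
                parent[j] = i
                q.append(j)
                if i not in is_reference:  # lines 13-18: i becomes a reference
                    is_reference.add(i)
                    ref_nodes.append(i)
                    sch_slots.append(len(ref_nodes) - 1)  # slots 0, 1, 2, ...
    return ref_nodes, sch_slots, parent
```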

When a node receives a scheduling message, it applies Algorithm 2. A node responds only if its ID is stored in *RefNodes* (lines 5 to 10); it then records its associated time slot, *mySchslot*. Each reference node starts a waiting timer (WT) as soon as it receives the scheduling message. The value of WT is computed as follows:

$$WTvalue = mySchslot - Schslot_{SrcID} \tag{1}$$

Here, *WTvalue* is the number of time slots each reference node waits before retransmitting the scheduling message; *SchslotSrcID* and *mySchslot* are the sender's time slot and the current reference node's time slot, respectively (lines 12 to 15). The current reference node transmits the scheduling message to its neighboring nodes as soon as WT expires (lines 17 and 18). Additionally, each reference node examines the sequence number field, *SeqN*, to avoid forwarding the scheduling message multiple times. If the *SeqN* of the current message is less than or equal to the *SeqN* of the previous message, the reference node does not forward the message; it only adds the *SrcID* of the sender node to its reference-node neighbor list (lines 2 to 4). This list is used by the synchronization process, as described in Section 4.2.
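The following Python sketch illustrates this node-side logic (duplicate suppression via *SeqN*, reference-node lookup, and the WT computation of Equation (1)). The dictionary message layout and the *Node* record are illustrative assumptions; this is not the paper's exact Algorithm 2 pseudocode.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str
    last_seqn: int = 0
    ref_neighbors: set = field(default_factory=set)

def on_scheduling_message(node, msg, slot_width_ms=10):
    """Handle a scheduling message <DestID, SrcID, Sch, MesN, RefNodes, SchSlots, SeqN>."""
    if msg["SeqN"] <= node.last_seqn:
        # Stale duplicate: do not forward; only remember the sender as a
        # reference-node neighbor (lines 2-4 of Algorithm 2).
        node.ref_neighbors.add(msg["SrcID"])
        return None
    node.last_seqn = msg["SeqN"]
    if node.node_id not in msg["RefNodes"]:
        return None  # non-reference nodes do not retransmit
    # This node is a reference: look up its own slot and the sender's slot.
    my_slot = msg["SchSlots"][msg["RefNodes"].index(node.node_id)]
    src_slot = msg["SchSlots"][msg["RefNodes"].index(msg["SrcID"])]
    wt_slots = my_slot - src_slot      # Equation (1): WTvalue
    return wt_slots * slot_width_ms    # wait this long, then re-broadcast
```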

For a large network, if the size of *RefNodes* and *SchSlots* exceeds the maximum packet size, the scheduling message is divided into *MesN* smaller messages. In addition, the assigned time slot width of each reference node is increased to *MesN* times the base slot width to avoid collisions. For example, if the maximum packet size is 50 bytes, the base time slot is 10 ms, and *RefNodes* and *SchSlots* together occupy 80 bytes, then the scheduling message is split into two messages (*MesN* = 2) and the time slot width is increased to 20 ms.
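The splitting arithmetic can be sketched in a few lines of Python (the function name is ours):

```python
import math

def split_params(payload_bytes, max_packet_bytes, slot_width_ms):
    """Compute MesN and the widened slot for an oversized scheduling message."""
    mes_n = math.ceil(payload_bytes / max_packet_bytes)  # number of messages
    return mes_n, mes_n * slot_width_ms                  # widened slot width

print(split_params(80, 50, 10))  # (2, 20), matching the example above
```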

We now demonstrate the suggested technique using the network shown in Figure 2. As illustrated in Figure 3, the sink node, *S*, employs Algorithm 1 to construct a tree. It finds that the reference nodes are *S* (sink node), *B*, *C*, *I*, and *F*, with allocated time slots 0, 1, 2, 3, and 4, respectively (lines 6~21). The sink node then builds the scheduling message as <0xFFFF, *S*, *Sch*, 1, {*S*, *B*, *C*, *I*, *F*}, {0, 1, 2, 3, 4}, 1> and broadcasts it to its neighbor nodes (line 22).

**Algorithm 2**: LPSRS pseudo-code for the scheduling process (all nodes).


**Figure 2.** Sensor network example.

**Figure 3.** The tree created by the LPSRS scheduling process.


Upon receiving the scheduling message, each node uses Algorithm 2 to check the *SeqN* (line 2). Here, *SeqN* = 1, which is greater than the previous *SeqN* (default value 0). Each node therefore examines whether it has been designated as a reference node. If so, it stores its scheduled time slot in *mySchslot* and starts WT (lines 5~20); after the timeout expires, it forwards the received packet (line 22). Node *C*, for example, is designated as a reference node with time slot 2. *C* then determines its waiting time, which equals 2 (2 (*C*'s *mySchslot*) − 0 (*SchslotSrcID*, the sender node's slot)). As soon as the waiting timer expires, *C* delivers the received message to its adjacent nodes.
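Reusing the handler sketch from Section 4.1, this example reproduces node *C*'s computation, assuming the 10 ms slot width from the splitting example above:

```python
# Node C is a reference with slot 2 and first hears the sink S (slot 0, SeqN = 1).
msg = {"SrcID": "S", "SeqN": 1,
       "RefNodes": ["S", "B", "C", "I", "F"],
       "SchSlots": [0, 1, 2, 3, 4]}
c = Node("C")
print(on_scheduling_message(c, msg))  # WTvalue = 2 - 0 = 2 slots, i.e., 20 ms
```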
