Article

Temporal Logic Planning and Receding Horizon Control for Signal Source Localization

School of Automation, Hangzhou Dianzi University, Hangzhou 310018, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(21), 10984; https://doi.org/10.3390/app122110984
Submission received: 4 October 2022 / Revised: 27 October 2022 / Accepted: 27 October 2022 / Published: 30 October 2022
(This article belongs to the Special Issue Mobile Robotics and Autonomous Intelligent Systems)

Abstract

This article addresses signal source localization by a single mobile robot using a receding horizon control approach with temporal logic planning. First, a temporal logic planning approach is proposed such that the task requirements given as temporal logic specifications can be handled effectively, based on a product automaton built in an offline fashion. Second, a particle filter is utilized to predict the source positions, which are used to label the key nodes of the product automaton. Third, on the basis of the product automaton, a receding horizon control approach with temporal logic planning is developed to produce a robot trajectory that satisfies a given linear temporal logic specification. Finally, the effectiveness of the proposed control approach is illustrated for signal source localization.

1. Introduction

In the last decade, signal source localization has received much attention from researchers [1,2,3,4,5,6,7,8] since this problem arises widely in nature and society, for example, in searching for odor sources, rescuing victims, and detecting electromagnetic sources. In order to deal with this problem, several control strategies have been proposed [6,8,9]. For example, in [8], Song et al. used a probability map to label the positions of signal sources and designed a controller to drive the robot toward the signal source. In [6], Lu et al. used particle filters to estimate the position of a signal source and designed the movement behavior of the leader to search for signal clues.
It should be pointed out that these control strategies consist of a decision level and a control level. At the decision level, source positions are estimated by probabilistic methods or particle filters [10], such as the grid probability method [8] and the particle filter [9]. At the control level, given the estimated source position, the corresponding controller steers the robot toward the possible source positions. However, there exist two drawbacks to the control approaches described above. One drawback is that some specifications for signal source localization are not considered. For example, a wide search area is usually divided into Region 1, Region 2, Region 3, Region 4, and so on. In each region, the positions of obstacles and signal sources are different. Correspondingly, a task specification can be described as "The mobile robot is required to visit each region in order and search for signal sources while avoiding obstacles". Such task specifications can be expressed by linear temporal logic (LTL) [11,12,13,14,15]. On the other hand, the LTL specifications can be viewed as constraints on the movement trajectories of the robots. Hence, from the viewpoint of path planning [16,17,18], one needs to find a path that satisfies the LTL specifications. The other drawback is that dynamic events are not efficiently managed, and path constraints cannot be well integrated into the above approaches. Receding horizon control (RHC) [19,20] enables efficient management of dynamic events and integration of trajectory constraints. In the RHC approach, one obtains a control sequence by solving a finite-horizon optimization problem and applies the first control input of this sequence at the current time step; at the next time step, a new control sequence is recomputed using the latest sampled information. However, how to design a control strategy that combines receding horizon control with LTL specifications for signal source localization remains an open question. The main challenge is how to build the cost functions in terms of the LTL specifications. Therefore, facing the above challenges, our aim is to design an RHC approach with temporal logic specifications.
The main contributions of this paper are summarized in the following.
(1)
We design a temporal logic planning approach in which the task of signal source localization is transformed into a path planning task based on the product automaton. One main characteristic is that the product automaton can be built in an offline fashion; the other is that the product automaton can be trimmed in real time based on the positions of the signal sources estimated by particle filters.
(2)
On the basis of the product automaton, an RHC approach with temporal logic planning is proposed such that the robot can locate signal sources while avoiding obstacles. Moreover, the generated movement trajectories meet the given LTL formula.
(3)
We validate the proposed RHC approach with temporal logic planning through simulation and experimental results. The results reveal that our approach controls the robot to find the signal sources in the required order.
The rest of the paper is organized as follows. In Section 2, we introduce the kinematics of the robot and the signal strength model. Then, the main problem of the paper is formulated. In Section 3, we propose a temporal logic planning approach by designing a product automaton whose key nodes are labeled by particle filters. In Section 4, we propose an RHC with temporal logic planning, where the corresponding movement trajectory meets a given LTL formula and guides the robot to locate all signal sources while avoiding obstacles. In Section 5 and Section 6, we use MATLAB simulations and real-world experiments to illustrate the effectiveness of the proposed control method. Finally, we give the conclusions in Section 7.

2. Preliminaries

2.1. Kinematics of Mobile Robots

The Turtlebot robot, shown in Figure 1, is used to locate the signal source, and the corresponding kinematics model is described by
\[
\begin{aligned}
\theta(k+1) &= \theta(k) + \omega(k)\,\Delta t,\\
x(k+1) &= x(k) + \frac{\mu(k)}{\omega(k)}\bigl(\sin\theta(k+1) - \sin\theta(k)\bigr),\\
y(k+1) &= y(k) + \frac{\mu(k)}{\omega(k)}\bigl(\cos\theta(k) - \cos\theta(k+1)\bigr)
\end{aligned}
\tag{1}
\]
where [x(k), y(k)] is the position of the mobile robot; θ(k) is the orientation angle; ω(k) and μ(k) are the angular and linear velocities at time k, respectively; Δt is the sampling period; and the control input is u_k = [μ(k), ω(k)].
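For reference, the discrete-time update (1) can be implemented directly. The following is a minimal sketch in Python, assuming SI units; the function name, the small threshold used to guard against ω(k) ≈ 0, and the straight-line fallback in that case are illustrative choices rather than part of the original model.

```python
import math

def unicycle_step(x, y, theta, mu, omega, dt, eps=1e-6):
    """One step of the discrete-time kinematics (1).

    (x, y): position [m]; theta: heading [rad];
    mu: linear velocity [m/s]; omega: angular velocity [rad/s];
    dt: sampling period [s].
    """
    theta_next = theta + omega * dt
    if abs(omega) > eps:
        # Exact integration of the unicycle over one sampling period.
        x_next = x + (mu / omega) * (math.sin(theta_next) - math.sin(theta))
        y_next = y + (mu / omega) * (math.cos(theta) - math.cos(theta_next))
    else:
        # Straight-line limit when the angular velocity is (near) zero.
        x_next = x + mu * dt * math.cos(theta)
        y_next = y + mu * dt * math.sin(theta)
    return x_next, y_next, theta_next
```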

2.2. Signal Strength Model

The CC2530 module, shown in Figure 2, is utilized as the signal source. The corresponding received signal strength model is given by
\[
\begin{aligned}
Z_k &= 10\bigl(\log_{10} c - \alpha \log_{10} d_k + \log_{10}\Phi(\phi_k)\bigr),\\
d_k &= \sqrt{(x(k)-p)^2 + (y(k)-q)^2},\\
\phi_k &= \operatorname{atan2}\bigl(y(k)-q,\; x(k)-p\bigr) - \theta(k)
\end{aligned}
\tag{2}
\]
where Z_k is the expected signal strength; d_k is the distance between the robot and the signal source at time k; (p, q) is the source position; ϕ_k is the bearing angle; and c and α are parameters whose specific values depend on the hardware used. Φ(ϕ_k) is calculated by
\[
\Phi(\phi_k) =
\begin{cases}
\left(1 - \dfrac{d_a}{d_k}\right)\cos^2(\lambda\phi_k), & \phi_k \in [-\varphi, \varphi],\; d_k > d_a,\\[2mm]
\cos^2(\lambda\varphi), & \text{otherwise}
\end{cases}
\tag{3}
\]
where d_a is the antenna length. The specific values of the parameters φ and λ depend on the directional antenna used.
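As a concrete illustration, the model (2)–(3) can be evaluated as below. This is a minimal sketch in Python; the default parameter values are taken from Table 1, the function names are ours, and the small distance floor is only a numerical guard, not part of the model.

```python
import math

def antenna_gain(phi, d, d_a=0.2, varphi=math.radians(40), lam=2.0):
    """Directional antenna gain Phi(phi_k), Equation (3)."""
    if abs(phi) <= varphi and d > d_a:
        return (1.0 - d_a / d) * math.cos(lam * phi) ** 2
    return math.cos(lam * varphi) ** 2

def expected_rss(x, y, theta, p, q, c=0.001, alpha=1.96):
    """Expected signal strength Z_k, Equation (2), for robot pose
    (x, y, theta) and source position (p, q)."""
    d = max(math.hypot(x - p, y - q), 1e-6)          # numerical guard only
    phi = math.atan2(y - q, x - p) - theta
    phi = math.atan2(math.sin(phi), math.cos(phi))   # wrap to (-pi, pi]
    return 10.0 * (math.log10(c) - alpha * math.log10(d)
                   + math.log10(antenna_gain(phi, d)))
```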

2.3. Problem Formulation

It should be noted that signal sources may be distributed in a wide area such that one usually requires the robot to search for signal sources from one region to another region. For example, the localization task can be described as “The whole area is partitioned into four regions. The robot should orderly search for the signal sources from one region to the other region infinitely often”. In order to model the above localization task with temporal requirements, the linear temporal logic (LTL) formula can be used [14].
The LTL formula ϕ consists of a finite set of atomic propositions O, temporal operators, e.g., G (always) and U (until), and the standard Boolean operators, e.g., ⇒ (implication), ∧ (conjunction), ∨ (disjunction), and ¬ (negation). Based on the LTL formula, the aforementioned search task can be described by
\[
\phi = G\bigl((A \Rightarrow A\,U\,B) \wedge (B \Rightarrow B\,U\,C) \wedge (C \Rightarrow C\,U\,D) \wedge (D \Rightarrow D\,U\,A)\bigr)
\tag{4}
\]
where A , B , C , D denote four searching regions; A U B means that A has to hold at least until B is true; and G Ψ means that Ψ is true at all positions of the infinite sequence. For example, there are two infinite sequences:
\[
\begin{aligned}
v_1 &= A\,A\,B\,B\,B\,B\,C\,C\,C\,D\,D\,D\,D\,A\cdots\\
v_2 &= A\,A\,A\,B\,B\,B\,C\,B\,C\,C\,C\,D\,D\,A\,A\cdots
\end{aligned}
\]
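To make the ordering requirement of (4) concrete, the following minimal sketch checks a finite prefix of a region sequence against the cyclic order A → B → C → D → A that (4) encodes. A full LTL check over infinite words requires the Büchi automaton machinery of Section 3; this prefix check is only an illustration, and the names are ours.

```python
NEXT = {'A': 'B', 'B': 'C', 'C': 'D', 'D': 'A'}

def satisfies_cyclic_order(prefix):
    """Check a finite prefix against the ordering encoded by (4):
    whenever the region changes, it must change to the next region
    in the cycle A -> B -> C -> D -> A."""
    for cur, nxt in zip(prefix, prefix[1:]):
        if cur != nxt and nxt != NEXT[cur]:
            return False
    return True

print(satisfies_cyclic_order("AABBBBCCCDDDDA"))   # True,  consistent with v1
print(satisfies_cyclic_order("AAABBBCBCCCDDAA"))  # False, the C -> B step of v2
```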
It is straightforward to see that sequence v_1 satisfies formula ϕ, whereas sequence v_2 does not. Hence, this paper mainly deals with the following problem.
Problem 1.
For a mobile robot (1) and an LTL formula ϕ that models the search task with temporal requirements, design an RHC controller u(k) that enables the robot to locate the signal sources and avoid obstacles while ensuring that the generated trajectory satisfies ϕ.

3. Temporal Logic Planning

3.1. Product Automaton

The search region is divided into n × n square cells of the same size. Let Γ be the index set of all cells. Each cell is denoted by C_q, where q ∈ Γ. Define a map o such that, for any position p_k ∈ C_q, we have o(p_k) = q, and C_{q_i} ∩ C_{q_j} = Ø for all q_i, q_j ∈ Γ with q_i ≠ q_j [12]. Then, the finite-state transition system of the robot is given as follows.
Definition 1.
A finite-state transition system is a tuple T = (Q_T, Q_0, δ_T, Π_T, ζ_T, ω_T), where Q_T is a finite set of states representing the set of nodes in the transition system, Q_0 is the set of initial states, δ_T ⊆ Q_T × Q_T is the set of transitions representing the transition relationships between states, Π_T is an observation set, ζ_T : Q_T → 2^{Π_T} is the observation map, and ω_T : δ_T → R^+ is a positive weight function whose value represents the cost of a transition between states in Q_T.
It should be pointed out that if q_i →_T q_j, then ω_T(q_i, q_j) = 1; otherwise, ω_T(q_i, q_j) = ∞. An infinite sequence q = q_0 q_1 q_2 q_3 ⋯ generated by the transition system T is accepted if and only if a given formula is satisfied. Moreover, one can use the ltl2ba toolkit to convert the LTL formula into a Büchi automaton.
Definition 2.
A Büchi automaton is a tuple B = (Q_B, Q_{B0}, Π_B, δ_B, F_B), where Q_B is a finite set of states, Q_{B0} ⊆ Q_B is the set of initial states, Π_B is the input alphabet, F_B is the set of accepting states, and δ_B : Q_B × Π_B → 2^{Q_B} is the transition function.
Further, one can construct the product automaton according to the finite-state transition system and the Büchi automaton.
Definition 3.
According to the system T = (Q_T, Q_0, δ_T, Π_T, ζ_T, ω_T) and a Büchi automaton B = (Q_B, Q_{B0}, Π_B, δ_B, F_B), a product automaton P = (Q_P, Q_{P0}, δ_P, ω_P, F_P) can be constructed as T × B, where Q_P = Q_T × Q_B; Q_{P0} = Q_0 × Q_{B0}; δ_P ⊆ Q_P × Q_P is the set of transitions, with [(q_i, s_i), (q_j, s_j)] ∈ δ_P if and only if q_i →_T q_j and s_i →_B s_j under the input ζ_T(q_i), where s_i, s_j ∈ Q_B; ω_P : δ_P → R^+ is the weight function defined by ω_P[(q_i, s_i), (q_j, s_j)] = ω_T(q_i, q_j); and F_P = Q_T × F_B is the set of accepting states.
An infinite sequence p = (q_0, s_0)(q_1, s_1)(q_2, s_2)⋯, where (q_h, s_h) ∈ Q_P and (q_h, s_h) →_P (q_{h+1}, s_{h+1}) for all h ≥ 0, is an accepted run of the product automaton P if and only if it starts from the initial states and intersects F_P infinitely many times. Let the set D(p_0, p_F) consist of all finite trajectories from state p_0 to state p_F. Then, define the set Ω = {p_F | D(p_0, p_F) ≠ Ø, p_0 ∈ Q_{P0}, p_F ∈ F_P}. Since the search region is assumed to be connected, Ω is not an empty set. In addition, we define Δ ⊆ Ω as an acceptable node set such that D(p_{F1}, p_{F2}) ≠ Ø for any p_{F1} ∈ Δ and p_{F2} ∈ Δ.
In order to search for signal sources effectively, any cell in the search area can be regarded as an initial state in the finite-state transition system T. Based on the acceptable node set Δ, the corresponding product automaton can be further simplified. It should be pointed out that the above work can be performed offline, which means that the process of generating the product automaton does not influence the online computational load of the robot.
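For illustration, a minimal sketch of the offline construction in Definition 3 is given below in Python. It assumes that each cell carries a single observation symbol and that T and B are given as plain sets and dictionaries; the function and variable names are ours, and a practical implementation would additionally restrict P to its reachable part.

```python
from itertools import product

def build_product(T_states, T_init, T_trans, T_label,
                  B_states, B_init, B_trans, B_accept):
    """Offline construction of the product automaton P = T x B (Definition 3).

    T_trans : set of pairs (q_i, q_j), the transitions of T
    T_label : dict q -> observation symbol fed to the Buchi automaton
    B_trans : dict (s, observation) -> set of successor Buchi states
    """
    P_states = set(product(T_states, B_states))
    P_init = set(product(T_init, B_init))
    P_trans = set()
    for (q_i, q_j) in T_trans:
        for s_i in B_states:
            for s_j in B_trans.get((s_i, T_label[q_i]), set()):
                # omega_P[(q_i, s_i), (q_j, s_j)] = omega_T(q_i, q_j) = 1
                P_trans.add(((q_i, s_i), (q_j, s_j)))
    P_accept = set(product(T_states, B_accept))
    return P_states, P_init, P_trans, P_accept
```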

3.2. Particle Filter

It should be noted that all trajectories satisfying the LTL formula can be obtained from the product automaton P in an offline fashion for any initial cell. However, the robot is required not only to visit each region in order but also to locate the corresponding signal sources in each region. Hence, the possible positions of the signal sources need to be labeled in the product automaton P. In order to estimate the positions of the signal sources, the particle filter can be used [21].
Since the signal sources are unknown, particles are distributed evenly over the whole search space at the beginning. The number of particles is n, and the position of the ith particle at time k is p_k^i. Through the above signal strength model (2), the corresponding signal strength value of each particle can be calculated. The specific formula is as follows:
\[
y_k^i = Z_k^i + R \cdot \mathrm{randn}
\tag{5}
\]
where Z_k^i is the signal strength that would be received by the robot at time k if the ith particle were the signal source, calculated by (2); R is the observation noise; and randn is a random number drawn from the standard normal distribution.
After obtaining the real observation value Z k , we calculate the weight of each particle as follows:
\[
\omega_k^i = \frac{1}{\sqrt{2\pi R}}\exp\left(-\frac{(y_k^i - Z_k)^2}{2R}\right)
\tag{6}
\]
Then, we normalize the particle weight as:
\[
\hat{\omega}_k^i = \omega_k^i \Big/ \sum_{i=1}^{n} \omega_k^i
\tag{7}
\]
According to the calculated weight, the new particle swarm is obtained by resampling. Low weight particles are sampled with low probability, and high weight particles are sampled with high probability. According to the new particle population, the estimated location of the signal source is given by
\[
\bar{p}_k = \frac{1}{n}\sum_{i=1}^{n} p_k^i
\tag{8}
\]
Steps (5)–(8) are repeated after each new signal strength observation is received. Finally, the estimated position of a signal source converges to the corresponding real position.
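The cycle (5)–(8) can be summarized compactly. The following is a minimal sketch in Python/NumPy that follows the reconstructed weight (6); the function name and the interface of the measurement model are assumptions made for illustration.

```python
import numpy as np

def particle_filter_step(particles, z_k, robot_pose, rss_model, R=5.0):
    """One particle-filter cycle, steps (5)-(8).

    particles  : (n, 2) array of candidate source positions p_k^i
    z_k        : real signal strength observation Z_k
    robot_pose : (x, y, theta) of the robot
    rss_model  : callable (robot_pose, source_pos) -> expected strength Z_k^i
    """
    particles = np.asarray(particles, dtype=float)
    n = len(particles)
    # (5): noisy predicted measurement for every particle
    y = np.array([rss_model(robot_pose, p) for p in particles]) \
        + R * np.random.randn(n)
    # (6)-(7): Gaussian likelihood weights and normalization
    w = np.exp(-(y - z_k) ** 2 / (2.0 * R)) / np.sqrt(2.0 * np.pi * R)
    w /= w.sum()
    # resampling: high-weight particles are drawn with high probability
    particles = particles[np.random.choice(n, size=n, p=w)]
    # (8): source position estimate as the mean of the resampled particles
    return particles, particles.mean(axis=0)
```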
If the search area is divided into four regions according to the LTL formula (4), Δ_A, Δ_B, Δ_C, and Δ_D are employed to represent the node sets of the four regions, where each element of such a set, written Δ_A^j, j = 1, ⋯, is itself a set pairing states of the automaton T with states of the automaton B. It is worth mentioning that the elements of Δ_A, Δ_B, Δ_C, and Δ_D may change over time. Therefore, one can use Δ_A, Δ_B, Δ_C, Δ_D, and the product automaton as a path monitor to guarantee that the generated path satisfies (4) and passes through the acceptable nodes.

4. Receding Horizon Control with Temporal Logic Planning

4.1. Obstacle Avoidance

It is necessary to consider how to control the robot to locate signal sources while avoiding obstacles. If there exist obstacles on the movement path of the robot, then, in order to avoid them, we can assume that there is a virtual robot that moves along the boundary of the rectangular obstacle. According to Lemma 4 of [22], the position and velocity of the virtual robot are described by
\[
\hat{p}_k^j = Z p_k + (I - Z) s_k
\tag{9}
\]
\[
\hat{v}_k^j = Z v_k
\tag{10}
\]
where p_k = [x(k), y(k)]; p̂_k^j is viewed as the position of the jth virtual robot at time k; v̂_k^j can be regarded as the velocity of the jth virtual robot at time k; v_k = [μ(k)cos(θ(k)), μ(k)sin(θ(k))]; s_k is a point on the boundary of the obstacle (cf. [22]); Z = I − a_k a_k^T, where a_k is the unit vector perpendicular to the boundary of the obstacle; and I is an identity matrix.
By establishing a cost function, the actual robot and the virtual robot can maintain a safe distance. Hence, the cost function is given by
\[
F = \sum_{j=1}^{M}\Bigl(\alpha\,\bigl|\,\|p_k - \hat{p}_k^j\| - d\,\bigr| + \beta\,\|v_k - \hat{v}_k^j\|\Bigr)
\tag{11}
\]
where α and β are the weight coefficients; M is the number of virtual robots; ‖·‖ denotes the 2-norm; and d is the safety distance. From (11), one can see that the cost function consists of two terms. The first term, |‖p_k − p̂_k^j‖ − d|, enables the robot to keep a safe distance from the jth obstacle. Through the second term, ‖v_k − v̂_k^j‖, the robot can move along the boundary of the jth obstacle.
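The projection (9)–(10) and the cost (11) can be written down directly. Below is a minimal sketch in Python/NumPy; the default weights and safety distance follow Table 1, and representing each obstacle by a unit boundary normal a_k and a boundary point s_k is our reading of [22].

```python
import numpy as np

def virtual_robot(p, v, a_k, s_k):
    """Position and velocity of a virtual robot, Equations (9)-(10).

    a_k : unit normal of the obstacle boundary; s_k : a point on that boundary."""
    a_k = np.asarray(a_k, dtype=float)
    Z = np.eye(2) - np.outer(a_k, a_k)
    p_hat = Z @ p + (np.eye(2) - Z) @ np.asarray(s_k, dtype=float)
    v_hat = Z @ v
    return p_hat, v_hat

def avoidance_cost(p, v, obstacles, alpha=2.0, beta=1.0, d=0.3):
    """Cost function (11) summed over the M obstacles.

    obstacles : list of (a_k, s_k) pairs, one per obstacle."""
    p = np.asarray(p, dtype=float)
    v = np.asarray(v, dtype=float)
    cost = 0.0
    for a_k, s_k in obstacles:
        p_hat, v_hat = virtual_robot(p, v, a_k, s_k)
        cost += alpha * abs(np.linalg.norm(p - p_hat) - d) \
                + beta * np.linalg.norm(v - v_hat)
    return cost
```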

4.2. Receding Horizon Control with Temporal Logic Planning

According to the above temporal logic planning, we give the basic steps of the proposed control approach.
(1) At k = 0, we initialize the node sets Δ_A, Δ_B, Δ_C, and Δ_D, where Δ_A^1, Δ_A^2, ⋯ ∈ Δ_A; Δ_B^1, Δ_B^2, ⋯ ∈ Δ_B; Δ_C^1, Δ_C^2, ⋯ ∈ Δ_C; and Δ_D^1, Δ_D^2, ⋯ ∈ Δ_D. After that, we can obtain u(0|0), ⋯, u(N−1|0). Correspondingly, an initial position sequence p_0(1), ⋯, p_0(N) is obtained such that the distance between o(p_0(N)) and a labeled node in Δ_A^1 is minimized.
(2) At k > 0, according to the position sequence p_k(1), ⋯, p_k(N), we can obtain a sequence q = q_k(1), ⋯, q_k(N) in T, where o(p_k(j)) = q_k(j) for j = 1, ⋯, N. Based on the product automaton, we can obtain the corresponding sequence p = p_{1|k}, ⋯, p_{N|k}. We then define L(p_k, p_n) = Σ_{l=k}^{n−1} ω_P(p_l, p_{l+1}) and consider the following two cases.
Case 1
L(p_k, p_n) > 0 and L(p_{l|k−1}, p_n) ≠ 0 for all l ∈ {1, ⋯, N}, with p_n ∈ Δ_A^1.
\[
u_k = \arg\min_{u(k)} \sum_{l=1}^{N}\sum_{j=1}^{M}\Bigl(\alpha\,\bigl|\,\|p_k(l) - \hat{p}_k^j(l)\| - d\,\bigr| + \beta\,\|v_k(l) - \hat{v}_k^j(l)\|\Bigr)
\tag{12}
\]
s.t.
(1), L(p_{N|k}, p_n) ≤ L(p_{N|k−1}, p_n), p_k(l) ∈ X, v_k(l) ∈ V, u_k(l−1) ∈ U
where N is the prediction horizon; p_k(l) and p̂_k^j(l) are the lth predicted position and the jth virtual robot's position, respectively; v_k(l) and v̂_k^j(l) are the lth predicted velocity and the jth virtual robot's velocity, respectively; and u(k) = u(0|k), ⋯, u(N−1|k).
Case 2
L(p_k, p_n) > 0 and there exists z ∈ {1, ⋯, N} such that L(p_{z|k−1}, p_n) = 0, with p_n ∈ Δ_A^1.
\[
u_k = \arg\min_{u(k)} \sum_{l=1}^{N}\sum_{j=1}^{M}\Bigl(\alpha\,\bigl|\,\|p_k(l) - \hat{p}_k^j(l)\| - d\,\bigr| + \beta\,\|v_k(l) - \hat{v}_k^j(l)\|\Bigr)
\tag{13}
\]
s.t.
(1), L(p_{z−1|k}, p_n) = 0, p_k(l) ∈ X, v_k(l) ∈ V, u_k(l−1) ∈ U
(3) Repeat step (2) until all elements in Δ_A have been reached. Then, Δ_A is replaced in turn by Δ_B, Δ_C, Δ_D, and Δ.
(4) Repeat steps (2) and (3).
In the following, the proposed control approach is summarized in Algorithm 1, and Theorem 1 describes the corresponding characteristics.
Theorem 1.
Consider the kinematics of the robot (1). Algorithm 1 is feasible for any initial position in the application region under an LTL formula ϕ, in the sense that there exists a movement trajectory along which the robot can avoid obstacles and find the signal sources.
Proof. 
At k = 0, an initial trajectory can easily be generated. By induction, a sequence x = x(1|k−1), ⋯, x(N|k−1) at time k−1 can be attained. Hence, the corresponding sequence in T is q = q_{1|k−1}, ⋯, q_{N|k−1}. According to Definitions 2 and 3, one can obtain p = p_{1|k−1}, ⋯, p_{N|k−1} in the automaton P. At time k, based on x̂(l|k) = x(l+1|k−1), l ∈ {1, ⋯, N−1}, a sequence x̂ = x̂(1|k), ⋯, x̂(N−1|k) can be directly constructed. When the conditions of Case 1 are satisfied, we can choose x̂(N|k) such that L(p_{N|k}, p_n) ≤ L(p_{N|k−1}, p_n). When the conditions of Case 2 are met, we have a sequence x̂ = x̂(1|k), ⋯, x̂(N|k), where x̂(1|k) = x(2|k−1), ⋯, x̂(z−1|k) = x(z|k−1), and x̂(N|k) = x(z|k−1). Therefore, we can conclude that there exist solutions for the proposed control at k = 0, 1, ⋯.
At time k, assume that the current conditions satisfy Case 1. We have L(p_{N|k−ξ}, p_n) ≥ L(p_{N|k}, p_n) for any ξ with 0 ≤ ξ ≤ k. Since the number of nodes in P is finite, there is a time step z such that L(p_{N|k+z}, p_n) = 0. Then, the current conditions meet Case 2, and we have L(p_{k+z}, p_n) = 0. In terms of Δ_A, Δ_B, Δ_C, Δ_D, and Δ, along with the repetition of the above steps, the robot can locate all the signal sources, and the trajectory satisfies the given LTL formula. Moreover, according to (12) and (13), the robot keeps a safe distance so as to avoid obstacles.    □
Algorithm 1 Receding Horizon Control with Temporal Logic Planning (RHC-TLP).
/*Initialization*/
Initialize the particle filter and the parameters of the received signal strength model (2), such as c, α, d_a, φ, and λ;
Initialize the robot’ parameters and give the LTL formula ϕ ;
Build a state transition system and obtain an acceptable set of nodes Δ based on the results of the division;
Initialize Δ_A^1, Δ_A^2 in Δ_A; Δ_B^1, Δ_B^2 in Δ_B; Δ_C^1, Δ_C^2 in Δ_C; and Δ_D^1, Δ_D^2 in Δ_D.
/*Main Body*/
repeat
    Receive the signal strength Z_k at the position p_k;
    Update Δ_A^1, Δ_A^2 in Δ_A; Δ_B^1, Δ_B^2 in Δ_B; Δ_C^1, Δ_C^2 in Δ_C; and Δ_D^1, Δ_D^2 in Δ_D according to the estimated positions of the signal sources;
    if Δ_A^j, Δ_B^j, Δ_C^j, or Δ_D^j (j = 1, ⋯) are updated then
        Select a node in Δ_A^j, Δ_B^j, Δ_C^j, or Δ_D^j;
    else
        The given node in Δ_A^j, Δ_B^j, Δ_C^j, or Δ_D^j is not changed.
    end if
    if Case 1 is satisfied then
        Solve (12) to obtain u_k(0), ⋯, u_k(N−1).
    end if
    if Case 2 is satisfied then

        Solve (13) to obtain u_k(0), ⋯, u_k(N−1).
    end if
    if the given node in Δ_A^1 is reached then
         Δ_A^1 is replaced by Δ_A^2, Δ_A^3, ⋯ in order.
    end if

    if all the given nodes in Δ_A have been reached then
        We replace Δ_A with Δ_B, Δ_C, Δ_D, and Δ in order.
    end if
     u_k(0) is used as the control input at time k;
until p_k ∈ Δ
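To relate Algorithm 1 to the receding horizon principle, the overall online loop can be sketched as follows. This Python skeleton is only a structural sketch: the callables passed in stand for the particle-filter relabeling, the Case 1/Case 2 test, the finite-horizon optimizations (12) and (13), and the kinematics (1); their names and signatures are ours, not part of the paper.

```python
def rhc_tlp_loop(x0, N, update_node_sets, case2_holds,
                 solve_case1, solve_case2, reached_final_set, step):
    """Structural skeleton of the online loop of Algorithm 1 (RHC-TLP).

    update_node_sets  : relabels Delta_A, ..., Delta_D from the particle filter
    case2_holds       : tests the Case 2 condition at the current state/time
    solve_case1/2     : return u(0|k), ..., u(N-1|k) by solving (12) or (13)
    reached_final_set : tests whether p_k is in Delta
    step              : applies the kinematics (1) with one control input
    """
    x, k = x0, 0
    while not reached_final_set(x):
        update_node_sets(x)                # use the latest source estimates
        if case2_holds(x, k):
            u_seq = solve_case2(x, N)      # constraint L(p_{z-1|k}, p_n) = 0
        else:
            u_seq = solve_case1(x, N)      # non-increasing L constraint
        x = step(x, u_seq[0])              # apply only the first input
        k += 1
    return x
```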

4.3. Complexity

From Algorithm 1, one can see that the number of states of the product automaton has an impact on the algorithm complexity. Let |ϕ| be the length of the formula ϕ, counting all symbols and operators. Based on the LTL formula, the Büchi automaton includes at most |ϕ| × 2^{|ϕ|} states [11]. Hence, the size of the product automaton is bounded by |Q_T| × |ϕ| × 2^{|ϕ|}. Moreover, |F_P| is the number of accepting states, and the overall complexity of the offline portion of Algorithm 1 is O(|F_P|³ + |F_P|² + (|ϕ| × 2^{|ϕ|})² × |F_P|) according to [11]. It should be pointed out that, since the product automaton is constructed offline, this computational complexity does not influence the real-time computation of Algorithm 1.
On the other hand, the online complexity of Algorithm 1 depends strongly on the horizon N, the number of obstacles M, and the maximal number of changes of the estimated source positions δ_P. Let the maximal number of transitions at each state of the product automaton be δ_Q. Hence, the complexity of the online portion of Algorithm 1 is bounded by δ_P × (δ_Q)^{MN} and is determined by the graph search algorithm used. Moreover, the computational burden can be reduced by using a more efficient graph search algorithm, so that the optimal trajectory can be found in polynomial rather than exponential time. Further, by dividing the whole region, each region contains fewer states, which reduces the search time.

5. Simulation Results

Algorithm 1 (RHC-TLP) is implemented in MATLAB. The search environment is 10 m × 10 m and is divided into 0.25 m × 0.25 m cells. The robot detects the signal strength at a time interval of 1 s, and the observation noise R is 5. Moreover, the simulation parameters are shown in Table 1. The initial position and velocity are (−4.5 m, 4.5 m) and (0 m/s, 0 m/s), respectively. In order to show the effectiveness of RHC-TLP, the RHC approach in [9] is used as a comparison algorithm, for two main reasons. First, other signal source searching algorithms, including the RHC approach, do not deal with complicated task specifications; without a planning component, such complex tasks are hard to complete. Second, the RHC approach handles the uncertainty in the search process well and can serve as a representative search algorithm. Hence, the RHC approach is utilized as the comparison algorithm.
In the following, we consider two cases. In Case 1, we put a different number of signal sources in different regions. For Region A, the source number is 1 and the position is (−2.5, 2). For Region B, the source number is 1 and the position is (2, 3). For Region C, the source number is 2 and the corresponding positions are (4, −4) and (2, −2). For Region D, the source number is 1 and the position is (−2, −4). In Case 2, we place the same number of signal sources in different regions. The number of signal sources in each region is 2. For Region A, the positions are (−1, 3) and (−2.5, 2). For Region B, the positions are (4, 1) and (1, 4). For Region C, the positions are (4, −2) and (2, −4). For Region D, the positions are (−4, −1) and (−2, −4). For the two cases, the green five-pointed star indicates the real signal sources and the red dot indicates the predicted signal sources. The red circle is the starting position, and the red star is the ending position. The black blocks refer to the obstacles.
From Figure 3, it can be seen that the mobile robot effectively locates all the signal sources and does not collide with the obstacles. Although there are some corner-like obstacles, the mobile robot can avoid them. If there is no signal source in a certain area, the mobile robot reaches the default node and then moves to the next area. For Case 2, shown in Figure 4, as in Case 1, the mobile robot effectively locates all the signal sources and does not collide with any obstacle. Table 2 and Table 3 give the localization time and path length for the two cases.
From Table 2 and Table 3, it can be seen that, as the number of signal sources increases, the localization time and path length obtained by the RHC-TLP approach also increase. In addition, the standard errors show that the fluctuation of the results is relatively small, which indicates that the proposed RHC-TLP approach is relatively stable. Moreover, in contrast to the RHC approach, the proposed RHC-TLP approach achieves a shorter localization time and path length. The main reason is that, owing to the automaton-based planning, the proposed RHC-TLP approach can move directly from one region to the next when searching for the signal sources, whereas the RHC approach must repeatedly adjust its movement direction according to the possible source positions, which results in repeated movements of the robot. Hence, the advantage of the proposed RHC-TLP approach is clearly demonstrated.
From Figure 3 and Figure 4, it can be seen that the obstacles are avoided and all the signal sources are found. At the same time, Figure 5 shows that there is a certain error between the estimated positions and the real positions, and the average localization errors become larger as the number of sources increases.

6. Experimental Results

In what follows, the effectiveness of Algorithm 1 is validated for signal source localization. The real testing environment is shown in Figure 6.
The number of signal sources is 3, and their positions are (0.5, 3), (2.5, 3.5), and (3, 0.5). Table 4 reports the average results and standard errors of the localization time and path length. Figure 7, Figure 8 and Figure 9 show the movement trajectories of the mobile robot and the predicted positions of the signal sources, where the green star denotes the real source positions and the red dot refers to the predicted source positions. The red circle is the starting position, and the black blocks are the obstacles. Hence, RHC-TLP can control the mobile robot to find all of the signal sources. From Table 4, one can see that the localization time and path length increase significantly as the number of obstacles grows from 0 to 2. The main reason is that the robot must avoid obstacles during its movement. Figure 10 shows the average localization errors. One can see that as the number of obstacles increases, the average localization errors also increase.

7. Conclusions

This paper has addressed signal source localization. A temporal logic planning approach has been designed such that a product automaton is built from the temporal logic specifications and is updated based on the signal sources estimated by a particle filter. Then, an RHC with temporal logic planning has been designed, through which the mobile robot can search for signal sources and avoid obstacles, and the corresponding trajectory meets the given LTL formula. Finally, the performance capabilities of the proposed control approach have been verified through simulation and experimental results.

Author Contributions

D.C. and B.G. conducted the simulations and experiments; X.C. developed the methods and wrote the paper; Q.L. reviewed and edited the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant 62073108 and by the Science and Technology Project of Zhejiang Provincial Education under Grant Y202146803.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

References

1. Hu, Y.; Lu, Q.; Hu, Y. Event-based communication and finite-time consensus control of mobile sensor networks for environmental monitoring. Sensors 2018, 18, 2547.
2. Kim, C.-Y.; Song, D.; Xu, Y.; Yi, J.; Wu, X. Cooperative search of multiple unknown transient radio sources using multiple paired mobile robots. IEEE Trans. Robot. 2014, 30, 1161–1173.
3. Leonard, N.E.; Paley, D.; Lekien, F.; Sepulchre, R.; Fratantoni, D.M.; Davis, R. Collective motion, sensor networks and ocean sampling. Proc. IEEE 2007, 95, 48–74.
4. Lu, Q.; Han, Q.-L.; Liu, S. A cooperative control framework for a collective decision on movement behaviors of particles. IEEE Trans. Evol. Comput. 2016, 20, 859–873.
5. Pan, L.; Lu, Q.; Yin, K.; Zhang, B. Signal source localization of multiple robots using an event-triggered communication scheme. Appl. Sci. 2018, 8, 977.
6. Lu, Q.; Han, Q.-L.; Peng, D.; Choi, Y. Decision and event-based fixed-time consensus control for electromagnetic source localization. IEEE Trans. Cybern. 2022, 52, 2186–2199.
7. Ögren, P.; Fiorelli, E.; Leonard, N.E. Cooperative control of mobile sensor networks: Adaptive gradient climbing in a distributed environment. IEEE Trans. Autom. Control 2004, 49, 1292–1302.
8. Song, D.; Kim, C.Y.; Yi, J. Simultaneous localization of multiple unknown and transient radio sources using a mobile robot. IEEE Trans. Robot. 2012, 28, 668–680.
9. Chen, D.; Lu, Q.; Peng, D.; Yin, K.; Zhong, C.; Shi, T. Receding horizon control of mobile robots for locating unknown wireless sensor networks. Assem. Autom. 2019, 39, 445–459.
10. Cortés, J. Distributed kriged Kalman filter for spatial estimation. IEEE Trans. Autom. Control 2009, 54, 2816–2827.
11. Ding, X.; Lazar, M.; Belta, C. LTL receding horizon control for finite deterministic systems. Automatica 2014, 50, 399–408.
12. Wongpiromsarn, T.; Topcu, U.; Murray, R.M. Receding horizon temporal logic planning. IEEE Trans. Autom. Control 2012, 57, 2817–2830.
13. Yordanov, B.; Tumova, J.; Cerna, I.; Barnat, J.; Belta, C. Temporal logic control of discrete-time piecewise affine systems. IEEE Trans. Autom. Control 2012, 57, 1491–1504.
14. Tabuada, P.; Pappas, G.J. Linear time logic control of discrete-time linear systems. IEEE Trans. Autom. Control 2006, 51, 1862–1877.
15. Rasmussen, C.E.; Williams, C.K.I. Gaussian Processes for Machine Learning; MIT Press: Cambridge, MA, USA, 2006.
16. Chen, D.; Lu, Q.; Chen, Y. A Method for Solving Local Minimum Problem of Local Path Planning Based on Particle Swarm Optimization. In Proceedings of the Chinese Automation Congress & China Intelligent Manufacturing International Conference, Jinan, China, 20–22 October 2017; pp. 4944–4949.
17. Chen, Y.; Lu, Q.; Yin, K.; Zhang, B.; Zhong, C. PSO-based receding horizon control of mobile robots for local path planning. In Proceedings of the 43rd Annual Conference of the IEEE Industrial Electronics Society, Beijing, China, 29 October–1 November 2017; pp. 5587–5592.
18. Lu, Q.; Han, Q.-L. Mobile robot networks for environmental monitoring: A cooperative receding horizon temporal logic control approach. IEEE Trans. Cybern. 2019, 49, 698–711.
19. Mayne, D.Q.; Rawlings, J.B.; Rao, C.V.; Scokaert, P.O.M. Constrained model predictive control: Stability and optimality. Automatica 2000, 36, 789–814.
20. Yan, Z.; Wang, J. Nonlinear model predictive control based on collective neurodynamic optimization. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 840–850.
21. Arulampalam, M.S.; Maskell, S.; Gordon, N.; Clapp, T. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans. Signal Process. 2002, 50, 174–188.
22. Olfati-Saber, R. Flocking for multi-agent dynamic systems: Algorithms and theory. IEEE Trans. Autom. Control 2006, 51, 401–420.
Figure 1. The Turtlebot robot.
Figure 2. CC2530 radio frequency module.
Figure 3. The robot's movement path for Case 1.
Figure 4. The robot's movement path for Case 2.
Figure 5. The average localization errors.
Figure 6. The experimental environment.
Figure 7. The robot movement trajectory in the real environment with 3 signal sources and 0 obstacles.
Figure 8. The robot movement trajectory in the real environment with 3 signal sources and 1 obstacle.
Figure 9. The robot movement trajectory in the real environment with 3 signal sources and 2 obstacles.
Figure 10. The average localization errors between the actual location and the estimated location.
Table 1. The simulation parameters.

Parameter | Value
The control parameters α and β | 2, 1
The safety distance d | 0.3 m
The detection radius r | 0.5 m
The prediction horizon N | 6
The RSS model parameters c, α, d_a, φ, and λ | 0.001, 1.96, 0.2, 40°, 2
Table 2. Average (standard error) results for localization time (s) and path length (m) over 30 runs for Case 1.

Algorithm | Localization Time | Path Length
RHC-TLP | 117.74 (1.08) | 29.53 (0.23)
RHC | 259 (4.2) | 37.39 (0.81)
Table 3. Average (standard error) results for localization time (s) and path length (m) over 30 runs for Case 2.

Algorithm | Localization Time | Path Length
RHC-TLP | 141.74 (5.53) | 31.37 (0.58)
RHC | 288.8 (4.5) | 39.88 (0.85)
Table 4. Average (standard error) results for localization time (s) and path length (m) over 10 runs.

Obstacles | Localization Time | Path Length
0 | 77.6 (3.9) | 10.23 (0.13)
1 | 91.6 (4.5) | 10.83 (0.21)
2 | 96.4 (5.2) | 11.05 (0.29)
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
