Article

Opponent-Aware Planning with Admissible Privacy Preserving for UGV Security Patrol under Contested Environment

College of Intelligence Science and Technology, National University of Defense Technology, Changsha 410073, China
* Author to whom correspondence should be addressed.
Electronics 2020, 9(1), 5; https://doi.org/10.3390/electronics9010005
Submission received: 13 October 2019 / Revised: 26 November 2019 / Accepted: 4 December 2019 / Published: 18 December 2019
(This article belongs to the Special Issue AI-Enabled Security and Privacy Mechanisms for IoT)

Abstract
Unmanned ground vehicles (UGVs) have been widely used in security patrol. The existence of two potential opponents, the malicious teammate (cooperative) and the hostile observer (adversarial), highlights the importance of privacy-preserving planning under contested environments. In a cooperative setting, the disclosure of private information to malicious teammates can be restricted. In an adversarial setting, obfuscation can be added to control the observability of the adversarial observer. In this paper, we attempt to generate opponent-aware privacy-preserving plans, focusing on two questions: what is opponent-aware privacy-preserving planning, and how can we generate opponent-aware privacy-preserving plans? We first define the opponent-aware privacy-preserving planning problem, in which the generated plans preserve admissible privacy. Then, we demonstrate how to generate opponent-aware privacy-preserving plans: the search-based planning algorithm is restricted to the public information shared among the cooperators, and the observations of the adversarial observer are purposefully controlled by exploiting decoy goals and diverse paths. Finally, we model the security patrol problem, where the UGV restricts information sharing and attempts to obfuscate its goal. Simulation experiments with privacy leakage analysis and an indoor robot demonstration show the applicability of the proposed approaches.

1. Introduction

With the development of intelligent unmanned system technology, unmanned ground vehicles (UGVs) have become rugged enough for the harshest military use, such as executing monitoring tasks in harsh and complex urban environments [1]. As our world becomes increasingly well-connected, there is a growing need to enable UGVs to cooperate in generating plans for security patrol. For example, iRobot’s PackBot has played a critical role in providing situational awareness for anti-terrorist operations [2].
Several approaches have been proposed in recent years to address the conundrum of privacy preservation by controlling privacy leakage for different requirements under contested environments. One of them is differential privacy [3], which adds appropriate noise to the transmitted state so that the opponent can recover the true state of the transmitted signal only up to a predetermined level of accuracy. Another approach uses cryptography for secure multi-party computation (MPC). In [4,5], the authors encrypt the messages with a public-key homomorphic cryptosystem and apply techniques (e.g., random masking and random permutation) to protect the agents’ privacy, so the encrypted messages can be exchanged among the agents in various ways [6]. A third approach tries to guarantee privacy as a loss of observability [7,8], but it is difficult to achieve strong privacy of this form. All of these approaches focus mainly on privacy leakage from the information perspective, while decision-related privacy preservation (e.g., privacy-preserving planning) has mostly been neglected.
Several recent pieces of research on privacy-preserving planning for multi-agent systems have captured the attention of the planning community [9,10,11]. Privacy-preserving plans are plans that do not actively disclose sensitive private information. In fact, privacy preservation is one of the goals pursued by multi-agent planning and has been a crucial concern for multi-agent systems in several contexts, such as agent negotiation [12], multi-agent reinforcement learning and policy iteration [4,5], deep learning [13], and distributed constraint optimization problems (DCOPs) [14,15,16]. Multi-agent planning (MAP) in cooperative environments aims at generating a sequence of actions to fulfill some specified goals [17]. Most multi-agent systems rely intrinsically on collaboration among agents to accomplish a joint task, and this collaboration depends on the exchange of information among them, so the need to preserve the privacy of that information naturally arises.
Security patrol has been widely studied in the defense and security fields over the past decade. Unmanned ground vehicle patrols have gained increased interest in recent decades, mainly due to their relevance to various security applications [18]. The common mode of urban security patrol is to patrol checkpoints. As illustrated in Figure 1, the UGVs perform the security patrol; one supply center and several predetermined checkpoints are distributed across the patrol environment. The UGVs are required to repeatedly visit some checkpoints to monitor the local area, but they do not share information about the task plan with the supply center. We assume that the adversary can exploit any predictable behavior of the UGVs, which means the adversary has full knowledge of the patrolling task. Since the opponent is collecting information, the objective of privacy-preserving planning is to protect private information in different situations.
Regarding urban security patrol, some checkpoints located around the urban trunk road are high risk. Thus, it is feasible to deploy UGVs to patrol such checkpoints regularly and collect information (e.g., images, video, etc.). Although UGVs with logging and tracking capabilities have become quite ubiquitous in patrols, they remain at risk: a hostile observer can constantly monitor the task execution and gain access to the UGVs’ data and actions. The challenge of privacy preservation arises because much of this information is private and the UGVs are unwilling to share it, which drives us to compute privacy-preserving plans that protect privacy when executed in cooperative and adversarial environments.
In this paper, we address the problem of opponent-aware privacy-preserving planning for security patrol and attempt to answer the following questions: what is opponent-aware privacy-preserving planning, and how can we generate opponent-aware privacy-preserving plans? Our contribution lies in the opponent-aware privacy-preserving planning architecture. In a cooperative setting, the search-based planning method is restricted to the public information shared by the cooperative agents, whereas, in an adversarial setting, the observations of the adversary are purposefully controlled by exploiting decoy goals and diverse paths. Finally, simulation experiments with privacy leakage analysis and an indoor robot demonstration show the applicability of our proposed approaches.
The rest of this paper is organized as follows. In Section 2, related work on privacy, security, and metrics is presented. In Section 3, we decompose the opponent-aware privacy-preserving planning problem into two subproblems from different perspectives. In Section 4, experimental evaluations of plan generation and information leakage analysis are presented. In Section 5, we conclude the paper and point out further directions.

2. Background and Related Work

2.1. Privacy and Security Assumption

Privacy and Security: Many privacy models have been adopted in multi-agent planning according to three different criteria: the information model (imposed privacy [19], induced privacy [20]), the information-sharing scheme (MA-STRIPS [19], subset privacy [21]), and practical privacy guarantees (no privacy [22], weak privacy [23], object cardinality privacy [24], and strong privacy [9]). Privacy can be divided into different categories, such as agent privacy, model privacy, decision privacy, topology privacy, and constraint privacy [4,14]. Here we introduce some widely used types of privacy.
Definition 1
(Agent privacy). No agent should be able to recognize the identity or existence of another agent.
Agent privacy can be achieved by employing anonymous or coded names. For example, an agent may not want its opponents to know its identity or even its existence.
Definition 2
(Model privacy). No agent should be able to recognize the model of another agent, including environmental and algorithmic models which are related to states, actions, observations, transition probability, and rewards.
Model privacy is the key issue in adversarial environments; one agent will not get more information about others except for what has been revealed.
Privacy-preserving planning algorithms can be divided into weak or strong privacy preserving [17], ε-strong privacy preserving [25], and privacy preserving with provable guarantees [26].
Definition 3
(Weak privacy preserving). The agent will not disclose private information of the states, private actions, and private parts of the public actions during the whole run of the algorithm.
In other words, the agent will share only the information in the public part. Even if it is not communicated, the adversary may deduce the existence and values of private variables, preconditions, and effects from the (public) information that is communicated.
Definition 4
(Strong privacy preserving). The adversary can deduce no information about the private variables, preconditions, or effects of the actions beyond the shared public projection of actions and plans.
Privacy essentially concerns a semi-honest adversary who is interested in learning the information. Privacy is equivalent to the concept of unobservability in the control community, and it is closely related to the concept of semantic security from cryptography [27], where secure plans build on the concept of independent inputs [28]. A secure plan is always private; security imposes an additional constraint (all possible goals must result in the same observations) on the privacy problem [29].
Security Assumption: In [30], the authors define the notion of privacy-preserving planning based on secure MPC and provide some proper analysis of privacy leakage in multi-agent planning. Many assumptions specify the properties of the agent, environment, and algorithm in some secure multi-party computation literature [10,28,31].
Assumption 1
(Adversary model). An honest-but-curious adversary is passive: it follows the algorithm and the protocol correctly, but may glean information from the execution and the communicated data to learn private information. A malicious adversary can actively deviate from the protocol specification.
Assumption 2
(Algorithm known). The adversary has access to the algorithm and knows how it works. The agent should not rely on the secrecy of the algorithmic mechanism itself.
Assumption 3
(Input independent). The adversary can rerun the algorithm by setting different goals as real goals to check the variability of the output.
Assumption 4
(FIFO). Only when the actor takes an action and reaches the corresponding state does the adversary receive the corresponding observation, in the order emitted by the plan execution.
As is usual in cryptography, these assumptions do not take the adversary’s recognition model into consideration, which differs from the practice in the artificial intelligence (AI) community.

2.2. Privacy-Preserving Planning

The planning problem of privacy preserving can be modeled as a multi-agent planning (MAP) problem with a privacy-preserving requirement. MAP comes in different types, such as deterministic MAP (DMAP) [19,32], interactive partially observable Markov decision processes (I-POMDPs) [33], and decentralized POMDPs (Dec-POMDPs) [34]. Regarding privacy, there are many synonymous concepts in the recent literature, which all aim at generating obfuscated behavior, such as deception, security, and obfuscation, as shown in Table 1. A secure plan is always private; a deceptive plan is always obfuscating, but may or may not be dissimulating [29]. A simple illustration of different strategies is shown in Figure 2.
In a cooperative environment, many multi-agent planners have been proposed to address privacy-preserving planning problems, such as MAFS (multi-agent forward search) [30], MADLA (multi-agent distributed and local asynchronous) [44], and PSM (planning state machine) [11]. In [11], the author proposed a secure planner for multi-agent planning, but it is impractical because it must compute all possible solutions. In [9], the author introduces Secure-MAFS, a modified version of the multi-agent forward search algorithm [30], which is implemented based on an equivalent macro sending technique [24]. Some planning algorithms with privacy guarantees have been provided in [9], but they are restricted to very special cases.
In an adversarial environment, the adversary implicitly uses the behavioral cues signaled by the actors during plan execution and performs diagnosis of their internal information based on the resulting observations. Recently, there has been some interest in exploring privacy preservation [36], goal obfuscation [28,35], deception [37,38], intention hiding [41], etc. In [35], Kulkarni et al. attempted to make plans with k-ambiguous goals, but these plans were not guaranteed to be secure. In [36], Keren et al. proposed to preserve privacy by keeping the goal ambiguous for as long as possible, but there was only one candidate goal and one partially obfuscated plan. In [38], Masters et al. applied some deceptive strategies for path planning, but these do not support deception when the adversary knows the explicit model. In [28], Kulkarni et al. proposed to securely obfuscate the real goal by making all candidate goals equally likely for as long as possible, but the heuristic deployed makes the planner incomplete. All these studies employ goal or plan recognition modules.

2.3. Information Leakage Metric

Although the key motivation for privacy-preserving planning is preserving privacy, some private information will be leaked during planning, which means it is impossible to achieve complete privacy. If a malicious teammate directly receives any of the private information, or can indirectly deduce it from the communicated public information, privacy is leaked. To evaluate the privacy leakage, we consider the foundations of quantitative information flow [45]. The leakage of private information is based on the uncertainty of the adversary about the input. Here we use the min-entropy (an instance of Rényi entropy [46]) as a measure of the privacy information leakage (PIL):
$PIL = H(\mathcal{H}) - H(\mathcal{H} \mid \mathcal{L}),$
where the initial uncertainty is $H(\mathcal{H})$ and the residual uncertainty is $H(\mathcal{H} \mid \mathcal{L})$, with $\mathcal{H}$ the private input and $\mathcal{L}$ the adversary's observation.
Assuming uniform distributions, we denote the numbers of states consistent with the prior and posterior knowledge as $t_{prio}$ and $t_{post}$; the remaining uncertainty then gives a security guarantee. The expected probability that the adversary can guess $\mathcal{H}$ given $\mathcal{L}$ decreases exponentially with $H(\mathcal{H} \mid \mathcal{L})$: $2^{-H(\mathcal{H} \mid \mathcal{L})} = 2^{-\log t_{post}} = 1/t_{post}$, and we obtain the privacy information leakage:
$PIL = \log t_{prio} - \log t_{post} = \log \frac{t_{prio}}{t_{post}}.$
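Under the uniform-distribution assumption, the leakage reduces to a ratio of counts. The following minimal Python sketch (the function name is illustrative, not from the paper's implementation) computes the PIL from the prior and posterior numbers of states consistent with the adversary's knowledge:

```python
import math

def privacy_information_leakage(t_prio: int, t_post: int) -> float:
    """Min-entropy leakage (in bits) under uniform priors:
    PIL = log2(t_prio) - log2(t_post) = log2(t_prio / t_post)."""
    if not (1 <= t_post <= t_prio):
        raise ValueError("expected 1 <= t_post <= t_prio")
    return math.log2(t_prio) - math.log2(t_post)

# Example: if observing the public messages shrinks the set of consistent
# private states from 1024 to 256, two bits of private information leak.
print(privacy_information_leakage(1024, 256))  # 2.0
```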

3. Methodology

3.1. Opponent-Aware Privacy-Preserving Planning

In privacy-preserving planning (PPP), it is important to acknowledge that two potential opponents are involved: the malicious teammate (cooperator) and the hostile observer (adversary). PPP should produce plans that reveal neither the goal nor the activities of the agents, but no planner can achieve completeness, strong privacy preservation, and efficiency at the same time. It is therefore practical to aim for opponent-aware privacy preservation within bounded privacy information leakage. As illustrated in Figure 3, privacy leakage can occur at the information layer and at the decision-making layer. At the information layer, differential privacy and homomorphic encryption are applicable techniques to protect private information.
In this paper, we mainly focus on the middle layer for decision-making. For task planning in cooperative environments, we need to restrict the information sharing to malicious teammates, and for the path planning in adversarial environments, we need to control the observability of the adversary. Here, we define the opponent-aware privacy-preserving planning problem as follows:
Definition 5
(Opponent-aware privacy-preserving planning). Opponent-aware privacy-preserving planning is a multi-agent planning problem in which privacy is protected to an admissible extent against two types of opponents.
As a result, the generated plans protect privacy from two potential opponents: the malicious teammate and the hostile observer. In a cooperative setting, to cope with malicious teammates, we should restrict the disclosure of private information to malicious teammates. In adversarial settings, real combat scenarios often consist of hostile opponents, so we will add obfuscation to control the observability of the opponents.

3.2. Information Sharing Restricted Task Planning

In cooperative environments, the agents cooperate in concurrently planning and executing their local plans to achieve a joint goal. We model all other agents as a single adversary who can collect the exchanged information to infer more. Information sharing restricted task planning with privacy preservation can be defined as follows [10]:
Definition 6
(Information sharing restricted task planning). For a set of agents $N$, the information sharing restricted task planning problem is a set of agent problems $\mathcal{M} = \{\Pi_i\}_{i=1}^{|N|}$, where for each agent $n_i \in N$ the problem is:
$\Pi_i = \langle V_i = V^{pub} \cup V_i^{priv},\; A_i = A_i^{pub} \cup A_i^{priv},\; I,\; G \rangle,$
where $V_i$ is a set of variables, s.t. each $V \in V_i$ has a finite domain $dom(V)$; if $|dom(V)| = 2$ for all variables, the variables are binary. $V^{pub}$ is the set of public variables common to all agents and $V_i^{priv}$ is the set of variables private to agent $n_i \in N$, s.t. $V^{pub} \cap V_i^{priv} = \emptyset$. $I$ is the initial state and $G$ is the goal.
Each action is defined as a tuple $a = \langle pre(a), eff(a), cost(a) \rangle$, where $pre(a)$ and $eff(a)$ are partial states representing the precondition and effect, respectively, and $cost(a)$ is the cost of action $a$. The state transition can be defined as $\Gamma(s, a) = s \oplus eff(a)$, i.e., the successor state is $s$ updated with the effects of $a$ (a minimal data-structure sketch is given after the list below). We follow the formal treatment of privacy-preserving planning from [10,30]; for each agent $n_i \in N$, the private parts of the problem $\Pi_i$ are:
  • The set of private variables $V_i^{priv}$ and its size $|V_i^{priv}|$, the domains $dom(V)$ and their sizes $|dom(V)|$.
  • The set of private actions $A_i^{priv}$ and its size $|A_i^{priv}|$, and the number and values of variables in $pre(a)$ and $eff(a)$.
  • The private parts of the public actions in $A_i^{pub}$, such as the numbers and values of private variables in $pre(a) \cap V_i^{priv}$ and $eff(a) \cap V_i^{priv}$ for each action $a \in A_i^{pub}$.
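For concreteness, a minimal Python sketch of the problem structure in Definition 6 is given below; the class and field names are illustrative choices, not the paper's implementation.

```python
from dataclasses import dataclass
from typing import Dict, FrozenSet, Tuple

# A partial state maps variable names to values, e.g. {"cg1": True}.
PartialState = Dict[str, bool]

@dataclass(frozen=True)
class Action:
    label: str
    pre: Tuple[Tuple[str, bool], ...]   # precondition as (variable, value) pairs
    eff: Tuple[Tuple[str, bool], ...]   # effect as (variable, value) pairs
    cost: float = 1.0
    public: bool = True                 # private actions are never shared

@dataclass
class AgentProblem:
    """Pi_i = <V_i = V_pub ∪ V_i_priv, A_i = A_i_pub ∪ A_i_priv, I, G>."""
    public_vars: FrozenSet[str]
    private_vars: FrozenSet[str]
    actions: Tuple[Action, ...]
    initial: PartialState
    goal: PartialState

def apply(state: PartialState, a: Action) -> PartialState:
    """State transition Gamma(s, a): s updated with eff(a), provided pre(a) holds."""
    assert all(state.get(v) == val for v, val in a.pre), "precondition violated"
    successor = dict(state)
    successor.update(dict(a.eff))
    return successor
```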

3.2.1. Task Plan Generation

The multi-agent planning problem $\mathcal{M} = \{\Pi_i\}_{i=1}^{|N|}$ can be viewed from different perspectives, called projections. The view of a single agent $n_i$ on the global problem is not only $\Pi_i$; projections of the other agents' problems are available as well. For agent $n_i$, the public projection of an action $a \in A_i^{pub}$ is $a^{\triangleright} = \langle pre(a)^{\triangleright}, eff(a)^{\triangleright} \rangle$, i.e., $pre(a)$ and $eff(a)$ restricted to the public variables, and the public projection of $\Pi_i$ can be represented as follows:
$\Pi_i^{\triangleright} = \langle V^{pub},\; A_i^{\triangleright} = \{a^{\triangleright} \mid a \in A_i^{pub}\},\; I^{\triangleright},\; G \rangle.$
The task planning solution to $\Pi_i$ is a sequence $\pi_i$ of actions from $A_i \cup \bigcup_{j \neq i} A_j^{\triangleright}$ leading to the goal state $G_k = \pi_i(I)$, which means $\Gamma(I, \pi_i) \models G_k$. The public projection of $\pi_i$ is $\pi_i^{\triangleright} = (a_1^{\triangleright}, \ldots, a_k^{\triangleright})$ with all private actions omitted. The global solution of $\mathcal{M}$ is a set of task plans $\{\pi_i\}_{i=1}^{|N|}$, s.t. each $\pi_i$ is a local solution to $\Pi_i$. If $\pi_i^{\triangleright} = \pi_j^{\triangleright}$ for the public actions, we call these local solutions equivalent.
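Building on the Action class sketched after Definition 6, a minimal sketch of the public projection is given below (again illustrative only):

```python
def project_action(a, public_vars):
    """Public projection of an action: keep only public variables in pre(a) and eff(a)."""
    return Action(
        label=a.label,
        pre=tuple((v, val) for v, val in a.pre if v in public_vars),
        eff=tuple((v, val) for v, val in a.eff if v in public_vars),
        cost=a.cost,
        public=True,
    )

def project_plan(plan, public_vars):
    """Public projection of a plan: drop private actions and project the public ones."""
    return [project_action(a, public_vars) for a in plan if a.public]
```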

3.2.2. Privacy Leakage Analysis

We adopt the privacy leakage metric from [47,48]. We denote the number of private variables $p = |V_i^{priv}|$ and the bound on their domain size $d = \max_{V \in V_i^{priv}} |dom(V)|$. The prior information is a tuple:
$I_{prio} = \langle \Pi^{\triangleright}, \pi^{\triangleright}, p, d \rangle.$
The additional information obtained by the adversary is the sequence of messages $\mathcal{N} = (n_1, \ldots, n_k)$ exchanged between the agents. After the information exchange during the planning process, the posterior information available to the adversary is a tuple:
$I_{post} = \langle \Pi^{\triangleright}, \pi^{\triangleright}, p, d, \mathcal{N} \rangle.$
Considering the transition system of $\Pi_i$, we associate the prior information $I_{prio}$ and the posterior information $I_{post}$ with the numbers $\tau(I_{prio})$ and $\tau(I_{post})$ of transition systems consistent with them, which represent the uncertainty of the adversary. The information leakage is then computed as:
$PIL = \log \tau(I_{prio}) - \log \tau(I_{post}) = \log \frac{\tau(I_{prio})}{\tau(I_{post})}.$
The upper bound on the number of possible transition systems is $t_0 = (2^{d^2} - 1)^p$. After classifying the actions into five categories, i.e., initial-applicable (ia), not-initial-applicable (nia), privately-dependent (pd), privately-independent (pi), and privately-nondeterministic (pn) [47], the final information leakage formula is as follows:
$PIL = \log \frac{\prod_{a \in A} \tau_{prio}(a)}{\prod_{a \in A} \tau_{post}(a)}.$
In this paper, we mainly use MAFS algorithms for task planning, and the privacy leakage can be computed as follows: we first reconstruct the search tree, then identify the parent states and applied actions, and classify the actions into five classes (ia, nia, pd, pi, pn). Finally, we compute the information leakage (see Algorithm 1 for details). Here, the privacy leakage computation with sets of actions can be reformulated as a mixed-integer linear program (MILP) problem with disjunctive constraints.
Algorithm 1: Privacy information leakage analysis based on the MAFS algorithm.
Input: M = Π i i = 1 | N | , number p, and size d
Output: privacy information leakage P I L
1 reconstruct the search tree based on the MAFS algorithm [30].
2  identify possible parent states.
3  identify possible applied actions.
4 classify actions into five classes (ia, nia, pd, pi, pn).
5 compute privacy information leakage using the Equation (8).
6 return P I L
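A minimal Python sketch of the flow of Algorithm 1 under the counting model of Section 2.3 is given below. The search-tree reconstruction, the action classification, and the τ counting functions are left as callbacks, since they depend on the concrete MAFS implementation; all names are illustrative.

```python
import math

ACTION_TYPES = ("ia", "nia", "pd", "pi", "pn")

def privacy_leakage_mafs(messages, reconstruct_tree, classify_action,
                         tau_prio, tau_post):
    """Estimate the PIL from the states a MAFS agent sent and received (Algorithm 1).

    reconstruct_tree(messages)            -> list of (parent_state, applied_action)
    classify_action(parent_state, action) -> subset of ACTION_TYPES
    tau_prio(action), tau_post(action, types) -> numbers of transition systems
        consistent with the prior / posterior information for that action.
    """
    # Steps 1-3: rebuild the search tree, identify parent states and applied actions.
    expansions = reconstruct_tree(messages)

    # Step 4: classify every applied action into the five classes.
    classified = [(a, classify_action(s, a)) for s, a in expansions]

    # Step 5: PIL = log( prod tau_prio(a) / prod tau_post(a) ), cf. Equation (8).
    log_prio = sum(math.log2(tau_prio(a)) for a, _ in classified)
    log_post = sum(math.log2(tau_post(a, types)) for a, types in classified)
    return log_prio - log_post
```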
To bound the possible number of transition systems, we construct the following combinatorial optimization problem, which can be solved using the off-the-shelf solver IBM CPLEX [49]:
$\max\; \log \Big( \prod_{a \in A_X} \tau_{post}(a) \Big)$
$\text{s.t.}\;\; \prod_{a \in A_X} \tau_{post}(a) \le t_X,$
where $t_X \le t_0$, the action type $X \in \{ia, nia, pd, pi, pn\}$, and $A_X \subseteq A$.
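In log-space ($x_a = \log \tau_{post}(a)$) the objective and constraints become linear, so a relaxed version of the problem can be sketched with an off-the-shelf LP solver. The snippet below uses SciPy instead of CPLEX purely for illustration, and the action groups and bounds are made-up example values:

```python
import math
import numpy as np
from scipy.optimize import linprog

# Example with 4 public actions; x_a = log2(tau_post(a)) are the decision variables.
groups = {"ia": [0, 1], "pi": [2], "pn": [3]}     # indices of A_X (illustrative)
t_X = {"ia": 2**10, "pi": 2**6, "pn": 2**8}       # per-type bounds, each t_X <= t_0
n_actions = 4

c = -np.ones(n_actions)                           # maximize sum of x_a
A_ub = np.zeros((len(groups), n_actions))
b_ub = np.zeros(len(groups))
for row, (X, idx) in enumerate(groups.items()):
    A_ub[row, idx] = 1.0                          # sum_{a in A_X} x_a <= log2(t_X)
    b_ub[row] = math.log2(t_X[X])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * n_actions, method="highs")
print(-res.fun)                                   # log2 of the maximal product, here 24.0
```

The optimal objective value is the logarithm of the largest product of posterior counts consistent with the per-type bounds.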

3.3. Observability Controlled Path Planning

In adversarial environments, the observed agents try to control the observations of the adversary by obfuscating their goals. Considering the observations available to the adversary in adversarial settings (mission planning, reconnaissance, etc.), privacy immediately follows from settings with partial observability [28,35,40]. The observability controlled path planning problem is to find a path from the start location to the goal on the navigation map (discrete grid, connected graph, or continuous space representation). The discrete path planning problem can be defined as follows:
Definition 7
(Observability controlled path planning). For every agent $n \in N$, the observability controlled path planning problem is a tuple [35]:
$\Phi = \langle D, I, G, P, \Omega, O \rangle$
  • $D = \langle S, A, c \rangle$ is the path planning domain, where $S$ is a non-empty set of location nodes, $A \subseteq S \times S$ is a set of action-related edges, and $c: A \rightarrow \mathbb{R}_0^+$ returns the cost of traversing each edge.
  • $I \in S$ is the start location and $g_r \in G$ is the real goal;
  • $G = \{g_r, g_0, g_1, \ldots\}$ is a set of candidate goals, where $g_r$ is the real goal;
  • $\Omega = \{o_i \mid i = 1, \ldots, m\}$ is a set of $m$ observations that can be emitted as a result of the action taken and the state transition;
  • $O: (A \times S) \rightarrow \Omega$ is a many-to-one observation function which maps the taken action and the next state reached to an observation in $\Omega$.
In adversarial environments, the adversary receives the observation sequence associated with the actions performed by the observed agent. We model this process as a one-sensor model, in which the adversary maintains a belief space according to the observations. Following the definition of belief space from [35], we take the belief space of the adversary into account in path planning, so as to control the observability of the adversary.
Definition 8.
A belief $b_i$ is induced by the observation $o_i$ emitted by action $a_i$ resulting in state $\hat{s}_i$. The belief state and belief update are defined as:
$b_0 = \{\hat{s}_0 \mid O(\langle \cdot, I \rangle) = o_0 \wedge O(\langle \cdot, \hat{s}_0 \rangle) = o_0\},$
$b_{i+1} = update(b_i, o_{i+1}) = \{\hat{s}_{i+1} \mid \exists \hat{a}: \Gamma(\hat{s}_i, \hat{a}) = \hat{s}_{i+1} \wedge \hat{s}_i \in b_i \wedge O(\hat{a}, \hat{s}_{i+1}) = o_{i+1}\}.$
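A minimal sketch of the adversary's belief maintenance in Definition 8, assuming a deterministic transition function and a many-to-one observation function (all names are illustrative):

```python
def initial_belief(states, obs_fn, o0):
    """b_0: all states whose emitted observation matches the first observation o_0."""
    return {s for s in states if obs_fn(None, s) == o0}

def update_belief(belief, o_next, actions, transition_fn, obs_fn):
    """b_{i+1} = { s' | exists a, s in b_i : Gamma(s, a) = s' and O(a, s') = o_{i+1} }.

    transition_fn(s, a) returns the successor state or None if a is inapplicable;
    obs_fn(a, s') returns the observation emitted when reaching s' via a.
    """
    new_belief = set()
    for s in belief:
        for a in actions:
            s_next = transition_fn(s, a)
            if s_next is not None and obs_fn(a, s_next) == o_next:
                new_belief.add(s_next)
    return new_belief
```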

3.3.1. Path Plan Generation

Decoy Goals: Suppose the adversary is aware of the actor's candidate goals but not the real goal. An observability controlled path plan then hides the real goal with decoy goals, where $G_n = \{g_0, \ldots, g_{n-1}\}$ is the set of decoy goals, and the observation sequence should not be biased toward any goal in $G_n$. The objective is to minimize the privacy leakage by keeping the adversary's belief space consistent with the decoy goals.
Definition 9.
An observability controlled path plan with decoy goals:
$\Gamma(I, \pi_k) \models g_r \quad \text{and} \quad \exists G_m \subseteq G: \forall g \in G_m,\; \exists s \in b_n,\; s \models g, \;\; |G_m| = m,$
where $m < n$. For ease of computation, in this paper we set $m = 2$, as in the final stage path generation will only depend on two goals.
Observability Controlled Path: Predictability and obfuscation are a pair of incompatible concepts. With the decoy goals selected, an agent patrolling on an observability controlled plan will start with obfuscated steps, but will adopt predictable steps when finally approaching the goal. An observability controlled path is therefore one whose steps are obfuscated for as long as possible: there is one obfuscated turning point after which all subsequent steps are predictable.
We employ a probabilistic goal recognition model as the adversary's sensor model.
Definition 10.
An obfuscated turning point is the final state in the observation sequence $O_n = \{o_1, \ldots, o_n\}$ such that the posterior probability of the real goal does not exceed that of any selected decoy goal; otherwise, the point is predictable to the adversary:
$P(g_r \mid O_n) \le P(g \mid O_n), \quad \forall g \in G_n \setminus \{g_r\}.$
Definition 11.
A last obfuscated turning point is the last state $\pi_i$ of a given path $\pi$ for which all subsequent states $\pi_j$, $j \in \{i+1, \ldots, |\pi|\}$, are predictable to the adversary.
Here, we mainly focus on the last obfuscated turning point. The observability controlled path plan covers two parts. As shown in Figure 4, the first part is an obfuscated path from the start point to the last obfuscated turning point (LOTP), and the second part is a predictable path from the LOTP to the real goal. We can obtain a strongly goal-obfuscated path $\pi$ with continually obfuscated steps up to the LOTP. Using the cost-difference-based probabilistic goal recognition model introduced in [50], we can locate the LOTP after selecting the decoy goals:
$optc(LOTP, g_r) \le \frac{optc(g_r, g_d) + optc(s, g_r) - optc(s, g_d)}{2},$
where $g_d$ is the selected decoy goal and $optc(a, b)$ is the optimal cost from state $a$ to state $b$. If we adopt discrete grid or graph-based discrete domain representations for path planning, we approximate the LOTP by the closest state.
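On a grid or graph domain, the optimal costs in the inequality above can be obtained from shortest-path queries, so the LOTP can be approximated along a candidate path as sketched below (networkx is used only for illustration, and the decoy goal g_d is assumed to have been selected already):

```python
import networkx as nx

def optc(G, a, b):
    """Optimal traversal cost between two nodes (edges carry a 'weight' attribute)."""
    return nx.shortest_path_length(G, a, b, weight="weight")

def last_obfuscated_turning_point(G, path, start, g_r, g_d):
    """Return the first node along `path` that satisfies the LOTP bound
    optc(n, g_r) <= (optc(g_r, g_d) + optc(start, g_r) - optc(start, g_d)) / 2;
    steps taken after this node are predictable to the adversary."""
    bound = (optc(G, g_r, g_d) + optc(G, start, g_r) - optc(G, start, g_d)) / 2.0
    for node in path:
        if optc(G, node, g_r) <= bound:
            return node
    return start  # fall back to the start if no node on the path meets the bound
```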
Diverse Path: When the adversary knows the observed agent's goal, we need diverse paths in order to control the adversary's observability. We can compute the diversity between all pairs of plans using one of the plan distance metrics given in Appendix A.1. Two plans are a $\delta$-distant pair with respect to distance metric $d$ if $d(p_1, p_2) = \delta$. A path plan set ($PPS$) induced by plan $p$ starting at $I$ is minimally $\delta$-distant if $\delta = \min_{p_1, p_2 \in PPS} d(p_1, p_2)$.
Definition 12.
A plan $\pi_k$ is a $k$-diverse path plan ($k \ge 2$) if:
$d_{min}(PPS(I, \pi_k)) \ge \delta \quad \text{and} \quad |PPS(I, \pi_k)| \ge k.$
As a result, if the adversary does not know the real goal, the first part of the path is generated by planning toward two decoy goals; after obtaining the LOTP, we can compute the whole path plan. If the adversary does know the real goal, we generate diverse path plans instead. The details of observability controlled path plan generation are given in Algorithm 2.
Algorithm 2: An observability controlled path plan generation algorithm.
(The pseudocode of Algorithm 2 is provided as a figure in the original article.)
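Since the pseudocode appears only as a figure, the following Python sketch reconstructs the high-level flow described in the text: select a decoy goal, take obfuscated steps toward it up to the LOTP, then follow a predictable path to the real goal, or fall back to k-diverse paths when the adversary already knows the goal. It reuses optc and last_obfuscated_turning_point from the previous sketch, and it is an interpretation under stated assumptions, not the authors' implementation.

```python
import networkx as nx

def k_diverse_paths(G, start, goal, k=2, delta=2):
    """Greedily pick k paths that are pairwise at least `delta` apart under a
    simple edge-set distance (a stand-in for the metrics in Appendix A.1)."""
    chosen = []
    for path in nx.shortest_simple_paths(G, start, goal, weight="weight"):
        edges = set(zip(path, path[1:]))
        if all(len(edges ^ set(zip(p, p[1:]))) >= delta for p in chosen):
            chosen.append(path)
        if len(chosen) == k:
            break
    return chosen

def observability_controlled_path(G, start, g_r, candidate_goals,
                                  adversary_knows_goal, k=2, delta=2):
    """High-level flow of Algorithm 2 as described in Section 3.3.1 (returns a list of paths)."""
    if adversary_knows_goal:
        # The real goal is known: return k mutually diverse paths instead.
        return k_diverse_paths(G, start, g_r, k=k, delta=delta)

    # 1. Select a decoy goal (here: the candidate whose cost from the start is
    #    closest to that of the real goal -- an illustrative heuristic).
    decoys = [g for g in candidate_goals if g != g_r]
    g_d = min(decoys, key=lambda g: abs(optc(G, start, g) - optc(G, start, g_r)))

    # 2. Obfuscated segment: move toward the decoy until the last obfuscated
    #    turning point (LOTP) given by the inequality in Section 3.3.1.
    toward_decoy = nx.shortest_path(G, start, g_d, weight="weight")
    lotp = last_obfuscated_turning_point(G, toward_decoy, start, g_r, g_d)
    first_leg = toward_decoy[:toward_decoy.index(lotp) + 1]

    # 3. Predictable segment: optimal path from the LOTP to the real goal.
    second_leg = nx.shortest_path(G, lotp, g_r, weight="weight")
    return [first_leg + second_leg[1:]]
```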

3.3.2. Privacy Leakage Analysis

Planning with obfuscated goals involves preserving privacy with minimized information leakage. Under the requirement of privacy preservation, the observed agent will deliberately choose misleading actions to obfuscate the goal. We can quantify the information leakage of the states and actions as follows:
Definition 13
(S-PI). The privacy information leakage based state privacy information metric is defined as:
$I_{SPI}(s_j) = H\left(\max_{g_i \in G \setminus \{g_r\}} \frac{P(g_i \mid s_j)}{P(g_r \mid s_j)}\right) = -\log\left(\max_{g_i \in G \setminus \{g_r\}} \frac{P(g_i \mid s_j)}{P(g_r \mid s_j)}\right),$
where $H(\cdot)$ denotes self-information.
Definition 14
(A-PI). For $a_i \in E(s)$ and $a_j \in E'(s)$, where $E(s)$ is the set of actions applicable in $s$ and $E'(s) = E(s) \setminus \{a_i\}$, the information leakage based action privacy information metric is defined as:
$I_{API}(a_i) = \frac{\sum_{a_j \in E'(s)} I_{SPI}(\Gamma(s, a_j))}{|E'(s)|}.$
Using the action privacy information metric $I_{API}(a)$ as an additional action cost, we can analyze the privacy leakage of the observability controlled path plan.
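A minimal sketch of the state and action privacy information metrics (Equations (13) and (14)), under our interpretation that A-PI averages the S-PI of the successor states reached by the other applicable actions; the posterior probabilities are assumed to come from the adversary's goal recognition model:

```python
import math

def state_privacy_information(posteriors, g_r):
    """S-PI: self-information of the best decoy-to-real-goal posterior ratio.
    `posteriors` maps every candidate goal g to P(g | s_j)."""
    ratio = max(p for g, p in posteriors.items() if g != g_r) / posteriors[g_r]
    return -math.log2(ratio)

def action_privacy_information(s, a_i, applicable, transition_fn, posteriors_fn, g_r):
    """A-PI: average S-PI over the successors of the other applicable actions E'(s)."""
    others = [a for a in applicable(s) if a != a_i]
    if not others:
        return 0.0
    total = sum(state_privacy_information(posteriors_fn(transition_fn(s, a)), g_r)
                for a in others)
    return total / len(others)
```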

4. Experiments

In this section, experiments on opponent-aware privacy-preserving planning are presented. All the experiments were executed on an Alienware machine running Ubuntu 16.04 with 4 CPU cores and 8 GB of RAM. We used the MAFS algorithm [30] for information sharing restricted task plan generation. The algorithms for privacy leakage analysis and observability controlled path planning were implemented in Python.

4.1. Plan Generation and Privacy Leakage Analysis

Here, we first generate task plans for a robot in the urban security patrol scenario. Then we present three different goal configuration scenarios for path planning. In addition, we analyze the privacy leakage of the task plan and the path plan. Finally, we present an indoor robot demonstration using the TurtleBot3 Burger [51].

4.1.1. Task Plan Generation and Privacy Leakage Analysis

As shown in Figure 1, we now define some variables for the security patrol scenario. The simplified security patrol scenario can be modeled as an interaction between four agents: two UGVs, one supply center (the potentially malicious teammate), and the hostile observer (the adversary).
Variables Definition: For task planning in a cooperative environment, $N = \{UGV1, UGV2, SC\}$. After patrolling any candidate checkpoint in zone 1, the UGV returns to the supply center to recharge and transmit the collected data, and the task is complete after the two zones have been patrolled.
As shown in Table 2, we set binary variables with T/F values. In the initial state, the supply center has enough supplies and the UGVs are charged; in the goal state, the UGVs' tasks are complete. The following set of variables can be used to describe the task planning problem.
Information Sharing Restricted Task Plan: Here, we simply assign UGV1 to zone 1 and UGV2 to zone 2. Each UGV chooses two checkpoints to patrol (e.g., checkpoints 1 and 3). The actions in $A_{UGV1}$ and $A_{SC}$ are formulated as shown in Table 3, and the action descriptions for the UGVs and the supply center are given in Figure 5.
In the following, we compute the task plan of UGV1 for zone 1 security patrol. The public projection of the actions and the related transition system are shown in Figure 6, and the projection results of the public actions are shown in Table 4. The actions $PC1, PC3 \in A_{UGV1}$ and $RC, RR \in A_{SC}$ have equal public projections, so we denote them simply as $PC$ and $R$, respectively.
We chose the MAFS and Secure-MAFS algorithms for task plan generation. The solution of UGV1 for the security patrol scenario is $\pi_{UGV1} = \{R, PC, R, PC, TC\}$, which is public to the supply center.
Privacy Leakage Analysis: The complete transition system is shown in Figure 7. In MAFS, if a state of the UGV is expanded using a public action, the resulting public projection of the state is sent to the supply center. We analyzed the privacy leakage based on the states sent and received by the UGV.
An upper bound on the number of transition systems of the UGV is $t_0 = 15^p$, where $p = |V_{UGV1}^{priv}|$. After classifying the action types, $PC$ belongs to $\{ia, pi, pn\}$ and $TC$ belongs to $\{pd\}$: $\tau_{PC}^{ia} = 12^p$, $\tau_{PC}^{pi} = 15^p - 6^p$, $\tau_{PC}^{pn} = 15^p - 8^p$, $\tau_{PC}^{ia \times pi} = 12^p - 3^p$, $\tau_{PC}^{ia \times pn} = 12^p - 6^p$, $\tau_{TC}^{ia \times pi} = 15^p - 3^p$. Using Algorithm 1 with Equation (8), we compute the privacy information leakage for UGV1: $PIL = \log \tau(I_{prio}) - \log \tau(I_{post}) = 10.4 - 9.7 \approx 0.7$.

4.1.2. Path Plan Generation and Privacy Leakage Analysis

Observability Controlled Path Plan: As shown in Figure 8, we used a 13 × 13 discrete-grid simulation environment with different checkpoint configurations (line, circular, and triangular) for the experimental evaluation. We simply set $m = 2$ and $k = 2$; the UGV patrolled one checkpoint through an observability controlled path and chose one diverse path back to the supply center. For any checkpoint, after choosing the candidate decoy checkpoints, we used Algorithm 2 to generate an observability controlled path.
Privacy Leakage Analysis: Following the “single-observation” cost-difference-based probabilistic goal recognition model from [50], we pre-computed the cost difference for each state offline to calculate the likelihood that each goal is the selected checkpoint. As shown in Figure 9, we created heatmaps for the discrete-grid domain showing the posterior probability of each goal at each state. Armed with these heatmaps, we used the state/action privacy information metrics (Equations (13) and (14)) for privacy leakage analysis. The privacy leakage of the paths to each checkpoint under the different configurations is reported in Table 5.
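A minimal sketch of how such posterior heatmaps can be precomputed on a grid graph, using the cost-difference idea from [50]; the Boltzmann-style posterior and the rationality parameter beta are our own illustrative choices rather than the exact model of [50]:

```python
import numpy as np
import networkx as nx

def goal_posterior_heatmaps(G, start, goals, beta=1.0):
    """For every node n and goal g, compute P(g | n) from the cost difference
    costdif(n, g) = optc(n, g) - optc(start, g), normalized over all goals.
    Assumes an undirected grid graph, so optc(g, n) = optc(n, g)."""
    cost_to = {g: nx.single_source_dijkstra_path_length(G, g, weight="weight")
               for g in goals}
    cost_from_start = {g: cost_to[g][start] for g in goals}

    heatmaps = {g: {} for g in goals}
    for n in G.nodes:
        scores = np.array([np.exp(-beta * (cost_to[g][n] - cost_from_start[g]))
                           for g in goals])
        probs = scores / scores.sum()
        for g, p in zip(goals, probs):
            heatmaps[g][n] = p
    return heatmaps
```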

4.2. Indoor Robot Demonstration

To simulate the security patrol scenario with an internal robot and an external human observer, we used the TurtleBot3 Burger for an indoor robot demonstration. The TurtleBot3 Burger is a mobile robot platform built on ROS (Robot Operating System); Table 6 shows its configuration. As shown in Figure 10, the TurtleBot3 Burger contains several modules; we designed the ROS nodes for the software framework and built an experimental scene with four checkpoints. The robot's initial position is in the middle of the scene.
As shown in Figure 11, after generating the information sharing restricted task plan, the robot generates an observability controlled path plan for checkpoint patrol. For each checkpoint, the robot follows the generated path to visit it. The trajectories of the robot and the objects in the scene were visualized through RVIZ, and the environment map was built with the LDS-01 lidar.

5. Conclusions and Future Work

In this paper, the opponent-aware privacy-preserving planning problem in a contested environment is addressed and two questions are answered. Motivated by the growing interest in privacy preservation in planning, we first define opponent-aware privacy-preserving planning. Then, we present approaches for information sharing restricted task plan generation and observability controlled path plan generation. The final experiments with privacy leakage analysis and the indoor robot demonstration show the applicability of the proposed approaches to generating such plans. Much research has modeled the interaction between patrol UGVs and the adversary with Stackelberg or stochastic games, in which agents pursue utility maximization. Additionally, many robust and online goal recognition approaches have been proposed, such as the self-modulating model for rational and irrational agents proposed in [38]. In the future, we will use a stochastic game model with active adversaries to model this problem.

Author Contributions

J.L. and W.Z. proposed the method; X.G., W.G., and Z.L. designed and performed the experiments; J.L. and X.J. analyzed the experimental data and wrote the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grants No. 61702528, No. 61603406.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Abbreviations

The following abbreviations are used in this manuscript:
UGV: Unmanned Ground Vehicle
MAP: Multi-Agent Planning
PIL: Privacy Information Leakage
DMAP: Deterministic MAP
I-POMDPs: Interactive POMDPs
Dec-POMDPs: Decentralized POMDPs
MPC: Multi-Party Computation
DCOP: Distributed Constraint Optimization Problem
MAFS: Multi-Agent Forward Search
MADLA: Multi-Agent Distributed and Local Asynchronous
MILP: Mixed-Integer Linear Program
PPS: Path Plan Set
LOTP: Last Obfuscated Turning Point
ROS: Robot Operating System

Appendix A. Metrics

Appendix A.1. Plan Distance Metrics

We leverage three alternatives to measure the plan distance and one privacy leakage metric to quantify the information leakage. Three kinds of plan distance metrics have been introduced in [35,52,53,54]—namely, action distance, causal link distance, and state sequence distance.
Definition A1
(Action distance). The set of unique actions in a plan $\pi$ is $A(\pi) = \{a \mid a \in \pi\}$. Given the action sets $A(p_1)$ and $A(p_2)$ of two plans $p_1$ and $p_2$, respectively, the action distance is defined as:
$d_A(p_1, p_2) = 1 - \frac{|A(p_1) \cap A(p_2)|}{|A(p_1) \cup A(p_2)|}.$
Definition A2
(Causal link distance). $\langle a_i, p_i, a_{i+1} \rangle$ is the tuple form of a causal link, where the predicate $p_i$ is produced as an effect of action $a_i$ and used as a precondition of $a_{i+1}$. The causal link distance for the causal link sets $C(p_1)$ and $C(p_2)$ of plans $p_1$ and $p_2$ is defined as:
$d_C(p_1, p_2) = 1 - \frac{|C(p_1) \cap C(p_2)|}{|C(p_1) \cup C(p_2)|}.$
Definition A3
(State sequence distance). Given two state sequences $S(p_1) = (s_0^{p_1}, \ldots, s_n^{p_1})$ and $S(p_2) = (s_0^{p_2}, \ldots, s_{n'}^{p_2})$ for $p_1$ and $p_2$, respectively, where $n \le n'$ are the lengths of the plans and $s_k^{p_1}$ is overloaded to denote the set of variables in state $s_k$ of plan $p_1$, the state sequence distance is defined as:
$d_S(p_1, p_2) = \frac{1}{n'} \sum_{k=1}^{n} \left(1 - \frac{|s_k^{p_1} \cap s_k^{p_2}|}{|s_k^{p_1} \cup s_k^{p_2}|}\right) + \frac{n' - n}{n'}.$
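A minimal Python sketch of the three plan distance metrics, treating plans as sequences of hashable actions and states as sets of variable assignments (function names are illustrative):

```python
def jaccard_distance(x, y):
    """1 - |x ∩ y| / |x ∪ y| for two sets; 0 if both are empty."""
    union = x | y
    return 1.0 - len(x & y) / len(union) if union else 0.0

def action_distance(p1, p2):
    """d_A over the sets of unique actions of two plans."""
    return jaccard_distance(set(p1), set(p2))

def causal_link_distance(links1, links2):
    """d_C over the causal-link sets of tuples (a_i, predicate, a_{i+1})."""
    return jaccard_distance(set(links1), set(links2))

def state_sequence_distance(seq1, seq2):
    """d_S: average per-step state distance plus a penalty for the length gap."""
    if len(seq1) > len(seq2):
        seq1, seq2 = seq2, seq1                    # ensure n <= n'
    n, n_prime = len(seq1), len(seq2)
    per_step = sum(jaccard_distance(set(s1), set(s2)) for s1, s2 in zip(seq1, seq2))
    return per_step / n_prime + (n_prime - n) / n_prime
```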

References

  1. Liu, Y.; Liu, Z.; Shi, J.; Wu, G.; Chen, C. Optimization of Base Location and Patrol Routes for Unmanned Aerial Vehicles in Border Intelligence, Surveillance, and Reconnaissance. J. Adv. Transp. 2019, 2019, 9063232. [Google Scholar] [CrossRef] [Green Version]
  2. Bell, R.A. Unmanned ground vehicles and EO-IR sensors for border patrol. In Optics and Photonics in Global Homeland Security III; International Society for Optics and Photonics: Bellingham, WA, USA, 2007; Volume 6540, p. 65400B. [Google Scholar]
  3. Dwork, C.; Roth, A. The Algorithmic Foundations of Differential Privacy. Found. Trends Theor. Comput. Sci. 2013, 9, 211–407. [Google Scholar] [CrossRef]
  4. Wu, F.; Zilberstein, S.; Chen, X. Privacy-Preserving Policy Iteration for Decentralized POMDPs. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018. [Google Scholar]
  5. Sakuma, J.; Kobayashi, S.; Wright, R.N. Privacy-preserving reinforcement learning. In Proceedings of the 25th International Conference on Machine Learning, Helsinki, Finland, 5–9 July 2008. [Google Scholar] [CrossRef] [Green Version]
  6. Liu, Q.; Ren, X.; Mo, Y. Secure and privacy preserving average consensus. In Proceedings of the 2017 Workshop on Cyber-Physical Systems Security and PrivaCy, Dallas, TX, USA, 3 November 2017. [Google Scholar] [CrossRef]
  7. Alaeddini, A.; Morgansen, K.; Mesbahi, M. Adaptive communication networks with privacy guarantees. In Proceedings of the American Control Conference, Seattle, WA, USA, 24–26 May 2017. [Google Scholar] [CrossRef] [Green Version]
  8. Pequito, S.; Kar, S.; Sundaram, S.; Aguiar, A.P. Design of communication networks for distributed computation with privacy guarantees. In Proceedings of the IEEE Conference on Decision and Control, Los Angeles, CA, USA, 15–17 December 2014. [Google Scholar] [CrossRef]
  9. Brafman, R.I. A privacy preserving algorithm for multi-agent planning and search. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), Buenos Aires, Argentina, 25–31 July 2015. [Google Scholar]
  10. Štolba, M. Reveal or Hide: Information Sharing in Multi-Agent Planning. Ph.D. Thesis, Czech Technical University in Prague, Prague, Czech Republic, 2017. [Google Scholar]
  11. Tožička, J. Multi-Agent Planning by Plan Set Intersection. Ph.D. Thesis, Czech Technical University in Prague, Prague, Czech Republic, 2017. [Google Scholar]
  12. Zhang, S.; Makedon, F. Privacy preserving learning in negotiation. In Proceedings of the Symposium on Applied Computing, Santa Fe, NM, USA, 13–17 March 2005; pp. 821–825. [Google Scholar]
  13. Shokri, R.; Shmatikov, V. Privacy-preserving deep learning. In Proceedings of the 53rd Annual Allerton Conference on Communication, Control, and Computing, Allerton, IL, USA, 30 September 2015–2 October 2015. [Google Scholar] [CrossRef]
  14. Léauté, T.; Faltings, B. Protecting privacy through distributed computation in multi-agent decision making. J. Artif. Intell. Res. 2013, 47, 649–695. [Google Scholar] [CrossRef]
  15. Grinshpoun, T. A Privacy-Preserving Algorithm for Distributed Constraint Optimization. In Proceedings of the 13th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2014), Paris, France, 5–9 May 2014. [Google Scholar]
  16. Tassa, T.; Zivan, R.; Grinshpoun, T. Max-sum goes private. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), Buenos Aires, Argentina, 25–31 July 2015. [Google Scholar]
  17. Štolba, M.; Tožička, J.; Komenda, A. Secure Multi-Agent Planning. In Proceedings of the 1st International Workshop on AI for Privacy and Security - PrAISe ’16, The Hague, The Netherlands, 29–30 August 2016; pp. 1–8. [Google Scholar] [CrossRef]
  18. Agmon, N.; Kaminka, G.A.; Kraus, S. Multi-Robot Adversarial Patrolling: Facing a Full-Knowledge Opponent. J. Artif. Intell. Res. 2014, 42, 887–916. [Google Scholar]
  19. Brafman, R.I.; Domshlak, C. From One to Many: Planning for Loosely Coupled Multi-Agent Systems. In Proceedings of the International Conference on Automated Planning and Scheduling (ICAPS), Sydney, Australia, 14–18 September 2008. [Google Scholar]
  20. Torreño, A.; Onaindia, E.; Sapena, Ó. FMAP: Distributed cooperative multi-agent planning. Appl. Intell. 2014, 41, 606–626. [Google Scholar] [CrossRef] [Green Version]
  21. Bonisoli, A.; Gerevini, A.E.; Saetti, A.; Serina, I. A privacy-preserving model for the multi-agent propositional planning problem. In Proceedings of the 2nd ICAPS Distributed and Multi-Agent Planning workshop (ICAPS DMAP-2014), Portsmouth, NH, USA, 22 June 2014. [Google Scholar] [CrossRef]
  22. Decker, K.S.; Lesser, V.R. Generalizing the partial global planning algorithm. Int. J. Intell. Coop. Inf. Syst. 1992, 1, 319–346. [Google Scholar] [CrossRef]
  23. Borrajo, D. Multi-Agent Planning by Plan Reuse. In Proceedings of the 2013 International Conference on Autonomous Agents and Multi-Agent Systems, St. Paul, MN, USA, 6–10 May 2013. [Google Scholar]
  24. Maliah, S.; Shani, G.; Stern, R. Stronger Privacy Preserving Projections for Multi-Agent Planning. In Proceedings of the International Conference on Automated Planning and Scheduling (ICAPS), London, UK, 12–17 June 2016. [Google Scholar]
  25. Komenda, A.; Tožička, J.; Štolba, M. ϵ-strong privacy preserving multi-agent planning. In Lecture Notes in Computer Science; including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics; Springer Science + Business Media: Berlin, Germany, 2018. [Google Scholar] [CrossRef]
  26. Beimel, A.; Brafman, R.I. Privacy Preserving Multi-Agent Planning with Provable Guarantees. arXiv 2018, arXiv:1810.13354. [Google Scholar]
  27. Goldreich, O. Foundations of Cryptography; Cambridge University Press: Cambridge, UK, 2010. [Google Scholar] [CrossRef]
  28. Kulkarni, A.; Klenk, M.; Rane, S.; Soroush, H. Resource Bounded Secure Goal Obfuscation. In Proceedings of the AAAI Fall Symposium on Integrating Planning, Diagnosis and Causal Reasoning, Arlington, VA, USA, 18–20 October 2018. [Google Scholar]
  29. Chakraborti, T.; Kulkarni, A.; Sreedharan, S.; Smith, D.E.; Kambhampati, S. Explicability? legibility? predictability? transparency? privacy? security? the emerging landscape of interpretable agent behavior. In Proceedings of the International Conference on Automated Planning and Scheduling (ICAPS), Berkeley, CA, USA, 11–15 July 2019. [Google Scholar]
  30. Nissim, R.; Brafman, R. Distributed heuristic forward search for multi-agent planning. J. Artif. Intell. Res. 2014, 51, 293–332. [Google Scholar] [CrossRef] [Green Version]
  31. Lindell, Y.; Pinkas, B. Secure Multiparty Computation for Privacy-Preserving Data Mining. J. Priv. Confid. 2018. [Google Scholar] [CrossRef]
  32. Torreño, A.; Onaindia, E.; Komenda, A.; Štolba, M. Cooperative multi-agent planning: A survey. ACM Comput. Surv. (CSUR) 2018, 50, 84. [Google Scholar] [CrossRef]
  33. Panella, A.; Gmytrasiewicz, P. Bayesian learning of other agents’ finite controllers for interactive POMDPs. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016. [Google Scholar]
  34. Oliehoek, F.A.; Amato, C. A Concise Introduction to Decentralized POMDPs; Springer International Publishing: Cham, Switzerland, 2016; Volume 1. [Google Scholar]
  35. Kulkarni, A.; Srivastava, S.; Kambhampati, S. A unified framework for planning in adversarial and cooperative environments. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019. [Google Scholar]
  36. Keren, S.; Gal, A.; Karpas, E. Privacy preserving plans in partially observable environments. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), New York, NY, USA, 9–15 July 2016. [Google Scholar]
  37. Masters, P.; Sardina, S. Deceptive path-planning. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), Melbourne, Australia, 19–25 August 2017. [Google Scholar]
  38. Masters, P.; Sardina, S. Goal recognition for rational and irrational agents. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, Montreal, QC, Canada, 13–17 May 2019; pp. 440–448. [Google Scholar]
  39. Root, P.J. Collaborative UAV Path Planning with Deceptive Strategies. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 2005. [Google Scholar]
  40. Keren, S.; Gal, A.; Karpas, E. Goal Recognition Design for Non-Optimal Agents. In Proceedings of the Twenty-Fourth International Conference on Automated Planning and Scheduling, Portsmouth, NH, USA, 21–26 June 2014. [Google Scholar]
  41. Strouse, D.; Kleiman-Weiner, M.; Tenenbaum, J.; Botvinick, M.; Schwab, D.J. Learning to share and hide intentions using information regularization. In Advances in Neural Information Processing Systems; The MIT Press: Cambridge, MA, USA, 2018; pp. 10249–10259. [Google Scholar]
  42. Le Guillarme, N. A Game-Theoretic Planning Framework for Intentional Threat Assessment. Ph.D. Thesis, Thèse de Doctorat, Université de Caen, Caen, France, 2016. [Google Scholar]
  43. Shen, M.; How, J.P. Active Perception in Adversarial Scenarios using Maximum Entropy Deep Reinforcement Learning. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019. [Google Scholar]
  44. Štolba, M.; Komenda, A. Relaxation heuristics for multiagent planning. In Proceedings of the Twenty-Fourth International Conference on Automated Planning and Scheduling, Portsmouth, NH, USA, 21–26 June 2014. [Google Scholar]
  45. Smith, G. On the foundations of quantitative information flow. In Lecture Notes in Computer Science; including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics; Springer Science + Business Media: Berlin, Germany, 2009. [Google Scholar] [CrossRef] [Green Version]
  46. Rényi, A. On measures of entropy and information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics; The Regents of the University of California: Los Angeles, CA, USA, 1961. [Google Scholar]
  47. Štolba, M.; Tožička, J.; Komenda, A. Quantifying privacy leakage in multi-agent planning. ACM Trans. Internet Technol. (TOIT) 2018, 18, 28. [Google Scholar] [CrossRef]
  48. Štolba, M.; Fišer, D.; Komenda, A. Privacy Leakage of Search-Based Multi-Agent Planning Algorithms. In Proceedings of the International Conference on Automated Planning and Scheduling, Berkeley, CA, USA, 11–15 July 2019; Volume 29, pp. 482–490. [Google Scholar]
  49. IBM CPLEX. Available online: http://www.ibm.com/us-en/marketplace/ibm-ilog-cplex (accessed on 1 March 2019).
  50. Masters, P.; Sardina, S. Cost-based goal recognition in navigational domains. J. Artif. Intell. Res. 2019, 64, 197–242. [Google Scholar] [CrossRef] [Green Version]
  51. TurtleBot3. Available online: https://www.turtlebot.com (accessed on 1 August 2019).
  52. Srivastava, B.; Nguyen, T.A.; Gerevini, A.; Kambhampati, S.; Do, M.B.; Serina, I. Domain independent approaches for finding diverse plans. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), Hyderabad, India, 6–12 January 2007. [Google Scholar]
  53. Nguyen, T.A.; Do, M.; Gerevini, A.E.; Serina, I.; Srivastava, B.; Kambhampati, S. Generating diverse plans to handle unknown and partially known user preferences. Artif. Intell. 2012, 190, 1–31. [Google Scholar] [CrossRef] [Green Version]
  54. Bryce, D. Landmark-based plan distance measures for diverse planning. In Proceedings of the Twenty-Fourth International Conference on Automated Planning and Scheduling, Portsmouth, NH, USA, 21–26 June 2014. [Google Scholar]
Figure 1. Typical urban security patrol scenario with some checkpoints on a simplified road network. The UGVs should patrol two zones, each with four candidate checkpoints, and the supply center will provide support for the UGVs.
Figure 2. Deception strategies: (a) Simulation: the UGV hides the true goal and could go to any of the five goals. (b) Dissimulation: the UGV shows false goals; the probabilities of the decoy goals are higher than that of the true goal. Obfuscation strategies: (c) Privacy: the UGV could go to any of the five goals; the resulting plans could be deceptive. (d) Security: the UGV could go to any of the three goals under the rationality assumption.
Figure 3. Privacy-preserving methods for decision-making layer and information layer, and the application layer for some application areas.
Figure 4. The last obfuscated turning point.
Figure 5. Action descriptions for UGV and supply center for the security patrol scenario.
Figure 6. The public projection and related transition systems of the patrol action. The arrows represent the transition for the given variable.
Figure 7. The public projection of actions and the related transition system. The arrows represent transition for the given variable.
Figure 8. Observability controlled paths to the real checkpoint (blue), and diverse paths back to the supply center (red): (a) Line configuration. (b) Circular configuration. (c) Triangular configuration.
Figure 9. Heatmaps for different configurations.
Figure 10. Indoor simulation environment.
Figure 11. Indoor robot demonstration: (a,c) the robot heads to checkpoint 2 with an observability controlled path, and returns to the supply center with a diverse path. (b,d) The corresponding paths on discrete grids.
Table 1. Some synonymous concepts of privacy.
Concept | Main Contributions
Obfuscation | k-ambiguous and d-diverse [35]; one candidate goal [36]
Privacy | secure MAFS [9]; privacy leakage [10]; plan set intersection [11]; privacy-preserving policy iteration [4]
Security | equidistant states [28]
Deception | last deceptive point [37,38]; deceptive shortest path [39]; equidistant states [28]; bounded deception [40]; hide intention [41]; λ-deception [42]; deceptive adversary [43]
Table 2. Variables for task planning.
Variable Set | Description | Variable | Values | I | G
V_pub | UGV1 is charged | cg1 | T/F | T | -
V_pub | UGV2 is charged | cg2 | T/F | T | -
V_pub | task1 is complete | tc1 | T/F | F | T
V_pub | task2 is complete | tc2 | T/F | F | T
V_UGV1_priv | checkpoint 1 is patrolled | cp1 | T/F | F | -
V_UGV1_priv | checkpoint 2 is patrolled | cp2 | T/F | F | -
V_UGV1_priv | checkpoint 3 is patrolled | cp3 | T/F | F | -
V_UGV1_priv | checkpoint 4 is patrolled | cp4 | T/F | F | -
V_UGV1_priv | zone 1 is patrolled | zn1 | T/F | F | T
V_UGV2_priv | checkpoint 5 is patrolled | cp5 | T/F | F | -
V_UGV2_priv | checkpoint 6 is patrolled | cp6 | T/F | F | -
V_UGV2_priv | checkpoint 7 is patrolled | cp7 | T/F | F | -
V_UGV2_priv | checkpoint 8 is patrolled | cp8 | T/F | F | -
V_UGV2_priv | zone 2 is patrolled | zn2 | T/F | F | T
V_SC_priv | supply center can provide support | sc | T/F | T | -
Table 3. Actions for UGV and supply center.
Action Set | Description | Label | pre(a) | eff(a)
A_UGV1_pub | patrol checkpoint 1 | PC1 | {cg1 = T} | {cp1 = T, cg1 = F}
A_UGV1_pub | patrol checkpoint 3 | PC3 | {cg1 = T} | {cp3 = T, cg1 = F}
A_UGV1_pub | task1 is complete | TC | {cp1 = T, cp3 = T, tc1 = F} | {zn1 = T, tc = T}
A_SC_pub | recharge | RC | {sc = T, cg1 = F} | {sc = F, cg1 = T}
A_SC_pub | recharge and resupply | RR | {sc = F, cg1 = F} | {sc = T, cg1 = T}
Table 4. Projection of the public actions of the UGV and the supply center.
Action | pre(a) | eff(a)
PC | {cg1 = T} | {cg1 = F}
TC | {tc = F} | {tc = T}
R | {cg1 = F} | {cg1 = T}
Table 5. The privacy leakage of the paths to each checkpoint under different configurations.
Configuration | Checkpoint 1 | Checkpoint 2 | Checkpoint 3
Line | 13.5 | 14.9 | 13.5
Circular | 20.7 | 10.5 | 10.5
Triangular | 12.3 | 12.2 | 14.7
Table 6. The configuration of TurtleBot3 Burger.
Items | Configuration
Lidar | 360-degree laser lidar LDS-01 (HLS-LFCD2)
SBC | Raspberry Pi 3 and Intel Joule 570x
Battery | Lithium polymer 11.1 V 1800 mAh
IMU | Gyroscope 3 axis, accelerometer 3 axis, magnetometer 3 axis
MCU | OpenCR (32-bit ARM Cortex M7)
Motor | DYNAMIXEL (XL430)
