Article

Optimal Opacity-Enforcing Supervisory Control of Discrete Event Systems on Choosing Cost

College of Information Science and Engineering, Huaqiao University, Xiamen 361021, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(6), 2532; https://doi.org/10.3390/app14062532
Submission received: 6 February 2024 / Revised: 13 March 2024 / Accepted: 15 March 2024 / Published: 17 March 2024
(This article belongs to the Section Robotics and Automation)

Abstract

To ensure opacity, one natural goal is to retain as many of the occurring event sequences as possible. The opposite goal is to retain as few of the occurring event sequences as possible. Based on the choosing cost, this paper presents an optimal opacity-enforcing problem with minimal discount choosing cost under two constraints. The first constraint is the opacity of the controlled system. The second is that the secret is retained to the maximum extent. To solve the model, two scenarios concerning opacity are considered. For each scenario, algorithms based on dynamic programming are presented to compute the optimal solution of the model. The solutions produced by the algorithms are then proved correct. Finally, some illustrative examples and an application to location privacy protection are given.

1. Introduction

In modern society, when important information about an enterprise is released online, it generally needs to be as complete as possible. To keep the transmission of important data secure and opaque, some unimportant data must be released alongside it so that an adversary cannot distinguish the important data. From this point of view, the more information used to confuse the important data, the better. Therefore, some papers (e.g., [1,2,3,4]) discussed the issue of maximal opaque sublanguages. In reality, however, data transmission incurs costs, so releasing less unimportant data at a lower transmission cost is preferable. Therefore, this paper proposes an optimization model to find the most cost-effective information with which to confuse the important data to be released, and proposes algorithms to obtain the optimal control strategy.
In 2004, opacity was first introduced to analyze cryptographic protocols in [5]. In 2005, the research modeled with Petri nets in [6] brought opacity to the field of discrete event systems (DES). Afterward, research on opacity in DES flourished. In DES models, the definition of opacity was divided into two cases: language-based opacity [1,7,8] (e.g., strong opacity [7], weak opacity [7], non-opacity [7]) and state-based opacity [6,9,10,11,12,13] (e.g., current-state opacity [6,9], initial-state opacity [6], initial-and-final-state opacity [13], K-step opacity [9,10,11], infinite-step opacity [12,14]). In Ref. [15], the works of [13] were extended, and it was shown that the various notions of opacity can be transformed into each other. Then, in Ref. [16], the existing notions of opacity were unified and a general framework was provided. Given a definition of opacity, verification approaches were investigated in previous works. Once a system is not opaque, supervisory control theory [1,4,17] or enforcement approaches [18,19,20,21,22] are used to ensure opacity in the system. In general, supervisory control theory restricts the behavior of the system to ensure opacity, whereas the enforcement approach does not restrict but modifies the output of the system to ensure opacity. For example, in Ref. [1], fix-point theory was used to check the opacity of the closed system at every iteration to achieve the maximal permissive supervisor on condensed state estimates. In Ref. [4], the maximal permissive supervisor was obtained by refining the plant and observer instead of using condensed state estimates. In Ref. [17], strong infinite- and k-step opacity was transformed into a language and an algorithm was presented to enforce infinite- and k-step opacity by a supervisor. In Ref. [18], the synthesis of insertion functions was extended from current-state opacity [23] to infinite-step opacity and K-step opacity. In Ref. [19], fictitious events were inserted at the output of the system to enforce its opacity. Some works [20,21] extended the method of [19]: the authors of Ref. [20] discussed the problem of opacity enforcement under energy constraints, and the authors of Ref. [21] studied supervisory control under local mean payoff constraints. In Ref. [22], current-state opacity was verified and enforced based on an algebraic state space approach for partially observed finite automata.
To ensure opacity, some algorithms were proposed in Ref. [24] to design a controller that controls the information released to the public. In Ref. [25], the work of [24] was extended and an algorithm was presented for finding minimal information release policies for non-opacity. In Ref. [26], a reward was assigned for revealing the information of each transition and the maximum guaranteed payoff policy was found for the weighted DES. In Ref. [27], a dynamic information release mechanism was proposed to verify current-state opacity where information is partially released and state-dependent.
To ensure opacity, the secret is preserved by enabling/disabling some events to restrict the behavior of the system. If cost functions are defined on a DES, two types of optimal supervisory control problems arise: one concerns event costs, and the other concerns control costs. For example, in Ref. [28], costs of events were defined to design a supervisor that minimizes the total cost of reaching the desired states. Then, in Ref. [29], the framework was extended to partial observation of the system. Afterward, in Ref. [30], the mean payoff supervisory control problem was investigated for a system in which each event has an integer associated with it. In [31,32], the costs of choosing a control input and of an event occurring were defined, and two optimal problems minimizing the maximal discounted total cost among all possible strings generated by the system were solved using Markov decision processes.
In contrast to the supremal opaque sublanguage of the plant in [1,2,3,4,8], we want to find a 'smallest' closed controllable sublanguage of the plant with respect to which the secret is not only opaque but also 'largest'. Since the class of opaque languages is not closed under intersection [8], 'smallest' does not refer to set inclusion but to the minimal discount total choosing cost. On the other hand, 'largest' refers to set inclusion: the retained secret is the union of all the elements of the class of confused secrets. To describe this optimal problem, a non-linear optimal supervisory control model is proposed by introducing the concept of choosing cost.
The paper is organized as follows. Section 2 establishes the background on supervisory control theory, opacity, and the choosing cost of DES. In Section 3, we present an optimal supervisory control problem that is modeled by non-linear programming with two constraint conditions. In Section 4, we introduce two scenarios that organize the computational process of the optimal problem. The first scenario is divided into three cases in Section 5, where we suppose that the plant already ensures that the secret is opaque, and some algorithms and theorems are put forward to solve the optimal problem. In Section 6, a generalized algorithm and theorem are proposed under the second scenario, where the plant cannot ensure that the secret is opaque. Finally, the main contribution of the work is discussed in Section 7.

2. Preliminaries

2.1. Supervisory Control Theory

We consider a DES modeled by a deterministic finite transition system G = (Q, Σ, δ, q_0), where Q is a finite set of states, Σ is a finite set of events labeling the transitions, the partial function δ : Σ × Q → Q is the transition function, and q_0 ∈ Q is the initial state. A run of G is a finite non-empty sequence ρ = q_0 σ_1 q_1 σ_2 ⋯ q_{n−1} σ_n, where q_i ∈ Q, σ_{i+1} ∈ Σ and q_{i+1} = δ(σ_{i+1}, q_i) for i = 0, …, n − 2. The trace tr(ρ) of run ρ is σ_1 σ_2 ⋯ σ_n. The language of G is the set of all traces of runs of G, denoted L(G) = { s ∈ Σ* | δ(s, q_0)! }. The event set Σ is assumed to be partitioned into a controllable event set Σ_c and an uncontrollable event set Σ_u, where Σ = Σ_c ∪̇ Σ_u. Each subset of events containing Σ_u is a control pattern, and the set of all control patterns is denoted by Γ = { γ | Σ_u ⊆ γ ⊆ Σ }. A supervisor for G is any map f : L(G) → Γ. For system G, we denote by Γ^{L(G)} the set of supervisors. The closed behavior of f/G, i.e., G under the supervision of f, is defined to be the language L(f/G) ⊆ L(G) described as follows:
  • ϵ ∈ L(f/G);
  • sσ ∈ L(f/G) ⇔ s ∈ L(f/G), sσ ∈ L(G), and σ ∈ f(s).
For a language K ⊆ L(G), the notation K̄ denotes the prefix(-closure) of K. K is said to be (prefix-)closed if K̄ = K.
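For readers who prefer executable notation, the following Python sketch illustrates the two rules above on a toy acyclic system; the representation (a nested dict delta, the helpers language and closed_behavior, and the example supervisor) is our own illustrative assumption, not part of the paper's formalism.

# Illustrative sketch only: a toy acyclic transition system G, its language L(G),
# and the closed behavior L(f/G) under a supervisor f, following the two rules above.
delta = {                      # delta[state][event] -> next state
    0: {"a": 1, "t": 2},
    1: {"b": 3},
    2: {},
    3: {},
}
SIGMA_U = {"t"}                # uncontrollable events (every control pattern contains them)

def language(delta, q0=0):
    """All traces of runs of G, i.e., L(G), for an acyclic system."""
    traces, stack = {""}, [(q0, "")]
    while stack:
        q, s = stack.pop()
        for sigma, q2 in delta.get(q, {}).items():
            traces.add(s + sigma)
            stack.append((q2, s + sigma))
    return traces

def closed_behavior(delta, f, q0=0):
    """L(f/G): epsilon is in it; s.sigma is in it iff s is, s.sigma is in L(G), sigma in f(s)."""
    lang, stack = {""}, [(q0, "")]
    while stack:
        q, s = stack.pop()
        for sigma, q2 in delta.get(q, {}).items():
            if sigma in f(s) | SIGMA_U:      # control patterns always include Sigma_u
                lang.add(s + sigma)
                stack.append((q2, s + sigma))
    return lang

# Example supervisor: disable the controllable event "b" after the string "a".
f = lambda s: ({"a", "b", "t"} - {"b"}) if s == "a" else {"a", "b", "t"}
print(sorted(language(delta)))            # ['', 'a', 'ab', 't']
print(sorted(closed_behavior(delta, f)))  # ['', 'a', 't']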
Definition 1 
([33]). We consider a non-empty language K. Then, K is controllable if K̄ Σ_u ∩ L(G) ⊆ K̄.
A necessary and sufficient condition for the existence of a supervisor is given as follows:
Theorem 1 
([33]). We consider a non-empty language K ⊆ L(G). Then, there exists a supervisor f such that L(f/G) = K if and only if K is a controllable and closed language.

2.2. Supervisory Control for Opacity

Assuming that the adversary is aware of any supervisor's control policy, a subset Σ_a of the events can be seen by the adversary through an observation function θ : Σ* → Σ_a*. The adversary's view of the system is denoted by obs(G) = (Q_a, Σ_a, δ_a, q_{0a}), which is an observer of system G. Opacity captures the situation in which an observer of the system is unable to determine some secret information; we use the following equivalent definitions of opacity:
Definition 2 
([1]). We consider system G and a non-empty language K. For any s ∈ K ⊆ L(G), if there exists s′ ∈ L(G) \ K such that θ(s) = θ(s′), we say K is (strongly) opaque with respect to L(G) and Σ_a.
Definition 3 
([7]). We consider system G and a non-empty language K. K is strongly opaque with respect to L(G) and θ if θ(K) ⊆ θ(L(G) \ K).
If the condition of Definition 2 cannot be met, we say K is not (strongly) opaque with respect to L(G) and Σ_a. Note that this notion of non-opacity differs from the one in [7].
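As a small illustration of Definition 3 (our own sketch, not code from the paper), the projection θ can be realized by erasing unobservable events, and strong opacity then reduces to a subset check between the projected secret and non-secret strings:

# Sketch of Definition 3: K is strongly opaque w.r.t. L(G) and theta
# iff theta(K) is a subset of theta(L(G) \ K). Helper names are illustrative.

def theta(s, sigma_a):
    """Natural projection of a string onto the observable alphabet Sigma_a."""
    return "".join(e for e in s if e in sigma_a)

def is_strongly_opaque(L_G, K, sigma_a):
    obs_secret = {theta(s, sigma_a) for s in K}
    obs_nonsecret = {theta(s, sigma_a) for s in L_G - K}
    return obs_secret <= obs_nonsecret

L_G = {"", "a", "ab", "eb", "e"}
K = {"ab"}                                             # secret strings
print(is_strongly_opaque(L_G, K, sigma_a={"b"}))       # True: theta(ab)=b is matched by eb
print(is_strongly_opaque(L_G, K, sigma_a={"a", "b"}))  # False: no non-secret projects to ab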
Since opaque languages have the desirable property of being closed under union [8], the supremal controllable, closed, and opaque sublanguage of the system exists [1,4]. Different methods for computing this supremal controllable, closed, and opaque sublanguage are given in [1,4,8].

2.3. The Definition of Choosing Cost

In [31,32], two optimal control models of DES are presented based on the costs of choosing a control input and of an event occurring, respectively. In this paper, the choosing cost is adapted from the cost of choosing a control input after some string defined in [31,32]. For a given DES G, we let c(s, γ) be the cost of choosing control input γ at string s, where s ∈ L(G), γ ∈ Γ and c(s, γ) is nonnegative. For any supervisor f : L(G) → Γ, we call c(s, f) = c(s, f(s)) the cost of choosing f(s) after s under the supervision of f. For short, we simply call c(s, γ) (or c(s, f)) the choosing cost.

3. Optimal Supervisory Control Model on Opacity

In this section, an optimal supervisory control model is constructed to minimize the choosing cost of the controlled system while keeping it opaque. Given system G and secret K ⊆ L(G), as shown in [1], secret K is a regular language. For secret K, we suppose there exists a set of secret states Q_s ⊆ Q which recognizes secret K, i.e., s ∈ K iff δ(q_0, s) ∈ Q_s.
To show the cost of information released, we introduce the cost of choosing control input in [31,32] and define the total choosing cost as follows:
Definition 4. 
Given closed-loop behavior L(f/G), the discount total choosing cost of L(f/G) is defined as V(L(f/G), f) = Σ_{s ∈ L(f/G)} β^{|s|} c(s, f), where f is a supervisor controlling system G and β > 0 is a discount factor. For convenience, V(L(f/G), f) is simplified as V(L(f/G)).
For system G, if there exist two supervisors f_1 and f_2 such that L(f_1/G) ⊆ L(f_2/G), it is obvious that V(L(f_1/G)) ≤ V(L(f_2/G)) by Definition 4.
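A direct transcription of Definition 4 for a finite closed-loop language might look as follows; the helper names and the toy cost data are assumptions made for illustration only:

# Sketch of Definition 4: V(L(f/G)) = sum over s in L(f/G) of beta^|s| * c(s, f(s)).

def discount_total_choosing_cost(closed_loop, f, c, beta):
    """closed_loop: finite set of strings L(f/G); f: supervisor map; c(s, gamma): choosing cost."""
    return sum(beta ** len(s) * c(s, f(s)) for s in closed_loop)

# Toy data: choosing a control pattern costs the sum of the costs of its controllable events.
event_cost = {"a": 0.5, "b": 0.3}
c = lambda s, gamma: sum(event_cost.get(e, 0.0) for e in gamma)  # uncontrollable events cost 0
f = lambda s: {"a", "b", "t"}                                    # supervisor enabling everything
print(discount_total_choosing_cost({"", "a", "ab"}, f, c, beta=0.9))
# 0.8 * (1 + 0.9 + 0.81) = 2.168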
To obtain the discount total choosing cost, we define the discount cost of choosing a string after s under f as follows.
Definition 5. 
We consider s, t_n = σ_1 σ_2 ⋯ σ_n and t_k = σ_1 σ_2 ⋯ σ_k (0 ≤ k ≤ n, t_0 = ϵ). Then, V_s(t_n) = Σ_{k=0}^{n} β^{|s|+k} c(s t_k, f_{s t_n}) is said to be the discount cost of choosing t_n after s under f, where f_{s t_n}(s t_k) = f(s t_k) ∩ Σ_{s t_n}(s t_k) and Σ_{s t_n}(s t_k) = { σ | s t_k σ is a prefix of s t_n }. In particular, if t_n = t_0 = ϵ, then V_s(ϵ) = β^{|s|} c(s, ∅) = 0; if s = ϵ, then s t_n = t_n and V_ϵ(t_n) = Σ_{k=0}^{n} β^{k} c(t_k, f_{t_n}). Here, t_0 = ϵ indicates that choosing f(s) at s has nothing to do with Σ.
Based on Definition 5, if a string can be divided into two pieces, the following results give a formula that simplifies the computation of the discount cost:
Proposition 1. 
We consider t = s t′. Then, we have V_ϵ(t) = V_ϵ(s) + V_s(t′).
Proof. 
For t = s t′, we prove the following formula, where s = σ_1 σ_2 ⋯ σ_{|s|}, t′ = σ_{|s|+1} σ_{|s|+2} ⋯ σ_{|s|+n}, and t′_k denotes the prefix σ_{|s|+1} ⋯ σ_{|s|+k} of t′:
V_ϵ(t) = Σ_{k=0}^{|s|+n} β^{k} c(t_k, f_t)
   = c(ϵ, f_s) + β c(σ_1, f_s) + ⋯ + β^{|s|} c(s, f_s) + Σ_{k=1}^{n} β^{|s|+k} c(s t′_k, f_{s t′})
   = Σ_{k=0}^{|s|} β^{k} c(σ_1 ⋯ σ_k, f_s) + Σ_{k=0}^{n} β^{|s|+k} c(s t′_k, f_{s t′})
   = V_ϵ(s) + V_s(t′)
  □
To generalize the above Proposition 1, we have the following conclusion as Proposition 2:
Proposition 2. 
We consider s = s_1 s_2 ⋯ s_k ∈ L(G). Then, we have V_ϵ(s) = V_ϵ(s_1) + V_{s_1}(s_2) + ⋯ + V_{s_1 s_2 ⋯ s_{k−1}}(s_k).
Proof. 
The proof can proceed by induction.
Base case: If k = 1 , then it holds that V ϵ ( s ) = V ϵ ( s 1 ) .
Inductive hypothesis: Suppose that V_ϵ(s_1 ⋯ s_{j−1} s_j) = V_ϵ(s_1) + V_{s_1}(s_2) + ⋯ + V_{s_1 s_2 ⋯ s_{j−1}}(s_j) holds for k = j.
Inductive step: For k = j + 1, we prove that V_ϵ(s_1 ⋯ s_{j−1} s_j s_{j+1}) = V_ϵ(s_1) + V_{s_1}(s_2) + ⋯ + V_{s_1 s_2 ⋯ s_{j−1}}(s_j) + V_{s_1 s_2 ⋯ s_j}(s_{j+1}).
By Definition 5, we have the following, which completes the inductive step:
V_ϵ(s) = V_ϵ(s_1 ⋯ s_{j−1} s_j s_{j+1})
   = V_ϵ(s_1 ⋯ s_{j−1} s_j) + V_{s_1 ⋯ s_{j−1} s_j}(s_{j+1})   (Proposition 1)
   = V_ϵ(s_1) + V_{s_1}(s_2) + ⋯ + V_{s_1 s_2 ⋯ s_{j−1}}(s_j) + V_{s_1 ⋯ s_{j−1} s_j}(s_{j+1})   (hypothesis for k = j)
  □
To obtain the discount total choosing cost, we formulate Algorithm 1, which obtains V(L(f/G)) through the computation of the terms V_s(s′).
Algorithm 1 Computing V(L(f/G)), the discount total choosing cost of L(f/G)
Input: L(f/G);
Output: V(L(f/G)).
1: Let root = q_0.
2: Let Q_v = Q.
3: for q ∈ Q \ {q_0} do
4:   Compute the in-degree of state q.
5:   if the in-degree of state q is more than 1 then
6:     Copy state q and its successor states and traces based on its in-degree.
7:     Insert the copies of q and its successors into Q and Q_v.
8:   end if
9: end for
10: for q ∈ Q \ {q_0} do
11:   Compute the out-degree of state q.
12:   if the out-degree of state q is 1 then
13:     Concatenate the transitions from q's parent to q's child via q, i.e., obtain a transition (q's parent --ab--> q's child) from the transitions (q's parent --a--> q and q --b--> q's child).
14:     Q = Q \ {q}
15:   end if
16: end for
17: At a node q, if there is a trace s from the root to q and a trace s′ from q to its child (i.e., root --s--> q --s′--> q's child), compute V_s(s′).
18: By breadth-first search, traverse all the nodes and obtain V(L(f/G)) = Σ_{q ∈ Q_v} V_s(s′).
19: return V(L(f/G))
As shown in Algorithm 1, the closed-loop system f/G can be transformed into a tree automaton that is language-equivalent to it. To obtain the longest common prefix of some strings, we need to compute the out-degree of each node. In the tree automaton, if the out-degree of some node is greater than one, the string from the root to that node is the longest common prefix of the strings from the root to the leaves via that node.
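The following sketch captures the idea behind Algorithm 1 in Python, under the simplifying assumption that L(f/G) is given as a finite prefix-closed set of strings; build_trie and total_cost_via_trie are illustrative names, not the paper's implementation.

# Sketch of the idea behind Algorithm 1: represent the prefix-closed language L(f/G)
# as a trie, so every node is exactly one string s in L(f/G), then accumulate
# beta^|s| * c(s, f(s)) while traversing the trie breadth-first.

from collections import deque

def build_trie(closed_loop):
    """Map every string of the prefix-closed set to the events enabled after it."""
    children = {s: set() for s in closed_loop}
    for s in closed_loop:
        if s:  # register s as a child of its longest proper prefix
            children[s[:-1]].add(s[-1])
    return children

def total_cost_via_trie(closed_loop, f, c, beta):
    children = build_trie(closed_loop)
    total, queue = 0.0, deque([""])
    while queue:
        s = queue.popleft()
        total += beta ** len(s) * c(s, f(s))     # cost of the choice made after s
        queue.extend(s + sigma for sigma in children[s])
    return total

Because every trie node corresponds to exactly one string of the prefix-closed language, the breadth-first accumulation returns the same value as summing Definition 4 directly (e.g., 2.168 for the toy data of the previous sketch).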
To show the computational process of Algorithm 1, an example is given to obtain the discount total choosing cost.
Example 1. 
Suppose that L(f/G) = { s̄_1, s̄_2 } and t is the longest common prefix of s_1 and s_2. To simplify the computation in Definition 4, we write t = σ_1 ⋯ σ_l, t_1 = σ_{i_1} ⋯ σ_{i_m} and t_2 = σ_{j_1} ⋯ σ_{j_n} such that s_1 = t t_1 and s_2 = t t_2. By Definition 4, we have the following equation:
V({ s̄_1, s̄_2 }) = Σ_{k=0}^{l} β^{k} c(σ_1 ⋯ σ_k, f_{σ_1 ⋯ σ_l}) + β^{l} Σ_{k=0}^{m} β^{k} c(σ_1 ⋯ σ_l σ_{i_1} ⋯ σ_{i_k}, f_{σ_1 ⋯ σ_l σ_{i_1} ⋯ σ_{i_m}}) + β^{l} Σ_{k=0}^{n} β^{k} c(σ_1 ⋯ σ_l σ_{j_1} ⋯ σ_{j_k}, f_{σ_1 ⋯ σ_l σ_{j_1} ⋯ σ_{j_n}})
   = V_ϵ(t) + V_t(t_1) + V_t(t_2)
   = V_ϵ(t t_1) + V_t(t_2)
   = V_ϵ(s_1) + V_t(t_2)
   = V_ϵ(s_2) + V_t(t_1)
According to Algorithm 1, we transform L ( f / G ) in Figure 1 into a tree automaton shown in Figure 2.
By Figure 2 and Algorithm 1, we have the following formula:
V({ s̄_1, s̄_2 }) = V_ϵ(t) + V_t(t_1) + V_t(t_2) = V_ϵ(t t_1) + V_t(t_2) = V_ϵ(s_1) + V_t(t_2) = V_ϵ(s_2) + V_t(t_1)
If there exists a language K such that K̄ ⊆ L(f/G), the discount total choosing cost of K̄ can be denoted by V(K̄) and obtained as shown in Algorithm 1. If we denote V(L(f/G) \ K̄) = V(L(f/G)) − V(K̄), we can also obtain this discount cost using Definition 5. V(L(f/G) \ K̄) is the choosing cost incurred outside of K̄ in L(f/G). Next, we continue the above Example 1.
Example 2. 
In Figure 1, we assume there exists a state subset Q_K = {4, 6} such that K = t(σ_{i_1} + σ_{j_1}). By Algorithm 1, we obtain a tree automaton based on K̄, shown in Figure 3. So, it holds that V(K̄) = V_ϵ(t) + V_t(σ_{i_1}) + V_t(σ_{j_1}). According to the formulas for V(L(f/G)) and V(K̄), we have the following equation:
V(L(f/G) \ K̄) = [V_ϵ(t) + V_t(t_1) + V_t(t_2)] − [V_ϵ(t) + V_t(σ_{i_1}) + V_t(σ_{j_1})]
   = V_t(t_1) − V_t(σ_{i_1}) + V_t(t_2) − V_t(σ_{j_1})
   = V_{t σ_{i_1}}(σ_{i_2} ⋯ σ_{i_m}) + V_{t σ_{j_1}}(σ_{j_2} ⋯ σ_{j_n})
For the system and the secret, we propose an optimal problem to synthesize a supervisor such that the discount total choosing cost of the controlled system is minimal.
Optimal opacity-enforcing problem. Given system G and secret K ⊆ L(G), we find a supervisor f such that L(f/G) satisfies the following conditions:
1. K is opaque with respect to L(f/G) and Σ_a;
2. All the secret information that has not been leaked in G remains in the closed-loop system f/G;
3. For closed-loop behavior L(f/G), the discount total choosing cost V(L(f/G)) is minimal.
In Condition 1 of the above problem, if K is opaque with respect to L(f/G) and Σ_a, we have θ(K ∩ L(f/G)) ⊆ θ(L(f/G) \ K) by Definition 3.
In Condition 2, if the supremal controllable and closed sublanguage of L(G) that keeps K opaque is denoted by L(g/G) [4], the largest part of secret K permitted by any supervisor f is L(g/G) ∩ K. So, all the secret information that is not leaked in G is L(g/G) ∩ K. Therefore, the second condition implies that K ∩ L(f/G) = K ∩ L(g/G) holds.
In Condition 3, we hope that objective function V ( L ( f / G ) ) is minimal.
Based on the above optimal problem and its analysis, a non-linear optimal model is formulated as follows:
min V(L(f/G))
s.t. θ(K ∩ L(f/G)) ⊆ θ(L(f/G) \ K)
   K ∩ L(f/G) = K ∩ L(g/G)
   f ∈ Γ^{L(G)}
(1)
For the above optimal Model (1), the objective function means that the discount total choosing cost of the supervised system is minimal. The first constraint condition means that K is opaque with respect to the controlled system. The second means that the controlled system retains the largest possible part of the undisclosed secret.
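For very small finite examples, Model (1) can be checked by brute force, as in the following sketch; this is only an illustration of the two constraints (the helper names are ours), not the solution method developed in the next sections.

# Brute-force sketch of Model (1) for tiny finite examples; the paper's Algorithms 3
# and 7 are the actual solution methods.

def theta_set(S, sigma_a):
    return {"".join(e for e in s if e in sigma_a) for s in S}

def is_feasible(Lf, K, Lg, sigma_a):
    opaque = theta_set(K & Lf, sigma_a) <= theta_set(Lf - K, sigma_a)   # constraint 1
    keeps_secret = (K & Lf) == (K & Lg)                                 # constraint 2
    return opaque and keeps_secret

def best_candidate(candidates, K, Lg, sigma_a, cost):
    """Among candidate closed-loop languages, return the feasible one with minimal cost."""
    feasible = [Lf for Lf in candidates if is_feasible(Lf, K, Lg, sigma_a)]
    return min(feasible, key=cost) if feasible else None

Here cost would be the discount total choosing cost of the earlier sketch, evaluated on each candidate closed-loop language.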

4. Solution of Optimal Model on Choosing Cost

In this section, we first make the following assumption:
Assumption 1. 
If s ∈ L(G) and γ = Σ_u, then we have c(s, γ) = 0.
For Assumption 1, we suppose that any uncontrollable event’s choosing cost is 0.
Under the assumption, we consider the following two scenarios to solve optimal Model (1) in the following sections:
  • Scenario 1 Secret K is opaque with respect to L ( G ) and Σ a .
  • Scenario 2 Secret K is not opaque with respect to L ( G ) and Σ a .
Figure 4 illustrates the classification process of optimal Model (1) in the upcoming sections, i.e., Section 5 and Section 6.
In the process depicted in Figure 4, the main method used is verification of the opacity of some systems in Algorithm 2. The specific process is as follows:
Algorithm 2 How to know which scenario or case is available
1: if K is opaque with respect to L(G) and Σ_a then
2:   Scenario 1 (in Section 5) is available;
3:   if K̄ == L(G) then
4:     Case 1 (in Section 5.1) is available;
5:   else
6:     if K is opaque with respect to K̄ and Σ_a then
7:       Case 2 (in Section 5.2) is available;
8:     else
9:       Case 3 (in Section 5.3) is available;
10:    end if
11:  end if
12: else
13:   Scenario 2 (in Section 6) is available.
14: end if
15: return

5. Scenario 1: Secret K Is Opaque with Respect to L(G) and Σ a

We consider language L(G) and a regular language (secret) K ⊆ L(G). By [4], we have L(G) = L(g/G), because L(g/G) is the supremal opaque sublanguage of L(G). Therefore, the second condition of Model (1) is equivalent to the formula K ∩ L(f/G) = K, which implies that K ⊆ L(f/G). Since L(f/G) is prefix-closed, K ⊆ L(f/G) is equivalent to K̄ ⊆ L(f/G). So, in Model (1), the second constraint condition can be equivalently reduced to K̄ ⊆ L(f/G). Optimal Model (1) can then be rewritten as the following Model (2):
min V(L(f/G))
s.t. θ(K ∩ L(f/G)) ⊆ θ(L(f/G) \ K)
   K̄ ⊆ L(f/G)
   f ∈ Γ^{L(G)}
(2)
To solve optimal Model (2), we consider the following three cases:

5.1. Case 1: K ¯ = L ( G )

In Case 1, Condition 2 of Model (2) becomes L(G) ⊆ L(f/G). To ensure the feasible region is not empty, we can obtain a supervisor f such that L(G) = L(f/G). So, we have the following theorem:
Theorem 2. 
We have system G and secret K ⊆ L(G). In Case 1 of Scenario 1, L(G) = L(f/G) is an optimal solution of optimal Model (2).
Proof. 
From the above analysis, it is obvious that L ( G ) = L ( f / G ) is the unique solution of the feasible set. So, V ( L ( f / G ) ) is minimal.   □

5.2. Case 2: K̄ ⊊ L(G) and ∀ s ∈ K, ∃ s′ ∈ K̄ \ K Such That θ(s) = θ(s′)

In Case 2, the condition means that every secret string is indistinguishable from some non-secret string in the closure K̄.
For secret K and its closure K̄, we have θ(K) ⊆ θ(K̄ \ K) ⊆ θ(L(G) \ K). If there exists a closed-loop system f/G such that K̄ ⊆ L(f/G), it obviously ensures the opacity of secret K. So, the feasible region of Model (2) is not empty.
To find supervisor f, we check whether K̄ is controllable with respect to L(G).
  • If K ¯ is controllable with respect to L ( G ) , there exists supervisor f such that L ( f / G ) = K ¯ . For closed-loop behavior L ( f / G ) , it satisfies the two constraint conditions of Model (2). So, the feasible region of Model (2) is not empty.
  • If K̄ is not controllable with respect to L(G), we can find a controllable and closed superlanguage K̄′ of K̄. Superlanguage K̄′ not only ensures the opacity of K (see Theorem A1 of Appendix A) but also maximizes secret K. So, the feasible region of Model (2) is not empty.
According to the above analysis, the following Theorem 3 states that an optimal solution of Model (2) can be obtained using the controllability of K̄ or the superlanguage K̄′.
Theorem 3. 
We consider system G and secret K ⊆ L(G). In Case 2 of Scenario 1, we have the following conclusions about the optimal solution of Model (2):
1. If K̄ is controllable with respect to L(G), there exists a supervisor f such that L(f/G) = K̄, which is the optimal solution of Model (2).
2. If K̄ is not controllable with respect to L(G), there exists a supervisor f such that L(f/G) = K̄′, which is the optimal solution of Model (2).
Proof. 
Firstly, we prove that closed-loop behavior L ( f / G ) of Theorem 3 is a feasible solution of optimal Model (2).
If K̄ is controllable with respect to L(G), K̄ is a closed and controllable sublanguage of L(G). So, there exists a supervisor f such that L(f/G) = K̄. If K̄ is not controllable with respect to L(G), K̄′ is taken to be the infimal closed and controllable superlanguage of K̄. So, there exists a supervisor f such that L(f/G) = K̄′.
Therefore, we have K̄ ⊆ L(f/G), which means that constraint Condition 2 of Model (2) is true.
By Theorem A1 of Appendix A, we conclude that L ( f / G ) can ensure the opacity of K under Case 2 of Scenario 1, which implies that constraint Condition 1 of Model (2) is true.
From the above points, it is true that L ( f / G ) in Theorem 3 is a feasible solution of Model (2).
Next, we prove by contradiction that the discount total choosing cost of the L(f/G) produced in Theorem 3 is minimal. We assume that there exists a feasible solution L(f′/G) (≠ L(f/G)) of Model (2) such that V(L(f′/G)) < V(L(f/G)).
According to constraint Condition 2, we have K̄ ⊆ L(f′/G). Afterward, we consider the controllability of K̄.
If K̄ is controllable, it holds that L(f/G) = K̄ by Theorem 1, which means L(f/G) ⊆ L(f′/G). So, we have V(L(f/G)) ≤ V(L(f′/G)), which contradicts V(L(f′/G)) < V(L(f/G)).
If K̄ is not controllable, it holds that K̄′ ⊆ L(f′/G) by Theorem 1. For any s ∈ K̄′ \ K̄, there exists a prefix t of s such that t ∈ K̄ and s = t u with u ∈ Σ_u*. As shown in Proposition 2 and Assumption 1, we have V_ϵ(s) = V_ϵ(t) + V_t(u) = V_ϵ(t). So, it holds that V(K̄) = V(K̄′) = V(L(f/G)). According to the formula K̄′ ⊆ L(f′/G), it is true that V(L(f/G)) ≤ V(L(f′/G)), which contradicts V(L(f′/G)) < V(L(f/G)).
In summary, it is true that V(L(f/G)) ≤ V(L(f′/G)), which means that the discount total choosing cost of the L(f/G) in Theorem 3 is minimal.   □
According to the proof of Theorem 3, we have the following corollaries:
Corollary 1. 
We consider a language L. If a new language L′ is obtained by concatenating any string of L with an uncontrollable string (i.e., a string in Σ_u*), then the discount total choosing costs of L and L′ are the same, that is, V(L) = V(L′).
Corollary 2. 
We consider system G and secret K ⊆ L(G). In Case 2 of Scenario 1, V(K̄) = V(K̄′) = V(L(f/G)) holds, where L(f/G) is the closed-loop behavior in Theorem 3.
Example 3. 
We consider the finite transition system G = (Q, Σ, δ, q_0) shown in Figure 5, where Σ_u = {f, t}. Obviously, for system G, Assumption 1 is true. We suppose that the secret is K = {a, ab, aebgt}, which can be recognized by Q_s = {3, 6, 16}. To show the choosing cost c(s, γ), the label ·|n on a transition from p to q means that the transition is driven by event · and n denotes the choosing cost c(s, ·). For a control input γ ∈ Γ, the cost of choosing γ is defined as c(s, γ) = Σ_{σ ∈ γ} c(s, σ).
We assume that the adversary has complete knowledge of the supervisor's control policy. The adversary can see a partial set of events, denoted by Σ_a = {a, b, d, f, g}. For secret K, it can be verified that K is opaque with respect to L(G) and Σ_a (Scenario 1). To reduce the choosing cost, the closed-loop behavior L(f/G) can be obtained by Theorem 3, where L(f/G) = {ϵ, t, a, ab, abt, abtf, ae, aeb, aebg, aebgt} is shown in Figure 6.
By Definition 5 and Algorithm 1, V ( L ( f / G ) ) = V ϵ ( t ) + V ϵ ( a ) + V a ( e b g t ) + V a ( b t f ) = 1.726 is minimal.

5.3. Case 3: K̄ ⊊ L(G) and ∃ s ∈ K, ∀ s′ ∈ K̄ \ K Such That θ(s) ≠ θ(s′)

In Case 3, the condition means that there exist secret strings in K whose confusing non-secret strings all lie outside of K̄.
For L(G), we let K̲ = { s ∈ K | ∀ s′ ∈ K̄ \ K, θ(s) ≠ θ(s′) } be the set of secret strings that cannot be confused by any string in K̄ \ K, and L = { s ∈ L(G) \ K | θ(s) ∈ θ(K̲) } be the set of strings that can confuse the secrets in K̲. For language L, we call [s] = { s′ ∈ L | θ(s′) = θ(s) } the coset (or equivalence class) of s with respect to L and θ, where each s′ ∈ [s] is said to be an equivalent string of s. And L/[·] is defined as the quotient set of L with respect to the cosets [·]. For a deterministic finite transition system, the number of strings in L is finite and the length of each string of L is finite, too. Obviously, each coset [·] and the quotient set L/[·] are also finite.
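Under the assumption that all the languages involved are finite sets of strings, the constructions of K̲, L, and the quotient set can be sketched as follows (reusing the projection helpers theta and theta_set from the earlier sketches; all function names are illustrative).

# Sketch of K_under (the secrets with no confuser in K_bar \ K), the language L of
# outside confusers, and the quotient set of L under the observation equivalence.

def compute_K_under(K, K_bar, sigma_a):
    confusers = theta_set(K_bar - K, sigma_a)
    return {s for s in K if theta(s, sigma_a) not in confusers}

def compute_L(L_G, K, K_under, sigma_a):
    secret_obs = theta_set(K_under, sigma_a)
    return {s for s in L_G - K if theta(s, sigma_a) in secret_obs}

def quotient(L, sigma_a):
    """Group L into cosets [s] = { s' in L | theta(s') = theta(s) }."""
    cosets = {}
    for s in L:
        cosets.setdefault(theta(s, sigma_a), set()).add(s)
    return list(cosets.values())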
To solve Model (2), Algorithm 3, shown below, is proposed; it relies on Function 1 (see Algorithm 4) and Function 2 (see Algorithm 5).
Algorithm 3 Optimal supervisory control I
Input: Automaton G, secret K and choosing cost c(s, γ);
Output: Closed-loop language L(f/G).
1: Construct language L = { s ∈ L(G) \ K | θ(s) ∈ θ(K̲) }
2: j = 1
3: for s ∈ K̲ do
4:   [s] = { s′ ∈ L | θ(s′) = θ(s) }
5:   if [s] = ∅ then
6:     Continue
7:   end if
8:   H_j = [s]
9:   L = L \ [s]
10:  j = j + 1
11: end for
12: H = { H_j | j ∈ {1, 2, …, j − 1} }
13: V = function1(L/[·], K)
14: Label = function2(H, V)
15: In Label, find and sort the strings from t_s to t_t, i.e., t_s → s_1 → s_2 → ⋯ → s_{j−1} → t_t
16: K̄′ = K̄ ∪ s̄_1 ∪ ⋯ ∪ s̄_{j−1}
17: By Theorem 3, L(f/G) can be obtained from K̄′
18: return L(f/G)
In Line 13 of Algorithm 3, Function 1 (in Algorithm 4) shows how to compute the choosing cost outside of the closure of secret K. For each string of each coset of the quotient set, we find its prefix of maximal length that lies in the closure of the secret, and then compute the discount cost of choosing the remaining suffix after that prefix. The specific process is shown in Algorithm 4.
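A compact sketch of this step, with suffix_cost(prefix, suffix) standing in for the discount cost V_{s_i}(t) of Definition 5 (an assumed callback, not the paper's code):

# Sketch of Function 1 / Algorithm 4: split every string at its longest prefix
# inside K_bar and record the discounted cost of the remaining suffix.

def split_at_longest_prefix_in(s, K_bar):
    for i in range(len(s), -1, -1):
        if s[:i] in K_bar:
            return s[:i], s[i:]
    return "", s

def costs_outside_closure(quotient_set, K_bar, suffix_cost):
    V = {}
    for coset in quotient_set:
        for s in coset:
            prefix, suffix = split_at_longest_prefix_in(s, K_bar)
            V[s] = (prefix, suffix, suffix_cost(prefix, suffix))
    return V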
In Line 14 of Algorithm 3, Function 2 constructs a weighted directed diagram with multi-stages and produces a path with minimal discount total choosing cost. For the diagram, the elements of a set H are regarded as the stages, and the elements of H i are defined as the nodes of each stage. Based on dynamic programming, the optimal weight between different nodes of adjacent stages is obtained in Function 3 (in Algorithm 6). Then, the weighted directed diagram is obtained. For every node of the diagram, an ordered pair is obtained by employing Function 3 (in Algorithm 6). For the ordered pair, the first element is the set of shortest paths with minimal discount total choosing cost from the starting node to the current node, and the second is the discount total choosing cost of the path. When the current node is the ending node, the path with minimal discount total choosing cost is obtained. The specific processes are shown in Algorithms 5 and 6.
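The dynamic programming of Function 2 can be sketched as a staged shortest-path computation; here extra_cost(label, s) plays the role of Function 3, returning the additional discounted cost of adding string s given the strings already chosen (zero if s is already covered). The data layout below is an assumption for illustration only.

# Sketch of the dynamic programming in Function 2 / Algorithm 5: process the stages
# H_1..H_|H| in order, and keep for every node the cheapest set of chosen strings
# (its "label") together with that set's cost.

def staged_shortest_path(stages, extra_cost):
    best = {None: (frozenset(), 0.0)}            # virtual start node t_s
    for stage in stages:
        new_best = {}
        for s in stage:
            # pick the predecessor whose label makes adding s cheapest
            pred_label, pred_cost, add = min(
                ((lbl, cst, extra_cost(lbl, s)) for lbl, cst in best.values()),
                key=lambda x: x[1] + x[2])
            new_best[s] = (pred_label | {s}, pred_cost + add)
        best = new_best
    return min(best.values(), key=lambda x: x[1])  # cheapest label at the end node t_t

In Example 4 below, this procedure selects aebgb and then aebgbt, so the minimal choosing cost outside of K̄ is 0.0002, as reported in Table 1.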
Algorithm 4 Calculation of the choosing cost V = function1(R, K) outside of K̄
Input: Quotient set R and secret K;
Output: Choosing cost V outside of K̄.
1: for [s] ∈ R do
2:   for s′ ∈ [s] do
3:     let n be the length of s′, and denote s′ = σ_1 σ_2 ⋯ σ_n
4:     i = n − 1
5:     take s′ = s_i t, and suppose that s_i = σ_1 ⋯ σ_i and t = σ_{i+1} ⋯ σ_n
6:     while i < n do
7:       if i == 1 then
8:         s_i = ϵ
9:         Break
10:      end if
11:      if s_i ∈ K̄ then
12:        Break
13:      end if
14:      i = i − 1
15:    end while
16:    compute the cost of t after s_i, that is, V_{s_i}(t)
17:  end for
18: end for
19: return V = { V_{s_i}(t) | s′ = s_i t, s′ ∈ [s], s_i ∈ K̄, s_i σ_{i+1} ∈ s̄′ \ K̄ }
Algorithm 5 Finding the set of strings Label_{t_t} = function2(H, V) whose choosing cost of the path from the starting node is minimal
Input: H = {H_1, H_2, …, H_{|H|}} and V, where |H| is the number of elements of H;
Output: the string set Label_{t_t} of the shortest path from the starting node to the ending node.
1: H_0 = {t_s}
2: Label_{t_s} = {t_s}
3: for j = 1 : 1 : |H| do
4:   for s ∈ H_j do
5:     if j = 1 then
6:       a_{t_s, s} = V_{s_i}(t)
7:       Label_s = Label_{t_s} ∪ {s}
8:       V_min(s) = a_{t_s, s}
9:     else
10:      for s′ ∈ H_{j−1} do
11:        a_{s′, s} = function3(s′, s, Label_{s′}, V_min(s′))
12:        V_{s′}(s) = V_min(s′) + a_{s′, s}
13:      end for
14:      s′ = arg min { V_{s′}(s) | s′ ∈ H_{j−1} }
15:      Label_s = Label_{s′} ∪ {s}
16:      V_min(s) = V_{s′}(s)
17:    end if
18:  end for
19: end for
20: H_{j+1} = {t_t}
21: for s ∈ H_j do
22:   a_{s, t_t} = 0
23: end for
24: s = arg min { V_min(s) | s ∈ H_{|H|} }
25: Label_{t_t} = Label_s ∪ {t_t}
26: return Label_{t_t}
Algorithm 6 Computing the optimal weight value a_{s′, s} = function3(s′, s, Label_{s′}, V_min(s′)) between s′ and s
Input: s′, s, Label_{s′}, V_min(s′);
Output: Optimal weight value a_{s′, s} from node s′ to node s.
1: a_{s′, s} = ∞
2: for s″ ∈ Label_{s′} do
3:   compute the longest common prefix s_1 of s″ and s such that s″ = s_1 t_1 and s = s_1 t_1′, with t_1 ≠ t_1′
4:   if t_1 = ϵ then
5:     compute the cost of t_1′ after s_1, i.e., V_{s_1}(t_1′)
6:     a_{s′, s} = min { a_{s′, s}, V_{s_1}(t_1′) }
7:   else
8:     if t_1′ = ϵ then
9:       a_{s′, s} = 0
10:      Break
11:    else
12:      a_{s′, s} = min { a_{s′, s}, V_{s_1}(t_1′) }
13:    end if
14:  end if
15: end for
16: return a_{s′, s}
According to the calculation process of Algorithm 3, we have the following theorem to show the solution of Model (2):
Theorem 4. 
We consider system G and secret K ⊆ L(G). In Case 3 of Scenario 1, the closed-loop behavior L(f/G) produced in Algorithm 3 is an optimal solution of Model (2).
Proof. 
We first show that closed-loop behavior L ( f / G ) produced in Algorithm 3 is a feasible solution of optimal Model (2).
1.
Proof of the opacity (the first constraint condition).
As shown in Case 3, the secrets of K \ K̲ can be confused by the non-secret strings of K̄. For K̲, all the non-secrets in L(G) which cannot be distinguished from the secrets in K̲ are in ∪_j H_j of Line 12. At Lines 14 and 15, each string s_i comes from H_i, where i ∈ {1, 2, …, j − 1}. At Lines 15 and 16, we have K̄ ⊆ L(f/G) and {s_1, s_2, …, s_{j−1}} ⊆ L(f/G). So, the closed-loop behavior L(f/G) produced in Algorithm 3 can ensure the opacity of secret K.
2.
Proof of the secret remaining in the closed-loop system being maximal (the second constraint condition).
According to Lines 15 and 16, it holds that K̄ ⊆ L(f/G). So, the second constraint condition is true.
To sum up, the closed-loop behavior L ( f / G ) obtained in Algorithm 3 is a feasible solution of optimal Model (2).
Secondly, we show that the discount total choosing cost of closed-loop behavior L ( f / G ) produced by Algorithm 3 is minimal.
Since it holds that K̄ ⊆ L(f/G), the discount total choosing cost of L(f/G) can be computed as follows:
V(L(f/G)) = V(K̄) + V(L(f/G) \ K̄).   (3)
V(K̄) is finite. By Formula (3), to minimize the discount total choosing cost of L(f/G), we need to show that V(L(f/G) \ K̄) is minimal. As shown in Line 1 of Algorithm 3, language L contains all the non-secrets in L(G) that cannot be distinguished from the secrets of K̲. So, if we want to make V(L(f/G) \ K̄) minimal, all the strings in L(f/G) \ K̄ must come from L, and then L(f/G) \ K̄ ⊆ L holds. According to Lines 3–11 of Algorithm 3, we have L = ∪_j H_j, and all the strings in H_j can confuse one secret of K̲ and its equivalent secrets. Choosing only one string from each set H_j of H is a necessary condition to minimize V(L(f/G) \ K̄).
At Line 13 of Algorithm 3 (i.e., Function 1 of Algorithm 4), all the strings s = s_i t in L are traversed and the choosing cost V_{s_i}(t) after s_i ∈ K̄ is obtained, where t = σ_{i+1} ⋯ σ_n and s_i σ_{i+1} ∉ K̄.
At Line 14 of Algorithm 3, a multi-stage diagram is constructed on ∪_j H_j ∪ {t_s} ∪ {t_t}, where the initial node t_s and the ending node t_t are virtual and H_j is the set of nodes of the jth stage. To pick only one string from each H_j, we find a path from t_s to t_t. Algorithm 6 is then used to compute the optimal weight a_{s′,s} of the transition from node s′ to node s between adjacent stages. With these optimal weights, the discount total choosing cost of the nodes (i.e., strings) of a path is equal to the total weight of the path (Line 12 of Algorithm 5). At Lines 3–19 of Algorithm 5, the shortest path Label_s and its minimal discount total choosing cost V_min(s) at the jth stage are obtained from Label_{s′} and V_min(s′) of the (j−1)th stage based on dynamic programming. As shown in the above analysis of Line 14 of Algorithm 3 (i.e., Function 2 in Algorithm 5), the first element Label_{s′} of the ordered pair (Label_{s′}, V_min(s′)) is the shortest path (i.e., the set of strings) with minimal discount total choosing cost from the starting node t_s to the current node s′, and the second element V_min(s′) is the discount total choosing cost of that path. When the current node is t_t (Line 20 of Algorithm 5), Label_{t_t} is the shortest path from the initial node to the ending node and V_min(t_t) is its minimal discount total choosing cost (see Lines 21–25 of Algorithm 5). So, Label_{t_t} is the subset of L whose discount total choosing cost is minimal and whose strings can confuse all the secrets of K̲.
At Lines 16 and 17 of Algorithm 3, the closed-loop behavior L(f/G) is ensured to be controllable and closed by Corollary 2. The discount total choosing cost of L(f/G) is minimal, as shown in the above analysis.
All in all, closed-loop behavior L ( f / G ) produced in Algorithm 3 is an optimal solution of Model (2).   □
Example 4. 
We consider the finite transition system G = (Q, Σ, δ, q_0) and secret K = {a, ab, aebg, aebgt} shown in Figure 7, where Σ_u = {f, t} and K can be recognized by Q_s = {3, 6, 13, 16}. We suppose that the adversary has complete knowledge of the supervisor's control policy, and the set of events observed by the adversary is Σ_a = {a, d, f, g, t}. It is verified that K is opaque with respect to L(G) and Σ_a, but K is not opaque with respect to K̄ and Σ_a, i.e., the secret strings aebg and aebgt cannot be confused by any non-secret string of K̄. So, Case 3 of Scenario 1 is fulfilled and Assumption 1 is true. Next, we construct the closed-loop behavior L(f/G) using Algorithm 3.
For system G and secret K, we obtain language K ̲ = { a e b g , a e b g t } , where the strings in K ̲ cannot be confused by any strings of K ¯ . From the opacity of L ( G ) , we can find sub-language L = { a e b g t e , a e b g b t , a e b g b , e a b g t , e a b g , e a b e g , e a b e g t } , whose strings cannot be distinguished with the secret in K ̲ . For language L, we offer the following computational process:
We take aebg ∈ K̲, and then we have θ(aebg) = ag and H_1 = [aebg] = {eabg, eabeg, aebgb}.
We take aebgt ∈ K̲, and then we have θ(aebgt) = agt and H_2 = [aebgt] = {eabgt, eabegt, aebgte, aebgbt}.
So, the quotient set L/[·] = { {eabg, eabeg, aebgb}, {eabgt, eabegt, aebgte, aebgbt} } is a partition of L.
For [aebg], we can compute the choosing cost of the suffix of each non-secret string outside of K̄ in the first stage (see Function 1 of Algorithm 4).
If s = eabg, we have ϵ ∈ K̄ and V_ϵ(eabg) = 5.126.
If s = eabeg, we have ϵ ∈ K̄ and V_ϵ(eabeg) = 5.1256.
If s = aebgb, we have aebg ∈ K̄ and V_{aebg}(b) = 0.0002.
For coset [aebgt], we can similarly obtain the following in the second stage:
If s = eabgt, we have ϵ ∈ K̄ and V_ϵ(eabgt) = 5.126.
If s = eabegt, we have ϵ ∈ K̄ and V_ϵ(eabegt) = 5.1256.
If s = aebgte, we have aebgt ∈ K̄ and V_{aebgt}(e) = 0.00005.
If s = aebgbt, we have aebg ∈ K̄ and V_{aebg}(bt) = 0.0002.
Based on H_1, H_2 and the choosing costs outside of K̄ above, the weighted directed diagram shown in Figure 8 is constructed using Algorithm 5, which calls Algorithm 6. In the diagram, every node is shown as a fraction: its numerator is a non-secret string s = s_i σ_{i+1} ⋯ σ_{i+k} in ∪_{j=1}^{2} H_j, and its denominator is V_{s_i}(t), where s_i ∈ K̄ and s_i σ_{i+1} ∉ K̄.
To show the weight between nodes of adjacent stages, some weight is given as follows by Definition 5:
V e a b g ( t ) = 0 , V e a b ( e g t ) = 0.0056 , V e a b e g ( t ) = 0 , V e a b ( g t ) = 0.006 , V a e b g b ( t ) = 0 .
By Algorithm 5, label L a b e l s and minimal choosing cost V min ( s ) of a path from initial node t s to current node s are computed in Table 1.
In Table 1, we have Label_{t_t} = {t_s, aebgb, aebgbt, t_t} for the ending node t_t. From Line 16 of Algorithm 3, we know that the shortest path is t_s → aebgb → aebgbt → t_t. So, it holds that min V(L(f/G) \ K̄) = 0.0002. Since V(K̄) = 1.726 (by Algorithm 1) is finite, V(L(f/G)) = 1.726 + 0.0002 = 1.7262. So, in Line 17, we have L(f/G), the prefix-closure of {t, abtf, aebgt, aebgbt}, shown in Figure 9.
In Figure 9, it is verified that V ( L ( f / G ) ) = 1.7262 by Algorithm 1.

6. Scenario 2: Secret K Is Not Opaque with Respect to L(G) and Σ a

For system G and secret K ⊆ L(G), if K is not opaque with respect to L(G) and Σ_a, we need to design a supervisor that prevents any secret from being disclosed. To obtain the supervisor, we can use the methods of [1,4] to compute the maximal permissive sublanguage of L(G) that ensures the opacity of K. Then, Scenario 1 is fulfilled. Next, we propose Algorithm 7 to solve Model (1).
Algorithm 7 Optimal supervisory control II
Input: Automaton G, secret K, and choosing cost c(s, γ);
Output: Closed-loop behavior L(f/G).
1: Compute the observer obs(G) of G
2: Compute the parallel composition G ∥ obs(G)
3: For the parallel composition G ∥ obs(G), find supervisor g such that L(g/G) is the maximal permissive closed-loop behavior which can ensure K opaque [1,4].
4: G′ = g/G
5: K′ = K ∩ L(g/G)
6: if K̄′ = L(G′) then
7:   Design supervisor f′ such that L(f′/G′) = L(G′)
8: else
9:   if ∀ s ∈ K′, ∃ s′ ∈ K̄′ \ K′ s.t. θ(s) = θ(s′) then
10:    Design supervisor f′ from K̄′ by Theorem 3, and then output the closed-loop behavior L(f′/G′)
11:  else
12:    Perform Algorithm 3 to design supervisor f′, and then output the closed-loop behavior L(f′/G′)
13:  end if
14: end if
15: L(f/G) = L(f′/G′)
16: return L(f/G)
According to the above algorithm, we first construct maximal permissive supervisor g to enforce the opacity of K. And then, it is verified that closed-loop behavior L ( g / G ) and restricted secret K L ( g / G ) meet the requirements of Scenario 1. As shown in Theorem 2, Theorem 3, and Theorem 4, we can conclude that Algorithm 7 can produce an optimal solution of Model (1).
Theorem 5. 
We consider system G and secret K L ( G ) . In Scenario 2, closed-loop behavior L ( f / G ) obtained in Algorithm 7 is an optimal solution of Model (1).
Proof. 
Firstly, we prove that L ( f / G ) is a controllable and closed sublanguage of L ( G ) . As shown in Lines 7, 10, and 12, L ( f / G ) is a closed sublanguage of L ( G ) . Next, we prove L ( f / G ) is controllable with respect to L ( G ) .
s ∈ L(f/G), σ ∈ Σ_u, sσ ∈ L(G)
⇒ s ∈ L(g/G), σ ∈ Σ_u, sσ ∈ L(G)   (L(f/G) ⊆ L(G′) = L(g/G))
⇒ sσ ∈ L(g/G)   (L(g/G) is controllable with respect to L(G))
⇒ sσ ∈ L(f/G)   (L(f/G) = L(f′/G′) is controllable with respect to L(g/G))
Therefore, L(f/G) is a controllable and closed sublanguage of L(G). At Line 15, there exists a supervisor f such that L(f/G) = L(f′/G′).
Secondly, we show that closed-loop behavior L ( f / G ) produced by Algorithm 7 is a feasible solution of Model (1).
1.
Showing the opacity of L ( f / G ) .
In Lines 3–5, it is obvious that K′ is opaque with respect to L(G′) and Σ_a. At Lines 6–14, by Theorems 2–4, it holds that K′ is opaque with respect to L(f′/G′) and Σ_a, which implies that θ(K′ ∩ L(f′/G′)) ⊆ θ(L(f′/G′) \ K′). According to Lines 4, 5, and 15, we have θ(K ∩ L(f/G)) ⊆ θ(L(f/G) \ (K ∩ L(g/G))). Since L(f/G) ⊆ L(g/G), we have θ(L(f/G) \ (K ∩ L(g/G))) ⊆ θ(L(f/G) \ (K ∩ L(f/G))). So, θ(K ∩ L(f/G)) ⊆ θ(L(f/G) \ (K ∩ L(f/G))) holds, which means θ(K ∩ L(f/G)) ⊆ θ(L(f/G) \ K) is true. Therefore, K is opaque with respect to L(f/G) and Σ_a.
2.
Showing that closed-loop behavior L ( f / G ) can preserve the maximal secret information.
For system G′ and secret K′, at Lines 4–14, it is obvious that L(f′/G′) is a feasible solution of Model (2), which implies that K̄′ ⊆ L(f′/G′). Since K ∩ L(g/G) = K′ at Line 5 and L(f/G) = L(f′/G′) at Line 15, it holds that K ∩ L(g/G) ⊆ L(f/G), which implies that K ∩ L(g/G) ⊆ K ∩ L(f/G). Since L(f/G) ⊆ L(G′) holds, it is true that L(f/G) ⊆ L(g/G). So, we have K ∩ L(f/G) ⊆ K ∩ L(g/G). Therefore, we have K ∩ L(f/G) = K ∩ L(g/G).
To conclude, closed-loop behavior L ( f / G ) produced by Algorithm 7 is a feasible solution of Model (1).
Finally, we show that the discount total choosing cost of L ( f / G ) produced by Algorithm 7 is minimal for Model (1).
We assume that the closed-loop behavior L(f/G) produced by Algorithm 7 is not the optimal solution of Model (1). So, there exists a supervisor f_1 such that L(f_1/G) is a feasible solution of Model (1) and V(L(f_1/G)) < V(L(f/G)) holds. For Model (1), the two constraint conditions are satisfied by L(f_1/G). The two conditions mean that K ∩ L(f_1/G) is opaque with respect to L(f_1/G) and Σ_a, and that K ∩ L(f_1/G) = K ∩ L(g/G) holds. As shown in Line 5, we have K′ = K ∩ L(f_1/G). Taking G_1 = f_1/G, it holds that K̄′ ⊆ L(G_1). Based on the assumption about L(f_1/G) and L(g/G), we have L(f_1/G) ⊆ L(g/G). Then, we consider the following two cases:
Case 1 
If L(f_1/G) = L(g/G), we discuss the relation between K̄′ and L(G_1).
1.1 
If K̄′ = L(G_1), it holds that L(f_1/G) = L(f/G) by Lines 6 and 7, which contradicts the assumption that V(L(f_1/G)) < V(L(f/G)).
1.2 
If K̄′ ⊊ L(G_1), it holds that V(L(f/G)) ≤ V(L(f_1/G)) by Theorems 3 and 4 at Lines 9–13, which contradicts the assumption that V(L(f_1/G)) < V(L(f/G)).
Case 2 
If L(f_1/G) ⊊ L(g/G), it is true that s ∉ K̄′ for any s ∈ L(g/G) \ L(f_1/G), because of the formulas K̄′ ⊆ L(G_1) and L(f_1/G) = L(G_1); that is, every such s lies outside of K̄′. Then, we discuss the relation between K̄′ and L(G_1) again.
2.1 
If K̄′ = L(G_1), then for any s ∈ K′ there exists s′ ∈ K̄′ \ K′ such that θ(s) = θ(s′), which means that all the secret strings in K′ can be confused by non-secret strings in K̄′. By Corollary 2, it holds that V(L(f_1/G)) = V(K̄′). By Lines 9 and 10, it is true that V(L(f/G)) = V(K̄′). So, it holds that V(L(f/G)) = V(L(f_1/G)), which contradicts the assumption that V(L(f_1/G)) < V(L(f/G)).
2.2 
If K̄′ ⊊ L(G_1), we discuss the following two sub-cases:
2.2.1 
If for any s ∈ K′ there exists s′ ∈ K̄′ \ K′ such that θ(s) = θ(s′), it is obvious that V(L(f_1/G)) = V(K̄′) by Corollary 2. At Lines 9, 10, and 15 of Algorithm 7, it holds that V(L(f/G)) = V(K̄′). So, it is true that V(L(f/G)) = V(L(f_1/G)), which contradicts the assumption that V(L(f_1/G)) < V(L(f/G)).
2.2.2 
If there exists s ∈ K′ such that θ(s) ≠ θ(s′) for any s′ ∈ K̄′ \ K′, we have the following formulas:
V(L(f_1/G)) = V(K̄′, f_1) + V(L(f_1/G) \ K̄′)   (4)
V(L(f/G)) = V(K̄′, f) + V(L(f/G) \ K̄′).   (5)
Owing to the definition of a feasible solution, we have K̄′ ⊆ L(f_1/G) and K̄′ ⊆ L(f/G). So, it holds that V(K̄′, f_1) = V(K̄′, f). For the remaining parts of Formulas (4) and (5), we construct two weighted directed diagrams T_1 and T, where T_1 (or T) is the diagram produced by Algorithm 5 (at Line 14 of Algorithm 3) when f_1/G, K′, c(s, γ) (or g/G, K′, c(s, γ)) is the input of Algorithm 3.
By the constructions of L and H in Algorithm 3, it is obvious that T_1 ⊆ T (i.e., T_1 is a sub-diagram of T) and that a^{T_1}_{s′,s} ≥ a^{T}_{s′,s}, where a^{T_1}_{s′,s} (and a^{T}_{s′,s}) is the weight of arc (s′, s) in diagram T_1 (and T, respectively). So, the total weight of the shortest path of T is no more than that of T_1, which implies that V(L(f/G) \ K̄′) ≤ V(L(f_1/G) \ K̄′). According to Formulas (4) and (5), we have V(L(f/G)) ≤ V(L(f_1/G)), which contradicts the assumption that V(L(f_1/G)) < V(L(f/G)).
To conclude, it is true that V(L(f/G)) ≤ V(L(f_1/G)), which implies that the discount total choosing cost of the closed-loop behavior L(f/G) produced by Algorithm 7 is minimal for optimal Model (1). □
To show the effectiveness of Algorithm 7, we introduce the model of [1] to compute the optimal choosing control strategy.
Example 5. 
We consider the transition system G [1] shown in Figure 10, which models all sequences of possible moves of an agent in a three-storey building with a south wing and a north wing, both equipped with lifts and connected by a corridor at each floor. Moreover, there is a staircase that leads from the first floor in the south wing to the third floor in the north wing. The agent starts from the first floor in the south wing. They can walk up the stairs (s) or walk through the corridors (c) from south to north without any control. The lifts can be used several times one floor upwards (u) but altogether at most once one floor downwards (d). The moves of the lifts are controllable. Thus, Σ_c = {u, d}. The secret is that the agent is either on the second floor in the south wing or on the third floor in the north wing, i.e., Q_s = {1, 5, 7, 11}, marked by double circles. The adversary may gather the exact subsequence of moves in Σ_a = {u, c, s} from sensors, but they cannot observe the downward moves of the lifts.
For every transition of system G, the choosing cost is shown in Figure 10. In [1], there is a unique supremal prefix-closed and controllable sublanguage L(g/G) (shown in Figure 11) of L(G) such that the secret is opaque with respect to L(g/G) and Σ_a. So, V(K̄) = V_ϵ(s) + V_ϵ(cuu) = 0 + 0.55 = 0.55.
We suppose that the choosing cost c(s, γ) is attached to G and g/G, as shown in Figure 10 and Figure 11, respectively. According to Line 12 of Algorithm 7 (i.e., Algorithm 3), K̲ = {s, cuu} and L = {sd, cuud}. By the process of Algorithm 3, we have H_1 = {sd} and H_2 = {cuud}, which means there exists only one path (shown in Figure 12) from the starting node to the ending node. So, min_{f ∈ Γ^{L(G)}} V(L(f/G) \ K̄) = V_s(d) + V_{cuu}(d) = 0.2 + 0.002 = 0.202.
At Lines 16 and 17 of Algorithm 3, the resulting L(f/G) is shown in Figure 13, and it has the minimal discount total choosing cost V(L(f/G)) = V(K̄, f) + V(L(f/G) \ K̄) = 0.55 + 0.202 = 0.752.
The optimal supervisory control defined by L(f/G) prevents the agent from using the lift of the south wing, from using the lift of the north wing to go from the second floor to the third floor at any time after having used that lift downwards, and from taking the lift of the north wing downwards on the second floor.

7. Conclusions

When opacity-enforcing supervisory control is considered in discrete event systems, we have to face another issue, namely cost. In practice, we hope to reduce the cost while preserving the opacity of the supervised system. So, an optimal supervisory control model is formulated to enforce opacity by a supervisor with minimal discount total choosing cost. In the model, the objective function is to minimize the discount total choosing cost of the closed-loop behavior, and two constraint conditions are given: one enforces the opacity of the closed-loop behavior, and the other preserves the maximal part of the secret information in the closed-loop behavior. To solve the above optimal model, several algorithms and theorems are formulated, from the simplest case to the most general one.
In this paper, the plant is modeled as a finite transition system because the cosets [·] and the quotient set L/[·] in Algorithms 3 and 4 may be infinite for a finite state machine. To remove this restriction, we will replace infinite event strings with finite states and extend the optimal control problem to finite state machines, which is our future work.

Author Contributions

Conceptualization, methodology, Y.D. and F.W.; supervision, J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by National Natural Science Foundation of China grant number 61203040, Natural Science Foundation of Fujian Province grant number 2022J01295 and Science and Technology Association Project of Quanzhou.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Theorem A1. 
We consider system G and languages L, K satisfying K̄ ⊆ L and K̄ ⊆ L(G). If K is opaque with respect to K̄ and Σ_a, then K is opaque with respect to L and Σ_a.
Proof. 
s ∈ K
⇒ ∃ s′ ∈ K̄ \ K s.t. θ(s) = θ(s′)   (K is opaque with respect to K̄ and Σ_a)
⇒ ∃ s′ ∈ L \ K s.t. θ(s) = θ(s′)   (K̄ ⊆ L)
⇒ K is opaque with respect to L and Σ_a   (Definition 2)

References

1. Dubreil, J.; Darondeau, P.; Marchand, H. Supervisory control for opacity. IEEE Trans. Autom. Control 2010, 55, 1089–1100.
2. Takai, S.; Oka, Y. A formula for the supremal controllable and opaque sublanguage arising in supervisory control. SICE J. Control Meas. Syst. Integr. 2008, 1, 307–311.
3. Takai, S.; Watanabe, Y. Modular synthesis of maximally permissive opacity-enforcing supervisors for discrete event systems. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 2011, E94A, 1041–1044.
4. Moulton, R.; Hamgini, B.; Khouzani, Z.; Meira-Goes, R.; Wang, F.; Rudie, K. Using subobservers to synthesize opacity enforcing supervisors. Discret. Event Dyn. Syst. 2022, 32, 611–640.
5. Mazare, L. Using unification for opacity properties. In Proceedings of the Workshop on Information Technology & Systems, Las Vegas, NV, USA, 16–18 June 2004; pp. 165–176.
6. Bryans, J.W.; Koutny, M.; Ryan, P.Y. Modelling opacity using Petri nets. Electron. Notes Theor. Comput. Sci. 2005, 121, 101–115.
7. Lin, F. Opacity of discrete event systems and its applications. Automatica 2011, 47, 496–503.
8. Ben-Kalefa, M.; Lin, F. Opaque superlanguages and sublanguages in discrete event systems. Cybern. Syst. 2016, 47, 392–426.
9. Saboori, A.; Hadjicostis, C.N. Notions of security and opacity in discrete event systems. In Proceedings of the IEEE Conference on Decision and Control, Cancun, Mexico, 9–11 December 2008; pp. 5056–5061.
10. Saboori, A.; Hadjicostis, C.N. Verification of k-step opacity and analysis of its complexity. IEEE Trans. Autom. Sci. Eng. 2011, 8, 549–559.
11. Falcone, Y.; Marchand, H. Enforcement and validation (at runtime) of various notions of opacity. Discret. Event Dyn. Syst. 2015, 25, 531–570.
12. Saboori, A.; Hadjicostis, C.N. Verification of infinite-step opacity and analysis of its complexity. In Proceedings of the IFAC Workshop on Dependable Control of Discrete Systems, Bari, Italy, 10–12 June 2009; pp. 46–51.
13. Wu, Y.C.; Lafortune, S. Comparative analysis of related notions of opacity in centralized and coordinated architectures. Discret. Event Dyn. Syst. 2013, 23, 307–339.
14. Saboori, A.; Hadjicostis, C.N. Verification of initial-state opacity in security applications of DES. In Proceedings of the International Workshop on Discrete Event Systems, Gothenburg, Sweden, 28–30 May 2008; pp. 328–333.
15. Balun, J.; Masopust, T. Comparing the notions of opacity for discrete-event systems. Discret. Event Dyn. Syst. 2021, 31, 553–582.
16. Wintenberg, A.; Blischke, M.; Lafortune, S.; Ozay, N. A general language-based framework for specifying and verifying notions of opacity. Discret. Event Dyn. Syst. 2022, 32, 253–289.
17. Ma, Z.; Yin, X.; Li, Z. Verification and enforcement of strong infinite- and k-step opacity using state recognizers. Automatica 2021, 133, 109838.
18. Liu, R.; Lu, J. Enforcement for infinite-step opacity and K-step opacity via insertion mechanism. Automatica 2022, 140, 110212.
19. Ji, Y.; Wu, Y.C.; Lafortune, S. Enforcement of opacity by public and private insertion functions. Automatica 2018, 93, 369–378.
20. Ji, Y.; Yin, X.; Lafortune, S. Enforcing opacity by insertion functions under multiple energy constraints. Automatica 2019, 108, 108476.
21. Ji, Y.; Yin, X.; Lafortune, S. Opacity enforcement using nondeterministic publicly-known edit functions. IEEE Trans. Autom. Control 2019, 64, 4369–4376.
22. Zhou, Y.; Chen, Z.; Liu, Z.X. Verification and enforcement of current-state opacity based on a state space approach. Eur. J. Control 2023, 71, 100795.
23. Wu, Y.C.; Lafortune, S. Synthesis of insertion functions for enforcement of opacity security properties. Automatica 2014, 50, 1336–1348.
24. Zhang, B.; Shu, S.L.; Lin, F. Maximum information release while ensuring opacity in discrete event systems. IEEE Trans. Autom. Sci. Eng. 2015, 12, 1067–1079.
25. Behinaein, B.; Lin, F.; Rudie, K. Optimal information release for mixed opacity in discrete-event systems. IEEE Trans. Autom. Sci. Eng. 2019, 16, 1960–1970.
26. Khouzani, Z.A. Optimal Payoff to Ensure Opacity in Discrete-Event Systems. Master's Thesis, Queen's University, Kingston, ON, Canada, 2019.
27. Hou, J.; Yin, X.; Li, S. A framework for current-state opacity under dynamic information release mechanism. Automatica 2022, 140, 110238.
28. Sengupta, R.; Lafortune, S. An optimal control theory for discrete event systems. SIAM J. Control Optim. 1998, 36, 488–541.
29. Pruekprasert, S.; Ushio, T. Optimal stabilizing supervisor of quantitative discrete event systems under partial observation. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 2016, 99, 475–482.
30. Ji, X.; Lafortune, S. Optimal supervisory control with mean payoff objectives and under partial observation. Automatica 2021, 123, 109359.
31. Hu, Q.; Yue, W. Two new optimal models for controlling discrete event systems. J. Ind. Manag. Optim. 2017, 1, 65–80.
32. Yue, W.; Hu, Q. Optimal control for discrete event systems with arbitrary control pattern. Discret. Contin. Dyn. Syst. Ser. B 2012, 6, 535–558.
33. Cassandras, C.; Lafortune, S. Introduction to Discrete Event Systems; Springer: Berlin/Heidelberg, Germany, 2008.
Figure 1. System f/G.
Figure 2. Tree automaton obtained from Figure 1, where t = σ1⋯σl, t1 = σi1⋯σim and t2 = σj1⋯σjn.
Figure 3. Tree automaton based on K̄.
Figure 4. Flow chart to solve optimal Model (1), where G is the closed-loop system under some supervisor and K is the secret in G.
Figure 5. System G.
Figure 6. Closed-loop language L(f/G).
Figure 7. System G.
Figure 8. A weighted directed diagram.
Figure 9. Closed-loop language L(f/G).
Figure 10. System G.
Figure 11. Closed-loop system g/G.
Figure 12. A weighted directed diagram.
Figure 13. Closed-loop behavior L(f/G) with the minimal discount total choosing cost.
Table 1. The set Labels of the shortest path and its minimal discount total choosing cost V_min(s) at every node s of the diagram.

| s      | j | Labels                      | V_min(s) |
|--------|---|-----------------------------|----------|
| t_s    | 0 | {t_s}                       | 0        |
| eabg   | 1 | {t_s, eabg}                 | 5.126    |
| eabeg  | 1 | {t_s, eabeg}                | 5.1256   |
| aebgb  | 1 | {t_s, aebgb}                | 0.0002   |
| eabgt  | 2 | {t_s, eabg, eabgt}          | 5.126    |
| eabegt | 2 | {t_s, eabeg, eabegt}        | 5.1256   |
| aebgbt | 2 | {t_s, aebgb, aebgbt}        | 0.0002   |
| aebgte | 2 | {t_s, aebgb, aebgte}        | 0.00025  |
| t_t    | 3 | {t_s, aebgb, aebgbt, t_t}   | 0.0002   |
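
A table like Table 1 can be populated by a layer-by-layer dynamic program over the weighted directed diagram. The following is a minimal sketch of such a computation; it is not the paper's Algorithm 3 or 4. The layered adjacency structure, the edge costs, the node names, and the discount factor lam (applied here as lam^(j-1) to the j-th edge) are all illustrative assumptions.

```python
def discounted_shortest_paths(layers, edges, cost, lam):
    """layers: list of lists of node names, layers[0] = [source];
    edges: dict mapping a node to its successor nodes;
    cost:  dict mapping an edge (u, v) to its choosing cost;
    lam:   discount factor, applied as lam**(j-1) to the j-th edge."""
    source = layers[0][0]
    V = {source: 0.0}            # minimal discounted total cost found so far
    labels = {source: [source]}  # nodes on one optimal path (the set Labels)
    for j in range(1, len(layers)):
        for v in layers[j]:
            candidates = [(V[u] + lam ** (j - 1) * cost[(u, v)], u)
                          for u in layers[j - 1]
                          if u in V and v in edges.get(u, [])]
            if not candidates:
                continue          # v is not reachable from the source
            best, u_best = min(candidates)
            V[v] = best
            labels[v] = labels[u_best] + [v]
    return V, labels


# Tiny usage example (illustrative numbers, not those of Table 1):
layers = [['t_s'], ['x', 'y'], ['t_t']]
edges = {'t_s': ['x', 'y'], 'x': ['t_t'], 'y': ['t_t']}
cost = {('t_s', 'x'): 5.0, ('t_s', 'y'): 1.0, ('x', 't_t'): 1.0, ('y', 't_t'): 2.0}
V, labels = discounted_shortest_paths(layers, edges, cost, lam=0.1)
print(V['t_t'], labels['t_t'])   # 1.2 ['t_s', 'y', 't_t']
```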