Article

Verification and Enforcement of (ϵ, ξ)-Differential Privacy over Finite Steps in Discrete Event Systems

1. School of Electro-Mechanical Engineering, Xidian University, Xi’an 710071, China
2. Institute of Systems Engineering, Macau University of Science and Technology, Taipa 999078, Macau SAR, China
3. School of Electrical and Mechanical Engineering, Xuchang University, Xuchang 461000, China
4. Industrial Engineering Department, College of Engineering, King Saud University, P.O. Box 800, Riyadh 11421, Saudi Arabia
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(24), 4991; https://doi.org/10.3390/math11244991
Submission received: 7 November 2023 / Revised: 14 December 2023 / Accepted: 15 December 2023 / Published: 18 December 2023
(This article belongs to the Section Mathematics and Computer Science)

Abstract: In the realm of data protection strategies, differential privacy ensures that unauthorized entities cannot reconstruct original data from system outputs. This study explores discrete event systems modeled by probabilistic automata. The central concern is the protection of state information, particularly the privacy of a set of multiple initial states. We introduce an evaluation criterion for safeguarding initial states and propose an algorithmic method that prevents adversaries from identifying, with high probability, any state within this set from observed data. The method is effective when the probability distributions of the observations tied to these states remain sufficiently close. If a system's structure does not meet the state differential privacy requirement, we propose an enhanced supervisory control mechanism that enforces state differential privacy across all initial states while maintaining operational flexibility within the probabilistic automaton framework. Finally, a numerical study validates the strength of the approach for probabilistic automata and discrete event systems.

1. Introduction

In the context of expansive data analytics and the handling of large, sensitive datasets, the RibsNet architecture [1] presents a significant advancement in data center networks, enhancing the management and confidentiality of extensive data collections. This innovation aligns with the ongoing efforts in data privacy, where methodologies like nearest similarity-based clustering [2], k-anonymity [3], and augmented l-diversity [4] have been developed, and both the scholarly community and the industrial sphere are diligently strengthening the confidentiality of sensitive elements within extensive databases. These advanced methods strive to balance the essential benefits of big data analytics with the ethical need for individual privacy [2]. Especially in domains of heightened sensitivity, such as healthcare, these mechanisms for preserving privacy serve as a sine qua non for engendering data-driven insights while mitigating the inherent risks posed by the disclosure of personalized data [3,4].
In data privacy, advanced anonymity architectures are being carefully improved to protect sensitive information in transactional databases. These pioneering algorithms, frequently modulated by bespoke sensitivity indices dictated by the end users, epitomize a finely calibrated balance between data obfuscation and utility. Surpassing extant paradigms in terms of computational frugality and data preservation, these algorithms prioritize the diminution of information attrition [5]. In synchrony with the emergent paradigms in the data privacy landscape, bio-inspired computational strategies such as the black hole algorithm are being judiciously employed to surmount the conventional limitations endemic to k-anonymity models. Preliminary empirical evidence substantiates the black hole algorithm's superior capacity for amplifying data utility when juxtaposed with existing frameworks [6].
Differential privacy is a game-changing concept that precisely measures the privacy risks associated with using statistical databases. Transcending classical paradigms, this framework contends with privacy infirmities that affect even those individuals who are conspicuously absent from the database’s annals. Through the deployment of a sophisticated suite of algorithms, differential privacy navigates an impeccable equilibrium between maximizing data utility and furnishing unassailable privacy guarantees. As a result, it carves out a novel ambit in the realm of secure, yet perspicacious, data analytics [7,8].
The incorporation of stochastic noise has solidified its role as a quintessential instrument for attaining differential privacy, providing a rigorous framework for individual data safeguarding within collective data assemblages. Employing exacting mathematical formulations such as $\epsilon$-differential privacy [9] and $(\epsilon, \delta)$-differential privacy [10], this approach critically evaluates the performance metrics of assorted noise-introduction protocols. It furnishes a nuanced exegesis of the intrinsic trade-offs between data concealment and utility, notably in the realm of consensus algorithms.
Advanced frameworks that integrate noise have been developed to protect the subtle differences between similar datasets using differential privacy principles. These frameworks facilitate the injection of stochastic noise, which adheres to various probabilistic distributions, such as the Laplacian or Gaussian models, into the results of data queries. By doing so, they fulfill the stipulated privacy assurances [10,11].
In the evolving field of differential privacy, especially for qualitative data, moving from global sensitivity models to localized sensitivity approaches is a significant change. While the exponential mechanism [12] has long served as the bedrock for global sensitivity-centric strategies, the recently unveiled local dampening mechanism [13] inaugurates a more sophisticated and malleable architecture for safeguarding data privacy. This innovative development exploits local sensitivity metrics to furnish more contextually nuanced and granular privacy assurances, thereby pioneering new frontiers in the intricate realm of qualitative data protection.
Discrete event systems (DESs) function as a formidable paradigm for the conceptualization, scrutiny, and governance of a diverse gamut of systems, wherein the conceptual entity of an ‘event’ is pivotal in dictating systemic dynamics. Ranging from industrial manufacturing sequences and computational networks to telecommunication infrastructures and operations research, the scope of DESs effortlessly straddles conventional disciplinary demarcations. Within this multifaceted domain, a salient focal point comprises the assessment and amelioration of security susceptibilities, especially in scenarios characterized by incomplete observability [14].
Opacity [15] stands as an indispensable facet of security within the ambit of DESs, concentrating on the extent to which a system discloses its internal confidentialities to an extraneous, non-active observer, frequently referred to as the intruder or malicious observer. In this intellectual landscape, the notion of language-based opacity [16] takes on considerable weight. This concept delves into the inherent restrictions that come with an outsider's incomplete surveillance of a system. The critical inquiry here is whether these observational limitations prevent the outsider from discerning whether the unfolding sequence of events betrays confidential or delicate matters.
State-based opacity [17] augments the analytical tableau by imbuing it with discrete temporal dimensions that enrich the overarching conceptual framework. Contingent upon the particular temporal instances at which a system transits through confidential states, this genre of opacity can be meticulously categorized into four seminal subclasses: initial-state opacity [18], current-state opacity [19], k-step opacity [20], and infinite-step opacity [21]. Each of these refined categories furnishes a stratified comprehension of a system’s robustness vis-à-vis unauthorized external examination across diverse temporal vicissitudes. Notwithstanding its pivotal significance, the authentication of opacity within the frameworks of DESs is anything but elementary, frequently confronting formidable computational impediments. Cutting-edge inquiries in the discipline delve into the architectural constituents of automata paradigms that could potentially facilitate the more manageable verification of opacity [22].
Within the intricate nexus of differential privacy and symbolic control architectures, the transposition of statistical privacy schemata to non-quantitative datasets establishes a pioneering standard. Leveraging exponential methodologies and specialized automata configurations, sensitive alphanumeric sequences are skillfully approximated. This is executed with a dual objective: the preservation of informational sanctity and the retention of data utility. The governance of this intricate process is underscored by the employment of distance metrics such as the Levenshtein distance [23].
At this interdisciplinary juncture, the enhancement of data security is amplified across a diverse spectrum of fields, ranging from natural language processing to intricate industrial systems. Predicated upon this foundational understanding, the current scholarly endeavor aims to further elevate this paradigm. The focus is sharpened on safeguarding the opacity of initial states within DESs, utilizing probabilistic automata embedded within the stringent confines of differential privacy. Within the realm of discrete-event systems, the notion of initial state opacity emerges as an indispensable yardstick for assessing the system’s prowess in concealing its nascent state from unauthorized external entities.
Classical deterministic frameworks for initial state opacity have evolved to encompass probabilistic and stochastic paradigms, thus facilitating a more textured comprehension of security dynamics in volatile environments. To cater to elevated privacy requisites, especially in situations where an intrusive entity possesses comprehensive structural awareness of the system, advanced iterations of opacity, such as robust initial state opacity, have been formulated [19,24,25].
Techniques for substantiating these robust opacity conditions are undergoing refinement via innovative approaches, including parallel composition techniques and integer linear programming algorithms [26]. Initial state opacity is a crucial area that combines verification methods, probability models, and cybersecurity concepts to effectively assess data protection in dynamic systems. Cutting-edge computational schemas, such as Petri net models, facilitate the expeditious verification of initial state opacity, obviating the necessity for laborious state space enumeration. Contemporary real-time validation methodologies, encompassing linear programming algorithms, further enhance the relevance and applicability of initial state opacity in intricate network architectures, including those inherent to defense systems and mobile communications networks [27,28].
In architectures delineated via non-deterministic finite automata and their probabilistic counterparts, i.e., probabilistic finite automata, the verification of initial state opacity necessitates sophisticated computational modalities and analytical techniques. The complexity is exacerbated when extended to non-deterministic transition systems, particularly in instances where state spaces are potentially unbounded. The exploration of initial state opacity thus constitutes a critical nexus between formal computational methodologies, automata theory, and information security, engendering both algorithmic intricacies and theoretical conundrums [29,30]. Differential privacy was integrated into discrete event systems using probabilistic automata in a key piece of research [31]. The main goal was to protect state information representing system resource configurations. This technique was designed to provide state differential privacy, with an emphasis on thorough, step-by-step validation, making it difficult for potential adversaries to infer the system's starting state from a limited collection of observations.
In an era where the landscape of extensive data analysis is rapidly evolving, the necessity for impregnable data privacy frameworks becomes ever more apparent. The research presented herein constitutes a substantial breakthrough in the sphere of differential privacy within the context of DESs. This study introduces the concept of the privacy decay factor [32], denoted as $\xi$, a development that diverges markedly from preceding research. Prior studies primarily concentrated on the protection of state information within DESs through the utilization of probabilistic automata. This research, however, forges new paths by integrating this innovative parameter. The implementation of $\xi$ has proven to be particularly efficacious in ensuring the privacy of initial states or conditions, thereby guaranteeing the secure concealment of sensitive data from the very outset of data processing, a pivotal requirement in the face of continually evolving privacy standards.
By broadening the scope to encompass an expanded range of initial state conditions and databases, this approach markedly bolsters the method’s resilience and adaptability. The incorporation of ξ addresses a significant void in existing methodologies, providing a more fortified framework for preserving the privacy of initial states against potential adversaries. This methodical consideration of an extensive array of starting conditions not only augments the theoretical foundations of differential privacy within DESs but also showcases its tangible applicability across a spectrum of intricate systems.
Building on this essential understanding, the present scholarly pursuit endeavors to advance this paradigm further. It concentrates on the fortification of the privacy of initial states within DESs, employing probabilistic automata encased within the stringent parameters of differential privacy. As a consequence, this research establishes a new benchmark in the domain of privacy preservation, signaling a shift in data security strategies across diverse sectors. This is particularly pertinent in industries where stringent privacy measures from the commencement of data processing are of the utmost importance. The following lines describe the main contributions of this scientific endeavor:
  • The research introduces a novel model of $(\epsilon, \xi)$-differential privacy for discrete event systems (DESs) formulated by probabilistic automata, ensuring equitable resource distribution across multiple initial states.
  • A novel verification strategy is introduced, originating from distinct observations of a vast set of initial states and evaluating adherence to the tenets of $(\epsilon, \xi)$-differential privacy across defined observational sequences.
  • By seamlessly integrating probabilistic automata with a diverse set of initial states, the study presents a tailored verification approach. Should a system deviate from the privacy standards, a specialized control mechanism is deployed, enforcing $(\epsilon, \xi)$-differential privacy within the overarching closed-loop system.
  • The research's potency is exemplified through a detailed numerical case study, illustrating the verification method's acumen in assessing the privacy integrity of specific automata classes.
The rest of this paper is organized in a way that facilitates a comprehensive study of the relevant topics. Section 2 sets the foundation by introducing the basics of probabilistic automata and the important principles of ( ϵ , ξ ) -differential privacy in the context of data security. Section 3 serves as the main investigative focus of the study. The approach to the verification of ( ϵ , ξ ) -differential privacy over finite steps is detailed in Section 4. In Section 5, we describe a method for ensuring ( ϵ , ξ ) -differential privacy via supervisory control. A numerical case study is presented in Section 6 to offer empirical support for the methodologies proposed. Finally, Section 7 concludes the paper, summarizing the key findings and their implications.

2. Preliminaries

This section introduces the concept of probabilistic automata within the framework of discrete event systems and discusses the conventional notion of $(\epsilon, \xi)$-differential privacy.

2.1. Probabilistic Automata in Discrete Event Systems (DESs)

In the study of DESs, the use of deterministic finite automata is crucial. A deterministic finite automaton can be formally described by a structure $D = (S, \Sigma, \delta, s_0)$ [33], where $S$ is a finite set of states, and $\Sigma$ is the set of events, which can be further divided into $\Sigma_o$ for the set of observable events and $\Sigma_{uo}$ for that of unobservable events. The partial transition function $\delta: S \times \Sigma \to S$ defines the deterministic behavior of the DES, mapping a state and an event to a subsequent state. Finally, $s_0$ denotes the initial state, taken from the set $S$, before any events have occurred.
To grasp the operation of the deterministic finite automaton, consider $\delta(s, e)$ to be defined if event $e \in \Sigma$ can trigger a transition from state $s$. Adapting $\delta$ for event sequences or strings, we have $\delta: S \times \Sigma^* \to S$. This function operates as per $\delta(s, \varepsilon) = s$ and $\delta(s, ue) = \delta(\delta(s, u), e)$, given that both $\delta(s, u)$ and $\delta(\delta(s, u), e)$ are defined for a state $s$, a string $u \in \Sigma^*$, and an event $e \in \Sigma$. For any state $s$ and string $u$, if $\delta$ is appropriately defined for the string $u$ at state $s$, this is denoted as $\delta(s, u)!$. The length of a string $u$, indicating the number of events in it, is denoted by $|u|$.
For an automaton $D = (S, \Sigma, \delta, s_0)$ and a specific state $s$, the language that the automaton generates, starting from state $s$, is given by $L(D, s) = \{u \in \Sigma^* \mid \delta(s, u)!\}$. In scenarios in which potential attackers can observe and log only the observable events, a key tool to consider is the natural projection, denoted by $P: \Sigma^* \to \Sigma_o^*$. This projection effectively translates any executed string within the system into a corresponding sequence of observable events, termed an observation. The definition of this projection is recursive: for any string $u \in \Sigma^*$ and event $e \in \Sigma$, the projection is defined as $P(ue) = P(u)P(e)$. It is important to note that $P(e) = e$ when $e \in \Sigma_o$, and $P(e) = \varepsilon$ (the empty string) when $e \in \Sigma_{uo}$. Building on this, the set of observations generated by an automaton $D = (S, \Sigma, \delta, s_0)$ starting from a state $s \in S$ can be defined as
$$L_o(D, s) = \{\omega \in \Sigma_o^* \mid \exists u \in L(D, s): P(u) = \omega\}.$$
This essentially captures all observable sequences corresponding to the possible behaviors of the automaton from the state s.
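For concreteness, the projection $P$ admits a direct implementation. The following is a minimal Python sketch under the assumption that events are encoded as single characters; the helper name is ours, not the paper's.

```python
# A minimal sketch of the natural projection P, assuming events are encoded as
# single-character strings; the helper name `project` is ours, not the paper's.
def project(u, observable):
    """P(u): erase unobservable events from the string u, keeping observable ones."""
    return "".join(e for e in u if e in observable)

# With observable events {a, b} and unobservable event t, the string "atb"
# projects to the observation "ab", and a fully unobservable string to epsilon.
assert project("atb", {"a", "b"}) == "ab"
assert project("tt", {"a", "b"}) == ""
```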
Definition 1
(Probabilistic automaton [34]). A probabilistic automaton is defined as a tuple $G = (D, \rho)$, where
  • $D = (S, \Sigma, \delta, s_0)$ denotes the underlying deterministic finite automaton,
  • $\rho: S \times \Sigma \to [0, 1]$ serves as a probability distribution function over transitions.
Again, $S$ is the set of states, and $s_0 \in S$ is the initial state. Given a state $s \in S$ and an event $e \in \Sigma$,
$$\rho(s, e) \begin{cases} = 0, & \text{if } \delta(s, e) \text{ is undefined} \\ > 0, & \text{if } \delta(s, e) \text{ is defined,} \end{cases}$$
and the set of feasible events at a given state $s$ is $E(s) = \{e \in \Sigma \mid \rho(s, e) > 0\}$, subject to the constraint $\sum_{e \in E(s)} \rho(s, e) = 1$.
When the underlying structure is implied, the probabilistic automaton $G$ can be denoted succinctly as $G(s_0)$. Given a probabilistic automaton $G = (S, \Sigma, \delta, s_0, \rho)$, the metric $Pr_\sigma$ plays a pivotal role in determining the likelihood of generating a string in $\Sigma^*$ from state $s$ [34]. For a string $ue$ consisting of $u \in \Sigma^*$ and an event $e \in \Sigma$, $Pr_\sigma$ is recursively defined as
$$Pr_\sigma(s, ue) = \begin{cases} 1, & \text{if } ue = \varepsilon \\ Pr_\sigma(s, u) \times \rho(\delta(s, u), e), & \text{if } \delta(s, u)! \\ 0, & \text{otherwise,} \end{cases}$$
where $\varepsilon$ is the empty string in $\Sigma^*$. At its core, $Pr_\sigma(s, u)$ signifies the probability that the string $u$ is executed starting from state $s$ in the automaton $G$. A positive value of $Pr_\sigma(s, u)$ is realized if $\delta(s, u)!$. In this framework, strings and observations emerge as foundational constructs. A string in $\Sigma^*$ takes the form $u: \mathbb{N} \to \Sigma$, where $\Sigma$ is the finite alphabet of events. Such a string can be visualized as a deterministic sequence, $u = u_1 u_2 \cdots u_n$, with each $u_i \in \Sigma$. The automaton's transition function establishes that the combination of a state and a string leads to a unique subsequent state, as represented by $\delta: S \times \Sigma^* \to S$. Further, the language emerging from state $s$ is denoted by
$$L(G, s) = \{u \in \Sigma^* \mid Pr_\sigma(s, u) > 0\},$$
highlighting all strings that can be produced from state $s$ with a non-zero probability.
Contrasting with strings, observations are denoted as $\omega \in \Sigma_o^*$. They are defined as $\omega: \mathbb{N} \to \Sigma_o$, with $\Sigma_o$ being the observation alphabet. Observations stem from a potentially non-bijective projection, formalized as $P: \Sigma^* \to \Sigma_o^*$.
This suggests that multiple strings might map to the same observation. Essentially, observations capture a more abstract or condensed perspective of the automaton's behavior. The observations generated from a state $s$ are defined as
$$\ell(G, s) = \{\omega \in \Sigma_o^* \mid \exists u \in L(G, s): \omega = P(u)\}.$$
This definition describes all possible observations inferred from the strings originating at state $s$. To further delineate the relationship between strings and observations, we can define the set of strings that are consistent with a particular observation when the automaton is in state $s$, which is given by
$$U(s, \omega) = \{u \in \Sigma^* \mid u \in L(G, s), P(u) = \omega\}.$$
The probability of generating a specific observation $\omega$ from a state $s$ is calculated as
$$Pr(s, \omega) = \sum_{u \in U(s, \omega)} Pr_\sigma(s, u).$$
While strings u outline the explicit sequences of transitions an automaton undergoes, observations ω act as potential aggregated or abstracted representations of these sequences. These mathematical representations and definitions bind the automaton’s dynamics, bridging the states to the strings and the resultant observations.
Example 1.
Consider the probabilistic automaton structure $G = (S, \Sigma, \delta, \rho)$ illustrated in Figure 1. The automaton initiates from the state $s_0$, leading us to denote the automaton configured in this manner as $G(s_0)$. This configuration comprises a set of states $S = \{s_0, s_1, s_2, s_3, s_4, s_5, s_6\}$ and partitions the event set $\Sigma$ into observable events $\Sigma_o = \{\alpha, \beta, \lambda, \gamma, \mu\}$ and unobservable events $\Sigma_{uo} = \{\tau\}$. For this configuration, $\rho(s_0, \alpha) = 0.8$ and $\rho(s_0, \lambda) = 0.2$. Moreover, we ascertain that $\sum_{e \in E(s_0)} \rho(s_0, e) = 1$ with $E(s_0) = \{\alpha, \lambda\}$. Considering a string $u = \alpha\gamma\tau$, the probability of generating the string $u$ from $s_0$ is given by $Pr_\sigma(s_0, \alpha\gamma\tau) = \rho(s_0, \alpha) \times \rho(s_1, \gamma) \times \rho(s_2, \tau) = 0.24$. However, let $\omega = \alpha\gamma$. Then, the probability of generating this sequence of observations is given by $Pr(s_0, \omega) = Pr_\sigma(s_0, \alpha\gamma) + Pr_\sigma(s_0, \alpha\gamma\tau) = 0.84$.
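The recursive definition of $Pr_\sigma$ and the summation defining $Pr(s, \omega)$ can be exercised directly in code. The sketch below uses only the fragment of Figure 1 that can be read off Examples 1 and 2; the remaining states and probabilities of the figure are not reproduced, and the single-character event names are our abbreviation.

```python
# A runnable sketch of Pr_sigma and Pr(s, omega) using the fragment of Figure 1
# recoverable from Examples 1 and 2 (alpha, gamma, tau abbreviated to a, g, t;
# anything beyond this fragment is not reproduced here).
delta = {("s0", "a"): "s1", ("s1", "g"): "s2", ("s2", "t"): "s5"}
rho = {("s0", "a"): 0.8, ("s1", "g"): 0.75, ("s2", "t"): 0.4}
UNOBS = {"t"}  # tau is the only unobservable event

def pr_sigma(s, u):
    """Pr_sigma(s, u): probability that string u is executed from state s."""
    p = 1.0
    for e in u:
        if (s, e) not in rho:
            return 0.0
        p *= rho[(s, e)]
        s = delta[(s, e)]
    return p

def strings_from(s, max_len):
    """Enumerate every string executable from s, up to max_len events."""
    yield ""
    if max_len > 0:
        for (q, e), q2 in delta.items():
            if q == s:
                for u in strings_from(q2, max_len - 1):
                    yield e + u

def pr_obs(s, omega, max_len=6):
    """Pr(s, omega): total probability of strings whose projection is omega."""
    return sum(pr_sigma(s, u) for u in strings_from(s, max_len)
               if "".join(e for e in u if e not in UNOBS) == omega)

print(round(pr_sigma("s0", "ag"), 2), round(pr_sigma("s0", "agt"), 2))  # 0.6 0.24
print(round(pr_obs("s0", "ag"), 2))  # 0.84, as in Example 1
```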
Given an automaton $G$, the set of states reached by generating a string consistent with observation $\omega$ from state $s$ is
$$\phi(s, \omega) = \{s' \in S \mid \exists u \in U(s, \omega): \delta(s, u) = s'\}.$$
The probability of transitioning from state $s_0$ to state $s$ given observation $\omega$ is denoted as $Pr(s \mid \phi(s_0, \omega))$. For a state $s$ that is reachable after generating an observation $\omega$ from the initial state $s_0$, the probability of generating a string $u$ is given by $Pr_\sigma(s_0, \omega, s, u) = Pr(s \mid \phi(s_0, \omega)) \times Pr_\sigma(s, u)$. For the purpose of elucidating the mathematical constructs utilized in our analysis, we define an indicator function, denoted as $I$, which ascertains the validity of state transitions. Formally, it is defined as
$$I_{\delta(s_0, u) = s} = \begin{cases} 1, & \text{if } \delta(s_0, u) = s \\ 0, & \text{otherwise.} \end{cases}$$
In complex mathematical models, the relationship between system changes and observations is subtle. Although not immediately obvious, the system's current state becomes clear after careful analysis of these observation sequences [35]. Delving deeper into such structures, a crucial concept that becomes intertwined with our discussion is the expectation under a specific distribution. In the provided formulation, $\mathbb{E}_{u \in U(s_0, \omega)}$ signifies the expected value (or average) taken over all strings $u$ drawn from the set $U(s_0, \omega)$. This encapsulates the average behavior or outcome under the distribution of strings originating from $s_0$ and culminating in observation $\omega$. Furthermore, by harnessing the power of the indicator function, the probability $Pr(s \mid \phi(s_0, \omega))$ undergoes a modification, being redefined as an expectation:
$$Pr(s \mid \phi(s_0, \omega)) = \frac{\mathbb{E}_{u \in U(s_0, \omega)}\left[I_{\delta(s_0, u) = s} \times Pr_\sigma(s_0, u)\right]}{\mathbb{E}_{u \in U(s_0, \omega)}\left[Pr_\sigma(s_0, u)\right]}.$$
This expression quantifies the likelihood that the automaton, initiating its sequence at state $s_0$ and transitioning with strings from the set $U(s_0, \omega)$, lands on state $s$. Subsequently, the probability $Pr_\sigma(s_0, \omega, s, u)$ integrates the indicator function and the expectation concept:
$$Pr_\sigma(s_0, \omega, s, u) = \begin{cases} \dfrac{\sum_{u' \in U(s_0, \omega)} I_{\delta(s_0, u') = s} \times Pr_\sigma(s_0, u')}{\sum_{u' \in U(s_0, \omega)} Pr_\sigma(s_0, u')} \times Pr_\sigma(s, u), & \text{if } s \in \phi(s_0, \omega) \\ 0, & \text{otherwise.} \end{cases}$$
This formulation captures the normalized probability of the system transitioning from $s_0$ to $s$ under the observation $\omega$ and then executing string $u$ from state $s$.
Example 2.
Consider the probabilistic automaton depicted in Figure 1, with the initial state $s_0$. Let $\Sigma_o = \{\alpha, \beta, \lambda, \gamma, \mu\}$ and $\Sigma_{uo} = \{\tau\}$. For the observation $\omega = \alpha\gamma$, define $U(s_0, \alpha\gamma)$ as the set of all strings originating from $s_0$ that are consistent with $\omega$, i.e., $U(s_0, \alpha\gamma) = \{\alpha\gamma\tau, \alpha\gamma\}$. The cumulative conditional probabilities for transitions from $s_0$ to either state $s_2$ or state $s_5$ using expectations are:
$$Pr(s_2 \mid \phi(s_0, \alpha\gamma)) = \frac{\mathbb{E}_{u \in U(s_0, \alpha\gamma)}\left[I_{\delta(s_0, u) = s_2} \times Pr_\sigma(s_0, u)\right]}{\mathbb{E}_{u \in U(s_0, \alpha\gamma)}\left[Pr_\sigma(s_0, u)\right]} = \frac{0.6}{0.6 + 0.24} = \frac{5}{7};$$
$$Pr(s_5 \mid \phi(s_0, \alpha\gamma)) = \frac{\mathbb{E}_{u \in U(s_0, \alpha\gamma)}\left[I_{\delta(s_0, u) = s_5} \times Pr_\sigma(s_0, u)\right]}{\mathbb{E}_{u \in U(s_0, \alpha\gamma)}\left[Pr_\sigma(s_0, u)\right]} = \frac{0.24}{0.6 + 0.24} = \frac{2}{7}.$$
Consider the transition scenario from the initial state $s_0$ to state $s_2$. In this instance, the probability of executing the string $u = \tau\beta\lambda$ from state $s_2$ is calculated as $Pr_\sigma(s_0, \alpha\gamma, s_2, \tau\beta\lambda) = 5/7 \times 0.4 \times 1 \times 0.25 = 1/14$. Conversely, considering the transition to state $s_5$, the probability of executing the string $u = \beta\lambda\mu$ from state $s_5$ is $Pr_\sigma(s_0, \alpha\gamma, s_5, \beta\lambda\mu) = 2/7 \times 1 \times 0.25 \times 0.55 = 11/280$.
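For a finite set of consistent strings, the indicator-weighted expectation above reduces to a simple normalized sum. The following sketch reproduces the $5/7$ and $2/7$ estimates of Example 2; the listed strings, their probabilities, and their end states are taken from Examples 1 and 2.

```python
from fractions import Fraction

# A sketch of the state estimate Pr(s | phi(s0, omega)) from Example 2, using
# exact rationals. Each entry: (string, Pr_sigma(s0, string), delta(s0, string)).
consistent = [
    ("alpha gamma",     Fraction(6, 10),   "s2"),
    ("alpha gamma tau", Fraction(24, 100), "s5"),
]

def pr_state(target):
    """Indicator-weighted average over U(s0, alpha gamma)."""
    num = sum(p for (_, p, s_end) in consistent if s_end == target)
    den = sum(p for (_, p, _) in consistent)
    return num / den

print(pr_state("s2"), pr_state("s5"))  # 5/7 2/7, matching Example 2
```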
Let $\mathbb{N}$ be the set of natural numbers, and $\mathbb{N}^+$ the set of positive natural numbers, i.e., $\mathbb{N}^+ = \{x \in \mathbb{N} \mid x > 0\}$. For an observation $\omega \in \Sigma_o^*$, the collection of all observations of length $k \in \mathbb{N}^+$ produced from a state $s \in \phi(s_0, \omega)$ is defined as [34]
$$\ell(s_0, \omega, s, k) = \{\omega' \in \Sigma_o^* \mid \omega' \in \ell(G, s), |\omega'| = k\}.$$
The comprehensive set of all possible observations that arise from $k$-step observation extensions with $k \in \mathbb{N}^+$, following the generation of an observation $\omega$ from state $s_0$, is
$$\ell(s_0, \omega, k) = \bigcup_{s \in \phi(s_0, \omega)} \ell(s_0, \omega, s, k).$$
The likelihood of producing $\omega' \in \ell(s_0, \omega, k)$ after the system has yielded an observation $\omega$ from $s_0$ is defined as
$$Pr(s_0, \omega, k, \omega') = \sum_{s \in \phi(s_0, \omega)} \sum_{u \in U(s, \omega')} Pr_\sigma(s_0, \omega, s, u).$$
It should be highlighted that $Pr(s_0, \omega, k, \omega')$ denotes the probability of observing $\omega'$ after a $k$-step extension, assuming the prior generation of $\omega$ from $s_0$.
Example 3.
Consider the probabilistic automaton illustrated in Figure 1, where $s_0$ serves as the initial state. Suppose $\Sigma_o = \{\alpha, \gamma, \lambda, \beta, \mu\}$ and $\Sigma_{uo} = \{\tau\}$, and let $k = 2$. For the observation sequence $\omega = \alpha\gamma$ generated from state $s_0$, the sets of two-step observation extensions from states $s_2$ and $s_5$ are $\ell(s_0, \alpha\gamma, s_2, 2) = \{\alpha\gamma, \alpha\lambda, \alpha\mu\}$ and $\ell(s_0, \alpha\gamma, s_5, 2) = \{\beta\mu, \beta\gamma, \beta\lambda\}$. Combining these, the set of two-step extensions after generating $\omega = \alpha\gamma$ from $s_0$ is $\ell(s_0, \alpha\gamma, 2) = \ell(s_0, \alpha\gamma, s_2, 2) \cup \ell(s_0, \alpha\gamma, s_5, 2) = \{\alpha\gamma, \alpha\lambda, \alpha\mu, \beta\mu, \beta\gamma, \beta\lambda\}$. Here, $Pr(s_0, \omega, 2, \omega')$ is calculated for each possible sequence as follows:
For $\omega' = \alpha\lambda$: $Pr(s_0, \alpha\gamma, 2, \alpha\lambda) = 5/7 \times 0.6 \times 0.25 = 3/28$;
For $\omega' = \alpha\gamma$: $Pr(s_0, \alpha\gamma, 2, \alpha\gamma) = 5/7 \times 0.6 \times 0.2 = 3/35$;
For $\omega' = \alpha\mu$: $Pr(s_0, \alpha\gamma, 2, \alpha\mu) = 5/7 \times 0.6 \times 0.55 = 33/140$;
For $\omega' = \beta\mu$: $Pr(s_0, \alpha\gamma, 2, \beta\mu) = Pr_\sigma(s_0, \alpha\gamma, s_2, \tau\beta\mu) + Pr_\sigma(s_0, \alpha\gamma, s_5, \beta\mu) = 5/7 \times 0.4 \times 1 \times 0.55 + 2/7 \times 0.55 \times 1 = 11/35$;
For $\omega' = \beta\lambda$: $Pr(s_0, \alpha\gamma, 2, \beta\lambda) = Pr_\sigma(s_0, \alpha\gamma, s_2, \tau\beta\lambda) + Pr_\sigma(s_0, \alpha\gamma, s_5, \beta\lambda) = 5/7 \times 0.4 \times 1 \times 0.25 + 2/7 \times 0.25 \times 1 = 1/7$;
For $\omega' = \beta\gamma$: $Pr(s_0, \alpha\gamma, 2, \beta\gamma) = Pr_\sigma(s_0, \alpha\gamma, s_2, \tau\beta\gamma) + Pr_\sigma(s_0, \alpha\gamma, s_5, \beta\gamma) = 5/7 \times 0.4 \times 1 \times 0.2 + 2/7 \times 0.2 \times 1 = 4/35$.
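As a sanity check, the six extension probabilities above can be re-derived with exact rational arithmetic and shown to sum to 1 over $\ell(s_0, \alpha\gamma, 2)$; the sketch below encodes only the transition values already used in Examples 1 through 3.

```python
from fractions import Fraction as F

# Re-deriving the Example 3 two-step extension probabilities; the weights 5/7
# and 2/7 are the state estimates for s2 and s5 computed in Example 2.
p_s2, p_s5 = F(5, 7), F(2, 7)
tau, beta = F(4, 10), F(1)  # rho(s2, tau) = 0.4; beta occurs with probability 1
pr = {
    "alpha lambda": p_s2 * F(6, 10) * F(25, 100),
    "alpha gamma":  p_s2 * F(6, 10) * F(20, 100),
    "alpha mu":     p_s2 * F(6, 10) * F(55, 100),
    "beta mu":      p_s2 * tau * beta * F(55, 100) + p_s5 * beta * F(55, 100),
    "beta lambda":  p_s2 * tau * beta * F(25, 100) + p_s5 * beta * F(25, 100),
    "beta gamma":   p_s2 * tau * beta * F(20, 100) + p_s5 * beta * F(20, 100),
}
assert sum(pr.values()) == 1  # the six extensions form a probability distribution
print({w: str(p) for w, p in pr.items()})
# {'alpha lambda': '3/28', 'alpha gamma': '3/35', 'alpha mu': '33/140',
#  'beta mu': '11/35', 'beta lambda': '1/7', 'beta gamma': '4/35'}
```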

2.2. Differential Privacy

Differential privacy is a framework that quantifies the privacy guarantees offered by a randomized algorithm. It ensures that the output of a computation remains approximately the same even if one record in the input dataset is altered, so that an adversary cannot determine whether a specific individual's information is included in the input to the function. Formally, given a threshold $\epsilon > 0$, a randomized algorithm $M$ adheres to $\epsilon$-differential privacy if, for any pair of datasets $A_1$ and $A_2$ that differ by at most one record, and for any output set $O$, the following inequality holds [36]:
$$e^{-\epsilon} \le \frac{P(M(A_1) \in O)}{P(M(A_2) \in O)} \le e^{\epsilon},$$
where $M(A_1)$ and $M(A_2)$ denote the outputs of algorithm $M$ when run on datasets $A_1$ and $A_2$, respectively, and $P$ assigns a probability to a potential output of $M$. The parameter $\epsilon$, often referred to as the privacy budget, is a non-negative real number that sets a limit on the allowable information leakage.
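A direct numeric reading of this inequality is shown below; the two output probabilities are illustrative values, not drawn from any mechanism discussed in this paper.

```python
import math

# A toy check of the eps-DP inequality
# e^{-eps} <= P(M(A1) in O) / P(M(A2) in O) <= e^{eps}.
def satisfies_eps_dp(p1, p2, eps):
    ratio = p1 / p2
    return math.exp(-eps) <= ratio <= math.exp(eps)

print(satisfies_eps_dp(0.52, 0.48, 0.1))  # True: ratio ~1.083 <= e^0.1 ~ 1.105
print(satisfies_eps_dp(0.60, 0.40, 0.1))  # False: ratio 1.5 breaks the bound
```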
In the field of differential privacy, one of the central challenges emerges from the recurrent utilization of a privacy-preserving mechanism. As the mechanism is used repeatedly, the cumulative privacy guarantees can degrade, potentially weakening the overall privacy assurance [32]. Addressing this issue necessitates a more refined strategy, and introducing a decay factor serves this purpose. This factor places emphasis on the time-dependent modification of privacy loss parameters. Consider a scenario where mechanism $M$ is invoked in an iterative fashion. A direct or naïve summation of the privacy loss parameter $\epsilon$ over each use could swiftly consume the allotted privacy budget, compromising the robustness of the privacy safeguards in place. This predicament underscores the relevance of the decay factor. When expressed as
$$\epsilon' = \epsilon e^{-\xi i},$$
it signifies an exponential decay in the privacy loss parameter at iteration $i$. This ensures that with each subsequent application or iteration, there is a diminishing allowance for deviation in privacy guarantees, thereby systematically curtailing potential privacy breaches. From a mathematical perspective, the compounded degradation in differential privacy over $n$ iterations can be encapsulated by $\epsilon_{\text{total}} = \sum_{i=1}^{n} \epsilon e^{-\xi i}$. This culminates in the refined differential privacy relationship:
$$e^{-\epsilon_{\text{total}}} \le \frac{P(M(A_i) \in O)}{P(M(B_i) \in O)} \le e^{\epsilon_{\text{total}}}.$$
This integration suggests that the privacy safeguard, when enhanced with a decay factor, does not merely sum over multiple invocations. Instead, it decays, emphasizing a gradual tightening of privacy constraints. This sophisticated model ensures that our differential privacy implementations remain robust and effective, even with repeated applications. While $\epsilon_{\text{total}}$ is crucial in a broad range of differential privacy applications, this paper specifically emphasizes the application of differential privacy to automata initial states over finite steps. We deliberately focus on the decay of $\epsilon$ across iterations and, consequently, omit the consideration of $\epsilon_{\text{total}}$ to maintain a concise and focused presentation.
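The decay schedule and the cumulative budget each admit a one-line implementation; the sketch below evaluates them with the $\epsilon = 0.13$ and $\xi = 0.05$ values that reappear in Example 5.

```python
import math

# A worked instance of the decay schedule eps_i = eps * exp(-xi * i) and the
# cumulative budget eps_total = sum_{i=1..n} eps * exp(-xi * i) described above.
def eps_at(eps, xi, i):
    return eps * math.exp(-xi * i)

def eps_total(eps, xi, n):
    return sum(eps_at(eps, xi, i) for i in range(1, n + 1))

# With eps = 0.13 and xi = 0.05 (the values later used in Example 5):
print(round(eps_at(0.13, 0.05, 1), 3))     # 0.124: the step-1 allowance
print(round(eps_total(0.13, 0.05, 5), 3))  # finite cumulative budget over 5 steps
```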
Definition 2
(Decay factor $\xi$). A decay factor $\xi$ in the context of differential privacy is a measure that determines the rate at which the privacy loss parameter $\epsilon$ decreases with each iteration of a privacy-preserving algorithm. It quantifies the exponential reduction in the potential privacy loss, thus ensuring enhanced long-term protection of privacy in iterative processes.
The role of $\xi$ includes: (1) exponential decay, where $\xi$ results in an exponential decrease in $\epsilon$ with each iteration; (2) privacy degradation control, where adjusting $\xi$ fine-tunes the privacy degradation rate, with a higher $\xi$ enhancing protection; and (3) robustness in repeated use, where $\xi$ limits cumulative privacy loss, ensuring the robustness of differential privacy mechanisms over multiple iterations.

3. Investigative Focus

This section delves into the complexity of maintaining $(\epsilon, \xi)$-differential privacy within an ensemble of probabilistic automata with multiple initial states. Two main situations are examined: the first deals with the verification of privacy parameters across automata, and the second presents an oversight framework that assures compliant operation under defined privacy restrictions.

3.1. Proximate States and Differential Privacy

Definition 3.
(Proximate states for $n$ initials). Given a probabilistic automaton structure $G = (S, \Sigma, \delta, \rho)$ and a set of initial states $\{s_0^1, s_0^2, \ldots, s_0^n\} \subseteq S$, the states in this set are said to be collectively proximate if there exists an observation $\omega \in \Sigma_o^* \setminus \{\varepsilon\}$ such that $Pr(s_0^i, \omega) > 0$ for every $s_0^i$ with $1 \le i \le n$. The collective contiguity implies that an observation other than the empty string can be generated from every initial state in the set.
Definition 4 ($k$-step $(\epsilon, \xi)$-differential privacy for $n$ initials).
Let $G = (S, \Sigma, \delta, \rho)$ denote a probabilistic automaton structure. A set of collectively proximate initial states $\{s_0^1, s_0^2, \ldots, s_0^n\} \subseteq S$ gives rise to $n$ distinct probabilistic automata $G(s_0^1), G(s_0^2), \ldots, G(s_0^n)$. These automata are said to uphold $(\epsilon, \xi)$-differential privacy over a $k$-step observation extension, modulated by a decay factor $\xi$, if for each automaton $G(s_0^i)$ and any observation $\omega \in \Sigma_o^*$ originating from $s_0^i$, there exists a set of subsequent observations $\Omega_{k'}$ such that for each $k' \le k$, $\Omega_{k'} = \bigcup_{i=1}^{n} \ell(s_0^i, \omega, k')$, where $\ell$ denotes the observation extension function introduced in Section 2.1.
Additionally, for any two distinct initial states $s_0^i, s_0^j \in S$, for all $k' \le k$, and for any observation $\omega' \in \Omega_{k'}$ with $\omega' \neq \varepsilon$, the $(\epsilon, \xi)$-differential privacy condition $|Pr(s_0^i, \omega, k', \omega') - Pr(s_0^j, \omega, k', \omega')| \le \epsilon e^{-\xi k'}$ must hold. The privacy parameter $\epsilon$ undergoes exponential decay over the $k'$ steps of observation, guided by the decay rate $\xi$. As observations accrue, the allowable deviation shrinks, underscoring the inherent challenge of preserving privacy as more data emerges.

3.2. Verification Concerns

Within an ensemble of $n$ probabilistic automata derived from a foundational automaton with $n$ distinct initial states, $\{G(s_0^1), G(s_0^2), \ldots, G(s_0^n)\}$, the verification process is paramount. It is essential to ascertain that each automaton in the ensemble genuinely observes the principles of $(\epsilon, \xi)$-differential privacy over a $k$-step observation extension, particularly when influenced by a decay factor $\xi$.
A dedicated verifier, $V_\omega^k$, is posited to play this crucial role. Its function is to assess whether each automaton in the ensemble, when initialized from its respective initial state within the set $\{s_0^1, s_0^2, \ldots, s_0^n\}$, indeed adheres to the $(\epsilon, \xi)$-differential privacy criteria. More specifically, given any observation $\omega$ that originates from these initial states, following the definition of proximate states, $V_\omega^k$ must validate:
  • The emergence and integrity of subsequent observations $\Omega_{k'}$ based on the observation extension function $\ell$.
  • The preservation of the $(\epsilon, \xi)$-differential privacy condition between any two distinct initial states for all possible $k' \le k$ steps of observation.
This verification serves as a continuous safeguard, ensuring that the $(\epsilon, \xi)$-differential privacy guarantees, despite the probabilistic behaviors and intertwined initial states, remain uncompromised and robust.

4. Verification of (ϵ, ξ)-Differential Privacy over Finite Steps

For the intricate analysis of a probabilistic automaton with multiple initial states, the verifier emerges as a pivotal tool. It meticulously dissects each initial state, treating it as the epicenter of an independent probabilistic automaton system. Through this isolated examination, the verifier offers a holistic understanding of every conceivable behavior that might emanate from a particular initial state. In essence, by simulating each initial state coupled with its observation sequence as a stand-alone probabilistic automaton, the verifier elucidates the potential trajectory of the entire automaton. This methodology, delineated in Algorithm 1, crafts a refined lens for researchers to discern the nuances of behaviors within a multi-initial-state probabilistic automaton framework.
Consider a set of $n$ probabilistic automata, denoted as $\mathbf{G} = [G(s_0^1), G(s_0^2), \ldots, G(s_0^n)]$, with each automaton defined by $G(s_0^i) = (S, \Sigma, \delta, s_0^i, \rho)$. The goal of the algorithm is to compute a verifier for an observation $\omega$, denoted as $V_\omega^k = (S_v, \Sigma_o, \delta_v, S_0)$. At the heart of the algorithm lies memoization, a ubiquitous technique in algorithmic optimization. We introduce a memoization function $M: S \times \Sigma \to 2^S$, whose domain encapsulates the Cartesian product of states and events and whose codomain is the power set of $S$, embracing all conceivable subsets of reachable states. Formally articulated, for any state-event pair $(s, e)$, we have
$$M(s, e) = \{s' \in S : \delta(s, e) = s'\}.$$
At the inception of the algorithm, $M$ is initialized as an empty function. As the algorithm progresses and each state-event pair is encountered, the resulting set of reachable states is cached in $M$ to prevent redundant computations in future iterations. With the memoization table $M$ established, the algorithm begins by iterating through each initial state $s_0^i$ of $\mathbf{G}$. Depending on the current segment of the observation sequence and the state under consideration, the set of reachable states is either computed afresh using the transition function $\delta$ or promptly retrieved from $M$. As the algorithm traverses the observation sequence, it accumulates the resulting reachable states into $S_0$. Subsequently, the algorithm computes the product states, represented as $Q_1 \times Q_2 \times \cdots \times Q_n$. These product states are deduced based on observable events and the current state, leading to the formation of $S_v$, which is synthesized from the collection $S_r$. The worst-case complexity of Algorithm 1 is approximately $O(2^n \times n^2 \times |\Sigma_o| \times |\Sigma_{uo}|)$.
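The memoization scheme can be sketched as follows; the unobservable-closure semantics of the cached sets is our assumption, since Algorithm 1 itself is presented only as a figure in the published version.

```python
# A sketch of the memoized reachable-set computation behind Algorithm 1. The
# cache M maps a (state, observable event) pair to the set of states reachable
# by that event followed by any run of unobservable events (assumed semantics).
def uo_closure(states, delta, unobs):
    """All states reachable from `states` via unobservable events only."""
    stack, seen = list(states), set(states)
    while stack:
        s = stack.pop()
        for e in unobs:
            t = delta.get((s, e))
            if t is not None and t not in seen:
                seen.add(t)
                stack.append(t)
    return frozenset(seen)

def reach(M, delta, unobs, states, e):
    """Memoized phi-style step: states reachable from `states` on observable e."""
    out = set()
    for s in uo_closure(states, delta, unobs):
        if (s, e) not in M:  # compute once, then reuse across the whole run
            t = delta.get((s, e))
            M[(s, e)] = uo_closure({t}, delta, unobs) if t is not None else frozenset()
        out |= M[(s, e)]
    return frozenset(out)

# Usage on the Figure 2 fragment of Example 4 would look like:
# M = {}; S1 = reach(M, delta, {"tau"}, {"s1"}, "alpha")
```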
Definition 5 (Verifier).
Given a probabilistic automaton structure $G = (S, \Sigma, \delta, \rho)$ and a list of proximate initial states $[s_0^1, s_0^2, \ldots, s_0^n]$ leading to $n$ distinct probabilistic automata $[G(s_0^1), G(s_0^2), \ldots, G(s_0^n)]$, a differentially private verifier $V_\omega^k$ for a $k$-step observation extension modulated by a decay factor $\xi$ is defined as a 4-tuple $V_\omega^k = (S_v, \Sigma_o, \delta_v, S_0)$. Here, $S_v$ consists of cross-products of states of the automata in the set, each of the form $S = Q_1 \times Q_2 \times \cdots \times Q_n$ with each $Q_i$ a subset of states from $G(s_0^i)$; $\Sigma_o$ denotes all observable events; $\delta_v: S_v \times \Sigma_o \to 2^{S_v}$ represents the transition function of the verifier; and $S_0$ is the Cartesian product of the sets obtained by applying the function $\phi$ to each initial state and the observation $\omega$, specifically $S_0 = \phi(s_0^1, \omega) \times \cdots \times \phi(s_0^n, \omega)$. This verifier $V_\omega^k$ encapsulates the collective behaviors of the original probabilistic automata set over a $k$-step observation extension.
Theorem 1.
Given a verifier $V_\omega^k$ that integrates the state-transition model with respect to observation $\omega$ up to step $k$, an initial positive parameter $\epsilon$, a decay factor $\xi$, and a list of $n$ initial states $[s_0^1, s_0^2, \ldots, s_0^n]$, Algorithm 2 will ascertain whether the system upholds $(\epsilon, \xi)$-differential privacy across $k$ steps under the modulated privacy parameter $\epsilon$ that evolves with each step.
Proof. 
Initiating with the assignment of a unit probability to each originating state $s$ at $t = 0$, the procedure warrants unequivocal certainty for these foundational states. The algorithm then invokes Algorithm 1 to procure the reachable state sets $\phi(s_0^i, \omega\omega')$, thereby forming a probabilistic mapping from any state $s_0^i$ to another $s_0^j$ contingent on the observation sequence $\omega'$. Iteratively, the algorithm explores the subsequent potential states for each event in the observation sequence, thereby crafting a dynamic probabilistic landscape for each state $s$.
Central to the verification of privacy is the computation of the absolute deviation between the observation probabilities of any two initial states $s_0^i$ and $s_0^j$. If any such deviation surpasses the decayed threshold $\epsilon e^{-\xi k'}$, the algorithm returns a negative result. As the observation progresses, the state set $S$ undergoes updates to encompass new feasible states, symbolized as $S'$. Should the algorithm traverse all $k$ steps without infringing the evolving $(\epsilon, \xi)$-differential privacy constraint, it returns an affirmative outcome. This, by induction, validates the algorithm's fidelity to $(\epsilon, \xi)$-differential privacy for all initial states across steps $1 \le k' \le k$, culminating in the affirmation of the theorem.    □
Algorithm 1: Construction of a k-step verifier
Algorithm 2 conducts $(\epsilon, \xi)$-differential privacy verification for a specified number of steps, $k$. It takes as inputs a verifier $V_\omega^k$, a step count $k$, a privacy threshold $\epsilon$, and $n$ initial states. Initially, each state in $S_0$ has a transition probability of 1. The algorithm iteratively examines state transitions and probability distributions up to step $k$. A critical aspect of this process is the decay factor $\xi$, crucial for upholding rigorous privacy constraints. This factor adjusts the influence of the privacy parameter over time, ensuring consistent adherence to privacy norms throughout the algorithm's execution.
Crucially, the algorithm juxtaposes observation probabilities across all conceivable pairings of initial states. Should any disparity surpass the decayed threshold, the algorithm promptly returns 'false'. This guarantees a uniform privacy assessment, irrespective of initial state variances, over a bounded future horizon. Considering its design and operations, the worst-case computational complexity of Algorithm 2 is gauged at $O(k \times 2^n \times |\Sigma_o| \times n^3 \times |\Sigma_{uo}|)$. This provides an understanding of the resource demands for larger input sizes, crucial for practical implementations.
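A condensed sketch of this verification loop is given below; the `step_probs` table is an assumed stand-in for the probabilities obtained by traversing the verifier, since the published Algorithm 2 appears only as a figure.

```python
import math

# A condensed sketch of the Algorithm 2 loop: walk the verifier one observation
# step at a time and compare, for every pair of initial states, the probability
# of each observation against the decayed budget eps * exp(-xi * k').
def verify_finite_steps(step_probs, eps, xi, k, n):
    """step_probs[k_][i]: dict mapping each observation available at step k_
    to its probability when the run starts from initial state i (assumed layout)."""
    for k_ in range(1, k + 1):
        bound = eps * math.exp(-xi * k_)
        for i in range(n):
            for j in range(i + 1, n):
                for w in set(step_probs[k_][i]) | set(step_probs[k_][j]):
                    pi = step_probs[k_][i].get(w, 0.0)
                    pj = step_probs[k_][j].get(w, 0.0)
                    if abs(pi - pj) > bound:
                        return False  # privacy violated at step k_
    return True
```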
Algorithm 2: Differential privacy verification over finite steps with ϵ decay
Example 4.
Consider the structure of the probabilistic automaton depicted in Figure 2, where we assume that the initial states are $s_1$ and $s_2$, so that $n = 2$. The set of observable events is $\Sigma_o = \{\alpha, \beta, \lambda, \gamma, \mu\}$, while the set of unobservable events is $\Sigma_{uo} = \{\tau\}$. The verifier for the systems $G(s_1)$ and $G(s_2)$ is illustrated in Figure 3, in which the initial state $S_0$ is defined as the Cartesian product $S_0 = \{s_1\} \times \{s_2\}$.
For the state $S_1$ and the observable event $\alpha$, $S_1$ is the Cartesian product of $\phi(s_1, \alpha)$ and $\phi(s_2, \alpha)$, resulting in $S_1 = \{s_3, s_4\} \times \{s_5\}$; therefore, $\delta_v(S_0, \alpha) = \{s_3, s_4\} \times \{s_5\}$. Similarly, for the state $S_4$ and the observable event $\lambda$, $S_4$ is the Cartesian product of $\phi(\{s_3, s_4\}, \lambda)$ and $\phi(s_5, \lambda)$, which turns out to be $\{s_4\} \times \emptyset$; consequently, $\delta_v(S_1, \lambda) = \{s_4\} \times \emptyset$. For the state $S_5$ and the observable event $\beta$, $S_5 = \{s_5\} \times \{s_6\}$, resulting from the Cartesian product of $\phi(s_3, \beta)$ and $\phi(s_5, \beta)$; therefore, $\delta_v(S_1, \beta) = \{s_5\} \times \{s_6\}$. Finally, for the state $S_8$ and the observable event $\gamma$, $S_8$ is the Cartesian product of $\phi(s_5, \gamma)$ and $\phi(s_6, \gamma)$, yielding $S_8 = \{s_4\} \times \{s_4\}$; hence $\delta_v(S_5, \gamma) = \{s_4\} \times \{s_4\}$.
Figure 2. A probabilistic automaton $G^*$.
Figure 3. The verifier $V_\omega^k$ of the probabilistic automaton $G^*$.
Example 5.
Let us consider the probabilistic automaton structure in Figure 2. Assume two proximate initial states $s_1$ and $s_2$, so that $n = 2$. The verifier is shown in Figure 3. We want to verify $(\epsilon, \xi)$-differential privacy with $\epsilon = 0.13$, decay factor $\xi = 0.05$, and $k = 1$. Due to the decay factor, the effective threshold for this step is $\epsilon' = \epsilon e^{-\xi k} = 0.13 e^{-0.05} \approx 0.124$. For state $S_0$:
$$|Pr_1(S_0, \alpha) - Pr_2(S_0, \alpha)| = \left|Pr(s_1 \mid \{s_1\}) \times Pr(\{s_3, s_4\} \mid \phi(s_1, \alpha)) - Pr(s_2 \mid \{s_2\}) \times Pr_\sigma(s_2, \alpha)\right| = \left|\frac{0.5}{0.5 + (0.5 \times 0.8)} + \frac{0.5 \times 0.8}{0.5 + (0.5 \times 0.8)} - 0.9\right| = 0.1 < \epsilon' \approx 0.124;$$
$$|Pr_1(S_0, \gamma) - Pr_2(S_0, \gamma)| = \left|Pr(s_1 \mid \{s_1\}) \times Pr_\sigma(s_1, \gamma) - Pr(s_2 \mid \{s_2\}) \times Pr_\sigma(s_2, \gamma)\right| = 0.120 < \epsilon' \approx 0.124;$$
$$|Pr_1(S_0, \mu) - Pr_2(S_0, \mu)| = \left|Pr(s_1 \mid \{s_1\}) \times Pr_\sigma(s_1, \mu) - Pr(s_2 \mid \{s_2\}) \times Pr_\sigma(s_2, \mu)\right| = |0.38 - 0.1| = 0.28 > \epsilon' \approx 0.124.$$
The system of two automata $G(s_1)$ and $G(s_2)$ therefore does not satisfy $(\epsilon, \xi)$-differential privacy at $k = 1$ with $\epsilon' \approx 0.124$.
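The same check can be scripted; in the sketch below, the $\mu$ probabilities and the stated differences come from the example, while the individual $\alpha$ and $\gamma$ components are back-filled assumptions consistent with those stated differences.

```python
import math

# Re-running the Example 5 comparison at k = 1 against the decayed budget.
# The mu probabilities (0.38 vs 0.1) and the differences 0.1 and 0.120 are
# stated in the example; the alpha and gamma component values are assumptions
# chosen only to reproduce those differences.
eps, xi, k = 0.13, 0.05, 1
bound = eps * math.exp(-xi * k)  # ~0.124

pr_s1 = {"alpha": 1.00, "gamma": 0.12, "mu": 0.38}  # Pr_1(S0, e)
pr_s2 = {"alpha": 0.90, "gamma": 0.00, "mu": 0.10}  # Pr_2(S0, e)

for e in ("alpha", "gamma", "mu"):
    diff = abs(pr_s1[e] - pr_s2[e])
    print(e, round(diff, 3), "ok" if diff <= bound else "violation")
# alpha 0.1 ok, gamma 0.12 ok, mu 0.28 violation -> privacy fails at k = 1
```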

5. Ensuring (ϵ, ξ)-Differential Privacy via Supervisory Control

In the prior section, we introduced the notion of $(\epsilon, \xi)$-differential privacy, under which an automaton starting from $n$ distinct initial states produces similar observation likelihoods over a set number of steps. This concept is central to our ongoing discussion. We then delve into supervisory control as a method for ensuring this privacy alignment over a fixed sequence length. Our study further explores the realms of control theory, highlighting an algorithm rooted in probabilistic automata and designed to predict state transitions from observation sequences.
Consider a verifier $V_\omega^k = (S_v, \Sigma_o, \delta_v, S_0)$ and a state $S \in S_v$, where $S$ can be expressed as a tuple of state components, i.e., $S = Q_1 \times Q_2 \times \cdots \times Q_n$. We define $S^{\#}$ as the subset of $S_v$ containing the states that can be reached from the initial state $S_0$ for a specific observation sequence $\omega$. In essence, $S^{\#}$ captures the set of states that are intermediate in the transition from $S_0$ to $S$ through the sequence $\omega$.
The set $R(S)$ is then introduced to encapsulate all events $e$ that transition the system from state $S$ to some other state $S'$ that is a member of $S^{\#}$. Specifically, $S'$ represents a state, analogous to $S$, but formed from different combinations of state components; hence $S' = Q_1' \times Q_2' \times \cdots \times Q_n'$, where each $Q_i'$ is a potential state set from the original $n$ probabilistic automata. This ensures the system remains within the permissible state transitions defined by $S^{\#}$. Thus, $R(S)$ can be defined as:
$$R(S) = \{e \in \Sigma_o \mid \exists S' \in S^{\#} \setminus \{S\} : \delta_v(S, e) = S'\}.$$
This expression underscores the events that, when executed from state $S$, guide the system to another state within the permissible state transitions defined by $S^{\#}$. Let $\Theta: \Sigma_o \to \mathbb{R}$ be a ranking function defined over the observation events in the context of the verifier $V_\omega^k$. For any event $e \in \Sigma_o$, $\Theta(e)$ designates a unique scalar value, conveying its significance or importance.
This formulation ensures both uniqueness, where for any two distinct events $e_1, e_2 \in \Sigma_o$, if $e_1 \neq e_2$ then $\Theta(e_1) \neq \Theta(e_2)$, and a total order, such that for any two events $e_1$ and $e_2$ in $\Sigma_o$, exactly one of the relationships $\Theta(e_1) < \Theta(e_2)$, $\Theta(e_1) > \Theta(e_2)$, or $\Theta(e_1) = \Theta(e_2)$ holds.
Once the events are ranked by their $\Theta(e)$ values, each event $e$ is assigned an index value $T(S, e)$, where $T: S_v \times \Sigma_o \to \mathbb{N}^+$ is a mapping function. In this context, for states $S = Q_1 \times Q_2 \times \cdots \times Q_n \in S_v$ and $S' = Q_1' \times Q_2' \times \cdots \times Q_n' \in S_v$, the set of all events that transition the system from $S$ to $S'$ is denoted as $E(S, S') = \{e \in \Sigma_o \mid \delta_v(S, e) = S'\}$.
Consequently, the events in $R(S)$ are systematically sorted by their respective $\Theta(e)$ values in ascending order, ensuring a structured and meaningful sequence for subsequent operations, reflecting both the probabilistic properties and temporal nuances of the events within $V_\omega^k$.
For each event $e \in E(S, S')$, we define $A(S, S', e)$ as a column vector of dimension $|R(S)|$, consisting exclusively of zeros and ones. Its entries $A(S, S', e)[v]$ are defined as
$$A(S, S', e)[v] = \begin{cases} 1, & \text{if } v = T(S, e) \\ 0, & \text{if } v \neq T(S, e), \end{cases}$$
with $v$ a positive integer such that $1 \le v \le |R(S)|$. The matrix $A(S, S')$ can be viewed as the concatenation of $A(S, S', e)$ for every event $e \in E(S, S')$, arranged in ascending order of $T(S, e)$. Given a probabilistic automaton structure $G = (S, \Sigma, \delta, \rho)$ with proximate initial states $s_0^1, s_0^2, \ldots, s_0^n$ and an observation sequence $\omega \in \Sigma_o^*$, we define $V_\omega^k = (S_v, \Sigma_o, \delta_v, S_0)$ as the verification mechanism. For a compound state $S = Q_1 \times Q_2 \times \cdots \times Q_n$ in $S_v$ with $\delta_v(S_0, \omega) = S$, each of the row vectors $C_\omega(S_1 \mid S), C_\omega(S_2 \mid S), \ldots, C_\omega(S_n \mid S)$ has dimension $|R(S)|$. For each event $e \in R(S)$ and index $T(S, e) \in \{1, 2, \ldots, |R(S)|\}$, the following relationship holds for $i \in \{1, 2, \ldots, n\}$:
$$C_\omega(S_i \mid S)[T(S, e)] = \sum_{s \in S_i} \sum_{u \in U(s, e)} Pr(s \mid \phi(s_0^i, \omega)) \times Pr_\sigma(s, u).$$
Given a probabilistic automaton $G = (S, \Sigma, \delta, \rho)$ with $n$ initial states $s_0^1, s_0^2, \ldots, s_0^n$ and an observation $\omega \in \Sigma_o^*$, let $V_\omega^k = (S_v, \Sigma_o, \delta_v, S_0)$ serve as the verifier. For a compound state $S = Q_1 \times Q_2 \times \cdots \times Q_n$ residing in the state space $S_v$, with $\delta_v(S_0, \omega) = S$, we define $A_\omega(S_i \mid S)$ as the probabilistic matrix corresponding to the sub-state $S_i$, the composite state $S$, and the observation sequence $\omega \in \Sigma_o^*$, where $i$ is an index drawn from the set $\{1, 2, \ldots, n\}$. The definition of $A_\omega(S_i \mid S)$ unfolds in two scenarios:
First, in the event that $S^{\#} = \emptyset$, the matrix $A_\varepsilon(S_i \mid S)$ is identified directly as $C_\varepsilon(S_i \mid S)$;
Second, when $S^{\#} \neq \emptyset$, for any $S' = Q_1' \times Q_2' \times \cdots \times Q_n'$ belonging to $S^{\#}$ such that $\delta_v(S_0, \omega') = S'$, $\delta_v(S', e) = S$, and $\omega' e = \omega$, the following relation holds:
$$A^m_\omega(S_i \mid S) = \left[A_{\omega'}(S_i \mid S') \times A(S', S)\right]^T[:][m] \times C_\omega(S_i \mid S);$$
Subsequently, the exhaustive probabilistic matrix $A_\omega(S_i \mid S)$ is synthesized as
$$A_\omega(S_i \mid S) = \left[(A^1_\omega(S_i \mid S))^T \mid \cdots \mid (A^h_\omega(S_i \mid S))^T\right]^T,$$
where $h = N^r(A_\omega(S_i \mid S))$; this relationship reflects the intricate structure and hierarchy inherent in the probabilistic matrices, emphasizing a crucial characteristic, possibly its dimensionality or structural depth.
Within this setup, $h$ is deduced by elevating $N$ to the power of $r$. It is conjectured that $r$ represents the rank or a distinct inherent trait of the matrix $A_\omega(S_i \mid S)$. This formulation provides a foundation for the ensuing algorithmic stages, imparting a computational perspective to the entire procedure. A clear linkage between $m$ and $h$ exists, with $m$ an element of the set $\{1, 2, \ldots, h\}$. The strategy for computing these probabilistic matrices for each state in the verifier is encapsulated in Algorithm 3, demonstrating a computational complexity of $O(|S_v|^2 \times |\Sigma_o| + |R(S)| + |R(S')| + n \times |S_i|)$.
Algorithm 3: Probability matrices determination
Example 6.
Consider the probabilistic automaton structure represented in Figure 2 and its corresponding verifier illustrated in Figure 3, with $\Sigma_o = \{\alpha, \beta, \lambda, \gamma, \mu\}$ and $\Sigma_{uo} = \{\tau\}$. We define the function $\Theta$ as $\Theta(\alpha) = 1$, $\Theta(\beta) = 2$, $\Theta(\lambda) = 3$, $\Theta(\gamma) = 4$, and $\Theta(\mu) = 5$. For $S_0$, we obtain $E(S_0, S_1) = \{\alpha\}$ and $A(S_0, S_1) = [A_\alpha]$ with $A_\alpha = (1, 0, 0)^T$, as well as $E(S_0, S_3) = \{\mu\}$ and $A(S_0, S_3) = [A_\mu]$ with $A_\mu = (0, 0, 1)^T$. For $S_5$, we obtain $R(S_5) = \{\gamma\}$ and $T(S_5, \gamma) = 1$. Thus, $C_\omega(\{s_5\} \mid S_5) = Pr_\sigma(s_5, \gamma) = 0.55$ and $C_\omega(\{s_6\} \mid S_5) = Pr_\sigma(s_6, \gamma) = 1$, where $\omega = \alpha\beta$.
Example 7.
Consider the verifier depicted in Figure 3. For the initial state $S_0$, we have $A_\varepsilon(\{s_1\} \mid S_0) = (0.5, 0.12, 0.38)$ and $A_\varepsilon(\{s_2\} \mid S_0) = (0.9, 0, 0.1)$. For state $S_3$: $A_\varepsilon(\{s_1\} \mid S_0) \times A(S_0, S_3) = 0.38$ and $A_\varepsilon(\{s_2\} \mid S_0) \times A(S_0, S_3) = 0.1$. For an observation $\omega = \mu$, $A_\omega(\{s_1\} \mid S_3) = (0.38)^T[:][2] \times C_\omega(\{s_1\} \mid S_3) = (0.19, 0)^T$ and $A_\omega(\{s_5\} \mid S_3) = (0.1)^T[:][2] \times C_\omega(\{s_5\} \mid S_3) = (0, 0.045)^T$.
Within the intricate domain of probabilistic automata, the introduction of fake events represents a paradigm shift in how systems interact with their observable events. This is not a superficial addition; rather, it is a calculated stratagem where these events, ingeniously derived from the set o of genuine observable events, play a pivotal role in ensuring system integrity and privacy [37]. The origin of these fake events can be traced back to the enforcement mechanism, which incorporates an insertion function designed to embed any event within o seamlessly. While on the surface, these inserted events might mirror the observable ones, their underlying design is distinct. The crucial aspect lies in their deliberate ambiguity; they mirror the observable events with such precision that they become indistinguishable upon a cursory examination. This resemblance is not coincidental but serves a pivotal role in the broader system dynamics.
The aforementioned duality serves a higher purpose, particularly in the domain of differential privacy. The insertion of fake events is not merely a technique for enhancing system dynamics; it acts as a vital control strategy. When the system fails to achieve $(\epsilon, \xi)$-differential privacy through conventional means, these fake events step in to refine the transition probabilities of the genuine events within $\Sigma_o$, ensuring the attainment of the desired $(\epsilon, \xi)$-differential privacy threshold. Such innovation highlights the sophistication and flexibility of probabilistic automata theory, where privacy safeguards and operational effectiveness are adeptly intertwined.
The supervisory control system in question operates based on the observed events $\Sigma_o$. In addition to these genuine events, we introduce fabricated events, denoted by $\Sigma_{fake}$. Each fabricated event is a simulation derived from its $\Sigma_o$ counterpart. Collectively, the universe of all events is given by $\Sigma = \Sigma_o \cup \Sigma_{fake} \cup \Sigma_{uo}$. The matrix $D$ is a configuration of size $n \times n$, with $n$ indicating the number of initial states. An element $D[i][j]$ delineates the difference between the behaviors observed from the sub-states $Q_i$ and $Q_j$ in response to a specific observation $\omega$. This relationship is formally described by $D[i][j] = A_\omega(Q_i \mid S) - A_\omega(Q_j \mid S)$. To enforce $(\epsilon, \xi)$-differential privacy, we leverage an insertion function, $E$, which maps a state and a genuine event to a corresponding fabricated event. Formally, $\Phi: S \times \Sigma_o \to S \times \Sigma_{fake}$, and the execution of this function results in $\Phi(s, e) = (s', E(s, e))$, with $s'$ determined by $s' = \delta(s, e)$. This function's application adjusts the event probability distribution. For instance, triggering an artificial event $e_{fake}$ at state $s$ modulates the event probabilities as $\rho'(s, e) = \rho(s, e) - \rho(s, e_{fake}) \times \rho(s, e)$.
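Because the adjusted probability factors as $\rho'(s, e) = \rho(s, e) \times (1 - \rho(s, e_{fake}))$, the insertion amounts to rescaling the genuine distribution at a state; a minimal sketch follows, with illustrative event names and insertion probability.

```python
# A sketch of the probability rescaling induced by inserting a fake event,
# following rho'(s, e) = rho(s, e) - rho(s, e_fake) * rho(s, e)
#                      = rho(s, e) * (1 - rho(s, e_fake)).
# The state, event names, and the 0.3 insertion probability are illustrative.
def insert_fake(dist, e_fake, p_fake):
    """Assign probability p_fake to e_fake and rescale the genuine events."""
    adjusted = {e: round(p * (1.0 - p_fake), 10) for e, p in dist.items()}
    adjusted[e_fake] = p_fake
    return adjusted

dist = {"alpha": 0.8, "lambda": 0.2}  # rho(s, .) before insertion
print(insert_fake(dist, "alpha_fake", 0.3))
# {'alpha': 0.56, 'lambda': 0.14, 'alpha_fake': 0.3} -- still sums to 1
```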
In the context of a supervisory control system within a probabilistic automaton, the process of refining transition probabilities through the insertion of fake events plays a pivotal role in adhering to differential privacy constraints. Consider a system where the transition probability from an initial state by a particular observation sequence ω is formulated as A ω ( Q S ) = P r σ ( s 0 , ω e ) × ρ ( δ ( s 0 , ω ) , e ) , with ω = ω e , where ω o and e o . This formulation encapsulates the probability of the sequence ω e occurring from the initial state s 0 and the probability of transitioning to the next state due to event e.
The introduction of a fake event, executed by the supervisory function $E$, fundamentally alters this transition probability. Post-insertion, the probability of the transition under the observation sequence $\omega$ is refined to $A'_\omega(Q \mid S) = \mathrm{Pr}_\sigma(s_0, \omega' e) \times \rho(\delta(s_0, \omega'), e) \times (1 - \rho(\delta(s_0, \omega'), e_{fake}))$. Since the factor $(1 - \rho(\delta(s_0, \omega'), e_{fake}))$ lies between 0 and 1, the insertion of the fake event strictly reduces $A_\omega(Q \mid S)$.
This reduction plays a crucial role in aligning the system with the differential privacy constraints. When the inequality $|A_\omega(Q_i \mid S) - A_\omega(Q_j \mid S)| > \epsilon$ holds, indicating a breach of the privacy threshold, inserting the fake event with a predetermined probability diminishes the left-hand side by the factor $(1 - \rho(\delta(s_0, \omega'), e_{fake}))$. This decrement narrows the gap between the probabilities of sequences generated from the initial states $(s_0^i, s_0^j)$, thus facilitating the achievement of the differential privacy constraint. The strategic insertion of fake events therefore emerges as the key mechanism for refining transition probabilities while upholding differential privacy within probabilistic automata.
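A minimal numeric sketch, reusing the violating pair from the case study in Section 6 (the values 0.165 and 0.029 and the insertion probability 0.1 are taken from there; variable names are illustrative):

```python
eps = 0.132
a_i, a_j = 0.165, 0.029            # pre-insertion sequence probabilities
assert abs(a_i - a_j) > eps        # 0.136 > 0.132: privacy breach detected

p_fake = 0.1                       # probability assigned to the fake event
a_i_new = a_i * (1.0 - p_fake)     # both branches are rescaled by (1 - p_fake)
a_j_new = a_j * (1.0 - p_fake)
print(round(abs(a_i_new - a_j_new), 4))  # 0.1224 <= eps: breach repaired
```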
Theorem 2.
Given a supervisory control system formulated by the verifier $V_\omega^k = (S_v, \Sigma_o, \delta_v, S_0)$, the introduction of a fake event mechanism governed by the insertion function $E$, operating within the $(\epsilon, \xi)$-differential privacy boundary, ensures convergence to a refined system state such that the differential between state transitions, quantified by the matrix $D$, remains below the threshold $\epsilon$.
Proof. 
Starting with the differential matrix representation, for each pair of states $(Q_i, Q_j)$ the differential is defined as $D[i][j] = A_\omega(Q_i \mid S) - A_\omega(Q_j \mid S)$, capturing the initial difference between the states under a particular observation $\omega$. For any given state $s$ and event $e \in \Sigma_o$, the insertion function $E: S \times \Sigma_o \to \Sigma_{fake}$ produces a fake event $e_{fake} = E(s, e)$. With this new event, the state transition probabilities evolve to $\rho'(s, e) = \rho(s, e) - \rho(s, e_{fake}) \times \rho(s, e)$, driving the recalculated differential toward the constraint $|D[i][j]| \leq \epsilon$. Since a single insertion may not suffice, the system iteratively invokes the insertion function until all differentials in $D$ satisfy $\forall i, j: |D[i][j]| \leq \epsilon$.
Given a maximum bound of $k$ iterations, combined with the consistent adjustment via the insertion function $E$, convergence occurs within these iterations. Crucially, each introduction of a fake event acts as a remedial measure, moving the system's dynamics closer to the $(\epsilon, \xi)$-differential privacy conditions. By iterating while keeping the differentials within the specified boundary, the system guarantees both state transition convergence and the differential privacy constraints. Thus the theorem holds: the supervisory control system, through the measured insertion of fake events, converges toward refined states while upholding $(\epsilon, \xi)$-differential privacy.    □
Transitioning to the algorithmic realm, Algorithm 4 operationalizes the enforcement mechanism for $(\epsilon, \xi)$-differential privacy. It is crafted so that whenever a variance within the matrix $D$ surpasses the predefined threshold $\epsilon$, a fake event $e_{fake}$ is inserted into $\Sigma_o$. This mechanism balances the $(\epsilon, \xi)$-differential privacy mandate against the system's dynamic transitions. The core of the algorithm extracts the value $a_i$ from the observation matrix $A(Q_i \mid S)$ while concurrently computing $z_i \leftarrow \sum_{s \in Q_i} \mathrm{Pr}(s \mid \phi(s_0^i, \omega)) \times \rho(s, e)$. Despite its scope, Algorithm 4 runs with complexity $O(k \times |S_v| \times n^3 \times |\Sigma_o|)$, bridging privacy safeguards with system dynamism in modern supervisory control.
Theorem 3.
For Algorithm 4 ("Iterative supervisory control refinement for multiple initial states"), given a verifier $V_\omega^k = (S_v, \Sigma_o, \delta_v, S_0)$ and a positive integer $k$, the algorithm converges to a refined set of states in at most $k$ iterations, ensuring that the differential between state transitions, as recorded by the matrix $D$, remains below the threshold $\epsilon$.
Proof. 
Initializing $S_t$ with the initial state $S_0$, the algorithm iteratively refines states, bounded by a maximum of $k$ iterations. The matrix $D$ captures the state differentials, and the algorithm ensures that none of its entries surpasses $\epsilon$. If an entry of $D$ breaches this threshold, a fake event is introduced, adjusting the state transitions to bring the differential back within bounds.
The algorithm's iterations (lines 10–18) either keep the differentials in $D$ under $\epsilon$ or apply the corrective mechanism to achieve this. Hence, within $k$ iterations, the algorithm guarantees that all state transition differentials adhere to the threshold $\epsilon$.    □
Algorithm 4: Iterative supervisory control refinement for multiple initial states
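Since the pseudocode appears as a figure in the published version, the following Python sketch reconstructs its main loop from the surrounding description. The verifier interface (`observations`, `observation_matrix`, `insert_fake_event`, `successors`) is hypothetical and only marks where each quantity enters; this is not the authors' exact pseudocode:

```python
import math

def refine(verifier, k, eps, xi):
    """A sketch of Algorithm 4's outer loop: walk the verifier for up to k
    observation steps, rebuild the differential matrix D at each reached
    state, and insert a fake event wherever an entry exceeds the decaying
    threshold eps_k = eps * exp(-xi * step)."""
    states = [verifier.initial]                    # S_t starts at S_0
    for step in range(1, k + 1):
        eps_k = eps * math.exp(-xi * step)         # per-step privacy threshold
        next_states = []
        for S in states:
            for omega in verifier.observations(S):
                A = verifier.observation_matrix(S, omega)  # one row per initial state
                n = len(A)
                for i in range(n):
                    for j in range(i + 1, n):
                        d = max(abs(x - y) for x, y in zip(A[i], A[j]))
                        if d > eps_k:              # entry of D breaches the bound
                            verifier.insert_fake_event(S, omega)
                next_states.extend(verifier.successors(S, omega))
        states = next_states
    return verifier
```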

6. Numerical Case Study

This section presents a numerical case study that illustrates the proposed methodology and confirms its effectiveness and applicability to discrete event systems modeled by probabilistic automata. A DES represented by a probabilistic automaton is shown in Figure 4, where $\Sigma_o = \{\alpha, \beta, \lambda, \gamma, \mu\}$, $\Sigma_{uo} = \{\tau\}$, $\Sigma_{fake} = \{\alpha', \beta', \lambda', \gamma', \mu'\}$, and the initial states are $\{s_0, s_1, s_2\}$. The verifier of this system is shown in Figure 5. Let $\Theta(\alpha) = 1$, $\Theta(\beta) = 2$, $\Theta(\lambda) = 3$, $\Theta(\gamma) = 4$, $\Theta(\mu) = 5$, $\epsilon = 0.14$, and $\xi = 0.01$.
For $k = 1$ and state $S_0$ in $V_\omega^k$: $\epsilon_1 = \epsilon \cdot e^{-0.01} = 0.139$.
$T(S_0, \alpha) = 1$, $T(S_0, \lambda) = 2$. $A_\varepsilon(\{s_0\} \mid S_0) = (0.2, 0.8)$; $A_\varepsilon(\{s_1\} \mid S_0) = (0.31, 0.69)$; $A_\varepsilon(\{s_2\} \mid S_0) = (0.22, 0.78)$.
Probability differences are:
$(s_0, s_1) = (0.110, 0.110)$; $(s_1, s_2) = (0.090, 0.090)$; $(s_2, s_0) = (0.020, 0.020)$.
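These checks are easy to script; a minimal sketch using the values above (the helper is illustrative, not part of the verifier construction):

```python
import math
from itertools import combinations

eps_1 = 0.14 * math.exp(-0.01)        # eps_1 ~ 0.139
vectors = {
    "s0": (0.20, 0.80),
    "s1": (0.31, 0.69),
    "s2": (0.22, 0.78),
}
for (a, va), (b, vb) in combinations(vectors.items(), 2):
    diff = [round(abs(x - y), 3) for x, y in zip(va, vb)]
    print(a, b, diff, all(d <= eps_1 for d in diff))
# s0 s1 [0.11, 0.11] True
# s0 s2 [0.02, 0.02] True
# s1 s2 [0.09, 0.09] True
```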
For $k = 2$: $\epsilon_2 = \epsilon \cdot e^{-0.02} = 0.136$. For state $S_1$:
$T(S_1, \beta) = 1$, $T(S_1, \lambda) = 2$, $T(S_1, \gamma) = 3$. $A_\varepsilon(\{s_0\} \mid S_0) \times A(S_0, S_1) = 0.2$; $A_\varepsilon(\{s_1\} \mid S_0) \times A(S_0, S_1) = 0.31$; $A_\varepsilon(\{s_2\} \mid S_0) \times A(S_0, S_1) = 0.22$.
Then:
$A_\varepsilon(\{s_4\} \mid S_1) = (0.2, 0, 0)$; $A_\varepsilon(\{s_6\} \mid S_1) = (0.171, 0.047, 0.093)$; $A_\varepsilon(\{s_7\} \mid S_1) = (0.220, 0, 0)$.
Probability differences are:
$(s_4, s_6) = (0.029, 0.047, 0.093)$; $(s_6, s_7) = (0.049, 0.047, 0.093)$; $(s_7, s_4) = (0.020, 0, 0)$.
For $k = 2$ and state $S_2$:
$T(S_2, \beta) = 1$, $T(S_2, \lambda) = 2$, $T(S_2, \mu) = 3$. $A_\varepsilon(\{s_0\} \mid S_0) \times A(S_0, S_2) = (0.2, 0.8) \times (0, 1)^T = 0.8$; $A_\varepsilon(\{s_1\} \mid S_0) \times A(S_0, S_2) = (0.31, 0.69) \times (0, 1)^T = 0.69$; $A_\varepsilon(\{s_2\} \mid S_0) \times A(S_0, S_2) = (0.22, 0.78) \times (0, 1)^T = 0.78$.
For $s_3$:
$A_\lambda(\{s_3\} \mid S_2) = 0.8 \times C_\lambda(\{s_3\} \mid S_2) = 0.8 \times (0.6, 0, 0.14) = (0.480, 0, 0.112)$; $\rho(s_3, \gamma) = 0.26$; $\rho(s_3, \gamma)^1 \times A_\lambda(\{s_3\} \mid S_2) = 0.26 \times (0.48, 0, 0.11) = (0.125, 0, 0.029)$; $\rho(s_3, \gamma)^2 \times A_\lambda(\{s_3\} \mid S_2) = (0.26)^2 \times (0.48, 0, 0.11) = (0.032, 0, 0.007)$.
For $s_5$:
$A_\lambda(\{s_5\} \mid S_2) = 0.69 \times C_\lambda(\{s_5\} \mid S_2) = 0.69 \times (0.8, 0, 0) = (0.552, 0, 0)$; $\rho(s_5, \gamma) = 0.2$; $\rho(s_5, \gamma)^1 \times A_\lambda(\{s_5\} \mid S_2) = 0.2 \times (0.55, 0, 0) = (0.110, 0, 0)$; $\rho(s_5, \gamma)^2 \times A_\lambda(\{s_5\} \mid S_2) = (0.2)^2 \times (0.55, 0, 0) = (0.022, 0, 0)$.
For $s_6$:
$A_\lambda(\{s_6\} \mid S_2) = 0.78 \times C_\lambda(\{s_6\} \mid S_2) = 0.78 \times (0.55, 0.15, 0) = (0.429, 0.117, 0)$; $\rho(s_6, \gamma) = 0.3$; $\rho(s_6, \gamma)^1 \times A_\lambda(\{s_6\} \mid S_2) = 0.3 \times (0.43, 0.12, 0) = (0.129, 0.036, 0)$; $\rho(s_6, \gamma)^2 \times A_\lambda(\{s_6\} \mid S_2) = (0.3)^2 \times (0.43, 0.12, 0) = (0.039, 0.011, 0)$.
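The powers of the $\gamma$ self-loop probability scale each vector geometrically; a small helper (a sketch using the rounded vectors above) makes the step explicit:

```python
def scaled(vector, p_loop, k):
    """Apply the gamma self-loop probability k times to a vector."""
    return [round(p_loop ** k * v, 3) for v in vector]

print(scaled([0.48, 0.0, 0.11], 0.26, 1))  # [0.125, 0.0, 0.029]
print(scaled([0.48, 0.0, 0.11], 0.26, 2))  # [0.032, 0.0, 0.007]
print(scaled([0.43, 0.12, 0.0], 0.30, 2))  # [0.039, 0.011, 0.0]
```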
For $k = 3$: $\epsilon_3 = \epsilon \cdot e^{-0.03} = 0.132$. For state $S_3$:
$T(S_3, \alpha) = 1$, $T(S_3, \beta) = 2$, $T(S_3, \lambda) = 3$, $T(S_3, \gamma) = 4$, $T(S_3, \mu) = 5$. $A_\alpha(\{s_4\} \mid S_1) \times A(S_1, S_3) = (0.2, 0, 0) \times (1, 0, 0)^T = 0.2$; $A_\alpha(\{s_6\} \mid S_1) \times A(S_1, S_3) = (0.171, 0.047, 0.093) \times (1, 0, 0)^T = 0.171$; $A_\alpha(\{s_7\} \mid S_1) \times A(S_1, S_3) = (0.22, 0, 0) \times (1, 0, 0)^T = 0.22$. $A_{\alpha\beta}(\{s_9\} \mid S_3) = 0.2 \times C_{\alpha\beta}(\{s_9\} \mid S_3) = 0.2 \times (0, 0.4, 0, 0.4, 0.2) = (0, 0.080, 0, 0.080, 0.040)$; $A_{\alpha\beta}(\{s_{15}\} \mid S_3) = 0.17 \times C_{\alpha\beta}(\{s_{15}\} \mid S_3) = 0.17 \times (0.24, 0, 0.19, 0.4, 0.17) = (0.041, 0, 0.032, 0.068, 0.029)$; $A_{\alpha\beta}(\{s_{11}\} \mid S_3) = 0.22 \times C_{\alpha\beta}(\{s_{11}\} \mid S_3) = 0.22 \times (0, 0.25, 0, 0, 0.75) = (0, 0.055, 0, 0, 0.165)$.
Probability differences are:
$(s_9, s_{15}) = (0.041, 0.080, 0.032, 0.012, 0.011)$; $(s_{15}, s_{11}) = (0.041, 0.055, 0.032, 0.068, 0.136)$; $(s_{11}, s_9) = (0, 0.025, 0, 0.080, 0.125)$.
For $k = 3$ and state $S_6$:
$T(S_6, \alpha) = 1$, $T(S_6, \lambda) = 2$, $T(S_6, \gamma) = 3$, $T(S_6, \mu) = 4$.
We then compute:
$A_\lambda(\{s_3\} \mid S_2) \times A(S_2, S_6) = 0.480$; $A_\lambda(\{s_5\} \mid S_2) \times A(S_2, S_6) = 0.552$; $A_\lambda(\{s_6\} \mid S_2) \times A(S_2, S_6) = 0.429$.
Here, the probability of reaching either $s_8$ or $s_{12}$ under $\omega = \lambda\beta$ is conditioned as follows:
$\mathrm{Pr}(s_{12} \mid \phi(s_0, \lambda\beta)) = \frac{0.48 \times 0.55}{0.48 \times 0.55 + 0.48} = 0.355$;
$\mathrm{Pr}(s_8 \mid \phi(s_0, \lambda\beta)) = \frac{0.48}{0.48 \times 0.55 + 0.48} = 0.645$.
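These two conditional probabilities are just normalized path masses; as a quick sketch:

```python
w_s12 = 0.48 * 0.55                  # path mass ending in s12
w_s8 = 0.48                          # path mass ending in s8
total = w_s12 + w_s8
print(round(w_s12 / total, 3))       # 0.355
print(round(w_s8 / total, 3))        # 0.645
```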
Next, we have:
$A_{\lambda\beta}(\{s_8, s_{12}\} \mid S_6) = 0.48 \times C_{\lambda\beta}(\{s_8, s_{12}\} \mid S_6) = (0.218, 0.139, 0.146, 0.143)$; $A_{\lambda\beta}(\{s_{10}\} \mid S_6) = (0.221, 0.121, 0.044, 0.165)$; $A_{\lambda\beta}(\{s_{15}\} \mid S_6) = (0.103, 0.081, 0.172, 0.073)$.
Probability differences are:
$(\{s_8, s_{12}\}, s_{10}) = (0.003, 0.018, 0.102, 0.022)$; $(s_{10}, s_{15}) = (0.118, 0.040, 0.128, 0.092)$; $(s_{15}, \{s_8, s_{12}\}) = (0.115, 0.058, 0.026, 0.070)$.
Based on the numerical investigation conducted, the automaton $G^\#$ satisfies the $(\epsilon, \xi)$-differential privacy condition for $k = 1$ with $\epsilon_1 = 0.139$ and for $k = 2$ with $\epsilon_2 = 0.136$. However, the condition is not met for $k = 3$ with $\epsilon_3 = 0.132$ for the pair $(s_{15}, s_{11})$. For the state $s_{11}$, triggering the fake event $\mu'$ via supervisory control, $\Phi(s_{11}, \mu) = (s_{11}, \mu')$, redistributes the event probabilities to $(0, 0.225, 0, 0, 0.675, 0.1)$. Triggering $\mu'$ at state $s_{15}$, $\Phi(s_{15}, \mu) = (s_{15}, \mu')$, adjusts the transition probabilities to $(0.216, 0, 0.171, 0.36, 0.153, 0.1)$. We now compute the transformed probabilities, taking into account the effect of the triggered fake event $\mu'$:
$A'_{\alpha\beta}(\{s_{11}\} \mid S_3) = 0.22 \times (0, 0.225, 0, 0, 0.675, 0.1) = (0, 0.050, 0, 0, 0.149, 0.022)$; $A'_{\alpha\beta}(\{s_{15}\} \mid S_3) = 0.171 \times (0.216, 0, 0.171, 0.36, 0.153, 0.1) = (0.037, 0, 0.029, 0.062, 0.026, 0.017)$; $A'_{\alpha\beta}(\{s_9\} \mid S_3) = 0.2 \times (0, 0.4, 0, 0.4, 0.2, 0) = (0, 0.080, 0, 0.080, 0.040, 0)$.
Based on the above, the supervisory mechanism successfully enforces $(\epsilon, \xi)$-differential privacy for $k = 3$ with $\epsilon_3 = 0.132$.
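As a final numeric check, a short sketch re-computes the maximal componentwise difference for the violating pair before and after the insertions (vectors copied from above; the sixth post-insertion component is the fake-event mass):

```python
before_s11 = [0.0, 0.055, 0.0, 0.0, 0.165]
before_s15 = [0.041, 0.0, 0.032, 0.068, 0.029]
print(round(max(abs(x - y) for x, y in zip(before_s11, before_s15)), 3))  # 0.136 > 0.132

after_s11 = [0.0, 0.050, 0.0, 0.0, 0.149, 0.022]
after_s15 = [0.037, 0.0, 0.029, 0.062, 0.026, 0.017]
print(round(max(abs(x - y) for x, y in zip(after_s11, after_s15)), 3))    # 0.123 <= 0.132
```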

7. Conclusions

Our study marks a significant advancement in probabilistic automata, introducing a verification protocol aimed at protecting initial states. Utilizing advanced mathematical methods, this protocol evaluates privacy risks in event sequences and incorporates a supervisory control to maintain privacy without sacrificing system functionality. Looking ahead, we plan to explore the integration of probabilistic automata with dynamic concealment frameworks, focusing on adaptability and responsiveness in various system environments.
Despite these advancements, we recognize areas needing further exploration and improvement. Managing the complexity of multiple initial states in probabilistic automata is challenging, particularly regarding scalability and efficiency in larger systems. The reliance on precise observation sequences is another critical aspect, as any inaccuracies could undermine the reliability of our privacy assurances. The resource-intensive nature of our approach also necessitates consideration, especially in settings with limited resources. Additionally, enhancing the model’s adaptability to dynamic systems with frequently changing initial states and behaviors is a crucial future direction. Finally, broadening the methodology’s applicability to diverse systems and domains remains an essential goal.
Thus, while our research establishes a solid foundation for differential privacy in discrete event systems using probabilistic automata, it also underscores the need for continuous advancements in complexity management, observation accuracy, resource optimization, adaptability, and application scope. These areas will be central to our ongoing research efforts in this evolving field.

Author Contributions

Conceptualization, Z.L.; methodology, T.A.A.-S.; software, G.Z.; validation, M.A.E.-M.; formal analysis, M.S.; investigation, T.A.A.-S. and G.Z.; resources, M.A.E.-M. and M.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work is partially supported by the Key Technology R&D Program of Henan Province of China (Grant No. 232102220060), National Natural Science Foundation of China (Grant No. 62103349), and the Special Fund for Scientific and Technological Innovation Strategy of Guangdong Province (Grant No. 2022A0505030025). The authors present their appreciation to King Saud University for funding this research through Researchers Supporting Program number (RSPD2023R704), King Saud University, Riyadh, Saudi Arabia.

Data Availability Statement

The experimental data used in this paper can be obtained by contacting the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
DESs  Discrete Event Systems

References

1. Al-Makhlafi, M.; Gu, H.; Almuaalemi, A.; Almekhlafi, E.; Adam, M.M. RibsNet: A scalable, high-performance, and cost-effective two-layer-based cloud data center network architecture. IEEE Trans. Netw. Serv. Manag. 2023, 20, 1676–1690.
2. Rao, P.S.; Satyanarayana, S. Privacy-preserving data publishing based on sensitivity in context of Big Data using Hive. J. Big Data 2018, 5, 20.
3. Jain, P.; Gyanchandani, M.; Khare, N. Big data privacy: A technological perspective and review. J. Big Data 2016, 3, 472–496.
4. Yao, L.; Chen, Z.; Hu, H.; Wu, G.; Wu, B. Sensitive attribute privacy preservation of trajectory data publishing based on l-diversity. Distrib. Parallel Databases 2020, 39, 785–811.
5. Zhang, B.; Lin, J.C.; Liu, Q.; Fournier-Viger, P.; Djenouri, Y. A (k, p)-anonymity framework to sanitize transactional database with personalized sensitivity. J. Internet Technol. 2019, 20, 801–808.
6. Kacha, L.; Zitouni, A.; Djoudi, M. KAB: A new k-anonymity approach based on black hole algorithm. J. King Saud Univ.-Comput. Inf. Sci. 2022, 34, 4075–4088.
7. Dwork, C. Differential privacy. In Automata, Languages and Programming. ICALP 2006; Lecture Notes in Computer Science; Bugliesi, M., Preneel, B., Sassone, V., Wegener, I., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; Volume 4052.
8. Dwork, C.; Roth, A. The algorithmic foundations of differential privacy. Found. Trends Theor. Comput. Sci. 2013, 9, 211–407.
9. Geng, Q.; Viswanath, P. The optimal noise-adding mechanism in differential privacy. IEEE Trans. Inf. Theory 2016, 62, 925–951.
10. He, J.; Cai, L.; Guan, X. Differential private noise adding mechanism and its application on consensus algorithm. IEEE Trans. Signal Process. 2020, 68, 4069–4082.
11. Sarkar, A.; Sharma, A.; Gill, A.; Thakur, P. A differential privacy-based system for efficiently protecting data privacy. In Proceedings of the 2023 International Conference on Sustainable Computing and Smart Systems (ICSCSS), Coimbatore, India, 14–16 June 2023; pp. 1399–1404.
12. Jain, P.; Gyanchandani, M.; Khare, N. Differential privacy: Its technological prescriptive using big data. J. Big Data 2018, 5, 15.
13. Farias, V.A.; Brito, F.T.; Flynn, C.; Machado, J.C.; Majumdar, S.; Srivastava, D. Local dampening: Differential privacy for non-numeric queries via local sensitivity. VLDB J. 2023, 32, 1191–1214.
14. Cassandras, C.G.; Lafortune, S. Systems and models. In Introduction to Discrete Event Systems; Springer: Cham, Switzerland, 2021; pp. 1–52.
15. Lin, F. Opacity of discrete event systems and its applications. Automatica 2011, 47, 496–503.
16. Badouel, E.; Bednarczyk, M.A.; Borzyszkowski, A.M.; Caillaud, B.; Darondeau, P. Concurrent secrets. Discrete Event Dyn. Syst. 2007, 17, 425–446.
17. Zhang, K. State-based opacity of real-time automata. In Proceedings of the 27th IFIP WG 1.5 International Workshop on Cellular Automata and Discrete Complex Systems (AUTOMATA 2021), Marseille, France, 12–14 July 2021; Castillo-Ramirez, A., Guillon, P., Perrot, K., Eds.; Volume 90, pp. 12:1–12:15.
18. Lai, A.; Lahaye, S.; Li, Z. Initial-state detectability and initial-state opacity of unambiguous weighted automata. Automatica 2021, 127, 109490.
19. Han, X.; Zhang, K.; Zhang, J.; Li, Z.; Chen, Z. Strong current-state and initial-state opacity of discrete-event systems. Automatica 2023, 148, 110756.
20. Balun, J.; Masopust, T. On verification of weak and strong k-step opacity for discrete-event systems. IFAC-PapersOnLine 2022, 55, 108–113.
21. Yin, X.; Li, Z.; Wang, W.; Li, S. Infinite-step opacity and k-step opacity of stochastic discrete-event systems. Automatica 2019, 99, 266–274.
22. Balun, J.; Masopust, T. On opacity verification for discrete-event systems. IFAC-PapersOnLine 2020, 53, 2075–2080.
23. Jones, A.; Leahy, K.; Hale, M. Towards differential privacy for symbolic systems. In Proceedings of the 2019 American Control Conference (ACC), Philadelphia, PA, USA, 10–12 July 2019; pp. 372–377.
24. Saboori, A.; Hadjicostis, C.N. Verification of initial-state opacity in security applications of DES. In Proceedings of the 2008 9th International Workshop on Discrete Event Systems, Gothenburg, Sweden, 28–30 May 2008; pp. 328–333.
25. Keroglou, C.; Hadjicostis, C.N. Initial state opacity in stochastic DES. In Proceedings of the 2013 IEEE 18th Conference on Emerging Technologies and Factory Automation (ETFA), Cagliari, Italy, 10–13 September 2013; pp. 1–8.
26. Basile, F.; De Tommasi, G.; Motta, C.; Sterle, C. Necessary and sufficient condition to assess initial-state-opacity in live bounded and reversible discrete event systems. IEEE Control Syst. Lett. 2022, 6, 2683–2688.
27. Tong, Y.; Li, Z.; Seatzu, C.; Giua, A. Verification of state-based opacity using Petri nets. IEEE Trans. Automat. Contr. 2017, 62, 2823–2837.
28. Cong, X.; Fanti, M.P.; Mangini, A.M.; Li, Z. On-line verification of initial-state opacity by Petri nets and integer linear programming. ISA Trans. 2019, 93, 108–114.
29. Zhang, K.; Yin, X.; Zamani, M. Opacity of nondeterministic transition systems: A (bi)simulation relation approach. IEEE Trans. Automat. Contr. 2019, 64, 5116–5123.
30. Hadjicostis, C.N.; Keroglou, C. Opacity formulations and verification in discrete event systems. In Proceedings of the 2014 IEEE Emerging Technology and Factory Automation (ETFA), Barcelona, Spain, 16–19 September 2014; pp. 1–12.
31. Teng, Y.; Li, Z.; Yin, L.; Wu, N. State-based differential privacy verification and enforcement for probabilistic automata. Mathematics 2023, 11, 1853.
32. Steinke, T. Composition of differential privacy and privacy amplification by subsampling. arXiv 2022, arXiv:2210.00597.
33. Cassandras, C.G.; Lafortune, S. Languages and automata. In Introduction to Discrete Event Systems; Springer: Cham, Switzerland, 2021.
34. Kumar, R.; Garg, V. Control of stochastic discrete event systems: Synthesis. In Proceedings of the IEEE Conference on Decision and Control, Tampa, FL, USA, 18 December 1998; Volume 3, pp. 3299–3304.
35. Rabiner, L.R. A tutorial on hidden Markov models and selected applications in speech recognition. Proc. IEEE 1989, 77, 257–286.
36. McSherry, F.; Talwar, K. Mechanism design via differential privacy. In Proceedings of the 48th Annual IEEE Symposium on Foundations of Computer Science (FOCS’07), Providence, RI, USA, 21–23 October 2007; pp. 94–103.
37. Jacob, R.; Lesage, J.-J.; Faure, J.-M. Overview of discrete event systems opacity: Models, validation, and quantification. Annu. Rev. Control 2016, 41, 135–146. Available online: https://www.sciencedirect.com/science/article/pii/S1367578816300189 (accessed on 13 July 2023).
Figure 1. A probabilistic automaton $G$.
Figure 4. A probabilistic automaton $G^\#$.
Figure 5. The verifier $V_\omega^k$ for $k = 3$ of the probabilistic automaton $G^\#$.