1. Introduction
In the context of expansive data analytics and the handling of large, sensitive datasets, the RibsNet architecture [1] presents a significant advancement in data center networks, enhancing the management and confidentiality of extensive data collections. This innovation aligns with the ongoing efforts in data privacy, where methodologies like nearest similarity-based clustering [2], k-anonymity [3], and augmented l-diversity [4] have been developed, and both the scholarly community and the industrial sphere are diligently strengthening the confidentiality of sensitive elements within extensive databases. These advanced methods strive to balance the essential benefits of big data analytics with the ethical need for individual privacy [2]. Especially in domains of heightened sensitivity, such as healthcare, these mechanisms for preserving privacy serve as a sine qua non for engendering data-driven insights while mitigating the inherent risks posed by the disclosure of personalized data [3,4].
In data privacy, advanced anonymity architectures are being carefully improved to protect sensitive information in transactional databases. These pioneering algorithms, frequently modulated by bespoke sensitivity indices dictated by the end users, epitomize a finely calibrated balance between data obfuscation and utility. Surpassing extant paradigms in terms of computational frugality and data preservation, these algorithms prioritize the diminution of information attrition [5]. In synchrony with the emergent paradigms in the data privacy landscape, bio-inspired computational strategies such as the black hole algorithm are being judiciously employed to surmount the conventional limitations endemic to k-anonymity models. Preliminary empirical evidence substantiates the black hole algorithm's superior capacity for amplifying data utility when juxtaposed with existing frameworks [6].
Differential privacy is a game-changing concept that precisely measures the privacy risks associated with using statistical databases. Transcending classical paradigms, this framework contends with privacy infirmities that affect even those individuals who are conspicuously absent from the database's annals. Through the deployment of a sophisticated suite of algorithms, differential privacy navigates an impeccable equilibrium between maximizing data utility and furnishing unassailable privacy guarantees. As a result, it carves out a novel ambit in the realm of secure, yet perspicacious, data analytics [7,8].
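For reference, in its standard formulation, ϵ-differential privacy requires that a randomized mechanism M satisfy, for any two neighboring databases D and D′ differing in a single record and for every set of outcomes O,

\[
\Pr[M(D) \in O] \;\le\; e^{\epsilon}\, \Pr[M(D') \in O],
\]

with smaller values of ϵ corresponding to stronger privacy guarantees.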
The incorporation of stochastic noise has solidified its role as a quintessential instrument for attaining differential privacy, providing a rigorous framework for individual data safeguarding within collective data assemblages. Employing exacting mathematical formulations such as ϵ-differential [9] and (ϵ, δ)-differential privacy [10], this approach critically evaluates the performance metrics of assorted noise-introduction protocols. It furnishes a nuanced exegesis of the intrinsic trade-offs between data concealment and utility, notably in the realm of consensus algorithms.
Advanced frameworks that integrate noise have been developed to protect the subtle differences between similar datasets using differential privacy principles. These frameworks facilitate the injection of stochastic noise, which adheres to various probabilistic distributions, such as the Laplacian or Gaussian models, into the results of data queries. By doing so, they fulfill the stipulated privacy assurances [10,11].
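As a hedged illustration of this noise-injection principle, the following minimal sketch shows the classical Laplace mechanism for a numeric query; it is not the specific protocol of [10,11], and the function and parameter names are ours.

    import numpy as np

    def laplace_mechanism(true_answer, sensitivity, epsilon, rng=None):
        """Release a noisy answer satisfying epsilon-differential privacy for a
        numeric query with the given global sensitivity."""
        rng = rng if rng is not None else np.random.default_rng()
        scale = sensitivity / epsilon          # Laplace scale b = sensitivity / epsilon
        return true_answer + rng.laplace(loc=0.0, scale=scale)

    # Example: privately release a count query (sensitivity 1) with epsilon = 0.5.
    private_count = laplace_mechanism(true_answer=1234, sensitivity=1.0, epsilon=0.5)

A Gaussian variant follows the same pattern with normally distributed noise and is typically analyzed under the relaxed (ϵ, δ) guarantee.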
In the evolving field of differential privacy, especially for qualitative data, moving from global sensitivity models to localized sensitivity approaches is a significant change. While the exponential mechanism [12] has long served as the bedrock for global sensitivity-centric strategies, the recently unveiled local dampening mechanism [13] inaugurates a more sophisticated and malleable architecture for safeguarding data privacy. This innovative development exploits local sensitivity metrics to furnish more contextually nuanced and granular privacy assurances, thereby pioneering new frontiers in the intricate realm of qualitative data protection.
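For context, the standard exponential mechanism selects a candidate output r with probability weighted exponentially by a quality function u(D, r) whose global sensitivity is Δu:

\[
\Pr[\mathcal{M}(D) = r] \;\propto\; \exp\!\left(\frac{\epsilon\, u(D, r)}{2\,\Delta u}\right),
\]

whereas the local dampening mechanism of [13] replaces this global sensitivity term with a local, instance-dependent quantity.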
Discrete event systems (DESs) function as a formidable paradigm for the conceptualization, scrutiny, and governance of a diverse gamut of systems, wherein the conceptual entity of an 'event' is pivotal in dictating systemic dynamics. Ranging from industrial manufacturing sequences and computational networks to telecommunication infrastructures and operations research, the scope of DESs effortlessly straddles conventional disciplinary demarcations. Within this multifaceted domain, a salient focal point comprises the assessment and amelioration of security susceptibilities, especially in scenarios characterized by incomplete observability [14].
Opacity [15] stands as an indispensable facet of security within the ambit of DESs, concentrating on the extent to which a system discloses its internal confidentialities to an extraneous, non-active observer, frequently denominated as the intruder or malevolent scrutineer. In this intellectual landscape, the notion of language-based opacity [16] takes on considerable weight. This concept delves into the inherent restrictions that come with an outsider's incomplete surveillance of a system. The critical inquiry here is whether these observational limitations inhibit the outsider from discerning whether the unfolding sequence of events betrays confidential or delicate matters.
State-based opacity [17] augments the analytical tableau by imbuing it with discrete temporal dimensions that enrich the overarching conceptual framework. Contingent upon the particular temporal instances at which a system transits through confidential states, this genre of opacity can be meticulously categorized into four seminal subclasses: initial-state opacity [18], current-state opacity [19], k-step opacity [20], and infinite-step opacity [21]. Each of these refined categories furnishes a stratified comprehension of a system's robustness vis-à-vis unauthorized external examination across diverse temporal vicissitudes. Notwithstanding its pivotal significance, the authentication of opacity within the frameworks of DESs is anything but elementary, frequently confronting formidable computational impediments. Cutting-edge inquiries in the discipline delve into the architectural constituents of automata paradigms that could potentially facilitate the more manageable verification of opacity [22].
Within the intricate nexus of differential privacy and symbolic control architectures, the transposition of statistical privacy schemata to non-quantitative datasets establishes a pioneering standard. Leveraging exponential methodologies and specialized automata configurations, sensitive alphanumeric sequences are skillfully approximated. This is executed with a dual objective: the preservation of informational sanctity and the retention of data utility. The governance of this intricate process is underscored by the employment of distance metrics such as the Levenshtein distance [23].
At this interdisciplinary juncture, data security is enhanced across a diverse spectrum of fields, ranging from natural language processing to intricate industrial systems. Predicated upon this foundational understanding, the current scholarly endeavor aims to further elevate this paradigm. The focus is sharpened on safeguarding the opacity of initial states within DESs, utilizing probabilistic automata embedded within the stringent confines of differential privacy. Within the realm of discrete-event systems, the notion of initial state opacity emerges as an indispensable yardstick for assessing the system's prowess in concealing its nascent state from unauthorized external entities.
Classical deterministic frameworks for initial state opacity have evolved to encompass probabilistic and stochastic paradigms, thus facilitating a more textured comprehension of security dynamics in volatile environments. To cater to elevated privacy requisites, especially in situations where an intrusive entity possesses comprehensive structural awareness of the system, advanced iterations of opacity, such as robust initial state opacity, have been formulated [19,24,25].
Techniques for substantiating these robust opacity conditions are undergoing refinement via innovative approaches, including parallel composition techniques and integer linear programming algorithms [26]. Initial state opacity is a crucial area that combines verification methods, probability models, and cybersecurity concepts to effectively assess data protection in dynamic systems. Cutting-edge computational schemas, such as Petri net models, facilitate the expeditious verification of initial state opacity, obviating the necessity for laborious state space enumeration. Contemporary real-time validation methodologies, encompassing linear programming algorithms, further enhance the relevance and applicability of initial state opacity in intricate network architectures, including those inherent to defense systems and mobile communications networks [27,28].
In architectures delineated via non-deterministic finite automata and their probabilistic counterparts, i.e., probabilistic finite automata, the verification of initial state opacity necessitates sophisticated computational modalities and analytical techniques. The complexity is exacerbated when extended to non-deterministic transition systems, particularly in instances where state spaces are potentially unbounded. The exploration of initial state opacity thus constitutes a critical nexus between formal computational methodologies, automata theory, and information security, engendering both algorithmic intricacies and theoretical conundrums [29,30]. Differential privacy was integrated into discrete event systems using probabilistic automata in a vital piece of research [31]. The major goal was to protect state information illustrating system resource settings. This technique was designed to provide state differential privacy, with an emphasis on thorough, step-by-step validation. This method made it difficult for potential adversaries to infer the system's starting state from a limited collection of observations.
In an era where the landscape of extensive data analysis is rapidly evolving, the necessity for impregnable data privacy frameworks becomes ever more apparent. The research presented herein constitutes a substantial breakthrough in the sphere of differential privacy within the context of DESs. This study introduces the concept of the privacy decay factor [32], denoted as ξ, a groundbreaking development that diverges markedly from preceding research. Prior studies primarily concentrated on the protection of state information within DESs through the utilization of probabilistic automata. This research, however, forges new paths by integrating this innovative parameter. The implementation of ξ has proven to be particularly efficacious in ensuring the privacy of initial states or conditions, thereby guaranteeing the secure concealment of sensitive data from the very outset of data processing, a pivotal requirement in the face of continually evolving privacy standards.
By broadening the scope to encompass an expanded range of initial state conditions and databases, this approach markedly bolsters the method's resilience and adaptability. The incorporation of ξ addresses a significant void in existing methodologies, providing a more fortified framework for preserving the privacy of initial states against potential adversaries. This methodical consideration of an extensive array of starting conditions not only augments the theoretical foundations of differential privacy within DESs but also showcases its tangible applicability across a spectrum of intricate systems.
Building on this essential understanding, the present scholarly pursuit endeavors to advance this paradigm further. It concentrates on the fortification of the privacy of initial states within DESs, employing probabilistic automata encased within the stringent parameters of differential privacy. As a consequence, this research establishes a new benchmark in the domain of privacy preservation, signaling a shift in data security strategies across diverse sectors. This is particularly pertinent in industries where stringent privacy measures from the commencement of data processing are of the utmost importance. The following lines describe the main contributions of this scientific endeavor:
The research introduces a novel model of (ϵ, ξ)-differential privacy for discrete event systems (DESs) formulated by probabilistic automata, ensuring equitable resource distribution across multiple initial states.
A novel verification strategy is introduced, originating from distinct observations of a vast set of initial states and evaluating adherence to the tenets of (ϵ, ξ)-differential privacy across defined observational sequences.
By seamlessly integrating probabilistic automata with a diverse set of initial states, the study presents a tailored verification approach. Should systems deviate from the privacy standards, a specialized control mechanism is deployed, enforcing (ϵ, ξ)-differential privacy within the overarching closed-loop system.
The research’s potency is exemplified through a detailed numerical case study, illustrating the verification method’s acumen in assessing the privacy integrity of specific automata classes.
The rest of this paper is organized as follows. Section 2 sets the foundation by introducing the basics of probabilistic automata and the important principles of differential privacy in the context of data security. Section 3 serves as the main investigative focus of the study. The approach to the verification of (ϵ, ξ)-differential privacy over finite steps is detailed in Section 4. In Section 5, we describe a method for ensuring (ϵ, ξ)-differential privacy via supervisory control. A numerical case study is presented in Section 6 to offer empirical support for the proposed methodologies. Finally, Section 7 concludes the paper, summarizing the key findings and their implications.
4. Verification of (ϵ, ξ)-Differential Privacy over Finite Steps
For the intricate analysis of a probabilistic automaton with multiple initial states, the verifier emerges as a pivotal tool. It meticulously dissects each initial state, treating it as the epicenter of an independent probabilistic automaton system. Through this isolated examination, the verifier offers a holistic understanding of every conceivable behavior that might emanate from a particular initial state. In essence, by simulating each initial state coupled with its observation sequence as a stand-alone probabilistic automaton, the verifier elucidates the potential trajectory of the entire automaton. This methodology, delineated in Algorithm 1, crafts a refined lens for researchers to discern the nuances of behaviors within a multi-initial-state probabilistic automaton framework.
Consider a set of n probabilistic automata, one for each initial state, each defined by its own states, events, and transition probabilities. The goal of the algorithm is to compute a verifier for an observation sequence ω. At the heart of our algorithm lies the concept of memoization, a ubiquitous technique in algorithmic optimization. We introduce a memoization function M whose domain is the Cartesian product of states and events and whose codomain is the power set of S, embracing all conceivable subsets of reachable states. Formally, for any state-event pair (s, e), M(s, e) is the subset of S reachable from s under e.
At the inception of the algorithm, M is initialized as an empty function. As the algorithm progresses and each state-event pair is encountered, the resulting set of reachable states is cached in M to prevent redundant computations in future iterations. With the memoization table M established, the algorithm begins by iterating through each initial state. Depending on the current segment of the observation sequence and the state under consideration, the set of reachable states is either computed afresh using the transition function or promptly retrieved from M. As the algorithm traverses the observation sequence, it accumulates the resulting reachable states. Subsequently, the algorithm computes the product states, which are deduced from the observable events and the current state, and synthesizes the verifier from the resulting collection of reachable sets. The worst-case complexity of Algorithm 1 grows with the number of initial states, the size of the state space, and the length of the observation sequence.
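The memoized reachable-set computation described above can be sketched as follows; this is a minimal illustration under our own data layout and function names, not the authors' Algorithm 1 itself. The verifier state for ω is then the tuple of the sets obtained for each initial state, mirroring the role of the function ϕ in Definition 5.

    def reachable_set(transitions, initial_state, observation):
        """Set of states reachable from `initial_state` along the observable
        sequence `observation`; (state, event) lookups are cached.

        `transitions` maps (state, event) -> set of successor states."""
        memo = {}  # memoization table M: (state, event) -> reachable states

        def step(state, event):
            key = (state, event)
            if key not in memo:                      # compute once, reuse afterwards
                memo[key] = set(transitions.get(key, set()))
            return memo[key]

        current = {initial_state}
        for event in observation:                    # walk the observation sequence
            nxt = set()
            for state in current:
                nxt |= step(state, event)
            current = nxt
        return current

    # Verifier state for observation w over n initial states:
    # ( reachable_set(T1, s01, w), ..., reachable_set(Tn, s0n, w) )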
Definition 5 (Verifier). Given a probabilistic automaton structure and a list of proximate initial states leading to n distinct probabilistic automata, a differentially private verifier for a k-step observation extension modulated by a decay factor ξ is defined as a 4-tuple comprising: the cross-product of the states of the automata in the set, with each component being a subset of states of the corresponding automaton; the set of all observable events; the transition function of the verifier; and the initial verifier state, defined as the Cartesian product of the sets obtained by applying the function ϕ to each initial state and the observation ω. This verifier encapsulates the collective behaviors of the original probabilistic automata set over a k-step observation.
Theorem 1. Given a verifier that integrates the state-transition model with respect to observation ω up to step k, an initial positive parameter ϵ, a decay factor ξ, and a list of n initial states, Algorithm 2 will ascertain whether the system upholds (ϵ, ξ)-differential privacy across k steps under the modulating privacy parameter ϵ that evolves with each step.
Proof. Initiating with the assignment of a unit probability to each originating state s at step 0, the procedure warrants unequivocal certainty for these foundational states. The algorithm then invokes Algorithm 1 to procure the transition probabilities, thereby forming a probabilistic mapping from any state to another contingent on the observation sequence ω. Iteratively, the algorithm explores the subsequent potential states for each event in the observation sequence, thereby crafting a dynamic probabilistic landscape for each state s.
Central to the verification of privacy is the computation of the absolute deviation between the transition probabilities of any two initial states. If any such deviation surpasses the step-wise privacy threshold, the algorithm returns a negative result. As the observation progresses, the state set S undergoes updates to encompass the new feasible states. Should the algorithm traverse all k steps without infringing the evolving (ϵ, ξ)-differential privacy constraint, it returns an affirmative outcome. This, by induction, validates the algorithm's fidelity to (ϵ, ξ)-differential privacy for all inaugural states across all k steps, culminating in the affirmation of the theorem. □
Algorithm 1: Construction of a k-step verifier
Algorithm 2 conducts (ϵ, ξ)-differential privacy verification for a specified number of steps, k. It takes as inputs a verifier, the step count k, the privacy threshold ϵ, and the n initial states. Initially, each initial state has a transition probability of 1. The algorithm iteratively examines state transitions and probability distributions up to step k. A critical aspect of this process is the decay factor ξ, crucial for upholding rigorous privacy constraints. This factor adjusts the influence of the privacy parameter ϵ over time, ensuring consistent adherence to privacy norms throughout the algorithm's execution.
Crucially, the algorithm juxtaposes the privacy probabilities across all conceivable state pairings. Should any disparity surpass the current decayed threshold, the algorithm promptly returns 'false'. This guarantees a uniform privacy assessment, irrespective of initial-state variances, over a predictable future scope. Considering its design and operations, the worst-case computational complexity of Algorithm 2 grows with the step count k and the number of state pairs examined at each step. This provides an understanding of the resource demands for larger input sizes, crucial for practical implementations.
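The pairwise comparison at the core of Algorithm 2 can be sketched as follows; the decay schedule (multiplying ϵ by ξ at every step) and the flat data layout are illustrative assumptions on our part, not the paper's exact procedure.

    def verify_finite_step_privacy(step_probs, epsilon, xi, k):
        """Check a stepwise (epsilon, xi)-style condition over k steps.

        step_probs[t][i] is the probability of producing the observed prefix of
        length t when starting from initial state i (e.g., computed from the
        verifier of Algorithm 1)."""
        eps_t = epsilon
        for t in range(1, k + 1):
            eps_t *= xi                       # assumed decay: shrink the budget each step
            probs = step_probs[t]
            for i in range(len(probs)):       # compare every pair of initial states
                for j in range(i + 1, len(probs)):
                    if abs(probs[i] - probs[j]) > eps_t:
                        return False          # deviation exceeds the decayed threshold
        return True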
Algorithm 2: Differential privacy verification over finite steps with decay
Example 4. Consider the structure of the probabilistic automaton depicted in Figure 2, with two proximate initial states. The sets of observable and unobservable events are as specified for the automaton. The verifier for the two resulting systems is illustrated in Figure 3, in which the initial verifier state is defined as the Cartesian product of the reachable sets of the two initial states. For each verifier state and each observable event α, λ, β, or γ, the successor verifier state is obtained as the Cartesian product of the sets of states reached in the two component automata under that event, so the verifier transitions in Figure 3 follow directly from the component transitions in Figure 2.
Figure 2. A probabilistic automaton *.
Figure 3. The verifier of probabilistic automaton *.
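The verifier transitions in Example 4 amount to forming, for each observable event, the Cartesian product of the per-automaton reachable-state sets; a small sketch with hypothetical state names:

    from itertools import product

    # Hypothetical reachable sets for the two initial states under one event.
    reach_from_first = {"x1", "x2"}
    reach_from_second = {"y1"}

    # The verifier's successor state is the Cartesian product of the two sets.
    verifier_successor = set(product(reach_from_first, reach_from_second))
    # -> {('x1', 'y1'), ('x2', 'y1')}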
Example 5. Let us consider the probabilistic automaton structure in Figure 2 and assume two proximate initial states. The verifier is shown in Figure 3. We want to verify (ϵ, ξ)-differential privacy for a given privacy parameter ϵ, decay factor ξ, and number of steps. Due to the decay factor, the effective ϵ for the step under consideration is reduced accordingly. For the state examined, the deviation between the sequence probabilities of the two automata exceeds this effective threshold, and hence the system of two automata does not satisfy (ϵ, ξ)-differential privacy at that step.
5. Ensuring (ϵ, ξ)-Differential Privacy via Supervisory Control
In the prior section, we introduced the notion of (ϵ, ξ)-differential privacy, where an automaton starting from n distinct initial states produces similar observation likelihoods over a set number of steps. This concept is central to our ongoing discussion. We then delve into supervisory control as a method for ensuring this privacy alignment over a fixed sequence length. Our study further explores the realms of control theory, highlighting a groundbreaking algorithm rooted in probabilistic automata and designed to predict state transitions from observation sequences.
Consider a verifier and a state S, where S is an element of the verifier's state space and can be expressed as a tuple of multiple state components. We define a subset of states that can be reached from the initial verifier state for a specific observation sequence; in essence, this subset captures the states that are intermediate in the transition from the initial state to S through that sequence.
The set of enabled events is then introduced to encapsulate all events e that transition the system from state S to some other state belonging to this reachable subset. Such a successor state is analogous to S but is formed from a different combination of state components, where each component is a potential state from one of the original n probabilistic automata; this ensures that the system remains within the permissible state transitions. In other words, this set collects exactly those events that, when executed from state S, guide the system to another state within the permissible state transitions.
Let Θ be a ranking function defined over the observable events in the context of the verifier. For any event e, Θ(e) designates a unique scalar value, conveying its significance or importance. This formulation ensures both uniqueness, where for any two distinct events e and e′ we have Θ(e) ≠ Θ(e′), and a total order, such that for any two events e and e′ exactly one of Θ(e) < Θ(e′), Θ(e) = Θ(e′), or Θ(e) > Θ(e′) holds. Once the events are ranked by their Θ values, each event e is assigned an index value through a mapping function. In this context, for a pair of verifier states, the set of all events that transition the system from one to the other is considered, and the events in this set are systematically sorted by their respective Θ values in ascending order, ensuring a structured and meaningful sequence for subsequent operations that reflects both the probabilistic properties and the temporal nuances of the events.
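As a small illustration (with a hypothetical Θ and hypothetical event names), the ranking and index assignment can be realized as:

    # Hypothetical ranking function Theta: a unique scalar per observable event.
    theta = {"alpha": 0.7, "beta": 0.2, "gamma": 0.5, "lambda": 0.9}

    def indices_by_theta(events):
        """Sort events in ascending order of Theta and assign each an index."""
        ordered = sorted(events, key=lambda e: theta[e])
        return {event: index for index, event in enumerate(ordered)}

    indices_by_theta({"alpha", "beta", "gamma"})
    # -> {'beta': 0, 'gamma': 1, 'alpha': 2}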
For each event e belonging to the enabled-event set defined above, we define a column vector consisting exclusively of zeros and ones, together with an associated binary scalar. With v a positive integer, the corresponding matrix can be viewed as an expanded form of these column vectors, one for every event e in the set, arranged in ascending order of their Θ values. Given a probabilistic automaton structure with proximate initial states and an observation sequence ω, we take the verifier as the verification mechanism. For a compound state of the verifier, the row vectors are dimensioned accordingly, and for each event and index a corresponding relationship links the binary entries to the transitions of the verifier.
Given a probabilistic automaton with n initial states and an observation ω, let the verifier serve as the verification mechanism. For a compound state residing in the verifier's state space, we define a corresponding probabilistic matrix for each sub-state, the composite state S, and the observation sequence ω, where i is an index drawn from the set {1, …, n}. The definition unfolds in two scenarios:
First, in the base case the matrix is identified directly;
Second, in the general case the matrix is determined by a recursive relation over the verifier's transition structure.
Subsequently, the exhaustive probabilistic matrix is synthesized from these component matrices; this relationship reflects the intricate structure and hierarchy inherent in the probabilistic matrices, emphasizing a crucial characteristic such as their dimensionality or structural depth. Within this setup, h is deduced by raising a base quantity to the power of r, where r is conjectured to represent the rank or another inherent trait of the matrix. This formulation provides a foundation for the ensuing algorithmic stages, imparting a computational perspective to the entire procedure. A clear linkage between m and h exists, with m being an element of the associated index set. The strategy for computing these probabilistic matrices for each state of the verifier is encapsulated in Algorithm 3, together with its worst-case complexity.
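As a hedged, simplified illustration of the idea behind such probability matrices (composing per-event transition-probability matrices along an observation sequence; the matrices and event names below are hypothetical, and this is not the paper's exact Algorithm 3):

    import numpy as np

    def observation_probability_matrix(event_matrices, observation):
        """Compose per-event transition probability matrices along an observation.

        event_matrices[e] is an |S| x |S| matrix whose (p, q) entry is the
        probability of moving from state p to state q on observable event e."""
        n = next(iter(event_matrices.values())).shape[0]
        M = np.eye(n)                        # empty observation: identity
        for e in observation:
            M = M @ event_matrices[e]        # chain the per-event matrices in order
        return M

    mats = {
        "a": np.array([[0.6, 0.4], [0.0, 1.0]]),
        "b": np.array([[1.0, 0.0], [0.3, 0.7]]),
    }
    P = observation_probability_matrix(mats, ["a", "b"])
    # P[0, 1] is the probability of ending in state 1 from state 0 after 'a b'.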
Algorithm 3: Probability matrices determination
Example 6. Consider the probabilistic automaton structure represented in Figure 2 and its corresponding verifier illustrated in Figure 3, with the parameters given there. We define the function Θ by assigning a distinct scalar value to each observable event; the associated vectors and probability matrices are then obtained for each event in ascending order of Θ.
Example 7. Consider the verifier depicted in Figure 3. For the initial verifier state and each subsequent state, the reachable sets, the enabled events, and the corresponding probability matrices are computed for each observable event.
Within the intricate domain of probabilistic automata, the introduction of
fake events represents a paradigm shift in how systems interact with their observable events. This is not a superficial addition; rather, it is a calculated stratagem where these events, ingeniously derived from the set
of genuine observable events, play a pivotal role in ensuring system integrity and privacy [37]. The origin of these fake events can be traced back to the enforcement mechanism, which incorporates an insertion function designed to embed any such event seamlessly. While on the surface these inserted events might mirror the observable ones, their underlying design is distinct. The crucial aspect lies in their deliberate ambiguity; they mirror the observable events with such precision that they become indistinguishable upon a cursory examination. This resemblance is not coincidental but serves a pivotal role in the broader system dynamics.
The aforementioned duality serves a higher purpose, particularly in the domain of differential privacy. The insertion of fake events is not merely a technique for enhancing system dynamics; it acts as a vital control strategy. When the system fails to achieve (ϵ, ξ)-differential privacy through conventional means, these fake events step in to refine the transition probabilities of the genuine events, ensuring the attainment of the desired (ϵ, ξ)-differential privacy threshold. Such innovation highlights the sophistication and flexibility of probabilistic automata theory, where privacy safeguards and operational effectiveness are adeptly intertwined.
The supervisory control system in question operates based on the observed events. In addition to these genuine events, we introduce fabricated events, each of which is a simulation derived from its genuine counterpart; the universe of all events is the union of the genuine and the fabricated event sets. The matrix D is a square configuration whose dimension is given by the system's state count n, and an element of D delineates the difference between the actions executed at two states in response to a specific observation ω. To incorporate (ϵ, ξ)-differential privacy, we leverage an insertion function E, which maps a state and a genuine event to a corresponding fabricated event. The application of this function adjusts the event probability distribution; for instance, triggering an artificial event at state s modulates the probabilities of the events enabled at that state.
In the context of a supervisory control system within a probabilistic automaton, the process of refining transition probabilities through the insertion of fake events plays a pivotal role in adhering to differential privacy constraints. Consider a system in which the transition probability from an initial state under a particular observation sequence is formulated as the product of the probability of the sequence occurring from that initial state and the probability of transitioning to the next state due to event e.
The introduction of a fake event, executed by the supervisory function E, fundamentally alters this transition probability. Post-insertion, the probability of the transition under the observation sequence is scaled by a multiplicative term that lies between 0 and 1, ensuring a reduction of the original probability subsequent to the insertion of the fake event.
This reduction plays a crucial role in aligning the system with differential privacy constraints. When the privacy inequality is violated, indicating a potential breach of the privacy threshold, the insertion of the fake event with a predetermined probability effectively diminishes the left-hand side of the inequality by this multiplicative factor. This decrement contributes significantly to reducing the differences between the probabilities of sequences generated from the respective initial states, thus facilitating the achievement of the differential privacy constraint. Therefore, the strategic insertion of fake events emerges as a vital mechanism for refining transition probabilities in a manner that upholds the principles of differential privacy within the framework of probabilistic automata.
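A minimal sketch of this refinement step, under the assumption that inserting a fake event with probability p_fake rescales the dominant sequence probability by (1 - p_fake); the names and the exact update rule are ours for illustration:

    def refine_with_fake_event(prob_i, prob_j, eps_effective, p_fake):
        """Damp the larger of two observation-sequence probabilities by a fake-event
        insertion so that their absolute difference moves toward eps_effective.

        Assumed update: refined probability = (1 - p_fake) * original probability."""
        if abs(prob_i - prob_j) <= eps_effective:
            return prob_i, prob_j                    # already compliant, nothing to do
        if prob_i >= prob_j:
            prob_i *= (1.0 - p_fake)                 # damp the dominant branch
        else:
            prob_j *= (1.0 - p_fake)
        return prob_i, prob_j

    # refine_with_fake_event(0.62, 0.35, eps_effective=0.2, p_fake=0.4) -> (0.372, 0.35)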
Theorem 2. Given a supervisory control system formulated by the verifier, the introduction of a fake-event mechanism, governed by the insertion function E and operating within the (ϵ, ξ)-differential privacy boundary, ensures convergence towards a refined system state such that the differential between state transitions, quantified by matrix D, remains beneath the prescribed threshold.
Proof. Starting with the differential matrix representation, for each pair of states in our system the differential captures the initial difference between the states in light of a particular observation ω. For any given state s and an event e, the insertion function produces a fake event. With this new event, the state transition dynamics are altered: the state transition probabilities evolve so that the recalculated differential remains constrained within the prescribed threshold. Since the introduction of a fake event modifies the state transition mechanisms, the system undergoes iterative invocations of the insertion function to guarantee that all differentials in matrix D adhere to this threshold.
Given a maximum bound of iterations, denoted as k, combined with the consistent adjustment via the insertion function E, convergence occurs within these iterations. Crucially, each introduction of a fake event acts as a remedial measure, aligning the system's dynamics closer to the (ϵ, ξ)-differential privacy conditions. Through the established iterations, and by ensuring that the differentials remain within the specified boundary, the system not only guarantees state transition convergence but also maintains the constraints of differential privacy. Thus, the theorem is affirmed, elucidating how the supervisory control system, using the intricate balance of fake-event insertion, converges toward refined states while upholding (ϵ, ξ)-differential privacy. □
Transitioning to the algorithmic realm, Algorithm 4 stands at the intersection of computer science, data analytics, and (ϵ, ξ)-differential privacy.
Algorithm 4 is meticulously crafted to ensure that whenever a variance within matrix D surpasses the predefined threshold, a fake event is integrated into the event set.
This mechanism ensures a balance between maintaining the (ϵ, ξ)-differential privacy mandates and upholding the system's dynamic transitions. The core of the algorithm focuses on extracting the relevant values from the observation matrix while concurrently formulating the refined transition structure. Remarkably, despite its sophistication, Algorithm 4 remains computationally efficient, thereby encapsulating the essence of modern supervisory control, bridging privacy safeguards with system dynamism.
Theorem 3. For the algorithm "Iterative supervisory control refinement for multiple initial states", given a verifier and a positive integer k, the algorithm converges to a refined set of states in at most k iterations, ensuring that the differential between state transitions, as indicated by matrix D, remains below the prescribed threshold.
Proof. Initiating with the current state set to the initial verifier state, the algorithm iteratively refines states, bounded by a maximum of k iterations. The matrix D captures state differentials, and the algorithm ensures that none of its entries surpasses the prescribed threshold. If an entry in D breaches this threshold, a "fake event" is introduced, adjusting state transitions to bring the differential back within bounds.
The algorithm's iterations (lines 10–18) either keep the differentials in D under the threshold or apply corrective mechanisms to achieve this. Hence, within k iterations, the algorithm guarantees that the state transition differentials adhere to the threshold. □
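A schematic sketch of this refinement loop is given below; the dictionary representation of D, the fixed threshold, and the multiplicative damping that stands in for fake-event insertion are simplifying assumptions, not the paper's exact Algorithm 4:

    def iterative_refinement(D, threshold, p_fake, k):
        """Damp entries of the differential matrix D that exceed `threshold`,
        emulating fake-event insertion, for at most k iterations.

        D maps state pairs (i, j) to their current probability differential."""
        for _ in range(k):
            violations = [pair for pair, diff in D.items() if abs(diff) > threshold]
            if not violations:
                return D, True                       # every pair satisfies the bound
            for pair in violations:
                D[pair] *= (1.0 - p_fake)            # fake-event insertion shrinks the gap
        return D, all(abs(diff) <= threshold for diff in D.values())

    # iterative_refinement({("s1", "s2"): 0.5, ("s1", "s3"): 0.1}, 0.2, 0.4, k=5)
    # -> differentials reduced below 0.2 within two iterations.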
Algorithm 4: Iterative supervisory control refinement for multiple initial states