Article

Critical Observability of Stochastic Discrete Event Systems Under Intermittent Loss of Observations

1 College of Artificial Intelligence and Computer Science, Xi’an University of Science and Technology, Xi’an 710054, China
2 College of Safety Science and Engineering, Xi’an University of Science and Technology, Xi’an 710054, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(9), 1426; https://doi.org/10.3390/math13091426
Submission received: 20 February 2025 / Revised: 17 April 2025 / Accepted: 24 April 2025 / Published: 26 April 2025
(This article belongs to the Special Issue Discrete Event Dynamic Systems and Applications)

Abstract

A system is said to be critically observable if the operator can always determine whether the current state belongs to a set of critical states. Due to communication failures, systems may suffer from intermittent loss of observations, which can render a system not critically observable. To characterize critical observability in a quantitative way, this paper extends the notion of critical observability to stochastic discrete event systems modeled as partially observable probabilistic finite automata. Two new notions, called step-based almost critical observability and almost critical observability, are proposed, which provide a measure of critical observability for a given system against intermittent loss of observations. We introduce a new language operation to obtain a probabilistic finite automaton describing the behavior of the plant system under intermittent loss of observations. Based on this structure, we also present verification methodologies to check the two notions and analyze their complexity. Finally, the results are applied to a raw coal processing system, which shows the effectiveness of the proposed methods.

1. Introduction

In the last decade, numerous reports have addressed state estimation and information security problems in cyber–physical systems (CPSs) under partial observation conditions. Within the realm of discrete event systems (DESs), many studies have concentrated on issues such as opacity verification [1], detectability verification [2,3], attack detection [4,5], predictability [6], and observability [7].
Critical observability [8,9] is a safety property related to state estimation [10,11,12] that concentrates on deciding whether an operator can always identify a given set of critical states in a system through its observation of the system information flow. Several previous works investigated the critical observability of DESs, which are a kind of abstraction of CPSs [13].
By exploiting decentralization and bisimulation, Pola et al. [14] verify critical observability for networks of finite-state automata. This study proposes a decentralized architecture in which local critical observers perform online detection of critical states, thus significantly reducing the computational effort. Masopust [15] further reveals the computational complexity of verifying critical observability for (networks of) finite-state automata and labeled Petri nets (LPNs). Yan et al. [16] consider the critical observability problem related to a given time step and its inverse problem by using an algebraic model of finite-state automata. Tong and Ma [17] consider communication losses for the problem of k-step and definite critical observability in DESs modeled by finite-state automata. The research in [18,19] extends the formulation of critical observability to timed DESs modeled as max-plus automata and bounded labeled time Petri nets, respectively. Moreover, the research in [20] verifies the critical observability of bounded and live LPNs by using linear integer programming without enumerating the reachability graph. The work of [20] proves a necessary and sufficient condition for checking critical observability when an arbitrary subset of reachable markings describes the critical-state set. Furthermore, the authors extend the results to the case where the critical-state set is modeled by all reachable markings that satisfy disjunctions of generalized mutual exclusion constraints [21]. The work in [22] also enforces the critical observability of bounded LPNs after verification by using supervisory control theory [23].
Most approaches to the critical observability of DESs assume that all sensors work properly and all the information recorded by sensors is always transmitted to the operator. However, poor sensor operations or communication malfunctions between the sensors and the operator may lead to event observation losses, which has been considered in diagnosability (codiagnosability) and supervisory control in DESs [24,25,26]. In such a scenario, given a set of predefined critical states, a system under intermittent loss of observations is likely to lose its critical observability to some degree.
In fact, critical observability as defined in all the aforementioned works does not take into account the likelihood of violating a critical observability requirement: only a binary verdict is output, i.e., a system is either critically observable or not. However, such a binary verdict does not provide enough information in scenarios where different actions of a system have unequal occurrence likelihoods. Thus, the work in [27] formulates a notion of critical observability for a Markov hybrid system, called $\bar{P}$-observability, which characterizes the probability of determining a critical state, and designs an observer to check this property. More precisely, a system is said to be $\bar{P}$-observable if either we are sure that the system is not in a critical state or the probability of being in a critical state is higher than a given bound $\bar{P}$. Unlike $\bar{P}$-observability, we consider scenarios in which critical observability with a binary verdict is not sufficient for a proper description. In particular, if a non-critically observable system has a small violation probability, it is worthwhile to evaluate critical observability quantitatively via probabilistic models, namely stochastic discrete event systems (SDESs). The work in [27] and our work deal with different kinds of critical observability measures in different kinds of systems; thus, the two works are incomparable.
This paper investigates step-based almost critical observability (SA-CO) and almost critical observability (A-CO) of an SDES under intermittent loss of observations modeled by a probabilistic finite automaton (PFA). Compared with a finite-state automaton, a PFA can capture uncertainty (such as random behaviors) through its state transition probabilities, whereas a finite-state automaton or other non-probabilistic model cannot; that is, a PFA characterizes the system more accurately. Compared with a hidden Markov model (HMM), the structure of a PFA is relatively simple and its state transitions are directly based on probabilities, whereas an HMM has a more complex structure because it distinguishes hidden states from observations. The PFA is commonly used in many related state estimation problems, e.g., diagnosability [28,29], robust and safe fault diagnosis [30,31], prognosability [32], coprognosability [33], detectability [34], and state-based opacity [35,36,37].
The main contributions of this paper are as follows. First, we define two notions of critical observability for an SDES under intermittent loss of observations modeled as a PFA, namely SA-CO and A-CO. More precisely, SA-CO requires the a priori cumulated probability of violating critical observability to be below a given threshold for all sequences of length k, where k is an arbitrary non-negative integer, as well as in the limit of infinite length. A-CO considers the a priori cumulated probability of violating critical observability over all sequences none of whose proper prefixes violate critical observability, and requires this probability to be below a given threshold. Given the notions of SA-CO and A-CO, the main challenge is to compute the language violating critical observability under intermittent loss of observations. Thus, we build a modified system, again a PFA, that captures the behavior of the plant system under intermittent loss of observations. In addition, we propose an algorithm to construct a new finite information structure that captures the language violating critical observability of the plant system under intermittent loss of observations, using a PFA and its associated Markov chain. Based on these results, necessary and sufficient conditions are presented to verify SA-CO and A-CO of SDESs under intermittent loss of observations, and the corresponding complexity is analyzed.
Note that the formulations of SA-CO and A-CO are motivated by those of step-based almost current-state opacity and almost current-state opacity in [35], step-based almost initial-state opacity and almost initial-state opacity in [36], and almost k-step opacity and almost infinite-step opacity in [37]. In fact, critical observability is (inversely) related to current-state opacity, which aims to protect secret states by ensuring that the current-state estimate of a system always includes at least one non-secret state. In contrast, critical observability requires the current state to be identified as critical or non-critical. Thus, SA-CO and A-CO are different from step-based almost current-state opacity and almost current-state opacity in [35]. Moreover, none of the methods in [35,36,37] consider communication malfunctions between the plant and the operator. Thus, the methods in [35,36,37] cannot be used to solve the problem raised in this paper, and new methods are needed to verify SA-CO and A-CO in SDESs. To the best of our knowledge, this is the first work to deal with SA-CO and A-CO in SDESs, which may benefit practitioners in the energy and resource industries, where stochastic events are prevalent.
The rest of this paper is organized as follows. Section 2 provides some foundations of a finite-state automaton and a PFA. Section 3 presents how to build a PFA for an SDES that suffers from the intermittent loss of observations. In Section 4, we show the problem formulation of this paper, i.e., SA-CO and A-CO under intermittent loss of observations. Section 5 and Section 6 illustrate the methods to verify SA-CO and A-CO under intermittent loss of observations, respectively. The application of the proposed methods to engineering problems is introduced in Section 7. Finally, Section 8 concludes the paper, and suggests directions for future work.

2. Preliminaries

This section briefly introduces some basic notions used throughout the paper, including finite-state automata (deterministic and nondeterministic finite automata) and probabilistic finite automata.

2.1. Finite-State Automaton

Let $\mathbb{N}$ be the set of non-negative integers, $E$ be an alphabet, and $E^*$ be the Kleene closure [38] of $E$. A language $L \subseteq E^*$ is defined as a set of strings. We denote the prefix-closure of $L$ by $\bar{L}$, i.e.,
$$\bar{L} = \{ u \in E^* \mid \exists v \in E^* : uv \in L \}.$$
For a string $u \in E^*$, we write $v \leq u$ if $v \in \overline{\{u\}}$ and $v < u$ if $v \in \overline{\{u\}} \setminus \{u\}$. For any string $u$, let $|u|$ denote the length of $u$. For any set $A$, let $|A|$ denote its cardinality and $2^A$ denote its power set.
A deterministic finite automaton (DFA) is denoted by $G = (X, E, \delta, X_0)$, where $X$ is the state set, $\delta: X \times E \to X$ is the deterministic (partial) transition function, and $X_0 \subseteq X$ is the initial-state set.
A nondeterministic finite automaton (NFA) is denoted by $G = (X, E, \delta, X_0)$, where $X$ is the state set, $\delta: X \times E \to 2^X$ is the nondeterministic transition function, and $X_0 \subseteq X$ is the initial-state set. The set of all transitions in $G$ is also denoted by $\delta = \{(x, e, x') \mid x' \in \delta(x, e)\}$. The active event set at a state $x \in X$ is defined as
$$\Gamma(x) = \{ e \in E \mid \delta(x, e) \neq \emptyset \}.$$
The function $\delta$ can be extended to the domain $2^X \times E^*$ by induction as shown in [38]. For the sake of simplicity, let $\delta(u)$ denote $\delta(X_0, u)$. The language generated by $G$ is defined as
$$L(G) = \{ u \in E^* \mid \delta(u) \neq \emptyset \}.$$
The event set $E$ is partitioned into two disjoint sets, $E = E_o \cup E_{uo}$, where $E_o$ is the set of observable events and $E_{uo}$ is the set of unobservable events. The natural projection $P: E^* \to E_o^*$ is defined as (i) $P(\epsilon) = \epsilon$; (ii) $P(e) = e$ if $e \in E_o$ and $P(e) = \epsilon$ otherwise; (iii) $P(ue) = P(u)P(e)$. Moreover, the inverse projection $P^{-1}: E_o^* \to 2^{E^*}$ is defined as $P^{-1}(w) = \{ u \in E^* \mid P(u) = w \}$.
Given an NFA $G$ and a set of states $x_o \in 2^X$, let $U(x_o)$ denote the set of states that are unobservably reached from some state in $x_o$, i.e., $U(x_o) = \{ x' \in X \mid \exists x \in x_o, \exists v \in E_{uo}^* \text{ s.t. } x' \in \delta(x, v) \}$. We use $N(x_o, e)$ to denote the set of states that are directly reachable upon the occurrence of an observable event $e \in E_o$, i.e., $N(x_o, e) = \{ x' \in X \mid \exists x \in x_o \text{ s.t. } x' \in \delta(x, e) \}$.
An observer of an NFA $G$ is defined by a DFA $G_o = (X_o, E_o, \delta_o, X_{0,o})$, where $X_o \subseteq 2^X$, $X_{0,o} = U(X_0)$, and for any $x_o \in 2^X$ and $e \in E_o$, $\delta_o(x_o, e) = U(N(x_o, e))$. In practice, it suffices to construct only the accessible part of the observer from the initial state $X_{0,o}$, i.e., the set of states reachable from $X_{0,o}$ through some event sequence $w \in E_o^*$.
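To make the subset construction concrete, the following Python sketch computes $U(\cdot)$, $N(\cdot,\cdot)$, and the accessible part of the observer. It is an illustration of the construction above, not the authors' implementation; the dictionary `delta`, mapping `(state, event)` pairs to sets of successor states, is an assumed representation.

```python
from itertools import chain

# A minimal subset-construction sketch for the observer of an NFA.
# 'delta' is a hypothetical dict mapping (state, event) -> set of successors.
def unobservable_reach(states, delta, E_uo):
    """U(x_o): states reachable from 'states' via strings of unobservable events."""
    reach, stack = set(states), list(states)
    while stack:
        x = stack.pop()
        for e in E_uo:
            for y in delta.get((x, e), set()):
                if y not in reach:
                    reach.add(y)
                    stack.append(y)
    return frozenset(reach)

def next_states(states, e, delta):
    """N(x_o, e): states reached from 'states' by one observable event e."""
    return set(chain.from_iterable(delta.get((x, e), set()) for x in states))

def observer(X0, delta, E_o, E_uo):
    """Accessible part of the observer: delta_o(x_o, e) = U(N(x_o, e))."""
    x0_o = unobservable_reach(X0, delta, E_uo)
    states, delta_o, stack = {x0_o}, {}, [x0_o]
    while stack:
        xo = stack.pop()
        for e in E_o:
            n = next_states(xo, e, delta)
            if not n:
                continue
            target = unobservable_reach(n, delta, E_uo)
            delta_o[(xo, e)] = target
            if target not in states:
                states.add(target)
                stack.append(target)
    return x0_o, states, delta_o
```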

2.2. Probabilistic Finite Automaton

An SDES is modeled as a PFA $(G, p, \pi_0)$ [39], where $G = (X, E, \delta, X_0)$ is an NFA, $p: X \times E \times X \to [0, 1]$ is the transition probability function, and $\pi_0$ is the initial-state probability distribution vector such that $\pi_0(x) = 0$ for all $x \notin X_0$ and
$$\sum_{x \in X} \pi_0(x) = 1.$$
In particular, for any $x, x' \in X$ and $e \in E$, let $p(x', e \mid x)$ denote the probability that event $e$ occurs and the PFA reaches state $x'$ from state $x$. If $p(x', e \mid x) = 0$, then state $x$ cannot reach state $x'$ through event $e$. We assume that, for all $x \in X$,
$$\sum_{x' \in X} \sum_{e \in E} p(x', e \mid x) = 1.$$
For any string $u \in E^*$ and state $x \in X$, we denote the occurrence probability of $u$ by $Pr(u)$, and the probability that $u$ has occurred and the state of the system becomes $x$ by $Pr(x, u)$. Then, for any $e \in E$ and $u \in E^*$, we have
$$Pr(ue) = \sum_{x \in X} Pr(x, ue), \quad Pr(x, ue) = \sum_{x' \in X} p(x, e \mid x') Pr(x', u), \quad Pr(x, \epsilon) = \pi_0(x).$$
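As an illustration only, the following Python sketch implements the forward recursion above; the dictionary `p` keyed by `(source, event, target)` and the initial distribution `pi0` are assumed data structures, not notation from the paper.

```python
# Sketch of the forward recursion Pr(x, u e) = sum_{x'} p(x, e | x') Pr(x', u).
# 'p' maps (source_state, event, target_state) to a probability; 'pi0' maps
# each state to its initial probability, so Pr(x, eps) = pi0(x).
def string_probability(u, states, p, pi0):
    pr = dict(pi0)
    for e in u:
        pr = {x: sum(p.get((x_prev, e, x), 0.0) * pr[x_prev] for x_prev in states)
              for x in states}
    return sum(pr.values())   # Pr(u), the occurrence probability of the string u
```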
A Markov chain is denoted by $M = (X, p_M, \pi_0)$, where $p_M(x' \mid x)$ is the transition probability from state $x$ to state $x'$ for $x, x' \in X$, and $\pi_0$ is the initial-state probability distribution vector. Let $\mathbf{P}$ denote the transition probability matrix of $M$, whose $(x', x)$th element is $p_M(x' \mid x)$. Given $\pi_0$, the current-state probability distribution vector after $k$ steps is $\pi_k = \mathbf{P}^k \pi_0$. For any PFA $(G, p, \pi_0)$, its associated Markov chain is denoted as $M = (X, p_M, \pi_0)$, whose transition probability is defined as
$$p_M(x' \mid x) = \sum_{e \in E} p(x', e \mid x).$$
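A small sketch of this conversion is given below; the transition dictionary and the two-state example are made up for illustration, with the column-stochastic convention $\pi_k = \mathbf{P}^k \pi_0$ used throughout the paper.

```python
import numpy as np

# Collapse the labelled probabilities p(x', e | x) of a PFA into the transition
# matrix of the associated Markov chain, p_M(x' | x) = sum_e p(x', e | x).
def markov_matrix(states, p):
    idx = {x: i for i, x in enumerate(states)}
    P = np.zeros((len(states), len(states)))     # column-stochastic: P[x', x]
    for (x, e, x_next), prob in p.items():
        P[idx[x_next], idx[x]] += prob
    return P

# Hypothetical two-state, two-event PFA:
p = {(0, 'a', 0): 0.3, (0, 'a', 1): 0.4, (0, 'b', 1): 0.3,
     (1, 'a', 0): 0.5, (1, 'b', 1): 0.5}
P = markov_matrix([0, 1], p)
pi0 = np.array([1.0, 0.0])
print(P @ pi0)    # distribution after one step: [0.3, 0.7]
```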

3. Modeling the Behavior of PFA Under Intermittent Loss of Observations

This section presents how to build a PFA model that can capture the behavior of an SDES under intermittent loss of observations.
Let $E = E_{il} \cup E_{nil}$ be a partition of $E$, where $E_{il}$ denotes the set of observable events associated with intermittent loss of observations and $E_{nil}$ denotes the set of events that do not suffer from intermittent loss of observations. Moreover, let $E_{il}^u = \{ e_u \mid e \in E_{il} \}$ and $E_d = E \cup E_{il}^u$.
The dilation function is defined as $D: E^* \to 2^{E_d^*}$, where $D(\epsilon) = \{\epsilon\}$, $D(e) = \{e\}$ if $e \in E_{nil}$, $D(e) = \{e, e_u\}$ if $e \in E_{il}$, and $D(se) = D(s)D(e)$ for $s \in E^*$ and $e \in E$. The dilation function can be extended to languages as follows:
$$D(L(G)) = \bigcup_{s \in L(G)} D(s).$$
Based on the dilation function $D$, given an NFA $G = (X, E, \delta, X_0)$ that suffers from intermittent loss of observations, its dilated version is defined as $G_d = (X, E_d, \delta_d, X_0)$, where $\delta_d(x, e_u) = \delta(x, e)$ if $e_u \in E_{il}^u$ and $\delta_d(x, e) = \delta(x, e)$ if $e \in E$.
In a PFA, for all $e \in E_{il}$, we use $Pr((x', e_u \mid x) \mid (x', e \mid x)) \in (0, 1)$ to denote the probability that event $e_u$ occurs at state $x$, leading to state $x'$, given that event $e$ occurs at state $x$, leading to state $x'$. In addition, the transition probability dilation function is defined as $p_d: X \times E_d \times X \to [0, 1]$, where $p_d(x', e \mid x) = p(x', e \mid x)$ if $e \in E_{nil}$, and $p_d(x', e \mid x) = p(x', e \mid x)(1 - Pr((x', e_u \mid x) \mid (x', e \mid x)))$ and $p_d(x', e_u \mid x) = p(x', e \mid x) Pr((x', e_u \mid x) \mid (x', e \mid x))$ if $e \in E_{il}$. Then, the PFA characterizing the behavior of $(G, p, \pi_0)$ under intermittent loss of observations is denoted as $(G_d, p_d, \pi_0)$. The intermittent loss of observations considered in this paper can be seen as missing transitions; however, since the PFA already contains all the potential states of the real system, no states are missing.
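The following sketch shows the transition-probability dilation on a single transition; the dictionary-based representation and the tagging of lost events as `(e, 'u')` are illustrative assumptions, while the numbers reproduce the transition $(1, c, 3)$ from Example 1 below.

```python
# Split every transition on an event in E_il into an observed copy e and a
# lost copy e_u, weighted by the loss probability Pr((x', e_u|x) | (x', e|x)).
def dilate(p, E_il, loss):
    p_d = {}
    for (x, e, x_next), prob in p.items():
        if e in E_il:
            q = loss[(x, e, x_next)]                 # probability the observation is lost
            p_d[(x, e, x_next)] = prob * (1 - q)     # observation transmitted
            p_d[(x, (e, 'u'), x_next)] = prob * q    # observation lost (event e_u)
        else:
            p_d[(x, e, x_next)] = prob
    return p_d

# Transition (1, c, 3) with p(3, c | 1) = 0.9 and loss probability 0.3:
print(dilate({(1, 'c', 3): 0.9}, E_il={'c'}, loss={(1, 'c', 3): 0.3}))
# -> {(1, 'c', 3): 0.63, (1, ('c', 'u'), 3): 0.27}
```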
Note that in this paper, the transition probabilities and the event loss probabilities are assumed to be known and accurate, which is commonly adopted in the literature. In most cases, the number of events that can occur at a given state of a system is fixed, and the probabilities of their occurrence can usually be obtained from the physical properties of the events. In addition, compared with the intermittent loss of observations on sensors, sensor misclassification, false positives/negatives, or correlated sensor failures can generally be seen as faults occurring in the system, which relates to fault diagnosis or prognosis in SDESs.

4. Critical Observability in SDESs Under Intermittent Loss of Observations

This section establishes the formulation of SA-CO and A-CO in SDESs under intermittent loss of observations. To this end, we first recall the notion of critical observability in DESs without considering probabilities or intermittent loss of observations.
Definition 1 
([15]). Given an NFA $G = (X, E, \delta, X_0)$ with $E = E_o \cup E_{uo}$, a natural projection $P$, and a set of critical states $X_c \subseteq X$, the system $G$ is critically observable with respect to $E_o$ and $X_c$ if $\delta(P^{-1}P(w)) \subseteq X_c$ or $\delta(P^{-1}P(w)) \subseteq X \setminus X_c$ for any $w \in L(G)$, where $\delta(P^{-1}P(w)) = \bigcup_{u \in P^{-1}P(w)} \delta(u)$.
In this paper, we assume that $X_0 \neq X_c$; indeed, if $X_0 = X_c$, then $G$ is undoubtedly critically observable. However, for a critically observable NFA $G$, if it suffers from an intermittent loss of observations, its critical observability may no longer hold according to Definition 1. In addition, note that a system fails to satisfy critical observability even if the probability of the sequences violating critical observability is very small. In order to quantify critical observability, we introduce the following definition, which characterizes the probability of violating critical observability under an intermittent loss of observations.
Definition 2. 
Given a PFA $(G, p, \pi_0)$, a projection $P_d: E_d^* \to E_o^*$, the probability $Pr((x', e_u \mid x) \mid (x', e \mid x))$ for all $e \in E_{il}$, and a set of critical states $X_c \subseteq X$, let $(G_d, p_d, \pi_0)$ with $G_d = (X, E_d, \delta_d, X_0)$ be the PFA characterizing the behavior of $(G, p, \pi_0)$ under intermittent loss of observations, and define the language violating critical observability as
$$L_N = \{ w \in L(G_d) \mid [\delta_d(P_d^{-1}P_d(w)) \not\subseteq X_c] \wedge [\delta_d(P_d^{-1}P_d(w)) \not\subseteq (X \setminus X_c)] \}.$$
Then, the PFA $(G, p, \pi_0)$ is step-based almost critically observable (SA-CO) with respect to $X_c$, $P_d$, $Pr((x', e_u \mid x) \mid (x', e \mid x))$, and a threshold $\beta$ if the following two conditions hold:
$$\forall k = 0, 1, 2, \ldots: \quad \sum_{w \in L_N, |w| = k} Pr(w) < \beta,$$
$$\lim_{k \to \infty} \sum_{w \in L_N, |w| = k} Pr(w) < \beta.$$
Note that any sequence $w \in L_N$ violates critical observability, i.e., the operator cannot determine whether the current state of the system is critical through the observation $P_d(w)$. Obviously, if $G$ satisfies critical observability under intermittent loss of observations, then $(G, p, \pi_0)$ also satisfies SA-CO for any $\beta > 0$ under intermittent loss of observations.
Example 1. 
Consider the PFA $(G, p, \pi_0)$ in Figure 1a with $G = (X, E, \delta, X_0)$, $E_o = \{b, c, d\}$, $X_c = \{4\}$, and $\pi_0 = [1, 0, 0, 0, 0, 0]^T$. First, assume that $E_{il} = \emptyset$; the observer of $G$ is shown in Figure 2, and $G$ is critically observable by Definition 1. Then, let $E_{il} = \{c\}$; $G$ is no longer critically observable, and we have $L_N = \{a b^* c_u a^* d b^*,\ a b^* d b^*\}$. (For brevity, for any $w \in E^*$, we write $w^*$ instead of $\{w\}^*$ for the Kleene closure of $\{w\}$; moreover, for any $w, u \in E^*$, we write $wu$ instead of $\{w\}\{u\}$ for the concatenation of $\{w\}$ and $\{u\}$.) Let $Pr((3, c_u \mid 1) \mid (3, c \mid 1)) = 0.3$; then, $p_d(3, c_u \mid 1) = p(3, c \mid 1) \cdot Pr((3, c_u \mid 1) \mid (3, c \mid 1)) = 0.9 \times 0.3 = 0.27$, $p_d(3, c \mid 1) = p(3, c \mid 1)(1 - Pr((3, c_u \mid 1) \mid (3, c \mid 1))) = 0.9 \times (1 - 0.3) = 0.63$, and $p_d(x', e \mid x) = p(x', e \mid x)$ for all transitions in $\delta$ except transition $(1, c, 3)$, which yields $(G_d, p_d, \pi_0)$ in Figure 1b. Let us check whether the PFA is SA-CO with respect to $X_c$, $P_d$, $Pr((x', e_u \mid x) \mid (x', e \mid x))$, and $\beta = 0.3$. Since all sequences violating critical observability have length at least three, we begin with $|w| = 3$. In particular,
$$\sum_{w \in L_N, |w| = 3} Pr(w) = Pr(a c_u d) + Pr(a b d) + Pr(a d b) = 0.2475.$$
Then, consider $|w| = 4$:
$$\sum_{w \in L_N, |w| = 4} Pr(w) = Pr(a b c_u d) + Pr(a c_u a d) + Pr(a c_u d b) + Pr(a b b d) + Pr(a b d b) + Pr(a d b b) = 0.1665.$$
It is easy to see that $\sum_{w \in L_N, |w| = k} Pr(w)$ decreases as $k$ increases. Thus, the PFA is SA-CO with respect to $X_c$, $P_d$, $Pr((x', e_u \mid x) \mid (x', e \mid x))$, and the threshold $\beta = 0.3$.
Definition 3. 
Given a PFA $(G, p, \pi_0)$, a projection $P_d: E_d^* \to E_o^*$, the probability $Pr((x', e_u \mid x) \mid (x', e \mid x))$ for all $e \in E_{il}$, and a set of critical states $X_c \subseteq X$, let $(G_d, p_d, \pi_0)$ with $G_d = (X, E_d, \delta_d, X_0)$ be the PFA characterizing the behavior of $(G, p, \pi_0)$ under intermittent loss of observations, and define
$$L_{NP} = \{ w \in L_N \mid \nexists\, w' < w \text{ s.t. } w' \in L_N \}.$$
Then, the PFA $(G, p, \pi_0)$ is almost critically observable (A-CO) with respect to $X_c$, $P_d$, $Pr((x', e_u \mid x) \mid (x', e \mid x))$, and a threshold $\beta$ if
$$\sum_{w \in L_{NP}} Pr(w) < \beta.$$
Note that if two sequences $w$ and $u$ in $G_d$ with $w < u$, $P_d(D(w)) = e_0 e_1 \cdots e_n$, and $P_d(D(u)) = e_0 e_1 \cdots e_n e_{n+1} \cdots e_m$ both violate critical observability, then we can already confirm through the observation $P_d(D(w))$ that the system is not critically observable, and it is not necessary to consider the rest of the observation, i.e., $e_{n+1} \cdots e_m$. Meanwhile, if two sequences $w$ and $u$ with $w < u$ and $P_d(D(w)) = P_d(D(u))$ both violate critical observability, we only need to consider the occurrence probability of $w$, since $w$ must occur before $u$.
Example 2. 
Consider the PFA $(G, p, \pi_0)$ in Figure 1a with $E_o = \{b, c, d\}$, $X_c = \{4\}$, and $\pi_0 = [1, 0, 0, 0, 0, 0]^T$. First, assuming that $E_{il} = \emptyset$, $G$ is critically observable by Definition 1. Then, let $E_{il} = \{c\}$; $G$ is no longer critically observable, and we have $L_N = \{a b^* c_u a^* d b^*,\ a b^* d b^*\}$ and $L_{NP} = \{a b^* c_u a^* d,\ a b^* d\}$. Let $Pr((3, c_u \mid 1) \mid (3, c \mid 1)) = 0.3$; then we obtain $(G_d, p_d, \pi_0)$ in Figure 1b, and the following holds:
$$\sum_{w \in L_{NP}} Pr(w) = \sum_{n=0}^{\infty} \sum_{m=0}^{\infty} 0.5 \times 0.1^n \times 0.27 \times 0.5^m \times 0.5 + \sum_{n=0}^{\infty} 0.5 \times 0.1^n \times 0.9 = 0.65.$$
Thus, for any $\beta > 0.65$, the PFA is A-CO with respect to $X_c$, $P_d$, $Pr((x', e_u \mid x) \mid (x', e \mid x))$, and the threshold $\beta$.
In practice, the threshold $\beta$ in SA-CO and A-CO is usually determined by the safety requirements that the users impose on the real system. In this work, we mainly focus on the verification of SA-CO and A-CO of SDESs under intermittent loss of observations with respect to a given threshold; how the threshold is obtained is beyond the scope of this work.

5. Verification of SA-CO Under Intermittent Loss of Observations

In this section, we present a method to verify SA-CO of SDESs under intermittent loss of observations.
To this aim, it is necessary to compute the cumulative probability of violating critical observability over all sequences of length $k \in \mathbb{N}$, as well as in the limit of infinite length. The main difficulty is that there are infinitely many values of $k$, and infinite memory may be needed to enumerate the sequences and their probabilities for each $k$. In the following, we show how to resolve this issue.
We now briefly explain how Algorithm 1 works. Step 1 uses $(G_d, p_d, \pi_0)$ to describe the behavior of $(G, p, \pi_0)$ suffering from intermittent loss of observations. Step 2 builds an observer that captures all the observations violating critical observability, turning the nondeterministic transitions of the PFA into deterministic ones. In order to capture all the event sequences that can generate observations violating critical observability, Step 3 adds an unobservable self-loop to each state of the observer, and Step 4 applies a product operation to the PFA $(G_d, p_d, \pi_0)$ and the DFA derived in Step 3. Finally, Step 5 constructs the Markov chain $M$ associated with the PFA obtained in Step 4 by discarding the labels on its transitions.
Definition 4. 
Given a PFA $(G, p, \pi_0)$, a projection $P_d: E_d^* \to E_o^*$, the probability $Pr((x', e_u \mid x) \mid (x', e \mid x))$ for all $e \in E_{il}$, and a set of critical states $X_c \subseteq X$, let $M = (X_p, p_M, \pi_{0,p})$ be the Markov chain defined in Algorithm 1. A state $x_p = (x, y) \in X_p$ is said to violate critical observability if $y \cap X_c \neq \emptyset$ and $y \cap (X \setminus X_c) \neq \emptyset$. The set of states in $M$ that violate critical observability is denoted by $X_{p,v}$.
Algorithm 1 Construction of a Markov chain for verifying SA-CO under intermittent loss of observations
Input: A PFA $(G, p, \pi_0)$ with $G = (X, E, \delta, X_0)$, a set of events $E_{il}$ suffering from intermittent loss of observations, the probability $Pr((x', e_u \mid x) \mid (x', e \mid x))$ for all $e \in E_{il}$, and a set of critical states $X_c \subseteq X$
Output: The Markov chain $M$ for verifying SA-CO
1: Build the PFA $(G_d, p_d, \pi_0)$ with $G_d = (X, E_d, \delta_d, X_0)$ that models the behavior of $(G, p, \pi_0)$ suffering from intermittent loss of observations.
2: Build the observer $G_o = (X_o, E_o, \delta_o, X_{0,o})$ associated with $G_d$ by regarding all events in $E_{il}^u$ as unobservable and using the method in [38].
3: Build the DFA $\hat{G}_o = (X_o, E_d, \hat{\delta}_o, X_{0,o})$ from $G_o$ by adding a self-loop at each state of $G_o$ for each event $e \in E_d \setminus E_o$.
4: Build the PFA $(G_p, p_p, \pi_{0,p})$ with $G_p = (X_p, E_d, \delta_p, X_{0,p})$, where
  • $X_p = X \times X_o$ is the set of states;
  • $\delta_p((x, y), e)$ is the transition function such that $\delta_p((x, y), e) = (\delta_d(x, e), \hat{\delta}_o(y, e))$ if $e \in \Gamma(x) \cap \Gamma(y)$;
  • $X_{0,p} = X_0 \times X_{0,o}$;
  • $p_p(x_p', e \mid x_p)$ is the state transition probability such that, for $x_p = (x, y) \in X_p$, $x_p' = (x', y') \in X_p$, and $e \in E_d$, $p_p(x_p', e \mid x_p) = p_d(x', e \mid x)$ if $y' = \hat{\delta}_o(y, e)$, and $p_p(x_p', e \mid x_p) = 0$ otherwise;
  • $\pi_{0,p}$ is the initial-state probability distribution vector indexed by the states $x_p$ of the PFA $(G_p, p_p, \pi_{0,p})$ such that $\pi_{0,p}(x_p) = \pi_0(x)$ if $x_p = (x, y) \in X_{0,p}$, and $\pi_{0,p}(x_p) = 0$ otherwise.
5: Construct the Markov chain $M = (X_p, p_M, \pi_{0,p})$ associated with the PFA $(G_p, p_p, \pi_{0,p})$, i.e., the state transition probability of the Markov chain is defined as $p_M(x_p' \mid x_p) = \sum_{e \in E_d} p_p(x_p', e \mid x_p)$ for $x_p, x_p' \in X_p$.
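The product and projection of Steps 4 and 5 can be combined in one pass, as in the sketch below; the inputs `p_d` (dilated transition probabilities keyed by `(x, e, x')`), `delta_o_hat` (the self-looped observer transition map), and `states_o` are hypothetical data structures standing in for the objects built in Steps 1 to 3.

```python
from collections import defaultdict

# Combined sketch of Steps 4-5: pair each dilated transition with the matching
# observer move and sum over events to obtain the Markov chain of the product.
def product_markov(p_d, delta_o_hat, states_o):
    p_M = defaultdict(float)   # p_M[((x, y), (x', y'))] = transition probability
    for (x, e, x_next), prob in p_d.items():
        for y in states_o:
            if (y, e) in delta_o_hat:
                y_next = delta_o_hat[(y, e)]
                p_M[((x, y), (x_next, y_next))] += prob
    return dict(p_M)
```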
Theorem 1. 
Given a PFA $(G, p, \pi_0)$, a projection $P_d: E_d^* \to E_o^*$, the probability $Pr((x', e_u \mid x) \mid (x', e \mid x))$ for all $e \in E_{il}$, and a set of critical states $X_c \subseteq X$, let $M = (X_p, p_M, \pi_{0,p})$ be the Markov chain constructed by Algorithm 1 with transition probability matrix $\mathbf{P}$, and let $\pi_{\infty,p}$ be the stationary distribution of $M$. Then, the PFA $(G, p, \pi_0)$ is SA-CO with respect to $X_c$, $E_o$, and a threshold $\beta$ if and only if $\mathbf{1}_{X_{p,v}} \pi_{\infty,p} < \beta$ and, for every non-negative integer $k$,
$$\mathbf{1}_{X_{p,v}} \pi_{k,p} < \beta, \quad (8)$$
where $\pi_{k,p} = \mathbf{P}^k \pi_{0,p}$ and $\mathbf{1}_{X_{p,v}}$ is a $|X_p|$-dimensional binary row vector whose elements indexed by states in $X_{p,v}$ equal 1 (and equal 0 otherwise).
Proof. 
Given a PFA $(G, p, \pi_0)$, a projection $P_d: E_d^* \to E_o^*$, and the probability $Pr((x', e_u \mid x) \mid (x', e \mid x))$ for all $e \in E_{il}$, the construction of $(G_d, p_d, \pi_0)$ in Algorithm 1 characterizes the behavior of $(G, p, \pi_0)$ under intermittent loss of observations. In addition, by the construction of $(G_p, p_p, \pi_{0,p})$, the event sequences of the PFA $(G_d, p_d, \pi_0)$ violating critical observability are exactly those reaching states $x_p = (x, y)$ in $(G_p, p_p, \pi_{0,p})$ such that $y \cap X_c \neq \emptyset$ and $y \cap (X \setminus X_c) \neq \emptyset$.
Note that Step 5 of Algorithm 1 does not change the state structure of $(G_p, p_p, \pi_{0,p})$ but only removes its labels. Thus, the language $L_N$ violating critical observability of $G$ under intermittent loss of observations is captured by the Markov chain $M$ associated with the PFA $(G_p, p_p, \pi_{0,p})$, namely by the sequences that reach the states in $X_{p,v}$ of $M$.
In Equation (8), $\pi_{k,p}$ represents the current-state probability distribution vector of the Markov chain $M$ after $k$ steps. Based on Definition 4, the cumulative probability of being in any state $x_p \in X_{p,v}$ after $k$ steps is $\mathbf{1}_{X_{p,v}} \pi_{k,p}$. Thus, we have
$$\mathbf{1}_{X_{p,v}} \pi_{k,p} = \sum_{w \in L_N, |w| = k} Pr(w).$$
Note that the result also holds for $k \to \infty$, which completes the proof. □
Note that for an irreducible and aperiodic Markov chain, the stationary distribution is the unique strictly positive vector $\pi_\infty$ that can be computed by solving the equations $\mathbf{P} \pi_\infty = \pi_\infty$ and
$$\sum_{i=1}^{m} \pi_\infty(i) = 1.$$
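As a minimal numerical sketch (using numpy and, for concreteness, the matrix that appears in Example 3 below), the stationary distribution can be obtained by stacking $\mathbf{P} \pi_\infty = \pi_\infty$ with the normalization constraint and solving the resulting system in the least-squares sense:

```python
import numpy as np

# Solve (P - I) pi = 0 together with sum(pi) = 1 for a column-stochastic P.
def stationary(P):
    n = P.shape[0]
    A = np.vstack([P - np.eye(n), np.ones((1, n))])
    b = np.append(np.zeros(n), 1.0)
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

P = np.array([[0.2, 0.3, 0.5],
              [0.3, 0.3, 0.4],
              [0.5, 0.4, 0.1]])
print(stationary(P))   # approximately [0.3333, 0.3333, 0.3333]
```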
Theorem 1 indicates that a PFA $(G, p, \pi_0)$ with intermittent loss of observations is SA-CO if and only if Equation (8) holds for all non-negative integers $k$ as well as in the limit $k \to \infty$. In the following, we provide a practical way to verify this condition using finite memory.
Theorem 2. 
Given an irreducible and aperiodic Markov chain $M = (X, p_M, \pi_0)$ with transition probability matrix $\mathbf{P}$, assume that $\mathbf{P}$ and $\pi_0$ have rational entries, and let $\pi_\infty$ denote the stationary distribution of $M$. Let $\mathbf{1}_{X_v}$ be a row vector of 0's and 1's indexed by the states in $X$. Then, for $\beta > \mathbf{1}_{X_v} \pi_\infty$, there exists an integer $K^*$ depending only on $p_M$, $\pi_0$, and $\beta$ such that for all $k > K^*$, $\mathbf{1}_{X_v} \pi_k < \beta$.
Proof. 
It follows directly from the proof of Theorem 2 in [35]. □
In Theorem 2, the Markov chain is assumed to be irreducible and aperiodic. Note that the result can be extended to Markov chains that are irreducible but periodic, as proven in the appendix of [35].
Example 3. 
Given a PFA under intermittent loss of observations, assume that the corresponding irreducible and aperiodic Markov chain $M = (X, p_M, \pi_0)$ in Theorem 2 is given by $X = \{0, 1, 2\}$, $\pi_0 = [1\ 0\ 0]^T$, and
$$\mathbf{P} = \begin{bmatrix} 0.2 & 0.3 & 0.5 \\ 0.3 & 0.3 & 0.4 \\ 0.5 & 0.4 & 0.1 \end{bmatrix}.$$
The eigenvalues of $\mathbf{P}$ are $\lambda_1 = 1$, $\lambda_2 = -0.3732$, and $\lambda_3 = -0.0268$. Since all eigenvalues are distinct, it holds that $\mathbf{P}^k = A_1 + \lambda_2^k A_2 + \lambda_3^k A_3$, where
$$A_1 = \begin{bmatrix} 0.3333 & 0.3333 & 0.3333 \\ 0.3333 & 0.3333 & 0.3333 \\ 0.3333 & 0.3333 & 0.3333 \end{bmatrix}, \quad A_2 = \begin{bmatrix} 0.3333 & 0.1220 & -0.4553 \\ 0.1220 & 0.0447 & -0.1667 \\ -0.4553 & -0.1667 & 0.6220 \end{bmatrix},$$
and
$$A_3 = \begin{bmatrix} 0.3333 & -0.4553 & 0.1220 \\ -0.4553 & 0.6220 & -0.1667 \\ 0.1220 & -0.1667 & 0.0447 \end{bmatrix}.$$
In addition, we have
$$\mathbf{1}_{X_v} \pi_k = \mathbf{1}_{X_v} \mathbf{P}^k \pi_0 = \mathbf{1}_{X_v} A_1 \pi_0 + \lambda_2^k \mathbf{1}_{X_v} A_2 \pi_0 + \lambda_3^k \mathbf{1}_{X_v} A_3 \pi_0 = c_1 + \lambda_2^k c_2 + \lambda_3^k c_3. \quad (9)$$
Assume that $\beta = 0.4$ and $\mathbf{1}_{X_v} = [0\ 1\ 0]$; we can verify whether the condition in Theorem 2 holds. First, we compute the stationary distribution of $M$ as $\pi_\infty = [0.3333\ 0.3333\ 0.3333]^T$, so that $\mathbf{1}_{X_v} \pi_\infty < \beta$. Then, the constants $c_1$, $c_2$, and $c_3$ in Equation (9) are $c_1 = 0.3333$, $c_2 = 0.1220$, and $c_3 = -0.4553$, and the constant $K^*$ in Theorem 2 is obtained as
$$K^* = \left\lfloor \frac{\ln(\beta - c_1) - \ln\left(\sum_{i=2}^{m} |c_i|\right)}{\ln |\lambda_2|} \right\rfloor = 2.$$
Since $\beta > c_1$, we only need to check whether $\mathbf{1}_{X_v} \pi_k < \beta$ for $k = 0, 1, 2$. In fact, since $\mathbf{1}_{X_v} \pi_0 = 0 < 0.4$, $\mathbf{1}_{X_v} \pi_1 = 0.3 < 0.4$, and $\mathbf{1}_{X_v} \pi_2 = 0.35 < 0.4$, we conclude by Theorems 1 and 2 that the PFA under intermittent loss of observations is SA-CO.
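The numbers in Example 3 can be checked with a few lines of numpy; this is a verification sketch, not the Matlab tool [41] used by the authors.

```python
import numpy as np

P = np.array([[0.2, 0.3, 0.5],      # column-stochastic matrix from Example 3
              [0.3, 0.3, 0.4],
              [0.5, 0.4, 0.1]])
pi0 = np.array([1.0, 0.0, 0.0])
ind = np.array([0.0, 1.0, 0.0])     # indicator row vector 1_{X_v}
beta = 0.4

# Stationary distribution: eigenvector for eigenvalue 1, normalized to sum 1.
w, V = np.linalg.eig(P)
pi_inf = np.real(V[:, np.argmin(np.abs(w - 1))])
pi_inf = pi_inf / pi_inf.sum()
assert ind @ pi_inf < beta          # 0.3333 < 0.4, so only finitely many k remain

# Explicit check of the first few steps (k = 0, 1, 2 suffice since K* = 2).
pk = pi0.copy()
for k in range(3):
    print(k, ind @ pk)              # prints 0.0, 0.3, 0.35 -- all below beta
    pk = P @ pk
```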
This section concludes by discussing the complexity of the proposed method. To verify SA-CO under intermittent loss of observations, Algorithm 1 constructs a Markov chain $M$ with at most $m = |X| \times 2^{|X|}$ states. Finding the eigenvalues and computing the stationary distribution of $M$ then takes $O(m^3)$ time. Thus, the total complexity of verifying SA-CO is exponential in the size of the PFA.

6. Verification of A-CO Under Intermittent Loss of Observations

In this section, we show how to formally verify A-CO in SDESs under intermittent loss of observations. To solve this problem, we make the states of $M$ that violate critical observability (Definition 4) absorbing. Computing the probability of violating critical observability then reduces to computing the absorption probability of these absorbing states. The following theorem provides a necessary and sufficient condition to verify A-CO of SDESs under intermittent loss of observations.
Theorem 3. 
Given a PFA $(G, p, \pi_0)$, a projection $P_d: E_d^* \to E_o^*$, the probability $Pr((x', e_u \mid x) \mid (x', e \mid x))$ for all $e \in E_{il}$, and a set of critical states $X_c \subseteq X$, let $M = (X_p, p_M, \pi_{0,p})$ be the Markov chain constructed by Algorithm 1, and build a Markov chain $\tilde{M} = (X_p, \tilde{p}_M, \pi_{0,p})$ by making the states in $X_{p,v}$ absorbing. Then, the PFA $(G, p, \pi_0)$ is A-CO with respect to $X_c$, $P_d$, $Pr((x', e_u \mid x) \mid (x', e \mid x))$, and a threshold $\beta$ if and only if $p_a(X_{p,v}) < \beta$, where $p_a(X_{p,v})$ is the absorption probability of $X_{p,v}$ in $\tilde{M}$.
Proof. 
To characterize the language $L_{NP}$, it is only necessary to consider those sequences in $L_N$ none of whose proper prefixes belong to $L_N$. That is to say, for any $u \in L_N$, none of its continuations in $L_N$ belong to $L_{NP}$. Thus, we make the set of states $X_{p,v}$ in the Markov chain $M$ absorbing. With this construction, the absorption probability of the state set $X_{p,v}$ equals the cumulative probability of the sequences in $L_{NP}$. □
In practice, we can compute the absorption probability $p_a(X_{p,v})$ in Theorem 3 using a backward recursive technique [40], as follows:
$$p_a(X_{p,v}) = \sum_{x_p \in X_p} \pi_{0,p}(x_p) P_a(x_p), \quad (10)$$
where $P_a$ is the minimal non-negative solution of Equation (11):
$$P_a(x_p) = \begin{cases} \sum_{x_p' \in X_p} \tilde{p}_M(x_p' \mid x_p) P_a(x_p'), & \text{if } x_p \notin X_{p,v}, \\ 1, & \text{if } x_p \in X_{p,v}. \end{cases} \quad (11)$$
Example 4. 
Consider the PFA $(G, p, \pi_0)$ in Figure 1a with $E_o = \{b, c, d\}$, $E_{il} = \{c\}$, $X_c = \{4\}$, and $\pi_0 = [1\ 0\ 0\ 0\ 0\ 0]^T$. We first build the observer $G_o$ of $G_d$ in Figure 3 and the PFA $(G_p, p_p, \pi_{0,p})$ in Figure 4. Then, we construct the Markov chain $M$ associated with the PFA $(G_p, p_p, \pi_{0,p})$ and transform $M$ into the Markov chain $\tilde{M}$ in Figure 5 by making all the states in $X_{p,v} = \{(4, \{4, 5\}), (5, \{4, 5\})\}$ absorbing. For brevity, we label the states of $\tilde{M}$ as $x_{p,0}$ to $x_{p,10}$. Then, we can calculate the absorption probability of $X_{p,v}$ by solving Equation (12):
$$\begin{aligned}
P_a(x_{p,0}) &= 0.5 P_a(x_{p,1}) + 0.5 P_a(x_{p,2}), \\
P_a(x_{p,1}) &= 0.1 P_a(x_{p,3}) + 0.63 P_a(x_{p,4}) + 0.27 P_a(x_{p,5}), \\
P_a(x_{p,2}) &= 0.1 P_a(x_{p,6}) + 0.9 P_a(x_{p,9}), \\
P_a(x_{p,3}) &= 0.1 P_a(x_{p,3}) + 0.63 P_a(x_{p,4}) + 0.27 P_a(x_{p,7}), \\
P_a(x_{p,4}) &= 0.5 P_a(x_{p,4}) + 0.5 P_a(x_{p,8}), \\
P_a(x_{p,5}) &= 0.5 P_a(x_{p,5}) + 0.5 P_a(x_{p,10}), \\
P_a(x_{p,6}) &= 0.1 P_a(x_{p,6}) + 0.9 P_a(x_{p,9}), \\
P_a(x_{p,7}) &= 0.5 P_a(x_{p,7}) + 0.5 P_a(x_{p,10}), \\
P_a(x_{p,8}) &= P_a(x_{p,8}), \\
P_a(x_{p,9}) &= P_a(x_{p,10}) = 1. \quad (12)
\end{aligned}$$
Since the solution of Equation (12) is not unique, we set the free term $P_a(x_{p,8}) = 0$ and obtain the minimal non-negative solution $P_a = [0.65\ 0.3\ 1\ 0.3\ 0\ 1\ 1\ 1\ 0\ 1\ 1]^T$. Thus, it holds that
$$\sum_{w \in L_{NP}} Pr(w) = p_a(X_{p,v}) = \pi_{0,p}(x_{p,0}) \times P_a(x_{p,0}) = 0.65.$$
According to Theorem 3, for any $\beta > 0.65$, $(G, p, \pi_0)$ is A-CO with respect to $X_c$, $P_d$, $Pr((x', e_u \mid x) \mid (x', e \mid x))$, and the threshold $\beta$, which is consistent with Example 2.
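The absorption probability computed in Example 4 can be reproduced with standard linear algebra; the sketch below encodes the chain $\tilde{M}$ directly from the transition probabilities appearing in Equation (12) (state indices correspond to the labels $x_{p,0}, \ldots, x_{p,10}$) and solves $(I - Q)h = R\mathbf{1}$ for the transient states.

```python
import numpy as np

n = 11
P = np.zeros((n, n))                 # P[i, j] = probability of moving from x_{p,i} to x_{p,j}
P[0, 1], P[0, 2] = 0.5, 0.5
P[1, 3], P[1, 4], P[1, 5] = 0.1, 0.63, 0.27
P[2, 6], P[2, 9] = 0.1, 0.9
P[3, 3], P[3, 4], P[3, 7] = 0.1, 0.63, 0.27
P[4, 4], P[4, 8] = 0.5, 0.5
P[5, 5], P[5, 10] = 0.5, 0.5
P[6, 6], P[6, 9] = 0.1, 0.9
P[7, 7], P[7, 10] = 0.5, 0.5
P[8, 8] = P[9, 9] = P[10, 10] = 1.0  # absorbing states; only x_{p,9}, x_{p,10} violate

target = [9, 10]                     # X_{p,v}
non_target_absorbing = [8]           # minimal non-negative solution: P_a = 0 here
transient = [i for i in range(n) if i not in target + non_target_absorbing]

Q = P[np.ix_(transient, transient)]              # transient -> transient block
R = P[np.ix_(transient, target)]                 # transient -> violating block
h = np.linalg.solve(np.eye(len(transient)) - Q, R.sum(axis=1))

P_a = np.zeros(n)
P_a[target] = 1.0
P_a[transient] = h
print(P_a[0])                        # 0.65, matching p_a(X_{p,v}) in Example 4
```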
This section concludes by discussing the complexity of the proposed method. To verify A-CO under intermittent loss of observations, Algorithm 1 constructs a Markov chain $M$ with at most $m = |X| \times 2^{|X|}$ states. Then, we transform $M$ into the Markov chain $\tilde{M}$ by making all the states that violate critical observability absorbing, which takes $O(m)$ time. Finally, solving Equations (10) and (11) for $\tilde{M}$ takes $O(m^3)$ time. Thus, the total complexity of verifying A-CO is exponential in the size of the PFA.
Remark 1. 
Note that in logical DESs, a modified parallel composition of two copies of an NFA $G$, which captures all pairs of event sequences with the same observation, is usually used to check the critical observability of $G$ with polynomial complexity. However, due to the nondeterminism in the transition function of the NFA associated with a given PFA, the parallel composition (or its modified version) cannot maintain the transition probabilities required for computing the probability of event sequences violating critical observability. In particular, for an SDES modeled as a PFA, if such a polynomial-complexity algorithm were used to construct a verifier (or its modified version) for SA-CO or A-CO verification, the sum of the probabilities of all defined events at a given state of the verifier might exceed one. In this sense, the verifier can neither be seen as a PFA nor be transformed into a Markov chain, so such a polynomial-complexity structure cannot be used to verify SA-CO and A-CO. Therefore, this paper exploits observer-based methods to verify SA-CO and A-CO with exponential complexity.

7. An Industrial Application Example

In this section, the developed results are applied to a raw coal processing system. In the real world, many events are stochastic. Especially in coal mining and processing, spontaneous combustion and fire, raw coal mining quality, and other events are stochastic and difficult to predict. Therefore, the processing flow of coal mines can often be modeled as an SDES. Note that the SDES is still a discrete system but not a continuous system, since the state of the system does not change continuously over time. Based on the theoretical results in this paper, a Matlab tool [41] was developed to verify SA-CO and A-CO of SDESs under intermittent loss of observations. In the following, the experimental results are obtained using this tool.
Figure 6 shows a beneficiation plant located in China. This plant uses a dense medium beneficiation system to sort the raw coal. For example, qualified media will be initially stored in a qualified medium tank, waiting for the system to operate. When the system runs, the medium will be transported to the beneficiation system and mixed with raw coal for sorting. After the mixing and sorting process is completed, most of the medium can be directly separated from the coal and returned to the tank through the splitter. A small portion of the medium mixed with coal needs to be repeatedly processed through a De-medium screen and a magnetic separator before returning to the qualified medium tank. In this process, the quality of the medium and whether the medium can directly flow into the splitter are often stochastic.
This process is modeled as the PFA shown in Figure 7a, where $X_0 = \{0\}$. The meanings of each event and state are listed in Table 1 and Table 2, respectively. In this PFA, the event $a$ represents the waiting process; thus, the event $a$ is unobservable. Furthermore, we set the critical-state set to $X_c = \{3\}$ and $E_{il} = \emptyset$, where state 3 indicates that the medium is being processed in the magnetic separator. In the magnetic separator, the medium needs to be separated from various substances such as coal, medium, slurry, etc.; this process is important and affects the quality of the medium. Based on the definition of critical observability, the system is critically observable. However, if the sensor reporting event $g$ suffers faults, i.e., $E_{il} = \{g\}$, the observation of $g$ may be intermittently lost, which is modeled by the PFA shown in Figure 7b. Correspondingly, the observer is constructed as shown in Figure 8, and the system is no longer critically observable.
Assume that $\beta = 0.3$. First, by using Algorithm 1, we obtain the Markov chain $M$ in Figure 9a and compute its stationary distribution as $\pi_\infty = [0.1431\ 0.0572\ 0.1431\ 0.1907\ 0.2044\ 0.2044\ 0.0572]^T$. Since $X_c = \{3\}$, we have $\mathbf{1}_{X_v} = [0\ 0\ 0\ 0\ 0\ 1\ 1]$ and $\mathbf{1}_{X_v} \pi_\infty = 0.2616 < \beta$. Then, we can also verify that $\mathbf{1}_{X_v} \pi_i < \beta$ for all $i = 1, 2, \ldots, K^*$. Based on Theorems 1 and 2, we conclude that the PFA is SA-CO under the intermittent loss of observations with respect to the threshold $\beta = 0.3$ (and hence any larger threshold).
To verify A-CO, we modify the Markov chain $M$ in Figure 9a by making all the states in $X_{p,v} = \{(3, \{1, 3\}), (1, \{1, 3\})\}$ absorbing. Then, we compute the absorption probability $p_a(X_{p,v})$ in Theorem 3 by using Equations (10) and (11). For this example, we have $p_a(X_{p,v}) = \pi_{0,p}(x_{p,0}) \times P_a(x_{p,0}) = 1$. That is to say, the a priori cumulated probability of all the sequences in $L_{NP}$ equals 1, so the system is not A-CO for any threshold $\beta \leq 1$.
Note that we obtain the probabilities of events in a PFA through abstract modeling of the raw coal processing system in this section, which does not rely on real-time data collection and analysis from field operations. In addition, regarding the impact on control decisions or risk mitigation, we plan to integrate our current analytical findings with supervisory control theory to deal with critical observability enforcement.

8. Conclusions

This paper investigates critical observability in the context of SDESs under intermittent loss of observations. Two new notions, called SA-CO and A-CO, are proposed to quantitatively evaluate the probability that critical observability is violated under intermittent loss of observations. In particular, we present a new language operation to construct a PFA capturing the behavior of the plant system under intermittent loss of observations. Then, two effective methods are presented to verify the two properties, respectively, both with exponential complexity in the number of states of the plant system. Finally, a raw coal processing system is used to demonstrate the feasibility and practical effectiveness of the proposed methods.
One important direction for future research is to extend the notion of critical observability to continuous systems (e.g., continuous-time Markov models [12]) and to propose methods to verify this property for such systems by constructing a Markovian observer and analyzing its generator matrix. In addition, due to the exponential complexity of the methods in this paper, we are also interested in reducing their computational complexity so that they can be applied to large and complex systems more efficiently.

Author Contributions

Conceptualization, X.C.; methodology, X.C. and H.Z.; software, W.C. and G.Z.; validation, X.C. and H.Z.; formal analysis, X.C. and H.Z.; investigation, X.C. and H.Z.; resources, X.C.; data curation, W.C. and G.Z.; writing—original draft preparation, X.C.; writing—review and editing, X.C. and H.Z.; visualization, W.C. and G.Z.; supervision, Z.Y.; project administration, X.C. and Z.Y.; funding acquisition, X.C. and Z.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Natural Science Foundation of China under Grants 62303375, 62273272, and 61873277, in part by the Natural Science Foundation of Shaanxi Province under Grant 2022JQ-606, in part by the Outstanding Youth Science and Technology Fund of Xi’an University of Science and Technology under Grant 6310224017, and in part by the Youth Innovation Team of Shaanxi Universities under Grant 24JP108.

Data Availability Statement

The data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Nomenclature

A-CO    Almost critical observability
DES    Discrete event system
PFA    Probabilistic finite automaton
SA-CO    Step-based almost critical observability
SDES    Stochastic discrete event system
$2^A$    Power set of $A$
$D$    Dilation function
$E$    Alphabet
$E^*$    Kleene closure of $E$
$E_o$    Set of observable events
$E_{uo}$    Set of unobservable events
$E_{il}$    Set of observable events associated with intermittent loss of observations
$E_{nil}$    Set of observable events that do not suffer from intermittent loss of observations
$G$    Nondeterministic finite automaton
$(G, p, \pi_0)$    PFA
$L$    Language
$\bar{L}$    Prefix-closure of $L$
$M$    Markov chain
$\mathbb{N}$    Set of non-negative integers
$\mathbf{P}$    Transition probability matrix
$P$    Natural projection
$P^{-1}$    Inverse projection
$p$    Transition probability function of a PFA
$p_M$    Transition probability function of a Markov chain
$p_a(X_{p,v})$    Absorption probability of the state set $X_{p,v}$
$Pr(u)$    Occurrence probability of an event sequence $u$
$U(x_o)$    Set of states unobservably reached from a set of states $x_o$
$X$    State set
$X_0$    Initial-state set
$X_c$    Set of critical states
$\beta$    Threshold
$\delta$    Nondeterministic transition function
$\pi_0$    Initial-state probability distribution vector

References

  1. Li, X.; Hadjicostis, C.N.; Li, Z. Opacity Enforcement in Discrete Event Systems Using Modification Functions. IEEE Trans. Autom. Sci. Eng. 2025, 22, 3252–3264. [Google Scholar] [CrossRef]
  2. Lefebvre, D.; Seatzu, C.; Hadjicostis, C.N.; Giua, A. Detectability notions for a class of finite labeled Markovian systems. Nonlinear Anal. Hybrid Syst. 2025, 57, 101586. [Google Scholar] [CrossRef]
  3. Dong, W.; Zhang, K.; Li, S.; Yin, X. On the verification of detectability for timed discrete event systems. Automatica 2024, 164, 111644. [Google Scholar] [CrossRef]
  4. Ding, S.; Liu, G.; Yin, L.; Wang, J.; Li, Z. Detection of cyber-attacks in a discrete event system based on deep learning. Mathematics 2024, 12, 2635. [Google Scholar] [CrossRef]
  5. Labed, A.; Saadaoui, I.; E, H.; El-Meligy, M.A.; Li, Z.; Sharaf, M. Language recovery in discrete-event systems against sensor deception attacks. Mathematics 2023, 11, 2313. [Google Scholar] [CrossRef]
  6. Cong, X.; Yu, Z.; Fanti, M.P.; Mangini, A.M.; Li, Z. Predictability verification of fault patterns in labeled Petri nets. IEEE Trans. Autom. Control 2025, 70, 1973–1980. [Google Scholar] [CrossRef]
  7. Ma, Z.; Tong, Y.; Seatzu, C. Verification of pattern–pattern diagnosability in partially observed discrete event systems. IEEE Trans. Autom. Control 2024, 69, 2044–2051. [Google Scholar] [CrossRef]
  8. De Santis, E.; Di Benedetto, M.; Di Gennaro, S.; D’Innocenzo, A.; Pola, G. Critical observability of a class of hybrid systems and application to air traffic management. In Stochastic Hybrid Systems: Theory and Safety Critical Applications; Springer: Berlin/Heidelberg, Germany, 2006; pp. 141–170. [Google Scholar]
  9. Zhang, J.; Li, Z. Critical observability enforcement in discrete event systems using differential privacy. Mathematics 2024, 12, 3842. [Google Scholar] [CrossRef]
  10. Hadjicostis, C.N. Estimation and Inference in Discrete Event Systems; Springer: Berlin/Heidelberg, Germany, 2020. [Google Scholar]
  11. Zhou, S.; Yu, J.; Yin, L.; Li, Z. Security quantification for discrete event systems based on the worth of states. Mathematics 2023, 11, 3629. [Google Scholar] [CrossRef]
  12. Lefebvre, D.; Seatzu, C.; Hadjicostis, C.N.; Giua, A. Probabilistic state estimation for labeled continuous time Markov models with applications to attack detection. Discret. Event Dyn. Syst. 2022, 32, 539–544. [Google Scholar] [CrossRef]
  13. Yu, Z.; Gao, H.; Cong, X.; Wu, N.; Song, H.H. A survey on cyber-physical systems security. IEEE Internet Things J. 2023, 10, 21670–21686. [Google Scholar] [CrossRef]
  14. Pola, G.; De Santis, E.; Di Benedetto, M.; Pezzuti, D. Design of decentralized critical observers for networks of finite state machines: A formal method approach. Automatica 2017, 86, 174–182. [Google Scholar] [CrossRef]
  15. Masopust, T. Critical observability for automata and Petri nets. IEEE Trans. Autom. Control 2020, 65, 341–346. [Google Scholar] [CrossRef]
  16. Yan, Y.; Deng, H.; Chen, Z. A new look at the critical observability of finite state machines from an algebraic viewpoint. Asian J. Control 2022, 24, 3056–3065. [Google Scholar] [CrossRef]
  17. Tong, Y.; Ma, Z. Verification of k-Step and definite critical observability in discrete-event systems. IEEE Trans. Autom. Control 2023, 68, 4305–4312. [Google Scholar] [CrossRef]
  18. Lai, A.; Lahaye, S.; Komenda, J. Observer construction for polynomially ambiguous max-plus automata. IEEE Trans. Autom. Control 2021, 67, 1582–1588. [Google Scholar] [CrossRef]
  19. Cong, X.; Fanti, M.; Mangini, A.; Li, Z. Critical observability of labeled time Petri net systems. IEEE Trans. Autom. Sci. Eng. 2023, 20, 2063–2074. [Google Scholar] [CrossRef]
  20. Cong, X.; Fanti, M.P.; Mangini, A.M.; Li, Z. Critical observability of discrete-event systems in a Petri net framework. IEEE Trans. Syst. Man Cybern. Syst. 2022, 52, 2789–2799. [Google Scholar] [CrossRef]
  21. Ma, Z.; Li, Z.; Giua, A. Design of optimal Petri net controllers for disjunctive generalized mutual exclusion constraints. IEEE Trans. Autom. Control 2015, 60, 1774–1785. [Google Scholar] [CrossRef]
  22. Cong, X.; Fanti, M.; Mangini, A.; Li, Z. Critical observability verification and enforcement of labeled Petri nets by using basis markings. IEEE Trans. Autom. Control 2023, 68, 8158–8164. [Google Scholar] [CrossRef]
  23. Wonham, W.M.; Cai, K. Supervisory Control of Discrete-Event Systems; Springer: Berlin/Heidelberg, Germany, 2019. [Google Scholar]
  24. Alves, M.V.; da Cunha, A.E.; Carvalho, L.K.; Moreira, M.V.; Basilio, J.C. Robust supervisory control of discrete event systems against intermittent loss of observations. Int. J. Control 2021, 94, 2008–2020. [Google Scholar] [CrossRef]
  25. Nunes, C.E.; Moreira, M.V.; Alves, M.V.; Carvalho, L.K.; Basilio, J.C. Codiagnosability of networked discrete event systems subject to communication delays and intermittent loss of observation. Discret. Event Dyn. Syst. 2018, 28, 215–246. [Google Scholar] [CrossRef]
  26. Carvalho, L.K.; Basilio, J.C.; Moreira, M.V. Robust diagnosis of discrete event systems against intermittent loss of observations. Automatica 2012, 48, 2068–2078. [Google Scholar] [CrossRef]
  27. Di Benedetto, M.; Di Gennaro, S.; D’Innocenzo, A. Critical observability for a class of stochastic hybrid systems and application to air traffic management. HYBRIDGE D7 2005, 5, 1–22. [Google Scholar]
  28. Thorsley, D.; Teneketzis, D. Diagnosability of stochastic discrete-event systems. IEEE Trans. Autom. Control 2005, 50, 476–492. [Google Scholar] [CrossRef]
  29. Chen, J.; Kumar, R. Polynomial test for stochastic diagnosability of discrete-event systems. IEEE Trans. Autom. Sci. Eng. 2013, 10, 969–979. [Google Scholar] [CrossRef]
  30. Yin, X.; Chen, J.; Li, Z.; Li, S. Robust fault diagnosis of stochastic discrete event systems. IEEE Trans. Autom. Control 2019, 64, 4237–4244. [Google Scholar] [CrossRef]
  31. He, J.; Wang, D.; Yang, M.; Hu, Y. Asynchronous fault diagnosis of stochastic discrete-event systems in industrial applications. IEEE Sens. J. 2024, 24, 4886–4898. [Google Scholar] [CrossRef]
  32. Chen, J.; Kumar, R. Stochastic failure prognosis of discrete event systems. IEEE Trans. Autom. Control 2021, 67, 5487–5492. [Google Scholar] [CrossRef]
  33. Liao, H.; Liu, F.; Zhao, R. Reliable co-prognosability of decentralized stochastic discrete-event systems and a polynomial-time verification. IEEE Trans. Cybern. 2021, 52, 6207–6216. [Google Scholar] [CrossRef]
  34. Keroglou, C.; Hadjicostis, C.N. Detectability in stochastic discrete event systems. Syst. Control Lett. 2015, 84, 21–26. [Google Scholar] [CrossRef]
  35. Saboori, A.; Hadjicostis, C.N. Current-state opacity formulations in probabilistic finite automata. IEEE Trans. Autom. Control 2014, 59, 120–133, Erratum in IEEE Trans. Autom. Control 2024, 69, 3480–3481. [Google Scholar] [CrossRef]
  36. Keroglou, C.; Hadjicostis, C. Initial state opacity in stochastic DES. In Proceedings of the 2013 IEEE 18th Conference on Emerging Technologies & Factory Automation (ETFA), Cagliari, Italy, 10–13 September 2013; pp. 1–8. [Google Scholar]
  37. Yin, X.; Li, Z.; Wang, W.; Li, S. Infinite-step opacity and K-step opacity of stochastic discrete-event systems. Automatica 2019, 99, 266–274. [Google Scholar] [CrossRef]
  38. Cassandras, C.G.; Lafortune, S. Introduction to Discrete Event Systems; Springer: Berlin/Heidelberg, Germany, 2021. [Google Scholar]
  39. Teng, Y.; Li, Z.; Yin, L.; Wu, N. State-based differential privacy verification and enforcement for probabilistic automata. Mathematics 2023, 11, 1853. [Google Scholar] [CrossRef]
  40. Norris, J.R. Markov Chains; Cambridge University Press: Cambridge, UK, 1998. [Google Scholar]
  41. Cong, X.; Cui, W.; Zhao, G. Matlab Tool for Verification of Critical Observability in Stochastic Discrete Event Systems Under Intermittent Loss of Observations. Website. Available online: https://github.com/congxuya/Matlab_tool_SA-COA-CO.git (accessed on 16 April 2025).
Figure 1. (a) PFA $(G, p, \pi_0)$ and (b) $(G_d, p_d, \pi_0)$.
Figure 2. Observer $G_o$ for the NFA $G$ in Figure 1a.
Figure 3. Observer $G_o$ of $G_d$ constructed in Step 2 of Algorithm 1.
Figure 4. PFA $(G_p, p_p, \pi_{0,p})$ constructed in Step 4 of Algorithm 1.
Figure 5. Markov chain associated with the PFA $(G_p, p_p, \pi_{0,p})$ in Figure 4, with the states in $X_{p,v}$ made absorbing.
Figure 6. A raw coal processing system.
Figure 7. (a) The raw coal processing system modeled by the PFA $(G, p, \pi_0)$ and (b) the corresponding $(G_d, p_d, \pi_0)$.
Figure 8. The observer associated with the NFA $G_d$ in Figure 7b.
Figure 9. (a) Markov chain associated with the PFA $(G_p, p_p, \pi_{0,p})$ in Algorithm 1 and (b) the Markov chain obtained by making the states in $X_{p,v}$ absorbing.
Table 1. List of event meanings.

Event    Meaning
a    Waiting for the process to complete
b    Roughly detect as qualified
c    Precisely detect as qualified
d    Roughly detect as unqualified
e    Precisely detect as unqualified
f    Sending to the magnetic separator
g    Separating the medium by the De-medium screen
h    Distributing the medium
Table 2. List of state meanings.

State    Meaning
0    Dense medium suspension tank
1    Suspension in the De-medium screen
2    Suspension waiting to be processed by the magnetic separator
3    Suspension in the magnetic separator
4    Suspension in the arc screen
5    Suspension in the splitter
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
