2.1. MEC Index
Consider the generic feedback control system shown in Figure 1, where $r(t)$ is the set point, $u(t)$ is the controller output, and $a(t)$ is the unmeasured disturbance. $G_c$, $G_p$ and $G_d$ denote the transfer functions of the feedback controller, the process and the disturbance dynamics, respectively. The set point is set to zero for convenience and the disturbances are assumed to be zero mean.
Let the system under consideration be described by an ARMAX model,
$$A(q^{-1})y(t) = q^{-\tau}B(q^{-1})u(t) + C(q^{-1})a(t), \qquad (1)$$
where $y(t)$, $u(t)$ and $a(t)$ are the output, the input and the noise of the ARMAX process, respectively, and $\tau$ is the time delay. The disturbance transfer function $G_d = C(q^{-1})/A(q^{-1})$ in Figure 1 can be further decomposed by the Diophantine equation,
$$G_d = F(q^{-1}) + q^{-\tau}R(q^{-1}), \qquad (2)$$
where $F(q^{-1}) = f_0 + f_1 q^{-1} + \cdots + f_{\tau-1}q^{-(\tau-1)}$ collects the impulse response coefficients of $G_d$ with order $\tau-1$, and $R(q^{-1})$ is the remaining proper transfer function that satisfies Identity (2).
The closed-loop output can then be separated into a feedback-invariant and a feedback-varying part,
$$y(t) = F(q^{-1})a(t) + \tilde{y}(t), \qquad (3)$$
where $\tilde{y}(t)$ denotes the feedback-varying remainder of the loop. The feedback-invariant term $F(q^{-1})a(t)$ is not a function of the process model or the controller; it depends only on the characteristics of the disturbance acting on the process. The second term is feedback-varying, which means that this part of the process output entropy in Equation (3) depends on the structure and parameters of the controller $G_c$. The entropy of the output variable reaches its minimum value if $\tilde{y}(t) = 0$, i.e., if $y(t) = F(q^{-1})a(t)$.
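The impulse response coefficients $f_i$ in the Diophantine decomposition can be obtained by polynomial long division of $C(q^{-1})$ by $A(q^{-1})$. A minimal sketch in Python (the example polynomials are illustrative assumptions, not taken from the text):

```python
def impulse_coeffs(a, c, n):
    """First n impulse response coefficients f_i of C(q^-1)/A(q^-1).

    a, c are coefficient lists [1, a1, a2, ...] and [1, c1, c2, ...]
    of A(q^-1) and C(q^-1); A is assumed monic (a[0] == 1).
    From C = A * F:  f_j = c_j - sum_{i=1..j} a_i * f_{j-i}.
    """
    f = []
    for j in range(n):
        cj = c[j] if j < len(c) else 0.0
        s = sum(a[i] * f[j - i] for i in range(1, min(j, len(a) - 1) + 1))
        f.append(cj - s)
    return f

# Example: A = 1 - 0.5 q^-1, C = 1  ->  f_j = 0.5**j
print(impulse_coeffs([1.0, -0.5], [1.0], 4))  # [1.0, 0.5, 0.25, 0.125]
```

For a pure delay of $\tau$ samples, the first $\tau$ coefficients returned here are exactly the feedback-invariant part $F(q^{-1})$.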
Gaussian variables have the particularity that all distributional information is contained in the first and second moments, with all higher-order cumulants above the second equal to zero; for non-Gaussian variables this no longer holds. Consequently, MVC, which minimizes only the second-order moment, does not apply to non-Gaussian systems. Fortunately, entropy is an alternative uncertainty measure that is more general in representing system randomness, because it is computed from the probability distribution, in which all the stochastic information is contained. Therefore, all higher-order moments, including the second one, can be optimized by minimizing the entropy instead of the mean square error.
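As a concrete illustration of why second-order statistics are not enough: a zero-mean Gaussian and a zero-mean uniform variable can share the same variance yet have different entropies. The closed-form differential entropies below use only standard formulas; the chosen variance is an illustrative assumption:

```python
import math

sigma2 = 1.0  # common variance for both distributions

# Differential entropy of N(0, sigma^2): 0.5 * ln(2*pi*e*sigma^2)
h_gauss = 0.5 * math.log(2 * math.pi * math.e * sigma2)

# Uniform on [-a, a] has variance a^2/3, so a = sqrt(3*sigma2);
# its differential entropy is ln(2a).
a = math.sqrt(3 * sigma2)
h_unif = math.log(2 * a)

print(round(h_gauss, 4), round(h_unif, 4))  # 1.4189 1.2425
assert h_unif < h_gauss  # Gaussian maximizes entropy for fixed variance
```

Two outputs with identical variance would thus receive the same MVC score while having genuinely different uncertainty, which is exactly what the entropy criterion captures.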
For a linear non-Gaussian system, the goal of the minimum entropy controller is to minimize the entropy of the system output variable [25,26,27,28]. As with conventional MVC, the minimum entropy value is obtained if and only if the feedback-varying term in Equation (3) vanishes, i.e., $y(t) = F(q^{-1})a(t)$.
MEC-based assessment compares the entropy $H(y)$ of the actual system output to the output entropy $H(y_{\mathrm{mec}})$ obtained with the minimum entropy controller, and the MEC-based CPA index is represented by
$$\eta = \frac{H(y_{\mathrm{mec}})}{H(y)}, \qquad (4)$$
where $H(y_{\mathrm{mec}})$ is the entropy of the output variable under MEC and $H(y)$ is the entropy of the output variable under the actual controller. Like the MVC index, this index always lies within the interval $[0, 1]$: values close to unity indicate good performance with regard to the theoretically achievable minimum output entropy, while values near zero indicate poor performance, including unstable control.
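In practice the MEC-based index must be formed from entropy estimates. A minimal sketch, assuming a simple histogram-based differential entropy estimator and simulated stand-in data (both the estimator choice and the data are illustrative assumptions, not the paper's method):

```python
import numpy as np

def hist_entropy(x, bins=60):
    """Histogram estimate of differential entropy: -sum p*ln(p)*dx."""
    p, edges = np.histogram(x, bins=bins, density=True)
    dx = np.diff(edges)
    mask = p > 0
    return float(-np.sum(p[mask] * np.log(p[mask]) * dx[mask]))

rng = np.random.default_rng(0)
a = rng.normal(size=20000)          # stand-in for the MEC benchmark output
y = 1.5 * rng.normal(size=20000)    # stand-in for the actual (noisier) output

eta = hist_entropy(a) / hist_entropy(y)  # MEC-based CPA index
print(round(eta, 3))  # < 1: actual loop is worse than the MEC benchmark
```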
In fact, the relative entropy, or Kullback–Leibler (KL) divergence [29,30], reflects the distance between two probability distributions. It is defined as
$$D_{\mathrm{KL}}(p \,\|\, q) = \int_{-\infty}^{\infty} p(x)\ln\frac{p(x)}{q(x)}\,dx. \qquad (5)$$
The relative entropy can also be used as a performance assessment index if $p(x)$ denotes the probability density function (PDF) of the output variable under MEC and $q(x)$ denotes the PDF of the output variable under the actual controller.
An index value of zero indicates that the current controller is the minimum entropy controller; when it deviates from zero, the controller is not a minimum entropy controller. However, the problem with the relative-entropy-based assessment index is that it is not a convex index. In other words, the index can determine whether the current controller is the minimum entropy controller, but it is difficult to find a suitable threshold for judging whether the current controller performance is good or bad. In this sense, the KL distance is not appropriate as a control loop performance assessment index.
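A histogram-based KL estimate between two output distributions can be sketched as follows (the sample data and binning scheme are assumptions for illustration; shared bins and a small probability floor keep the discrete estimate finite):

```python
import numpy as np

def kl_divergence(p_samples, q_samples, bins=50):
    """Discrete KL(p || q) over a shared histogram support."""
    lo = min(p_samples.min(), q_samples.min())
    hi = max(p_samples.max(), q_samples.max())
    p, _ = np.histogram(p_samples, bins=bins, range=(lo, hi))
    q, _ = np.histogram(q_samples, bins=bins, range=(lo, hi))
    p = (p + 1e-12) / p.sum()   # tiny floor avoids log(0) in empty bins
    q = (q + 1e-12) / q.sum()
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(1)
x = rng.normal(0, 1, 50000)
y = rng.normal(0, 1.3, 50000)
print(kl_divergence(x, x))  # 0: identical distributions
print(kl_divergence(x, y))  # > 0: distributions differ
```

Note that the estimate only tells whether the two distributions coincide; as the text observes, its magnitude carries no convex performance scale, so no universal "good/bad" threshold exists.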
2.2. Rational Entropy
In [25,26], the authors gave a method for calculating the entropy, which is based on the following lemmas.
Lemma 1 [25]. If $X$ is a random variable and $k$ is a nonzero constant, then $H(kX) = H(X)$.
Lemma 2 [25]. For two random variables $X$ and $Y$, the entropy, or the amount of information revealed, is $H(X+Y) = H(X,Y)$.
If $X$ and $Y$ are mutually independent, $H(X,Y) = H(X) + H(Y)$, then
$$H(X+Y) = H(X) + H(Y). \qquad (6)$$
However, the conclusion in Equation (6) is wrong because the two lemmas are unsuitable for the condition considered here. Lemma 1 is valid only for discrete random variables; whether it also holds for continuous random variables is examined in the following example.
Suppose the system output is
$$y(t) = f_0 a(t) + f_1 a(t-1), \qquad (7)$$
with $a(t)$ a continuous, independent and identically distributed noise. According to Lemma 1 and Equation (6), the entropy would be obtained as
$$H(y) = H(f_0 a(t)) + H(f_1 a(t-1)) = 2H(a). \qquad (8)$$
Then, for the same distribution of $a$, different coefficients $k$ yield the probability distributions of $ka$ shown in Figure 2. Obviously, different coefficients lead to different distributions of $ka$. Since the entropy is determined by the shape of the distribution, the coefficients cannot be omitted; that is, $H(ka) \neq H(a)$.
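The scaling behavior that invalidates Lemma 1 for continuous variables has a closed form: differential entropy satisfies $H(kX) = H(X) + \ln|k|$, so $H(kX) = H(X)$ only when $|k| = 1$. A quick check for a standard Gaussian (illustrative values only):

```python
import math

def gauss_entropy(sigma):
    # Differential entropy of N(0, sigma^2)
    return 0.5 * math.log(2 * math.pi * math.e * sigma**2)

h1 = gauss_entropy(1.0)   # H(X)
h2 = gauss_entropy(2.0)   # H(2X): scaling X by k scales sigma by |k|
print(round(h2 - h1, 4))  # 0.6931 = ln(2), not 0
```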
As for Lemma 2, probability theory shows that if the PDFs of $X$ and $Y$ are known and $X$ and $Y$ are mutually independent, the PDF of $Z = X + Y$ is calculated by the convolution [28],
$$\gamma_Z(z) = \int_{-\infty}^{\infty} \gamma_X(x)\,\gamma_Y(z-x)\,dx, \qquad (9)$$
where $\gamma_X$, $\gamma_Y$ and $\gamma_Z$ are the PDFs of $X$, $Y$ and $Z$, respectively. The random variable $Z$ is still a univariate variable, but Lemma 2 uses the properties of multivariate random variables. The distribution of the sum of two random variables is not the same as their joint distribution; in other words, $H(X+Y) \neq H(X,Y)$. Because of this confusion of concepts, the resulting performance evaluations are not credible.
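The convolution above can be checked numerically. For two independent uniform variables on $[0,1]$ (each with differential entropy $0$), the sum has a triangular PDF whose differential entropy is $1/2$, so the entropy of the sum is not the sum of the entropies; the grid and distributions below are illustrative assumptions:

```python
import numpy as np

dx = 1e-3
x = np.arange(0, 1, dx)
pdf_u = np.ones_like(x)  # Uniform(0,1) density; its entropy is ln(1) = 0

# PDF of the sum via discrete convolution of the two densities, Eq.-style
pdf_sum = np.convolve(pdf_u, pdf_u) * dx   # triangular density on [0, 2]

# Differential entropy -integral f*ln(f), with 0*ln(0) treated as 0
mask = pdf_sum > 0
h_sum = -np.sum(pdf_sum[mask] * np.log(pdf_sum[mask])) * dx

print(round(h_sum, 2))  # 0.5, while H(X) + H(Y) = 0
```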
As mentioned above, the entropy is determined by the shape of the distribution (i.e., the shape of the PDF), so the entropy of the feedback invariant is determined by the shape of the PDF of $F(q^{-1})a(t)$. In [27], a method for computing the entropy of feedback invariants, called the consistent discrete distribution approximation method, is introduced, which remedies the deficiencies of [25]. Although this method ensures the unity and consistency of the entropy calculation, different standards are required for different non-Gaussian noises. Therefore, identification of the distribution function is an indispensable step, and it will be illustrated in subsequent sections.
In previous studies, Shannon entropy (SE) has been one of the performance assessment criteria based on minimum entropy control [28]. It is defined as
$$H_S(X) = -\int_{-\infty}^{\infty} \gamma(x)\ln\gamma(x)\,dx. \qquad (10)$$
It appears to provide a new benchmark. However, the Shannon entropy of a continuous random variable may be negative or even negatively infinite, which means that the SE does not satisfy the “consistency” property; this indeterminacy prevents it from serving as a new standard. Fortunately, a rational entropy (RE) was proposed by Zhou [24] to replace the SE. This type of entropy exhibits most properties of Shannon's entropy and, at the same time, satisfies the “consistency” property. In this paper, we use the rational entropy of the process output under MEC and of the actual output to calculate the performance index. Let $X$ be a random variable in $\mathbb{R}$ and $\gamma(x)$ be its PDF; the rational entropy (RE) is given as [24]
$$H_R(X) = -\int_{-\infty}^{\infty} \gamma(x)\ln\frac{\gamma(x)}{1+\gamma(x)}\,dx. \qquad (11)$$
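The "consistency" advantage of the RE can be verified numerically: since $\gamma/(1+\gamma) < 1$, the RE integrand is never negative, even for densities whose Shannon entropy is negative. A small check for a narrow uniform density (the example density is an illustrative assumption):

```python
import numpy as np

# Uniform density of height 2 on [0, 0.5]: Shannon entropy ln(0.5) < 0
dx = 1e-4
x = np.arange(0, 0.5, dx)
gamma = np.full_like(x, 2.0)

h_shannon = -np.sum(gamma * np.log(gamma)) * dx                 # SE
h_rational = -np.sum(gamma * np.log(gamma / (1 + gamma))) * dx  # RE

print(round(h_shannon, 3), round(h_rational, 3))  # -0.693 0.405
```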
Although the expression of RE is similar to that of the relative entropy, RE and relative entropy have different meanings: RE reflects the uncertainty of random variables and relative entropy reflects the distance between two probability distributions.
In fact, [24] gave a CPA index for output stochastic distribution control (SDC) systems. However, [24] only gave the calculation method for the theoretical benchmark value and did not give an estimation method for this benchmark; on the other hand, the index in [24] was only for SDC systems, so its theoretical benchmark was not generic. In other words, a CPA framework that directly uses the characteristics of continuous random variables for general non-Gaussian systems has not yet been established. This paper aims to build an MEC-based CPA index for general feedback control systems with non-Gaussian disturbances.
In chemical processes, the output is generally measurable, but it is difficult to obtain an accurate process model owing to incomplete physicochemical knowledge and the unknown random disturbance distribution. This means that the actual output entropy can be obtained directly from the collected output samples by Equation (11), but the entropy of the MEC process must still be estimated through identification of the system parameters and estimation of the noise PDF. To summarize, the complete algorithm to evaluate the MEC-based index and to assess feedback controllers consists of the following steps.
- (1)
Select the time-series-model type. Determine/estimate the system time delay τ and system order.
- (2)
Identify the closed-loop model from collected output samples.
- (3)
Estimate the benchmark entropy of process data.
- (4)
Compute the performance index.
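The four steps above can be sketched end to end. The sketch below assumes an AR(1) closed-loop model with unit delay, least-squares identification, and a histogram entropy estimator; all model choices and data are illustrative assumptions, not the paper's algorithm in full:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated closed-loop output: AR(1) driven by non-Gaussian (uniform) noise
n = 20000
a = rng.uniform(-1, 1, n)              # unmeasured disturbance samples
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * y[t - 1] + a[t]

# (1)-(2) Identify an AR(1) closed-loop model by least squares (delay tau = 1)
phi = y[:-1].reshape(-1, 1)
theta, *_ = np.linalg.lstsq(phi, y[1:], rcond=None)
residuals = y[1:] - phi @ theta        # estimate of the driving noise a(t)

# (3) Benchmark: with tau = 1, F(q^-1) = f0 = 1, so the MEC output ~ a(t),
#     which the residuals approximate
def hist_entropy(x, bins=60):
    p, edges = np.histogram(x, bins=bins, density=True)
    w = np.diff(edges)
    m = p > 0
    return float(-np.sum(p[m] * np.log(p[m]) * w[m]))

# (4) Performance index: ratio of benchmark to actual output entropy
eta = hist_entropy(residuals) / hist_entropy(y)
print(round(float(theta[0]), 3), round(eta, 3))
```

In a real assessment, the histogram estimator would be replaced by the RE of Equation (11) evaluated on the identified noise PDF, as described above.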