1. Introduction
The reliability of control systems, in terms of overall stability and uncertainty rejection, has become an important subject over the last three decades, as the complexity of control systems has increased significantly with the rapid technological development in many engineering areas. Industrial plants and systems usually rely on measurements obtained by sensors, as well as on the control actions applied by actuators, to achieve certain control objectives, which implies that the quality of sensor measurements and actuator actions has a significant impact on control system performance. As a result, the control task becomes more difficult if faults or failures occur in the system measurements. Faults may arise from different sources: sensor faults due to aging or broken components, and actuator faults caused by broken parts, failures in the actuator machinery, or errors in the operating mechanism. Consequently, the presence of faults and failures within a system can weaken its performance and lead to degradation or loss of stability.
Generally, both actuators and sensors may partially or completely lose effectiveness due to several causes, resulting in unexpected faults in measurements or control actions. They may also exhibit unwanted offsets or become stuck during operation. Moreover, faults caused by sensors and actuators may accumulate over time and grow with each closed-loop cycle, causing significant damage to the entire system. Therefore, it is necessary to build advanced fault detection and isolation (FDI) techniques to estimate faults in the system, and to synthesize fault-tolerant control (FTC) methods that augment the system and guarantee its reliability, sustainability, and safety [
1,
2,
3,
4].
Over the past three decades, many researchers and practitioners have been involved in developing and improving fault-tolerant control (FTC) approaches, and several methods have been proposed to enhance the control process in the presence of sensor and actuator faults. FTC methods are generally divided into two categories: passive FTC and active FTC. In the passive FTC category, the control system does not respond online to the occurrence of faults; the controller is pre-designed, and its structure is fixed for both the fault-free and the faulty cases. As a result, the control system only exhibits a certain level of robustness and tolerates a limited set of faults without any change in the controller structure [
5,
6,
7]. Consequently, this type of controller is easy to implement. However, it may exhibit limited performance and fault tolerance, since the controller structure cannot adapt to new online fault information. In contrast, the structure of an active FTC allows the controller to react immediately to the occurrence of faults and to adjust its parameters accordingly, in order to achieve robustness and tolerate faults based on online fault information obtained by the fault detection and identification (FDI) procedure [
8,
9,
10]. In conclusion, active FTC schemes can deal with online faults and can achieve better robustness and significantly greater fault tolerance than the passive approach [
10].
Noise and external disturbances are undesired factors in real-world control systems, and these unavoidable disturbances can be generated by many sources, including the dynamics of the targeted system itself. To achieve better performance, it is essential to reduce the impact of noise and uncertainties to an adequate level. Furthermore, obtaining full information about all states of a system is a very difficult task in many industrial engineering systems, where some of these states are unmeasurable. To deal with this issue, observers and observer-based controllers are employed, since an observer has the ability to estimate the states of a dynamic system. Consequently, the observer provides important information, which is eventually used for system supervision and the realization of the feedback control [
11,
12,
13,
14,
15,
16]. Many observer-based controllers have been proposed for systems with unmeasurable states. Dong et al. [
17] proposed an approach based on H-infinity in a generalized internal model control design with an adaptive observer-based fault estimator for active fault-tolerant control. The proposed approach uses online fault estimations to guarantee H-infinity performance and closed-loop system stability. Aouaouda et al. [
18] proposed an active fault-tolerant tracking controller methodology to deal with vehicle dynamics. In this approach, a Takagi–Sugeno model was used to represent the dynamics of the nonlinear system, while the system states along with sensor faults are estimated by a descriptor observer. Finally, a fault-tolerant tracking controller is proposed in terms of linear matrix inequalities (LMIs) to control vehicle dynamics. Kommuri et al. [
1] proposed a fault-tolerant control approach based on a higher-order sliding mode observer. The proposed approach was investigated for automatic speed tracking control of an electric vehicle powered by a permanent-magnet synchronous motor and was examined in the presence of sensor failures/faults. Unfortunately, the proposed observer is structurally complex, and the controller gains were set beforehand. Additionally, the approach in [
1] considered a Lyapunov matrix in a specific form. The authors in [
19,
20] proposed fault-tolerant control strategies based on H-infinity observers to estimate both system states and faults. These observers have the ability to estimate both states and faults with high accuracy. Based on the proposed observers, output feedback fault-tolerant controllers were synthesized to ensure the stability of the targeted systems. Furthermore, several observer designs have been proposed for discrete-time systems. Most recently, an observer design approach was presented in [
21], in which a robust hybrid fault-tolerant predictive control scheme was proposed to handle actuator faults while providing effective and robust trajectory tracking performance.
As one of the most popular advanced control algorithms in industry, model predictive control (MPC) has been introduced into several industrial processes and has become one of the most active topics in research and development [
22,
23,
24,
25,
26]. Recently, fault-tolerant control approaches based on model predictive control have gained increasing attention in several fields, where researchers have tried to address fault and failure issues within the predictive control framework. Zou et al. [
27] proposed a fault-tolerant model predictive control (MPFTC) strategy for controlling batch processes subject to disturbances and partial actuator faults. In this approach, the control procedure was formulated as an optimization problem, and a genetic algorithm was used to obtain the control signal and enhance the performance of the overall system. Zarch et al. [
28] employed the viability theory to evaluate actuator fault tolerance based on nonlinear model predictive control (NMPC). In this approach, the nonlinear model of the system was described as a linear parameter varying (LPV) form, while the viability concepts were applied to evaluate the fault tolerance of the NMPC strategy. Shi et al. [
29] proposed a robust model predictive control scheme for a class of industrial processes with interval time-varying delay subject to constraints, uncertainties, unknown disturbances, and partial actuator failures. In this approach, an extended state-space model was used to describe the system and to synthesize a robust control law that guarantees acceptable tracking performance. Sheikhbahaei et al. [
30] constructed a novel approach for constrained explicit model predictive control for linear discrete-time systems subject to uncertainties, disturbances, and possible actuator faults. Zhang et al. [
31] developed a model predictive fault-tolerant control approach for batch processes subject to partial actuator faults and unknown disturbances, while in [
32], an overview of predictive control for time-delay systems is presented.
Unfortunately, most existing fault-tolerant predictive control approaches for discrete-time systems focus only on partial actuator faults or on sensor faults, and the simultaneous sensor/actuator fault problem has not been fully studied for discrete-time systems. Motivated by the results provided in [
19,
20], this manuscript is dedicated to developing and investigating a novel robust constrained fault-tolerant predictive control method for discrete-time systems subject to disturbances and both sensor and actuator faults (see
Figure 1). Consider the proposed discrete-time control scenario shown in
Figure 1. The measurements are obtained by sensors and used by both virtual and real observers to estimate the system states. The estimated states are then used to compute the desired control signal. To construct the control law, an extended state-space model is first used to build virtual observers. Then, real observers are formulated based on the virtual observers; this step is necessary, since the virtual observers may depend on information that is not available online. Finally, an observer-based robust fault-tolerant predictive control strategy is designed such that the system states and the actuator/sensor faults are estimated simultaneously. As a result, these steps ensure that faults have only a minor influence on the system dynamics. Furthermore, the robust stability of the closed-loop system is guaranteed by several proposed stability conditions. These sufficient conditions confirm the robust stability of the overall closed-loop system, while both the observer and the fault-tolerant controller gains are obtained by solving a linear matrix inequality (LMI) problem. Note that the robust fault-tolerant predictive controller, which is an observer-based controller, is synthesized via an online LMI optimization problem that minimizes the upper bound of the cost function while handling disturbances, faults, and constraints.
This manuscript is structured as follows: First, the problem formulation is presented in
Section 2.
Section 3 describes the steps of constructing the virtual and real observers in order to estimate the states and faults simultaneously. Next, the proposed robust constrained fault-tolerant predictive control is described in detail in
Section 4.
Section 5 is dedicated to proving the robust stability of the proposed RFTPC scheme. Then, simulation results are illustrated in
Section 6. Finally,
Section 7 concludes the paper.
Notation: all symbols used in this paper are standard in the area of robust control and fault-tolerant control, unless otherwise specified. For a vector, ‖·‖ signifies its Euclidean norm. The controller gain matrix and the observer gain matrix are introduced where they are used. ℝⁿ represents the n-dimensional Euclidean space. 0 and I represent the zero and the identity matrices with proper dimensions, respectively. The superscript T denotes the transpose of a matrix or vector. Finally, the matrices appearing in the stability conditions are symmetric positive definite, and the weighting matrices of the cost function are positive definite.
2. Problem Statement
Generally, a discrete-time system can be represented as follows:
where
is the state vector of the system,
represents the output vector of the system, and
is the system input vector.
The control signal is usually governed by the following constraints:
and
is the upper limit on the control signal.
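For reference, a generic constrained discrete-time linear model consistent with the description above can be written as follows; the symbols are placeholders for the quantities defined in Equations (1) and (2), which are not reproduced here:
\[
\begin{aligned}
x(k+1) &= A\,x(k) + B\,u(k),\\
y(k)   &= C\,x(k),\\
\lVert u(k)\rVert &\le u_{\max}, \qquad k = 0, 1, 2, \dots
\end{aligned}
\]
where x(k), u(k), and y(k) denote the state, input, and output vectors, and u_max is the bound on the control signal.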
Since the considered discrete-time system is subject to both disturbances and faults, an observer that can estimate system states and faults with high accuracy is very hard to attain directly. Thus, an extra step based on virtual observers is used to design the real observer. Both sensor and actuator faults are considered in this work. The real measurement of the system output is altered when a sensor fault occurs and is modelled as follows:
represents the vector of additive sensor faults.
Similarly, an actuator fault can cause certain changes over the control input, which can be represented as:
where
describes the actuator fault, and
is the actual control input signal.
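A common additive description of these two fault effects, given here only as an illustrative sketch of the structure described above (the exact fault distribution matrices may differ), is
\[
\begin{aligned}
y_{f}(k) &= C\,x(k) + f_{s}(k),\\
u_{f}(k) &= u(k) + f_{a}(k),
\end{aligned}
\]
where f_s(k) and f_a(k) denote the additive sensor and actuator fault vectors, respectively.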
Throughout this work, several lemmas and assumptions are required for the development of our main results.
Lemma 1 (Schur complement) [33]. Given real matrices of appropriate dimensions, with the required symmetry and positive-definiteness conditions, the Schur complement equivalence holds. Assumption 1. The actuator faults, the sensor faults, and the external disturbance are bounded, and there exists a positive scalar bounding them.
Remark 1. Assumption 1 will eventually ensure that the increments of the disturbances as well as faults between two sampling time instants are bounded.
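For completeness, the standard statement of the Schur complement, written with placeholder symbols that may differ from those used in [33] and in Lemma 1, is
\[
\begin{bmatrix} Q & S \\ S^{T} & R \end{bmatrix} \succ 0
\quad\Longleftrightarrow\quad
R \succ 0 \ \ \text{and} \ \ Q - S R^{-1} S^{T} \succ 0,
\qquad Q = Q^{T},\ R = R^{T}.
\]
This equivalence is what allows the nonlinear matrix conditions appearing later in the proofs to be rewritten as LMIs.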
In this paper, both sensor and actuator faults are considered and added to the model representation in Equation (1) as follows:
where
,
, and
denote the vector of the faulty system state, faulty control input, and faulty measurement output vector, respectively. The vectors
and
are the additive actuator and sensor faults, respectively.
,
, and
are the system matrices of compatible dimensions.
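A representative form of such a faulty model, shown only to sketch the structure (not necessarily the exact Equation (5)), is
\[
\begin{aligned}
x_{f}(k+1) &= A\,x_{f}(k) + B\bigl(u(k) + f_{a}(k)\bigr) + D\,d(k),\\
y_{f}(k)   &= C\,x_{f}(k) + f_{s}(k),
\end{aligned}
\]
where d(k) is the external disturbance and D a disturbance distribution matrix, both assumed bounded in accordance with Assumption 1.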
3. Observer Design
The main objective of a virtual observer is to improve the measurements and to provide better accuracy in estimating the system states and observing the faults.
To design an observer that estimates the system state vector
, sensor faults
, and actuator faults
, an augmented system based on the system Equation (5) and subject to external disturbance is defined as:
where the augmented vector
, and matrices
,
,
, and
.
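One common way to build such an augmented model, shown here only to illustrate the idea (the matrices of Equation (6) may be arranged differently), is to stack the state and the two fault vectors and exploit the bounded-increment property of Remark 1:
\[
\bar{x}(k) = \begin{bmatrix} x_{f}(k) \\ f_{a}(k) \\ f_{s}(k) \end{bmatrix},
\qquad
\bar{x}(k+1) =
\begin{bmatrix} A & B & 0 \\ 0 & I & 0 \\ 0 & 0 & I \end{bmatrix} \bar{x}(k)
+ \begin{bmatrix} B \\ 0 \\ 0 \end{bmatrix} u(k) + \bar{w}(k),
\qquad
y_{f}(k) = \begin{bmatrix} C & 0 & I \end{bmatrix} \bar{x}(k),
\]
where \bar{w}(k) collects the disturbance and the bounded fault increments. Estimating \bar{x}(k) then yields the system state and both faults simultaneously.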
Consider a full-rank matrix composed of identity and zero blocks of proper dimensions. Its inverse can be easily obtained from this block structure, and the block rows of the matrix are defined accordingly.
Multiplying
by both sides of Equation (6), and by using the fact
, we have the following:
Defining a virtual observer in the following form:
where
represents the estimated values of the system state vector
, sensor faults
, and actuator faults
, respectively. Finally,
is the targeted observer gain matrix.
Next, the estimation error
of the observer in Equation (8) is defined as
, and by subtracting Equation (8) from Equation (7), the dynamic equation of error is obtained as follows:
where
,
.
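In a Luenberger-type form, and with placeholder notation that only sketches the structure of Equations (8) and (9), the virtual observer and its error dynamics read
\[
\begin{aligned}
\hat{\bar{x}}(k+1) &= \bar{A}\,\hat{\bar{x}}(k) + \bar{B}\,u(k)
 + L\bigl(y_{f}(k) - \bar{C}\,\hat{\bar{x}}(k)\bigr),\\
e(k) &= \bar{x}(k) - \hat{\bar{x}}(k),
\qquad
e(k+1) = \bigl(\bar{A} - L\bar{C}\bigr)e(k) + \bar{w}(k),
\end{aligned}
\]
so that a gain L rendering \bar{A} - L\bar{C} Schur stable keeps the estimation error in a bounded neighbourhood of zero determined by \bar{w}(k). The actual virtual observer in Equation (8) additionally involves a term that is not available online, which is precisely what motivates the real observer introduced next.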
Unfortunately, the virtual observer in Equation (8) contains an unknown term, which cannot be easily determined and eventually reduces the precision of the observed information. To address this issue, a real observer based on the previous virtual observer parameters is designed to eliminate this undesirable term.
Therefore, an auxiliary variable
is introduced, while the real observer of the system Equation (5) is obtained as follows:
where
is an auxiliary variable, and matrices
,
,
, and
are the observer parameters to be determined later as follows:
4. Observer-Based Robust Fault-Tolerant Predictive Control
In this section, an observer-based state feedback fault-tolerant predictive controller is constructed to stabilize the overall closed-loop system. First, an observer is constructed to estimate the system states and faults in the presence of external disturbance. Next, a fault-tolerant predictive controller based on the observers is designed to ensure the stability of the system and to maintain acceptable control performance.
As the first step, the control law for the system in Equation (2) is designed to guarantee better fault compensation and to reject the influence of disturbances. The control signal is considered in the following form:
Substituting
it into Equation (4), we have the following:
where
, then the control signal is represented as:
Substituting Equation (13) into Equation (5), we obtain the following:
where
,
.
Next, an optimization problem is established to obtain a suitable control signal, where the objective is to minimize a worst-case quadratic cost function over an infinite horizon, as follows:
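The cost referred to here is the classical worst-case infinite-horizon quadratic objective of Kothare et al. [34]; in placeholder notation it reads
\[
J(k) = \min_{u(k+i|k)} \; \max \;
\sum_{i=0}^{\infty}
\Bigl[\, x(k+i|k)^{T} Q\, x(k+i|k) + u(k+i|k)^{T} R\, u(k+i|k) \,\Bigr],
\]
where x(k+i|k) and u(k+i|k) denote the state and input predicted i steps ahead at time k, and the maximization is taken over the admissible disturbances and faults.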
The matrices
and
are known as positive-definite weighting matrices. Next, sufficient stability conditions for observation errors and an overall closed-loop system are established in terms of LMIs. Based on Kothare et al. [
34], the following theorem is proposed for the robust predictive control.
Theorem 1. Given an augmented state-space model described by Equation (6), the state-feedback controller described in Equation (4) robustly stabilizes the augmented system subject to disturbances, sensor/actuator faults, and input constraints if there exist symmetric positive definite matrices and a positive scalar satisfying the following convex optimization problem, subject to:
where
and
.
Additionally, at each sampling time, the control gain can be easily obtained from the solution, as can the gain of the state virtual observer. Furthermore, the obtained observer gain is used to attain the parameters of the real observer in Equation (11).
Proof. To obtain the stability conditions, a Lyapunov function is defined as:
and
.
For any
, suppose
satisfies the following stability constraint:
Assume that the summation is taken up to
, i.e.,
,
. Summing from
to
produces the following:
By defining , an upper bound on the performance index is obtained as . Hence, the first inequality of Equation (23) holds.
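The argument in this step follows the standard reasoning of [34]; in placeholder notation, the stability constraint gives
\[
V\bigl(x(k+i+1|k)\bigr) - V\bigl(x(k+i|k)\bigr)
\le -\Bigl[\, x(k+i|k)^{T} Q\, x(k+i|k) + u(k+i|k)^{T} R\, u(k+i|k)\,\Bigr],
\]
and summing over i from 0 to infinity while using V(x(\infty|k)) = 0 yields
\[
J(k) \le V\bigl(x(k|k)\bigr) \le \gamma,
\]
so the scalar \gamma upper-bounds the worst-case cost and becomes the quantity minimized in the LMI problem.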
Next, we show that the second inequality of Equation (24) holds.
Since the inequality in Equation (20) implies that
strictly decreases as
transitions to
and
from Equation (21), we have the following:
Using the Schur complement, we obtain the following:
Substituting and into the above inequality matrix, and then applying the congruence transformation to the resulting inequality with diag [1, , ], we conclude that Equation (18) holds.
Based on input constraint Equation (2), we can write the following:
and based on Equation (30), we can obtain the following:
Using the Schur complement, we obtain the following:
Multiplying from the right and from the left by the corresponding matrices, we obtain Equation (19).
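For comparison, the classical input-constraint LMI of Kothare et al. [34], which the constraint in Equation (19) presumably parallels (the symbols here are placeholders), enforces \lVert u(k+i|k) \rVert_{2} \le u_{\max} through
\[
\begin{bmatrix} u_{\max}^{2} I & Y \\ Y^{T} & G \end{bmatrix} \succeq 0,
\qquad u(k+i|k) = F\,x(k+i|k), \quad F = Y G^{-1},
\]
where G is the symmetric positive definite matrix defining the invariant ellipsoid x^{T} G^{-1} x \le 1.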
Finally, Equation (20) implies the following:
where
,
and the matrices
and
are positives. Furthermore, since
, we have the following:
By taking into account Equations (35) and (36), we can state that Equation (20) implies the following:
By substituting
,
,
and
into Equation (37) and multiplying everything from the left by diag [
,
] and from the right by diag [
,
], we obtain the following:
Using the Schur complement, we see that this is equivalent to:
Again, applying the Schur complement to Equation (39):
then multiplying the resulting inequality from the left by
and from the right by
taking into account Equations (4) and (5), we obtain the following:
Consequently, the function is a strictly decreasing Lyapunov function, which confirms that , or as . This ends the proof of Theorem 1.
At each sampling time, the gain of the virtual observer, obtained by , is then used to compute the real observer gain and to estimate system states and faults.
As mentioned earlier in
Section 3, the auxiliary variable defined as
is used to define the real observer, where:
This satisfies the form of the designed observer (10) with parameters (11). Finally, we can obtain the estimated values . □.
5. Robust Stability Analysis
In this section, mathematical formulations are provided to prove that the proposed control law defined in Equation (12), subject to the constraint in Equation (2), robustly stabilizes the closed-loop system under sensor/actuator faults and external disturbances. The mathematical formulation presented in this section is based on the work of Kothare et al. [
34]. Obviously, the optimization problem defined earlier must have a feasible solution at each iteration (sampling time) in order to guarantee the asymptotic stability of the closed-loop system (14). This brings us to propose the following feasibility lemma:
Lemma 2 (Feasibility) [34]. Any feasible solution found at a given sampling instant that satisfies the optimization problem of Theorem 1 is also a feasible solution for all future times. Therefore, if the optimization problem of Theorem 1 is feasible at a given sampling instant, then it is feasible at any subsequent sampling instant.
Proof. To prove Lemma 2, it is only necessary to verify that the inequality in Equation (18) is feasible for all future measurements of the state and error.
and
,
are the future measurements of state and error, respectively. Assuming that the optimization problem in Theorem 1 is feasible at sampling instance k, the following inequality will hold:
The inequality in Equation (44) needs to be feasible for all future measurements in order to prove Lemma 2, which means:
At the sampling instant
, the values of the state and error are described by
and
, respectively. Note that these states and errors are obtained after applying the controller at the sampling instant
, and they must also satisfy the condition in Equation (4); this is written as follows:
This shows that the feasible solution obtained by Theorem 1 at a given sampling instant is also feasible at the next instant, and hence the problem remains feasible. By repeating this argument for all subsequent instants, we complete the proof of Lemma 2.
Theorem 2 (Robust stability) [34]. If Theorem 1 holds at sampling instant k for the system in Equation (14), then the feasible feedback control law obtained from Theorem 1 robustly asymptotically stabilizes the closed-loop system. Proof. To ensure robust asymptotic stability of the closed-loop system, the Lyapunov function must be strictly decreasing.
Indicating that
and
are matrices that correspond to the optimal solution at instants
and
respectively, we obtain the following:
This is obvious, since the latter matrix is optimal at instant k + 1, while the former is only feasible at that instant.
The controller is said to be a stabilizing controller for all possible states of the system, and the matrix
is a Lyapunov matrix if the following inequality holds:
If at sampling instant
, the controller, obtained at that sampling instant
, is applied to the system, then the state and error of the system at the next instant
,
, also satisfies the inequality (48). Combining this result with inequality in Equation (47), we obtain the following:
Therefore, is a strictly decreasing Lyapunov function, which confirms that , or as . □.
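In placeholder notation, the chain of inequalities established in this proof is the standard one:
\[
V_{k+1}\bigl(x(k+1)\bigr) = x(k+1)^{T} P_{k+1}\, x(k+1)
\;\le\; x(k+1)^{T} P_{k}\, x(k+1)
\;<\; x(k)^{T} P_{k}\, x(k) = V_{k}\bigl(x(k)\bigr),
\]
where the first inequality holds because the solution at instant k + 1 is optimal while the solution carried over from instant k is merely feasible, and the strict inequality follows from the Lyapunov decrease condition in Equation (48).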
6. Results and Discussions
In this section, the robustness of the proposed observer-based design is investigated when a discrete-time system is subject to simultaneous input disturbance, actuator faults, and sensor faults. First, a linear mass-spring system with two springs and two masses is modeled in the form of Equation (1) with the following matrices:
To investigate the proposed controller, the initial values, sensor and actuator faults, as well as external disturbance are set as follows:
Additionally, the MPC parameters are selected as follows: the Q and R weight matrices of the cost function given in Equation (16) are selected as 100I and 10I, respectively, and the control constraint is imposed as in Equation (2).
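Since the numerical model matrices are not reproduced here, the following Python sketch shows how a two-mass, two-spring model of this type can be assembled and discretized; the masses, stiffnesses, damping, and sampling time below are hypothetical placeholders and are not the values used in the simulations.

import numpy as np
from scipy.signal import cont2discrete

# Hypothetical physical parameters (placeholders, not the values used in the paper).
m1, m2 = 1.0, 1.0          # masses
k1, k2 = 1.0, 1.0          # spring stiffnesses
c = 0.1                    # light damping on each mass
Ts = 0.1                   # sampling time

# Continuous-time state: [p1, v1, p2, v2] (positions and velocities).
# Spring k1 ties mass 1 to the wall; spring k2 connects mass 1 and mass 2.
Ac = np.array([
    [0.0,            1.0,    0.0,    0.0],
    [-(k1 + k2)/m1, -c/m1,   k2/m1,  0.0],
    [0.0,            0.0,    0.0,    1.0],
    [k2/m2,          0.0,   -k2/m2, -c/m2],
])
Bc = np.array([[0.0], [1.0/m1], [0.0], [0.0]])   # force applied to mass 1
Cc = np.array([[1.0, 0.0, 0.0, 0.0],
               [0.0, 0.0, 1.0, 0.0]])            # measured positions
Dc = np.zeros((2, 1))

# Zero-order-hold discretization, producing matrices of the form used in Equation (1).
Ad, Bd, Cd, Dd, _ = cont2discrete((Ac, Bc, Cc, Dc), Ts, method="zoh")
print("A =\n", Ad, "\nB =\n", Bd)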
Based on Theorem 1, the optimization problem provided in Equations (17)–(20) is solved using the YALMIP package [
35], while the final observer gain matrix
and controller gain matrix
are:
Note that we are dealing with an online LMI optimization problem to minimize the upper bound of the cost function while managing disturbances, sensor/actuator faults, and constraints. The provided gains
and
, shown above, are the obtained gains at the instant
t = 60 s. However, these two gains are nearly identical to the gains at the instant
t = 40 s, when the system states have almost converged to zero. Additionally, the simulation results are presented in
Figure 2,
Figure 3,
Figure 4,
Figure 5 and
Figure 6.
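As a purely illustrative companion to these figures, the Python sketch below closes the loop on a hypothetical discretized two-mass-spring model using stand-in gains (a discrete LQR controller and a dual Riccati observer) and injects an additive actuator fault. It is not the RFTPC of Theorem 1 and it does not estimate the faults; the gains, fault profile, and initial state are assumptions made only to illustrate the simulation setup.

import numpy as np
from scipy.signal import cont2discrete
from scipy.linalg import solve_discrete_are

# Hypothetical two-mass-spring model (same placeholder parameters as the sketch above).
m1 = m2 = k1 = k2 = 1.0; c = 0.1; Ts = 0.1
Ac = np.array([[0.0, 1.0, 0.0, 0.0],
               [-(k1 + k2)/m1, -c/m1, k2/m1, 0.0],
               [0.0, 0.0, 0.0, 1.0],
               [k2/m2, 0.0, -k2/m2, -c/m2]])
Bc = np.array([[0.0], [1.0/m1], [0.0], [0.0]])
Cc = np.array([[1.0, 0.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0]])
Ad, Bd, Cd, _, _ = cont2discrete((Ac, Bc, Cc, np.zeros((2, 1))), Ts, method="zoh")

# Stand-in gains: discrete LQR state feedback (weights 100I and 10I as in the text)
# and a Luenberger observer gain from the dual Riccati equation.
P = solve_discrete_are(Ad, Bd, 100*np.eye(4), 10*np.eye(1))
K = np.linalg.solve(10*np.eye(1) + Bd.T @ P @ Bd, Bd.T @ P @ Ad)      # u = -K x_hat
Po = solve_discrete_are(Ad.T, Cd.T, np.eye(4), np.eye(2))
L = np.linalg.solve(Cd @ Po @ Cd.T + np.eye(2), Cd @ Po @ Ad.T).T     # correction gain

x = np.array([1.0, 0.0, -0.5, 0.0])      # hypothetical initial state
x_hat = np.zeros(4)
for k in range(600):                     # 60 s at Ts = 0.1 s
    fa = 0.2 if 100 <= k < 300 else 0.0  # hypothetical additive actuator fault
    u = -(K @ x_hat).item()
    y = Cd @ x
    x = Ad @ x + Bd.flatten() * (u + fa)                       # plant sees the faulty input
    x_hat = Ad @ x_hat + Bd.flatten() * u + L @ (y - Cd @ x_hat)
print("final state norm:", np.linalg.norm(x))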
Figure 2a,b illustrate the states of the system and their estimates when the system is subject to actuator faults, sensor faults, and external disturbance at the same time. The states of the system converge toward zero, which means the proposed controller successfully regulates the system to its equilibrium point.
Additionally, the results shown in
Figure 2a,b demonstrate the stability of the closed-loop system under sensor/actuator faults and external disturbance, while the constructed observer shows outstanding performance in estimating the actuator and sensor faults. It can also be seen that the proposed observers are able to track and estimate the nominal states of the system with only a minor error within the first 10 s. Furthermore, the influence of the sensor/actuator faults is very limited, since the proposed method drives all states to zero within the first 10 s.
This can be verified from
Figure 3, where the observer-based predictive control successfully kept the state errors (ex) between the nominal model states and the faulty states within an acceptable peak norm (a maximum of 0.1226 within the first 10 s for some states). Notice that these estimation-error results were achieved while external disturbance, sensor faults, and actuator faults were applied to the system. The synthesized fault-tolerant model predictive controller successfully compensated for the system faults with acceptable accuracy.
Figure 4,
Figure 5 and
Figure 6 illustrate the estimation performance for both sensor and actuator faults. The results indicate that the proposed methodology provides satisfactory fault estimation performance, maintaining the estimation error between the real and estimated faults within an acceptable peak norm (less than 0.078 within the first 10 s). Furthermore, the proposed observer provides superior actuator fault estimation results compared to the observer in [
19]. Note that the proposed methodology in [
19] only addressed the case of actuator faults, which is different from our proposed approach, where both sensor and actuator faults are considered. In conclusion, the proposed fault-tolerant predictive control scheme is able to predict and estimate the fault signals accurately.