
Detecting and Handling Cyber-Attacks in Model Predictive Control of Chemical Processes

1. Department of Chemical and Biomolecular Engineering, University of California, Los Angeles, CA 90095-1592, USA
2. Department of Electrical and Computer Engineering, Taif University, Taif 21974, Saudi Arabia
3. Department of Chemical Engineering and Materials Science, Wayne State University, Detroit, MI 48202, USA
4. Department of Electrical and Computer Engineering, University of California, Los Angeles, CA 90095-1592, USA
* Author to whom correspondence should be addressed.
Mathematics 2018, 6(10), 173; https://doi.org/10.3390/math6100173
Submission received: 24 August 2018 / Revised: 20 September 2018 / Accepted: 21 September 2018 / Published: 25 September 2018
(This article belongs to the Special Issue Mathematics and Engineering)

Abstract

Since industrial control systems are usually integrated with numerous physical devices, the security of control systems plays an important role in the safe operation of industrial chemical processes. However, due to the use of a large number of control actuators and measurement sensors and the increasing use of wireless communication, control systems are becoming increasingly vulnerable to cyber-attacks, which may spread rapidly and cause severe industrial incidents. To mitigate the impact of cyber-attacks in chemical processes, this work integrates a neural network (NN)-based detection method and a Lyapunov-based model predictive controller (LMPC) for a class of nonlinear systems. A chemical process example is used to illustrate the application of the proposed NN-based detection and LMPC methods to handle cyber-attacks.

1. Introduction

Recently, the security of process control systems has become crucially important, since control systems are vulnerable to cyber-attacks, i.e., series of computer actions that compromise the security of control systems (e.g., integrity, stability and safety) [1,2]. Since cyber-physical systems (CPS) and supervisory control and data acquisition (SCADA) systems are usually large-scale, geographically dispersed, life-critical systems in which embedded sensors and actuators are connected into a network to sense and control physical devices [3], a failure of cybersecurity can lead to unsafe process operation and potentially to catastrophic consequences in the chemical process industries, causing environmental damage, capital loss and human injuries. Among cyber-attacks, targeted attacks are severe threats to control systems because they are specifically designed to modify the control actions applied to a chemical process (for example, the Stuxnet worm aims to modify the data sent to a programmable logic controller [4]). Additionally, targeted attacks are usually stealthy and difficult to detect with classical detection methods, since they are designed based on known information about the control system (e.g., the process state measurements). Therefore, designing an advanced detection system (e.g., machine learning-based detection methods [5,6]) and a suitable optimal control scheme for nonlinear processes in the presence of targeted cyber-attacks is an important open issue.
Due to the rapid development of computer networks in CPS over the past two to three decades, the components (e.g., sensors, actuators, and controllers) of a large-scale process control system are now connected through wired/wireless networks, which makes these systems more vulnerable to cyber-attacks that can damage the operation of the physical layers in addition to the cyber layers. Additionally, since the development of most existing detection methods still depends partly on human analysis, the increasing amount of data and the design of stealthy cyber-attacks pose challenges to the development of timely detection methods with high detection accuracy. In this direction, the design of cyber-attacks, anomaly detection methods focusing on physical layers, and the corresponding resilient control methods have received considerable attention. A typical detection method [4] uses a model of the process and compares the model output predictions with the actual measured outputs. In [7], a dynamic watermarking method was proposed to detect cyber-attacks by injecting private excitation into the system. Moreover, four representative detection methods were summarized in [3]: Bayesian detection with binary hypothesis, weighted least squares, the $\chi^2$-detector based on Kalman filters, and quasi-fault detection and isolation methods.
Besides the detection of cyber-attacks, the design of resilient control schemes also plays an important role in operating a chemical process reliably under cyber-attacks. To guarantee process performance (e.g., robustness, stability and safety) and mitigate the impact of cyber-attacks, resilient state estimation and resilient control strategies have attracted considerable research interest. In [2,8], resilient estimators were designed to reconstruct the system states accurately. An event-triggered control system was proposed in [9] to tolerate denial-of-service (DoS) attacks without jeopardizing the stability of the closed-loop system.
On the other hand, as a widely-used advanced control methodology in industrial chemical plants, model predictive control (MPC) achieves optimal performance of multiple-input multiple-output processes while accounting for state and input constraints [10]. Based on Lyapunov methods (e.g., a Lyapunov-based control law), the Lyapunov-based model predictive control (LMPC) method was developed to ensure stability and feasibility in an explicitly-defined subset of the region of attraction of the closed-loop system [11,12]. Additionally, process operational safety can also be guaranteed via control Lyapunov-barrier function-based constraints in the framework of LMPC [13]. At this stage, however, the potential safety/stability problem in MPC caused by cyber-attacks has not been studied with the exception of a recent work that provides a quantitative framework for the evaluation of resilience of control systems with respect to various types of cyber-attacks [14].
Motivated by this, we develop an integrated data-based cyber-attack detection and model predictive control method for nonlinear systems subject to cyber-attacks. Specifically, a cyber-attack (e.g., a min-max cyber-attack) that aims to destabilize the closed-loop system by tampering with sensor measurements is considered and applied to the closed-loop process. Under such an attack, an MPC that does not account for the cyber-attack cannot ensure closed-loop stability. To detect potential cyber-attacks, we take advantage of machine learning methods, which are widely used in clustering, regression, and other applications such as model order reduction [15,16,17], to build a neural network (NN)-based detection system. First, the NN training dataset is obtained for three conditions: (1) the system without disturbances and cyber-attacks (i.e., the nominal system); (2) the system with only process disturbances; (3) the system with only cyber-attacks. Then, an NN detection model is trained off-line and used on-line to predict cyber-attacks. In addition, considering the classification accuracy of the NN, a sliding detection window is employed to reduce false cyber-attack alarms. Finally, a Lyapunov-based model predictive control (LMPC) method that utilizes the state measurements from secure, redundant sensors is developed to reduce the impact of cyber-attacks and re-stabilize the closed-loop system in finite time.
The rest of the paper is organized as follows: in Section 2, the class of nonlinear systems considered and the stabilizability assumptions are given. In Section 3, we introduce the min-max cyber-attack and develop an NN-based detection system and a Lyapunov-based model predictive controller that guarantees recursive feasibility and closed-loop stability under sample-and-hold implementation within an explicitly characterized set of initial conditions. In Section 4, a nonlinear chemical process example is used to demonstrate the applicability of the proposed cyber-attack detection and control method.

2. Preliminaries

2.1. Notation

Throughout the paper, $|\cdot|$ denotes the Euclidean norm of a vector and $|\cdot|_Q$ denotes a weighted Euclidean norm of a vector (i.e., $|x|_Q^2 = x^T Q x$, where $Q$ is a positive definite matrix). $x^T$ denotes the transpose of $x$. $\mathbb{R}_+$ denotes the set $[0, \infty)$. The notation $L_f V(x)$ denotes the standard Lie derivative $L_f V(x) := \frac{\partial V(x)}{\partial x} f(x)$. For given positive real numbers $\beta$ and $\epsilon$, $B_\beta(\epsilon) := \{x \in \mathbb{R}^n \mid |x - \epsilon| < \beta\}$ is an open ball around $\epsilon$ with radius $\beta$. Set subtraction is denoted by "∖", i.e., $A \setminus B := \{x \in \mathbb{R}^n \mid x \in A, x \notin B\}$. $\lceil x \rceil$ maps $x$ to the least integer greater than or equal to $x$, and $\lfloor x \rfloor$ maps $x$ to the greatest integer less than or equal to $x$. A function $f(\cdot)$ is of class $C^1$ if it is continuously differentiable in its domain. A continuous function $\alpha : [0, a) \to [0, \infty)$ is said to belong to class $\mathcal{K}$ if it is strictly increasing and is zero only when evaluated at zero.

2.2. Class of Systems

The class of continuous-time nonlinear systems considered is described by the following state-space form:
$\dot{x} = f(x) + g(x)u + d(x)w, \quad x(t_0) = x_0$  (1)
where $x \in \mathbb{R}^n$ is the state vector, $u \in \mathbb{R}^m$ is the manipulated input vector, and $w \in W$ is the disturbance vector, where $W := \{w \in \mathbb{R}^q \mid |w| \le \theta, \ \theta \ge 0\}$. The control action constraint is defined by $u \in U = \{u_{min} \le u \le u_{max}\} \subset \mathbb{R}^m$, where $u_{min}$ and $u_{max}$ represent the minimum and maximum allowable input value vectors, respectively. $f(\cdot)$, $g(\cdot)$ and $d(\cdot)$ are sufficiently smooth vector and matrix functions of dimensions $n \times 1$, $n \times m$ and $n \times q$, respectively. Without loss of generality, the initial time $t_0$ is taken to be zero ($t_0 = 0$), and it is assumed that $f(0) = 0$; thus, the origin is a steady-state of the system of Equation (1) with $w(t) \equiv 0$ (i.e., $(x_s^*, u_s^*) = (0, 0)$). Throughout, we assume that every state is measured by multiple sensors that are isolated from one another, so that if one sensor measurement is tampered with by a cyber-attack, a secure channel can still deliver the correct measurements of $x(t)$ to the controller. This can be viewed as having secure, redundant sensors, or an alternative secure network for transmitting the sensor measurements. If this assumption does not hold (i.e., no secure sensors are available), then after a cyber-attack is detected the system must either be shut down or operated open-loop thereafter using an accurate process model.
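The dynamics of Equation (1) can be integrated numerically; the following minimal sketch uses forward-Euler integration for a scalar state, with illustrative choices of $f$, $g$ and $d$ that are assumptions, not the paper's process:

```python
def simulate(f, g, d, x0, u_traj, w_traj, dt):
    """Forward-Euler integration of x_dot = f(x) + g(x)*u + d(x)*w
    for a scalar state; f, g, d are user-supplied functions."""
    x, traj = x0, [x0]
    for u, w in zip(u_traj, w_traj):
        x = x + dt * (f(x) + g(x) * u + d(x) * w)
        traj.append(x)
    return traj

# Illustrative choices (not the paper's process): f(x) = -x, g(x) = 1, d(x) = 0.1
traj = simulate(lambda x: -x, lambda x: 1.0, lambda x: 0.1,
                x0=1.0, u_traj=[0.0] * 100, w_traj=[0.0] * 100, dt=0.05)
# With u = w = 0 and f(0) = 0, the origin is a steady state and the state decays
```

With the input and disturbance set to zero, the trajectory decays toward the origin, consistent with the steady-state assumption above.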

2.3. Stabilizability Assumptions and Lyapunov-Based Control

Consider the nominal system of Equation (1) with $w(t) \equiv 0$. We first assume that there exists a stabilizing feedback control law $u = \Phi(x) \in U$ such that the origin of the nominal system of Equation (1) can be rendered asymptotically stable for all $x \in D_1 \subset \mathbb{R}^n$, where $D_1$ is an open neighborhood of the origin, in the sense that there exists a positive definite $C^1$ control Lyapunov function $V$ that satisfies the small control property and the following inequalities:
$\alpha_1(|x|) \le V(x) \le \alpha_2(|x|),$  (2a)

$\dfrac{\partial V(x)}{\partial x} F(x, \Phi(x), 0) \le -\alpha_3(|x|),$  (2b)

$\left|\dfrac{\partial V(x)}{\partial x}\right| \le \alpha_4(|x|)$  (2c)
where $\alpha_j(\cdot)$, $j = 1, 2, 3, 4$ are class $\mathcal{K}$ functions. $F(x, u, w)$ is used to represent the system of Equation (1) (i.e., $F(x, u, w) = f(x) + g(x)u + d(x)w$).
An example of a feedback control law that is continuous for all x in a neighborhood of the origin and renders the origin asymptotically stable is the following control law [18]:
$\varphi_i(x) = \begin{cases} -\dfrac{p + \sqrt{p^2 + |q|^4}}{|q|^2}\, q_i, & \text{if } q \ne 0 \\ 0, & \text{if } q = 0 \end{cases}$  (3a)

$\Phi_i(x) = \begin{cases} u_i^{min}, & \text{if } \varphi_i(x) < u_i^{min} \\ \varphi_i(x), & \text{if } u_i^{min} \le \varphi_i(x) \le u_i^{max} \\ u_i^{max}, & \text{if } \varphi_i(x) > u_i^{max} \end{cases}$  (3b)
where $p$ denotes $L_f V(x)$ and $q$ denotes $(L_g V(x))^T = [L_{g_1} V(x) \cdots L_{g_m} V(x)]^T$. $\varphi_i(x)$ of Equation (3a) represents the $i$th component of the control law $\Phi(x)$ before accounting for saturation of the control action at the input bounds, and $\Phi_i(x)$ of Equation (3b) represents the $i$th component of the saturated control law $\Phi(x)$ that accounts for the input constraints $u \in U$. Based on the controller $\Phi(x)$ that satisfies Equation (2), the set of initial conditions from which $\Phi(x)$ can stabilize the origin of the input-constrained system of Equation (1) is characterized as $\phi_n = \{x \in \mathbb{R}^n \mid \dot{V} + \kappa V(x) \le 0,\ u = \Phi(x) \in U,\ \kappa > 0\}$. Additionally, we define a level set of $V(x)$ inside $\phi_n$ as $\Omega_\rho := \{x \in \phi_n \mid V(x) \le \rho\}$, which represents a stability region of the closed-loop system of Equation (1).
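As a concrete illustration of Equations (3a) and (3b), the sketch below implements Sontag's formula with input saturation for an assumed scalar plant $\dot{x} = x + u$ with $V(x) = x^2/2$ (so $p = L_f V = x^2$ and $q = L_g V = x$); the plant and input bounds are illustrative assumptions, not the paper's example:

```python
import math

def sontag(p, q, u_min, u_max):
    """Sontag's universal formula (Eq. 3a) with input saturation (Eq. 3b)
    for a single-input system; p = LfV(x), q = LgV(x)."""
    if q == 0.0:
        phi = 0.0
    else:
        phi = -(p + math.sqrt(p**2 + abs(q)**4)) / abs(q)**2 * q
    return min(max(phi, u_min), u_max)

# Illustrative unstable scalar plant: x_dot = x + u, V(x) = x^2/2
x, dt = 2.0, 0.01
for _ in range(1000):
    u = sontag(p=x**2, q=x, u_min=-10.0, u_max=10.0)
    x += dt * (x + u)
# Inside the input bounds the closed loop reduces to x_dot = -sqrt(2)*x
```

Inside the bounds, the formula yields $u = -(1+\sqrt{2})x$ for this plant, so the simulated state decays to the origin.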

3. Cyber-Attack and Detection Methodology

From the perspective of process control systems, cyber-attacks are malicious signals that can compromise actuators, sensors or their communication networks. Specifically, among sensor cyber-attacks, DoS attacks, replay attacks and deception attacks are the three most common and the most easily implemented by attackers [5]. On the other hand, since stealthy cyber-attacks are designed to damage the performance of CPS (e.g., stability and safety), it is imperative to develop more reliable detection and control methods that can detect, locate and mitigate cyber-attacks in a timely fashion and keep the damage within a tolerable limit.
In this section, the min-max cyber-attack, designed to damage closed-loop stability of the system of Equation (1), is first introduced. Subsequently, a general model-based detection method [4] and the corresponding stealthy cyber-attacks that can evade such detection are presented. Finally, to better detect different types of cyber-attacks, a data-based detection scheme that utilizes machine learning methods is developed, together with a sliding detection window.

3.1. Min-Max Cyber-Attack

In this subsection, we first consider a deception sensor cyber-attack, in which the minimum or maximum allowable sensor measurement values are fed into the process control system (e.g., a Lyapunov-based control system with a stability region $\Omega_\rho$ defined by a level set of the Lyapunov function $V(x)$) to drive the closed-loop state away from its expected values and ultimately ruin the stability of the closed-loop system. Since for any $x \in \Omega_\rho$ there exists a feasible control action $u = \Phi(x)$ such that $\dot{V} < 0$, closed-loop stability is maintained within the stability region $\Omega_\rho$ under $\Phi(x)$. Assuming that attackers know the stability region of the system of Equation (1) in advance and have access to some of the sensors (but not all), then, to remain undetectable by a simple stability region-based detection method (i.e., one that flags a cyber-attack if the state leaves the stability region), the min-max cyber-attack is designed with the following form, such that the fake sensor measurements remain inside $\Omega_\rho$:
$\bar{x} = \arg\max_{x \in \mathbb{R}^n} \{V(x) \le \rho\}$  (4)
where $\bar{x}$ is the tampered sensor measurement. Since the controller needs access to true state measurements to maintain closed-loop stability in a state feedback control system, wrong state measurements under cyber-attacks can affect the control actions and eventually drive the state away from its set-point. In the section “Application to a chemical process example”, it is shown that if attackers apply a min-max cyber-attack to safety-critical sensors (e.g., temperature or pressure sensors in a chemical reactor) in process control systems, closed-loop stability may not be maintained (i.e., the closed-loop state leaves $\Omega_\rho$) and the system may have to be shut down.
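Since the argmax form above only requires the fake measurement to lie on the boundary of $\Omega_\rho$, one concrete realization is to scale a chosen attack direction onto the level set $V(x) = \rho$; the quadratic Lyapunov function, the matrix $P$ and the attack direction below are illustrative assumptions:

```python
import math

def boundary_measurement(P, rho, direction):
    """Scale a chosen 2-D attack direction so the fake measurement x_bar
    lies exactly on the boundary of Omega_rho = {x : x^T P x <= rho}.
    P is a 2x2 positive-definite matrix given as nested lists."""
    dx, dy = direction
    v = P[0][0]*dx*dx + (P[0][1] + P[1][0])*dx*dy + P[1][1]*dy*dy
    s = math.sqrt(rho / v)          # scaling factor so V(s * direction) = rho
    return (s * dx, s * dy)

P = [[1.0, 0.0], [0.0, 2.0]]        # illustrative positive-definite matrix
x_bar = boundary_measurement(P, rho=1.0, direction=(1.0, 1.0))
V_bar = x_bar[0]**2 + 2.0 * x_bar[1]**2   # evaluates to rho: on the boundary
```

The tampered measurement sits exactly on the boundary of the stability region, so a simple "state outside $\Omega_\rho$" check does not flag it.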

3.2. Model-Based Detection and Stealthy Cyber-Attack

Based on the known process model of Equation (1), a cumulative sum (CUSUM) statistic detection method [4] can be developed to minimize the detection time when a cyber-attack occurs. Specifically, the CUSUM statistic method detects cyber-attacks by calculating the cumulative sum of the deviation between expected and measured states. The method is developed by the following equations:
$S(k) = \left(S(k-1) + z(k)\right)^+, \quad S(0) = 0$  (5a)

$D(S(k)) = \begin{cases} 1, & \text{if } S(k) > S_{TH} \\ 0, & \text{otherwise} \end{cases}$  (5b)
where $S(k)$ is the nonparametric CUSUM statistic and $S_{TH}$ is the detection threshold. $(S)^+ = S$ if $S \ge 0$ and $(S)^+ = 0$ otherwise. $D$ is the detection indicator: $D = 1$ indicates that a cyber-attack is confirmed, while $D = 0$ indicates no cyber-attack. $z(k)$ is the deviation between the expected state $\tilde{x}(t_k)$ and the measured state $x(t_k)$ at time $t = t_k$, i.e., $z(k) := |\tilde{x}(t_k) - x(t_k)| - b$, where $\tilde{x}(t_k)$ is derived using the known process model together with the state and the control action at $t = t_{k-1}$, and $b$ is a small positive constant introduced to reduce the false alarm rate due to disturbances.
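The CUSUM recursion and detection rule above can be sketched as follows; the expected/measured sequences, the bias $b$ and the threshold $S_{TH}$ are illustrative numbers:

```python
def cusum_detect(expected, measured, b, S_TH):
    """Nonparametric CUSUM detector: accumulate the deviation
    z(k) = |x_tilde - x| - b and flag once S(k) > S_TH."""
    S, flags = 0.0, []
    for xt, xm in zip(expected, measured):
        z = abs(xt - xm) - b
        S = max(S + z, 0.0)         # the (.)^+ operator
        flags.append(1 if S > S_TH else 0)
    return flags

# Illustrative numbers: model and sensor agree for 5 samples, then an
# attack offsets the measurement by 1.0
expected = [0.0] * 10
measured = [0.0] * 5 + [1.0] * 5
flags = cusum_detect(expected, measured, b=0.1, S_TH=2.0)
# flags stay 0 while the statistic builds up, then turn 1 once S(k) > S_TH
```

Note the detection delay: the alarm is raised only after the cumulative deviation crosses the threshold, not at the first attacked sample.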
With a carefully selected $S_{TH}$, the model-based detection method can detect many sensor cyber-attacks efficiently. However, it may be evaded and become invalid for stealthy cyber-attacks if attackers know more about the system (e.g., the system model and the principles of the detection method). For example, three advanced stealthy cyber-attacks were proposed in [4] that damage the system without triggering the threshold of the model-based detection method. Specifically, a surge cyber-attack is designed to maximize the damage for the first few steps (similar to min-max cyber-attacks) and, when $S(k)$ reaches $S_{TH}$, to switch to cyber-attacks with small perturbations for the remainder of the time. The form of a surge cyber-attack is given by the following equation:
$x(t_k) = \begin{cases} x_{min}(t_k), & \text{if } S(k) \le S_{TH} \\ \tilde{x}(t_k) - \left|S_{TH} + b - S(k-1)\right|, & \text{otherwise} \end{cases}$  (6)
The above surge cyber-attack is able to maintain $S(k)$ within its threshold and is therefore undetectable by the above detection method. In this case, the defenders should either develop more advanced detection methods for stealthy cyber-attacks (i.e., it becomes an interactive decision-making process between an attacker and a defender [19]), or develop a detection method from another perspective, for example, a data-based method. Since the purpose of any type of stealthy cyber-attack is to change the normal operation and degrade the performance of the system of Equation (1), the dynamic operation of the system of Equation (1) (e.g., its dynamic trajectories in state-space) under cyber-attacks becomes different from that of the nominal system. This deviation in the data can be regarded as an intrinsic indicator for the detection of cyber-attacks. In this direction, a data-based detection system is developed via machine learning methods in the next subsection.
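A surge attack of this kind can be sketched as follows: the attacker feeds an extreme value while the CUSUM statistic allows it, then switches to a value that keeps $S(k)$ pinned at the threshold. The sketch is a scalar version with a constant model prediction and illustrative numbers, under the assumption that the stealthy value is chosen so that $z(k) = S_{TH} - S(k-1)$:

```python
def surge_attack(x_model, x_min, b, S_TH, n_steps):
    """Surge attack against a CUSUM detector: feed the extreme value
    x_min while the statistic allows it, then feed a value that pins
    S(k) at the threshold S_TH (illustrative scalar version; x_model
    is the model-predicted state, assumed constant here)."""
    S, attacked, stats = 0.0, [], []
    for _ in range(n_steps):
        x_a = x_min
        z = abs(x_model - x_a) - b
        if max(S + z, 0.0) > S_TH:              # the surge would trip the alarm:
            x_a = x_model - (S_TH + b - S)      # switch to the stealthy value
            z = abs(x_model - x_a) - b          # now z = S_TH - S
        S = max(S + z, 0.0)
        attacked.append(x_a)
        stats.append(S)
    return attacked, stats

attacked, stats = surge_attack(x_model=0.0, x_min=-5.0, b=0.1, S_TH=2.0, n_steps=10)
# the statistic is pinned at S_TH (within floating-point error): no alarm
```

The statistic never meaningfully exceeds the threshold, so the model-based detector of the previous subsection stays silent while the measurements remain falsified.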

3.3. Detection via Machine Learning Techniques

Machine learning has a wide range of applications in classification, regression, and clustering problems. To detect cyber-attacks, classification methods can be utilized to determine whether there is a cyber-attack on the system of Equation (1) or not. The data-based learning problems are usually categorized into unsupervised learning and supervised learning.
Unsupervised learning (e.g., k-means clustering) uses unlabeled data to derive a model that can split the data into different categories. Supervised learning, on the other hand, aims to develop a function that maps an input to an output based on a labeled dataset (input-output pairs). There are two types of supervised learning tools: (1) classification tools (e.g., k-nearest neighbor (k-NN), support vector machines (SVM), random forests, and neural networks) develop a function from labeled training datasets to predict the class of a new set of data that was not used in the training stage; (2) regression tools (e.g., linear regression, support vector regression, etc.) aim to predict the outcome of an event based on the relationship between variables obtained from the training datasets (labeled input-output pairs) [20]. Since supervised learning works with labeled training data, we utilize a neural network (NN) algorithm to predict whether the system of Equation (1) is operating nominally, under disturbances, or under cyber-attacks. Subsequently, a Lyapunov-based model predictive controller is proposed to stabilize the closed-loop system in both the absence and the presence of cyber-attacks.

3.4. NN-Based Detection System

Since the evolution of the closed-loop state from the initial condition x ( 0 ) = x 0 Ω ρ is determined by both the nonlinear system model of Equation (1) and the design of process control systems, it is difficult to distinguish normal operation from the operation under cyber-attacks. Moreover, even if a detection method is developed for a specific cyber-attack (e.g., min-max cyber-attack), the detection strategy is not guaranteed to identify a different type of cyber-attack. Motivated by these concerns, this work proposes a data-based detection system for different types of cyber-attacks by using machine learning methods.
As a widely-used machine learning method, neural networks build a general class of nonlinear functions from input variables to output variables. The basic structure of a feed-forward multiple-input-single-output neural network with one hidden layer is given in Figure 1, where $Nu_j$, $j = 1, 2, \ldots, n$ denotes the input variables in the input layer, $\theta_{1i}$, $i = 1, 2, \ldots, h$ denotes the neurons in the hidden layer, and $Ny$ denotes the output in the output layer. Specifically, the hidden neurons $\theta_{1i}$ and the output $Ny$ (i.e., the classification result) are obtained by the following equations, respectively [21]:
$\theta_{1i} = \sigma_1\!\left(\sum_{j=1}^{n} Nw_{ij}^{(1)} Nu_j + Nw_{i0}^{(1)}\right)$  (7)

$Ny = \sigma_2\!\left(\sum_{j=1}^{h} Nw_j^{(2)} \theta_{1j} + Nw_0^{(2)}\right)$  (8)
where $\sigma_1, \sigma_2$ are nonlinear activation functions, $Nw_{ij}^{(1)}$ and $Nw_j^{(2)}$ are weights, and $Nw_{i0}^{(1)}$, $Nw_0^{(2)}$ are biases. For simplicity, the input vector $Nu$ will be used to denote all the inputs $Nu_j$, and the weight matrix $Nw$ will be used to represent all the weights and biases in Equations (7) and (8). The neurons in the hidden layer receive the weighted sum of the inputs and use the activation function $\sigma_1$ (e.g., the ReLU function $\sigma(x) = \max(0, x)$ or the sigmoid function $\sigma(x) = 1/(1 + e^{-x})$) to introduce the nonlinearity that keeps the NN from being a simple linear combination of the inputs. The output neuron generates the class label via a linear combination of the hidden neurons and an activation function $\sigma_2$ (e.g., the sigmoid function for two classes, or the softmax function $\sigma_i(x) = e^{x_i} / \sum_{k=1}^{K} e^{x_k}$ for multiple classes, where $K$ is the number of classes).
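A forward pass through the one-hidden-layer network of Equations (7) and (8) can be sketched as follows, using ReLU hidden activations and a sigmoid output (one of the activation choices mentioned above); the network size and weights are arbitrary illustrative values:

```python
import math

def nn_forward(u, W1, b1, W2, b2):
    """Forward pass of a one-hidden-layer network: ReLU hidden neurons
    (Eq. 7) followed by a sigmoid output neuron (Eq. 8)."""
    hidden = [max(0.0, sum(wij * uj for wij, uj in zip(row, u)) + bi)
              for row, bi in zip(W1, b1)]                        # Eq. (7)
    s = sum(wj * hj for wj, hj in zip(W2, hidden)) + b2          # Eq. (8)
    return 1.0 / (1.0 + math.exp(-s))                            # sigmoid output

# Tiny illustrative network: 2 inputs, 3 hidden neurons, 1 output
W1 = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]
b1 = [0.0, 0.1, -0.1]
W2 = [0.7, -0.5, 0.2]
b2 = 0.05
y = nn_forward([1.0, 2.0], W1, b1, W2, b2)   # a class probability in (0, 1)
```

For a two-class problem, thresholding the sigmoid output at 0.5 yields the predicted class label.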
Given a set of training data including the input vectors $Nu_i$, $i = 1, 2, \ldots, N_T$ and the corresponding classification labels (i.e., the target vectors $Nt_i$), the NN model is trained by minimizing the following error (i.e., loss) function:

$E(Nw) = \dfrac{1}{2} \sum_{i=1}^{N_T} \left|Ny_i(Nu_i, Nw) - Nt_i\right|^2$  (9)
where $Ny_i(Nu_i, Nw)$ is the predicted class for the input $Nu_i$ under $Nw$. The above nonlinear optimization problem is solved using the stochastic gradient descent (SGD) method, in which the backpropagation method is utilized to calculate the gradient of $E(Nw)$. The weight matrix $Nw$ is updated by the following equation:

$Nw := Nw - \eta \nabla E(Nw)$  (10)
where $\eta$ is the learning rate, which controls the speed of convergence. Additionally, to avoid over-fitting during the training process, k-fold cross-validation is employed: the original dataset is randomly partitioned into k subsets, of which k − 1 are used for training and the remaining one for validation, and early stopping is activated once the error on the validation set stops decreasing.
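The gradient-descent update of Equation (10) can be illustrated on a scalar quadratic loss, a stand-in for the network loss of Equation (9); the loss, learning rate and iteration count are illustrative assumptions:

```python
def gradient_descent(grad, w0, eta, n_iter):
    """Repeated application of the update w := w - eta * grad(E)(w),
    shown here on the scalar quadratic loss E(w) = (w - 3)^2, an
    illustrative stand-in for the network loss."""
    w = w0
    for _ in range(n_iter):
        w = w - eta * grad(w)
    return w

# grad E(w) = 2*(w - 3); the iteration converges toward the minimizer w = 3
w_star = gradient_descent(grad=lambda w: 2.0 * (w - 3.0), w0=0.0, eta=0.1, n_iter=100)
```

In SGD proper, the same update is applied using the gradient evaluated on a single sample (or mini-batch) rather than the full dataset, which is what makes it practical for large training sets.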
Finally, the classification accuracy of the validation dataset is utilized to demonstrate the performance of the neural network since the validation dataset is independent of the training dataset and is not used in training the NN model. Specifically, the classification accuracy (i.e., the test accuracy) of the trained NN model is obtained by the following equation:
$N_{acc} = \dfrac{n_c}{n_{val}}$  (11)
where n c is the number of data samples with correct predicted classes, and n v a l is the total number of data samples in the validation dataset. In general, the NN performance depends on many factors, e.g., the size of dataset, the number of hidden layers and nodes, and the intensity and the amount of disturbance applied [22,23,24]. In Remark 1, the method of determining the number of layers and nodes is introduced.
In this paper, the NN is developed to derive a model M that distinguishes three classes: the nominal closed-loop system, the closed-loop system with disturbances, and the closed-loop system under cyber-attacks. A large dataset of time-varying states for various initial conditions (i.e., dynamic trajectories) for the above three cases is used as the input to the neural network, and the output of the neural network is the predicted class. Since the feed-forward NN is a static model with a fixed input dimension (i.e., a fixed time length) while the detection method should be applied during the dynamic operation of the system of Equation (1), multiple NN models with inputs of various sizes (i.e., various time lengths) are used for real-time detection of cyber-attacks, until the time length of the available data since the beginning of operation reaches the time length preferred for the remainder of the operating time. Specifically, given a training dataset of time-series state vectors (i.e., closed-loop trajectories) $Nu \in \mathbb{R}^{n \times T}$, where $n$ is the number of states and $T$ is the number of sampling steps of each trajectory, the NN models are obtained and applied as follows: (1) the NNs are trained with data corresponding to time lengths from the initial time to $T$ sampling steps, in intervals of $N_a$ sampling steps; i.e., the $i$th NN model $M_i$ is trained using data from $t = 0$ to $t = i N_a$, where $i = 1, 2, \ldots, T/N_a$ and $T$ is an integer multiple of $N_a$; (2) when incorporating the NN-based detection system in MPC, real-time state measurement data can be readily fed into the corresponding NN model $M_i$ to check whether a cyber-attack has occurred so far.
Remark 1.
With an appropriate structure (i.e., number of layers and hidden neurons) of the neural network, the weight matrix $Nw$ is calculated by Equation (10) and is then utilized to derive the classification accuracy of Equation (11). However, in general, there is no systematic method for determining the structure of a neural network, since it depends strongly on the number of training data samples and on the complexity of the model needed for classification. Therefore, in practice, the neural network is initialized with one hidden layer containing a few hidden neurons. If the classification result is unsatisfactory, the number of hidden neurons is increased, and further layers with appropriate regularization are added to improve the performance.
Remark 2.
It is noted that the above classification accuracy of the NN model represents the ratio of the number of correct predictions to the total number of predictions for all classes. If we only consider the case of binary classification (i.e., whether the system is under cyber-attacks or not), sensitivity (also called recall or true positive rate) and specificity (also called true negative rate) are also useful measures. Specifically, sensitivity measures the proportion of actual cyber-attacks that are correctly identified as such, while specificity measures the proportion of actual non-cyber-attacks that are correctly identified as such. Therefore, in the presence of multiple types of cyber-attacks or disturbances, it becomes straightforward to learn the performance of the NN-based method to detect true cyber-attacks via sensitivity and specificity.

3.5. Sliding Detection Window

Since the classification accuracy of an NN is not perfect, false alarms may be triggered by a one-time detection (i.e., a non-cyber-attack case may be identified as a cyber-attack). In order to reduce the false alarm rate, a detection indicator $D_i$ generated by each sub-model $M_i$ and a sliding detection window of length $N_s$ are proposed as follows:

$D_i = \begin{cases} 1, & \text{if an attack is detected by } M_i \\ 0, & \text{if no attack is detected by } M_i \end{cases}$  (12)
Based on the detection indicator $D_i$ computed every $N_a$ sampling steps, the weighted sum $DI$ of the detection indicators within the sliding detection window shown in Figure 2 at $t = t_k = k\Delta$ is calculated as follows:

$DI = \sum_{j=(k-N_s+1)/N_a}^{k/N_a} \gamma^{\,k/N_a - j} D_j$  (13)
where $\gamma$ is a detection factor that gives more weight to recent detections within the sliding window, because the classification accuracy of the NN increases as more data is used for training. If $DI \ge D_{TH}$, where $D_{TH}$ is a threshold indicating a real cyber-attack in the closed-loop system, then the cyber-attack is confirmed and reported by the NN-based detection system; otherwise, the detection system remains silent and the sliding window is rolled forward by one sampling period. To balance false alarms and missed detections, the threshold $D_{TH}$ is determined via extensive closed-loop simulations under cyber-attacks so as to achieve a desired detection rate.
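The weighted indicator and threshold test can be sketched as follows; the indicator history, window parameters, $\gamma$ and $D_{TH}$ are illustrative assumptions (the exponent $k/N_a - j$ follows the weighting described above, where recent detections receive more weight for $\gamma < 1$):

```python
def weighted_indicator(D, k, N_s, N_a, gamma):
    """Weighted sum DI of the detection indicators within the sliding
    window: D[j] is the indicator from sub-model M_j, the window covers
    the last N_s sampling steps, and gamma < 1 weights recent detections
    (small exponent k/N_a - j) more heavily."""
    lo, hi = (k - N_s + 1) // N_a, k // N_a
    return sum(gamma ** (k // N_a - j) * D[j] for j in range(lo, hi + 1))

# Illustrative indicator history: sub-models 0..9, attack flagged from model 7 on
D = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1]
DI = weighted_indicator(D, k=9, N_s=6, N_a=1, gamma=0.8)
attack_confirmed = DI >= 2.0        # D_TH = 2.0, an assumed threshold
```

A single isolated indicator of 1 contributes at most weight 1 and cannot cross this threshold alone, which is precisely how the window suppresses one-time false alarms.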
Additionally, since there is no guaranteed feasible control action that can drive the state back towards the origin once the state of the system of Equation (1) is outside the stability region $\Omega_\rho$ (owing to the way $\phi_n$ and $\Omega_\rho$ are characterized), it is also necessary to check whether the state remains in $\Omega_\rho$, especially when cyber-attacks have occurred but have not yet been detected. Therefore, to prevent the system state from entering a region of state-space where closed-loop stability is not guaranteed, the boundedness of the state vector within the stability region is also checked, using the state measurement from the redundant, secure sensors, whenever $D_i = 1$. If the state $x$ has already left $\Omega_\rho$, closed-loop stability is no longer guaranteed, and further safety system components (e.g., physical safety devices) need to be activated to avoid dangerous operation [25]. However, if $x \in \Omega_\rho$, the state measurement is read from the redundant, secure sensors instead of the original sensors, to avoid deterioration of stability under the potential cyber-attack indicated by $D_i = 1$.
Remark 3.
The sliding window of length $N_s$ is employed to reduce the false alarm rate. Since the classification accuracy is not perfect, the idea behind the sliding detection window is that a cyber-attack is confirmed only if it has been detected several times in succession, rather than by a one-time detection. The window length $N_s$ balances detection efficiency against the false alarm rate. Specifically, a larger $N_s$ and a higher detection threshold $D_{TH}$ (where $DI \ge D_{TH}$ within the sliding detection window confirms a cyber-attack) lead to a longer detection time but a lower false alarm rate, while a smaller $N_s$ and a lower $D_{TH}$ have the opposite effect. Therefore, $N_s$ and $D_{TH}$ should be tuned carefully to achieve a balanced performance between detection efficiency and false alarm rate.
Remark 4.
The above supervised learning-based cyber-attack detection method is able to distinguish the normal operation of the system of Equation (1) from abnormal operation under cyber-attacks, provided that a large amount of labeled data is available for training. However, for unknown cyber-attacks that were never seen during training, detection is not guaranteed. Specifically, if there exists an unknown cyber-attack that is distinct from the trained cyber-attacks, the NN-based detection method may not be able to identify it as a cyber-attack. In this case, an unsupervised learning-based detection method may achieve better performance by clustering the unknown cyber-attack data into a new class. However, if the unknown cyber-attack shares similar properties (e.g., a similar attack mechanism) with a trained cyber-attack, the NN method may still be able to detect it and classify it into one of the available classes. For example, it is demonstrated in the section “Application to a chemical process example” that the unknown surge cyber-attack can still be detected by the NN-based detection system trained for min-max cyber-attacks, because of the similarity between these two cyber-attacks.
Remark 5.
Since different types of cyber-attacks may have different purposes, target sensors, and attack durations, the dynamic behavior of the closed-loop system varies with the cyber-attack, and this variation is ultimately reflected in the state data. Therefore, besides detecting cyber-attacks, the above NN-based detection method can also recognize the type of a cyber-attack by training the NN model with data of various types of cyber-attacks labeled as different classes. As a result, the NN model can not only detect the occurrence of a cyber-attack, but also identify its type, provided that data of that particular cyber-attack has been used for training.

4. Lyapunov-Based MPC (LMPC)

To cope with the threats of the above sensor cyber-attacks, a feedback control method that accounts for the corruption of some sensor measurements should be designed by defenders to mitigate the impact of cyber-attacks and still stabilize the system of Equation (1) at its steady-state. Based on the assumption of the existence of a Lyapunov function V ( x ) and a controller u = Φ ( x ) that satisfy Equation (2), the LMPC that utilizes the accurate measurement from redundant, secure sensors is proposed as the following optimization problem:
$$
\begin{aligned}
\mathcal{J} = \min_{u \in S(\Delta)} \ & \int_{t_k}^{t_{k+N}} L_t(\tilde{x}(t), u(t))\, dt && \text{(14a)} \\
\text{s.t.} \quad & \dot{\tilde{x}}(t) = f(\tilde{x}(t)) + g(\tilde{x}(t))\, u(t) && \text{(14b)} \\
& \tilde{x}(t_k) = x(t_k) && \text{(14c)} \\
& u(t) \in U, \ \forall\, t \in [t_k, t_{k+N}) && \text{(14d)} \\
& \dot{V}(x(t_k), u(t_k)) \le \dot{V}(x(t_k), \Phi(x(t_k))), \ \text{if } V(x(t_k)) > \rho_{min} && \text{(14e)} \\
& V(\tilde{x}(t)) \le \rho_{min}, \ \forall\, t \in [t_k, t_{k+N}), \ \text{if } V(x(t_k)) \le \rho_{min} && \text{(14f)}
\end{aligned}
$$
where x̃(t) is the predicted state trajectory, S(Δ) is the set of piecewise-constant functions with period Δ, and N is the number of sampling periods in the prediction horizon. V̇(x(t_k), u(t_k)) denotes the time derivative of V(x), i.e., (∂V/∂x)(f(x̃(t)) + g(x̃(t))u(t)). We assume that the states of the closed-loop system are measured at each sampling time and used as the initial condition of the LMPC optimization problem at the next sampling step. Specifically, based on the measured state x(t_k) at t = t_k, the above optimization problem is solved to obtain the optimal solution u*(t) over the prediction horizon t ∈ [t_k, t_{k+N}). Only the first control action, u*(t_k), is sent to the control actuators and applied over the next sampling period. Then, at the next sampling time t_{k+1} := t_k + Δ, the optimization problem is solved again, and the horizon is rolled forward by one sampling period.
In the optimization problem of Equation (14), the objective function of Equation (14a) is the integral of L_t(x̃(t), u(t)) over the prediction horizon, where L_t(x, u) is usually taken in quadratic form (i.e., L_t(x, u) = x^T R x + u^T Q u, where R and Q are positive definite matrices). The constraint of Equation (14b) is the nominal system of Equation (1) (i.e., w(t) ≡ 0), used to predict the evolution of the closed-loop state. Equation (14c) defines the initial condition of the nominal system of Equation (14b), and Equation (14d) imposes the input constraints over the prediction horizon. The constraint of Equation (14e) requires that V(x̃) decrease at t_k at least at the rate achieved under Φ(x) when V(x(t_k)) > ρ_min. However, once x(t_k) enters a small neighborhood around the origin, Ω_{ρ_min} := {x ∈ R^n | V(x) ≤ ρ_min}, in which V̇ is not required to be negative due to the sample-and-hold implementation of the LMPC, the constraint of Equation (14f) is activated to maintain the state inside Ω_{ρ_min} thereafter.
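The receding-horizon implementation above can be sketched in code. This is a minimal illustration on a scalar toy system x' = −x + u with V(x) = x², not the paper's CSTR; the grid-search "solver", the explicit law Φ(x) = −0.5x, and all numerical values are our assumptions (the paper solves Equation (14) with IPOPT).

```python
import numpy as np

def f(x, u):
    """Toy scalar dynamics x' = -x + u, standing in for Eq. (14b)."""
    return -x + u

def V(x):
    """Quadratic Lyapunov function."""
    return x**2

def Vdot(x, u):
    """dV/dx * f(x, u)."""
    return 2.0*x*f(x, u)

def lmpc_step(x0, Delta=0.1, N=5, rho_min=0.01, u_grid=np.linspace(-1, 1, 41)):
    """Solve a crude discretization of Eq. (14) by grid search; return u*(t_k)."""
    phi = -0.5*x0                  # assumed explicit stabilizing law Phi(x)
    best_u, best_J = None, np.inf
    for u in u_grid:
        # Contractive constraint (14e), enforced outside the terminal set
        if V(x0) > rho_min and Vdot(x0, u) > Vdot(x0, phi):
            continue
        # Predict with the nominal model (14b)-(14c) via forward Euler
        x, J, feasible = x0, 0.0, True
        for _ in range(N):
            J += (x**2 + 0.1*u**2)*Delta        # quadratic stage cost L_t
            x = x + Delta*f(x, u)
            if V(x0) <= rho_min and V(x) > rho_min:  # constraint (14f)
                feasible = False
                break
        if feasible and J < best_J:
            best_u, best_J = u, J
    return best_u

# Receding-horizon loop: apply only the first control action u*(t_k),
# then measure the state and re-solve at t_{k+1}.
x = 1.0
for _ in range(60):
    u = lmpc_step(x)
    x = x + 0.1*f(x, u)
```

After the loop, the state has been driven into a small neighborhood of the origin and is held there by the constraint mimicking Equation (14f).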
When a cyber-attack is detected (D_i = 1) but not yet confirmed (D_I < D_TH), the LMPC of Equation (14) uses the state measurement from redundant, secure sensors instead of the original sensors as the initial condition x(t_k) until the next detection instance. If the cyber-attack is subsequently confirmed (D_I ≥ D_TH), the misbehaving sensor is isolated, and the LMPC of Equation (14) uses the state measurement from the secure sensors instead of the compromised measurement as the initial condition x(t_k) for the remaining time of process operation. The structure of the entire cyber-attack detection and control system is shown in Figure 3.
If the cyber-attack is detected and confirmed before the closed-loop state is driven out of the stability region, it follows that the closed-loop state is always bounded in the stability region Ω ρ thereafter and ultimately converges to a small neighborhood Ω ρ m i n around the origin for any x 0 Ω ρ under the LMPC of Equation (14). The detailed proof can be found in [11]. An example trajectory is shown in Figure 4.
Remark 6.
It is noted that the speed of detection (which depends heavily on the size of the input data to the NN, the number of hidden layers, and the type of activation functions) plays an important role in stabilizing the closed-loop system of Equation (1), since operation under the LMPC of Equation (14) becomes unreliable once a cyber-attack occurs. In other words, if a cyber-attack is detected quickly, the LMPC can switch to redundant, secure sensors and still stabilize the system at the origin before the state leaves the stability region Ω_ρ. Additionally, the probability of closed-loop stability can be derived from the classification accuracy of the NN-based detection method and its activation frequency N_a. Specifically, given the classification accuracy p_nn ∈ [0, 1], if the NN-based detection system is activated every N_a = 1 sampling step, the probability that the cyber-attack is detected at each sampling step (i.e., D_i = 1) equals p_nn, which implies that the probability of closed-loop stability for x_0 ∈ Ω_ρ is no less than p_nn. Moreover, for safety reasons, the region of initial conditions can be chosen as a conservative sub-region Ω_{ρ_e} := {x ∈ R^n | V(x) ≤ ρ_e}, where ρ_e < ρ, inside the stability region, to avoid rapid divergence of the state under cyber-attacks and to improve closed-loop stability. For example, let ρ_e = max{V(x(t)) | V(x(t + Δ)) ≤ ρ, ∀ u ∈ U}, such that for any x(t_k) ∈ Ω_{ρ_e}, x(t_{k+1}) still stays in Ω_ρ despite a missed detection. In that case, the probability of closed-loop stability for x_0 ∈ Ω_{ρ_e} under the LMPC of Equation (14) reaches 1 − (1 − p_nn)², i.e., the probability that the cyber-attack is detected within two sampling periods.
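The probability bound at the end of this remark is simple arithmetic (assuming independent per-activation detections, as the remark implicitly does) and can be checked directly; the helper name below is ours.

```python
# P(at least one detection within k independent detection activations),
# given per-activation classification accuracy p_nn (Remark 6, N_a = 1).
def prob_detect_within(p_nn, k):
    return 1.0 - (1.0 - p_nn)**k

p1 = prob_detect_within(0.95, 1)   # one activation: just p_nn
p2 = prob_detect_within(0.95, 2)   # two activations: 1 - (1 - p_nn)^2
```

With p_nn = 0.95, allowing one extra sampling period of leeway (via the conservative sub-region Ω_{ρ_e}) raises the stability probability from 0.95 to 0.9975.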
Remark 7.
It is demonstrated in [11] that, in the presence of sufficiently small bounded disturbances (i.e., |w(t)| ≤ θ), closed-loop stability is still guaranteed for the system of Equation (1) under the sample-and-hold implementation of the LMPC of Equation (14) with a sufficiently small sampling period Δ. In this case, it is undesirable to treat a disturbance as a cyber-attack and trigger a false alarm. Therefore, the detection system should account for the disturbance case and be able to distinguish cyber-attacks from disturbances (i.e., the system with disturbances should be classified as a distinct class or treated as the nominal system).

5. Application to a Chemical Process Example

In this section, we utilize a chemical process example to illustrate the application of the proposed detection and control methods under potential cyber-attacks. Consider a well-mixed, non-isothermal continuous stirred tank reactor (CSTR) in which an irreversible first-order exothermic reaction A → B converts the reactant A to the product B. A heating jacket supplies heat to or removes heat from the reactor. The CSTR dynamic model, derived from material and energy balances, is given below:
$$\frac{dC_A}{dt} = \frac{F}{V_L}\left(C_{A0} - C_A\right) - k_0 e^{-E/RT} C_A \tag{15a}$$

$$\frac{dT}{dt} = \frac{F}{V_L}\left(T_0 - T\right) + \frac{-\Delta H}{\rho C_p}\, k_0 e^{-E/RT} C_A + \frac{Q}{\rho C_p V_L} \tag{15b}$$
where C_A is the concentration of reactant A in the reactor, T is the reactor temperature, Q denotes the heat supply/removal rate, and V_L is the volume of the reacting liquid in the reactor. The feed to the reactor contains the reactant A at concentration C_A0, temperature T_0, and volumetric flow rate F. The liquid has a constant density ρ and heat capacity C_p. k_0, E, and ΔH are the pre-exponential factor, the activation energy, and the enthalpy of the reaction, respectively. Process parameter values are listed in Table 1. The control objective is to operate the CSTR at the equilibrium point (C_As, T_s) = (0.57 kmol/m³, 395.3 K) by manipulating the heat input rate ΔQ = Q − Q_s and the inlet concentration of species A, ΔC_A0 = C_A0 − C_A0s. The input constraints are |ΔQ| ≤ 0.0167 kJ/min and |ΔC_A0| ≤ 1 kmol/m³.
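The material and energy balances of Equation (15) translate directly into a right-hand-side function. The parameter values below are placeholders for illustration only; the actual values are in Table 1 of the paper and are not reproduced here.

```python
import math

def cstr_rhs(CA, T, CA0, Q, p):
    """Return (dCA/dt, dT/dt) from the CSTR material and energy balances."""
    r = p["k0"] * math.exp(-p["E"] / (p["R"] * T)) * CA   # reaction rate
    dCA = p["F"] / p["VL"] * (CA0 - CA) - r
    dT = (p["F"] / p["VL"] * (p["T0"] - T)
          - p["dH"] * r / (p["rho"] * p["Cp"])            # exothermic: dH < 0
          + Q / (p["rho"] * p["Cp"] * p["VL"]))
    return dCA, dT

# Hypothetical parameter set (NOT Table 1), only to exercise the function.
params = dict(F=5.0, VL=1.0, k0=8.46e6, E=5e4, R=8.314, T0=300.0,
              dH=-1.15e4, rho=1000.0, Cp=0.231)
dCA, dT = cstr_rhs(CA=0.57, T=395.3, CA0=4.0, Q=0.0, p=params)
```

With a rich feed and a cold inlet stream, the concentration balance is positive and the energy balance negative, as expected from the signs of the terms above.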
To place Equation (15) in the form of the class of nonlinear systems of Equation (1), deviation variables are used, such that the equilibrium point of the system is at the origin of the state-space: x^T = [C_A − C_As, T − T_s] is the state vector and u^T = [ΔC_A0, ΔQ] is the manipulated input vector in deviation variable form.
The explicit Euler method with an integration time step of h_c = 10⁻⁵ min is applied to numerically simulate the dynamic model of Equation (15). The nonlinear optimization problem of the LMPC of Equation (14) is solved using the IPOPT software package [26] with a sampling period Δ = 10⁻³ min.
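Since h_c is two orders of magnitude smaller than Δ, each sampling period is integrated with 100 explicit Euler sub-steps while the control input is held constant. A minimal sketch, using a scalar stand-in for Equation (15):

```python
def euler_one_sampling_period(x, u, rhs, Delta=1e-3, hc=1e-5):
    """Advance the state over one sampling period Delta with the input u held
    constant, taking Delta/hc explicit Euler sub-steps."""
    n_sub = round(Delta / hc)          # 100 sub-steps here
    for _ in range(n_sub):
        x = x + hc * rhs(x, u)         # forward Euler update
    return x

# Illustrative scalar dynamics x' = -x + u (not the CSTR of Eq. (15)):
x_next = euler_one_sampling_period(1.0, 0.0, lambda x, u: -x + u)
```

For the stable test dynamics, x after one sampling period is close to exp(−Δ) ≈ 0.999, confirming the sub-stepping is set up correctly.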
We construct a Control Lyapunov Function using the standard quadratic form V ( x ) = x T P x , with the following positive definite P matrix:
$$P = \begin{bmatrix} 9.35 & 0.41 \\ 0.41 & 0.02 \end{bmatrix} \tag{16}$$
Under the LMPC of Equation (14) without cyber-attacks, closed-loop stability is achieved for the nominal system of Equation (15) in the sense that the closed-loop state is always bounded in the stability region Ω_ρ with ρ = 0.2 and ultimately converges to Ω_{ρ_min} with ρ_min = 0.002 around the origin. However, if a min-max cyber-attack tampers with the temperature sensor measurement of the system of Equation (15), closed-loop stability is no longer guaranteed. Specifically, the min-max cyber-attack takes the following form:
$$\bar{x}_1 = x_1 \tag{17a}$$

$$\bar{x}_2 = \min\left\{ \arg\max_{x_2 \in \mathbb{R}} \left\{ x^T P x \le \rho \right\} \right\} \tag{17b}$$
where x_1 = C_A − C_As, x_2 = T − T_s, and x̄_1, x̄_2 are the corresponding state measurements under the min-max cyber-attack. In this example, the min-max cyber-attack of Equation (17) leaves the concentration measurement unchanged and tampers the temperature measurement to the minimum value that places the state on the boundary of the stability region Ω_ρ.
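Because V(x) = x^T P x is quadratic in x_2, the falsified measurement of Equation (17) can be computed in closed form as the smaller root of a scalar quadratic. The sketch below uses the P matrix and ρ = 0.2 given in this section; the function name is ours.

```python
import math

P11, P12, P22 = 9.35, 0.41, 0.02    # entries of P from Eq. (16)
rho = 0.2                            # stability region level

def minmax_x2(x1):
    """Smallest x2 satisfying x^T P x = rho for the given (untouched) x1,
    i.e., the minimum root of P22*x2^2 + 2*P12*x1*x2 + (P11*x1^2 - rho) = 0."""
    a = P22
    b = 2.0 * P12 * x1
    c = P11 * x1**2 - rho
    disc = b*b - 4.0*a*c
    if disc < 0:
        raise ValueError("no x2 places the state on the boundary of Omega_rho")
    return (-b - math.sqrt(disc)) / (2.0*a)   # smaller of the two roots

x2_bar = minmax_x2(-0.25)   # falsified temperature deviation for x1 = -0.25
```

By construction, the state (x_1, x̄_2) lies exactly on the level set V(x) = ρ.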
In Figure 5 and Figure 6, the temperature sensor measurement is compromised by a min-max cyber-attack at time t = 0.067 min. Without any cyber-attack detection system, Figure 5 shows that the LMPC of Equation (14) keeps operating the system of Equation (15) blindly on false sensor measurements and eventually drives the closed-loop state out of the stability region Ω_ρ.
To handle the min-max cyber-attack, the model-based detection system of Equation (5) and the NN-based detection method are applied to the system of Equation (15). The simulation results are shown in Figures 7–13; subsequently, the application of the NN-based detection method to the system under other cyber-attacks and in the presence of disturbances is demonstrated in Figures 14–16. Specifically, we first demonstrate the application of the model-based detection system of Equation (5) and of the LMPC of Equation (14), where S_TH = 1 and b = 0.5 are chosen through closed-loop simulations. In Figure 7, the min-max cyber-attack of Equation (17) is added at 0.06 min and is detected at 0.1 min, before the closed-loop state leaves Ω_ρ. The evolution of the CUSUM statistic S(k) is shown in Figure 8, in which S(k) remains at b when there is no cyber-attack and exceeds S_TH at 0.1 min. After the min-max cyber-attack is detected, the true states are obtained from redundant, secure sensors and the LMPC of Equation (14) drives the closed-loop state into Ω_{ρ_min}.
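A one-sided CUSUM recursion in the spirit of the model-based detection of Equation (5) can be sketched as follows. The exact form of Equation (5) is defined earlier in the paper and is not reproduced here; the residual term r(k) (discrepancy between measured and model-predicted state) and the drift parameter ν below are our assumptions for illustration, not the paper's formula. Only b = 0.5 and S_TH = 1 come from the text.

```python
# One-sided CUSUM with a reflecting barrier at b (illustrative sketch).
def cusum(residuals, b=0.5, nu=0.05, S_TH=1.0):
    """Return (alarm_index, trace): first k with S(k) > S_TH, or (None, trace)."""
    S, trace = b, []
    for k, r in enumerate(residuals):
        S = max(b, S + abs(r) - nu)   # barrier keeps S(k) >= b between attacks
        trace.append(S)
        if S > S_TH:                  # attack declared once S(k) exceeds S_TH
            return k, trace
    return None, trace

# Small residuals keep S(k) pinned at b = 0.5; a sustained bias after the
# attack drives S(k) past S_TH = 1, matching the behavior seen in Figure 8.
alarm, trace = cusum([0.01]*20 + [0.35]*10)
```

Before the attack the statistic sits at the barrier; once the residual bias appears, S(k) climbs by a fixed increment per step until the alarm fires.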
Next, the NN-based detection system and the LMPC of Equation (14) are implemented to mitigate the impact of cyber-attacks. The feed-forward NN model with two hidden layers is built in Python using the Keras library. Specifically, 3000 balanced time-series samples of the closed-loop states of the nominal system, the system with disturbances, and the system under min-max cyber-attacks from t = 0 to t = 1 min are used to train the neural network for a three-class classification, where classes 0, 1, and 2 stand for the system under min-max cyber-attacks, the nominal system, and the system with disturbances, respectively. In our calculations, 3000 time-series samples proved sufficient for the CSTR example: smaller datasets led to lower classification accuracy, while increasing the dataset size beyond 3000 did not significantly improve the classification accuracy but increased the computation time. The 3000 samples are split into 2000 training, 500 validation, and 500 test samples. V(x) = x^T P x is utilized as the input vector to the NN model. The structure of the NN model is listed in Table 2. Additionally, batch normalization is applied after each hidden layer to improve the performance of the NN model.
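The forward pass of such a network can be sketched in plain NumPy as a schematic stand-in for the Keras model: two hidden layers with ReLU activations and a softmax output over the three classes. The layer widths (Table 2) and the trained weights are not from the paper; random weights here only illustrate how a window of V(x) = x^T P x values maps to class probabilities, and batch normalization is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max())           # shift for numerical stability
    return e / e.sum()

def forward(v_window, W1, b1, W2, b2, W3, b3):
    """Map a window of Lyapunov-function values to 3 class probabilities
    (class 0: min-max attack, class 1: nominal, class 2: disturbances)."""
    h1 = relu(W1 @ v_window + b1)     # first hidden layer
    h2 = relu(W2 @ h1 + b2)           # second hidden layer
    return softmax(W3 @ h2 + b3)      # softmax output layer

n_in, n_h = 10, 16                    # assumed input window and layer widths
W1, b1 = rng.normal(size=(n_h, n_in)), np.zeros(n_h)
W2, b2 = rng.normal(size=(n_h, n_h)), np.zeros(n_h)
W3, b3 = rng.normal(size=(3, n_h)), np.zeros(3)

probs = forward(rng.normal(size=n_in), W1, b1, W2, b2, W3, b3)
```

Whatever the (untrained) weights, the output is a valid probability vector over the three classes, which is what the detection logic consumes downstream.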
To apply the NN-based detection method, we first investigate the relationship between the classification accuracy of the NN and the size of the dataset. Specifically, assuming that the min-max cyber-attack occurs at a random sampling step before 0.1 min, the first NN model, M_{0.1}, is trained at t = 0.1 min using the state data from t = 0 to 0.1 min. As shown in Figure 9, early stopping is triggered at the 8th training epoch, when the validation accuracy ceases to increase. The averaged classification accuracy at t = 0.1 min is obtained by training the same model M_{0.1} ten times independently. The above process is repeated, increasing the size of the dataset by 0.02 min each time, to derive the models at later time instances (i.e., M_{0.12}, M_{0.14}, …). The minimum, maximum, and averaged classification accuracies at each detection time instance are shown in Figure 10.
Figure 10 shows that the averaged test accuracy increases as more state measurements are collected after the cyber-attack occurs, reaching up to 95% when state measurements over a long period of time are available. This suggests that detections based on the more recent models are more reliable and deserve higher weights in the sliding window. The confusion matrix of the above NN for the three classes (the system under min-max cyber-attack, the nominal system, and the system with disturbances) is given in Table 3. Additionally, besides the NN method, other supervised learning-based classification methods, including k-NN, SVM, and random forests, are applied to the same dataset; the resulting averaged test accuracies, sensitivities, and specificities within 0.28 min are listed in Table 4.
When the detection of cyber-attacks is incorporated into the closed-loop system of Equation (15) under the LMPC of Equation (14), the detection system is called every N_a = 5 sampling periods, the sliding window length is N_s = 15 sampling periods, and the threshold for the detection indicator is D_TH = 1.6. The detection system is activated from t = 0.1 min onward, so that a desired test accuracy is achieved with enough data. The closed-loop state-space profiles under the NN-based detection system with the stability region Ω_ρ check and without the Ω_ρ check are shown in Figure 11 and Figure 12.
Specifically, Figure 11 demonstrates that, without the stability region check, the closed-loop state leaves Ω_ρ before the cyber-attack is confirmed, whereas, with the boundedness check of Ω_ρ, the closed-loop state remains bounded in Ω_ρ because the controller switches to the redundant sensors at the first detection of the min-max cyber-attack. Figure 12 shows that after the min-max cyber-attack is confirmed at t = 0.115 min, the misbehaving sensor is isolated and the LMPC of Equation (14) starts using the temperature measurement from the redundant sensors and re-stabilizes the system at the origin. The simulations show that it takes around 0.8 min for the closed-loop state trajectory to enter and remain in Ω_{ρ_min} under the LMPC of Equation (14) once the min-max cyber-attack is detected. The corresponding input profiles for the closed-loop system under the NN-based detection system with the Ω_ρ check are shown in Figure 13, where a sharp change of ΔC_A0 is observed from t = 0.095 min to t = 0.115 min due to the min-max cyber-attack.
Additionally, when both disturbances and min-max cyber-attacks are present, the NN-based detection system is still able to detect the min-max cyber-attack and re-stabilize the closed-loop system of Equation (15) by following the same steps as in the pure cyber-attack case. The bounded disturbances w_1 and w_2 are added to Equations (15a) and (15b) as Gaussian white noise with zero mean and variances σ_1 = 0.1 kmol/(m³·min) and σ_2 = 2 K/min, bounded as |w_1| ≤ 0.1 kmol/(m³·min) and |w_2| ≤ 2 K/min, respectively. The closed-loop state and input profiles are shown in Figures 14–16. Specifically, Figure 15 shows that the min-max cyber-attack occurs at 0.08 min and is confirmed at 0.115 min, before the closed-loop state leaves Ω_ρ. In the presence of disturbances, the misbehaving sensor is isolated and the closed-loop state is driven to a neighborhood around the origin under the LMPC of Equation (14). In Figure 16, the manipulated inputs vary around the steady-state values (0, 0) once the closed-loop system reaches a neighborhood of the steady-state, due to the bounded disturbances.
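Bounded Gaussian noise of this kind can be generated as follows. The paper does not specify how the bound is enforced; clipping the samples, as below, is one common choice and is our assumption, with the stated σ values used as standard deviations.

```python
import random

random.seed(1)

def bounded_noise(sigma, bound, n):
    """n samples of zero-mean Gaussian noise N(0, sigma^2), clipped to
    the interval [-bound, bound]."""
    return [max(-bound, min(bound, random.gauss(0.0, sigma))) for _ in range(n)]

# w1 enters Eq. (15a), w2 enters Eq. (15b), with the bounds from the text.
w1 = bounded_noise(sigma=0.1, bound=0.1, n=1000)   # kmol/(m^3 min)
w2 = bounded_noise(sigma=2.0, bound=2.0, n=1000)   # K/min
```

In a simulation loop, one sample of (w_1, w_2) would be added to the right-hand sides of Equations (15a) and (15b) at each integration step.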
Lastly, since the surge cyber-attack of Equation (6) is undetectable by the model-based detection method, we also test the performance of the NN-based detection method on the surge cyber-attack, exploiting the similarity between surge and min-max cyber-attacks (i.e., the surge cyber-attack acts as a min-max attack for the first few sampling steps). Simulations demonstrate that 89% of surge cyber-attacks are detected by the NN-based detection system trained on min-max cyber-attacks only, which implies that the NN-based detection method can be applied to many other cyber-attacks with similar properties.
Moreover, when cyber-attacks with different properties are considered, for example, the replay attack (i.e., x̄ ∈ X, where X is the set of past state measurements), the NN-based detection system can still efficiently distinguish the types of cyber-attacks and disturbances after re-training the NN model. The new NN model is built with labeled training data for the min-max, replay, nominal, and disturbance cases, for which the classification accuracy within 0.28 min reaches 85%. As a result, the NN-based detection model can be readily updated with data of new cyber-attacks without changing the overall structure of the detection or control systems.

6. Conclusions

In this work, we proposed an integrated NN-based detection and model predictive control method for nonlinear process systems to account for potential cyber-attacks. The NN-based detection system was first developed with a sliding detection window to detect cyber-attacks. Based on that, the Lyapunov-based MPC was developed with a stability region check triggered by the detection indicator to achieve closed-loop stability, in the sense that the closed-loop state remained within a well-characterized stability region and was ultimately driven to a small neighborhood around the origin. Finally, the proposed integrated NN-based detection and LMPC method was applied to a nonlinear chemical process example. The simulation results demonstrated that the min-max cyber-attack was successfully detected before the state exited the stability region, and the closed-loop system was stabilized under the LMPC by using the measurements from redundant, secure sensors. The good performance of the proposed approach with respect to surge and replay cyber-attacks was also demonstrated. The value and importance of the NN-based detection method is twofold. First, it can detect cyber-attacks without knowledge of the process model, provided a large amount of past data is available. This is important because most SCADA systems today are large-scale networks with complicated process models, while big-data storage and computation are becoming increasingly available. Second, compared to other detection methods, the NN-based detection is easy to implement. The proposed detection and control method can improve process safety by effectively detecting known (or similar-to-known) cyber-attacks, and it can be readily updated to handle new cyber-attacks. However, the NN-based detection method also has limitations: although it achieves the desired performance for a trained, known cyber-attack, it is not guaranteed to work for an unknown, new cyber-attack unless that attack shares similar properties with known cyber-attacks.

Author Contributions

Investigation, Z.W., F.A., J.Z., Z.Z. and H.D.; Methodology, Z.W., F.A., J.Z., Z.Z. and H.D.; Writing, Z.W. and H.D.; Supervision, P.D.C.

Funding

Financial support from the National Science Foundation and the Department of Energy is gratefully acknowledged.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this article.

References

  1. Ye, N.; Zhang, Y.; Borror, C.M. Robustness of the Markov-chain model for cyber-attack detection. IEEE Trans. Reliab. 2004, 53, 116–123. [Google Scholar] [CrossRef]
  2. Fawzi, H.; Tabuada, P.; Diggavi, S. Secure estimation and control for cyber-physical systems under adversarial attacks. IEEE Trans. Autom. Control 2014, 59, 1454–1467. [Google Scholar] [CrossRef]
  3. Ding, D.; Han, Q.L.; Xiang, Y.; Ge, X.; Zhang, X.M. A survey on security control and attack detection for industrial cyber-physical systems. Neurocomputing 2018, 275, 1674–1683. [Google Scholar] [CrossRef]
  4. Cárdenas, A.A.; Amin, S.; Lin, Z.S.; Huang, Y.L.; Huang, C.Y.; Sastry, S. Attacks against process control systems: Risk assessment, detection, and response. In Proceedings of the 6th ACM Symposium on Information, Computer And Communications Security, Hong Kong, China, 22–24 March 2011; pp. 355–366. [Google Scholar]
  5. Singh, J.; Nene, M.J. A survey on machine learning techniques for intrusion detection systems. Int. J. Adv. Res. Comput. Commun. Eng. 2013, 2, 4349–4355. [Google Scholar]
  6. Ozay, M.; Esnaola, I.; Vural, F.T.Y.; Kulkarni, S.R.; Poor, H.V. Machine learning methods for attack detection in the smart grid. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 1773–1786. [Google Scholar] [CrossRef] [PubMed]
  7. Satchidanandan, B.; Kumar, P.R. Dynamic watermarking: Active defense of networked cyber–physical systems. Proc. IEEE 2017, 105, 219–240. [Google Scholar] [CrossRef]
  8. Pajic, M.; Weimer, J.; Bezzo, N.; Sokolsky, O.; Pappas, G.J.; Lee, I. Design and implementation of attack-resilient cyberphysical systems: With a focus on attack-resilient state estimators. IEEE Control Syst. 2017, 37, 66–81. [Google Scholar]
  9. Dolk, V.S.; Tesi, P.; De Persis, C.; Heemels, W.P.M.H. Event-triggered control systems under denial-of-service attacks. IEEE Trans. Control Netw. Syst. 2017, 4, 93–105. [Google Scholar] [CrossRef]
  10. Rawlings, J.B.; Mayne, D.Q. Model Predictive Control: Theory and Design; Nob Hill Pub.: San Francisco, CA, USA, 2009. [Google Scholar]
  11. Mhaskar, P.; El-Farra, N.H.; Christofides, P.D. Stabilization of nonlinear systems with state and control constraints using Lyapunov-based predictive control. Syst. Control Lett. 2006, 55, 650–659. [Google Scholar] [CrossRef]
  12. Muñoz de la Peña, D.; Christofides, P.D. Lyapunov-based model predictive control of nonlinear systems subject to data losses. IEEE Trans. Autom. Control 2008, 53, 2076–2089. [Google Scholar] [CrossRef]
  13. Wu, Z.; Albalawi, F.; Zhang, Z.; Zhang, J.; Durand, H.; Christofides, P.D. Control Lyapunov-barrier function-based model predictive control of nonlinear systems. In Proceedings of the American Control Conference, Milwaukee, WI, USA, 27–29 June 2018; pp. 5920–5926. [Google Scholar]
  14. Durand, H. A Nonlinear Systems Framework for Cyberattack Prevention for Chemical Process Control Systems. Mathematics 2018, 6, 169. [Google Scholar] [CrossRef]
  15. Narasingam, A.; Kwon, J.S.I. Data-driven identification of interpretable reduced-order models using sparse regression. Comput. Chem. Eng. 2018. [Google Scholar] [CrossRef]
  16. Narasingam, A.; Siddhamshetty, P.; Kwon, J.S.I. Temporal clustering for order reduction of nonlinear parabolic PDE systems with time-dependent spatial domains: Application to a hydraulic fracturing process. AIChE J. 2017, 63, 3818–3831. [Google Scholar] [CrossRef]
  17. Sidhu, H.S.; Narasingam, A.; Siddhamshetty, P.; Kwon, J.S.I. Model order reduction of nonlinear parabolic PDE systems with moving boundaries using sparse proper orthogonal decomposition: Application to hydraulic fracturing. Comput. Chem. Eng. 2018, 112, 92–100. [Google Scholar] [CrossRef]
  18. Lin, Y.; Sontag, E.D. A universal formula for stabilization with bounded controls. Syst. Control Lett. 1991, 16, 393–397. [Google Scholar] [CrossRef] [Green Version]
  19. Li, Y.; Shi, L.; Cheng, P.; Chen, J.; Quevedo, D.E. Jamming attacks on remote state estimation in cyber-physical systems: A game-theoretic approach. IEEE Trans. Autom. Control 2015, 60, 2831–2836. [Google Scholar] [CrossRef]
  20. Tsai, C.F.; Hsu, Y.F.; Lin, C.Y.; Lin, W.Y. Intrusion detection by machine learning: A review. Expert Syst. Appl. 2009, 36, 11994–12000. [Google Scholar] [CrossRef]
  21. Bishop, C.M. Pattern Recognition and Machine Learning (Information Science and Statistics); Springer: New York, NY, USA, 2006; p. 049901. [Google Scholar]
  22. Alexandridis, K.; Maru, Y. Collapse and reorganization patterns of social knowledge representation in evolving semantic networks. Inf. Sci. 2012, 200, 1–21. [Google Scholar] [CrossRef]
  23. Daqi, G.; Yan, J. Classification methodologies of multilayer perceptrons with sigmoid activation functions. Pattern Recognit. 2005, 38, 1469–1482. [Google Scholar] [CrossRef]
  24. Xu, B.; Liu, X.; Liao, X. Global exponential stability of high order Hopfield type neural networks. Appl. Math. Comput. 2006, 174, 98–116. [Google Scholar] [CrossRef]
  25. Zhang, Z.; Wu, Z.; Durand, H.; Albalawi, F.; Christofides, P.D. On integration of feedback control and safety systems: Analyzing two chemical process applications. Chem. Eng. Res. Des. 2018, 132, 616–626. [Google Scholar] [CrossRef] [Green Version]
  26. Wächter, A.; Biegler, L.T. On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Math. Program. 2006, 106, 25–57. [Google Scholar] [CrossRef]
Figure 1. Basic structure of a feed-forward neural network used for cyber-attack detection.
Figure 2. The sliding detection window with detection activated every N a sampling steps, where triangles represent the detection indicator D i and the box with length N s represents the sliding detection window.
Figure 3. Basic structure of the proposed integrated NN-based detection and LMPC control method.
Figure 4. A schematic representing the stability region Ω ρ and the small neighborhood Ω ρ m i n around the origin. The trajectory first moves away from the origin due to the cyber-attack and finally re-converges to Ω ρ m i n under the LMPC of Equation (14) after the detection of the cyber-attack by the proposed detection scheme.
Figure 5. The state-space profile for the CSTR of Equation (15) under the LMPC of Equation (14) and under a min-max cyber-attack for the initial condition (−0.25, 3).
Figure 6. The true state profile ( x 2 = T T s ) and the sensor measurements ( x ¯ 2 = T ¯ T s ) of the closed-loop system under the LMPC of Equation (14) and under a min-max cyber-attack for the initial condition (−0.25, 3), where the vertical dotted line shows the time the cyber-attack is added.
Figure 7. Closed-loop state profiles ( x 2 = T T s , x ¯ 2 = T ¯ T s ) for the initial condition (−0.25, 3) under the LMPC of Equation (14) and the model-based detection system.
Figure 8. The variation of S ( k ) for the initial condition (−0.25, 3) under the LMPC of Equation (14) and the model-based detection system.
Figure 9. The variation of training accuracy and validation accuracy for the NN model M 0.1 , where early-stopping is activated at the 8th epoch of training.
Figure 10. The test accuracy of neural network with respect to the size of training and test data.
Figure 11. The state-space profile for the closed-loop CSTR with the initial condition (0.24, −2.78), where a min-max cyber-attack is detected by the NN-based detection system and mitigated by the LMPC of Equation (14).
Figure 12. Closed-loop state profiles (x₂ = T − Tₛ, x̄₂ = T̄ − Tₛ) for the initial condition (0.24, −2.78) under the LMPC of Equation (14) and the NN-based detection system.
Figure 13. Manipulated input profiles (u₁ = ΔC_A0, u₂ = ΔQ) for the initial condition (0.24, −2.78) under the LMPC of Equation (14) and the NN-based detection system.
Figure 14. The state-space profiles for the closed-loop CSTR with bounded disturbances and the initial condition (0.25, −3), where a min-max attack is detected by the NN-based detection system and mitigated by the LMPC of Equation (14).
Figure 15. State profiles (x₂ = T − Tₛ, x̄₂ = T̄ − Tₛ) for the closed-loop CSTR with bounded disturbances and the initial condition (0.25, −3) under the LMPC of Equation (14) and the NN-based detection system.
Figure 16. Manipulated input profiles (u₁ = ΔC_A0, u₂ = ΔQ) for the closed-loop CSTR with bounded disturbances and the initial condition (0.25, −3) under the LMPC of Equation (14).
Table 1. Parameter values of the CSTR.

T0 = 310 K                 F = 100 × 10⁻³ m³/min
VL = 0.1 m³                E = 8.314 × 10⁴ kJ/kmol
k0 = 72 × 10⁹ min⁻¹        ΔH = −4.78 × 10⁴ kJ/kmol
Cp = 0.239 kJ/(kg K)       R = 8.314 kJ/(kmol K)
ρ = 1000 kg/m³             CA0s = 1.0 kmol/m³
Qs = 0.0 kJ/min            CAs = 0.57 kmol/m³
Ts = 395.3 K
Table 2. Feed-forward NN model.

Layer                  Neurons    Activation Function
First hidden layer     120        ReLU
Second hidden layer    100        ReLU
Output layer           1          Softmax
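The forward pass of the feed-forward architecture in Table 2 can be sketched in NumPy. The input dimension (10 here) is an assumption for illustration, and a 3-neuron softmax output is assumed to match the three classes of Table 3; the weights below are random placeholders, not trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max())          # shift for numerical stability
    return e / e.sum()

# Layer widths from Table 2 (120 and 100 ReLU units); input size and the
# 3-class output are illustrative assumptions, and weights are untrained.
n_in, n_h1, n_h2, n_out = 10, 120, 100, 3
W1, b1 = 0.1 * rng.standard_normal((n_h1, n_in)), np.zeros(n_h1)
W2, b2 = 0.1 * rng.standard_normal((n_h2, n_h1)), np.zeros(n_h2)
W3, b3 = 0.1 * rng.standard_normal((n_out, n_h2)), np.zeros(n_out)

def classify(x):
    h1 = relu(W1 @ x + b1)
    h2 = relu(W2 @ h1 + b2)
    return softmax(W3 @ h2 + b3)     # class probabilities

p = classify(rng.standard_normal(n_in))
# p is a length-3 probability vector; argmax gives the predicted class
```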
Table 3. Confusion matrix of the neural network.

                     Actual Class 0:         Actual Class 1:    Actual Class 2:
                     Min-Max Cyber-Attack    Nominal System     System with Disturbances
Predicted Class 0    198                     1                  3
Predicted Class 1    0                       140                10
Predicted Class 2    0                       0                  148
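The summary metrics reported in Table 4 can be derived from a confusion matrix of this form. As a sketch using the counts in Table 3 (the resulting values need not coincide with Table 4, which may be computed on different evaluation data), with sensitivity and specificity taken for the attack class against the two no-attack classes combined:

```python
import numpy as np

# Confusion matrix from Table 3: rows = predicted class, columns = actual
# class (0: min-max cyber-attack, 1: nominal, 2: system with disturbances).
cm = np.array([[198,   1,   3],
               [  0, 140,  10],
               [  0,   0, 148]])

accuracy = np.trace(cm) / cm.sum()   # fraction of correctly classified samples

# One-vs-rest metrics for the attack class (class 0): classes 1 and 2
# together form the "negative" (no-attack) class.
tp = cm[0, 0]
fn = cm[1:, 0].sum()                 # attacks predicted as no-attack
fp = cm[0, 1:].sum()                 # no-attack samples flagged as attacks
tn = cm[1:, 1:].sum()

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

print(accuracy)                      # 0.972
print(sensitivity)                   # 1.0
print(round(specificity, 3))         # 0.987
```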
Table 4. Comparison of the performance of different detection models.

Model             Test Accuracy    Sensitivity    Specificity
k-NN              71.1%            90.9%          99.5%
SVM               83.0%            93.0%          87.8%
Random Forest     96.2%            100.0%         96.2%
Neural Network    95.8%            98.0%          98.6%

Wu, Z.; Albalawi, F.; Zhang, J.; Zhang, Z.; Durand, H.; Christofides, P.D. Detecting and Handling Cyber-Attacks in Model Predictive Control of Chemical Processes. Mathematics 2018, 6, 173. https://doi.org/10.3390/math6100173
