
Neural Network-Adaptive Secure Control for Nonlinear Cyber-Physical Systems Against Adversarial Attacks

Renhe Zhao, Dongqi He and Fangyi You
College of Mechanical Engineering and Automation, Huaqiao University, Xiamen 361021, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(7), 3893; https://doi.org/10.3390/app15073893
Submission received: 28 October 2024 / Revised: 21 February 2025 / Accepted: 24 February 2025 / Published: 2 April 2025

Abstract: The insecurity of the network means that each agent is remotely controlled through unreliable network channels. In such an insecure network, the output signal can be altered through carefully designed adversarial attacks to produce erroneous results. To address this, this paper proposes a neural network (NN) adaptive secure control scheme for cyber-physical systems (CPSs) via attack reconstruction strategies, where the attack reconstruction strategy serves as the solution to the NN estimation problem posed by the insecurity of the network. By introducing a novel error transformation, an NN-adaptive secure control method is formulated within the framework of backstepping. Based on the Lyapunov stability theory and the defined error transformation, it is proven that the secure control process reaches the expected trajectory and that all closed-loop signals are bounded. Finally, the effectiveness of the scheme is verified via a simulation of the attitude control of two-joint robots.

1. Introduction

The wireless communication connection between physical components and network computing layers in cyber-physical systems (CPSs) introduces vulnerabilities to network attacks [1,2]. Consequently, there is a crucial need to thoroughly study the security issues associated with CPSs in order to enhance their security and safeguard them against cyber-attacks.
Thus, many researchers focus on improving the means of keeping CPSs secure. Secure state estimation (SSE) is the problem of estimating the system state from compromised and noisy data packets [3]. To ensure SSE under cyber-attacks, it is crucial to identify which information transmission channels are affected when a portion of the collected information is tampered with. The challenge lies in identifying the precise set of compromised channels among numerous possibilities, rendering the problem an instance of combinatorial optimization and placing it within the realm of NP-hard problems [4,5,6,7,8]. To overcome this difficulty, the authors of [4] devise an $L_1/L_r$ relaxation technique, and the ideas of [4] are applied to an event-triggered projected gradient descent algorithm in [5]. Note that the aforementioned algorithms [4,5] can only guarantee the correctness of the estimates under certain restrictive assumptions about the system structure. Reference [6] investigates the impact of cyber-attacks, establishing that the observation error system is stable as long as the $2s$-sparse observability condition of the original system is satisfied. Existing work provides fundamental limits for estimation against integrity attacks, specifically proving that $2s$-sparse observability is necessary for SSE against $s$-sparse attacks. For example, reference [7] analyzes the convergence of the observation errors under $s$-sparse attacks using the separation principle and Lyapunov stability theory for a $2s$-sparse observable system. However, it is worth noting that control systems are often nonlinear. In such complex systems, accurate SSE may require complex models and algorithms, increasing computational complexity and cost.
Existing studies suggest that introducing secure/resilient control or prior knowledge can alleviate the damage caused by cyber-attacks. By assuming that attacks can be parameterized as state-dependent functions, references [9,10] relaxed the requirement that attacks be bounded, allowing them to be unbounded. Reference [11] introduces coordinate transformations into the controller design, which can overcome the negative impact caused by cyber-attacks. References [12,13] propose an attack compensator that effectively suppresses the impact of attacks by removing extreme values, thereby ensuring that normal nodes correctly estimate the system state under malicious node interference while avoiding exhaustive search methods. Meanwhile, reference [14] proposes a novel attack compensator that suppresses the effect of the attack signal on the state information by compressing the amplitude of abnormal measurement signals. To address the challenge of unknown time-varying gains arising in controller design after attacks are parameterized, a Nussbaum gain technique is proposed in [15]. In addition, the authors of [15] propose an observer with attack compensation, which successfully solves the problem of unknown control gains caused by unknown time-varying coefficients. However, the assumption that attacks can be parameterized is too strict, and it is difficult to find conditions satisfying this assumption in real physical systems.
As deep learning continues to advance, the performance of NN models is constantly improving, and researchers have proposed many prediction methods [16] that make the application of NNs more reliable and interpretable. Therefore, in security maintenance and system performance evaluation, neural networks are often used to estimate attack functions so as to maintain the security of the system [17,18,19]. The authors of [17,18] propose NN estimation algorithms that exclude the strict parameterization assumption. In reference [19], the authors apply a nonlinear mapping to the cyber-attack function, confining it to a compact set and thereby achieving the estimation requirements for the attack; an anti-attack estimation algorithm is also designed to suppress the impact of attack signals [19]. Hence, developing new control algorithms against sensor and actuator attacks is of significant importance in both theory and practice. The main challenge for adversarial attack recovery algorithms lies in achieving complete compensation or suppression of attack signals and ensuring convergence to the equilibrium point in any situation (with or without attacks).
In this paper, we propose an NN adaptive secure control scheme under the NN estimation algorithm framework. Specifically, we consider the following situations:
In the above research on secure/resilient control [9,10,11,12,13,14,15,16], secure/resilient control strategies are designed under the assumption that attacks can be parameterized. If an attack cannot be parameterized, how can we design an attack reconstruction strategy to eliminate the adverse effects of the cyber-attack? Furthermore, compared with [17,18,19], how can we utilize the insecure information to restore system security when all channels are attacked?
The goal of this paper is to obtain the NN attack reconstruction strategies in an insecure network with all information unavailable to cope with the aforementioned situations. The innovations of this paper are as follows:
(1)
We formulate the attack reconstruction algorithm in an insecure network with all information unavailable, where the attack reconstruction strategy constitutes the final solution of the estimation algorithm. Specifically, by introducing a mapping, time is first mapped into a compact set. Then, we use the NN estimation algorithm to approximately learn the unknown adversarial attacks, thereby improving the ability to cope with them. Note that, although [9,10,11,12,13,14,15,16] also investigate the problem of adversarial attacks, the attack models in [9,10,11,12,13,14,15,16] require that the adversarial attacks be parameterized; this article removes this restrictive assumption and is therefore applicable to more general attack models.
(2)
We study the NN adaptive secure control scheme within the framework of backstepping, where the attack reconstruction algorithm is utilized to approximately learn the unknown adversarial attacks. Based on the Lyapunov stability analysis theory, we demonstrate that the proposed secure control can ensure system performance even in the presence of adversarial attacks. Different from the requirements in the literature [20,21], which consider that the sensor attacks should be $s$-sparse [3,4,5,6,7,8,21,22], this paper utilizes the attack reconstruction algorithm to approximately learn the unknown adversarial attacks, no longer requiring these restrictive assumptions.
To ensure clarity and consistency in the text, the abbreviations employed throughout the paper are summarized in the Abbreviations section.
The remainder of this paper is organized as follows. Section 2 introduces the system architecture. Section 3 introduces the NN attack reconstruction algorithm used to estimate the cyber-attacks. Section 4 discusses the NN adaptive secure control method. Section 5 presents the stability analysis of the designed NN adaptive secure control method. Section 6 demonstrates the feasibility of the algorithm through simulation.

2. Preliminaries and Problem Formulation

A. System Descriptions
Consider the following nonlinear CPSs:

$$\dot{x}_i = x_{i+1} + f_i(\bar{x}_i), \quad \dot{x}_n = \tilde{u} + f_n(\bar{x}_n), \quad y = x_1 \tag{1}$$

where $\bar{x}_i = [x_1, \ldots, x_i]^T$, $\bar{x}_i \in \Omega \subset \mathbb{R}^i$, $i = 1, \ldots, n-1$, is the state vector [16]; $\tilde{u}$ denotes the actuator output; $f_i(\bar{x}_i)$ and $f_n(\bar{x}_n)$ are unknown smooth nonlinear dynamics functions of the CPSs (1); and $y$ denotes the output of the controlled system.
B. Adversarial Attack Model
In this paper, we consider the adversarial attack as follows [9,23]:

$$\tilde{x}_s(t) = x_s(t) + a_s(t, \bar{x}_s) \tag{2}$$

$$\tilde{u}(t) = u(t) + a(t, \bar{x}_n) \tag{3}$$

where the compromised state $\tilde{x}_s$, $s = 1, \ldots, n$, is the variable available after the attack; $u(t)$ is the control input rendered unavailable by the adversarial attack; and $a_s(t, \bar{x}_s)$ is a time-varying, state-dependent malicious adversarial attack function, meaning it changes over time and is influenced by the system's current state.
Remark 1.
In the presence of sensor attacks, this paper proposes an NN estimation algorithm to estimate the unknown sensor attacks, essentially letting the network train itself and determine which attack values are most important. The proposed NN estimation method does not require that the sensor attacks be parameterized. Note that, although [10,11,12,13,14,15] also investigate the problem of unknown injected sensor attacks, the control methods proposed in [10,11,12,13,14,15] require that the sensor attacks be parameterized; this restrictive assumption is removed in this paper.
With the reference signal $y_0$ defined, the control objectives are given as follows:
Control objective. The control objective of this paper is to design a state-feedback secure control scheme based on the NN attack reconstruction strategies in an insecure network with all information unavailable for the nonlinear CPSs (1) against the adversarial attacks (2) and (3), such that:
(1)
The obtained NN attack reconstruction strategies in an insecure network with all information unavailable can effectively estimate unknown attacks.
(2)
The above secure control process reaches the expected trajectory, and all the closed-loop signals are bounded.
(3)
The output error of the system converges to a small neighborhood of zero.

3. NN Attack Reconstruction Algorithm

Since the unknown adversarial attack functions $a_s(t, \bar{x}_s)$ and $a(t, \bar{x}_n)$ are unavailable, attack reconstruction strategies must be constructed during the control design process. Thus, the NN estimation algorithm is proposed to approximately learn the unknown adversarial attacks.
According to the research in references [16,19], the estimation performance of neural networks (NNs) is only guaranteed on a compact set $\Omega$. However, the adversarial attack functions are time-dependent; therefore, following reference [19], we propose a nonlinear mapping that maps $t$ to a compact set:
$$h(t) = \frac{t}{t+1}, \quad t \in [0, \infty) \tag{4}$$
Taking the inverse mapping $h^{-1}: [0, 1) \to [0, \infty)$ of (4), we obtain

$$h^{-1}(\varsigma) = \frac{\varsigma}{1 - \varsigma}, \quad \varsigma \in [0, 1) \tag{5}$$
According to (4) and (5), the time-dependent adversarial attack functions $a_s(t, \bar{x}_s)$ and $a(t, \bar{x}_n)$ have been mapped to the compact set $\varsigma \in [0, 1)$. NNs are capable of learning complex nonlinear relationships, making them highly adaptable to various types of data and problems. Next, we use NNs to capture the adversarial attack patterns and features. We define
$$a_s(t, 0) = a_s(h^{-1}(\varsigma), 0) = a_{s,h}(\varsigma, 0) \tag{6}$$

$$a(t, 0) = a(h^{-1}(\varsigma), 0) = a_h(\varsigma, 0) \tag{7}$$
Similarly to work [19], we have
$$a_{s,h}(\varsigma, \bar{x}_s) = w_s^T \varphi_s(\varsigma, \bar{x}_s) + \varepsilon_{s0} \tag{8}$$

$$a_h(\varsigma, \bar{x}_n) = v^T \phi(\varsigma, \bar{x}_n) + \upsilon_0 \tag{9}$$
where $w_s^T = [w_{s,1}, \ldots, w_{s,r}]$ and $v^T = [v_1, \ldots, v_r]$ denote the ideal weight vectors, i.e., the optimal weight vectors found through training to minimize the approximation error, while $r$ is the number of NN rules; $\varphi_s(\varsigma, \bar{x}_s) = [\varphi_{s,1}(\varsigma, \bar{x}_s), \ldots, \varphi_{s,r}(\varsigma, \bar{x}_s)]^T$ and $\phi(\varsigma, \bar{x}_n) = [\phi_1(\varsigma, \bar{x}_n), \ldots, \phi_r(\varsigma, \bar{x}_n)]^T$ are Gaussian activation function vectors, whose parameters are tuned during training for optimal performance; and $\varepsilon_{s0}$ and $\upsilon_0$ are NN approximation errors satisfying $|\varepsilon_{s0}| \le \varepsilon_{s0}^*$ and $|\upsilon_0| \le \upsilon_0^*$, respectively, where $\varepsilon_{s0}^*$ and $\upsilon_0^*$ are positive constants.
Based on (8) and (9), we can use $\hat{w}_s^T \varphi_s(\varsigma, \bar{x}_s)$ and $\hat{v}^T \phi(\varsigma, \bar{x}_n)$ to approximately learn the unknown adversarial attacks, respectively.
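To make the reconstruction idea concrete, the following Python sketch maps time onto the compact set $[0,1)$ via (4), evaluates Gaussian activations there, and adapts a weight vector toward a sample attack signal. This is a minimal sketch under stated assumptions: the gradient-descent update, the learning rate `gamma`, the stand-in state trajectory, and the sample attack are all illustrative assumptions, not the paper's adaptation laws (which are given later as (16), (17) and (29)).

```python
import numpy as np

def h(t):
    """Nonlinear mapping (4): h(t) = t / (t + 1), sends [0, inf) into [0, 1)."""
    return t / (t + 1.0)

def rbf_features(varsigma, x, centers, width=1.0):
    """Gaussian activations phi_l(varsigma, x) evaluated on the compact set."""
    z = np.concatenate(([varsigma], np.atleast_1d(x)))
    return np.exp(-np.sum((z - centers) ** 2, axis=1) / (2.0 * width ** 2))

rng = np.random.default_rng(0)
centers = rng.uniform([0.0, -2.0], [1.0, 2.0], size=(50, 2))  # (varsigma, x) centers
w_hat = np.zeros(50)                                          # adaptive weights

# Illustrative time/state-dependent attack, similar in shape to Section 6.
attack = lambda t, x: 0.5 * x**2 - x * np.sin(x) if t >= 10 else 0.0

dt, gamma = 1e-3, 20.0  # assumed step size and gradient-descent learning rate
for k in range(int(20 / dt)):
    t = k * dt
    x = np.sin(t)                        # stand-in state trajectory
    phi = rbf_features(h(t), x, centers)
    err = attack(t, x) - w_hat @ phi     # reconstruction error (observable here only for illustration)
    w_hat += dt * gamma * err * phi      # simple gradient adaptation, not the paper's law
```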

4. Controller Design

Based on the NN attack reconstruction strategies (8) and (9), and under the framework of a backstepping recursive control design algorithm, an NN-adaptive control scheme is developed that approximately learns the unknown adversarial attacks.
To analyze the stability of the designed controller, we define the following error coordinate change:

$$z_1 = x_1 - y_0, \quad z_s = x_s - v_s, \quad \xi_s = v_s - \beta_{s-1} \tag{10}$$
where $\beta_{s-1} \in \mathbb{R}$ is the virtual controller, i.e., an intermediate control signal designed within the system rather than a physically applied input; $z_s$ is the error surface; $\xi_s \in \mathbb{R}$ is the first-order filter error; and $v_s \in \mathbb{R}$ is obtained by passing the intermediate virtual control $\beta_{s-1}$ through a first-order filter with time constant $\tau_s$, i.e.,
$$\tau_s \dot{v}_s + v_s = \beta_{s-1}, \quad v_s(0) = \beta_{s-1}(0) \tag{11}$$

where $\tau_s$ is a positive constant.
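As a minimal sketch, assuming a forward-Euler discretization with an assumed step size `dt`, the first-order filter (11) can be realized as follows:

```python
def filter_step(v, beta_prev, tau, dt):
    """One Euler step of the first-order filter (11): tau * v_dot = beta_prev - v."""
    return v + dt * (beta_prev - v) / tau

# Example: filtering a constant input; v converges toward beta_prev = 1.0.
v = 0.0
for _ in range(1000):
    v = filter_step(v, beta_prev=1.0, tau=1.0, dt=0.01)
```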
Step 1: Similarly to (8) and (9), the NN approximation can be expressed as

$$f_s(\bar{x}_s) = \theta_s^T \varphi_s(\bar{x}_s) + \varepsilon_s \tag{12}$$

where $\theta_s^T = [\theta_{s,1}, \ldots, \theta_{s,r}]$ is the ideal weight vector, i.e., the optimal weight vector found through training to minimize the approximation error; $\varphi_s(\bar{x}_s) = [\varphi_{s,1}(\bar{x}_s), \ldots, \varphi_{s,r}(\bar{x}_s)]^T$ is a Gaussian activation function vector, whose parameters are tuned during training for optimal performance; and the NN approximation error $\varepsilon_s$ satisfies $|\varepsilon_s| \le \varepsilon_s^*$ for a positive constant $\varepsilon_s^* > 0$. The NN learns the unknown nonlinear dynamics $f_s(\bar{x}_s)$ by adjusting its weights during training based on input-output data.
By taking the derivative of both sides of (10) and using (1), it can be derived that

$$\dot{z}_1 = \dot{x}_1 = x_2 + f_1(x_1) = x_2 + \theta_1^T \varphi_1(x_1) + \varepsilon_1 \tag{13}$$
To analyze the system stability, we choose the following quadratic Lyapunov function, which captures both the tracking error and the weight estimation errors:

$$V_1 = \frac{1}{2}\left(z_1^2 + \frac{1}{p_1}\tilde{\theta}_1^T\tilde{\theta}_1 + \frac{1}{q_1}\tilde{w}_1^T\tilde{w}_1\right) \tag{14}$$

where $\tilde{\theta}_s = \theta_s - \hat{\theta}_s$ and $\tilde{w}_s = w_s - \hat{w}_s$.
To ensure stability, we design the virtual controller and the parameter adaptation laws as follows, and the stability will be proven in the next section.
$$\beta_1 = -c_1\hat{z}_1 - 3\hat{z}_1 - \hat{\theta}_1^T\varphi_1(\tilde{x}_1) - \hat{w}_1^T\varphi_1(\varsigma, \tilde{x}_1) + \dot{y}_0 \tag{15}$$

$$\dot{\hat{\theta}}_1 = p_1\hat{z}_1\varphi_1(\tilde{x}_1) - \sigma_1\hat{\theta}_1 \tag{16}$$

$$\dot{\hat{w}}_1 = q_1\hat{z}_1\varphi_1(\varsigma, \tilde{x}_1) - \bar{\sigma}_1\hat{w}_1 \tag{17}$$

where $c_1 > 0$, $p_1 > 0$, $q_1 > 0$, $\sigma_1 > 0$ and $\bar{\sigma}_1 > 0$ are design parameters, $\hat{z}_1 = \tilde{x}_1 - \hat{w}_1^T\varphi_1(\varsigma, \tilde{x}_1)$, and the vector $\hat{\theta}_s$ provides an estimate of the ideal weight $\theta_s$.
Remark 2.
The main challenge for adversarial attack recovery algorithms lies in achieving complete compensation or suppression of the attack signals. Since the tracking error $z_1$ in (10) is unavailable, we introduce a novel error transformation and construct the computable surrogate $\hat{z}_1 = \tilde{x}_1 - \hat{w}_1^T\varphi_1(\varsigma, \tilde{x}_1)$ in place of $z_1$. The equivalence between $\hat{z}_1$ and $z_1$ depends on the approximation performance of (8) and (9), respectively; the estimation performance of NNs is guaranteed on compact sets.
Step $s$ $(2 \le s \le n-1)$: By taking the derivative of both sides of (10), it can be obtained that

$$\dot{z}_s = \dot{x}_s - \dot{v}_s = x_{s+1} + f_s(\bar{x}_s) - \dot{v}_s = x_{s+1} + \theta_s^T\varphi_s(\bar{x}_s) + \varepsilon_s - \dot{v}_s \tag{18}$$
From (11), one has

$$\dot{\xi}_s = \dot{v}_s - \dot{\beta}_{s-1} = \frac{\beta_{s-1} - v_s}{\tau_s} - \dot{\beta}_{s-1} = -\frac{\xi_s}{\tau_s} + N_s(\cdot) \tag{19}$$

where $N_s(\cdot) = -\dot{\beta}_{s-1} = N_s(z_1, \ldots, z_s, \theta_1, \ldots, \theta_s, w_1, \ldots, w_s, \xi_2, \ldots, \xi_s, y_0, \dot{y}_0)$ is a known continuous function.
Similarly to the Lyapunov function in (14), we choose the following Lyapunov function in step $s$:

$$V_s = V_{s-1} + \frac{1}{2}\left(z_s^2 + \frac{1}{p_s}\tilde{\theta}_s^T\tilde{\theta}_s + \frac{1}{q_s}\tilde{w}_s^T\tilde{w}_s + \xi_s^2\right) \tag{20}$$
To ensure stability, we design the virtual controller and the parameter adaptation laws as follows, and the stability will be proven in the next section.
$$\beta_s = -c_s\hat{z}_s - 3\hat{z}_s - \hat{\theta}_s^T\varphi_s(\tilde{\bar{x}}_s) - \hat{w}_s^T\varphi_s(\varsigma, \tilde{\bar{x}}_s) + \dot{v}_s \tag{21}$$

$$\dot{\hat{\theta}}_s = p_s\hat{z}_s\varphi_s(\tilde{\bar{x}}_s) - \sigma_s\hat{\theta}_s \tag{22}$$

$$\dot{\hat{w}}_s = q_s\hat{z}_s\varphi_s(\varsigma, \tilde{\bar{x}}_s) - \bar{\sigma}_s\hat{w}_s \tag{23}$$

where $c_s > 0$, $p_s > 0$, $q_s > 0$, $\sigma_s > 0$ and $\bar{\sigma}_s > 0$ are design parameters, and $\hat{z}_s = \tilde{x}_s - \hat{w}_s^T\varphi_s(\varsigma, \tilde{\bar{x}}_s)$.
Remark 3.
In [19], an anti-attack observer was designed that can compensate for attacks and restore the system state, but it only considers the case where the sensor output is attacked. Unlike reference [19], this article studies the case where all states are attacked and all variables are unavailable. The estimation performance of NNs is guaranteed on compact sets.
Step $n$: Similarly to step $s$, by taking the derivative of both sides of (10) and using (1), (3) and (12), it can be obtained that

$$\dot{z}_n = \dot{x}_n - \dot{v}_n = \tilde{u} + f_n(\bar{x}_n) - \dot{v}_n = u + a(t, \bar{x}_n) + f_n(\bar{x}_n) - \dot{v}_n = u + v^T\phi(\varsigma, \bar{x}_n) + \upsilon_0 + \theta_n^T\varphi_n(\bar{x}_n) + \varepsilon_n - \dot{v}_n \tag{24}$$
Similarly to the Lyapunov functions (14) and (20), we choose the following Lyapunov function in step $n$:

$$V_n = V_{n-1} + \frac{1}{2}\left(z_n^2 + \frac{1}{p_n}\tilde{\theta}_n^T\tilde{\theta}_n + \frac{1}{q_n}\tilde{w}_n^T\tilde{w}_n + \frac{1}{m}\tilde{v}^T\tilde{v}\right) \tag{25}$$
To ensure stability, we design the virtual controller and the parameter adaptation laws as follows, and the stability will be proven in the next section.
$$u = -c_n\hat{z}_n - 3\hat{z}_n - \hat{\theta}_n^T\varphi_n(\tilde{\bar{x}}_n) - \hat{v}^T\phi(\varsigma, \tilde{\bar{x}}_n) + \dot{v}_n \tag{26}$$

$$\dot{\hat{\theta}}_n = p_n\hat{z}_n\varphi_n(\tilde{\bar{x}}_n) - \sigma_n\hat{\theta}_n \tag{27}$$

$$\dot{\hat{w}}_n = q_n\hat{z}_n\varphi_n(\varsigma, \tilde{\bar{x}}_n) - \bar{\sigma}_n\hat{w}_n \tag{28}$$

$$\dot{\hat{v}} = m\hat{z}_n\phi(\varsigma, \tilde{\bar{x}}_n) - \bar{\sigma}\hat{v} \tag{29}$$

where $c_n > 0$, $p_n > 0$, $q_n > 0$, $\sigma_n > 0$, $\bar{\sigma}_n > 0$ and $\bar{\sigma} > 0$ are design parameters. Additionally, $\hat{z}_n = \tilde{x}_n - \hat{w}_n^T\varphi_n(\varsigma, \tilde{\bar{x}}_n)$.
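For concreteness, a hedged Python sketch of the step-1 laws (15)-(17) for a scalar first subsystem is given below. The feature-map callables `phi1` and `phi1_t` stand for the Gaussian activation vectors of Section 3, and the default gains are taken from Table 1; both are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def step1_laws(x1_tilde, varsigma, y0_dot, theta1_hat, w1_hat, phi1, phi1_t,
               c1=50.0, p1=1.0, q1=1.0, sigma1=0.01, sigma1_bar=0.02):
    """One evaluation of the virtual control (15) and adaptive laws (16)-(17)."""
    f = phi1(x1_tilde)                  # varphi_1(x~_1), shape (r,)
    g = phi1_t(varsigma, x1_tilde)      # varphi_1(varsigma, x~_1), shape (r,)
    z1_hat = x1_tilde - w1_hat @ g      # surrogate tracking error, Remark 2
    beta1 = (-c1 * z1_hat - 3.0 * z1_hat
             - theta1_hat @ f - w1_hat @ g + y0_dot)           # (15)
    theta1_hat_dot = p1 * z1_hat * f - sigma1 * theta1_hat     # (16)
    w1_hat_dot = q1 * z1_hat * g - sigma1_bar * w1_hat         # (17)
    return beta1, theta1_hat_dot, w1_hat_dot
```

The later steps follow the same pattern with $\dot{v}_s$ in place of $\dot{y}_0$, per (21)-(29).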
The configuration of the NN-adaptive state-feedback control scheme is shown in Figure 1.

5. Stability Analysis

Theorem 1.
For the nonlinear CPSs (1), suppose that the NN attack reconstruction strategies (8) and (9), the secure control scheme (26), the adaptive laws (16), (17), (22), (23), (28) and (29), and the virtual control laws (15), (21) and (27) are adopted. If $V_n(0)$ is bounded for all initial conditions, then the state-feedback secure control scheme makes the following properties valid:
(1)
The obtained NN attack reconstruction strategies in an insecure network with all information unavailable can effectively estimate unknown attacks.
(2)
The above secure control process reaches the expected trajectory and all the closed-loop signals are bounded.
(3)
The output error of the system converges to a small neighborhood of zero.
Proof. 
According to (19) and (20), we can obtain
$$\begin{aligned} \dot{V}_1 &= z_1\dot{z}_1 + \frac{1}{p_1}\tilde{\theta}_1^T(\dot{\theta}_1 - \dot{\hat{\theta}}_1) + \frac{1}{q_1}\tilde{w}_1^T(\dot{w}_1 - \dot{\hat{w}}_1) \\ &= z_1(z_2 + \xi_2 + \theta_1^T\varphi_1(x_1) + \varepsilon_1) - \frac{1}{p_1}\tilde{\theta}_1^T\dot{\hat{\theta}}_1 - \frac{1}{q_1}\tilde{w}_1^T\dot{\hat{w}}_1 \\ &= z_1(z_2 + \xi_2 + \theta_1^T\varphi_1(x_1) - \theta_1^T\varphi_1(\tilde{x}_1) + \theta_1^T\varphi_1(\tilde{x}_1) + \varepsilon_1 + \tilde{w}_1^T\varphi_1(\varsigma,\tilde{x}_1) - \tilde{w}_1^T\varphi_1(\varsigma,\tilde{x}_1)) - \frac{1}{p_1}\tilde{\theta}_1^T\dot{\hat{\theta}}_1 - \frac{1}{q_1}\tilde{w}_1^T\dot{\hat{w}}_1 \\ &= z_1(z_2 + \xi_2 + \theta_1^T\varphi_1(x_1) - \theta_1^T\varphi_1(\tilde{x}_1) + \hat{\theta}_1^T\varphi_1(\tilde{x}_1) + \varepsilon_1 + \hat{w}_1^T\varphi_1(\varsigma,\tilde{x}_1) - w_1^T\varphi_1(\varsigma,\tilde{x}_1) + \beta_1) + (z_1 - \hat{z}_1)\tilde{\theta}_1^T\varphi_1(\tilde{x}_1) \\ &\quad + \frac{1}{p_1}\tilde{\theta}_1^T(p_1\hat{z}_1\varphi_1(\tilde{x}_1) - \dot{\hat{\theta}}_1) + (z_1 - \hat{z}_1)\tilde{w}_1^T\varphi_1(\varsigma,\tilde{x}_1) + \frac{1}{q_1}\tilde{w}_1^T(q_1\hat{z}_1\varphi_1(\varsigma,\tilde{x}_1) - \dot{\hat{w}}_1) \end{aligned} \tag{30}$$
According to (1) and (10), we have
$$z_1 - \hat{z}_1 = -a_1(t, x_1) + \hat{w}_1^T\varphi_1(\varsigma, \tilde{x}_1) = -(w_1^T\varphi_1(\varsigma, x_1) + \varepsilon_{10}) + \hat{w}_1^T\varphi_1(\varsigma, \tilde{x}_1) \tag{31}$$
From (14), by applying the inequality $2ab \le a^2 + b^2$, we can derive the following relationships, which simplify the analysis:
$$z_1(z_2 + \xi_2 + \varepsilon_1) \le \frac{3}{2}z_1^2 + \frac{1}{2}z_2^2 + \frac{1}{2}\varepsilon_1^{*2} + \frac{1}{2}\xi_2^2 \tag{32}$$

$$z_1(\theta_1^T\varphi_1(x_1) - \theta_1^T\varphi_1(\tilde{x}_1) - w_1^T\varphi_1(\varsigma,\tilde{x}_1)) \le \frac{3}{2}z_1^2 + \|\theta_1\|^2 + \frac{1}{2}\|w_1\|^2 \tag{33}$$

$$\begin{aligned} &(z_1 - \hat{z}_1)\tilde{\theta}_1^T\varphi_1(\tilde{x}_1) + (z_1 - \hat{z}_1)\tilde{w}_1^T\varphi_1(\varsigma,\tilde{x}_1) \\ &\quad = (\hat{w}_1^T\varphi_1(\varsigma,\tilde{x}_1) - w_1^T\varphi_1(\varsigma,x_1) - \varepsilon_{10})(\tilde{w}_1^T + \tilde{\theta}_1^T)\varphi_1(\varsigma,\tilde{x}_1) \\ &\quad = (w_1^T\varphi_1(\varsigma,\tilde{x}_1) - \tilde{w}_1^T\varphi_1(\varsigma,\tilde{x}_1) - w_1^T\varphi_1(\varsigma,x_1) - \varepsilon_{10})(\tilde{w}_1^T + \tilde{\theta}_1^T)\varphi_1(\varsigma,\tilde{x}_1) \\ &\quad \le (2\|w_1\| + \|\tilde{w}_1\| + \varepsilon_{10}^*)(\|\tilde{w}_1\| + \|\tilde{\theta}_1\|) \le \|w_1\|^2 + \|\tilde{w}_1\|^2 + 2\|\tilde{\theta}_1\|^2 + \varepsilon_{10}^{*2} \end{aligned} \tag{34}$$
Substituting (32)–(34) into (30) yields
$$\begin{aligned} \dot{V}_1 &\le z_1(3z_1 + \hat{\theta}_1^T\varphi_1(\tilde{x}_1) + \hat{w}_1^T\varphi_1(\varsigma,\tilde{x}_1) + \beta_1) + \frac{1}{2}z_2^2 + \frac{1}{2}\xi_2^2 + 2\|\tilde{\theta}_1\|^2 + \|\tilde{w}_1\|^2 + \frac{1}{p_1}\tilde{\theta}_1^T(p_1\hat{z}_1\varphi_1(\tilde{x}_1) - \dot{\hat{\theta}}_1) \\ &\quad + \|\theta_1\|^2 + \frac{1}{q_1}\tilde{w}_1^T(q_1\hat{z}_1\varphi_1(\varsigma,\tilde{x}_1) - \dot{\hat{w}}_1) + \frac{3}{2}\|w_1\|^2 + \frac{1}{2}\varepsilon_1^{*2} + \varepsilon_{10}^{*2} \end{aligned} \tag{35}$$
Substituting (15)–(17) into (35) yields
$$\dot{V}_1 \le z_1(-c_1\hat{z}_1 + 3z_1 - 3\hat{z}_1) + \frac{1}{2}z_2^2 + \frac{1}{2}\xi_2^2 + 2\|\tilde{\theta}_1\|^2 + \|\tilde{w}_1\|^2 + \frac{\sigma_1}{p_1}\tilde{\theta}_1^T\hat{\theta}_1 + \frac{\bar{\sigma}_1}{q_1}\tilde{w}_1^T\hat{w}_1 + \|\theta_1\|^2 + \frac{3}{2}\|w_1\|^2 + \frac{1}{2}\varepsilon_1^{*2} + \varepsilon_{10}^{*2} \tag{36}$$
By employing the inequality 2 a b a 2 + b 2 , we can obtain
$$\begin{aligned} z_1(-c_1\hat{z}_1 + 3z_1 - 3\hat{z}_1) &= -c_1z_1(z_1 + w_1^T\varphi_1(\varsigma,x_1) + \varepsilon_{10} - \hat{w}_1^T\varphi_1(\varsigma,\tilde{x}_1)) + 3z_1(z_1 - \hat{z}_1) \\ &\le -c_1z_1^2 + c_1|z_1|(2\|w_1\| + \|\tilde{w}_1\| + \varepsilon_{10}^*) + 3|z_1|(2\|w_1\| + \|\tilde{w}_1\| + \varepsilon_{10}^*) + 3z_1^2 \\ &\le -c_1z_1^2 + 3z_1^2 + \frac{9 + 4c_1^2}{2}\|\tilde{w}_1\|^2 + (18 + 2c_1^2)\|w_1\|^2 + 5\varepsilon_{10}^{*2} \end{aligned} \tag{37}$$
According to (36) and (37), we have
$$\dot{V}_1 \le -(c_1 - 3)z_1^2 + \frac{1}{2}z_2^2 + \frac{1}{2}\xi_2^2 + \|\tilde{\theta}_1\|^2 + \frac{9 + 4c_1^2}{2}\|\tilde{w}_1\|^2 + \frac{\sigma_1}{p_1}\tilde{\theta}_1^T\hat{\theta}_1 + \frac{\bar{\sigma}_1}{q_1}\tilde{w}_1^T\hat{w}_1 + D_1 \tag{38}$$

where $D_1 = \|\theta_1\|^2 + \left(\frac{39}{2} + 2c_1^2\right)\|w_1\|^2 + \frac{1}{2}\varepsilon_1^{*2} + 6\varepsilon_{10}^{*2}$.
According to (12) and (18), we can obtain
$$\begin{aligned} \dot{V}_s &= \dot{V}_{s-1} + z_s(\beta_s - \dot{v}_s + z_{s+1} + \xi_{s+1} + \varepsilon_s + \theta_s^T\varphi_s(\bar{x}_s)) + \xi_s(\dot{v}_s - \dot{\beta}_{s-1}) - \frac{1}{p_s}\tilde{\theta}_s^T\dot{\hat{\theta}}_s - \frac{1}{q_s}\tilde{w}_s^T\dot{\hat{w}}_s \\ &= \dot{V}_{s-1} + z_s(\beta_s - \dot{v}_s + z_{s+1} + \xi_{s+1} + \theta_s^T\varphi_s(\bar{x}_s) - \theta_s^T\varphi_s(\tilde{\bar{x}}_s) + \hat{\theta}_s^T\varphi_s(\tilde{\bar{x}}_s) + \varepsilon_s + \hat{w}_s^T\varphi_s(\varsigma,\tilde{\bar{x}}_s) - w_s^T\varphi_s(\varsigma,\tilde{\bar{x}}_s)) + \xi_s(\dot{v}_s - \dot{\beta}_{s-1}) \\ &\quad + (z_s - \hat{z}_s)\tilde{\theta}_s^T\varphi_s(\tilde{\bar{x}}_s) + \frac{1}{p_s}\tilde{\theta}_s^T(p_s\hat{z}_s\varphi_s(\tilde{\bar{x}}_s) - \dot{\hat{\theta}}_s) + (z_s - \hat{z}_s)\tilde{w}_s^T\varphi_s(\varsigma,\tilde{\bar{x}}_s) + \frac{1}{q_s}\tilde{w}_s^T(q_s\hat{z}_s\varphi_s(\varsigma,\tilde{\bar{x}}_s) - \dot{\hat{w}}_s) \end{aligned} \tag{39}$$
Let $\Omega = \left\{(z_k, \tilde{\theta}_k, \tilde{w}_k, \xi_k) \,\middle|\, \sum_{k=1}^{s}\left(z_k^2 + \frac{1}{p_k}\tilde{\theta}_k^T\tilde{\theta}_k + \frac{1}{q_k}\tilde{w}_k^T\tilde{w}_k\right) + \sum_{k=2}^{s}\xi_k^2 \le N_k\right\}$. Since $\Omega$ is a compact set and $N_s(\cdot)$ satisfies $|N_s(\cdot)| \le N_s^*$ on $\Omega$ for a constant $N_s^*$, from (11) and (20), and by using the inequality $2ab \le a^2 + b^2$, we can obtain
$$z_s(z_{s+1} + \xi_{s+1} + \varepsilon_s) \le \frac{3}{2}z_s^2 + \frac{1}{2}z_{s+1}^2 + \frac{1}{2}\xi_{s+1}^2 + \frac{1}{2}\varepsilon_s^{*2} \tag{40}$$

$$z_s(\theta_s^T\varphi_s(\bar{x}_s) - \theta_s^T\varphi_s(\tilde{\bar{x}}_s) - w_s^T\varphi_s(\varsigma,\tilde{\bar{x}}_s)) \le \frac{3}{2}z_s^2 + \|\theta_s\|^2 + \frac{1}{2}\|w_s\|^2 \tag{41}$$

$$\xi_s(\dot{v}_s - \dot{\beta}_{s-1}) \le -\left(\frac{1}{\tau_s} - N_s^{*2}\right)\xi_s^2 \tag{42}$$

$$(z_s - \hat{z}_s)\tilde{\theta}_s^T\varphi_s(\tilde{\bar{x}}_s) + (z_s - \hat{z}_s)\tilde{w}_s^T\varphi_s(\varsigma,\tilde{\bar{x}}_s) \le \|w_s\|^2 + 2\|\tilde{\theta}_s\|^2 + \|\tilde{w}_s\|^2 + \varepsilon_{s0}^{*2} \tag{43}$$
Substituting (40)–(43) into (20) yields
$$\begin{aligned} \dot{V}_s &\le \dot{V}_{s-1} + z_s(3z_s + \hat{\theta}_s^T\varphi_s(\tilde{\bar{x}}_s) + \hat{w}_s^T\varphi_s(\varsigma,\tilde{\bar{x}}_s) - \dot{v}_s + \beta_s) + \frac{1}{2}z_{s+1}^2 - \left(\frac{1}{\tau_s} - N_s^{*2}\right)\xi_s^2 + \frac{1}{2}\xi_{s+1}^2 + \|\tilde{w}_s\|^2 \\ &\quad + \frac{1}{p_s}\tilde{\theta}_s^T(p_s\hat{z}_s\varphi_s(\tilde{\bar{x}}_s) - \dot{\hat{\theta}}_s) + 2\|\tilde{\theta}_s\|^2 + \frac{1}{q_s}\tilde{w}_s^T(q_s\hat{z}_s\varphi_s(\varsigma,\tilde{\bar{x}}_s) - \dot{\hat{w}}_s) + \|\theta_s\|^2 + \frac{3}{2}\|w_s\|^2 + \frac{1}{2}\varepsilon_s^{*2} + \varepsilon_{s0}^{*2} \end{aligned} \tag{44}$$
Substituting (21)–(23) into (44) yields
$$\dot{V}_s \le \dot{V}_{s-1} + z_s(-c_s\hat{z}_s + 3z_s - 3\hat{z}_s) + \frac{1}{2}z_{s+1}^2 - \left(\frac{1}{\tau_s} - N_s^{*2}\right)\xi_s^2 + \frac{1}{2}\xi_{s+1}^2 + \|\tilde{w}_s\|^2 + 2\|\tilde{\theta}_s\|^2 + \frac{\sigma_s}{p_s}\tilde{\theta}_s^T\hat{\theta}_s + \frac{\bar{\sigma}_s}{q_s}\tilde{w}_s^T\hat{w}_s + \|\theta_s\|^2 + \frac{1}{2}\|w_s\|^2 + \frac{1}{2}\varepsilon_s^{*2} + \varepsilon_{s0}^{*2} \tag{45}$$
By employing the inequality 2 a b a 2 + b 2 , we can obtain
$$z_s(-c_s\hat{z}_s + 3z_s - 3\hat{z}_s) \le -c_sz_s^2 + 3z_s^2 + \frac{9 + 4c_s^2}{2}\|\tilde{w}_s\|^2 + (18 + 2c_s^2)\|w_s\|^2 + 5\varepsilon_{s0}^{*2} \tag{46}$$
According to (45) and (46), we have
$$\dot{V}_s \le -\sum_{k=1}^{s}(c_k - 3)z_k^2 + \frac{1}{2}z_{s+1}^2 - \sum_{k=2}^{s}\left(\frac{1}{\tau_k} - N_k^{*2} - \frac{1}{2}\right)\xi_k^2 + \sum_{k=1}^{s}\left(\frac{\sigma_k}{p_k}\tilde{\theta}_k^T\hat{\theta}_k + \frac{\bar{\sigma}_k}{q_k}\tilde{w}_k^T\hat{w}_k\right) + 2s\|\tilde{\theta}_1\|^2 + \sum_{k=1}^{s}\left(\frac{9 + 4c_k^2}{2}\|\tilde{w}_1\|^2 + \|\tilde{w}_k\|^2\right) + D_s \tag{47}$$

where $D_s = D_{s-1} + \|\theta_s\|^2 + \left(\frac{39}{2} + 2c_s^2\right)\|w_s\|^2 + \frac{1}{2}\varepsilon_s^{*2} + 6\varepsilon_{s0}^{*2}$.
According to (1), (3), (9) and (10), we can obtain
$$\begin{aligned} \dot{V}_n &= \dot{V}_{n-1} + z_n(u + v^T\phi(\varsigma,\bar{x}_n) - v^T\phi(\varsigma,\tilde{\bar{x}}_n) + \hat{v}^T\phi(\varsigma,\tilde{\bar{x}}_n) + \theta_n^T\varphi_n(\bar{x}_n) - \theta_n^T\varphi_n(\tilde{\bar{x}}_n) + \hat{\theta}_n^T\varphi_n(\tilde{\bar{x}}_n) + \upsilon_0 + \varepsilon_n + \hat{w}_n^T\varphi_n(\varsigma,\tilde{\bar{x}}_n) - w_n^T\varphi_n(\varsigma,\tilde{\bar{x}}_n)) \\ &\quad + (z_n - \hat{z}_n)\tilde{\theta}_n^T\varphi_n(\tilde{\bar{x}}_n) + (z_n - \hat{z}_n)\tilde{w}_n^T\varphi_n(\varsigma,\tilde{\bar{x}}_n) + (z_n - \hat{z}_n)\tilde{v}^T\phi(\varsigma,\tilde{\bar{x}}_n) \\ &\quad + \frac{1}{p_n}\tilde{\theta}_n^T(p_n\hat{z}_n\varphi_n(\tilde{\bar{x}}_n) - \dot{\hat{\theta}}_n) + \frac{1}{q_n}\tilde{w}_n^T(q_n\hat{z}_n\varphi_n(\varsigma,\tilde{\bar{x}}_n) - \dot{\hat{w}}_n) + \frac{1}{m}\tilde{v}^T(m\hat{z}_n\phi(\varsigma,\tilde{\bar{x}}_n) - \dot{\hat{v}}) \end{aligned} \tag{48}$$
Similarly to (40)–(46), we can obtain
$$\dot{V}_n \le -\sum_{s=1}^{n}(c_s - 3)z_s^2 - \sum_{s=2}^{n}\left(\frac{1}{\tau_s} - N_s^{*2} - \frac{1}{2}\right)\xi_s^2 + \sum_{s=1}^{n}\left(\frac{\sigma_s}{p_s}\tilde{\theta}_s^T\hat{\theta}_s + \frac{\bar{\sigma}_s}{q_s}\tilde{w}_s^T\hat{w}_s\right) + \frac{\bar{\sigma}}{m}\tilde{v}^T\hat{v} + 2n\|\tilde{\theta}_1\|^2 + \sum_{s=1}^{n}\left(\frac{9 + 4c_s^2}{2}\|\tilde{w}_1\|^2 + \|\tilde{w}_s\|^2\right) + D_n \tag{49}$$

where $D_n = D_{n-1} + \|\theta_n\|^2 + \left(\frac{41}{2} + 2c_n^2\right)\|w_n\|^2 + \frac{1}{2}\varepsilon_n^{*2} + 6\varepsilon_{n0}^{*2} + \frac{1}{2}\upsilon_0^{*2}$.
By employing the inequality 2 a b a 2 + b 2 , we can obtain
$$\tilde{\theta}_s^T\hat{\theta}_s \le -\frac{1}{2}\|\tilde{\theta}_s\|^2 + \frac{1}{2}\|\theta_s\|^2, \quad \tilde{w}_s^T\hat{w}_s \le -\frac{1}{2}\|\tilde{w}_s\|^2 + \frac{1}{2}\|w_s\|^2, \quad \tilde{v}^T\hat{v} \le -\frac{1}{2}\|\tilde{v}\|^2 + \frac{1}{2}\|v\|^2 \tag{50}$$
According to (49) and (50), we have
$$\dot{V}_n \le -\sum_{s=1}^{n}(c_s - 3)z_s^2 - \sum_{s=2}^{n}\left(\frac{1}{\tau_s} - N_s^{*2} - \frac{1}{2}\right)\xi_s^2 - \sum_{s=1}^{n}\left(\frac{\sigma_s}{2p_s}\|\tilde{\theta}_s\|^2 - 2n\|\tilde{\theta}_1\|^2\right) - \sum_{s=1}^{n}\left(\frac{\bar{\sigma}_s}{2q_s}\|\tilde{w}_s\|^2 - \frac{9 + 4c_s^2}{2}\|\tilde{w}_1\|^2 - \|\tilde{w}_s\|^2\right) - \frac{1}{2}\|\tilde{v}\|^2 + D_n' \tag{51}$$

where $D_n' = D_n + \sum_{s=1}^{n}\left(\frac{\sigma_s}{2p_s}\|\theta_s\|^2 + \frac{\bar{\sigma}_s}{2q_s}\|w_s\|^2\right) + \frac{1}{2}\|v\|^2$.
Then, (51) can be written as
$$\dot{V}_n \le -C_nV_n + D_n' \tag{52}$$

where $C_n = \min\left\{c_s - 3,\ \frac{1}{\tau_s} - N_s^{*2} - \frac{1}{2},\ \frac{\sigma_s}{2p_s},\ \frac{\bar{\sigma}_s}{2q_s},\ 2n,\ \frac{9 + 4c_s^2}{2},\ 1\right\}$, $s = 1, \ldots, n$. This means that the system exhibits stability and boundedness over time, ensuring that the state variables remain within a certain range, even in the presence of potential disturbances or uncertainties.
Integrating both sides of (52) over $[0, t]$, it can be found that
$$0 \le V_n(t) \le \frac{D_n'}{C_n} + \left[V_n(0) - \frac{D_n'}{C_n}\right]e^{-C_nt} \tag{53}$$
Denote the initial conditions as $\tilde{w}(0) = [\tilde{w}_1(0), \ldots, \tilde{w}_n(0)]^T$, $\tilde{\theta}(0) = [\tilde{\theta}_1(0), \ldots, \tilde{\theta}_n(0)]^T$, $\tilde{v}(0)$, and $z(0) = [z_1(0), \ldots, z_n(0)]^T$. According to (53) and the definition of $V_n(z, \tilde{\theta}, \tilde{w}, \tilde{v})$, for $t \ge t_0 + T_0$ we obtain $\|(z, \tilde{\theta}, \tilde{w}, \tilde{v})\|^2 \le 2D_n'/C_n$, and the control tracking error satisfies $\|z\| \le \sqrt{2V_n(0)e^{-C_nt} + 2D_n'/C_n}$.
The proof of Theorem 1 is complete. □
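As a quick numerical sanity check of the comparison argument behind (52) and (53), the sketch below integrates the worst case $\dot{V} = -CV + D$ and verifies that the trajectory stays under the closed-form bound. The constants are arbitrary illustrative values, not the paper's $C_n$ and $D_n'$.

```python
import numpy as np

C, D, V0, dt, T = 2.0, 0.5, 3.0, 1e-3, 5.0
t = np.arange(0.0, T, dt)
V = np.empty_like(t)
V[0] = V0
for k in range(1, t.size):
    V[k] = V[k-1] + dt * (-C * V[k-1] + D)    # worst case: V_dot = -C*V + D
bound = D / C + (V0 - D / C) * np.exp(-C * t)  # closed-form bound (53)
assert np.all(V <= bound + 1e-6)               # trajectory stays under the bound
```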
Remark 4.
For the proposed NN-based reconstruction, the time complexity is generally $O(TLn)$, where $T$ is the number of iterations, $L$ is the number of layers in the network, and $n$ is the number of neurons per layer; for parameterized attack reconstruction methods, the time complexity is generally $O(mk)$, where $m$ is the number of parameters to be optimized and $k$ is the number of optimization iterations. Although parameterized attack algorithms typically have lower computational complexity, the parameterization assumption they rely on is quite restrictive. This assumption limits their applicability exclusively to parameterized attack models, making them less flexible in general adversarial attack scenarios. In contrast, our approach removes this restrictive assumption, offering a more general and adaptable framework for handling a broader variety of adversarial attacks.
In this paper, parameter selection guidelines are as follows:
(1)
Select the design parameters $q_s$ and $m$ to determine the adversarial attack reconstruction adaptive laws $\dot{\hat{w}}_s$ and $\dot{\hat{v}}$.
(2)
Select the design parameters $c_s$, $s = 1, \ldots, n$, $p_s$, $\sigma_s$, $\bar{\sigma}_s$ and $\bar{\sigma}$ to determine the virtual controllers $\beta_s$ and the adaptive laws $\dot{\hat{\theta}}_s$.

6. Simulation Results

In this section, a two-joint robot is used to verify the effectiveness of the proposed NN attack reconstruction strategies and state-feedback secure control scheme against adversarial attacks. We expect to achieve the following control objectives:
(1)
The obtained NN attack reconstruction strategies in an insecure network with all information unavailable can effectively estimate unknown attacks.
(2)
The above secure control process reaches the expected trajectory, and all the closed-loop signals are bounded, i.e., $\|(z, \tilde{\theta}, \tilde{w}, \tilde{v})\|^2 \le 2D_n'/C_n$.
(3)
The output error of the system converges to a small neighborhood of zero, i.e., $\lim_{t \to \infty}\|y - y_0\| = \varepsilon_0$, where $\varepsilon_0$ is a small positive constant.
Example 1.
The dynamic equation of a two-joint manipulator is defined as follows:
$$M(q)\ddot{q} + C(q, \dot{q})\dot{q} + d = u, \quad y = q \tag{54}$$
where $q = [\theta_1, \theta_2]^T$, $\dot{q}$ and $\ddot{q} \in \mathbb{R}^2$ are the angle, angular velocity and angular acceleration of the manipulator, respectively; $M(q) \in \mathbb{R}^{2 \times 2}$ is the inertia matrix; $C(q, \dot{q}) \in \mathbb{R}^{2 \times 2}$ contains the centripetal and Coriolis terms; $d$ is an external disturbance, which does not affect the system's convergence but allows for a more realistic representation of the system's behavior under practical conditions; $u = [u_1, u_2]^T$ is the control input torque vector; and $y$ is the output vector. To apply the backstepping method, we define $x_1 = [x_{1,1}, x_{1,2}]^T = q$ and $x_2 = [x_{2,1}, x_{2,2}]^T = \dot{q}$, which allows (54) to be rewritten as
$$\dot{x}_1 = x_2, \quad \dot{x}_2 = M^{-1}(x_1)u - M^{-1}(x_1)C(x_1, x_2)x_2 - M^{-1}(x_1)d, \quad y = x_1 \tag{55}$$
with
$$M(x_1) = \begin{bmatrix} J_1 + J_2 + 2m_2r_2l_1\cos\theta_2 & J_2 + m_2r_2l_1\cos\theta_2 \\ J_2 + m_2r_2l_1\cos\theta_2 & J_2 \end{bmatrix}$$

$$J_1 = \frac{4}{3}m_1r_1^2 + m_2l_1^2, \quad J_2 = \frac{4}{3}m_2r_2^2$$

$$C(x_1, x_2) = \begin{bmatrix} -2m_2r_2l_1\dot{\theta}_2\sin\theta_2 & -m_2r_2l_1\dot{\theta}_2\sin\theta_2 \\ m_2r_2l_1\dot{\theta}_1\sin\theta_2 & 0 \end{bmatrix}$$
where $m_1$ and $m_2$ are the masses of the first and second connecting rods; $l_1$ and $l_2$ are the lengths of the first and second connecting rods; $r_1$ is the distance from the first joint to the center of gravity of the first connecting rod and $r_2$ is the distance from the second joint to the center of gravity of the second connecting rod; $\theta_1$ and $\theta_2$ are the angles of the first and second connecting rods; and $J_1$ and $J_2$ are the inertias of the first and second connecting rods. We describe the two-joint robot in the simulation by means of a three-dimensional model, which is shown in Figure 2.
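A minimal sketch of the dynamics (54) and (55), using the inertia and Coriolis matrices above and the physical parameters of Table 1, could look as follows; the Euler-style use of this function, the disturbance term, and solving with `numpy.linalg.solve` instead of forming $M^{-1}(x_1)$ are implementation choices, not prescribed by the paper.

```python
import numpy as np

m1 = m2 = 0.765
l1 = 0.25
r1 = r2 = 0.15
J1 = 4.0 / 3.0 * m1 * r1**2 + m2 * l1**2
J2 = 4.0 / 3.0 * m2 * r2**2

def M(q):
    """Inertia matrix M(x_1) of the two-link arm."""
    c2 = np.cos(q[1])
    a = J2 + m2 * r2 * l1 * c2
    return np.array([[J1 + J2 + 2.0 * m2 * r2 * l1 * c2, a], [a, J2]])

def Cmat(q, dq):
    """Centripetal/Coriolis matrix C(x_1, x_2)."""
    s2 = np.sin(q[1])
    return m2 * r2 * l1 * np.array([[-2.0 * dq[1] * s2, -dq[1] * s2],
                                    [dq[0] * s2, 0.0]])

def x2_dot(x1, x2, u, t):
    """Right-hand side of (55) for the velocity subsystem."""
    d = 0.25 * np.sin(t) * np.ones(2)   # external disturbance, as in Section 6
    return np.linalg.solve(M(x1), u - Cmat(x1, x2) @ x2 - d)
```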
The FDI attacks model in (2) is defined as follows:
$$\tilde{x}_1(t) = x_1(t) + [a_1(t, 0),\ a_1(t, 0)]^T, \quad \tilde{x}_2(t) = x_2(t) + [a_2(t, x_2),\ a_2(t, x_2)]^T \tag{56}$$
with
$$a_1(t, 0) = \begin{cases} 0, & 0 \le t < 15 \\ \sin(t) + 0.5, & t \ge 15 \end{cases}$$

$$a_2(t, x_2) = \begin{cases} 0, & 0 \le t < 10 \\ 0.5x_{2,1}^2 - x_{2,1}\sin(x_{2,1}), & t \ge 10 \end{cases}$$
The actuator attacks model in (3) is defined as follows:
$$\tilde{u}(t) = u(t) + [a(t, x_1),\ a(t, x_1)]^T$$
with
$$a(t, x_1) = \begin{cases} 0, & 0 \le t < 10 \\ x_{1,1}\cos(x_{1,1}), & t \ge 10 \end{cases}$$
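Written out directly, the attack signals of this example (as reconstructed above) take the following form; the function names are illustrative:

```python
import numpy as np

def a1(t):
    """Sensor attack on x_1, active after t = 15."""
    return np.sin(t) + 0.5 if t >= 15 else 0.0

def a2(t, x21):
    """Sensor attack on x_2, active after t = 10."""
    return 0.5 * x21**2 - x21 * np.sin(x21) if t >= 10 else 0.0

def a_act(t, x11):
    """Actuator attack, active after t = 10."""
    return x11 * np.cos(x11) if t >= 10 else 0.0
```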
The parameter learning process (for the centers and variances of the radial basis functions and the weights from the hidden layer to the output layer) follows the principle of automatically increasing the number of hidden neurons until the maximum number is reached. The spread of the radial basis functions is set to 1. In the simulation, the number of neurons for the unknown functions $M^{-1}(x_1)$ and $C(x_1, x_2)$ is chosen as 11, and the number of neurons for the sensor attacks $a_s(t, \bar{x}_s)$ and the actuator attack $a(t, \bar{x}_n)$ is chosen as 101. The activation functions are chosen as $\exp(-(\bar{x}_s - (0.2l - 10))^2/2)$, $l = 0, \ldots, 100$. The layers of the NN are shown in Figure 3.
We design the secure controller, virtual controller, and the parameter adaptive laws as
$$u = -c_2\hat{z}_2 - 3\hat{z}_2 - \hat{\theta}_2^T\varphi_2(\tilde{\bar{x}}_2) - \hat{v}^T\phi(\varsigma, \tilde{\bar{x}}_2) + \dot{v}_2$$

$$\beta_1 = -c_1\hat{z}_1 - 3\hat{z}_1 - \hat{\theta}_1^T\varphi_1(\tilde{x}_1) - \hat{w}_1^T\varphi_1(\varsigma, \tilde{x}_1) + \dot{y}_0$$

$$\dot{\hat{\theta}}_1 = p_1\hat{z}_1\varphi_1(\tilde{x}_1) - \sigma_1\hat{\theta}_1$$

$$\dot{\hat{\theta}}_2 = p_2\hat{z}_2\varphi_2(\tilde{\bar{x}}_2) - \sigma_2\hat{\theta}_2$$

$$\dot{\hat{w}}_1 = q_1\hat{z}_1\varphi_1(\varsigma, \tilde{x}_1) - \bar{\sigma}_1\hat{w}_1$$

$$\dot{\hat{w}}_2 = q_2\hat{z}_2\varphi_2(\varsigma, \tilde{\bar{x}}_2) - \bar{\sigma}_2\hat{w}_2$$

$$\dot{\hat{v}} = m\hat{z}_2\phi(\varsigma, \tilde{\bar{x}}_2) - \bar{\sigma}\hat{v}$$
The system initial conditions are chosen as $x_1(0) = [1, 0.1]^T$ and $x_2(0) = [0, 0.1]^T$; the other initial conditions of the systems are chosen as zero. Additionally, the reference signal is $y_0 = [\sin t, \sin t]^T$ and the external disturbance is $d = [0.25\sin t, 0.25\sin t]^T$. The design parameters in (56)–(58) and (64)–(71) are given in Table 1.
In this simulation, a sustained confrontation between attackers and defenders is considered. The NN attack reconstruction strategies are plotted in Figure 4, Figure 5 and Figure 6.
Figure 4, Figure 5 and Figure 6 show that the NN estimation algorithms (8) and (9) approximately learn the unknown adversarial attacks injected at $t = 10$ and $t = 15$, improving the ability to cope with them. The developed NN attack reconstruction strategies (8) and (9) resolve the state-compromise problem caused by the adversarial attacks. Thus, the obtained NN attack reconstruction strategies in an insecure network with all information unavailable can effectively estimate unknown attacks.
Figure 7, Figure 8 and Figure 9 show the performance trajectories of the designed state-feedback secure control scheme in a closed-loop control system. Figure 7 shows the attitude angles $\theta_i$, $i = 1, 2$, of the two-joint robot and their observations under the proposed NN attack reconstruction strategies, and the curves of the corresponding angular velocities $\dot{\theta}_i$, $i = 1, 2$, and their observations are plotted in Figure 8. It can be seen that the output error of the system converges to a small neighborhood of zero, i.e., $\lim_{t \to \infty}\|y - y_0\| = \varepsilon_0$. Figure 9 shows the control input signal of the two-joint robot. It indicates that the robot's attitude can still be consistent with the reference trajectory despite the adversarial attacks. Thus, the secure control process reaches the expected trajectory and all the closed-loop signals are bounded, i.e., $\|(z, \tilde{\theta}, \tilde{w}, \tilde{v})\|^2 \le 2D_n'/C_n$.
To emphasize the superiority of the designed scheme, both the algorithm in [21] and our proposed algorithm are implemented under identical conditions.
The simulation results are depicted in Figure 10, Figure 11 and Figure 12 (the adversarial attacks occur at $t = 10$ and $t = 15$). Figure 10, Figure 11 and Figure 12 indicate that the state-feedback control scheme of [21], which lacks an attack reconstruction strategy, cannot guarantee the stability of the considered CPSs.

7. Conclusions

This paper proposed an NN-adaptive secure control scheme for cyber-physical systems via attack reconstruction strategies, where the attack reconstruction strategy served as the solution to the NN estimation problem posed by the insecurity of the network. By introducing a novel error transformation, an NN-adaptive secure control method was formulated within the framework of backstepping. Based on the Lyapunov stability theory and the defined error transformation, it was proven that the secure control process reaches the expected trajectory and that all closed-loop signals are bounded. Finally, the effectiveness of the scheme was verified via a simulation of the attitude control of two-joint robots.

Author Contributions

Conceptualization, R.Z.; Methodology, D.H.; Writing—original draft, R.Z., D.H. and F.Y.; Writing—review & editing, F.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Natural Science Foundation of Xiamen City (3502Z20227195).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

The completion of this work was challenging, and we wish to thank everyone who contributed to it; their help has made this paper better and more complete. This experience has taught us a great deal and clarified our direction for the future. Thank you very much, everyone!

Conflicts of Interest

The authors declare no competing interests.

Abbreviations

NN: Neural Network
NNs: Neural Networks
CPSs: Cyber-Physical Systems
SSE: Secure State Estimation

References

  1. Sui, T.J.; Sun, X.M. The vulnerability of distributed state estimator under stealthy attacks. Automatica 2021, 133, 109869. [Google Scholar] [CrossRef]
  2. Yang, W.; Zhang, Y.; Chen, G.R.; Yang, C.; Shi, L. Distributed filtering under false data injection attacks. Automatica 2019, 102, 34–44. [Google Scholar] [CrossRef]
  3. Ao, W.; Song, Y.D.; Wen, C.Y. Distributed secure state estimation and control for CPSs under sensor attacks. IEEE Trans. Cybern. 2020, 50, 259–269. [Google Scholar] [CrossRef] [PubMed]
  4. Chen, G.D.; Yao, D.Y.; Li, H.Y.; Zhou, Q.; Lu, R.Q. Saturated threshold event-triggered control for multiagent systems under sensor attacks and its application to UAVs. IEEE Trans. Circuits Syst. I Regul. Pap. 2021, 69, 884–895. [Google Scholar] [CrossRef]
  5. Lu, A.Y.; Yang, G.H. Secure state estimation for cyber-physical systems under sparse sensor attacks via a switched Luenberger observer. Inf. Sci. 2017, 417, 454–464. [Google Scholar] [CrossRef]
  6. Shoukry, Y.; Tabuada, P. Event-Triggered state observers for sparse sensor noise/attacks. IEEE Trans. Autom. Control 2016, 61, 2079–2091. [Google Scholar] [CrossRef]
  7. An, L.W.; Yang, G.H. Distributed secure state estimation for cyber-physical systems under sensor attacks. Automatica 2019, 107, 526–538. [Google Scholar] [CrossRef]
  8. An, L.W.; Yang, G.H. Secure state estimation against sparse sensor attacks with adaptive switching mechanism. IEEE Trans. Autom. Control 2018, 63, 2596–2603. [Google Scholar] [CrossRef]
  9. Yucelen, T.; Haddad, W.M.; Feron, E.M. Adaptive control architectures for mitigating sensor attacks in cyber-physical systems. Cyber-Phys. Syst. 2016, 2, 1165–1170. [Google Scholar]
  10. Jin, X.; Haddad, W.M.; Yucelen, T. An adaptive control architecture for mitigating sensor and actuator attacks in cyber-physical systems. IEEE Trans. Autom. Control 2017, 62, 6058–6064. [Google Scholar] [CrossRef]
  11. Ren, X.X.; Yang, G.H. Adaptive control for nonlinear cyber-physical systems under false data injection attacks through sensor networks. Int. J. Robust Nonlinear Control 2020, 30, 65–79. [Google Scholar] [CrossRef]
  12. Li, Z.J.; Zhao, J. Resilient adaptive control of switched nonlinear cyber-physical systems under uncertain deception attacks. Inf. Sci. 2021, 543, 398–409. [Google Scholar] [CrossRef]
  13. Lv, W.S. Finite-Time adaptive neural control for nonlinear systems under state-dependent sensor attacks. Int. J. Robust Nonlinear Control 2021, 31, 4689–4704. [Google Scholar] [CrossRef]
  14. Song, S.; Park, J.H.; Zhang, B.Y.; Song, X.N. Event-Based adaptive fuzzy fixed-time secure control for nonlinear CPSs against unknown false data injection and backlash-like hysteresis. IEEE Trans. Fuzzy Syst. 2022, 30, 1939–1951. [Google Scholar] [CrossRef]
  15. Chen, L.X.; Tong, S.C. Observer-Based adaptive fuzzy consensus control of nonlinear multi-agent systems encountering deception attacks. IEEE Trans. Ind. Inform. 2024, 2, 1808–1818. [Google Scholar] [CrossRef]
  16. Ge, S.S.; Wang, C. Direct adaptive NN control of a class of nonlinear systems. IEEE Trans. Neural Netw. 2002, 13, 214–221. [Google Scholar] [CrossRef]
  17. Sargolzaei, A.; Yazdani, K.; Abbaspour, A.; Crane, C.D., III; Dixon, W.E. Detection and mitigation of false data injection attacks in networked control systems. IEEE Trans. Ind. Inform. 2020, 16, 4281–4292. [Google Scholar] [CrossRef]
  18. Sargolzaei, A.; Yazdani, K.; Abbaspour, A.; Crane, C.D., III; Dixo, W.E. Lyapunov-Based control of a nonlinear multiagent system with a time-varying input delay under false-data-injection attacks. IEEE Trans. Ind. Inform. 2022, 18, 2693–2703. [Google Scholar] [CrossRef]
  19. Chen, L.X.; Li, Y.M.; Tong, S.C. Neural Network Adaptive Consensus Control for Nonlinear Multi-Agent Systems Encountered Sensor Attacks. Int. J. Syst. Sci. 2023, 54, 2536–2550. [Google Scholar] [CrossRef]
  20. Meng, M.; Xiao, G.X.; Li, B.B. Adaptive consensus for heterogeneous multi-agent systems under sensor and actuator attacks. Automatica 2020, 122, 109242. [Google Scholar] [CrossRef]
  21. Gao, Y.B.; Sun, G.H.; Liu, J.X.; Shi, Y.; Wu, L.G. State estimation and self-triggered control of CPSs against joint sensor and actuator attacks. Automatica 2020, 133, 108687. [Google Scholar] [CrossRef]
  22. Shoukry, Y.; Nuzzo, P.; Puggelli, A.; Sangiovanni-Vincentelli, A.L.; Seshia, S.A.; Tabuada, P. Secure state estimation for cyber-physical systems under sensor attacks: A satisfiability modulo theory approach. IEEE Trans. Autom. Control 2017, 62, 4917–4932. [Google Scholar] [CrossRef]
  23. Kwon, C.; Hwang, I. Cyber attack mitigation for cyber-physical systems: Hybrid system approach to controller design. IET Control Theory Appl. 2016, 10, 731–741. [Google Scholar] [CrossRef]
Figure 1. The control algorithm via the NN attack reconstruction algorithm.
Figure 2. Structure diagram of the two-joint robotic arm.
Figure 3. The layers of the NN.
Figure 4. Sensor attack $a_1(t, 0)$ and NN attack reconstruction strategy $\hat{w}_1^T\varphi_1(\varsigma, x_1)$.
Figure 5. Sensor attack $a_2(t, \tilde{x}_2)$ and NN attack reconstruction strategy $\hat{w}_2^T\varphi_2(\varsigma, \bar{x}_2)$.
Figure 6. Actuator attack $a(t, \tilde{x}_1)$ and NN attack reconstruction strategy $\hat{v}^T\phi(\varsigma, x_1)$.
Figure 7. Two-joint robot tracking trajectories of $x_1$.
Figure 8. Two-joint robot angular velocity tracking trajectories of $x_2$.
Figure 9. Two-joint robot input trajectories of $u$.
Figure 10. Two-joint robot tracking trajectories of $x_1$ without the attack reconstruction strategy.
Figure 11. Two-joint robot angular velocity tracking trajectories of $x_2$ without the attack reconstruction strategy.
Figure 12. Two-joint robot input trajectories of $u$ without the attack reconstruction strategy.
Table 1. Design parameters.

$m_1 = m_2 = 0.765$   $l_1 = l_2 = 0.25$
$r_1 = r_2 = 0.15$   $c_1 = 50$, $c_2 = 150$
$\sigma_1 = \sigma_2 = 0.01$   $\bar{\sigma}_1 = \bar{\sigma}_2 = \bar{\sigma} = 0.02$
$p_1 = p_2 = q_1 = q_2 = 1$   $m = \tau = 1$
