*Article* **Adaptive Event-Triggered Synchronization of Uncertain Fractional Order Neural Networks with Double Deception Attacks and Time-Varying Delay**

**Zhuan Shen, Fan Yang, Jing Chen, Jingxiang Zhang, Aihua Hu and Manfeng Hu \***

School of Science, Jiangnan University, Wuxi 214122, China; 6191204005@stu.jiangnan.edu.cn (Z.S.); 6191204018@stu.jiangnan.edu.cn (F.Y.); 8201703038@jiangnan.edu.cn (J.C.); zhangjingxiang@jiangnan.edu.cn (J.Z.); aihuahu@jiangnan.edu.cn (A.H.) **\*** Correspondence: humanfeng@jiangnan.edu.cn; Tel.: +86-510-8591-0233

**Abstract:** This paper investigates the problem of adaptive event-triggered synchronization for uncertain fractional order neural networks (FNNs) subject to double deception attacks and time-varying delay. During network transmission, a practical deception attack phenomenon in FNNs should be considered; that is, we investigate the situation in which attacks occur via both communication channels, from sensor to controller (S-C) and from controller to actuator (C-A) simultaneously, rather than via only one channel, as in many papers; furthermore, the double attacks are described by Markov processes rather than simple Bernoulli random variables. To further reduce the network load, an advanced AETS with an adaptive threshold coefficient is applied to FNNs under deception attacks. Moreover, given the engineering background, uncertain parameters and time-varying delay are also considered, and a feedback control scheme is adopted. Based on the above, a closed-loop synchronization error system is constructed. Sufficient conditions that guarantee the stability of the closed-loop system are derived by the Lyapunov–Krasovskii functional method. Finally, a numerical example is presented to verify the effectiveness of the proposed method.

**Keywords:** uncertain fractional order neural network; adaptive event-triggered scheme; double deception attacks; time-varying delay

### **1. Introduction**

Neural networks, which bridge the micro-world of communications with the physical world by processing information as mathematical models, exist widely in a broad range of areas, such as intelligent control, secure communication, and pattern recognition [1–4]. Due to the complexity of the dynamic characteristics of some physical systems, a traditional integer-order neural network model cannot accurately represent their dynamic behaviors. Fractional order calculus is not only a generalized form of traditional integer-order calculus; it also has some properties that integer-order calculus lacks, such as the special feature of time memory [4–7]. Based on these features, fractional order differential equations have been used to model neural networks, yielding fractional order neural networks (FNNs) [8–12]. Synchronization, among several phenomena arising from the complex nonlinear dynamics of neural networks, has gained much attention and has been studied in many integer-order neural networks [13–17]. However, there are few studies on the synchronization problem of FNNs, which was the first motivation of this paper.

The event-triggered scheme (ETS) relies on a predefined event-triggered condition, rather than a fixed period, to determine whether sampled data should be transmitted to the next control unit; it was therefore suggested in [16,18–23] as a replacement for the time-triggered scheme (TTS) that saves network communication resources while simultaneously guaranteeing system performance. Although ETS was adopted in the three latest studies of different fractional order, real-valued systems [21–23], a common disadvantage remains: the threshold coefficients of traditional ETS are all constants and cannot be adjusted in a timely manner to fit a system's evolution. However, the adaptive event-triggered scheme (AETS), as a combination of adaptive control and traditional ETS, can overcome this conservativeness and make good use of communication resources dynamically. Therefore, designing an AETS with an adaptive threshold coefficient for FNNs to further improve the utilization of communication resources was the second motivation of the current work.

**Citation:** Shen, Z.; Yang, F.; Chen, J.; Zhang, J.; Hu, A.; Hu, M. Adaptive Event-Triggered Synchronization of Uncertain Fractional Order Neural Networks with Double Deception Attacks and Time-Varying Delay. *Entropy* **2021**, *23*, 1291. https://doi.org/10.3390/e23101291

Academic Editor: Luis Hernández-Callejo

Received: 7 September 2021; Accepted: 25 September 2021; Published: 30 September 2021

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

On the other hand, security problems arising with advanced modern communication technology have recently become a hot topic in engineering applications [24,25], especially in autonomous vehicle platooning [26,27]. Since control components such as sensors, controllers, and actuators are connected by shared communication networks to achieve remote control, compromise by malicious adversaries is an extreme risk [22,28,29]. As a typical representative of malicious attacks, a deception attack can replace the original data with false data to destroy the system [22,28–31]. To the best of the authors' knowledge, the synchronization problem of FNNs under deception attacks has been investigated only in [22], where the deception attacks were allowed to occur solely in the controller-to-actuator (C-A) channel and were governed by a Bernoulli variable. However, in communication networks, attacks may occur in the sensor-to-controller (S-C) channel and the C-A channel simultaneously. Moreover, it is well known that a Bernoulli process is a special kind of Markov process. Therefore, inspired by the aforementioned discussion, investigating double deception attacks governed by Markov processes in the synchronization of FNNs under AETS was the third motivation. Given actual environmental conditions, neural networks inevitably suffer from noise and equipment limitations, so parameter uncertainties and time-varying delay have also been taken into account. The main contributions are outlined below.


The remainder of this paper is organized as follows. In Section 2, some preliminaries are introduced and the model is formulated. The main results, including theorems, are shown in Section 3. In Section 4, a simulation which verified the main results is presented. Finally, the discussion and conclusions are presented in Section 5.

Notation: In this paper, $R^n$ and $\|\cdot\|$ denote the $n$-dimensional Euclidean vector space and the Euclidean norm for vectors, respectively. $R^{n\times n}$ is the set of all $n \times n$ real matrices. $T$ denotes the transposition of vectors or matrices. $I$ represents the identity matrix with appropriate dimensions, and $He[A] = A + A^T$. The symbol $N$ represents the set of all natural numbers, and $N_0 = N \cup \{0\}$. The symbol "$*$" denotes the symmetric block of a matrix. $col(\ldots)$ and $diag(\ldots)$ represent a column vector and a diagonal matrix, respectively.

**Remark 1.** *Network attacks may occur in both the S-C and C-A channels during network transmission, as shown in Figure 1. Only a few studies have investigated the relevant network attacks, and they considered single-channel attacks only: the C-A channel [22] or the S-C channel [32–34]. In addition, in prior studies, the behaviors of network attacks were usually governed by Bernoulli variables. To the authors' knowledge, no literature has simultaneously considered network attacks in the S-C and C-A channels of FNNs. Moreover, in this paper, the double network attacks are governed by two independent Markov processes, which are more general than Bernoulli processes.*

**Figure 1.** The framework of the closed-loop synchronization error system.

### **2. Preliminaries and Model Formulation**

In this section, the basic definitions and relations about fractional calculus are introduced; then a closed-loop synchronization error system is constructed.

### *2.1. Fractional Order Calculations*

**Definition 1.** *The fractional integral of order r for an integrable function $f(t) : [t_0, +\infty) \to R$ is defined as [19]:*

$${}_{t_0}I_t^r f(t) = \frac{1}{\Gamma(r)} \int_{t_0}^t \frac{f(\beta)}{(t-\beta)^{1-r}} \,\mathrm{d}\beta,$$

*where* 0 < *r* < 1*, and* Γ(·) *is the Gamma function.*

**Definition 2.** *The Caputo fractional derivative of order $r > 0$ for a function $f(t) \in C^n([t_0, +\infty), R)$ is defined as [22]:*

$${}_{t_0}D_t^r f(t) = \frac{1}{\Gamma(n-r)} \int_{t_0}^t \frac{f^{(n)}(\beta)}{(t-\beta)^{r-n+1}} \,\mathrm{d}\beta,$$

*where $t \ge t_0$ and $n$ is a positive integer such that $n - 1 < r < n$. Moreover, when $0 < r < 1$,*

$${}_{t_0}D_t^r f(t) = \frac{1}{\Gamma(1-r)} \int_{t_0}^t \frac{f'(\beta)}{(t-\beta)^r} \,\mathrm{d}\beta.$$
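For intuition, the Caputo derivative of Definition 2 (for $0 < r < 1$) can be evaluated numerically. Below is a minimal sketch using the Grünwald–Letnikov discretization, which coincides with the Caputo derivative on this order range after subtracting $f(t_0)$; the step count `n` and the test function are illustrative choices, not taken from the paper:

```python
import math

def caputo_gl(f, t0, t, r, n=4000):
    """Approximate the Caputo derivative of order r (0 < r < 1) of f at t.

    Uses the Grunwald-Letnikov sum with binomial weights c_j generated by the
    recursion c_0 = 1, c_j = c_{j-1} * (1 - (r + 1) / j); subtracting f(t0)
    turns the GL derivative into the Caputo one for this order range.
    """
    h = (t - t0) / n
    c, acc = 1.0, f(t) - f(t0)          # j = 0 term (c_0 = 1)
    for j in range(1, n + 1):
        c *= 1.0 - (r + 1.0) / j        # next signed binomial weight
        acc += c * (f(t - j * h) - f(t0))
    return acc / h**r

# Example: for f(t) = t the exact Caputo derivative is t^(1-r) / Gamma(2-r).
approx = caputo_gl(lambda s: s, 0.0, 1.0, 0.5)
exact = 1.0 / math.gamma(1.5)
```

The first-order accuracy of this scheme is sufficient to reproduce the closed-form derivative of power functions to a few decimal places.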

From Definitions 1 and 2, it is clear that the Caputo fractional derivative satisfies the following properties:


**Lemma 1** ([22])**.** *For a differentiable function vector $x(t) \in R^n$, the following inequality holds:*

$${}_{t_0}D_t^r\big(x^T(t)Px(t)\big) \le 2x^T(t)P\,{}_{t_0}D_t^r x(t),$$

*where $r$ and $P \in R^{n\times n}$ satisfy $0 < r < 1$ and $P > 0$, respectively.*

**Lemma 2** ([35])**.** *For a given positive definite matrix $R \in R^{n\times n}$ and scalars $a, b$ satisfying $a < b$, the following inequality holds for any continuously differentiable function $e : [a, b] \to R^n$:*

$$(b-a)\int_a^b e^T(s)Re(s)\,\mathrm{d}s \ge \left(\int_a^b e(s)\,\mathrm{d}s\right)^T R \left(\int_a^b e(s)\,\mathrm{d}s\right).$$
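As a quick numerical sanity check of Lemma 2, one can compare both sides of the inequality for a concrete vector function and positive definite $R$; the choices $e(s) = (s, s^2)^T$ and $R = diag(2, 1)$ below are arbitrary illustrations, not from the paper:

```python
def jensen_gap(e, R, a, b, n=2000):
    """Return (b-a) * int e^T R e ds  -  (int e ds)^T R (int e ds), computed
    with the midpoint rule; Lemma 2 asserts this gap is nonnegative."""
    h = (b - a) / n
    dim = len(e(a))
    quad = 0.0
    integ = [0.0] * dim
    for k in range(n):
        v = e(a + (k + 0.5) * h)        # midpoint sample of e
        quad += sum(v[i] * R[i][j] * v[j]
                    for i in range(dim) for j in range(dim)) * h
        for i in range(dim):
            integ[i] += v[i] * h        # accumulate int e ds componentwise
    rhs = sum(integ[i] * R[i][j] * integ[j]
              for i in range(dim) for j in range(dim))
    return (b - a) * quad - rhs

gap = jensen_gap(lambda s: [s, s * s], [[2.0, 0.0], [0.0, 1.0]], 0.0, 1.0)
```

For this example the exact gap is $13/15 - 11/18 = 23/90 > 0$, consistent with the lemma.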

**Lemma 3** ([36])**.** *For $\eta(t) \in [0, \eta]$ and any matrices $R, S \in R^{n\times n}$ satisfying $\begin{bmatrix} R & S \\ \ast & R \end{bmatrix} \ge 0$, the following inequality holds:*

$$-\eta \int_{t-\eta}^{t} \dot{e}^T(s)R\dot{e}(s)\,\mathrm{d}s \le \xi^T(t)\Theta\xi(t),$$

*where $\xi(t) = col\{e(t), e(t-\eta(t)), e(t-\eta)\}$ and*

$$
\Theta = \begin{bmatrix}
-R & R - S & S \\
\* & -2R + He[S] & R - S \\
\* & \* & -R
\end{bmatrix}.
$$

**Lemma 4** ([32])**.** *For a given matrix $S = \begin{bmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{bmatrix}$, where $S_{12} = S_{21}^T$, the following conditions are equivalent:*

$$(1)\ S < 0; \qquad (2)\ S_{22} < 0,\ S_{11} - S_{21}S_{22}^{-1}S_{12} < 0.$$

### *2.2. Model Formulation*

Consider the following uncertain FNN model as the master system:

$$\begin{aligned} \, \_{t\_0}D\_t^r \mathbf{x}(t) &= -(A + \Delta A(t))\mathbf{x}(t) + (B + \Delta B(t))\hat{f}(\mathbf{x}(t)) \\ &+ (D + \Delta D(t))\hat{f}(\mathbf{x}(t - \eta(t))) + I(t), \\ \mathbf{y}(t) &= \mathbf{C}\mathbf{x}(t), \\ \mathbf{x}(t\_0) &= \phi\_1(t\_0), t\_0 \in [-\eta, 0], \end{aligned} \tag{1}$$

where $0 < r < 1$ denotes the order of the fractional derivative. $x(t) = (x_1(t), x_2(t), \ldots, x_n(t))^T \in R^n$ is the state vector of the neurons. $y(t)$ is the measurable output vector. $\eta(t)$ denotes the time-varying coupling delay and satisfies $0 \le \eta(t) \le \eta$ and $\dot{\eta}(t) \le \bar{\eta}$. $\hat{f}(x(t)) = (\hat{f}_1(x_1(t)), \hat{f}_2(x_2(t)), \ldots, \hat{f}_n(x_n(t)))$ and $\hat{f}(x(t - \eta(t))) = (\hat{f}_1(x_1(t - \eta(t))), \hat{f}_2(x_2(t - \eta(t))), \ldots, \hat{f}_n(x_n(t - \eta(t)))) \in R^n$ are the activation functions. $I(t)$ is an external input vector. $A = diag(a_1, a_2, \ldots, a_n) \in R^{n\times n}$ is the self-feedback connection weight matrix, and $B = (b_{ij})_{n\times n} \in R^{n\times n}$, $D = (d_{ij})_{n\times n} \in R^{n\times n}$ are the connection weight matrices. Furthermore, $\Delta A(t)$, $\Delta B(t)$, $\Delta D(t)$ are norm-bounded, time-varying parameter uncertainty matrices satisfying

$$[\Delta A(t), \Delta B(t), \Delta D(t)] = GS(t)[E_a, E_b, E_d],$$

where $G$, $E_a$, $E_b$, $E_d$ are known constant matrices, and $S(t)$ is an unknown time-varying matrix function satisfying $S^T(t)S(t) \le I$. Assume that master system (1) has a unique solution with initial value $\phi_1(t_0)$ that is continuously differentiable on $t_0 \in [-\eta, 0]$ [37]. Next, consider the corresponding slave system:

$$\begin{aligned} {}_{t_0}D_t^r \hat{x}(t) &= -(A + \Delta A(t))\hat{x}(t) + (B + \Delta B(t))\hat{f}(\hat{x}(t)) \\ &\quad + (D + \Delta D(t))\hat{f}(\hat{x}(t - \eta(t))) + I(t) + u(t), \\ \hat{y}(t) &= C\hat{x}(t), \\ \hat{x}(t_0) &= \phi_2(t_0), \quad t_0 \in [-\eta, 0], \end{aligned} \tag{2}$$

where $\hat{x}(t) = (\hat{x}_1(t), \hat{x}_2(t), \ldots, \hat{x}_n(t))^T$ is the state vector. Similarly, assume that slave system (2) also has a unique solution with initial value $\phi_2(t_0)$ that is continuously differentiable on $t_0 \in [-\eta, 0]$; $u(t)$ is the control input, and the other notations are the same as in the master system.

In order to realize synchronization between systems (1) and (2), define the synchronization error $e(t) = \hat{x}(t) - x(t)$ with output error $z(t) = Ce(t)$, and treat the parameter uncertainties of each part as a single term $m(t)$. The following error system can then be obtained:

$$\begin{aligned} {}_{t_0}D_t^r e(t) &= -Ae(t) + Bf(e(t)) + Df(e(t - \eta(t))) + Gm(t) + u(t), \\ m(t) &= S(t)(-E_a e(t) + E_b f(e(t)) + E_d f(e(t - \eta(t)))), \\ z(t) &= Ce(t), \\ e(t_0) &= \phi(t_0), \quad t_0 \in [-\eta, 0], \end{aligned} \tag{3}$$

where $f(e(t)) = \hat{f}(\hat{x}(t)) - \hat{f}(x(t))$ and $f(e(t - \eta(t))) = \hat{f}(\hat{x}(t - \eta(t))) - \hat{f}(x(t - \eta(t)))$. The initial value of error system (3) is $\phi(t_0) = \phi_2(t_0) - \phi_1(t_0)$, $t_0 \in [-\eta, 0]$. It is well known that system (3) has a unique solution [38].

**Remark 2.** *The model considered in this paper can be regarded as a generalization of [22]. Such an attack has only been considered in the C-A channel and governed by a Bernoulli process in FNNs [22], in which the event-triggered threshold coefficient is a constant and cannot fit a system's evolution dynamically. The FNNs studied in this paper not only adopt AETS to further improve the utilization of communication resources, but parameters' uncertainties and double deception attacks are also investigated.*

The following assumption will be used later on.

**Assumption 1.** *The neuron activation function f*(*e*(*t*)) *is continuous and bounded, and satisfies the following conditions:*

$$0 \le \frac{f_i(e_1(t)) - f_i(e_2(t))}{e_1(t) - e_2(t)} \le \phi_i, \tag{4}$$

*for $i = 1, 2, \ldots, n$, where $\phi_i$ are known positive constants.*

Let the two adversarial network attacks during communication be characterized by two independent right-continuous Markov processes $r_t$ and $q_t$ on the probability space, taking values in the finite state space $M = \{1, 2, \ldots, s\}$ with generators $\pi = (\pi_{ij})_{s\times s}$ and $\rho = (\rho_{mn})_{s\times s}$ given by

$$\Pr\{r_{t+k} = j \mid r_t = i\} = \begin{cases} \pi_{ij}k + o(k), & i \neq j, \\ 1 + \pi_{ii}k + o(k), & i = j, \end{cases}$$

$$\Pr\{q_{t+k} = n \mid q_t = m\} = \begin{cases} \rho_{mn}k + o(k), & m \neq n, \\ 1 + \rho_{mm}k + o(k), & m = n, \end{cases}$$

where $k > 0$, $\lim_{k\to 0} \frac{o(k)}{k} = 0$, $\pi_{ij} \ge 0$ for $i \ne j$, $\rho_{mn} \ge 0$ for $m \ne n$, and for every $i, m \in M$, $\pi_{ii} = -\sum_{j \ne i}\pi_{ij}$ and $\rho_{mm} = -\sum_{n \ne m}\rho_{mn}$.
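To illustrate, a finite-state attack process such as $r_t$ can be sampled from its generator by drawing exponential holding times with rate $-\pi_{ii}$; the two-state generator, horizon, and seed below are illustrative assumptions, not the paper's example values:

```python
import random

def simulate_attack_process(pi, T, state0=0, seed=7):
    """Sample a right-continuous Markov process on [0, T] from generator pi
    (off-diagonal pi[i][j] >= 0, rows summing to zero) and return its jump
    times and states; here state 0 models normal transmission, state 1 attack."""
    rng = random.Random(seed)
    t, state = 0.0, state0
    path = [(0.0, state0)]
    while True:
        rate = -pi[state][state]           # total exit rate of the current state
        if rate <= 0.0:
            break                          # absorbing state: no further jumps
        t += rng.expovariate(rate)         # exponential holding time
        if t >= T:
            break
        # jump: pick the next state proportionally to pi[state][j], j != state
        u = rng.random() * rate
        for j in range(len(pi)):
            if j == state:
                continue
            u -= pi[state][j]
            if u <= 0.0:
                state = j
                break
        path.append((t, state))
    return path

# Two states: normal (0) <-> attacked (1), with illustrative rates.
path = simulate_attack_process([[-0.5, 0.5], [0.8, -0.8]], T=50.0)
```

The sampled path alternates between the two states at random instants, which is exactly the gating behavior used for the deception attacks below.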

To save network bandwidth as much as possible, an AETS was adopted in this study. The sensor, with sampling period $h$, is time-driven, and the output error $z(t)$ is measured by the sensor at the sampling instants $lh$, $l \in N_0$. Let $t_k h$ denote the latest triggered instant; the next triggered instant is denoted by $t_{k+1}h$, and $t_k h + ih$, $i \in N$, denotes the current sampling instant. Whether the sampled data $z(t_k h + ih)$ should be transmitted is determined by the adaptive event-triggered condition:

$$
\tilde{z}_k^T(t)\Omega\tilde{z}_k(t) - d(t)z^T(t_k h + ih)\Omega z(t_k h + ih) \le 0, \tag{5}
$$

where $\tilde{z}_k(t) = z(t_k h) - z(t_k h + ih)$, $z(t_k h)$ denotes the latest transmitted data, $\Omega > 0$ is a weighting matrix to be designed, and the adaptive threshold coefficient $d(t)$ satisfies the following adaptive law:

$$\dot{d}(t) = \left(\frac{1}{d^2(t)} - \frac{\bar{w}}{d(t)}\right)\tilde{z}_k^T(t)\Omega\tilde{z}_k(t), \tag{6}$$

where $\bar{w} \ge 1$ can adjust the monotonicity of $d(t)$ [32]. The next triggered instant can then be expressed as follows:

$$t_{k+1}h = t_k h + \min_{i \in N}\big\{ih \mid \tilde{z}_k^T(t)\Omega\tilde{z}_k(t) > d(t)z^T(t_k h + ih)\Omega z(t_k h + ih)\big\}.$$
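The triggering rule (5) together with the adaptive law (6) can be sketched in discrete time as follows; the scalar output signal, the gains `omega`, `d0`, and `wbar`, and the forward-Euler update of $d(t)$ are all illustrative assumptions:

```python
import math

def aets_release_indices(z, h, omega=1.0, d0=1.0, wbar=2.0):
    """Run the adaptive event-triggered scheme on scalar samples z[k] taken
    every h seconds: transmit when ztil^2 * omega exceeds d * z^2 * omega, and
    update d by a forward-Euler step of d' = (1/d^2 - wbar/d) * omega * ztil^2."""
    released = [0]                  # the first sample is always transmitted
    last = z[0]                     # latest transmitted value z(t_k h)
    d = d0
    for k in range(1, len(z)):
        ztil = last - z[k]          # gap between transmitted and current sample
        if omega * ztil * ztil > d * omega * z[k] * z[k]:
            released.append(k)
            last = z[k]
            ztil = 0.0              # the gap resets after a transmission
        d += h * (1.0 / (d * d) - wbar / d) * omega * ztil * ztil
        d = max(d, 1e-6)            # keep the threshold coefficient positive
    return released

# Decaying oscillation as a stand-in for a converging output error.
samples = [math.exp(-0.05 * k) * math.cos(0.3 * k) for k in range(200)]
idx = aets_release_indices(samples, h=0.1)
```

With a slowly varying signal, only a fraction of the samples is released, which is the bandwidth saving the AETS is designed for.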

Based on the reality of network communication, the transmission delay $s_k$ is considered at the instant $t_k h$. Assume that $0 \le s_k \le \bar{s}$, where $\bar{s} = \max_k\{s_k\}$. The sampled data $z(t_k h)$ will be transmitted at the instant $t_k h + s_k$. The interval $[t_k h + s_k, t_{k+1}h + s_{k+1})$ can then be divided into $I_0 = [t_k h + s_k, t_k h + h + \bar{s})$, $I_i = [t_k h + ih + \bar{s}, t_k h + ih + h + \bar{s})$, $i = 1, 2, \ldots, \delta - 1$, and $I_\delta = [t_k h + \delta h + \bar{s}, t_{k+1}h + s_{k+1})$, where $\delta = t_{k+1} - t_k - 1$. Then $\tilde{z}_k(t) = z(t_k h) - z(t_k h + ih)$ is equivalent to:

$$\tilde{z}_k(t) = \begin{cases} z(t_k h) - z(t_k h), & t \in I_0, \\ z(t_k h) - z(t_k h + ih), & t \in I_i, \\ z(t_k h) - z(t_k h + \delta h), & t \in I_\delta, \end{cases} \tag{7}$$

which can be written as

$$\tilde{z}_k(t) = z(t_k h) - z(t - \tau(t)), \quad t \in [t_k h + s_k, t_{k+1}h + s_{k+1}), \tag{8}$$

in which

$$\tau(t) = \begin{cases} t - t_k h, & t \in I_0, \\ t - t_k h - ih, & t \in I_i, \\ t - t_k h - \delta h, & t \in I_\delta. \end{cases} \tag{9}$$

According to Equation (9), it is easy to get

$$0 \le \tau(t) \le h + \bar{s}, \quad t \in [t_k h + s_k, t_{k+1}h + s_{k+1}).$$

**Remark 3.** *From the adaptive event-triggered condition (5), it is easy to see that the minimum inter-event interval is bounded below by the constant sampling period h, which means that there is no Zeno behavior.*

As shown in Figure 1, deception attacks may occur in the S-C communication channel, and the integrity of normally transmitted data will be damaged by malicious attacks. To depict the stochastic occurrence of deception attacks, Markov processes are adopted in this paper. The signal received by the controller over the interval $[t_k h + s_k, t_{k+1}h + s_{k+1})$, $k = 1, 2, \ldots$, can then be denoted as

$$\begin{aligned} z_s(t_k h, r_t) &= b^s(r_t)z(t_k h) + \bar{b}^s(r_t)g_s(z(t_k h)) \\ &= b^s(r_t)\big(z(t - \tau(t)) + \tilde{z}_k(t)\big) + \bar{b}^s(r_t)g_s(z(t_k h)), \end{aligned} \tag{10}$$

where $b^s(1) = 1$, $b^s(2) = 0$, $\bar{b}^s(r_t) = 1 - b^s(r_t)$, and $g_s : R^n \to R^n$ is the energy-bounded deception signal in the S-C communication channel satisfying

$$\|g_s(x(t))\| \le \|G_s x(t)\|, \tag{11}$$

where $G_s \in R^{n\times n}$ is a known constant matrix satisfying $G_s > 0$. If $r_t = 1$, the data are transmitted normally without any attack; conversely, $r_t = 2$ means that a malicious attack signal occurs in the S-C channel.
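Equation (10) can be mimicked directly: the Markov state gates whether the true sample or a bounded false signal reaches the controller. The sign-flipping linear attack $g_s(z) = -0.6z$ below is one illustrative choice satisfying the bound (11) with $G_s = 0.6I$; it is not the attack signal used in the paper:

```python
def sc_channel(z_sample, r_state, gs_gain=0.6):
    """Model (10): in Markov state 1 the true sample z(t_k h) passes through
    the S-C channel; in state 2 it is replaced by the deception signal
    g_s(z) = -gs_gain * z, which satisfies ||g_s(z)|| <= ||G_s z|| for
    G_s = gs_gain * I (the gain 0.6 is an illustrative assumption)."""
    b = 1 if r_state == 1 else 0            # b^s(1) = 1, b^s(2) = 0
    g_s = -gs_gain * z_sample               # bounded false data
    return b * z_sample + (1 - b) * g_s

normal = sc_channel(2.0, r_state=1)    # no attack: data pass unchanged
attacked = sc_channel(2.0, r_state=2)  # deception: sign-flipped, scaled data
```

The same gating structure, driven by the second Markov process $q_t$, is reused for the C-A channel below.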

The main purpose of this study was to synchronize uncertain FNNs under AETS, subject to double deception attacks and time-varying delay. Construct the state feedback controller:

$$u(t) = u_s(t_k h, r_t) = Kz_s(t_k h, r_t), \quad t \in [t_k h + s_k, t_{k+1}h + s_{k+1}), \tag{12}$$

where the feedback gain matrix *K* needs to be determined.

In a similar routine to that of the S-C communication channel, when the released data *us*(*tkh*,*rt*) are transmitted through the C-A communication channel, the channel may be attacked again. Therefore, the control output signal can be denoted as

$$\begin{aligned} u(t) &= u_c(t_k h, r_t, q_t) \\ &= b^c(q_t)u_s(t_k h, r_t) + \bar{b}^c(q_t)g_c(u_s(t_k h, r_t)) \\ &= b^c(q_t)b^s(r_t)KCe(t - \tau(t)) + b^c(q_t)b^s(r_t)K\tilde{z}_k(t) + b^c(q_t)\bar{b}^s(r_t)Kg_s(\bar{z}(t)) \\ &\quad + \bar{b}^c(q_t)g_c(u_s(t_k h, r_t)), \quad t \in [t_k h + s_k, t_{k+1}h + s_{k+1}), \end{aligned} \tag{13}$$

where $\bar{z}(t) = \tilde{z}_k(t) + z(t - \tau(t))$, $b^c(1) = 1$, $b^c(2) = 0$, $\bar{b}^c(q_t) = 1 - b^c(q_t)$, and $g_c : R^n \to R^n$ is the energy-bounded deception signal in the C-A communication channel satisfying

$$\|g_c(x(t))\| \le \|G_c x(t)\|, \tag{14}$$

where $G_c \in R^{n\times n}$ is a known constant matrix satisfying $G_c > 0$. For simplicity, for every $i, m \in M$ with $r_t = i$ and $q_t = m$, $b^s(r_t)$ and $b^c(q_t)$ are denoted in this paper by $b_i^s$ and $b_m^c$, respectively. Similarly, a matrix $P_1(r_t, q_t)$ is denoted by $P_1^{im}$. In addition, for a matrix $P_1^{im}$, there is the following definition:

$$\bar{P}_1^{im} = \sum_{j \in M}\pi_{ij}P_1^{jm} + \sum_{n \in M}\rho_{mn}P_1^{in}. \tag{15}$$

Then, it is easy to obtain the error system

$$\begin{aligned} {}_{t_0}D_t^r e(t) &= -Ae(t) + Bf(e(t)) + Df(e(t - \eta(t))) + Gm(t) + b_m^c b_i^s K\tilde{z}_k(t) \\ &\quad + b_m^c b_i^s KCe(t - \tau(t)) + b_m^c \bar{b}_i^s Kg_s(\bar{z}(t)) + \bar{b}_m^c g_c(u_s(t_k h, r_t)), \\ m(t) &= S(t)(-E_a e(t) + E_b f(e(t)) + E_d f(e(t - \eta(t)))), \\ z(t) &= Ce(t), \\ e(t_0) &= \phi(t_0), \quad t_0 \in [-\max\{\eta, h\}, 0]. \end{aligned} \tag{16}$$

The following two definitions will be used in the proof of Theorem 1.

**Definition 3** ([39])**.** *Let V*(*t*,*e*(*t*),*rt* = *i*, *qt* = *m*) *be the positive Lyapunov–Krasovskii functional and* L(·) *be a weak infinitesimal operator. Then*

$$E\left\{\int_0^t \mathcal{L}V(s, e(s), i, m)\,\mathrm{d}s\right\} = EV(t, e(t), i, m) - EV(0, \phi(t_0), r_0, q_0),$$

*where* E *denotes the expectation.*

**Definition 4** ([40,41])**.** *The synchronization error system (16) is said to be globally, stochastically, asymptotically stable in the mean square sense, if for any initial conditions φ*(*t*0) *defined on* [−*max*{*η*, *h*}, 0] *and r*0, *q*<sup>0</sup> ∈ *M the following condition is satisfied:*

$$\lim_{t \to \infty} E\left\{\int_0^t e^T(s)e(s)\,\mathrm{d}s \,\Big|\, \phi(t_0), r_0, q_0\right\} < \infty.$$

So far, a closed-loop synchronization error system (16) has been constructed. In the following, in order to realize the synchronization between systems (1) and (2), the stability of error system (16) will be proven.

### **3. Results**

Two theorems are developed in this section. Firstly, the synchronization criterion for systems (1) and (2) is presented in Theorem 1. Then, on the basis of Theorem 1, the criterion for feedback controller design is developed by Theorem 2.

**Theorem 1.** *Suppose Assumption 1 holds. For the given scalar $r$ and control gain matrix $K$, the FNNs (1) and (2) are globally, stochastically, asymptotically synchronized under the feedback control scheme (12) in the mean square sense if there exist positive definite matrices $P$, $\Omega$, $P_1^{im}$, $P_3^{im}$, $N_1$, $N_3$, $R_1^{im}$, $R_2^{im}$, $M_1^{im}$, $M_2^{im}$, $L_1$, $L_2$, $J_1$, $J_2$, $Q_1$, $Q_2$; positive definite diagonal matrices $\Delta_1$, $\Delta_2$; matrices $P_2^{im}$, $N_2$, $S^{im}$, $T^{im}$; and positive scalars $\varepsilon$, $\lambda_1$, $\lambda_2$ such that the following LMIs hold for every $i, m \in M$:*

$$
\begin{bmatrix}
\Pi\_{1,1} & \Pi\_{1,2} & \Pi\_{1,3} & \Pi\_{1,4} & \Pi\_{1,5} & \Pi\_{1,6} & \Pi\_{1,7} \\
\* & \Pi\_{2,2} & \Pi\_{2,3} & 0 & \Pi\_{2,5} & 0 & 0 \\
\* & \* & \Pi\_{3,3} & 0 & 0 & 0 & \Pi\_{3,7} \\
\* & \* & \* & \Pi\_{4,4} & 0 & \Pi\_{4,6} & \Pi\_{4,7} \\
\* & \* & \* & \* & \Pi\_{5,5} & 0 & \Pi\_{5,7} \\
\* & \* & \* & \* & \* & \Pi\_{6,6} & 0 \\
\* & \* & \* & \* & \* & \* & \Pi\_{7,7}
\end{bmatrix} < 0,\tag{17}
$$

$$
R_1^{im} < L_1, \quad R_2^{im} < L_2, \quad M_1^{im} < J_1, \quad M_2^{im} < J_2, \tag{18}
$$

$$
\begin{bmatrix} R\_2^{im} & S^{im} \\ \* & R\_2^{im} \end{bmatrix} \ge 0, \quad
\begin{bmatrix} M\_2^{im} & T^{im} \\ \* & M\_2^{im} \end{bmatrix} \ge 0,\tag{19}
$$

$$
\begin{bmatrix} P\_1^{im} & P\_2^{im} \\ \* & P\_3^{im} \end{bmatrix} > 0, \begin{bmatrix} N\_1 & N\_2 \\ \* & N\_3 \end{bmatrix} > 0,\tag{20}
$$

*where*

$$\Pi_{1,1} = \begin{bmatrix} \Xi_{1,1} & \Xi_{1,2} \\ \* & \Xi_{2,2} \end{bmatrix}, \quad \Pi_{1,2} = \begin{bmatrix} S^{im} & M_2^{im} - T^{im} & T^{im} \\ R_2^{im} - S^{im} & 0 & 0 \end{bmatrix}, \quad \Pi_{1,3} = \begin{bmatrix} \Xi_{1,6} & PD - \varepsilon E_a E_d \\ 0 & 0 \end{bmatrix},$$

$$\Pi_{1,4} = \begin{bmatrix} b_m^c \bar{b}_i^s PK & \bar{b}_m^c P & b_m^c b_i^s PK \\ 0 & 0 & \lambda_1 C^T G_s^T G_s \end{bmatrix}, \quad \Pi_{1,5} = \begin{bmatrix} P_1^{im} + (P_2^{im})^T & P_2^{im} + P_3^{im} & 0 \\ 0 & 0 & 0 \end{bmatrix},$$

$$\Pi_{1,6} = \begin{bmatrix} 0 \\ b_i^s C^T K^T \end{bmatrix}, \quad \Pi_{1,7} = \Psi \otimes \begin{bmatrix} -A^T \\ b_i^s b_m^c C^T K^T \end{bmatrix}, \quad \Pi_{2,3} = \begin{bmatrix} 0 & 0 \\ 0 & \Phi\Delta_2 - (1 - \bar{\eta})N_2 \\ 0 & 0 \end{bmatrix},$$

$$\Pi_{2,2} = \begin{bmatrix} -Q_1 - R_2^{im} & 0 & 0 \\ \* & \Xi_{4,4} & M_2^{im} - T^{im} \\ \* & \* & -Q_2 - M_2^{im} \end{bmatrix}, \quad \Pi_{2,5} = \begin{bmatrix} -P_1^{im} & -P_2^{im} & 0 \\ 0 & 0 & 0 \\ -(P_2^{im})^T & -P_3^{im} & 0 \end{bmatrix},$$

$$\Pi_{3,3} = \begin{bmatrix} \Xi_{6,6} & \varepsilon E_b E_d \\ \* & \Xi_{7,7} \end{bmatrix}, \quad \Pi_{3,7} = \Psi \otimes \begin{bmatrix} B^T \\ D^T \end{bmatrix}, \quad \Pi_{4,4} = \begin{bmatrix} -\lambda_1 I & 0 & 0 \\ \* & -\lambda_2 I & 0 \\ \* & \* & \Xi_{10,10} \end{bmatrix},$$

$$\Pi_{4,6} = \begin{bmatrix} \bar{b}_i^s K^T \\ 0 \\ b_i^s K^T \end{bmatrix}, \quad \Pi_{4,7} = \Psi \otimes \begin{bmatrix} \bar{b}_i^s b_m^c K^T \\ \bar{b}_m^c I \\ b_i^s b_m^c K^T \end{bmatrix}, \quad \Pi_{5,5} = \begin{bmatrix} \bar{P}_1^{im} - R_1^{im} & \bar{P}_2^{im} & 0 \\ \* & \bar{P}_3^{im} - M_1^{im} & 0 \\ \* & \* & -\varepsilon I \end{bmatrix},$$

$$\Pi_{5,7} = \Psi \otimes \begin{bmatrix} 0 \\ 0 \\ G^T \end{bmatrix}, \quad \Pi_{6,6} = -(\lambda_2 G_c^T G_c)^{-1}, \quad \Pi_{7,7} = diag\Big({-(R_2^{im})^{-1}}, -(M_2^{im})^{-1}, -\tfrac{h}{2}(L_2)^{-1}, -\tfrac{\eta}{2}(J_2)^{-1}\Big),$$

$$\Xi_{1,1} = -2PA + Q_1 + Q_2 + N_1 + h^2 R_1^{im} + \eta^2 M_1^{im} + \tfrac{h^3}{2}L_1 + \tfrac{\eta^3}{2}J_1 - R_2^{im} - M_2^{im} + \varepsilon E_a^2,$$

$$\Xi_{1,2} = b_m^c b_i^s PKC + R_2^{im} - S^{im}, \quad \Xi_{1,6} = PB + N_2 + \Phi\Delta_1 - \varepsilon E_a E_b, \quad \Psi = \begin{bmatrix} h & \eta & \tfrac{h^2}{2} & \tfrac{\eta^2}{2} \end{bmatrix},$$

$$\Xi_{2,2} = -2R_2^{im} + He[S^{im}] + C^T\Omega C + \lambda_1 C^T G_s^T G_s C + \Omega, \quad \Xi_{4,4} = -(1 - \bar{\eta})N_1 - 2M_2^{im} + He[T^{im}],$$

$$\Xi_{6,6} = N_3 - 2\Delta_1 + \varepsilon E_b^2, \quad \Xi_{7,7} = -(1 - \bar{\eta})N_3 - 2\Delta_2 + \varepsilon E_d^2, \quad \Xi_{10,10} = \lambda_1 G_s^T G_s - \bar{w}\Omega.$$

**Proof.** Consider the following fractional order Lyapunov–Krasovskii functional:

$$V(t, e(t)) = \sum_{k=1}^{9} V_k(t, e(t), r_t, q_t),$$

where

$$\begin{aligned} V_1(t, e(t), r_t, q_t) &= {}_{t_0}D_t^{r-1}\big(e^T(t)Pe(t)\big), \qquad V_2(t, e(t), r_t, q_t) = \tfrac{1}{2}d^2(t), \\ V_3(t, e(t), r_t, q_t) &= \begin{bmatrix} \int_{t-h}^t e(s)\,\mathrm{d}s \\ \int_{t-\eta}^t e(s)\,\mathrm{d}s \end{bmatrix}^T \begin{bmatrix} P_1^{im} & P_2^{im} \\ \* & P_3^{im} \end{bmatrix} \begin{bmatrix} \int_{t-h}^t e(s)\,\mathrm{d}s \\ \int_{t-\eta}^t e(s)\,\mathrm{d}s \end{bmatrix}, \\ V_4(t, e(t), r_t, q_t) &= \int_{t-h}^t e^T(s)Q_1 e(s)\,\mathrm{d}s + \int_{t-\eta}^t e^T(s)Q_2 e(s)\,\mathrm{d}s, \\ V_5(t, e(t), r_t, q_t) &= \int_{t-\eta(t)}^t \begin{bmatrix} e(s) \\ f(e(s)) \end{bmatrix}^T \begin{bmatrix} N_1 & N_2 \\ \* & N_3 \end{bmatrix} \begin{bmatrix} e(s) \\ f(e(s)) \end{bmatrix} \mathrm{d}s, \\ V_6(t, e(t), r_t, q_t) &= h\int_{-h}^0\int_{t+\theta}^t \begin{bmatrix} e(s) \\ \dot{e}(s) \end{bmatrix}^T \begin{bmatrix} R_1^{im} & 0 \\ 0 & R_2^{im} \end{bmatrix} \begin{bmatrix} e(s) \\ \dot{e}(s) \end{bmatrix} \mathrm{d}s\,\mathrm{d}\theta, \\ V_7(t, e(t), r_t, q_t) &= \eta\int_{-\eta}^0\int_{t+\theta}^t \begin{bmatrix} e(s) \\ \dot{e}(s) \end{bmatrix}^T \begin{bmatrix} M_1^{im} & 0 \\ 0 & M_2^{im} \end{bmatrix} \begin{bmatrix} e(s) \\ \dot{e}(s) \end{bmatrix} \mathrm{d}s\,\mathrm{d}\theta, \\ V_8(t, e(t), r_t, q_t) &= h\int_{-h}^0\int_{\theta}^0\int_{t+\beta}^t \begin{bmatrix} e(s) \\ \dot{e}(s) \end{bmatrix}^T \begin{bmatrix} L_1 & 0 \\ 0 & L_2 \end{bmatrix} \begin{bmatrix} e(s) \\ \dot{e}(s) \end{bmatrix} \mathrm{d}s\,\mathrm{d}\beta\,\mathrm{d}\theta, \\ V_9(t, e(t), r_t, q_t) &= \eta\int_{-\eta}^0\int_{\theta}^0\int_{t+\beta}^t \begin{bmatrix} e(s) \\ \dot{e}(s) \end{bmatrix}^T \begin{bmatrix} J_1 & 0 \\ 0 & J_2 \end{bmatrix} \begin{bmatrix} e(s) \\ \dot{e}(s) \end{bmatrix} \mathrm{d}s\,\mathrm{d}\beta\,\mathrm{d}\theta. \end{aligned}$$

For simplicity, *Vi* = *Vi*(*t*,*e*(*t*),*rt*, *qt*), *i* = 1, 2, . . . , 9.

The weak infinitesimal operator L is defined as follows:

$$
\begin{aligned}
\mathcal{L}V(t, e(t), r\_t, q\_t) ={}& \frac{\partial V(t, e(t), r\_t, q\_t)}{\partial t} + \dot{e}^T(t) \frac{\partial V(t, e(t), r\_t, q\_t)}{\partial e(t)}\Big|\_{r\_t = i, q\_t = m} \\
&+ \sum\_{j=1}^2 \pi\_{ij} V(t, e(t), j, m) + \sum\_{n=1}^2 \rho\_{mn} V(t, e(t), i, n).
\end{aligned}
$$

By calculating the weak infinitesimal derivatives of *V*(*t*,*e*(*t*),*rt*, *qt*) along with the error system (16), one has

$$
\mathcal{L}V\_1 \le 2e^T(t) P \, {}\_{t\_0}D\_t^{\gamma} e(t), \quad \mathcal{L}V\_2 = d(t)\dot{d}(t), \tag{21}
$$

$$\begin{split} \mathcal{L}V\_{3} &= 2 \begin{bmatrix} \int\_{t-h}^{t} e(s) \, \mathrm{d}s \\ \int\_{t-\eta}^{t} e(s) \, \mathrm{d}s \end{bmatrix}^{T} \begin{bmatrix} P\_{1}^{im} & P\_{2}^{im} \\ \* & P\_{3}^{im} \end{bmatrix} \begin{bmatrix} e(t) - e(t-h) \\ e(t) - e(t-\eta) \end{bmatrix} + \begin{bmatrix} \int\_{t-h}^{t} e(s) \, \mathrm{d}s \\ \int\_{t-\eta}^{t} e(s) \, \mathrm{d}s \end{bmatrix}^{T} \\ &\quad\times \begin{bmatrix} \bar{P}\_{1}^{im} & \bar{P}\_{2}^{im} \\ \* & \bar{P}\_{3}^{im} \end{bmatrix} \begin{bmatrix} \int\_{t-h}^{t} e(s) \, \mathrm{d}s \\ \int\_{t-\eta}^{t} e(s) \, \mathrm{d}s \end{bmatrix}, \end{split} \tag{22}$$

$$
\mathcal{L}V\_4 = e^T(t)(Q\_1 + Q\_2)e(t) - e^T(t-h)Q\_1 e(t-h) - e^T(t-\eta)Q\_2 e(t-\eta), \tag{23}
$$

$$\begin{split} \mathcal{L}V\_{5} &\le \begin{bmatrix} e(t) \\ f(e(t)) \end{bmatrix}^{T} \begin{bmatrix} N\_{1} & N\_{2} \\ \* & N\_{3} \end{bmatrix} \begin{bmatrix} e(t) \\ f(e(t)) \end{bmatrix} - (1 - \bar{\eta}) \begin{bmatrix} e(t - \eta(t)) \\ f(e(t - \eta(t))) \end{bmatrix}^{T} \\ &\quad\times \begin{bmatrix} N\_{1} & N\_{2} \\ \* & N\_{3} \end{bmatrix} \begin{bmatrix} e(t - \eta(t)) \\ f(e(t - \eta(t))) \end{bmatrix}, \end{split} \tag{24}$$

$$\begin{split} \mathcal{L}V\_{6} &= h^{2} \begin{bmatrix} e(t) \\ \dot{e}(t) \end{bmatrix}^{T} \begin{bmatrix} R\_{1}^{im} & 0 \\ 0 & R\_{2}^{im} \end{bmatrix} \begin{bmatrix} e(t) \\ \dot{e}(t) \end{bmatrix} - h \int\_{t-h}^{t} \begin{bmatrix} e(s) \\ \dot{e}(s) \end{bmatrix}^{T} \begin{bmatrix} R\_{1}^{im} & 0 \\ 0 & R\_{2}^{im} \end{bmatrix} \begin{bmatrix} e(s) \\ \dot{e}(s) \end{bmatrix} \mathrm{d}s \\ &\quad+ h \int\_{-h}^{0} \int\_{t+\theta}^{t} \begin{bmatrix} e(s) \\ \dot{e}(s) \end{bmatrix}^{T} \begin{bmatrix} \bar{R}\_{1}^{im} & 0 \\ 0 & \bar{R}\_{2}^{im} \end{bmatrix} \begin{bmatrix} e(s) \\ \dot{e}(s) \end{bmatrix} \mathrm{d}s\,\mathrm{d}\theta, \end{split} \tag{25}$$

$$\begin{split} \mathcal{L}V\_{7} &= \eta^{2} \begin{bmatrix} e(t) \\ \dot{e}(t) \end{bmatrix}^{T} \begin{bmatrix} M\_{1}^{im} & 0 \\ 0 & M\_{2}^{im} \end{bmatrix} \begin{bmatrix} e(t) \\ \dot{e}(t) \end{bmatrix} - \eta \int\_{t-\eta}^{t} \begin{bmatrix} e(s) \\ \dot{e}(s) \end{bmatrix}^{T} \begin{bmatrix} M\_{1}^{im} & 0 \\ 0 & M\_{2}^{im} \end{bmatrix} \begin{bmatrix} e(s) \\ \dot{e}(s) \end{bmatrix} \mathrm{d}s \\ &\quad+ \eta \int\_{-\eta}^{0} \int\_{t+\theta}^{t} \begin{bmatrix} e(s) \\ \dot{e}(s) \end{bmatrix}^{T} \begin{bmatrix} \bar{M}\_{1}^{im} & 0 \\ 0 & \bar{M}\_{2}^{im} \end{bmatrix} \begin{bmatrix} e(s) \\ \dot{e}(s) \end{bmatrix} \mathrm{d}s\,\mathrm{d}\theta, \end{split} \tag{26}$$

$$\mathcal{L}V\_{8} = \frac{h^{3}}{2} \begin{bmatrix} e(t) \\ \dot{e}(t) \end{bmatrix}^{T} \begin{bmatrix} L\_{1} & 0 \\ 0 & L\_{2} \end{bmatrix} \begin{bmatrix} e(t) \\ \dot{e}(t) \end{bmatrix} - h \int\_{-h}^{0} \int\_{t+\theta}^{t} \begin{bmatrix} e(s) \\ \dot{e}(s) \end{bmatrix}^{T} \begin{bmatrix} L\_{1} & 0 \\ 0 & L\_{2} \end{bmatrix} \begin{bmatrix} e(s) \\ \dot{e}(s) \end{bmatrix} \mathrm{d}s\,\mathrm{d}\theta, \tag{27}$$

$$
\mathcal{L}V\_9 = \frac{\eta^3}{2} \begin{bmatrix} e(t) \\ \dot{e}(t) \end{bmatrix}^T \begin{bmatrix} J\_1 & 0 \\ 0 & J\_2 \end{bmatrix} \begin{bmatrix} e(t) \\ \dot{e}(t) \end{bmatrix} - \eta \int\_{-\eta}^{0} \int\_{t+\theta}^{t} \begin{bmatrix} e(s) \\ \dot{e}(s) \end{bmatrix}^T \begin{bmatrix} J\_1 & 0 \\ 0 & J\_2 \end{bmatrix} \begin{bmatrix} e(s) \\ \dot{e}(s) \end{bmatrix} \mathrm{d}s\,\mathrm{d}\theta. \tag{28}
$$

By using Lemmas 1 and 2, it follows that

$$-h \int\_{t-h}^{t} \dot{e}^T(s) R\_2^{im} \dot{e}(s) \, \mathrm{d}s \le \xi\_1^T(t) \Theta\_1 \xi\_1(t),\tag{29}$$

$$-\eta \int\_{t-\eta}^{t} \dot{e}^T(s) M\_2^{im} \dot{e}(s) \, \mathrm{d}s \le \xi\_2^T(t) \Theta\_2 \xi\_2(t),\tag{30}$$

$$-h\int\_{t-h}^{t} e^{T}(s) R\_{1}^{im} e(s) \, \mathrm{d}s \le -\left(\int\_{t-h}^{t} e(s) \, \mathrm{d}s\right)^{T} R\_{1}^{im} \int\_{t-h}^{t} e(s) \, \mathrm{d}s,\tag{31}$$

$$-\eta \int\_{t-\eta}^{t} e^{T}(s) M\_{1}^{im} e(s) \, \mathrm{d}s \le -\left(\int\_{t-\eta}^{t} e(s) \, \mathrm{d}s\right)^{T} M\_{1}^{im} \int\_{t-\eta}^{t} e(s) \, \mathrm{d}s,\tag{32}$$

where *ξ*1(*t*) = *col*{*e*(*t*),*e*(*t* − *τ*(*t*)),*e*(*t* − *h*)}, *ξ*2(*t*) = *col*{*e*(*t*),*e*(*t* − *η*(*t*)),*e*(*t* − *η*)}, and

$$
\begin{aligned}
\Theta\_1 &= \begin{bmatrix}
-R\_2^{im} & R\_2^{im} - S^{im} & S^{im} \\
\* & -2R\_2^{im} + He[S^{im}] & R\_2^{im} - S^{im} \\
\* & \* & -R\_2^{im}
\end{bmatrix}, \\
\Theta\_2 &= \begin{bmatrix}
-M\_2^{im} & M\_2^{im} - T^{im} & T^{im} \\
\* & -2M\_2^{im} + He[T^{im}] & M\_2^{im} - T^{im} \\
\* & \* & -M\_2^{im}
\end{bmatrix}.
\end{aligned}
$$
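The bounds (31) and (32) are Jensen-type integral inequalities. Their discrete analogue can be checked numerically; the sketch below (with a randomly generated positive definite weight `R`, standing in for the paper's $R\_1^{im}$) verifies that $h\int e^T R e\,\mathrm{d}s \ge (\int e\,\mathrm{d}s)^T R \int e\,\mathrm{d}s$ on a Riemann grid.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 2, 200            # state dimension, grid points
h = 1.0                  # length of the integration window
dt = h / N

M = rng.standard_normal((n, n))
R = M @ M.T + n * np.eye(n)          # random symmetric positive definite weight

e = rng.standard_normal((N, n))      # samples of e(s) on the grid

lhs = h * dt * sum(ei @ R @ ei for ei in e)   # h * integral of e^T R e (Riemann sum)
integral = dt * e.sum(axis=0)                 # integral of e
rhs = integral @ R @ integral

assert lhs >= rhs - 1e-9             # Jensen's inequality holds on the grid
```

The discrete inequality follows from the Cauchy-Schwarz inequality, which is exactly the mechanism behind (31) and (32).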

It can be obtained from *m*(*t*) that

$$\begin{split} &\varepsilon e^{T}(t)E\_{a}^{2}e(t) - 2\varepsilon e^{T}(t)E\_{a}E\_{b}f(e(t)) - 2\varepsilon e^{T}(t)E\_{a}E\_{d}f(e(t-\eta(t))) \\ &\quad + \varepsilon f^{T}(e(t))E\_{b}^{2}f(e(t)) + 2\varepsilon f^{T}(e(t))E\_{b}E\_{d}f(e(t-\eta(t))) \\ &\quad + \varepsilon f^{T}(e(t-\eta(t)))E\_{d}^{2}f(e(t-\eta(t))) - \varepsilon m^{T}(t)m(t) \geq 0. \end{split} \tag{33}$$

Moreover, from the adaptive event-triggered condition, activation function, (11) and (14), it follows that

$$d(t)\dot{d}(t) \le z^T(t - \tau(t))\Omega z(t - \tau(t)) - \bar{w}\,\tilde{z}^T(t)\Omega \tilde{z}(t),\tag{34}$$

$$-2f^T(e(t))\Delta\_1 f(e(t)) + 2e^T(t)\Phi \Delta\_1 f(e(t)) \ge 0,\tag{35}$$

$$-2f^T(e(t-\eta(t)))\Delta\_2f(e(t-\eta(t)))+2e^T(t-\eta(t))\Phi\Delta\_2f(e(t-\eta(t)))\geq 0,\tag{36}$$

$$
\lambda\_1 \tilde{z}^T(t) G\_s^T G\_s \tilde{z}(t) - \lambda\_1 g\_s^T(\tilde{z}(t)) g\_s(\tilde{z}(t)) \ge 0,\tag{37}
$$

$$
\lambda\_2 u\_c^T(t) G\_c^T G\_c u\_c(t) - \lambda\_2 g\_c^T(u\_c(t)) g\_c(u\_c(t)) \geq 0. \tag{38}
$$

Let

$$\begin{split} \zeta(t) = \operatorname{col}\bigl\{&e(t), e(t-\tau(t)), e(t-h), e(t-\eta(t)), e(t-\eta), f(e(t)), f(e(t-\eta(t))), \\ &g\_s(\tilde{z}(t)), g\_c(u\_c(t)), \tilde{z}(t), \int\_{t-h}^t e(s) \, \mathrm{d}s, \int\_{t-\eta}^t e(s) \, \mathrm{d}s, m(t)\bigr\}, \end{split}$$

together with (21)–(38). Then, the following can be obtained.

$$
\mathcal{L}V(t, e(t), r\_t, q\_t) \le \zeta^T(t) \Xi \zeta(t).
$$

From the aforementioned derivation, we know that matrix inequality (17) guarantees that Ξ < 0 holds, which further guarantees that L*V*(*t*,*e*(*t*),*rt*, *qt*) < 0 for every *i*, *m* ∈ *M*. Let *λ*<sup>0</sup> = *λmin*(−Ξ); then *λ*<sup>0</sup> > 0. For any *t* > 0, we have:

$$
\mathcal{L}V(t, e(t), r\_t, q\_t) \le -\lambda\_0 \zeta^T(t)\zeta(t) \le -\lambda\_0 e^T(t)e(t).
$$

By Definition 3, one can obtain:

$$\mathcal{E}\,V(t, e(t), i, m) - \mathcal{E}\,V(0, \phi(t\_0), r\_0, q\_0) \le -\lambda\_0 \mathcal{E}\left\{\int\_0^t e^T(s)e(s)\,\mathrm{d}s\right\},$$

hence, for *t* ≥ 0:

$$\mathcal{E}\left\{\int\_0^t e^T(s)e(s) \,\mathrm{d}s\right\} \le \frac{1}{\lambda\_0} \mathcal{E}\,V(0, \phi(t\_0), r\_0, q\_0),$$

which, based on Definition 4, implies that the error system (16) is globally, stochastically, asymptotically stable in the mean-square sense. That means systems (1) and (2) achieve global, stochastic, asymptotic synchronization in the mean-square sense. The proof is completed.

Notice that Theorem 1 only gives sufficient conditions for the synchronization of systems (1) and (2) and does not solve the design problem of the controller (12). Therefore, a design method for the control gain *K* is constructed in Theorem 2.

**Theorem 2.** *Suppose Assumption 1 holds. The FNNs (1) and (2) are globally, stochastically, asymptotically synchronized in the mean-square sense for the given scalars, if there exist positive definite matrices P*, Ω, *P*<sub>1</sub><sup>*im*</sup>, *P*<sub>3</sub><sup>*im*</sup>, *N*<sub>1</sub>, *N*<sub>3</sub>*, R*<sub>1</sub><sup>*im*</sup>, *R*<sub>2</sub><sup>*im*</sup>, *M*<sub>1</sub><sup>*im*</sup>, *M*<sub>2</sub><sup>*im*</sup>, *L*<sub>1</sub>*, L*<sub>2</sub>, *J*<sub>1</sub>, *J*<sub>2</sub>, *Q*<sub>1</sub>, *Q*<sub>2</sub>*; positive definite diagonal matrices* Δ<sub>1</sub>, Δ<sub>2</sub>*; matrices P*<sub>2</sub><sup>*im*</sup>, *N*<sub>2</sub>, *S*<sup>*im*</sup>, *T*<sup>*im*</sup>, *Y; and positive scalars ε*, *λ*<sub>1</sub>, *λ*<sub>2</sub>, *such that the following LMIs hold for every i*, *m:*

$$
\begin{bmatrix}
\tilde{\Pi}\_{1,1} & \Pi\_{1,2} & \Pi\_{1,3} & \tilde{\Pi}\_{1,4} & \Pi\_{1,5} & \tilde{\Pi}\_{1,6} & \tilde{\Pi}\_{1,7} \\
\* & \Pi\_{2,2} & \Pi\_{2,3} & 0 & \Pi\_{2,5} & 0 & 0 \\
\* & \* & \Pi\_{3,3} & 0 & 0 & 0 & \tilde{\Pi}\_{3,7} \\
\* & \* & \* & \Pi\_{4,4} & 0 & \tilde{\Pi}\_{4,6} & \tilde{\Pi}\_{4,7} \\
\* & \* & \* & \* & \Pi\_{5,5} & 0 & \tilde{\Pi}\_{5,7} \\
\* & \* & \* & \* & \* & \tilde{\Pi}\_{6,6} & 0 \\
\* & \* & \* & \* & \* & \* & \tilde{\Pi}\_{7,7}
\end{bmatrix} < 0,\tag{39}
$$

$$
\bar{R}\_1^{im} < L\_1, \quad \bar{R}\_2^{im} < L\_2, \quad \bar{M}\_1^{im} < J\_1, \quad \bar{M}\_2^{im} < J\_2, \tag{40}
$$

$$
\begin{bmatrix} R\_2^{im} & S^{im} \\ \* & R\_2^{im} \end{bmatrix} \ge 0,\\
\begin{bmatrix} M\_2^{im} & T^{im} \\ \* & M\_2^{im} \end{bmatrix} \ge 0,\tag{41}
$$

$$
\begin{bmatrix} P\_1^{im} & P\_2^{im} \\ \* & P\_3^{im} \end{bmatrix} > 0, \begin{bmatrix} N\_1 & N\_2 \\ \* & N\_3 \end{bmatrix} > 0,\tag{42}
$$

*where*

$$\begin{aligned}
&\tilde{\Pi}\_{1,1} = \begin{bmatrix} \Xi\_{1,1} & \tilde{\Xi}\_{1,2} \\ \* & \Xi\_{2,2} \end{bmatrix}, \quad
\tilde{\Pi}\_{1,4} = \begin{bmatrix} b\_m^c \bar{b}\_i^s Y & \bar{b}\_m^c P & b\_m^c b\_i^s Y \\ 0 & 0 & \lambda\_1 C^T G\_s^T G\_s \end{bmatrix}, \quad
\tilde{\Pi}\_{1,6} = \begin{bmatrix} 0 \\ b\_i^s C^T Y^T \end{bmatrix}, \\
&\tilde{\Pi}\_{1,7} = \Psi \otimes \begin{bmatrix} -A^T P \\ b\_i^s b\_m^c C^T Y^T \end{bmatrix}, \quad
\tilde{\Pi}\_{3,7} = \Psi \otimes \begin{bmatrix} B^T P \\ D^T P \end{bmatrix}, \quad
\tilde{\Pi}\_{4,6} = \begin{bmatrix} \bar{b}\_i^s Y^T \\ 0 \\ b\_i^s Y^T \end{bmatrix}, \quad
\tilde{\Pi}\_{4,7} = \Psi \otimes \begin{bmatrix} \bar{b}\_i^s b\_m^c Y^T \\ \bar{b}\_m^c P \\ b\_i^s b\_m^c Y^T \end{bmatrix}, \\
&\tilde{\Pi}\_{5,7} = \Psi \otimes \begin{bmatrix} 0 \\ 0 \\ G^T P \end{bmatrix}, \quad
\tilde{\Pi}\_{6,6} = -2\alpha\_1 P + \alpha\_1^2 \lambda\_2 G\_c^T G\_c, \quad
\tilde{\Xi}\_{1,2} = b\_m^c b\_i^s YC + R\_2^{im} - S^{im}, \\
&\tilde{\Pi}\_{7,7} = diag\{-2\alpha\_2 P + \alpha\_2^2 R\_2^{im},\; -2\alpha\_3 P + \alpha\_3^2 M\_2^{im},\; -4h\alpha\_4 P + 2h\alpha\_4^2 L\_2,\; -4h\alpha\_5 P + 2h\alpha\_5^2 J\_2\},
\end{aligned}$$

*and the other parameters are the same as in Theorem 1; the feedback gain matrix is given by K* = *P*<sup>−1</sup>*Y.*

**Proof.** For any scalar *α* > 0, the following inequality holds:

$$(\alpha\Omega - P)\Omega^{-1}(\alpha\Omega - P) \ge 0.$$

Based on the inequality, it can be obtained that:


$$-P\Omega^{-1}P \le -2\alpha P + \alpha^2\Omega.$$
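This linearization bound can be verified numerically for randomly generated positive definite matrices; a minimal sketch (the matrices and the scalar `alpha` below are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_spd(n):
    """Generate a random symmetric positive definite matrix."""
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

n = 3
P, Omega = random_spd(n), random_spd(n)
alpha = 0.7

exact = -P @ np.linalg.inv(Omega) @ P            # -P Omega^{-1} P
bound = -2 * alpha * P + alpha**2 * Omega        # -2*alpha*P + alpha^2*Omega

# bound - exact = (alpha*Omega - P) Omega^{-1} (alpha*Omega - P), which is PSD
gap = bound - exact
gap = 0.5 * (gap + gap.T)                        # symmetrize against round-off
assert np.min(np.linalg.eigvalsh(gap)) >= -1e-8  # bound dominates the exact term
```

Expanding $(\alpha\Omega - P)\Omega^{-1}(\alpha\Omega - P)$ gives exactly `bound - exact`, which is why the assertion holds for any positive definite choice.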

By defining *χ* = *diag*{*I*, . . . , *I*, *P*, *P*, *P*, *P*, *P*}, multiplying (17) by *χ* on the left and right sides, respectively, and replacing the term in *Π*<sub>6,6</sub> with −2*α*<sub>1</sub>*P* + *α*<sub>1</sub><sup>2</sup>*λ*<sub>2</sub>*G*<sub>c</sub><sup>T</sup>*G*<sub>c</sub>, one obtains *Π*˜<sub>6,6</sub>. In the same way, *Π*˜<sub>7,7</sub> replaces *Π*<sub>7,7</sub>. In addition, *Y* = *PK* is substituted. Then linear matrix inequality (39) can be obtained. That completes the proof.
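In practice, LMIs such as (39)-(42) are checked with a semidefinite programming solver (e.g., the MATLAB LMI toolbox, YALMIP, or cvxpy). As a minimal illustration of the underlying Lyapunov mechanism only, not the full LMI (39), the sketch below solves the Lyapunov equation $A^T P + P A = -Q$ for a hypothetical Hurwitz matrix `A` via Kronecker vectorization and verifies that the resulting `P` is positive definite:

```python
import numpy as np

A = np.array([[-1.0, 0.5],
              [0.0, -2.0]])          # hypothetical Hurwitz matrix (eigenvalues -1, -2)
Q = np.eye(2)

n = A.shape[0]
I = np.eye(n)
# vec(A^T P + P A) = (I kron A^T + A^T kron I) vec(P) = -vec(Q)
K = np.kron(I, A.T) + np.kron(A.T, I)
P = np.linalg.solve(K, -Q.reshape(-1)).reshape(n, n)
P = 0.5 * (P + P.T)                   # symmetrize against round-off

assert np.all(np.linalg.eigvalsh(P) > 0)                 # P > 0: feasibility
assert np.all(np.linalg.eigvalsh(A.T @ P + P @ A) < 0)   # decrease condition
```

A feasible `P` here plays the same role as the matrix variables of Theorem 2: its existence certifies a decreasing Lyapunov function.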

### **4. Numerical Simulations**

In this section, a simulation is presented to demonstrate the effectiveness of the proposed approach. Consider the FNNs described by Equations (1) and (2) with the following parameters:

$$A = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad B = \begin{bmatrix} 1.8 & -0.1 \\ -2 & 0.4 \end{bmatrix}, \quad D = \begin{bmatrix} -1.7 & -0.6 \\ 0.5 & -2.5 \end{bmatrix},$$

$$E\_a = \begin{bmatrix} 0.01 & 0 \\ 0 & 0.01 \end{bmatrix}, \quad E\_b = \begin{bmatrix} 0.01 & 0 \\ 0 & 0.01 \end{bmatrix}, \quad E\_d = \begin{bmatrix} 0.01 & 0 \\ 0 & 0.01 \end{bmatrix},$$

$$\mathbf{G} = \begin{bmatrix} 0.01 & 0 \\ 0 & 0.02 \end{bmatrix}, \mathbf{C} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.$$

The nonlinear function was selected as *f*(*x*) = tanh(*x*), so it can be calculated that Φ = *I*. For the time-varying delay *η*(*t*) = 0.1*e<sup>t</sup>*/(1 + *e<sup>t</sup>*), one obtains *η* = 0.1 and *η*¯ = 0.025, respectively. The deception signal functions were chosen as *gs*(*x*) = tanh(*x*) and *gc*(*x*) = tanh(*x*); therefore, *Gs* = *I* and *Gc* = *I*. In this numerical example, we set the sampling period *h* = 0.05, the fractional order *γ* = 0.98, the initial value of the adaptive event-triggered parameter *d*<sup>0</sup> = 0.8, the external input vector *I*(*t*) = 0, and *α*<sub>1</sub> = *α*<sub>2</sub> = *α*<sub>3</sub> = *α*<sub>4</sub> = *α*<sub>5</sub> = 0.1. Additionally, the generators of the Markov processes *rt* and *qt* were

$$
\pi\_{ij} = \begin{bmatrix} -0.4 & 0.4 \\ 0.5 & -0.5 \end{bmatrix}, \quad
\rho\_{mn} = \begin{bmatrix} -0.4 & 0.4 \\ 0.65 & -0.65 \end{bmatrix}.
$$
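The two switching signals are independent two-state continuous-time Markov chains with these generators (each row sums to zero, and the diagonal entry gives the exit rate of each mode). A minimal simulation sketch (the function `simulate_ctmc` is illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_ctmc(Q, T):
    """Sample a two-state continuous-time Markov chain path with generator Q up to time T."""
    state, t, path = 0, 0.0, [(0.0, 0)]
    while t < T:
        rate = -Q[state, state]                  # exit rate of the current mode
        t += rng.exponential(1.0 / rate)         # exponentially distributed holding time
        state = 1 - state                        # two modes: always jump to the other
        path.append((t, state))
    return path

pi = np.array([[-0.4, 0.4], [0.5, -0.5]])        # generator of r_t (S-C attack mode)
rho = np.array([[-0.4, 0.4], [0.65, -0.65]])     # generator of q_t (C-A attack mode)

assert np.allclose(pi.sum(axis=1), 0) and np.allclose(rho.sum(axis=1), 0)
path_r = simulate_ctmc(pi, 50.0)
assert len(path_r) > 1                           # the chain switches modes
```

Sampling each chain independently reproduces the double-attack switching assumed in the error system (16).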

Based on the proposed method, by solving the LMIs in Theorem 2, one can obtain the desired controller gain and the adaptive event-triggered weighting matrix as follows:

$$K = \begin{bmatrix} -0.0178 & 0.0026 \\ -0.0021 & -0.0270 \end{bmatrix}, \Omega = \begin{bmatrix} 0.0007 & 0.0007 \\ 0.0007 & 0.0011 \end{bmatrix}. \tag{43}$$
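Fractional order trajectories such as those reported below are commonly computed with the Grünwald-Letnikov discretization. A minimal sketch for a scalar test system $D^{\gamma}x = -x$ with *γ* = 0.98 and step *h* = 0.05, a stand-in for the full coupled error system (16), not the paper's simulation code:

```python
import numpy as np

gamma, h, N = 0.98, 0.05, 400     # fractional order, step size, horizon

# Grunwald-Letnikov binomial coefficients: c_0 = 1, c_j = c_{j-1} * (1 - (1+gamma)/j)
c = np.ones(N + 1)
for j in range(1, N + 1):
    c[j] = c[j - 1] * (1.0 - (1.0 + gamma) / j)

x = np.zeros(N + 1)
x[0] = 1.0
for k in range(1, N + 1):
    # GL scheme for D^gamma x = -x: sum_{j=0}^{k} c_j x_{k-j} = -h^gamma * x_k
    hist = np.dot(c[1:k + 1], x[k - 1::-1])      # memory term over the whole history
    x[k] = -hist / (1.0 + h**gamma)

assert 0 < x[-1] < x[0]           # trajectory decays toward zero
```

The growing memory term `hist` is exactly the "time memory" feature of fractional calculus mentioned in the introduction: every past state influences the current update.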

We chose the initial values *φ*1(*t*0) = (0.5; −0.1) and *φ*2(*t*0) = (0.1; 0.2). Figure 2 shows the state trajectories of the synchronization errors without control input. As can be seen from Figure 2, without control input the error system is unstable, which means that the systems cannot be synchronized. Using the feedback controller (12), the simulation results shown in Figures 3–7 were obtained. Figure 3 shows the state trajectories of the synchronization errors with control input; the errors finally converge to zero under the designed control protocol, which shows that the systems can achieve synchronization. Figures 4 and 5 depict the states of the double deception attacks, which caused the oscillations of the synchronization error and the control input. Figure 6 depicts the trajectories of the control input, from which one can see that the control input gradually tends to zero; that is, once the system achieves synchronization, external control is no longer required. Figure 7 shows the evolution of the adaptive threshold coefficient *d*(*t*) in the AETS. Under the adaptive law (6), the threshold coefficient is adjusted in real time according to the synchronization error; therefore, when the error system is stable, that is, when synchronization is achieved, the parameter is no longer adjusted and tends toward a constant. From the above simulation results, it can easily be seen that the synchronization problem proposed in this paper was effectively solved.
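The AETS transmission logic can be sketched as follows. Since the adaptive law (6) is not reproduced in this section, the threshold update below is a hypothetical stand-in, and the weighting matrix, mock signal, and all constants are illustrative only:

```python
import numpy as np

h = 0.05                         # sampling period, as in the example
d = 0.8                          # initial adaptive threshold coefficient d_0
Omega = np.eye(2)                # hypothetical event-trigger weighting matrix

z_last = np.array([1.0, -0.5])   # last transmitted measurement
events, steps = 0, 400

for k in range(steps):
    # mock decaying, oscillating measurement signal (illustrative only)
    z = np.exp(-0.01 * k) * np.array([np.cos(0.1 * k), np.sin(0.1 * k)])
    err = z - z_last
    # transmit when the weighted error energy exceeds d(t) times the signal energy
    if err @ Omega @ err > d * (z @ Omega @ z):
        z_last = z
        events += 1
    # hypothetical adaptive law: threshold shrinks while the error is large
    d = max(0.05, d - h * (err @ Omega @ err))

assert 0 < events < steps        # transmissions are reduced vs. a time-triggered scheme
```

The point of the sketch is the event count: far fewer than `steps` transmissions occur, which is the network-load reduction the AETS is designed to provide.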

**Figure 2.** Synchronization error *ei*(*t*)(*i* = 1, 2) without control input.

**Figure 3.** Synchronization error *ei*(*t*)(*i* = 1, 2) with control input *ui*(*t*)(*i* = 1, 2).

**Figure 4.** The state of the deception signal in the S-C channel.

**Figure 5.** The state of the deception signal in the C-A channel.

**Figure 6.** The trajectories of control input *ui*(*t*)(*i* = 1, 2).

**Figure 7.** The trajectory of event-triggered parameter *d*(*t*).

### **5. Discussion and Conclusions**

The adaptive event-triggered synchronization problem of uncertain FNNs with double deception attacks and time-varying delay has been investigated in this paper. It is noteworthy that existing work on fractional order systems subject to deception attacks with traditional event-triggered methods, such as [22], is not comprehensive enough: not only was the traditional ETS used, but the attack phenomena were governed by Bernoulli processes, and attacks occurred only in the C-A channel. Thus, in this study, the AETS was adopted to determine which signals needed to be transmitted, and the deception attacks in the communication channels from sensor to controller and from controller to actuator were governed by two independent Markov processes. Considering the AETS, double deception attacks, and parameter uncertainties, a time-varying closed-loop fractional order synchronization error system was constructed. Sufficient conditions guaranteeing that the considered system is stochastically stable were formulated by employing the Lyapunov–Krasovskii functional method. Finally, a numerical example was presented to verify the effectiveness and feasibility of the proposed method, showing that our approach is more comprehensive than existing ones. It should be mentioned that, besides deception attacks, denial-of-service (DoS) attacks are another interesting issue for FNNs and deserve further exploration. In addition, solving the problem of multiple communication channels for FNNs will be part of our future research efforts.

**Author Contributions:** Z.S. was in charge of the construction of model and writing. F.Y. was in charge of the simulation. J.C. and J.Z. mainly contributed to the synchronization analysis. A.H. mainly contributed via the supervision of program. M.H. was in charge of the review and editing of the whole paper. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work is jointly supported by the National Natural Science Foundation of China under grant 61973137 and the Natural Science Foundation of Jiangsu Province under grant BK20181342.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

### **Abbreviations**

The following abbreviations are used in this paper:

FNNs: Fractional order neural networks
ETS: Event-triggered scheme
TTS: Time-triggered scheme
AETS: Adaptive event-triggered scheme
S-C channel: Sensor-to-controller channel
C-A channel: Controller-to-actuator channel

### **References**

