Article

The Multiple Frequency Conversion Sinusoidal Chaotic Neural Network and Its Application

by Zhiqiang Hu, Zhongjin Guo, Gongming Wang, Lei Wang, Xiaodong Zhao and Yongfeng Zhang
1 School of Mathematics and Statistics, Taishan University, Taian 271000, China
2 Beijing Institute of Artificial Intelligence, Beijing University of Technology, Beijing 100124, China
3 Beijing Key Laboratory of Computational Intelligence and Intelligent System, Beijing 100124, China
4 School of Automation, Beijing Information Science and Technology University, Beijing 100192, China
* Author to whom correspondence should be addressed.
Fractal Fract. 2023, 7(9), 697; https://doi.org/10.3390/fractalfract7090697
Submission received: 25 June 2023 / Revised: 17 September 2023 / Accepted: 19 September 2023 / Published: 21 September 2023
(This article belongs to the Section General Mathematics, Analysis)

Abstract

To address the limited global search performance of the transiently chaotic neural network, a multiple frequency conversion sinusoidal chaotic neural network (MFCSCNN) model is proposed, inspired by the biological mechanism of the brain, in which multiple functional modules process sinusoidal signals of different frequencies. In this model, multiple FCS functions and Sigmoid functions with different phase angles are combined, as a weighted sum, to construct the excitation function of the neurons. The inverted bifurcation diagram, Lyapunov exponent diagram, and parameter ranges of the model are given, the dynamic characteristics of the model are analyzed, and the model is applied to function optimization and combinatorial optimization problems. Experimental results show that the MFCSCNN has better global search performance than the transiently chaotic neural network and other related models.

1. Introduction

A transiently chaotic neural network (TCNN) is a type of neural network used to solve optimization problems [1,2]. A TCNN is built on the traditional Hopfield neural network (HNN) by adding self-feedback connections, which make the network exhibit complex chaotic dynamic behavior [3,4,5]. The optimization mechanism of the TCNN is the same as that of the HNN: both adopt the gradient descent algorithm [6]. By introducing the chaotic dynamic (self-feedback) term, the TCNN can exploit the ergodicity, pseudo-randomness, and non-repeatability of chaos to escape local optima [7,8]. However, the TCNN's global chaotic search performance remains limited by factors such as the parameter settings, the excitation function, and the annealing function [9,10]. Scholars have therefore investigated the model from diverse perspectives to enhance its performance.
The excitation function of the TCNN is usually a monotonically increasing Sigmoid function [2,5]. Shuai et al. [11] showed that an effective excitation function can take various shapes and should exhibit non-monotonic behavior, and proposed a CNN model with an odd-symmetric excitation function. Potapov et al. [12] argued that a non-monotonic excitation function makes a neural network more likely to produce chaotic dynamic behavior. Researchers have therefore pursued this direction. Chen et al. [7] proposed a PLF-TCNN model for solving the traveling salesman problem, using a piecewise linear function as a non-monotonic excitation function. Uykan et al. [13,14] took the double Sigmoid function as the excitation function of the neurons and proposed a DS-HNN model to solve the channel allocation optimization problem. Xu et al. [15] proposed an SSW-CNN model for solving combinatorial optimization problems by summing the Shannon wavelet and the Sigmoid function into a non-monotonic excitation function. Zhang et al. [16] designed a non-monotonic excitation function based on the Euler formula and proposed a CHNN-APHM model for solving blind detection problems. All of the above models propose different forms of non-monotonic excitation functions. Although they enhance the optimization ability of the model to some extent, they all lack a clear biological mechanism. Following the neurological characteristic that "the higher the frequency, the lower the amplitude" of the α, β, δ, γ, and θ brain waves observed in different states, Hu et al. [17,18,19] designed the FCS function, weighted it with the Sigmoid function to form a non-monotonic excitation function, and proposed the FCSCNN model. That model has a genuine biological grounding, but the human brain contains 14 to 16 billion neurons and is divided into multiple functional modules whose activities differ [20,21], and brain waves are composed of sinusoidal signals of different frequencies [22,23,24,25]. A single FCS function therefore cannot fully reflect the complex and varied mechanisms of action of brain nerve cells.
An excitation function that conforms to the biological mechanisms of neuroscience endows the transiently chaotic neural network with richer chaotic dynamics and better global search performance. Based on the above theories and biological mechanisms, this paper combines multiple frequency conversion sinusoidal (FCS) functions with different phase angles and Sigmoid functions, as a weighted sum, to form a new non-monotonic excitation function. On this basis, a new TCNN model, the multiple frequency conversion sinusoidal chaotic neural network (MFCSCNN), is proposed. The model preserves the overall monotonicity of the Sigmoid excitation function while introducing variable local non-monotonicity, which can produce complex chaotic dynamic behavior and makes better use of chaotic ergodicity for global optimization. At the same time, the construction of the function and the introduction of phase angles make the model more consistent with the actual biological mechanism of neural information transmission. In this paper, the inverse bifurcation diagrams and Lyapunov exponent diagrams [26,27,28] of the chaotic neurons are provided and their dynamic characteristics are analyzed. The model is then applied to function optimization and combinatorial optimization problems [29,30,31,32,33], verifying that it has good global optimization performance and overcomes local optima better than related models.

2. MFCS Chaotic Neuron Model

2.1. MFCS Function

By introducing several FCS functions and adding phase angle parameters, this paper proposes a multiple frequency conversion sinusoidal (MFCS) function, defined as follows:
S(u) = \sum_{n=1}^{N} c_n A_n \sin\!\left(\frac{u}{\varepsilon_n} + \phi_n\right) = \sum_{n=1}^{N} c_n A_n(0)\, e^{-a_n|u|} \sin\!\left(\frac{u}{\varepsilon_n(0)\, e^{-b_n|u|}} + \phi_n\right)   (1)
where n indexes the FCS functions, n = 1, 2, …, N; u is the independent variable, representing the strength of brain activity; An and An(0) are the amplitude and its initial value (0 ≤ An(0) ≤ 1), respectively; εn is the steepness factor, representing the frequency of the sine function, and εn(0) is the initial value of the steepness factor (εn(0) > 0); an and bn are positive parameters; cn is the weighting coefficient; and φn is the phase angle. When N = 2, A1(0) = 0.8, ε1(0) = 0.02, a1 = 6, b1 = 1, φ1 = 0, c1 = 0.25, A2(0) = 0.4, ε2(0) = 0.04, a2 = 2, b2 = 1.8, φ2 = π/2, and c2 = 0.35, the comparison of the FCS and MFCS functions shown in Figure 1 is obtained.
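To make Equation (1) concrete, the following is a minimal NumPy sketch of the MFCS function using the Figure 1 parameter set; the function and variable names (fcs, mfcs, params) are our own illustrative choices, not from the paper.

```python
# A minimal sketch of the MFCS function in Equation (1); names are illustrative.
import numpy as np

def fcs(u, A0, eps0, a, b, phi):
    """Single FCS term: A(u)*sin(u/eps(u) + phi), with
    A(u) = A0*exp(-a|u|) and eps(u) = eps0*exp(-b|u|)."""
    A = A0 * np.exp(-a * np.abs(u))
    eps = eps0 * np.exp(-b * np.abs(u))
    return A * np.sin(u / eps + phi)

def mfcs(u, params):
    """MFCS: weighted sum of N FCS terms; params = [(c, A0, eps0, a, b, phi), ...]."""
    return sum(c * fcs(u, A0, eps0, a, b, phi)
               for (c, A0, eps0, a, b, phi) in params)

# The Figure 1 parameter set (N = 2) from the text
params = [(0.25, 0.8, 0.02, 6, 1.0, 0.0),
          (0.35, 0.4, 0.04, 2, 1.8, np.pi / 2)]
u = np.linspace(-1, 1, 2001)
S = mfcs(u, params)   # sample the MFCS curve of Figure 1b
```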
As shown in Figure 1, the MFCS function, composed of the weighted sum of two FCS functions, has more complex and variable amplitude-frequency characteristics than a single FCS function. As N increases, by selecting different values of A(0), ε(0), φ, a, b, and c, the MFCS function takes on richer waveforms, which better represent the superposition, complexity, and diversity of brain waves. The MFCS and FCS functions have the same parameter ranges and action characteristics, as shown in Table 1.
Here, c·A(0) determines the weight of each FCS term; the greater ε(0), the smaller the frequency f; and as |u| increases, a larger a makes A decrease faster, while a larger b makes f increase faster. As shown in Table 1, the frequency band of the MFCS function (0.497~100.311 Hz) is consistent with that of brain waves (0.5~100 Hz), preserving the biometric characteristics of actual brain-wave frequencies.

2.2. MFCS Chaotic Neuron Model

To further analyze the dynamic characteristics and application of the MFCS function in the MFCSCNN model, the excitation function of the neurons is constructed as the weighted sum of the MFCS function and the Sigmoid function, described as follows:
f(u) = S_0(u, \varepsilon_0) + \sum_{n=1}^{N} c_n S_n(u, \varepsilon_n) = \frac{1}{1 + e^{-u/\varepsilon_0}} + \sum_{n=1}^{N} c_n A_n \sin\!\left(\frac{u}{\varepsilon_n} + \phi_n\right)   (2)
where S0 is the Sigmoid function, ε0 is the steepness parameter of S0 (ε0 > 0), and the other parameters are defined as in Equation (1). For N = 1 and N = 2, the graphs of the f(u) activation function shown in Figure 2 are obtained. With the independent variable u fixed at 0.5 and the other parameters set as in Figure 1, except b1 and b2, the three-dimensional plot of f(u) as a function of b1 and b2 is shown in Figure 3.
As shown in Figure 2, the activation function with the MFCS term exhibits more pronounced non-monotonicity, while the global monotonicity of the Sigmoid function is unaffected. As shown in Figure 3, the MFCS excitation function is more multidimensional and has a richer dynamic basis. When ε0(0) = 0.08, N = 2, A1(0) = 0.2, ε1(0) = 0.1, a1 = 6, b1 = 1.8, φ1 = 0, c1 = 0.25, A2(0) = 0.6, ε2(0) = 0.2, a2 = 3.5, b2 = 0.6, φ2 = 0.9π, and c2 = 0.35, the Sigmoid and MFCS excitation functions and their derivatives shown in Figure 4 are obtained.
As shown in Figure 4, by adjusting the MFCS function parameters An(0), εn(0), an, bn, cn, and φn, it is possible to control whether the derivative approaches zero. The derivative of Sigmoid + MFCS has richer variation than that of the plain Sigmoid excitation function, so the vanishing-gradient problem is less likely to occur. All of the above provides a good prerequisite for the neurons to produce more complex chaotic dynamic behavior. In summary, according to Equation (2) and Figures 1–4, larger cn·An(0), an, and bn, smaller εn(0), and random φn are more conducive to strengthening the non-monotonicity of the neuronal excitation function. On this basis, this paper proposes the MFCS chaotic neuron model, described as follows:
x(t) = f(y(t))   (3)
y(t+1) = k\,y(t) - z(t)\,(x(t) - I_0)   (4)
z(t+1) = (1 - \beta)\,z(t)   (5)
f(u) = S_0(u, \varepsilon_0) + \sum_{n=1}^{N} c_n S_n(u, \varepsilon_n)   (6)
S_0(u, \varepsilon_0) = \frac{1}{1 + e^{-u/\varepsilon_0}}   (7)
S_n(u, \varepsilon_n) = A_n \sin\!\left(\frac{u}{\varepsilon_n} + r_n\phi_n\right) = A_n(0)\, e^{-a_n|u|} \sin\!\left(\frac{u}{\varepsilon_n(0)\, e^{-b_n|u|}} + r_n\phi_n\right)   (8)
where y(t) is the internal state of the neuron, x(t) is the output of the neuron, z(t) is the self-feedback connection weight (z(t) > 0), k is the damping factor of the nerve membrane (0 ≤ k ≤ 1), I0 is a positive parameter, β is the annealing attenuation factor of z(t) (0 ≤ β ≤ 1), rn is a random number in [0, 1], and φn is taken as 2π, so that rnφn is a random phase in [0, 2π]. The other parameters are defined as in Equation (2).
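As an illustration of how Equations (3)–(8) can be iterated, here is a hedged Python sketch (the authors report using Matlab; this re-implementation, its names, and the folding of the random phase rn·φn into the phi entries of params are our assumptions). It reuses mfcs() from the sketch after Equation (1).

```python
# A sketch of the MFCS chaotic neuron, Equations (3)-(8); reuses mfcs()/params
# from the previous sketch, with r_n*phi_n folded into the phi entries.
import numpy as np

def excitation(u, eps0, params):
    """f(u) = Sigmoid(u, eps0) + sum_n c_n * S_n(u), Equations (6)-(8)."""
    return 1.0 / (1.0 + np.exp(-u / eps0)) + mfcs(u, params)

def simulate_neuron(steps, params, k=1.0, beta=0.005, eps0=0.02, I0=0.65,
                    z0=0.8, seed=0):
    rng = np.random.default_rng(seed)
    y, z = rng.uniform(-1.0, 1.0), z0
    outputs = []
    for _ in range(steps):
        x = excitation(y, eps0, params)   # x(t) = f(y(t)), Equation (3)
        y = k * y - z * (x - I0)          # internal state, Equation (4)
        z = (1.0 - beta) * z              # annealing of self-feedback, Equation (5)
        outputs.append(x)
    return np.array(outputs)
```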

2.3. Analysis of Dynamic Characteristics

Several techniques exist for identifying chaotic trajectories, such as the inverse bifurcation diagram [10,18] and the Lyapunov exponent [17,26]. In this paper, the dynamic characteristics of the MFCS chaotic neuron model are analyzed by drawing the inverse bifurcation diagram and calculating the Lyapunov exponent. The inverse bifurcation diagram directly reflects the evolution of the neuronal dynamics and the state changes of chaos, while the Lyapunov exponent objectively quantifies the onset and intensity of chaotic and periodic motion [3,26]. A positive Lyapunov exponent indicates that the model is in a chaotic state, and the greater the value, the stronger the chaos [27,28]. The Lyapunov exponent is defined as follows [17]:
\lambda = \lim_{m \to \infty} \frac{1}{m} \sum_{t=0}^{m-1} \log\!\left|\frac{dy(t+1)}{dy(t)}\right|   (9)
From Equation (4), we have:
\frac{dy(t+1)}{dy(t)} = k - z(t)\,\frac{dx(t)}{dy(t)} = k - z(t)\left[\frac{dS_0(y(t))}{dy(t)} + \sum_{n=1}^{N} c_n \frac{dS_n(y(t))}{dy(t)}\right]   (10)
where
\frac{dS_0(y(t))}{dy(t)} = \frac{1}{\varepsilon_0}\, S_0(y(t))\,\bigl(1 - S_0(y(t))\bigr)   (11)
\frac{dS_n(y(t))}{dy(t)} = A_n(0)\, e^{-a_n|y(t)|}\left[\frac{(1 + b_n|y(t)|)\, e^{b_n|y(t)|}}{\varepsilon_n(0)} \cos\!\left(\frac{y(t)\, e^{b_n|y(t)|}}{\varepsilon_n(0)} + \phi_n\right) - a_n\,\operatorname{sgn}(y(t)) \sin\!\left(\frac{y(t)\, e^{b_n|y(t)|}}{\varepsilon_n(0)} + \phi_n\right)\right]   (12)
where sgn(·) combines the y(t) > 0 and y(t) < 0 branches of the derivative.
The Lyapunov exponent of the MFCS chaotic neuron model is computed from Equations (9)–(12). With appropriate parameters, MFCS neurons exhibit abundant chaotic dynamic behavior. The basic parameters of the neuron model are set as follows: k = 1, β = 0.005, ε0 = 0.02, I0 = 0.65, z(0) = 0.8; the MFCS function parameters are set as follows: N = 2, A1(0) = 0.8, ε1(0) = 0.02, a1 = 6, b1 = 1, φ1 = 0, A2(0) = 0.4, ε2(0) = 0.04, a2 = 2, b2 = 1.8, r2·φ2 = π/2. The output of the neuron x(t) is initialized randomly, x(t) is iterated according to Equations (3)–(8), and each value of x(t) is recorded. The inverse bifurcation diagrams of the transiently chaotic neuron (c1 = c2 = 0), the FCS chaotic neuron (c1 = 0.25, c2 = 0), and the MFCS chaotic neuron (c1 = 0.25, c2 = 0.6), together with the time evolution of the Lyapunov exponent, computed in Matlab R2020a, are shown in Figure 5.
As shown in Figure 5, under the same parameter conditions, the neuronal output x gradually changes from a chaotic state to a period-doubling state as the iterations increase and z decreases (…, 4 cycles, 2 cycles, 1 cycle). The output ranges of the TCN, FCS, and MFCS chaotic neurons are [0, 1], [−0.0842, 1.0870], and [−0.2006, 1.1562], respectively. To compare the chaotic characteristics of the three models quantitatively, we used the average Lyapunov exponents over the first 400 iterations. After calculation, the average Lyapunov exponents of the TCN, FCS, and MFCS chaotic neurons were −0.4942, 0.0239, and 0.3753, respectively. The MFCS neuron has an inverted bifurcation plot with a greater output range and better distribution uniformity, and a higher, more positive Lyapunov exponent than the FCS and transiently chaotic neuron models. In summary, the MFCS chaotic neuron model has richer and stronger chaotic dynamics, which lays a good theoretical foundation for applications in intelligent optimization, neural computing, secure communication, and beyond.
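The average Lyapunov exponent reported above can be estimated along a trajectory as in the sketch below. Note one simplification that is ours, not the paper's: instead of the closed-form derivatives of Equations (11) and (12), a central finite difference approximates f′(y).

```python
# Estimate the average Lyapunov exponent of Equation (9) along a trajectory;
# f'(y) is approximated by a central finite difference (our simplification),
# and excitation() is the helper from the neuron sketch above.
import numpy as np

def average_lyapunov(steps, params, k=1.0, beta=0.005, eps0=0.02, I0=0.65,
                     z0=0.8, h=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    y, z = rng.uniform(-1.0, 1.0), z0
    logs = []
    for _ in range(steps):
        dfdy = (excitation(y + h, eps0, params)
                - excitation(y - h, eps0, params)) / (2.0 * h)
        logs.append(np.log(abs(k - z * dfdy)))   # |dy(t+1)/dy(t)|, Equation (10)
        y = k * y - z * (excitation(y, eps0, params) - I0)
        z = (1.0 - beta) * z
    return float(np.mean(logs))

# Setting (c1, c2) = (0, 0), (0.25, 0) and (0.25, 0.6) in params reproduces the
# transient, FCS, and MFCS neuron comparisons of Figure 5, respectively.
```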

3. Multiple Frequency Conversion Sinusoidal Chaotic Neural Network (MFCSCNN) Model

3.1. MFCSCNN Model

According to the MFCS chaotic neuron model above and the network structure of TCNN, the MFCSCNN model is constructed, which is described as follows:
x_i(t) = f(y_i(t))   (13)
y_i(t+1) = k\,y_i(t) + \alpha\left[\sum_{j=1, j \ne i}^{N} w_{ij} x_j(t) + I_i\right] - z_i(t)\,(x_i(t) - I_0)   (14)
z_i(t+1) = (1 - \beta)\,z_i(t)   (15)
f(u) = S_0(u, \varepsilon_0) + \sum_{n=1}^{N} c_n S_n(u, \varepsilon_n)   (16)
S_0(u, \varepsilon_0) = \frac{1}{1 + e^{-u/\varepsilon_0}}   (17)
S_n(u, \varepsilon_n) = A_n \sin\!\left(\frac{u}{\varepsilon_n} + r_n\phi_n\right) = A_n(0)\, e^{-a_n|u|} \sin\!\left(\frac{u}{\varepsilon_n(0)\, e^{-b_n|u|}} + r_n\phi_n\right)   (18)
where α is a positive input scaling parameter, wij is the connection weight between neuron i and neuron j (wij = wji, wii = 0), Ii is the input threshold of neuron i, and the remaining parameters are defined as in the MFCS chaotic neuron model. The signal flow diagram and network structure diagram of the MFCSCNN model are shown in Figure 6.
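For readers implementing the network, one synchronous update of Equations (13)–(15) can be vectorized as follows; the symmetric zero-diagonal weight matrix W, the threshold vector I, and the synchronous-update convention are illustrative assumptions, and excitation() is the helper from the neuron sketch.

```python
# One vectorized MFCSCNN iteration, Equations (13)-(15); W (symmetric, zero
# diagonal), I, and the synchronous update are illustrative assumptions.
import numpy as np

def network_step(y, z, W, I, k, alpha, beta, I0, eps0, params):
    x = excitation(y, eps0, params)                        # x_i(t) = f(y_i(t))
    y_next = k * y + alpha * (W @ x + I) - z * (x - I0)    # Equation (14)
    z_next = (1.0 - beta) * z                              # Equation (15)
    return y_next, z_next, x
```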
To obtain good optimization performance from the model, the basic parameters and the MFCS parameter settings must be properly selected and balanced. The higher the complexity of the optimization problem, the stronger the non-monotonicity required of the MFCS function. Parameter choice is a critical factor, both because of the initial-value sensitivity of chaos and because the model has multiple parameters with different meanings. A relatively larger cn·An(0), an, and bn and a smaller εn(0) can be selected within the reference ranges in Table 1 [17,18,19]. Among the basic parameters, α represents the weight of the feedback signal received by the current neuron from the other neurons and essentially balances the strength of the chaotic dynamics against gradient descent [1,2,3,4]. z(0) and β determine the initial value and decay rate of the self-feedback (chaotic) term, respectively [1,2,3,4,5,6,7,8]. When tuning α, z(0), and β, one usually wants the early stage to be dominated by chaotic search (improving global search ability and avoiding local optima) and the late stage to be dominated by gradient convergence (improving convergence speed). Appropriate parameter selection determines whether the algorithm can find the optimal solution quickly and accurately, which has always been the difficulty faced by TCNN-class optimization models. Therefore, to support rapid, effective, and clear selection of appropriate parameter settings, this paper summarizes all parameter settings and selection guidance, drawing on the literature [1,2,3,4,5,6,7,8,9,10,17,18,19] and the experimental analysis above, as shown in Table 2.
Table 2 shows the theoretical, experimental, and empirical values of the parameters. The theoretical values are generally well established, but their ranges are wide and offer little practical guidance in experiments. The experimental values, derived from the references and from experiments, have clearer and narrower ranges within which the model readily generates chaotic dynamics. However, these ranges are still not precise enough to locate parameter values that guarantee the optimization ability of the model. Therefore, the last column of Table 2 provides narrower parameter ranges (empirical values) for the new model; for users with no experience in parameter selection, these empirical values serve as a good starting reference.
The parameters in Table 2 constrain and influence each other: a change in one parameter inevitably changes the dynamic characteristics of the overall model. If the change exceeds the regulating capacity of the other parameters, the parameters, and hence the model, become ineffective, and if any parameter is chosen improperly, the algorithm cannot find the optimal solution or produces invalid solutions. The above provides the parameter evolution laws and reference ranges of the MFCSCNN model, offering an effective guide for experimental settings when solving complex optimization problems. The selection of the MFCS parameters ultimately affects the non-monotonicity of the excitation function: if the non-monotonicity is too strong, it degrades the stability of the model; if it is too weak, it does not produce a sufficient chaotic global search effect. The ideal adjustment strategy would tune the relevant parameter values in real time according to the difficulty of the optimization problem and the current solution state, which remains one of the key difficulties in this line of research; a simple static configuration is sketched below.
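As a convenience only, the empirical column of Table 2 can be gathered into a single starting configuration, as in the sketch below; the concrete value chosen inside each range is our assumption, not a recommendation from the paper.

```python
# Empirical-value ranges from Table 2 collected into one starting configuration;
# the specific values picked inside each range are our assumption.
mfcscnn_defaults = {
    "k": 0.95,       # memory constant, empirical range [0.9, 1]
    "alpha": 0.03,   # positive parameter, [0.005, 0.1]
    "beta": 0.01,    # annealing factor, [0.001, 0.05]
    "z0": 0.7,       # initial self-feedback, [0.6, 0.8]
    "I0": 0.55,      # positive parameter, [0.45, 0.65]
    "eps0": 0.05,    # steepness parameter, [0.01, 0.1]
    # per-term MFCS ranges: A_n(0) in [0.1, 0.8], eps_n(0) in [0.01, 0.1],
    # a_n in [3, 8], b_n in [0.6, 1.5], c_n in [0.2, 0.8], phi_n in {pi, 2*pi}
}
```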

3.2. Optimization Mechanism

The self-feedback term and non-monotonic excitation function are introduced into the MFCSCNN model based on the HNN structure to generate complex chaotic dynamics characteristics. With the decrease of z, the self-feedback term gradually attenuates to zero, and the model also evolves from chaotic search to the gradient descent state, and finally converges to the optimal solution [29,30]. Therefore, the optimization mechanism of MFCSCNN is the same as that of HNN, which maps the objective function of the problem to the energy function of the model and the dynamic evolution process of the model to the optimization process of the objective function [31]. When the model converges to the fixed point, the neuronal output is the solution of the current optimization problem [32]. According to the HNN optimization mechanism, the following relations exist:
\frac{dy_i}{dt} = -\frac{\partial E}{\partial x_i}   (19)
However, there is an additional energy term in the expression of the energy function due to the self-feedback term, which is also the key to generating chaotic ergodic search behavior (avoiding local optimality). The MFCSCNN model energy function is described as follows:
E(t) = E_{Hop} + H = -\frac{1}{2} \sum_{i=1}^{N} \sum_{j=1, j \ne i}^{N} w_{ij}\, x_i(t)\, x_j(t) - \sum_{i=1}^{N} I_i\, x_i(t) + \sum_{i=1}^{N} \frac{1}{\tau_i} \int_0^{x_i(t)} f^{-1}(\xi)\, d\xi + H(x_i, w_{ij}, I_i)   (20)
where H is the additional energy term, i = 1, 2, …, N, N is the number of neurons, wij is the connection weight between neuron i and neuron j, xi is the output of the i-th neuron, Ii is the threshold of the i-th neuron, τi is the time constant of the i-th neuron, and f−1(·) is the inverse of the activation function.
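For concreteness, the Hopfield portion of Equation (20) can be evaluated as below. This is a partial sketch: the integral term and the additional energy term H are omitted because the paper does not give H in closed form.

```python
# The Hopfield portion of the energy in Equation (20); the integral term and
# the extra chaotic term H are omitted here -- a partial sketch only.
import numpy as np

def hopfield_energy(x, W, I):
    """E_Hop(x) = -(1/2) x^T W x - I^T x, assuming w_ii = 0."""
    return -0.5 * float(x @ W @ x) - float(I @ x)
```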

4. Application of the MFCSCNN Model for Optimization Problems

4.1. Application of the Model for Function Optimization

In order to verify the optimization performance of the algorithm, a classical function optimization problem is described as follows:
f(x_1, x_2) = 4x_1^2 - 2.1x_1^4 + \frac{x_1^6}{3} + x_1 x_2 - 4x_2^2 + 4x_2^4   (21)
where f is the objective function (the optimal value is −1.0316285) and x1, x2 are the independent variables (the minimum is attained at x1 = −0.089842, x2 = 0.71266). The MFCSCNN is used to optimize the function with the parameters set as follows: k = 1, α = 0.03, β = 0.025, ε0 = 0.02, I0 = 0.65, z(0) = 0.8, A1(0) = 0.4, ε1(0) = 0.02, a1 = 6, b1 = 1, c1 = 0.25, φ1 = 0, A2(0) = 0.8, ε2(0) = 0.04, a2 = 2, b2 = 0.5, c2 = 0.35, and φ2 = π. The time evolution of the neuron outputs x1, x2 and the energy function E of the MFCSCNN model when optimizing f is shown in Figure 7.
As shown in Figure 7, when the model evolved to step 212, E = −1.03162845, x1 = −0.089834, and x2 = 0.71266; the time evolution diagram shows a rich process of chaotic search and inverted bifurcations before finally converging to the fixed point by gradient descent. During the experiment it was found that small parameter changes affect the dynamic evolution of the whole model, which confirms the initial-value sensitivity of chaos. The experimental results show that the MFCSCNN solves the function optimization problem well. To further verify the optimization performance of the model, the complex Traveling Salesman Problem (TSP) in combinatorial optimization was selected for experiments.
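As a quick numerical consistency check of Equation (21), evaluating the objective at the reported extremum reproduces the reported optimum; the helper name camelback is ours.

```python
# Checking Equation (21) at the reported extremum; the function name is ours.
def camelback(x1, x2):
    return 4*x1**2 - 2.1*x1**4 + x1**6/3 + x1*x2 - 4*x2**2 + 4*x2**4

print(camelback(-0.089842, 0.71266))   # ~ -1.031628, the reported optimum
```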

4.2. Application of the Model for Combinatorial Optimization

The Traveling Salesman Problem (TSP) can be described as follows: given N cities with known locations and pairwise distances, find the shortest closed path that visits each city exactly once and returns to the starting city [33,34]. The objective function of the TSP is as follows:
E = \frac{W_1}{2}\left[\sum_{i=1}^{N}\left(\sum_{j=1}^{N} x_{ij} - 1\right)^2 + \sum_{j=1}^{N}\left(\sum_{i=1}^{N} x_{ij} - 1\right)^2\right] + \frac{W_2}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \sum_{k=1}^{N} (x_{k,j+1} + x_{k,j-1})\, x_{ij}\, d_{ik}   (22)
The internal state equation of MFCS neurons for solving TSP is as follows:
y_{ij}(t+1) = k\,y_{ij}(t) - z(t)\,(x_{ij}(t) - I_0) + \alpha\left[-W_1\left(\sum_{l \ne j}^{N} x_{il}(t) + \sum_{k \ne i}^{N} x_{kj}(t)\right) - W_2 \sum_{k \ne i}^{N} d_{ik}\,\bigl(x_{k,j+1}(t) + x_{k,j-1}(t)\bigr) + W_1\right]   (23)
where xi,0 = xi,N and xi,N+1 = xi,1; xij is the output of the neuron representing city i being visited at the j-th position; W1 and W2 are the coupling parameters corresponding to the constraints and to the tour-length cost, respectively; and dik is the distance between city i and city k.
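A vectorized sketch of the update in Equation (23) over the whole (N, N) neuron grid is given below; handling the cyclic indices x_{k,j±1} with np.roll and assuming a zero-diagonal distance matrix d (so the k = i term vanishes) are our choices.

```python
# One MFCSCNN update for the TSP, Equation (23), vectorized over the (N, N)
# neuron grid; np.roll enforces the cyclic convention x_{i,0} = x_{i,N}, and
# d is assumed to have a zero diagonal so the k = i term drops out.
import numpy as np

def tsp_step(y, x, z, d, k, alpha, beta, I0, W1, W2):
    row = x.sum(axis=1, keepdims=True) - x                  # sum_{l != j} x_il
    col = x.sum(axis=0, keepdims=True) - x                  # sum_{k != i} x_kj
    nbr = np.roll(x, -1, axis=1) + np.roll(x, 1, axis=1)    # x_{k,j+1} + x_{k,j-1}
    y_next = (k * y - z * (x - I0)
              + alpha * (-W1 * (row + col) - W2 * (d @ nbr) + W1))
    return y_next, (1.0 - beta) * z                         # annealed z(t+1)
```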
As shown in Figure 8, TSP instances with 10, 30, and 75 cities are selected as experimental objects; their shortest path lengths are 2.6776, 4.237406, and 5.434474, respectively [15,17,19,30].
The MFCSCNN model parameters are selected as follows: k = 1, α = 0.025, β = 0.01, ε0 = 0.02, I0 = 0.65, z(0) = 0.8, A1(0) = 0.1, ε1(0) = 0.2, a1 = 3, b1 = 1.8, c1 = 0.25, φ1 = 0, A2(0) = 0.5, ε2(0) = 0.1, a2 = 6, b2 = 0.9, c2 = 0.15, φ2 = 0.8π, W1 = 1, and W2 = 1. With the values of xij randomly initialized, the output x1,1 of a single neuron and the evolution of the energy function for the 10-city TSP are shown in Figure 9.
As shown in Figure 9, the final energy function value is 1.3388 and the shortest path length is 2.6776; the MFCSCNN solves the 10-city TSP well. To further verify the ability of the model to solve larger and more complex TSP instances, the 30-city TSP was selected for experimental verification. The MFCSCNN model parameters were selected as follows: k = 1, α = 0.02, β = 0.005, ε0 = 0.02, I0 = 0.65, z(0) = 0.8, A1(0) = 0.2, ε1(0) = 0.2, a1 = 4, b1 = 1.5, c1 = 0.25, φ1 = 0, A2(0) = 0.4, ε2(0) = 0.1, a2 = 2, b2 = 0.8, c2 = 0.15, φ2 = 0.6π, W1 = 1, and W2 = 1. With the values of xij randomly initialized, the output x1,1 of a single neuron and the evolution of the energy function for the 30-city TSP are shown in Figure 10.
As shown in Figures 8–10, the MFCSCNN solves the TSP well. To benchmark the optimization performance of the MFCSCNN, the basic parameters of the model were fixed, and several intelligent optimization algorithms (HNN [4], TCNN [10], ITCNN [13], NCNN [35], FCSCNN [17]) were selected for comparative experiments. The values of xij and yij were randomly initialized. In total, 2000 independent runs of 1000 iterations each were carried out for the 10-city TSP, and 200 independent runs of 2000 iterations each for the 30-city TSP. The statistical results are shown in Table 3 and Table 4.
For the 75-city TSP, the optimal-solution rate of most algorithms is low, and they often fail to find the optimal solution. Therefore, the evaluation index J [36] is introduced to assess the optimization performance of the different models; its expression is as follows:
J = \frac{AVS - GM}{GM} \times 100\%   (24)
where AVS is the mean length of the legal solutions and GM is the global optimal solution. The lower the value of J, the stronger the optimization performance. The values of xij and yij were randomly initialized, and 200 independent runs of 8000 iterations each were carried out. The statistical results are shown in Table 5.
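For reference, Equation (24) is a one-liner; the AVS value in the usage line is back-solved from the MFCSCNN row of Table 5 purely as a consistency check, not reported data.

```python
# Evaluation index J of Equation (24); the AVS below is back-solved from
# Table 5's MFCSCNN row (GM = 5.434474, J = 8.336%) as a consistency check.
def eval_index(avs, gm):
    return (avs - gm) / gm * 100.0

print(eval_index(5.8875, 5.434474))   # ~ 8.336 (%), matching Table 5
```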
The richer the chaotic dynamic characteristics, the stronger the optimization ability of the model [37]. As shown in Tables 3–5, the MFCSCNN achieves a better optimal-solution rate and accuracy than the HNN, TCNN, ITCNN, NCNN, and FCSCNN under the same basic parameter conditions. The larger the number of cities in a TSP instance, the more demanding the test of the model's optimization ability [38], and the MFCSCNN performs better on the larger-scale instances. The more complex and pronounced non-monotonic excitation function introduced in the MFCSCNN provides a rich chaotic basis for global search in the early stage, and the model has more variable and complex functional characteristics than the other models, which confirms the earlier dynamic analysis. Therefore, compared with the related algorithms, the MFCSCNN shows better optimization performance.

5. Conclusions

In this paper, a new MFCS function was designed, based on the biological mechanism that brain waves are composed of sinusoidal signals of multiple frequencies, by introducing several FCS functions and adding phase angle parameters; a new MFCSCNN model was then proposed by combining this function with the Sigmoid function to form the excitation function of the chaotic neurons. Analysis of the activation function, inverted bifurcation diagrams, and Lyapunov exponent diagrams showed that MFCS chaotic neurons have richer and more complex chaotic dynamics than TCN and FCS chaotic neurons, laying a good foundation for global optimization. Parameter selection ranges for the model were provided based on the optimization mechanism and on theoretical and experimental analysis. To further verify the optimization performance of the MFCSCNN model, classical function optimization and the TSP, an NP-hard problem of higher complexity, were selected for simulation experiments. The results showed that the proposed MFCSCNN model has better optimization performance and accuracy than several existing optimization algorithms, especially on problems of higher complexity, which demonstrates the effectiveness and feasibility of the model. The scope of the model can be expanded to address various other optimization problems.

Author Contributions

Conceptualization, Z.H. and Z.G.; Data curation, G.W., L.W. and Y.Z.; Formal analysis, G.W., L.W., X.Z. and Y.Z.; Funding acquisition, Z.H., Z.G., G.W. and L.W.; Investigation, Y.Z.; Methodology, Z.H., Z.G. and G.W.; Software, G.W., L.W. and X.Z.; Supervision, Z.G.; Validation, Z.H., Z.G., X.Z. and Y.Z.; Writing—original draft, Z.H.; Writing—review and editing, Z.H. and X.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of Shandong (ZR2021QF065, ZR2019MA017), the National Natural Science Foundation of China (62003185), the Beijing Natural Science Foundation (4232043, 4232044), and the Scientific Research Foundation Project of Beijing Information Science & Technology University (2022XJJ08).

Data Availability Statement

Not applicable.

Acknowledgments

The authors thank the referees for their detailed reading and comments that were both helpful and insightful.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Aihara, K.; Takabe, T.; Toyoda, M. Chaotic neural networks. Phys. Lett. A 1990, 144, 333–340.
2. Chen, L.; Aihara, K. Chaotic simulated annealing by a neural network model with transient chaos. Neural Netw. 1995, 8, 915–930.
3. Ma, T.; Mou, J.; Li, B.; Banerjee, S.; Yan, H. Study on the complex dynamical behavior of the fractional-order Hopfield neural network system and its implementation. Fractal Fract. 2022, 6, 637.
4. Yu, F.; Zhang, Z.; Shen, H.; Huang, Y.; Cai, S.; Du, S. FPGA implementation and image encryption application of a new PRNG based on a memristive Hopfield neural network with a special activation gradient. Chin. Phys. B 2022, 31, 20505.
5. Sheikhan, M.; Hemmati, E. Transient chaotic neural network-based disjoint multipath routing for mobile ad-hoc networks. Neural Comput. Appl. 2012, 21, 1403–1412.
6. Wang, M.; Wang, Y.; Chu, R. Dynamical analysis of the incommensurate fractional-order Hopfield neural network system and its digital circuit realization. Fractal Fract. 2023, 7, 474.
7. Chen, S.; Shih, C. Transiently chaotic neural networks with piecewise linear output functions. Chaos Solitons Fract. 2009, 39, 717–730.
8. Yan, W.; Jiang, Z.; Huang, X.; Ding, Q. Adaptive neural network synchronization control for uncertain fractional-order time-delay chaotic systems. Fractal Fract. 2023, 7, 288.
9. Keup, C.; Tobias, K.; Dahmen, D.; Helias, M. Transient chaotic dimensionality expansion by recurrent networks. Phys. Rev. X 2021, 11, 021064.
10. Yang, L.; Chen, T.; Huang, W. Dynamics of transiently chaotic neural network and its application to optimization. Commun. Theor. Phys. 2001, 35, 22–27.
11. Shuai, J.; Chen, Z.; Liu, R.; Wu, B. Self-evolution neural model. Phys. Lett. A 1996, 221, 311–316.
12. Potapov, A.; Ali, M.K. Robust chaos in neural networks. Phys. Lett. A 2000, 277, 310–322.
13. Uykan, Z. Fast-convergent double-sigmoid Hopfield neural network as applied to optimization problems. IEEE Trans. Neural Netw. Learn. Syst. 2013, 24, 990–996.
14. Yu, S.; Huan, R.; Zhang, Y.; Feng, D. Novel improved blind detection algorithms based on chaotic neural networks. Acta Phys. Sin. 2014, 63, 060701.
15. Xu, Y.; Sun, M. Shannon wavelet chaotic neural network and its solution to TSP. Control Theory Appl. 2008, 25, 574–577.
16. Zhang, Y.; Zhang, Z.; Liang, Y.; Yu, S.; Wang, L. Blind signal detection using complex transiently chaotic Hopfield neural network. J. Inform. Hiding Multimed. Signal Process. 2018, 9, 523–530.
17. Hu, Z.; Li, W.; Qiao, J. Frequency conversion sinusoidal chaotic neural network and its application. Acta Phys. Sin. 2017, 66, 090502.
18. Hu, Z.; Li, W.; Qiao, J. Frequency conversion sinusoidal chaotic neural network based on self-adaptive simulated annealing. Acta Electron. Sin. 2019, 47, 613–622.
19. Qiao, J.; Hu, Z.; Li, W. Hysteretic noisy frequency conversion sinusoidal chaotic neural network for traveling salesman problem. Neural Comput. Appl. 2019, 31, 7055–7069.
20. Abbott, A. Neuroscience: The brain, interrupted. Nature 2015, 518, 24–26.
21. Patel, K.; Katz, C.; Kalia, S.; Popovic, M.; Valiante, T. Volitional control of individual neurons in the human brain. Brain 2021, 144, 3651–3663.
22. Thomas, K.P.; Robinson, N.; Prasad, V.A. Separability of motor imagery directions using subject-specific discriminative EEG features. IEEE Trans. Hum.-Mach. Syst. 2021, 51, 544–553.
23. Karakaş, S.; Herrmann, C.S.; Chiarenza, G.A. A special issue on electroencephalogram (EEG) oscillations: In memoriam of Erol Başar. Int. J. Psychophysiol. 2021, 159, 71–73.
24. Cinar, E.; Sahin, F. New classification techniques for electroencephalogram (EEG) signals and a real-time EEG control of a robot. Neural Comput. Appl. 2013, 22, 29–39.
25. Zhao, H.; Chen, Y.; Pei, W. Towards online applications of EEG biometrics using visual evoked potentials. Expert Syst. Appl. 2021, 177, 114961.
26. Xu, S.; Wang, X.; Ye, X. A new fractional-order chaos system of Hopfield neural network and its application in image encryption. Chaos Solitons Fract. 2022, 157, 111889.
27. Sawoor, A.; Sadkane, M. Lyapunov-based stability of delayed linear differential algebraic systems. Appl. Math. Lett. 2021, 118, 107185.
28. Shibata, K.; Ejima, T.; Tokumaru, Y.; Matsuki, T. Sensitivity-local index to control chaoticity or gradient globally. Neural Netw. 2021, 143, 436–451.
29. Li, C.; Yang, Y.; Yang, X.; Zi, X.; Xiao, F. A tristable locally active memristor and its application in Hopfield neural network. Nonlinear Dyn. 2022, 108, 1697–1717.
30. Zhao, L.; Sun, M.; Cheng, J.; Xu, Y. A novel chaotic neural network with the ability to characterize local features and its application. IEEE Trans. Neural Netw. 2009, 20, 735–742.
31. Wang, L.; Liu, W.; Shi, H. Delay-constrained multicast routing using the noisy chaotic neural networks. IEEE Trans. Comput. 2009, 58, 82–89.
32. Li, M.; Zhang, R.; Yan, S. Adaptive synchronization of a class of fractional-order complex-valued chaotic neural network with time-delay. Chin. Phys. B 2021, 30, 120503.
33. Karakostas, P.; Sifaleras, A. A double-adaptive general variable neighborhood search algorithm for the solution of the traveling salesman problem. Appl. Soft Comput. 2022, 121, 108746.
34. Huerta, I.; Neira, D.; Ortega, D.; Varas, V.; Godoy, J.; Asin-Acha, R. Improving the state-of-the-art in the traveling salesman problem: An anytime automatic algorithm selection. Expert Syst. Appl. 2022, 187, 115948.
35. Sun, M.; Lee, K.Y.; Xu, Y. Hysteretic noisy chaotic neural networks for resource allocation in OFDMA system. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 273–285.
36. Liu, X.; Xiu, C. A novel hysteretic chaotic neural network and its applications. Neurocomputing 2007, 70, 2561–2565.
37. Magallón-García, D.A.; Ontanon-Garcia, L.J.; García-López, J.H.; Huerta-Cuéllar, G.; Soubervielle-Montalvo, C. Identification of chaotic dynamics in jerky-based systems by recurrent wavelet first-order neural networks with a Morlet wavelet activation function. Axioms 2023, 12, 200.
38. Mariescu-Istodor, R.; Fränti, P. Solving the large-scale TSP problem in 1 h: Santa Claus Challenge 2020. Front. Robot. AI 2021, 8, 689908.
Figure 1. The comparison graph of the (a) FCS and (b) MFCS functions.
Figure 2. The graph of the f(u) activation function: (a) Sigmoid + FCS (N = 1); (b) Sigmoid + MFCS (N = 2).
Figure 3. The graph of the f(u) activation function as a function of b1 and b2.
Figure 4. The graph of the (a) Sigmoid and (b) Sigmoid + MFCS activation functions.
Figure 5. The reversed bifurcation diagrams ((a) TCN; (c) FCS; (e) MFCS; (g) FCS/MFCS) and Lyapunov exponents ((b) TCN; (d) FCS; (f) MFCS; (h) FCS/MFCS) of the transient, FCS, and MFCS chaotic neurons. Note: the black dashed line in subfigures (b,d,f,h) is the zero reference line.
Figure 6. The (a) signal flow graph and (b) network structure graph of the MFCSCNN model.
Figure 7. The time evolution diagram of the MFCSCNN outputs (a) x1, (b) x2 and the energy function (c) E.
Figure 8. The optimal paths of the (a) 10-city and (b) 30-city TSP.
Figure 9. The neuron output (a) x1,1 and energy function (b) E for the 10-city TSP by the MFCSCNN.
Figure 10. The neuron output (a) x1,1 and energy function (b) E for the 30-city TSP by the MFCSCNN.
Table 1. The parameter ranges and action characteristics of the MFCS function.

Parameter | Range of Values | Action Characteristics
A(0) | [0, 1] | determines the FCS weight together with c
ε(0) | [0.0044, 0.32] | controls the lower bound of f; inversely proportional to f
φ | [0, 2π] | affects MFCS mutability
a | (0, ∞) | proportional to the rate at which A decreases
b | [0.56, 1.8] | controls the upper bound of f; proportional to the rate at which f increases
c | [0, 1] | determines the FCS weight together with A(0)
Table 2. The guidance of parameter settings for the MFCSCNN model.

Parameter | Name | Theoretical Values | Experimental Values | Empirical Values
k [1,2] | Memory constant | (0, ∞) | [0, 1] | [0.9, 1]
α [1,2,3,4] | Positive parameter | (0, ∞) | (0, 1] | [0.005, 0.1]
β [1,2,3,4,5,8] | Annealing factor | [0, 1] | (0, 1) | [0.001, 0.05]
z(0) [6,7] | Initial value of self-feedback | [0, 1] | [0.7, 1] | [0.6, 0.8]
I0 [7,8] | Positive parameter | (0, ∞) | [0, 1] | [0.45, 0.65]
ε0 [9,10] | Steepness parameter | [0, ∞) | [0.01, 0.5] | [0.01, 0.1]
An(0) [17] | Initial value of the amplitude | [0, ∞) | (0, 1.2] | [0.1, 0.8]
εn(0) [18] | Initial value of steepness | (0, ∞) | [0.0044, 0.32] | [0.01, 0.1]
an [18] | Positive parameter | (0, ∞) | [1, 10] | [3, 8]
bn [19] | Positive parameter | (0, ∞) | [0.56, 1.8] | [0.6, 1.5]
cn [19] | Weighting coefficient | (0, ∞) | [0, 1] | [0.2, 0.8]
r2 | Random coefficient | [0, 1] | [0, 1] | [0, 1]
φn | Phase angle | [0, 2π] | [0, 2π] | π or 2π
Table 3. The results of the different models for the 10-city TSP.

Model | NLP | NOP | RLP | RGM
HNN [4] | 1582 | 930 | 79.10% | 46.50%
TCNN [10] | 1972 | 1809 | 98.60% | 90.45%
ITCNN [13] | 2000 | 1900 | 100% | 95.00%
NCNN [35] | 2000 | 1910 | 100% | 95.50%
FCSCNN [17] | 2000 | 1981 | 100% | 99.05%
MFCSCNN | 2000 | 1992 | 100% | 99.60%

Note: Number of Legitimate Paths, NLP; Number of Optimal Paths, NOP; Rate of Legitimate Paths, RLP; Rate of Global Minima, RGM. The best performance indices are marked in bold.
Table 4. The results of the different models for the 30-city TSP.

Model | NLP | NOP | RLP | RGM
HNN [4] | 112 | 7 | 56.0% | 3.5%
TCNN [10] | 189 | 45 | 94.5% | 22.5%
ITCNN [13] | 196 | 52 | 98.0% | 26.0%
NCNN [35] | 195 | 60 | 97.5% | 30.0%
FCSCNN [17] | 193 | 61 | 96.5% | 30.5%
MFCSCNN | 195 | 68 | 97.5% | 34.0%

The best performance indices are marked in bold.
Table 5. The results of the different models for the 75-city TSP.

Model | NLP | RLP | BS | J
HNN [4] | 32 | 16.0% | 6.4703 | 25.675%
TCNN [10] | 103 | 51.5% | 5.7617 | 13.534%
ITCNN [13] | 125 | 62.5% | 5.5884 | 10.802%
NCNN [35] | 118 | 59.0% | 5.6101 | 11.513%
FCSCNN [17] | 136 | 68.0% | 5.4345 | 8.621%
MFCSCNN | 146 | 73.0% | 5.4345 | 8.336%

Note: Best Solution, BS; Evaluation Index, J. The best performance indices are marked in bold.
