Article

n-Dimensional Chaotic Time Series Prediction Method

1 School of Information Science and Engineering, Shenyang Ligong University, Shenyang 110159, China
2 Party and Government Office, Shenyang Ligong University, Shenyang 110159, China
* Author to whom correspondence should be addressed.
Electronics 2023, 12(1), 160; https://doi.org/10.3390/electronics12010160
Submission received: 29 November 2022 / Revised: 26 December 2022 / Accepted: 28 December 2022 / Published: 29 December 2022

Abstract

Chaotic time series are involved in many fields of production and daily life, so their prediction has great practical value. However, owing to characteristics such as internal randomness, nonlinearity, and long-term unpredictability, most prediction methods cannot achieve high-precision intermediate- or long-term prediction. Thus, an intermediate and long-term prediction (ILTP) method for n-dimensional chaotic time series is proposed to solve this problem. First, the order of the model is determined by optimized preprocessing and a joint calculation strategy, so that the observation sequence can be decomposed and recombined accurately. Then, an RBF neural network with a feedback recursion mechanism is introduced to construct a multi-step prediction model of future sequences. Compared with existing prediction methods, the error of the ILTP method can be reduced by 1–6 orders of magnitude, and the prediction step can be increased by 10–20 steps. The ILTP method can provide reference technology for applications of time series prediction with chaotic characteristics.

1. Introduction

As a bridge between chaos theory and real life, the chaotic time series [1,2] has attracted wide attention. Because of the rich variation of laws it exhibits in a nonlinear [3] dynamic system, it has high theoretical and research value for exposing the deterministic laws that may be hidden in seemingly random phenomena. At the same time, because chaotic time series appear in nearly all areas of daily life, their prediction has important practical value, and many scholars have therefore conducted extensive research on chaotic characteristics and chaotic time series prediction.
Linear models [4] were proposed first, such as the grey prediction method [5], followed by nonlinear models [6], such as the support vector machine [7]. With the continuous development of artificial intelligence technology, a series of prediction methods for chaotic time series have been proposed; at present, machine learning based on the nonlinear mapping ability of neural networks [8,9] is the main approach. For example, an improved differential evolution (IDE) algorithm [10] was proposed to optimize the topology of a deep hybrid neural network (DHNN), selecting the filter size of the convolutional neural network (CNN) and the number of hidden neurons of the gated recurrent unit (GRU). The IDE's adaptive mutation operator and dynamic chaotic crossover operator not only improve the convergence speed, but also shorten the optimization time. Further, an improved recurrent neural network, the hierarchical delayed memory echo state network (HDESN) [11], was proposed for multi-step chaotic time series prediction: multiple reservoirs discover and explore hidden information in historical sequences, and valuable evolution patterns are extracted through deep topology and hierarchical processing. In addition, Hao-Chang Chen and Du-Qu Wei proposed a chaotic prediction model [12] of an echo state network (ESN) optimized by the selective opposition grey wolf optimizer (SOGWO); the model establishes the optimal network by reorganizing the input weight matrix and adopting a selective opposition strategy, and its effectiveness and superiority are verified by numerical simulation. Then, to achieve high-precision short-term prediction in wind power applications, a hybrid model [13] based on granular rules, which considers the physical characteristics of the data, was proposed; it is constructed through chaotic phase space reconstruction and granular computing (GrC).
The feasibility and effectiveness of that prediction model are verified with experiments on real data. A temporal convolutional network (TCN) model [14] has also been proposed for chaotic time series prediction; a new TCN-CBAM model is constructed by embedding the convolutional block attention module (CBAM), which fuses spatial and channel attention mechanisms. Prediction experiments show that this model achieves better results: compared with other methods, it not only improves performance by 41.4%, but also exhibits the shortest training time. Meanwhile, Yang Jinhui et al. proposed a chaotic time series prediction method [15] based on the Hankel alternative view of Koopman analysis and machine learning (HAVOK-ML). Through HAVOK analysis, the chaotic dynamics are decomposed into an intermittently forced linear system; the external intermittent forcing term is then estimated by machine learning to reconstruct a closed linear model. Prediction performance evaluation shows that the method has good predictive ability. Xinghan Xu and Weijie Ren proposed a new hybrid model [16] based on a two-level decomposition method and an optimized back propagation neural network (BPNN). Comprehensive information on the chaotic time series is obtained through a two-level decomposition combining complete ensemble empirical mode decomposition with adaptive noise and variational mode decomposition (VMD), which significantly improves the prediction performance. In addition, Qiao et al. proposed an adaptive Levenberg–Marquardt (LM) algorithm [17] based on the echo state network; it introduces a damping term into the adaptive multi-input neural network to overcome the ill-posed problem that may occur during training when the minimum singular value approaches zero. Finally, a recursive path based on fuzzy neural network theory is added to each node in the hidden layer of the network, yielding a self-constructed recursive fuzzy neural network [18] to solve the chaotic time series prediction problem.
Most of the prediction methods mentioned above are based on nonlinear models such as neural networks, and then introduce multi-network fusion or multiple optimization algorithms to establish complex models. Most improve the accuracy of chaotic time series prediction at the cost of computational complexity, and there is still no single prediction method that can handle chaotic time series of different dimensions. Therefore, in order to improve prediction accuracy, enhance applicability to multi-dimensional chaotic time series, and avoid the subjectivity of neural network parameter selection, an intermediate and long-term prediction (ILTP) method for n-dimensional chaotic time series is proposed, which achieves efficient prediction performance and solves the medium- and long-term prediction problem of chaotic time series.

2. Materials and Methods

Based on chaos-related theory and the concept of the regression model, an optimized preprocessing method for sequence reconstruction and decomposition is proposed to accurately select the driving sequence and target sequence required for model building. Then, considering the diversity of low- and high-dimensional chaotic time series, the established model is designed to mine and extract the deterministic rules contained between sequences. Because the radial basis function (RBF) neural network can obtain a matching topology structure according to the actual situation, and therefore has strong universality, the proposed optimized preprocessing mechanism is combined with the RBF neural network to construct a chaotic neural network estimation method and improve the training speed of the model. Finally, multi-step estimation of multiple chaotic time series is achieved by introducing recursive feedback of the estimation results, and it is verified that the model achieves high estimation accuracy. The principle of the proposed ILTP method is shown in Figure 1.
For the sequence with dimension $n$ and length $N$ at the current time, the $i$-th dimension (with the initial value of $i$ equal to 1) is selected as the observation sequence $X_N = \{x(1), x(2), x(3), \dots, x(N)\}$. Next, the sequence is normalized, as shown in Equation (1), and the optimized sequence $X'_N = \{x'(1), x'(2), x'(3), \dots, x'(N)\}$ is output.
$X'_N = \dfrac{X_N - \bar{X}_N}{s}$    (1)
The mean value $\bar{X}_N$ of the observation sequence $X_N$ with length $N$ is:
$\bar{X}_N = \dfrac{1}{N}\left(x(1) + x(2) + x(3) + \cdots + x(N)\right) = \dfrac{1}{N}\sum_{i=1}^{N} x(i)$    (2)
The overall standard deviation $s$ of the observation sequence $X_N$ is:
$s = \left(\dfrac{1}{N}\sum_{i=1}^{N}\left(x(i) - \bar{X}_N\right)^2\right)^{1/2}$    (3)
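The preprocessing above is z-score normalization with the population standard deviation. A minimal sketch (in Python rather than the paper's MATLAB), keeping the mean and standard deviation for the later anti-normalization of Equation (13):

```python
import numpy as np

def normalize(x):
    """Normalize a sequence per Eqs. (1)-(3): subtract the mean and
    divide by the overall (population) standard deviation (ddof=0)."""
    mean = x.mean()                 # Eq. (2)
    s = x.std(ddof=0)               # Eq. (3): population standard deviation
    return (x - mean) / s, mean, s  # Eq. (1), plus stats kept for Eq. (13)

x = np.array([1.0, 2.0, 3.0, 4.0])
x_opt, mean, s = normalize(x)
```

The normalized sequence then has zero mean and unit standard deviation, which stabilizes the subsequent neural network training.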
Next, for the optimized sequence $X'_N$, considering that it is generated from past sequences through multiple recursive iterations, the sequence sample $x'(N)$ can be described by the $l$-order nonlinear autoregressive model ($l$-NAR) shown in Equation (4):
$X'_N = f\left(X'_{N-1}, X'_{N-2}, \dots, X'_{N-l}\right)$    (4)
where $f(\cdot)$ represents the relationship function between the sequences $X'_{N-1}, X'_{N-2}, \dots, X'_{N-l}$.
Then, to determine the model order while avoiding the limitation of a single calculation, reducing complexity, and adapting to changes in the optimized sequence length, and considering that the $l$-order nonlinear autoregressive model is equivalent to phase space reconstruction [19] with delay time 1 and embedding dimension $l+1$, a joint calculation strategy for the model order is proposed, organically combining the G-P correlation dimension [20] and the Akaike information criterion (AIC), as shown in Equation (5):
$l = \begin{cases} n, & m = 0, 1 \\ \min(m-1,\ n), & \text{otherwise} \end{cases}$    (5)
In Equation (5), $m$ is the minimum embedding dimension obtained by the G-P method, and $n$ is the calculation result of the AIC criterion. The joint calculation of the model order is shown in the second part of the block diagram in Figure 1.
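The piecewise rule of Equation (5) can be sketched directly in Python, with `m` and `n` assumed to be already computed by the G-P method and the AIC criterion, respectively:

```python
def joint_model_order(m, n):
    """Joint calculation of the l-NAR model order per Eq. (5).
    m: minimum embedding dimension from the G-P method;
    n: order suggested by the AIC criterion."""
    if m in (0, 1):
        # G-P result degenerate: fall back to the AIC order alone
        return n
    # otherwise take the smaller of (m - 1) and the AIC order
    return min(m - 1, n)
```

Taking the minimum keeps the model order, and hence the network input size, as small as either criterion allows.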
The optimized order $l$ output from Equation (5) is substituted into Equation (4), and the optimized sequence $X'_N$ is decomposed, so the sequences on the right-hand side of Equation (4) are:
$X'_{N-1} = \{x'(1), x'(2), x'(3), \dots, x'(N-1)\}$
$X'_{N-2} = \{x'(2), x'(3), x'(4), \dots, x'(N-1)\}$
$\vdots$
$X'_{N-l} = \{x'(l), x'(l+1), x'(l+2), \dots, x'(N-1)\}$    (6)
The sequence on the left-hand side of Equation (4) is:
$X'_N = \{x'(1), x'(2), x'(3), \dots, x'(N)\}$    (7)
Furthermore, considering that the lengths of the driving sequence and the target sequence must be unified for the subsequent neural network training, Equations (6) and (7) are divided into chaotic time series with $M = N - l$ sample points. The segmented chaotic time series with dimension $l \times M$ is taken as the driving sequence $\{Y_1, Y_2, \dots, Y_l\}$, as shown in Equation (8):
$Y_1 = \{x'(1), x'(2), x'(3), \dots, x'(M)\}$
$Y_2 = \{x'(2), x'(3), x'(4), \dots, x'(M+1)\}$
$\vdots$
$Y_l = \{x'(l), x'(l+1), x'(l+2), \dots, x'(M+l-1)\}$    (8)
The chaotic time series with dimension $1 \times M$ is taken as the target sequence $Y_{l+1}$, as shown in Equation (9):
$Y_{l+1} = \{x'(l+1), x'(l+2), x'(l+3), \dots, x'(M+l)\}$    (9)
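Under the assumption that the optimized sequence is stored as a one-dimensional array, the decomposition into the driving and target sequences of Equations (8) and (9) can be sketched as follows (function and variable names are ours):

```python
import numpy as np

def split_drive_target(x, l):
    """Decompose an optimized sequence of length N into l driving
    sequences (Eq. (8)) and one target sequence (Eq. (9)),
    each of length M = N - l."""
    N = len(x)
    M = N - l
    # row k holds x'(k+1) ... x'(k+M), giving an (l, M) driving matrix
    drive = np.stack([x[k:k + M] for k in range(l)])
    # the target sequence is shifted by l: x'(l+1) ... x'(M+l)
    target = x[l:l + M]
    return drive, target

x = np.arange(10.0)   # toy optimized sequence, N = 10
drive, target = split_drive_target(x, l=3)
```

Each column of `drive` paired with the matching entry of `target` is one training example for the network that follows.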
Since the radial basis function (RBF) [21,22] neural network can adjust its parameters in real time according to the number of training samples, dynamically determining the hidden-layer unit centers and expansion constants, it adapts easily to the characteristics of new samples. Therefore, the driving sequence of Equation (8), written in matrix form, is used as the training input of the RBF neural network, as shown in Equation (10); that is, the input layer size of the RBF neural network is $l \times M$:
$[Y_1, Y_2, \dots, Y_l]^{T} = \begin{pmatrix} x'(1) & x'(2) & x'(3) & \cdots & x'(M) \\ x'(2) & x'(3) & x'(4) & \cdots & x'(M+1) \\ \vdots & \vdots & \vdots & & \vdots \\ x'(l) & x'(l+1) & x'(l+2) & \cdots & x'(M+l-1) \end{pmatrix}$    (10)
The target sequence of Equation (9) is taken as the training output of the RBF neural network; that is, the output layer size is $1 \times M$. The result is the short-term prediction network for chaotic time series based on the RBF neural network.
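The paper's RBF network determines its centers and expansion constants dynamically; as a rough, simplified stand-in, the sketch below fixes Gaussian centers at the training inputs and solves the output weights by regularized least squares. The class name and hyperparameters are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

class SimpleRBF:
    """Minimal Gaussian RBF regressor: centers at the training inputs,
    output weights solved by regularized least squares (a sketch only)."""
    def __init__(self, sigma=1.0, reg=1e-8):
        self.sigma, self.reg = sigma, reg

    def _phi(self, X):
        # pairwise squared distances to the centers -> Gaussian activations
        d2 = ((X[:, None, :] - self.C[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * self.sigma ** 2))

    def fit(self, X, y):
        self.C = X
        Phi = self._phi(X)
        # small ridge term keeps the linear solve well-conditioned
        self.w = np.linalg.solve(Phi + self.reg * np.eye(len(X)), y)
        return self

    def predict(self, X):
        return self._phi(X) @ self.w

# toy usage: columns of the driving matrix would be the inputs,
# the target sequence the outputs; here a 1-D sine stands in
X = np.array([[0.0], [0.5], [1.0], [1.5]])
y = np.sin(X).ravel()
model = SimpleRBF(sigma=0.7).fit(X, y)
```

In the ILTP setting, each training input would be one $l$-dimensional column of Equation (10) and each output the matching sample of Equation (9).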
For the medium- and long-term prediction of the future chaotic time series, the first driving sequence $Y_1 = \{x'(1), x'(2), x'(3), \dots, x'(M)\}$ contained in Equation (10) is removed. The target sequence $Y_{l+1}$ of Equation (9) is recursively fed back into the driving sequence and substituted into the short-term prediction network. The output prediction sequence is then:
$Y_{l+2} = \{x'(l+2), x'(l+3), x'(l+4), \dots, x'(N+1)\}$    (11)
Thus, the final sample $x'(N+1)$ of Equation (11) is the estimated value of the observation sequence at the next moment. At the beginning of the second recursive feedback, the first dimension of the driving sequence from the first recursive feedback is removed, and Equation (11) is fed back to the input of the short-term prediction network. The output prediction sequence of the second recursive feedback is:
$Y_{l+3} = \{x'(l+3), x'(l+4), x'(l+5), \dots, x'(N+2)\}$    (12)
Here, the estimated value of the last sample $x'(N+2)$ is the sequence value at the moment following $x'(N+1)$. The recursive feedback process is then carried out $L$ times to achieve multi-step long-term estimation, where $L$ is the prediction step length of the sequence. The recursive feedback mechanism is shown in the third part of Figure 1.
Furthermore, in order to obtain a sequence that reflects the change rule of the future sequence, the last element of each recursive output sequence, namely $\{x'(N+1), x'(N+2), \dots, x'(N+L)\}$ shown in the third part of Figure 1, is stored and passed through the established anti-normalization function shown in Equation (13). The output is the medium- and long-term prediction result $x(N+j)$ of the observation sequence, where $j = 1, 2, \dots, L$:
$x(N+j) = x'(N+j) \times s + \bar{X}_N$    (13)
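The recursive feedback loop and the final anti-normalization can be sketched as a scalar sliding-window equivalent of the sequence-level feedback described above. Here `model` is any one-step predictor with a hypothetical `predict` method mapping a window of $l$ normalized values to the next one, standing in for the trained short-term prediction network:

```python
import numpy as np

def recursive_forecast(model, last_window, steps, mean, s):
    """Recursive feedback prediction (third part of Figure 1): each
    one-step output is appended to the driving window while the oldest
    value is dropped, and Eq. (13) undoes the normalization."""
    window = list(last_window)      # last l normalized samples
    out = []
    for _ in range(steps):
        nxt = float(model.predict(np.asarray(window)[None, :])[0])
        out.append(nxt * s + mean)  # anti-normalization, Eq. (13)
        window = window[1:] + [nxt] # feedback recursion: slide the window
    return out
```

Each iteration corresponds to one recursive feedback of Equations (11) and (12); after $L$ iterations the stored values are the prediction results of Equation (13).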
Finally, the sequence dimension $n$ and the index $i$ at the current time are compared. If $n = i$, the prediction results of the current time series are output, as given by Equation (13). If $n > i$, then $i$ is updated to $i + 1$, and the above process is repeated, so that the medium- and long-term prediction method for the $n$-dimensional chaotic time series achieves high-accuracy prediction of the chaotic time series in each dimension.

3. Simulation Experiment

The ILTP method is simulated on an AMD Ryzen 5 5600H processor with Radeon Graphics, 8 GB of memory, and the Windows 11 operating system. The experimental code is written in MATLAB. The following analysis covers three aspects: effectiveness, calculation efficiency, and neural network training error.

3.1. Datasets

The experimental data in this paper are one-dimensional logistic chaotic time series, two-dimensional Henon chaotic time series, and three-dimensional Lorenz chaotic time series, each generated by its mapping function.
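Under the parameter settings reported in Section 3.2 below, the three benchmark series can be generated as follows. This is a Python sketch (the paper's experiments use MATLAB); the initial states for the Henon and Lorenz systems and the Lorenz integration step are not given in the paper and are assumptions here:

```python
import numpy as np

def logistic_series(n, x0=0.52, mu=4.0):
    """1-D logistic map: x(k+1) = mu * x(k) * (1 - x(k))."""
    x = np.empty(n)
    x[0] = x0
    for k in range(n - 1):
        x[k + 1] = mu * x[k] * (1.0 - x[k])
    return x

def henon_series(n, a=1.4, b=0.3, x0=0.1, y0=0.1):
    """2-D Henon map; the initial state (x0, y0) is an assumption."""
    x, y = np.empty(n), np.empty(n)
    x[0], y[0] = x0, y0
    for k in range(n - 1):
        x[k + 1] = 1.0 - a * x[k] ** 2 + y[k]
        y[k + 1] = b * x[k]
    return x, y

def lorenz_series(n, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0,
                  state=(1.0, 1.0, 1.0)):
    """3-D Lorenz system, integrated with a simple Euler step;
    dt and the initial state are assumptions."""
    xs = np.empty((n, 3))
    xs[0] = state
    for k in range(n - 1):
        x, y, z = xs[k]
        xs[k + 1] = xs[k] + dt * np.array(
            [sigma * (y - x), x * (rho - z), x * y - beta * z])
    return xs
```

Each generated series (or each phase of the multi-dimensional series) is then treated as one observation sequence for the ILTP method.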

3.2. Effectiveness Analysis

To test the effectiveness of the ILTP method, it is analyzed in the preprocessing stage, the model-order determination stage, and the final stage. To evaluate the deviation and prediction accuracy between the predicted and real values, the absolute prediction error index is set to $< 10^{-5}$, and the prediction step length is tested under the condition that this index is satisfied.
Firstly, to verify the effectiveness of the sequence normalization function in the ILTP method, the preprocessing stage is tested with the following parameters. The typical logistic sequence is selected for the one-dimensional chaotic time series, with an initial value of 0.52 and a fractal parameter of 4. The typical Henon sequence is selected for the two-dimensional chaotic time series, with $a = 1.4$ and $b = 0.3$. The typical Lorenz sequence is selected for the three-dimensional chaotic time series, with $\gamma = 28$, $\sigma = 10$, and $b = 8/3$. To avoid contingency in the experimental results, 20 experiments with sequence lengths of 50 to 1000 are conducted for statistical testing. The test results of the three different dimensional sequences in the preprocessing stage are shown in Figure 2.
As shown in Figure 2, the curve optimized by the normalization function lies above the curve without optimization in most trials. For the one-dimensional logistic chaotic time series, the prediction step after normalization was greater than or equal to that before processing in 18 of the 20 experiments and smaller in 2, giving an effective test proportion of 90%. For the two-dimensional Henon chaotic time series, the corresponding counts were 15 and 5, an effective proportion of 75%; for the three-dimensional Lorenz chaotic time series, 19 and 1, an effective proportion of 95%. In summary, the proportion of effective tests exceeds 75% in all cases, indicating that, as the sequence length changes, the normalization function in the preprocessing stage can increase the number of prediction steps to a certain extent. Therefore, the established normalization function is effective in increasing the prediction step length.
Secondly, to verify the effectiveness of the joint computing mechanism (JCM) established in the ILTP method, the test analysis is carried out. Taking logistic chaotic time series as an example, the time series with sequence lengths of 20, 30, 50, 100, and 200 are selected to predict the corresponding future time series. The order of the calculation model and the prediction step length under the condition of error index < 10 5 are used as verification indexes. Then, the JCM is compared with the experimental results of the traditional AIC, as shown in Table 1.
It can be seen that, when the sequence length is short, the proposed joint calculation mechanism for the model order not only reduces the model order and the complexity of neural network training, but also achieves a longer prediction step length than the traditional method while satisfying the accuracy requirement. When the sequence length is 20 or 30, the proposed mechanism gains 13 steps over the traditional method. Although the improved method yields a prediction length one step shorter than the traditional method when the sequence length is 100, its model order is lower, and so is the model complexity. As the sequence length increases, the proposed mechanism produces the same order as the traditional method and can realize medium- and long-term prediction of the future changes of the chaotic time series. Therefore, the proposed joint calculation strategy for determining the model order is effective.
Finally, in order to verify the final results of the proposed ILTP method, one-dimensional logistic chaotic time series, two-dimensional Henon chaotic time series, and three-dimensional Lorenz chaotic time series are used as observation sequences. The sequence length is set to 500, and the step size results that can be predicted by the ILTP method under the condition of absolute error index < 10 5 are shown in Figure 3, Figure 4, and Figure 5. Among them, the two-dimensional Henon chaotic time series can be divided into an x phase and a y phase, and the three-dimensional Lorenz chaotic time series can be divided into an x phase, a y phase, and a z phase.
As can be seen from Figure 3, Figure 4, and Figure 5, the effective prediction step of the ILTP method for the one-dimensional logistic chaotic time series reaches 19 steps. For the two-dimensional Henon chaotic time series, the x phase satisfies the accuracy condition for 20 steps and the y phase for 19 steps; judging the two phases together, the maximum prediction step for a given sequence length of 500 is 19 steps. The x, y, and z phases of the three-dimensional Lorenz chaotic time series can be predicted for 13, 10, and 20 steps, respectively, so, taking the minimum over the three phases, the maximum prediction step of the three-dimensional Lorenz chaotic time series is 10 steps.
The statistical experimental results regarding the effectiveness of the three aspects show that the ILTP method can solve the medium and long-term prediction problem of multidimensional chaotic time series under the condition of satisfying the high precision requirement.

3.3. Complexity Analysis

In the simulation experiment, the prediction of the n-dimensional chaotic time series mainly involves generating the experimental data through the mapping functions, jointly calculating the NAR model order, dividing the driving sequence, training the RBF neural network, and performing network prediction. Generating the experimental data requires the mapping function to produce a real-valued sequence of length n, with complexity $O(n)$. In the joint calculation of the NAR model order, the main work is to compute the order through the organic combination of the G-P correlation dimension and the AIC criterion, so the complexity is $O(1)$. The driving-sequence partition algorithm divides the optimized sequence into driving sequences of the same length as the target sequence, with time complexity $O(n)$. Matrix multiplication is the main operation in neural networks, so their complexity is measured by the number of matrix multiplication operations; the time complexity of the RBF neural network training algorithm is $O(n)$. The network prediction algorithm realizes multi-step prediction through a two-layer loop, so its time complexity is $O(n^2)$. Therefore, the overall time complexity of the multi-step prediction algorithm for the n-dimensional chaotic time series is $O(n^2)$.

3.4. Error Analysis

Considering that the error iterative convergence curve of the neural network reflects the quality of the training stage, and that this quality is particularly important for the prediction accuracy of the ILTP method, the training iteration process is analyzed. The one-dimensional logistic, two-dimensional Henon, and three-dimensional Lorenz chaotic time series are again selected. The iteration termination condition for the error is set to the $10^{-15}$ order of magnitude, and the error convergence curves of the neural network training process are shown in Figure 6, Figure 7, and Figure 8, respectively.
According to Figure 6, although the error convergence curve shows an abrupt oscillation at about the 150th iteration, it generally trends downward, and the error converges to the $10^{-15}$ order of magnitude after 200 iterations, indicating that the training of the driving and target sequences meets the expected effect. For the two-dimensional Henon chaotic time series, Figure 7 shows that, although the convergence curves of the x-phase and y-phase training iteration errors are not smooth, they always trend downward. Since the sequence length is 500, the maximum number of iterations is 500; although training does not reach the set error termination value, the error magnitude is below $10^{-12}$ at the maximum number of iterations, which still yields high prediction accuracy. The error convergence curves of the three-dimensional Lorenz chaotic time series are shown in Figure 8. Although the curves of the x, y, and z phases do not reach the set error termination value during neural network training, the error magnitude at the end of training is about $10^{-14}$, which still indicates a low training error. Although the curves jitter and jump during training, they maintain an overall downward trend.
Secondly, considering the correlation between the error obtained by the ILTP method and the prediction step size, the error is the direct factor limiting the prediction step size. Therefore, the prediction error for future values of the chaotic time series is analyzed statistically. As the prediction step size increases, the absolute error between the predicted and real values is calculated; the resulting error curves for the one-dimensional logistic, two-dimensional Henon, and three-dimensional Lorenz chaotic time series are shown in Figure 9, Figure 10, and Figure 11.
It can be seen that, when the prediction length of the future sequence of the one-dimensional logistic chaotic time series is 19, the absolute error is $8.263 \times 10^{-5}$; beyond that step, the order of magnitude of the absolute error rises to $10^{-4}$. Therefore, the maximum step size of the ILTP method for medium- and long-term prediction of the one-dimensional logistic chaotic time series is 19 steps. Figure 10 shows that, when the x phase reaches its maximum step size, the absolute error is $1.123 \times 10^{-5}$, still at the $10^{-5}$ order of magnitude. For the y phase of the two-dimensional Henon chaotic time series, the absolute error at step 19 is $7.303 \times 10^{-5}$; beyond 19 steps, the error no longer maintains the order of magnitude required by the accuracy condition. Therefore, the maximum prediction step size under the accuracy requirement for medium- and long-term prediction of the two-dimensional Henon chaotic time series is 19 steps. Figure 11 shows that the absolute error of the x phase of the three-dimensional Lorenz chaotic time series changes markedly as the step length increases: at step 13 the absolute error is $8.813 \times 10^{-5}$, and its order of magnitude rises to $10^{-4}$ beyond 13 steps. For the y phase, the absolute error is $7.755 \times 10^{-5}$ at the tenth step and increases further beyond 10 steps. For the z phase, the error always meets the accuracy condition as the step length increases. Overall, when estimating the three-dimensional Lorenz chaotic time series, the maximum prediction step that meets the error accuracy condition is 10 steps.
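The way the maximum prediction step is read off the error curves (the last step before the absolute error leaves the required order of magnitude, taking the minimum over phases for multi-dimensional series) can be sketched as follows. The tolerance is left as a parameter, since the text uses both a strict $< 10^{-5}$ bound and the $10^{-5}$ order of magnitude:

```python
def max_valid_step(abs_errors, tol):
    """Largest prediction horizon whose absolute error stays within
    tol at every step up to and including it."""
    steps = 0
    for e in abs_errors:
        if e >= tol:
            break
        steps += 1
    return steps

def max_valid_step_multiphase(phase_errors, tol):
    """Overall step for a multi-phase series: the minimum over its
    phases, as done for the Henon and Lorenz series above."""
    return min(max_valid_step(e, tol) for e in phase_errors)
```

For example, an error trace that stays below the tolerance for two steps and then jumps yields a maximum step of 2.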
Finally, to explore the superiority of the proposed ILTP method and further verify its performance, the logistic chaotic time series with an initial value of 0.52 and a fractal parameter of 4 is taken as an example, and repeated experiments are carried out with different sequence lengths. The proposed method is compared, on the maximum absolute error and average absolute error indexes, with commonly used existing methods: the error back propagation (BP) neural network [23], the radial basis function (RBF) neural network, and the double hidden layer neural network [24]. The results are shown in Table 2 and Table 3.
It can be seen that the average absolute error of the ILTP method is lower than that of the other methods, with an error magnitude below $10^{-5}$: compared with the BP neural network and the double hidden layer neural network, the error is 4–6 orders of magnitude lower, and compared with the RBF neural network, it is 1–3 orders of magnitude lower. Across the repeated experiments with varying sequence length, the maximum absolute error of the ILTP method is 1–5 orders of magnitude lower than that of the other methods, and its magnitude always remains below $10^{-5}$.
Furthermore, considering the prediction step size is an important index to measure the superiority of the method, in order to ensure the accuracy of statistical experiments and increase the repeated test process, the prediction step size results obtained by the four methods are shown in Table 4.
It can be seen that, throughout the experiments with varying sequence length, the prediction step size of the proposed ILTP method is larger than that of the other methods, and significantly larger than that of the BP neural network and the double hidden layer neural network: compared with these two networks, the proposed ILTP increases the prediction step size by 10–20 steps. Compared with the RBF neural network, the proposed ILTP not only has a longer prediction step size, but also gives stable results as the sequence length increases; when the sequence length is greater than 50, the prediction step size remains between 16 and 20 steps. A comparison with other methods is shown in Table 5.
In summary, the proposed ILTP method not only increases the prediction step size under strict precision control, but also effectively reduces the prediction error, showing high-precision performance in solving the future-sequence prediction problem.

4. Conclusions

In order to enhance applicability to chaotic time series with both low-dimensional and high-dimensional characteristics, an intermediate and long-term prediction (ILTP) method for n-dimensional chaotic time series is proposed. The analysis shows that the ILTP method can achieve medium- and long-term prediction at the $10^{-5}$ error magnitude for both low-dimensional and high-dimensional chaotic time series. Compared with other methods, the error of the ILTP method can be reduced by 1–6 orders of magnitude, showing higher prediction accuracy; moreover, its prediction step length can be increased by 10–20 steps, and its prediction performance is more stable. The ILTP method can contribute solutions and a theoretical basis for the pseudo-random sequence prediction of communication systems, and it can also provide reference technology for other sequence prediction applications with chaotic characteristics.

Author Contributions

Conceptualization, F.L.; methodology, F.L.; software, F.L. and B.Y.; validation, F.L. and B.Y.; formal analysis, F.L.; investigation, M.C.; resources, B.Y. and M.C.; data curation, B.Y. and M.C.; writing—original draft preparation, F.L.; writing—review and editing, F.L. and Y.F.; visualization, F.L. and B.Y.; supervision, Y.F. and M.C.; project administration, F.L. and Y.F.; funding acquisition, F.L. and Y.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Grant No. 61971291), the central government local science and technology development projects (2022020128-JH6/1001), and the Shenyang Natural Science Foundation (Grant No. 22-315-6-10).

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

| Symbol | Explanation |
| --- | --- |
| N | observation sequence length |
| X_N | observation sequence |
| X'_N | optimized sequence after normalization |
| X̄_N | mean of the observation sequence |
| s | overall standard deviation of the observation sequence |
| l | optimized model order |
| m | minimum embedding dimension obtained by the G-P method |
| n | calculation result of the AIC criterion |
| M | segmented sequence length |
| {Y_1, Y_2, …, Y_l} | driving sequences |
| Y_{l+1} | target sequence |
| Y_{l+2} | first recursive-feedback output prediction sequence |
| Y_{l+3} | second recursive-feedback output prediction sequence |
| L | prediction step length of the sequence |
| X(N+j) | medium- and long-term prediction results of the observation sequence |
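Using the symbols above, the preprocessing stage (normalizing X_N with mean X̄_N and overall standard deviation s, then decomposing it into driving sequences Y_1…Y_l and a target sequence Y_{l+1} with model order l) might be sketched as follows. Z-score normalization is an assumption inferred from the listed symbols; the paper's exact normalization function may differ.

```python
import numpy as np

def preprocess(x, l):
    """Normalize the observation sequence and reorganize it into
    driving sequences Y_1..Y_l and a target sequence Y_{l+1}.
    Z-score normalization is an assumption, not the paper's exact form."""
    x = np.asarray(x, dtype=float)
    x_norm = (x - x.mean()) / x.std()    # mean X̄_N, overall std s (ddof=0)
    # Sliding windows: each row holds l driving values, last column is the target.
    rows = [x_norm[i:i + l + 1] for i in range(len(x_norm) - l)]
    windows = np.stack(rows)
    drivers, target = windows[:, :l], windows[:, l]
    return drivers, target

drivers, target = preprocess(np.sin(np.linspace(0, 6, 50)), l=4)
```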

Figure 1. The principle of the ILTP method.
Figure 2. Test results of the ILTP preprocessing stage.
Figure 3. Logistic chaotic time series prediction results.
Figure 4. Henon chaotic time series prediction results.
Figure 5. Lorenz chaotic time series prediction results.
Figure 6. One-dimensional logistic error iterative convergence curve. The blue line is the error iteration convergence curve, and the black line is the iteration termination condition set at 10^-15.
Figure 7. Two-dimensional Henon error iterative convergence curve. The blue line is the error iteration convergence curve, and the black line is the iteration termination condition set at 10^-15.
Figure 8. Three-dimensional Lorenz error iterative convergence curve. The blue line is the error iteration convergence curve, and the black line is the iteration termination condition set at 10^-15.
Figure 9. One-dimensional logistic prediction absolute error curve. The blue line represents the absolute error curve, and the blue circles represent the absolute error values.
Figure 10. Two-dimensional Henon prediction absolute error curve. The blue line represents the absolute error curve, and the blue circles represent the absolute error values.
Figure 11. Three-dimensional Lorenz prediction absolute error curve. The blue line represents the absolute error curve, and the blue circles represent the absolute error values.
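Figures 3–11 are built on the logistic, Henon, and Lorenz benchmark systems. A minimal sketch for generating such test series is given below, using commonly cited chaotic parameter values; the paper's exact parameter settings, initial conditions, and integration scheme are assumptions here.

```python
import numpy as np

def logistic(n, x0=0.3, mu=4.0):
    """1-D logistic map x_{k+1} = mu * x_k * (1 - x_k); mu=4 is chaotic."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(mu * xs[-1] * (1 - xs[-1]))
    return np.array(xs)

def henon(n, a=1.4, b=0.3):
    """2-D Henon map with the classic chaotic parameters a=1.4, b=0.3."""
    x, y = 0.1, 0.1
    out = []
    for _ in range(n):
        x, y = 1 - a * x * x + y, b * x
        out.append((x, y))
    return np.array(out)

def lorenz(n, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """3-D Lorenz system integrated with a simple Euler step
    (an assumed scheme; a higher-order integrator may have been used)."""
    s = np.array([1.0, 1.0, 1.0])
    out = []
    for _ in range(n):
        dx = sigma * (s[1] - s[0])
        dy = s[0] * (rho - s[2]) - s[1]
        dz = s[0] * s[1] - beta * s[2]
        s = s + dt * np.array([dx, dy, dz])
        out.append(s)
    return np.array(out)

series = logistic(500)
```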
Table 1. Comparison results of the model order joint calculation system effectiveness experiments.

| Experiment | Order (JCM) | Order (AIC) | Effective Steps (JCM) | Effective Steps (AIC) |
| --- | --- | --- | --- | --- |
| 1 | 1 | 2 | 14 | 0 |
| 2 | 1 | 4 | 14 | 0 |
| 3 | 1 | 1 | 19 | 19 |
| 4 | 1 | 2 | 16 | 17 |
| 5 | 1 | 1 | 20 | 20 |
Table 2. Comparison results of average absolute error.

| Experiment | BP | RBF | Double Hidden Layer | ILTP |
| --- | --- | --- | --- | --- |
| 1 | 0.0455 | 0.0016 | 0.2398 | 6.1071 × 10^-5 |
| 2 | 0.1717 | 3.7947 × 10^-4 | 0.1627 | 1.0940 × 10^-7 |
| 3 | 0.1941 | 2.0100 × 10^-4 | 0.1886 | 2.2794 × 10^-5 |
| 4 | 0.1526 | 0.0014 | 0.1769 | 2.4562 × 10^-5 |
| 5 | 0.1478 | 0.0019 | 0.1099 | 2.1787 × 10^-5 |
Table 3. Comparison results of maximum absolute error.

| Experiment | BP | RBF | Double Hidden Layer | ILTP |
| --- | --- | --- | --- | --- |
| 1 | 0.3500 | 0.0197 | 0.8554 | 5.9861 × 10^-4 |
| 2 | 0.9778 | 0.0052 | 0.9974 | 1.1043 × 10^-6 |
| 3 | 0.9379 | 0.0029 | 0.9454 | 2.7776 × 10^-4 |
| 4 | 0.7918 | 0.0107 | 0.7340 | 2.2192 × 10^-4 |
| 5 | 0.8576 | 0.0225 | 0.6893 | 2.2196 × 10^-4 |
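The mean and maximum absolute errors reported in Tables 2 and 3 can be computed with a few lines; this is a generic sketch of the metrics, not the authors' evaluation script.

```python
import numpy as np

def error_metrics(y_true, y_pred):
    """Mean and maximum absolute error, as reported in Tables 2 and 3."""
    err = np.abs(np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float))
    return err.mean(), err.max()

# Toy usage on made-up values (not from the paper's experiments).
mae, max_ae = error_metrics([0.1, 0.2, 0.3], [0.1, 0.25, 0.28])
```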
Table 4. Comparison results of prediction steps.

| Sequence Length | BP | RBF | Double Hidden Layer | ILTP |
| --- | --- | --- | --- | --- |
| 20 | 0 | 11 | 0 | 14 |
| 50 | 2 | 15 | 0 | 20 |
| 100 | 1 | 12 | 1 | 16 |
| 200 | 4 | 12 | 2 | 16 |
| 300 | 3 | 13 | 2 | 20 |
| 500 | 2 | 17 | 2 | 19 |
| 800 | 5 | 12 | 2 | 18 |
| 1000 | 3 | 11 | 2 | 19 |
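The effective prediction step counts in Tables 1 and 4 can be read as the number of consecutive forecasts whose error stays within a precision-control threshold before the first violation. A sketch of that measurement follows; the threshold `tol` is an assumed parameter, since the exact precision setting is not restated in this excerpt.

```python
import numpy as np

def effective_steps(y_true, y_pred, tol=1e-3):
    """Count consecutive prediction steps whose absolute error stays
    below `tol` before the first violation. `tol` is an assumed
    precision-control threshold, not the paper's exact value."""
    err = np.abs(np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float))
    bad = np.nonzero(err >= tol)[0]           # indices where precision is violated
    return int(bad[0]) if bad.size else len(err)

# Toy usage: the third forecast breaks the 1e-3 threshold, so 2 steps count.
n = effective_steps([1, 2, 3, 4], [1.0001, 2.0002, 3.1, 4.0])
```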
Table 5. Comparison of the ILTP method with other methods.

BP (back propagation neural network)
- Methodology: A three-layer network structure. The input layer receives external input; the neurons in the hidden and output layers process the input features or signals through weight matrices so as to minimize the mean error between the actual and expected network outputs, and then output the results.
- Differences in the ILTP mechanism: A sequence normalization function reduces the impact of the differing dimensions of the original sequence on the accuracy of subsequent processing. The introduced radial basis function neural network adjusts its parameters in real time according to the number of training samples, dynamically determining the hidden-layer unit centers and expansion constants. The model order joint calculation mechanism avoids the subjectivity of other order-selection methods.
- Advantages of ILTP: The BP network sets its parameters subjectively. Its prediction step is 10–20 steps shorter than that of ILTP, and its error is 4–6 orders of magnitude larger. Its output cannot be automatically fed back to the input, so long-term prediction cannot be realized.

RBF (radial basis function neural network)
- Methodology: Radial basis functions serve as the "basis" of the hidden units, forming the hidden-layer space, so input vectors are mapped directly into the hidden space without weighted connections; once the center points are determined, the mapping is fixed. Because the local approximation strategy means only a few connection weights affect the output in any local region of the input space, the network generalizes relatively well and learns quickly.
- Differences in the ILTP mechanism: The recursive feedback mechanism in the ILTP medium- and long-term prediction model automatically updates output values to the input for long-term prediction. A sequence normalization function reduces the impact of the differing dimensions of the original sequence on the accuracy of subsequent processing.
- Advantages of ILTP: The RBF network's prediction step is 2–8 steps shorter than that of ILTP, and its error is 1–3 orders of magnitude larger. Its output cannot be automatically fed back to the input, so long-term prediction cannot be realized.

Double hidden layer (double hidden layer BP neural network)
- Methodology: The same principle as the ordinary back propagation network, but with two hidden layers between the input and output layers, so it can carry more neurons. With a large input layer, the nonlinear mapping is stronger and the prediction performance of the back propagation network improves.
- Differences in the ILTP mechanism: A sequence normalization function reduces the impact of the differing dimensions of the original sequence on the accuracy of subsequent processing. The introduced radial basis function neural network adjusts its parameters in real time according to the number of training samples, dynamically determining the hidden-layer unit centers and expansion constants. The model order joint calculation mechanism avoids the subjectivity of other order-selection methods.
- Advantages of ILTP: The double hidden layer network sets its parameters subjectively. Its prediction step is 10–20 steps shorter than that of ILTP, and its error is 4–6 orders of magnitude larger. Its output cannot be automatically fed back to the input, so long-term prediction cannot be realized.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Liu, F.; Yin, B.; Cheng, M.; Feng, Y. n-Dimensional Chaotic Time Series Prediction Method. Electronics 2023, 12, 160. https://doi.org/10.3390/electronics12010160

