Article

IEALL: Dam Deformation Prediction Model Based on Combination Model Method

College of Computer and Information, Hohai University, Nanjing 211100, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(8), 5160; https://doi.org/10.3390/app13085160
Submission received: 30 March 2023 / Revised: 12 April 2023 / Accepted: 15 April 2023 / Published: 20 April 2023

Abstract

The accuracy of dam deformation prediction is a key issue that needs to be addressed due to the many factors that influence dam deformation. In this paper, a dam deformation prediction model based on IEALL (IGWO-EEMD-ARIMA-LSTM-LSTM) is proposed for a single-point scenario. The IEALL model is based on the idea of a combination model. Firstly, EEMD is used to decompose the dam deformation data, and then the ARIMA and LSTM models are selected for prediction. To address the problem of low prediction accuracy caused by simple linear addition of prediction results from different models in traditional combination models, the LSTM model is used to learn the combination relationship of different model prediction results. The problem of neural network parameters falling into local optima due to random initialization is addressed by using the improved gray wolf optimization (IGWO) to optimize multiple parameters in the IEALL combination model to obtain the optimal parameters. For the multi-point scenario of dam deformation, based on the IEALL model, a dam deformation prediction model based on spatio-temporal correlation and IEALL (STAGCN-IEALL) is proposed. This model introduces graph convolutional neural networks (GCN) to extract spatial features from multi-point sequences, increasing the model’s ability to express spatial dimensions. To address the dynamic correlation between different points in the deformation sequence at any time and the dynamic dependence on different points at any given time, spatio-temporal attention mechanisms are introduced to capture dynamic correlation from both spatial and temporal dimensions. Experimental results showed that compared to ST-GCN, IEALL reduced the RMSE, MAE, and MAPE by 16.06%, 14.72%, and 21.19%. Therefore, the proposed model effectively reduces the prediction error and can more accurately predict the trend of dam deformation changes.

1. Introduction

Water energy is a clean, sustainable source of energy that is becoming increasingly significant. To make full use of water energy, more than 98,000 reservoirs had been built in China by the end of 2011. These dams have brought enormous social and economic benefits in many fields, such as power generation, water supply, shipping, and tourism, and have been crucial to the rapid growth of the national economy [1]. However, owing to the influence of the natural environment, the lack of adequate safety monitoring, and other factors, the working behavior of a dam changes continuously over time, which poses hidden dangers to its normal operation [2,3]. If these hidden dangers are not detected in time and measures taken to address them, catastrophic accidents such as dam damage or even dam failure may occur. Therefore, research on ensuring the safe operation of dams has increasingly become a focus of scientific research.
Dam deformation is one of the most important parameters reflecting dam safety, and scientific analysis and prediction of monitoring data are of great significance for ensuring the safe operation of dams [1,2,3,4]. However, dam deformation data are strongly nonlinear and affected by many factors, so the predictions of a single-point model are often not comprehensive enough. It is therefore necessary to combine the complementary characteristics of different models and build a combined forecasting model to achieve more accurate prediction. Bates et al. [5] first proposed the theory of combined forecasting, arguing that a single model cannot solve the problem completely and that different models or methods must be combined in a scientific way to obtain a comprehensive forecasting model. Common combined prediction models include the weighted average method, ensemble learning methods, neural network combination models, and time series combination models. Currently, models based on weighted averaging generally use simple addition, which may reduce prediction accuracy. Neural network combination models commonly rely on random parameter initialization, which can leave the model stuck in local optima or insufficiently trained. Time series combination models in existing research often ignore the spatial correlation between multiple measuring points in the deformation sequence and fail to capture the dynamic correlation across the spatial and temporal dimensions.
In view of the above problems, based on the idea of first decomposition, then integration, the major work of this paper is as follows:
(1) For the single-point scenario, a combined prediction model based on IEALL is proposed: the time series data of single measurement points are analyzed to obtain the change trend of the dam in the time dimension. Additionally, the model parameter optimization algorithm IGWO [6] is used to obtain the optimal parameters that minimize the model loss value and further improve the performance of the combined model.
(2) For the scenario of multiple measurement points in the dam, a deformation prediction model STAGCN-IEALL is proposed. Graph Convolutional Networks (GCN) [7,8] are used to extract the spatial features of the multi-point series, which increases the model's expressive ability in the spatial dimension, and the IEALL model is used to extract the change trend in the temporal dimension, thereby increasing the prediction accuracy of the model. STAGCN-IEALL also introduces a spatio-temporal attention mechanism [9,10,11,12] to capture the dynamic correlation from both spatial and temporal dimensions.
The remainder of this paper is organized as follows: Section 2 introduces the current research status and deficiencies in dam deformation prediction. Section 3 introduces the methods and optimization ideas used in this paper. Section 4 introduces the dam deformation prediction model based on IEALL. Section 5 introduces the dam deformation prediction model based on IEALL and spatio-temporal correlation. Section 6 conducts comparative experiments to verify the effectiveness of the proposed models. Finally, Section 7 summarizes the whole paper.

2. Related Works

2.1. Combination Models Based on Weighted Averaging

To apply combined prediction models to the field of dam deformation, Wang et al. [13] used the ARIMA model to predict the deformation data of a dam and an ANN model to predict the error, then added the two predicted values to obtain the final prediction, which proved more accurate than the ARIMA model alone. Feng et al. [14] proposed a combined prediction algorithm based on ARIMA-BP, in which ARIMA first predicts the sequence to obtain residual values, BP then predicts the residuals, and the final result is obtained by addition; compared with a single model, the prediction accuracy is improved. However, the above models all use ordinary addition, which cannot fully describe the complex relationship between the predicted values of each model. It is therefore necessary to improve the combination method of traditional combination models to increase their prediction accuracy.

2.2. Combination Models Based on Neural Network

Because the parameters of a neural network are uncertain, the model may not reach its optimal state, reducing prediction accuracy; determining the model parameters therefore requires the study of optimization algorithms. Song, Zhang et al. [15] proposed an LSTM prediction model based on adaptive particle swarm optimization (PSO) to solve the parameter optimization problem of the long short-term memory (LSTM) network, in which the key LSTM parameters are optimized by a PSO algorithm with an adaptive learning strategy. The optimized LSTM model improved prediction accuracy and had universal applicability. To further improve the parameter selection of the LSTM model, Yang et al. [18] suggested an improved PSO algorithm with nonlinearly changing inertia weight and learning factors. However, PSO-based optimization still has limitations: because the inertia weight changes linearly in the standard algorithm, it can easily converge prematurely to a local optimum [16,17].

2.3. Combination Models Based on Time Series

Due to the strong correlation between the deformation series of multiple observation points, it is not sufficient to concentrate only on the time dimension of a single observation point when predicting dam deformation. As spatio-temporal data mining has advanced, an increasing number of researchers have proposed spatio-temporal prediction models [19,20], but their application to dam deformation prediction is still limited. Li et al. [21] successfully fitted and forecasted dam displacement data in time and space using time autoregressive models; nevertheless, the model is only suitable when the monitoring points are uniformly distributed. Yang et al. [22] proposed a spatio-temporal autoregressive model based on the Kriging method with spatial weighting, which combined the dam deformation data, effectively describing the actual spatial variation and enhancing prediction accuracy. All of the aforementioned models can enhance predictive power to some extent, but their ability to capture the complete trend still needs improvement: they cannot capture the dynamic correlation between different measuring points at any given time in the deformation sequence, nor the dynamic dependence of individual measuring points across different times.

3. Methods

3.1. Ensemble Empirical Mode Decomposition

The ensemble empirical mode decomposition (EEMD) algorithm [23,24,25] is a data analysis technique that decomposes a complex time series into a set of simpler intrinsic mode functions (IMFs) and a residue. The decomposition is based on sifting the original data into a series of oscillatory components, the IMFs, extracted from high to low frequency. Each IMF represents a narrowband oscillatory mode, and the sum of all the IMFs and the residue equals the original time series. The decomposition is repeated multiple times with different random noise samples to obtain an ensemble of IMFs, which are then averaged to reduce the noise and improve the accuracy of the decomposition.
Assuming that the original time series is s(t), the specific decomposition process of EEMD is as follows:
(1) Introduce Gaussian white noise ω_i(t) into the original time series s(t) to obtain a new sequence X_i(t) with added Gaussian white noise, as shown in Equation (1):
X_i(t) = s(t) + ω_i(t)
(2) Decompose the new sequence X_i(t) using EMD to obtain IMF components and a trend term r_i(t), as shown in Equation (2):
X_i(t) = Σ_{j=1}^{m} imf_{ij}(t) + r_i(t)
(3) Repeat steps (1) and (2) M times, each time adding a different Gaussian white noise realization to the original time series s(t) and performing the EMD decomposition on the resulting sequence X_i(t). Because uncorrelated random sequences average to zero, averaging the obtained IMF components eliminates the interference caused by the added white noise while maintaining the natural dyadic filtering window of each IMF and trend term, thereby avoiding the influence of the repeated additions of Gaussian white noise on the decomposition of the original sequence and resolving mode mixing. The averaged IMF components and trend term obtained after EEMD decomposition are shown in Equations (3) and (4):
imf_j(t) = (1/M) Σ_{i=1}^{M} imf_{ij}(t)
r(t) = (1/M) Σ_{i=1}^{M} r_i(t)
(4) Using EEMD decomposition, m IMF components and a trend term are obtained, and the original series can be expressed as follows:
s(t) = Σ_{j=1}^{m} imf_j(t) + r(t)
Based on the property that the mean of Gaussian white noise is zero, adding multiple sets of Gaussian white noise and averaging the decomposition results allows the noise realizations to cancel each other out, making the results closer to the true sequence and meeting practical needs.
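The noise-cancellation property that EEMD relies on can be illustrated with a short NumPy sketch. The signal, noise level, and trial count below are arbitrary toy choices, and the EMD sifting itself is omitted; the point is only that averaging M independently perturbed copies of a series drives the added white noise toward zero, as in Equations (3) and (4).

```python
import numpy as np

# Toy demonstration of the EEMD ensemble-averaging idea: each trial adds an
# independent Gaussian white-noise realization omega_i(t) to the signal s(t);
# averaging the M noisy copies cancels the noise because its mean is zero.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 512)
s = np.sin(2 * np.pi * 5 * t) + 0.5 * t           # toy deformation series

M = 200                                            # number of ensemble trials
noisy_trials = [s + 0.2 * rng.standard_normal(s.size) for _ in range(M)]
ensemble_mean = np.mean(noisy_trials, axis=0)      # Eq. (3)/(4)-style average

# Residual noise shrinks roughly as sigma / sqrt(M).
residual = np.abs(ensemble_mean - s).max()
print(residual)
```

With M = 200 trials the worst-case residual noise is on the order of 0.2/√200 ≈ 0.014, far below the added noise amplitude.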

3.2. Autoregressive Integrated Moving Average Model

The Autoregressive Integrated Moving Average (ARIMA) [26] model is a widely used time series analysis method. ARIMA models capture the linear relationships among the data points and make predictions based on those relationships. An ARIMA model consists of three components: autoregression (AR), differencing (I), and moving average (MA). The AR component represents the linear relationship between the current observation and a lagged value, i.e., the value of the time series at some previous time step. The I component involves differencing the time series to make it stationary, i.e., removing any trends or seasonality. The MA component represents the linear relationship between the error term and a lagged value of the error term.
The order of an ARIMA model is specified by three parameters: p, d, and q, where p is the order of the AR component, d is the degree of differencing, and q is the order of the MA component. The selection of the values for these parameters depends on the characteristics of the time series being analyzed and can be determined using various methods such as the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC). Once the parameters are determined, an ARIMA model can be used to make predictions for future values of the time series. The accuracy of the predictions depends on the quality of the model and the stability of the time series.
The deformation data of a dam at a certain moment is determined by the past state variables and external disturbances. The linear part of the deformation sequence in ARIMA is represented by AR, which is determined only by the past state, and the MA represents the sequence determined only by random disturbances such as the environment. As time progresses, the state variables often become non-stationary, making it difficult to fit using ARMA. Therefore, the sequence is subjected to several difference operations to make it stationary before using the ARMA model to predict the deformation of the dam.
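As a rough illustration of the differencing-then-AR idea (not the paper's fitted model: the MA term and the AIC/BIC order selection are omitted for brevity), a toy trending series can be made stationary with one difference and an AR coefficient estimated by ordinary least squares:

```python
import numpy as np

# Sketch of the ARIMA workflow on a synthetic series: difference once (d = 1)
# to remove the trend, then fit an AR(1) part by least squares and produce a
# one-step forecast by undoing the differencing.
rng = np.random.default_rng(1)
n = 300
trend = 0.05 * np.arange(n)                       # non-stationary component
ar = np.zeros(n)
for i in range(1, n):                             # AR(1) disturbance
    ar[i] = 0.7 * ar[i - 1] + 0.1 * rng.standard_normal()
y = trend + ar

dy = np.diff(y)                                   # d = 1 differencing
X = dy[:-1].reshape(-1, 1)                        # lagged differenced values
target = dy[1:]
phi = np.linalg.lstsq(X, target, rcond=None)[0]   # AR coefficient estimate

next_diff = float(phi[0] * dy[-1])                # one-step AR forecast
forecast = y[-1] + next_diff                      # invert the differencing
print(phi, forecast)
```

A full ARIMA(p, d, q) fit would add the moving-average term and choose (p, d, q) via the ADF test and AIC/BIC, as described in the text.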

3.3. Traditional Combination Prediction Model

A combination prediction model uses a certain method to combine multiple different models, fully leveraging the strengths of each model to compensate for the shortcomings of individual model predictions and thereby improving prediction accuracy. In view of the strong nonlinearity and non-stationarity of dam deformation data affected by multiple factors, a single prediction model cannot simultaneously account for the linear and nonlinear parts. Therefore, the idea of using a combination model for dam deformation prediction has been proposed.
Let the original time series of dam deformation be denoted as y_t, which can be represented as follows:
y_t = L_t + N_t
where L_t represents the linear part of the deformation sequence and N_t represents the nonlinear part. The specific steps of the traditional combination model are as follows:
(1) First, a linear prediction model is constructed to predict the linear part of the dam deformation sequence, and the linear prediction result L̂_t and residual sequence e_t are obtained, as shown in Equation (7):
e_t = y_t − L̂_t
(2) Then, a nonlinear prediction model is constructed to predict the nonlinear part of the dam deformation sequence, i.e., the residual sequence e_t. Let f(·) be the functional relationship determined by the nonlinear model; the nonlinear prediction value N̂_t is obtained as follows:
N̂_t = f(e_t)
(3) Finally, the linear and nonlinear prediction results are added to obtain the final prediction result of the combined model:
ŷ_t = L̂_t + N̂_t
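Steps (1)–(3) can be walked through on a toy series. The linear fit and moving-average smoother below are hypothetical stand-ins for the actual linear and nonlinear models; only the decompose-predict-add structure matches the text.

```python
import numpy as np

# Toy walk-through of the traditional combination model, Equations (5)-(9):
# a straight-line fit plays the role of the linear model for L_t, and a crude
# moving-average smoother plays the role of the nonlinear model f(.).
t = np.arange(50, dtype=float)
y = 0.3 * t + np.sin(t / 3.0)                         # y_t = L_t + N_t

coef = np.polyfit(t, y, 1)                            # fit the linear model
L_hat = np.polyval(coef, t)                           # linear prediction
e = y - L_hat                                         # Eq. (7): residuals
N_hat = np.convolve(e, np.ones(5) / 5, mode="same")   # stand-in for f(e_t)
y_hat = L_hat + N_hat                                 # Eq. (9): simple addition
print(np.mean((y - y_hat) ** 2))
```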

3.4. Improved Combination Prediction Model

In the traditional combination model, the linear and nonlinear parts of the dam deformation sequence are predicted separately by two models, and the resulting linear and nonlinear prediction values are then added using the simple linear combination of Equation (9). However, a simple linear combination may not accurately characterize the relationship between the linear and nonlinear prediction values. If the relationship between the two components is more complex, a simple combination with static weights may lower the accuracy of the model, possibly even below that of a single model.
To address the issue that the relationship between the linear prediction value L̂_t and the nonlinear prediction value N̂_t in the dam deformation combination model cannot be characterized by a simple linear combination, this section proposes an improved combination method for the dam deformation prediction model. It uses the powerful self-learning ability of the LSTM model to fit the relationship between the two components, determining the relationship and weights dynamically rather than using a static addition, thereby achieving more accurate predictions. The specific formula is as follows:
ŷ_t = lstm(L̂_t, N̂_t)
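To see why a fitted combiner can beat the static sum of Equation (9), the sketch below learns combination weights by least squares as a simple stand-in for the paper's LSTM combiner; all data are synthetic, and the point is only that fitted weights outperform the fixed (1, 1) of simple addition when the true relationship differs from it.

```python
import numpy as np

# Learned combination vs. static addition. A least-squares weighting stands in
# for the LSTM combiner purely to illustrate that the combination is fitted
# from history rather than fixed at y_hat = L_hat + N_hat.
rng = np.random.default_rng(2)
n = 200
L_hat = rng.standard_normal(n)                    # linear model output
N_hat = rng.standard_normal(n)                    # nonlinear model output
y = 0.8 * L_hat + 1.3 * N_hat + 0.05 * rng.standard_normal(n)

A = np.stack([L_hat, N_hat], axis=1)
w = np.linalg.lstsq(A, y, rcond=None)[0]          # learned combination weights
y_hat = A @ w                                     # fitted combination
static = L_hat + N_hat                            # traditional Eq. (9)
print(w, np.mean((y - y_hat) ** 2), np.mean((y - static) ** 2))
```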

3.5. Optimization Idea

Based on the traditional combination forecasting model, this paper proposes the EALL (EEMD-ARIMA-LSTM-LSTM) algorithm following the decompose-predict-combine approach, combining the techniques introduced earlier to optimize the data components of the traditional combination forecasting model. To solve the problem of randomly initialized neural network parameters leading to local optima or insufficient training, an improved gray wolf optimization algorithm is introduced to optimize the LSTMs in EALL, yielding the dam deformation prediction model IEALL. Then, building on the single-point prediction model IEALL, the spatial correlation between multiple points in the deformation sequence is considered, and a spatio-temporal attention graph convolutional network is used to extract the spatial features between multiple measurement points, resulting in the STAGCN-IEALL dam deformation prediction model. The optimization process for the combined model is illustrated in Figure 1.

4. Dam Deformation Prediction Model Based on IEALL

4.1. Overall Framework of Model

Dam deformation data are affected by a variety of factors, and the combined model's forecast accuracy needs to be increased. This paper proposes a combined prediction model based on IEALL for single-measurement-point scenarios, based on the concept of decomposition-prediction-combination. The overall idea of the model is shown in Figure 2. Firstly, the EEMD algorithm is used to decompose the complex dam deformation series into linear and nonlinear components, which are processed by ARIMA and LSTM, respectively. To address the issue that the relationship between the linear and nonlinear prediction values in the dam deformation combination model cannot be explained by simple linear addition, the model fits the relationship between the two components using the strong self-learning ability of the LSTM model, dynamically determining the relationship and weights rather than using a static addition. Because the neural network parameters are randomly initialized, the model is susceptible to falling into local optima or being insufficiently trained; to optimize the LSTM models in EALL, the improved gray wolf optimization approach is used in this study. The final prediction result is obtained by fitting the relationship between the linear and nonlinear result sequences using historical real data.

4.2. EALL Algorithm

The EALL algorithm is implemented as follows:
1. Collection and preparation of data.
Gather and arrange the pertinent temperature, water level, and dam deformation data, and perform the necessary preprocessing, reduction, and other preparatory operations.
2. EEMD decomposition of the deformation data.
The dam deformation sequence is decomposed into m IMF component sequences (IMF_1, IMF_2, …, IMF_m) by the EEMD decomposition algorithm, and the frequency of each component is then judged one by one. If the number of high-frequency sequences among the m IMF components is i, then the high-frequency sequences are imf_1(t), imf_2(t), …, imf_i(t) and the low-frequency sequences are imf_{i+1}(t), imf_{i+2}(t), …, imf_m(t); the decomposition process is described in Section 3.1.
3. Construct the ARIMA model.
First, the ADF criterion is used to test stationarity, differencing until the sequence is stable to obtain the parameter d; the values of p and q are then determined by the AIC criterion together with the autocorrelation and partial autocorrelation functions, completing the model order determination. The linear portion imf_{i+1}(t), imf_{i+2}(t), …, imf_m(t) of the dam deformation sequence is predicted using ARIMA(p, d, q). For a sequence imf_k(t), k ∈ {i+1, …, m}, the prediction L̂_k is calculated using Equation (11), yielding the linear prediction results L̂_t ∈ {L̂_{i+1}, …, L̂_m}:
L̂_k = arima(imf_k(t), p, d, q), k ∈ {i+1, …, m}
4. Construct the LSTM model.
The LSTM model is used to predict the nonlinear part of the deformation sequence. It is built on top of the RNN, using three gate structures (forget gate, input gate, and output gate) to control information flow, forgetting, and updating at different time steps, thereby alleviating problems such as exploding gradients over long-term dependencies and making it better at predicting time series with large time spans. Assuming the original sequence is x(t), prediction with the LSTM model can be divided into the following steps:
(a) Data preprocessing and normalization.
The feature sequence of the most correlated influencing factors (temperature, water level, aging) is added to the training data as training samples for the LSTM model. The training data are then normalized and divided into a training set and a validation set.
Specifically, the expression for the displacement component of the dam body affected by temperature, δ_T, is given in Equation (12) [27]:
δ_T = Σ_{i=1}^{n} (b_{1i} sin(2πit/365) + b_{2i} cos(2πit/365))
where b_{1i} and b_{2i} are the regression coefficients of the temperature factor, n generally takes a value of 1 or 2, and t is the cumulative number of days from the monitoring day to the measurement day. When i is 1 the period is one year, and when i is 2 the period is half a year.
The water level factor δ_H is the displacement component generated by the dam under the action of water pressure and the gravity of the reservoir water, as shown in Equation (13) [27]:
δ_H = Σ_{i=1}^{m} a_i H^i
where a_i is the regression coefficient of the water pressure factor, m is generally 3 for gravity dams, and H is the reservoir water level.
The time-dependent (aging) component of the dam deformation, denoted δ_θ, is expressed as Equation (14) [27]:
δ_θ = c ln θ
In Equation (14), c represents the regression coefficient of the time effect factor, and θ is the accumulated number of days from the observation day to the measurement start day divided by 100.
According to the above formulas, the appropriate influencing factors are selected as H, H², H³, sin(2πt/365), cos(2πt/365), sin(4πt/365), cos(4πt/365), θ, and ln θ to construct the input feature sequence of the LSTM model. These are combined with the deformation sequence to construct the training set input, as follows:
trainX = [ x_1      x_2      …  x_4       d_1      d_2      …  d_4
           x_2      x_3      …  x_5       d_2      d_3      …  d_5
           ⋮
           x_{n−4}  x_{n−3}  …  x_{n−1}   d_{n−4}  d_{n−3}  …  d_{n−1} ],   trainY = [x_5, x_6, …, x_n]^T
The formula shows that the training set input of the LSTM model is constructed by combining the deformation sequence with a feature sequence that accounts for factors such as temperature, water level, and aging, where n is the length of the training sequence, x_1, x_2, …, x_n is the deformation sequence of the training set, and d_1, d_2, …, d_n is the feature sequence composed of factors such as temperature, water level, and aging, which are added to the training in the form of features. This effectively improves the LSTM model's recognition of influencing-factor features and achieves more accurate predictions.
(b) Parameter initialization.
Parameter initialization is an important step in training a deep learning model such as LSTM. The batch size, the number of neurons (units), and the learning rate η are parameters that need to be initialized when setting up the LSTM model.
(c) LSTM model training.
Train the LSTM model using the parameters initialized in step (b), using the Adam algorithm in place of plain gradient descent. The Adam algorithm updates the neural network weights iteratively based on the training data, achieving efficient computation and good performance. At the same time, set the training parameters such as the number of training epochs and the batch size.
(d) Model prediction.
Load the model trained in step (c), make predictions on the validation set, and check the model's prediction performance. If the error is large, adjust the parameters and retrain; if the error meets the requirements, save the output as the predicted value of the LSTM model.
The LSTM is first used to predict the nonlinear part imf_1(t), imf_2(t), …, imf_i(t) of the dam deformation sequence. For a sequence imf_j(t), j ∈ {1, 2, …, i}, with lstm(·) denoting the functional relationship determined by the LSTM model, the predicted values N̂_t ∈ {N̂_1, …, N̂_i} of the nonlinear part are obtained as:
N̂_j = lstm(imf_j(t)), j ∈ {1, 2, …, i}
Finally, a second LSTM model is constructed to fit the linear and nonlinear prediction results. Assuming the functional relationship determined by the second LSTM is lstm′(·), the final prediction result of the combination model is:
ŷ_t = lstm′(L̂_t, N̂_t)
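Step 4(a) above, the factor construction of Equations (12)–(14) and the sliding-window trainX/trainY layout, can be sketched as follows. The window length of 4 follows the formula; the deformation series and water levels are toy values, not real monitoring data.

```python
import numpy as np

# Build the HST-style factor sequence d_t (water level powers, annual and
# semi-annual harmonics, aging terms) and assemble sliding-window training
# pairs: each row is (x_i..x_{i+3}, d_i..d_{i+3}) and the target is x_{i+4}.
n = 60
t = np.arange(1, n + 1, dtype=float)
H = 100 + 5 * np.sin(2 * np.pi * t / 365)          # toy reservoir level
theta = t / 100.0
factors = np.stack([
    H, H**2, H**3,
    np.sin(2 * np.pi * t / 365), np.cos(2 * np.pi * t / 365),
    np.sin(4 * np.pi * t / 365), np.cos(4 * np.pi * t / 365),
    theta, np.log(theta),
], axis=1)                                          # d_t, shape (n, 9)

x = 0.01 * t + 0.1 * np.sin(t / 5.0)                # toy deformation series
w = 4                                               # window length
trainX = np.stack([np.concatenate([x[i:i + w], factors[i:i + w].ravel()])
                   for i in range(n - w)])          # deformation + factor windows
trainY = x[w:]                                      # targets x_5 .. x_n
print(trainX.shape, trainY.shape)
```

Each of the n − 4 rows holds 4 deformation values plus 4 × 9 factor values, ready to be reshaped into LSTM time steps.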

4.3. Optimization of Model Parameters Based on IGWO

Compared with other algorithms, the gray wolf optimization algorithm offers strong global search ability and fast convergence [28]. On this basis, the IGWO algorithm is developed by combining a dynamic weight with a nonlinear convergence factor to enhance optimization performance, and IGWO is then used to optimize the LSTM. To increase the model's predictive accuracy, the improved gray wolf optimization method primarily optimizes the LSTM model's parameters and derives the optimal parameter group. The IGWO optimization process primarily involves the following aspects:
(1) Optimized parameter selection
The main parameters to optimize are the number of training epochs (epoch), the batch size (batch_size), the number of hidden layer units (num), the learning rate (η), and so on.
(2) Selecting a fitness function
The fitness function is a critical part of the iterative search in the IGWO algorithm and directly affects its convergence rate and optimization performance. The ultimate objective of the combined dam deformation prediction model is to minimize Σ_{t=1}^{t_max} (ŷ_t − y_t)², where y_t is the actual value at time t, ŷ_t is the corresponding predicted value, and t_max is the length of the time series. The fitness function is accordingly set as in Equation (18):
fitness = Σ_{t=1}^{t_max} (ŷ_t − y_t)²
(3) EALL model process based on IGWO optimization
The flow chart of optimizing the LSTM model parameters in EALL based on IGWO is shown in Figure 3. First, the parameters of the LSTM model are initialized; a random uniform distribution is used to initialize the N individuals of the gray wolf population, the maximum iteration number t_max, and the coefficient parameters a, A, and C. After calculating and comparing the fitness values of all gray wolves, the positions corresponding to the top three fitness values are recorded as X_α, X_β, and X_δ. The coefficient parameters a, A, and C are then updated, as is each gray wolf's position X(t+1). The fitness values are recalculated and compared, the top three wolves are selected, and X_α, X_β, and X_δ are updated. While the current iteration number is less than the maximum t_max, the positions and coefficient parameters continue to be updated; once it exceeds t_max, X_α is output as the optimal parameter solution. The LSTM model is then built with the optimized parameters and trained to obtain an optimal LSTM model, which is used to predict the prediction samples and produce the final LSTM prediction value.
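The search loop described above can be sketched as follows. This is the standard GWO update (the IGWO refinements, i.e., the dynamic weight and nonlinear convergence factor, are omitted), and a simple quadratic stands in for the LSTM training loss of Equation (18).

```python
import numpy as np

# Minimal grey wolf optimizer: wolves move toward the three best positions
# X_alpha, X_beta, X_delta; the coefficient a decays from 2 to 0 over the
# iterations, shifting the search from exploration to exploitation.
rng = np.random.default_rng(3)

def fitness(x):
    # Stand-in loss with optimum at (2, -1); in the paper this would be the
    # SSE of the LSTM trained with the candidate hyperparameters.
    return float(np.sum((x - np.array([2.0, -1.0])) ** 2))

N, dim, t_max = 15, 2, 100
wolves = rng.uniform(-5, 5, size=(N, dim))        # random uniform init
for t in range(t_max):
    order = np.argsort([fitness(w) for w in wolves])
    alpha, beta, delta = wolves[order[:3]]        # X_alpha, X_beta, X_delta
    a = 2 - 2 * t / t_max                         # decaying convergence factor
    new = np.empty_like(wolves)
    for i, wolf in enumerate(wolves):
        candidates = []
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2 * a * r1 - a, 2 * r2         # coefficient parameters
            candidates.append(leader - A * np.abs(C * leader - wolf))
        new[i] = np.mean(candidates, axis=0)      # position update X(t+1)
    wolves = new
best = min(wolves, key=fitness)
print(best, fitness(best))
```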

5. Dam Deformation Prediction Model Based on Spatio-Temporal Correlation and IEALL

The IEALL model analyzes and forecasts the monitoring sequence of a single monitoring point, but more often it is necessary to take into account the strong correlation between the deformation sequences of multiple monitoring points. This section focuses on analyzing dam deformation data in both the temporal and spatial dimensions to produce more precise predictions of dam deformation.
Assume that in multi-point dam deformation prediction there are N deformation measuring points, each with a D-dimensional sequence. The deformation time series of each point is taken as the prediction sequence, the other D−1 indexes form the feature series of the point (temperature, water level, aging, etc.), and the time series of each point has length T. The global input of the model can then be expressed as the historical feature set of all measuring points: X = (X_1, X_2, …, X_T) ∈ R^{N×T×D}, where X_t = (x_t^1, x_t^2, …, x_t^N) represents the data of all measuring points at time t ∈ {1, 2, …, T}. The data of point i at time t consists of the deformation data and the feature sequence of that point, expressed as x_t^i = (x_t^{i,1}, x_t^{i,2}, …, x_t^{i,D})^T ∈ R^D. Likewise, a single measuring point i can be expressed as x^i = (x_1^i, x_2^i, …, x_T^i) ∈ R^{T×D}. The research content of this section is the dam deformation prediction task: the historical sequence X of the N deformation measuring points over time length T is input into the proposed model to predict the deformation of each measuring point over the next K days, Ŷ_{T+K}^i = (ŷ_{T+1}^i, ŷ_{T+2}^i, …, ŷ_{T+K}^i) ∈ R^K. By comparing the predicted values Ŷ_{T+K}^i with the actual values Y_{T+K}^i = (y_{T+1}^i, y_{T+2}^i, …, y_{T+K}^i) ∈ R^K over the K days, conclusions are drawn regarding the model's prediction performance.
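The tensor shapes defined above can be made concrete with a small NumPy sketch; all values are random placeholders, and only the indexing conventions are taken from the text.

```python
import numpy as np

# Concrete shapes for the multi-point input: N measuring points, T time
# steps, D channels per point (deformation plus D-1 factor features);
# K is the forecast horizon.
N, T, D, K = 6, 30, 4, 5
X = np.random.default_rng(4).standard_normal((N, T, D))

X_t = X[:, 0, :]      # all measuring points at one time step, shape (N, D)
x_i = X[0, :, 0]      # deformation series of one measuring point, shape (T,)

# Placeholder K-day forecasts, one row per measuring point, shape (N, K).
Y_hat = X[:, -1:, 0] + np.zeros((N, K))
print(X.shape, X_t.shape, x_i.shape, Y_hat.shape)
```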

5.1. Overall Framework of Model

Based on the IEALL model, a new spatio-temporal dam deformation prediction model, STAGCN-IEALL, is proposed. To capture how each measuring point affects the other measuring points in space, GCN is utilized to extract the spatial properties of the multiple measuring points. The IEALL model is used to extract the temporal properties of the sequence, namely the variation trend of the measuring points along the time dimension. At the same time, a spatio-temporal attention mechanism is introduced so that the model can handle both the dynamic correlation between different measuring points at any given time and the dynamic dependence of any measuring point across different times in the deformation sequence, allowing the proposed algorithm to extract the spatio-temporal characteristics of the sequence more accurately and achieve more accurate prediction.
The STAGCN-IEALL framework, shown in Figure 4, consists mainly of two parts: an encoder network and a decoder network. The encoder is composed of a spatio-temporal attention module, a GCN spatial feature extraction module, and an IEALL temporal feature extraction module; the decoder is an LSTM network. The specific process is as follows. First, the graph of the dam deformation sequences at the measuring points is fed into the encoder. The spatio-temporal attention module, which is divided into a spatial attention module and a temporal attention module, computes the attention matrix in each dimension; a normalization step then produces a spatial attention matrix and a temporal attention matrix, in which each value indicates the attention weight allocated between nodes, so as to capture the dynamic correlation in the spatial and temporal dimensions. Next, the output of the spatio-temporal attention module is passed to the spatial feature extraction module, where GCN, which extracts spatial features effectively, combines the attention feature vectors and extracts spatial features from the multiple measurement points; because the dynamic correlation between measurement points at any given time is also taken into account, more complete spatial features are obtained. Similarly, the data are passed to the temporal feature extraction module, where the IEALL model extracts features along the time dimension and combines them with the temporal attention feature vector, yielding more comprehensive temporal features that account for the dynamic dependency between different moments.
At the same time, the encoder outputs the hidden features learned at each moment and constructs a latent representation of the spatio-temporal features of the historical deformation sequence, namely the spatio-temporal attention context vector. Finally, the LSTM decoder decodes according to the context vector and the hidden-layer output, and the dam deformation sequence is reconstructed through a fully connected layer to obtain the final prediction value.

5.2. Spatio-Temporal Attention Mechanism

(1)
Spatial attention mechanism
The dam deformation monitoring data of several measuring points interact with each other and are dynamic to a certain extent; that is, they change with time, space, and other factors. To enable the model to adaptively calculate the dynamic correlation between measurement points, this section builds on the spatio-temporal correlation of the multiple measurement points and introduces the spatio-temporal attention mechanism. The spatial attention mechanism is computed by the following equation:
$S = V_s \cdot \sigma\big( (X_t W_1) W_2 (W_3 X_t)^{T} + b_s \big)$
where $X_t \in \mathbb{R}^{N \times T \times D}$ is the input at time $t$; $V_s \in \mathbb{R}^{N \times N}$, $W_1 \in \mathbb{R}^{T}$, $W_2 \in \mathbb{R}^{D}$, $W_3 \in \mathbb{R}^{T \times D}$, and $b_s \in \mathbb{R}^{N \times N}$ are learnable parameters; and $\sigma$ is the activation function. The attention matrix $S \in \mathbb{R}^{N \times N}$ is computed dynamically from the input data of the current layer; $S_{i,j}$ represents the semantic degree of influence of node $j$ on node $i$.
Then, the softmax function is used to normalize $S \in \mathbb{R}^{N \times N}$, mapping each value to a real number in $(0, 1)$ so that the output can be interpreted as a probability, with the probabilities in each row summing to $1$. Let $S'$ denote the softmax-normalized attention matrix; the computation is as follows:
$S'_{i,j} = \mathrm{softmax}(S_{i,j}) = \dfrac{\exp(S_{i,j})}{\sum_{j=1}^{N} \exp(S_{i,j})}$
The values in the matrix represent the attention weights between nodes. When features of the network need to be extracted, the spatial attention matrix $S' \in \mathbb{R}^{N \times N}$ is combined with the matrix representing the graph structure to dynamically adjust the influence weights between nodes and allocate attention reasonably, after which the subsequent feature extraction is carried out. The spatial attention mechanism is shown in Figure 5.
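The spatial attention computation above can be sketched in NumPy. This is an illustrative sketch, not the paper's implementation: the sizes and parameter values are random placeholders, and $W_2$ is taken here as a $D \times T$ matrix so that every matrix product is well-defined (following the ASTGCN-style formulation this attention resembles).

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, D = 5, 24, 3            # small illustrative sizes

X = rng.standard_normal((N, T, D))       # input sequence of all points

# Learnable parameters (random here; trained in practice).  W2 is taken
# as D x T so that every matrix product below is well-defined.
W1 = rng.standard_normal(T)
W2 = rng.standard_normal((D, T))
W3 = rng.standard_normal(D)
Vs = rng.standard_normal((N, N))
bs = rng.standard_normal((N, N))

def sigmoid(z):
    # clipped for numerical stability
    return 1.0 / (1.0 + np.exp(-np.clip(z, -50, 50)))

left = np.einsum('ntd,t->nd', X, W1) @ W2     # (N, T): contract time, mix features
right = np.einsum('ntd,d->nt', X, W3).T       # (T, N): contract features, transpose
S = Vs @ sigmoid(left @ right + bs)           # raw attention matrix, (N, N)

# Row-wise softmax: S_prime[i, j] is the attention node i pays to node j.
S_prime = np.exp(S - S.max(axis=1, keepdims=True))
S_prime /= S_prime.sum(axis=1, keepdims=True)
```

Each row of `S_prime` sums to 1, matching the normalization described above, and can be Hadamard-multiplied with the graph structure matrix during feature extraction.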
(2)
Temporal attention mechanism
The time dimension can be used to describe the correlation of a given measurement point of dam deformation at different times, and this correlation also changes dynamically over time. Therefore, a temporal attention mechanism is introduced at the same time to adaptively adjust the weights of different times. The specific calculation equations are as follows:
$E = V_e \cdot \sigma\big( (X_t^{T} U_1) U_2 (U_3 X_t) + b_e \big)$
$E'_{i,j} = \dfrac{\exp(E_{i,j})}{\sum_{j=1}^{T} \exp(E_{i,j})}$
where $X_t \in \mathbb{R}^{N \times T \times D}$; $V_e \in \mathbb{R}^{T \times T}$, $U_1 \in \mathbb{R}^{N}$, $U_2 \in \mathbb{R}^{D}$, $U_3 \in \mathbb{R}^{D \times N}$, and $b_e \in \mathbb{R}^{T \times T}$ are learnable parameters; $\sigma$ is the activation function; and $E$ is the unnormalized attention matrix. Likewise, $E'$ denotes the attention matrix normalized by the softmax function, and $E'_{i,j}$ represents the degree of influence of time $j$ on time $i$. When network features need to be further extracted, the weights at different times can be dynamically adjusted by combining the normalized temporal attention matrix, attention can be allocated reasonably, and the subsequent temporal feature extraction can then be carried out.

5.3. Encoder

5.3.1. Spatial Feature Extraction

Since the input sequence in this part is organized as a graph, GCN is utilized to extract spatial features. There are two common ways to construct a GCN. The first is the spatial-domain method, which is comparable to conventional convolution in that the convolution operation is defined directly on each node's connection relationships. The other is the spectral-domain graph convolution network, which performs convolution using the graph Fourier transform. In this section, the spectral-domain method is used: the data are regarded as signals on the graph, and the information on the graph is processed directly to capture the patterns and features of the input sequence in space, so as to accurately describe the spatial correlation between the measurement points and improve prediction accuracy.
A graph can be analyzed spectrally through algebraic transformations. In spectral graph theory, the Laplacian matrix $L$ describes the correlation of the nodes in the graph structure, namely $L = D - A$, and its normalized form is $L = I_N - D^{-1/2} A D^{-1/2} \in \mathbb{R}^{N \times N}$, where $I_N$ is the identity matrix, $A$ is the adjacency matrix, and $D \in \mathbb{R}^{N \times N}$ is the degree matrix, a diagonal matrix of node degrees with $D_{ii} = \sum_j A_{ij}$. Decomposing the Laplacian matrix further gives $L = U \Lambda U^{T}$, where $U$ is the Fourier basis and $\Lambda = \mathrm{diag}(\lambda_0, \lambda_1, \ldots, \lambda_{N-1}) \in \mathbb{R}^{N \times N}$ is a diagonal matrix of the eigenvalues of the Laplacian matrix.
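The normalized Laplacian, its eigendecomposition, and the graph Fourier transform can be sketched in NumPy on a small illustrative graph (a 5-node ring standing in for the sensor network; the adjacency is a placeholder, not the paper's actual graph):

```python
import numpy as np

# Adjacency matrix of a small illustrative 5-point sensor graph (a ring).
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)

N = A.shape[0]
deg = A.sum(axis=1)                      # node degrees D_ii = sum_j A_ij
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))

# Normalized Laplacian  L = I_N - D^{-1/2} A D^{-1/2}
L = np.eye(N) - D_inv_sqrt @ A @ D_inv_sqrt

# Eigendecomposition  L = U Lambda U^T; U is the graph Fourier basis.
lam, U = np.linalg.eigh(L)

# Graph Fourier transform of a signal and its inverse (U is orthogonal).
x = np.arange(N, dtype=float)
x_hat = U.T @ x                          # forward transform
x_back = U @ x_hat                       # inverse transform recovers x
```

Because $U$ is orthogonal, the inverse transform recovers the original signal exactly, which is the property the spectral convolution below relies on.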
Taking the deformation input at time $t$ as an example, the input graph signal is $X_t = (x_t^1, x_t^2, \ldots, x_t^N)$, and its graph Fourier transform is $\hat{X}_t = U^{T} X_t$. Because the Laplacian's $U$ is an orthogonal matrix, the inverse graph Fourier transform $X_t = U \hat{X}_t$ recovers the signal. The convolution kernel $g_\theta$ is used to convolve the input signal, with $*_G$ denoting graph convolution via the diagonalized linear operator defined in the Fourier domain, as shown in Equation (23):
$g_\theta *_G x = g_\theta(L) X_t = g_\theta(U \Lambda U^{T}) X_t = U\, g_\theta(\Lambda)\, U^{T} X_t$
Graph convolution in the spectral domain thus reduces to a product of transformed signals, but the cost of the eigendecomposition of the Laplacian matrix becomes high when the graph is large. To address this problem, an approximate solution can be obtained with a Chebyshev polynomial kernel, as shown in Equation (24):
$g_\theta *_G x = g_\theta(L) X_t = U\, g_\theta(\Lambda)\, U^{T} X_t \approx \sum_{k=0}^{K-1} \theta_k T_k(\tilde{L}) X_t$
where $\theta_k$ is the coefficient of the Chebyshev polynomial, $\tilde{L} = \frac{2L}{\lambda_{max}} - I_N$, and $\lambda_{max}$ is the largest eigenvalue of the Laplacian matrix. The Chebyshev polynomials are defined recursively as $T_k(x) = 2x T_{k-1}(x) - T_{k-2}(x)$, with $T_0(x) = 1$ and $T_1(x) = x$. Using this expansion to obtain the approximate solution is equivalent to extracting, with the convolution kernel, the information of the $0$ to $(K-1)$-order neighborhood of every node in the graph, from which the spatial features are computed. In addition, for a multi-channel graph signal whose data have multi-dimensional features, each node has $D-1$ feature dimensions, so $D-1$ convolution kernels are used at time $t$ to perform the convolution and obtain $g_\theta *_G x$, where $\theta = (\theta_1, \theta_2, \ldots, \theta_{D-1})$ are the convolution kernel parameters; the ReLU linear rectification unit is used as the activation function, giving $\mathrm{ReLU}(g_\theta *_G x)$.
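The truncated Chebyshev expansion of Equation (24) can be sketched as a small NumPy function. The demo graph, signal, and coefficients are illustrative assumptions; `lam_max` is computed explicitly since the combinatorial Laplacian used in the demo does not have eigenvalues bounded by 2.

```python
import numpy as np

def cheb_graph_filter(L, X, theta, lam_max):
    """Approximate g_theta *_G x by the truncated Chebyshev expansion
    sum_{k=0}^{K-1} theta_k T_k(L_tilde) X, with L_tilde = 2L/lam_max - I."""
    N = L.shape[0]
    L_tilde = 2.0 * L / lam_max - np.eye(N)
    T_prev, T_curr = np.eye(N), L_tilde          # T_0 = I, T_1 = L_tilde
    out = theta[0] * (T_prev @ X)
    if len(theta) > 1:
        out = out + theta[1] * (T_curr @ X)
    for k in range(2, len(theta)):               # T_k = 2 L~ T_{k-1} - T_{k-2}
        T_prev, T_curr = T_curr, 2.0 * L_tilde @ T_curr - T_prev
        out = out + theta[k] * (T_curr @ X)
    return out

# Demo on a 4-node path graph, combinatorial Laplacian L = D - A.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
lam_max = float(np.linalg.eigvalsh(L).max())

X = np.arange(8.0).reshape(4, 2)                 # two-channel graph signal
Y = cheb_graph_filter(L, X, theta=[0.4, 0.3, 0.2], lam_max=lam_max)
```

With a single coefficient the filter reduces to $\theta_0 T_0(\tilde{L}) X = \theta_0 X$, a useful sanity check; each additional term widens the receptive field by one hop, matching the $0$ to $(K-1)$-order neighborhood interpretation in the text.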
At the same time, in order to dynamically adjust the correlation between the nodes, the spatial attention matrix is combined with each term $T_k(\tilde{L})$ of the Chebyshev polynomials to obtain $T_k(\tilde{L}) \odot S'$, where $\odot$ denotes the Hadamard product. The graph convolution operation combined with the spatio-temporal attention mechanism is therefore expressed as follows:
$y_s = g_\theta *_G x = g_\theta(L) X_t \approx \sum_{k=0}^{K-1} \theta_k \big( T_k(\tilde{L}) \odot S' \big) X_t$

5.3.2. Temporal Feature Extraction

The temporal feature extraction module is mainly used to extract the time-dimension features of the prediction sequence, that is, the hidden-layer output $h_t$ produced when the model processes the sequence at time $t$. This article uses the IEALL model, whose specific steps are described in Section 4. The hidden state at time $t$ and the encoded context feature vector are computed as in Equation (26):
$h_t, c_t = \mathrm{IEALL}(x_t, h_{t-1}, c_{t-1})$
where the context feature vector $c_t$ is calculated from the temporal attention $E'_{t,i}$ at time $t$ and the IEALL hidden-layer outputs $h_i$: $c_t = \sum_{i=1}^{t-1} E'_{t,i} h_i$.

5.4. Decoder

The decoder module is an LSTM network that decodes the context vectors $c_t = (c_1, c_2, \ldots, c_T)$ and the historical hidden-layer feature information to generate the target sequence prediction values. The decoder LSTM equations are as follows:
$f_t = \sigma(W_{xf} x_t + W_{hf} h_{t-1} + b_f)$
$i_t = \sigma(W_{hi} h_{t-1} + W_{xi} x_t + b_i)$
$\tilde{C}_t = \tanh(W_{hc} h_{t-1} + W_{xc} x_t + b_c)$
$C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t$
$o_t = \sigma(W_{ho} h_{t-1} + W_{xo} x_t + b_o)$
$h_t = o_t \odot \tanh(C_t)$
The target sequence $y_t = (y_1, y_2, \ldots, y_T)$ is generated as:
$y_1 = \mathrm{LSTM}(c_1)$
$y_2 = \mathrm{LSTM}(c_2, y_1)$
$\vdots$
$y_t = \mathrm{LSTM}(c_t, y_1, \ldots, y_{t-1})$
Finally, the target sequence $y_t = (y_1, y_2, \ldots, y_T)$ is passed through a fully connected layer FC to obtain the final prediction sequence $(\hat{Y}_{T+1}, \hat{Y}_{T+2}, \ldots, \hat{Y}_{T+K}) = \mathrm{FC}(y_t)$.
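A single decoder step following the gate equations above can be sketched in plain NumPy. This is an illustrative sketch with random placeholder weights, not the trained decoder; for compactness the input-to-gate and hidden-to-gate weights are stored as one matrix per gate acting on the concatenation $[x_t; h_{t-1}]$, and the cell-state update $C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t$ is included since $h_t = o_t \odot \tanh(C_t)$ depends on it.

```python
import numpy as np

def lstm_step(x_t, h_prev, C_prev, W, b):
    """One LSTM step; W and b hold gate parameters keyed 'f', 'i', 'c', 'o',
    each acting on the concatenation of input and previous hidden state."""
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    z = np.concatenate([x_t, h_prev])
    f_t = sig(W['f'] @ z + b['f'])            # forget gate
    i_t = sig(W['i'] @ z + b['i'])            # input gate
    C_hat = np.tanh(W['c'] @ z + b['c'])      # candidate cell state
    o_t = sig(W['o'] @ z + b['o'])            # output gate
    C_t = f_t * C_prev + i_t * C_hat          # cell-state update
    h_t = o_t * np.tanh(C_t)                  # hidden state
    return h_t, C_t

rng = np.random.default_rng(1)
nx, nh = 3, 4                                  # input and hidden sizes
W = {k: rng.standard_normal((nh, nx + nh)) for k in 'fico'}
b = {k: np.zeros(nh) for k in 'fico'}
h, C = np.zeros(nh), np.zeros(nh)
h, C = lstm_step(rng.standard_normal(nx), h, C, W, b)
```

Because the output gate lies in $(0, 1)$ and $\tanh$ in $(-1, 1)$, every component of $h_t$ stays strictly inside $(-1, 1)$, which keeps the decoder's recursive generation numerically well-behaved.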

6. Experiments

Based on the above theory and design, this section covers three aspects: the experimental environment and data, the evaluation indices, and the results and their analysis. Through a series of comparison and verification experiments, the feasibility and value of the proposed model are demonstrated.

6.1. Experimental Preparation

The hardware environment used for the experiments is an Intel Core i5-8600K with 16 GB RAM and an RTX 2060; the software environment is Python 3.7, TensorFlow 1.12 (GPU), PyCharm 2019.2, Keras 2.2.4, and MATLAB 2016a.
The following categories of data were analyzed for this study:
(1)
The data used for the EALL are the monitoring data of a dam: the downstream displacement data of the positive plumb line, together with temperature and water level data, from measurement point PL4 of the 19# dam section, covering 13 November 2011 to 16 May 2018 and totaling 7227 items. The data are first preprocessed with pandas: outliers are removed according to the $3\sigma$ criterion, the data are temporally aligned with the resample method, and missing values are then filled by Lagrange interpolation to obtain the final preprocessed data.
(2)
The data used in Experiment 1 on gray wolf optimization are the Sphere, Rastrigin, and Griewank functions from the CEC2014 benchmark function set. Benchmark functions are used to test whether an algorithm is performant and efficient; by selecting appropriate benchmark functions, the exploration ability, convergence speed, and optimization accuracy of an optimization algorithm can be evaluated.
(3)
The experimental data for the feasibility and performance of the STAGCN-IEALL model are the downstream displacement data of five measurement points (PP1-1, PP2-1, …, PL5) on five positive vertical lines (PP1–PL5), located in dam sections No. 1, 7, 12, 17, and 19, respectively. The time span of each measurement point runs from 13 November 2011 to 16 May 2018, and for each point the deformation data and the corresponding temperature, water level, and other four characteristic data are selected. The dataset contains a total of 60,225 entries.

6.2. Evaluation Index

To verify the prediction effect of the method proposed in this paper and comprehensively evaluate the model, three indicators are selected as the evaluation criteria of the prediction model: Root Mean Square Error (RMSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Error (MAE).
(1) RMSE. The lower the RMSE, the more accurate the prediction, as it measures the dispersion of the prediction errors. The RMSE equation is:
$\mathrm{RMSE} = \sqrt{\dfrac{1}{m} \sum_{i=1}^{m} (x_i - x_{p_i})^2}$
where $m$ is the total number of data, $x_i$ is the true value of the $i$-th datum, and $x_{p_i}$ is the predicted value.
(2) MAPE. For the MAPE criterion, the smaller the value, the higher the prediction accuracy of the model. The MAPE equation is:
$\mathrm{MAPE} = \dfrac{1}{m} \sum_{i=1}^{m} \left| \dfrac{x_i - x_{p_i}}{x_i} \right|$
(3) MAE. For the MAE criterion, the smaller the value, the higher the prediction precision of the model. The MAE equation is:
$\mathrm{MAE} = \dfrac{1}{m} \sum_{i=1}^{m} \left| x_i - x_{p_i} \right|$
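The three evaluation indices can be implemented directly from their definitions; the sample values below are illustrative, not data from the paper's experiments:

```python
import numpy as np

def rmse(x, x_p):
    """Root Mean Square Error."""
    x, x_p = np.asarray(x, float), np.asarray(x_p, float)
    return float(np.sqrt(np.mean((x - x_p) ** 2)))

def mape(x, x_p):
    """Mean Absolute Percentage Error (as a fraction; assumes x has no zeros)."""
    x, x_p = np.asarray(x, float), np.asarray(x_p, float)
    return float(np.mean(np.abs((x - x_p) / x)))

def mae(x, x_p):
    """Mean Absolute Error."""
    x, x_p = np.asarray(x, float), np.asarray(x_p, float)
    return float(np.mean(np.abs(x - x_p)))

true_vals = [2.0, 4.0, 8.0]
pred_vals = [2.0, 5.0, 6.0]
```

For these sample values the absolute errors are 0, 1, and 2, giving MAE = 1.0, RMSE = $\sqrt{5/3}$, and MAPE = $(0 + 0.25 + 0.25)/3$.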

6.3. Analysis of Results

6.3.1. Analysis of EALL

The dam deformation sequence is predicted using the EALL through reasonably designed trials, and the efficacy is validated by comparing the experimental results with those of the ARIMA model, the LSTM model, and the conventional ARIMA-LSTM combination model. The construction and prediction process of EALL is as follows:
(1) EEMD decomposition of the deformation sequence: the dam deformation sequence is decomposed by EEMD. First, the EEMD parameters are set: the number $I$ of ensemble trials with added Gaussian white noise is set to 100, and the standard deviation of the added Gaussian white noise is set to 0.05 times that of the original sequence. The resulting EEMD decomposition is shown in Figure 6. The original deformation sequence is decomposed into nine IMF components $imf_1(t), imf_2(t), \ldots, imf_9(t)$ and a trend component $r(t)$.
Components $imf_1(t)$ through $imf_5(t)$ have high frequencies and show irregular, fluctuating characteristics, belonging to the nonlinear part of the deformation sequence; $imf_6(t)$ through $imf_9(t)$ have low frequencies and show a clear trend, so it can be considered that temperature and aging are the main factors causing this part of the deformation, which therefore belongs to the linear part of the sequence.
(2) Judging the frequency of the deformation sequence: after the deformation sequence is decomposed into nine IMF components, the zero-crossing rate approach is used to determine the frequency of each component. According to experience with dam deformation sequence decomposition, 10% is a suitable threshold for separating high- and low-frequency components [29].
This standard is used in this study to define $imf_1(t), \ldots, imf_5(t)$ as high-frequency components and $imf_6(t), \ldots, imf_9(t)$ as low-frequency components.
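The zero-crossing-rate classification with the 10% threshold can be sketched as follows; the two sine waves are synthetic stand-ins for a fast and a slow IMF component:

```python
import numpy as np

def zero_crossing_rate(x):
    """Fraction of consecutive sample pairs whose signs differ."""
    x = np.asarray(x, float)
    return float(np.mean(np.sign(x[:-1]) != np.sign(x[1:])))

def is_high_frequency(imf, threshold=0.10):
    """Classify an IMF as high frequency when its zero-crossing
    rate exceeds the 10% standard used in the text."""
    return zero_crossing_rate(imf) > threshold

t = np.linspace(0, 1, 1000, endpoint=False)
fast = np.sin(2 * np.pi * 80 * t)   # ~160 crossings over 1000 samples
slow = np.sin(2 * np.pi * 2 * t)    # ~4 crossings over 1000 samples
```

The fast component's rate (~0.16) exceeds the threshold and is routed to the LSTM branch, while the slow component (~0.004) falls below it and is routed to the ARIMA branch.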
(3) Prediction of low-frequency series by the ARIMA model: the ARIMA model is used to predict the low-frequency series $imf_6(t), imf_7(t), \ldots, imf_9(t)$.
The procedure is the same for each sequence; $imf_7(t)$ is used as the example in this section. The sequence is first subjected to the ADF test; since it is found to be non-stationary, differencing is applied. The sequence becomes stationary after three differencing operations, so the differencing order $d$ is 3. Next, the order of the ARIMA model is determined: with $d$ already fixed, the AIC criterion is used to narrow the ranges of the autoregressive order $p$ and the moving-average order $q$, and the autocorrelation function (ACF) and partial autocorrelation function (PACF) plots are used to determine their values. The parameters $p$, $d$, and $q$ are determined experimentally as $\mathrm{ARIMA}(1, 3, 2)$. The model is then trained to forecast the IMF7 sequence, and its performance is assessed according to the evaluation criteria. The evaluation results are shown in Table 1.
According to Table 1, the values of RMSE, MAPE, and MAE are relatively small, indicating that ARIMA predicts the IMF7 sequence well and meets the experimental requirements. Following the same experimental steps, the remaining low-frequency components $imf_6(t)$, $imf_8(t)$, and $imf_9(t)$ are predicted in turn, and the prediction results of all low-frequency series are obtained.
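The $d = 3$ differencing used to stationarize the sequence is invertible, which matters when mapping ARIMA forecasts back to the original scale. A minimal NumPy sketch of the transform and its inverse (the sample series is illustrative, not the IMF7 data):

```python
import numpy as np

def difference(x, d):
    """Apply d-th order differencing, keeping the d leading values
    needed to invert the transform."""
    x = np.asarray(x, float)
    heads = []
    for _ in range(d):
        heads.append(x[0])
        x = np.diff(x)
    return x, heads

def inverse_difference(dx, heads):
    """Reconstruct the original series from its d-th order difference."""
    dx = np.asarray(dx, float)
    for h in reversed(heads):
        dx = np.concatenate([[h], h + np.cumsum(dx)])
    return dx

x = np.array([1.0, 4.0, 9.0, 20.0, 37.0, 63.0])
dx, heads = difference(x, 3)
x_rec = inverse_difference(dx, heads)
```

Each level of differencing stores its first value, so the inverse pass can rebuild every intermediate series by cumulative summation, recovering the original series exactly.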
(4) LSTM model prediction of high-frequency series. The LSTM prediction model is constructed, and the sequence IMF3 is used as the training and prediction sequence. The specific process is as follows:
Step 1: Determine the input sequence of the LSTM model and the feature sequence composed of water level, temperature, and aging, and normalize them to construct the input dataset of the LSTM model.
Step 2: Determine the parameters of the LSTM model. Set the LSTM's parameters with $num$ (the number of hidden units) as 3, $\eta$ (the learning rate) as 0.001, and the training step size as 10; the original stochastic gradient descent technique is replaced with the Adam algorithm. The training parameters of the LSTM model are set with $batch\_size$ as 10 and $epoch$ as 200; if the number of epochs is too small the network underfits, while if it is too large the network overfits. The batch size is likewise a crucial parameter that influences the model's training time and the speed at which the parameters are updated.
Step 3: LSTM realizes prediction. According to the parameters determined in the previous two steps, the LSTM model is constructed and trained to obtain the prediction results of the IMF3 sequence. Similarly, the LSTM prediction result of i m f 1 t , i m f 2 t , , i m f 5 t can be obtained.
The above steps are performed on the sequences $imf_1(t), imf_2(t), \ldots, imf_9(t)$, which are predicted by the ARIMA and LSTM models, respectively; the integrated results are shown in Figure 7.
As can be seen from Figure 7, after the original dam data are decomposed by EEMD, the different models selected to predict the different sequences perform well. To further judge the prediction effect of the model, RMSE, MAPE, and MAE are used to evaluate each sequence; the results are shown in Table 2.
As can be seen from Table 2, using the ARIMA and LSTM models to predict the decomposed sequences gives better overall performance, which plays to the strengths of each model and reduces the prediction error.
(5) LSTM builds the final prediction result. According to the idea of the combined model and the LSTM prediction process of the previous step, a second LSTM model is constructed. The input training data are the ARIMA prediction results $\hat{L}_6, \hat{L}_7, \ldots, \hat{L}_9$ of the low-frequency (linear) sequences, the LSTM prediction results $\hat{N}_1, \hat{N}_2, \ldots, \hat{N}_5$ of the high-frequency sequences, and the original sequence $x(t)$.
The historical relationship between them is learned through the LSTM model to obtain the final predicted value $y(t)$. The parameters of the LSTM model are initialized, the model is trained and used for prediction, and the final prediction result for the deformation sequence $x(t)$ is obtained, as shown in Figure 8a.
In order to evaluate the feasibility and prediction effect of this model more reasonably and objectively, the ARIMA model, LSTM model, and ARIMA-LSTM model are respectively used to predict the selected dam deformation sequence, and the prediction results of the four models are compared and analyzed with the EALL model, as shown in Figure 8b.
As seen in Figure 8b, single prediction models such as ARIMA and LSTM cannot account for both the linear and nonlinear components of the deformation sequence, resulting in substantial prediction errors. The ARIMA-LSTM model adopts a combination format, and its prediction results are clearly superior to those of a single model. Nevertheless, because it combines the two models' outputs by simple linear addition and does not decompose the deformation sequence before forecasting, its predictions still deviate somewhat from the actual values. The figure shows that the prediction of the EALL model proposed in this paper is closest to the original deformation sequence, outperforming the ARIMA, LSTM, and ARIMA-LSTM models. The prediction results were evaluated by the evaluation criteria, and the results are shown in Table 3.
It can be seen from Table 3 that the RMSE, MAPE, and MAE values of the proposed EALL are lower than those of the ARIMA, LSTM, and ARIMA-LSTM models, so the EALL predicts more accurately. Moreover, it mines the change regularity of the deformation sequence more precisely, realizing more accurate deformation prediction.

6.3.2. Analysis of IEALL

In this experiment, the comparison algorithms for IGWO are particle swarm optimization (PSO) and the original gray wolf optimization (GWO). The initial population of the IGWO algorithm is set to 30, and the maximum number of iterations is 500. Because the optimization process contains a certain randomness, the results inevitably show some deviation; therefore, every algorithm is run 10 times and the average of the 10 runs is taken as the final result, which reduces the deviation caused by randomness and evaluates the performance of the algorithms more accurately.
Experiment 1: Test and comparison of IGWO optimization performance.
In order to prove that the proposed IGWO algorithm has stronger optimization performance than classical optimization algorithms, this experiment uses the Sphere, Rastrigin, and Griewank benchmark functions; the results of the PSO, GWO, and IGWO algorithms are compared and analyzed, mainly from two aspects: global optimization accuracy and algorithm stability.
(1) Analysis and comparison of global optimization accuracy
In the experiment, the PSO, GWO, and IGWO algorithms were each used to optimize the Sphere, Rastrigin, and Griewank functions. The averaged results of 10 experiments are shown in Table 4.
According to Table 4, under the unimodal benchmark function Sphere, the IGWO algorithm found the optimal solution 0 in all 10 experiments; its optimization effect on the unimodal benchmark is therefore better than that of the PSO and GWO algorithms. Under the multimodal benchmark functions Rastrigin and Griewank, the results of IGWO are all equal to 0 or the closest to 0, showing that IGWO can successfully jump out of local optima to find the global optimum, again outperforming PSO and GWO. In general, the IGWO algorithm has better global optimization performance than PSO and GWO across the different benchmark functions.
(2) Stability analysis and comparison of algorithms
Algorithm stability is also an important evaluation criterion for optimization algorithms. The mean square errors over the 10 runs of the three algorithms are shown in Table 5.
Table 5 shows that under the unimodal Sphere test, the IGWO algorithm found the optimal solution in all 10 trials, and the mean square error was 0; its stability is therefore better than that of the PSO and GWO algorithms. Under the multimodal Rastrigin and Griewank tests, the mean square errors of IGWO are smaller than those of PSO and GWO. In general, across the different benchmark functions, the IGWO algorithm optimizes more stably than PSO and GWO.
Comparing the benchmark function optimization results of the IGWO, PSO, and GWO algorithms shows that the IGWO method has higher convergence accuracy, faster convergence speed, and stronger stability than PSO and GWO, and thus higher comprehensive optimization performance, which provides a useful foundation for the second experiment.
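For reference, the standard GWO baseline used in this comparison can be sketched compactly; this is the plain GWO, not the paper's IGWO variant, and the population size, iteration count, and bounds are illustrative choices rather than the experimental settings.

```python
import numpy as np

def gwo(f, dim, n_wolves=30, iters=200, lb=-5.0, ub=5.0, seed=0):
    """Minimal sketch of standard grey wolf optimization: each wolf moves
    toward the three best wolves (alpha, beta, delta) while the
    coefficient a decays linearly from 2 to 0."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(iters):
        fit = np.array([f(x) for x in X])
        alpha, beta, delta = X[np.argsort(fit)[:3]]   # three leaders (copies)
        a = 2.0 * (1.0 - t / iters)                   # exploration -> exploitation
        for i in range(n_wolves):
            pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A_coef = 2.0 * a * r1 - a
                C_coef = 2.0 * r2
                pos += leader - A_coef * np.abs(C_coef * leader - X[i])
            X[i] = np.clip(pos / 3.0, lb, ub)         # average of the three pulls
    fit = np.array([f(x) for x in X])
    return X[np.argmin(fit)]

sphere = lambda x: float(np.sum(np.asarray(x) ** 2))  # unimodal test function
best = gwo(sphere, dim=5)
```

On the unimodal Sphere function the pack contracts around the global minimum at the origin as `a` decays, which is the convergence behavior the accuracy comparison above measures.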
Experiment 2: Test and verification of IEALL optimization performance.
The IGWO algorithm is used to optimize the LSTM parameters in the EALL. To verify that the optimized parameters are optimal, the IEALL model is constructed; the current parameter is varied and fed into the model for prediction while the other parameters are held constant, and the RMSE of the model is recorded for each parameter value. The parameters to be optimized are $epoch$, $batch\_size$, $num$, and $\eta$. To verify the impact of each optimized parameter on model accuracy, the parameters are experimented on and analyzed one by one: first $epoch$, $batch\_size$, $num$, and $\eta$ are initialized, then IGWO is used to optimize each parameter. The specific experimental procedure is as follows.
(1) $epoch$, $batch\_size$, $num$, and $\eta$ are optimized by the IGWO algorithm: $epoch$ is initialized to 10 with range [10, 200], $batch\_size$ to 1 with range [1, 100], $num$ to 2 with range [2, 50], and $\eta$ to 0.001 with range [0.001, 0.01]. IGWO optimizes the current parameter within its range while the other three parameters are kept unchanged. The experimental results give $epoch = 153$, $batch\_size = 36$, $num = 13$, and $\eta = 0.0077$, which are the parameters selected for the EALL in this section.
(2) To verify the effectiveness of the parameters after the IGWO optimization, the trend of the RMSE values of the predicted results of the EALL under the change of the current parameters is calculated by keeping the other three parameters unchanged. The experimental results are shown in Figure 9.
As can be seen from Figure 9a, when other parameters remain unchanged, the RMSE value of EALL model first decreases with the increase of training times. When e p o c h = 153 , the model RMSE value reaches the minimum value. However, with the continuous increase of training times, the model slowly enters the state of overfitting, and the RMSE value gradually increases.
Batch size is the number of samples fed into the network at each training step. When it is too small, convergence is slow and less stable, and the model easily falls into a local optimum; when it is too large, problems such as overshooting the minimum during gradient descent arise, reducing the accuracy of the model. As can be seen from Figure 9b, the minimum is obtained at $batch\_size = 36$.
The number of hidden-layer units also influences the prediction results. As Figure 9c shows, as the number of hidden units increases, the internal structure of the model becomes increasingly complex, which is prone to problems such as overfitting and long training times, causing the RMSE of the model to increase gradually; the RMSE is smallest at $num = 13$.
The learning rate determines whether the objective can converge to the global minimum, and how fast. When the learning rate is too small, convergence is slow; when it is too high, the model overfits or fails to converge. As Figure 9d shows, the RMSE first decreases and then increases as the learning rate grows, reaching its minimum at $\eta = 0.0077$.
The above verification experiments show that the IGWO algorithm can effectively optimize the parameters of the EALL, thereby improving the predictive accuracy of the model. The optimum is reached at $epoch = 153$, $batch\_size = 36$, $num = 13$, and $\eta = 0.0077$, which are set as the optimal parameters obtained by IGWO optimization.
Experiment 3: IEALL prediction results and comparison.
The optimal parameters obtained by IGWO optimization are substituted into the EALL to obtain the prediction results of the IEALL model. The results are compared with those of the EALL in Figure 10.
As can be seen from the figure, the predicted values of the IEALL model are closer to the actual values and the predicted effect is better. The results of the evaluation of the prediction effects of the two models are shown in Table 6.
From Table 6, the RMSE, MAPE, and MAE values of the IEALL model are closer to 0 than those of EALL, from which it can be judged that the IEALL model is more effective than EALL in dam deformation series prediction.

6.3.3. Analysis of STAGCN-IEALL

(1)
Parameter setting
The basic parameters of the experiment are: time dimension $T$ of 2409, number of nodes $N$ of 5, measurement-point feature dimension $D$ of 5, and prediction horizon $K$ of 230. The hyperparameters are set mainly by grid search, keeping the other hyperparameters constant while determining each parameter: $batch\_size$ is taken from the set [8, 16, 32, 64, 128, 256], $\eta$ from the set [0.0001, 0.0005, 0.001, 0.005, 0.01], and the optimizer from [Adam, SGD, Adagrad]. The results are obtained through multiple experiments on the validation set; the selected $batch\_size$ is 128, $\eta$ is 0.001, and the optimizer is Adam. All reported results are averages over multiple runs to reduce experimental error.
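The grid search described above can be sketched with `itertools.product`. The `validation_rmse` function here is a hypothetical stand-in for a full train-and-validate run of STAGCN-IEALL; its return values are illustrative, chosen only so the sketch demonstrates selecting the minimum-RMSE configuration.

```python
import itertools

def validation_rmse(batch_size, lr, optimizer):
    """Hypothetical stand-in: in practice this trains STAGCN-IEALL with the
    given hyperparameters and returns the validation RMSE."""
    scores = {('Adam', 128, 0.001): 0.21}   # illustrative placeholder value
    return scores.get((optimizer, batch_size, lr), 0.50)

# Exhaustive search over the candidate sets listed in the text.
grid = itertools.product([8, 16, 32, 64, 128, 256],
                         [0.0001, 0.0005, 0.001, 0.005, 0.01],
                         ['Adam', 'SGD', 'Adagrad'])
best = min(grid, key=lambda p: validation_rmse(*p))
batch_size, lr, optimizer = best
```

The full grid has 6 × 5 × 3 = 90 configurations; evaluating each once and taking the minimum mirrors the "hold others constant, vary one" procedure when the evaluation is cheap enough to enumerate.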
(2)
Baseline methods
In order to verify more comprehensively the advantages of the method proposed in this section for predicting the dam deformation sequence, the following prediction methods are used for comparative experiments:
    • ARIMA-LSTM: A dam deformation prediction model combining the autoregressive integrated moving average model with a long short-term memory network; it has stronger prediction ability than either single model.
    • EALL: The algorithm proposed in Section 4. Building on the common dam deformation combination model, it decomposes the deformation sequence with EEMD to exploit the respective prediction strengths of the ARIMA and LSTM models; a further LSTM model then learns the relationship between the two sets of predicted values to produce the final prediction.
    • IEALL: Based on EALL, IGWO is used to optimize the two LSTM networks in EALL so that they reach their best training parameters, thus improving the prediction accuracy.
    • ST-ARIMA [21]: The spatio-temporal autoregressive moving average model. By building a spatial weight matrix and constructing a spatio-temporally stationary sequence through adjacency weighting and inverse distance weighting, it has achieved good results in multi-point prediction.
    • FC-LSTM [30]: A prediction model that uses fully connected LSTM layers in both the encoder and the decoder.
    • DCRNN [31]: The diffusion convolutional recurrent neural network, which models the prediction task with random walks on a graph and has achieved good prediction results.
    • ST-GCN [32]: Spatio-temporal graph convolutional networks, which combine graph convolutional layers with temporal convolution layers to make predictions.
Of these, the first three are single-point prediction methods that consider only the time series; the last four are multi-point prediction methods that also consider spatial correlation.
(3) Analysis of experimental results
Experiment 1: Prediction results and analysis of each model.
The deformation series are predicted with the seven models above and with the STAGCN-IEALL model proposed in this paper. The single-point prediction models use data from a single measurement point, while the multi-point models combine the spatially distributed deformation data of multiple measurement points, to predict the values of one measurement point over 230 days. Each reported result is the average of 10 experiments, to minimize the error caused by randomness; the final results are shown in Table 7.
Table 7 shows that the single-point prediction models ARIMA-LSTM, EALL, and IEALL perform worst, because these three models ignore spatial features. Since ST-ARIMA, FC-LSTM, and DCRNN account for the spatial association among multiple measurement points, they outperform the three single-point models. ST-GCN uses a graph convolution model to fully mine the spatial correlation between measurement points and achieves better results than the traditional spatio-temporal models. STAGCN-IEALL, proposed in this paper, uses GCN to extract spatial features and IEALL to extract temporal features; on this basis, a spatio-temporal attention mechanism is introduced to fully consider the dynamic changes between measurement points, ensuring that the model learns both the dynamic correlation between different measurement points at any given time and the dynamic dependence of any measurement point across different times.
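The spatial attention idea can be sketched minimally as follows (NumPy; this is an illustrative similarity-plus-softmax construction, not the paper's exact formulation, and the N = D = 5 dimensions match the experiment's setup):

```python
import numpy as np

def spatial_attention(x):
    """Minimal sketch of a spatial attention map: for node features x of
    shape (N, D), score every pair of measurement points by similarity and
    normalize each row with softmax, giving an (N, N) matrix of dynamic
    inter-point weights."""
    scores = x @ x.T                             # pairwise similarity, (N, N)
    scores -= scores.max(axis=1, keepdims=True)  # subtract row max for stability
    weights = np.exp(scores)
    return weights / weights.sum(axis=1, keepdims=True)

# Five measurement points with five features each (N = D = 5).
rng = np.random.default_rng(0)
att = spatial_attention(rng.normal(size=(5, 5)))
print(att.shape)                          # (5, 5)
print(np.allclose(att.sum(axis=1), 1.0))  # True: each row is a weight distribution
```

Each row of the resulting matrix weights the influence of every other measurement point on one point, which is what allows the model to capture dynamic inter-point correlation.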
Compared with all the baseline models, STAGCN-IEALL achieves the RMSE, MAE, and MAPE values closest to 0, which are 16.06%, 14.72%, and 21.19% lower, respectively, than those of ST-GCN. The proposed STAGCN-IEALL model therefore has the best prediction performance.
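The quoted reductions can be reproduced directly from the values in Table 7; for example, for RMSE and MAE:

```python
def pct_reduction(baseline, value):
    """Percentage by which `value` improves on `baseline` (lower is better)."""
    return 100 * (baseline - value) / baseline

# ST-GCN vs. STAGCN-IEALL, RMSE and MAE from Table 7.
print(round(pct_reduction(0.3617, 0.3036), 2))  # ≈ 16.06
print(round(pct_reduction(0.1529, 0.1304), 2))  # ≈ 14.72
```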
Experiment 2: Comparative analysis of predictive results under different prediction lengths of each model.
To verify more comprehensively the prediction performance and stability of the STAGCN-IEALL spatio-temporal dam deformation prediction model, this experiment uses the eight models above to predict the deformation series over different prediction durations: the time dimension T in each model is varied continuously, the three evaluation indicators are computed for each model, and the results are analyzed. The experimental results are shown in Figure 11.
Figure 11 shows that, among the three single-point prediction models, the IEALL model performs best. Because the single-point models do not consider the spatial correlation between measurement points, their prediction performance degrades steadily as the prediction duration increases. The four spatio-temporal prediction models, ST-ARIMA, FC-LSTM, DCRNN, and ST-GCN, outperform the single-point models because they consider the spatial relationships among multiple measurement points; their performance also declines as the prediction duration increases, but at a lower rate than that of the single-point models. This indicates that spatio-temporal prediction models that account for spatial factors are better suited to long-term deformation prediction tasks. STAGCN-IEALL, proposed in this paper, considers not only the spatio-temporal correlation but also the dynamic changes between measurement points, ensuring that the model learns the dynamic correlation between different measurement points at any given time and the dynamic dependence of any measurement point across different times, thereby improving prediction accuracy. As the figure shows, the model's advantage becomes more pronounced as the prediction duration increases: it consistently maintains the lowest RMSE, MAE, and MAPE values, and its error grows at a relatively stable rate. In conclusion, the STAGCN-IEALL model is superior to the other models.
Experiment 3: Intuitive comparison of prediction effect.
To compare the predicted results of the proposed model and the baseline models against the true values more intuitively, Figure 12 shows the prediction performance of three models, STAGCN-IEALL, IEALL, and ST-GCN, with the prediction duration set to 230 days.
As Figure 12 shows, the STAGCN-IEALL model proposed in this paper outperforms both the IEALL model and the classic spatio-temporal prediction model ST-GCN: it accurately predicts the trend of dam deformation and effectively reduces the prediction error.
Experiment 4: Analysis of ablation study.
The STAGCN-IEALL model proposed in this paper consists of four major components: a spatial attention mechanism, a temporal attention mechanism, a spatial feature extraction module, and a temporal feature extraction module. To further analyze the effectiveness of each module, this section removes one component at a time from the STAGCN-IEALL model and tests the performance of each variant experimentally. The variant models are: (1) -SA: the spatial attention module is removed; (2) -TA: the temporal attention module is removed on the basis of model (1); (3) -GCN: the GCN spatial feature extraction module is removed on the basis of model (2); (4) -IEALL: the IEALL temporal feature extraction module is removed on the basis of model (3), leaving a plain encoder-decoder model.
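The cumulative removal scheme above can be expressed as a set of configuration flags; the field names below are hypothetical, chosen only to mirror the four components named in the text:

```python
from dataclasses import dataclass

@dataclass
class ModelConfig:
    """Hypothetical per-variant switches for the four STAGCN-IEALL components."""
    spatial_attention: bool = True   # SA module
    temporal_attention: bool = True  # TA module
    gcn: bool = True                 # GCN spatial feature extraction
    ieall: bool = True               # IEALL temporal feature extraction

def make_variants():
    """Build the full model plus the four ablation variants, removing one
    module at a time, cumulatively, as described in the text."""
    return [
        ModelConfig(),                                             # full model
        ModelConfig(spatial_attention=False),                      # (1) -SA
        ModelConfig(spatial_attention=False,
                    temporal_attention=False),                     # (2) -TA
        ModelConfig(spatial_attention=False,
                    temporal_attention=False, gcn=False),          # (3) -GCN
        ModelConfig(spatial_attention=False, temporal_attention=False,
                    gcn=False, ieall=False),                       # (4) -IEALL: plain encoder-decoder
    ]
```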
The prediction results of the STAGCN-IEALL model with each module removed in turn are shown in Figure 13. Calculation shows that adding the IEALL temporal feature extraction module lowers the RMSE, MAPE, and MAE of the encoder-decoder model by 8.43%, 8.12%, and 11.86%, respectively: extracting the time-dimension features with the IEALL model effectively improves the prediction accuracy. Adding the GCN module on this basis decreases the RMSE, MAPE, and MAE by a further 15.97%, 17.38%, and 22.19%, respectively; it can be concluded that GCN's spatial feature extraction from the multi-point deformation data improves the model's capacity to express the spatial dimension, and the model's predictive ability increases significantly. The temporal attention module lowers the three indexes by a further 6.12%, 9.37%, and 12.39%, respectively, showing that accounting for the dynamic correlation between different moments can significantly increase the model's prediction accuracy. Finally, adding the spatial attention module reduces the three indexes by 7.78%, 5.93%, and 7.91%: by considering the dynamic correlation of different measurement points, the spatial attention module effectively improves prediction performance. It can therefore be concluded that the cooperation among the modules of the STAGCN-IEALL model effectively improves its prediction accuracy and achieves more accurate dam deformation prediction.

7. Conclusions

Dam deformation prediction is of great significance: accurate prediction supports effective evaluation of a dam's health status and helps managers formulate better management strategies. In this paper, a combination model method is proposed to improve the accuracy of dam deformation prediction by improving existing methods and introducing new prediction models. For the single-point scenario, an IEALL-based dam deformation prediction model is proposed to address the low prediction accuracy caused by the simple linear addition of sub-model results and the local optima caused by random parameter initialization. For the multi-point scenario, a dam deformation prediction algorithm based on spatio-temporal correlation and IEALL (STAGCN-IEALL) is proposed to capture the dynamic correlation between different points in the deformation sequence and the dynamic dependence on different points. In the comparison experiment across models, STAGCN-IEALL reduced RMSE, MAE, and MAPE by 16.06%, 14.72%, and 21.19%, respectively, compared with ST-GCN. In the comparison across prediction lengths, the STAGCN-IEALL model maintained the lowest RMSE, MAE, and MAPE values as the prediction length increased, with a relatively stable error growth rate. In the ablation experiment, adding the spatial attention module reduced the model's RMSE, MAPE, and MAE by 7.78%, 5.93%, and 7.91%, respectively. These experiments show that the proposed combination model can effectively predict dam deformation monitoring data and provides guidance and reference for solving practical engineering problems.

Author Contributions

Conceptualization, G.X. and Z.J.; methodology, Z.J.; software, Y.L.; validation, Y.L., Z.J. and Q.Z.; formal analysis, Z.J.; investigation, Y.L.; resources, Z.J.; data curation, Y.L.; writing—original draft preparation, Y.L.; writing—review and editing, C.W.; visualization, Z.J.; supervision, C.W.; project administration, G.X.; funding acquisition, G.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Water Resources Science and Technology Projects in Jiangsu Province and the National Key R&D Program of China, grant numbers 2018YFC0407106 and 2017065.

Data Availability Statement

Data generated and analyzed during this study are available from the corresponding author on request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gu, C.; Su, H.; Liu, H. Review on service risk analysis of dam engineering. J. Hydraul. Eng. 2018, 49, 10. [Google Scholar]
  2. Huaizhi, S.; Hu, J.; Zhongru, W. A study of safety evaluation and early-warning method for dam global behavior. Struct. Health Monit. 2012, 11, 269–279. [Google Scholar] [CrossRef]
  3. Ab, A.; Mkm, A.; Ds, B. Seepage and dam deformation analyses with statistical models: Support vector regression machine and random forest. Procedia Struct. Integr. 2019, 17, 698–703. [Google Scholar]
  4. Dai, B.; Gu, C.; Zhao, E.; Qin, X. Statistical model optimized random forest regression model for concrete dam deformation monitoring. Struct. Control Health Monit. 2018, 25, e2170. [Google Scholar] [CrossRef]
  5. Wei, B.; Yuan, D.; Xu, Z.; Li, L. Modified hybrid forecast model considering chaotic residual errors for dam deformation. Struct. Control Health Monit. 2017, 25, e2188. [Google Scholar] [CrossRef]
  6. Faris, H.; Aljarah, I.; Al-Betar, M.A.; Mirjalili, S. Grey wolf optimizer: A review of recent variants and applications. Neural Comput. Appl. 2018, 30, 413–435. [Google Scholar] [CrossRef]
  7. Dong, W.; Zhang, L.; Jin, Z.; Sun, W.; Gao, J. Prediction of the waterborne navigation density based on the multi-feature spatio-temporal graph convolution network. Chin. J. Internet Things 2020, 4, 78–85. [Google Scholar] [CrossRef]
  8. Gong, G.; An, X.; Mahato, N.K.; Sun, S.; Wen, Y. Research on Short-Term Load Prediction Based on Seq2seq Model. Energies 2019, 12, 3199. [Google Scholar] [CrossRef]
  9. Chen, Z. A novel deep learning method based on attention mechanism for bearing remaining useful life prediction. Appl. Soft Comput. 2020, 86, 105919. [Google Scholar] [CrossRef]
  10. Guo, H.R.; Wang, X.J.; Zhong, Y.X.; Peng, L.U. Traffic signs recognition based on visual attention mechanism. J. China Univ. Posts Telecommun. 2011, 18, 12–16. [Google Scholar] [CrossRef]
  11. Liu, Y.; Zhang, Q.; Song, L.; Chen, Y. Attention-based recurrent neural networks for accurate short-term and long-term dissolved oxygen prediction. Comput. Electron. Agric. 2019, 165, 104964. [Google Scholar]
  12. Ran, X.; Shan, Z.; Fang, Y.; Lin, C. An LSTM-Based Method with Attention Mechanism for Travel Time Prediction. Sensors 2019, 19, 861. [Google Scholar] [PubMed]
  13. Wang, F.; Huaizhi, S.; Jing, K. Dam safety monitoring model based on ARIMA-ANN. Eng. J. Wuhan Univ. 2010, 43, 585–588. [Google Scholar]
  14. Feng, L.L.; Li, X. Dam safety monitoring model based on neural network and time series. In Applied Mechanics and Materials; Trans Tech Publications Ltd.: Stafa-Zurich, Switzerland, 2014; pp. 543–547. [Google Scholar]
  15. Gang, S.; Zhang, Y.; Bao, F.; Qin, C. Stock prediction model based on particle swarm optimization LSTM. J. Beijing Univ. Aeronaut. Astronaut. 2019, 45, 2533–2542. [Google Scholar]
  16. Liang, Y.; Zhang, H. Ship Track Prediction Based on AIS Data and PSO Optimized LSTM Network. Int. Core J. Eng. 2020, 6, 23–33. [Google Scholar]
  17. Wei, T.; Pan, T. Short-term power load forecasting based on LSTM neural network optimized by improved PSO. J. Syst. Simul. 2021, 33, 1866. [Google Scholar]
  18. Yang, M. Temperature Prediction Based on Improved PSO-LSTM Neural Network. Mod. Inf. Technol. 2020, 4, 110–112. [Google Scholar]
  19. Ting, X.; Yong, Q.; Zhang, W.; Li, Q. A short-Term Traffic Flow Prediction Model Based on Spatio-Temporal Correlation. Comput. Eng. Des. 2019, 40, 501–507. [Google Scholar]
  20. Cheng, T.; Wang, J.Q.; Haworth, J.; Heydecker, B.; Chow, A. A Dynamic Spatial Weight Matrix and Localized Space-Time Autoregressive Integrated Moving Average for Network Modeling. Geogr. Anal. 2014, 46, 75–97. [Google Scholar] [CrossRef]
  21. Li, G.; Dai, W.; Yang, G.; Liu, B. Application of Space-Time Auto-Regressive Model in Dam Deformation Analysis. Geomat. Inf. Sci. Wuhan Univ. 2015, 40, 877–881. [Google Scholar]
  22. Yang, Z.; Dai, W.; Chen, B.; Shi, Q.; Li, L. The application of Kriging’s space-time auto-regressive model in deformation modeling. Sci. Surv. Mapp. 2018, 43, 6. [Google Scholar]
  23. Xiao, Y.; Tian, X.; Xiao, M. Tourism Traffic Demand Prediction Using Google Trends Based on EEMD-DBN. Engineering 2020, 12, 194–215. [Google Scholar] [CrossRef]
  24. Kao, Y.S.; Nawata, K.; Huang, C.Y. Predicting Primary Energy Consumption Using Hybrid ARIMA and GA-SVR Based on EEMD Decomposition. Mathematics 2020, 8, 1722. [Google Scholar] [CrossRef]
  25. Liu, X.; Xia, C.; Chen, Z.; Chai, Y.; Jia, R. A new framework for rainfall downscaling based on EEMD and an improved fractal interpolation algorithm. Stoch. Environ. Res. Risk Assess. 2020, 34, 1147–1173. [Google Scholar] [CrossRef]
  26. Zhang, G.P. Time series forecasting using a hybrid ARIMA and neural network model. Neurocomputing 2003, 50, 159–175. [Google Scholar] [CrossRef]
  27. Zhang, F. Research on Dam Safety Monitoring Model Based on Neural Network; Southeast University: Nanjing, China, 2016. [Google Scholar]
  28. Zhihao, H.E.; Jin, G.; Wang, Y. A novel grey wolf optimizer and its applications in 5G frequency selection surface design. Inf. Technol. Electron. Eng. 2022, 23, 16. [Google Scholar]
  29. Zhen, X.; Jin, Q. Application of regression models and time series to dam deformation monitoring. J. Hubei Norm. Univ. Nat. Sci. Ed. 2010, 30, 83–88. [Google Scholar]
  30. Yao, T.; Huang, T.; Zeng, X.; Wu, Z.; Zhang, J.; Luo, D.; Zhang, X.; Wang, Y.; Cheng, Z.; Li, X. Multimode waveguide analyses and design based on the FC-LSTM hybrid network. JOSA B 2022, 39, 2564–2572. [Google Scholar] [CrossRef]
  31. Mallick, T.; Balaprakash, P.; Rask, E.; Macfarlane, J. Graph-Partitioning-Based Diffusion Convolutional Recurrent Neural Network for Large-Scale Traffic Forecasting. Transp. Res. Rec. 2020, 2674, 473–488. [Google Scholar] [CrossRef]
  32. Tian, D.; Lu, Z.M.; Chen, X.; Ma, L.H. An attentional spatial temporal graph convolutional network with co-occurrence feature learning for action recognition. Multimed. Tools Appl. 2020, 79, 12679. [Google Scholar] [CrossRef]
Figure 1. Optimization diagram of composite model.
Figure 2. Overall framework diagram of IEALL.
Figure 3. The flowchart of IGWO to optimize the parameters of the LSTM model in EALL.
Figure 4. Overall framework diagram of STAGCN-IEALL.
Figure 5. Spatial attention mechanism.
Figure 6. Diagram of dam deformation sequence decomposed by EEMD.
Figure 7. Integration of forecast results of different IMF sequences.
Figure 8. Prediction results of the model. (a) Prediction results of the EALL model; (b) comparison of prediction results of various models.
Figure 9. Variation of the RMSE of the prediction results with training settings: (a) epoch vs. RMSE; (b) batch_size vs. RMSE; (c) num vs. RMSE; (d) η vs. RMSE.
Figure 10. Comparison of prediction results between IEALL and EALL.
Figure 11. Variation curves of the three evaluation indexes under different durations: (a) RMSE; (b) MAE; (c) MAPE.
Figure 12. Comparison of prediction effects.
Figure 13. Comparison of model predictions of each variant.
Table 1. Evaluation results of IMF7 predictive value.

Model          RMSE          MAPE     MAE
ARIMA(1,3,2)   9.32 × 10−5   0.7138   8.26 × 10−5
Table 2. Comparison of evaluation indicators of IMF sequences.

           Training Dataset                        Test Dataset
Sequence   RMSE          MAPE     MAE              RMSE          MAPE      MAE
IMF1       3.71 × 10−2   8.4263   2.63 × 10−2      6.11 × 10−1   16.9354   4.23 × 10−1
IMF2       9.41 × 10−2   4.9132   5.26 × 10−2      2.37 × 10−1   10.4134   1.52 × 10−1
IMF3       2.17 × 10−3   2.8471   1.42 × 10−3      1.92 × 10−3   6.2927    1.25 × 10−3
IMF4       8.05 × 10−3   1.0225   6.12 × 10−3      1.58 × 10−3   3.6493    1.27 × 10−3
IMF5       1.73 × 10−4   0.5221   1.03 × 10−4      2.21 × 10−4   1.4151    1.87 × 10−4
IMF6       4.75 × 10−4   0.8835   3.92 × 10−4      5.56 × 10−4   1.9274    4.94 × 10−4
IMF7       1.53 × 10−4   0.0424   1.64 × 10−4      9.32 × 10−5   0.7138    8.26 × 10−5
IMF8       1.46 × 10−4   0.1172   1.02 × 10−4      1.45 × 10−4   1.0431    1.29 × 10−4
IMF9       1.32 × 10−4   0.3155   1.26 × 10−4      6.79 × 10−4   2.3458    7.61 × 10−4
Table 3. Evaluation of prediction results of each model.

Model        RMSE     MAPE      MAE
ARIMA        8.7315   18.594    5.4372
LSTM         5.5331   11.5429   3.7613
ARIMA-LSTM   1.5762   6.5841    1.0961
EALL         0.9181   4.2325    0.7386
Table 4. Average results of convergence accuracy of different models.

Benchmark Function   PSO      GWO           IGWO
Sphere               0.1732   2.21 × 10−6   0
Griewank             0.0741   9.65 × 10−5   0
Rastrigin            0.0431   5.52 × 10−5   2.01 × 10−9
Table 5. Mean square error test results of different models.

Benchmark Function   PSO       GWO           IGWO
Sphere               0.4339    8.13 × 10−7   0
Griewank             3.8301    2.19 × 10−4   2.92 × 10−5
Rastrigin            15.1984   0.8153        9.21 × 10−3
Table 6. The evaluation results of the IEALL and EALL models.

Model   RMSE     MAPE     MAE
EALL    0.9181   4.2325   0.7386
IEALL   0.7113   2.4736   0.9421
Table 7. Prediction results of different models.

Model          RMSE     MAE      MAPE
ARIMA-LSTM     1.1762   1.0961   6.5841
EALL           0.9181   0.7386   4.2325
IEALL          0.7113   0.5127   2.4736
ST-ARIMA       0.4698   0.3492   1.1421
FC-LSTM        0.5019   0.3973   1.3409
DCRNN          0.4236   0.3082   0.8521
ST-GCN         0.3617   0.1529   0.5327
STAGCN-IEALL   0.3036   0.1304   0.4158
Share and Cite

Xu, G.; Lu, Y.; Jing, Z.; Wu, C.; Zhang, Q. IEALL: Dam Deformation Prediction Model Based on Combination Model Method. Appl. Sci. 2023, 13, 5160. https://doi.org/10.3390/app13085160
