Article

Multi-Step Prediction of TBM Tunneling Speed Based on Advanced Hybrid Model

1 Kunming Engineering Corporation Limited of POWERCHINA, Kunming 650051, China
2 School of Water Conservancy, North China University of Water Resources and Electric Power, Zhengzhou 450046, China
3 Henan Provincial Key Laboratory of Hydrosphere and Watershed Water Security, Zhengzhou 450046, China
4 Zhongzhou Water Holding Co., Ltd., Zhengzhou 450000, China
5 Bei Fang Investigation Design & Research Co., Ltd., Tianjin 300000, China
* Author to whom correspondence should be addressed.
Buildings 2024, 14(12), 4027; https://doi.org/10.3390/buildings14124027
Submission received: 14 November 2024 / Revised: 5 December 2024 / Accepted: 9 December 2024 / Published: 18 December 2024
(This article belongs to the Section Building Structures)

Abstract: The accurate prediction of tunneling speed in tunnel boring machine (TBM) construction is the basis for the timely adjustment of the operating parameters of TBM equipment to ensure safe and efficient tunneling. In this paper, a multi-step prediction model of TBM tunneling speed based on the EWT-ICEEMDAN-SSA-LSTM hybrid model is proposed. Firstly, four datasets were selected under different geological conditions, and the original data were preprocessed using the binary discriminant function and the 3σ principle; secondly, the preprocessed data were decomposed using the empirical wavelet transform (EWT) to obtain several subseries and a residual series; then, Improved Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (ICEEMDAN) was used to further decompose the residual sequence. Finally, the subsequences were fed into a Long Short-Term Memory (LSTM) network optimized by the Sparrow Search Algorithm (SSA) for multi-step training and prediction, and the predicted results of each subsequence were added up to obtain the final result. A comparison with existing models showed that the performance of the prediction method proposed in this paper is superior to that of other models. For the four datasets, the average accuracy from the first-step prediction to the fifth-step prediction reached 99.06%, 98.99%, 99.07%, and 99.03%, respectively, indicating that the proposed method has high multi-step prediction performance and generalization ability. In this sense, this paper provides a reference for other projects.

1. Introduction

Because of its high efficiency, good construction quality, high safety, and low economic cost [1], TBM construction has become the primary choice for constructing long-distance rock tunnels in water conservancy, mining, transportation, and other projects. The TBM construction process is susceptible to changes in the surrounding geology and rock conditions and requires the timely adjustment of TBM performance parameters according to changes in tunnel rock types. At the same time, with the development of smart construction and the establishment of large information platforms, the intelligent control and, eventually, unmanned operation of TBMs will inevitably be realized. Accordingly, the accurate prediction of TBM tunneling speed has become one of the most important issues.
In recent decades, prediction models for excavation speed have emerged one after another and can be roughly divided into two categories: theoretical models and empirical models. The former includes models such as the disc cutter rock-breaking performance prediction model [2] and the improved CSM (Colorado School of Mines) model [3]; the latter includes probabilistic models [4] and NTNU (Norges teknisk-naturvitenskapelige universitet) regression prediction models [5]. Since many factors affect TBM tunneling performance and the underlying mechanisms are complex, these theoretical and empirical methods cannot clearly and accurately describe the relationship between TBM tunneling performance and its influencing factors, resulting in poor applicability in complex geological environments and operations.
With the rapid development of artificial intelligence technology in recent years, artificial neural network technology with powerful nonlinear mapping capabilities has been applied to TBM construction prediction. For example, BP (Back Propagation) models are used to predict TBM tunneling speed [6,7]; SA-BP (Simulated Annealing–Back Propagation) models are used to predict TBM rock parameters [8]; IPSO-BP (Improved Particle Swarm Optimization–Back Propagation) models are used to predict tunneling parameters of stable sections based on data from tunneling ascents [9]; GWO-GRNN (Grey Wolf Optimizer–Generalized Regression Neural Network) models are used to predict TBM performance under different TBM operating parameters and geological conditions [10]; and LSTM models are used to predict tunnel lithology [11] and TBM tunneling speed [12]. It has been shown that three neural network methods, namely RNN (Recurrent Neural Network), LSTM, and GRU (Gated Recurrent Unit), outperform traditional nonlinear regression algorithms in predicting TBM tunneling parameters [13]. Moreover, regression trees and artificial intelligence algorithms are used to evaluate TBM performance [14]; PSO-ANN (Particle Swarm Optimization–Artificial Neural Network) is used to estimate the TBM advance rate [15]; support vector machines are used to predict the performance of TBMs [16]; fuzzy logic modeling is employed to predict TBM penetration rates [17]; and Particle Swarm Optimization models are likewise used to predict TBM penetration rates [18]. Three different models are used to evaluate TBM performance [19]. Based on a novel rock classification system for tunnel boring machines, the prediction of TBM construction speed is performed [20]. CNN-LSTM (Convolutional Neural Network–Long Short-Term Memory) is used to predict TBM tunneling speed [21]. The Imperialist Competitive Algorithm (ICA) and quantum fuzzy logic are used to predict TBM boring performance [22].
Machine learning models are also employed to predict TBM excavation speed [23]. However, all these methods still have limitations. First, the above models cannot fully exploit TBM tunneling history information, which limits their prediction performance; in addition, they usually perform prediction directly on raw, complex data, which makes it difficult to describe and predict the change patterns of complex tunneling parameters. Second, they can only use a few signals to predict the tunneling speed one step ahead, leaving insufficient lead time for adjustment and limiting their practical application. Therefore, predicting long-term changes in future tunneling speed is of great importance for safety improvement. Many in-depth studies have been conducted on long-term prediction problems in other fields, achieving a series of fruitful results [24,25,26,27,28,29,30,31,32,33,34,35]. For the long-term prediction of TBM construction, some significant findings have also been achieved, including the prediction of TBM cutter torque using the AHDM (adaptive hierarchical decomposition-based method) multi-step prediction algorithm [36] and the multi-step prediction of cutter torque using the VMD-EWT-LSTM (Variational Mode Decomposition–Empirical Wavelet Transform–Long Short-Term Memory) model [37]. However, there is a lack of research on the multi-signal long-term prediction of TBM tunneling speed, and with current models it is not possible to realize the intelligent control of TBMs early enough to better guide subsequent TBM construction.
Therefore, to fill the gap in this field and predict long-term tunneling speed as early as possible, a multi-step prediction method is proposed based on the hybrid EWT-ICEEMDAN-SSA-LSTM model for TBM tunneling speed, and the main innovations and contributions are as follows:
(1) Data feature extraction based on the EWT-ICEEMDAN method. Firstly, the original data are pre-processed using the binary discriminant function and the 3σ principle. Subsequently, considering that the EWT and ICEEMDAN decomposition models display excellent characteristics in decomposing nonlinear signals, the pre-processed tunneling speed sequence is decomposed by EWT to obtain several sub-series and a residual sequence, which is then fed into the ICEEMDAN model for re-decomposition. In this way, the features of the original dataset can be extracted effectively. This method addresses the inability of the above models to fully utilize the historical information of TBMs and changes the situation in which they usually use raw, complex data to complete the prediction.
(2) Multi-step prediction based on SSA-LSTM method. Based on the excellent characteristics of the SSA-optimized LSTM model for extracting time series variation features, multi-step prediction of each data series is carried out and the individual predictions are superimposed to obtain the final result. Combined with engineering examples, the model is compared and analyzed with existing tunnel excavation speed prediction models, which verifies the effectiveness and accuracy of the model. This proposed method bridges the gap in the field of multi-signal long-term prediction of TBM tunnel boring speed and provides a better guide for the construction of subsequent projects.
Based on the above discussion and focusing on the importance of the accurate prediction of TBM tunneling speed, this paper proposes an advanced hybrid model to accurately predict the tunneling speed of a domestic water diversion tunnel, with an average accuracy of about 99%. It provides new ideas and directions for research in the field of TBM tunneling performance prediction. In this sense, it can serve as a scientific basis and technical support for the planning, design, and construction of subsequent TBM tunneling projects, so as to better guide TBM construction and improve construction efficiency and quality.

2. Predictive Models

2.1. EWT

EWT, as a signal decomposition method [38,39], combines the adaptivity of empirical mode decomposition (EMD) with the rigorous mathematical framework of wavelet analysis. With a more reliable mathematical basis, it can eliminate problems such as mode mixing. The empirical wavelet transform first performs an adaptive Fourier segmentation of the signal spectrum, from which different frequency-domain features can be obtained according to the signal. Then, the spectral segmentation boundaries are optimized with local maxima, after which the empirical scale function and the empirical wavelet function are constructed at each scale. The detail function, from which the IMFs are obtained, is the inner product of the empirical wavelet and the original function. For n > 0, the empirical wavelet function $\hat{\psi}_n(\omega)$ and the empirical scale function $\hat{\varphi}_n(\omega)$ are expressed by the following two equations, respectively.
$$
\hat{\psi}_n(\omega)=
\begin{cases}
1, & \omega_n+\tau_n \le |\omega| \le \omega_{n+1}-\tau_{n+1}\\[4pt]
\cos\!\left[\dfrac{\pi}{2}\beta\!\left(\dfrac{1}{2\tau_{n+1}}\left(|\omega|-\omega_{n+1}+\tau_{n+1}\right)\right)\right], & \omega_{n+1}-\tau_{n+1} \le |\omega| \le \omega_{n+1}+\tau_{n+1}\\[4pt]
\sin\!\left[\dfrac{\pi}{2}\beta\!\left(\dfrac{1}{2\tau_{n}}\left(|\omega|-\omega_{n}+\tau_{n}\right)\right)\right], & \omega_{n}-\tau_{n} \le |\omega| \le \omega_{n}+\tau_{n}\\[4pt]
0, & \text{otherwise}
\end{cases}
$$

$$
\hat{\varphi}_n(\omega)=
\begin{cases}
1, & |\omega| \le \omega_n-\tau_n\\[4pt]
\cos\!\left[\dfrac{\pi}{2}\beta\!\left(\dfrac{1}{2\tau_{n}}\left(|\omega|-\omega_{n}+\tau_{n}\right)\right)\right], & \omega_{n}-\tau_{n} \le |\omega| \le \omega_{n}+\tau_{n}\\[4pt]
0, & \text{otherwise}
\end{cases}
$$
where $\omega$ is the signal angular frequency; $\omega_n$ is the spectral segmentation boundary; $\tau_n$ is the width of the transition band at $\omega_n$; $\hat{\psi}_n(\omega)$ denotes the empirical wavelet function; $\hat{\varphi}_n(\omega)$ is the empirical scale function; and $\beta(x)$ is an arbitrary $C^k([0,1])$ function satisfying $\beta(0)=0$ and $\beta(1)=1$.
Here, $\beta(x) = x^4(35 - 84x + 70x^2 - 20x^3)$.
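As a quick numerical sanity check, the transition polynomial above can be verified to satisfy the properties required of β(x). The snippet below is an illustrative Python sketch (the paper's own code is MATLAB); it checks β(0) = 0, β(1) = 1, and the partition-of-unity identity β(x) + β(1 − x) = 1, which is what makes cos²+sin² sum to 1 across the transition bands.

```python
def beta(x: float) -> float:
    # Standard EWT transition polynomial: beta(x) = x^4 (35 - 84x + 70x^2 - 20x^3)
    return x**4 * (35 - 84 * x + 70 * x**2 - 20 * x**3)

# Boundary conditions required by the EWT construction
assert beta(0.0) == 0.0
assert beta(1.0) == 1.0

# Symmetry beta(x) + beta(1 - x) = 1 guarantees a smooth partition of unity
for x in [0.1, 0.25, 0.5, 0.9]:
    assert abs(beta(x) + beta(1.0 - x) - 1.0) < 1e-12
```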
The detail coefficient W f e ( n , t ) is generated by the inner product of ψ n and the signal f ( t ) , which can be written as:
$$
W_f^{\varepsilon}(n,t)=\langle f,\psi_n\rangle=\int f(\tau)\,\overline{\psi_n(\tau-t)}\,d\tau=F^{-1}\!\left[\hat{f}(\omega)\,\overline{\hat{\psi}_n(\omega)}\right]
$$
The approximation coefficient $W_f^{\varepsilon}(0,t)$ is generated by the inner product of $\varphi_1$ and the signal $f(t)$, which can be written as:
$$
W_f^{\varepsilon}(0,t)=\langle f,\varphi_1\rangle=\int f(\tau)\,\overline{\varphi_1(\tau-t)}\,d\tau=F^{-1}\!\left[\hat{f}(\omega)\,\overline{\hat{\varphi}_1(\omega)}\right]
$$
where $\hat{f}(\omega)$ is the Fourier transform of $f(t)$; $\hat{\psi}_n(\omega)$ and $\hat{\varphi}_1(\omega)$ are the Fourier transforms of $\psi_n(t)$ and $\varphi_1(t)$, respectively; the overline denotes complex conjugation; and $\langle\cdot,\cdot\rangle$ denotes the inner product.
The resulting reconfiguration expression for the signal f ( t ) is:
$$
f(t)=W_f^{\varepsilon}(0,t)*\varphi_1(t)+\sum_{n=1}^{N}W_f^{\varepsilon}(n,t)*\psi_n(t)
=F^{-1}\!\left[\hat{W}_f^{\varepsilon}(0,\omega)\,\hat{\varphi}_1(\omega)+\sum_{n=1}^{N}\hat{W}_f^{\varepsilon}(n,\omega)\,\hat{\psi}_n(\omega)\right]
$$
After EWT processing, the signal f ( t ) is decomposed to obtain IMF single-component components f k ( t ) ( k = 1 , 2 , 3 , ) with frequencies ranging from low to high.
$$
f_0(t)=W_f^{\varepsilon}(0,t)*\varphi_1(t)
$$
$$
f_k(t)=W_f^{\varepsilon}(k,t)*\psi_k(t)
$$
where W f e ( 0 , t ) is the approximate function obtained from Equation (4); W f e ( k , t ) is the detail function obtained from Equation (3).
Pseudo-code for EWT (MATLAB sketch; the `ewt` call assumes MATLAB's Wavelet Toolbox, and the data file name is hypothetical):
%% EWT decomposition of the preprocessed tunneling-speed sequence
warning off; close all; clear;
load tbmSpeed.mat                 % preprocessed tunneling-speed series (hypothetical file)
[mra,cfs] = ewt(tbmSpeed);        % sub-series (modes) and transform coefficients

2.2. ICEEMDAN

The ICEEMDAN signal processing method, proposed by Colominas et al., is based on the CEEMDAN model [40,41,42,43]. It can effectively reduce the effect of residual noise in the modal components and alleviate the mode mixing and pseudo-mode problems that arise in the reconstruction and decomposition of the original signal. To be specific, $x$ is defined as the signal to be decomposed; $E_k(\cdot)$ denotes the operator that extracts the kth modal component generated by EMD; $N(\cdot)$ denotes the local mean of the signal it is applied to; and $w^{(i)}$ represents the ith realization of Gaussian white noise. The specific computational steps are as follows:
1. Add $I$ realizations of white noise $w^{(i)}$ to the original sequence to construct $x^{(i)} = x + \beta_0 E_1(w^{(i)})$, and obtain the first residual $R_1 = \langle N(x^{(i)})\rangle$ by averaging the local means over the realizations.
2. Calculate the first modal component: $d_1 = x - R_1$.
3. Continue to add white noise and calculate the second residual $R_2 = \langle N(R_1 + \beta_1 E_2(w^{(i)}))\rangle$ by local-mean averaging, which defines the second modal component $d_2 = R_1 - R_2$.
4. Similarly, calculate the kth residual $R_k = \langle N(R_{k-1} + \beta_{k-1} E_k(w^{(i)}))\rangle$ and modal component $d_k = R_{k-1} - R_k$.
5. Repeat until the residual can no longer be decomposed; all modal components and the final residual are then obtained.
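The bookkeeping of the five steps above can be sketched in a few lines. The Python toy below is illustrative only, not the authors' implementation: the EMD-based local-mean operator N(·) is replaced by a simple moving average, and E_k(w^(i)) is simplified to raw white noise. What it does preserve exactly is the recursion d_k = R_{k−1} − R_k, which guarantees that the modes and the final residual reconstruct the input.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_mean(sig, win=11):
    # Stand-in for N(.): a centered moving average instead of the EMD local mean
    kernel = np.ones(win) / win
    padded = np.pad(sig, win // 2, mode="edge")
    return np.convolve(padded, kernel, mode="valid")[:len(sig)]

def iceemdan_like(x, n_realizations=20, beta0=0.2, max_modes=4):
    # Steps 1-5 above: average local means over noisy realizations,
    # then peel off one mode per iteration as d_k = R_{k-1} - R_k.
    modes, residual = [], x.copy()
    for _ in range(max_modes):
        means = [local_mean(residual + beta0 * rng.standard_normal(len(x)))
                 for _ in range(n_realizations)]
        new_residual = np.mean(means, axis=0)   # R_k = <N(R_{k-1} + noise)>
        modes.append(residual - new_residual)   # d_k = R_{k-1} - R_k
        residual = new_residual
    return modes, residual

t = np.linspace(0.0, 1.0, 256)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
modes, res = iceemdan_like(x)
# By construction, modes plus final residual reconstruct the signal exactly
assert np.allclose(np.sum(modes, axis=0) + res, x)
```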
Pseudo-code for ICEEMDAN:
ecg = data;                       % input signal (here: tunneling-speed series)
%% decomposition parameters
Nstd = 0.2;                       % noise standard deviation ratio
NR = 500;                         % number of noise realizations
MaxIter = 5000;                   % maximum number of sifting iterations
%% ICEEMDAN decomposition
[modes] = iceemdan(ecg,Nstd,NR,MaxIter,1);
t = 1:length(ecg);
[a,b] = size(modes);              % a modes, each of length b
figure;
subplot(a+1,1,1);
plot(t,ecg);                      % original signal in the first row
ylabel('original')
set(gca,'xtick',[])
title('ICEEMDAN')
axis tight;
for i = 2:a+1
subplot(a+1,1,i);
plot(t,modes(i-1,:));
ylabel(['IMF' num2str(i-1)]);
set(gca,'xtick',[])
xlim([1 length(ecg)])
end

2.3. SSA

SSA is an optimization algorithm proposed by Jiankai Xue et al. [44,45,46], inspired by the foraging and anti-predation behavior of sparrows. It is usually applied to well-structured problems with explicit problem and constraint descriptions and a unique global optimum. In SSA, discoverers with better fitness values are prioritized to obtain food during the search process.
The discoverer's position is updated as follows:
$$
X_{i,j}^{t+1}=
\begin{cases}
X_{i,j}^{t}\cdot\exp\!\left(\dfrac{-i}{\alpha\cdot iter_{\max}}\right), & R_2<ST\\[6pt]
X_{i,j}^{t}+Q\cdot L, & R_2\ge ST
\end{cases}
$$
where $t$ is the current iteration number; $X_{i,j}^{t}$ denotes the position of the ith sparrow in the jth dimension; $\alpha\in(0,1]$ is a random number; $iter_{\max}$ is the maximum number of iterations; $R_2\in[0,1]$ and $ST\in[0.5,1]$ denote the warning value and the safety threshold, respectively; $Q$ is a random number obeying the standard normal distribution; and $L$ is a $1\times d$ matrix of ones.
The joiner's position is updated as follows:
$$
X_{i,j}^{t+1}=
\begin{cases}
Q\cdot\exp\!\left(\dfrac{X_{worst}^{t}-X_{i,j}^{t}}{i^{2}}\right), & i>\dfrac{n}{2}\\[6pt]
X_{P}^{t+1}+\left|X_{i,j}^{t}-X_{P}^{t+1}\right|\cdot A^{+}\cdot L, & \text{otherwise}
\end{cases}
$$
where $X_{P}$ is the current optimal position occupied by the discoverers; $X_{worst}$ refers to the current global worst position; and $A$ is a $1\times d$ matrix whose elements are randomly assigned 1 or $-1$, with $A^{+}=A^{T}(AA^{T})^{-1}$.
The vigilantes' position is updated as follows:
$$
X_{i,j}^{t+1}=
\begin{cases}
X_{best}^{t}+\beta\cdot\left|X_{i,j}^{t}-X_{best}^{t}\right|, & f_i>f_g\\[6pt]
X_{i,j}^{t}+K\cdot\left(\dfrac{\left|X_{i,j}^{t}-X_{worst}^{t}\right|}{(f_i-f_w)+\varepsilon}\right), & f_i=f_g
\end{cases}
$$
where $X_{best}$ is the current global optimal position; $\beta$ refers to a step-control parameter; $K\in[-1,1]$ denotes a random number; $f_i$ is the fitness value of the current individual sparrow; $f_g$ and $f_w$ are the current global best and worst fitness values, respectively; and $\varepsilon$ represents a small constant that avoids a zero denominator.
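The three update rules can be combined into a compact optimizer. The Python sketch below is illustrative, not the authors' implementation: the population size, bounds, producer/scout fractions, the sphere test function, and the added elitism are all choices of this sketch, and the pseudo-inverse A⁺ is approximated by A/d for simplicity.

```python
import numpy as np

rng = np.random.default_rng(1)

def sparrow_search(fobj, dim=5, pop=30, iters=300, lb=-5.0, ub=5.0,
                   n_producers=6, n_scouts=3, st=0.8):
    # Minimal SSA sketch: producers (discoverers), joiners, and vigilantes
    # follow the three position-update equations given above.
    X = rng.uniform(lb, ub, (pop, dim))
    fit = np.array([fobj(x) for x in X])
    g_best, g_fit = X[np.argmin(fit)].copy(), fit.min()   # simple elitism
    for t in range(1, iters + 1):
        order = np.argsort(fit)
        X, fit = X[order], fit[order]
        worst = X[-1].copy()
        r2, alpha = rng.random(), rng.random()
        for i in range(n_producers):                      # discoverers
            if r2 < st:
                X[i] = X[i] * np.exp(-(i + 1) / (alpha * iters))
            else:
                X[i] = X[i] + rng.standard_normal()       # Q * L
        for i in range(n_producers, pop):                 # joiners
            if i > pop // 2:
                X[i] = rng.standard_normal() * np.exp((worst - X[i]) / i**2)
            else:
                A = rng.choice([-1.0, 1.0], size=dim)
                X[i] = X[0] + np.abs(X[i] - X[0]) * A / dim   # A+ ~ A/d
        for i in rng.choice(pop, n_scouts, replace=False):    # vigilantes
            if fit[i] > fit[0]:
                X[i] = X[0] + rng.standard_normal() * np.abs(X[i] - X[0])
            else:
                X[i] = X[i] + rng.uniform(-1, 1) * (
                    np.abs(X[i] - worst) / ((fit[i] - fit[-1]) + 1e-50))
        X = np.clip(X, lb, ub)
        fit = np.array([fobj(x) for x in X])
        if fit.min() < g_fit:
            g_best, g_fit = X[np.argmin(fit)].copy(), fit.min()
    return g_best, g_fit

best_x, best_f = sparrow_search(lambda x: float(np.sum(x**2)))
assert best_f < 1.0   # converges toward the sphere function's minimum at 0
```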

2.4. LSTM

LSTM neural networks are well suited to processing multiple sets of time-series data [47,48,49,50]. LSTM compensates for the gradient explosion and gradient vanishing problems that occur when RNNs process long sequences. An LSTM is composed of an input layer, a hidden layer, and an output layer, and its cell contains three different types of gates, namely the input gate, forget gate, and output gate. The LSTM model is shown in Figure 1, and the gates are calculated as shown below:
$$
i_t=\sigma\!\left(w_i\left[h_{t-1},\,x_t\right]+b_i\right)
$$
$$
f_t=\sigma\!\left(w_f\left[h_{t-1},\,x_t\right]+b_f\right)
$$
$$
\tilde{c}_t=\tanh\!\left(w_c\left[h_{t-1},\,x_t\right]+b_c\right)
$$
$$
c_t=f_t\odot c_{t-1}+i_t\odot\tilde{c}_t
$$
$$
o_t=\sigma\!\left(w_o\left[h_{t-1},\,x_t\right]+b_o\right)
$$
$$
h_t=o_t\odot\tanh\!\left(c_t\right)
$$
where $i_t$, $f_t$, $c_t$ and $o_t$ denote the input gate, forget gate, cell state, and output gate, respectively; $w_i$, $w_f$, $w_c$ and $w_o$ represent the weight matrices; $b_i$, $b_f$, $b_c$ and $b_o$ are the corresponding bias vectors; $\sigma$ is the sigmoid activation function; $\tanh$ is the hyperbolic tangent activation function; $x_t$ is the input at time $t$; $h_t$ is the output at time $t$; $h_{t-1}$ is the output at time $t-1$; and $\tilde{c}_t$ refers to the candidate cell-state update.
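The six gate equations can be checked with a single NumPy forward step. This is an illustrative sketch only (the weight shapes, the 0.1 scaling, and the sequence length are arbitrary choices of this example, not the paper's configuration); it confirms that each gate acts on the concatenation [h_{t−1}, x_t] and that the output h_t = o_t ⊙ tanh(c_t) stays bounded.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, w, b):
    # One forward pass of the six equations above; each weight matrix
    # multiplies the concatenation [h_{t-1}, x_t].
    z = np.concatenate([h_prev, x_t])
    i_t = sigmoid(w["i"] @ z + b["i"])         # input gate
    f_t = sigmoid(w["f"] @ z + b["f"])         # forget gate
    c_tilde = np.tanh(w["c"] @ z + b["c"])     # candidate cell update
    c_t = f_t * c_prev + i_t * c_tilde         # new cell state
    o_t = sigmoid(w["o"] @ z + b["o"])         # output gate
    h_t = o_t * np.tanh(c_t)                   # new hidden state
    return h_t, c_t

rng = np.random.default_rng(0)
n_in, n_hid = 3, 4                             # toy sizes for illustration
w = {k: 0.1 * rng.standard_normal((n_hid, n_hid + n_in)) for k in "ifco"}
b = {k: np.zeros(n_hid) for k in "ifco"}
h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.standard_normal((5, n_in)):       # run a 5-step input sequence
    h, c = lstm_step(x, h, c, w, b)
# h_t = o_t * tanh(c_t) is always strictly inside (-1, 1)
assert h.shape == (n_hid,) and np.all(np.abs(h) < 1.0)
```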
Pseudo-code for SSA-LSTM:
% objective evaluated by SSA: x holds the four optimized hyperparameters
numHiddenUnits = round(x(1));
maxEpochs = round(x(2));
InitialLearnRate = x(3);
L2Regularization = x(4);
layers = [ ...
    sequenceInputLayer(numFeatures)
    lstmLayer(numHiddenUnits)
    fullyConnectedLayer(numResponses)
    regressionLayer];
options = trainingOptions('adam', ...
    'MaxEpochs',maxEpochs, ...
    'ExecutionEnvironment','cpu', ...
    'InitialLearnRate',InitialLearnRate, ...
    'GradientThreshold',1, ...
    'L2Regularization',L2Regularization, ...
    'Verbose',0);
% 'Plots','training-progress'
net = trainNetwork(XTrain,YTrain,layers,options);
PredictTrain = predict(net,XTrain,'ExecutionEnvironment','cpu');
PredictTest = predict(net,XTest,'ExecutionEnvironment','cpu');
mseTrain = mse(YTrain-PredictTrain);
mseTest = mse(YTest-PredictTest);
fitness = mseTrain + mseTest;      % fitness value returned to the sparrow search
disp(fitness)
end

2.5. Framework of EWT-ICEEMDAN-SSA-LSTM

Figure 2 displays the framework of the hybrid EWT-ICEEMDAN-SSA-LSTM model for the multi-step prediction of TBM tunneling speed. EWT decomposes the TBM tunneling speed time series into multiple simply varying subsequences, but the variation characteristics of its residual sequence remain complex. Therefore, the residual sequence is further decomposed using ICEEMDAN to reduce its complexity. Compared with the original TBM tunneling speed sequence, the decomposed sub-series exhibit good stability and regularity. LSTM has a powerful time-series prediction capability, and SSA can effectively optimize the internal parameters of the LSTM model, so that the SSA-LSTM model achieves good multi-step prediction of the tunneling speed. The framework consists of four main steps: data preprocessing, data decomposition, SSA optimization, and multi-step tunneling speed prediction.

3. Case Study

3.1. Project Overview

This project is a water diversion tunnel in China, with a total length of 13.84 km for the tunnel boring machine section. The maximum burial depth is 647 m; the average burial depth is 239 m; the radius of the excavation section is 4 m; and the length of the TBM is about 290 m. The machine weighs about 650 t, and its total installed power is about 2400 kW. The TBM tunneling construction period is about 21 months, with an average monthly progress of 531 m and a daily progress of 25-30 m. The TBM tunneling section is dominated by sodium dolomite gneiss, migmatitic gneiss, and biotite-augite migmatitic gneiss. Based on the surrounding rock parameters of the geological exploration report (stratigraphic lithology, attitude (strike and dip), saturated uniaxial compressive strength (MPa), type of rock structure, degree of rock integrity, and the angle between the strike of the rock layer and the tunnel axis) and the actually exposed surrounding rock types, the surrounding rock along the TBM tunneling section can be classified into classes III, IV, and V. Table 1 displays the classification of the surrounding rock categories, and Table 2 the main performance parameters of the TBM. Moreover, Figure 3 shows the geological profile of the TBM tunneling section.
During the data collection process, the TBM platform recorded data every minute (60 s), yielding roughly 43,000 records per month, each containing a total of 138 parameters. To better visualize the change curve of the tunneling speed, 600 sets of raw boring speed data were extracted from the TBM cloud platform: 200 sets under Class III surrounding rock, 200 sets under Class IV surrounding rock, and 200 sets under Class V surrounding rock. Figure 4 shows the variation law of the tunneling speed over a certain period under different geological conditions.

3.2. Data Acquisition and Pre-Processing

As can be seen in Figure 4, the TBM construction under different geological conditions experiences stoppage, jamming, and other construction conditions, resulting in the collection of many invalid data. This paper adopts the binary state discriminant function to preprocess these data.
$$
K=f(n)\cdot f(F)\cdot f(v)\cdot f(T)
$$
$$
f(x)=
\begin{cases}
1, & x\ne 0\\
0, & x=0
\end{cases}
$$
where $K$ is the state discriminant function; $n$ is the cutter speed; $F$ is the total thrust; $v$ is the tunneling speed; and $T$ is the cutter torque. When $K=0$, the data are judged to be invalid and deleted; when $K=1$, the data are valid and thus collected.
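In code, the discriminant amounts to keeping a record only when all four operating signals are non-zero. The Python sketch below is illustrative (the numerical parameter values in the usage lines are hypothetical, not from the project data):

```python
def f(x: float) -> int:
    # f(x) = 1 when x != 0, and f(x) = 0 when x = 0
    return 1 if x != 0 else 0

def state_discriminant(n: float, F: float, v: float, T: float) -> int:
    # K = f(n) * f(F) * f(v) * f(T): a record is valid only when cutter speed,
    # total thrust, tunneling speed, and cutter torque are all non-zero.
    return f(n) * f(F) * f(v) * f(T)

# Hypothetical readings for illustration
assert state_discriminant(5.8, 9500.0, 42.0, 1800.0) == 1   # normal tunneling: keep
assert state_discriminant(0.0, 9500.0, 42.0, 1800.0) == 0   # stoppage: delete
assert state_discriminant(5.8, 9500.0, 0.0, 1800.0) == 0    # jamming: delete
```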
At the same time, TBM construction is subject to many external factors and equipment failures, so the collected valid data still contain abnormal values. Here, the 3σ principle is used for rejection. The probability that the values are distributed in $(\mu-3\sigma,\,\mu+3\sigma)$ is 0.9973, where $\mu$ and $\sigma$ represent the mean and standard deviation, respectively. To be specific, the mean and standard deviation of the raw TBM data are calculated, and each value in the data column is checked for whether it deviates from the mean by more than three times the standard deviation. If it does, it is considered an outlier and deleted, and it is replaced by the average of its adjacent data points. Thus, the tunneling data in a steady state are obtained. Figure 5 shows the outliers before processing, and Figure 6 shows the data after outlier processing.
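The 3σ rejection and neighbour-average substitution can be sketched as follows (an illustrative Python snippet; the speed values are made up for demonstration and are not project data):

```python
import numpy as np

def remove_outliers_3sigma(v):
    # Flag values deviating from the mean by more than 3 standard deviations,
    # then replace each outlier with the average of its adjacent points.
    v = np.asarray(v, dtype=float).copy()
    mu, sigma = v.mean(), v.std()
    for i in np.where(np.abs(v - mu) > 3 * sigma)[0]:
        lo, hi = max(i - 1, 0), min(i + 1, len(v) - 1)
        v[i] = 0.5 * (v[lo] + v[hi])
    return v

# Hypothetical tunneling-speed series with one sensor spike at index 3
speeds = np.array([42.0, 43.5, 41.8, 300.0, 42.6, 43.1] + [42.0] * 50)
cleaned = remove_outliers_3sigma(speeds)
assert abs(cleaned[3] - 0.5 * (41.8 + 42.6)) < 1e-9   # spike replaced by neighbour mean
assert np.all(cleaned <= 50.0)                         # no outliers remain
```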
To test the prediction and generalization performance of the proposed model for TBM tunneling speed, the average of each hour was taken as one data point [51], and all data were continuous data covering nine months. Four consecutive datasets covering Class III, Class IV, and Class V surrounding rock were selected from the preprocessed TBM dataset, and each dataset contained 200 sets of tunneling speed data.

3.3. Data Decomposition and Parameter Setting

The data were first decomposed using EWT, with the number of wavelet filter decomposition layers determined by the adaptive method of Du et al. [52]. After repeated testing, the parameters were set as follows: the noise standard deviation ratio Nstd is 0.2; the number of noise additions NE is 500; and the maximum number of iterations MaxIter is 5000. The modal components and average decomposition errors for each dataset are given in Table 3. Figure 7 gives the decomposition results for dataset 1. As can be seen from Figure 7, the sequences decomposed by EWT have good regularity, but the residual sequence still shows a complex variation pattern. Therefore, the residual series were further decomposed by ICEEMDAN with the same parameters, and the decomposed IMF components are shown in Figure 8 below.

3.4. Multi-Step Prediction

Here, multi-step prediction means using the first 200 h of tunneling speed data to predict the subsequent 5 h of tunneling speed, as shown in Figure 9. In each of the four datasets, 70% of the data are used for training and 30% for testing. In the prediction process, the trend and cycle features depend on the parameters of the LSTM algorithm, which were tuned iteratively [53,54]. The specific parameters of LSTM are as follows: two hidden layers, H1 and H2, with 20 neurons each; the number of training epochs E is 100; and the learning rate is 0.005. In the SSA model, the population size is set to 30 and the maximum number of iterations to 50. The sparrow algorithm optimizes the LSTM hyperparameters H1, H2, E, and η within the search ranges [1, 100], [1, 100], [1, 100], and [0.001, 0.01], respectively. The experiments were run in MATLAB R2020b; the hardware is as follows: the CPU is an Intel Core i5-1135G7, the GPU is an MX450, the RAM is 512 G, and the operating system is Windows 11.
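The multi-step setup above amounts to a sliding-window construction: each training sample maps a window of past speed values to the next several values, providing the 1- to 5-step-ahead targets. The Python sketch below is illustrative (the input window length `n_in`, the helper name, and the stand-in series are assumptions of this sketch; only the 5-step horizon and the 70/30 split follow the paper):

```python
import numpy as np

def make_multistep_windows(series, n_in=10, n_out=5):
    # Each sample: n_in past values -> next n_out values (multi-step targets)
    X, Y = [], []
    for i in range(len(series) - n_in - n_out + 1):
        X.append(series[i:i + n_in])
        Y.append(series[i + n_in:i + n_in + n_out])
    return np.array(X), np.array(Y)

speed = np.arange(200, dtype=float)   # stand-in for one 200-point speed subseries
X, Y = make_multistep_windows(speed)
assert X.shape == (186, 10) and Y.shape == (186, 5)

n_train = int(0.7 * len(X))           # 70% training / 30% testing, as in the paper
assert n_train == 130
```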
To increase the prediction accuracy of the model, the raw data P P 1 , P 2 , , P m are normalized according to Equation (19) to obtain the training data sequence Q Q 1 , Q 2 , , Q m .
$$
Q_i=\frac{P_i-\min\limits_{1\le i\le m}P_i}{\max\limits_{1\le i\le m}P_i-\min\limits_{1\le i\le m}P_i}
$$
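The min-max normalization above maps the raw series into [0, 1]. A minimal Python sketch (the sample values are hypothetical):

```python
import numpy as np

def min_max_normalize(p):
    # Q_i = (P_i - min P) / (max P - min P)
    p = np.asarray(p, dtype=float)
    return (p - p.min()) / (p.max() - p.min())

# Hypothetical raw values for illustration
p = np.array([20.0, 35.0, 50.0, 65.0, 80.0])
q = min_max_normalize(p)
assert q.min() == 0.0 and q.max() == 1.0
assert abs(q[1] - 0.25) < 1e-12   # (35 - 20) / (80 - 20)
```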

3.5. Model Error Analysis

In this paper, three metrics are used to analyze and evaluate the predicted results: root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE). All three reflect the deviation of the predictions from the measured values: the smaller the RMSE and MAE, the more accurate the model, while MAPE expresses the relative accuracy of the model as a percentage. The calculation process is as follows.
$$
RMSE=\sqrt{\frac{1}{m}\sum_{i=1}^{m}\left(y_i-\hat{y}_i\right)^{2}}
$$
$$
MAE=\frac{1}{m}\sum_{i=1}^{m}\left|y_i-\hat{y}_i\right|
$$
$$
MAPE=\frac{1}{m}\sum_{i=1}^{m}\left|\frac{y_i-\hat{y}_i}{y_i}\right|\times 100\%
$$
where $y_i$ denotes the measured value; $\hat{y}_i$ is the predicted value; and $m$ is the length of the predicted sequence.
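The three error metrics can be computed directly from their definitions. A small Python sketch (the sample values are hypothetical, chosen so the results can be checked by hand):

```python
import numpy as np

def rmse(y, y_hat):
    return float(np.sqrt(np.mean((np.asarray(y) - np.asarray(y_hat)) ** 2)))

def mae(y, y_hat):
    return float(np.mean(np.abs(np.asarray(y) - np.asarray(y_hat))))

def mape(y, y_hat):
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
    return float(np.mean(np.abs((y - y_hat) / y)) * 100.0)

# Hypothetical measured vs. predicted values
y     = np.array([40.0, 50.0, 60.0, 80.0])
y_hat = np.array([42.0, 49.0, 57.0, 84.0])
assert abs(mae(y, y_hat) - 2.5) < 1e-12             # (2 + 1 + 3 + 4) / 4
assert abs(rmse(y, y_hat) ** 2 - 7.5) < 1e-9        # (4 + 1 + 9 + 16) / 4
assert abs(mape(y, y_hat) - 4.25) < 1e-9            # mean of 5%, 2%, 5%, 5%
```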

3.6. Multi-Step Prediction Results

Table 4 shows the performance of the multi-step prediction method based on EWT-ICEEMDAN-SSA-LSTM. Figure 10 shows the prediction results for dataset 1. As shown in Table 4, the five-step average prediction accuracy reaches 99.06%, 98.99%, 99.07%, and 99.03% in the four datasets. The above results show that the model prediction accuracies are high in all four datasets. This fully verifies that the model proposed in this article has excellent predictive performance and strong generalization ability.

4. Discussion

4.1. Comparison of Models

To further verify the superior performance of the model proposed in this paper, this model is analyzed in comparison with the commonly used tunneling speed predictive models (BP, GRNN, GRU, and LSTM). The results are shown in Table 5.
Moreover, the proposed model is also compared with ICEEMDAN-SSA-LSTM, EWT-SSA-LSTM, ICEEMDAN-LSTM, EWT-LSTM and SSA-LSTM.
From Table 5, it can be seen that the multi-step prediction performance of the model proposed in this paper is far superior to that of BP, GRNN, GRU, and LSTM. When the prediction horizon is short, GRU and LSTM have better predictive performance than BP and GRNN. The prediction accuracy of these four models decreases significantly as the prediction horizon and data length increase, whereas the model proposed in this paper still provides accurate predictions. This model is also better than SSA-LSTM, ICEEMDAN-LSTM, EWT-LSTM, ICEEMDAN-SSA-LSTM, and EWT-SSA-LSTM. The reason is that SSA-LSTM directly substitutes the original tunneling speed sequence for training and prediction, yet the original tunneling speed sequence features very obvious irregularity and complexity, which reduces the prediction accuracy. The tunneling speed sequences decomposed by ICEEMDAN or EWT alone have strong regularity, but the residual sequence is still somewhat complicated, which also reduces the prediction accuracy. The subsequences obtained after the two decompositions are significantly less complex than the original tunneling speed sequence, and SSA can optimize LSTM well, improving the prediction accuracy of the LSTM model. Therefore, the overall prediction accuracy of the EWT-ICEEMDAN-SSA-LSTM method proposed in this paper is the highest.
The tunneling speed is often a good visual indicator of TBM efficiency. The very complex geological structure of the tunnel leads to non-linearity and irregularity in the collected tunneling speed data, so for the multi-step prediction of TBM tunneling speed, the accuracy of a single model is very low. This can be explained by the difficulty of extracting the variation pattern of the original tunneling speed sequence. EWT has a powerful signal decomposition capability and can separate complex signals into several sub-series with more regular change cycles and lower complexity. Compared with the original tunneling speed sequence, the regularity of each subsequence is significantly enhanced after EWT decomposition; however, the residual sequence still has strong complexity and randomness. ICEEMDAN can extract most of the component information in each frequency band, thus significantly enhancing the regularity of the residual sequence after decomposition. Therefore, after the two decompositions based on EWT and ICEEMDAN, several groups of subsequences with higher stability and regularity can be obtained. Featuring strong global search capability, high stability, and good convergence, SSA can address the randomly varying behavior of the input weights and thresholds of LSTM models, and LSTM itself has a strong ability to predict time-series data. Therefore, the prediction method proposed in this paper achieves high multi-step prediction performance.

4.2. Comparison of Time Consumption

Time cost is also a very important evaluation indicator. This section discusses the training and prediction time cost of all prediction methods. Table 6 shows that the training time of BP and GRNN is much shorter than that of LSTM and GRU. This is because BP and GRNN are relatively simple supervised feed-forward neural networks with fewer internal neurons, while LSTM and GRU have more complex network structures and relatively longer training times. GRU maintains almost the same prediction accuracy as LSTM; however, compared with LSTM, it has a simpler internal structure, because the internal computation of three gates is reduced to two gates, and it therefore takes less time. The LSTM optimized by SSA is significantly more time-consuming, because SSA needs to adjust the parameters within the LSTM to obtain an accurate prediction. The method proposed in this paper requires not only the decomposition of the data but also network training; accordingly, the training and prediction time is long, but good prediction results are obtained. Moreover, the prediction time of the proposed method is only 17.72 ms, which is acceptable given the gain in prediction accuracy. However, each training run of the EWT-ICEEMDAN-SSA-LSTM model takes about 10-15 min and requires a large amount of raw data as model input, which is a considerable drain on the experimental environment, time, and hardware. These limitations are not negligible.

4.3. Comparison of Model Decomposition

EMD (empirical mode decomposition), VMD (variational mode decomposition), and EWT are widely used signal decomposition methods. Not constrained by the type of signal or trend term, the EMD method is applicable to a wide range of signals; however, EMD suffers from mode mixing and endpoint effects, so its signal decomposition quality is relatively poor. ICEEMDAN is based on the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and improves on its shortcomings: instead of adding white noise directly, it uses the local mean of the signal together with the kth EMD mode of the noise, removing more pseudo-modes. VMD alleviates the mode mixing of EMD well, but extracts relatively fewer subsequences than EWT, resulting in relatively lower subsequence smoothness. Compared with VMD, EWT yields more modal components and a more thorough decomposition of the raw data.
Table 7 and Figure 11 show the number of decomposition modes and the average decomposition relative error for dataset 3. EWT produces the largest number of subsequences and has the lowest average decomposition relative error. ICEEMDAN and EMD produce the same number of subsequences, but EMD has the highest average decomposition relative error, while VMD produces the fewest subsequences. Therefore, EWT and ICEEMDAN are suitable choices as the data decomposition methods in this paper.
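The paper does not state the exact formula behind the "average decomposition relative error". One plausible reading, sketched below in Python, is the mean pointwise relative error between the original signal and the sum of its decomposed modes; a perfect decomposition reconstructs the signal exactly and scores zero:

```python
import numpy as np

def avg_decomposition_relative_error(signal, modes):
    """Mean relative error between a signal and the sum of its modes.

    `modes` is an (n_modes, n_samples) array; summing the modes should
    reconstruct the signal, so the residual measures decomposition loss.
    """
    signal = np.asarray(signal, float)
    recon = np.sum(np.asarray(modes, float), axis=0)
    eps = np.finfo(float).eps  # guard against division by zero
    rel_err = np.abs(recon - signal) / (np.abs(signal) + eps)
    return float(np.mean(rel_err))
```

Under this definition, the lower EWT error in Table 7 would mean its modes reconstruct the tunneling-speed series more faithfully than those of EMD, VMD or ICEEMDAN.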

4.4. Wilcoxon Rank-Sum Test

This section further examines the difference between the prediction results of the proposed model and those of the existing models using the Wilcoxon rank-sum test. The null hypothesis is that an existing model's errors are as small as those of the proposed model; whether to accept or reject it is decided from the p-value, which indicates the confidence level. Table 8, Table 9, Table 10 and Table 11 present the Wilcoxon rank-sum test results for the four datasets. The p-values for the multi-step tunneling speed prediction errors of the existing models versus the proposed model are all below 0.08, and most are below 0.05, so the null hypothesis can be rejected. This also indicates a substantial difference in error between the models. Therefore, the proposed model displays better prediction performance than the existing models.
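In practice this test is usually run with a library routine such as `scipy.stats.ranksums`. A dependency-free Python sketch of the two-sided rank-sum test under the normal approximation (illustrative, not the authors' implementation) is:

```python
import math
import numpy as np

def ranksum_pvalue(x, y):
    """Two-sided Wilcoxon rank-sum test via the normal approximation."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n1, n2 = len(x), len(y)
    pooled = np.concatenate([x, y])
    order = pooled.argsort()
    ranks = np.empty(n1 + n2)
    ranks[order] = np.arange(1, n1 + n2 + 1)
    # assign average ranks to tied values
    for v in np.unique(pooled):
        mask = pooled == v
        ranks[mask] = ranks[mask].mean()
    w = ranks[:n1].sum()                                  # rank sum of sample x
    mean = n1 * (n1 + n2 + 1) / 2.0
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mean) / sd
    return math.erfc(abs(z) / math.sqrt(2))               # two-sided p-value
```

Samples drawn from clearly different error distributions yield a small p-value (rejecting the null hypothesis, as in Tables 8–11), while heavily overlapping samples yield a large one.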

5. Conclusions

This paper proposes a multi-step prediction model for TBM tunneling speed based on EWT-ICEEMDAN-SSA-LSTM. First, the raw data are preprocessed using a binary discriminant function and the 3σ principle to eliminate outliers and missing values and obtain smooth, stable data. Secondly, the preprocessed data are decomposed using EWT, yielding several subsequences and a residual sequence. Then, the residual sequence is decomposed again using ICEEMDAN. After the two rounds of decomposition, several subsequences with good stability and regularity are obtained, which makes the data easier to predict. Finally, multi-step prediction is performed for each subsequence with the SSA-LSTM model, and the subsequence predictions are summed to obtain the final result.
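The overall pipeline can be summarized in a few lines of Python. This is a structural sketch only: `decompose` stands in for the EWT + ICEEMDAN stage, `predict_one` for the trained SSA-LSTM forecaster, and outliers are simply replaced by the series mean rather than by the paper's binary discriminant function:

```python
import numpy as np

def three_sigma_clean(x):
    """Replace points outside mean ± 3*std with the series mean (3σ rule)."""
    x = np.asarray(x, float)
    mu, sigma = x.mean(), x.std()
    cleaned = x.copy()
    cleaned[np.abs(x - mu) > 3 * sigma] = mu  # simplified imputation
    return cleaned

def hybrid_predict(signal, decompose, predict_one):
    """Decompose the series, forecast each subsequence, sum the forecasts.

    `decompose` maps a series to subsequences whose sum reconstructs it;
    `predict_one` forecasts the next value of a single subsequence.
    """
    subsequences = decompose(signal)
    return sum(predict_one(s) for s in subsequences)
```

Because the decomposition is additive, summing the per-subsequence forecasts recovers a forecast of the original series, which is exactly how the paper recombines its results.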
Four datasets under different complex geological conditions were selected to validate the proposed prediction model. The results indicate that the EWT-ICEEMDAN-SSA-LSTM multi-step prediction model for TBM tunneling speed achieves more accurate predictions than the other existing models, with average prediction accuracies over the five steps of 99.06%, 98.99%, 99.07%, and 99.03% on the four datasets, respectively.
Taking an actual water diversion tunnel project in China as the research object, the hybrid model proposed in this paper achieves good prediction accuracy. It can provide strong data support for the planning, design and construction of subsequent similar projects, allowing engineering teams to optimize tunneling parameters, reduce construction risk and improve operational efficiency, especially for tunnel projects under complex geological conditions. In this sense, its application value is particularly significant.
In the future, the prediction of TBM tunneling speed can be further studied with respect to the coupled interaction mechanism among geological conditions, tunneling parameters and machine state, so as to improve the accuracy and generalization ability of the prediction model, especially for tunneling under special geological conditions such as long distances, great depths and high water pressure. New theories and methods for TBM performance prediction can also be explored to overcome the bottlenecks of existing technology.
However, this study also has several limitations. One is how to handle the data and predict the TBM tunneling speed when large geological defects, such as karst caves or fracture zones, are encountered. Another is that more engineering examples are needed to validate the proposed predictive model, as different projects differ in TBM type, geological conditions, etc., so their datasets will have different characteristics.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/buildings14124027/s1, Figure S1: Comparison of the second step predicted tunneling speed curves for different methods in dataset 1; Table S1: Prediction results of different prediction methods in dataset 1; Figure S2: Comparison of the third step predicted tunneling speed curves for different methods in dataset 2; Table S2: Prediction results of different prediction methods in dataset 2; Figure S3: Comparison of the fourth step predicted tunneling speed curves for different methods in dataset 3; Table S3: Prediction results of different prediction methods in dataset 3; Figure S4: Comparison of the fifth step predicted tunneling speed curves for different methods in dataset 4; Table S4: Prediction results of different prediction methods in dataset 4; Table S5: Average prediction performance of different prediction methods; Figure S5: Comparison of decomposition models in dataset 1; Table S6: Modal components and average decomposition relative error for dataset 1; Figure S6: Comparison of decomposition models in dataset 2; Table S7: Modal components and average decomposition relative error for dataset 2; Figure S7: Comparison of decomposition models in dataset 4; Table S8: Modal components and average decomposition relative error for dataset 4.

Author Contributions

D.L.: Conceptualization, Formal analysis, Investigation, Methodology, Software, Writing—original draft. Y.Y.: Formal analysis, Funding acquisition, Writing—review and editing, Project administration. S.Y.: Methodology, Software, Investigation. Z.Z.: Data curation, Resources. X.S.: Data curation, Resources. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant No. 51679089) and funds from the Intelligent Water Conservancy Project of Discipline Innovation Introduction Base of Henan Province, China (Grant No. GXJD004).

Data Availability Statement

The original contributions presented in the study are included in the article and Supplementary Materials; further inquiries can be directed to the corresponding author.

Conflicts of Interest

Author Defu Liu is affiliated with the company Kunming Engineering Corporation Limited of POWERCHINA. Author Zhixiao Zhang is affiliated with the company Zhongzhou Water Holding Co., Ltd. Author Xiaohu Sun is affiliated with the company Bei Fang Investigation Design & Research Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Liu, Q.S.; Liu, J.P.; Pan, Y.C.; Kong, X.X.; Cui, X.Z.; Huang, S.B.; Wei, L. Research advances of tunnel boring machine performance prediction models for hard rock. Chin. J. Rock Mech. Eng. 2016, 35, 2766–2786. [Google Scholar]
  2. Sanio, H.P. Prediction of the performance of disc cutters in anisotropic rock. Int. J. Rock Mech. Min. Sci. 1985, 22, 153–161. [Google Scholar] [CrossRef]
  3. Yagiz, S. Development of Rock Fracture and Brittleness Indices to Quantify the Effects of Rock Mass Features and Toughness in the CSM Model Basic Penetration for Hard Rock Tunneling Machines. Ph.D. Thesis, Colorado School of Mines, Golden, CO, USA, 2002. [Google Scholar]
  4. Nelson, P.P.; Al-jalil, Y.A. Tunnel Boring Machine Project Data Bases and Construction Simulation; The University of Texas at Austin: Austin, TX, USA, 1994; pp. 708–712. [Google Scholar]
  5. Bruland, A. Hard Rock Tunnel Boring. Ph.D. Thesis, Norwegian University of Science and Technology, Trondheim, Norway, 1998. [Google Scholar]
  6. Wen, S.; Zhao, Y.X.; Yang, S.Q. Prediction on penetration rate of TBM based on Monte Carlo-BP neural network. Yantu Lixue/Rock Soil Mech. 2009, 30, 3127–3132. [Google Scholar]
  7. Xiong, F.; Hu, Z.P.; Ren, X.; Zhang, P. Matlab-Based BP Neural Network Applied to the Prediction of TBM Advance Rate. Mod. Tunn. Technol. 2017, 54, 101–107. [Google Scholar]
  8. Liu, B.; Wang, R.; Zhao, G.; Guo, X.; Wang, Y.; Li, J.; Wang, S. Prediction of rock mass parameters in the TBM tunnel based on BP neural network integrated simulated annealing algorithm. Tunn. Undergr. Space Technol. 2020, 95, 103103. [Google Scholar] [CrossRef]
  9. Hou, S.K.; Liu, Y.R.; Zhang, K. Prediction of TBM tunnelling parameters based on IPSO-BP hybrid model. Chin. J. Rock Mech. Eng. 2020, 39, 1648–1657. [Google Scholar]
  10. Liu, W.L.; Liu, A.; Liu, C.J. Multi-objective optimization control for tunnel boring machine performance improvement under uncertainty. Autom. Constr. 2022, 139, 104310. [Google Scholar] [CrossRef]
  11. Liu, Z.B.; Li, L.; Fang, X.L.; Qi, W.B.; Shen, J.M.; Zhou, H.Y.; Zhang, Y.L. Hard-rock tunnel lithology prediction with TBM construction big data using a global-attention-mechanism-based LSTM network. Autom. Constr. 2021, 125, 103647. [Google Scholar] [CrossRef]
  12. Gao, B.Y.; Wang, R.R.; Lin, C.G.; Guo, X.; Liu, B.; Zhang, W.G. TBM penetration rate prediction based on the long short-term memory neural network. Undergr. Space 2020, 6, 718–731. [Google Scholar] [CrossRef]
  13. Gao, X.J.; Shi, M.L.; Song, X.G.; Zhang, C.; Zhang, H.W. Recurrent neural networks for real-time prediction of TBM operating parameters. Autom. Constr. 2019, 98, 225–235. [Google Scholar] [CrossRef]
  14. Salimi, A.; Rostami, J.; Moormann, C. Application of rock mass classification systems for performance estimation of rock TBMs using regression tree and artificial intelligence algorithms. Tunn. Undergr. Space Technol. 2019, 92, 103046. [Google Scholar] [CrossRef]
  15. Armaghani, D.J.; Koopialipoor, M.; Marto, A.; Yagiz, S. Application of several optimization techniques for estimating TBM advance rate in granitic rocks. J. Rock Mech. Geotech. Eng. 2019, 11, 779–789. [Google Scholar] [CrossRef]
  16. Mahdevari, S.; Shahriar, K.; Yagiz, S.; Shirazi, M.A. A support vector regression model for predicting tunnel boring machine penetration rates. Int. J. Rock Mech. Min. Sci. 2014, 72, 214–229. [Google Scholar] [CrossRef]
  17. Ghasemi, E.; Yagiz, S.; Ataei, M. Predicting penetration rate of hard rock tunnel boring machine using fuzzy logic. Bull. Eng. Geol. Environ. 2014, 73, 23–35. [Google Scholar] [CrossRef]
  18. Yagiz, S.; Karahan, H. Prediction of hard rock TBM penetration rate using particle swarm optimization. Int. J. Rock Mech. Min. Sci. 2011, 48, 427–433. [Google Scholar] [CrossRef]
  19. Ma, T.; Jin, Y.; Liu, Z.; Prasad, Y.K. Research on Prediction of TBM Performance of Deep-Buried Tunnel Based on Machine Learning. Appl. Sci. 2022, 12, 6599. [Google Scholar] [CrossRef]
  20. Li, Z.; Tao, Y.; Du, Y.; Wang, X. Classification and Prediction of Rock Mass Boreability Based on Daily Advancement during TBM Tunneling. Buildings 2024, 14, 1893. [Google Scholar] [CrossRef]
  21. Li, L.; Liu, Z.; Zhou, H.; Zhang, J.; Shen, W.; Shao, J. Prediction of TBM cutterhead speed and penetration rate for high-efficiency excavation of hard rock tunnel using CNN-LSTM model with construction big data. Arab. J. Geosci. 2022, 15. [Google Scholar] [CrossRef]
  22. Afradi, A.; Ebrahimabadi, A. Prediction of TBM penetration rate using the imperialist competitive algorithm (ICA) and quantum fuzzy logic. Innov. Infrastruct. Solut. 2021, 6, 103. [Google Scholar] [CrossRef]
  23. Latif, K.; Sharafat, A.; Seo, J. Digital Twin-Driven Framework for TBM Performance Prediction, Visualization, and Monitoring through Machine Learning. Appl. Sci. 2023, 13, 11435. [Google Scholar] [CrossRef]
  24. Chu, Y.; Xu, P.; Li, M.; Chen, Z.; Chen, Z.; Chen, Y.; Li, W. Short-term metropolitan-scale electric load forecasting based on load decomposition and ensemble algorithms. Energy Build. 2020, 225, 110343. [Google Scholar] [CrossRef]
  25. Costa, M.; Ruiz-Cárdenas, R.; Mineti, L.; Prates, M. Dynamic time scan forecasting for multi-step wind speed prediction. Renew. Energy 2021, 177, 584–595. [Google Scholar] [CrossRef]
  26. Jiang, Y.; Xia, T.; Wang, D.; Fang, X.; Xi, L. Adversarial regressive domain adaptation framework for infrared thermography-based unsupervised remaining useful life prediction. IEEE Trans. Ind. Inf. 2022, 18, 7219–7229. [Google Scholar] [CrossRef]
  27. Liu, Z.; Wang, X.; Zhang, Q.; Huang, C. Empirical mode decomposition based hybrid ensemble model for electrical energy consumption forecasting of the cement grinding process. Measurement 2019, 138, 314–324. [Google Scholar] [CrossRef]
  28. Li, H.T.; Bai, J.C.; Cui, X.; Li, Y.W.; Sun, S.L. A new secondary decomposition-ensemble approach with cuckoo search optimization for air cargo forecasting. Appl. Soft Comput. 2020, 90, 106161. [Google Scholar] [CrossRef]
  29. Ali, M.; Deo, R.C.; Maraseni, T.; Downs, N.J. Improving SPI-derived drought forecasts incorporating synoptic-scale climate indices in multi-phase multivariate empirical mode decomposition model hybridized with simulated annealing and kernel ridge regression algorithms. J. Hydrol. 2019, 576, 164–184. [Google Scholar] [CrossRef]
  30. Ali, M.; Prasad, R.; Xiang, Y.; Yaseen, Z.M. Complete ensemble empirical mode decomposition hybridized with random forest and kernel ridge regression model for monthly rainfall forecasts. J. Hydrol. 2020, 584, 124647. [Google Scholar] [CrossRef]
  31. Ribeiro, M.; Coelho, L. Ensemble approach based on bagging, boosting and stacking for short-term prediction in agribusiness time series. Appl. Soft Comput. 2020, 86, 105837. [Google Scholar] [CrossRef]
  32. Jeddi, S.; Sharifian, S. A hybrid wavelet decomposer and GMDH-ELM ensemble model for Network function virtualization workload forecasting in cloud computing. Appl. Soft Comput. 2020, 88, 105940. [Google Scholar] [CrossRef]
  33. Wang, R.; Lei, Y.; Yang, Y.; Xu, W.; Wang, Y. Dynamic prediction model of landslide displacement based on (SSA-VMD)-(CNN-BiLSTM-attention): A case study. Front. Phys. 2024, 12, 1417536. [Google Scholar] [CrossRef]
  34. Xiao, Y.; Ju, N.; He, C.; Xiao, Z.; Ma, Z. Week-ahead shallow landslide displacement prediction using chaotic models and robust LSTM. Front. Earth Sci. 2022, 10, 965071. [Google Scholar] [CrossRef]
  35. Li, H.; Deng, J.; Feng, P.; Pu, C.; Arachchige, D.D.K.; Cheng, Q. Short-Term Nacelle Orientation Forecasting Using Bilinear Transformation and ICEEMDAN Framework. Front. Energy Res. 2021, 9, 780928. [Google Scholar] [CrossRef]
  36. Qin, C.J.; Shi, G.; Tao, J.F.; Yu, H.G.; Jin, Y.R.; Xiao, D.Y.; Zhang, Z.N.; Liu, C.L. An adaptive hierarchical decomposition-based method for multi-step cutterhead torque forecast of shield machine. Mech. Syst. Signal Process. 2022, 175, 109148. [Google Scholar] [CrossRef]
  37. Shi, G.; Qin, C.J.; Tao, J.F.; Liu, C.L. A VMD-EWT-LSTM-based multi-step prediction approach for shield tunneling machine cutterhead torque. Knowl.-Based Syst. 2021, 228, 107213. [Google Scholar] [CrossRef]
  38. Gilles, J. Empirical wavelet transform. IEEE Trans. Signal Process. 2013, 61, 3999–4010. [Google Scholar] [CrossRef]
  39. Karijadi, I.; Chou, S.Y.; Dewabharata, A. Wind power forecasting based on hybrid CEEMDAN-EWT deep learning method. Renew. Energy 2023, 218, 119357. [Google Scholar] [CrossRef]
  40. Wu, J.; Feng, M.; Li, T.Y. Daily Crude Oil Price Forecasting Based on Improved CEEMDAN, SCA, and RVFL, A Case Study in WTI Oil Market. Energies 2020, 13, 1852. [Google Scholar] [CrossRef]
  41. Yang, J.F.; Ren, Y.; Chai, J.; Zhang, D.; Liu, Y. Adit deformation prediction based on ICEEMDAN dispersion entropy and LSTM-BP. Opt. Fiber Technol. 2023, 79, 103364. [Google Scholar] [CrossRef]
  42. Fu, L.M.; Li, J.L.; Chen, Y.F. An innovative decision making method for air quality monitoring based on big data-assisted artificial intelligence technique. J. Innov. Knowl. 2023, 8, 100294. [Google Scholar] [CrossRef]
  43. Colominas, M.A.; Schlotthauer, G.; Torres, M.E. Improved complete ensemble EMD: A suitable tool for biomedical signal processing. Biomed. Signal Process. Control 2014, 14, 19–29. [Google Scholar] [CrossRef]
  44. Xue, J.K.; Shen, B. A novel swarm intelligence optimization approach, sparrow search algorithm. Syst. Sci. Control Eng. 2020, 8, 22–34. [Google Scholar] [CrossRef]
  45. Zhang, Z.Z.; Li, K.; Guo, H.; Liang, X. Combined prediction model of joint opening-closing deformation of immersed tube tunnel based on SSA optimized VMD, SVR and GRU. Ocean Eng. 2024, 305, 117933. [Google Scholar] [CrossRef]
  46. Li, X.H.; Guo, M.; Zhang, R.; Chen, G. A data-driven prediction model for maximum pitting corrosion depth of subsea oil pipelines using SSA-LSTM approach. Ocean Eng. 2022, 261, 112062. [Google Scholar] [CrossRef]
  47. Jin, Y.R.; Qin, C.J.; Huang, Y.X.; Zhao, W.Y.; Liu, C.L. Multi-domain modeling of atrial fibrillation detection with twin attentional convolutional long short-term memory neural networks. Knowl.-Based Syst. 2020, 193, 105460. [Google Scholar] [CrossRef]
  48. Qin, X.W.; Wang, Q.J.; Wang, X.M.; Guo, J.J.; Chu, X. Short-Term Prediction of PM2.5 in Beijing Based on the VMD-LSTM Method. J. Jilin Univ. (Earth Sci. Ed.) 2022, 52, 214–221. [Google Scholar]
  49. Shahid, F.; Zameer, A.; Muneeb, M. A novel genetic LSTM model for wind power forecast. Energy 2021, 223, 120069. [Google Scholar] [CrossRef]
  50. Xiao, D.Y.; Huang, Y.X.; Qin, C.J.; Shi, H.S.; Li, Y.M. Fault diagnosis of induction motors using recurrence quantification analysis and LSTM with weighted BN. Shock Vib. 2019, 1–14. [Google Scholar] [CrossRef]
  51. Qiu, D.H.; Fu, K.; Xue, Y.G.; Li, Z.Q.; Li, G.K.; Kong, F.M. LSTM time-series prediction model for TBM tunneling parameters of deepburied tunnels and application research. J. Cent. South Univ. (Sci. Technol.) 2021, 52, 2646–2660. [Google Scholar]
  52. Du, W.; Zhu, R.; Li, Y. Adaptive determination method of wavelet filter decomposition layers. Optoelectron. Laser 2010, 21, 1408–1411. [Google Scholar]
  53. Jiang, P.; Wang, Z.X.; Li, X.B.; Wang, X.V.; Yang, B.; Zheng, J. Energy consumption prediction and optimization of industrial robots based on LSTM. J. Manuf. Syst. 2023, 70, 137–148. [Google Scholar] [CrossRef]
  54. Ren, L.; Dong, J.; Wang, X.; Meng, Z.; Zhao, L.; Deen, M.J. A Data-driven Auto-CNN-LSTM Prediction Model for Lithium-ion Battery Remaining Useful Life. IEEE Trans. Ind. Inform. 2020, 17, 3478–3487. [Google Scholar] [CrossRef]
Figure 1. Structure of LSTM neurons.
Figure 2. Model framework based on EWT-ICEEMDAN-SSA-LSTM.
Figure 3. Geological profile of the TBM tunneling hole section. 1. Quaternary accumulations. 2. Muddy siltstone and mudstone (shale) interbedded with coal line of the upper Triassic Songgui Formation. 3. Sandy mud shale interbedded with tuff of the upper Triassic. 4. Tuff and muddy tuff of the upper Triassic Zhongwu Formation. 5. Tuff and dolomitic tuff in the upper part of the middle Permian Bei Nga Formation. 6. Striated tuff in the lower and upper part of the lower part of the Bei Nga Formation. 7. Interbedded tuff and sandstone in the lower part of the Bei Nga Formation. 8. Middle Triassic tuff. 9. Middle Triassic slate and schist interbedded with a small amount of tuff. 10. Sandstone and mudstone of the Qingtianbao Formation of the Lower Triassic. 11. Sandstone and mudstone sandstone interbedded with coal beds in the upper part of the Hei Nizhaoxiao Formation of the Permian. 12. Tuff and dacite in the middle part of the Cangna Formation of the Devonian. 13. Devonian Middle Poor Fault Formation Slate and Graystone. 14. Devonian Ranjiayuan Formation Slate and Graystone. 15. Permian Basalt. 16. Tertiary Andesitic Basalt. 17. Faults and Numbers. 18. Drill Holes and Numbers. 19. Drill and Blast Sections. 20. TBM Construction Sections.
Figure 4. The variation law of tunneling speed under different geological conditions.
Figure 5. The outlier before processing.
Figure 6. The outlier after processing.
Figure 7. Results of EWT decomposition of dataset 1.
Figure 8. Results of ICEEMDAN decomposition of residual series of dataset 1.
Figure 9. Multi-step prediction structure diagram.
Figure 10. Multi-step prediction results for dataset 1: (a) 1-step prediction with ICEEMDAN-EWT-SSA-LSTM; (b) 2-step prediction with ICEEMDAN-EWT-SSA-LSTM; (c) 3-step prediction with ICEEMDAN-EWT-SSA-LSTM; (d) 4-step prediction with ICEEMDAN-EWT-SSA-LSTM; (e) 5-step prediction with ICEEMDAN-EWT-SSA-LSTM.
Figure 11. Comparison of the number of patterns of different decomposition models. (a) EMD data decomposition results. (b) VMD data decomposition results. (c) ICEEMDAN data decomposition results. (d) EWT data decomposition results.
Table 1. The classification of the surrounding rock category.
Stratigraphic Lithology | Yield (Inclination and Dip) | Saturated Uniaxial Compressive Strength (MPa) | Type of Rock Structure | Degree of Rock Integrity | Angle Between Rock Strike and Cave Axis | Surrounding Rock Type
mudstone | 165°∠20° | 22~40 | lamellate | very poor | 20° | V
limestone | 136°∠9° | – | bulk structure | shattered | – | –
limestone | 136°∠9° | 40~50 | medium-thickness laminar | complete | – | IV
limestone | 183°∠19° | 22~40 | bulk structure | shattered | 25° | V
limestone | 183°∠19° | 40~55 | medium-thickness laminar | complete | 38° | IV
limestone | 130°∠15° | 22~40 | medium-thickness laminar | very poor | 15° | V
limestone | 140°∠14° | 40~55 | medium-thickness laminar | complete | – | IV
limestone | 140°∠14° | 22~40 | bulk structure | shattered | – | V
limestone | 240°∠19° | 40~55 | medium-thickness laminar | complete | 85° | IV
limestone | 240°∠19° | 20~40 | bulk structure | shattered | 85° | V
limestone | 109°∠12° | 40~55 | medium-thickness laminar | complete | 36° | IV
limestone | 109°∠12° | 22~40 | medium-thickness laminar | very poor | 36° | V
limestone | 109°∠12° | 40~55 | medium-thickness laminar | complete | 36° | IV
limestone | 109°∠12° | 55~80 | medium-thickness laminar | more complete | 36° | III
limestone | 109°∠12° | 40~55 | medium-thickness laminar | complete | 36° | IV
limestone | 105°∠7° | 22~40 | medium-thickness laminar | very poor | 40° | V
limestone | 105°∠7° | 40~50 | thickly stratified | complete | 45° | IV
limestone | 105°∠7° | 50~70 | medium-thickness laminar | more complete | 43° | III
limestone | 245°∠16° | – | medium-thickness laminar | more complete | 86° | –
limestone | 245°∠16° | – | medium-thickness laminar | more complete | 81° | –
Table 2. The main performance parameters of TBM.
Technical Parameters | Value
Cutter excavation diameter/mm | 4000
Rated total thrust/KN | 15,000
Maximum total thrust/KN | 17,500
Rated torque of cutter/(KN·m) | 1890
Cutter release torque/(KN·m) | 2900
Support shoe rated support force/KN | 30,900
Maximum support force of the support boot/KN | 36,000
Table 3. Modal components and average decomposition relative error for four datasets.
Dataset | Modal Components | Average Decomposition Relative Error
1 | 11 | 3.84%
2 | 10 | 5.05%
3 | 9 | 1.28%
4 | 9 | 3.65%
Table 4. Prediction results for different datasets.
Table 4. Prediction results for different datasets.
Dataset1234
1-stepRMSE (KN·m)0.470.890.680.79
MAE (KN·m)0.361.320.861.09
MAPE (%)0.42%0.63%0.54%0.58%
2-stepRMSE (KN·m)0.681.021.271.32
MAE (KN·m)0.541.431.251.40
MAPE (%)0.64%0.86%0.75%0.82%
3-stepRMSE (KN·m)1.031.361.660.92
MAE (KN·m)0.831.541.651.20
MAPE (%)0.99%0.88%0.97%1.04%
4-stepRMSE (KN·m)1.300.960.720.84
MAE (KN·m)0.941.520.951.23
MAPE (%)1.13%1.22%1.18%1.21%
5-stepRMSE (KN·m)1.531.560.831.61
MAE (KN·m)1.161.970.981.81
MAPE (%)1.41%1.47%1.22%1.22%
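The RMSE, MAE and MAPE values reported in Tables 4 and 5 follow their standard definitions; a minimal Python sketch:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error."""
    d = np.asarray(y_true, float) - np.asarray(y_pred, float)
    return float(np.sqrt(np.mean(d ** 2)))

def mae(y_true, y_pred):
    """Mean absolute error."""
    d = np.asarray(y_true, float) - np.asarray(y_pred, float)
    return float(np.mean(np.abs(d)))

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)
```

The "average prediction accuracy" quoted in the Abstract appears to correspond to 100% minus the mean MAPE across the five steps (e.g., dataset 1's MAPE values average to roughly 0.9%, matching the reported 99.06%); this correspondence is our inference, not an explicit formula from the paper.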
Table 5. Average predictive performance of each predictive model.
Model | Evaluation Parameters | Step-1 | Step-2 | Step-3 | Step-4 | Step-5
BP | RMSE (KN·m) | 4.91 | 5.45 | 5.72 | 6.62 | 7.08
BP | MAE (KN·m) | 3.98 | 4.49 | 4.77 | 5.48 | 5.89
BP | MAPE (%) | 7.50% | 8.04% | 8.44% | 8.94% | 9.40%
GRNN | RMSE (KN·m) | 4.90 | 5.46 | 5.74 | 6.41 | 6.90
GRNN | MAE (KN·m) | 3.92 | 4.39 | 4.96 | 5.56 | 6.09
GRNN | MAPE (%) | 7.20% | 7.62% | 8.05% | 8.51% | 9.02%
GRU | RMSE (KN·m) | 3.58 | 4.21 | 4.72 | 5.31 | 5.85
GRU | MAE (KN·m) | 2.98 | 3.57 | 4.18 | 4.85 | 5.33
GRU | MAPE (%) | 5.45% | 5.92% | 6.41% | 6.88% | 7.33%
LSTM | RMSE (KN·m) | 3.72 | 4.26 | 4.93 | 5.45 | 6.17
LSTM | MAE (KN·m) | 3.13 | 3.72 | 4.48 | 4.92 | 5.26
LSTM | MAPE (%) | 5.94% | 6.33% | 6.77% | 7.21% | 7.71%
SSA-LSTM | RMSE (KN·m) | 2.66 | 3.12 | 3.51 | 4.17 | 4.75
SSA-LSTM | MAE (KN·m) | 2.18 | 2.67 | 3.32 | 3.99 | 4.56
SSA-LSTM | MAPE (%) | 4.09% | 4.42% | 4.85% | 5.36% | 5.82%
ICEEMDAN-LSTM | RMSE (KN·m) | 2.08 | 2.52 | 3.00 | 3.74 | 4.53
ICEEMDAN-LSTM | MAE (KN·m) | 1.70 | 2.14 | 2.69 | 3.27 | 4.08
ICEEMDAN-LSTM | MAPE (%) | 3.06% | 3.37% | 3.86% | 4.36% | 4.92%
EWT-LSTM | RMSE (KN·m) | 1.73 | 2.22 | 2.67 | 3.31 | 4.04
EWT-LSTM | MAE (KN·m) | 1.40 | 1.91 | 2.41 | 2.71 | 3.42
EWT-LSTM | MAPE (%) | 2.65% | 2.99% | 3.29% | 3.78% | 4.38%
ICEEMDAN-SSA-LSTM | RMSE (KN·m) | 1.70 | 2.02 | 2.37 | 3.00 | 3.46
ICEEMDAN-SSA-LSTM | MAE (KN·m) | 1.33 | 1.74 | 2.14 | 2.62 | 2.83
ICEEMDAN-SSA-LSTM | MAPE (%) | 2.48% | 2.73% | 3.03% | 3.42% | 3.63%
EWT-SSA-LSTM | RMSE (KN·m) | 1.09 | 1.54 | 1.88 | 2.23 | 2.59
EWT-SSA-LSTM | MAE (KN·m) | 1.22 | 1.47 | 1.76 | 2.12 | 2.49
EWT-SSA-LSTM | MAPE (%) | 1.57% | 1.82% | 2.08% | 2.26% | 2.43%
EWT-ICEEMDAN-SSA-LSTM | RMSE (KN·m) | 0.71 | 1.07 | 1.24 | 0.95 | 1.38
EWT-ICEEMDAN-SSA-LSTM | MAE (KN·m) | 0.91 | 1.15 | 1.30 | 1.16 | 1.48
EWT-ICEEMDAN-SSA-LSTM | MAPE (%) | 0.54% | 0.77% | 0.97% | 1.19% | 1.33%
Table 6. Training and prediction times of the existing methods and the proposed method.
Model | BP | GRNN | GRU | LSTM | EWT-ICEEMDAN-SSA-LSTM
Average training time (s) | 1.04 | 2.35 | 26.49 | 31.04 | 769.19
Average test time (ms) | 0.25 | 0.32 | 1.32 | 1.56 | 17.72
Table 7. Modal components and average decomposition relative error for Dataset 3.
Model | Modes Number | Average Decomposition Relative Error
EMD | 6 | 3.50%
VMD | 5 | 3.14%
ICEEMDAN | 6 | 3.11%
EWT | 9 | 1.28%
Table 8. Results of Wilcoxon rank-sum test for Dataset 1.
Model | Step-1 | Step-2 | Step-3 | Step-4 | Step-5
BP | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
GRNN | 0.02 | 0.00 | 0.00 | 0.00 | 0.00
GRU | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
LSTM | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Table 9. Results of Wilcoxon rank-sum test for Dataset 2.
Model | Step-1 | Step-2 | Step-3 | Step-4 | Step-5
BP | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
GRNN | 0.03 | 0.00 | 0.00 | 0.00 | 0.00
GRU | 0.00 | 0.00 | 0.00 | 0.00 | 0.01
LSTM | 0.00 | 0.00 | 0.00 | 0.01 | 0.00
Table 10. Results of Wilcoxon rank-sum test for Dataset 3.
Model | Step-1 | Step-2 | Step-3 | Step-4 | Step-5
BP | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
GRNN | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
GRU | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
LSTM | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Table 11. Results of Wilcoxon rank-sum test for Dataset 4.
Model | Step-1 | Step-2 | Step-3 | Step-4 | Step-5
BP | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
GRNN | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
GRU | 0.02 | 0.00 | 0.00 | 0.00 | 0.00
LSTM | 0.06 | 0.00 | 0.00 | 0.00 | 0.00
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
