Article

Deep Learning in Rockburst Intensity Level Prediction: Performance Evaluation and Comparison of the NGO-CNN-BiGRU-Attention Model

1 School of Resources and Safety Engineering, Central South University, Changsha 410083, China
2 Ocean College, Zhoushan Campus, Zhejiang University, Hangzhou 316021, China
3 School of Civil Engineering, Wuhan University, Wuhan 430072, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(13), 5719; https://doi.org/10.3390/app14135719
Submission received: 31 May 2024 / Revised: 22 June 2024 / Accepted: 27 June 2024 / Published: 29 June 2024
(This article belongs to the Special Issue Rock Mechanics in Geotechnical and Tunnel Engineering)

Abstract

Rockburst is an extremely hazardous geological disaster. In order to accurately predict the hazard degree of rockbursts, this paper proposes eight new classification models for predicting the intensity level of rockbursts based on intelligent optimisation algorithms and deep learning techniques and collects 287 sets of real rockburst data to form a sample database, in which six quantitative indicators are selected as feature parameters. To validate the effectiveness of the eight constructed machine learning prediction models, the study selected Accuracy, Precision, Recall and F1 Score to evaluate the prediction performance of each model. The results show that the NGO-CNN-BiGRU-Attention model has the best prediction performance, with an accuracy of 0.98. Subsequently, engineering validation of the model is carried out using eight sets of real rockburst data from the Daxiangling Tunnel, and the results show that the model has strong generalisation ability and can satisfy the relevant engineering applications. In addition, this paper also uses SHAP technology to quantify the impact of different factors on the rockburst intensity level and finds that the elastic strain energy index and stress ratio have the greatest impact on the rockburst intensity level.

1. Introduction

Rockburst [1] is a dynamic instability phenomenon in which the surrounding rock is spalled, blocked, ejected or thrown due to the abrupt and severe dissipation of elastic strain energy in the rock; in essence, the surrounding rock has gained kinetic energy [2]. As a sudden, random and highly hazardous geological disaster [3], rockburst activity usually proceeds through three phases [4]: the inactive phase, the active phase and the destructive phase. During the inactive phase, little elastic strain energy has accumulated; during the destructive phase, large-scale rockbursts usually occur; and the active phase lies between the two, usually accompanied by small- and medium-scale rockbursts. In recent years, the frequency and severity of rockbursts have increased with the growth of underground space engineering activities and the deepening of mining operations. Such geological hazards not only affect the construction process [5,6] but also endanger social production activities and citizens’ lives and property. In view of these hazards, preventing rockburst disasters is particularly important, and rockburst prediction and evaluation is an essential prerequisite for developing prevention and control measures. Therefore, the accurate and reliable prediction of high-intensity rockburst disasters has become a research hotspot in related fields [7,8].
Due to the extreme complexity of the rockburst prediction problem, there is currently no mature set of theories and methods at home or abroad, but existing approaches can be broadly divided into field measurement methods and theoretical analysis methods [9]. Field measurement methods usually employ professional geophysical engineering equipment to detect and monitor the rock mass in the region of interest and thus determine whether a rockburst is possible; however, because such detection and monitoring equipment is still at the development stage, the reliability of the collected information is very limited. Theoretical analysis methods are founded on the physical parameters of the rock mass, its stress state and multiple triggering factors, and they employ mathematical methods to predict the intensity of rockbursts; representative methods include cloud model theory [10,11], fuzzy integrated discrimination [12,13], grey systems theory [14], topology theory [15], the ideal point method [16,17], the distance discrimination method [18,19], etc., but the solution procedures are usually cumbersome and the generalisation ability of the established models is weak.
For the past few years, in light of the ongoing development of computer intelligence technology, machine-learning-related methods have found good applications in the area of prediction, and scholars have also conducted related research on rockburst prediction using machine learning techniques. Zhou et al. [20] used PSO-SVMs to establish a long-term prediction model for rockbursts in underground caverns, and the results showed that the method has strong robustness. Dong et al. [21] employed a Random Forest (RF) model for underground rock projects to forecast the likelihood of rockbursts and the associated rockburst grade and compared the outcomes with those obtained using the SVM and ANN methods; the findings indicate that the RF model is a viable approach for forecasting the intensity of rockbursts. Wu Shunchuan et al. [22] established a rockburst intensity grading prediction model based on the PCA-PNN principle; the assessment outcomes indicate that the model converges faster, has a lower misjudgement rate and yields more satisfactory predictions. Pu Yuanyuan et al. [23] used the t-SNE method to reduce the dimensionality of 246 sets of real rockburst data, employed a clustering method to re-label the data and then used an SVC to train the model; the final prediction results demonstrate that the method has strong generalisation ability and higher prediction accuracy. Li et al. [24] performed rockburst prediction using seven machine learning methods based on 314 real cases of rockbursts, and the output showed that the integrated tree had superior prediction performance compared to other ML models. Xue et al. [25] developed a PSO-ELM rockburst intensity level prediction model, which they validated with 15 examples of rockbursts at the Jiangbian Hydropower Station; the model exhibited superior predictive capabilities, as evidenced by the results. Wang et al. [26] evaluated the learning ability of Boosting and Bagging algorithms based on rockburst instance data, used cross-validation to adjust hyper-parameters and showed through their analyses that the Bagging method was the most effective for predicting the rockburst intensity level. Ma et al. [27] established the Borderline-SMOTE 1-Adaboost model, which solves the problem of data imbalance to a certain extent and achieves higher prediction accuracy than various comparison models; the model was applied to forecast rockburst intensity in the Qinling water conveyance tunnel, providing a reference for the early warning of rockburst disasters during the construction of deeply buried tunnels. Ma et al. [28] proposed the LightGBM-TCN-RF rockburst intensity level prediction model, which enables the prediction of the danger degree from rockburst characteristic variables and provides a reference for the field of rockburst prediction and monitoring.
All of the research methods mentioned above have achieved some success in rockburst prediction and have certain advantages for specific problems while enriching the disaster theory related to rockburst prediction research. However, each algorithm inevitably has shortcomings in practical engineering applications: models such as SVM and KNN tend to be less robust, and their predictive performance varies across different regions, geological conditions and climates, often changing with the input parameters. With the progress of machine learning technology, more and more scholars are trying to integrate multiple models [29,30] to overcome the poor generalisation ability of a single model and have achieved some results. In the field of rockburst prediction, several scholars have also made attempts at model integration [31]; however, the generalisability of these models needs further research [32]. In this paper, to address the limitations of current research methodology and enhance the model’s predictive power and generalisability, an NGO-CNN-BiGRU-Attention rockburst prediction model based on intelligent optimisation algorithms and deep learning techniques is proposed. The model is trained on 287 sets of real rockburst data from different regions, under different geological conditions and at different times and is compared with RF, SVM, KNN, ELM, CNN, BiGRU and CNN-BiGRU-Attention models; the findings demonstrate that the NGO-CNN-BiGRU-Attention model exhibits greater prediction accuracy. Finally, the model is validated using eight sets of real rockburst cases in the Daxiangling Tunnel, and the results indicate that the model exhibits robust generalisation capacity and is suitable for engineering applications.

2. Methods

2.1. Random Forest (RF)

Random Forest (RF) is an ensemble learning algorithm that predicts by constructing multiple decision trees and combining their outputs (Figure 1), thereby effectively avoiding the tendency of a single decision tree to overfit [33]. The following is the detailed algorithmic flow of the Random Forest algorithm:
Set the number of trees N and the number of randomly selected features m in each tree, and prepare the training set data. Subsequently, the following steps are performed cyclically for each tree n : (1) random sampling; (2) random selection of features; and (3) construction of decision trees.
For the classification prediction problem of the rockburst intensity prediction, the RF model uses voting, where the majority vote determines the final classification result:
$$\hat{y}_{\mathrm{final}} = \arg\max_{y} \sum_{i=1}^{N} I(\hat{y}_i = y)$$
where $N$ denotes the number of decision trees, $\hat{y}_i$ is the prediction of the $i$th tree and $I(\cdot)$ is the indicator function.
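As an illustration of this voting scheme, the following is a minimal sketch (not the authors' exact configuration) using scikit-learn, assuming the six rockburst indicators are stored in a feature matrix and the intensity levels I–IV are encoded as 0–3; the random data here are placeholders for the sample library.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((287, 6))           # placeholder for the six-indicator feature matrix
y = rng.integers(0, 4, size=287)   # placeholder intensity labels (I-IV -> 0-3)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# N trees, each grown on a bootstrap sample with m randomly chosen features per split
rf = RandomForestClassifier(n_estimators=100, max_features="sqrt", random_state=0)
rf.fit(X_train, y_train)

# predict() returns the majority vote of the individual trees (Equation (1))
print(rf.predict(X_test[:5]))
```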

2.2. Support Vector Machine (SVM)

The Support Vector Machine (SVM) model is commonly used as an efficient supervised learning model for dealing with both linearly separable and nonlinear problems [34]. It maps the initial data into a higher-dimensional space by means of a kernel function, and its structure is shown in Figure 2. Its kernel function expression is as follows:
$$k(x, z) = \exp\left(-\frac{\|x - z\|^2}{2\sigma^2}\right)$$
With the introduction of the kernel function, the decision rule used by the SVM for classification becomes the following:
$$g(x) = \operatorname{sgn}\left(\sum_{i=1}^{n} a_i y_i k(x_i, x) + b\right)$$
A slack variable ξ i is introduced to improve the fitness, while a penalty term C is added to constrain the misclassification cases:
$$\min_{w, b}\; \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{n}\xi_i \quad \mathrm{s.t.}\quad y_i(w^{T}x_i + b) \ge 1 - \xi_i,\; \xi_i \ge 0,\; i = 1, 2, \ldots, n$$
This is then converted to a dual problem:
$$\max_{a}\; \sum_{i=1}^{n} a_i - \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} a_i a_j y_i y_j k(x_i, x_j) \quad \mathrm{s.t.}\quad \sum_{i=1}^{n} a_i y_i = 0,\; 0 \le a_i \le C,\; i = 1, 2, \ldots, n$$
where $a_i$ and $a_j$ denote the Lagrange multipliers; $y_i$ and $y_j$ denote the training sample categories, with $y_i, y_j \in \{-1, +1\}$; and the penalty factor $C$ is employed to quantify the intricacy inherent to machine learning models. If this parameter is set too high, the model is prone to overfitting the training data, which makes the SVM structure complex and reduces computational efficiency; if it is set too low, the SVM will not be able to capture the data features sufficiently, resulting in underfitting.
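A minimal sketch of such a classifier with scikit-learn is shown below; the RBF kernel corresponds to Equation (2) (with gamma playing the role of 1/(2σ²)) and C is the penalty term of Equation (4). The parameter values and random data are illustrative assumptions, not the paper's tuned settings.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = rng.random((287, 6)), rng.integers(0, 4, 287)  # placeholder features and labels

# RBF-kernel SVM; scikit-learn handles the multi-class case with one-vs-one voting
svm = SVC(kernel="rbf", C=10.0, gamma="scale")
svm.fit(X, y)
print(svm.predict(X[:5]))
```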

2.3. KNN Algorithm

The KNN algorithm is a simple and powerful classification algorithm. For a sample to be classified, it searches the pattern space [35], finds the nearest K training samples as the sample's K nearest neighbours according to a proximity measure and finally predicts the class of the sample from those neighbours using a voting strategy; its structure is shown in Figure 3. The mathematical expressions of its distance metric and classification prediction method are shown in Equations (6) and (7), respectively:
$$\mathrm{dist}(X_1, X_2) = \sqrt{\sum_{i=1}^{n}(x_{1i} - x_{2i})^2}$$
where $X_1$ and $X_2$ are the feature vectors of the two data points, $x_{1i}$ and $x_{2i}$ are their $i$th feature values, respectively, and $n$ is the number of features.
$$\hat{y} = \arg\max_{y} \sum_{i=1}^{K} I(y_i = y)$$
where $\hat{y}$ is the predicted category, $y_i$ denotes the category of the $i$th nearest neighbour and $I(\cdot)$ represents the indicator function.
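The two equations above can be sketched directly in NumPy as below; this is an illustrative toy implementation on placeholder data, not the authors' code.

```python
import numpy as np

def knn_predict(X_train, y_train, x_query, K=5):
    # Euclidean distance to every training sample (Equation (6))
    dists = np.sqrt(((X_train - x_query) ** 2).sum(axis=1))
    nearest = np.argsort(dists)[:K]          # indices of the K nearest neighbours
    votes = np.bincount(y_train[nearest])    # majority vote over their labels (Equation (7))
    return votes.argmax()

rng = np.random.default_rng(0)
X_train, y_train = rng.random((287, 6)), rng.integers(0, 4, 287)
print(knn_predict(X_train, y_train, X_train[0], K=5))
```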

2.4. Extreme Learning Machine (ELM)

The ELM classifier is a machine learning algorithm that is simple to implement, trains quickly and has better generalisation ability than traditional neural network methods [36], as shown in the mathematical expressions in Equations (8)–(10):
The output H of the implicit layer can be obtained by Equation (8):
$$H = g(XW + b)$$
W o , the weights of the output layer of the ELM, can be calculated using (9):
$$W_o = (H^{T}H + \lambda I)^{-1}H^{T}Y$$
The prediction results can be obtained using Equation (10):
$$\hat{Y} = g(XW + b)\,W_o$$
In the above equations, $X$ denotes the input data matrix (in Equation (10), the new input data to be predicted); $W$ represents the weights from the input layer to the hidden layer; $b$ represents the bias vector of the hidden layer; $g(\cdot)$ represents the activation function; $H$ represents the output of the hidden layer and $H^{T}$ its transpose; $Y$ represents the label matrix of the training data; $\lambda$ denotes the regularisation parameter; $I$ denotes the identity matrix; $W_o$ stands for the output weights obtained by learning; and $\hat{Y}$ is the predicted output.
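Because the input weights are random and only W_o is solved for, an ELM can be sketched in a few lines of NumPy, as below. The hidden size, regularisation value and sigmoid activation are illustrative assumptions, and the random data stand in for the rockburst sample library.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((287, 6))                    # placeholder feature matrix
Y = np.eye(4)[rng.integers(0, 4, 287)]      # one-hot label matrix for the four levels

n_hidden, lam = 64, 1e-3
W = rng.normal(size=(6, n_hidden))          # random input-to-hidden weights (never trained)
b = rng.normal(size=n_hidden)               # hidden-layer bias
g = lambda z: 1.0 / (1.0 + np.exp(-z))      # sigmoid activation

H = g(X @ W + b)                                                    # Equation (8)
W_o = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ Y)    # Equation (9)
Y_hat = g(X @ W + b) @ W_o                                          # Equation (10)
print(Y_hat.argmax(axis=1)[:5])             # predicted intensity classes
```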

2.5. Convolutional Neural Networks (CNNs)

CNN is a class of feedforward networks with a deep structure that employ convolutional computation [37] and are capable of capturing local patterns and features in the data; their main components are convolutional, pooling and fully connected layers. Considering a single channel for simplicity, the input of the $l$th layer of the CNN is denoted $x^{l} \in \mathbb{R}^{H^{l} \times W^{l}}$, and the output obtained after $x^{l}$ has passed through the operations of the $l$th layer is denoted $x^{l+1} \in \mathbb{R}^{H^{l+1} \times W^{l+1}}$.
Convolutional layers extract features from the input using convolution kernels; convolutional neural networks generally require an increasing number of kernels as the layers become deeper. The convolution operation can be represented by the following equation:
$$x^{l+1}_{i^{l+1},\,j^{l+1}} = \sum_{i=0}^{H}\sum_{j=0}^{W} w_{i,j} \times x^{l}_{i^{l+1}+i,\;j^{l+1}+j}$$
where $w_{i,j}$ are the weights learnt by the convolution kernel and $(i^{l+1}, j^{l+1})$ is the positional coordinate of the convolution result, which needs to satisfy the following expression:
$$0 \le i^{l+1} < H^{l} - H + 1, \qquad 0 \le j^{l+1} < W^{l} - W + 1$$
where $H$ and $W$ are the dimensions of the convolution kernel.
The pooling layer mainly serves to reduce feature dimensionality, prevent overfitting and provide feature invariance. Pooling comes in two kinds: average pooling and maximum pooling. The expression for average pooling is as follows:
$$x^{l+1}_{i^{l+1},\,j^{l+1}} = \frac{1}{HW}\sum_{i=0}^{H}\sum_{j=0}^{W} x^{l}_{i^{l+1}\times H + i,\;j^{l+1}\times W + j}$$
The expression for maximum pooling is as follows:
$$x^{l+1}_{i^{l+1},\,j^{l+1}} = \max_{0 \le i < H,\;0 \le j < W}\; x^{l}_{i^{l+1}\times H + i,\;j^{l+1}\times W + j}$$
The fully connected layer maps feature representations into sample labels. It is a feed-forward neural network:
$$x^{l+1} = f(w x^{l} + b)$$
In this context, f represents the activation function, w represents the weight matrix and b represents the bias matrix.
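As a toy illustration of Equations (11), (12) and (14), the NumPy sketch below convolves a small single-channel feature map with a 2 × 2 kernel and then applies 2 × 2 maximum pooling; the numbers are arbitrary and the loops are written for clarity rather than speed.

```python
import numpy as np

x = np.arange(16.0).reshape(4, 4)          # input feature map x^l
w = np.array([[1.0, 0.0], [0.0, -1.0]])    # 2x2 convolution kernel weights

H, W = w.shape
conv = np.zeros((x.shape[0] - H + 1, x.shape[1] - W + 1))
for i in range(conv.shape[0]):             # output positions allowed by Equation (12)
    for j in range(conv.shape[1]):
        conv[i, j] = (w * x[i:i + H, j:j + W]).sum()   # Equation (11)

pooled = x.reshape(2, 2, 2, 2).max(axis=(1, 3))        # 2x2 maximum pooling, Equation (14)
print(conv)
print(pooled)
```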

2.6. Bidirectional Gated Recurrent Units (BiGRUs)

BiGRU is an improved model based on GRU, which consists of a pair of forward-backward GRUs, where both GRUs simultaneously learn feature mapping from the forward and reverse directions [38] and extract the sequence features, and then, the two output vectors from the forward and backward GRUs are concatenated to yield the final result. Since the model is able to capture the information from forward and backward directions, it is able to better understand the features and structures in the sequence and thus has higher model efficiency, and the related principle is shown in Figure 4.
When dealing with classification problems, BiGRU is usually used as a feature extractor to convert sequence data into fixed-length feature vectors, which are then fed into a classifier for classification.
(1)
Calculate the hidden state sequence of BiGRU. The hidden state sequences are computed separately for forward GRU and reverse GRU, and then, they are connected to form the hidden state sequences of BiGRU, the mathematical expression of which is shown below:
$$z_t = \sigma(w_z[h_{t-1}, x_t] + b_z)$$
$$r_t = \sigma(w_r[h_{t-1}, x_t] + b_r)$$
$$\tilde{h}_t = \tanh(w_h[r_t \odot h_{t-1}, x_t] + b)$$
$$h_t = (1 - z_t)\odot h_{t-1} + z_t \odot \tilde{h}_t$$
where $z_t$ is the update gate, $r_t$ is the reset gate, $x_t$ denotes the input at moment $t$, $\tilde{h}_t$ represents the candidate state at moment $t$, $h_t$ stands for the hidden state at moment $t$, $w_z$ and $w_r$ represent the weight matrices of the update and reset gates, respectively, and $b_z$ and $b_r$ are their respective biases.
(2)
Feature Extraction Process. Pooling operations, such as average pooling or maximum pooling, are performed on the hidden layer, and the computational formulas are given in (13) and (14), to convert the variable-length hidden state sequence into a fixed-length feature vector.
(3)
Classification of feature vectors. Its mathematical expression is as follows:
$$\mathrm{Output} = \mathrm{softmax}(W_{fc}\,\mathrm{Feature} + b_{fc})$$
In this context, $W_{fc}$ represents the weight matrix of the fully connected layer, $b_{fc}$ denotes the bias vector and softmax is the activation function of the classifier, which is used to compute the probability distribution over the categories.

2.7. Attention Mechanism

The core principle of the Attention mechanism is a series of weighted summation operations that re-assign the feature weight coefficients according to the importance of the hidden states [39]. It can screen data, extract key information and adjust the weights dynamically, and thus has a strong advantage in dealing with nonlinear and complex problems. In this study, the Attention layer is added after the BiGRU layer because the Attention mechanism enables the model to identify key information about rockbursts and ignore irrelevant information, improving the prediction accuracy. The formula is as follows:
$$\lambda_t = \tanh(w_t h_t + b_t), \qquad a_t = \frac{\exp(\lambda_t)}{\sum_{k}\exp(\lambda_k)}, \qquad v_t = \sum_{t} a_t h_t$$
where $h_t$ is the hidden state after the BiGRU layer, $\lambda_t$ is the new state, $a_t$ is the attention weight and $v_t$ is the new feature vector finally obtained.
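A minimal NumPy sketch of Equation (18) is given below, assuming h holds the BiGRU hidden states (one row per time step); w_t and b_t are illustrative random parameters rather than learned values.

```python
import numpy as np

rng = np.random.default_rng(0)
h = rng.normal(size=(10, 8))                  # 10 time steps, 8 hidden units from the BiGRU
w_t = rng.normal(size=8)                      # attention weight vector (illustrative)
b_t = 0.1                                     # attention bias (illustrative)

scores = np.tanh(h @ w_t + b_t)               # lambda_t: one score per time step
a = np.exp(scores) / np.exp(scores).sum()     # a_t: softmax-normalised attention weights
v = (a[:, None] * h).sum(axis=0)              # v_t: weighted sum of the hidden states
print(a.round(3), v.shape)
```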

2.8. Northern Goshawk Optimisation Algorithm (NGO)

The Northern Goshawk Optimisation algorithm (NGO) is a population-based optimisation algorithm with strong global search capability, fast convergence, good robustness and flexibility. The NGO algorithm simulates the hunting behaviour of the northern goshawk, including prey recognition and attack, and chase and escape [40], and establishes mathematical models for the different phases of the hunt, as shown in Figure 5.
(1)
Prey identification phase. The northern goshawk will randomly select a prey and then quickly attack it. Its mathematical expression is shown below:
$$P_i = X_k, \quad i = 1, 2, \ldots, N, \quad k = 1, 2, \ldots, i-1, i+1, \ldots, N$$
$$x_{i,j}^{new,P1} = \begin{cases} x_{i,j} + q\,(p_{i,j} - E\,x_{i,j}), & F_{P_i} < F_i \\ x_{i,j} + q\,(x_{i,j} - E\,p_{i,j}), & F_{P_i} \ge F_i \end{cases}$$
$$X_i = \begin{cases} X_i^{new,P1}, & F_i^{new,P1} < F_i \\ X_i, & F_i^{new,P1} \ge F_i \end{cases}$$
where $P_i$ represents the prey position corresponding to the $i$th goshawk; $N$ stands for the population size; $k$ is a random integer in $[1, N]$ not equal to $i$; $F_{P_i}$ is the objective function value of the prey and $F_i$ is the objective function value corresponding to the $i$th solution; $X_i$ is the current solution of the $i$th goshawk, $x_{i,j}$ is the value of the $i$th goshawk in the $j$th dimension and $x_{i,j}^{new,P1}$ denotes the value of the $i$th goshawk in the $j$th dimension in the first hunting phase $P1$; $F_i^{new,P1}$ is the objective function value corresponding to the new solution; and $q$ and $E$ are random parameters, where $q$ is a random number in the interval $[0, 1]$ and $E$ takes the value 1 or 2.
(2)
Pursuit and escape phase. After a northern goshawk has captured its prey, the prey will often try to escape. During the chase, given the high speeds typically exhibited by northern goshawks, they have the ability to capture prey in a variety of situations. Assuming that the hunt is at an attack position at radius R at this point, the mathematical expression is shown below:
$$x_{i,j}^{new,P2} = x_{i,j} + R\,(2r - 1)\,x_{i,j}$$
$$R = 0.02\left(1 - \frac{t}{T}\right)$$
$$X_i = \begin{cases} X_i^{new,P2}, & F_i^{new,P2} < F_i \\ X_i, & F_i^{new,P2} \ge F_i \end{cases}$$
where $x_{i,j}^{new,P2}$ denotes the value of the $j$th dimension of the $i$th northern goshawk in the second hunting stage $P2$; $t$ denotes the current iteration number and $T$ the maximum number of iterations; $R$ stands for the attack radius; $r$ is a random number in $[0, 1]$; and $F_i^{new,P2}$ denotes the objective function value for this stage.
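The two phases can be condensed into a short optimisation loop, sketched below in NumPy for minimising a toy objective. The population size, iteration count and search bounds are illustrative assumptions, not the settings used for hyperparameter tuning in this paper.

```python
import numpy as np

def ngo_minimise(f, dim=2, n_pop=20, T=100, lb=-5.0, ub=5.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n_pop, dim))     # initial goshawk positions
    F = np.apply_along_axis(f, 1, X)               # objective values
    for t in range(1, T + 1):
        for i in range(n_pop):
            # Phase 1: prey identification and attack (Equations (19)-(21))
            k = rng.choice([j for j in range(n_pop) if j != i])
            P, q, E = X[k], rng.random(dim), rng.integers(1, 3)
            x_new = X[i] + q * (P - E * X[i]) if F[k] < F[i] else X[i] + q * (X[i] - E * P)
            x_new = np.clip(x_new, lb, ub)
            if f(x_new) < F[i]:
                X[i], F[i] = x_new, f(x_new)
            # Phase 2: chase and escape within the shrinking radius R (Equations (22)-(24))
            R = 0.02 * (1 - t / T)
            x_new = np.clip(X[i] + R * (2 * rng.random(dim) - 1) * X[i], lb, ub)
            if f(x_new) < F[i]:
                X[i], F[i] = x_new, f(x_new)
    best = int(F.argmin())
    return X[best], F[best]

# Example: minimise the sphere function; in this paper the objective would be the
# validation error of the CNN-BiGRU-Attention model as a function of its hyperparameters.
print(ngo_minimise(lambda x: float((x ** 2).sum())))
```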

2.9. CNN-BiGRU-Attention Hybrid Neural Network Model Based on NGO Optimisation Algorithm

While CNN, BiGRU, Attention mechanism [41] and NGO algorithms have been utilised in various domains, this study represents the first instance of integrating these methods for predicting rockburst intensity. Furthermore, specific enhancements and optimisations were implemented to address potential limitations.
Given the sensitivity of the original CNN-BiGRU-Attention model to initial parameter settings and its susceptibility to local optimal solutions during training, we propose a hybrid neural network model that combines CNN-BiGRU-Attention with the NGO optimisation algorithm.
This approach leverages CNN’s feature extraction capabilities, BiGRU’s proficiency in time series modelling and the Attention mechanism’s ability to weigh important features with the aim of enhancing accuracy and robustness in rockburst intensity prediction. The Northern Goshawk Optimization Algorithm is employed for optimising hyperparameters (such as the size of CNN’s convolutional kernel, number of GRU units and learning rate), as well as weight initialisation. With its strong global search capabilities and adaptability to complex prediction scenarios related to rockbursts, this optimisation algorithm effectively improves the classification prediction results’ accuracy and reliability. The model structure is shown in Figure 6:
Specific operation steps:
  • Data preprocessing: firstly, a sample library for data preprocessing and data partitioning must be established; the data set comprises 80% training data and 20% test data.
  • Hyper-parameter optimisation: the objective function is defined as the prediction error, and the NGO optimisation algorithm is employed to optimise the hyper-parameters of the model.
  • Data prediction: once the training set has been fed into the model, the NGO conducts a hyperparameter search; when this process has concluded, the model parameters are fixed. The data are then fed into the CNN-BiGRU-Attention network for the prediction of rockburst intensity levels (a minimal code sketch of this pipeline follows this list).
  • Model evaluation: the accuracy, precision, recall and F1 score were employed as indicators to assess the precision of the model’s predictions and to facilitate comparisons and analyses with other prediction models.
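The sketch below shows one possible Keras realisation of the CNN-BiGRU-Attention pipeline under stated assumptions: the six indicators are treated as a length-6 sequence, the layer sizes, kernel size and learning rate stand in for values the NGO search would select, and Keras' built-in self-attention layer is used as one way of implementing the attention re-weighting. It is illustrative, not the authors' implementation.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

rng = np.random.default_rng(0)
X = rng.random((287, 6, 1)).astype("float32")   # placeholder: six indicators as a length-6 sequence
y = rng.integers(0, 4, 287)                     # placeholder intensity labels (I-IV -> 0-3)

inputs = layers.Input(shape=(6, 1))
h = layers.Conv1D(32, kernel_size=3, padding="same", activation="relu")(inputs)  # CNN feature extraction
h = layers.MaxPooling1D(pool_size=2)(h)
h = layers.Bidirectional(layers.GRU(32, return_sequences=True))(h)               # BiGRU sequence modelling
h = layers.Attention()([h, h])                                                   # attention re-weighting
h = layers.GlobalAveragePooling1D()(h)
outputs = layers.Dense(4, activation="softmax")(h)                               # four intensity levels

model = models.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# 80/20 split as described in the preprocessing step
model.fit(X[:230], y[:230], validation_data=(X[230:], y[230:]), epochs=5, verbose=0)
print(model.evaluate(X[230:], y[230:], verbose=0))
```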

3. Establishment of the Database

3.1. Data Sources

The study extensively collected data from domestic and international rockburst-related engineering cases and used them to establish a rockburst database (Supplementary Materials). This database consists of 287 groups of data, including 8 groups of data from Li et al. [24], 15 groups of data from Xue et al. [25], 7 groups of data from Wu et al. [42], 12 groups of data from Pu et al. [23], 108 groups of data from Long et al. [43] and 137 groups of data from Afraei et al. [4], which are shown in Table 1.
In general, the selection of predictive indicators should meet the requirements of easy access, strong representativeness and strong characterisation. This study therefore selected six quantitative indexes as the characteristic parameters for analysing rockbursts: the maximum tangential stress of the surrounding rock σθ (MPa), the uniaxial compressive strength of the rock σc (MPa), the tensile strength of the rock σt (MPa), the stress ratio σθ/σc, the rock brittleness ratio σc/σt and the elastic strain energy index of the rock W_et. From the viewpoint of the geological structure of rockbursts, the maximum tangential stress σθ (MPa) of the surrounding rock body is usually one of the main driving factors for the occurrence of rockbursts. Although the maximum (minimum) principal stress, equivalent stress and deviatoric stress can also reflect the stress state, the maximum tangential stress of the surrounding rock characterises the local stress concentration and damage risk of the surrounding rock body more intuitively and directly, and many existing studies and engineering practices have shown that it plays an important role in rockburst prediction and stability analysis of the surrounding rock [3]. The uniaxial compressive strength σc (MPa) and tensile strength σt (MPa) reflect the mechanical properties of the rock, have been documented in actual rockburst cases and can be used as indicators for rockburst prediction and evaluation. From the energy point of view, a rockburst is the result of the rapid release of elastic strain energy after it has accumulated to a certain degree; therefore, the rock elastic strain energy index W_et was selected as a rockburst prediction index. In addition, the stress ratio σθ/σc and the rock brittleness ratio σc/σt have also been proposed by many researchers as indicators for predicting rockburst intensity [44,45,46].
In this paper, the rockburst rating criteria selected are a posteriori derived from the rockburst intensity level proposed by Zhou et al. [20] (Table 2), which categorises the intensity level into four distinct categories: None, Light, Moderate and Strong. Table 3 demonstrates the calculation of the statistical analysis metrics for each indicator (i.e., standard deviation, kurtosis, maximum, mean, median and range).

3.2. Data Description and Analysis

The rockburst case datasets used in this study are all from reliable, verifiable databases; the distribution of the rockburst classes is shown in Figure 7, in which class I has the fewest samples, with 47 cases, class II has 78 cases, class III has the most, with 121 cases, and class IV has 41 cases. The violin plots of the initial dataset are shown in Figure 8. A violin plot combines a box plot with a density estimate, and its shape characterises the overall distribution of the data: a wider violin indicates that the data are more evenly distributed, and a narrower violin indicates that the data are more concentrated. The box part in the middle of the violin plot indicates the median and interquartile range of the data, and the concentration of the scatter represents the density of a certain value. By observing Figure 7 and Figure 8, it can be found that there are some category imbalances or sampling biases in this dataset, and the appearance of these outliers may be related to the special working condition samples collected [47].

3.3. Parameter Correlation Analysis

In order to assess the correlation among the six characteristic parameters of rockbursts, a correlation analysis based on the Pearson correlation coefficient method [48] was carried out on the above rockburst prediction sample data; the formula is shown in Equation (25), and the results are shown in Figure 9. The magnitude of the absolute value of the correlation coefficient indicates the strength of the correlation. The shape of the ellipse also indicates the strength of the correlation: the rounder the ellipse, the weaker the correlation, and the flatter the ellipse, the stronger the correlation.
$$r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}}$$
where $\bar{x}$ and $\bar{y}$ are the mean values of the $n$ test values, respectively, and $r \in [-1, 1]$.
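Equation (25) corresponds to the pairwise correlation matrix that tools such as NumPy compute directly; the sketch below assumes the six indicators are stored as columns of an array (random placeholders here) and prints the 6 × 6 matrix of r values visualised in Figure 9.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.random((287, 6))             # columns: sigma_theta, sigma_c, sigma_t, stress ratio, brittleness ratio, Wet
corr = np.corrcoef(data, rowvar=False)  # 6x6 Pearson correlation matrix, entries in [-1, 1]
print(corr.round(2))
```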
The diagonal positions in Figure 9 show histograms of the original dataset with fitted normal curves; in a normal distribution, the mean is the centre point of the data, and the standard deviation determines the width of the curve [49] and measures the degree of dispersion between the data points and the mean. A wider histogram indicates a larger standard deviation, and vice versa. As can be seen from the figure, most of the data are fairly concentrated, and the number of data points decreases gradually on both sides of the mean, while the histogram of W_et deviates more strongly from normality, which indicates that there are certain outliers in the sample and also reflects a certain degree of uncertainty in W_et.
A correlation coefficient with an absolute value greater than 0.5 implies a high degree of correlation between two features [50,51]. From Figure 9, we can see that the correlation between σc and σt reaches 0.6388, which indicates a strong correlation between the two, and it has been pointed out in the related literature [52] that there may be some connection between them. The correlation between σc and W_et is as high as 0.6124, which explains to a certain extent that when the rock is subjected to high stress, a large amount of elastic strain energy accumulates in its interior; if the rock's compressive strength is low and cannot effectively resist the external stress, the rock is prone to damage [53], resulting in an abrupt release of elastic strain energy, which may cause a rockburst. σt and σc/σt have a correlation of −0.5983, a strong negative correlation, which indicates that the brittle characteristics of the rock are closely related to rockbursts: the greater the brittleness of the rock, the higher the rockburst tendency [54,55]; under high stress, rocks with lower tensile strength and higher brittleness are more easily damaged and more prone to rockbursts. The correlations between the other features are weak, indicating strong independence between the features. In summary, the results of the data correlation analysis are consistent with engineering reality, which further verifies the reliability and authenticity of the dataset.

4. Prediction Results and Analysis

The eight rockburst intensity level prediction models in this study differ in structure and principle, so each model is applied to the same data to predict the rockburst intensity level and to compare the effectiveness and performance of the models.

4.1. Model Prediction Results

By inputting the test set into each rockburst intensity prediction classifier, the prediction results of the classifiers of the different models on the test set can be obtained. The confusion matrix, a table employed to assess the efficiency of a classification model, presents the model's prediction results as the correspondence between the real categories and the predicted categories. Figure 10 illustrates the confusion matrix corresponding to the classification prediction results of each model.

4.2. Evaluation and Comparison of Model Prediction Performance

In order to assess the prediction efficiency and generalisation ability of the different rockburst intelligent prediction models, the study selected the Accuracy, Precision, Recall and F1 Score, defined in Equations (26)–(29) [28], to evaluate the performance of each method; the calculated results are shown in Table 4.
$$Accuracy = \frac{TP + TN}{TP + FP + TN + FN}$$
$$Precision = \frac{TP}{TP + FP}$$
$$Recall = \frac{TP}{TP + FN}$$
$$F1 = \frac{2 \times Precision \times Recall}{Precision + Recall}$$
TP represents true positive, TN true negative, FP false positive and FN false negative.
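For a multi-class problem such as the four intensity levels, these per-class quantities are typically averaged; the sketch below uses scikit-learn with macro-averaging on small illustrative label vectors (the averaging scheme is an assumption, as the paper does not state which one was used).

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [0, 1, 2, 3, 2, 1, 0, 3]   # illustrative actual intensity levels
y_pred = [0, 1, 2, 2, 2, 1, 0, 3]   # illustrative predicted intensity levels

print("Accuracy :", accuracy_score(y_true, y_pred))                      # Equation (26)
print("Precision:", precision_score(y_true, y_pred, average="macro"))    # Equation (27)
print("Recall   :", recall_score(y_true, y_pred, average="macro"))       # Equation (28)
print("F1 score :", f1_score(y_true, y_pred, average="macro"))           # Equation (29)
```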
Table 4 presents the evaluation results of the different models; the best prediction performance is achieved by the NGO-CNN-BiGRU-Attention model, with an accuracy of 98.28%, a precision of 97.92%, a recall of 97.73% and an F1 score of 97.82%, all higher than those of the other models. The remaining models, in order, are CNN-BiGRU-Attention, RF, BiGRU, ELM, CNN, KNN and SVM, with accuracies of 96.55%, 94.83%, 93.10%, 89.66%, 86.21%, 82.76% and 79.31%, respectively. It is not difficult to find that the traditional machine learning methods differ in their performance on rockburst intensity level prediction, which may be due to the poor robustness of a single model. By combining CNN and BiGRU, this study enables the model to better capture the complex patterns in the data, and introducing the attention mechanism improves the model's focus and prediction accuracy. In addition, the NGO algorithm helps the model better capture global information. Thus, although the CNN-BiGRU-Attention rockburst prediction model performs well, with a precision of 94.95%, a recall of 94.95% and an F1 score of 94.95%, there is still a certain gap compared with the NGO-CNN-BiGRU-Attention model, which also reflects the NGO algorithm's effective automatic hyperparameter search and strong robustness.
In conclusion, by comparing the predictive capabilities of various models, it is not difficult to find that the NGO-CNN-BiGRU-Attention model, as a superior model, makes the overall model have a stronger characterisation ability by combining a variety of deep learning techniques and is able to forecast the intensity level of rockbursts with a high accuracy rate and provides certain guiding significance for the research of rockburst disaster prevention and control.

4.3. Feature Importance Analysis Based on SHAP Methodology

SHAP is a method employed to elucidate the rationale behind machine learning predictive models [56], which is particularly applicable to black-box models. It determines the extent to which each input variable contributes to the final prediction of the model by using the concept of Shapley values. For the rockburst intensity level prediction problem, the SHAP method can be employed to analyse the relative importance of sample features. The specific results are presented in Figure 11.
The degree of feature importance demonstrates the extent to which each feature affects the prediction results. As illustrated in Figure 11, each rectangle represents the corresponding feature importance: the more important the feature, the taller the rectangle, and the less important the feature, the shorter the rectangle. The elastic strain energy index W_et has the largest feature importance, 0.21, indicating that this feature has the greatest impact on the rockburst intensity level; the stress ratio σθ/σc has a feature importance as high as 0.201, indicating that it also has a large impact, and its importance is close to that of the elastic strain energy index W_et, so it can be speculated that there may be a certain degree of correlation between these two features. In addition, the feature importances of the maximum tangential stress σθ (MPa), the rock uniaxial compressive strength σc (MPa), the rock tensile strength σt (MPa) and the rock brittleness ratio σc/σt are lower than the first two, but their impact on the intensity level of rockbursts cannot be ignored. Therefore, these features should be taken into account in engineering applications and research to make the prediction of rockburst intensity levels more reliable.
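A hedged sketch of such a SHAP analysis is given below. The model-agnostic KernelExplainer and a Random Forest stand-in are assumptions (the paper does not state which explainer or background sample it used), and the bar plot corresponds to the mean absolute SHAP value per feature, as in Figure 11.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((287, 6))                          # placeholder feature matrix
y = rng.integers(0, 4, 287)                       # placeholder intensity labels
feature_names = ["sigma_theta", "sigma_c", "sigma_t",
                 "stress_ratio", "brittleness_ratio", "Wet"]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
explainer = shap.KernelExplainer(model.predict_proba, shap.sample(X, 50))   # background sample
shap_values = explainer.shap_values(X[:20])                                 # explain 20 cases

# Bar plot of mean |SHAP| per feature, analogous to the feature importances in Figure 11
shap.summary_plot(shap_values, X[:20], feature_names=feature_names, plot_type="bar")
```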

5. Engineering Validation

The Daxiangling Tunnel is located to the south of Ya’an City, Sichuan Province, and is the longest tunnel currently under construction in Southwest China, with a maximum burial depth of 1648 m. In this study, eight sets of data were selected from the rockburst data of the Daxiangling Tunnel [57] to assess the rockburst hazard and were predicted using the NGO-CNN-BiGRU-Attention rockburst intensity prediction model. Table 5 presents the predictions: all rockburst cases were predicted correctly, indicating that the NGO-CNN-BiGRU-Attention model has strong generalisation ability and engineering practicability, which corroborates the previous conclusions.
In order to further prove the effectiveness of the NGO-CNN-BiGRU-Attention model, the prediction results of this model were compared with those of other models (see Table 6), and the results showed that the accuracy of the NGO-CNN-BiGRU-Attention model was the highest, thus indicating that the model is an effective rockburst prediction method.
The above results show that the NGO-CNN-BiGRU-Attention model has strong prediction performance and generalisation ability in the area of rockburst intensity prediction, but to some extent, the model still has certain deficiencies. Due to the different selections of the sample library, the generalisation ability of the trained model varies; the contribution of different predictors to the rockburst intensity level is different, and the performance of the model will be influenced to some degree by the different predictors selected, so it is necessary to further explore this effect in the research of rockburst prediction.

6. Conclusions

In the present study, for the rockburst intensity level prediction problem, eight machine learning classification prediction models were established to predict the rockburst intensity level based on six evaluation indexes, and their performances were compared and analysed. Furthermore, the study also utilised the SHAP method to conduct a thorough analysis of the models to explain the effect or importance of each feature on the prediction of the rockburst intensity level, and the final conclusions are as follows.
The study collected 287 sets of real rockburst data from different regions, geological conditions and times to constitute a sample library for the machine learning prediction model, which included six characteristic parameters, such as the maximum tangential stress σ θ (MPa), uniaxial compressive strength of rock σ c (MPa), the tensile strength of rock σ t (MPa), stress ratio σ θ / σ c , brittleness ratio of rock σ c / σ t and elastic strain energy index W e t , and it analysed the correlation between the different evaluation indexes using the Pearson correlation coefficient method.
The study constructed eight machine learning classification prediction models and evaluated the prediction performance of each model, finding that the NGO-CNN-BiGRU-Attention model achieved an accuracy of 0.98, the highest among all models tested; the accuracies of the other models were, in order, 0.96 for CNN-BiGRU-Attention, 0.94 for RF, 0.93 for BiGRU, 0.89 for ELM, 0.86 for CNN, 0.82 for KNN and 0.79 for SVM. Since the NGO algorithm enables the model to better capture global information, the CNN-BiGRU-Attention rockburst prediction model, although it shows good prediction performance, still has a certain gap in accuracy compared with the NGO-CNN-BiGRU-Attention model. This reflects the good automatic optimisation and robustness of the NGO algorithm and also shows that the NGO-CNN-BiGRU-Attention model performs well in rockburst intensity level prediction, which can provide some guidance for related research.
For the prediction results of the different models, the SHAP method was employed to quantify the impact of the different evaluation indexes on the intensity level of rockbursts. The results show that the elastic strain energy index W_et and the stress ratio σθ/σc have the greatest influence on the intensity level of rockbursts, with feature importances as high as 0.21 and 0.201, respectively. The other features are not as important, but their influence on the intensity level of rockbursts is not negligible. Therefore, these features should be taken into account in engineering applications and research to make the prediction of rockburst intensity levels more reliable.
The NGO-CNN-BiGRU-Attention model is applied to eight sets of real rockburst cases in the Daxiangling Tunnel, and the findings indicate that the model has excellent prediction performance, strong generalisation ability and good engineering practicability, which can be applied to the prediction of the rockburst intensity level in real engineering.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/app14135719/s1, Database of rockburst samples.

Author Contributions

Conceptualisation, H.L. and T.M.; methodology, K.P.; software, H.L.; validation, T.M., K.L. and H.L.; formal analysis, T.M.; investigation, X.H.; resources, H.L.; data curation, T.M.; writing—original draft preparation, H.L.; writing—review and editing, S.X.; visualisation, T.M.; supervision, Y.L.; project administration, K.P.; funding acquisition, Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Key Research and Development Program of China—2023 Key Special Project (No. 2023YFC2907400), the National Natural Science Foundation of China (No.52104109) and the Natural Science Foundation of Hunan Province, China (No.2022JJ40602).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
RF	Random Forest
SVM	Support Vector Machine
KNN	K-Nearest Neighbours algorithm
ELM	Extreme Learning Machine
CNN	Convolutional Neural Network
BiGRU	Bidirectional Gated Recurrent Units
NGO	Northern Goshawk Optimisation Algorithm
σθ	maximum tangential stress of the surrounding rock
σc	uniaxial compressive strength of rock
σt	tensile strength of rock
σθ/σc	stress ratio of rock
σc/σt	brittleness ratio of rock
W_et	elastic strain energy index of the rock

References

  1. Wang, C.; Xu, J.; Li, Y.; Wang, T.; Wang, Q. Optimization of BP Neural Network Model for Rockburst Prediction under Multiple Influence Factors. Appl. Sci. 2023, 13, 2741. [Google Scholar] [CrossRef]
  2. Qian, Q. Definition, mechanism, classification and quantitative forecast model for rockburst and pressure bump. Rock Soil Mech. 2014, 35, 1–6. [Google Scholar]
  3. Xu, G.; Li, K.; Li, M.; Qin, Q.; Yue, R. Rockburst Intensity Level Prediction Method Based on FA-SSA-PNN Model. Energies 2022, 15, 5016. [Google Scholar] [CrossRef]
  4. Afraei, S.; Shahriar, K.; Madani, H.S. Developing intelligent classification models for rock burst prediction after recognizing significant predictor variables, Section 1: Literature review and data preprocessing procedure. Tunn. Undergr. Space Technol. Inc. Trenchless Technol. Res. 2019, 83, 324–353. [Google Scholar] [CrossRef]
  5. Li, N.; Feng, X.; Jimenez, R. Predicting rock burst hazard with incomplete data using Bayesian networks. Tunn. Undergr. Space Technol. Inc. Trenchless Technol. Res. 2017, 61, 61–70. [Google Scholar] [CrossRef]
  6. Li, M.L.; Li, K.G.; Qin, Q.C.; Wu, S.; Liu, Y.; Liu, B. Discussion and selection of machine learning algorithm model for rockburst intensity grade prediction. Chin. J. Rock Mech. Eng. 2021, 40, 2806–2816. [Google Scholar]
  7. Qiao, L.; Dong, J.; Liu, J.; Chen, L. Review on the Study Progress of Rockburst Mechanism and Prediction in Underground Metal Mines in China. Met. Mine 2023, 14–28. [Google Scholar]
  8. Li, X.B.; Gong, F.Q.; Wang, S.F.; Li, D.Y.; Tao, M.; Zhou, J.; Huang, L.Q.; Ma, C.D.; Du, K.; Feng, F. Coupled static-dynamic loading mechanical mechanism and dynamic criterion of rockburst in deep hard rock mines. Chin. J. Rock Mech. Eng. 2019, 38, 708–723. [Google Scholar]
  9. Liu, W.; Li, J.; Li, L. Review of research status on rockburst. Gold 2010, 31, 26–28. [Google Scholar]
  10. Wang, Y.; Jing, H.; Zhang, Q.; Wei, L.; Xu, Z. A normal cloud model-based study of grading prediction of rockburst intensity in deep underground engineering. Rock Soil Mech. 2015, 36, 1189–1194. [Google Scholar]
  11. Yang, J.; Wang, G.Y.; Liu, Q.; Guo, Y.; Liu, Y.; Gan, W.; Liu, Y. Retrospect and Prospect of Research of Normal Cloud Model. Chin. J. Comput. 2018, 41, 724–744. [Google Scholar]
  12. Wu, L. The Study on Fuzzy Comprehensive Evaluation and Its Application. Master’s Thesis, Taiyuan University of Technology, Taiyuan, China, 2006. [Google Scholar]
  13. Yang, J.; Li, X.; Zhou, Z.; Lin, Y. A Fuzzy Assessment Method of Rock-burst Prediction Based on Rough Set Theory. Met. Mine 2010, 26–29. [Google Scholar]
  14. Liu, S. Emergence and Development of Grey System Theory and Its Forward Trends. J. Nanjing Univ. Aeronaut. Astronaut. 2004, 36, 267–272. [Google Scholar]
  15. Cai, W.; Yang, C. Basic theory and methodology on Extenics. Chin. Sci. Bull. 2013, 58, 1190–1199. [Google Scholar]
  16. Wang, Y.C.; Shang, Y.Q.; Sun, H.Y.; Yan, X.S. Research and application of rockburst intensity prediction model based on entropy coefficient and ideal point method. J. China Coal Soc. 2010, 35, 218–221. [Google Scholar]
  17. Jia, Y.P.; Lv, Q.; Shang, Y.Q.; Du, L.L.; Zhi, M.M. Rockburst prediction based on rough set and ideal point method. J. Zhejiang Univ. Eng. Sci. 2014, 48, 498–503. [Google Scholar]
  18. Luo, L.; Cao, P. Model of weighted distance discriminant analysis and application for deep roadway. J. Cent. South Univ. Sci. Technol. 2012, 43, 3971–3975. [Google Scholar]
  19. Wang, J.; Li, X.; Yang, J. A Weighted Mahalanobis Distance Discriminant Analysis for Predicting Rock-Burst in Deep Hard Rocks. J. Min. Saf. Eng. 2011, 28, 395–400. [Google Scholar]
  20. Zhou, J.; Li, X.; Shi, X. Long-term prediction model of rockburst in underground openings using heuristic algorithms and support vector machines. Saf. Sci. 2012, 50, 629–644. [Google Scholar] [CrossRef]
  21. Dong, L.; Li, X.; Peng, K. Prediction of rockburst classification using Random Forest. Trans. Nonferrous Met. Soc. China 2013, 23, 472–477. [Google Scholar] [CrossRef]
  22. Wu, S.; Zhang, C.; Cheng, Z. Prediction of intensity classification of rockburst based on PCA-PNN principle. J. China Coal Soc. 2019, 44, 2767–2776. [Google Scholar]
  23. Pu, Y.; Apel, B.D.; Xu, H. Rockburst prediction in kimberlite with unsupervised learning method and support vector classifier. Tunn. Undergr. Space Technol. Inc. Trenchless Technol. Res. 2019, 90, 12–18. [Google Scholar] [CrossRef]
  24. Li, D.; Liu, Z.; Armaghani, D.J.; Xiao, P.; Zhou, J. Novel ensemble intelligence methodologies for rockburst assessment in complex and variable environments. Sci. Rep. 2022, 12, 1844. [Google Scholar] [CrossRef] [PubMed]
  25. Xue, Y.; Bai, C.; Qiu, D.; Kong, F.; Li, Z. Predicting rockburst with database using particle swarm optimization and extreme learning machine. Tunn. Undergr. Space Technol. Inc. Trenchless Technol. Res. 2020, 98, 103287. [Google Scholar] [CrossRef]
  26. Wang, S.M.; Zhou, J.; Li, C.Q.; Armaghani, D.J.; Li, X.B.; Mitri, H.S. Rockburst prediction in hard rock mines developing bagging and boosting tree-based ensemble techniques. J. Cent. South Univ. 2021, 28, 527–542. [Google Scholar] [CrossRef]
  27. Ma, K.; Shen, Q.Q.; Sun, X.Y.; Ma, T.H.; Hu, J.; Tang, C.A. Rockburst prediction model using machine learning based on microseismic parameters of Qinling water conveyance tunnel. J. Cent. South Univ. 2023, 30, 289–305. [Google Scholar] [CrossRef]
  28. Ma, L.; Cai, J.; Dai, X.; Jia, R. Research on Rockburst Risk Level Prediction Method Based on LightGBM−TCN−RF. Appl. Sci. 2022, 12, 8226. [Google Scholar] [CrossRef]
  29. Xu, J.; Yang, Y. A survey of ensemble learning approaches. J. Yunnan Univ. Nat. Sci. Ed. 2018, 40, 1082–1092. [Google Scholar]
  30. Kong, Y.; Jing, M. Research of the Classification Method Based on Confusion Matrixes and Ensemble Learning. Comput. Eng. Sci. 2012, 34, 111–117. [Google Scholar]
  31. Tan, W.; Hu, N.; Ye, Y.; Wu, M.; Huang, Z.; Wang, X. Rockburst intensity classification prediction based on four ensemble learning. Chin. J. Rock Mech. Eng. 2022, 41, 3250–3259. [Google Scholar]
  32. Dychkovskyi, R.; Falshtynskyi, V.; Ruskykh, V.; Cabana, E.; Kosobokov, O. A modern vision of simulation modelling in mining and near mining activity. E3S Web Conf. 2018, 60, 00014. [Google Scholar] [CrossRef]
  33. Zhou, Y.S.; Cui, J.L.; Zhou, L.Y.; Sun, H.X.; Liu, S.Q. Study on the Evaluation of Personal Credit Risk Based on the Improved Random Forest Model. Credit Ref. 2020, 38, 28–32. [Google Scholar]
  34. Gao, J.; Li, Y.; Guo, Z.; Tong, W. A Fault Diagnosis Method for Fire Control Systems Based on KPCA-ISSA-SVM. J. Ordnance Equip. Eng. 2024, 1–8. [Google Scholar]
  35. Zhang, Z.; Huang, Y.; Wang, H. A New KNN Classification Approach. Comput. Sci. 2008, 170–172. [Google Scholar]
  36. Xu, R.; Liang, X.; Qi, J.S.; Li, Z.Y.; Zhang, S.S. Advances and Trends in Extreme Learning Machine. Chin. J. Comput. 2019, 42, 1640–1670. [Google Scholar]
  37. Zhang, K.; Zuo, W.; Chen, Y.; Meng, D.; Zhang, L. Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising. IEEE Trans. Image Process. A Publ. IEEE Signal Process. Soc. 2017, 26, 3142–3155. [Google Scholar] [CrossRef] [PubMed]
  38. Cai, T.; Zeng, X. Short Term Load Forecasting Method Based on Multi-feature Extracted Attention-BiGRU. Hebei Electr. Power 2023, 42, 1–7. [Google Scholar]
  39. Xu, W.; Chen, X.; Zu, T. Research on RMB Exchange Rate Prediction Based on CNN BiGRU Attention Fusion Model. J. Anqing Norm. Univ. Nat. Sci. Ed. 2023, 29, 35–41. [Google Scholar]
  40. Li, B.; Guo, Z.; Gao, P. Application of improved northern goshawk optimization algorithm in photovoltaic array. J. Electron. Meas. Instrum. 2023, 37, 131–139. [Google Scholar]
  41. Zhang, J. A sensitive entity identification method for the Internet based on CNN and BiGRU-attention. Netw. Secur. Technol. Appl. 2020, 04, 61–65. [Google Scholar]
  42. Wu, S.; Wu, Z.; Zhang, C. Rock burst prediction probability model based on case analysis. Tunn. Undergr. Space Technol. Inc. Trenchless Technol. Res. 2019, 93, 103069. [Google Scholar] [CrossRef]
  43. Long, G.; Wang, H.; Hu, K.; Zhao, Q.; Zhou, H.; Shao, P.; Liao, J.; Gan, F.; He, Y. Probability prediction method for rockburst intensity based on rough set and multidimensional cloud model uncertainty reasoning. Environ. Earth Sci. 2024, 83, 84. [Google Scholar] [CrossRef]
  44. Adoko, A.C.; Gokceoglu, C.; Wu, L.; Zuo, Q.J. Knowledge-based and data-driven fuzzy modeling for rockburst prediction. Int. J. Rock Mech. Min. Sci. 2013, 61, 86–95. [Google Scholar] [CrossRef]
  45. Xie, S.; Lin, H.; Chen, Y.; Ma, T. Modified Mohr–Coulomb criterion for nonlinear strength characteristics of rocks. Fatigue Fract. Eng. Mater. Struct. 2024, 47, 2228–2242. [Google Scholar] [CrossRef]
  46. Xie, S.; Lin, H.; Duan, H. A novel criterion for yield shear displacement of rock discontinuities based on renormalization group theory. Eng. Geol. 2023, 314, 107008. [Google Scholar] [CrossRef]
  47. Ma, T.; Lin, Y.; Zhou, X.; Wei, P.; Li, R.; Su, J. Entropy weight-normal cloud model for predicting the risk of water breakout in coal bed floor. Chin. J. Saf. Sci. 2022, 32, 171–177. [Google Scholar]
  48. Chen, Y. Methods for Calculating the Correlation Coefficient. J. China Exam. 2011, 15–19. [Google Scholar]
  49. Ma, T.; Lin, Y.; Zhou, X.; Zhang, M. Grading Evaluation of Goaf Stability Based on Entropy and Normal Cloud Model. Adv. Civ. Eng. 2022, 2022, 9600909. [Google Scholar] [CrossRef]
  50. Wang, T.; Luo, R.; Ma, T.; Chen, H.; Zhang, K.; Wang, X.; Chu, Z.; Sun, H. Study and verification on an improved comprehensive prediction model of landslide displacement. Bull. Eng. Geol. Environ. 2024, 83, 90. [Google Scholar] [CrossRef]
  51. Xu, Y.P.; Lin, Z.H.; Ma, T.X.; She, C.; Xing, S.M.; Qi, L.Y.; Farkoush, S.G.; Pan, J. Optimization of a biomass-driven Rankine cycle integrated with multi-effect desalination, and solid oxide electrolyzer for power, hydrogen, and freshwater production. Desalination 2022, 525, 115486. [Google Scholar] [CrossRef]
  52. Sun, L. Discusses the connection between tensile strength and uniaxial compressive strength of rock. Agric. Sci. -Technol. Inf. 2012, 48–49. [Google Scholar]
  53. Zhou, H.; Xie, H.; Zuo, J. Developments in researches on mechanical behaviors of under the condition of high ground pressure in the depths. Adv. Mech. 2005, 35, 91–99. [Google Scholar]
  54. Zhou, Q.; Li, H.; Yang, C. Review of evaluation of rockburst and harzard in underground engineerings. Rock Soil Mech. 2003, 669–673. [Google Scholar]
  55. Zhang, J.; Ai, C.; Li, Y.W.; Zeng, J.; Qiu, D.Z. Brittleness evaluation index based on energy variation in the whole process of rock failure. Chin. J. Rock Mech. Eng. 2017, 36, 1326–1340. [Google Scholar]
  56. Qi, W.; Sun, R.; Zheng, T.; Qi, J. Prediction and analysis model for ground peak acceleration based on XGBoost and SHAP. Chin. J. Geotech. Eng. 2023, 45, 1934–1943. [Google Scholar]
  57. Xiong, Y.; Chen, G.; Liu, X. Rockburst Prediction Based on Limit Tree Machine Learning Algorithm. Chin. J. Undergr. Space Eng. 2023, 19, 908–919. [Google Scholar]
Figure 1. Schematic diagram of RF structure.
Figure 2. Schematic diagram of SVM structure.
Figure 3. Schematic diagram of KNN structure.
Figure 4. BiGRU Schematic.
Figure 5. Schematic diagram of the Northern Goshawk Optimisation algorithm.
Figure 6. Structure of CNN-BiGRU-Attention model.
Figure 7. Distribution of rockburst grades.
Figure 8. Violin plots of the rockburst data distribution: (a) the maximum tangential stress of the surrounding rock; (b) elastic strain energy index of the rock; (c) brittleness ratio of the rock; (d) tensile strength of rock; (e) the maximum tangential stress; (f) the stress ratio of rock.
Figure 9. Correlation of rockburst samples.
Figure 10. Confusion matrix diagram.
Figure 11. Schematic diagram of feature importance.
Table 1. Distribution of rockburst labelling in the database [4,23,24,25,42,43].
References | Number of Cases | None | Light | Moderate | Strong
Li et al. (2022) | 8 | 0 | 3 | 3 | 2
Xue et al. (2020) | 15 | 3 | 5 | 4 | 3
Wu et al. (2019) | 7 | 0 | 1 | 5 | 1
Pu et al. (2019) | 12 | 0 | 1 | 11 | 0
Long et al. (2024) | 108 | 19 | 32 | 43 | 14
Afraei et al. (2019) | 137 | 25 | 36 | 55 | 21
Total | 287 | 47 | 78 | 121 | 41
Table 2. Generic classification criteria for rockburst intensity.
Rockburst Label | Sign | Failure Characteristics
None rockburst |  | There is no sound of rock bursting or rock falling.
Light rockburst |  | The rock surrounding the area displays signs of spalling, cracking or striping. There is no evidence of ejection, and the sound is weak.
Moderate rockburst |  | The rocks surrounding the area exhibit significant deformation and fracturing, resulting in loose rock chips and sudden destruction. This is often accompanied by a crunchy squeaking sound, which is common in local caverns within the surrounding rock.
Strong rockburst |  | The rocks surrounding the tunnel are severely fractured and suddenly propelled into it, accompanied by strong bursts, roars, air jets and other storm phenomena. This causes rapid expansion into the deep surrounding rocks.
Table 3. Calculation of statistical analysis indicators for each indicator.
Parameters | STD | Kurt | Max | Min | Mean | Median | Range
σθ (MPa) | 32.65 | 0.85 | 167.20 | 2.60 | 43.61 | 49.50 | 164.60
σc (MPa) | 53.63 | 1.08 | 306.58 | 15.00 | 100.54 | 112.50 | 291.58
σt (MPa) | 4.47 | 1.50 | 22.60 | 0.38 | 5.16 | 5.20 | 22.22
σθ/σc | 0.56 | 43.46 | 5.26 | 0.05 | 0.41 | 0.44 | 5.21
σc/σt | 12.12 | 3.42 | 80.00 | 0.15 | 18.98 | 21.18 | 79.85
W_et | 2.10 | −0.62 | 10.57 | 0.85 | 3.93 | 4.63 | 9.72
Table 4. Evaluation of the performance of the models.
Model | Accuracy | Precision | Recall | F1 Score
NGO-CNN-BiGRU-Attention | 98.28% | 97.92% | 97.73% | 97.82%
CNN-BiGRU-Attention | 96.55% | 94.95% | 94.95% | 94.95%
RF | 94.83% | 94.68% | 95.88% | 95.28%
BiGRU | 93.10% | 93.73% | 90.91% | 92.30%
ELM | 89.66% | 90.65% | 89.06% | 89.85%
CNN | 86.21% | 84.85% | 89.90% | 87.30%
KNN | 82.76% | 82.51% | 78.45% | 80.43%
SVM | 79.31% | 77.78% | 83.00% | 80.31%
Table 5. Prediction results of rockburst cases in the Daxiangling Tunnel.
No. | σθ | σc | σt | σθ/σc | σc/σt | W_et | Actual | Prediction
1 | 39.40 | 65.20 | 2.30 | 0.60 | 28.35 | 3.40 |  | 
2 | 38.20 | 71.40 | 3.40 | 0.54 | 21.00 | 3.60 |  | 
3 | 29.70 | 116.00 | 2.70 | 0.26 | 42.96 | 3.70 |  | 
4 | 29.10 | 94.00 | 2.60 | 0.31 | 36.15 | 3.20 |  | 
5 | 58.20 | 83.60 | 2.60 | 0.70 | 32.15 | 5.90 |  | 
6 | 57.20 | 80.60 | 2.50 | 0.71 | 32.24 | 5.50 |  | 
7 | 27.80 | 90.00 | 2.10 | 0.31 | 42.86 | 1.80 |  | 
8 | 25.70 | 59.70 | 1.30 | 0.43 | 45.92 | 1.70 |  | 
Table 6. Comparison of the accuracy of the models.
Model | Accuracy
NGO-CNN-BiGRU-Attention | 1
CNN-BiGRU-Attention | 1
RF | 0.875
BiGRU | 0.875
ELM | 0.875
CNN | 0.75
KNN | 0.75
SVM | 0.625
