Article

Rockburst Hazard Prediction in Underground Projects Using Two Intelligent Classification Techniques: A Comparative Study

1
Department of Civil Engineering, University of Engineering and Technology Peshawar, Bannu Campus, Bannu 28100, Pakistan
2
College of Civil Engineering and Architecture, China Three Gorges University, Yichang 443002, China
3
Faculty of Civil Engineering and Architecture Osijek, Josip Juraj Strossmayer University of Osijek, Vladimira Preloga, 31000 Osijek, Croatia
4
State Key Laboratory of Coastal and Offshore Engineering, Dalian University of Technology, Dalian 116024, China
5
Institute of Construction Project Management, College of Civil Engineering & Architecture, Zhejiang University, Hangzhou 310058, China
6
Department of Electrical Engineering, Bahauddin Zakariya University, Multan 66000, Pakistan
*
Author to whom correspondence should be addressed.
Symmetry 2021, 13(4), 632; https://doi.org/10.3390/sym13040632
Submission received: 12 March 2021 / Revised: 25 March 2021 / Accepted: 1 April 2021 / Published: 9 April 2021
(This article belongs to the Special Issue Recent Advances in Computational and Structural Engineering)

Abstract

Rockburst is a complex phenomenon of dynamic instability in underground rock excavation. Owing to the complex and unclear rockburst mechanism, it is difficult to accurately predict and reasonably assess rockburst potential. With the increasing availability of case histories from rock engineering and advances in data science, data mining algorithms offer a good way to predict complex phenomena such as rockburst potential. This paper investigates the potential of the J48 and random tree algorithms to predict rockburst classification ranks using 165 cases with four parameters, namely the maximum tangential stress of the surrounding rock, uniaxial compressive strength, uniaxial tensile strength, and strain energy storage index. A comparison of the developed models’ performances reveals that the random tree gives more reliable predictions than J48 and other empirical models (the Russenes criterion, the rock brittleness coefficient criterion, and artificial neural networks). A similar comparison with a convolutional neural network showed on-par performance in modeling the rockburst hazard data.

1. Introduction

During underground operations, rockburst is a sudden and violent release of elastic energy stored in rock and coal masses. It ejects rock fragments, potentially causing injury, collapse, and deformation of supporting structures, as well as damage to facilities [1,2,3]. Related activity also occurs in open cuts in jointed rock masses [4,5]. Its economic consequences are important for both the civil and mining engineering industries. The mechanism is not yet well understood owing to the complexity and uncertainty of the rockburst process. To mitigate the risks posed by rockburst, such as damage to equipment, access closures, delays, and loss of property and life, it is important to accurately predict or estimate the realistic rockburst potential for the safe and efficient construction and serviceability of underground projects.
Conventional mechanics-based methods fail to provide precise rockburst hazard detection due to the highly complex relationship between the geological, geometric, and mechanical parameters of rock masses in underground environments. Further, mechanics-based methods rest on several underlying assumptions which, if violated, may yield biased model predictions. This has led many researchers in recent years to investigate alternative methods for better prediction and detection of the rockburst phenomenon. Several indicators have been suggested to assess burst potential. The strain energy storage index (Wet), proposed by Kidybinski [6], is the ratio of strain energy stored (Wsp) to strain energy dissipated (Wst). Wattimena et al. [7] used elastic strain energy density as a burst potential measure. The rock brittleness coefficient, based on the ratio of uniaxial compressive stress (UCS) to tensile stress, is another widely used burst liability index [8]. A tangential stress criterion, the ratio between the tangential stress around underground excavations (σθ) and the UCS of the rock (σc), can be employed to assess the risk of rockburst [9]. An energy-based burst potential index was developed by Mitri et al. [10] to diagnose burst proneness. Although many techniques have been developed over the last few decades to predict or assess rockburst, none has emerged as widely accepted or clearly preferred over the others.
Over the past few decades, data mining techniques have proven efficient at capturing complex non-linear relationships between predictor and response variables and, as case history information becomes increasingly available, may be used to identify sites that are prone to rockburst events. Several approaches have been suggested to predict rockburst, such as the Support Vector Machine (SVM) [11], Artificial Neural Networks (ANNs) [12], Distance Discriminant Analysis (DDA) [13], Bayes Discriminant Analysis (BDA) [14], and Fisher Linear Discriminant Analysis (LDA) [15]; moreover, some systems for long-term rockburst prediction are based on hybrid (Zhou et al. [16]; Adoko et al. [17]; Liu et al. [18]) or ensemble (Ge and Feng [19]; Dong et al. [20]) learning methods, and their prediction accuracies have been compared. Zhao and Chen [21] recently developed a data-driven model based on a convolutional neural network (CNN) and compared it to a traditional neural network; the proposed CNN model showed high potential in rockburst prediction relative to the conventional neural network. These algorithms used a number of rockburst indicators as input features, and the size of their training samples varied. While most of the aforementioned techniques have been effective in predicting rockburst hazard, they do have shortcomings. For example, the optimal structure (e.g., number of inputs, hidden layers, and transfer functions) must be specified a priori in the ANN method, usually through a process of trial and error. Another major limitation is the black box nature of the ANN model: the relationship between the system’s input and output parameters is described by a weight matrix and biases that are not available to the user [22]. Table 1 summarizes the main rockburst prediction studies that used machine learning (ML) methods, with their input parameters and accuracies.
The majority of the established models (Table 1) are black boxes, meaning they do not show a clear and understandable relationship between input and output. These studies attempted to solve the rockburst problem, but none was entirely successful; a particular procedure can be appropriate in certain instances but not in others. Notably, the reported accuracies range from 66.5 to 100%, a wide variation in rockburst prediction. Rockburst prediction is a complex and nonlinear process that is hindered by model and parameter uncertainty, as well as limited by inadequate knowledge, poor information characterization, and noisy data. Machine learning has been widely recognized in mining and geotechnical engineering applications for dealing with nonlinear problems and developing predictive data-mining models [25,26,27,28,29,30,31].
In this study, the random tree and J48 algorithms were selected because they are widely used in civil engineering yet have not been thoroughly compared with each other, and because of their open-source availability. The primary aim of this research is to reveal and compare the suitability of the random tree and J48 algorithms for rockburst hazard prediction in underground projects. First, rockburst hazard classification cases are collected from the published literature. Next, the two algorithms are used to predict the rockburst hazard classification. Finally, their performance is evaluated in detail and compared with empirical models.

2. Materials and Methods

2.1. Influence Factors and Data Assemblage

Several factors, such as the geological structure of the rock mass, geo-stress conditions, rock mass strength, the excavation method and size, and rock blasting, are related to the occurrence of rockbursts [9,15,32]. However, due to limitations in the parameters reported in previous field studies, not all variables can be taken into account. Therefore, only parameters recorded in all field studies are used here to create the new models. Previous studies used the uniaxial compressive and tensile strength of the rock mass, the maximum tangential stress of the surrounding rock, and the strain energy storage index as the main parameters. Existing rockburst cases were collected as supporting data for the development of the prediction model. This study used a rockburst database covering different types of underground projects from all over the world, previously compiled by Pu et al. [24] and Zhou et al. [16] and recently referenced by Zhao and Chen [21]. The dataset contains 165 rockburst events from underground engineering projects, each with four influence factors and a corresponding rockburst rank. In general, the projects chosen experienced the most significant rockburst activity.
The way data are divided into training and testing sets has a major influence on the results of data mining techniques [33]. The main goal of the statistical analysis was to ensure that the subsets’ statistical properties were as similar as possible and thus represented the same statistical population. To fairly compare the predictive performance of the proposed J48 and random tree models, the datasets used for training (137 cases) and testing (28 cases) were kept the same in the prediction of rockburst hazard. As per Cai et al. [34], the rockburst database has four rankings to determine rockburst risk; in order of increasing severity, they are: no rockburst (NR), moderate rockburst (MR), strong rockburst (SR), and violent rockburst (VR). The specific grading criteria are shown in Table 2 [34]. Figure 1 shows the distribution of the rockburst data as a pie chart of the four rockburst hazard classes in underground projects: no rockburst (NR, 31 cases), moderate rockburst (MR, 43 cases), strong rockburst (SR, 63 cases), and violent rockburst (VR, 28 cases).
The database contains maximum tangential stress of surrounding rock σθ (MPa), uniaxial compressive strength σc (MPa), uniaxial tensile strength σt (MPa), and strain energy storage index Wet. The histograms, cumulative distribution functions and descriptive statistics (such as minimum and maximum values, mean and standard deviations) of the selected rockburst parameters with the established J48 and random tree models are provided in Figure 2 and Table 3 (the complete database is available in Appendix A, Table A1). It should be noted that in ranges with more dense data, the established models are more accurate.
Rockburst events typically occur in rock masses characterised by brittle behaviour (low deformability), a massive and relatively homogeneous structure, high intact rock strength, and high in-situ stress states. That said, it is hard to accept that rockburst events were registered in some case studies involving rock masses with low or very low values of uniaxial compressive strength (see, for instance, case studies no. 4, 6, 26, 27, 39, 44, 52, 53, 60, 67, 83, 90, 121, 137). Very probably, in these cases, other factors such as large in-situ stresses combined to produce high energy releases.

2.2. Methodology

2.2.1. J48

The J48 algorithm is an implementation of the C4.5 algorithm [35]. J48 is the Waikato Environment for Knowledge Analysis (WEKA) software’s implementation of the C4.5 decision tree learner (J48 implements a later, slightly modified version named C4.5 revision 8, which was the last public version of this algorithm family before C5.0 was released as a commercial implementation) [36].
C4.5 generates decision trees. A tree-like structure serves as the decision support system in such a tree, built from a root node, internal nodes, and leaf nodes. The root node contains all of the input data. A decision function is associated with each internal node, which may have two or more branches. The class label is represented by the leaf node, which is the output for the input vector. The main advantage of decision trees is that they are simple to build and the resulting trees are easy to understand [37]. The C4.5 algorithm has recently been used to assess seismic soil liquefaction potential [28,29] and landslide susceptibility [37]. The J48 decision tree algorithm takes the following steps [38].
Step 1: Compute the Entropy(S) of the training dataset S as follows:
Entropy(S) = -\sum_{i=1}^{K} \frac{freq(C_i, S)}{|S|} \log_2 \frac{freq(C_i, S)}{|S|}
where |S| is the number of samples in the training dataset, Ci is a dependent variable, i = 1, 2, …, K, K is the number of classes for the dependent variable, and freq(Ci, S) is the number of samples included in class Ci.
Step 2: Calculate the Information Gain X(S) for the partition’s X test attribute:
Gain_X(S) = Entropy(S) - \sum_{i=1}^{L} \frac{|S_i|}{|S|} \, Entropy(S_i)
where L is the number of outputs of test X, Si is the subset of S corresponding to the ith output, and |Si| is the number of samples in the subset Si. The subset that provides the maximum information gain is selected as the threshold for a particular partition attribute. The node branch will consist of the two partitions Si and S − Si. The tree is a leaf if all cases are from the same class; hence, the leaf is returned labeled with that class.
Step 3: Calculate the Split Info (X) acquisition’s partition information value for S partitioned to L subsets:
Split\ Info(X) = -\sum_{i=1}^{L} \left[ \frac{|S_i|}{|S|} \log_2 \frac{|S_i|}{|S|} + \left(1 - \frac{|S_i|}{|S|}\right) \log_2 \left(1 - \frac{|S_i|}{|S|}\right) \right]
Step 4: Calculate the Gain Ratio(X):
Gain\ Ratio(X) = \frac{Gain_X(S)}{Split\ Info(X)}
Step 5: The attribute with the highest gain ratio becomes the root node, and the same calculation from step 1 to step 4 is repeated for each intermediate node until all instances are exhausted and a leaf node, as defined in step 2, is reached.
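The entropy, information gain, split information, and gain ratio computations in steps 1 to 4 can be sketched in Python. This is a minimal sketch that uses the standard C4.5 split information over all branches; the class labels and the candidate split below are hypothetical, not drawn from the paper's database:

```python
import math
from collections import Counter

def entropy(labels):
    """Entropy(S) = -sum_i (freq(C_i, S)/|S|) * log2(freq(C_i, S)/|S|)."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(labels, partitions):
    """Gain ratio of a candidate split, given the class labels of S and the
    label subsets S_i produced by the split (standard C4.5 split info)."""
    n = len(labels)
    # Information gain: Entropy(S) - sum_i |S_i|/|S| * Entropy(S_i)
    gain = entropy(labels) - sum(len(s) / n * entropy(s) for s in partitions)
    # Split info: -sum_i |S_i|/|S| * log2(|S_i|/|S|)
    split_info = -sum(len(s) / n * math.log2(len(s) / n) for s in partitions)
    return gain / split_info if split_info > 0 else 0.0

# Hypothetical labels using the paper's four rockburst ranks
labels = ["NR", "NR", "MR", "MR", "SR", "SR", "SR", "VR"]
split = [["NR", "NR", "MR"], ["MR", "SR", "SR", "SR", "VR"]]
print(round(gain_ratio(labels, split), 3))  # 0.738
```

The attribute test whose split maximizes this gain ratio would be placed at the current node, exactly as step 5 describes.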

2.2.2. Random Tree (RT)

RT splits the data set into sub-spaces and fits a constant to each sub-space. A single tree model tends to be very unstable and provides lower prediction accuracy; however, very accurate results can be obtained by bagging such trees, as in the random tree algorithm [39]. RT has a high degree of versatility and quick training capabilities [40]. More comprehensive details of the RT can be found in Sattari et al. [41] and Kiranmai and Laxmi [42]. Table 4 summarizes the main differences between the J48 and random tree data mining algorithms.
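The bootstrap-and-vote idea behind bagging individually unstable trees can be sketched as follows. The `models` list below contains trivial stand-ins for trees fitted on different bootstrap samples, used purely for illustration:

```python
import random
from collections import Counter

def bootstrap(data, rng):
    """Draw a bootstrap sample: sampling with replacement, same size as data."""
    return [rng.choice(data) for _ in data]

def bagged_predict(models, x):
    """Aggregate the votes of individually unstable trees by majority rule."""
    votes = Counter(model(x) for model in models)
    return votes.most_common(1)[0][0]

# Hypothetical stand-ins for three trees fitted on different bootstrap samples
models = [lambda x: "SR", lambda x: "SR", lambda x: "MR"]
sample = bootstrap(list(range(165)), random.Random(0))
print(len(sample), bagged_predict(models, x=None))  # 165 SR
```

In a real bagged ensemble, each element of `models` would be a tree trained on its own bootstrap sample of the 165 cases.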

3. Construction of Prediction Model

The construction method of the prediction model is presented in Figure 3. The data are divided into two sub-datasets: training and testing. A training dataset of 137 cases is used to train the model, and the remaining 28 cases are used to test it. A hold-out technique is used to tune the hyperparameters of the model. The prediction model is then fitted on the training dataset using the optimal hyperparameter configuration. Performance indexes such as classification accuracy (ACC), the kappa statistic, producer accuracy (PA), and user accuracy (UA) are used to evaluate model performance. Last, the optimum model is selected by evaluating the comprehensive performance of the candidate models. If the predicted performance of this model is appropriate, it can be adopted for deployment. The entire calculation is carried out in WEKA.
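The hold-out partition used here (137 training cases and 28 testing cases out of 165) can be sketched in Python. The dummy `cases` list and the fixed seed are illustrative assumptions, not the paper's actual partition (the actual testing samples are starred in Table A1):

```python
import random

def hold_out_split(cases, n_train, seed=42):
    """Shuffle the case histories once and hold out the remainder for
    testing, mirroring a 137-case training / 28-case testing split."""
    rng = random.Random(seed)
    idx = list(range(len(cases)))
    rng.shuffle(idx)
    train = [cases[i] for i in idx[:n_train]]
    test = [cases[i] for i in idx[n_train:]]
    return train, test

# 165 dummy cases; a real case would hold (sigma_theta, sigma_c, sigma_t, Wet, rank)
cases = [(i,) for i in range(165)]
train, test = hold_out_split(cases, 137)
print(len(train), len(test))  # 137 28
```

Keeping the same split fixed across both algorithms, as the paper does, is what makes their performance comparison fair.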

3.1. Hyperparameter Optimization

Tuning is the process of maximizing a model’s performance while avoiding overfitting or excessive variance. In machine learning, this is achieved by choosing suitable “hyperparameters”. Choosing the right set of hyperparameters is critical for model accuracy but can be computationally challenging. Hyperparameters differ from other model parameters in that they are not learned automatically through training; instead, they must be set manually. Critical hyperparameters of the random tree and J48 algorithms, such as the minimum number of instances per leaf and the confidence factor, are tuned to determine their optimum values. For both algorithms, a trial and error approach is used to evaluate these parameters over a particular search range in order to achieve the best classification accuracy; the search range for shared hyperparameters is kept consistent. The optimal values for each set of hyperparameters are then selected according to the maximum accuracy. In this analysis, the optimum values for the J48 algorithm were: a minimum of 1 instance per leaf, a confidence factor of 0.25, a tree size of 63, and 32 leaves; for the random tree, the tree size was 103 and the minimum total weight of the instances in a leaf was 1.
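The trial and error search over a fixed range of hyperparameter values amounts to an exhaustive grid search. In this sketch, the `score` stub stands in for training and evaluating a J48 model on the hold-out set; both the stub and the grid values are hypothetical:

```python
from itertools import product

def tune(train_eval, grid):
    """Exhaustive trial-and-error search: evaluate every hyperparameter
    combination and keep the one with the highest accuracy."""
    best_params, best_acc = None, -1.0
    keys = sorted(grid)
    for values in product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        acc = train_eval(params)
        if acc > best_acc:
            best_params, best_acc = params, acc
    return best_params, best_acc

# Hypothetical scoring stub standing in for training/evaluating a J48 model
grid = {"min_instances": [1, 2, 5], "confidence_factor": [0.1, 0.25, 0.5]}
score = lambda p: 0.9 - abs(p["confidence_factor"] - 0.25) - 0.01 * (p["min_instances"] - 1)
best, acc = tune(score, grid)
print(best)  # {'confidence_factor': 0.25, 'min_instances': 1}
```

With this stub, the search recovers the same kind of optimum the paper reports for J48 (minimum of 1 instance per leaf, confidence factor 0.25).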

3.2. Model Evaluation Indexes

The J48 and random tree algorithms described in Section 2.2 are used to construct a predictive model for the classification of rockburst hazard. The confusion matrix is a table for visualizing predicted results in which each row of the matrix represents the cases in the predicted class, while each column displays the cases in the observed class. It is normally constructed with m rows and m columns, where m is the number of classes. The accuracy, kappa, producer accuracy, and user accuracy found for each class of the confusion matrix are used to test the predictive model’s efficacy. Let xij (i and j = 1, 2, …, m) be the joint frequency of observations assigned to class i by prediction and class j by observation, xi+ the total frequency of class i as obtained from prediction, and x+j the total frequency of class j based on observed data (see Table 5).
Classification accuracy (ACC) is determined by summing the correctly classified (diagonal) instances and dividing by the total number of instances. The ACC is given as:
ACC = \frac{1}{n} \sum_{i=1}^{m} x_{ii} \times 100\%
Cohen’s kappa index, a robust index that accounts for the likelihood of classification by chance, measures the proportion of correctly classified units after removing the probability of chance agreement [43]. Kappa can be calculated using the following formula:
Kappa = \frac{n \sum_{i=1}^{m} x_{ii} - \sum_{i=1}^{m} x_{i+} x_{+i}}{n^2 - \sum_{i=1}^{m} x_{i+} x_{+i}}
where n is the number of instances, m is the number of class values, xii are the counts on the main diagonal, and xi+ and x+i are the cumulative counts of the rows and columns, respectively.
Landis and Koch [44] suggested a scale to show the degree of concordance (see Table 6). A kappa value of less than 0.4 indicates poor agreement, while a value of 0.4 and above indicates good agreement (Sakiyama et al. [45]; Landis and Koch [44]). The producer’s accuracy of class i (PAi) can be measured using Congalton and Green’s formula [46]:
PA_i = \frac{x_{ii}}{x_{+i}} \times 100\% = \frac{x_{ii}}{\sum_{j=1}^{m} x_{ji}} \times 100\%
and user’s accuracy of class i (UAi) can be found as:
UA_i = \frac{x_{ii}}{x_{i+}} \times 100\% = \frac{x_{ii}}{\sum_{j=1}^{m} x_{ij}} \times 100\%
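Given a confusion matrix with rows as predictions and columns as observations, the four indexes (ACC, kappa, PA, UA) can be computed as follows. The 4 x 4 matrix shown is a hypothetical example, not the paper's Table 8:

```python
def confusion_metrics(cm):
    """ACC, Cohen's kappa, and per-class producer's/user's accuracy from an
    m x m confusion matrix (rows = predicted class, columns = observed class)."""
    m = len(cm)
    n = sum(sum(row) for row in cm)
    diag = sum(cm[i][i] for i in range(m))
    row_tot = [sum(cm[i][j] for j in range(m)) for i in range(m)]  # x_{i+}
    col_tot = [sum(cm[i][j] for i in range(m)) for j in range(m)]  # x_{+j}
    acc = diag / n * 100.0
    chance = sum(row_tot[i] * col_tot[i] for i in range(m))
    kappa = (n * diag - chance) / (n**2 - chance)
    pa = [cm[i][i] / col_tot[i] * 100.0 for i in range(m)]  # producer's accuracy
    ua = [cm[i][i] / row_tot[i] * 100.0 for i in range(m)]  # user's accuracy
    return acc, kappa, pa, ua

# Hypothetical 4-class (NR, MR, SR, VR) matrix, for illustration only
cm = [[5, 0, 0, 0],
      [1, 8, 0, 0],
      [0, 1, 9, 0],
      [0, 0, 1, 3]]
acc, kappa, pa, ua = confusion_metrics(cm)
print(round(acc, 3), round(kappa, 3))  # 89.286 0.85
```

Note that PA divides each diagonal count by its column (observed) total and UA by its row (predicted) total, matching the two formulas above.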
The J48 and random tree models are studied for their suitability in predicting rockburst. A comparison is made with the traditional empirical rockburst criteria, including the Russenes criterion [47], the rock brittleness coefficient criterion [48], ANN [21], and CNN [21] models, using the performance measures ACC, PA, UA, and kappa.

4. Results and Discussion

The performance of both models was tested for predicting rockburst hazard using the evaluation criteria described in Section 3. Based on the statistical assessment criteria, both models showed very high predictive accuracy (J48: 92.857%; random tree: 100%). The kappa coefficient showed that both models reach almost perfect agreement, but the random tree model performed best, having the highest kappa coefficient (1.0), followed by the J48 (0.904). Comparing the results of the four models, the prediction accuracy (ACC) and kappa coefficient of the random tree and CNN models are equivalent and higher than those of the other models shown in Table 7.
In terms of the producer accuracy (PA) value, the random tree model also had the highest PA (100%) for all ranks of rockburst, followed by the J48. PA and UA show that some of the classes are better classified than others (see Table 8). As can be seen from Table 8, in the J48 model, “no rockburst” and “moderate rockburst” have equally high PA values (100%) compared to “strong rockburst” (88.889%) and “violent rockburst” (83.333%), while “no rockburst” and “violent rockburst” have equally high UA values (100%) compared to “moderate rockburst” (88.889%) and “strong rockburst” (87.5%). The results showed a statistically significant difference between the random tree and J48 models, with the performance of the J48 second only to the random tree model in the prediction of rockburst hazard.
To illustrate and verify the performance of the J48 and random tree models, their prediction capacity is compared with that of the other models: the Russenes criterion [47], the rock brittleness coefficient criterion [48], ANN [21], and CNN [21]. The confusion matrices and performance indices for these models are presented in Table 8. The comparison reveals that the J48 and random tree models can be applied efficiently; the random tree model performs best, on par with the CNN [21] model, followed by the J48 model. The random tree model predicts “no rockburst”, “moderate rockburst”, “strong rockburst”, and “violent rockburst” cases with an overall accuracy of 100%.
Although existing models such as SVM do not provide explicit equations for professionals, the established J48 and random tree models (see Figure 4) can be used by civil and mining engineering professionals with the aid of a spreadsheet to analyze potential rockburst events without going into the complexities of model development. Furthermore, the J48 and random tree approaches do not require data normalization or scaling, which is an advantage over most other approaches.
In general, both of the established models performed well in the testing phase, while the random tree model showed better overall performance (see Table 7 and Table 8) and is therefore preferred over the other models. Predictive performance should be further examined on a larger dataset with little or no sampling bias (i.e., disparity in class ratios between population and sample) in the training and testing phases. Though the dataset used in this study is small (165 cases), the J48 and random tree models can always be updated to yield better results as new data become available. The results suggest that the proposed models offer improved predictive performance compared to the majority of existing studies, which, from a practical viewpoint, can help reduce losses of property, human lives, and injuries.

5. Conclusions and Limitations

In underground mining and civil engineering projects, models for predicting rockburst can be valuable tools. This study compared J48 and random tree models for predicting rockburst. Four variables (σθ, σc, σt, and Wet) were selected as influence factors for predicting rockburst using a dataset of 165 rockburst events compiled from recent published research, which was used to construct the decision tree models. The following conclusions can be drawn, based on the results of the study:
  • The performance measures of the testing dataset for the J48 and random tree algorithms conclude that it is rational and feasible to choose the maximum tangential stress of surrounding rock σθ (MPa), the uniaxial compressive strength σc (MPa), the uniaxial tensile strength σt (MPa), and the strain energy storage index Wet as indexes for predicting rockburst.
  • The classification accuracy of J48 and random tree models in the test phase is 92.857% and 100%, respectively, which shows that the random tree model is an accurate and efficient approach to predicting the potential for rockburst classification ranks.
  • The kappa index of the developed J48 and random tree models is in the range of 0.904–1.000, which means that the correlation between observed and predicted values is nearly perfect.
  • The comparison of models’ performances reveals that the random tree model gives more reliable predictions and their application is easier owing to a clear graphical outcome that can be used by civil and mining engineering professionals.
Although the proposed method yields adequate prediction results, certain limitations should be addressed in future work.
(1) The sample size is restricted and unbalanced. The number and quality of datasets have a significant impact on the prediction performance of random tree and J48 algorithms. In general, the generalization and reliability of all data-driven models are affected by the size of the dataset. Although the random tree and J48 algorithms perform well, larger datasets can yield better prediction results. Furthermore, the dataset is unbalanced, particularly for samples with violent rockburst (17%) and samples with no rockburst (19%). As a result, establishing a larger and more balanced rockburst database is essential.
(2) Other variables may have an effect on the prediction outcomes. Numerous factors influence the risk of a rockburst, including rock properties, energy, excavation depth, and support structure, among others. Although the four indicators used in this study can define the required conditions for rockburst hazard assessment to some degree, some other indicators, such as the buried depth of the tunnel (H), failure duration time, and energy-based burst potential index, may also have an impact on rockburst hazard. As a consequence, it is crucial to look into the effects of these variables on the prediction outcomes.

Author Contributions

Conceptualization, M.A. (Mahmood Ahmad), J.-L.H. and X.-W.T.; methodology, M.A. (Mahmood Ahmad), M.H.-N. and F.A.; software, M.A. (Mahmood Ahmad) and F.A.; validation, M.A. (Mahmood Ahmad), A.N. and F.A.; formal analysis, M.A. (Mahmood Ahmad); investigation, M.H.-N., F.A., Z.U.R., M.A. (Mahmood Ahmad) and M.A. (Muhammad Abrar); resources, X.-W.T.; data curation, M.A. (Muhammad Abrar), A.N. and Z.U.R.; writing—original draft preparation, M.A. (Mahmood Ahmad); writing—review and editing, M.A. (Mahmood Ahmad), M.H.-N., A.N. and Z.U.R.; supervision, X.-W.T., J.-L.H., M.H.-N.; project administration, X.-W.T., J.-L.H.; funding acquisition, J.-L.H., X.-W.T. All authors have read and agreed to the published version of the manuscript.

Funding

The work presented in this paper was part of the research sponsored by the Key Program of National Natural Science Foundation of China under Grant No. 51639002 and National Key Research and Development Plan of China under Grant No. 2018YFC1505300-5.3.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Rockburst database.
Columns: S. No.; Maximum Tangential Stress of Surrounding Rock, σθ (MPa); Uniaxial Compressive Strength, σc (MPa); Uniaxial Tensile Strength, σt (MPa); Strain Energy Storage Index, Wet; Rockburst Rank
126.962.82.12.4MR
224.9399.74.83.8NR
357.612055.1SR
4 *132.151.52.474.63SR
517.41613.982.19MR
634.1554.212.13.17MR
76013515.044.86MR
840.472.12.11.9MR
9 *55.41767.39.3SR
10105.519017.13.97SR
1162.42359.59VR
12 *801806.75.5MR
1329.813211.54.6SR
1412.11605.22.22NR
1558.283.62.65.9VR
1629.71162.73.7MR
1749.51101.55.7SR
18105237.1617.666.38VR
19 *111054.94.7NR
207.5523.71.3NR
21191534.482.11MR
22892368.35SR
2356.1131.999.447.44SR
2460.7111.57.866.16VR
2533.615610.85.2SR
26127.935.821.243.67MR
2725.4954.22.493.17MR
28481201.55.8SR
29 *9226310.78MR
3062.61659.49MR
3189.56170.2812.075.76SR
32 *3088.73.76.6SR
3318.81785.77.4NR
3454.21349.097.08SR
3591.3225.617.27.3VR
3644.412055.1MR
379017011.39SR
38 *11.3904.83.6NR
3935.867.83.84.3SR
4070.41104.56.31SR
415013065SR
42 *108.414085VR
43341495.97.6MR
44 *148.466.773.815.08MR
4556.81122.25.2SR
4660136.7910.422.12MR
4719.71424.552.26MR
48167.2110.38.366.83VR
4930.188.73.76.6VR
5038.271.43.43.6SR
5161171.522.67.5MR
5241.667.62.73.7SR
5319.5302.672.03SR
5489.56187.1719.177.27SR
55 *60.7111.57.866.16VR
5663.81104.56.31SR
57 *62.42359.59VR
58148.466.773.815.08MR
5962.61659.49MR
60118.526.060.772.89MR
61902207.47.3MR
62 *96.4118.320.381.87NR
6340.666.62.63.7SR
6412.3237.117.666.9NR
6555.61142.34.7SR
66 *98.61206.53.8SR
6730302.672.03VR
6828.6122122.5SR
6970.31298.736.43SR
70681076.17.2VR
7129.1942.63.2MR
7273.212055.1SR
731053049.125.76SR
7432.367.46.71.1NR
75 *341505.47.8NR
7615.253.85.561.92NR
7739.465.22.33.4SR
78751808.35SR
79111054.94.7NR
8018.78210.91.5NR
8114.9699.74.83.8NR
82 *9017011.39SR
8313.5302.672.03MR
84 *167.2110.38.366.83VR
8562.51757.255SR
86 *902207.47.3MR
8720.91605.22.22NR
883970.12.44.8SR
89631151.55.7SR
9047.5658.53.55MR
91157.391.236.926.27VR
9260149.199.33.5MR
9356.91232.75.2SR
9498.61206.53.8SR
9581.41104.56.31VR
9696.4118.320.381.87NR
97 *70.31298.736.43SR
9862.11322.45SR
99 *38.2533.91.6NR
10013.91244.222.04NR
10154.21349.17.1SR
1023.82031.39NR
103 *892368.35SR
10446.41004.92MR
105238030.85MR
10655.91286.298.1VR
10711.3904.83.6NR
10826.992.89.473.7SR
109105171.322.67.27VR
110801806.75.5MR
11159.8285.87.312.78SR
11255.6256.518.99.1SR
11335133.49.32.9MR
114107.521.50.62.29NR
11559.996.611.71.8MR
11640.172.12.34.6SR
11721.81605.22.22NR
11838.2533.91.6NR
119571808.35SR
12057.280.62.55.5VR
121 *127.935.821.243.67MR
122108.4138.47.71.9VR
1233088.73.76.6SR
124 *54.21349.097.08SR
125105128.61135.76VR
12646.21055.32.3MR
127 *157.391.236.926.27VR
1289226310.78MR
12955.61142.34.7SR
13072.07147.0910.986.53SR
13139.469.22.73.8SR
1326066.499.722.15MR
13348.751808.35SR
134 *107.521.50.62.29NR
13589128.613.24.9VR
136 *35133.49.32.9MR
137132.151.52.474.63SR
13825.759.71.31.7NR
13955.41767.39.3SR
140105.518719.27.27SR
14143.412365SR
1427517011.39SR
1434.62031.39NR
14443.4136.57.25.6VR
14530.3883.13MR
14660106.3811.26.11MR
147108.414085VR
148 *48.751808.35SR
14969.819822.44.68MR
15089.56190.317.133.97SR
151105306.5813.96.38VR
15218.8171.56.37NR
153105304.2120.910.57VR
15470.3128.38.76.4SR
15527.8902.11.8NR
156105.517012.15.76SR
157 *43.4136.57.25.6VR
1586086.037.142.85MR
1591111555.7NR
160341505.47.8NR
16145.769.13.24.1SR
16288.914213.23.62VR
16330.982.566.53.2MR
16443.6278.13.26MR
1652.62031.39NR
Note: Cases with * are testing samples.

References

  1. Ortlepp, W.; Stacey, T. Rockburst mechanisms in tunnels and shafts. Tunn. Undergr. Space Technol. 1994, 9, 59–65.
  2. Dou, L.; Chen, T.; Gong, S.; He, H.; Zhang, S. Rockburst hazard determination by using computed tomography technology in deep workface. Saf. Sci. 2012, 50, 736–740.
  3. Cai, M. Principles of rock support in burst-prone ground. Tunn. Undergr. Space Technol. 2013, 36, 46–56.
  4. Al-Shayea, N. Failure of Rock Anchors along the Road Cut Slopes of Dhila Decent Road, Saudi Arabia. In Proceedings of the International Conference on Problematic Soils (GEOPROB 2005), Famagusta, Cyprus, 25–57 May 2005; pp. 1129–1136.
  5. Al-Shayea, N.A. Pullout Failure of Rock Anchor Rods at Slope Cuts, Dhila Decent Road, Saudi Arabia. In Proceedings of the 17th International Road Federation (IRF) World Meeting and Exhibition, Riyadh, Saudi Arabia, 10–14 November 2013.
  6. Kidybiński, A. Bursting liability indices of coal. Int. J. Rock Mech. Min. Sci. Geomech. Abstr. 1981, 18, 295–304.
  7. Wattimena, R.K.; Sirait, B.; Widodo, N.P.; Matsui, K. Evaluation of rockburst potential in a cut-and-fill mine using energy balance. Int. J. JCRM 2012, 8, 19–23.
  8. Altindag, R. Correlation of specific energy with rock brittleness concepts on rock cutting. J. South. Afr. Inst. Min. Metall. 2003, 103, 163–171.
  9. Wang, J.-A.; Park, H. Comprehensive prediction of rockburst based on analysis of strain energy in rocks. Tunn. Undergr. Space Technol. 2001, 16, 49–57.
  10. Mitri, H.; Tang, B.; Simon, R. FE modelling of mining-induced energy release and storage rates. J. South. Afr. Inst. Min. Metall. 1999, 99, 103–110.
  11. Hong-Bo, Z. Classification of rockburst using support vector machine. Rock Soil Mech. 2005, 26, 642–644.
  12. Chen, D.; Feng, X.; Yang, C.; Chen, B.; Qiu, S.; Xu, D. Neural network estimation of rockburst damage severity based on engineering cases. In Proceedings of the SINOROCK 2013 Symposium, Shanghai, China, 18–20 June 2013; pp. 457–463.
  13. Gong, F.; Li, X. A distance discriminant analysis method for prediction of possibility and classification of rockburst and its application. Chin. J. Rock Mech. Eng. 2007, 26, 1012–1018.
  14. Gong, F.; Li, X.; Zhang, W. Rockburst prediction of underground engineering based on Bayes discriminant analysis method. Rock Soil Mech. 2010, 31, 370–377.
  15. Zhou, J.; Shi, X.-Z.; Dong, L.; Hu, H.-Y.; Wang, H.-Y. Fisher discriminant analysis model and its application for prediction of classification of rockburst in deep-buried long tunnel. J. Coal Sci. Eng. 2010, 16, 144–149.
  16. Zhou, J.; Li, X.; Shi, X. Long-term prediction model of rockburst in underground openings using heuristic algorithms and support vector machines. Saf. Sci. 2012, 50, 629–644.
  17. Adoko, A.C.; Gokceoglu, C.; Wu, L.; Zuo, Q.J. Knowledge-based and data-driven fuzzy modeling for rockburst prediction. Int. J. Rock Mech. Min. Sci. 2013, 61, 86–95.
  18. Liu, Z.; Shao, J.; Xu, W.; Meng, Y. Prediction of rock burst classification using the technique of cloud models with attribution weight. Nat. Hazards 2013, 68, 549–568.
  19. Ge, Q.; Feng, X. Classification and prediction of rockburst using AdaBoost combination learning method. Rock Soil Mech. 2008, 29, 943.
  20. Dong, L.-J.; Li, X.-B.; Kang, P. Prediction of rockburst classification using Random Forest. Trans. Nonferrous Met. Soc. China 2013, 23, 472–477.
  21. Zhao, H.; Chen, B. Data-Driven Model for Rockburst Prediction. Math. Probl. Eng. 2020, 2020.
  22. Javadi, A.A.; Ahangar-Asr, A.; Johari, A.; Faramarzi, A.; Toll, D. Modelling stress–strain and volume change behaviour of unsaturated soils using an evolutionary based data mining technique, an incremental approach. Eng. Appl. Artif. Intell. 2012, 25, 926–933.
  23. Zhu, Y.; Liu, X.; Zhou, J. Rockburst prediction analysis based on v-SVR algorithm. J. China Coal Soc. 2008, 33, 277–281.
  24. Pu, Y.; Apel, D.B.; Lingga, B. Rockburst prediction in kimberlite using decision tree with incomplete data. J. Sustain. Min. 2018, 17, 158–165.
  25. Ahmad, M.; Al-Shayea, N.A.; Tang, X.-W.; Jamal, A.; Al-Ahmadi, H.M.; Ahmad, F. Predicting the Pillar Stability of Underground Mines with Random Trees and C4.5 Decision Trees. Appl. Sci. 2020, 10, 6486.
  26. Ahmad, M.; Tang, X.-W.; Qiu, J.-N.; Ahmad, F.; Gu, W.-J. A step forward towards a comprehensive framework for assessing liquefaction land damage vulnerability: Exploration from historical data. Front. Struct. Civ. Eng. 2020, 14, 1476–1491.
  27. Ahmad, M.; Tang, X.; Ahmad, F. Evaluation of Liquefaction-Induced Settlement Using Random Forest and REP Tree Models: Taking Pohang Earthquake as a Case of Illustration. In Natural Hazards-Impacts, Adjustments & Resilience; IntechOpen: London, UK, 2020.
  28. Ahmad, M.; Tang, X.-W.; Qiu, J.-N.; Gu, W.-J.; Ahmad, F. A hybrid approach for evaluating CPT-based seismic soil liquefaction potential using Bayesian belief networks. J. Cent. South Univ. 2020, 27, 500–516.
  29. Ahmad, M.; Tang, X.-W.; Qiu, J.-N.; Ahmad, F. Evaluating Seismic Soil Liquefaction Potential Using Bayesian Belief Network and C4.5 Decision Tree Approaches. Appl. Sci. 2019, 9, 4226.
  30. Ahmad, M.; Tang, X.-W.; Qiu, J.-N.; Ahmad, F. Evaluation of liquefaction-induced lateral displacement using Bayesian belief networks. Front. Struct. Civ. Eng. 2021.
  31. Pirhadi, N.; Tang, X.; Yang, Q.; Kang, F. A New Equation to Evaluate Liquefaction Triggering Using the Response Surface Method and Parametric Sensitivity Analysis. Sustainability 2019, 11, 112.
  32. Mansurov, V. Prediction of rockbursts by analysis of induced seismicity data. Int. J. Rock Mech. Min. Sci. 2001, 38, 893–901.
  33. Rezania, M.; Faramarzi, A.; Javadi, A.A. An evolutionary based approach for assessment of earthquake-induced soil liquefaction and lateral displacement. Eng. Appl. Artif. Intell. 2011, 24, 142–153.
  34. Cai, W.; Dou, L.; Si, G.; Cao, A.; He, J.; Liu, S. A principal component analysis/fuzzy comprehensive evaluation model for coal burst liability assessment. Int. J. Rock Mech. Min. Sci. 2016, 100, 62–69.
  35. Witten, I.H.; Frank, E.; Hall, M. Data Mining: Practical Machine Learning Tools and Techniques; Morgan Kaufmann: San Francisco, CA, USA, 2005.
  36. Witten, I.; Frank, E.; Hall, M. Data Mining: Practical Machine Learning Tools and Techniques, 3rd ed.; Morgan Kaufmann: San Francisco, CA, USA, 2011.
  37. Bui, D.T.; Ho, T.C.; Revhaug, I.; Pradhan, B.; Nguyen, D.B. Landslide susceptibility mapping along the national road 32 of Vietnam using GIS-based J48 decision tree classifier and its ensembles. In Cartography from Pole to Pole; Springer: Berlin/Heidelberg, Germany, 2014; pp. 303–317.
  38. Sun, W.; Chen, J.; Li, J. Decision tree and PCA-based fault diagnosis of rotating machinery. Mech. Syst. Signal Process. 2007, 21, 1300–1317.
  39. Aldous, D.; Pitman, J. Inhomogeneous continuum random trees and the entrance boundary of the additive coalescent. Probab. Theory Relat. Fields 2000, 118, 455–482.
  40. LaValle, S.M. Rapidly-Exploring Random Trees: A New Tool for Path Planning; Department of Computer Science: Ames, IA, USA, 1998.
  41. Sattari, M.T.; Apaydin, H.; Shamshirband, S. Performance Evaluation of Deep Learning-Based Gated Recurrent Units (GRUs) and Tree-Based Models for Estimating ETo by Using Limited Meteorological Variables. Mathematics 2020, 8, 972.
  42. Kiranmai, S.A.; Laxmi, A.J. Data mining for classification of power quality problems using WEKA and the effect of attributes on classification accuracy. Prot. Control Mod. Power Syst. 2018, 3, 29.
  43. Kuhn, M.; Johnson, K. Applied Predictive Modeling; Springer: Berlin/Heidelberg, Germany, 2013; Volume 26.
  44. Landis, J.R.; Koch, G.G. An application of hierarchical kappa-type statistics in the assessment of majority agreement among multiple observers. Biometrics 1977, 363–374.
  45. Sakiyama, Y.; Yuki, H.; Moriya, T.; Hattori, K.; Suzuki, M.; Shimada, K.; Honma, T. Predicting human liver microsomal stability with machine learning techniques. J. Mol. Graph. Model. 2008, 26, 907–915.
  46. Congalton, R.G.; Green, K. Assessing the Accuracy of Remotely Sensed Data: Principles and Practices; CRC Press: Boca Raton, FL, USA, 2002.
  47. Russenes, B. Analysis of Rock Spalling for Tunnels in Steep Valley Sides; Norwegian Institute of Technology: Trondheim, Norway, 1974.
  48. Wang, Y.; Li, W.; Li, Q. Fuzzy estimation method of rockburst prediction. Chin. J. Rock Mech. Eng. 1998, 17, 493–501.
Figure 1. Distribution of observed rockburst hazard in underground projects.
Figure 2. Histograms of the input parameters considered to predict the rockburst hazard: (a) Maximum tangential stress of surrounding rock (σθ; MPa), (b) Uniaxial compressive strength (σc; MPa), (c) Uniaxial tensile strength (σt; MPa), (d) Strain energy storage index (Wet).
Figure 3. Construction process of prediction model.
Figure 4. Part of (a) J48 and (b) random tree models.
Table 1. Study of the rockburst classification ML algorithm with influence factors and accuracy values.
Algorithm | Accuracy (%) | Dataset | Reference
Support vector machine | 100 | 16 | Zhao [11]
Distance discriminant analysis | 100 | 15 | Gong and Li [13]
v-support vector regression | 93.75 | 45 | Zhu et al. [23]
AdaBoost | 87.8–89.9 | 36 | Ge and Feng [19]
Bayes discriminant analysis | 100 | 21 | Gong et al. [14]
Fisher linear discriminant analysis | 100 | 15 | Zhou et al. [15]
Heuristic algorithms and support vector machines | 66.67–90 | 132 | Zhou et al. [16]
Adaptive neuro fuzzy inference system | 66.5–95.6 | 174 | Adoko et al. [17]
Random forest | 100 | 46 | Dong et al. [20]
Cloud model | 90–100 | 162 | Liu et al. [18]
Decision tree model | 73–93 | 108 and 132 | Pu et al. [24]
Artificial neural network and convolutional neural network | 89.29–100 | 165 | Zhao and Chen [21]
Note: D = depth, m; σθ = maximum tangential stress of surrounding rock, MPa; σθ/σc = stress concentration factor; σc/σt = rock brittleness (B); σc = uniaxial compressive strength of rock, MPa; σt = uniaxial tensile strength of rock, MPa; Wet = strain energy storage index.
Table 2. Grading criteria of rockburst intensity.
Rockburst Grade | No Rockburst | Moderate Rockburst | Strong Rockburst | Violent Rockburst
σθ/σc | <0.3 | 0.3–0.5 | 0.5–0.7 | >0.7
σc/σt | >40 | 26.7–40 | 14.5–26.7 | <14.5
Wet | >5 | 3.5–5.0 | 2.0–3.5 | <2.0
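As an illustration of how the thresholds in Table 2 translate into a rule, the sketch below encodes the stress-ratio (σθ/σc) row alone. This is a simplified single-index rule written for this article, not the combined criterion or any of the trained classifiers; reconciling disagreements among the three indices is exactly what the data-driven models are for.

```python
# Sketch: grade assignment from the stress-ratio row of Table 2 only.
# Combining all three indices is left to the classifier models in the paper.
def grade_from_stress_ratio(sigma_theta_mpa: float, sigma_c_mpa: float) -> str:
    ratio = sigma_theta_mpa / sigma_c_mpa  # stress concentration factor
    if ratio < 0.3:
        return "NR"  # no rockburst
    elif ratio < 0.5:
        return "MR"  # moderate rockburst
    elif ratio < 0.7:
        return "SR"  # strong rockburst
    return "VR"      # violent rockburst

print(grade_from_stress_ratio(55.6, 114))  # ratio ~0.49 -> MR
```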
Table 3. Descriptive statistics of the rockburst dataset.
Statistical Parameter | Dataset | Maximum Tangential Stress of Surrounding Rock, σθ (MPa) | Uniaxial Compressive Strength, σc (MPa) | Uniaxial Tensile Strength, σt (MPa) | Strain Energy Storage Index, Wet
Minimum | Training | 2.6 | 18.32 | 0.38 | 0.85
Minimum | Testing | 11 | 18.32 | 0.38 | 1.6
Minimum | Total | 2.6 | 18.32 | 0.38 | 0.85
Maximum | Training | 167.2 | 306.58 | 22.6 | 10.57
Maximum | Testing | 167.2 | 263 | 11.3 | 9.3
Maximum | Total | 167.2 | 306.58 | 22.6 | 10.57
Mean | Training | 56.17 | 122.31 | 7.14 | 4.69
Mean | Testing | 78.65 | 128.98 | 6.50 | 5.68
Mean | Total | 59.99 | 123.44 | 7.03 | 4.86
Standard deviation | Training | 33.62 | 59.73 | 5.09 | 2.15
Standard deviation | Testing | 42.32 | 64.98 | 2.97 | 2.19
Standard deviation | Total | 36.11 | 60.5 | 4.79 | 2.18
Table 4. Differences between J48 and random tree algorithms.
Properties | J48 | Random Tree
Attributes available at each decision node | All | Random subset
Selection of attributes at each decision node | Highest information gain among all | Best among a random subset
Number of trees | One | One
Data samples used for training | All | All
Final result of classification | Based on the leaf node reached | Based on the leaf node reached
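The first two rows of Table 4 capture the only structural difference between the two learners: how many attributes compete for each split. A minimal sketch of that difference follows; the function names are my own (this is not the WEKA API), and `gain` stands in for any attribute-scoring function such as information gain.

```python
import random

def pick_split_j48(attributes, data, gain):
    """J48-style choice: score ALL attributes and keep the best one."""
    return max(attributes, key=lambda a: gain(a, data))

def pick_split_random_tree(attributes, data, gain, k):
    """Random-tree-style choice: score only a random subset of k attributes."""
    subset = random.sample(list(attributes), k)
    return max(subset, key=lambda a: gain(a, data))
```

With this paper's four inputs (σθ, σc, σt, Wet), a random tree would draw a small subset, e.g., two attributes, at each node, which is what makes individual random trees fast to grow and decorrelated from one another.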
Table 5. Confusion matrix.
Predicted | Actual: 1 | 2 | … | m | Total | UA (%)
1 | x11 | x12 | … | x1m | x1+ | (x11/x1+) × 100%
2 | x21 | x22 | … | x2m | x2+ | (x22/x2+) × 100%
m | xm1 | xm2 | … | xmm | xm+ | (xmm/xm+) × 100%
Total | x+1 | x+2 | … | x+m | |
PA (%) | (x11/x+1) × 100% | (x22/x+2) × 100% | … | (xmm/x+m) × 100% | |
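The UA and PA entries in Table 5 are simple row-wise and column-wise ratios of the diagonal counts. The sketch below computes both for an illustrative 3-class matrix; the numbers are made up for demonstration and are not from the paper's dataset.

```python
# User's accuracy (UA): diagonal count / row total, per predicted class.
# Producer's accuracy (PA): diagonal count / column total, per observed class.
matrix = [  # rows = predicted class, columns = observed class (illustrative)
    [5, 1, 0],
    [2, 7, 1],
    [0, 2, 8],
]
row_totals = [sum(row) for row in matrix]          # x_i+
col_totals = [sum(col) for col in zip(*matrix)]    # x_+j

ua = [100 * matrix[i][i] / row_totals[i] for i in range(len(matrix))]
pa = [100 * matrix[j][j] / col_totals[j] for j in range(len(matrix))]

print([round(v, 3) for v in ua])  # [83.333, 70.0, 80.0]
print([round(v, 3) for v in pa])  # [71.429, 70.0, 88.889]
```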
Table 6. Strength agreement measure related to kappa statistic.
Kappa statistic | Interpretation
0.81–1.00 | Almost Perfect
0.61–0.80 | Substantial
0.41–0.60 | Moderate
0.21–0.40 | Fair
0.00–0.20 | Slight
−1.00–0.00 | Poor
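A direct encoding of the Table 6 bands is shown below. The handling of values that fall exactly on a band edge is my own choice (the table lists closed ranges such as 0.81–1.00, which leave boundary values like 0.805 unassigned for unrounded inputs).

```python
def kappa_strength(kappa: float) -> str:
    """Landis & Koch [44] strength-of-agreement label for a kappa value."""
    if kappa > 0.80:
        return "Almost Perfect"  # 0.81-1.00
    if kappa > 0.60:
        return "Substantial"     # 0.61-0.80
    if kappa > 0.40:
        return "Moderate"        # 0.41-0.60
    if kappa > 0.20:
        return "Fair"            # 0.21-0.40
    if kappa >= 0.00:
        return "Slight"          # 0.00-0.20
    return "Poor"                # -1.00-0.00

print(kappa_strength(0.904))  # Almost Perfect
```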
Table 7. Performance metrics of each model for test data.
Method | ACC (%) | Kappa
Russenes criterion [47] | 42.857 | 0.222
Rock brittleness coefficient criterion [48] | 53.571 | 0.352
ANN [21] | 89.286 | 0.856
CNN [21] | 100 | 1.000
J48 (present study) | 92.857 | 0.904
Random tree (present study) | 100 | 1.000
Note: Bold values indicate the highest value for each metric.
Table 8. Confusion matrices and performance measures based on testing dataset of rockburst (rows: predicted class; columns: observed class).
Russenes criterion [47]
Predicted | NR | MR | SR | VR | Total | UA (%)
NR | 2 | 0 | 0 | 0 | 2 | 100
MR | 1 | 1 | 1 | 1 | 4 | 25
SR | 0 | 4 | 6 | 2 | 12 | 50
VR | 3 | 2 | 2 | 3 | 10 | 30
Total | 6 | 7 | 9 | 6 | 28 |
PA (%) | 33.333 | 14.286 | 66.667 | 50 | |
Rock brittleness coefficient criterion [48]
Predicted | NR | MR | SR | VR | Total | UA (%)
NR | 1 | 0 | 0 | 0 | 1 | 100
MR | 2 | 3 | 1 | 0 | 6 | 50
SR | 2 | 3 | 8 | 3 | 16 | 50
VR | 1 | 1 | 0 | 3 | 5 | 60
Total | 6 | 7 | 9 | 6 | 28 |
PA (%) | 16.667 | 42.857 | 88.889 | 50 | |
ANN [21]
Predicted | NR | MR | SR | VR | Total | UA (%)
NR | 6 | 0 | 0 | 0 | 6 | 100
MR | 0 | 7 | 1 | 0 | 8 | 87.5
SR | 0 | 0 | 7 | 1 | 8 | 87.5
VR | 0 | 0 | 1 | 5 | 6 | 83.333
Total | 6 | 7 | 9 | 6 | 28 |
PA (%) | 100 | 100 | 77.778 | 83.333 | |
CNN [21]
Predicted | NR | MR | SR | VR | Total | UA (%)
NR | 6 | 0 | 0 | 0 | 6 | 100
MR | 0 | 7 | 0 | 0 | 7 | 100
SR | 0 | 0 | 9 | 0 | 9 | 100
VR | 0 | 0 | 0 | 6 | 6 | 100
Total | 6 | 7 | 9 | 6 | 28 |
PA (%) | 100 | 100 | 100 | 100 | |
J48 (present study)
Predicted | NR | MR | SR | VR | Total | UA (%)
NR | 6 | 0 | 0 | 0 | 6 | 100
MR | 0 | 7 | 1 | 0 | 8 | 87.5
SR | 0 | 0 | 8 | 1 | 9 | 88.889
VR | 0 | 0 | 0 | 5 | 5 | 100
Total | 6 | 7 | 9 | 6 | 28 |
PA (%) | 100 | 100 | 88.889 | 83.333 | |
Random tree (present study)
Predicted | NR | MR | SR | VR | Total | UA (%)
NR | 6 | 0 | 0 | 0 | 6 | 100
MR | 0 | 7 | 0 | 0 | 7 | 100
SR | 0 | 0 | 9 | 0 | 9 | 100
VR | 0 | 0 | 0 | 6 | 6 | 100
Total | 6 | 7 | 9 | 6 | 28 |
PA (%) | 100 | 100 | 100 | 100 | |
Note: NR: no rockburst; MR: moderate rockburst; SR: strong rockburst; VR: violent rockburst.
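The ACC and kappa values in Table 7 follow directly from the test-set confusion matrices in Table 8. As a check, the sketch below recomputes both for the J48 matrix (rows: predicted; columns: observed) and recovers the reported 92.857% and 0.904.

```python
# J48 test-set confusion matrix from Table 8 (classes NR, MR, SR, VR).
matrix = [
    [6, 0, 0, 0],
    [0, 7, 1, 0],
    [0, 0, 8, 1],
    [0, 0, 0, 5],
]
n = sum(map(sum, matrix))
diagonal = sum(matrix[i][i] for i in range(len(matrix)))
row_totals = [sum(row) for row in matrix]
col_totals = [sum(col) for col in zip(*matrix)]

p_o = diagonal / n  # observed agreement, identical to overall accuracy (ACC)
p_e = sum(r * c for r, c in zip(row_totals, col_totals)) / n ** 2  # chance agreement
kappa = (p_o - p_e) / (1 - p_e)  # Cohen's kappa

print(round(100 * p_o, 3), round(kappa, 3))  # 92.857 0.904
```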
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

MDPI and ACS Style

Ahmad, M.; Hu, J.-L.; Hadzima-Nyarko, M.; Ahmad, F.; Tang, X.-W.; Rahman, Z.U.; Nawaz, A.; Abrar, M. Rockburst Hazard Prediction in Underground Projects Using Two Intelligent Classification Techniques: A Comparative Study. Symmetry 2021, 13, 632. https://doi.org/10.3390/sym13040632

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
