Article

Classification of Rock Mass Quality in Underground Rock Engineering with Incomplete Data Using XGBoost Model and Zebra Optimization Algorithm

Bo Yang, Yongping Liu, Zida Liu, Quanqi Zhu and Diyuan Li
1 School of Resources and Safety Engineering, Central South University, Changsha 410083, China
2 State Key Laboratory of Ni&Co Associated Minerals Resources Development and Comprehensive Utilization, Jinchang 737104, China
3 Jinchuan Nickel&Cobalt Research and Engineering Institute, Jinchang 737104, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(16), 7074; https://doi.org/10.3390/app14167074
Submission received: 23 July 2024 / Revised: 10 August 2024 / Accepted: 11 August 2024 / Published: 12 August 2024

Abstract

Accurate rock mass quality classification is crucial for the design and construction of underground projects. Traditional methods often rely on expert experience, introducing subjectivity, and struggle with complex geological conditions. Machine learning algorithms have improved this issue, but obtaining complete rock mass quality datasets is often difficult due to high cost and complex procedures. This study proposed a hybrid XGBoost model for predicting rock mass quality using incomplete datasets. The zebra optimization algorithm (ZOA) and Bayesian optimization (BO) were used to optimize the hyperparameters of the model. Data from various regions and types of underground engineering projects were utilized. Adaptive synthetic (ADASYN) oversampling addressed class imbalance. The model was evaluated using metrics including accuracy, Kappa, precision, recall, and F1-score. The ZOA-XGBoost model achieved an accuracy of 0.923 on the test set, demonstrating the best overall performance. Feature importance analysis and individual conditional expectation (ICE) plots highlighted the roles of RQD and UCS in predicting rock mass quality. The model’s robustness with incomplete data was verified by comparing its performance with other machine learning models on a dataset with missing values. The ZOA-XGBoost model outperformed other models, proving its reliability and effectiveness. This study provides an efficient and objective method for rock mass quality classification, offering significant value for engineering applications.

1. Introduction

With the rapid development of underground engineering projects, such as tunnels, mining, and underground storage facilities, numerous challenges and difficulties arise alongside increasing demands. One of the significant factors constraining the development of underground engineering is the geological complexity, including the heterogeneity of rock layers, fluctuations in groundwater levels, and geological instabilities, which increase construction difficulty. Therefore, objective and accurate rock mass quality assessment is essential for the design and construction of underground engineering to ensure safety and cost effectiveness.
Over the past few decades, various rock mass classification methods have been proposed. Generally, these methods can be divided into single-index and multi-index classification methods. Single-index methods use one factor of the rock mass for classification. For instance, the Protodyakonov coefficient method categorizes rock masses based on rock strength [1]. The rock quality designation (RQD) method determines quality based on the core recovery rate [2]. While single-index methods can quickly classify rock masses, they fail to comprehensively reflect rock mass quality due to the limited factors considered. As geotechnical engineering evolved, the need for multi-index classification methods became evident. Bieniawski [3] introduced the rock mass rating (RMR) system, which includes factors such as uniaxial compressive strength, RQD, joint spacing, joint condition, and groundwater. Barton et al. [4] proposed the Q-system, which considers RQD, joint characteristics, fracture water, and stress conditions, establishing the relationship between Q-value and support type. Other commonly used empirical methods include the geological strength index (GSI), rock mass index (RMI), hydropower classification (HC), and basic quality (BQ). Ma et al. [5] summarized these methods, highlighting their respective indicators. Multi-index methods provide a more comprehensive assessment, but obtaining certain rock mass indicators can be challenging under complex geological conditions, increasing the difficulty of classification. Delays in rock mass quality assessment can slow project progress and cause safety incidents, especially during the initial stages of underground construction. Moreover, empirical methods often use simple linear relationships, struggling to adapt to complex and variable geological conditions.
In recent years, with the rapid advancement of artificial intelligence, machine learning methods have been widely applied in geotechnical engineering [6]. Machine learning, with its ability to handle complex nonlinear data, flexibility, and automated feature extraction, has gained traction among researchers for rock mass quality classification [7]. For example, Liu et al. [8] proposed an intelligent rock mass classification model based on genetic algorithms (GA) and support vector machines (SVM), demonstrating its adaptability and accuracy compared to backpropagation neural networks (BPN). Santos et al. [9] identified factors influencing rock mass quality evaluation through multivariate statistical analysis, including rock mass strength and weathering degree, fragmentation, and water flow conditions, and classified rock masses using artificial neural networks (ANN) for open-pit mines. This study reduced the subjectivity in selecting indicators and classification methods, improving classification quality and accuracy. Additionally, Zhou et al. [10] considered 13 indicators affecting tunnel face classification, using ensemble learning algorithms like gradient boosting regression trees (GBRT) and random forest (RF) to systematically study rock mass quality prediction. Liu et al. [11] utilized TBM operational parameters and classifications obtained through HC to develop an ensemble learning model for predicting tunnel face rock mass quality using classification and regression trees and adaptive boosting (AdaBoost). Clearly, machine learning methods show promising performance, with ensemble learning algorithms demonstrating significant advantages due to their strong adaptability and robustness.
Despite advancements in rock mass classification research, several shortcomings remain. For example, most studies focus on open-pit mines, tunnels, and subway projects, with relatively few studies on deep mines. As shallow mineral resources deplete, deep mining becomes necessary, requiring enhanced research on deep rock mass classification. Deep mining introduces more complex environments, making comprehensive geological surveys difficult and leading to insufficient geological information. Accurately assessing rock mass quality with limited geological data becomes a critical issue in engineering design and construction.
To address these challenges, this study collected rock mass quality data from various geotechnical engineering projects, including different tunnels and underground mines, and established a multi-source underground engineering rock mass classification database. By integrating data from different regions and types of projects, the database provides comprehensive and diverse information, reducing the reliance on specific regions or types and avoiding data source limitations. Additionally, this study developed a hybrid ensemble learning model combining the zebra optimization algorithm (ZOA) and extreme gradient boosting (XGBoost) for rock mass quality classification in underground engineering. The hybrid model overcomes the limitations of single algorithms and maintains good predictive performance with incomplete data, providing a reliable and practical solution for rock mass classification in underground engineering.
Section 2 introduces the principles of ensemble learning and optimization algorithms and details the construction of the hybrid model. Section 3 describes the establishment and preprocessing of the multi-source database. Section 4 explains the evaluation metrics and training process of the hybrid model. Section 5 evaluates the model’s performance and validates its effectiveness through practical engineering applications. Section 6 discusses the study’s limitations and future improvements. Finally, Section 7 summarizes the research and conclusions.

2. Methodology

2.1. Extreme Gradient Boosting

Extreme gradient boosting (XGBoost) is an efficient, flexible, and scalable machine learning method based on the gradient boosting framework [12]. Its core principle involves iteratively improving the model by constructing a sequence of weak learners, each attempting to correct the errors of its predecessor.
Unlike other gradient boosting algorithms, XGBoost offers significant advantages in performance and flexibility. It simplifies the calculation of the loss function through a second-order Taylor expansion and includes a regularization term in the objective function to control model complexity. The objective function of XGBoost is shown in Equation (1). Additionally, XGBoost supports parallel computing and cache optimization, which enhances training speed and efficiency. It also introduces a sparsity-aware algorithm to handle missing data, ensuring high accuracy and robustness in the presence of incomplete data. Given these advantages, XGBoost is an optimal choice for the rock mass classification database used in this study.
$$ Obj = \sum_{i=1}^{n} L(y_i, \hat{y}_i) + \sum_{k=1}^{K} \Omega(f_k) \quad (1) $$
where $Obj$ is the objective function; $L$ is the loss function; $y_i$ and $\hat{y}_i$ represent the actual and predicted values of sample $i$, respectively; $\Omega$ is the regularization term; $K$ is the number of weak learners; and $f_k$ denotes the $k$-th weak learner, whose complexity is penalized by $\Omega(f_k)$.
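For concreteness, the sketch below shows how the terms of Equation (1) surface in the xgboost Python API; the values are placeholders, not the tuned hyperparameters of this study.

```python
# Minimal sketch (placeholder values, not the tuned settings of this study):
# the loss L is set by `objective`, and the regularization term Omega is
# controlled by gamma (penalty per additional leaf), reg_alpha (L1 on leaf
# weights), and reg_lambda (L2 on leaf weights).
from xgboost import XGBClassifier

model = XGBClassifier(
    n_estimators=100,            # number of weak learners K
    max_depth=4,                 # depth limit of each tree f_k
    learning_rate=0.3,           # shrinkage on each tree's contribution
    gamma=0.0,                   # minimum loss reduction required to split
    reg_alpha=1.0,               # L1 regularization on leaf weights
    reg_lambda=1.0,              # L2 regularization on leaf weights
    objective="multi:softprob",  # multi-class cross-entropy loss
)
# model.fit(X_train, y_train) with labels encoded 0..3 for classes II-V
```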
Despite its many strengths, XGBoost has some limitations. For example, the hyperparameter tuning process requires substantial computational resources and time. Therefore, suitable optimization methods are crucial for enhancing its efficiency and effectiveness. To address these challenges, metaheuristic optimization algorithms have become an area of significant interest.

2.2. Zebra Optimization Algorithm

Metaheuristic optimization algorithms, by simulating various phenomena and behaviors in nature, provide powerful tools for solving complex optimization problems and play a significant role in scientific research and engineering applications [13]. The zebra optimization algorithm (ZOA) used in this study is a novel swarm intelligence optimization algorithm that features robust optimization capabilities and rapid convergence [14]. Zebras are primarily found in the African savanna and are renowned for their distinctive black and white stripes. They are highly social animals, and their social behavior exhibits a high degree of organization and cooperation. ZOA mimics the collaborative behaviors of zebra groups when searching for food and responding to predators.
In the ZOA, each zebra represents a potential solution, and the grassland where the zebras reside represents the solution space of the problem. The optimization steps of ZOA are as follows:
(1) Initialization: An initial population is randomly generated within the solution space, and the position of each zebra is set, as shown in Equation (2).
$$ x_{i,j} = lb_j + r \cdot (ub_j - lb_j) \quad (2) $$
where $x_{i,j}$ denotes the position of the $i$-th zebra in the $j$-th dimension; $ub_j$ and $lb_j$ are the upper and lower bounds of optimization, respectively; and $r$ is a random number within the range [0, 1].
(2) First Phase (Foraging Behavior): This is the global search phase of ZOA. Zebras update their positions based on their foraging behavior. In ZOA, the leading zebra is considered the best-performing member of the population, guiding other members toward better positions within the search space. The mathematical representation of the foraging behavior is given by Equations (3) and (4).
$$ x_{i,j}^{new,P1} = x_{i,j} + r \cdot (PZ_j - I \cdot x_{i,j}) \quad (3) $$
$$ X_i = \begin{cases} X_i^{new,P1}, & F_i^{new,P1} < F_i \\ X_i, & \text{else} \end{cases} \quad (4) $$
where $X_i^{new,P1}$ is the new position of the $i$-th zebra during the first phase, $x_{i,j}^{new,P1}$ is its $j$-th dimension value, $PZ_j$ represents the position of the leading zebra in the $j$-th dimension, $I$ is the population variation control parameter, and $F_i$ is the fitness value of the $i$-th zebra.
(3) Second Phase (Defense Strategies Against Predators): This is the local search phase of ZOA. Zebras exhibit different reactions when facing various predators. When encountering large predators (e.g., lions), zebras choose to flee; when encountering smaller predators (e.g., hyenas), zebras gather to confuse or intimidate the predator. In ZOA, these two behaviors occur with equal probability. The mathematical description of the position update is provided in Equations (5) and (6).
$$ x_{i,j}^{new,P2} = \begin{cases} S_1: x_{i,j} + R(2r - 1)\left(1 - \dfrac{t}{T}\right)x_{i,j}, & P_s \le 0.5 \\ S_2: x_{i,j} + r \cdot (AZ_j - I \cdot x_{i,j}), & \text{else} \end{cases} \quad (5) $$
$$ X_i = \begin{cases} X_i^{new,P2}, & F_i^{new,P2} < F_i \\ X_i, & \text{else} \end{cases} \quad (6) $$
where $X_i^{new,P2}$ is the new position of the $i$-th zebra during the second phase, $x_{i,j}^{new,P2}$ is its $j$-th dimension value, $R$ is a constant (value of 0.01), $t$ is the current iteration number, $T$ is the maximum number of iterations, $P_s$ is the switching probability between the two strategies, and $AZ_j$ is the position of the attacked zebra in the $j$-th dimension.
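To make the two phases concrete, the following is a minimal NumPy sketch of ZOA as described by Equations (2) through (6). It is a schematic reading of the algorithm, not the authors' implementation; `f` is any fitness function to be minimized, and the random choice of the attacked zebra is our simplifying assumption.

```python
import numpy as np

def zoa(f, lb, ub, pop_size=30, max_iter=100, seed=0):
    """Schematic zebra optimization algorithm; minimizes f over [lb, ub]."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = len(lb)
    X = lb + rng.random((pop_size, dim)) * (ub - lb)   # Eq. (2): initialization
    F = np.array([f(x) for x in X])
    R = 0.01                                           # constant in Eq. (5)
    for t in range(1, max_iter + 1):
        PZ = X[np.argmin(F)]                           # leading (best) zebra
        for i in range(pop_size):
            I = rng.integers(1, 3)                     # variation control: 1 or 2
            # Phase 1: foraging (global search), Eqs. (3)-(4)
            x_new = np.clip(X[i] + rng.random(dim) * (PZ - I * X[i]), lb, ub)
            f_new = f(x_new)
            if f_new < F[i]:                           # greedy acceptance
                X[i], F[i] = x_new, f_new
            # Phase 2: defense against predators (local search), Eqs. (5)-(6)
            if rng.random() <= 0.5:                    # S1: flee a large predator
                x_new = X[i] + R * (2 * rng.random(dim) - 1) * (1 - t / max_iter) * X[i]
            else:                                      # S2: gather against a small predator
                AZ = X[rng.integers(pop_size)]         # attacked zebra (assumed random)
                x_new = X[i] + rng.random(dim) * (AZ - I * X[i])
            x_new = np.clip(x_new, lb, ub)
            f_new = f(x_new)
            if f_new < F[i]:
                X[i], F[i] = x_new, f_new
    best = np.argmin(F)
    return X[best], F[best]

# Toy usage: minimize the sphere function over [-5, 5]^3
print(zoa(lambda x: float(np.sum(x ** 2)), [-5] * 3, [5] * 3))
```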

2.3. Bayesian Optimization Algorithm

The Bayesian optimization (BO) algorithm is a probabilistic optimization method well-suited for global optimization problems, particularly those involving expensive functions without an analytical expression [15]. BO approximates the posterior distribution of an unknown objective function using prior knowledge and then selects the next hyperparameter combination to sample based on this distribution. This approach allows BO to effectively utilize information from previous searches, minimizing extensive and inefficient exploration of the hyperparameter space.
BO’s two key components are the probabilistic surrogate model and the acquisition function. The surrogate model approximates the objective function, with Gaussian processes being the most used. Gaussian processes assume that the function values at any finite set of points follow a multivariate normal distribution, defining the similarity between points through a kernel function to construct the covariance matrix. The acquisition function selects the next parameter point to evaluate. It balances exploration and exploitation based on the surrogate model’s output, efficiently finding the global optimum. The synergy between the surrogate model and the acquisition function enables BO to excel in solving complex and expensive optimization problems.
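As an illustration, the snippet below sketches this surrogate-plus-acquisition loop with scikit-optimize, an assumed tooling choice consistent with the gp_hedge acquisition strategy used later in this paper. The quadratic objective is a toy stand-in for an expensive black-box function such as a model-training run.

```python
# Hedged sketch: Gaussian-process surrogate with the gp_hedge acquisition
# strategy, which probabilistically mixes EI, PI, and LCB at each step.
from skopt import gp_minimize
from skopt.space import Integer, Real

search_space = [
    Integer(1, 25, name="max_depth"),
    Real(0.001, 1.0, name="learning_rate"),
]

def objective(params):
    max_depth, learning_rate = params
    # Toy loss; in practice this would train and score a model
    return (learning_rate - 0.1) ** 2 + 0.001 * max_depth

result = gp_minimize(
    objective,
    search_space,
    n_calls=50,            # number of surrogate-guided evaluations
    acq_func="gp_hedge",   # acquisition function balancing exploration/exploitation
    random_state=0,
)
print(result.x, result.fun)  # best point found and its loss
```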

2.4. Hybrid Models

This study aims to optimize the XGBoost model for predicting the quality classification of surrounding rock in underground engineering. To achieve this, ZOA and BO were employed to select the optimal hyperparameter combinations for XGBoost. These two optimization algorithms were chosen for their exceptional performance in addressing complex hyperparameter tuning problems and their distinct advantages. ZOA is a metaheuristic optimization algorithm inspired by the collective behavior of zebras, simulating their evasion behavior and group coordination mechanisms under predator threats. ZOA possesses global search capabilities and can avoid local optima, enabling it to find near-global optimal solutions in complex and high-dimensional search spaces. BO, on the other hand, employs an efficient exploration–exploitation balance strategy to identify better hyperparameter combinations within fewer iterations. Despite the different optimization philosophies of these algorithms, they follow similar steps when optimizing XGBoost, as illustrated in Figure 1. The specific process is as follows:
(1) Data Preparation: Collect rock mass classification data from underground engineering projects and randomly divide it into training and test sets. Preprocess the data to eliminate the effects of class imbalance, scale, and magnitude. The same training set is used for both models, while the test set is reserved for evaluation.
(2) Parameter Setting and Initialization: Define the hyperparameter optimization ranges for XGBoost. Set the population size and iteration number for ZOA and the same iteration number for BO. ZOA randomly generates a group of zebra individuals in the search space, while BO randomly selects several points within the search space.
(3) Fitness Evaluation: Establish an appropriate fitness function for the models and calculate the fitness value for each individual or point (a sketch of such a fitness function follows this list).
(4) Iterative Update: ZOA updates positions using its foraging and predator-evasion strategies, retaining individuals with higher fitness for the next generation. BO selects the next evaluation point through the acquisition function and updates the surrogate model with the new data point. In each iteration, the model evaluates the objective function.
(5) Output Optimal Solution: Upon reaching the stopping condition, output the optimal hyperparameter combination.
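The fitness function of step (3) can be sketched as follows. The rounding of integer-valued hyperparameters mirrors the treatment described in Section 4.2; the use of 5-fold cross-validated log loss and the names here are our illustrative assumptions, not the authors' exact protocol.

```python
# Hedged sketch: a fitness function shared by ZOA and BO. Continuous
# candidate vectors are rounded where a hyperparameter must be an integer,
# and the cross-entropy (log) loss drives the search.
from sklearn.metrics import log_loss
from sklearn.model_selection import cross_val_predict
from xgboost import XGBClassifier

PARAM_NAMES = ["n_estimators", "max_depth", "learning_rate",
               "gamma", "reg_alpha", "reg_lambda"]
INTEGER_PARAMS = {"n_estimators", "max_depth"}

def fitness(candidate, X_train, y_train):
    """Cross-entropy loss of an XGBoost model built from a candidate vector."""
    params = {}
    for name, value in zip(PARAM_NAMES, candidate):
        # Round to satisfy discrete constraints on integer hyperparameters
        params[name] = int(round(value)) if name in INTEGER_PARAMS else float(value)
    model = XGBClassifier(**params, objective="multi:softprob")
    # Labels are assumed encoded 0..3 for classes II-V
    proba = cross_val_predict(model, X_train, y_train, cv=5, method="predict_proba")
    return log_loss(y_train, proba)  # lower is better
```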

3. Data

3.1. Data Preparation and Analysis

Accurate rock mass quality classification methods rely on the appropriate selection of rock mass indicators. Existing classification methods suggest that the factors influencing rock mass quality evaluation can be mainly divided into three categories: (1) rock mass strength indicators, (2) rock mass structural features, and (3) other geological factors affecting rock mass quality. To make rock mass classification for underground engineering more rational and scientific, the selected evaluation indicators should comprehensively reflect the true quality of the rock mass. Additionally, the feasibility of obtaining these indicators in practical engineering must be considered. In this study, uniaxial compressive strength (UCS) was used as the strength indicator; the rock mass integrity coefficient (Kv) and rock quality designation (RQD) were used as the structural features; and the groundwater condition (W) was used as the geological factor affecting rock mass quality.
A rock mass classification database comprising 130 samples was established through a literature review and selection of rock mass classification case data. This database covers various types of geotechnical engineering projects, including tunnels, underground chambers in power stations, and underground mines [16,17,18,19,20,21,22,23]. The surrounding rock is classified into five grades: very good (class I), good (class II), fair (class III), poor (class IV), and very poor (class V). It is noteworthy that there are no class I cases in the dataset used for this study. However, this does not affect the study’s engineering applicability, as very high-quality rock masses are rare in underground or deep rock engineering.
Ultimately, a database was created with four input parameters (UCS, RQD, Kv, and W) and one output parameter (rock mass classification). Table 1 provides the statistical descriptions of the four rock mass classifications. As shown in Table 1, as the rock mass classification increases, the values of UCS and RQD decrease, indicating a negative correlation between UCS, RQD, and rock mass classification. The numbers of cases for each rock mass classification in the database are: class II with 37 cases (28.5%), class III with 43 cases (33.1%), class IV with 27 cases (20.8%), and class V with 23 cases (17.7%), indicating an imbalance in the database. Additionally, the correlation matrix of the rock mass classification data in Figure 2 can be used to analyze the correlation and distribution relationships between the data. It is evident that UCS and RQD are strongly correlated with rock mass classification. The scatter plot distribution shows that the range of input parameters is broad, effectively covering the rock mass quality conditions in underground engineering, further demonstrating the reliability and comprehensiveness of the database.

3.2. Data Pre-Processing

A reasonable division of the database is crucial for improving model performance. The rock mass classification database was randomly divided into a training set and a test set, with proportions of 70% and 30%, respectively. Although random division cannot completely ensure consistent data distribution between the training and test sets, it maintains the objectivity of model evaluation. The training set contains 91 sample points, and the issue of class imbalance remains.
This study employed adaptive synthetic (ADASYN) oversampling technology [24] to increase the number of minority class samples without disrupting the structure of the majority class rock mass samples, thereby achieving class balance in the rock mass classification data. The core idea of ADASYN is to adjust the generation process of synthetic samples based on the density of each minority class sample in the feature space. For each minority class sample, ADASYN calculates its ratio to its k-nearest majority class samples, representing the density ratio of the minority class sample. A certain number of synthetic samples are generated according to the density ratio; fewer synthetic samples are generated for higher density ratios and more for lower density ratios. To ensure that the synthetic samples are closer to the decision boundary, ADASYN introduces a weighting factor that weights the synthetic samples according to the density ratio of each minority class sample. By adaptively generating synthetic samples based on the dataset characteristics, ADASYN increases the coverage of the minority class samples and enhances the diversity and authenticity of the synthetic samples.
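A minimal sketch of this step with imbalanced-learn, an assumed tooling choice, is shown below; the synthetic data merely mimic the class proportions of the 91-sample training set.

```python
# Hedged sketch: ADASYN oversampling applied to the training set only.
from collections import Counter
from imblearn.over_sampling import ADASYN
from sklearn.datasets import make_classification

# Toy stand-in for the imbalanced training set (four classes, II-V)
X_train, y_train = make_classification(
    n_samples=91, n_features=4, n_informative=4, n_redundant=0,
    n_classes=4, n_clusters_per_class=1,
    weights=[0.29, 0.33, 0.21, 0.17], random_state=0,
)

ada = ADASYN(n_neighbors=5, random_state=42)  # k nearest neighbors per minority sample
X_res, y_res = ada.fit_resample(X_train, y_train)
print(Counter(y_train), "->", Counter(y_res))  # class counts before and after
```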
Data analysis revealed that different rock mass indicators in the database have varying magnitudes and scales, which can affect model performance and interpretation. Therefore, data preprocessing was necessary before model training. This study used the Z-score method for normalization, converting the data to a standard normal distribution with a mean of 0 and a standard deviation of 1, as shown in Equation (7). Z-score normalization eliminates the influence of different feature scales while preserving the relative order and information content of the data, improving the stability and reliability of the model.
$$ x^{*} = \frac{x - \mu}{\sigma} \quad (7) $$
where x is the value of the original data sample, μ is the mean of the original data sample, and σ is the standard deviation of the original data sample.
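In practice, Equation (7) is applied with statistics computed from the training set only, as in the sketch below; this is an illustrative pattern, not the authors' code, and the toy values only imitate the ranges of the four indicators.

```python
# Hedged sketch: Z-score normalization with training-set statistics
# reused for the test set to avoid information leakage.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.random((130, 4)) * [180.0, 90.0, 0.8, 220.0]  # toy UCS, RQD, Kv, W values

X_train, X_test = train_test_split(X, test_size=0.3, random_state=42)

scaler = StandardScaler()                    # estimates mu and sigma per feature
X_train_std = scaler.fit_transform(X_train)  # fit on the training data only
X_test_std = scaler.transform(X_test)        # reuse training statistics
```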

4. Modeling

4.1. Model Metrics

Performance evaluation is a critical aspect of the model-building process [25]. By utilizing appropriate evaluation methods, the strengths and weaknesses of the model can be understood, ensuring its robust generalization ability in rock mass classification prediction. This study utilized various evaluation metrics, including accuracy, Kappa, precision, recall, and F1-score. Accuracy measures the overall proportion of correct classifications but has limitations on imbalanced datasets. The Kappa coefficient corrects for agreement expected by chance, providing an adjusted measure of accuracy. Precision reflects the model’s accuracy in predicting positive classes, while recall measures the model’s comprehensiveness in detecting positive classes. The F1-score combines precision and recall into a single balanced metric, particularly useful in cases of class imbalance. The calculation of these evaluation indicators is illustrated in Figure 3. The comprehensive use of these evaluation metrics aids in fully understanding the model’s performance, thereby better guiding its optimization and improvement.
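A compact sketch of the five metrics with scikit-learn follows. Weighted averaging for the multi-class precision, recall, and F1-score is our assumption; it is consistent with Table 4, where recall equals accuracy for every model, as weighted recall does by construction.

```python
# Hedged sketch: the five evaluation metrics used in this study.
from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                             f1_score, precision_score, recall_score)

def evaluate(y_true, y_pred):
    """Return the five metrics for a multi-class prediction."""
    return {
        "Accuracy": accuracy_score(y_true, y_pred),
        "Kappa": cohen_kappa_score(y_true, y_pred),
        "Precision": precision_score(y_true, y_pred, average="weighted"),
        "Recall": recall_score(y_true, y_pred, average="weighted"),
        "F1-score": f1_score(y_true, y_pred, average="weighted"),
    }

# Toy example with classes II-V encoded as 2-5
print(evaluate([2, 3, 4, 5, 3], [2, 3, 4, 5, 4]))
```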

4.2. Model Training

Optimizing the XGBoost model is crucial for enhancing its predictive performance and generalization capability. As an ensemble learning algorithm, XGBoost excels in handling complex data. However, an unoptimized model can consume excessive computational resources and time and is prone to overfitting. Therefore, this study employed ZOA and BO to optimize XGBoost, improving the model’s efficiency and scalability. ZOA simulates the collective behavior of zebra groups to achieve global search and optimization in a multidimensional parameter space, demonstrating good convergence and global search capabilities. BO uses Bayesian inference to establish a probabilistic model of the objective function, quickly finding the optimal solution within a limited number of iterations and showing strong robustness to noise. The application of ZOA and BO makes XGBoost more efficient in handling complex data. The model construction process is detailed below.
Given the numerous hyperparameters in XGBoost, this study focused on optimizing those that significantly impact model training [26]. Hyperparameters related to the base learner (decision trees) and the gradient boosting framework are key components affecting model performance. Parameters such as “n_estimators”, “max_depth”, “learning_rate”, and “gamma” influence the construction and optimization of each tree. “reg_alpha” and “reg_lambda”, which control the complexity and stability of the overall model through regularization, help prevent overfitting. Therefore, “n_estimators”, “max_depth”, “learning_rate”, “gamma”, “reg_alpha”, and “reg_lambda” were selected for optimization. The optimization ranges for these hyperparameters are shown in Table 2. Considering that some hyperparameters are integers while ZOA generates candidate solutions as continuous values, rounding was applied to convert these values into integers, thereby ensuring that the hyperparameters met their discrete constraints. This approach maintains the effectiveness of the search and addresses non-convex issues arising from discretization, ensuring stable convergence to the optimal solution. For BO, the method evaluated the objective function at discrete points to maintain integer constraints, enhancing optimization efficiency and performance within a limited number of iterations.
In the process of optimizing the model hyperparameters using ZOA, population size and the number of iterations are critical parameters. Population size determines the number of zebra individuals in each generation, while the number of iterations represents the total generations executed by the algorithm. A larger population size can enhance the global search capability but also increases computational cost; conversely, a smaller population size may cause the algorithm to fall into local optima. Thus, a balance between global exploration and computational cost is essential. The choice of iteration number depends on the problem’s complexity and computational resource constraints. Generally, more iterations allow the algorithm to find better solutions, but excessive iterations increase computation time and can degrade convergence efficiency. After multiple tests, six population sizes (30, 50, 70, 90, 110, and 130) were set, and XGBoost was trained through 100 iterations. The iteration count was chosen by observing how model accuracy varied: starting from 50 iterations and increasing incrementally, accuracy improved and eventually stabilized, so the count was set to 100 to balance accuracy with computational efficiency.
Before optimization, the fitness function must be defined. The cross-entropy loss function, widely used in classification tasks, measures the difference between the predicted probabilities of each class and the actual labels; the lower the cross-entropy loss, the higher the model’s predicted probability for the true class. The iteration convergence graph of the ZOA-XGBoost model is shown in Figure 4. As the number of iterations increases, the fitness values of the different populations decrease, and the models with different population sizes achieve their lowest loss at the end of optimization. The larger the population size, the faster the model converges. Around the 45th iteration, the fitness values of all models stabilize, reaching the optimal state of optimization.
After completing ZOA-XGBoost training, the optimal hyperparameters and training time corresponding to different population sizes were obtained. The optimal hyperparameters for different models are shown in Table 3. The training accuracy and training time of the various models are illustrated in Figure 5. Typically, the optimal population size is determined by the lowest cross-entropy loss value. Figure 4 shows that the loss value of the model decreases as the population size increases. The model with a population size of 130 has the lowest loss value in 100 iterations. However, this study also considered training time in the model evaluation. When models have similar predictive performance on the same dataset, those with shorter training times are preferable. Figure 5 shows that the ZOA-XGBoost models achieve an accuracy of 1 across different population sizes, indicating excellent predictive performance during training. It can also be observed that the model with a population size of 30 has the shortest training time, thus making it the optimal ZOA-XGBoost model.
In optimizing the model using BO, the same model hyperparameters and iterations as in ZOA were adopted. BO comprises two core components: the probabilistic surrogate model and the acquisition function. This study used a Gaussian process as the probabilistic surrogate model and gp_hedge as the acquisition function to leverage the advantages of multiple acquisition functions, enhancing optimization efficiency and performance. The fitness value variation curve of the BO-XGBoost model with the number of iterations is shown in Figure 6. The cross-entropy loss value significantly decreases within the first 10 iterations and stabilizes after 29 iterations. The convergence of the iterative curve is fast and effective, demonstrating the effectiveness and superiority of the BO method in optimization problems. Ultimately, the BO-XGBoost model achieved a training accuracy of 0.983, with the optimal hyperparameter combination determined as follows: “n_estimators = 241, max_depth = 25, learning_rate = 1, gamma = 0, reg_alpha = 1, and reg_lambda = 11.2811.”

5. Results and Discussion

5.1. Model Evaluation

After determining the optimal hyperparameter combinations, it was crucial to comprehensively and objectively evaluate the models’ performance to ensure they provide high-precision rock mass classification predictions for engineering applications. Although the models demonstrated perfect fitting ability on the training set, their generalization capability needed to be verified using the test set.
Confusion matrices for each model on the test set were established, as shown in Figure 7. A confusion matrix is a table where rows represent actual classes and columns represent predicted classes. From the figure, it is evident that all models achieve perfect prediction for class II, with 16 correct cases. The unoptimized XGBoost performs poorly on class III, misclassifying four instances. After optimization, both BO-XGBoost and ZOA-XGBoost improve the number of correct classifications for classes III and IV. The evaluation metrics calculated from the confusion matrices are presented in Table 4. The unoptimized XGBoost shows the poorest predictive performance, with the following metrics: Accuracy of 0.821, Kappa of 0.738, Precision of 0.829, Recall of 0.821, and F1-score of 0.813. The optimized models exhibit better predictive performance. Specifically, ZOA-XGBoost significantly outperforms the unoptimized XGBoost model across multiple performance metrics, highlighting ZOA’s advantage in hyperparameter tuning. The metrics for ZOA-XGBoost are Accuracy of 0.923, Kappa of 0.887, Precision of 0.932, Recall of 0.923, and F1-score of 0.922. BO also enhances XGBoost’s predictive performance, slightly less than ZOA-XGBoost but still significantly better than the unoptimized model. The metrics for BO-XGBoost are Accuracy of 0.897, Kappa of 0.850, Precision of 0.916, Recall of 0.897, and F1-score of 0.896. These results indicate that BO is also effective in selecting XGBoost hyperparameters and significantly improving model performance. Figure 8 visually presents the evaluation metric scores for the three models on the test set. It is evident that ZOA-XGBoost achieves the highest score, demonstrating superior predictive performance. The performance ranking of the models is as follows: ZOA-XGBoost > BO-XGBoost > XGBoost. Additionally, Figure 9 illustrates the receiver operating characteristic (ROC) curves and the area under the ROC curve (AUC) values for each model across different classes. The closer the ROC curve is to the top left corner, the better the model’s performance. Based on the curve distribution and AUC values, ZOA-XGBoost again exhibits the best performance, providing superior predictions for rock mass classification.
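The per-class ROC curves in Figure 9 follow the usual one-vs-rest construction, sketched below under the assumption of a fitted model and test arrays from the earlier steps, with labels encoded 0 to 3 for classes II to V.

```python
# Hedged sketch: one-vs-rest ROC/AUC per rock mass class. `model`,
# `X_test`, and `y_test` are assumed to come from the earlier steps.
from sklearn.metrics import auc, roc_curve
from sklearn.preprocessing import label_binarize

class_names = ["II", "III", "IV", "V"]
y_bin = label_binarize(y_test, classes=[0, 1, 2, 3])  # one column per class
proba = model.predict_proba(X_test)                   # class probabilities

for k, name in enumerate(class_names):
    fpr, tpr, _ = roc_curve(y_bin[:, k], proba[:, k])
    print(f"class {name}: AUC = {auc(fpr, tpr):.3f}")
```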

5.2. Model Interpretation

Model interpretation is crucial in predicting rock mass quality classifications. Analyzing feature importance helps to understand the contribution of input parameters to model predictions. This understanding allows for optimizing geological survey and data collection strategies, thereby improving prediction accuracy.
XGBoost provides various methods for evaluating feature importance, including weight-based, cover-based, and gain-based approaches. The weight-based method calculates the number of times a feature is used to split the data across all decision trees. The more frequently a feature appears at split points, the higher its weight. The cover method measures the number of training data instances that pass through split points using a particular feature, reflecting the feature’s prevalence in the model. Gain indicates the average reduction in training loss when a feature is used for splitting, directly reflecting its contribution to model accuracy. Analyzing feature importance through multiple methods offers a comprehensive understanding of each feature’s contribution to the model. As shown in Figure 10, RQD stands out in all three methods, particularly in gain (0.517), indicating its significant role in improving model performance. UCS and W also show high importance, consistently contributing across all methods, while Kv has relatively lower importance. The feature importance analysis ranks the features as follows: RQD > UCS > W > Kv.
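These three views can be read directly off a trained booster, as in the hedged sketch below; the raw scores are normalized here for comparison, which we assume matches the relative importances reported in Figure 10.

```python
# Hedged sketch: the three xgboost importance views for a fitted model.
# Keys are feature names if the training data carried them (e.g. a pandas
# DataFrame with columns UCS, RQD, Kv, W); otherwise f0..f3.
booster = model.get_booster()
for imp_type in ("weight", "cover", "gain"):
    scores = booster.get_score(importance_type=imp_type)
    total = sum(scores.values())
    normalized = {k: round(v / total, 3) for k, v in scores.items()}
    print(imp_type, normalized)
```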
To determine the impact of features on rock mass quality prediction, individual conditional expectation (ICE) plots were used for detailed analysis of each feature [27], as shown in Figure 11. ICE plots illustrate the effect and direction of a single feature’s change on rock mass quality prediction, holding other feature values constant. The pink lines represent the prediction changes of individual instances, while the purple lines show the average prediction changes for all instances. The figure reveals that as RQD increases, particularly in the higher RQD range, the prediction probability for class II rises significantly, while it decreases for class V. At low RQD values, the probability for class III is low, but it increases markedly as RQD grows, indicating that mid to high RQD values favor class III predictions. Similarly, mid to low RQD values support class IV predictions, highlighting RQD’s critical impact on rock mass quality prediction, with higher RQD values indicating better rock mass quality. UCS also significantly impacts prediction probability, showing trends similar to RQD for classes III, IV, and V. However, changes in UCS have a smaller effect on class II predictions. The relatively flat lines in the ICE plot for Kv suggest its limited contribution to rock mass classification predictions, whereas W has a slightly higher influence than Kv.
The ICE plot analysis indicates that RQD and UCS are the most influential features for rock mass quality classification predictions, while W and Kv have relatively minor impacts, consistent with the feature importance analysis results. Future data collection efforts should focus on gathering RQD and UCS to enhance model prediction performance. Although W and Kv have smaller roles in the model, they should not be overlooked, especially under specific geological conditions where they might be significant.
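For reference, ICE curves of this kind can be produced with scikit-learn's PartialDependenceDisplay, as in the sketch below; the DataFrame column names and the target class index are illustrative assumptions, not the authors' plotting code.

```python
# Hedged sketch: ICE curves (kind="individual") for RQD and UCS. Assumes
# `model` is a fitted classifier and `X_train_df` is a pandas DataFrame
# with columns UCS, RQD, Kv, W; `target` picks the class whose predicted
# probability is traced (0 = class II under the encoding assumed here).
import matplotlib.pyplot as plt
from sklearn.inspection import PartialDependenceDisplay

PartialDependenceDisplay.from_estimator(
    model, X_train_df, features=["RQD", "UCS"],
    kind="individual",  # one curve per instance, as in Figure 11
    target=0,
)
plt.show()
```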

5.3. Engineering Validation

Under complex geological conditions, it is often challenging to obtain a complete rock mass classification dataset. Therefore, handling missing data becomes a critical issue in model applications. This study collected an additional dataset with missing values to evaluate the performance of the ZOA-XGBoost model [28,29,30,31,32], as shown in Table 5. The incomplete dataset was input into the developed ZOA-XGBoost model, and its accuracy was calculated. For comparison, several commonly used machine learning models were also trained on the same dataset. These models included random forest (RF), CatBoost, decision tree (DT), SVM, ANN, k-nearest neighbors (KNN), and Naïve Bayes (NB). Since not all models can handle missing values, four imputation methods (filling with 0, the mean, median, and mode of the training set) were used to process the missing data.
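The comparison protocol can be sketched as follows; variable names are illustrative and assumed to come from the earlier steps. The key point is that XGBoost consumes NaN entries directly through its sparsity-aware split finding, while the other models require imputation with statistics taken from the training set.

```python
# Hedged sketch of the validation protocol on incomplete data.
from sklearn.impute import SimpleImputer

# ZOA-XGBoost: NaNs in X_missing are routed along learned default branches
pred_xgb = zoa_xgb_model.predict(X_missing)

# Other models: impute first, using training-set statistics
for strategy in ("constant", "mean", "median", "most_frequent"):
    imputer = SimpleImputer(strategy=strategy, fill_value=0)  # fill_value used only by "constant"
    imputer.fit(X_train)                  # statistics estimated on the training set
    X_filled = imputer.transform(X_missing)
    # ... train and evaluate RF, CatBoost, DT, SVM, ANN, KNN, NB on X_filled
```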
Figure 12 presents the performance comparison of the different models on the incomplete dataset. As shown in Figure 12a, the ZOA-XGBoost model achieves an accuracy of 80.00% directly on the dataset with missing values, significantly outperforming other models after imputation. The accuracy of other models is relatively low when missing values are filled with 0. Figure 12b shows the performance when missing values were filled with the mean of the training set. Although the performance of other models improves, ZOA-XGBoost remains the best performer. Figure 12c illustrates the results when missing values were filled with the median. RF, CatBoost, and KNN demonstrate more stable performance compared to other models but still lag behind ZOA-XGBoost. Figure 12d shows the performance when missing values were filled with the mode. While other models demonstrate more balanced performance after mode imputation, they still do not match the performance of ZOA-XGBoost overall.
In summary, ZOA-XGBoost shows significant advantages and high robustness in handling missing values. Compared to other models, it adapts better to incomplete data and makes accurate predictions. This also validates the reliability and effectiveness of ZOA-XGBoost in practical engineering applications.

6. Limitations and Future Studies

After evaluation and validation, the hybrid model developed in this study demonstrates high prediction accuracy and engineering applicability in rock mass classification. However, several limitations require further improvement:
  • The dataset used is relatively small and lacks cases of class I. Expanding the dataset size in the future could further enhance the model’s applicability to a wider range of engineering scenarios.
  • While ZOA-XGBoost performs well with missing values, its robustness and stability against other types of uncertainty and noisy data have not been fully verified. Future research could incorporate more noise-handling and uncertainty-evaluation methods to improve model reliability.
  • The features used in this study (UCS, RQD, Kv, and W) play important roles in rock mass quality prediction, but other influential factors were not included. Future improvements could involve incorporating additional geological and engineering factors, such as rock mass structural characteristics and stress conditions, to further refine the model.

7. Conclusions

Real-time and accurate evaluation of rock mass quality is crucial for the design and construction of underground engineering projects. However, due to the geological complexity of deep rock engineering, comprehensive geological surveys are challenging, making it difficult to accurately assess the quality of surrounding rock with limited geological data. To address this issue, this study established a rock mass quality classification database comprising various underground engineering projects and developed a rock mass classification prediction model using XGBoost. To enhance the model’s predictive performance, ZOA and BO algorithms were introduced to optimize XGBoost’s hyperparameters. The specific conclusions are as follows:
  • Performance evaluation results indicate that the ZOA-XGBoost model with a population size of 30 achieved the best predictive performance. The evaluation metrics on the test set were Accuracy of 0.923, Kappa of 0.887, Precision of 0.932, Recall of 0.923, and F1-score of 0.922.
  • Multiple feature importance analysis methods demonstrated that RQD and UCS are the key input variables for predicting rock mass quality. ICE analysis revealed that higher RQD and UCS values correspond to better rock mass quality, consistent with actual engineering experience.
  • To further validate the performance of the ZOA-XGBoost model, an additional rock mass quality classification dataset containing missing values was constructed. The results showed that ZOA-XGBoost achieved an accuracy of 80.00% on the incomplete dataset, significantly outperforming other machine learning models with imputed missing values. This confirms the reliability and effectiveness of the hybrid model developed in this study for practical engineering applications.

Author Contributions

Conceptualization, B.Y. and D.L.; Data curation, B.Y. and Y.L.; Formal analysis, Q.Z.; Funding acquisition, Q.Z. and D.L.; Investigation, Q.Z.; Methodology, Y.L.; Project administration, Q.Z. and D.L.; Resources, Y.L.; Software, B.Y. and Z.L.; Supervision, Q.Z. and D.L.; Validation, B.Y., Y.L., and Z.L.; Visualization, B.Y.; Writing—original draft, B.Y.; Writing—review & editing, Q.Z. and D.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant nos. 52374153 and 52304113.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. Terzaghi, K. Rock Defects and Loads on Tunnel Supports; Harvard University, Graduate School of Engineering: Cambridge, MA, USA, 1946. [Google Scholar]
  2. Deere, D.U. Technical Description of Rock Cores for Engineering Purposes; University of Illinois: Urbana, IL, USA, 1962. [Google Scholar]
  3. Bieniawski, Z.T. Engineering Classification of Jointed Rock Masses. Civ. Eng. Siviele Ingenieurswese 1973, 1973, 335–343. [Google Scholar]
  4. Barton, N.; Lien, R.; Lunde, J. Analysis of Rock Mass Quality and Support Practice in Tunneling, and a Guide for Estimating Support Requirements: Internal Report; Norges Geotekniske Institute: Trondheim, Norway, 1974. [Google Scholar]
  5. Ma, J.; Li, T.; Yang, G.; Dai, K.; Ma, C.; Tang, H.; Wang, G.; Wang, J.; Xiao, B.; Meng, L. A Real-Time Intelligent Classification Model Using Machine Learning for Tunnel Surrounding Rock and Its Application. Georisk Assess. Manag. Risk Eng. Syst. Geohazards 2023, 17, 148–168. [Google Scholar] [CrossRef]
  6. Zhao, J.; Li, D.; Jiang, J.; Luo, P. Uniaxial Compressive Strength Prediction for Rock Material in Deep Mine Using Boosting-Based Machine Learning Methods and Optimization Algorithms. CMES Comput. Model. Eng. Sci. 2024, 140, 275–304. [Google Scholar] [CrossRef]
  7. Hamdia, K.M.; Ghasemi, H.; Bazi, Y.; AlHichri, H.; Alajlan, N.; Rabczuk, T. A Novel Deep Learning Based Method for the Computational Material Design of Flexoelectric Nanostructures with Topology Optimization. Finite Elem. Anal. Des. 2019, 165, 21–30. [Google Scholar] [CrossRef]
  8. Liu, K.; Liu, B.; Fang, Y. An Intelligent Model Based on Statistical Learning Theory for Engineering Rock Mass Classification. Bull. Eng. Geol. Environ. 2019, 78, 4533–4548. [Google Scholar] [CrossRef]
  9. Santos, A.E.M.; Lana, M.S.; Pereira, T.M. Rock Mass Classification by Multivariate Statistical Techniques and Artificial Intelligence. Geotech. Geol. Eng. 2021, 39, 2409–2430. [Google Scholar] [CrossRef]
  10. Zhou, M.; Chen, J.; Huang, H.; Zhang, D.; Zhao, S.; Shadabfar, M. Multi-Source Data Driven Method for Assessing the Rock Mass Quality of a NATM Tunnel Face via Hybrid Ensemble Learning Models. Int. J. Rock Mech. Min. Sci. 2021, 147, 104914. [Google Scholar] [CrossRef]
  11. Liu, Q.; Wang, X.; Huang, X.; Yin, X. Prediction Model of Rock Mass Class Using Classification and Regression Tree Integrated AdaBoost Algorithm Based on TBM Driving Data. Tunn. Undergr. Space Technol. 2020, 106, 103595. [Google Scholar] [CrossRef]
  12. Chen, T.; Guestrin, C. XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; ACM: San Francisco, CA, USA, 2016; pp. 785–794. [Google Scholar]
  13. Zhou, J.; Wang, Z.; Li, C.; Wei, W.; Wang, S.; Armaghani, D.J.; Peng, K. Hybridized Random Forest with Population-Based Optimization for Predicting Shear Properties of Rock Fractures. J. Comput. Sci. 2023, 72, 102097. [Google Scholar] [CrossRef]
  14. Trojovska, E.; Dehghani, M.; Trojovsky, P. Zebra Optimization Algorithm: A New Bio-Inspired Optimization Algorithm for Solving Optimization Algorithm. IEEE Access 2022, 10, 49445–49473. [Google Scholar] [CrossRef]
  15. Li, D.; Liu, Z.; Xiao, P.; Zhou, J.; Armaghani, D.J. Intelligent Rockburst Prediction Model with Sample Category Balance Using Feedforward Neural Network and Bayesian Optimization. Undergr. Space 2022, 7, 833–846. [Google Scholar] [CrossRef]
  16. Hu, J.; Zhou, T.; Ma, S.; Yang, D.; Guo, M.; Huang, P. Rock Mass Classification Prediction Model Using Heuristic Algorithms and Support Vector Machines: A Case Study of Chambishi Copper Mine. Sci. Rep. 2022, 12, 928. [Google Scholar] [CrossRef] [PubMed]
  17. Li, S.; Shen, Y.; Lin, P.; Xie, J.; Tian, S.; Lv, Y.; Ma, W. Classification Method of Surrounding Rock of Plateau Tunnel Based on BP Neural Network. Front. Earth Sci. 2023, 11, 1283520. [Google Scholar] [CrossRef]
  18. Wu, S.; Yang, S.; Du, X. A Model for Evaluation of Surrounding Rock Stability Based on D-S Evidence Theory and Error-Eliminating Theory. Bull. Eng. Geol. Environ. 2021, 80, 2237–2248. [Google Scholar] [CrossRef]
  19. Yin, H.; Zhao, H.; Xu, L.; Zhao, C.; Ma, D.; Cong, S. Classification of Rock Mass in Mine Based on Improved Fuzzy Comprehensive Evaluation Method. Met. Mine 2020, 53–58. (In Chinese) [Google Scholar] [CrossRef]
  20. Liu, A.; Su, L.; Zhu, X.; Zhao, G. Rock Quality Evaluation Based on Distance Discriminant Analysis and Fuzzy Mathematic Method. J. Min. Saf. Eng. 2011, 28, 462–467. (In Chinese) [Google Scholar]
  21. Hu, J.; Ai, Z. Extension Evaluation Model of Rock Mass Quality for Underground Mine Based on Optimal Combination Weighting. Gold Sci. Technol. 2017, 25, 39–45. (In Chinese) [Google Scholar]
  22. Cai, G. Study of the BP Neural Network on the Stability Classification of Surrounding Rocks. Master’s Thesis, Hohai University, Nanjing, China, 2002. (In Chinese). [Google Scholar]
  23. Huang, R.; Zhao, Z.; Li, P.; Zhang, X. Tunnel’s Quality Evaluation of Surrounding Rock Based on Entropy Weight Method and Extenics. Highw. Eng. 2012, 37, 139–143. (In Chinese) [Google Scholar]
  24. He, H.; Bai, Y.; Garcia, E.A.; Li, S. ADASYN: Adaptive Synthetic Sampling Approach for Imbalanced Learning. In Proceedings of the 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), Hong Kong, China, 1–8 June 2008; pp. 1322–1328. [Google Scholar]
  25. Qiu, Y.; Zhou, J. Short-Term Rockburst Damage Assessment in Burst-Prone Mines: An Explainable XGBOOST Hybrid Model with SCSO Algorithm. Rock Mech. Rock Eng. 2023. [Google Scholar] [CrossRef]
  26. Li, C.; Zhou, J. Prediction and Optimization of Adverse Responses for a Highway Tunnel after Blasting Excavation Using a Novel Hybrid Multi-Objective Intelligent Model. Transp. Geotech. 2024, 45, 101228. [Google Scholar] [CrossRef]
  27. Goldstein, A.; Kapelner, A.; Bleich, J.; Pitkin, E. Peeking Inside the Black Box: Visualizing Statistical Learning with Plots of Individual Conditional Expectation. J. Comput. Graph. Stat. 2015, 24, 44–65. [Google Scholar] [CrossRef]
  28. Wu, S.; Chen, J.; Wu, M. Study on Stability Classification of Underground Engineering Surrounding Rock Based on Concept Lattice—TOPSIS. Arab. J. Geosci. 2020, 13, 346. [Google Scholar] [CrossRef]
  29. Zheng, X. Rock Mass Quality Classification Method Based on the VWT and Cloud Model. Mod. Min. 2018, 34, 88–90+95. (In Chinese) [Google Scholar]
  30. Liu, M.; Jiang, J.; Jiang, H.; Chen, Z.; Xie, R. Rock Mass Quality Grading of Underground Mining Mine Based on PCA-EWM-TOPSIS Coupled Algorithm. Gold 2022, 43, 27–31. (In Chinese) [Google Scholar]
  31. Zhang, H.; Yan, W.; Guo, S.; Jiao, M.; Lei, M. Classification of Rock Mass Quality in a Mine Based on ELM Model and Its Application. Gold 2018, 39, 32–34+38. (In Chinese) [Google Scholar]
  32. Yang, C.; Ke, C.; Yang, J.; Zhou, P.; Sha, Y.; Wu, Y. Evaluation and Prediction of Rock Mass Quality Based on Fuzzy RES-Extenics Theory: A Case Study of Pulang Copper Mine Area. Saf. Environ. Eng. 2022, 29, 122–131. (In Chinese) [Google Scholar] [CrossRef]
Figure 1. Overall construction process of the hybrid models.
Figure 2. Variable correlation matrix of rock mass classification data. The different shades of purple correspond to the magnitude of the absolute value of the correlation; the darker the shade, the larger the absolute value.
Figure 3. Calculation of evaluation indicators.
Figure 4. Iterative convergence graph of the ZOA-XGBoost model.
Figure 5. Training accuracy and training time for ZOA-XGBoost.
Figure 6. Iterative convergence graph of the BO-XGBoost model.
Figure 7. Confusion matrix of models during the testing phase: (a) XGBoost; (b) BO-XGBoost; (c) ZOA-XGBoost.
Figure 8. The final ranking of models during the testing phase.
Figure 9. The ROC curves and AUC values of models: (a) XGBoost; (b) BO-XGBoost; (c) ZOA-XGBoost.
Figure 10. The relative importance of the total features.
Figure 11. ICE plots for four features and rock quality classes.
Figure 12. Comparison of model performance on incomplete datasets: (a) fill value is 0; (b) fill value is the mean; (c) fill value is the median; (d) fill value is the mode.
Table 1. Statistical description of input parameters.

| Class | Statistical Indicator | UCS (MPa) | RQD (%) | Kv | W (L·(min·10 m)−1) |
|---|---|---|---|---|---|
| class II | Count and percentage | 37 (28.5%) | 37 (28.5%) | 37 (28.5%) | 37 (28.5%) |
| | Mean value | 102.32 | 77.72 | 0.62 | 5.02 |
| | Standard deviation | 25.74 | 6.94 | 0.11 | 5.30 |
| | Min value | 70.00 | 58.13 | 0.30 | 0 |
| | 25th percentile | 90.00 | 75.00 | 0.55 | 0 |
| | 50th percentile | 94.00 | 77.50 | 0.65 | 1.00 |
| | 75th percentile | 95.00 | 82.00 | 0.70 | 10.00 |
| | Max value | 181.73 | 90.10 | 0.75 | 17.00 |
| class III | Count and percentage | 43 (33.1%) | 43 (33.1%) | 43 (33.1%) | 43 (33.1%) |
| | Mean value | 66.31 | 55.59 | 0.44 | 21.64 |
| | Standard deviation | 28.90 | 15.49 | 0.16 | 36.34 |
| | Min value | 25.00 | 26.00 | 0.12 | 0 |
| | 25th percentile | 40.00 | 50.00 | 0.32 | 9.50 |
| | 50th percentile | 70.00 | 52.00 | 0.40 | 15.00 |
| | 75th percentile | 93.50 | 69.00 | 0.57 | 20.00 |
| | Max value | 127.92 | 88.60 | 0.70 | 223.00 |
| class IV | Count and percentage | 27 (20.8%) | 27 (20.8%) | 27 (20.8%) | 27 (20.8%) |
| | Mean value | 45.16 | 45.95 | 0.42 | 30.11 |
| | Standard deviation | 24.90 | 15.93 | 0.15 | 36.45 |
| | Min value | 6.00 | 20.00 | 0.20 | 0 |
| | 25th percentile | 28.60 | 36.75 | 0.30 | 8.75 |
| | 50th percentile | 40.00 | 43.00 | 0.38 | 21.00 |
| | 75th percentile | 53.00 | 52.50 | 0.54 | 40.00 |
| | Max value | 118.00 | 89.50 | 0.71 | 168.00 |
| class V | Count and percentage | 23 (17.7%) | 23 (17.7%) | 23 (17.7%) | 23 (17.7%) |
| | Mean value | 26.97 | 23.70 | 0.43 | 35.28 |
| | Standard deviation | 18.82 | 9.83 | 0.19 | 31.51 |
| | Min value | 4.00 | 10.00 | 0.07 | 0 |
| | 25th percentile | 14.50 | 16.00 | 0.29 | 11.00 |
| | 50th percentile | 27.00 | 24.00 | 0.43 | 20.00 |
| | 75th percentile | 35.90 | 29.00 | 0.58 | 58.50 |
| | Max value | 86.00 | 44.00 | 0.75 | 125.00 |
Table 2. Optimization ranges for XGBoost hyperparameters.

| Hyperparameter | Minimum Value | Maximum Value |
|---|---|---|
| n_estimators | 1 | 500 |
| max_depth | 1 | 25 |
| learning_rate | 0.001 | 1 |
| gamma | 0 | 5 |
| reg_alpha | 1 | 15 |
| reg_lambda | 1 | 15 |
Table 3. Optimal hyperparameters for ZOA-XGBoost.

| Model | n_estimators | max_depth | learning_rate | gamma | reg_alpha | reg_lambda |
|---|---|---|---|---|---|---|
| ZOA-XGBoost-30 | 161 | 4 | 0.2212 | 0.0001 | 1.0042 | 1.4136 |
| ZOA-XGBoost-50 | 125 | 4 | 0.3824 | 0.0020 | 1.0002 | 1.4090 |
| ZOA-XGBoost-70 | 121 | 4 | 0.2494 | 0.0002 | 1.0027 | 1.7869 |
| ZOA-XGBoost-90 | 98 | 4 | 0.3771 | 0.0011 | 1.0006 | 1.7440 |
| ZOA-XGBoost-110 | 59 | 4 | 0.3823 | 0 | 1.0000 | 1.1562 |
| ZOA-XGBoost-130 | 102 | 4 | 0.3461 | 0.0004 | 1.0002 | 1.0101 |
Table 4. Evaluation results of models during the testing phase.

| Model | Accuracy | Kappa | Precision | Recall | F1-Score |
|---|---|---|---|---|---|
| XGBoost | 0.821 | 0.738 | 0.829 | 0.821 | 0.813 |
| BO-XGBoost | 0.897 | 0.850 | 0.916 | 0.897 | 0.896 |
| ZOA-XGBoost | 0.923 | 0.887 | 0.932 | 0.923 | 0.922 |
Table 5. Datasets with missing values.

| No. | Project | UCS (MPa) | RQD (%) | Kv | W (L·(min·10 m)−1) | Class |
|---|---|---|---|---|---|---|
| 1 | Jinzhou LPG cavern | 144.00 | 68.00 | 0.90 | - | II |
| 2 | | 160.10 | 82.00 | 0.83 | - | II |
| 3 | | 176.40 | 66.00 | 0.80 | - | II |
| 4 | Jiaojia gold mine | 39.73 | 68.10 | - | 8.00 | III |
| 5 | | 39.73 | 63.70 | - | 80.00 | IV |
| 6 | | 50.20 | 76.30 | - | 15.00 | III |
| 7 | A metal mine | 120.35 | 69.80 | 0.47 | - | III |
| 8 | | 120.35 | 59.50 | 0.41 | - | III |
| 9 | | 120.35 | 65.30 | 0.45 | - | III |
| 10 | | 120.35 | 70.70 | 0.48 | - | III |
| 11 | | 120.35 | 63.50 | 0.44 | - | III |
| 12 | | 120.35 | 65.70 | 0.45 | - | III |
| 13 | | 120.35 | 57.50 | 0.40 | - | III |
| 14 | | 120.35 | 56.80 | 0.40 | - | III |
| 15 | | 120.35 | 56.80 | 0.40 | - | III |
| 16 | | 120.35 | 60.90 | 0.42 | - | III |
| 17 | | 120.35 | 68.80 | 0.47 | - | III |
| 18 | | 120.35 | 66.40 | 0.46 | - | III |
| 19 | | 120.35 | 65.60 | 0.45 | - | III |
| 20 | | 120.35 | 70.80 | 0.48 | - | III |
| 21 | | 120.35 | 59.90 | 0.41 | - | III |
| 22 | | 156.75 | 80.00 | 0.54 | - | II |
| 23 | | 89.83 | 30.60 | 0.21 | - | IV |
| 24 | A gold mine | 30.00 | 47.00 | - | 15.00 | III |
| 25 | | 47.60 | 49.00 | - | 9.80 | III |
| 26 | | 38.00 | 23.00 | - | 16.30 | IV |
| 27 | | 88.00 | 57.00 | - | 19.00 | III |
| 28 | | 70.10 | 51.80 | - | 0.05 | II |
| 29 | Pulang copper mine | - | 64.00 | 0.73 | 15.06 | II |
| 30 | | - | 65.00 | 0.69 | 15.19 | II |
| 31 | | - | 58.00 | 0.61 | 16.17 | II |
| 32 | | - | 50.00 | 0.57 | 17.62 | III |
| 33 | | - | 56.00 | 0.60 | 18.77 | II |
| 34 | | - | 59.00 | 0.64 | 16.33 | II |
| 35 | | - | 53.00 | 0.66 | 21.66 | II |
| 36 | | - | 41.00 | 0.53 | 24.55 | III |
| 37 | | - | 61.00 | 0.65 | 12.16 | III |
| 38 | | - | 70.00 | 0.71 | 7.16 | III |
| 39 | | - | 62.00 | 0.65 | 12.13 | III |
| 40 | | - | 63.00 | 0.63 | 16.62 | III |
| 41 | | - | 58.00 | 0.62 | 14.69 | III |
| 42 | | - | 61.00 | 0.64 | 16.16 | III |
| 43 | | - | 57.00 | 0.59 | 13.15 | III |
| 44 | | - | 64.00 | 0.67 | 8.02 | III |
| 45 | | - | 64.00 | 0.62 | 19.48 | III |

