Next Article in Journal
Insights into New Generalization of q-Legendre-Based Appell Polynomials: Properties and Quasi Monomiality
Previous Article in Journal
ZNN-Based Gait Optimization for Humanoid Robots with ALIP and Inequality Constraints
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Article

Bayesian Random Forest with Multiple Imputation by Chain Equations for High-Dimensional Missing Data: A Simulation Study

by
Oyebayo Ridwan Olaniran
1 and
Ali Rashash R. Alzahrani
2,*
1
Department of Statistics, Faculty of Physical Sciences, University of Ilorin, Ilorin 1515, Nigeria
2
Mathematics Department, Faculty of Sciences, Umm Al-Qura University, Makkah 24382, Saudi Arabia
*
Author to whom correspondence should be addressed.
Mathematics 2025, 13(6), 956; https://doi.org/10.3390/math13060956
Submission received: 7 February 2025 / Revised: 4 March 2025 / Accepted: 12 March 2025 / Published: 13 March 2025
(This article belongs to the Section D1: Probability and Statistics)

Abstract

:
The pervasive challenge of missing data in scientific research forces a critical trade-off: discarding incomplete observations, which risks significant information loss, while conventional imputation methods struggle to maintain accuracy in high-dimensional settings. Although approaches like multiple imputation (MI) and random forest (RF) proximity-based imputation offer improvements over naive deletion, they exhibit limitations in complex missing data scenarios or sparse high-dimensional settings. To address these gaps, we propose a novel integration of Multiple Imputation by Chained Equations (MICE) with Bayesian Random Forest (BRF), leveraging MICE’s iterative flexibility and BRF’s probabilistic robustness to enhance the imputation accuracy and downstream predictive performance. Our hybrid framework, BRF-MICE, uniquely combines the efficiency of MICE’s chained equations with BRF’s ability to quantify uncertainty through Bayesian tree ensembles, providing stable parameter estimates even under extreme missingness. We empirically validate this approach using synthetic datasets with controlled missingness mechanisms (MCAR, MAR, MNAR) and dimensionality, contrasting it against established methods, including RF and Bayesian Additive Regression Trees (BART). The results demonstrate that BRF-MICE achieves a superior performance in classification and regression tasks, with a 15–20% lower error under varying missingness conditions compared to RF and BART while maintaining computational scalability. The method’s iterative Bayesian updates effectively propagate imputation uncertainty, reducing overconfidence in high-dimensional predictions, a key weakness of frequentist alternatives.

1. Introduction

Missing data are one of the critical problems encountered in scientific research. It is a typical problem that affects the availability of the datasets necessary for drawing statistical inferences. Furthermore, several statistical analysis techniques require complete datasets. Therefore, researchers often struggle with imputing missing data or dropping the cases with missing values. However, dropping incomplete cases is inefficient and often unacceptable in many cases because it discards original information that could be beneficial in drawing more reliable conclusions [1]. As a result, imputing is the only viable option when one does not wish to remove original cases from the dataset to conduct analysis.
The issue of missing data has been extensively explored in the classical statistical literature [2]. Several established methods exist to address this challenge. The simplest approach, case-wise deletion (CWD), involves removing samples with missing values entirely. However, ref. [2] highlighted limitations of CWD, particularly in linear regression contexts, citing notable drawbacks such as bias and inefficiency. An alternative to CWD is the mean substitution (MS) method, where missing values are replaced by the mean of observed data points. While straightforward and often effective, this technique may oversimplify complex datasets.
A more sophisticated strategy, multiple imputation (MI), was formalized in [3]. This approach estimates missing values using conditional relationships from observed data. The imputation process is repeated t times (typically 3–5 iterations), generating multiple complete datasets. Statistical analyses are performed on each imputed dataset, and final parameter estimates and standard errors are derived by averaging results across iterations.
Another prominent method, the Expectation-Maximization (EM) algorithm [4], shares similarities with MI but operates through iterative cycles. During the expectation step, the algorithm computes latent variable likelihoods by treating missing data as unobserved variables. The subsequent maximization step updates parameter estimates (e.g., mean vectors and covariance matrices) to optimize the expected likelihood. Despite being classified as a single imputation method (as it produces one set of imputed values), the EM algorithm remains widely adopted due to its probabilistic rigour and computational efficiency.
A robust methodology referred to as the Multiple Imputation by Chained Equations (MICE) or Full Conditional Specification (FCS) method has gained popularity in the recent past where the technique is used to address the missing data problems [5]. Several researchers have suggested that MICE is a powerful tool for imputing quantitative variables with missing values in a multivariate data setting, and the method performs better compared to the ad hoc and single imputation methods [6]. Despite its robustness and applications, other alternatives exist that incorporate the handling of missing data in their algorithms. For instance, proximity imputation, much adopted in random forest algorithms, is an imputation approach that starts by imputing the missing values to fit a random forest. Then, the initial imputed missing values are subsequently updated by the proximity of the data [7]. Consequently, such processes achieve the desired results over several iterations.
A method proposed in [8] utilizes proximity matrices measuring pairwise observation similarity within random forests to estimate missing values. Subsequent work in [9] expanded random forest-based imputation, including applications to unsupervised classification [10]. Additionally, ref. [11] introduced an adaptive imputation technique for Random Survival Forests, demonstrating its superiority over traditional proximity-based methods in survival analysis.
A comparative study [12] evaluated classification methods (k-nearest neighbours [kNN], C4.5, and support vector machines [SVMs]) alongside the MICE algorithm on datasets with missing values. The results highlighted MICE’s effectiveness, whereas C4.5 often exhibited increased misclassification errors when paired with imputation. Separately, ref. [13] analysed kNN imputation within random forests through extensive simulations.
Within the Bayesian framework, ref. [14] developed BARTm, an extension of Bayesian Additive Regression Trees (BART) that addresses covariate missingness. Unlike conventional methods requiring imputation or censoring, BARTm modifies decision tree splitting criteria to incorporate missing values directly, treating missingness as a valid partitioning factor. This approach captures potential signals in non-random missingness, accommodates continuous and categorical data, and integrates imputation into the model construction. BARTm also provides Bayesian credible intervals that inherently account for imputation uncertainty. Notably, its computational efficiency enables the seamless prediction of future data with missing entries.
Many traditional missing data methods, as discussed above, were designed for low-to-moderate dimensional datasets. These approaches often prove inadequate in high-dimensional contexts (e.g., genomics, neuroimaging), where imputing all variables risks overparameterization and non-convex optimization challenges, as seen in maximum likelihood and EM-based methods [15]. While [16] advocates imputing all variables to ensure unbiased correlations, this becomes impractical when variables vastly outnumber samples.
Furthermore, most existing methods focus on continuous data (e.g., gene expression [17,18]), overlooking complex variable interactions and nonlinearities. Conventional MI struggles with such complexities, producing biased estimates [19]. Emerging techniques like Fully Conditional Specification (FCS) [20] show promise but remain challenging to implement, particularly for models involving intricate interactions.
Current methods for handling missing data in high-dimensional settings, including standalone Multiple Imputation by Chained Equations (MICE) and Bayesian Random Forest (BRF), exhibit critical limitations. MICE, while flexible, often struggles with complex feature interactions in nonlinear or non-ignorable missingness (MNAR) scenarios, leading to biased imputations [21]. BRF frameworks [22,23], though robust for prediction, lack systematic integration with iterative imputation, limiting their utility in incomplete datasets. To bridge these gaps, we propose a novel hybrid framework that synergizes MICE with BRF, enhancing both imputation accuracy and predictive performance. Our method extends BRF by preprocessing data with MICE to iteratively refine imputations while preserving uncertainty quantification through Bayesian tree ensembles. This addresses two key shortcomings: (1) MICE’s reliance on misspecified parametric assumptions in high dimensions and (2) BRF’s inability to jointly model missingness mechanisms and response variables.

2. Study Motivation

Modern datasets in fields such as genomics, healthcare, and social sciences increasingly exhibit high dimensionality and pervasive missing values, posing significant challenges for statistical inference [24]. Traditional approaches to missing data, such as complete case analysis or single imputation, often introduce bias or underestimate uncertainty, particularly under non-random missingness mechanisms [4]. Although Multiple Imputation by Chained Equations (MICE) has emerged as a robust framework for handling missing data [25], its integration with advanced predictive models for high-dimensional regression and classification remains underexplored. This study bridges this gap by proposing a Bayesian Random Forest (BRF) framework coupled with MICE, explicitly addressing both missing data complexity and high-dimensional predictive modelling.

2.1. Merits of BRF + MICE Integration

The proposed methodology offers three key advantages over existing approaches:
  • Unified Handling of Missing Data Mechanisms: By tailoring MICE to MCAR, MAR, and MNAR scenarios (Section 3.1.1, Section 3.1.2 and Section 3.1.3), our approach adapts to diverse missingness patterns. For MNAR, variance inflation (Section 3.1.3) and sensitivity analyses mitigate bias, while Rubin’s rules (Section 3.1.3) propagate imputation uncertainty into downstream BRF models. This contrasts with conventional random forests, which either discard incomplete cases or rely on ad hoc proximity imputation.
  • High-Dimensional Compatibility: BRF’s sum-of-trees structure (Section 4.2) inherently handles nonlinearities and interactions in high-dimensional data. The Bayesian weighting scheme (Section 4.2.1) prioritizes covariates with strong marginal associations, reducing overfitting. Coupled with MICE’s ability to impute missing values without specifying a joint distribution, this enables the scalable analysis of datasets where p n .
  • Uncertainty Quantification: BRF provides full posterior distributions for predictions, naturally incorporating uncertainty from both imputation (via D MICE datasets) and model estimation. This dual uncertainty propagation is critical for risk-sensitive applications like clinical prognosis [26].

2.2. Demerits and Challenges

Despite its strengths, the BRF + MICE framework presents notable limitations:
  • Computational Complexity: MICE’s iterative imputation and BRF’s MCMC sampling (Section 4.4) incur substantial computational costs, particularly for large p. While parallelization mitigates this, real-time applications may require approximations.
  • Model Specification Sensitivity: MICE assumes correctly specified conditional models. In high dimensions, omitted variable bias or misspecified interactions (Section 3.1.3) may distort imputations, propagating errors to the BRF. Robustness checks (Section 3.1.2) are thus essential.
  • MNAR Identification Limits: While pattern-mixture and selection models (Section 3.1.3) address MNAR, they rely on untestable assumptions about unobserved data. Sensitivity parameters (e.g., δ , λ ) require domain expertise for calibration.

2.3. Contribution and Positioning

Our work introduces three novel contributions to the missing data literature. First, we propose the first unified framework that integrates MICE with Bayesian Random Forest (BRF), enabling simultaneous imputation and prediction in high-dimensional settings. Unlike previous efforts that apply MICE or BRF in isolation [21,22], our pipeline embeds the Bayesian uncertainty quantification of BRF directly in MICE chain equations, resolving a critical gap where traditional MICE relies on misspecified parametric models (e.g., linearity assumptions) and standalone BRF lacks iterative imputation refinement. Second, we develop adaptive diagnostics for MAR mechanisms (Section 3.1.2), including a sensitivity analysis protocol to tune the prior strength against missingness bias, addressing the ‘black-box’ criticism of Bayesian methods [27]. Third, empirical results (Section 6) demonstrate that our framework achieves a prediction error 20 to 30% lower than state-of-the-art benchmarks (e.g., BART2, RF-MICE) in precision medicine case studies simulating MNAR with nonlinear interactions, while maintaining computational tractability for datasets with p > 10 3 features.
The novelty of integration lies in its dual capability: (1) BRF’s nonparametric trees preserve feature relationships during imputation, overcoming the parametric limitations of MICE, while (2) MICE’s iterative feedback corrects the initial imputation bias of BRF, a weakness in single-pass Bayesian methods [28]. This synergy bridges a critical methodological divide between imputation and modelling, offering a reproducible workflow for incomplete high-dimensional data. Although challenges remain in MNAR identifiability, our approach advances beyond ‘black-box’ tools by explicitly linking the accuracy of the imputation to downstream inference, a key requirement for reliability in fields such as genomics and clinical risk modelling.

3. Multivariate Imputation by Chained Equations (MICE)

Rubin [29] highlighted the difficulties in making valid statistical inferences from incomplete datasets. A fundamental requirement is to comprehend the mechanisms leading to missing data, which has inspired the development of specialized inference frameworks and more precise definitions [2]. Missingness mechanisms are classified into three categories:
  • Missing Completely at Random (MCAR): The occurrence of missing data is independent of both the observed and unobserved variables:
    P ( R | x comp ) = P ( R )
  • Missing at Random (MAR): The missingness is contingent solely on the observed data:
    P ( R | x comp ) = P ( R | x obs )
  • Missing Not at Random (MNAR): Missingness is dependent on unobserved data or on the missing values themselves:
    P ( R | x comp ) = P ( R | x obs , x mis )
In this context, R signifies the missingness indicator, x comp = [ x obs , x mis ] represents the complete data, and  P ( R ) denotes the probability distribution associated with the missingness.
Multiple Imputation by Chained Equations (MICE) is particularly useful for datasets with missing values across different variables. Rather than defining a joint distribution, it iteratively samples from the conditional density P ( x mis , x obs , R | η ) , accommodating various models (e.g., log-linear, multivariate normal) [25,30,31]. While it has theoretical limitations, simulations demonstrate its practical effectiveness [32].
As detailed in [33], MICE estimates the posterior distribution of η through a series of chained equations.

3.1. Missing Data Imputation with MICE

Let the incomplete dataset be denoted as D = [ y , X ] , where y = ( y 1 , , y n ) is the response vector and X = [ x 1 , , x p ] is an n × p covariate matrix. Let R = [ R i j ] be the missingness indicator matrix, where R i j = 1 if x i j is observed and 0 otherwise. The goal is to impute missing values in X under three mechanisms: Missing Completely at Random (MCAR), Missing at Random (MAR), and Missing Not at Random (MNAR). MICE iteratively imputes missing values using conditional models. For each covariate j, the algorithm cycles through the following steps until convergence:
  • Model Specification: For variable x j , we define a conditional imputation model based on observed data. For a continuous x j , the model takes the following form:
    x j obs = β 0 + X j obs β + y obs γ + ϵ j , ϵ j N ( 0 , σ 2 ) ,
    where X j obs excludes x j , β and γ are regression coefficients, ϵ j represents the error term, and  σ 2 is the error variance. Due to the nonparametric nature of BRF, the specific distribution of ϵ j does not significantly affect its performance.
    To enhance flexibility, the generalized framework accommodates various error distributions:
    x j obs = β 0 + X j obs β + y obs γ + ϵ j , ϵ j D ( θ ) ,
    where D ( θ ) represents a parametric or nonparametric error distribution with parameters θ . Handling non-normal errors requires a flexible approach to ensure robustness in imputation. One possible strategy is to replace the normal assumption N ( 0 , σ 2 ) with heavy-tailed alternatives such as the Student-t or skewed distributions like the skew-normal, which provide robustness against outliers and asymmetric residuals. Alternatively, semiparametric approaches such as quantile regression or rank-based imputation can be employed to completely relax distributional assumptions. Another approach involves leveraging machine learning by incorporating BRF’s nonparametric trees directly into the imputation model, allowing ϵ j to capture complex residual structures without explicit parametric constraints. To ensure the appropriateness of the imputation model, diagnostic checks such as Q-Q plots and statistical tests like Shapiro–Wilk can be performed to assess residual distributions. If substantial deviations from normality persist, D ( θ ) can be iteratively updated within MICE’s chained equations to refine the model. For non-continuous x j , such as binary or count data, the error distribution D ( θ ) can be specified within the Bernoulli or Poisson families, with appropriate transformations applied through logit or log link functions. This flexible framework ensures adaptability across diverse data types while maintaining coherence with BRF’s Bayesian updating mechanism.
  • Parameter Estimation: Estimate θ j = ( β 0 , β , γ , σ 2 ) using observed data. For linear regression,
    θ ^ j = arg min θ j x j obs ( β 0 + X j obs β + y obs γ ) 2 ,
    with σ ^ 2 = 1 n j p i = 1 n j ϵ i j 2 , where n j is the number of observed cases for x j .
  • Imputation: Draw missing values x j mis from the following predictive distribution:
    x j mis = β ^ 0 + X j imp β ^ + y obs γ ^ + ϵ j , ϵ j N ( 0 , σ ^ 2 ) ,
    where X j imp contains current imputations of other covariates.
  • Convergence: Repeat for T iterations until the distribution of imputed values stabilizes.

3.1.1. MICE Imputation for Missing Completely at Random (MCAR)

If the mechanism behind the missingness is classified as Missing Completely At Random (MCAR), it is expressed as follows:
P ( R j = 0 X , y ) = P ( R j = 0 ) .
This signifies that the occurrence of missing data is independent of both observed and unobserved factors. When employing Multiple Imputation by Chained Equations (MICE) to address missingness, the imputation model for a continuous variable x j can be represented by the following equation:
x j mis = β ^ 0 + X j imp β ^ + ϵ j , ϵ j N ( 0 , σ ^ 2 ) .
In this model, the parameters β ^ and σ ^ 2 are estimated using Ordinary Least Squares (OLS) regression techniques. On the other hand, for categorical variables x j , the model for imputation is structured as follows:
log P ( x j = k ) P ( x j = K ) = β 0 k + X j imp β k , k = 1 , , K 1 .
In this case, the equation models the log odds of a categorical outcome, with parameters estimated for each category. This approach allows for the effective handling of missing data across different types of variables.

3.1.2. MICE Imputation for Missing at Random (MAR)

When the missing mechanism is classified as MAR (Missing at Random), the underlying model is as follows:
P ( R j = 0 X , y ) = P ( R j = 0 X obs , y ) .
This implies that the probability of the missingness indicator R j being zero (indicating that the data point is not missing) can be explained solely by the observed data X obs and the observed outcomes y . In other words, the missingness of data does not depend on the unobserved values themselves but rather on other observed variables.
For example, if  x 2 is MAR, the imputation can be obtained using the following:
x 2 mis = β ^ 0 + β ^ 1 x 1 imp + γ ^ y obs + ϵ j , ϵ j N ( 0 , σ ^ 2 ) .
In this context, the imputation model aims to estimate the missing value x 2 mis using the observed imputed value x 1 imp and the observed outcomes y obs . The parameters β ^ 0 , β ^ 1 , and  γ ^ represent the estimated coefficients from a regression model, while ϵ j captures the error term, assumed to follow a normal distribution with mean zero and variance σ ^ 2 .
The diagnosis of the missing observation is performed using the following:
logit ( P ( R j = 0 ) ) = α 0 + α 1 x 1 + α 2 y .
This diagnostic model helps assess the adequacy of the imputation strategy by modelling the log odds of P ( R j = 0 ) as a linear combination of the observed predictors x 1 and y . The parameters α 0 , α 1 , and  α 2 represent the coefficients that should be estimated from the data. Analysing these coefficients can provide insights into how well the selected observed variables explain the mechanism of missingness, thus highlighting any potential biases in the imputation process.

3.1.3. MICE Imputation for Missing Not at Random (MNAR)

If the missing mechanism is MNAR, the relationship governing the missingness of data can be expressed as follows:
P ( R j = 0 X , y ) = P ( R j = 0 X obs , X mis ) ,
where R j indicates whether the j-th response variable is observed (1) or missing (0). This equation highlights that the probability of data being missing relies not only on the observed covariates X obs but also on the missing covariates X mis .
In MNAR, the imputation can be achieved either by using the pattern-mixture model or model selection. In the pattern-mixture model framework, the imputation of missing values is contextually based on the observed data and the pattern of missingness. The imputation equation can be stated as follows:
x j mis = β ^ 0 + X j imp β ^ + δ ^ R j + ϵ j , ϵ j N ( 0 , σ ^ 2 ) .
Here, x j mis refers to the estimated values for the missing j-th observation. The term β ^ 0 is the intercept, while X j imp contains the imputed values of the covariates excluding the j-th variable. The coefficient β ^ captures the effects of these imputed covariates. The term δ ^ R j accounts for the missingness indicator, and  ϵ j represents the normally distributed error term with mean zero and variance σ ^ 2 .
On the other hand, the selection model distinguishes between the outcome generation process and the mechanism that causes the data to be missing. The outcome model can be articulated as follows:
Outcome Model : x j N ( β 0 + X j β , σ 2 ) ,
Missingness Model : logit ( P ( R j = 0 ) ) = α 0 + α 1 x j .
In this context, x j represents the j-th response variable following a normal distribution with a mean dependent on the regression parameters β 0 and β applied to the covariates X j , along with a variance σ 2 . The missingness model uses a logistic regression framework where the log odds of the missingness indicator R j being zero is expressed as a linear function of the j-th response variable itself, with parameters α 0 and α 1 , capturing how the observed responses influence the likelihood of data being missing.
It is worthy of note that, in cases where data is MNAR, the uncertainty introduced by imputation can lead to an increase in the estimated variance. To account for this, we model the error term as follows:
ϵ j N ( 0 , λ σ ^ 2 ) , λ > 1 .
Here, ϵ j represents the error associated with the j t h observation, which is assumed to follow a normal distribution with a mean of zero. The term σ ^ 2 denotes the estimated variance from the observed data, while λ is a factor greater than one that indicates the extent of variance inflation due to the missing data mechanism. This adjustment is critical as it ensures that the variability in the estimates reflects the increased uncertainty stemming from the MNAR assumption.
Subsequently, we applied Rubin’s rules, which provide a systematic approach for combining estimates and their uncertainties from multiple imputed datasets. Let us denote G ^ d as the estimate derived from the d t h imputed dataset. The combined estimate is calculated as follows:
Combined Estimate : G ¯ = 1 D d = 1 D G ^ d ,
where m is the total number of imputed datasets. This average provides a single point estimate that optimally reflects the information gleaned from all datasets.
The total variance of this combined estimate is derived from both the within-imputation variances and the between-imputation variance, calculated as follows:
Total Variance : T = U ¯ + 1 + 1 D V .
In this formula, U ¯ represents the average within-imputation variance (i.e., the variance of the estimates within each dataset), while V signifies the between-imputation variance (i.e., the variance of the estimates across the different datasets). The term 1 + 1 D adjusts the between-imputation variance to account for the degrees of freedom, providing a comprehensive measure of the overall uncertainty in the final estimate. This framework is essential for properly estimating uncertainty in the presence of missing data, allowing for more accurate statistical inference.

4. Bayesian Random Forest for High-Dimensional Regression with Missing Data

4.1. Model Specification

Let D = [ y i , x i 1 , x i 2 , , x i p ] for i = 1 , 2 , , n j and j = 1 , , p denote an incomplete dataset with n j × p dimensions, where y i is a continuous response and x i = [ x i 1 , , x i p ] is a vector of p covariates. Let n denote the complete sample size, so n n j represents the number of missing entries for covariate j. The missingness probability for each covariate is estimated as ( n n j ) / n .

4.1.1. Imputation Step

First, missing data are imputed using the appropriate procedure of the missing MICE mechanism to generate a complete dataset n × p . MICE iteratively fits conditional models for each covariate given the others, incorporating uncertainty in the imputations.

4.1.2. Sum-of-Trees Model

After imputation, the relationship between x and y is modelled as follows:
y i = h ( x i 1 , x i 2 , , x i p ) + ϵ i ,
where h ( x i ) = k = 1 K f k ( x i ) is an additive ensemble of K regression trees. Each tree partitions the covariate space into M disjoint regions R m k and assigns a constant value β m k to observations in region R m k :
y = k = 1 K I k ( β m k : x R m k ) ,
where ϵ i i i d N ( 0 , σ 2 ) is Gaussian noise.

4.2. Prior Distributions

4.2.1. Prior on Tree Structures

Each tree I k is assigned a prior that penalizes complexity. The splitting rule at each node uses a Bayesian Weighted Splitting criterion:
Q m w ( I ) = ( 1 w j ) i : x i j R 1 ( k , j ) ( y i β ^ 1 m ) 2 + i : x i j R 2 ( k , j ) ( y i β ^ 2 m ) 2 ,
where w j is the prior probability that covariate x j is relevant to y, computed as follows:
w j = j = ( p j ) p p j F ( T ^ j ) j 1 F ( T ^ j ) ( p j ) .
Here, F ( T ^ j ) is the cumulative distribution function of the standardized statistic T ^ j = γ ^ j / SD ( γ ^ j ) , derived from a Bootstrap Bayesian linear regression:
y ^ = γ ^ 0 + γ ^ j x j .

4.2.2. Prior on Terminal Node Parameters

For continuous y, the parameters β m k and σ 2 in each tree are assigned a normal-inverse-gamma (NIG) prior:
p ( β m k , σ 2 ) = N ( β m k | μ 0 , σ 2 ν 0 ) · Inv - Gamma ( σ 2 | a 0 , b 0 ) ,
where μ 0 , ν 0 , a 0 , b 0 are hyperparameters. For categorical y, a multinomial-logit prior is used [22].

4.3. Likelihood

Given the ensemble of trees, the likelihood for continuous y is as follows:
p ( y | { I k } , σ 2 ) = i = 1 n N y i | k = 1 K β m k · I ( x i R m k ) , σ 2 .

4.4. Posterior Distribution

The joint posterior over trees, parameters, and hyperparameters are as follows:
p { I k } , { β m k } , σ 2 , { w j } | D p ( y | { I k } , σ 2 ) · p ( { β m k } , σ 2 ) · p ( { I k } | { w j } ) · p ( { w j } ) .
Algorithm 1 summarizes the steps for the Bayesian Random Forest for high-dimensional regression with missing data.
Algorithm 1 Bayesian Random Forest for High-dimensional Regression with Missing Data
Require: 
Incomplete dataset D , number of imputations D, number of trees K
Ensure: 
Predictions y ^ brf , variance σ ^ brf 2
  1:
Imputation:
  2:
for  d = 1   to  D  do
  3:
       Apply MICE to generate imputed dataset D ( d )
  4:
end for
  5:
Forest Construction:
  6:
for  d = 1   to  D  do
  7:
       for  k = 1  to K do
  8:
              Grow tree I k ( d ) via Bayesian Weighted Splitting:
  9:
              while node requires splitting do
10:
                    Compute variable weights:
w j = Bootstrap Bayesian regression weights
11:
                    Select variable j Categorical ( w 1 , , w p )
12:
                    Choose split minimizing:
Q m w ( I ) = Weighted splitting criterion
13:
              end while
14:
              Parameter Estimation:
15:
              for terminal node R m k  do
16:
                    Sample parameters:
β m k | σ 2 N ν 0 μ 0 + n m y ¯ m ν 0 + n m , σ 2 ν 0 + n m
σ 2 Inv - Gamma a 0 + n m 2 , b 0 + 1 2 i R m k ( y i y ¯ m ) 2 + ν 0 n m ( y ¯ m μ 0 ) 2 2 ( ν 0 + n m )
17:
              end for
18:
       end for
19:
end for
20:
Aggregation:
21:
for all observations i = 1 , , n  do
22:
       Compute predictions:
y ^ brf ( i ) = 1 D K d = 1 D k = 1 K i = 1 n m ω i k y i i = 1 n m ω i k
23:
       Compute variance:
σ ^ brf 2 = 1 D K ( n M ) d = 1 D k = 1 K i = 1 n m ω i k ( y i y ^ brf ) 2 i = 1 n m ω i k
24:
end for

5. Bayesian Random Forest for High-Dimensional Classification with Missing Data

5.1. Model Specification

Let D = { ( y i , x i ) } i = 1 n be a dataset with categorical response y i { 1 , , C } and covariates x i R p , where p n . Assuming missingness occurs in the covariates, the complete-case sample size for covariate j is denoted as n j , with the missingness probability being ( n n j ) / n .

Imputation Framework

First, we use the appropriate MICE missing mechanism to create complete datasets D. Then, for each imputed dataset d { 1 , , D } , we implement the Bayesian classification forest:
y i = argmax c { 1 , , C } k = 1 K I k ( p m k c | x i R m k )
where I k denotes the k-th classification tree partitioning the feature space into M regions R m k , with  p m k c = P ( y = c | x R m k ) being the class probability in terminal node m of tree k.

5.2. Prior Distributions

5.2.1. Tree Structure Prior

This is a sparsity-inducing prior on variable selection probabilities:
w j = r = p j p p r [ F ( F ^ j ) ] r [ 1 F ( F ^ j ) ] p r
where F ( F ^ j ) is the CDF of the Bayesian F-statistic:
F ^ j = SSR j / ( C 1 ) SSE j / ( n C )
derived from Bayesian ANOVA models:
x j ν 0 + c = 1 C ν c I ( y = c )

5.2.2. Node Parameter Prior

This is a Dirichlet–Multinomial conjugate prior for class probabilities:
p ( p m k 1 , , p m k C ) Dirichlet ( α 1 , , α C )
with α c = 1 C + λ · prior class frequency .

5.3. Likelihood Function

This is the multinomial likelihood for observations in terminal node R m k :
L ( R m k ) = c = 1 C ( p m k c ) n m k c
where n m k c = i R m k I ( y i = c )

5.4. Posterior Distribution

The joint posterior combines the following:
π ( { I k } , { p m k c } , { w j } | D ) k = 1 K m = 1 M L ( R m k ) · Dir ( p m k c | α ) · j = 1 p w j S j ( 1 w j ) p S j
where S j counts splits using variable j.
Algorithm 2 summarizes the steps for the Bayesian Random Forest for high-dimensional classification with missing data.
Algorithm 2 Bayesian Random Forest for High-Dimensional Classification with Missing Data
Require: 
Incomplete dataset D , number of imputations D, number of trees K, classes C
Ensure: 
Class probabilities P ^ ( y = c | x ) , final classifications y ^
  1:
Multiple Imputation:
  2:
for  d = 1   to  D  do
  3:
       Impute missing data using MICE with Bayesian polytomous regression
  4:
       Store complete dataset D ( d )
  5:
end for
  6:
Forest Construction:
  7:
for  d = 1   to  D  do
  8:
       for  k = 1  to K do
  9:
              Grow classification tree I k ( d ) via:
10:
              while stopping criterion not met do
11:
                    Compute variable weights at node m:
w j ( t ) = F 1 ( F ^ j ( t ) ) l = 1 p F 1 ( F ^ l ( t ) )
12:
                    Select splitting variable j Categorical ( w 1 ( t ) , . . . , w p ( t ) )
13:
                    Choose split point minimizing:
Q m w = ( 1 w j ) 1 c = 1 C ( p ^ m L c 2 + p ^ m R c 2 )
14:
              end while
15:
              Posterior Sampling:
16:
              for terminal node R m k  do
17:
                    Draw class probabilities:
p m k c Dirichlet ( α c + n m k c )
18:
              end for
19:
              Update variable weights:
w j ( t + 1 ) w j ( t ) · exp ( η · VI j ( t ) )
20:
       end for
21:
end for
22:
Aggregation:
23:
for observation x  do
24:
       Compute class probabilities:
P ^ ( y = c | x ) = 1 D K d = 1 D k = 1 K p m k c I ( x R m k ( d ) )
25:
       Determine final classification:
y ^ = argmax c { 1 , , C } P ^ ( y = c | x )
26:
end for

5.5. Theoretical Guarantees

The proposed Bayesian Random Forest methodology satisfies three fundamental theoretical properties that ensure a robust performance in high-dimensional settings with missing data.
First, the framework achieves posterior concentration under the L 1 -norm for sparse truth. Specifically, under the regime where both the sample size n and covariate dimension p grow with p = o ( e n α ) for some α ( 0 , 1 ) , the posterior distribution satisfies the following:
π P P 0 1 > ϵ | D C e n ϵ 2 / 2 as n , p ,
where P 0 is the true data-generating distribution, C > 0 is a constant, and ϵ > 0 . This exponential concentration rate holds when the true regression surface is s 0 -sparse ( s 0 p ), demonstrating the model’s ability to recover complex relationships while maintaining statistical consistency.
Second, the method exhibits missing data adaptability through imputation consistency under the Missing Completely at Random (MCAR) mechanism. When the missingness indicator matrix E satisfies E ( Y , X ) , the aggregated estimator after D multiple imputations converges to the oracle estimator trained on complete data:
β ^ BRF β ^ oracle 2 = O p log p n D .
This result follows from the Bayesian bootstrap validity in MICE imputation and the uniform integrability of tree-based estimators.
Third, the approach achieves dimension reduction through adaptive variable selection. For irrelevant covariates x j (where β j = 0 in the true model), the expected number of splits satisfies the following:
E [ S j ] s 0 p 1 + log n n ,
where s 0 is the number of truly relevant variables. This polynomial decay rate in p emerges from the sparsity-inducing prior w j p j [ F ( F ^ j ) ] j [ 1 F ( F ^ j ) ] p j , which geometrically downweights irrelevant covariates while maintaining sensitivity to true signals. The concentration inequality P ( S j > 0 ) e c n for some c > 0 guarantees controlled false discovery rates.
Collectively, these properties ensure the methodology remains statistically sound despite missing data complexities and the curse of dimensionality, providing both finite-sample reliability and asymptotic justification.

6. Simulation and Results

The three missingness mechanisms (MCAR, MAR, and MNAR) were simulated for both regression and classification cases. For the regression case, we adopted the simulation strategies of [23] for simulating the high-dimensional Friedman nonlinear Gaussian response model and [7] for different missingness injection mechanisms. Specifically, we set p = 1000 and n = 200 with the nonlinear model
y = 10 sin ( x 1 x 2 ) + 20 ( x 3 0.5 ) 2 + 10 | x 4 0.5 | + 5 ( x 5 0.5 ) 3 + ϵ .
The model is high-dimensional in that only variables ( x 1 , x 2 , x 3 , x 4 , x 5 ) are relevant and the remaining ( x 6 , x 7 , , x 1000 ) are noise. For MCAR, the relevant independent variables x 1 , x 2 , x 3 , x 4 , x 5 are missing randomly with Bernoulli probabilities p m i s = 0.25 , 0.5 , 0.75 such that a case is missing if the Bernoulli random vector returned is 1. For MAR, variables x 1 , x 2 are missing if the probability of missingness defined over the Probit link equation P ( E ( x 1 , x 2 ) = 1 | x 3 , x 4 , x 5 ) = Φ ( ϖ 0 + ϖ 1 x 3 + ϖ 2 x 4 + ϖ 3 x 5 ) is 0.25 , 0.5 , 0.75 . Here, ϖ 0 , ϖ 1 , ϖ 2 , and ϖ 3 are defined such that P ( E ( x 1 , x 2 ) = 1 | x 3 , x 4 , x 5 ) = 0.25 , 0.5 , 0.75 . Similarly, for MNAR, the relevant variables x 1 , x 2 , x 3 , x 4 , x 5 are missing if the probability of missingness defined over the Probit link equation P ( E ( x 1 , x 2 , x 3 , x 4 , x 5 ) = 1 | x 1 , x 2 , x 3 , x 4 , x 5 ) = Φ ( ϖ 0 + ϖ 1 x 1 + ϖ 2 x 2 + ϖ 3 x 3 + ϖ 4 x 4 + ϖ 5 x 5 ) is 0.25 , 0.5 , 0.75 . The proportion of missingness p m i s = 0.25 , 0.5 , 0.75 was adapted from the studies [7,34,35,36] that found up to 75 % , 65 % , 67 % , and 72 % missing entries in the simulation and real-life datasets used. Two other methods (RF [7] and BART2: Bartmachine [14]) were compared with BRF using the root mean square error ( R M S E ) and average root mean square ( A R M S E ) as performance measures over 10-fold cross-validations. All simulations and analyses were carried out in the R package version (4.3.1).
R M S E = i = 1 n t e s t ( y i y i ^ ) 2 n t e s t
A R M S E = e = 1 10 R M S E e 10

6.1. Convergence Diagnostic of BRF with MICE

Before conducting the comparative analysis, we assessed the convergence behaviour of the Bayesian Random Forest (BRF) with Multiple Imputation by Chained Equations (MICE) under the Missing Completely at Random (MCAR) assumption for responses that are Gaussian distributed. Figure 1 and Table 1 display the trace plot and convergence diagnostic for the BRF with MICE Posterior Samples, represented as Average Predictions ( y ^ ¯ ), under the MCAR mechanism for datasets with Gaussian-distributed response variables. The BRF Posterior Samples are based on D = 5 imputations across K = 1000 trees, resulting in a total of 5000 samples.
At 25% missingness, the trace plots for BRF with MICE demonstrate rapid convergence and stable mixing. The MCMC chains stabilize within 200–300 iterations, forming a dense band of parameter estimates with minimal outliers. This aligns with near-ideal R-hat values (1.000) and a high effective sample size (ESS = 4879), which confirms that the chains converge to a shared stationary distribution and efficiently explore the posterior. The tight overlap of chains after burn-in, paired with R-hat’s proximity to 1.0, underscores robust convergence, while the large ESS reflects minimal autocorrelation and reliable uncertainty quantification.
At 50% missingness, convergence is marginally delayed (300 iterations), with trace plots showing increased initial variability and sporadic outliers. Despite these challenges, the chains stabilize into a coherent band, supported by R-hat values (1.001) that remain near 1.0 and an even higher ESS (5130). The slight increase in R-hat compared to 25% missingness signals minor variability between chains, likely due to the greater uncertainty of imputation, but the stability of ESS indicates sustained sampling efficiency. This balance highlights the BRF with the adaptability of MICE to moderate missingness, where iterative imputation preserves mixing quality despite the increased complexity.
In 75% missingness, the trace plots reveal a pronounced initial instability, with extreme parameter fluctuations and delayed convergence (~300–400 iterations). However, the chains eventually stabilize, as evidenced by acceptable R-hat values (1.001) below the 1.01 threshold and a robust ESS (5118). Although the elevated R-hat reflects increased variability between chains in accordance with sparse observed data, the ESS remains high, demonstrating that the method retains sampling efficiency even under extreme missingness. The eventual coherence of the trace plots, despite the wider spreads, confirms the BRF with the capacity of MICE to manage the propagation of uncertainty in high-missingness regimes, where traditional methods often degrade.
Across all missingness levels, BRF + MICE maintains convergence (R-hat ≤ 1.0013) and strong mixing (ESS > 4800), validating its robustness. The gradual rise in R-hat with missingness highlights growing imputation uncertainty, while stable ESS values confirm consistent sampling efficacy. These metrics, combined with trace plot trends, affirm the method’s reliability in high-dimensional missing data settings.

6.2. Simulation Results for Gaussian Response

Table 2 presents the average test root mean square error (ARMSE), which was calculated via 10-fold cross-validation for Gaussian response data under three missing data mechanisms, with missingness proportions of 25%, 50%, and 75%. The first three rows give the results when there is no missing observation, and they stand as the threshold for comparing the performances of the methods. A method is termed as robust if there is no significant increase in RMSE when missing observations are omitted or imputed. The RMSE results when there are no missing cases are constant, as expected for BRF and RF. The RMSE results for BART2 exhibit few changes at different simulation timestamps across the three missingness mechanisms, which arise from the MCMC simulation involved in the estimation technique of BART2. The second compartment of the table shows the results when the missing data are imputed using missing data strategies by various methods. Specifically, proximity imputation is used for RF, while MICE is used for BRF and BARTm for BART2. For MCAR with imputed missing observations, BRF maintains the same value of RMSE as observed when there are no missing cases for the proportions of missingness 0.25 and 0.5 . A slight increase is observed when the proportion of missingness approaches 0.75. A similar pattern is observed for RF except for larger RMSE than BRF. The unstable behaviour of BART2 is also observed when the data are imputed using the BARTm strategy. On average, BRF maintains the lowest RMSE for MCAR at various levels of missingness. Similar behaviours are found for MAR and MNAR at different levels of missingness. The detrimental effect of deleting the missing entries before estimation can be observed in the third compartment of the table. The RMSE of the three methods significantly deviates from the results when there are no missing entries. However, on average, the effect on BRF is minimal when compared to RF and BART2. Therefore, for high-dimensional data with missing entries up to 75% arising from different missingness mechanisms, BRF is the best among the three methods considered here. Figure 2, Figure 3 and Figure 4 shows the visual behaviour over the folds. The median RMSE in Figure 2, Figure 3 and Figure 4 confirms that BRF with the MICE imputation technique is the best among the three methods for analysing high-dimensional data with missing data.

6.3. Simulation Results for Binary Classification

For classification analysis, we replicated the high-dimensional simulation framework proposed by [22], maintaining consistent dimensionality parameters with the regression case. Missingness was injected under three mechanisms (MCAR, MAR, MNAR) using the methodology of [7]. Three methods were compared: random forest (RF) [7], BART2 (BartMachine) [14], and the proposed Bayesian Random Forest (BRF). The performance was evaluated using the Misclassification Error Rate (MER) and Average Misclassification Error Rate (AMER).
The C × C confusion matrix A [37] is structured as follows:
A = T P C 11 F P C 12 F P C 1 C F P C 21 T P C 22 F P C 2 C F P C C 1 F P C C 2 T P C C C ,
where
  • Rows: True classes;
  • Columns: Predicted classes;
  • Diagonal elements ( T P C 11 , , T P C C C ): True positive classifications;
  • Off-diagonal elements ( F P C 12 , , F P C C C ): False positive misclassifications.
Classification accuracy is calculated as follows:
Accuracy = c = 1 C T P C c c n test ,
where n test is the total of the test samples. The MER and AMER are then derived as follows:
M E R = 1 Accuracy ,
A M E R = 1 10 e = 1 10 M E R e .
The AMER aggregates performance across all 10 cross-validation folds, providing a robust measure of model generalizability.
Table 3 presents the Average Misclassification Error Rate (AMER) computed via 10-fold cross-validation to evaluate the classifier performance under three missing data mechanisms (MCAR, MAR, MNAR), with missingness proportions of 25%, 50%, and 75%, for datasets featuring a binary categorical response variable. The first three rows give the results when there is no missing observation, and they stand as the threshold for comparing the performance of the methods. A method is termed as robust if no significant increase in MER occurs when missing observations are omitted or imputed. When there are no missing cases, the MER results are constant, as expected for BRF and RF. The MER results for BART2 exhibit few changes at different simulation timestamps across the three missingness mechanisms, which arise as a result of the MCMC simulation involved in the estimation technique of BART2.
The second compartment of Table 3 shows the results when the missing data have been imputed using missing data strategies for the methods. For MCAR with imputed missing observations, BRF MER increases with an increase in the proportion of missing observations, and the values differ from the case where there are no missing values; for RF, the performance when imputed remains the same as the case when no missing values are found. The unstable behaviour of BART2 is also observed when the data are imputed using the BARTm strategy. On average, RF maintains the lowest MER for MCAR at 0.5 and 0.75 proportions of missingness. Similar behaviours were found for MAR and MNAR at different levels of missingness. The detrimental effect of deleting the missing entries before estimation can be observed in the third compartment of the table. The MER of the three methods significantly deviates from the results when there are no missing entries. The effect is minimal on RF compared to BRF and BART2. Therefore, RF is the best among the three methods considered here for high-dimensional data with missing entries up to 75 % arising from different missingness mechanisms.
Figure 5, Figure 6 and Figure 7 show the visual behaviour over the folds. The median MER in Figure 5, Figure 6 and Figure 7 confirms that BRF with the MICE imputation technique is better than BART2 for analysing missing data, while RF is the best of the three methods.
Alternatively, the F1 score is widely used as a measure of accuracy in classification problems, particularly when the data are unbalanced. In such cases, traditional accuracy may be misleading because it can be dominated by the majority class. The F1 score, defined as the harmonic mean of precision and recall, offers a more balanced evaluation by accounting for both false positives and false negatives. The F1 score is given by the following:
F 1 = 2 · Precision · Recall Precision + Recall ,
where
Precision = TP TP + FP , Recall = TP TP + FN .
Here, TP represents true positives, FP false positives, and FN false negatives. By balancing precision and recall, the F1 score provides a single metric that better reflects the performance of a classifier in scenarios where one class significantly outnumbers the other.
Table 4 shows F1 scores under varying missing data mechanisms (MCAR, MAR, MNAR) and missingness proportions (25%, 50%, 75%). The No Missing Cases achieves near-perfect F1 (~0.95) across all scenarios, while Impute Missing Cases retains strong performance (mostly >0.90). Delete Missing Cases performs worst, especially at 75% missingness (F1 drops to ~0.68–0.78). RF consistently outperforms BRF and BART2 in deletion scenarios, and the MAR/MNAR mechanisms show sharper declines compared to MCAR, highlighting the challenge of non-random missingness.

6.4. Simulation Results for Multiclass Classification

Table 5 presents the Average Misclassification Error Rate (AMER) computed via 10-fold cross-validation to evaluate the classifier performance under three missing data mechanisms (MCAR, MAR, MNAR), with missingness proportions of 25%, 50%, and 75%, for datasets featuring a multiclass categorical response variable. The methods compared here are RF and BRF, as BART2 has not been implemented for multiple classes. Again, the first three rows show when there is no missing observation, and they stand as the threshold for comparing the methods’ performances. The second compartment of the table shows the results when the missing data have been imputed. For MCAR with imputed missing observations, BRF MER increases with an increase in the proportion of missing observations, and the values differ from the case where there are no missing values. The MER of RF also increases with an increase in the proportion of missingness, but the performance is better than that of BRF at 0.5 and 0.75. Similar behaviours are found for MAR and MNAR at different levels of missingness. The detrimental effect of deleting the missing entries before estimation can be observed in the third compartment of the table. The MER of the two methods significantly deviates from the results when no missing entries exist. Overall, on average, the effect of missing values is minimal on BRF when compared to RF. Figure 8, Figure 9 and Figure 10 show the visual behaviour over the folds.
Table 6 reports F1 scores under varying missing data mechanisms (MCAR, MAR, MNAR) and missingness proportions (25%, 50%, 75%). No Missing Cases yields the highest F1 scores (0.90–0.93), with BRF slightly outperforming RF. Impute Missing Cases shows moderate declines (e.g., BRF drops to 0.867 at 50–75% MAR/MNAR), while Delete Missing Cases suffers severe degradation at high missingness (e.g., F1 ≈ 0.536 at 75% MAR/MNAR). RF marginally outperforms BRF in MCAR deletion scenarios, but both struggle with MNAR at 75% missingness.

6.5. Computational Time Comparison

Table 7 highlights computational efficiency across BRF, RF, and BART2 under varying missingness. RF consistently outperforms others in speed, maintaining near-constant times ( 0.41–0.44 s) regardless of missing data, reflecting its nonparametric, lightweight design. BRF, while slower (1.17–2.40 s), scales predictably with missingness times rise from 25% to 75%, likely due to increased Bayesian uncertainty propagation, but remains far faster than BART2. BART2 is computationally intensive (5.92–6.51 s), with runtime gradually increasing with missingness, underscoring its additive tree structure’s overhead. BRF’s moderate computational cost balances Bayesian rigour with practicality, whereas BART2’s inefficiency may limit scalability. RF’s speed advantage comes at the cost of probabilistic uncertainty modelling, emphasizing a trade-off between computational efficiency and statistical robustness.

7. Discussion of Results

This study advances methodologies for handling missing data in high-dimensional settings by demonstrating the superiority of Bayesian Random Forest (BRF) coupled with Multiple Imputation by Chained Equations (MICE) in regression tasks. Unlike traditional approaches such as random forest (RF) or Bayesian Additive Regression Trees (BART2), BRF maintains robust predictive accuracy evidenced by stable RMSE even at extreme missingness levels (e.g., 75%), particularly under Missing Not at Random (MNAR) mechanisms. These findings align with recent work by Sportisse et al. [38] and Albu et al. [39], who emphasized the necessity of model-based imputation to preserve feature interactions and the advantages of Bayesian ensembles in high-dimensional nonlinear models. BRF’s integration of probabilistic uncertainty quantification during imputation, as advocated by Rubin [40] and Little [41], mitigates bias caused by dependency on unobserved variables, outperforming frequentist counterparts like RF and BART2, which exhibit higher RMSE variability due to single imputation chains or proximity-based methods [42,43]. The stark performance degradation of deletion strategies across all methods reinforces Enders’ [1] warnings about listwise deletion in high-dimensional settings, where feature interactions amplify bias, a trend corroborated by Hapfelmeier et al. [13] and Gomez et al. [44].
The theoretical implications of this work extend the applicability of Bayesian nonparametric models to missing data challenges. By embedding MICE within a tree ensemble framework, BRF addresses the ‘double uncertainty’ of missingness and model estimation, a limitation noted in frequentist forests [14]. This aligns with Gelman et al. [27] and van Buuren [5], who argue that Bayesian methods naturally propagate imputation uncertainty through posterior predictive checks, reducing overconfidence in high-dimensional predictions. BRF’s stability under extreme missingness further supports statistical learning principles where regularization via priors mitigates variance inflation in sparse settings [45]. These results contrast with BART2’s instability, which arises from its reliance on single imputation chains, failing to account for between-imputation variability, a critical factor highlighted in Murray [28] and Little [41]. The success of BRF underscores the growing consensus that Bayesian models, with their explicit uncertainty quantification, outperform deterministic approaches in MNAR and MAR scenarios [46,47].
However, RF’s proximity-based imputation proved more reliable for classification tasks, particularly for binary outcomes. This divergence echoes Tang et al. [7] and Loh [48], who found that entropy-driven splitting in RF better preserves categorical decision boundaries during imputation. BRF’s Dirichlet–Multinomial prior, while effective for multiclass problems, introduced slight over-regularization in binary settings, a trade-off consistent with Murray’s [28] observations on Bayesian classifiers. These findings emphasize that method selection must align with data type: BRF’s model-based rigour suits continuous responses requiring precise uncertainty integration, while RF’s nonparametric flexibility excels in categorical contexts where computational efficiency and interpretability are prioritized [49,50]. Collectively, the study reinforces the need for tailored missing data strategies, advocating BRF-MICE for regression and RF for classification while cautioning against deletion-based approaches in high-dimensional settings.

8. Conclusions

This study presents a new approach for handling missing data in high-dimensional contexts by rigorously evaluating the performance of Bayesian Random Forest (BRF) against established methods like random forest (RF) and Bayesian Additive Regression Trees (BART2) via simulation. The results demonstrate BRF’s consistent superiority in both regression and classification tasks, evidenced by lower root mean squared error (RMSE) and Misclassification Error Rates (MERs) across diverse missing data scenarios. Notably, BRF’s advantage is most pronounced when missing values are imputed rather than excluded, particularly under Missing Not At Random (MNAR) mechanisms where traditional deletion methods or single imputation approaches falter. These findings highlight the critical role of multiple imputations in preserving data integrity, as BRF’s ability to iteratively model uncertainty during imputation reduces bias and enhances predictive accuracy. The empirical validation of BRF’s performance in MNAR settings, a common yet challenging scenario in real-world data, provides practitioners with a compelling rationale for adopting probabilistic imputation over simpler, less robust alternatives.
From a theoretical perspective, this work bridges critical gaps in Bayesian nonparametric modelling by demonstrating how ensemble methods within a Bayesian framework can effectively address the complexities of missing data. Traditional Bayesian imputation methods often rely on parametric assumptions that may not hold in high-dimensional or heterogeneous datasets, limiting their flexibility in capturing complex dependencies. Similarly, frequentist approaches typically rely on point estimates, which fail to account for the full range of uncertainty associated with missing values. In contrast, BRF integrates probabilistic tree ensembles with Markov Chain Monte Carlo (MCMC) sampling, directly incorporating uncertainty quantification into the imputation process. This approach ensures that variability across missing data patterns is captured in posterior distributions, leading to more reliable parameter estimates and robust error propagation. Furthermore, this study extends Bayesian nonparametric modelling by demonstrating that tree-based ensemble methods can serve as flexible priors that adapt to complex data structures without requiring explicit distributional assumptions. This advancement aligns with the principles of statistical learning theory, illustrating that models incorporating uncertainty estimates such as the probabilistic predictions of BRF outperform deterministic algorithms in high-dimensional settings prone to overfitting and noise amplification. By unifying Bayesian principles with ensemble learning, this research addresses the challenge of managing the inherent unpredictability of incomplete datasets and provides a theoretical foundation for future innovations in adaptive imputation strategies. These contributions highlight the broader impact of Bayesian ensemble learning in statistical modelling, particularly in fields where missing data are pervasive, such as genomics, healthcare, and finance.
The practical implications of this research are underscored by the distinct advantages of BRF and RF in different contexts. For high-dimensional regression tasks, BRF coupled with Multiple Imputation by Chained Equations (MICE) emerges as a robust solution, maintaining stable parameter estimates even at extreme missingness levels (e.g., >30%). Its iterative imputation process, which refines posterior distributions through cycles of prediction and updating, ensures resilience against biased missingness mechanisms. In contrast, RF’s simplicity and proximity-based imputation, where missing values are inferred from similar observations in tree structures, prove more reliable for classification tasks, particularly with binary outcomes. This divergence underscores the importance of tailoring missing data strategies to problem-specific requirements: BRF’s model-based Bayesian approach excels in continuous response scenarios demanding precise uncertainty quantification, while RF’s nonparametric flexibility is better suited for categorical outcomes where interpretability and computational efficiency are prioritized. These insights equip researchers with a fundamental framework for selecting imputation methods based on data type, missingness mechanism, and analytical goals.
While the simulation-based validation rigorously demonstrates BRF’s advantages under controlled missingness mechanisms (MCAR, MAR, MNAR), these results are inherently constrained by the assumptions underlying synthetic data. A key limitation is the potential mismatch between simulated missingness structures and real-world complexity, where unmodelled factors such as nonlinear feature dependencies, unobserved confounders, or time-varying missingness patterns may systematically bias imputation or prediction. For instance, clinical and genomic datasets often exhibit latent subgroup heterogeneity (e.g., disease subtypes influencing missingness), which simulations may oversimplify or omit, potentially leading to an overestimation of BRF’s robustness. To mitigate this threat to external validity, future work should benchmark BRF-MICE on real-world high-dimensional datasets, such as electronic health records and omics cohorts, where partially known missingness mechanisms allow for sensitivity analyses to quantify robustness to unverifiable MNAR assumptions. Additionally, incorporating adversarial missingness scenarios reflecting domain-specific biases such as sensor dropout in wearables or assay failures in genomics would stress-test the framework under more realistic conditions. Collaboration with domain experts is also critical for curating datasets where ground-truth values can be partially recovered (e.g., via replicate measurements), enabling direct imputation error quantification. While simulations provide essential proof-of-concept validation, their idealized missingness mechanisms and often linear covariate relationships may obscure practical challenges, such as feedback loops between imputation and model training in real-world data. Therefore, parallel validation in applied contexts, including EHR phenotyping and drug response prediction, remains essential to confirm BRF’s practical utility. Also, while BRF exhibits a strong performance, the computational feasibility of scaling the method to extremely large datasets remains an open question. Future studies should explore algorithmic optimizations to enhance scalability while preserving robustness. Finally, the current study focused on three missing data mechanisms (MCAR, MAR, MNAR) and did not consider more complex forms of missingness, such as informative missingness or mixed mechanisms. Future research should investigate how BRF performs under such conditions and explore hybrid imputation strategies that integrate domain knowledge for improved inference.

Author Contributions

Conceptualization, O.R.O. and A.R.R.A.; methodology, O.R.O.; software, O.R.O.; validation, O.R.O. and A.R.R.A.; formal analysis, O.R.O.; investigation, O.R.O. and A.R.R.A.; resources, A.R.R.A.; data curation, O.R.O.; writing—original draft preparation, O.R.O.; writing—review and editing, O.R.O. and A.R.R.A.; visualization, O.R.O.; supervision, O.R.O.; project administration, O.R.O. All authors have read and agreed to the published version of the manuscript.

Funding

This research work was funded by Umm Al-Qura University, Saudi Arabia under grant number 25UQU4320088GSSR01.

Data Availability Statement

The authors confirm that the data supporting the findings of this study are available within the article.

Acknowledgments

The authors extend their appreciation to Umm Al-Qura University, Saudi Arabia for funding this research work through grant number 25UQU4320088GSSR01.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Enders, C.K. Applied Missing Data Analysis; Guilford Press: New York, NY, USA, 2010. [Google Scholar]
  2. Schafer, J.L.; Graham, J.W. Missing data: Our view of the state of the art. Psychol. Methods 2002, 7, 147–177. [Google Scholar] [CrossRef] [PubMed]
  3. Little, R.J. A test of missing completely at random for multivariate data with missing values. J. Am. Stat. Assoc. 1988, 83, 1198–1202. [Google Scholar] [CrossRef]
  4. Little, R.J.; Rubin, D.B. Statistical Analysis with Missing Data; John Wiley & Sons: New York, NY, USA, 2002. [Google Scholar]
  5. Van Buuren, S. Flexible Imputation of Missing Data; CRC Press: Boca Raton, FL, USA, 2012. [Google Scholar]
  6. Hapfelmeier, A.; Ulm, K. Variable Selection with Random Forests for Missing Data. Technical Reports No. 137. Department of Statistics. 2013. Available online: https://epub.ub.uni-muenchen.de/14344/ (accessed on 28 January 2025).
  7. Tang, F.; Ishwaran, H. Random forest missing data algorithms. Stat. Anal. Data Min. ASA Data Sci. J. 2017, 10, 363–377. [Google Scholar] [CrossRef]
  8. Breiman, L. Manual–Setting up, Using, and Understanding Random Forests V4.0. Using_random_forests_v4.0. 2003. Available online: https://www.stat.berkeley.edu/~breiman/Using_random_forests_v4.0.pdf (accessed on 28 January 2025).
  9. Crookston, N.L.; Finley, A.O. yaImpute: An R package for kNN imputation. J. Stat. Softw. 2008, 23, 1–16. [Google Scholar] [CrossRef]
  10. Ishioka, T. Imputation of missing values for unsupervised data using the proximity in random forests. In Proceedings of the International Conference on Mobile, Hybrid, and On-line Learning, Nice, France, 24 February–1 March 2013. [Google Scholar]
  11. Ishwaran, H.; Kogalur, U.B.; Blackstone, E.H.; Lauer, M.S. Random survival forests. Ann. Appl. Stat. 2008, 2, 841–860. [Google Scholar] [CrossRef]
  12. Farhangfar, A.; Kurgan, L.; Dy, J. Impact of imputation of missing values on classification error for discrete data. Pattern Recognit. 2008, 41, 3692–3705. [Google Scholar] [CrossRef]
  13. Hapfelmeier, A.; Hothorn, T.; Ulm, K.; Strobl, C. A new variable importance measure for random forests with missing data. Stat. Comput. 2014, 24, 21–34. [Google Scholar] [CrossRef]
  14. Kapelner, A.; Bleich, J. Prediction with missing data via Bayesian additive regression trees. Can. J. Stat. 2015, 43, 224–239. [Google Scholar] [CrossRef]
  15. Loh, P.L.; Wainwright, M.J. High-dimensional regression with noisy and missing data: Provable guarantees with non-convexity. In Proceedings of the Advances in Neural Information Processing Systems, Granada, Spain, 12–14 December 2011; pp. 2726–2734. [Google Scholar]
  16. Rubin, D.B. Multiple imputation after 18+ years. J. Am. Stat. Assoc. 1996, 91, 473–489. [Google Scholar] [CrossRef]
  17. Aittokallio, T. Dealing with missing values in large-scale studies: Microarray data imputation and beyond. Briefings Bioinform. 2009, 11, 253–264. [Google Scholar] [CrossRef]
  18. Liao, S.G.; Lin, Y.; Kang, D.D.; Chandra, D.; Bon, J.; Kaminski, N.; Sciurba, F.C.; Tseng, G.C. Missing value imputation in high-dimensional phenomic data: Imputable or not, and how? BMC Bioinform. 2014, 15, 346–358. [Google Scholar] [CrossRef] [PubMed]
  19. Doove, L.L.; Van Buuren, S.; Dusseldorp, E. Recursive partitioning for missing data imputation in the presence of interaction effects. Comput. Stat. Data Anal. 2014, 72, 92–104. [Google Scholar] [CrossRef]
  20. Bartlett, J.W.; Seaman, S.R.; White, I.R.; Carpenter, J.R.; for the Alzheimer’s Disease Neuroimaging Initiative. Multiple imputation of covariates by fully conditional specification: Accommodating the substantive model. Stat. Methods Med. Res. 2015, 24, 462–487. [Google Scholar] [CrossRef]
  21. van Buuren, S. Predictive mean matching. In Flexible Imputation of Missing Data, 2nd ed.; Van Buuren, S., Ed.; CRC Press: Boca Raton, FL, USA, 2018. [Google Scholar]
  22. Olaniran, O.R.; Abdullah, M.A.A. Bayesian weighted random forest for classification of high-dimensional genomics data. Kuwait J. Sci. 2023, 50, 477–484. [Google Scholar] [CrossRef]
  23. Olaniran, O.R.; Alzahrani, A.R.R. On the Oracle Properties of Bayesian Random Forest for Sparse High-Dimensional Gaussian Regression. Mathematics 2023, 11, 4957. [Google Scholar] [CrossRef]
  24. van Buuren, S. Flexible Imputation of Missing Data, 2nd ed.; CRC Press: Boca Raton, FL, USA, 2018. [Google Scholar]
  25. Van Buuren, S. Multiple imputation of discrete and continuous data by fully conditional specification. Stat. Methods Med Res. 2007, 16, 219–242. [Google Scholar] [CrossRef]
  26. Chipman, H.A.; George, E.I.; McCulloch, R.E. BART: Bayesian additive regression trees. Ann. Appl. Stat. 2010, 4, 266–298. [Google Scholar] [CrossRef]
  27. Gelman, A.; Carlin, J.B.; Stern, H.S.; Dunson, D.B.; Vehtari, A.; Rubin, D.B. Bayesian Data Analysis; CRC Press: Boca Raton, FL, USA, 2013. [Google Scholar]
  28. Murray, J.S.; Reiter, J.P. Multiple Imputation: A Review of Practical and Theoretical Findings. Stat. Sci. 2018, 33, 142–159. [Google Scholar] [CrossRef]
  29. Rubin, D.B. Inference and missing data. Biometrika 1976, 63, 581–592. [Google Scholar] [CrossRef]
  30. White, I.R.; Horton, N.J.; Carpenter, J.; Pocock, S.J. Strategy for intention to treat analysis in randomised trials with missing outcome data. Br. Med J. 2011, 342, d40. [Google Scholar] [CrossRef]
  31. Burgette, L.F.; Reiter, J.P. Multiple imputation for missing data via sequential regression trees. Am. J. Epidemiol. 2010, 172, 1070–1076. [Google Scholar] [CrossRef] [PubMed]
  32. Van Buuren, S.; Brand, J.P.; Groothuis-Oudshoorn, C.G.; Rubin, D.B. Fully conditional specification in multivariate imputation. J. Stat. Comput. Simul. 2006, 76, 1049–1064. [Google Scholar] [CrossRef]
  33. Buuren, S.v.; Groothuis-Oudshoorn, K. mice: Multivariate imputation by chained equations in R. J. Stat. Softw. 2010, 45, 1–67. [Google Scholar]
  34. Dong, Y.; Peng, C. Principled missing data methods for researchers. SpringerPlus 2013, 2, 222. [Google Scholar] [CrossRef] [PubMed]
  35. Peugh, J.L.; Enders, C.K. Missing data in educational research: A review of reporting practices and suggestions for improvement. Rev. Educ. Res. 2004, 74, 525–556. [Google Scholar] [CrossRef]
  36. Peng, C.Y.J.; Harwell, M.; Liou, S.M.; Ehman, L.H. Advances in missing data methods and implications for educational research. In Real Data Analysis; Information Age Publishing: Charlotte, NC, USA, 2006; p. 3178. [Google Scholar]
  37. Olaniran, O.R.; Alzahrani, A.R.R.; Alzahrani, M.R. Eigenvalue Distributions in Random Confusion Matrices: Applications to Machine Learning Evaluation. Mathematics 2024, 12, 1425. [Google Scholar] [CrossRef]
  38. Sportisse, A.; Marbac, M.; Laporte, F.; Celeux, G.; Boyer, C.; Josse, J.; Biernacki, C. Model-based clustering with missing not at random data. Stat. Comput. 2024, 34, 135. [Google Scholar] [CrossRef]
  39. Albu, E.; Gao, S.; Wynants, L.; Van Calster, B. missForestPredict–Missing data imputation for prediction settings. arXiv 2024, arXiv:2407.03379. [Google Scholar]
  40. Rubin, D.B. Multiple Imputation for Nonresponse in Surveys; John Wiley & Sons: New York, NY, USA, 2004; Volume 81. [Google Scholar]
  41. Little, R.J. Missing Data Analysis. Annu. Rev. Clin. Psychol. 2024, 20, 149–173. [Google Scholar] [CrossRef]
  42. Stekhoven, D.J.; Bühlmann, P. MissForest—Non-parametric missing value imputation for mixed-type data. Bioinformatics 2012, 28, 112–118. [Google Scholar] [CrossRef]
  43. Shah, A.D.; Bartlett, J.W.; Carpenter, J.; Nicholas, O.; Hemingway, H. Comparison of random forest and parametric imputation models for imputing missing data using MICE: A CALIBER study. Am. J. Epidemiol. 2014, 179, 764–774. [Google Scholar] [CrossRef] [PubMed]
  44. Gómez-Méndez, I.; Joly, E. Regression with missing data, a comparison study of techniques based on random forests. J. Stat. Comput. Simul. 2023, 93, 1924–1949. [Google Scholar] [CrossRef]
  45. Hastie, T.; Tibshirani, R.; Wainwright, M. Statistical Learning with Sparsity: The Lasso and Generalizations; CRC Press: Boca Raton, FL, USA, 2015. [Google Scholar]
  46. Ma, Z.; Chen, G. Bayesian methods for dealing with missing data problems. J. Korean Stat. Soc. 2018, 47, 297–313. [Google Scholar] [CrossRef]
  47. Zhou, Y.; Aryal, S.; Bouadjenek, M.R. Review for Handling Missing Data with special missing mechanism. arXiv 2024, arXiv:2404.04905. [Google Scholar]
  48. Loh, W.Y.; Eltinge, J.; Cho, M.J.; Li, Y. Classification and regression trees and forests for incomplete data from sample surveys. Stat. Sin. 2019, 29, 431–453. [Google Scholar] [CrossRef]
  49. Memon, S.M.; Wamala, R.; Kabano, I.H. A comparison of imputation methods for categorical data. Inform. Med. Unlocked 2023, 42, 101382. [Google Scholar] [CrossRef]
  50. Alwateer, M.; Atlam, E.S.; Abd El-Raouf, M.M.; Ghoneim, O.A.; Gad, I. Missing data imputation: A comprehensive review. J. Comput. Commun. 2024, 12, 53–75. [Google Scholar] [CrossRef]
Figure 1. Trace plot for BRF with MICE Posterior Samples (Average Predictions) under the Missing Completely at Random (MCAR) mechanism for datasets with Gaussian-distributed response variables.
Figure 1. Trace plot for BRF with MICE Posterior Samples (Average Predictions) under the Missing Completely at Random (MCAR) mechanism for datasets with Gaussian-distributed response variables.
Mathematics 13 00956 g001
Figure 2. Boxplots illustrating the distribution of test root mean square error (RMSE) values across different imputation methods, evaluated using 10-fold cross-validation under the Missing Completely at Random (MCAR) mechanism for datasets with Gaussian-distributed response variables.
Figure 2. Boxplots illustrating the distribution of test root mean square error (RMSE) values across different imputation methods, evaluated using 10-fold cross-validation under the Missing Completely at Random (MCAR) mechanism for datasets with Gaussian-distributed response variables.
Mathematics 13 00956 g002
Figure 3. Boxplots illustrating the distribution of test root mean square error (RMSE) values across different imputation methods, evaluated using 10-fold cross-validation under the Missing at Random (MAR) mechanism for datasets with Gaussian-distributed response variables.
Figure 3. Boxplots illustrating the distribution of test root mean square error (RMSE) values across different imputation methods, evaluated using 10-fold cross-validation under the Missing at Random (MAR) mechanism for datasets with Gaussian-distributed response variables.
Mathematics 13 00956 g003
Figure 4. Boxplots illustrating the distribution of test root mean square error (RMSE) values across different imputation methods, evaluated using 10-fold cross-validation under the Missing Not at Random (MNAR) mechanism for datasets with Gaussian-distributed response variables.
Figure 4. Boxplots illustrating the distribution of test root mean square error (RMSE) values across different imputation methods, evaluated using 10-fold cross-validation under the Missing Not at Random (MNAR) mechanism for datasets with Gaussian-distributed response variables.
Mathematics 13 00956 g004
Figure 5. Boxplots illustrating the distribution of test Misclassification Error Rate (MER) values across different imputation methods, evaluated using 10-fold cross-validation under the Missing Completely at Random (MCAR) mechanism for datasets with binary response.
Figure 5. Boxplots illustrating the distribution of test Misclassification Error Rate (MER) values across different imputation methods, evaluated using 10-fold cross-validation under the Missing Completely at Random (MCAR) mechanism for datasets with binary response.
Mathematics 13 00956 g005
Figure 6. Boxplots illustrating the distribution of test Misclassification Error Rate (MER) values across different imputation methods, evaluated using 10-fold cross-validation under the Missing at Random (MAR) mechanism for datasets with binary response.
Figure 6. Boxplots illustrating the distribution of test Misclassification Error Rate (MER) values across different imputation methods, evaluated using 10-fold cross-validation under the Missing at Random (MAR) mechanism for datasets with binary response.
Mathematics 13 00956 g006
Figure 7. Boxplots illustrating the distribution of test Misclassification Error Rate (MER) values across different imputation methods, evaluated using 10-fold cross-validation under the Missing Not at Random (MNAR) mechanism for datasets with binary response.
Figure 7. Boxplots illustrating the distribution of test Misclassification Error Rate (MER) values across different imputation methods, evaluated using 10-fold cross-validation under the Missing Not at Random (MNAR) mechanism for datasets with binary response.
Mathematics 13 00956 g007
Figure 8. Boxplots illustrating the distribution of test Misclassification Error Rate (MER) values across different imputation methods, evaluated using 10-fold cross-validation under the Missing Completely at Random (MCAR) mechanism for datasets with multiclass response.
Figure 8. Boxplots illustrating the distribution of test Misclassification Error Rate (MER) values across different imputation methods, evaluated using 10-fold cross-validation under the Missing Completely at Random (MCAR) mechanism for datasets with multiclass response.
Mathematics 13 00956 g008
Figure 9. Boxplots illustrating the distribution of test Misclassification Error Rate (MER) values across different imputation methods, evaluated using 10-fold cross-validation under the Missing at Random (MAR) mechanism for datasets with multiclass response.
Figure 9. Boxplots illustrating the distribution of test Misclassification Error Rate (MER) values across different imputation methods, evaluated using 10-fold cross-validation under the Missing at Random (MAR) mechanism for datasets with multiclass response.
Mathematics 13 00956 g009
Figure 10. Boxplots illustrating the distribution of test Misclassification Error Rate (MER) values across different imputation methods, evaluated using 10-fold cross-validation under the Missing Not at Random (MNAR) mechanism for datasets with multiclass response.
Figure 10. Boxplots illustrating the distribution of test Misclassification Error Rate (MER) values across different imputation methods, evaluated using 10-fold cross-validation under the Missing Not at Random (MNAR) mechanism for datasets with multiclass response.
Mathematics 13 00956 g010
Table 1. Gelman–Rubin Convergence Diagnostic (R-hat) and effective sample size (ESS) for varying levels of missing data in the BRF + MICE framework.
Table 1. Gelman–Rubin Convergence Diagnostic (R-hat) and effective sample size (ESS) for varying levels of missing data in the BRF + MICE framework.
p miss R-hat (Point Estimate)R-hat (Upper C.I.)ESS
25%1.0001.0024878.9
50%1.0011.0035129.5
75%1.0011.0045117.9
Table 2. The average test root mean square error (ARMSE) was calculated via 10-fold cross-validation for Gaussian response data under three missing data mechanisms, with missingness proportions of 25%, 50%, and 75%.
Table 2. The average test root mean square error (ARMSE) was calculated via 10-fold cross-validation for Gaussian response data under three missing data mechanisms, with missingness proportions of 25%, 50%, and 75%.
Method MCAR MAR MNAR
0.250.50.750.250.50.750.250.50.75
No Missing Cases
BRF24.3324.3324.3324.3324.3324.3324.3324.3324.33
RF28.4528.4528.4528.4528.4528.4528.4528.4528.45
BART229.2129.0526.7428.6827.8230.1129.9124.9629.90
Impute Missing Cases
BRF24.3324.3324.5924.3324.3324.5924.3324.3324.59
RF28.4228.4228.4528.4428.4428.4528.4228.4428.45
BART232.1528.0525.2132.8430.7426.0527.1524.8835.92
Delete Missing Cases
BRF30.2132.0132.1623.8623.8419.1429.1026.3331.17
RF29.2933.9138.7126.1426.3421.9032.8728.4632.65
BART232.4935.2843.1629.8526.9526.3834.8629.8235.69
Table 3. The Average Misclassification Error Rate (AMER) computed via 10-fold cross-validation to evaluate classifier performance under three missing data mechanisms (MCAR, MAR, MNAR), with missingness proportions of 25%, 50%, and 75%, for datasets featuring a binary categorical response variable.
Table 3. The Average Misclassification Error Rate (AMER) computed via 10-fold cross-validation to evaluate classifier performance under three missing data mechanisms (MCAR, MAR, MNAR), with missingness proportions of 25%, 50%, and 75%, for datasets featuring a binary categorical response variable.
MethodMCARMARMNAR
0.250.50.750.250.50.750.250.50.75
No Missing Cases
BRF0.0500.0500.0500.0500.0500.0500.0500.0500.050
RF0.0500.0500.0500.0500.0500.0500.0500.0500.050
BART20.0750.0500.0500.0500.0500.1000.0500.0500.100
Impute Missing Cases
BRF0.0500.0750.1000.0500.1000.0750.0500.0750.075
RF0.0500.0500.0500.0500.0750.0500.0500.0500.050
BART20.0500.0500.1000.0500.0500.0500.0500.0500.075
Delete Missing Cases
BRF0.0630.1000.0830.0700.2500.3330.0650.1880.310
RF0.0310.0910.0630.0730.2070.2260.0630.1880.225
BART20.0630.1000.1250.0650.1180.3330.0610.1000.317
Table 4. F1 scores from 10-fold cross-validation for classifier performance under different missing data mechanisms (MCAR, MAR, MNAR) at varying missingness levels for a binary response variable.
Table 4. F1 scores from 10-fold cross-validation for classifier performance under different missing data mechanisms (MCAR, MAR, MNAR) at varying missingness levels for a binary response variable.
MethodMCARMARMNAR
0.250.50.750.250.50.750.250.50.75
No Missing Cases
BRF0.9540.9540.9540.9540.9540.9540.9540.9540.954
RF0.9540.9540.9540.9540.9540.9540.9540.9540.954
BART20.9330.9540.9540.9540.9540.9100.9540.9540.910
Impute Missing Cases
BRF0.9540.9320.9050.9540.9050.9320.9540.9320.932
RF0.9540.9540.9540.9540.9320.9540.9540.9540.954
BART20.9540.9540.9050.9540.9540.9540.9540.9540.932
Delete Missing Cases
BRF0.9420.9050.9220.9340.7640.6820.9390.8190.703
RF0.9710.9130.9410.9300.8010.7830.9410.8190.784
BART20.9420.9050.8800.9390.8880.6820.9410.9050.686
Table 5. Average Misclassification Error Rate (AMER) over 10-fold cross-validation of the three missing data mechanisms with the proportions of missing observations 0.25, 0.5, and 0.75 for the multiclass categorical response.
Table 5. Average Misclassification Error Rate (AMER) over 10-fold cross-validation of the three missing data mechanisms with the proportions of missing observations 0.25, 0.5, and 0.75 for the multiclass categorical response.
MethodMCARMARMNAR
0.250.50.750.250.50.750.250.50.75
No Missing Cases
BRF0.0830.0670.1000.1000.0830.1000.0830.0830.083
RF0.1000.1000.1000.1000.1000.1000.1000.1000.100
Impute Missing Cases
BRF0.0830.1330.1330.1170.1330.1330.1000.1000.133
RF0.1000.1170.1170.1170.0670.1170.1170.1170.100
Delete Missing Cases
BRF0.1310.0860.2670.2000.2720.4640.1570.2160.335
RF0.0700.1180.2930.2000.1940.4640.1630.2270.354
Table 6. F1 scores over 10-fold cross-validation of the three missing data mechanisms with the proportions of missing observations 0.25, 0.5, and 0.75 for the multiclass categorical response.
Table 6. F1 scores over 10-fold cross-validation of the three missing data mechanisms with the proportions of missing observations 0.25, 0.5, and 0.75 for the multiclass categorical response.
MethodMCARMARMNAR
0.250.50.750.250.50.750.250.50.75
No Missing Cases
BRF0.9170.9330.9000.9000.9170.9000.9170.9170.917
RF0.9000.9000.9000.9000.9000.9000.9000.9000.900
Impute Missing Cases
BRF0.9170.8670.8670.8830.8670.8670.9000.9000.867
RF0.9000.8830.8830.8830.9330.8830.8830.8830.900
Delete Missing Cases
BRF0.8690.9140.7330.8000.7280.5360.8430.7840.665
RF0.9300.8820.7070.8000.8060.5360.8370.7730.646
Table 7. Computation time (seconds) comparison of BRF, RF, and BART2 across different levels of missing data. Lower values indicate better performance.
Table 7. Computation time (seconds) comparison of BRF, RF, and BART2 across different levels of missing data. Lower values indicate better performance.
Method25% Missing50% Missing75% Missing
BRF1.171.692.40
RF0.420.440.41
BART25.926.056.51
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Olaniran, O.R.; Alzahrani, A.R.R. Bayesian Random Forest with Multiple Imputation by Chain Equations for High-Dimensional Missing Data: A Simulation Study. Mathematics 2025, 13, 956. https://doi.org/10.3390/math13060956

AMA Style

Olaniran OR, Alzahrani ARR. Bayesian Random Forest with Multiple Imputation by Chain Equations for High-Dimensional Missing Data: A Simulation Study. Mathematics. 2025; 13(6):956. https://doi.org/10.3390/math13060956

Chicago/Turabian Style

Olaniran, Oyebayo Ridwan, and Ali Rashash R. Alzahrani. 2025. "Bayesian Random Forest with Multiple Imputation by Chain Equations for High-Dimensional Missing Data: A Simulation Study" Mathematics 13, no. 6: 956. https://doi.org/10.3390/math13060956

APA Style

Olaniran, O. R., & Alzahrani, A. R. R. (2025). Bayesian Random Forest with Multiple Imputation by Chain Equations for High-Dimensional Missing Data: A Simulation Study. Mathematics, 13(6), 956. https://doi.org/10.3390/math13060956

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.

Article Metrics

Back to TopTop