Article

Bootstrap Approximation of Model Selection Probabilities for Multimodel Inference Frameworks

by Andres Dajles *,† and Joseph Cavanaugh †

Department of Biostatistics, University of Iowa, 145 N. Riverside Drive, Iowa City, IA 52242, USA
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Entropy 2024, 26(7), 599; https://doi.org/10.3390/e26070599
Submission received: 22 February 2024 / Revised: 8 July 2024 / Accepted: 12 July 2024 / Published: 15 July 2024

Abstract:
Most statistical modeling applications involve the consideration of a candidate collection of models based on various sets of explanatory variables. The candidate models may also differ in terms of the structural formulations for the systematic component and the posited probability distributions for the random component. A common practice is to use an information criterion to select a model from the collection that provides an optimal balance between fidelity to the data and parsimony. The analyst then typically proceeds as if the chosen model was the only model ever considered. However, such a practice fails to account for the variability inherent in the model selection process, which can lead to inappropriate inferential results and conclusions. In recent years, inferential methods have been proposed for multimodel frameworks that attempt to provide an appropriate accounting of modeling uncertainty. In the frequentist paradigm, such methods should ideally involve model selection probabilities, i.e., the relative frequencies of selection for each candidate model based on repeated sampling. Model selection probabilities can be conveniently approximated through bootstrapping. When the Akaike information criterion is employed, Akaike weights are also commonly used as a surrogate for selection probabilities. In this work, we show that the conventional bootstrap approach for approximating model selection probabilities is impacted by bias. We propose a simple correction to adjust for this bias. We also argue that Akaike weights do not provide adequate approximations for selection probabilities, although they do provide a crude gauge of model plausibility.

1. Introduction

When presented with a set of variables posited as potential predictors of a targeted outcome, researchers harness an array of techniques to construct an appropriate descriptive or predictive model. Historically, the conventional approach entailed the formulation of a model under the guidance of an expert, who, grounded in a scientific understanding of the underlying mechanics of the observed outcome, would advocate for the structure of the proposed model. In contemporary practice, while domain expertise continues to inform the compilation of potential explanatory variables, the adoption of an all-encompassing model incorporating all of these variables is rare. Instead, scientists and statisticians have gravitated towards model selection algorithms (e.g., best subsets selection, forward selection, backward elimination, stepwise selection, and the LASSO), which utilize the sample at hand to yield models exhibiting more parsimonious structures than the comprehensive global model.
Once a model is selected, the common practice is to conduct inference on that model, proceeding as if this were the only model to ever be considered. However, different samples from the same population may lead to a selection of models with different structures. Therefore, the standard practice neglects the sampling variability inherent in the selection process. This pervasive issue in contemporary applications of statistics was described by Breiman as a “quiet scandal in the statistical community” [1].
A direct consequence of this oversight is the inherent difficulty in replicating the results of a modeling analysis using subsequent samples [2,3]. Therefore, inferences contingent on the chosen model are profoundly impacted, raising concerns regarding the bias of regression effect estimators, the accuracy of their estimated standard errors, and the validity of associated p-values or confidence intervals [3]. For instance, failing to account for model selection variability will typically result in smaller standard errors, which in turn results in erroneously smaller p-values and overly optimistic, narrower confidence intervals.
In addition, common model selection procedures are prone to including spurious effects in the final model [4,5]. In the setting of regression models with correlated variables, the inclusion of spurious effects can heavily influence the estimates and the interpretation of important effects. For example, if two explanatory variables are correlated, their estimated effects can vastly vary for models that only include one of the variables as opposed to models that include both variables [3].
In recent years, various approaches have been proposed that are aimed at addressing the estimation of regression effects while accounting for model selection variability. These approaches can often be characterized as multimodel inference procedures, where regression effect estimates are not solely derived from a single model but rather from the entire collection of models. Conceptually, such procedures are aligned with ensemble methods, where the final prediction or classification is derived through the combination of multiple submodels.
The foundational principles of multimodel inference are influenced by Bayesian principles, particularly the concept of Bayesian model averaging (BMA). The BMA framework is founded on the recognition that different estimates are contingent upon specific models, and that these models possess varying probabilities of arising, given the observed data. Consequently, the posterior mean of any quantity of interest should encompass contributions from all the models based on their posterior probabilities. In essence, the posterior mean of the target quantity should be an average of the estimates conditional on each model, weighted by the posterior probability of that particular model.
In the frequentist domain, several multimodel inference methodologies mirror the BMA framework. A prominent approach involves the utilization of Akaike weights to play the role of model probabilities [6]. These Akaike weights offer a measure of evidence for each model in a candidate set, allowing us to compute the estimate of each regression coefficient by aggregating their Akaike-weighted model-specific estimates. In this manner, a model averaging calculation akin to BMA is performed but with the incorporation of each model’s Akaike weight as opposed to its posterior probability.
The basis of incorporating Akaike weights into the multimodel inference framework presupposes the validity of Akaike weights as approximations to model selection probabilities. In fact, Burnham and Anderson claim that Akaike weights “may be interpreted as the probability that model i is the actual expected K–L [Kullback–Leibler] best model for the sampling situation considered” [6]. However, in this paper, we argue that Akaike weights do not provide adequate approximations to model probabilities. Instead, we demonstrate that repeating the model selection process via the bootstrap yields better approximations to model selection probabilities, which can then be incorporated for the purpose of conducting multimodel inference. Yet this bootstrap procedure must be implemented with caution because of the bias generated from bootstrapping the likelihood-based information criteria. Under appropriate conditions, we derive the form of this bias and propose a simple correction.

2. Akaike Weights

2.1. Background

The Akaike information criterion (AIC), introduced by Hirotugu Akaike in his seminal 1973 publication “Information Theory and an Extension of the Maximum Likelihood Principle”, has emerged as a pivotal tool in statistical model selection. To appreciate the evolution of the AIC framework, it is important to recall that the likelihood function encapsulates the fidelity of a model to the observed data. Consequently, it serves as a potent instrument for discerning the propriety of different candidate models. However, the adherence of the likelihood function to the data increases in tandem with the complexity of the model. If models are assessed based on the magnitude of the empirical likelihood, higher-dimensional models will invariably be favored, thus potentially culminating in the selection of models with excessively complex structures. Therefore, to employ the maximum likelihood framework for model selection, a recalibration of the likelihood function is necessary so that the favorability of complexity is conferred solely when warranted. This essential adaptation is achieved by AIC.
For any prospective model parameterized by $\theta$, AIC is given by

$$\mathrm{AIC} = -2\ell(\hat{\theta} \mid y) + 2k, \qquad (1)$$
where $k$ denotes the dimension of $\theta$, and $\hat{\theta}$ denotes the maximum likelihood estimate of the parameter vector $\theta$. The statistic $-2\ell(\hat{\theta} \mid y)$ is known as the "goodness-of-fit" term and represents $-2$ times the maximized empirical log-likelihood for the model parameterized by $\theta$ based on the data $y$. This term decreases as the model structure becomes more intricate, thus conceivably engendering a predisposition towards complexity. Counterbalancing this predilection, the term $2k$ is known as the "penalty", which increases as the model complexity escalates. In the pursuit of selecting an optimal model from a candidate collection, the objective is to choose the model exhibiting the minimal AIC value. This value represents an equilibrium between the goodness-of-fit term and the penalty term: a model of pronounced intricacy is able to assimilate the subtleties inherent in the data, resulting in a diminished goodness-of-fit term, yet the associated complexity results in a high penalty term, which means the more complex models do not necessarily produce the smallest AIC values.
In the traditional linear regression setting, consider a vector of outcomes $y$ that we aim to model using a set of $(p-1)$ explanatory variables plus an intercept term, represented by the rank-$p$ design matrix $X$. Let $\theta = (\beta^T, \sigma^2)^T$, where $\beta^T = (\beta_1, \ldots, \beta_p)$ denotes the corresponding regression coefficients, and let $M = \{M_1, M_2, \ldots, M_i, \ldots, M_{2^{(p-1)}}\}$ represent the set of all conceivable candidate models that can be constructed from the $(p-1)$ variables. Additionally, let $\mathrm{AIC}_i$ denote the value of AIC associated with the $i$th model $M_i \in M$, and let $\mathrm{AIC}_{min}$ denote the minimum AIC value across all the models in the candidate set $M$. In other words, we have that

$$\mathrm{AIC}_{min} = \min\{\mathrm{AIC}_1, \mathrm{AIC}_2, \ldots, \mathrm{AIC}_{2^{(p-1)}}\}.$$

If we then let $\Delta_i = \mathrm{AIC}_i - \mathrm{AIC}_{min}$, the Akaike weight for model $M_i$ is defined as

$$w_i = \frac{e^{-0.5\Delta_i}}{\sum_{j=1}^{2^{(p-1)}} e^{-0.5\Delta_j}}. \qquad (2)$$
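As a concrete illustration, the following minimal R sketch computes Akaike weights from a vector of AIC values; the three AIC values in the example call are hypothetical.

```r
# A minimal sketch: Akaike weights from a vector of AIC values.
akaike_weights <- function(aic) {
  delta <- aic - min(aic)   # Delta_i = AIC_i - AIC_min
  w <- exp(-0.5 * delta)
  w / sum(w)                # normalize so that the weights sum to 1
}

# Example with three hypothetical AIC values:
akaike_weights(c(102.3, 100.1, 104.8))
```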

2.2. Akaike Weights vs. Model Probabilities

Akaike weights have been frequently utilized as approximations to model selection probabilities. At first glance, this seems like a reasonable practice. For instance, Equation (2) has a similar structure to the Bayesian posterior model probability approximated via the Bayesian information criterion (BIC); see, for instance, the work of Neath and Cavanaugh [7]. However, although Burnham and Anderson advocate for the use of Akaike weights as approximations for model selection probabilities, they also recognize that such an approximation may be crude [6]. Through the following example, we argue that Akaike weights do not serve as adequate approximations to selection probabilities.
Consider a simple case where we have a null and an alternative model represented by $M_0$ and $M_1$, respectively. Then, the corresponding Akaike weights are given by

$$w_0 = \frac{e^{-0.5\Delta_0}}{e^{-0.5\Delta_0} + e^{-0.5\Delta_1}}$$

and

$$w_1 = \frac{e^{-0.5\Delta_1}}{e^{-0.5\Delta_0} + e^{-0.5\Delta_1}}.$$
However, the Akaike weights can be rewritten as follows:

$$w_1 = \frac{e^{-0.5\Delta_1}}{e^{-0.5\Delta_0} + e^{-0.5\Delta_1}} = \frac{e^{-0.5\Delta_1}\, e^{0.5\Delta_1}}{e^{-0.5\Delta_0}\, e^{0.5\Delta_1} + e^{-0.5\Delta_1}\, e^{0.5\Delta_1}} = \frac{1}{1 + e^{-0.5(\mathrm{AIC}_0 - \mathrm{AIC}_{min}) + 0.5(\mathrm{AIC}_1 - \mathrm{AIC}_{min})}} = \frac{1}{1 + e^{-0.5(\mathrm{AIC}_0 - \mathrm{AIC}_1)}}.$$
Similarly,

$$w_0 = \frac{1}{1 + e^{0.5(\mathrm{AIC}_0 - \mathrm{AIC}_1)}}.$$
Now, note the following:

$$w_1 = \frac{1}{1 + e^{-0.5(\mathrm{AIC}_0 - \mathrm{AIC}_1)}} \;\Longrightarrow\; w_1 + w_1 e^{-0.5(\mathrm{AIC}_0 - \mathrm{AIC}_1)} = 1 \;\Longrightarrow\; w_1 e^{-0.5(\mathrm{AIC}_0 - \mathrm{AIC}_1)} = 1 - w_1$$
$$\Longrightarrow\; w_0 = w_1 e^{-0.5(\mathrm{AIC}_0 - \mathrm{AIC}_1)} \quad (\text{because } w_0 = 1 - w_1) \;\Longrightarrow\; \frac{w_0}{w_1} = e^{-0.5(\mathrm{AIC}_0 - \mathrm{AIC}_1)} \;\Longrightarrow\; \log\!\left(\frac{w_0}{w_1}\right) = -0.5(\mathrm{AIC}_0 - \mathrm{AIC}_1).$$
This derivation implies that if the Akaike weights are treated as actual model choice probabilities, then for models $M_i$ and $M_j$ with $i \neq j$, the log of the corresponding relative risk is proportional to $\mathrm{AIC}_i - \mathrm{AIC}_j$. However, this property is erroneous, in that the relative risk based on the actual model selection probabilities will not satisfy this proportionality.
To illustrate this incongruity, consider the case where $M_0$ and $M_1$ are parameterized by $\theta_0$ and $\theta_1$, respectively, but $|\theta_1| - |\theta_0| = 1$; in other words, the dimensions of the parameter vectors differ by only one. In addition, assume that the null is the true data-generating model. Note that in this case, $\Delta_0 = 0 \iff \mathrm{AIC}_0 - \mathrm{AIC}_1 \le 0$; therefore, if we let $P(M_0)$ be the selection probability for model $M_0$, then $P(M_0)$ is equivalent to $P(\mathrm{AIC}_0 - \mathrm{AIC}_1 \le 0)$. Under a true null model, we can write

$$P(M_0) = P(\mathrm{AIC}_0 - \mathrm{AIC}_1 \le 0) = P\big( 2(\ell(\hat\theta_1 \mid y) - \ell(\hat\theta_0 \mid y)) - 2 \le 0 \big) \approx P(\chi^2_1 \le 2) = 0.8427.$$
This means that asymptotically, the relative risk of $M_0$ over $M_1$ is equal to

$$\frac{P(M_0)}{P(M_1)} = \frac{0.8427}{1 - 0.8427} = 5.3573.$$
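These two constants are easy to verify numerically in R:

```r
# Check of the asymptotic selection probability and relative risk above.
p0 <- pchisq(2, df = 1)   # P(chi^2_1 <= 2) = 0.8427
p0 / (1 - p0)             # relative risk of M0 over M1 = 5.3573
```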
For the sake of argument, suppose that $w_0$ and $w_1$ can serve as approximations to $P(M_0)$ and $P(M_1)$. However, note that

$$\frac{w_0}{w_1} = e^{-0.5(\mathrm{AIC}_0 - \mathrm{AIC}_1)} < e^{-0.5(-2)} = e^{1} = 2.7183,$$

because $2(\ell(\hat\theta_1 \mid y) - \ell(\hat\theta_0 \mid y)) > 0$, so that $\mathrm{AIC}_0 - \mathrm{AIC}_1 > -2$. Since the ratio $w_0/w_1$ can never attain the actual value of the relative risk, the Akaike weights should not be treated as approximations to the probability of choosing a particular model.

3. Bootstrap Model Frequencies

3.1. Model Frequencies as Multinomial Probability Vector Approximations

Consider a scenario in which we can draw a set of $S$ samples from the true data-generating model, denoted by $g$. For each sample $s \in \{1, \ldots, S\}$, we evaluate the quantity $\Delta_i = \mathrm{AIC}_i - \mathrm{AIC}_{min}$.
Now, it is worth noting that if $M_i$ emerges as the optimal model for sample $s$, then $\mathrm{AIC}_i = \mathrm{AIC}_{min}$, which implies that $\Delta_i = 0$ for such a sample. Subsequently, we define a random vector $\tau^T = (\tau_1, \tau_2, \ldots, \tau_{2^{(p-1)}})$ to denote the number of selections for each model across all $S$ draws. Therefore,

$$\tau_i = \sum_{s=1}^{S} I(\Delta_i^{(s)} = 0),$$

where $\Delta_i^{(s)}$ denotes the AIC difference $\Delta_i$ for sample $s$.
Given this configuration, we can assert that $\tau \sim \mathrm{multinomial}(S, \pi)$, where $S$ represents the number of trials, and $\pi^T = (\pi_1, \pi_2, \ldots, \pi_{2^{(p-1)}})$ constitutes a vector of probabilities. Specifically, $\pi_i$ corresponds to $P(M_i)$, the probability of selecting model $M_i$. Furthermore, since model $M_i$ is exclusively chosen when $\Delta_i = 0$, we deduce that $\pi_i = P(\Delta_i = 0)$. Finally, we obtain:

$$\pi^T = E_g(\hat\pi^T) = E_g\left( \frac{\sum_{s=1}^S I(\Delta_1^{(s)} = 0)}{S}, \ldots, \frac{\sum_{s=1}^S I(\Delta_{2^{(p-1)}}^{(s)} = 0)}{S} \right).$$
Thus, we can define the model selection probabilities as the entries of the probability vector $\pi$ from a multinomial distribution with $S$ trials, where each trial $s$ corresponds to a sample from the true data-generating model. Therefore, an approximation of the model selection probabilities amounts to an estimate of $\pi$. For this task, we will employ the non-parametric bootstrap, though we need to consider a set of important caveats delineated in Section 3.2.
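To make the definition concrete, the following R sketch approximates $\pi$ by direct Monte Carlo, drawing $S$ samples from an assumed generating model and tallying AIC selections; the intercept-only versus one-covariate candidate set is an illustrative assumption.

```r
# Definitional (non-bootstrap) Monte Carlo estimate of pi:
# draw S samples from g, select by AIC in each, tabulate frequencies.
set.seed(1)
S <- 500; n <- 20
tau <- c(M0 = 0, M1 = 0)
for (s in 1:S) {
  y <- rnorm(n, mean = 1)            # assumed true model: y_i = 1 + eps_i
  x <- rnorm(n)                      # extraneous covariate
  aic <- c(AIC(lm(y ~ 1)), AIC(lm(y ~ x)))
  sel <- which.min(aic)
  tau[sel] <- tau[sel] + 1           # tau_i counts selections of model i
}
tau / S                              # pi-hat, the selection frequencies
```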

3.2. Bias Induced from Bootstrapping Fitted Likelihoods

Let $y^T = (y_1, \ldots, y_n)$ denote the observation vector, which we assume comprises $n$ independent variates $y_i$. To assess the disparity between a data-generating model $g(y)$ and a parametric approximating model $f(y \mid \theta)$, we can employ the Kullback–Leibler information between $g(y)$ and $f(y \mid \theta)$ with respect to $g(y)$, given by

$$I_{KL}(g, \theta) = E_g\left[ \log \frac{g(y)}{f(y \mid \theta)} \right].$$
The expression

$$d(g, \theta) = E_g[-2\ell(\theta \mid y)],$$

known as the Kullback–Leibler discrepancy (KLD), is often used as a substitute for $I_{KL}$, since $E_g[\log g(y)]$ does not depend on the structure of the approximating model $f(y \mid \theta)$.
In practice, the goal is to determine the propriety of fitted models of the form $f(y \mid \hat\theta)$, where $\hat\theta = \operatorname*{arg\,max}_{\theta \in \Theta} \ell(\theta \mid y)$. The KLD for the fitted model is given by

$$d(g, \hat\theta) = E_g[-2\ell(\theta \mid y)] \big|_{\theta = \hat\theta}. \qquad (3)$$
To develop bootstrap approximations to model selection probabilities, we will use the "plug-in" principle as described by Efron and Tibshirani [8]. In our current application, we replace the true data-generating distribution $g$ by the empirical distribution $\hat{g}$; the original sample $y$ by a bootstrap sample $y^*$ drawn from $\hat{g}$; and finally, $\hat\theta$ by the maximum likelihood estimate (MLE) $\hat\theta^*$ derived under the bootstrap sample $y^*$. For clarification purposes, it is worth emphasizing that $E_g(\cdot)$ is the expected value with respect to the data-generating distribution, while $E_*(\cdot)$ is the expected value with respect to the bootstrap distribution. In fact, any notation presented with the symbol $*$ will correspond to a bootstrap-based construct. For example, the bootstrap version of AIC is defined as

$$\mathrm{AIC}^* = -2\ell(\hat\theta^* \mid y^*) + 2k. \qquad (4)$$
With the preceding replacements, the bootstrap version of the KLD is given by

$$d(\hat{g}, \hat\theta^*) = E_{\hat{g}}[-2\ell(\theta \mid y)] \big|_{\theta = \hat\theta^*} = \sum_{i=1}^{n} -2\ell_i(\hat\theta^* \mid y_i) \;\; (\text{because each } y_i \text{ is independent}) = -2\ell(\hat\theta^* \mid y),$$

where $\ell_i$ represents the contribution to the log-likelihood based on the $i$th response $y_i$.
The work by Dajles and Cavanaugh [9] shows that

$$E_g E_*\big[-2\ell(\hat\theta^* \mid y)\big] \approx E_g\big[d(g, \hat\theta)\big] - k, \qquad (5)$$

where $k$ is the dimension of the model. With these results, we can provide the following lemma.
Lemma 1. 
Consider a large-sample setting and assume that the candidate model parameterized by $\theta$ subsumes the true model. Then, for the corresponding empirical log-likelihood $\ell(\hat\theta \mid y)$, we have that

$$E_g\big[-2\ell(\hat\theta \mid y)\big] \approx E_g E_*\big[-2\ell(\hat\theta^* \mid y^*)\big] + k,$$

where $k = |\theta|$ and $E_*$ is the expectation with respect to the bootstrap distribution.
Proof. 
Let $d(g, \hat\theta)$ as defined in (3) denote the KLD for the fitted candidate model.
Now, consider the extended information criterion (EIC) [10] for this fitted model, which is given by

$$\mathrm{EIC} = -2\ell(\hat\theta \mid y) + 2 E_*\big[\ell(\hat\theta^* \mid y^*) - \ell(\hat\theta^* \mid y)\big].$$
Under the assumption that the candidate model subsumes the true model, it has been shown that EIC is an asymptotically unbiased estimator of the expected KLD, which means that $E_g(d(g, \hat\theta)) \approx E_g(\mathrm{EIC})$ [10]. Therefore, by applying the expected value under the generating model $g$ to the expression for EIC, we obtain the following:

$$\begin{aligned}
E_g(d(g, \hat\theta)) &\approx E_g\big[-2\ell(\hat\theta \mid y)\big] + E_g\Big[ 2 E_*\big(\ell(\hat\theta^* \mid y^*) - \ell(\hat\theta^* \mid y)\big) \Big] \\
&= E_g\big[-2\ell(\hat\theta \mid y) + 2k\big] - 2k + E_g\Big[ 2 E_*\big(\ell(\hat\theta^* \mid y^*) - \ell(\hat\theta^* \mid y)\big) \Big] \\
&= E_g[\mathrm{AIC}] - E_g E_*\big[-2\ell(\hat\theta^* \mid y^*)\big] - 2k + E_g E_*\big[-2\ell(\hat\theta^* \mid y)\big] \quad \text{by (1)} \\
&= E_g[\mathrm{AIC}] - \Big( E_g E_*\big[-2\ell(\hat\theta^* \mid y^*)\big] + 2k \Big) + E_g E_*\big[-2\ell(\hat\theta^* \mid y)\big] \\
&= E_g[\mathrm{AIC}] - E_g E_*[\mathrm{AIC}^*] + E_g E_*\big[-2\ell(\hat\theta^* \mid y)\big] \quad \text{by (4)}.
\end{aligned}$$
Now, under the same model specification assumption, we know that AIC is also an asymptotically unbiased estimator of the expected KLD, i.e., $E_g(d(g, \hat\theta)) \approx E_g[\mathrm{AIC}]$. Therefore,

$$E_g[\mathrm{AIC}] \approx E_g[\mathrm{AIC}] - E_g E_*[\mathrm{AIC}^*] + E_g E_*\big[-2\ell(\hat\theta^* \mid y)\big] \;\Longrightarrow\; 0 \approx -E_g E_*[\mathrm{AIC}^*] + E_g E_*\big[-2\ell(\hat\theta^* \mid y)\big] \;\Longrightarrow\; E_g E_*[\mathrm{AIC}^*] \approx E_g E_*\big[-2\ell(\hat\theta^* \mid y)\big].$$
Recall the approximation (5), which, since $E_g[d(g, \hat\theta)] \approx E_g[\mathrm{AIC}]$, can be restated as

$$E_*\big[-2\ell(\hat\theta^* \mid y)\big] \approx \mathrm{AIC} - k.$$
Hence, we have that

$$\begin{aligned}
E_g E_*[\mathrm{AIC}^*] &\approx E_g E_*\big[-2\ell(\hat\theta^* \mid y)\big] \\
\Longrightarrow\; E_g E_*[\mathrm{AIC}^*] &\approx E_g[\mathrm{AIC}] - k \quad \text{by (5)} \\
\Longrightarrow\; E_g[\mathrm{AIC}] &\approx E_g E_*[\mathrm{AIC}^*] + k \\
\Longrightarrow\; E_g\big[-2\ell(\hat\theta \mid y)\big] &\approx E_g E_*\big[-2\ell(\hat\theta^* \mid y^*)\big] + k.
\end{aligned}$$
The preceding establishes the lemma. □
The results from Lemma 1 indicate that if we want to build a bootstrap distribution of AIC variates for which the expected value of each variate is approximately unbiased for the expected value of the fitted KLD, then we must adjust the variates by k. Moreover, the results from Lemma 1 in combination with the expression (5) reveal a deeper understanding of the behavior of fitted log-likelihoods when subjected to a bootstrap procedure.
For the purpose of presenting parallel notation, employing the trivial result that $-2\ell(\hat\theta \mid y) = E_*[-2\ell(\hat\theta \mid y)]$, we have the following asymptotic results:

$$E_g\big[d(g, \hat\theta)\big] \approx E_g E_*\big[-2\ell(\hat\theta^* \mid y)\big] + k \quad \text{by Equation (5)}, \qquad (6)$$

$$E_g\big[d(g, \hat\theta)\big] \approx E_g E_*\big[-2\ell(\hat\theta \mid y)\big] + 2k \quad \text{by the properties of AIC}, \qquad (7)$$

$$E_g\big[d(g, \hat\theta)\big] \approx E_g E_*\big[-2\ell(\hat\theta^* \mid y^*)\big] + 3k \quad \text{by Lemma 1}. \qquad (8)$$
Equations (6)–(8) exhibit the intricate connection between the penalty term and the level of optimism conveyed by fitted log-likelihoods in the context of the bootstrap. In Equation (6), the estimate of the parameter θ is based on the bootstrap data, and the likelihood evaluates the efficacy of the fitted model parameterized by θ ^ * in predicting the original sample y. This corresponds to the classical setting in which the bootstrap is used to assess the predictive accuracy of a fitted model. Since some of the elements of y will not be represented in y * , one can think of this setting as representing pseudo out-of-sample prediction. Here, the penalty term should be k, the number of parameters in the model.
Now, when we look at Equation (7), the likelihood evaluates the efficacy of the fitted model parameterized by $\hat\theta$ in predicting the sample $y$ that was used to obtain $\hat\theta$. The strong dependence between $\hat\theta$ and $y$ results in overly optimistic prediction, thus justifying the need to increase the penalty to $2k$. Analogously, when we look at Equation (8), the likelihood evaluates the efficacy of the fitted model parameterized by the bootstrap replicate $\hat\theta^*$ in predicting the bootstrap sample $y^*$ used to obtain $\hat\theta^*$. The strong dependence between $\hat\theta^*$ and $y^*$ again results in overly optimistic prediction. Moreover, the optimism is increased by the duplicated elements in the fitting sample $y^*$; i.e., there are more independent pieces of information in $y$ than in $y^*$. Here, the penalization needs to be increased to $3k$.
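The three penalty levels can be checked empirically. The sketch below, under an assumed normal model with unknown mean and variance ($k = 2$), estimates $E_g[d(g, \hat\theta)]$ in closed form and compares it with Monte Carlo estimates of the three bootstrap expectations; the differences should land near $k$, $2k$, and $3k$, with agreement that is only asymptotic.

```r
# Monte Carlo sketch of Equations (6)-(8) for y_i ~ N(mu, sigma^2), k = 2.
set.seed(2)
n <- 50; R <- 500; B <- 200
mu_g <- 1; sig2_g <- 1   # generating model: N(1, 1)

neg2ll <- function(mu, sig2, y) length(y) * log(2 * pi * sig2) + sum((y - mu)^2) / sig2

kld <- lhs6 <- lhs7 <- lhs8 <- numeric(R)
for (r in 1:R) {
  y <- rnorm(n, mu_g, sqrt(sig2_g))
  mu_hat <- mean(y); sig2_hat <- mean((y - mu_hat)^2)   # Gaussian MLEs
  # d(g, theta-hat): E_g[-2 l(theta | y')] has a closed form here
  kld[r] <- n * log(2 * pi * sig2_hat) + n * (sig2_g + (mu_g - mu_hat)^2) / sig2_hat
  a <- b <- numeric(B)
  for (bb in 1:B) {
    ys <- sample(y, n, replace = TRUE)                  # bootstrap sample y*
    mu_s <- mean(ys); sig2_s <- mean((ys - mu_s)^2)     # bootstrap MLEs
    a[bb] <- neg2ll(mu_s, sig2_s, y)                    # -2 l(theta*-hat | y)
    b[bb] <- neg2ll(mu_s, sig2_s, ys)                   # -2 l(theta*-hat | y*)
  }
  lhs6[r] <- mean(a)
  lhs7[r] <- neg2ll(mu_hat, sig2_hat, y)                # -2 l(theta-hat | y)
  lhs8[r] <- mean(b)
}
# The three gaps should be close to k = 2, 2k = 4, and 3k = 6, respectively:
c(mean(kld) - mean(lhs6), mean(kld) - mean(lhs7), mean(kld) - mean(lhs8))
```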

3.3. Bootstrap Approximation of Model Selection Probabilities

In Section 3.1, we introduced the idea of a model selection probability as an entry of the probability vector in a multinomial distribution. With this framework, we have that

$$\pi^T = E_g(\hat\pi^T) = E_g\left( \frac{\sum_{s=1}^S I(\Delta_1^{(s)} = 0)}{S}, \ldots, \frac{\sum_{s=1}^S I(\Delta_{2^{(p-1)}}^{(s)} = 0)}{S} \right) \approx E_*\left( \frac{\sum_{b=1}^B I(\Delta_1^{*(b)} = 0)}{B}, \ldots, \frac{\sum_{b=1}^B I(\Delta_{2^{(p-1)}}^{*(b)} = 0)}{B} \right),$$

where $E_*$ is the expected value with respect to the bootstrap distribution. Here, $\Delta_i^* = \mathrm{AIC}^*_{i,adj} - \mathrm{AIC}^*_{min,adj}$, with $\mathrm{AIC}^*_{i,adj}$ denoting the bias-adjusted bootstrap variant of AIC for model $M_i$: $\mathrm{AIC}^*_{i,adj} = \mathrm{AIC}^*_i + k_i$, where $k_i = |\theta_i|$, and with

$$\mathrm{AIC}^*_{min,adj} = \min\{\mathrm{AIC}^*_{1,adj}, \ldots, \mathrm{AIC}^*_{2^{(p-1)},adj}\}.$$
Thus, for model $M_i$, we define the bootstrap model frequency (BMF) by

$$P_*(M_i) = \frac{\sum_{b=1}^B I(\Delta_i^{*(b)} = 0)}{B},$$

which serves as a bootstrap approximation of $P(M_i)$.
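A minimal R sketch of this computation for a collection of lm() candidates follows; the formulas argument is a hypothetical named list of model formulas, and the +k term implements the bias adjustment $\mathrm{AIC}^*_{i,adj} = \mathrm{AIC}^*_i + k_i$.

```r
# Bias-adjusted bootstrap model frequencies (BMFs) for lm() candidates.
bmf <- function(formulas, data, B = 1000, adjust = TRUE) {
  counts <- setNames(numeric(length(formulas)), names(formulas))
  n <- nrow(data)
  for (b in 1:B) {
    boot <- data[sample(n, n, replace = TRUE), , drop = FALSE]
    aic <- sapply(formulas, function(f) {
      fit <- lm(f, data = boot)
      k <- length(coef(fit)) + 1          # parameter count, + 1 for sigma^2
      AIC(fit) + if (adjust) k else 0     # bias-adjusted bootstrap AIC
    })
    counts[which.min(aic)] <- counts[which.min(aic)] + 1
  }
  counts / B                              # P*(M_i), one frequency per model
}
```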

3.4. Akaike Weights vs. Bootstrap Model Frequencies: Simulation

To better understand the behavior of Akaike weights vs. BMFs, consider the true data-generating model given by

$$y_i = \beta_{0,1} + \epsilon_i,$$

with $\epsilon_i \sim N(0, 1)$ and $\beta_{0,1} = 1$. Now, suppose the null and alternative candidate models under consideration have the following forms:

$$M_0: \; y_i = \beta_1 + \eta_i, \qquad M_1: \; y_i = \beta_1 + \beta_2 x_{i2} + \eta_i,$$

where $x_{i2} \sim N(0, 1)$ and $\eta_i \sim N(0, \sigma^2)$. Thus, the null model is correctly specified and the alternative is overspecified. For 1000 bootstrap iterations and 500 samples from the data-generating model, each of size $N = 20$, we can compute the Akaike weights and the AIC-based bootstrap model frequencies for $M_0$ and $M_1$. The results are displayed in Figure 1. Here, the discrepancy between Akaike weights and BMFs is noticeable.
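A hedged sketch of this comparison for a single simulated data set, reusing the akaike_weights() and bmf() helpers defined above (exact values vary with the seed):

```r
# One data set of size N = 20 from the true null model, then both measures.
set.seed(3)
N <- 20
dat <- data.frame(y = 1 + rnorm(N), x2 = rnorm(N))
cand <- list(M0 = y ~ 1, M1 = y ~ x2)
bmf(cand, dat, B = 1000)   # adjusted BMFs for M0 and M1
akaike_weights(sapply(cand, function(f) AIC(lm(f, data = dat))))
```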
The red, vertical dotted lines in Figure 1 represent the boundaries associated with the Akaike weights. More precisely, as with the log of the relative risk, the Akaike weights are bounded by a function of the minimum difference between the empirical log-likelihoods of the null and the alternative models. In other words, since $2(\ell(\hat\theta_1 \mid y) - \ell(\hat\theta_0 \mid y)) > 0$, we have

$$w_0 = \frac{1}{1 + e^{0.5(\mathrm{AIC}_0 - \mathrm{AIC}_1)}} < \frac{1}{1 + e^{0.5(-2)}} = 0.7311,$$

and similarly,

$$w_1 = \frac{1}{1 + e^{-0.5(\mathrm{AIC}_0 - \mathrm{AIC}_1)}} > \frac{1}{1 + e^{-0.5(-2)}} = 0.2689.$$
Table 1 shows the average of the Akaike weights and the BMFs over all 500 simulated data sets. Herein, we see that the average Akaike weights and the average BMFs are not close to each other. However, although not identical in value, the average BMF is close to the empirical estimate of the true model probability. For these 500 data sets, we have that $\hat{P}(M_0) = \hat{P}(\Delta_0 = 0) \approx 0.8440$ and

$$\frac{\sum_{s=1}^{500} P_*(\Delta_0^{*(s)} = 0)}{500} \approx 0.7566.$$

Conversely, the average Akaike weight for $M_0$ is 0.6166, which is far from the empirical estimate of $P(M_0)$.

3.5. More Complex Simulation Scenarios

The setting explored in Section 3.4 provides an intuitive and illustrative example of the behavior of Akaike weights and BMFs and the differences between them. However, one can consider other more complex simulation scenarios, though perhaps at the expense of mathematical clarity.
For instance, consider the data-generating model given by

$$y_i = 1 + 2x_{i1} + 1x_{i2} + 0x_{i3} + \epsilon_i,$$

with $\epsilon_i \sim N(0, 1)$ and

$$(x_{i1}, x_{i2}, x_{i3})^T \sim N_3(\mu, \Sigma),$$

where $\mu = (1, 1, 1)^T$ and $\Sigma = \begin{pmatrix} 1 & 0.5 & 0.5 \\ 0.5 & 1 & 0.5 \\ 0.5 & 0.5 & 1 \end{pmatrix}$.
In this setting, of the eight possible candidate models, only two models subsume the correct mean structure, while the remaining models are underspecified. Moreover, to test the versatility of BMFs, we also introduce some correlation between the explanatory variables.
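For readers who wish to replicate this setup, a brief sketch of the data-generating step is given below; it assumes the MASS package for drawing the correlated predictors.

```r
# Draw N observations from the data-generating model with correlated x's.
library(MASS)
N <- 1000
Sigma <- matrix(0.5, 3, 3); diag(Sigma) <- 1   # unit variances, 0.5 correlations
X <- mvrnorm(N, mu = rep(1, 3), Sigma = Sigma)
y <- 1 + 2 * X[, 1] + 1 * X[, 2] + 0 * X[, 3] + rnorm(N)
dat <- data.frame(y = y, x1 = X[, 1], x2 = X[, 2], x3 = X[, 3])
```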
Table 2 shows the performance one would expect given the theoretical underpinnings that we have developed and presented. In approximating the model probabilities for the small sample size, the performance of the average BMF is not ideal but is still superior to that of the Akaike weights. However, as the sample size increases, we observe a significant improvement in BMF performance. For the sample size of N = 1000 , we note that the average BMFs are sufficiently close to their respective model selection probabilities.
Now, consider a more complex case with the data-generating model given by

$$y_i = 1 + 2x_{i1} + 1x_{i2} + 0x_{i3} + \epsilon_i,$$

with $\epsilon_i \sim Z \cdot N(0, 3) + (1 - Z) \cdot N(0, 1)$, where $Z \sim \mathrm{Bernoulli}(\pi)$ with $\pi = 0.8$, and

$$(x_{i1}, x_{i2}, x_{i3})^T \sim N_3(\mu, \Sigma),$$

where $\mu = (1, 1, 1)^T$ and $\Sigma = \begin{pmatrix} 1 & 0.5 & 0.5 \\ 0.5 & 1 & 0.5 \\ 0.5 & 0.5 & 1 \end{pmatrix}$.
In this setting, assuming that the models are fit using a Gaussian log-likelihood, each candidate model is misspecified with respect to the error distribution. With regard to the mean structure, two models subsume the correct structure, while the remaining models are underspecified. Table 3 illustrates that for a sufficiently large sample size ( N = 1000 ), the adjusted bootstrap model frequencies (BMFs) outperform the Akaike weights. As the sample size decreases, we observe that the BMFs diverge from the simulated model selection probabilities.
The performance observed in the large sample size scenario aligns with theoretical expectations. For any information criterion based on empirical log-likelihoods, if the sample size is sufficiently large relative to the number of variables in the candidate models, the goodness-of-fit term will predominate and favor the selection of larger models that subsume the appropriate mean structure. Since the true model selection probabilities are calculated via AIC, model misspecification is likely to similarly affect both the BMFs and the simulated true model probabilities. However, keep in mind that for this setting, AIC is not guaranteed to provide an asymptotically unbiased estimator of the KLD, which means that the simulated true model selection probabilities will not be correctly calibrated.
As the sample size decreases, the penalty term becomes more influential, and the effects of violations to the regularity conditions under which AIC is asymptotically justified become more pronounced. Therefore, in small sample size settings, we expect to see greater discrepancies in model selection between the simulated samples and the bootstrap samples. Precisely elucidating these discrepancies is challenging and would require a rigorous mathematical exploration of the effect of misspecification on the performance of AIC as an asymptotically unbiased estimator of the expected KLD. To avoid mathematical ambiguities, it is best to consider a simple case as demonstrated in Section 3.4.

3.6. Bootstrap-Based Multimodel Estimates and Confidence Intervals

Bootstrap model frequencies are an important component of the overall multimodel inference framework. Procedures for obtaining bootstrap-based estimates and confidence intervals within this framework are rigorously outlined in Efron’s seminal work [11]. In our current contribution, we apply the concept of “bootstrap smoothing” to the estimation of regression coefficients. The algorithm can be summarized as follows.
Consider a vector of outcomes $y$ that we aim to model using a set of $(p-1)$ explanatory variables plus an intercept term, represented by the design matrix $X$. Let $\beta^T = (\beta_1, \ldots, \beta_p)$ denote the corresponding coefficients, and let

$$M = \{M_1, M_2, \ldots, M_i, \ldots, M_{2^{(p-1)}}\}$$

represent the set of all conceivable candidate models that can be constructed from the $(p-1)$ variables. Additionally, let $IC(M_i)$ denote the value of the information criterion associated with the $i$th model $M_i$.
Now, consider the following bootstrapping procedure.
(1) Obtain a bootstrap sample denoted as $(y^*, X^*)$.
(2) Fit each of the candidate models to the bootstrap sample.
(3) Identify the model $M_i$ that corresponds to the minimum information criterion value. In other words, choose model $M_i$ such that
$$IC^*(M_i) = \min\{IC^*(M_1), \ldots, IC^*(M_{2^{(p-1)}})\},$$
where $IC^*(M_i)$ is the bootstrap version of $IC(M_i)$.
(4) Record $\hat\beta_j^*$, the maximum likelihood estimate (MLE) of each $\beta_j$ corresponding to the covariates represented in the model selected in step 3. The MLEs for unselected variables are set to zero.
(5) Repeat the aforementioned steps $B$ times.
In our setting, we have that $IC = \mathrm{AIC}$. For the unadjusted results, we let $IC^* = \mathrm{AIC}^*$, where $\mathrm{AIC}^* = -2\ell(\hat\theta^* \mid y^*) + 2k$, $\hat\theta^* = (\hat\beta^{*T}, \hat\sigma^{2*})^T$, and $k = |\theta|$. On the other hand, for the bias-adjusted results, we have that $\mathrm{AIC}^* = -2\ell(\hat\theta^* \mid y^*) + 3k$.
If we let $y^{*(b)}$ be a bootstrap sample from $y$, with $b \in \{1, 2, \ldots, B\}$, and let $\hat\beta_j^{*(b)}$ be the estimate of the regression coefficient $\beta_j$ for the $b$th bootstrap sample under the model selected for that specific sample, the procedure described above results in a set of estimates $\{\hat\beta_j^{*(1)}, \hat\beta_j^{*(2)}, \ldots, \hat\beta_j^{*(B)}\}$. The smoothed estimator of $\beta_j$ is defined as

$$\tilde\beta_j^* = \frac{1}{B} \sum_{b=1}^B \hat\beta_j^{*(b)}.$$
For the confidence interval based on the smoothed estimator, Efron [11] proposes the following approach. Suppose that we have a sample $y = (y_1, y_2, \ldots, y_N)^T$, a set of bootstrap samples $\{y^{*(1)}, y^{*(2)}, \ldots, y^{*(B)}\}$, where the elements of this set are of the form $y^{*(b)} = (y_1^{*(b)}, y_2^{*(b)}, \ldots, y_k^{*(b)}, \ldots, y_N^{*(b)})^T$, and a bootstrap coefficient estimate $\hat\beta_j^{*(b)}$ for each $b \in \{1, 2, \ldots, B\}$. Based on the case where $B = N^N$, i.e., where the bootstrap samples comprise all possible draws of the form $y^{*(b)}$, each having probability $1/B$, if we define

$$Y_{bi}^* = \#\{y_k^{*(b)} = y_i\}$$

to be the number of elements of $y^{*(b)}$ equaling the original data point $y_i$, then the smoothed standard error of $\tilde\beta_j^*$ can be estimated by

$$\widetilde{sd}_j = \left( \sum_{i=1}^N cov_{ij}^2 \right)^{1/2},$$

where

$$cov_{ij} = \mathrm{cov}_*\big(Y_{bi}^*, \hat\beta_j^{*(b)}\big).$$
However, bootstrap applications do not typically consider the "ideal" case with all possible $N^N$ bootstrap samples. For the "non-ideal" setting with $B$ bootstrap samples and $B$ not necessarily equal to $N^N$, we can obtain an estimate of the smoothed standard error via

$$\widetilde{sd}_j^B = \left( \sum_{i=1}^N \widehat{cov}_{ij}^2 \right)^{1/2},$$

where

$$\widehat{cov}_{ij} = \frac{1}{B} \sum_{b=1}^B \big(Y_{bi}^* - \bar{Y}_{\cdot i}^*\big)\big(\hat\beta_j^{*(b)} - \tilde\beta_j^*\big).$$

Note that this estimator solely relies on $\{y^{*(1)}, y^{*(2)}, \ldots, y^{*(B)}\}$, which corresponds to the set of bootstrap samples used to estimate $\tilde\beta_j^*$. A 95% confidence interval (CI) for the smoothed estimator, called the smoothed 95% CI, is given by

$$\tilde\beta_j^* \pm 1.96 \, \widetilde{sd}_j^B.$$
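The following R sketch implements the full procedure, combining AIC-based selection within each bootstrap replicate with Efron's smoothed standard error; the arguments are hypothetical (coef_names must match lm()'s coefficient names, e.g., "(Intercept)"), and extra_pen = 1 gives the bias-adjusted version (3k total penalty) while extra_pen = 0 gives the unadjusted one.

```r
# Bootstrap smoothing with Efron's smoothed standard error.
smooth_boot <- function(formulas, data, coef_names, B = 1000, extra_pen = 1) {
  n <- nrow(data)
  betas <- matrix(0, B, length(coef_names), dimnames = list(NULL, coef_names))
  Y <- matrix(0L, B, n)                          # Y*_bi: resampling counts
  for (b in 1:B) {
    idx <- sample(n, n, replace = TRUE)
    Y[b, ] <- tabulate(idx, nbins = n)
    boot <- data[idx, , drop = FALSE]
    fits <- lapply(formulas, lm, data = boot)
    aic <- sapply(fits, function(f) AIC(f) + extra_pen * (length(coef(f)) + 1))
    cf <- coef(fits[[which.min(aic)]])           # winning model's estimates
    betas[b, names(cf)] <- cf                    # unselected coefficients stay 0
  }
  est <- colMeans(betas)                         # smoothed estimates
  sd_j <- sapply(coef_names, function(j) {
    # cov-hat_ij for i = 1..n; the Y-bar centering term vanishes because
    # the beta deviations average to zero over the B replicates
    covs <- colMeans(Y * (betas[, j] - est[j]))
    sqrt(sum(covs^2))                            # smoothed standard error
  })
  cbind(estimate = est, lower = est - 1.96 * sd_j, upper = est + 1.96 * sd_j)
}
```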

4. Application in Biomedicine: Sulindac for the Treatment of Colonic and Rectal Adenomas in Patients with Familial Adenomatous Polyposis

Familial adenomatous polyposis (FAP) is an autosomal dominant genetic disease characterized by the development of thousands of polyps throughout the colon and rectum. This condition is rare; it occurs in 1 in 1000 people, and although it accounts for only 1 % of all diagnosed colorectal cancers, it is the second most common inherited colorectal cancer syndrome.
FAP is the result of a mutation of a tumor suppressor gene on chromosome 5 known as the adenomatous polyposis coli (APC) gene. Polyps begin to arise in the early teens, and if untreated, patients have a 100% lifetime risk of developing colorectal cancer. In addition, patients with FAP are at risk of developing extracolonic pathologies such as desmoid tumors (solid connective tissue tumors), hepatoblastomas (liver tumors), and thyroid cancer [12].
In patients with FAP, early detection and treatment are essential for preventing the development of colorectal carcinoma, thus improving the prognosis. The standard treatment for FAP is colectomy with or without proctectomy. Subtotal colectomy is desirable for many patients, but it requires continued surveillance; total proctocolectomy does not require surveillance, but patients experience increased stool urgency and higher rates of urinary dysfunction. Therefore, there is a need for the development of non-surgical treatments for patients with FAP [13,14].
In 1993, the effect of Sulindac, a non-steroidal anti-inflammatory drug (NSAID), was investigated for the treatment of FAP in a randomized clinical trial [15]. The study recruited 22 patients with FAP: 11 were assigned to the treatment group that received 150 mg doses of Sulindac, and 11 were assigned to the control group that received an identical-appearing placebo tablet. The study also recorded the sex and age of the patients.
The results of the study were published in the paper titled “Treatment of Colonic and Rectal Adenomas with Sulindac in Familial Adenomatous Polyposis” in The New England Journal of Medicine [15]. The data set is available in the R package “medicaldata” [16].
The main outcome of interest in the study is the proportionate difference in the number of polyps at 3 months compared to baseline. In other words, if we define $D$ as the proportionate change in the number of polyps, then we have that

$$D = \frac{\mathit{num3pol} - \mathit{baseline}}{\mathit{baseline}},$$

where $\mathit{num3pol}$ denotes the number of polyps at 3 months and $\mathit{baseline}$ denotes the number of polyps at baseline.
In addition to this outcome, the explanatory variables in the data set include treatment (1 for Sulindac and 0 for the placebo), sex (1 for male and 0 for female), and age. Moreover, in our modeling analysis, we consider the interaction between treatment and sex, defining interaction = treatment · sex.
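A hedged sketch of assembling these variables in R follows; the column names (baseline, number3m, treatment, sex) are assumptions about the polyps data set in the medicaldata package and should be verified against the package documentation.

```r
# Construct the outcome D and the interaction term from the FAP data.
library(medicaldata)
d <- polyps   # assumed name of the FAP data set in the package
d$D <- (d$number3m - d$baseline) / d$baseline        # proportionate change
d$treatment01 <- as.numeric(d$treatment == "sulindac")
d$sex01 <- as.numeric(d$sex == "male")
d$interaction <- d$treatment01 * d$sex01
head(d[, c("D", "treatment01", "sex01", "age", "interaction")])
```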
The primary objective of this application is to assess the selection probability of different linear models that can be constructed with the variables collected from the study. To maintain consistency with the original publication [15], all the candidate models are fitted using least squares regression; an inspection of the distribution of the residuals for the full model confirms that this is a reasonable approach. Ultimately, we will compare the model selection probability estimates that are obtained by utilizing the Akaike weights, the unadjusted bootstrap model frequencies and the bias-adjusted bootstrap model frequencies. The results are displayed in Table 4.
In analyzing the results of this example, several important points emerge. First, it is worth emphasizing that all estimating approaches consistently point towards the model incorporating both the t r e a t m e n t and s e x variables as the most likely. This finding is particularly reassuring, as each approach hinges on model selections through AIC.
However, a notable discrepancy arises in the degree to which this favored model is endorsed as reflected by the disparities between the Akaike weights and the BMFs. The Akaike weights suggest a model selection probability of 46.4 % , whereas both the adjusted and unadjusted BMFs indicate a higher probability, hovering around 66 % each. This distinction carries practical implications, especially in the context of estimating treatment effects and standard errors within the model averaging framework. A higher probability assigned to a single model implies a concentration of estimates from that model, leading to reduced variability due to model selection.
Further scrutiny reveals distinctions between the unadjusted and bias-adjusted BMFs in their estimation of probabilities for the second most popular model. Specifically, for the model containing treatment, sex, and the interaction, the unadjusted BMF yields an estimated probability of 12.3% (in proximity to the Akaike weight for the same model). In contrast, the adjusted version provides a markedly lower estimate of 7.5%. However, for the model involving only treatment, the adjusted approach yields a probability estimate of 16.2%.
To better appreciate the practical effects of these differences in model probability estimation, consider the results in Table 5 and Table 6, which show that the smoothed CIs for the adjusted BMFs are narrower than those for the unadjusted case. More importantly, the smoothed CI for treatment is noticeably narrower in the adjusted setting. This occurs because, as noted previously, the second most popular model in the adjusted case is the model that contains only the treatment variable. In other words, 16.2% of the contributions to the smoothed estimate of treatment come from a model that allocates all the data to estimating the effect of treatment. On the other hand, in the unadjusted case, 12.3% of the contributions to the smoothed estimate of treatment come from a model that allocates the data to estimating treatment, sex, and the interaction term.
From an information-theoretic point of view, this dissimilarity can be explained by considering the optimism inherent in fitted log-likelihoods. As established in Section 3.2, the fitted log-likelihood term $-2\ell(\hat\theta^* \mid y^*)$ serves as an overly optimistic measure of goodness-of-fit. Thus, to circumvent the unrealistic selection of complex models, the addition of a penalty term of size $2k$ (as employed in AIC) proves insufficient. Instead, an extra $k$ must be incorporated to strike a proper balance between the goodness-of-fit term and the penalty term. Since the unadjusted BMF lacks this extra penalty, it tends to favor larger models relative to the adjusted BMF.

5. Conclusions

The current practice of statistical modeling recognizes that no single model can capture all of the features of the true data generating model, and that the uncertainty inherent in selecting a model from a candidate collection should ideally be accommodated in the development of inferential procedures. Statisticians have therefore proposed estimating the model parameters of interest by incorporating the contributions of multiple possible candidate models. However, each model’s contribution must be proportional to its probability of being selected.
In this paper, we showed that the AIC-based bootstrap model frequency (BMF) serves as a reasonable approximation to the corresponding AIC-based model selection probability. However, we established that in the process of computing these BMFs, we must add an extra penalty to the bootstrap AIC values of size equal to the number of parameters in the candidate model. This penalty ensures that the bootstrap AIC will remain an asymptotically unbiased estimator of the expected value of the Kullback–Leibler discrepancy. Finally, we showed that the BMFs serve as better approximations to model selection probabilities than the commonly used Akaike weights. We exhibited the superior performance of BMFs through a simulation study, and illustrated their use relative to Akaike weights in a real data application.

Author Contributions

Conceptualization, A.D. and J.C.; Formal analysis, A.D. and J.C.; Methodology, A.D. and J.C.; Writing—review and editing, A.D. and J.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The R code used in generating the data for the simulation study is available on request from the corresponding author. The data for the application are not publicly available since the data set is confidential.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Breiman, L. The Little Bootstrap and Other Methods for Dimensionality Selection in Regression: X-fixed Prediction Error. J. Am. Stat. Assoc. 1992, 87, 738–754.
2. Montgomery, J.M.; Nyhan, B. Bayesian Model Averaging: Theoretical Developments and Practical Applications. Polit. Anal. 2010, 18, 245–270.
3. Heinze, G.; Wallisch, C.; Dunkler, D. Variable Selection—A Review and Recommendations for the Practicing Statistician. Biom. J. 2018, 60, 431–449.
4. Sauerbrei, W.; Perperoglou, A.; Schmid, M.; Abrahamowicz, M.; Becher, H.; Binder, H.; Dunkler, D.; Harrell, F.; Royston, P.; Heinze, G. State of the Art in Selection of Variables and Functional Forms in Multivariable Analysis—Outstanding Issues. Diagn. Progn. Res. 2020, 4, 1–18.
5. Derksen, S.; Keselman, H.J. Backward, Forward and Stepwise Automated Subset Selection Algorithms: Frequency of Obtaining Authentic and Noise Variables. Br. J. Math. Stat. Psychol. 1992, 45, 265–282.
6. Burnham, K.P.; Anderson, D.R. Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach; Springer: Berlin/Heidelberg, Germany, 2002.
7. Neath, A.A.; Cavanaugh, J.E. The Bayesian Information Criterion: Background, Derivation, and Applications. WIREs Comput. Stat. 2012, 4, 199–203.
8. Efron, B.; Tibshirani, R.J. An Introduction to the Bootstrap; Chapman & Hall: London, UK, 1993.
9. Dajles, A.; Cavanaugh, J.E. Probabilistic Pairwise Model Comparisons Based on Bootstrap Estimators of the Kullback–Leibler Discrepancy. Entropy 2022, 24, 1483.
10. Ishiguro, M.; Sakamoto, Y.; Kitagawa, G. Bootstrapping Log Likelihood and EIC, an Extension of AIC. Ann. Inst. Stat. Math. 1997, 49, 411–434.
11. Efron, B. Estimation and Accuracy After Model Selection. J. Am. Stat. Assoc. 2014, 109, 991–1007.
12. Dinarvand, P.; Davaro, E.P.; Doan, J.V.; Ising, M.E.; Evans, N.R.; Phillips, N.J.; Lai, J.; Guzman, M.A. Familial Adenomatous Polyposis Syndrome: An Update and Review of Extraintestinal Manifestations. Arch. Pathol. Lab. Med. 2019, 143, 1382–1398.
13. Chintalacheruvu, L.; Shaw, T.; Buddam, A.; Diab, O.; Kassim, T.; Mukherjee, S.; Lynch, H. Major Hereditary Gastrointestinal Cancer Syndromes: A Narrative Review. J. Gastrointestin. Liver Dis. 2017, 26, 157–163.
14. Chittleborough, T.J.; Warrier, S.K.; Heriot, A.G.; Kalady, M.; Church, J. Dispelling Misconceptions in the Management of Familial Adenomatous Polyposis. ANZ J. Surg. 2017, 87, 441–445.
15. Giardiello, F.M.; Hamilton, S.R.; Krush, A.J.; Piantadosi, S.; Hylind, L.M.; Celano, P.; Booker, S.V.; Robinson, C.R.; Offerhaus, G.J. Treatment of Colonic and Rectal Adenomas with Sulindac in Familial Adenomatous Polyposis. N. Engl. J. Med. 1993, 328, 1313–1316.
16. Higgins, P. medicaldata: Data Package for Medical Datasets, R Package Version 0.2.0. 2021. Available online: https://CRAN.R-project.org/package=medicaldata (accessed on 9 September 2023).
Figure 1. AIC-based bootstrap model frequencies vs. Akaike weights for a null and an alternative model denoted by M0 and M1, respectively. The simulation generates 500 data sets from the null model, which is assumed to be true, and each data set is of size N = 20. The BMFs are computed with 1000 bootstrap samples.
Table 1. Average of the bootstrap model frequencies and the Akaike weights for M0 and M1 over all 500 simulated data sets.

Model | AIC-Based BMF | Akaike Weight
M0    | 0.7566        | 0.6166
M1    | 0.2434        | 0.3833
Table 2. Results of a simulation with properly specified errors. The simulation consists of 100 data sets from the data generating model, with varying sample sizes. The true model selection probabilities are estimated with the 100 simulated data sets. The BMFs are calculated with 100 bootstraps. The BMFs and Akaike weights are averaged over all 100 data sets.

Sample Size: N = 1000
x1 x2 x3 | BMF  | Weights | Probabilities
1  1  1  | 0.26 | 0.41    | 0.23
1  1  0  | 0.74 | 0.59    | 0.77
1  0  1  | 0.00 | 0.00    | 0.00
1  0  0  | 0.00 | 0.00    | 0.00
0  1  1  | 0.00 | 0.00    | 0.00
0  1  0  | 0.00 | 0.00    | 0.00
0  0  1  | 0.00 | 0.00    | 0.00
0  0  0  | 0.00 | 0.00    | 0.00

Sample Size: N = 100
x1 x2 x3 | BMF  | Weights | Probabilities
1  1  1  | 0.25 | 0.38    | 0.15
1  1  0  | 0.75 | 0.62    | 0.85
1  0  1  | 0.00 | 0.00    | 0.00
1  0  0  | 0.00 | 0.00    | 0.00
0  1  1  | 0.00 | 0.00    | 0.00
0  1  0  | 0.00 | 0.00    | 0.00
0  0  1  | 0.00 | 0.00    | 0.00
0  0  0  | 0.00 | 0.00    | 0.00

Sample Size: N = 20
x1 x2 x3 | BMF  | Weights | Probabilities
1  1  1  | 0.28 | 0.40    | 0.21
1  1  0  | 0.69 | 0.60    | 0.79
1  0  0  | 0.02 | 0.00    | 0.00
1  0  1  | 0.01 | 0.00    | 0.00
0  1  1  | 0.00 | 0.00    | 0.00
0  1  0  | 0.00 | 0.00    | 0.00
0  0  1  | 0.00 | 0.00    | 0.00
0  0  0  | 0.00 | 0.00    | 0.00
Table 3. Results of a simulation with error misspecification. The simulation consists of 100 data sets from the data generating model, with varying sample sizes. The true model selection probabilities are estimated with the 100 simulated data sets. The BMFs are calculated with 100 bootstraps. The BMFs and Akaike weights are averaged over all 100 data sets.

Sample Size: N = 1000
x1 x2 x3 | BMF  | Weights | Probabilities
1  1  1  | 0.24 | 0.40    | 0.22
1  1  0  | 0.76 | 0.60    | 0.78
1  0  1  | 0.00 | 0.00    | 0.00
1  0  0  | 0.00 | 0.00    | 0.00
0  1  1  | 0.00 | 0.00    | 0.00
0  1  0  | 0.00 | 0.00    | 0.00
0  0  1  | 0.00 | 0.00    | 0.00
0  0  0  | 0.00 | 0.00    | 0.00

Sample Size: N = 100
x1 x2 x3 | BMF  | Weights | Probabilities
1  1  1  | 0.22 | 0.38    | 0.18
1  1  0  | 0.77 | 0.62    | 0.82
1  0  1  | 0.00 | 0.00    | 0.00
1  0  0  | 0.00 | 0.00    | 0.00
0  1  1  | 0.00 | 0.00    | 0.00
0  1  0  | 0.00 | 0.00    | 0.00
0  0  1  | 0.00 | 0.00    | 0.00
0  0  0  | 0.00 | 0.00    | 0.00

Sample Size: N = 20
x1 x2 x3 | BMF  | Weights | Probabilities
1  1  1  | 0.17 | 0.40    | 0.19
1  1  0  | 0.46 | 0.60    | 0.81
1  0  1  | 0.05 | 0.00    | 0.00
1  0  0  | 0.14 | 0.00    | 0.00
0  1  1  | 0.03 | 0.00    | 0.00
0  1  0  | 0.05 | 0.00    | 0.00
0  0  1  | 0.04 | 0.00    | 0.00
0  0  0  | 0.04 | 0.00    | 0.00
Table 4. Akaike weights, unadjusted, and adjusted bootstrap model frequencies (BMFs) for models fit to the FAP data set. The BMFs are calculated using 1000 bootstrap samples. Zeros and ones are used to denote the absence or presence of the variable in a selected model. The intercept is included in each model.

Treatment Sex Age Interaction | Weights | Unadjusted | Adjusted
1         1   0   0           | 0.464   | 0.656      | 0.668
1         1   0   1           | 0.183   | 0.123      | 0.075
1         1   1   0           | 0.171   | 0.070      | 0.050
1         0   0   0           | 0.073   | 0.073      | 0.162
1         1   1   1           | 0.068   | 0.051      | 0.021
1         0   1   0           | 0.039   | 0.025      | 0.021
0         1   0   0           | 0.001   | 0.000      | 0.000
0         1   1   0           | 0.000   | 0.000      | 0.001
0         0   1   0           | 0.000   | 0.002      | 0.002
0         0   0   0           | 0.000   | 0.000      | 0.000
Table 5. Smoothed estimates and smoothed 95% confidence intervals for regression coefficients from the analysis of the FAP data set. The results include the bias-adjusted and unadjusted values.

Variable    | Adjusted: Smoothed Estimate | Adjusted: Smoothed CI | Unadjusted: Smoothed Estimate | Unadjusted: Smoothed CI
treatment   | −0.4609 | (−0.6322, −0.2896) | −0.4473 | (−0.6567, −0.2379)
sex         |  0.2498 | (0.0122, 0.4874)   |  0.2773 | (0.0101, 0.5444)
age         |  0.0009 | (−0.0050, 0.0068)  |  0.0012 | (−0.0077, 0.0102)
interaction | −0.0205 | (−0.1846, 0.1436)  | −0.0348 | (−0.2824, 0.2127)
Table 6. Lengths of smoothed 95% confidence intervals from the analysis of the FAP data set.

Variable    | Adjusted | Unadjusted
treatment   | 0.3427   | 0.4188
sex         | 0.4752   | 0.5343
age         | 0.0118   | 0.0180
interaction | 0.3282   | 0.4951