Article

PAC-Bayes Bounds on Variational Tempered Posteriors for Markov Models

1 Department of Statistics, Purdue University, West Lafayette, IN 47907, USA
2 School of Industrial Engineering, Purdue University, West Lafayette, IN 47907, USA
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Entropy 2021, 23(3), 313; https://doi.org/10.3390/e23030313
Submission received: 8 February 2021 / Revised: 4 March 2021 / Accepted: 4 March 2021 / Published: 6 March 2021
(This article belongs to the Special Issue Approximate Bayesian Inference)

Abstract:
Datasets displaying temporal dependencies abound in science and engineering applications, with Markov models representing a simplified and popular view of the temporal dependence structure. In this paper, we consider Bayesian settings that place prior distributions over the parameters of the transition kernel of a Markov model, and seek to characterize the resulting, typically intractable, posterior distributions. We present a Probably Approximately Correct (PAC)-Bayesian analysis of variational Bayes (VB) approximations to tempered Bayesian posterior distributions, bounding the model risk of the VB approximations. Tempered posteriors are known to be robust to model misspecification, and their variational approximations do not suffer the usual problems of overconfident approximations. Our results tie the risk bounds to the mixing and ergodic properties of the Markov data generating model. We illustrate the PAC-Bayes bounds through a number of example Markov models, and also consider the situation where the Markov model is misspecified.

1. Introduction

This paper presents probably approximately correct (PAC)-Bayesian bounds on variational Bayesian (VB) approximations of fractional or tempered posterior distributions for Markov data generation models. Exact computation of either standard or tempered posterior distributions is a hard problem that has, broadly speaking, spawned two classes of computational methods. The first, Markov chain Monte Carlo (MCMC), constructs ergodic Markov chains to approximately sample from the posterior distribution. MCMC is known to suffer from high variance and complex diagnostics, leading to the development of variational Bayesian (VB) [1] methods as an alternative in recent years. VB methods pose posterior computation as a variational optimization problem, approximating the posterior distribution of interest by the 'closest' element of an appropriately defined class of 'simple' probability measures. Typically, the measure of closeness used by VB methods is the Kullback–Leibler (KL) divergence. Excellent introductions to this so-called KL-VB method can be found in [2,3,4]. More recently, there has also been interest in alternative divergence measures, particularly the α-Rényi divergence [5,6,7], though in this paper, we focus on the KL-VB setting.
Theoretical properties of VB approximations, and in particular asymptotic frequentist consistency, have been studied extensively under the assumption of an independent and identically distributed (i.i.d.) data generation model [4,8,9]. On the other hand, the common setting where data sets display temporal dependencies presents unique challenges. In this paper, we focus on homogeneous Markov chains with parameterized transition kernels, representing a parsimonious class of data generation models with a wide range of applications. We work in the Bayesian framework, focusing on the posterior distribution over the unknown parameters of the transition kernel. Our theory develops PAC bounds that link the ergodic and mixing properties of the data generating Markov chain to the Bayes risk associated with approximate posterior distributions.
Frequentist consistency of Bayesian methods, in the sense of concentration of the posterior distribution around neighborhoods of the 'true' data generating distribution, has been established in significant generality, in both the i.i.d. [10,11,12] and the non-i.i.d. data generation setting [13,14]. More recent work [14,15,16] has studied fractional or tempered posteriors, a class of generalized Bayesian posteriors obtained by combining the likelihood function raised to a fractional power with an appropriate prior distribution using Bayes' theorem. Tempered posteriors are known to be robust against model misspecification: in the Markov setting we consider, the associated stationary distribution as well as mixing properties are sensitive to model parameterization. Further, tempered posteriors are known to be much simpler to analyze theoretically [14,16]. Therefore, following [14,15,16], we focus on tempered posterior distributions on the transition kernel parameters, and study the rate of concentration of variational approximations to the tempered posterior. Equivalently, as shown in [16] and discussed in Section 1.1, our results also apply to so-called α-variational approximations to standard posterior distributions over kernel parameters. The latter are modifications of the standard KL-VB algorithm that address the well-known problem of overconfident posterior approximations.
While there have been a number of recent papers studying the consistency of approximate variational posteriors [5,8,15] in the large sample limit, rates of convergence have received less attention. Exceptions include [9,15,17], all of which assume an i.i.d. data generation model. [15] establishes PAC-Bayes bounds on the convergence of a variational tempered posterior with fractional powers in the range (0, 1), while [9] considers the standard variational posterior case (where the fractional power equals 1). [17], on the other hand, establishes PAC-Bayes bounds for risk-sensitive Bayesian decision-making problems in the standard variational posterior setting. The setting in [15] allows for model misspecification, and the analysis is generally more straightforward than that in [9,17]. Our work extends [15] to the setting of a discrete-time Markov data generation model.
Our first results, Theorem 1 and Corollary 1 of Section 2, establish PAC-Bayes bounds for sequences with arbitrary temporal dependence, generalizing [15] (Theorem 2.4) to the non-i.i.d. data setting in a straightforward manner. Note that Theorem 1 also recovers [16] (Theorem 3.3), which is established under different 'existence of tests' conditions. Our objective in this paper is to explicate how the ergodic and mixing properties of the Markov data generating process influence the PAC-Bayes bound. The sufficient conditions of our theorem, which bound the mean and variance of the log-likelihood ratio of the data, allow this understanding to be developed without the technicalities of proving existence-of-tests conditions intruding on the insights.
In Section 3, we study the setting where the data generating model is a stationary α-mixing Markov chain. Stationarity means that the Markov chain is initialized with the invariant distribution corresponding to the parameterized transition kernel, implying that all subsequent states also follow this marginal distribution. The α-mixing condition ensures that the variance of the likelihood ratio of the Markov data does not grow faster than linearly in the sample size. Our main results in this setting are applicable when the state space of the Markov chain is either continuous or discrete. The primary requirement on the class of data generating Markov models is for the log-likelihood ratio of the parameterized transition kernel and invariant distribution to satisfy a Lipschitz property. This condition implies a decoupling between the model parameters and the random samples, affording a straightforward verification of the mean and variance bounds. We highlight this main result by demonstrating that it is satisfied by a finite state Markov chain, a birth-death Markov chain on the positive integers, and a one-dimensional Gaussian linear model.
In practice, the assumption that the data generating model is stationary is unlikely to be satisfied. Typically, the initial distribution is arbitrary, with the state distribution of the Markov sequence converging weakly to the stationary distribution. In this setting, we must further assume that the class of data generating Markov chains is geometrically ergodic. We show that this implies the boundedness of the mean and variance of the log-likelihood ratio of the data generating Markov chain. Alternatively, in Theorem 4, we directly impose a drift condition on random variables that bound the log-likelihood ratio. Again, in this more general nonstationary setting, we illustrate the main results by showing that the PAC-Bayes bound is satisfied by a finite state Markov chain, a birth-death Markov chain on the positive integers, and a one-dimensional Gaussian linear model.
In preparation for our main technical results, which begin in Section 2, we first collect the relevant notation and definitions in the next section.

1.1. Notations and Definitions

We broadly adopt the notation in [15]. Let the sequence of random variables $X^n = (X_0, \ldots, X_n) \in \mathbb{R}^{m \times (n+1)}$ represent a dataset of $n+1$ observations drawn from a joint distribution $P_{\theta_0}^{(n)}$, where $\theta_0 \in \Theta \subseteq \mathbb{R}^d$ is the 'true' parameter underlying the data generation process. We assume the state space $S \subseteq \mathbb{R}^m$ of the random variables $X_i$ is either discrete or continuous, and write $\{x_0, \ldots, x_n\}$ for a realization of the dataset. We also adopt the convention that $0 \log(0/0) = 0$.
For each $\theta \in \Theta$, we write $p_\theta^{(n)}$ for the probability density of $P_\theta^{(n)}$ with respect to some measure $Q^{(n)}$, i.e., $p_\theta^{(n)} := \frac{dP_\theta^{(n)}}{dQ^{(n)}}$, where $Q^{(n)}$ is either the Lebesgue measure or the counting measure. Unless stated otherwise, all probabilities, expectations and variances, which we denote $P$, $E[X]$ and $\mathrm{Var}[X]$, are with respect to the true distribution $P_{\theta_0}^{(n)}$.
Let $\pi(\theta)$ be a prior distribution with support $\Theta$. The $\alpha^{te}$-fractional posterior is defined as
$$\pi_{n,\alpha^{te}|X^n}(d\theta) := \frac{e^{-\alpha^{te}\, r_n(\theta,\theta_0)(X^n)}\, \pi(d\theta)}{\int e^{-\alpha^{te}\, r_n(\theta,\theta_0)(X^n)}\, \pi(d\theta)},$$
where, for $\theta_0, \theta \in \Theta$, $r_n(\theta,\theta_0)(\cdot) := \log \frac{p_{\theta_0}^{(n)}(\cdot)}{p_\theta^{(n)}(\cdot)}$ is the log-likelihood ratio of the corresponding density functions, and $\alpha^{te} \in (0,\infty)$ is a tempering coefficient. Setting $\alpha^{te} = 1$ recovers the standard Bayesian posterior. Note that we use superscripts to distinguish different quantities that are all referred to simply as $\alpha$ in the literature.
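To make the definition concrete, the following sketch (our own illustration, not from the paper) computes the tempered posterior on a discrete grid of $\theta$ values for a simple two-state Markov chain that switches state with probability $\theta$; the model, the flat grid prior, and all variable names are illustrative assumptions.

```python
import numpy as np

def tempered_posterior(x, theta_grid, log_prior, alpha_te):
    """Tempered posterior on a grid of theta values for a two-state chain
    that switches state with probability theta (an illustrative model)."""
    switches = np.sum(x[1:] != x[:-1])
    stays = len(x) - 1 - switches
    log_lik = switches * np.log(theta_grid) + stays * np.log(1.0 - theta_grid)
    log_post = alpha_te * log_lik + log_prior     # likelihood raised to alpha_te
    w = np.exp(log_post - log_post.max())         # subtract max for stability
    return w / w.sum()

rng = np.random.default_rng(0)
theta0, n = 0.3, 500
x = np.zeros(n + 1, dtype=int)
for t in range(1, n + 1):
    x[t] = x[t - 1] ^ int(rng.random() < theta0)  # switch w.p. theta0

grid = np.linspace(0.01, 0.99, 99)
log_prior = np.zeros_like(grid)                   # flat prior on the grid
post_full = tempered_posterior(x, grid, log_prior, alpha_te=1.0)
post_half = tempered_posterior(x, grid, log_prior, alpha_te=0.5)
```

Tempering with $\alpha^{te} < 1$ flattens the posterior: both approximations peak at the same grid point, but the tempered one has larger spread, which is the robustness/overconfidence trade-off discussed above.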
The Kullback–Leibler (KL) divergence between distributions $P, Q$ is defined as
$$K(P,Q) := \int_{\mathcal{X}} \log\frac{p(x)}{q(x)}\, p(x)\, dx,$$
where $p, q$ are the densities corresponding to $P, Q$ on some sample space $\mathcal{X}$. In particular, the KL divergence between the distributions parameterized by $\theta_0$ and $\theta$ is
$$K(P_{\theta_0}^{(n)}, P_\theta^{(n)}) := \int \log\frac{p_{\theta_0}^{(n)}(x_0,\ldots,x_n)}{p_\theta^{(n)}(x_0,\ldots,x_n)}\, p_{\theta_0}^{(n)}(x_0,\ldots,x_n)\, dx_0\cdots dx_n = \int r_n(\theta,\theta_0)(x_0,\ldots,x_n)\, p_{\theta_0}^{(n)}(x_0,\ldots,x_n)\, dx_0\cdots dx_n.$$
The $\alpha^{re}$-Rényi divergence $D_{\alpha^{re}}(P_\theta^{(n)}, P_{\theta_0}^{(n)})$ is defined as
$$D_{\alpha^{re}}(P_\theta^{(n)}, P_{\theta_0}^{(n)}) := \frac{1}{\alpha^{re}-1} \log \int \exp\left(-\alpha^{re}\, r_n(\theta,\theta_0)(x_0,\ldots,x_n)\right) p_{\theta_0}^{(n)}(x_0,\ldots,x_n)\, dx_0\cdots dx_n,$$
where $\alpha^{re} \in (0,1)$. As $\alpha^{re} \to 1$, the $\alpha^{re}$-Rényi divergence recovers the KL divergence.
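As a numerical illustration of the definition and of the $\alpha^{re} \to 1$ limit (our own sketch, not from the paper), the following evaluates the Rényi divergence between two unit-variance Gaussians by quadrature; in this case the closed form is $\alpha^{re}(\mu_1-\mu_2)^2/2$, and the KL divergence is $(\mu_1-\mu_2)^2/2$.

```python
import numpy as np

def renyi_div(mu1, mu2, alpha, lo=-20.0, hi=20.0, num=200001):
    """D_alpha(P, Q) = log( integral of p^alpha * q^(1-alpha) ) / (alpha - 1)
    for P = N(mu1, 1), Q = N(mu2, 1), evaluated by simple quadrature."""
    x = np.linspace(lo, hi, num)
    h = x[1] - x[0]
    log_p = -0.5 * (x - mu1) ** 2 - 0.5 * np.log(2 * np.pi)
    log_q = -0.5 * (x - mu2) ** 2 - 0.5 * np.log(2 * np.pi)
    integral = np.sum(np.exp(alpha * log_p + (1 - alpha) * log_q)) * h
    return np.log(integral) / (alpha - 1)

kl = (1.5 - 0.0) ** 2 / 2            # KL between N(1.5, 1) and N(0, 1)
d_half = renyi_div(1.5, 0.0, 0.5)    # closed form: 0.5 * kl
d_near = renyi_div(1.5, 0.0, 0.999)  # approaches kl as alpha -> 1
```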
Let $\mathcal{F}$ be some class of distributions with support in $\mathbb{R}^d$ such that any distribution $P$ in $\mathcal{F}$ is absolutely continuous with respect to the tempered posterior: $P \ll \pi_{n,\alpha^{te}|X^n}$.
Many choices of $\mathcal{F}$ exist; for instance (see also [15]), $\mathcal{F}$ can be the set of Gaussian measures, denoted $\mathcal{F}_\Phi^{id}$:
$$\mathcal{F}_\Phi^{id} = \{\Phi(d\theta; \mu, \Sigma) : \mu \in \mathbb{R}^d,\ \Sigma \in \mathbb{R}^{d \times d}\ \text{P.D.}\},$$
where P.D. references the class of positive definite matrices. Alternately, $\mathcal{F}$ can be the family of mean-field or factored distributions in which the components $\theta_i$ of $\theta$ are independent of each other. Let $\tilde\pi_{n,\alpha^{te}|X^n}$ be the variational approximation to the tempered posterior, defined as
$$\tilde\pi_{n,\alpha^{te}|X^n} := \operatorname*{arg\,min}_{\rho \in \mathcal{F}} K(\rho, \pi_{n,\alpha^{te}|X^n}).$$
It is easy to see that finding $\tilde\pi_{n,\alpha^{te}|X^n}$ in Equation (5) is equivalent to the following optimization problem:
$$\tilde\pi_{n,\alpha^{te}|X^n} := \operatorname*{arg\,max}_{\rho \in \mathcal{F}} \left\{ -\int r_n(\theta,\theta_0)(x_0,\ldots,x_n)\, \rho(d\theta) - \frac{1}{\alpha^{te}}\, K(\rho,\pi) \right\}.$$
Setting $\alpha^{te} = 1$ again recovers the usual variational solution that seeks to approximate the posterior distribution with the closest element of $\mathcal{F}$ (the right-hand side above is, up to an additive constant, the evidence lower bound (ELBO)). Other settings of $\alpha^{te}$ constitute $\alpha^{te}$-variational inference [16], which seeks to regularize the 'overconfident' approximate posteriors that standard variational methods tend to produce.
Our results in this paper focus on parameterized Markov chains. We term a Markov chain 'parameterized' if the transition kernel $p_\theta(\cdot|\cdot)$ is parameterized by some $\theta \in \Theta \subseteq \mathbb{R}^d$. Let $q^{(0)}(\cdot)$ be the initial density (defined with respect to the Lebesgue measure over $\mathbb{R}^m$) or initial probability mass function. Then, the joint density is $p_\theta^{(n)}(x_0,\ldots,x_n) = q^{(0)}(x_0) \prod_{i=0}^{n-1} p_\theta(x_{i+1}|x_i)$; recall that this joint density corresponds to the walk probability of a time-homogeneous Markov chain. We assume that, corresponding to each transition kernel $p_\theta$, $\theta \in \Theta$, there exists an invariant distribution $q_\theta$ that satisfies
$$q_\theta(x) = \int p_\theta(x|y)\, q_\theta(dy) \quad \forall x \in \mathbb{R}^m,\ \theta \in \Theta.$$
We also use $q_\theta$ to designate the density of the invariant measure (as before, with respect to the Lebesgue or counting measure for continuous or discrete state spaces, respectively). A Markov chain is stationary if its initial distribution is the invariant probability distribution, that is, $X_0 \sim q_\theta$.
Our results in the ensuing sections are established under strong mixing conditions [18] on the Markov chain. Specifically, recall the definition of the $\alpha$-mixing coefficients of a Markov chain $\{X_n\}$:
Definition 1
($\alpha$-mixing coefficient). Let $\mathcal{M}_i^j$ denote the σ-field generated by the Markov chain $\{X_k : i \le k \le j\}$ parameterized by $\theta \in \Theta$. Then, the α-mixing coefficient is defined as
$$\alpha_k = \sup_{t > 0}\ \sup_{(A,B) \in \mathcal{M}_0^t \times \mathcal{M}_{t+k}^\infty} \left| P_\theta(A \cap B) - P_\theta(A)\, P_\theta(B) \right|.$$
Informally speaking, the $\alpha$-mixing coefficients $\{\alpha_k\}$ measure the dependence between any two events $A$ (in the 'history' σ-algebra) and $B$ (in the 'future' σ-algebra) separated by a time lag $k$. We note that we do not use superscripts to identify these $\alpha$ parameters, since they are the only ones carrying subscripts and can be identified as such.
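As an illustration (ours, not the paper's), the coefficients can be computed exactly for a stationary two-state chain if we restrict $A$ and $B$ to events determined by the endpoint states $X_0$ and $X_k$; this restricted quantity lower-bounds the full σ-field definition, and for this symmetric two-state chain it decays geometrically, like $|1-2p|^k/4$.

```python
import numpy as np
from itertools import chain, combinations

def alpha_coeff(P, pi, k):
    """sup over events A (on X_0) and B (on X_k) of |P(A and B) - P(A)P(B)|
    for a stationary finite-state chain; a lower bound on the full alpha_k."""
    Pk = np.linalg.matrix_power(P, k)
    joint = pi[:, None] * Pk                 # joint[i, j] = P(X_0 = i, X_k = j)
    states = range(len(pi))
    subsets = list(chain.from_iterable(combinations(states, r)
                                       for r in range(len(pi) + 1)))
    best = 0.0
    for A in subsets:
        for B in subsets:
            pA, pB = pi[list(A)].sum(), pi[list(B)].sum()
            pAB = joint[np.ix_(list(A), list(B))].sum()
            best = max(best, abs(pAB - pA * pB))
    return best

p = 0.3                                      # switching probability
P = np.array([[1 - p, p], [p, 1 - p]])
pi = np.array([0.5, 0.5])                    # stationary distribution
coeffs = [alpha_coeff(P, pi, k) for k in range(1, 6)]
```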

2. A Concentration Bound for the $\alpha^{re}$-Rényi Divergence

The object of analysis in what follows is the probability measure $\tilde\pi_{n,\alpha^{te}|X^n}(\theta)$, the variational approximation to the tempered posterior. Our main result establishes a bound on the Bayes risk of this distribution; in particular, given a sequence of loss functions $\ell_n(\theta,\theta_0)$, we bound $\int \ell_n(\theta,\theta_0)\, \tilde\pi_{n,\alpha^{te}|X^n}(d\theta)$. Following recent work in both the i.i.d. and dependent sequence settings [14,15,16], we use $\ell_n(\theta,\theta_0) = D_{\alpha^{re}}(P_\theta^{(n)}, P_{\theta_0}^{(n)})$, the $\alpha^{re}$-Rényi divergence between $P_\theta^{(n)}$ and $P_{\theta_0}^{(n)}$, as our loss function. Unlike loss functions such as the Euclidean distance, the Rényi divergence compares $\theta$ and $\theta_0$ through their effect on observed sequences, so that issues like parameter identifiability do not arise. Our first result generalizes [15] (Theorem 2.1) to a general non-i.i.d. data setting.
Proposition 1.
Let $\mathcal{F}$ be a subset of all probability distributions on $\Theta$. For any $\alpha^{re} \in (0,1)$, $\epsilon \in (0,1)$ and $n \ge 1$, the following uniform upper bound on the expected $\alpha^{re}$-Rényi divergence holds:
$$P\left( \sup_{\rho \in \mathcal{F}} \left\{ \int D_{\alpha^{re}}(P_\theta^{(n)}, P_{\theta_0}^{(n)})\, \rho(d\theta) - \frac{\alpha^{re}}{1-\alpha^{re}} \int r_n(\theta,\theta_0)\, \rho(d\theta) - \frac{K(\rho,\pi) + \log(1/\epsilon)}{1-\alpha^{re}} \right\} \le 0 \right) \ge 1 - \epsilon.$$
The proof of Proposition 1 follows easily from [15], and we include it in Appendix B.1.1 for completeness. Mirroring the comments in [15], when $\rho = \tilde\pi_{n,\alpha^{te}}$ this result is precisely [14] (Theorem 3.4). We also note from [14] that for $\alpha^{re}, \beta \in (0,1]$ with $\alpha^{re} \le \beta$, the Rényi divergences are all equivalent through the inequality $\frac{\alpha^{re}(1-\beta)}{\beta(1-\alpha^{re})} D_\beta \le D_{\alpha^{re}} \le D_\beta$. Hence, for the subsequent results, we simplify by assuming that $\alpha^{te} = \alpha^{re}$. This probabilistic bound implies the following PAC-Bayesian concentration bound on the model risk computed with respect to the fractional variational posterior:
Theorem 1.
Let $\mathcal{F}$ be a subset of all probability distributions parameterized by $\Theta$, and assume there exist $\epsilon_n > 0$ and $\rho_n \in \mathcal{F}$ such that
i. $\int K(P_{\theta_0}^{(n)}, P_\theta^{(n)})\, \rho_n(d\theta) = \int E[r_n(\theta,\theta_0)]\, \rho_n(d\theta) \le n\epsilon_n$,
ii. $\int \mathrm{Var}\left(r_n(\theta,\theta_0)\right) \rho_n(d\theta) \le n\epsilon_n$, and
iii. $K(\rho_n, \pi) \le n\epsilon_n$.
Then, for any $\alpha^{re} \in (0,1)$ and $(\epsilon, \eta) \in (0,1) \times (0,1)$,
$$P\left( \int D_{\alpha^{re}}(P_\theta^{(n)}, P_{\theta_0}^{(n)})\, \tilde\pi_{n,\alpha^{re}}(d\theta|X^{(n)}) \le \frac{(\alpha^{re}+1)\, n\epsilon_n + \alpha^{re} \sqrt{n\epsilon_n/\eta} - \log(\epsilon)}{1-\alpha^{re}} \right) \ge 1 - \epsilon - \eta.$$
The proof of Theorem 1 is a generalization of [15] (Theorem 2.4) to the non-i.i.d. setting, and a special case of [16] (Theorem 3.1), where the problem setting includes latent variables. We include a proof for completeness. As noted in [15], the sufficient conditions follow closely from [13] and we will show that they hold for a variety of Markov chain models.
A direct corollary of Theorem 1 follows by setting $\eta = 1/(n\epsilon_n)$ and $\epsilon = e^{-n\epsilon_n}$, and using the fact that $e^{-n\epsilon_n} \le 1/(n\epsilon_n)$. Note that Equation (9) is vacuous if $\eta + \epsilon > 1$; therefore, without loss of generality, we restrict ourselves to the condition $2/(n\epsilon_n) < 1$.
Corollary 1.
Assume there exist $\epsilon_n > 0$ and $\rho_n \in \mathcal{F}$ such that the following conditions hold:
i. $\int K(P_{\theta_0}^{(n)}, P_\theta^{(n)})\, \rho_n(d\theta) = \int E[r_n(\theta,\theta_0)]\, \rho_n(d\theta) \le n\epsilon_n$,
ii. $\int \mathrm{Var}\left(r_n(\theta,\theta_0)\right) \rho_n(d\theta) \le n\epsilon_n$, and
iii. $K(\rho_n, \pi) \le n\epsilon_n$.
Then, for any $\alpha^{re} \in (0,1)$,
$$P\left( \frac{1}{n} \int D_{\alpha^{re}}(P_\theta^{(n)}, P_{\theta_0}^{(n)})\, \tilde\pi_{n,\alpha^{re}}(d\theta|X^{(n)}) \le \frac{2(\alpha^{re}+1)}{1-\alpha^{re}}\, \epsilon_n \right) \ge 1 - \frac{2}{n\epsilon_n}.$$
We observe that Theorem 1 and Corollary 1 place no assumptions on the nature of the statistical dependence between data points. However, verification of the sufficient conditions is quite hard, in general. One of our key contributions is to verify that under reasonable assumptions on the smoothness of the transition kernel, the sufficient conditions of Theorem 1 and Corollary 1 are satisfied by ergodic Markov chains.
Observe that the first two conditions in Corollary 1 ensure that the distribution $\rho_n$ concentrates on parameters $\theta \in \Theta$ around the true parameter $\theta_0$, while the third condition requires that $\rho_n$ not diverge from the prior $\pi$ rapidly as a function of the sample size $n$. In general, verifying the first and third conditions is relatively straightforward. The second condition, on the other hand, is significantly more complicated in the current setting of dependent data, since the variance of $r_n(\theta,\theta_0)$ includes correlations between the observations $\{X_0,\ldots,X_n\}$. In the next section, we make assumptions on the transition kernels (and corresponding invariant densities) that 'decouple' the temporal correlations from the model parameters in the setting of strongly mixing and ergodic Markov chain models, allowing the conditions of Corollary 1 to be verified. Towards this, Propositions 2 and 3 below characterize the expectation and variance of the log-likelihood ratio $r_n(\cdot,\cdot)$ in terms of the one-step transition kernels of the Markov chain. First, consider the expectation of $r_n(\cdot,\cdot)$ in condition (i).
Proposition 2.
Fix $\theta_1, \theta_2 \in \Theta$ and consider the parameterized Markov transition kernels $p_{\theta_1}$ and $p_{\theta_2}$, and initial distributions $q_{\theta_1}^{(0)}$ and $q_{\theta_2}^{(0)}$. Let $p_{\theta_1}^{(n)}$ and $p_{\theta_2}^{(n)}$ be the corresponding joint probability densities; that is,
$$p_{\theta_j}^{(n)}(x_0,\ldots,x_n) = q_{\theta_j}^{(0)}(x_0) \prod_{i=1}^n p_{\theta_j}(x_i|x_{i-1})$$
for $j \in \{1,2\}$. Then, for any $n \ge 1$, the log-likelihood ratio $r_n(\theta_2,\theta_1)$ satisfies
$$E_{\theta_1}\left[r_n(\theta_2,\theta_1)\right] = \sum_{i=1}^n E_{\theta_1}\left[\log\frac{p_{\theta_1}(X_i|X_{i-1})}{p_{\theta_2}(X_i|X_{i-1})}\right] + E_{\theta_1}[Z_0],$$
where $Z_0 := \log\frac{q_{\theta_1}^{(0)}(X_0)}{q_{\theta_2}^{(0)}(X_0)}$. The expectation in the $i$-th summand is with respect to the joint density $p_{\theta_1}(y,x) = p_{\theta_1}(y|x)\, q_{\theta_1}^{(i-1)}(x)$, where the marginal density satisfies
$$q_{\theta_1}^{(i-1)}(x) = \int p_{\theta_1}^{(i-1)}(x_0,\ldots,x_{i-2},x)\, dx_0\cdots dx_{i-2} \ \text{for}\ i > 1, \quad \text{and} \quad q_{\theta_1}^{(0)}(x)\ \text{for}\ i = 1.$$
If the Markov chain is also stationary under $\theta_1$, then Equation (12) simplifies to
$$E_{\theta_1}\left[r_n(\theta_2,\theta_1)\right] = n\, E_{\theta_1}\left[\log\frac{p_{\theta_1}(X_1|X_0)}{p_{\theta_2}(X_1|X_0)}\right] + E_{\theta_1}[Z_0].$$
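For a symmetric two-state chain with switch probability $t$, both invariant distributions are uniform, so $E_{\theta_1}[Z_0] = 0$, and the stationary identity above can be checked exactly by enumerating all paths. The model and all names below are illustrative assumptions of ours.

```python
import numpy as np
from itertools import product

def path_logprob(path, t):
    """Log-probability of a path under the stationary two-state chain
    that switches with probability t (uniform invariant distribution)."""
    lp = np.log(0.5)
    for a, b in zip(path, path[1:]):
        lp += np.log(t if a != b else 1.0 - t)
    return lp

t1, t2, n = 0.3, 0.45, 6
# exact E_{theta1}[r_n(theta2, theta1)] by enumerating all 2^(n+1) paths
E_rn = 0.0
for path in product([0, 1], repeat=n + 1):
    lp1, lp2 = path_logprob(path, t1), path_logprob(path, t2)
    E_rn += np.exp(lp1) * (lp1 - lp2)

# one-step term: E_{theta1}[ log p_{t1}(X_1|X_0) / p_{t2}(X_1|X_0) ]
step = t1 * np.log(t1 / t2) + (1 - t1) * np.log((1 - t1) / (1 - t2))
```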
Notice that $E_{\theta_1}\left[r_n(\theta_2,\theta_1)\right]$ is precisely the KL divergence $K(P_{\theta_1}^{(n)}, P_{\theta_2}^{(n)})$. Next, the following proposition uses [19] (Lemma 1.3) to upper bound the variance of the log-likelihood ratio.
Proposition 3.
Fix $\theta_1, \theta_2 \in \Theta$ and consider parameterized Markov transition kernels $p_{\theta_1}$ and $p_{\theta_2}$, with initial distributions $q_{\theta_1}^{(0)}$ and $q_{\theta_2}^{(0)}$. Let $p_{\theta_1}^{(n)}$ and $p_{\theta_2}^{(n)}$ be the corresponding joint probability densities of the sequence $(x_0,\ldots,x_n)$, and $q_{\theta_j}^{(i)}$ the marginal density for $i \in \{1,\ldots,n\}$ and $j \in \{1,2\}$. Fix $\delta > 0$ and, for each $i \in \{1,\ldots,n\}$, define
$$C_{\theta_1,\theta_2}(i) := \int \left|\log\frac{p_{\theta_1}(x_i|x_{i-1})}{p_{\theta_2}(x_i|x_{i-1})}\right|^{2+\delta} p_{\theta_1}(x_i|x_{i-1})\, q_{\theta_1}^{(i-1)}(x_{i-1})\, dx_i\, dx_{i-1}.$$
Similarly, define $Z_0 := \log\frac{q_{\theta_1}^{(0)}(X_0)}{q_{\theta_2}^{(0)}(X_0)}$ and $D_{1,2} := E_{\theta_1}|Z_0|^{2+\delta}$. Suppose the Markov chain corresponding to $\theta_1$ is α-mixing with coefficients $\{\alpha_k\}$. Then,
$$\mathrm{Var}\left(r_n(\theta_1,\theta_2)\right) < \sum_{i,j=1}^n \left(4n + 2n^{\delta/2}\right)\left(C_{\theta_1,\theta_2}(i) + C_{\theta_1,\theta_2}(j) + C_{\theta_1,\theta_2}(i)\, C_{\theta_1,\theta_2}(j)\right) \alpha_{|i-j|-1}^{\delta/(2+\delta)} \qquad (14)$$
$$\qquad + \sum_{i=1}^n \left(4n + 2n^{\delta/2}\right)\left(C_{\theta_1,\theta_2}(i) + D_{1,2} + C_{\theta_1,\theta_2}(i)\, D_{1,2}\right) \alpha_{i-1}^{\delta/(2+\delta)} + \mathrm{Var}(Z_0). \qquad (15)$$
Note that this result holds for any parameterized Markov chain. In particular, when the Markov chain is stationary, $C_{\theta_1,\theta_2}(i) = C_{\theta_1,\theta_2}(1)$ for all $i$ and $\theta \in \Theta$, and Equation (14) simplifies to
$$\mathrm{Var}\left(r_n(\theta_1,\theta_2)\right) < n\left(4n + 6n^{\delta/2}\right) C_{\theta_1,\theta_2}(1) \sum_{k \ge 0} \alpha_k^{\delta/(2+\delta)} + \left(4n + 2n^{\delta/2}\right)\left(C_{\theta_1,\theta_2}(1) + D_{1,2} + C_{\theta_1,\theta_2}(1)\, D_{1,2}\right) \sum_{k \ge 1} \alpha_k^{\delta/(2+\delta)} + \mathrm{Var}(Z_0).$$
If the sum $\sum_{k \ge 0} \alpha_k^{\delta/(2+\delta)}$ is infinite, the bound is trivially true. For it to be finite, the coefficients $\alpha_k$ must, of course, decay to zero sufficiently quickly. For instance, Theorem A.1.2 shows that if the Markov chain is geometrically ergodic, then the α-mixing coefficients decrease geometrically. We use this fact when the Markov chain is non-stationary, as in Section 4. In the next section, however, we first consider the simpler stationary Markov chain setting, where geometric ergodicity conditions are not explicitly imposed. We also note that unless only a finite number of the $\alpha_k$ are nonzero, the sum $\sum_{k \ge 0} \alpha_k^{\delta/(2+\delta)}$ is infinite when $\delta = 0$, so our results typically require $\delta > 0$.

3. Stationary Markov Data-Generating Models

Observe that the PAC-Bayesian concentration bound in Corollary 1 specifically requires bounding the mean and variance of the log-likelihood ratio $r_n(\theta,\theta_0)$. We ensure this by imposing regularity conditions on the log-ratio of the one-step transition kernels and of the corresponding invariant densities. Specifically, we assume the following conditions, which decouple the model parameters from the random samples and allow us to verify the bounds in Corollary 1.
Assumption 1.
There exist positive functions $M_k^{(1)}(\cdot,\cdot)$ and $M_k^{(2)}(\cdot)$, $k \in \{1,2,\ldots,m\}$, such that for any parameters $\theta_1, \theta_2 \in \Theta$, the log-ratio of the one-step transition kernels and the log-ratio of the invariant densities satisfy, respectively,
$$\left|\log p_{\theta_1}(x_1|x_0) - \log p_{\theta_2}(x_1|x_0)\right| \le \sum_{k=1}^m M_k^{(1)}(x_1,x_0)\, \left|f_k^{(1)}(\theta_2,\theta_1)\right| \quad \forall (x_0,x_1), \ \text{and}$$
$$\left|\log q_{\theta_1}(x) - \log q_{\theta_2}(x)\right| \le \sum_{k=1}^m M_k^{(2)}(x)\, \left|f_k^{(2)}(\theta_2,\theta_1)\right| \quad \forall x.$$
We further assume that for some $\delta > 0$, the functions $f_k^{(1)}, f_k^{(2)}$ and $M_k^{(1)}$ satisfy the following:
i. there exist constants $C_k^{(t)}$ and measures $\rho_n \in \mathcal{F}$ such that $\int |f_k^{(t)}(\theta,\theta_0)|^{2+\delta}\, \rho_n(d\theta) < C_k^{(t)}/n$ for $t \in \{1,2\}$, $n \ge 1$ and $k \in \{1,2,\ldots,m\}$, and
ii. there exists a constant $B$ such that $\int M_k^{(1)}(x_1,x_0)^{2+\delta}\, p_{\theta_j}(x_1|x_0)\, q_{\theta_j}^{(0)}(x_0)\, dx_1\, dx_0 < B$ for all $k \in \{1,\ldots,m\}$ and $j \in \{1,2\}$.
The following examples illustrate Equations (17) and (18) for discrete and continuous state Markov chains.
Example 1.
Suppose $\{X_0,\ldots,X_n\}$ is generated by the birth-death chain with parameterized transition probability mass function
$$p_\theta(j|i) = \begin{cases} \theta & \text{if } j = i+1, \\ 1-\theta & \text{if } j = i-1. \end{cases}$$
In this example, the parameter θ denotes the probability of birth. Here $m = 3$: $M_1^{(1)}(X_1,X_0) = \mathbb{I}[X_1 = X_0+1]$, $M_2^{(1)}(X_1,X_0) = \mathbb{I}[X_1 = X_0-1]$, and $M_3^{(1)}(X_1,X_0) = 1$. We also define $M_1^{(2)}(X_0) = 1$, and set $M_2^{(2)}(X_0)$ and $M_3^{(2)}(X_0)$ both to $X_0 - 1$. Let $f_1^{(1)}(\theta,\theta_0) = \log\frac{\theta_0}{\theta}$, $f_2^{(1)}(\theta,\theta_0) = \log\frac{1-\theta_0}{1-\theta}$, $f_3^{(1)}(\theta,\theta_0) = 0$, $f_1^{(2)}(\theta,\theta_0) = f_3^{(2)}(\theta,\theta_0) = \log\frac{1-\theta_0}{1-\theta}$, and $f_2^{(2)}(\theta,\theta_0) = \log\frac{\theta_0}{\theta}$. The derivation of these terms, and the fact that they satisfy the conditions of Assumption 1, is provided in the proof of Proposition 6.
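A quick numerical check of this decomposition (our own sketch; the up-move convention $p_\theta(i+1|i) = \theta$ is an assumption consistent with reading θ as the birth probability): the log-ratio of two kernels equals the indicator terms exactly.

```python
import numpy as np

def log_kernel(j, i, theta):
    """Birth-death kernel: up-move with probability theta, down-move otherwise."""
    if j == i + 1:
        return np.log(theta)
    if j == i - 1:
        return np.log(1.0 - theta)
    return -np.inf                     # all other transitions are impossible

t, t0 = 0.3, 0.4
max_err = 0.0
for i in range(1, 10):
    for j in (i - 1, i + 1):
        lhs = log_kernel(j, i, t0) - log_kernel(j, i, t)   # log p_{theta0} - log p_theta
        rhs = ((j == i + 1) * np.log(t0 / t)
               + (j == i - 1) * np.log((1 - t0) / (1 - t)))
        max_err = max(max_err, abs(lhs - rhs))
```

Since the decomposition is exact here, the bound in Equation (17) holds with equality, with $M_1^{(1)}$ and $M_2^{(1)}$ the indicator functions above and the $M_3^{(1)}$ term unused.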
Example 2.
Suppose $\{X_0,\ldots,X_n\}$ is generated by the 'simple linear' Gauss–Markov model
$$X_n = \theta X_{n-1} + W_n,$$
where $\{W_n\}$ is a sequence of i.i.d. standard Gaussian random variables. Then $m = 2$, with $M_1^{(1)}(X_n,X_{n-1}) = |X_n X_{n-1}|$, $M_2^{(1)}(X_n,X_{n-1}) = X_{n-1}^2/2$, $M_1^{(2)}(x) = x^2/2$ and $M_2^{(2)}(x) = 0$. Corresponding to these, we have $f_1^{(1)}(\theta,\theta_0) = (\theta - \theta_0)$, $f_2^{(1)}(\theta,\theta_0) = (\theta_0^2 - \theta^2)$, $f_1^{(2)}(\theta,\theta_0) = (\theta_0^2 - \theta^2)$ and $f_2^{(2)}(\theta,\theta_0) = 0$. The derivation of these quantities, and the fact that they satisfy the conditions of Assumption 1 under an appropriate choice of $\rho_n$, is shown in the proof of Proposition 10.
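The corresponding computation for the Gauss–Markov model can be verified directly (our own sketch, with the factor of ½ absorbed into the second M-term): expanding the squares in the Gaussian transition densities gives $\log p_{\theta_1}(x_1|x_0) - \log p_{\theta_2}(x_1|x_0) = (\theta_1-\theta_2)\,x_1 x_0 - \tfrac{1}{2}(\theta_1^2-\theta_2^2)\,x_0^2$, which the M- and f-terms then bound.

```python
import numpy as np

def log_trans(x1, x0, theta):
    """AR(1) transition log-density: X_1 | X_0 = x0 is N(theta * x0, 1)."""
    return -0.5 * np.log(2 * np.pi) - 0.5 * (x1 - theta * x0) ** 2

rng = np.random.default_rng(2)
t1, t2 = 0.4, -0.2
x0, x1 = rng.normal(size=1000), rng.normal(size=1000)

diff = log_trans(x1, x0, t1) - log_trans(x1, x0, t2)
closed = (t1 - t2) * x1 * x0 - 0.5 * (t1**2 - t2**2) * x0**2
# generalized Lipschitz bound: M1 * |f1| + M2 * |f2|
bound = np.abs(x1 * x0) * abs(t1 - t2) + 0.5 * x0**2 * abs(t1**2 - t2**2)
```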
Note that assuming the same number $m$ of functions $M_k^{(1)}$ and $M_k^{(2)}$ involves no loss of generality, since any of these functions can be set to 0. Both Equations (17) and (18) can be viewed as generalized Lipschitz-smoothness conditions, recovering the usual Lipschitz smoothness when $m = 1$ and $f_k^{(t)}$ is the Euclidean distance. Our generalized conditions are useful for distributions like the Gaussian, where Lipschitz smoothness does not apply. From Jensen's inequality we have $\int |f_k^{(t)}(\theta,\theta_0)|\, \rho_n(d\theta) \le \left(\int |f_k^{(t)}(\theta,\theta_0)|^{2+\delta}\, \rho_n(d\theta)\right)^{1/(2+\delta)}$, and Assumption 1(i) above implies that for some constant $C > 0$ and all $k \in \{1,2,\ldots,m\}$, $t \in \{1,2\}$,
$$\int |f_k^{(t)}(\theta,\theta_0)|\, \rho_n(d\theta) \le C\, n^{-1/(2+\delta)}.$$
Assumption 1(i) is satisfied in a variety of scenarios, for example, under mild assumptions on the partial derivatives of the functions f k ( t ) . To illustrate this, we present the following proposition.
Proposition 4.
Let $f(\theta,\theta_0)$ be a function on a bounded domain with bounded partial derivatives and with $f(\theta_0,\theta_0) = 0$. Let $\{\rho_n(\cdot)\}$ be a sequence of probability densities on θ such that $E_{\rho_n}[\theta] = \theta_0$ and $\mathrm{Var}_{\rho_n}[\theta] = \sigma^2/n$ for some $\sigma > 0$. Then, for some $C > 0$,
$$\int |f(\theta,\theta_0)|^{2+\delta}\, \rho_n(d\theta) < \frac{C}{n}.$$
Proof. 
Define $\partial_\theta f(\theta,\theta_0) := \frac{\partial f(\theta,\theta_0)}{\partial \theta}$, the partial derivative of $f$ in its first argument. By the mean value theorem, $|f(\theta,\theta_0)| = |\theta - \theta_0|\, |\partial_\theta f(\theta^*,\theta_0)|$ for some $\theta^* \in [\min\{\theta,\theta_0\}, \max\{\theta,\theta_0\}]$. Since the partial derivatives are bounded, there exists $L \in \mathbb{R}$ such that $|\partial_\theta f(\theta^*,\theta_0)| < L$, and hence $\int |f(\theta,\theta_0)|^{2+\delta}\, \rho_n(d\theta) < L^{2+\delta} \int |\theta-\theta_0|^{2+\delta}\, \rho_n(d\theta)$. Choose $G > 0$ such that $|\theta| < G$ on the domain; then $\frac{|\theta-\theta_0|}{2G} \le 1$, so $\left(\frac{|\theta-\theta_0|}{2G}\right)^{2+\delta} \le \left(\frac{|\theta-\theta_0|}{2G}\right)^2$. Therefore, $\int |\theta-\theta_0|^{2+\delta}\, \rho_n(d\theta) \le (2G)^{2+\delta}\, \frac{\mathrm{Var}_{\rho_n}[\theta]}{(2G)^2} = (2G)^\delta\, \frac{\sigma^2}{n}$. Now choosing $C = L^{2+\delta}(2G)^\delta \sigma^2$ completes the proof. □
If $\partial_\theta f_k^{(t)}$ is continuous and $\Theta$ is compact, then $\partial_\theta f_k^{(t)}$ is always bounded. Furthermore, observe that if $E\left[M_k^{(1)}(X_1,X_0)^{2+\delta}\right] < B$ with $B \ge 1$ (without loss of generality), then Jensen's inequality gives, for all $0 < a < 2+\delta$, $E\left[M_k^{(1)}(X_1,X_0)^a\right] \le B^{a/(2+\delta)} \le B$.
We can now state the main theorem of this section.
Theorem 2.
Let $\{X_0,\ldots,X_n\}$ be generated by a stationary, α-mixing Markov chain parameterized by $\theta_0 \in \Theta$. Suppose that Assumption 1 holds and that the α-mixing coefficients satisfy $\sum_{k \ge 1} \alpha_k^{\delta/(2+\delta)} < +\infty$. Furthermore, assume that $K(\rho_n, \pi) \le \sqrt{n}\, C$ for some constant $C > 0$. Then, the conditions of Corollary 1 are satisfied with $\epsilon_n = O\left(\max\left(\frac{1}{\sqrt{n}}, \frac{n^{\delta/2}}{n}\right)\right)$.
Theorem 2 is satisfied by a large class of Markov chains, including chains with countable and continuous state spaces. In particular, if the Markov chain is geometrically ergodic, then it follows from Equation (A4) (in the appendix) that $\sum_{k \ge 1} \alpha_k^{\delta/(2+\delta)} < +\infty$. Observe that in order to achieve $O(1/\sqrt{n})$ convergence, we need $\delta \le 1$. Key to the proof of Theorem 2 is the fact that the variance of the log-likelihood ratio can be controlled via Assumption 1 and Proposition 3. Note also that as δ decreases, satisfying the condition $\sum_{k \ge 1} \alpha_k^{\delta/(2+\delta)} < +\infty$ requires the Markov chain to mix faster.
We now illustrate Theorem 2 for a number of Markov chain models. First, consider a birth-death Markov chain on a finite state space.
Proposition 5.
Suppose the data-generating process is a birth-death Markov chain, with one-step transition kernel parameterized by the birth probability $\theta_0 \in \Theta$. Let $\mathcal{F}$ be the set of all Beta distributions, and choose the prior to be a Beta distribution. Then, the conditions of Theorem 2 are satisfied and $\epsilon_n = O\left(\frac{1}{\sqrt{n}}\right)$.
Proof. 
The proof of Proposition 5 follows from the more general Proposition 8 by fixing the initial distribution to the invariant distribution under $\theta_0$, and is therefore omitted; we simply refer to the proof of Proposition 8, under a more general setup, in Appendix B.3. □
The birth-death chain on a finite state space is, of course, geometrically ergodic, and its α-mixing coefficients $\alpha_k$ decay geometrically. Note that the invariant distribution of this Markov chain is uniform over the state space, so this is a particularly simple example. A more complicated and more realistic example is a birth-death Markov chain on the nonnegative integers. We note that if the probability of birth θ in such a chain is greater than 0.5, then the Markov chain is transient, and consequently, not ergodic. Hence, our prior should be chosen to have support within $(0, 0.5)$. For that purpose, we define the class of scaled beta distributions.
Definition 2
(Scaled Beta). If $X$ has a beta distribution with parameters $a$ and $b$, then $Y$ is said to have a scaled beta distribution with the same parameters on the interval $(c, m+c)$ if
$$Y = mX + c, \quad (m,c) \in \mathbb{R}^2,\ m > 0,$$
in which case the pdf of $Y$ is
$$f(y) = \begin{cases} \dfrac{1}{m\,\mathrm{Beta}(a,b)} \left(\dfrac{y-c}{m}\right)^{a-1} \left(1 - \dfrac{y-c}{m}\right)^{b-1} & \text{if } y \in (c, m+c), \\ 0 & \text{otherwise}. \end{cases}$$
Here, $E[Y] = \frac{ma}{a+b} + c$ and $\mathrm{Var}[Y] = \frac{m^2 ab}{(a+b)^2(a+b+1)}$. For the birth-death chain, we set $m = 0.5$ and $c = 0$, giving support on $(0, \frac{1}{2})$. Setting $m = 2$ and $c = -1$ gives a beta distribution rescaled to have support on $(-1, 1)$.
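The moment formulas in Definition 2 can be checked numerically (an illustrative sketch of ours, with parameter values chosen arbitrarily):

```python
import math
import numpy as np

def scaled_beta_pdf(y, a, b, m, c):
    """Density of Y = m*X + c with X ~ Beta(a, b), per Definition 2."""
    z = (y - c) / m
    B = math.gamma(a) * math.gamma(b) / math.gamma(a + b)   # Beta(a, b)
    inside = (z > 0) & (z < 1)
    out = np.zeros_like(y)
    out[inside] = z[inside] ** (a - 1) * (1 - z[inside]) ** (b - 1) / (m * B)
    return out

a, b, m, c = 2.0, 3.0, 0.5, 0.0     # support (0, 1/2), as for the birth-death prior
y = np.linspace(c, c + m, 400001)
h = y[1] - y[0]
pdf = scaled_beta_pdf(y, a, b, m, c)
mean = np.sum(y * pdf) * h          # quadrature approximation of E[Y]
var = np.sum((y - mean) ** 2 * pdf) * h
```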
Proposition 6.
Suppose the data-generating process is a positive recurrent birth-death Markov chain on the positive integers, parameterized by the birth probability $\theta_0 \in (0, \frac{1}{2})$. Further, let $\mathcal{F}$ be the set of all Beta distributions rescaled to have support $(0, \frac{1}{2})$, and choose the prior to be a scaled Beta distribution on $(0, 1/2)$ with parameters $a$ and $b$. Then, the conditions of Theorem 2 are satisfied with $\epsilon_n = O\left(\frac{1}{\sqrt{n}}\right)$.
Proof. 
The proof of Proposition 6 (for the stationary case) follows from the more general Proposition 9 (the nonstationary case) by fixing the initial distribution to the invariant distribution under θ 0 . We omit the proof and simply refer to the proof of Proposition 9 under a more general setup in Appendix B.3. □
Unlike with the finite state space, the invariant distribution now depends on the parameter $\theta \in \Theta$, and verification of the conditions of the proposition is more involved. In Appendix A.2, we prove that the class of scaled beta distributions satisfies the condition $K(\rho_n, \pi) \le n\epsilon_n$ when the prior π is a beta or a uniform distribution. This fact allows us to prove the above propositions.
Both Proposition 5 and Proposition 6 assume a discrete state space. The next example considers a strictly stationary simple linear model (as defined in Example 2), which has a continuous, unbounded state space.
Proposition 7.
Suppose the data-generating model is a stationary simple linear model:
$$X_n = \theta_0 X_{n-1} + W_n,$$
where $\{W_n\}$ are i.i.d. standard Gaussian random variables and $|\theta_0| < 1$. Suppose that $\mathcal{F}$ is the class of all beta distributions rescaled to have support $(-1, 1)$. Then, the conditions of Theorem 2 are satisfied with $\epsilon_n = O\left(\frac{1}{\sqrt{n}}\right)$.
Proof. 
This is a special case of the more general non-stationary simple linear model which is detailed in Proposition 10. Therefore, the proof of the fact that the simple linear model satisfies Assumption 1 when starting from stationarity is deferred to the proof of Proposition 10. The simple linear model with | θ 0 | < 1 has geometrically decreasing (and therefore summable) α -mixing coefficients as a consequence of [20] (eq. (15.49)) and Theorem A.1.2. Combining these two facts, it follows that the conditions of Theorem 2 are satisfied. □
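The geometric decay of dependence in this model is easy to see empirically: for the stationary simple linear (AR(1)) model the autocorrelation at lag k is $\theta_0^k$. The following sketch, with an arbitrary illustrative $\theta_0 = 0.5$, simulates the chain from its stationary law and compares sample autocorrelations to this geometric rate.

```python
import numpy as np

rng = np.random.default_rng(0)
theta0, n = 0.5, 200_000

# Simulate X_n = theta0 * X_{n-1} + W_n starting from stationarity;
# the stationary law is N(0, 1 / (1 - theta0^2)).
x = np.empty(n)
x[0] = rng.normal(scale=1.0 / np.sqrt(1.0 - theta0 ** 2))
w = rng.normal(size=n)
for t in range(1, n):
    x[t] = theta0 * x[t - 1] + w[t]

# Sample autocorrelations versus the theoretical rate theta0^k.
for k in (1, 3, 5, 8):
    r = np.corrcoef(x[:-k], x[k:])[0, 1]
    print(k, round(r, 3), theta0 ** k)
```

The geometrically decaying correlations mirror the geometrically decaying α-mixing coefficients invoked in the proof.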
Observe that Theorem 1 (and Corollary 1) are general and hold for any dependent data-generating process. Therefore, there can be Markov chains that satisfy these results but do not satisfy Assumption 1, which entails some loss of generality. However, as our examples demonstrate, common Markov chain models do indeed satisfy the latter assumption.

4. Non-Stationary, Ergodic Markov Data-Generating Models

We call a time-homogeneous Markov chain non-stationary if the initial distribution $q^{(0)}$ is not the invariant distribution. There are two sets of results in this setting: in Theorems 3 and 4 we explicitly impose the α-mixing condition, while in Theorem 5 we impose an f-geometric ergodicity condition (Definition A.1.2 in the appendix). As seen in Equation (A4) (in the appendix), if the Markov chain is also geometrically ergodic, then for every $\delta > 0$, $\sum_k \alpha_k^{\delta/(2+\delta)} < \infty$. This condition can be relaxed, albeit at the risk of more complicated calculations that, nonetheless, mirror those in the geometrically ergodic setting. A common thread through these results is that we must impose some integrability or regularity conditions on the functions $M_k^{(1)}$.
First, in Theorem 3 we assume that the M k ( 1 ) functions in Assumption 1 are uniformly bounded and that the α -mixing condition is satisfied. This result holds for both discrete and continuous state space settings.
Theorem 3.
Let $\{X_0, \ldots, X_n\}$ be generated by an α-mixing Markov chain parametrized by $\theta_0 \in \Theta$ with transition probabilities satisfying Assumption 1 and with known initial distribution $q^{(0)}$. Let $\{\alpha_k\}$ be the α-mixing coefficients under $\theta_0$, and assume that $\sum_{k \ge 1} \alpha_k^{\delta/(2+\delta)} < +\infty$. Suppose that there exists $B \in \mathbb{R}$ such that $\sup_{x,y} |M_k^{(1)}(x, y)| < B$ for all $k \in \{1, 2, \ldots, m\}$ in Assumption 1. Furthermore, assume that there exists $\rho_n \in \mathcal{F}$ such that $K(\rho_n, \pi) \le nC$ for some constant $C > 0$. If the initial distribution $q^{(0)}$ satisfies $E_{q^{(0)}}\big[|M_k^{(2)}(X_0)|^2\big] < +\infty$ for all $k \in \{1, 2, \ldots, m\}$, then the conditions of Corollary 1 are satisfied with $\epsilon_n = O\big(\max\big(\frac{1}{\sqrt{n}}, \frac{n^{\delta/2}}{n}\big)\big)$.
The following result in Proposition 8 illustrates Theorem 3 in the setting of a finite state birth-death Markov chain.
Proposition 8.
Suppose the data-generating process is a finite state birth-death Markov chain, with one-step transition kernel parametrized by the birth probability $\theta_0$. Let $\mathcal{F}$ be the set of all Beta distributions, and choose the prior on $\theta_0$ to be a Beta distribution. Then, the conditions of Theorem 3 are satisfied with $\epsilon_n = O\left(\frac{1}{\sqrt{n}}\right)$ for any initial distribution $q^{(0)}$.
Theorem 3 also applies to data generated by Markov chains with countably infinite state spaces, so long as the class of data-generating Markov chains is strongly ergodic and the initial distribution has finite second moments. The following example demonstrates this in the setting of a birth-death Markov chain on the positive integers, where the initial distribution is assumed to have finite second moments.
Proposition 9.
Suppose the data-generating process is a birth-death Markov chain on the non-negative integers, parameterized by the probability of birth $\theta_0 \in (0, \frac12)$. Further, let $\mathcal{F}$ be the set of all Beta distributions rescaled to have support $(0, \frac12)$. Let $q^{(0)}$ be a probability mass function on the non-negative integers such that $\sum_{i=1}^{\infty} i^2 q^{(0)}(i) < +\infty$. We choose the prior to be a scaled Beta distribution on $(0, 1/2)$ with parameters a and b. Then, the conditions of Theorem 3 are satisfied with $\epsilon_n = O\left(\frac{1}{\sqrt{n}}\right)$.
Since continuous functions on a compact domain are bounded, we have the following (easy) corollary (stated without proof).
Corollary 2.
Let $\{X_0, \ldots, X_n\}$ be generated by an α-mixing Markov chain parametrized by $\theta_0 \in \Theta$ on a compact state space, and with initial distribution $q^{(0)}$. Suppose the α-mixing coefficients satisfy $\sum_{k \ge 1} \alpha_k^{\delta/(2+\delta)} < +\infty$, and that Assumption 1 holds with continuous functions $M_k^{(1)}(\cdot,\cdot)$, $k \in \{1, 2, \ldots, m\}$. Furthermore, assume that there exists $\rho_n$ such that $K(\rho_n, \pi) \le nC$ for some constant C. Then, the conditions of Theorem 3 are satisfied with $\epsilon_n = O\big(\max\big(\frac{1}{\sqrt{n}}, \frac{n^{\delta/2}}{n}\big)\big)$.
In general, the $M_k^{(1)}$ functions will not be uniformly bounded (consider the case of the Gauss–Markov simple linear model in Example 2), and stronger conditions must be imposed on the data-generating Markov chain itself. The following assumption imposes a 'drift' condition from [21]. Specifically, [21] (Theorem 2.3) shows that under the conditions of Assumption 2, the moment generating function of an aperiodic Markov chain $\{X_n\}$ can be upper bounded by a function of the moment generating function of $X_0$. Together with the α-mixing condition, Assumption 2 implies that this Markov data generating process satisfies the conditions of Corollary 1.
Assumption 2.
Consider a Markov chain $\{X_n\}$ parameterized by $\theta_0 \in \Theta$. Let $\mathcal{M}_n$ denote the σ-field generated by $\{X_0, \ldots, X_{n-1}, X_n\}$. Denote the stochastic process $\{M_n^k\} := \{M_k^{(1)}(X_n, X_{n-1})\}$; recall that the $M_k^{(1)}$, for each $k = 1, \ldots, m-1$, are defined in Assumption 1. For each $k = 1, \ldots, m$, assume the process $\{M_n^k\}$ satisfies the following conditions:
  • The drift condition holds for $\{M_n^k\}$, i.e., $E\left[M_n^k - M_{n-1}^k \,\middle|\, \mathcal{M}_{n-1},\, M_{n-1}^k > a\right] \le -\epsilon$ for some $\epsilon, a > 0$.
  • For some $\lambda > 0$ and $D > 0$, $E\left[e^{\lambda (M_n^k - M_{n-1}^k)} \,\middle|\, \mathcal{M}_{n-1}\right] \le D$.
Under this drift condition, the next theorem shows that Corollary 1 is satisfied.
Theorem 4.
Let $\{X_0, \ldots, X_n\}$ be generated by an aperiodic α-mixing Markov chain parametrized by $\theta_0 \in \Theta$ and initial distribution $q^{(0)}$. Suppose that Assumptions 1 and 2 hold, and that the α-mixing coefficients satisfy $\sum_{k \ge 1} \alpha_k^{\delta/(2+\delta)} < +\infty$. Furthermore, assume $K(\rho_n, \pi) \le nC$ for some constant $C > 0$. If $\int e^{\lambda M_k^{(1)}(y,x)}\, p_{\theta_0}(y|x)\, q^{(0)}(x)\, dx\, dy < +\infty$ for all $k = 1, \ldots, m-1$, then the conditions of Corollary 1 are satisfied with $\epsilon_n = O\big(\max\big(\frac{1}{\sqrt{n}}, \frac{n^{\delta/2}}{n}\big)\big)$.
Verifying the conditions in Theorem 4 can be quite challenging. Instead, we suggest a different approach that requires f-geometric ergodicity. Unlike the drift condition in Assumption 2, f-geometric ergodicity additionally requires the existence of a petite set. As noted before, geometric ergodicity implies α -mixing with geometrically decaying mixing coefficients. As with Theorem 4, we assume for simplicity that the Markov chain is aperiodic.
Theorem 5.
Let $\{X_0, \ldots, X_n\}$ be generated by an aperiodic Markov chain parametrized by $\theta_0 \in \Theta$ with known initial distribution $q^{(0)}$, assumed to be V-geometrically ergodic for some $V : \mathbb{R}^m \to [1, \infty)$. Suppose that Assumption 1 holds and that $\int |M_k^{(1)}(y, x)|^{2+\delta}\, p_{\theta_0}(y|x)\, dy < V(x)$ for all k and x, and some $\delta > 0$. Furthermore, assume that $K(\rho_n, \pi) \le nC$ for some constant $C > 0$. If the initial distribution $q^{(0)}$ satisfies $E_{q^{(0)}}[V(X_0)] < +\infty$, then the conditions of Corollary 1 are satisfied with $\epsilon_n = O\big(\max\big(\frac{1}{\sqrt{n}}, \frac{n^{\delta/2}}{n}\big)\big)$.
The following Proposition 10 shows that the simple linear model satisfies the conditions of Theorem 5 when the parameter $\theta_0$ is suitably restricted.
Proposition 10.
Consider the simple linear model satisfying the equation
$$X_n = \theta_0 X_{n-1} + W_n,$$
where $\{W_n\}$ are i.i.d. standard Gaussian random variables and $|\theta_0| < 2^{\frac{1}{4+2\delta}} - 1$ for $\delta > 0$. Let $\mathcal{F}$ be the space of all scaled Beta distributions on $(-1, 1)$ and suppose the prior π is a uniform distribution on $(-1, 1)$. Then, the conditions of Theorem 5 are satisfied with $\epsilon_n = O\big(\max\big(\frac{1}{\sqrt{n}}, \frac{n^{\delta/2}}{n}\big)\big)$, provided the initial distribution $q^{(0)}$ satisfies $E_{q^{(0)}}[X_0^{4+2\delta}] < +\infty$.

5. Misspecified Models

We show next how our results can be extended to the misspecified model setting. Assume that the true data-generating distribution is parametrized by $\theta_0 \notin \Theta$. Let $\theta_n^* := \arg\min_{\theta \in \Theta} K(P_{\theta_0}^{(n)}, P_\theta^{(n)})$ index the parametrized distribution closest, in KL divergence, to the data-generating distribution. Further, assume our usual conditions:
i. 
$\int E[r_n(\theta, \theta_n^*)]\, \rho_n(d\theta) \le n\epsilon_n$,
ii. 
$\int \mathrm{Var}[r_n(\theta, \theta_n^*)]\, \rho_n(d\theta) \le n\epsilon_n$.
Now, since $r_n(\theta, \theta_0) = r_n(\theta, \theta_n^*) + r_n(\theta_n^*, \theta_0)$, we have
$$\int K(P_{\theta_0}^{(n)}, P_\theta^{(n)})\, \rho_n(d\theta) \le E[r_n(\theta_n^*, \theta_0)] + n\epsilon_n.$$
Similarly, decomposing the variance it follows that
$$\mathrm{Var}[r_n(\theta, \theta_0)] = \mathrm{Var}[r_n(\theta, \theta_n^*)] + \mathrm{Var}[r_n(\theta_n^*, \theta_0)] + 2\,\mathrm{Cov}[r_n(\theta, \theta_n^*), r_n(\theta_n^*, \theta_0)].$$
Using the fact that $2ab \le a^2 + b^2$ on the covariance term $2\,\mathrm{Cov}[r_n(\theta, \theta_n^*), r_n(\theta_n^*, \theta_0)] = 2E\big[(r_n(\theta, \theta_n^*) - E[r_n(\theta, \theta_n^*)])(r_n(\theta_n^*, \theta_0) - E[r_n(\theta_n^*, \theta_0)])\big]$, we have
$$\mathrm{Var}[r_n(\theta, \theta_0)] \le 2\,\mathrm{Var}[r_n(\theta, \theta_n^*)] + 2\,\mathrm{Var}[r_n(\theta_n^*, \theta_0)].$$
Integrating both sides with respect to $\rho_n(d\theta)$, we get
$$\int \mathrm{Var}[r_n(\theta, \theta_0)]\, \rho_n(d\theta) \le 2\int \mathrm{Var}[r_n(\theta, \theta_n^*)]\, \rho_n(d\theta) + 2\,\mathrm{Var}[r_n(\theta_n^*, \theta_0)] \le 2n\epsilon_n + 2\,\mathrm{Var}[r_n(\theta_n^*, \theta_0)].$$
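The variance decomposition above rests on the elementary bound $\mathrm{Var}[A + B] \le 2\,\mathrm{Var}[A] + 2\,\mathrm{Var}[B]$, which holds for any joint distribution of A and B; a quick numerical check with arbitrary stand-in variables:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two dependent samples standing in for r_n(theta, theta_n*) and
# r_n(theta_n*, theta_0); the bound holds for any dependence structure.
A = rng.normal(size=100_000)
B = 0.7 * A + rng.gamma(2.0, size=100_000)

lhs = np.var(A + B)
rhs = 2 * np.var(A) + 2 * np.var(B)
print(lhs <= rhs)  # True, since 2 Cov[A, B] <= Var[A] + Var[B]
```

The sample version of the inequality holds exactly as well, because the empirical covariance obeys the same Cauchy–Schwarz bound.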
Consequently, we arrive at the following result:
Theorem 6.
Let $\mathcal{F}$ be a subset of all probability distributions parameterized by Θ. Let $\theta_n^* = \arg\min_{\theta \in \Theta} K(P_{\theta_0}^{(n)}, P_\theta^{(n)})$ and assume there exist $\epsilon_n > 0$ and $\rho_n \in \mathcal{F}$ such that
i
$\int E[r_n(\theta, \theta_n^*)]\, \rho_n(d\theta) \le n\epsilon_n$,
ii
$\int \mathrm{Var}[r_n(\theta, \theta_n^*)]\, \rho_n(d\theta) \le n\epsilon_n$, and
iii
$K(\rho_n, \pi) \le n\epsilon_n$.
Then, for any $\alpha_{re} \in (0, 1)$ and $(\epsilon, \eta) \in (0, 1) \times (0, 1)$,
$$P\left[\int D_{\alpha_{re}}(P_\theta^{(n)}, P_{\theta_0}^{(n)})\, \tilde{\pi}_{n,\alpha_{re}}(d\theta \mid X^{(n)}) \le \frac{(\alpha_{re} + 1)\, n\epsilon_n + E[r_n(\theta_n^*, \theta_0)] + \alpha_{re}\sqrt{\dfrac{2n\epsilon_n + 2\,\mathrm{Var}[r_n(\theta_n^*, \theta_0)]}{\eta}} - \log(\epsilon)}{1 - \alpha_{re}}\right] \ge 1 - \epsilon - \eta.$$
The proof of this theorem is straightforward and follows from the proof of Theorem 1 by plugging the upper bounds on the KL-divergence from Equation (23) and on the variance from Equation (26) into (A13). A sketch of the proof is presented in the appendix.

6. Conclusions

Concentration of the KL-VB model risk, in terms of the expected $\alpha_{re}$-Rényi divergence, is well established under the i.i.d. data generating model assumption. Here, we extended this to the setting of Markov data generating models, linking the concentration rate to the mixing and ergodic properties of the Markov model. Our results apply to both stationary and non-stationary Markov chains, as well as to the situation with misspecified models. There remain a number of open questions. An immediate one is to extend the current analysis to continuous-time Markov chains and Markov jump processes, possibly using uniformization of the continuous-time model. Another direction is to extend this to the setting of non-homogeneous Markov chains, where analogues of notions such as stationarity are less straightforward. Further, as noted in the introduction, [14] establish PAC-Bayes bounds under slightly weaker 'existence of test functions' conditions, while our results are established under the stronger conditions used by [15] for the i.i.d. setting. Weakening the conditions in our analysis is important, but complicated. A possible path is to build on results from [22], who provides conditions for the existence of exponentially powerful test functions for distinguishing between two Markov chains. It is also known that there exists a likelihood ratio test separating any two ergodic measures [23]. However, leveraging these to establish the PAC-Bayes bounds for the KL-VB posterior is a challenging effort that we leave to future work. Finally, it is of interest to generalize our PAC-Bayes bounds to posterior approximations beyond KL-variational inference, such as $\alpha_{re}$-Rényi posterior approximations [6] and loss-calibrated posterior approximations [24,25].

Author Contributions

Formal analysis, I.B.; Investigation, I.B.; Methodology, I.B., V.A.R. and H.H.; Resources, V.A.R. and H.H.; Validation, V.A.R. and H.H. All authors have read and agreed to the published version of the manuscript.

Funding

National Science Foundation: IIS-1816499; DMS-1812197.

Acknowledgments

Rao and Honnappa acknowledge support from NSF DMS-1812197. In addition, Rao acknowledges NSF IIS-1816499 for supporting this project.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Technical Desiderata

Appendix A.1. Definitions Related to Markov Chains

As noted before, ergodicity plays a central role in establishing our results. We consolidate various definitions used throughout the paper in this appendix. Recall that we assume the parameterized Markov chain possesses an invariant probability density or mass function $q_\theta$ under parameter $\theta \in \Theta$. Our results in Section 4 also rely on the ergodic properties of the Markov chain, and we assume that the Markov chain is f-geometrically ergodic [20] (Chapter 15). First, we recall the definition of the functional norm $\|\cdot\|_f$ in Definition A.1.1.
Definition A.1.1
(f-norm). The functional norm in the f-metric of a measure ν, or the f-norm of ν, is
$$\|\nu\|_f = \sup_{g : |g| \le f} \left|\int g\, d\nu\right|,$$
where the supremum is over all functions g dominated by f.
An immediate consequence of this definition is that if $f_1, f_2$ are two functions such that $f_1 \le f_2$ pointwise on the support of the functions, then
$$\|\nu\|_{f_1} \le \|\nu\|_{f_2}.$$
Now that we have defined the $\|\cdot\|_f$ norm, we can define f-geometric ergodicity. In the following, we assume the Markov chain is positive Harris; see [20] for a definition. This is a mild and fairly standard assumption in Markov chain theory.
Definition A.1.2
(f-geometric ergodicity). For any function f, a Markov chain $\{X_n\}$ parameterized by $\theta \in \Theta$ is said to be f-geometrically ergodic if it is positive Harris and there exists a constant $r_f > 1$, depending on f, such that for any $A \in \mathcal{B}(\mathcal{X})$,
$$\sum_{n=1}^{\infty} r_f^n \left\| P_\theta(X_n \in A \mid X_0 = x) - \int_A q_\theta(y)\, dy \right\|_f < \infty.$$
It is straightforward to see that this is equivalent to
$$\left\| P_\theta(X_n \in A \mid X_0 = x) - \int_A q_\theta(y)\, dy \right\|_f \le C\, r_f^{-n}$$
for an appropriate constant C (which may depend on the state x); that is, the Markov chain approaches its steady state at a geometrically fast rate. If a Markov chain is f-geometrically ergodic for $f \equiv 1$, then it is simply termed geometrically ergodic. It is straightforward to see (via Theorem A.1.2 in the Appendix) that a geometrically ergodic Markov chain is also α-mixing, with mixing coefficients satisfying
$$\sum_{k \ge 0} \alpha_k^{\upsilon} < \infty \qquad \forall\, \upsilon > 0,$$
showing that, under geometric ergodicity, the α-mixing coefficients raised to any positive power υ are finitely summable. We note here that the most standard procedure to establish f-geometric ergodicity for a Markov chain is through verification of a drift condition. The drift condition is sufficient for a Markov chain to be f-geometrically ergodic, as long as there exists a set (called a petite set) towards which the Markov chain drifts (see Assumption A.1.1 in the appendix). If a Markov chain is f-geometrically ergodic with $f \equiv V$ for some particular function V, then we call it V-geometrically ergodic.
We defined V-geometric ergodicity in the previous sections. In this section, we provide a sufficient condition for a Markov chain to be V-geometrically ergodic. First, we recall the definition of resolvent from [20] (Chapter 5).
Definition A.1.3
(Resolvent). Let $n \in \{0, 1, 2, \ldots\}$ and let $\{q_n\}$ be such that $q_n \ge 0$ for all n and $\sum_{n=0}^{\infty} q_n = 1$. Note that $\{q_n\}$ can be thought of as the probability mass function of a random variable q taking values on the non-negative integers. Then, the resolvent of a Markov chain with respect to q is given by $K_q(x, A)$, where
$$K_q(x, A) = \sum_{n=0}^{\infty} q_n\, P(X_n \in A \mid X_0 = x).$$
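For geometric weights $q_n = (1-g)g^n$ the resolvent has the closed form $(1-g)(I - gP)^{-1}$, which the following sketch verifies against the truncated series for a small illustrative chain (the transition matrix and g are arbitrary choices of ours):

```python
import numpy as np

# A small illustrative chain and geometric sampling weights q_n = (1-g) g^n.
P = np.array([[0.9, 0.1, 0.0],
              [0.3, 0.4, 0.3],
              [0.0, 0.2, 0.8]])
g = 0.5
q = lambda m: (1 - g) * g ** m

# Truncated series  K_q = sum_n q_n P^n ...
K_series = sum(q(m) * np.linalg.matrix_power(P, m) for m in range(200))
# ... equals the closed form (1-g) (I - g P)^{-1}.
K_closed = (1 - g) * np.linalg.inv(np.eye(3) - g * P)

print(np.allclose(K_series, K_closed))   # True
print(K_closed.sum(axis=1))              # rows sum to 1: K_q is itself a kernel
```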
Then, the definition of petite sets follows (see, for Reference, [20] (Chapter 5)).
Definition A.1.4
(Petite Sets). Let $X_0, \ldots, X_n$ be n samples from a Markov chain taking values in the state space $\mathcal{X}$, and let C be a set. We call C $\upsilon_q$-petite if
$$K_q(x, B) \ge \upsilon_q(B)$$
for all $x \in C$ and all $B \in \mathcal{B}(\mathcal{X})$, for some non-trivial measure $\upsilon_q$ on $\mathcal{B}(\mathcal{X})$ and some probability mass function q on $\{1, 2, 3, \ldots\}$.
Now, let $\Delta V(x) := E[V(X_n) \mid X_{n-1} = x] - V(x)$ for $V : S \to [1, \infty)$.
Assumption A.1.1
(Drift condition). [20] (Chapter 5) Suppose the chain $\{X_n\}$ is aperiodic and ψ-irreducible. Suppose there exist a petite set C, constants $b < \infty$, $\beta > 0$, and a non-trivial function $V : S \to [1, \infty)$ satisfying
$$\Delta V(x) \le -\beta V(x) + b\, \mathbb{I}_{\{x \in C\}} \qquad \forall\, x \in S.$$
If a Markov chain drifts towards a petite set, then it is V-geometrically ergodic. Suppose, for simplicity, that $V(x) = |x|$. Then the drift condition becomes $E[|X_n| \mid X_{n-1} = x] - |x| \le -\beta |x| + b\, \mathbb{I}_{\{x \in C\}}$. The left-hand side of this inequality represents the expected one-step change in the state of the Markov chain. Thus, the condition in Assumption A.1.1 essentially states that the Markov chain drifts towards the petite set C and then, once it reaches that set, moves to any point in the state space with at least some probability independent of C.
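To make the drift condition concrete, the sketch below estimates $\Delta V(x)$ by Monte Carlo for the simple linear model with $V(x) = 1 + |x|$ (our illustrative choice of V and θ). Analytically, $\Delta V(x) \le -(1 - |\theta|)|x| + E|W|$, so the drift is strongly negative away from a compact set around the origin.

```python
import numpy as np

rng = np.random.default_rng(2)
theta = 0.5
V = lambda x: 1.0 + np.abs(x)

def drift(x, n_mc=200_000):
    """Monte Carlo estimate of Delta V(x) = E[V(theta x + W)] - V(x)."""
    w = rng.normal(size=n_mc)
    return V(theta * x + w).mean() - V(x)

# Away from the origin, Delta V(x) <= -(1 - |theta|)|x| + E|W|, so e.g.
# Delta V(x) <= -0.25 V(x) once |x| is large enough.
for x in (4.0, 8.0, 16.0):
    print(x, round(drift(x), 3), round(-0.25 * V(x), 3))
```

Here the "petite set" can be taken to be a compact interval around 0, on which the drift inequality is absorbed into the constant b.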
Theorem A.1.1
(Geometrically ergodic theorem). Suppose that $\{X_n\}$ satisfies Assumption A.1.1. Then, the set $S_V = \{x : V(x) < \infty\}$ is absorbing, i.e., $P_\theta(X_1 \in S_V \mid X_0 = x) = 1$ for all $x \in S_V$, and full, i.e., $\psi(S_V^c) = 0$. Furthermore, there exist constants $r > 1$, $R < \infty$ such that, for any $A \in \mathcal{B}(S)$,
$$\left\| P_\theta(X_n \in A \mid X_0 = x) - \int_A q_\theta(y)\, dy \right\|_V \le R\, r^{-n}\, V(x).$$
Any aperiodic and ψ-irreducible Markov chain satisfying the drift condition is geometrically ergodic. A consequence of Equation (A2) is that if $\{X_n\}$ is V-geometrically ergodic, then it is also U-geometrically ergodic for any other function U such that $|U| \le V$. In essence, a geometrically ergodic Markov chain is asymptotically uncorrelated in a precise sense. Recall the ρ-mixing coefficients, defined as follows. Let $\mathcal{A}$ be a sigma-field and $L_2(\mathcal{A})$ be the set of square integrable, real valued, $\mathcal{A}$-measurable functions.
Definition A.1.5
(ρ-mixing coefficient). Let $\mathcal{M}_i^j$ denote the sigma-field generated by $\{X_k : i \le k \le j\}$. Then,
$$\rho_k = \sup_{t > 0}\ \sup_{(f, g) \in L_2(\mathcal{M}_{-\infty}^{t}) \times L_2(\mathcal{M}_{t+k}^{\infty})} |\mathrm{Corr}(f, g)|,$$
where Corr is the correlation function.
Theorem A.1.2.
If X n is geometrically ergodic, then it is α-mixing. That is, there exists a constant c > 0 such that α k = O ( e c k ) .
Proof. 
By [26] (Theorem 2), it follows that a geometrically ergodic Markov chain is asymptotically uncorrelated, with ρ-mixing coefficients (see Definition A.1.5) that satisfy $\rho_k = O(e^{-ck})$. Furthermore, it is well known [18,26] that $\alpha_k \le \frac{1}{4}\rho_k$, implying $\alpha_k = O(e^{-ck})$. □
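The geometric rate in Theorem A.1.2 can be seen directly on a small finite chain: the total-variation distance to the invariant law decays like $\lambda_2^n$, with $\lambda_2$ the second-largest eigenvalue modulus of the transition matrix. A numerical sketch (the chain is an arbitrary illustrative choice):

```python
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])

# Invariant distribution via the leading left eigenvector.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

def tv(n, x=0):
    """Total-variation distance between P^n(x, .) and the invariant law."""
    row = np.linalg.matrix_power(P, n)[x]
    return 0.5 * np.abs(row - pi).sum()

# tv(n) ~ C * lam^n, so successive ratios stabilize near lam.
lam = sorted(np.abs(w))[-2]
for n in (5, 10, 15, 20):
    print(n, tv(n), tv(n + 1) / max(tv(n), 1e-300))
```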

Appendix A.2. Bounding the KL-Divergence between Beta Distributions

The following results will be utilized in the proofs of Propositions 8–10.
Lemma A.2.1.
Let $\theta_0 \in (0, 1)$ and let $\rho_n$ be a sequence of Beta distributions with parameters $a_n = n\theta_0$ and $b_n = n(1 - \theta_0)$. Let π denote a uniform distribution, $U(0, 1)$. Then, $K(\rho_n, \pi) < C + \frac{1}{2}\log(n)$ for some constant $C > 0$.
Proof. 
Without loss of generality, we can assume $a_n > 1$ and $b_n > 1$; the same form of the result can be obtained in all the other cases by appropriate use of the bounds presented in the proof. We write the KL divergence $K(\rho_n, \pi)$ as $\int \log\frac{\rho_n}{\pi}\, \rho_n(d\theta)$. Since π is uniform, $\pi(\theta) = 1$ whenever $\theta \in (0, 1)$. Hence, the KL-divergence equals the negative entropy of $\rho_n$, $\int_0^1 \log \rho_n(\theta)\, \rho_n(d\theta)$, which can be written as
$$K(\rho_n, \pi) = (a_n - 1)\psi(a_n) + (b_n - 1)\psi(b_n) - (a_n + b_n - 2)\psi(a_n + b_n) - \log \mathrm{Beta}(a_n, b_n),$$
where ψ is the digamma function. Using Stirling’s approximation on Beta ( a n , b n ) yields,
$$\mathrm{Beta}(a_n, b_n) = \sqrt{2\pi}\, \frac{a_n^{a_n - 1/2}\, b_n^{b_n - 1/2}}{(a_n + b_n)^{a_n + b_n - 1/2}}\, (1 + o(1)).$$
Hence, setting $C_1 = \log(\sqrt{2\pi})$, we can write $-\log \mathrm{Beta}(a_n, b_n)$ as
$$-\log \mathrm{Beta}(a_n, b_n) = -C_1 - \left(a_n - \tfrac12\right)\log(a_n) - \left(b_n - \tfrac12\right)\log(b_n) + \left(a_n + b_n - \tfrac12\right)\log(a_n + b_n) - \log(1 + o(1)).$$
From [27] we have that $\log(x) - \frac{1}{x} < \psi(x) < \log(x) - \frac{1}{2x}$ for all $x > 0$. Since we assumed $a_n > 1$ and $b_n > 1$, the fact that $\psi(x) < \log(x) - \frac{1}{2x}$ implies
$$(a_n - 1)\psi(a_n) < (a_n - 1)\log(a_n) - \frac{a_n - 1}{2a_n} \quad \text{and} \quad (b_n - 1)\psi(b_n) < (b_n - 1)\log(b_n) - \frac{b_n - 1}{2b_n}.$$
Finally, using the fact that $\log(x) - \frac{1}{x} < \psi(x)$, we get
$$-(a_n + b_n - 2)\psi(a_n + b_n) < -(a_n + b_n - 2)\log(a_n + b_n) + \frac{a_n + b_n - 2}{a_n + b_n}.$$
Therefore, after much cancellation, the KL-divergence
$$(a_n - 1)\psi(a_n) + (b_n - 1)\psi(b_n) - (a_n + b_n - 2)\psi(a_n + b_n) - \log \mathrm{Beta}(a_n, b_n)$$
can be upper bounded by
$$-\frac12\log(a_n) - \frac12\log(b_n) + \frac32\log(a_n + b_n) + \frac{a_n + b_n - 2}{a_n + b_n} - \frac{a_n - 1}{2a_n} - \frac{b_n - 1}{2b_n}.$$
Plugging in the values of $a_n$ and $b_n$, we get the following upper bound for the KL-divergence:
$$K(\rho_n, \pi) < -\frac12\log(n\theta_0) - \frac12\log(n(1-\theta_0)) + \frac32\log(n) + \frac{n - 2}{n} - \frac{n\theta_0 - 1}{2n\theta_0} - \frac{n(1-\theta_0) - 1}{2n(1-\theta_0)} = \frac12\log(n) - \frac12\left[\log(\theta_0) + \log(1-\theta_0)\right] - \frac{2}{n} + \frac{1}{2n\theta_0} + \frac{1}{2n(1-\theta_0)} < C + \frac12\log(n),$$
for some large enough positive constant C. This completes our proof. □
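The lemma is easy to check numerically: the following sketch computes $K(\rho_n, \pi)$ exactly via the digamma representation above and confirms that $K(\rho_n, \pi) - \frac12\log n$ stays bounded as n grows. The digamma implementation (recurrence plus asymptotic series) is an implementation detail of ours, not part of the proof.

```python
import math

def psi(x):
    """Digamma via recurrence + asymptotic series (ample accuracy here)."""
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    inv = 1.0 / x
    inv2 = inv * inv
    return r + math.log(x) - 0.5 * inv - inv2 * (1/12 - inv2 * (1/120 - inv2 / 252))

def kl_beta_uniform(a, b):
    """K(Beta(a, b), U(0, 1)) = negative entropy of Beta(a, b)."""
    log_beta = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return ((a - 1) * psi(a) + (b - 1) * psi(b)
            - (a + b - 2) * psi(a + b) - log_beta)

theta0 = 0.3
for n in (10 ** 2, 10 ** 4, 10 ** 6):
    kl = kl_beta_uniform(n * theta0, n * (1 - theta0))
    print(n, round(kl - 0.5 * math.log(n), 4))   # stays bounded as n grows
```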
Proposition A.2.1.
Let $\theta_0 \in (0, 1)$ and let $\rho_n$ be a sequence of Beta distributions with parameters $a_n = n\theta_0$ and $b_n = n(1 - \theta_0)$. Let π denote a Beta distribution with parameters $(a, b)$. Then, $K(\rho_n, \pi) < C + \frac{1}{2}\log(n)$ for some constant $C > 0$.
Proof. 
Without loss of generality, we assume $a > 1$ and $b > 1$; as mentioned in the proof of Lemma A.2.1, the other cases follow similarly. We write the KL-divergence between $\rho_n$ and π as
$$K(\rho_n, \pi) = \int \log\frac{\rho_n}{\pi}\, \rho_n(d\theta) = \int \log\frac{\rho_n}{U}\, \rho_n(d\theta) + \int \log\frac{U}{\pi}\, \rho_n(d\theta),$$
where U is a uniform distribution on (0, 1). We analyze the second term, which can be written as
$$\int \log\frac{U}{\pi}\, \rho_n(d\theta) = \int \log\left(\frac{1}{\frac{1}{\mathrm{Beta}(a,b)}\, \theta^{a-1}(1-\theta)^{b-1}}\right) \rho_n(d\theta) = C_1 - (a-1)\int \log(\theta)\, \rho_n(d\theta) - (b-1)\int \log(1-\theta)\, \rho_n(d\theta),$$
where $C_1 = \log(\mathrm{Beta}(a, b))$. Since $\rho_n$ follows a Beta distribution with parameters $a_n = n\theta_0$ and $b_n = n(1-\theta_0)$, we get that
$$\int \log\frac{U}{\pi}\, \rho_n(d\theta) = C_1 - (a-1)\left[\psi(a_n) - \psi(a_n + b_n)\right] - (b-1)\left[\psi(b_n) - \psi(a_n + b_n)\right].$$
Since $\log(x) - \frac{1}{x} < \psi(x) < \log(x) - \frac{1}{2x}$, consider first the term $\psi(a_n) - \psi(a_n + b_n)$; we have
$$\psi(a_n) - \psi(a_n + b_n) = \psi(n\theta_0) - \psi(n\theta_0 + n(1-\theta_0)) = \psi(n\theta_0) - \psi(n).$$
Using the lower bound on $\psi(n\theta_0)$ and the upper bound on $\psi(n)$, we get
$$-\left[\psi(a_n) - \psi(a_n + b_n)\right] < -\log(n\theta_0) + \frac{1}{n\theta_0} + \log(n) - \frac{1}{2n} = -\log(\theta_0) + \frac{2 - \theta_0}{2n\theta_0}.$$
Similarly, we get that
$$-\left[\psi(b_n) - \psi(a_n + b_n)\right] < -\log(1 - \theta_0) + \frac{2 - (1 - \theta_0)}{2n(1 - \theta_0)}.$$
Therefore it follows that
$$\max\Big(-(a-1)\left[\psi(a_n) - \psi(a_n+b_n)\right],\ -(b-1)\left[\psi(b_n) - \psi(a_n+b_n)\right]\Big) < \max\left((a-1)\left[-\log(\theta_0) + \frac{2-\theta_0}{2n\theta_0}\right],\ (b-1)\left[-\log(1-\theta_0) + \frac{2-(1-\theta_0)}{2n(1-\theta_0)}\right]\right) < C,$$
for a large enough positive constant C. Using the above bounds, we finally show that
$$C_1 - (a-1)\left[\psi(a_n) - \psi(a_n+b_n)\right] - (b-1)\left[\psi(b_n) - \psi(a_n+b_n)\right] < C_1 + 2C,$$
which can be upper bounded by $C'$ for some large constant $C'$. Finally, we upper bound $\int \log\frac{\rho_n}{U}\, \rho_n(d\theta)$ by Lemma A.2.1, thereby completing the proof. □

Appendix B. Proofs of Main Results

Appendix B.1. Proofs for A Concentration Bound for the αre-Rényi Divergence

Appendix B.1.1. Proof of Proposition 1

We start by recalling the variational formula of Donsker and Varadhan [28].
Lemma B.1.1
(Donsker–Varadhan). For any probability distribution π on Θ and any measurable function $h : \Theta \to \mathbb{R}$, if $\int e^h\, d\pi < \infty$, then
$$\log \int e^h\, d\pi = \sup_{\rho \in \mathcal{M}_+(\Theta)} \left[\int h\, d\rho - K(\rho, \pi)\right].$$
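The variational formula can be verified directly on a finite parameter set, where the supremum is attained by the Gibbs distribution $\rho^* \propto \pi\, e^h$. A numerical sketch (the prior π and function h are random illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)

# Finite Theta: pi is a prior, h an arbitrary bounded function.
pi = rng.dirichlet(np.ones(6))
h = rng.normal(size=6)

lhs = np.log(np.sum(np.exp(h) * pi))   # log integral of e^h d pi

def objective(rho):
    """integral of h d rho  minus  K(rho, pi)."""
    mask = rho > 0
    return np.sum(rho * h) - np.sum(rho[mask] * np.log(rho[mask] / pi[mask]))

# The supremum is attained by the Gibbs distribution rho* proportional
# to pi * e^h ...
rho_star = pi * np.exp(h)
rho_star /= rho_star.sum()
print(np.isclose(lhs, objective(rho_star)))   # True

# ... and any other rho gives a value no larger.
for _ in range(100):
    rho = rng.dirichlet(np.ones(6))
    assert objective(rho) <= lhs + 1e-12
```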
Now, fix $\alpha_{re} \in (0, 1)$ and $\theta \in \Theta$. First, observe that by the definition of the $\alpha_{re}$-Rényi divergence we have
$$E_{\theta_0}^{(n)}\left[\exp\left(-\alpha_{re}\, r_n(\theta, \theta_0)\right)\right] = \exp\left[-(1 - \alpha_{re})\, D_{\alpha_{re}}(P_\theta^{(n)}, P_{\theta_0}^{(n)})\right].$$
Multiplying both sides of the equation by $\exp\left[(1 - \alpha_{re})\, D_{\alpha_{re}}(P_\theta^{(n)}, P_{\theta_0}^{(n)})\right]$ and integrating with respect to (w.r.t.) $\pi(\theta)$, it follows that
$$\int E_{\theta_0}^{(n)}\left[\exp\left(-\alpha_{re}\, r_n(\theta, \theta_0) + (1 - \alpha_{re})\, D_{\alpha_{re}}(P_\theta^{(n)}, P_{\theta_0}^{(n)})\right)\right] \pi(d\theta) = 1, \quad \text{or}$$
$$E_{\theta_0}^{(n)}\left[\int \exp\left(-\alpha_{re}\, r_n(\theta, \theta_0) + (1 - \alpha_{re})\, D_{\alpha_{re}}(P_\theta^{(n)}, P_{\theta_0}^{(n)})\right) \pi(d\theta)\right] = 1.$$
Define $h(\theta) := -\alpha_{re}\, r_n(\theta, \theta_0) + (1 - \alpha_{re})\, D_{\alpha_{re}}(P_\theta^{(n)}, P_{\theta_0}^{(n)})$. Then, applying Lemma B.1.1 to the integrand on the left-hand side (l.h.s.) above, it follows that
$$E_{\theta_0}^{(n)}\left[\exp\left(\sup_{\rho \in \mathcal{M}_+(\Theta)} \left(\int h(\theta)\, \rho(d\theta) - K(\rho, \pi)\right)\right)\right] = 1.$$
Multiply both sides of this equation by $\epsilon > 0$ to obtain
$$E_{\theta_0}^{(n)}\left[\exp\left(\sup_{\rho \in \mathcal{M}_+(\Theta)} \left(\int h(\theta)\, \rho(d\theta) - K(\rho, \pi)\right) + \log(\epsilon)\right)\right] = \epsilon.$$
Now, by Markov’s inequality, we have
P θ 0 ( n ) sup ρ M + ( Θ ) ( α r e r n ( θ , θ 0 ) + ( 1 α r e ) D α r e ( P θ ( n ) , P θ 0 ( n ) ) ) ρ ( d θ ) K ( ρ , π ) + log ( ϵ ) 0 ϵ .
Thus, it follows via complementation that
$$P_{\theta_0}^{(n)}\left[\forall\, \rho \in \mathcal{F}(\Theta):\ \int D_{\alpha_{re}}(P_\theta^{(n)}, P_{\theta_0}^{(n)})\, \rho(d\theta) \le \frac{\alpha_{re}}{1 - \alpha_{re}} \int r_n(\theta, \theta_0)\, \rho(d\theta) + \frac{K(\rho, \pi) - \log(\epsilon)}{1 - \alpha_{re}}\right] \ge 1 - \epsilon,$$
thereby completing the proof.

Appendix B.1.2. Proof of Theorem 1

Recall the definitions of the fractional posterior and the VB approximation:
$$\pi_{n,\alpha_{re}|X^n}(d\theta) = \frac{\exp\left(-\alpha_{re}\, r_n(\theta, \theta_0)\right) \pi(d\theta)}{\int \exp\left(-\alpha_{re}\, r_n(\gamma, \theta_0)\right) \pi(d\gamma)}, \qquad \tilde{\pi}_{n,\alpha_{re}|X^n} = \arg\min_{\rho \in \mathcal{F}} K(\rho, \pi_{n,\alpha_{re}|X^{(n)}}).$$
It follows by definition of the KL divergence that
$$\tilde{\pi}_{n,\alpha_{re}|X^n} = \arg\min_{\rho \in \mathcal{F}} \left[\alpha_{re} \int r_n(\theta, \theta_0)\, \rho(d\theta) + K(\rho, \pi)\right],$$
where π is the prior distribution. From Proposition 1, it follows that for any $\epsilon > 0$,
$$\int D_{\alpha_{re}}(P_\theta^{(n)}, P_{\theta_0}^{(n)})\, \tilde{\pi}_{n,\alpha_{re}|X^n}(d\theta) \le \frac{\alpha_{re}}{1 - \alpha_{re}} \int r_n(\theta, \theta_0)\, \rho(d\theta) + \frac{K(\rho, \pi) - \log(\epsilon)}{1 - \alpha_{re}},$$
with probability $1 - \epsilon$, for any $\rho \in \mathcal{F}$. We fix an $\eta \in (0, 1)$. Using Chebyshev's inequality, we have
$$P_{\theta_0}^{(n)}\left[\frac{\alpha_{re}}{1-\alpha_{re}} \int r_n(\theta,\theta_0)\, \rho_n(d\theta) \ge \frac{\alpha_{re}}{1-\alpha_{re}} \int E[r_n(\theta,\theta_0)]\, \rho_n(d\theta) + \frac{\alpha_{re}}{1-\alpha_{re}} \sqrt{\frac{\mathrm{Var}\left[\int r_n(\theta,\theta_0)\, \rho_n(d\theta)\right]}{\eta}} + \frac{K(\rho_n, \pi)}{1-\alpha_{re}}\right]$$
$$= P_{\theta_0}^{(n)}\left[\frac{\alpha_{re}}{1-\alpha_{re}} \int r_n(\theta,\theta_0)\, \rho_n(d\theta) - \frac{\alpha_{re}}{1-\alpha_{re}} \int E[r_n(\theta,\theta_0)]\, \rho_n(d\theta) - \frac{K(\rho_n, \pi)}{1-\alpha_{re}} \ge \frac{\alpha_{re}}{1-\alpha_{re}} \sqrt{\frac{\mathrm{Var}\left[\int r_n(\theta,\theta_0)\, \rho_n(d\theta)\right]}{\eta}}\right]$$
$$\le \frac{\mathrm{Var}\left[\frac{\alpha_{re}}{1-\alpha_{re}} \int r_n(\theta,\theta_0)\, \rho_n(d\theta) - \frac{\alpha_{re}}{1-\alpha_{re}} \int E[r_n(\theta,\theta_0)]\, \rho_n(d\theta) - \frac{K(\rho_n, \pi)}{1-\alpha_{re}}\right]}{\frac{\alpha_{re}^2}{(1-\alpha_{re})^2}\, \frac{\mathrm{Var}\left[\int r_n(\theta,\theta_0)\, \rho_n(d\theta)\right]}{\eta}}.$$
Note that $\frac{\alpha_{re}}{1-\alpha_{re}} \int E[r_n(\theta,\theta_0)]\, \rho_n(d\theta)$ and $\frac{K(\rho_n, \pi)}{1-\alpha_{re}}$ are constants with respect to the data, implying
$$\mathrm{Var}\left[\frac{\alpha_{re}}{1-\alpha_{re}} \int r_n(\theta,\theta_0)\, \rho_n(d\theta) - \frac{\alpha_{re}}{1-\alpha_{re}} \int E[r_n(\theta,\theta_0)]\, \rho_n(d\theta) - \frac{K(\rho_n, \pi)}{1-\alpha_{re}}\right] = \frac{\alpha_{re}^2}{(1-\alpha_{re})^2}\, \mathrm{Var}\left[\int r_n(\theta,\theta_0)\, \rho_n(d\theta)\right].$$
Therefore, we have
$$P_{\theta_0}^{(n)}\left[\frac{\alpha_{re}}{1-\alpha_{re}} \int r_n(\theta,\theta_0)\, \rho_n(d\theta) \ge \frac{\alpha_{re}}{1-\alpha_{re}} \int E[r_n(\theta,\theta_0)]\, \rho_n(d\theta) + \frac{\alpha_{re}}{1-\alpha_{re}} \sqrt{\frac{\mathrm{Var}\left[\int r_n(\theta,\theta_0)\, \rho_n(d\theta)\right]}{\eta}} + \frac{K(\rho_n, \pi)}{1-\alpha_{re}}\right] \le \eta.$$
From Proposition 1, with probability $1 - \epsilon$ the following holds:
$$\int D_{\alpha_{re}}(P_\theta^{(n)}, P_{\theta_0}^{(n)})\, \tilde{\pi}_{n,\alpha_{re}|X^n}(d\theta) \le \frac{\alpha_{re} \int r_n(\theta,\theta_0)\, \rho_n(d\theta) + K(\rho_n, \pi) - \log(\epsilon)}{1 - \alpha_{re}}.$$
Therefore, with probability $1 - \eta - \epsilon$ the following statement holds (using the fact that $E[r_n(\theta, \theta_0)] = K(P_{\theta_0}^{(n)}, P_\theta^{(n)})$):
$$\int D_{\alpha_{re}}(P_\theta^{(n)}, P_{\theta_0}^{(n)})\, \tilde{\pi}_{n,\alpha_{re}|X^n}(d\theta) \le \frac{\alpha_{re}}{1-\alpha_{re}} \int K(P_{\theta_0}^{(n)}, P_\theta^{(n)})\, \rho_n(d\theta) + \frac{\alpha_{re}}{1-\alpha_{re}} \sqrt{\frac{\mathrm{Var}\left[\int r_n(\theta,\theta_0)\, \rho_n(d\theta)\right]}{\eta}} + \frac{K(\rho_n, \pi) - \log(\epsilon)}{1 - \alpha_{re}}.$$
Next, we observe that
$$\mathrm{Var}\left[\int r_n(\theta,\theta_0)\, \rho_n(d\theta)\right] = E_{\theta_0}^{(n)}\left[\left(\int r_n(\theta,\theta_0)\, \rho_n(d\theta) - E\left[\int r_n(\theta,\theta_0)\, \rho_n(d\theta)\right]\right)^2\right] \le \int \mathrm{Var}[r_n(\theta,\theta_0)]\, \rho_n(d\theta),$$
by a straightforward application of Jensen's inequality to the inner integral on the left-hand side. Finally, by hypotheses (i), (ii), and (iii), we have
$$\int D_{\alpha_{re}}(P_\theta^{(n)}, P_{\theta_0}^{(n)})\, \tilde{\pi}_{n,\alpha_{re}|X^n}(d\theta) \le \frac{\alpha_{re}}{1-\alpha_{re}} \left[\int K(P_{\theta_0}^{(n)}, P_\theta^{(n)})\, \rho_n(d\theta) + \sqrt{\frac{\int \mathrm{Var}[r_n(\theta,\theta_0)]\, \rho_n(d\theta)}{\eta}}\right] + \frac{K(\rho_n, \pi) - \log(\epsilon)}{1-\alpha_{re}} \le \frac{\alpha_{re}\left(n\epsilon_n + \sqrt{n\epsilon_n/\eta}\right)}{1-\alpha_{re}} + \frac{n\epsilon_n - \log(\epsilon)}{1-\alpha_{re}},$$
thereby concluding the proof. □

Appendix B.1.3. Proof of Proposition 2

We define $Y_i := \log\frac{p_{\theta_1}(X_i \mid X_{i-1})}{p_{\theta_2}(X_i \mid X_{i-1})}$ for $i = 1, \ldots, n$, and $Z_0 := \log\frac{q_1^{(0)}(X_0)}{q_2^{(0)}(X_0)}$. Then, using the Markov property, we can see that the Kullback–Leibler divergence between the joint distributions $P_{\theta_1}^{(n)}$ and $P_{\theta_2}^{(n)}$ satisfies $K(P_{\theta_1}^{(n)}, P_{\theta_2}^{(n)}) = \sum_{i=1}^{n} E_{\theta_1}[Y_i] + E_{\theta_1}[Z_0]$. If the Markov chain $\{X_i\}$ is stationary under $\theta_1$, so is $\{Y_i\}$. Hence $Y_i \stackrel{d}{=} Y_1$ and the above equation reduces to
$$K(P_{\theta_1}^{(n)}, P_{\theta_2}^{(n)}) = n\, E_{\theta_1}[Y_1] + E_{\theta_1}[Z_0]. \qquad \square$$
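The identity can be checked by brute force on a small chain: enumerate all paths of length n, compute the KL divergence between the two path distributions directly, and compare with $n\,E_{\theta_1}[Y_1] + E_{\theta_1}[Z_0]$. A sketch with two arbitrary two-state chains, each started from its own invariant law:

```python
import itertools
import numpy as np

def invariant(P):
    """Invariant distribution as the leading left eigenvector of P."""
    w, v = np.linalg.eig(P.T)
    p = np.real(v[:, np.argmax(np.real(w))])
    return p / p.sum()

P1 = np.array([[0.7, 0.3], [0.4, 0.6]])
P2 = np.array([[0.5, 0.5], [0.2, 0.8]])
q1, q2 = invariant(P1), invariant(P2)
n = 8  # number of transitions; paths have n + 1 states

# Direct KL between the two path distributions, by enumerating all paths.
kl_direct = 0.0
for path in itertools.product((0, 1), repeat=n + 1):
    p = q1[path[0]] * np.prod([P1[path[t - 1], path[t]] for t in range(1, n + 1)])
    q = q2[path[0]] * np.prod([P2[path[t - 1], path[t]] for t in range(1, n + 1)])
    kl_direct += p * np.log(p / q)

# Closed form n * E[Y_1] + E[Z_0] from the proposition.
EY1 = sum(q1[x] * P1[x, y] * np.log(P1[x, y] / P2[x, y])
          for x in (0, 1) for y in (0, 1))
EZ0 = sum(q1[x] * np.log(q1[x] / q2[x]) for x in (0, 1))
print(kl_direct, n * EY1 + EZ0)   # equal
```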

Appendix B.1.4. Proof of Proposition 3

First, recall the following result from [19].
Lemma B.1.2.
[19] (Lemma 1.2) Let $\{X_t\}$ be an α-mixing Markov chain with α-mixing coefficients $\alpha_k$. Let $\mathcal{M}_a^b$ be the sigma-field generated by the subsequence $(X_a, X_{a+1}, \ldots, X_b)$. Let $\eta_t \in \mathcal{M}_{-\infty}^{t}$ and $\tau_t \in \mathcal{M}_{t+k}^{\infty}$ be adapted random variables such that $|\eta_t| \le 1$ and $|\tau_t| \le 1$. Then,
$$\sup_t\ \sup_{\eta_t, \tau_t} \left|E[\eta_t \tau_t] - E[\eta_t] E[\tau_t]\right| \le 4\alpha_k.$$
This lemma provides an upper bound on the covariance of the random variables $\eta_t$ and $\tau_t$, as shown next.
Lemma B.1.3.
Let $\eta \in \mathcal{M}_{-\infty}^{t}$ and $\tau \in \mathcal{M}_{t+k}^{\infty}$ be such that $E|\eta|^{2+\delta} \le C_1$ and $E|\tau|^{2+\delta} \le C_2$ for some $\delta > 0$. Then, for a fixed $n < +\infty$, we have
$$\left|E[\eta\tau] - E[\eta]E[\tau]\right| \le \left(4n + \frac{2}{n^{\delta/2}}(C_1 + C_2) + \frac{2}{n^{\delta/2}}\sqrt{C_1 C_2}\right) \alpha_k^{\delta/(2+\delta)}.$$
Proof. 
Let $N < +\infty$ be a fixed number. We get from the triangle inequality that
$$\left|E[\eta\tau] - E[\eta]E[\tau]\right| \le \left|E\big[\eta\tau \mathbb{I}_{[|\eta| \le N, |\tau| \le N]}\big] - E\big[\eta \mathbb{I}_{[|\eta| \le N]}\big]E\big[\tau \mathbb{I}_{[|\tau| \le N]}\big]\right| + \left|E\big[\eta\tau \mathbb{I}_{[|\eta| > N, |\tau| \le N]}\big] - E\big[\eta \mathbb{I}_{[|\eta| > N]}\big]E\big[\tau \mathbb{I}_{[|\tau| \le N]}\big]\right| + \left|E\big[\eta\tau \mathbb{I}_{[|\eta| \le N, |\tau| > N]}\big] - E\big[\eta \mathbb{I}_{[|\eta| \le N]}\big]E\big[\tau \mathbb{I}_{[|\tau| > N]}\big]\right| + \left|E\big[\eta\tau \mathbb{I}_{[|\eta| > N, |\tau| > N]}\big] - E\big[\eta \mathbb{I}_{[|\eta| > N]}\big]E\big[\tau \mathbb{I}_{[|\tau| > N]}\big]\right|.$$
Multiplying and dividing the first term by $N^2$ and applying Lemma B.1.2, we get $\left|E\big[\eta\tau \mathbb{I}_{[|\eta| \le N, |\tau| \le N]}\big] - E\big[\eta \mathbb{I}_{[|\eta| \le N]}\big]E\big[\tau \mathbb{I}_{[|\tau| \le N]}\big]\right| \le 4N^2 \alpha_k$. For the second term, if $|\tau| \le N$, then $-N \le \tau \le N$. Plugging this into the second term we get
$$\left|E\big[\eta\tau \mathbb{I}_{[|\eta| > N, |\tau| \le N]}\big] - E\big[\eta \mathbb{I}_{[|\eta| > N]}\big]E\big[\tau \mathbb{I}_{[|\tau| \le N]}\big]\right| \le N\, E\big[|\eta| \mathbb{I}_{[|\eta| > N]}\big] + N\, E\big[|\eta| \mathbb{I}_{[|\eta| > N]}\big] = 2N\, E\big[|\eta| \mathbb{I}_{[|\eta| > N]}\big]. \tag{A18, A19}$$
Since $|\eta| > N$, we have $1 \le \frac{|\eta|^{1+\delta}}{N^{1+\delta}}$. Following this,
$$2N\, E\big[|\eta| \mathbb{I}_{[|\eta| > N]}\big] \le 2N\, E\left[\frac{|\eta|^{2+\delta}}{N^{1+\delta}} \mathbb{I}_{[|\eta| > N]}\right] \le \frac{2N}{N^{1+\delta}}\, E|\eta|^{2+\delta} \le \frac{2C_1}{N^{\delta}}. \tag{A20, A21}$$
Similarly, for the third term, $\left|E\big[\eta\tau \mathbb{I}_{[|\eta| \le N, |\tau| > N]}\big] - E\big[\eta \mathbb{I}_{[|\eta| \le N]}\big]E\big[\tau \mathbb{I}_{[|\tau| > N]}\big]\right| \le \frac{2C_2}{N^{\delta}}$. Finally, for the last term we get, by the Cauchy–Schwarz inequality,
$$\left|E\big[\eta\tau \mathbb{I}_{[|\eta| > N, |\tau| > N]}\big] - E\big[\eta \mathbb{I}_{[|\eta| > N]}\big]E\big[\tau \mathbb{I}_{[|\tau| > N]}\big]\right| \le \sqrt{\mathrm{Var}\big[\eta \mathbb{I}_{[|\eta| > N]}\big]\, \mathrm{Var}\big[\tau \mathbb{I}_{[|\tau| > N]}\big]} \le 2\sqrt{E\big[\eta^2 \mathbb{I}_{[|\eta| > N]}\big]\, E\big[\tau^2 \mathbb{I}_{[|\tau| > N]}\big]}.$$
Since | η | > N , 1 < | η | δ N δ . Similarly, 1 < | τ | δ N δ . Plugging these in the previous equation, we get,
E η 2 I [ | η | N ] E τ 2 I [ | τ | N ] 1 N 2 δ E | η | 2 + δ I [ | η | N ] E | τ | 2 + δ I [ | τ | N ]
1 N δ C 1 C 2 .
Combining the four upper bounds above, we get,
| E η τ E η E τ | 4 N 2 α k + 2 N δ ( C 1 + C 2 ) + 2 N δ C 1 C 2 .
Now, in particular, setting N = n 1 / 2 α k 1 / ( 2 + δ ) it follows that
| E η τ E η E τ | 4 n α k δ / ( 2 + δ ) + 2 n δ / 2 α k δ / ( 2 + δ ) ( C 1 + C 2 ) + 2 n δ / 2 α k δ / ( 2 + δ ) C 1 C 2
= 4 n + 2 n δ / 2 ( C 1 + C 2 ) + 2 n δ / 2 C 1 C 2 α k δ / ( 2 + δ ) .
Lemma B.1.4.
Let $\{X_t\}$ be an α-mixing Markov chain with mixing coefficients $\alpha_k$. Further assume that $E|X_t|^{2+\delta} \leq C_1$ and $E|X_{t+k}|^{2+\delta} \leq C_2$ for some $\delta > 0$. Then, for any $t$ and any $n > 0$,
$$|\mathrm{Cov}(X_t, X_{t+k})| \leq \left(\frac{4}{n} + 2n^{\delta/2}(C_1 + C_2) + 2n^{\delta/2}\sqrt{C_1 C_2}\right)\alpha_k^{\delta/(2+\delta)}.$$
Proof. 
Set $\eta = X_t$ and $\tau = X_{t+k}$ in Lemma B.1.3. □
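Lemma B.1.4 says that, for a mixing chain with bounded $(2+\delta)$-moments, the lag-$k$ autocovariance is controlled by the mixing coefficient $\alpha_k$. For a concrete geometrically ergodic example — a Gaussian AR(1) chain $X_t = \theta X_{t-1} + \epsilon_t$, whose lag-$k$ autocovariance is exactly $\theta^k/(1-\theta^2)$ — a quick simulation confirms the geometric decay that the lemma permits. The chain and sample sizes below are illustrative choices, not taken from the paper.

```python
import random

def ar1_path(theta, n, seed=0):
    """Simulate a stationary Gaussian AR(1) chain X_t = theta*X_{t-1} + eps_t."""
    rng = random.Random(seed)
    # Draw X_0 from the stationary law N(0, 1/(1-theta^2)).
    x = rng.gauss(0.0, (1.0 / (1.0 - theta**2)) ** 0.5)
    path = [x]
    for _ in range(n - 1):
        x = theta * x + rng.gauss(0.0, 1.0)
        path.append(x)
    return path

def lag_cov(path, k):
    """Empirical lag-k autocovariance of a sample path."""
    n = len(path) - k
    m = sum(path) / len(path)
    return sum((path[t] - m) * (path[t + k] - m) for t in range(n)) / n

theta = 0.5
path = ar1_path(theta, 200_000)
for k in [1, 3, 6]:
    exact = theta**k / (1.0 - theta**2)
    print(k, round(lag_cov(path, k), 3), round(exact, 3))
```

The empirical autocovariances shrink geometrically in $k$, matching $\theta^k/(1-\theta^2)$ — the qualitative behaviour that Lemma B.1.4 converts into an explicit bound via $\alpha_k$.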
We also need to establish the following technical lemma.
Lemma B.1.5.
Let $\{X_t\}$ be an α-mixing Markov chain with mixing coefficients $\{\alpha_t\}$. Then the process $\{Y_t\}$, where $Y_t := \log \frac{p_{\theta_0}(X_t|X_{t-1})}{p_{\theta}(X_t|X_{t-1})}$, is also α-mixing with mixing coefficients $\{\tilde{\alpha}_t\}$, where $\tilde{\alpha}_t = \alpha_{t-1}$.
Proof. 
Let $Z_i$ denote the paired random variable $(X_i, X_{i-1})$. Let $\mathcal{M}_i^j$ denote the sigma-field generated by $X_k$ for $i \leq k \leq j$, and let $\mathcal{G}_i^j$ denote the sigma-field generated by $Z_k$ for $i \leq k \leq j$. Let $C \in \mathcal{M}_{i-1}^j$ be a rectangle; then $C$ can be expressed as $C_{i-1} \times C_i \times \cdots \times C_j$ for $C_{i-1} \in \mathcal{M}_{i-1}^{i-1}$, $C_i \in \mathcal{M}_i^i$, and so on. Now consider the map
$$T_i^j : C_{i-1} \times C_i \times \cdots \times C_j \mapsto (C_{i-1} \times C_i) \times (C_i \times C_{i+1}) \times \cdots \times (C_{j-1} \times C_j).$$
Note that $T_i^j(C) \in \mathcal{G}_i^j$. It is easy to see that $\mathcal{G}_i^j = T_i^j(\mathcal{M}_{i-1}^j) \cup \mathcal{M}_{i-1}^{*j}$, where $T_i^j(\mathcal{M}_{i-1}^j)$ is obtained by applying the map $T_i^j$ to each element of $\mathcal{M}_{i-1}^j$. If we take this latter set as the range and $\mathcal{M}_{i-1}^j$ as the domain, then, by construction, $T_i^j$ is a bijection. Furthermore, the two classes consist of disjoint sets; that is, if $A \in T_i^j(\mathcal{M}_{i-1}^j)$ and $A^* \in \mathcal{M}_{i-1}^{*j}$, then $A \cap A^* = \emptyset$. Note also that $\mathcal{M}_{i-1}^{*j}$ consists of impossible sets: $P(A^*) = 0$ for all $A^* \in \mathcal{M}_{i-1}^{*j}$, since such sets violate the overlap constraint that the second coordinate of $Z_t$ equals the first coordinate of $Z_{t+1}$. Now consider the α-mixing coefficients of $\{Z_i\}$. By definition,
$$\alpha_k^z = \sup_i \sup_{A \in \mathcal{G}_{-\infty}^i,\, B \in \mathcal{G}_{i+k}^{\infty}} |P(A \cap B) - P(A)P(B)| = \sup_i \sup_{A \in \mathcal{G}_{-\infty}^i,\, B \in \mathcal{G}_{i+k}^{\infty}} |P((A_o \cup A^*) \cap (B_o \cup B^*)) - P(A_o \cup A^*)P(B_o \cup B^*)|,$$
where
$$A = A_o \cup A^*, \quad B = B_o \cup B^*, \quad A_o \in T_i(\mathcal{M}_{-\infty}^i),\; A^* \in \mathcal{M}_{-\infty}^{*i}, \quad B_o \in T_{i+k}(\mathcal{M}_{i+k-1}^{\infty}),\; B^* \in \mathcal{M}_{i+k-1}^{*\infty}.$$
Since the starred sets have probability zero, the expression for the α-mixing coefficient reduces to
$$\alpha_k^z = \sup_i \sup_{A_o \in T_i(\mathcal{M}_{-\infty}^i),\, B_o \in T_{i+k}(\mathcal{M}_{i+k-1}^{\infty})} |P(A_o \cap B_o) - P(A_o)P(B_o)|.$$
By the bijection property of $T_i^j$, we can find $A \in \mathcal{M}_{-\infty}^i$ and $B \in \mathcal{M}_{i+k-1}^{\infty}$ such that
$$\alpha_k^z = \sup_i \sup_{A \in \mathcal{M}_{-\infty}^i,\, B \in \mathcal{M}_{i+k-1}^{\infty}} |P(T_i(A) \cap T_{i+k}(B)) - P(T_i(A))P(T_{i+k}(B))| = \alpha_{k-1}.$$
Now, $\log \frac{p_{\theta_0}(X_n|X_{n-1})}{p_{\theta}(X_n|X_{n-1})}$ is a function of the paired Markov chain $Z_n$; therefore, it has α-mixing coefficients $\alpha_{k-1}$. □
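The pairing device in the proof above is easy to see computationally: the log-ratio $Y_t$ is a deterministic function of the pair $Z_t = (X_{t-1}, X_t)$ alone, so any dependence in $\{Y_t\}$ is inherited from $\{Z_t\}$, which loses one lag relative to $\{X_t\}$. A small two-state illustration (the transition kernels are arbitrary choices for the demo, not from the paper):

```python
import math
import random

# Two-state transition kernels p(x_t | x_{t-1}) under theta_0 and theta.
P0 = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.3, 1: 0.7}}   # kernel under theta_0
P1 = {0: {0: 0.6, 1: 0.4}, 1: {0: 0.5, 1: 0.5}}   # kernel under theta

rng = random.Random(1)
x = 0
chain = [x]
for _ in range(10_000):
    x = 0 if rng.random() < P0[x][0] else 1
    chain.append(x)

# Y_t = log p_{theta_0}(X_t | X_{t-1}) - log p_theta(X_t | X_{t-1})
# is a function of the pair Z_t = (X_{t-1}, X_t) alone.
Y = [math.log(P0[a][b]) - math.log(P1[a][b])
     for a, b in zip(chain, chain[1:])]

distinct = {round(y, 12) for y in Y}
print(len(distinct))  # at most 4 values: one per possible pair (a, b)
```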
We now proceed to the proof of Proposition 3. Let $\{X_k\}$ be an α-mixing Markov chain under $\theta_1$ with mixing coefficients $\{\alpha_k\}$. Observe that the log-likelihood ratio can be expressed as
$$r_n(\theta_2, \theta_1) = \sum_{i=1}^n \log \frac{p_{\theta_1}(X_i|X_{i-1})}{p_{\theta_2}(X_i|X_{i-1})} + \log \frac{q_1^{(0)}(X_0)}{q_2^{(0)}(X_0)} \equiv \sum_{i=1}^n Y_i + Z_0.$$
Therefore, the variance of the log-likelihood ratio is simply
$$\mathrm{Var}_{\theta_1}[r_n(\theta_2,\theta_1)] = \mathrm{Var}_{\theta_1}\left[\sum_{i=1}^n Y_i + Z_0\right] = \sum_{i,j=1}^n \mathrm{Cov}_{\theta_1}(Y_i, Y_j) + 2\sum_{i=1}^n \mathrm{Cov}_{\theta_1}(Y_i, Z_0) + \mathrm{Var}_{\theta_1}(Z_0).$$
It follows from Lemma B.1.5 that { Y k } is a stochastic process with α -mixing coefficients α k 1 . Therefore, using Lemma B.1.4 we have
$$|\mathrm{Cov}_{\theta_1}(Y_i, Y_j)| = |E_{\theta_1}[Y_i Y_j] - E_{\theta_1}[Y_i]E_{\theta_1}[Y_j]| < \left(\frac{4}{n} + 2n^{\delta/2}\left(E_{\theta_1}|Y_i|^{2+\delta} + E_{\theta_1}|Y_j|^{2+\delta} + \sqrt{E_{\theta_1}|Y_i|^{2+\delta}\, E_{\theta_1}|Y_j|^{2+\delta}}\right)\right)\alpha_{|j-i|-1}^{\delta/(2+\delta)} = \left(\frac{4}{n} + 2n^{\delta/2}\left(C_{\theta_1,\theta_2}(i) + C_{\theta_1,\theta_2}(j) + \sqrt{C_{\theta_1,\theta_2}(i)\,C_{\theta_1,\theta_2}(j)}\right)\right)\alpha_{|j-i|-1}^{\delta/(2+\delta)}.$$
Similarly, we can also write
$$|\mathrm{Cov}_{\theta_1}(Y_i, Z_0)| < \left(\frac{4}{n} + 2n^{\delta/2}\left(C_{\theta_1,\theta_2}(i) + D_{1,2} + \sqrt{C_{\theta_1,\theta_2}(i)\,D_{1,2}}\right)\right)\alpha_{i-1}^{\delta/(2+\delta)}.$$
Combining the two upper bounds above, we get the first result:
$$\mathrm{Var}_{\theta_1}[r_n(\theta_2,\theta_1)] < \sum_{i,j=1}^n \left(\frac{4}{n} + 2n^{\delta/2}\left(C_{\theta_1,\theta_2}(i) + C_{\theta_1,\theta_2}(j) + \sqrt{C_{\theta_1,\theta_2}(i)\,C_{\theta_1,\theta_2}(j)}\right)\right)\alpha_{|i-j|-1}^{\delta/(2+\delta)} + \sum_{i=1}^n \left(\frac{4}{n^2} + 2n^{\delta/2}\left(C_{\theta_1,\theta_2}(i) + D_{1,2} + \sqrt{C_{\theta_1,\theta_2}(i)\,D_{1,2}}\right)\right)\alpha_{i-1}^{\delta/(2+\delta)} + \mathrm{Var}_{\theta_1}(Z_0).$$
If $\{X_i\}$ is stationary under $\theta_1$, so is $\{Y_i\}$. Therefore, $E_{\theta_1}|Y_i|^{2+\delta} = E_{\theta_1}|Y_1|^{2+\delta} = C_{\theta_1,\theta_2}(1)$ for all $i$, and
$$\sum_{i,j=1}^n \mathrm{Cov}_{\theta_1}(Y_i, Y_j) \leq \sum_{i,j=1}^n \left(\frac{4}{n} + 6n^{\delta/2}\,C_{\theta_1,\theta_2}(1)\right)\alpha_{|j-i|-1}^{\delta/(2+\delta)} \leq n\left(\frac{4}{n} + 6n^{\delta/2}\,C_{\theta_1,\theta_2}(1)\right)\sum_{h \geq 1}\alpha_{h-1}^{\delta/(2+\delta)}.$$
Again, using Lemma B.1.4 on $\mathrm{Cov}_{\theta_1}(Y_i, Z_0)$ yields
$$\sum_{i=1}^n \mathrm{Cov}_{\theta_1}(Y_i, Z_0) \leq \left(\frac{4}{n} + 2n^{\delta/2}\left(C_{\theta_1,\theta_2}(1) + D_{1,2} + \sqrt{C_{\theta_1,\theta_2}(1)\,D_{1,2}}\right)\right)\sum_{h \geq 1}\alpha_h^{\delta/(2+\delta)}.$$
Finally, using Equations (A31) and (A32), we have
$$\mathrm{Var}_{\theta_1}[r_n(\theta_2,\theta_1)] \leq n\left(\frac{4}{n} + 6n^{\delta/2}\,C_{\theta_1,\theta_2}(1)\right)\sum_{h \geq 1}\alpha_{h-1}^{\delta/(2+\delta)} + \left(\frac{4}{n} + 2n^{\delta/2}\left(C_{\theta_1,\theta_2}(1) + D_{1,2} + \sqrt{C_{\theta_1,\theta_2}(1)\,D_{1,2}}\right)\right)\sum_{h \geq 1}\alpha_h^{\delta/(2+\delta)} + \mathrm{Var}_{\theta_1}(Z_0). \;\square$$
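Proposition 3's conclusion is that the variance of the log-likelihood ratio grows at most linearly in $n$ when the mixing coefficients are summable. For a Gaussian AR(1) chain (a case treated later, in Proposition 10), this linear growth can be checked by simulating independent replications of $r_n$ and comparing its variance at two sample sizes. All constants below are illustrative choices, not from the paper.

```python
import random

def rn(theta1, theta2, n, rng):
    """Log-likelihood ratio r_n(theta2, theta1) for a Gaussian AR(1) chain
    generated under theta1 (stationary start, unit innovation variance)."""
    x = rng.gauss(0.0, (1.0 / (1.0 - theta1**2)) ** 0.5)
    total = 0.0
    for _ in range(n):
        x_new = theta1 * x + rng.gauss(0.0, 1.0)
        # log p_{theta1}(x_new | x) - log p_{theta2}(x_new | x)
        total += 0.5 * ((x_new - theta2 * x) ** 2 - (x_new - theta1 * x) ** 2)
        x = x_new
    return total

def var_rn(n, reps=400, seed=2):
    """Sample variance of r_n over independent replications."""
    rng = random.Random(seed)
    vals = [rn(0.5, 0.4, n, rng) for _ in range(reps)]
    m = sum(vals) / reps
    return sum((v - m) ** 2 for v in vals) / (reps - 1)

v1, v2 = var_rn(500), var_rn(1000)
print(round(v2 / v1, 2))  # ratio near 2: variance grows roughly linearly in n
```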

Appendix B.2. Proofs for Stationary Markov Data-Generating Models

Proof of Theorem 2

Part 1: Verifying condition (i) of Corollary 1.
We substitute the true parameter θ 0 for θ 1 and θ for θ 2 . We also set q 1 ( 0 ) to be the invariant distribution of the Markov chain under θ 0 , q 0 , and q 2 ( 0 ) as the invariant distribution of the Markov chain under θ , q θ . Applying the fact that these Markov chains are stationary to Proposition 2, we have
$$K(P_{\theta_0}^{(n)}, P_{\theta}^{(n)}) = n\,E\left[\log \frac{p_{\theta_0}(X_1|X_0)}{p_{\theta}(X_1|X_0)}\right] + E[Z_0] \leq n\sum_{j=1}^m E\left[M_j^{(1)}(X_1,X_0)\right]|f_j^{(1)}(\theta,\theta_0)| + \sum_{k=1}^m E\left[M_k^{(2)}(X_0)\right]|f_k^{(2)}(\theta,\theta_0)|,$$
where the inequality follows from Assumption 1. Therefore, it follows that
$$\int K(P_{\theta_0}^{(n)}, P_{\theta}^{(n)})\,\rho_n(d\theta) \leq n\sum_{j=1}^m E\left[M_j^{(1)}(X_1,X_0)\right]\int |f_j^{(1)}(\theta,\theta_0)|\,\rho_n(d\theta) + \sum_{k=1}^m E\left[M_k^{(2)}(X_0)\right]\int |f_k^{(2)}(\theta,\theta_0)|\,\rho_n(d\theta).$$
By Assumption 1(i), it follows that
$$\int K(P_{\theta_0}^{(n)}, P_{\theta}^{(n)})\,\rho_n(d\theta) \leq n\sum_{j=1}^m E\left[M_j^{(1)}(X_1,X_0)\right]\frac{C}{n} + \sum_{k=1}^m E\left[M_k^{(2)}(X_0)\right]\frac{C}{n} \equiv n\epsilon_n^{(1)},$$
where $\epsilon_n^{(1)} = O\left(\frac{1}{n}\right)$.
Part 2: Verifying condition (ii) of Corollary 1. Again, using Proposition 3 along with the fact that the Markov chain is stationary, we have
$$\mathrm{Var}[r_n(\theta,\theta_0)] \leq n\left(\frac{4}{n} + 6n^{\delta/2}\,C_{\theta_0,\theta}(1)\right)\sum_{k \geq 0}\alpha_k^{\delta/(2+\delta)} + \left(\frac{4}{n^2} + 2n^{\delta/2}\left(C_{\theta_0,\theta}(1) + D_{\theta_0,\theta} + \sqrt{C_{\theta_0,\theta}(1)\,D_{\theta_0,\theta}}\right)\right)\sum_{k \geq 1}\alpha_k^{\delta/(2+\delta)} + \mathrm{Var}[Z_0].$$
It then follows that
$$\int \mathrm{Var}[r_n(\theta,\theta_0)]\,\rho_n(d\theta) \leq n\left(\frac{4}{n} + 6n^{\delta/2}\int C_{\theta_0,\theta}(1)\,\rho_n(d\theta)\right)\sum_{k \geq 1}\alpha_{k-1}^{\delta/(2+\delta)} + \int \mathrm{Var}[Z_0]\,\rho_n(d\theta) + \left(\frac{4}{n^2} + 2n^{\delta/2}\left(\int C_{\theta_0,\theta}(1)\,\rho_n(d\theta) + \int D_{\theta_0,\theta}\,\rho_n(d\theta) + \int \sqrt{C_{\theta_0,\theta}(1)\,D_{\theta_0,\theta}}\,\rho_n(d\theta)\right)\right)\sum_{k \geq 1}\alpha_k^{\delta/(2+\delta)}.$$
First, consider the term $\int C_{\theta_0,\theta}(1)\,\rho_n(d\theta)$, and observe that
$$\int C_{\theta_0,\theta}(1)\,\rho_n(d\theta) = \int E\left|\log \frac{p_{\theta_0}(X_1|X_0)}{p_{\theta}(X_1|X_0)}\right|^{2+\delta}\rho_n(d\theta).$$
By Assumption 1, we have
$$\int E\left|\log \frac{p_{\theta_0}(X_1|X_0)}{p_{\theta}(X_1|X_0)}\right|^{2+\delta}\rho_n(d\theta) \leq \int E\left[\left(\sum_{k=1}^m M_k^{(1)}(X_1,X_0)\,|f_k^{(1)}(\theta,\theta_0)|\right)^{2+\delta}\right]\rho_n(d\theta).$$
Since the function $x \mapsto x^{2+\delta}$ is convex, we can apply Jensen's inequality to obtain
$$\left(\sum_{k=1}^m M_k^{(1)}(X_1,X_0)\,|f_k^{(1)}(\theta,\theta_0)|\right)^{2+\delta} \leq m^{1+\delta}\sum_{k=1}^m M_k^{(1)}(X_1,X_0)^{2+\delta}\,|f_k^{(1)}(\theta,\theta_0)|^{2+\delta}.$$
Therefore, it follows that
$$\int E\left|\log \frac{p_{\theta_0}(X_1|X_0)}{p_{\theta}(X_1|X_0)}\right|^{2+\delta}\rho_n(d\theta) \leq m^{1+\delta}\sum_{k=1}^m E\left[M_k^{(1)}(X_1,X_0)^{2+\delta}\right]\int |f_k^{(1)}(\theta,\theta_0)|^{2+\delta}\,\rho_n(d\theta).$$
By Assumption 1, $\int |f_k^{(1)}(\theta,\theta_0)|^{2+\delta}\,\rho_n(d\theta) < \frac{C}{n}$ and $E[M_k^{(1)}(X_1,X_0)^{2+\delta}] < B$, implying that
$$\int C_{\theta_0,\theta}(1)\,\rho_n(d\theta) \leq m^{1+\delta}\sum_{k=1}^m B\,\frac{C}{n} = m^{2+\delta}\,\frac{BC}{n}.$$
Since $\sum_{k \geq 0}\alpha_k^{\delta/(2+\delta)} < \infty$, it follows that $\left(\frac{4}{n} + 6n^{\delta/2}\int C_{\theta_0,\theta}(1)\,\rho_n(d\theta)\right)\sum_{k \geq 1}\alpha_{k-1}^{\delta/(2+\delta)} = O\left(\frac{n^{\delta/2}}{n}\right)$. Similarly, we can show that $\int D_{\theta_0,\theta}\,\rho_n(d\theta) = O\left(\frac{1}{n}\right)$ and $\int \mathrm{Var}[Z_0]\,\rho_n(d\theta) = O\left(\frac{1}{n}\right)$.
For the final term $\int \sqrt{C_{\theta_0,\theta}(1)\,D_{\theta_0,\theta}}\,\rho_n(d\theta)$, use the Cauchy–Schwarz inequality to obtain the upper bound $\left(\int C_{\theta_0,\theta}(1)\,\rho_n(d\theta)\int D_{\theta_0,\theta}\,\rho_n(d\theta)\right)^{1/2}$, which is also of order $O\left(\frac{1}{n}\right)$. Combining all of these together, we have
$$\int \mathrm{Var}[r_n(\theta,\theta_0)]\,\rho_n(d\theta) \leq n\epsilon_n^{(2)},$$
for some $\epsilon_n^{(2)} = O\left(\frac{n^{\delta/2}}{n}\right)$.
Since $K(\rho_n, \pi) < \sqrt{n}\,C = n\frac{C}{\sqrt{n}}$, it follows that $K(\rho_n, \pi) < n\epsilon_n^{(3)}$, where $\epsilon_n^{(3)} = O\left(\frac{1}{\sqrt{n}}\right)$. Finally, by choosing $\epsilon_n = \max(\epsilon_n^{(1)}, \epsilon_n^{(2)}, \epsilon_n^{(3)})$, our theorem is proved. □

Appendix B.3. Proofs for Non-Stationary, Ergodic Markov Data-Generating Models

Appendix B.3.1. Proof of Theorem 3

Part 1: Verifying condition (i) of Corollary 1: As in the proof of Theorem 2, substitute the true parameter $\theta_0$ for $\theta_1$ and $\theta$ for $\theta_2$. We also set $q_1^{(0)}$ and $q_2^{(0)}$ to the common initial distribution $q^{(0)}$. Applying Proposition 2 to the corresponding transition kernels and initial distribution, we have
$$K(P_{\theta_0}^{(n)}, P_{\theta}^{(n)}) = \sum_{i=1}^n E\left[\log \frac{p_{\theta_0}(X_i|X_{i-1})}{p_{\theta}(X_i|X_{i-1})}\right] + E\left[\log \frac{q^{(0)}(X_0)}{q^{(0)}(X_0)}\right] = \sum_{i=1}^n E\left[\log \frac{p_{\theta_0}(X_i|X_{i-1})}{p_{\theta}(X_i|X_{i-1})}\right].$$
Now, applying Assumption 1, we can bound the previous equation as follows,
$$K(P_{\theta_0}^{(n)}, P_{\theta}^{(n)}) \leq \sum_{i=1}^n E\left[\sum_{k=1}^m M_k^{(1)}(X_i,X_{i-1})\,|f_k^{(1)}(\theta,\theta_0)|\right] = \sum_{i=1}^n \sum_{k=1}^m E\left[M_k^{(1)}(X_i,X_{i-1})\right]|f_k^{(1)}(\theta,\theta_0)|.$$
Since the $M_k^{(1)}$'s are bounded, there exists a constant $Q$ such that
$$\int K(P_{\theta_0}^{(n)}, P_{\theta}^{(n)})\,\rho_n(d\theta) \leq Q\sum_{i=1}^n \sum_{k=1}^m \int |f_k^{(1)}(\theta,\theta_0)|\,\rho_n(d\theta) = Qn\sum_{k=1}^m \int |f_k^{(1)}(\theta,\theta_0)|\,\rho_n(d\theta).$$
By Assumption 1(i), it follows that
$$\int K(P_{\theta_0}^{(n)}, P_{\theta}^{(n)})\,\rho_n(d\theta) \leq Qn\sum_{k=1}^m \frac{C}{n} = n\,mQ\,\frac{C}{n} = n\epsilon_n^{(1)},$$
for some $\epsilon_n^{(1)} = O\left(\frac{1}{n}\right)$.
Part 2: Verifying condition (ii) of Corollary 1: As in the previous part, $Z_0 = 0$, implying that $D_{\theta,\theta_0} = 0$. Applying Proposition 3 and integrating with respect to $\rho_n$, we obtain
$$\int \mathrm{Var}[r_n(\theta,\theta_0)]\,\rho_n(d\theta) \leq \sum_{i=1}^n \left(\frac{4}{n} + 2n^{\delta/2}\int C_{\theta_0,\theta}(i)\,\rho_n(d\theta)\right)\alpha_{i-1}^{\delta/(2+\delta)} + \sum_{i,j=1}^n \left(\frac{4}{n} + 2n^{\delta/2}\left(\int C_{\theta_0,\theta}(i)\,\rho_n(d\theta) + \int C_{\theta_0,\theta}(j)\,\rho_n(d\theta) + \int \sqrt{C_{\theta_0,\theta}(i)\,C_{\theta_0,\theta}(j)}\,\rho_n(d\theta)\right)\right)\alpha_{|i-j|-1}^{\delta/(2+\delta)}.$$
First, consider the term $\int C_{\theta_0,\theta}(i)\,\rho_n(d\theta)$. Using Assumption 1, we can upper bound $C_{\theta_0,\theta}(i)$ as
$$C_{\theta_0,\theta}(i) \leq E\left[\left(\sum_{k=1}^m M_k^{(1)}(X_i,X_{i-1})\,|f_k^{(1)}(\theta,\theta_0)|\right)^{2+\delta}\right] \leq m^{1+\delta}\sum_{k=1}^m E\left[\left(M_k^{(1)}(X_i,X_{i-1})\,|f_k^{(1)}(\theta,\theta_0)|\right)^{2+\delta}\right] \quad (\text{by Jensen's inequality}),$$
and the right-hand side equals $m^{1+\delta}\sum_{k=1}^m E\left[M_k^{(1)}(X_i,X_{i-1})^{2+\delta}\right]|f_k^{(1)}(\theta,\theta_0)|^{2+\delta}$. Since the $M_k^{(1)}$'s are upper bounded by $Q$, it follows from the previous expression that $C_{\theta_0,\theta}(i) \leq m^{1+\delta}\sum_{k=1}^m Q^{2+\delta}\,|f_k^{(1)}(\theta,\theta_0)|^{2+\delta}$.
Hence, from Assumption 1, we get
$$\int C_{\theta_0,\theta}(i)\,\rho_n(d\theta) \leq m^{1+\delta}\sum_{k=1}^m Q^{2+\delta}\int |f_k^{(1)}(\theta,\theta_0)|^{2+\delta}\,\rho_n(d\theta) \leq (mQ)^{2+\delta}\,\frac{C}{n}.$$
Using the upper bound above, for $L$ large enough, $\int C_{\theta_0,\theta}(i)\,\rho_n(d\theta) \leq \frac{L}{n}$. Next, by the Cauchy–Schwarz inequality, $\int \sqrt{C_{\theta_0,\theta}(i)\,C_{\theta_0,\theta}(j)}\,\rho_n(d\theta) < \left(\int C_{\theta_0,\theta}(i)\,\rho_n(d\theta)\int C_{\theta_0,\theta}(j)\,\rho_n(d\theta)\right)^{1/2} \leq \frac{L}{n}$. Thus, we have the following upper bound:
$$\int \mathrm{Var}[r_n(\theta,\theta_0)]\,\rho_n(d\theta) \leq \sum_{i=1}^n \left(\frac{4}{n} + 2n^{\delta/2}\,\frac{L}{n}\right)\alpha_{i-1}^{\delta/(2+\delta)} + \sum_{i,j=1}^n \left(\frac{4}{n} + 2n^{\delta/2}\left(\frac{L}{n} + \frac{L}{n} + \frac{L}{n}\right)\right)\alpha_{|i-j|-1}^{\delta/(2+\delta)} = \left(\frac{4}{n} + 2n^{\delta/2}\,\frac{L}{n}\right)\sum_{i=1}^n \alpha_{i-1}^{\delta/(2+\delta)} + \left(\frac{4}{n} + 6n^{\delta/2}\,\frac{L}{n}\right)\sum_{i,j=1}^n \alpha_{|i-j|-1}^{\delta/(2+\delta)}.$$
Since $\sum_{i,j=1}^n \alpha_{|i-j|-1}^{\delta/(2+\delta)} < n\sum_{k \geq 1}\alpha_{k-1}^{\delta/(2+\delta)}$ and $\sum_{k \geq 1}\alpha_{k-1}^{\delta/(2+\delta)} < \infty$, we have that, for some $\epsilon_n^{(2)} = O\left(\frac{n^{\delta/2}}{n}\right)$,
$$\int \mathrm{Var}[r_n(\theta,\theta_0)]\,\rho_n(d\theta) < n\epsilon_n^{(2)}.$$
Since $K(\rho_n, \pi) \leq \sqrt{n}\,C$, following the concluding argument in the proof of Theorem 2 completes the proof. □

Appendix B.3.2. Proof of Proposition 8

We verify Assumption 1; the proof then follows from Theorem 3. For $i \in \{1, 2, \ldots, K-1\}$,
$$p_{\theta}(j|i) = \begin{cases} \theta & \text{if } j = i+1, \\ 1-\theta & \text{if } j = i-1. \end{cases}$$
If $i = 0$ or $i = K$, then the Markov chain moves back to 1 or $K-1$, respectively, with probability 1. With the convention $\log\frac{0}{0} = 0$, the log-ratio of the transition probabilities becomes
$$|\log p_{\theta_0}(X_1|X_0) - \log p_{\theta}(X_1|X_0)| = I[X_1 = X_0+1]\left|\log\frac{\theta_0}{\theta}\right| + I[X_1 = X_0-1]\left|\log\frac{1-\theta_0}{1-\theta}\right|.$$
In this case, $m = 2$, with $M_1^{(1)}(X_1,X_0) = I[X_1 = X_0+1]$ and $M_2^{(1)}(X_1,X_0) = I[X_1 = X_0-1]$, both of which are bounded. Let $f_1^{(1)}(\theta,\theta_0) := \log\frac{\theta_0}{\theta}$ and $f_2^{(1)}(\theta,\theta_0) := \log\frac{1-\theta_0}{1-\theta}$.
The stationary distribution is $q_\theta(i) = \frac{1}{K}$ for all $i \in \{1, 2, \ldots, K\}$. Hence, the log-ratio of the invariant distributions becomes
$$\log q_0(x) - \log q_\theta(x) = 0,$$
and we can set $M_i^{(2)}(\cdot) := 1$ and $f_i^{(2)}(\cdot,\cdot) := 0$ for $i \in \{1,2\}$. Thus, to prove the concentration bound for this Markov chain, it is enough to take $\delta = 1$ and show that $\int |f_1^{(1)}(\theta,\theta_0)|^3\,\rho_n(d\theta) < \frac{C}{n}$ and $\int |f_2^{(1)}(\theta,\theta_0)|^3\,\rho_n(d\theta) < \frac{C}{n}$ for some constant $C > 0$.
As given, $\{\rho_n\}$ is a sequence of Beta probability density functions with parameters $a_n, b_n$ satisfying the constraint $\frac{a_n}{a_n+b_n} = \theta_0$. Specifically, we choose $a_n = n\theta_0$ and (therefore) $b_n = n(1-\theta_0)$. Thus, we get the following:
$$\int |f_1^{(1)}(\theta,\theta_0)|^3\,\rho_n(d\theta) = \int \left|\log\frac{\theta_0}{\theta}\right|^3\rho_n(d\theta) < \int \left|\frac{\theta_0}{\theta} - 1\right|^3\rho_n(d\theta) = \frac{1}{\mathrm{Beta}(a_n,b_n)}\int_0^1 \left|\frac{\theta_0-\theta}{\theta}\right|^3\theta^{a_n-1}(1-\theta)^{b_n-1}\,d\theta.$$
Since $\theta_0, \theta \in (0,1)$, so is $\frac{|\theta_0-\theta|}{2}$, giving $|\theta_0-\theta|^3 < 2(\theta_0-\theta)^2$. We use that fact to arrive at
$$\int |f_1^{(1)}(\theta,\theta_0)|^3\,\rho_n(d\theta) \leq \frac{2}{\mathrm{Beta}(a_n,b_n)}\int_0^1 (\theta_0-\theta)^2\,\theta^{a_n-4}(1-\theta)^{b_n-1}\,d\theta = \frac{2\,\mathrm{Beta}(a_n-3,b_n)}{\mathrm{Beta}(a_n,b_n)}\cdot\frac{(a_n-3)\,b_n}{(a_n+b_n-3)^2(a_n+b_n-2)}.$$
From our choice of $a_n$ and $b_n$, $\frac{2\,\mathrm{Beta}(a_n-3,b_n)}{\mathrm{Beta}(a_n,b_n)} = O(1)$, and plugging the values of $a_n$ and $b_n$ into $\frac{(a_n-3)\,b_n}{(a_n+b_n-3)^2(a_n+b_n-2)}$, we get
$$\frac{(a_n-3)\,b_n}{(a_n+b_n-3)^2(a_n+b_n-2)} = \frac{1}{n}\cdot\frac{\left(\theta_0-\frac{3}{n}\right)(1-\theta_0)}{\left(1-\frac{3}{n}\right)^2\left(1-\frac{2}{n}\right)},$$
which is upper bounded by $\frac{C_1}{n}$ for some constant $C_1 > 0$. Hence,
$$\int |f_1^{(1)}(\theta,\theta_0)|^3\,\rho_n(d\theta) < \frac{C_1}{n}.$$
Similarly, we can also show that
$$\int |f_2^{(1)}(\theta,\theta_0)|^3\,\rho_n(d\theta) < \frac{C_2}{n}.$$
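The two prior-mass conditions above can also be checked numerically: with $\rho_n = \mathrm{Beta}(n\theta_0, n(1-\theta_0))$, a Monte Carlo estimate of $\int |\log(\theta_0/\theta)|^3\,\rho_n(d\theta)$ should decay at least as fast as $1/n$ (in fact like $n^{-3/2}$, since $\theta$ concentrates at rate $n^{-1/2}$). A rough stdlib-only check, with $\theta_0$ and the sample sizes being illustrative choices:

```python
import math
import random

def third_moment(n, theta0=0.4, draws=100_000, seed=3):
    """Monte Carlo estimate of E|log(theta0/theta)|^3 under Beta(n*theta0, n*(1-theta0))."""
    rng = random.Random(seed)
    a, b = n * theta0, n * (1.0 - theta0)
    total = 0.0
    for _ in range(draws):
        theta = rng.betavariate(a, b)
        total += abs(math.log(theta0 / theta)) ** 3
    return total / draws

m100, m400 = third_moment(100), third_moment(400)
print(m100 > 4 * m400)  # decay faster than 1/n, consistent with the C/n bound
```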
Finally, from Proposition A.2.1, we get that $K(\rho_n, \pi) < C + \frac{1}{2}\log(n)$ for some large constant $C$. Hence, $K(\rho_n, \pi) < C_3\sqrt{n}$ for some constant $C_3 > 0$. Choosing $C = \max(C_1, C_2, C_3)$, we satisfy all the conditions of Assumption 1 and Theorem 3. □
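The $\frac{1}{2}\log(n)$ growth of $K(\rho_n, \pi)$ quoted from Proposition A.2.1 can be seen numerically: the KL divergence between $\mathrm{Beta}(n\theta_0, n(1-\theta_0))$ and a fixed Beta prior, estimated by Monte Carlo from the exact log-densities, should increase by about $\frac{1}{2}\log 4 \approx 0.69$ when $n$ is quadrupled. A stdlib sketch; the prior $\mathrm{Beta}(1,1)$, $\theta_0$, and the sample sizes are illustrative assumptions:

```python
import math
import random

def log_beta_pdf(x, a, b):
    """Log density of Beta(a, b) at x, via math.lgamma."""
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return log_norm + (a - 1.0) * math.log(x) + (b - 1.0) * math.log(1.0 - x)

def kl_beta(a1, b1, a2, b2, draws=100_000, seed=4):
    """Monte Carlo estimate of KL(Beta(a1,b1) || Beta(a2,b2))."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(draws):
        x = rng.betavariate(a1, b1)
        total += log_beta_pdf(x, a1, b1) - log_beta_pdf(x, a2, b2)
    return total / draws

theta0 = 0.4
kl_n = {n: kl_beta(n * theta0, n * (1 - theta0), 1.0, 1.0) for n in (100, 400)}
print(round(kl_n[400] - kl_n[100], 2))  # roughly 0.5 * log(4) ~ 0.69
```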

Appendix B.3.3. Proof of Proposition 9

For the purpose of this proof, we choose $\rho_n$ to be a scaled Beta distribution with parameters $a_n = 2n\theta_0$ and $b_n = n(1-2\theta_0)$. Since $\rho_n$ is a scaled Beta distribution with scaling factors $m = 0.5$ and $c = 0$, the pdf of $\rho_n$ is given by
$$\rho_n(\theta) = \frac{2}{\mathrm{Beta}(a_n,b_n)}\,(2\theta)^{a_n-1}(1-2\theta)^{b_n-1}.$$
Since this is a scaled distribution, $E_{\rho_n}[\theta] = \frac{1}{2}\frac{a_n}{a_n+b_n} = \theta_0$, and there exists a constant $\sigma > 0$ such that $\mathrm{Var}_{\rho_n}[\theta] \leq \frac{\sigma^2}{n}$. Now, we analyse the transition probabilities. For $i \in \{1, 2, \ldots\}$, the birth-death process has transition probabilities
$$p_{\theta}(j|i) = \begin{cases} \theta & \text{if } j = i+1, \\ 1-\theta & \text{if } j = i-1. \end{cases}$$
If $i = 0$, then the Markov chain moves to 1 with probability 1. Hence, with the convention $\log\frac{0}{0} = 0$, the log-ratio of the transition probabilities becomes
$$|\log p_{\theta_0}(X_1|X_0) - \log p_{\theta}(X_1|X_0)| = I[X_1 = X_0+1]\left|\log\frac{\theta_0}{\theta}\right| + I[X_1 = X_0-1]\left|\log\frac{1-\theta_0}{1-\theta}\right|.$$
In this case, $m = 3$: $M_1^{(1)}(X_1,X_0) = I[X_1 = X_0+1]$, $M_2^{(1)}(X_1,X_0) = I[X_1 = X_0-1]$, and $M_3^{(1)}(X_1,X_0) := 1$. All these random variables are bounded. Define $f_1^{(1)}(\theta,\theta_0) := \log\frac{\theta_0}{\theta}$, $f_2^{(1)}(\theta,\theta_0) := \log\frac{1-\theta_0}{1-\theta}$, and $f_3^{(1)}(\theta,\theta_0) := 0$. As in the proof of Proposition 8,
$$\int |f_1^{(1)}(\theta,\theta_0)|^3\,\rho_n(d\theta) < \frac{C_1}{n}, \quad \text{and} \quad \int |f_2^{(1)}(\theta,\theta_0)|^3\,\rho_n(d\theta) < \frac{C_2}{n}.$$
The stationary distribution is given by $q_\theta(i) = \left(\frac{\theta}{1-\theta}\right)^{i-1} q_\theta(1)$ for all $i \in \{1, 2, \ldots\}$, so that $q_\theta(i) = (1-\theta)\left(\frac{\theta}{1-\theta}\right)^{i-1}$. Hence, the log-ratio of the invariant distributions becomes
$$\log q_0(i) - \log q_\theta(i) = \log\frac{1-\theta_0}{1-\theta} + (i-1)\log\frac{\theta_0}{\theta} - (i-1)\log\frac{1-\theta_0}{1-\theta}.$$
We define $M_1^{(2)}(X_0) := 1$ and $M_2^{(2)}(X_0) = M_3^{(2)}(X_0) := X_0 - 1$. We can write $E_{q^{(0)}}[M_2^{(2)}(X_0)^2] = \sum_{i=1}^{\infty}(i-1)^2 q^{(0)}(i) < \sum_{i=1}^{\infty} i^2 q^{(0)}(i)$. We have chosen $q^{(0)}$ such that $\sum_{i=1}^{\infty} i^2 q^{(0)}(i)$ is bounded; hence $E_{q^{(0)}}[M_2^{(2)}(X_0)^2] < \infty$. To verify Assumption 1, define $f_1^{(2)}(\theta,\theta_0) = f_3^{(2)}(\theta,\theta_0) := \log\frac{1-\theta_0}{1-\theta}$, and define $f_2^{(2)}(\theta,\theta_0) := \log\frac{\theta_0}{\theta}$. Therefore, following the proof of Proposition 8,
$$\int |f_1^{(2)}(\theta,\theta_0)|^3\,\rho_n(d\theta) = \int |f_3^{(2)}(\theta,\theta_0)|^3\,\rho_n(d\theta) = \int |f_2^{(1)}(\theta,\theta_0)|^3\,\rho_n(d\theta) < \frac{C_2}{n}, \quad \text{and} \quad \int |f_2^{(2)}(\theta,\theta_0)|^3\,\rho_n(d\theta) = \int |f_1^{(1)}(\theta,\theta_0)|^3\,\rho_n(d\theta) < \frac{C_1}{n}.$$
Finally, we consider the KL divergence $K(\rho_n, \pi)$. $\rho_n$ follows a scaled Beta distribution on $(0, 1/2)$ with parameters $a_n$ and $b_n$, while $\pi$ follows a scaled Beta distribution on $(0, 1/2)$ with parameters $a$ and $b$. Thus,
$$K(\rho_n, \pi) = \int_0^{1/2}\log\frac{\rho_n(\theta)}{\pi(\theta)}\,\rho_n(d\theta).$$
Substituting $t = 2\theta$, under which the scaling factors cancel, we get
$$K(\rho_n, \pi) = \int_0^1 \log\frac{\rho_n(t)}{\pi(t)}\,\rho_n(dt),$$
where, with a slight abuse of notation, $\int_0^1 \log\frac{\rho_n(t)}{\pi(t)}\,\rho_n(dt)$ denotes the KL divergence between a Beta distribution with parameters $a_n$ and $b_n$ and a Beta distribution with parameters $a$ and $b$. An application of Proposition A.2.1 gives us, for a constant $C_1 > 0$,
$$\int_0^1 \log\frac{\rho_n(t)}{\pi(t)}\,\rho_n(dt) < C_1 + \frac{1}{2}\log(n).$$
Hence, we can say that $K(\rho_n, \pi) < C_1 + \frac{1}{2}\log(n)$. Thus, for some constant $C_3 > 0$,
$$K(\rho_n, \pi) < C_3\sqrt{n}.$$
Choosing $C = \max(C_1, C_2, C_3)$, we satisfy all of the conditions of Assumption 1, and thus, by Theorem 3, the proof is complete. □
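For an up-step probability $\theta < 1/2$, the birth-death chain above is positive recurrent with a stationary law decaying geometrically, $q_\theta(i+1)/q_\theta(i) = \theta/(1-\theta)$, by detailed balance. A quick simulation of occupation frequencies illustrates this; the value of $\theta$ and the chain length are illustrative choices:

```python
import random
from collections import Counter

def birth_death_freqs(theta, steps, seed=5):
    """Simulate the birth-death chain: up with prob theta, down with prob 1-theta,
    reflecting from 0 to 1; return empirical occupation frequencies."""
    rng = random.Random(seed)
    x, counts = 1, Counter()
    for _ in range(steps):
        if x == 0:
            x = 1
        elif rng.random() < theta:
            x += 1
        else:
            x -= 1
        counts[x] += 1
    return {i: c / steps for i, c in counts.items()}

freqs = birth_death_freqs(theta=0.3, steps=200_000)
# Occupation frequencies decay roughly geometrically with ratio theta/(1-theta).
print(round(freqs[2] / freqs[1], 2), round(0.3 / 0.7, 2))
```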

Appendix B.3.4. Proof of Theorem 4

Part 1: Verifying condition (i) of Corollary 1. As in the proof of Theorem 2, substitute the true parameter $\theta_0$ for $\theta_1$ and $\theta$ for $\theta_2$. We also set our initial distributions $q_1^{(0)}$ and $q_2^{(0)}$ to the known initial distribution $q^{(0)}$. A method similar to Equation (A35) yields
$$K(P_{\theta_0}^{(n)}, P_{\theta}^{(n)}) \leq \sum_{i=1}^n \sum_{k=1}^m E\left[M_k^{(1)}(X_i,X_{i-1})\right]|f_k^{(1)}(\theta,\theta_0)|.$$
Because the $M_k^{(1)}$'s satisfy Assumption 2, it follows from an application of Theorem 2.3 of [21] that there exists $\lambda > 0$ such that, for any $0 < \kappa \leq \lambda$ and for some $\zeta \in (0,1)$ possibly depending upon $\lambda$,
$$E\left[e^{\kappa M_k^{(1)}(X_i,X_{i-1})} \,\middle|\, X_1, X_0\right] \leq \zeta^{i-1}e^{\kappa M_k^{(1)}(X_1,X_0)} + \frac{1-\zeta^{i-1}}{1-\zeta}\,De^{\kappa a} \quad \text{for all } i > 1.$$
We rewrite $E[M_k^{(1)}(X_i,X_{i-1}) \mid X_1, X_0]$ as follows:
$$E\left[M_k^{(1)}(X_i,X_{i-1}) \,\middle|\, X_1,X_0\right] = \frac{E\left[\kappa M_k^{(1)}(X_i,X_{i-1}) \,\middle|\, X_1,X_0\right]}{\kappa} \leq \frac{E\left[e^{\kappa M_k^{(1)}(X_i,X_{i-1})} \,\middle|\, X_1,X_0\right]}{\kappa}.$$
Therefore, $\sum_{i=1}^n E[M_k^{(1)}(X_i,X_{i-1})]$ can be upper bounded as
$$\sum_{i=1}^n E\left[M_k^{(1)}(X_i,X_{i-1})\right] = \sum_{i=1}^n E\left[\frac{E[\kappa M_k^{(1)}(X_i,X_{i-1}) \mid X_1,X_0]}{\kappa}\right] \leq \sum_{i=1}^n \left(\zeta^{i-1}E\left[e^{\kappa M_k^{(1)}(X_1,X_0)}\right] + \frac{1-\zeta^{i-1}}{1-\zeta}\,De^{\kappa a}\right)\kappa^{-1}.$$
Since $\zeta \in (0,1)$, $\zeta^i < 1$. Hence, we can write
$$\sum_{i=1}^n \left(\zeta^{i-1}E\left[e^{\kappa M_k^{(1)}(X_1,X_0)}\right] + \frac{1-\zeta^{i-1}}{1-\zeta}\,De^{\kappa a}\right)\kappa^{-1} \leq \sum_{i=1}^n \left(\zeta^{i-1}E\left[e^{\kappa M_k^{(1)}(X_1,X_0)}\right] + \frac{1}{1-\zeta}\,De^{\kappa a}\right)\kappa^{-1} = \left(\frac{1-\zeta^n}{1-\zeta}E\left[e^{\kappa M_k^{(1)}(X_1,X_0)}\right] + \frac{n}{1-\zeta}\,De^{\kappa a}\right)\kappa^{-1} \leq nL,$$
for a large constant $L$. Therefore, $\int K(P_{\theta_0}^{(n)}, P_{\theta}^{(n)})\,\rho_n(d\theta)$ can be upper bounded as follows:
$$\int K(P_{\theta_0}^{(n)}, P_{\theta}^{(n)})\,\rho_n(d\theta) \leq \sum_{k=1}^m nL\int |f_k^{(1)}(\theta,\theta_0)|\,\rho_n(d\theta).$$
By Assumption 1, $\int |f_k^{(1)}(\theta,\theta_0)|\,\rho_n(d\theta) < \frac{C}{n}$; hence,
$$\int K(P_{\theta_0}^{(n)}, P_{\theta}^{(n)})\,\rho_n(d\theta) \leq nL\,m\,\frac{C}{n}.$$
Hence, for some $\epsilon_n^{(1)} = O\left(\frac{1}{n}\right)$, we have obtained that $\int K(P_{\theta_0}^{(n)}, P_{\theta}^{(n)})\,\rho_n(d\theta) \leq n\epsilon_n^{(1)}$.
Part 2: Verifying condition (ii) of Corollary 1: As in the proof of Theorem 3, we upper bound $\int \mathrm{Var}[r_n(\theta,\theta_0)]\,\rho_n(d\theta)$ by
$$\int \mathrm{Var}[r_n(\theta,\theta_0)]\,\rho_n(d\theta) \leq \sum_{i,j=1}^n \left(\frac{4}{n} + 2n^{\delta/2}\left(\int C_{\theta_0,\theta}(i)\,\rho_n(d\theta) + \int C_{\theta_0,\theta}(j)\,\rho_n(d\theta) + \int \sqrt{C_{\theta_0,\theta}(i)\,C_{\theta_0,\theta}(j)}\,\rho_n(d\theta)\right)\right)\alpha_{|i-j|-1}^{\delta/(2+\delta)} + \sum_{i=1}^n \left(\frac{4}{n} + 2n^{\delta/2}\int C_{\theta_0,\theta}(i)\,\rho_n(d\theta)\right)\alpha_{i-1}^{\delta/(2+\delta)},$$
where $C_{\theta_0,\theta}(i)$ is upper bounded as
$$C_{\theta_0,\theta}(i) \leq m^{1+\delta}\sum_{k=1}^m E\left[M_k^{(1)}(X_i,X_{i-1})^{2+\delta}\right]|f_k^{(1)}(\theta,\theta_0)|^{2+\delta}.$$
There exists a constant $C_\delta$ depending upon $\delta$ such that
$$M_k^{(1)}(X_i,X_{i-1})^{2+\delta} = \frac{\left(\kappa M_k^{(1)}(X_i,X_{i-1})\right)^{2+\delta}}{\kappa^{2+\delta}} \leq \frac{e^{\kappa M_k^{(1)}(X_i,X_{i-1})} + C_\delta}{\kappa^{2+\delta}}.$$
By expressing $E[M_k^{(1)}(X_i,X_{i-1})^{2+\delta}] = E\left[E[M_k^{(1)}(X_i,X_{i-1})^{2+\delta} \mid X_1,X_0]\right]$ and following a method similar to the previous part, we get
$$E\left[M_k^{(1)}(X_i,X_{i-1})^{2+\delta}\right] \leq \left(\zeta^{i}E\left[e^{\kappa M_k^{(1)}(X_1,X_0)}\right] + \frac{1-\zeta^{i}}{1-\zeta}\,De^{\kappa a} + C_\delta\right)\kappa^{-(2+\delta)}.$$
The fact that $0 < \zeta < 1$ implies that $0 < \zeta^i < \zeta$. This gives us the following:
$$E\left[M_k^{(1)}(X_i,X_{i-1})^{2+\delta}\right] \leq \left(\zeta E\left[e^{\kappa M_k^{(1)}(X_1,X_0)}\right] + \frac{1}{1-\zeta}\,De^{\kappa a} + C_\delta\right)\kappa^{-(2+\delta)}.$$
Since $\kappa < \lambda$, by the application of Jensen's inequality, we get
$$E\left[M_k^{(1)}(X_i,X_{i-1})^{2+\delta}\right] \leq \left(\zeta E\left[e^{\lambda M_k^{(1)}(X_1,X_0)}\right] + \frac{1}{1-\zeta}\,De^{\kappa a} + C_\delta\right)\kappa^{-(2+\delta)} = \left(\zeta\int e^{\lambda M_k^{(1)}(x_1,x_0)}\,p_{\theta_0}(x_1|x_0)\,D(x_0)\,dx_1\,dx_0 + \frac{1}{1-\zeta}\,De^{\kappa a} + C_\delta\right)\kappa^{-(2+\delta)}.$$
We know that $\int |f_k^{(1)}(\theta,\theta_0)|^{2+\delta}\,\rho_n(d\theta) < \frac{C}{n}$. Thus, following Assumption 1, we can say that, for a large constant $L$, $\int C_{\theta_0,\theta}(i)\,\rho_n(d\theta) \leq \frac{L}{n}$. The rest of the proof follows as in the proof of Theorem 3, and we obtain an $\epsilon_n^{(2)} = O\left(\frac{n^{\delta/2}}{n}\right)$ such that
$$\int \mathrm{Var}[r_n(\theta,\theta_0)]\,\rho_n(d\theta) < n\epsilon_n^{(2)}.$$
Since $K(\rho_n, \pi) \leq \sqrt{n}\,C$, the same concluding argument as in the proof of Theorem 2 holds. The theorem is thus proved. □

Appendix B.3.5. Proof of Theorem 5

Part 1: Verifying condition (i) of Corollary 1. As in the proof of Theorem 2, substitute the true parameter $\theta_0$ for $\theta_1$ and $\theta$ for $\theta_2$. We also set $q_1^{(0)}$ and $q_2^{(0)}$ to the known initial distribution $q^{(0)}$. Following steps similar to those leading to Equation (A35), we get
$$K(P_{\theta_0}^{(n)}, P_{\theta}^{(n)}) \leq \sum_{i=1}^n \sum_{k=1}^m E\left[M_k^{(1)}(X_i,X_{i-1})\right]|f_k^{(1)}(\theta,\theta_0)|.$$
Consider the term $E[M_k^{(1)}(X_i,X_{i-1})]$. With $q_{\theta_0}^{(i-1)}$ the marginal distribution of $X_{i-1}$, we have
$$E\left[M_k^{(1)}(X_i,X_{i-1})\right] = \int M_k^{(1)}(x_i,x_{i-1})\,p_{\theta_0}(x_i|x_{i-1})\,q_{\theta_0}^{(i-1)}(x_{i-1})\,dx_i\,dx_{i-1}.$$
Recall that the marginal density satisfies $q_{\theta_0}^{(i-1)}(x_{i-1}) = \int p_{\theta_0}^{i-1}(x_{i-1}|x_0)\,q^{(0)}(x_0)\,dx_0$, where $p_{\theta_0}^{i}(\cdot|x_0)$ is the $i$-step transition probability. Then
$$E\left[M_k^{(1)}(X_i,X_{i-1})\right] = \int E\left[M_k^{(1)}(X_i,x_{i-1}) \,\middle|\, x_{i-1}\right]p_{\theta_0}^{i-1}(x_{i-1}|x_0)\,q^{(0)}(x_0)\,dx_0\,dx_{i-1}.$$
Since the Markov chain $\{X_n\}$ satisfies Assumption A.1.1, we know by the application of Theorem A.1.1 that $\{X_n\}$ is $V$-geometrically ergodic. Hence, there exist $\tau < 1$ and $R < \infty$ such that, for all $|f| \leq V$,
$$\left|\int f(x_{i-1})\,p_{\theta_0}^{i-1}(x_{i-1}|x_0)\,dx_{i-1} - \int f(x_{i-1})\,q_{\theta_0}(x_{i-1})\,dx_{i-1}\right| < R\,V(x_0)\,\tau^{i-1},$$
where $q_{\theta_0}$ is the stationary distribution, implying that
$$\int f(x_{i-1})\,p_{\theta_0}^{i-1}(x_{i-1}|x_0)\,dx_{i-1} < \int f(x_{i-1})\,q_{\theta_0}(x_{i-1})\,dx_{i-1} + R\,V(x_0)\,\tau^{i-1}.$$
By the application of Jensen's inequality, we get $E[M_k^{(1)}(X_i,X_{i-1}) \mid X_{i-1}]^{2+\delta} \leq E[M_k^{(1)}(X_i,X_{i-1})^{2+\delta} \mid X_{i-1}] < V(X_{i-1})$. Since $V(\cdot) \geq 1$, it follows from the previous expression that $E[M_k^{(1)}(X_i,X_{i-1}) \mid X_{i-1}] < V(X_{i-1})^{1/(2+\delta)} \leq V(X_{i-1})$. Thus, setting $f(x) = E[M_k^{(1)}(X_i,X_{i-1}) \mid X_{i-1} = x]$, we obtain
$$E\left[M_k^{(1)}(X_i,X_{i-1})\right] < \int E\left[M_k^{(1)}(X_i,X_{i-1}) \,\middle|\, X_{i-1} = x_{i-1}\right]q_{\theta_0}(x_{i-1})\,dx_{i-1} + \tau^{i-1}R\int V(x_0)\,q^{(0)}(x_0)\,dx_0 = E\left[M_k^{(1)}(X_1,X_0)\right] + \tau^{i-1}R\int V(x_0)\,q^{(0)}(x_0)\,dx_0.$$
Summing from $i = 1$ to $n$, we get
$$\sum_{i=1}^n E\left[M_k^{(1)}(X_i,X_{i-1})\right] < n\,E\left[M_k^{(1)}(X_1,X_0)\right] + \sum_{i=1}^n \tau^{i-1}R\int V(x_0)\,q^{(0)}(x_0)\,dx_0 = n\,E\left[M_k^{(1)}(X_1,X_0)\right] + \frac{1-\tau^n}{1-\tau}\,R\int V(x_0)\,q^{(0)}(x_0)\,dx_0.$$
This gives us the following bound on $\int K(P_{\theta_0}^{(n)}, P_{\theta}^{(n)})\,\rho_n(d\theta)$:
$$\int K(P_{\theta_0}^{(n)}, P_{\theta}^{(n)})\,\rho_n(d\theta) \leq \sum_{k=1}^m \left(n\,E\left[M_k^{(1)}(X_1,X_0)\right] + \frac{1-\tau^n}{1-\tau}\,R\int V(x_0)\,q^{(0)}(x_0)\,dx_0\right)\int |f_k^{(1)}(\theta,\theta_0)|\,\rho_n(d\theta).$$
By Assumption 1, $\int |f_k^{(1)}(\theta,\theta_0)|\,\rho_n(d\theta) < \frac{C}{n}$. Hence, we can rewrite the previous expression as
$$\int K(P_{\theta_0}^{(n)}, P_{\theta}^{(n)})\,\rho_n(d\theta) \leq \sum_{k=1}^m \left(n\,E\left[M_k^{(1)}(X_1,X_0)\right] + \frac{1-\tau^n}{1-\tau}\,R\int V(x_0)\,q^{(0)}(x_0)\,dx_0\right)\frac{C}{n} = n\,m\left(E\left[M_k^{(1)}(X_1,X_0)\right] + \frac{1-\tau^n}{n(1-\tau)}\,R\int V(x_0)\,q^{(0)}(x_0)\,dx_0\right)\frac{C}{n}.$$
Since $\tau < 1$, $0 < 1-\tau^n < 1$, and we can rewrite the previous equation as
$$\int K(P_{\theta_0}^{(n)}, P_{\theta}^{(n)})\,\rho_n(d\theta) \leq n\,m\left(E\left[M_k^{(1)}(X_1,X_0)\right] + \frac{1}{n(1-\tau)}\,R\int V(x_0)\,q^{(0)}(x_0)\,dx_0\right)\frac{C}{n}.$$
Hence, there exists an $\epsilon_n^{(1)} = O\left(\frac{1}{n}\right)$ such that $\int K(P_{\theta_0}^{(n)}, P_{\theta}^{(n)})\,\rho_n(d\theta) \leq n\epsilon_n^{(1)}$.
Part 2: Verifying condition (ii) of Corollary 1: As in the proof of Theorem 3, we upper bound $\int \mathrm{Var}[r_n(\theta,\theta_0)]\,\rho_n(d\theta)$ by
$$\int \mathrm{Var}[r_n(\theta,\theta_0)]\,\rho_n(d\theta) \leq \sum_{i,j=1}^n \left(\frac{4}{n} + 2n^{\delta/2}\left(\int C_{\theta_0,\theta}(i)\,\rho_n(d\theta) + \int C_{\theta_0,\theta}(j)\,\rho_n(d\theta) + \int \sqrt{C_{\theta_0,\theta}(i)\,C_{\theta_0,\theta}(j)}\,\rho_n(d\theta)\right)\right)\alpha_{|i-j|-1}^{\delta/(2+\delta)} + \sum_{i=1}^n \left(\frac{4}{n} + 2n^{\delta/2}\int C_{\theta_0,\theta}(i)\,\rho_n(d\theta)\right)\alpha_{i-1}^{\delta/(2+\delta)},$$
where $C_{\theta_0,\theta}(i)$ is upper bounded as
$$C_{\theta_0,\theta}(i) \leq m^{1+\delta}\sum_{k=1}^m E\left[M_k^{(1)}(X_i,X_{i-1})^{2+\delta}\right]|f_k^{(1)}(\theta,\theta_0)|^{2+\delta}.$$
Since $E[M_k^{(1)}(X_i,X_{i-1})^{2+\delta} \mid X_{i-1}] < V(X_{i-1})$, by a similar application of $V$-geometric ergodicity, we can say that there exists $0 < \tau < 1$ such that
$$E\left[M_k^{(1)}(X_i,X_{i-1})^{2+\delta}\right] \leq E\left[M_k^{(1)}(X_1,X_0)^{2+\delta}\right] + \tau^{i-1}R\int V(x_0)\,q^{(0)}(x_0)\,dx_0,$$
which, by the fact that $\tau^{i-1} \leq 1$, gives us
$$E\left[M_k^{(1)}(X_i,X_{i-1})^{2+\delta}\right] \leq E\left[M_k^{(1)}(X_1,X_0)^{2+\delta}\right] + R\int V(x_0)\,q^{(0)}(x_0)\,dx_0.$$
By Assumption 1, we know that $\int |f_k^{(1)}(\theta,\theta_0)|^{2+\delta}\,\rho_n(d\theta) < \frac{C}{n}$. Hence, for a large constant $L$, $\int C_{\theta_0,\theta}(i)\,\rho_n(d\theta) \leq \frac{L}{n}$. We also see that, since the chain is geometrically ergodic, by the application of Equation (A4), $\sum_{k \geq 1}\alpha_k^{\delta/(2+\delta)} < +\infty$. The rest of the proof follows as in the proof of Theorem 3, and we obtain an $\epsilon_n^{(2)} = O\left(\frac{n^{\delta/2}}{n}\right)$ such that
$$\int \mathrm{Var}[r_n(\theta,\theta_0)]\,\rho_n(d\theta) < n\epsilon_n^{(2)}.$$
Since $K(\rho_n, \pi) \leq \sqrt{n}\,C$, the same concluding argument as in the proof of Theorem 2 holds. The theorem is thus proved. □

Appendix B.3.6. Proof of Proposition 10

For the purpose of the proof, we choose $\rho_n$ to be a scaled Beta distribution with parameters $a_n = n\frac{1+\theta_0}{2}$ and $b_n = n\frac{1-\theta_0}{2}$. Since $\rho_n$ is a scaled Beta distribution with scaling factors $m = 2$ and $c = -1$, the pdf of $\rho_n$ is given by
$$\rho_n(\theta) = \frac{1}{2\,\mathrm{Beta}(a_n,b_n)}\left(\frac{1+\theta}{2}\right)^{a_n-1}\left(\frac{1-\theta}{2}\right)^{b_n-1}.$$
Since this is a scaled distribution, $E_{\rho_n}[\theta] = \frac{2a_n}{a_n+b_n} - 1 = \theta_0$, and there exists a constant $\sigma > 0$ such that $\mathrm{Var}_{\rho_n}[\theta] \leq \frac{\sigma^2}{n}$. We now analyse the log-ratio of the transition probabilities for the Markov chain:
$$\log p_{\theta_0}(X_n|X_{n-1}) - \log p_{\theta}(X_n|X_{n-1}) = \frac{2X_nX_{n-1}(\theta-\theta_0) + X_{n-1}^2(\theta_0^2-\theta^2)}{2}.$$
Observe that, in this setting, $M_1^{(1)}(X_n,X_{n-1}) = |X_nX_{n-1}|$ and $M_2^{(1)}(X_n,X_{n-1}) = X_{n-1}^2$. Next, using the fact that
$$E[|X_n|^{2+\delta} \mid X_{n-1}] = E[|X_n - \theta_0X_{n-1} + \theta_0X_{n-1}|^{2+\delta} \mid X_{n-1}],$$
and by an application of the triangle inequality, we obtain
$$E[|X_n|^{2+\delta} \mid X_{n-1}] \leq E\left[\left(|X_n-\theta_0X_{n-1}| + |\theta_0X_{n-1}|\right)^{2+\delta} \,\middle|\, X_{n-1}\right] = 2^{2+\delta}E\left[\left(\frac{|X_n-\theta_0X_{n-1}| + |\theta_0X_{n-1}|}{2}\right)^{2+\delta} \,\middle|\, X_{n-1}\right].$$
Now, by using Jensen's inequality, we get
$$E[|X_n|^{2+\delta} \mid X_{n-1}] \leq 2^{2+\delta}E\left[\frac{|X_n-\theta_0X_{n-1}|^{2+\delta} + |\theta_0X_{n-1}|^{2+\delta}}{2} \,\middle|\, X_{n-1}\right] = 2^{1+\delta}E\left[|X_n-\theta_0X_{n-1}|^{2+\delta} \,\middle|\, X_{n-1}\right] + 2^{1+\delta}|\theta_0X_{n-1}|^{2+\delta}.$$
We know that if $Y \sim N(\mu,\sigma^2)$, then $E|Y-\mu|^p = \sigma^p\,2^{p/2}\,\Gamma\left(\frac{p+1}{2}\right)/\sqrt{\pi}$. Consequently,
$$E[|X_n|^{2+\delta} \mid X_{n-1}] \leq 2^{1+\delta}\,\frac{2^{\frac{2+\delta}{2}}\,\Gamma\left(\frac{3+\delta}{2}\right)}{\sqrt{\pi}} + 2^{1+\delta}|\theta_0X_{n-1}|^{2+\delta}.$$
It follows that
$$E[M_1^{(1)}(X_n,X_{n-1})^{2+\delta} \mid X_{n-1}] \leq 2^{1+\delta}\,\frac{2^{\frac{2+\delta}{2}}\,\Gamma\left(\frac{3+\delta}{2}\right)}{\sqrt{\pi}}\,|X_{n-1}|^{2+\delta} + 2^{1+\delta}|\theta_0|^{2+\delta}|X_{n-1}|^{4+2\delta} \leq \left(2^{1+\delta}\,\frac{2^{\frac{2+\delta}{2}}\,\Gamma\left(\frac{3+\delta}{2}\right)}{\sqrt{\pi}} + 2^{1+\delta}|\theta_0|^{2+\delta}\right)\left(|X_{n-1}|^{4+2\delta}+1\right).$$
Since $|\theta_0| < 1$, we can say
$$E[M_1^{(1)}(X_n,X_{n-1})^{2+\delta} \mid X_{n-1}] \leq \left(2^{1+\delta}\,\frac{2^{\frac{2+\delta}{2}}\,\Gamma\left(\frac{3+\delta}{2}\right)}{\sqrt{\pi}} + 2^{1+\delta}\right)\left(|X_{n-1}|^{4+2\delta}+1\right).$$
Define the constant $C_\delta := 2^{1+\delta}\,\frac{2^{\frac{2+\delta}{2}}\,\Gamma\left(\frac{3+\delta}{2}\right)}{\sqrt{\pi}} + 2^{1+\delta}$. The above then becomes
$$E[M_1^{(1)}(X_n,X_{n-1})^{2+\delta} \mid X_{n-1}] \leq C_\delta\left(|X_{n-1}|^{4+2\delta}+1\right).$$
Next, we analyse the term $M_2^{(1)}(X_n,X_{n-1})$:
$$E[M_2^{(1)}(X_n,X_{n-1})^{2+\delta} \mid X_{n-1}] = E[X_{n-1}^{4+2\delta} \mid X_{n-1}] = X_{n-1}^{4+2\delta} \leq C_\delta\left(X_{n-1}^{4+2\delta}+1\right).$$
Then, defining $V(x) := C_\delta(x^{4+2\delta}+1)$, it follows that
$$E[V(X_n) \mid X_{n-1}] = E\left[C_\delta\left(X_n^{4+2\delta}+1\right) \,\middle|\, X_{n-1}\right].$$
Using a technique similar to Equation (A43), we get
$$E\left[C_\delta\left(X_n^{4+2\delta}+1\right) \,\middle|\, X_{n-1}\right] \leq C_\delta\left(2^{3+2\delta}\,\frac{2^{\frac{4+2\delta}{2}}\,\Gamma\left(\frac{5+2\delta}{2}\right)}{\sqrt{\pi}} + 2^{3+2\delta}|\theta_0X_{n-1}|^{4+2\delta} + 1\right).$$
Define another constant $C_\delta' := C_\delta\left(2^{3+2\delta}\,\frac{2^{\frac{4+2\delta}{2}}\,\Gamma\left(\frac{5+2\delta}{2}\right)}{\sqrt{\pi}} - 2^{3+2\delta}|\theta_0|^{4+2\delta} + 1\right)$. Since $\delta > 0$, $\frac{2^{\frac{4+2\delta}{2}}\,\Gamma\left(\frac{5+2\delta}{2}\right)}{\sqrt{\pi}} > 1$. Furthermore, since $|\theta_0| < 1$, so is $|\theta_0|^{4+2\delta}$. Hence,
$$2^{3+2\delta}\,\frac{2^{\frac{4+2\delta}{2}}\,\Gamma\left(\frac{5+2\delta}{2}\right)}{\sqrt{\pi}} - 2^{3+2\delta}|\theta_0|^{4+2\delta} > 0.$$
Hence, we have shown that
$$E[V(X_n) \mid X_{n-1}] \leq 2^{3+2\delta}|\theta_0|^{4+2\delta}\,C_\delta\left(X_{n-1}^{4+2\delta}+1\right) + C_\delta'.$$
Since $|\theta_0| < 2^{\frac{1}{4+2\delta}-1}$, we have $2^{3+2\delta}|\theta_0|^{4+2\delta} < 1$, and we can express the above as the drift condition
$$E[V(X_n) \mid X_{n-1}] \leq \gamma\,V(X_{n-1}) + C_\delta', \quad \text{with } \gamma := 2^{3+2\delta}|\theta_0|^{4+2\delta} < 1.$$
Define the set $C(m) := \{x : |x|^{4+2\delta}+1 \leq m\}$. From Proposition 11.4.2 of [20], for a large enough $m$, $C(m)$ forms a petite set. Thus, we have proved that $V(x)$ as defined in this example satisfies Assumption A.1.1, and $\{X_n\}$ is $V$-geometrically ergodic. The $f_j^{(1)}$'s corresponding to Assumption 1 are given by $f_1^{(1)}(\theta,\theta_0) = (\theta-\theta_0)$ and $f_2^{(1)}(\theta,\theta_0) = (\theta_0^2-\theta^2)$. Therefore, it follows that
$$\frac{\partial f_1^{(1)}}{\partial\theta} = 1, \quad \frac{\partial f_2^{(1)}}{\partial\theta} = -2\theta, \quad \text{and} \quad -2 < -2\theta < 2.$$
Since $f_1^{(1)}(\theta_0,\theta_0) = f_2^{(1)}(\theta_0,\theta_0) = 0$ and, as we just showed, both have bounded partial derivatives, and $|\theta| < 1$, by Proposition 4 the $f_j^{(1)}$'s satisfy the conditions of Assumption 1.
The invariant distribution for the simple linear model Markov chain under parameter $\theta$ is a Gaussian distribution with mean 0 and variance $\frac{1}{1-\theta^2}$; in other words,
$$q_\theta(x) = \sqrt{\frac{1-\theta^2}{2\pi}}\,e^{-\frac{1-\theta^2}{2}x^2}.$$
Analysing the log-ratio of the invariant distributions yields
$$\log q_0(x) - \log q_\theta(x) = \frac{1}{2}\log\frac{1-\theta_0^2}{1-\theta^2} - \frac{x^2}{2}(1-\theta_0^2) + \frac{x^2}{2}(1-\theta^2) = \frac{1}{2}\log\frac{1-\theta_0^2}{1-\theta^2} + \frac{x^2}{2}(\theta_0^2-\theta^2).$$
Let $f_1^{(2)}(\theta,\theta_0) := \theta_0^2-\theta^2$, so that $f_1^{(2)}(\theta_0,\theta_0) = 0$. Since $f_1^{(2)} = f_2^{(1)}$, by following arguments similar to those above, we can conclude that $f_1^{(2)}$ also satisfies the requirements of Assumption 1; the same reasoning applies to the normalization term $f_2^{(2)}(\theta,\theta_0) := \frac{1}{2}\log\frac{1-\theta_0^2}{1-\theta^2}$, which also vanishes at $\theta = \theta_0$. Let $M_1^{(2)}(x) := \frac{x^2}{2}$ and define $M_2^{(2)}(x) := 1$. Let $X_0 \sim q_1^{(0)}$. As long as $\int x^{4+2\delta}\,q_1^{(0)}(x)\,dx < \infty$, we satisfy all of the conditions required for Theorem 5. Finally, we need to verify the condition that $K(\rho_n,\pi) < C\sqrt{n}$ for some constant $C > 0$. The KL divergence $\int \log\frac{\rho_n(\theta)}{\pi(\theta)}\,\rho_n(d\theta)$ becomes
$$K(\rho_n, \pi) = \int_{-1}^{1} \log\left[\frac{1}{2\,\mathrm{Beta}(a_n, b_n)} \left(\frac{1+\theta}{2}\right)^{a_n-1}\left(\frac{1-\theta}{2}\right)^{b_n-1}\right] \times \frac{1}{2\,\mathrm{Beta}(a_n, b_n)} \left(\frac{1+\theta}{2}\right)^{a_n-1}\left(\frac{1-\theta}{2}\right)^{b_n-1} d\theta.$$
Substituting $y = \frac{1+\theta}{2}$, we get,
$$K(\rho_n, \pi) = \int_0^1 \log\left[\frac{1}{2}\,\frac{y^{a_n-1}(1-y)^{b_n-1}}{\mathrm{Beta}(a_n, b_n)}\right] \frac{y^{a_n-1}(1-y)^{b_n-1}}{\mathrm{Beta}(a_n, b_n)}\, dy = \int_0^1 \log\left(\frac{1}{2}\right) \frac{y^{a_n-1}(1-y)^{b_n-1}}{\mathrm{Beta}(a_n, b_n)}\, dy + \int_0^1 \log\left[\frac{y^{a_n-1}(1-y)^{b_n-1}}{\mathrm{Beta}(a_n, b_n)}\right] \frac{y^{a_n-1}(1-y)^{b_n-1}}{\mathrm{Beta}(a_n, b_n)}\, dy.$$
The first term integrates to $\log(1/2)$. The second term is the KL divergence between a Beta distribution with parameters $a_n = n\,\frac{1+\theta_0}{2}$ and $b_n = n\left(1 - \frac{1+\theta_0}{2}\right)$ and the Uniform distribution on $[0, 1]$. Following Lemma A.2.1, it follows that $K(\rho_n, \pi)$ is upper bounded by,
$$K(\rho_n, \pi) < \log(1/2) + C_1 + \frac{1}{2}\log(n) < Cn,$$
for some large constant C. This completes the proof. □
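The $\frac{1}{2}\log(n)$ growth of the Beta-versus-Uniform KL term can also be checked numerically. The sketch below is an illustration, not part of the proof: it assumes $\theta_0 = 0.5$, takes $a_n, b_n$ as in the proof, and approximates $KL(\mathrm{Beta}(a_n, b_n)\,\|\,\mathrm{Unif}[0,1]) = \int_0^1 p(y)\log p(y)\, dy$ by midpoint-rule quadrature, then verifies that increasing $n$ by a factor of 100 raises the divergence by roughly $\frac{1}{2}\log(100)$:

```python
import math

def kl_beta_uniform(a: float, b: float, grid: int = 200_000) -> float:
    """Midpoint-rule quadrature of KL(Beta(a, b) || Uniform[0, 1]),
    i.e. the integral of p(y) * log p(y) over (0, 1)."""
    log_beta = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    h = 1.0 / grid
    total = 0.0
    for i in range(grid):
        y = (i + 0.5) * h
        log_p = (a - 1) * math.log(y) + (b - 1) * math.log(1.0 - y) - log_beta
        if log_p > -700.0:  # skip regions where the density underflows
            total += math.exp(log_p) * log_p * h
    return total

theta0 = 0.5
p = (1 + theta0) / 2                      # a_n = n * (1 + theta0) / 2
kl_small = kl_beta_uniform(100 * p, 100 * (1 - p))
kl_large = kl_beta_uniform(10_000 * p, 10_000 * (1 - p))

# growth should be about (1/2) * log(10_000 / 100) = (1/2) * log(100)
assert kl_large > kl_small
assert abs((kl_large - kl_small) - 0.5 * math.log(100)) < 0.1
```

The generous tolerance absorbs both the quadrature error and the $O(1/n)$ corrections to the Gaussian approximation of the Beta entropy.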

Appendix B.4. Proofs for Misspecified Models

Proof of Theorem 6

As in the proof of Theorem 1, following Equation (A13), we note that,
$$\int D_{\alpha_{re}}\left(P_\theta^{(n)}, P_{\theta_0}^{(n)}\right) \tilde{\pi}_{n, \alpha_{re}|X^n}(d\theta) \leq \frac{\alpha_{re}}{1 - \alpha_{re}} \int K\left(P_{\theta_0}^{(n)}, P_\theta^{(n)}\right) \rho_n(d\theta) + \frac{\alpha_{re}}{1 - \alpha_{re}} \sqrt{\frac{\int \mathrm{Var}\left[r_n(\theta, \theta_0)\right] \rho_n(d\theta)}{\eta}} + \frac{K(\rho_n, \pi) - \log(\epsilon)}{1 - \alpha_{re}}.$$
Following from Equations (23) and (26), we get that,
$$\int K\left(P_{\theta_0}^{(n)}, P_\theta^{(n)}\right) \rho_n(d\theta) \leq \mathbb{E}\left[r_n(\theta_0, \theta_n^*)\right] + n\epsilon_n,$$
and
$$\int \mathrm{Var}\left[r_n(\theta, \theta_0)\right] \rho_n(d\theta) \leq 2n\epsilon_n + 2\,\mathrm{Var}\left[r_n(\theta_n^*, \theta_0)\right].$$
Plugging these into Equation (A44) completes the proof. □

References

  1. Wainwright, M.J.; Jordan, M.I. Introduction to Variational Methods for Graphical Models. Found. Trends Mach. Learn. 2008, 1, 1–103.
  2. Bishop, C.M. Pattern Recognition and Machine Learning; Springer: Berlin, Germany, 2006.
  3. Ormerod, J.T.; Wand, M.P. Explaining variational approximations. Am. Stat. 2010, 64, 140–153.
  4. Blei, D.M.; Kucukelbir, A.; McAuliffe, J.D. Variational inference: A review for statisticians. J. Am. Stat. Assoc. 2017, 112, 859–877.
  5. Jaiswal, P.; Rao, V.; Honnappa, H. Asymptotic Consistency of α-Rényi-Approximate Posteriors. J. Mach. Learn. Res. 2020, 21, 1–42.
  6. Li, Y.; Turner, R.E. Rényi divergence variational inference. In Proceedings of the 30th Annual Conference on Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; Volume 29, pp. 1073–1081.
  7. Dieng, A.B.; Tran, D.; Ranganath, R.; Paisley, J.; Blei, D. Variational Inference via χ Upper Bound Minimization. In Proceedings of the 31st Annual Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017.
  8. Wang, Y.; Blei, D.M. Frequentist consistency of variational Bayes. J. Am. Stat. Assoc. 2019, 114, 1147–1161.
  9. Zhang, F.; Gao, C. Convergence rates of variational posterior distributions. Ann. Stat. 2020, 48, 2180–2207.
  10. Ghosal, S.; Ghosh, J.K.; Van Der Vaart, A.W. Convergence rates of posterior distributions. Ann. Stat. 2000, 28, 500–531.
  11. Shen, X.; Wasserman, L. Rates of convergence of posterior distributions. Ann. Stat. 2001, 29, 687–714.
  12. Rousseau, J. On the frequentist properties of Bayesian nonparametric methods. Annu. Rev. Stat. Its Appl. 2016, 3, 211–231.
  13. Ghosal, S.; Van Der Vaart, A.W. Entropies and rates of convergence for maximum likelihood and Bayes estimation for mixtures of Normal densities. Ann. Stat. 2001, 29, 1233–1263.
  14. Bhattacharya, A.; Pati, D.; Yang, Y. Bayesian fractional posteriors. Ann. Stat. 2019, 47, 39–66.
  15. Alquier, P.; Ridgway, J. Concentration of tempered posteriors and of their variational approximations. Ann. Stat. 2020, 48, 1475–1497.
  16. Yang, Y.; Pati, D.; Bhattacharya, A. α-variational inference with statistical guarantees. Ann. Stat. 2020, 48, 886–905.
  17. Jaiswal, P.; Honnappa, H.; Rao, V.A. Risk-sensitive variational Bayes: Formulations and bounds. arXiv 2019, arXiv:1903.05220.
  18. Bradley, R.C. Basic Properties of Strong Mixing Conditions. A Survey and Some Open Questions. Probab. Surv. 2005, 2, 107–144.
  19. Ibragimov, I.A. Some limit theorems for stationary processes. Theory Probab. Appl. 1962, 7, 349–382.
  20. Meyn, S.P.; Tweedie, R.L. Markov Chains and Stochastic Stability; Springer: Berlin, Germany, 2012.
  21. Hajek, B. Hitting-time and occupation-time bounds implied by drift analysis with applications. Adv. Appl. Probab. 1982, 14, 502–525.
  22. Birgé, L. Robust testing for independent non identically distributed variables and Markov chains. In Specifying Statistical Models; Springer: Berlin, Germany, 1983; pp. 134–162.
  23. Ryabko, D. Testing statistical hypotheses about ergodic processes. In Proceedings of the IEEE Region 8 International Conference on Computational Technologies in Electrical and Electronics Engineering, Novosibirsk, Russia, 21–25 July 2008.
  24. Lacoste-Julien, S.; Huszár, F.; Ghahramani, Z. Approximate inference for the loss-calibrated Bayesian. In Proceedings of the International Conference on Artificial Intelligence and Statistics, Ft. Lauderdale, FL, USA, 11–13 April 2011.
  25. Jaiswal, P.; Honnappa, H.; Rao, V.A. Asymptotic consistency of loss-calibrated variational Bayes. Stat 2020, 9, e258.
  26. Jones, G.L. On the Markov chain central limit theorem. Probab. Surv. 2004, 1, 299–320.
  27. Alzer, H. On some inequalities for the gamma and psi functions. Math. Comput. 1997, 66, 373–389.
  28. Donsker, M.D.; Varadhan, S.S. Asymptotic evaluation of certain Markov process expectations for large time, I. Commun. Pure Appl. Math. 1975, 28, 1–47.