**2. Logarithmic Norm Method**

Ergodic properties of Markov chains have been the subject of many research papers (see, e.g., [25,26]). Yet obtaining practically useful general ergodicity bounds is difficult and remains, to a large extent, an open problem. Below we describe one method, called the "logarithmic norm" method, which is applicable in situations where the discrete state space of the Markov chain cannot be replaced by a continuous one and the transition intensities are such that the chain is either null-ergodic or weakly ergodic. The method is based on the notion of the logarithmic norm (see, e.g., [27,28]) and utilizes the properties of linear systems of differential equations.

Consider an ODE system

$$\frac{d}{dt}\mathbf{y}(t) = H(t)\mathbf{y}(t), \; t \ge 0,\tag{1}$$

where the entries of the matrix $H(t) = \left(h\_{ij}(t)\right)\_{i,j=0}^{\infty}$ are locally integrable on $[0, \infty)$ and $H(t)$ is bounded in the sense that $\|H(t)\|$ is finite for any fixed $t$. Then

$$\frac{d}{dt} \|\mathbf{y}(t)\| \le -\beta(t) \|\mathbf{y}(t)\|,\tag{2}$$

where $-\beta(t)$ is the logarithmic norm of $H(t)$, i.e.,

$$-\beta(t) = \sup\_{i} \left\{ h\_{ii}(t) + \sum\_{j \neq i} |h\_{ji}(t)| \right\}. \tag{3}$$

Thus the following upper bound holds:

$$\|\mathbf{y}(t)\| \le e^{-\int\_0^t \beta(u) \, du} \|\mathbf{y}(0)\|. \tag{4}$$
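As a numerical sanity check, the logarithmic norm (3) and the bound (4) can be verified on a small constant matrix. The matrix below and the crude Euler integration are illustrative choices only, not part of the method itself.

```python
import numpy as np

def log_norm_l1(H):
    """Logarithmic norm of H induced by the l1 (column-sum) norm:
    gamma(H) = sup_i ( h_ii + sum_{j != i} |h_ji| ), as in (3)."""
    d = np.diag(H)
    return (d + np.abs(H).sum(axis=0) - np.abs(d)).max()

# Illustrative constant matrix; with H constant, beta = -gamma(H) and (4)
# reads ||y(t)||_1 <= exp(gamma * t) * ||y(0)||_1.
H = np.array([[-2.0, 1.0],
              [ 1.0, -3.0]])
gamma = log_norm_l1(H)                 # the two columns give -1 and -2, so gamma = -1

# Crude Euler integration of y' = H y to check the bound at t = 1.
y = np.array([1.0, 1.0])
dt, T = 1e-4, 1.0
norm0 = np.abs(y).sum()
for _ in range(int(T / dt)):
    y = y + dt * (H @ y)
print(np.abs(y).sum() <= np.exp(gamma * T) * norm0)   # True
```

Here the bound is not attained: $\|\mathbf{y}(1)\|\_1 \approx 0.48$ against the bound $2e^{-1} \approx 0.74$, since this $H$ does not satisfy the equality conditions discussed next.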

If $H(t)$ has non-negative off-diagonal elements and arbitrary diagonal elements (such a matrix is sometimes called essentially nonnegative in the literature) and all of its column sums are identical, then there exists $\mathbf{y}(0)$ such that equality holds in (4).
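The equality case can also be observed numerically. The matrix below is a hypothetical essentially nonnegative matrix with identical column sums $c$; for a nonnegative initial vector the solution stays nonnegative, so its $l\_1$ norm decays exactly like $e^{ct}$.

```python
import numpy as np

# Hypothetical essentially nonnegative matrix: nonnegative off-diagonal
# entries and every column summing to the same value c = -0.5.
H = np.array([[-1.5, 0.7],
              [ 1.0, -1.2]])
c = H.sum(axis=0)[0]

# For y(0) >= 0 the solution stays nonnegative, hence
# (d/dt) ||y||_1 = sum(H @ y) = c * ||y||_1, so (4) holds with equality.
y = np.array([0.3, 0.7])               # ||y(0)||_1 = 1
dt, T = 1e-5, 1.0
for _ in range(int(T / dt)):
    y = y + dt * (H @ y)
# ||y(T)||_1 matches exp(c*T) * ||y(0)||_1 up to discretization error
print(np.abs(y).sum(), np.exp(c * T))
```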

The logarithmic norm method is applied in four consecutive steps. Firstly, one has to determine whether the given Markov chain (further always denoted by $X(t)$) is null-ergodic or weakly ergodic (a Markov chain is called null-ergodic if all its state probabilities $p\_i(t) \to 0$ as $t \to \infty$ for any initial condition; a Markov chain is called weakly ergodic if $\|\mathbf{p}^{\ast}(t) - \mathbf{p}^{\ast\ast}(t)\| \to 0$ as $t \to \infty$ for any initial conditions $\mathbf{p}^{\ast}(0)$, $\mathbf{p}^{\ast\ast}(0)$, where the vector $\mathbf{p}(t)$ contains the state probabilities). Secondly, one excludes one "border state" from the Kolmogorov forward equations and thus obtains a new system whose matrix may, in general, have negative off-diagonal terms. The third step is to perform (if possible) a similarity transformation (see (11) and (24)), i.e., to transform the new matrix in such a way that its off-diagonal terms are nonnegative and the column sums differ as little as possible. At the final, fourth step one uses the logarithmic norm to estimate the convergence rate. The key step is the third one. The transformation is made using a sequence of positive numbers (see the sequences $\{\delta\_n,\ n \ge 0\}$ below), which usually has to be guessed, does not have any probabilistic meaning, and can be considered as an analogue of a Lyapunov function.
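The four steps can be sketched numerically for a finite birth–death chain with constant birth rate $\lambda$ and death rate $\mu$ (with $\lambda < \mu$, so the chain is weakly ergodic). The rates, the truncation level $N$, and the geometric choice $\delta\_n = \delta^n$ with $\delta = \sqrt{\mu/\lambda}$ are illustrative assumptions, not prescribed by the text.

```python
import numpy as np

# Illustrative finite birth-death chain on states 0..N with constant birth
# rate lam and death rate mu (lam < mu); rates and N are assumptions.
lam, mu, N = 1.0, 4.0, 50

def log_norm_l1(M):
    """Logarithmic norm induced by the l1 (column-sum) norm, as in (3)."""
    d = np.diag(M)
    return (d + np.abs(M).sum(axis=0) - np.abs(d)).max()

# Step 2: eliminate the border state 0 via p_0 = 1 - sum_{i>=1} p_i; the
# difference x of two solutions then satisfies x' = B x on states 1..N.
B = np.zeros((N, N))
for i in range(N):                     # row/column i corresponds to state i+1
    B[i, i] = -(lam + mu) if i < N - 1 else -mu
    if i > 0:
        B[i, i - 1] = lam
    if i < N - 1:
        B[i, i + 1] = mu
B[0, :] -= lam                         # correction from eliminating state 0

# Step 3: similarity transformation with delta_n = delta**n, delta = sqrt(mu/lam).
delta = np.sqrt(mu / lam)
d = delta ** np.arange(1, N + 1)
Bt = B * np.outer(d, 1.0 / d)          # Bt = D B D^{-1}, D = diag(d)

# Step 4: the logarithmic norm of Bt gives a decay rate beta > 0, i.e.,
# ||x(t)|| <= exp(-beta * t) * ||x(0)|| in the transformed norm.
beta = -log_norm_l1(Bt)
print(beta)                            # positive: certifies exponential weak ergodicity
```

With these parameters the sketch yields $\beta = 0.75$, somewhat below $(\sqrt{\mu} - \sqrt{\lambda})^2 = 1$, because the corrected first row slightly inflates some column sums; a more careful choice of the $\delta$-sequence near the boundary tightens the bound, which is exactly the guesswork described above.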
