**1. Introduction**

The topic of this paper concerns the analysis of (one-dimensional) inhomogeneous *continuous-time Markov chains* (CTMC) with discrete state space. Inhomogeneity means that some or all transition intensities are non-random functions of time, which may or may not depend on the state of the chain. For such mathematical models many operations research applications are known (see, for example, [1–4] and Section 5 of [5]), but the motivation of this paper is queueing; thus, all the examples considered in this paper are devoted to time-varying queues. A substantial literature on the problem exists, in which various aspects (such as existence of processes, numerical algorithms, asymptotics, and approximations) are analyzed. An attempt to give a systematic classification of the available approaches (based on the papers published up to 2016) is made in [5]; an up-to-date point of view is given in Sections 1 and 1.2 of [4] (see also [6]).

The specific question, being the topic of this paper, is the computation of the long-run (see, for example, the Introduction of [7]), (limiting) time-dependent performance characteristics of a CTMC with time-varying intensities. This question can be considered from different points of view: computation time, accuracy, complexity, storage use, etc. As a result, various solution techniques have been developed, but none of them is a ubiquitous tool. One way to improve the efficiency of a solution technique is to supply it with a method for limiting-regime detection (or, in other words, a method providing ergodicity bounds): once the limiting regime is reached, there is no need to continue the computation indefinitely. The main contribution of this paper is the review of one such method (see Section 2) and a demonstration of its applicability in two new use cases, not considered before in the literature (see Sections 4 and 5). It is worth noting that methods which provide ergodicity bounds can also be helpful whenever a truncation of the countable state space of the chain is required. The method presented in Section 2, whenever applicable, is helpful in this respect as well (see also [8,9]).

**Citation:** Zeifman, A.; Satin, Y.; Kovalev, I.; Razumchik, R.; Korolev, V. Facilitating Numerical Solutions of Inhomogeneous Continuous Time Markov Chains Using Ergodicity Bounds Obtained with Logarithmic Norm Method. *Mathematics* **2021**, *9*, 42. https://dx.doi.org/10.3390/math9010042

Received: 28 November 2020; Accepted: 19 December 2020; Published: 27 December 2020

**Copyright:** © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

The end of this section is devoted to a review (by no means exhaustive) of the popular solution techniques for the analysis of Markov chains in time-varying queueing models. Attention is drawn to the ability of a technique to yield limiting time-dependent performance characteristics of a Markov chain with time-varying intensities. For each technique mentioned (computer simulation methods and numerical transform inversion algorithms are not discussed here), we highlight whether any benefit can be gained when the technique is used along with a method providing ergodicity bounds.

In many applied settings the performance analysis is based on the procedure known as the point-wise stationary approximation [10] and its ramifications. According to it, the time-dependent probability vector **x**(*t*) at time *t* is approximated by the steady-state probability vector **y**(*t*), obtained by solving **y**(*t*)<sup>T</sup>*H*(*t*) = **0**<sup>T</sup> and **1**<sup>T</sup>**y**(*t*) = 1, where *H*(*t*) is the time-dependent intensity matrix (throughout the paper the vectors denoted by bold letters are regarded as column vectors, **e**<sub>*k*</sub> denotes the *k*th unit basis vector, and **1**<sup>T</sup> is the row vector of 1's, with <sup>T</sup> denoting the matrix transpose). In its initial version, the approximation breaks down if the instantaneous system load is allowed to exceed 1. In general, its quality depends on the values of the transition rates, and for some models (like time-dependent birth-and-death processes) the approach is proved to be correct asymptotically in the limit as transition intensities increase. Another fruitful set of techniques, which help one understand the performance of complex queueing systems, is the (conventional and many-server) heavy-traffic approximations (another approximation technique worth mentioning here, especially because of its applicability to non-Markov time-varying queues, is robust optimization; see [4], Section 2). Since scaling is important in heavy-traffic limits, the technique is usually more justified whenever the state space of a chain is, in some intuitive sense, close to continuous (see, e.g., [11,12] and no doubt others), and less (or even not at all) justified if the state space is essentially discrete (for example, when formed by the number of customers in the *M<sub>t</sub>*/*M<sub>t</sub>*/1/*N* system (for fixed *N*) at time *t*). Due to the nature of both classes of techniques mentioned above, they do not benefit from methods providing ergodicity bounds.
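As a quick illustration of the point-wise stationary approximation, the sketch below (a minimal NumPy example; the queue model, the rate functions, and the name `psa_vector` are ours, not from the cited works) freezes the intensity matrix of an *M<sub>t</sub>*/*M*/1/*N* queue at time *t* and solves the resulting stationary system, replacing one redundant balance equation with the normalization condition:

```python
import numpy as np

def psa_vector(arrival_rate, mu, t, N):
    """Point-wise stationary approximation for an Mt/M/1/N queue:
    freeze the generator at time t and solve y H(t) = 0, y summing to 1."""
    lam = arrival_rate(t)
    # Build the (N+1)x(N+1) birth-death intensity matrix H(t).
    H = np.zeros((N + 1, N + 1))
    for i in range(N):
        H[i, i + 1] = lam   # arrival: state i -> i+1
        H[i + 1, i] = mu    # service: state i+1 -> i
    np.fill_diagonal(H, -H.sum(axis=1))
    # Solve H(t)^T y = 0 with one equation replaced by the normalization.
    A = H.T.copy()
    A[-1, :] = 1.0
    b = np.zeros(N + 1)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# At t = 0 the frozen rates are lam = 2, mu = 3, so y is truncated geometric.
y = psa_vector(lambda t: 2.0 + np.sin(t), 3.0, t=0.0, N=10)
```

For a birth-and-death chain the frozen-rate solution is the usual truncated-geometric distribution with ratio λ(*t*)/μ, which gives a quick sanity check of the solver.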

The very popular set of techniques for calculating performance measures, which stands apart from the two mentioned above, comprises numerical methods for systems of ordinary differential equations (ODEs), namely the Kolmogorov forward equations (for an illustration the reader can refer to, for example, [13]). Due to the increasing computer power, such methods keep gaining popularity. By introducing approximations, these methods can be made more efficient. For example, when only the moments of the Markov chain are of interest, one can use closure approximations (see, e.g., [14–16]); since the moment dynamics are (when available) close to the true dynamics of the original process, the benefits from methods providing ergodicity bounds, when used alongside, are clear. Another method for the computation of transient distributions of Markov chains is uniformization (see [17]). It is numerically stable and, as reported, usually outperforms known differential equation solvers (see Section 6 of [18]).
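To make the ODE approach concrete, here is a minimal sketch (our own toy example: a two-state chain with one periodic rate; the name `kolmogorov_forward` is ours) that integrates the Kolmogorov forward equations with the classical fourth-order Runge–Kutta scheme:

```python
import numpy as np

def kolmogorov_forward(H_of_t, x0, t0, t1, steps):
    """Integrate the Kolmogorov forward equations dx/dt = x H(t)
    (x treated as a row probability vector) with classical RK4."""
    h = (t1 - t0) / steps
    x, t = np.asarray(x0, float).copy(), t0
    for _ in range(steps):
        k1 = x @ H_of_t(t)
        k2 = (x + 0.5 * h * k1) @ H_of_t(t + 0.5 * h)
        k3 = (x + 0.5 * h * k2) @ H_of_t(t + 0.5 * h)
        k4 = (x + h * k3) @ H_of_t(t + h)
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return x

# Two-state chain with a periodic up-rate and constant down-rate.
def H(t):
    a, b = 1.0 + 0.5 * np.sin(t), 2.0
    return np.array([[-a, a],
                     [b, -b]])

x = kolmogorov_forward(H, [1.0, 0.0], 0.0, 10.0, 2000)
```

Because the rows of *H*(*t*) sum to zero, each Runge–Kutta stage preserves the total probability exactly (up to rounding), which is a useful invariant to monitor during integration.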

Methods based on uniformization suffer when the convergence of a Markov chain is slow: in that case, computations involve a large number of matrix-vector products. An ODE technique yields the numerical values of performance measures, but it is complicated by a number of factors, among which we highlight only those related to the topic of this paper. Firstly, there can be infinitely many ODEs in the system of

equations. Traditionally this is circumvented by truncating the system, i.e., making the number of equations finite, but there is no general "rule of thumb" for choosing the truncation threshold. Secondly, the (time-dependent) limiting characteristics of a CTMC are usually identified with the solution of the system on some distant time interval (see, for example, [17–23]). This procedure yields the limiting characteristics with any desired accuracy whenever the CTMC is ergodic; yet, in general, it is not suitable for Markov chains with countable (or finite but large) state space. Moreover, it is not clear how to choose the position and the length of the "distant time interval" on which the solution of the system must be found: convergence tests are usually required, which result in additional computations. Thus, in practice, without an a priori understanding of when the limiting regime is reached, significant computational effort is required to make sure that the obtained solution is indeed the one required and, for example, that the steady state is not detected prematurely (see [24]). The authors of [20] propose a solution technique equipped with steady-state detection; as is shown there, it allows significant computational savings while simultaneously ensuring strict error bounding. Yet that technique is only applicable when the stationary solution of the Markov chain can be efficiently calculated in advance.
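A crude numerical substitute for an analytic ergodicity bound is to integrate the forward equations from two extreme initial conditions and watch the *l*<sub>1</sub>-distance between the solutions: once it is small, the influence of the initial condition has been forgotten and the limiting regime has effectively been reached. The following sketch (the birth-death model, its rates, and all function names are ours, chosen only for illustration) demonstrates the idea:

```python
import numpy as np

def bd_matrix(lam, mu, n):
    """Intensity matrix of a birth-death chain on {0, ..., n-1}."""
    H = np.zeros((n, n))
    for i in range(n - 1):
        H[i, i + 1], H[i + 1, i] = lam, mu
    np.fill_diagonal(H, -H.sum(axis=1))
    return H

def l1_distance_of_solutions(H_of_t, T, steps, n):
    """Integrate the forward equations from two extreme initial
    conditions (empty and full system) and return the l1-distance
    between the two probability vectors at time T."""
    h = T / steps
    x = np.zeros(n); x[0] = 1.0    # chain started empty
    z = np.zeros(n); z[-1] = 1.0   # chain started full
    t = 0.0
    for _ in range(steps):
        x = x + h * (x @ H_of_t(t))   # explicit Euler; h assumed small
        z = z + h * (z @ H_of_t(t))
        t += h
    return np.abs(x - z).sum()

H = lambda t: bd_matrix(1.0 + 0.5 * np.cos(t), 2.0, 6)
d_early = l1_distance_of_solutions(H, T=1.0, steps=200, n=6)
d_late = l1_distance_of_solutions(H, T=20.0, steps=4000, n=6)
```

For an ergodic chain the distance is non-increasing in *T*, so `d_late` is guaranteed to be smaller than `d_early`; the drawback, as noted above, is that this check itself costs extra integrations, which is precisely what an a priori ergodicity bound avoids.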

The approaches mentioned in the previous paragraph benefit straightforwardly from methods providing an a priori determination of the point of convergence. Although in general this task is not feasible, certain techniques exist which provide ergodicity bounds for some classes of Markov chains. In the next section we review one such technique, developed by the authors, which is based on the logarithmic norm of linear operators and special transformations of the intensity matrix governing the behaviour of a CTMC. In Sections 3–5 it is applied to three use cases. Section 6 concludes the paper.
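For the *l*<sub>1</sub>-norm used throughout this paper, the logarithmic norm of a matrix *A* has the explicit form γ(*A*) = max<sub>*j*</sub>(*a<sub>jj</sub>* + ∑<sub>*i*≠*j*</sub> |*a<sub>ij</sub>*|). A minimal sketch (the function name is ours) of its computation:

```python
import numpy as np

def log_norm_l1(A):
    """Logarithmic norm of A induced by the l1 vector norm:
    gamma(A) = max over columns j of ( a_jj + sum_{i != j} |a_ij| )."""
    A = np.asarray(A, float)
    d = np.diag(A)
    return (np.abs(A).sum(axis=0) - np.abs(d) + d).max()

# For a transposed intensity matrix (columns sum to zero, off-diagonal
# entries non-negative) the logarithmic norm is exactly zero.
H = np.array([[-1.0, 1.0, 0.0],
              [2.0, -3.0, 1.0],
              [0.0, 2.0, -2.0]])  # intensity matrix: rows sum to zero
g = log_norm_l1(H.T)
```

For any transposed intensity matrix γ is exactly 0, which yields only the trivial bound; this is why the technique reviewed in Section 2 applies special transformations of the intensity matrix, after which the logarithmic norm can become strictly negative and deliver an exponential ergodicity bound.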

In what follows, ‖·‖ denotes the *l*<sub>1</sub>-norm, i.e., if **x** is an (*l* + 1)-dimensional column vector, then ‖**x**‖ = ∑<sub>*k*=0</sub><sup>*l*</sup> |*x<sub>k</sub>*|. If **x** is a probability vector, then ‖**x**‖ = 1. The choice of operator norm is the one induced by the *l*<sub>1</sub>-norm on column vectors, i.e., ‖*A*‖ = sup<sub>0≤*j*≤*l*</sub> ∑<sub>*i*=0</sub><sup>*l*</sup> |*a<sub>ij</sub>*| for a linear operator (matrix) *A*.
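In code, these two norms (the *l*<sub>1</sub> vector norm and its induced operator norm, the maximum absolute column sum) look as follows (a trivial NumPy sketch; the function names are ours):

```python
import numpy as np

def l1_vector_norm(x):
    """l1-norm of a vector: the sum of absolute values of its coordinates."""
    return np.abs(np.asarray(x, float)).sum()

def l1_operator_norm(A):
    """Operator norm induced by the l1 vector norm:
    the maximum absolute column sum of the matrix."""
    return np.abs(np.asarray(A, float)).sum(axis=0).max()
```

For a probability vector the *l*<sub>1</sub>-norm is 1, and for a stochastic-type matrix the induced norm reduces to the largest column mass, which is what makes this pair of norms natural for probability dynamics.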
