Article

A Matrix-Multiplicative Solution for Multi-Dimensional QBD Processes

Service Innovation Research Institute, Annankatu 8 A, 00120 Helsinki, Finland
Mathematics 2024, 12(3), 444; https://doi.org/10.3390/math12030444
Submission received: 21 December 2023 / Revised: 25 January 2024 / Accepted: 29 January 2024 / Published: 30 January 2024
(This article belongs to the Special Issue Stochastic Processes: Theory, Simulation and Applications)

Abstract
We consider an irreducible positive-recurrent discrete-time Markov process on the state space $X = \mathbb{Z}_+^M \times J$, where $\mathbb{Z}_+$ is the set of non-negative integers and $J = \{1, 2, \ldots, n\}$. The number of states in $J$ may be either finite or infinite. We assume that the process is a homogeneous quasi-birth-and-death process (QBD): the one-step transition probability between non-boundary states $(\mathbf{k}, i)$ and $(\mathbf{n}, j)$ may depend on $i$, $j$, and $\mathbf{n} - \mathbf{k}$, but not on the specific values of $\mathbf{k}$ and $\mathbf{n}$. It is shown that the stationary probability vector of the process is expressed through square matrices of order $n$, which are the minimal non-negative solutions to nonlinear matrix equations.

1. Introduction

Birth-and-death processes [1,2] represent a basic model of the temporal evolution of population size. In recent decades, various useful generalizations of this simple model have emerged.
The framework of interacting particle systems (IPS), introduced independently by Spitzer [3] and Dobrushin [4,5], makes it possible to describe not only the dynamics of population size but also spatio-temporal information about all individuals in the population. Members of the population are located on the vertices of a graph, typically the $d$-dimensional integer lattice. Each particle has a state selected from a finite set of possible states. The dynamics are described via local interactions specifying the rate at which a vertex changes its state based on the states of its neighbors. Lyapunov functions [6] are commonly employed to investigate IPS stability conditions. Contact processes [7], spatial birth-and-death processes [8], lattice birth-and-death processes [9], and spatial birth–death–move processes [10] are only a few of the interacting particle systems that generalize simple birth-and-death processes. For details, see, e.g., [11,12,13,14].
Another generalization of the simple birth-and-death process is the birth-and-death process in a random environment [15,16,17,18]. Such a process has two components: one describes the dynamics of the population size, and the other represents the state (phase) of the external environment. If the process is a Markov chain, it is called a quasi-birth-and-death process (QBD).
The state space of a QBD can be partitioned into non-empty disjoint subsets called levels, such that one-step state transitions are restricted to states at the same level or two adjacent levels. The transition matrix of a QBD is block-tridiagonal. If all blocks along the main diagonal, except the boundary ones, are identical, then the QBD is said to be level-independent.
Let the stationary probability vector $\mathbf{p}$ of an irreducible and positive-recurrent level-independent QBD process be partitioned into subvectors $\mathbf{p} = [\boldsymbol{\pi}_0, \boldsymbol{\pi}_1, \boldsymbol{\pi}_2, \ldots]$, corresponding to the partition of the state space into levels. Then, these subvectors satisfy
$$\boldsymbol{\pi}_n = \boldsymbol{\pi}_{n-1} \mathbf{R}, \quad n \ge 1,$$
where the matrix $\mathbf{R}$, called the rate matrix, is the minimal non-negative solution of a quadratic matrix equation (see [19,20] and Chapter 6 in [21]). In general, quadratic matrix equations admit no explicit solution. However, the iterative logarithmic reduction algorithm [22] has proven to be an efficient way to compute the minimal non-negative solution $\mathbf{R}$ [23,24,25].
The matrix-geometric method proposed in [19] led to the rapid development of new matrix-analytic methods for stochastic modeling. These methods, initiated by Neuts [19,26], provide a powerful framework for the unified analysis of large classes of Markov processes and, more importantly, for their numerical solution. Matrix-analytic methods have found applications in a wide range of fields, including supply chains [27], retrial systems [28], cognitive radio [29], and more. The current state of matrix-analytic methods and their applications is outlined in [22,30,31,32,33,34,35,36,37,38].
The multi-dimensional quasi-birth-and-death process (Md-QBD) is a Markov chain $x(t) = (\boldsymbol{\alpha}(t), \beta(t))$ on the state space $X = \mathbb{Z}_+^M \times J$, where $\mathbb{Z}_+$ is the set of non-negative integers and $J = \{1, 2, \ldots, n\}$ is the environmental space. The first component of the process $x(t)$ is a multiclass birth-and-death process $\boldsymbol{\alpha}(t) = (\alpha_1(t), \alpha_2(t), \ldots, \alpha_M(t))$. The second component $\beta(t)$, called the phase process, takes values from the set $J$. The number $n$ of phases may be either finite or infinite. One-step transitions of $x(t)$ from a state $(\mathbf{k}, i)$ are restricted to states $(\mathbf{n}, j)$ such that $\mathbf{k} - \mathbf{n} \in \{-1, 0, 1\}^M$.
By choosing any one of the coordinates of the process $\boldsymbol{\alpha}(t)$, say coordinate $k$, the Md-QBD $x(t) = ((\alpha_1(t), \alpha_2(t), \ldots, \alpha_M(t)), \beta(t))$ with the phase space $J$ can be transformed into a one-dimensional level-dependent QBD (LD-QBD) process $x_k(t) = (\alpha_k(t), (\alpha_1(t), \ldots, \alpha_{k-1}(t), \alpha_{k+1}(t), \ldots, \alpha_M(t), \beta(t)))$ with the phase space $\mathbb{Z}_+^{M-1} \times J$ [39]. We obtain another one-dimensional LD-QBD process if each level $l$ is obtained by merging all state subsets $\{(k_1, \ldots, k_M)\} \times J$ with $\max k_i = l$ [40].
An explicit analytical representation of the stationary distribution of Md-QBDs is unknown, and most work has been devoted to deriving asymptotic formulas for the stationary distribution. Asymptotic properties of the stationary distribution of 2d-QBD processes were studied in [41,42,43,44]. The conditions ensuring a positive-recurrent or transient 2d-QBD process were analyzed in [45].
In a general Md-QBD, several components of the process may change their values simultaneously. In a simple Md-QBD, at most one component of $\boldsymbol{\alpha}(t)$ may change at a time: one-step transitions from any state $((k_1, k_2, \ldots, k_M), i)$ are restricted to states $((n_1, n_2, \ldots, n_M), j)$ such that $|k_1 - n_1| + |k_2 - n_2| + \cdots + |k_M - n_M| \le 1$. A simple Md-QBD may be analyzed using the iterative power series algorithm (PSA) introduced in [46]. The review article [47] and Chapter 4 in [48] describe the application of the PSA to the analysis of simple Md-QBDs.
The state space of an Md-QBD can be partitioned into the levels $X_l = Z_l \times J$, $l \ge 0$, where $Z_0 = \{\mathbf{n} \in \mathbb{Z}_+^M \mid \mathbf{n} \not\ge \mathbf{1}\}$ and $Z_l = \{\mathbf{n} \in \mathbb{Z}_+^M \mid \mathbf{n} \ge \mathbf{1}, \min_i(n_i) = l\}$, $l \ge 1$. Due to the homogeneity of the process, it can be considered as a one-dimensional level-independent QBD with an infinite rate matrix $\mathbf{R}$ whose entries are indexed by the elements of the set $X_1$.
This work aims to show that the stationary distribution of a simple Md-QBD has a matrix-multiplicative solution expressed in terms of square matrices of order $n$, and to propose iterative algorithms for computing these matrices. This may serve as a basis for efficient algorithms for computing the stationary distributions of Md-QBDs in the future. In Section 2, we study systems of nonlinear matrix equations for substochastic matrices and propose several iterative algorithms for computing their minimal non-negative solutions. In Section 3, for a simple irreducible and positive-recurrent discrete-time Md-QBD, we derive the matrix of the expected sojourn times in the states of the sets $X(\mathbf{w}) = \{(\mathbf{k}, i) \in X \mid \mathbf{k} \ge \mathbf{w}\}$, $\mathbf{w} \ge \mathbf{1}$, before the first visit to any state outside $X(\mathbf{w})$. The matrix-multiplicative solution for the stationary distribution of the process in terms of square matrices of order $n$ is obtained in Section 4. We give some concluding remarks in Section 5.
We use bold capital letters to denote matrices and bold lowercase letters to denote vectors. For vectors $\mathbf{x} = (x_1, x_2, \ldots, x_M)$ and $\mathbf{y} = (y_1, y_2, \ldots, y_M)$, $\mathbf{x} \ge \mathbf{y}$ means that $x_j \ge y_j$ for all $j$, and $\mathbf{x} \not\ge \mathbf{y}$ means that $x_j < y_j$ for at least one value of $j$. The notations $\mathbf{x} \le \mathbf{y}$ and $\mathbf{x} \not\le \mathbf{y}$ are defined similarly. The vector $\mathbf{1}$ is the vector of all ones, and $\mathbf{e}_m$ is the vector with all entries zero except the $m$th, which equals one.

2. Preliminaries

In what follows, $\mathbf{Q}_m$, $-M \le m \le M$, are non-negative square matrices of finite or infinite order such that $\mathbf{Q} = \sum_{r=-M}^{M} \mathbf{Q}_r$ is a stochastic matrix, and monotonicity and convergence of a sequence of matrices mean their entry-wise monotonicity and convergence.
Theorem 1.
The sequence of $M$-tuples $\{(\mathbf{G}_m^{(k)}, 1 \le m \le M), k \ge 0\}$, recursively defined by $\mathbf{G}_m^{(0)} = \mathbf{O}$ for $1 \le m \le M$ and
$$\mathbf{G}_m^{(k+1)} = \mathbf{Q}_{-m} + \Big(\mathbf{Q}_0 + \sum_{s=1}^{M} \mathbf{Q}_s \mathbf{G}_s^{(k)}\Big) \mathbf{G}_m^{(k)} \quad \text{for } 1 \le m \le M \text{ and } k \ge 0, \tag{1}$$
is monotonically increasing and converges to the $M$-tuple $(\mathbf{G}_m, 1 \le m \le M)$, which is the minimal solution of the system
$$\mathbf{Y}_m = \mathbf{Q}_{-m} + \Big(\mathbf{Q}_0 + \sum_{s=1}^{M} \mathbf{Q}_s \mathbf{Y}_s\Big) \mathbf{Y}_m \quad \text{for } 1 \le m \le M, \tag{2}$$
in the set of $M$-tuples $(\mathbf{Y}_m, 1 \le m \le M)$ of non-negative matrices. The matrix $\mathbf{G} = \sum_{r=1}^{M} \mathbf{G}_r$ is substochastic.
Proof of Theorem 1.
We first show that the sequence $\{(\mathbf{G}_m^{(k)}, 1 \le m \le M), k \ge 0\}$ is monotonically increasing and satisfies $\sum_{m=1}^{M} \mathbf{G}_m^{(k)} \mathbf{1} \le \mathbf{1}$ for all $k \ge 0$. We proceed by induction.
Since $\mathbf{G}_m^{(0)} = \mathbf{O}$, we have $\mathbf{G}_m^{(1)} = \mathbf{Q}_{-m} \ge \mathbf{G}_m^{(0)}$ and $\sum_{m=1}^{M} \mathbf{G}_m^{(1)} \mathbf{1} \le \mathbf{1}$. Assume that $\mathbf{G}_m^{(k)} \ge \mathbf{G}_m^{(k-1)}$ and $\sum_{m=1}^{M} \mathbf{G}_m^{(k)} \mathbf{1} \le \mathbf{1}$ for some $k$ and all $1 \le m \le M$. Then,
$$\mathbf{G}_m^{(k+1)} = \mathbf{Q}_{-m} + \Big(\mathbf{Q}_0 + \sum_{s=1}^{M} \mathbf{Q}_s \mathbf{G}_s^{(k)}\Big) \mathbf{G}_m^{(k)} \ge \mathbf{Q}_{-m} + \Big(\mathbf{Q}_0 + \sum_{s=1}^{M} \mathbf{Q}_s \mathbf{G}_s^{(k-1)}\Big) \mathbf{G}_m^{(k-1)} = \mathbf{G}_m^{(k)},$$
and
$$\sum_{m=1}^{M} \mathbf{G}_m^{(k+1)} \mathbf{1} = \sum_{m=1}^{M} \mathbf{Q}_{-m} \mathbf{1} + \Big(\mathbf{Q}_0 + \sum_{s=1}^{M} \mathbf{Q}_s \mathbf{G}_s^{(k)}\Big) \sum_{m=1}^{M} \mathbf{G}_m^{(k)} \mathbf{1} \le \sum_{m=1}^{M} \mathbf{Q}_{-m} \mathbf{1} + \Big(\mathbf{Q}_0 + \sum_{s=1}^{M} \mathbf{Q}_s \mathbf{G}_s^{(k)}\Big) \mathbf{1} \le \sum_{m=-M}^{M} \mathbf{Q}_m \mathbf{1} = \mathbf{1},$$
which proves the induction step. Thus, for every $1 \le m \le M$, the sequence $\mathbf{G}_m^{(k)}$, $k = 0, 1, 2, \ldots$, is bounded and monotonically increasing. This implies the existence of the limits $\mathbf{G}_m = \lim_{k \to \infty} \mathbf{G}_m^{(k)}$, $1 \le m \le M$, which satisfy the system (2) and the inequality $\mathbf{G} \mathbf{1} = \sum_{m=1}^{M} \mathbf{G}_m \mathbf{1} \le \mathbf{1}$.
Assume that $(\mathbf{Y}_m^*, 1 \le m \le M)$ is another non-negative solution of (2). We show by induction that $\mathbf{G}_m^{(k)} \le \mathbf{Y}_m^*$ for all $1 \le m \le M$ and all $k$. Since $\mathbf{Y}_m^* \ge \mathbf{O}$, we have $\mathbf{G}_m^{(0)} = \mathbf{O} \le \mathbf{Y}_m^*$ for all $1 \le m \le M$. Now, assume that $\mathbf{G}_m^{(k)} \le \mathbf{Y}_m^*$ for some $k$ and all $1 \le m \le M$. Then,
$$\mathbf{G}_m^{(k+1)} = \mathbf{Q}_{-m} + \Big(\mathbf{Q}_0 + \sum_{s=1}^{M} \mathbf{Q}_s \mathbf{G}_s^{(k)}\Big) \mathbf{G}_m^{(k)} \le \mathbf{Q}_{-m} + \Big(\mathbf{Q}_0 + \sum_{s=1}^{M} \mathbf{Q}_s \mathbf{Y}_s^*\Big) \mathbf{Y}_m^* = \mathbf{Y}_m^*,$$
which proves the induction step and the minimality of the $M$-tuple $(\mathbf{G}_m, 1 \le m \le M)$. □
The substochastic matrices $\mathbf{G}_m$ satisfy the system
$$\mathbf{G}_m = \mathbf{Q}_{-m} + \Big(\mathbf{Q}_0 + \sum_{s=1}^{M} \mathbf{Q}_s \mathbf{G}_s\Big) \mathbf{G}_m, \quad 1 \le m \le M. \tag{3}$$
Therefore, we have
$$(\mathbf{I} - \mathbf{U}) \mathbf{G}_m = \mathbf{Q}_{-m}, \quad 1 \le m \le M, \tag{4}$$
where the matrix $\mathbf{U}$ is defined by
$$\mathbf{U} = \mathbf{Q}_0 + \sum_{s=1}^{M} \mathbf{Q}_s \mathbf{G}_s. \tag{5}$$
The following theorem describes some properties of the matrix $\mathbf{U}$.
Theorem 2.
Let the $M$-tuple $(\mathbf{G}_m, 1 \le m \le M)$ be the minimal non-negative solution of the system (2). Then, the matrix $\mathbf{U}$ defined in (5) is substochastic. It is the minimal solution of the equation
$$\mathbf{X} = \mathbf{Q}_0 + \sum_{n=0}^{\infty} \sum_{m=1}^{M} \mathbf{Q}_m \mathbf{X}^n \mathbf{Q}_{-m} \tag{6}$$
in the set of non-negative matrices $\mathbf{X}$ for which the series converges.
The sequences $\{\underline{\mathbf{U}}^{(k)}, k \ge 1\}$ and $\{\overline{\mathbf{U}}^{(k)}, k \ge 1\}$, defined by
$$\underline{\mathbf{U}}^{(1)} = \mathbf{Q}_0, \quad \underline{\mathbf{U}}^{(k+1)} = \mathbf{Q}_0 + \sum_{n=0}^{k} \sum_{m=1}^{M} \mathbf{Q}_m \big(\underline{\mathbf{U}}^{(k)}\big)^n \mathbf{Q}_{-m} \quad \text{for } k \ge 1, \tag{7}$$
$$\overline{\mathbf{U}}^{(1)} = \mathbf{Q}_0, \quad \overline{\mathbf{U}}^{(k+1)} = \mathbf{Q}_0 + \sum_{n=0}^{\infty} \sum_{m=1}^{M} \mathbf{Q}_m \big(\overline{\mathbf{U}}^{(k)}\big)^n \mathbf{Q}_{-m} \quad \text{for } k \ge 1, \tag{8}$$
are monotonically increasing and converge to $\mathbf{U}$.
Proof of Theorem 2.
The matrix $\mathbf{U}$ is substochastic since the matrix $\mathbf{Q} = \sum_{m=-M}^{M} \mathbf{Q}_m$ is stochastic and $\mathbf{G}_m \mathbf{1} \le \mathbf{1}$ for all $1 \le m \le M$.
From (3) and (5), it follows that
$$\mathbf{U} = \mathbf{Q}_0 + \sum_{m=1}^{M} \mathbf{Q}_m \mathbf{G}_m = \mathbf{Q}_0 + \sum_{m=1}^{M} \mathbf{Q}_m (\mathbf{Q}_{-m} + \mathbf{U} \mathbf{G}_m) = \mathbf{Q}_0 + \sum_{m=1}^{M} \mathbf{Q}_m \mathbf{Q}_{-m} + \sum_{m=1}^{M} \mathbf{Q}_m \mathbf{U} \mathbf{Q}_{-m} + \sum_{m=1}^{M} \mathbf{Q}_m \mathbf{U}^2 \mathbf{Q}_{-m} + \cdots = \mathbf{Q}_0 + \sum_{n=0}^{\infty} \sum_{m=1}^{M} \mathbf{Q}_m \mathbf{U}^n \mathbf{Q}_{-m}.$$
Therefore, the matrix $\mathbf{U}$ satisfies (6).
We now show by induction that the sequence $\{\overline{\mathbf{U}}^{(k)}, k \ge 1\}$ is monotonically increasing and satisfies $\overline{\mathbf{U}}^{(k)} \le \mathbf{U}$ for all $k \ge 1$. Since $\overline{\mathbf{U}}^{(1)} = \mathbf{Q}_0 \le \mathbf{U}$ and
$$\overline{\mathbf{U}}^{(2)} = \mathbf{Q}_0 + \sum_{n=0}^{\infty} \sum_{m=1}^{M} \mathbf{Q}_m \mathbf{Q}_0^n \mathbf{Q}_{-m} \le \mathbf{Q}_0 + \sum_{n=0}^{\infty} \sum_{m=1}^{M} \mathbf{Q}_m \mathbf{U}^n \mathbf{Q}_{-m} = \mathbf{U},$$
we have $\overline{\mathbf{U}}^{(2)} \ge \overline{\mathbf{U}}^{(1)}$ and $\overline{\mathbf{U}}^{(2)} \le \mathbf{U}$. Assume that $\overline{\mathbf{U}}^{(k)} \ge \overline{\mathbf{U}}^{(k-1)}$ and $\overline{\mathbf{U}}^{(k)} \le \mathbf{U}$ for some $k$. Then,
$$\overline{\mathbf{U}}^{(k+1)} = \mathbf{Q}_0 + \sum_{n=0}^{\infty} \sum_{m=1}^{M} \mathbf{Q}_m \big(\overline{\mathbf{U}}^{(k)}\big)^n \mathbf{Q}_{-m} \ge \mathbf{Q}_0 + \sum_{n=0}^{\infty} \sum_{m=1}^{M} \mathbf{Q}_m \big(\overline{\mathbf{U}}^{(k-1)}\big)^n \mathbf{Q}_{-m} = \overline{\mathbf{U}}^{(k)}$$
and
$$\overline{\mathbf{U}}^{(k+1)} = \mathbf{Q}_0 + \sum_{n=0}^{\infty} \sum_{m=1}^{M} \mathbf{Q}_m \big(\overline{\mathbf{U}}^{(k)}\big)^n \mathbf{Q}_{-m} \le \mathbf{Q}_0 + \sum_{n=0}^{\infty} \sum_{m=1}^{M} \mathbf{Q}_m \mathbf{U}^n \mathbf{Q}_{-m} = \mathbf{U}.$$
Thus, the sequence $\overline{\mathbf{U}}^{(k)}$, $k = 1, 2, \ldots$, is bounded and monotonically increasing. This implies the existence of the limit $\overline{\mathbf{U}} = \lim_{k \to \infty} \overline{\mathbf{U}}^{(k)}$, which satisfies Equation (6) and the inequality $\overline{\mathbf{U}} \le \mathbf{U}$. Similarly, it can be proven that the sequence $\underline{\mathbf{U}}^{(k)}$, $k = 1, 2, \ldots$, is monotonically increasing and bounded by $\mathbf{U}$, so that the limit $\underline{\mathbf{U}} = \lim_{k \to \infty} \underline{\mathbf{U}}^{(k)} \le \mathbf{U}$ exists and satisfies Equation (6).
Consider the sequence $\{\widetilde{\mathbf{U}}^{(k)}, k \ge 1\}$ defined by
$$\widetilde{\mathbf{U}}^{(k)} = \mathbf{Q}_0 + \sum_{m=1}^{M} \mathbf{Q}_m \mathbf{G}_m^{(k-1)} \tag{9}$$
with $\mathbf{G}_m^{(k)}$ defined in Theorem 1. Since
$$\mathbf{G}_m^{(k)} = \mathbf{Q}_{-m} + \widetilde{\mathbf{U}}^{(k)} \mathbf{G}_m^{(k-1)}, \tag{10}$$
we have
$$\widetilde{\mathbf{U}}^{(k+1)} = \mathbf{Q}_0 + \sum_{m=1}^{M} \mathbf{Q}_m \mathbf{G}_m^{(k)} = \mathbf{Q}_0 + \sum_{m=1}^{M} \mathbf{Q}_m \mathbf{Q}_{-m} + \sum_{m=1}^{M} \mathbf{Q}_m \widetilde{\mathbf{U}}^{(k)} \mathbf{G}_m^{(k-1)} = \cdots = \mathbf{Q}_0 + \sum_{n=0}^{k-1} \sum_{m=1}^{M} \mathbf{Q}_m \widetilde{\mathbf{U}}^{(k)} \widetilde{\mathbf{U}}^{(k-1)} \cdots \widetilde{\mathbf{U}}^{(k-n+1)} \mathbf{Q}_{-m},$$
where the product corresponding to $n = 0$ is understood to be the identity matrix.
Since, by Theorem 1, the sequences $\mathbf{G}_m^{(k)}$ are monotonically increasing, it follows from (9) that
$$\widetilde{\mathbf{U}}^{(k+1)} = \widetilde{\mathbf{U}}^{(k)} + \sum_{m=1}^{M} \mathbf{Q}_m \big(\mathbf{G}_m^{(k)} - \mathbf{G}_m^{(k-1)}\big) \ge \widetilde{\mathbf{U}}^{(k)}, \tag{11}$$
so the sequence $\{\widetilde{\mathbf{U}}^{(k)}, k \ge 1\}$ is increasing. Bounding each product $\widetilde{\mathbf{U}}^{(k)} \widetilde{\mathbf{U}}^{(k-1)} \cdots \widetilde{\mathbf{U}}^{(k-n+1)}$ in the expansion above by $\big(\widetilde{\mathbf{U}}^{(k)}\big)^n$, we obtain the following inequality:
$$\widetilde{\mathbf{U}}^{(k+1)} \le \mathbf{Q}_0 + \sum_{n=0}^{k} \sum_{m=1}^{M} \mathbf{Q}_m \big(\widetilde{\mathbf{U}}^{(k)}\big)^n \mathbf{Q}_{-m}. \tag{12}$$
We use this to show that $\widetilde{\mathbf{U}}^{(k)} \le \underline{\mathbf{U}}^{(k)} \le \overline{\mathbf{U}}^{(k)}$ for all $k \ge 1$, proceeding by induction. We have $\widetilde{\mathbf{U}}^{(1)} = \underline{\mathbf{U}}^{(1)} = \overline{\mathbf{U}}^{(1)} = \mathbf{Q}_0$; therefore, $\widetilde{\mathbf{U}}^{(1)} \le \underline{\mathbf{U}}^{(1)} \le \overline{\mathbf{U}}^{(1)}$. Now suppose that $\widetilde{\mathbf{U}}^{(k)} \le \underline{\mathbf{U}}^{(k)} \le \overline{\mathbf{U}}^{(k)}$ for some $k$. It follows from (12) that
$$\widetilde{\mathbf{U}}^{(k+1)} \le \mathbf{Q}_0 + \sum_{n=0}^{k} \sum_{m=1}^{M} \mathbf{Q}_m \big(\widetilde{\mathbf{U}}^{(k)}\big)^n \mathbf{Q}_{-m} \le \mathbf{Q}_0 + \sum_{n=0}^{k} \sum_{m=1}^{M} \mathbf{Q}_m \big(\underline{\mathbf{U}}^{(k)}\big)^n \mathbf{Q}_{-m} = \underline{\mathbf{U}}^{(k+1)}$$
and
$$\underline{\mathbf{U}}^{(k+1)} = \mathbf{Q}_0 + \sum_{n=0}^{k} \sum_{m=1}^{M} \mathbf{Q}_m \big(\underline{\mathbf{U}}^{(k)}\big)^n \mathbf{Q}_{-m} \le \mathbf{Q}_0 + \sum_{n=0}^{\infty} \sum_{m=1}^{M} \mathbf{Q}_m \big(\overline{\mathbf{U}}^{(k)}\big)^n \mathbf{Q}_{-m} = \overline{\mathbf{U}}^{(k+1)},$$
which proves the induction step. Thus, $\widetilde{\mathbf{U}}^{(k)} \le \underline{\mathbf{U}}^{(k)} \le \overline{\mathbf{U}}^{(k)}$ for all $k \ge 1$. By Theorem 1 and (5), $\lim_{k \to \infty} \widetilde{\mathbf{U}}^{(k)} = \mathbf{Q}_0 + \sum_{m=1}^{M} \mathbf{Q}_m \mathbf{G}_m = \mathbf{U}$, and since we have already shown that $\overline{\mathbf{U}} \le \mathbf{U}$, the sequences $\{\widetilde{\mathbf{U}}^{(k)}\}$, $\{\underline{\mathbf{U}}^{(k)}\}$, and $\{\overline{\mathbf{U}}^{(k)}\}$ all converge to the same solution $\mathbf{U} = \underline{\mathbf{U}} = \overline{\mathbf{U}}$ of Equation (6).
Assume that $\mathbf{X}^*$ is another non-negative solution of (6). We show that $\overline{\mathbf{U}}^{(k)} \le \mathbf{X}^*$ for all $k$, so that $\mathbf{U} = \lim_{k \to \infty} \overline{\mathbf{U}}^{(k)} \le \mathbf{X}^*$. We prove this by induction. Since
$$\mathbf{X}^* = \mathbf{Q}_0 + \sum_{n=0}^{\infty} \sum_{m=1}^{M} \mathbf{Q}_m (\mathbf{X}^*)^n \mathbf{Q}_{-m} \ge \mathbf{Q}_0,$$
we have $\overline{\mathbf{U}}^{(1)} = \mathbf{Q}_0 \le \mathbf{X}^*$. Now, suppose that $\overline{\mathbf{U}}^{(k)} \le \mathbf{X}^*$ for some $k$. Then,
$$\overline{\mathbf{U}}^{(k+1)} = \mathbf{Q}_0 + \sum_{n=0}^{\infty} \sum_{m=1}^{M} \mathbf{Q}_m \big(\overline{\mathbf{U}}^{(k)}\big)^n \mathbf{Q}_{-m} \le \mathbf{Q}_0 + \sum_{n=0}^{\infty} \sum_{m=1}^{M} \mathbf{Q}_m (\mathbf{X}^*)^n \mathbf{Q}_{-m} = \mathbf{X}^*,$$
which proves the induction step. Thus, $\mathbf{U} = \lim_{k \to \infty} \overline{\mathbf{U}}^{(k)} \le \mathbf{X}^*$, and $\mathbf{U}$ is the minimal solution of (6). □
For any substochastic matrix $\mathbf{M}$, we say that the series $\sum_{k \ge 0} \mathbf{M}^k$ converges if it has finite entries. In this case, we say that the matrix $\mathbf{I} - \mathbf{M}$ is invertible and write $\sum_{k \ge 0} \mathbf{M}^k = (\mathbf{I} - \mathbf{M})^{-1}$, even if the order of $\mathbf{M}$ is infinite. If the series $\sum_{k=0}^{\infty} \mathbf{U}^k$ converges, then the matrix $\mathbf{U}$, defined by (5), satisfies
$$\mathbf{U} = \mathbf{Q}_0 + \sum_{m=1}^{M} \mathbf{Q}_m (\mathbf{I} - \mathbf{U})^{-1} \mathbf{Q}_{-m}. \tag{13}$$
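As a sketch of how the sequence in (8) can be evaluated through (13), the following Python fragment iterates $\mathbf{U} \leftarrow \mathbf{Q}_0 + \sum_m \mathbf{Q}_m (\mathbf{I} - \mathbf{U})^{-1} \mathbf{Q}_{-m}$ starting from $\mathbf{Q}_0$. The matrices below are an illustrative toy example ($M = 2$, $n = 2$, with weights chosen so that $\sum_r \mathbf{Q}_r$ is stochastic with downward drift); they are assumptions for demonstration, not data from this paper.

```python
import numpy as np

def minimal_U(Q_down, Q0, Q_up, tol=1e-12, max_iter=10_000):
    """Iteration (8), evaluated through (13):
    U <- Q_0 + sum_m Q_m (I - U)^{-1} Q_{-m}, started from U = Q_0.

    Q_down[m-1] plays the role of Q_{-m} and Q_up[m-1] of Q_m."""
    n = Q0.shape[0]
    I = np.eye(n)
    U = Q0.copy()
    for _ in range(max_iter):
        N = np.linalg.solve(I - U, I)          # (I - U)^{-1} = sum_k U^k
        U_new = Q0 + sum(A @ N @ B for A, B in zip(Q_up, Q_down))
        if np.abs(U_new - U).max() < tol:
            return U_new
        U = U_new
    raise RuntimeError("iteration did not converge")

# illustrative toy data: M = 2 coordinates, n = 2 phases,
# weights chosen so that Q = sum_r Q_r is stochastic with downward drift
P = np.array([[0.5, 0.5], [0.3, 0.7]])
Q = {-2: 0.30 * P, -1: 0.25 * P, 0: 0.15 * P, 1: 0.15 * P, 2: 0.15 * P}
U = minimal_U([Q[-1], Q[-2]], Q[0], [Q[1], Q[2]])
```

Each step solves a linear system instead of forming the Neumann series $\sum_k \mathbf{U}^k$ explicitly; by Theorem 2, the iterates increase monotonically to the minimal solution $\mathbf{U}$ of (6).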
Equations (9) and (10) can be used as the basis of the following algorithm for the calculation of the matrices $\mathbf{G}_m$, $1 \le m \le M$, and $\mathbf{U}$:
$$\mathbf{G}_m^{(0)} = \mathbf{O}, \quad 1 \le m \le M; \tag{14}$$
$$\widetilde{\mathbf{U}}^{(k)} = \mathbf{Q}_0 + \sum_{m=1}^{M} \mathbf{Q}_m \mathbf{G}_m^{(k-1)}, \tag{15}$$
$$\mathbf{G}_m^{(k)} = \mathbf{Q}_{-m} + \widetilde{\mathbf{U}}^{(k)} \mathbf{G}_m^{(k-1)}, \quad 1 \le m \le M, \ k = 1, 2, \ldots \tag{16}$$
The convergence of this iterative scheme was established in the proof of Theorem 2. Although the sequence $\widetilde{\mathbf{U}}^{(k)}$ converges more slowly than the sequences in (7) and (8), the number of arithmetic operations per iteration is significantly smaller than for (7) and (8). The algorithm (14)–(16) involves $2M$ matrix multiplications and $2M$ matrix additions per iteration. For finite $n$, this requires $2Mn^3$ scalar multiplications and $2M(n^3 - n^2) + 2Mn^2 = 2Mn^3$ scalar additions. Therefore, the worst-case computational complexity per iteration is $O(Mn^3)$.
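The algorithm (14)–(16) can be sketched in a few lines of Python; the toy matrices below ($M = 2$ coordinates, $n = 2$ phases, chosen so that $\sum_r \mathbf{Q}_r$ is stochastic with downward drift) are illustrative assumptions, not data from this paper.

```python
import numpy as np

def qbd_UG(Q_down, Q0, Q_up, tol=1e-12, max_iter=100_000):
    """Algorithm (14)-(16): joint computation of U and G_1, ..., G_M.

    Q_down[m-1] plays the role of Q_{-m} and Q_up[m-1] of Q_m."""
    M, n = len(Q_up), Q0.shape[0]
    G = [np.zeros((n, n)) for _ in range(M)]                  # (14): G_m^(0) = O
    for _ in range(max_iter):
        U = Q0 + sum(A @ Gm for A, Gm in zip(Q_up, G))        # (15)
        G_new = [B + U @ Gm for B, Gm in zip(Q_down, G)]      # (16)
        if max(np.abs(Gn - Gm).max() for Gn, Gm in zip(G_new, G)) < tol:
            return U, G_new
        G = G_new
    raise RuntimeError("iteration did not converge")

# illustrative toy data: M = 2 coordinates, n = 2 phases,
# weights chosen so that Q = sum_r Q_r is stochastic with downward drift
P = np.array([[0.5, 0.5], [0.3, 0.7]])
Q = {-2: 0.30 * P, -1: 0.25 * P, 0: 0.15 * P, 1: 0.15 * P, 2: 0.15 * P}
U, G = qbd_UG([Q[-1], Q[-2]], Q[0], [Q[1], Q[2]])
# G_1 + G_2 is substochastic, as Theorem 1 asserts
assert ((G[0] + G[1]).sum(axis=1) <= 1 + 1e-9).all()
```

Each pass performs only the $2M$ matrix products visible in (15) and (16), which is what gives the $O(Mn^3)$ per-iteration cost discussed above.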

3. Multi-Dimensional QBD Processes

Consider a discrete-time Md-QBD process $x(t) = (\boldsymbol{\alpha}(t), \beta(t))$ on the state space $X = \mathbb{Z}_+^M \times J$, and denote by $P_{\mathbf{k},\mathbf{n}}(i,j)$ the probability of a one-step transition from $(\mathbf{k}, i)$ to $(\mathbf{n}, j)$. We assume that the transition probability matrix $\mathbf{P} = [P_{\mathbf{k},\mathbf{n}}(i,j)]$, $\mathbf{k}, \mathbf{n} \in \mathbb{Z}_+^M$, $i, j \in J$, partitioned into blocks $\mathbf{P}_{\mathbf{k},\mathbf{n}}$ of order $n$, has the following form:
$$\mathbf{P}_{\mathbf{k},\mathbf{n}} = \mathbf{O} \ \text{if} \ \sum_{m=1}^{M} |k_m - n_m| > 1, \ \text{for} \ \mathbf{k}, \mathbf{n} \ge \mathbf{0}; \tag{17}$$
$$\mathbf{P}_{\mathbf{n},\mathbf{n}} = \mathbf{Q}_0, \quad \mathbf{P}_{\mathbf{n}-\mathbf{e}_m,\mathbf{n}} = \mathbf{Q}_m, \quad \mathbf{P}_{\mathbf{n}+\mathbf{e}_m,\mathbf{n}} = \mathbf{Q}_{-m}, \quad \text{for} \ 1 \le m \le M \ \text{and} \ \mathbf{n} \ge \mathbf{1},$$
where $\mathbf{Q}_m$, $-M \le m \le M$, are non-negative square matrices such that $\mathbf{Q} = \sum_{r=-M}^{M} \mathbf{Q}_r$ is a stochastic matrix.
We denote by $\mathbf{P}(\mathbf{w})$ the block matrix with blocks $\mathbf{P}_{\mathbf{k},\mathbf{n}}$, $\mathbf{k}, \mathbf{n} \ge \mathbf{w}$. The following theorem demonstrates how the minimal non-negative solution of Equation (2) can be used to decompose the matrix $\mathbf{I} - \mathbf{P}(\mathbf{w})$ into a product of two block matrices.
Theorem 3.
Let the $M$-tuple $(\mathbf{G}_m, 1 \le m \le M)$ be the minimal non-negative solution of Equation (2), let the matrix $\mathbf{U}$ be defined by (5), and let a vector $\mathbf{w} \in \mathbb{Z}_+^M$ satisfy $\mathbf{w} \ge \mathbf{1}$. Then, the matrix $\mathbf{I} - \mathbf{P}(\mathbf{w})$ can be decomposed as a product
$$\mathbf{I} - \mathbf{P}(\mathbf{w}) = \boldsymbol{\Theta}(\mathbf{w}) (\mathbf{I} - \boldsymbol{\Psi}(\mathbf{w})), \tag{18}$$
where the matrices $\boldsymbol{\Theta}(\mathbf{w}) = [\boldsymbol{\Theta}_{\mathbf{k},\mathbf{n}}(\mathbf{w})]_{\mathbf{k},\mathbf{n} \ge \mathbf{w}}$ and $\boldsymbol{\Psi}(\mathbf{w}) = [\boldsymbol{\Psi}_{\mathbf{k},\mathbf{n}}(\mathbf{w})]_{\mathbf{k},\mathbf{n} \ge \mathbf{w}}$, partitioned into blocks of order $n$, are defined as follows:
$$\boldsymbol{\Theta}_{\mathbf{k},\mathbf{k}}(\mathbf{w}) = \mathbf{I} - \mathbf{U}, \quad \boldsymbol{\Theta}_{\mathbf{k},\mathbf{k}+\mathbf{e}_m}(\mathbf{w}) = -\mathbf{Q}_m, \quad 1 \le m \le M; \tag{19}$$
$$\boldsymbol{\Theta}_{\mathbf{k},\mathbf{n}}(\mathbf{w}) = \mathbf{O} \ \text{if} \ \mathbf{n} \ne \mathbf{k} \ \text{and} \ \mathbf{n} \ne \mathbf{k} + \mathbf{e}_m \ \text{for all} \ 1 \le m \le M; \tag{20}$$
$$\boldsymbol{\Psi}_{\mathbf{k},\mathbf{k}-\mathbf{e}_m}(\mathbf{w}) = \mathbf{G}_m, \ 1 \le m \le M; \quad \boldsymbol{\Psi}_{\mathbf{k},\mathbf{n}}(\mathbf{w}) = \mathbf{O} \ \text{if} \ \mathbf{n} \ne \mathbf{k} - \mathbf{e}_m \ \text{for all} \ 1 \le m \le M. \tag{21}$$
Proof of Theorem 3.
For the block of the matrix $\boldsymbol{\Theta}(\mathbf{w}) \boldsymbol{\Psi}(\mathbf{w})$ located in block row $\mathbf{k}$ and block column $\mathbf{n}$, we have
$$\sum_{\mathbf{r} \ge \mathbf{w}} \boldsymbol{\Theta}_{\mathbf{k},\mathbf{r}}(\mathbf{w}) \boldsymbol{\Psi}_{\mathbf{r},\mathbf{n}}(\mathbf{w}) = \boldsymbol{\Theta}_{\mathbf{k},\mathbf{k}}(\mathbf{w}) \boldsymbol{\Psi}_{\mathbf{k},\mathbf{n}}(\mathbf{w}) + \sum_{m=1}^{M} \boldsymbol{\Theta}_{\mathbf{k},\mathbf{k}+\mathbf{e}_m}(\mathbf{w}) \boldsymbol{\Psi}_{\mathbf{k}+\mathbf{e}_m,\mathbf{n}}(\mathbf{w}) = \begin{cases} -\sum_{m=1}^{M} \mathbf{Q}_m \mathbf{G}_m & \text{if } \mathbf{n} = \mathbf{k}, \\ (\mathbf{I} - \mathbf{U}) \mathbf{G}_r & \text{if } \mathbf{n} = \mathbf{k} - \mathbf{e}_r \text{ for some } 1 \le r \le M, \\ \mathbf{O} & \text{otherwise}. \end{cases}$$
From this and (19), it follows that
$$\boldsymbol{\Theta}_{\mathbf{k},\mathbf{n}}(\mathbf{w}) - \sum_{\mathbf{r} \ge \mathbf{w}} \boldsymbol{\Theta}_{\mathbf{k},\mathbf{r}}(\mathbf{w}) \boldsymbol{\Psi}_{\mathbf{r},\mathbf{n}}(\mathbf{w}) = \begin{cases} \mathbf{I} - \mathbf{U} + \sum_{m=1}^{M} \mathbf{Q}_m \mathbf{G}_m & \text{if } \mathbf{n} = \mathbf{k}, \\ -(\mathbf{I} - \mathbf{U}) \mathbf{G}_r & \text{if } \mathbf{n} = \mathbf{k} - \mathbf{e}_r \text{ for some } 1 \le r \le M, \\ -\mathbf{Q}_r & \text{if } \mathbf{n} = \mathbf{k} + \mathbf{e}_r \text{ for some } 1 \le r \le M, \\ \mathbf{O} & \text{otherwise}. \end{cases}$$
Finally, using (4) and (5), we obtain
$$\boldsymbol{\Theta}_{\mathbf{k},\mathbf{n}}(\mathbf{w}) - \sum_{\mathbf{r} \ge \mathbf{w}} \boldsymbol{\Theta}_{\mathbf{k},\mathbf{r}}(\mathbf{w}) \boldsymbol{\Psi}_{\mathbf{r},\mathbf{n}}(\mathbf{w}) = \begin{cases} \mathbf{I} - \mathbf{Q}_0 & \text{if } \mathbf{n} = \mathbf{k}, \\ -\mathbf{Q}_{-r} & \text{if } \mathbf{n} = \mathbf{k} - \mathbf{e}_r \text{ for some } 1 \le r \le M, \\ -\mathbf{Q}_r & \text{if } \mathbf{n} = \mathbf{k} + \mathbf{e}_r \text{ for some } 1 \le r \le M, \\ \mathbf{O} & \text{otherwise}, \end{cases}$$
which proves Equality (18). □
The matrix $\boldsymbol{\Theta}$ is block upper triangular and $\boldsymbol{\Psi}$ is block lower triangular in the following sense: $\boldsymbol{\Theta}_{\mathbf{k},\mathbf{n}} = \mathbf{O}$ if $\mathbf{k} \not\le \mathbf{n}$, and $\boldsymbol{\Psi}_{\mathbf{k},\mathbf{n}} = \mathbf{O}$ if $\mathbf{n} \not\le \mathbf{k}$. Thus, the decomposition in (18) may be considered a block UL decomposition of the matrix $\mathbf{I} - \mathbf{P}(\mathbf{w})$.
Consider an irreducible and positive-recurrent Markov chain with a state space $S$. Let $D$ be a proper subset of $S$, let $\mathbf{P}_D$ be the submatrix of transition probabilities between the states of $D$, and let $\mathbf{N}_D$ denote the matrix of the expected sojourn times in the states of $D$ before the first visit to any state outside $D$. Lemma 5.1.2 in [21] describes the relation between the matrices $\mathbf{P}_D$ and $\mathbf{N}_D$.
Lemma 1.
The matrix $\mathbf{N}_D$ of the expected sojourn times in the subset $D$, before the first passage to the complementary subset $S \setminus D$, is given by $\mathbf{N}_D = (\mathbf{I} - \mathbf{P}_D)^{-1}$, where $(\mathbf{I} - \mathbf{P}_D)^{-1} = \sum_{k \ge 0} \mathbf{P}_D^k$ is the minimal non-negative solution of the systems $(\mathbf{I} - \mathbf{P}_D) \mathbf{N}_D = \mathbf{I}$ and $\mathbf{N}_D (\mathbf{I} - \mathbf{P}_D) = \mathbf{I}$.
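Lemma 1 can be checked numerically on a small example; the substochastic block $\mathbf{P}_D$ below is toy data, assumed purely for illustration:

```python
import numpy as np

# Toy substochastic block P_D for a subset D of two states (assumed data)
P_D = np.array([[0.2, 0.5],
                [0.4, 0.3]])

# Lemma 1: expected sojourn times in D before leaving D
N_D = np.linalg.inv(np.eye(2) - P_D)

# The same matrix as a (truncated) Neumann series sum_k P_D^k
S, T = np.zeros((2, 2)), np.eye(2)
for _ in range(200):            # row sums are 0.7, so 200 terms suffice
    S += T
    T = T @ P_D
assert np.allclose(N_D, S)
```

The inverse and the Neumann series agree, and both solve the two systems stated in the lemma.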
We use this lemma to prove that, for the irreducible and positive-recurrent process x ( t ) , the matrix Θ ( w ) in the decomposition in (18) of the matrix I P ( w ) is invertible.
Theorem 4.
In addition to the conditions of Theorem 3, let the Md-QBD $x(t)$ be irreducible and positive-recurrent. Then, the matrix $\mathbf{I} - \mathbf{U}$ is invertible, and the matrix $\mathbf{I} - \mathbf{P}(\mathbf{w})$ can be decomposed as a product
$$\mathbf{I} - \mathbf{P}(\mathbf{w}) = (\mathbf{I} - \boldsymbol{\Phi}(\mathbf{w})) \mathbf{D}(\mathbf{w}) (\mathbf{I} - \boldsymbol{\Psi}(\mathbf{w})), \tag{22}$$
where the matrices $\mathbf{D}(\mathbf{w}) = [\mathbf{D}_{\mathbf{k},\mathbf{n}}(\mathbf{w})]_{\mathbf{k},\mathbf{n} \ge \mathbf{w}}$, $\boldsymbol{\Phi}(\mathbf{w}) = [\boldsymbol{\Phi}_{\mathbf{k},\mathbf{n}}(\mathbf{w})]_{\mathbf{k},\mathbf{n} \ge \mathbf{w}}$, and $\boldsymbol{\Psi}(\mathbf{w}) = [\boldsymbol{\Psi}_{\mathbf{k},\mathbf{n}}(\mathbf{w})]_{\mathbf{k},\mathbf{n} \ge \mathbf{w}}$, partitioned into blocks of order $n$, are defined as follows:
$$\mathbf{D}_{\mathbf{k},\mathbf{k}}(\mathbf{w}) = \mathbf{I} - \mathbf{U}, \quad \mathbf{D}_{\mathbf{k},\mathbf{n}}(\mathbf{w}) = \mathbf{O} \ \text{if} \ \mathbf{n} \ne \mathbf{k}; \tag{23}$$
$$\boldsymbol{\Phi}_{\mathbf{k},\mathbf{k}+\mathbf{e}_m}(\mathbf{w}) = \mathbf{R}_m, \quad 1 \le m \le M; \tag{24}$$
$$\boldsymbol{\Phi}_{\mathbf{k},\mathbf{n}}(\mathbf{w}) = \mathbf{O} \ \text{if} \ \mathbf{n} \ne \mathbf{k} + \mathbf{e}_m \ \text{for all} \ 1 \le m \le M; \tag{25}$$
$$\boldsymbol{\Psi}_{\mathbf{k},\mathbf{k}-\mathbf{e}_m}(\mathbf{w}) = \mathbf{G}_m, \ 1 \le m \le M; \quad \boldsymbol{\Psi}_{\mathbf{k},\mathbf{n}}(\mathbf{w}) = \mathbf{O} \ \text{if} \ \mathbf{n} \ne \mathbf{k} - \mathbf{e}_m \ \text{for all} \ 1 \le m \le M; \tag{26}$$
with the matrix $\mathbf{R}_m$ given by
$$\mathbf{R}_m = \mathbf{Q}_m (\mathbf{I} - \mathbf{U})^{-1}, \quad 1 \le m \le M. \tag{27}$$
Proof of Theorem 4.
Firstly, note that $\mathbf{P}(\mathbf{w}) = \mathbf{P}_D$, where the subset $D$ is given by $D = \{(\mathbf{k}, j) \in X \mid \mathbf{k} \ge \mathbf{w}\}$; hence, from Lemma 1, it follows that the matrix $\mathbf{I} - \mathbf{P}(\mathbf{w})$ is invertible.
Consider the $r$th power $\boldsymbol{\Psi}(\mathbf{w})^r = [\boldsymbol{\Psi}^{(r)}_{\mathbf{k},\mathbf{n}}(\mathbf{w})]_{\mathbf{k},\mathbf{n} \ge \mathbf{w}}$ of the matrix $\boldsymbol{\Psi}(\mathbf{w})$, partitioned into blocks of order $n$. The diagonal blocks $\boldsymbol{\Psi}^{(r)}_{\mathbf{k},\mathbf{k}}(\mathbf{w})$ are zero for all $r \ge 1$. If $\mathbf{k} \ge \mathbf{n} \ge \mathbf{w}$ and $\sum_{i=1}^{M} (k_i - w_i) = l$, a non-diagonal block $\boldsymbol{\Psi}^{(r)}_{\mathbf{k},\mathbf{n}}(\mathbf{w})$ can be non-zero only if $r \le l$ and $\sum_{i=1}^{M} (n_i - w_i) = l - r$. It follows that the series $\sum_{r=0}^{\infty} \boldsymbol{\Psi}^{(r)}_{\mathbf{k},\mathbf{n}}(\mathbf{w})$ is equal to the finite sum $\sum_{r=0}^{l} \boldsymbol{\Psi}^{(r)}_{\mathbf{k},\mathbf{n}}(\mathbf{w})$. Therefore, the matrix $\mathbf{I} - \boldsymbol{\Psi}(\mathbf{w})$ in (18) is invertible.
Post-multiplying both sides of (18) by $(\mathbf{I} - \boldsymbol{\Psi}(\mathbf{w}))^{-1}$ and pre-multiplying by $(\mathbf{I} - \mathbf{P}(\mathbf{w}))^{-1}$, we find that
$$(\mathbf{I} - \boldsymbol{\Psi}(\mathbf{w}))^{-1} = (\mathbf{I} - \mathbf{P}(\mathbf{w}))^{-1} \boldsymbol{\Theta}(\mathbf{w}). \tag{28}$$
Let us partition the matrix $(\mathbf{I} - \mathbf{P}(\mathbf{w}))^{-1}$ into blocks $\mathbf{N}_{\mathbf{k},\mathbf{n}}(\mathbf{w})$, $\mathbf{k}, \mathbf{n} \ge \mathbf{w}$, of order $n$, and the matrix $(\mathbf{I} - \boldsymbol{\Psi}(\mathbf{w}))^{-1}$ into blocks $\mathbf{W}_{\mathbf{k},\mathbf{n}}(\mathbf{w})$, $\mathbf{k}, \mathbf{n} \ge \mathbf{w}$, of order $n$. Then, Equation (28) may be written in block form as
$$\mathbf{W}_{\mathbf{k},\mathbf{n}}(\mathbf{w}) = \sum_{\mathbf{r} \ge \mathbf{w}} \mathbf{N}_{\mathbf{k},\mathbf{r}}(\mathbf{w}) \boldsymbol{\Theta}_{\mathbf{r},\mathbf{n}}(\mathbf{w}). \tag{29}$$
Setting $\mathbf{k} = \mathbf{n} = \mathbf{w}$, we obtain
$$\mathbf{I} = \sum_{\mathbf{r} \ge \mathbf{w}} \mathbf{N}_{\mathbf{w},\mathbf{r}}(\mathbf{w}) \boldsymbol{\Theta}_{\mathbf{r},\mathbf{w}}(\mathbf{w}) = \mathbf{N}_{\mathbf{w},\mathbf{w}}(\mathbf{w}) (\mathbf{I} - \mathbf{U}),$$
which means that $\mathbf{N}_{\mathbf{w},\mathbf{w}}(\mathbf{w}) = (\mathbf{I} - \mathbf{U})^{-1}$ and proves the invertibility of the matrix $\mathbf{I} - \mathbf{U}$. Now, the matrix $\boldsymbol{\Theta}(\mathbf{w})$, defined in Theorem 3, can be decomposed as
$$\boldsymbol{\Theta}(\mathbf{w}) = (\mathbf{I} - \boldsymbol{\Phi}(\mathbf{w})) \mathbf{D}(\mathbf{w}),$$
which implies (22), completing the proof. □
Note that the matrix $\mathbf{N}_{\mathbf{w},\mathbf{w}}(\mathbf{w}) = (\mathbf{I} - \mathbf{U})^{-1}$ records the expected numbers of visits to the states of $\{\mathbf{w}\} \times J$ before the first passage to the subset $X_0(\mathbf{w}) = X \setminus X(\mathbf{w})$, and the matrix $\mathbf{U}$ records the probabilities of return to $\{\mathbf{w}\} \times J$ under the taboo of $X_0(\mathbf{w})$ (see Theorem 5.2.3 in [21]).
We define the function $\gamma(\mathbf{k}, \mathbf{n})$ by $\gamma(\mathbf{k}, \mathbf{n}) = \sum_{m=1}^{M} (n_m - k_m)$ if $\mathbf{k} \le \mathbf{n}$, and $\gamma(\mathbf{k}, \mathbf{n}) = 0$ otherwise. We also denote by $\Theta(\mathbf{k}, \mathbf{n})$ the set of sequences $\theta = (\theta_1, \theta_2, \ldots, \theta_\gamma)$ of length $\gamma = \gamma(\mathbf{k}, \mathbf{n})$ such that $\mathbf{k} + \mathbf{e}_{\theta_1} + \mathbf{e}_{\theta_2} + \cdots + \mathbf{e}_{\theta_\gamma} = \mathbf{n}$. Theorem 4 has the following consequence.
Corollary 1.
Let the matrix $(\mathbf{I} - \mathbf{P}(\mathbf{w}))^{-1}$ be partitioned into blocks $\mathbf{N}_{\mathbf{k},\mathbf{n}}(\mathbf{w})$, $\mathbf{k}, \mathbf{n} \ge \mathbf{w}$, related to the division of the set $X(\mathbf{w})$ into the subsets $\{\mathbf{k}\} \times J$, $\mathbf{k} \ge \mathbf{w}$. Then, we have
$$\mathbf{N}_{\mathbf{k},\mathbf{n}}(\mathbf{w}) = \sum_{\mathbf{r} : \, \mathbf{w} \le \mathbf{r} \le \mathbf{k}, \, \mathbf{r} \le \mathbf{n}} \ \sum_{\sigma \in \Theta(\mathbf{r},\mathbf{k})} \ \sum_{\theta \in \Theta(\mathbf{r},\mathbf{n})} \mathbf{G}_{\sigma_1} \mathbf{G}_{\sigma_2} \cdots \mathbf{G}_{\sigma_{\gamma(\mathbf{r},\mathbf{k})}} \mathbf{N} \mathbf{R}_{\theta_1} \mathbf{R}_{\theta_2} \cdots \mathbf{R}_{\theta_{\gamma(\mathbf{r},\mathbf{n})}}, \tag{30}$$
where $\mathbf{N} = (\mathbf{I} - \mathbf{U})^{-1}$, and the matrix products are understood to equal the identity matrix if $\gamma(\mathbf{r},\mathbf{k}) = 0$ or $\gamma(\mathbf{r},\mathbf{n}) = 0$.
Proof of Corollary 1.
It follows from Theorem 4 that the matrix $(\mathbf{I} - \mathbf{P}(\mathbf{w}))^{-1}$ can be decomposed as
$$(\mathbf{I} - \mathbf{P}(\mathbf{w}))^{-1} = \mathbf{V}(\mathbf{w}) \mathbf{D}(\mathbf{w})^{-1} \mathbf{W}(\mathbf{w}) \tag{31}$$
with the matrices $\mathbf{V}(\mathbf{w})$ and $\mathbf{W}(\mathbf{w})$ defined by
$$\mathbf{V}(\mathbf{w}) = (\mathbf{I} - \boldsymbol{\Psi}(\mathbf{w}))^{-1} = \sum_{k=0}^{\infty} \boldsymbol{\Psi}(\mathbf{w})^k \quad \text{and} \quad \mathbf{W}(\mathbf{w}) = (\mathbf{I} - \boldsymbol{\Phi}(\mathbf{w}))^{-1} = \sum_{k=0}^{\infty} \boldsymbol{\Phi}(\mathbf{w})^k,$$
respectively. We partition these matrices as $\mathbf{V}(\mathbf{w}) = [\mathbf{V}_{\mathbf{k},\mathbf{n}}]_{\mathbf{k},\mathbf{n} \ge \mathbf{w}}$ and $\mathbf{W}(\mathbf{w}) = [\mathbf{W}_{\mathbf{k},\mathbf{n}}]_{\mathbf{k},\mathbf{n} \ge \mathbf{w}}$ following the division of the set $X(\mathbf{w})$ into the subsets $\{\mathbf{k}\} \times J$, $\mathbf{k} \ge \mathbf{w}$. It follows that
$$\mathbf{V}_{\mathbf{k},\mathbf{r}} = \sum_{\sigma \in \Theta(\mathbf{r},\mathbf{k})} \mathbf{G}_{\sigma_1} \mathbf{G}_{\sigma_2} \cdots \mathbf{G}_{\sigma_{\gamma(\mathbf{r},\mathbf{k})}} \quad \text{for} \ \mathbf{r} \le \mathbf{k}, \ \text{and}$$
$$\mathbf{W}_{\mathbf{r},\mathbf{n}} = \sum_{\theta \in \Theta(\mathbf{r},\mathbf{n})} \mathbf{R}_{\theta_1} \mathbf{R}_{\theta_2} \cdots \mathbf{R}_{\theta_{\gamma(\mathbf{r},\mathbf{n})}} \quad \text{for} \ \mathbf{r} \le \mathbf{n}.$$
Combining these with (31), we find that the matrices $\mathbf{N}_{\mathbf{k},\mathbf{n}}(\mathbf{w})$, $\mathbf{k}, \mathbf{n} \ge \mathbf{w}$, have the form of (30), which completes the proof. □
The submatrices $\mathbf{N}_{\mathbf{k},\mathbf{n}}(\mathbf{w})$, $\mathbf{k}, \mathbf{n} \ge \mathbf{w}$, of the matrix $(\mathbf{I} - \mathbf{P}(\mathbf{w}))^{-1}$ satisfy the following property:
$$\mathbf{N}_{\mathbf{k},\mathbf{n}}(\mathbf{w}) = \mathbf{N}_{\mathbf{k}-\mathbf{w}+\mathbf{1}, \, \mathbf{n}-\mathbf{w}+\mathbf{1}},$$
where $\mathbf{N}_{\mathbf{k},\mathbf{n}} = \mathbf{N}_{\mathbf{k},\mathbf{n}}(\mathbf{1})$, $\mathbf{k}, \mathbf{n} \ge \mathbf{1}$. Therefore, the matrices $\mathbf{N}_{\mathbf{k},\mathbf{n}}$, $\mathbf{k}, \mathbf{n} \ge \mathbf{1}$, uniquely define the matrices $\mathbf{N}_{\mathbf{k},\mathbf{n}}(\mathbf{w})$, $\mathbf{k}, \mathbf{n} \ge \mathbf{w}$, for all $\mathbf{w} \ge \mathbf{1}$. In particular, we have
$$\mathbf{N}_{\mathbf{k},\mathbf{n}}(l\mathbf{1}) = \mathbf{N}_{\mathbf{k}-(l-1)\mathbf{1}, \, \mathbf{n}-(l-1)\mathbf{1}} \quad \text{for} \ \mathbf{k}, \mathbf{n} \ge l\mathbf{1}.$$
The matrices $\mathbf{R}_m$, $1 \le m \le M$, defined by (27), satisfy $\mathbf{R}_m = \mathbf{Q}_m + \mathbf{R}_m \mathbf{U}$, $1 \le m \le M$, and the matrix $\mathbf{U}$ may be written as
$$\mathbf{U} = \mathbf{Q}_0 + \sum_{m=1}^{M} \mathbf{R}_m \mathbf{Q}_{-m}.$$
Therefore, the matrices $\mathbf{R}_m$, $1 \le m \le M$, satisfy the following system:
$$\mathbf{R}_m = \mathbf{Q}_m + \mathbf{R}_m \Big(\mathbf{Q}_0 + \sum_{s=1}^{M} \mathbf{R}_s \mathbf{Q}_{-s}\Big), \quad 1 \le m \le M.$$
Note that if any one of the matrix $\mathbf{U}$, the $M$-tuple $(\mathbf{G}_m, 1 \le m \le M)$, or the $M$-tuple $(\mathbf{R}_m, 1 \le m \le M)$ is known, then the other two may be determined by applying some of the following equations:
$$\mathbf{R}_m = \mathbf{Q}_m (\mathbf{I} - \mathbf{U})^{-1}, \quad \mathbf{G}_m = (\mathbf{I} - \mathbf{U})^{-1} \mathbf{Q}_{-m}, \quad \mathbf{Q}_m \mathbf{G}_m = \mathbf{R}_m \mathbf{Q}_{-m},$$
$$\mathbf{U} = \mathbf{Q}_0 + \sum_{m=1}^{M} \mathbf{R}_m \mathbf{Q}_{-m}, \quad \mathbf{U} = \mathbf{Q}_0 + \sum_{m=1}^{M} \mathbf{Q}_m \mathbf{G}_m.$$
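These cross-relations between $\mathbf{U}$, $(\mathbf{G}_m)$, and $(\mathbf{R}_m)$ can be verified numerically. The sketch below uses illustrative toy matrices (assumed for demonstration, not data from this paper) and the iteration (8) to obtain $\mathbf{U}$:

```python
import numpy as np

# illustrative toy data (assumed): M = 2 coordinates, n = 2 phases,
# weights chosen so that Q = sum_r Q_r is stochastic with downward drift
P = np.array([[0.5, 0.5], [0.3, 0.7]])
Q = {-2: 0.30 * P, -1: 0.25 * P, 0: 0.15 * P, 1: 0.15 * P, 2: 0.15 * P}
I = np.eye(2)

# minimal solution U of (6), obtained by the iteration (8)
U = Q[0].copy()
for _ in range(5_000):
    N = np.linalg.solve(I - U, I)
    U = Q[0] + sum(Q[m] @ N @ Q[-m] for m in (1, 2))
N = np.linalg.solve(I - U, I)        # N = (I - U)^{-1}

G = {m: N @ Q[-m] for m in (1, 2)}   # G_m = (I - U)^{-1} Q_{-m}
R = {m: Q[m] @ N for m in (1, 2)}    # R_m = Q_m (I - U)^{-1}

# the cross-relations stated above
for m in (1, 2):
    assert np.allclose(Q[m] @ G[m], R[m] @ Q[-m])
assert np.allclose(U, Q[0] + sum(R[m] @ Q[-m] for m in (1, 2)))
assert np.allclose(U, Q[0] + sum(Q[m] @ G[m] for m in (1, 2)))
```

Starting from any one of $\mathbf{U}$, $(\mathbf{G}_m)$, or $(\mathbf{R}_m)$, the other two objects are obtained by a single matrix inversion and a few products, exactly as the equations above state.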

4. Stationary Distribution

Let a vector $\mathbf{w} \in \mathbb{Z}_+^M$ satisfy $\mathbf{w} \ge \mathbf{1}$. We partition the stationary probability vector $\mathbf{p}$ of $x(t)$ into subvectors $\boldsymbol{\pi}_i(\mathbf{w})$, $i \ge 0$, related to the partition of the state set $X$ into the subsets $X_l(\mathbf{w})$, $l \ge 0$, and partition each vector $\boldsymbol{\pi}_i(\mathbf{w})$ into subvectors $\mathbf{p}_{\mathbf{n}}$, $\mathbf{n} \in Z_i(\mathbf{w})$. The following theorem gives an expression for the vectors $\mathbf{p}_{\mathbf{n}}$, $\mathbf{n} \in Z_1(\mathbf{w})$, in terms of the vectors $\mathbf{p}_{\mathbf{k}}$, $\mathbf{k} \in Z_0(\mathbf{w})$.
Theorem 5.
Let the Md-QBD $x(t)$ be irreducible and positive-recurrent, let the matrix $\mathbf{U}$ be the minimal non-negative solution of (6), and let the matrices $\mathbf{R}_m$, $1 \le m \le M$, be defined by (27). If vectors $\mathbf{w}, \mathbf{n} \in \mathbb{Z}_+^M$ satisfy $\mathbf{w} \ge \mathbf{1}$ and $\mathbf{n} \in Z_1(\mathbf{w})$, then we have
$$\mathbf{p}_{\mathbf{n}} = \sum_{\mathbf{k} \in Z_1(\mathbf{w})} \sum_{\substack{m = 1 \\ k_m = w_m}}^{M} \mathbf{p}_{\mathbf{k}-\mathbf{e}_m} \mathbf{Q}_m \mathbf{N}_{\mathbf{k},\mathbf{n}}(\mathbf{w}). \tag{39}$$
Proof of Theorem 5.
Let us partition the matrix $(\mathbf{I} - \mathbf{P}(\mathbf{w}))^{-1}$ into blocks $\mathbf{H}_{i,j}(\mathbf{w})$, $i, j \ge 1$, related to the partition of the set $X(\mathbf{w})$ into the subsets $X_l(\mathbf{w})$, $l \ge 1$. It is known (see Theorem 6.2.1 in [21]) that the stationary probability vector $\mathbf{p} = [\boldsymbol{\pi}_0(\mathbf{w}), \boldsymbol{\pi}_1(\mathbf{w}), \boldsymbol{\pi}_2(\mathbf{w}), \ldots]$ satisfies the equation
$$\boldsymbol{\pi}_1(\mathbf{w}) = \boldsymbol{\pi}_0(\mathbf{w}) \mathbf{A}(\mathbf{w}) \mathbf{H}_{1,1}(\mathbf{w}), \tag{40}$$
where $\mathbf{A}(\mathbf{w})$ is the matrix of the transition probabilities from states in $X_0(\mathbf{w})$ into states in $Z_1(\mathbf{w}) \times J$.
Let us partition the matrices $\mathbf{A}(\mathbf{w}) = [\mathbf{P}_{\mathbf{k},\mathbf{n}}]_{\mathbf{k} \in Z_0(\mathbf{w}), \mathbf{n} \in Z_1(\mathbf{w})}$ and $\mathbf{H}_{1,1}(\mathbf{w}) = [\mathbf{N}_{\mathbf{k},\mathbf{n}}(\mathbf{w})]_{\mathbf{k},\mathbf{n} \in Z_1(\mathbf{w})}$ into blocks of order $n$, and partition the vectors $\boldsymbol{\pi}_0(\mathbf{w}) = (\mathbf{p}_{\mathbf{k}})_{\mathbf{k} \in Z_0(\mathbf{w})}$ and $\boldsymbol{\pi}_1(\mathbf{w}) = (\mathbf{p}_{\mathbf{k}})_{\mathbf{k} \in Z_1(\mathbf{w})}$ into subvectors of length $n$.
For the blocks $\mathbf{P}_{\mathbf{r},\mathbf{k}}$, $\mathbf{r} \in Z_0(\mathbf{w})$, $\mathbf{k} \in Z_1(\mathbf{w})$, of the matrix $\mathbf{A}(\mathbf{w})$, we have $\mathbf{P}_{\mathbf{r},\mathbf{k}} = \mathbf{Q}_m$ if $\mathbf{r} = \mathbf{k} - \mathbf{e}_m$ for some $1 \le m \le M$; otherwise, $\mathbf{P}_{\mathbf{r},\mathbf{k}} = \mathbf{O}$. Therefore, Equation (40) may be rewritten as (39), which proves the result. □
Theorem 6.
Let the Md-QBD $x(t)$ be irreducible and positive-recurrent, let the matrix $\mathbf{U}$ be the minimal non-negative solution of (6), and let the matrices $\mathbf{R}_m$, $1 \le m \le M$, be defined by (27). If a vector $\mathbf{w} \in \mathbb{Z}_+^M$ satisfies $\mathbf{w} \ge \mathbf{1}$, then the vector of the stationary probabilities $\boldsymbol{\pi}_0(\mathbf{w}) = [\mathbf{p}_{\mathbf{n}}]_{\mathbf{n} \in Z_0(\mathbf{w})}$ is proportional to the unique solution $\mathbf{q}(\mathbf{w}) = [\mathbf{q}_{\mathbf{n}}]_{\mathbf{n} \in Z_0(\mathbf{w})}$ of the following linear system:
$$\mathbf{q}_{\mathbf{n}} = \sum_{\mathbf{k} \in Z_0(\mathbf{w})} \mathbf{q}_{\mathbf{k}} \mathbf{P}_{\mathbf{k},\mathbf{n}} + \sum_{\mathbf{r}, \mathbf{s} \in Z_1(\mathbf{w})} \sum_{\substack{m = 1 \\ r_m = w_m}}^{M} \mathbf{q}_{\mathbf{r}-\mathbf{e}_m} \mathbf{Q}_m \mathbf{N}_{\mathbf{r},\mathbf{s}}(\mathbf{w}) \mathbf{P}_{\mathbf{s},\mathbf{n}}, \quad \mathbf{n} \in Z_0(\mathbf{w}); \tag{41}$$
$$\sum_{\mathbf{n} \in Z_0(\mathbf{w})} \mathbf{q}_{\mathbf{n}} \mathbf{1} = 1.$$
Proof of Theorem 6.
Consider the process restricted to $X_0(\mathbf{w})$. From Theorem 5.3.1 in [21], the transition matrix $\mathbf{P}(X_0(\mathbf{w}))$ of the restricted process is given by
$$\mathbf{P}(X_0(\mathbf{w})) = \mathbf{B}(\mathbf{w}) + \mathbf{A}(\mathbf{w}) (\mathbf{I} - \mathbf{P}(\mathbf{w}))^{-1} \mathbf{C}(\mathbf{w}). \tag{42}$$
The stationary probability vector $\mathbf{q}(\mathbf{w})$ of the restricted process is proportional to $\boldsymbol{\pi}_0(\mathbf{w})$ and is the unique solution of the system
$$\mathbf{q}(\mathbf{w}) = \mathbf{q}(\mathbf{w}) \mathbf{P}(X_0(\mathbf{w})), \quad \mathbf{q}(\mathbf{w}) \mathbf{1} = 1. \tag{43}$$
Let us partition the matrices $\mathbf{A}(\mathbf{w}) = [\mathbf{P}_{\mathbf{k},\mathbf{n}}]_{\mathbf{k} \in Z_0(\mathbf{w}), \mathbf{n} \in Z_1(\mathbf{w})}$, $\mathbf{B}(\mathbf{w}) = [\mathbf{P}_{\mathbf{k},\mathbf{n}}]_{\mathbf{k},\mathbf{n} \in Z_0(\mathbf{w})}$, and $\mathbf{C}(\mathbf{w}) = [\mathbf{P}_{\mathbf{k},\mathbf{n}}]_{\mathbf{k} \in Z_1(\mathbf{w}), \mathbf{n} \in Z_0(\mathbf{w})}$ into blocks of order $n$, and partition the vector $\mathbf{q}(\mathbf{w})$ into the subvectors $\mathbf{q}_{\mathbf{k}}$, $\mathbf{k} \in Z_0(\mathbf{w})$. From (17) and (42), it follows that the system (43) may be written as (41), which completes the proof. □

5. Conclusions

This study presents several theoretical results that may be used to calculate the stationary distribution of simple Md-QBDs. The computation includes the following four stages.
(1)
First, find the minimal non-negative solution $\mathbf{U}$ of Equation (6) and calculate the matrices $\mathbf{N} = (\mathbf{I} - \mathbf{U})^{-1}$, $\mathbf{G}_m = \mathbf{N} \mathbf{Q}_{-m}$, and $\mathbf{R}_m = \mathbf{Q}_m \mathbf{N}$, $1 \le m \le M$. This can be performed using the simple algorithm (14)–(16) or the recursive procedures of Theorems 1 and 2;
(2)
Then, partition the set $Z = \mathbb{Z}_+^M$ into the levels $Z_l(\mathbf{1})$, $l \ge 0$, and determine the stationary probability vector $\mathbf{q} = (\mathbf{q}_{\mathbf{n}})_{\mathbf{n} \in Z_0(\mathbf{1})}$ of the restricted process on $X_0(\mathbf{1})$, which is the unique solution of the linear system in (41) with $\mathbf{w} = \mathbf{1}$;
(3)
After this, calculate a vector $\mathbf{x} = (\mathbf{x}_{\mathbf{n}})_{\mathbf{n} \in Z}$ proportional to the vector of the stationary probabilities: set $\mathbf{x}_{\mathbf{n}} = \mathbf{q}_{\mathbf{n}}$ for $\mathbf{n} \in Z_0(\mathbf{1})$ and then use Equation (39) to successively calculate the subvectors $\mathbf{x}_{\mathbf{n}}$, $\mathbf{n} \in Z_1(l\mathbf{1})$, for $l = 1, 2, \ldots$;
(4)
Finally, the stationary probability vector $\mathbf{p}$ may be obtained by normalizing the vector $\mathbf{x}$ so that $\sum_{\mathbf{n} \in Z} \mathbf{x}_{\mathbf{n}} \mathbf{1} = 1$.
For a practical implementation of this approach, it is necessary to overcome several difficulties similar to those encountered when calculating the stationary distribution of one-dimensional QBDs; see [22,24]. The normalization of $\mathbf{x}$ might lead to overflow problems if the value of $\mathbf{x} \mathbf{1}$ is much greater than 1. To overcome such problems, one might renormalize the vector $\mathbf{x}$ each time new subvectors $\mathbf{x}_{\mathbf{n}}$, $\mathbf{n} \in Z_l(\mathbf{1})$, are computed during Stage 3. When $n$ is infinite, it is impossible to completely compute any infinite vectors and matrices, and it will be necessary to truncate the set of phases; see possible solutions in [49]. For Md-QBDs, it is also desirable to develop faster algorithms for calculating the matrices $\mathbf{U}$ and $\mathbf{G}_m$, similar to the logarithmic reduction algorithm for one-dimensional QBDs [22].

Funding

This research received no external funding.

Data Availability Statement

The study did not report any data.

Acknowledgments

The author wishes to thank the anonymous reviewers for their valuable and helpful comments, which have significantly improved the study.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Feller, W. Die Grundlagen der Volterraschen Theorie des Kampfes ums Dasein in wahrscheinlichkeitstheoretischer Behandlung. Acta Biotheor. 1939, 5, 11–40. [Google Scholar] [CrossRef]
  2. Kendall, D.G. Stochastic processes and population growth. J. Roy. Stat. Soc. B 1949, 11, 230–282. [Google Scholar] [CrossRef]
  3. Spitzer, F. Interaction of Markov processes. Adv. Math. 1970, 5, 246–290. [Google Scholar] [CrossRef]
  4. Dobrushin, R.L. Markov processes with a large number of locally interacting components: The invertible case and certain generalizations. Probl. Inf. Transm. 1971, 7, 57–66. [Google Scholar]
  5. Dobrushin, R.L. Markov Processes with a Large Number of Locally Interacting Components: Existence of a Limit Process and Its Ergodicity. Probl. Inf. Transm. 1971, 7, 149–164. [Google Scholar]
  6. Menshikov, M.; Popov, S.; Wade, A. Non-Homogeneous Random Walks. Lyapunov Function Methods for Near-Critical Stochastic Systems; Cambridge University Press: Cambridge, UK, 2017. [Google Scholar]
  7. Harris, T.E. Contact interactions on a lattice. Ann. Probab. 1974, 2, 969–988. [Google Scholar] [CrossRef]
  8. Preston, C. Spatial birth-and-death processes. Adv. Appl. Proba. 1975, 7, 465–466. [Google Scholar] [CrossRef]
  9. Bezborodov, V.; Kondratiev, Y.; Kutoviy, O. Lattice birth-and-death processes. Mosc. Math. J. 2019, 19, 7–36. [Google Scholar] [CrossRef]
  10. Lavancier, F.; Le Guével, R. Spatial birth-death-move processes: Basic properties and estimation of their intensity functions. J. Roy. Stat. Soc. B 2021, 83, 798–825. [Google Scholar] [CrossRef]
  11. Liggett, T.M. Continuous Time Markov Processes: An Introduction; AMS: Providence, RI, USA, 2010. [Google Scholar]
  12. Lanchier, N. Stochastic Modeling; Springer: Cham, Switzerland, 2017. [Google Scholar]
  13. Swart, J. A Course in Interacting Particle Systems. arXiv 2017, arXiv:1703.10007. Available online: https://arxiv.org/abs/1703.10007 (accessed on 24 January 2024).
  14. Jahnel, B.; König, W. Probabilistic Methods in Telecommunications; Springer: Cham, Switzerland, 2020. [Google Scholar]
  15. Yechiali, U. A queueing-type birth-and-death process defined on a continuous-time Markov chain. Oper. Res. 1973, 21, 604–609. [Google Scholar] [CrossRef]
  16. Torrez, W.C. The Birth and Death Chain in a Random Environment: Instability and Extinction Theorems. Ann. Probab. 1978, 6, 1026–1043. [Google Scholar] [CrossRef]
  17. Cogburn, R.; Torrez, W.C. Birth and death processes with random environments in continuous time. J. Appl. Probab. 1981, 18, 19–30. [Google Scholar] [CrossRef]
  18. Gaver, D.P.; Jacobs, P.A.; Latouche, G.G. Finite Birth-and-Death Models in Randomly Changing Environments. Adv. Appl. Prob. 1984, 16, 715–731. [Google Scholar] [CrossRef]
  19. Neuts, M.F. Matrix-Geometric Solutions in Stochastic Models: An Algorithmic Approach; Johns Hopkins University Press: Baltimore, MD, USA, 1981. [Google Scholar]
  20. Tweedie, R.L. Operator-geometric stationary distribution for Markov chains, with applications to queueing models. Adv. Appl. Probab. 1982, 14, 368–391. [Google Scholar] [CrossRef]
  21. Latouche, G.; Ramaswami, V. Introduction to Matrix Analytic Methods in Stochastic Modeling; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1999. [Google Scholar]
  22. Latouche, G.; Ramaswami, V. A logarithmic reduction algorithm for quasi-birth-and-death processes. J. Appl. Probab. 1993, 30, 650–674. [Google Scholar] [CrossRef]
  23. Tran, H.T.; Do, T.V. Computational Aspects for Steady State Analysis of QBD Processes. Period. Polytech. Elec. 2000, 44, 179–200. [Google Scholar]
  24. Bini, D.A.; Latouche, G.; Meini, B. Numerical Methods for Structured Markov Chains; Oxford University Press: New York, NY, USA, 2005. [Google Scholar]
  25. Ye, Q. On Latouche-Ramaswami’s logarithmic reduction algorithm for quasi-birth-and-death processes. Stoch. Models 2002, 18, 449–467. [Google Scholar] [CrossRef]
  26. Neuts, M.F. Structured Stochastic Matrices of M/G/1 Type and Their Applications; Marcel Dekker: New York, NY, USA, 1989. [Google Scholar]
  27. Boute, R.N.; Colen, P.J.; Creemers, S.; Noblesse, A.; Houdt, B.V. Matrix-Analytic Methods in Supply Chain Management: Recent Developments. Rev. Bus. Econ. Lit. 2012, 57, 283–302. [Google Scholar]
  28. Phung-Duc, T. Retrial Queueing Models: A Survey on Theory and Applications. In Stochastic Operations Research in Business and Industry; Dohi, T., Ano, K., Kasahara, S., Eds.; World Scientific Publisher: Singapore, 2017; pp. 1–26. Available online: https://arxiv.org/abs/1906.09560 (accessed on 24 January 2024).
  29. Paluncic, F.; Alfa, A.S.; Maharaj, B.T.; Tsimba, H.M. Queueing Models for Cognitive Radio Networks: A Survey. IEEE Access 2018, 6, 50801–50823. [Google Scholar] [CrossRef]
  30. Breuer, L.; Baum, D. An Introduction to Queueing Theory and Matrix-Analytic Methods; Springer: Dordrecht, The Netherlands, 2005. [Google Scholar]
  31. Li, Q.L. Constructive Computation in Stochastic Models with Applications. The RG-Factorizations; Springer: Berlin, Germany, 2010. [Google Scholar]
  32. Artalejo, J.R.; Gómez-Corral, A. Retrial Queueing Systems: A Computational Approach; Springer: Berlin, Germany, 2008. [Google Scholar]
  33. Lipsky, L. Queueing Theory. A Linear Algebraic Approach; Springer: New York, NY, USA, 2009. [Google Scholar]
  34. He, Q.M. Fundamentals of Matrix-Analytic Methods; Springer: New York, NY, USA, 2014. [Google Scholar]
  35. Dudin, A.N.; Klimenok, V.I.; Vishnevsky, V.M. The Theory of Queueing Systems with Correlated Flows; Springer: Basel, Switzerland, 2020. [Google Scholar]
  36. Naumov, V.; Gaidamaka, Y.; Yarkina, N.; Samouylov, K. Matrix and Analytical Methods for Performance Analysis of Telecommunication Systems; Springer: Cham, Switzerland, 2021. [Google Scholar]
  37. Chakravarthy, S.R. Introduction to Matrix-Analytic Methods in Queues 1: Analytical and Simulation Approach—Basics; Wiley: London, UK, 2022. [Google Scholar]
  38. Chakravarthy, S.R. Introduction to Matrix-Analytic Methods in Queues 2: Analytical and Simulation Approach—Queues and Simulation; Wiley: London, UK, 2022. [Google Scholar]
  39. Ozawa, T. Asymptotic properties of the occupation measure in a multidimensional skip-free Markov modulated random walk. Queueing Syst. 2021, 97, 125–161. [Google Scholar] [CrossRef]
  40. O’Reilly, M.; Palmowski, Z.; Aksamit, A. Random walk on a quadrant: Mapping to a one-dimensional level-dependent Quasi-Birth-and-Death process (LD-QBD). In Proceedings of the Eleventh International Conference on Matrix-Analytic Methods in Stochastic Models (MAM11), Seoul, Republic of Korea, 29 June 2022. [Google Scholar]
  41. Ozawa, T. Asymptotics for the stationary distribution in a discrete-time two-dimensional quasi-birth-and-death process. Queueing Syst. 2013, 74, 109–149. [Google Scholar] [CrossRef]
  42. Miyazawa, M. A superharmonic vector for a nonnegative matrix with QBD block structure and its application to a Markov-modulated two-dimensional reflecting process. Queueing Syst. 2015, 81, 1–48. [Google Scholar] [CrossRef]
  43. Ozawa, T.; Kobayashi, M. Exact asymptotic formulae of the stationary distribution of a discrete-time two-dimensional QBD process. Queueing Syst. 2018, 90, 351–403. [Google Scholar] [CrossRef]
  44. Ozawa, T. Tail Asymptotics in any direction of the stationary distribution in a two-dimensional discrete-time QBD process. Queueing Syst. 2022, 102, 227–267. [Google Scholar] [CrossRef]
  45. Ozawa, T. Stability condition of a two-dimensional QBD process and its application to estimation of efficiency for two-queue models. Perf. Eval. 2019, 130, 101–118. [Google Scholar] [CrossRef]
  46. Hooghiemstra, G.; Keane, M.; Van De Ree, S. Power series for stationary distributions of coupled processors models. SIAM J. Appl. Math. 1988, 48, 1159–1166. [Google Scholar] [CrossRef]
  47. Blanc, J.P.C. Performance Evaluation of Polling Systems by Means of the Power-Series Algorithm. Ann. Oper. Res. 1992, 35, 155–186. [Google Scholar] [CrossRef]
  48. Conti, M.; Gregori, E.; Lenzini, L. Metropolitan Area Networks; Springer: London, UK, 1997. [Google Scholar]
  49. Somashekar, G.; Delasay, M.; Gandhi, A. Truncating Multi-Dimensional Markov Chains with Accuracy Guarantee. In Proceedings of the 30th International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS), Nice, France, 18–20 October 2022; pp. 121–128. [Google Scholar]