Article

Upper and Lower Bounds of Performance Metrics in Hybrid Systems with Setup Time

1 Faculty of Informatics, Gunma University, 4-2 Aramaki, Maebashi 371-8510, Gunma, Japan
2 Graduate School of Informatics, Gunma University, 4-2 Aramaki, Maebashi 371-8510, Gunma, Japan
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(16), 2685; https://doi.org/10.3390/math13162685
Submission received: 31 July 2025 / Revised: 15 August 2025 / Accepted: 16 August 2025 / Published: 20 August 2025
(This article belongs to the Special Issue Recent Research in Queuing Theory and Stochastic Models, 2nd Edition)

Abstract

To address the increasing demand for computational and communication resources, modern networked systems often rely on heterogeneous servers, including those requiring setup times, such as virtual machines or servers, and others that are always active. In this paper, we model and analyze the performance of such hybrid systems using a level-dependent quasi-birth-and-death (LDQBD) process. Building upon an existing queueing model, we extend the analysis by considering scalable approximation methods. Since matrix analytic methods become computationally expensive in large-scale settings, we propose a stochastic bounding approach that derives upper and lower bounds for the stationary distribution, thereby significantly reducing computational cost. This approach further provides bounds on the performance metrics of the hybrid system.

1. Introduction

In recent years, communication traffic has been steadily increasing, driven by factors such as the widespread use of Internet-connected devices, including smartphones and tablets, and the growing volume of data generated by digital transformation (DX). To meet this rising demand, 5G networks have been widely deployed. While 5G networks offer high capacity, high speed, low latency, and massive connectivity, they also raise concerns about increased power consumption. As a result, developing operational strategies to mitigate energy-related costs has become a crucial research challenge.
In mobile networks, many network functions are now virtualized, resulting in configurations that combine legacy network equipment with virtualized network functions, resembling a non-standalone 5G architecture. Legacy equipment remains continuously powered on, whereas virtual network functions (VNFs) can be dynamically activated or deactivated. In order to design such a hybrid system that minimizes power consumption, it is important to conduct performance analysis based on mathematical modeling.
A study closely related to the present work is that of Sato et al. [1], who modeled a hybrid system consisting of both servers running on legacy network equipment (hereafter referred to as legacy servers) and virtualized servers (hereafter referred to as virtual servers) that require setup time to become active. Their model extends the frameworks proposed by Phung-Duc et al. [2] and Ren et al. [3] to allow for different processing rates between legacy and virtual servers. Importantly, their model assumes that once a job is assigned to a virtual server, it cannot be transferred back to a legacy server. For details of the job assignment policy, see Sato et al. [1]. They formulated the system as a level-dependent quasi-birth-and-death (LDQBD) process and analyzed its stationary behavior.
In contrast to the models in Phung-Duc et al. [2] and Ren et al. [3], where the special transition structure of the LDQBD process allows efficient computation of stationary performance metrics using the technique of Phung-Duc and Kawanishi [4], the model in Sato et al. [1] lacks such a structure. Consequently, their stationary analysis relies on standard matrix analytic methods.
For the matrix analytic methods, see, for example, Neuts [5] and Latouche and Ramaswami [6]. See also Artalejo and Gómez-Corral [7] for a recent study on queueing systems with complex dynamics. Algorithms for computing the stationary distribution of LDQBD processes were proposed by Bright and Taylor [8], Phung-Duc et al. [9], and Baumann and Sandmann [10]. For more general Markov processes, including LDQBD processes, a sequential update algorithm was developed by Masuyama [11]. Since matrix analytic methods require matrix operations, they might become computationally intensive and thus impractical for large-scale systems, as the computational cost grows significantly with the system size (e.g., the number of virtual servers) due to the large matrix dimensions involved.
To address this issue, we focus on bounds for the stationary distribution and its expectations. There is a substantial body of literature analyzing bounds on stationary expectations for Markov processes, including LDQBD processes. Bounding the stationary distribution and performance metrics with the help of a Lyapunov function [12,13] is a common technique in the literature on stochastic models and their applications. For example, systems of stochastic chemical kinetics have been analyzed using LDQBD processes [14], where the Lyapunov function-based approach was applied to bound their stationary distribution. By leveraging information about the moments of the state variables of a Markov process, together with the Lyapunov function-based approach, a tight bounding technique was proposed in [15].
Another approach to obtaining bounds for the stationary distribution is to utilize stochastic comparison methods (see, e.g., [16,17]). The key idea of this approach is to design a new Markov process whose stationary distribution serves as an upper or lower bound, in a certain stochastic ordering, for the stationary distribution of the original Markov process. For an algorithmic approach to such stochastic bounds, see [18]. Furthermore, structural properties of Markov processes, such as lumpability [19] and censoring [20], have also been exploited in bounding techniques [21,22].
In this paper, we analyze the same system model as in Sato et al. [1], which involves both servers with and without setup times formulated as an LDQBD process. Since matrix analytic methods become computationally expensive in large-scale settings, we apply the bounding technique developed by Bright and Taylor [8] to this model. By exploiting the transition structure of the model, we derive upper and lower stochastic bounds for the stationary distribution, which can be computed via recurrence relations without resorting to matrix-based operations. This approach leads to a significant reduction in computational cost. Our contributions are threefold.
  • We refine the upper bounding model developed by Bright and Taylor [8], making it tighter across a wider range of system parameters.
  • We derive recurrence relations for the stationary distribution that avoid matrix-analytic computations, thereby improving computational efficiency. This development relies on specific structural properties of the transition rates, as in Sato et al. [1].
  • We extend the analysis of Kawanishi [23] and further develop a stochastic lower bounding model within the same framework.
We further show that key performance metrics, such as the expected sojourn time computed from the true stationary distribution, are bounded above and below by our proposed stochastic bounds. In addition, we conduct numerical experiments to evaluate the sensitivity of these performance metrics to variations in system parameters.
The remainder of this paper is organized as follows. Section 2 describes the system model. Section 3 provides a brief overview of partial order relations and stochastic dominance. Section 4 and Section 5 introduce the proposed upper and lower bounding models, respectively. Section 6 discusses performance bounds based on the proposed upper and lower bounding models. Section 7 presents numerical examples to validate the proposed approach. Finally, Section 8 concludes this paper.

2. System Model

We consider a queueing system with multiple servers, consisting of both legacy servers and virtual servers. Legacy servers are assumed to be always powered on, and incoming jobs are assigned to them preferentially in order to reduce power consumption. When all legacy servers are busy, jobs are assigned to virtual servers. Virtual servers require a setup time before they become available to process jobs. Once a job is assigned to a server, either legacy or virtual, it is completed on that server without migration. After completing a job, a virtual server is immediately turned off if no jobs are waiting in the queue.
Jobs are processed in the order of arrival, i.e., according to the first-come, first-served discipline. If servers are available and no jobs are waiting, an arriving job is processed immediately. Otherwise, the job enters a finite-capacity waiting room. If the buffer is full on arrival, the job is rejected and lost. Each waiting job is associated with a deadline by which its service must begin. If the job’s service is not started before its deadline, it leaves the system without being processed.
Let l and v denote the number of legacy servers and virtual servers, respectively, so that the total number of servers is l + v. The maximum capacity of the entire queueing system, including both servers and waiting space, is denoted by K, and we assume K ≥ l + v. Jobs arrive at the system according to a Poisson process with rate λ. The deadline time associated with each job is assumed to follow an exponential distribution with mean 1/θ. Service times are exponentially distributed. The mean service time is 1/μ for legacy servers and 1/μv for virtual servers. The setup time required before a virtual server becomes active is also assumed to be exponentially distributed, with mean 1/α. The parameters of the queueing system are summarized in Table 1.
Based on the conditions described above, we model the system as a two-dimensional continuous-time Markov chain { X(t) = (N(t), J(t)); t ≥ 0 }, where N(t) denotes the number of active virtual servers that have completed setup at time t, and J(t) represents the total number of jobs being processed by legacy servers and those waiting in the queue at time t. The state space S of the Markov chain { X(t) = (N(t), J(t)); t ≥ 0 } is defined as
S = { (i, j) | 0 ≤ i ≤ v, 0 ≤ j ≤ K − i }.
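To make the definition concrete, S can be enumerated directly. The following is a minimal sketch (the helper name `state_space` and the values v = 2, K = 5 of Figure 1 are ours, used only for illustration):

```python
def state_space(v, K):
    """All states (i, j): i active virtual servers (0 <= i <= v) and
    j jobs at legacy servers plus the queue, constrained by j <= K - i."""
    return [(i, j) for i in range(v + 1) for j in range(K - i + 1)]
```

For v = 2 and K = 5 this yields 15 states, e.g., (2, 3) belongs to S while (2, 4) does not.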
The continuous-time Markov chain { X(t) = (N(t), J(t)); t ≥ 0 } can be regarded as a finite LDQBD process, where J(t) represents the level and N(t) the phase. The transition rate matrix Q of this LDQBD process has the following block-tridiagonal structure:

Q = [ Q1(0)  Q0(0)                          O
      Q2(1)  Q1(1)  Q0(1)
             Q2(2)  Q1(2)    ⋱
                      ⋱       ⋱     Q0(K−1)
      O                     Q2(K)   Q1(K)  ],

where the block matrices Q0(j) (for 0 ≤ j ≤ K − 1), Q1(j) (for 0 ≤ j ≤ K), and Q2(j) (for 1 ≤ j ≤ K) represent the transitions that increase, preserve, and decrease the level, respectively. The explicit forms of these block matrices are provided in Appendix A. As an illustrative example, Figure 1 shows the state transition diagram for the case where l = 2, v = 2, and K = 5.
We can confirm that the generator matrix Q is irreducible. Since the state space is finite, the LDQBD process has a unique stationary distribution π, which satisfies the following system of linear equations
πQ = 0,  π1 = 1,
where 0 is the row vector of all zeros, and 1 is the column vector of all ones. Thanks to the block tridiagonal structure of Q, it is well known that π can be computed using the matrix analytic method [6]. Specifically, if we partition the stationary distribution as π = (π0, π1, …, πK), where
π_j = (π_{0,j}, π_{1,j}, …, π_{min{K−j, v}, j}),  0 ≤ j ≤ K,
and
π_{i,j} = lim_{t→∞} Pr(X(t) = (i, j)),  (i, j) ∈ S,
then the vectors πj can be computed recursively as
π_j = π_{j−1} R(j),  j = 1, 2, …, K,
where the rate matrix R(j) is defined by
R(j) = Q0(j−1) (−U(j))^{−1},  j = 1, 2, …, K,
and the auxiliary matrices U(j) are computed via the backward recursion
U(K) = Q1(K),  U(j) = Q1(j) + Q0(j) (−U(j+1))^{−1} Q2(j+1),  j = 0, 1, …, K − 1.
The vector π0 is obtained as the solution to π0 U(0) = 0. The normalization condition π1 = 1 leads to
π0 1 + π0 R(1) 1 + π0 R(1) R(2) 1 + ⋯ + π0 R(1) R(2) ⋯ R(K) 1 = 1.
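The backward recursion for U(j) and the forward propagation by R(j) can be sketched generically as follows. This is a minimal illustration for any finite LDQBD supplied as lists of blocks; the function name `ldqbd_stationary` is ours, and the blocks of the actual model (Appendix A) would be substituted in practice.

```python
import numpy as np

def ldqbd_stationary(Q0, Q1, Q2):
    """Stationary distribution of a finite LDQBD via the backward recursion
    U(K) = Q1(K), U(j) = Q1(j) + Q0(j) (-U(j+1))^(-1) Q2(j+1), followed by
    pi_0 U(0) = 0 and the forward pass pi_j = pi_{j-1} Q0(j-1) (-U(j))^(-1).
    Q0[0..K-1], Q1[0..K], Q2[1..K] are the blocks (Q2[0] is unused)."""
    K = len(Q1) - 1
    U = [None] * (K + 1)
    U[K] = Q1[K]
    for j in range(K - 1, -1, -1):
        U[j] = Q1[j] + Q0[j] @ np.linalg.inv(-U[j + 1]) @ Q2[j + 1]
    # solve pi_0 U(0) = 0 together with a temporary normalization row
    m = U[0].shape[0]
    A = np.vstack([U[0].T, np.ones((1, m))])
    rhs = np.zeros(m + 1)
    rhs[-1] = 1.0
    pi0 = np.linalg.lstsq(A, rhs, rcond=None)[0]
    levels = [pi0]
    for j in range(1, K + 1):
        levels.append(levels[-1] @ Q0[j - 1] @ np.linalg.inv(-U[j]))  # pi_{j-1} R(j)
    pi = np.concatenate(levels)
    return pi / pi.sum()
```

On a toy two-phase chain with Poisson arrivals, level-dependent services, and phase switching, the resulting vector satisfies πQ = 0 and sums to one, which is a convenient sanity check for any implementation.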
As the size of the block matrices increases, the computational cost grows significantly, especially for large-scale systems. It can be verified that the total number of states of the model is
(v + 1)(K + 1 − v) + v(v + 1)/2 = (v + 1)(l + Δ + 1) + v(v + 1)/2,
where Δ = K − v − l ≥ 0. To obtain the stationary distribution of all states, it is necessary to compute R(j) for 1 ≤ j ≤ K. In what follows, we focus on R(j) for 1 ≤ j ≤ K and treat v as an input parameter to analyze the computational complexity with respect to v. Specifically, the size of R(j) for 1 ≤ j ≤ K is
(min{K − j + 1, v} + 1) × (min{K − j, v} + 1).
Hence, there are K − v = l + Δ matrices of size (v + 1) × (v + 1), and one matrix of size (k + 1) × k for each 1 ≤ k ≤ v. The total space required to store R(j) for 1 ≤ j ≤ K is therefore
(v + 1)² (K − v) + Σ_{k=1}^{v} (k + 1)k = (v + 1)² (l + Δ) + v(v + 1)(v + 2)/3.
Thus, the space complexity grows as O(v³) with v. Since the most computationally expensive operations for evaluating R(j) are matrix multiplications, it can be verified that the total time required to obtain R(j) for 1 ≤ j ≤ K is
(v + 1)³ × (K − v) + Σ_{k=1}^{v} k × (k + 1) × k,
which is of the order O(v⁴).
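These counting formulas are easy to confirm mechanically. The sketch below (the helper names are ours) sums the block sizes of R(j) directly and compares the total with the closed form above:

```python
def rj_total_entries(v, K):
    """Sum of the block sizes (min{K-j+1, v}+1) x (min{K-j, v}+1)
    of R(j) over 1 <= j <= K, counted directly."""
    return sum((min(K - j + 1, v) + 1) * (min(K - j, v) + 1)
               for j in range(1, K + 1))

def rj_total_entries_closed(v, K):
    """Closed form (v+1)^2 (K-v) + v(v+1)(v+2)/3 from the text."""
    return (v + 1) ** 2 * (K - v) + v * (v + 1) * (v + 2) // 3
```

For instance, with v = 2 and K = 5 both expressions give 35 stored entries, and the two functions agree for any K ≥ v.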
To reduce the computational cost of obtaining the stationary distribution, it is effective to adopt the recurrence-based method proposed in Phung-Duc and Kawanishi [4]. However, the transition structure of the model considered in this paper differs from that of the model in Phung-Duc and Kawanishi [4], which prevents direct application of their recurrence-based approach. To address this issue, we construct LDQBD processes that stochastically dominate the original process { X(t) = (N(t), J(t)); t ≥ 0 }, and whose stationary distributions can be computed via recurrence relations.

3. Partial Order Relation and Stochastic Dominance on S

3.1. Partial Order on S

Following the framework of Bright and Taylor [8], we introduce a partial order ⪯ on the state space S defined in Section 2. The quasi order ≺ and the associated partial order ⪯ are defined as follows.
Definition 1
(Quasi order). For two states (i₁, j₁) ∈ S and (i₂, j₂) ∈ S, we write (i₁, j₁) ≺ (i₂, j₂) if
j₁ < j₂.
Definition 2
(Partial order). For (i₁, j₁), (i₂, j₂) ∈ S, we write (i₁, j₁) ⪯ (i₂, j₂) if
(i₁, j₁) = (i₂, j₂) or (i₁, j₁) ≺ (i₂, j₂).
This ordering will be used in the next subsection to establish stochastic dominance between the original LDQBD process and its bounding processes; see Bright and Taylor [8] for further background.

3.2. Stochastic Dominance on S

We now define stochastic dominance on the partially ordered set S with respect to the partial order relation ⪯ introduced in Section 3.1.
Definition 3
(Stochastic dominance of random variables). Let X and Y be random variables taking values in a partially ordered set S. Let f : S → ℝ be a real-valued function such that
f(i₁, j₁) ≤ f(i₂, j₂) whenever (i₁, j₁) ⪯ (i₂, j₂).
Then, we say that X is stochastically dominated by Y if, for every such f,
Σ_{(i,j)∈S} f(i, j) Pr(X = (i, j)) ≤ Σ_{(i,j)∈S} f(i, j) Pr(Y = (i, j)).
We denote this relation as X ⪯ₛ Y.
Definition 4
(Stochastic dominance of Markov processes). Let { X(t); t ≥ 0 } and { Y(t); t ≥ 0 } be continuous-time Markov chains on the state space S equipped with a partial order ⪯. We say that { X(t); t ≥ 0 } is stochastically dominated by { Y(t); t ≥ 0 } if
X(0) ⪯ₛ Y(0)  ⟹  X(t) ⪯ₛ Y(t)  for all t ≥ 0.
Let p = (Pr(X = (i, j)))_{(i,j)∈S} and q = (Pr(Y = (i, j)))_{(i,j)∈S} denote the probability vectors associated with X and Y, respectively. If X ⪯ₛ Y, we write p ⪯ₛ q. Therefore, if { X(t); t ≥ 0 } and { Y(t); t ≥ 0 } have stationary distributions ν and σ, respectively, and { Y(t); t ≥ 0 } stochastically dominates { X(t); t ≥ 0 }, then we have
ν ⪯ₛ σ.

4. Upper Bounding Models

In this section, we construct LDQBD processes that stochastically dominate the original model introduced in Section 2. Our approach follows the idea of Bright and Taylor [8], where an LDQBD process providing a stochastic upper bound for a more general (possibly infinite) LDQBD process is constructed.
A key feature of our proposal is that the stationary distribution of the upper bounding process can be computed via recurrence relations. To achieve this, we extend the original state space S, while ensuring that the stationary distribution remains the same as that of the model in Section 2. This extension paves the way for computing the stationary distribution via recurrence relations.

4.1. Extension of State Space

We consider an LDQBD process { X̂(t); t ≥ 0 } on the extended state space Ŝ defined by
Ŝ = { (i, j) | 0 ≤ i ≤ v, j ≥ 0 }.
Note that S ⊂ Ŝ, and Ŝ is infinite. Moreover, we can define the same partial order ⪯ on Ŝ as on S, by comparing the second components of states.
Let Q̂ denote the transition rate matrix of the process { X̂(t); t ≥ 0 }. We consider Q̂ to have the following block tridiagonal structure:

Q̂ = [ Q̂1(0)   Q̂0(0)                                   O
      Q̂2(1)    ⋱        ⋱
                ⋱        ⋱        Q̂0(K−1)
                       Q̂2(K)     Q̂1(K)     Q̂0(K)
      O                          Q̂2(K+1)   Q̂1(K+1)    ⋱
                                              ⋱         ⋱  ],

where Q̂0(j) (j ≥ 0), Q̂1(j) (j ≥ 0), and Q̂2(j) (j ≥ 1) are square matrices of size v + 1, and are defined as follows:

(Q̂0(j))_{i,n} = (Q0(j))_{i,n}  if (i, j) ∈ S, (n, j+1) ∈ S, j ≥ 0,
              = 0              otherwise,

(Q̂1(j))_{i,n} = (Q1(j))_{i,n}  if (i, j), (n, j) ∈ S, i ≠ n, j ≥ 0,
              = 0              otherwise (except for diagonal elements),

(Q̂2(j))_{i,n} = (Q2(j))_{i,n}                  if (i, j) ∈ S, (n, j−1) ∈ S, j ≥ 1,
              = (j ∧ l)μ + iμv + [j − l]⁺ θ     if (i, j) ∈ Ŝ∖S, i = n, j ≥ 1,
              = ((v − i) ∧ [j − l]⁺) α          if (i, j) ∈ Ŝ∖S, i = n − 1, j ≥ 1,
              = 0                               otherwise.

Here, we define a ∧ b ≜ min{a, b} for constants a and b, and [a]⁺ ≜ max{a, 0}. The diagonal entries of Q̂1(j) (j ≥ 0) are determined such that the row sums of Q̂ are zero.
Figure 2 illustrates the state transition diagram of the process { X̂(t); t ≥ 0 } on Ŝ when l = 2, v = 2, and K = 5. The states (2, 4), (1, 5), (2, 5), (0, 6), (1, 6), and (2, 6) are transient states that do not belong to S, and they do not have transitions to states at higher levels. The states (0, j), (1, j), (2, j) for j ≥ 7 are omitted from the figure, as they exhibit the same behavior as (0, 6), (1, 6), (2, 6), respectively.
We observe that Ŝ contains a single irreducible class that exactly coincides with the original state space S. Moreover, the transition rates within this irreducible class are identical to those of the original model on S. This implies that the stationary distribution of { X̂(t); t ≥ 0 }, when restricted to this irreducible class, is identical to the stationary distribution π of the original model.
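The split of Ŝ into the recurrent class S and the transient states can be reproduced mechanically. The helper below (an illustrative sketch; the name `transient_states` is ours) lists the states of Ŝ outside S up to a given level:

```python
def transient_states(v, K, max_level):
    """States of S_hat up to max_level that lie outside S (hence transient)."""
    S = {(i, j) for i in range(v + 1) for j in range(K - i + 1)}
    S_hat = {(i, j) for i in range(v + 1) for j in range(max_level + 1)}
    return sorted(S_hat - S, key=lambda s: (s[1], s[0]))
```

For l = 2, v = 2, K = 5 and levels up to 6, this recovers exactly the transient states (2, 4), (1, 5), (2, 5), (0, 6), (1, 6), (2, 6) listed above.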
We define the probability vector π̂ as
π̂ = (π̂_0, π̂_1, …, π̂_K, 0, 0, …).
Here, each π̂_j is defined as
π̂_j = (π̂_{0,j}, π̂_{1,j}, …, π̂_{v,j}),  0 ≤ j ≤ K,
where
π̂_{i,j} = lim_{t→∞} Pr(X̂(t) = (i, j)),  (i, j) ∈ Ŝ.
Note that π̂_{i,j} = 0 for all transient states (i, j) ∈ Ŝ∖S. Therefore, π̂ is essentially the same as the stationary distribution π of the original model on S, padded with zeros corresponding to the transient states in Ŝ∖S.

4.2. Upper Bounding Model by Bright and Taylor

We briefly summarize the construction of an LDQBD process { X̄(t); t ≥ 0 } that stochastically dominates { X̂(t); t ≥ 0 }. We begin with the following assumption.
Condition 1.
For all j ≥ 1 and every i ∈ {0, 1, …, v}, there exists n ∈ {0, 1, …, v} such that
(Q̂2(j))_{i,n} > 0.
The following proposition gives a construction of an LDQBD process that stochastically dominates { X̂(t); t ≥ 0 }.
Proposition 1
(Theorem 1 in [8]). Suppose the LDQBD process { X̂(t); t ≥ 0 } satisfies Condition 1. Define a new LDQBD process { X̄(t); t ≥ 0 } with transition rate matrix Q̄ given by

Q̄ = [ Q̄1(0)   Q̄0(0)                                   O
      Q̄2(1)    ⋱        ⋱
                ⋱        ⋱        Q̄0(K−1)
                       Q̄2(K)     Q̄1(K)     Q̄0(K)
      O                          Q̄2(K+1)   Q̄1(K+1)    ⋱
                                              ⋱         ⋱  ],

where the block matrices are defined as

(Q̄0(0))_{i,n} = (Q̂0(0))_{i,n},
(Q̄0(k))_{i,n} = max{ (Q̂0(k−1) 1)_max / M_{k+1}, (Q̂0(k))_{i,n} },  k ≥ 1,
(Q̄2(1))_{i,n} = 0,
(Q̄2(k))_{i,n} = min{ (Q̂2(k−1) 1)_min / M_{k−1}, (Q̂2(k))_{i,n} },  k ≥ 2,
(Q̄1(k))_{i,n} = (Q̂1(k))_{i,n},  i ≠ n,  k ≥ 0.

Here, M_k denotes the number of states in level k, i.e., M_k = |Ŝ_k| with Ŝ_k = { (i, j) ∈ Ŝ | j = k }, and (a)_max, (a)_min denote the maximum and minimum components of a vector a, respectively. Then, the process { X̄(t); t ≥ 0 } stochastically dominates { X̂(t); t ≥ 0 }.
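A direct transcription of the block modifications in Proposition 1 might look as follows (a sketch on nested Python lists; the function names `bt_up_block` and `bt_down_block` are ours):

```python
def bt_up_block(Q0_prev, Q0_cur, M_next):
    """(Q_bar0(k))_{i,n} = max{ (Q_hat0(k-1) 1)_max / M_{k+1}, (Q_hat0(k))_{i,n} }."""
    floor = max(sum(row) for row in Q0_prev) / M_next
    return [[max(floor, x) for x in row] for row in Q0_cur]

def bt_down_block(Q2_prev, Q2_cur, M_prev):
    """(Q_bar2(k))_{i,n} = min{ (Q_hat2(k-1) 1)_min / M_{k-1}, (Q_hat2(k))_{i,n} }."""
    cap = min(sum(row) for row in Q2_prev) / M_prev
    return [[min(cap, x) for x in row] for row in Q2_cur]
```

Applied to a 3-phase diagonal arrival block with the illustrative value λ = 1, `bt_up_block` raises every off-diagonal entry to λ/3, so each row sum becomes at least (Q̂0(k−1) 1)_max, which is the property used in the dominance proof.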
It should be noted that Condition 1 is satisfied for the specific LDQBD process { X̂(t); t ≥ 0 }. Moreover, we observe that { X̄(t); t ≥ 0 } has a single finite irreducible class given by
C̄ = { (i, j) | 0 ≤ i ≤ v, 1 ≤ j ≤ K + 1 }.
The following corollary is immediate from the stochastic dominance of { X̄(t); t ≥ 0 } over { X̂(t); t ≥ 0 }.
Corollary 1.
Let (π̄_1, …, π̄_{K+1}) denote the stationary distribution on C̄, where
π̄_j = (π̄_{0,j}, π̄_{1,j}, …, π̄_{v,j}),  1 ≤ j ≤ K + 1,
and
π̄_{i,j} = lim_{t→∞} Pr(X̄(t) = (i, j)),  (i, j) ∈ C̄.
If we define the probability vector π̄ as
π̄ = (0, π̄_1, …, π̄_{K+1}, 0, …),
then the following stochastic dominance relation holds:
π̂ ⪯ₛ π̄.

4.3. Alternative Upper Bounding Model

Using Proposition 1, we can obtain the LDQBD process { X̄(t); t ≥ 0 } that stochastically dominates { X̂(t); t ≥ 0 }. However, there are several issues that must be addressed.
  1. The structure of (Q̄0(k))_{i,n} does not preserve the transition structure compatible with the recursive approach developed in Phung-Duc and Kawanishi [4]. As a result, we cannot compute the stationary distribution of { X̄(t); t ≥ 0 } using a recurrence relation.
This limitation is a key challenge that we overcome in this paper. In addition, there are the following two issues.
  2. Since the transition rate (Q̄2(k))_{i,n} is defined as the minimum of (Q̂2(k−1) 1)_min / M_{k−1} and (Q̂2(k))_{i,n}, it may result in a conservative upper bounding LDQBD process.
  3. Similarly, since (Q̄0(k))_{i,n} is defined as the maximum of (Q̂0(k−1) 1)_max / M_{k+1} and (Q̂0(k))_{i,n}, it may also lead to a loose upper bounding process.
To address the second issue, we suppose that { X̂(t); t ≥ 0 } satisfies Condition 1, and we define the normalized transition weights ŵ_{i,n}(k) as
ŵ_{i,n}(k) = (Q̂2(k))_{i,n} / Σ_{n′∈Ŝ_{k−1}} (Q̂2(k))_{i,n′},  k ≥ 1,  i ∈ Ŝ_k,  n ∈ Ŝ_{k−1}.
Note that Σ_{n′∈Ŝ_{k−1}} (Q̂2(k))_{i,n′} > 0 by Condition 1.
To resolve the third issue, we define the normalized forward transition weights ẑ_{i,n}(k) as
ẑ_{i,n}(k) = (Q̂0(k))_{i,n} / Σ_{n′∈Ŝ_{k+1}} (Q̂0(k))_{i,n′}  if Σ_{n′∈Ŝ_{k+1}} (Q̂0(k))_{i,n′} ≠ 0, and ẑ_{i,n}(k) = 0 otherwise,  for k ≥ 1,  i ∈ Ŝ_k,  n ∈ Ŝ_{k+1}.
Taking account of the aforementioned issues, we design an alternative LDQBD process that not only stochastically dominates { X̂(t); t ≥ 0 } but also enables us to compute its stationary distribution using a recurrence relation. The construction is summarized in the following theorem. The proof is provided in Appendix B.
Theorem 1.
Suppose that { X̂(t); t ≥ 0 } satisfies Condition 1. Let us consider the LDQBD process { X̌(t); t ≥ 0 } with transition rate matrix Q̌ given by

Q̌ = [ Q̌1(0)   Q̌0(0)                                   O
      Q̌2(1)    ⋱        ⋱
                ⋱        ⋱        Q̌0(K−1)
                       Q̌2(K)     Q̌1(K)     Q̌0(K)
      O                          Q̌2(K+1)   Q̌1(K+1)    ⋱
                                              ⋱         ⋱  ],

where the block matrices are defined as

(1) (Q̌0(k))_{i,n} = (Q̂0(k))_{i,n},  k = 0,
(2) (Q̌0(k))_{i,n} = max{ δ_{i,n} (Q̂0(k−1) 1)_max,  ẑ_{i,n}(k) (Q̂0(k−1) 1)_max,  (Q̂0(k))_{i,n} },  k = 1,
(3) (Q̌0(k))_{i,n} = max{ δ_{i,n} (Q̂0(k−1) 1)_max,  ẑ_{i,n}(k) (Q̂0(k−1) 1)_max,  (Q̂0(k))_{i,n} + δ_{i,n} Σ_{m: m≠i} (Q̂1(k))_{i,m} },  k ≥ 2,
(4) (Q̌2(k))_{i,n} = 0,  k = 1,
(5) (Q̌2(k))_{i,n} = min{ ŵ_{i,n}(k) (Q̂2(k−1) 1)_min,  (Q̂2(k))_{i,n} },  k ≥ 2,
(6) (Q̌1(k))_{i,n} = (Q̂1(k))_{i,n},  i ≠ n,  k = 0, 1,
(7) (Q̌1(k))_{i,n} = 0,  i ≠ n,  k ≥ 2.

Here, δ_{i,n} denotes the Kronecker delta, defined by δ_{i,n} = 1 if i = n and δ_{i,n} = 0 otherwise. Then, the process { X̌(t); t ≥ 0 } stochastically dominates { X̂(t); t ≥ 0 }.
Remark 1.
Note that, for k ≥ 2, the term δ_{i,n} Σ_{m: m≠i} (Q̂1(k))_{i,m} is added to the transition rate (Q̂0(k))_{i,n} in (3). This modification is essential for compensating for the removal of the off-diagonal elements of Q̌1(k) for k ≥ 2 (see (7)). As a result of this removal, the only levels at which nonzero level-preserving transition rates remain are k = 0 and k = 1, which is the key distinction between { X̌(t); t ≥ 0 } and { X̂(t); t ≥ 0 }.
We observe that the LDQBD process { X̌(t); t ≥ 0 } has a single finite irreducible class Č given by
Č = { (i, j) | 0 ≤ i ≤ v, 1 ≤ j ≤ K + 1 }.
All states in Ŝ∖Č are transient. This leads to the following corollary.
Corollary 2.
Let (π̌_1, …, π̌_{K+1}) denote the stationary distribution on Č, where
π̌_j = (π̌_{0,j}, π̌_{1,j}, …, π̌_{v,j}),  1 ≤ j ≤ K + 1,
and
π̌_{i,j} = lim_{t→∞} Pr(X̌(t) = (i, j)),  (i, j) ∈ Č.
Define the probability vector π̌ as
π̌ = (0, π̌_1, …, π̌_{K+1}, 0, …),
where the zero vectors correspond to the transient states in Ŝ∖Č. Then, the following stochastic dominance relation holds:
π̂ ⪯ₛ π̌.
Remark 2.
Note that the single irreducible class Č = { (i, j) | 0 ≤ i ≤ v, 1 ≤ j ≤ K + 1 } of the upper bounding model does not include the set of states { (i, 0) | 0 ≤ i ≤ v }. Therefore, the transition rates at level k = 0 do not affect the stationary distribution of { X̌(t); t ≥ 0 }. As a result, the transition structure of { X̌(t); t ≥ 0 } allows the computation of π̌, which stochastically dominates π̂ and is also compatible with the recursive computation method proposed by Phung-Duc and Kawanishi [4].
Example 1.
As an illustrative example, let us consider the case k = 2 in Figure 2. In this case, we have

Q̂0(2) = [ λ  0  0
           0  λ  0
           0  0  λ ],   (Q̂0(1) 1)_max = λ.

If we apply Proposition 1, then Q̄0(2) is obtained as follows:

Q̄0(2) = [ λ    λ/3  λ/3
           λ/3  λ    λ/3
           λ/3  λ/3  λ   ].

Since the row sums of Q̄0(2) are greater than or equal to λ = (Q̂0(1) 1)_max, the condition ensuring that { X̄(t); t ≥ 0 } stochastically dominates { X̂(t); t ≥ 0 } is satisfied. However, due to the presence of off-diagonal components such as λ/3, the resulting transition structure is no longer compatible with the recurrence-based method proposed in Phung-Duc and Kawanishi [4]. In contrast, if we apply Theorem 1, then Q̌0(2) becomes

Q̌0(2) = [ λ  0       0
           0  λ + μv  0
           0  0       λ + 2μv ].

Since all the off-diagonal components of Q̌0(2) are zero, the recurrence-based method can be applied in this case.
Figure 3 illustrates the state transition diagram of the process { X̌(t); t ≥ 0 } on Ŝ when l = 2, v = 2, and K = 5. It also illustrates the differences in the transition structure before and after applying Theorem 1. The transition rate μv from state (1, 2) to state (0, 2) has been removed and reassigned to the transition rate leading to state (1, 3). Similarly, the transition rate 2μv from state (2, 2) to state (1, 2) has been removed and reassigned to the transition rate leading to state (2, 3). Additionally, transitions represented by dashed arrows with rate λ are newly added.
Since (Q̌1(k))_{i,n} = 0 for k ≥ 2 and i ≠ n, the process { X̌(t); t ≥ 0 } in Theorem 1 has a transition structure that enables computation of the stationary distribution by recurrence relations, as proposed in Phung-Duc and Kawanishi [4].
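The modified up-block of Theorem 1 can be transcribed in the same style as the sketch for Proposition 1. The function below (an illustrative sketch; the name `check_up_block` is ours) implements cases (2)–(3) for dense (v + 1) × (v + 1) blocks:

```python
def check_up_block(Q0_prev, Q0_cur, Q1_cur, k):
    """(Q_check0(k))_{i,n} per (2)-(3): entrywise max of
    delta_{i,n} * up_max, z_hat_{i,n}(k) * up_max, and Q_hat0(k)_{i,n}
    (plus, on the diagonal for k >= 2, the removed off-diagonal Q_hat1(k) mass)."""
    m = len(Q0_cur)
    up_max = max(sum(row) for row in Q0_prev)      # (Q_hat0(k-1) 1)_max
    out = []
    for i in range(m):
        row_sum = sum(Q0_cur[i])
        extra = sum(Q1_cur[i][n] for n in range(m) if n != i) if k >= 2 else 0.0
        row = []
        for n in range(m):
            z = Q0_cur[i][n] / row_sum if row_sum > 0 else 0.0   # z_hat_{i,n}(k)
            base = Q0_cur[i][n] + (extra if i == n else 0.0)
            row.append(max(up_max if i == n else 0.0, z * up_max, base))
        out.append(row)
    return out
```

Applied to the blocks of Example 1 (with the illustrative values λ = 1 and μv = 0.5, and the off-diagonal rates μv and 2μv of Q̂1(2)), it returns the diagonal matrix diag(λ, λ + μv, λ + 2μv), matching Q̌0(2) above.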

4.4. Balance Equations of Stationary Distribution on Č

We now derive the global balance equations that characterize the stationary distribution (π̌_1, π̌_2, …, π̌_{K+1}) on the finite irreducible class Č of the process { X̌(t); t ≥ 0 }.
For i = 0 and 1 < jK + 1, the global balance equations for π ˇ 0 , j are given by
( Q ˇ 0 ( j ) ) 0 , 0 + n S ^ j 1 ( Q ˇ 2 ( j ) ) 0 , n π ˇ 0 , j = ( Q ˇ 0 ( j 1 ) ) 0 , 0 π ˇ 0 , j 1 + ( Q ˇ 2 ( j + 1 ) ) 0 , 0 · π ˇ 0 , j + 1 , 1 < j < K + 1 ,
n S ^ j 1 ( Q ˇ 2 ( j ) ) 0 , n π ˇ 0 , j = ( Q ˇ 0 ( j 1 ) ) 0 , 0 π ˇ 0 , j 1 , j = K + 1 .
For 1 ≤ iv and 1 < jK + 1, the global balance equations for π ˇ i , j are given by
( Q ˇ 0 ( j ) ) i , i + n S ^ j 1 ( Q ˇ 2 ( j ) ) i , n π ˇ i , j = ( Q ˇ 0 ( j 1 ) ) i , i π ˇ i , j 1 + ( Q ˇ 2 ( j + 1 ) ) i , i · π ˇ i , j + 1 + ( Q ˇ 2 ( j + 1 ) ) i 1 , i · π ˇ i 1 , j + 1 , 1 < j < K + 1 ,
n S ^ j 1 ( Q ˇ 2 ( j ) ) i , n π ˇ i , j = ( Q ˇ 0 ( j 1 ) ) i , i π ˇ i , j 1 , j = K + 1 .
Note that the above equations do not cover the case where j = 1. To address this, we define a subset Č_i of the irreducible class Č as follows:
Č_i = { (n, j) | i ≤ n ≤ v, 1 ≤ j ≤ K + 1 },  1 ≤ i ≤ v.
From the global balance between Č_i and its complement Č∖Č_i, we obtain
i μv π̌_{i,1} = Σ_{j=1}^{K+1} (Q̌2(j))_{i−1,i} π̌_{i−1,j},  1 ≤ i ≤ v.
The base case π̌_{0,1} is determined from the normalization condition given by
Σ_{i=0}^{v} Σ_{j=1}^{K+1} π̌_{i,j} = 1.
From the above balance equations and normalization condition, we see that the stationary distribution (π̌_1, π̌_2, …, π̌_{K+1}) is uniquely determined.

4.5. Construction of Recurrence Relation

We now construct a recurrence relation for the stationary distribution components π̌_{i,j} on Č. We begin with the case i = 0. From the global balance Equation (9), we observe that π̌_{0,K+1} is determined once π̌_{0,K} is given. Next, using (8) for j = K, we find that π̌_{0,K} is determined if π̌_{0,K−1} is known. Repeating this backward process for j = K − 1, K − 2, …, 2, we see that π̌_{0,j+1} is recursively determined from π̌_{0,j}. Therefore, starting from the initial value π̌_{0,1}, we obtain the following recurrence relation. The proof is omitted, as it is an immediate consequence of the global balance equations.
Lemma 1.
The sequence π̌_{0,j} (1 < j ≤ K + 1) satisfies the recurrence relation
π̌_{0,j} = b̌_{0,j} π̌_{0,j−1},  1 < j ≤ K + 1,
where the coefficient b̌_{0,j} is given by
b̌_{0,j} = (Q̌0(j−1))_{0,0} / ( Σ_{n∈Ŝ_{j−1}} (Q̌2(j))_{0,n} + (Q̌0(j))_{0,0} č_{0,j} ),  1 < j ≤ K + 1,
and the auxiliary coefficient č_{0,j} is recursively defined as
č_{0,j} = 0  for j = K + 1, and
č_{0,j} = ( (Q̌2(j+1))_{0,1} + (Q̌0(j+1))_{0,0} č_{0,j+1} ) / ( Σ_{n∈Ŝ_j} (Q̌2(j+1))_{0,n} + (Q̌0(j+1))_{0,0} č_{0,j+1} )  for 1 < j < K + 1.
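Since the recurrence of Lemma 1 involves only the scalar entries (Q̌0(j))_{0,0}, (Q̌2(j))_{0,0}, and (Q̌2(j))_{0,1} (abbreviated u, d0, d1 below), it is a short loop. The following sketch (the name `phase0_recurrence` is ours, and any rates fed to it are illustrative stand-ins, not a specific model instance) computes the phase-0 probabilities up to the free constant π̌_{0,1}:

```python
def phase0_recurrence(u, d0, d1, K):
    """pi_{0,j} = b_{0,j} pi_{0,j-1} with the subtraction-free coefficients
    of Lemma 1. u[j] ~ (Q_check0(j))_{0,0}, d0[j] ~ (Q_check2(j))_{0,0},
    d1[j] ~ (Q_check2(j))_{0,1}; levels run over 1..K+1, pi_{0,1} left free."""
    c = {K + 1: 0.0}
    for j in range(K, 1, -1):                       # backward pass for c
        den = d0[j + 1] + d1[j + 1] + u[j + 1] * c[j + 1]
        c[j] = (d1[j + 1] + u[j + 1] * c[j + 1]) / den
    pi = {1: 1.0}                                   # un-normalized: scale by pi_{0,1}
    for j in range(2, K + 2):                       # forward pass, b folded in
        pi[j] = u[j - 1] / (d0[j] + d1[j] + u[j] * c.get(j, 0.0)) * pi[j - 1]
    return pi
```

Every division here has a strictly positive denominator, which is the numerical-stability point made in Remark 4 below; the resulting sequence satisfies the balance Equations (8) and (9) exactly.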
Remark 3.
Lemma 1 implies that all π̌_{0,j} for 1 < j ≤ K + 1 can be expressed in terms of the initial value π̌_{0,1}. Moreover, from the balance equation between the subset Č_1 and its complement, we have
μv π̌_{1,1} = Σ_{j=1}^{K+1} (Q̌2(j))_{0,1} π̌_{0,j}.
Since the right-hand side depends only on π̌_{0,j} for j > 0, which are themselves recursively determined by π̌_{0,1}, it follows that π̌_{1,1} is also determined by π̌_{0,1}.
Remark 4.
As in Phung-Duc et al. [2] and Ren et al. [3], the recurrence relations in Lemma 1 can be reformulated without using the auxiliary coefficient c ˇ 0 , j as follows.
\[
\check{b}_{0,j} =
\begin{cases}
\dfrac{ \bigl( \check{Q}_0^{(j-1)} \bigr)_{0,0} }{ \sum_{n \in \hat{S}_{j-1}} \bigl( \check{Q}_2^{(j)} \bigr)_{0,n} }, & j = K+1, \\[10pt]
\dfrac{ \bigl( \check{Q}_0^{(j-1)} \bigr)_{0,0} }{ \bigl( \check{Q}_0^{(j)} \bigr)_{0,0} + \sum_{n \in \hat{S}_{j-1}} \bigl( \check{Q}_2^{(j)} \bigr)_{0,n} - \bigl( \check{Q}_2^{(j+1)} \bigr)_{0,0}\, \check{b}_{0,j+1} }, & 1 < j < K+1.
\end{cases}
\]
These recurrence relations involve subtraction, which may lead to numerical instability. In contrast, the formulation given in Lemma 1 avoids subtraction and is numerically more stable.
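As an illustration, the subtraction-free recursion of Lemma 1 can be sketched in a few lines. Here `q0[j][(m, n)]`, `q2[j][(m, n)]`, and `S_hat[j]` are hypothetical containers standing in for the entries of Q̌_0^{(j)}, Q̌_2^{(j)}, and the phase set of level j; they are not part of the original model's code.

```python
def lemma1_coefficients(K, q0, q2, S_hat):
    """Compute the coefficients of Lemma 1: the auxiliary c backward in j,
    then b; every denominator is a sum of non-negative terms (no subtraction)."""
    c = {K + 1: 0.0}
    for j in range(K, 1, -1):                       # backward: j = K, ..., 2
        carry = q0[j + 1].get((0, 0), 0.0) * c[j + 1]
        num = q2[j + 1].get((0, 1), 0.0) + carry
        den = sum(q2[j + 1].get((0, n), 0.0) for n in S_hat[j]) + carry
        c[j] = num / den
    b = {}
    for j in range(2, K + 2):                       # b for j = 2, ..., K+1
        den = sum(q2[j].get((0, n), 0.0) for n in S_hat[j - 1]) \
            + q0[j].get((0, 0), 0.0) * c[j]
        b[j] = q0[j - 1].get((0, 0), 0.0) / den
    return b                                        # pi_{0,j} = b[j] * pi_{0,j-1}
```

In a single-phase setting (each `S_hat[j]` a singleton and no phase-1 entries), every `c[j]` collapses to zero and `b[j]` reduces to the familiar birth-death ratio, which is a quick sanity check on the recursion.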
Next, we consider the recurrence relation for π̌_{i,j} with 1 ≤ i ≤ v. Using the global balance Equations (10) and (11), we obtain a similar recursive structure as follows. The proof is again omitted, as it is an immediate consequence of these equations.
Lemma 2.
For 1 ≤ i ≤ v and 1 < j ≤ K + 1, π̌_{i,j} satisfies the recurrence relation
\[
\check{\pi}_{i,j} = \check{a}_{i,j} + \check{b}_{i,j}\, \check{\pi}_{i,j-1},
\]
where the coefficients are defined as
\[
\check{a}_{i,j} =
\begin{cases}
0, & j = K+1, \\[6pt]
\dfrac{ \bigl( \check{Q}_2^{(j+1)} \bigr)_{i,i}\, \check{a}_{i,j+1} + \bigl( \check{Q}_2^{(j+1)} \bigr)_{i-1,i}\, \check{\pi}_{i-1,j+1} }{ \sum_{n \in \hat{S}_{j-1}} \bigl( \check{Q}_2^{(j)} \bigr)_{i,n} + \bigl( \check{Q}_0^{(j)} \bigr)_{i,i}\, \check{c}_{i,j} }, & 1 < j < K+1,
\end{cases}
\]
\[
\check{b}_{i,j} = \frac{ \bigl( \check{Q}_0^{(j-1)} \bigr)_{i,i} }{ \sum_{n \in \hat{S}_{j-1}} \bigl( \check{Q}_2^{(j)} \bigr)_{i,n} + \bigl( \check{Q}_0^{(j)} \bigr)_{i,i}\, \check{c}_{i,j} }, \quad 1 < j \le K+1,
\]
\[
\check{c}_{i,j} =
\begin{cases}
0, & j = K+1, \\[6pt]
\dfrac{ \bigl( \check{Q}_2^{(j+1)} \bigr)_{i,i+1} + \bigl( \check{Q}_0^{(j+1)} \bigr)_{i,i}\, \check{c}_{i,j+1} }{ \sum_{n \in \hat{S}_{j}} \bigl( \check{Q}_2^{(j+1)} \bigr)_{i,n} + \bigl( \check{Q}_0^{(j+1)} \bigr)_{i,i}\, \check{c}_{i,j+1} }, & 1 < j < K+1,
\end{cases}
\]
with the convention that (Q̌_2^{(j+1)})_{v,v+1} = 0 for 1 < j < K + 1.
Remark 5.
From Lemma 2, each π̌_{i,j} (1 ≤ i ≤ v, 1 < j ≤ K + 1) is expressed in terms of π̌_{i,j−1} and π̌_{i−1,j+1}. Combined with the balance equation
\[
i \mu_v\, \check{\pi}_{i,1} = \sum_{j=1}^{K+1} \bigl( \check{Q}_2^{(j)} \bigr)_{i-1,i}\, \check{\pi}_{i-1,j},
\]
we find that π̌_{i,1} depends on π̌_{i−1,j} for j > 0. Proceeding inductively, and using Lemmas 1 and 2, all components π̌_{i,j} for 0 ≤ i ≤ v, 1 ≤ j ≤ K + 1 are recursively determined from the initial value π̌_{0,1}.
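The inductive scheme in Remarks 3 and 5 amounts to a short driver: seed π̌_{0,1} with an arbitrary positive value, generate every component through the recurrences, and rescale so the result sums to one. A minimal sketch, where `run_recurrences` is a hypothetical callable standing in for the Lemma 1 and Lemma 2 machinery:

```python
def stationary_from_seed(run_recurrences):
    """Seed-and-normalize driver: every component is linear in the seed,
    so any positive seed followed by a single normalization yields the
    stationary distribution."""
    relative = run_recurrences(seed=1.0)   # {(i, j): value relative to the seed}
    total = sum(relative.values())
    return {state: value / total for state, value in relative.items()}
```

The same driver applies verbatim to the lower bounding model of Section 5, with π̃̃_{v,1} playing the role of the seed.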
Remark 6.
The total number of states in C ˇ is
( v + 1 ) ( K + 1 ) = ( v + 1 ) ( l + v + Δ + 1 ) ,
which scales as O ( v 2 ) and is asymptotically of the same order as that of the original model. The recurrence-based method computes π ˇ i , j for all states in C ˇ using the coefficients a ˇ i , j , b ˇ i , j , and c ˇ i , j . The total space required to store these coefficients is
v × K + ( v + 1 ) × K + ( v + 1 ) × K = ( 3 v + 2 ) × ( l + v + Δ ) ,
which grows as O ( v 2 ) . This is asymptotically smaller than the space requirement for storing R ( j ) ( 1 j K ) in the matrix analytic methods. For each i { 0 , 1 , , v } , computing a ˇ i , j , b ˇ i , j , and c ˇ i , j involves at most ( v + 1 ) additions in the denominators of the coefficients. Consequently, the total time required to obtain all coefficients is at most
3 × ( v + 1 ) × ( v + 1 ) × ( K + 1 ) = 3 ( v + 1 ) 2 × ( l + v + Δ + 1 ) ,
which grows as O ( v 3 ) . This is also asymptotically smaller than the O ( v 4 ) complexity of computing R ( j ) in the matrix analytic methods.
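The counts in Remark 6 are easy to tabulate; the helper below simply evaluates the two closed-form totals with K = l + v + Δ, confirming that doubling v roughly quadruples the storage and increases the work by roughly a factor of eight.

```python
def recurrence_costs(v, l, delta):
    """Storage (3v + 2)(l + v + delta) and worst-case operation count
    3(v + 1)^2 (l + v + delta + 1) of the recurrence-based method."""
    K = l + v + delta
    space = (3 * v + 2) * K            # coefficients a, b, c across all levels
    time = 3 * (v + 1) ** 2 * (K + 1)  # additions over all i and j
    return space, time
```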

5. Lower Bounding Model

In constructing the upper bounding LDQBD process, we used J(t) as the level variable, defined as the total number of jobs being processed by legacy servers and those waiting in the queue at time t. A natural idea for deriving a lower bounding LDQBD process is to reverse the direction of the level variable, i.e., to replace J(t) with K − J(t). However, such a straightforward reversal does not yield an LDQBD process that satisfies Condition 1, which is essential for applying the method in [8].
To overcome this issue, we redefine the level variable. Instead of J(t), we consider J̃′(t), the total number of jobs in the system at time t, including those being processed by both legacy and virtual servers as well as those waiting in the queue. Then, we define the level variable by J̃(t) = K − J̃′(t) and consider the LDQBD process {X̃(t) = (Ñ(t), J̃(t)); t ≥ 0}, where Ñ(t) denotes the number of active virtual servers that have completed their setup at time t. The state space S̃ of the process {X̃(t) = (Ñ(t), J̃(t)); t ≥ 0} is defined by
\[
\tilde{S} = \bigl\{ (i,j) \;\big|\; 0 \le i \le v,\; 0 \le j \le K - i \bigr\}.
\]
To derive a lower bounding model for the original process, we construct a LDQBD process with the level variable reversed and appropriately redefined. Let Q ˜ denote the transition rate matrix of { X ˜ ( t ) = ( N ˜ ( t ) , J ˜ ( t ) ) ; t 0 } on the state space S ˜ . Then, Q ˜ is constructed so that it has a block tridiagonal structure and is given by
\[
\tilde{Q} = \begin{pmatrix}
\tilde{Q}_1^{(0)} & \tilde{Q}_0^{(0)} & & & O \\
\tilde{Q}_2^{(1)} & \tilde{Q}_1^{(1)} & \tilde{Q}_0^{(1)} & & \\
& \tilde{Q}_2^{(2)} & \tilde{Q}_1^{(2)} & \ddots & \\
& & \ddots & \ddots & \tilde{Q}_0^{(K-1)} \\
O & & & \tilde{Q}_2^{(K)} & \tilde{Q}_1^{(K)}
\end{pmatrix}.
\]
Block matrices Q̃_0^{(j)} (0 ≤ j ≤ K − 1), Q̃_1^{(j)} (0 ≤ j ≤ K), and Q̃_2^{(j)} (1 ≤ j ≤ K) of Q̃ are transition rate matrices, which represent transitions that increase the level variable J̃(t) by one, keep it unchanged, and decrease it by one, respectively. The block matrices of Q̃ are presented explicitly in Appendix A. As an example, we show the state transition diagram for S̃ when l = 2, v = 2, and K = 5 in Figure 4.
Remark 7.
In contrast to the upper bounding model, the level variable in the lower bounding model increases due to the departure of jobs rather than their arrival. This reversal reflects the fact that the state ( i , j ) with smaller j corresponds to a larger number of jobs in the system.
Since S ˜ is finite, and Q ˜ is irreducible, the stationary distribution π ˜ of { X ˜ ( t ) = ( N ˜ ( t ) , J ˜ ( t ) ) ; t 0 } uniquely exists and is given by the solution of the following system of linear equations.
\[
\tilde{\pi} \tilde{Q} = \boldsymbol{0}, \qquad \tilde{\pi} \boldsymbol{1} = 1.
\]
The stationary distribution can be partitioned as π ˜ = ( π ˜ 0 , π ˜ 1 , , π ˜ K ) , where
\[
\tilde{\pi}_j = \bigl( \tilde{\pi}_{0,j},\, \tilde{\pi}_{1,j},\, \ldots,\, \tilde{\pi}_{\min\{K-j,\,v\},\,j} \bigr), \quad 0 \le j \le K,
\]
and
\[
\tilde{\pi}_{i,j} = \lim_{t \to \infty} \Pr\bigl( \tilde{X}(t) = (i,j) \bigr), \quad (i,j) \in \tilde{S}.
\]

5.1. Partial Order Relation on S ˜

To compare the states based on the number of jobs in the system, we define a partial order on the state space S ˜ that reflects the reversed level structure.
Definition 5.
For two states (i₁, j₁), (i₂, j₂) ∈ S̃, we write (i₁, j₁) ≺ (i₂, j₂) if and only if
\[
j_1 < j_2.
\]
Definition 6.
For two states (i₁, j₁) ∈ S̃ and (i₂, j₂) ∈ S̃, we write (i₁, j₁) ⪯ (i₂, j₂) if the following condition is satisfied.
\[
(i_1, j_1) = (i_2, j_2) \quad \text{or} \quad (i_1, j_1) \prec (i_2, j_2).
\]
Remark 8.
Note that the partial order relation is defined in terms of the level variable j, not the actual number of jobs in the system. In our construction, j = K − j′, where j′ denotes the number of jobs in the system. Therefore, the strict inequality j₁ < j₂ is equivalent to
\[
K - j_1' < K - j_2' \iff j_1' > j_2'.
\]
In other words, (i₁, j₁) ≺ (i₂, j₂) holds if and only if the number of jobs in the system in the first state is strictly greater than that in the second.
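The order of Definitions 5 and 6 and its job-count reading in Remark 8 can be written down directly; states here are plain (i, j) tuples, and the helper names are illustrative only.

```python
def strictly_precedes(s1, s2):
    """Definition 5: (i1, j1) precedes (i2, j2) iff j1 < j2 (phases ignored)."""
    return s1[1] < s2[1]

def precedes(s1, s2):
    """Definition 6: equality or strict precedence."""
    return s1 == s2 or strictly_precedes(s1, s2)

def has_strictly_more_jobs(s1, s2, K):
    """Remark 8: with level j = K - j', strict precedence of s1 over s2
    means s1 carries strictly more jobs than s2."""
    return (K - s1[1]) > (K - s2[1])
```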

5.2. Stochastic Dominance on S ˜

Let X ˜ and Y ˜ be random variables taking values in the partially ordered set S ˜ with the order relation ⪯. We define stochastic dominance in terms of increasing functions on S ˜ .
Definition 7.
We say that X̃ is stochastically dominated by Ỹ if for every function f̃ : S̃ → ℝ that is non-decreasing with respect to ⪯, the following holds.
\[
\sum_{(i,j) \in \tilde{S}} \tilde{f}((i,j)) \Pr\bigl( \tilde{X} = (i,j) \bigr) \le \sum_{(i,j) \in \tilde{S}} \tilde{f}((i,j)) \Pr\bigl( \tilde{Y} = (i,j) \bigr).
\]
We write X̃ ⪯_s Ỹ if X̃ is stochastically dominated by Ỹ.
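For the performance bounds derived later, the test functions depend only on the level j; for that subclass of non-decreasing functions, the dominance inequality of Definition 7 reduces to comparing level-tail probabilities, which the sketch below checks. Distributions are plain dicts from states (i, j) to probabilities; this is a simplification for level-dependent functions, not the full Definition 7.

```python
def dominates_for_level_functions(p_x, p_y, tol=1e-12):
    """Check sum f dPx <= sum f dPy for every non-decreasing f that depends
    only on the level j; this is equivalent to ordering the level-tail
    probabilities Pr(level >= k) for all k."""
    levels = {j for (_, j) in p_x} | {j for (_, j) in p_y}
    def tail(p, k):
        return sum(prob for (_, j), prob in p.items() if j >= k)
    return all(tail(p_x, k) <= tail(p_y, k) + tol for k in levels)
```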

5.3. Stochastically Dominating LDQBD for { X ˜ ( t ) ; t 0 }

We construct an LDQBD process that stochastically dominates {X̃(t); t ≥ 0}. Recall that the level variable J̃(t) is defined by J̃(t) = K − J̃′(t), where J̃′(t) is the number of jobs in the system at time t. This means that the level increases as the number of jobs decreases. Therefore, when an LDQBD process stochastically dominates {X̃(t); t ≥ 0} with respect to this level variable, it provides a lower bound in terms of the number of jobs in the system.
To construct such a process, we again apply the framework proposed by Bright and Taylor [8], which enables us to construct an LDQBD process that stochastically dominates {X̃(t); t ≥ 0} under a sufficient condition. Unlike the upper bounding model, we do not consider an LDQBD process on an extended state space that includes additional transient states while preserving the same stationary distribution as π̃. Nevertheless, the following condition plays a crucial role in ensuring the applicability of their method, just as it did for the upper bounding models.
Condition 2.
For all k ∈ {1, 2, …, K} and i ∈ {0, 1, …, min{K − k, v}}, there exists n ∈ {0, 1, …, min{K − k + 1, v}} such that
\[
\bigl( \tilde{Q}_2^{(k)} \bigr)_{i,n} > 0.
\]
For simplicity, we construct a lower bounding model in the case of K > l + v. We obtain the following theorem for the LDQBD process { X ˜ ( t ) ; t 0 } under Condition 2. The proof is provided in Appendix B.
Theorem 2.
Suppose that K > l + v and {X̃(t); t ≥ 0} satisfies Condition 2. Define the normalized transition weights
\[
\tilde{z}_{i,n}^{(k)} = \frac{ \bigl( \tilde{Q}_0^{(k)} \bigr)_{i,n} }{ \sum_{n' \in \tilde{S}_{k+1}} \bigl( \tilde{Q}_0^{(k)} \bigr)_{i,n'} }, \qquad
\tilde{w}_{i,n}^{(k)} = \frac{ \bigl( \tilde{Q}_2^{(k)} \bigr)_{i,n} }{ \sum_{n' \in \tilde{S}_{k-1}} \bigl( \tilde{Q}_2^{(k)} \bigr)_{i,n'} },
\]
where S̃_k = {(i, j) ∈ S̃ | j = k}.
Let {X̃̃(t); t ≥ 0} be an LDQBD process with transition rate matrix
\[
\tilde{\tilde{Q}} = \begin{pmatrix}
\tilde{\tilde{Q}}_1^{(0)} & \tilde{\tilde{Q}}_0^{(0)} & & & O \\
\tilde{\tilde{Q}}_2^{(1)} & \tilde{\tilde{Q}}_1^{(1)} & \tilde{\tilde{Q}}_0^{(1)} & & \\
& \tilde{\tilde{Q}}_2^{(2)} & \tilde{\tilde{Q}}_1^{(2)} & \ddots & \\
& & \ddots & \ddots & \tilde{\tilde{Q}}_0^{(K-1)} \\
O & & & \tilde{\tilde{Q}}_2^{(K)} & \tilde{\tilde{Q}}_1^{(K)}
\end{pmatrix},
\]
where the block matrices are defined as
\begin{align}
\bigl( \tilde{\tilde{Q}}_0^{(k)} \bigr)_{i,n} &= \bigl( \tilde{Q}_0^{(k)} \bigr)_{i,n}, & k &= 0, \tag{12} \\
\bigl( \tilde{\tilde{Q}}_0^{(k)} \bigr)_{i,n} &= \max\Bigl\{ \tilde{z}_{i,n}^{(k)} \bigl( \tilde{Q}_0^{(k-1)} \boldsymbol{1} \bigr)_{\max},\; \bigl( \tilde{Q}_0^{(k)} \bigr)_{i,n} \Bigr\}, & k &= 1, \tag{13} \\
\bigl( \tilde{\tilde{Q}}_0^{(k)} \bigr)_{i,n} &= \max\Bigl\{ \tilde{z}_{i,n}^{(k)} \bigl( \tilde{Q}_0^{(k-1)} \boldsymbol{1} \bigr)_{\max},\; \bigl( \tilde{Q}_0^{(k)} \bigr)_{i,n} + \delta_{i,n} \sum_{m \ne i} \bigl( \tilde{Q}_1^{(k)} \bigr)_{i,m} \Bigr\}, & 2 &\le k \le K-1, \tag{14} \\
\bigl( \tilde{\tilde{Q}}_2^{(k)} \bigr)_{i,n} &= 0, & k &= 1, \tag{15} \\
\bigl( \tilde{\tilde{Q}}_2^{(k)} \bigr)_{i,n} &= \min\Bigl\{ \tilde{w}_{i,n}^{(k)} \bigl( \tilde{Q}_2^{(k-1)} \boldsymbol{1} \bigr)_{\min},\; \bigl( \tilde{Q}_2^{(k)} \bigr)_{i,n} \Bigr\}, & 2 &\le k \le K, \tag{16} \\
\bigl( \tilde{\tilde{Q}}_1^{(k)} \bigr)_{i,n} &= \bigl( \tilde{Q}_1^{(k)} \bigr)_{i,n}, \quad i \ne n, & k &= 0, 1, \tag{17} \\
\bigl( \tilde{\tilde{Q}}_1^{(k)} \bigr)_{i,n} &= 0, \quad i \ne n, & 2 &\le k \le K. \tag{18}
\end{align}
Then, { X ˜ ˜ ( t ) ; t 0 } stochastically dominates { X ˜ ( t ) ; t 0 } .
Remark 9.
Due to the transition structure of Q̃, we have Σ_{n∈S̃_{k+1}} (Q̃_0^{(k)})_{i,n} ≠ 0 for all k ∈ {0, 1, …, K − 1} and i ∈ {0, 1, …, min{K − k, v}}. Since Condition 2 is satisfied for Q̃, we also have Σ_{n∈S̃_{k−1}} (Q̃_2^{(k)})_{i,n} ≠ 0 for all k ∈ {1, 2, …, K} and i ∈ {0, 1, …, min{K − k, v}}.
Remark 10.
The transition rate (Q̃̃_1^{(k)})_{i,n} is set to zero for 2 ≤ k ≤ K in order to preserve the transition structure that enables the computation of the stationary distribution of {X̃̃(t); t ≥ 0} via recurrence relations. To ensure stochastic dominance over {X̃(t); t ≥ 0}, the term δ_{i,n} Σ_{m≠i} (Q̃_1^{(k)})_{i,m} is added to the transition rate (Q̃_0^{(k)})_{i,n}.
Remark 11.
The process { X ˜ ˜ ( t ) ; t 0 } has a single irreducible class given by
\[
\tilde{\tilde{C}} = \bigl\{ (i,j) \in \tilde{S} \;\big|\; 0 \le i \le v,\; 1 \le j \le K - i \bigr\}.
\]
All other states in S̃ ∖ C̃̃ are transient. In particular, all states with level j = 0 are transient, and hence their stationary probabilities are equal to zero. It should be noted that the assumption of strict inequality K > l + v is essential to obtain the single irreducible class.
Corollary 3.
Let (π̃̃_1, π̃̃_2, …, π̃̃_K) denote the stationary distribution of the process {X̃̃(t); t ≥ 0} on the irreducible class C̃̃, where
\[
\tilde{\tilde{\pi}}_j = \bigl( \tilde{\tilde{\pi}}_{0,j},\, \tilde{\tilde{\pi}}_{1,j},\, \ldots,\, \tilde{\tilde{\pi}}_{\min\{K-j,\,v\},\,j} \bigr), \quad 1 \le j \le K,
\]
and
\[
\tilde{\tilde{\pi}}_{i,j} = \lim_{t \to \infty} \Pr\bigl( \tilde{\tilde{X}}(t) = (i,j) \bigr), \quad (i,j) \in \tilde{\tilde{C}}.
\]
If we define the probability vector π̃̃ on the entire state space S̃ by
\[
\tilde{\tilde{\pi}} = \bigl( \boldsymbol{0},\, \tilde{\tilde{\pi}}_1,\, \tilde{\tilde{\pi}}_2,\, \ldots,\, \tilde{\tilde{\pi}}_K \bigr),
\]
then the following stochastic dominance relation holds:
\[
\tilde{\pi} \preceq_s \tilde{\tilde{\pi}}.
\]
Example 2.
As a concrete example, let us consider the case where k = 3 and k = 4 in Figure 4. In these cases, the original block matrices Q̃_0^{(3)} and Q̃_0^{(4)} are given by
\[
\tilde{Q}_0^{(3)} = \begin{pmatrix} 2\mu & 0 \\ \mu_v & \mu \\ 0 & 2\mu_v \end{pmatrix}, \qquad
\tilde{Q}_0^{(4)} = \begin{pmatrix} \mu \\ \mu_v \end{pmatrix}.
\]
Applying Theorem 2, we obtain the modified block matrices Q̃̃_0^{(3)} and Q̃̃_0^{(4)} as follows:
\[
\tilde{\tilde{Q}}_0^{(3)} = \begin{pmatrix} * & 0 \\ * & * \\ 0 & * \end{pmatrix}, \qquad
\tilde{\tilde{Q}}_0^{(4)} = \begin{pmatrix} * \\ * \end{pmatrix}.
\]
Here, * denotes certain positive values that satisfy the conditions specified in Theorem 2.
In Figure 5, we illustrate the transition structure of the process before and after applying Theorem 2. We observe that the transition labeled α from state (0, 2) to state (1, 2) is removed, and its rate is instead added to the transition from state (0, 2) to state (0, 3). As a result, the modified process { X ˜ ˜ ( t ) ; t 0 } has a transition structure that allows for efficient computation of the stationary distribution via recurrence relations, as proposed in Phung-Duc and Kawanishi [4].

5.4. Balance Equations of Stationary Distributions in Irreducible Class C ˜ ˜

We now derive the global balance equations that characterize the stationary distribution ( π ˜ ˜ 1 , π ˜ ˜ 2 , , π ˜ ˜ K ) over the irreducible class C ˜ ˜ of the process { X ˜ ˜ ( t ) ; t 0 } .
For i = v and 1 < j ≤ K − v, the global balance equations of π̃̃_{v,j} are given by
\[
\Bigl( \lambda + \sum_{n \in \tilde{S}_{j+1}} \bigl( \tilde{\tilde{Q}}_0^{(j)} \bigr)_{v,n} \Bigr) \tilde{\tilde{\pi}}_{v,j} = \lambda\, \tilde{\tilde{\pi}}_{v,j+1} + \bigl( \tilde{\tilde{Q}}_0^{(j-1)} \bigr)_{v,v}\, \tilde{\tilde{\pi}}_{v,j-1}, \quad 1 < j < K - v, \tag{19}
\]
\[
\Bigl( \lambda + \bigl( \tilde{\tilde{Q}}_0^{(j)} \bigr)_{v,v-1} \Bigr) \tilde{\tilde{\pi}}_{v,j} = \bigl( \tilde{\tilde{Q}}_0^{(j-1)} \bigr)_{v,v}\, \tilde{\tilde{\pi}}_{v,j-1}, \quad j = K - v. \tag{20}
\]
For 0 ≤ i ≤ v − 1 and 1 < j ≤ K − i, the global balance equations of π̃̃_{i,j} are given by
\[
\Bigl( \lambda + \sum_{n \in \tilde{S}_{j+1}} \bigl( \tilde{\tilde{Q}}_0^{(j)} \bigr)_{i,n} \Bigr) \tilde{\tilde{\pi}}_{i,j} = \lambda\, \tilde{\tilde{\pi}}_{i,j+1} + \bigl( \tilde{\tilde{Q}}_0^{(j-1)} \bigr)_{i,i}\, \tilde{\tilde{\pi}}_{i,j-1} + \bigl( \tilde{\tilde{Q}}_0^{(j-1)} \bigr)_{i+1,i}\, \tilde{\tilde{\pi}}_{i+1,j-1}, \quad 1 < j < K - i, \tag{21}
\]
\[
\Bigl( \lambda + \bigl( \tilde{\tilde{Q}}_0^{(j)} \bigr)_{i,i-1} \Bigr) \tilde{\tilde{\pi}}_{i,j} = \bigl( \tilde{\tilde{Q}}_0^{(j-1)} \bigr)_{i,i}\, \tilde{\tilde{\pi}}_{i,j-1} + \bigl( \tilde{\tilde{Q}}_0^{(j-1)} \bigr)_{i+1,i}\, \tilde{\tilde{\pi}}_{i+1,j-1}, \quad j = K - i. \tag{22}
\]
The global balance equations above do not include expressions for π̃̃_{i,1} with 0 ≤ i ≤ v. To obtain these, we define a subset of the irreducible class as
\[
\tilde{\tilde{C}}_i = \bigl\{ (n, j) \;\big|\; i \le n \le v,\; 1 \le j \le K - i \bigr\}, \quad 0 \le i \le v-1,
\]
and consider the total flow between C̃̃_i and its complement in C̃̃. From this, we obtain
\[
(v - i)\alpha\, \tilde{\tilde{\pi}}_{i,1} = \sum_{j=1}^{K-i-1} \bigl( \tilde{\tilde{Q}}_0^{(j)} \bigr)_{i+1,i}\, \tilde{\tilde{\pi}}_{i+1,j}, \quad 0 \le i \le v-1.
\]
Finally, π̃̃_{v,1} is determined via the normalization condition given by
\[
\sum_{i=0}^{v} \sum_{j=1}^{K-i} \tilde{\tilde{\pi}}_{i,j} = 1.
\]

5.5. Construction of Recurrence Relation

We now construct recurrence relations that determine the stationary probabilities π̃̃_{v,j} for 1 < j ≤ K − v. Observe from the global balance Equation (20) that π̃̃_{v,K−v} is determined if π̃̃_{v,K−v−1} is given. Similarly, from (19) at j = K − v − 1, since π̃̃_{v,K−v} has already been determined from π̃̃_{v,K−v−1}, it follows that π̃̃_{v,K−v−1} is determined from π̃̃_{v,K−v−2}. Proceeding recursively, for any 1 < j ≤ K − v, π̃̃_{v,j} can be determined from π̃̃_{v,j−1}. Therefore, the sequence {π̃̃_{v,j}} is determined by a recurrence relation starting from π̃̃_{v,1}. The proof is omitted, as it is analogous to that of Lemma 1.
Lemma 3.
The stationary probability π̃̃_{v,j} for 1 < j ≤ K − v satisfies the recurrence relation
\[
\tilde{\tilde{\pi}}_{v,j} = \tilde{\tilde{b}}_{v,j}\, \tilde{\tilde{\pi}}_{v,j-1}, \quad 1 < j \le K - v,
\]
where the coefficient b̃̃_{v,j} is given by
\[
\tilde{\tilde{b}}_{v,j} = \frac{ \bigl( \tilde{\tilde{Q}}_0^{(j-1)} \bigr)_{v,v} }{ \lambda + \bigl( \tilde{\tilde{Q}}_0^{(j)} \bigr)_{v,v-1} + \bigl( \tilde{\tilde{Q}}_0^{(j)} \bigr)_{v,v}\, \tilde{\tilde{c}}_{v,j} }, \quad 1 < j \le K - v,
\]
and the auxiliary sequence c̃̃_{v,j} is recursively defined as
\[
\tilde{\tilde{c}}_{v,j} =
\begin{cases}
0, & j = K - v, \\[6pt]
\dfrac{ \bigl( \tilde{\tilde{Q}}_0^{(j+1)} \bigr)_{v,v-1} + \bigl( \tilde{\tilde{Q}}_0^{(j+1)} \bigr)_{v,v}\, \tilde{\tilde{c}}_{v,j+1} }{ \lambda + \bigl( \tilde{\tilde{Q}}_0^{(j+1)} \bigr)_{v,v-1} + \bigl( \tilde{\tilde{Q}}_0^{(j+1)} \bigr)_{v,v}\, \tilde{\tilde{c}}_{v,j+1} }, & 1 < j < K - v.
\end{cases}
\]
Remark 12.
From Lemma 3, we observe that each π̃̃_{v,j} for 1 < j ≤ K − v can be recursively expressed in terms of π̃̃_{v,1}. Furthermore, consider the balance equation between the subset C̃̃_{v−1} and its complement C̃̃ ∖ C̃̃_{v−1}. This equation takes the following form.
\[
\alpha\, \tilde{\tilde{\pi}}_{v-1,1} = \sum_{j=1}^{K-v} \bigl( \tilde{\tilde{Q}}_0^{(j)} \bigr)_{v,v-1}\, \tilde{\tilde{\pi}}_{v,j}.
\]
Since the right-hand side consists only of terms π̃̃_{v,j} for 1 ≤ j ≤ K − v, which in turn can be written in terms of π̃̃_{v,1}, it follows that π̃̃_{v−1,1} can also be expressed as a function of π̃̃_{v,1}.
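For the phase-v chain, the recursion of Lemma 3 only needs the two entries (Q̃̃_0^{(j)})_{v,v} and (Q̃̃_0^{(j)})_{v,v−1}. The sketch below takes these as hypothetical callables `u(j)` and `e(j)` together with the arrival rate λ; they stand in for the actual block-matrix entries.

```python
def lemma3_coefficients(K, v, lam, u, e):
    """Backward recursion for c, then b, per Lemma 3; as in Lemma 1 every
    denominator is a sum of non-negative terms, so each c[j] lies in [0, 1)."""
    M = K - v
    c = {M: 0.0}
    for j in range(M - 1, 1, -1):               # backward: j = M-1, ..., 2
        flow = e(j + 1) + u(j + 1) * c[j + 1]
        c[j] = flow / (lam + flow)
    b = {}
    for j in range(2, M + 1):                   # b for j = 2, ..., M = K - v
        b[j] = u(j - 1) / (lam + e(j) + u(j) * c[j])
    return b                                    # pi_{v,j} = b[j] * pi_{v,j-1}
```

When e is identically zero (no transitions out of phase v), every c[j] vanishes and b[j] reduces to the birth-death ratio u(j−1)/λ, a quick sanity check on the recursion.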
Using the global balance Equations (21) and (22), we obtain similar recursive relations as follows. The proof is again omitted, as it is standard and analogous to that of Lemma 2.
Lemma 4.
For 0 ≤ i ≤ v − 1 and 1 < j ≤ K − i, the stationary probability π̃̃_{i,j} satisfies the following recurrence relation
\[
\tilde{\tilde{\pi}}_{i,j} = \tilde{\tilde{a}}_{i,j} + \tilde{\tilde{b}}_{i,j}\, \tilde{\tilde{\pi}}_{i,j-1},
\]
where
\[
\tilde{\tilde{a}}_{i,j} =
\begin{cases}
\dfrac{ \bigl( \tilde{\tilde{Q}}_0^{(j-1)} \bigr)_{i+1,i}\, \tilde{\tilde{\pi}}_{i+1,j-1} }{ \lambda + \bigl( \tilde{\tilde{Q}}_0^{(j)} \bigr)_{i,i-1} }, & j = K - i, \\[10pt]
\dfrac{ \lambda\, \tilde{\tilde{a}}_{i,j+1} + \bigl( \tilde{\tilde{Q}}_0^{(j-1)} \bigr)_{i+1,i}\, \tilde{\tilde{\pi}}_{i+1,j-1} }{ \lambda + \bigl( \tilde{\tilde{Q}}_0^{(j)} \bigr)_{i,i-1} + \bigl( \tilde{\tilde{Q}}_0^{(j)} \bigr)_{i,i}\, \tilde{\tilde{c}}_{i,j} }, & 1 < j < K - i,
\end{cases}
\]
\[
\tilde{\tilde{b}}_{i,j} = \frac{ \bigl( \tilde{\tilde{Q}}_0^{(j-1)} \bigr)_{i,i} }{ \lambda + \bigl( \tilde{\tilde{Q}}_0^{(j)} \bigr)_{i,i-1} + \bigl( \tilde{\tilde{Q}}_0^{(j)} \bigr)_{i,i}\, \tilde{\tilde{c}}_{i,j} }, \quad 1 < j \le K - i,
\]
\[
\tilde{\tilde{c}}_{i,j} =
\begin{cases}
0, & j = K - i, \\[6pt]
\dfrac{ \bigl( \tilde{\tilde{Q}}_0^{(j+1)} \bigr)_{i,i-1} + \bigl( \tilde{\tilde{Q}}_0^{(j+1)} \bigr)_{i,i}\, \tilde{\tilde{c}}_{i,j+1} }{ \lambda + \bigl( \tilde{\tilde{Q}}_0^{(j+1)} \bigr)_{i,i-1} + \bigl( \tilde{\tilde{Q}}_0^{(j+1)} \bigr)_{i,i}\, \tilde{\tilde{c}}_{i,j+1} }, & 1 < j < K - i,
\end{cases}
\]
with the convention that (Q̃̃_0^{(j+1)})_{0,−1} = 0 for 1 < j ≤ K.
Remark 13.
From Lemma 4, it follows that the sequence π̃̃_{v−1,j} for 1 < j ≤ K − v + 1 can be expressed in terms of π̃̃_{v−1,1} and π̃̃_{v,j} for 1 ≤ j ≤ K − v. In addition, consider the balance equation between C̃̃_{v−2} and its complement C̃̃ ∖ C̃̃_{v−2} obtained as
\[
2\alpha\, \tilde{\tilde{\pi}}_{v-2,1} = \sum_{j=1}^{K-v+1} \bigl( \tilde{\tilde{Q}}_0^{(j)} \bigr)_{v-1,v-2}\, \tilde{\tilde{\pi}}_{v-1,j}.
\]
The right-hand side depends only on π̃̃_{v−1,j} for 1 < j ≤ K − v + 1, and thus π̃̃_{v−2,1} can be written in terms of π̃̃_{v−1,1}. Furthermore, from Lemma 3, π̃̃_{v−1,1} and π̃̃_{v,j} (for 1 < j ≤ K − v) are also expressed in terms of π̃̃_{v,1}. Hence, π̃̃_{v−2,1} and all π̃̃_{v−1,j} (for 1 < j ≤ K − v + 1) can be expressed in terms of π̃̃_{v,1}.
By iterating this argument for i = v − 3, v − 4, …, 0, using the balance equations between C̃̃_i and C̃̃ ∖ C̃̃_i, and applying Lemma 4 repeatedly, we conclude that all π̃̃_{i,j} for 0 ≤ i ≤ v − 1 and 1 ≤ j ≤ K − i can ultimately be expressed in terms of π̃̃_{v,1}.

6. Bounds of Performance Metrics

This section presents bounds for key performance metrics. It is important to note that the partial order relations underlying the stochastic comparison are defined with respect to the level variables of the LDQBD processes, but the definitions of the level variables differ between the upper and lower bounding models. In the upper bounding model, the level variable is defined as the sum of the number of jobs being processed by the legacy servers and the number of jobs waiting in the queue. Accordingly, the resulting performance bounds pertain to the number of waiting jobs. In contrast, in the lower bounding model, the level variable represents the total number of jobs in the system, including both waiting jobs and those in service. Thus, the bounds derived from the lower model are based on the total number of jobs in the system.

6.1. Upper Bound of Performance Metrics

Let L_q denote the expected number of jobs waiting in the queue in the original model, whose stationary distribution is denoted by π. Let L_q^u denote the corresponding expectation in the upper bounding model with stationary distribution π̌. Define a monotone function f : Ŝ → ℝ by
\[
f((i,j)) = [\,j - l\,]^+, \quad \text{for } (i,j) \in \hat{S}.
\]
Since π̂ and π have the same support and satisfy π̂ ⪯_s π̌, we obtain the following inequality:
\[
L_q = \sum_{j=0}^{K} [\,j-l\,]^+ \pi_j \boldsymbol{1} = \sum_{j=0}^{K} [\,j-l\,]^+ \hat{\pi}_j \boldsymbol{1} = \sum_{j=0}^{K+1} [\,j-l\,]^+ \hat{\pi}_j \boldsymbol{1} \le \sum_{j=0}^{K+1} [\,j-l\,]^+ \check{\pi}_j \boldsymbol{1} = L_q^u.
\]
Note that π ^ K + 1 is the zero vector, and thus the sum over j = 0 to K + 1 does not affect the total value.
Let Wq and PB denote the expected waiting time and the loss probability of a job in the original model with stationary distribution π, respectively. By applying Little’s law, we obtain the following relation:
\[
W_q = \frac{L_q}{\lambda (1 - P_B)} \le \frac{L_q^u}{\lambda (1 - P_B^u)},
\]
where P_B^u is any non-negative constant satisfying P_B ≤ P_B^u < 1. It should be emphasized that although P_B can be expressed as
\[
P_B = \sum_{\substack{i+j=K,\\ (i,j) \in \hat{S}}} \pi_{i,j} = \sum_{\substack{i+j=K,\\ (i,j) \in \hat{S}}} \hat{\pi}_{i,j},
\]
we cannot guarantee the inequality
\[
\sum_{\substack{i+j=K,\\ (i,j) \in \hat{S}}} \hat{\pi}_{i,j} \le \sum_{\substack{i+j=K,\\ (i,j) \in \hat{S}}} \check{\pi}_{i,j}.
\]
In our stochastic dominance framework, performance metrics can be compared when they can be represented as expectations of monotone functions. Since the loss probability P_B cannot be expressed as the expectation of a monotone function with respect to the chosen partial order, it cannot be bounded above using the upper bounding distribution π̌, and hence P_B^u cannot be chosen as Σ_{i+j=K, (i,j)∈Ŝ} π̌_{i,j}. As a result, in order to evaluate an upper bound on W_q, it is necessary to use an external estimate P_B^u satisfying P_B ≤ P_B^u < 1.
To obtain P_B^u, which is easy to calculate and still satisfies the relation P_B ≤ P_B^u < 1, we consider the LDQBD process {X̃(t); t ≥ 0} with the level variable taken as the number of jobs in the system and with the stationary distribution π̃ = (π̃_0, π̃_1, …, π̃_K). According to [8], a probability distribution {ϱ_j}_{0≤j≤K} that satisfies ∑_{j=k}^{K} π̃_j 1 ≤ ∑_{j=k}^{K} ϱ_j for any k (0 ≤ k ≤ K) is given by
\[
\varrho_0 = 0, \qquad
\varrho_1 = \Bigl( 1 + \sum_{j=2}^{K} \prod_{k=1}^{j-1} \frac{\lambda}{\mu_{k+1,k}} \Bigr)^{-1}, \qquad
\varrho_j = \varrho_1 \prod_{k=1}^{j-1} \frac{\lambda}{\mu_{k+1,k}}, \quad 2 \le j \le K,
\]
where
\[
\mu_{j+1,j} = \min\Bigl\{ (l \wedge j)\mu + (0 \wedge j)\mu_v + [\,j-l\,]^+ \theta,\; \ldots,\; (l \wedge j)\mu + (v \wedge j)\mu_v + [\,j-v-l\,]^+ \theta \Bigr\}
\]
for 1 ≤ j ≤ K. Since P_B = π̃_K 1 by the PASTA (Poisson Arrivals See Time Averages) property [24], we can choose P_B^u such that P_B ≤ P_B^u < 1 by letting P_B^u = ϱ_K. Therefore, W_q is bounded by W_q^u, which is given by
\[
W_q^u = \frac{L_q^u}{\lambda (1 - \varrho_K)}.
\]
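The tail-dominating birth-death distribution and the resulting waiting-time bound can be sketched as follows. The inner minimum is enumerated over the v + 1 virtual-server configurations, which is our reading of the displayed minimum, and the function names are illustrative.

```python
def tail_bound_distribution(lam, mu, mu_v, theta, l, v, K):
    """Birth-death distribution whose tails dominate those of the job-count
    process; its mass at K gives the external blocking estimate P_B^u."""
    def mu_down(j):
        # slowest aggregate departure rate from a state with j jobs
        return min(min(l, j) * mu + min(i, j) * mu_v
                   + max(j - l - i, 0) * theta for i in range(v + 1))
    prods = [1.0]                               # empty product for j = 1
    for k in range(1, K):                       # k = 1, ..., K-1
        prods.append(prods[-1] * lam / mu_down(k))
    first = 1.0 / sum(prods)
    return [0.0] + [first * p for p in prods]   # entries for levels 0, ..., K

def waiting_time_upper_bound(L_q_u, lam, rho):
    """Upper bound L_q^u / (lambda * (1 - rho[K])) on the waiting time."""
    return L_q_u / (lam * (1.0 - rho[-1]))
```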
Let W denote the expected sojourn time in the original system, i.e., the expected total time a job spends in the system. Then, we have
\[
W = W_q + \mu^{-1} s_l + \mu_v^{-1} s_v,
\]
where s_l is the probability that a job does not abandon the queue and is served by a legacy server, and s_v is the probability that a job does not abandon the queue and is served by a virtual server. Since these two events are mutually exclusive, s_l + s_v ≤ 1, and hence μ^{−1} s_l + μ_v^{−1} s_v ≤ max{μ^{−1}, μ_v^{−1}}. We thus obtain the following upper bound:
\[
W \le W_q + \max\{\mu^{-1}, \mu_v^{-1}\}.
\]
Therefore, by using the previously obtained bound W_q ≤ W_q^u, we obtain the upper bound W^u for W as
\[
W^u = W_q^u + \max\{\mu^{-1}, \mu_v^{-1}\}.
\]

6.2. Lower Bound of Performance Metrics

Let us choose a monotone function f on S̃ defined by f((i, j)) = j for (i, j) ∈ S̃. Since π̃ ⪯_s π̃̃, the following inequality holds:
\[
\sum_{j=0}^{K} j\, \tilde{\pi}_j \boldsymbol{1} \le \sum_{j=0}^{K} j\, \tilde{\tilde{\pi}}_j \boldsymbol{1}.
\]
Let L denote the expected number of jobs in the system in the original model with stationary distribution π̃, and let L^l denote that of the lower bounding model with stationary distribution π̃̃. Since the level variable J̃(t) of the process {X̃(t) = (Ñ(t), J̃(t)); t ≥ 0} is related to the actual number of jobs J̃′(t) by J̃(t) = K − J̃′(t), we have
\[
L = \sum_{j=0}^{K} j\, \tilde{\pi}_{K-j} \boldsymbol{1} = \sum_{j=0}^{K} (K-j)\, \tilde{\pi}_j \boldsymbol{1} \ge \sum_{j=0}^{K} (K-j)\, \tilde{\tilde{\pi}}_j \boldsymbol{1} = \sum_{j=0}^{K} j\, \tilde{\tilde{\pi}}_{K-j} \boldsymbol{1} = L^l.
\]
Next, let P_B and P_B^l denote the job loss probabilities in the original and lower bounding models, respectively. Since P_B = π̃_0 1 ≥ π̃̃_0 1 = P_B^l and the probability masses of π̃̃_0 are all zero, we have P_B^l = 0. Therefore,
\[
W = \frac{L}{\lambda (1 - P_B)} \ge \frac{L^l}{\lambda (1 - P_B^l)} = \frac{L^l}{\lambda} = W^l,
\]
which gives a lower bound for the expected sojourn time.
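Collecting the lower-bound steps: with the level masses of the bounding distribution in hand (level j corresponds to K − j jobs), L^l and W^l follow directly. The input `level_mass` is a hypothetical list of the total probability per level.

```python
def sojourn_time_lower_bound(level_mass, lam, K):
    """W^l = L^l / lambda, where L^l sums (K - j) times the probability
    mass of level j under the lower bounding distribution."""
    L_l = sum((K - j) * level_mass[j] for j in range(K + 1))
    return L_l / lam
```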

7. Numerical Results

In this section, we present numerical results based on the stationary distribution of the LDQBD process. Specifically, we provide two types of validation: (i) a comparison between the numerical results and the upper bounding models given in Proposition 1 and Theorem 1 and (ii) a comparison between the upper and lower bounds established in Theorems 1 and 2.

7.1. Comparison of Upper Bounding Models

Table 2 compares the marginal tail probabilities ∑_{j=k}^{K} π_j 1, ∑_{j=k}^{K+1} π̄_j 1, and ∑_{j=k}^{K+1} π̌_j 1, obtained from the original model, Proposition 1, and Theorem 1, respectively. We consider two values of the parameter λ, namely λ = 1 and λ = 10, while fixing the remaining parameters at l = 10, v = 20, K = 35, μ = 1, μ_v = 2, α = 0.01, and θ = 1. We observe that the marginal tail probabilities obtained from Proposition 1 and Theorem 1 are larger than those of the original model, as expected. However, Theorem 1 provides a tighter upper bound than Proposition 1 for the original model under these parameter settings.
Figure 6 shows a comparison of the expected waiting time obtained from the upper bounding models, as well as from the original model, as the value of μv varies. Figure 6a and Figure 6b correspond to the cases where λ = 1 and λ = 10, respectively. The other parameters are fixed at l = 10, v = 20, K = 35, μ = 1, α = 0.01, and θ = 1.
Similarly, Figure 7 shows a comparison of the expected waiting time from the upper bounding models, along with the original model, as the value of α varies. Again, Figure 7a and Figure 7b correspond to λ = 1 and λ = 10, respectively. The remaining parameters are set to l = 10, v = 20, K = 35, μ = 1, μv = 2, and θ = 1.
These figures show that the upper bounding model proposed in Theorem 1 provides a significantly tighter approximation than the one in Proposition 1, originally developed by Bright and Taylor [8], across a wide range of system parameters.
As shown in Example 1, the transition rate matrix Q ¯ 0 ( k ) given by Proposition 1 is fully populated with nonzero entries related to λ, whereas Q ^ 0 ( k ) is not. In contrast, Q ˇ 0 ( j ) as defined in Theorem 1 preserves zeros in the same positions as Q ^ 0 ( k ) . This difference likely accounts for the significant discrepancy between Proposition 1 and Theorem 1 regarding the upper bound of the expected waiting time at λ = 1. It is also worth noting that the expected waiting time based on Theorem 1 can be computed from the stationary distribution obtained via recurrence relations. This implies that one can estimate performance metrics with high accuracy while avoiding expensive computational costs.

7.2. Upper and Lower Bounding Models

Figure 8 shows a comparison of the expected sojourn time obtained from the upper and lower bounding models, as well as from the original model, for various values of μv. Figure 8a and Figure 8b correspond to the cases where λ = 1 and λ = 10, respectively. The other parameters are fixed at l = 10, v = 20, K = 35, μ = 1, α = 0.01, and θ = 1.
Similarly, Figure 9 presents a comparison of the expected sojourn time obtained from the upper and lower bounding models, along with the original model, for various values of α. Again, Figure 9a and Figure 9b correspond to λ = 1 and λ = 10, respectively. The remaining parameters are set to l = 10, v = 20, K = 35, μ = 1, μv = 2, and θ = 1.
We observe that the accuracy of the upper bound for the expected sojourn time depends on the value of μ_v, which may be due to the term max{μ^{−1}, μ_v^{−1}} in W^u. In contrast, the accuracy of the lower bound deteriorates monotonically as μ_v increases.
Regarding the dependence on the setup rate α of virtual servers, both the upper and lower bounds for the expected sojourn time appear to be almost insensitive to changes in α, indicating that the range between the bounds is robust with respect to α. Overall, the upper bound provides a more accurate estimate of the expected sojourn time than the lower bound.

7.3. CPU Time

We demonstrate performance improvements achieved by the proposed recurrence-based method. As the performance metric, we use the CPU time required to obtain the upper bound of the expected waiting time. Both the matrix analytic method and our recurrence-based method were implemented in MATLAB® R2025a. Experiments were conducted on a laptop with a 2.4 GHz quad-core CPU and 16 GB of main memory. For each method, CPU times were measured over 10 independent runs.
Table 3 compares the CPU times required to obtain the upper bound of the expected waiting time, based on the stationary distribution in Theorem 1, between the direct application of the matrix analytic method and the proposed recurrence-based method. The parameter v was varied from 1000 to 1500 in increments of 100. The other parameters were fixed at λ = 40, l = 10, μ = 1, α = 0.05, θ = 1, and μv = 2. Note that K = l + v = 10 + v, and thus K also varies with v.
We observe that the proposed recurrence-based method outperforms the matrix analytic method in terms of CPU time for v = 1000, 1100, and 1200. For v greater than 1200, the matrix analytic method fails to compute the upper bound of the expected waiting time in our computing environment due to insufficient memory. In contrast, the recurrence-based method successfully computes the upper bound in all cases. It is also noteworthy that the CPU time of the recurrence-based method at v = 1500 is almost comparable to that of the matrix analytic method at v = 1200.

8. Conclusions

In this paper, we analyzed the queueing model of hybrid systems proposed by Sato et al. [1], formulated as an LDQBD process. By exploiting the structural properties of the transition rates specific to this model and refining the bounding technique developed by Bright and Taylor [8], we derived both upper and lower stochastic bounds for the stationary distribution, which can be efficiently computed using recurrence relations.
Thanks to these recurrence relations, the upper and lower bounds can be obtained with a significant reduction in computational cost. Furthermore, we showed that key performance metrics computed from the true stationary distribution are bounded above and below by our proposed stochastic bounds. We also conducted numerical experiments to evaluate the sensitivity of these performance metrics to variations in system parameters.
The derivation of the bounds relies heavily on the specific structural properties of the transition rates in the LDQBD process of the queueing model. Extending the applicability of our approach to more general LDQBD processes remains an important topic for future research. Another key challenge is to establish error bounds on the difference between the bounding and the true stationary distributions, which we leave for future work.

Author Contributions

Conceptualization, K.K.; methodology, K.K.; software, K.K.; validation, K.K. and Y.I.; formal analysis, K.K. and Y.I.; investigation, K.K. and Y.I.; writing—original draft preparation, Y.I.; writing—review and editing, K.K.; visualization, Y.I.; supervision, K.K.; project administration, K.K.; funding acquisition, K.K. All authors have read and agreed to the published version of this manuscript.

Funding

This research was funded by JSPS KAKENHI 23K10994.

Data Availability Statement

The original contributions presented in this study are included in this article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Block Matrices of Q and Q ˜

This appendix presents the block matrices of Q and Q ˜ explicitly.

Appendix A.1. Block Matrices of Q

The block matrices Q_0^{(j)} for 0 ≤ j < l, Q_1^{(j)} for 0 ≤ j ≤ l, and Q_2^{(j)} for 0 < j ≤ l are (v + 1) × (v + 1) matrices given by
\[
Q_0^{(j)} = \lambda I, \qquad
Q_1^{(j)} = \begin{pmatrix}
-q_{0,j} & & & O \\
\mu_v & -q_{1,j} & & \\
& \ddots & \ddots & \\
O & & v\mu_v & -q_{v,j}
\end{pmatrix}, \qquad
Q_2^{(j)} = j\mu I,
\]
where I is the identity matrix of appropriate size. The exit rate q_{i,j} of state (i, j) ∈ S for 0 ≤ i ≤ v and 0 ≤ j ≤ l is given by
\[
q_{i,j} = \lambda + j\mu + i\mu_v.
\]
The matrices Q_0^{(j)} for l ≤ j < l + v, Q_1^{(j)} for l < j ≤ l + v, and Q_2^{(j)} for l < j ≤ l + v are (v + 1) × (v + 1) matrices given by
\[
Q_0^{(j)} = \lambda I, \qquad
Q_1^{(j)} = -\operatorname{diag}\{ q_{0,j}, q_{1,j}, \ldots, q_{v,j} \},
\]
\[
Q_2^{(j)} = \bigl\{ l\mu + (j-l)\theta \bigr\} I + \mu_v \operatorname{diag}\{0, 1, \ldots, v\} +
\begin{pmatrix}
0 & p_{0,j}\alpha & & & O \\
& 0 & p_{1,j}\alpha & & \\
& & \ddots & \ddots & \\
& & & 0 & p_{v-1,j}\alpha \\
O & & & & 0
\end{pmatrix},
\]
where p_{i,j} = (j − l) ∧ (v − i). The exit rate q_{i,j} of state (i, j) ∈ S, for 0 ≤ i ≤ v and l < j ≤ l + v, is given by
\[
q_{i,j} = \lambda + l\mu + (j-l)\theta + i\mu_v + p_{i,j}\alpha.
\]
The matrices Q_0^{(j)} for l + v ≤ j < K, Q_1^{(j)} for l + v < j ≤ K, and Q_2^{(j)} for l + v < j ≤ K are (K − j + 1) × (K − j), (K − j + 1) × (K − j + 1), and (K − j + 1) × (K − j + 2) matrices, respectively, and are explicitly given by
\[
Q_0^{(j)} = \begin{pmatrix}
\lambda & & O \\
& \ddots & \\
O & & \lambda \\
0 & \cdots & 0
\end{pmatrix}, \qquad
Q_1^{(j)} = -\operatorname{diag}\{ q_{0,j}, q_{1,j}, \ldots, q_{K-j,j} \},
\]
\[
Q_2^{(j)} = \bigl\{ l\mu + (j-l)\theta \bigr\} \begin{pmatrix} I & \boldsymbol{0} \end{pmatrix}
+ \mu_v \begin{pmatrix} \operatorname{diag}\{0, 1, \ldots, K-j\} & \boldsymbol{0} \end{pmatrix}
+ \begin{pmatrix}
0 & p_{0,j}\alpha & & O \\
& 0 & p_{1,j}\alpha & \\
& & \ddots & \ddots \\
O & & 0 & p_{K-j,j}\alpha
\end{pmatrix},
\]
where 0 denotes the column vector of all zeros, i.e., the transpose of the row vector 0 defined in the main text. The exit rate q_{i,j} of state (i, j) ∈ S, for 0 ≤ i ≤ K − j and l + v < j ≤ K, is given by
\[
q_{i,j} = \begin{cases}
\lambda + l\mu + (j-l)\theta + i\mu_v + p_{i,j}\alpha, & j \ne K, \\
l\mu + (j-l)\theta + i\mu_v + p_{i,j}\alpha, & j = K.
\end{cases}
\]

Appendix A.2. Block Matrices of $\tilde{Q}$

The block matrices $\tilde{Q}_0^{(j)}$ for $0 \le j < K-l-v$, $\tilde{Q}_1^{(j)}$ for $0 \le j < K-l-v$, and $\tilde{Q}_2^{(j)}$ for $0 < j \le K-l-v$ are $(v+1) \times (v+1)$ matrices given by
\[
\tilde{Q}_0^{(j)} = l\mu I + \mu_v\, \mathrm{diag}\{0, 1, \ldots, v\} + \theta\, \mathrm{diag}\{ K-l-j, K-l-1-j, \ldots, K-l-v-j \}, \qquad
\tilde{Q}_1^{(j)} =
\begin{pmatrix}
-\tilde{q}_{0,j} & v\alpha & 0 & \cdots & 0 \\
0 & -\tilde{q}_{1,j} & (v-1)\alpha & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & 0 \\
\vdots & & \ddots & \ddots & \alpha \\
0 & \cdots & \cdots & 0 & -\tilde{q}_{v,j}
\end{pmatrix}, \qquad
\tilde{Q}_2^{(j)} = \lambda I,
\]
where $\tilde{q}_{i,j}$ for state $(i,j) \in \tilde{S}$ with $0 \le i \le v$ and $0 \le j < K-l-v$ is given by
\[
\tilde{q}_{i,j} =
\begin{cases}
l\mu + (K-l-i)\theta + i\mu_v + (v-i)\alpha, & j = 0, \\
\lambda + l\mu + (K-l-i-j)\theta + i\mu_v + (v-i)\alpha, & j \neq 0.
\end{cases}
\]
The matrices $\tilde{Q}_0^{(j)}$ for $K-l-v \le j < K-l$, $\tilde{Q}_1^{(j)}$ for $K-l-v \le j < K-l$, and $\tilde{Q}_2^{(j)}$ for $K-l-v < j \le K-l$ are $(v+1) \times (v+1)$ matrices given by
\[
\tilde{Q}_0^{(j)} = \mu\, \mathrm{diag}\{ l \wedge (K-j), l \wedge (K-j-1), \ldots, l \wedge (K-j-v) \}
+ \theta\, \mathrm{diag}\{ [K-j-l]^+, [K-j-l-1]^+, \ldots, [K-j-l-v]^+ \}
+ \mu_v
\begin{pmatrix}
\tilde{r}_{0,j} & 0 & \cdots & 0 \\
1 - \tilde{r}_{1,j} & \tilde{r}_{1,j} & \ddots & \vdots \\
0 & \ddots & \ddots & 0 \\
0 & \cdots & v - \tilde{r}_{v,j} & \tilde{r}_{v,j}
\end{pmatrix}, \qquad
\tilde{Q}_1^{(j)} =
\begin{pmatrix}
-\tilde{q}_{0,j} & \alpha \tilde{p}_{0,j} & 0 & \cdots & 0 \\
0 & -\tilde{q}_{1,j} & \alpha \tilde{p}_{1,j} & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & 0 \\
\vdots & & \ddots & \ddots & \alpha \tilde{p}_{v-1,j} \\
0 & \cdots & \cdots & 0 & -\tilde{q}_{v,j}
\end{pmatrix}, \qquad
\tilde{Q}_2^{(j)} = \lambda I,
\]
where $\tilde{r}_{i,j} = i$ if $i < K-j-l$ and $\tilde{r}_{i,j} = 0$ otherwise, and $\tilde{p}_{i,j} = v \wedge [K-j-l-i]^+$. The exit rate $\tilde{q}_{i,j}$ of state $(i,j) \in \tilde{S}$ for $0 \le i \le v$ and $K-l-v \le j < K-l$ is given by
\[
\tilde{q}_{i,j} = \lambda + \bigl( l \wedge (K-j-i) \bigr)\mu + [K-j-l-i]^+ \theta + i\mu_v + \tilde{p}_{i,j}\alpha.
\]
The matrices $\tilde{Q}_0^{(j)}$ for $K-l \le j < K$, $\tilde{Q}_1^{(j)}$ for $K-l \le j \le K$, and $\tilde{Q}_2^{(j)}$ for $K-l < j \le K$ are $(K-j+1) \times (K-j)$, $(K-j+1) \times (K-j+1)$, and $(K-j+1) \times (K-j+2)$ matrices, respectively, and are explicitly given by
\[
\tilde{Q}_0^{(j)} =
\begin{pmatrix}
\bigl( l \wedge (K-j) \bigr)\mu & 0 & \cdots & 0 \\
\mu_v & \bigl( l \wedge (K-j-1) \bigr)\mu & \ddots & \vdots \\
0 & 2\mu_v & \ddots & 0 \\
\vdots & & \ddots & ( l \wedge 1 )\mu \\
0 & \cdots & 0 & (K-j)\mu_v
\end{pmatrix}, \qquad
\tilde{Q}_1^{(j)} = -\mathrm{diag}\{ \tilde{q}_{0,j}, \tilde{q}_{1,j}, \ldots, \tilde{q}_{K-j,j} \}, \qquad
\tilde{Q}_2^{(j)} = \lambda \begin{pmatrix} I & \mathbf{0} \end{pmatrix},
\]
where the exit rate $\tilde{q}_{i,j}$ of state $(i,j) \in \tilde{S}$ for $0 \le i \le K-j$ and $K-l \le j \le K$ is given by
\[
\tilde{q}_{i,j} =
\begin{cases}
\lambda + \bigl( l \wedge (K-j-i) \bigr)\mu + i\mu_v, & j \neq K, \\
\lambda, & j = K.
\end{cases}
\]
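The blocks of $\tilde{Q}$ admit the same consistency check. The sketch below (our own names, not the authors' code) assembles the blocks for the heavily loaded region $0 < j < K-l-v$, where all $v-i$ idle virtual servers are in setup, and verifies that the generator rows sum to zero.

```python
# Sketch only: blocks of Q~ for interior levels 0 < j < K-l-v.

def blocks_tilde_high_load(j, K, l, v, lam, mu, mu_v, theta, alpha):
    n = v + 1
    q = [lam + l * mu + (K - l - i - j) * theta + i * mu_v
         + (v - i) * alpha for i in range(n)]    # exit rates, j != 0
    Q0 = [[0.0] * n for _ in range(n)]
    Q1 = [[0.0] * n for _ in range(n)]
    for i in range(n):
        # services and deadline expirations move the level up
        Q0[i][i] = l * mu + i * mu_v + (K - l - j - i) * theta
        Q1[i][i] = -q[i]
        if i < v:
            Q1[i][i + 1] = (v - i) * alpha       # one setup completes
    Q2 = [[lam if i == k else 0.0 for k in range(n)] for i in range(n)]
    return Q0, Q1, Q2

# Row sums of Q0 + Q1 + Q2 vanish at every interior level.
K, l, v = 20, 2, 3
for j in range(1, K - l - v):
    Q0, Q1, Q2 = blocks_tilde_high_load(j, K, l, v, 2.0, 1.0, 0.5, 0.1, 0.8)
    for i in range(v + 1):
        assert abs(sum(Q0[i]) + sum(Q1[i]) + sum(Q2[i])) < 1e-12
print("ok")
```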

Appendix B. Proofs of Theorems 1 and 2

This appendix provides the proofs of Theorems 1 and 2.

Appendix B.1. Preliminaries

Let $S$ be a set equipped with a partial order relation $\preceq$. A subset $\Gamma \subset S$ is called an increasing set if it satisfies the following condition:
\[
x \in \Gamma,\ x \preceq y \ \Longrightarrow\ y \in \Gamma.
\]
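The defining condition can be transcribed directly into a predicate, which is convenient for checking small examples by brute force. The encoding below is our own.

```python
# A direct transcription of the definition: Gamma is increasing iff it
# contains every successor of each of its elements.

def is_increasing(states, leq, gamma):
    return all(y in gamma for x in gamma for y in states if leq(x, y))

# With the usual order on {0,...,4}, exactly the "upper tails" qualify.
states = range(5)
leq = lambda x, y: x <= y
assert is_increasing(states, leq, {3, 4})
assert not is_increasing(states, leq, {1, 2})
print("ok")
```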
We prove Theorems 1 and 2 using the following lemma, established by Massey [25] for uniform Markov processes and later extended by Brandt and Last [26] to accommodate non-uniform Markov processes.
Lemma A1.
Let $q_1$ and $q_2$ be Markov processes on a common state space $S$ equipped with a partial order $\preceq$. Suppose that for all $x, y \in S$ such that $x \preceq y$, and for all increasing sets $\Gamma \subseteq S$, the following two conditions hold.
(i)
If $x, y \in \Gamma$, then
\[
\sum_{z \notin \Gamma} q_1(x, z) \ \ge\ \sum_{z \notin \Gamma} q_2(y, z).
\]
(ii)
If $x, y \notin \Gamma$, then
\[
\sum_{z \in \Gamma} q_1(x, z) \ \le\ \sum_{z \in \Gamma} q_2(y, z).
\]
Then, $q_2$ stochastically dominates $q_1$ with respect to the partial order $\preceq$.
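The two conditions can be illustrated numerically on a toy example. The script below is a hypothetical birth-death example of ours, not the paper's model: $q_2$ has faster births and slower deaths than $q_1$ on the totally ordered set $\{0, 1, 2\}$, so conditions (i) and (ii) hold for every increasing set.

```python
# Toy illustration of Lemma A1 on S = {0, 1, 2} with the usual order.

def rate(b, d, x, z):
    if z == x + 1:
        return b
    if z == x - 1:
        return d
    return 0.0

S = [0, 1, 2]
q1 = lambda x, z: rate(1.0, 2.0, x, z)  # slow up, fast down
q2 = lambda x, z: rate(2.0, 1.0, x, z)  # fast up, slow down
up_sets = [{2}, {1, 2}, {0, 1, 2}]      # all nonempty increasing sets

ok = True
for G in up_sets:
    for x in S:
        for y in S:
            if not x <= y:
                continue
            if x in G and y in G:          # condition (i): leaving Gamma
                ok = ok and (sum(q1(x, z) for z in S if z not in G)
                             >= sum(q2(y, z) for z in S if z not in G))
            if x not in G and y not in G:  # condition (ii): entering Gamma
                ok = ok and (sum(q1(x, z) for z in S if z in G)
                             <= sum(q2(y, z) for z in S if z in G))
assert ok
print("conditions (i) and (ii) hold")
```

By Lemma A1, the faster chain then stochastically dominates the slower one, matching the intuition that it drifts upward more strongly.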
Let $\mathcal{P}(M)$ denote the power set of a set $M$. Following the idea of [8], we define a family of increasing sets for the LDQBD processes on $\hat{S}$ in the following form:
\[
\Phi_{I,j} = I \cup \bigcup_{k=j}^{\infty} \hat{S}_k, \qquad I \in \mathcal{P}(\hat{S}_{j-1}),\ I \neq \emptyset,\ j \ge 1.
\]
Note that the family
\[
\mathcal{I}_j = \bigl\{ \Phi_{I,j} \ \big|\ I \in \mathcal{P}(\hat{S}_{j-1}),\ I \neq \emptyset \bigr\}, \qquad j \ge 1,
\]
satisfies the following property:
\[
\mathcal{I}_j \cap \mathcal{I}_k = \emptyset \quad \text{for } j \neq k.
\]
Similarly, for the LDQBD processes on $\tilde{S}$, we define increasing sets as
\[
\Phi_{I,j} = I \cup \bigcup_{k=j}^{K} \tilde{S}_k, \qquad I \in \mathcal{P}(\tilde{S}_{j-1}),\ I \neq \emptyset,\ 1 \le j \le K,
\]
and define
\[
\mathcal{I}_j = \bigl\{ \Phi_{I,j} \ \big|\ I \in \mathcal{P}(\tilde{S}_{j-1}),\ I \neq \emptyset \bigr\}, \qquad 1 \le j \le K,
\]
which also satisfies the following property:
\[
\mathcal{I}_j \cap \mathcal{I}_k = \emptyset \quad \text{for } j \neq k.
\]
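The construction of the sets $\Phi_{I,j}$ and their disjointness across $j$ can be sketched for a small finite level structure; the two-phase levels below are an arbitrary choice of ours for illustration.

```python
# Sketch of the families Phi_{I,j} for a small finite level structure.

from itertools import chain, combinations

def power_set(m):
    s = list(m)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def families(levels):
    """fam[j] lists the sets Phi_{I,j} = I united with levels j, j+1, ..."""
    fam = {}
    for j in range(1, len(levels)):
        tail = [s for k in range(j, len(levels)) for s in levels[k]]
        fam[j] = [frozenset(I) | frozenset(tail)
                  for I in power_set(levels[j - 1]) if I]
    return fam

# Three levels with two phases each: the families are pairwise disjoint,
# mirroring the property I_j and I_k share no element for j != k.
levels = [[(i, j) for i in range(2)] for j in range(3)]
fam = families(levels)
assert not set(fam[1]) & set(fam[2])
print("disjoint")
```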

Appendix B.2. Proof of Theorem 1

For the proof of Theorem 1, we apply Lemma A1 to LDQBD processes defined on the state space $\hat{S}$, equipped with the partial order $\preceq$ as defined in Definition 2. We associate $q_1$ in Lemma A1 with the generator of $\{\hat{X}(t); t \ge 0\}$ and $q_2$ with that of $\{\check{X}(t); t \ge 0\}$. We then verify that the two sufficient conditions (i) and (ii) in Lemma A1 are satisfied for every pair of states $x, y \in \hat{S}$ such that $x \preceq y$ and for every increasing set $\Gamma \subseteq \hat{S}$ of the form $\Phi_{I,j}$ defined in the previous subsection.
Before presenting the detailed proof, we outline its structure. The proof is divided into two parts, corresponding to the two sufficient conditions (i) and (ii) in Lemma A1. For each condition, we examine all non-trivial cases for every pair of states $x, y \in \hat{S}$ satisfying $x \preceq y$ and for every increasing set $\Gamma \subseteq \hat{S}$ of the form $\Phi_{I,j}$.
Proof. 
  • Verification of condition ( i )
We verify condition ( i ) in Lemma A1.
  • In the case of $x \prec y \in \Gamma$
    For $\Gamma \in \mathcal{I}_j$, $j > 1$, we may choose $x = (i, j-1)$ and $y = (k, j)$. Then, it holds that
    \[
    \sum_{z \notin \Gamma} q_1(x, z) = \sum_{n : (n, j-1) \notin \Gamma} \bigl( \hat{Q}_1^{(j-1)} \bigr)_{i,n} + \sum_{n : (n, j-2) \notin \Gamma} \bigl( \hat{Q}_2^{(j-1)} \bigr)_{i,n}, \qquad
    \sum_{z \notin \Gamma} q_2(y, z) = \sum_{n : (n, j-1) \notin \Gamma} \bigl( \check{Q}_2^{(j)} \bigr)_{k,n}.
    \]
    From (5), we have
    \[
    \bigl( \check{Q}_2^{(j)} \bigr)_{k,n} \le \hat{w}_{k,n}^{(j)} \bigl( \hat{Q}_2^{(j-1)} \mathbf{1} \bigr)_{\min}.
    \]
    Therefore,
    \[
    \sum_{n : (n, j-1) \notin \Gamma} \bigl( \check{Q}_2^{(j)} \bigr)_{k,n}
    \le \bigl( \hat{Q}_2^{(j-1)} \mathbf{1} \bigr)_{\min} \sum_{n : (n, j-1) \notin \Gamma} \hat{w}_{k,n}^{(j)}
    \le \bigl( \hat{Q}_2^{(j-1)} \mathbf{1} \bigr)_{\min}
    \le \sum_{n : (n, j-2) \notin \Gamma} \bigl( \hat{Q}_2^{(j-1)} \bigr)_{i,n}.
    \]
    Here, we used the fact that
    \[
    \sum_{n : (n, j-1) \notin \Gamma} \hat{w}_{k,n}^{(j)} \le \sum_{n \in \hat{S}_{j-1}} \hat{w}_{k,n}^{(j)} = 1.
    \]
    For $\Gamma \in \mathcal{I}_1$, we may choose $x = (i, 0)$ and $y = (k, 1)$. Then, it holds that
    \[
    \sum_{z \notin \Gamma} q_1(x, z) = \sum_{n : (n, 0) \notin \Gamma} \bigl( \hat{Q}_1^{(0)} \bigr)_{i,n} \ge 0, \qquad
    \sum_{z \notin \Gamma} q_2(y, z) = \sum_{n : (n, 0) \notin \Gamma} \bigl( \check{Q}_2^{(1)} \bigr)_{k,n} = 0,
    \]
    where the last equality follows from (4). Thus, the inequality holds.
    Therefore, under the condition $x \prec y \in \Gamma$, condition (i) is satisfied.
  • In the case of $x = y \in \Gamma$
    For $\Gamma \in \mathcal{I}_j$, $j > 1$, we may choose $x = y = (i, j)$. Then, it holds that
    \[
    \sum_{z \notin \Gamma} q_1(x, z) = \sum_{n : (n, j-1) \notin \Gamma} \bigl( \hat{Q}_2^{(j)} \bigr)_{i,n}, \qquad
    \sum_{z \notin \Gamma} q_2(y, z) = \sum_{n : (n, j-1) \notin \Gamma} \bigl( \check{Q}_2^{(j)} \bigr)_{i,n}.
    \]
    From (5), we have $\bigl( \check{Q}_2^{(j)} \bigr)_{i,n} \le \bigl( \hat{Q}_2^{(j)} \bigr)_{i,n}$.
    For $\Gamma \in \mathcal{I}_j$, $j > 1$, we can also take $x = y = (i, j-1)$. Then, it holds that
    \[
    \sum_{z \notin \Gamma} q_1(x, z) = \sum_{n : (n, j-1) \notin \Gamma} \bigl( \hat{Q}_1^{(j-1)} \bigr)_{i,n} + \sum_{n : (n, j-2) \notin \Gamma} \bigl( \hat{Q}_2^{(j-1)} \bigr)_{i,n}, \qquad
    \sum_{z \notin \Gamma} q_2(y, z) = \sum_{n : (n, j-1) \notin \Gamma} \bigl( \check{Q}_1^{(j-1)} \bigr)_{i,n} + \sum_{n : (n, j-2) \notin \Gamma} \bigl( \check{Q}_2^{(j-1)} \bigr)_{i,n}.
    \]
    If $j = 2$, then $\bigl( \check{Q}_1^{(1)} \bigr)_{i,n} = \bigl( \hat{Q}_1^{(1)} \bigr)_{i,n}$ by (6), and $\bigl( \check{Q}_2^{(1)} \bigr)_{i,n} = 0$ by (4).
    If $j > 2$, then $\bigl( \check{Q}_1^{(j-1)} \bigr)_{i,n} = 0$ by (7), and $\bigl( \check{Q}_2^{(j-1)} \bigr)_{i,n} \le \bigl( \hat{Q}_2^{(j-1)} \bigr)_{i,n}$ by (5).
    For $\Gamma \in \mathcal{I}_1$, we may choose $x = y = (i, 1)$. Then, it holds that
    \[
    \sum_{z \notin \Gamma} q_1(x, z) = \sum_{n : (n, 0) \notin \Gamma} \bigl( \hat{Q}_2^{(1)} \bigr)_{i,n} \ge 0, \qquad
    \sum_{z \notin \Gamma} q_2(y, z) = \sum_{n : (n, 0) \notin \Gamma} \bigl( \check{Q}_2^{(1)} \bigr)_{i,n} = 0,
    \]
    where the last equality follows from (4).
    For $\Gamma \in \mathcal{I}_1$, we can also choose $x = y = (i, 0)$. Then, it holds that
    \[
    \sum_{z \notin \Gamma} q_1(x, z) = \sum_{n : (n, 0) \notin \Gamma} \bigl( \hat{Q}_1^{(0)} \bigr)_{i,n}, \qquad
    \sum_{z \notin \Gamma} q_2(y, z) = \sum_{n : (n, 0) \notin \Gamma} \bigl( \check{Q}_1^{(0)} \bigr)_{i,n}.
    \]
    From (6), we have $\bigl( \check{Q}_1^{(0)} \bigr)_{i,n} = \bigl( \hat{Q}_1^{(0)} \bigr)_{i,n}$.
    Therefore, under the condition $x = y \in \Gamma$, condition (i) is satisfied.
  • Verification of condition ( ii )
We verify condition ( ii ) in Lemma A1.
  • In the case of $x \prec y \notin \Gamma$
    For $\Gamma \in \mathcal{I}_j$, $j > 1$, we may choose $x = (i, j-2)$ and $y = (k, j-1)$. Then, it holds that
    \[
    \sum_{z \in \Gamma} q_1(x, z) = \sum_{n : (n, j-1) \in \Gamma} \bigl( \hat{Q}_0^{(j-2)} \bigr)_{i,n}, \qquad
    \sum_{z \in \Gamma} q_2(y, z) = \sum_{n : (n, j) \in \Gamma} \bigl( \check{Q}_0^{(j-1)} \bigr)_{k,n} + \sum_{n : (n, j-1) \in \Gamma} \bigl( \check{Q}_1^{(j-1)} \bigr)_{k,n}.
    \]
    Suppose there exists $n \in \hat{S}_j$ such that $\hat{z}_{k,n}^{(j-1)} \neq 0$. Since $\Gamma \in \mathcal{I}_j$, we have
    \[
    \sum_{n : (n, j) \in \Gamma} \hat{z}_{k,n}^{(j-1)} = \sum_{n \in \hat{S}_j} \hat{z}_{k,n}^{(j-1)} = 1.
    \]
    From (2) and (3),
    \[
    \hat{z}_{k,n}^{(j-1)} \bigl( \hat{Q}_0^{(j-2)} \mathbf{1} \bigr)_{\max} \le \bigl( \check{Q}_0^{(j-1)} \bigr)_{k,n},
    \]
    which yields
    \[
    \bigl( \hat{Q}_0^{(j-2)} \mathbf{1} \bigr)_{\max} \le \sum_{n : (n, j) \in \Gamma} \bigl( \check{Q}_0^{(j-1)} \bigr)_{k,n}.
    \]
    If $\hat{z}_{k,n}^{(j-1)} = 0$ for all $n \in \hat{S}_j$, then from (2) and (3),
    \[
    \delta_{k,n} \bigl( \hat{Q}_0^{(j-2)} \mathbf{1} \bigr)_{\max} \le \bigl( \check{Q}_0^{(j-1)} \bigr)_{k,n},
    \]
    and hence we have
    \[
    \bigl( \hat{Q}_0^{(j-2)} \mathbf{1} \bigr)_{\max} = \sum_{n \in \hat{S}_j} \delta_{k,n} \bigl( \hat{Q}_0^{(j-2)} \mathbf{1} \bigr)_{\max} = \sum_{n : (n, j) \in \Gamma} \delta_{k,n} \bigl( \hat{Q}_0^{(j-2)} \mathbf{1} \bigr)_{\max} \le \sum_{n : (n, j) \in \Gamma} \bigl( \check{Q}_0^{(j-1)} \bigr)_{k,n}.
    \]
    Here, the second equality is justified by $\hat{S}_{j-1} = \hat{S}_j$.
    Therefore, under the condition $x \prec y \notin \Gamma$, condition (ii) is satisfied.
  • In the case of $x = y \notin \Gamma$
    For $\Gamma \in \mathcal{I}_j$, $j > 1$, we may choose $x = y = (i, j-1)$. Then, it holds that
    \[
    \sum_{z \in \Gamma} q_1(x, z) = \sum_{n : (n, j) \in \Gamma} \bigl( \hat{Q}_0^{(j-1)} \bigr)_{i,n} + \sum_{n : (n, j-1) \in \Gamma} \bigl( \hat{Q}_1^{(j-1)} \bigr)_{i,n}, \qquad
    \sum_{z \in \Gamma} q_2(y, z) = \sum_{n : (n, j) \in \Gamma} \bigl( \check{Q}_0^{(j-1)} \bigr)_{i,n} + \sum_{n : (n, j-1) \in \Gamma} \bigl( \check{Q}_1^{(j-1)} \bigr)_{i,n}.
    \]
    If $j = 2$, then from (2) and (6), we have
    \[
    \bigl( \hat{Q}_0^{(1)} \bigr)_{i,n} \le \bigl( \check{Q}_0^{(1)} \bigr)_{i,n}, \qquad \bigl( \hat{Q}_1^{(1)} \bigr)_{i,n} = \bigl( \check{Q}_1^{(1)} \bigr)_{i,n}.
    \]
    If $j > 2$, then from (3) and (7), we have
    \[
    \sum_{n : (n, j) \in \Gamma} \bigl( \hat{Q}_0^{(j-1)} \bigr)_{i,n} + \sum_{n : (n, j-1) \in \Gamma} \bigl( \hat{Q}_1^{(j-1)} \bigr)_{i,n}
    \le \sum_{n : (n, j) \in \Gamma} \Bigl\{ \bigl( \hat{Q}_0^{(j-1)} \bigr)_{i,n} + \delta_{i,n} \sum_{m \neq i} \bigl( \hat{Q}_1^{(j-1)} \bigr)_{i,m} \Bigr\}
    \le \sum_{n : (n, j) \in \Gamma} \bigl( \check{Q}_0^{(j-1)} \bigr)_{i,n}.
    \]
    The first inequality is justified by $\hat{S}_{j-1} = \hat{S}_j$.
    For $\Gamma \in \mathcal{I}_j$, $j > 1$, we can also choose $x = y = (i, j-2)$. Then, it holds that
    \[
    \sum_{z \in \Gamma} q_1(x, z) = \sum_{n : (n, j-1) \in \Gamma} \bigl( \hat{Q}_0^{(j-2)} \bigr)_{i,n}, \qquad
    \sum_{z \in \Gamma} q_2(y, z) = \sum_{n : (n, j-1) \in \Gamma} \bigl( \check{Q}_0^{(j-2)} \bigr)_{i,n}.
    \]
    If $j = 2$, then $\bigl( \hat{Q}_0^{(0)} \bigr)_{i,n} = \bigl( \check{Q}_0^{(0)} \bigr)_{i,n}$ by (1).
    If $j = 3$, then $\bigl( \hat{Q}_0^{(1)} \bigr)_{i,n} \le \bigl( \check{Q}_0^{(1)} \bigr)_{i,n}$ by (2).
    If $j > 3$, then from (3), we have
    \[
    \bigl( \hat{Q}_0^{(j-2)} \bigr)_{i,n} \le \bigl( \hat{Q}_0^{(j-2)} \bigr)_{i,n} + \delta_{i,n} \sum_{m \neq i} \bigl( \hat{Q}_1^{(j-2)} \bigr)_{i,m} \le \bigl( \check{Q}_0^{(j-2)} \bigr)_{i,n}.
    \]
    For $\Gamma \in \mathcal{I}_1$, we may choose $x = y = (i, 0)$. Then, it holds that
    \[
    \sum_{z \in \Gamma} q_1(x, z) = \sum_{n : (n, 1) \in \Gamma} \bigl( \hat{Q}_0^{(0)} \bigr)_{i,n} + \sum_{n : (n, 0) \in \Gamma} \bigl( \hat{Q}_1^{(0)} \bigr)_{i,n}, \qquad
    \sum_{z \in \Gamma} q_2(y, z) = \sum_{n : (n, 1) \in \Gamma} \bigl( \check{Q}_0^{(0)} \bigr)_{i,n} + \sum_{n : (n, 0) \in \Gamma} \bigl( \check{Q}_1^{(0)} \bigr)_{i,n}.
    \]
    From (1) and (6), we have
    \[
    \bigl( \hat{Q}_0^{(0)} \bigr)_{i,n} = \bigl( \check{Q}_0^{(0)} \bigr)_{i,n}, \qquad \bigl( \hat{Q}_1^{(0)} \bigr)_{i,n} = \bigl( \check{Q}_1^{(0)} \bigr)_{i,n}.
    \]
    Therefore, under the condition $x = y \notin \Gamma$, condition (ii) is satisfied.
Note that we have only considered non-trivial cases. We have thus verified that conditions (i) and (ii) in Lemma A1 hold for all relevant state pairs and increasing sets. Hence, the proof of Theorem 1 is complete. □

Appendix B.3. Proof of Theorem 2

For the proof of Theorem 2, we apply Lemma A1 to LDQBD processes defined on the state space $\tilde{S}$, with the partial order $\preceq$ given in Definition 6. We verify conditions (i) and (ii) in Lemma A1 by associating $q_1$ with the generator of $\{\tilde{X}(t); t \ge 0\}$ and $q_2$ with that of $\{\tilde{\tilde{X}}(t); t \ge 0\}$.
Proof. 
  • Verification of condition ( i )
We first verify condition ( i ) .
  • In the case of $x \prec y \in \Gamma$
    For $\Gamma \in \mathcal{I}_j$, $1 < j \le K$, we may take $x = (i, j-1)$ and $y = (k, j)$. Then, it holds that
    \[
    \sum_{z \notin \Gamma} q_1(x, z) = \sum_{n : (n, j-1) \notin \Gamma} \bigl( \tilde{Q}_1^{(j-1)} \bigr)_{i,n} + \sum_{n : (n, j-2) \notin \Gamma} \bigl( \tilde{Q}_2^{(j-1)} \bigr)_{i,n}, \qquad
    \sum_{z \notin \Gamma} q_2(y, z) = \sum_{n : (n, j-1) \notin \Gamma} \bigl( \tilde{\tilde{Q}}_2^{(j)} \bigr)_{k,n}.
    \]
    From (16), we have $\bigl( \tilde{\tilde{Q}}_2^{(j)} \bigr)_{k,n} \le \tilde{w}_{k,n}^{(j)} \bigl( \tilde{Q}_2^{(j-1)} \mathbf{1} \bigr)_{\min}$, and hence,
    \[
    \sum_{n : (n, j-1) \notin \Gamma} \bigl( \tilde{\tilde{Q}}_2^{(j)} \bigr)_{k,n}
    \le \bigl( \tilde{Q}_2^{(j-1)} \mathbf{1} \bigr)_{\min} \sum_{n : (n, j-1) \notin \Gamma} \tilde{w}_{k,n}^{(j)}
    \le \bigl( \tilde{Q}_2^{(j-1)} \mathbf{1} \bigr)_{\min}
    \le \sum_{n : (n, j-2) \notin \Gamma} \bigl( \tilde{Q}_2^{(j-1)} \bigr)_{i,n}.
    \]
    Here, we used that $\sum_{n \in \tilde{S}_{j-1}} \tilde{w}_{k,n}^{(j)} = 1$.
    For $\Gamma \in \mathcal{I}_1$, we may take $x = (i, 0)$ and $y = (k, 1)$. Then, it holds that
    \[
    \sum_{z \notin \Gamma} q_1(x, z) = \sum_{n : (n, 0) \notin \Gamma} \bigl( \tilde{Q}_1^{(0)} \bigr)_{i,n} \ge 0, \qquad
    \sum_{z \notin \Gamma} q_2(y, z) = \sum_{n : (n, 0) \notin \Gamma} \bigl( \tilde{\tilde{Q}}_2^{(1)} \bigr)_{k,n} = 0,
    \]
    where the last equality follows from (15).
    Thus, condition (i) is satisfied when $x \prec y \in \Gamma$.
  • In the case of $x = y \in \Gamma$
    For $\Gamma \in \mathcal{I}_j$, $1 < j \le K$, we may take $x = y = (i, j)$. Then, it holds that
    \[
    \sum_{z \notin \Gamma} q_1(x, z) = \sum_{n : (n, j-1) \notin \Gamma} \bigl( \tilde{Q}_2^{(j)} \bigr)_{i,n}, \qquad
    \sum_{z \notin \Gamma} q_2(y, z) = \sum_{n : (n, j-1) \notin \Gamma} \bigl( \tilde{\tilde{Q}}_2^{(j)} \bigr)_{i,n}.
    \]
    From (16), we have $\bigl( \tilde{\tilde{Q}}_2^{(j)} \bigr)_{i,n} \le \bigl( \tilde{Q}_2^{(j)} \bigr)_{i,n}$, so the condition is satisfied.
    For $\Gamma \in \mathcal{I}_j$, $1 < j \le K$, we may also take $x = y = (i, j-1)$. Then, it holds that
    \[
    \sum_{z \notin \Gamma} q_1(x, z) = \sum_{n : (n, j-1) \notin \Gamma} \bigl( \tilde{Q}_1^{(j-1)} \bigr)_{i,n} + \sum_{n : (n, j-2) \notin \Gamma} \bigl( \tilde{Q}_2^{(j-1)} \bigr)_{i,n}, \qquad
    \sum_{z \notin \Gamma} q_2(y, z) = \sum_{n : (n, j-1) \notin \Gamma} \bigl( \tilde{\tilde{Q}}_1^{(j-1)} \bigr)_{i,n} + \sum_{n : (n, j-2) \notin \Gamma} \bigl( \tilde{\tilde{Q}}_2^{(j-1)} \bigr)_{i,n}.
    \]
    If $j = 2$, then from (17) and (15), we have
    \[
    \bigl( \tilde{\tilde{Q}}_1^{(1)} \bigr)_{i,n} = \bigl( \tilde{Q}_1^{(1)} \bigr)_{i,n}, \qquad \bigl( \tilde{\tilde{Q}}_2^{(1)} \bigr)_{i,n} = 0,
    \]
    so the inequality is satisfied.
    If $2 < j \le K$, then from (18) and (16), we have
    \[
    \bigl( \tilde{\tilde{Q}}_1^{(j-1)} \bigr)_{i,n} = 0, \qquad \bigl( \tilde{\tilde{Q}}_2^{(j-1)} \bigr)_{i,n} \le \bigl( \tilde{Q}_2^{(j-1)} \bigr)_{i,n},
    \]
    so again the inequality holds.
    For $\Gamma \in \mathcal{I}_1$, we may take $x = y = (i, 1)$. Then, it holds that
    \[
    \sum_{z \notin \Gamma} q_1(x, z) = \sum_{n : (n, 0) \notin \Gamma} \bigl( \tilde{Q}_2^{(1)} \bigr)_{i,n} \ge 0, \qquad
    \sum_{z \notin \Gamma} q_2(y, z) = \sum_{n : (n, 0) \notin \Gamma} \bigl( \tilde{\tilde{Q}}_2^{(1)} \bigr)_{i,n} = 0,
    \]
    where the last equality follows from (15), and thus the inequality is satisfied.
    For $\Gamma \in \mathcal{I}_1$, we may also take $x = y = (i, 0)$. Then, it holds that
    \[
    \sum_{z \notin \Gamma} q_1(x, z) = \sum_{n : (n, 0) \notin \Gamma} \bigl( \tilde{Q}_1^{(0)} \bigr)_{i,n}, \qquad
    \sum_{z \notin \Gamma} q_2(y, z) = \sum_{n : (n, 0) \notin \Gamma} \bigl( \tilde{\tilde{Q}}_1^{(0)} \bigr)_{i,n}.
    \]
    From (17), we have $\bigl( \tilde{\tilde{Q}}_1^{(0)} \bigr)_{i,n} = \bigl( \tilde{Q}_1^{(0)} \bigr)_{i,n}$, so the inequality holds.
    Therefore, under the condition $x = y \in \Gamma$, condition (i) is satisfied.
  • Verification of condition ( ii )
We next verify condition ( ii ) .
  • In the case of $x \prec y \notin \Gamma$
    For $\Gamma \in \mathcal{I}_j$, $1 < j \le K$, we may choose $x = (i, j-2)$ and $y = (k, j-1)$. Then, it holds that
    \[
    \sum_{z \in \Gamma} q_1(x, z) = \sum_{n : (n, j-1) \in \Gamma} \bigl( \tilde{Q}_0^{(j-2)} \bigr)_{i,n}, \qquad
    \sum_{z \in \Gamma} q_2(y, z) = \sum_{n : (n, j) \in \Gamma} \bigl( \tilde{\tilde{Q}}_0^{(j-1)} \bigr)_{k,n} + \sum_{n : (n, j-1) \in \Gamma} \bigl( \tilde{\tilde{Q}}_1^{(j-1)} \bigr)_{k,n}.
    \]
    Since there exists an $n \in \tilde{S}_j$ such that $\tilde{z}_{k,n}^{(j-1)} \neq 0$ and $\Gamma \in \mathcal{I}_j$, we have
    \[
    \sum_{n : (n, j) \in \Gamma} \tilde{z}_{k,n}^{(j-1)} = 1.
    \]
    Using (13) and (14), we obtain
    \[
    \tilde{z}_{k,n}^{(j-1)} \bigl( \tilde{Q}_0^{(j-2)} \mathbf{1} \bigr)_{\max} \le \bigl( \tilde{\tilde{Q}}_0^{(j-1)} \bigr)_{k,n},
    \]
    and hence we have
    \[
    \bigl( \tilde{Q}_0^{(j-2)} \mathbf{1} \bigr)_{\max} = \sum_{n : (n, j) \in \Gamma} \tilde{z}_{k,n}^{(j-1)} \bigl( \tilde{Q}_0^{(j-2)} \mathbf{1} \bigr)_{\max} \le \sum_{n : (n, j) \in \Gamma} \bigl( \tilde{\tilde{Q}}_0^{(j-1)} \bigr)_{k,n}.
    \]
    Therefore, under the condition $x \prec y \notin \Gamma$, condition (ii) is satisfied.
  • In the case of $x = y \notin \Gamma$
    For $\Gamma \in \mathcal{I}_j$, $1 < j \le K$, we may choose $x = y = (i, j-1)$. Then, it holds that
    \[
    \sum_{z \in \Gamma} q_1(x, z) = \sum_{n : (n, j) \in \Gamma} \bigl( \tilde{Q}_0^{(j-1)} \bigr)_{i,n} + \sum_{n : (n, j-1) \in \Gamma} \bigl( \tilde{Q}_1^{(j-1)} \bigr)_{i,n}, \qquad
    \sum_{z \in \Gamma} q_2(y, z) = \sum_{n : (n, j) \in \Gamma} \bigl( \tilde{\tilde{Q}}_0^{(j-1)} \bigr)_{i,n} + \sum_{n : (n, j-1) \in \Gamma} \bigl( \tilde{\tilde{Q}}_1^{(j-1)} \bigr)_{i,n}.
    \]
    If $j = 2$, then from (13) and (17), we have
    \[
    \bigl( \tilde{Q}_0^{(1)} \bigr)_{i,n} \le \bigl( \tilde{\tilde{Q}}_0^{(1)} \bigr)_{i,n}, \qquad \bigl( \tilde{Q}_1^{(1)} \bigr)_{i,n} = \bigl( \tilde{\tilde{Q}}_1^{(1)} \bigr)_{i,n}.
    \]
    If $2 < j \le K-v$, then $\bigl( \tilde{\tilde{Q}}_1^{(j-1)} \bigr)_{i,n} = 0$ by (18), and using (14),
    \[
    \sum_{n : (n, j) \in \Gamma} \bigl( \tilde{Q}_0^{(j-1)} \bigr)_{i,n} + \sum_{n : (n, j-1) \in \Gamma} \bigl( \tilde{Q}_1^{(j-1)} \bigr)_{i,n}
    \le \sum_{n : (n, j) \in \Gamma} \Bigl\{ \bigl( \tilde{Q}_0^{(j-1)} \bigr)_{i,n} + \delta_{i,n} \sum_{m \neq i} \bigl( \tilde{Q}_1^{(j-1)} \bigr)_{i,m} \Bigr\}
    \le \sum_{n : (n, j) \in \Gamma} \bigl( \tilde{\tilde{Q}}_0^{(j-1)} \bigr)_{i,n}.
    \]
    If $K-v < j \le K$, then $\bigl( \tilde{Q}_1^{(j-1)} \bigr)_{i,n} = 0$ and $\bigl( \tilde{\tilde{Q}}_1^{(j-1)} \bigr)_{i,n} = 0$. Hence, from (14),
    \[
    \bigl( \tilde{Q}_0^{(j-1)} \bigr)_{i,n} = \bigl( \tilde{Q}_0^{(j-1)} \bigr)_{i,n} + \delta_{i,n} \sum_{m \neq i} \bigl( \tilde{Q}_1^{(j-1)} \bigr)_{i,m} \le \bigl( \tilde{\tilde{Q}}_0^{(j-1)} \bigr)_{i,n}.
    \]
    For $\Gamma \in \mathcal{I}_j$, $1 < j \le K$, we may choose $x = y = (i, j-2)$ as well. Then, it holds that
    \[
    \sum_{z \in \Gamma} q_1(x, z) = \sum_{n : (n, j-1) \in \Gamma} \bigl( \tilde{Q}_0^{(j-2)} \bigr)_{i,n}, \qquad
    \sum_{z \in \Gamma} q_2(y, z) = \sum_{n : (n, j-1) \in \Gamma} \bigl( \tilde{\tilde{Q}}_0^{(j-2)} \bigr)_{i,n}.
    \]
    If $j = 2$, then $\bigl( \tilde{Q}_0^{(0)} \bigr)_{i,n} = \bigl( \tilde{\tilde{Q}}_0^{(0)} \bigr)_{i,n}$ by (12).
    If $j = 3$, then $\bigl( \tilde{Q}_0^{(1)} \bigr)_{i,n} \le \bigl( \tilde{\tilde{Q}}_0^{(1)} \bigr)_{i,n}$ by (13).
    If $3 < j \le K$, then from (14), we have
    \[
    \bigl( \tilde{Q}_0^{(j-2)} \bigr)_{i,n} \le \bigl( \tilde{Q}_0^{(j-2)} \bigr)_{i,n} + \delta_{i,n} \sum_{m \neq i} \bigl( \tilde{Q}_1^{(j-2)} \bigr)_{i,m} \le \bigl( \tilde{\tilde{Q}}_0^{(j-2)} \bigr)_{i,n}.
    \]
    For $\Gamma \in \mathcal{I}_1$, we may choose $x = y = (i, 0)$. Then, it holds that
    \[
    \sum_{z \in \Gamma} q_1(x, z) = \sum_{n : (n, 1) \in \Gamma} \bigl( \tilde{Q}_0^{(0)} \bigr)_{i,n} + \sum_{n : (n, 0) \in \Gamma} \bigl( \tilde{Q}_1^{(0)} \bigr)_{i,n}, \qquad
    \sum_{z \in \Gamma} q_2(y, z) = \sum_{n : (n, 1) \in \Gamma} \bigl( \tilde{\tilde{Q}}_0^{(0)} \bigr)_{i,n} + \sum_{n : (n, 0) \in \Gamma} \bigl( \tilde{\tilde{Q}}_1^{(0)} \bigr)_{i,n}.
    \]
    From (12) and (17), we have
    \[
    \bigl( \tilde{Q}_0^{(0)} \bigr)_{i,n} = \bigl( \tilde{\tilde{Q}}_0^{(0)} \bigr)_{i,n}, \qquad \bigl( \tilde{Q}_1^{(0)} \bigr)_{i,n} = \bigl( \tilde{\tilde{Q}}_1^{(0)} \bigr)_{i,n}.
    \]
    Thus, condition (ii) is satisfied when $x = y \notin \Gamma$.
In summary, conditions (i) and (ii) are satisfied for all non-trivial cases of $x, y \in \tilde{S}$ with $x \preceq y$ and for every increasing set $\Gamma \subseteq \tilde{S}$ of the form $\Phi_{I,j}$. Therefore, the proof is complete. □

References

  1. Sato, M.; Kawamura, K.; Kawanishi, K.; Phung-Duc, T. Modeling and performance analysis of hybrid systems by queues with setup time. Perform. Eval. 2023, 162, 102366. [Google Scholar] [CrossRef]
  2. Phung-Duc, T.; Ren, Y.; Chen, J.-C.; Yu, Z.-W. Design and analysis of deadline and budget constrained autoscaling (DBCA) algorithm for 5G mobile networks. In Proceedings of the IEEE International Conference on Cloud Computing Technology and Science (CloudCom), Luxembourg, 12–15 December 2016. [Google Scholar]
  3. Ren, Y.; Phung-Duc, T.; Chen, J.-C.; Li, F.Y. Enabling dynamic autoscaling for NFV in a non-standalone virtual EPC: Design and analysis. IEEE Trans. Veh. Technol. 2023, 72, 7743–7756. [Google Scholar] [CrossRef]
  4. Phung-Duc, T.; Kawanishi, K. Energy-aware data centers with s-staggered setup and abandonment. In Proceedings of the Analytical and Stochastic Modelling Techniques and Applications (ASMTA 2016), Cardiff, UK, 24–26 August 2016; pp. 269–283. [Google Scholar]
  5. Neuts, M.F. Matrix-Geometric Solutions in Stochastic Models: An Algorithmic Approach; The Johns Hopkins University Press: Baltimore, MD, USA, 1981. [Google Scholar]
  6. Latouche, G.; Ramaswami, V. Introduction to Matrix Analytic Methods in Stochastic Modeling; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1999. [Google Scholar]
  7. Artalejo, J.R.; Gómez-Corral, A. Retrial Queueing Systems: A Computational Approach; Springer: Berlin/Heidelberg, Germany, 2008. [Google Scholar]
  8. Bright, L.; Taylor, P.G. Calculating the equilibrium distribution in level dependent quasi-birth-and-death processes. Stoch. Model. 1995, 11, 497–525. [Google Scholar] [CrossRef]
  9. Phung-Duc, T.; Masuyama, H.; Kasahara, S.; Takahashi, Y. A simple algorithm for the rate matrices of level-dependent QBD processes. In Proceedings of the 5th International Conference on Queueing Theory and Network Applications (QTNA2010), Beijing, China, 24–26 July 2010; pp. 46–52. [Google Scholar]
  10. Baumann, H.; Sandmann, W. Numerical solution of level dependent quasi-birth-and-death processes. Procedia Comput. Sci. 2010, 1, 1561–1569. [Google Scholar] [CrossRef]
  11. Masuyama, H. A sequential update algorithm for computing the stationary distribution vector in upper block-Hessenberg Markov chains. Queueing Syst. 2019, 92, 173–200. [Google Scholar] [CrossRef]
  12. Tweedie, R.L. Sufficient conditions for regularity, recurrence and ergodicity of Markov processes. Math. Proc. Camb. Philos. Soc. 1975, 78, 125–136. [Google Scholar] [CrossRef]
  13. Glynn, P.W.; Zeevi, A. Bounding stationary expectations of Markov processes. Markov Process. Relat. Top. Festschr. Thomas Kurtz 2008, 4, 195–214. [Google Scholar]
  14. Dayar, T.; Sandmann, W.; Spieler, D.; Wolf, V. Infinite level-dependent QBD processes and matrix-analytic solutions for stochastic chemical kinetics. Adv. Appl. Probab. 2011, 43, 1005–1026. [Google Scholar] [CrossRef]
  15. Somashekar, G.; Delasay, M.; Gandhi, A. Efficient and accurate Lyapunov function-based truncation technique for multi-dimensional Markov chains with applications to discriminatory processor sharing and priority queues. Perform. Eval. 2023, 162, 102356. [Google Scholar] [CrossRef]
  16. Müller, A.; Stoyan, D. Comparison Methods for Stochastic Models and Risks; John Wiley & Sons: Chichester, UK, 2002. [Google Scholar]
  17. Shaked, M.; Shanthikumar, J.G. Stochastic Orders; Springer: New York, NY, USA, 2007. [Google Scholar]
  18. Fourneau, J.M.; Pekergin, N. An algorithmic approach to stochastic bounds. In Lecture Notes in Computer Science; Calzarossa, M.C., Tucci, S., Eds.; Springer: Berlin/Heidelberg, Germany, 2002; Volume 2459, pp. 64–88. [Google Scholar]
  19. Buchholz, P. Exact and ordinary lumpability in finite Markov chains. J. Appl. Probab. 1994, 31, 59–75. [Google Scholar] [CrossRef]
  20. Zhao, Y.Q.; Liu, D. The censored Markov chain and the best augmentation. J. Appl. Probab. 1996, 33, 623–629. [Google Scholar] [CrossRef]
  21. Fourneau, J.M.; Lecoz, M.; Quessette, F. Algorithms for an irreducible and lumpable strong stochastic bound. Linear Algebra Its Appl. 2004, 386, 167–185. [Google Scholar] [CrossRef]
  22. Fourneau, J.M.; Pekergin, N.; Younès, S. Censoring Markov chains and stochastic bounds. In Proceedings of the Formal Methods and Stochastic Models for Performance Evaluation (EPEW 2007), Berlin, Germany, 27–28 September 2007; pp. 213–227. [Google Scholar]
  23. Kawanishi, K. Bounding performance of stochastic models for server virtualization in cloud computing. In Proceedings of the 10th International Congress on Industrial and Applied Mathematics (ICIAM 2023), Tokyo, Japan, 20–25 August 2023. [Google Scholar]
  24. Wolff, R.W. Poisson arrivals see time averages. Oper. Res. 1982, 30, 223–231. [Google Scholar] [CrossRef]
  25. Massey, W.A. Stochastic orderings for Markov processes on partially ordered spaces. Math. Oper. Res. 1987, 12, 350–367. [Google Scholar] [CrossRef]
  26. Brandt, A.; Last, G. On the pathwise comparison of jump processes driven by stochastic intensities. Math. Nachrichten 1994, 167, 21–42. [Google Scholar] [CrossRef]
Figure 1. State transition diagram of the model on the state space $S$.
Figure 2. State transition diagram of the Markov chain model on the extended state space $\hat{S}$.
Figure 3. State space extension model that gives a stochastic upper bound. Red arrows indicate transitions whose rates have been reassigned: $\mu_v$ from (1, 2) → (0, 2) to (1, 2) → (1, 3), and $2\mu_v$ from (2, 2) → (1, 2) to (2, 2) → (2, 3). Dashed arrows indicate additional transitions with rate $\lambda$.
Figure 4. State transition diagram of the model on state space $\tilde{S}$.
Figure 5. Stochastic lower bounding model. Blue arrows indicate the reassignment of the rate from (0, 2) → (1, 2) to (0, 2) → (0, 3).
Figure 6. Expected waiting time for various values of $\mu_v$.
Figure 7. Expected waiting time for various values of $\alpha$.
Figure 8. Expected sojourn time for various values of $\mu_v$.
Figure 9. Expected sojourn time for various values of $\alpha$.
Table 1. Parameters of the queueing system.

Parameter | Description
$l$ | Number of legacy servers
$v$ | Number of virtual servers
$K$ | Maximum capacity of the entire queueing system
$\lambda$ | Arrival rate of jobs
$\theta$ | Deadline expiry rate of waiting jobs
$\mu$ | Service rate of legacy servers
$\mu_v$ | Service rate of virtual servers
$\alpha$ | Setup rate of virtual servers
Table 2. Marginal tail probabilities for the original model, Proposition 1, and Theorem 1.

k | 5 | 10 | 15 | 20
λ = 1, Original model | 3.66 × 10^-3 | 1.11 × 10^-7 | 2.97 × 10^-13 | 1.53 × 10^-19
λ = 1, Proposition 1 | 1.00 × 10^0 | 1.00 × 10^0 | 1.00 × 10^0 | 9.76 × 10^-1
λ = 1, Theorem 1 | 1.90 × 10^-2 | 1.20 × 10^-6 | 6.81 × 10^-12 | 4.70 × 10^-18
λ = 10, Original model | 9.71 × 10^-1 | 5.38 × 10^-1 | 8.05 × 10^-2 | 3.20 × 10^-3
λ = 10, Proposition 1 | 1.00 × 10^0 | 1.00 × 10^0 | 1.00 × 10^0 | 1.00 × 10^0
λ = 10, Theorem 1 | 1.00 × 10^0 | 9.91 × 10^-1 | 2.35 × 10^-1 | 1.22 × 10^-2
Table 3. Maximum, average, and minimum CPU times (in seconds) required to obtain the upper bound of the expected waiting time. "N/A" indicates that the computation could not be completed due to insufficient memory.

v | 1000 | 1100 | 1200 | 1300 | 1400 | 1500
Matrix analytic method, maximum | 236.4 | 330.1 | 431.3 | N/A | N/A | N/A
Matrix analytic method, average | 230.1 | 319.6 | 419.8 | N/A | N/A | N/A
Matrix analytic method, minimum | 217.3 | 297.4 | 399.4 | N/A | N/A | N/A
Recurrence-based method, maximum | 102.0 | 144.4 | 187.2 | 247.9 | 311.0 | 439.9
Recurrence-based method, average | 98.7 | 138.6 | 181.6 | 239.5 | 305.0 | 416.2
Recurrence-based method, minimum | 96.2 | 134.9 | 178.4 | 230.9 | 298.8 | 406.6
