Article

Rate of Convergence and Periodicity of the Expected Population Structure of Markov Systems that Live in a General State Space

by
P.-C. G. Vassiliou
Department of Statistical Science, University College London, Gower st, London WC1E 6BT, UK
Mathematics 2020, 8(6), 1021; https://doi.org/10.3390/math8061021
Submission received: 25 May 2020 / Revised: 12 June 2020 / Accepted: 12 June 2020 / Published: 22 June 2020

Abstract: In this article we study the asymptotic behaviour of the expected population structure of a Markov system that lives in a general state space (MSGS) and its rate of convergence. We continue with the study of the asymptotic periodicity of the expected population structure. We conclude with the study of the total variability from the invariant measure in the periodic case for the expected population structure of an MSGS.

1. Introductory Notes

Ref. [1] introduced the stochastic process called a non-homogeneous Markov system (NHMS) with a countable state space. Ref. [2] introduced the stochastic process called a non-homogeneous semi-Markov system (NHSMS) in countable spaces. The theory and applications of both processes have grown considerably and have found many applications in fields of great diversity. The latest reviews of the theory are given in [3,4]. The motives for this theory go back to the homogeneous Markov chain models for manpower systems in [5,6], whose evolution, in a variety of populations, is well described in [7]. The stochastic process of an NHMS provides a general framework in which a large variety of applied probability models can be accommodated. However, the real motives were the works on non-homogeneous Markov models for manpower systems, such as [8,9,10]. Owing to their large number, we can provide only a selection of the applications of the theory. We start with the evolution of HIV within the T-cells of the human body in [11,12,13]; asthma was studied in [14]; reliability applications exist in [15], for example; examples of biomedical studies exist in [16,17]; there are applications to gene sequences ([18]); to DNA and web navigation ([19]); to manpower systems in [20,21,22,23,24,25]; to Physical Chemistry ([26]); and examples from ecological modelling ([27]). Finally, there is the work of the research school of Prof. McClean on health systems (see, for example, [28,29,30,31,32]). Note that health systems are large manpower systems, and each member has many parameters that are used to categorize him/her into the different groups. This characteristic of health systems is what makes them an area of potential application of the present results (see [33]).
The rigorous foundation of Markov systems in general spaces (MSGS) was introduced in [33], where the problem of the asymptotic behaviour, or ergodicity, of Markov systems was also studied. Important theorems were proved on the ergodicity of the distribution of the expected population structure of a Markov system that lives in the general space $(X, \mathcal{B}(X))$. In addition, the total variability from the invariant measure was studied, given that the Markov system is ergodic, and it was shown that the total variation is finite.
Markov chains in general spaces have very important applications in the areas of Markov models in time series, models in control and systems theory, Markov models with regeneration times, Moran dams, Markov chain Monte Carlo simulation algorithms, etc. For more details, see [34] Chapter 2. We believe that the introduction of a population modulated by a Markov chain with a general state space will considerably increase the potential for new applications, as described above for the countable case. Hence, although the results in the present paper are purely theoretical, the prospects for interesting applications are present and promising.
In Section 2 and Section 2.1, we start with the formal definition of a Markov system in a general state space, as introduced in [33]. In Section 2.2, we provide an abstract image of an MSGS in an attempt to help the reader understand the present paper. In Section 2.3, we introduce some known concepts and results useful for the foundation of the novel results that follow from that point to the end. In Section 3, we study the rate of convergence of the expected population structure of an MSGS. Two theorems are provided: in the first we give conditions under which the MSGS is uniformly ergodic, and in the second conditions under which the MSGS is V-uniformly ergodic. In Section 4, we proceed to study the asymptotic periodicity of the expected population structure for a population that lives in a general state space. A basic theorem is proved, where it is shown that if d is the period of the inherent Markov chain, then the sequence of expected population structures splits into d converging subsequences. The asymptotic periodicity of non-homogeneous Markov systems in countable spaces was studied in [35,36]. In the present study, the basic tools and methodology are different since, among other reasons, the equations are of a completely different nature: we move from difference equations to integral equations for the expected population structure. In Section 5, we study the total variability from the invariant measure in the periodic case for the expected population structure of an MSGS. In the form of two theorems, we provide conditions under which the total variability in the periodic case is finite.

2. The Markov System in a General State Space

We will start with a formal definition of a Markov system in a general state space ([33]) and then provide an explanatory example. Readers may choose whichever order of reading the two subsections suits them.

2.1. The Foundation of an MSGS

Let $(X, \mathcal{B}(X))$ be the state space, where X is a general set and $\mathcal{B}(X)$ denotes a countably generated σ-field on X. Lower-case letters denote elements of X, and $A, B, C$ denote elements of $\mathcal{B}(X)$. Denote by $T(t)$ the population of the system at time t, which lives in $(X, \mathcal{B}(X))$. We assume that $\{T(t)\}_{t=0}^{\infty}$ is a known discrete-time stochastic process with state space $\mathbb{Z}_+$.
Let $\lambda$ be a positive σ-finite measure on $\mathcal{B}(X)$, that is,
$\lambda(A) \geq 0 \ \text{and} \ \lambda(A) < \infty \quad \text{for every } A \in \mathcal{B}(X),$
for which we have $\lambda(X) = 1$. We assume that $\lambda(\cdot)$ represents the initial distribution of a member of the population in the space $(X, \mathcal{B}(X))$. We also assume that there is a set $W \in \mathcal{B}(X)$ for which $\lambda(W) = 0$. We will refer to W as the "gate" of the space $(X, \mathcal{B}(X))$.
In the space $(X, \mathcal{B}(X))$ where the population lives, it is assumed that each member holds a membership, and that leavers deposit their memberships at the "gate". Therefore, at any time t, the memberships at the "gate" W are those left by leavers, together with the $\Delta T(t) = T(t) - T(t-1)$ memberships necessary to complete the desired population $T(t)$. We will only treat the cases in which $\Delta T(t) \geq 0$. New members of the population take their memberships at the gate W, from which they then make a transition to any $A \in \mathcal{B}(X)$. Now let
$P(x, A), \quad x \in X, \ A \in \mathcal{B}(X), \qquad (1)$
be the transition probability kernel from $x \in X$ to the set $A \in \mathcal{B}(X)$ in one time step for a member of the population. Denote by $\Phi_p = \{\Phi_p(n)\}_{n \in \mathbb{N}_+}$ the Markov chain with state space $(X, \mathcal{B}(X))$, which is defined uniquely by the probability kernel (1). We now make the following:
Assumption 1.
For the Markov chain $\Phi_p = \{\Phi_p(n)\}_{n \in \mathbb{N}_+}$, the set W is an atom.
That is, there exists a measure $\mu(\cdot)$ on $\mathcal{B}(X)$ such that
$P(x, A) = \mu(A) \quad \text{for every } x \in W \text{ and } A \in \mathcal{B}(X), \qquad (2)$
with $\mu(X) = 1$.
Now, define
$Q(x, A) = P(x, A) + P(x, W)\,\mu(A), \quad \text{for every } x \in X,\ x \notin W,\ A \in \mathcal{B}(X), \qquad (3)$
$Q(x, A) = P(x, A) = \mu(A), \quad \text{for every } x \in W,\ A \in \mathcal{B}(X). \qquad (4)$
The transition probability kernel $Q(x,A)$ in (3) is the probability that a membership makes a transition from the point x into the set A in one time step: either through a direct transition of the member holding the membership, with probability $P(x,A)$, or through that member leaving the system via the gate W, with probability $P(x,W)$, followed by a new member, who takes over the membership at the gate W, entering the set A with probability $\mu(A)$. The transition probability kernel $Q(x,A)$ in (4) governs the new memberships that enter the space $(X, \mathcal{B}(X))$ in each time interval $(t-1, t]$: from the gate, i.e., the atom W, they are distributed according to the σ-finite probability measure $\mu(\cdot) : \mathcal{B}(X) \to [0, 1]$.
We will use the notation $\Phi_Q = \{\Phi_Q(n)\}_{n \in \mathbb{N}_+}$ for the Markov chain with state space $(X, \mathcal{B}(X))$ defined uniquely by $Q(x,A)$ in (3) and (4). We also write $Q^n(x,A)$ for the probability that a membership at x moves in n time steps to the set $A \in \mathcal{B}(X)$.
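The construction of the membership kernel Q from P can be sketched numerically on a finite discretization of the interior of X. The following is a minimal illustration, not the paper's method: the kernel entries, the number of states, and the entrant distribution are all invented for the example, and the gate W is modelled as one extra coordinate with $\mu(W) = 0$, so that (3) yields a proper stochastic kernel on the interior.

```python
import numpy as np

# Minimal discretized sketch (illustrative numbers, not from the paper).
# States 0..k-1 play the interior of X; the extra coordinate k plays the gate W.
k = 3
P = np.array([                  # member kernel P(x, .); rows sum to 1
    [0.6, 0.2, 0.1, 0.1],       # last column is P(x, W): leaving probability
    [0.1, 0.7, 0.1, 0.1],
    [0.2, 0.2, 0.4, 0.2],
])
mu = np.array([0.5, 0.3, 0.2])  # entrant distribution mu(.), with mu(W) = 0

# Membership kernel of Equation (3): Q(x, A) = P(x, A) + P(x, W) mu(A).
# A membership either moves with its holder or is handed to an entrant at W.
Q = P[:, :k] + np.outer(P[:, k], mu)

print(Q.sum(axis=1))            # each row sums to 1: Q is a proper kernel
```

Because $\mu$ puts no mass on the gate, every row of Q sums to $P(x, X \setminus W) + P(x, W)\mu(X) = 1$, which is the point of the construction.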
Now, define:
$N(x, t, A)$: the number of memberships of the population that are in the set $A \in \mathcal{B}(X)$ at time t, given that they were initially at $x \in X$.
Assume, as is usual in many applications, a natural partition of the space $(X, \mathcal{B}(X))$, that is,
$\{A_i\}_{i=1}^{k+1}$, with $A_i \in \mathcal{B}(X)$ and $A_i \cap A_j = \emptyset$ for $i \neq j$, and such that
$\bigcup_{i=1}^{k+1} A_i = X$, where with no loss of generality $A_{k+1} = W$.
Of interest is the expected population structure, defined as
$E_P[N(t, X)] = \big( E_P[N(x,t,A_1)], E_P[N(x,t,A_2)], \ldots, E_P[N(x,t,A_k)] \big),$
where $N(t, X)$ is a non-negative measurable function on X. We are also interested in the relative expected population structure, defined by
$E_P[q(t, X)] = E_P[N(t, X)] / T(t).$
It is easily seen that $q$ is a positive σ-finite probability measure, since
$q(x,t,A_i) = \frac{N(x,t,A_i)}{T(t)} \geq 0, \quad q(x,t,A_i) < \infty \ \text{for all } i \in \mathbb{N}_+, \quad \text{and} \quad q(x,t,X) = 1.$
Of central importance in the study of Markov systems in a general state space is the evolution of the expected population structure $E_P[N(t,X)]$, or of the relative expected population structure $E_P[q(t,X)]$. From [33] we get that
$E_P[N(x,t,A)] = T(0) \int_X \lambda(dx)\,Q^t(x,A) + \sum_{n=1}^{t-1} \Delta T(n) \int_X \mu(dx)\,Q^{t-n}(x,A). \qquad (10)$
Note that
$T(0) \int_X \lambda(dx)\,Q^t(x,A)$
is the expected number of memberships from the initial population $T(0)$ that survived up to time t and are in set A. The part
$\sum_{n=1}^{t-1} \Delta T(n) \int_X \mu(dx)\,Q^{t-n}(x,A)$
is the expected number of memberships that entered the system in the interval $(0, t)$, survived up to time t, and are in set A.
We call the random process described above, with state space $(X, \mathcal{B}(X))$, a Markov system in a general state space. In addition, we will call the Markov chain $\Phi_Q = \{\Phi_Q(n)\}_{n \in \mathbb{N}_+}$ the inherent Markov chain of the Markov system.
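The two terms of the expected-structure formula (10) can be evaluated directly on a finite discretization. The sketch below assumes an illustrative three-state interior, an invented kernel Q, invented distributions $\lambda, \mu$, and an invented expanding population path $T(t)$; nothing here is taken from the paper beyond the formula itself.

```python
import numpy as np

# Numerical sketch of Equation (10) on a discretized interior.
Q = np.array([[0.65, 0.23, 0.12],
              [0.15, 0.73, 0.12],
              [0.30, 0.26, 0.44]])   # membership kernel; rows sum to 1
lam = np.array([0.4, 0.4, 0.2])      # initial distribution lambda(.)
mu = np.array([0.5, 0.3, 0.2])       # entrant distribution mu(.)
T = [100, 110, 118, 124, 128]        # expanding totals: DeltaT(t) >= 0

def expected_structure(t):
    """E[N(t,.)] = T(0) lam Q^t + sum_{n=1}^{t-1} DeltaT(n) mu Q^(t-n)."""
    out = T[0] * lam @ np.linalg.matrix_power(Q, t)
    for n in range(1, t):
        out = out + (T[n] - T[n - 1]) * mu @ np.linalg.matrix_power(Q, t - n)
    return out

EN = expected_structure(4)
print(EN.round(2), EN.sum())         # total mass equals T(3) = 124
```

Since every row of Q sums to one, the survivor term carries mass $T(0)$ and each entrant term carries mass $\Delta T(n)$, so the coordinates of the structure always add up to $T(t-1)$.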

2.2. An Abstract Image for MSGS

In the present subsection, in order to make the various concepts and results easier to understand, we will use the analogy of a frog that jumps among the lilies in a pond. This abstract image was first used in [33], and the reader is referred to that study for a more extensive description and an explanation of the various concepts. In summary, imagine a lily pond covered by the leaves of the lilies. A population of frogs lives on the pond, with a number of them on every lily at each point in time; hence we have an expected population of frogs on the lilies. There are also leavers and newcomers in the pond at every point in time, due to the antagonistic mating situation that exists between the males.

2.3. Introducing Some Important Concepts and Known Results

Throughout the paper we will assume that the Markov chain $\Phi_Q$ is ψ-irreducible. That is, there is a σ-finite measure ψ on $(X, \mathcal{B}(X))$ such that, for any $A \in \mathcal{B}(X)$ with $\psi(A) > 0$ and any $x \in X$,
$Q^n(x, A) > 0, \quad \text{for all } n \text{ sufficiently large}.$
We assume that ψ is a maximal irreducibility measure. We will also assume that the Markov chain is aperiodic, apart from the later sections, where we will study periodicity. The concept of ψ-irreducibility for Markov chains on general state spaces was introduced by [37,38] and followed by many authors [39,40,41,42,43,44].
We will adopt the term ergodicity for Markov chains for which the limit $\lim_{n\to\infty} Q^n(x,A)$ exists and is equal to $\pi(A)$, where π is the invariant measure, or stationary distribution, of $\Phi_Q$. As the mode of convergence in the space $(X, \mathcal{B}(X))$, the total variation norm will be used. For a signed measure μ on $(X, \mathcal{B}(X))$, the total variation norm $\|\mu\|$ is given by
$\|\mu\| := \sup_{f : |f| \leq 1} |\mu(f)| = \sup_{A \in \mathcal{B}(X)} \mu(A) - \inf_{A \in \mathcal{B}(X)} \mu(A).$
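On a finite discretization the total variation norm of a signed measure reduces to the sum of the absolute point masses, which matches the $\sup_A - \inf_A$ form above (the sup is attained on the positive part, the inf on the negative part). A minimal sketch, with invented probability vectors:

```python
import numpy as np

# Total variation norm of a signed measure on a finite discretization:
# ||mu|| = sup_A mu(A) - inf_A mu(A) = (sum of positive masses)
#                                      - (sum of negative masses).
def tv_norm(signed):
    signed = np.asarray(signed, dtype=float)
    return signed[signed > 0].sum() - signed[signed < 0].sum()

p = np.array([0.5, 0.3, 0.2])   # illustrative probability vectors
q = np.array([0.2, 0.3, 0.5])
d = tv_norm(p - q)
print(d)                        # sup_A (p-q)(A) - inf_A (p-q)(A)
```

For two probability measures this equals twice the supremum over sets of $|p(A) - q(A)|$, which is exactly the factor 2 appearing in the key limit below.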
The key limit of interest in the present paper will be of the form
$\lim_{n\to\infty} \|Q^n(x, \cdot) - \pi(\cdot)\| = 2 \lim_{n\to\infty} \sup_{A} |Q^n(x, A) - \pi(A)|.$
For more on the spectral theory and rates of convergence for Markov chains in general spaces, see [34]. Research is motivated by elegant mathematics as well as by a range of applications. Note that, for countable non-homogeneous Markov systems, it was proved that the space of all possible expected population structures is a Hilbert space ([45]). Now, for a ψ-irreducible Markov chain that lives in $(X, \mathcal{B}(X))$, denote
$\mathcal{B}^+(X) = \{ A \in \mathcal{B}(X) : \psi(A) > 0 \}.$
Let $Q_1$ and $Q_2$ be transition kernels of Markov chains; then, for a function $V \geq 1$, define the V-norm distance between $Q_1$ and $Q_2$ as
$\|Q_1 - Q_2\|_V := \sup_{x \in X} \frac{\|Q_1(x, \cdot) - Q_2(x, \cdot)\|_V}{V(x)}.$
We define the outer product of the function 1 and the invariant measure π to be the kernel
$[1 \otimes \pi](x, A) = \pi(A), \quad x \in X,\ A \in \mathcal{B}(X).$
In many applications we consider the distance $\|Q^n - 1 \otimes \pi\|_V$ for large n. We provide the following definition from [34]:
Definition 1.
(V-uniform ergodicity). An ergodic chain Φ with transition kernel Q is called V-uniformly ergodic if
$\|Q^n - 1 \otimes \pi\|_V \to 0, \quad \text{as } n \to \infty.$
Of great interest is the special case when V = 1 . In this case, we provide the following definition:
Definition 2.
(Uniform ergodicity). A chain Φ with transition kernel Q is called uniformly ergodic if it is V-uniformly ergodic with V = 1 , that is, if
$\sup_{x \in X} \|Q^n(x, \cdot) - \pi\| \to 0, \quad \text{as } n \to \infty.$
The concept of a small set is very useful in order to obtain a finite break-up into cyclic sets for ψ-irreducible chains.
Definition 3.
For $C \in \mathcal{B}(X)$, we say that C is a small set if there exist an $m > 0$ and a non-trivial measure $\nu_m$ on $\mathcal{B}(X)$ such that, for all $x \in C$ and all $B \in \mathcal{B}(X)$,
$P^m(x, B) \geq \nu_m(B).$
In such a case we say that C is $\nu_m$-small.
From [34] p. 105 we get the following Theorem:
Theorem 1.
If Φ is ψ-irreducible, then for every $A \in \mathcal{B}^+(X)$ there exist an $m \geq 1$ and a $\nu_m$-small set $C \subseteq A$ such that $C \in \mathcal{B}^+(X)$ and $\nu_m(C) > 0$.
Given the existence of just one small set from the previous Theorem, the Proposition in [34] p. 106 states that, in the ψ-irreducible case, the set X can moreover be covered by small sets.
Assume, without loss of generality, that C is a $\nu_M$-small set with $\nu_M(C) > 0$. With the use of the pair $(C, \nu_M)$, we define a cycle for a general irreducible Markov chain. We will suppress the subscript of ν for simplicity.
We have that
$P^M(x, \cdot) \geq \nu(\cdot), \quad x \in C, \quad \text{with } \nu(C) > 0;$
hence, when the chain starts in C, there is a positive probability that the chain will return to C at time M. Let
$E_C = \{ n \geq 1 : \text{the set } C \text{ is } \nu_n\text{-small, with } \nu_n = \delta_n \nu \text{ for some } \delta_n > 0 \}.$
Then the "period" of the set C is given by the greatest common divisor of $E_C$. The following Theorem ([34] p. 113) will be useful in what follows:
Theorem 2.
Assume that Φ is a ψ-irreducible Markov chain on X.
If $C \in \mathcal{B}^+(X)$ is $\nu_M$-small and d is the greatest common divisor of the set $E_C$, then there exist disjoint sets $D_0, \ldots, D_{d-1} \in \mathcal{B}(X)$ (a "d-cycle") such that
(i) for $x \in D_i$, $P(x, D_{i+1}) = 1$, $i = 0, \ldots, d-1 \pmod d$;
(ii) the set $N = \big( \bigcup_{i=0}^{d-1} D_i \big)^c$ is ψ-null.
The d-cycle $\{D_i\}$ is maximal in the sense that, for any other collection
$\{ d', D_k',\ k = 0, \ldots, d'-1 \}$
satisfying (i)-(ii), we have d' dividing d; whilst if $d' = d$, then, by reordering the indices if necessary, $D_i' = D_i$ a.e. ψ.
It is obvious from the above that any small set must be essentially contained inside one specific member $D_i$ of the cyclic class. From [34] p. 115, we need the following Proposition:
Proposition 1.
Suppose Φ is a ψ-irreducible chain with period d and d-cycle $\{D_i,\ i = 0, \ldots, d-1\}$. Then each of the sets $D_i$ is an absorbing ψ-irreducible set for the chain $\Phi_d$ corresponding to the transition kernel $P^d$, and $\Phi_d$ on each $D_i$ is aperiodic.
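On a finite chain the period construction $d = \gcd(E_C)$ can be made concrete by taking C to be a singleton, so that $\nu_n$-smallness reduces to positivity of the n-step return probability. The chain below is an invented deterministic 3-cycle used purely as an illustration:

```python
from math import gcd
import numpy as np

# Period of a state x as gcd{ n : Q^n(x, x) > 0 }, mirroring d = gcd(E_C)
# for the singleton small set C = {x} (finite-chain sketch, invented kernel).
Q = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])      # deterministic 3-cycle

def period(Q, x, nmax=24):
    d = 0
    Qn = np.eye(len(Q))
    for n in range(1, nmax + 1):
        Qn = Qn @ Q                  # Qn = Q^n
        if Qn[x, x] > 0:
            d = gcd(d, n)            # fold each feasible return time in
    return d

print(period(Q, 0))                  # 3
```

Here return to state 0 is possible only at times 3, 6, 9, ..., so the gcd is 3, and the cyclic sets of Theorem 2 are the three singletons visited in order.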
For any set $A \in \mathcal{B}(X)$, we denote by $\eta_A$ the number of visits by Φ to A after time zero, given by
$\eta_A = \sum_{n=1}^{\infty} \mathbb{1}\{ \Phi_n \in A \}.$
We define the kernel
$U(x, A) := \sum_{n=1}^{\infty} Q^n(x, A) = E_x[\eta_A].$
The chain Φ is called recurrent if it is ψ-irreducible and $U(x, A) = \infty$ for every $x \in X$ and every $A \in \mathcal{B}^+(X)$. If the chain Φ is irreducible and admits a recurrent atom $a \in \mathcal{B}^+(X)$, then every set in $\mathcal{B}^+(X)$ is recurrent.
Definition 4.
The set $A \in \mathcal{B}^+(X)$ is called Harris recurrent if
$P_x(\eta_A = \infty) = 1 \quad \text{for every } x \in A.$
A chain Φ is called Harris recurrent if it is ψ-irreducible and every set in B + X is Harris recurrent.
Definition 5.
A σ-finite measure π on $\mathcal{B}(X)$ with the property
$\pi(A) = \int_X \pi(dx)\,P(x, A), \quad A \in \mathcal{B}(X),$
will be called invariant.
Suppose that Φ is irreducible, and admits an invariant probability measure π . Then Φ is called a positive chain. If Φ is Harris recurrent and positive, then Φ is called a positive Harris chain. From [34] p. 328 and p. 204, we get the following two results:
Theorem 3.
If Φ is positive Harris and aperiodic, then for any initial distribution λ,
$\Big\| \int_X \lambda(dx)\,P^n(x, \cdot) - \pi \Big\| \to 0 \quad \text{as } n \to \infty.$
For any Harris recurrent set H, we write $H^* = \{ y : L(y, H) = 1 \}$, where $L(y, H)$ is the probability that Φ ever enters H from y, so that $H \subseteq H^*$ and $H^*$ is absorbing. We will call H a maximal absorbing set if $H = H^*$.
Theorem 4.
If Φ is recurrent, then we can write
$X = H \cup N,$
where H is a non-empty maximal Harris set and N is transient.

3. Rate of Convergence of MSGS

In the present section, we will study the rate of convergence of Markov systems in general spaces. We will provide conditions under which the expected population structure of an ergodic Markov system converges uniformly, and V-uniformly, at a geometric rate. From [34] p. 393, we get the following two Theorems:
Theorem 5.
Suppose that Φ is ψ-irreducible and aperiodic with transition kernel Q. Then the following are equivalent for $V \geq 1$:
(i) Φ is V-uniformly ergodic.
(ii) There exist $r > 1$ and $R < \infty$ such that for all $n \in \mathbb{N}_+$
$\|Q^n - 1 \otimes \pi\|_V \leq R\,r^{-n}.$
(iii) There exists some $n > 0$ such that $\|Q^i - 1 \otimes \pi\|_V < \infty$ for $i \leq n$ and
$\|Q^n - 1 \otimes \pi\|_V < 1.$
Theorem 6.
For any Markov chain Φ with transition kernel Q, the following are equivalent:
(i) Φ is uniformly ergodic.
(ii) There exist $r > 1$ and $R < \infty$ such that for all x
$\|Q^n(x, \cdot) - \pi(\cdot)\| \leq R\,r^{-n};$
that is, the convergence takes place at a uniform geometric rate.
(iii) For some $n \in \mathbb{N}_+$,
$\sup_{x \in X} \|Q^n(x, \cdot) - \pi(\cdot)\| < 1.$
(iv) The chain is aperiodic and Doeblin's condition holds: that is, there exist a probability measure φ on $\mathcal{B}(X)$ and $\varepsilon < 1$, $\delta > 0$, $n \in \mathbb{N}_+$ such that whenever $\phi(A) > \varepsilon$,
$\inf_{x \in X} Q^n(x, A) > \delta.$
(v) The set X is $\nu_m$-small for some m.
(vi) The chain is aperiodic and there is a petite set C with
$\sup_{x \in X} E_x[\tau_C] < \infty,$
in which case, for every set $A \in \mathcal{B}^+(X)$, $\sup_{x \in X} E_x[\tau_A] < \infty$.
(vii) The chain is aperiodic and there are a petite set C and a $\kappa > 1$ with
$\sup_{x \in X} E_x[\kappa^{\tau_C}] < \infty,$
in which case, for every set $A \in \mathcal{B}^+(X)$, there is some $\kappa_A > 1$ with
$\sup_{x \in X} E_x[\kappa_A^{\tau_A}] < \infty.$
(viii) The chain is aperiodic and there is a bounded solution $V \geq 1$ to
$\Delta V(x) \leq -\beta V(x) + b\,\mathbb{1}_C(x), \quad x \in X,$
for some $\beta > 0$, $b < \infty$, and some petite set C.
Under (v), we get that for any x,
$\|Q^n(x, \cdot) - \pi(\cdot)\| \leq 2\rho^{n/m},$
where $\rho = 1 - \nu_m(X)$.
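The bound under (v) can be checked numerically on a finite chain, where the whole space is $\nu_1$-small with $\nu_1(B) = \min_x Q(x, B)$. The kernel below is invented for the illustration; the computation verifies the Doeblin bound $\|Q^n(x,\cdot) - \pi\| \leq 2\rho^n$ with $m = 1$:

```python
import numpy as np

# Doeblin-bound sketch for Theorem 6 (v) with m = 1 on an invented kernel.
Q = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
nu = Q.min(axis=0)             # nu(B) = min_x Q(x, B), so Q(x, .) >= nu(.)
rho = 1.0 - nu.sum()           # here rho = 0.3

# stationary distribution pi: left eigenvector of Q for eigenvalue 1
w, v = np.linalg.eig(Q.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

bounds_hold = all(
    np.abs(np.linalg.matrix_power(Q, n) - pi).sum(axis=1).max()
    <= 2 * rho ** n + 1e-12
    for n in range(1, 15)
)
print(rho, bounds_hold)
```

The worst-case total variation distance over starting points is compared against $2\rho^n$ for each n; the coupling argument behind (v) guarantees the inequality for every n.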
Now, from [46], we will borrow the following theorem adapted for an MSGS:
Theorem 7.
Let a Markov system in a general state space $(X, \mathcal{B}(X))$ be expanding, that is, $\Delta T(t) = T(t) - T(t-1) \geq 0$. Then the following two statements are equivalent:
(i) The sequence $\{T(t)\}_{t=0}^{\infty}$ converges geometrically, that is, $\lim_{t\to\infty} T(t) = T$ at a geometric rate.
(ii) The non-negative sequence $\{\Delta T(t)\}_{t=0}^{\infty}$ tends to zero geometrically.
For simplicity, we will start by studying uniform ergodicity for an MSGS and then proceed to study V-uniform ergodicity.
Definition 6.
(Uniform ergodicity for an MSGS). Let a Markov system in a general state space $(X, \mathcal{B}(X))$ be given. Also let $\{T(t)\}_{t=0}^{\infty}$ be the total population of memberships, with $\lim_{t\to\infty} T(t) = T$; let $Q(x,A)$, $x \in X$, $A \in \mathcal{B}(X)$, be the transition kernel of the inherent Markov chain of memberships; and let $E_Q[N(x,t,A)]$ be the expected population structure. We say that the MSGS is uniformly ergodic if and only if there exist a $C < \infty$ and an $0 < a < 1$ such that
$\big| E_Q[N(x,t,A)] - T\pi(A) \big| \leq C a^t \quad \text{for every } x \in X,\ A \in \mathcal{B}(X).$
We now provide the following theorem concerning uniform ergodicity for an MSGS.
Theorem 8.
Let a Markov system that lives in $(X, \mathcal{B}(X))$ be given. Assume in addition that
$\lim_{t\to\infty} T(t) = T \ \text{at a geometric rate}, \quad \Delta T(t) \geq 0 \ \text{for every } t \in \mathbb{N}_+,$
and that the inherent Markov chain with transition kernel $Q(x,A)$ is uniformly ergodic. Then the MSGS is uniformly ergodic.
Proof. 
We have that
$\big| E_Q[N(x,t,A)] - T\pi(A) \big| = \big| E_Q[N(x,t,A)] - T(0)\pi(A) - (T - T(0))\pi(A) \big|$
(from Equation (10))
$= \Big| T(0)\Big( \int_X \lambda(dx)\,Q^t(x,A) - \pi(A) \Big) + \sum_{n=1}^{t-1} \Delta T(n) \int_X \mu(dx)\,Q^{t-n}(x,A) - \sum_{n=1}^{\infty} \Delta T(n)\,\pi(A) \Big|$
$\leq T(0)\Big| \int_X \lambda(dx)\,Q^t(x,A) - \pi(A) \Big| + \Big| \sum_{n=1}^{t-1} \Delta T(n)\Big( \int_X \mu(dx)\,Q^{t-n}(x,A) - \pi(A) \Big) \Big| + \sum_{n=t}^{\infty} \Delta T(n)\,\pi(A). \qquad (27)$
From (27), and since $\lim_{t\to\infty} T(t) = T$ geometrically fast with $\Delta T(t) \geq 0$ for every $t \in \mathbb{N}_+$, from Theorem 7 we get that there exist a $c > 0$ and a $0 < b < 1$ such that
$\sum_{n=t}^{\infty} \Delta T(n)\,\pi(A) \leq \sum_{n=t}^{\infty} \Delta T(n) \leq \frac{c}{1-b}\,b^t. \qquad (28)$
Also from (27), since $\int_X \lambda(dx) = 1$ and since the inherent Markov chain with kernel $Q(x,A)$ is uniformly ergodic, from Theorem 6 (ii) there exist $R < \infty$ and $r > 1$ such that
$T(0)\Big| \int_X \lambda(dx)\,Q^t(x,A) - \pi(A) \Big| \leq T(0) \int_X \lambda(dx)\,\big| Q^t(x,A) - \pi(A) \big| \leq T(0) \int_X \lambda(dx)\,R a^t = T(0) R a^t, \qquad (29)$
with $a = r^{-1}$, $0 < a < 1$.
Now, again since $\lim_{t\to\infty} T(t) = T$ geometrically fast and $\Delta T(t) \geq 0$ for every $t \in \mathbb{N}_+$, from Theorem 7 there exist a $c > 0$ and a $0 < b < 1$ with $\Delta T(t) \leq c b^t$; in addition, since the inherent Markov chain with kernel $Q(x,A)$ is uniformly ergodic, from Theorem 6 (ii) there exist $R < \infty$ and $r > 1$, that is, $0 < a = r^{-1} < 1$, such that
$\Big| \sum_{n=1}^{t-1} \Delta T(n)\Big( \int_X \mu(dx)\,Q^{t-n}(x,A) - \pi(A) \Big) \Big| \leq \sum_{n=1}^{t-1} \Delta T(n) \int_X \mu(dx)\,\big| Q^{t-n}(x,A) - \pi(A) \big| \leq \sum_{n=1}^{t-1} c b^n R a^{t-n} \leq \frac{cR}{1 - b/a}\big( 1 - (b/a)^t \big) a^t = R^* a^t. \qquad (30)$
Now, from (27)-(30) we have that
$\big| E_Q[N(x,t,A)] - T\pi(A) \big| \leq T(0) R a^t + R^* a^t + \frac{c}{1-b}\,b^t. \qquad (31)$
Let $a_1 = \max\{a, b\}$; then $0 < a_1 < 1$ and
$\big| E_Q[N(x,t,A)] - T\pi(A) \big| \leq T(0) R a_1^t + R^* a_1^t + \frac{c}{1-b}\,a_1^t \leq \Big( T(0) R + R^* + \frac{c}{1-b} \Big) a_1^t = c^* a_1^t,$
which concludes the proof that the MSGS is uniformly ergodic. □
Generalizing the above results to V-uniform ergodicity is now straightforward. We start with the definition:
Definition 7.
(V-uniform ergodicity for an MSGS). Let a Markov system in a general state space $(X, \mathcal{B}(X))$ be given. Also let $\{T(t)\}_{t=0}^{\infty}$ be the total population of memberships, with $\lim_{t\to\infty} T(t) = T$; let $Q(x,A)$, $x \in X$, $A \in \mathcal{B}(X)$, be the transition kernel of the inherent Markov chain of memberships; and let $E_Q[N(x,t,A)]$ be the expected population structure. We say that the MSGS is V-uniformly ergodic if and only if there exist a $C < \infty$ and an $0 < a < 1$ such that
$\big\| E_Q[N(x,t,A)] - T\pi(A) \big\|_V \leq C a^t \quad \text{for every } x \in X,\ A \in \mathcal{B}(X).$
Following exactly the proof of Theorem 8, we arrive at the following result:
Theorem 9.
Let a Markov system that lives in $(X, \mathcal{B}(X))$ be given. Assume in addition that
$\lim_{t\to\infty} T(t) = T \ \text{at a geometric rate}, \quad \Delta T(t) \geq 0 \ \text{for every } t \in \mathbb{N}_+,$
and that the inherent Markov chain with transition kernel $Q(x,A)$ is V-uniformly ergodic. Then the MSGS is V-uniformly ergodic.

4. Asymptotic Periodicity of an MSGS

The study of the asymptotic behaviour of Markov chains has been very important for finite or countable spaces from the start of their history. The general state space results are due to [40,41,47]. For non-homogeneous Markov chains with a countable state space, the initial important results on asymptotic behaviour were achieved by [48,49,50,51,52]. For the semi-Markov process analogue, a wealth of results exists in the books [53,54]. For NHMS and NHSMS on countable spaces, the asymptotic behaviour has also been a problem of central importance; see, for example, [1,36,55,56,57]. In the present section, we will assume that the Markov chain is periodic and study the asymptotic behaviour of the expected population structure, which is clearly an important problem for a population that lives in $(X, \mathcal{B}(X))$. Let the inherent Markov chain of the MSGS with kernel Q be positive Harris with period $d \geq 1$, and let the d-cycle described in Theorem 2 be the sets $D_0, D_1, \ldots, D_{d-1}$, where $D_i \in \mathcal{B}(X)$ for $i = 0, 1, \ldots, d-1$. We have assumed that $\lambda(\cdot)$ is the initial distribution on X. Then the distribution on the set $D_i$ is
$\lambda_i(dx) = \frac{\lambda(dx)}{\lambda(D_i)}, \quad x \in D_i.$
From Theorem 2, we have that if $x \in D_0$ then $Q(x, D_1) = 1$, and consequently $Q^2(x, D_2) = \int_{D_1} Q(x, dy)\,Q(y, D_2)$. However, for $y \in D_1$ we have $Q(y, D_2) = 1$; hence $Q^2(x, D_2) = \int_{D_1} Q(x, dy) = Q(x, D_1) = 1$. Similarly, since the period is d, we have:
If $x \in D_i$, then $Q^{nd}(x, D_i) = 1$ and $Q^{nd}(x, D_j) = 0$ for $i \neq j$. $\qquad (34)$
Now, in general, for $x \in D_i$ and $d > i + \upsilon$ we have
$Q^{nd+\upsilon}(x, D_{i+\upsilon}) = \int_{D_i} Q^{nd}(x, dy)\,Q^{\upsilon}(y, D_{i+\upsilon}) = \int_{D_i} Q^{nd}(x, dy) = 1, \qquad (35)$
since for $y \in D_i$ and $d > i + \upsilon$ we have $Q^{\upsilon}(y, D_{i+\upsilon}) = 1$.
It is apparent from (35) that, if $x \in D_i$ and $d > i + \upsilon$, then $Q^{nd+\upsilon}(x, D_j) = 0$ for $j \neq i + \upsilon$. $\qquad (36)$
If $x \in D_i$ and $d \leq i + \upsilon$, we have
$Q^{nd+\upsilon}(x, D_{i+\upsilon-d}) = \int_{D_i} Q^{nd}(x, dy)\,Q^{\upsilon}(y, D_{i+\upsilon-d}) = \int_{D_i} Q^{nd}(x, dy) = 1,$
since for $y \in D_i$ and $d \leq i + \upsilon$ we have $Q^{\upsilon}(y, D_{i+\upsilon-d}) = 1$.
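The cyclic behaviour in (34) is easy to see on a finite chain. The 4-state kernel below is invented for the illustration: it has period $d = 2$ with cyclic sets $D_0 = \{0, 1\}$ and $D_1 = \{2, 3\}$, and mass started in $D_0$ sits entirely in $D_0$ at every multiple of d:

```python
import numpy as np

# Finite sketch of (34): invented periodic kernel, d = 2,
# D0 = {0, 1}, D1 = {2, 3}; every step swaps the two cyclic sets.
Q = np.array([[0.0, 0.0, 0.6, 0.4],
              [0.0, 0.0, 0.3, 0.7],
              [0.5, 0.5, 0.0, 0.0],
              [0.2, 0.8, 0.0, 0.0]])
x = 0                                   # start in D0
for n in (2, 4, 6):                     # multiples of the period d
    Qn = np.linalg.matrix_power(Q, n)
    assert np.isclose(Qn[x, :2].sum(), 1.0)   # Q^{nd}(x, D0) = 1
    assert np.isclose(Qn[x, 2:].sum(), 0.0)   # Q^{nd}(x, D1) = 0
print("mass cycles with period 2")
```

At odd times the two sums are exchanged, which is exactly the $Q^{nd+\upsilon}$ shift in (35).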
Since the inherent Markov chain with kernel Q is assumed to be positive Harris with period d, its restriction to each cyclic set is positive Harris and aperiodic on the d-skeleton, and by Theorem 3 it is straightforward to check the following Theorem:
Theorem 10.
If the inherent Markov chain of the MSGS is positive Harris and periodic with period d, then for the sets $D_0, D_1, \ldots, D_{d-1}$, with $D_i \in \mathcal{B}(X)$, we have that
$\lim_{n\to\infty} \Big| \int_{D_i} \lambda_i(dx)\,P^{nd+\upsilon}(x, A) - \pi_{i+\upsilon}(A) \Big| = 0 \quad \text{if } d > i + \upsilon, \quad i = 0, 1, \ldots, d-1,$
for any set $A \subseteq D_{i+\upsilon}$, and
$\lim_{n\to\infty} \Big| \int_{D_i} \lambda_i(dx)\,P^{nd+\upsilon}(x, A) - \pi_{i+\upsilon-d}(A) \Big| = 0 \quad \text{if } d \leq i + \upsilon, \quad i = 0, 1, \ldots, d-1,$
for any set $A \subseteq D_{i+\upsilon-d}$.
Now, from the above Theorem and the decomposition Theorem 4, we immediately get the following:
Theorem 11.
If the inherent Markov chain of the MSGS is positive recurrent and periodic with period d, then for the sets $D_0, D_1, \ldots, D_{d-1}$, with $D_i \in \mathcal{B}(X)$, and for each i, there exists a $\pi_i$-null set $N_i$ such that, for every initial distribution $\lambda_i$ with $\lambda_i(N_i) = 0$,
$\lim_{n\to\infty} \Big| \int_{D_i} \lambda_i(dx)\,P^{nd+\upsilon}(x, A) - \pi_{i+\upsilon}(A) \Big| = 0 \quad \text{if } d > i + \upsilon, \quad i = 0, 1, \ldots, d-1,$
for any set $A \subseteq D_{i+\upsilon}$, and
$\lim_{n\to\infty} \Big| \int_{D_i} \lambda_i(dx)\,P^{nd+\upsilon}(x, A) - \pi_{i+\upsilon-d}(A) \Big| = 0 \quad \text{if } d \leq i + \upsilon, \quad i = 0, 1, \ldots, d-1,$
for any set $A \subseteq D_{i+\upsilon-d}$.
We will now build up the proof of the periodicity Theorem for an MSGS, stating the theorem at the end. The important question is the periodicity of the expected population structure
$E_P[N(t, X)] = \big( E_P[N(x,t,A_1)], E_P[N(x,t,A_2)], \ldots, E_P[N(x,t,A_k)] \big)$
when the inherent Markov chain with kernel Q is positive Harris and periodic with period d, or is positive recurrent and periodic with period d. With no loss of generality, and for simplicity, in what follows we use a generic set A in place of any of the sets $A_i$, $i = 1, 2, \ldots, k$. From Equation (10) we have that
$E_P[N(x,t,A)] = T(0) \int_X \lambda(dx)\,Q^t(x,A) + \sum_{n=1}^{t-1} \Delta T(n) \int_X \mu(dx)\,Q^{t-n}(x,A). \qquad (37)$
We will work with each part of the right-hand side of Equation (37) separately. We have that
$\int_X \lambda(dx)\,Q^t(x,A) = \lambda(D_0) \int_{D_0} \lambda_0(dx)\,Q^t(x,A) + \lambda(D_1) \int_{D_1} \lambda_1(dx)\,Q^t(x,A) + \cdots + \lambda(D_{d-1}) \int_{D_{d-1}} \lambda_{d-1}(dx)\,Q^t(x,A). \qquad (38)$
In general, we can write the set A as
$A = (A \cap D_0) \cup (A \cap D_1) \cup \cdots \cup (A \cap D_{d-1}), \quad \text{with } (A \cap D_i) \cap (A \cap D_j) = \emptyset \text{ for } i \neq j. \qquad (39)$
Let, without loss of generality, $t = \pi d + \upsilon$, and consider in what follows that $d > i + \upsilon$ for all $i = 0, 1, \ldots, d-1$. Then we have that
$Q^t(x, A) = Q^{\pi d + \upsilon}(x, A) = Q^{\pi d + \upsilon}(x, A \cap D_0) + Q^{\pi d + \upsilon}(x, A \cap D_1) + \cdots + Q^{\pi d + \upsilon}(x, A \cap D_{d-1}). \qquad (40)$
From (38) we have that
$\lim_{t\to\infty} \Big| \int_X \lambda(dx)\,Q^t(x,A) - \sum_{i=0}^{d-1} \lambda(D_i)\,\pi_{i+\upsilon}(A \cap D_{i+\upsilon}) \Big| = \lim_{\pi\to\infty} \Big| \sum_{i=0}^{d-1} \lambda(D_i) \int_{D_i} \lambda_i(dx)\,Q^{\pi d + \upsilon}(x,A) - \sum_{i=0}^{d-1} \lambda(D_i)\,\pi_{i+\upsilon}(A \cap D_{i+\upsilon}) \Big| \leq \lim_{\pi\to\infty} \sum_{i=0}^{d-1} \lambda(D_i) \Big| \int_{D_i} \lambda_i(dx)\,Q^{\pi d + \upsilon}(x,A) - \pi_{i+\upsilon}(A \cap D_{i+\upsilon}) \Big|. \qquad (41)$
Now, from (40) and (41) we get that
$\lim_{t\to\infty} \Big| \int_X \lambda(dx)\,Q^t(x,A) - \sum_{i=0}^{d-1} \lambda(D_i)\,\pi_{i+\upsilon}(A \cap D_{i+\upsilon}) \Big| \leq \lim_{\pi\to\infty} \sum_{i=0}^{d-1} \lambda(D_i) \Big| \int_{D_i} \lambda_i(dx)\,Q^{\pi d + \upsilon}(x, A \cap D_{i+\upsilon}) - \pi_{i+\upsilon}(A \cap D_{i+\upsilon}) \Big| + \lim_{\pi\to\infty} \sum_{i=0}^{d-1} \lambda(D_i) \sum_{j=0,\,j\neq i+\upsilon}^{d-1} \int_{D_i} \lambda_i(dx)\,Q^{\pi d + \upsilon}(x, A \cap D_j) = 0, \qquad (42)$
where the last equality follows from Theorem 10 and relation (36). Hence, we have proved that, when $t = \pi d + \upsilon$ and $d > i + \upsilon$, the limit of $\int_X \lambda(dx)\,Q^t(x,A)$ as $t \to \infty$ is
$\sum_{i=0}^{d-1} \lambda(D_i)\,\pi_{i+\upsilon}(A \cap D_{i+\upsilon}).$
Similarly, following the same steps, we can prove that for $t = \pi d + \upsilon$ and $d \leq i + \upsilon$,
$\lim_{t\to\infty} \Big| \int_X \lambda(dx)\,Q^t(x,A) - \sum_{i=0}^{d-1} \lambda(D_i)\,\pi_{i+\upsilon-d}(A \cap D_{i+\upsilon-d}) \Big| = 0. \qquad (43)$
From (42) and (43) it is not difficult to check that
$\lim_{t\to\infty} \Big| \int_X \lambda(dx)\,Q^t(x,A) - \sum_{i=0}^{d-\upsilon-1} \lambda(D_i)\,\pi_{i+\upsilon}(A \cap D_{i+\upsilon}) - \sum_{i=d-\upsilon}^{d-1} \lambda(D_i)\,\pi_{i+\upsilon-d}(A \cap D_{i+\upsilon-d}) \Big| = 0. \qquad (45)$
Assume now that $\lim_{t\to\infty} T(t) = T$ and $\Delta T(t) \geq 0$ for every t. Let again $t = \pi d + \upsilon$; then
$U(t) = \sum_{n=1}^{t-1} \Delta T(n) \int_X \mu(dx)\,Q^{t-n}(x,A) = \sum_{n=1}^{\pi d + \upsilon - 1} \Delta T(n) \int_X \mu(dx)\,Q^{\pi d + \upsilon - n}(x,A) = \sum_{m=0}^{d-1} \sum_{\tau=0}^{\pi-1} \Delta T(\tau d + m) \int_X \mu(dx)\,Q^{\pi d + \upsilon - \tau d - m}(x,A) + \sum_{m=0}^{\upsilon-1} \Delta T(\pi d + m) \int_X \mu(dx)\,Q^{\upsilon - m}(x,A). \qquad (46)$
Now, since $\lim_{t\to\infty} \Delta T(t) = 0$, we have
$\lim_{\pi\to\infty} \sum_{m=0}^{\upsilon-1} \Delta T(\pi d + m) \int_X \mu(dx)\,Q^{\upsilon - m}(x,A) \leq \sum_{m=0}^{\upsilon-1} \lim_{\pi\to\infty} \Delta T(\pi d + m) \cdot \lim_{\pi\to\infty} \int_X \mu(dx)\,Q^{\upsilon - m}(x,A) = 0. \qquad (47)$
Hence, from (46) and (47) we get that
$\lim_{t\to\infty} U(t) = \lim_{\pi\to\infty} \sum_{m=0}^{d-1} \sum_{\tau=0}^{\pi-1} \Delta T(\tau d + m) \int_X \mu(dx)\,Q^{\pi d + \upsilon - \tau d - m}(x,A) = \lim_{\pi\to\infty} \sum_{m=0}^{\upsilon} \sum_{\tau=0}^{\pi-1} \Delta T(\tau d + m) \int_X \mu(dx)\,Q^{(\pi - \tau)d + \upsilon - m}(x,A) + \lim_{\pi\to\infty} \sum_{m=\upsilon+1}^{d-1} \sum_{\tau=0}^{\pi-1} \Delta T(\tau d + m) \int_X \mu(dx)\,Q^{(\pi - \tau - 1)d + d + \upsilon - m}(x,A). \qquad (48)$
It is apparent that, since $\Delta T(t) \geq 0$,
$\sum_{\tau=0}^{\pi-1} \Delta T(\tau d + m) \leq \sum_{t=0}^{\pi d + \upsilon} \Delta T(t) = T(\pi d + \upsilon) - T(0),$
and thus the series
$\sum_{\tau=0}^{\pi-1} \Delta T(\tau d + m)$
is bounded by $T - T(0)$; its elements are non-negative, and thus it converges. So let
$w_m = \lim_{\pi\to\infty} \frac{\sum_{\tau=0}^{\pi-1} \Delta T(\tau d + m)}{\lim_{t\to\infty} \sum_{\tau=0}^{t} \Delta T(\tau)}. \qquad (49)$
Now, let us define
$\lim_{\pi\to\infty} U_1(t) = \sum_{m=0}^{\upsilon} \lim_{\pi\to\infty} \sum_{\tau=0}^{\pi-1} \Delta T(\tau d + m) \int_X \mu(dx)\,Q^{(\pi - \tau)d + \upsilon - m}(x,A). \qquad (50)$
Now, from (49) we get that
$\lim_{\pi\to\infty} \sum_{\tau=0}^{\pi-1} \Delta T(\tau d + m) = w_m \lim_{t\to\infty} \sum_{\tau=0}^{t} \Delta T(\tau) = w_m (T - T(0)). \qquad (51)$
Therefore (50) becomes
$\lim_{\pi\to\infty} U_1(t) = (T - T(0)) \sum_{m=0}^{\upsilon} w_m \lim_{\pi\to\infty} \frac{ \sum_{\tau=0}^{\pi-1} \Delta T(\tau d + m) \int_X \mu(dx)\,Q^{(\pi - \tau)d + \upsilon - m}(x,A) }{ \sum_{\tau=0}^{\pi-1} \Delta T(\tau d + m) }. \qquad (52)$
We now provide the following useful Lemma:
Lemma 1.
If $a_n \geq 0$ with $a_n / \sum_{k=0}^{n} a_k \to 0$ as $n \to \infty$, and $b_n \to b$, then
$\lim_{n\to\infty} \frac{b_0 a_n + b_1 a_{n-1} + \cdots + b_n a_0}{\sum_{k=0}^{n} a_k} = b.$
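Lemma 1 is a weighted-average (Toeplitz-type) result, and it is easy to check numerically. The sequences below are invented for the illustration: $a_k = 1/(k+1)$ satisfies $a_n / \sum_{k \leq n} a_k \to 0$, and $b_k \to b = 3$:

```python
import numpy as np

# Numeric check of Lemma 1 with invented sequences:
#   a_k = 1/(k+1)  (non-negative, a_n / partial sums -> 0),
#   b_k = 3 + 1/(k+1)  ->  b = 3.
n = 2000
a = 1.0 / (1.0 + np.arange(n))
b = 3.0 + 1.0 / (1.0 + np.arange(n))

# numerator b_0 a_{n-1} + b_1 a_{n-2} + ... + b_{n-1} a_0 via convolution
num = np.convolve(a, b)[n - 1]
avg = num / a.sum()
print(avg)          # close to the limit b = 3
```

The weights $a_k$ spread mass over many terms, so the early (non-limiting) values of $b_k$ are asymptotically negligible, which is the content of the lemma.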
From Lemma 1, (52), and the result in (45), where we may replace $\lambda(dx)$ by $\mu(dx)$, it is not difficult to check that the following holds:
$\lim_{\pi\to\infty} \Big| U_1(t) - (T - T(0)) \sum_{m=0}^{\upsilon} \sum_{i=0}^{d-\upsilon+m-1} w_m\,\mu(D_i)\,\pi_{i+\upsilon-m}(A \cap D_{i+\upsilon-m}) - (T - T(0)) \sum_{m=0}^{\upsilon} \sum_{i=d-\upsilon+m}^{d-1} w_m\,\mu(D_i)\,\pi_{i-d+\upsilon-m}(A \cap D_{i-d+\upsilon-m}) \Big| = 0. \qquad (54)$
Define
$\lim_{t\to\infty} U_2(t) = \lim_{\pi\to\infty} \sum_{m=\upsilon+1}^{d-1} \sum_{\tau=0}^{\pi-1} \Delta T(\tau d + m) \int_X \mu(dx)\,Q^{(\pi - \tau - 1)d + d + \upsilon - m}(x,A). \qquad (55)$
Then, similarly, we can prove that
$\lim_{\pi\to\infty} \Big| U_2(t) - (T - T(0)) \sum_{m=\upsilon+1}^{d-1} \sum_{i=0}^{m-\upsilon-1} w_m\,\mu(D_i)\,\pi_{i+d+\upsilon-m}(A \cap D_{i+d+\upsilon-m}) - (T - T(0)) \sum_{m=\upsilon+1}^{d-1} \sum_{i=m-\upsilon}^{d-1} w_m\,\mu(D_i)\,\pi_{i+\upsilon-m}(A \cap D_{i+\upsilon-m}) \Big| = 0. \qquad (56)$
Therefore, from (37), (45), (48), (54) and (56), for $t = \pi d + \upsilon$ we get that
$\lim_{t\to\infty} \Big| E_P[N(x,t,A)] - T(0) \sum_{i=0}^{d-\upsilon-1} \lambda(D_i)\,\pi_{i+\upsilon}(A \cap D_{i+\upsilon}) - T(0) \sum_{i=d-\upsilon}^{d-1} \lambda(D_i)\,\pi_{i+\upsilon-d}(A \cap D_{i+\upsilon-d}) - (T - T(0)) \sum_{m=0}^{\upsilon} \sum_{i=0}^{d-\upsilon+m-1} w_m\,\mu(D_i)\,\pi_{i+\upsilon-m}(A \cap D_{i+\upsilon-m}) - (T - T(0)) \sum_{m=0}^{\upsilon} \sum_{i=d-\upsilon+m}^{d-1} w_m\,\mu(D_i)\,\pi_{i-d+\upsilon-m}(A \cap D_{i-d+\upsilon-m}) - (T - T(0)) \sum_{m=\upsilon+1}^{d-1} \sum_{i=0}^{m-\upsilon-1} w_m\,\mu(D_i)\,\pi_{i+d+\upsilon-m}(A \cap D_{i+d+\upsilon-m}) - (T - T(0)) \sum_{m=\upsilon+1}^{d-1} \sum_{i=m-\upsilon}^{d-1} w_m\,\mu(D_i)\,\pi_{i+\upsilon-m}(A \cap D_{i+\upsilon-m}) \Big| = 0,$
for $\upsilon = 0, 1, \ldots, d-1$. Hence, we have proved the following Theorem:
Theorem 12.
Let an MSGS be given, and let the inherent Markov chain of the MSGS be positive Harris and periodic with period d; or let the inherent Markov chain of the MSGS be positive recurrent and periodic with period d. Let the sets $D_0, D_1, \ldots, D_{d-1}$, with $D_i \in \mathcal{B}(X)$, be the corresponding d-cycle. Then the expected population structure splits into d converging subsequences; that is,
$\lim_{t\to\infty} \Big| E_P[N(x,t,A)] - T(0) \sum_{i=0}^{d-\upsilon-1} \lambda(D_i)\,\pi_{i+\upsilon}(A \cap D_{i+\upsilon}) - T(0) \sum_{i=d-\upsilon}^{d-1} \lambda(D_i)\,\pi_{i+\upsilon-d}(A \cap D_{i+\upsilon-d}) - (T - T(0)) \sum_{m=0}^{\upsilon} \sum_{i=0}^{d-\upsilon+m-1} w_m\,\mu(D_i)\,\pi_{i+\upsilon-m}(A \cap D_{i+\upsilon-m}) - (T - T(0)) \sum_{m=0}^{\upsilon} \sum_{i=d-\upsilon+m}^{d-1} w_m\,\mu(D_i)\,\pi_{i-d+\upsilon-m}(A \cap D_{i-d+\upsilon-m}) - (T - T(0)) \sum_{m=\upsilon+1}^{d-1} \sum_{i=0}^{m-\upsilon-1} w_m\,\mu(D_i)\,\pi_{i+d+\upsilon-m}(A \cap D_{i+d+\upsilon-m}) - (T - T(0)) \sum_{m=\upsilon+1}^{d-1} \sum_{i=m-\upsilon}^{d-1} w_m\,\mu(D_i)\,\pi_{i+\upsilon-m}(A \cap D_{i+\upsilon-m}) \Big| = 0,$
for υ = 0 , 1 , 2 , , d 1 .
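Theorem 12 can be illustrated on a finite-state analogue (the 4-state kernel below is an illustrative choice, not from the paper): for a kernel with period $d=2$ the expected structure does not converge, but its two parity subsequences do, to distinct limits.

```python
import numpy as np

# Finite-state sketch of Theorem 12 with an illustrative kernel of
# period d = 2 and cyclic classes D0 = {0, 1}, D1 = {2, 3}.
P = np.array([[0.0, 0.0, 0.3, 0.7],
              [0.0, 0.0, 0.6, 0.4],
              [0.5, 0.5, 0.0, 0.0],
              [0.2, 0.8, 0.0, 0.0]])

x = np.array([1.0, 0.0, 0.0, 0.0])   # initial expected structure in D0
traj = []
for t in range(200):
    x = x @ P
    traj.append(x.copy())

# Each parity subsequence has (numerically) converged ...
print(np.allclose(traj[-1], traj[-3]), np.allclose(traj[-2], traj[-4]))   # True True
# ... but the two subsequence limits differ: mass alternates D0 <-> D1.
print(np.allclose(traj[-1], traj[-2]))   # False
```

The chain is aperiodic on the 2-skeleton, so each subsequence converges geometrically, while the full sequence oscillates between the two cyclic classes.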

5. Total Variability from the Invariant Measures in the Periodic Case. Coupling Theorems

In this section, we study the total variability from the invariant measures in the periodic case for an MSGS. We show that the total variation in the periodic case is finite. This is also known as the coupling problem (see [58]). From [34] (p. 332) we get the following:
Theorem 13.
Suppose that $\Phi$ is a positive Harris and aperiodic chain, and assume that the chain has an atom $\alpha\in\mathcal{B}^+(X)$. Then for any regular initial distributions $\lambda,\mu$,
$$\sum_{n=1}^{\infty}\int_X\int_X\lambda(dx)\,\mu(dy)\,\big\|P^n(x,\cdot)-P^n(y,\cdot)\big\|<\infty,$$
and in the case that $\Phi$ is regular, for any $x,y\in X$,
$$\sum_{n=1}^{\infty}\big\|P^n(x,\cdot)-P^n(y,\cdot)\big\|<\infty.$$
Theorem 14.
Suppose $\Phi$ is a positive Harris and aperiodic chain. Then for any regular initial distributions $\lambda,\mu$,
$$\sum_{n=1}^{\infty}\int_X\int_X\lambda(dx)\,\mu(dy)\,\big\|P^n(x,\cdot)-P^n(y,\cdot)\big\|<\infty.$$
Theorem 15.
Suppose that $\Phi$ is a positive Harris and aperiodic chain, and let $\alpha$ be an accessible atom. If $E_{\alpha}[\tau_{\alpha}^2]<\infty$, then for any regular initial distribution $\lambda$,
$$\sum_{n=1}^{\infty}\big\|\lambda P^n-\pi\big\|<\infty.$$
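The summability asserted in Theorem 15 can be seen concretely on a finite aperiodic chain, where geometric ergodicity makes the total-variation deviations a convergent geometric series (the two-state kernel below is illustrative, not from the paper):

```python
import numpy as np

# Finite-state sketch of Theorem 15: for an aperiodic positive chain,
# the total-variation deviations from the stationary distribution pi
# are summable over n.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# stationary distribution: left eigenvector of P for eigenvalue 1
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()

lam = np.array([1.0, 0.0])          # initial distribution
total, x = 0.0, lam.copy()
for n in range(1, 500):
    x = x @ P
    total += np.abs(x - pi).sum()   # || lam P^n - pi || in l1
print(total)   # about 1.5556 = (2/3) * 0.7 / 0.3: the series converges
```

Here $\|\lambda P^n-\pi\| = \tfrac{2}{3}(0.7)^n$ (the second eigenvalue of $P$ is $0.7$), so the partial sums converge to $14/9$.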
Now, if we assume that the inherent Markov chain $\Phi_Q$ with kernel $Q$ of the MSGS is positive Harris with period $d$, then by Proposition 1 the chain restricted to each cyclic set is aperiodic and positive Harris on the $d$-skeleton, and by Theorems 13–15 it is straightforward to check the following theorem:
Theorem 16.
If the inherent Markov chain $\Phi_Q$ with kernel $Q$ of the MSGS is positive Harris with period $d$, then for the sets $D_0,D_1,\ldots,D_{d-1}$ with $D_i\in\mathcal{B}(X)$ we have the following:
(a) For any regular initial distributions $\lambda,\mu$ and for $d>i+\upsilon$,
$$\sum_{n=1}^{\infty}\int_{D_i}\int_{D_i}\lambda(dx)\,\mu(dy)\,\big|Q^{nd+\upsilon}(x,A\cap D_{i+\upsilon})-Q^{nd+\upsilon}(y,A\cap D_{i+\upsilon})\big|<\infty,\qquad x,y\in D_i,$$
for $i,\upsilon=0,1,\ldots,d-1$. Also, for $d\leq i+\upsilon$,
$$\sum_{n=1}^{\infty}\int_{D_i}\int_{D_i}\lambda(dx)\,\mu(dy)\,\big|Q^{nd+\upsilon}(x,A\cap D_{i+\upsilon-d})-Q^{nd+\upsilon}(y,A\cap D_{i+\upsilon-d})\big|<\infty,\qquad x,y\in D_i,$$
for $i,\upsilon=0,1,\ldots,d-1$.
(b) If for the accessible atom $W$ we have that $E_W[\tau_W^2]<\infty$, then for any regular initial distribution $\lambda$ we have, for $d>i+\upsilon$, $x\in D_i$, and $i,\upsilon=0,1,\ldots,d-1$,
$$\sum_{n=1}^{\infty}\Big|\int_{D_i}\lambda(dx)\,Q^{nd+\upsilon}(x,A\cap D_{i+\upsilon})-\pi^{i+\upsilon}(A\cap D_{i+\upsilon})\Big|<\infty,$$
and for $d\leq i+\upsilon$, $x\in D_i$, and $i,\upsilon=0,1,\ldots,d-1$,
$$\sum_{n=1}^{\infty}\Big|\int_{D_i}\lambda(dx)\,Q^{nd+\upsilon}(x,A\cap D_{i+\upsilon-d})-\pi^{i+\upsilon-d}(A\cap D_{i+\upsilon-d})\Big|<\infty.$$
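Part (a) can be illustrated on the same kind of finite periodic analogue used above (the kernel is an illustrative choice): along the $d$-skeleton, the transition laws started at two points of the same cyclic class couple, and the total-variation differences are summable in $n$.

```python
import numpy as np

# Sketch of Theorem 16(a) on a finite chain with period d = 2 and cyclic
# classes D0 = {0, 1}, D1 = {2, 3} (illustrative kernel): the rows of
# Q^{nd+upsilon} started in the same cyclic class couple summably.
P = np.array([[0.0, 0.0, 0.3, 0.7],
              [0.0, 0.0, 0.6, 0.4],
              [0.5, 0.5, 0.0, 0.0],
              [0.2, 0.8, 0.0, 0.0]])
d, upsilon = 2, 0

skeleton = np.linalg.matrix_power(P, d)
Pn = np.linalg.matrix_power(P, d + upsilon)   # Q^{nd+upsilon} for n = 1
total = 0.0
for n in range(1, 300):
    # x = 0 and y = 1 both lie in D0; compare the two transition rows
    total += np.abs(Pn[0] - Pn[1]).sum()
    Pn = Pn @ skeleton
print(total)   # about 0.1978 = 0.18 / 0.91: the series is finite
```

For this kernel the row difference contracts by the factor $0.09$ per skeleton step, so the series is geometric, mirroring the role of aperiodicity on the $d$-skeleton.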
We will now prove the following coupling theorem for an MSGS with inherent Markov chain Φ Q , which is assumed to be positive Harris and periodic.
Theorem 17.
Assume that the inherent Markov chain $\Phi_Q$ of a Markov system that lives in $(X,\mathcal{B}(X))$ with kernel $Q(x,\cdot)$ is positive Harris with period $d$. Also, assume that
$$\lim_{t\to\infty}T(t)=T,\qquad \Delta T(t)\geq 0\ \text{for every}\ t\in\mathbb{Z}_+.$$
Then for any regular distributions $\lambda_1,\mu_1,\lambda_2,\mu_2$ we have
$$\sum_{t=1}^{\infty}\big|E[N(x,t,A)]-E[N(y,t,A)]\big|<\infty,$$
where the two expected population structures correspond to the pairs $(\lambda_1,\mu_1)$ and $(\lambda_2,\mu_2)$, respectively.
Proof. 
We have that
$$\begin{aligned}\sum_{t=1}^{\infty}\big|E[N(x,t,A)]-E[N(y,t,A)]\big|&=\sum_{t=1}^{\infty}\Big|T(0)\int_X\lambda_1(dx)\,Q^{t}(x,A)+\sum_{n=1}^{t-1}\Delta T(n)\int_X\mu_1(dx)\,Q^{t-n}(x,A)\\&\qquad-T(0)\int_X\lambda_2(dy)\,Q^{t}(y,A)-\sum_{n=1}^{t-1}\Delta T(n)\int_X\mu_2(dy)\,Q^{t-n}(y,A)\Big|\\&\leq\sum_{t=1}^{\infty}T(0)\Big|\int_X\lambda_1(dx)\,Q^{t}(x,A)-\int_X\lambda_2(dy)\,Q^{t}(y,A)\Big|\\&\qquad+\sum_{t=1}^{\infty}\sum_{n=1}^{t-1}\Delta T(n)\Big|\int_X\mu_1(dx)\,Q^{t-n}(x,A)-\int_X\mu_2(dy)\,Q^{t-n}(y,A)\Big|\\&=U_{\lambda}(t)+U_{\mu}(t).\end{aligned}$$
With no loss of generality assume that $t=\pi d+\upsilon$. Denote
$$\lambda_{1i}(dx)=\frac{\lambda_1(dx)}{\lambda_1(D_i)}\quad\text{and}\quad\lambda_{2i}(dx)=\frac{\lambda_2(dx)}{\lambda_2(D_i)}.$$
From (58) and (59) we have that
$$U_{\lambda}(t)\leq\sum_{\pi=0}^{\infty}\sum_{i=0}^{d-1}T(0)\Big|\lambda_1(D_i)\int_{D_i}\lambda_{1i}(dx)\,Q^{\pi d+\upsilon}(x,A)-\lambda_2(D_i)\int_{D_i}\lambda_{2i}(dy)\,Q^{\pi d+\upsilon}(y,A)\Big|.$$
We have that
$$\sum_{i=0}^{d-1}\lambda_1(D_i)=1\quad\text{and}\quad\sum_{i=0}^{d-1}\lambda_2(D_i)=1.$$
From (61) it is apparent that with no loss of generality we may assume that
$$\lambda_1(D_i)\leq\lambda_2(D_i)\ \text{for}\ i=0,1,\ldots,j\quad\text{and}\quad\lambda_1(D_i)>\lambda_2(D_i)\ \text{for}\ i=j+1,\ldots,d-1.$$
Also, without loss of generality we may again assume that $d>j+\upsilon$; then from (60) and (62),
$$\begin{aligned}U_{\lambda}(t)&\leq\sum_{\pi=0}^{\infty}\sum_{i=0}^{j}T(0)\,\lambda_2(D_i)\Big|\int_{D_i}\lambda_{1i}(dx)\,Q^{\pi d+\upsilon}(x,A\cap D_{i+\upsilon})-\int_{D_i}\lambda_{2i}(dy)\,Q^{\pi d+\upsilon}(y,A\cap D_{i+\upsilon})\Big|\\&\quad+\sum_{\pi=0}^{\infty}\sum_{i=j+1}^{d-\upsilon-1}T(0)\,\lambda_1(D_i)\Big|\int_{D_i}\lambda_{1i}(dx)\,Q^{\pi d+\upsilon}(x,A\cap D_{i+\upsilon})-\int_{D_i}\lambda_{2i}(dy)\,Q^{\pi d+\upsilon}(y,A\cap D_{i+\upsilon})\Big|\\&\quad+\sum_{\pi=0}^{\infty}\sum_{i=d-\upsilon}^{d-1}T(0)\,\lambda_1(D_i)\Big|\int_{D_i}\lambda_{1i}(dx)\,Q^{\pi d+\upsilon}(x,A\cap D_{i+\upsilon-d})-\int_{D_i}\lambda_{2i}(dy)\,Q^{\pi d+\upsilon}(y,A\cap D_{i+\upsilon-d})\Big|\\&=U_{\lambda 1}(t)+U_{\lambda 2}(t)+U_{\lambda 3}(t).\end{aligned}$$
Now, it is not difficult to check that
$$\int_{D_i}\lambda_{1i}(dx)\,Q^{\pi d+\upsilon}(x,A\cap D_{i+\upsilon})=\int_{D_i}\int_{D_i}\lambda_{1i}(dx)\,\lambda_{2i}(dy)\,Q^{\pi d+\upsilon}(x,A\cap D_{i+\upsilon}),$$
and
$$\int_{D_i}\lambda_{2i}(dy)\,Q^{\pi d+\upsilon}(y,A\cap D_{i+\upsilon})=\int_{D_i}\int_{D_i}\lambda_{1i}(dx)\,\lambda_{2i}(dy)\,Q^{\pi d+\upsilon}(y,A\cap D_{i+\upsilon}).$$
Therefore from (63)–(65) we get that
$$U_{\lambda 1}(t)\leq\sum_{i=0}^{j}T(0)\,\lambda_2(D_i)\sum_{\pi=0}^{\infty}\int_{D_i}\int_{D_i}\lambda_{1i}(dx)\,\lambda_{2i}(dy)\,\big|Q^{\pi d+\upsilon}(x,A\cap D_{i+\upsilon})-Q^{\pi d+\upsilon}(y,A\cap D_{i+\upsilon})\big|.$$
From Theorem 16 and (66) we get that
$$U_{\lambda 1}(t)\leq\sum_{i=0}^{j}T(0)\,\lambda_2(D_i)\,K_1<\infty.$$
In a similar way we get that $U_{\lambda 2}(t)<\infty$ and $U_{\lambda 3}(t)<\infty$, and consequently from (63) we get that
$$U_{\lambda}(t)<\infty.$$
The case $d\leq j+\upsilon$ is proved in an analogous way.
We have that
$$\begin{aligned}U_{\mu}(t)&=\sum_{t=1}^{\infty}\sum_{n=0}^{t-1}\Delta T(n)\Big|\int_X\mu_1(dx)\,Q^{t-n}(x,A)-\int_X\mu_2(dy)\,Q^{t-n}(y,A)\Big|\\&=\sum_{\pi=0}^{\infty}\sum_{\upsilon=0}^{d-1}\sum_{n=0}^{\pi d+\upsilon-1}\Delta T(n)\Big|\int_X\mu_1(dx)\,Q^{\pi d+\upsilon-n}(x,A)-\int_X\mu_2(dy)\,Q^{\pi d+\upsilon-n}(y,A)\Big|\\&=\sum_{\pi=0}^{\infty}\sum_{\upsilon=0}^{d-1}\sum_{m=0}^{d-1}\sum_{\tau=0}^{\pi-1}\Delta T(\tau d+m)\Big|\int_X\mu_1(dx)\,Q^{(\pi-\tau)d+\upsilon-m}(x,A)-\int_X\mu_2(dy)\,Q^{(\pi-\tau)d+\upsilon-m}(y,A)\Big|\\&\quad+\sum_{\pi=0}^{\infty}\sum_{m=0}^{\upsilon-1}\Delta T(\pi d+m)\Big|\int_X\mu_1(dx)\,Q^{\upsilon-m}(x,A)-\int_X\mu_2(dy)\,Q^{\upsilon-m}(y,A)\Big|\\&=U_{\mu\pi}(t)+U_{\mu\upsilon}(t).\end{aligned}$$
We start with $U_{\mu\upsilon}(t)$, and we have that
$$\begin{aligned}U_{\mu\upsilon}(t)&\leq\sum_{\pi=0}^{\infty}\sum_{m=0}^{\upsilon-1}\Delta T(\pi d+m)\Big|\int_X\mu_1(dx)\,Q^{\upsilon-m}(x,A)-\int_X\mu_2(dy)\,Q^{\upsilon-m}(y,A)\Big|\\&\leq\sum_{\pi=0}^{\infty}\sum_{m=0}^{\upsilon-1}\Delta T(\pi d+m)\sum_{i=0}^{d-1}\Big|\int_{D_i}\mu_1(dx)\,Q^{\upsilon-m}(x,A\cap D_{i+\upsilon-m})-\int_{D_i}\mu_2(dy)\,Q^{\upsilon-m}(y,A\cap D_{i+\upsilon-m})\Big|.\end{aligned}$$
Denote
$$\mu_{1i}(dx)=\frac{\mu_1(dx)}{\mu_1(D_i)}\quad\text{and}\quad\mu_{2i}(dx)=\frac{\mu_2(dx)}{\mu_2(D_i)}.$$
Then with no loss of generality we may assume that there is an $r$ such that
$$\mu_1(D_i)\leq\mu_2(D_i)\ \text{for}\ i=0,1,\ldots,r\quad\text{and}\quad\mu_1(D_i)>\mu_2(D_i)\ \text{for}\ i=r+1,\ldots,d-1.$$
From (70)–(72) and using Theorem 16 we get that
$$\begin{aligned}U_{\mu\upsilon}(t)&\leq\sum_{\pi=0}^{\infty}\sum_{m=0}^{\upsilon-1}\Delta T(\pi d+m)\sum_{i=0}^{r}\mu_2(D_i)\int_{D_i}\int_{D_i}\mu_{1i}(dx)\,\mu_{2i}(dy)\,\big|Q^{\upsilon-m}(x,A\cap D_{i+\upsilon-m})-Q^{\upsilon-m}(y,A\cap D_{i+\upsilon-m})\big|\\&\quad+\sum_{\pi=0}^{\infty}\sum_{m=0}^{\upsilon-1}\Delta T(\pi d+m)\sum_{i=r+1}^{d-1}\mu_1(D_i)\int_{D_i}\int_{D_i}\mu_{1i}(dx)\,\mu_{2i}(dy)\,\big|Q^{\upsilon-m}(x,A\cap D_{i+\upsilon-m})-Q^{\upsilon-m}(y,A\cap D_{i+\upsilon-m})\big|\\&\leq\sum_{\pi=0}^{\infty}\sum_{m=0}^{\upsilon-1}\Delta T(\pi d+m)\Big[\sum_{i=0}^{r}\mu_2(D_i)\,K_1^{\upsilon-m}+\sum_{i=r+1}^{d-1}\mu_1(D_i)\,K_2^{\upsilon-m}\Big]<\infty.\end{aligned}$$
We now move to $U_{\mu\pi}(t)$, and we have that
$$\begin{aligned}U_{\mu\pi}(t)&=\sum_{\pi=0}^{\infty}\sum_{\upsilon=0}^{d-1}\sum_{m=0}^{\upsilon}\sum_{\tau=0}^{\pi-1}\Delta T(\tau d+m)\Big|\int_X\mu_1(dx)\,Q^{(\pi-\tau)d+\upsilon-m}(x,A)-\int_X\mu_2(dy)\,Q^{(\pi-\tau)d+\upsilon-m}(y,A)\Big|\\&\quad+\sum_{\pi=0}^{\infty}\sum_{\upsilon=0}^{d-1}\sum_{m=\upsilon+1}^{d-1}\sum_{\tau=0}^{\pi-1}\Delta T(\tau d+m)\Big|\int_X\mu_1(dx)\,Q^{(\pi-\tau-1)d+d+\upsilon-m}(x,A)-\int_X\mu_2(dy)\,Q^{(\pi-\tau-1)d+d+\upsilon-m}(y,A)\Big|\\&=U_{\mu\pi,1}(t)+U_{\mu\pi,2}(t).\end{aligned}$$
We start first with $U_{\mu\pi,1}(t)$ and we get that
$$\begin{aligned}U_{\mu\pi,1}(t)&\leq\sum_{\tau=0}^{\infty}\sum_{\upsilon=0}^{d-1}\sum_{m=0}^{\upsilon}\Delta T(\tau d+m)\sum_{\pi-\tau=1}^{\infty}\sum_{i=0}^{d-1}\Big|\int_{D_i}\mu_1(dx)\,Q^{(\pi-\tau)d+\upsilon-m}(x,A)-\int_{D_i}\mu_2(dy)\,Q^{(\pi-\tau)d+\upsilon-m}(y,A)\Big|\\&\leq\sum_{\tau=0}^{\infty}\sum_{\upsilon=0}^{d-1}\sum_{m=0}^{\upsilon}\Delta T(\tau d+m)\sum_{\pi-\tau=1}^{\infty}\sum_{i=0}^{r}\mu_2(D_i)\int_{D_i}\int_{D_i}\mu_{1i}(dx)\,\mu_{2i}(dy)\,\big|Q^{(\pi-\tau)d+\upsilon-m}(x,A)-Q^{(\pi-\tau)d+\upsilon-m}(y,A)\big|\\&\quad+\sum_{\tau=0}^{\infty}\sum_{\upsilon=0}^{d-1}\sum_{m=0}^{\upsilon}\Delta T(\tau d+m)\sum_{\pi-\tau=1}^{\infty}\sum_{i=r+1}^{d-1}\mu_1(D_i)\int_{D_i}\int_{D_i}\mu_{1i}(dx)\,\mu_{2i}(dy)\,\big|Q^{(\pi-\tau)d+\upsilon-m}(x,A)-Q^{(\pi-\tau)d+\upsilon-m}(y,A)\big|\\&=U_{\mu\pi,1,1}(t)+U_{\mu\pi,1,2}(t).\end{aligned}$$
Now, using Theorem 16 and (75), and since by Theorem 16 the inner series over $\pi-\tau$ is bounded by a constant $C^{\upsilon-m}$, we have that
$$U_{\mu\pi,1,1}(t)\leq\sum_{\tau=0}^{\infty}\sum_{\upsilon=0}^{d-1}\sum_{m=0}^{\upsilon}\Delta T(\tau d+m)\sum_{i=0}^{r}\mu_2(D_i)\,C^{\upsilon-m}<\infty.$$
Similarly, we also get that
$$U_{\mu\pi,1,2}(t)<\infty\quad\text{and}\quad U_{\mu\pi,2}(t)<\infty.$$
From (68), (69), (76), and (77), we conclude the proof of the theorem. □
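Theorem 17 can be illustrated on a finite-state analogue of an MSGS (a sketch: the periodic kernel, the initial and recruitment distributions, and the growth sequence $\Delta T(t)$ below are all illustrative assumptions, and the finite-state recursion replaces the integrals of the proof):

```python
import numpy as np

# Finite-state sketch of the coupling theorem (Theorem 17): two MSGS
# copies with different (illustrative) initial and recruitment
# distributions have expected population structures whose total-variation
# differences are summable over t.
P = np.array([[0.0, 0.0, 0.3, 0.7],
              [0.0, 0.0, 0.6, 0.4],
              [0.5, 0.5, 0.0, 0.0],
              [0.2, 0.8, 0.0, 0.0]])          # periodic kernel, d = 2
T0 = 10.0
dT = [2.0 * 0.5 ** n for n in range(1, 60)]   # Delta T(n) >= 0, summable

def expected_structure(lam, mu, t):
    # finite-state analogue of
    # E[N(t)] = T(0) lam Q^t + sum_{n=1}^{t-1} Delta T(n) mu Q^{t-n}
    x = T0 * lam @ np.linalg.matrix_power(P, t)
    for n in range(1, t):
        x = x + dT[n - 1] * mu @ np.linalg.matrix_power(P, t - n)
    return x

lam1, mu1 = np.array([1.0, 0.0, 0, 0]), np.array([0.5, 0.5, 0, 0])
lam2, mu2 = np.array([0.0, 1.0, 0, 0]), np.array([0.2, 0.8, 0, 0])

total = sum(np.abs(expected_structure(lam1, mu1, t)
                   - expected_structure(lam2, mu2, t)).sum()
            for t in range(1, 40))
print(total)   # the partial sums stabilize: the coupling series is finite
```

Both copies start in the same cyclic class, so the differences of the transition rows decay geometrically along the skeleton, and the summability of $\Delta T$ handles the recruitment part, as in the proof.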
Similarly, with the use of Theorem 16(b), we can prove the following theorem:
Theorem 18.
Assume that the inherent Markov chain $\Phi_Q$ of a Markov system that lives in $(X,\mathcal{B}(X))$ with kernel $Q(x,\cdot)$ is positive Harris with period $d$. Assume, in addition, that for the accessible atom $W$ we have $E_W[\tau_W^2]<\infty$ and that
$$\lim_{t\to\infty}T(t)=T,\qquad\Delta T(t)\geq 0\ \text{for every}\ t\in\mathbb{Z}_+.$$
Then for any regular distribution $\lambda$ we have
$$\begin{aligned}\sum_{t=1}^{\infty}\Big|E[N(x,t,A)]&-T(0)\sum_{\upsilon=0}^{d-1}\sum_{i=0}^{d-\upsilon-1}\lambda(D_i)\,\pi^{i+\upsilon}(A\cap D_{i+\upsilon})-T(0)\sum_{\upsilon=0}^{d-1}\sum_{i=d-\upsilon}^{d-1}\lambda(D_i)\,\pi^{i+\upsilon-d}(A\cap D_{i+\upsilon-d})\\&-(T-T(0))\sum_{\upsilon=0}^{d-1}\sum_{m=0}^{\upsilon}\sum_{i=0}^{d-\upsilon+m-1}w_m\,\mu(D_i)\,\pi^{i+\upsilon-m}(A\cap D_{i+\upsilon-m})\\&-(T-T(0))\sum_{\upsilon=0}^{d-1}\sum_{m=0}^{\upsilon}\sum_{i=d-\upsilon+m}^{d-1}w_m\,\mu(D_i)\,\pi^{i-d+\upsilon-m}(A\cap D_{i-d+\upsilon-m})\\&-(T-T(0))\sum_{\upsilon=0}^{d-1}\sum_{m=\upsilon+1}^{d-1}\sum_{i=0}^{m-\upsilon-1}w_m\,\mu(D_i)\,\pi^{i+d+\upsilon-m}(A\cap D_{i+d+\upsilon-m})\\&-(T-T(0))\sum_{\upsilon=0}^{d-1}\sum_{m=\upsilon+1}^{d-1}\sum_{i=m-\upsilon}^{d-1}w_m\,\mu(D_i)\,\pi^{i+\upsilon-m}(A\cap D_{i+\upsilon-m})\Big|<\infty.\end{aligned}$$

6. Conclusions and Further Research

We studied a population that lives in a general state space, has leavers and new entrants at every instant, and evolves under the Markov property. We proved that, under certain conditions, the expected population structure converges at a geometric rate to an invariant population structure, with loss of memory. We also provided the analogous result on the V-uniform ergodicity of an MSGS. Moreover, under the assumption that the inherent Markov chain is periodic with period d, we proved that the sequence of expected population structures splits into d convergent subsequences and provided their limits in closed form. Finally, we proved that the total variability from the invariant measure in the periodic case is finite. We concluded the novel results by proving two coupling theorems under the assumption that the inherent Markov chain is positive Harris with period d, using different additional conditions in each case. These theoretical results have important applications both in the classical areas where Markov chains in general state spaces have already been applied and especially in the important area of health systems, which nowadays are in crisis all over the world. Note that health systems are large manpower systems in which each member carries many parameters that are used to categorize him/her into the different groups; this characteristic is what makes health systems an area of potential application of the present results (see [33]).
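The geometric-rate statement summarized above can be sketched on a finite aperiodic analogue of an MSGS (the kernel, the initial and recruitment distributions, and the total-size sequence $T(t)$ below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Relative expected structure E[N(t)] / T(t-1) for a finite, aperiodic
# analogue: it approaches the stationary structure pi, and the error
# shrinks at a geometric rate (loss of memory).
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])               # irreducible, aperiodic
lam = np.array([1.0, 0.0, 0.0])               # initial distribution
mu = np.array([0.0, 0.5, 0.5])                # recruitment distribution
T = lambda t: 10.0 + 5.0 * (1.0 - 0.5 ** t)   # T(t) increases to T = 15

def EN(t):
    # E[N(t)] = T(0) lam P^t + sum_{n=1}^{t-1} (T(n) - T(n-1)) mu P^{t-n}
    x = T(0) * lam @ np.linalg.matrix_power(P, t)
    for n in range(1, t):
        x = x + (T(n) - T(n - 1)) * mu @ np.linalg.matrix_power(P, t - n)
    return x

w, v = np.linalg.eig(P.T)                     # stationary distribution pi
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()

# E[N(t)] has total mass T(t-1), so E[N(t)] / T(t-1) is a structure.
errs = [np.abs(EN(t) / T(t - 1) - pi).sum() for t in range(5, 30)]
print(errs[0], errs[-1])   # the deviation from pi decays toward 0
```

The decay factor is governed by the slower of the convergence of $T(t)$ and the second-largest eigenvalue modulus of the kernel, which is the finite-state shadow of the geometric ergodicity used in the paper.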
The results in the present paper, together with the ones introduced in [33], provide the nucleus of a new path for extensive research in the area. An immediate further research interest is to establish laws of large numbers for an MSGS, as a possible extension of the laws of large numbers for populations that live in countable spaces [59].

Funding

This research received no external funding.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Vassiliou, P.-C.G. Asymptotic behaviour of Markov systems. J. Appl. Probab. 1982, 19, 851–857.
  2. Vassiliou, P.-C.G.; Papadopoulou, A.A. Nonhomogeneous semi-Markov systems and maintainability of the state sizes. J. Appl. Probab. 1992, 29, 519–534.
  3. Vassiliou, P.-C.G. The evolution of the theory of non-homogeneous Markov systems. Appl. Stoch. Models Data Anal. 1997, 13, 159–176.
  4. Ugwuogo, F.I.; McClean, S.I. Modelling heterogeneity in manpower systems: A review. Appl. Stoch. Models Bus. Ind. 2000, 2, 99–110.
  5. Young, A.; Almond, G. Predicting distributions of staff. Comput. J. 1961, 3, 246–250.
  6. Bartholomew, D.J. A multistage renewal process. J. R. Stat. Soc. B 1963, 25, 150–168.
  7. Bartholomew, D.J. Stochastic Models for Social Processes; Wiley: New York, NY, USA, 1982.
  8. Young, A.; Vassiliou, P.-C.G. A non-linear model on the promotion of staff. J. R. Stat. Soc. A 1974, 138, 584–595.
  9. Vassiliou, P.-C.G. A Markov model for wastage in manpower systems. Oper. Res. Quart. 1976, 27, 57–70.
  10. Vassiliou, P.-C.G. A high order non-linear Markovian model for promotion in manpower systems. J. R. Stat. Soc. A 1978, 141, 86–94.
  11. Mathew, E.; Foucher, Y.; Dellamonika, P.; Daures, J.-P. Parametric and Non-Homogeneous Semi-Markov Process for HIV Control; Working Paper; Archer Hospital: Nice, France, 2006.
  12. Foucher, Y.; Mathew, E.; Saint Pierre, P.; Durant, J.-F.; Daures, J.-P. A semi-Markov model based on generalized Weibull distribution with an illustration for HIV disease. Biom. J. 2005, 47, 825–833.
  13. Dessie, Z.G. Modeling of HIV/AIDS dynamic evolution using non-homogeneous semi-Markov process. Springerplus 2014, 3, 537.
  14. Saint Pierre, P. Modèles multi-états de type markovien et application à l'asthme. Ph.D. Thesis, University of Montpellier I, Montpellier, France, 2005.
  15. Barbu, V.; Boussemart, M.; Limnios, N. Discrete-time semi-Markov model for reliability and survival analysis. Commun. Stat. Theory Methods 2004, 33, 2833–2886.
  16. Perez-Ocon, R.; Castro, J.E.R. A semi-Markov model in biomedical studies. Commun. Stat. Theory Methods 2004, 33, 437–455.
  17. Ocana-Riola, R. Non-homogeneous Markov process for biomedical data analysis. Biom. J. 2005, 47, 369–376.
  18. McClean, S.I.; Scotney, B.; Robinson, S. Conceptual clustering of heterogeneous gene expression sequences. Artif. Intell. Rev. 2003, 20, 53–73.
  19. Papadopoulou, A.A. Some results on modeling biological sequences and web navigation with a semi-Markov chain. Commun. Stat. Theory Methods 2013, 41, 2853–2871.
  20. De Feyter, T. Modelling heterogeneity in manpower planning: Dividing the personnel system into more homogeneous subgroups. Appl. Stoch. Models Bus. Ind. 2006, 22, 321–334.
  21. Nilakantan, K.; Raghavendra, B.G. Control aspects in proportionality Markov manpower systems. Appl. Stoch. Models Data Anal. 2005, 7, 27–341.
  22. Yadavalli, V.S.S.; Natarajan, R.; Udayabhaskaram, S. Optimal training policy for promotion-stochastic models of manpower systems. Electron. Publ. 2002, 13, 13–23.
  23. Sen Gupta, A.; Ugwuogo, F.L. Modelling multi-stage processes through multivariate distributions. J. Appl. Stat. 2006, 33, 175–187.
  24. De Feyter, T.; Guerry, M. Markov manpower models: A review. In Handbook of Optimization Theory: Decision Analysis and Applications; Varela, J., Acuidja, S., Eds.; Nova Science Publishers: Hauppauge, NY, USA, 2011; pp. 67–88.
  25. Guerry, M.A. Some results on the embeddable problem for discrete-time Markov models. Commun. Stat. Theory Methods 2014, 43, 1575–1584.
  26. Crooks, G.E. Path-ensemble averages in systems driven far from equilibrium. Phys. Rev. E 2000, 61, 2361–2366.
  27. Patoucheas, P.D.; Stamou, G. Non-homogeneous Markovian models in ecological modelling: A study of the zoobenthos dynamics in Thermaikos Gulf, Greece. Ecol. Model. 1993, 66, 197–215.
  28. Faddy, M.J.; McClean, S.I. Markov chain modelling for geriatric patient care. Methods Inf. Med. 2005, 44, 369–373.
  29. Gorunescu, F.; McClean, S.I.; Millard, P.H. A queueing model for bed-occupancy management and planning of hospitals. J. Oper. Res. Soc. 2002, 53, 19–24.
  30. Gorunescu, F.; McClean, S.I.; Millard, P.H. Using a queueing model to help plan bed allocation in a department of geriatric medicine. Health Care Manag. Sci. 2002, 5, 307–312.
  31. Marshall, A.H.; McClean, S.I. Using Coxian phase-type distributions to identify patient characteristics for duration of stay in hospital. Health Care Manag. Sci. 2004, 7, 285–289.
  32. McClean, S.I.; Millard, P. Where to treat the older patient? Can Markov models help us better understand the relationship between hospital and community care? J. Oper. Res. Soc. 2007, 58, 255–261.
  33. Vassiliou, P.-C.G. Markov systems in a general state space. Commun. Stat. Theory Methods 2014, 43, 1322–1339.
  34. Meyn, S.; Tweedie, R.L. Markov Chains and Stochastic Stability; Cambridge University Press: Cambridge, UK, 2009.
  35. Tsaklidis, G.M.; Vassiliou, P.-C.G. Asymptotic periodicity of the variances and covariances of the state sizes in non-homogeneous Markov systems. J. Appl. Probab. 1988, 25, 21–33.
  36. Georgiou, A.C.; Vassiliou, P.-C.G. Periodicity of asymptotically attainable structures in non-homogeneous Markov systems. Linear Algebra Its Appl. 1992, 176, 137–174.
  37. Doeblin, W. Sur les propriétés asymptotiques de mouvements régis par certains types de chaînes simples. Bull. Math. Soc. Roum. Sci. 1937, 39, 57–115.
  38. Doeblin, W. Éléments d'une théorie générale des chaînes simples constantes de Markov. Ann. Sci. École Norm. Supér. 1940, 57, 61–111.
  39. Doob, J.L. Stochastic Processes; John Wiley: New York, NY, USA, 1953.
  40. Harris, T.E. The existence of stationary measures for certain Markov processes. In Proceedings of the 3rd Berkeley Symposium on Mathematical Statistics and Probability; University of California Press: Berkeley, CA, USA, 1956; Volume 2, pp. 113–124.
  41. Orey, S. Recurrent Markov chains. Pacific J. Math. 1959, 9, 805–827.
  42. Tweedie, R.L. R-theory for Markov chains on a general state space I: Solidarity properties and R-recurrent chains. Adv. Appl. Probab. 1974, 2, 840–864.
  43. Tweedie, R.L. R-theory for Markov chains on a general state space II: R-subinvariant measures for R-transient chains. Adv. Appl. Probab. 1974, 2, 865–878.
  44. Tweedie, R.L. Sufficient conditions for ergodicity and recurrence of Markov chains on a general state space. Stoch. Process. Appl. 1975, 3, 385–403.
  45. Vassiliou, P.-C.G. Exotic properties of non-homogeneous Markov and semi-Markov systems. Commun. Stat. Theory Methods 2013, 42, 2971–2990.
  46. Vassiliou, P.-C.G.; Tsaklidis, G.M. On the rate of convergence of the vector of variances and covariances in non-homogeneous Markov systems. J. Appl. Probab. 1989, 26, 776–783.
  47. Orey, S. Limit Theorems for Markov Chain Transition Probabilities; Van Nostrand Reinhold: London, UK, 1971.
  48. Hajnal, J. The ergodic properties of non-homogeneous finite Markov chains. Proc. Camb. Philos. Soc. 1956, 52, 67–77.
  49. Hajnal, J. Weak ergodicity in non-homogeneous Markov chains. Proc. Camb. Philos. Soc. 1958, 54, 233–246.
  50. Hajnal, J. Shuffling with two matrices. Contemp. Math. 1993, 149, 271–287.
  51. Dobrushin, R.L. Central limit theorem for non-stationary Markov chains, I, II. Theory Probab. Its Appl. 1956, 1, 65–80.
  52. Nummelin, E. General Irreducible Markov Chains and Non-Negative Operators; Cambridge University Press: Cambridge, UK, 1984.
  53. Howard, R.A. Dynamic Probabilistic Systems; Wiley: Chichester, UK, 1971; Volumes I and II.
  54. Hunter, J.J. Mathematical Techniques of Applied Probability; Academic Press: Cambridge, MA, USA, 1983.
  55. Vassiliou, P.-C.G. On the limiting behavior of a non-homogeneous Markov chain model in manpower systems. Biometrika 1981, 68, 557–561.
  56. Papadopoulou, A.A.; Vassiliou, P.-C.G. Asymptotic behaviour of non-homogeneous Markov systems. Linear Algebra Its Appl. 1994, 210, 153–198.
  57. Vassiliou, P.-C.G. On the periodicity of non-homogeneous Markov chains and systems. Linear Algebra Its Appl. 2015, 471, 654–684.
  58. Lindvall, T. Lectures on the Coupling Method; John Wiley: Hoboken, NJ, USA, 1992.
  59. Vassiliou, P.-C.G. Laws of large numbers for non-homogeneous Markov systems. Methodol. Comput. Appl. Probab. 2018, 1–28.
