*Article* **On State Occupancies, First Passage Times and Duration in Non-Homogeneous Semi-Markov Chains**

**Andreas C. Georgiou 1,\*, Alexandra Papadopoulou 2, Pavlos Kolias 2, Haris Palikrousis 2 and Evanthia Farmakioti 2**


**Abstract:** Semi-Markov processes generalize the Markov chains framework by utilizing abstract sojourn time distributions. They are widely known for offering enhanced accuracy in modeling stochastic phenomena. The aim of this paper is to provide closed analytic forms for three types of probabilities which describe attributes of considerable research interest in semi-Markov modeling: (a) the number of transitions to a state through time (Occupancy), (b) the number of transitions or the amount of time required to observe the first passage to a state (First passage time) and (c) the number of transitions or the amount of time required after a state is entered before the first real transition is made to another state (Duration). The non-homogeneous in time recursive relations of the above probabilities are developed and a description of the corresponding geometric transforms is produced. By applying appropriate properties, the closed analytic forms of the above probabilities are provided. Finally, data from human DNA sequences are used to illustrate the theoretical results of the paper.

**Keywords:** semi-Markov modeling; occupancy; first passage time; duration; non-homogeneity; DNA sequences

#### **1. Introduction**

Human populations can be divided into categories (states or classes) according to basic characteristics, such as place of residence, social class or rank in a hierarchical system. People usually move from one category to another in a probabilistic manner, and a person's history contains a sequence of sojourn times in the various categories together with the set of transitions that have taken place. These are the basic ingredients of a semi-Markov chain (SMC), from which a mathematical model can be developed for the study of such systems [1,2]. These systems do not necessarily involve humans; they can describe any system characterized by historical observations, namely sojourn times in states together with transitions from one category to another. If we rely on a Markov chain for the study of a population system, we implicitly assume that the probability of transition from one category to another does not depend on the length of stay. Nonetheless, this time dependence is in some cases desirable to include in the process, since it provides additional useful information. In this case, the transitions of such a system are not adequately described by a standard Markov chain, and semi-Markov models are introduced as the stochastic tools that provide a more rigorous framework accommodating a greater variety of applied probability models [3–5]. Applications of semi-Markov processes include manpower planning, credit risk, word sequencing and DNA analysis [6–14].

In addition to semi-Markov processes, the non-homogeneous semi-Markov system (NHSMS) was defined, introducing a class of broader stochastic models [15,16] that provide

**Citation:** Georgiou, A.C.; Papadopoulou, A.; Kolias, P.; Palikrousis, H.; Farmakioti, E. On State Occupancies, First Passage Times and Duration in Non-Homogeneous Semi-Markov Chains. *Mathematics* **2021**, *9*, 1745. https://doi.org/10.3390/math9151745


Academic Editor: Alexander Zeifman

Received: 20 May 2021 Accepted: 21 July 2021 Published: 24 July 2021


a more general framework to describe the complex semantics of the system involved. Semi-Markov systems, which deploy a number of Markov chains evolving in parallel, are mostly applied in manpower planning, where the most important issues pertain to the evolution, control and asymptotic behavior [17–19]. In the last two decades, an extended body of literature has developed regarding the theory of and results on such systems [20–29]. The dynamic characteristics of semi-Markov systems influence the number of times the chain occupies a state, how long it takes to leave a state, as well as the probability of first passage to a state. Therefore, in order to accompany the basic parameters of the semi-Markov chain and to enhance the modeling framework, additional attributes of critical interest are the occupancy, first passage time and duration probabilities, which are studied in the present paper.


DNA sequences are usually studied using probabilistic models, as nucleotide appearances are inter-correlated, and Markov models have been applied to them [10,37]. One of the earliest studies applied a Markov model on the nucleotide alphabet {*A*, *C*, *G*, *T*} to estimate the transition probability matrix and the number of doublets and triplets [38]. Several statistics have been proposed to test the dependency order of the sequence, i.e., the Markov order, such as the phi-divergent statistics and conditional mutual information [39–41]. Further advances on the subject include hidden Markov models, which are able to model different regions of DNA sequences [42]. Word occurrences are also of interest in DNA analysis [43]. Previous studies have examined the distribution, moments and properties of successive word occurrences [44,45]. Papadopoulou has provided some examples of semi-Markov models for biological sequences [46]. Furthermore, algorithmic applications for estimating the first passage time probabilities in genomic sequences have been reported [47].

The aim of this study is to provide insight into the actual mechanism of the recursive relations of the probabilities mentioned above. Section 2 presents the basic parameters of a SMC, the interval transition probabilities and the entrance probabilities. Section 3 presents the main results of the paper, that is, the closed analytic solutions for the occupancy, duration and first passage time probabilities. The final section applies these theoretical results to human genome DNA strands. In the first illustration, the aim is to find the corresponding probabilities between nucleotide words and their symmetric complements by using the analytic form of the first passage time probabilities. In the second illustration, the frequency of the dinucleotide *GC* is examined for two distinct DNA sequences using the occupancy probabilities.

#### **2. Basic Framework**

We consider the semi-Markov chain $\{X_t\}_{t\geq 1}$ with state space $S = \{1, 2, \dots, N\}$ as a discrete stochastic process in which the successive states are defined by the transition probability matrix and the sojourn time in each state is described by a random variable conditioned on the current state and the next state to be visited. Thus, at the transition times, the process is equivalent to a Markov process; we call this Markovian process the *embedded* process. Let the transition probability $p_{ij}(t)$ be the probability that a SMC which entered state $i$ at its last transition at time $t$ moves to state $j$ at its next transition. The transition probabilities satisfy the usual Markovian conditions, that is, $p_{ij}(t) \geq 0$, $\forall i,j \in S$ and $\sum_{j=1}^{N} p_{ij}(t) = 1$, $\forall i \in S$. When the process enters state $i$ at time $t$, the next state $j$ is selected according to the transition probabilities. However, before making the transition from state $i$ to state $j$, and after the next state $j$ has been selected, the chain holds in state $i$ for a time $\tau_{ij}$. The sojourn time $\tau_{ij}$ is a positive random variable with probability mass function $h_{ij}(\cdot)$, called the sojourn time distribution for the transition from state $i$ to state $j$. Thus, $\mathrm{Prob}[\tau_{ij} = m] = h_{ij}(m)$, for $m = 1, 2, \dots$ and $i,j \in S$. We assume that the mean values of the sojourn time distributions are finite and that $h_{ij}(0) = 0$. In matrix notation, the basic parameters of the semi-Markov chain are the sequence of transition matrices $\{\mathbf{P}(t)\}_{t=0}^{\infty}$ and the sequence of sojourn time matrices $\{\mathbf{H}(m)\}_{m=1}^{\infty}$. The probabilities of the *waiting times* $w_i(t,m)$ are defined as follows:

$$w_i(t,m) = \sum_{j=1}^{N} p_{ij}(t)\,h_{ij}(m) = \mathrm{Prob}[\tau_i = m \mid t],$$

where *τ<sup>i</sup>* is the holding time of the SMC in state *i*. The *core matrix* of the SMC connects the transition probabilities and the sojourn times and it is defined as follows:

$$\mathbf{C}(t,m) = \{c_{ij}(t,m)\}_{i,j \in S} = \mathbf{P}(t) \circ \mathbf{H}(m).$$

The operator $\circ$ denotes the element-wise product of matrices (Hadamard product). Using the *core* matrix, we define $q_{ij}(k|t,n)$, the joint probability that the SMC will be in state $j$ at time $t+n$ and will have made $k$ transitions during the time interval $(t, t+n]$, given that at time $t$ the process entered state $i$. In order to calculate the probability $q_{ij}(k|t,n)$, we distinguish two cases. First, suppose that during the time interval $(t, t+n]$ the number of transitions is zero. Then, for the process to be in state $j$ at time $t+n$ given that no transitions were made, the states $i$ and $j$ must coincide. Secondly, assume that the SMC makes its first transition, to state $r$, at time $t+m$, $0 < m < n$. Then, in the time interval $(t, t+m]$, we have one transition, to state $r$, and, in the remaining interval $(t+m, t+n]$, we have the remaining $k-1$ transitions, with a final transition to state $j$. Thus, the resulting formula is as follows:

$$q_{ij}(k|t,n) = \delta_{ij}\,\delta(k)\,{}^{>}w_i(t,n) + \sum_{r=1}^{N}\sum_{m=0}^{n} c_{ir}(t,m)\,q_{rj}(k-1|t+m,\,n-m),$$

where ${}^{>}w_i(t,n) = \sum_{k=n+1}^{\infty} w_i(t,k)$ denotes the survival function of $w_i(t,n)$, and $\delta(k) = 1$ if $k = 0$ and $\delta(k) = 0$ otherwise. If we are not interested in counting the number of transitions up to the final state $j$, we can deduce the following recursive relationship.
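To make the recursion concrete, the following minimal numerical sketch computes $q_{ij}(k|t,n)$ directly from the definition; the two-state matrices below are illustrative placeholders, not parameters from the paper.

```python
import numpy as np
from functools import lru_cache

# Illustrative two-state chain (P and H are placeholders, not data from the paper).
N, M = 2, 4
P = np.array([[0.3, 0.7],
              [0.6, 0.4]])                 # embedded transition matrix p_ij
H = np.zeros((M + 1, N, N))
H[1:] = 1.0 / M                            # h_ij(m) uniform on {1,...,M}, h_ij(0) = 0

def c(t, m):
    """Core matrix C(t, m) = P(t) ∘ H(m) (time-homogeneous in this toy example)."""
    return P * H[m] if 1 <= m <= M else np.zeros((N, N))

def w_surv(t, n):
    """Survival function >w_i(t, n) = sum_{k > n} w_i(t, k)."""
    return sum((c(t, k).sum(axis=1) for k in range(n + 1, M + 1)), np.zeros(N))

@lru_cache(maxsize=None)
def q(k, t, n):
    """q_ij(k|t,n): in state j at time t+n after exactly k transitions in (t, t+n]."""
    if k == 0:
        return np.diag(w_surv(t, n))       # no transition: i = j and still holding
    out = np.zeros((N, N))
    for m in range(1, n + 1):              # first transition occurs at time t+m
        out += c(t, m) @ q(k - 1, t + m, n - m)
    return out

# Summing over all feasible k yields the interval transition probabilities,
# whose rows must be stochastic:
total = sum(q(k, 0, 3) for k in range(0, 4))
print(total.sum(axis=1))                   # each row sums to 1
```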

$$q_{ij}(t,n) = \delta_{ij}\,{}^{>}w_i(t,n) + \sum_{r=1}^{N}\sum_{m=0}^{n} c_{ir}(t,m)\,q_{rj}(t+m,\,n-m).$$

We also define the quantity $e_{ij}(k|t,n)$, the probability that the SMC enters state $j$ at time $t+n$ with a total of $k$ transitions in the time interval $(t, t+n]$, given that the SMC entered state $i$ at the initial time. Again, we distinguish two cases. First, assume that the number of transitions in the time interval $(t, t+n]$ is zero. Then, for the chain to enter state $j$ at time $t+n$, the states $i$ and $j$ must coincide and $n$ must be zero, since state $i$ was entered at the initial time. For the second case, suppose that the SMC makes its first transition, to state $r$, at time $t+m$, $0 < m < n$. Then, in the time interval $(t, t+m]$, we have one transition, to state $r$, and, in the time interval $(t+m, t+n]$, we have the remaining $k-1$ transitions, with the final transition into state $j$. These facts result in the following recursive relationship.

$$e_{ij}(k|t,n) = \delta_{ij}\,\delta(n)\,\delta(k) + \sum_{r=1}^{N}\sum_{m=0}^{n} c_{ir}(t,m)\,e_{rj}(k-1|t+m,\,n-m).$$

If we are not interested in the number of transitions up to the final state $j$, we can reduce the recursive relationship to the quantity $e_{ij}(t,n)$, the probability that the SMC enters state $j$ at time $t+n$, given that the SMC entered state $i$ at the initial time $t$. The equation for calculating the probabilities $e_{ij}(t,n)$ is given by the following.

$$e_{ij}(t,n) = \delta_{ij}\,\delta(n) + \sum_{r=1}^{N}\sum_{m=0}^{n} c_{ir}(t,m)\,e_{rj}(t+m,\,n-m).$$

The interval transition probabilities and entrance probabilities are connected by the following relationship.

$$q_{ij}(k|t,n) = \sum_{m=0}^{n} e_{ij}(k|t,m)\,{}^{>}w_j(t+m,\,n-m).$$
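The entrance probabilities and the above identity can be checked numerically. The sketch below, with illustrative placeholder two-state parameters (not data from the paper), computes $e_{ij}(k|t,n)$ recursively and recovers $q_{ij}(k|t,n)$ from it.

```python
import numpy as np
from functools import lru_cache

# Illustrative two-state chain (placeholder parameters, not from the paper).
N, M = 2, 4
P = np.array([[0.3, 0.7],
              [0.6, 0.4]])
H = np.zeros((M + 1, N, N))
H[1:] = 1.0 / M                            # uniform sojourn times on {1,...,M}

def c(t, m):
    return P * H[m] if 1 <= m <= M else np.zeros((N, N))

def w_surv(t, n):
    return sum((c(t, k).sum(axis=1) for k in range(n + 1, M + 1)), np.zeros(N))

@lru_cache(maxsize=None)
def e(k, t, n):
    """e_ij(k|t,n): the SMC *enters* j at t+n after exactly k transitions."""
    out = np.eye(N) if (k == 0 and n == 0) else np.zeros((N, N))
    if k >= 1:
        for m in range(1, n + 1):          # first transition at time t+m
            out += c(t, m) @ e(k - 1, t + m, n - m)
    return out

def q_via_e(k, t, n):
    """Interval transition probabilities recovered from entrance probabilities:
    q_ij(k|t,n) = sum_m e_ij(k|t,m) * >w_j(t+m, n-m)."""
    return sum(e(k, t, m) @ np.diag(w_surv(t + m, n - m)) for m in range(0, n + 1))

print(q_via_e(1, 0, 3))                    # one transition within 3 time units
```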

#### **3. Theoretical Results: Analytic Solutions of the Recursive Equations**

#### *3.1. First Passage Time*

The first passage times provide a measure of how long it takes to reach a given state from another. We can think of first passage times in terms of transitions, in terms of time, or both. Thus, let $f_{ij}(k|t,n)$ be the probability that $k$ transitions and time $n$ are required for the first passage from state $i$ to state $j$, given that the SMC entered state $i$ at time $t$. Applying a probabilistic argument, we can provide the following recursive formula.

$$f\_{ij}(k|t,n) = \sum\_{r \neq j}^{N} \sum\_{m=0}^{n} c\_{ir}(t,m) f\_{rj}(k-1|t+m,n-m) + \delta(k-1)c\_{ij}(t,n). \tag{1}$$

The first term of Equation (1) corresponds to the case where $k > 1$: the SMC makes a transition to some state $r$ different from $j$ at time $t+m$ and then makes a first passage from $r$ to $j$ in $k-1$ transitions during the interval $(t+m, t+n]$. The term is summed over all states and holding times that could describe the first transition. The second term corresponds to the case where $k = 1$, in which the process moves directly to state $j$ at time $t+n$. If we are not interested in counting the transitions, then the recursive formula for the probabilities $f_{ij}(t,n)$ is provided by the following.

$$f_{ij}(t,n) = \sum_{r \neq j}^{N} \sum_{m=0}^{n} c_{ir}(t,m)\,f_{rj}(t+m,\,n-m) + c_{ij}(t,n). \tag{2}$$
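Equation (2) can be evaluated directly. The following sketch, with illustrative two-state placeholder parameters (not from the paper), computes $f_{ij}(t,n)$ and checks that, for an irreducible chain, the first passage probabilities sum to one over $n$.

```python
import numpy as np
from functools import lru_cache

# Illustrative two-state chain (placeholder parameters, not from the paper).
N, M = 2, 4
P = np.array([[0.3, 0.7],
              [0.6, 0.4]])
H = np.zeros((M + 1, N, N))
H[1:] = 1.0 / M                            # uniform sojourn times on {1,...,M}

def c(t, m):
    return P * H[m] if 1 <= m <= M else np.zeros((N, N))

@lru_cache(maxsize=None)
def f(t, n):
    """First passage time probabilities f_ij(t, n) of Equation (2)."""
    out = c(t, n)                          # direct first transition into j at t+n
    for m in range(1, n):                  # first jump at t+m to some r != j ...
        cm, fm = c(t, m), f(t + m, n - m)
        # ... then a first passage from r to j: sum_{r != j} c_ir f_rj = (C F)_ij - c_ij f_jj
        out = out + cm @ fm - cm * np.diag(fm)
    return out

# For an irreducible chain, every state is eventually reached, so the
# probabilities f_ij(t, n) sum over n to (approximately) one:
total = sum(f(0, n) for n in range(1, 121))
print(total)                               # ≈ all-ones matrix
```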

**Theorem 1.** *For each non-homogeneous SMC with discrete state space $S = \{1, 2, \dots, N\}$, a sequence of transition probability matrices $\{\mathbf{P}(t)\}_{t=0}^{\infty}$ and a sequence of sojourn time matrices $\{\mathbf{H}(m)\}_{m=1}^{\infty}$, the probability matrices of first passage times $\mathbf{F}(k|t,n) = \{f_{ij}(k|t,n)\}_{i,j\in S}$ are given by the following relationships:*


*where* $\mathbf{B} = \mathbf{U} - \mathbf{I}$, $\mathbf{U} = \{u_{ij} = 1\}_{i,j\in S}$, $\mathbf{I}$ *is the $N \times N$ identity matrix and*

$$\begin{aligned} \prod_{r=0}^{k-1} \mathbf{C}(s+m_{k-r-1},\, m_{k-r}-m_{k-r-1}) &= \\ = \mathbf{C}(s,m_1)\{\mathbf{C}(s+m_1,\, m_2-m_1)&\{\dots\{\mathbf{C}(s+m_{k-1},\, n-m_{k-1}) \circ \mathbf{B}\} \circ \mathbf{B}\} \dots \circ \mathbf{B}\}. \end{aligned}$$

**Proof.** Appendix A.1.

#### *3.2. Duration*

Transitions of a SMC can be divided into two categories: virtual and real. A virtual transition is made from a state to itself, while a real transition is made from one state to a different state. Based on these two categories, one can define the duration as the number of transitions or the time required for the SMC to leave the initial state and move to a different state, i.e., for a real transition to take place for the first time rather than a virtual one. Therefore, it is of interest to study the duration probability $d_i(k|t,n)$, defined as the probability that the SMC moves for the first time to a state different from the initial one after $n$ time units and $k$ transitions during the interval $(t, t+n]$, given that the process entered state $i$ at time $t$. We note that, out of the total of $k$ transitions in the above case, $k-1$ transitions are virtual and one transition is real. The duration probabilities for $k \leq n$ are provided by the following.

$$d_i(k|t,n) = \sum_{m=0}^{n} c_{ii}(t,m)\,d_i(k-1|t+m,\,n-m) + \delta(k-1)\bigl(w_i(t,n) - c_{ii}(t,n)\bigr). \tag{3}$$

In the case that $k > n$ or $k = 0$, we have $d_i(k|t,n) = 0$. The rationale of this relationship can be deconstructed into two parts. In the first part, we assume that the SMC makes at least one virtual intermediate transition: starting from state $i$ at time $t$, it holds in state $i$ for $m$ time units and then transfers to state $i$ again; at this point, the associated remaining probability is $d_i(k-1|t+m, n-m)$. In the second scenario, we assume that the SMC makes no transition up to time $t+n$; the chain therefore holds in state $i$ for exactly $n$ time units and then moves to a state $j$ different from $i$. Thus, the duration defined here measures how long it takes to leave a given state.
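A minimal numerical sketch of Equation (3) follows, with illustrative placeholder parameters (not from the paper); since every state is eventually left with probability one in this toy chain, the duration probabilities sum to (approximately) one over $k$ and $n$.

```python
import numpy as np
from functools import lru_cache

# Illustrative two-state chain (placeholder parameters, not from the paper).
N, M = 2, 4
P = np.array([[0.3, 0.7],
              [0.6, 0.4]])
H = np.zeros((M + 1, N, N))
H[1:] = 1.0 / M                            # uniform sojourn times on {1,...,M}

def c(t, m):
    return P * H[m] if 1 <= m <= M else np.zeros((N, N))

def w(t, m):
    return c(t, m).sum(axis=1)             # waiting time probabilities w_i(t, m)

@lru_cache(maxsize=None)
def d(k, t, n):
    """Duration probabilities d_i(k|t,n) of Equation (3), as a length-N vector."""
    if k == 0 or k > n:
        return np.zeros(N)
    out = np.zeros(N)
    if k == 1:
        out += w(t, n) - np.diag(c(t, n))  # hold for exactly n units, then leave i
    for m in range(1, n + 1):              # virtual transition i -> i at time t+m
        out += np.diag(c(t, m)) * d(k - 1, t + m, n - m)
    return out

# Probability of eventually leaving the initial state:
total = sum(d(k, 0, n) for n in range(1, 61) for k in range(1, n + 1))
print(total)                               # ≈ [1. 1.]
```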

**Theorem 2.** *For each non-homogeneous SMC with discrete state space $S = \{1, 2, \dots, N\}$, a sequence of transition probability matrices $\{\mathbf{P}(t)\}_{t=0}^{\infty}$ and a sequence of sojourn time matrices $\{\mathbf{H}(m)\}_{m=1}^{\infty}$, the duration probability matrices $\mathbf{D}(k|t,n) = \mathrm{diag}\{d_i(k|t,n)\}_{i\in S}$ are provided by the following relationships:*


*where* $\mathbf{W}(t,n) = \mathrm{diag}\{w_i(t,n)\}_{i\in S}$.

**Proof.** Appendix A.2.

#### *3.3. Occupancy*

We define $v_{ij}(t,n)$ to be the number of times the SMC makes a transition to state $j$ in a time interval of length $n$, given that at the initial time $t$ the SMC entered state $i$. If the initial state is the same as $j$, that is, when $i = j$, the initial entrance is not counted in $v_{ij}(t,n)$. We call the quantity $v_{ij}(t,n)$ the *occupancy measure* of state $j$ at time $t+n$, given that the SMC entered state $i$ at time $t$. Clearly, $v_{ij}(t,n)$ is a discrete random variable. We denote by $\omega_{ij}(\cdot|t,n)$ the probability mass function of $v_{ij}(t,n)$, that is, $\omega_{ij}(x|t,n) = \mathrm{Prob}[v_{ij}(t,n) = x]$. The recursive relationship for the occupancy probabilities is given by the following:

$$\begin{split} \omega_{ij}(x|t,n) &= \sum_{\substack{r=1\\ r\neq j}}^{N} \sum_{m=0}^{n} c_{ir}(t,m)\,\omega_{rj}(x|t+m,\,n-m) \\ &\quad + \sum_{m=0}^{n} c_{ij}(t,m)\,\omega_{jj}(x-1|t+m,\,n-m) + \delta(x)\,{}^{>}w_i(t,n), \end{split} \tag{4}$$

where $i,j \in S$, $n = 0, 1, \dots$ and $x = 0, 1, \dots$.
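Equation (4) can be evaluated recursively. The sketch below, with illustrative two-state placeholder parameters (not from the paper), verifies that $\omega_{ij}(\cdot|t,n)$ is a probability distribution over $x$ for every pair $(i,j)$.

```python
import numpy as np
from functools import lru_cache

# Illustrative two-state chain (placeholder parameters, not from the paper).
N, M = 2, 4
P = np.array([[0.3, 0.7],
              [0.6, 0.4]])
H = np.zeros((M + 1, N, N))
H[1:] = 1.0 / M                            # uniform sojourn times on {1,...,M}

def c(t, m):
    return P * H[m] if 1 <= m <= M else np.zeros((N, N))

def w_surv(t, n):
    return sum((c(t, k).sum(axis=1) for k in range(n + 1, M + 1)), np.zeros(N))

@lru_cache(maxsize=None)
def omega(x, t, n):
    """Occupancy probabilities ω_ij(x|t,n) of Equation (4)."""
    # δ(x) >w_i(t,n): no transition at all, hence zero entrances into every j
    out = np.outer(w_surv(t, n), np.ones(N)) if x == 0 else np.zeros((N, N))
    for m in range(1, n + 1):
        cm = c(t, m)
        om = omega(x, t + m, n - m)
        out = out + cm @ om - cm * np.diag(om)       # first jump to some r != j
        if x >= 1:
            om1 = omega(x - 1, t + m, n - m)
            out = out + cm * np.diag(om1)            # first jump enters j itself
    return out

# ω_ij(.|t,n) is a probability distribution over x for every pair (i, j):
total = sum(omega(x, 0, 5) for x in range(0, 6))
print(total)                               # all-ones matrix
```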

**Assumption 1.** *In what follows, we assume that the embedded Markov chain is homogeneous, i.e., $\mathbf{P}(t) = \mathbf{P}$ for each $t$.*

Considering the above assumption, one can use the double geometric transform of the occupancy probabilities as follows.

$$\omega_{ij}^{gg}(y|z) = \sum_{x=0}^{\infty}\sum_{n=0}^{\infty} \omega_{ij}(x|n)\,z^{n} y^{x}.$$

Moreover, from Equation (4), we can write the double geometric transform of the occupancy probabilities as follows.

$$\omega_{ij}^{gg}(y|z) = \sum_{r=1}^{N} c_{ir}^{g}(z)\,\omega_{rj}^{gg}(y|z) - (1-y)\,c_{ij}^{g}(z)\,\omega_{jj}^{gg}(y|z) + {}^{>}w_i^{g}(z).$$

In matrix notation, we can use the previous results to obtain the following [3]:

$$\mathbf{\Omega}^{gg}(y|z) = \frac{1}{1-z}\mathbf{U} - \frac{1-y}{1-z}\,[\mathbf{I} - \mathbf{C}^{g}(z)]^{-1}\mathbf{C}^{g}(z)\left(y\mathbf{I} + (1-y)\,[\mathbf{I} - \mathbf{C}^{g}(z)]^{-1} \circ \mathbf{I}\right)^{-1},$$

where $\mathbf{U}$ is the matrix with all entries equal to one, $\mathbf{\Omega}^{gg}(y|z) = \{\omega_{ij}^{gg}(y|z)\}_{i,j\in S}$ is the double geometric transform of $\mathbf{\Omega}(x|n) = \{\omega_{ij}(x|n)\}_{i,j\in S}$ and $\mathbf{C}^{g}(z) = \{c_{ij}^{g}(z)\}_{i,j\in S}$.

The occupancy probabilities are connected with the corresponding homogeneous first passage time probabilities through the following relationship.

$$\omega_{ij}(x|n) = \delta(x)\,{}^{>}f_{ij}(n) + \sum_{m=0}^{n} f_{ij}(m)\,\omega_{jj}(x-1|n-m).$$

Using the double geometric transform, we can express the occupancy probabilities in matrix form in terms of the geometric transforms of the first passage time probabilities:

$$\mathbf{\Omega}^{gg}(y|z) = {}^{>}\mathbf{F}^{g}(z) + y\,\mathbf{F}^{g}(z)\left[{}^{>}\mathbf{F}^{g}(z) \circ \mathbf{I}\right]\left[\mathbf{I} - y\,(\mathbf{F}^{g}(z) \circ \mathbf{I})\right]^{-1},$$

which can be further simplified by using ${}^{>}f_{ij}^{g}(z) = \frac{1 - f_{ij}^{g}(z)}{1-z}$ (Appendix B.1), resulting, in matrix notation, in the following (Appendix B.2).

$$\mathbf{\Omega}^{gg}(y|z) = \frac{1}{1-z}\mathbf{U} - \frac{1-y}{1-z}\,\mathbf{F}^{g}(z)\left[\mathbf{I} - y\,\mathbf{F}^{g}(z) \circ \mathbf{I}\right]^{-1}.$$

We now provide Theorem 3 and Lemma 1 that will be used to prove the main Theorem 4 of the occupancy probabilities with respect to the core matrix.

**Theorem 3.** *For a SMC with core matrix* **C**(·)*, we have the following:*

$$\begin{split} \mathbf{\Omega}^{g}(z|n) &= (z-1)\sum_{j=1}^{n-1}\left[\mathbf{C}(j) + \sum_{i=2}^{j}\left(\mathbf{C}(i-1) + \sum_{k=1}^{i-2}\mathbf{S}_{i}(k,m_{k})\right)\mathbf{C}(j+1-i)\right]\left[\mathbf{\Omega}^{g}(z|n-j)\right] \\ &\quad + z\left[\mathbf{C}(n) + \sum_{j=2}^{n}\left(\mathbf{C}(j-1) + \sum_{k=1}^{j-2}\mathbf{S}_{j}(k,m_{k})\right)\mathbf{C}(n+1-j)\right] \\ &\quad + \left[\sum_{j=2}^{n}\left(\mathbf{C}(j-1) + \sum_{k=1}^{j-2}\mathbf{S}_{j}(k,m_{k})\right){}^{>}\mathbf{W}(n+1-j) + {}^{>}\mathbf{W}(n)\right], \end{split}$$

*where* $\mathbf{S}_{i}(k,m_{k}) = \sum_{m_{k}=2}^{i-k}\sum_{m_{k-1}=1+m_{k}}^{i-k+1}\dots\sum_{m_{1}=1+m_{2}}^{i-1}\prod_{r=-1}^{k-1}\mathbf{C}(m_{k-r-1}-m_{k-r})$, $\forall i,j \in S$ *and* $n = 0, 1, 2, \dots$ *Please note that the* $(j,r)$ *element of* $\mathbf{S}_{i}(k,m_{k})$ *is the probability of moving from state* $j$ *to state* $r$ *after* $i-1$ *time units and* $k$ *intermediate transitions during the interval* $(t, t+i-1]$ *for every* $t$, *due to the time-homogeneity assumption.*

**Proof.** Appendix A.3.

**Lemma 1.** *The product $\mathbf{\Omega}^{g}(z|n) \circ \mathbf{I}$ is equal to the following:*

$$\begin{split} \mathbf{\Omega}^{g}(z|n) \circ \mathbf{I} &= -(z-1)\sum_{j=1}^{n-1}\left[\left[\sum_{i=1}^{j}\mathbf{a}_{1i}^{-1}\,\mathbf{C}(j+1-i)\right] \circ \mathbf{I}\right]\left[\mathbf{\Omega}^{g}(z|n-j) \circ \mathbf{I}\right] \\ &\quad - z\sum_{j=1}^{n}\left[\mathbf{a}_{1j}^{-1}\,\mathbf{C}(n+1-j)\right] \circ \mathbf{I} + \sum_{j=1}^{n}\left[-\mathbf{a}_{1j}^{-1}\,{}^{>}\mathbf{W}(n+1-j)\right] \circ \mathbf{I}, \end{split}$$
$\forall i,j \in S$ and $n = 0, 1, 2, \dots$, where
$$-\mathbf{a}_{1i}^{-1} = \mathbf{C}(i-1) + \sum_{k=1}^{i-2}\mathbf{S}_{i}(k,m_{k}).$$

**Proof.** Appendix A.4.

We now provide Theorem 4, which describes the analytic solutions of the occupancy probabilities. In order to facilitate the presentation and proof of Theorem 4, we introduce the following aggregate notation:

$$\begin{aligned}
\mathbf{A}_{j} &= \mathbf{C}(j) + \sum_{i=2}^{j}\left(\mathbf{C}(i-1) + \sum_{k=1}^{i-2}\mathbf{S}_{i}(k,m_{k})\right)\mathbf{C}(j+1-i),\\
\mathbf{B}_{n,j} &= \left(\sum_{w=2}^{n-j}\left(\mathbf{C}(w-1) + \sum_{k=1}^{w-2}\mathbf{S}_{w}(k,m_{k})\right){}^{>}\mathbf{W}(n-j+1-w)\right)\circ\mathbf{I} + {}^{>}\mathbf{W}(n-j)\circ\mathbf{I},\\
\mathbf{M}_{u} &= -\left(\mathbf{C}(u-1) + \sum_{i=2}^{u-1}\left(\mathbf{C}(i-1) + \sum_{k=1}^{i-2}\mathbf{S}_{i}(k,m_{k})\right)\mathbf{C}(u-i)\right)\circ\mathbf{I} + \sum_{k=1}^{u-2}(-1)^{k+1}\mathbf{R}_{u}(k,m_{k}),\\
\mathbf{M}'_{u} &= \left(\mathbf{C}(u-1) + \sum_{i=2}^{u-1}\left(\mathbf{C}(i-1) + \sum_{k=1}^{i-2}\mathbf{S}_{i}(k,m_{k})\right)\mathbf{C}(u-i)\right)\circ\mathbf{I},\\
\mathbf{M}''_{u} &= \left(\mathbf{C}(u-1) + \sum_{i=2}^{u-1}\left(\mathbf{C}(i-1) + \sum_{k=1}^{i-2}\mathbf{S}_{i}(k,m_{k})\right)\mathbf{C}(u-i)\right)\circ\mathbf{I} + \sum_{k=1}^{u-2}(k+1)(-1)^{k}\mathbf{R}_{u}(k,m_{k}),\\
\mathbf{M}'''_{u} &= \left(\mathbf{C}(u-1) + \sum_{i=2}^{u-1}\left(\mathbf{C}(i-1) + \sum_{k=1}^{i-2}\mathbf{S}_{i}(k,m_{k})\right)\mathbf{C}(u-i)\right)\circ\mathbf{I} - \sum_{k=1}^{u-2}(k+2)(-1)^{k}\mathbf{R}_{u}(k,m_{k}),\\
\mathbf{E}_{n} &= \sum_{j=2}^{n}\left(\mathbf{C}(j-1) + \sum_{k=1}^{j-2}\mathbf{S}_{j}(k,m_{k})\right){}^{>}\mathbf{W}(n+1-j) + {}^{>}\mathbf{W}(n),\\
\mathbf{F}_{x,u} &= x(x-1)\sum_{k=x-3}^{u-2}\left(\prod_{r=-1}^{x-4}(k-r)\right)(-1)^{k-x+3}\,\mathbf{R}_{u}(k,m_{k}) - x\sum_{k=x-2}^{u-2}\left(\prod_{r=-1}^{x-3}(k-r)\right)(-1)^{k-x+2}\,\mathbf{R}_{u}(k,m_{k}),\\
\mathbf{G}_{u,n,j} &= \mathbf{C}(n-j+1-u)\circ\mathbf{I} + \sum_{w=2}^{n-j+1-u}\left(\mathbf{C}(w-1) + \sum_{k=1}^{w-2}\mathbf{S}_{w}(k,m_{k})\right)\mathbf{C}(n-j+2-u-w)\circ\mathbf{I},\\
\mathbf{H}_{x,u} &= x\sum_{k=x-2}^{u-2}\left(\prod_{r=-1}^{x-3}(k-r)\right)(-1)^{k-(x-2)}\,\mathbf{R}_{u}(k,m_{k}) - \sum_{k=x-1}^{u-2}\left(\prod_{r=-1}^{x-2}(k-r)\right)(-1)^{k-(x-1)}\,\mathbf{R}_{u}(k,m_{k}),\\
\mathbf{Q}_{u,n,j} &= \sum_{w=2}^{n-j+1-u}\left(\mathbf{C}(w-1) + \sum_{k=1}^{w-2}\mathbf{S}_{w}(k,m_{k})\right){}^{>}\mathbf{W}(n-j+2-u-w)\circ\mathbf{I} + {}^{>}\mathbf{W}(n-j+1-u)\circ\mathbf{I},
\end{aligned}$$

where

$$\begin{split} \mathbf{R}_{u}(k,m_{k}) &= \sum_{m_{k}=2}^{u-k}\sum_{m_{k-1}=1+m_{k}}^{u-k+1}\dots\sum_{m_{1}=1+m_{2}}^{u-1}\prod_{r=1}^{k-1}\left[\sum_{i=1}^{m_{k-r-1}-m_{k-r}}\left(-\mathbf{a}_{1i}^{-1}\right)\mathbf{C}(m_{k-r-1}-m_{k-r}+1-i)\right]\circ\mathbf{I},\\ \mathbf{S}_{i}(k,m_{k}) &= \sum_{m_{k}=2}^{i-k}\sum_{m_{k-1}=1+m_{k}}^{i-k+1}\dots\sum_{m_{1}=1+m_{2}}^{i-1}\prod_{r=1}^{k-1}\mathbf{C}(m_{k-r-1}-m_{k-r})\\ &\text{and}\\ -\mathbf{a}_{1i}^{-1} &= \mathbf{C}(i-1) + \sum_{k=1}^{i-2}\mathbf{S}_{i}(k,m_{k}). \end{split}$$

**Theorem 4.** *For a SMC with core matrix $\mathbf{C}(\cdot)$, and adopting the above notation, we have the following:*

$$\begin{aligned}
\mathbf{\Omega}(0|n) &= -\sum_{j=1}^{n-1}\mathbf{A}_{j}\left(\mathbf{B}_{n,j} + \sum_{u=2}^{n-j}\mathbf{M}_{u}\,\mathbf{Q}_{u,n,j}\right) + \mathbf{E}_{n},\\
\mathbf{\Omega}(1|n) &= \sum_{j=1}^{n-1}\mathbf{A}_{j}\left(\mathbf{B}_{n,j} - \mathbf{G}_{1,n,j} - \sum_{u=2}^{n-j}\mathbf{M}_{u}\,\mathbf{G}_{u,n,j} - 2\sum_{u=2}^{n-j}\mathbf{M}'_{u}\,\mathbf{Q}_{u,n,j}\right),\\
\mathbf{\Omega}(2|n) &= \sum_{j=1}^{n-1}\mathbf{A}_{j}\Biggl(2\,\mathbf{G}_{1,n,j} + \sum_{u=2}^{n-j}\sum_{k=1}^{u-2}(-2k-4)(-1)^{k}\,\mathbf{R}_{u}(k,m_{k})\,\mathbf{G}_{u,n,j} - 4\sum_{u=2}^{n-j}\mathbf{M}'_{u}\,\mathbf{G}_{u,n,j}\\
&\qquad + 2\sum_{u=2}^{n-j}\mathbf{M}''_{u}\,\mathbf{Q}_{u,n,j} - \sum_{u=2}^{n-j}\sum_{k=1}^{u-2}(k+1)(k+2)(-1)^{k-1}\,\mathbf{R}_{u}(k,m_{k})\,\mathbf{Q}_{u,n,j}\Biggr),\\
\mathbf{\Omega}(3|n) &= \sum_{j=1}^{n-1}\mathbf{A}_{j}\Biggl(6\sum_{u=2}^{n-j}\mathbf{M}'''_{u}\,\mathbf{G}_{u,n,j} - 3\sum_{u=2}^{n-j}\sum_{k=1}^{u-2}k(k+1)(-1)^{k+1}\,\mathbf{R}_{u}(k,m_{k})\,\mathbf{G}_{u,n,j}\\
&\qquad - \sum_{u=2}^{n-j}\sum_{k=1}^{u-2}(k-1)k(k+1)(-1)^{k-2}\,\mathbf{R}_{u}(k,m_{k})\,\mathbf{Q}_{u,n,j} + 3\sum_{u=2}^{n-j}\sum_{k=1}^{u-2}k(k+1)(-1)^{k-1}\,\mathbf{R}_{u}(k,m_{k})\,\mathbf{Q}_{u,n,j}\Biggr),
\end{aligned}$$

*and*

$$\mathbf{\Omega}(x|n) = \sum_{j=1}^{n-1}\left[\mathbf{A}_{j}\sum_{u=2}^{n-j}\left[\mathbf{F}_{x,u}\,\mathbf{G}_{u,n,j} + \mathbf{H}_{x,u}\,\mathbf{Q}_{u,n,j}\right]\right], \quad \forall\, x \geq 4.$$

**Proof.** Appendix A.5.

#### **4. Illustration**

In this section, we accompany the theoretical results of the paper with two applications related to DNA sequences. A DNA strand consists of a sequence of the four nucleotides: adenine (A), guanine (G), cytosine (C) and thymine (T). We assume that a DNA sequence can be described by a homogeneous discrete SMC $\{X_t\}_{t=0}^{\infty}$ with state space $S = \{w_1, w_2, \dots, w_N\}$, where $w_i$, $i = 1, 2, \dots, N$, is a specific word of length $l$ formed from the letters of the DNA alphabet $\{A, C, G, T\}$ and $t$ denotes the position of the word inside the sequence.

#### *4.1. Inverted Repeats*

The main focus of the following approach is the appearance of specific words formed from the alphabet A, C, G, T and of their symmetric complements (inverted repeats). Inverted repeats are commonly found in eukaryotic genomes [48]. The presence of inverted repeats can form DNA cruciforms, which have been shown to play an important role in the regulation of natural processes involving DNA. Cruciform structures are important for various biological processes, including replication, regulation of gene expression and nucleosome structure. They have also been implicated in the development of diseases, including cancer, Werner's syndrome and others [49].

For each DNA word $w$, there exists a reversed complement $w'$ of the word $w$. For example, the word $w = ACG$ has the word $w' = CGT$ as an inverted repeat. The main question that we attempt to address by applying the analytic relationships derived earlier is the following: given that the SMC entered the word $w$ at the initial position, we want to estimate the probability of the reversed complement word $w'$ appearing for the first time after a certain range of letters $n$. We define the distance, $d$, between two words as the number of letters between the first letter of the initial word and the first letter of the word that subsequently appears. For the sake of simplicity, we consider only the scenario where $d > l$. The DNA sequence used for this illustration is the first chromosome of the human genome, consisting of 248,956,422 base pairs, which is publicly available from the website of the National Center for Biotechnology Information (NCBI) [50].
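For concreteness, the reversed complement used throughout this subsection can be computed with a simple helper (our own illustration, not code from the original study):

```python
# Reversed complement (inverted repeat) of a DNA word.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def inverted_repeat(word: str) -> str:
    """Return w', the reversed complement of the word w."""
    return "".join(COMPLEMENT[base] for base in reversed(word))

print(inverted_repeat("ACG"))      # CGT, as in the example above
print(inverted_repeat("GGCTCAC"))  # GTGAGCC
```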

For the first illustration, three words of length $l = 7$ were chosen that have previously been shown to exhibit different distances from their inverted complements [51]. The words were $w_1 = GGCTCAC$, $w_2 = ATATATG$ and $w_3 = CCACAAT$. For each word, the state space of the SMC consisted of the word and its reversed complement, i.e., $S = \{w_i, w_i'\}$. First, the basic parameters of the SMC were estimated, namely the transition probability matrix and the sequence of sojourn times. The sojourn time was defined as the distance, i.e., the number of nucleotides that occur between each word and its inverted repeat. The transition matrix and the empirical distribution of the sojourn times were estimated using the empirical estimators. The sequence of core matrices was calculated as the Hadamard product of the transition matrix with the sequence of sojourn time matrices. For each word $w \in S$, the first passage time probability between the word $w$ and its reversed complement $w'$ was calculated according to the proposed analytic relationship (Theorem 1). For a maximum distance ($n = 1000$), the highest first passage time probabilities of the three words and their inverted repeats, along with the corresponding distances, are illustrated in Figure 1. Concretely, the first passage time probabilities were calculated for human Chromosome 1, aiming to estimate the most probable distances between words and their symmetric complements. More specifically, as presented in Figure 1, we note that $\arg\max_n f_{w_1 w_1'}(n) = 210$, $\arg\max_n f_{w_2 w_2'}(n) = 10$ and $\arg\max_n f_{w_3 w_3'}(n) = 132$, approximating the numerical results of previous studies, with corresponding values of 210, 15 and 133 for the three words, respectively [51].
This highlights the fact that specific DNA words exhibit different behaviors and the distance between them and their inverted repeats demonstrates variability.
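The computational pipeline of this illustration can be sketched compactly. The snippet below is a minimal illustration of ours, not the authors' code: it assumes a homogeneous SMC whose core matrices **C**(*m*) have already been estimated empirically, and it evaluates the first passage time probabilities through the standard scalar recursion *fij*(*n*) = *cij*(*n*) + Σ*<sup>m</sup>* Σ*<sup>r≠j</sup>* *cir*(*m*) *frj*(*n* − *m*), a homogeneous special case of the relationships underlying Theorem 1.

```python
import numpy as np

def first_passage_probs(C, n_max):
    """First passage time probabilities of a homogeneous SMC.

    C : array of shape (n_max + 1, N, N); C[m] is the core matrix C(m),
        i.e. entry (i, r) = P(next state r, sojourn m | current state i).
    Returns F of shape (n_max + 1, N, N) with F[n][i, j] = f_ij(n), the
    probability of first reaching j exactly n time units after entering i,
    via  f_ij(n) = c_ij(n) + sum_{m=1}^{n-1} sum_{r != j} c_ir(m) f_rj(n-m).
    """
    N = C.shape[1]
    F = np.zeros_like(C)
    for n in range(1, n_max + 1):
        F[n] = C[n].copy()                 # direct transition i -> j with sojourn n
        for m in range(1, n):
            for j in range(N):
                mask = np.ones(N)
                mask[j] = 0.0              # pass through intermediate states r != j only
                F[n][:, j] += C[m] @ (F[n - m][:, j] * mask)
    return F
```

For the DNA illustration, `C[m]` would be estimated from the observed distances between each word and its inverted repeat; the argmax over *n* of the resulting column then recovers the most probable distance.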

#### *4.2. CpG Islands*

Usually, in vertebrate DNA sequences, the dinucleotide CG occurs less frequently than expected [52]. For the second illustration, we considered CpG islands, which are genomic regions containing an elevated number of CG dinucleotides. The human genome contains approximately 30,000 CpG islands. The APRT gene is an example of a CpG region and was used for this analysis [53]. This gene provides instructions for making an enzyme called adenine phosphoribosyltransferase (APRT). The APRT gene contains approximately 2500 nucleotides and has been shown to include an elevated amount of the dinucleotide GC [54]. We modeled the sequence of this DNA region as a homogeneous SMC whose state space contains all two-letter words from the DNA alphabet. The transition probability matrix and the sojourn times were estimated using the empirical estimators. The occupancy distribution *ωGCGC*(*x*|*n*) for a fixed length of *n* = 100 was calculated using the analytic relationship from Theorem 4 in order to estimate the occupancy distribution of specific words up to a specified sequence length. For comparison, we also applied the model to an intron sequence of the human phosphodiesterase gene (PDEA) [55]. The two sequences are publicly available from the NCBI. The occupancy probabilities are presented in Figure 2 up to length *n* = 50. As expected, the occupancy probabilities for the two sequences confirmed that occurrences of GC are more frequent in the CpG island than in the intron sequence.
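The occupancy computation can be sketched numerically from the homogeneous recursion ω*ij*(*x*|*n*) = Σ*<sup>r</sup>* Σ*<sup>m</sup>* *cir*(*m*)[ω*rj*(*x*|*n* − *m*)(1 − δ*rj*) + ω*rj*(*x* − 1|*n* − *m*)δ*rj*] + δ(*x*) <sup>></sup>*wi*(*n*) used in Appendix A.3. The sketch below is our own (function name and array layout are choices of ours, and we read the δ(*x*) <sup>></sup>*wi*(*n*) term as contributing to every column *j*):

```python
import numpy as np

def occupancy_probs(C, x_max, n_max):
    """Occupancy probabilities omega_ij(x|n) of a homogeneous SMC.

    C : (n_max + 1, N, N) core matrices; C[m][i, r] = P(next r, sojourn m | i).
    Returns W with W[x][n][i, j] = omega_ij(x|n): the probability that,
    starting from i, state j is entered exactly x times within time n.
    """
    N = C.shape[1]
    # survivor function of the holding time: >w_i(n) = 1 - sum_{m<=n} w_i(m)
    surv = 1.0 - np.cumsum(C.sum(axis=2), axis=0)
    U, I = np.ones((N, N)), np.eye(N)
    W = np.zeros((x_max + 1, n_max + 1, N, N))
    for n in range(n_max + 1):
        for x in range(x_max + 1):
            # delta(x) * >w_i(n): no transition occurred, so zero entrances
            acc = np.outer(surv[n], np.ones(N)) if x == 0 else np.zeros((N, N))
            for m in range(1, n + 1):
                prev = W[x - 1][n - m] if x >= 1 else np.zeros((N, N))
                # entering r != j leaves the count unchanged; entering j increments it
                acc = acc + C[m] @ (W[x][n - m] * (U - I) + prev * I)
            W[x][n] = acc
    return W
```

Applied to the APRT and PDEA sequences, `C` would hold the estimated core matrices of the two-letter-word chain, and the column of `W` for the state GC gives the curves of Figure 2.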

**Figure 1.** First passage time (FPT) probabilities for distance *n* ≤ 1000: (**a**) *w*<sup>1</sup> = *GGCTCAC*; (**b**) *w*<sup>2</sup> = *ATATATG*; (**c**) *w*<sup>3</sup> = *CCACAAT*.

**Figure 2.** Occupancy probabilities of APRT and PDEA genes.

#### **5. Concluding Remarks**

In this article, three classes of important probabilities of a semi-Markov process, namely the first passage time, the duration and the occupancy probabilities, were defined, and their closed analytic forms were derived from the basic parameters of the process. The study of the first passage time probabilities provides information on the distribution of the time elapsed until one state is reached from another for the first time, either in terms of transitions or in terms of time. The second class, the duration probabilities, concerns the distribution of the number of virtual transitions taking place before an actual transition to a different state occurs. Finally, the third class, the occupancy probabilities, provides insight into the distribution of the number of times the SMC enters a given state within a time interval of a given length. We provided analytic forms describing the actual behavior of the recursive relations of the aforementioned probabilities and collected these results in specific propositions and theorems.

The analytical results were accompanied by two illustrations on human genome DNA strands, which are often studied using probabilistic, and specifically Markovian, models. Although several algorithmic approaches in the relevant literature analyze the occupancy and appearance of words in DNA sequences, the results of the illustration section strongly suggest that the proposed modeling framework can also be used to investigate the structure of genome sequences.

Of course, nothing comes without limitations, and these motivate further research. For example, additional research effort could be directed towards higher-order dependencies, since DNA sequences often exhibit long-range correlations; this could result in a more coherent modeling approach. Furthermore, additional parameters could be included in the model, for example the sequence length or specific mutations, resulting in more realistic representations of the different structures of the complex genomes of humans and other organisms. Finally, the proposed model could be applied in entirely different contexts, such as natural language processing, linguistics, text similarity and anomaly detection, i.e., areas of machine learning that have been amongst the most popular in data science and stochastic modeling over the last decade.

**Author Contributions:** Conceptualization, A.C.G., A.P. and P.K.; Data curation, P.K.; Formal analysis, P.K.; Investigation, A.P. and P.K.; Methodology, A.C.G., A.P., H.P. and E.F.; Software, P.K.; Supervision, A.C.G. and A.P.; Validation, A.C.G.; Visualization, P.K.; Writing—original draft, A.P., P.K., H.P. and E.F.; Writing—review & editing, A.C.G., A.P. and P.K. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Publicly available datasets were analyzed in this study. The data can be found here: https://www.ncbi.nlm.nih.gov/nuccore/CM000663, https://www.ncbi.nlm.nih.gov/gtr/genes/353/, https://www.ncbi.nlm.nih.gov/nuccore/1059792111.

**Acknowledgments:** The authors gratefully acknowledge the comments and suggestions of the three anonymous referees, which improved the content and the presentation of the paper.

**Conflicts of Interest:** The authors declare no conflicts of interest.

#### **Appendix A. Proofs**

*Appendix A.1. Proof of Theorem 1*

The results for (1) and (2) are obvious. For the third part, we used the matrix notation of the first passage time probabilities:

$$\mathbf{F}(k|t,n) = \sum_{m=1}^{n-k+1} \mathbf{C}(t,m) \left\{ \mathbf{F}(k-1|t+m,n-m) \circ \mathbf{B} \right\} + \delta(k-1)\mathbf{C}(t,n),$$

with **F**(*k*|*t*, *n*) = 0 if *k* > *n* or *k* = 0. For *k* = 1 the result is immediate; the case where *k* > 1 can be proved by induction. Thus, we assume that the result holds for *k* − 1 and show that it also holds for each *k* ≤ *n*. We note that the recursive relationship of the first passage time probabilities can be reformulated as follows.

$$\begin{split}
f_{ij}(k|t,n) &= \sum_{m_1=1}^{n-k+1} \sum_{x_1 \neq j} c_{ix_1}(t,m_1) \left\{ \sum_{m_2=1+m_1}^{n-k+2} \sum_{x_2 \neq j} c_{x_1 x_2}(t+m_1, m_2-m_1) \right. \\
&\quad \left. \left\{ \dots \left\{ \sum_{m_{k-1}=1+m_{k-2}}^{n-1} \sum_{x_{k-1} \neq j} c_{x_{k-2} x_{k-1}}(t+m_{k-2}, m_{k-1}-m_{k-2}) \, c_{x_{k-1} j}(t+m_{k-1}, n-m_{k-1}) \right\} \right\} \dots \right\} \\
&\quad + \delta(k-1)\, c_{ij}(t,n).
\end{split}$$

Using matrix notation, we can express the previous relationship as the following.

$$\begin{split}
\mathbf{F}(k|t,n) &= \sum_{m_1=1}^{n-k+1} \mathbf{C}(t,m_1) \left\{ \sum_{m_2=1+m_1}^{n-k+2} \mathbf{C}(t+m_1, m_2-m_1) \right. \\
&\quad \left. \left\{ \dots \left\{ \sum_{m_{k-1}=1+m_{k-2}}^{n-1} \mathbf{C}(t+m_{k-2}, m_{k-1}-m_{k-2}) \left\{ \mathbf{C}(t+m_{k-1}, n-m_{k-1}) \circ \mathbf{B} \right\} \right\} \circ \mathbf{B} \right\} \dots \circ \mathbf{B} \right\} \\
&\quad \text{for } 0 < k \le n.
\end{split}$$

The initial conditions are **F**(*k*|*t*, *n*) = 0 for *k* > *n* or *k* = 0 and **F**(1|*t*, *n*) = **C**(*t*, *n*). By using the following notation:

$$\sum_{m_1=1}^{n-k+1} \left\{ \sum_{m_2=1+m_1}^{n-k+2} \left\{ \dots \left\{ \sum_{m_{k-1}=1+m_{k-2}}^{n-1} \cdot \right\} \right\} \right\} = \sum_{m_1=1}^{n-k+1} * \sum_{m_2=1+m_1}^{n-k+2} * \dots * \sum_{m_{k-1}=1+m_{k-2}}^{n-1}$$

we obtain the following.

$$\begin{split}
\mathbf{F}(k|t,n) &= \sum_{m=1}^{n-k+1} \mathbf{C}(t,m) \left\{ \sum_{m_1=1}^{n-m-k+2} * \sum_{m_2=1+m_1}^{n-m-k+3} * \dots * \sum_{m_{k-2}=1+m_{k-3}}^{n-m-1} \mathbf{C}(t+m, m_1) \right. \\
&\quad \left. \left\{ \mathbf{C}(t+m+m_1, m_2-m_1) \left\{ \dots \left\{ \mathbf{C}(t+m+m_{k-2}, n-m-m_{k-2}) \circ \mathbf{B} \right\} \circ \mathbf{B} \right\} \dots \right\} \right\} \circ \mathbf{B}.
\end{split}$$

By the appropriate substitution of the time indices and by the definition of the operation $\prod_{r=1}^{2}{}^{\{\mathbf{B}\}}\mathbf{A}_{r} = \mathbf{A}_{1} *_{\mathbf{B}} \mathbf{A}_{2} = \mathbf{A}_{2}(\mathbf{A}_{1} \circ \mathbf{B})$ for matrices $\mathbf{A}_1$, $\mathbf{A}_2$, $\mathbf{B}$, we obtain the desired result.
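The recursion and its initial conditions can also be checked numerically. The sketch below is our own (the helper name `make_fpt`, the callable `core(t, m)` and the reading **B** = **U** − **I**, which masks entrances to the target state before the *k*-th transition, are assumptions of ours):

```python
import numpy as np
from functools import lru_cache

def make_fpt(core, N):
    """Evaluate F(k|t,n) from the recursion
         F(k|t,n) = sum_{m=1}^{n-k+1} C(t,m) { F(k-1|t+m, n-m) o B }
                    + delta(k-1) C(t,n),
       with F(k|t,n) = 0 for k = 0 or k > n.  `core(t, m)` must return the
       N x N core matrix C(t, m); `o` is the Hadamard product; B = U - I.
    """
    B = np.ones((N, N)) - np.eye(N)

    @lru_cache(maxsize=None)
    def F(k, t, n):
        if k == 0 or k > n:
            return np.zeros((N, N))
        if k == 1:                       # the delta(k-1) term: direct passage
            return core(t, n)
        out = np.zeros((N, N))
        for m in range(1, n - k + 2):
            out = out + core(t, m) @ (F(k - 1, t + m, n - m) * B)
        return out

    return F
```

With a deterministic two-state chain, the values of `F(k, t, n)` can be verified against the path structure by hand.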

#### *Appendix A.2. Proof of Theorem 2*

The results for (1) and (2) are obvious. For the third part, we use induction. In matrix notation, the recursive relationship for *k* = 2 gives the following.

$$\mathbf{D}(2|t,n) = \sum\_{m\_1=1}^{n-1} \left( \mathbf{C}(t,m\_1) \circ \mathbf{I} \right) \left( \mathbf{W}(t+m\_1,n-m\_1) - \mathbf{C}(t+m\_1,n-m\_1) \circ \mathbf{I} \right)$$

Now assume that the relationship holds for *k* − 1, that is, the following.

$$\begin{split}
\mathbf{D}(k-1|t+m,n-m) &= \sum_{m_1=1}^{n-m-k+2} * \sum_{m_2=1+m_1}^{n-m-k+3} * \dots * \sum_{m_{k-2}=1+m_{k-3}}^{n-m-1} \left( \mathbf{C}(t+m, m_1) \circ \mathbf{I} \right) \\
&\quad \left( \mathbf{C}(t+m+m_1, m_2-m_1) \circ \mathbf{I} \right) \dots \left( \mathbf{C}(t+m+m_{k-3}, m_{k-2}-m_{k-3}) \circ \mathbf{I} \right) \\
&\quad \left( \mathbf{W}(t+m+m_{k-2}, n-m-m_{k-2}) - \mathbf{C}(t+m+m_{k-2}, n-m-m_{k-2}) \circ \mathbf{I} \right).
\end{split}$$

Therefore, we obtain the following.

$$\begin{split}
\mathbf{D}(k|t,n) &= \sum_{m=1}^{n-k+1} * \sum_{m_1=1}^{n-m-k+2} * \dots * \sum_{m_{k-2}=1+m_{k-3}}^{n-m-1} \left( \mathbf{C}(t,m) \circ \mathbf{I} \right) \left( \mathbf{C}(t+m, m_1) \circ \mathbf{I} \right) \\
&\quad \left( \mathbf{C}(t+m+m_1, m_2-m_1) \circ \mathbf{I} \right) \dots \left( \mathbf{C}(t+m+m_{k-3}, m_{k-2}-m_{k-3}) \circ \mathbf{I} \right) \\
&\quad \left( \mathbf{W}(t+m+m_{k-2}, n-m-m_{k-2}) - \mathbf{C}(t+m+m_{k-2}, n-m-m_{k-2}) \circ \mathbf{I} \right).
\end{split}$$

By appropriately substituting the time indices with $m'_0 = 0$, $m'_1 = m$, $m'_2 = m + m_1$, ..., $m'_i = m + m_{i-1}$, ..., $m'_{k-1} = m + m_{k-2}$, $i = 1, 2, \dots, k-1$, where $1 + m'_{i-1} \le m'_i \le n - k + i$, we obtain the following:

$$\begin{split}
\mathbf{D}(k|t,n) &= \sum_{m'_1=1}^{n-k+1} * \sum_{m'_2=1+m'_1}^{n-k+2} * \dots * \sum_{m'_{k-1}=1+m'_{k-2}}^{n-1} \left( \mathbf{C}(t, m'_1) \circ \mathbf{I} \right) \left( \mathbf{C}(t+m'_1, m'_2-m'_1) \circ \mathbf{I} \right) \\
&\quad \dots \left( \mathbf{C}(t+m'_{k-2}, m'_{k-1}-m'_{k-2}) \circ \mathbf{I} \right) \left( \mathbf{W}(t+m'_{k-1}, n-m'_{k-1}) - \mathbf{C}(t+m'_{k-1}, n-m'_{k-1}) \circ \mathbf{I} \right),
\end{split}$$

which results in the stated relationship.
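The duration recursion used in this proof can also be sketched numerically. The snippet below is our own illustration, not the authors' code: the recursion **D**(*k*|*t*, *n*) = Σ*<sup>m</sup>* (**C**(*t*, *m*) ∘ **I**) **D**(*k* − 1|*t* + *m*, *n* − *m*) with base case **D**(1|*t*, *n*) = **W**(*t*, *n*) − **C**(*t*, *n*) ∘ **I** is taken from the text, while reading **W**(*t*, *n*) as the diagonal matrix of holding-time densities *wi*(*t*, *n*) = Σ*<sup>j</sup>* *cij*(*t*, *n*) is our interpretation.

```python
import numpy as np
from functools import lru_cache

def make_duration(core, N):
    """Duration probabilities D(k|t,n):
         D(1|t,n) = W(t,n) - C(t,n) o I,
         D(k|t,n) = sum_{m=1}^{n-k+1} (C(t,m) o I) D(k-1|t+m, n-m),
       where o I keeps only the virtual (self-loop) transitions and W(t,n)
       is read here as diag of the holding-time densities (our assumption).
       `core(t, m)` must return the N x N core matrix C(t, m)."""
    I = np.eye(N)

    @lru_cache(maxsize=None)
    def D(k, t, n):
        if k == 0 or k > n:
            return np.zeros((N, N))
        C_tn = core(t, n)
        if k == 1:
            # a real transition (to a different state) occurs at time n
            return np.diag(C_tn.sum(axis=1)) - C_tn * I
        out = np.zeros((N, N))
        for m in range(1, n - k + 2):
            out = out + (core(t, m) * I) @ D(k - 1, t + m, n - m)
        return out

    return D
```

With a state that self-loops with probability 0.5 per unit step, the diagonal of `D(k, t, n)` reproduces the expected geometric pattern (*k* − 1 virtual jumps, then a real one).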

#### *Appendix A.3. Proof of Theorem 3*

Assuming homogeneity in time, Equation (4) becomes the following:

$$\omega_{ij}(x|n) = \sum_{\substack{r=1 \\ r \neq j}}^{N} \sum_{m=0}^{n} c_{ir}(m)\, \omega_{rj}(x|n-m) + \sum_{m=0}^{n} c_{ij}(m)\, \omega_{jj}(x-1|n-m) + \delta(x)\, {}^{>}w_{i}(n), \tag{A1}$$

where *i*, *j* ∈ *S*, *n* = 0, 1, . . . and *x* = 0, 1, . . .. Equation (A1) can be written as follows.

$$\omega_{ij}(x|n) = \sum_{r=1}^{N} \sum_{m=0}^{n} c_{ir}(m) \left[ \omega_{rj}(x|n-m)(1-\delta_{rj}) + \omega_{rj}(x-1|n-m)\delta_{rj} \right] + \delta(x)\, {}^{>}w_{i}(n). \tag{A2}$$

Equation (A2) in matrix notation is the following.

$$\boldsymbol{\Omega}(x|n) = \sum_{m=1}^{n} \mathbf{C}(m) \left[ \boldsymbol{\Omega}(x|n-m) \circ (\mathbf{U}-\mathbf{I}) + \boldsymbol{\Omega}(x-1|n-m) \circ \mathbf{I} \right] + \delta(x)\, {}^{>}\mathbf{W}(n).$$

By applying the geometric transform to the above, we obtain the following:

$$\boldsymbol{\Omega}^{g}(z|n) = \sum_{m=1}^{n} \mathbf{C}(m) \boldsymbol{\Omega}^{g}(z|n-m) + (z-1) \sum_{m=1}^{n} \mathbf{C}(m) \left[ \boldsymbol{\Omega}^{g}(z|n-m) \circ \mathbf{I} \right] + {}^{>}\mathbf{W}(n),$$

with initial condition **Ω***<sup>g</sup>*(*z*|0) = **I**. Following the methodology of Vassiliou and Papadopoulou (1992) [15], we derive the result of Theorem 3.

#### *Appendix A.4. Proof of Lemma 1*

By using the Hadamard product on Theorem 3, we have the following.

$$\begin{aligned}
\boldsymbol{\Omega}^{g}(z|n) \circ \mathbf{I} &= -(z-1) \sum_{j=1}^{n-1} \left[ \left[ \sum_{i=1}^{j} \mathbf{a}_{1i}^{-1} \mathbf{C}(j+1-i) \right] \left[ \boldsymbol{\Omega}^{g}(z|n-j) \circ \mathbf{I} \right] \right] \circ \mathbf{I} \\
&\quad - z \sum_{j=1}^{n} \left[ \mathbf{a}_{1j}^{-1} \mathbf{C}(n+1-j) \right] \circ \mathbf{I} - \sum_{j=1}^{n} \left[ \mathbf{a}_{1j}^{-1}\, {}^{>}\mathbf{W}(n+1-j) \right] \circ \mathbf{I}.
\end{aligned}$$

By using the following property:

$$(\mathbf{A}(\mathbf{B} \circ \mathbf{I})) \circ \mathbf{I} = (\mathbf{A} \circ \mathbf{I})(\mathbf{B} \circ \mathbf{I}),$$

we obtain the following:

$$\left[ \left[ \sum_{i=1}^{j} \mathbf{a}_{1i}^{-1} \mathbf{C}(j+1-i) \right] \left[ \boldsymbol{\Omega}^{g}(z|n-j) \circ \mathbf{I} \right] \right] \circ \mathbf{I} = \left[ \left[ \sum_{i=1}^{j} \mathbf{a}_{1i}^{-1} \mathbf{C}(j+1-i) \right] \circ \mathbf{I} \right] \left[ \boldsymbol{\Omega}^{g}(z|n-j) \circ \mathbf{I} \right],$$

which completes the proof.
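The Hadamard property used above is easy to confirm numerically; the following quick check is our own, with arbitrary random matrices:

```python
import numpy as np

# Verify (A(B o I)) o I = (A o I)(B o I), where o is the Hadamard product:
# both sides reduce to the diagonal matrix with entries a_ii * b_ii.
rng = np.random.default_rng(0)
A, B = rng.random((4, 4)), rng.random((4, 4))
I = np.eye(4)
lhs = (A @ (B * I)) * I          # (A(B o I)) o I
rhs = (A * I) @ (B * I)          # (A o I)(B o I)
assert np.allclose(lhs, rhs)
```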

#### *Appendix A.5. Proof of Theorem 4*

An early version of the proof of Theorem 4 can be found in [56]. We present here all necessary steps of the proof in detail. Using the equations provided by the results of Theorem 3 and substituting **Ω***<sup>g</sup>*(*z*|*n*) ◦ **I** with the result of Lemma 1, we obtain the analytic relation for the geometric transforms **Ω***<sup>g</sup>*(*z*|*n*), which is as follows:

$$\begin{split}
\boldsymbol{\Omega}^{g}(z|n) &= (z-1) \sum_{j=1}^{n-1} \mathbf{A}_{j} \left[ z\mathbf{G}_{1,n,j} + z \sum_{u=2}^{n-j} \left[ (z-1)\mathbf{M}'_{u} + \sum_{k=1}^{n-2} (z-1)^{k+1} \mathbf{R}_{u}(k,m_{k}) \right] \mathbf{G}_{u,n,j} \right. \\
&\quad \left. + \mathbf{Q}_{1,n,j} + \sum_{u=2}^{n-j} \left( (z-1)\mathbf{M}'_{u} + \sum_{k=1}^{n-2} (z-1)^{k+1} \mathbf{R}_{u}(k,m_{k}) \right) \mathbf{Q}_{u,n,j} \right] \\
&\quad + z\mathbf{A}_{n} + \mathbf{E}_{n},
\end{split} \tag{A3}$$

where

$$\begin{aligned}
\mathbf{A}_{j} &= \mathbf{C}(j) + \sum_{i=2}^{j} \left( \mathbf{C}(i-1) + \sum_{k=1}^{i-2} \mathbf{S}_{i}(k,m_{k}) \right) \mathbf{C}(j+1-i), \\
\mathbf{M}'_{u} &= \left[ \mathbf{C}(u-1) + \sum_{i=2}^{u-1} \left( \mathbf{C}(i-1) + \sum_{k=1}^{i-2} \mathbf{S}_{i}(k,m_{k}) \right) \mathbf{C}(u-i) \right] \circ \mathbf{I}, \\
\mathbf{E}_{n} &= \sum_{j=2}^{n} \left( \mathbf{C}(j-1) + \sum_{k=1}^{j-2} \mathbf{S}_{j}(k,m_{k}) \right) {}^{>}\mathbf{W}(n+1-j) + {}^{>}\mathbf{W}(n), \\
\mathbf{G}_{u,n,j} &= \mathbf{C}(n-j+1-u) \circ \mathbf{I} + \sum_{w=2}^{n-j+1-u} \left[ \left( \mathbf{C}(w-1) + \sum_{k=1}^{w-2} \mathbf{S}_{w}(k,m_{k}) \right) \mathbf{C}(n-j+2-u-w) \right] \circ \mathbf{I}, \\
\mathbf{Q}_{u,n,j} &= \sum_{w=2}^{n-j+1-u} \left[ \left( \mathbf{C}(w-1) + \sum_{k=1}^{w-2} \mathbf{S}_{w}(k,m_{k}) \right) {}^{>}\mathbf{W}(n-j+2-u-w) \right] \circ \mathbf{I} + \left[ {}^{>}\mathbf{W}(n-j+1-u) \right] \circ \mathbf{I}.
\end{aligned}$$

Then, by applying properties of the inverse geometric transforms through the equation $\boldsymbol{\Omega}(x|n) = \frac{1}{x!} \left. \frac{d^{x}}{dz^{x}} \boldsymbol{\Omega}^{g}(z|n) \right|_{z=0}$ and by repeatedly taking the derivatives of **Ω***<sup>g</sup>*(*z*|*n*) with respect to *z*, we obtain the result of Theorem 4 for *x* ≥ 1.

Finally, for the special case where *x* = 0, by substituting *z* = 0 in expression (A3), we obtain the following:

$$\boldsymbol{\Omega}(0|n) = -\sum_{j=1}^{n-1} \mathbf{A}_{j} \left[ \mathbf{B}_{n,j} + \sum_{u=2}^{n-j} \mathbf{M}_{u} \mathbf{Q}_{u,n,j} \right] + \mathbf{E}_{n},$$

where the following hold.

$$\begin{aligned}
\mathbf{B}_{n,j} &= \left[ \sum_{w=2}^{n-j} \left[ \left( \mathbf{C}(w-1) + \sum_{k=1}^{w-2} \mathbf{S}_{w}(k,m_{k}) \right) {}^{>}\mathbf{W}(n-j+1-w) \right] \circ \mathbf{I} + {}^{>}\mathbf{W}(n-j) \right] \circ \mathbf{I}, \\
\mathbf{M}_{u} &= -\left[ \mathbf{C}(u-1) + \sum_{i=2}^{u-1} \left( \mathbf{C}(i-1) + \sum_{k=1}^{i-2} \mathbf{S}_{i}(k,m_{k}) \right) \mathbf{C}(u-i) \right] \circ \mathbf{I} + \sum_{k=1}^{u-2} (-1)^{k+1} \mathbf{R}_{u}(k,m_{k}).
\end{aligned}$$

#### **Appendix B**

*Appendix B.1*

$${}^{>}f_{ij}(n) = 1 - \sum_{m=0}^{n} f_{ij}(m) \;\Rightarrow\; {}^{>}f_{ij}^{g}(z) = \sum_{n=0}^{\infty} {}^{>}f_{ij}(n) z^{n} = \sum_{n=0}^{\infty} z^{n} - \sum_{n=0}^{\infty} \left( \sum_{m=0}^{n} f_{ij}(m) \right) z^{n} = \frac{1}{1-z} - \frac{f_{ij}^{g}(z)}{1-z} = \frac{1 - f_{ij}^{g}(z)}{1-z}.$$
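This geometric-transform identity for the survivor sequence can be checked numerically with a truncated series; the following sanity check is our own, using arbitrary toy first-passage mass:

```python
import numpy as np

# Check >f^g(z) = (1 - f^g(z)) / (1 - z) for a toy first-passage distribution.
z = 0.3
f = np.zeros(2000)
f[3], f[7] = 0.4, 0.6                 # toy mass at n = 3 and n = 7 (sums to 1)
surv = 1.0 - np.cumsum(f)             # >f(n) = 1 - sum_{m<=n} f(m)
n = np.arange(f.size)
lhs = np.sum(surv * z**n)             # transform of the survivor sequence
rhs = (1.0 - np.sum(f * z**n)) / (1.0 - z)
assert abs(lhs - rhs) < 1e-10
```

Since the survivor sequence vanishes beyond the last mass point, the truncated sum is exact here.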

*Appendix B.2*

$$\begin{aligned}
\omega_{ij}^{gg}(y|z) &= {}^{>}f_{ij}^{g}(z) + \frac{y f_{ij}^{g}(z)\, {}^{>}f_{jj}^{g}(z)}{1 - y f_{jj}^{g}(z)} = \frac{1 - f_{ij}^{g}(z)}{1-z} + \frac{y f_{ij}^{g}(z) \left( 1 - f_{jj}^{g}(z) \right)}{(1-z) \left( 1 - y f_{jj}^{g}(z) \right)} \\
&= \frac{\left( 1 - f_{ij}^{g}(z) \right) \left( 1 - y f_{jj}^{g}(z) \right) + y f_{ij}^{g}(z) \left( 1 - f_{jj}^{g}(z) \right)}{(1-z) \left( 1 - y f_{jj}^{g}(z) \right)} \\
&= \frac{1 - y f_{jj}^{g}(z) - f_{ij}^{g}(z) + y f_{jj}^{g}(z) f_{ij}^{g}(z) + y f_{ij}^{g}(z) - y f_{ij}^{g}(z) f_{jj}^{g}(z)}{(1-z) \left( 1 - y f_{jj}^{g}(z) \right)} \\
&= \frac{1 - y f_{jj}^{g}(z) - (1-y) f_{ij}^{g}(z)}{(1-z) \left( 1 - y f_{jj}^{g}(z) \right)}.
\end{aligned}$$

#### **References**

