*Article* **Sequential Interval Reliability for Discrete-Time Homogeneous Semi-Markov Repairable Systems**

**Vlad Stefan Barbu <sup>1</sup>, Guglielmo D'Amico <sup>2,</sup>\* and Thomas Gkelsinis <sup>1</sup>**


**Abstract:** In this paper, a new reliability measure, named sequential interval reliability, is introduced for homogeneous semi-Markov repairable systems in discrete time. This measure is the probability that the system is working during a given sequence of non-overlapping time intervals. Many reliability measures are particular cases of this new measure that we propose; this is the case for the interval reliability, the reliability function and the availability function. A recurrent-type formula is established for its calculation in the transient case and an asymptotic result determines its limiting behaviour. The results are illustrated by means of a numerical example that shows a possible application of the measure to real systems.

**Keywords:** semi-Markov; reliability; transient analysis; asymptotic analysis

**Citation:** Barbu, V.S.; D'Amico, G.; Gkelsinis, T. Sequential Interval Reliability for Semi-Markov Repairable Systems. *Mathematics* **2021**, *9*, 1997. https://doi.org/10.3390/math9161997


Academic Editors: Panagiotis-Christos Vassiliou and Andreas C. Georgiou

Received: 10 July 2021 Accepted: 19 August 2021 Published: 20 August 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

#### **1. Introduction**

This paper is concerned with reliability indicators for semi-Markov systems. As is well known (see, e.g., [1–6]), semi-Markov processes represent an important modelling tool for practical problems in reliability, survival analysis, financial mathematics, and manpower planning, among other applied domains. The attractiveness of these processes comes from the fact that the sojourn time in a state can be arbitrarily distributed, as compared to Markov processes, where the sojourn time in a state is constrained to be geometrically or exponentially distributed.

Several researchers have investigated the reliability measures of semi-Markov processes. Examples of discrete-time semi-Markov processes with the associated reliability measures and statistical topics can be found in, e.g., [7–10], where a semi-Markov chain usage model in discrete time was proposed and analytical formulas were provided for the mean and variance of the single-use reliability of the system. The evaluation of reliability indicators for continuous-time semi-Markov processes and statistical inference can be found in [11–15]. Readers interested in numerically solving continuous-time semi-Markov processes by means of discrete-time ones are referred to [16–19].

In the present work, we propose a new measure for analysing the performance of a system, called the sequential interval reliability (SIR). This generalises the notion of interval reliability, as introduced in [20] for discrete-time semi-Markov processes and further studied in [21,22]. In line with the work of [23], we are also interested in a general definition that takes into account the dependence on what is called the final backward. It is worth mentioning that interval reliability was first introduced and studied for continuous-time semi-Markov systems in [24,25]. In those contributions, the interval reliability was expressed in terms of a system of integral equations.

This measure computes the probability that a system is in a working state during a sequence of non-overlapping intervals. This type of measure is of importance in several applications: in reliability, when a system has to perform during consecutive time periods; in extreme value theory, where we can be interested in the occurrence of an extreme event during several time periods; in energy studies, where we are interested, for instance, in the electricity consumption being above or below a certain threshold; and in financial modelling, in order to create advanced credit scoring models.

This article is structured as follows: in the next section, we introduce the basic semi-Markov notions and notations and we also give the corresponding measures of reliability. The main object of our study, namely sequential interval reliability, is introduced in Section 3. Then, we first perform a transient analysis, providing a recurrence formula for computing the SIR. Second, we furnish an asymptotic result, as the time of interest extends to infinity. A numerical example is provided in Section 4, illustrating some aspects of our theoretical work.

#### **2. Discrete-Time Semi-Markov Processes and Reliability Measures**

Let us consider a random system with finite state space $E = \{1, \dots, s\}$, $s < \infty$, and let $(\Omega, \mathcal{A}, \mathbb{P})$ be a probability space. We assume that the evolution in time of the system is governed by a stochastic process $Z = (Z_k)_{k \in \mathbb{N}}$, defined on $(\Omega, \mathcal{A}, \mathbb{P})$ with values in $E$; in other words, $Z_k$ gives the state of the system at time $k$. Let $T = (T_n)_{n \in \mathbb{N}}$, defined on $(\Omega, \mathcal{A}, \mathbb{P})$ with values in $\mathbb{Z}$, be the successive time points when state changes in $(Z_k)_{k \in \mathbb{N}}$ occur (the jump times) and let $J = (J_n)_{n \in \mathbb{N}}$, defined on $(\Omega, \mathcal{A}, \mathbb{P})$ with values in $E$, be the successively visited states at these time points. We denote by $X = (X_n)_{n \in \mathbb{N}^*}$ the successive sojourn times in the visited states, i.e., $X_{n+1} = T_{n+1} - T_n$, $n \in \mathbb{N}$. The relation between the process $Z$ and the process $J$ of the successively visited states is given by $Z_k = J_{N(k)}$, or, equivalently, $J_n = Z_{T_n}$, $n, k \in \mathbb{N}$, where:

$$N(k) := \max\{n \in \mathbb{N} \mid T\_n \le k\} \tag{1}$$

is the discrete-time counting process of the number of jumps in [0, *k*] ⊂ N.
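As a small illustration, $N(k)$ can be read off directly from a sorted sequence of jump times; the values below are hypothetical:

```python
# Sketch: N(k) = max{n in N | T_n <= k}, computed from a hypothetical
# sorted sequence of jump times T_0 = 0 < T_1 < T_2 < ...
def counting_process(jump_times, k):
    """Number of jumps in [0, k] (T_0 = 0 has index 0)."""
    return max(n for n, t in enumerate(jump_times) if t <= k)

jump_times = [0, 3, 5, 9]                 # T_0, T_1, T_2, T_3
print(counting_process(jump_times, 4))    # → 1 (only T_1 = 3 occurred by time 4)
print(counting_process(jump_times, 9))    # → 3
```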

**Definition 1** (Semi-Markov chain SMC and Markov renewal chain MRC)**.** *If we have:*

$$\mathbb{P}(J_{n+1} = j, T_{n+1} - T_n = k \mid J_n = i, J_{n-1}, \dots, J_0, T_n, \dots, T_0) = \mathbb{P}(J_{n+1} = j, T_{n+1} - T_n = k \mid J_n = i), \tag{2}$$

*then Z* = (*Zk*)*<sup>k</sup> is called a semi-Markov chain (SMC) and* (*J*, *T*)=(*Jn*, *Tn*)*<sup>n</sup> is called a Markov renewal chain (MRC).*

Throughout this paper, we assume that the MRC or SMC is homogeneous with respect to time, in the sense that the probability in Equation (2) does not depend on *n*. Thus, we will work under the following assumption:

**Assumption 1.** *The SMC (or, equivalently, the MRC) is assumed to be homogeneous in time.*

It is clear that, if (*J*, *<sup>T</sup>*) is a MRC, then *<sup>J</sup>* = (*Jn*)*n*∈<sup>N</sup> is a Markov chain with state space *E*, called the *embedded Markov chain* of the MRC (*J*, *T*) (or of the SMC *Z*).

**Definition 2.** *For a semi-Markov chain, under Assumption 1, we define:*

*1. the transition probabilities of the embedded Markov chain, $p_{ij} := \mathbb{P}(J_{n+1} = j \mid J_n = i)$, $i, j \in E$;*

*2. the conditional sojourn time distributions, $f_{ij}(k) := \mathbb{P}(T_{n+1} - T_n = k \mid J_n = i, J_{n+1} = j)$, $k \in \mathbb{N}$;*

*3. the sojourn time distributions in a given state, $h_i(k) := \mathbb{P}(T_{n+1} - T_n = k \mid J_n = i)$, $k \in \mathbb{N}$;*

*4. the semi-Markov core matrix, $q_{ij}(k) := \mathbb{P}(J_{n+1} = j, T_{n+1} - T_n = k \mid J_n = i)$, $k \in \mathbb{N}$.*

Note that:

$$q_{ij}(k) = p_{ij} f_{ij}(k), \qquad h_i(k) = \sum_{j \in E} q_{ij}(k), \qquad i, j \in E,\ k \in \mathbb{N}.$$

**Remark 1.** *We would like to draw the attention to a specific terminological matter that we encountered in the literature of discrete-time semi-Markov processes with finite or countable state space and that may lead to terminological confusion.*

*Some authors use the term "semi-Markov kernel" of discrete-time SM processes for* $\mathbb{P}(J_{n+1} = j, T_{n+1} - T_n = k \mid J_n = i)$ *(see, e.g., [1,7,8,10,19,20,22]). Other authors use the term "semi-Markov kernel" of discrete-time SM processes for* $\mathbb{P}(J_{n+1} = j, T_{n+1} - T_n \le k \mid J_n = i)$ *(see, e.g., [21,23,26–28]), while the quantity* $\mathbb{P}(J_{n+1} = j, T_{n+1} - T_n = k \mid J_n = i)$ *can have several names, for instance semi-Markov core matrix. In this article, we use this second terminology. In the authors' opinion, this terminological confusion stems from the following reasons:*


*In any case, all this discussion is only a matter of notational convenience.*

Clearly, a semi-Markov chain is uniquely determined a.s. by an initial distribution $(\mu_i)_{i \in E}$ and a *semi-Markov core matrix* $(q_{ij}(k))_{i,j \in E, k \in \mathbb{N}}$ or, equivalently, by an initial distribution $(\mu_i)_{i \in E}$, a Markov transition matrix $(p_{ij})_{i,j \in E}$ and conditional sojourn time distributions $(f_{ij}(k))_{i,j \in E, k \in \mathbb{N}}$.
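This determination is constructive: given $(\mu_i)$, $(p_{ij})$ and $(f_{ij}(k))$, one can sample the embedded chain and the sojourn times in turn. A minimal simulation sketch for a hypothetical three-state model (all numerical values are illustrative, not taken from the paper):

```python
import random

# Simulation sketch of a discrete-time semi-Markov chain from an initial
# law mu, a transition matrix p (p_ii = 0, Assumption 2) and conditional
# sojourn laws f supported on {1, 2, 3}.  The 3-state model is hypothetical.
mu = [1.0, 0.0, 0.0]
p = [[0.0, 0.8, 0.2],
     [0.7, 0.0, 0.3],
     [0.5, 0.5, 0.0]]
f = {(0, 1): [0.5, 0.3, 0.2], (0, 2): [0.2, 0.5, 0.3],
     (1, 0): [0.6, 0.2, 0.2], (1, 2): [0.3, 0.3, 0.4],
     (2, 0): [0.4, 0.4, 0.2], (2, 1): [0.1, 0.6, 0.3]}

def simulate_path(horizon, rng):
    """Return the trajectory (Z_0, ..., Z_horizon)."""
    state = rng.choices(range(3), weights=mu)[0]                   # J_0 ~ mu
    path, t = [], 0
    while t <= horizon:
        nxt = rng.choices(range(3), weights=p[state])[0]           # J_{n+1}
        sojourn = rng.choices([1, 2, 3], weights=f[(state, nxt)])[0]  # X_{n+1}
        path.extend([state] * sojourn)     # Z stays in `state` until the jump
        state, t = nxt, t + sojourn
    return path[:horizon + 1]

print(simulate_path(10, random.Random(42)))
```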

Our work will be carried out under the following assumptions:

**Assumption 2.** *Transitions to the same state are not allowed, i.e., pii* ≡ 0 *for all i* ∈ *E*.

**Assumption 3.** *There are no instantaneous transitions, i.e., qij*(0) ≡ 0 *for all i*, *j* ∈ *E*.

Clearly, Assumption 2 is equivalent to *qii*(*k*) = 0 for all *i* ∈ *E*, *k* ∈ N, and Assumption 3 is equivalent to *fij*(0) ≡ 0 for all *i*, *j* ∈ *E*; note that this implies that *T* is a strictly increasing sequence.

For the conditional sojourn time distribution and sojourn time distribution in a state, one can consider the associated cumulative distribution functions defined by

$$F_{ij}(k) \quad := \quad \mathbb{P}(T_{n+1} - T_n \le k \mid J_n = i, J_{n+1} = j) = \sum_{t=1}^{k} f_{ij}(t);$$

$$H_i(k) \quad := \quad \mathbb{P}(T_{n+1} - T_n \le k \mid J_n = i) = \sum_{t=1}^{k} h_i(t).$$

For any distribution function *F*(·), we can consider the associated survival/reliability function defined by

$$
\overline{F}(k) \quad := \quad 1 - F(k).
$$

Consequently, we have:

$$\begin{array}{lll} \overline{F}_{ij}(k) &:=& \mathbb{P}(T_{n+1} - T_n > k \mid J_n = i, J_{n+1} = j) = 1 - \sum_{t=0}^{k} f_{ij}(t) = \sum_{t=k+1}^{\infty} f_{ij}(t); \\\\ \overline{H}_i(k) &:=& \mathbb{P}(T_{n+1} - T_n > k \mid J_n = i) = 1 - \sum_{t=0}^{k} h_i(t) = \sum_{t=k+1}^{\infty} h_i(t). \end{array}$$
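These quantities are straightforward to compute numerically from $p$ and $f$, using $q_{ij}(k) = p_{ij} f_{ij}(k)$ and $h_i(k) = \sum_j q_{ij}(k)$. A minimal sketch for a hypothetical two-state model:

```python
# Sketch: core matrix q, sojourn law h and survival function H-bar built
# from p and f for a hypothetical two-state model (sojourns in {1, 2}).
p = [[0.0, 1.0], [1.0, 0.0]]
f = {(0, 1): [0.5, 0.5], (1, 0): [0.25, 0.75]}

def q(i, j, k):                 # q_ij(k) = p_ij * f_ij(k), zero outside {1, 2}
    return p[i][j] * f.get((i, j), [0.0, 0.0])[k - 1] if 1 <= k <= 2 else 0.0

def h(i, k):                    # h_i(k) = sum_j q_ij(k)
    return sum(q(i, j, k) for j in range(2))

def H_bar(i, k):                # P(T_{n+1} - T_n > k | J_n = i)
    return 1.0 - sum(h(i, t) for t in range(1, k + 1))

print(H_bar(0, 1))   # → 0.5
print(H_bar(1, 1))   # → 0.75
print(H_bar(1, 2))   # → 0.0
```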

To investigate the reliability behaviour of a semi-Markov system, we split the state space $E$ into two subsets: $U$ for the up-states and $D$ for the down-states, with $E = U \cup D$ and $U \cap D = \emptyset$. For simplicity, we consider $U = \{1, \dots, s_1\}$ and $D = \{s_1 + 1, \dots, s\}$.

Two important reliability measures of a system are the reliability (or survival) function at time *k* ∈ N, denoted by *R*(*k*), and the (instantaneous) availability function at time *k* ∈ N, denoted by *A*(*k*), defined, respectively, by

$$\begin{array}{rcl} R(k) &:=& \mathbb{P}(Z\_0 \in \mathcal{U}, \dots, Z\_k \in \mathcal{U}),\\ A(k) &:=& \mathbb{P}(Z\_k \in \mathcal{U}). \end{array}$$

If we condition on the initial state, we obtain the corresponding conditional reliability (or survival) function at time $k \in \mathbb{N}$ given that $\{Z_0 = i\}$, $i \in U$, denoted by $R_i(k)$, and the conditional (instantaneous) availability function at time $k \in \mathbb{N}$ given that $\{Z_0 = i\}$, $i \in E$, denoted by $A_i(k)$, defined, respectively, by

$$\begin{array}{rcl} R_i(k) &:=& \mathbb{P}(Z_0 \in U, \dots, Z_k \in U \mid Z_0 = i), \quad i \in U, \\ A_i(k) &:=& \mathbb{P}(Z_k \in U \mid Z_0 = i), \quad i \in E. \end{array}$$

Let us now define the interval reliability, introduced in [20] as the probability that the system is in up-states during a time interval.

**Definition 3** (Interval reliability, conditional interval reliability, cf. [20])**.** *For k*, *p* ∈ N *and i* ∈ *E*, *the interval reliability IR*(*k*, *p*) *and conditional interval reliability IRi*(*k*, *p*) *given the event* {*Z*<sup>0</sup> = *i*} *are, respectively, defined by*

$$IR(k, p) \quad := \quad \mathbb{P}(Z\_l \in \mathcal{U}, l \in [k, k + p]);\tag{3}$$

$$IR_i(k, p) \quad := \quad \mathbb{P}(Z_l \in U,\ l \in [k, k + p] \mid Z_0 = i). \tag{4}$$

For *k*, *p* ∈ N and *i* ∈ *E*, it is clear that we have the following properties of the interval reliability and conditional interval reliability (cf. Proposition 1 and Remark 1 of [20]):

$$R(k+p) \le IR(k,p) \le A(k+p);\tag{5}$$

$$IR_i(0, p) = R_i(p); \tag{6}$$

$$IR\_i(k,0) = A\_i(k). \tag{7}$$
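These inequalities can be checked empirically. The Monte Carlo sketch below uses a hypothetical two-state alternating up/down model (state 0 up, state 1 down); since the three underlying events are nested pathwise, the estimates satisfy (5) on any sample:

```python
import random

# Monte Carlo check of R(k+p) <= IR(k,p) <= A(k+p) on a hypothetical
# two-state alternating model: state 0 is up, state 1 is down.
UP = {0}
F = {0: [0.2, 0.3, 0.5], 1: [0.6, 0.3, 0.1]}   # sojourn laws, support {1, 2, 3}

def simulate_path(horizon, rng):
    state, path = 0, []
    while len(path) <= horizon:
        sojourn = rng.choices([1, 2, 3], weights=F[state])[0]
        path.extend([state] * sojourn)
        state = 1 - state                        # alternate up/down
    return path[:horizon + 1]

def estimate(k, p, n_paths=2000, seed=1):
    rng = random.Random(seed)
    r = ir = a = 0
    for _ in range(n_paths):
        z = simulate_path(k + p, rng)
        r += all(s in UP for s in z)                 # event for R(k+p)
        ir += all(s in UP for s in z[k:k + p + 1])   # event for IR(k,p)
        a += z[k + p] in UP                          # event for A(k+p)
    return r / n_paths, ir / n_paths, a / n_paths

R_hat, IR_hat, A_hat = estimate(k=4, p=3)
assert R_hat <= IR_hat <= A_hat                      # inequality (5), pathwise
print(R_hat, IR_hat, A_hat)
```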

#### **3. Sequential Interval Reliability**

Let us consider a repairable system. In this section, we introduce a new reliability measure that we will call sequential interval reliability. This will generalise the notion of interval reliability presented before, in the sense that we are looking at the probability that the system is in working mode during two or several non-overlapping intervals.

More precisely, let us consider two time sequences $\underline{t} := (t_i)_{i=1,\dots,N}$ and $\underline{p} := (p_i)_{i=1,\dots,N}$ such that:

*1. $t_i, p_i \in \mathbb{N}$ for all $i = 1, \dots, N$;*

*2. $t_i + p_i < t_{i+1}$ for all $i = 1, \dots, N-1$.*
It is clear that, in this case, {[*ti*, *ti* + *pi*]}*i*=1,...,*<sup>N</sup>* is a sequence of non-overlapping real intervals.

For a sequence $\underline{t} := (t_i)_{i=1,\dots,N}$ and indexes $k_1, k_2 \in \mathbb{N}$, $k_1 \le k_2$, we will also use the notation $t_{k_1:k_2} := (t_i)_{i=k_1,\dots,k_2}$.

**Definition 4** (Sequential interval reliability)**.** *Let* (*Zk*)*k*∈<sup>N</sup> *be a discrete time semi-Markov system and let t* := (*ti*)*i*=1,...,*<sup>N</sup> and p* := (*pi*)*i*=1,...,*N*, *N* ∈ N∗, *be two time sequences such that* {[*ti*, *ti* + *pi*]}*i*=1,...,*<sup>N</sup> is a sequence of non-overlapping real intervals. We assume that Assumptions 1–3 hold true.*

*1. We define the sequential interval reliability, SIR*(*N*)(*t*, *p*), *as the probability that the system is in the up-states U during the time intervals* {[*ti*, *ti* + *pi*]}*i*=1,...,*N, meaning that:*

$$SIR^{(N)}(\underline{t}, \underline{p}) := \mathbb{P}(Z_l \in U, \text{ for all } l \in [t_i, t_i + p_i],\ i = 1, \dots, N); \tag{8}$$

*2. For $v \in \mathbb{N}$ and $k \in E$, we define the conditional sequential interval reliability, $SIR_k^{(N)}(v; \underline{t}, \underline{p})$, as the conditional probability that the system is in the up-states U during the time intervals $\{[t_i, t_i + p_i]\}_{i=1,\dots,N}$, given the event $(k, v) := \{Z_0 = k, B_0 = v\} = \{J_{N(0)} = k, T_{N(0)} = -v\}$, meaning that:*

$$\begin{split} \operatorname{SIR}\_{k}^{(N)}(\boldsymbol{\upsilon};\underline{t},\underline{p}) &:= \mathbb{P}(\boldsymbol{Z}\_{l} \in \mathsf{U}, \text{ for all } l \in [t\_{i}, t\_{i} + p\_{i}], i = 1, \dots, N \mid \boldsymbol{Z}\_{0} = k, \boldsymbol{B}\_{0} = \boldsymbol{\upsilon}) \\ &= \mathbb{P}\_{(k, \boldsymbol{\upsilon})}(\boldsymbol{Z}\_{l} \in \mathsf{U}, \text{ for all } l \in [t\_{i}, t\_{i} + p\_{i}], i = 1, \dots, N), \end{split} \tag{9}$$

*where Bt* := *t* − *TN*(*t*) *is the backward time process associated to the semi-Markov process.*

Note that we have the obvious relationship between the sequential interval reliability and the conditional sequential interval reliability:

$$SIR^{(N)}(\underline{t}, \underline{p}) = \sum_{k \in E} \mu_k\, SIR_k^{(N)}(0; \underline{t}, \underline{p}). \tag{10}$$

For notational convenience, we will set:

$$SIR_k^{(N)}(\underline{t}, \underline{p}) \quad := \quad SIR_k^{(N)}(0; \underline{t}, \underline{p}) = \mathbb{P}(Z_l \in U, \text{ for all } l \in [t_i, t_i + p_i],\ i = 1, \dots, N \mid Z_0 = k).$$
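The quantity just defined is straightforward to estimate by simulation. A Monte Carlo sketch for $N = 2$ windows on a hypothetical two-state alternating model (state 0 up, state 1 down); all numbers are illustrative:

```python
import random

# Monte Carlo sketch of SIR^(2) for two non-overlapping windows on a
# hypothetical two-state alternating model (state 0 up, state 1 down).
UP = {0}
t, p = [2, 7], [2, 1]                 # windows [2, 4] and [7, 8]
F = {0: [0.2, 0.3, 0.5], 1: [0.6, 0.3, 0.1]}

def simulate_path(horizon, rng):
    state, path = 0, []
    while len(path) <= horizon:
        path.extend([state] * rng.choices([1, 2, 3], weights=F[state])[0])
        state = 1 - state
    return path[:horizon + 1]

rng = random.Random(7)
n, hit_both, hit_first = 5000, 0, 0
for _ in range(n):
    z = simulate_path(t[-1] + p[-1], rng)
    up_in = [all(s in UP for s in z[ti:ti + pi + 1]) for ti, pi in zip(t, p)]
    hit_both += all(up_in)            # up during *both* windows -> SIR^(2)
    hit_first += up_in[0]             # up during the first window -> IR(t_1, p_1)

sir_hat, ir_hat = hit_both / n, hit_first / n
assert sir_hat <= ir_hat              # an extra window can only lower the probability
print(sir_hat, ir_hat)
```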

**Remark 2.** *Under the previous notation, we have, for* $N = 1$:

$$SIR^{(1)}(t_1, p_1) = IR(t_1, p_1), \qquad SIR_k^{(1)}(t_1, p_1) = IR_k(t_1, p_1), \quad k \in E,$$

*so the sequential interval reliability generalises the interval reliability of Definition 3.*
#### *3.1. Transient Analysis*

We now derive a recursive formula for computing the sequential interval reliability of a discrete-time semi-Markov system.

**Proposition 1.** *Let $(Z_k)_{k \in \mathbb{N}}$ be a discrete-time semi-Markov system for which Assumptions 1–3 hold true, and let $\underline{t} := (t_i)_{i=1,\dots,N}$ and $\underline{p} := (p_i)_{i=1,\dots,N}$, $N \in \mathbb{N}^*$, be two time sequences such that $\{[t_i, t_i + p_i]\}_{i=1,\dots,N}$ is a sequence of non-overlapping real intervals. Let $v \in \mathbb{N}$ be the value of the backward process at time $t = 0$ and $k \in E$ be the initial state. Then, the conditional sequential interval reliability $SIR_k^{(N)}(v; \underline{t}, \underline{p})$ satisfies the following equation:*

$$SIR_k^{(N)}(v; \underline{t}, \underline{p}) = g_k^{(N)}(v; \underline{t}, \underline{p}) + \sum_{r \in E} \sum_{\theta=1}^{t_1} \frac{q_{kr}(v+\theta)}{\overline{H}_k(v)}\, SIR_r^{(N)}(0; \underline{t} - \theta\mathbf{1}_{1:N}, \underline{p}), \tag{11}$$

*where $\mathbf{1}_{1:N}$ is a vector of* 1*s of length $N$, and $g_k^{(N)}(v; \underline{t}, \underline{p})$ is given by*

$$\begin{split} g_k^{(N)}(v; \underline{t}, \underline{p}) = \mathbf{1}_{\{k \in U\}} \Bigg[ &\frac{\overline{H}_k(t_N + p_N + v)}{\overline{H}_k(v)} \\ &+ \sum_{\theta=t_1+1}^{t_1+p_1} \sum_{r \in E} \sum_{m \in U} \sum_{v'=0}^{t_1+p_1-\theta} \frac{q_{kr}(v+\theta)}{\overline{H}_k(v)}\, R^b_{rm}(v'; t_1+p_1-\theta)\, SIR_m^{(N-1)}(v'; t_{2:N} - \mathbf{1}_{2:N}(t_1+p_1), p_{2:N}) \\ &+ \sum_{j=2}^{N} \sum_{\theta=t_j}^{t_j+p_j} \sum_{r \in E} \frac{q_{kr}(v+\theta)}{\overline{H}_k(v)}\, SIR_r^{(N-j+1)}(0; (0,\, t_{j+1:N} - \mathbf{1}_{j+1:N}\theta), (t_j+p_j-\theta,\, p_{j+1:N})) \\ &+ \sum_{j=1}^{N-1} \sum_{\theta=t_j+p_j+1}^{t_{j+1}-1} \sum_{r \in E} \frac{q_{kr}(v+\theta)}{\overline{H}_k(v)}\, SIR_r^{(N-j)}(0; t_{j+1:N} - \mathbf{1}_{j+1:N}\theta, p_{j+1:N}) \Bigg], \end{split} \tag{12}$$

*where $\mathbf{1}_{\{k \in U\}}$ is the indicator function of the event $\{k \in U\}$ and $R^b_{ij}(v; k)$ is the reliability with final backward defined by*

$$R\_{ij}^b(\upsilon; k) := \mathbb{P}(Z\_s \in \mathsf{U}, \text{ for all } \mathsf{s} \in \{0, \ldots, k - \upsilon\}, Z\_k = j, B\_k = \upsilon \mid Z\_0 = \mathsf{i}, T\_{N(0)} = 0). \tag{13}$$

**Proof.** Before proceeding with the proof, let us introduce the notation:

$$Z(\underline{t}, \underline{p}) := \big( Z_{t_1}, \dots, Z_{t_1+p_1}, Z_{t_2}, \dots, Z_{t_2+p_2}, \dots, Z_{t_N}, \dots, Z_{t_N+p_N} \big).$$

From the definition of the SIR, it is clear that

$$SIR_k^{(N)}(v; \underline{t}, \underline{p}) = \mathbb{P}_{(k,v)}\left( Z(\underline{t}, \underline{p}) \in U^{N + \sum_{i=1}^{N} p_i} \right). \tag{14}$$

Let us consider now the r.v. $T_1$ and observe that the events $\{T_1 > t_N + p_N\}$, $\{T_1 < t_1\}$, $\{T_1 \in \cup_{j=1}^{N}[t_j, t_j + p_j]\}$ and $\{T_1 \in \cup_{j=1}^{N-1}[t_j + p_j + 1, t_{j+1} - 1]\}$ are mutually exclusive. Consequently, we can write (14) as follows:

$$\begin{split} SIR_k^{(N)}(v; \underline{t}, \underline{p}) &= \mathbb{P}_{(k,v)}\Big(Z(\underline{t}, \underline{p}) \in U^{N+\sum_{i=1}^{N} p_i},\ T_1 > t_N + p_N\Big) \\ &\quad + \mathbb{P}_{(k,v)}\Big(Z(\underline{t}, \underline{p}) \in U^{N+\sum_{i=1}^{N} p_i},\ T_1 < t_1\Big) + \mathbb{P}_{(k,v)}\Big(Z(\underline{t}, \underline{p}) \in U^{N+\sum_{i=1}^{N} p_i},\ T_1 \in \cup_{j=1}^{N}[t_j, t_j + p_j]\Big) \\ &\quad + \mathbb{P}_{(k,v)}\Big(Z(\underline{t}, \underline{p}) \in U^{N+\sum_{i=1}^{N} p_i},\ T_1 \in \cup_{j=1}^{N-1}[t_j + p_j + 1, t_{j+1} - 1]\Big). \end{split} \tag{15}$$

We need to compute the four terms of the right-hand side of (15); let us denote them by *RT*1, *RT*2, *RT*<sup>3</sup> and *RT*4, respectively.

First, through a straightforward computation, we obtain

$$RT\_1 = \mathbf{1}\_{\{k \in \mathcal{U}\}} \frac{\overline{H}\_k (t\_N + p\_N + v)}{\overline{H}\_k (v)}.\tag{16}$$

Second, using the double expectation formula and conditioning with respect to $(J_1, T_1)$, we immediately obtain the second term, given by

$$RT\_2 = \sum\_{r \in E} \sum\_{\theta=1}^{t\_1 - 1} \frac{q\_{kr}(v + \theta)}{\overline{H}\_k(v)} SIR\_r^{(N)}(0; \underline{t} - \theta \mathbf{1}\_{1:N}, \underline{p}).\tag{17}$$

Third, using the double expectation formula, conditioning with respect to (*J*1, *T*1), summing over all the possible values of *T*<sup>1</sup> and splitting the computation according to the interval to which *T*<sup>1</sup> belongs, a quite long computation yields:

$$\begin{split} RT_3 &= \sum_{r \in E} \frac{q_{kr}(v + t_1)}{\overline{H}_k(v)}\, SIR_r^{(N)}(0; \underline{t} - t_1\mathbf{1}_{1:N}, \underline{p}) \\ &\quad + \mathbf{1}_{\{k \in U\}} \sum_{\theta=t_1+1}^{t_1+p_1} \sum_{r \in E} \sum_{m \in U} \sum_{v'=0}^{t_1+p_1-\theta} \frac{q_{kr}(v+\theta)}{\overline{H}_k(v)}\, R^b_{rm}(v'; t_1+p_1-\theta)\, SIR_m^{(N-1)}(v'; t_{2:N} - \mathbf{1}_{2:N}(t_1+p_1), p_{2:N}) \\ &\quad + \mathbf{1}_{\{k \in U\}} \sum_{j=2}^{N} \sum_{\theta=t_j}^{t_j+p_j} \sum_{r \in E} \frac{q_{kr}(v+\theta)}{\overline{H}_k(v)}\, SIR_r^{(N-j+1)}(0; (0,\, t_{j+1:N} - \mathbf{1}_{j+1:N}\theta), (t_j+p_j-\theta,\, p_{j+1:N})). \end{split} \tag{18}$$

Fourth, using the double expectation formula, conditioning with respect to $(J_1, T_1)$ and summing over all the possible values of $T_1$, we obtain:

$$RT_4 = \mathbf{1}_{\{k \in U\}} \sum_{j=1}^{N-1} \sum_{\theta=t_j+p_j+1}^{t_{j+1}-1} \sum_{r \in E} \frac{q_{kr}(v+\theta)}{\overline{H}_k(v)}\, SIR_r^{(N-j)}(0; t_{j+1:N} - \mathbf{1}_{j+1:N}\theta, p_{j+1:N}). \tag{19}$$

Substituting these four terms in (15), we obtain the recurrence formula given in (11) and (12).

If no initial backward is considered, taking *v* = 0 in Equation (11), we immediately obtain the following recursive formula for the sequential interval reliability of a discrete-time semi-Markov system, given the initial state.

**Corollary 1.** *Let $(Z_k)_{k \in \mathbb{N}}$ be a discrete-time semi-Markov system for which Assumptions 1–3 hold true, and let $\underline{t} := (t_i)_{i=1,\dots,N}$ and $\underline{p} := (p_i)_{i=1,\dots,N}$, $N \in \mathbb{N}^*$, be two time sequences such that $\{[t_i, t_i + p_i]\}_{i=1,\dots,N}$ is a sequence of non-overlapping real intervals. Then, the conditional sequential interval reliability $SIR_k^{(N)}(\underline{t}, \underline{p})$ satisfies the following equation:*

$$SIR_k^{(N)}(\underline{t}, \underline{p}) = g_k^{(N)}(\underline{t}, \underline{p}) + \sum_{r \in E} \sum_{\theta=1}^{t_1} q_{kr}(\theta)\, SIR_r^{(N)}(\underline{t} - \theta\mathbf{1}_{1:N}, \underline{p}), \tag{20}$$

*where we have set $g_k^{(N)}(\underline{t}, \underline{p}) := g_k^{(N)}(0; \underline{t}, \underline{p})$.*

The next result provides a formula for computing the reliability with the final backward *R<sup>b</sup> ij*(*v*; *k*) defined in Equation (13).

**Lemma 1.** *For a discrete-time semi-Markov system $(Z_k)_{k \in \mathbb{N}}$, under Assumptions 1–3, let us define the entrance probabilities $e_{ij}(n)$, $i, j \in E$, $n \in \mathbb{N}$, as the probability that the system, having entered state $i$ at time* 0, *enters state $j$ at time $n$. Under the previous notations, the reliability with final backward $R^b_{ij}(v; k)$ defined in Equation* (13) *is given by*

$$R^b_{ij}(v; k) = \overline{H}_j(v)\, e^{\widetilde{q}}_{ij}(k - v), \tag{21}$$

*where $e^{\widetilde{q}}_{ij}(k - v)$ represents the entrance probability for the semi-Markov system (cf. [27]) associated with the* semi-Markov core matrix:

$$
\widetilde{q}(k) = \begin{pmatrix} q_{UU}(k) & q_{UD}(k)\mathbf{1}_{s-s_1} \\ \mathbf{0}_{1,s_1} & 0 \end{pmatrix}, \quad k \in \mathbb{N},
$$

*with $q_{UU}(k)$ and $q_{UD}(k)$ being the partitions of the matrix $q(k)$ according to $U \times U$ and $U \times D$.*

**Proof.** First, it can be easily seen that:

$$R^b_{ij}(v; k) = \overline{H}_j(v)\, \mathbb{P}(Z_{k-v} = j, Z_{k-v-1} \neq j, Z_s \in U, s = 0, 1, \dots, k - v \mid Z_0 = i).$$

Second, note that $\mathbb{P}(Z_{k-v} = j, Z_{k-v-1} \neq j, Z_s \in U, s = 0, 1, \dots, k - v \mid Z_0 = i)$, $i, j \in E$, represents the entrance probability for the semi-Markov system associated with the *semi-Markov core matrix* $\widetilde{q}(k)$. See also Proposition 5.1 of [1] for the use of the semi-Markov system associated with the *semi-Markov core matrix* $\widetilde{q}(k)$ in reliability computation.

Third, in order to compute the entrance probabilities, one can use the recurrence formulas (see [27]):

$$
e_{ij}(n) = \delta_{ij}\delta(n) + \sum_{r=1}^{s} \sum_{m=1}^{n} p_{ir} f_{ir}(m)\, e_{rj}(n - m), \tag{22}
$$

where $\delta_{ij} := 1$ if $i = j$, $\delta_{ij} := 0$ if $i \neq j$, $\delta(n) := 1$ if $n = 0$, $\delta(n) := 0$ if $n \neq 0$.
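Recurrence (22) translates directly into a memoised function. A sketch for a hypothetical two-state model (the model and its values are illustrative only):

```python
from functools import lru_cache

# Sketch of the entrance-probability recursion (22) on a hypothetical
# two-state model: state 0 jumps to 1 after 1 or 2 steps (prob 1/2 each),
# state 1 jumps back to 0 after exactly 1 step.
p = [[0.0, 1.0], [1.0, 0.0]]
f = {(0, 1): [0.5, 0.5], (1, 0): [1.0, 0.0]}   # sojourn support {1, 2}

@lru_cache(maxsize=None)
def e(i, j, n):
    """e_ij(n): probability of entering j at time n, having entered i at 0."""
    base = 1.0 if (i == j and n == 0) else 0.0
    return base + sum(p[i][r] * f.get((i, r), [0.0, 0.0])[m - 1] * e(r, j, n - m)
                      for r in range(2)
                      for m in range(1, min(n, 2) + 1))

# From state 0, the chain enters state 1 at time 1 or at time 2, w.p. 1/2 each:
print(e(0, 1, 1), e(0, 1, 2))   # → 0.5 0.5
```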

Looking at the recurrence relation given in Proposition 1 for computing the conditional sequential interval reliability with initial backward, $SIR_k^{(N)}(v; \underline{t}, \underline{p})$, and taking into account that, for $N = 1$, it reduces to the interval reliability with initial backward, we see that we need a formula for computing this last quantity, denoted by $IR_k(v; t, p)$ and defined by

$$IR\_k(v; t, p) := \mathbb{P}(Z\_l \in \mathcal{U}, l \in [t, t + p] \mid Z\_0 = k, B\_0 = v). \tag{23}$$

The next result provides a formula for computing this quantity.

**Lemma 2.** *Under the previous notations, the interval reliability with initial backward IRk*(*v*; *t*, *p*) *is given by*

$$\begin{split} IR_k(v; t, p) &= \frac{1}{\overline{H}_k(v)} \Big[ \overline{H}_k(v + t + p)\mathbf{1}_{\{k \in U\}} + \sum_{j \in U} \sum_{\theta=t}^{t+p} q_{kj}(v+\theta)\, R_j(t + p - \theta)\mathbf{1}_{\{k \in U\}} \\ &\quad + \sum_{j \in E} \sum_{\theta=1}^{t-1} q_{kj}(v+\theta)\, IR_j(t - \theta, p) \Big]. \end{split} \tag{24}$$

**Proof.** The proof is a quite straightforward adaptation of a more general result presented in [21].
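As an illustration, recursion (24) can be implemented for the case of zero initial backward ($v = 0$). The three-state model below is hypothetical ($U = \{0, 1\}$, $D = \{2\}$), and the conditional reliability $R_j$ is computed by a standard renewal-type recursion of the same kind; both functions are illustrative sketches, not the authors' code. Property (6), $IR_i(0, p) = R_i(p)$, serves as a sanity check:

```python
from functools import lru_cache

# Hypothetical three-state model: U = {0, 1} (up), D = {2} (down);
# sojourn times supported on {1, 2}.
UP = {0, 1}
P = [[0.0, 0.6, 0.4],
     [0.7, 0.0, 0.3],
     [1.0, 0.0, 0.0]]
F = {(0, 1): [0.5, 0.5], (0, 2): [0.2, 0.8],
     (1, 0): [0.4, 0.6], (1, 2): [0.9, 0.1],
     (2, 0): [0.3, 0.7]}

def q(i, j, k):
    return P[i][j] * F.get((i, j), [0.0, 0.0])[k - 1] if 1 <= k <= 2 else 0.0

def H_bar(i, k):
    return 1.0 - sum(q(i, j, t) for j in range(3) for t in range(1, k + 1))

@lru_cache(maxsize=None)
def R(i, k):
    """Conditional reliability R_i(k), standard renewal-type recursion."""
    if i not in UP:
        return 0.0
    return H_bar(i, k) + sum(q(i, j, th) * R(j, k - th)
                             for j in UP for th in range(1, k + 1))

@lru_cache(maxsize=None)
def IR(k, t, p):
    """Interval reliability IR_k(0; t, p), Equation (24) with v = 0."""
    up = 1.0 if k in UP else 0.0
    return (H_bar(k, t + p) * up
            + up * sum(q(k, j, th) * R(j, t + p - th)
                       for j in UP for th in range(t, t + p + 1))
            + sum(q(k, j, th) * IR(j, t - th, p)
                  for j in range(3) for th in range(1, t)))

assert abs(IR(0, 0, 3) - R(0, 3)) < 1e-12   # property (6): IR_i(0, p) = R_i(p)
print(IR(0, 2, 2), R(0, 4))
```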

The next result provides a series of inequalities between sequential interval reliability, sequential availability, conditional reliability and conditional availability.

**Proposition 2.** *Let* (*Zk*)*k*∈<sup>N</sup> *be a discrete time semi-Markov system, assuming Assumptions 1–3 hold true, and let k* ∈ *E and v* ∈ N.

*1. For any t*1:*<sup>N</sup> and p*1:*N*, *N* ∈ N∗, *such that* {[*ti*, *ti* + *pi*]}*i*=1,...,*<sup>N</sup> is a sequence of nonoverlapping real intervals, and for s* ≤ *N we have:*

$$\begin{split} R_k(v; t_N + p_N) &\le SIR_k^{(N)}(v; t_{1:N}, p_{1:N}) \le SIR_k^{(s)}(v; t_{1:s}, p_{1:s}) \le SA_k^{(N)}(v; t_{1:N} + p_{1:N}) \\ &\le SA_k^{(s)}(v; t_{1:s} + p_{1:s}) \le A_k(v; t_s + p_s). \end{split} \tag{25}$$

*2. For any t*1:*N*, *a*1:*<sup>N</sup> and b*1:*N*, *N* ∈ N∗, *such that ai* ≤ *bi*, *i* = 1, ... , *N*, *and* {[*ti*, *ti* + *ai*]}*i*=1,...,*<sup>N</sup> and* {[*ti*, *ti* + *bi*]}*i*=1,...,*<sup>N</sup> are two sequences of non-overlapping real intervals, then we have:*

$$SIR\_k^{(N)}(v; t\_{1:N}, b\_{1:N}) \le SIR\_k^{(N)}(v; t\_{1:N}, a\_{1:N}).\tag{26}$$

*3. For any t*1:*N*, *p*1:*N*, *x*1:*N*, *w*1:*N*, *N* ∈ N∗, *such that* {[*ti*, *ti* + *pi*]}*i*=1,...,*<sup>N</sup> and* {[*xi*, *xi* + *wi*]}*i*=1,...,*<sup>N</sup> are two sequences of non-overlapping real intervals such that t*1:*<sup>N</sup>* + *p*1:*<sup>N</sup>* = *x*1:*<sup>N</sup>* + *w*1:*N*, *and t*1:*<sup>N</sup>* ≥ *x*1:*<sup>N</sup> (element-wise), then we have:*

$$SIR_k^{(N)}(v; x_{1:N}, w_{1:N}) \le SIR_k^{(N)}(v; t_{1:N}, p_{1:N}). \tag{27}$$

**Proof.** For any *k* ∈ *E* and *v* ∈ N, let us define the set Ω(*k*,*v*) by

$$\Omega\_{(k,v)} := \{ \omega \in \Omega \mid Z\_0(\omega) = k, B\_0(\omega) = v \}.$$

The first point is obtained noticing that, for *s* ≤ *N*, we have:

$$\begin{split} &\{\omega \in \Omega_{(k,v)} \mid Z_s \in U,\ \forall s = 1, \dots, t_N + p_N\} \subseteq \{\omega \in \Omega_{(k,v)} \mid Z(t_{1:N}, p_{1:N}) \in U^{N + \sum_{i=1}^{N} p_i}\} \\ &\subseteq \{\omega \in \Omega_{(k,v)} \mid Z(t_{1:s}, p_{1:s}) \in U^{s + \sum_{i=1}^{s} p_i}\} \subseteq \{\omega \in \Omega_{(k,v)} \mid Z(t_{1:N} + p_{1:N}, 0_{1:N}) \in U^{N}\} \\ &\subseteq \{\omega \in \Omega_{(k,v)} \mid Z(t_{1:s} + p_{1:s}, 0_{1:s}) \in U^{s}\} \subseteq \{\omega \in \Omega_{(k,v)} \mid Z_{t_s + p_s} \in U\}. \end{split}$$

Taking probabilities in this chain of inclusions and taking into account the definitions of reliability, sequential reliability, availability and sequential availability, we obtain the inequalities given in (25).

In order to prove the second point, we first observe that *ai* ≤ *bi*, *i* = 1, ... , *N*, implies that the two sequences of non-overlapping real intervals {[*ti*, *ti* + *ai*]}*i*=1,...,*<sup>N</sup>* and {[*ti*, *ti* + *bi*]}*i*=1,...,*<sup>N</sup>* are such that [*ti*, *ti* + *ai*] ⊆ [*ti*, *ti* + *bi*], *i* = 1, . . . , *N*. Thus, we have:

$$\{\omega \in \Omega_{(k,v)} \mid Z(t_{1:N}, b_{1:N}) \in U^{N + \sum_{i=1}^{N} b_i}\} \subseteq \{\omega \in \Omega_{(k,v)} \mid Z(t_{1:N}, a_{1:N}) \in U^{N + \sum_{i=1}^{N} a_i}\},$$

which implies that $SIR_k^{(N)}(v; t_{1:N}, b_{1:N}) \le SIR_k^{(N)}(v; t_{1:N}, a_{1:N})$, so we obtain (26).

To prove the last point, since *t*1:*<sup>N</sup>* ≥ *x*1:*<sup>N</sup>* and *t*1:*<sup>N</sup>* + *p*1:*<sup>N</sup>* = *x*1:*<sup>N</sup>* + *w*1:*N*, we have that [*ti*, *ti* + *pi*] ⊆ [*xi*, *xi* + *wi*], *i* = 1, . . . , *N*. Consequently, we have:

$$\left\{\omega \in \Omega_{(k,v)} \mid Z(x_{1:N}, w_{1:N}) \in U^{N + \sum_{i=1}^{N} w_{i}}\right\} \subseteq \left\{\omega \in \Omega_{(k,v)} \mid Z(t_{1:N}, p_{1:N}) \in U^{N + \sum_{i=1}^{N} p_{i}}\right\},$$

which implies that $SIR_k^{(N)}(v; x_{1:N}, w_{1:N}) \le SIR_k^{(N)}(v; t_{1:N}, p_{1:N})$, so we obtain (27).

#### *3.2. Asymptotic Analysis*

We now investigate the asymptotic behaviour of the sequential interval reliability $SIR_k^{(N)}(v; t_{1:N}, p_{1:N})$ as $t_1$ tends to infinity. Theorem 1 below answers this question.

Let $\underline{t} := (t_i)_{i=1,\dots,N}$ and $\underline{p} := (p_i)_{i=1,\dots,N}$, $N \in \mathbb{N}^*$, be two time sequences such that $\{[t_i, t_i + p_i]\}_{i=1,\dots,N}$ is a sequence of non-overlapping real intervals. Let us denote $l_i := t_i - t_{i-1}$, $i = 2, \dots, N$.

**Theorem 1.** *Let us consider an ergodic semi-Markov chain such that Assumptions 1–3 hold true and the mean sojourn times $m_i$ are finite, $m_i < \infty$, $i \in E$, where $m_i$ is the mean of the sojourn-time distribution $(h_i(k))_{k \in \mathbb{N}}$, $i \in E$. Then, under the previous notations, we have:*

$$\lim_{t_1 \to \infty} SIR_k^{(N)}(v; \underline{t}, \underline{p}) = \lim_{t_1 \to \infty} SIR^{(N)}(\underline{t}, \underline{p}) = \frac{1}{\sum_{i \in E} \nu(i) m_i} \sum_{j \in U} \nu(j) \sum_{t_1 \ge 0} g_j^{(N)}(\underline{t}, \underline{p}), \tag{28}$$

*where $(\nu(i))_{i \in E}$ represents the stationary distribution of the embedded Markov chain $(J_n)_{n \in \mathbb{N}}$.*

Before giving the proof of this result, we first need some preliminary notions and results. First, let us recall some definitions related to the matrix convolution product. Let us denote by M*E* the set of real matrices on *E* × *E* and by M*E*(N) the set of matrix-valued functions defined on N, with values in M*E*. For **A** ∈ M*E*(N), we write **A** = (**A**(*k*); *k* ∈ N), where, for *k* ∈ N fixed, **A**(*k*) = (*Aij*(*k*); *i*, *j* ∈ *E*) ∈ M*E*. Let **I** ∈ M*<sup>E</sup>* be the identity matrix and **0** ∈ M*<sup>E</sup>* be the null matrix. Let us also define **I** := (**I**(*k*); *k* ∈ N) as the constant matrix-valued function whose value for any nonnegative integer *k* is the identity matrix, that is, **I**(*k*) := **I** for any *k* ∈ N. Similarly, we set **0** := (**0**(*k*); *k* ∈ N), with **0**(*k*) := **0** for any *k* ∈ N.

Let **A**,**B** ∈ M*E*(N) be two matrix-valued functions. The matrix convolution product **A** ∗ **B** is the matrix-valued function **C** ∈ M*E*(N) defined by

$$C_{ij}(k) := \sum_{r \in E} \sum_{l=0}^{k} A_{ir}(k-l)\, B_{rj}(l), \quad i, j \in E, \quad k \in \mathbb{N}, \tag{29}$$

or, in matrix form:

$$\mathbf{C}(k) := \sum\_{l=0}^{k} \mathbf{A}(k-l) \, \mathbf{B}(l), \quad k \in \mathbb{N}.$$
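As an illustration, the convolution product (29) can be evaluated directly once the matrix-valued functions are truncated to a finite horizon. The following sketch is ours, not part of the original development; it uses Python with NumPy and stores time on the first axis, and it also builds the identity element discussed next.

```python
import numpy as np

def conv_product(A, B):
    """Matrix convolution product (29): C(k) = sum_{l=0}^{k} A(k-l) @ B(l).

    A and B are matrix-valued functions truncated to a finite horizon,
    stored as arrays of shape (K, s, s) with time on the first axis.
    """
    K = A.shape[0]
    C = np.zeros_like(A)
    for k in range(K):
        for l in range(k + 1):
            C[k] += A[k - l] @ B[l]
    return C

# The identity element delta_I: the identity matrix at k = 0, the null matrix elsewhere.
K, s = 8, 3
delta_I = np.zeros((K, s, s))
delta_I[0] = np.eye(s)
```

Convolving any matrix-valued function with `delta_I`, on either side, returns the function itself, in agreement with the uniqueness property of the identity element.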

It can easily be checked that the identity element for the matrix convolution product in discrete time exists, is unique and is given by *δI* = (*dij*(*k*); *i*, *j* ∈ *E*) ∈ M*E*(N), defined by

$$d_{ij}(k) := \begin{cases} 1, & \text{if } i = j \text{ and } k = 0, \\ 0, & \text{elsewhere,} \end{cases}$$

or, in matrix form:

$$\delta I(k) := \begin{cases} \mathbf{I}, & \text{if } k = 0, \\ \mathbf{0}, & \text{elsewhere.} \end{cases}$$

The power in the sense of convolution is defined straightforwardly from the matrix convolution product given in (29). For **A** ∈ M*E*(N) a matrix-valued function and *n* ∈ N, the *n*-fold convolution **A**(*n*) is the matrix-valued function recursively defined by

$$A_{ij}^{(0)}(k) := d_{ij}(k) = \begin{cases} 1, & \text{if } i = j \text{ and } k = 0, \\ 0, & \text{elsewhere,} \end{cases} \qquad A_{ij}^{(1)}(k) := A_{ij}(k),$$

$$A_{ij}^{(n)}(k) := \sum_{r \in E} \sum_{l=0}^{k} A_{ir}(l)\, A_{rj}^{(n-1)}(k-l), \quad n \ge 2,\ k \in \mathbb{N},$$

that is:

$$\mathbf{A}^{(0)} := \delta I, \mathbf{A}^{(1)} := \mathbf{A} \text{ and } \mathbf{A}^{(n)} := \mathbf{A} \* \mathbf{A}^{(n-1)}, \quad n \ge 2.$$

Second, let us introduce two sets of functions that will be useful for our study. Thus, let us define:

$$\mathcal{A} := \left\{ f : \left\{ (\underline{t}, \underline{p}) \in \mathbb{N}^{N} \times \mathbb{N}^{N} \mid t_{i} \le t_{i+1},\ i = 1, \ldots, N-1 \right\} \to \mathbb{R} \right\} \tag{30}$$

and, for *<sup>l</sup>* <sup>∈</sup> <sup>N</sup>*N*−1, *<sup>p</sup>* <sup>∈</sup> <sup>N</sup>*<sup>N</sup>* :

$$\mathcal{B}_{\underline{l},\underline{p}} := \left\{ \tilde{f} : \mathbb{N} \to \mathbb{R} \mid \tilde{f}(t_1) = \tilde{f}(t_1; \underline{l}, \underline{p}) \right\}, \tag{31}$$

where, by writing $\tilde{f}(t_1; \underline{l}, \underline{p})$, we mean that $\tilde{f}$ is a function of the variable $t_1$, while $\underline{l}, \underline{p}$ are parameters.

Let us consider a map between the two sets, $\Phi : \mathcal{A} \to \mathcal{B}_{\underline{l},\underline{p}}$, defined by

$$\Phi(f(\underline{t}, \underline{p})) := \tilde{f}(t_1; \underline{l}, \underline{p}), \tag{32}$$

where *li* := *ti* − *ti*−1, *<sup>i</sup>* = 2, . . . , *<sup>N</sup>*.

The map $\Phi$ allows us to represent a function $f(\underline{t}, \underline{p}) \in \mathcal{A}$ as an element of the set $\mathcal{B}_{\underline{l},\underline{p}}$, that is to say, as a parametric function of one variable, namely $t_1$. One can easily check that $\Phi$ is bijective and linear.

The last point before giving the proof of Theorem 1 will be to introduce a new matrix convolution product, important for our framework, and to see the relationship with the classical matrix convolution product.

**Definition 5.** *Let* **A** ∈ M*E*(N) *be a matrix-valued function and let* **b** = (*b*1, . . . , *bs*) *be a vector-valued function such that every component* $b_r \in \mathcal{A}$, $r \in E$. *The matrix convolution product* $\overline{*}$ *is defined by*

$$(\mathbf{A}\,\overline{*}\,\mathbf{b})_k(\underline{t},\underline{p}) := \sum_{r\in E} \sum_{\theta=1}^{t_1} A_{kr}(\theta)\, b_r(\underline{t}-\theta\mathbf{1}_{1:N},\underline{p}),$$

*or, in matrix form:*

$$(\mathbf{A}\,\overline{*}\,\mathbf{b})(\underline{t},\underline{p}) := \sum_{\theta=1}^{t_1} \mathbf{A}(\theta)\, \mathbf{b}(\underline{t} - \theta \mathbf{1}_{1:N}, \underline{p}).$$
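A direct transcription of Definition 5 could look as follows; this is our own sketch under specific representation choices (the components $b_r$ are Python callables taking the vector $\underline{t}$ and the parameters $\underline{p}$, and the matrix-valued function is truncated to a finite horizon).

```python
import numpy as np

def bar_conv(A, b, t, p):
    """(A bar-* b)_k(t, p) = sum_{r in E} sum_{theta=1}^{t_1} A_{kr}(theta) b_r(t - theta*1, p).

    A has shape (K, s, s) with time on the first axis (K must exceed t_1);
    b is a list of s callables b_r(t, p), each taking the vector t of
    interval starting points. Returns the vector indexed by the state k.
    """
    s = A.shape[1]
    out = np.zeros(s)
    for theta in range(1, t[0] + 1):  # theta runs from 1 to t_1
        # evaluate b(t - theta * 1_{1:N}, p) componentwise
        vals = np.array([b_r(t - theta, p) for b_r in b])
        out += A[theta] @ vals
    return out
```

With constant components $b_r \equiv 1$, the product reduces to the row sums of $\sum_{\theta=1}^{t_1} \mathbf{A}(\theta)$, which gives a quick sanity check of the implementation.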

The next result gives the relationship between this newly introduced matrix convolution product (cf. Definition 5) and the classical one defined in (29).

**Proposition 3.** *Let* **q** ∈ M*E*(N) *be a semi-Markov kernel and let* **f** = (*f*1, . . . , *fs*) *be a vector-valued function such that every component* $f_r \in \mathcal{A}$, $r \in E$. *Then:*

$$\Phi((\mathbf{q}\,\overline{*}\,\mathbf{f})(\underline{t}, \underline{p})) = (\mathbf{q} * \tilde{\mathbf{f}})(t_1; l_{2:N}, \underline{p}).$$

**Proof.** From the additivity of the map Φ, we have:

$$\begin{split} \Phi((\mathbf{q}\,\overline{*}\,\mathbf{f})(\underline{t}, \underline{p}))_{k} &= \sum_{r \in E} \sum_{\theta=1}^{t_{1}} \Phi(q_{kr}(\theta) f_{r}(\underline{t} - \theta \mathbf{1}_{1:N}, \underline{p})) = \sum_{r \in E} \sum_{\theta=1}^{t_{1}} q_{kr}(\theta)\, \Phi(f_{r}(\underline{t} - \theta \mathbf{1}_{1:N}, \underline{p})) \\ &= \sum_{r \in E} \sum_{\theta=1}^{t_{1}} q_{kr}(\theta)\, \tilde{f}_{r}(t_{1} - \theta; l_{2:N}, \underline{p}) = (\mathbf{q} * \tilde{\mathbf{f}})_{k}(t_{1}; l_{2:N}, \underline{p}). \end{split}$$

**Proof of Theorem 1.** First of all, it is important to notice that we have:

$$\lim_{t_1 \to \infty} SIR_k^{(N)}(v; t_{1:N}, p_{1:N}) = \lim_{t_1 \to \infty} SIR^{(N)}(t_{1:N}, p_{1:N}),$$

provided that this limit exists. Consequently, since our interest now is in a limiting result, in order to investigate the asymptotic behaviour of $SIR_k^{(N)}(v; t_{1:N}, p_{1:N})$ as $t_1$ goes to $\infty$, we can consider the initial backward $v = 0$. Thus, the expression of the sequential interval reliability taken into account in the next computations will be $SIR^{(N)}(t_{1:N}, p_{1:N})$, which is obtained recursively through Relation (20).

The main idea of this proof is to consider *SIR*(*N*) *<sup>k</sup>* as a function of the variable *t*<sup>1</sup> and also on other additional parameters; then, we will apply the Markov renewal theory (cf. [1]) to this function of *t*1. Using Proposition 3 and applying the function Φ defined in (32) to the left and right hand sides of Equation (20), we obtain:

$$\widetilde{SIR}^{(N)}(t_1; l_{2:N}, \underline{p}) = \tilde{\mathbf{g}}(t_1; l_{2:N}, \underline{p}) + \mathbf{q} * \widetilde{SIR}^{(N)}(t_1; l_{2:N}, \underline{p}), \tag{33}$$

where $\widetilde{SIR}^{(N)} := \Phi(SIR^{(N)})$ and $\tilde{\mathbf{g}} := \Phi(\mathbf{g})$.

It is clear that Equation (33) is an ordinary Markov renewal equation (MRE) in variable *t*1, with parameters (*l*2:*N*, *p*). The solution of this MRE is well known (cf. [1]) and it is given by

$$\widetilde{SIR}^{(N)}(t_1; l_{2:N}, \underline{p}) = (\boldsymbol{\psi} * \tilde{\mathbf{g}})(t_1; l_{2:N}, \underline{p}), \tag{34}$$

or element-wise, using Proposition 3:

$$\widetilde{SIR}_{k}^{(N)}(t_{1}; l_{2:N}, \underline{p}) = \sum_{r\in E} \sum_{\theta=1}^{t_{1}} \psi_{kr}(\theta)\, g_{r}(\underline{t}-\theta \mathbf{1}_{1:N}, \underline{p}), \tag{35}$$

where the matrix-valued function *ψ* = (*ψ*(*k*); *k* ∈ N) is given by

$$\boldsymbol{\Psi}(k) = \sum\_{n=0}^{k} \mathbf{q}^{(n)}(k), \ k \in \mathbb{N}.\tag{36}$$
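On a truncated horizon, and assuming $\mathbf{q}(0) = \mathbf{0}$ (no instantaneous transitions), the renewal function (36) can be accumulated from the convolution powers. The following sketch is our own illustration, reusing the convolution product (29):

```python
import numpy as np

def conv(A, B):
    # matrix convolution product (29) on a truncated horizon
    K = A.shape[0]
    C = np.zeros_like(A)
    for k in range(K):
        for l in range(k + 1):
            C[k] += A[k - l] @ B[l]
    return C

def renewal_function(q):
    """psi(k) = sum_{n=0}^{k} q^{(n)}(k), for a kernel q of shape (K, s, s).

    Assumes q(0) = 0, so q^{(n)}(k) vanishes for k < n and the sum over n
    can safely be extended to the whole truncation horizon.
    """
    K, s, _ = q.shape
    psi = np.zeros_like(q)
    psi[0] = np.eye(s)          # q^{(0)} = delta_I contributes only at k = 0
    power = q.copy()            # q^{(1)} = q
    for n in range(1, K):
        psi += power            # add q^{(n)}
        power = conv(q, power)  # q^{(n+1)} = q * q^{(n)}
    return psi
```

By construction, `psi` solves the Markov renewal equation $\boldsymbol{\psi} = \delta I + \mathbf{q} * \boldsymbol{\psi}$, the property on which the solution (34) of the MRE relies.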

Since $\widetilde{SIR}_k^{(N)}(t_1; l_{2:N}, \underline{p}) = \Phi(SIR_k^{(N)}(\underline{t}, \underline{p})) = SIR_k^{(N)}(\underline{t}, \underline{p})$, we obtain:

$$SIR_k^{(N)}(\underline{t}, \underline{p}) = \widetilde{SIR}_k^{(N)}(t_1; l_{2:N}, \underline{p}) = \sum_{r \in E} \sum_{\theta=1}^{t_1} \psi_{kr}(\theta)\, g_r(\underline{t} - \theta \mathbf{1}_{1:N}, \underline{p}). \tag{37}$$

Consequently:

$$\lim_{t_1 \to \infty} SIR^{(N)}(\underline{t}, \underline{p}) = \lim_{t_1 \to \infty} \widetilde{SIR}_k^{(N)}(t_1; l_{2:N}, \underline{p}).$$

Let us now compute the second limit using the key Markov renewal theorem (cf. [1]). First, we observe that:

$$\begin{split} \tilde{g}_k(t_1; l_{2:N}, \underline{p}) &= g_k(\underline{t}, \underline{p}) = \mathbb{P}_{(k,0)}(Z_l \in U, \text{ for all } l \in [t_i, t_i + p_i],\ i = 1, \dots, N,\ T_1 > t_1) \\ &\le \mathbb{P}_{(k,0)}(Z_l \in U,\ l \in [t_1, t_1 + p_1],\ T_1 > t_1) = g_k(t_1, p_1) \le R_k(t_1 + p_1). \end{split}$$

Using this result, we have:

$$\sum_{t_1 \ge 0} |\, \tilde{g}_k(t_1; l_{2:N}, \underline{p})\,| \le \sum_{t_1 \ge 0} R_k(t_1 + p_1) \le \mathbb{E}_{(k,0)}(T_D),$$

where *TD* is the lifetime of the system. Thus, we are under the hypotheses of the key Markov renewal theorem and we obtain:

$$\begin{split} \lim_{t_1 \to \infty} SIR^{(N)}(\underline{t}, \underline{p}) &= \lim_{t_1 \to \infty} (\boldsymbol{\psi} * \tilde{\mathbf{g}})(t_1; l_{2:N}, \underline{p}) \\ &= \sum_{j \in U} \frac{1}{\mu_{jj}} \sum_{t_1 \ge 0} g_j^{(N)}(\underline{t}, \underline{p}) = \frac{1}{\sum_{i \in E} \nu(i) m_i} \sum_{j \in U} \nu(j) \sum_{t_1 \ge 0} g_j^{(N)}(\underline{t}, \underline{p}), \end{split}$$

where $\mu_{jj}$ is the mean recurrence time of state $j$ for the semi-Markov chain, which satisfies $\mu_{jj} = \sum_{i \in E} \nu(i) m_i / \nu(j)$.
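The leading factor of (28) only requires the stationary distribution $\nu$ of the embedded chain and the mean sojourn times. The sketch below is purely illustrative: the two-state transition matrix `P` and the means `m` are hypothetical values of ours, not taken from the paper.

```python
import numpy as np

def stationary_distribution(P):
    """Solve nu P = nu with sum(nu) = 1 for an ergodic stochastic matrix P."""
    s = P.shape[0]
    # stack the balance equations (P^T - I) nu = 0 with the normalisation row
    A = np.vstack([P.T - np.eye(s), np.ones((1, s))])
    b = np.zeros(s + 1)
    b[-1] = 1.0
    nu, *_ = np.linalg.lstsq(A, b, rcond=None)
    return nu

P = np.array([[0.0, 1.0],
              [0.5, 0.5]])    # hypothetical embedded transition matrix
m = np.array([2.0, 3.0])      # hypothetical mean sojourn times m_i
nu = stationary_distribution(P)
normaliser = 1.0 / (nu @ m)   # the factor 1 / sum_i nu(i) m_i in (28)
```

The remaining ingredient of (28), the summed functions $g_j^{(N)}$, depends on the full kernel of the specific model under study.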

#### **4. A Numerical Example**

In this section, we will present a numerical example considering a semi-Markov model that governs a repairable system. The setting is as follows: the state space is *E* = {1, 2, 3}, the operational states are the first two, *U* = {1, 2}, and the non-working state is the last one, *D* = {3}.

The transitions of the repairable semi-Markov model are given in the flowgraph of Figure 1.

**Figure 1.** Semi-Markov model.

The transition matrix **p** of the EMC *J* and the initial distribution *μ* are given by

$$\mathbf{p} = \begin{pmatrix} 0 & 1 & 0 \\ 0.8 & 0 & 0.2 \\ 1 & 0 & 0 \end{pmatrix}, \boldsymbol{\mu} = (1,0,0).$$

Now, let $X_{ij}$ be the conditional sojourn time of the SMC $Z$ in state $i$, given that the next state is $j$ ($j \ne i$). The conditional sojourn times are given as follows:

> $X_{12} \sim$ Geometric(0.2), $X_{21} \sim$ discrete Weibull(0.8, 1.2), $X_{23} \sim$ discrete Weibull(0.6, 1.2), $X_{31} \sim$ discrete Weibull(0.9, 1.2).
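The kernel of this example can be assembled numerically. In the sketch below we assume the type-I discrete Weibull of Nakagawa and Osaki, with pmf $P(X = k) = q^{(k-1)^{\beta}} - q^{k^{\beta}}$, $k \ge 1$, and a geometric law supported on $\{1, 2, \dots\}$; the truncation horizon `K` is our own choice.

```python
import numpy as np

def dweibull_pmf(q, beta, K):
    # Type-I discrete Weibull (Nakagawa-Osaki): P(X = k) = q^{(k-1)^beta} - q^{k^beta}
    k = np.arange(1, K + 1)
    return q ** ((k - 1.0) ** beta) - q ** (k ** beta)

def geometric_pmf(p, K):
    # Geometric on {1, 2, ...}: P(X = k) = p (1 - p)^{k-1}
    k = np.arange(1, K + 1)
    return p * (1.0 - p) ** (k - 1)

K = 200  # truncation horizon (assumption)
P = np.array([[0.0, 1.0, 0.0],
              [0.8, 0.0, 0.2],
              [1.0, 0.0, 0.0]])   # embedded transition matrix p
f = np.zeros((3, 3, K))           # conditional sojourn-time pmfs f_ij(k)
f[0, 1] = geometric_pmf(0.2, K)
f[1, 0] = dweibull_pmf(0.8, 1.2, K)
f[1, 2] = dweibull_pmf(0.6, 1.2, K)
f[2, 0] = dweibull_pmf(0.9, 1.2, K)
q = P[:, :, None] * f             # semi-Markov kernel q_ij(k) = p_ij f_ij(k)
```

Up to the truncation error, each row of the kernel sums to one over the next state $j$ and the time $k$, as a semi-Markov kernel should.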

In the following figures, we investigate the semi-Markov repairable system in terms of the proposed reliability measures studied in Section 3.

Figure 2 illustrates the conditional sequential interval reliability for two time intervals of the same length (one time unit) moving jointly through time, that is, the probability that the system is operational in the time intervals $(k, k+1)$ and $(k+2, k+3)$ for $k \in \{1, 2, \dots, 8\}$. We note that, as time $k$ passes, the system converges to the asymptotic sequential interval reliability $SIR^{(2)}(0; \underline{t}, \underline{p}) = 0.6603$, as given by Theorem 1. Furthermore, the sequential interval reliability $SIR^{(2)}(0; (k, k+2), (1, 1))$ is equal to the conditional one $SIR_1^{(2)}(0; (k, k+2), (1, 1))$, due to the fact that the only possible initial state is the first one. The point here is to study the probability of the system being operational during different time periods with the same working duration and a fixed time distance between them, equal to one time unit.
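The qualitative behaviour shown in Figure 2 can be cross-checked by simulation. The sketch below is a rough Monte Carlo estimate under our own reading of the intervals (the system must be up at all integer times of $[k, k+1]$ and $[k+2, k+3]$), with the sojourn laws of this example; the seed, truncation and sample size are our choices, and the estimate is only indicative.

```python
import numpy as np

rng = np.random.default_rng(42)
K = 100  # truncation of the sojourn-time supports (assumption)

def dweibull_pmf(q, beta):
    k = np.arange(1, K + 1)
    return q ** ((k - 1.0) ** beta) - q ** (k ** beta)

def geometric_pmf(p):
    k = np.arange(1, K + 1)
    return p * (1.0 - p) ** (k - 1)

P = np.array([[0.0, 1.0, 0.0],
              [0.8, 0.0, 0.2],
              [1.0, 0.0, 0.0]])
pmf = {(0, 1): geometric_pmf(0.2), (1, 0): dweibull_pmf(0.8, 1.2),
       (1, 2): dweibull_pmf(0.6, 1.2), (2, 0): dweibull_pmf(0.9, 1.2)}
pmf = {ij: w / w.sum() for ij, w in pmf.items()}  # renormalise the truncated pmfs

def sample_path(horizon):
    """One trajectory of Z on {0, ..., horizon}, started in state 0 (state 1 of the paper)."""
    path, state = [], 0
    while len(path) <= horizon:
        nxt = rng.choice(3, p=P[state])
        sojourn = int(rng.choice(K, p=pmf[(state, nxt)])) + 1  # index i encodes duration i+1
        path.extend([state] * sojourn)
        state = nxt
    return np.array(path[:horizon + 1])

k, n_paths, first, both = 4, 2000, 0, 0
for _ in range(n_paths):
    up = np.isin(sample_path(k + 3), [0, 1])    # up-states U = {1, 2} of the paper
    ok_first = up[k:k + 2].all()                # working throughout [k, k+1]
    first += ok_first
    both += ok_first and up[k + 2:k + 4].all()  # ... and also throughout [k+2, k+3]
print(both / n_paths)  # Monte Carlo estimate of the sequential interval reliability
```

By construction, the estimate over both intervals can never exceed the estimate over the first interval alone, in line with the monotonicity properties of Proposition 2.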

Then, Figure 3 examines the probability that the system is working during two time periods with the same working duration (equal to one time unit), while the time distance between them increases. We considered the first interval to be fixed, equal to $(1, 2)$, and the other one moving apart with step 1 each time, i.e., $(k+2, k+3)$ for $k \in \{1, 2, \dots, 8\}$. It can easily be seen that the probability that the system is still working in both time intervals is a decreasing function of $k$: as the two time intervals move further apart, it becomes less likely that the system is operational in both of them. Note that the probability that the system, starting from the non-working state 3, is in the up-states is sufficiently small.

Whereas the initial backward did not play a role in the previous simulations (Figures 2 and 3), in Figure 4 the sequential interval reliability with initial backward $v = 10$ is examined. The first time interval is considered fixed, $(1, 2)$, and the other one is moving apart with step 1 each time, i.e., $(k+3, k+4)$ for $k \in \{1, 2, \dots, 8\}$.

From the point of view of real applications, the proposed reliability measures can be applied to a wide variety of time-dependent physical phenomena. In the literature, many research works model such phenomena via semi-Markov processes, from finance [23] to power demand [29]. D'Amico et al. ([23,26]) proposed a semi-Markov model and associated reliability measures for constructing a credit risk model that solves problems arising from the non-Markovian nature of the phenomenon. Further developments and recent contributions on this aspect are provided in [28,30,31].

The characteristics of semi-Markov chains, which allow for non-memoryless sojourn-time distributions, permit treating the duration problem in an effective way. Indeed, it is possible to define and compute different probabilities of changing state, default probabilities included, taking into account the time spent in a rating class. This aspect is crucial in credit rating studies, because the duration dependence of transition probabilities naturally translates into many financial indicators, which change their values according to the time elapsed in the last rating class; see, e.g., [32].

**Figure 2.** Sequential interval reliability plot with equally moving intervals.


**Figure 3.** Sequential interval reliability plot with intervals moving apart.

**Figure 4.** Sequential interval reliability plot with the initial backward (*v* = 10) and intervals moving apart.

In the case of financial modelling (see, e.g., [33]), the presented results could be applied in order to create advanced credit scoring models. A financial asset, such as a government bond, usually receives a "grade" based on the perceived ability of the country to repay its debt (also known as creditworthiness). These grades strongly affect the interest rates on the country's debt, and they are clearly separated. Let us consider, from a simple point of view, the following set of states as the ratings:

$$E = \{A,\ B,\ C\}.$$

If the country bond receives a rating in the set $U = \{A, B\}$, this means that it is considered creditworthy and can borrow money from the markets at reasonable interest rates. On the contrary, if the rating is within the set $D = \{C\}$, then the country cannot borrow money from the markets, due to very high interest rates caused by its perceived difficulty in repaying the debt. D'Amico et al. [23] proposed flexible reliability measures based on semi-Markov processes for constructing a credit risk model that solves the problems arising from the non-Markovian nature of the phenomenon. Following that work, the measures proposed in the present paper can be applied in the same way.


It is clear that these measures have applications in a variety of stochastic phenomena, due to their flexibility and their ability to significantly extend our knowledge about the evolution of the process. They can solve problems and provide answers, from a probabilistic point of view, for systems that can shift between failure and operational states. Finally, they can be considered as generalisations of the classical reliability measures for semi-Markov processes.

#### **5. Concluding Remarks**

This paper presents a new reliability indicator, called sequential interval reliability (SIR), which is evaluated for a discrete-time homogeneous semi-Markov repairable system. This indicator includes as particular cases several functions that are frequently used in reliability studies, such as the reliability and availability functions, as well as the interval reliability function.

The paper contains new theoretical results on both the transient and the asymptotic case. More precisely, a recurrent-type equation is established for the calculation of the SIR function in the transient case, and a limit theorem establishes its asymptotic behaviour. These results generalise the corresponding known results for standard reliability indicators. The applicability of our results to real systems is shown by a numerical example in which the theoretical results are illustrated from a practical point of view. Several aspects remain open, an important one being the application of the proposed indicator to different applied problems involving real data.

**Author Contributions:** Conceptualization, G.D.; data curation, T.G.; formal analysis, V.S.B. and G.D.; methodology, G.D.; software, V.S.B. and T.G.; supervision, V.S.B.; validation, V.S.B. and T.G.; writing–original draft, V.S.B. and T.G. All authors have read and agreed to the published version of the manuscript.

**Funding:** Guglielmo D'Amico acknowledges the financial support by the University G. d'Annunzio of Chieti-Pescara, Italy (FAR2020).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The authors would like to express their gratitude to Professors Panagiotis-Christos Vassiliou and Andreas C. Georgiou for the opportunity to submit the present manuscript to the Special Issue *Markov and Semi-Markov Chains, Processes, Systems and Emerging Related Fields* for possible publication. The authors also wish to express their appreciation to the anonymous referees as well as to the editorial assistance of Devin Zhang for the time dedicated to our paper and for the valuable suggestions and constructive comments that improved both the quality and the presentation of the manuscript.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**

