#### *1.1. Background Introduction to the Study*

Ensuring the desired availability of all machine tools in a production line is an important issue [1,2]. Availability stands for their ability to obtain and maintain the functional state necessary to deliver the required performance [3–5]. The technical readiness of machines is an important element of company diagnostics and should be estimated, as its evaluation helps shape the capacity of a production line. High machine-tool reliability translates into no unnecessary downtime and, consequently, greater process efficiency. Machine tools must be technically sound, adequately controlled, and supplied with the necessary materials, energy, and information [6]. The availability of a machine tool is determined using a probability-theory-based reliability model. In probability theory, the state of an object is defined as the result of one and only one event in a sequence of trials over a finite or countable set of pairwise mutually exclusive elementary events [7]. This makes it possible to use the tools of probability calculus and mathematical statistics to analyze technical systems. When machine tools are in operation, they transit stochastically from one state to another. As a result, transition probabilities are associated with all machine tools in a production line. Therefore, the Markov chain and its derivatives are often used to build reliability models. Relevant articles in which Markov-chain-based reliability models are used to study the availability of machine tools in a production line are described below.
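As a minimal illustration of the transition probabilities mentioned above, they can be estimated from a recorded sequence of machine states by maximum-likelihood counting. The two-state log below (U = up, D = down) and the helper `transition_matrix` are hypothetical, not taken from the studied company:

```python
from collections import Counter

def transition_matrix(states, seq):
    """Maximum-likelihood estimate of one-step transition
    probabilities P(j | i) from an observed state sequence."""
    pairs = Counter(zip(seq, seq[1:]))   # counts of observed transitions i -> j
    visits = Counter(seq[:-1])           # visits to i that have a successor
    return {i: {j: pairs[(i, j)] / visits[i] for j in states}
            for i in states if visits[i]}

# Hypothetical machine log: U = up (producing), D = down (failure/repair)
seq = list("UUUDUUDDUUUUDUU")
P = transition_matrix(["U", "D"], seq)
# Each row of P sums to 1; P["D"]["U"] estimates the repair-completion probability
```

Each row of the estimated matrix is a probability distribution over successor states, which is the object the Markov-chain models below operate on.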

The use of Markov processes and their generalization, semi-Markov processes, is popular. Their use is dictated by the multi-state nature of technical objects and the assumption that assessing the individual functional states an object passes through is a better measure than the readiness of the object as a whole. However, the use of these models is subject to restrictions. First of all, the Markov property must hold: the probability of a future state is independent of the past states and depends only on the present state. Identifying a model without meeting this assumption may lead to false conclusions, as many authors suggest [8,9]. They point out that skipping the examination of the Markov property results in incorrect analyses, e.g., Shi et al. [10], Zhang et al. [8], or Kozłowski et al. [11]. Therefore, it is necessary to examine the randomness of sequences of successive operational states, as done by Yang et al. [12] or Komorowski and Raffa [13].
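A minimal sketch of such an examination: compare first-order transition estimates P(k | j) with second-order estimates P(k | i, j). Discrepancies near zero are consistent with memorylessness. This is only a heuristic screen, not one of the formal tests used in the cited works:

```python
from collections import Counter

def markov_discrepancy(seq):
    """Largest difference between second-order transition estimates
    P(k | i, j) and first-order estimates P(k | j).  Values near 0 are
    consistent with the memoryless (Markov) property."""
    first = Counter(zip(seq[:-1], seq[1:]))   # pair counts (j, k)
    single = Counter(seq[:-1])                # visits to j with a successor
    triple = Counter(zip(seq, seq[1:], seq[2:]))   # triple counts (i, j, k)
    pair = Counter(zip(seq[:-2], seq[1:-1]))       # prefixes (i, j) of triples
    worst = 0.0
    for (i, j, k), n in triple.items():
        p2 = n / pair[(i, j)]                 # P(k | i, j)
        p1 = first[(j, k)] / single[j]        # P(k | j)
        worst = max(worst, abs(p2 - p1))
    return worst
```

For a strictly alternating sequence the discrepancy is zero, while a sequence whose next state depends on two steps of history (e.g., the repeating pattern AAB) shows a large discrepancy.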

In addition, Markov models require the assumption that the unconditional dwell times of the process in the individual states, and the conditional durations of an individual state given that the successor is one of the remaining states, are exponentially distributed random variables [14,15]. Many authors point out that proper matching of distributions affects the reliability of the results [13,16]. They use the Markov model for exponential distributions [17,18] and the semi-Markov model for the remaining ones, e.g., Weibull [19] or Gamma [20]. Adopting an assumption about the form of the distribution without a statistical examination of the collected sample may lead to wrong conclusions.
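One simple way to screen a dwell-time sample is sketched below with a hand-rolled Kolmogorov–Smirnov statistic against a fitted exponential; in practice a proper goodness-of-fit test with critical values adjusted for the estimated parameter should be used. The samples are synthetic, for illustration only:

```python
import math
import random

def ks_exponential(sample):
    """KS distance between the empirical CDF of a sample and an
    exponential CDF with the maximum-likelihood rate 1/mean."""
    lam = len(sample) / sum(sample)          # MLE of the rate parameter
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for k, x in enumerate(xs, start=1):
        f = 1.0 - math.exp(-lam * x)         # fitted exponential CDF at x
        d = max(d, abs(f - k / n), abs(f - (k - 1) / n))
    return d

random.seed(1)
expo = [random.expovariate(0.5) for _ in range(2000)]          # exponential dwell times
weib = [random.weibullvariate(1.0, 3.0) for _ in range(2000)]  # Weibull, shape 3
# The exponential sample fits far better than the Weibull one,
# suggesting a Markov model for the first and a semi-Markov model for the second.
```

A large statistic for the Weibull sample is exactly the situation where assuming exponential dwell times, and hence a Markov model, would be unjustified.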

#### *1.2. The Aim of the Study*

The need to check the Markov property is discussed in more detail in the literature [10,11], while less attention is paid to the distributions of the variables studied. Therefore, this publication compares the results of process examination according to the semi-Markov model, for variables with non-parametric distributions, with an analysis according to the Markov model in which their exponential form was (falsely) assumed. The differences in the results obtained clearly indicate that it is necessary to carry out a preliminary test before choosing the right model. Failure to meet the assumptions leads to an inaccurate analysis of the process.

The aim of the article was also to evaluate the readiness of a production machine, which is an important element of the analyzed production process. The results obtained made it possible to determine the probabilities of transitions between the individual states distinguished in the production process, as well as to define the limit probabilities and the technical readiness coefficient. This makes it possible to assess the compliance of the analyzed process with the schedule adopted in the company, or to evaluate its production capabilities. The proposed models can also be used to simulate the production process, e.g., at the design phase.
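For a finite ergodic chain, limit probabilities and the readiness coefficient of the kind mentioned above can be obtained numerically by repeated multiplication with the transition matrix. The three-state matrix below is purely illustrative (state 0 = production, 1 = changeover, 2 = failure/repair), not the matrix estimated later in the article:

```python
def limit_probabilities(P, iters=200):
    """Limit (stationary) distribution of a finite ergodic Markov
    chain via power iteration; rows of P must sum to 1."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Hypothetical states: 0 = production, 1 = changeover, 2 = failure/repair
P = [[0.90, 0.06, 0.04],
     [0.70, 0.20, 0.10],
     [0.80, 0.00, 0.20]]
pi = limit_probabilities(P)
availability = pi[0]   # readiness coefficient: long-run share of time in production
```

Solving the balance equations by hand for this matrix gives pi[0] = 1/1.134375, about 0.88, so the machine would spend roughly 88% of its time producing.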

The article consists of five sections. The first presents an analysis of the literature on the application of Markov models for studying the technical readiness of machine tools in a production line. Section 2 presents a mathematical formulation of the research problem. Section 3 describes the studied company and analyzes the collected data in terms of the Markov property and the form of the distributions of the variables. Section 4 presents a case study containing the estimation of the Markov and semi-Markov model parameters, as well as a numerical example and detailed calculations according to the developed model. The article ends with conclusions describing the goals achieved and indicating the added value of the study.

#### **2. Mathematical Modeling**

**Definition 1.** *Let us consider a random process with a finite state space S* = *{1, ..., s}, s* < ∞*. Let* (Ω, F, *P*) *be a probability space and X*(*t*) : *t* ∈ *T a stochastic process defined on* (Ω, F, *P*)*, taking values from the finite or countable set S. The process X*(*t*) : *t* ∈ *T is called a Markov process if for each i*, *j*, *i*<sub>0</sub>, *i*<sub>1</sub>, ..., *i*<sub>*n*−1</sub> ∈ *S and for each t*<sub>0</sub>, *t*<sub>1</sub>, ..., *t*<sub>*n*</sub>, *t*<sub>*n*+1</sub> ∈ *T meeting the requirement t*<sub>0</sub> < *t*<sub>1</sub> < ... < *t*<sub>*n*</sub> < *t*<sub>*n*+1</sub> *the dependency given below is met:*

$$P(X(t\_{n+1}) = j | X(t\_n) = i, X(t\_{n-1}) = i\_{n-1}, \dots, X(t\_0) = i\_0) = P(X(t\_{n+1}) = j | X(t\_n) = i),\tag{1}$$

Assuming that *t*<sub>*n*</sub> = *u*, *t*<sub>*n*+1</sub> = *s*, the conditional probability is:

$$P(X(s) = j | X(u) = i) = p\_{ij}(u, s),\tag{2}$$

for *i*, *j* ∈ *S*, where *p*<sub>*ij*</sub>(*u*, *s*) denotes the probability of transition from state *i* at time *u* to state *j* at time *s*.

Assuming that *t*<sub>0</sub>, *t*<sub>1</sub>, ..., *t*<sub>*n*−1</sub> denote instants from the past, *t*<sub>*n*</sub> the present instant, and *t*<sub>*n*+1</sub> an instant in the future, the equation says that the future does not depend on the past when the present is known; thus the probability of the future state is independent of the past states and depends only on the present state. This property is called the Markov property, and a stochastic process that satisfies it, a memoryless process. If the instants of time are discrete, *T* = *N*<sub>0</sub> = {0, 1, 2, ...}, then we are dealing with a Markov chain, and when the process is realized in continuous time, *T* = *R*<sup>+</sup> = [0, ∞), it is a continuous-time Markov process.

For a stochastic process *X*(*t*) : *t* ≥ 0 taking values from the finite or countable set *S*, with piecewise-constant, right-continuous trajectories, let τ<sub>0</sub> = 0 mark the start of the process and τ<sub>1</sub>, τ<sub>2</sub>, ... denote the successive times of state changes. Then the random variable:

$$T\_i = \tau\_{n+1} - \tau\_n | \mathbf{X}(\tau\_n) = i, \ i \in S,\tag{3}$$

denotes the waiting time in the state *i* when a successor state is unknown. From the Chapman–Kolmogorov equation, it follows [21,22] that the process dwelling times in the individual states constitute random variables with exponential distributions and with the parameter λ*<sup>i</sup>* > 0:

$$G\_i(t) = P(T\_i \le t) = P(\tau\_{n+1} - \tau\_n \le t | X(\tau\_n) = i) = 1 - e^{-\lambda\_i t}, \ t \ge 0, \ i \in S,\tag{4}$$

where *Gi* is the cumulative distribution function of the random variable *Ti* [23] when the successor state is unknown.
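Equation (4) can be checked empirically. The sketch below simulates exponential dwell times with an illustrative rate λ<sub>*i*</sub> = 2 and verifies both the mean 1/λ<sub>*i*</sub> and the memoryless property P(T > s + t | T > s) = P(T > t):

```python
import random

random.seed(7)
lam = 2.0                                   # illustrative rate of leaving state i
n = 100_000
dwell = [random.expovariate(lam) for _ in range(n)]

mean = sum(dwell) / n                       # should be close to 1 / lam = 0.5

# Memorylessness: P(T > s + t | T > s) should equal P(T > t)
s, t = 0.4, 0.3
tail_cond = sum(1 for x in dwell if x > s + t) / sum(1 for x in dwell if x > s)
tail = sum(1 for x in dwell if x > t) / n   # theoretical value exp(-lam * t)
```

Both tail estimates approach e<sup>−λt</sup> ≈ 0.549, which is exactly the memorylessness that fails for the non-exponential dwell times handled by the semi-Markov model below.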

The generalization of Markov processes are semi-Markov processes, for which the dwell times in the individual states can have arbitrary distributions concentrated on the set [0, ∞). Following [24,25], in this article the semi-Markov process with a finite set of states is defined starting from the Markov renewal process.

In the probability space (Ω, F, P), random variables are defined for each *n* ∈ *N*:

$$\xi\_n : \Omega \to S,\tag{5}$$

$$\vartheta\_n : \Omega \to \mathbb{R}\_+.\tag{6}$$

A two-dimensional sequence of random variables (ξ*n*, ϑ*n*) : *n* ∈ *N* is referred to as the Markov renewal process if for each *n* ∈ *N*, *i*, *j* ∈ *S*, *t* ∈ *R*+:

$$P\{\xi\_{n+1} = j, \ \vartheta\_{n+1} < t \mid \xi\_n = i, \xi\_{n-1}, \dots, \xi\_0, \vartheta\_n, \dots, \vartheta\_0\} = P\{\xi\_{n+1} = j, \vartheta\_{n+1} < t \mid \xi\_n = i\},\tag{7}$$

and

$$P\{\xi\_0 = i, \vartheta\_0 = 0\} = P\{\xi\_0 = i\}.\tag{8}$$

This definition shows that the Markov renewal process is a specific case of a two-dimensional Markov process whose transition probabilities depend solely on the value of the discrete coordinate. The Markov renewal process (ξ*n*, ϑ*n*) : *n* ∈ *N* is called homogeneous if the probabilities:

$$P\{\xi\_{n+1} = j, \ \vartheta\_{n+1} < t \mid \xi\_n = i\} = Q\_{ij}(t),\tag{9}$$

do not depend on *n*.

From the above definition, it follows that for each pair (*i*, *j*) ∈ *S* × *S* the function *Qij*(*t*) is a non-decreasing function of *t* [24,25].

The functional matrix:

$$Q(t) = \left[ Q\_{ij}(t) \right], \ i, j \in S,\tag{10}$$

is called the renewal kernel of the semi-Markov process and together with the initial distribution:

$$p\_i = P\{\xi\_0 = i\}, \ i \in S,\tag{11}$$

characterizes the homogeneous Markov renewal process.

The semi-Markov process is defined based on the homogeneous Markov renewal process (ξ*n*, ϑ*n*) : *n* ∈ *N*. Let:

$$\tau\_0 = \vartheta\_0 = 0,\tag{12}$$

$$\tau\_n = \vartheta\_1 + \dots + \vartheta\_n,\tag{13}$$

$$\tau\_{\infty} = \sup\{\tau\_n : n \in \mathbb{N}\_0\}.\tag{14}$$

The stochastic process *X*(*t*) : *t* ∈ *R*<sup>+</sup>, which assumes a constant value on the interval [τ*n*, τ*n*+1), *n* ∈ *N*:

$$X(t) = \xi\_n,\tag{15}$$

is called the semi-Markov process.
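To make the definition concrete, the sketch below simulates a hypothetical two-state semi-Markov process: the embedded chain alternates between an up state with Weibull dwell times (hence not exponential, so not a Markov process) and a down state with exponential repair times. The long-run fraction of up time estimates the readiness coefficient E[T<sub>U</sub>]/(E[T<sub>U</sub>] + E[T<sub>D</sub>]); all parameters are illustrative:

```python
import random

random.seed(42)

def simulate_availability(horizon=50_000.0):
    """Simulate a two-state semi-Markov process (embedded chain
    alternates U -> D -> U) and return the long-run fraction of up time."""
    t, state, up_time = 0.0, "U", 0.0
    while t < horizon:
        if state == "U":
            d = random.weibullvariate(8.0, 1.5)   # Weibull up time (non-exponential)
            up_time += min(d, horizon - t)        # clip the last sojourn at the horizon
            state = "D"
        else:
            d = random.expovariate(1.0)           # exponential repair, mean 1
            state = "U"
        t += d
    return up_time / horizon

A = simulate_availability()
# Theory: E[T_U] = 8 * Gamma(1 + 1/1.5) ~ 7.22, so A ~ 7.22 / 8.22 ~ 0.88
```

This is the quantity the kernel *Q*(*t*) encodes in general: the embedded transition probabilities together with state-dependent, arbitrarily distributed sojourn times.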

Markov and semi-Markov models are particularly often used to assess the readiness and reliability of technical facilities or their individual components [26–28]. Various systems, including production ones [29,30], are analyzed in terms of maintaining operability [31], production organization [32], and demand shaping [33]. This article analyzes the production system from the point of view of machine readiness to perform production tasks.

#### **3. Data Handling**
