**2. Process Capability**

The course of a process is best characterized using appropriate statistical methods (PN-EN ISO 9004:2018-06) [28]. This requires quantitative results of measuring process variables (PN-EN ISO 9001:2015-10) [29] that allow effective process monitoring.

The basic goals of process analysis are as follows [30]:


An ideal, stationary process would show no dispersion of results: any selected process variable characterizing its output would have a constant value throughout the process. In practice, there are no stationary processes. The result of every real process takes the form of a distribution of the values of the selected process variable; it is characterized by dispersion. Thus, the basic parameters for assessing the quality of processes are:


The assessment of the quality of production processes is associated with the assumption that each product delivered to the customer carries a loss; the smaller the loss, the higher the quality of the product [31,32]. The loss grows as the value of the considered product feature deviates from the target value, even within the tolerance range. This contradicts the view expressed by Taylor that product quality is constant as long as the property under consideration falls within the tolerance range (Figure 1). The tolerance field is defined here by the lower (LSL) and upper (USL) tolerance limits.

**Figure 1.** Taguchi and Taylor quality loss functions. Source: own study.

Therefore, given the approach presented by Taguchi, every product whose parameters deviate from the target value is characterized by a loss of quality [33,34].

To characterize a process by the capability specified for the selected process variable x, the width of the range within which the obtained process results are accepted should be compared with the adopted limit of the process variable distribution, e.g., with the 6σ range [35].

Indicators of process capability are increasingly used in industrial practice. With the introduction of the process approach in quality management in 2000 [29], it became necessary to monitor processes: "The organization should define the processes needed in the quality management system and their application in the organization and should (...) define and apply the criteria and methods (including monitoring, measurement and related performance indicators) needed to ensure the effective conduct and supervision of these processes". Point 8.5.1 of this standard refers to "the ability to achieve the planned results of production processes and services provided." In industrial practice, the planned result is the required width of the range of the tested property of the manufactured product.

Process capability is an inherent feature of the process, resulting from the statistical description of one of its outputs (adopted for the description of the process) over a selected period of time. It is the relationship between the required tolerance of the considered product property (the process output, treated as a process variable) and the obtained dispersion of the values of this variable, i.e., the result of the adopted method of limiting the distribution of variable values obtained over a given process duration. A graphic interpretation of the process capability assessment is presented in Figure 2.

The simplest form of the process capability indicator, denoted by *CP*, is defined as follows:

$$C\_P = \frac{\text{required tolerance}}{\text{process dispersion}}, \tag{1}$$

This coefficient is used when the instantaneous mean process value $\overline{x}$, obtained on the basis of measurements, equals the assumed target process value *T*. The arithmetic mean $\overline{x}$ and standard deviation *s* of the measurements are given by the formulas:

$$\overline{x} = \frac{1}{n} \sum\_{i=1}^{n} x\_i, \tag{2}$$

$$s = \sqrt{\frac{1}{n} \sum\_{i=1}^{n} (x\_i - \overline{x})^2} \quad \text{for } n > 30,\tag{3}$$

$$s = \sqrt{\frac{1}{n-1} \sum\_{i=1}^{n} (x\_i - \overline{x})^2} \quad \text{for } n \le 30,\tag{4}$$

where:

$x\_i$, measurement value; *n*, sample size.
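As a sketch, formulas (2)–(4) can be computed directly in plain Python; the measurement values below are illustrative, not taken from the paper:

```python
import math

def sample_mean(xs):
    """Arithmetic mean, formula (2)."""
    return sum(xs) / len(xs)

def sample_std(xs):
    """Standard deviation: divisor n for n > 30 (formula (3)),
    divisor n - 1 for n <= 30 (formula (4))."""
    n = len(xs)
    m = sample_mean(xs)
    ss = sum((x - m) ** 2 for x in xs)
    divisor = n if n > 30 else n - 1
    return math.sqrt(ss / divisor)

# Illustrative measurements of a process variable
measurements = [10.02, 9.98, 10.01, 9.97, 10.03, 10.00, 9.99, 10.01]
x_bar = sample_mean(measurements)
s = sample_std(measurements)  # n = 8 <= 30, so the n - 1 divisor applies
```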

As follows from the above dependence, in order not to generate excessive losses associated with defective products, the process capability index should satisfy [36]:

$$C\_P \ge 1.\tag{5}$$

In the assessment of process capability in industrial practice, the distribution limited by six standard deviations is taken as the measure of the scatter of measurement results [37]. For the normal distribution of the process variable, the *CP* process capability indicator takes the form:

$$C\_P = \frac{USL - LSL}{6\sigma},\tag{6}$$

where:

*USL*, upper tolerance limit;

*LSL*, lower tolerance limit;

σ, standard deviation of the general population.

Due to the growing requirements of customers, especially global concerns, criteria for the minimum limit value of the *CP* coefficient have been adopted in some industries [38]. According to Steinem et al. [39], the minimum values of the *CP* coefficient for selected industries are *CP* = 1 for the machinery industry, *CP* = 1.33 for the automotive industry, and *CP* = 2 for the aviation industry. Analyzing the capability of the production process, three ranges of the *CP* value can be distinguished. When *CP* > 1, the process dispersion is smaller than the width of the tolerance range; this is the recommended process capability. For *CP* = 1, the tolerance range is equal to the process dispersion; this is a satisfactory process capability [40]. When *CP* < 1, the tolerance range is narrower than the process dispersion and defective products are produced in excessive quantity; the process capability is then insufficient [41]. Other interpretations of the process capability coefficient can be found in the literature; e.g., Kubera states that process capability is low for *CP* < 1, average for 1 < *CP* < 1.3, and high for *CP* > 1.33 [41]. The interpretation may differ across industries and processes; in this paper, capability at the *CP* = 1 level is accepted as satisfactory.
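The indicator in formula (6) and the three capability ranges above can be sketched in Python as follows; the tolerance limits and σ value in the usage line are invented for illustration:

```python
import math

def process_capability(usl, lsl, sigma):
    """C_P indicator, formula (6): required tolerance over the 6-sigma dispersion."""
    return (usl - lsl) / (6.0 * sigma)

def interpret_cp(cp, tol=1e-9):
    """The three ranges discussed in the text:
    CP > 1 recommended, CP = 1 satisfactory, CP < 1 insufficient."""
    if math.isclose(cp, 1.0, abs_tol=tol):
        return "satisfactory"
    return "recommended" if cp > 1.0 else "insufficient"

# Illustrative values: tolerance width 6.0, population sigma 0.5
cp = process_capability(usl=12.0, lsl=6.0, sigma=0.5)
verdict = interpret_cp(cp)
```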

Whether a production process is capable of achieving the assumed performance parameters depends, among other things, on the reliability of the machines and technological devices that make up the system under design [42]. Optimization of the layout and temporal structure of manufacturing requires a multi-criteria approach in designing production systems [43].

**Figure 2.** Graphic illustration of process capability. Source: own study.

#### **3. Bootstrap Method**

There are methods that simplify the procedure of assessing process capability using numerical indicators. One such method is bootstrap analysis, based on so-called bootstrap samples [44], which can be used when the sample size is not very large (see Figure 3). The purpose of this analysis is to verify results previously obtained using indicator methods.

**Figure 3.** Scheme of bootstrap analysis. Source: own study.

Bootstrap methods have been known for over 20 years, but only in recent years have they been widely used, primarily in stochastic simulation models. The basis of this method is the assumption that the future is similar to the past. Therefore, instead of studying the past, trying to describe it using theoretical distributions, and then simulating the future using the selected distributions, simulation input data can be generated directly from historical data [45,46]. As a consequence, since the observed sample of real data contains all the necessary information about the studied population, this sample can be treated as the population.

Bootstrap analysis consists of drawing, with replacement, individual results from a random sample (from the original data) and then creating a new sample for the study from the drawn values [47]. Thus, it is a method of estimating the distribution of estimation errors using multiple random draws of observations with replacement from the original sample. The draws take into account all possible combinations of elements from the sample, based on real data [48]. This allows checking the distribution of parameters in relation to the initial sample [49]. The bootstrap method is useful when the form of the distribution of the variable in the population is unknown and when the quality or amount of information collected does not allow the use of classic statistical methods [50]. Because it makes no assumptions about the distribution in the population, it is classed among the non-parametric methods.

Let $x\_{j1}, x\_{j2}, x\_{j3}, \ldots, x\_{jn}$ denote the values of the process variable $X\_j$ of the production process under investigation. From the given set of $x\_{ji}$ values, we draw *n* measurement values with replacement, and in this way obtain new bootstrap samples $B\_1^*, B\_2^*, B\_3^*, \ldots, B\_B^*$.

Each bootstrap sample consists of exactly the same number *n* of elements as the set of tested values [30,51–53].

The number of possible distinct bootstrap samples is at most $n^n$. Empirical research has shown that a sufficient number of draws for conducting tests is *B* = 1000 [54]. After obtaining the set number of bootstrap samples, inference proceeds by calculating the appropriate statistic φ for each of them. The empirical distribution obtained in this way is used to make inferences about the parameter θ [48,55].
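The resampling loop described above can be sketched as follows; the function name, the fixed seed, and the choice of the mean as the statistic φ are illustrative assumptions, not prescribed by the text:

```python
import random

def bootstrap_distribution(data, statistic, B=1000, seed=42):
    """Draw B bootstrap samples with replacement, each of size n = len(data),
    and return the statistic phi evaluated on each sample."""
    rng = random.Random(seed)  # fixed seed for reproducibility of the sketch
    n = len(data)
    return [statistic([rng.choice(data) for _ in range(n)]) for _ in range(B)]

# Illustrative measurements; phi is taken to be the arithmetic mean
data = [10.02, 9.98, 10.01, 9.97, 10.03]
means = bootstrap_distribution(data, statistic=lambda xs: sum(xs) / len(xs))
```

The list `means` is the empirical distribution of φ from which inferences about the parameter θ can be made.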

The main advantage of this approach is that the distribution of the process variable is not assumed but constructed empirically from a large number of samples of the process variable values [56]. A further advantage is the ability to assess process capability for non-normal distributions characterized by high skewness, flatness, drift, etc.

To sum up, the following analogy is crucial for the use of the bootstrap method in statistical inference: the bootstrap sample is to the drawn sample what the drawn sample is to the entire population.
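Combining the two ideas, a bootstrap assessment of process capability can be sketched: resample the measurements, recompute *CP* for each bootstrap sample, and inspect the empirical spread. Using the sample standard deviation (n − 1 divisor) as a stand-in for σ, and the specific limits and percentile interval below, are illustrative assumptions, not a procedure stated in the text:

```python
import math
import random

def bootstrap_cp(data, usl, lsl, B=1000, seed=1):
    """Empirical distribution of the C_P indicator over B bootstrap samples.
    The sample standard deviation (n - 1 divisor) stands in for sigma;
    degenerate samples with zero dispersion are skipped."""
    rng = random.Random(seed)
    n = len(data)
    cps = []
    for _ in range(B):
        sample = [rng.choice(data) for _ in range(n)]
        m = sum(sample) / n
        s = math.sqrt(sum((x - m) ** 2 for x in sample) / (n - 1))
        if s > 0:  # skip all-identical draws, which would give s = 0
            cps.append((usl - lsl) / (6.0 * s))
    return sorted(cps)

# Illustrative measurements and tolerance limits
cps = bootstrap_cp([10.02, 9.98, 10.01, 9.97, 10.03, 10.00], usl=10.3, lsl=9.7)
# Rough 95% percentile interval of the empirical C_P distribution
low, high = cps[int(0.025 * len(cps))], cps[int(0.975 * len(cps))]
```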
