**Enhanced Positioning Algorithm Using a Single Image in an LCD-Camera System by Mesh Elements' Recalculation and Angle Error Orientation**

**Óscar de Francisco Ortiz 1,\*, Manuel Estrems Amestoy 2, Horacio T. Sánchez Reinoso <sup>2</sup> and Julio Carrero-Blanco Martínez-Hombre <sup>2</sup>**


Received: 31 October 2019; Accepted: 11 December 2019; Published: 16 December 2019

**Abstract:** In this article, we present a method to position the tool in a micromachine system based on a camera-LCD screen positioning system that also provides information about angular deviations of the tool axis during operation. Both position and angular deviations are obtained by reducing a matrix of LEDs in the image to a single rectangle in conical perspective that is treated by a photogrammetry method. This method computes the coordinates and orientation of the camera with respect to the fixed screen coordinate system. The image used consists of 5 × 5 lit LEDs, which are analyzed by the algorithm to determine a rectangle with known dimensions. The coordinates of the vertices of the rectangle in space are obtained by an inverse perspective computation from the image. The method presents a good approximation of the central point of the rectangle and provides the inclination of the workpiece with respect to the LCD screen reference system of coordinates. A test of the method was designed with the assistance of a Coordinate Measurement Machine (CMM) to check the accuracy of the positioning method. The test shows good accuracy in the position measurement of the proposed method. A high dispersion in the angular deviation is detected, although the orientation of the inclination is appropriate in almost every case. This is due to the small values of the angles, which make the trigonometric function approximations very erratic. This method is a good starting point for the compensation of angular deviation in vision-based micromachine tools, which is the principal source of errors in these operations and accounts for a major share of the cost of machine elements.

**Keywords:** image processing; position control; accuracy; micromachines; position compensation; inverse conical perspective; micromanufacturing; manufacturing systems; mechatronics

#### **1. Introduction**

Positioning systems are increasingly present in all industrial processes. Furthermore, technology requires progressively more precise systems capable of positioning rapidly and robustly. Cost is one of the key factors in integrating high-precision systems.

Thanks to the advances in screen and camera technology, positioning algorithms that analyze a pattern shown in a photographic image have been developed [1,2]. More recently, camera-screen positioning systems with dedicated artificial vision algorithms [3–5] have provided high precision at a very interesting cost compared to other positioning technologies such as encoders or resolvers.

Vision positioning systems are increasingly common in process automation [6–10], autonomous driving [11–15], and augmented reality assistants [16–20]. Indeed, this is one of the most promising elements of the Industry 4.0 revolution. However, the current positioning systems used in the machine tool industry, based on high-precision encoders and sensors, are limited by their cost. Therefore, machine tools used for micro-manufacturing have very high prices and require large floor space. Because of this, vision-based methods can be competitive in micro-manufacturing by incorporating high-performance commercial elements, such as cameras and mobile-phone LCD screens, that also reduce the required space. In addition, such devices are continually increasing in definition and resolution, giving vision systems much better accuracy.

The methodology used in this article to calculate the position and orientation of the camera in relation to the screen is based on pose determination [21–24], which is used to estimate the position and orientation of one calibrated camera. Several similar methods for calculating the position and orientation of a camera in space using a single image have been described and presented [22,25,26]. Nevertheless, pose estimation and marker detection are widely used tasks for many other technological applications such as autonomous robots [27–29], unmanned vehicles [30–37], and virtual assistants [38–41], among others.

Consequently, this article presents an enhanced method of recalculating the center of the image used by the positioning algorithm in an LCD-camera system, similar to that developed by de Francisco [4] and improved in subsequent studies [42], but completely different from those previous studies in the procedure used to calculate the position of the part with respect to the reference system of the screen. In previous works, the position was obtained through the global center of gravity of the 25 selected LEDs in the image. In this work, the position of the piece is calculated by first determining an equivalent square obtained by means of regressions of the different lines that form the grid of the 25 LEDs.

In addition, this manuscript also presents the calculation and correction of the orientation angle, which, although very small, always influences the positioning precision because of the large distance between the location of the cutter and the screen. The new method is based on the calculation of an equivalent quadrangle that provides not only the position of the center of the image, but also its inclination. The method processes an image to obtain the pixel coordinates of a 5 × 5 dot matrix, which serve to locate the focus and orientation of the camera; the resulting error depends on the distance between the focus and the screen and can be treated as a sine error.

#### **2. Materials and Methods**

#### *2.1. Experimental Setup and Measurements*

The experimental study was applied to a two-dimensional control system (*X* and *Y*). Figure 1 shows the model of the Micromachine Tool (MMT) demonstrator developed for this research. Two stepper motors (ST28, 12 V, 280 mA) controlled and moved two precision guides (IKO BSR2080, 50 mm stroke), which were connected to an M3 ball screw/nut. The LCD screen used provided a 1136 × 640 pixel resolution, 326 ppi, and 0.078 mm dot pitch. The screen size was 88.5 × 49.9 mm. Both stepper motors were controlled by the digital output signals provided by an NI 6001-USB data acquisition card connected to the USB port of a laptop computer. The output signal of the acquisition card was treated by a pre-amplification power stage composed of two L293 H-bridges. The control was programmed in LabVIEW. It received the image captured by the camera and processed it with an image enhancement procedure consisting of an image mask application with color plane extraction, fuzzy pixel removal, small object removal, and particle analysis of the mass center of each evaluated region. Once the image was processed by the developed artificial vision algorithm, it provided the positioning feedback signals needed to move the *X* and *Y* axes.

**Figure 1.** Model of the micromachine tool demonstrator used during the experimental test.

The images were taken by the camera included in the MMT, a Model MITSAI 1.3M digital camera with a resolution of 1280 × 1024 pixels (1.3 MPixels). To analyze the position, a Coordinate Measuring Machine (CMM) Pioneer DEA 03.10.06 with measuring strokes 600 × 1000 × 600 mm was used (Figure 2). The maximum permissible error of the DEA in the measurements was 2.8 + 4.0 L/1000 μm. The software used for the measurements was PC-DMIS.

**Figure 2.** Setup used during the experimental test for the measurement with the Micromachine Tool (MMT) and the Coordinate Measuring Machine (CMM).

Several tests were performed over a 2 × 2 gap pattern using the camera-LCD algorithm. The test consisted of a 5 mm movement along the *X* axis in 10 steps of 0.5 mm. Each travel was repeated 3 times in both the forward and backward directions, according to the VDI/DGQ 3441 standard: Statistical Testing of the Operational and Positional Accuracy of Machine Tools - Basis.

#### *2.2. Image Acquisition*

Image acquisition was done using a procedure developed by the authors in VBA, similar to that performed by software such as ImageJ in its "Analyze Particles ..." tool.

The image may not be perfectly focused, and many webcams have autofocus mechanisms that make the focal length variable. In our case, this was unimportant because what mattered was each blob of pixels and its center of gravity. It should also be noted that if the extraction had been done over the complete image, the image would usually contain the spherical errors of the lenses that focus the image onto the sensor.

In our case, to speed up the process and calculations, only the central area of the BMP image file that included all 25 LEDs was extracted. Only the red layer was analyzed, because it proved to be the most efficient and was the only one used to generate the image. Given the size of the LEDs, an image size of 600 × 600 pixels was sufficient to ensure the presence of the 25 LEDs in the image.
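As an illustration of this kind of particle analysis (a sketch in Python/SciPy, not the authors' VBA routine), the following function crops the central region, keeps the red plane, removes small objects, and returns the center of gravity and mass of each remaining particle; the crop size, threshold, and minimum particle size are assumed, illustrative values.

```python
import numpy as np
from scipy import ndimage

def led_centroids(frame_rgb, crop=600, threshold=128, min_pixels=20):
    """Return (x, y, mass) for each bright particle in the central crop."""
    h, w, _ = frame_rgb.shape
    y0, x0 = (h - crop) // 2, (w - crop) // 2
    red = frame_rgb[y0:y0 + crop, x0:x0 + crop, 0]   # red color plane only
    mask = red > threshold                           # binarize the image
    labels, n = ndimage.label(mask)                  # connected particles
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    results = []
    for lab in range(1, n + 1):
        if sizes[lab - 1] < min_pixels:              # small object removal
            continue
        cy, cx = ndimage.center_of_mass(mask, labels, lab)
        results.append((cx + x0, cy + y0, int(sizes[lab - 1])))
    return results
```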

#### **3. Obtaining the Equivalent Quadrilateral**

Once the 25 coordinates of the centers of the LEDs were obtained, as seen in Figure 3, these data had to be statistically treated to obtain four vertices of a quadrilateral that collected information about the coordinates of the 25 points. With this quadrilateral and knowing the real side dimensions given by the size of the pixels, the position and orientation of the camera with respect to this square were obtained.

**Figure 3.** The 5 × 5 mesh captured by the camera with numbered elements.

#### *3.1. Regression of Lines*

From the analysis of the 5 × 5 grid, the different horizontal lines could be segregated by rearranging the table of coordinates by the values of *y*, obtaining 5 groups of 5 values corresponding to the horizontal lines. Reordering by the values of *x*, the vertical lines were obtained in the same way. The 5 horizontal lines then had to be condensed into 2 lines, and likewise for the vertical lines, so that the intersections of the four resulting lines gave the 4 vertices of the quadrilateral that represented a square in conical perspective. The two vanishing points were obtained by the intersection of opposite sides.

Figure 4 shows the regression lines, vertical and horizontal, that represented the different groups of points. The slope and intercept terms of the lines followed a trend that could itself be fitted, as shown in Figure 5. These trends allowed the calculation of the slopes of the extreme lines of the rectangle that adequately represented the 25 points, just as the border of a chessboard encodes the dimensions and positions of the interior squares. The correlated lines and the rectangle used to determine the position and inclination of the axis of the camera are represented in Figure 6.

Since the slopes varied very little, the horizontal lines could be written in the form *y* = *mix* + *ni* and the vertical lines in the form *x* = *mjy* + *nj*. The intersection of a horizontal line with a vertical line is given by Equation (1):

$$x = \frac{n\_j + m\_j n\_i}{1 - m\_i m\_j} \tag{1}$$

where subindex *i* corresponds to horizontal lines, while subindex *j* corresponds to vertical lines.
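A direct transcription of Equation (1) follows (a minimal sketch; function and variable names are illustrative):

```python
def intersect(m_i, n_i, m_j, n_j):
    """Intersection of the horizontal line y = m_i*x + n_i
    with the vertical line x = m_j*y + n_j (Equation (1))."""
    x = (n_j + m_j * n_i) / (1.0 - m_i * m_j)
    y = m_i * x + n_i
    return x, y
```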

**Figure 5.** Regression lines (compensation) to optimize the position of the lines.

**Figure 6.** Final distribution of the rectangle used for the position and angle correction.

The two horizontal lines, passing through Points 1 and 5, were obtained as follows:


$$y = m(y\_1)x + (y\_1 - m(y\_1)x\_1) \tag{2}$$

$$y = m(y\_5)x + (y\_5 - m(y\_5)x\_5) \tag{3}$$

A similar method was used for the calculation of the two vertical lines.

The intersections of the two pairs of almost parallel lines provided the 4 extreme vertices *A*, *B*, *C*, and *D* (Figure 6), which were fed into the inverse conical perspective program to obtain the position and orientation of the camera in relation to the fixed coordinate system located and oriented with the square that represented the original grid. The position and orientation of the contact point with respect to the screen reference system were obtained using an improvement of the method developed by Haralick for rectangle reconstruction [22].

#### *3.2. Example of the Calculation of Vertices*

To obtain the straight lines, the slopes of the linear regression lines through the data points in the *x* and *y* arrays were calculated. In addition, the intercept of each line with the *y* axis was calculated from the existing *x* and *y* values. The intercept was based on a best-fit regression line drawn through the known *x* and *y* values, obtained with an internal least squares regression procedure.

The starting point was the table of the centers of each of the zones, sorted by the number of pixels comprising each zone, a quantity referred to as its mass (Table 1).

Next, they were sorted by the *y* coordinates and were classified into groups that corresponded to the horizontal coordinates (Table 2).

As a result, the 5 horizontal lines *y* = *mix* + *ni*, *i* = 1 . . . 5, were obtained (Table 3).

In the same way, we proceeded to obtain the vertical lines *x* = *mjy* + *nj*, *j* = 6 . . . 10 (Table 4).

It is noted in Tables 3 and 4 that all coefficients *m* and *n* followed a tendency that could itself be the object of a regression. This indicated that the lines were not parallel in the image, as they converged to a vanishing point, and that the plane containing all the lines was not perpendicular to the focal line of the camera.

To find the two horizontal lines that represented the 5 lines, we first found the intersection points (*xi*, *yi*) of the central vertical line with the 5 horizontal lines. The slope *mi* was then correlated with the vertical coordinate *yi*, and the two representative lines were taken as the lines with slopes *m*(*y*1) and *m*(*y*5) passing through Points 1 and 5, respectively (Table 5).
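The sketch below illustrates this construction in NumPy under stated assumptions: the 25 centroids are grouped into rows by their sorted *y* values, each row is fitted by least squares, the slopes are regressed against height, and the two representative lines of Equations (2) and (3) are built. The use of the mean *x* coordinate as the central vertical line is an assumption of this sketch, not the authors' implementation.

```python
import numpy as np

def representative_horizontals(points):
    """points: (25, 2) array of LED centroids (x, y).
    Returns [(m1, n1), (m5, n5)] for the two representative horizontal lines."""
    rows = points[np.argsort(points[:, 1])].reshape(5, 5, 2)   # 5 groups by y
    slopes, intercepts = [], []
    for row in rows:
        m, n = np.polyfit(row[:, 0], row[:, 1], 1)             # y = m*x + n per row
        slopes.append(m)
        intercepts.append(n)
    x_c = points[:, 0].mean()                    # assumed central vertical line x = x_c
    y_i = np.array([m * x_c + n for m, n in zip(slopes, intercepts)])
    a, b = np.polyfit(y_i, slopes, 1)            # slope as a linear function of height
    lines = []
    for k in (0, 4):                             # lines through Points 1 and 5
        m_k = a * y_i[k] + b
        n_k = y_i[k] - m_k * x_c                 # passes through (x_c, y_k); cf. Eqs. (2)-(3)
        lines.append((m_k, n_k))
    return lines
```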


**Table 1.** Table of the center of gravity example for Image 1.

**Table 2.** Groups of points corresponding to horizontal lines.


Once the regression of *m* with respect to *yi* had been performed, a function of the slope that varied regularly with height was obtained (*m* = −1.136 × 10<sup>−3</sup> *i* + 9.331 × 10<sup>−3</sup>). As a result, the equations of the horizontal lines that passed through Points 1 and 5 could be calculated.

$$y = -6.108 \times 10^{-3} x + 540.807 \tag{4}$$

$$y = -9.533 \times 10^{-4} x + 83.981 \tag{5}$$

**Table 3.** Coefficients of horizontal lines.


**Table 4.** Coefficients of vertical lines.


**Table 5.** Intersection points of the central vertical line with horizontal lines.


In the same way, we proceeded to obtain the 2 representative vertical lines:

$$x = -6.444 \times 10^{-3} y + 570.765 \tag{6}$$

$$x = -1.277 \times 10^{-3} y + 112.547 \tag{7}$$

The intersection of the opposite lines of this square provided the coordinates of the four vertices that represented the 25 LEDs (Table 6).

**Table 6.** Quadrilateral points to deal with the reverse perspective program.


The vertices represented in Table 6 were treated using the photogrammetry method for the reconstruction of a rectangle described in Estrems [24], and the coordinates of the camera with respect to the square coordinate system were obtained, as well as the direction cosines of the focal line in this system.

$$a = \sqrt{d\_f^2 - \left(d\_f \cdot \cos b\right)^2} = d\_f \sqrt{1 - \cos^2 b} \tag{8}$$

$$a\_2 - a\_1 = d\_f \left(\sqrt{1 - \cos^2 b\_2} - \sqrt{1 - \cos^2 b\_1} \right) \tag{9}$$

In Figure 7, the Abbe error *a* is represented; it is calculated from the focal distance *df* and the direction cosine in the *z* direction, cos *b*. The Abbe error is given by Equation (8), and the step error due to the variation of the angle *b* during the movement is compensated at each point according to Equation (9).
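A minimal sketch of how Equations (8) and (9) could be evaluated (function names and interface are illustrative):

```python
import math

def abbe_error(d_f, cos_b):
    """Equation (8): Abbe error from focal distance d_f and direction cosine cos_b."""
    return d_f * math.sqrt(1.0 - cos_b ** 2)

def step_compensation(d_f, cos_b1, cos_b2):
    """Equation (9): error introduced by the change of inclination between two steps."""
    return abbe_error(d_f, cos_b2) - abbe_error(d_f, cos_b1)
```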

**Figure 7.** Position error in the vision system due to camera inclination for axis direction movement.

#### **4. Experimental Results and Discussion**

The data obtained in the experimental tests are reported in Tables 7–12, where the columns are defined as follows:

- CMM: distance measured by the Coordinate Measuring Machine for the movement performed by the MMT at each step of the test;
- Image: distance moved at each step as measured by the vision system;
- Error image: error of the vision system at each step, obtained by comparison with the distance measured by the CMM;
- Compensation: distance compensated due to the angle error calculated at each step;
- Image compensated: distance measured by the CMM after the calculated compensation is applied;
- Error compensation: error after the compensation is applied;
- Coincidence: coincidence of the orientation calculated for the angle compensation (YES means the orientations coincide; NO means the calculated angle orientation is opposite to the compensation required to minimize the error).

**Table 7.** Data for Run #1 forward with a mesh of 5 × 5 LEDs (values in μm).


**Table 8.** Data for Run #1 backward with a mesh of 5 × 5 LEDs (values in μm).



**Table 9.** Data for Run #2 forward with a mesh of 5 × 5 LEDs (values in μm).

**Table 10.** Data for Run #2 backward with a mesh of 5 × 5 LEDs (values in μm).


**Table 11.** Data for Run #3 forward with a mesh of 5 × 5 LEDs (values in μm).


**Table 12.** Data for Run #3 backward with a mesh of 5 × 5 LEDs (values in μm).


Table 13 summarizes the mean errors (*e*¯) and the standard deviation errors (*σ*) calculated in each run of the experimental tests. The global mean (4.786 μm) and standard deviation (5.698 μm) were also calculated.

**Table 13.** Summary of the errors, in absolute value, provided by the proposed vision positioning algorithm.


As seen in the graphs of Figure 8, the precision of positioning depended strongly on the initial error function, so the variation of the error was less than ±2 μm, except for several discrete points that were measured in a transition between columns of LEDs that were not so homogeneous in the LCD.

One remarkable result was that the orientation of the compensation error coincided with the sign of the error at almost all points. This indicated that there was a variation of the orientation of the focal line with respect to the screen coordinate system, although this could not be applied efficiently to improve the precision of measurement. This was probably due to the difficulty of evaluating trigonometric functions for angles smaller than 10<sup>−3</sup> radians.

Therefore, due to the distance from the vanishing points of the lines, the estimation of the angular error did not turn out to be very precise in its quantification, although it provided qualitative descriptors on the direction and magnitude of the variation in the inclination of the camera.

**Figure 8.** Error due to the vision system before and after angle compensation was applied.

#### **5. Conclusions**

A new method was developed to position the tool in a micromachine system based on a camera-LCD screen positioning system that also provided information on the angular deviations of the tool axis during operation.

The method gave a good approximation of the center point of the rectangle with a mean error of 0.96%, considering not only the vision algorithm, but also the mechanical test device, and provided the inclination of the workpiece with respect to the LCD-screen reference coordinate system.

The equivalent square was calculated by regression of the lines that could be drawn through the centers of gravity of the LEDs. The lack of parallelism between the sides of the square indicated an inclination of the camera axis with respect to the normal to the screen. The variation of this inclination introduced errors in the displacements that were added to the simple displacement of the center of gravity, and its compensation was also calculated in this article.

A test of the method was designed with the assistance of a Coordinate Measurement Machine (CMM) to verify the accuracy of the positioning method. The test performed provided good accuracy in measuring the position of the designed method, but a high dispersion in the angular deviation was detected, although the orientation of the inclination was appropriate in almost all cases (85.7%). This was due to the small value of the angles that made the approximations of the trigonometric functions very erratic. With accurate formulas to approximate trigonometric functions for small angles, the method could help in obtaining more accurate measurements.

**Author Contributions:** Conceptualization, Ó.d.F.O. and M.E.A.; methodology, Ó.d.F.O.; software, Ó.d.F.O. and M.E.A.; validation, Ó.d.F.O., H.T.S.R., and J.C.-B.M.-H.; formal analysis, Ó.d.F.O., H.T.S.R., and J.C.-B.M.-H.; writing, original draft preparation, Ó.d.F.O.; writing, review and editing, M.E.A., H.T.S.R., and J.C.-B.M.-H.; visualization, Ó.d.F.O., M.E.A., H.T.S.R., and J.C.-B.M.-H.

**Funding:** This research was funded by Ingeniería Murciana SL by a private contract.

**Acknowledgments:** The authors want to thank the University Center of Defense at the Spanish Air Force Academy, MDE-UPCT, for financial support.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Abbreviations**

The following abbreviations are used in this manuscript:

LCD  Liquid Crystal Display
MMT  Micromachine Tool
ppi  points per inch
CMM  Coordinate Measuring Machine

#### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

### *Article* **Industrial Calibration Procedure for Confocal Microscopes**

#### **Alberto Mínguez Martínez 1,2,\* and Jesús de Vicente y Oliva 1,2,\***


Received: 10 November 2019; Accepted: 9 December 2019; Published: 10 December 2019

**Abstract:** Coordinate metrology techniques are widely used in industry to carry out dimensional measurements. For applications involving measurements in the submillimeter range, the use of optical, non-contact instruments with suitable traceability is usually advisable. One of the most used instruments to perform measurements of this type is the confocal microscope. In this paper, the authors present a complete calibration procedure for confocal microscopes designed to be implemented preferably in workshops or industrial environments rather than in research and development departments. Therefore, it has been designed to be as simple as possible. The procedure was designed without forgetting any of the key aspects that need to be taken into account and is based on classical reference material standards. These standards can be easily found in industrial dimensional laboratories and easily calibrated in accredited calibration laboratories. The procedure described in this paper can be easily adapted to calibrate other optical instruments (e.g., focus variation microscopes) that perform 3D dimensional measurements in the submillimeter range.

**Keywords:** coordinate metrology; confocal microscopy; measurement; calibration; traceability; uncertainty; quality assessment

#### **1. Introduction**

In industry, coordinate measuring machines (CMMs) are widely used to carry out dimensional measurements, because these kinds of machines are capable of measuring many different types of geometries with great flexibility and sufficient accuracy. For this reason, CMMs can be considered as universal measuring devices [1]. In addition, modern advanced manufacturing processes demand a deep study of surface textures. Probably the most common method for surface texture verification up to recent years was to perform roughness measurements with a roughness measuring machine—usually a 2D stylus instrument [2]. In many cases, manufacturers prefer to verify texture and geometry without mechanical contact between the instrument and surface [3]. Due to this, optical instruments for coordinate metrology have been developed. ISO 25178-6 lists optical methods for measuring surface texture [4], and among them, it includes confocal microscopy, which permits both dimensional and 2D/3D roughness measurements [5] without mechanical contact.

The confocal microscope was developed in 1955 by Minsky [6,7] and allows images of optical sections of samples to be obtained, from which the full 3D object geometry can be reconstructed. Confocal microscopy is also important, because it is a powerful tool for observation and measurement at both scientific research and workshop levels. It presents the following advantages [8]:

• It adds the Z-axis to traditional measuring optical microscopes, which only work in the XY plane.


Confocal microscopy has applications in many fields, both in research and industrial applications. This type of microscope is widely used in biomedical science, material science, and surface quality metrology at micro and macro scales [9]. However, there is no standardized procedure to calibrate and provide traceability to these instruments in the fields of coordinate metrology and surface texture metrology. Intense work is being done around ISO standard 25178-700, but it is still in the draft stage. The calibration of measurement instruments is crucial to maintain the traceability of measurement results [10]. As is widely known, traceability can be defined as the property of a measurement result by which it can be related to a reference through an uninterrupted and documented chain of calibrations, each of which contributes to measurement uncertainty [11].

The purpose of this paper is to describe a way to provide suitable traceability to a confocal microscope when performing metrological activities using single topography measurements, in the fields of both coordinate metrology and roughness metrology. Please note that when image stitching is not used (single topography), there is no movement of the XY stage, and there is therefore no need to calibrate the displacements of this stage. The calibration procedure presented by the authors is intended to be simple and is based on classical mechanical standards. Note that the objective was not to perform a state-of-the-art calibration of a confocal microscope [12–15], neither was it to achieve very low uncertainties; the objective was to ensure adequate traceability with adequate uncertainty estimation in the field of dimensional metrology in the submillimeter range.

Prior to carrying out the calibration, an analysis of the operating principle of the confocal microscope is necessary for a better understanding of the device. This kind of microscope usually uses a low-power, high-intensity, monochromatic laser system for illumination [16–19]. A laser beam passes through a beam splitter, and one of the beams is then redirected to the sample, passing through complex optics [7]. Once the scanning surface is illuminated, the reflected beam travels back along the same path. If the illumination is properly focused on the surface, the reflected beam will go to the detector without losing intensity, but if the surface is out of focus, the intensity will be lower. The filtered beam arrives at the detector and a computer system processes the signal, making a 3D reconstruction of the surface [9,16,18].

Several factors affect the quality of these measurements [5,20]:


As can be seen in Figure 1, the confocal microscope projects, through a complex optical system, illumination patterns over the surface that is being explored and captures the returned beam through the same pattern of illumination. As a result, it is possible to discriminate if the returned beams are out of focus and filter them [5,7,16,17,21]. In Figure 2, this property is shown in detail.

**Figure 1.** Scheme of a confocal microscope [5,18,22].

**Figure 2.** Filtration of out-of-focus signal (red) in confocal microscopy [5,7,16,17,21] in comparison with in-focus signal (green).

Once the in-focus image reaches the detector, the computational treatment starts. The electronic controller moves the objective along the Z-axis in order to permit the confocal microscope to capture 2D images at different *z* coordinates. As 2D images are composed of pixels, 3D images obtained with confocal microscopes are composed of voxels, as shown in Figure 3. If an interpolation is made between consecutive 2D images, it is possible to create a 3D model of the scanned surface [23].

**Figure 3.** Transformation from pixel to voxel.

In order to achieve dimensional traceability in dimensional measurements carried out with confocal microscopes, it is necessary to perform a 3D calibration of the instrument. This calibration should provide estimations of voxel sizes along X, Y, and Z axes, but it would also be advisable to provide estimations of perpendicularity errors between axes. Additionally, as the confocal microscope can be used to perform roughness measurements, a specific calibration of the instrument for roughness measurements is advisable.

#### **2. Materials and Methods**

In order to ease the understanding of the calibration procedures described later on, a calibration example was carried out with the following confocal microscope and software:


In this paper, we propose a calibration procedure that is only valid for single topography measurements—that is, without using image stitching. In single topography measurements, the XY stage is not moved during measurement, and its errors do not contribute to uncertainty. When using image stitching (extended topography measurements), the XY stage does contribute to uncertainty, and the calibration procedure described in this paper should be updated using techniques such as those described in [14,15]. The complete calibration procedure includes the following:


All reference measurement standards used were chosen to be:


• Common in the field of dimensional metrology in order to facilitate their acquisition, calibration, and correct use.

#### *2.1. Flatness Verification*

Before calibrating the X and Y axes, a flatness verification must be performed. This is necessary because it is often observed that the XY plane of confocal microscopes is slightly curved. This is evident when exploring a flat surface such as an optical flat whose total flatness error is usually lower than 50 nm. In these cases, the reference flat surface when observed by the confocal microscope appears curved, as if it was a cap of a sphere or an ellipsoid. According to manufacturers, this error is usually small enough, but it is impossible to carry out an accurate measurement without taking this component of uncertainty into account [27].

For this verification, the authors propose following a procedure based on [28], but using a confocal microscope instead of an interferometer. The software of the confocal microscope provides a topographic map of the explored surface from which the total flatness error (peak to peak) or the RMS (root mean square error) flatness can be estimated.

The calibration is done in two positions (0◦ and 90◦) and therefore, two measurements are obtained (Figure 4):

**Figure 4.** (**a**) Positions of the optical flat during calibration; (**b**) optical flat.

For this calibration, the authors recommend using the RMS flatness deviation, because it is more statistically stable than the total flatness deviation. As discussed in [29] (Section 1.3.5.6, "Measures of Scale"), the total flatness deviation, which is equivalent to the range, is very sensitive to the presence of outliers because it is determined as the difference between the two most extreme points. The problem increases with the number *N* of points from which the range is determined. In our case, the number of points was very large (*N* = 768 × 576 = 442,368). For these reasons, other parameters such as the standard deviation (equivalent to the root mean square error), or even better, the median absolute deviation (MAD), should be used. Since confocal software always includes the RMS flatness deviation and it is not easy to add the MAD, we recommend the RMS flatness deviation.
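As an illustration of these statistics (a sketch, not the confocal software's own computation), the following function removes a least-squares plane from a topography z-map and evaluates the total (peak-to-peak), RMS, and MAD flatness deviations:

```python
import numpy as np

def flatness_stats(z):
    """z: 2D array of heights (e.g., in micrometres). Returns (total, rms, mad)."""
    ny, nx = z.shape
    x, y = np.meshgrid(np.arange(nx), np.arange(ny))
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(x.size)])
    coeffs, *_ = np.linalg.lstsq(A, z.ravel(), rcond=None)   # best-fit plane
    r = z.ravel() - A @ coeffs                               # residual heights
    total = r.max() - r.min()                  # total (peak-to-peak) flatness deviation
    rms = np.sqrt(np.mean(r ** 2))             # RMS flatness deviation
    mad = np.median(np.abs(r - np.median(r)))  # median absolute deviation
    return total, rms, mad
```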

#### *2.2. XY Plane Calibration*

In the literature, it is possible to find several procedures for this calibration. Following the studies of de Vicente et al. [30] and Guarneros et al. [31], it is possible to calibrate X and Y scales and estimate their perpendicularity error by making measurements of a stage micrometer in four positions (Figure 5).

A stage micrometer is easy to calibrate in a National Measurement Institute (NMI) or in an accredited calibration laboratory (ACL) with sufficiently small uncertainty (equal to or lower than 1 μm) for the calibration of a confocal microscope.

**Figure 5.** (**a**) The different positions of scanning for the stage micrometer; (**b**) the stage micrometer used during calibration.

It is strongly recommended that the stage micrometer should be metallic and have the marks engraved, not painted, such as those used to calibrate metallographic microscopes. Marks painted over glass are difficult to detect with a confocal instrument.

The matrix model proposed for calibration by de Vicente et al. [32] is as follows:

$$
\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} p \\ q \end{bmatrix} + \begin{bmatrix} c\_{xy} + a & \theta/2 \\ \theta/2 & c\_{xy} - a \end{bmatrix} \begin{bmatrix} p \\ q \end{bmatrix} \tag{1}
$$

where (*p*, *q*) are the readings directly provided by the confocal microscope for the Cartesian coordinates in the XY plane. (*x*, *y*) are the corrected Cartesian coordinates once the calibration parameters *cxy*, *a*, and θ have been applied using the previous matrix model.

The meanings of these three parameters are as follows:

*cxy* represents the deviation of actual pixel width *wxy* from the nominal pixel width *wxy*,*nom*:

$$w\_{xy} = w\_{xy,nom} \cdot \left(1 + c\_{xy}\right). \tag{2}$$

*a* represents the difference between pixel widths along X-axis (*wx*) and Y-axis (*wy*):

$$w\_x = w\_{xy,nom} \cdot \left(1 + c\_{xy} + a\right) \tag{3}$$

$$w\_{\mathcal{Y}} = w\_{xy,nom} \cdot \left(1 + c\_{xy} - a\right)\_{\prime} \tag{4}$$

$$w\_{xy} = \frac{(w\_x + w\_y)}{2}.\tag{5}$$

θ represents the perpendicularity error between the X-axis and Y-axis. The actual angle between these axes is π/2 − θ.

The amplification coefficients α*x*, α*y*, and α*<sup>z</sup>* of the axes (according to ISO 25178-70 [33]) are:

$$\alpha\_x = 1 + c\_{xy} + a, \tag{6}$$

$$\alpha\_y = 1 + c\_{xy} - a, \tag{7}$$

$$\alpha\_z = 1 + c\_z. \tag{8}$$

We recommend using the average pitch ℓ̄ of the stage micrometer, where ℓ̄ is the average of all individual pitches (distances between two consecutive marks) observed in the images provided by the confocal microscope. Figure 6 summarizes the measurement of the stage micrometer in one position. Using special software for this task, written in Matlab® R2019a and developed at the Laboratorio de Metrología y Metrotecnia (LMM), it is possible to automatically detect each mark and estimate its distance *di* from the zero mark. Using this software, all the distances (pitches) between two consecutive marks of the stage micrometer were estimated (Figure 6a). Moreover, pitches can be measured in different positions (in the middle and in higher and lower positions), which permits the estimation of the repeatability of the pitch measurements. In Figure 6a, for each pitch, the average value is represented by a circle, and the measurement variability around this value is represented by a vertical line. In order to estimate the non-linearity errors *ei* (Figure 6b), a straight line *di* ≈ *m* + ℓ̄·*i* was fitted, and the errors were estimated as *ei* = *di* − (*m* + ℓ̄·*i*). The coefficient *m* represents the deviation of the zero mark from its estimated position, and ℓ̄ represents the average pitch.
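A minimal sketch of this pitch fit, assuming the distances *di* from the zero mark are already available as an array (this is not the LMM Matlab software):

```python
import numpy as np

def pitch_fit(d):
    """d: distances of marks i = 0..N-1 from the zero mark.
    Returns (average pitch, zero-mark offset m, non-linearity errors e_i)."""
    i = np.arange(len(d))
    pitch, m = np.polyfit(i, d, 1)     # straight-line fit d_i ~ m + pitch * i
    e = d - (m + pitch * i)            # non-linearity errors e_i
    return pitch, m, e
```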

**Figure 6.** Measurement of stage micrometer in position 0◦: (**a**) pitch measurements results in μm; (**b**) non-linear errors in μm.

If the previously mentioned special software is not available, measurements of distances between marks must be done by hand. However, although it is more laborious, the entire procedure described above for the estimation of the average pitch ℓ̄ and the non-linearity errors *ei* can be carried out without any problem.

Let ℓ<sub>0</sub> be the average pitch of the stage micrometer certified by a suitable laboratory with a standard uncertainty *u*(ℓ<sub>0</sub>). ℓ<sub>1</sub>, ℓ<sub>2</sub>, ℓ<sub>3</sub>, and ℓ<sub>4</sub> are the average pitches measured with the confocal microscope in positions 0◦, 90◦, 45◦, and 135◦, respectively. Their corresponding standard uncertainties are *u*(ℓ<sub>1</sub>), *u*(ℓ<sub>2</sub>), *u*(ℓ<sub>3</sub>), and *u*(ℓ<sub>4</sub>), where only the variability observed in Figure 6 (or equivalent figures) was taken into account.

When the matrix model is applied to positions 0◦, 90◦, 45◦, and 135◦, we obtain the following expressions that permit simple estimations of calibration parameters *cxy*, *a*, and θ:

$$\text{Position } 0^{\circ}: \; \ell\_1 \cdot \left(1 + c\_{xy} + a\right) \cong \ell\_0, \tag{9}$$

$$\text{Position } 90^{\circ}: \; \ell\_2 \cdot \left(1 + c\_{xy} - a\right) \cong \ell\_0, \tag{10}$$

$$\text{Position } 45^{\circ}: \; \ell\_3 \cdot \left(1 + c\_{xy} + \frac{\theta}{2}\right) \cong \ell\_0, \tag{11}$$

$$\text{Position } 135^{\circ}: \; \ell\_4 \cdot \left(1 + c\_{xy} - \frac{\theta}{2}\right) \cong \ell\_0. \tag{12}$$

From these expressions, it is easy to conclude that possible estimations of *cxy*, *a*, and θ are:

$$c\_{xy} = \frac{\ell\_0}{4} \cdot \left(\frac{1}{\ell\_1} + \frac{1}{\ell\_2} + \frac{1}{\ell\_3} + \frac{1}{\ell\_4}\right) - 1,\tag{13}$$

$$\text{with } u(\mathbf{c}\_{\text{xy}}) = \frac{\sqrt{u^2(\ell\_0) + [u^2(\ell\_1) + u^2(\ell\_2) + u^2(\ell\_3) + u^2(\ell\_4)]/16}}{\ell\_0},\tag{14}$$

$$a = \frac{\ell\_0}{2} \cdot \left(\frac{1}{\ell\_1} - \frac{1}{\ell\_2}\right). \tag{15}$$

$$\text{with } u(a) = \frac{\sqrt{u^2(\ell\_1) + u^2(\ell\_2)}}{2\ell\_0}. \tag{16}$$

$$
\theta = \ell\_0 \left( \frac{1}{\ell\_3} - \frac{1}{\ell\_4} \right) . \tag{17}
$$

$$\text{with } u(\theta) = \frac{\sqrt{u^2(\ell\_3) + u^2(\ell\_4)}}{\ell\_0}. \tag{18}$$

Correlations between these parameters (*cxy*, *a*, and θ) are usually very small (lower than 0.01). Therefore, these correlations can be neglected.
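For reference, a direct transcription of Equations (13)–(18) into code might look as follows (the function name and argument order are illustrative):

```python
import math

def xy_calibration(l0, l1, l2, l3, l4, u0, u1, u2, u3, u4):
    """Equations (13)-(18): c_xy, a, theta and their standard uncertainties
    from the certified pitch l0 and the pitches l1..l4 measured at 0/90/45/135 deg."""
    c_xy = l0 / 4.0 * (1/l1 + 1/l2 + 1/l3 + 1/l4) - 1.0
    u_cxy = math.sqrt(u0**2 + (u1**2 + u2**2 + u3**2 + u4**2) / 16.0) / l0
    a = l0 / 2.0 * (1/l1 - 1/l2)
    u_a = math.sqrt(u1**2 + u2**2) / (2.0 * l0)
    theta = l0 * (1/l3 - 1/l4)
    u_theta = math.sqrt(u3**2 + u4**2) / l0
    return (c_xy, u_cxy), (a, u_a), (theta, u_theta)
```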

#### *2.3. Z-Axis Calibration*

Document [34] proposes calibrating the Z-axis using a step gauge built with gauge blocks over an optical flat (Figure 7). However, the short field of view of confocal microscopes makes it difficult to carry out the calibration with this type of measurement standard.

To solve this problem, several authors [5,35] have proposed the use of step height standards (Figure 8). Wang et al. [35] used them with the nominal values 24, 7, 2, and 0.7 μm. This kind of measurement standard has several grooves whose nominal depths cover the range of use of the confocal microscope on the Z-axis. Following their procedures, every groove has to be measured 10 times, changing the position of the standard on the objective.

**Figure 7.** A step gauge built on an optical flat (with total flatness error 0.00005 mm) and a set of gauge blocks.

**Figure 8.** This figure shows (**a**) a typical model of a step height standard; and (**b**) different models of step height standards' grooves (ISO 5436-1 types A and B) [30,35–37].

This kind of standard is typically used for roughness calibration. If the purpose is to perform a calibration of the Z-axis, these standards are limited by their groove depth, which is usually too small to cover the range of the Z-axis.

In order to solve this problem, we propose using a small metallic sphere, as shown in Figure 9, with a nominal diameter between 1 and 10 mm, similar to the one used in [38]. This kind of measurement standard is easy to find and easy to calibrate in both NMIs and ACLs with uncertainties equal to or lower than 0.5 μm. The software of confocal microscopes usually permits a spherical surface to be fit to the points detected over the surface of the spherical measurement standard. If not, coordinates (*p*, *q*,*r*) obtained with the confocal microscope can be exported to a text file and processed with a routine similar to that described in Appendix A. Therefore, it is possible to compare the certified diameter *D*<sup>0</sup> of the sphere against the diameter *Dm* of the spherical surface fitted by the confocal microscope. We propose the use of an extended matrix model to take into account the calibration of the Z-axis:

$$
\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 1 + c\_{xy} + a & \theta/2 & 0 \\ \theta/2 & 1 + c\_{xy} - a & 0 \\ 0 & 0 & 1 + c\_z \end{bmatrix} \begin{bmatrix} p \\ q \\ r \end{bmatrix} \tag{19}
$$

where *p*, *q*, and *r* are readings provided by the confocal microscope for the Cartesian coordinates *x*, *y*, and *z*. The calibration parameters are those described in Section 2.2 (*cxy*, *a*, θ), and the new parameter *cz* is introduced to permit the calibration in the Z-axis. The corrected *z* coordinate is:

$$z = (1 + c\_z) \cdot r.\tag{20}$$

This simple matrix model assumes that there is no (or negligible) perpendicularity error between the Z-axis and the XY plane. This hypothesis is very close to reality when the Z-axis range is clearly lower than the ranges of the X and Y axes. When the Z-axis range is equal to or greater than the X and Y ranges, a more complex model must be used (zero terms in the matrix of the model are no longer zero; see, for example, [39]). It is easy to demonstrate that, using the matrix model, the corrected diameter *D* of the spherical surface fitted by the confocal microscope software is:

$$D = D\_m \cdot \frac{1 + 2c\_{xy}}{1 + c\_z} \tag{21}$$

where *Dm* is the diameter provided by the confocal microscope prior to applying any calibration parameter. Therefore, an estimation of *cz* is

$$c\_z = \frac{D\_m}{D\_0} \cdot \left(1 + 2c\_{xy}\right) - 1\tag{22}$$

where *D*<sup>0</sup> is the certified diameter of the sphere by the ACL. The standard uncertainty of *cz* is

$$u(c\_z) = \sqrt{\frac{u^2(D\_0) + u^2(D\_m)}{D\_0^2} + 4u^2(c\_{xy})}.\tag{23}$$

Equation (22) for *cz* shows a clear positive dependency on *cxy*. Therefore, the correlation coefficient *r*(*cz*, *cxy*) should be estimated, and it can be done using the following expression:

$$r(c\_{z}, c\_{xy}) = 2 \cdot \frac{u(c\_{xy})}{u(c\_z)}.\tag{24}$$

Note that the correlation coefficient is denoted as *r*(*cz*, *cxy*). Do not confuse it with the reading of the confocal microscope for the Z-axis, which is denoted as *r*.
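A short transcription of Equations (22)–(24), with an illustrative interface, is given below:

```python
import math

def z_calibration(D0, Dm, u_D0, u_Dm, c_xy, u_cxy):
    """Z-axis scale factor from the certified (D0) and measured (Dm) sphere diameters."""
    c_z = Dm / D0 * (1.0 + 2.0 * c_xy) - 1.0                        # Equation (22)
    u_cz = math.sqrt((u_D0**2 + u_Dm**2) / D0**2 + 4.0 * u_cxy**2)  # Equation (23)
    r_cz_cxy = 2.0 * u_cxy / u_cz                                   # Equation (24)
    return c_z, u_cz, r_cz_cxy
```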

**Figure 9.** Steel sphere used in calibration.

#### *2.4. Calibration for Roughness Measurements*

The calibration of the Z-axis against the reference sphere (previous section) guarantees the traceability of the vertical measurements performed with the confocal microscope to the SI unit of length (the meter). Therefore, any vertical roughness parameter will have an adequate traceability once the instrument has been calibrated along its Z-axis. Notwithstanding, we followed the recommendation included in documents DKD-R 4-2 [40–42], which propose performing an additional calibration against roughness standards to validate the Z-axis calibration for roughness measurements.

Many parameters are used to characterize surface texture. Among the 2D roughness parameters, one of the most widely used is the *Ra* parameter, which is the arithmetic mean of the absolute values of the profile deviations from the mean line of the roughness profile [43]. We only consider the *Ra* parameter during calibration, but readers interested in other 2D roughness vertical parameters (*Rq*, *Rp*, *Rv*, *Rz*, ... ) can use the same calibration procedure described in this paper but with minor variations. Calibration was performed in the range 0.1 < *Ra* ≤ 2 μm. For this range, according to ISO 4288 [44], the sampling length should be *lr* = 0.8 mm, which is possible to carry out with a field of view of 1270 μm × 952 μm. For *Ra* > 2 μm, the sampling length should be *lr* = 2.5 mm or higher, and it is impossible to achieve this with a field of view of 1270 μm × 952 μm (10× objective). It makes no sense to measure roughness lower than *Ra* = 0.1 μm with an instrument with repeatability in the Z-axis of around 0.5 μm. Therefore, calibration for *Ra* < 0.1 μm and *Ra* > 2 μm was discarded.

Figure 10a shows three metallic, aperiodic 2D roughness standards. Figure 10b shows three glass, periodic 2D roughness standards. Other types of 2D roughness profile types are described in Section 7 of ISO 25178-70 [33]. We recommend the use of aperiodic standards because they cover a wide range of wavelengths, in contrast to periodic standards that only cover a single wavelength. However, periodic standards were used in this case in order to complete the range of measurements between 0.1 μm and 2 μm for different calibration points and for different materials (glass instead of metallic items).

**Figure 10.** Step height standards used during calibration: (**a**) aperiodic, metallic standards and (**b**) periodic, glass standards.

These standards were measured over five different zones in two different orientations (Figure 11a,b). In each zone, the measurement was carried out along a line perpendicular to the roughness lines (Figure 11c) and located at the center of the zone. Therefore, a total of 2 × 5 = 10 roughness measurements were obtained from each standard.

**Figure 11.** Location of the five scanning positions for roughness calibration: (**a**) horizontal orientation; (**b**) vertical orientation; and (**c**) location of measurement lines.


We recommend using at least three different roughness standards with nominal values of *Ra* uniformly distributed along the range where the instrument must be calibrated. However, it is advisable to use five or more standards and, if possible, standards made of different materials (e.g., metallic and glass).

It is important to note that there will be differences between measurements obtained with a confocal microscope and measurements obtained with a stylus instrument [4,45].

The main reasons for this are as follows.


In order to ensure a good match between roughness measurements performed with stylus instruments and optical instruments, the concept of "bandwidth matching" should be correctly applied. This term refers to the good correspondence between the spectral bandwidths of two different instruments used for roughness measurements [45]. The effective spectral bandwidth of a roughness-measuring instrument is limited by the two cut-off wavelengths, λ*<sup>S</sup>* (a high-pass filter), and λ*<sup>C</sup>* (a low-pass filter), and it is influenced by the X-axis resolution and tip radius in stylus instruments or lateral resolution and pixel size in optical instruments.

#### *2.5. Summary of Characteristics of Measurement Standards Used during Calibration*

In this section, the nominal values and the uncertainties of the different reference measurement standards used during calibration are summarized. All of them were calibrated in ACLs.

We include the calibration of the confocal microscope against roughness standard #6 only for informative purposes. Its measurements were made using a sampling length of *lr* = 0.8 mm because of the reduced field of view of the instrument (with an 10× objective). However, ISO 4288 [44] recommends the use of a sampling length *lr* = 2.5 mm, which is a measurement that is impossible to achieve with a 10× objective.

#### **3. Results**

#### *3.1. Flatness Verification*

The following figure shows a topographic image of the optical flat of Figure 4 that was used as a flatness calibration surface. This optical flat was previously calibrated in an accredited laboratory. The total flatness error was 118 nm with a standard uncertainty of 25 nm (*k* = 1), and its RMS flatness was 28 nm with a standard uncertainty of 7 nm (*k* = 1); see Table 1.


**Table 1.** Nominal values and uncertainties of the material reference standards used during calibration. <sup>1</sup> *Sm* is a spacing parameter defined as the mean spacing between peaks. *Sm* values included in this table are only informative. RMS: root mean square error.

Figure 12 shows the absence of significant curvature in the XY plane. A slight uncorrected curvature of about 0.6 μm (peak to peak) was observed, which is small and could be neglected when compared with the Z-axis axial step (2.0 μm) and observed instrument noise (about 1.0 μm peak to peak). This could be an empirical demonstration of a good adjustment and/or correction of the microscope by the manufacturer. In a situation such as this, there is no need to apply any further correction to compensate the curvature of the XY plane.

**Figure 12.** Result of the flatness measurement performed over the optical flat.

Table 2 shows the results of the measurements performed with the confocal microscope (in both positions 0◦ and 90◦).

**Table 2.** RMS flatness measured with the confocal microscope in positions 0◦ and 90◦.


The RMS values of Table 2 are small when compared with the Z-axis axial step of 2.0 μm. Therefore, they were probably caused by the lack of repeatability of the instrument. In any case, the most conservative option is to estimate a component of the uncertainty associated with the possible curvature of the XY plane equal to the average value of both RMS values of Table 2:

$$
u\_{\rm FLT} = 0.54 \,\upmu\text{m} \tag{25}$$

A better estimation for *u*FLT would likely be obtained by quadratically subtracting the RMS flatness of the optical flat (0.028 μm):

$$
u\_{\rm FLT} = \sqrt{(0.54 \,\upmu\text{m})^2 - \left(0.028 \,\upmu\text{m}\right)^2} = 0.539 \,\upmu\text{m} \tag{26}
$$

Regardless, we considered that the first estimation (*u*FLT = 0.54 μm) is slightly more conservative and clearly simpler.

#### *3.2. XY Plane Calibration*

Figure 13 shows the four positions (0◦, 45◦, 90◦, 135◦) in which the stage micrometer was measured in the confocal microscope during the XY plane calibration.

**Figure 13.** Different measurement positions (0◦, 45◦, 90◦, 135◦) of the stage micrometer.

In each position, the average pitch ℓ<sub>i</sub> was determined from readings provided by the confocal microscope. The results are shown in Table 3.


**Table 3.** Measurements of the average pitch ℓ<sub>i</sub> in different orientations.

The average value for repeatability in the XY plane was *sr*(*x*) = *sr*(*y*) = 0.4 μm. This is a reasonable value when compared with the 1.65 μm lateral resolution (nominal voxel width).

The stage micrometer had a certified average pitch ℓ<sub>0</sub> = 9.980 μm with a standard uncertainty *u*(ℓ<sub>0</sub>) = 0.005 μm.


Using the expressions of Section 2.2, we obtained the following estimations for the calibration parameters *cxy*, *a*, and θ:

$$
c\_{xy} = \frac{\ell\_0}{4} \cdot \left( \frac{1}{\ell\_1} + \frac{1}{\ell\_2} + \frac{1}{\ell\_3} + \frac{1}{\ell\_4} \right) - 1 = 0.00883 \tag{27}
$$

$$\text{with } u(c\_{xy}) = \frac{\sqrt{u^2(\ell\_0) + [u^2(\ell\_1) + u^2(\ell\_2) + u^2(\ell\_3) + u^2(\ell\_4)]/16}}{\ell\_0} = 0.00050\tag{28}$$

$$a = \frac{\ell\_0}{2} \cdot \left(\frac{1}{\ell\_1} - \frac{1}{\ell\_2}\right) = -0.000040\tag{29}$$

$$\text{with } u(a) = \frac{\sqrt{u^2(\ell\_1) + u^2(\ell\_2)}}{2\ell\_0} = 0.000036\tag{30}$$

$$
\theta = \ell\_0 \cdot \left(\frac{1}{\ell\_3} - \frac{1}{\ell\_4}\right) = -0.000798 \tag{31}
$$

$$\text{with } u(\theta) = \frac{\sqrt{u^2(\ell\_3) + u^2(\ell\_4)}}{\ell\_0} = 0.000074 \tag{32}$$

All three parameters are dimensionless.

Observing the non-linearity RMS values in Table 3, the overall standard uncertainty estimation for non-linearity in the XY-plane was *u*NL,*xy* = 0.7 μm.

#### *3.3. Z-Axis Calibration*

Figure 14 shows an example of a measurement of the spherical cap of a stainless steel reference sphere with a 4-mm nominal diameter (see Figure 9). It is a three-dimensional reconstruction of the sphere surface.

**Figure 14.** Results of the measurement of a bearing sphere with white light.

Using this information, the confocal microscope software can perform a least-squares fit of a spherical surface, from which we could estimate the diameter of the sphere and the RMS error of the fit. If the confocal software does not permit fitting a spherical surface, a code similar to the one described and listed in Appendix A can be used.
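As a hedged example (this is not the Appendix A listing), a least-squares sphere fit can be obtained by linearizing the sphere equation x² + y² + z² = 2x₀x + 2y₀y + 2z₀z + (R² − x₀² − y₀² − z₀²):

```python
import numpy as np

def fit_sphere(p, q, r):
    """p, q, r: arrays of confocal readings (x, y, z).
    Returns (centre, diameter, RMS radial fit error)."""
    A = np.column_stack([2 * p, 2 * q, 2 * r, np.ones_like(p)])
    b = p**2 + q**2 + r**2
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy, cz, k = sol
    radius = np.sqrt(k + cx**2 + cy**2 + cz**2)
    radial = np.sqrt((p - cx)**2 + (q - cy)**2 + (r - cz)**2)
    rms = np.sqrt(np.mean((radial - radius) ** 2))
    return (cx, cy, cz), 2 * radius, rms
```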

In this calibration, two different types of illumination were used (white and blue light), and measurements were taken in three orientations: 0◦, 45◦, and 90◦. Finally, there were *n* = 6 measurements. Results obtained during the measurement of the bearing steel sphere of Figure 9 are presented in Table 4.


**Table 4.** Root mean square error and diameter *Dm* of the spherical caps fitted using least squares.

The average value *D̄m* of the six diameters *Dm* was *D̄m* = 3.9712 mm, and the standard deviation *s*(*Dm*) was 0.0096 mm. We estimated *u*(*D̄m*) as:

$$
u(\overline{D}\_m) = \frac{s(D\_m)}{\sqrt{n}} = 0.0039 \text{ mm.} \tag{33}
$$

The RMS error is an estimation of the repeatability in the Z-axis, which probably includes the non-linearity in the Z-axis. The mean value for this Z-axis repeatability was *sr*(*z*) = 0.8 μm, which seems to be a reasonable value when compared with the Z-axis axial step of 2 μm.

The certified diameter *D*<sup>0</sup> of the reference sphere was *D*<sup>0</sup> = 4.0011 mm with a standard uncertainty *u*(*D*0) = 0.25 μm.

Using the expression of Section 2.3, the Z-axis calibration parameter *cz* can be estimated as follows:

$$c\_z = \frac{\overline{D}\_m}{D\_0} \cdot \left(1 + 2c\_{xy}\right) - 1 = 0.0101\tag{34}$$

$$\text{with } u(c\_z) = \sqrt{\frac{u^2(D\_0) + u^2(\overline{D}\_m)}{D\_0^2} + 4u^2(c\_{xy})} = 0.0014 \tag{35}$$

The correlation coefficient *r*(*cz*, *cxy*) was

$$r\left( c\_{z}, c\_{xy} \right) = 2 \cdot \frac{u\left( c\_{xy} \right)}{u\left( c\_z \right)} = 0.72 \tag{36}$$

This correlation coefficient is clearly higher than zero, showing a strong positive correlation between *cz* and *cxy* that should be taken into account after calibration when needed. Correlation coefficients *r*(*cz*, *a*) and *r*(*cz*, θ) are usually very small (lower than 0.01); therefore, correlation between *cz* and parameters *a* and θ could be neglected.

#### *3.4. Calibration for Roughness Measurements*

As an example of data acquisition results when measuring a material roughness standard, the following figures show three-dimensional reconstructions of the surface of an aperiodic, metallic roughness standard (Figure 15) and a periodic, glass roughness standard (Figure 16).

**Figure 15.** Measurement of an aperiodic roughness standard with a confocal microscope: 3D view of the measurement results.

**Figure 16.** Measurement of a periodic roughness standard with a confocal microscope: 3D view of the measurement results with roughness lines (**a**) parallel to the X-axis and (**b**) parallel to the Y-axis.

(**a**) (**b**)

Calibration was performed by repeating the measurements of six roughness standards 10 times (five zones, two orientations; see Table 1). The results are summarized in Table 5, showing the average results *R̄* of the 10 repeated measurements and the corresponding standard deviations *s*(*R*). The direct readings *R̄* provided by the confocal microscope were obtained prior to introducing the calibration parameter *cz*, using only one sampling length *lr* = 0.8 mm.

Therefore, these readings should be corrected by applying the following expression to take into account the Z-axis calibration:

$$R\_{\text{corrected}} = \overline{R} \cdot (1 + c\_z). \tag{37}$$

The authors followed the recommendations of ISO 4288 [44], which, for 0.1 < *Ra* ≤ 2 μm, recommend five sampling lengths *lr* = 0.8 mm for a total evaluation length of *ln* = 4 mm. Due to the limitations of the instrument's field of view (see Section 2), only one sampling length *lr* = 0.8 mm could be used. This reduction in the number of sampling lengths from five to one caused slightly lower values for *Ra* and higher variabilities [45].

It can be concluded from Table 5 that a typical value for *s*(*R*) was *s*(*R*) = 0.07 μm, which was the quadratic average of repeatabilities of the first five standards.

Note that corrected values *R*·(1 + *cz*) were always higher than the certified values *R*<sup>0</sup> (compare the results from Table 5 to those from Table 1). It seems that for surface roughness similar to the nominal voxel height (*wz* = 2 μm), readings provided by the confocal microscope presented a positive bias caused by noise observed, for example, when measuring an optical flat (see Section 3.1, Figure 12). The RMS flatness observed when measuring the optical flat (0.54 μm) was slightly higher than the *Ra* values that were observed when measuring roughness standard #1, which is a quasi-flat surface (certified value *Ra* = 0.183 μm) for an instrument with a voxel height of 2 μm. The definition of *Ra* is similar but not equal to the definition of RMS flatness, but most importantly, *Ra* was evaluated after filtering the readings using a low-pass filter (defined through the sampling length *lr*).

**Table 5.** Results obtained when calibrating the confocal microscope described in Section 2 using six roughness standards (Table 1).


<sup>1</sup> Values obtained when measuring roughness standard #6 are included in this table only for informative reasons. Measurements of this standard were made using a sampling length *lr* = 0.8 mm, because of the reduced field of view of the instrument, instead of a sampling length *lr* = 2.5 mm, as recommended by ISO 4288 [44].

We suggest estimating the positive bias at each calibration point using the following expression, where *R*<sup>0</sup> is the *Ra* certified value for the standard used at each calibration point:

$$b = \overline{R} \cdot (1 + c_z) - R_0. \tag{38}$$

Its corresponding standard uncertainty *u*(*b*) is:

$$
u(b) = \sqrt{u^2(R_0) + \overline{R}^2 \cdot u^2(c_z) + \frac{s^2(R)}{n}}. \tag{39}
$$

Using this approach, the calibration results are those values, *bi* and *u*(*bi*), which are presented in the two columns on the right side of Table 5. Index *i* refers to the roughness standard used. These results are represented graphically in Figure 17 in order to analyze their metrological compatibility. Red lines represent values corresponding to metallic, aperiodic standards #1, #2, and #3. Green lines represent values corresponding to standards #4 and #5. The blue line is the result from standard #6 that will not be taken into account. Vertical lines represent uncertainty intervals *bi* ± *U*(*bi*), where the expanded uncertainties *U*(*bi*) = *k*·*u*(*bi*) were evaluated for a coverage factor *k* = 2 (see Section 6.2.1 in [24] for definitions of coverage factor and expanded uncertainty). When analyzing the compatibility between measurement results, it is very common to use a coverage factor of *k* = 2 to estimate the expanded uncertainties. The horizontal black solid line in Figure 17 corresponds to the average value $\overline{b}$ of the first *N* = 5 roughness standards:

$$\overline{b} = \frac{\sum\_{i=1}^{N} b\_i}{N} = \frac{b\_1 + b\_2 + b\_3 + b\_4 + b\_5}{5} = 0.11 \text{ }\mu\text{m} \tag{40}$$

In order to make a correct estimation of the average bias $\overline{b}$, the correlation between the biases *bi* and *bj* at the calibration points should be taken into account for the following reasons:


**Figure 17.** Bias observed at each calibration point (roughness calibration).

There will be a high correlation between the biases *bi*. We performed estimations in different situations, and it is possible to see correlation coefficients *r*(*bi*, *bj*) as high as +0.8.

In order to simplify calculations, we suggest assuming *r*(*bi*, *bj*) = +1, which leads to higher estimations for the uncertainty $u(\overline{b})$ of $\overline{b}$. Then, it can be demonstrated that $u(\overline{b})$ is:

$$u(\overline{b}) = \frac{1}{N} \sum_{i=1}^{N} u(b_i) = \frac{u(b_1) + u(b_2) + u(b_3) + u(b_4) + u(b_5)}{5} = 0.05\ \mu\text{m} \tag{41}$$

In Figure 17, the uncertainty interval $\overline{b} \pm U(\overline{b})$ is represented by the space between the higher and lower black dotted lines. $U(\overline{b}) = k \cdot u(\overline{b})$ is the expanded uncertainty of $\overline{b}$ evaluated with a coverage factor *k* = 2. Please note that all the uncertainty intervals *bi* ± *U*(*bi*) overlap the interval $\overline{b} \pm U(\overline{b})$. Notwithstanding, point *b*1 is outside the interval $\overline{b} \pm U(\overline{b})$. This could indicate that some variability of the bias *b* was not taken into account in $u(\overline{b})$. Therefore, a conservative approach would be to assume that there is a variability represented by δ*b* that should be added to $u(\overline{b})$. Suppose that δ*b* is a uniform random variable of null mean and a full range *b*max − *b*min. Then, its standard uncertainty would be:

$$
u(\delta b) = \frac{b_{\max} - b_{\min}}{\sqrt{12}} = 0.06\ \mu\text{m} \tag{42}
$$
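
A minimal GNU Octave/Matlab sketch of the bias evaluation of Equations (38)–(42) follows. The individual *bi* and *u*(*bi*) values below are hypothetical placeholders (the actual values are those of Table 5), chosen only so that the aggregate results quoted above are reproduced.

```
% Sketch of the bias evaluation, Equations (38)-(42).
% b_i and u(b_i) are hypothetical placeholders, not Table 5 data.
b   = [0.23 0.02 0.10 0.10 0.10];      % bias b_i at each calibration point, um
u_b = [0.05 0.05 0.05 0.05 0.05];      % standard uncertainties u(b_i), um
N   = numel(b);

b_mean   = mean(b)                     % Eq. (40): average bias, ~0.11 um
u_bmean  = sum(u_b)/N                  % Eq. (41): assumes r(b_i,b_j) = +1, ~0.05 um
u_deltab = (max(b) - min(b))/sqrt(12)  % Eq. (42): extra variability, ~0.06 um
```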

In order to estimate the noise of the instrument, according to [40–42], we repeated 10 measurements over an optical flat (that of Figure 4) in two orientations: 0° and 90°. For a confocal microscope, an optical flat is a specimen with null roughness (very small in comparison with its noise). Therefore, values of *Ra* obtained over an optical flat are a very good estimation of the instrument noise. The average value and the standard deviation of the 10 *Ra* values were

$$\overline{R}_a = 0.09\ \mu\text{m} \tag{43}$$

$$s(R_a) = 0.003\ \mu\text{m} \tag{44}$$

Therefore, a good estimation for the uncertainty component associated with instrument noise is

$$
u_{\text{noise}} = \overline{R}_a = 0.09\ \mu\text{m} \tag{45}
$$

#### **4. Discussion**

Table 6 summarizes the results obtained during the confocal microscope calibration (Section 2).


**Table 6.** Results of calibration.

<sup>1</sup> Non-linearity in Z-axis is included in *sr*(*z*).

Note that these results are only valid for measurements made with the same objective (10×). If other objectives are used, a whole recalibration is needed for each new objective.

The effects and their uncertainties were the highest for parameters *cxy* and *cz*. If their effects were not corrected, their contribution to the relative expanded uncertainty would be around 1%.

Fortunately, the software of confocal microscopes usually permits users to introduce their value in order to compensate their effects. If this compensation is done, their contribution to the relative expanded uncertainty is reduced to 0.3%.

The effect of parameter *a* (difference between pixel lengths along X and Y axes) was negligible. Its absolute value was lower than its expanded uncertainty *U*(*a*) = *k*·*u*(*a*) (for *k* = 2); therefore, the null hypothesis *a* = 0 could not be rejected. Its contribution to the relative expanded uncertainty was very low (around 0.01%).

The effect of parameter θ (perpendicularity error between X and Y axes) seemed to be significant (its absolute value was clearly higher than its expanded uncertainty), but its contribution to the relative expanded uncertainty (around 0.1%) was clearly negligible in comparison with *cxy* and *cz*.

The contributions of XY plane RMS flatness (*u*FLT) and the non-linearity in X and Y axes were clearly lower than the voxel dimensions (*wxy* = 1.65 μm and *wz* = 2 μm). Therefore, the instrument adjustment performed by the manufacturer seems to have been good.

Repeatabilities in the XY plane and in the Z-axis, in comparison with the voxel dimensions, were low. Again, this can be used to conclude that the instrument was working well.

In roughness measurements (which only apply when using the *Ra* parameter), the repeatability *s*(*R*), average bias *b*, bias variability *u*(δ*b*), and instrument noise *u*noise were very small in comparison with voxel height *wz* = 2 μm.

#### *4.1. Expanded Uncertainty Estimation for Length Measurements in the XY Plane*

As was pointed out, the instrument software usually permits users to introduce parameters *cxy* and *cz* in order to apply the corresponding corrections. On the contrary, parameters *a* and θ cannot be introduced. Therefore, the effect of the uncorrected, non-null parameters *a* and θ would be taken into account as a systematic effect whose equivalent standard uncertainties would respectively be |*a*|/√3 and |θ|/√3. We supposed that this was equivalent to the introduction of two components, δ*a* and δθ, uniformly distributed along [−*a*, +*a*] and [−θ, +θ], respectively.

For length measurements performed in the XY plane, the following expression could be a good estimate of its expanded uncertainty, where the pixel width component was estimated as *wxy*/√12 (uniformly distributed between ±*wxy*/2):

$$U\left(L_{xy}\right) = k \cdot \sqrt{L_{xy}^2 \cdot \left[ u^2\left(c_{xy}\right) + \frac{a^2}{3} + u^2(a) + \frac{1}{2}\left(\frac{\theta^2}{3} + u^2(\theta)\right) \right] + u_{\mathrm{NL},xy}^2 + s_r^2(x) + \frac{w_{xy}^2}{12}} \leq 1.9\ \mu\text{m} + \frac{L_{xy}}{1600}. \tag{46}$$

Uncertainty components for which no distribution was described were supposed to have normal distributions. In such situations, where there are many uncertainty components (eight components) and most of them are normally distributed and they contribute similarly to the total combined uncertainty, it can be supposed that the output variable *Lxy* is normally distributed [25]. Therefore, a coverage factor *k* = 2 can be used when computing the expanded uncertainty, assuming a coverage probability of approximately 95%.
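
As an illustration of how the uncertainty budget of Equation (46) can be evaluated, the following sketch uses placeholder values for every calibration parameter except *wxy* = 1.65 μm, which is quoted in the text; it is not a substitute for the actual Table 6 values.

```
% Sketch of the expanded-uncertainty budget of Equation (46).
% Only w_xy is taken from the text; the other parameters are placeholders.
k      = 2;                      % coverage factor
u_cxy  = 5e-4;                   % u(c_xy), assumed
a      = 1e-4;  u_a = 1e-4;      % pixel-length difference a and u(a), assumed
theta  = 5e-4;  u_t = 2e-4;      % perpendicularity error and u(theta), rad, assumed
u_NLxy = 0.5;                    % XY non-linearity, um, assumed
s_rx   = 0.3;                    % XY repeatability, um, assumed
w_xy   = 1.65;                   % voxel width, um (Section 2)

rel2  = u_cxy^2 + a^2/3 + u_a^2 + 0.5*(theta^2/3 + u_t^2);   % relative terms
U_Lxy = @(L_um) k*sqrt(L_um.^2*rel2 + u_NLxy^2 + s_rx^2 + w_xy^2/12);

U_Lxy(1000)      % expanded uncertainty, um, for a 1 mm length in the XY plane
```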

#### *4.2. Expanded Uncertainty Estimation for Height Measurements along the Z-Axis*

For height measurement (0 ≤ *h* ≤ 100 μm—the Z range approximately covered by the sphere cap measured), the following expression gives us a reasonable estimation of its expanded uncertainty *U*(*h*) for a coverage factor *k* = 2:

$$U(h) = k \cdot \sqrt{h^2 \cdot u^2(c_z) + u_{\mathrm{FLT}}^2 + s_r^2(z) + \frac{w_z^2}{12}} \leq 2.2\ \mu\text{m} + \frac{h}{120} \tag{47}$$

Now there are four uncertainty components, where three are distributed normally, and only one *wz* is distributed uniformly, but *wz* is never the dominant contribution. Again, in a situation such as this, a coverage factor *k* = 2 can be used when computing the expanded uncertainty, assuming a coverage probability of approximately 95% [25].

#### *4.3. Expanded Uncertainty for Roughness Measurements*

Following the recommendations of DKD-R 4-2 [40–42], a model for a corrected *Ra* roughness measurement performed after instrument calibration would be:

$$R_a = \overline{R} \cdot (1 + c_z) - \left(\overline{b} + \delta b\right) + \delta R_{\text{noise}} \tag{48}$$

where now $\overline{R}$ is the average of *m* repeated measurements made over the specimen being measured, and δ*R*noise is a random variable of null mean, normally distributed with standard deviation *u*noise. The standard uncertainty of *Ra* is

$$
u(R_a) = \sqrt{\frac{s^2(R)}{m} + \overline{R}^2 \cdot u^2(c_z) + u^2(\overline{b}) + u^2(\delta b) + u^2_{\text{noise}}} \tag{49}
$$

The expanded uncertainty *U*(*Ra*), using a coverage factor *k* is

$$U(R_a) = k \cdot \sqrt{\frac{s^2(R)}{m} + \overline{R}^2 \cdot u^2(c_z) + u^2(\overline{b}) + u^2(\delta b) + u^2_{\text{noise}}} \tag{50}$$

In this case, there are five uncertainty components, δ*b* is distributed uniformly, and $\overline{R}$ follows a Student's *t* distribution with ν = *m* − 1 degrees of freedom. If we compute the degrees of freedom of the output variable *Ra*, the result is approximately ν(*Ra*) = 30. With ν(*Ra*) > 10, it is possible to use a coverage factor *k* = 2 corresponding to a coverage probability of approximately 95% [25]. Then, assuming that measurements will be repeated *m* = 3 times, the expanded uncertainty would be

$$U(R_a) < 0.25\ \mu\text{m} \tag{51}$$

This value is very good for an instrument with a voxel height of *wz* = 2 μm.
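
A quick numerical check of Equations (49)–(51), using the component values quoted in this paper and an assumed average reading, can be written as follows.

```
% Numerical check of Equations (49)-(51); R_mean is an assumed reading.
m       = 3;          % number of repeated measurements
s_R     = 0.07;       % repeatability s(R), um (Section 3.4)
R_mean  = 1.5;        % assumed average reading, um
u_cz    = 0.0014;     % u(c_z), Eq. (35)
u_bmean = 0.05;       % u(b_bar), Eq. (41), um
u_db    = 0.06;       % u(delta b), Eq. (42), um
u_noise = 0.09;       % instrument noise, Eq. (45), um
k       = 2;

u_Ra = sqrt(s_R^2/m + R_mean^2*u_cz^2 + u_bmean^2 + u_db^2 + u_noise^2);  % Eq. (49)
U_Ra = k*u_Ra         % Eq. (50), ~0.25 um, cf. Eq. (51)
```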

#### *4.4. Propagation of Uncertainty When Measuring the Radius of a Cylindrical Surface*

As an example of uncertainty propagation in dimensional measurements not directly covered in Sections 4.2 and 4.3, we present the case of a measurement of the radius of a cylindrical surface (see Figure 18) of a steel bar with a nominal value of 2.75 mm.

**Figure 18.** Measurement of a cylindrical surface.

The uncertainty propagation was done using Monte Carlo simulation [46]. The model function used during the simulation of the coordinates of the surface's points was the following:

$$
\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 1 + c\_{\rm xy} + a + \delta a & (\theta + \delta \theta)/2 & 0 \\ (\theta + \delta \theta)/2 & 1 + c\_{\rm xy} - a - \delta a & 0 \\ 0 & 0 & 1 + c\_z \end{bmatrix} \begin{bmatrix} p \\ q \\ r \end{bmatrix} + \begin{bmatrix} \delta x \\ \delta y \\ \delta z \end{bmatrix} \tag{52}
$$

where (*p*, *q*, *r*) are the coordinates provided by the confocal microscope during the measurement. They were not simulated. Table 7 enumerates the variables that were simulated and how the simulation was performed. Corrections *a* and θ were not applied, because the instrument software cannot take them into account. In order to take into account the effect of not applying these corrections, δ*a* and δθ were introduced; *a* and θ are supposed to have normal distributions with zero mean (corrections not applied) and standard uncertainties *u*(*a*) and *u*(θ), while δ*a* and δθ have uniform distributions with zero mean and standard uncertainties |*a*|/√3 and |θ|/√3. δ*p*, δ*q*, and δ*r* represent the repeatability effects over coordinates *x*, *y*, and *z*. We supposed that they had normal distributions with zero mean and standard uncertainties *u*(δ*p*) = *u*(δ*q*) = *sr*(*x*) = *sr*(*y*) and *u*(δ*r*) = *sLS* = 1.0 μm, where *sLS* is the root mean squared error observed when fitting a cylindrical surface to the measured points (*x*, *y*, *z*) using a least-squares fit.


**Table 7.** Simulation variables. <sup>1</sup> Corrections *a* and θ were not applied.

A total of *N* = 10<sup>4</sup> simulations were generated. For each simulation, a value *Ri* was obtained for the radius of the cylindrical surface. Figure 19 shows the histogram of the simulated radius of the cylindrical surface. The red smooth line represents the best approximation of the histogram through a normal distribution. Differences between the red line and the histogram were small enough to be negligible. It is likely that upon increasing the number of simulations (the advisable value for *N* when there is no time limit on the execution of the simulation process is *N* = 10<sup>6</sup> [46]), these differences would be smaller. For this reason, a coverage factor *k* = 2 (corresponding to an approximate coverage probability of 95%) was chosen in the previous sections: final distributions are usually very close to a normal distribution, where *k* = 2 corresponds to a coverage probability of 95.45%.

**Figure 19.** Histogram of the radius *R* of the cylindrical surface.

The final result was *R* = (2.7907 ± 0.0061) mm, where the expanded uncertainty *U*(*R*) = *k*·*s*(*R*), where now *s*(*R*) is the standard deviation of *N* simulated values *Ri*. A coverage factor *k* = 2 for a coverage probability of approximately 95% was used.
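
A compact sketch of the Monte Carlo propagation just described is given below. The routine `cylinderRadiusFit` is a hypothetical least-squares fitting helper (not provided in the paper), the measured coordinates *p*, *q*, *r* are assumed to be already loaded as column vectors, and all standard uncertainties except *sLS* are placeholders for the values of Table 7.

```
% Monte Carlo propagation sketch for Equation (52).
% cylinderRadiusFit is a hypothetical cylinder-fitting helper.
u_cxy = 5e-4;    u_cz   = 1.4e-3;   % u(c_xy) (assumed), u(c_z) from Eq. (35)
u_a   = 1e-4;    abs_a  = 1e-4;     % u(a) and |a|, assumed
u_th  = 2e-4;    abs_th = 5e-4;     % u(theta) and |theta|, rad, assumed
s_xy  = 0.3e-3;  s_LS   = 1.0e-3;   % s_r(x)=s_r(y) (assumed) and s_LS, mm

N = 1e4;  Rsim = zeros(N,1);
for i = 1:N
  cxy = u_cxy*randn;   cz = u_cz*randn;        % residual amplification errors
  av  = u_a*randn;     th = u_th*randn;        % a and theta, normal, zero mean
  da  = abs_a *(2*rand - 1);                   % delta a  ~ U(-|a|, +|a|)
  dth = abs_th*(2*rand - 1);                   % delta th ~ U(-|theta|, +|theta|)
  M   = [1+cxy+av+da   (th+dth)/2    0      ;
         (th+dth)/2    1+cxy-av-da   0      ;
         0             0             1+cz  ];
  XYZ = [p q r]*M' + [s_xy*randn(size(p)) s_xy*randn(size(q)) s_LS*randn(size(r))];
  Rsim(i) = cylinderRadiusFit(XYZ);            % hypothetical fitting helper
end
U_R = 2*std(Rsim)                              % expanded uncertainty, k = 2
```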

A similar approach could be used to propagate uncertainties when using the confocal microscope for other types of dimensional or angular measurements.

#### **5. Conclusions**

A complete calibration procedure that provides adequate traceability to confocal microscopes used in submillimeter coordinate metrology was presented. This procedure provides adequate traceability for length and roughness measurements performed with confocal microscopes and can be easily adapted to calibrate other 3D optical instruments (e.g., focus variation microscopes). The calibration procedure is as simple as possible, as it was designed to be implemented in industrial environments. Reference material standards were chosen to be easy to find and easy to calibrate again in industrial environments. The calibration procedure covers all the key points of operation of a confocal microscope. It permits the estimation of:


Some of these parameters (amplification coefficients, flatness deviation in XY plane) can usually be introduced in the instrument software to compensate for their effects. Others cannot be compensated (i.e., θ, *a*) but if high values are detected, the user can ask the instrument manufacturer to adjust and/or repair the instrument to reduce their effects. Even if they are not introduced, an alternative approach is presented here to account for the fact that these corrections were not applied.

Uncertainty estimations were carried out for all parameters following the mainstream GUM method. In addition, for measurements of lengths and roughness, expressions for expanded uncertainties of measurement carried out by the instrument were provided. There are other types of measurements, such as angular measurements, that were not addressed in this paper due to limitations in the extent of the text. Notwithstanding, all the information needed to propagate uncertainties to these other types of measurements is provided in the paper, as can be demonstrated through an example solved using Monte Carlo simulation.

The procedure described in this paper can be easily adapted to calibrate other optical instruments in the submillimeter range, which are capable of providing 3D information of surfaces being observed by them (e.g., focus variation microscopes). For example, the material of some of the reference material standards would have to be changed, but the core of the procedure would remain the same.

**Author Contributions:** Conceptualization, A.M.M. and J.d.V.y.O.; Investigation, A.M.M.; Methodology, A.M.M., and J.d.V.y.O.; Software, A.M.M. and J.d.V.y.O.; Supervision, J.d.V.y.O.; Validation, J.d.V.y.O.; Writing—Original draft, A.M.M.; Writing—Review and editing, J.d.V.y.O.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflicts of interest.

#### **Appendix A Algorithm for Spherical Cap Fitting**

At the end of this appendix, we include the source code of a routine written to be executed in Matlab®(www.mathworks.com) or GNU Octave (www.gnu.org/software/octave/) that performs a least-squares fitting of *n* points with Cartesian coordinates (*xi*, *yi*, *zi*) to a spherical surface of radius *R* and center at (*x*C, *y*C, *z*C). The following equation is used to describe the surface:

$$z(x, y) = f(\mathbf{p}, x, y) = z_C + \sqrt{R^2 - (x - x_C)^2 - (y - y_C)^2}. \tag{A1}$$

Vector **p** = [*R x*<sup>C</sup> *y*<sup>C</sup> *z*C] stores the four parameters that define the spherical surface.

This expression works well for least-squares fitting when the points (*xi*, *yi*, *zi*) lie on a spherical cap near *z*MAX = *z*C + *R* (sphere north pole) or *z*MIN = *z*C − *R* (sphere south pole).

The algorithm needs an initial solution $\mathbf{p}_0 = \left[\, R^{(0)} \;\; x_C^{(0)} \;\; y_C^{(0)} \;\; z_C^{(0)} \,\right]$, from which an iterative process starts. In each iteration *k*, the following system of equations is solved by least squares:

$$\frac{R^{(k-1)}}{h_i} \cdot \Delta R + \frac{x_i - x_C^{(k-1)}}{h_i} \cdot \Delta x_C + \frac{y_i - y_C^{(k-1)}}{h_i} \cdot \Delta y_C + \Delta z_C = z_i - f\left(\mathbf{p}_{k-1}, x_i, y_i\right), \tag{A2}$$

where *i* = 1, 2, ..., *n* and $h_i = \sqrt{\left(R^{(k-1)}\right)^2 - \left(x_i - x_C^{(k-1)}\right)^2 - \left(y_i - y_C^{(k-1)}\right)^2}$.

In each iteration, a detection of outliers is made following the method described in Section 1.3.5.17 of [29]. The *z* coordinates obtained with a confocal microscope are usually noisy enough to make the adjustment by least squares of the surface very difficult. Therefore, a good outlier detection algorithm must be used. The iterative process finishes when no more outliers are detected and no decrease in RMS residual is observed.

```
function [R,p,s,e,Cp,xn,yn,zn]=SphericalCapFit(x,y,z,xo,yo,zo,Ro); 
 % Fit a spherical surface to probing points (x,y,z) 
 % with outliers detection. 
 % 
 % Equation of the spherical surface 
 % 
 % z(x,y) = zo + sqrt( R^2 - (x-xo)^2 - (y-yo)^2 ) 
 % 
 % An initial solution is needed: 
 % Sphere Center: (xo,yo,zo) 
 % Sphere Radius: Ro 
 % 
 % Output variables 
 % R : Sphere Radius 
 % p : Sphere Parameters Vector, p=[R xo yo zo] 
 % s : RMS Residual 
 % e : Vector of residuals 
 % Cp : Covariance matrix of p 
 % xn,yn,zn: Coordinates of points not discarded 
 % Initial parameters 
 p =[ Ro xo yo zo ]'; 
 se_ans = Inf; % Initial value so the first outlier check cannot fail 
 while true 
 R = p(1); xo=p(2); yo=p(3); zo=p(4); 
 h = sqrt( R^2 - (x-xo).^2 - (y-yo).^2 ); 
 zest = zo + h ; 
 A = [ R./h (x-xo)./h (y-yo)./h ones(size(x)) ]; 
 dp = A\(z-zest) ; 
 e = (z-zest)-A*dp ; % Residuals 
 % Outliers detection following 
 % section 1.3.5.17. Detection of Outliers 
 % NIST - Engineering Statistical Handbook 
 % https://www.itl.nist.gov/div898/handbook/eda/section3/eda35h.htm 
 se = 1/0.6745*median(abs(e)); % Robust estimation of standard deviation 
 ii = find(abs(e)<3.5*se); % Outliers detection 
 if (length(ii)==length(e))&&(se/se_ans>=1) 
 break % End of the iterations 
 else 
 se_ans=se; 
 end 
 x=x(ii); y=y(ii); z=z(ii); e=e(ii); % Outliers elimination 
 p = p + dp; 
 end 
 % Uncertainty Estimation 
 se = sqrt(sum(e.^2)/(length(e)-length(p))) ; 
 Cp = se^2*inv(A'*A); % Covariance matrix of parameters p 
 R = p(1); % Estimate of R 
 xn=x; yn=y; zn=z; % Not discarded points coordinates 
 % Residuals 
 zest=p(4)+sqrt(p(1)^2-(x-p(2)).^2-(y-p(3)).^2); 
 e = z - zest ; 
 % RMS residual 
 s = sqrt(sum(e.^2)/(length(e)-length(p))) ; 
end
```
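
As a usage illustration (not part of the original routine), the following lines generate synthetic noisy points on a spherical cap and call SphericalCapFit from a deliberately perturbed initial solution; all nominal values are arbitrary.

```
% Example call with synthetic data (arbitrary nominal values).
Rtrue = 2;  xc = 0;  yc = 0;  zc = -1.9;                 % true cap parameters, mm
[xg, yg] = meshgrid(linspace(-0.4, 0.4, 41));
x = xg(:);  y = yg(:);
z = zc + sqrt(Rtrue^2 - (x-xc).^2 - (y-yc).^2) + 1e-3*randn(size(x));  % 1 um noise
[R, p, s] = SphericalCapFit(x, y, z, 0.01, -0.01, -1.85, 2.1);  % perturbed guess
fprintf('R = %.4f mm, RMS residual = %.2f um\n', R, 1e3*s);
```
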
#### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

### *Article* **Development and Validation of a Calibration Gauge for Length Measurement Systems**

**Francisco Javier Brosed \*, Raquel Acero Cacho, Sergio Aguado, Marta Herrer, Juan José Aguilar and Jorge Santolaria Mazo**

Design and Manufacturing Engineering Department, Universidad de Zaragoza, María Luna 3, 50018 Zaragoza, Spain; racero@unizar.es (R.A.C.); saguadoj@unizar.es (S.A.); 680722@unizar.es (M.H.); jaguilar@unizar.es (J.J.A.); jsmazo@unizar.es (J.S.M.)

**\*** Correspondence: fjbrosed@unizar.es

Received: 31 October 2019; Accepted: 27 November 2019; Published: 29 November 2019

**Abstract:** Due to accuracy requirements, robots and machine-tools need to be periodically verified and calibrated through associated verification systems that sometimes use extensible guidance systems. This work presents the development of a reference artefact to evaluate the performance characteristics of different extensible precision guidance systems applicable to robot and machine tool verification. To this end, we present the design, modeling, manufacture and experimental validation of a reference artefact to evaluate the behavior of these extensible guidance systems. The system should be compatible with customized designed guides, as well as with commercial and existing telescopic guidance systems. Different design proposals are evaluated with finite element analysis, and two final prototypes are experimentally tested assuring that the design performs the expected function. An estimation of the uncertainty of the reference artefact is evaluated with a Monte Carlo simulation.

**Keywords:** calibration artifact; kinematic support; dimensional metrology; machine tool; length measurement

#### **1. Introduction**

Volumetric verification is a verification technique to improve the accuracy of machine tools (MTs) and robots based on indirect measurement [1]. It uses the combined effect of all geometric errors through a parameter identification process [2]. Many studies have been carried out for its application to coordinate measurement machines (CMMs) and machine tools (MTs) [3,4]. The increasing implementation of this verification technique in the field of machine tool verification has led to the development of verification procedures that depend on different factors such as the type of machine, the non-geometric errors of the machine, the system and measurement technique applied, etc. [5]. The result of the equipment's verification is linked to the calibration of the measurement system used, a procedure which is normally carried out in accordance with the applicable standards. This applies to measuring instruments commonly used in volumetric verification such as laser trackers [6]. However, in some cases, the lack of guidelines or standards makes it necessary to develop internal calibration procedures and to use specific reference gauges [7,8].

Therefore, this work presents the development of a reference artefact to calibrate extensible guidance systems used in machine tool and robot verification procedures. The reference artefact materializes several working positions and lengths with a fixed reference origin. The reference origin consists of a nest for a precision sphere, and the working positions include different nests with precision spheres and kinematic couplings. The mechanical repeatability of the reference artefact for the nests' positioning in the different working locations is achieved with a kinematic coupling configuration of spheres and cylinders. The design of the artefact will also compensate for the errors associated with its deflections [9,10]. In [9], an analysis of the measured length of long artifacts is shown, and the use

of tubes instead of solid bars is recommended to reduce the elongation due to self-loading. In [10], the author gives an estimation of the location of the supports to obtain parallel surfaces at the end of a bar (Airy points) and another option (Bessel points) to obtain the minimum change in length.

The paper is structured as follows. Firstly, the authors analyze the requirements of the design and the structure of the reference artefact. Secondly, an evaluation of the different gauge design proposals is performed by means of a finite element simulation in Solid Edge. In this analysis, the displacement generated in the gauge due to the load application is measured for each case. Then, the design proposals selected are manufactured by 3D printing, and these prototypes are used in the experimental testing and measured with a CMM (coordinate measurement machine). Finally, after optimizing the design with the feedback of simulation and experimental testing, the paper presents an uncertainty estimation of the designed calibration system.

#### **2. Materials and Methods**

The calibration artefact has to materialize the calibration positions for a length measurement instrument. The instrument consists of a system that measures the distance between two spheres. One of the spheres is fixed to the instrument and the other is fixed to the machine tool, robot or coordinate measuring machine under verification. As can be seen in Figure 1, the instrument is composed of a sphere (1) and a support (5) that holds the sphere (6) fixed to the machine tool under verification, both being located at the ends of the artefact. Between both sides, there is an interferometer (2) to measure the different distances that will be materialized in the gauge. These different lengths are achieved with a telescopic system (3) that also assures the alignment of the interferometer and the retroreflector (4). The interferometer is located on the left side of the gauge, close to the fixed sphere (1). The retroreflector is located on the other side of the system, close to the sphere fixed to the machine tool. The calibration artefact should be able to calibrate measurement instruments with a measurement range from 400 mm to 1600 mm (max. and min. in Figure 1). Once calibrated, the instrument will give the distance between the centers of the two spheres.

**Figure 1.** Scheme and components of the measurement instrument. (1) sphere fixed to the instrument; (2) interferometer; (3) telescopic system; (4) retroreflector; (5) magnetic holder; (6) sphere fixed to the machine tool.

The calibration artefact has a fixed magnetic sphere-holder to lock the position of the sphere fixed to the instrument (1) and several kinematic supports to obtain a repeatable positioning of a sphere. When the sphere of the instrument (1) is locked in the magnetic sphere holder and the other side of the instrument (5) reaches the sphere fixed to the machine tool, a calibrated length materializes in the gauge. The defined nominal lengths of the calibration artefact range from 400 to 1600 mm.

During the calibration of the measurement instrument, the calibration artefact rests on a flat surface. Therefore, the artefact incorporates three support legs on its base to assure its stability during the calibration process.

The main components under analysis in the design of the gauge are the following: the position of the support points to minimize the deformation in the length measurement, the kinematic couplings support that allows the movable sphere positioning with high repeatability and the mechanical structure to materialize the calibration lengths.

During the calibration of the measurement instrument, we need a repeatable positioning of the movable sphere to materialize the calibration positions. For this purpose, a kinematic base has been designed with calibrated spheres and cylinders (six points, three cylinders). The kinematic contact has two parts (upper and lower). In the lower part, six spheres are fixed in three pairs located at 120°, while in the upper part, three cylinders are fixed with their axes located at 120° and pointing to the center of the geometrical distribution (Figure 2). Each interface provides two constraints, totaling six constraints for the system. The best stability is achieved when the axes of the contact planes bisect the coupling triangle with each interface as a vertex of this triangle. Four spheres secure the position of the cylinders in the upper part. The upper and lower parts are fixed with magnets located in the center of the geometrical distribution (in the upper and lower parts, respectively).

**Figure 2.** Kinematic support for the mobile sphere. (**a**) Distribution and orientation of the cylinders; (**b**) distribution and orientation of the spheres; (**c**) model of the kinematic support mounted, contact between spheres and cylinders.

The main element of the artefact is a tube that goes through the other parts of the assembly. The junction between the tube and the other parts (magnetic holder or kinematic support for the magnetic holder) is materialized with a flange.

Two design proposals for the artefact structure are evaluated. The first prototype is a single tube structure in which the line of the measurement points is parallel to the bar and is located beyond the structure (Figure 3a). Each flange has been designed to hold the bar and locate the magnetic holder, in one case, and the kinematic support for the magnetic holder, in the rest of the cases, defining each measurement position (Figure 3b,c).

**Figure 3.** (**a**) Single tube structure for the calibration artefact; (**b**) flange of the single bar artefact with standing legs; (**c**) flange of the single bar artefact without standing legs.

The second prototype is a double tube structure that locates the line of the measurement points between both bars (Figure 4a). The flanges hold the bars and support a base where the magnetic holder is located in the first point and the kinematic supports in the other cases (Figure 4b).

**Figure 4.** (**a**) Double tube structure for the calibration artefact; (**b**) flange for the double bar structure with the base and the kinematic support.

A horizontal bar of great length requires two points of support along its length to be stable. The position of these standing legs determines how gravity acts on the bar; depending on how it is supported, measurement errors can be caused [11]. Therefore, if the supports are positioned at the ends, the bar will warp in the center, causing the ends to come closer and tilt upwards. On the contrary, if the two supports are positioned in the middle, the bar will be bent at the ends [9]. From [9], the use of tubes instead of solid bars to reduce the elongation due to self-loading is adopted, as well as the location of the spheres in the neutral bending surface.

The distances between the supports of the bar have been defined using the Airy and Bessel methodologies and comparing the resulting deformations. A bar supported at its Airy points has parallel ends, while a bar supported at its Bessel points keeps its maximum length, owing to the reduction in deflection. The values of the Airy and Bessel points have been taken from [10]. The distance between supports (a) and the position of each support (Lmin and Lmid), for a simple bar of 1600 mm length (L), appear in Table 1, and the deformation obtained in the bar appears in Figure 5.


**Table 1.** Values of the Airy and Bessel points.

**Figure 5.** Results of the simulation of a simple bar (aluminum) with two supports following each of the two methods. (**a**) Simple bar supported at the Airy points; (**b**) simple bar supported at the Bessel points.

Four different positions of the supports are proposed; two of them following the Airy and Bessel methodologies (Figure 6a). The other two configurations locate the supports in the reference flange (point 0, Figure 6b) and in the flange that materializes Lmid (point 2, Figure 6b) in the third case and in the reference flange (point 0, Figure 6c) and in the flange that materializes Lmax (point 3, Figure 6c) in the fourth case.

**Figure 6.** Scheme of the four locations of the supports and the location of the loads in each simulation; there is a load of 1N in point 0 for every simulation and another load of 1N in point 1 for the simulation of Lmin, in point 2 for the simulation of Lmid, and in point 3 for the simulation of Lmax. (**a**) Airy and Bessel points; (**b**) supports located in point 0 (reference) and in Lmid; (**c**) supports located in point 0 (reference) and in Lmax.

#### **3. Results**

This section provides the description of the main results of the components analysis and the development of the artefact. As the previous section indicates, the main components under analysis are the following: the position of the support points to minimize the deformation in the length measurement, the kinematic couplings support that will allow the movable sphere positioning with high repeatability, and the mechanical structure to materialize the calibration lengths.

#### *3.1. Design Selection*

In order to define the position of the standing legs, a finite element analysis of the deformation of the structure has been carried out (twenty-four simulations, twelve for each prototype, were performed). The study analyzes four different positions of the supports and the deformation that occurs when the measurement system is placed in the three different measurement positions (Lmin, Lmid, and Lmax). The measurement system will rest in the reference position and in the position under verification (Lmin, Lmid, or Lmax). Therefore, in the analysis there is a load of 1 N in the reference position (position of the magnetic holder, point 0, Figure 6) and another load of the same value in the measurement position for each case. The four positions of the supports are the Airy points (a = 923.76 mm), the Bessel points (a = 895.04 mm, Figure 6a), the supports located in the reference position and in Lmid (Figure 6b), and finally, the supports located in the reference position and in Lmax (Figure 6c). All the combinations make twenty-four simulations (2 prototypes, 4 supporting leg configurations, and 3 load configurations).
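
The support spacings quoted above can be recovered from the classical Airy and Bessel coefficients (1/√3 ≈ 0.5774 and 0.5594, respectively); a minimal check, consistent with the values taken from [10], is:

```
% Distance a between the two supports of a bar of length L.
L = 1600;                 % bar length, mm
a_airy   = L/sqrt(3)      % ~923.76 mm: parallel end faces (Airy points)
a_bessel = 0.5594*L       % ~895.04 mm: minimum change in length (Bessel points)
```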

The material properties taken into account for the structural analysis are shown in Table 2.


**Table 2.** Material properties for prototypes 1 and 2 (aluminum 6061) and prototype 3 (carbon fiber).

The increment of the measurement distance or the measurement error Δ*LMEAS* in Equation (1) characterizes the deformation of the structure.

$$
\Delta L_{\mathrm{MEAS}} = \sqrt{\left(L_n + \Delta x_n - \Delta x_0\right)^2 + \left(\Delta y_n - \Delta y_0\right)^2 + \left(\Delta z_n - \Delta z_0\right)^2},
\tag{1}
$$

where *Ln* is the nominal distance of the measurement point (*n* = Lmin, Lmid and Lmax); (Δ*xn*, Δ*yn*, Δ*zn*) are the displacements of the measurement point due to the deformation of the structure, and (Δ*x0*, Δ*y0*, Δ*z0*) are the displacements of the reference point due to the deformation of the structure.
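
A minimal sketch of Equation (1) in GNU Octave/Matlab is given below; the displacement values are illustrative placeholders rather than results of the finite element simulations, and the measurement error is reported as the deviation of the computed distance from the nominal length.

```
% Sketch of Equation (1): measurement error due to structural deformation.
% The displacement values are illustrative placeholders.
Ln = 1600;                              % nominal measurement length, mm
dn = [ 0.05 -0.02 -0.08 ]*1e-3;         % (dx, dy, dz) of the measurement point, mm
d0 = [ 0.01  0.00 -0.03 ]*1e-3;         % (dx, dy, dz) of the reference point, mm
Lmeas = sqrt( (Ln + dn(1) - d0(1))^2 + (dn(2) - d0(2))^2 + (dn(3) - d0(3))^2 );
err_um = (Lmeas - Ln)*1e3               % deviation from the nominal length, um
```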

Combining the four proposed positions of the supports and the three different pairs of loads, twelve values of simulated measurement error have been obtained for each prototype after the analysis

of the prototypes using finite elements software (Solid Edge ST8, Siemens PLM Software, Plano, TX, USA) (Figure 7).

**Figure 7.** Measurement error in μm for the twelve cases of each of the two prototypes. These results have been obtained after the analysis of the prototypes using finite element software.

The localization of the spheres beyond the structure line amplifies the measurement error due to the deformation in prototype 1. Based on that, prototype 1 was discarded, and the results shown in the paper correspond to prototype 2. The measurement errors in prototype 2 are at a minimum when using the Bessel points and do not exceed 0.1 μm, the value obtained when the system is loaded in the Lmax position (points 0 and 3, Figure 6), being lower in the other cases (Lmin and Lmid), according to the simulation results obtained with the finite element software (Solid Edge ST8, Siemens PLM Software, Plano, TX, USA).

After the simulation with the 3D models of prototypes 1 and 2, a kinematic support prototype was manufactured by additive manufacturing (Figure 8) and tested using a CMM. The kinematic supports were tested by measuring the repeatability of the manufactured supports with the spheres and the cylinders. The position measurement repeatability of each location, obtained after ten iterations, was 8 μm (measured with a CMM).

**Figure 8.** (**a**) Kinematic supports samples manufactured by additive manufacturing; (**b**) prototype 3.

Once the kinematic supports had been tested (repeatability measured with a CMM) and the adequacy of the Bessel points for this application had been proven (simulation using the finite element software Solid Edge ST8, Siemens PLM Software, Plano, TX, USA), we manufactured a new design with aluminum flanges and carbon fiber structure tubes (27 mm diameter, 1830 mm length). The number of measurement points increases to seven from three in the previous prototypes (Table 3). In this case, the values of the measurement errors obtained for each measurement position are under 0.1 μm (measured with the CMM).


**Table 3.** Bessel and measurement points for prototype 3 (proposed nominal values).

#### *3.2. Manufacture, Assembly, and Performance*

The flanges, the bases where the kinematic supports are located, and the standing legs of the artefact were made of aluminum 6061 for prototype 3. The flanges join the carbon fiber tubes with the base that contains the magnetic holder for position A (n = 0, Figure 8b) and with the bases that contain the kinematic supports for the rest of the cases, positions B to H (n from 1 to 7, Figure 8b).

The flange geometry has been redesigned to adapt it to a wire EDM manufacturing process (Figure 9a,b).

**Figure 9.** (**a**) Model of the flanges in prototype 3; (**b**) detail of the flange embracing a tube; (**c**) prototype 3 with the new flanges and carbon fiber tubes during performance test in coordinate measurement machine (CMM); (**d**) CMM measuring a fixed sphere (Diameter 38.1 mm), distance to the mean of the ten iterations in mm. Standard deviation values of each coordinate are included next to the legend.

The position of the spheres has been measured with a coordinate measurement machine (CMM) to obtain the uncertainty of the calibration artefact. First, the repeatability of the CMM measuring a sphere with a diameter of 38.1 mm (1½″) was estimated at 0.4 μm. Measurement position number 3 (Figure 8b) was measured ten times with the CMM without removing the sphere from the kinematic support (Figure 9d).

A second measurement with ten iterations was carried out assembling and disassembling the kinematic support with the sphere located in position number 3 (not fixed sphere, Figure 10). The results are compared with those obtained without removing the kinematic support (fixed sphere, Figure 10).

The standard deviation values of the sample in the X, Y, and Z coordinates are 0.4, 0.1, and 0.4 μm, respectively, without removing the kinematic support (fixed sphere). When assembling and disassembling the kinematic support with the sphere located in position number 3 (not fixed sphere), they are 0.4, 0.2, and 0.5 μm.

The Z coordinate is more sensitive to movements when mounting and demounting the kinematic support but, in any case, the standard deviation of the sample is low enough for the application.

The next step, after measuring the repeatability of the kinematic supports, is to check the effect of the deformation in the measurement length corresponding to each sphere position. To evaluate this effect, the measurement of the reference position (A, number 0 in Figure 8b) and the other positions has been carried out moving the sphere with the kinematic support from B (position number 1 in Figure 8b) to H (position number 7 in Figure 8b). The procedure is repeated ten times. The measurement results

allow estimating the repeatability of the measurement length in each position as the standard deviation of ten repetitions in each position (Figure 11).

**Figure 10.** CMM measuring a sphere (Diameter 38.1 mm), distance to the mean of the ten iterations in mm in each coordinate. Standard deviation values of each coordinate (fixed and not fixed) are included in the legend. The results are compared with those obtained measuring a fixed sphere. (**a**) Distance to the mean of the ten iterations in mm in X coordinate; (**b**) Distance to the mean of the ten iterations in mm in Y coordinate; (**c**) Distance to the mean of the ten iterations in mm in Z coordinate.

**Figure 11.** CMM measuring a sphere (Diameter 38.1 mm) in each position of the artefact. The graph shows the standard deviation of the sample (ten iterations) for each measurement length (X, Y, and Z coordinates). The abscissa identifies the mean value of the measurement length of each position (from A to H).

#### *3.3. Calibration Artefact Uncertainty Estimation with Monte Carlo Simulation*

The Monte Carlo (MC) method is widely used in measurement uncertainty estimation procedures [12,13], for example, applied to coordinate measuring machine (CMM) uncertainty analysis [14–17]. In this work, we used MC simulation to estimate the uncertainty of the reference artefact in the calibration of length measurement systems. The uncertainty values have been calculated according to the Guide to the Expression of Uncertainty in Measurement (GUM) [18,19] using a confidence level of 95% (k = 2).

The input data for the MC simulation method are the probability distributions of the variability of the different error sources. In this case, the main error source is the variability of the positioning of each calibration point.

The nominal value of each position coordinate is the mean value obtained from the CMM measurement. The standard deviation of the distribution of each position is also the standard deviation of the CMM measurements for each position (Figure 11). Then, each position (from A to H) is measured with a CMM and the repeatability of the X, Y, and Z coordinates is evaluated modelling its distribution as a normal distribution. When the effect of the distribution of each variable that influences the measurement length materialized by the calibration artefact is considered, the distribution of the measurement result can be estimated (Figure 12).
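
A minimal sketch of this Monte Carlo evaluation for the first measurement length (d1, AB) is given below; the mean coordinates and the per-axis standard deviations are placeholders for the CMM results of Figure 11.

```
% Monte Carlo sketch of the distribution of measurement length d1 (AB).
% Mean coordinates and standard deviations are placeholders.
A  = [   0  0  0 ];   sA = [0.4 0.2 0.5]*1e-3;   % position A and its std. dev., mm
B  = [ 200  0  0 ];   sB = [0.4 0.2 0.5]*1e-3;   % position B and its std. dev., mm

N  = 1e6;
dA = randn(N,3).*sA;  dB = randn(N,3).*sB;       % normally distributed coordinate errors
d1 = sqrt(sum((B + dB - A - dA).^2, 2));         % simulated values of d1
U  = 2*std(d1)                                   % expanded uncertainty (k = 2), mm
```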

**Figure 12.** Distribution of the values for each set of coordinates of the A and B positions. The distance between these two positions materializes measurement length number 1 (d1, AB). The distribution of the measurement length can be obtained by simulation with the Monte Carlo method.

The results of the Monte Carlo simulation also depend on the number of iterations carried out. When the number of iterations is low, the results are not representative, but as the number of iterations increases, the values obtained for the uncertainty converge (Figure 13).

**Figure 13.** Evolution of the results as the number of iterations increases from 10<sup>2</sup> to 10<sup>6</sup>. (**a**) Results for the first measurement length materialized between point A and point B of the calibration artefact; (**b**) results for the fourth measurement length materialized between point A and point E of the calibration artefact; (**c**) results for the seventh measurement length materialized between point A and point H of the calibration artefact.

The results of the MC simulation are shown in Table 4, including the uncertainty value for each length measurement. The uncertainty distribution for each measurement length (from the distance between n = 0 and n = 1 in Figure 8b, distance d1, AB, to the distance between n = 0 and n = 7 in Figure 8b, distance d7, AH) is shown in Figure 14, obtaining values below ±1.60 μm for all the positions in the study.


**Table 4.** Results of the Monte Carlo simulation: uncertainty value for each measurement length.


**Figure 14.** The uncertainty distribution for each measurement length (from distance d1, AB to distance d7, AH).

#### **4. Discussion**

This work presented the design, development, manufacturing, and experimental validation of a reference artefact to calibrate extensible guidance systems used in machine tool and robot verification procedures. The artefact uses spheres and spherical nests with kinematic supports that assure the high repeatability of the system. Different design proposals were evaluated with finite element analysis, and two final prototypes were experimentally tested assuring that the design of kinematic couplings performs the expected function. The paper finally presents the uncertainty estimation of the calibration artifact using a Monte Carlo simulation (MC).

We could conclude from the results of the Monte Carlo simulation that the calibration uncertainty of the artefact designed for length measurement systems could be adequate for the application, considering tests carried out in a horizontal position.

The calibration artefact presented in this work can be used to test the telescopic system not only in a horizontal position but also by varying the angle and reaching an upright position. Therefore, simulation and experimental validation would be necessary in these conditions in the future, although it is expected that the configuration of the most precise calibration artefact would be the same as the one presented in this paper for horizontal tests.

**Author Contributions:** Conceptualization, J.J.A., M.H. and J.S.M.; methodology, J.J.A., M.H. and R.A.C.; software and validation, J.J.A., M.H. and S.A.; formal analysis, S.A., J.S.M. and F.J.B.; investigation, J.J.A. and M.H., S.A.; resources, R.A.C., J.S.M., F.J.B.; writing—original draft preparation, R.A.C., S.A. and F.J.B.; writing—review and editing, R.A.C., F.J.B. and J.S.M.

**Funding:** This research was funded by the Ministerio de Economía, Industria y Competitividad with project number Reto 2017-DPI2017-90106-R, by Aragon Government (Department of Industry and Innovation) through the Research Activity Grant for research groups recognized by the Aragon Government (T56\_17R Manufacturing Engineering and Advanced Metrology Group) co-funded with European Union ERDF funds (European Regional Development Fund 2014-2020, "Construyendo Europa desde Aragón") and by the Universidad de Zaragoza and the Centro Universitario de la Defensa, project number UZCUD2018-TEC-04.

**Conflicts of Interest:** The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

#### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

### *Article* **2D–3D Digital Image Correlation Comparative Analysis for Indentation Process**

#### **Carolina Bermudo Gamboa, Sergio Martín-Béjar \*, F. Javier Trujillo Vilches, G. Castillo López and Lorenzo Sevilla Hurtado**

Department of Civil, Material and Manufacturing Engineering, EII, University of Malaga, C/ Dr. Ortiz Ramos s/n, E-29071 Malaga, Spain; bgamboa@uma.es (C.B.G.); trujillov@uma.es (F.J.T.V.); gcastillo@uma.es (G.C.L.); lsevilla@uma.es (L.S.H.)

**\*** Correspondence: smartinb@uma.es

Received: 28 October 2019; Accepted: 9 December 2019; Published: 11 December 2019

**Abstract:** Nowadays, localized forming operations, such as incremental forming processes, are being developed as an alternative to conventional machining or forming techniques. An indentation process is the main action that takes place in these forming activities, allowing small, localized deformations. It is essential to know the material behavior under the punch and the transmitted forces to achieve correct control of the entire procedure. This paper presents the work carried out with the digital image correlation (DIC) technique applied to the study of the material flow that takes place under an indentation process. The material flow analysis is performed under 2D and 3D conditions, establishing the methodology for the calibration and implementation for each alternative. Two-dimensional DIC has been proven to be a satisfactory technique compared with the 3D method, showing results in good agreement with experimental tests and models developed by the finite element method. Notwithstanding, part of the indented material flows under the punch, emerging on the front surface and generating a dead zone that can only be addressed with a 3D technique. So, the main objective is to carry out a comparison between the 2D and 3D techniques to identify whether the 3D application could be mandatory for this type of process. Also, a 2D–3D mixed analysis is proposed for study cases in which it is necessary to know the material flow in that specific area of the workpiece.

**Keywords:** digital image correlation; indentation process; incremental forming; finite element method; material flow

#### **1. Introduction**

Plastic deformation is one of the main processes in the current manufacturing industry. The material is progressively shaped from a simpler geometry to a final complex shape. Therefore, the knowledge and control of the material flow is essential to achieve a correct product and improve the entire sequence. The indentation process is still one of the most commonly applied processes needed to investigate the material characteristics in general. Recent works show how the technique is still evolving and can offer essential information [1–4].

From a manufacturing perspective, the indentation process is applied in innovative approaches as part of localized forming operations, such as incremental forming processes (IFPs). IFPs are relatively new processes that are mainly applied to sheet forming [5]. Complex shapes are obtained according to the predetermined trajectory that the forming tool follows, generally controlled by Computer Numerical Control (CNC) machines. Issues such as the manufacturing of small batches, formability limits, the forming process, shape accuracy, and springback problems can be analyzed and solved. However, the principal advantage is that specific tooling is not needed, thus giving the process great flexibility as well as great formability compared with other forming processes. These processes are presented as

interesting alternatives in important industries such as the medical product manufacturing, aeronautics, and transport industries, among other applications [6–10].

From the bulk forming perspective, incremental forming is an old and well-known technique within the metal working field. Nowadays, these kinds of processes are employed to obtain a wide range of products and are continuously evolving and adapting to the newest technologies [11]. Incremental sheet-bulk metal forming technology can be found in the automotive industry for the manufacturing of geared components. An indenter with the final gear form is pressed into the semi-finished product. This process allows the conventional hardening and heat treatment to be avoided [12–14].

In order to understand the process and establish the final workpiece shape, a deep analysis of the stresses and strains is essential. Currently, different numerical (analytical methodologies, finite element method (FEM), and so on) and experimental techniques such as digital image correlation (DIC) can be applied.

Analytical approaches, like the upper bound technique or the slip line theory, have been proven to provide good results for a wide variety of processes. Different works can be found where the material flow field and the forces needed to achieve certain deformation are obtained. Through the comparison with other techniques like FEM or experimental tests, it has been proven that the analytical methods can be accurate [15–18]. However, the application is complex, owing to the large number of variables involved. Therefore, the accuracy of the results strongly depends on the analysis optimization. Nevertheless, the results obtained can be considered to be a good approximation for facilitating the decision-making. When a great number of variables must be studied and controlled, the analytical methods can be excessively complex and, therefore, resources like FEM, DIC, and so on are well considered.

FEM and other general numerical simulation methods reduce the number of required experiments, as well as replace the more expensive and complex ones, thus being a great tool to optimize the final shape of the workpiece [19]. An extensive range of materials, processes, and variables can be simulated. However, the results obtained always rely on the mathematical model behind this technology, and it is essential to have an adequate material law, which is difficult to obtain under extreme conditions. Also, remeshing techniques need to be optimized to avoid element distortion, which induces numerical errors and increases the computational time [20,21]. Experimental analysis to obtain new material models and validations is commonly required. So, the FEM approach can be an expensive resource owing to the time consumption and experimental support required in those case studies.

Focusing on the material flow analysis and material behavior, DIC is growing as a good alternative, showing certain advantages over other methods that make it more suitable. One of the main advantages is the non-contact optical application, which reduces the specimens' interaction, providing high resolution results in wide measurement ranges [22,23]. Measuring displacements and strains in real time is also possible. After the image capture of the sample deformation process, a mathematical correlation analysis is carried out, displaying bulk deformation by comparing deformed and un-deformed digital images, determining the displacements field and mapping their distribution [24–27]. Another important advantage is that this technique can be applied to 2D or 3D case studies without increasing much the difficulty of the analysis.

The 2D DIC technology offers higher accuracy and reduced computational complexity. Various studies show its breadth of application. For instance, S. Roux et al. apply the DIC technique to detect cracks and estimate the stress intensity factor on SiC samples [28], and several works show how suitable this technique is when applied to fracture case studies [29–32]. In the study of material characterization, mechanical properties such as Young's modulus, Poisson's ratio, anisotropic plastic ratio parameters, and flow curves are easily obtained with 2D DIC [33]. In general, DIC is a technology widely applied today, presenting a fair comparison with the results obtained through classical measurement methods [34–37].

For more complex processes, and to avoid limiting the study to planar specimens with no out-of-plane motion, a 3D analysis can be carried out. The number of cameras has to be increased, and they must be positioned to take images from different angles. The same analysis methodology is applied, and the computational time does not increase as much as in other analysis techniques, such as 3D FEM [38,39]. The methodology is applicable not only to 3D objects, but also to objects in motion [24,40] and objects subjected to large deformations [41,42].

Analyzed samples must present an irregular pattern on the surface in order to be studied by DIC technology. If the workpiece does not present an irregular pattern of its own, one must be created. This pattern consists of a random mottling or speckle that allows the position of every single point to be recognized before and after deformation (Figure 1). To analyze the images obtained through the deformation process, it is necessary to select a subset. The subset is a portion of the pattern selected for tracking, placed in the reference image from which the displacements are calculated. After the images have been captured during the deformation process, successive comparisons are made between them. The subset in the deformed image is matched to the subset from the reference image to evaluate its displacement. Correlation algorithms (Equation (1)) [23,43–45] work by locating the subset from the reference image within the new image. The pixels of the reference subset are associated with a number depending on the grey level. These values depend on the number of bits of the CCD camera. Figure 2 shows an example where a value of 100 is assigned to white and 0 to black. The algorithm applied uses grey value interpolation, with a choice between 4, 6, or 8 tap splines in this case. For greater accuracy, the highest-order spline must be selected.

$$C(x, y, u, v) = \sum_{i,j=-n/2}^{n/2} \left[ I(x + i, y + j) - \Gamma(x + u + i, y + v + j) \right]^2,\tag{1}$$

where

C(*x*, *y*, *u*, *v*): correlation function.

*x*, *y*: pixel coordinates values in the reference image.

*u*, *v*: displacement in x and y coordinates.

*n*/2: half of the subset size, taken as an integer.

*I*(*x* + *i*, *y* + *j*): grey value associated with the pixel at position (*x* + *i*, *y* + *j*) in the reference image.

Г(*x* + *u* + *i*, *y* + *v* + *j*): grey value associated with the pixel at position (*x* + *u* + *i*, *y* + *v* + *j*) in the deformed image.

(*x* + *i*), (*y* + *j*): pixel coordinates in the image before deformation.

(*x* + *u* + *i*), (*y* + *v* + *j*): pixel coordinates in the image after deformation.

**Figure 1.** Example of random subset painted in sample.

**Figure 2.** Subset deformation (**a**) before and (**b**) after deformation.

The lowest C value gives the best possible correlation, yielding the new pixel position (*x*, *y*) after deformation as well as the horizontal and vertical displacements (*u*, *v*) [45,46].
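As a minimal illustration of Equation (1), the following Python sketch (function and parameter names are illustrative assumptions, not the VIC implementation) searches by brute force for the integer displacement (u, v) that minimizes the sum of squared grey-level differences; commercial DIC software refines this to sub-pixel accuracy with the spline interpolation mentioned above.

```python
import numpy as np

def ssd_correlation(ref_img, def_img, x, y, n, search=10):
    """Brute-force minimization of the SSD correlation of Equation (1) for the
    subset of size n centred at (x, y); assumes the subset stays inside both
    images. ref_img and def_img are 2D arrays of grey values.
    """
    h = n // 2                                  # integer half-width of the subset
    ref = ref_img[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    best = (0, 0, np.inf)                       # (u, v, C) with the lowest C so far
    for v in range(-search, search + 1):
        for u in range(-search, search + 1):
            tgt = def_img[y + v - h:y + v + h + 1,
                          x + u - h:x + u + h + 1].astype(float)
            c = np.sum((ref - tgt) ** 2)        # C(x, y, u, v) of Equation (1)
            if c < best[2]:
                best = (u, v, c)
    return best
```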

The coordinates of a point Q around the subset centre P in the reference subset can be mapped to a point Q' in the target subset using first-order shape functions for displacement mapping [22,47]. Thus, the first-order shape function, which allows translation, rotation, shear, normal strains, and their combinations of the subset, is given by Equations (2) and (3):

$$x'_i = x_i + u_x \Delta x + u_y \Delta y,\tag{2}$$

$$y'_i = y_i + v_x \Delta x + v_y \Delta y,\tag{3}$$

where

*x'i*, *y'i* is the mapped position of point Q.

*xi*, *yi* is the position of Q point in the reference image.

*ux*, *uy*, *vx*, *vy* are the first-order displacement gradients of the reference subset.

Δ*x* is the *x* distance between Q and P points in the reference subset.

Δ*y* is the *y* distance between Q and P points in the reference subset.
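For illustration, Equations (2) and (3) reduce to the following short Python helper (names are illustrative assumptions), which maps a point Q at offset (Δx, Δy) from the subset centre into the deformed configuration given the four displacement gradients:

```python
def map_point(x, y, dx, dy, u_x, u_y, v_x, v_y):
    """First-order shape function of Equations (2) and (3): returns the mapped
    position (x', y') of a point Q located at offset (dx, dy) from the subset
    centre P, given the displacement gradients of the reference subset."""
    x_mapped = x + u_x * dx + u_y * dy   # Equation (2)
    y_mapped = y + v_x * dx + v_y * dy   # Equation (3)
    return x_mapped, y_mapped
```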

Between two consecutive frames, the hypothesis of small strains is applicable, so the Cauchy–Almansi tensor can be applied to obtain the strain field (Equation (4)):

$$
\varepsilon = \begin{pmatrix} \varepsilon_{xx} & \varepsilon_{xy} \\ \varepsilon_{xy} & \varepsilon_{yy} \end{pmatrix} = \begin{pmatrix} u_x & \frac{1}{2}\left(u_y + v_x\right) \\ \frac{1}{2}\left(u_y + v_x\right) & v_y \end{pmatrix} \tag{4}
$$

where

ε*xx* and ε*yy* are the longitudinal strains in x and y directions, respectively. ε*xy* is the angular strain.
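As a sketch of how Equation (4) can be evaluated in practice (assuming the measured displacement fields are available on a regular grid; names are illustrative), the strain components can be obtained from numerical gradients of the displacement fields:

```python
import numpy as np

def strain_fields(u, v, dx=1.0, dy=1.0):
    """Small-strain components of Equation (4) from displacement fields.

    u, v   : 2D arrays of horizontal and vertical displacements on a regular grid.
    dx, dy : grid spacing in x and y (e.g., step size times pixel size).
    """
    # np.gradient returns derivatives along axis 0 (y) and axis 1 (x)
    u_y, u_x = np.gradient(u, dy, dx)
    v_y, v_x = np.gradient(v, dy, dx)
    eps_xx = u_x                     # longitudinal strain in x
    eps_yy = v_y                     # longitudinal strain in y
    eps_xy = 0.5 * (u_y + v_x)       # angular (shear) strain
    return eps_xx, eps_yy, eps_xy
```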

A previous study focused on 2D DIC technology for the study of the material flow in an indentation process, comparing the displacement-load curve and the strain maps obtained from the material surface with FEM [23]. Even though the test specimens met the characteristics necessary to be largely assimilated to a plane strain condition, part of the material was projected outwards from the plane (Z axis), so it was not possible to analyze part of the studied area, requiring a 3D approach. This work presents a comparative study with 2D and 3D DIC technology, showing the methodology implemented and the results achieved in each case study. A 2D–3D mixed analysis is also proposed for studying cases like the one addressed, where a protuberance is projected on the vertical surface of the workpiece during the punch penetration in the indentation process, corresponding to the dead material added to the punch. Being an out-of-plane situation, the 2D analysis cannot reach that particular area of the workpiece. Working with 3D DIC, the Z axis towards the exterior of the tested specimen can be taken into account, carrying out a complete and more accurate study of the deformation during indentation if necessary.

#### **2. Materials and Methods**

The present work considers different approaches to address the case study presented: experimental tests analyzed by the 2D and 3D DIC techniques, and FEM. Two different experimental tests were carried out: compression tests to obtain the material behaviour for the FEM model, and indentation tests for the flow analysis. Once the material was characterized, the indentation tests applying the 2D and 3D DIC analysis were performed, adapting the methodology to each case study. The results obtained from both methodologies (FEM and DIC) were compared.

In the following sections, the equipment used is described, as well as the material, test specimens, and the performance of the tests.

#### *2.1. Materials and Specimens*

To achieve a deep indentation, the specimens were manufactured with 99% tin (NB1101003). Tin bars were melted in order to obtain sand-casted bulk ingots (Figure 3a,b), from which the specimens (40 × 30 × 30 mm) were machined (Figure 3c). Since the tests are performed under plane strain conditions and the indenter is 4 mm wide, each specimen depth must be between 6 and 10 times that width, so most of the specimen behaves as in plane strain conditions, except for the vicinity of the free surfaces, where plane stress behaviour can be considered. An AISI 304 steel punch was used. A restraining tool was also needed to avoid punch inclination during the indentation process. Figure 3d shows the restraining tool designed. It embeds the upper area of the punch, and a lateral compression force, provided by two fixing screws, stabilizes the punch. The fixing screws push a bar that homogeneously distributes the force over the whole embedded punch surface, making sure that it does not affect the indentation results.

**Figure 3.** (**a**) Casting process, (**b**) tin ingot, (**c**) machined sample, (**d**) fixing tool for the stabilization of the indentation process, and (**e**) mottling pattern.

Owing to the lack of a natural pattern on the surface of the samples (Figure 3), and since it is essential to avoid specular reflections that can saturate the camera sensors, an artificial pattern was conferred to each specimen. The samples were painted with spray paint in order to generate a random mottling. A white coat was sprayed first, making sure the light would not reflect off the metal surface and saturate the image. After the white coat dried, a black mottling was sprayed from a greater distance to avoid thick droplets (Figure 3e). This pattern allows points to be recognized before and after the deformation process. The speckles should be neither too small nor too large. If the pattern is too large, it will be necessary to increase the subset size, at the cost of spatial resolution. If the pattern is too small, the image will be very sensitive to defocus, the resolution of the camera may not be sufficient, and aliasing problems will appear. In this work, the size of the speckles was selected in order to have at least five speckles in the subset [48].

#### *2.2. Equipment*

Although the equipment used for image capture is similar for the 2D and 3D DIC methodologies, the subsequent processing differs. The main differences lie in the calibration process before the image capture and in the image analysis afterwards.

For the 2D analysis, an Allied Stingray F-504 digital camera (Allied Vision, Stadtroda, Germany) of 5 megapixels was used, with a cell size of 3.45 μm × 3.45 μm. The camera was equipped with a Pentax C7528-M lens (Ricoh, Barcelona, Spain). This lens is specially designed for image processing applications and is intended to maximize picture performance at short distances, with a 75 mm focal length. The field of view (fov) for a distance of 0.6 m was 62.65 mm, with a pixel size of 30 μm. A 2 Hz frame acquisition frequency was established for the image capture, and the images were processed with the software VIC SNAP [49] and VIC 2D [50] after the tests were conducted.

For the 3D analysis, binocular stereovision was needed, so two Grasshopper3 GS3-U3-123S6M-C cameras (Flir, Wilsonville, OR, USA) of 12.3 megapixels were used, with a cell size of 3.45 μm × 3.45 μm. The cameras were equipped with Fujinon HF50HA-1B lenses (Fujifilm, Japan) with a 50 mm focal length. The field of view (fov) for a distance of 0.6 m was 80 mm, with a pixel size of 30 μm. The cameras were placed around the specimens, providing enough information for the 3D study. For the illumination of the test area, a Hedler DX 15 spotlight (150 W metal halide lamp) (Hedler Systemlicht, Runkel, Germany) was used in both the 2D and 3D analyses. Figure 4 shows the sets for both analyses.

**Figure 4.** Test set for the (**a**) 2D and (**b**) 3D digital image correlation (DIC) analysis.

The calibration procedures for 2D and 3D DIC differ from each other. For the 2D image capture, it is only necessary to capture an image with the sample and a reference element near it, as Figure 5a shows. Once the indentation process is completed and all the images are captured and stored, the calibration can take place in VIC 2D. The first image is the one in which the specimen appears next to a ruler graduated in mm as the reference element (Figure 5a). This image is analysed by the software: with the ruler next to the specimen, a line of a certain length is drawn and scaled against the ruler. With that reference, the software calibrates itself and processes the rest of the images.

**Figure 5.** (**a**) Calibration for 2D and (**b**) 3D with the 3D pattern.

The 3D calibration requires more stages because it needs to calibrate the intersection of two optical rays formulated in a common coordinate system, i.e., a stereo-triangulation process. A calibration target from Correlated Solutions Inc. was selected to cover the fov, with a hole array spaced every 5 mm (Figure 5b). This calibration target is placed where the specimens will undergo the experimental tests and is moved through arbitrary motions along the three axes, capturing at least five images per axis, rotating the pattern left/right and up/down, and taking four or five additional images in random positions. The calibration system has recognition software that determines the correspondences between the target points in the captured images, knowing in advance the shape and scale of the target used, as seen in Figure 5b. The recognition program determines the situation of the cameras from the correspondence between the images captured with both cameras. After the calibration process, the specimens need to be placed in the same position where the calibration target was situated, so it is essential to maintain the cameras' position, focus, zoom, and illumination, marking the test area to ensure good positioning between tests.

A universal tension-compression machine Servosis ME 405 (Servosis Teaching Machines, Madrid, Spain), equipped with a 5 kN load cell, was used for the indentation tests. To improve the image capturing and displacement precision, the tests were set at 1 mm/min speed, synchronizing test start and ending with the image capture. The tension-compression machine allows obtaining the load forces and displacements of the tool. With previous synchronization of the digital image acquisition, it is possible to know the load-time and displacement-time evolution for each test performed.

For the image analysis, it is necessary to identify the window or area of interest in which the analysis will be performed. Figure 6 shows this window as a red area. After a subset is selected, it is necessary to find the new position of this subset in the next image. The subset is moved across the area of interest in increments of a selected number of pixels (the step), and the correlation is calculated according to Equation (1). A low step value can give more accurate results, but the computational cost increases.

In previous studies [23], a parametric step and subset study was carried out to obtain their optimum values, taking into account the computation time and the precision obtained. The step and subset size were set to 2 and 45, respectively, to obtain a confidence below 0.001 pixels for the match location, which offers good-quality results (Figure 6a). The larger the confidence value becomes, the more information can be lost during the process. The step size controls the density of the analyzed data: with the step set at 2, the software carries out the correlation at every other pixel in both the horizontal and vertical directions. A low step value leads to a more accurate analysis, but it increases the computation time, which varies inversely with the square of the step size.
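Continuing the earlier correlation sketch, the step-wise sweep over the area of interest could look roughly as follows (illustrative only; the VIC software performs this internally with sub-pixel refinement), which also makes clear why the number of evaluations, and hence the time, grows roughly with 1/step²:

```python
def correlate_area(ref_img, def_img, x0, y0, x1, y1, n=45, step=2, search=10):
    """Sweep the subset over the area of interest [x0, x1) x [y0, y1) with the
    given step (in pixels) and return the displacement field as nested lists,
    reusing the ssd_correlation sketch defined earlier."""
    us, vs = [], []
    for y in range(y0, y1, step):
        row_u, row_v = [], []
        for x in range(x0, x1, step):
            u, v, _ = ssd_correlation(ref_img, def_img, x, y, n, search)
            row_u.append(u)
            row_v.append(v)
        us.append(row_u)
        vs.append(row_v)
    return us, vs
```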

**Figure 6.** Subset placement at the beginning of the tests and test completed for (**a**) 2D and (**b**) 3D DIC analysis.

For the 3D analysis, the area of interest is placed only on the nearest half of the specimen to avoid interferences when the punch starts penetrating (Figure 6b). The 3D software, VIC 3D [51], suggests a subset size depending on the image supplied, adapting the size for each sample. The step size is set to 3. Once a first correlation is launched, a von Mises analysis can be performed in order to obtain the displacement and strain fields from each test. The analysis time therefore increases considerably in this case compared with the 2D study.

On the basis of previous studies [52–54], the optimal mesh is established with 1000 elements, two mesh windows with a 1/10 relation, and remeshing every two steps to avoid element distortion. Plane strain conditions are considered for the 2D analysis, taking into account the dimensions of the specimen and the punch. Vertical displacements are fixed at the workpiece base, without friction. The elements used are two-dimensional, four-node plane strain elements. For the punch–workpiece contact, a shear-type friction of 0.12 was considered [50]. For the material behaviour, a tabular data format (σ = σ(ε, ε̇, *T*), where σ is the flow stress, ε is the effective plastic strain, ε̇ is the effective strain rate, and *T* is the temperature) was selected [50] in order to introduce the data obtained from the compression tests performed for this purpose.

#### **3. Results**

Figure 7 shows the compression tests implemented to obtain the material behaviour necessary for the FEM model. Figure 7a shows the compression values obtained from the experimental tests and FEM simulation, in order to validate the material model introduced and adjust it for the indentation models.

Regarding the indentation tests, five specimens were tested for each of the 2D and 3D DIC analyses, making 10 indentation tests in total. Figure 8 shows the comparison between the mean results obtained from the indentation tests and the 2D and 3D FEM analyses. It can be seen that the numerical and experimental results are in good agreement. For a deep indentation, starting from 5 mm, the FEM forces are higher than those obtained experimentally. This may be due to greater element deformation and a coarse mesh at that stage of the process. Nevertheless, the 2D and 3D FEM models offer results consistent with the tests performed, being a good approximation for a first analysis of the indentation process.

Figure 9 presents the main differences between the 2D and 3D FEM models. Because a larger mesh is needed to cover the whole 3D model, the 2D analysis takes much less computing time. Nevertheless, with the 2D analysis it is not possible to simulate the material nose or dead zone that develops under the punch while the indentation is being performed (Figure 10). This dead zone can be observed during the 3D simulation, and the material flow can be examined. However, knowing that the main material flow occurs in the surroundings of the material nose and that the strain and stress results obtained from both models are similar, the 2D model can be considered an adequate solution for this case study.

**Figure 7.** Results obtained from the compression tests, (**a**) experimental and 3D FEM simulation and (**b**) specimen subjected to the compression test.

**Figure 8.** Comparison between results obtained from the indentation test and the 2D and 3D models developed.

**Figure 9.** Comparison between results obtained from the indentation process and the 2D and 3D models developed.

**Figure 10.** Specimen after an indentation process and detail of the material nose developed under the punch during its penetration.

Figure 11 presents the main differences between the 2D and 3D DIC analyses performed with the 2D and 3D VIC software. On the one hand, for the 2D study, the whole front surface is selected for the image analysis, with the main strains concentrated under the punch. For the 3D study, only half of the specimen can be selected: owing to the cameras' positioning, it is not possible to have a frontal capture, and as the punch progresses, it hides part of the specimen. However, this case study considers a symmetric specimen, so it is possible to reduce the analysis to one half. Also, even when the entire half of the sample is selected, the selection is reduced during the correlation analysis to the area where the main material flow takes place.

**Figure 11.** Results obtained with 3D and 2D DIC analysis.

On the other hand, the 3D DIC analysis takes more computational time, and achieving results with a good confidence value is a greater challenge, owing to the combination of two cameras and the random pattern. Figure 12 shows part of the results obtained for the 3D analysis, with the von Mises strain values represented in Figure 12a and their virtual 3D representation in Figure 12b.

**Figure 12.** Von Mises strains obtained with 3D VIC, (**a**) projected on the specimen 1, (**b**) 3D representation for specimen 1.

Focusing on the von Mises strain results under the punch, the maximum strain obtained for the 2D analysis is over 3.5, while the 3D analysis shows a maximum of 3.58 over the bulge that emerges under the punch, and the FEM results are over 3.1 (Figure 13). For the Z displacement (Figure 10), the whole specimen set was measured after the indentation tests, presenting an average displacement of 3.07 mm, so the average error with respect to the correlation displacement results is 2.67%.

**Figure 13.** Von Mises comparison between (**a**) 2D DIC and 2D finite element method (FEM) analysis and (**b**) 3D DIC and 2D FEM analysis.

#### **4. Conclusions**

The aim of the present work is to compare the application and results of the 2D and 3D DIC technique analyzing the material flow that takes place in a deep indentation process. Also, a validation between DIC perspectives and FEM is made in order to establish the methodology and select the optimal analysis method, showing the advantages and disadvantages of each one.

Both DIC methods can be considered efficient for the analysis carried out, adequately identifying the material flow field and the von Mises strains on the specimens studied. The von Mises strain results obtained are in good agreement with each other and with the results obtained from the FEM analysis: 3.5 (2D), 3.58 (3D), and 3.1 (FEM). Nevertheless, depending on the purpose of the analysis, the 3D or 2D technique may be better or worse suited to the study. If the specific area of material generated under the punch needs to be examined, in order to know how much it grows along the Z axis or the material flow that occurs at that precise point, the 2D technology is not able to provide such information. Focusing on the dead material under the punch, the 3D DIC analysis provides a Z displacement of 2.99 mm, versus an average displacement of 3.07 mm measured on the tested specimens and 2.90 mm provided by the FEM results.

For a general analysis of the process, the more adequate method is the 2D DIC analysis rather than its 3D variant. The set-up time is considerably reduced (about a 60% reduction) because there is no need to calibrate the cameras before the image capture; during the 2D analysis, it is possible to integrate the calibration by having one of the captured images include a reference measurement element. The 3D analysis needs a proper calibration of the axes before placing the specimens and starting the indentation tests. Furthermore, the computational time for image processing with the 2D technique is also reduced compared with the 3D option. With the results obtained being similar, 3D DIC is not the ideal solution for a general analysis in this case study.

Therefore, a mixed 2D–3D analysis is proposed, in which both cameras are placed so that the captured images can be used for a 2D analysis. When the study of a specific point of the material that evolves along the Z axis is needed, it is possible to implement the 3D analysis by simply integrating the images of the second camera into the 3D software.

**Author Contributions:** Conceptualization, C.B.G.; methodology, C.B.G., G.C.L., and S.M.-B.; software, C.B.G., G.C.L., and F.J.T.V.; validation, C.B.G., G.C.L., and S.M.-B.; formal analysis, C.B.G., F.J.T.V., and L.S.H.; writing—original draft preparation, C.B.G.; writing—review and editing, L.S.H., G.C.L., and F.J.T.V.; supervision, L.S.H.

**Funding:** This research received no external funding.

**Acknowledgments:** The authors want to thank the University of Malaga—Andalucía Tech, International Campus of Excellence and the state subprogram of scientific and technical infrastructures and equipment, co-financed by the Ministry of Economy and Competitiveness and with European Union Structural Funds (Ref: UNMA 15-CE-3571).

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

### *Article* **Parametric Analysis of the Mandrel Geometrical Data in a Cold Expansion Process of Small Holes Drilled in Thick Plates**

#### **Jose Calaf-Chica 1,\*, Marta María Marín 2, Eva María Rubio 2, Roberto Teti <sup>3</sup> and Tiziana Segreto <sup>3</sup>**


Received: 31 October 2019; Accepted: 3 December 2019; Published: 8 December 2019

**Abstract:** Cold expansion technology is a cold-forming process widely used in aeronautics to extend the fatigue life of riveted and bolted holes. During this process, an oversized mandrel is pushed through the hole in order to yield it and generate compressive residual stresses that contribute to the fatigue life extension of the hole. In this paper, a parametric analysis of the mandrel geometrical data (inlet angle, straight zone length, and diametric interference) and their influence on the residual stresses was carried out using the finite element method (FEM). The obtained results were compared with the conclusions presented in a previous parametric FEM analysis on the influence of the swage geometry in a swaging cold-forming process of gun barrels. This process could be considered, in a simplified way, as a scale-up of the cold expansion process of small holes, and this investigation demonstrated the influence of the diameter ratio (K) on the relation between the mandrel or swage geometry and the residual stresses obtained after the cold-forming process.

**Keywords:** cold expansion; mandrel; cold-forming; swaging

#### **1. Introduction**

The autofrettage cold-forming process for thick-walled cylinders has its origin in the artillery technology of the 19th century. Gun barrels showed a significant development in range, power, and reliability because of the application of an innovative manufacturing process: the generation of internal compressions at the bore with a hoop installed on the barrel with enough interference. In the 1920s, a new cold-forming process was developed to obtain similar residual compressive stresses using hydraulic pressure inside the barrel. It was called hydraulic "autofrettage", a French term that means "self-hooping", owing to the absence of hoops to obtain the residual stresses [1]. The development of new steel alloys during the first half of the 20th century, with higher yield and ultimate tensile strengths, generated an increment of the autofrettage pressure necessary to reach the required residual stresses. The inherent difficulties in reaching these pressure levels motivated the development of alternative processes (explosive autofrettage [2,3], double-layer autofrettage [4], thermal autofrettage [5], rotational autofrettage [6], etc.). In this context, around the middle of the last century, an innovative cold-forming process for gun barrels was presented [7]: mechanical autofrettage, or swaging. In this process, a swage with a diametrical interference with the barrel is forced to pass through the bore, generating compressive residual stresses. The necessary pressure to move the swage was significantly lower than the equivalent hydraulic pressure in hydraulic autofrettage.

The cold-forming process with swages found alternative applications in aeronautics in the 1970s to extend the fatigue life of fastened parts on aircraft. In 1974, Boeing's materials research and development division developed a swaging process called "coldworking", which was applied to fastened holes to extend their fatigue life [8,9]. An oversized swage or mandrel was pushed through the hole to yield its inner diameter. The remaining elastic zone after the swaging process generated a residual compressive hoop stress, which was the origin of the delay in crack initiation. Although there are many investigations related to the effect of cold expansion on holes [10,11] and failure mode analyses of riveted assemblies [12], there are fewer studies centred on the geometrical parameters of the cold expansion process. An optimization of the mandrel geometry would lead to an increase of the compressive residual stresses obtained after the cold-forming process and an increase in the fatigue life of the fastened assemblies. The cost of maintenance in the aeronautical industry is directly related to the frequency of the periodic inspections. This frequency is calculated with analytical and numerical analyses of crack propagation (fatigue and damage tolerance analyses). Most crack initiation locations in aeronautics are considered to be in fastened holes, and an aircraft contains a great number of holes. The cold expansion process is a technique to extend the fatigue life of holes, retarding the initiation of a crack and slowing down the crack propagation. Therefore, this extension in fatigue life would reduce the necessary frequency of some periodic inspections, reducing the cost of maintenance and extending the economic life of the airplanes.

The main difference between both applications of this cold-forming process, swaging of thick-walled cylinders and cold expansion of fastened holes in aircraft, would seem to be only the size of the hole: big holes in thick-walled cylinders and small holes in aeronautics. The swage geometry and how it affects the residual stresses could be similar in both applications, because one is a scale-up of a similar cold-forming process.

Figure 1 shows a schematic representation of the most important parts in a cold-forming process of swaging:


**Figure 1.** Schematic representation of the swaging process.

Figure 2 shows the application of the swaging process in a hole drilled in a thick plate, representing a sketch of the application of the cold expansion process for fastened holes in aeronautics. When the ratio between the thickness of the plate and the hole diameter is high enough (thick plates and plane strain condition), the main difference between thick-walled cylinders and holes in plates lies in the *K* ratio or diameter ratio of the cylinder (*K* = *Dext*/*Dint*). In thick-walled cylinders, *K* is limited to small values, while it takes very high values for holes in plates.

**Figure 2.** Sketch of a cold expansion process in a small hole.

In both cases (Figures 1 and 2), the residual stress field obtained after the cold-forming swaging process is typically represented in a polar coordinate system. The principal stresses (σ1 = hoop, σ2 = longitudinal, and σ3 = radial) are shown in Figure 3. Figure 4 reports an example of the residual stress field along the normalized radial distance *r* = (*D* − *Dint*)/(*Dext* − *Dint*) at the middle longitudinal distance of a hole. After the cold expansion forming process, a compressive residual hoop stress (Figure 4a) is generated at the inner radius of the hole, preventing and retarding any crack initiation.

**Figure 3.** Sketch of stress field nomenclature.

The main geometrical parameters of the mandrel are the inlet angle (α), the outlet angle (β), the length of the straight zone (*SZ*), and the major diameter (*ds*) (see Figure 5). The swage major diameter is relevant only in relation to the hole diameter, through the interference between them (*int*).

The most important reference for the investigation of the influence of the swage geometrical data for thick-walled cylinders is the research published by Gibson et al. [13,14]. A systematic finite element method (FEM) analysis of a swaging process in a cylinder with *K* = 2.257 was performed to obtain the residual stress field and quantify the influence of the geometrical parameters of the swage on the stress field. Gibson found that an increment of the inlet and outlet angles and a reduction of the length of the straight zone generated an increase of the residual hoop stresses at the bore of the hole. Moreover, these increments of the inlet and outlet angles and the reduction of the length of the straight zone generated a reduction of the yielded area after the cold expansion process.

**Figure 4.** Residual (**a**) hoop, (**b**) radial and (**c**) longitudinal stresses after a cold expansion forming process.

**Figure 5.** Sketch of a mandrel or swage with the geometrical parameters.

In the case of the cold expansion of riveted holes in aeronautics, there is less research related to the influence of the mandrel geometrical parameters on the residual stresses. There are investigations using steel balls as mandrels [15] or swages, as previously shown in Figure 5 [16], but there is no systematic FEM analysis such as Gibson's study applied to small holes.

Starting from Gibson's parametric analysis of the mandrel geometry and its influence on the residual stresses [13,14], this paper focuses on the specific cold-forming application for small holes drilled in thick plates. Similarities and differences in the cold-forming process applied to small holes and thick-walled cylinders were analysed to verify or refute the applicability of the conclusions published by Gibson et al. to both scenarios.

#### **2. Materials and Methods**

This investigation was centred on a parametric numerical analysis, through the finite element method (FEM), of the mandrel geometrical data during a cold expansion process for a small hole. ANSYS v19 software was used to simulate the cold-forming process. The evaluated mandrel geometrical parameters were the inlet angle, the length of the straight zone, and the interference between the swage and the hole. The outlet angle was fixed at β = 3◦. A full factorial DOE (Design of Experiments) was designed to analyse every possible combination of the geometrical parameters (inlet angle, length of the straight zone, and diametric interference).

Table 1 summarizes the geometrical data included in the performed FEM analysis: four levels for the inlet angle (α), three levels for the straight zone length (*SZ*), and three levels for the diametric interference (*int*). A total of 36 simulations of the cold expansion forming process were run (full factorial DOE). The selected ranges for each geometrical parameter were based on the optimum values deduced in previous investigations by Gibson [13] and O'Hara [17].
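As a rough illustration of the size of this full factorial design, the sketch below enumerates the 36 parameter combinations in Python; the levels used here are inferred from the simulation IDs quoted in the Results section (inlet angles of 0.75–2.0°, interferences of 0.10–0.20 mm, and straight zone lengths of 0.5–1.5 mm) and should be read against Table 1.

```python
from itertools import product

# Levels inferred from the simulation IDs discussed later in the text.
inlet_angles = [0.75, 1.0, 1.5, 2.0]     # degrees
interferences = [0.10, 0.15, 0.20]       # mm
straight_zones = [0.5, 1.0, 1.5]         # mm

# Full factorial DOE: every combination of the three parameters.
doe = list(product(inlet_angles, interferences, straight_zones))
print(len(doe))   # 36 simulations, one per combination
```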

**Table 1.** Mandrel geometry analysed in the parametric FEM simulations.


For the plate geometry, a high ratio *L* = *t*/*Dhole* = 22 was selected to consider a thick plate (where *t* = 75 mm was the plate thickness and *Dhole* = 3.4 mm was the hole diameter; see Figure 6). As mentioned previously, the aim of the *L* ratio was to avoid the effect of the hole ends in order to evaluate and compare the results with the thick-walled cylinders.

**Figure 6.** Sketch of the drilled plate.

In the specific case of small holes, mandrels are usually manufactured with high-strength steels; considering the advances of powder metallurgy in the development of tungsten carbide parts, in this paper a CTS12L alloy manufactured by Ceratizit [18] was selected for its high strength and stiffness (ultimate tensile strength of σ*u* = 3500 MPa and Young's modulus of *E* = 630 GPa).

Moreover, the material selected for the plate was the precipitation-hardening stainless steel 15-5 PH H1025, whose mechanical properties are listed in Table 2 as specified in the AMS5659 standard [19]. As a note, it is important to point out that a plate material with lower plastic mechanical properties would lead to a reduction of the maximum residual stresses that the cold expansion process could obtain, but the influence of each geometrical parameter evaluated in this investigation would show similar tendencies in the residual stresses.



The stress-strain curve of the considered steel was approximated with the Ramberg–Osgood hardening law [20] (see Equation (1)). Equation (2) deduced by Kamaya et al. [21] was used to calculate the hardening coefficient *n*, obtaining *n* = 30.3. The plastic hardening was simulated with a kinematic hardening model to introduce the Bauschinger effect in the cold forming process.

$$
\varepsilon = \frac{\sigma}{E} + \varepsilon_{offset} \left( \frac{\sigma}{\sigma_y} \right)^n \tag{1}
$$

$$n = 3.93 \left[ \ln\left( \frac{\sigma_{u\_eng}}{\sigma_y} \right) \right]^{-0.754} \tag{2}$$

where:

*E*: Young's modulus,

ε: true strain,

ε*offset*: offset strain (0.002) to obtain the yield strength,

σ: true stress,

*n*: hardening coefficient,

σ*u\_eng*: engineering ultimate tensile strength, and

σ*y*: yield strength.
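For reference, Equations (1) and (2) can be evaluated directly with a short sketch such as the one below (function names are illustrative); the stress values for the 15-5 PH H1025 plate are those of Table 2 and are not reproduced here.

```python
import math

def hardening_coefficient(sigma_u_eng, sigma_y):
    """Equation (2) (Kamaya et al.): hardening coefficient n from the
    engineering ultimate tensile strength and the yield strength (same units)."""
    return 3.93 * math.log(sigma_u_eng / sigma_y) ** (-0.754)

def ramberg_osgood_strain(sigma, E, sigma_y, n, eps_offset=0.002):
    """Equation (1): total true strain for a given true stress sigma."""
    return sigma / E + eps_offset * (sigma / sigma_y) ** n
```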

To reduce the computational requirements of the numerical analyses, an axisymmetric model was considered. In the case of a thick plate, the outer diameter of the part needs to be defined (see Figure 7). Lamé's equation for the hoop stress at the inner diameter of a thick-walled cylinder with an inner pressure of 1000 MPa (see Equation (3)) was used to fix a specific value for the outer diameter.

$$
\sigma_{hoop} = \frac{D_{ext}^2 + D_{int}^2}{D_{ext}^2 - D_{int}^2} \cdot P_{int} \tag{3}
$$

where:

*Dext*: outer diameter, *Dint*: inner diameter, *Pint*: internal pressure, and σ*hoop*: hoop stress.

Fixing the inner diameter at *Dint* = 3.4 mm and the internal pressure at *Pint* = 1000 MPa, the hoop stress at the hole's bore tended towards a stabilized value equal to the internal pressure as the outer diameter (*Dext*) increased (see Figure 8). Therefore, an outer diameter of 30 mm could be considered equivalent to a hole distant from any discontinuity (other holes or plate ends). Figure 9 shows the geometry considered for the axisymmetric model of the plate.
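A short numerical check of Equation (3) illustrates this stabilization (dimensions in mm and pressures in MPa, using the inner diameter and pressure given above):

```python
def lame_hoop_stress(d_ext, d_int=3.4, p_int=1000.0):
    """Equation (3): hoop stress at the inner diameter of a thick-walled
    cylinder (or a plate with a distant boundary) under internal pressure."""
    return (d_ext**2 + d_int**2) / (d_ext**2 - d_int**2) * p_int

# The hoop stress approaches the internal pressure as the outer diameter grows,
# which is why Dext = 30 mm behaves like a hole far from any discontinuity.
for d_ext in (7.67, 15.0, 30.0, 60.0):
    print(d_ext, round(lame_hoop_stress(d_ext), 1))
```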

**Figure 7.** Geometry of the thick plate as an axisymmetric model.

**Figure 8.** Hoop stress (σ*hoop*) in the inner diameter vs. the outer diameter.

**Figure 9.** Geometry of the axisymmetric model.

The FEM simulations were identified with the ID Aexxx-Ixxx-Lrxx, where "Ae" represents the inlet angle of the mandrel, followed by a numerical value in hundredths of a degree; "I", the diametric interference in hundredths of a millimeter; and "Lr", the length of the straight zone in tenths of a millimeter. For example, the ID Ae200-I020-Lr15 corresponds to a mandrel with an inlet angle of α = 2◦, a diametric interference of *int* = 0.2 mm, and a straight zone length of *SZ* = 1.5 mm.
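Following this naming convention, an identifier can be generated for any combination of the DOE; the helper below is an illustrative sketch, not part of the original workflow.

```python
def simulation_id(inlet_angle_deg, interference_mm, straight_zone_mm):
    """Encode a mandrel geometry as Aexxx-Ixxx-Lrxx: inlet angle in hundredths
    of a degree, interference in hundredths of a millimeter, and straight zone
    length in tenths of a millimeter (helper name is an assumption)."""
    return "Ae{:03d}-I{:03d}-Lr{:02d}".format(
        round(inlet_angle_deg * 100),
        round(interference_mm * 100),
        round(straight_zone_mm * 10),
    )

# simulation_id(2.0, 0.20, 1.5) -> "Ae200-I020-Lr15"
```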

For the contact between the mandrel and the hole, a friction coefficient μ = 0.05 (typical for a lubricated contact) was fixed for the simulations. A mesh sensitivity analysis was performed with fine mesh sizes of *ms* = 0.4, 0.3, and 0.2 mm established on the edge of the inner diameter of the plate and the outer face of the mandrel, with Quad8 and Tria6 elements (see Figure 10). Figure 11 shows the residual hoop stresses obtained after the cold expansion process for the different *ms* values. These results showed no significant alterations, and the mesh size of 0.2 mm was established as the standard for this FEM analysis, showing an orthogonal quality of *OQ* = {0.602, 1.000} and a skewness of *S* = {1.114 × 10<sup>−3</sup>, 0.591}. These mesh metrics are good enough to guarantee no convergence problems related to mesh quality. The boundary conditions of the model were a restricted longitudinal movement of the inlet edge of the plate (location B in Figure 12) and an imposed longitudinal movement of the outlet edge of the mandrel equal to 86 mm (location A in Figure 12).

**Figure 10.** Mesh of the cold expansion simulation (*ms* = 0.2 mm).

**Figure 11.** Sensitivity mesh analysis.

**Figure 12.** Boundary conditions of the FEM model.

This FEM model was verified by comparison with the Gibson FEM results [13], changing only the geometrical parameters and material properties of the previously shown model to adapt it to the Gibson cylinder geometry. Table 3 shows the corresponding geometrical data and mechanical properties, and Figure 13 shows the FEM model used for this comparison. Figure 14 represents the comparison of the residual hoop stresses (normalized by the yield strength) versus the normalized radial position (from 0, the inner radius, to 1, the outer radius). The similar results ensured the feasibility of the FEM model designed for this investigation.

**Figure 13.** FEM model for the comparison with the Gibson FEM model.

**Figure 14.** Comparison between residual hoop stresses obtained with the FEM model and the Gibson FEM model.


**Table 3.** Geometry and mechanical properties of the Gibson FEM model [13].

#### **3. Results**

The results obtained from the FEM simulations of the mandrel geometrical data during the cold expansion process are reported in Figures 15–19. Figure 15 shows the Von Mises stress during the cold expansion process simulation, where the mandrel has been pushed up to half of the plate thickness. This figure represents the simulation ID Ae200-I020-Lr15 (inlet angle: 2.0◦; diametric interference: 0.2 mm; straight zone length: 1.5 mm). The most critical Von Mises stress is in the mandrel, with values close to 3000 MPa. Therefore, this part needs a very high yield strength to guarantee no plastic behaviour of the mandrel during the process.

**Figure 15.** Von Mises stress (σ*VM*) during the cold expansion process.

The radial stress field for the same ID Ae200-I020-Lr15 is represented in Figure 16. The most stressed location matched the contact between the mandrel and the hole in the straight zone of the mandrel. The contact pressure reached up to −3600 MPa. This is feasible due to the high hydrostatic component in the stress field: the hoop stress reached −3600 MPa (see Figure 17) and the longitudinal stress showed values of −2600 MPa (see Figure 18).

**Figure 16.** Radial stress (σ*rad*) during the cold expansion process.

Figure 19 shows the triaxiality for the same ID Ae200-I020-Lr15. Focusing the analysis on the region where the most critical stresses shown in the previous figures appear (i.e., the contact region between the mandrel and the hole), the triaxiality reached *FT* = −2.29 at the most critical location of the plate. This fact verified the previous hypothesis: with the high compressive hydrostatic component of the stress field (equivalent to a very negative triaxiality), it was feasible to reach a very high contact pressure during the cold-forming process.

**Figure 19.** Triaxiality during the cold expansion process.

Figure 20a–c displays the residual hoop, radial, and longitudinal stresses along the normalized radial distance in the plate (0, the hole radius; 1, the outer radius) after the cold expansion process for the IDs Ae075-I010-Lr05, /-Lr10, and /-Lr15. Changes in the length of the mandrel straight zone did not generate any significant variation in the residual stresses after the process. Figure 20d shows the maximum load needed to push the mandrel through the hole; this process parameter increased with the length of the straight zone.

**Figure 20.** (**a**) Hoop; (**b**) radial; (**c**) longitudinal residual stresses along the normalized radial distance after the cold expansion process and (**d**) maximum pushing load in the mandrel (IDs Ae075-I010).

Figure 21 shows the hoop residual stress and the maximum pushing load of the mandrel for the IDs Ae075-I015 and Ae075-I020. Similar results were obtained: the modification of the straight zone length did not generate any significant variation of the residual stresses for mandrels with an inlet angle of 0.75◦ and diametric interferences of 0.15 and 0.20 mm with the hole. Interference growth generated:


**Figure 21.** *Cont*.

**Figure 21.** (**a**) Hoop residual stress along the normalized radial distance after the cold expansion process and (**b**) maximum pushing load in the mandrel [(**1**) IDs Ae075-I015; (**2**) IDs Ae075-I020].

In Figure 22, the residual hoop stress and the maximum pushing load of the mandrel are reported for the IDs with an inlet angle of 1.0◦: Ae100-I010, Ae100-I015, and Ae100-I020 with /-Lr05, /-Lr10, and /-Lr15. The variation of the straight zone length did not generate any change in the residual stresses, increasing only the maximum pushing load of the mandrel. Comparing these results with the previously considered inlet angle of 0.75◦, the residual hoop stress did not change significantly, but the maximum pushing load of the mandrel diminished with the increase of the inlet angle.

**Figure 22.** (**a**) Hoop residual stress along the normalized radial distance after the cold expansion process, and (**b**) maximum pushing load in the mandrel ((**1**) IDs Ae100-I010; (**2**) IDs Ae100-I015; (**3**) IDs Ae100-I020).

The results obtained for inlet angles equal to 1.5◦ and 2.0◦ are reported in Figures 23 and 24, respectively.


**Figure 23.** (**a**) Hoop residual stress along the normalized radial distance after the cold expansion process and (**b**) maximum pushing load in the mandrel ((**1**) IDs Ae150-I010; (**2**) IDs Ae150-I015; (**3**) IDs Ae150-I020).

**Figure 24.** *Cont*.

**Figure 24.** (**a**) Hoop residual stress along the normalized radial distance after the cold expansion process, and (**b**) maximum pushing load in the mandrel ((**1**) IDs Ae200-I010; (**2**) IDs Ae200-I015; (**3**) IDs Ae200-I020).

#### **4. Discussion**

The geometrical parameters of the mandrel evaluated in this study, the inlet angle and the straight zone length, showed no significant influence on the residual stresses after the cold expansion process. This behaviour did not match the results shown by the numerical analysis performed by Gibson et al. [13,14], where there was a significant variation of the hoop stress of thick-walled cylinders when the geometrical parameters of the swage were modified. Figure 25 shows the residual hoop stress obtained from the analysis performed by Gibson [13], where the influence of the straight zone length was analysed. An increase of this mandrel length did not change the residual hoop stress significantly at first, but from a length of 1.0 up to 4.5, the compressive hoop stress at the smaller diameters of the cylinder showed an appreciable reduction. Therefore, the length value of 1.0 appeared as a limit.

**Figure 25.** Normalized hoop residual stress along the normalized radial distance for different lengths of the straight zone in the Gibson et al. analysis [13].

This contradiction between the swaging of thick-walled cylinders and the cold expansion of small holes could be explained by the difference between the diameter ratios *K* of both geometries. The hole analysed in this investigation had *K* = 30/3.4 = 8.8, whereas Gibson's study analysed a cylinder with *K* = 2.257. Other investigations focused on the analysis of the geometrical parameters of the swage used similar *K* ratios: O'Hara (*K* = 2.257 [17]), Bihamta et al. (*K* = 3.18 [22]) and Alinezhad et al. (*K* = 3.18 [23]). Therefore, the *K* value used in thick-walled cylinders ranges from 2 to 4, a much smaller interval than the *K* values typically used in cold expansion processes. Could the *K* ratio be the origin of the previously mentioned contradiction? It could be answered with another question: what is the origin of the residual stresses after a swaging or cold expansion process? The remaining part of the cylinder or plate which has not been yielded during the process is the part that tries to return to its original position, whereas the yielded zone does not want to return to that original position. Therefore, the non-yielded zone presses on the yielded zone. If there is not a sufficient non-yielded zone, the residual compressive hoop stresses in the yielded zone will be significantly reduced. The vertical red dashed lines in Figure 25 show the limit between the yielded and the non-yielded zones for Gibson's model. The increase of the straight zone length of the swage generates an increase of the yielded zone; thus, the region of the cylinder which must press on the yielded zone to generate the residual compressive hoop stress becomes smaller and smaller. When there is not a sufficient non-yielded zone, the residual compressive hoop stress ends up diminishing. This means that an increase of the straight zone length of the swage does not reduce the capability of the swage to yield the inner diameter of the hole; this capability is actually increased, but if there is not sufficient outer material to press the hole after the cold-forming process, an increase of the yielded zone reduces the residual stresses. This could explain the lack of variation of the residual stresses in the small hole evaluated in this investigation: there is much more non-yielded zone in the plate, and the modification of the straight zone length of the mandrel did not reduce this zone enough to show consequences in the residual stresses.

In order to verify this approach, a FEM simulation of the cold expansion process evaluated in this investigation was repeated, but modifying the outer diameter from 30 mm to the value obtained with *K* = 2.257 (the same *K* value used in Gibson's study): *Dext* = 3.4 × 2.257 = 7.67 mm. Two lengths of the straight zone were evaluated: 0.5 and 1.5 mm. The remaining geometrical parameters were:


Figure 26 shows the residual hoop stress for straight zone lengths equal to 0.5 and 1.5 mm along the normalized radial distance for the cold expansion process with a diameter ratio of *K* = 2.257 instead of the *K* = 8.8 used in this investigation. The reduction of the compressive residual hoop stress at the inner diameters for *SZ* = 1.5 mm is similar to the behaviour shown by Gibson's model, but this reduction is not justified only by the increase of the straight zone length. This increment generated an extension in the diameter of the yielded zone, and the absence of enough non-yielded material to press on the yielded zone was the main cause of this reduction of the compressive residual hoop stresses. Consequently, the dependency of the residual hoop stresses on the straight zone length also depends on the *K* ratio.

In a cold expansion cold-forming process, the *K* ratio can take small values if the formed hole is close to a discontinuity (such as another hole or the end of the plate). This means that the influence of the straight zone length of the mandrel should be considered in those scenarios.

**Figure 26.** Hoop residual stress along the normalized radial distance after the cold expansion process with *K* = 2.257.

#### **5. Conclusions**

The main objective of this investigation was a parametric analysis of the influence of the mandrel geometry on the residual stresses obtained after a cold expansion process of small holes with a high ratio of plate thickness to hole diameter. Related to this general objective, this research has concluded the following:


**Author Contributions:** Conceptualization: J.C.-C.; methodology, J.C.-C. and M.M.M.; validation: J.C.-C. and M.M.M.; formal analysis: J.C.-C. and M.M.M.; investigation: J.C.-C., M.M.M., E.M.R. and R.T.; resources: M.M.M. and E.M.R.; data curation: J.C.C. and M.M.M.; writing—original draft preparation: J.C.-C.; writing—review and editing: J.C.-C., M.M.M., E.M.R., R.T and T.S.; visualization: J.C.-C.; supervision: M.M.M., E.M.R. and R.T.; project administration: M.M.M., E.M.R. and R.T.

**Funding:** This work has been funded, in part, by grants from the Industrial Engineering School-UNED (REF2019-ICF05 and REF2019-ICF08).

**Acknowledgments:** The authors thank the Research Group "Industrial Production and Manufacturing Engineering (IPME)" for the support given during the development of the present work.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

### *Article* **Shipbuilding 4.0 Index Approaching Supply Chain**

#### **Magdalena Ramirez-Peña 1,\*, Francisco J. Abad Fraga 2, Alejandro J. Sánchez Sotano <sup>1</sup> and Moises Batista 1,\***


Received: 31 October 2019; Accepted: 6 December 2019; Published: 10 December 2019

**Abstract:** The shipbuilding industry shows a special interest in adapting to the changes proposed by Industry 4.0. This article proposes the development of an index that indicates the current situation, considering that the supply chain is a key factor in any type of change, and that at the same time serves as a control tool in the implementation of improvements. The proposed indices provide a first definition of the paradigm or paradigms that best fit the supply chain in order to improve its sustainability, and a second definition regarding the key enabling technologies for Industry 4.0. The values obtained put shipbuilding on the road to Industry 4.0 while suggesting a categorized planning of technologies.

**Keywords:** shipbuilding; LARG paradigm; supply chain

#### **1. Introduction**

Every industrial revolution has brought important improvements in terms of manufacturing. Since 2015, industry has been working on the so-called fourth industrial revolution, or Industry 4.0. This fourth industrial revolution introduces new advanced production models with new technologies that allow the digitalization of processes, services, and even business models [1]. Among other innovations, Industry 4.0 (I4.0) gives rise to the inclusion of the social aspect in the definition of the performance model of manufacturing processes, thus completing the economic, energy, environmental, and functional aspects considered until now [2].

It is also very common nowadays to use distributed manufacturing systems, which consist of manufacturing components in different physical locations and then, through supply chain management, bringing them together for the final assembly of a complex product [3]. Within the shipbuilding industry, there are two distinct fields of work, one dedicated to the repair, maintenance, or improvement of ships already built and the second dedicated to new ships. Focusing on new construction, and referring to it as shipbuilding, it can be considered a case of distributed manufacturing, where the different blocks that constitute the ship are built in different workshops belonging to the same manufacturing center and are assembled afterwards in the dock. Therefore, shipbuilding is a complex manufacturing process that must adapt to I4.0 in order to progress. In this case, shipbuilding is a complex industry, with a complex structure composed of a large number of suppliers of different locations, sizes, and typologies [4]. In addition, any small change made by each part of this structure not only affects the rest of the members but can also have enormous consequences. In this type of complex manufacturing [5], the supply chain (SC) is a key factor to improve the efficiency of the shipyard in adapting to I4.0 [6]. Thus, the digitization of the supply chain, which is an objective of I4.0, will provide it with the agility and efficiency that shipbuilding needs to be more profitable [7,8].

The supply chain is the set of flows of materials and information that take place within a company, from the suppliers of raw materials to the consumer of the final product [9]. It is the concept that connects companies to their suppliers [10], and among its activities are the control of logistic activities [11] and the responsibility for analyzing purchases [12]. The supply chain represents one of the areas with the greatest investment in successful companies, as it has become a strategic tool with a multidisciplinary and transversal character that affects all strategic levels of the company. It affects the sector and the market where the company is going to compete, defined by the corporate strategy; how it is going to compete, defined by the competitive strategy; and, of course, each of the affected areas within the company, defined by the functional strategy [13].

Among the objectives of the supply chain are a rapid response to demand, flexible manufacturing, cost reduction, and inventory reduction. In addition, through the achievement of these objectives, the SC aims to improve the competitiveness and sustainability of the company [14]. In the framework of Industry 4.0, the main objective of the supply chain must be total visibility of all product movements for each member of the chain, as well as total integration [15]. Among the supply chain paradigms found in the literature, the most important one under the perspective of sustainability is the paradigm defined by the acronym LARG: Lean, Agile, Resilient, and Green [14].

Lean Paradigm: The principles on which the Lean philosophy is based, together with its practices, make it well suited to a supply chain consisting of a network of business units or even independent companies, although this becomes challenging because of the complexity of management [16]. Among its contributions are:


Agile Paradigm: The agile supply chain must know what is happening in the market in order to be able to respond as quickly and as close to reality as possible [18]. It is through the integration of partners that the acquisition of new skills allows them to respond quickly to constant changes in the market [19]. Its key elements are a dynamic structure and end-to-end visibility of information configured around event-based management, as well as relationships. For some authors, the supply chain should be lean when production involves a considerable volume with little variety in a predictable, controllable business environment, whereas an agile paradigm is required when the market changes unpredictably and a small volume with a great variety is demanded. Other authors, such as Naylor et al. [20], introduce the term "Leagile" for the supply chain whenever demand is variable and there is a wide variety of products.

Resilient Paradigm: Resilience is the ability to overcome disturbances and recover the state held before the disturbance. Based on this definition, the supply chain must have this characteristic, understanding resilience as the capacity of the organization to continuously adjust the supply chain to events that may alter the balance of its activities [21]. One of the objectives of the resilient supply chain is to avoid a change to an undesirable state [14]. One way to achieve this is to design strategies to restore the previous state of the system [22]. Among the characteristics a resilient supply chain must have, most authors agree on its total visibility, a characteristic shared with the Agile paradigm and with Industry 4.0 [23,24]. However, several authors consider this paradigm very costly and complicated to implement. Therefore, they consider Lean Production and/or Six Sigma as an alternative that provides flexibility and a corporate culture that could also bring resilience to the supply chain [25].

Green Paradigm: This paradigm offers different approaches, from the perspective of supplier management in terms of risks and returns, in terms of supply chain management for sustainable products, or both at the same time [26]. The green supply chain concept is the consideration of the environmental dimension within supply chain management, from the stages of product design to the manufacturing process itself, the delivery to the final consumer, and even the end-of-life management of the manufactured product [27]. This paradigm even leads to the conclusion that, through the greening of the different stages of the supply chain, an integrated green supply chain can be achieved, which would lead to an increase in competitiveness and economic performance [28,29]. The sustainable supply chain also considers the coordination of economic, environmental, and social aspects [30,31].

Looking for supply chain quantifiers in the literature, analyses of different parameters are found. Of those who seek to measure its performance, some do so quantitatively, others qualitatively, and there are those who analyze it from both perspectives, identifying parameters such as visibility, as Lia et al. did [6]. There are approaches to improving supply chain performance at different stages of the product life cycle by applying different linguistic scales to assess uncertain supply behavior, or, as in the case of Chang et al. [32], studying the selection of suppliers using fuzzy logic. This approach allows companies to perform the assessment with no limitation on scale categories and data [33]. However, there are other approaches, such as improving the decision-making process. Wang et al. conducted research to provide decision-makers with rapid access to the practical performance of suppliers [34].

Alternatively, the qualification of the supply chain is evaluated as a decision-support process based on incomplete hesitant fuzzy preference relations [35]. Other studies focus on relieving the complexity of the aggregation and evaluation process by showing the connection between product strategies and supply chain performance [36]. In addition, there are benchmarking tools that develop indices to measure the agility of the supply chain, such as that of Lin et al. [37]. There are also tools to evaluate parameters of the supply chain itself, such as green or resilient [38], or even several at once, as is the case of the LARG index researched by Azevedo et al., which evaluates the supply chain from the Lean, Agile, Resilient, and Green perspectives [39].

In this case, the aim is to evaluate a shipyard in the process of adapting to Industry 4.0 through its supply chain. There are already studies developing a conceptual model that takes the different paradigms that make up the LARG paradigm separately and confronts them with the enabling technologies of Industry 4.0 in the field of shipbuilding [40]. In the case at hand, and based on the previous studies of Azevedo [38,39], the Delphi method is used as a strategic information-gathering tool considered appropriate for the supply chain [39]. The Delphi method is a technique that allows information to be gathered through consultation with experts [39]. It is an iterative process based on the anonymity of the answers, which facilitates their analysis. It is composed of several phases: 1. definition of the research problem; 2. determination of the participants to take part; 3. elaboration of the questionnaire, establishing the number of necessary rounds; and 4. results [41]. A small number of participants (6–30) makes this technique well suited to combining scientific evidence and social values [42], coinciding with the new aspect that I4.0 introduces into the economic performance of companies [43].

There is no previous experience of quantifying the contribution of the LARG paradigm in the shipbuilding industry, or even in Industry 4.0 in general; the only comparable experience is that of Azevedo in the automotive industry [39]. However, knowing the KETs that affect the SC, it is possible to look for the most important practices associated with each technology for each paradigm of the supply chain. By analyzing these practices, the relationship between the technologies and the supply chain is examined at the same time. The method developed then allows evaluating how LARG the shipbuilding industry is in relation to its practices. In a second phase, the 4.0 level of the SC is evaluated through the importance given to the KETs for each of the supply chain paradigms. In addition to carrying out both quantifications, a specific index is created to determine how advanced the adaptation of the shipbuilding supply chain to Industry 4.0 is, this being the main objective of this paper. Based on all the above, the importance of the supply chain to a company remains clear, especially for companies as complex as those dedicated to shipbuilding, specifically the shipyard. The purpose of the article is to define an index that shows the situation of a shipbuilding company in relation to its adaptation to Industry 4.0, addressing its supply chain. The proposal is, in the first place, to define the paradigms that best suit the achievement of sustainability in the company and, second, to analyze how each of the enabling technologies of Industry 4.0 affects the supply chain through the evaluation of the results obtained with the Delphi method. This evaluation of results will allow the shipyard to establish under which supply chain paradigms to work, as well as to know which technologies will allow it to fully adapt to Industry 4.0.

#### **2. Experimental Methodology**

As already mentioned, the experimental methodology follows the Delphi method, which has previously been used in issues related to the supply chain: a collaboration index between retailers and manufacturers studied by Anbanandam et al. [44], the Supply Chain Fragility Index by Stonebraker et al. [45], the SC performance index proposed by Nunlee et al. [46], and the risk assessment index studied by Rao and Schoenherr [47]. In this communication, the proposal is to develop a shipbuilding 4.0 index. Figure 1 shows a general diagram of the methodology, which is developed below.

**Figure 1.** Scheme of the experimental methodology followed.

#### *2.1. Stage 1: Design*

The first stage of the experimental methodology begins with the definition of the problem, the selection of experts according to the problem addressed and the definition of the appropriate questionnaire that allows us to reach the solution of the problem posed.

Regarding the definition of the problem, the aim is to assess the level of adaptation of a shipbuilding company, specifically dedicated to the block assembly, to the 4.0 industry by addressing its supply chain. To this end, the first step is to define one or more of the supply chain paradigms that are most appropriate for this type of company in such a complicated sector. Specifically, the aim is to study the "LARG" paradigm formed by the combination of the Lean, Agile, Resilient, and Green paradigms.

At the same time, this stage has to establish which of the enabling technologies of Industry 4.0 are best suited to contributing to the sustainability of the supply chain and to subsequent implementation in the case study. There are studies that establish that twelve technologies are suitable for shipbuilding [40]. These technologies are Additive Manufacturing, Big Data, Cloud Computing, Augmented Reality, Autonomous Robots, Automatic Guided Vehicles, Blockchain, Cybersecurity, Horizontal and Vertical Integration System, Artificial Intelligence, Internet of Things, and Simulation.

A very important part of the design lies in the selection of the experts who participate in the surveys. It is ideal to select people who are interested in the subject matter, whose expertise includes the topic in question, and who are impartial. At this point, forty people were invited to take part in the survey: twenty from the personnel of the shipbuilding sector itself and twenty academics directly related to the sector. In this way, the essential perspectives on the object of research are covered. Finally, half of those invited agreed to participate.

It is now possible to define the questions that constitute the questionnaire, which has seventeen sections. In the first section, an assessment of the importance of each of the supply chain paradigms (Lean, Agile, Resilient, and Green) is requested on a scale from not at all important to extremely important, where not at all important is weighted with 1 point and extremely important with 5 points. Table 1 shows the questions in section 1 as well as the mean rating values and weights of each of the paradigms posed.


**Table 1.** Mean rating and weightings of LARG to shipbuilding supply chain.

Sections two to five evaluate the implementation or non-implementation of four practices of each of the paradigms. Table 2 shows the questions of sections 2 to 5 (indicated by P for Practice, with sub-indices L, A, R, G followed by numbering from 1 to 4), the mean rating values, and the weightings of these practices. Finally, sections six to seventeen include the weighting of the importance of each of the twelve technologies according to each of the supply chain paradigms. Table 4 shows the questions in sections 6 to 17 that highlight the importance of each of the Industry 4.0 enabling technologies for the shipbuilding supply chain.

#### *2.2. Stage 2: Execution*

Once the questionnaire design is complete, the first round of surveys is launched to all professionals who have agreed to participate. It is sent virtually, via e-mail, with the survey provided as a form linked in the message.

After the first round, data is collected and analyzed. To know the level of agreement reached by the experts, the Kendall concordance coefficient is used. This coefficient indicates that the level of agreement reached in this first round is low. It is for this reason that it is necessary to carry out a new round.

In the same way, a second round is sent, in which the participants are informed of the mean rating results of the first round. In this second round, the Kendall concordance coefficient increases in value, indicating greater consistency between the professionals and academics. At this point, it is decided to conclude the surveys and proceed to analyze them.
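The article does not detail how the concordance coefficient is computed; as a reference, a standard formulation of Kendall's W (without tie correction) can be sketched as follows, with purely illustrative ratings:

```python
import numpy as np
from scipy.stats import rankdata

def kendalls_w(ratings):
    """Kendall's coefficient of concordance W for a (raters x items) matrix
    of ratings; no correction for ties is applied in this sketch."""
    ranks = np.apply_along_axis(rankdata, 1, ratings)  # rank each expert's answers
    m, n = ranks.shape                                  # m experts, n questionnaire items
    R = ranks.sum(axis=0)                               # rank sum per item
    S = ((R - R.mean()) ** 2).sum()
    return 12 * S / (m ** 2 * (n ** 3 - n))

# Illustrative 1-5 ratings from 4 experts on 5 items (hypothetical values).
ratings = np.array([[4, 3, 5, 2, 4],
                    [5, 3, 4, 2, 3],
                    [4, 2, 5, 3, 4],
                    [3, 4, 5, 2, 3]])
print(round(kendalls_w(ratings), 2))   # W close to 1 indicates strong agreement
```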


**Table 2.** Questions, mean rating, and weightings of LARG level of implantation to shipbuilding supply chain.

**Table 3.** Data for the LARGSC index.


#### *2.3. Stage 3: Analysis*

Data analysis has three phases. In the first one, the importance of each supply chain paradigm is analyzed separately, i.e., the importance of the Lean, Agile, Resilient, and Green paradigms for the shipbuilding supply chain. The second phase analyses how each of the previous paradigms has been implemented. After knowing this value, we obtain a general index that allows us to know how LARG the shipbuilding supply chain is.

It is in the third phase, through the twelve key enabling technologies (KETs), that we get to know the importance of each KET for the shipbuilding supply chain. In the three phases, using the same mathematical procedure, the mean ratings, weightings, and the corresponding indices mentioned are calculated.

#### **3. Results**

#### *3.1. How Important is Each LARG Paradigm to Shipbuilding Supply Chain*

The first result found presents the importance of the Lean, Agile, Resilient, and Green paradigms. As a result of the mean rating, the weight for each paradigm is calculated according to the equation [58]:

$$\text{w}\_{x} = \frac{\text{M}\_{x}}{\sum\_{g=1}^{n} \text{M}\_{g}} \tag{1}$$

where w_x represents the weighting of paradigm x, M_x represents the mean rating of that particular paradigm, and the denominator is the sum of the means of all n paradigms. Figure 2 shows the relative results obtained by studying the importance of LARG for the shipbuilding supply chain, represented by its mean rating. Table 1 shows the questions in section 1, the mean rating values, and the weightings of each paradigm asked about.
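For illustration, Equation (1) can be applied directly to the mean ratings; the values below are placeholders rather than the actual means of Table 1:

```python
# Mean ratings of the four paradigms (illustrative numbers only; the real means
# are those reported in Table 1).
M = {"Lean": 4.2, "Agile": 4.0, "Resilient": 3.5, "Green": 3.8}
w = {x: Mx / sum(M.values()) for x, Mx in M.items()}   # Equation (1)
print({x: round(wx, 3) for x, wx in w.items()})
```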

As can be seen, the Lean paradigm is the most valued of all, followed closely by the Agile paradigm; both are rated above the Green paradigm despite the importance the latter should have for this sector.

**Figure 2.** Importance of LARG to Shipbuilding SC.

#### *3.2. How Implanted is Each LARG Paradigm to Shipbuilding Supply Chain*

In the same way, with the values provided by the experts with respect to the level of implementation, we are in a position to calculate to what extent each of the studied paradigms is implemented. Figure 3 represents the level of implementation of LARG for the shipbuilding supply chain, represented by its mean rating. Table 2 shows the questions of sections 2 to 5 (indicated by P for Practice, with sub-indices L, A, R, G followed by numbering from 1 to 4), the mean rating values, and the weightings of these practices.

**Figure 3.** Level of implantation of LARG to shipbuilding SC.

The expert behavior is now defined as:

$$\text{EB}\_{i} = \sum\_{j=1}^{4} \left( \text{D}\_{ixj} \cdot \text{w}\_{Pxj} \right) \tag{2}$$

where D_ixj is the answer of expert i to practice j of paradigm x, and w_Pxj is the weighting of that practice. At the same time, the behavior of each paradigm is:

$$\text{PB}\_{x} = \frac{\sum\_{i=1}^{n} \text{EB}\_{i}}{n} \tag{3}$$

Figure 4 shows the results obtained.

**Figure 4.** Paradigms behavior.

With these data, it is possible to calculate the implementation of each paradigm for the supply chain (SCx):

$$\text{SC}\_{x} = \text{PB}\_{x} \cdot \text{w}\_{x} \tag{4}$$

The LARGSC index is then:

$$\text{LARG}\_{\text{SC}} = \text{SC}\_{\text{L}} + \text{SC}\_{\text{A}} + \text{SC}\_{\text{R}} + \text{SC}\_{\text{G}} \tag{5}$$

The result after the indicated calculations, and according to the reported data, is LARGSC = 2.33. This indicates an intermediate implementation value, as it is rated on a scale from 1 to 5. It is observed that, despite the importance assessed in the previous section, when referring to the implementation level, Lean and Green are the most valued paradigms, leaving behind the Agile and Resilient paradigms.
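As a sketch of how Equations (2)–(5) combine into the LARGSC index, assuming 1–5 ratings per practice and illustrative weightings (the actual values are those of Tables 1 and 2):

```python
import numpy as np

rng = np.random.default_rng(1)
paradigms = ["L", "A", "R", "G"]
n_experts = 20

# D[x][i, j]: rating (1-5) given by expert i to practice j of paradigm x;
# w_P[x]: practice weightings (Table 2); w[x]: paradigm weightings (Table 1).
# All numbers here are illustrative placeholders, not the survey data.
D = {x: rng.integers(1, 6, size=(n_experts, 4)) for x in paradigms}
w_P = {x: np.full(4, 0.25) for x in paradigms}
w = {"L": 0.27, "A": 0.26, "R": 0.23, "G": 0.24}

SC = {}
for x in paradigms:
    EB = D[x] @ w_P[x]              # expert behavior, Equation (2)
    PB = EB.sum() / n_experts       # paradigm behavior, Equation (3)
    SC[x] = PB * w[x]               # implementation per paradigm, Equation (4)

LARG_SC = sum(SC.values())          # Equation (5)
print({k: round(v, 2) for k, v in SC.items()}, round(LARG_SC, 2))
```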

#### *3.3. What Level of 4.0 is the LARG to Shipbuilding Supply Chain?*

In order to know the 4.0 level of the LARG supply chain in shipbuilding, the weightings that the experts answered in this respect are used. Table 4 shows the questions in sections 6 to 17 that highlight the importance of each of the Industry 4.0 enabling technologies for the shipbuilding supply chain. In the same way, the mean ratings and the weights are calculated using Equation (1); the results shown in Table 5 and Figure 5 are the KET behaviors, calculated in the same way as above from the expert behavior.


#### **Table 4.** Key enabling technologies and supply chain paradigms questionnaire.


With these data, it is possible to calculate the importance of each KET for the SCx (KET-SCx):

$$\text{SC}\_{\text{KETi}} = \text{KETB}\_{i} \cdot \text{w}\_{\text{KETi}} \tag{6}$$

In this case, the LARG4.0 index is therefore defined by the following equation:

$$\text{LARG}\_{4.0} = \sum\_{i=1}^{n} \text{SC}\_{\text{KETi}} \tag{7}$$

The resulting value of the index that indicates the level of adaptation of the LARG supply chain to Industry 4.0 is LARG4.0 = 3.77 on a scale of 1 to 5, which is quite interesting, as will be analyzed later. Table 6 shows the data needed to calculate the LARG4.0 index.
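The LARG4.0 index follows the same pattern; a minimal sketch with placeholder KET behaviors and weights derived from Equation (1):

```python
import numpy as np

# KETB[i]: behavior of KET i obtained from the expert answers as above;
# w_KET[i]: weighting of each KET from Equation (1). Values are illustrative only.
KETB = np.array([3.8, 3.6, 3.9, 3.5, 3.7, 3.6, 3.8, 3.9, 4.1, 3.7, 3.9, 3.8])
w_KET = KETB / KETB.sum()          # Equation (1) applied to the KET mean ratings
SC_KET = KETB * w_KET              # Equation (6)
LARG_40 = SC_KET.sum()             # Equation (7)
print(round(LARG_40, 2))
```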



#### **4. Discussion**

Starting with the importance given by the experts to each of the paradigms consulted for the shipbuilding supply chain, the results of the survey indicate that the paradigm with the greatest weight is Lean, closely followed by the Agile paradigm, with Resilient being the one with the lowest weight. Lean is the most widespread paradigm and the one used for the longest time, which, in addition to what the paradigm itself offers, is undoubtedly one of the reasons why it holds first place [101,102]. The Agile paradigm does not seem to be entirely in line with the sector in terms of the volume and the great variety of production required in this case, as some authors consider [20]. It is true that Resilient is one of the paradigms that most needs SC visibility [23,24]. It is also known that this is a paradigm that is difficult to put into practice, mainly because of the high associated cost and the possibility of bringing the practices related to it close to those of the Lean paradigm. The most striking of these results is the lower weight of the Green paradigm, as it is considered a key factor in shipbuilding to increase the competitiveness of the company [103] and improve energy efficiency [104].

However, studying the implementation of each paradigm according to the supply chain behavior, it is worth mentioning that two of the evaluated practices reached a higher mean rating. One of them belongs to the Lean paradigm (PL4) and the other to the Green paradigm (PG2). This shows precisely the preference declared by those consulted for long relationships with suppliers, with the benefits that this entails, and, second, their support for certification under the corresponding regulation over the practice (PG3) of reducing energy consumption. In conclusion, the predominant paradigms are precisely Lean and Green, with Agile and Resilient being the least predominant. This coincides fully with previous studies, which found the same results for the shipbuilding supply chain [40].

If we focus on the value obtained for the LARG index that evaluates how those paradigms are implemented in the SC, the calculations indicate a value of 2.33. This value, on the same weight scale from 1 to 5, indicates that its implementation is under way. It also indicates that, according to the consulted experts, the full implementation of the LARG paradigm is still needed, mainly because all its benefits have not yet been achieved. This is not the case in other sectors, such as the automotive industry. Comparing the obtained values with the automotive industry (LARGSC = 3.75) [39] shows how far the automotive sector is ahead of the shipbuilding sector. The latter, however, has not remained on the sidelines despite the differences between the two sectors.

Regarding the KETs, the value obtained is LARG4.0 = 3.77. It is higher than that obtained previously (LARGSC = 2.33). Unlike the previous index, which indicated the level of implementation of the paradigms in the supply chain, in this case it indicates the level of importance of each KET to each paradigm. Analyzing the results, it is observed that all the KETs fall within the same range. KET 9 (Horizontal and Vertical Integration System) exceeds the average value only minimally. The reading in this case is that shipbuilding is committed to adapting to Industry 4.0; however, given the values obtained, it is not clear which technology would be more interesting to implement first.

Focusing on the LARG4.0 index, as it is newly created, it is not possible to compare the index value itself. However, interpreting that value, it indicates an intermediate level that is in line with the global trend. American companies created the Industrial Internet Consortium (IIC), and some of them opted for the immediate application of the Internet of Things in the shipbuilding sector, for which it can be said that there is great demand today [105]. In Germany, one of the shipyards that stands out for technological innovation is Meyer-Werft (MW) [106], where the level of use of digital technologies throughout the production process has increased significantly. The identification of parts by radio frequency is another of the fields in which they work, increasing control and traceability. In Korea, under the so-called "Manufacturing Innovation 3.0 Strategy", innovation is being pursued in the naval environment of Busan, where the shipyards of Daewoo, Samsung, and Hyundai are working toward the implementation of systems and technologies that are directly aligned with the 4.0 guidelines. Daewoo Shipbuilding and Marine Engineering (DSME) [107] develops systems according to the concept of the intelligent factory and integrates technologies that allow the manufacturing processes to be optimized; the key for this shipyard is to apply robotics to automate those processes that allow it. The first approaches of Hyundai Heavy Industries (HHI) [108] focused on remote monitoring, evolving into an integrated platform that covers fundamental aspects of the ship's life cycle [109]. The Samsung Heavy Industries (SHI) [110] shipyards quantify the automation of their production process at 68%, driven to a great extent by their own development of robotized solutions.

In addition to this first vision, the questionnaire aims to provide managers with a powerful tool to evaluate which practices best fit each case.

#### **5. Conclusions**

With the intention of evaluating how shipbuilding is adapting to Industry 4.0, this article develops an index that makes it possible to know, on the one hand, the state of its current supply chain and, on the other hand, to evaluate the technologies that can allow it to adapt more quickly and efficiently to Industry 4.0. The development of both indices has been carried out with the collaboration of experts, both professionals from the shipbuilding sector and academics with an important connection to this sector.

The importance that the experts have given to the different paradigms studied shows which ones are most suitable for the sector in order to achieve a supply chain that contributes greatly to the sustainability of its businesses. The value obtained for the index that indicates the level of implementation is considered satisfactory for an industrial sector whose pace is different from that of other sectors. However, there is still room for improvement, and the calculation of this index could be considered a tool that can reflect which practices and paradigms could be more interesting to implement.

Finally, the value obtained for the index that indicates the level of adaptation of the supply chain to Industry 4.0 reveals interest as well as some disorientation. In the same way, this can be considered an opportunity for managers to decide which technologies have priority over others in order to achieve their objectives. This also highlights the need for future research to establish different criteria to improve the index studied.

**Author Contributions:** M.R.-P. and M.B. conceptualized the paper. M.B. and F.J.A.F. approved the experimental procedure; M.R.-P. carried out the experiment; M.R.-P. and M.B. analyzed the data; M.R.-P. wrote the paper; M.B. and A.J.S.S. revised the paper; F.J.A.F. supervised the paper.

**Acknowledgments:** Navantia S.A. S.M.E. and University of Cadiz (UCA) supported this work.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

### *Article* **New Risk Methodology Based on Control Charts to Assess Occupational Risks in Manufacturing Processes**

#### **Martin Folch-Calvo 1,\*, Francisco Brocal <sup>2</sup> and Miguel A. Sebastián <sup>1</sup>**


Received: 12 October 2019; Accepted: 7 November 2019; Published: 11 November 2019

**Abstract:** The accident rate in the EU-28 region of the European Union showed a value of 2 fatal accidents per 100,000 people employed in 2015, mainly affecting construction (24%), manufacturing (19%), and logistics (19%). To manage situations that affect occupational risk at work, a review of existing tools is first carried out taking into account three characteristics: prevention, simultaneity, and immediacy. As a result, a new dynamic methodology called Statistical Risk Control (SRC), based on Bayesian inference, control charts, and analysis of the hidden Markov chain, is presented. The objective is to detect a situation outside the limits early enough to allow corrective actions that reduce the risk before an accident occurs. A case is developed in a medium-density fiberboard (MDF) manufacturing plant, in which five inference models based on Poisson, exponential, and Weibull distributions, with risk parameters following gamma and normal distributions, have been tested. The results show that the methodology offers all three characteristics, together with a better understanding of the evolution of the operators in the plant and the safety barriers in the scenario under study.

**Keywords:** Bayesian inference; control chart; dynamic methodology; hidden Markov chain; occupational accident; risk assessment; risk control; risk management

#### **1. Introduction**

The accident rate in the European Union [1] for the EU-28 region was 2 fatal accidents per 100,000 people employed in 2015. The most affected industrial activities were: building (24%), manufacturing (19%), transport and storage (19%), agriculture and fishing (15%), retail (9%), public administration (9%), water supply and waste management (3%), and mining (2%). In Spain, between 2014 and 2018, the most affected activities were practically the same [2].

The main causes of accidents in Spain for the same period, 2014–2018, were: inadequate movements of the human body, in actions of pushing and pulling and inappropriate body turns, all of them under physical effort (33%); entrapment and contact with sharp areas of machine elements (22%); falls and slips (18%); loss of total or partial control of a machine (16%); breakage and sliding of a work support (6%); leaks and spillages (2%); aggression (2%); and explosions and fire (1%) [2] (Figure 1).

Of the total accidents generated by these causes, 99.1% had minor consequences, 0.8% caused severe damage, and 0.1% were very serious, with a fatal outcome. Avoiding an accident at work, regardless of its severity, requires specific risk management carried out throughout the life cycle, from design, engineering and construction, commissioning, operations, and logistics to final dismantling, and in which at all times it is necessary to monitor, qualitatively and quantitatively, the moment at which an accident risk arises.

**Figure 1.** (**a**) Accident rate by activity in the year 2015 in the EU-28 countries of the European Union (adapted from [1]); (**b**) description of origin of occupational accidents in Spain for years 2014–2018 (adapted from [2]).

There are different points of view about the concept of risk: for example, in general terms, a risk arises from the existence of uncertainty [3]; more specifically, a risk arises from the existence of uncertainty in the objectives [4,5]; or, in more practical terms, risk can be defined as the measure of the lack of safety [6]. The idea of safety also has different meanings: it can be defined simply as the absence of accidents; from a labor point of view, as how people can provide the required performance in expected and unexpected conditions [7]; by quantifying safety as a numerical condition where the number of adverse outcomes is acceptably small [8]; or from an analytical point of view that defines it as the study of why things go wrong [9].

Despite the differing definitions, the most important issue is how to manage risk in various scenarios, and specifically those related to occupational safety at work. In this sense, an initial framework issued by the European Union is Directive 89/391/EEC [10], aimed at employers, which establishes basic principles of prevention, with risk assessment and risk avoidance as the preliminary and basic proposal for carrying out an assessment of occupational risks.

ISO 31000:2018 establishes the principles of risk management, and ISO/IEC 31010:2019 establishes the associated risk-assessment techniques. It is based on the Deming cycle [11], consisting of the sequence of steps "plan, do, check, act".

Quantitative risk assessment (QRA) is a formal and systematic risk-analysis approach to quantifying the risks associated with industrial and human processes. Risk assessment is the general procedure that covers the risk identification process, which can be performed based on historical data, through a panel of experts, or using inductive cause-effect techniques; its analysis, applying qualitative methods that indicate the levels of importance of the risk and its consequences, semi-quantitative methods that indicate numerical risk rating scales and their consequences, and quantitative methods that define the probabilities of risk generation and its consequences [12–14]; and their evaluation, which implies determining the importance and prioritizing from the point of view of risk consequence or benefit–cost [15] (Figure 2).

With the objective of managing occupational hazards, the ISO 45001 guideline [16] aims to provide guidelines that allow the implementation of a system of occupational health and safety (OH&S). In Spain, the main guidelines on occupational hazards come from the National Institute of Occupational Safety and Health, and are issued with the objective of providing guidelines that allow analyzing and studying health and safety conditions in the workplace, as well as their improvement [17]. In the context of manufacturing processes involving chemical agents, there are two European directives on the assessment of chemical agents at work [18] and for carcinogens and mutagens at work [19].

**Figure 2.** Risk-management process and Deming-cycle equivalence (adapted from [5]).

In this context, it is especially important to prevent both occupational accidents and major accidents, which can be interrelated [20], by means of adequate management and assessment methodologies. In the scientific literature, methodologies oriented to the management and assessment of occupational health and safety risks are collected with the addition of specific tools, such as a risk assessment based on fuzzy logic with application in the mining industry [21] and manufacturing plants [22]; two cases of risk assessment in process industries and for detecting and evaluating emerging risks in industrial processes [23,24]; the application of a multi-objective evolutionary algorithm to take into account the tasks, activities, associated risks, and safety of workers, with case studies associated with scaffolding falls [25,26]; the use of Bayesian networks applied to work situations on the high seas where slips, trips, and falls must be avoided [27]; several cases of risk assessment applied in the mining industry [28]; situations in scaffolding construction involving falling objects and contact with moving machines and vehicles [29]; risk assessment in an aluminum process industry for workers using press extruders, forklifts, cranes, and production equipment [30]; and the use of block diagrams to identify and evaluate fall hazards from an escalator [31]. Other applications include the construction of a natural gas pipeline using a tool based on Pythagorean fuzzy logic [32], the risk assessment of falls and falling objects, crane handling, and interaction with the moving parts of machines [33–36], and risk assessment of occupational accidents related to the construction, operation, and maintenance of wind farms on land [37].

However, there is no method that provides an overview of the state of occupational risk in a manufacturing plant in the simplest way possible while remaining formal, and that warns about the existence of a risk early enough to avoid it. For this, three characteristics are considered necessary:


To achieve this objective, it is first necessary to review the current tools related to risk management and evaluation, verifying which of them meet the three characteristics. Building on the most appropriate methodology, the objective of this paper is to present a new tool called Statistical Risk Control (SRC) to manage and assess risk situations, focused on occupational accidents. With this aim, this work is organized as follows: in Section 2 we review the state of the existing tools; in Section 3 we examine the development of the SRC methodology applied to occupational accidents; in Section 4 we present a case study and the results; in Section 5 we discuss the results; and in Section 6 we draw conclusions.

#### **2. Existing and Related Tools for Occupational Risk Management**

#### *2.1. Regulations and Traditional, Modern Models*

The analysis covers three main groups:





#### **Table 2.** Existing traditional models.

#### **Table 3.** Existing modern models.




#### *2.2. The Dynamic Risk Models*

This group covers a dynamic risk-analysis concept using sequential models such as fault-tree analysis (FTA), event-tree analysis (ETA), and the BOW-TIE graph approaches, and performing a Bayesian inference analysis to update the failure probabilities from the information collected on the so-called accident precursors or precursor data. This group has five models, the dynamic risk assessment (DRA) being the representative one; the others are the dynamic procedure for atypical scenarios identification (DyPASI), the dynamic risk analysis, the risk barometer methodology, and the dynamic operational risk assessment [86–88].

DRA is an extension of the risk assessment (RA). The process needs to establish a prior function for the statistical parameter that models the risk probability. The precursors, events or causes that can lead to an accident are observed and formalized through the application of Bayesian inference to obtain the posterior function of the parameter that models the risk probability, [89–94].

Figure 3 presents the main equivalences between RA and DRA models. Highlights are:


$$f(p/\text{Data}) \propto \text{g}(\text{Data}/p) \cdot f(p) \tag{1}$$

where *p* is the statistical parameter, *f*(*p*) is the prior statistical distribution for the parameter *p*, *g*(*Data*/*p*) is the likelihood corresponding to the observed precursor data, and *f*(*p*/*Data*) is the posterior statistical distribution.
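As a minimal numerical sketch of Equation (1), a grid approximation can be used with an assumed gamma prior and Poisson-distributed precursor counts (the data and distribution choices below are hypothetical, chosen only to illustrate the update):

```python
import numpy as np
from scipy.stats import poisson, gamma

# Hypothetical precursor counts observed over successive time intervals.
data = np.array([0, 1, 0, 2, 1])

lam = np.linspace(1e-3, 3, 600)            # grid over the rate parameter p
prior = gamma.pdf(lam, a=1.0, scale=1.0)   # assumed prior f(p)
like = np.prod(poisson.pmf(data[:, None], lam[None, :]), axis=0)  # g(Data/p)
post = prior * like                        # Equation (1), unnormalized
post /= post.sum() * (lam[1] - lam[0])     # normalize on the grid
print("posterior mean rate:", round(float((lam * post).sum() * (lam[1] - lam[0])), 3))
```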

This strategy has been applied to a number of case studies in the petrochemical industry, including the case of a storage tank containing hazardous chemicals, a refinery, and oil-spill accidents, or, additionally, performing the inference using Bayesian networks applied to offshore oil and gas accidents [98–103].

The characteristics for the four remaining models can be seen in Table 4.

**Figure 3.** (**a**) Risk assessment; (**b**) dynamic risk assessment (adapted from [5]).



#### **3. Statistical Risk Control (SRC) Methodology**

#### *3.1. General Application*

There is a need to use control charts and a dynamic risk assessment model to analyze risk in general industrial situations. There is no literature on the use of control charts for risk management; nevertheless, they are applied in the earned value method in project management [109], in environmental assessment [110], and for cost control and project duration [111]. The methodology is compatible with the ISO guidelines, using Bayesian inference, Markov chain Monte Carlo sampling, and hidden Markov chain analysis under the new concept of Statistical Risk Control (SRC) [112] (Figure 4).

**Figure 4.** (**a**) Statistical Risk Control (SRC) methodology; (**b**) position in the risk-assessment scheme (adapted from [5]).

When the risk has been identified in a considered scenario, the following seven steps are applied:


**Figure 5.** Safety barriers: ISn: preventive barrier (n); SFISnm: preventive barrier (n) sub-function (m); SFISg general sub-function safety barrier; SFn: mitigative barrier (n); SFnm: mitigative barrier (n) sub-function (m); ic: initiation causes (ba, ha, pot); p,s,f: probability of success or fail in the mitigative or reactive safety barrier.

Chart determination has two modes:


Considering also the observed initiation causes (ic), and visualizing the bow-tie, there are two possibilities of analysis to include in the control chart:

	- a.1. Collecting the total of the initiation causes (ic) affecting all the preventive and mitigative safety barriers and their barrier sub-functions.
	- a.2. Collecting only the first level for initiation causes (ic) and fails in first level of barrier sub-functions.

The Baum–Welch algorithm is applied to obtain, from the observations, the posterior transition and emission matrices, with a methodology that can also be direct or recurrent.

**Figure 6.** Analysis modes. (**a**) collecting the total of (ic's) of preventive and mitigative barriers and their sub-functions (highlighted yellow); (**b**) collecting at the first level of (ic's) and barrier sub-functions (highlighted yellow).

**Figure 7.** Analysis modes of the hidden Markov chain. (**a**) Defining transition probabilities for the mitigative barriers (highlighted green) and emission probabilities for the observed end sates in function of the mitigative barriers (highlighted brown); (**b**) defining transition probabilities for the end states (highlighted green) and emission probabilities for the observed mitigative barriers states (highlighted brown).

#### *3.2. Potential Causes (Pot)*

Some authors consider [119] that when unexpected deviations arise due to causes that are difficult to predict because of their randomness, this can be classified as a special risk [120,121]. From the point of view of the methodology (SRC), these causes that can lead to unexpected risk situations are called potential causes, which are summarized in Figure 8.


**Figure 8.** Potential causes (pot) and attributes defined in the Statistical Risk Control (SRC) methodology.

#### *3.3. General Application for Occupational Accidents*

In accordance with the SRC methodology, a bow-tie is defined for each particular scenario. In the case of occupational accidents, before being able to work with different risk scenarios, it is necessary to analyze what human behavior at work is and what factors affect it. Three views, from sources that cover the most critical factors, have been considered. The first group compiled is a general one [17,122,123]. The second group includes the factors considered critical in the 6th European Survey of Working Conditions [124], and the third group includes the emerging psychosocial risks related to occupational safety and health [125]. The aggregation and coincidence of the three groups is presented in Figure 9.


**Figure 9.** Factors and attributes affecting human behavior at work.

These critical factors must be taken into account as possible initiation causes (ic) of an occupational accident and must be present in all scenarios. However, it is considered that there are four situations that occur as a consequence and symptom of the critical factors. These are: failure in the self-control of work (JSC); failure to supervise work (JSU); failure in safety self-control (SSC); and failure in safety supervision (SSU). In addition, these four easily observable situations arise in automated process and manufacturing environments, where workers additionally perform control and supervision tasks. These four situations, due to their control and supervision function, are associated with safety barriers. The general bow-tie for occupational accidents (Figure 10) is equivalent to the general one presented above, with the difference that the preventive barriers (ISn) are integrated by the four safety barriers (ISJSC, ISJSU, ISSSC, ISSSU). In this scheme, the (SFISnm) are sub-functions of the four safety barriers, which can be formed by a procedure, an automatism, an alarm indication, an actuator of a control system, or the organizational culture itself. The general safety barrier (SFISg) works in parallel with the activity of the operators and may consist of automatic control systems, protections, alarms, actuators, or automated operations management that guides the operator at each step of the process and does not permit the next step to be formalized if a number of conditions are not met. The sub-function barriers (SFnm) cover the various functional components that integrate the mitigating safety barriers. The final states represented are bounded at one end by the absence of personal injuries, if the first mitigating safety barrier acts correctly, and at the other by a fatality, if all mitigating safety barriers fail. The graph must be taken as a framework and must be adapted according to the processes, the occupational work, and the scenarios being analyzed.

**Figure 10.** General bow-tie for occupational accidents. Safety barriers: ISn: preventive barrier (n) based on job self control (JSC); job supervision (JSU); safety self control (SSC) and safety supervision (SSU). SFISg general sub-function working parallel to the human actions; SFn: mitigative barrier (n); SFnm: mitigative or reactive barrier (n) sub-function (m); ic: initiation causes (ba, ha, pot); p,s,f: probability of success or fail in the mitigative or reactive safety barrier.

#### **4. Case Study in a Medium-Density Fiberboard (MDF) Manufacturing Process Plant**

#### *4.1. Process*

The general process scheme is depicted in Figure 11. Its goal is to produce urea-melamine medium-density fiberboard (MDF) elements using as basic raw materials paper, wood, melamine, urea, a resin (such as a polyamide or vinyl chloroacetate), and formaldehyde. The paper is subjected to a surface printing treatment, continuing with an impregnation phase performed with melamine-formaldehyde. Drying and cooling processes are executed next, in a single step if only the melamine-formaldehyde polymer is added, or with one additional step if the urea-formaldehyde polymer is also added, with the same impregnation, drying, and subsequent cooling steps. The process continues with the cutting and winding of the paper and its stacking. At the same time, the wood is chipped and the material is subsequently dried at 180 °C to reduce moisture. The dry material (8% moisture) is impregnated with the urea-formaldehyde solution and the resin. A forming and pressing stage at 200 °C follows. The board thus obtained is subjected to a curing process, and production is completed in a union-pressing stage in which the formed board is bonded with the sheet of paper.

**Figure 11.** General production scheme of a medium-density fiberboard (MDF) urea-melamine plant.

In the plant there are 42 workers distributed over two shifts. The plant is highly automated, with robots for handling, feeding, and palletizing, and control systems at every step. The finishing area is made up of panel sectioning machines composed of vertical bar and pressure sawing machines, as well as circular saws of one or several discs, in addition to a final sanding and calibration zone. Also, in the work areas, and in order to maintain the correct level of particles and volatile organic compound (VOC) emissions, there is a centralized air aspiration system with subsequent filtration and purification processes prior to emission to the environment. Quality and safety policies are established. Workers wear personal protective equipment, and there are periodic safety checks at the process plant.

#### *4.2. Results*

The analysis covers the general plant, and the bow-tie is presented in Figure 12 with four final states: no injuries, minor, serious, and fatal. If an accident event (AE) is generated, the mitigating safety barriers (SF1, SF2, SF3) are activated. The final states are bounded at one end by the absence of personal injuries if the first mitigating safety barrier (SF1) acts correctly; in case of failure, the second barrier (SF2) acts, ending with a minor injury if it works correctly; in case of failure, the third barrier (SF3) acts, leading to a serious injury in case of correct operation, or to a fatality in case of failure. The sub-functions (SF11, SF21, SF31) correspond to automatisms, procedures, alarms, and active or passive protections belonging to the main function of each of the mitigating safety barriers.


**Figure 12.** Bow-tie for occupational accident first level analysis in the MDF process plant.

The analysis is carried out at the first level, highlighted in yellow on the graph. The observations are made in the worst case, taking the plant as a whole, which means that the observations are collected, on the one hand, for the preventive barriers, the general safety barrier (SFISg), and the initiation causes (ic) covering all workers in one shift; and, on the other hand, for each of the mitigative barriers. SF1–SF11 and the associated (ic) form the group in which the general passive and active protections of each workplace of the plant are located; SF2–SF21 and the associated (ic) form the group in which the general safety controls, fire extinction, alarms, and the automated or manual shutdown of every workplace are located; and SF3–SF31 and the associated (ic) form the group for the general emergency power, general fire extinction systems, and internal rescue actions. The observations are made at 10 time intervals per day, covering all shifts. Figure 13 shows three observed causes, in intervals 4, 7, and 8.

The collected observations are expected to follow a Poisson, an exponential, or a Weibull distribution, these being the *g*(*data*/*p*) in Equation (1). The statistical parameter *p* corresponds to the rate λ, or frequency of events, and the prior *f*(*p*) can be defined as a gamma or a normal distribution.


**Figure 13.** Initiation causes (ic's) and barrier fails collected at 10 intervals.

#### 4.2.1. Poisson–Gamma Model

With a Poisson–gamma model, the expression for *f*(*p*/*data*) ∝ *g*(*data*/*p*)·*f*(*p*) with parameter *p* = λ is:

$$f(\lambda/\text{data}) \propto \text{g}(\text{data}/\lambda) \cdot f(\lambda) \propto \lambda^{(\alpha+s)-1} e^{-\lambda(\beta+\text{Nt})} = \text{gam}(\alpha + s, \beta + \text{Nt})\tag{2}$$

where;

$$s = \sum data = \sum y\_i\tag{3}$$

*Nt* being the number of intervals and *s* the sum of initiation causes (ic's) and safety barrier incidences, taken as data *yi* in the corresponding time interval *i*. The values α and β are the parameters of the gamma distribution.

A recurrent method is applied, with the prior mean set equal to a desired target value. In this case the target is to have zero accidents, so the parameters of the gamma prior are α = β = 0.001. Working with +/−1σpost, the posterior values are (Figure 14):


**Figure 14.** Poisson–gamma model. Recurrent with mean prior method. Evolution of collected (ic's) and safety barrier fails in 10 intervals.

The following comments can be made for every interval.

Interval 1. With zero incidences, *yi* = [0], the posterior density for λ is *gam*(α + *s*, β + *Nt*) = *gam*(0.001, 1), with λpost = 0 and σpost = 0.031.

Interval 2. Also with zero incidences, *yi* = [0], the posterior density for λ is *gam*(α + *s*, β + *Nt*) = *gam*(0.001, 2), with λpost = 0 and σpost = 0.016.

Interval 3. With zero incidences, *yi* = [0], the posterior density for λ is *gam*(α + *s*, β + *Nt*) = *gam*(0.001, 3), with λpost = 0 and σpost = 0.011.

Interval 4. With one incidence in one sensor affecting the SF11 safety barrier sub-function, in this case *yi* = [1] and the posterior density for λ is *gam*(α + *s*, β + *Nt*) = *gam*(1, 4), with λpost = 0.25 and σpost = 0.25. Due to the no-memory characteristic of the exponential, Poisson, and Weibull distributions, the observed parameter value for 1 incidence in 4 time intervals is 0.25, which coincides with the posterior. The values are at the upper control limit (UCL).

Interval 5. With zero incidences, *yi* = [0], the posterior density for λ is *gam*(α + *s*, β + *Nt*) = *gam*(1, 5), with λpost = 0.20 and σpost = 0.20. As a characteristic of the Bayesian inference, the posterior distribution responds by softening the reduction of parameter λ from 0.25 to 0.20.

Interval 6. With zero incidences, *yi* = [0], the posterior density for λ is *gam*(α + *s*, β + *Nt*) = *gam*(1, 6), with λpost = 0.17 and σpost = 0.17. The Bayesian inference again responds by softening the reduction of parameter λ from 0.20 to 0.17.

Interval 7. With one incidence affecting a job self-control fail (JSC) in the hot pressing process, *yi* = [1] and the posterior density for λ is *gam*(α + *s*, β + *Nt*) = *gam*(2, 7), with λpost = 0.29 and σpost = 0.20. The observed parameter value for 1 incidence in 7 − 4 = 3 time intervals is 0.33, which practically coincides with the posterior. An out-of-limits situation is also displayed.

Interval 8. With one incidence affecting a failed test in the SF3 safety barrier, *yi* = [1] and the posterior density for λ is *gam*(α + *s*, β + *Nt*) = *gam*(3, 8), with λpost = 0.38 and σpost = 0.22. The observed parameter value for 1 incidence in 8 − 7 = 1 time interval is 1. This also shows an out-of-limits situation.

Interval 9. With zero incidences, *yi* = [0], the posterior density for λ is *gam*(α + *s*, β + *Nt*) = *gam*(3, 9), with λpost = 0.33 and σpost = 0.19.

Interval 10. With zero incidences, *yi* = [0], the posterior density for λ is *gam*(α + *s*, β + *Nt*) = *gam*(3, 10), with λpost = 0.30 and σpost = 0.17.
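These interval-by-interval values follow directly from the conjugate gamma update; the sketch below reproduces them (up to the rounding of the small prior constants) for the incidences of Figure 13:

```python
import numpy as np

# Incidences per interval: one incidence at intervals 4, 7 and 8 (as in Figure 13).
y = np.array([0, 0, 0, 1, 0, 0, 1, 1, 0, 0])
alpha, beta = 0.001, 0.001        # near-zero-mean gamma prior (target: zero accidents)

s = 0
for Nt, yi in enumerate(y, start=1):
    s += yi                        # cumulative incidences up to interval Nt
    a_post, b_post = alpha + s, beta + Nt
    lam_post = a_post / b_post     # mean of gam(a, b) with rate parameter b
    sigma_post = np.sqrt(a_post) / b_post
    print(f"interval {Nt}: gam({a_post:g}, {b_post:g})  "
          f"lambda_post={lam_post:.2f}  sigma_post={sigma_post:.3f}")
```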

Charts for intervals 4 and 7 for posterior and observed values are presented in Figures 15 and 16.

**Figure 15.** Poisson–gamma model. Recurrent with mean prior method. Charts based on posterior lambda evolution for intervals 4 and 7.

**Figure 16.** Poisson–gamma model. Recurrent with mean prior method. Charts based on observed lambda evolution for intervals 4 and 7.

#### 4.2.2. Exponential–Gamma Model

With an exponential–gamma model the posterior expression *f*(*p*/*data*)∝*g*(*data*/*p*)·*f*(*p*) with parameter *p* = λ is:

$$f(\lambda/\text{data}) \propto \text{g}(\text{data}/\lambda) \cdot f(\lambda) \propto \lambda^{\alpha} e^{-\lambda(\beta+t)}\tag{4}$$

being α and β the parameters of the gamma distribution and *t* the observed time between causes.

The analysis will also be performed using a recurrent method with the prior mean set equal to a desired target value, in this example relaxed to the less restrictive value of 1 accident in 20 time intervals, i.e., 0.05, with gamma prior parameters α = 0.5 and β = 10. Working with +/−1σpost and using the Metropolis–Hastings (MH) sampler, Figure 17 shows the collected values, and Figure 18 shows the (MH) sampling at interval 8 with an acceptance rate (AR) of 58%.
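A random-walk Metropolis–Hastings sampler targeting the logarithm of Equation (4) can be sketched as follows; the observed time between causes (t_obs) and the proposal step are assumed values, so the acceptance rate will not necessarily match the 58% reported:

```python
import numpy as np

def log_post_exp_gamma(lam, t_obs, alpha=0.5, beta=10.0):
    # Unnormalized log-posterior of Equation (4): gamma(alpha, beta) prior
    # combined with an exponential likelihood for the time t between causes.
    if lam <= 0:
        return -np.inf
    return alpha * np.log(lam) - lam * (beta + t_obs)

def metropolis_hastings(log_post, x0, n=4500, burn=500, step=0.05, seed=0):
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    samples, accepted = [], 0
    for i in range(n):
        prop = x + rng.normal(0.0, step)         # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:  # Metropolis accept/reject
            x, lp = prop, lp_prop
            accepted += 1
        if i >= burn:
            samples.append(x)
    return np.array(samples), accepted / n

# Illustrative run: t_obs is a hypothetical observed time between incidences.
samples, ar = metropolis_hastings(lambda l: log_post_exp_gamma(l, t_obs=3.0), x0=0.05)
print(f"posterior mean = {samples.mean():.3f}, acceptance rate = {ar:.0%}")
```

The same sampler can be reused for the Weibull–gamma, exponential–normal, and Poisson–normal targets of the following subsections by swapping the log-posterior function.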


**Figure 17.** Exponential–gamma model. Recurrent with mean prior method. Evolution of collected (ic's) and safety barrier fails in 10 intervals. Confidence interval (CI) [5%–95%]. Out of limits framed blue.

**Figure 18.** Metropolis–Hastings. Interval 8. Sampling n = 4500, burn = 500; 10 cycles. Acceptance rate (AR) = 58%.

#### 4.2.3. Weibull–Gamma Model

With a Weibull–gamma model, human fatigue is considered in the analysis, and the posterior expression for *f*(*p*/*data*) ∝ *g*(*data*/*p*)·*f*(*p*) with parameter *p* = λ is:

$$f(\lambda/\text{data}) \propto \text{g}(\text{data}/\lambda) \cdot f(\lambda) \propto \lambda^{\alpha-1} \cdot \exp\left(-\lambda \beta\right) \cdot \lambda^{c} \cdot \exp\left(-(\lambda t)^{c}\right) \tag{5}$$

α and β being the parameters of the gamma distribution, *t* the observed time between causes, and *c* the fatigue parameter.

When c = 1 the failure rate function is constant, being equivalent to an exponential–uniform model. If c > 1 the failure rate function is increasing, and if 0 < c < 1 it is decreasing. A conservative value of c = 1.5 can be adopted. With a target value of 1 accident in 20 time intervals, equal to 0.05, and working with +/−1σpost, the Metropolis–Hastings sampler is applied. Figure 19 shows the collected values, and Figure 20 shows the (MH) sampling at interval 8 with an acceptance rate (AR) of 63%.
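The effect of the fatigue parameter c on the failure rate can be checked numerically; the sketch below uses the 0.05 rate target from above and arbitrary evaluation times:

```python
import numpy as np
from scipy.stats import weibull_min

# Illustration of how c shapes the Weibull failure (hazard) rate; lambda is the
# rate target used above and the evaluation times t are arbitrary.
lam, t = 0.05, np.array([1.0, 5.0, 10.0])
for c in (0.8, 1.0, 1.5):
    # hazard h(t) = pdf(t) / survival(t); the scale parameter is 1/lambda
    h = weibull_min.pdf(t, c, scale=1 / lam) / weibull_min.sf(t, c, scale=1 / lam)
    print(f"c = {c}: hazard at t = {t} -> {np.round(h, 4)}")
```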


**Figure 19.** Weibull–gamma model. Recurrent with mean prior method. Evolution of collected (ic's) and safety barrier fails in 10 intervals. Confidence interval (CI) [5%–95%]. Out of limits framed blue.

**Figure 20.** Metropolis–Hastings. Interval 8. Sampling n = 4500, burn = 500; 10 cycles. AR = 63%.

The out-of-limits situations appear in the same time intervals as in the previous model. It should be noted that the parameter *p* = λ is modeled as a gamma distribution, and the posterior is also a gamma distribution, independently of whether the observations are modeled as Poisson, exponential, or Weibull functions. Another possibility is to apply a normal distribution for the approximation to the parameter *p* characteristic.

#### 4.2.4. Exponential–Normal Model

With the exponential–normal model, the posterior expression *f*(*p*/*data*) ∝ *g*(*data*/*p*)·*f*(*p*) with parameter *p* = λ is:

$$f(\lambda/\text{data}) \propto \text{g}(\text{data}/\lambda) \cdot f(\lambda) \propto \lambda \exp\left(-\lambda t\right) \cdot \frac{1}{\sigma} \exp\left(-(x - \lambda)^2/2\sigma^2\right) \tag{6}$$

being *t* the observed time between causes and *x* the evolution of the parameter value.

With the same 0.05 prior value and working with +/−1σpost, also with the recurrent mean-prior method, the collected values are presented in Figure 21, and Figure 22 shows the (MH) sampling at interval 4 with an acceptance rate (AR) of 56%.


**Figure 21.** Exponential–normal model. Recurrent with mean prior method. Evolution of collected (ic's) and safety barrier fails in 10 intervals. Confidence interval (CI) [5%–95%]. Out of limits framed blue.

**Figure 22.** Metropolis–Hastings. Interval 4. Sampling n = 4500, burn = 500; 10 cycles. AR = 56%.

This model also presents out of limits at intervals 4, 7 and 8. The posterior is a normal distribution.

#### 4.2.5. Poisson–Normal Model

In the Poisson–normal model, the posterior expression *f*(*p*/*data*) ∝ *g*(*data*/*p*)·*f*(*p*) with parameter *p* = λ is:

$$f(\lambda/\text{data}) \propto \text{g}(\text{data}/\lambda) \cdot f(\lambda) \propto \frac{(\lambda y)^n}{n!} \exp\left(-\lambda y\right) \cdot \frac{1}{\sigma} \exp\left(-(x - \lambda)^2/2\sigma^2\right) \tag{7}$$

Being *y* the observed data and *x* the evolution of the parameter value.

With the same 0.05 prior value and working with +/−1σpost in a recurrent method with mean prior, the collected values are presented in Figure 23, and Figure 24 shows the (MH) sampling at interval 7 with an acceptance rate (AR) of 57%.


**Figure 23.** Poisson–normal model. Recurrent with mean prior method. Evolution of collected (ic's) and safety barrier fails in 10 intervals. Confidence interval (CI) [5%–95%]. Out of limits framed blue.

**Figure 24.** Interval 7. Sampling n = 4500, burn = 500; 10 cycles. AR = 57%.

This model also presents out-of-limit values at intervals 4, 7 and 8. The posterior is a normal distribution.

#### 4.2.6. Analysis of the Mitigative Safety Barriers Observing End States

If "0" is defined as a barrier is "correct" and active and "1" is in "fail" the possible end states are defined in Figure 25. Where "000" means that the first, second and third barriers are active and the end state is V1 = No injury; and the same logic applies to the rest.


**Figure 25.** End states in function of the mitigative safety barriers situation.

A prior transition matrix is defined for the three safety barriers, where *p*<sub>11</sub> is the probability that the SF1 barrier stays active in state 1; *p*<sub>12</sub> is the probability of transition from SF1 active to SF2 active because SF1 has failed; *p*<sub>13</sub> is the probability of transition from SF1 to SF3 because both SF1 and SF2 have failed, and so on. Additionally, an emission matrix is defined to indicate the probabilities that a barrier Sn is active given the observed end states (V1 V2 V3 V4), Figure 26.

$$\text{Trans} = \begin{pmatrix} p\_{11} & p\_{12} & p\_{13} \\ p\_{21} & p\_{22} & p\_{23} \\ p\_{31} & p\_{32} & p\_{33} \end{pmatrix} = \begin{pmatrix} 0.8 & 0.15 & 0.05 \\ 0.55 & 0.4 & 0.05 \\ 0.2 & 0.4 & 0.4 \end{pmatrix} \qquad \text{Emiss} = \begin{pmatrix} & V\_1 & V\_2 & V\_3 & V\_4 \\ S\_1 & 0.8 & 0.1 & 0.05 & 0.05 \\ S\_2 & 0.4 & 0.2 & 0.2 & 0.2 \\ S\_3 & 0.3 & 0.4 & 0.2 & 0.1 \end{pmatrix}$$

**Figure 26.** Prior transition and emission matrices for mitigative safety barriers and end states.

The observations are made by maintaining a group of ten values in first-in, first-out (FIFO) order: each new observation is added to the group and the oldest one is removed. The observed sequence is seqobs = [1111112311], indicating that in the first six observation times no injury has been sampled, one minor injury in the seventh, one serious injury in the eighth, and no injury in the next two observations. By applying the Baum–Welch algorithm, the posterior transition and emission matrices are obtained from the observed data, Figure 27.


**Figure 27.** Posterior transition and emission matrices obtained from the observed sequence.
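A self-contained sketch of a single Baum–Welch re-estimation pass over the observed sequence, starting from the prior matrices of Figure 26, is given below. The initial state distribution (SF1 active) and the use of a single pass instead of iterating to convergence are assumptions made for illustration, so the resulting matrices are not expected to reproduce Figure 27 exactly.

```python
import numpy as np

# Prior transition (barriers SF1-SF3) and emission (end states V1-V4) matrices, Figure 26
A = np.array([[0.80, 0.15, 0.05],
              [0.55, 0.40, 0.05],
              [0.20, 0.40, 0.40]])
B = np.array([[0.80, 0.10, 0.05, 0.05],
              [0.40, 0.20, 0.20, 0.20],
              [0.30, 0.40, 0.20, 0.10]])
pi = np.array([1.0, 0.0, 0.0])        # assumption: the chain starts with SF1 active

obs = np.array([1, 1, 1, 1, 1, 1, 2, 3, 1, 1]) - 1   # seqobs as 0-indexed end states

def baum_welch_step(A, B, pi, obs):
    """One scaled forward-backward pass and one re-estimation of A, B and pi."""
    T, N = len(obs), A.shape[0]
    alpha, beta, scale = np.zeros((T, N)), np.zeros((T, N)), np.zeros(T)
    # forward pass with scaling
    alpha[0] = pi * B[:, obs[0]]
    scale[0] = alpha[0].sum(); alpha[0] /= scale[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        scale[t] = alpha[t].sum(); alpha[t] /= scale[t]
    # backward pass reusing the forward scale factors
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / scale[t + 1]
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)
    xi = np.zeros((T - 1, N, N))
    for t in range(T - 1):
        xi[t] = alpha[t][:, None] * A * B[:, obs[t + 1]] * beta[t + 1] / scale[t + 1]
        xi[t] /= xi[t].sum()
    # re-estimation (end state V4 is never observed, so its emission column becomes 0)
    A_new = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    B_new = np.zeros_like(B)
    for k in range(B.shape[1]):
        B_new[:, k] = gamma[obs == k].sum(axis=0) / gamma.sum(axis=0)
    return A_new, B_new, gamma[0]

A_post, B_post, pi_post = baum_welch_step(A, B, pi, obs)
print("posterior transition matrix:\n", A_post.round(3))
print("posterior emission matrix:\n", B_post.round(3))
```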

The relative occupations at steady state, after an infinite number of transitions, are also obtained: SF1 is active 80% of the time, SF2 10% and SF3 10%. The numbers of visits or transitions to every state within the 10 observations are presented in Figure 28.

$$M(10) = \begin{pmatrix} p\_{11} & p\_{12} & p\_{13} \\ p\_{21} & p\_{22} & p\_{23} \\ p\_{31} & p\_{32} & p\_{33} \end{pmatrix} = \begin{pmatrix} 8.04 & 0.93 & 1.03 \\ 8.24 & 0.83 & 0.93 \\ 7.44 & 1.73 & 0.83 \end{pmatrix}$$

**Figure 28.** Number of visits for the posterior transition matrix.

It is also important to know the next passage, in observation intervals, from the SF1 and SF2 safety barriers to the most critical SF3 barrier. In this case m1 = 8 and m2 = 9, meaning that, according to the observations, a transition from SF1 to SF3 can be produced in 8 intervals, and from SF2 to SF3 in 9 intervals.
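The steady-state occupations, the number of visits over 10 intervals and the next-passage values can all be obtained from a posterior transition matrix with a few lines of linear algebra, as in the sketch below. The transition matrix used is a placeholder, since the numerical values of Figure 27 are not reproduced in the text, and the number-of-visits matrix is assumed to be the sum of the first ten powers of the transition matrix, an assumption consistent with the rows of Figures 28 and 30 summing to 10.

```python
import numpy as np

def steady_state(P):
    """Stationary distribution: left eigenvector of P for eigenvalue 1, normalized."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return v / v.sum()

def expected_visits(P, n=10):
    """Assumed definition: M(n)[i, j] = expected visits to state j in n steps from state i."""
    return sum(np.linalg.matrix_power(P, k) for k in range(n))

def mean_first_passage_to(P, target):
    """Mean number of steps to first reach `target` from each of the other states."""
    others = [i for i in range(P.shape[0]) if i != target]
    Q = P[np.ix_(others, others)]                 # transitions among the non-target states
    m = np.linalg.solve(np.eye(len(others)) - Q, np.ones(len(others)))
    return {f"state {i + 1}": round(float(v), 1) for i, v in zip(others, m)}

# Placeholder posterior transition matrix (the values of Figure 27 are not listed in the text)
P = np.array([[0.85, 0.08, 0.07],
              [0.80, 0.12, 0.08],
              [0.70, 0.20, 0.10]])

print("steady-state occupations:", steady_state(P).round(2))
print("expected visits in 10 intervals:\n", expected_visits(P, 10).round(2))
print("mean first passage to SF3:", mean_first_passage_to(P, target=2))
```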

The posterior transition and emission matrices in Figure 27 can be used as the new prior in the next sampling. In this case a minor injury (V2) has been observed, so seqobs = [1111123112] and a new posterior for the transition and emission matrices is obtained, Figure 29.


**Figure 29.** Posterior transition and emission matrices obtained from a new observation.

With occupations at steady state of SF1 = 67%, SF2 = 17% and SF3 = 16%, the numbers of visits or transitions are presented in Figure 30.

$$M(10) = \begin{pmatrix} p\_{11} & p\_{12} & p\_{13} \\ p\_{21} & p\_{22} & p\_{23} \\ p\_{31} & p\_{32} & p\_{33} \end{pmatrix} = \begin{pmatrix} 6.67 & 1.58 & 1.75 \\ 7.00 & 1.42 & 1.58 \\ 6.33 & 2.25 & 1.42 \end{pmatrix}$$

**Figure 30.** Number of visits for the posterior transition matrix.

The next passage values for the third safety barrier are m1 = 4 and m2 = 5, meaning that a transition from SF1 to SF3 can be produced in 4 intervals, and from SF2 to SF3 in 5 intervals. The values are lower than the previous ones because the sampled observation corresponds to state V2, a minor injury, which reduces the number of no-injury outcomes (state V1) and makes the possible use of the third barrier more critical.

#### 4.2.7. Analysis of the End States Observing Mitigative Safety Barriers

The prior matrices are defined as the transition matrix for the four end states and the emission matrix indicating the probabilities of staying in an end state Sn, being: S1 = No injury; S2 = Minor injury; S3 = Serious injury and S4 = Fatality; as a function of the observed three active mitigative barriers (V1 V2 V3), Figure 31.


**Figure 31.** Prior transition and emission matrices for end states and mitigative safety barriers.

The sampling is obtained by sequentially observing the barriers: if the first is active, its value is 1; if it fails and the second barrier is active, the value is 2, and so on; a group of 10 observations is also maintained. The sampled sequence of the safety barriers is seqobs = [1111112211]. The posterior transition and emission matrices are presented in Figure 32.


**Figure 32.** Posterior transition and emission matrices.

With occupations V1 = 80%, V2 = 10%, V3 = 4% and V4 = 6%, the numbers of visits or transitions are presented in Figure 33.

$$M(10) = \begin{pmatrix} p\_{11} & p\_{12} & p\_{13} & p\_{14} \\ p\_{21} & p\_{22} & p\_{23} & p\_{24} \\ p\_{31} & p\_{32} & p\_{33} & p\_{34} \\ p\_{41} & p\_{42} & p\_{43} & p\_{44} \end{pmatrix} = \begin{pmatrix} 8.04 & 0.93 & 0.423 & 0.607 \\ 8.24 & 0.83 & 0.3819 & 0.5481 \\ 7.44 & 1.73 & 0.3408 & 0.4892 \\ 7.44 & 1.73 & 0.3408 & 0.4892 \end{pmatrix}$$

**Figure 33.** Number of visits for the posterior transition matrix.

This shows a high number of visits to end state 1, in concordance with its occupation. The next passage values for the fourth state are m1 = 15, m2 = 16 and m3 = 17. Making a new observation and adding it to the group gives seqobs = [1111122112]. The new posterior transition and emission matrices are obtained, with steady-state occupations S1 = 0%, S2 = 100%, S3 = 0% and S4 = 0%, indicating that at steady state a high commitment is placed on the use of safety barrier SF2, due to the increase in the number of minor injuries corresponding to its associated final state S2.

The numbers of visits or transitions are presented in Figure 34.

$$M(10) = \begin{pmatrix} p\_{11} & p\_{12} & p\_{13} & p\_{14} \\ p\_{21} & p\_{22} & p\_{23} & p\_{24} \\ p\_{31} & p\_{32} & p\_{33} & p\_{34} \\ p\_{41} & p\_{42} & p\_{43} & p\_{44} \end{pmatrix} = \begin{pmatrix} 0.7001 & 5.9137 & 1.6875 & 1.6986 \\ 0 & 10 & 0 & 0 \\ 0 & 9.3 & 0.7 & 0 \\ 0 & 7.6013 & 1.6986 & 0.7001 \end{pmatrix}.$$

**Figure 34.** Number of visits for the posterior transition matrix.

The next passage values for the fourth state are m1 = 2, m2 = 482 and m3 = 484. There is a correlation between the number of visits or transitions and the passage to the last state. In this case, over 10 intervals the transition *p*<sub>44</sub> shows low permanence in the fourth end state (Fatality), with a value of *p*<sub>44</sub> = 0.7, and the transitions *p*<sub>24</sub> = *p*<sub>34</sub> = 0 indicate that there is no transition to the fourth state from the second and third states, which therefore show high passage intervals of 482 and 484.

#### **5. Discussion**

Observations in which the risk parameter *p* is a frequency or rate of risk causes require the use of models based on the Poisson, exponential and Weibull distributions, which are those used in this work. A recurrent procedure has been used due to the memoryless characteristic of this type of distribution, so each time interval begins as if it were new. The application of the gamma and normal distributions to characterize the risk parameter *p* is adequate when they capture well the variability and the possible values, within the range of positive numbers, that this parameter can adopt.

The posterior distribution, since it reflects the evolution of the prior, always takes the same type of distribution function.

The use of a control chart always requires previous tests to adjust the control limits. In this case the required accident rate value is very low. The ±1σ control limits are restrictive in order to highlight out-of-limit situations well in advance. Because the prior mean is a low value near zero and the upper limits are tight, it has been possible to plot the observations and the posterior mean against the same control limits.

Despite the restrictive limits, the Poisson–gamma model placed the first observation, with a value of 0.25, just at the upper limit. Characteristically, Bayesian inference softens the evolution of the statistical parameter based on the observations. For example, at the first observation at interval 4, with a value of 0.25, the Poisson–gamma model also presents 0.25, while the group formed by the exponential–gamma, exponential–normal and Poisson–normal models shows an internal difference of 4.5% but a 60% difference with respect to the Poisson–gamma. At the second observation at interval 7, with a value of 0.33, the Poisson–gamma takes a value of 0.29, and the rest of the models show an internal difference of 30% and a 135% difference with respect to the Poisson–gamma. At the last observation, with a value of 1, the Poisson–gamma shows a value of 0.38, while the exponential–normal, with a value of 0.559, and the Poisson–normal, with a value of 0.478, follow this change better than the rest of the inference models. For small changes the Poisson–gamma responds better, whereas for important changes the exponential–normal and Poisson–normal are better.

The Metropolis–Hastings sampling has been done with a random simulation of 4500 values, removing (burning) the first 500 to guarantee the stability of the process, with a repetition cycle of 10 times, determining the average value and confidence intervals to obtain the representative posterior values. The resulting traces of the sampling process also show a correct stochastic variability, with acceptance rates (AR) oscillating between 48% and 68%, which are necessary conditions to obtain good sampling.

When initiation causes occur, information is simultaneously obtained on the status of the process and of the different jobs, forcing a continuous review of their operation. For example, at interval 4 there is a fault in a sensor that affects the hot pressing section, located in a sub-function of the SF1 mitigating safety barrier; this situation requires verifying the same type of sensors. The same happens when, at interval 7, there is a failure of job self-control (JSC); it is then necessary to analyze the procedures and sub-functions that affect the design of the work, the operator's environment, the operator's characteristics, the system interface, and the procedures, information and design that affect workplaces. The last cause occurs in a failed rescue simulation that affects the SF3 safety barrier, which requires reviewing the performance of this procedure.

The hidden Markov chain procedure allows the situation of the mitigating barriers to be followed and a map of their activity to be obtained. The action of the mitigating barriers is critical because, when an accident occurs, they have to act with the highest probability of not failing. However, it should be borne in mind that the method is an estimate of reality from the observations made, allowing a quick view of which barrier is most used and most likely to fail because it shows high occupation. According to the experience gathered in the application of the Baum–Welch algorithm, the determination of the new transition and emission matrices from observations should use 10 values to obtain representative results.

If prior values for transition and emission matrices are not well known, the recurrent procedure adjusts the subsequent values according to the observations.

Collecting observations for the Bayesian inference or for the application of the hidden Markov chain can be carried out by the company's own personnel or by automated control systems; human failures, however, must in general be collected on site by personal observation.

A control chart process could be established without the need to associate it with Bayesian inference. However, the objective of the new method, in addition to the graphic control of the evolution of risk, which allows simultaneous and immediate corrective actions, is to have, on the one hand, an objective evaluation of a statistical parameter obtained from the observations through a formal procedure and, on the other hand, a continuous map of the functioning of the safety barriers and of the causes that affect the safety and health of workers.

The new method has strengths and weaknesses. Its strong point is that it is easy to implement and use to quickly gain an idea of the overall risk status of the process and of the workers' situation; at the same time, this is a weakness, since on its own it cannot be enough and must be complemented with other techniques. In addition, an important feature of the new method is that it is based on dynamic risk methodologies which, with respect to quantitative risk analysis, allow the risk situation to be updated [87,126]; its weak point is that by itself it is not a substitute for other methods. New methodologies are emerging, such as the use of dynamic risk assessment and a decision-making trial and evaluation tool integrated into a Bayesian network for assessing leakage accident situations [127], a new assessment methodology based on the application of FMECA including economic valuation [128], the assessment of the domino effect using a graphical network of events (Petri net) tool for discrete event treatment [129], the general application of Bayesian networks and Petri nets for risk assessment in the manufacturing and process industries [130], and new techniques for the identification and monitoring of emerging risks over time [23,24]. These are new proposals that highlight the importance of risk assessment and the supporting role that the new SRC methodology can provide.

Future work is extensive: in a specific scenario, the analysis can be carried out considering the entire industrial manufacturing process as a whole or analyzing more detailed areas. A sensitivity analysis can also be carried out to determine which initiation causes (ic's) or failures in safety barriers are more common and which areas within an industrial process have a higher incidence of risk.

#### **6. Conclusions**

From the review of existing methodologies, the dynamic methodologies are the ones adapted to the need for immediacy of information. The new methodology (SRC) applies the characteristics of this group.

The bow-tie, which is an important element of the new methodology, was first generated by establishing a framework for the general treatment of risks and was then adapted to occupational accidents; it is applied without major changes in the treatment of the manufacturing plant case.

The content of the initiation causes (ic's) and safety barriers has been defined and applied in the analysis of occupational accidents in a laminated paper and wood plant producing medium-density fiber (MDF) boards with melamine. The inference procedures applied in this case cover data observations belonging to Poisson, exponential and Weibull distributions, considering the risk statistical parameter under the evolution of gamma and normal distributions, and show that they are suitable for this type of analysis.

The application of the hidden Markov chain for the analysis of reactive safety barriers provides control for these types of barriers, since they are critical in their operation or failure in the event of an accident. The tool, in its application to the case of the manufacturing plant, is suitable for obtaining information on the state of the different barriers or the situation of the different final states.

In conclusion, the SRC method is a formal method that allows us to meet the three characteristics of prevention, simultaneity and immediacy. It is applicable to the assessment of occupational hazards in different industrial and manufacturing scenarios, and offers an overview of the risk status in the simplest possible way while remaining reliable, in accordance with the established control limits, and providing advance warning so that risk can be prevented.

This work can point towards future research actions considering its application in different types of industries and in project management, analyzing how it can complement other methodological tools.

**Author Contributions:** Conceptualization, M.F.-C.; Investigation, M.F.-C.; Methodology, M.F.-C.; Supervision, F.B. and M.A.S.; Validation, F.B. and M.A.S.; Writing—original draft, M.F.-C.; Writing—review and editing, M.F.-C., F.B. and M.A.S.

**Funding:** This research was funded by the Ministry of Economy and Competitiveness of Spain, with reference DPI2016-79824-R.

**Acknowledgments:** The present paper has been produced within the scope of the doctoral activities carried out by the lead author at the International Doctorate School of the Spanish National Distance-Learning University (EIDUNED, Escuela Internacional de Doctorado de la Universidad Nacional de Educación a Distancia). The authors are grateful for the support provided by this institution.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
