**Elastoplastic Fracture Analysis of the P91 Steel Welded Joint under Repair Welding Thermal Shock Based on XFEM**

### **Kai Yang 1,2, Yingjie Zhang 1,2 and Jianping Zhao 1,2,\***


Received: 29 August 2020; Accepted: 22 September 2020; Published: 25 September 2020

**Abstract:** P91 steel is a typical steel used in the manufacture of boilers in ultra-supercritical power plants and heat exchangers in nuclear power plants. For P91 steel pressurized structures in long-term service, the main failure mode is welded joint failure, especially heat affected zone (HAZ) failure. The repair welding technique is an effective method for repairing such local defects. However, the thermal shock composed of high temperature and thermal stress in the repair welding process poses a critical loading condition for existing defects near the heat source that cannot be detected by conventional means. Therefore, the evaluation of structural integrity for the welded joint in the thermal-mechanical coupling field is necessary. In this work, the crack propagation law in the HAZ of the P91 steel welded joint was investigated under repair welding thermal loads. The weld repair model of the P91 steel welded joint was established in ABAQUS. The transient temperature field and stress field in the repair welding process were calculated by relevant user subroutines and the sequential coupling simulation method. The residual stress was determined by the impact indentation strain method to verify the feasibility of the finite element (FE) model and simulation method. In order to obtain the crack propagation path, an elastoplastic fracture analysis of the welded joint with an initial crack was performed based on the extended finite element method (XFEM). The influence of different welding linear energies on the crack propagation was analyzed. The results show that the cracks in the HAZ propagate perpendicular to the surface and tend to deflect toward the welding seam under repair welding thermal loads. The crack propagation occurs in the early stage of cooling. Higher welding linear energy leads to a larger HAZ and a higher overall temperature. With the increase of welding linear energy, the length and critical distance of the crack propagation increase.
Therefore, low welding linear energy can effectively inhibit the crack propagation in the HAZ. The above calculation and analysis provide a reference for the thermal shock damage analysis of repair welding process, which is of great significance to improving the safety and reliability of weld repaired components.

**Keywords:** fracture analysis; welded joint; repair welding thermal shock; XFEM; welding linear energy

### **1. Introduction**

P91 steel is widely used in boiler components of ultra-supercritical power plants due to its excellent creep strength and steam corrosion resistance at high temperature [1]. It is also one of the common choices of steel pipe material for heat exchangers in nuclear power plants [2]. In practical application, P91 steel is usually connected by welding to form the pressure equipment components. During long-term service, local defects will occur inevitably in pressure equipment due to the effect of harsh service environment or improper operation procedure, especially in vulnerable parts, such as

welded joints [3,4]. As a Cr-Mo heat-resistant steel, the main failure mode of the P91 steel welded joints is the type IV cracking near the fine-grained heat affected zone (FGHAZ) (shown in Figure 1), which results in the degradation of structural safety [5]. An efficient and economic way to repair the cracking region is to remove the area with local defects and then refill it with welding rods by the repair welding technique [6,7], which greatly extends the service life of pressure equipment. However, conventional defect detection means, including radiographic testing, ultrasonic testing, magnetic particle testing, penetrant testing and eddy current testing, each have a limited detection sensitivity, so defects below the detection threshold may remain in the structure.

*Metals* **2020**, *10*, x FOR PEER REVIEW 2 of 26


**Figure 1.** Crack classification of Cr-Mo steel welded joint, reproduced from [5], with permission from International Materials Reviews, 2020.

Repair welding is a rapid welding heat transfer process, and the arc temperature during the welding process may reach thousands of degrees centigrade [8]. The welding heat source loads a large amount of heat flow into the welding seam and nearby area in a short time. The high temperature and the unsteady thermal stress act on the weldment in the form of thermal shock. On the one hand, this can cause the formation of hot cracks in the welding seam; on the other hand, the interaction between existing cracks and the thermal-mechanical coupling field can lead to crack propagation and even the failure of the whole component. In order to improve the quality of weldments and weld repaired components, it is necessary to study the effect of the welding process on the cracking behavior. Alvarez et al. [9] compared the hot cracking sensitivity of tungsten inert gas welding (TIG) and laser beam welding (LBW) by analyzing the microstructure and chemical composition of the welding seam of 718 alloy. Chelladurai et al. [10] studied the solidification cracking behavior of 316 stainless steel under different energy transfer modes by adjusting the pulse parameters in pulsed laser welding (PLW). Hosseini et al. [11] investigated experimentally the effect of heat input and welding speed in electron beam welding (EBW) on the hot cracking sensitivity of AA2024-T351 alloy. Coniglio et al. [12] explored the initiation mechanism of solidification cracks in the arc welding seam of 6060 aluminum alloy theoretically and numerically. Wei et al. [13] developed software that automatically plots the driving force and resistance curves of solidification cracks from numerical and experimental results. Bordreuil et al. [14] established a solidification cracking model that predicts microstructure and pore nucleation in the welding seam by combining a cellular automata model with an intergranular fluid flow model. Agarwal et al. [15] analyzed the influence of metallurgical factors and molten pool shape on solidification cracking in laser beam welding of advanced high strength steels numerically and experimentally. Jiang et al. [16] researched the effect of repair width on residual stress in a composite plate to reduce the probability of interface cracking. Hyde et al. [17] studied the creep crack propagation behavior of P91 steel weldments under constant loads by both experimental and finite element (FE) methods. Pandey et al. [18] investigated experimentally the effect of different diffusible hydrogen concentrations in the weld metal on the hydrogen-induced cracking features of P91 steel. Zhang et al. [19] applied the high energy spark deposition (HESD) method to weld repair and tested

the fracture properties of the repair welding seam. He et al. [20] performed the elastic fracture analysis of a cracked aluminum alloy plate during metal inert gas welding (MIG) and cooling processes.



According to the existing literature, many theoretical and experimental studies have been carried out on hot cracks in welding processes. Almost all current numerical simulations of cracking behavior in welding focus on structures without cracks or with stationary cracks. The thermal effect of welding, especially repair welding, can make existing cracks that were missed by conventional detection methods extend further and even penetrate through the whole structure. However, there is little research on the crack propagation behavior under repair welding thermal shock. Therefore, it is necessary to analyze the structural integrity of weldments under repair welding thermal loads.

Combined with a specific damage model, the extended finite element method (XFEM) can simulate ductile crack growth behavior accurately [21]. Belytschko [22] first proposed the embryonic form of the XFEM. It was independent of mesh generation and enhanced the finite element approximation by adding discontinuous enrichment functions to the displacement field near the crack tip. Moës et al. [23] introduced the step function and the crack tip asymptotic functions to describe the crack surface and crack tip, respectively, making the method applicable to fracture mechanics analysis, and named this technique the "extended finite element method". Daux et al. [24] added several different asymptotic functions and step functions at the crack tip and crack surface to simulate crack branching. Chessa et al. [25] improved the convergence of the blending element in the XFEM (shown in Figure 2) by utilizing the enhanced strain method. Stolarska et al. [26] used the level set method to locate the crack, which further improved the theory of the XFEM. To complete the modelling of dynamic cracks, Song et al. [27] applied the phantom node method to the XFEM, which implemented the definition of crack propagation behavior within the framework of the conventional finite element method. By now, FE software packages such as ABAQUS, ANSYS and LS-DYNA have implemented the XFEM in their fracture analysis modules. This paper uses the XFEM to predict how the repair welding thermal shock affects the cracking behavior of the heat affected zone (HAZ) in a P91 steel welded joint. The influence of welding linear energy on the cracking features has been studied, which provides a reference for repair welding of structures containing defects. This study has guiding significance for the life extension of pressure equipment under long-term service or extended service.
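As an illustration of the enrichment idea behind these developments, the sketch below evaluates the Heaviside jump enrichment and the four standard crack-tip branch functions √r·{sin(θ/2), cos(θ/2), sin(θ/2)sinθ, cos(θ/2)sinθ}. It is an illustrative Python sketch with hypothetical function names, not the ABAQUS implementation used in this work.

```python
import math

def heaviside_enrichment(signed_distance):
    """Step (jump) enrichment for elements fully cut by the crack:
    +1 above the crack surface, -1 below (sign of the level set)."""
    return 1.0 if signed_distance >= 0.0 else -1.0

def tip_branch_functions(r, theta):
    """Four asymptotic crack-tip branch functions used to enrich the
    nodes of the element containing the crack tip; (r, theta) are
    polar coordinates centred at the tip."""
    sr = math.sqrt(r)
    return (
        sr * math.sin(theta / 2.0),                     # discontinuous across the crack faces
        sr * math.cos(theta / 2.0),
        sr * math.sin(theta / 2.0) * math.sin(theta),
        sr * math.cos(theta / 2.0) * math.sin(theta),
    )

# Only the first branch function jumps across the crack faces (theta = +/- pi):
above = tip_branch_functions(1.0, math.pi)[0]    # +1 just above the crack
below = tip_branch_functions(1.0, -math.pi)[0]   # -1 just below the crack
```

Only the first function is discontinuous across the crack faces; the other three enrich the approximation of the smooth part of the near-tip field.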

**Figure 2.** Cut element and blending element in the extended finite element method (XFEM).

### **2. FE Model and Simulation Method**

#### *2.1. Model Geometry*

Two P91 steel plates with a size of 200 mm × 100 mm × 12 mm (length, width and thickness, respectively) are butt welded with four welding layers, including two layers of root welding (welding layers 1 to 2), one layer of filler welding (welding layer 3) and one layer of cover welding (welding layer 4). The selection of the form and size of the welding groove is based on the criteria of Yang et al. [28]. A V-shaped groove is adopted, the root gap and blunt edge are both 2 mm, and the groove angle is 60°. The area containing a type IV crack in the HAZ will be removed and then refilled by

repair welding. The size of the weld repair area is 80 mm × 8 mm × 3 mm (length, width and thickness, respectively), and the angle of the groove surface for repair welding is 30°. Figure 3 shows the schematic drawing of the model geometric dimensions.



**Figure 3.** Schematic drawing of the model geometry (unit: mm).

The finite element modelling tool for the thermal-mechanical coupling process and fracture behavior is ABAQUS 6.14-5 (Dassault Systemes, Paris, France). In order to save computational effort and avoid the convergence problems of 3-D computational fracture mechanics, a 2-D plane strain analysis model is set up. The plane studied in this paper is the middle cross section of the plate, as shown in Figure 4. Mesh refinement is carried out in the welding seam and HAZ, where the heat flux loading is high, to overcome the mesh-sensitivity problem. The area far away from the welding seam and HAZ is coarsened appropriately to reduce the calculation time. A transitional mesh is adopted between the refined mesh and the coarsened mesh. The FE model and meshing are shown in Figure 5. A total of 9063 elements and 9322 nodes are meshed. The element edge size for the refined mesh is 0.25 mm, and that for the coarsened mesh is 2 mm.

**Figure 4.** The location of the cross section for analysis (unit: mm).

The sequential coupling method is used in the thermal-mechanical coupling simulation. Firstly, the thermal analysis is performed, and then the temperature field results are applied to the nodes of the mechanical model to carry out the mechanical analysis. The removal and deposition of the metal are implemented by the method of Birth and Death Element. A 4-node linear heat transfer quadrilateral (DC2D4) element is adopted for the thermal analysis, and a 4-node bilinear plane strain quadrilateral reduced-integration (CPE4R) element is adopted for the mechanical analysis. The topological relationship for meshing between the thermal model and the mechanical model is consistent.
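The sequential (one-way) coupling described above can be caricatured in one dimension: the thermal analysis produces nodal temperatures, which the mechanical analysis converts into thermal strains and stresses. A minimal sketch with assumed material constants (not the P91 data used in the paper):

```python
# One-way thermal-mechanical coupling, 1-D caricature. All values are
# illustrative assumptions, not the temperature-dependent P91 properties.
alpha_exp = 1.2e-5          # thermal expansion coefficient, 1/K (assumed)
E = 2.1e11                  # Young's modulus, Pa (assumed)
T_ref = 20.0                # stress-free reference temperature, deg C

# Step 1 (thermal analysis): nodal temperatures, here simply given.
nodal_T = [20.0, 400.0, 900.0]

# Step 2 (mechanical analysis): temperatures applied as a predefined field.
thermal_strain = [alpha_exp * (T - T_ref) for T in nodal_T]

# For a fully constrained bar the thermal strain converts entirely to
# stress: compressive where the material is heated.
constrained_stress = [-E * e for e in thermal_strain]
```

The real simulation does this per increment: every temperature increment from the thermal solution is read into the mechanical model as a nodal load before the stress equilibrium is solved.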



**Figure 5.** Finite element (FE) model and meshing.

#### *2.2. Material Properties and Heat Source Model*

2.2.1. Material Property



Temperature-dependent nonlinear features are considered for the thermal physical properties and mechanical properties of P91 steel, as shown in Figure 6. The thermal physical properties include thermal conductivity, specific heat and thermal expansion coefficient, while the mechanical properties include Young's modulus, Poisson's ratio and yield strength. The density of P91 steel is assumed to be constant at a value of 7780 kg/m<sup>3</sup>. The material properties of the weld metal are also considered in the FE simulation for obtaining higher accuracy. The solidus and liquidus temperatures of both base metal and weld metal are set at 1420 °C and 1500 °C, respectively. The melting heat and solidification heat of the material are both set at 260 kJ/kg.

**Figure 6.** Temperature-dependent thermal physical properties and mechanical properties, for base metal and weld metal, used in the FE simulation, data from [29].
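In the FE model, tabulated temperature-dependent properties like those in Figure 6 are interpolated piecewise-linearly between data points and clamped at the table ends. A minimal sketch with hypothetical conductivity values (the real curves come from [29]):

```python
# Hypothetical property table: (temperature in deg C, conductivity in W/(m*K)).
# These numbers are placeholders, not the P91 data of Figure 6.
conductivity_table = [(20.0, 26.0), (400.0, 29.0), (800.0, 30.0), (1500.0, 33.0)]

def interp_property(table, T):
    """Piecewise-linear interpolation of a tabulated temperature-dependent
    property, clamped to the first/last table entries outside the range
    (the usual treatment of tabulated material data in FE codes)."""
    if T <= table[0][0]:
        return table[0][1]
    if T >= table[-1][0]:
        return table[-1][1]
    for (T0, p0), (T1, p1) in zip(table, table[1:]):
        if T0 <= T <= T1:
            return p0 + (p1 - p0) * (T - T0) / (T1 - T0)
```

For example, halfway between the 20 °C and 400 °C entries (T = 210 °C) the sketch returns the midpoint value 27.5 W/(m·K).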

### 2.2.2. Heat Source Definition

The heat source loads should reflect the actual temperature change during welding as accurately as possible. The distributed heat flux (DFLUX) is widely used because of its mature theory. Among the distributed heat flux forms, the Gaussian heat flux distribution, the uniform body heat flux distribution and the double ellipsoidal heat flux distribution are commonly used [30].



The thermal exchange during welding is mainly the thermal conduction inside the weldment, which follows Fourier's law (Equation (1)), and the governing equation of temperature field (Equation (2)) follows the law of conservation of energy.

$$R = -\lambda \frac{\partial T}{\partial x} \tag{1}$$

$$\rho c \frac{\partial T}{\partial t} = \frac{\partial}{\partial x}\left(\lambda \frac{\partial T}{\partial x}\right) + \frac{\partial}{\partial y}\left(\lambda \frac{\partial T}{\partial y}\right) + \frac{\partial}{\partial z}\left(\lambda \frac{\partial T}{\partial z}\right) + Q \tag{2}$$

where:

*R* is the heat flux (W/m<sup>2</sup>);

λ is the thermal conductivity (W/(m·K));

*T* is the distribution function of the temperature field (K);

ρ is the density (kg/m<sup>3</sup>);

*c* is the specific heat (J/(kg·K));

*t* is the transient time (s);

*Q* is the intensity of thermal energy (W/m<sup>3</sup>), including the thermal energy generated by the heat source and the thermal energy generated by the solid-liquid phase change.
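The conduction law of Equations (1) and (2) can be exercised with a minimal explicit finite-difference march. This is a constant-property 1-D illustration with assumed values, not the 2-D temperature-dependent ABAQUS model used in this work:

```python
# Explicit finite-difference march of the 1-D form of Equation (2):
# rho*c*dT/dt = d/dx(lambda*dT/dx) + Q, constant properties, Q = 0.
# Property values are illustrative, not the P91 curves of Figure 6.
rho, c, lam = 7780.0, 600.0, 29.0    # kg/m^3, J/(kg*K), W/(m*K)
dx, n = 1e-3, 51                     # 50 mm bar, 1 mm grid
alpha = lam / (rho * c)              # thermal diffusivity, m^2/s
dt = 0.4 * dx * dx / alpha           # stable step: explicit scheme needs dt <= dx^2/(2*alpha)

T = [20.0] * n
T[0] = 1500.0                        # hot face held at weld-pool temperature

for _ in range(200):
    Tn = T[:]
    for i in range(1, n - 1):
        # Discrete d/dx(lambda*dT/dx) with constant lambda
        Tn[i] = T[i] + alpha * dt / dx**2 * (T[i + 1] - 2.0 * T[i] + T[i - 1])
    Tn[0] = 1500.0                   # Dirichlet boundary at the heated face
    T = Tn                           # far end (never updated) stays at 20 deg C
```

Heat diffuses in from the hot face, so the temperature profile decreases monotonically away from it, the qualitative behaviour the transient thermal analysis resolves near the weld pool.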

The conventional uniform body heat flux distribution is defined by Equation (3) [29].

$$q = \frac{\eta \times U \times I}{V_{act}} \tag{3}$$

where *q* is the heat generation rate of the heat source; η is the arc efficiency factor; *U* is the arc voltage; *I* is the welding current; *V<sub>act</sub>* is the action volume of the heat source.

The conventional uniform body heat flux distribution adopts amplitude curves to control the piecewise changes of the heat generation rate with time. The changes are linear within each time segment, which cannot reflect the real situation of the heat source passing through the plane. In addition, the value of *V<sub>act</sub>* needs to be estimated first and then calibrated against experimental results; it must be adjusted continuously until the simulated results are consistent with the experimental results.

An improved uniform body heat flux distribution is used to simulate the welding process. The action volume of the heat source is calculated according to the groove size and welding process parameters. As shown in Figure 7, assuming that all thermal energy is concentrated in the filler metal, *V<sub>act</sub>* is determined by the product of the cross-sectional area of the welding layer (*A*) and the action length of the heat source (*C*). *C* is estimated by Equation (4) [31].

$$C = \tau \times U \times I \tag{4}$$


where τ is a coefficient determined by the welding method and welding process parameters; in this paper, τ = 3 mm·kW<sup>−1</sup>.

**Figure 7.** Action volume of the heat source.
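Under the assumptions of Figure 7, the action length and heat generation rate of Equations (3) and (4) can be sketched numerically. The welding parameters and layer area below are hypothetical placeholders, not the authors' process data:

```python
# Action volume and heat generation rate of the improved uniform body
# heat flux, Equations (3) and (4). All parameter values are assumed.
eta = 0.8                    # arc efficiency factor (SMAW value assumed in the text)
U, I = 25.0, 180.0           # arc voltage (V) and welding current (A), illustrative
tau = 3.0                    # mm/kW, as given in the text

C_mm = tau * (U * I / 1000.0)     # action length C = tau*U*I (Equation (4)), in mm
A_mm2 = 20.0                      # cross-sectional area of the welding layer, mm^2 (assumed)
V_act = A_mm2 * C_mm * 1e-9       # action volume A*C, converted to m^3 (Figure 7)
q = eta * U * I / V_act           # heat generation rate, W/m^3 (Equation (3))
```

With these assumed values the action length is 13.5 mm; because *V<sub>act</sub>* is computed from the groove geometry rather than estimated, the calibration loop described for the conventional distribution is avoided.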


Therefore, the total action time of the heat source is obtained by Equation (5).

$$t_{act} = \frac{C}{v} = \frac{\tau \times U \times I}{v} \tag{5}$$

where *v* is the welding speed.

Considering the effect of the heat source approaching and leaving the studied cross section during the welding process, the improved uniform body heat flux distribution changes with time instantaneously. The distributed heat flux is calculated by Equation (6) [32].

$$q = \frac{\eta \times U \times I}{V_{act}} \times \exp\left\{\frac{-12 \times \left[v \times (t - t_0)\right]^2}{C^2}\right\} \tag{6}$$

where *t* is the transient loading time; *t*<sub>0</sub> is the time taken for the center of the heat source to move to the studied cross section, equal to half of *t<sub>act</sub>*.

The modelling of the improved uniform body heat flux distribution is implemented in FORTRAN in DFLUX, one of the user subroutines of ABAQUS. The arc efficiency factor is assumed to be 0.8 for shielded metal arc welding (SMAW), and 0.6 for gas tungsten arc welding (GTAW).
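The time modulation of Equations (5) and (6) can be sketched outside of ABAQUS. The sketch below is in Python rather than the FORTRAN of the actual DFLUX subroutine, and the welding parameters are illustrative assumptions:

```python
import math

# Time-modulated distributed heat flux, Equations (5) and (6).
# All parameter values below are illustrative, not the authors' data.
eta, U, I = 0.8, 25.0, 180.0       # arc efficiency, voltage (V), current (A)
tau = 3.0                           # mm/kW
v = 2.0                             # welding speed, mm/s (assumed)
C = tau * (U * I / 1000.0)          # action length, mm (Equation (4))
V_act = 20.0 * C * 1e-9             # action volume, m^3 (assumed 20 mm^2 layer area)
t_act = C / v                       # total action time, s (Equation (5))
t0 = t_act / 2.0                    # heat-source centre reaches the section at t0

def dflux(t):
    """Distributed heat flux on the studied cross section, Equation (6),
    in W/m^3: a Gaussian pulse in time centred at t0."""
    return (eta * U * I / V_act) * math.exp(-12.0 * (v * (t - t0)) ** 2 / C ** 2)
```

The exponential factor makes the flux rise as the arc approaches the cross section, peak at *t* = *t*<sub>0</sub>, and decay as it leaves, which is the physical behaviour the amplitude-curve approach of the conventional distribution only approximates piecewise-linearly.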

### *2.3. Boundary Conditions*

The ambient temperature is assumed to be 20 °C. Convection heat transfer exists between the air and the weldment surface during welding due to the temperature difference. Additionally, the temperature difference between the weldment surface and the surrounding environment results in continuous radiation heat transfer. The convection heat transfer follows Newton's law of cooling (Equation (7)).

$$q\_c = \alpha\_c \times \Delta T \tag{7}$$

where *q*<sub>c</sub> is the heat flux of convection heat transfer; α<sub>c</sub> is the convection heat transfer coefficient; ∆*T* is the temperature difference between the air and the weldment surface.

The convection heat transfer coefficient is calculated by Equation (8), which is introduced into the FE model by the user subroutine FILM in ABAQUS 6.14-5.

$$\alpha\_c = \begin{cases} 0.0668 \times T & 0 < T \le 500\ ^{\circ}\mathrm{C} \\ 0.231 \times T - 82.1 & T > 500\ ^{\circ}\mathrm{C} \end{cases} \tag{8}$$

where *T* is the transient temperature of the weldment surface.

The heat flux radiated outward by weldment follows the Stefan–Boltzmann law (Equation (9)).

$$q = \varepsilon \times \sigma \times T^4 \tag{9}$$

where ε is the emissivity, assumed to be 0.85; σ is the Stefan–Boltzmann constant, 5.67 × 10<sup>−8</sup> W/(m<sup>2</sup>·K<sup>4</sup>); *T* is the temperature of the weldment surface.

The radiation heat transfer between weldment surface and surrounding environment is calculated by Equations (10) and (11).

$$q\_r = \alpha\_r \times \left(T - T\_f\right) \tag{10}$$

$$\alpha\_r = \varepsilon \times \sigma \times \frac{\left(\frac{T + 273}{100}\right)^4 - \left(\frac{T\_f + 273}{100}\right)^4}{T - T\_f} \tag{11}$$

where *q*<sub>r</sub> is the heat flux of radiation heat transfer; α<sub>r</sub> is the radiation heat transfer coefficient; *T*<sub>f</sub> is the ambient temperature.
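Equations (8) and (11) can be combined into a single surface film coefficient of the kind supplied through the FILM subroutine. The sketch below is an assumption-laden illustration, not the authors' code: with the (T + 273)/100 grouping of Equation (11), the dimensionally matching radiation constant is 5.67 (= σ × 100⁴) rather than 5.67 × 10⁻⁸, which is assumed here. Note that the two branches of Equation (8) coincide at 500 °C, so the convection coefficient is continuous.

```python
def convection_coefficient(T):
    """Convection film coefficient of Eq. (8) [W/(m^2*K)]; T in deg C.
    The two branches meet at T = 500 deg C, so alpha_c is continuous."""
    return 0.0668 * T if T <= 500.0 else 0.231 * T - 82.1

def radiation_coefficient(T, T_f=20.0, eps=0.85, C0=5.67):
    """Equivalent radiation film coefficient of Eq. (11).
    C0 = sigma * 100**4 = 5.67 W/(m^2*K^4) is assumed here so that the
    (T+273)/100 grouping remains dimensionally consistent."""
    if abs(T - T_f) < 1e-9:
        return 0.0
    num = ((T + 273.0) / 100.0) ** 4 - ((T_f + 273.0) / 100.0) ** 4
    return eps * C0 * num / (T - T_f)

def total_film_coefficient(T, T_f=20.0):
    # combined convective + radiative surface film coefficient
    return convection_coefficient(T) + radiation_coefficient(T, T_f)
```

In a FILM-style implementation, `total_film_coefficient` would be evaluated at each surface integration point against the ambient temperature *T*<sub>f</sub>.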

During the mechanical analysis, degrees of freedom for node A and node B at two ends of the bottom surface of the FE model (shown in Figure 5) are constrained in the X direction and Y direction to prevent the rigid body displacement.

#### *2.4. Thermal Loading Patterns*


Welding linear energy is the heat energy input by the welding heat source per unit length of welding seam, which influences the surface forming of the welding seam and the formation of welding defects. The magnitude of welding linear energy is calculated by Equation (12).

*Metals* **2020**, *10*, x FOR PEER REVIEW 8 of 26

$$Q = \frac{\eta \times U \times I}{v} \tag{12}$$

Three different repair welding linear energy levels will be considered in the fracture analysis. The values of the three selected linear energies are 10 kJ/cm, 16 kJ/cm and 25 kJ/cm. According to the heat flux distribution defined by Equation (6), the time distribution curves of the heat generation rate in the weld repair area under the three linear energies are plotted in Figure 8. The heat generation rate presents a Gaussian profile with time, which reflects the moving features of the heat source. With the increase of linear energy, both the duration of the thermal loading and the total generated heat energy increase. The duration of the thermal loading is 3.96 s for the linear energy of 10 kJ/cm, 6.34 s for 16 kJ/cm and 9.90 s for 25 kJ/cm.
All three thermal loading patterns have the same peak heat generation rate of 14.8 GW/m<sup>3</sup> at the intermediate instant of the corresponding loading time. The intermediate instant is the time when the center of the heat source passes through the studied cross section.
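Since *Q* = η × *U* × *I*/*v* (Equation (12)) and *t*<sub>act</sub> = *C*/*v* (Equation (5)) are both inversely proportional to the welding speed, the loading duration should scale linearly with linear energy at fixed arc power and heat source length. The reported durations bear this out, as the short check below shows:

```python
# Reported loading durations (s) for the three linear energies (kJ/cm).
durations = {10.0: 3.96, 16.0: 6.34, 25.0: 9.90}

# Q = eta*U*I/v (Eq. (12)) and t_act = C/v (Eq. (5)) are both proportional
# to 1/v, so the ratio t_act/Q should be constant at fixed arc power.
ratios = [t / Q for Q, t in durations.items()]
spread = max(ratios) - min(ratios)
```

The three ratios agree to within about 0.1%, consistent with the three loading patterns differing only in welding speed.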

**Figure 8.** Time distribution of heat generation rate under different linear energy.

#### *2.5. Damage Model*


The XFEM used in this paper is based on the cohesive crack model [33]. The damage response of a crack takes the traction–separation law as the constitutive relation. As shown in Figure 9, the damage development includes three stages: damage initiation, damage evolution and element failure.

**Figure 9.** The traction-separation law.


The damage initiation of a crack is the starting point at which the cohesive stiffness between the crack surfaces begins to degrade. It occurs when the stress or strain of the element reaches a critical value. Here, the maximum principal stress (MPS) criterion is used to judge whether a crack reaches the damage initiation condition. Meanwhile, the crack propagation direction is also controlled by the MPS criterion. The cohesive crack will propagate along the direction of maximum principal stress. The damage initiation criterion satisfies Equation (13).

$$1 \le f = \frac{\langle \sigma\_{\text{max}} \rangle}{\sigma\_{\text{max}}^0} \le 1 + f\_{\text{tol}} \tag{13}$$

where < σ<sub>max</sub> > is the actual maximum principal stress of the element; the symbol "< >" means that damage initiation does not occur in a pure compression state; σ<sup>0</sup><sub>max</sub> is the allowable maximum principal stress; *f*<sub>tol</sub> is the tolerance with a value of 0.05, the default value recommended by ABAQUS.

The damage evolution defines the degradation patterns of cohesive stiffness after damage initiation. The nonlinear exponential softening response is adopted to describe the degradation of cohesive stiffness for the analysis of elastoplastic fracture mechanics (EPFM). As the fracture energy is one of the fracture properties of materials, the evolution law based on energy method is selected. Once the crack propagation driving force exceeds the equivalent critical energy release rate (equivalent to fracture energy), the crack will extend. The equivalent critical energy release rate is calculated by the Benzeggagh–Kenane law (Equation (14)) [34].

$$G\_{\rm eqC} = (G\_{\rm IIC} - G\_{\rm IC}) \times \left(\frac{G\_{\rm II} + G\_{\rm III}}{G\_{\rm I} + G\_{\rm II} + G\_{\rm III}}\right)^{\eta} + G\_{\rm IC} \tag{14}$$

where *G*<sub>eqC</sub> is the equivalent critical energy release rate; *G*<sub>I</sub>, *G*<sub>II</sub> and *G*<sub>III</sub> are the energy release rates of mode I, II and III cracks, respectively; *G*<sub>IC</sub> and *G*<sub>IIC</sub> are the critical energy release rates of mode I and II cracks, respectively; η is the exponent, to which the response is insensitive for isotropic failure.
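A minimal sketch of the Benzeggagh–Kenane criterion of Equation (14); the exponent value is an illustrative assumption, not taken from the paper:

```python
def equivalent_critical_energy_release_rate(G_I, G_II, G_III, G_IC, G_IIC, eta=2.0):
    """Benzeggagh-Kenane mixed-mode criterion, Eq. (14).
    eta = 2.0 is an illustrative exponent, not a value from the paper."""
    G_total = G_I + G_II + G_III
    if G_total == 0.0:
        return G_IC
    # fraction of the total driving force carried by the shear modes
    shear_fraction = (G_II + G_III) / G_total
    return (G_IIC - G_IC) * shear_fraction ** eta + G_IC
```

With the paper's isotropic assumption *G*<sub>IC</sub> = *G*<sub>IIC</sub>, the first term vanishes and *G*<sub>eqC</sub> reduces to *G*<sub>IC</sub> for any mode mix, which is why the response is insensitive to η.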

Fracture toughness is an index of the capacity of materials to resist crack propagation and is one of the inherent properties of materials. It can be expressed by a single parameter such as the energy release rate *G*, the stress intensity factor *K*, the crack tip opening displacement (CTOD) or the *J*-integral. The critical energy release rate is used as the fracture toughness here. The fracture failure mode is assumed to be isotropic, so *G*<sub>IC</sub> = *G*<sub>IIC</sub>. The fracture energy in the process of damage evolution is calculated on the basis of the fracture toughness of P91 steel. The tensile strength of P91 steel is taken as an approximation for the MPS. For both fracture toughness and tensile strength, temperature-dependent data are used to simulate the actual fracture behavior, and the fracture parameters at high temperature are obtained by linear extrapolation. The fracture properties of P91 steel are shown in Figure 10.

**Figure 10.** Temperature-dependent fracture properties for P91 steel, used in the FE simulation, data from [35,36].
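The temperature interpolation of the fracture properties can be sketched as follows. The tabulated numbers are placeholders only (the Figure 10 data are not reproduced here); the sketch interpolates linearly inside the table and extrapolates linearly from the last two points, mirroring the paper's treatment of high-temperature values:

```python
# Hypothetical (temperature deg C, fracture toughness N/mm) pairs --
# illustrative placeholders, NOT the Figure 10 data.
TABLE = [(20.0, 250.0), (200.0, 220.0), (400.0, 180.0), (600.0, 120.0)]

def toughness_at(T):
    """Piecewise-linear interpolation inside the table; linear
    extrapolation beyond it using the last segment's slope."""
    if T <= TABLE[0][0]:
        return TABLE[0][1]
    for (t0, g0), (t1, g1) in zip(TABLE, TABLE[1:]):
        if T <= t1:
            return g0 + (g1 - g0) * (T - t0) / (t1 - t0)
    (t0, g0), (t1, g1) = TABLE[-2], TABLE[-1]
    return g1 + (g1 - g0) / (t1 - t0) * (T - t1)
```

The same scheme would apply to the temperature-dependent tensile strength used for the MPS criterion.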


With the development of damage evolution, the cohesive traction between crack surfaces decreases. When the cohesive traction is reduced to zero, the element fails, which means that the crack surfaces have been completely opened.

With the development of damage evolution, the cohesive traction between crack surfaces

0 500 1000 1500 2000

Tensile strength (1.0 × 10<sup>8</sup>

Fracture toughness (1.0 × 10<sup>2</sup>

Pa)

N/mm)

Temperature (℃)

**Figure 10.** Temperature-dependent fracture properties for P91 steel, used in the FE simulation, data

8

Fracture properties

*Metals* **2020**, *10*, x FOR PEER REVIEW 10 of 26

The fracture problems with damage definition have strong nonlinearity and discontinuity, which makes numerical solutions difficult to converge. ABAQUS provides a viscous regularization method to solve this problem. By setting an appropriate small viscosity coefficient, the convergence of the fracture model will be significantly improved. Here, the viscosity coefficient is set to be 1.0 × 10<sup>−5</sup>.

The type IV crack with a size of 0.5 mm perpendicular to the surface is prefabricated in the HAZ of repair welding, as shown in Figure 11a. *d* is the distance between the crack and the fusion line. The initial crack is introduced into the FE model by the XFEM technique, and the contact attributes between the crack surfaces are set as frictionless for tangential behavior and "hard" contact for normal behavior, as shown in Figure 11b.

**Figure 11.** Precrack definition: (**a**) the physical model with a precrack; (**b**) the XFEM model with a precrack.

#### **3. Weld Repair Experiments**

#### *3.1. Weld Repair Specimens*

Weld repair experiments of the P91 steel welded joint were carried out to verify the feasibility of the FE model. The size of the specimens was the same as that of the FE model. The chemical composition of the experimental base metal is shown in Table 1, which conforms to ASME BPVC.II.A-2019 [37]. The selected weld metal was particularly suited for matching P91 steel. ER90S-B9 welding rod was used as the filler metal for GTAW, while E9015-B9 stick electrode was used as the filler metal for SMAW. The chemical composition of the weld metal is shown in Table 2, which conforms to ASME BPVC.II.C-2019 [38].


**Table 1.** Chemical composition of P91 steel (wt. %).

| C | Si | Mn | S | P | Cr | Ni | Mo | V | Nb | N | Al |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.08 | 0.27 | 0.6 | 0.006 | 0.007 | 8.86 | 0.38 | 0.98 | 0.19 | 0.06 | 0.06 | 0.04 |

**Table 2.** Chemical composition of the weld metal (wt. %).

| Material Grade | C | Si | Mn | Cr | Ni | Mo | V | Nb |
|---|---|---|---|---|---|---|---|---|
| ER90S-B9 | 0.1 | 0.3 | 0.5 | 9.0 | 0.7 | 1.0 | 0.2 | 0.06 |
| E9015-B9 | 0.09 | 0.2 | 0.6 | 9.0 | 0.8 | 1.1 | 0.2 | 0.05 |

Firstly, the base metal plates were connected by multilayer welding. GTAW was used for root welding, followed by SMAW for filler welding and cover welding. The shielding gas for GTAW was argon. Then the welded joint was air-cooled to the ambient temperature. The metal near the FGHAZ of the welded joint, where type IV cracking was prone to occur, was removed. Finally, the repair welding technique was utilized to refill it by SMAW with a welding linear energy of 10 kJ/cm. The specific welding parameters for each layer and weld repair are shown in Table 3. The interlayer temperature was maintained at 200–300 °C during welding. The morphology of welding seams after initial welding and repair welding is shown in Figure 12.


**Table 3.** Specific welding parameters for each layer and weld repair.


**Figure 12.** Morphology of specimens: (**a**) after initial welding; (**b**) after repair welding; (**c**) morphology of welding layers.

### *3.2. Verification of the FE Model*

After the fabrication of weld repair specimens, the impact indentation strain method [39] was used to measure the residual stress. A spherical indenter was used to generate indentations in the form of impact loads at the location of measuring points. On the one hand, the impact action makes the material appear to have plastic flow deformation. On the other hand, the elastoplastic deformation caused by indentations itself changes under the action of residual stress. The total deformation after superposition of the two kinds of deformation is called strain increment. The relationship between the strain increment generated by the impact indentation strain method and elastic strain satisfies Equation (15).

$$\Delta \varepsilon = a\_1 \times \varepsilon\_e + a\_2 \times \varepsilon\_e^2 + a\_3 \times \varepsilon\_e^3 + B \tag{15}$$

where ∆ε is the strain increment; ε<sub>e</sub> is the elastic strain; *a*<sub>1</sub>, *a*<sub>2</sub> and *a*<sub>3</sub> are the coefficients obtained from the calibration curve; *B* is the strain increment in a zero-stress state.
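The post-processing step implied by Equation (15) can be sketched as follows: invert the calibration cubic numerically for the elastic strain, then apply the plane-stress Hooke's law to the two gauge directions. All numeric values (calibration coefficients, *E*, ν) are illustrative assumptions; in practice *a*<sub>1</sub>–*a*<sub>3</sub> and *B* come from the calibration curve.

```python
def elastic_strain_from_increment(d_eps, a1, a2, a3, B, lo=-5e-3, hi=5e-3):
    """Invert the calibration curve of Eq. (15) for the elastic strain
    by bisection (assumes the curve is monotonic over [lo, hi])."""
    f = lambda e: a1 * e + a2 * e ** 2 + a3 * e ** 3 + B - d_eps
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def biaxial_stress(eps_x, eps_z, E=210e3, nu=0.3):
    """Plane-stress Hooke's law for the two gauge directions [MPa, E in MPa]."""
    k = E / (1.0 - nu ** 2)
    return k * (eps_x + nu * eps_z), k * (eps_z + nu * eps_x)
```

With the two residual strains measured along the *X*-axis and *Z*-axis grid directions, `biaxial_stress` returns the corresponding surface residual stress components.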

The biaxial resistance strain gauge containing two sensing grid elements with different axial directions, namely the *X*-axis direction and the *Z*-axis direction, was used to measure residual strain. A certain size of indentation was produced at the midpoint of the grid axis by the impact load. The value of strain increment was recorded by the strain recording instrument. The value of residual strain was determined according to the relationship between the calibrated elastic strain and the strain increment. The value of residual stress was calculated by Hooke's law.

The surface residual stress of specimens after both initial welding and repair welding was measured. Five measuring points were taken from each specimen, and the distance between adjacent measuring points on each side of the welding seam was 2.5 mm. The specific position of each measuring point is shown in Figure 13. The measurement of surface residual stress was performed with the KJS-3\4 indentation stress measurement system (developed by the Institute of Metal Research, Chinese Academy of Sciences, Shenyang, China). Before the measurements, the accuracy of the indentation stress measurement system was calibrated against conventional residual stress measurement methods, such as X-ray diffraction, in accordance with GB/T 24179-2009.

As shown in Figure 14a, the measurement system consists of two parts: a portable intelligent stress tester and an indentation generating device. The indentation generating device includes a striking rod (for generating indentations), a permanent magnet fixed base (for restricting the displacement of the measurement device) and a centering microscope (for centering the grid axis of the strain gauge). The residual stress on-site measuring is shown in Figure 14b.

**Figure 13.** Position of measuring points for specimens (unit: mm): (**a**) after initial welding; (**b**) after repair welding.

**Figure 14.** The measurement of residual stress: (**a**) residual stress measurement system; (**b**) residual stress on-site measuring.

Figure 15 presents a comparison of the residual stress within and around the welding seam zone obtained from experiments and FE simulations. The simulated results are in good agreement with the experimental results; the maximum difference between the simulated and experimental values is less than 10%. Therefore, the FE model and program developed in this paper are suitable for the thermal-mechanical coupled simulation of the repair welding process for the welded joint. The remaining difference arises because the 2-D numerical model is a simplification of the 3-D experimental model: the 2-D plane in the FE simulations actually corresponds to the middle plane of an infinite plate, so the simulated results would closely match the experimental results only if the specimen size in the length direction were infinitely large.


**Figure 15.** Comparison of the residual stress obtained from FE simulations and experiments: (**a**) S11 stress after initial welding; (**b**) S33 stress after initial welding; (**c**) S11 stress after repair welding; (**d**) S33 stress after repair welding.

#### **4. Results and Discussion**

#### *4.1. Thermal Analysis*


The transient temperature contours at the intermediate instant and the end instant of repair welding for different linear energies are shown in Figure 16. The variation of linear energy has a significant influence on the temperature profile of the welding seam and HAZ. With the increase of linear energy, the peak temperature at the intermediate instant of repair welding increases. The peak temperature is 1819 °C for the linear energy of 10 kJ/cm, 2088 °C for 16 kJ/cm and 2320 °C for 25 kJ/cm. This is because higher linear energy involves longer loading times, so more thermal energy is poured into the bulk metal during repair welding. After the intermediate instant of repair welding, as the heat source starts to depart from the cross section, the heat generation rate decreases continuously. At the end instant of repair welding, the peak temperature for different linear energies shows no obvious difference; all are around 1500 °C. In addition, with the increase of repair welding linear energy, the range of the HAZ expands. The higher linear energy brings a higher overall temperature of the HAZ and poorer fracture properties.

times, thus more thermal energy is poured into the bulk metal during repair welding. After the
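For orientation, the welding linear energy compared throughout this section is the heat input per unit length of weld, conventionally Q = ηUI/v. A minimal sketch (the arc voltage, current, travel speed and efficiency below are illustrative values, not parameters reported in this study):

```python
def linear_energy_kj_per_cm(voltage_v, current_a, speed_cm_per_s, efficiency=0.8):
    """Welding linear energy Q = eta * U * I / v, returned in kJ/cm.

    eta is the arc efficiency (process dependent; the default is illustrative).
    """
    joules_per_cm = efficiency * voltage_v * current_a / speed_cm_per_s
    return joules_per_cm / 1000.0  # J/cm -> kJ/cm

# e.g. a hypothetical 25 V, 200 A arc travelling at 0.4 cm/s with eta = 0.8
# deposits 10 kJ/cm, the lowest linear energy studied here.
```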

*Metals* **2020**, *10*, x FOR PEER REVIEW 14 of 26


**Figure 15.** Comparison of the residual stress obtained from FE simulations and experiments: (**a**) S11 stress after initial welding; (**b**) S33 stress after initial welding; (**c**) S11 stress after repair welding; (**d**) S33 stress after repair welding.



**Figure 16.** Temperature contours at the intermediate instant and the end instant of repair welding for different linear energy.
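The moving heat source that produces such temperature fields is commonly modeled with a Goldak double-ellipsoid power density, the kind of distribution a DFLUX-style user subroutine evaluates at each integration point. A hedged sketch of the front semi-ellipsoid (the semi-axes, power and front fraction are illustrative assumptions, not this study's heat-source calibration):

```python
import math

def goldak_flux(x, y, z, power_w, a_f=0.8, b=0.6, c=0.4, f_f=0.6):
    """Power density (W/cm^3) of the front semi-ellipsoid of a Goldak
    double-ellipsoid heat source centred at the origin.

    a_f, b, c: front semi-axes (cm); f_f: front energy fraction (f_f + f_r = 2).
    """
    coeff = 6.0 * math.sqrt(3.0) * f_f * power_w / (
        a_f * b * c * math.pi * math.sqrt(math.pi))
    return coeff * math.exp(
        -3.0 * (x / a_f) ** 2 - 3.0 * (y / b) ** 2 - 3.0 * (z / c) ** 2)
```

In a DFLUX-style routine the coordinates would be shifted by the instantaneous torch position (x − vt) so the ellipsoid travels along the weld line.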

Figure 17 shows the temperature distribution at a depth of 0.5 mm in the HAZ at the end instant of repair welding for different linear energy. The overall temperature in the HAZ increases with the increment of linear energy. The peak temperature of the HAZ is 961 °C for the linear energy of 10 kJ/cm, 1130 °C for 16 kJ/cm and 1258 °C for 25 kJ/cm. The temperature distribution of the HAZ is gentler for high linear energy.

**Figure 17.** Temperature distribution at a depth of 0.5 mm in the heat affected zone (HAZ) at the end instant of repair welding for different linear energy.

#### *4.2. Mechanical Analysis*

The calculated temperature field loads are used as the predefined field of the stress calculation to perform the mechanical analysis. In Figure 18, the contours of the Von Mises stress field are shown at the end instant of repair welding and the end instant of cooling (the transient time at which the weld repaired component is cooled to the ambient temperature) for different linear energy. With the increase of linear energy, the range of the high stress area in the HAZ at both the end instant of repair welding and the end instant of cooling expands. For different linear energy, the peak Von Mises stress in the HAZ at the end instant of repair welding and the end instant of cooling is similar: about 445 MPa at the end instant of repair welding and about 480 MPa at the end instant of cooling. At the end instant of cooling, the peak Von Mises stress of the repair welding seam is greater than that of the HAZ. This is because of the different material properties considered between the weld metal and the base metal.


**Figure 18.** Stress contours at the end instant of repair welding and the end instant of cooling for different linear energy.

The S11, S33 and Von Mises stresses at a depth of 0.5 mm in the HAZ at the end instant of repair welding for different linear energy are shown in Figure 19. With the increase of linear energy, the range of the high stress area of S11, S33 and Von Mises stress expands and moves away from the fusion line. The peak S33 stress and Von Mises stress change little with the increase of linear energy, while the peak S11 stress increases slightly with the increment of linear energy. The S11 and S33 stresses are tensile near the fusion line and compressive far away from it.

**Figure 19.** Stress distribution at a depth of 0.5 mm in the HAZ at the end instant of repair welding for different linear energy: (**a**) S11 stress in the HAZ; (**b**) S33 stress in the HAZ; (**c**) Von Mises stress in the HAZ.

Figure 20 shows the S11, S33 and Von Mises stress distribution at a depth of 0.5 mm in the HAZ at the end instant of cooling for different linear energy. Both the S33 stress and the Von Mises stress have similar peaks for different linear energy: the peak S33 stress is about 525 MPa, and the peak Von Mises stress is about 480 MPa. In the cooling process, the S11 stress near the fusion line in the HAZ changes from tensile to compressive, and the S11 stress far away from the fusion line changes from compressive to tensile. Almost all the S33 stress in the HAZ changes from compressive to tensile, with the peaks changing from −500 MPa to +525 MPa, which is caused by the shrinkage of the welding seam during cooling. The remarkable tensile stress may lead to the propagation of initial cracks in the HAZ. Moreover, with the increase of linear energy, the range of the high stress area in the HAZ expands: it is 4 mm for the linear energy of 10 kJ/cm, 6 mm for 16 kJ/cm and 10 mm for 25 kJ/cm.

**Figure 20.** Stress distribution at a depth of 0.5 mm in the HAZ at the end instant of cooling for different linear energy: (**a**) S11 stress in the HAZ; (**b**) S33 stress in the HAZ; (**c**) Von Mises stress in the HAZ.
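The order of magnitude of these cooling stresses can be checked with the fully constrained thermal stress estimate σ = EαΔT. The sketch below uses generic steel constants (E, α, the temperature drop and the yield cap are illustrative, not the temperature-dependent P91 data used in the FE model):

```python
def constrained_thermal_stress_mpa(e_gpa, alpha_per_k, delta_t_k, yield_mpa):
    """Elastic thermal stress of a fully constrained bar, capped at yield.

    sigma = E * alpha * dT, with E converted from GPa to MPa.
    """
    sigma = e_gpa * 1e3 * alpha_per_k * delta_t_k
    return min(sigma, yield_mpa)

# e.g. E = 200 GPa, alpha = 12e-6 /K and a constrained 200 K temperature drop
# already give 480 MPa, the order of the peak Von Mises stress reported above.
```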


#### *4.3. XFEM Analysis*

Under repair welding thermal shock, the welding seam and HAZ deform plastically, so the fracture mode is elastoplastic fracture. The whole fracture process has strong nonlinearity, including not only the nonlinearity of material properties, but also the geometric nonlinearity consisting of large thermal deformation and crack propagation.

The crack at the position of *d* = 2 mm under the repair welding linear energy of 10 kJ/cm is selected for analysis, and the simulated crack propagation paths are shown in Figure 21. It is noteworthy that the crack propagation occurs during the cooling process, which is in agreement with the mechanical analysis. The crack propagation direction is almost perpendicular to the surface due to the control of the MPS criterion, and there is a certain tendency for the crack to deflect to the welding seam during the propagation process. A total of two elements are extended and the extracted crack propagation length is 0.52 mm. Figure 21a shows the MPS contours for the crack zone at different transient times. *t* = 3.96 s is the end instant of repair welding and the start instant of cooling. At the start instant of cooling, the MPS around the crack tip is lower than the allowable MPS at the corresponding temperature according to the damage initiation criterion, so the crack does not propagate. As the cooling process goes on, the tensile stress increases continuously, which leads to the increment of the MPS around the crack tip. At *t* = 4.43 s, the crack damage initiation condition is reached, and the new crack surface begins to form. After the formation of the new crack surface, the stress concentration around the crack tip is released and the MPS decreases. A new driving force is required to promote further crack propagation. At *t* = 5.22 s, the new driving force makes the crack extend further.
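The damage initiation check described above — propagation begins once the maximum principal stress reaches a temperature-dependent allowable — can be sketched as follows; the allowable value passed in is a hypothetical stand-in for the study's temperature-dependent fracture data:

```python
import numpy as np

def max_principal_stress(sig_xx, sig_yy, sig_xy):
    """Largest eigenvalue of a 2D stress tensor (the MPS)."""
    tensor = np.array([[sig_xx, sig_xy], [sig_xy, sig_yy]])
    return float(np.linalg.eigvalsh(tensor)[-1])  # eigvalsh sorts ascending

def initiates(sig_xx, sig_yy, sig_xy, allowable_mps):
    """MPS damage-initiation criterion: f = <MPS> / MPS_allowable >= 1.

    The Macaulay bracket <.> ensures pure compression never initiates damage.
    """
    mps = max(max_principal_stress(sig_xx, sig_yy, sig_xy), 0.0)
    return mps / allowable_mps >= 1.0
```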


**Figure 21.** Crack propagation paths at different transient times (*d* = 2 mm, *Q* = 10 kJ/cm): (**a**) maximum principal stress (MPS) contours; (**b**) status of the enriched element (STATUSXFEM).

Figure 21b shows the status of the enriched element (STATUSXFEM) for the crack zone at different transient times. The variable STATUSXFEM represents the damage degree of the crack, ranging from 0 to 1.0. A value of 0 means that the material has no damage. A value of 1.0 means that the crack is fully opened and the cohesive traction between the crack surfaces is zero. When the value is greater than 0 and less than 1.0, it means that the cohesive crack has already extended and needs extra driving force to fully open. Although the new cracks are not fully opened, in the complex service environment, the cohesive cracks containing damage are likely to open completely after being driven by the extra driving force, which threatens the structural integrity of weld repaired components.
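An intermediate STATUSXFEM value corresponds to a scalar cohesive damage d that degrades the traction transmitted between the crack faces. A minimal linear-softening sketch (the separation thresholds and stiffness are illustrative, and linear softening is a stand-in for the study's energy-based evolution law):

```python
def cohesive_damage(delta, delta_0, delta_f):
    """Scalar damage d for linear softening: 0 before initiation (delta_0),
    1 at full separation (delta_f) -- a STATUSXFEM-style variable."""
    if delta <= delta_0:
        return 0.0
    if delta >= delta_f:
        return 1.0
    return delta_f * (delta - delta_0) / (delta * (delta_f - delta_0))

def traction(delta, stiffness, delta_0, delta_f):
    """Cohesive traction degraded by damage: t = (1 - d) * K * delta."""
    return (1.0 - cohesive_damage(delta, delta_0, delta_f)) * stiffness * delta
```

With this form the traction peaks at δ0 and falls to zero at δf, so a partially damaged element (0 < d < 1) still carries load and needs extra driving force to open fully, as described above.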

According to the crack propagation paths in Figure 21, two elements in the initial crack front are selected for analysis. The transient S11, S33 and Von Mises stresses of the elements under repair welding thermal shock are shown in Figure 22. The first 3.96 s of transient time is the repair welding process. In this process, the HAZ is compressed due to the outward expansion of the welding seam. The S11 and S33 stresses are compressive, and the magnitude of the compressive stress increases as repair welding proceeds. After repair welding, the welding seam begins to cool and shrink, which results in tensile stress in the HAZ. The S11 and S33 stresses change from compressive to tensile in the early stage of cooling, and the crack propagation occurs in this stage.



**Figure 22.** Transient stress curves of the elements in the crack front: (**a**) initial crack and elements in the crack front; (**b**) element 1; (**c**) element 2.

In order to investigate the influence of repair welding thermal shock on cracks at different distances from the fusion line for the three linear energy values, initial cracks with a size of 0.5 mm perpendicular to the surface were prefabricated at positions where the distances from the fusion line were 2 mm, 4 mm, 6 mm, 8 mm and 10 mm, respectively. Figure 23 shows the crack propagation predicted by XFEM at different positions for the three linear energy values. All the crack propagation occurs in the early stage of cooling. For cracks at the same location, the crack propagation length and the number of damaged elements increase with the increase of linear energy. This is because higher linear energy brings a higher temperature of the HAZ, which leads to poorer crack resistance. The longest crack propagation appears at the position of *d* = 2 mm under the linear energy of 25 kJ/cm. The extracted crack propagation length is 1.1 mm, almost twice the initial crack length. In this case, fully opened crack propagation has occurred, and the crack is likely to develop into a macrocrack, even to penetrate through the whole wall thickness. For the same linear energy, the crack propagation length decreases with the increment of distance from the fusion line. Neither the cracks with *d* > 6 mm for the linear energy of 10 kJ/cm nor the cracks with *d* > 8 mm for the linear energy of 16 kJ/cm propagate. The reason is that the lower the linear energy is, the smaller the scope of the HAZ is. Once the crack is far away from the fusion line, the driving force is not enough to promote crack propagation. It should be noted that the area where cracks propagate is the high stress area where the peak stress is located. Almost all crack propagation directions are perpendicular to the surface and tend to deflect to the welding seam, which may be related to the high stress in the welding seam.

**Figure 23.** Crack propagation predicted by XFEM at different positions for three different linear energy values.

Since the crack resistance of materials is related to temperature, the temperature at the crack tip is one of the key factors affecting crack propagation. On the one hand, the higher the temperature is, the poorer the crack resistance is, and the cracks are easier to extend under the same driving force. On the other hand, the larger the temperature gradient is, the greater the thermal stress is, so the driving force is sufficient for crack propagation. Figure 24 shows the thermal cycle curves at different crack tips in the HAZ for the three linear energy values. For the same linear energy, with the decrease of the distance from the fusion line, the difference in the peak temperature at the crack tip becomes larger. As a result, the closer the crack is to the fusion line, the greater the temperature gradient and temperature change around the crack tip, the greater the driving force and the longer the crack propagation length. For the same crack tip, the peak temperature increases with the increment of linear energy. Therefore, the greater the linear energy is, the poorer the crack resistance of materials in the HAZ is. In the early stage of cooling, the temperature in the HAZ is higher and the temperature difference in the HAZ is larger, so crack propagation occurs in the early stage of cooling. As the cooling process goes on, the temperatures at the crack tips at different positions gradually tend to become consistent.

**Figure 24.** Thermal cycle curves at crack tips in the HAZ for three different linear energy values: (**a**) *Q* = 10 kJ/cm; (**b**) *Q* = 16 kJ/cm; (**c**) *Q* = 25 kJ/cm.
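To first order, post-weld thermal cycles have a Newtonian (exponential) cooling shape, which also illustrates why crack-tip temperatures at different positions converge late in cooling. A hedged sketch with illustrative peak temperatures and time constant (not fitted to the curves in Figure 24):

```python
import math

def cooling_curve(t_s, t_peak_c, t_amb_c=20.0, tau_s=5.0):
    """Newtonian cooling T(t) = T_amb + (T_peak - T_amb) * exp(-t / tau)."""
    return t_amb_c + (t_peak_c - t_amb_c) * math.exp(-t_s / tau_s)

# two crack tips with different peak temperatures converge during cooling:
gap_early = cooling_curve(1.0, 900.0) - cooling_curve(1.0, 600.0)
gap_late = cooling_curve(30.0, 900.0) - cooling_curve(30.0, 600.0)
```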

The distribution of heat flux through the crack surfaces with the transient time is shown in Figure 25. The distribution of heat flux is similar to the thermal cycle curves at the crack tips, which indicates that the temperature at the crack tips is proportional to the thermal energy density through the crack surfaces. For the same crack surface, the peak heat flux increases with the increment of linear energy. For the same linear energy, in the repair welding process and the early stage of cooling, the closer the crack surface is to the fusion line, the greater the heat flux gradient and the heat flux change are.

**Figure 25.** Heat flux distribution through crack surfaces in the HAZ for three different linear energy values: (**a**) *Q* = 10 kJ/cm; (**b**) *Q* = 16 kJ/cm; (**c**) *Q* = 25 kJ/cm.

#### **5. Conclusions**

In this study, the XFEM based on a cohesive crack model is used to establish the repair welding model of the P91 steel welded joint containing initial cracks. The MPS criterion and the energy method are used to define the damage initiation and damage evolution, respectively. Considering the temperature-dependent material physical properties and fracture properties, the effect of repair welding thermal shock on the crack propagation behavior in the HAZ is simulated. The influence of different repair welding linear energy and different crack positions on the cracking features is analyzed. The main conclusions are as follows:


(5) After a certain distance from the fusion line, the cracks in the HAZ do not extend. This distance is the critical distance for the crack propagation. With the increase of linear energy, the critical distance of crack propagation increases. The critical distance of crack propagation is consistent with the high stress area after repair welding. Therefore, for repair welding, low linear energy is preferred to constrain the cracking behavior in the HAZ.

**Author Contributions:** Conceptualization, J.Z.; methodology, K.Y.; software, K.Y.; validation, Y.Z.; formal analysis, K.Y.; investigation, Y.Z.; resources, Y.Z.; data curation, K.Y.; writing—original draft, K.Y.; writing—review & editing, K.Y., Y.Z. and J.Z.; visualization, K.Y.; supervision, J.Z.; project administration, K.Y.; funding acquisition, J.Z. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the National Key Research and Development Program of China (2016YFC0801905).

**Acknowledgments:** The authors gratefully acknowledge the financial support from the National Key Research and Development Program of China (2016YFC0801905). Furthermore, the authors are grateful for the technical support of Jiangsu Key Lab of Design and Manufacture of Extreme Pressure Equipment.

**Conflicts of Interest:** The authors declare no conflict of interest.

### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Review* **Dirlik and Tovo-Benasciutti Spectral Methods in Vibration Fatigue: A Review with a Historical Perspective**

**Turan Dirlik 1,\* and Denis Benasciutti 2,\***


**Abstract:** The frequency domain techniques (also known as "spectral methods") prove significantly more efficient than the time domain fatigue life calculations, especially when they are used in conjunction with finite element analysis. Frequency domain methods are now well established, and suitable commercial software is commonly available. Among the existing techniques, the methods by Dirlik and by Tovo–Benasciutti (TB) have become the most used. This study presents the historical background and the motivation behind the development of these two spectral methods, by also emphasizing their application and possible limitations. It further presents a brief review of the other spectral methods available for cycle counting directly from the power spectral density of the random loading. Finally, some ideas for future work are suggested.

**Keywords:** random loading; fatigue damage; power spectral density (PSD); spectral methods

### **1. Introduction**

The estimation of fatigue life under variable amplitude or random loadings has been an active research topic in the last fifty years, and the activity has further increased in the last two decades. Cumulative damage calculations, in particular, occupy a dominant sector in the structural integrity assessment of metallic components and structures subjected to random fatigue loadings. In order to achieve a high level of structural reliability, fatigue life calculations must be made at several stages of the design and development process. An important aspect of the development of fatigue-resistant components and structures is the ability to estimate component fatigue life and, thus, prevent unexpected failures.

The traditional approach to fatigue analysis—often referred to as "time domain approach"—uses a technique called rainflow cycle counting to decompose a variable amplitude time signal of stress into fatigue cycles [1,2]. The damage from each cycle is then computed using an S/N curve, which characterizes the material strength for constant amplitude loadings. The damage over the entire time signal is finally calculated by summing the damage from all the individual cycles, using, for example, the celebrated Palmgren–Miner linear damage rule. This simple rule, which sums the damage of cycles, regardless of their order in the loading, also postulates that fatigue failure occurs when the damage exceeds a critical level equal to unity [3].
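The cycle-by-cycle bookkeeping described above can be condensed into a few lines; the Basquin-type S/N curve N(S) = C/S^k and its constants below are illustrative, not material data:

```python
def miner_damage(cycles, c=1e12, k=3.0):
    """Palmgren-Miner damage sum for (amplitude, count) pairs against an
    S/N curve N(S) = C / S**k; failure is predicted when the sum reaches 1.

    Each cycle of amplitude S contributes 1/N(S) = S**k / C, regardless of
    its order in the loading history.
    """
    return sum(n * s ** k / c for s, n in cycles)

# e.g. a hypothetical rainflow result: 1e4 cycles at 100 MPa, 1e2 at 300 MPa
d = miner_damage([(100.0, 1e4), (300.0, 1e2)])
```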

The time domain approach does not present any particular theoretical complexity and, nowadays, it is easy to implement, considering, for example, that the rainflow algorithm is available in several numerical packages [4,5]. The approach may, however, require a long computation time when many digitized signals of long duration have to be analyzed, such as those computed at hundreds of thousands of nodes in a finite element model. This unfortunate circumstance could make the fatigue damage computation the bottleneck of the whole design process.

**Citation:** Dirlik, T.; Benasciutti, D. Dirlik and Tovo-Benasciutti Spectral Methods in Vibration Fatigue: A Review with a Historical Perspective. *Metals* **2021**, *11*, 1333. https://doi.org/10.3390/met11091333

Academic Editor: Anders E. W. Jarfors

Received: 3 August 2021; Accepted: 20 August 2021; Published: 24 August 2021

In addition, the procedure by which stress signals are obtained plays an important role. In fact, the computation time may increase if the random stress signals are obtained by a transient simulation in time domain, which is orders of magnitude slower than an analysis in frequency domain, able to compute harmonic transfer functions and stress power spectral densities with much less computational effort.

For linear systems, fatigue life calculations conducted entirely in the frequency domain can shorten the computation time considerably. This corresponds to the so-called "frequency domain approach" in vibration fatigue. Fatigue life calculation methods based on frequency domain information of a random loading are now well established, and suitable commercial software is commonly available (e.g., nCode DesignLife, Simulia Fe-safe, CAEfatigue, Ansys Random Vibration Fatigue). Among frequency domain methods, one should also number those approaches in which spectral methods are combined with crack growth models with the aim to estimate fatigue life under random loadings [6–10]. Crack growth models, though relevant for the fatigue durability field, are not included in the following discussion, which is concerned with S/N based approaches in frequency domain.

Since the early works of Miles [11] and Bendat [12], and in particular in the past thirty years, tens of spectral methods have been developed and their number continues to rise even today (a short list is found in [13,14]).

However, among all the existing methods, two of them have gained an increased popularity and large use in the scientific and technical community: they are the Dirlik method [15] and the Tovo–Benasciutti (or TB) method [16,17]. One reason for their popularity is their accuracy, often superior to other methods, as highlighted by several comparative studies that are commented on in the following sections.

Among the multitude of spectral methods, the Dirlik and TB methods are specifically considered hereafter as they were invented by the two authors of this paper, who thus have a vantage point in explaining the motivation behind the development of both methods, while emphasizing some of their theoretical peculiarities. The purpose of this paper is also to comment on their progress and possible shortcomings, as well as to discuss their relationship with a few other spectral methods, providing some insights into future work. In this regard, this paper is not meant to be, in the strict sense, either a critical review or a comparison of many spectral methods; rather, its aim is to focus on the Dirlik and TB methods while also casting a glance at how they interrelate with other existing methods.

### **2. Random Processes in Frequency Domain: Spectral Properties and Fatigue Damage**

*2.1. Spectral Properties*

Fatigue life assessment is critical in the design of components and structures exposed to random loads, such as a car on a rough road or a wind turbine. These loads can be viewed as the realization of a stationary random process *s*(*t*), which can be described in the frequency domain by a power spectral density (PSD) function *G*(*ω*), where *ω* is the circular frequency (in rad/s). The PSD is related to the Fourier transform of *s*(*t*) [18]. It provides a picture of the energy content of *s*(*t*) over frequency. The power spectrum is often used to describe signals for qualification tests [19]. Note that two definitions of PSD exist, namely single-sided versus double-sided, and each can be a function of the circular frequency *ω* (rad/s) or of the frequency *f* = *ω*/2*π* (Hertz). Throughout this paper, *G*(*ω*) is a single-sided PSD as a function of *ω*.

It is customary to classify a random signal *s*(*t*) based on its frequency content, that is, on the shape of its PSD. The signal is said to be "narrow band" if *G*(*ω*) has a peak around a single frequency, generally the resonant frequency of a vibrating system. In all other cases in which the PSD covers a wider range of frequencies, the random signal is not narrow band, and it is named as "wide band" (or "broad band"). Sometimes, more specific definitions ("bimodal", "trimodal", "multimodal", etc.) are adopted to specify that a PSD has two, three, or more well-defined peaks.

Figure 1 compares three time history samples belonging to three types of random processes. The figure emphasizes quite well the differences among the various time histories based on their corresponding PSD, although it considers power spectra with rectangular blocks that are only idealizations of the smoother spectra normally encountered in practical applications.

**Figure 1.** (**a**) Time history samples and (**b**) their PSD (narrow band, wide band, bimodal).

The definitions of narrow band or wide band process introduced so far are merely qualitative. A more quantitative definition is provided by means of several types of spectral parameters, introduced with the purpose of establishing to which degree a random signal is narrow band or wide band. The first such parameters are the spectral moments [18]:

$$m\_n = \int\_0^{\infty} \omega^n G(\omega) \, \mathrm{d}\omega, \quad n = 0, 1, 2, \dots \tag{1}$$

The theory shows [18] that, for a random loading with zero mean value, the spectral moment of order zero coincides with the variance of the random signal, *m*<sub>0</sub> = *Var*(*s*(*t*)); the root mean square (rms) value is *σ*<sub>*x*</sub> = √*m*<sub>0</sub>.

Other spectral moments of higher order, or rather their combinations, are also used to characterize some statistical properties of the random signal that are of interest in fatigue analysis. For example, for a Gaussian process, *λ*<sub>0</sub><sup>+</sup> = √(*m*<sub>2</sub>/*m*<sub>0</sub>) (rad/s) is the expected number of mean value upcrossings per unit time, while *µ* = √(*m*<sub>4</sub>/*m*<sub>2</sub>) (rad/s) is the expected number of peaks per unit time [18]. For what follows, it is useful to introduce the "mean frequency" *ω*<sub>1</sub> = *m*<sub>1</sub>/*m*<sub>0</sub> (rad/s) of the spectral density; it may be interpreted as the distance of the centroid of the spectral mass from the frequency origin [20].

Besides the previous parameters, other quantities, known as bandwidth parameters, are used extensively. They are combinations of spectral moments. Various equivalent definitions exist, but those most used are [18]:

$$\alpha\_1 = \frac{m\_1}{\sqrt{m\_0 m\_2}}, \quad \alpha\_2 = \frac{m\_2}{\sqrt{m\_0 m\_4}}, \quad \alpha\_{0.75} = \frac{m\_{0.75}}{\sqrt{m\_0 m\_{1.5}}} \tag{2}$$

They belong to a more general class, *α*<sub>*k*</sub> = *m*<sub>*k*</sub>/√(*m*<sub>0</sub>*m*<sub>2*k*</sub>). Note the relationship with Vanmarcke's spectral parameter *q* = √(1 − *α*<sub>2</sub><sup>2</sup>) [20]. Both *α*<sub>1</sub> and *α*<sub>2</sub> approach unity for a narrow band process, whilst they approach zero when the spectral width increases. Parameter *α*<sub>1</sub> is related to the definition and properties of the envelope of the random process, and it is often called the "groupness parameter" in ocean engineering. Parameter *α*<sub>2</sub> is often called the "irregularity factor", a useful parameter characterizing the behavior of the process. In time domain, the irregularity factor is defined as the ratio of the number of mean value upcrossings *n*<sub>0</sub><sup>+</sup> to the expected number of peaks *n*<sub>*p*</sub>, that is *IF* = *n*<sub>0</sub><sup>+</sup>/*n*<sub>*p*</sub>; for a Gaussian process it takes the form *γ* = *λ*<sub>0</sub><sup>+</sup>/*µ*, which coincides with *α*<sub>2</sub>. This parameter lies between zero and unity. It approaches unity for a narrow band process in which peaks and troughs are placed symmetrically above and below the mean value of the process. It approaches zero when the process has many peaks between two successive crossings of its mean value.
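Equations (1) and (2) are straightforward to evaluate numerically from a sampled PSD. A minimal sketch (trapezoidal integration; function names are mine):

```python
import numpy as np

def spectral_moment(omega, G, n):
    """m_n = integral of omega**n * G(omega), Eq. (1), by the trapezoidal rule."""
    y = omega ** n * G
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(omega)))

def bandwidth_parameters(omega, G):
    """Return (m0, alpha1, alpha2) for a sampled one-sided PSD G(omega), Eq. (2)."""
    m = {n: spectral_moment(omega, G, n) for n in (0, 1, 2, 4)}
    alpha1 = m[1] / np.sqrt(m[0] * m[2])
    alpha2 = m[2] / np.sqrt(m[0] * m[4])   # irregularity factor of a Gaussian process
    return m[0], alpha1, alpha2

# A narrow rectangular PSD concentrated around 10 rad/s: alpha1, alpha2 -> 1
omega = np.linspace(9.9, 10.1, 2001)
G = np.ones_like(omega)
m0, a1, a2 = bandwidth_parameters(omega, G)
```

For this narrow band example both parameters come out very close to unity, as expected; widening the frequency interval drives them toward zero.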

### *2.2. Fatigue Damage*

The fatigue damage of a random time history *s*(*t*) of time length *T* is computed in two stages. In the first, a counting method (e.g., rainflow) is applied to *s*(*t*), with the purpose of identifying the set of fatigue cycles [2]. Counted cycles have, in general, different amplitudes and mean values, and thus they contribute a different damage. Therefore, in the second stage, the damage of every counted cycle is summed up to determine the damage of *s*(*t*) as a whole. This step calls for a damage accumulation rule, such as the Palmgren–Miner linear law [3]:

$$D = \sum\_{i}^{N(T)} \frac{n\_i}{N\_i} \tag{3}$$

This summation extends over the total number of cycles *N*(*T*) counted in *s*(*t*); *n<sup>i</sup>* numbers the cycles with amplitude *s<sup>i</sup>* , which would cause a failure after *N<sup>i</sup>* repetitions in a constant amplitude test.

Often, the relationship between stress amplitude *s<sup>i</sup>* and cycles to failure *N<sup>i</sup>* is expressed by a straight line in a log–log diagram, through the S/N curve *Ns<sup>b</sup>* = *K*. This equation is fitted to experimental data by regression analysis. In some cases, a two-slope equation is preferred when it fits experiments better.
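As a tiny numerical illustration of the Palmgren–Miner sum in Equation (3) combined with the single-slope curve *Ns*<sup>*b*</sup> = *K* (the material constants and cycle bins below are arbitrary):

```python
def cycles_to_failure(s, K, b):
    """Single-slope S/N curve N * s**b = K, solved for N."""
    return K / s ** b

def miner_damage(counted_cycles, K, b):
    """Palmgren-Miner sum of Eq. (3): D = sum(n_i / N_i) over the counted bins."""
    return sum(n_i / cycles_to_failure(s_i, K, b) for s_i, n_i in counted_cycles)

# Two bins: 5e5 cycles at amplitude 100 MPa and 1e5 cycles at 50 MPa
D = miner_damage([(100.0, 5e5), (50.0, 1e5)], K=1e12, b=3.0)
```

Here N(100) = 10<sup>6</sup> and N(50) = 8 × 10<sup>6</sup>, so the bins contribute 0.5 and 0.0125 to the total damage.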

In the Palmgren–Miner rule, fatigue failure is predicted to occur when the damage *D* reaches a critical value equal to unity, even though lower values are recommended in some design codes (e.g., [21]) to account for experimental evidence.

The summation in Equation (3) assumes that the cycles counted in *s*(*t*) are grouped into bins of *n<sup>i</sup>* cycles, all having the same amplitude *s<sup>i</sup>* . The actual values of *n<sup>i</sup>* and *s<sup>i</sup>* obviously depend on the actual course of *s*(*t*). If the instantaneous values of *s*(*t*) vary randomly—that is, *s*(*t*) is modeled as a random process—it follows that *n<sup>i</sup>* and *s<sup>i</sup>* are both random variables. In this case, the summation in Equation (3) needs to be reformulated in a probabilistic way:

$$D = n\_{rfc} \int\_{0}^{+\infty} \frac{1}{N(s)} p(s) ds \tag{4}$$

Here, *p*(*s*) is the probability distribution of the amplitudes of rainflow cycles, *n*<sub>rfc</sub> is the number of rainflow cycles counted in *T*, and *N*(*s*) is the number of cycles to failure at constant amplitude *s*. Notice that the previous expression is very general, as *N*(*s*) is not restricted to a straight-line equation.

Although, at first glance, the previous formula may appear complicated, it is nothing more than the Palmgren–Miner rule in Equation (3) extended to continuously distributed amplitudes.

Equation (4) can be solved in closed form if one knows the analytical expression of *p*(*s*). This circumstance occurs, for example, when *s*(*t*) is a narrow band process. In this case, peaks and valleys are placed symmetrically around the mean value of *s*(*t*), and each cycle is formed by pairing each peak to the adjacent valley (this corresponds to the peak counting method); the range is the peak–valley distance. Consequently, each cycle has zero mean and amplitude coincident with the peak value, which in a narrow band process follows a Rayleigh distribution [18]. In a narrow band process, the number of cycles counted is also known, it being equal to the number of crossings of the mean value, *n*<sub>rfc</sub> = *λ*<sub>0</sub><sup>+</sup>*T*. When inserted into Equation (4), the previous mathematical conditions yield the expression of the "narrow band" damage in time interval *T* [18]:

$$D\_{NB} = \frac{\lambda\_0^+ T}{K} \left(\sqrt{2m\_0}\right)^b \Gamma\left(1 + \frac{b}{2}\right) \tag{5}$$

in which Γ(·) is the gamma function. This expression is usually credited to Miles [11] or Bendat [12]. It is restricted to a straight S/N line. In case the S/N relationship is not straight, it is possible to solve Equation (4) numerically, by taking *p*(*s*) as a Rayleigh distribution.
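Equation (5) maps directly onto a one-line routine. The sketch below assumes the spectral moments are computed over frequency in Hz, so that *λ*<sub>0</sub><sup>+</sup> = √(*m*<sub>2</sub>/*m*<sub>0</sub>) comes out in cycles per second (an assumption about units, not stated in Equation (5) itself):

```python
from math import gamma, sqrt

def damage_narrow_band(m0, m2, T, K, b):
    """Miles/Bendat narrow-band damage of Eq. (5) in a time interval T.
    Assumes moments computed over frequency in Hz, so lam0 is in cycles/s."""
    lam0 = sqrt(m2 / m0)  # expected rate of mean-value upcrossings
    return (lam0 * T / K) * (2.0 * m0) ** (b / 2) * gamma(1.0 + b / 2)
```

Note that (√(2*m*<sub>0</sub>))<sup>*b*</sup> is written as (2*m*<sub>0</sub>)<sup>*b*/2</sup>, which is the same quantity.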

When the random process is no longer narrow band, the previous formula becomes too conservative. Indeed, in a wide band process, fatigue cycles cannot simply be obtained by joining a peak with a symmetric valley, as is done in the peak counting [2]; actually, this counting procedure would yield cycles with amplitudes larger than those identified by the rainflow counting. On the other hand, the algorithm of the rainflow counting is so intricate that it seems not possible to determine, in closed form, the expression of the probability distribution of amplitudes and mean values of rainflow cycles, and to relate it to the bandwidth parameters of the wide band process.

For this reason, historically, the approach followed initially—easy but oversimplified—was to introduce a correction factor less than unity, by which to reduce the narrow band damage in Equation (5). The correction factor must become smaller and smaller when the spectral bandwidth of the random process becomes wider and wider. Since this inverse relationship is not known, and it cannot be determined in closed form because of the same reasons mentioned above for the rainflow counting, the only feasible way was to calibrate the damage correction factor based on simulation results. The difficult task was to select which bandwidth parameters must enter the correction factor. Assumptions were made based on the trends observed in simulations.

Among the spectral methods that followed this approach, the first, and probably the most famous, one is that of Wirsching and Light, proposed in 1980 [22], in which the correction factor is made to depend solely on the spectral parameter *α*<sub>2</sub>. In subsequent years, other methods following the same idea were proposed. In those methods, the expression of the correction factors was refined by introducing the dependency on other bandwidth parameters. For example, an approach named the "empirical *α*<sub>0.75</sub>-method" proposed a correction factor in the form *α*<sub>0.75</sub><sup>2</sup> [17].

It should come as no surprise that the use of a correction factor for estimating the damage of a wide band process, no matter how simple, has the drawback of not providing the amplitude probability distribution *p*(*s*) of the cycles causing that damage. Knowing *p*(*s*) indeed has several advantages. First, it makes it possible to compute the fatigue damage not only for a straight S/N line, but also for any smooth curve with continuous change of slope, with or without endurance limit. Second, the amplitude distribution allows one to determine the cumulative (or loading) spectra of rainflow cycles, and to extrapolate it towards cycles with large amplitudes (rare events).

Both Dirlik and TB methods illustrated below belong to the category of approaches that provide an estimate of the amplitude probability distribution of rainflow cycles. For a more comprehensive survey on spectral methods, the reader may refer to [13,14].

### **3. Dirlik Method (1985)**

This method can, for sure, be numbered as a milestone among spectral methods. This is attested by its number of citations, to date more than 600 in Google Scholar [23].

As already said, a power spectral density function specifies how the characteristics of a random signal are distributed over a given frequency range. This information is needed in many engineering sectors; techniques to calculate the PSD from a time record are readily available in the form of fast Fourier transform (FFT).

Early in the development of the Dirlik method, because of the complexity of the rainflow counting algorithm, it was decided that it would be too difficult a task to derive in closed form the distribution of rainflow cycles from the PSD function *G*(*ω*). A Monte Carlo approach was thus employed to generate a sample stress history *s*(*t*) from *G*(*ω*) using FFT methods [24,25]. Then, the rainflow algorithm was used on *s*(*t*) to extract the cycles and the probability density function of rainflow counted ranges, which in turn allowed calculating the fatigue damage for any given material constants in an S/N curve.

The idea of the Monte Carlo approach is based on the "weak law of large numbers", which states that the sample average converges in probability towards the expected value as the sample size increases [26]. It is then important that, in numerical simulations, the sample stress histories are sufficiently long to contain a large number of turning points (peaks and troughs); in fact, the longer the time history, the more chance of catching rare events.

The exact procedure of the approach linking PSDs to rainflow range probability densities is explained fully in [15,27], and it is briefly summarized here.

A total of 70 PSDs of various shapes were considered, some rectangular and some smoothly varying, as shown in Figure 2. To make the comparison easier, power spectra were normalized so that all had the same rms value *σ*<sub>*x*</sub> and the same expected rate of peaks *µ* (that is, the same number of rainflow cycles counted in a time interval). With these two parameters kept fixed, the irregularity factor *γ* was made to vary (from 0.16 to 0.99) by adjusting the parameters defining the power spectrum shape, which in turn corresponds to varying the spectral moments of order zero, two, and four (*m*<sub>0</sub>, *m*<sub>2</sub>, *m*<sub>4</sub>). The rectangular bimodal spectrum (see Figure 2a) was used extensively because of the simplicity with which it can assume a wide range of values of the irregularity factor *γ*, while keeping the same rms value *σ*<sub>*x*</sub> and the same expected rate of peaks *µ*, by adjusting the amplitudes and the frequency boundaries of the power spectrum.


**Figure 2.** Types of PSD used in numerical simulations used to develop the Dirlik method: (**a**) rectangular bimodal spectrum; (**b**) smooth spectrum. Figures are from [15].

In preliminary simulation trials, it was, however, observed that the mean frequency *ω*<sub>1</sub> of the spectrum (that is, its first order moment *m*<sub>1</sub>) also has a role in changing the fatigue damage of a simulated time history. To investigate this relationship more closely, 42 out of 70 power spectra were shaped so as to have the same *σ*<sub>*x*</sub>, *µ*, *γ*, but different *m*<sub>1</sub>. Because the expected rate of peaks *µ* is identical for all spectra, a "normalized mean frequency" *x*<sub>*m*</sub> = *ω*<sub>1</sub>/*µ* was introduced:

$$\mathbf{x}\_m = \frac{m\_1}{m\_0} \sqrt{\frac{m\_2}{m\_4}}\tag{6}$$

For a given power spectral density, sample stress time histories *s*(*t*) were generated using the inverse FFT (IFFT). A single generated stress time history had 1024 points. The simulation procedure was repeated 20 times in order to obtain a sufficiently long record of stress time history. This long record (called one block) consisted of 1024 × 20 = 20,480 time points [15]. The number of 1024 FFT points was not a choice, but it was dictated by the maximum size of the memory and disk capacity of computers available at that time. The use of 20 replicated stress time histories had the very purpose of increasing the entire length of the simulated block. Processed in time domain, the block was converted into a sequence of peaks and troughs, from which fatigue cycles were extracted by means of the rainflow counting and the simple-range counting (which pairs a peak with the next valley [2]). From the set of counted cycles, the probability distributions of simple ranges and rainflow ranges were finally calculated. An example is displayed in Figure 3.
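The generation step can be sketched as follows: each Fourier line is given an amplitude fixed by the PSD and an independent random phase, and the record is obtained by an inverse FFT. This is a generic random-phase sketch of the idea, not the exact 1024-point code of [15]; the function name and scaling convention are mine:

```python
import numpy as np

def sample_from_psd(G, df, rng):
    """Generate one sample time history from a one-sided PSD.
    G[k] is the PSD at frequency f_k = k*df; the returned record has
    n = 2*(len(G) - 1) points and variance close to m0 = sum(G)*df."""
    M = len(G)
    n = 2 * (M - 1)
    amplitudes = np.sqrt(2.0 * G * df)             # a_k with a_k**2 / 2 = G_k * df
    phases = rng.uniform(0.0, 2.0 * np.pi, M)      # independent random phases
    X = 0.5 * amplitudes * np.exp(1j * phases)     # half-amplitude complex lines
    X[0] = 0.0                                     # zero mean
    X[-1] = 0.0                                    # drop the Nyquist component
    return np.fft.irfft(X, n=n) * n                # undo NumPy's 1/n normalization

rng = np.random.default_rng(0)
s = sample_from_psd(np.ones(65), df=0.5, rng=rng)  # flat PSD: m0 = 63 * 0.5 = 31.5
```

Because the components sit exactly on Fourier frequencies, the sample variance of one record reproduces the target *m*<sub>0</sub> almost exactly; averaging many such records (the 20 replications mentioned above) improves the statistics of the counted cycles.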


**Figure 3.** Probability distribution of: (**a**) simple ranges; (**b**) rainflow ranges. The numbers correspond to the irregularity factor of each smooth PSD. On the *x*-axis, values are normalized as *x*/2*σ*<sub>*x*</sub> [15].

Thanks to the simulation results, it was argued that the probability density function (PDF) of rainflow ranges should be a mixture of three distributions: an exponential function, a Rayleigh function with variable parameter, and a standard Rayleigh function. The full expression in terms of a normalized variable *Z* = *r*/(2√*m*<sub>0</sub>) is [15]:

$$p\_{DK}(Z) = \frac{1}{2\sqrt{m\_0}} \left[ \frac{D\_1}{Q} e^{-\frac{Z}{Q}} + \frac{D\_2 Z}{R^2} e^{-\frac{Z^2}{2R^2}} + D\_3 Z e^{-\frac{Z^2}{2}} \right] \tag{7}$$

where *r* is the rainflow range. The coefficients *D*<sub>1</sub>, *D*<sub>2</sub>, *D*<sub>3</sub>, *Q*, and *R* are defined as

$$\begin{aligned} D\_1 &= \frac{2\left(x\_m - \alpha\_2^2\right)}{1 + \alpha\_2^2} & D\_2 &= \frac{1 - \alpha\_2 - D\_1 + D\_1^2}{1 - R} & D\_3 &= 1 - D\_1 - D\_2 \\ R &= \frac{\alpha\_2 - x\_m - D\_1^2}{1 - \alpha\_2 - D\_1 + D\_1^2} & Q &= \frac{1.25\left(\alpha\_2 - D\_3 - D\_2 R\right)}{D\_1} \end{aligned} \tag{8}$$

where *x*<sub>*m*</sub> is the normalized mean frequency in Equation (6).

Note that the symbols in Equations (7) and (8) conform to the notation used nowadays, which differs from that originally used in [15]. The quantities *D*<sub>1</sub>, *D*<sub>2</sub>, *D*<sub>3</sub>, and *R* are "best fit" parameters obtained by minimizing the difference between the observed probability distribution and the analytical one.

The probability distribution in Equation (7) represents the link between the rainflow counted ranges and the power spectral density. The importance of this equation lies in the fact that, once it is used to determine the PDF of rainflow ranges, the life estimation can be made with any form of S/N data by using Equation (4). The use of this method is not restricted to a straight line representation of S/N data on a log–log scale; the S/N relationship could instead be a smooth curve with continuous change of slope, with or without an endurance limit. In the case of a single slope S/N line, *Ns*<sup>*b*</sup> = *K*, the substitution of *p*<sub>*DK*</sub>(*Z*) into Equation (4) returns a closed-form expression for the damage in time interval *T* [17,28]:

$$D\_{\rm DK} = \frac{\nu\_p T}{K} (\sqrt{m\_0})^b \left[ D\_1 Q^b \Gamma(1+b) + \left(\sqrt{2}\right)^b \Gamma\left(1+\frac{b}{2}\right) \left(D\_2 |R|^b + D\_3\right) \right] \tag{9}$$
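Equations (6)–(9) translate directly into a short routine. The sketch below assumes spectral moments computed over frequency in Hz, so that the peak rate *ν*<sub>*p*</sub> = √(*m*<sub>4</sub>/*m*<sub>2</sub>) is in cycles per second; the function names are mine:

```python
from math import gamma, sqrt

def dirlik_coefficients(m0, m1, m2, m4):
    """Coefficients of Eqs. (6) and (8) from the spectral moments."""
    x_m = (m1 / m0) * sqrt(m2 / m4)               # normalized mean frequency, Eq. (6)
    a2 = m2 / sqrt(m0 * m4)                       # irregularity factor alpha_2
    D1 = 2.0 * (x_m - a2 ** 2) / (1.0 + a2 ** 2)
    R = (a2 - x_m - D1 ** 2) / (1.0 - a2 - D1 + D1 ** 2)
    D2 = (1.0 - a2 - D1 + D1 ** 2) / (1.0 - R)
    D3 = 1.0 - D1 - D2
    Q = 1.25 * (a2 - D3 - D2 * R) / D1
    return D1, D2, D3, Q, R

def dirlik_damage(m0, m1, m2, m4, T, K, b):
    """Closed-form Dirlik damage in time T for a single-slope S/N line, Eq. (9)."""
    D1, D2, D3, Q, R = dirlik_coefficients(m0, m1, m2, m4)
    nu_p = sqrt(m4 / m2)                          # expected rate of peaks
    return (nu_p * T / K) * m0 ** (b / 2) * (
        D1 * Q ** b * gamma(1.0 + b)
        + 2.0 ** (b / 2) * gamma(1.0 + b / 2) * (D2 * abs(R) ** b + D3)
    )
```

As a consistency check, the bracketed mixture in Equation (7) integrates to *D*<sub>1</sub> + *D*<sub>2</sub> + *D*<sub>3</sub> = 1 over *Z*, and integrating *Z*<sup>*b*</sup> against it term by term reproduces the bracket in Equation (9).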

That described so far is the original version of the Dirlik method commonly used. A temperature-modified version was also proposed in [29–31] to estimate the high-cycle fatigue damage for uniaxial loadings caused by random vibrations directly from a power spectral analysis. The model combines the frequency-based method and the temperature effect, and it is verified by comparison with experimental test results for a high-pressure die-cast aluminum alloy. Actually, the approach in [29] incorporates the temperature effect into the Dirlik method by considering a temperature-dependent inverse slope and fatigue strength *b*(*T*) and *K*(*T*), and by also taking the temperature time history into consideration by using a weighted sum of the damage at each temperature. In this way, the temperature-dependent spectral approach, though developed in [29] only for the Dirlik method, is in fact applicable to any other spectral method, provided that temperature-dependent S/N parameters are used.


#### **4. Tovo–Benasciutti (TB) Method (2002, 2005)**

*Metals* **2021**, *11*, x FOR PEER REVIEW 8 of 21


The theory of this method was first laid out in [32]. It postulates that the amplitude–mean joint probability distribution of rainflow cycles lies between two limit distributions, and can be estimated as their linear combination:

$$p\_{\rm TB}(s,m) = w \ p\_{\rm LCC}(s,m) + (1-w) \ p\_{\rm RC}(s,m) \tag{10}$$

Here, *w* is a weight factor that needs to be determined. Unlike the Dirlik method, the TB method provides the joint distribution of amplitudes and mean values of rainflow cycles. The two functions *pLCC*(*s*, *m*) and *pRC*(*s*, *m*) represent the amplitude–mean distributions of the level-crossing counting (LCC) and of the simple-range counting (RC)—actually, the latter is only approximated. Their analytical expressions, not reported here, can be found in [16,32]. The two distributions are shown in Figure 4. Note that *pLCC*(*s*, *m*) involves two Dirac delta functions.

**Figure 4.** Amplitude–mean probability distributions in TB method: (**a**) level crossing counting; (**b**) approximate range counting. Representation of *pLCC*(*s*, *m*) is only qualitative, as it includes a Dirac delta function for *s* = 0 and *m* = 0. Reproduced from [32] with permission from Elsevier.

Equation (10) shows that the LCC and RC distributions represent two bounds between which the rainflow cycle distribution is enclosed. In other words, Equation (10) postulates that the rainflow cycle distribution of any random process lies between two limit distributions, and its actual shape depends on the value of *w*.

As in Equation (10), the same weighted sum also holds for the marginal probability distributions *pTB*(*s*), *pLCC*(*s*), *pRC*(*s*) and, most importantly, for the damage values:

$$D\_{TB} = w \, D\_{LCC} + (1 - w) \, D\_{RC} \cong \left[ w + (1 - w)\alpha\_2^{b-1} \right] D\_{NB} \tag{11}$$

The latter approximate equality takes advantage of the fact that *DLCC* = *DNB* and that the simple-range counting damage is approximated as *DRC* ≅ *α*<sub>2</sub><sup>*b*−1</sup> *DNB* [16,17]. Though, from a practical point of view, the quantity [*w* + (1 − *w*)*α*<sub>2</sub><sup>*b*−1</sup>] can be interpreted as a correction factor of the narrow band damage, the above arguments clearly demonstrate that the origin of this factor is in fact quite different, and that the TB method has a sound theoretical basis.
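In practical terms, Equation (11) reduces the TB estimate to a one-line correction of the narrow band damage. A minimal sketch, in which the narrow band damage is written in its usual Rayleigh closed form with up-crossing rate ν0 = √(m2/m0) (an assumption, since that formula is not restated in this section):

```python
from math import sqrt, gamma

def nb_damage(m0, m2, b, K, T):
    """Narrow band (Rayleigh) damage for a single-slope S/N line N*s^b = K."""
    nu0 = sqrt(m2 / m0)                      # rate of mean up-crossings
    return nu0 * T / K * sqrt(2.0 * m0)**b * gamma(1.0 + b / 2.0)

def tb_damage(m0, m2, a2, w, b, K, T):
    """TB damage of Eq. (11): weighted correction of the narrow band value."""
    lam = w + (1.0 - w) * a2**(b - 1.0)      # factor in [a2^(b-1), 1]
    return lam * nb_damage(m0, m2, b, K, T)
```

Since 0 ≤ *w* ≤ 1 and *α*2 ≤ 1 imply a correction factor no larger than one, the TB estimate never exceeds the (conservative) narrow band damage.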

Turning back now to the weighted sum that links probability distributions and damage values through *w*, the next step to complete the definition of the TB method was to find a proper expression for *w*. At least theoretically, parameter *w* is a function of the whole set of spectral and bandwidth parameters of the PSD of the random process.

Nevertheless, it was—as it seems even today—rather hard, if not impossible, to demonstrate theoretically which spectral parameters contribute to the definition of *w*. Some trends can be argued. In the narrow band case, for example, the rainflow distribution must converge to the Rayleigh one (i.e., LCC function), which in turn requires that *w* ∼= 1 for a narrow band process in which *α*<sup>1</sup> and *α*<sup>2</sup> both approach unity. Other trends emerged from numerical simulations. The approach was to generate stress time histories, as in the procedure described for the Dirlik method.
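The stress time histories used in such calibration campaigns are commonly synthesized by random-phase harmonic superposition of the target PSD. A minimal sketch, in which the band discretization, time step, and seed are illustrative assumptions:

```python
import math
import random

def gaussian_history(bands, dt, n, seed=0):
    """Synthesize a zero-mean, approximately Gaussian record from a one-sided PSD.

    bands: list of (f_k, G_k, df_k) tuples (centre frequency, PSD level, band
    width). Each band contributes a cosine of amplitude sqrt(2*G_k*df_k) with
    an independent uniform random phase, so the record variance tends to
    m0 = sum(G_k * df_k)."""
    rng = random.Random(seed)
    comps = [(math.sqrt(2.0 * g * df), 2.0 * math.pi * f,
              rng.uniform(0.0, 2.0 * math.pi)) for f, g, df in bands]
    return [sum(a * math.cos(w * i * dt + phi) for a, w, phi in comps)
            for i in range(n)]

# unit-height PSD between 10 and 20 Hz in 200 bands, so m0 = 10
df = 0.05
bands = [(10.0 + (k + 0.5) * df, 1.0, df) for k in range(200)]
x = gaussian_history(bands, dt=0.001, n=20000)
var = sum(v * v for v in x) / len(x)   # sample variance, should approach m0
```

Such records can then be rainflow counted to obtain the empirical cycle distributions against which the frequency domain formulas are fitted.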

Using simulation results with power spectral densities of various shapes, a first expression was proposed in [32]:

$$w\_1 = \min\left(\frac{\alpha\_1 - \alpha\_2}{1 - \alpha\_1}, 1\right) \tag{12}$$

This formula, however, is not continuous. Moreover, its accuracy tends to diminish if *α*<sup>1</sup> and *α*<sup>2</sup> differ significantly [13], as in the special case of a lightly damped linear oscillator driven by white noise, whose response has *α*<sup>1</sup> ≅ 1 but *α*<sup>2</sup> = 0 (irregular process) [18].

From a broader perspective, conflicting views seem to emerge in the literature regarding the accuracy of parameter *w*1. While the agreement with simulation results was judged as "excellent" in [32], a less flattering opinion—moreover consistent with the results of [33]—is given in [34], where the use of this formulation of *w*<sup>1</sup> is indeed discouraged.

Apart from this, long before the above studies, it was soon discovered that Equation (12) could be improved upon. Although a first attempt was made in [35] to find a continuous but equally accurate expression to replace Equation (12), it was only in [36], and later in [16], that the expression of *w* used today was finally developed:

$$w\_2 = \frac{\left(\alpha\_1 - \alpha\_2\right) \left[1.112\left(1 + \alpha\_1\alpha\_2 - \left(\alpha\_1 + \alpha\_2\right)\right)e^{2.11\alpha\_2} + \left(\alpha\_1 - \alpha\_2\right)\right]}{\left(1 - \alpha\_2\right)^2} \tag{13}$$

This expression is the result of a best fit on simulation results from power spectral densities of various shapes (see Figure 5). Similar to the Dirlik method, this formula assumes that the rainflow probability distribution is linked to four spectral moments, *m*0, *m*1, *m*2, and *m*4. The soundness of this correlation, first foreseen by Dirlik, was confirmed after comparing frequency domain estimations with time domain simulation results. It was shown in [16] that the use of Equation (13) along with Equation (11) guarantees excellent accuracy.
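Equation (13) is straightforward to evaluate once *α*1 and *α*2 are known; a minimal sketch with illustrative input values:

```python
from math import exp

def tb_weight_w2(a1, a2):
    """Best-fit weight of Eq. (13), a function of bandwidth parameters
    alpha_1 = m1/sqrt(m0*m2) and alpha_2 = m2/sqrt(m0*m4)."""
    num = (a1 - a2) * (1.112 * (1.0 + a1 * a2 - (a1 + a2)) * exp(2.11 * a2)
                       + (a1 - a2))
    return num / (1.0 - a2)**2

# a wide band example (alpha_1 = 0.9, alpha_2 = 0.3)
w = tb_weight_w2(0.9, 0.3)
```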


**Figure 5.** Power spectral densities used in simulations to calibrate weight *w*<sup>2</sup> in TB method.

After that, this accuracy was further confirmed by other studies (e.g., [33,37]), and the TB method rapidly became popular and widely used, similar to the Dirlik method—though the latter seems to remain the method preferred by most scholars.


Soon after [16] was published, a further attempt was made in [17] to correlate *w* to fractional order spectral moments *m*0.75 and *m*1.5 (in bandwidth parameter *α*0.75) by the formula

$$w\_3 = \frac{\alpha\_{0.75}^2 - \alpha\_2^2}{1 - \alpha\_2^2} \tag{14}$$

This choice was suggested after having observed that fractional spectral moments were considered also by other spectral methods (Zhao–Baker [38], "single-moment" [39,40], "empirical *α*0.75-method" [17]). Against all odds, the comparison made in [17] even revealed that the accuracy of Equation (14) is slightly better than Equation (13); however, despite this outcome, quite surprisingly, Equation (14) has been ignored by scholars in favor of Equation (13).
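Equation (14) requires only two additional fractional order moments, which can be obtained by direct numerical integration of the PSD (*α*0.75 = *m*0.75/√(*m*0 *m*1.5)). In the sketch below, the trapezoidal quadrature and the rectangular test spectrum are illustrative assumptions:

```python
from math import sqrt

def spectral_moment(freqs, psd, nu):
    """Moment m_nu = integral of f^nu * G(f) df, by the trapezoidal rule."""
    total = 0.0
    for i in range(len(freqs) - 1):
        ya = freqs[i]**nu * psd[i]
        yb = freqs[i + 1]**nu * psd[i + 1]
        total += 0.5 * (ya + yb) * (freqs[i + 1] - freqs[i])
    return total

def tb_weight_w3(freqs, psd):
    """Weight of Eq. (14), with alpha_0.75 = m0.75 / sqrt(m0 * m1.5)."""
    m = {nu: spectral_moment(freqs, psd, nu) for nu in (0.0, 0.75, 1.5, 2.0, 4.0)}
    a075 = m[0.75] / sqrt(m[0.0] * m[1.5])
    a2 = m[2.0] / sqrt(m[0.0] * m[4.0])
    return (a075**2 - a2**2) / (1.0 - a2**2)

# unit-height PSD between 10 and 20 Hz, sampled on a fine grid
freqs = [10.0 + 10.0 * k / 1000 for k in range(1001)]
psd = [1.0] * len(freqs)
w = tb_weight_w3(freqs, psd)
```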

#### **5. Area of Application of Spectral Methods**

Since their early development, but especially from the appearance of the Dirlik approach in 1985, spectral methods have found ever more application in many engineering fields, even those much different from each other. Over the years, many examples and case studies have been developed in engineering sectors such as automotive, wind, offshore, and marine, not to mention the field of aerospace and advanced composite materials.

Obviously, as already pointed out for spectral methods, here it is also not possible (nor very useful) to enumerate all the examples or case studies cited in the literature, also considering that a great many of them were probably not published as scientific articles. Nor is the aim of this section to present a summary list of articles. Rather, the lines that follow have the sole purpose of casting a glance at a few representative applications of spectral methods, with special interest in examples that analyze real structures, often with the aid of finite element simulation.

It has indeed to be pointed out that one of the main advantages of spectral methods (which may in fact explain their widespread use) is the possibility of using them in conjunction with computer-aided analysis. Typically, once a finite element model is built and subjected to a known random acceleration/load expressed by a power spectral density, the stress power spectra in each node are calculated and then input directly into a spectral method, which then provides a damage value in each node of the model. If the nodal stress is multiaxial (i.e., a PSD matrix is defined), a multiaxial criterion must be used [41].
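The nodal post-processing loop just described can be sketched as follows. The node data are hypothetical placeholders, and the narrow band formula (written here from its usual closed form) merely stands in for whichever spectral method (Dirlik, TB, or other) would be used in practice:

```python
from math import sqrt, gamma

def nb_damage_from_psd(freqs, psd, b, K, T):
    """Damage at one node from its equivalent uniaxial stress PSD.
    Uses the narrow band formula; any spectral method could be substituted."""
    m0 = m2 = 0.0
    for i in range(len(freqs) - 1):
        df = freqs[i + 1] - freqs[i]
        m0 += 0.5 * (psd[i] + psd[i + 1]) * df
        m2 += 0.5 * (freqs[i]**2 * psd[i] + freqs[i + 1]**2 * psd[i + 1]) * df
    nu0 = sqrt(m2 / m0)                       # mean up-crossing rate
    return nu0 * T / K * sqrt(2.0 * m0)**b * gamma(1.0 + b / 2.0)

# hypothetical nodal stress PSDs (node id -> PSD sampled on a common grid),
# as would be exported from an FE harmonic/random-response analysis
freqs = [10.0 + 0.1 * k for k in range(101)]
node_psds = {1: [4.0] * 101, 2: [1.0] * 101}
damage_map = {node: nb_damage_from_psd(freqs, psd, b=3.0, K=1e9, T=3600.0)
              for node, psd in node_psds.items()}
```

The node with the higher stress PSD level accumulates more damage, and the resulting damage map can be plotted over the mesh to locate critical regions.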

Among those existing, multiaxial spectral methods that transform a multiaxial stress into an equivalent uniaxial stress permit the "traditional" spectral methods (such as Dirlik, the TB method, or others) to be used [42]. This synergistic combination of multiaxial criteria and frequency domain methods has allowed the field of application of spectral methods to be expanded considerably.

In the automotive field, application of spectral methods generally relies on results of multibody and/or finite element simulation at various levels of modeling detail (chassis, engine, tires, etc.) [43–47]. Sometimes the focus is on a specific component—structural or not—that has to endure random vibrations due to road roughness. An interesting example is the study in [48], which applies spectral methods for estimating the road-induced fatigue damage in a carbon steel coil spring based on acceleration signals acquired in a car driving on various road types. A similar goal is pursued in [49], where field measurements and finite element modeling are used for analyzing a structural component of an automotive headlight. The capabilities of spectral methods as a design tool are further demonstrated in [50] by an experimental/numerical analysis of a wheel of an industrial vehicle. When computation time becomes essential, it is possible to integrate spectral methods in an algorithm for monitoring fatigue damage in real time using onboard equipment, as demonstrated by the example in [51] with a heavy-duty truck frame.

In the field of maritime engineering, where there are different types of structures subjected to random waves (e.g., ships, mooring systems, offshore wind turbines, and fixed offshore structures), the use of spectral methods is particularly effective. An advantage is that the power spectra for certain sea states (e.g., JONSWAP, Pierson–Moskowitz) are known in closed form and can be input directly in a finite element analysis aimed at determining the structural dynamic behavior and stress response. Frequency-based fatigue analysis methods are also mentioned in design guidelines [52].

An example of frequency domain fatigue analysis of a semi-submersible platform is described in [53]. The study applies spectral methods with multiaxial fatigue criteria to the output of a three-dimensional finite element model. The case study also serves as a basis to compare the accuracy of various spectral methods.

When data storage and computational cost in recording and processing stress time histories are important, spectral methods prove to be particularly advantageous. For example, [54] proposed a structural monitoring system for a ship hull in which a damage detection algorithm using spectral methods is embedded into a wireless sensor.

For offshore wind turbines, the combined effect of wave and wind loadings acting simultaneously is investigated in [55]; the contribution of different sea and/or wind states can also be taken into account. Spectral methods are also used for estimating the structural integrity of welded joints, which constitute a typical structural detail in offshore structures [55,56].

Examples specific to the field of wind engineering show how spectral methods can be used to predict the fatigue damage of wind-excited structures [57,58]; correction factors are introduced to account for non-Gaussian effects in the output stress caused by nonlinear aerodynamic damping. A spectral based fatigue analysis, integrated with finite element simulation, is the basis for a structural integrity assessment of two welded joints in a wind turbine tubular tower subjected to different wind speed and direction conditions [59]. When the best balance is required between accuracy and overall computation cost, the study suggests the use of the frequency domain approach over the time-domain one—and among spectral methods, the TB and Dirlik methods are recommended [59].

The literature also offers examples of use of spectral methods in engineering sectors such as aerospace and electronics, or with composite materials. In the former cases, for example, spectral methods combined with finite element analysis are employed for the virtual qualification of electronic assemblies subjected to aerospace vibratory environment [60]. Other examples with electronic devices [61,62] are discussed in the following section. Concerning composite materials, spectral based approaches attempt to reformulate strength criteria for composite materials (e.g., Tsai–Hill [63], residual strength, or stiffness model [64,65]) to extend their validity to the frequency domain for the case of random vibrations [66]. Even for metal/composite assemblies undergoing random vibrations, an example demonstrates how to combine finite element analysis with spectral methods successfully [67]. Another application is described in [68] for carbon fiber-reinforced silicon carbide ceramic matrix composites, which are considered excellent materials for thermal protection structures of launch vehicles and spacecraft structures and, for this reason, are exposed to thermo-acoustic vibration loading in service.

Besides the examples above, other, more specific, applications of spectral methods deal with welded [56] or riveted joints [69], with nonferrous materials [70], or even with nonmetallic materials such as concrete [71].

### **6. Comparison of Spectral Methods**

Although the discussion so far has focused on the Dirlik and TB approaches, it should not be forgotten that a great many other spectral methods for wide band random processes exist in the literature. They are nevertheless too numerous to all be listed here. The following list, certainly not exhaustive, mentions in chronological order those spectral methods that seem to be among the most cited in the literature:


All these frequency domain methods are usually derived for stationary Gaussian processes, but extensions to stationary non-Gaussian processes or to specific subclasses of nonstationary processes (e.g., the switching case) have also been proposed. Except for the narrow band case, all these methods are approximate; they differ in what they estimate. While some methods estimate only the expected fatigue damage, others attempt to also estimate the probability distribution of rainflow amplitudes [17]. Some methods in the previous list (e.g., "single-moment", the Fu–Cebon method and its modified version, Gao and Moan) were developed for bimodal or trimodal processes, and they are now considered a safe technique to assess the vibration fatigue life of real engineering structures.

With the increase in the number of spectral methods, the need has come to establish which of them is "the best" method. A lot of articles published over the last decades have proposed systematic comparisons among spectral methods, and therefore this work does not repeat a similar analysis. Rather, it summarizes the main findings from other articles. Readers are advised to draw their own conclusions as to which method is "the best" for their own application after investigating each method's assumptions, limitations, and application areas. When interpreting the outcomes of comparison studies, an important aspect to consider is the length of the simulated time histories, which should be the longest possible so as to increase the chance of observing large amplitude cycles that appear only occasionally, and thus to make the comparison more reliable.

### *6.1. Comparison between Dirlik and TB Method*

Before comparing these two methods with others, it is useful to give them a closer inspection. To this end, this section compares the Dirlik and TB methods for some selected power spectral densities. The comparison considers not only the amplitude probability distribution, but also the "damage distribution" (to be defined below) and the estimated damage. The purpose is to show that, even though the methods yield, in general, very close predictions, some differences may sometimes occur in their amplitude distributions.

The following discussion deals with the power spectral density in Figure 2a, which is defined by the area ratio *r<sup>A</sup>* = *A*2/*A*<sup>1</sup> and by frequencies *p*1, *f*1, *p*2, *f*2. The frequency ratios *p*1/ *f*<sup>1</sup> and *p*2/ *f*<sup>2</sup> can be chosen so as to have different [15] or equal [39,40] values. The condition *c* = *p*1/ *f*<sup>1</sup> = *p*2/ *f*<sup>2</sup> ensures the same spectral bandwidth for both blocks. In this case, the power spectrum is defined only by the ratio of areas *r<sup>A</sup>* = *A*2/*A*<sup>1</sup> and of frequencies *r<sup>f</sup>* = *f*2/ *f*<sup>1</sup> = *p*2/*p*1. This type of power spectrum is particularly advantageous, as it allows not only bimodal cases, but also narrow band and band-limited power spectra to be obtained by a suitable choice of geometric parameters. When *r<sup>A</sup>* = 0 (whatever *r<sup>f</sup>* ) the spectrum becomes unimodal—whether narrow band or wide band depends on *c*. A small value (e.g., *c* = 1.1/0.9) assures that each rectangle is narrow band. As *c* increases, the spectrum widens. Additionally, for this power spectrum, it is easy to obtain closed-form expressions for *α*1, *α*2, *λ* in terms of *c*, *r<sup>A</sup>*, *r<sup>f</sup>* (see [13]). With such

formulae, parameters *r<sup>A</sup>* and *r<sup>f</sup>* can be varied so as to obtain prescribed values of *α*<sup>1</sup> and *α*<sup>2</sup>.
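This construction can also be checked numerically: for two rectangular blocks the spectral moments have closed-form expressions, from which *α*1 and *α*2 follow. In the sketch below each block is assumed to span [*f*, *c f*] (one possible reading of the ratio *c* = *p*/*f*); with *c* = 1.1/0.9, the first bimodal case examined below (*r<sup>f</sup>* = 7.408, *r<sup>A</sup>* = 0.006) indeed returns *α*1 ≈ 0.900 and *α*2 ≈ 0.300:

```python
from math import sqrt

def two_block_psd_params(rf, rA, c=1.1 / 0.9, f1=1.0, A1=1.0):
    """Bandwidth parameters alpha_1, alpha_2 of a PSD made of two rectangular
    blocks. Block i spans [f_i, c*f_i], with f2 = rf*f1 and areas A1, A2 = rA*A1.
    The [f, c*f] block layout is an assumed interpretation of the text."""
    def block_moments(f_lo, area):
        f_hi = c * f_lo
        h = area / (f_hi - f_lo)  # rectangle height
        return [h * (f_hi**(n + 1) - f_lo**(n + 1)) / (n + 1) for n in range(5)]
    m = [ma + mb for ma, mb in zip(block_moments(f1, A1),
                                   block_moments(rf * f1, rA * A1))]
    a1 = m[1] / sqrt(m[0] * m[2])
    a2 = m[2] / sqrt(m[0] * m[4])
    return a1, a2

a1, a2 = two_block_psd_params(rf=7.408, rA=0.006)
```

Note that *α*1 and *α*2 are invariant under a frequency rescaling, so *f*1 = 1 can be used here even though the actual spectra are scaled to a prescribed peak rate.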

A short list of PSD geometrical and spectral parameters is summarized in Table 1. For every spectrum, the lowest frequency *f*<sup>1</sup> is adjusted so that all power spectra have the same number of peaks per second *µ* = 20. The three columns on the right provide the damage ratio *DTB*/*DDK* for three values of S/N slope.

**Table 1.** Parameters of some types of narrow band, band-limited, and bimodal power spectral densities for which the Dirlik and TB methods are compared (*f*<sup>1</sup> in Hz, *µ* peaks per second, *λ*<sub>0</sub><sup>+</sup> up-crossings per second, n.a. = not applicable).


Among the examples of Table 1, two specific cases are investigated in more detail in Figures 6 and 7. They refer to the combinations *r<sup>f</sup>* = 7.408, *r<sup>A</sup>* = 0.006 (with *α*<sup>1</sup> = 0.900, *α*<sup>2</sup> = 0.300) and *r<sup>f</sup>* = 3.233, *r<sup>A</sup>* = 0.240 (with *α*<sup>1</sup> = 0.850, *α*<sup>2</sup> = 0.600). In Figures 6 and 7, the graphs on the left show the amplitude probability distributions *pDK*(*s*) and *pTB*(*s*), whereas the graphs on the right show each distribution multiplied by *µ s<sup>b</sup>*/*K*. In all figures, the Rayleigh distribution, also multiplied by *λ*<sub>0</sub><sup>+</sup> *s<sup>b</sup>*/*K* in the right graph, is shown for comparison. Values *b* = 5 and *K* = 1 of the S/N curve are chosen.

The graphs on the right, so devised, have the purpose of highlighting the distribution of the damage contributed by each amplitude. In fact, apart from the constant *K*, the quantity *µ s<sup>b</sup> p*(*s*) is the damage caused by the cycles with amplitude *s*. Looking at this kind of "probability distribution weighted with damage", or simply "damage distribution", it is possible to explain why, in spite of having different rainflow amplitude distributions, the Dirlik and TB methods estimate almost identical damage values. For the example in Figure 6, the damage ratio is *DTB*/*DDK* = 0.966, while the deviation slightly grows for the example in Figure 7, where *DTB*/*DDK* = 0.944. The comparison also demonstrates, at least for the two cases considered in the figures, that a difference in amplitude distributions in the region of small amplitudes is not of concern, since small amplitudes contribute very little to the total damage, especially for high *b*. Furthermore, the graph also makes apparent how the Rayleigh probability distribution always leads to a larger damage.
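The "damage distribution" idea can be made concrete in the Rayleigh (narrow band) case, where integrating the weighted density over all amplitudes must return the closed-form narrow band damage rate. A minimal numerical check with illustrative values (*T* = 1 is implied, so the result is a damage per unit time):

```python
from math import sqrt, exp, gamma

def rayleigh_damage_density(s, m0, nu0, b, K):
    """Damage per unit time and unit amplitude: nu0 * s^b * p_R(s) / K,
    with p_R the Rayleigh amplitude PDF of a process with variance m0."""
    return nu0 * s**b * (s / m0) * exp(-s * s / (2.0 * m0)) / K

# midpoint-rule integration of the density over all amplitudes
m0, nu0, b, K = 10.0, 15.0, 5.0, 1.0
ds = 0.01
total = sum(rayleigh_damage_density((k + 0.5) * ds, m0, nu0, b, K) * ds
            for k in range(4000))
# closed-form narrow band damage rate, which the integral should reproduce
closed = nu0 / K * sqrt(2.0 * m0)**b * gamma(1.0 + b / 2.0)
```

The same weighting applied to *pDK*(*s*) or *pTB*(*s*) yields the curves in the right-hand panels of Figures 6 and 7.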


**Figure 6.** Comparison of (**a**) rainflow probability distributions, (**b**) weighted by the damage per amplitude. The figures refer to parameters *r<sup>f</sup>* = 7.408, *r<sup>A</sup>* = 0.006 (with *α*<sup>1</sup> = 0.900, *α*<sup>2</sup> = 0.300).

**Figure 7.** Comparison of (**a**) rainflow probability distributions, (**b**) weighted by the damage per amplitude. The figures refer to parameters *r<sup>f</sup>* = 3.233, *r<sup>A</sup>* = 0.240 (with *α*<sup>1</sup> = 0.850, *α*<sup>2</sup> = 0.600).

#### *6.2. Comparison among Spectral Methods (Numerical Simulations)*

To our knowledge, one of the earliest comparisons of spectral methods is that performed in [28]—obviously, it does not include the TB method, which appeared only about ten years later. The comparison in [28] indicated the Dirlik method as superior to other spectral methods (for example, Wirsching–Light and "single-moment") for a wide combination of power spectral densities and S/N slopes, and its superiority was attributed to the use of four spectral moments. This outcome highlighted the role of spectral moment *m*1, in addition to *m*0, *m*2, and *m*4, in describing the shape of the rainflow range probability distribution. In this regard, the article concluded that the rainflow distribution is a mixed distribution and that, "in this sense Dirlik took the right approach" [28].

A later study [74], though actually not aiming to compare different methods, also confirmed the good performance of the Dirlik solution, even when applied to the subclass of bimodal random processes, for which specific solutions usually perform better.


The first comparison also including the TB method was presented in [17]. Using an error index to measure the estimation accuracy, the study confirmed that "the TB method has been shown to be as accurate as the DK approach"—although the latter has a slightly smaller error index. Actually, the lowest error is for the TB method using *w*<sup>3</sup> in Equation (14), but surprisingly, this version has not come into common use. The results in [17] emphasize once more the need to include spectral moment *m*<sup>1</sup> to obtain improved damage estimates, although they did not exclude that a more complex relationship with other fractional moments, such as *m*0.75 and *m*1.5, could exist.

Of particular interest are the findings of [37], since this article compares spectral methods over a variety of power spectral density shapes that also include those already used in [15] and in [16,17] (see Figure 8). The results in this figure, quite similar to other figures in the article, show the ratio of the damage from spectral methods to the damage from time-domain simulations. In the figure, while the narrow band solution greatly overestimates the fatigue damage, the Dirlik and TB methods both yield extremely accurate predictions, as in general also occurs for other power spectra. Although, as the article states, the fatigue damage predicted by the Dirlik and TB methods "might be slightly underestimated in most of the cases" (the article here refers to some trimodal spectra), "it is demonstrated that the two formulae give quite accurate fatigue damage estimates for all of the spectral shapes and bandwidth parameters considered herein" [37].

**Figure 8.** Example of comparison in [37] with the same power spectra used (**a**) in [16] and (**b**) in [15] (SS = simple sum of damage of frequency components in trimodal spectrum, NB = narrow band formula in Equation (5), DK = Dirlik, BT = TB method, Proposed = method proposed in [37]). Reproduced from [37] with permission from Elsevier.

The results in [33], too, corroborate the fact that both the Dirlik and TB methods generally perform better than other approaches. The peculiarity of [33], however, is to have considered real power spectra typical of structural dynamics and the automotive industry. Unlike previous comparative studies, the findings of [33] seem to indicate a slightly better performance of the TB method over Dirlik, although the difference remains within a few percentage points.

Common to previous studies is the tendency of the prediction error to increase as the S/N curve becomes less steep (i.e., as *b* increases).

More recently, other simulation studies [77,78] have been carried out to benchmark the various spectral methods against different power spectral density shapes. Again, the comparison in [77] confirms that, for any spectral method, the estimation error is magnified with an increasing slope *b*, especially at values close to or above 8–10, which are typical of smooth/unnotched components. Contrasting trends characterize the accuracy of the Dirlik and TB methods, with the former sometimes being better for some *b* values and the latter for others.

In an opposite trend to previous studies, the comparison in [78] reaches different conclusions. Although for some power spectra the Dirlik and TB methods are "highlighted as the best-performing methods", in general their accuracy is judged not so satisfactory and, for steel and aluminum, it seems comparable to that of other methods; among them, surprisingly, is also the Wirsching–Light method, despite its accuracy with wide band spectra being known to be not excellent.

#### *6.3. Comparison with Experiments*

Unlike simulation-based comparisons, there are far fewer studies that compare spectral methods against experiments. Only a few of them are mentioned here.

An example is the experimental study in [79], focusing on bending–torsion random loading tests. Other than showing an agreement between estimations and experiments, this study also represents an example of how to apply spectral methods (Dirlik, specifically) in combination with multiaxial criteria based on an equivalent stress.

An interesting comprehensive experimental study is presented in [80], where estimations of several spectral methods were compared with experiments from bending and/or torsion random loadings with wide band power spectra of various shapes. The multiaxial spectral criterion of [79] was first used to transform the local multiaxial stress into an equivalent uniaxial stress, to which spectral methods (Dirlik, the TB method, and others) were then applied to estimate the fatigue life by means of the Smith–Watson–Topper (SWT) parameter [80]. The main conclusion of the experimental study is that the Dirlik and TB methods "proved to be substantially better than the Rayleigh method, and resulted in very small deviations when compared with the experimental data", as demonstrated by Figure 9. Interestingly, the study also pointed out how the two spectral methods "give a very similar fatigue life" [80], despite showing essential differences in their rainflow probability distributions. This apparent inconsistency is explained by the fact that both methods have in common the part of the rainflow probability distribution that contributes most to the fatigue damage, an aspect already discussed in Section 6.1.

**Figure 9.** Example of comparison with experimental fatigue life presented in [80] for: (**a**) Dirlik method and (**b**) TB method. Reproduced from [80] with permission from Elsevier.

A comparison with experimental data is provided in [61,62] for electronic devices subjected to random loadings, with particular attention to the structural integrity of solder joints in two architectures, package-on-package (PoP) and ball grid array (BGA). The comparison is displayed in Figure 10a. Although the number of test results is not high enough to permit general conclusions to be drawn on a statistical basis, some trends emerge clearly. For example, in both vibration tests the TB method yields estimations within 10% of the experiments (note that the Dirlik method was not included in study [61]). Similar trends were confirmed when other spectral methods were included in the comparison [62]. Again, the few experimental tests point out that, among all spectral methods, "both the Dirlik and Tovo–Benasciutti methods exhibit less than 10% absolute error, which are more accurate than others" [62]. These two methods are also seen to perform much better than the Steinberg three-band method [73] that is traditionally preferred in the field of electronics applications.

As already mentioned in Section 3, the temperature-modified Dirlik method was checked against experimental data obtained in [30,31] by testing a die-cast aluminum alloy at various temperatures. Apart from some scatter in the experimental data, the comparison shows that fatigue life can be estimated successfully by the Dirlik method [30,31] (see Figure 10b).

A nice correlation with experimental data, which confirms the capability of spectral methods in predicting fatigue life, is also highlighted in [81] for the Dirlik method; presumably the TB method, though not shown, would be close to the experimental data as well.

Finally, it has to be mentioned that the good accuracy of the Dirlik method encompasses not only metallic materials but also advanced materials such as composites [66].

**Figure 10.** Comparison with experimental data: (**a**) electronic application for package-on-package (PoP) and ball grid array (BGA) (DK = Dirlik, TB = TB method, ZB = Zhao–Baker, SM = single-moment, ST = Steinberg method, WL = Wirsching–Light); (**b**) temperature effect on random loading (DTMDK = temperature-modified Dirlik method). Reproduced from [30,62] with permission from Elsevier.

#### **7. Conclusions**

Spectral methods are finding more and more application in various engineering areas and with various material types: from testing handheld devices to rocket launchers, from testing common metallic materials to ceramic matrix composites and even to concrete asphalt mortars.

Whether or not a spectral method is accurate enough in a certain application depends, first, on whether its hypotheses are satisfied by that application. Methods restricted to narrow band processes will overestimate the damage if applied to a wide band process. Similarly, methods developed for stationary Gaussian processes will be wrong if applied to non-Gaussian or nonstationary processes. For any given application, the decision on which method is more appropriate is left to the engineer's judgement.

On the other hand, a certain degree of approximation must be accepted when using spectral methods, since the actual random time histories of stress (for example, those obtained from measurements) can hardly be exactly stationary and Gaussian, as prescribed by the models. The degree of approximation depends on how much the model hypotheses are violated. Sometimes, dividing a nonstationary signal into stationary, or almost-stationary, segments may be a simple and feasible solution. In other cases, especially with irregular time histories, the solution is not so obvious, or may not exist at all.

Once it has been verified that a given application matches the model hypotheses, the choice of "the best" model sometimes becomes less obvious, especially for newcomers to vibration fatigue. The multitude of spectral methods in the literature may indeed leave one feeling somewhat disoriented. Help may come from the results of comparative studies. As this article has pointed out, for wide band stationary Gaussian processes the literature seems by now to have reached a consensus in identifying the methods by Dirlik and by Tovo–Benasciutti (TB) as "the best" spectral methods, with very small, if not negligible, differences between them. Which one to use then becomes a matter of personal preference.

Regarding possible areas of development of spectral methods, besides additional studies to evaluate their potential in different industrial sectors and with different materials, some research topics could deserve attention in the future: multiaxial fatigue with nonstationary random loadings, crack propagation models, and, finally, the uncertainty in damage estimates due to the sampling variability of the power spectrum.

For multiaxial fatigue, it is known that the great majority of spectral criteria developed thus far (mostly as reformulations of multiaxial criteria originally formulated in the time domain [82,83]) are limited to stationary loadings. A great deal of research is required to extend those criteria to nonstationary multiaxial random loadings, also considering that the nonstationary case has not yet been solved completely even for uniaxial loadings. At the same time, the development of new multiaxial spectral criteria, in either the stationary or the nonstationary case, should go hand in hand with experimental testing aimed at gathering a database for calibrating and validating new multiaxial spectral methods, for example, by testing additively manufactured materials, which have recently attracted so much research attention.

For fatigue crack growth, it seems that the current literature on spectral-based crack propagation models is not so vast [6–10], so this topic may represent quite a promising research field; a starting point may be, for example, the development of spectral methods to include the effect of overloads.

Finally, a topic that, so far, has received very little or almost no attention is the effect on spectral fatigue damage of the sampling variability of a power spectral density computed from short time history records. When this sampling variability is included in the analysis, the fatigue damage estimated by spectral methods becomes a random value with a statistical variability evaluated through confidence intervals. Although some achievements have been obtained recently [84], this topic certainly deserves more attention by research in the years to come.

**Author Contributions:** Writing—original draft preparation, D.B.; writing—review and editing, T.D. and D.B. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.



**Julian M. E. Marques 1,\* , Luigi Solazzi <sup>2</sup> and Denis Benasciutti 3,\***


**Abstract:** This article presents an application of a method for estimating the inherent statistical variability of the fatigue damage computed in one single nonstationary random time history. The method applies the concept of confidence interval for the damage, which is constructed after the single time history is subdivided into pseudo-stationary segments, with each of them further divided into shorter blocks. As a case study, the method is applied to the strain time histories measured in a wheel of a telescopic handler industrial vehicle. A preliminary screening involving the short-time Fourier transform and the run test is carried out to verify whether the measured time histories are truly nonstationary and fall within the hypotheses of the proposed method. After that, the confidence interval for the unknown expected damage is computed; its upper bound can be used as a safety limit in a structural integrity assessment. The obtained results seem very promising and suggest the possible use of the proposed approach in similar engineering applications.

**Keywords:** fatigue damage; nonstationary random loadings; run test; short-time Fourier transform

### **1. Introduction**

Wheels in industrial vehicles are critical components, being subjected in service to complex random and nonstationary fatigue loadings. Here, the term 'random' indicates the intrinsic aleatory nature of the loading that makes it impossible to exactly predict the loading values. Instead, 'nonstationary' specifies that the statistical properties of the random loading (e.g., mean value, root-mean-square value, autocorrelation function) change over time as a consequence, for example, of the different driving conditions, routes (curves, straights, jolts, etc.), or vehicle usage (e.g., speed, operating modes) [1–4].

When considering the design of industrial vehicles from a material viewpoint, a new trend has recently emerged that exploits a lightweight design based on the use of composite materials not only for wheels [5,6] but also for other structural details of the vehicle (e.g., trucks [7,8], earthmoving machines [9], or working platforms [10,11]). This trend in design aims at improving the wheel and vehicle performance, while reducing the overall weight.

In addition to the material choice, a crucial step in the design process remains the structural integrity assessment of critical parts. For wheels of industrial vehicles, the design process usually develops as a synergistic interaction between numerical modeling and in-field experimental testing [12,13]. The outcomes of numerical simulation analyses are, indeed, supported by experimental tests conducted on prototypes or real vehicles subjected to standardized loadings that should be as representative as possible of the actual loadings observed in service.

**Citation:** Marques, J.M.E.; Solazzi, L.; Benasciutti, D. Fatigue Analysis of Nonstationary Random Loadings Measured in an Industrial Vehicle Wheel: Uncertainty of Fatigue Damage. *Metals* **2022**, *12*, 616. https://doi.org/10.3390/met12040616

Academic Editors: Mark T. Whittaker, Turan Dirlik and Anders Jarfors

Received: 25 February 2022; Accepted: 30 March 2022; Published: 2 April 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

Within this experimental framework, one of the most complex tasks is the definition of the testing conditions that lead to the most representative loadings. An example of the complexity of service loadings is that characterizing the wheels of industrial vehicles in sectors such as agriculture, construction, or material handling. In these sectors, the wheels have to endure heavy loads of different intensities, or loads distributed unevenly as a result of the excitation from uneven ground. The complexity in loading definition increases further if the loading intensity is correlated to the various wheel geometries, which often cover a wide range; for example, the nominal wheel load can range from 4 kN up to 250 kN for a wheel size varying from 8.00″ (203 mm) to 54.00″ (1371 mm) in diameter and from 3.00″ to 36.00″ in width [12].

Accordingly, it becomes quite unlikely for a single validation test to be able to encompass all the loading conditions to which the wheels will be subjected in their service life, or to include all the number of cycles counted in the entire wheel lifetime; this number is on the order of several millions as determined by the number of wheel revolutions.

If interpreted from a statistical point of view, the definition of appropriate testing loadings for experimental validation becomes a matter of the representativeness and sampling variability of the testing loading. When only a single or, at best, a few loading time histories are recorded from tests, reliable conclusions on the service fatigue life of the wheel can only be made on a statistical basis from the knowledge of a small set of fatigue cycles and few damage values. Such few damage values, however, represent only a small fraction of a much larger population of values that might theoretically be obtained from testing a much larger (virtually infinite) set of time histories—a situation clearly not obtainable in practice. The goal is to deduce meaningful statistical conclusions on the wheel reliability and safety solely from the knowledge of such a small sample of time history records and damage values obtained in experiments.

This issue is intimately related to the inherent statistical variability of fatigue damage. More precisely, let *D*(*T*) be the fatigue damage of a time history *z*(*t*) of time duration *T*. From a mathematical point of view, the damage *D*(*T*) is a random variable that follows a certain probability distribution with expected value *E*[*D*(*T*)] and variance *σ*<sub>*D*</sub><sup>2</sup>.

Starting from the pioneering works of Bendat [14] and Mark and Crandall [15,16], several methods have been proposed for estimating *E*[*D*(*T*)] and *σ*<sub>*D*</sub><sup>2</sup> from the statistical properties of the random loading in the time domain or from its power spectral density in the frequency domain. While the early works were restricted to certain classes of random processes (the Gaussian linear oscillator) [14–16], more recent methods have extended the solution to any kind of Gaussian random loading with specific types of power spectral density, e.g., narrow-band, multimodal, or wide-band [17–19]. An extension to narrow-band and non-Gaussian cases has also been proposed recently [20].
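For the simplest of these cases, the narrow-band Gaussian loading, the expected damage has the classical closed form *E*[*D*(*T*)] = *ν*<sub>0</sub>*T*(√(2*m*<sub>0</sub>))<sup>*m*</sup> Γ(1 + *m*/2)/*K*. The sketch below evaluates it for hypothetical values of the spectral moments and S–N parameters (it is an illustration of the standard formula, not of any specific method from [14–20]):

```python
from math import gamma, sqrt

def narrowband_expected_damage(m0, m2, T, m, K):
    """Closed-form expected damage of a narrow-band Gaussian stress:
    E[D(T)] = nu0 * T * (sqrt(2*m0))**m * Gamma(1 + m/2) / K, where the
    spectral moments m0, m2 are taken over frequency in Hz, so the mean
    upcrossing rate is nu0 = sqrt(m2/m0) (cycles per second)."""
    nu0 = sqrt(m2 / m0)
    return nu0 * T * sqrt(2.0 * m0) ** m * gamma(1.0 + m / 2.0) / K

# Hypothetical values: m0 in MPa^2, m2 in MPa^2 Hz^2, T in s, S-N curve S^m N = K
D = narrowband_expected_damage(m0=100.0, m2=400.0, T=3600.0, m=3.0, K=1e12)
```

The formula follows from the Rayleigh distribution of cycle amplitudes in a narrow-band Gaussian process; for wide-band loadings it serves as a conservative upper bound.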

A completely different approach, developed in [21], exploited a data-driven analysis of time history records to account for the variability of the random loading by constructing a confidence interval for the expected damage. The approach, first benchmarked with numerical simulations, was further validated against experimental records for a mountain bike under pseudo-stationary conditions [22].

The approaches mentioned so far are, however, restricted to the case of stationary random loadings with time-invariant statistical properties. This stationary condition, as already emphasized, may be an oversimplification for the nonstationary in-service loadings experienced by the wheels of industrial vehicles.

This work, thus, attempts to extend the confidence interval approach mentioned above to the case of nonstationary random loadings. Although the theoretical framework of the method was preliminarily devised in [23,24], the goal here is to apply the method to an actual case study, i.e., to the nonstationary time histories recorded from a wheel of a telescopic handler vehicle.

In a preliminary stage, each recorded time history is confirmed to be really nonstationary by the application of, first, the short-time Fourier transform (STFT) and, then, the run test. The outcome of the STFT allows the time history to be divided into stationary or pseudo-stationary segments. On the basis of such a subdivision, it is possible to apply, for each state, the usual method valid for stationary loadings to compute the confidence interval for the expected damage. Confidence intervals for the different stationary segments are then combined to arrive at the confidence interval for the whole nonstationary time history. The obtained results seem very promising and suggest the possible use of the proposed approach in similar engineering applications.
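The run test mentioned above can be sketched as a standard Wald–Wolfowitz runs test applied to a sequence of segment statistics (e.g., the RMS of successive blocks of the record). The implementation below is a generic illustration of that test, not the exact procedure used in this study:

```python
import numpy as np

def run_test_statistic(values):
    """Wald-Wolfowitz run test: count runs of values above/below their median
    and return the standardized statistic z. |z| > 1.96 rejects, at the 5%
    level, the hypothesis that the sequence is random (i.e., that the record
    the segment statistics were computed from is stationary)."""
    values = np.asarray(values, dtype=float)
    med = np.median(values)
    signs = (values > med)[values != med]  # drop ties with the median
    n1 = int(np.sum(signs))
    n2 = signs.size - n1
    runs = 1 + int(np.sum(signs[1:] != signs[:-1]))
    mean_r = 2.0 * n1 * n2 / (n1 + n2) + 1.0
    var_r = (2.0 * n1 * n2 * (2.0 * n1 * n2 - n1 - n2)
             / ((n1 + n2) ** 2 * (n1 + n2 - 1)))
    return (runs - mean_r) / np.sqrt(var_r)

# Segment RMS values with a steady upward trend: clearly nonstationary,
# so the test gives a large negative z (too few runs)
rms_trending = np.arange(40.0)
z = run_test_statistic(rms_trending)
```

A trending sequence produces long runs on each side of the median, hence a strongly negative z; a rapidly alternating sequence produces the opposite sign.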

### **2. Fatigue Damage and Expected Value**

Let *z*(*t*), 0 ≤ *t* ≤ *T* be a nonstationary time history of time duration *T*, which acts on a component or structural detail. According to the Palmgren–Miner damage accumulation rule, the fatigue damage of *z*(*t*) is computed by summing up the damage of all rainflow cycles

$$D(T) = \sum\_{i=1}^{N\_T} d\_i = \sum\_{i=1}^{N\_T} \frac{S\_i^m}{K} \tag{1}$$

where *S*<sub>*i*</sub> is the stress amplitude of the *i*-th rainflow cycle, and *N*<sub>*T*</sub> is the total number of rainflow cycles counted in *z*(*t*). Parameter *K* is the fatigue strength coefficient, and *m* is the inverse slope of the S–N curve *S*<sup>*m*</sup>*N* = *K* of the component or structural detail. Below, a single-slope S–N curve is used, but the approach presented hereafter allows for any type of S–N curve. Furthermore, to account for the scatter in material fatigue strength, the S–N curve *S*<sup>*m*</sup>*N* = *K* can be taken to represent a characteristic line defined for a prescribed survival probability and confidence [25].
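Equation (1) is straightforward to evaluate once the rainflow cycles are available. The sketch below assumes the amplitudes have already been extracted by a rainflow counting routine; the amplitude values and the S–N parameters *m* and *K* are hypothetical:

```python
def miner_damage(amplitudes, m, K):
    """Palmgren-Miner damage of Equation (1): D(T) = sum_i S_i**m / K,
    for a single-slope S-N curve S**m * N = K."""
    return sum(S ** m / K for S in amplitudes)

# Hypothetical rainflow amplitudes (MPa) and S-N parameters m = 3, K = 1e12
S_rainflow = [120.0, 80.0, 200.0, 50.0]
D = miner_damage(S_rainflow, m=3.0, K=1e12)
# failure is conventionally predicted when the accumulated damage D reaches 1
```

Note how, with *m* = 3, the single 200 MPa cycle contributes more damage than the other three cycles combined; the damage sum is dominated by the largest amplitudes.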

Since *z*(*t*) is a random time history, the fatigue cycles counted in *z*(*t*) have randomly distributed amplitudes, and their number *N*<sub>*T*</sub> is also random. As a result, the damage *D*(*T*) in Equation (1) has to be regarded as a random variable, being affected by two sources of randomness related to the amplitudes *S*<sub>*i*</sub> and the number of cycles *N*<sub>*T*</sub>. In other words, the random variables *S* and *N*<sub>*T*</sub> determine an inherent statistical variability of the damage value *D*(*T*) computed from a particular time history *z*(*t*). For example, the damage *D*(*T*) would likely differ if it were computed from other time histories, despite their being obtained or measured under virtually identical conditions. From a probabilistic point of view, the random variable *D*(*T*) follows a certain probability distribution with expected value *E*[*D*(*T*)] and variance *σ*<sub>*D*</sub><sup>2</sup>.

As a result, the damage *D*(*T*) computed from a single time history *z*(*t*) with finite length *T* is only a sample value that is distributed randomly around *E*[*D*(*T*)]; it may be smaller or larger than *E*[*D*(*T*)]. If the coefficient of variation *σ*<sub>*D*</sub>/*E*[*D*(*T*)] is large, the values of *D*(*T*) may be so scattered around *E*[*D*(*T*)] that a fatigue life estimation based on a single value *D*(*T*) becomes rather uncertain [21].

Since the damage *D*(*T*) is a random variable following a certain distribution, the expected damage is given by taking the expectation of Equation (1)

$$E[D(T)] = E\left[\sum\_{i=1}^{N\_T} d\_i\right] = E[N\_T] \frac{E[S^m]}{K},\tag{2}$$

where *E*[·] is the probabilistic expectation, which represents a weighted average over an infinite population of values.

According to this definition of expectation, the expected damage *E*[*D*(*T*)] in Equation (2) is the average over an infinite population of damage values that are computed from an infinite ensemble of time histories; it clearly represents a mathematical abstraction. In engineering applications, conversely, the common situation is that in which only one time history with duration *T* is obtained from measurements. In this case, *E*[*D*(*T*)] cannot be computed since the infinite ensemble of time histories is unknown.

The problem then becomes how to draw statistical conclusions about *E*[*D*(*T*)] solely on the basis of the knowledge of the damage *D*(*T*). Since this damage is a random variable with expected value *E*[*D*(*T*)], the closeness of *D*(*T*) to *E*[*D*(*T*)] is not known a priori. The damage *D*(*T*) can be very close to *E*[*D*(*T*)], or it can be significantly far from it. A way to address this issue is to use *D*(*T*) to construct a confidence interval for *E*[*D*(*T*)].

In [21], a 100(1 − *β*)% confidence interval to include *E*[*D*(*T*)] was proposed for the case of a stationary time history. The proposed confidence interval was shown to agree with results from simulated [21] and measured stationary loadings, the latter acquired on a mountain bike [22]. Despite these promising results, the approach applies only to stationary time histories. When the time history is nonstationary, a different procedure is needed to obtain the confidence interval for *E*[*D*(*T*)].
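The idea behind such a confidence interval can be sketched, in a simplified form that is not identical to the formulation of [21], as a Student-t interval built from the damages of equal-length blocks of a stationary record; the block damages and the critical value below are hypothetical:

```python
import numpy as np

def damage_confidence_interval(block_damages, t_crit):
    """Confidence interval for the expected total damage E[D(T)], built from
    the damages of n equal-length blocks of a stationary record. This is a
    plain Student-t interval on the block mean, scaled by n to refer to the
    total damage over T (a simplification of the approach in [21])."""
    d = np.asarray(block_damages, dtype=float)
    n = d.size
    mean = d.mean()
    sem = d.std(ddof=1) / np.sqrt(n)  # standard error of the block mean
    return n * (mean - t_crit * sem), n * (mean + t_crit * sem)

# Hypothetical per-block damages of a stationary segment split into 10 blocks
blocks = [1.1e-6, 0.9e-6, 1.3e-6, 1.0e-6, 0.8e-6,
          1.2e-6, 1.1e-6, 0.9e-6, 1.0e-6, 1.1e-6]
lo, hi = damage_confidence_interval(blocks, t_crit=2.262)  # t_{0.975, 9}
# (lo, hi) is meant to enclose the expected damage E[D(T)] with ~95% confidence
```

The upper bound hi can then serve as a conservative damage value in a structural integrity assessment, which is precisely how the interval is used later in this article.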

a different procedure is needed to obtain the confidence interval for [()].

only one time history with duration is obtained from measurements. In this case, [()] cannot be computed since the infinite ensemble of time histories is unknown.


#### **3. Confidence Interval of Damage in a Nonstationary Loading**

*Metals* **2022**, *12*, x FOR PEER REVIEW 4 of 20


This section describes how to construct a confidence interval to enclose the expected damage for the case of a nonstationary switching random time history *z*(*t*). The term 'switching' indicates that *z*(*t*) is formed by a sequence of stationary load states. The number of load states needs to be two or more, *N<sup>S</sup>* ≥ 2, and they can have same or different time durations, *TS*,*<sup>i</sup>* , *i* = 1, 2, . . . *NS*. The total time length of *z*(*t*) is *T* = ∑ *N<sup>S</sup> <sup>i</sup>*=<sup>1</sup> *TS*,*<sup>i</sup>* .

Note that the different states can appear in any random sequence in *z*(*t*) and any number of times (see Figure 1a), i.e., the same state can appear repeatedly several times in the sequence, as often observed in real applications [26]. In this circumstance, the quantity *TS*,*<sup>i</sup>* indicates the *total* time duration of state *i* in *z*(*t*).

**Figure 1.** Example of state reordering: (**a**) nonstationary switching time history with replicated stationary states; (**b**) the same time history after state reordering.

For the procedure that follows, it is irrelevant in which time order and how many times a stationary state appears in *z*(*t*). Hence, in a very first stage, the original sequence of states must be reordered so that, in the sequence, each load state appears only once in its full length *TS*,*<sup>i</sup>* . This state reordering yields a nonstationary time history *z*(*t*) without replicated load states (see the example in Figure 1b). While this state reordering will greatly simplify the following analysis, it only causes a small number of cycles to be lost, i.e., those cycles formed by peaks and valleys falling in different states before reordering.

After reordering, the nonstationary time history with load states ordered in sequence is divided into *N<sup>S</sup>* disjoint states of length *TS*,*<sup>i</sup>* , *i* = 1, 2, . . . *NS*. A further subdivision of each load state into *N<sup>B</sup>* blocks is then performed; the blocks must be disjointed (not overlapped). Symbol *TB*,*<sup>i</sup>* denotes the time length of blocks in state *i*. Note that, since each state has in principle a different time length, the use of the same number of block subdivisions for all states makes the blocks in each state have different time lengths.

Within the *i*-th state, the damage of each block is computed by the rainflow counting method and Palmgren–Miner rule. After subdivision into states and blocks, *NS*·*N<sup>B</sup>* damage values *DB*,*ij*(*TB*,*i*), *i* = 1, 2, . . . *N<sup>S</sup>* and *j* = 1, 2, . . . *N<sup>B</sup>* are obtained, where index *i* identifies the state and *j* denotes the block.

Since the states are distinct and the blocks are not overlapped, the damage values *DB*,*ij*(*TB*,*i*) are independent random variables. Furthermore, since each state is stationary, the damage values *DB*,*ij*(*TB*,*i*), *j* = 1, 2, . . . *N<sup>B</sup>* of the blocks within the same *i*-th stationary state follow the same probability distribution, i.e., they are identically distributed.

A further hypothesis, also adopted in [18,21], is that the block damage values in each state follow a normal probability distribution, which may be different for each stationary state. For the block damage values *DB*,*ij*(*TB*,*i*), the sample mean *DB*,*i*(*TB*,*i*) and the sample variance $\hat{\sigma}\_{D\_B,i}^{2}$ are computed as

$$\overline{D}\_{B,i}(T\_{B,i}) = \frac{1}{N\_B} \sum\_{j=1}^{N\_B} D\_{B,ij}(T\_{B,i}), \qquad \hat{\sigma}\_{D\_B,i}^{2} = \frac{1}{N\_B - 1} \sum\_{j=1}^{N\_B} \left[ D\_{B,ij}(T\_{B,i}) - \overline{D}\_{B,i}(T\_{B,i}) \right]^{2}. \tag{3}$$
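The per-state sample statistics of Equation (3) can be sketched as follows (a minimal illustration; in practice the block damage values *DB*,*ij* would come from rainflow counting and the Palmgren–Miner rule, and the numbers below are purely hypothetical):

```python
def block_damage_stats(damage_blocks):
    """Sample mean and unbiased sample variance of the block damage
    values D_B,ij within one stationary state i (Equation (3))."""
    n_b = len(damage_blocks)
    if n_b < 2:
        raise ValueError("at least N_B = 2 blocks are needed")
    mean = sum(damage_blocks) / n_b
    var = sum((d - mean) ** 2 for d in damage_blocks) / (n_b - 1)
    return mean, var

# Hypothetical block damage values for one state (illustrative numbers only)
mean, var = block_damage_stats([0.10, 0.12, 0.08, 0.10])
```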

The technique of dividing the whole nonstationary time history into stationary states and each state into blocks, and then calculating, for each state, the corresponding sample values of the damage represents a preliminary data processing stage required for the confidence interval of damage to be constructed.


This confidence interval is obtained by reformulating the approximate confidence interval for the difference in means of two independent normal random variables with unknown and unequal variances [27]. Instead of the difference of the means of two variables, the confidence interval here proposed considers the sum of the means of two or more random variables. Fortunately, the difference and the sum of normal random variables are also normally distributed.

According to this reformulation, the confidence interval of the expected damage is as follows [23,24]:

$$\begin{split} \sum\_{i=1}^{N\_S} \overline{D}\_{B,i}(T\_{B,i}) - t\_{\beta/2,\nu} \sqrt{\sum\_{i=1}^{N\_S} \frac{\hat{\sigma}\_{D\_B,i}^{2}}{N\_B}} &\leq \sum\_{i=1}^{N\_S} E[D\_{B,i}(T\_{B,i})] \\ &\leq \sum\_{i=1}^{N\_S} \overline{D}\_{B,i}(T\_{B,i}) + t\_{\beta/2,\nu} \sqrt{\sum\_{i=1}^{N\_S} \frac{\hat{\sigma}\_{D\_B,i}^{2}}{N\_B}} \end{split} \tag{4}$$

in which *tβ*/2,*<sup>ν</sup>* is the quantile of Student's *t*-distribution, and *ν* is an equivalent number of degrees of freedom

$$\nu \cong (N\_B - 1) \frac{\left(\sum\_{i=1}^{N\_S} \hat{\sigma}\_{D\_B,i}^{2}\right)^{2}}{\sum\_{i=1}^{N\_S} \left(\hat{\sigma}\_{D\_B,i}^{2}\right)^{2}}. \tag{5}$$

After substituting the sample mean *DB*,*i*(*TB*,*i*) into Equation (4) and multiplying this expression by *NB*, the expression of the confidence interval turns out to be

$$\begin{split} \sum\_{i=1}^{N\_S} \sum\_{j=1}^{N\_B} D\_{B,ij}(T\_{B,i}) - t\_{\beta/2,\nu} \sum\_{i=1}^{N\_S} \sqrt{N\_B \cdot \hat{\sigma}\_{D\_B,i}^{2}} &\leq \sum\_{i=1}^{N\_S} N\_B E[D\_{B,i}(T\_{B,i})] \\ &\leq \sum\_{i=1}^{N\_S} \sum\_{j=1}^{N\_B} D\_{B,ij}(T\_{B,i}) + t\_{\beta/2,\nu} \sum\_{i=1}^{N\_S} \sqrt{N\_B \cdot \hat{\sigma}\_{D\_B,i}^{2}}. \end{split} \tag{6}$$

Since *N<sup>S</sup>* and *N<sup>B</sup>* can be considered as deterministic values (not random), it is possible to substitute into Equation (6) the single summation of the expected damage of blocks $\sum\_{i=1}^{N\_S} N\_B E[D\_{B,i}(T\_{B,i})] = E[D(T)]$ and the double summation $\sum\_{i=1}^{N\_S} \sum\_{j=1}^{N\_B} D\_{B,ij}(T\_{B,i}) \cong D(T)$. The double summation well approximates the total damage *D*(*T*) since only a small number of cycles are lost after subdivision of the time history into blocks. To ensure that this approximation is acceptable, the number of cycles in each block should be much greater than the number of blocks (see [21]), a condition that is usually met in practice.

After the above mathematical simplifications, the final expression of the 100(1 − *β*)% confidence interval of the expected damage *E*[*D*(*T*)] for a single nonstationary switching time history is

$$D(T) - t\_{\beta/2,\nu} \sum\_{i=1}^{N\_S} \sqrt{N\_B \cdot \hat{\sigma}\_{D\_B,i}^{2}} \leq E[D(T)] \leq D(T) + t\_{\beta/2,\nu} \sum\_{i=1}^{N\_S} \sqrt{N\_B \cdot \hat{\sigma}\_{D\_B,i}^{2}}. \tag{7}$$

This confidence interval can be constructed if a minimum number of blocks *N<sup>B</sup>* ≥ 2 is available to compute the sample statistics. For *N<sup>S</sup>* = 1, Equation (7) yields the result obtained in [21].
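Equations (5) and (7) can be sketched in a few lines (a minimal illustration; the Student's *t* quantile is taken as an input parameter, since in practice it would come from statistical tables or a routine such as `scipy.stats.t.ppf(1 - beta/2, nu)`):

```python
from math import sqrt

def equivalent_dof(block_vars, n_b):
    """Equivalent degrees of freedom nu (Equation (5)).
    block_vars: per-state sample variances of block damage, n_b: blocks per state."""
    s = sum(block_vars)
    return (n_b - 1) * s * s / sum(v * v for v in block_vars)

def damage_confidence_interval(total_damage, block_vars, n_b, t_quant):
    """100(1 - beta)% confidence interval for E[D(T)] (Equation (7)).
    total_damage: D(T), total rainflow damage of the time history;
    t_quant: quantile t_{beta/2, nu} of Student's t-distribution."""
    half_width = t_quant * sum(sqrt(n_b * v) for v in block_vars)
    return total_damage - half_width, total_damage + half_width
```

Note that for a single state (*N<sup>S</sup>* = 1) the equivalent degrees of freedom reduce to *N<sup>B</sup>* − 1, consistent with the stationary case of [21].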

In the next sections, this confidence interval is applied to a nonstationary time history measured in a wheel of a telescopic handler vehicle.

### **4. Experimental Tests**

The experimental tests were carried out on the frontal wheel of the loader Manitou MT 1840 (Manitou BF, Ancenis, France) (see Figure 2). The main characteristics of the machine are as follows: weight in unladen conditions 116.3 kN; working load limit on the forks 40 kN; weight on the front axle equal to 54.8 kN in unloaded condition and 129.3 kN with the maximum load on the forks.

**Figure 2.** Different load conditions applied to the machine: (**a**) on-road case; (**b**) load placement case; (**c**) off-road case; (**d**) strain gauges and data system applied to the wheel to acquire and register the strain time-histories.

The wheel positioned on the machine is a classical one made of S355 structural steel (UNI EN 10025). The wheel diameter is about 1020 mm, and the width is about 320 mm. On the wheel, a Michelin Power Cl 440/80–24" 168 A8 tire (Michelin S.A., Clermont Ferrand, France) was mounted, with an inflation pressure of 4.5 bar. Other details can be found in [12].

As already pointed out, there are many different load conditions that a machine can be subjected to during its life, and they depend on several factors such as the route type, velocity, and weight. In the absence of specific standards that help define the load conditions to use for designing the machine and its structural components, four different load cases were analyzed in the experiments. These load conditions were chosen, in agreement with the wheel manufacturer, with the aim to better describe the different uses to which the machine can be subjected during its life.

1. On-road case: the machine moved on a paved road in an oval circuit; this load condition was performed without any load on the forks (Figure 2a);



Strain gauges (Vishay Precision Group Inc., Raleigh, NC, USA) and an appropriate data logger (Hottinger Bruel & Kjaer, Virum, Denmark) were applied on the wheel to acquire and record the strain signals (see Figure 2d). The strain gauges applied to the wheel were both uniaxial and biaxial (see Figure 3). The grid size was 3 mm, and the resistance was equal to 350 Ω for both strain gauge types. The acquisitions were performed at 100 Hz sampling frequency, a value sufficiently high to avoid missing signal information.

**Figure 3.** Strain gauges applied to the rim and folder of the wheel.

### **5. Analysis of the Time History Measured on the Wheel**

Among all measured signals, the strain time histories measured on the outer rim of the wheel (strain gauge CH6 in Figure 4) were selected and analyzed in detail with the purpose of illustrating how the confidence interval of damage is constructed. In a preliminary stage, the analysis exploits some signal processing tools that are used to identify and describe some relevant features of the measured time history. Such tools can indicate, both qualitatively and quantitatively, whether the time history is nonstationary.





**Figure 4.** Location of strain gauge for channels CH4 (inner rim) and CH6 (outer rim). Reproduced from [12] with permission from Elsevier.

For the selected strain gauge channel, four time histories, corresponding to the four loading conditions mentioned in Section 4, are available. Each time history represents a well-defined state in which the wheel operates. On the other hand, for the confidence interval method in Section 3 to be applied, the individual time history *z*(*t*) needs to be a single nonstationary switching signal formed by different states. For this reason, and for the sake of analysis, the four strain signals from channel CH6 were combined together to form a single nonstationary switching time history *z*(*t*) with a total duration of *T* = 3456 s (see Figure 5a).

**Figure 5.** Measured time history used in the analysis (CH6, outer rim): (**a**) before and (**b**) after filtering and elimination of steady-state segments.

Nevertheless, this prior information about *z*(*t*) was not exploited to decide a priori how many and which stationary states are present in *z*(*t*). With the aim of best emphasizing the capabilities of the processing tools in detecting the relevant features of *z*(*t*), the analysis proceeded with a sort of "blind approach" in which the only information available was the measured signal *z*(*t*) itself.

In a first step, and without loss of generality, the single strain record obtained by measurements was normalized to a signal *z*(*t*) with zero mean and unit variance (see Figure 5a). The time history *z*(*t*) was very irregular over time *t* and had a marked nonstationary character.

Figure 5a highlights how the signal was formed by a few segments with zero or practically zero values, corresponding to the vehicle resting in a steady-state condition. These segments were not important for the subsequent analysis since they did not contribute with any fatigue cycle; therefore, they were eliminated by a trigger threshold algorithm

applied directly to the original time history *z*(*t*). In addition to this algorithm, a Butterworth filter with a cutoff frequency of 10 Hz was applied to eliminate high-frequency vibrations seen not to be of interest for the subsequent fatigue analysis. In fact, such a cutoff frequency value was higher than the range of frequencies that characterize the vehicle usage in testing and that are related to the low speed of the vehicle and to lifting operations. After eliminating the steady-state segments and filtering the original signal, a shorter time history with duration *T* = 2560 s was obtained (see Figure 5b).
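The trigger-threshold elimination of steady-state segments described above can be sketched as follows (a minimal illustration; the threshold value and minimum run length are assumptions for the example, not the parameters used in the study, and the Butterworth filtering step is omitted):

```python
def remove_steady_segments(samples, threshold=0.05, min_len=10):
    """Drop runs of consecutive samples whose absolute value stays below
    `threshold` for at least `min_len` samples (vehicle at rest in a
    steady-state condition); shorter quiet runs are kept so that fatigue
    cycles are not broken up."""
    out, quiet = [], []
    for x in samples:
        if abs(x) < threshold:
            quiet.append(x)
        else:
            if len(quiet) < min_len:  # short dip: keep it
                out.extend(quiet)
            quiet = []
            out.append(x)
    if len(quiet) < min_len:          # flush a trailing short run
        out.extend(quiet)
    return out
```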


The time history *z*(*t*) in Figure 5b was processed in the analysis stages described hereafter. It contained the relevant fatigue cycles formed by peaks and valleys, which were obtained by the measurement performed on the vehicle wheel during the duration *T* = 3456 s. As usual in structural durability, the entire time history was subjected to rainflow counting according to the "four-point algorithm" as per the ASTM E 1049 standard [28]; values were discretized into 64 bins. The three-dimensional plot of the rainflow matrix is displayed in Figure 6 as a histogram that shows the statistical distribution of amplitudes and mean values of rainflow cycles.

Figure 6 evidences an asymmetric distribution around the zero mean value, with large-amplitude cycles having a positive mean value. Furthermore, Figure 6 demonstrates that the time history had a great deal of low-amplitude cycles with mean zero or close to zero. This trend suggests that the statistical distribution of mean values did not follow a Gaussian distribution, and that of amplitudes did not follow a Rayleigh distribution, as would be expected for a "regular" time history in which cycles have zero mean and symmetrical peak and valley; this type of random signal is often called "narrow-band" in the literature [29].


**Figure 6.** Three-dimensional histogram of rainflow matrix for the measured time history (CH6, outer rim), as a function of normalized amplitude and mean value of counted cycles.


Although the three-dimensional rainflow matrix is an important characteristic of the signal, it collects in one single ensemble all the fatigue cycles counted in the same measured time history; therefore, it cannot be used to detect whether cycles counted in different portions of the time history follow, in fact, different statistical distributions. As a consequence, the nonstationarity of the measured time history cannot be verified by the rainflow matrix in Figure 6. To this end, other techniques (e.g., level-crossing spectra, cumulative spectra, short-time Fourier transform, run test) are applied to check the nonstationarity of the measured time history. They are grouped into two categories (qualitative and quantitative methods) depending on whether their results allow for conclusions based on statistical methods.


#### *5.1. Qualifying the Nonstationarity*


Since the confidence interval in Equation (7) is only applicable to a nonstationary time history formed by a sequence of stationary states, qualitative and quantitative methods were used to verify the nonstationarity of the measured time history *z*(*t*) in Figure 5b.


One qualitative method is based on the comparison of level-crossing spectra computed from different portions into which the whole signal has been divided. The level-crossing spectrum depicts the distribution of the number of times a signal upward crosses (upcrossings) a level [30], as a function of the level. The shape of the crossing spectrum is an indirect measure of the statistical properties of the random signal. The technique used here to check for the presence of a nonstationarity was to divide the time history into segments of equal length and to compare the level-crossing spectrum of each segment.

After dividing the measured time history *z*(*t*) into three segments with the same length *T<sup>s</sup>* = 853.3 s each, the level-crossing spectrum was computed for each segment; their comparison is illustrated in Figure 7.
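The upcrossing counting behind such a level-crossing spectrum can be sketched as follows (a minimal pure-Python illustration; in Figure 7 the levels are additionally normalized to the signal standard deviation):

```python
def upcrossings(samples, level):
    """Count the times the signal crosses `level` with positive slope,
    i.e., a sample below the level followed by one at or above it."""
    return sum(1 for a, b in zip(samples, samples[1:]) if a < level <= b)

def level_crossing_spectrum(samples, levels):
    """Level-crossing spectrum: upcrossing count for each given level."""
    return [upcrossings(samples, u) for u in levels]
```

Comparing the spectra of equal-length segments, as done in the text, then amounts to calling `level_crossing_spectrum` on each segment with the same list of levels.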

**Figure 7.** Comparison of level-crossing spectra from three segments of *z*(*t*) with duration *T<sup>s</sup>* = 853.3 s each (CH6 record, outer rim). Levels are normalized to signal standard deviation.

All three level-crossing spectra were not symmetric; rather, they had a positively skewed distribution characterized by a large number of crossings in the upper tail at positive values, with the presence of crossings even at extreme positive levels. This result is consistent with the presence of cycles with positive mean value observed in the rainflow matrix in Figure 6. In addition, it can be observed that all three level-crossing distributions in Figure 7 showed a similar shape but with a large difference in the number of upcrossings over the levels. This difference is a result in favor of nonstationarity, as it emphasizes the difference in load characteristics from one segment to another.

Another qualitative method to check for nonstationary features is the comparison of the amplitude distribution of rainflow cycles counted in different subsegments of *z*(*t*) with the same duration. As usual in structural durability, the amplitude distribution is given in terms of a cumulative (or loading) spectrum, which shows how the amplitude varies versus the number of cumulated cycles. The cumulative spectrum is also a useful tool to investigate the cumulative damage behavior [30].

Figure 8 compares three cumulative spectra obtained by first dividing the measured time history *z*(*t*) into three segments of the same duration *T<sup>s</sup>* = 853.3 s, and then applying the rainflow counting method to each one. The comparison clearly highlights a marked difference in the amplitude distribution. It also reveals that a higher amplitude leads to worse agreement among cumulative spectra. This disagreement further suggests that the measured time history *z*(*t*) is nonstationary.
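A cumulative (loading) spectrum of the kind compared in Figure 8 can be sketched as follows (a minimal illustration that assumes the rainflow cycle amplitudes of a segment have already been counted; the sample amplitudes are hypothetical):

```python
def cumulative_spectrum(amplitudes):
    """Cumulative (loading) spectrum: sort the rainflow cycle amplitudes
    in descending order; the k-th entry pairs the cumulated cycle count k
    with the amplitude equalled or exceeded by k cycles."""
    ranked = sorted(amplitudes, reverse=True)
    return [(k + 1, a) for k, a in enumerate(ranked)]

# Hypothetical normalized rainflow amplitudes for one segment
spectrum = cumulative_spectrum([0.5, 2.0, 1.0])
```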


**Figure 8.** Comparison of cumulative spectra from three segments of *z*(*t*) of same duration *T<sup>s</sup>* = 853.3 s each (CH6 record, outer rim). Amplitude is normalized to signal standard deviation.


The short-time Fourier transform (STFT) is a method that can be used to analyze how the frequency content of a time history varies over time [31,32]. In order to apply the STFT, the time history must first be divided into a series of short time windows, and the Fourier transform is then computed for the signal within each window.

The method has some requirements to control the resolution achievable in the frequency domain and time domain. The frequency resolution can be improved by increasing the time length of the windows; conversely, the time resolution increases when the window length decreases. These opposite trends (linked to the Heisenberg–Gabor uncertainty principle [33]) emphasize that a high resolution cannot be achieved simultaneously in the time domain and frequency domain; a compromise must be found. On the other hand, taking a time history with fixed time duration *T*, the STFT plots (spectrograms) in both time and frequency domains are also altered by using overlapped windows and zero padding [31]. A positive overlapping between consecutive segments, for example, yields a smoother spectrogram, although it by no means decreases the estimation error, unlike Welch's method, where overlapping improves the spectrum estimation accuracy [34]. It can indeed be demonstrated that, regardless of segment overlapping, the STFT has a normalized random error (coefficient of variation) of √(4/*π* − 1) ≅ 0.522 [32], a rather large value, as it means that the standard deviation of the estimate is about one-half of the value being estimated.
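The windowing and transform steps can be sketched with NumPy; the implementation below (names and structure are ours, not from the cited references) uses non-overlapping rectangular windows without zero padding, matching the analysis applied to the measured record below.

```python
import numpy as np

def stft_nonoverlap(z, fs, T_w):
    """Minimal STFT: split z(t), sampled at fs Hz, into non-overlapping windows
    of length T_w seconds, FFT each window, and return the window center times,
    frequencies, and magnitude spectra (one row per window)."""
    n_w = int(T_w * fs)                      # samples per window
    n_seg = len(z) // n_w                    # trailing partial window discarded
    freqs = np.fft.rfftfreq(n_w, d=1.0 / fs)
    times = (np.arange(n_seg) + 0.5) * T_w   # center time of each window
    spec = np.empty((n_seg, freqs.size))
    for k in range(n_seg):
        spec[k] = np.abs(np.fft.rfft(z[k * n_w:(k + 1) * n_w])) / n_w
    return times, freqs, spec

# Normalized random error of each spectral estimate, independent of overlapping:
err = np.sqrt(4.0 / np.pi - 1.0)             # about 0.522, as noted in the text
```

Plotting `spec` as an image versus `times` and `freqs` gives a spectrogram like those in Figure 9.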

Here, the STFT was applied to the measured time history using two different window lengths (*T<sup>w</sup>* = 10 s and *T<sup>w</sup>* = 40 s) without overlapping and zero padding (see Figure 9). The various colors identify different spectrum levels. A look at the figures shows that, for both window lengths *T<sup>w</sup>*, the frequency content of the signal had significant variations over time *t*. A marked change in the frequency content is apparent: in the initial signal portion it was mainly concentrated on a narrow range around 2 Hz, before shifting toward lower frequencies and eventually shifting again in the last signal portion to cover a broader frequency range up to 3 Hz. Note that this trend over time can be appreciated best with the spectrogram obtained by a shorter window, which indeed yields a better time resolution. The longer window, by contrast, allows for a better visualization of the frequency content.



**Figure 9.** Short-time Fourier transform applied to the measured time history *z*(*t*) with two window lengths (CH6 record, outer rim): (**a**) *T<sup>w</sup>* = 10 s and (**b**) *T<sup>w</sup>* = 40 s.

Regardless of window length, a characteristic time evolution clearly emerged. More precisely, Figure 9a,b suggest that the entire signal could approximately be divided into three distinct segments with different frequency content. The first segment can be identified visually from 0 to 600 s, the second from 600 s to 1640 s, and the last from 1640 s to 2560 s. This means that the entire measured time history *z*(*t*) could be classified, at least visually, as a nonstationary signal formed by three quasi-stationary or nearly stationary states with different duration *T<sup>s</sup>*. Surprisingly, this finding contrasts with the signal being in fact formed by four portions obtained in four different testing conditions. It may then be concluded that two of the portions had very similar frequency characteristics and could be grouped together for the subsequent derivation of the confidence interval. Albeit apparently contrasting, this result evidences the capability of the proposed processing technique to discriminate the various states in *z*(*t*) solely on the basis of the knowledge of the signal itself, without any prior knowledge of the testing conditions, which indeed may often not be available to the analyst.

By considering the outcome of all previous qualitative analyses, the measured time history from the vehicle wheel appears to be nonstationary. This conclusion, albeit quite plausible when also considering the nonstationary testing conditions in which the signal was obtained, was drawn only from a visual inspection of results and, therefore, needs to be supported by further conclusions based on quantitative methods, as considered in the next section.

### *5.2. Quantifying the Nonstationarity*

This section considers a statistical method that can be used to draw conclusions on a quantitative basis on whether the measured time history is nonstationary. The statistical method is a nonparametric test called the Wald–Wolfowitz run test, or simply run test [35]. The results obtained from this method are aimed at supporting the conclusions from the more qualitative analysis in the previous section.

The run test can detect whether a sequence of values is characterized by an underlying deterministic (not random) trend that leads to a nonstationary behavior. More specifically, the run test is a hypothesis test that checks for the null hypothesis "the sequence has no deterministic trend and is stationary". The run test is based on the definition of a "run" as a sequence of identical observations followed and preceded by a different observation or no observation at all [35–37]. The sequence of values is usually classified into two dichotomic categories, being above (+) or below (−) a reference value, usually the median of the sequence.

In order to apply the run test to a random time history that is a continuous function of time, a discrete sequence of values must be obtained. For this purpose, the time history is divided into segments and, for each segment, a statistical quantity (usually the root-mean-square (RMS) value) is computed. Accordingly, the time history is converted into a discrete sequence of RMS values, which is then tested for stationarity by the run test [38].
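This conversion can be sketched in a few lines; the function below is a simplified illustration with names of our own choosing.

```python
import numpy as np

def rms_sequence(z, fs, T_s):
    """Convert a continuous record z (sampled at fs Hz) into a discrete
    sequence of RMS values, one per segment of length T_s seconds."""
    n_s = int(T_s * fs)                  # samples per segment
    n_seg = len(z) // n_s                # trailing partial segment discarded
    blocks = np.asarray(z[:n_seg * n_s], dtype=float).reshape(n_seg, n_s)
    return np.sqrt(np.mean(blocks ** 2, axis=1))
```

The resulting RMS sequence is then dichotomized against its median before counting runs.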

For a sequence with a large number of RMS values (i.e., more than 10), the distribution of runs in the sequence approaches a normal distribution with mean value *µ<sup>r</sup>* = *E*[*r*] and variance *σ*<sup>2</sup><sub>*r*</sub> = *Var*(*r*) [36,37]

$$
\mu_r = 1 + n_+ , \qquad \sigma_r^2 = \frac{n_+(n_+ - 1)}{2n_+ - 1} \tag{8}
$$

where *n*<sup>+</sup> is the number of observations above the median. Equation (8) assumes that *n*<sup>+</sup> = *n*<sup>−</sup>, i.e., that the numbers of observations above and below the median are equal, a condition that holds true by the definition of the median for a sequence with an even number of values (for an odd sequence, the value equal to the median is simply discarded).

In the run test, the acceptance region of the null hypothesis at the significance level *β* is

$$r_{n_+,\,1-\beta/2} < r \le r_{n_+,\,\beta/2} \tag{9}$$

in which *r*<sub>*n*+,1−*β*/2</sub> and *r*<sub>*n*+,*β*/2</sub> are, respectively, the lower and upper limits of the region, which can be found in statistical tables [39] or determined (if the sequence has more than 10 values) by assuming a normal distribution of runs with *µ<sup>r</sup>* and *σ*<sup>2</sup><sub>*r*</sub> from Equation (8).
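Under the normal approximation, the acceptance limits follow directly from Equation (8). The snippet below (our own illustrative code, not from the cited references) reproduces, to rounding, the limits quoted in the following paragraphs for *n*<sup>+</sup> = 128 and *n*<sup>+</sup> = 64.

```python
import math

def count_runs(above_median):
    """Number of runs in a dichotomized (+/-) sequence: sign changes plus one."""
    return 1 + sum(1 for a, b in zip(above_median, above_median[1:]) if a != b)

def run_limits(n_plus, z=1.96):
    """Acceptance region of the run test at two-sided 5% significance
    (z = 1.96), using the mean and variance of Equation (8); assumes n+ = n-."""
    mu_r = 1 + n_plus
    var_r = n_plus * (n_plus - 1) / (2 * n_plus - 1)
    half = z * math.sqrt(var_r)
    return mu_r - half, mu_r + half

lo, hi = run_limits(128)     # close to the limits 113.3 and 144.6 in the text
stationary = lo < 83 <= hi   # r = 83 falls outside: classified as nonstationary
```

The same two functions, applied to the sequence of segment RMS values, implement the full test.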

On the basis of the outcome of the run test, a time history is classified as stationary if the number of runs *r* falls inside the acceptance region. Otherwise, the time history is classified as nonstationary.

The run test method was here applied to the measured time history *z*(*t*) by taking two different values of the segment length (*T<sup>s</sup>* = 10 s and *T<sup>s</sup>* = 20 s; see Figure 10). The first value coincides with that already used for the STFT. Two different values were considered with the aim of investigating the sensitivity of the run test to the segment length. As already pointed out in [38], a long value of *T<sup>s</sup>* has the effect of smoothing local variations in the RMS value, such that the time history tends to be classified as stationary. Conversely, an excessively small value of *T<sup>s</sup>* will make the time history appear more nonstationary [38].

Figure 10a shows the results of the run test when the time segment length was *T<sup>s</sup>* = 10 s. For *n*<sup>+</sup> = *n*<sup>−</sup> = 128 observations above and below the median, and a 5% significance level, the upper and lower limits according to Equation (9) are *r*128,0.975 = 113.3 and *r*128,0.025 = 144.6. These limits were obtained by computing the mean and variance in Equation (8) and assuming a normal distribution of runs. The number of runs *r* = 83 was obtained by counting how many times the RMS values, computed in each segment, crossed up and down the median. Given that the observed number of runs *r* = 83 fell outside the acceptance region, the time history was classified as nonstationary by the run test.

The same conclusion was also obtained with a longer segment length (Figure 10b). For the case of *T<sup>s</sup>* = 20 s, the number of observations above/below the median was *n*<sup>+</sup> = *n*<sup>−</sup> = 64 and, for a 5% significance level, the limits of the acceptance region are now *r*64,0.975 = 54 and *r*64,0.025 = 76. With a longer segment length, the number of runs decreased to *r* = 35, a value that was nevertheless outside the acceptance region for stationarity. Although this value is closer to the lower bound of the region than in the previous case, the run test again yielded an indication in favor of nonstationarity.

**Figure 10.** Run test method applied to the measured time history *z*(*t*) with two segment lengths (CH6 record, outer rim): (**a**) *T<sup>s</sup>* = 10 s and (**b**) *T<sup>s</sup>* = 20 s.

According to [38], a "stationarity index" *γ<sup>r</sup>* = *r*/*µ<sup>r</sup>* is defined as the ratio of the observed number of runs to the expected number of runs for a stationary signal. In [38], the parameter *γ<sup>r</sup>* was used as an indicator of the degree of stationarity, where a high value of *γ<sup>r</sup>* indicates that a signal is "more stationary". For the above examples, the two values would be *γ<sup>r</sup>* = 83/129 = 64.3% (for *T<sup>s</sup>* = 10 s) and *γ<sup>r</sup>* = 35/65 = 53.8% (for *T<sup>s</sup>* = 20 s), indicating that a shorter segment length made the time history appear as more stationary.
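The index is a one-line computation; with the run counts and *µ<sup>r</sup>* values given above (a small illustrative sketch, function name ours):

```python
def stationarity_index(r, n_plus):
    """gamma_r = r / mu_r: observed over expected number of runs; values near
    100% suggest a 'more stationary' signal, following [38]."""
    return r / (1 + n_plus)  # mu_r = 1 + n_plus from Equation (8)

g10 = stationarity_index(83, 128)   # case T_s = 10 s
g20 = stationarity_index(35, 64)    # case T_s = 20 s
```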

### **6. Confidence Interval of Damage: Vehicle Wheel Data**

After the previous analysis confirmed that the time history *z*(*t*) measured in the industrial vehicle wheel (see Figure 5b) is nonstationary and also formed by a sequence of stationary states, it was possible to apply the confidence interval of the damage in Equation (7).

As a first step, the confidence interval required that the individual states in *z*(*t*) be identified. On the basis of the results of the STFT analysis, the entire measured signal was divided into three segments (*N<sup>S</sup>* = 3). Each segment was identified by the STFT spectrum in Figure 9. These segments may be classified as quasi-stationary or nearly stationary because they have a frequency content almost constant over time.

The confidence interval was constructed with a 95% confidence level by computing the sample values *DB*,*ij*(*TB*,*i*) and *σ*ˆ<sup>2</sup><sub>*DB*,*i*</sub>, where *i* = 1, 2, 3 indexes the states and *j* = 1, 2, . . . , *N<sup>B</sup>* indexes the block subdivisions within each state. *DB*,*ij*(*TB*,*i*) is the fatigue damage in block *j* located within state *i*. The damage was calculated by assuming an S–N curve *S<sup>m</sup>N* = *K* with normalized strength constant *K* = 1 and inverse slope *m* = 3.

The confidence interval was then calculated by varying the number of blocks in the range *N<sup>B</sup>* = 2, 3, . . . , 10. In the limit case *N<sup>B</sup>* = 10, each block contained approximately 600 counted cycles, a number large enough for the approximation *D*(*T*) ≅ ∑<sub>*i*=1</sub><sup>*N<sub>S</sub>*</sup> ∑<sub>*j*=1</sub><sup>*N<sub>B</sub>*</sup> *DB*,*ij*(*TB*,*i*) to hold, at least for the measured time history *z*(*t*) considered in this study. This is confirmed by the results shown below.
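With the S–N curve *S<sup>m</sup>N* = *K* (*K* = 1, *m* = 3), the Palmgren-Miner damage of each block is the sum of *s<sup>m</sup>*/*K* over its rainflow cycle amplitudes, and the total damage is the double sum over states and blocks. A sketch follows, with synthetic amplitudes standing in for the measured rainflow results (all names are ours):

```python
import numpy as np

M, K = 3.0, 1.0   # inverse slope m and strength constant K of the S-N curve

def block_damage(amplitudes):
    """Palmgren-Miner damage of one block: sum of s^m / K over its cycles."""
    return float(np.sum(np.asarray(amplitudes, dtype=float) ** M) / K)

# Synthetic stand-in for N_S = 3 states, each split into N_B = 10 blocks of
# ~600 rainflow cycle amplitudes (the real values come from the measurements).
rng = np.random.default_rng(1)
states = [[rng.rayleigh(0.1, size=600) for _ in range(10)] for _ in range(3)]

D_blocks = [[block_damage(b) for b in state] for state in states]
D_total = sum(sum(row) for row in D_blocks)   # D(T) ~ sum_i sum_j D_B,ij
```

The sample mean and variance of the `D_blocks` values per state are the quantities entering the confidence interval of Equation (7).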

Figure 11 displays the confidence intervals of the damage as a function of the number of block subdivisions, *N<sup>B</sup>*. All damage values were normalized to the damage *D*(*T*) of the whole time history *z*(*t*). The intervals were constructed, according to Equation (7), around the total damage (solid circles) approximated by the double summation of the sample damage *D*(*T*) ≅ ∑<sub>*i*=1</sub><sup>*N<sub>S</sub>*</sup> ∑<sub>*j*=1</sub><sup>*N<sub>B</sub>*</sup> *DB*,*ij*(*TB*,*i*).


*Metals* **2022**, *12*, x FOR PEER REVIEW 15 of 20


**Figure 11.** Confidence interval of the damage for the measured time history *z*(*t*) from the vehicle wheel (CH6 record, outer rim), as a function of the number of blocks, *N<sup>B</sup>*.

It is interesting to note that the damage value identified by the solid circle remained almost constant for any number of blocks, *N<sup>B</sup>*. This result confirms that the block subdivision eliminated a negligible number of cycles, such that the approximation *D*(*T*) ≅ ∑<sub>*i*=1</sub><sup>*N<sub>S</sub>*</sup> ∑<sub>*j*=1</sub><sup>*N<sub>B</sub>*</sup> *DB*,*ij*(*TB*,*i*) was perfectly acceptable.

Figure 11 also confirms a general trend already observed in [21] for a stationary time history. The confidence interval became narrower as the number of blocks *N<sup>B</sup>* increased. However, upon increasing *N<sup>B</sup>*, the intervals did not approach zero; rather, they appeared to converge approximately to a sort of minimum interval width.

This outcome means that, while the prediction error for the damage was reduced as the number of block damages (i.e., the block number *N<sup>B</sup>*) increased, it could not vanish entirely, since the number of fatigue cycles in the entire time history was fixed. On the other hand, the results shown in Figure 11 prove the importance of using as many blocks as possible in order to decrease the prediction error.

The trends of the confidence intervals here applied to the time history *z*(*t*) measured in the industrial vehicle wheel agree with the findings obtained in previous studies with simulations and experiments using mountain bike data [21,22].

Nevertheless, while it was possible for the latter results to be verified by comparison with the expected damage *E*[*D*(*T*)] (which may or may not fall within the confidence interval), this verification was not achievable in the practical case study analyzed here, because the expected damage *E*[*D*(*T*)] was not available.

This circumstance is the rule in practice. Indeed, the expected damage would correspond to the average damage computed over an infinite ensemble of time histories acting on the vehicle wheel, which for obvious reasons was not available. Nevertheless, given the positive feedback from previous validation studies of the same approach presented here, there is reason to believe that the proposed confidence interval approach is in fact correct. On the other hand, in practical applications, it is common to have available only one single measured time history, as in the case study discussed so far in this work.

As a final remark, if the unknown expected damage *E*[*D*(*T*)] were underestimated by the observed value of *D*(*T*), a structure or component would be designed unsafely. Only for damage values greater than *E*[*D*(*T*)] would the structure or component be in the safe region. On the other hand, in all such practical cases (as that analyzed in the present work) in which *E*[*D*(*T*)] is unknown, it is not possible to establish a priori how far *D*(*T*) is from *E*[*D*(*T*)]. In this regard, the confidence interval, which includes information about the variability of *D*(*T*), may help in drawing meaningful conclusions on this point. More specifically, it is suggested to take the upper limit as the reference damage value to be used in design [21].

### **7. Analysis of Time History from Inner Wheel Rim**

In addition to presenting results, the purpose of Section 6 was also to illustrate in great detail the various analysis steps of an individual nonstationary time history. As an example, the procedure was applied to the time history from the outer wheel rim (CH6 in Figure 4). For comparison, the procedure was also applied to the inner rim time history (CH4). Most of the analysis details already discussed in Section 6 are now skipped, thus focusing the attention more on the results.

At first glance, the CH4 time history appears to be very similar to the CH6 record depicted in Figure 5. Upon a closer look, however, it is possible to observe that rainflow cycles were distributed more symmetrically around the global mean value (see Figure 12a). Compared to the distribution in Figure 6, the large-amplitude cycles had a mean value of zero or closer to zero. The level-crossing spectra computed in three consecutive segments of the CH4 record are displayed in Figure 12b; they are very similar to those of CH6 shown in Figure 7, apart from the lower number of crossings counted. Interesting in Figure 12b is the peculiar two-spike shape of the level-crossing spectrum in the first segment of the CH4 time history.

**Figure 12.** CH4 record, inner rim: (**a**) Rainflow matrix; (**b**) level-crossing spectrum. Amplitude, mean and level are normalized to the signal standard deviation.

As with the CH6 record, in a preliminary stage, the CH4 time history was also analyzed by the short-time Fourier transform and the run test to detect nonstationary features. As a representative example, Figure 13 shows the results obtained using both techniques when choosing a time window of 10 s. The output for the CH4 time history is practically coincident with that already pointed out for the CH6 signal. Not only did both techniques suggest, visually and quantitatively, that the signal was in fact nonstationary, but they also allowed the signal to be separated for the subsequent analysis into three segments with similar characteristics.


**Figure 13.** Short-time Fourier transform and run test applied to CH4 time history (inner rim), shown at the bottom of the figure. In both methods, the window time length was 10 s.

Once the individual CH4 time history was divided into three states, and then into blocks, the confidence interval of damage was computed as a function of the number of blocks (see Figure 14). The results and trends for the CH4 signal are very similar to those obtained for the CH6 time history (see Figure 11).

**Figure 14.** Confidence interval of the damage for the measured time history *z*(*t*) from the vehicle wheel (CH4 record, inner rim), as a function of the number of blocks, *N<sup>B</sup>*.

### **8. Conclusions**

This article presented an approach for analyzing a single nonstationary random fatigue loading with the purpose of constructing the confidence interval of the expected fatigue damage. Specifically, the approach was applied to a so-called switching nonstationary loading formed by a finite number of stationary or pseudo-stationary states. The obtained confidence interval is a statistical tool to assess the sampling variability of a single fatigue damage value computed from only one individual time history.

The approach was applied here to the strain time histories measured in a wheel of a telescopic handler vehicle when subjected to four testing conditions. The time histories were recorded in the inner and outer wheel rim. In a preliminary phase, qualitative and quantitative methods (short-time Fourier transform, level-crossing spectrum, run test) were used to decide how many and which states the whole signal should be divided into before calculating the confidence interval. After the loading was divided into states, and each state was further subdivided into smaller portions (blocks), the confidence interval could eventually be determined.

The main findings can be summarized as follows:

• The outcomes of both qualitative and quantitative methods suggested that the individual measured signals could in fact be classified as nonstationary. This conclusion was confirmed independently of the choice of the length of the time window used by

• On the basis of the outcome from the STFT, the original single nonstationary time history was divided into three different states. Surprisingly, this finding contrasts with the fact that the signal was, in fact, composed by four consecutive signal portions corresponding to four different loading conditions. It may then be concluded that two of the portions had very similar frequency characteristics and could be

grouped together for the subsequent derivation of the confidence interval; • Subdivision into states and then blocks determined a loss of a very small fraction of fatigue cycles, i.e., those formed by peaks and valleys falling into different states or blocks before subdivision. This small loss by no means affected the accuracy of the

• As the number of blocks increased, the width of the confidence interval decreased, albeit not indefinitely, since it eventually attained approximately a sort of minimum

The obtained results allowed reaching the following conclusions:

the short-time Fourier transform (STFT) and by the run test;

calculated confidence interval;

quantitative methods (short-time Fourier transform, level-crossing spectrum, run test) were used to deduce some characteristic features of the signal. For example, they allowed deciding how many and which states the whole signal was divided into before calculating the confidence interval. After the loading was divided into states, and each state was further subdivided into smaller portions (blocks), the confidence interval could eventually be determined.

The obtained results allowed reaching the following conclusions:


**Author Contributions:** Conceptualization, J.M.E.M. and D.B.; experiments, L.S.; data analysis, J.M.E.M.; writing—original draft preparation, J.M.E.M.; writing—review and editing, D.B. and L.S.; supervision, D.B. All authors have read and agreed to the published version of the manuscript.

**Funding:** The research activity of one author (J.M.E.M.) was partially funded by the CTU Global Postdoc Fellowship Program (research topic #2-11).

**Data Availability Statement:** Data are contained within the article.

**Acknowledgments:** The authors would like to thank Eng. Gianpietro Bramè of Moveero Ltd. company for the information provided during the development of this research.

**Conflicts of Interest:** The authors declare no conflict of interest.

### **References**


**Julian M. E. Marques 1,\* , Denis Benasciutti 1,\* , Adam Niesłony <sup>2</sup> and Janko Slavič <sup>3</sup>**


**Abstract:** This paper presents an overview of fatigue testing systems in high-cycle regime for metals subjected to uniaxial and multiaxial random loadings. The different testing systems are critically discussed, highlighting advantages and possible limitations. By identifying relevant features, the testing systems are classified in terms of type of machine (servo-hydraulic or shaker tables), specimen geometry and applied constraints, number of load or acceleration inputs needed to perform the test, type of loading acting on the specimen and resulting state of stress. Specimens with plate, cylindrical and more elaborated geometry are also considered as a further classification criterion. This review also discusses the relationship between the applied input and the resulting local state of stress in the specimen. Since a general criterion to classify fatigue testing systems for random loadings seems not to exist, the present review—by emphasizing analogies and differences among various layouts—may provide the reader with a guideline to classify future equipment.

**Keywords:** fatigue; testing systems; random loadings; servo-hydraulic; shaker table

### **1. Introduction**

Mechanical components are often subjected to random loadings during their service life. Due to these loads, components may be exposed to a local random stress, which can be uniaxial (i.e., only one stress component) or multiaxial (i.e., two or more stress components). To estimate the component life, engineers usually perform a structural durability assessment in the predesign stage, often with the aid of a finite element (FE) analysis.

If the nodal random stress is uniaxial, the approach commonly followed makes use of rainflow counting and the Palmgren–Miner rule to compute the damage of the nodal stress output, based on uniaxial strength data given as an S–N curve. This analysis can be developed in time domain or, equivalently, in frequency domain [1].
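The damage computation mentioned above can be sketched in a few lines. The snippet below applies the Palmgren–Miner rule to a list of already-counted cycle amplitudes, with the S–N curve written as N = C·S^(−k); the values of C and k are placeholders for illustration, not material data from the paper:

```python
import numpy as np

def miner_damage(amplitudes, C=1e12, k=3.0):
    """Palmgren-Miner damage for a list of counted stress cycle amplitudes.

    S-N curve in the form N = C * S**(-k); C and k are illustrative
    placeholders, not material data from the paper.
    """
    amplitudes = np.asarray(amplitudes, dtype=float)
    cycles_to_failure = C * amplitudes ** (-k)     # N_i for each amplitude S_i
    return float(np.sum(1.0 / cycles_to_failure))  # D = sum(n_i / N_i), n_i = 1

# one cycle at S = 100 MPa: N = 1e12 * 100**-3 = 1e6 cycles, so D = 1e-6
d = miner_damage([100.0])
```

In practice the amplitudes would come from rainflow counting of the nodal stress output; here they are assumed given, since only the damage accumulation step is being illustrated.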

If, instead, the nodal stress in the FE model is multiaxial, the analysis requires the use of a multiaxial fatigue criterion, which can also be formulated in time domain or frequency domain [2]. By analyzing all nodal results of a FE model (e.g., hundreds of thousands), the computation time is very long using a time-domain criterion. In addition, it may become impracticable when a huge number of planes have to be scanned in the whole three-dimensional FE model in the case of multiaxial fatigue criteria using the critical plane concept [3–5]. In general, for both the uniaxial and multiaxial cases, frequency domain solutions are several orders of magnitude faster than time domain simulations [6–9].

Methods for durability analysis, either in time domain or frequency domain, have to rely on material strength data obtained by experimental laboratory tests. Such tests are performed by applying loadings on a specimen, where cracks nucleate and grow until the specimen breaks. Tests with simple loadings (e.g., axial, bending and torsion) are carried out to estimate the material fatigue strength data in the uniaxial case. These uniaxial tests yield an S–N curve in high-cycle fatigue regime, which characterizes the material strength behavior in terms of amplitudes versus number of cycles to failure. Uniaxial test data are also necessary for calibrating a multiaxial criterion. Once calibration has been carried out, specific tests with multiaxial loadings (e.g., bending and torsion) are also performed to gather the necessary data for validating the multiaxial criterion. It is, nevertheless, clear that systems and methodologies for fatigue testing play a paramount role in the durability analysis of both uniaxial and multiaxial states of stress.

**Citation:** Marques, J.M.E.; Benasciutti, D.; Niesłony, A.; Slavič, J. An Overview of Fatigue Testing Systems for Metals under Uniaxial and Multiaxial Random Loadings. *Metals* **2021**, *11*, 447. https://doi.org/10.3390/met11030447

Academic Editor: Tilmann Beck

Received: 30 January 2021; Accepted: 3 March 2021; Published: 8 March 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

Fatigue testing methodologies may vary, for instance, in terms of type of machine used. Two different types (servo-hydraulic or electrodynamic shaker tables) are usually adopted in laboratories. Although, in general, servo-hydraulic machines can be used with both plate and cylindrical specimens, they are normally employed with cylindrical specimens in the random fatigue testing methodologies addressed in this paper. By contrast, shaker tables also adopt specimen geometries other than cylindrical, for example, plate or more elaborated ones.

Various specimen geometries and layouts are considered in fatigue tests, too. While the use of servo-hydraulic machines or shaker tables sometimes leads to rather simple testing layouts or state of stress (e.g., uniaxial), in some cases, the testing systems are all but obvious. As an example, a widely used system layout is that considering a cantilever plate specimen with rectangular shape [10–14]. Mounted on shaker tables and excited at its base, this system produces a bending random loading, which results in a near uniaxial state of stress in the critical location (e.g., notch or hole). However, using a different system with a cantilever cylindrical specimen, it is possible to reach a biaxial state of stress with both normal and shear stresses [15]. Applying loads at the free end of the specimen by a uniaxial shaker allows the system only to develop a coupled bending–torsion random loading. To overcome this limitation, bending and torsion can be applied by two independent uniaxial shakers [15]. In this case, not only coupled (correlated) but also uncoupled (uncorrelated) bending–torsion random loadings are achieved.

Another interesting system with a cantilever cylindrical specimen is that using two masses of different weights fixed at the free end [16]. Excited by a uniaxial shaker at its base, this specimen experiences a bending–torsion loading that produces coupled normal and shear stress components. Inspired by this layout, a new testing system was designed to apply only bending or torsion, as well as coupled or uncoupled bending–torsion loadings by a tri-axis shaker [17–19]. This system permits the intensity and phase shift of bending and torsion loadings to be controlled independently, which results in local normal and shear stresses with any degree of correlation. Experimental and numerical analyses confirmed the system's behavior.

A more elaborated specimen with Y-shaped geometry was also proposed not only to accelerate fatigue tests by means of shaker tables but also to simulate a real-world scenario of complex structures [20–25]. Although this special Y-shaped system is excited by two uncoupled random loadings (in vertical and horizontal directions), it develops a biaxial state of stress at the critical location in terms of normal stresses.

As may be expected, and perhaps become more apparent in the following sections, performing tests with a uniaxial stress is much easier than executing tests with a biaxial stress, even if one makes use of a bending–torsion servo-hydraulic machine. The degree of complexity of the testing system increases the most when the biaxial state of stress is random. While a uniaxial stress can be achieved simply by a servo-hydraulic machine with axially loaded specimens, or by a uniaxial shaker table with a cantilever specimen, fatigue tests in a biaxial random state of stress require one or more machines (e.g., one tri-axis shaker or two uniaxial shakers), one or more types of input as force/torque or acceleration, a specific testing layout and/or a particular shape of specimen. Although it is true that carrying out a fatigue test with a biaxial state of stress is not as simple as executing a test with uniaxial stress, the interest in biaxial random fatigue tests has increased in the last decade in the scientific community [15,17–25].


By considering this increased interest, the present paper aims to review the random loading fatigue testing systems available in the literature. The different testing systems are critically discussed, highlighting advantages and possible disadvantages. Some general features are also identified, which allow testing systems to be classified and grouped in terms of type of testing machine, specimen geometry, applied constraints, type and amount of input needed to carry out a fatigue test, type of random loadings acting on the specimen and the resulting local state of stress. Regarding the loadings applied on specimens in fatigue tests, this paper focuses on axial, bending, torsion, axial-torsion and bending–torsion loadings. Specimens subjected to such loadings yield a uniaxial or a biaxial state of stress, which is also discussed hereafter.


Finally, it must be emphasized that, although testing systems for fatigue random loading are reviewed in detail throughout the text, the relationship between experimental results and estimations by various multiaxial fatigue criteria (though interesting) is not within the scope of the present paper. For a discussion on this topic, the reader may refer to [26,27].

#### **2. Common Random Loadings in Fatigue Tests and Resulting State of Stress**

A multiaxial state of stress at a critical point of a mechanical component is represented by a tensor *σ*(*t*), which in the most general case has six independent stress components [28]. For this reason, the six independent stress components are conveniently arranged into a six-dimensional vector *s*(*t*) = [*σxx*(*t*), *σyy*(*t*), *σzz*(*t*), *τxy*(*t*), *τyz*(*t*), *τzx*(*t*)], where *σxx*(*t*), *σyy*(*t*) and *σzz*(*t*) are normal stresses and *τxy*(*t*), *τyz*(*t*) and *τzx*(*t*) are the shear stresses in an X–Y–Z cartesian coordinate system.

Usually, fatigue cracks nucleate at the surface of mechanical components, where the local state of stress is biaxial or even uniaxial. Therefore, in laboratory fatigue tests, the aim is to replicate in a specimen the same biaxial or uniaxial state of stress, in which only two or less normal stress components are nonzero [26,28]. While, in plane stress, a biaxial stress may have up to three nonzero components, *σxx*(*t*), *σyy*(*t*) and *τxy*(*t*), special cases often employed in laboratory tests consider one normal *σxx*(*t*) and one shear *τxy*(*t*) stress, or nonzero normal stresses in two directions *σxx*(*t*), *σyy*(*t*). The uniaxial cases frequently adopted refer to a pure normal stress *σxx*(*t*) or shear stress *τxy*(*t*); see Figure 1.

**Figure 1.** Common state of stress in fatigue testing: biaxial (e.g., normal and shear stresses or normal stresses in two directions); uniaxial (e.g., pure normal stress or pure shear stress).

In tests with constant amplitude loadings, it is common to use harmonic (sinusoidal) functions. For example, a biaxial normal and shear stress with zero mean is:

$$
\sigma\_{\rm xx}(t) = \sigma\_{\rm xx,a} \sin(\omega t), \ \tau\_{\rm xy}(t) = \tau\_{\rm xy,a} \sin(\omega t - \varphi) \tag{1}
$$

where *ω* is the angular frequency and *σxx*,*<sup>a</sup>* and *τxy*,*<sup>a</sup>* are the stress amplitudes.

For this biaxial state of stress (as, in fact, in any multiaxial case), the magnitude of stress components may change proportionally (in-phase) or nonproportionally (out-of-phase) with time. In particular, the ratio between normal and shear stresses, *σxx*(*t*)/*τxy*(*t*), at any time instant does not vary if *ϕ* = 0, and both stresses follow two in-phase harmonic functions. Instead, the ratio varies with time if *ϕ* ≠ 0 and the two harmonic functions are out-of-phase. Additionally, the orientation of principal stress directions may change or not, depending on the value of *ϕ*: for in-phase stresses, they remain fixed; for out-of-phase stresses, they change with time.
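The effect of the phase shift on the stress ratio can be checked numerically from Equation (1). In this sketch the amplitudes and frequency are arbitrary example values, not data from the paper:

```python
import numpy as np

# Illustrative check of Eq. (1); amplitudes and frequency are arbitrary
# example values, not data from the paper.
w, s_a, t_a = 2 * np.pi * 10, 100.0, 50.0   # rad/s, MPa, MPa
t = np.linspace(0.002, 0.020, 10)           # times where sin(w*t) != 0

sx = s_a * np.sin(w * t)                    # normal stress sigma_xx(t)

# In-phase (phi = 0): the ratio sigma_xx/tau_xy stays constant in time
txy_in = t_a * np.sin(w * t)
ratio_in = sx / txy_in                      # constant, equal to s_a/t_a = 2

# Out-of-phase (phi = pi/2): the ratio varies from instant to instant
txy_out = t_a * np.sin(w * t - np.pi / 2)
ratio_out = sx / txy_out
```

The constant ratio for *ϕ* = 0 and the time-varying ratio for *ϕ* = π/2 are exactly the proportional and nonproportional cases described in the text.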

For harmonic stresses with the same frequency, the phase shift *ϕ* provides a simple measure of the degree of nonproportionality. This definition cannot be extended to random stresses, as they can be viewed as a superposition of many harmonic functions with different frequencies and phase shifts. In the random case, a statistical approach is needed to express the concepts of "fully correlated", "partially correlated" or "not correlated" (uncorrelated) signals. To this end, one introduces the so-called correlation coefficient between two random stresses, which is defined as the covariance normalized to the standard deviation (see below).

In tests with multiaxial random loadings, each stress component is a zero-mean stationary Gaussian process, which can be characterized in the frequency domain by a Power Spectral Density (PSD) matrix. For a biaxial random stress, *σxx*(*t*), *σyy*(*t*) and *τxy*(*t*), the PSD matrix takes the form [9,29]:

$$\mathbf{S}(\omega) = \begin{bmatrix} \mathbf{S}\_{\mathbf{xx}, \mathbf{x}\mathbf{x}}(\omega) & \mathbf{S}\_{\mathbf{xx}, \mathbf{y}\mathbf{y}}(\omega) & \mathbf{S}\_{\mathbf{xx}, \mathbf{x}\mathbf{y}}(\omega) \\ \mathbf{S}^{\*}\_{\mathbf{xx}, \mathbf{y}\mathbf{y}}(\omega) & \mathbf{S}\_{\mathbf{yy}, \mathbf{y}\mathbf{y}}(\omega) & \mathbf{S}\_{\mathbf{yy}, \mathbf{x}\mathbf{y}}(\omega) \\ \mathbf{S}^{\*}\_{\mathbf{xx}, \mathbf{x}\mathbf{y}}(\omega) & \mathbf{S}^{\*}\_{\mathbf{yy}, \mathbf{x}\mathbf{y}}(\omega) & \mathbf{S}\_{\mathbf{xy}, \mathbf{x}\mathbf{y}}(\omega) \end{bmatrix}, \quad \mathbf{S}\_{\mathbf{ij}}(\omega) = \int\_{-\infty}^{\infty} \mathbf{R}\_{\mathbf{ij}}(\delta) e^{-i\omega \delta} \,\mathbf{d}\delta \tag{2}$$

in which *Rij*(*δ*) is the autocorrelation (for *i* = *j*) and cross-correlation (for *i* ≠ *j*) function in time-domain, and *δ* is a time lag. The diagonal terms of *S*(*ω*) are the auto-PSDs *Sxx*,*xx*(*ω*), *Syy*,*yy*(*ω*) and *Sxy*,*xy*(*ω*), whereas the out-of-diagonal terms are the cross-PSDs *S* ∗ *xx*,*yy*(*ω*), *S* ∗ *xx*,*xy*(*ω*) and *S* ∗ *yy*,*xy*(*ω*), where the superscript \* denotes the complex conjugate. Hence, the cross-PSDs are the summation of a real and an imaginary part. The real part is an even function of *ω* (coincident spectrum or co-spectrum), and the imaginary part is an odd function of *ω* (quadrature spectrum or quad-spectrum). The PSD matrix in Equation (2) is Hermitian.
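A PSD matrix of the kind in Equation (2) can be estimated from sampled records with standard Welch-type estimators. The sketch below builds two synthetic, partially correlated records standing in for two stress components (the signals are invented for illustration) and verifies the Hermitian property stated above:

```python
import numpy as np
from scipy.signal import csd, welch

# Two partially correlated zero-mean Gaussian records standing in for two
# stress components; the signals are synthetic, for illustration only.
rng = np.random.default_rng(0)
fs, n = 1024.0, 16384
base = rng.standard_normal(n)
x = base + 0.5 * rng.standard_normal(n)
y = 0.8 * base + 0.6 * rng.standard_normal(n)

f, Sxx = welch(x, fs=fs, nperseg=1024)    # auto-PSD (real, nonnegative)
_, Syy = welch(y, fs=fs, nperseg=1024)
_, Sxy = csd(x, y, fs=fs, nperseg=1024)   # cross-PSD (complex valued)
_, Syx = csd(y, x, fs=fs, nperseg=1024)

# The 2x2 PSD matrix [[Sxx, Sxy], [Syx, Syy]] is Hermitian: Syx = conj(Sxy)
hermitian = np.allclose(Syx, np.conj(Sxy))
```

The real part of `Sxy` is the co-spectrum and the imaginary part the quad-spectrum, matching the decomposition described in the text.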

Thanks to the relationship between covariance terms and the zero-order spectral moments, *Cij* = *λ*0,*ij*, the covariance matrix can be defined as [29]:

$$\mathbf{C} = \begin{bmatrix} \mathbf{C}\_{\text{xx}, \text{xx}} & \mathbf{C}\_{\text{xx}, \text{yy}} & \mathbf{C}\_{\text{xx}, \text{xy}} \\ \mathbf{C}\_{\text{yy}, \text{xx}} & \mathbf{C}\_{\text{yy}, \text{yy}} & \mathbf{C}\_{\text{yy}, \text{xy}} \\ \mathbf{C}\_{\text{xy}, \text{xx}} & \mathbf{C}\_{\text{xy}, \text{yy}} & \mathbf{C}\_{\text{xy}, \text{xy}} \end{bmatrix}, \quad \mathbf{C}\_{\text{ij}} = \int\_{-\infty}^{\infty} \mathbf{S}\_{\text{ij}}(\omega) \, \mathbf{d}\omega \tag{3}$$

Equation (3) is a symmetric matrix. The main diagonal terms are the variance of each stress component, *Cxx*,*xx* = Var(*σxx*(*t*)), *Cyy*,*yy* = Var(*σyy*(*t*)) and *Cxy*,*xy* = Var(*τxy*(*t*)); the out-of-diagonal terms are the covariance of two components, *Cxx*,*yy* = Cov(*σxx*(*t*), *σyy*(*t*)), *Cxx*,*xy* = Cov(*σxx*(*t*), *τxy*(*t*)) and *Cyy*,*xy* = Cov(*σyy*(*t*), *τxy*(*t*)). The covariances are used to define the correlation coefficient *rij* = *Cij*/√(*CiiCjj*) between stress components *i* and *j*. This coefficient is close to unity when two components are "fully correlated" (proportional); it tends to zero when they are "uncorrelated" (nonproportional) [29].
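The behavior of the correlation coefficient can be illustrated with two harmonic stress histories sampled over whole periods; the amplitudes and frequency below are arbitrary example values:

```python
import numpy as np

def corr_coeff(si, sj):
    """r_ij = Cov(s_i, s_j) / sqrt(Var(s_i) * Var(s_j)), as in the text."""
    ci = si - si.mean()
    cj = sj - sj.mean()
    return float(np.mean(ci * cj) / np.sqrt(np.mean(ci**2) * np.mean(cj**2)))

t = np.linspace(0.0, 1.0, 1000, endpoint=False)     # 5 full periods at 5 Hz
sxx = 100.0 * np.sin(2 * np.pi * 5 * t)             # normal stress (MPa)
tau_prop = 50.0 * np.sin(2 * np.pi * 5 * t)         # in-phase shear
tau_nonprop = 50.0 * np.sin(2 * np.pi * 5 * t - np.pi / 2)  # 90 deg shifted

r_prop = corr_coeff(sxx, tau_prop)        # ~1: fully correlated (proportional)
r_nonprop = corr_coeff(sxx, tau_nonprop)  # ~0: uncorrelated (nonproportional)
```

For harmonic components of equal frequency the coefficient reduces to cos *ϕ*, which links this statistical definition back to the phase-shift measure used in the constant amplitude case.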

In some testing layouts, the type of local random stress in the specimen (e.g., biaxial normal and shear stresses *σxx*(*t*), *τxy*(*t*)) closely depends on the type of machine, specimen geometry, constraints, and type of excitation. For example, a cantilever cylindrical specimen with a circumferential notch can be subjected to various independent types of loadings applied at the free end; see Figure 2. In the most general case, they are axial load *P*(*t*); torsion *MT*(*t*); and two bending moments, *MB*,*x*(*t*) and *MB*,*y*(*t*), in two orthogonal planes. For this specimen geometry, the maximum stresses are located on the surface at the center of the notch.

Assuming a bending–torsion loading, *MB*,*x*(*t*) and *MT*(*t*), without *P*(*t*) and *MB*,*y*(*t*), the cylindrical specimen experiences a biaxial normal and shear stress at the critical location. By contrast, the specimen under the two bending moments *MB*,*x*(*t*) and *MB*,*y*(*t*) develops a uniaxial normal stress at the notch surface. The location of the maximum normal stress changes or not, depending on whether the instantaneous value *MB*,*y*(*t*)/*MB*,*x*(*t*) varies or not.


**Figure 2.** Notched cantilever cylindrical specimen under axial *P*(*t*); torsion *MT*(*t*); and two bending moments, *MB*,*x*(*t*) and *MB*,*y*(*t*), applied at the free end.

If the cylindrical specimen in Figure 2 undergoes bending *MB*,*x*(*t*) and torsion *MT*(*t*) moments, it is straightforward to determine the elastic peak normal and shear stresses as [30]:

$$
\sigma\_{z,p}(t) = \frac{32\,M\_{\text{B},x}(t)}{\pi d^3} K\_{t,\text{B}}, \quad \tau\_{xy,p}(t) = \frac{16\,M\_{\text{T}}(t)}{\pi d^3} K\_{t,\text{T}} \tag{4}
$$

where *d* is the specimen diameter in the smallest section, and *Kt*,*B* and *Kt*,*T* are the stress concentration factors in bending and torsion. Equation (4) makes apparent how the local stress is directly proportional to the applied loadings. Moreover, it also highlights the role of the stress concentration factors, *Kt*,*B* and *Kt*,*T*, in increasing the local stress.
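Equation (4) is simple enough to code directly. In this sketch the moments, diameter and stress concentration factors are placeholder values, not data from any real specimen:

```python
import numpy as np

def peak_stresses(M_Bx, M_T, d, Kt_B=1.0, Kt_T=1.0):
    """Eq. (4): elastic peak normal and shear stresses at the notch.

    Moments in N*m, diameter in m, output in Pa; the Kt values and the
    numbers below are illustrative placeholders.
    """
    sigma_zp = 32.0 * M_Bx / (np.pi * d ** 3) * Kt_B
    tau_xyp = 16.0 * M_T / (np.pi * d ** 3) * Kt_T
    return sigma_zp, tau_xyp

# d = 10 mm, equal bending and torsion moments of 5 N*m, Kt = 1:
# the peak normal stress is then exactly twice the peak shear stress
sigma, tau = peak_stresses(5.0, 5.0, 0.01)
```

The factor-of-two ratio for equal moments follows directly from the 32 versus 16 coefficients of the bending and torsion section moduli of a round bar.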

Another common layout often exploited in fatigue tests is that in which the specimen is excited at its base. Figure 3 illustrates a cantilever cylindrical specimen, subjected to two orthogonal accelerations, *ÿ*(*t*) and *ẍ*(*t*), at the clamped end. Both accelerations make the specimen vibrate in bending. In most cases, though, only the vertical acceleration is applied.

**Figure 3.** Notched cantilever cylindrical specimen subjected to orthogonal base accelerations *ÿ*(*t*) and *ẍ*(*t*).

In this testing layout, a modal analysis is normally carried out to identify the modes of vibration and the natural frequencies of the specimen [9]. Usually, the input acceleration is tuned at the first specimen natural frequency. Based on the harmonic analysis, the dynamic response (e.g., stress) of the system is associated to the excitation (e.g., force or acceleration); here, special care is required for kinematic excitation as the natural dynamics changes if compared to the force/dynamics excitation [9].

Assuming a time-invariant linear system, simple relationships exist between the input acceleration and the resulting local stress. For example, for a harmonic vertical acceleration *ÿ*(*t*) = *ay* cos(*ωt*) with amplitude *ay* and frequency *ω* centered on the first resonance frequency of the specimen, the corresponding peak stress is also harmonic with same frequency and amplitude:


$$(\sigma\_{\mathbf{z},\mathbf{p}})\_{\mathbf{a}} = |H\_{\sigma}(\omega)| \cdot a\_{\mathbf{y}} \tag{5}$$

where *Hσ*(*ω*) is the frequency transfer function for bending. Similar relations hold for other nonzero stress components, if present. For example, an eccentric (off-set) tip mass mounted on the specimen in Figure 3 would also determine a torsional deformation of the specimen, with a corresponding shear stress at the notch. The amplitude of the peak shear stress would then be (*τxy*,*p*)*<sup>a</sup>* = |*Hτ*(*ω*)|·*ay*, where *Hτ*(*ω*) is the frequency transfer function in torsion.

The previous relations can be generalized to the case of a specimen under random base accelerations:

$$S(\omega) = H(\omega) S\_a(\omega) H^{\*T}(\omega) \tag{6}$$

where *Sa*(*ω*) is the PSD matrix of the input accelerations, *H*(*ω*) the frequency transfer function matrix characterizing the system, and *S*(*ω*) is the PSD matrix of the stress, defined in Equation (2). The matrix *Sa*(*ω*) specifies the frequency content and correlations of the random accelerations applied to the specimen, in terms of auto- and cross-PSDs.
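Equation (6) can be evaluated one frequency line at a time as a plain matrix product. The sketch below does this for a hypothetical system with two base accelerations and two stress components; all numerical values of *H* and *Sa* are invented for illustration, not taken from a real test rig:

```python
import numpy as np

# One frequency line of Eq. (6): S(w) = H(w) Sa(w) H(w)^{*T}.
# H maps two base accelerations to two stress components; all values are
# illustrative placeholders.
H = np.array([[2.0 + 1.0j, 0.5 + 0.0j],
              [0.3 + 0.0j, 1.5 - 0.5j]])   # transfer function matrix at w
Sa = np.array([[1.0, 0.2 + 0.1j],
               [0.2 - 0.1j, 0.8]])         # Hermitian input-acceleration PSD

S = H @ Sa @ H.conj().T                    # stress PSD matrix at w, Eq. (6)

# S is again Hermitian, with real, nonnegative auto-PSDs on the diagonal
hermitian = np.allclose(S, S.conj().T)
```

Because *Sa*(*ω*) is Hermitian positive semidefinite, the congruence with *H*(*ω*) preserves these properties, so the resulting stress PSD matrix has the same structure as Equation (2).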

Before carrying out the tests, one has to carefully establish the relationship between the type and number of random loadings applied to a specimen and the resulting state of stress at the critical point. Not always does a multiaxial input determine a multiaxial state of stress; in some circumstances, a testing system under a multiaxial input develops a uniaxial stress [31,32]. A way to verify the state of stress evaluated theoretically or numerically (e.g., results obtained by FE analysis) is by means of strain gages. Although some specimens have a complex notched geometry that makes it difficult, if not impossible, to attach strain gages directly at the notch, the local stress can still be assessed indirectly through strains measured in other points of either the specimen or the testing system [17–19].
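Turning measured surface strains into stresses relies on plane-stress Hooke's law, which is the standard basis for such strain-gage checks (the text does not spell out the conversion, so this is a generic sketch; the elastic constants are typical steel values, used only for illustration):

```python
import numpy as np

def biaxial_stress_from_strain(eps_x, eps_y, E=206e9, nu=0.3):
    """Plane-stress Hooke's law: normal stresses from two orthogonal surface
    strains (e.g., from a strain-gage rosette). E and nu are generic steel
    values, chosen here only for illustration."""
    f = E / (1.0 - nu ** 2)
    sigma_x = f * (eps_x + nu * eps_y)
    sigma_y = f * (eps_y + nu * eps_x)
    return sigma_x, sigma_y

# Uniaxial sanity check: eps_y = -nu * eps_x recovers sigma_y = 0 and
# sigma_x = E * eps_x (here 206 MPa for 1000 microstrain)
sx_gage, sy_gage = biaxial_stress_from_strain(1e-3, -0.3e-3)
```

A two- or three-gage rosette at (or near) the critical location thus gives the biaxial normal stresses directly, which can then be compared against the FE prediction.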

In addition to the external random loadings and the state of stress of interest, the choice of the testing machine also offers advantages and disadvantages when performing the fatigue tests; they are discussed in the next section.

### **3. Fatigue Testing Machines**

Two different types of testing machines are usually adopted in mechanical laboratories to perform fatigue tests with random loading. They are known as servo-hydraulic, Figure 4a, and electrodynamic shaker tables, Figure 4b. Servo-hydraulic machines impose a force and/or torque as the input excitation, while electrodynamic shakers apply a force to the vibrating table (where the acceleration is usually controlled with a closed-loop control).

**Figure 4.** Testing machines adopted in mechanical laboratories: (**a**) servo-hydraulic; (**b**) electrodynamic shaker table.


Electrodynamic shakers have a great advantage in that they allow their table to be driven at high frequencies. Consequently, fatigue tests by shaker tables take much less time than tests by servo-hydraulic machines. On the other hand, in servo-hydraulic machines, the force or torque applied to the specimen is controlled directly by the control system; this permits the local state of stress in the specimen to be related directly to the applied loadings, see, for example, Equation (4). With electrodynamic shakers, instead, the local state of stress depends on the dynamic response of the test specimen. Carrying out a dynamic analysis to estimate the local stress in the specimen is, therefore, of considerable importance before performing fatigue tests with shaker tables.

With a uniaxial electrodynamic shaker, a specimen can easily be subjected to a random bending when excited by a vertical base acceleration; see Figure 3. However, by an appropriate setup [17–19], electrodynamic shakers can also be used for tests with bending–torsion random loadings, which result in biaxial normal and shear stresses, *σxx*(*t*) and *τxy*(*t*). This type of bending–torsion loading is not commonly found in tests with a servo-hydraulic machine. In fact, biaxial servo-hydraulic machines can normally apply axial-torsion loading to produce a biaxial state of stress *σxx*(*t*), *τxy*(*t*).

Servo-hydraulic machines, on the other hand, have a greater flexibility in controlling the amplitude and phase of axial-torsion loadings. In the case of fatigue testing with shaker tables, bending–torsion loadings are reached by a more complex system configuration, and their values are not controlled directly by the system. This restriction may pose some difficulty in determining the actual values of the local state of stress that is obtained in the specimen. For this reason, in applications with shaker tables, it is also important to monitor accelerations and strains. Accelerometers are often used to control the acceleration imposed on the table and the dynamic response of the system. Strain gages are employed for measuring and controlling strain at the point of maximum stress, or nearby in case it is not directly accessible. Strain gages are attached on the specimen close to a notch or hole, or on the clamping system to provide an indirect measure of the stress in the specimen.

In electrodynamic shaker tests, the fixation of the sample to the shaker armature or to the shaker table is critical; it is required that the fixation of the specimen should not have any natural frequencies in the frequency range of testing. Usually, the base acceleration of the fixation is controlled in a closed loop. Experimental transfer functions are usually based on sine-sweep, impact or random excitation and can be compared to the results obtained by using the finite element model. Proper dynamic analysis usually requires Experimental Modal Analysis (EMA) [9].

The following section makes a critical analysis of the various testing methodologies encountered in the literature, which differ not only in terms of machines but also of specimen geometries, external loads and state of stress in the critical location. Throughout the text, the terms "testing method" and "testing system" define a specific combination of testing machine, specimen geometry and load set used to perform a fatigue test.

### **4. Fatigue Testing Systems**

Various fatigue testing systems are proposed in the literature, often with significant differences. Some systems apply deterministic (harmonic) loads, and others apply random loadings. While reviewing the systems described in the literature, this section emphasizes several important features related to fatigue testing. These features include the type of machine, specimen geometries, imposed constraints, number of inputs needed in terms of force and/or torque and acceleration, random loadings acting on the specimen (e.g., axial, bending, torsion, bending–torsion and tension–torsion loadings) and the resulting state of stress in the critical location (e.g., uniaxial and biaxial). All these features constitute the general criterion adopted to classify the testing systems; see Table 1. The table groups the various systems based on common or different characteristics. It also points out whether the multiaxial state of stress applied by the system is correlated or not. Additionally mentioned in Table 1 is the distinction of the type of specimen (i.e., plate, cylindrical and more elaborated), which is discussed in more detail later on. The classification criterion of Table 1 may represent a useful guideline to classify any new testing system.


**Table 1.** Classification of fatigue testing systems (C = correlated stress components; UC = uncorrelated stress components).

<sup>1</sup> Ax = axial; Be = bending; To = torsion; Ax-To = axial-torsion; Be-To = bending–torsion. <sup>2</sup> Biaxial with two normal stresses, *σxx* and *σyy*. <sup>3</sup> Biaxial with normal stress, *σxx*, and shear stress, *τxy*.

Electrodynamic shakers seem to be the most used machine, at least for the systems considered in Table 1. A possible reason for this is that the time to perform fatigue tests in high-cycle regime with random loadings is significantly shorter with shakers than with servo-hydraulic machines. Another reason is that accelerated vibration tests in the automotive and aerospace industry are defined for electrodynamic shakers. Table 1 shows that the maximum number of excitations is two (e.g., vertical and horizontal accelerations), although some shakers allow three independent excitations to be applied simultaneously (e.g., vertical, horizontal and longitudinal acceleration)—in fact, only two of them are the active channels. Note also that servo-hydraulic machines in Table 1 are used to apply axial, torsion or axial-torsion loading, whereas shaker tables can apply bending, torsion or bending–torsion loadings. The type of state of stress in the specimen varies widely from one system to the other. However, cylindrical specimens can produce a pure shear stress, or combined normal and shear stresses, whereas the other specimen geometries cannot. Details of each type of specimen are described in the following sections.

### *4.1. Plate Specimens*

Thin plate specimens with rectangular or square shape, excited at the base in vertical direction, represent the simplest and most convenient layout to produce a uniaxial state of stress in notches or holes; see [10–14] in Table 1. These specimens are usually mounted on shaker tables and then excited by harmonic acceleration centered on the specimen resonant frequency. Harmonic acceleration has the advantage of easily obtaining the harmonic transfer function as the ratio between accelerations at two measurement positions. In fact, two accelerometers can be used [10–12], one attached on the base of the shaker and the second one attached at the free end of the specimen. A few accelerometers positioned along the entire length of the specimen are also observed in some applications [40,41] with the aim to obtain the modes of vibration at resonance frequencies. Although harmonic loadings allow the harmonic transfer function of the system to be determined, and they can also be used for constant amplitude fatigue tests, the random acceleration is the type of excitation most used in fatigue tests with shaker tables, so as to replicate the random loadings commonly found in engineering applications.
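The ratio-of-accelerations idea can be sketched numerically: with a harmonic excitation at a known frequency, the transfer function between base and free-end accelerometers follows from the ratio of their spectra at that frequency. The signal parameters below are assumptions for illustration:

```python
import numpy as np

# Sketch: estimate the harmonic transfer function between two accelerometer
# positions as the ratio of their spectra at the excitation frequency.
fs = 2048.0                   # sampling frequency [Hz] (assumed)
t = np.arange(0, 1.0, 1/fs)   # 1 s record -> 1 Hz frequency resolution
f0 = 50.0                     # excitation frequency [Hz] (assumed)

a_base = 1.0 * np.sin(2*np.pi*f0*t)            # base accelerometer (input)
a_tip  = 8.0 * np.sin(2*np.pi*f0*t - np.pi/3)  # free-end accelerometer

A_base = np.fft.rfft(a_base)
A_tip = np.fft.rfft(a_tip)
k = int(f0)  # FFT bin of the excitation frequency (1 Hz resolution)

H = A_tip[k] / A_base[k]      # transfer function at f0
print(abs(H), np.angle(H))    # gain ~8, phase ~ -pi/3
```

With random excitation, the same quantity would instead be estimated from auto- and cross-PSDs (e.g., the H1 estimator), averaged over many records.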

It is important to underline that plate specimens with rectangular or square shape, mounted on shaker tables and excited at the base, cannot produce a biaxial state of stress at the critical location, but rather a uniaxial stress state. This circumstance occurs even if the specimen is excited by a multiaxial input along more than one direction, for example, vertical and horizontal accelerations, each of which can produce a bending loading. Indeed, these thin plate specimens are usually subjected to bending loadings that lead to a uniaxial state of stress in the critical location.

Plate specimens with circular shape are also employed for tests with shaker tables [35,36]. As depicted in Figure 5, such specimens are fixed at the center and then excited by a vertical random acceleration, which produces a biaxial state of stress with the critical location outside the center. The intensity of the stress components can be controlled by varying the diameter or thickness of the specimen. However, the nonzero stress components are only the radial and circumferential ones. Plate specimens with a circular shape then do not develop shear stress components at the critical point; see [35,36].

**Figure 5.** Testing system with a circular plate specimen excited by a uniaxial shaker (reprinted from [35], with permission from Elsevier, 2021).

Plate specimens with cruciform geometry, excited by random loadings, are encountered in some fatigue tests using a servo-hydraulic machine; see [33,42] in Table 1. More often, this type of specimen is used in constant amplitude low cycle fatigue tests [43], whereas it seems not to be used in low cycle regime with random loadings. When loaded by axial loadings applied to its two orthogonal arms (see Figure 6), the cruciform specimen can develop in the critical location a biaxial stress with two normal stress components, similarly to the circular plate specimen.

**Figure 6.** Plate specimens with cruciform geometry (reprinted from [33], with permission from Elsevier, 1999).

Other plate specimen shapes (e.g., a particular wing shape) can be considered in tests if a biaxial stress is to be obtained by means of shaker tables; see [37] in Table 1. Though a plate specimen can develop a biaxial state of stress with two normal stresses, it cannot generate a biaxial state with normal and shear stresses. To obtain this stress, a cylindrical specimen is required.

### *4.2. Cylindrical Specimens*

Cylindrical specimens, with or without notches, can be loaded in bending if excited by a vertical acceleration imposed by a shaker table. The acceleration can be harmonic or random, and the bending loading accordingly. In the typical layout, the specimen has one end free and the other fixed to the shaker table. The system may be instrumented by two or more accelerometers to measure accelerations at different points, and by strain gages attached on notches to control the strain [44].

In some applications, the above system is excited simultaneously by tri-axis excitations centered on specimen resonance frequencies [31,32]. Although the random loadings correspond to axial and to bending loadings in two planes, the state of stress remains uniaxial in the critical location of interest. Indeed, it is only the maximum stress position that changes, depending on the intensity and phase shift of the excitations. Therefore, the use of a tri-axial shaker does not assure that a biaxial state of stress be obtained on a cantilever cylindrical specimen. This example emphasizes that a different configuration of the test system is needed to reach a biaxial state of stress in cylindrical specimens.

To this end, a possible solution is to exploit a uniaxial shaker with a rotary table structure and a lever [15]. On one side, the cantilever cylindrical specimen is fixed to the holder structure; on the other side, it is attached to the lever. By rotating the lever arm with an arbitrary angle in the range 0 ≤ *α* ≤ *π*/2, the shaker excites the lever by imposing simultaneous bending and torsion moments to the specimen. Due to its layout, this system can only develop a coupled (correlated) bending–torsion loading, i.e., it is limited to proportional loadings. Accordingly, in Table 1, this system is listed in the fifth row (ref. [15], with one input). Choosing either of the two limit angular values, the system can apply a pure bending when the lever is parallel to the specimen axis (*α* = 0), or a pure torsion when the lever is perpendicular to the specimen axis (*α* = *π*/2). Strain gages are also attached on the lever to measure the strain and to control the value of normal stress in the specimen. According to the imposed angle *α*, it is straightforward to calculate the value of stress in any system configuration, i.e., biaxial normal and shear stresses when 0 < *α* < *π*/2, uniaxial pure normal stress when *α* = 0 and uniaxial pure shear stress when *α* = *π*/2.
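A minimal sketch of how the lever angle sets the bending–torsion mix on the specimen. The geometry, the force and the angle convention (lever parallel to the specimen axis at *α* = 0, giving pure bending) are all assumptions for illustration:

```python
import numpy as np

# Sketch of the rotary-lever system in [15]: a single shaker force F on a
# lever of length L produces bending and torsion moments on the specimen
# that depend on the lever angle alpha. All values are assumed.
F = 100.0   # shaker force on the lever [N] (assumed)
L = 0.2     # lever arm length [m] (assumed)
d = 0.01    # specimen diameter [m] (assumed)

W_b = np.pi * d**3 / 32   # bending section modulus of the round specimen
W_t = np.pi * d**3 / 16   # torsion section modulus

def stresses(alpha):
    """Surface normal and shear stress for lever angle alpha (assumed convention)."""
    M_b = F * L * np.cos(alpha)   # bending moment: lever parallel, alpha = 0
    M_t = F * L * np.sin(alpha)   # torsion moment: lever perpendicular, alpha = pi/2
    return M_b / W_b, M_t / W_t   # sigma_xx, tau_xy

print(stresses(0.0))        # pure bending -> pure normal stress
print(stresses(np.pi/2))    # pure torsion -> pure shear stress
print(stresses(np.pi/4))    # biaxial normal + shear stress (proportional)
```

Because both moments derive from the same force, the normal and shear stresses are fully correlated, which is why this layout is limited to proportional loadings.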

By adopting a similar system layout, fatigue tests can also be performed with uncoupled (uncorrelated) bending–torsion loadings [15]. In contrast to the system described so far, now two uniaxial shakers are controlled independently. They are mounted on a table in order to excite two arms positioned perpendicularly; see Figure 7. In this case, not only coupled but also uncoupled bending–torsion random loadings can be achieved. Accordingly, in Table 1, this system is listed in the sixth row (ref. [15], with two inputs). This system yields a pure bending loading when only the arm parallel to specimen axis is excited by a uniaxial shaker. Instead, if the excitation is only imposed by the other shaker (arm perpendicular to the specimen), the specimen is subjected to torsion loading, without bending. Of course, this testing system layout needs two uniaxial shakers. Furthermore, it requires an input/output system to control the accelerations in the two shakers simultaneously.

**Figure 7.** Testing system with a cylindrical specimen excited by two independent shakers.

By adding two tip masses of different weight at the free end of a cylindrical specimen excited by a uniaxial shaker table, it is possible to obtain a bending–torsion loading in the specimen; see [16] in Table 1. This layout shows that a specimen excited by a base vertical random acceleration experiences a normal and shear biaxial stress. In this layout, however, both stress components are always coupled. Their relative magnitude can be controlled by increasing or decreasing the weight ratio of the two tip masses. A pure bending loading results by selecting the same weight for both tip masses. However, obtaining the opposite loading case (only torsion) is not possible—indeed, torsion is always coupled with bending. Another aspect to mention is that this layout seems not to have been verified by experimental tests, but only by numerical simulations [16].

A way to decouple the bending and torsion loadings—and thus improve the capabilities of the testing system in [16]—is to use a tri-axis shaker, which can apply up to three independent excitations along three orthogonal directions. Inspired by [16], a special holder structure (see Figure 8) has been designed to allow the bending and torsion random loadings to be controlled independently; see [17–19] in Table 1. In the system in Figure 8, a U-notched cylindrical specimen is fixed to a T-clamp at one end. At the other end, the specimen mounts a cantilever arm with two equal masses, and it is also constrained by a thin and flexible plate.

**Figure 8.** Testing system composed of a cylindrical specimen with eccentric tip masses, and excited by vertical and/or horizontal base accelerations (adapted from [19], with permission from Elsevier, 2018).

This thin and flexible plate constrains any horizontal movement but allows the rotation of the specimen end. The plate thus prevents the specimen from being subjected to bending in the horizontal direction, but it is thin enough to allow the specimen to rotate around its axis when subjected to torsion. Therefore, when the specimen is excited by a vertical base acceleration, Figure 9a, it vibrates in the vertical plane (only bending loading). When, instead, it is excited by a horizontal base acceleration, Figure 9b, it vibrates in the horizontal plane (only torsion loading caused by the oscillation of the two eccentric masses).

**Figure 9.** Testing system with a cylindrical specimen excited by (**a**) vertical base acceleration and (**b**) horizontal base acceleration.

The testing system has accelerometers to monitor the acceleration of the shaker table in closed-loop control, and the acceleration of the extremity of the specimen and cantilever arm. As it is not possible to use strain gages to measure the strains directly at the specimen notch (it is too small), an indirect measure is performed. Strain gages are indeed glued onto the lateral faces of the T-clamp, so as to provide a measure of the bending moments there and, indirectly, of the bending and torsion loadings and, accordingly, of the resulting normal and shear stress at the specimen notch.

A hollow cylindrical specimen with a small hole perpendicular to its axis, loaded by a servo-hydraulic machine, may be subjected to an uncoupled biaxial state of stress; see [34] in Table 1. This specimen is fixed at both ends, where the servo-hydraulic machine applies an axial-torsion random loading. The biaxial state of stress at the hole can be monitored with minimal difficulty by controlling the input force and/or torque and relating it to the corresponding stress components. Once again, this system configuration emphasizes how servo-hydraulic machines offer a greater simplicity over shaker tables in the direct control of both the intensity and phase shift between axial-torsion loading actions The testing system has accelerometers to monitor the accelerations of the shaker table in closed-loop control, and the acceleration of the extremity of the specimen and cantilever arm. As it is not possible to use strain gages to measure the strains directly at the specimen notch (being it too small), an indirect measure it performed. Strain gages are indeed glued onto the lateral faces of the T-clamp, so as to provide a measure of the bending moments there and, indirectly, of the bending and torsion loadings and, accordingly, of the resulting normal and shear stress at the specimen notch.

Note that the tri-axis shaker cannot excite only one single axis or two axes at a time, keeping the others at rest. All three axes need to be active simultaneously, which appears to be a small limitation if tests with two, or even only one, acceleration components need to be carried out. However, this circumstance can easily be overcome by setting a very low level of acceleration on the "secondary" axes that, theoretically, should not be activated.

It is finally worth mentioning that the system layout described so far, as shown in Figure 8, has been adopted in [38,39], though with two lateral thin plates instead of one, to perform bending–torsion fatigue tests.

Hollow cylindrical specimens subjected to axial, torsion and internal pressure loadings perhaps represent the most versatile testing system in terms of the achievable state of stress. Indeed, this system allows a three-dimensional stress state (two normal stresses and one shear stress) to be obtained by servo-hydraulic machines and pressure chambers [45,46]. On the other hand, it may be presumed that loading a hollow specimen in axial and torsion, simultaneously with an internal pressure, requires special-purpose equipment that is likely to be more expensive than the usual testing machines found in laboratories.
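The three-dimensional stress state mentioned above can be estimated, for a thin-walled tube, from classical closed-form expressions (hoop stress from pressure, axial stress from the pressure end-cap effect plus the axial force, and the torsion shear stress). The sketch below is a generic thin-walled approximation under assumed inputs, not a formula taken from [45,46]:

```python
import math

def triaxial_stress_state(F, T, p, r_mean, t):
    """Thin-walled estimate of the stress state in a hollow cylinder
    under axial force F (N), torque T (N*m) and internal pressure p (Pa),
    with mean radius r_mean (m) and wall thickness t (m).
    Returns (hoop stress, axial stress, shear stress) in Pa."""
    A = 2 * math.pi * r_mean * t                 # thin-walled cross-sectional area
    sigma_hoop = p * r_mean / t                  # hoop stress from internal pressure
    sigma_axial = p * r_mean / (2 * t) + F / A   # end-cap effect + axial force
    tau = T / (2 * math.pi * r_mean**2 * t)      # thin-walled torsion shear stress
    return sigma_hoop, sigma_axial, tau
```

The returned triple gives the two normal stresses and the one shear stress that characterize the three-dimensional stress state achievable with this system.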

A hollow cylindrical specimen with a small hole perpendicular to its axis, loaded by a servo-hydraulic machine, may be subjected to an uncoupled biaxial state of stress; see [34] in Table 1. This specimen is fixed at both ends, where the servo-hydraulic machine applies an axial-torsion random loading. The biaxial state of stress at the hole can be monitored with minimal difficulty by controlling the input force and/or torque and relating it to the corresponding stress components. Once again, this system configuration emphasizes how servo-hydraulic machines offer a greater simplicity over shaker tables in the direct control of both the intensity and phase shift between axial-torsion loading actions on cylindrical specimens. Instead, cantilever cylindrical specimens excited by shakers require the dynamic response of the specimen fixed on the holder system to be determined in order to evaluate the normal and shear stress values at the critical location.
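Relating the controlled input force and torque to the corresponding stress components, as described above, amounts to applying the elementary formulas for a hollow circular cross-section. The sketch below is a minimal illustration under assumed inputs (the function name is hypothetical, and the stress concentration introduced by the small hole is deliberately neglected, so only nominal stresses are returned):

```python
import math

def hole_stress_state(F, T, d_out, d_in):
    """Nominal normal and shear stress (Pa) in a hollow cylinder of outer
    diameter d_out and inner diameter d_in (m) under axial force F (N)
    and torque T (N*m); the stress concentration at the small hole is
    neglected."""
    A = math.pi / 4 * (d_out**2 - d_in**2)   # cross-sectional area
    J = math.pi / 32 * (d_out**4 - d_in**4)  # polar moment of inertia
    sigma = F / A                            # normal stress from the axial force
    tau = T * (d_out / 2) / J                # shear stress at the outer surface
    return sigma, tau
```

Because force and torque are controlled independently by the servo-hydraulic machine, the two returned components form the uncoupled biaxial state of stress at the hole.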

### *4.3. More Elaborated Specimens*

More elaborated specimens are developed with the aim of performing accelerated fatigue tests using shaker tables and simulating a real-world scenario of complex structures. A Y-shaped specimen with a central hole and two masses at the free ends has been proposed; see Figure 10. The Y-shaped specimen is made of three rectangular cross-sections arranged at 120° around the hole and has, in the frequency range up to 2 kHz, several natural frequencies that can be excited in vibration fatigue. The attached masses can be used to adjust particular natural frequencies [21]. In [22], it was shown that the internal damping has a significant influence on the fatigue damage. In [20], multiaxial loads were achieved by exciting two mode shapes (one in the vertical and one in the horizontal direction). In vibration fatigue, the specimen is typically considered as broken when the natural frequency drops by 2–5% (different values are used in different studies); see, e.g., [9]. The fatigue parameters (fatigue strength and inverse slope of the S–N curve) need to be experimentally identified using the specimen; see, e.g., [23], where 10 specimens were used. It may finally be presumed that manufacturing several specimens with a more elaborated geometry has a slightly higher cost than producing specimens with simpler shapes.

**Figure 10.** Testing system including Y-shaped specimen with a central hole and two masses at the free ends subjected to horizontal force and vertical acceleration (reprinted from [23], with permission from Elsevier, 2016).
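The failure criterion used in vibration fatigue (a 2–5% drop in a tracked natural frequency with respect to its initial value) can be expressed as a simple monitoring routine. This is a generic sketch, not code from the cited studies; the function name and the form of the input (a sequence of tracked frequency estimates) are assumptions:

```python
def index_at_failure(freq_history, drop=0.05):
    """Return the index in `freq_history` (a sequence of tracked natural
    frequency estimates, Hz) at which the frequency has dropped by the
    relative threshold `drop` (e.g. 0.05 for 5%) with respect to the
    initial value; returns None if the criterion is never met."""
    f0 = freq_history[0]                 # initial (undamaged) natural frequency
    for i, f in enumerate(freq_history):
        if f <= f0 * (1.0 - drop):       # relative frequency drop reached
            return i
    return None
```

Different studies adopt different thresholds, so `drop` is left as a parameter rather than fixed at a single value.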

In [20], two electrodynamic shakers were used: one for vertical random excitation (exciting the first natural frequency) and one for horizontal excitation (exciting the second natural frequency). Masses attached to the Y-specimen allowed the modal frequencies to be adjusted as needed. The test system is instrumented with accelerometers positioned at different points and a strain gage attached in the critical region. The accelerometers are used to monitor in real time the dynamic response of the system, which in turn updates the FE model. The vertical excitations are controlled in a closed loop with the accelerometer measurements. Another excitation is applied perpendicular to the shaker vertical axis; it is imposed close to the specimen hole and monitored by a force transducer.

This special Y-shaped system is excited by two uncoupled random loadings (along the vertical and horizontal directions); see [20–25] in Table 1. The system can develop an uncoupled biaxial state of stress at the critical location in terms of normal stress components.
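The experimental identification of the fatigue parameters mentioned above (fatigue strength and inverse slope of the S–N curve) is commonly performed by fitting the Basquin relation N = C·s^(−k) to the test results in log–log coordinates. The following is a minimal least-squares sketch under assumed inputs, not the specific identification procedure used in [23]:

```python
import math

def fit_basquin(stress, cycles):
    """Least-squares fit of the Basquin relation N = C * s**(-k) in
    log-log coordinates; returns (k, C), where k is the inverse slope
    of the S-N curve. `stress` and `cycles` are paired test results."""
    xs = [math.log10(s) for s in stress]
    ys = [math.log10(n) for n in cycles]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # slope of the regression line log10(N) vs. log10(s)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    k = -slope                    # inverse slope of the S-N curve
    C = 10 ** (my + k * mx)       # intercept back-transformed from log space
    return k, C
```

With only a handful of specimens (e.g., the 10 used in [23]), the scatter of the fitted parameters can be considerable, which is why the number of specimens matters.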

### **5. Conclusions**

This paper presented an overview of the various fatigue testing systems used for subjecting metals to random loadings, as they are described in relevant articles from the literature. The presented overview of relevant works compared the different testing systems in terms of several characteristics, namely, the type of machine (e.g., servo-hydraulic and shaker tables), specimen geometry, number of inputs needed to carry out a fatigue test, random loadings acting on the specimen and resulting local state of stress. For each of the above characteristics, Table 2 summarizes the specific features adopted for classifying the various systems. Based on the specific features in Table 2, all the analyzed testing systems were also classified into a more comprehensive table (named Table 1 in the text), which allowed the analogies, differences, advantages and possible limitations to be emphasized in a clear way. Both tables also summarize the main criteria that may be used, in the future, as a guideline to classify new equipment.


**Table 2.** Characteristics and specific features used to classify fatigue testing systems.


<sup>1</sup> Ax = axial; Be = bending; To = torsion; Ax-To = axial-torsion; Be-To = bending–torsion.

**Author Contributions:** Writing—original draft preparation, J.M.E.M. and D.B.; writing—review and editing, J.M.E.M., D.B., A.N. and J.S.; supervision, D.B., A.N. and J.S. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.

### **References**

