**1. Introduction**

Modeling has become a very effective way to perform a first analysis of complex engineering problems. The vast majority of engineers, whether in academia or in industry, report benefiting from these techniques and consider them irreplaceable in their work.

Although the reliability of models, and hence the trust placed in their predictions, keeps increasing, there is still a need to understand the intrinsic uncertainties that affect simulations, to estimate their effect on predictions, and to develop efficient methodologies to reduce them in a cost-effective manner. In this respect, an interesting and promising approach has emerged in recent years: employing advanced statistical methods not only to assess the uncertainty in a model but also to guide the experimental campaign that must be carried out to feed the parameter calibration. One of these tools is Global Sensitivity Analysis (GSA), a very useful strategy for analyzing the influence of all the parameters participating in a model. Improving on the local techniques introduced in the 1980s [1], GSA methods were proposed much later to account for the influence of parameters in an overall and rigorous fashion [2].

One important limitation of GSA techniques is that they require large amounts of simulated data as input. Numerical experiments obtained, for instance, through finite element (FE) simulations demand computational resources that are often unavailable. A convenient remedy to this problem is to employ meta-models that provide reasonable approximations of the models' response at a fraction of their computational cost. These models are built by sampling the original ones, as illustrated in Figure 1, and were originally proposed to optimize processes [3]. Initially known as Response Surface Models (RSM), they rapidly evolved into inexpensive emulators of computational codes. As a result, they have been utilized in a wide range of sensitivity analyses and applications [4–6].

**Figure 1.** Meta-modeling construction process.

There exist several families of meta-models. Among the most commonly employed are those based on Kriging [7] and Radial Basis Functions (RBF) [8]. Both are generally accepted as good methods to efficiently capture trends in small data sets. Since they closely adapt to the available information, they must be re-calibrated whenever new inputs are provided [9]. The Bayesian approach, on the other hand, is a well-known technique that has been successfully employed in several scientific disciplines for parameter selection. See, for example, one of the very first generic applications, developed by Guttman [10], in which this inference procedure is used to choose the manufacturing parameters that make the widest possible population of fabricated items lie within the specified tolerance limits. More recent works have improved the Bayesian inference methodology (see, e.g., [11]).
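To fix ideas, the following minimal Python sketch builds an RBF surrogate from a small set of runs of a placeholder model and then evaluates it cheaply at many new points; the function `expensive_model`, the sample sizes, and the kernel choice are illustrative assumptions and not part of this work. A Kriging surrogate could be constructed analogously.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Placeholder "expensive" model: stands in for an FE simulation
# (hypothetical; not one of the constitutive models of this work).
def expensive_model(x):
    return np.sin(x[:, 0]) + 0.5 * x[:, 1] ** 2

# Build the surrogate from a small design of experiments.
rng = np.random.default_rng(0)
X_train = rng.uniform(0.0, 1.0, size=(30, 2))    # 30 sampled inputs
y_train = expensive_model(X_train)               # corresponding responses
surrogate = RBFInterpolator(X_train, y_train, kernel="thin_plate_spline")

# The surrogate can now be evaluated cheaply at many new points ...
X_new = rng.uniform(0.0, 1.0, size=(10_000, 2))
y_approx = surrogate(X_new)

# ... but, as noted above, it must be re-built (re-calibrated)
# whenever additional model runs become available.
X_extra = rng.uniform(0.0, 1.0, size=(10, 2))
X_all = np.vstack([X_train, X_extra])
surrogate = RBFInterpolator(X_all, expensive_model(X_all))
```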

Bayesian inference can be used systematically for the calibration of model parameters, taking into consideration the uncertainties due to the model itself, the experimental measurements, noise, etc. [12–14]. This approach has become relatively standard, providing not only optimized parameter values but also a complete Gaussian distribution for them.
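As a schematic illustration of such a calibration, the sketch below runs a random-walk Metropolis sampler for a one-parameter toy model with Gaussian measurement noise; the forward model, the synthetic data, the prior bounds, and the proposal width are illustrative assumptions and do not correspond to the models calibrated later in this article.

```python
import numpy as np

# Minimal random-walk Metropolis sketch for Bayesian calibration of a
# single model parameter theta, assuming Gaussian measurement noise.
rng = np.random.default_rng(1)

def model(theta, t):
    return theta * t                      # hypothetical forward model

t_obs = np.linspace(0.0, 1.0, 20)
y_obs = model(2.0, t_obs) + rng.normal(0.0, 0.05, t_obs.size)  # synthetic data
sigma = 0.05                              # assumed noise level

def log_posterior(theta):
    # Flat prior on [0, 10]; Gaussian likelihood.
    if not (0.0 <= theta <= 10.0):
        return -np.inf
    residual = y_obs - model(theta, t_obs)
    return -0.5 * np.sum((residual / sigma) ** 2)

theta, chain = 1.0, []
lp = log_posterior(theta)
for _ in range(20_000):
    proposal = theta + rng.normal(0.0, 0.05)       # random-walk proposal
    lp_new = log_posterior(proposal)
    if np.log(rng.uniform()) < lp_new - lp:        # Metropolis acceptance
        theta, lp = proposal, lp_new
    chain.append(theta)

posterior = np.array(chain[5_000:])                # discard burn-in
print(posterior.mean(), posterior.std())           # calibrated value and spread
```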

In this work, we combine GSA with Bayesian calibration because we believe this combination is extremely powerful for understanding computer models and for extracting as much information as possible from experiments, be they numerical or physical. This combination of techniques is not new, and similar ones have been considered in the past. For example, sensitivity analysis and Bayesian calibration were employed together in [15] to assess multiple sources of uncertainty in waste disposal models by considering independent and composite scenarios, obtaining predictive output distributions with a Bayesian approach, and later performing a variance-based sensitivity analysis. In addition, the work in [16] proposes a procedure to evaluate the sensitivity of the parameters and then calibrate the most important ones, applied to a model describing the chemical composition of water masses.

In the current article, we explore the use of GSA and Bayesian calibration for complex constitutive models, an application that has not been previously considered for this type of analysis and that can greatly benefit from it. More precisely, we study three well-known constitutive material models suitable for metals subjected to extreme conditions, namely, the Johnson–Cook [17], Zerilli–Armstrong [18], and Arrhenius-type [19] models. These are fairly complex constitutive relations that depend on a relatively large number of material parameters, which need to be adjusted for each specific material and test range. The implementations of all three can be found in the publicly available material library MUESLI [20], and we have used them together with standard explicit finite element calculations.
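To illustrate the kind of parameter dependence involved, the sketch below evaluates the classical Johnson–Cook flow stress in its standard literature form [17]; the parameter values are purely illustrative and are not calibrated values from this study or from MUESLI.

```python
import numpy as np

def johnson_cook_stress(eps_p, eps_p_rate, T,
                        A, B, n, C, m,
                        eps0_rate=1.0, T_ref=293.0, T_melt=1793.0):
    """Classical Johnson-Cook flow stress (standard literature form [17]):
    sigma = (A + B*eps_p**n) * (1 + C*ln(rate/rate0)) * (1 - T*^m),
    with T* the homologous temperature."""
    T_star = np.clip((T - T_ref) / (T_melt - T_ref), 0.0, 1.0)
    strain_term = A + B * eps_p ** n
    rate_term = 1.0 + C * np.log(np.maximum(eps_p_rate / eps0_rate, 1e-12))
    thermal_term = 1.0 - T_star ** m
    return strain_term * rate_term * thermal_term

# Illustrative parameter set (A, B in MPa); not values calibrated in this work.
sigma = johnson_cook_stress(eps_p=0.1, eps_p_rate=1e3, T=500.0,
                            A=350.0, B=275.0, n=0.36, C=0.022, m=1.0)
print(sigma)
```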

The remainder of the article is structured as follows. In Section 2, we outline the theoretical principles on which the employed statistical framework is based, as well as the three constitutive material models used in the study. In Section 3, we describe the application of the presented framework to the analysis of Taylor's impact test [21,22], an experiment often used to characterize the elastoplastic behavior of metals under high strain rates. The results of our investigation are reported in Section 4, providing insights for the three constitutive models. Finally, Section 5 collects the main findings and conclusions of the study.

#### **2. Fundamentals**

#### *2.1. Global Sensitivity Analysis*

Global Sensitivity Analysis (GSA) refers to a collection of techniques that identify the most relevant variables in a model with respect to given Quantities of Interest (QoIs). These techniques focus on apportioning the output's uncertainty to the different sources of uncertainty in the parameters [2] and define qualitative and quantitative mappings between the multi-dimensional space of stochastic input variables and the output. The most popular GSA techniques are based on the decomposition of the variance of the output's probability distribution and allow the calculation of Sobol's sensitivity indices.

According to Sobol's decomposition theory, a function can be decomposed into the sum of functions of increasing dimensionality that are orthogonal with respect to the standard *L*<sup>2</sup> inner product. Hence, given a mathematical model *y* = *f*(*x*) with *n* parameters arranged in the input vector *x*, the decomposition can be written as:

$$f(x_1, \dots, x_n) = f_0 + \sum_{i=1}^n f_i(x_i) + \sum_{1 \le i < j \le n} f_{ij}(x_i, x_j) + \dots + f_{1,2,\dots,n}(x_1, \dots, x_n), \tag{1}$$

where *f*<sub>0</sub> is a constant and the remaining terms are functions with domains of increasing dimensionality. If we consider that *f* is defined on random variables *X<sub>i</sub>* ∼ U(0, 1), *i* = 1, . . . , *n*, then the model output is itself a random variable with variance

$$D = \text{Var}[f] = \int_{[0,1]^n} f^2(\mathbf{x}) \, d\mathbf{x} - f_0^2. \tag{2}$$

Squaring and integrating Equation (1), and using the orthogonality of its terms, we note that the variance itself can be decomposed into the sum

$$D = \sum_{i=1}^{n} D_i + \sum_{1 \le i < j \le n} D_{ij} + \dots + D_{1,2,\dots,n}. \tag{3}$$

This expression motivates the definition of the Sobol indices

$$S_{i_1, \dots, i_s} = \frac{D_{i_1, \dots, i_s}}{D}, \tag{4}$$

that trivially satisfy

$$\sum_{i=1}^{n} S_i + \sum_{1 \le i < j \le n} S_{ij} + \dots + S_{1,2,\dots,n} = 1. \tag{5}$$

This decomposition of the total variance *D* provides the main metrics employed to assess the relevance of each parameter in the scatter of the quantity of interest *f*. The relative variances *S<sub>i</sub>* are referred to as the first-order indices or main effects and gauge the influence of the *i*-th parameter alone on the model's output. The total effect or total-order sensitivity associated with the *i*-th parameter, which includes its influence in combination with other parameters, is calculated as

$$S_{T_i} = \frac{1}{D} \sum_{(i_1, \dots, i_s) \in \mathcal{Z}_i} D_{i_1, \dots, i_s}, \qquad \mathcal{Z}_i = \left\{ (i_1, \dots, i_s) : \exists k,\ 1 \le k \le s,\ i_k = i \right\}. \tag{6}$$
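For illustration, consider the simple two-parameter function *f*(*x*<sub>1</sub>, *x*<sub>2</sub>) = *x*<sub>1</sub> + *x*<sub>1</sub>*x*<sub>2</sub> with *X*<sub>1</sub>, *X*<sub>2</sub> ∼ U(0, 1). Building the terms of Equation (1) from conditional expectations, *f*<sub>0</sub> = E[*f*] and *f<sub>i</sub>*(*x<sub>i</sub>*) = E[*f* | *x<sub>i</sub>*] − *f*<sub>0</sub>, one obtains

$$
\begin{aligned}
f_0 &= \tfrac{3}{4}, \qquad f_1(x_1) = \tfrac{3}{2}x_1 - \tfrac{3}{4}, \qquad f_2(x_2) = \tfrac{1}{2}x_2 - \tfrac{1}{4}, \qquad f_{12}(x_1, x_2) = \left(x_1 - \tfrac{1}{2}\right)\left(x_2 - \tfrac{1}{2}\right), \\
D_1 &= \tfrac{3}{16}, \qquad D_2 = \tfrac{1}{48}, \qquad D_{12} = \tfrac{1}{144}, \qquad D = \tfrac{31}{144}, \\
S_1 &= \tfrac{27}{31}, \qquad S_2 = \tfrac{3}{31}, \qquad S_{12} = \tfrac{1}{31}, \qquad S_{T_1} = S_1 + S_{12} = \tfrac{28}{31}, \qquad S_{T_2} = S_2 + S_{12} = \tfrac{4}{31},
\end{aligned}
$$

so the indices sum to one, as required by Equation (5), and the first parameter is clearly the dominant contributor to the output variance.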

The widespread use of the main and total effects as sensitivity measures is due to the relative simplicity of the formulas and algorithms that can be employed to calculate or approximate them ([2], Chapter 4). Specifically, the number of simulations required to evaluate these measures is *N<sub>s</sub>*(*n* + 2), where *N<sub>s</sub>* is the so-called base sample, a number that depends on the model complexity and ranges from a few hundred to several thousand, and *n* is, as before, the number of parameters of the model (we refer to ([2], Chapter 4) for details on these figures). To calculate the sensitivity indices it is therefore essential to first build a simple meta-model that approximates the true model: a limited set of runs of the original model suffices to construct the surrogate, which, demanding far fewer computational resources, can then be evaluated the large number of times required to complete the GSA.
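The following minimal sketch shows the structure of such a computation: first-order and total indices are estimated with a Saltelli-type sampling scheme and Jansen-type estimators at a cost of *N<sub>s</sub>*(*n* + 2) evaluations. The Ishigami test function stands in for the (surrogate of the) actual model, and the base sample size is an illustrative choice; neither is taken from this study.

```python
import numpy as np

# Saltelli-type estimation of first-order and total Sobol indices using
# Jansen estimators. The Ishigami function is a standard GSA test function,
# used here purely for illustration.
def ishigami(x, a=7.0, b=0.1):
    return (np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2
            + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

rng = np.random.default_rng(42)
n_params, N_s = 3, 4096                         # n parameters, base sample N_s
lo, hi = -np.pi, np.pi                          # uniform input ranges

# Two independent sample matrices A and B; total cost is N_s * (n + 2) runs.
A = rng.uniform(lo, hi, size=(N_s, n_params))
B = rng.uniform(lo, hi, size=(N_s, n_params))
yA, yB = ishigami(A), ishigami(B)
var_Y = np.var(np.concatenate([yA, yB]))

S1, ST = np.empty(n_params), np.empty(n_params)
for i in range(n_params):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                         # A with its i-th column from B
    yABi = ishigami(ABi)
    S1[i] = (var_Y - 0.5 * np.mean((yB - yABi) ** 2)) / var_Y   # main effect
    ST[i] = 0.5 * np.mean((yA - yABi) ** 2) / var_Y             # total effect

print("first-order:", np.round(S1, 3))          # analytic: ~0.31, 0.44, 0.00
print("total:      ", np.round(ST, 3))          # analytic: ~0.56, 0.44, 0.24
```

In the workflow described above, the calls to `ishigami` would simply be replaced by evaluations of the meta-model fitted to the FE results.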
