Article

Stochastic Up-Scaling of Discrete Fine-Scale Models Using Bayesian Updating

by Muhammad Sadiq Sarfaraz 1,*, Bojana V. Rosić 2 and Hermann G. Matthies 1
1 Institute of Scientific Computing, Technische Universität Braunschweig, 38106 Braunschweig, Germany
2 Chair for Applied Mechanics and Data Analysis, University of Twente, Drienerlolaan 5, 7522 NB Enschede, The Netherlands
* Author to whom correspondence should be addressed.
Computation 2025, 13(3), 68; https://doi.org/10.3390/computation13030068
Submission received: 14 December 2024 / Revised: 5 February 2025 / Accepted: 17 February 2025 / Published: 7 March 2025
(This article belongs to the Special Issue Synergy between Multiphysics/Multiscale Modeling and Machine Learning)

Abstract: In this work, we present an up-scaling framework in a multi-scale setting to calibrate a stochastic material model. In particular, we employ Bayesian updating to identify the probability distribution of continuum-based coarse-scale model parameters from fine-scale measurements, which are discrete and also inherently random (aleatory uncertainty) in nature. Owing to the completely dissimilar nature of the models on the involved scales, energy is used as the essential medium of communication between them (i.e., both the predictions of the coarse-scale model and the measurements from the fine-scale model are expressed in terms of energies). This task is realized computationally using a generalized version of the Kalman filter, employing a functional approximation of the involved parameters. The approximations are obtained in a non-intrusive manner and are discussed in detail, especially for the fine-scale measurements. The numerical examples demonstrate the utility and generality of the presented approach: the calibrated coarse-scale models are reasonably accurate approximations of the fine-scale ones, and there is considerable freedom to select widely different models on the two scales.

1. Introduction

Randomness and heterogeneity are inherent characteristics of almost all materials, whether they occur naturally or are engineered through man-made processes. Thus, researchers and engineers working in the relevant experimental and theoretical domains have to take these characteristics into account when investigating a lab specimen or designing a structure made from the material under consideration. Specifically, in the context of computational mechanics, one of the most relevant problems is the characterization of micro-/meso-scale randomness and hence the investigation of the effect of material heterogeneity on structural behavior through multi-scale numerical simulations. Such multi-scale approaches can be broadly classified into concurrent and non-concurrent approaches. Concurrent schemes consider both the coarse and fine scales during the course of the simulation, e.g., the FE-squared method [1,2], whereas non-concurrent schemes are based on computing a desired Quantity of Interest (QoI), e.g., average stresses, strains or energies, parameters for constitutive models, etc., through numerical experiments on a representative volume element (RVE) [3,4].
Focusing on the non-concurrent methods, there has been increasing interest in incorporating material randomness into computational studies. A classical approach in this regard is to perform numerical homogenization on an RVE ensemble that incorporates various sources of uncertainty (material properties, the spatial distribution of different phases, and the size and shape of inclusions) and to extract the relevant statistical QoI [5,6,7,8,9,10,11,12,13]. Moreover, extensive research in probabilistic numerics has yielded different numerical schemes to produce stochastic surrogate models for random microstructures, with the aim of mitigating the curse of dimensionality caused by the large number of sources of uncertainty encountered in such problems; see, e.g., [14,15,16,17]. Data-driven and machine learning-based approximate models, e.g., neural networks, have also been used in a similar pursuit [18,19,20,21,22,23]. The power of deep learning in computational homogenization has been leveraged in, e.g., [24,25,26,27,28] to yield a more elaborate fine-scale response in multi-scale computations. In recent times, Physics-Informed Neural Networks (PINNs), proposed in [29], have gained huge popularity in the computational mechanics community. The main idea behind PINNs is to incorporate physical principles, e.g., governing equations, boundary and initial conditions, etc., into the training phase of the neural network. In particular, with respect to the material calibration problem, both linear (elastic) [30,31,32] and non-linear (plasticity and damage) [33,34,35] problems have been explored. Examples of using PINNs to accelerate multi-scale computations can be found in [36,37,38]. Another interesting approach, based on unsupervised learning, has been devised in [39,40]. This approach automatically decides which computed material parameters (from a pool of constitutive models) are most relevant to the input measurements (e.g., displacements, strains, etc.).
It has been used for both elastic [40] and inelastic [41,42] model calibration. The utility of this approach in a probabilistic setting has been demonstrated in [43]. Furthermore, methods based on Bayesian inference (which also forms the conceptual basis of our numerical scheme) to obtain a probabilistic description of coarse-scale characteristics through the incorporation of fine-scale measurements have also been demonstrated in [44,45,46]. Bayesian methods allow for the explicit incorporation of uncertainty at both the fine scale (meso-scale) and the coarse scale (macro-scale). This is particularly important in up-scaling because material properties are often not deterministic but are influenced by variability in the microstructure (e.g., the distribution of inclusions, porosity). By considering both epistemic uncertainty (knowledge uncertainty) and aleatory uncertainty (inherent randomness), Bayesian methods provide a more complete understanding of material behavior, including confidence intervals for predictions. Unlike traditional methods, which typically provide point estimates (e.g., mean-value properties, as in classical homogenization), Bayesian up-scaling produces a probabilistic distribution of the coarse-scale material properties.
In this paper, we demonstrate the application of the Bayesian up-scaling framework developed in [47,48] for the computational homogenization of concrete. The detailed structure of concrete at the meso-scale is studied, including the arrangement and behavior of its components. This information is then translated to the macro-scale, where the focus shifts to the overall material performance. The homogenization process converts these detailed properties into more manageable parameters that can be used to predict concrete’s behavior in real-world engineering applications. When framed within a Bayesian context, computational homogenization not only provides predictions of macro-scale material properties but also quantifies the confidence or epistemic uncertainty in these predictions. It offers an automatic homogenization procedure, as described in [47,48], using a deterministic representative volume element as an example. Taking this a step further, this paper addresses a more general case in which the meso-scale description also varies. Specifically, we account for aleatory uncertainty that represents the geometric arrangement of aggregates within the matrix phase. To extend the Bayesian up-scaling procedure, we apply it to the homogenization of a stochastic representative volume element. We model the meso-scale concrete using a fine-tuned stochastic finite element model, which describes the behavior of an ensemble of representative volume elements with a random distribution of inclusions in the matrix phase. At the macro-scale, the material behavior is represented by a super-element, or coarse scale, which accounts for the homogeneous material properties obtained through the Bayesian up-scaling procedure. Due to the aleatory uncertainty at the meso-scale (referred to as the fine scale), the coarse-scale representation also becomes stochastic. 
This includes both epistemic uncertainty introduced by the Bayesian up-scaling algorithm and the aleatory uncertainty arising from variations at the meso-scale.
Thus, the goal is to estimate the continuous distribution of the coarse-scale properties given the discrete version (samples) of the stochastic fine-scale measurements. The novelty of this work stems from two advancements. Firstly, the numerical finite element (FE) schemes employed to model the concrete material behavior on the two scales differ completely with respect to the elements they use: the material specimen on the coarse scale uses continuum elements, whereas the fine scale is composed of discrete truss elements to approximate the solution field. This example scenario highlights the generality of the up-scaling scheme in terms of its applicability. Secondly, we propose an unsupervised learning method for the estimation of the stochastic coarse-scale properties. This method is based on approximate Bayesian estimation with the help of the generalized Kalman filter built on the polynomial chaos approximation, as proposed in [49]. As this method requires a functional approximation of the fine-scale measurements, one has to be able to evaluate the polynomial chaos approximation of the fine-scale measurement by using standard uncertainty quantification techniques. Bearing in mind that the fine-scale representation depends on many parameters used to describe the detailed heterogeneous material properties, such an approach is usually not practically feasible. Therefore, we suggest using an unsupervised learning technique for the fine-scale parameter reduction. In other words, we utilize the transport maps approach [50,51,52] to transform the fine-scale measurement samples from the high-dimensional nonlinear space to a low-dimensional Gaussian space.
Once the minimal parameterization is found, one may construct the required functional approximation of the fine-scale measurement, thereby avoiding the complications related to constructing a measurement approximation from a high-dimensional stochastic process, generating fine-scale material realizations and the need to incorporate significant modifications into the already existing up-scaling framework.
The contents of the paper are organized as follows: Section 2 describes the abstract formulation of the up-scaling problem. Section 3 dwells briefly on the Bayesian framework, stating the main results and the update formula for up-scaling. Section 4 presents the details of the computational scheme, with a major focus on constructing a functional approximation of the fine-scale measurement, to be employed for conducting numerical experiments. Finally, in Section 5, examples illustrating the use of the up-scaling scheme are presented, and Section 6 concludes the discussion.

2. Abstract Formulation of Up-Scaling Problem

We consider a mechanical multi-scale problem composed of coarse and fine scales, with the objective of calibrating the material characteristics of the former using measurements from the latter. On the coarse scale, one considers an abstract model construct given as $(\mathcal{U}_c, E_c, D_c)$, which represents a rate-independent small-strain homogeneous macro-model in which $\mathcal{U}_c$ denotes the state space, $E_c : [0,T] \times \mathcal{U}_c \to \mathbb{R}$ is a time-dependent energy functional, and $D_c : \mathcal{U}_c \times \dot{\mathcal{U}}_c \to [0,\infty]$ is a convex and lower-semicontinuous dissipation potential satisfying $D_c(u_c, 0) = 0$ and the homogeneity property $\forall u_c \in \mathcal{U}_c : D_c(u_c, \lambda v_c) = \lambda D_c(u_c, v_c)$ for all $\lambda > 0$. Then, in an abstract manner, the coarse-scale mechanical system can be described mathematically by the sub-differential inclusion
$u_c : [0,T] \to \mathcal{U}_c : \quad \partial_{\dot{u}_c} D_c(\kappa_c, u_c, \dot{u}_c) + D_{u_c} E_c(t, \kappa_c, u_c) \ni 0$
where $D_{u_c}$ stands for the Gâteaux partial differential with respect to the state variable $u_c$, and the derivative of $D_c$ is given in terms of the set-valued sub-differential $\partial D_c$ in the sense of convex analysis; see [53]. Moreover, the parameter vector $\kappa_c$ represents the spatially homogeneous material characteristics governing the material behavior. On the fine scale, in terms of the formalism of stored energy and dissipation potential, one considers conceptually a similar model to the one on the coarse scale:
$\partial_{\dot{u}_f} D_f(u_f, \kappa_f, \dot{u}_f) + D_{u_f} E_f(t, \kappa_f, u_f) \ni 0.$
However, this model has a more elaborate description, encoded in its parameter κ f , and it represents the material and geometrical/spatial variability of material properties on the fine scale. Such a material description lends itself to a higher computational cost when employed in a numerical scheme.
The numerical experiment consists of a material specimen under appropriate loading conditions. Keeping the same boundary conditions on the coarse and fine scales (and noting that the numerical schemes employing the coarse and fine material models, respectively, are applied to the same specimen), the objective is to calibrate the coarse-scale $\kappa_c$ to mimic the fine-scale model response as accurately as possible. Since $\mathcal{U}_c \neq \mathcal{U}_f$, the states $u_c$ and $u_f$ cannot be directly compared, as we will see in the numerical illustration in Section 5, where the coarse- and fine-scale computational models are based on continuum and discrete truss elements, respectively; the two models can only communicate in terms of some observables or measurements (e.g., energy, stress or strain, etc.) $y \in \mathcal{Y}$, where $\mathcal{Y}$ is typically some vector space like $\mathbb{R}^L$. In other words, one defines a measurement vector, obtained by the application of a respective measurement operator on the solution states, associated with the respective fine- and coarse-scale models:
$y_f = Y_f(\kappa_f, u_f(\kappa_f, \Gamma_f)) + \hat{\epsilon},$
$y_c = Y_c(\kappa_c, u_c(\kappa_c, \Gamma_c))$
where $\Gamma_c$ and $\Gamma_f$ are the external excitations of similar type (i.e., either deformation- or force-based) on the coarse and fine levels, which are applied on the specimen boundaries. $Y_c$ and $Y_f$ represent the corresponding abstract measurement operators on the respective scales, whereas $\hat{\epsilon}$ denotes the measurement noise associated with the fine scale. To draw parallels between the measurement outputs of the two models, one has to associate a random variable $\epsilon(\omega_\epsilon) \in L_2(\Omega_\epsilon, \mathcal{F}_\epsilon, \mathbb{P}_\epsilon)$ with the measurement $y_c$. Here, $(\Omega_\epsilon, \mathcal{F}_\epsilon, \mathbb{P}_\epsilon)$ is the triplet describing the probability space, consisting of the set of all events, the sigma-algebra and the probability measure, respectively. $\epsilon(\omega_\epsilon)$ represents the prediction of the measurement and modeling error that reflects the inability of the coarse-scale computational model to simulate the true/fine-scale measurement; in other words, $\epsilon(\omega_\epsilon)$ represents the knowledge about $\hat{\epsilon}$. In this case, we assume that the modeling error is additive and model it a priori as a Gaussian distribution, the parameters of which are also learned during the up-scaling process. The stochastic version of $y_c$, after the addition of $\epsilon(\omega_\epsilon)$, reads as
$y_c(\omega_\epsilon) = Y_c(\kappa_c, u_c(\kappa_c, \Gamma_c)) + \epsilon(\omega_\epsilon).$
Since the objective of our study is to up-scale an ensemble of fine-scale material realizations, we introduce an additional type of uncertainty in our fine-scale material description, which stems from the inherent randomness of the material. Considering the probabilistic view on such uncertainty, we model $\kappa_f$ as a random variable/field in $L_2(\Omega_{\kappa_f}, \mathcal{B}_{\kappa_f}, \mathbb{P}_{\kappa_f}; \mathcal{K}_f)$ defined by the mapping
$\kappa_f(\omega_\kappa) : \Omega_\kappa \to \mathcal{K}_f.$
Here, $\mathcal{K}_f$ is the parameter space, which depends on the application. Consequently, the evolution problem described by $(\mathcal{U}_f, E_f, D_f)$ also becomes uncertain, and therefore Equation (2) is reformulated as
$\partial_{\dot{u}_f} D_f(u_f(\omega_\kappa), \kappa_f(\omega_\kappa), \dot{u}_f(\omega_\kappa)) + D_{u_f} E_f(t, \omega_\kappa, \kappa_f(x, \omega_\kappa), u_f(\omega_\kappa)) \ni 0 \quad \text{a.s.}$
Once the uncertainty is present in the fine-scale model, the observation in Equation (3) modifies to
$y_f(\omega_y) = Y_f(u_f(\kappa_f(\omega_\kappa), \Gamma_f(\omega_\kappa))) + \hat{\epsilon}(\omega_\epsilon), \quad \omega_y := (\omega_\kappa, \omega_\epsilon).$
Following the previous formulation, the main goal is to estimate the distribution of the coarse-scale parameter $\kappa_c$ given the set of discrete measurements $y_f(\omega_y^i)$, $i = 1, \ldots, n_f$. To achieve this, we employ the Bayesian approach described in the following section.

3. Bayesian Formulation of Stochastic Up-Scaling

The fine-scale material model characterized by a stochastic parameter vector $\kappa_f(\omega_\kappa)$ renders the measurement in Equation (8) a random variable. This measurement should match the coarse-scale measurement in Equation (3) after tuning the corresponding unknown parameter $\kappa_c$. Thus, we would like to update $\kappa_c$ by incorporating the information from the fine-scale measurement and hence obtain an indication of what the probability distribution of $\kappa_c$ should be. To realize this, $\kappa_c$ is assumed to be uncertain (unknown) and further modeled a priori as a random variable $\kappa_c(\omega)$ (the prior) belonging to $L_2(\Omega_{\kappa_c}, \mathcal{B}_{\kappa_c}, \mathbb{P}_{\kappa_c}; \mathcal{K}_c)$. Hence, the coarse-scale model in Equation (1) is converted into a stochastic one:
$\partial_{\dot{u}_c} D_c(\kappa_c(\omega), u_c(\kappa_c(\omega)), \dot{u}_c(\kappa_c(\omega))) + D_{u_c} E_c(t, \kappa_c(\omega), u_c(\kappa_c(\omega))) \ni 0 \quad \text{a.s.}$
and subsequently the a priori prediction of the coarse-scale measurement becomes
$y_c(\kappa_c(\omega), \epsilon(\omega_\epsilon)) = Y_c(\kappa_c(\omega), u_c(\kappa_c(\omega)), \Gamma_c) + \epsilon(\omega_\epsilon)$
with $\epsilon(\omega_\epsilon) \in L_2(\Omega_\epsilon, \mathcal{B}_\epsilon, \mathbb{P}_\epsilon)$. The goal is to identify the vector $\kappa_c(\omega)$ given $y_f$ using Bayes's rule such that
$p(\kappa_c \mid y_f) = \dfrac{p(y_f \mid \kappa_c)\, p(\kappa_c)}{p(y_f)}$
holds. Here, $p(\kappa_c \mid y_f)$ is referred to as the posterior distribution of $\kappa_c(\omega)$, as it incorporates the information from the fine-scale measurements via the likelihood function $p(y_f \mid \kappa_c)$, and $p(y_f)$ is the normalization factor or evidence. Obtaining an analytical expression for $p(\kappa_c \mid y_f)$ is possible only in ideal cases, e.g., when the prior and the likelihood are conjugate, i.e., they belong to the exponential family of distributions with predefined statistics. However, in most practical cases, obtaining the full posterior $p(\kappa_c \mid y_f)$ is analytically intractable; hence, one has to resort to methods that are computationally expensive either due to evidence estimation or due to the slow convergence of random walk algorithms [54]. In the subsequent discussion, instead of focusing on obtaining the complete posterior information encoded in $p(\kappa_c \mid y_f)$, we devise an algorithm to estimate a posterior functional of $\kappa_c$ in terms of a conditional expectation.
One last step, before we proceed to the next section, is to consider $q_c := \log \kappa_c$ as the objective random variable for calibration, as $\kappa_c$ is positive definite and this constraint has to be taken into consideration. In this way, whatever approximations or linear operations are performed computationally on the numerical representation of $q_c$, taking $\exp(q_c)$ afterwards always yields a positive quantity. Therefore, from this point onwards, we will use $q_c$ in developing the Bayesian scheme for up-scaling.
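As a small illustration of this reparameterization (with hypothetical numbers, not the paper's data), any linear Kalman-type operation applied to $q_c$ leaves $\kappa_c = \exp(q_c)$ strictly positive:

```python
import numpy as np

rng = np.random.default_rng(0)

# Prior samples of a positive material parameter, modeled log-normally:
# kappa_c = exp(q_c) with q_c Gaussian (hypothetical moments).
q_c = rng.normal(loc=0.0, scale=0.5, size=10_000)

# A linear (Kalman-type) update acts on the log-parameter; the shift
# below stands in for an assimilation increment and may push q_c negative.
q_a = q_c - 1.2

# Mapping back, the physical parameter remains strictly positive.
kappa_a = np.exp(q_a)
assert np.all(kappa_a > 0)
```

No matter how large the (linear) increment, positivity of the physical parameter is preserved by construction.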

Bayes Filter Using Conditional Expectation

The conditional expectation of $q_c$ with respect to the posterior distribution is given as

$\mathbb{E}(q_c \mid y_f) = \int q_c \, p(q_c \mid y_f)\, \mathrm{d}q_c.$
Instead of direct integration over the posterior measure, the conditional expectation can be estimated in a straightforward manner by projecting the random variable $q_c$ onto the subspace generated by the sub-sigma-algebra $\mathcal{B} := \sigma(Y_f)$ of the fine-scale measurement. To achieve this, one has to compute the minimal distance of $q_c$ to the point $q_c^*$, which can be defined in different ways. As shown in [49,55], the notion of distance can be generalized given a strictly convex, differentiable function $\varphi : \mathbb{R}^d \to \mathbb{R}$ with the tangent hyperplane $H_{\hat{q}_c}(q_c) = \varphi(\hat{q}_c) + \langle q_c - \hat{q}_c, \nabla\varphi(\hat{q}_c)\rangle$ to $\varphi$ at the point $\hat{q}_c$, such that
$q_c^* := \mathbb{E}(q_c \mid \mathcal{B}) = \underset{\hat{q}_c \in L_2(\Omega, \mathcal{B}, \mathbb{P}; \mathcal{Q})}{\arg\min}\; \mathbb{E}(D_\varphi(q_c \,\|\, \hat{q}_c))$
holds. Here, $D_\varphi(q_c \,\|\, \hat{q}_c) = H_{q_c}(q_c) - H_{\hat{q}_c}(q_c)$ denotes the distance term, also known as the Bregman loss function (BLF) or divergence. In general, the projection in Equation (13) is of a non-orthogonal kind and satisfies the Bregman Pythagorean inequality:
$\mathbb{E}(D_\varphi(q_c \,\|\, \check{q}_c)) \geq \mathbb{E}(D_\varphi(q_c \,\|\, q_c^*)) + \mathbb{E}(D_\varphi(q_c^* \,\|\, \check{q}_c)),$
which holds for any arbitrary $\mathcal{F}$-measurable random variable $\check{q}_c$. In the case when $q_c^* = \mathbb{E}(q_c)$, the previous relation rewrites to
$\mathbb{E}(D_\varphi(q_c \,\|\, \mathbb{E}(q_c))) = \mathbb{E}(D_\varphi(q_c \,\|\, \check{q}_c)) - \mathbb{E}(D_\varphi(\mathbb{E}(q_c) \,\|\, \check{q}_c)) \geq 0,$
in which the distance $\mathbb{E}(D_\varphi(q_c \,\|\, \mathbb{E}(q_c)))$ has the notion of a variance, further referred to as the Bregman variance. In terms of the convex function $\varphi$, the Bregman variance obtains the following form:
$\operatorname{var}_\varphi(q_c) := \mathbb{E}(D_\varphi(q_c \,\|\, \mathbb{E}(q_c))) = \mathbb{E}\big(\varphi(q_c) - \varphi(\mathbb{E}(q_c)) - \langle q_c - \mathbb{E}(q_c), \nabla\varphi(\mathbb{E}(q_c))\rangle\big) = \mathbb{E}(\varphi(q_c)) - \varphi(\mathbb{E}(q_c)) \geq 0.$
For computational purposes, the Bregman distance in Equation (13) is further taken as the squared Euclidean distance by assuming $\varphi(q_c) = \frac{1}{2}\|q_c\|^2$, such that $D_\varphi(q_c \,\|\, \hat{q}_c) = \frac{1}{2}\|q_c - \hat{q}_c\|^2$ and

$q_c^* := \mathbb{E}(q_c \mid \mathcal{B}) = \underset{\hat{q}_c \in L_2(\Omega, \mathcal{B}, \mathbb{P}; \mathcal{Q})}{\arg\min}\; \mathbb{E}(\|q_c - \hat{q}_c\|^2).$
In this case, Equation (14) reduces to the classical Pythagorean theorem. Moreover, one can show that the conditional expectation is the minimizer of the mean squared error and is the minimum-variance unbiased estimator; refer to [55] for a detailed proof.
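To see this minimization property at work, the following sketch (a hypothetical scalar example, unrelated to the paper's concrete model) approximates the conditional expectation by a least-squares polynomial projection onto the measurement, in the spirit of the polynomial maps introduced below, and checks that it beats the best constant predictor:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Hypothetical joint samples: y is a noisy nonlinear observation of q.
q = rng.normal(size=n)
y = q**3 + 0.1 * rng.normal(size=n)

# Approximate phi(y) ~ E(q | y) by projecting q onto cubic polynomials
# in y via least squares.
A = np.vander(y, 4)                           # columns [y^3, y^2, y, 1]
beta, *_ = np.linalg.lstsq(A, q, rcond=None)
phi_y = A @ beta

# The projection attains a smaller mean squared error than the best
# constant estimator E(q), consistent with the minimization property.
mse_cond = np.mean((q - phi_y) ** 2)
mse_mean = np.mean((q - q.mean()) ** 2)
assert mse_cond < mse_mean
```

Since the polynomial space contains the constants, the projected estimate can never do worse than the mean, and it does strictly better whenever the measurement carries information about the parameter.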
Focusing now on Equation (17), one may decompose the random variable $q_c$ belonging to $(\Omega_{q_c}, \mathcal{F}_{q_c}, \mathbb{P}_{q_c})$ into projected $q_c^p$ and residual $q_c^r$ components such that

$q_c = q_c^p + q_c^r = P_{\mathcal{B}}\, q_c + (I - P_{\mathcal{B}})\, q_c$
holds. Here, $q_c^p = P_{\mathcal{B}}\, q_c = \mathbb{E}(q_c \mid \mathcal{B})$ is the orthogonal projection of the random variable $q_c$ onto the space $(\Omega_{q_c}, \mathcal{B}_{q_c}, \mathbb{P}_{q_c})$ of all distributions consistent with the data, whereas $q_c^r := (I - P_{\mathcal{B}})\, q_c$ is its orthogonal residual. To give Equation (18) a more practical form, the projection term $P_{\mathcal{B}}\, q_c$ is further described by a measurable mapping $\phi$ according to the Doob–Dynkin lemma, which states
$\mathbb{E}(q_c \mid y_f) = \phi(y_f) = \phi \circ y_f.$
As a result, Equation (18) rewrites to
$q_c = \phi(y_f) + (q_c - \phi(y_c))$
in which the first term in the sum, i.e., $\phi(y_f)$, is taken to be the projection of $q_c$ onto the fine-scale data set $y_f$ according to Equations (18) and (19), whereas $(q_c - \phi(y_c))$ is the residual component defined by the a priori knowledge $q_c$ on the coarse scale. Following this, Equation (20) recasts to the update equation for the random variable $q_c$ as
$q_{a,c} = q_c + \phi(y_f) - \phi(y_c)$
in which $q_{a,c}$ is the assimilated random variable and $y_c$ is the observation prediction on the coarse scale given the prior $q_c$. Therefore, to estimate $q_{a,c}$, one requires only information on the map $\phi$. For the sake of computational simplicity, the map in Equation (19) is further approximated in a Galerkin manner by a family of polynomials
$\mathcal{Q}_n = \{\phi(y) \in \mathcal{Q} \mid \phi_n : y \mapsto q_c \ \text{an $n$-th degree polynomial}\}$
such that the filter in Equation (21) rewrites to
$q_{a,c}(\omega) = q_c(\omega) + \phi_n(y_f) - \phi_n(y_c(\omega)).$
As the map in Equation (22) is parameterized by a set of coefficients $\beta$, i.e., $\phi_n(y; \beta)$, these can further be found by minimizing the residual component (the optimality condition in Equation (17)):
$\beta^* = \underset{\beta}{\arg\min}\; \mathbb{E}(\|q_c - \phi_n(y_f; \beta)\|^2).$
In the affine case, when $n = 1$ and $\phi_1(y_c; \beta) = K y + b$, the previous optimization results in a formula defining the well-known Kalman gain:
$K = \operatorname{cov}(q_c, y_c)\, \operatorname{cov}(y_c)^{-1}$
such that the update formula in Equation (23) reduces to the generalization of the Kalman filter
$q_{a,c} = q_c + K (y_f - y_c),$
which is here referred to as a Gauss–Markov–Kalman filter; for more details, please see [56].
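For illustration, the update in Equation (26) can be realized with plain Monte Carlo samples in a few lines; the scalar linear-Gaussian setup below is entirely hypothetical and only mimics the structure prior/predicted measurement/fine-scale datum:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000

# Hypothetical scalar setup: prior log-parameter, a linear coarse-scale
# measurement prediction with noise, and a single fine-scale datum.
q_c = rng.normal(0.0, 1.0, n)                 # prior samples
y_c = 2.0 * q_c + 0.5 * rng.normal(size=n)    # predicted measurement
y_f = 3.0                                     # fine-scale measurement

# Kalman gain K = cov(q_c, y_c) cov(y_c)^{-1}, estimated from samples.
K = np.cov(q_c, y_c)[0, 1] / np.var(y_c)

# Gauss-Markov-Kalman update: q_a = q_c + K (y_f - y_c).
q_a = q_c + K * (y_f - y_c)

# The assimilated ensemble mean moves from the prior mean (0) toward
# the value explaining the datum, and the spread shrinks.
print(q_a.mean(), q_a.std())
```

In this linear-Gaussian toy case, the sample-based update reproduces the analytical posterior mean, while the reduced spread reflects the information gained from the measurement.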

4. Computational Scheme for Up-Scaling

The update Equation (26) derived in the previous section needs to be realized in a computational framework to illustrate the use of the up-scaling scheme. In concrete terms, one requires a numerical approximation of the random variables involved in Equation (23). A widely used strategy in this regard is to use an ensemble of sampling points for the random material parameters of the coarse- and fine-scale models, which leads to an ensemble-based interpretation of the Kalman filter (EnKF). We pursue a different path here to discretize the RVs in Equation (26): a functional approximation is employed, i.e., the RVs are defined in terms of functions of known standard RVs. We shall assume that these have been chosen to be independent and often even normalized Gaussians. The final step in describing the computational versions of the RVs in Equation (26) is to choose a finite set of linearly independent functions $\{\Psi_\alpha\}_{\alpha \in \mathcal{J}_Z}$ of these base RVs, where $\alpha = (\ldots, \alpha_k, \ldots)$ represents a multi-index, and the set of multi-indices for approximation $\mathcal{J}_Z$ is a finite set with cardinality (size) $Z$. There are several systems of functions that can possibly be used in this regard, e.g., polynomial chaos expansion (PCE) or generalized PCE (gPCE) [57], kernel functions [58], radial basis functions [59], or functions derived from a fuzzy set [60]. In this work, all subsequent development is carried out in terms of a PCE-based approximation. In the following discussion, we describe the computational procedure to develop the PCE approximations for the coarse and fine scales.
Moreover, it merits mentioning here that the PCEs for all the relevant RVs are computed in a non-intrusive fashion (details of the related methods are beyond the scope of this paper; the interested reader can consult [49,61,62,63,64] for more information), meaning the PCE approximations are computed using samples, i.e., the material parameters are sampled from their associated probability density functions (PDFs), and the corresponding energy measurements are obtained by repeated execution of the deterministic FEM solvers for the coarse and fine scales, respectively.

4.1. Coarse-Scale Prior and Energy Approximation

The PCE approximation of the coarse-scale prior material properties $q_c = \log \kappa_c$ (modeled as log-normal RVs to preserve the positive-definiteness property), denoted $\hat{q}_c(\theta_c(\omega))$ (see Equation (10)), can easily be constructed by using the basis defined in terms of standard normal RVs $\theta_c(\omega) \in \mathbb{R}^{M_c}$ that are sampled according to $\theta_c(\omega) \sim \mathcal{N}(0, I)$. These random variables are known, as they are constructed by experts. Having $\hat{q}_c(\theta_c(\omega))$ at hand, the material samples are obtained from it using the seed from $\theta_c(\omega)$. These material samples are then fed into the FEM computational model $\mathcal{M}_c$ to obtain for each sample the solution field $u_c(x, \omega_i)$ (see Equation (10)) and the corresponding prediction of the energy measurement using the measurement operator $Y_c$; see Equation (11). Consequently, one can obtain the PCE approximation for the energy as $\hat{y}_c(\theta_c(\omega)) \in \mathbb{R}^L$, where $L$ is the number of different types of predicted energy measurements (e.g., stored energy, dissipation, etc.). The procedure to construct the predicted energy PCE is summarized in Algorithm 1.
Algorithm 1 Coarse-scale energy PCE
1: Input: coarse-scale parameter PCE $\hat{q}_c(\theta_c(\omega))$
2: Output: coarse-scale energy PCE $\hat{y}_c(\theta_c(\omega))$
3: Generate $n_c$ samples from the PCE $\hat{q}_c(\theta_c(\omega))$:
   $\{\theta_c(\omega_i)\}_{i=1}^{n_c} \sim \mathcal{N}(0, I)$: $\{(\theta_c^1(\omega_i), \ldots, \theta_c^{M_c}(\omega_i))\}_{i=1}^{n_c}$
   $\{q_c(\omega_i)\}_{i=1}^{n_c} = \{\hat{q}_c(\theta_c(\omega_i))\}_{i=1}^{n_c}$
4: Propagate $\{q_c(\omega_i)\}_{i=1}^{n_c}$ through $\mathcal{M}_c$ to obtain energy samples:
   • Input $\{q_c(\omega_i)\}_{i=1}^{n_c}$ in $\mathcal{M}_c$ to obtain the solutions $\{u_c(x, \omega_i)\}_{i=1}^{n_c}$
   • Input $\{u_c(x, \omega_i)\}_{i=1}^{n_c}$ in the measurement operator $Y_c$ to obtain the energy samples
     $\{Y_c(u_c(x, \omega_i))\}_{i=1}^{n_c}$: $\{y_c(\omega_1), \ldots, y_c(\omega_{n_c})\}$, $y_c(\omega_i) \in \mathbb{R}^L$
5: Construct the PCE approximation of the energy using the samples $\{(y_c(\omega_i), \theta_c(\omega_i))\}_{i=1}^{n_c}$ and a least-squares estimate:
   $\hat{y}_c(\theta_c(\omega)) = \sum_{\alpha \in \mathcal{J}_Z} y_c^{(\alpha)} \Psi_\alpha(\theta_c(\omega))$
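Under strong simplifying assumptions (a one-dimensional germ $\theta_c$, a toy closed-form energy standing in for the solver $\mathcal{M}_c$ and the operator $Y_c$, and a degree-3 Hermite basis), Algorithm 1 might be sketched as follows; all model details here are hypothetical:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(3)
n_c = 2_000

# Step 3: sample the standard-normal germ and evaluate the prior
# parameter PCE (here a hypothetical 1-D affine PCE for q_c).
theta = rng.normal(size=n_c)
q_c = 0.1 + 0.3 * theta

# Step 4: per-sample "solver" plus "measurement operator"; this toy
# stored-energy expression merely stands in for the FEM model M_c.
def energy_measurement(q):
    kappa = np.exp(q)               # positive material parameter
    strain = 0.1                    # fixed hypothetical loading
    return 0.5 * kappa * strain**2  # (1/2) * stiffness * strain^2

y_c = energy_measurement(q_c)

# Step 5: least-squares fit of the energy PCE in the probabilists'
# Hermite basis psi_0..psi_3 evaluated at the germ samples.
Psi = hermevander(theta, 3)
coeffs, *_ = np.linalg.lstsq(Psi, y_c, rcond=None)
y_hat = Psi @ coeffs

# The degree-3 surrogate captures the sampled energies closely.
rel_rms = np.sqrt(np.mean((y_hat - y_c) ** 2)) / np.std(y_c)
assert rel_rms < 0.05
```

In the actual procedure, each evaluation of `energy_measurement` would be a full deterministic FEM run, which is exactly why the non-intrusive, sample-based construction is attractive.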

4.2. Fine-Scale Measurement Approximation

The up-scaling scheme presented in this paper deals with a stochastic fine-scale material model; therefore, the resulting measurement $y_f$ in Equation (9) (to be used to update the coarse-scale material model) reflects the inherent randomness (aleatory uncertainty) of the fine scale. This scenario poses a hurdle to constructing the fine-scale measurement PCE approximation, mainly for two reasons:
  • The underlying process that generates the fine-scale material realizations is not available; i.e., we do not have the continuous data $y_f$ (in the form of a random variable). Rather, we have its discrete version, given as a set of random fine-scale realizations (e.g., from real experiments).
  • Even if the random process—generating the fine-scale realizations—is available, e.g., in a computational setting (as in the numerical experiments of Section 5), computing the corresponding energy PCE is not easily achievable. This is due to the high-dimensional and spatially varying nature of the fine-scale material description (e.g., properties described by a random field or the random distribution of different material phases), resulting in a more involved and computationally expensive procedure due to the large number of unknown PCE coefficients.
To tackle this problem, the idea is to model the fine-scale response $y_f$ as a nonlinear mapping $\varphi_y$ of some basic standard random variable $\theta_f$ such that

$y_f(\tilde{\omega}_i) = \varphi_y(w_{y_f}, \theta_f(\tilde{\omega}_i)), \quad i = 1, \ldots, n_f$
holds. The random variable $\theta_f$ is not known a priori and is to be determined. Similarly, the map in Equation (28) is parameterized by the coefficient vector $w_{y_f}$, which also needs to be determined. Therefore, the problem boils down to estimating $(w_{y_f}, \{\theta_f(\tilde{\omega}_i)\}_{i=1}^{n_f})$, which is solved in two steps. First, we transform the $y_f(\tilde{\omega}_i)$ samples to their standard normal counterparts $\theta_f(\tilde{\omega}_i)$. Second, using the $(y_f(\tilde{\omega}_i), \theta_f(\tilde{\omega}_i))$ pairs, the parameters $w_{y_f}$ of the measurement map $\varphi_y$ in Equation (28) are computed. This procedure, implemented using transport maps [51], is detailed in the following discussion.

4.2.1. Transport Maps: Formal Definition

A transport map involves mapping a given (reference) random variable into another (target) random variable such that the probability measure is conserved under the transformation. The idea was originally proposed by Monge [65], who formulated the problem of transporting a unit mass associated with variable $x$ to another variable $y$ in terms of the following cost function:

$C_T = \int_{\mathbb{R}} c(x, T(x))\, \mu_x(\mathrm{d}x)$

subject to the constraint $\mu_y = T_\# \mu_x$, where $\mu_x$ and $\mu_y$ are the probability measures of the respective RVs and $T_\#$ is the pushforward operator. The solution of the above problem is the transport map $T$, which is optimal in terms of incurring the lowest transportation cost. Transport maps have a wide domain of applications in various fields [66,67,68,69,70,71,72,73,74,75,76].
To construct the transport map using the fine-scale energy samples $y_f(\tilde{\omega}_i)$, we adopt the approach originally proposed in [51] and discussed further in the related research articles [50,52,77,78].
To cast the problem in the current setting: we have at our disposal $n_f$ fine-scale energy samples $\{y_f(\tilde{\omega}_1), \dots, y_f(\tilde{\omega}_{n_f})\}$, obtained from the corresponding FEM solver of the fine-scale model $M_f$ using the input material property realizations $\{\kappa_f(\tilde{\omega}_1), \dots, \kappa_f(\tilde{\omega}_{n_f})\}$; see Equation (7). We seek to build an approximate map that transports the samples $y_f(\tilde{\omega}_i)$ of the target random variable, having PDF $\rho_{y_f}$ and measure $\mu_{y_f}$, to the samples $\theta_f(\tilde{\omega}_i)$ of the reference random variable, having PDF $\rho_{\theta_f}$ and measure $\mu_{\theta_f}$. Furthermore, instead of seeking an optimal map by minimizing a cost function similar to Equation (29), we explicitly specify the sought map to have a lower-triangular structure that approximately satisfies the measure constraint $\mu_{\theta_f} = T_{\#}\mu_{y_f}$, which is given as
$$T(y_f) = T(y_f^1, y_f^2, \dots, y_f^L) = \begin{pmatrix} T^1(y_f^1) \\ T^2(y_f^1, y_f^2) \\ \vdots \\ T^L(y_f^1, \dots, y_f^L) \end{pmatrix}.$$
The map is assumed to be monotone, and $\mu_{\theta_f}$ is assumed to be differentiable. The induced density is given as

$$\tilde{\rho}_{y_f}(y_f) = \rho_{\theta_f}(T(y_f)) \, \lvert \det \nabla T(y_f) \rvert$$

where $\nabla T(y_f)$ is the Jacobian of the map with respect to $y_f$ and $\lvert \det \nabla T(y_f) \rvert$ is the absolute value of its determinant. In order to enforce the constraint $\mu_{\theta_f} = T_{\#}\mu_{y_f}$ exactly, the map-induced density $\tilde{\rho}_{y_f}$ should be equal to the exact one, $\rho_{y_f}$. Here, the Kullback–Leibler (KL) divergence is used as the measure of discrepancy:

$$D_{\mathrm{KL}}\!\left(\rho_{y_f} \,\Vert\, \tilde{\rho}_{y_f}\right) = \mathbb{E}_{\rho_{y_f}}\!\left[\log \frac{\rho_{y_f}(y_f)}{\tilde{\rho}_{y_f}(y_f)}\right].$$
The above expression serves as the objective function to be minimized. After a slight reformulation, one arrives at

$$\tilde{T} = \arg\min_{T \in \mathcal{T}} \; \mathbb{E}_{\rho_{y_f}}\!\left[\log \rho_{y_f}(y_f) - \log \rho_{\theta_f}(T(y_f)) - \log \lvert \det \nabla T(y_f) \rvert\right]$$

where $\mathcal{T}$ is some space of lower-triangular functions in which we seek $\tilde{T}$. The objective function in Equation (33) can further be written as a Monte Carlo approximation using the given fine-scale energy samples:

$$\tilde{T} = \arg\min_{T \in \mathcal{T}} \; \frac{1}{n_f} \sum_{i=1}^{n_f} \left[\log \rho_{y_f}(y_f(\tilde{\omega}_i)) - \log \rho_{\theta_f}(T(y_f(\tilde{\omega}_i))) - \log \lvert \det \nabla T(y_f(\tilde{\omega}_i)) \rvert\right].$$
The discrete objective in Equation (34) still needs further modification before it can be used in a practical optimization routine. For the sake of brevity, we briefly enumerate the necessary approximations, concluding with the final optimization objective and some important remarks.
  • Specification of the reference $\theta_f$: The reference random variable is taken as standard normal, $\theta_f \sim \mathcal{N}(0, I)$ (as we aim to build the inverse map, i.e., a PCE approximation of the fine-scale energy in terms of a Hermite basis). Its log-density can be written as

$$\log \rho_{\theta_f}(\theta_f) = -\frac{L}{2}\log(2\pi) - \frac{1}{2}\sum_{l=1}^{L} (\theta_f^l)^2.$$
  • Separability of the objective function: The lower-triangular structure of the map allows the objective function to be split into $L$ separate optimization problems ($L$ being the dimensionality of the map). Hence, one can solve for each map component $\tilde{T}^l$ independently, where $l \in [1, L]$.
  • Map parameterization: The last essential step in solving the optimization problem is to approximate the map over some finite-dimensional space, which requires each map component $\tilde{T}^l$ to be parameterized in terms of a parameter set $\gamma_l \in \mathbb{R}^{N_{\gamma_l}}$, hence assuming the form $\tilde{T}^l(y_f, \gamma_l)$. This parameter set depends on the basis functions used to approximate the map, e.g., multivariate polynomials from a family of orthogonal polynomials or radial basis functions. After parameterization, the map is defined in terms of the container $\gamma = \{\gamma_1, \gamma_2, \dots, \gamma_L\}$, in which each element is the set of coefficients of the basis functions of the respective map component.
The final optimization problem, after employing the above-mentioned modifications, appears as

$$\min_{\gamma_l} \; \sum_{i=1}^{n_f} \left[\frac{1}{2}\, \tilde{T}^l(y_f(\tilde{\omega}_i), \gamma_l)^2 - \log \left.\frac{\partial \tilde{T}^l}{\partial y_f^l}\right|_{y_f(\tilde{\omega}_i)}\right]$$

which has to be solved for the parameter vector $\gamma$. The objective function in Equation (36) splits into $L$ separate problems (one per map component $\tilde{T}^l(y_f, \gamma_l)$) that are convex, linear in the respective parameter set $\gamma_l$, and can be solved independently using Newton-based optimization algorithms [79].
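As a minimal sketch of the component-wise objective in Equation (36), consider a one-dimensional affine map $T(y; a, b) = a + b\,y$ with $b > 0$; the affine family and all numbers below are illustrative assumptions, not part of the paper's setup. For this family, the sampled objective has the closed-form minimizer $b = 1/\mathrm{std}(y)$, $a = -b\,\mathrm{mean}(y)$, i.e., the optimal affine transport to $\mathcal{N}(0, 1)$ is standardization of the samples:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(3.0, 2.0, size=2000)   # synthetic stand-in for energy samples

def objective(a, b, y):
    """Sampled 1D objective of Eq. (36) for T(y) = a + b*y with b > 0:
    sum_i [ 0.5*T(y_i)^2 - log(dT/dy) ]."""
    return np.sum(0.5 * (a + b * y) ** 2 - np.log(b))

# Closed-form minimizer over the affine family: standardization of the samples.
b_opt = 1.0 / y.std()
a_opt = -b_opt * y.mean()
theta = a_opt + b_opt * y             # transformed (reference) samples

assert abs(theta.mean()) < 1e-12 and abs(theta.std() - 1.0) < 1e-12
# Any perturbation of the optimum increases the sampled objective.
J0 = objective(a_opt, b_opt, y)
assert objective(a_opt + 0.05, b_opt, y) > J0
assert objective(a_opt, 1.1 * b_opt, y) > J0
assert objective(a_opt, 0.9 * b_opt, y) > J0
```

For richer (e.g., polynomial) parameterizations, the objective stays convex per component, which is what the Newton-based solvers mentioned above exploit.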

4.2.2. PCE Approximation of Energy

In the preceding discussion, we have described the construction of a transport map using the energy samples to obtain the corresponding output samples $\theta_f(\tilde{\omega}_i)$, which are of standard normal type. Our aim is to construct the fine-scale measurement map $\varphi_{y_f}(w_{y_f}, \theta_f(\tilde{\omega}))$ in Equation (28) as a PCE approximation $\sum_{\alpha \in J_\Psi} y_f^{(\alpha)} \Psi_\alpha(\theta_f(\tilde{\omega}))$. In other words, $\hat{y}_f(\theta_f(\tilde{\omega}))$ is the inverse of the originally computed transport map $\tilde{T}(y_f, \gamma)$. Therefore, given the energy samples $\{y_f(\tilde{\omega}_i)\}_{i=1}^{n_f}$ and their transformed standard counterparts $\{\theta_f(\tilde{\omega}_i)\}_{i=1}^{n_f}$, such that $\tilde{T}(y_f(\tilde{\omega}_i), \gamma) = \theta_f(\tilde{\omega}_i)$, one may compute the PCE of the fine-scale energy measurement as

$$\varphi_{y_f}(w_{y_f}, \theta_f(\tilde{\omega})) := \hat{y}_f(\theta_f(\tilde{\omega})) = \sum_{\alpha \in J_\Psi} y_f^{(\alpha)} \Psi_\alpha(\theta_f(\tilde{\omega}))$$

Compared to Equation (28), the parameters $w_{y_f}$ can be interpreted as the PCE coefficients $\{y_f^{(\alpha)}\}_{\alpha \in J_\Psi}$ of the chosen Hermite basis. The computation of the fine-scale PCE is summarized in Algorithm 2.
Algorithm 2 Fine-Scale Energy PCE
1: Generate fine-scale material realizations:
  • Obtain $n_f$ samples from the fine-scale parameter random vector $\kappa_f(\tilde{\omega}) \in \mathbb{R}^{M_f}$:
    $\{\kappa_f(\tilde{\omega}_i)\}_{i=1}^{n_f} : \{(\kappa_f^1(\tilde{\omega}_i), \kappa_f^2(\tilde{\omega}_i), \dots, \kappa_f^{M_f}(\tilde{\omega}_i))\}_{i=1}^{n_f}$
2: Obtain fine-scale energy samples:
  • Input $\{\kappa_f(\tilde{\omega}_i)\}_{i=1}^{n_f}$ into the FEM model $M_f$ to obtain the solutions $\{u_f(x, \tilde{\omega}_i)\}_{i=1}^{n_f}$;
  • Input $\{u_f(x, \tilde{\omega}_i)\}_{i=1}^{n_f}$ into the measurement operator $Y_f$ to obtain the energy samples:
    $\{Y_f(u_f(x, \tilde{\omega}_i))\}_{i=1}^{n_f} : \{y_f(\tilde{\omega}_1), \dots, y_f(\tilde{\omega}_{n_f})\}, \quad y_f(\tilde{\omega}_i) \in \mathbb{R}^L$
3: Obtain transformed energy samples using the transport map:
  • Given the energy samples $\{y_f(\tilde{\omega}_i)\}_{i=1}^{n_f}$, compute $\tilde{T}(y_f(\tilde{\omega}_i), \gamma)$ by solving Equation (36);
  • Obtain $\{\theta_f(\tilde{\omega}_i)\}_{i=1}^{n_f}$ from the computed map, such that
    $\tilde{T}(y_f(\tilde{\omega}_i), \gamma) = \theta_f(\tilde{\omega}_i), \quad \theta_f(\tilde{\omega}_i) \in \mathbb{R}^L$
4: Construct the PCE of the energy samples:
  • Provided the $n_f$ pairs of energies and their transformed values, $\{(y_f(\tilde{\omega}_i), \theta_f(\tilde{\omega}_i))\}_{i=1}^{n_f}$;
  • Set the PCE approximation of the energy using these pairs:
    $\hat{y}_f(\theta_f(\tilde{\omega})) = \sum_{\alpha \in J_\Psi} y_f^{(\alpha)} \Psi_\alpha(\theta_f(\tilde{\omega}))$
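Step 4 of Algorithm 2 amounts to a linear regression of the energy samples on the Hermite basis evaluated at the transformed standard normal samples. A small sketch with synthetic data follows; the quadratic energy relation and all numbers are hypothetical, chosen so that the exact PCE coefficients are known:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(1)
theta = rng.normal(size=500)         # transformed standard normal samples
y = theta**2 + 2.0 * theta + 1.0     # synthetic "energy": exactly 2*He0 + 2*He1 + 1*He2

# Design matrix of probabilists' Hermite polynomials He_0..He_3 at theta
Psi = hermevander(theta, 3)
coeffs, *_ = np.linalg.lstsq(Psi, y, rcond=None)
# coeffs recovers the PCE coefficients [2, 2, 1, 0] of the Hermite basis
```

The least-squares fit is exact here because the synthetic relation lies in the span of the basis; for real energy samples, the residual measures the PCE truncation error.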

4.3. Coarse-Scale Parameter Estimation

Having the PCE approximations for the coarse-scale prior $\hat{q}_c(\theta_c(\omega))$, its energy prediction in Equation (27), and the fine-scale energy measurement in Equation (38), the corresponding up-scaled posterior is obtained by substituting the respective approximations into Equation (26). The resulting computational form of the update formula in Equation (26) appears in terms of PCE coefficients as

$$\forall \alpha \in J: \quad q_{a,c}^{(\alpha)} = q_c^{(\alpha)} + K_g \left( y_f^{(\alpha)} - y_c^{(\alpha)} \right),$$

where $q_{a,c}$ denotes the coefficients of the updated coarse-scale parameter PCE, with the fine-scale measurement information assimilated into them. $K_g$ is the well-known Kalman gain, defined as

$$K_g = C_{q_c y_c} \left( C_{y_c} + C_\epsilon \right)^{-1},$$

and evaluated directly from the prior PCE coefficients. In particular, the formula for the covariance matrix $C_{y_c}$ reads

$$C_{y_c} = \sum_{\alpha > 0} y_c^{(\alpha)} \left( y_c^{(\alpha)} \right)^{\mathsf{T}} \alpha!,$$

and the other covariance matrices appearing in Equation (40) are defined analogously.
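The coefficient-level update of Equations (39)–(41) can be sketched as follows; the multi-index set (two basic RVs, first order), the coefficient values, and the error covariance $C_\epsilon$ below are hypothetical placeholders:

```python
import numpy as np
from math import factorial

# PCE coefficients over multi-indices alpha = (0,0), (1,0), (0,1) for 2 basic RVs.
# Rows: multi-index; columns: components of the quantity. Hypothetical numbers.
alphas = [(0, 0), (1, 0), (0, 1)]
q_c = np.array([[2.0, 1.5],    # mean (alpha = 0) of the 2 coarse parameters
                [0.4, 0.1],
                [0.0, 0.3]])
y_c = np.array([[5.0, 3.0],    # coarse-scale energy prediction (L = 2 load cases)
                [0.8, 0.2],
                [0.1, 0.5]])
y_f = np.array([[5.6, 2.7],    # fine-scale energy measurement PCE
                [0.3, 0.1],
                [0.2, 0.2]])

def pce_cov(a, b, alphas):
    """Covariance from PCE coefficients: sum over alpha > 0 of a(alpha) b(alpha)^T * alpha!"""
    C = np.zeros((a.shape[1], b.shape[1]))
    for k, alpha in enumerate(alphas):
        if sum(alpha) == 0:
            continue  # the mean term does not contribute to the covariance
        fact = np.prod([factorial(ai) for ai in alpha])
        C += fact * np.outer(a[k], b[k])
    return C

C_qy = pce_cov(q_c, y_c, alphas)
C_yy = pce_cov(y_c, y_c, alphas)
C_eps = 1e-3 * np.eye(2)                  # small error covariance (assumed)

K_g = C_qy @ np.linalg.inv(C_yy + C_eps)  # Kalman gain, Eq. (40)
q_a = q_c + (y_f - y_c) @ K_g.T           # coefficient-wise update, Eq. (39)
```

Note that the same gain $K_g$ acts on every PCE coefficient, so the full posterior expansion is obtained at essentially the cost of one small linear solve.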
The overall up-scaling procedure is summarized in Algorithm 3.
Algorithm 3 Up-Scaling
1: Input:
  • the RVE with appropriate boundary conditions, $\Gamma_f$ for the fine-scale model and $\Gamma_c$ for the coarse-scale superelement;
  • the fine-scale model including variations (see Equation (3)):
    $Y_f(\kappa_f, u_f(\kappa_f, \Gamma_f))$
  • the prior knowledge on the coarse-scale parameter $q_c$;
  • the coarse-scale model (see Equation (5)):
    $Y_c(\kappa_c, u_c(\kappa_c, \Gamma_c))$
2: Output: the updated coarse-scale parameter $q_{a,c}$
3: Predict the response of the superelement on the coarse scale, $\hat{y}_c(\theta_c(\omega))$ (prior prediction, epistemic knowledge):
run Algorithm 1
4: Compute the surrogate of the fine-scale model, $\hat{y}_f(\theta_f(\tilde{\omega}))$ (prediction of the aleatory uncertainty due to variation on the fine scale):
run Algorithm 2
5: Estimate the Kalman gain using Equations (40) and (41):
$K_g = C_{q_c y_c} \left( C_{y_c} + C_\epsilon \right)^{-1}$
6: Update the coarse-scale parameters using the PCE representations (see Equation (39)):
$\forall \alpha \in J: \quad q_{a,c}^{(\alpha)} = q_c^{(\alpha)} + K_g \left( y_f^{(\alpha)} - y_c^{(\alpha)} \right)$

5. Numerical Examples

In this section, we present the application of the up-scaling methodology developed in the previous sections. The problem considered here is the estimation of the parameters of a coarse-scale homogeneous, isotropic, elastic phenomenological model of bi-phased concrete, with the parameters modeled as random variables. The measurements come from a more elaborate fine-scale model whose stochastic nature stems from the random distribution of hard inclusions embedded in the soft matrix phase. In the following, the computational setup for the modeling and simulation of the coarse- and fine-scale models is described, concluding with the presentation and discussion of the results of the considered numerical experiments.

5.1. Computational Setup

The material specimen is modeled as a 3D block of dimensions $(L_x, L_y, L_z) = (10, 10, 10)$ cm. In order to extract the desired coarse-scale parameters, two displacement-based loading cases are considered: hydrostatic compression as load case I and bi-axial tension–compression (pure shear) as load case II. The corresponding deformation tensors for the application of the required displacement boundary conditions on the specimen faces are given as

$$F_I = \begin{pmatrix} p & 0 & 0 \\ 0 & p & 0 \\ 0 & 0 & p \end{pmatrix}; \quad F_{II} = \begin{pmatrix} 0.5p & 0 & 0 \\ 0 & -p & 0 \\ 0 & 0 & 0.5p \end{pmatrix}$$
where $p$ is the proportionality factor, taken as $0.003$ for the single load increment. The resulting deformations are illustrated in Figure 1. The specimen block is modeled as one 3D elastic brick element on the coarse scale, with $\{K, G\}$ as the random parameters that need to be updated. As mentioned earlier, the fine-scale block comprises a random distribution of spherical inclusions in the matrix phase and is computationally represented by a lattice truss model. The adopted truss model was proposed in [80] to model bi-phased materials [81,82] (e.g., concrete), with an enhanced ability to model the interface between the inclusion and the matrix phase (Figure 2 shows the deformation approximation of the two-phase material in the truss element). As a consequence, one does not need to model the interface explicitly, hence eliminating the need to employ a different mesh for each fine-scale realization, as illustrated graphically in Figure 3. This offers the great advantage of a reduced computational burden in terms of mesh generation, and it also avoids mesh-related error variations when one considers an ensemble of fine-scale realizations. The truss elements are basically the edges of the tetrahedral elements filling the 3D block; the cross-sectional area of each element is obtained by the dual Voronoi tessellation of the tetrahedral mesh. Further details on the mesh generation and the FE formulation of the truss elements can be found in [82]. For the numerical examples considered here, the discrete fine scale is composed of 58,749 truss elements (different realizations are shown in Figure 4).
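To see why these two load cases suffice to identify $\{K, G\}$, a brief sketch under the assumption of homogeneous, isotropic, small-strain elasticity (the moduli values and units below are hypothetical): a hydrostatic load stores purely volumetric energy, $\tfrac{1}{2} K (\operatorname{tr}\varepsilon)^2 V$, while a trace-free pure-shear load stores purely deviatoric energy, $G\, \varepsilon^{\mathrm{dev}} \!:\! \varepsilon^{\mathrm{dev}}\, V$:

```python
import numpy as np

p = 0.003                      # load proportionality factor from the paper
V = 10.0 * 10.0 * 10.0         # specimen volume in cm^3
K, G = 40.0, 20.0              # hypothetical bulk/shear moduli (consistent units assumed)

def stored_energy(eps, K, G, V):
    """Stored energy of a homogeneous isotropic linear-elastic block:
    U = V * (0.5*K*tr(eps)^2 + G*dev(eps):dev(eps))."""
    tr = np.trace(eps)
    dev = eps - tr / 3.0 * np.eye(3)
    return V * (0.5 * K * tr**2 + G * np.tensordot(dev, dev))

eps_I = p * np.eye(3)                         # hydrostatic load case I
eps_II = np.diag([0.5 * p, -p, 0.5 * p])      # trace-free pure-shear load case II

# Case I depends only on K, case II only on G, so the two energy
# measurements together identify {K, G}.
assert np.isclose(stored_energy(eps_I, K, G, V), 0.5 * K * (3 * p)**2 * V)
assert np.isclose(stored_energy(eps_II, K, G, V), 1.5 * G * p**2 * V)
```

This decoupling is also what later makes the prior correlations between the two energies appear as concentric circles in Figure 11.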
The material parameters for the inclusion and matrix phases, i.e., the Young's moduli of their representative truss elements, are given in Table 1. For the interface elements, an additional parameter $\theta_l$, which defines the relative proportion of the inclusion and matrix phases in the element, is also computed and provided as input to the FE model.
The computational model for the fine scale is implemented as a user-defined element in FEAP (Finite Element Analysis Program) Version 8.5 [83], a multi-purpose finite element program from the University of California, Berkeley, CA, USA. Moreover, the computation of the PCE approximations for the material parameters and measurements, as well as the update step of Algorithm 3 (Section 4.3) of the up-scaling scheme, are implemented in Matlab R2022b [84].

5.2. Verification Case: Up-Scaling One Fine-Scale Realization

In this case, we demonstrate the working of the up-scaling framework on one fine-scale realization of the lattice model described previously, composed of 50 inclusions at a 25% volume fraction. The prior characteristics of the coarse-scale parameters are given in Table 2, and their corresponding probability density functions (PDFs) are shown in Figure 5 as the prior distributions. The prior material properties are given as input to the FEM simulation model to obtain the corresponding energies for the two loading cases; see Figure 1. The prior energy PDFs, along with their means, are shown in blue in Figure 6. The lack of knowledge regarding the fine-scale characteristics on the coarse scale is reflected in the spread of the energy PDFs and the offset of the mean values from their fine-scale counterparts. The PDFs of the posterior material properties (also shown in Figure 5), obtained after performing the up-scaling procedure, show a marked reduction in spread compared to the prior ones. A similar behavior is observed in the posterior coarse-scale energy PDFs, as shown in Figure 6. The uncertainty in the energy prediction of the posterior properties is significantly reduced, and the respective mean values have shifted close to the fine-scale energy values, with a negligible difference between the two. Following this result, one may conclude that the Bayesian type of approach can be used to identify the properties of the coarse-scale continuum model given the fine-scale discrete measurement data.

5.3. Up-Scaling Ensemble of Fine-Scale Realizations

In this example, we consider the more realistic scenario of up-scaling an ensemble of fine-scale realizations. The main task in this case is to construct the PCE approximations for the measured fine-scale energies. For this purpose, an ensemble of 1000 fine-scale realizations is generated for different numbers of particles, $\{5, 10, 25, 50\}$, at a fixed volume fraction of 25%. The samples differ from one another in the placement of the spherical inclusions in the matrix phase, determined using the random sequential adsorption algorithm [85]. The corresponding ensemble of fine-scale energies is obtained with the discrete truss FEM model (described briefly in Section 5.1) for the two loading cases. The energy samples are then used to construct the transport map in order to obtain the corresponding standard normal samples; see Section 4.2.1. With the pairs of energies and their transformed values at hand, the required PCE approximation in Equation (37) is constructed for the fine-scale energy (detailed in Section 4.2). We have used the TransportMaps library [86] for the map construction; in our example problem, the map is chosen to be of the Isotropic Integrated Squared Triangular form and is parameterized using Hermite polynomials (further details on the map types and the related optimization solvers can be found in the online help of the mentioned library). The PDFs of the measured fine-scale energies for the loading cases of Figure 1 are shown in Figure 7, which indicates that the inherent uncertainty of the fine scale decreases as the number of inclusions in the specimen grows. The largest spread is observed in the measurement PDF for the specimen with five particles, whereas the 50-inclusion case exhibits the smallest variability, as expected. To estimate the bulk and shear moduli on the coarse scale, we need to make an assumption on the prior uncertainty.
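As a library-free, one-dimensional illustration of the Gaussianization that the monotone map performs on the energy samples (this is only a rank-based sketch with synthetic data, not the Isotropic Integrated Squared Triangular maps of the TransportMaps library used here):

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(2)
y = rng.lognormal(mean=0.0, sigma=0.5, size=1000)  # skewed stand-in for energy samples

# Normal-score transform: rank -> uniform midpoint -> standard normal quantile.
ranks = y.argsort().argsort() + 1            # 1..n ranks of the samples
u = (ranks - 0.5) / len(y)                   # midpoint plotting positions in (0, 1)
theta = np.array([NormalDist().inv_cdf(p) for p in u])

# The transformed samples are approximately standard normal, and the
# transform is monotone in y, like a 1D triangular map component.
assert abs(theta.mean()) < 0.05 and abs(theta.std() - 1.0) < 0.05
assert np.all(np.diff(theta[np.argsort(y)]) > 0)
```

The resulting $(y, \theta)$ pairs are exactly the kind of input that the Hermite-basis regression of Algorithm 2, step 4, consumes.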
Based on homogenization theory [87], each fine-scale realization of the energy measurement is used to estimate effective $K$ and $G$ values analytically. By averaging these, we obtain the mean value of the prior, to which a large variance is then added to provide sufficient variability; see Figure 8 for the prior distributions.
The PDFs of the updated coarse-scale parameters are shown in Figure 8. They exhibit a reduced degree of uncertainty compared to the chosen prior. Moreover, the aleatoric uncertainty observed in the response of the fine-scale specimens with respect to the number of inclusions is also reflected on the updated coarse scale, i.e., the PDFs of $K$ and $G$ have a progressively increasing variance when going from 50 to 5 particles in the fine-scale specimen. Finally, to demonstrate the validity of the updated coarse scale, the corresponding PDFs of the posterior energy predictions are shown along with the fine-scale measurement ones in Figure 9. The up-scaled coarse-scale predictions match the fine-scale measurements very well; the difference between the PDFs is barely noticeable for each inclusion case and for both loading cases considered.
An interesting behavior related to the up-scaling experiment can be observed by examining the correlations between the prior and posterior material properties and the corresponding energy predictions in Figure 10 and Figure 11, respectively. Since $K$ and $G$ and the associated loading conditions under consideration are independent of each other (in particular, the loading cases of hydrostatic compression and pure shear are mutually exclusive in terms of triggering volumetric and deviatoric deformations), the prior correlations of both the material properties and the energies are depicted as concentric circles in the left plots of Figure 10 and Figure 11 for the different fine-scale particle configurations. The correlations of the posterior, or up-scaled, coarse-scale properties and the corresponding energies are shown in the right plots of Figure 10 and Figure 11, respectively. An obvious observation is that, as one increases the number of particles, the variability decreases, which is also confirmed by the spread of the distributions of the up-scaled coarse-scale $K$ and $G$, e.g., in Figure 8. Additionally, the correlations of the posterior coarse-scale properties and the corresponding energy predictions are depicted as ellipses, from which one can conclude that the updated material parameters are correlated. A possible reason for the correlation of the updated parameters is the geometric uncertainty (the spatial positions of the particles) present in the considered RVEs, which is common to both types of loads.

6. Conclusions

In this paper, a Bayesian up-scaling framework is devised and applied in a computational homogenization setting to estimate coarse-scale, homogenized material parameters that are representative of a more elaborate fine-scale material model. The coarse and fine scales are assumed to be completely incomparable in terms of material description and the corresponding FEM simulation models; therefore, energy is used as the medium of communication between the two scales, since it is representative of the physical behavior (e.g., elastic and inelastic phenomena) common to both. The algorithm is computationally efficient owing to the use of functional approximations of the involved quantities, and it is also non-intrusive: the PCE approximations are readily built from the ensemble of solutions obtained by the repeated execution of deterministic FEM solvers for the given random material samples. With regard to constructing the PCE approximation of the fine-scale energy, the construction of a transport map from the energy samples has proved to be computationally efficient, as it allows one to build the PCE approximation directly from the energy samples and their transformed standard normal counterparts, instead of using random samples to build fine-scale material realizations, which can be very expensive owing to their high dimensionality, or impossible to build in the first place. The representative examples show promising results, as one is able to calibrate a continuum-based coarse-scale model using energy measurements from truss-based discrete fine-scale models. The reliability of the updated coarse-scale parameters is judged by comparing their energy predictions with the fine-scale ones. In both cases, a single realization and an ensemble of realizations, satisfactory results of reasonable accuracy have been obtained.
The developed framework and the example study presented in this paper can be used as a platform to improve upon the existing algorithm and investigate more complex examples, e.g., incorporating inelastic or dissipative material behavior for up-scaling.

Author Contributions

B.V.R. proposed the idea; M.S.S. conducted the numerical implementation and simulations. Both M.S.S. and B.V.R. contributed to the writing. H.G.M. contributed to the writing, supervision, and funding acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

The support is provided by the German Science Foundation (Deutsche Forschungsgemeinschaft, DFG) as part of the priority programs SPP 1886 and SPP 1748.

Data Availability Statement

Data available upon request from the authors.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following main abbreviations and mathematical symbols are used in this manuscript:
FEM finite element method
PCE polynomial chaos expansion
gPCE generalized polynomial chaos expansion
PDF probability density function
RV random variable
QoI quantity of interest
RVE representative volume element
PINN physics-informed neural network
EnKF ensemble Kalman filter
U c state-space variables of coarse-scale model
E c energy functional of coarse-scale model
D c dissipation potential of coarse-scale model
ϵ ^ measurement noise associated with the fine-scale
ϵ ( ω ϵ ) measurement and modelling error associated with coarse-scale model
Y c ( · ) measurement operator of coarse scale
κ c material parameters of coarse-scale model
Γ c external excitation on coarse level
Ω probability space of set of all events associated with RV
P probability measure associated with RV
F sigma-algebra
B sub-sigma-algebra
q c : = log κ c RV used in the computational scheme, taken as the log of the coarse-scale material parameter κ c .
ϕ measurable map of RVs
E ( · ) expectation operator
D φ ( · | | · ) Bregman’s loss function (BLF) or divergence between RVs
H hyperplane tangent to differentiable function at a given point
Q n set of n-th degree polynomials
cov covariance matrix between RVs
K g Kalman gain of PCE-based filter
J Z finite set of multi-indices with cardinality (size) Z
D KL ( · | | · ) Kullback–Leibler (KL) divergence between probability densities
{ Ψ α } α J Z set of linearly independent functions for PCE
M c , M f coarse and fine-scale computational models
C T cost function associated with transport map
T # ( · ) pushforward operator associated with transport map
T space of lower-triangular functions for transport map approximation
μ y probability measure of RV y
θ f N ( 0 , I ) vector valued standard normal RVs
C y c covariance matrix in terms of PCE coefficients of y c
F I deformation tensor
E p a r t , E m a t Young's moduli for the particle and matrix truss elements

References

  1. Feyel, F.; Chaboche, J.L. FE2 multiscale approach for modelling the elastoviscoplastic behaviour of long fibre SiC/Ti composite materials. Comput. Methods Appl. Mech. Eng. 2000, 183, 309–330. [Google Scholar] [CrossRef]
  2. Feyel, F. A multilevel finite element method (FE2) to describe the response of highly non-linear structures using generalized continua. Comput. Methods Appl. Mech. Eng. 2003, 192, 3233–3244. [Google Scholar] [CrossRef]
  3. Yvonnet, J.; Bonnet, G. A consistent nonlocal scheme based on filters for the homogenization of heterogeneous linear materials with non-separated scales. Int. J. Solids Struct. 2014, 51, 196–209. [Google Scholar] [CrossRef]
  4. Yvonnet, J.; Bonnet, G. Nonlocal/coarse-graining homogenization of linear elastic media with non-separated scales using least-square polynomial filters. Int. J. Multiscale Comput. Eng. 2014, 12, 375–395. [Google Scholar] [CrossRef]
  5. Ma, J.; Zhang, S.; Wriggers, P.; Gao, W.; De Lorenzis, L. Stochastic homogenized effective properties of three-dimensional composite material with full randomness and correlation in the microstructure. Comput. Struct. 2014, 144, 62–74. [Google Scholar] [CrossRef]
  6. Ma, J.; Sahraee, S.; Wriggers, P.; De Lorenzis, L. Stochastic multiscale homogenization analysis of heterogeneous materials under finite deformations with full uncertainty in the microstructure. Comput. Mech. 2015, 55, 819–835. [Google Scholar] [CrossRef]
  7. Ma, J.; Wriggers, P.; Li, L. Homogenized thermal properties of 3D composites with full uncertainty in the microstructure. Struct. Eng. Mech. 2016, 57, 369–387. [Google Scholar] [CrossRef]
  8. Ma, J.; Temizer, I.; Wriggers, P. Random homogenization analysis in linear elasticity based on analytical bounds and estimates. Int. J. Solids Struct. 2011, 48, 280–291. [Google Scholar] [CrossRef]
  9. Savvas, D.; Stefanou, G.; Papadrakakis, M. Determination of RVE size for random composites with local volume fraction variation. Comput. Methods Appl. Mech. Eng. 2016, 305, 340–358. [Google Scholar] [CrossRef]
  10. Savvas, D.; Stefanou, G. Determination of random material properties of graphene sheets with different types of defects. Compos. Part B Eng. 2018, 143, 47–54. [Google Scholar] [CrossRef]
  11. Stefanou, G.; Savvas, D.; Papadrakakis, M. Stochastic finite element analysis of composite structures based on material microstructure. Compos. Struct. 2015, 132, 384–392. [Google Scholar] [CrossRef]
  12. Stefanou, G. Simulation of heterogeneous two-phase media using random fields and level sets. Front. Struct. Civ. Eng. 2015, 9, 114–120. [Google Scholar] [CrossRef]
  13. Stefanou, G.; Savvas, D.; Papadrakakis, M. Stochastic finite element analysis of composite structures based on mesoscale random fields of material properties. Comput. Methods Appl. Mech. Eng. 2017, 326, 319–337. [Google Scholar] [CrossRef]
  14. Clément, A.; Soize, C.; Yvonnet, J. Computational nonlinear stochastic homogenization using a nonconcurrent multiscale approach for hyperelastic heterogeneous microstructures analysis. Int. J. Numer. Methods Eng. 2012, 91, 799–824. [Google Scholar] [CrossRef]
  15. Clément, A.; Soize, C.; Yvonnet, J. Uncertainty quantification in computational stochastic multiscale analysis of nonlinear elastic materials. Comput. Methods Appl. Mech. Eng. 2013, 254, 61–82. [Google Scholar] [CrossRef]
  16. Clément, A.; Soize, C.; Yvonnet, J. High-dimension polynomial chaos expansions of effective constitutive equations for hyperelastic heterogeneous random microstructures. In Proceedings of the Congress on Computational Methods in Applied Sciences and Engineering (ECCOMAS 2012), Vienna, Austria, 10–14 September 2012; Vienna University of Technology: Vienna, Austria, 2012; pp. 1–2. [Google Scholar]
  17. Staber, B.; Guilleminot, J. Functional approximation and projection of stored energy functions in computational homogenization of hyperelastic materials: A probabilistic perspective. Comput. Methods Appl. Mech. Eng. 2017, 313, 1–27. [Google Scholar] [CrossRef]
  18. Lu, X.; Giovanis, D.G.; Yvonnet, J.; Papadopoulos, V.; Detrez, F.; Bai, J. A data-driven computational homogenization method based on neural networks for the nonlinear anisotropic electrical response of graphene/polymer nanocomposites. Comput. Mech. 2019, 64, 307–321. [Google Scholar] [CrossRef]
  19. Ba Anh, L.; Yvonnet, J.; He, Q.C. Computational homogenization of nonlinear elastic materials using Neural Networks: Neural Networks-based computational homogenization. Int. J. Numer. Methods Eng. 2015, 104, 1061–1084. [Google Scholar] [CrossRef]
  20. Sagiyama, K.; Garikipati, K. Machine learning materials physics: Deep neural networks trained on elastic free energy data from martensitic microstructures predict homogenized stress fields with high accuracy. arXiv 2019, arXiv:1901.00524. Available online: http://arxiv.org/abs/1901.00524 (accessed on 15 November 2024).
  21. Unger, J.; Könke, C. An inverse parameter identification procedure assessing the quality of the estimates using Bayesian neural networks. Appl. Soft Comput. 2011, 11, 3357–3367. [Google Scholar] [CrossRef]
  22. Unger, J.; Könke, C. Coupling of scales in a multiscale simulation using neural networks. Comput. Struct. 2008, 86, 1994–2003. [Google Scholar] [CrossRef]
  23. Lizarazu, J.; Harirchian, E.; Shaik, U.A.; Shareef, M.; Antoni-Zdziobek, A.; Lahmer, T. Application of machine learning-based algorithms to predict the stress-strain curves of additively manufactured mild steel out of its microstructural characteristics. Results Eng. 2023, 20, 101587. [Google Scholar] [CrossRef]
  24. Logarzo, H.J.; Capuano, G.; Rimoli, J.J. Smart constitutive laws: Inelastic homogenization through machine learning. Comput. Methods Appl. Mech. Eng. 2021, 373, 113482. [Google Scholar] [CrossRef]
  25. Wang, K.; Sun, W. A multiscale multi-permeability poroplasticity model linked by recursive homogenizations and deep learning. Comput. Methods Appl. Mech. Eng. 2018, 334, 337–380. [Google Scholar] [CrossRef]
  26. Rixner, M.; Koutsourelakis, P.S. Self-supervised optimization of random material microstructures in the small-data regime. Npj Comput. Mater. 2022, 8, 46. [Google Scholar] [CrossRef]
  27. Frankel, A.; Jones, R.; Alleman, C.; Templeton, J. Predicting the mechanical response of oligocrystals with deep learning. Comput. Mater. Sci. 2019, 169, 109099. [Google Scholar] [CrossRef]
  28. Wang, K.; Sun, W. Meta-modeling game for deriving theory-consistent, microstructure-based traction–separation laws via deep reinforcement learning. Comput. Methods Appl. Mech. Eng. 2019, 346, 216–241. [Google Scholar] [CrossRef]
  29. Raissi, M.; Perdikaris, P.; Karniadakis, G. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 2019, 378, 686–707. [Google Scholar] [CrossRef]
  30. Haghighat, E.; Raissi, M.; Moure, A.; Gomez, H.; Juanes, R. A physics-informed deep learning framework for inversion and surrogate modeling in solid mechanics. Comput. Methods Appl. Mech. Eng. 2021, 379, 113741. [Google Scholar] [CrossRef]
  31. Chen, C.T.; Gu, G.X. Physics-Informed Deep-Learning For Elasticity: Forward, Inverse, and Mixed Problems. Adv. Sci. 2023, 10, 2300439. [Google Scholar] [CrossRef]
  32. Xu, K.; Tartakovsky, A.M.; Burghardt, J.; Darve, E. Inverse modeling of viscoelasticity materials using physics constrained learning. arXiv 2020, arXiv:2005.04384. [Google Scholar]
  33. Tartakovsky, A.M.; Marrero, C.O.; Perdikaris, P.; Tartakovsky, G.D.; Barajas-Solano, D. Learning parameters and constitutive relationships with physics informed deep neural networks. arXiv 2018, arXiv:1808.03398. [Google Scholar]
  34. Haghighat, E.; Abouali, S.; Vaziri, R. Constitutive model characterization and discovery using physics-informed deep learning. Eng. Appl. Artif. Intell. 2023, 120, 105828. [Google Scholar] [CrossRef]
  35. Rojas, C.J.; Boldrini, J.L.; Bittencourt, M.L. Parameter identification for a damage phase field model using a physics-informed neural network. Theor. Appl. Mech. Lett. 2023, 13, 100450. [Google Scholar] [CrossRef]
  36. Maia, M.; Rocha, I.; Kerfriden, P.; van der Meer, F. Physically recurrent neural networks for path-dependent heterogeneous materials: Embedding constitutive models in a data-driven surrogate. Comput. Methods Appl. Mech. Eng. 2023, 407, 115934. [Google Scholar] [CrossRef]
  37. Rocha, I.; Kerfriden, P.; van der Meer, F. Machine learning of evolving physics-based material models for multiscale solid mechanics. Mech. Mater. 2023, 184, 104707. [Google Scholar] [CrossRef]
  38. Borkowski, L.; Skinner, T.; Chattopadhyay, A. Woven ceramic matrix composite surrogate model based on physics-informed recurrent neural network. Compos. Struct. 2023, 305, 116455. [Google Scholar] [CrossRef]
  39. Flaschel, M.; Kumar, S.; De Lorenzis, L. Unsupervised discovery of interpretable hyperelastic constitutive laws. Comput. Methods Appl. Mech. Eng. 2021, 381, 113852. [Google Scholar] [CrossRef]
  40. Wang, Z.; Estrada, J.; Arruda, E.; Garikipati, K. Inference of deformation mechanisms and constitutive response of soft material surrogates of biological tissue by full-field characterization and data-driven variational system identification. J. Mech. Phys. Solids 2021, 153, 104474. [Google Scholar] [CrossRef]
  41. Flaschel, M.; Kumar, S.; De Lorenzis, L. Automated discovery of generalized standard material models with EUCLID. Comput. Methods Appl. Mech. Eng. 2023, 405, 115867. [Google Scholar] [CrossRef]
  42. Flaschel, M.; Kumar, S.; De Lorenzis, L. Discovering plasticity models without stress data. Npj Comput. Mater. 2022, 8, 91. [Google Scholar] [CrossRef]
  43. Joshi, A.; Thakolkaran, P.; Zheng, Y.; Escande, M.; Flaschel, M.; De Lorenzis, L.; Kumar, S. Bayesian-EUCLID: Discovering hyperelastic material laws with uncertainties. Comput. Methods Appl. Mech. Eng. 2022, 398, 115225. [Google Scholar] [CrossRef]
  44. Koutsourelakis, P.S. Stochastic upscaling in solid mechanics: An excercise in machine learning. J. Comput. Phys. 2007, 226, 301–325. [Google Scholar] [CrossRef]
  45. Schöberl, M.; Zabaras, N.; Koutsourelakis, P.S. Predictive Collective Variable Discovery with Deep Bayesian Models. arXiv 2018, arXiv:1809.06913. [Google Scholar] [CrossRef]
  46. Felsberger, L.; Koutsourelakis, P.S. Physics-constrained, data-driven discovery of coarse-grained dynamics. arXiv 2018, arXiv:1802.03824. [Google Scholar]
  47. Sarfaraz, S.M.; Rosić, B.V.; Matthies, H.G.; Ibrahimbegović, A. Bayesian stochastic multi-scale analysis via energy considerations. arXiv 2019, arXiv:1912.03108. [Google Scholar] [CrossRef]
  48. Sarfaraz, S.M.; Rosić, B.V.; Matthies, H.G.; Ibrahimbegović, A. Stochastic upscaling via linear Bayesian updating. In Multiscale Modeling of Heterogeneous Structures; Springer: Berlin/Heidelberg, Germany, 2018; pp. 163–181. [Google Scholar]
  49. Rosić, B.V. Stochastic state estimation via incremental iterative sparse polynomial chaos based Bayesian-Gauss-Newton-Markov-Kalman filter. arXiv 2019, arXiv:1909.07209. [Google Scholar]
50. Marzouk, Y.; Moselhy, T.; Parno, M.; Spantini, A. Sampling via Measure Transport: An Introduction. In Handbook of Uncertainty Quantification; Springer International Publishing: Cham, Switzerland, 2016; pp. 1–41. [Google Scholar] [CrossRef]
  51. Parno, M.D. Transport Maps for Accelerated Bayesian Computation. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 2015. [Google Scholar]
  52. Parno, M.D.; Marzouk, Y.M. Transport Map Accelerated Markov Chain Monte Carlo. SIAM/ASA J. Uncertain. Quantif. 2018, 6, 645–682. [Google Scholar] [CrossRef]
  53. Mielke, A.; Roubíček, T. Rate Independent Systems: Theory and Application; Springer: New York, NY, USA, 2015. [Google Scholar]
  54. Barbu, A.; Zhu, S.C. Monte Carlo Methods; Springer Nature: Singapore, 2020. [Google Scholar]
  55. Banerjee, A.; Guo, X.; Wang, H. On the Optimality of Conditional Expectation as a Bregman Predictor. Inf. Theory IEEE Trans. 2005, 51, 2664–2669. [Google Scholar] [CrossRef]
56. Matthies, H.G.; Zander, E.; Rosić, B.V.; Litvinenko, A.; Pajonk, O. Inverse Problems in a Bayesian Setting; Springer International Publishing: Cham, Switzerland, 2016; pp. 245–286. [Google Scholar] [CrossRef]
  57. Xiu, D. Numerical Methods for Stochastic Computations: A Spectral Method Approach; Princeton University Press: Princeton, NJ, USA, 2010. [Google Scholar]
58. Rojo-Alvarez, J.L.; Martínez-Ramón, M.; Muñoz-Marí, J.; Camps-Valls, G. Digital Signal Processing with Kernel Methods; IEEE Press, Wiley: Hoboken, NJ, USA, 2018. [Google Scholar]
  59. Buhmann, M.D. Radial Basis Functions: Theory and Implementations; Cambridge Monographs on Applied and Computational Mathematics; Cambridge University Press: London, UK, 2003. [Google Scholar]
  60. Zimmermann, H.J. Fuzzy set theory. WIREs Comput. Stat. 2010, 2, 317–332. [Google Scholar] [CrossRef]
  61. Matthies, H.G. Stochastic finite elements: Computational approaches to stochastic partial differential equations. Z. für Angew. Math. und Mech. (ZAMM) 2008, 88, 849–873. [Google Scholar] [CrossRef]
  62. Rosić, B.V.; Matthies, H.G. Variational Theory and Computations in Stochastic Plasticity. Arch. Comput. Methods Eng. 2015, 22, 457–509. [Google Scholar] [CrossRef]
  63. Blatman, G.; Sudret, B. Adaptive sparse polynomial chaos expansion based on least angle regression. J. Comput. Phys. 2011, 230, 2345–2367. [Google Scholar] [CrossRef]
  64. Oladyshkin, S.; Nowak, W. Data-driven uncertainty quantification using the arbitrary polynomial chaos expansion. Reliab. Eng. Syst. Saf. 2012, 106, 179–190. [Google Scholar] [CrossRef]
  65. Villani, C. Optimal Transport: Old and New; Grundlehren der mathematischen Wissenschaften; Springer: Berlin/Heidelberg, Germany, 2008. [Google Scholar]
  66. Galichon, A. Optimal Transport Methods in Economics; Princeton University Press: Princeton, NJ, USA, 2018. [Google Scholar]
  67. Dominitz, A.; Angenent, S.; Tannenbaum, A. On the Computation of Optimal Transport Maps Using Gradient Flows and Multiresolution Analysis. In Proceedings of the Recent Advances in Learning and Control; Blondel, V.D., Boyd, S.P., Kimura, H., Eds.; Springer: London, UK, 2008; pp. 65–78. [Google Scholar]
  68. Brenier, Y. Optimal transportation of particles, fluids and currents. In Variational Methods for Evolving Objects; Mathematical Society of Japan (MSJ): Tokyo, Japan, 2015; pp. 59–85. [Google Scholar] [CrossRef]
  69. Solomon, J.; De Goes, F.; Peyré, G.; Cuturi, M.; Butscher, A.; Nguyen, A.; Du, T.; Guibas, L. Convolutional wasserstein distances: Efficient optimal transportation on geometric domains. ACM Trans. Graph. (TOG) 2015, 34, 66. [Google Scholar] [CrossRef]
  70. Bonneel, N.; van de Panne, M.; Paris, S.; Heidrich, W. Displacement Interpolation Using Lagrangian Mass Transport. ACM Trans. Graph. 2011, 30, 158:1–158:12. [Google Scholar] [CrossRef]
  71. Genevay, A.; Cuturi, M.; Peyré, G.; Bach, F. Stochastic Optimization for Large-scale Optimal Transport. In Advances in Neural Information Processing Systems 29; Lee, D.D., Sugiyama, M., Luxburg, U.V., Guyon, I., Garnett, R., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2016; pp. 3440–3448. [Google Scholar]
  72. Cuturi, M. Sinkhorn distances: Lightspeed computation of optimal transport. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 5–10 December 2013; pp. 2292–2300. [Google Scholar]
  73. Henderson, T.; Solomon, J. Audio Transport: A Generalized Portamento via Optimal Transport. 2019. Available online: http://arxiv.org/abs/1906.06763 (accessed on 15 November 2024).
  74. Dessein, A.; Papadakis, N.; Deledalle, C.A. Parameter estimation in finite mixture models by regularized optimal transport: A unified framework for hard and soft clustering. arXiv 2017, arXiv:1711.04366. [Google Scholar]
  75. Genevay, A.; Peyré, G.; Cuturi, M. GAN and VAE from an optimal transport point of view. arXiv 2017, arXiv:1706.01807. [Google Scholar]
  76. Eigel, M.; Gruhlke, R.; Marschall, M. Low-rank tensor reconstruction of concentrated densities with application to Bayesian inversion. Stat. Comput. 2022, 32, 27. [Google Scholar] [CrossRef]
  77. Spantini, A.; Bigoni, D.; Marzouk, Y. Inference via low-dimensional couplings. J. Mach. Learn. Res. 2018, 19, 2639–2709. [Google Scholar]
  78. Huan, X. Numerical Approaches for Sequential Bayesian Optimal Experimental Design. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 2015. [Google Scholar]
  79. Deuflhard, P. Newton Methods for Nonlinear Problems: Affine Invariance and Adaptive Algorithms; Springer Series in Computational Mathematics; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  80. Benkemoun, N. Contribution aux Approches Multi-échelles Séquencées pour la Modélisation Numérique des Matériaux à Matrice Cimentaire. Ph.D. Thesis, Ècole Normale Supérieure Paris-Saclay, Cachan, France, 2010. [Google Scholar]
  81. Benkemoun, N.; Ibrahimbegović, A.; Colliat, J.B. Anisotropic constitutive model of plasticity capable of accounting for details of meso-structure of two-phase composite material. Comput. Struct. 2012, 90, 153–162. [Google Scholar] [CrossRef]
  82. Benkemoun, N.; Hautefeuille, M.; Colliat, J.B.; Ibrahimbegović, A. Failure of heterogeneous materials: 3D meso-scale FE models with embedded discontinuities. Int. J. Numer. Methods Eng. 2010, 82, 1671–1688. [Google Scholar] [CrossRef]
  83. Taylor, R. FEAP—Finite Element Analysis Program; University of California: Berkeley, CA, USA, 2014. [Google Scholar]
84. The MathWorks Inc. MATLAB, Version 9.13.0 (R2022b); The MathWorks Inc.: Natick, MA, USA, 2022. Available online: https://www.mathworks.com (accessed on 5 October 2024).
  85. Torquato, S. Random Heterogeneous Materials: Microstructure and Macroscopic Properties; Interdisciplinary Applied Mathematics; Springer: New York, NY, USA, 2005. [Google Scholar]
86. Bigoni, D.; Spantini, A.; Morrison, R.; Baptista, R.; Marzouk, Y. TransportMaps v2.0. 2018. Available online: http://transportmaps.mit.edu/ (accessed on 20 October 2024).
  87. Ostoja-Starzewski, M. Microstructural Randomness and Scaling in Mechanics of Materials; Modern Mechanics and Mathematics; CRC Press: Boca Raton, FL, USA, 2007. [Google Scholar]
Figure 1. Loading cases: (Load I) hydrostatic compression, (Load II) bi-axial tension–compression.
Figure 2. Deformation approximation for the two-phase material in the truss element [82].
Figure 3. Inclusion and matrix interface (left) and its FEM discretization with interface elements shown in red (right) [80].
Figure 4. Fine-scale realizations for different numbers of particles (5, 10, 25, 50) embedded in the matrix phase.
Figure 5. Prior and posterior PDFs for the bulk modulus K and shear modulus G of the coarse-scale model, using one fine-scale realization.
Figure 6. Coarse-scale prior and posterior predicted energy PDFs for load cases: (I,II), using one fine-scale realization.
Figure 7. Fine-scale energy PDFs for load cases (I,II) for different numbers of particles.
Figure 8. Coarse-scale prior and posterior PDFs for K and G using fine-scale measurements with different numbers of particles.
Figure 9. Coarse-scale posterior predicted energies compared with fine-scale measurements for different numbers of particles, load cases (I,II).
Figure 10. Correlation between prior and posterior (up-scaled) coarse-scale K and G considering all fine-scale particle cases.
Figure 11. Correlation between energies from load cases I and II using coarse-scale prior and up-scaled K and G considering all fine-scale particle cases.
Table 1. Young's moduli for fine-scale particle and matrix truss elements (in GPa).

Phase      E_part    E_mat
Modulus    70,000    10,000
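As a quick sanity check on any up-scaled stiffness, the classical Voigt and Reuss mixture bounds can be computed from the two phase moduli above. This is our own illustrative sketch, not part of the paper; the particle volume fraction is an assumed value:

```python
# Illustrative sketch (not from the paper): Voigt/Reuss bounds on the effective
# Young's modulus of a two-phase mix, using the phase moduli of Table 1 and an
# assumed particle volume fraction.
E_part = 70_000.0  # particle phase modulus (Table 1)
E_mat = 10_000.0   # matrix phase modulus (Table 1)
phi = 0.3          # assumed particle volume fraction (illustrative only)

E_voigt = phi * E_part + (1.0 - phi) * E_mat           # upper bound (parallel)
E_reuss = 1.0 / (phi / E_part + (1.0 - phi) / E_mat)   # lower bound (series)
print(E_reuss, E_voigt)
```

Any calibrated coarse-scale stiffness consistent with this phase contrast should fall between the two bounds.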
Table 2. Prior statistics in terms of mean μ and standard deviation σ for the coarse-scale material parameters (in MPa).

Property    K           G
μ           1577.307    1211.65
σ           157.73      121.16
Share and Cite

MDPI and ACS Style

Sarfaraz, M.S.; Rosić, B.V.; Matthies, H.G. Stochastic Up-Scaling of Discrete Fine-Scale Models Using Bayesian Updating. Computation 2025, 13, 68. https://doi.org/10.3390/computation13030068
