Review

A Review on Local Failure Probability Sensitivity Analysis

ONERA/DTIS, Université de Toulouse, F-31055 Toulouse, France
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(21), 12021; https://doi.org/10.3390/app132112021
Submission received: 20 September 2023 / Revised: 24 October 2023 / Accepted: 26 October 2023 / Published: 3 November 2023

Abstract

When assessing the reliability of a system, a mathematical model is often defined to replicate the system's behavior. The inputs of the system are then gathered into two categories: random inputs and deterministic inputs. The failure of the system depends on both categories, and here we focus on the influence of the deterministic inputs. Local failure probability sensitivity analysis consists in computing the derivatives of the failure probability with respect to these deterministic parameters and is a fundamental step in reliability-based design optimization. These sensitivities also provide valuable insights into how specific model parameters affect the failure probability, allowing engineers and designers to make informed decisions about adjusting those parameters to enhance reliability. This article explores various techniques developed in the literature for assessing the sensitivity of the failure probability with respect to distribution or design parameters. Depending on the nature of the deterministic parameters and the selected input space, different methods are available. The statistical characteristics of the resulting estimates, as well as their computational cost, are discussed here for comparison purposes.

1. Introduction

In order to derive the reliability of a system, a limit state function g is often defined depending on different identified inputs of the system. The scalar output of the limit state function is then called the response of the system. This limit state function makes it possible to assess the state of the system thanks to the value of the response: if the value is positive, the system is safe, otherwise it is failing. We assume here that the inputs of the system are divided into two categories: deterministic inputs $s \in \mathbb{R}^p$ and random inputs $X \in \mathbb{R}^d$. Consequently, the response of the system is random and is denoted $Y = g(s, X)$. The deterministic inputs $s$ of the system can be of different natures and, for the rest of this paper, a specific notation will be used depending on their nature. Physical variables such as design parameters of the system are denoted $\delta \in \mathbb{R}^n$, whereas distribution parameters of the probabilistic framework selected to model the random inputs $X$ are denoted $\theta \in \mathbb{R}^m$. Therefore, the deterministic input vector $s$ is written $s = [\theta, \delta]$, with $m + n = p$. It is assumed that the limit state function g is numerically expensive (it can require the resolution of partial differential equations, for instance). Accordingly, the reliability analysis of the system must require as few evaluations of g as possible and the use of advanced reliability techniques [1].
Understanding how the deterministic parameters $s$ impact the system's failure is highly valuable, especially in the context of Reliability-Based Design Optimization (RBDO), as discussed in [2]. RBDO is a field of research for engineers tasked with designing structures in the face of uncertainties. The principle of the RBDO problem usually revolves around striking a balance between the structure's cost and its reliability. Therefore, the optimization problem can be written as the minimization of an objective function describing the cost of the structure [3], under some probabilistic reliability constraints. Those constraints typically involve the estimation of the failure probability $P_f = \mathbb{P}(g(s, X) \leq 0)$. The derivatives of $P_f$ with respect to $s$ are then needed for the gradient-based algorithms used to solve RBDO problems [4]. By integrating these derivatives with gradient-based optimization techniques, reliability-based design optimization becomes a data-driven and systematic approach to ensuring the safety and performance of critical systems.
The estimation of these derivatives with respect to s is part of the local reliability sensitivity analysis (RSA) [5] of the system. The derivatives, also called sensitivities, make it possible to optimize the reliability of the system. Indeed, these sensitivities provide valuable insights into how specific model parameters affect the failure probability, allowing engineers and designers to make informed decisions about adjusting those parameters to enhance reliability [6]. RSA [5] is in fact a specific form of sensitivity analysis (SA) [7] where instead of directly studying the influence of the system’s inputs on the output Y, one examines their influence on other quantities such as the failure probability.
In this article, we propose to review different approaches for local RSA with regard to deterministic inputs. The quantity of interest is thus the vector of derivatives of $P_f(s)$ with respect to $s$, for a fixed value of $s$. In practice, the estimation of these derivatives usually follows the computation of $P_f(s)$. Therefore, the evaluations of g needed to estimate $P_f(s)$ are repurposed for the computation of those derivatives to reduce the additional numerical cost [8]. Depending on the nature of $s$, different local RSA methods are available. The efficiency of those methods relies on the input space type (original or standard) chosen by the practitioner, along with the reliability method selected (Monte Carlo methods, approximation methods, etc.). The statistical characteristics of the resulting estimates as well as their computational cost are discussed here for each local RSA method.
This paper is organized as follows. In the next section, the expressions of the local sensitivities are presented in detail, with a particular focus on the influence of the nature of the deterministic parameters. Section 3 describes the available local RSA methods with respect to distribution parameters $\theta$, performed in the original input space, with a focus on sampling methods. Section 4 presents local RSA methods derived in the standardized input space with respect to any deterministic inputs $s$. Finally, a conclusion briefly summarizes the main outcomes of the review.

2. Local RSA with Respect to Deterministic Inputs

It is assumed here that the random inputs $X$ follow a standard normal distribution. However, this may be the result of an isoprobabilistic transformation: let $Z$ be the original inputs of the system, then $X = T(Z)$ where T is a diffeomorphism. The interested reader may find more information about the various isoprobabilistic transformations T available (not detailed here) in [9,10]. We denote $g_Z$ the limit state function in the original input space and g the transformed limit state function in the standard space. The joint probability density function (pdf) of $Z$ is written $f_Z$, while the joint pdf of $X$ is written $f_X$.

2.1. Expression of the Local RSA with Respect to Distribution Parameters

We describe here the framework of the local RSA with respect to $\theta$, the distribution parameters of the original inputs $Z$. As mentioned before, the derivatives of $P_f$ with regard to $\theta$ give valuable information about the influence of the probabilistic model selected for the random inputs on the system's failure occurrence. This sensitivity also allows us to identify which random inputs should be focused on [11,12]. In order to compute the derivatives of $P_f(\theta)$, two options are possible.
The first option is to evaluate both the failure probability $P_f$ and its derivatives in the original input space. The dependence of $P_f$ on $\theta$ is then uniquely contained in the joint distribution $f_Z$ of the original inputs $Z$. It is assumed here that $f_Z$ is continuously differentiable with respect to $\theta$. Furthermore, we suppose that the integration domain of $Z$ does not depend on $\theta$. The failure probability is thus written
$$P_f(\theta) = \int_{D_{f_Z}} f_Z(z; \theta)\, \mathrm{d}z = \int_{\mathbb{R}^d} \mathbb{I}_{D_{f_Z}}(z)\, f_Z(z; \theta)\, \mathrm{d}z = \mathbb{E}_{f_Z}\!\left[ \mathbb{I}_{D_{f_Z}}(Z) \right], \tag{1}$$
where $D_{f_Z} = \{ z \in \mathbb{R}^d \mid g_Z(z) \leq 0 \}$ is the failure domain in the original space. The derivatives of $P_f$ with respect to $\theta_\ell$, for $\ell \in \{1, \ldots, m\}$, are straightforward
$$\frac{\partial P_f(\theta)}{\partial \theta_\ell} = \int_{D_{f_Z}} \frac{\partial f_Z(z; \theta)}{\partial \theta_\ell}\, \mathrm{d}z = \int_{\mathbb{R}^d} \mathbb{I}_{D_{f_Z}}(z)\, \frac{\partial f_Z(z; \theta)}{\partial \theta_\ell}\, \mathrm{d}z. \tag{2}$$
The resulting expression of the derivative of $P_f$ in Equation (2) is thus a domain integral, which depends on the derivatives of $f_Z$ with respect to $\theta$, and is defined on $D_{f_Z}$.
The second option is to evaluate both the failure probability $P_f$ and its derivatives in the standard space. The dependence of $P_f$ on $\theta$ is then contained in the transformed limit state function g through the isoprobabilistic transformation, while the standardized joint distribution $f_X$ is parameter-free. Therefore, the failure probability is expressed as
$$P_f(\theta) = \int_{D_f(\theta)} f_X(x)\, \mathrm{d}x = \int_{\mathbb{R}^d} \mathbb{I}_{D_f(\theta)}(x)\, f_X(x)\, \mathrm{d}x, \tag{3}$$
where $D_f(\theta) = \{ x \in \mathbb{R}^d \mid g(\theta, x) \leq 0 \}$ is the failure domain in the standard normal space. Assuming the gradient $\nabla_x g(\theta, x) \neq 0$ for all $x$ and $\theta$ on the limit state surface $\{ g(\theta, x) = 0 \}$, the derivatives of $P_f(\theta)$ with respect to $\theta_\ell$, for $\ell \in \{1, \ldots, m\}$, are defined by the following surface integral [13]
$$\frac{\partial P_f(\theta)}{\partial \theta_\ell} = - \int_{g(\theta, x) = 0} \frac{1}{\| \nabla_x g(\theta, x) \|} \frac{\partial g(\theta, x)}{\partial \theta_\ell}\, f_X(x)\, \mathrm{d}s(x), \tag{4}$$
where $\mathrm{d}s(x)$ stands for surface integration over the limit state surface $\{ g(\theta, x) = 0 \}$. The resulting expression of the derivative of $P_f$ in Equation (4) is thus a surface integral, which depends on the derivative of g with respect to $\theta$ and on its gradient $\nabla_x g$.

2.2. Expression of the Local RSA with Respect to Design Parameters

Computing the derivatives of $P_f$ with respect to the design parameters $\delta$ is required for the gradient-based algorithms used to solve an RBDO problem [2], as previously mentioned. The dependence of $P_f$ on the design parameters $\delta$ is necessarily contained in the limit state function. Therefore, no matter the input space selected, the expression of the derivatives of $P_f$ is a surface integral as in Equation (4), where one adapts the integral to $f_Z$ or $f_X$. In the rest of this article, we assume that the sensitivities with regard to design parameters are computed in the standard space of the inputs $X$. The surface integral is thus given by the following equation
$$\frac{\partial P_f(\delta)}{\partial \delta_\ell} = - \int_{g(\delta, x) = 0} \frac{1}{\| \nabla_x g(\delta, x) \|} \frac{\partial g(\delta, x)}{\partial \delta_\ell}\, f_X(x)\, \mathrm{d}s(x), \tag{5}$$
with the same assumptions and notations as in Equation (4), for $\ell \in \{1, \ldots, n\}$. This sensitivity then necessarily depends on the derivative of g with respect to $\delta$ and on its gradient $\nabla_x g$.

2.3. Estimation of the Local Sensitivities

The main challenge of the estimation of the derivatives with respect to s , no matter its nature, is to increase as little as possible the simulation budget needed for the estimation of the probability of failure P f ( s ) . Consequently, it is assumed that the computation of this sensitivity reuses as much as possible the evaluations of the limit state function needed for the computation of P f , as previously mentioned.
In other words, it is uncommon to estimate P f and its derivatives in different input spaces, as the limit state functions differ.
According to the input space, the derivatives of $P_f$ with regard to the distribution parameters $\theta$ are either domain integrals, Equation (2), or surface integrals, Equation (4). Domain integrals are much easier to handle than surface integrals. However, the literature on failure probability estimation in standardized spaces is richer than in the original input space. For instance, the methods that depend on the location of the so-called design points of the system [14] require a standardized input space (FORM/SORM approximations, radial-based importance sampling, line sampling, etc.) [15]. Therefore, both options present advantages and drawbacks.
The next section details methods available to derive the sensitivity with regard to θ in the original input space.

3. Local RSA in the Original Input Space

In this section, the vector s contains only distribution parameters θ . We denote Z as the original random inputs of the system and f Z as its joint pdf, as before. The dependence of P f on θ is then expressed in the joint pdf f Z , as previously mentioned. The joint pdf f Z is supposed to be continuously differentiable with respect to θ . We further assume that the integration domain of Z does not depend on θ . The derivatives of P f are written as a domain integral Equation (2), and can be estimated with various sampling algorithms presented in the following.

3.1. Score Function Method for Sensitivity Analysis

In order to estimate the derivatives of $P_f$, the score function method was first presented in [16] and later in [11,17]. It consists in introducing a density function in the domain integral, as in the importance sampling framework [18]. The density is simply taken as the joint pdf of the inputs $Z$, and the sensitivity is written
$$\frac{\partial P_f(\theta)}{\partial \theta_\ell} = \int_{\mathbb{R}^d} \mathbb{I}_{D_{f_Z}}(z)\, \frac{\partial f_Z(z; \theta)}{\partial \theta_\ell}\, \mathrm{d}z = \int_{\mathbb{R}^d} \mathbb{I}_{D_{f_Z}}(z)\, \frac{1}{f_Z(z; \theta)} \frac{\partial f_Z(z; \theta)}{\partial \theta_\ell}\, f_Z(z; \theta)\, \mathrm{d}z,$$
for $\ell = 1, \ldots, m$. Noticing that the equality $\partial \ln f_Z(z; \theta) / \partial \theta_\ell = \left[ 1 / f_Z(z; \theta) \right] \partial f_Z(z; \theta) / \partial \theta_\ell$ holds for all $z \in \mathbb{R}^d$, the derivative is thus equal to
$$\frac{\partial P_f(\theta)}{\partial \theta_\ell} = \int_{\mathbb{R}^d} \mathbb{I}_{D_{f_Z}}(z)\, \frac{\partial \ln f_Z(z; \theta)}{\partial \theta_\ell}\, f_Z(z; \theta)\, \mathrm{d}z = \mathbb{E}_{f_Z}\!\left[ \mathbb{I}_{D_{f_Z}}(Z)\, \frac{\partial \ln f_Z(Z; \theta)}{\partial \theta_\ell} \right]. \tag{6}$$
The derivative $Q_\ell(\cdot) = \partial \ln f_Z(\cdot; \theta) / \partial \theta_\ell$ is then called the score function. In order to compute the sensitivity with regard to the distribution parameters of $Z$, the following expected value must thus be evaluated: $\mathbb{E}_{f_Z}\left[ \mathbb{I}_{D_{f_Z}}(Z)\, Q_\ell(Z) \right]$. This expected value can be estimated with a crude Monte Carlo (MC) method [19], and the MC estimate of the derivatives of $P_f$ is then
$$\widehat{\frac{\partial P_f(\theta)}{\partial \theta_\ell}}^{\mathrm{MC}} = \frac{1}{N} \sum_{j=1}^{N} \mathbb{I}_{D_{f_Z}}\!\left( Z^{(j)} \right) Q_\ell\!\left( Z^{(j)} \right),$$
where the observations $Z^{(j)}$ are independent and identically distributed (iid), generated from $f_Z$. Therefore, assuming the failure probability is estimated with a crude MC method and written $\hat{P}_f^{\mathrm{MC}}$, the same observations can be entirely reused here. The simulation budget is thus kept constant, as the score function does not depend on the costly limit state function g.
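To make the procedure concrete, the following sketch applies the score function method with crude MC to a simple toy case: two independent Gaussian inputs, an affine limit state function, and the sensitivity with respect to the mean of the first input. The limit state function, the threshold, the sample size and the use of NumPy/SciPy are assumptions of this illustration, not elements of the reviewed references.

```python
# Sketch (illustrative assumptions): score-function sensitivity with crude MC.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N = 10**6
mu = np.array([0.0, 0.0])        # distribution parameters theta = (mu1, mu2)
sig = np.array([1.0, 1.0])
c = 4.0                          # threshold of the toy limit state function

def g_Z(z):
    """Toy limit state function in the original space: failure when g_Z(z) <= 0."""
    return c - z[:, 0] - z[:, 1]

Z = mu + sig * rng.standard_normal((N, 2))      # iid sample from f_Z
fail = g_Z(Z) <= 0.0                            # failure indicator

Pf_MC = fail.mean()                             # crude MC failure probability
Q = (Z[:, 0] - mu[0]) / sig[0]**2               # score Q(z) = d ln f_Z / d mu1
dPf_dmu1 = np.mean(fail * Q)                    # sensitivity, no extra call to g_Z

# Analytical reference for this affine toy case
a = (c - mu.sum()) / np.sqrt((sig**2).sum())
print("P_f      :", Pf_MC, "(exact", 1 - stats.norm.cdf(a), ")")
print("dP_f/dmu1:", dPf_dmu1, "(exact", stats.norm.pdf(a) / np.sqrt((sig**2).sum()), ")")
```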

3.2. Statistical Properties of the Score Function Estimate with Crude MC

In this section, the bias, variance and mean square error of the MC sensitivity estimate are derived. As noted in [20], the crude MC estimate of Equation (6) is unbiased, and its variance is written
$$\mathrm{Var}\!\left[ \widehat{\frac{\partial P_f(\theta)}{\partial \theta_\ell}}^{\mathrm{MC}} \right] = \frac{1}{N} \left( \mathbb{E}_{f_Z}\!\left[ \mathbb{I}_{D_{f_Z}}(Z)\, Q_\ell(Z)^2 \right] - \left( \frac{\partial P_f(\theta)}{\partial \theta_\ell} \right)^2 \right),$$
since $\mathbb{I}_{D_{f_Z}}(z)^2 = \mathbb{I}_{D_{f_Z}}(z)$ for all $z \in \mathbb{R}^d$. As the MC sensitivity estimate is unbiased, the expression of its mean square error (mse) $E_{\mathrm{mse}}^{\mathrm{MC}}$ is equal to its variance.
The error estimate is then easy to derive, as the quantity $\mathbb{E}_{f_Z}\left[ \mathbb{I}_{D_{f_Z}}(Z)\, Q_\ell(Z)^2 \right]$ can be evaluated with a crude MC estimate using the same observations generated to compute both $P_f$ and $\partial P_f(\theta) / \partial \theta_\ell$ with the MC method. In the literature, several studies have focused on the expression of the mean square error of estimates obtained with the score function method [21,22], not necessarily in the failure probability framework. Here, we derive the expression presented in [20], which is fitted to our particular case of $E_{\mathrm{mse}}^{\mathrm{MC}}$. The mean square error is bounded with the following inequalities
$$\frac{1}{N} \left( \frac{1}{P_f} - 1 \right) \left( \frac{\partial P_f(\theta)}{\partial \theta_\ell} \right)^2 \;\leq\; E_{\mathrm{mse}}^{\mathrm{MC}} \;\leq\; \frac{1}{N} \left( \sqrt{ P_f\, \mathbb{E}\!\left[ Q_\ell(Z)^4 \right] } - \left( \frac{\partial P_f(\theta)}{\partial \theta_\ell} \right)^2 \right). \tag{8}$$
The lower bound of Equation (8) is an expression depending on the failure probability and its derivatives; thus, it can be easily derived in practice. The upper bound depends on another quantity, $\mathbb{E}\left[ Q_\ell(Z)^4 \right]$, in addition to $P_f$ and the sensitivity. This quantity can have an analytical expression in very simple cases. Suppose Z is a univariate random variable which follows a normal distribution of mean $\mu_Z$ and variance $\sigma_Z^2$. We are interested in the derivative of $P_f$ with respect to the mean value $\mu_Z$. In this scenario, $\mathbb{E}\left[ Q(Z)^4 \right] = 3 / \sigma_Z^4$ [20] and the upper bound becomes
$$E_{\mathrm{mse}}^{\mathrm{MC}} \;\leq\; \frac{1}{N} \left( \frac{\sqrt{3 P_f}}{\sigma_Z^2} - \left( \frac{\partial P_f(\theta)}{\partial \mu_Z} \right)^2 \right).$$
In the other cases, the analytical expression of $\mathbb{E}\left[ Q_\ell(Z)^4 \right]$ can be more cumbersome, and the quantity is thus estimated with the Monte Carlo method, reusing the same sample once more.
From the inequalities of Equation (8), we can conclude that the error, and therefore the variance of the estimate, is proportional to $1/N$. Consequently, the same variance convergence rate applies to both Monte Carlo estimations, that of the failure probability $\hat{P}_f^{\mathrm{MC}}$ and that of its derivatives with respect to the distribution parameters, $\widehat{\partial P_f(\theta) / \partial \theta_\ell}^{\mathrm{MC}}$ [23].
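The bounds above can be evaluated from the very same sample. The short sketch below (same toy problem and same illustrative assumptions as before) computes the empirical mean square error of the score-function estimate together with the lower and upper bounds of Equation (8), the quantity $\mathbb{E}[Q_\ell(Z)^4]$ being itself estimated by MC.

```python
# Sketch (illustrative assumptions): empirical mse of the score-function MC
# estimate and the bounds of Equation (8), reusing a single sample.
import numpy as np

rng = np.random.default_rng(1)
N, c = 10**6, 4.0
Z = rng.standard_normal((N, 2))          # f_Z: two independent N(0, 1) inputs
fail = (c - Z[:, 0] - Z[:, 1]) <= 0.0    # failure indicator
Q = Z[:, 0]                              # score for theta = mu1 (mu1 = 0, sig1 = 1)

Pf = fail.mean()
dPf = np.mean(fail * Q)
mse = (np.mean(fail * Q**2) - dPf**2) / N            # variance = mse (unbiased estimate)
lower = (1.0 / Pf - 1.0) * dPf**2 / N                # lower bound of Equation (8)
upper = (np.sqrt(Pf * np.mean(Q**4)) - dPf**2) / N   # upper bound, E[Q^4] estimated by MC
print(lower, "<=", mse, "<=", upper)
```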

3.3. Score Function Method with Advanced Sampling Techniques

Various advanced sampling techniques have been developed, based on the MC method [15], to estimate the failure probability more efficiently. Some of these techniques can be applied in the context of the score function method, to estimate the expected value $\mathbb{E}_{f_Z}\left[ \mathbb{I}_{D_{f_Z}}(Z)\, Q_\ell(Z) \right]$ in addition to $P_f$. Therefore, these methods make it possible to evaluate more effectively the derivatives of $P_f$ with respect to the distribution parameters $\theta$ of $Z$. We derive some of these methods here.

3.3.1. Score Function Method with Subset Sampling

We recall here that the failure probability estimate with subset sampling (SS) [24,25] has the following expression, with the dependence on $\theta$ made explicit,
$$P_f(\theta) = \prod_{i=0}^{q} P_i(\theta) \quad \text{and} \quad \hat{P}_f^{\mathrm{SS}}(\theta) = \prod_{i=0}^{q} \hat{P}_i^{\mathrm{MC}}(\theta), \tag{9}$$
with $P_0(\theta) = \mathbb{P}(F_0)$ and $P_i(\theta) = \mathbb{P}(F_i \mid F_{i-1})$ for $i > 0$, where the $F_i$ are the intermediate failure domains. It is recalled that $F_0 \supset F_1 \supset \cdots \supset F_q = D_{f_Z}$ [26]. We assume here that the intermediate failure domains $F_0, F_1, \ldots, F_{q-1}$ are independent of $\theta$ and are chosen a priori. As underlined in [27], taking the derivatives of Equation (9) with respect to $\theta_\ell$, for $\ell = 1, \ldots, m$, gives the following equation
$$\frac{\partial P_f(\theta)}{\partial \theta_\ell} = \sum_{i=0}^{q} \frac{P_f(\theta)}{P_i(\theta)} \frac{\partial P_i(\theta)}{\partial \theta_\ell}, \tag{10}$$
where the terms of the sum are defined with the equation
$$\frac{\partial P_0(\theta)}{\partial \theta_\ell} = \int_{\mathbb{R}^d} \mathbb{I}_{F_0}(z)\, \frac{\partial f_Z(z; \theta)}{\partial \theta_\ell}\, \mathrm{d}z, \tag{11}$$
and for $i > 0$
$$\frac{\partial P_i(\theta)}{\partial \theta_\ell} = \int_{\mathbb{R}^d} \mathbb{I}_{F_i}(z)\, \frac{\partial f_{Z \mid F_{i-1}}(z; \theta)}{\partial \theta_\ell}\, \mathrm{d}z \quad \text{where} \quad \forall z \in \mathbb{R}^d,\; f_{Z \mid F_{i-1}}(z; \theta) = \frac{\mathbb{I}_{F_{i-1}}(z)\, f_Z(z; \theta)}{\mathbb{P}(F_{i-1})}. \tag{12}$$
It should be noted that each $\mathbb{P}(F_{i-1})$ depends on $\theta$ as well, even if the notation is not explicit. Applying the score function method to Equations (11) and (12) leads to the final sensitivity expression
$$\frac{\partial P_f(\theta)}{\partial \theta_\ell} = \frac{P_f(\theta)}{P_0(\theta)}\, \mathbb{E}_{f_Z}\!\left[ \mathbb{I}_{F_0}(Z)\, Q_\ell(Z) \right] + \sum_{i=1}^{q} \frac{P_f(\theta)}{P_i(\theta)}\, \mathbb{E}_{f_{Z \mid F_{i-1}}}\!\left[ \mathbb{I}_{F_i}(Z)\, Q_{i,\ell}(Z) \right],$$
where $Q_{i,\ell} = \partial \ln f_{Z \mid F_{i-1}}(\cdot; \theta) / \partial \theta_\ell$ for $i > 0$. Those expectations can be evaluated with crude Monte Carlo estimates, reusing the exact same samples generated during the SS procedure
$$\widehat{\frac{\partial P_f(\theta)}{\partial \theta_\ell}}^{\mathrm{SS}} = \frac{\hat{P}_f^{\mathrm{SS}}(\theta)}{\hat{P}_0^{\mathrm{MC}}(\theta)} \frac{1}{N} \sum_{j=1}^{N} \mathbb{I}_{F_0}\!\left( Z_0^{(j)} \right) Q_\ell\!\left( Z_0^{(j)} \right) + \sum_{i=1}^{q} \frac{\hat{P}_f^{\mathrm{SS}}(\theta)}{\hat{P}_i^{\mathrm{MC}}(\theta)} \frac{1}{N} \sum_{j=1}^{N} \mathbb{I}_{F_i}\!\left( Z_i^{(j)} \right) Q_{i,\ell}\!\left( Z_i^{(j)} \right),$$
where the random variables $Z_0^{(j)}$ are generated from $f_Z$ and the $Z_i^{(j)}$ are generated from $f_{Z \mid F_{i-1}}(\cdot; \theta)$, with $i > 0$; see [27] for more details. The simulation budget is therefore constant, as the score functions $Q_{i,\ell}$ do not depend on the costly limit state function g.
The statistical properties of $\widehat{\partial P_f(\theta) / \partial \theta_\ell}^{\mathrm{SS}}$ are quite similar to those of the failure probability estimate $\hat{P}_f^{\mathrm{SS}}$. The MC estimates of $\mathbb{E}_{f_Z}\left[ \mathbb{I}_{F_0}(Z)\, Q_\ell(Z) \right]$ and of $\mathbb{E}_{f_{Z \mid F_{i-1}}}\left[ \mathbb{I}_{F_i}(Z)\, Q_{i,\ell}(Z) \right]$ for $i > 0$ are each unbiased. However, a small bias is induced by the dependence between the samples generated at each step if one uses Metropolis algorithms for the Markov chain MC procedure. This bias affects $\hat{P}_f^{\mathrm{SS}}$ and the SS sensitivity estimate [27]. This dependence also makes the variance of the sensitivity estimate, along with its mean square error, hard to derive. Local sensitivity computation has also been studied in a special case of subset sampling called the moving particles method [28].
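As an illustration of the SS sensitivity estimate, the sketch below treats the same toy problem with intermediate failure domains $F_i = \{g_Z \leq t_i\}$ fixed a priori. Two simplifications are assumptions of the sketch and not part of the method described in [27]: the conditional samples are drawn by plain rejection sampling instead of a Markov chain MC step, and the intermediate scores $Q_{i,\ell}$ are obtained through the identity $\partial \ln \mathbb{P}(F_{i-1}; \theta) / \partial \theta_\ell = \mathbb{E}[Q_\ell(Z) \mid F_{i-1}]$, estimated on the same conditional sample.

```python
# Sketch (didactic simplifications, see text above): subset-sampling score-function
# sensitivity on the toy problem g_Z(z) = c - z1 - z2, derivative w.r.t. mu1.
import numpy as np

rng = np.random.default_rng(2)
N, c = 2000, 4.0
thresholds = [2.5, 1.2, 0.0]            # a priori levels t_i, last one = 0 so F_q = D_fZ

g = lambda z: c - z[:, 0] - z[:, 1]
Q = lambda z: z[:, 0]                   # score d ln f_Z / d mu1 (mu1 = 0, sig1 = 1)

def sample_conditional(prev_t, n):
    """Rejection sampling of f_Z restricted to {g <= prev_t} (prev_t=None: no restriction)."""
    out = np.empty((0, 2))
    while out.shape[0] < n:
        z = rng.standard_normal((10 * n, 2))
        if prev_t is not None:
            z = z[g(z) <= prev_t]
        out = np.vstack([out, z])
    return out[:n]

P_hat, S_hat = [], []                   # per-level probabilities and score terms
prev_t = None
for i, t_i in enumerate(thresholds):
    Zi = sample_conditional(prev_t, N)               # f_Z (i = 0) or f_{Z|F_{i-1}}
    in_Fi = g(Zi) <= t_i
    Qi = Q(Zi) if i == 0 else Q(Zi) - Q(Zi).mean()   # intermediate score
    P_hat.append(in_Fi.mean())
    S_hat.append(np.mean(in_Fi * Qi))
    prev_t = t_i

Pf_SS = np.prod(P_hat)
dPf_SS = sum(Pf_SS / P_hat[i] * S_hat[i] for i in range(len(thresholds)))
print("P_f ~", Pf_SS, "  dP_f/dmu1 ~", dPf_SS, "  (references ~2.3e-3 and ~5.2e-3)")
```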

3.3.2. Score Function Method with Importance Sampling

The score function method is, by definition, already an importance sampling (IS) technique, since a density is added in the sensitivity integral. Nevertheless, if this density is selected as $f_Z$, then the resulting MC estimate is not very efficient, as most of the observations will not be in the failure domain. Suppose the failure probability is estimated in the IS context with an auxiliary density h. The resulting failure probability estimate is written $\hat{P}_f^{\mathrm{IS}}$. This auxiliary density h is supposed to be capable of generating more observations in the failure domain. Using the same density h, the sensitivity of the failure probability can be estimated as
$$\frac{\partial P_f(\theta)}{\partial \theta_\ell} = \int_{\mathbb{R}^d} \mathbb{I}_{D_{f_Z}}(z)\, \frac{1}{h(z)} \frac{\partial f_Z(z; \theta)}{\partial \theta_\ell}\, h(z)\, \mathrm{d}z = \mathbb{E}_h\!\left[ \mathbb{I}_{D_{f_Z}}(Z)\, Q_{h,\ell}(Z) \right],$$
where $Q_{h,\ell}(\cdot) = \left[ 1 / h(\cdot) \right] \times \left[ \partial f_Z(\cdot; \theta) / \partial \theta_\ell \right] = \left[ f_Z(\cdot; \theta) / h(\cdot) \right] \times \left[ \partial \ln f_Z(\cdot; \theta) / \partial \theta_\ell \right]$. The IS sensitivity estimate is then written
$$\widehat{\frac{\partial P_f(\theta)}{\partial \theta_\ell}}^{\mathrm{IS}} = \frac{1}{N} \sum_{j=1}^{N} \mathbb{I}_{D_{f_Z}}\!\left( Z^{(j)} \right) Q_{h,\ell}\!\left( Z^{(j)} \right),$$
where the random variables $Z^{(j)}$ are iid with density h and are the same observations used for the estimation of $\hat{P}_f^{\mathrm{IS}}$. The simulation budget is kept constant, as the score function $Q_{h,\ell}$ does not depend on the costly limit state function g. The IS sensitivity estimate is then unbiased, and its variance, as well as its mean square error, is written
$$\mathrm{Var}\!\left[ \widehat{\frac{\partial P_f(\theta)}{\partial \theta_\ell}}^{\mathrm{IS}} \right] = E_{\mathrm{mse}}^{\mathrm{IS}} = \frac{1}{N} \left( \mathbb{E}_h\!\left[ \mathbb{I}_{D_{f_Z}}(Z)\, Q_{h,\ell}(Z)^2 \right] - \left( \frac{\partial P_f(\theta)}{\partial \theta_\ell} \right)^2 \right).$$
Using the same computations as in [20], it is possible to derive an upper bound for $E_{\mathrm{mse}}^{\mathrm{IS}}$ as follows
$$E_{\mathrm{mse}}^{\mathrm{IS}} \;\leq\; \frac{1}{N} \left( \sqrt{ P_f\, \mathbb{E}_h\!\left[ \left( \frac{f_Z(Z)}{h(Z)} \right)^{3} Q_\ell(Z)^4 \right] } - \left( \frac{\partial P_f(\theta)}{\partial \theta_\ell} \right)^2 \right).$$
Therefore, we notice that, compared to the formula of Equation (8), the likelihood ratio $f_Z / h$ raised to the power 3 is introduced in the expected value. The convergence of $E_{\mathrm{mse}}^{\mathrm{IS}}$ is thus greatly affected by the choice of the auxiliary density h.
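The sketch below illustrates the IS version on the same toy problem; the Gaussian auxiliary density centered at the design point is an assumption made for this illustration only.

```python
# Sketch (illustrative assumptions): importance-sampling score-function sensitivity.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
N, c = 20000, 4.0
mu_h = np.array([c / 2, c / 2])                 # design point of g_Z(z) = c - z1 - z2

Z = mu_h + rng.standard_normal((N, 2))          # iid sample from h = N(mu_h, I)
fail = (c - Z[:, 0] - Z[:, 1]) <= 0.0
w = np.exp(stats.norm.logpdf(Z).sum(axis=1)     # likelihood ratio f_Z / h
           - stats.norm.logpdf(Z - mu_h).sum(axis=1))
Q = Z[:, 0]                                     # score d ln f_Z / d mu1

Pf_IS = np.mean(fail * w)
dPf_IS = np.mean(fail * w * Q)                  # same sample reused for both estimates
print("P_f ~", Pf_IS, "  dP_f/dmu1 ~", dPf_IS, "  (references ~2.3e-3 and ~5.2e-3)")
```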
The use of an IS auxiliary density other than $f_Z$ in the score function framework was initiated by Wu in [11,17], where an adaptive importance sampling scheme is presented and the resulting auxiliary density h is close to $h_{\mathrm{opt}} = \mathbb{I}_{D_{f_Z}} f_Z / P_f$. Later in [27], the authors developed a simulation method combining the SS framework and the IS technique to build the failure probability estimate, along with the sensitivity estimate obtained with the score function method. Indeed, instead of estimating each intermediate failure probability $\mathbb{E}_{f_{Z \mid F_{i-1}}}\left[ \mathbb{I}_{F_i}(Z) \right]$ and intermediate sensitivity $\mathbb{E}_{f_{Z \mid F_{i-1}}}\left[ \mathbb{I}_{F_i}(Z)\, Q_{i,\ell}(Z) \right]$ of the SS framework with crude MC estimates, they employ the IS technique. The samples drawn by IS are then iid, and the variance of both the final failure probability estimate and its sensitivity can be derived.
In short, local RSA in the original input space, for distribution parameters θ , does not require any additional evaluation of the limit state function (lsf). The simulation budget is kept constant, no matter the simulation method selected. Furthermore, the sensitivity estimates are unbiased (except in the SS framework with Metropolis algorithms). In the next section, we present local RSA performed in the standard normal space, for any deterministic inputs s .

4. Local RSA in the Standard Normal Space

In this section, we assume that, after an isoprobabilistic transformation T, the random inputs $Z$ have been transformed into standard normal inputs $X$, such that $X = T(Z)$, using the same notations as before. The dependence of $P_f$ on $s$ is then necessarily expressed in the (transformed) limit state function g. According to the nature of the parameters $s$, this dependence can be made more explicit.
Suppose the vector $s$ contains distribution parameters only, $s = \theta$. Consequently, these parameters have no influence on the limit state function in the $Z$-space, and their influence on the limit state function in the $X$-space is actually solely contained in the transformation T, denoted $T_\theta$ such that $x = T_\theta(z)$; therefore
$$\forall x \in \mathbb{R}^d, \quad g(\theta, x) = g_Z(z) \quad \text{where} \quad z = T_\theta^{-1}(x),$$
with $T_\theta^{-1}$ the inverse isoprobabilistic transformation with fixed $\theta$.
Suppose the vector $s$ contains design parameters only, $s = \delta$. Accordingly, the parameters $\delta$ have an influence on the definition of the limit state functions in both spaces. The relationship between the two limit state functions g and $g_Z$ can then be written
$$\forall x \in \mathbb{R}^d, \quad g(\delta, x) = g(\delta, T(z)) = g_Z(\delta, z) \quad \text{where} \quad z = T^{-1}(x).$$
Independently of the nature of s , the sensitivity of P f is expressed with a surface integral: Equation (4) for θ and Equation (5) for δ . This surface integral can be estimated with various methods [29,30] for RBDO purposes. In this article, we notably focus on methods which directly come from frameworks employed to compute failure probability estimates, as the goal is to increase as little as possible the simulation budget.

4.1. Sensitivity through the Design Point of the System

4.1.1. Expression in the FORM Context

In the FORM and SORM contexts, the limit state function g is approximated with Taylor series expansions in the neighborhood of the design points in the standard normal space [14]. The design points of the system are the closest points to the origin in each failure region of the failure domain $D_f$. The reliability analysis is then performed using the built approximations of g. We derive here the sensitivities of $P_f$ with the FORM approximation, first in a context where there is a unique design point $P^*$, of norm $\beta = \| P^* \|$. The failure probability is estimated with the equation
$$\hat{P}_f^{\mathrm{FORM}} = 1 - F_{X_1}(\beta(s)),$$
where $F_{X_1}$ is the cumulative distribution function (cdf) of the univariate standard normal variable $X_1$. The quantity $\beta$ is equal to
$$\beta(s) = \alpha^\top P^* \quad \text{where} \quad \alpha = - \frac{\nabla_x g(s, P^*)}{\| \nabla_x g(s, P^*) \|}.$$
Therefore, the value of $\beta$ is a function of $s$. The derivative of $P_f$ with respect to $s_\ell$ in the FORM context is then simply
$$\widehat{\frac{\partial P_f(s)}{\partial s_\ell}}^{\mathrm{FORM}} = - \frac{\partial \beta(s)}{\partial s_\ell}\, f_{X_1}(\beta(s)),$$
where $f_{X_1}$ is the pdf of the random variable $X_1$ and $\partial \beta(s) / \partial s_\ell$ is the derivative of $\beta$ with respect to $s_\ell$. After some calculus (see [31]), this derivative is equal to
$$\frac{\partial \beta(s)}{\partial s_\ell} = \frac{1}{\| \nabla_x g(s, P^*) \|} \frac{\partial g(s, P^*)}{\partial s_\ell}. \tag{13}$$
Consequently, the sensitivity of $P_f$ with respect to $s_\ell$, with $\ell \in \{1, \ldots, p\}$, is equal to
$$\widehat{\frac{\partial P_f(s)}{\partial s_\ell}}^{\mathrm{FORM}} = - \frac{1}{\| \nabla_x g(s, P^*) \|} \frac{\partial g(s, P^*)}{\partial s_\ell}\, f_{X_1}(\beta(s))$$
in the FORM approximation context. Therefore, one needs the value of the gradient $\nabla_x g$ evaluated at $(s, P^*)$, as well as the derivative of g with respect to $s_\ell$ evaluated at $(s, P^*)$. Apart from these two quantities, no additional evaluation of the costly lsf is required to estimate the sensitivities. Depending on the nature of the deterministic inputs $s$, this derivative can be further detailed [31]. If the vector $s$ represents distribution parameters only, $s = \theta$, then
$$\frac{\partial \beta(\theta)}{\partial \theta_\ell} = \alpha^\top \frac{\partial T(\theta, z^*)}{\partial \theta_\ell},$$
where $z^* = T_\theta^{-1}(P^*)$. Therefore, the derivative of $P_f$ is approximated with
$$\widehat{\frac{\partial P_f(\theta)}{\partial \theta_\ell}}^{\mathrm{FORM}} = - \alpha^\top \frac{\partial T(\theta, z^*)}{\partial \theta_\ell}\, f_{X_1}(\beta(\theta)).$$
If the vector $s$ represents design parameters only, $s = \delta$, it follows that $\partial g(\delta, P^*) / \partial \delta_\ell = \partial g_Z(\delta, z^*) / \partial \delta_\ell$, where $z^* = T^{-1}(P^*)$.
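A compact numerical sketch of the FORM sensitivity is given below for a toy limit state function that is affine in both the design parameter $\delta$ and the standard inputs; the lsf itself and the use of an SLSQP solver to locate the design point are assumptions of this illustration.

```python
# Sketch (illustrative assumptions): FORM sensitivity with respect to a design parameter.
import numpy as np
from scipy import optimize, stats

delta = 4.0                                     # design parameter

def g(delta, x):
    return delta - x[0] - x[1]                  # toy lsf in the standard space

def grad_x_g(delta, x):
    return np.array([-1.0, -1.0])

def dg_ddelta(delta, x):
    return 1.0

# Design point P*: closest point to the origin on the limit state surface {g = 0}
res = optimize.minimize(lambda x: 0.5 * x @ x, x0=np.ones(2), method="SLSQP",
                        constraints={"type": "eq", "fun": lambda x: g(delta, x)})
P_star = res.x
beta = np.linalg.norm(P_star)

Pf_FORM = stats.norm.cdf(-beta)                 # FORM failure probability estimate
dbeta_ddelta = dg_ddelta(delta, P_star) / np.linalg.norm(grad_x_g(delta, P_star))
dPf_ddelta = -dbeta_ddelta * stats.norm.pdf(beta)
print("P_f ~", Pf_FORM, "  dP_f/ddelta ~", dPf_ddelta)
# Analytical check for this affine lsf: dP_f/ddelta = -phi(delta/sqrt(2))/sqrt(2) ~ -5.2e-3
```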
It should be noted that studying the bias and the error of this sensitivity estimate can be particularly challenging in a black-box model context. As with the failure probability estimation in the FORM framework, the method requires a very small simulation budget, but the statistical properties of the resulting estimates are arduous to obtain. This remark is also valid when there are multiple design points. The computation of the sensitivities in this context is detailed next.

4.1.2. Sensitivity with Multiple Design Points in the Standard Normal Space

For systems with multiple design points, an accurate formula for $P_f$ is difficult to derive [32]. However, in the standard normal space, with the approximation introduced in [33], the derivatives can be easily derived in the FORM context. Suppose K design points have been identified, $P_1^*, P_2^*, \ldots, P_K^*$; we recall here that the failure probability is approximated with the formula [31]
$$\hat{P}_f^{\mathrm{FORM}} = 1 - F_K(\beta(s); C(s)),$$
where $F_K$ is the cdf of a centered normal vector of dimension K with covariance matrix $C(s)$, and $\beta(s)$ is the vector of the norms of all the design points, $\beta_k = \| P_k^* \|$ for $k = 1, \ldots, K$. The covariance matrix $C(s)$ is defined such that $C_{i,j} = \alpha_i^\top \alpha_j$ where $\alpha_i = - \nabla_x g(s, P_i^*) / \| \nabla_x g(s, P_i^*) \|$. The derivative of the failure probability with respect to $s_\ell$ is written
$$\widehat{\frac{\partial P_f(s)}{\partial s_\ell}}^{\mathrm{FORM}} = - \sum_{k=1}^{K} \left[ \frac{\partial \beta_k(s)}{\partial s_\ell} \frac{\partial F_K(\beta(s); C(s))}{\partial x_k} + \sum_{j=1}^{k-1} \frac{\partial C_{k,j}(s)}{\partial s_\ell} \frac{\partial F_K(\beta(s); C(s))}{\partial C_{k,j}} \right],$$
where the derivative of each β k can be derived from Equation (13). The interested reader can find the derivatives of C k , j in [31]. Another approach to compute the sensitivity of P f in a series system is presented in [12]. First, a MC sampling is performed using the various approximated lsfs built at the different design points, in a FORM/SORM context, to estimate P f . Then, the derivatives are obtained with the score function method with IS, using, in the Z -space, the failing observations obtained from the computation of P f ; cf Section 3.3.2. This special approach is restricted to distribution parameters θ only.

4.1.3. Sensitivity with Other Approximation Methods Inspired by FORM

The FORM approximation is accurate for linear lsfs only. Sensitivities derived in a SORM context are proposed in [34], when s are distribution parameters, s = θ . The SORM context makes it possible to build quadratic approximated lsfs, which can be more accurate when g is nonlinear. The authors in [34] take advantage of the radial distribution of the standard normal inputs to derive another quadratic approximation of the limit state surface. The sensitivities are then estimated with the differentiation of this novel approximation. This method is tested for systems of various dimensions (up to d = 20 ).
Other approximation methods inspired by the FORM/SORM framework have been derived in the reliability literature. These approximation methods aim to give better results than FORM and SORM in a nonlinear lsf context. We mention here the multi-hyperplane combination method (MHCM), first introduced in [35]. In the MHCM framework, several hyperplanes are employed to approximate the lsf at the vicinity of a design point, in the standard normal space. The sensitivities with respect to the distribution parameters θ of the original inputs Z in the MHCM context are presented in [36]. Another approximation method presented in [37] is completely independent of the design points’ position, as the approximation of the lsf is a hyperplane constructed from a Monte Carlo sample. The sensitivities with respect to the distribution parameters are then computed with this hyperplane.
In a broader context, the sensitivities computed with these approximation methods generally require very few to no additional lsf evaluations (or evaluations of its gradient). However, the error of the resulting sensitivity estimate is usually difficult to measure without a reference value [34,36].

4.2. Sensitivity with Direct Simulation Method

Surface integrals cannot be directly estimated with Monte Carlo methods. Nevertheless, there are special contexts in which the derivative of the domain integral P f is still a domain integral, in the standard normal space. Indeed, if the failure probability is estimated in the directional sampling framework or in the line sampling framework, the failure domain indicator function is no longer explicit in the definition of the failure probability integral. Consequently, the differentiation of the probability integral results in a domain integral as well, which can be estimated with MC methods.
The sensitivity analysis in both sampling frameworks is detailed in the following sections, where no assumption on the nature of s is made. In both frameworks, the observations generated to compute the failure probability estimate can be reused to compute the failure probability sensitivity estimate.

4.2.1. Sensitivity Estimation with Directional Sampling

Directional Sampling (DS) is a sampling technique first introduced in [38] and derived for structural reliability problems in [39,40]. We focus here on the case where the limit state surface is star-shaped with respect to the origin, so that each direction admits at most one root. It is recalled here that the random normal inputs $X$ can be written $X = R\, T$ [41], where R is a positive random variable and $T$ is a d-variate random vector which is independent of R and uniformly distributed on the unit sphere $\mathbb{T}^d = \{ (t_1, \ldots, t_d) \in \mathbb{R}^d : t_1^2 + \cdots + t_d^2 = 1 \}$. The probability of failure, with the dependency on $s$ included, is written
$$P_f(s) = \mathbb{P}(g(s, R\, T) \leq 0) = \int_{\mathbb{T}^d} \mathbb{P}(g(s, R\, t) \leq 0)\, f_T(t)\, \mathrm{d}t = \int_{\mathbb{T}^d} \left( 1 - F_R(r(s, t)) \right) f_T(t)\, \mathrm{d}t,$$
where $t \mapsto r(s, t)$ is the function which assigns to every direction $t$ its root (with the dependency on $s$ made explicit), $f_T$ is the density of the random vector $T$ and $F_R$ is the cdf of the radial component R of the standard normal inputs $X$. In the DS framework, the root $r(s, t)$ is the distance between the limit state surface $\{ g(s, x) = 0 \}$ and the origin, in the direction $t$. The sensitivity of $P_f$ can be simply derived with the equation
$$\frac{\partial P_f(s)}{\partial s_\ell} = - \int_{\mathbb{T}^d} \frac{\partial r(s, t)}{\partial s_\ell}\, f_R(r(s, t))\, f_T(t)\, \mathrm{d}t,$$
where $f_R$ is the pdf of R. The derivatives of r, for $\ell \in \{1, \ldots, p\}$, are equal to [42]
$$\forall t \in \mathbb{T}^d, \quad \frac{\partial r(s, t)}{\partial s_\ell} = - \frac{1}{\nabla_x g\!\left( s, r(s, t)\, t \right)^\top t}\, \frac{\partial g\!\left( s, r(s, t)\, t \right)}{\partial s_\ell}.$$
Therefore, the failure probability sensitivity can be estimated with
$$\widehat{\frac{\partial P_f(s)}{\partial s_\ell}}^{\mathrm{DS}} = \frac{1}{N} \sum_{j=1}^{N} \frac{f_R\!\left( r\!\left( s, T^{(j)} \right) \right)}{\nabla_x g\!\left( s, r\!\left( s, T^{(j)} \right) T^{(j)} \right)^\top T^{(j)}}\, \frac{\partial g\!\left( s, r\!\left( s, T^{(j)} \right) T^{(j)} \right)}{\partial s_\ell}, \tag{15}$$
where the observations $T^{(j)}$ are iid from $f_T$. Introducing the IS framework, the sensitivity estimate in the Directional IS (DIS) context is written
$$\widehat{\frac{\partial P_f(s)}{\partial s_\ell}}^{\mathrm{DIS}} = \frac{1}{N} \sum_{j=1}^{N} \frac{f_R\!\left( r\!\left( s, T^{(j)} \right) \right)}{\nabla_x g\!\left( s, r\!\left( s, T^{(j)} \right) T^{(j)} \right)^\top T^{(j)}}\, \frac{\partial g\!\left( s, r\!\left( s, T^{(j)} \right) T^{(j)} \right)}{\partial s_\ell}\, \frac{f_T\!\left( T^{(j)} \right)}{h_T\!\left( T^{(j)} \right)}, \tag{16}$$
where the observations $T^{(j)}$ are iid from $h_T$. The estimates of Equations (15) and (16) are unbiased. Since all the observations are iid, the variance and the mean square error are derived with the classical formulas. The observations are the same as the ones used to compute the failure probability estimate. However, it should be noted that the denominator of the first factor of both estimates is, up to the norm of the gradient, the cosine between the sampling direction and the gradient of the lsf evaluated at the intersection between the sampling direction and the limit state surface. Consequently, an evaluation of the gradient $\nabla_x g$ is required for each observation $T^{(j)}$, in addition to the evaluation of the derivative of g with respect to $s_\ell$. Therefore, the simulation budget becomes substantially heavier.
In [43], the sensitivities with respect to distribution parameters only, $s = \theta$, are further detailed. Furthermore, a link between the root function r and the norm $\beta$ of the design point in the FORM framework is underlined. Indeed, for each observation $r\!\left( s, T^{(j)} \right) T^{(j)}$ at the intersection between the sampling direction and the limit state surface, a hyperplane can be built as in the FORM context to locally approximate the lsf. There is consequently an equivalence between the derivatives of r and those of $\beta$; cf. Section 4.1.1. The resulting sensitivity estimate can then be seen as the mean value, with a correction factor, of all the local sensitivities computed on the approximated lsfs [43].
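The following sketch implements the DS sensitivity estimate for a toy affine lsf with a scalar deterministic parameter; the root search with a bracketed solver and the bracketing interval are assumptions of the sketch.

```python
# Sketch (illustrative assumptions): directional-sampling sensitivity estimate.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(4)
N, s, d = 2000, 4.0, 2

def g(s, x):
    return s - x[0] - x[1]                          # toy lsf in the standard space

grad_x_g = lambda s, x: np.array([-1.0, -1.0])
dg_ds = lambda s, x: 1.0

T = rng.standard_normal((N, d))
T /= np.linalg.norm(T, axis=1, keepdims=True)       # uniform directions on the unit sphere

Pf_terms, sens_terms = [], []
for t in T:
    along = lambda r: g(s, r * t)
    if along(0.0) * along(50.0) < 0:                # a root exists within (0, 50]
        r_root = optimize.brentq(along, 0.0, 50.0)
        Pf_terms.append(stats.chi.sf(r_root, df=d))     # 1 - F_R(r), R ~ chi(d)
        cosine = grad_x_g(s, r_root * t) @ t            # gradient-direction inner product
        sens_terms.append(stats.chi.pdf(r_root, df=d) / cosine * dg_ds(s, r_root * t))
    else:                                           # direction never reaches the failure domain
        Pf_terms.append(0.0)
        sens_terms.append(0.0)

print("P_f ~", np.mean(Pf_terms), "  dP_f/ds ~", np.mean(sens_terms),
      "  (references ~2.3e-3 and ~-5.2e-3)")
```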

4.2.2. Sensitivity Estimation with Line Sampling

Line sampling (LS) was first introduced in [44] and aims to correct the FORM/SORM approximations with sampling. Suppose the design point $P^*$ of the system is unique. The idea is to sample on the hyperplane perpendicular to the direction pointing towards the design point, in the standard normal space of $X$. In order to do so, the sampling space is rotated and reduced by one dimension: $V = R\, X$, with $V = [V_1, V_d]$, where $V_1$ contains the $(d-1)$ first coordinates of $V$ (the $(d-1)$-dimensional subspace), $V_d$ is the coordinate parallel to the direction of the design point $P^*$, and R is a rotation matrix. The rotation matrix verifies $R\, R^\top = I_d$ and its d-th row is the unit direction $P^* / \| P^* \|$; see [42]. In the LS framework, the root $b(v_1)$ represents the distance to the limit state surface in the direction orthogonal to the hyperplane $\{ v_d = 0 \}$ at $v_1$. We focus here on the case where the root of the limit state function for the vector $v_1$ is unique. The probability of failure, with the dependency on $s$, is expressed as
$$P_f(s) = \int_{\mathbb{R}^{d-1}} \Psi\!\left( - b(s, v_1) \right) f_{V_1}(v_1)\, \mathrm{d}v_1,$$
where $v_1 \mapsto b(s, v_1)$ is the function which assigns to every vector $v_1$ its root (with the dependency on $s$ made explicit), $\Psi$ denotes the cdf of a univariate standard normal variable and $f_{V_1}$ is the pdf of the $(d-1)$-variate standard normal random vector on the hyperplane $\{ v_d = 0 \}$. The sensitivity of $P_f$ can simply be derived with the equation
$$\frac{\partial P_f(s)}{\partial s_\ell} = - \int_{\mathbb{R}^{d-1}} \frac{\partial b(s, v_1)}{\partial s_\ell}\, \phi\!\left( b(s, v_1) \right) f_{V_1}(v_1)\, \mathrm{d}v_1,$$
where $\phi$ denotes the pdf of a univariate standard normal variable (which is symmetric), and the derivatives of b are equal to [42]
$$\forall v_1 \in \mathbb{R}^{d-1}, \quad \frac{\partial b(s, v_1)}{\partial s_\ell} = - \frac{1}{\nabla_x g\!\left( s, R^\top [v_1, b(s, v_1)] \right)^\top a}\, \frac{\partial g\!\left( s, R^\top [v_1, b(s, v_1)] \right)}{\partial s_\ell},$$
where $a = P^* / \| P^* \|$. Consequently, the failure probability sensitivity can be estimated with
$$\widehat{\frac{\partial P_f(s)}{\partial s_\ell}}^{\mathrm{LS}} = \frac{1}{N} \sum_{j=1}^{N} \frac{\phi\!\left( b\!\left( s, V_1^{(j)} \right) \right)}{\nabla_x g\!\left( s, R^\top \!\left[ V_1^{(j)}, b\!\left( s, V_1^{(j)} \right) \right] \right)^\top a}\, \frac{\partial g\!\left( s, R^\top \!\left[ V_1^{(j)}, b\!\left( s, V_1^{(j)} \right) \right] \right)}{\partial s_\ell}, \tag{17}$$
where the observations $V_1^{(j)}$ are iid from $f_{V_1}$. The sensitivity estimate is unbiased. Since all the observations are iid, the variance and mean square error are derived with the classical formulas; the coefficient of variation (CV) is displayed in [45] for the interested reader. The observations are the same as the ones used to compute the failure probability estimate. However, it should be noted that the denominator of the first factor is, up to the norm of the gradient, the cosine between the direction $a$ towards the design point and the gradient of the lsf evaluated at the intersection between the line parallel to $a$ at $V_1^{(j)}$ and the limit state surface. Consequently, as in the DS framework, the evaluation of the gradient $\nabla_x g$ is required for each observation $V_1^{(j)}$, in addition to the evaluation of the derivative of g with respect to $s_\ell$. Therefore, the simulation budget also becomes substantially heavier with this sampling method.
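A similar sketch for the LS sensitivity estimate is given below, on the same toy lsf; the important direction, the rotation matrix and the root-search bracket are assumptions of this illustration.

```python
# Sketch (illustrative assumptions): line-sampling sensitivity estimate.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(5)
N, s, d = 1000, 4.0, 2

g = lambda s, x: s - x[0] - x[1]                   # toy lsf in the standard space
grad_x_g = lambda s, x: np.array([-1.0, -1.0])
dg_ds = lambda s, x: 1.0

a = np.array([1.0, 1.0]) / np.sqrt(2.0)            # unit direction towards the design point
R = np.vstack([np.array([1.0, -1.0]) / np.sqrt(2.0), a])   # rotation matrix, last row = a

V1 = rng.standard_normal(N)                        # (d-1)-dimensional standard normal sample

Pf_terms, sens_terms = [], []
for v1 in V1:
    x_of = lambda b: R.T @ np.array([v1, b])       # map (v1, b) back to the x-space
    b_root = optimize.brentq(lambda b: g(s, x_of(b)), -10.0, 50.0)
    x_star = x_of(b_root)
    Pf_terms.append(stats.norm.cdf(-b_root))
    sens_terms.append(stats.norm.pdf(b_root) / (grad_x_g(s, x_star) @ a) * dg_ds(s, x_star))

print("P_f ~", np.mean(Pf_terms), "  dP_f/ds ~", np.mean(sens_terms),
      "  (references ~2.3e-3 and ~-5.2e-3)")
```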
The case of systems with multiple design points $P_1^*, P_2^*, \ldots$ is studied in [46], where the sensitivities with regard to distribution parameters only, $s = \theta$, are further detailed. A small modification of the line sampling scheme is required to obtain non-overlapping failure regions [47]. Once the failure regions are non-overlapping, the sensitivity of the failure probability with regard to a deterministic parameter is simply equal to the sum of the failure probability sensitivities associated with each design point. Here, we give the formula for any deterministic input $s_\ell$, as the logic presented in [46] is the same for design parameters. Supposing there are K design points, the derivative of $P_f$ is written
$$\widehat{\frac{\partial P_f(s)}{\partial s_\ell}}^{\mathrm{LS}} = \sum_{k=1}^{K} \widehat{\frac{\partial P_f(s)}{\partial s_\ell}}_{k}^{\mathrm{LS}},$$
where each term in the sum is equal to Equation (17) adapted to the design point $P_k^*$. The resulting sensitivity estimate is still unbiased and, since all the samples are independent, its variance can be computed with the classical formula.
As in the DS framework, a link between the differentiation of the root function b and the differentiation of the norm β in the FORM framework can be derived [45,46,48]. In [45], a second approach is presented to compute the sensitivity with regard to distribution parameters only, s = θ , in the LS framework. Indeed, in the particular case where the original inputs Z are independent, the expression of the sensitivity with regard to the distribution parameters is first derived in the original input space Z with the score function method (cf Section 3.1). Next, this domain integral is recast in the standard normal space X as the transformation T is quite simple since Z is already an independent vector. The LS method is then applied to the resulting domain integral in the standard normal space; see [45] for more details. A novel method that combines Monte Carlo simulation with LS in the case of a high dimension d is proposed in [49].

4.3. Sensitivity through an Approximation of the Failure Domain Indicator Function

As previously mentioned, surface integrals cannot be directly computed with Monte Carlo methods. However, it is possible to obtain a close estimate of the surface integrals of Equations (4) and (5) with a domain integral, through an approximation of the failure domain indicator function. This approach is called the Weak approach by Torii in [20,50], as the sensitivity is evaluated in the sense of distributions. The Weak approach is presented here and the statistical properties of the sensitivity estimate are derived, for any deterministic input $s_\ell$, for $\ell = 1, \ldots, p$.

4.3.1. Weak Approach: Approximation of the Indicator Function

The failure domain indicator function makes the direct differentiation of the failure probability integral of Equation (3) impossible. By replacing the failure domain indicator function with a smoother function, the differentiation can be performed and results in a domain integral [42]. Several smooth approximations of the indicator function have been derived in the literature [51]; they are typically cdfs of continuous univariate variables [50]. Here we present the approximation chosen in [42,52], which comes from the following limit
$$\forall x \in \mathbb{R}^d, \quad \mathbb{I}_{D_f}(x) = \lim_{\sigma \to 0} \Psi\!\left( - \frac{g(s, x)}{\sigma} \right),$$
where $\Psi$ is the standard normal univariate cdf and $\sigma > 0$. An illustration of this approximation of the indicator function is displayed in Figure 1. It should be noted, however, that the cdf of the uniform distribution defined on $[-\sigma/2, \sigma/2]$, with $\sigma > 0$, is another univariate cdf often considered for the approximation of the indicator function [20,50].
In the sense of distributions, the derivative of $P_f$ with respect to $s_\ell$ can then be defined as [50]
$$\frac{\partial P_f(s)}{\partial s_\ell} = \lim_{\sigma \to 0} \int_{\mathbb{R}^d} \frac{\partial \Psi\!\left( - g(s, x) / \sigma \right)}{\partial s_\ell}\, f_X(x)\, \mathrm{d}x = \lim_{\sigma \to 0} \int_{\mathbb{R}^d} - \frac{1}{\sigma} \frac{\partial g(s, x)}{\partial s_\ell}\, \phi\!\left( \frac{g(s, x)}{\sigma} \right) f_X(x)\, \mathrm{d}x,$$
where $\phi$ is the univariate standard normal pdf. The convergence is proven in [42] under some regularity conditions. Consequently, an approximation of the failure probability sensitivity can be obtained with the following domain integral
$$\frac{\partial P_f(s)}{\partial s_\ell} \approx \int_{\mathbb{R}^d} - \frac{1}{\tilde{\sigma}} \frac{\partial g(s, x)}{\partial s_\ell}\, \phi\!\left( \frac{g(s, x)}{\tilde{\sigma}} \right) f_X(x)\, \mathrm{d}x = \frac{\partial \tilde{P}_f(s, \tilde{\sigma})}{\partial s_\ell},$$
$$\text{where} \quad \tilde{P}_f(s, \tilde{\sigma}) = \int_{\mathbb{R}^d} \Psi\!\left( - \frac{g(s, x)}{\tilde{\sigma}} \right) f_X(x)\, \mathrm{d}x,$$
for $\tilde{\sigma}$ a fixed positive value. One should notice that the following convergence also holds: $\tilde{P}_f(s, \sigma)$ converges to $P_f(s)$ as $\sigma \to 0$. The domain integral $\partial \tilde{P}_f(s, \tilde{\sigma}) / \partial s_\ell$ can be estimated with the Monte Carlo method with the following estimate
$$\widehat{\frac{\partial \tilde{P}_f(s, \tilde{\sigma})}{\partial s_\ell}}^{\mathrm{MC}} = \frac{1}{N} \sum_{j=1}^{N} - \frac{1}{\tilde{\sigma}} \frac{\partial g\!\left( s, X^{(j)} \right)}{\partial s_\ell}\, \phi\!\left( \frac{g\!\left( s, X^{(j)} \right)}{\tilde{\sigma}} \right), \tag{19}$$
where the $X^{(j)}$ are iid from $f_X$. The same sample can be reused to compute both the failure probability estimate with MC and this sensitivity estimate. However, for each observation $X^{(j)}$, the derivative of the lsf with respect to $s_\ell$ has to be evaluated. Therefore, the simulation budget increases. This estimate is biased, since $\tilde{\sigma} \neq 0$, and its statistical properties are detailed in the next section.
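The sketch below evaluates the Weak-approach MC estimate on the toy affine lsf, with an arbitrarily fixed bandwidth; the bandwidth value and the lsf are assumptions of this illustration.

```python
# Sketch (illustrative assumptions): Weak-approach sensitivity estimate with crude MC.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
N, s, sigma = 10**6, 4.0, 0.2            # sigma is the fixed smoothing bandwidth

X = rng.standard_normal((N, 2))
g_val = s - X[:, 0] - X[:, 1]            # toy lsf g(s, x) = s - x1 - x2
dg_ds = np.ones(N)                       # dg/ds = 1 for every observation of this toy lsf

Pf_MC = np.mean(g_val <= 0.0)            # crude MC failure probability, same sample
dPf_weak = np.mean(-dg_ds / sigma * stats.norm.pdf(g_val / sigma))
print("P_f ~", Pf_MC, "  dP_f/ds ~", dPf_weak, "  (reference ~ -5.2e-3, slightly biased)")
```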

4.3.2. Statistical Properties of the Weak Approach with Crude MC

The statistical properties of this sensitivity estimate have been studied in numerous papers [20,42,50], as the choice of $\tilde{\sigma}$ is crucial and influences both the bias and the variance of the estimate. As the estimate is biased, it is interesting to use the expression of the mean square error, denoted $\tilde{E}_{\mathrm{mse}}^{\mathrm{MC}}(\sigma)$ and derived in [50], which is a function of $\sigma > 0$:
$$\tilde{E}_{\mathrm{mse}}^{\mathrm{MC}}(\sigma) = \left( \mathbb{E}\!\left[ \widehat{\frac{\partial \tilde{P}_f(s, \sigma)}{\partial s_\ell}}^{\mathrm{MC}} \right] - \frac{\partial P_f(s)}{\partial s_\ell} \right)^2 + \mathrm{Var}\!\left[ \widehat{\frac{\partial \tilde{P}_f(s, \sigma)}{\partial s_\ell}}^{\mathrm{MC}} \right] = e^2 + \mathrm{Var}\!\left[ \widehat{\frac{\partial \tilde{P}_f(s, \sigma)}{\partial s_\ell}}^{\mathrm{MC}} \right],$$
where e is the bias. Intuitively, one would want the value of $\tilde{\sigma}$ to be as small as possible to reduce the bias. However, for very small $\sigma$, the domain integral of $\partial \tilde{P}_f(s, \sigma) / \partial s_\ell$ converges to a surface integral over the limit state surface $\{ g(s, x) = 0 \}$. Consequently, most observations would then have a negligible weight in the sample mean of Equation (19) defining the MC sensitivity estimate, as only few of them are close to the limit state surface.
In [50], analytical formulas are derived when the uniform cdf defined on the interval $[-\sigma/2, \sigma/2]$ is chosen as the approximation of the failure domain indicator function, for the one-dimensional case ($d = 1$). The resulting bias e is proportional to $\sigma^2$, thus $e^2 \propto \sigma^4$, and the variance is proportional to $1/(N\sigma)$. This proportionality of the bias is extended to the multidimensional case in [20]. Consequently, decreasing the parameter $\sigma$ greatly reduces the bias. Nevertheless, past a certain point, the variance of the estimate increases for smaller $\sigma$ values if N is kept constant. This is schematically illustrated in Figure 2. The horizontal axis is separated into two stages: the bias stage, where the error is mainly affected by the bias of the estimate, and the variance stage, where the error is mainly affected by the variance of the estimate.
Therefore, the optimal value of $\tilde{\sigma}$ should be selected at the intersection between the bias stage and the variance stage. This intersection value is challenging to obtain in practice and is problem dependent [51]. An analytical formula for the optimal $\tilde{\sigma}$ is detailed in [20] for the one-dimensional case, with the uniform cdf approximation mentioned above. This optimal value is proportional to $1/N^{1/5}$ and depends on the quantity of interest $\partial P_f(s) / \partial s_\ell$ as well as on the third-order derivative of $P_f$ with respect to $s_\ell$; thus, its computation is intractable in practice. With this optimal $\tilde{\sigma}$ value, the mean square error satisfies $\tilde{E}_{\mathrm{mse}}^{\mathrm{MC}}(\tilde{\sigma}) \propto 1/N^{4/5}$. This convergence rate is slower than the error convergence rate of the MC estimate obtained with the score function method, which is $1/N$ (see Section 3.2). Consequently, even with the best $\tilde{\sigma}$ value, the score function method is more efficient. However, the score function method is applicable only when the components of $s$ are distribution parameters.
Finally, it should be underlined that the bias and variance of the sensitivity estimate are independent of P f ( s ) , as noticed in [50]. They are dependent on the derivatives of P f of order 1 and above. In comparison, the mean square error of the MC estimate obtained with the score function method depends on P f , as shown in Equation (8). Consequently, the sensitivities of P f ( s ) computed with the Weak approach and the failure probability estimate do not depend on the same quantities. In other words, improvements on the failure probability estimation will not necessarily lead to improvements on the failure probability sensitivity estimation and vice versa, in the Weak approach framework.

4.3.3. Weak Approach with Advanced Sampling Techniques

The Weak approach has been associated with the sequential importance sampling (SIS) framework in [52], to obtain a sensitivity estimate more efficient than the MC one. The selected value of $\tilde{\sigma}$ is the smallest value of $\sigma$ such that the coefficient of variation of the sensitivity estimate is close to a target CV,
$$\tilde{\sigma} = \operatorname*{argmin}_{\sigma \in (0, \sigma_L)} \left( \mathrm{CV}\!\left[ \widehat{\frac{\partial \tilde{P}_f(s, \sigma)}{\partial s_\ell}}^{\mathrm{SIS}} \right] - \mathrm{CV}_{\mathrm{target}} \right)^2,$$
where $\sigma_L$ is the final value of the decreasing sequence $(\sigma_i)$ of the SIS framework (see [53]). This selection does not require any additional evaluation of the lsf. The resulting sensitivity estimate has a controlled CV; however, it is not possible to measure its bias, other than with the approximation mentioned above, $e \propto \tilde{\sigma}^2$. Nevertheless, this method is efficient for multiple failure regions [52], and the SIS framework is also quite robust in high-dimensional spaces [53].
The Weak approach has been combined with the subset sampling framework in [51], for sensitivities with respect to design parameters. Equation (10), presented in Section 3.3.1, is still valid (with $\delta$ instead of $\theta$); however, the dependence on $\delta$ in each $P_i(\delta)$ is now contained in the intermediate failure domain indicator function, as well as in $\mathbb{P}(F_{i-1})$. Each intermediate sensitivity $\partial P_i(\delta) / \partial \delta_\ell$ is then estimated with the Weak approach. The selected value of $\tilde{\sigma}$ is obtained with a formula that depends on the number of failing observations in the sampling procedure, on $P_f(s)$, and on the order of magnitude of the lsf g. More precisely, $\tilde{\sigma}$ is chosen as a quantile of the ordered response values $Y^{(j)} = g\!\left( s, X^{(j)} \right)$, for $j = 1, \ldots, N$. This choice results in a number $N_r$ of response values within $\pm \tilde{\sigma}$, where $N_r$ depends on N and $P_f(s)$ and is set a priori. As previously mentioned, the bias of this estimate cannot be controlled other than with the approximation $e \propto \tilde{\sigma}^2$.

4.4. Sensitivity with Finite Difference Schemes

All the methods presented so far require the evaluation of the gradient of the limit state function with regard to s . We briefly present here approaches that do not rely on those derivatives of g to compute the sensitivity of P f . These approaches are applicable for many kinds of systems, such as high-dimensional systems whose failure domain encompasses several failure regions for instance.

4.4.1. Expression of the Finite Difference Schemes

The derivatives of $P_f$ with respect to $s$ can be directly based on finite difference schemes [54,55]. Denoting $h_\ell$ the p-vector whose components all equal zero except the $\ell$-th component, which equals h, with $h > 0$, we recall here the forward formula
$$\widehat{\frac{\partial P_f(s, h)}{\partial s_\ell}}^{\mathrm{F}} = \frac{P_f(s + h_\ell) - P_f(s)}{h} = \frac{\mathbb{E}_{f_X}\!\left[ \mathbb{I}_{D_f(s + h_\ell)}(X) \right] - \mathbb{E}_{f_X}\!\left[ \mathbb{I}_{D_f(s)}(X) \right]}{h},$$
and the central formula
$$\widehat{\frac{\partial P_f(s, h)}{\partial s_\ell}}^{\mathrm{C}} = \frac{P_f(s + h_\ell) - P_f(s - h_\ell)}{2h} = \frac{\mathbb{E}_{f_X}\!\left[ \mathbb{I}_{D_f(s + h_\ell)}(X) \right] - \mathbb{E}_{f_X}\!\left[ \mathbb{I}_{D_f(s - h_\ell)}(X) \right]}{2h}.$$
The expected values of these formulas can then be estimated with sampling methods. Here, we use the same formulations to present the different approaches as in [50]. The Direct approach consists in employing independent samples for the estimations of P f ( s + h ) , P f ( s ) and P f ( s h ) . Therefore, in order to obtain the failure probability sensitivity, the simulation budget is multiplied by two with the forward formula and by three with the central formula. In contrast, the Common Random Variable (CRV) approach employs the same sample for the estimations of P f ( s + h ) , P f ( s ) and P f ( s h ) . However, the limit state functions with parameter s + h and s h differ from g ( s , · ) . Consequently, the simulation budget is still multiplied by two with the forward formula and by three with the central formula. Using the same sample makes it possible to cancel the noise of the estimates of the expected values, which justifies the CRV approach. Improvements of the finite difference schemes have been recently proposed in [56].
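A minimal sketch of the CRV approach with the central formula is given below on the toy affine lsf; the step size h is an illustrative choice. Note that, because this toy lsf is affine in s, the shifted failure probabilities are obtained for free from the same evaluations, whereas in general g must be re-evaluated for the shifted parameter values.

```python
# Sketch (illustrative assumptions): CRV central finite-difference sensitivity.
import numpy as np

rng = np.random.default_rng(7)
N, s, h = 10**6, 4.0, 0.1

X = rng.standard_normal((N, 2))                  # one common sample (CRV approach)
Pf_at = lambda s_val: np.mean(s_val - X[:, 0] - X[:, 1] <= 0.0)   # toy lsf g(s, x) = s - x1 - x2

dPf_central = (Pf_at(s + h) - Pf_at(s - h)) / (2.0 * h)
print("dP_f/ds ~", dPf_central, "  (reference ~ -5.2e-3)")
```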

4.4.2. Statistical Properties of the Finite Difference Scheme Estimates

Both approaches are biased since $h \neq 0$. The expressions of the bias and variance of both approaches, for both finite difference schemes, can be found in [50]. The bias e of both approaches is proportional to h with the forward formula and to $h^2$ with the central formula. The variance of the Direct approach is proportional to $1/(N h^2)$, whereas the variance of the CRV approach is proportional to $1/(N h)$. Consequently, the CRV approach is theoretically superior to the Direct approach. Furthermore, among the two formulas, the central formula is theoretically the best, as its bias converges more rapidly for small h. The evolution of the mean square error as a function of h can be separated into two stages for both methods: the variance stage for small values of h and the bias stage for greater values of h, as presented for the Weak approach in Section 4.3.2.
An optimal value of h is given in [20] for the central formula in the CRV approach. This value is proportional to $1/N^{1/5}$ and depends on the quantity of interest $\partial P_f(s) / \partial s_\ell$ as well as on the third-order derivative of $P_f$ with respect to $s_\ell$; thus, its computation is intractable in practice. With this optimal h, the mean square error satisfies $E_{\mathrm{mse}} \propto 1/N^{4/5}$, which is the same convergence rate as the Weak approach with the optimal $\tilde{\sigma}$. The same conclusion can be drawn: this convergence rate is slower than the error convergence rate of the MC estimate obtained with the score function method, which is $1/N$ (see Section 3.2). Consequently, for sensitivities with respect to distribution parameters, the score function method is more efficient.

5. Conclusions

In this article, we reviewed different methods to compute the derivatives of the failure probability with respect to the deterministic inputs $s$ of the system, for RBDO and local reliability-based sensitivity analysis purposes. For each method, the additional simulation budget needed is highlighted, compared to the simulation budget necessary for the estimation of the failure probability.
If the deterministic inputs are distribution parameters, denoted $\theta$, two different frameworks are possible. The first framework relies on the score function method, and the estimation is performed in the original input space. The simulation budget is then kept constant and the sensitivity estimates are typically unbiased. However, there are few efficient techniques available to compute the failure probability in the original space. In the second framework, the original inputs are transformed into standard normal inputs with an isoprobabilistic transformation. In this input space, there is a greater number of efficient methods to compute the failure probability. Nevertheless, the sensitivity estimation leads to a heavier simulation budget, as the derivatives of the limit state function with respect to $\theta$ must be evaluated. Some approaches result in unbiased estimates, as in the DS and LS contexts, but require an even greater simulation budget, as evaluations of the gradient of g with respect to the random inputs are needed. The other approaches lead to biased estimates, whose bias depends on a scalar parameter ($\sigma$ or h) or remains unknown (in the FORM context); the resulting bias cannot be properly controlled.
If the deterministic parameters are design parameters, denoted $\delta$, the evaluation of the derivatives of g with respect to $\delta$ is mandatory, except in the finite difference scheme context (where the simulation budget is then doubled or tripled). The sensitivity analysis is performed in the standard normal space and the same conclusions can be drawn concerning the simulation budget and the bias as with distribution parameters. Thus, the simulation budget substantially increases with LS or DS, and the bias cannot be properly managed with the other approaches.

Author Contributions

M.C., J.M. and S.D.; Methodology, M.C., J.M. and S.D.; Software, M.C.; Validation, M.C.; Formal analysis, M.C., J.M. and S.D.; Investigation, M.C.; Writing—original draft, M.C.; Writing—review & editing, M.C., J.M. and S.D.; Supervision, J.M. and S.D.; Funding acquisition, J.M. and S.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Morio, J.; Balesdent, M. Estimation of Rare Event Probabilities in Complex Aerospace and Other Systems: A Practical Approach; Woodhead Publishing: Sawston, UK, 2015.
2. Moustapha, M.; Sudret, B. Surrogate-assisted reliability-based design optimization: A survey and a unified modular framework. Struct. Multidiscip. Optim. 2019, 60, 2157–2176.
3. Hilton, H.H.; Feigen, M. Minimum weight analysis based on structural reliability. J. Aerosp. Sci. 1960, 27, 641–652.
4. Meng, Z.; Li, G.; Wang, X.; Sait, S.M.; Yıldız, A.R. A comparative study of metaheuristic algorithms for reliability-based design optimization problems. Arch. Comput. Methods Eng. 2021, 28, 1853–1869.
5. Chabridon, V. Reliability-Oriented Sensitivity Analysis under Probabilistic Model Uncertainty—Application to Aerospace Systems. Ph.D. Thesis, Université Clermont Auvergne, Clermont-Ferrand, France, 2018.
6. Xiao, Z.; Han, X.; Jiang, C.; Yang, G. An efficient uncertainty propagation method for parameterized probability boxes. Acta Mech. 2016, 227, 633–649.
7. Razavi, S.; Jakeman, A.; Saltelli, A.; Prieur, C.; Iooss, B.; Borgonovo, E.; Plischke, E.; Piano, S.L.; Iwanaga, T.; Becker, W.; et al. The future of sensitivity analysis: An essential discipline for systems modeling and policy support. Environ. Model. Softw. 2021, 137, 104954.
8. Yang, J.; Clot, A.; Langley, R.S. Combined sensitivity analysis for multiple failure modes. Comput. Methods Appl. Mech. Eng. 2022, 395, 115030.
9. Sarazin, G. Analyse de Sensibilité Fiabiliste en Présence d’Incertitudes Épistémiques Introduites par les Données d’Apprentissage. Ph.D. Thesis, ISAE, Toulouse, France, 2021.
10. Bourinet, J.M. Reliability Analysis and Optimal Design under Uncertainty-Focus on Adaptive Surrogate-Based Approaches; HDR. Université Clermont Auvergne: Clermont-Ferrand, France, 2018.
11. Wu, Y.T. Computational methods for efficient structural reliability and reliability sensitivity analysis. AIAA J. 1994, 32, 1717–1723.
12. Sues, R.H.; Cesare, M.A. System reliability and sensitivity factors via the MPPSS method. Probabilistic Eng. Mech. 2005, 20, 148–157.
13. Breitung, K.W. Asymptotic Approximations for Probability Integrals; Springer: Berlin/Heidelberg, Germany, 2006.
14. Madsen, H.O.; Krenk, S.; Lind, N.C. Methods of Structural Safety; Courier Corporation: Chelmsford, MA, USA, 2006.
15. Morio, J.; Balesdent, M.; Jacquemart, D.; Vergé, C. A survey of rare event simulation methods for static input–output models. Simul. Model. Pract. Theory 2014, 49, 287–304.
16. Rubinstein, R.Y. The score function approach for sensitivity analysis of computer simulation models. Math. Comput. Simul. 1986, 28, 351–379.
17. Wu, Y.T. Adaptive importance sampling (AIS)-based system reliability sensitivity analysis method. In Proceedings of the Probabilistic Structural Mechanics: Advances in Structural Reliability Methods: IUTAM Symposium, San Antonio, TX, USA, 7–10 June 1993; Springer: Berlin/Heidelberg, Germany, 1994; pp. 550–564.
18. Bucklew, J.A. Introduction to Rare Event Simulation; Springer: Berlin/Heidelberg, Germany, 2004; Volume 5.
19. Sobol, I.M. A Primer for the Monte Carlo Method; CRC Press: Boca Raton, FL, USA, 2018.
20. Torii, A.J.; Novotny, A.A. A priori error estimates for local reliability-based sensitivity analysis with Monte Carlo Simulation. Reliab. Eng. Syst. Saf. 2021, 213, 107749.
21. Rubinstein, R.Y.; Kroese, D.P. Simulation and the Monte Carlo Method; John Wiley & Sons: Hoboken, NJ, USA, 2016.
22. Rubinstein, R.Y.; Shapiro, A. Optimization of static simulation models by the score function method. Math. Comput. Simul. 1990, 32, 373–392.
23. Robert, C.P.; Casella, G. Monte Carlo Statistical Methods; Springer: Berlin/Heidelberg, Germany, 1999; Volume 2.
24. Au, S.K.; Beck, J. Important sampling in high dimensions. Struct. Saf. 2003, 25, 139–163.
25. Cérou, F.; Del Moral, P.; Furon, T.; Guyader, A. Sequential Monte Carlo for rare event estimation. Stat. Comput. 2012, 22, 795–808.
26. Au, S.K.; Beck, J.L. Estimation of small failure probabilities in high dimensions by subset simulation. Probabilistic Eng. Mech. 2001, 16, 263–277.
27. Song, S.; Lu, Z.; Qiao, H. Subset simulation for structural reliability sensitivity analysis. Reliab. Eng. Syst. Saf. 2009, 94, 658–665.
28. Proppe, C. Local reliability based sensitivity analysis with the moving particles method. Reliab. Eng. Syst. Saf. 2021, 207, 107269.
29. Royset, J.; Polak, E. Reliability-based optimal design using sample average approximations. Probabilistic Eng. Mech. 2004, 19, 331–343.
30. Jensen, H.A.; Valdebenito, M.A.; Schuëller, G.I.; Kusanovic, D.S. Reliability-based optimization of stochastic systems using line search. Comput. Methods Appl. Mech. Eng. 2009, 198, 3915–3924.
31. Ditlevsen, O.; Madsen, H.O. Structural Reliability Methods; Wiley: New York, NY, USA, 1996; Volume 178.
32. Der Kiureghian, A.; Dakessian, T. Multiple design points in first and second-order reliability. Struct. Saf. 1998, 20, 37–49.
33. Hohenbichler, M. An asymptotic formula for the probability of intersections. In Berichte zur Zuverlassigkeitstheorie der Bauwerke; TU Munchen: Munich, Germany, 1984; pp. 21–48.
34. Yoo, D.; Lee, I.; Cho, H. Probabilistic sensitivity analysis for novel second-order reliability method (SORM) using generalized chi-squared distribution. Struct. Multidiscip. Optim. 2014, 50, 787–797.
35. Feng, Y. The computation of failure probability for nonlinear safety margin equations. Reliab. Eng. Syst. Saf. 1990, 27, 323–331.
36. Dong, Y.g.; Lu, H.t.; Li, L.l. Reliability sensitivity analysis based on multi-hyperplane combination method. Def. Technol. 2014, 10, 354–359.
37. Melchers, R.; Ahammed, M. A fast approximate method for parameter sensitivity estimation in Monte Carlo structural reliability. Comput. Struct. 2004, 82, 55–61.
38. Ditlevsen, O.; Olesen, R.; Mohr, G. Solution of a class of load combination problems by directional simulation. Struct. Saf. 1986, 4, 95–109.
39. Bjerager, P. Probability integration by directional simulation. J. Eng. Mech. 1988, 114, 1285–1302.
40. Ditlevsen, O.; Melchers, R.E.; Gluver, H. General multi-dimensional probability integration by directional simulation. Comput. Struct. 1990, 36, 355–368.
41. Genest, C.; Favre, A.C.; Béliveau, J.; Jacques, C. Metaelliptical copulas and their use in frequency analysis of multivariate hydrological data. Water Resour. Res. 2007, 43, W09401.
42. Papaioannou, I.; Breitung, K.; Straub, D. Reliability sensitivity analysis with Monte Carlo methods. In Proceedings of the ICOSSAR 2013, New York, NY, USA, 16–20 June 2013.
43. Song, S.; Lu, Z.; Song, Z. Reliability sensitivity analysis involving correlated random variables by directional sampling. In Proceedings of the 2011 International Conference on Quality, Reliability, Risk, Maintenance, and Safety Engineering, Xi’an, China, 17–19 June 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 845–850.
44. Hohenbichler, M.; Rackwitz, R. Improvement of second-order reliability estimates by importance sampling. J. Eng. Mech. 1988, 114, 2195–2199.
45. Valdebenito, M.A.; Jensen, H.A.; Hernández, H.; Mehrez, L. Sensitivity estimation of failure probability applying line sampling. Reliab. Eng. Syst. Saf. 2018, 171, 99–111.
46. Lu, Z.; Song, S.; Yue, Z.; Wang, J. Reliability sensitivity method by line sampling. Struct. Saf. 2008, 30, 517–532.
47. Schueller, G.I.; Pradlwarter, H.J.; Koutsourelakis, P.S. A critical appraisal of reliability estimation procedures for high dimensions. Probabilistic Eng. Mech. 2004, 19, 463–474.
48. Shufang, S.; Zhenzhou, L.; Weiwei, Z.; Zhengyin, Y. Reliability and sensitivity analysis of transonic flutter using improved line sampling technique. Chin. J. Aeronaut. 2009, 22, 513–519.
49. Zhang, X.; Lu, Z.; Yun, W.; Feng, K.; Wang, Y. Line sampling-based local and global reliability sensitivity analysis. Struct. Multidiscip. Optim. 2020, 61, 267–281.
50. Torii, A.J. On sampling-based schemes for probability of failure sensitivity analysis. Probabilistic Eng. Mech. 2020, 62, 103099.
51. Lacaze, S.; Brevault, L.; Missoum, S.; Balesdent, M. Probability of failure sensitivity with respect to decision variables. Struct. Multidiscip. Optim. 2015, 52, 375–381.
52. Papaioannou, I.; Breitung, K.; Straub, D. Reliability sensitivity estimation with sequential importance sampling. Struct. Saf. 2018, 75, 24–34.
53. Papaioannou, I.; Papadimitriou, C.; Straub, D. Sequential importance sampling for structural reliability analysis. Struct. Saf. 2016, 62, 66–75.
54. Atkinson, K. An Introduction to Numerical Analysis; John Wiley & Sons: Hoboken, NJ, USA, 1991.
55. Quarteroni, A.; Sacco, R.; Saleri, F. Numerical Mathematics; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2010; Volume 37.
56. Chun, J. Sensitivity analysis of system reliability using the complex-step derivative approximation. Reliab. Eng. Syst. Saf. 2021, 215, 107814.
Figure 1. Approximation of the indicator function $\mathbb{1}_{y>0}$ with the function $y \mapsto \Psi(y/\sigma)$ for three different values of σ, where Ψ is the cdf of a standard normal univariate variable.
Figure 2. Illustration of the relation between $\widetilde{E}_{\mathrm{mse}}^{\mathrm{MC}}$ and σ in log–log scale, for a fixed sample size N, with the approximations derived in [50] in a one-dimensional case.