1. Introduction
The results of measurements and computations are never perfectly accurate but are subject to unavoidable errors. Around any reported experimental value there always exists a range of values that may also be plausibly representative of the true but unknown value of the measured quantity. Similarly, computations are subject to errors stemming from uncertain model parameters, initial and boundary conditions, imperfectly known physical processes, imperfectly known physical geometry caused by imperfect material boundaries and, finally, inexact numerical computations. Therefore, knowledge of just the nominal values of experimentally measured or computed quantities is insufficient for determining the reliability of results in practical applications. The quantitative uncertainties accompanying measurements and computations are also needed. Furthermore, it is imperative to ensure that uncertainties are reduced, in order to reduce the risks associated with the activities/solutions needed to achieve the desired goal.
The conventional path for combining measurements with computations is provided by data assimilation (DA) methods [1,2]. These methods rely fundamentally on the minimization of a user-defined subjective functional intended to represent the differences (usually in the energy norm) between computed and measured results of interest. On the other hand, using Jaynes’ maximum entropy (MaxEnt) principle [3] and Shannon’s information entropy concept [4], Cacuci [5] has conceived a predictive modeling (PM) methodology called BERRU-PM for combining experimental and computational information to obtain optimally predicted “best-estimate results with reduced uncertainties” (acronym: BERRU). The BERRU-PM methodology is formulated in the combined phase-space of model parameters and model responses (i.e., results of interest computed using the model) and comprises the following key elements:
- (i) Arbitrarily high-order sensitivities (i.e., functional derivatives) of model responses with respect to model parameters, computed using the high-order adjoint sensitivity analysis methodology conceived and developed by Cacuci [6,7], which largely overcomes the curse of dimensionality in sensitivity analysis [8], thus providing the most efficient methodology for computing exact expressions of the sensitivities;
- (ii) Quantification of the uncertainties induced in model-computed results by the uncertainties in the underlying model parameters;
- (iii) Integration of experimental data for performing model validation, calibration, extrapolation, and estimation of the validation domain. Model validation addresses the question “does the respective model represent reality?” Model calibration addresses the integration of experimental data for the purpose of updating the parameters underlying the computational model, including the estimation of inconsistencies in the experimental data and the quantification of the biases between model predictions and experimental data. Model extrapolation addresses the quantification of the uncertainties in predictions under new conditions, identifying the areas where the predictive estimation of uncertainty meets specified requirements for the performance, reliability, or safety of the system of interest.
Since the BERRU-PM methodology is formulated in the combined phase-space of model parameters and responses, it can be utilized both for “forward/direct predictive modeling” and for “inverse predictive modeling”. The “forward” or “direct problem” solves the “parameter-to-output” mapping that describes the “cause-to-effect” relationship in the physical process being modeled. The “inverse problem” aims at solving the “output-to-parameters” mapping. For either forward or inverse problems, the BERRU-PM methodology yields reduced predicted uncertainties in both the calibrated model parameters and the predicted responses (results) of interest. Reducing uncertainties is tantamount to reducing risk, thereby increasing the probability of successful delivery of desired results.
Recently, Cacuci [9,10] has generalized his original BERRU-PM methodology, which included only first-order response sensitivities with respect to model parameters, to include second- and higher-order sensitivities of model responses to model parameters, naming the result “2nd-BERRU-PM”. Although the 2nd-BERRU-PM methodology currently yields only the first- and second-order moments of the predicted distribution of responses and parameters, the “input” for the 2nd-BERRU-PM can include sensitivities of second and higher order. The impact of these higher-order sensitivities will be underscored by the results presented in Section 3. The 2nd-BERRU-PM methodology is also constructed in the most inclusive “joint phase-space of parameters, computed and measured responses” and consequently calibrates responses and parameters simultaneously, thus providing results for forward and inverse problems at the same time. In contradistinction, DA methods are formulated conceptually [1,2] either just in the phase-space of measured responses (“observation-space formulation”) or just in the phase-space of the model’s dependent variables (“state-space formulation”), so even the so-called “second-order DA” can calibrate only initial conditions as “direct results” but cannot directly calibrate any other model parameters. Furthermore, second-order DA methods fail fundamentally if experiments are perfectly well known and/or if the response measurements happen to coincide with the computed value of the response. Although such situations are not expected to occur frequently in practice, there are no negative consequences (should such a situation occur) if the 2nd-BERRU-PM methodology is used, in contradistinction to the DA methodology. Furthermore, the 2nd-BERRU-PM methodology is significantly more efficient computationally than DA, since it requires the inversion of a single matrix whose size is given by the total number of simultaneously considered responses, whereas in DA methods the smallest matrix to be inverted has the size of the number of discretization nodes/points of the grid designed for computing the model’s dependent variables numerically. These advantages of the 2nd-BERRU-PM methodology over the DA methodology stem from the fact that the 2nd-BERRU-PM methodology is fundamentally anchored in physics-based principles (thermodynamics and information theory) formulated in the most inclusive possible phase-space (namely the combined phase-space of computed and measured parameters and responses), whereas the DA methodology is fundamentally based on the minimization of a subjective, user-chosen functional.
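The matrix-size contrast noted above can be made concrete with a toy sketch. The inverse-variance-type update below, together with all variable names, is an illustrative assumption, not the authors’ exact 2nd-BERRU-PM formulas: three computed responses are combined with three measured responses while inverting only one 3 × 3 matrix, i.e., a matrix whose size equals the number of responses, regardless of how many parameters or grid points underlie the model computation.

```python
import numpy as np

# Toy illustration (assumed structure, NOT the full 2nd-BERRU-PM formulas):
# combining computed and measured responses requires inverting only one
# matrix whose size equals the number of responses considered simultaneously.
r_comp = np.array([1.00, 2.00, 3.00])        # expected values of computed responses
r_meas = np.array([1.10, 1.95, 3.20])        # expected values of measured responses
C_comp = np.diag([0.04, 0.09, 0.16])         # covariance of computed responses
C_meas = np.diag([0.01, 0.04, 0.09])         # covariance of measured responses

# Single (n_responses x n_responses) inversion:
K = np.linalg.inv(C_comp + C_meas)
r_best = r_meas + C_meas @ K @ (r_comp - r_meas)   # pulled toward the more precise value
C_best = C_meas @ K @ C_comp                       # combined (reduced) covariance

# The combined variances are smaller than both original variances.
assert np.all(np.diag(C_best) <= np.diag(C_comp))
assert np.all(np.diag(C_best) <= np.diag(C_meas))
print(r_best, np.diag(C_best))
```

Note that the single inverted matrix stays 3 × 3 even if the underlying model were discretized on millions of grid points, which is the computational advantage claimed above.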
Section 2 of this work describes the 2nd-BERRU-PM methodology.
Section 3 will present illustrative results obtained by applying the 2nd-BERRU-PM methodology to a polyethylene-reflected plutonium (PERP) OECD/NEA reactor physics benchmark [11]. This benchmark is modeled using the Boltzmann neutron transport equation, the solution of which is representative of “large-scale computations”. The numerical model of this benchmark comprises 21,976 uncertain parameters, as follows: 6 isotopic number densities (for 239Pu, 240Pu, 69Ga, 71Ga, 12C, and 1H); 180 group-averaged microscopic total cross sections; 21,600 group-averaged microscopic scattering cross sections; 120 fission process parameters; 60 fission spectrum parameters; and 10 parameters describing the experiment’s nuclear sources [12]. The response of interest for this benchmark is the leakage of neutrons through the outer sphere of the plutonium benchmark.
Section 4 summarizes the complete set of results produced by the 2nd-BERRU-PM methodology.
2. Methods: Second-Order Predictive Modeling Methodology (2nd-BERRU-PM)
Using the principle of maximum entropy (MaxEnt) originally formulated by Jaynes [3] and the Shannon information entropy concept [4], Cacuci [5] has formulated the BERRU-PM methodology for combining experimental and computational information to obtain optimally predicted results with reduced uncertainties, providing optimal compatibility with the available information while simultaneously ensuring minimal spurious information content. Recently, Cacuci [9,10] has extended the BERRU-PM methodology, obtaining the following expressions for the predicted best-estimate responses and calibrated model parameters:
- (i) The best-estimate optimal mean values for the predicted responses:
The quantities appearing in Equation (1) are defined as follows:
- a. Matrices are denoted using capital bold letters, while vectors are denoted using either capital or lower-case bold letters.
- b. The components of the vector of best-estimate values for the responses run from 1 up to the “total number of responses” under consideration. A dedicated symbol is used to denote “is defined as” or “is by definition equal to”. Transposition is indicated by a dagger superscript.
- c. The components of the vector of measured responses denote the expected values of the experimentally measured responses. The letter “e” will be used either as a subscript or a superscript to indicate experimentally measured quantities.
- d. The covariance matrix of the experimentally measured responses, the component covariances of which stem from the measurements.
- e. The covariance matrix for the computed responses. The complete expression of the covariance between two computed responses is provided in the Section below.
- f. The “vector of expected values of computed responses”, the components of which are obtained by determining the expectation value of the multivariate Taylor expansion of the computed response with respect to the benchmark model’s uncertain parameters, evaluated around the parameters’ mean values. The complete expression of the expected value of a generic response is provided in the Section below.
- (ii) The best-estimate optimal covariances for the best-estimate predicted responses:
It is important to note that the variance of each best-estimate response is smaller than either the original variance of the corresponding experimentally measured response or the original variance of the corresponding computed response. This fact can be shown by using Equation (2) to obtain the following result:
Since all of the matrices in Equation (3) are positive definite, and since the components of the diagonals of the matrices on the left side of this equation are the respective response variances, it follows that:
A similar computation yields the following result:
Since all of the matrices in Equation (5) are positive definite, and since the components of the diagonals of the matrices on the left side of this equation are the respective response variances, it follows that:
The inequalities in Equations (4) and (6) demonstrate that the 2nd-BERRU-PM methodology yields a posteriori variances for the best-estimate responses which are reduced by comparison to the original variances of either the experimentally measured or the computed responses. This mathematically guaranteed reduction of the uncertainties in the predicted responses enabled by the 2nd-BERRU-PM methodology, as well as the significant impact of higher-order sensitivities, will be illustrated in Section 3 by considering parameter uncertainties and measured response uncertainties for a polyethylene-reflected plutonium OECD/NEA reactor physics benchmark [11].
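The structure of the variance-reduction inequalities in Equations (4) and (6) can be checked in the scalar case. The combination rule below is an illustrative inverse-variance-type sketch (an assumption, not the paper’s exact matrix expressions): whatever the measured and computed variances are, the combined variance is smaller than either one.

```python
# Scalar sketch of the variance-reduction inequalities (an assumed
# inverse-variance-type combination, not the paper's exact matrix formulas):
# var_e * var_c / (var_e + var_c) is always smaller than both var_e and var_c.
def best_estimate_variance(var_meas: float, var_comp: float) -> float:
    """Posterior variance of an inverse-variance-type combination (illustrative)."""
    return var_meas * var_comp / (var_meas + var_comp)

# The reduction holds for precise, equal, and very disparate uncertainties alike:
for var_meas, var_comp in [(0.01, 0.04), (0.25, 0.25), (1e-6, 1.0)]:
    var_be = best_estimate_variance(var_meas, var_comp)
    assert var_be < var_meas and var_be < var_comp
```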
The 2nd-BERRU-PM methodology also yields the following results [9,10]:
- (iii) The best-estimate optimal mean values for the calibrated model parameters:
- (iv) The best-estimate optimal covariances for the best-estimate calibrated parameters:
- (v) The best-estimate parameter-response correlation matrix:
- (vi) A chi-square-like “consistency indicator”, denoted below as “CI”, which can be used ab initio (i.e., before actually combining the computational and experimental information) to assess the degree of agreement between the computed and measured responses. For predictive modeling, it is important to assess whether the response and data measurements are free of gross errors (blunders such as wrong settings, mistaken readings, etc.), and whether the measurements are consistent with the assumptions regarding the respective means, variances, and covariances.
The results shown in Equations (7)–(10) are not used in this work (but will be used in future works), so they will not be discussed further in the present context; the interested reader may wish to consult references [5,9,10].
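For a single response, the logic of a chi-square-like consistency indicator can be sketched as follows; the function name and the scalar form (discrepancy squared over the sum of the variances) are illustrative assumptions consistent with item (vi) above, not the paper’s exact expression.

```python
# Sketch of a chi-square-like consistency indicator for a single response
# (assumed scalar form, not the paper's exact expression): a value of order
# unity suggests the computed and measured responses agree within their
# combined uncertainties; a much larger value flags a possible blunder or
# underestimated uncertainties.
def consistency_indicator(r_meas, var_meas, r_comp, var_comp):
    return (r_meas - r_comp) ** 2 / (var_meas + var_comp)

ci_consistent = consistency_indicator(1.05, 0.01, 1.00, 0.04)   # small discrepancy
ci_discrepant = consistency_indicator(2.00, 0.01, 1.00, 0.04)   # gross discrepancy
assert ci_consistent < 1.0 < ci_discrepant
```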
The expected value of a computed response is obtained by using the multivariate Taylor expansion of the computed response with respect to the benchmark model’s uncertain parameters, evaluated around the parameters’ mean values. The fourth-order Taylor series of a response around the expected (or nominal) parameter values has the following formal expression (where the possible dependence of the response on the model’s independent and dependent variables has been suppressed):
In Equation (11), the leading term denotes the computed value of the response using the expected/nominal parameter values, while the braces indicate that the functional derivatives within them are also computed using the expected/nominal parameter values. These functional derivatives are called the “sensitivities” of the response with respect to the model parameters: the first-order sensitivities are the first-order functional derivatives of the response with respect to the model parameters, computed at the nominal parameter values; the second-order sensitivities are the corresponding second-order functional derivatives, and so on. The third- and fourth-order sensitivities of the response also appear in Equation (11). The remainder term in Equation (11) comprises all quantifiable errors in the representation of the computed response as a function of the model parameters, including the truncation errors of the Taylor series expansion, possible bias errors due to incompletely modeled physical phenomena, and possible random errors due to numerical approximations. The radius/domain of convergence of the series in Equation (11) determines the largest parameter variations which are admissible before the series becomes divergent. In turn, these maximum admissible parameter variations limit the largest parameter covariances/standard deviations which can be considered when using the Taylor expansion for the subsequent purpose of computing moments of the distribution of computed responses. In general, several computed responses are of interest; these responses are considered to be the components of a vector of responses.
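The role of the admissible parameter variations can be illustrated with a toy one-parameter sketch; the response exp(α) and all names below are assumptions for illustration, not the benchmark model. A fourth-order Taylor surrogate of the kind in Equation (11) tracks the response closely for small variations but degrades as the variation grows.

```python
import math

# Toy one-parameter illustration of a fourth-order Taylor surrogate, as in
# Equation (11). The response r(alpha) = exp(alpha) is an assumed stand-in.
alpha0 = 0.0
sens = [math.exp(alpha0)] * 5   # derivatives of exp at alpha0: all equal to 1.0

def taylor_response(delta_alpha: float) -> float:
    """Fourth-order Taylor surrogate evaluated at a parameter variation."""
    return sum(sens[n] * delta_alpha ** n / math.factorial(n) for n in range(5))

# Truncation error grows with the size of the parameter variation:
for delta in (0.1, 0.5, 2.0):
    err = abs(taylor_response(delta) - math.exp(alpha0 + delta))
    print(f"delta={delta}: truncation error {err:.3e}")
```

This mirrors the statement above: the admissible parameter variations (and hence the admissible standard deviations) are bounded by where the truncated series remains an accurate surrogate.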
As is well known, the computation by conventional methods of the nth-order functional derivatives (called “sensitivities” in the field of sensitivity analysis) of a response with respect to the parameters on which it depends would require a number of large-scale computations that increases exponentially with the order of the derivatives. This exponential increase, with the order of the response sensitivities, of the number of large-scale computations needed to determine higher-order sensitivities is the manifestation of the “curse of dimensionality in sensitivity analysis”, by analogy to the expression coined by Bellman [8] to express the difficulty of using “brute-force” grid search when optimizing a function with many input variables. The “nth-order Comprehensive Adjoint Sensitivity Analysis Methodology for Response-Coupled Forward/Adjoint Linear Systems” (nth-CASAM-L) conceived by Cacuci [6] and the “nth-order Comprehensive Adjoint Sensitivity Analysis Methodology for Nonlinear Systems” (nth-CASAM-N) conceived by Cacuci [7] are currently the only methodologies that enable the exact and efficient computation of arbitrarily high-order sensitivities while overcoming the curse of dimensionality. In particular, all of the 1st-order sensitivities are obtained after solving a single “1st-Level Adjoint Sensitivity System (1st-LASS)” which, very importantly, is independent of the parameter variations and therefore needs to be solved only once per response, regardless of the number of model parameters under consideration. To each 1st-order sensitivity there corresponds a “2nd-Level Adjoint Sensitivity System (2nd-LASS)”, the solving of which is equivalent to solving (at most) two 1st-LASS, for obtaining all of the 2nd-order sensitivities corresponding to the 1st-order sensitivity under consideration. The computation of the 2nd-order sensitivities would logically be prioritized based on the relative magnitudes of the 1st-order sensitivities: the largest relative 1st-order response sensitivity would have the highest priority for computing the corresponding 2nd-order mixed sensitivities; the second-largest relative 1st-order response sensitivity would be considered next, and so on. The unimportant 2nd-order sensitivities can be deliberately neglected while knowing the error incurred by neglecting them. Computing 2nd-order sensitivities that correspond to vanishing 1st-order sensitivities may be of special interest, since vanishing 1st-order sensitivities may indicate critical points of the response in the phase-space of model parameters.
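The scale of the curse of dimensionality for this benchmark can be made explicit by counting derivatives. Assuming symmetric mixed partials, a response depending on TP parameters has C(TP + n − 1, n) distinct derivatives of order n; the sketch below evaluates this count for the PERP benchmark’s 21,976 parameters.

```python
from math import comb

# Counting distinct nth-order partial derivatives (sensitivities), assuming
# symmetric mixed partials: C(TP + n - 1, n) combinations with repetition.
# Brute-force (e.g., finite-difference) evaluation of each one would require
# at least one large-scale computation, hence the curse of dimensionality.
TP = 21976  # total number of uncertain parameters of the PERP benchmark
for n in range(1, 5):
    count = comb(TP + n - 1, n)
    print(f"order {n}: {float(count):.3e} distinct sensitivities")
```

Already at second order the count approaches a quarter of a billion distinct sensitivities, which is why adjoint-based computation and sensitivity-ranked prioritization are essential.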
Using Equation (11), the expected value of the computed response has the following expression up to fourth-order sensitivities:
The quantities appearing in Equation (12) are defined as follows: the covariance of two parameters is expressed in terms of the parameters’ standard deviations (equivalently, their variances) and the correlation between the two parameters; additional quantities denote the triple and, respectively, the quadruple correlations among the model parameters.
The covariance matrix for the computed responses appears in Equations (1) and (2). Its components denote the covariances between pairs of computed responses. These covariances are also determined by using the Taylor series expansion, shown in Equation (11), of the computed response with respect to the benchmark model’s uncertain parameters. Up to fourth-order sensitivities, the covariance of two computed responses has the following expression:
The quantities which appear in Equations (14) and (15), respectively, denote the fifth-order and sixth-order correlations among the corresponding model parameters.
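The leading (first-order) term of the response covariance in Equation (13) is the familiar “sandwich rule”. The sketch below uses assumed generic names: S holds the first-order sensitivities of each response to each parameter (one row per response) and C_alpha is the parameter covariance matrix; the higher-order terms of Equation (13), built from second- through fourth-order sensitivities and higher parameter moments, are omitted.

```python
import numpy as np

# First-order ("sandwich rule") term of the covariance between computed
# responses; names are assumed for illustration. To first order:
#   cov(computed responses) ~= S @ C_alpha @ S.T
S = np.array([[2.0, 0.5, 0.0],
              [0.0, 1.0, 3.0]])            # 2 responses, 3 parameters
C_alpha = np.diag([0.01, 0.04, 0.0025])    # uncorrelated parameter covariances
C_resp = S @ C_alpha @ S.T                 # 2 x 2 response covariance matrix
print(C_resp)
```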
3. Results
This Section will present results predicted by the 2nd-BERRU-PM methodology for the PERP-benchmark’s neutron leakage response when considering various levels of precision (represented by standard deviation values) for the measured response and for the benchmark’s imprecisely known cross sections.
The PERP benchmark [11] for subcritical neutron and gamma measurements comprises a metallic inner sphere (“core”), which contains the following four isotopes: Isotope 1 (239Pu; weight fraction: 9.3804 × 10^−1), Isotope 2 (240Pu; weight fraction: 5.9411 × 10^−2), Isotope 3 (69Ga; weight fraction: 1.5152 × 10^−3), and Isotope 4 (71Ga; weight fraction: 1.0346 × 10^−3). This core is surrounded by a spherical shell of polyethylene containing two isotopes, designated as Isotope 5 (12C; weight fraction: 8.5630 × 10^−1) and Isotope 6 (1H; weight fraction: 1.4370 × 10^−1). The inner sphere has a radius of 3.794 cm; the outer sphere has a radius of 7.604 cm.
The neutron flux distribution within the PERP benchmark is computed by using the multi-group discrete ordinates particle transport code PARTISN [13] to solve the multi-group approximation of the neutron transport equation, with a spontaneous fission source provided by the code SOURCES4C [14]. The PARTISN [13] computations used the MENDF71X [15] 618-group cross-section data collapsed to a coarser energy-group structure, as well as an angular quadrature of S32 and a P3 Legendre expansion of the scattering cross section, in conjunction with a fine-mesh spacing of 0.005 cm (comprising 759 meshes for the plutonium sphere radius of 3.794 cm and 762 meshes for the polyethylene shell of thickness 3.81 cm). The boundaries of the energy groups are provided in [12]. The numerical model of the PERP benchmark comprises 21,976 uncertain parameters, of which 7477 parameters have non-zero values. These non-zero parameters are as follows: 180 group-averaged microscopic total cross sections; 7101 non-zero group-averaged microscopic scattering cross sections; 120 fission process parameters; 60 fission spectrum parameters; 10 parameters describing the experiment’s nuclear sources; and 6 isotopic number densities. All of the 7477 non-zero first-order sensitivities and (7477)^2 second-order sensitivities of the PERP leakage response with respect to the benchmark’s parameters were computed and analyzed in [12]. The results presented in [12] revealed that the 2nd-order sensitivities of the PERP benchmark’s leakage response with respect to the 180 group-averaged microscopic total cross sections are the largest and have, therefore, the largest impact on the uncertainties induced in the leakage response.
As has been mentioned above, the response of interest for the PERP benchmark [11] is the total leakage of neutrons (i.e., neutrons exiting the outer surface of the benchmark). The computed nominal value of the total leakage response for this benchmark is expressed in neutrons/s; the superscript “c” will be used to denote “computed quantities”.
The results presented in this Section are based on the expression shown in Equation (1) for the best-estimate (optimal) mean value of the predicted leakage response, which takes the following form:
The best-estimate (optimal) covariance for the best-estimate predicted leakage response is given by Equation (2), as reproduced below:
In Equations (16) and (17):
- (i) The covariance matrix of the measured responses reduces to the (scalar) variance of the measured leakage response, since only one response, namely the neutron leakage response, will be analyzed.
- (ii) The covariance matrix of the computed responses likewise reduces to a (scalar) variance for the single leakage response. As shown in Section 2, and as will also be shown below, the expression of this variance involves the sensitivities of the benchmark’s leakage response to the benchmark’s uncertain parameters. These sensitivities have been computed up to and including the fourth order by applying the nth-CASAM methodology [6] and have been reported in [12]. The largest relative sensitivities of the leakage response are with respect to the benchmark’s 180 microscopic total cross sections. The remaining parameters have very small sensitivities; their combined impact on the expectation value and variance of the computed leakage response is of the order of 5%. Therefore, these parameters can be safely neglected, thus greatly reducing the number of computations which would otherwise be required, without affecting the conclusions that will be reached by considering just the microscopic total cross sections. Hence, only the 180 microscopic total cross sections will be considered as the benchmark’s imprecisely known parameters.
The second-order, third-order (skewness), and fourth-order (kurtosis) correlations for the total cross sections comprising this benchmark are not available. Nevertheless, the impact of the standard deviations of the benchmark’s total cross sections can be illustrated by considering that these cross sections are uncorrelated and normally distributed. Under these circumstances, only the unmixed higher-order sensitivities contribute, and the mean value of the computed leakage response takes on the following simplified form of the expression provided in Equation (12):
where the leading term denotes the value (in neutrons/s) of the computed leakage response at the nominal parameter values, and where the summations extend over the total number (180) of microscopic total cross sections.
- (iii) Since only the neutron leakage response will be analyzed, and since the 180 microscopic total cross sections considered as the benchmark’s imprecisely known parameters are assumed to be uncorrelated and normally distributed, the covariance matrix of the computed responses reduces to the (scalar) variance of the computed leakage response, which has the following simplified form of the corresponding expression provided in Equation (13):
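Under the uncorrelated-normal assumption just stated, the structure of the simplified mean and variance expressions can be sketched numerically. All sensitivity values below are placeholder assumptions, not the PERP benchmark’s actual sensitivities; only the leading (first-order) variance term is shown, and the normal moments E[δ²] = σ², E[δ³] = 0, E[δ⁴] = 3σ⁴ are used.

```python
import numpy as np

# Placeholder sketch of the structure of the simplified mean and variance for
# uncorrelated, normally distributed parameters. Sensitivity values are
# ASSUMED placeholders, not the PERP benchmark's actual sensitivities.
n_par = 180                      # the 180 microscopic total cross sections
sigma = np.full(n_par, 0.03)     # 3% relative standard deviations
S1 = np.full(n_par, 1.0)         # unmixed 1st-order relative sensitivities
S2 = np.full(n_par, 0.5)         # unmixed 2nd-order relative sensitivities
S4 = np.full(n_par, 0.2)         # unmixed 4th-order relative sensitivities
L0 = 1.0                         # nominal computed leakage (normalized)

# Mean: odd-order terms vanish for normal parameters; the 4th-order
# coefficient is (1/4!) * E[d^4] = (1/24) * 3 s^4 = s^4 / 8.
E_L = L0 + 0.5 * np.sum(S2 * sigma**2) + 0.125 * np.sum(S4 * sigma**4)

# Leading (first-order) term of the variance of the computed leakage:
var_L = np.sum((S1 * sigma) ** 2)
print(E_L, var_L)
```

The sketch makes the qualitative behavior discussed below visible: the even-order sensitivity terms shift the expected value away from the nominal value, and both the shift and the variance grow with the parameter standard deviations.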
As shown in Section 2, the result provided in Equation (17) implies that the 2nd-BERRU-PM methodology yields an a posteriori variance for the best-estimate response which is reduced by comparison to the original variances of either the experimentally measured or the computed response. This mathematically guaranteed reduction of the uncertainties in the predicted responses enabled by the 2nd-BERRU-PM methodology will be illustrated in the remainder of this Section by considering various combinations of the precision (high, medium, and low) of the measured leakage response and of the value (small, medium, and large), considered to be uniform, of the relative standard deviations for the total cross sections. The first set of analyses, presented in Section 3.1, will investigate the effects of the precision (small, medium, and large relative standard deviations) of the total cross sections on the predicted best-estimate response. Section 3.2, below, will present the effects of lowering the precision of the measured response (by increasing the relative standard deviation of the measured response from 5% to 10%) while keeping high precision (relative standard deviation of 3%) for the parameters (total cross sections). Finally, Section 3.3, below, will present the analysis of a case in which the measured value of the leakage response happens to coincide with the nominally computed value of this response. It is well known that data assimilation methods [1,2] fail in such a situation. It will be shown that, in contradistinction, the 2nd-BERRU-PM methodology not only does not fail but yields excellent results.
3.1. Leakage Response Measured with Medium Precision (5% Relative Standard Deviation)
The results presented in this Section all consider a fixed experimentally measured value of the leakage response (in neutrons/s), with a relative standard deviation of 5%, which is a usual (medium) value for this type of measurement, together with the nominal value of the computed leakage response. The results to be presented will include the following quantities:
- (i) The numerical values of the expected value and standard deviation of the computed response, evaluated at successive orders of approximation: at first order, only the contributions stemming from the first-order sensitivities are included; at second order, the contributions stemming from the first- and second-order sensitivities are included; at third order, the contributions stemming from the first-, second-, and third-order sensitivities are included; at fourth order, all contributions stemming from the first- through fourth-order sensitivities are included.
- (ii) The numerical values of the best-estimate predicted response and of the accompanying best-estimate (reduced) predicted standard deviation, evaluated at the same successive orders of approximation: as described above, only the contributions stemming from the first-order sensitivities are included at first order; at second order, the contributions stemming from both the first- and second-order sensitivities are included; at third order, the contributions stemming from the first-, second-, and third-order sensitivities are included; and all contributions stemming from the first- through fourth-order sensitivities are included at fourth order.
Three sub-cases will be considered regarding the precision (standard deviations) of the parameters (total cross-sections), as follows:
- (a) Section 3.1.1: high-precision parameters, relative standard deviations of 3%;
- (b) Section 3.1.2: medium-precision parameters, relative standard deviations of 5%;
- (c) Section 3.1.3: low-precision parameters, relative standard deviations of 10%.
3.1.1. High Precision Total Cross Sections (3% Relative Standard Deviations)
It has been shown in [16] that, for uniform relative standard deviations of 3% for the model parameters, the Taylor series expansion (of the computed response in terms of the model parameters) shown in Equation (11) is expected to be convergent, since the convergence “ratio-test” value of the 3rd-order term with respect to the 2nd-order term of the Taylor series is 0.58, and the ratio of the 4th-order term with respect to the 3rd-order term is 0.68; both of these values are below 1.00. The numerical values of the computed and best-estimate quantities defined above, at each order of approximation, are presented in Table 1 for the case when the model parameters (which are the uncorrelated and normally distributed total microscopic cross sections) are known with high precision, all having relative standard deviations of 3%. The numerical results presented in Table 1 are depicted in Figure 1.
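The ratio test used in [16] can be sketched generically; the function name is an assumption, and the term magnitudes below are hypothetical values chosen only to reproduce the reported ratios of 0.58 and 0.68 for the 3% case, not the benchmark’s actual Taylor-term values.

```python
# Sketch of the convergence ratio test applied in [16] to the Taylor series of
# Equation (11): if successive term magnitudes shrink (all ratios below 1.00),
# the truncated series is expected to be convergent.
def ratio_test(term_magnitudes):
    """Return successive ratios |t_{n+1}| / |t_n| (name assumed)."""
    return [abs(b) / abs(a) for a, b in zip(term_magnitudes, term_magnitudes[1:])]

# Hypothetical term magnitudes reproducing the reported 3% ratios (0.58, 0.68):
terms_3pct = [1.00, 0.58, 0.58 * 0.68]
assert all(r < 1.0 for r in ratio_test(terms_3pct))
```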
The following conclusions follow from the results presented in Table 1 and Figure 1:
- (i) Since the standard deviation of the measured response overlaps with the standard deviation of the computed result already when considering just the first-order sensitivities, the measured response value can be considered to be consistent (as opposed to “inconsistent” or “discrepant”) with the computed nominal value of the response.
- (ii) As expected from Equation (18), the expected value of the computed leakage response coincides with the nominal value of the computed response only at first order, since the first-order sensitivities do not contribute to the expected value.
- (iii) As expected from Equations (18) and (19), both the expected value and the standard deviation of the computed response increase progressively as the contributions from the higher-order sensitivities are progressively taken into account.
- (iv) The best-estimate predicted response value is expected to fall between the measured and the computed values, being closer to the value which is more precisely known (i.e., least uncertain). These expectations are confirmed already when considering just the first-order sensitivities: the predicted value is very close to the experimentally measured value of the response, whose standard deviation is smaller than the standard deviation of the computed response.
- (v) As expected, the best-estimate predicted standard deviation is smaller than either the computed or the measured standard deviation. The best-estimate predicted standard deviation is very close to (albeit still smaller than) the standard deviation of the measured response, while being much smaller than the standard deviation of the computed leakage response.
3.1.2. Medium Precision Total Cross Sections (5% Relative Standard Deviations)
It has been shown in [16] that, for relative standard deviations of 5% for the model parameters, the ratio of the 3rd-order term with respect to the 2nd-order term of the Taylor series is 0.97 < 1.00, but the ratio of the 4th-order term with respect to the 3rd-order term of the Taylor series is 1.13 > 1.00. These ratios indicate that relative standard deviations of 5% for the model parameters are “borderline” values regarding the convergence (or non-convergence) of the Taylor series presented in Equation (11). On the other hand, a relative standard deviation of 5% is often encountered in measurements of total cross sections, so such standard deviations are representative of the “usual uncertainties encountered in practice”. This situation, therefore, underscores the impact of the higher-order response sensitivities to parameters when the parameter uncertainties are representative of uncertainties usually encountered in practice while also being “borderline” in terms of the convergence of the Taylor series that underlies the determination of the statistics (expected value, variance, etc.) of the distribution of the computed response in the phase-space of imprecisely known model parameters. The numerical values of the computed and best-estimate quantities defined above, at each order of approximation, are presented in Table 2 for the case when the model parameters (which are the uncorrelated and normally distributed total microscopic cross sections) are known with medium/average precision, all having relative standard deviations of 5%. The numerical results presented in Table 2 are depicted in Figure 2, below.
The following conclusions follow from the results presented in Table 2 and Figure 2:
- (i)
The results in Table 2 and Figure 2 display the same trends as the results presented in Table 1 and Figure 1, respectively, but those trends are significantly more accentuated regarding the progressive increase of the expected value and standard deviation of the computed response as the contributions from the higher-order sensitivities are progressively taken into account. This massive increase indicates the possible non-convergence of the Taylor-series expansion given in Equation (11), which underlies the computation of these quantities.
- (ii)
However, despite the massive increase in the computed expected value and standard deviation, both the best-estimate predicted response value and the best-estimate predicted standard deviation are very close to the corresponding experimentally measured values, thus demonstrating that the 2nd-BERRU-PM methodology correctly predicts best-estimate values even when the quantities that characterize the computed response may be unreliable because of the non-convergence of the respective mathematical expressions.
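The borderline-convergence diagnostic cited here from [16] — comparing the magnitudes of successive Taylor-series terms — can be sketched as follows. This is a minimal illustration under our own naming (the functions `successive_term_ratios` and `convergence_verdict` are hypothetical, not from [16]); the ratio values 0.97 and 1.13 are those reported above for the 5% case.

```python
def successive_term_ratios(terms):
    """Return the ratios |T_{n+1}| / |T_n| of successive Taylor-series terms."""
    return [abs(b) / abs(a) for a, b in zip(terms, terms[1:])]

def convergence_verdict(ratios):
    """Crude verdict: 'convergent' only if every successive-term ratio is < 1."""
    return "convergent" if all(r < 1.0 for r in ratios) else "non-convergent"

# Ratios reported in [16] for the terms of Equation (11):
print(convergence_verdict([0.97, 1.13]))  # 5% case ("borderline"): non-convergent
```

The same check applied to the 10% case discussed below (ratios 1.93 and 2.26) also returns "non-convergent", consistent with the series being used outside its radius of convergence there.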
3.1.3. Low Precision Total Cross Sections (10% Relative Standard Deviations)
It has been shown in [16] that, for a uniform relative standard deviation of 10% for all of the parameters, the ratio of the 3rd-order term to the 2nd-order term of the Taylor series is 1.93, while the ratio of the 4th-order term to the 3rd-order term is 2.26. Both of these ratios are larger than 1.00, which indicates that the Taylor series presented in Equation (11) is being used outside its radius of convergence to compute the response’s expected value and variance. The numerical values of the expected value and standard deviation of the computed response are presented in Table 3 and depicted in Figure 3 for the case when the model parameters (the uncorrelated and normally distributed total microscopic cross sections) are known with low precision, all having relative standard deviations of 10%.
The following conclusions follow from the results presented in Table 3 and Figure 3:
- (i)
The results in Table 3 and Figure 3 display the same trends as the results presented in Table 1 and Table 2 (and Figure 1 and Figure 2), respectively, but the trends displayed in Table 3 and Figure 3 are accentuated substantially more regarding the progressive increase of the expected value and standard deviation of the computed response as the contributions from the higher-order sensitivities are progressively taken into account. This massive increase confirms the non-convergence of the Taylor-series expansion given in Equation (11), which underlies the computation of these quantities.
- (ii)
Nevertheless, despite the massive increase in the computed expected value and standard deviation, both the best-estimate predicted response value and the best-estimate predicted standard deviation are very close to the corresponding experimentally measured values, thus demonstrating that the 2nd-BERRU-PM methodology correctly predicts best-estimate values even when the quantities that characterize the computed response may be unreliable because of the non-convergence of the respective mathematical expressions.
3.2. Leakage Response Measured with Low Precision (10% Relative Standard Deviation), but Parameters Measured with High Precision (3% Relative Standard Deviations)
The situation analyzed in this Section differs from the situation analyzed in Section 3.1.1 only in the precision of the measured leakage response: the relative standard deviation of the experimentally measured response is 10%, which is twice as large as the 5% relative standard deviation considered in Section 3.1.1. In other words, the situation analyzed in this Section involves a low-precision response measurement but high-precision parameters (relative standard deviations of 3%), thus enabling a direct quantification of the impact of the precision (low vs. medium) of the measured response. The numerical values obtained for this case are presented in Table 4 and depicted in Figure 4, respectively. These values can be compared directly to the corresponding values presented in Table 1 and depicted in Figure 1, respectively. This comparison shows that the values of the computed quantities are identical in the two tables, as expected. In both Table 1 and Table 4, the best-estimate predicted values of the leakage response are all very close to the experimentally measured value, indicating that the precision (low or medium) of the respective measurement affects the prediction of the 2nd-BERRU-PM methodology insignificantly. Only the predicted standard deviation of the best-estimate predicted leakage response is affected, as can be seen by comparing the results presented in the right-most column of Table 4 with the corresponding results reported in the right-most column of Table 1. For the low-precision measurement, the results in the right-most column of Table 4 indicate that the predicted standard deviation is approximately 1.563 × 10^5 neutrons/s, which is smaller than the corresponding measured standard deviation (1.600 × 10^5 neutrons/s) and much smaller than the computed ones (ranging from 5.548 × 10^5 to 1.847 × 10^6 neutrons/s). On the other hand, for the medium-precision measurement, the results in the right-most column of Table 1 indicate that the predicted standard deviation is approximately 7.918 × 10^4 neutrons/s, which is smaller than the corresponding measured standard deviation (8.00 × 10^4 neutrons/s) and much smaller than the computed ones (ranging from 5.548 × 10^5 to 1.847 × 10^6 neutrons/s, just as in Table 4). Thus, the reduction in the predicted standard deviation of the leakage response is larger when the precision of the measured response is higher. In all cases, however, the predicted standard deviation of the response is smaller than the respective initially computed and/or measured standard deviations.
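As a point of intuition only (not the actual 2nd-BERRU-PM expressions), the first-order predicted standard deviation in Table 1 is numerically reproduced by the elementary inverse-variance combination of the measured standard deviation (8.00 × 10^4 neutrons/s) with the first-order computed one (5.548 × 10^5 neutrons/s). The sketch below shows this elementary combination; the function name is our own.

```python
import math

def inverse_variance_sd(sd_computed, sd_measured):
    """Standard deviation of the precision-weighted (inverse-variance) combination
    of two independent estimates; it is always smaller than either input."""
    return math.sqrt(1.0 / (1.0 / sd_computed**2 + 1.0 / sd_measured**2))

# Medium-precision measurement vs. first-order computed SD (values from Table 1):
print(inverse_variance_sd(5.548e5, 8.00e4))  # ~7.918e4 neutrons/s
```

Applying the same combination to the low-precision case (1.600 × 10^5 with 5.548 × 10^5) gives approximately 1.54 × 10^5, slightly below the ≈1.563 × 10^5 reported in Table 4; the 2nd-BERRU-PM formulas are not literally this two-number rule, so only the qualitative behavior (a predicted standard deviation below both inputs) should be taken from this sketch.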
3.3. Nominally Computed Response Coincides with the Measured Response with Medium Precision (5% Relative Standard Deviation)
The situation analyzed in this Section involves the unusual but possible case when the measured value of the leakage response coincides with its computed value. In such a situation, the conventional data assimilation methods [1,2] fail. It is considered that the measurement has medium precision, and that the parameters are also known with medium precision, all having relative standard deviations of 5%. The results for this case are presented in Table 5 and depicted in Figure 5. They indicate that the 2nd-BERRU-PM procedure predicts a best-estimate response value that is very close to the measured one, with a predicted standard deviation that is also close to (albeit marginally smaller than, as expected) the measured standard deviation, since the measured standard deviation is smaller than the computed one.
4. Discussion
The results presented in this work have indicated that the nominal value of the computed response coincides with the expected value of the computed response only if all sensitivities higher than first order are ignored. Otherwise, the effects of the 2nd- and higher-order sensitivities cause the expected value of the computed response to become increasingly larger than the nominal computed value. Similarly, the standard deviation of the computed response increases as sensitivities of increasingly higher order are incorporated, as would logically be expected. However, this fact has no negative consequences once the 2nd-BERRU-PM methodology is applied to combine the computational results with the experimental results, since the 2nd-BERRU-PM methodology reduces the predicted best-estimate standard deviations to values that are smaller than both the computed and the experimentally measured standard deviations. The 2nd-BERRU-PM methodology also predicts that the best-estimate response value will fall between the expected value of the computed response and the experimentally measured value, being closer to the value that is more precisely known (i.e., has a smaller accompanying standard deviation). These properties of the 2nd-BERRU-PM methodology (i.e., the prediction of best-estimate results that fall between the corresponding computed and measured values, while the predicted standard deviations are reduced to values smaller than either the experimentally measured or the computed standard deviations) stem from the information-theoretical foundation of the 2nd-BERRU-PM methodology, which ensures that the incorporation of additional (consistent) information reduces the predicted uncertainties.
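The qualitative behavior described above — a best-estimate value between the computed and measured values, closer to the more precise one, with a predicted standard deviation below both — is shared by the elementary precision-weighted combination sketched below. The numbers are hypothetical, and the sketch omits everything that distinguishes 2nd-BERRU-PM from a textbook combination (sensitivities, parameter-response correlations, higher-order moments).

```python
import math

def combine(value_c, sd_c, value_m, sd_m):
    """Precision-weighted combination of a computed and a measured estimate.
    Illustrative only; not the 2nd-BERRU-PM formulas."""
    w_c, w_m = 1.0 / sd_c**2, 1.0 / sd_m**2          # weights = inverse variances
    best = (w_c * value_c + w_m * value_m) / (w_c + w_m)
    sd_best = math.sqrt(1.0 / (w_c + w_m))           # smaller than both sd_c and sd_m
    return best, sd_best

# Hypothetical numbers: computed 2.0e6 +/- 5.0e5, measured 1.5e6 +/- 1.0e5
best, sd_best = combine(2.0e6, 5.0e5, 1.5e6, 1.0e5)
print(best, sd_best)  # best lies between 1.5e6 and 2.0e6, closer to the measured value
```

When the measured and computed values coincide (as in Section 3.3), this combination leaves the value unchanged while the combined standard deviation is still reduced below both inputs, mirroring the behavior reported for Table 5.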
The situations analyzed in this work pertain to values of the measured responses which are consistent with the computed response values, in that the standard deviations of the measured responses overlap with the standard deviations of the computed responses. The situations that can arise when the measured values appear to be inconsistent with the computed values will be analyzed in an accompanying work [
17].