*3.2. Coefficient Calculation*

The two most common methods of solving Equation (9) for the chaos coefficients are sampling-based and projection-based. The first, and most common, approach requires truncating the infinite summation in Equation (9) to yield

$$\varepsilon(\mathbf{x}, \boldsymbol{\xi}) = \sum\_{k=0}^{N} \varepsilon\_k(\mathbf{x}) \Psi\_k(\boldsymbol{\xi}) \, , \tag{18}$$

where the truncation limit *N*, which depends on the dimension of the state *n* and the highest polynomial order *p*, is given by

$$N+1 = \frac{(n+p)!}{n!p!}$$
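As a quick sanity check on this count, the following sketch computes the number of retained terms directly from the factorial formula (the function name `num_terms` is ours, not from the text):

```python
from math import factorial

def num_terms(n: int, p: int) -> int:
    """Number of chaos coefficients N + 1 = (n + p)! / (n! p!)."""
    return factorial(n + p) // (factorial(n) * factorial(p))

# A 3-dimensional state expanded with polynomials up to order 2
# requires (3 + 2)! / (3! 2!) = 10 coefficients.
print(num_terms(3, 2))  # -> 10
```

The count grows combinatorially in both *n* and *p*, which is why the truncation order is kept low in practice.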

Drawing *Q* samples of *ξ*, where *Q* > *N*, and evaluating Ψ*k* and *ε* at these points amounts to randomly sampling *ε* directly. After this initial sampling, *ε* can be transformed in **x** (commonly, **x** is taken to be time, so the transformation corresponds to propagating the variable forward in time), which yields a system of *Q* equations in the *N* + 1 unknown coefficients that describes the pdf of *ε* after the transformation:

$$\begin{aligned} \varepsilon(\mathbf{x}, \boldsymbol{\xi}\_1) &= \varepsilon\_0(\mathbf{x}) \Psi\_0(\boldsymbol{\xi}\_1) + \varepsilon\_1(\mathbf{x}) \Psi\_1(\boldsymbol{\xi}\_1) + \dots + \varepsilon\_N(\mathbf{x}) \Psi\_N(\boldsymbol{\xi}\_1) \\ \varepsilon(\mathbf{x}, \boldsymbol{\xi}\_2) &= \varepsilon\_0(\mathbf{x}) \Psi\_0(\boldsymbol{\xi}\_2) + \varepsilon\_1(\mathbf{x}) \Psi\_1(\boldsymbol{\xi}\_2) + \dots + \varepsilon\_N(\mathbf{x}) \Psi\_N(\boldsymbol{\xi}\_2) \\ &\vdots \\ \varepsilon(\mathbf{x}, \boldsymbol{\xi}\_Q) &= \varepsilon\_0(\mathbf{x}) \Psi\_0(\boldsymbol{\xi}\_Q) + \varepsilon\_1(\mathbf{x}) \Psi\_1(\boldsymbol{\xi}\_Q) + \dots + \varepsilon\_N(\mathbf{x}) \Psi\_N(\boldsymbol{\xi}\_Q) \end{aligned}$$

This overdetermined system can be solved using a least-squares approximation. The resulting coefficients can then be used to compute convenient statistics of *ε* (e.g., central and raw moments).
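The least-squares step above can be sketched as follows. This is a minimal illustration, assuming a one-dimensional standard-normal germ *ξ* with the probabilists' Hermite polynomials as the basis; the test response `eps`, with exact expansion He₀ + 2 He₁ + 3 He₂, is our own choice and not from the text:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(0)

N = 4                       # highest polynomial order (N + 1 unknowns)
Q = 200                     # number of samples, Q > N
xi = rng.standard_normal(Q)

# Hypothetical response with exact expansion eps = He_0 + 2 He_1 + 3 He_2.
eps = 1.0 + 2.0 * xi + 3.0 * (xi**2 - 1.0)

# Q x (N + 1) design matrix: row q holds Psi_0(xi_q), ..., Psi_N(xi_q).
Psi = hermevander(xi, N)
coeffs, *_ = np.linalg.lstsq(Psi, eps, rcond=None)

print(np.round(coeffs, 6))  # approximately [1, 2, 3, 0, 0]
```

With the coefficients in hand, the moments follow directly: for this basis the mean is *ε*₀ and the variance is the sum of *ε*ₖ²⟨Heₖ²⟩ for *k* ≥ 1, with ⟨Heₖ²⟩ = *k*! under a standard normal.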

While the sampling-based method is more practical to apply, the projection-based method does not depend on sampling the underlying distribution. Projecting *ε* onto the *j*th basis polynomial yields

$$\left< \varepsilon(\mathbf{x}, \xi), \Psi\_j(\xi) \right>\_{p(\xi)} = \left< \sum\_{k=0}^{\infty} \varepsilon\_k(\mathbf{x}) \Psi\_k(\xi), \Psi\_j(\xi) \right>\_{p(\xi)}.$$

The inner product is taken with respect to the variable *ξ*; therefore, each coefficient *εk*(**x**) acts as a scalar. Because the inner product is linear in its first argument, the coefficients can be pulled out of the inner product unchanged, that is,

$$\left< \varepsilon(\mathbf{x}, \xi), \Psi\_j(\xi) \right>\_{p(\xi)} = \sum\_{k=0}^{\infty} \varepsilon\_k(\mathbf{x}) \left< \Psi\_k(\xi), \Psi\_j(\xi) \right>\_{p(\xi)}.\tag{19}$$


In contrast, if the summation is an element of the second argument, the linearity condition still holds; however, the coefficients incur a complex conjugate. Recall the basis polynomials are generally chosen to be orthogonal, so the right-hand inner product of Equation (19) reduces to the scaled Kronecker delta, resulting in

$$\begin{aligned} \left< \varepsilon(\mathbf{x}, \xi), \Psi\_j(\xi) \right>\_{p(\xi)} &= \sum\_{k=0}^{\infty} \varepsilon\_k(\mathbf{x}) \left< \Psi\_k(\xi), \Psi\_j(\xi) \right>\_{p(\xi)} \\ &= \sum\_{k=0}^{\infty} \varepsilon\_k(\mathbf{x}) \, c \, \delta\_{kj} \,. \end{aligned}$$

This leaves only the *j*th term, with the constant $c = \left< \Psi\_j^2(\xi) \right>\_{p(\xi)}$, and solving for the coefficient gives

$$\varepsilon\_j(\mathbf{x}) = \frac{\left< \varepsilon(\mathbf{x}, \xi), \Psi\_j(\xi) \right>\_{p(\xi)}}{\left< \Psi\_j^2(\xi) \right>\_{p(\xi)}}\tag{20a}$$

$$\varepsilon\_j(\mathbf{x}) = \frac{\int\_{\Xi} \varepsilon(\mathbf{x}, \xi) \Psi\_j(\xi) \, dp(\xi)}{\int\_{\Xi} \Psi\_j^2(\xi) \, dp(\xi)},\tag{20b}$$

which almost always requires numeric approximation.
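One common numerical approximation of Equation (20b) uses Gaussian quadrature matched to the density *p*(*ξ*). The sketch below assumes the same one-dimensional standard-normal setting as before (probabilists' Hermite basis, Gauss–Hermite nodes); the test response *ε*(*ξ*) = *ξ*² is our own illustrative choice:

```python
import numpy as np
from math import factorial, sqrt, pi
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Gauss-Hermite quadrature for <f>_{p(xi)} with xi ~ N(0, 1).
nodes, weights = hermegauss(20)
weights = weights / sqrt(2.0 * pi)   # normalize so the weights sum to 1

def project(eps_fn, j):
    """eps_j = <eps, Psi_j>_p / <Psi_j^2>_p, Equation (20a)/(20b)."""
    basis = [0.0] * j + [1.0]        # coefficient vector selecting He_j
    Psi_j = hermeval(nodes, basis)
    numerator = np.sum(weights * eps_fn(nodes) * Psi_j)
    denominator = factorial(j)       # <He_j^2>_p = j! for a standard normal
    return numerator / denominator

# Hypothetical response eps(xi) = xi^2, i.e., He_0 + He_2 exactly.
eps_fn = lambda x: x**2
coeffs = [project(eps_fn, j) for j in range(4)]
# The first four coefficients are approximately [1, 0, 1, 0].
```

Because the quadrature is exact for polynomial integrands up to degree 2·20 − 1, the projection here recovers the expansion of a polynomial response to machine precision; non-polynomial responses incur the usual quadrature error.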
