**3. Polynomial Chaos**

At any given instant in time, the deviation of the estimate from the truth can be approximated as a Gaussian distribution centered at the mean of the estimate. The space of these mean-centered Gaussians is known as a Gaussian linear space [11]; when that space is closed (i.e., the distributions have finite second moments), it forms a Gaussian Hilbert space, H. What is needed at this point is a way to quantify H, as it characterizes the uncertainty between the estimate and the truth. This can be achieved by projecting H onto a complete set of orthogonal polynomials whose basis functions are evaluated at a random variable *ξ* ∈ H. While the distribution at any point in time natively exists in H, its projection onto the set of orthogonal polynomials quantifies it by means of the ordered coordinates, as in Equation (1).

The homogeneous chaos [10] specifies *ξ* to be normally distributed with zero mean and unit variance (i.e., a standard Gaussian) and the orthogonal polynomials to be the Hermite polynomials, owing to their orthogonality with respect to the standard Gaussian pdf [47]. This construction is not limited to Gaussian processes: the Cameron–Martin theorem [48] guarantees that the expansion converges for any process with a finite second moment. Although the solution converges as the number of orthogonal polynomials increases, later work showed that, for different stochastic processes, certain basis functions yield faster convergence [16], leading to the more general polynomial chaos (PC).
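As a quick numerical illustration of this orthogonality (a sketch added here, not part of the original development), the following Python snippet uses NumPy's probabilists' Hermite (HermiteE) module and Gauss quadrature to verify that E[He*m*(ξ)He*n*(ξ)] = *n*! when *m* = *n* and 0 otherwise, for ξ ~ N(0, 1):

```python
# Minimal check (not from the paper) that the probabilists' Hermite
# polynomials He_k are orthogonal under the standard Gaussian pdf, with
# E[He_m(xi) He_n(xi)] = n! for m == n and 0 otherwise, xi ~ N(0, 1).
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

nodes, weights = He.hermegauss(20)    # Gauss quadrature for weight exp(-x^2 / 2)
weights = weights / sqrt(2.0 * pi)    # rescale the weight to the N(0, 1) density

def inner(m, n):
    """Quadrature approximation of E[He_m(xi) He_n(xi)] for xi ~ N(0, 1)."""
    cm = np.zeros(m + 1); cm[m] = 1.0  # coefficient vector selecting He_m
    cn = np.zeros(n + 1); cn[n] = 1.0  # coefficient vector selecting He_n
    return np.sum(weights * He.hermeval(nodes, cm) * He.hermeval(nodes, cn))

for m in range(4):
    for n in range(4):
        expected = float(factorial(n)) if m == n else 0.0
        assert abs(inner(m, n) - expected) < 1e-9
print("He_k orthogonal under N(0,1) with <He_k^2> = k!")
```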

To begin applying this method mathematically to a general stochastic process, let a stochastic variable, *ε*, be expressed as a linear combination over an infinite-dimensional vector space, i.e.,

$$\varepsilon(\mathbf{x}, \xi) = \sum_{k=0}^{\infty} \epsilon_k(\mathbf{x}) \Psi_k(\xi), \tag{9}$$

where $\epsilon_k(\mathbf{x})$ is the deterministic component and $\Psi_k(\xi)$ is the $k$th-order orthogonal basis function, evaluated at the random variable $\xi$ and orthogonal with respect to its probability density (weight) function. The polynomial families listed in Table 1 have been shown by Xiu [16] to provide convenient types of chaos based on their weight functions.
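To make Equation (9) concrete, one standard way to obtain the coefficients (an illustration here, not the procedure developed in this paper) is to project $\varepsilon$ onto each basis polynomial using the orthogonality above, i.e., $\epsilon_k = \langle \varepsilon \Psi_k \rangle / \langle \Psi_k^2 \rangle$. The sketch below applies this to the test case $\varepsilon = e^{\xi}$ with a Hermite basis, whose coefficients are known in closed form to be $e^{1/2}/k!$:

```python
# Sketch of spectral projection (assumed test case, not the paper's method):
# eps_k = E[eps(xi) He_k(xi)] / <He_k^2> for eps = exp(xi), xi ~ N(0, 1).
# The exact Hermite-chaos coefficients here are eps_k = e^(1/2) / k!.
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi, exp

nodes, weights = He.hermegauss(40)    # Gauss quadrature for weight exp(-x^2 / 2)
weights = weights / sqrt(2.0 * pi)    # rescale the weight to the N(0, 1) density

def coeff(k, f):
    """Project f(xi) onto He_k: E[f He_k] / k!, using <He_k^2> = k!."""
    c = np.zeros(k + 1); c[k] = 1.0
    return np.sum(weights * f(nodes) * He.hermeval(nodes, c)) / factorial(k)

numeric = [coeff(k, np.exp) for k in range(6)]
exact = [exp(0.5) / factorial(k) for k in range(6)]
print(np.allclose(numeric, exact))    # True: projection recovers the coefficients
```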

In general, the elements of the coordinate vector $(\epsilon_0, \epsilon_1, \ldots)$ are called the polynomial chaos coefficients. These coefficients hold deterministic information about the distribution of the random variable; for instance, the mean and variance of *ε* can be calculated easily as

$$\mathrm{E}[\varepsilon] = \mu_1 = \epsilon_0 \tag{10a}$$

$$\mathrm{E}\left[(\varepsilon - \mathrm{E}[\varepsilon])^2\right] = \mu_2 = \sum_{k=1}^{\infty} \epsilon_k^2 \langle \Psi_k^2 \rangle_{p(\xi)}, \tag{10b}$$

where $\mathrm{E}[\,\cdot\,]$ denotes the expected value and $\langle \cdot \rangle_{p(\xi)}$ denotes the inner product taken with respect to the probability density $p(\xi)$ of $\xi$.
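As a worked toy example of Equations (10a) and (10b) (not a case from the paper): for $\xi \sim N(0,1)$, the variable $\varepsilon = \xi^2$ has the exact Hermite expansion $\xi^2 = \mathrm{He}_0(\xi) + \mathrm{He}_2(\xi)$, so $\epsilon_0 = \epsilon_2 = 1$ and all other coefficients vanish. The moments then follow directly from the coefficients, with $\langle \mathrm{He}_k^2 \rangle = k!$:

```python
# Worked toy example (not from the paper) of Eqs. (10a)-(10b): for xi ~ N(0,1),
# eps = xi^2 expands exactly as He_0(xi) + He_2(xi), i.e., eps_0 = eps_2 = 1.
import numpy as np
from math import factorial

coeffs = {0: 1.0, 2: 1.0}                  # nonzero Hermite-chaos coefficients
mean = coeffs.get(0, 0.0)                  # Eq. (10a): mu_1 = eps_0
var = sum(c**2 * factorial(k)              # Eq. (10b) with <He_k^2> = k!
          for k, c in coeffs.items() if k > 0)

# Monte Carlo cross-check
xi = np.random.default_rng(0).standard_normal(1_000_000)
print(mean, var)                           # 1.0, 2.0
print((xi**2).mean(), (xi**2).var())       # ~1.0, ~2.0
```

Both pairs of numbers agree: the mean is $\epsilon_0 = 1$ and the variance is $\epsilon_2^2 \cdot 2! = 2$.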

Now, let *ε* be an *n*-dimensional vector. Each of the *n* elements of *ε* is expanded separately; therefore, Equation (9) carries over element-wise to the vector form

$$\boldsymbol{\varepsilon}(\mathbf{x},\boldsymbol{\xi}) = \begin{bmatrix} \varepsilon^{(1)}(\mathbf{x}^{(1)},\boldsymbol{\xi}) \\ \varepsilon^{(2)}(\mathbf{x}^{(2)},\boldsymbol{\xi}) \\ \vdots \\ \varepsilon^{(n)}(\mathbf{x}^{(n)},\boldsymbol{\xi}) \end{bmatrix} = \begin{bmatrix} \sum_{k=0}^{\infty} \epsilon_k^{(1)}(\mathbf{x}^{(1)}) \Psi_k(\boldsymbol{\xi}) \\ \sum_{k=0}^{\infty} \epsilon_k^{(2)}(\mathbf{x}^{(2)}) \Psi_k(\boldsymbol{\xi}) \\ \vdots \\ \sum_{k=0}^{\infty} \epsilon_k^{(n)}(\mathbf{x}^{(n)}) \Psi_k(\boldsymbol{\xi}) \end{bmatrix}.$$

Because each element is expanded independently, the mean and variance calculations of Equations (10a) and (10b) likewise carry over element-wise without change. In addition to the mean and variance, the correlation between two random variables is commonly desired. With the chaos coefficients estimated for each random variable and the polynomial basis known, correlation terms such as covariance can be estimated, as in the sketch below.
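Concretely, when two components share the same basis and the same underlying random variable $\boldsymbol{\xi}$, orthogonality gives $\mathrm{Cov}(\varepsilon^{(i)}, \varepsilon^{(j)}) = \sum_{k=1}^{\infty} \epsilon_k^{(i)} \epsilon_k^{(j)} \langle \Psi_k^2 \rangle$. The following sketch (with arbitrary toy coefficients, not values from the paper) evaluates this for a Hermite basis and cross-checks it against sampling:

```python
# Sketch (toy coefficients, shared germ xi ~ N(0,1)): covariance of two
# components expanded in the same Hermite basis follows from orthogonality as
# Cov = sum_{k>=1} eps_k_i * eps_k_j * <He_k^2>, where <He_k^2> = k!.
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial

eps_i = np.array([0.5, 2.0, 0.3])     # chaos coefficients of component i (k = 0, 1, 2)
eps_j = np.array([1.0, -1.0, 0.7])    # chaos coefficients of component j

norms = np.array([factorial(k) for k in range(3)])   # <He_k^2> = k!
cov = np.sum(eps_i[1:] * eps_j[1:] * norms[1:])      # k = 0 terms carry only the means

# Monte Carlo cross-check: evaluate both expansions at shared samples of xi
xi = np.random.default_rng(1).standard_normal(1_000_000)
print(cov)                                           # -1.58
print(np.cov(He.hermeval(xi, eps_i), He.hermeval(xi, eps_j))[0, 1])
```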
