*Article* **Approximating the Density of Random Differential Equations with Weak Nonlinearities via Perturbation Techniques**

**Juan-Carlos Cortés †, Elena López-Navarro †, José-Vicente Romero † and María-Dolores Roselló \*,†**

Instituto Universitario de Matemática Multidisciplinar, Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain; jccortes@imm.upv.es (J.-C.C.); ellona1@upvnet.upv.es (E.L.-N.); jvromero@imm.upv.es (J.-V.R.)

**\*** Correspondence: drosello@imm.upv.es

† These authors contributed equally to this work.

**Abstract:** We combine the stochastic perturbation method with the maximum entropy principle to construct approximations of the first probability density function of the steady-state solution of a class of nonlinear oscillators subject to small perturbations in the nonlinear term and driven by a stochastic excitation. The nonlinearity depends both upon position and velocity, and the excitation is given by a stationary Gaussian stochastic process with certain additional properties. Furthermore, we approximate higher-order moments, the variance, and the correlation functions of the solution. The theoretical findings are illustrated via some numerical experiments that confirm that our approximations are reliable.

**Keywords:** stochastic perturbations; random nonlinear oscillator; maximum entropy principle; probability density function; stationary Gaussian noise

#### **1. Introduction and Motivation**

The analysis of stochastic perturbations in nonlinear dynamical systems is a hot topic in applied mathematics [1,2], with many applications in apparently different areas such as control [3] and economics [4], and especially in dealing with nonlinear vibratory systems. Systems subject to vibrations are encountered, for example, in Physics (in the analysis of different types of oscillators) and in Engineering (in the analysis of road vehicles and of the response of structures to earthquake excitations or to sea waves). The nature of the vibrations in this type of system is usually random because they are spawned by complex factors that are not known deterministically but are statistically characterized via measurements that often contain errors and uncertainties. Although oscillators in Physics and Engineering have been extensively studied in the deterministic case [5,6], and particularly in the nonlinear case [7–9], the above-mentioned facts make a stochastic analysis more suitable, since it provides a better understanding of their dynamics.

Many vibratory systems are governed by differential equations with small nonlinear terms of the following form,

$$
\ddot{X}(t) + \beta \dot{X}(t) + \omega\_0^2 \left( X(t) + \epsilon\, g(X(t)) \right) = \mathcal{Y}(t), \quad t > 0. \tag{1}
$$

Here, *X*(*t*) denotes the position (usually the angle with respect to an origin) of the oscillatory system at the time instant *t*; the parameter *β* is given by *β* := 2*ζω*₀, where *ζ* is the damping constant and *ω*₀ > 0 is the undamped angular frequency; and *ε* is a small perturbation (|*ε*| ≪ 1) affecting a nonlinear function of the position, *g*(*X*(*t*)). The expression *X*(*t*) + *ε g*(*X*(*t*)) is referred to as the nonlinear restoring term. The right-hand side term, *Y*(*t*), stands for an external source/forcing term (vibration) acting upon the system. In the setting of random vibration systems, *Y*(*t*) is assumed to be a stochastic process, termed the stochastic excitation, with certain characteristics that will be specified later.

**Citation:** Cortés, J.-C.; López-Navarro, E.; Romero, J.-V.; Roselló, M.-D. Approximating the Density of Random Differential Equations with Weak Nonlinearities via Perturbation Techniques. *Mathematics* **2021**, *9*, 204. https://doi.org/10.3390/math9030204

Received: 27 December 2020 Accepted: 15 January 2021 Published: 20 January 2021


**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

Notice that the nonlinear restoring term in Equation (1) involves the parameter *ε*, which determines the magnitude of the nonlinear perturbation, whose shape is given by *g*(*X*(*t*)). When *ε* = 0, Equation (1) describes a random linear oscillator. In [10], the authors analyze this class of oscillators considering two cases for the stochastic source term *Y*(*t*): first, when it is Gaussian and, second, when it can be represented via a Karhunen–Loève expansion. When *ε* ≠ 0, the inclusion of the nonlinear term makes it more difficult (or even impossible) to solve Equation (1) exactly. An effective method to construct reliable approximations of the solution of Equation (1) when *ε* represents a small parameter is the perturbation technique [11–15]. In the stochastic setting, this method has been successfully applied to study different types of oscillators subject to random vibrations. After the pioneering contributions by Crandall [16,17], the analysis of random vibration systems has attracted many researchers (see, for instance, [15,18,19] for a full overview of this topic). In [20], approximations of quadratic and cubic nonlinear oscillators subject to white noise excitations are constructed by combining the Wiener–Hermite expansion and the homotopy perturbation technique. The aforementioned approximations correspond to the first statistical moments (mean and variance) because, as the authors indicate in their introduction, the probability density function (PDF) is usually very difficult to obtain. In [21], the authors extend the previous analysis to compute higher-order statistical moments of the oscillator response in the case where the nonlinearity is quadratic. The previous methodology is extended and algorithmically automated in [22].
In [23], the author considers the interesting scenario of a harmonic oscillator with a random mass and analyses important dynamic characteristics such as stochastic stability and resonance phenomena. To conduct that study, a new type of Brownian motion is introduced. The perturbation technique has also been used to approximate the first moments, mainly the mean and the variance, of some oscillators subject to small nonlinearities. The computational procedures of this method often require amendments to the existing solution codes, so it is classified as an intrusive method. A spectral technique that allows overcoming this drawback is the non-intrusive polynomial chaos expansion (PCE), in which simulations are used as black boxes and the calculation of the chaos expansion coefficients for the response metrics of interest is based on a set of simulation response evaluations. In the recent paper [24], the authors design an interesting hybrid non-intrusive procedure that combines PCE with the Chebyshev surrogate method to analyze a number of uncertain physical parameters and the corresponding transient responses of a rotating system.

Besides computing the first statistical moments of the response or performing a stability analysis of systems under stochastic vibrations, we must emphasize that the computation of the finite-dimensional distributions (usually termed "fidis") associated with the stationary solution, and particularly of the stationary PDF, is also a major goal in the realm of vibratory systems with uncertainties. Some interesting contributions in this regard include [25,26]. In [25], the authors first present a complete overview of methods and techniques available to determine the stationary PDF of nonlinear oscillators excited by random functions. Second, nonlinear stochastic oscillators excited by a combination of Gaussian and Poisson white noises are fully analyzed. The study is based on solving the forward generalized Kolmogorov partial differential equation (PDE) using the exponential-polynomial closure method. The theoretical analysis is accompanied by several illustrative examples. In the recent contribution [26], the authors propose a new method to compute a closed-form solution of the stationary PDF of single-degree-of-freedom vibro-impact systems under Gaussian white noise excitation. The density is obtained by solving the Fokker–Planck–Kolmogorov PDE using the iterative method of weighted residue combined with the concepts of circulatory and potential probability flows. Apart from obtaining the density of the solutions, it is worth pointing out that some recent contributions also determine the densities of key quantities from Reliability Theory, like the first-passage time, for vibro-impact systems with randomly fluctuating restoring and damping terms (see [27] and references therein).

In this paper, we address the study of random cross-nonlinear oscillators subject to small perturbations affecting the nonlinear term, *g*, which depends on both the position, *X*(*t*), and the velocity, *Ẋ*(*t*),

$$
\ddot{X}(t) + 2\zeta\omega\_0 \dot{X}(t) + \epsilon\, g(X(t), \dot{X}(t)) + \omega\_0^2 X(t) = \mathcal{Y}(t). \tag{2}
$$

Here, the stochastic derivatives are understood in the mean square sense [28] (Chapter 4). In our subsequent analysis, we will consider the case *g*(*X*(*t*), *Ẋ*(*t*)) = *X*²(*t*)*Ẋ*(*t*), where the excitation *Y*(*t*) is a mean square differentiable, stationary, zero-mean Gaussian stochastic process whose correlation function, Γ*YY*(*τ*), is known. On the other hand, assuming that *Y*(*t*) is a stationary Gaussian stochastic process is rather natural, and this assumption has been extensively used in both theoretical and practical studies [29,30]. Stationarity means that the statistical properties of the process do not vary significantly over time/space. This feature is met in a number of modeling problems, such as the surface of the sea in both spatial and time coordinates, noise over time in electric circuits under steady-state operation, and homogeneous impurities in engineering materials and media [28] (Chapter 3).
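Realizations of such an excitation can be sampled on a time grid by factorizing its covariance matrix. The following sketch shows one standard way to do so; the squared-exponential correlation Γ*YY*(*τ*) = e^(−*τ*²), the grid, and the ensemble size are hypothetical choices for illustration only.

```python
import numpy as np

# Sample a stationary, zero-mean Gaussian process with a prescribed
# correlation function via Cholesky factorization of its covariance matrix.
# The squared-exponential correlation below is a hypothetical choice; any
# correlation yielding a mean square differentiable process would do.
rng = np.random.default_rng(1)

def gamma_yy(tau):
    return np.exp(-tau**2)          # Gamma_YY(tau), even in tau as required

t = np.linspace(0.0, 10.0, 201)
C = gamma_yy(t[:, None] - t[None, :]) + 1e-8 * np.eye(t.size)  # jitter for PD
L = np.linalg.cholesky(C)

paths = L @ rng.standard_normal((t.size, 5000))  # 5000 realizations of Y(t)

# Ensemble statistics: zero mean, and correlation depending only on the lag
mean_err = np.abs(paths.mean(axis=1)).max()
corr_est = np.mean(paths[0] * paths[10])         # estimated Gamma_YY(t[10])
corr_th = gamma_yy(t[10] - t[0])                 # = exp(-0.25)
```

Each column of `paths` is one realization; the ensemble mean is approximately zero and the lagged products reproduce Γ*YY*, as stationarity requires.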

Now, we list the main novelties of our contribution.


To the best of our knowledge, this is the first time that stochastic nonlinear oscillators with the above-described type of cross-nonlinearities are studied using our approach, i.e., combining mean square calculus and the stochastic perturbation method. In this sense, we think that our approach may be useful to extend our study to stochastic nonlinear oscillators having more general cross-nonlinearities, in particular of the form *g*(*X*(*t*), *Ẋ*(*t*)) = *X*<sup>*n*</sup>(*t*)*Ẋ*<sup>*m*</sup>(*t*), for *n* ≥ 3 and *m* ≥ 2.

The paper is organized as follows. In Section 2, we introduce the auxiliary stochastic results that will be used throughout the whole paper. This section is intended to help the reader better understand the technical aspects of the paper. Section 3 is divided into two parts. In Section 3.1, we apply the perturbation technique to construct a first-order approximation of the stationary solution stochastic process of model (2) with *g*(*X*(*t*), *Ẋ*(*t*)) = *X*²(*t*)*Ẋ*(*t*). In Section 3.2, we determine expressions for the first higher-order moments, the variance, the covariance, and the correlation of the aforementioned first-order approximation. These expressions will be given in terms of certain integrals of the correlation function of the Gaussian noise, *Y*(*t*), and of the classical impulse response function of the linearized oscillator associated with Equation (2). In Section 4, we take advantage of the results given in Section 3 to construct reliable approximations of the PDF of the stationary solution using the principle of maximum entropy. In Section 5, we illustrate all the theoretical findings by means of several illustrative examples. Our numerical results are compared with Monte Carlo simulations and with the application of the Euler–Maruyama numerical scheme, showing full agreement. Conclusions are drawn in Section 6.

#### **2. Stochastic Preliminaries**

For the sake of completeness, in this section we will introduce some technical stochastic results that will be required throughout the paper.

Hereinafter, we will work on a complete probability space (Ω, F, P), i.e., Ω is a sample space; F is a *σ*-algebra of subsets of Ω, usually called events; and P is a probability measure. To simplify the notation, we will omit the sample dependence, so the input and the solution stochastic processes involved in Equation (2) will be denoted by *Y*(*t*) ≡ {*Y*(*t*) : *t* ≥ 0} and *X*(*t*) ≡ {*X*(*t*) : *t* ≥ 0}, respectively, rather than {*Y*(*t*; *ω*) : *t* ≥ 0, *ω* ∈ Ω} and {*X*(*t*; *ω*) : *t* ≥ 0, *ω* ∈ Ω}.

The following result will be applied to calculate some higher-order moments of the solution stochastic process, *X*(*t*), of the random differential Equation (2) since, as will be seen later, *X*(*t*) depends on a product of the stochastic excitation, *Y*(*t*), evaluated at a finite number of instants, say *t*₁, *t*₂, ..., *tₙ*, with *Y*(*tᵢ*) = *Yᵢ*, 1 ≤ *i* ≤ *n*.

**Proposition 1** (p. 28, [28])**.** *Let the random variables Y*₁, *Y*₂, ... , *Yₙ* *be jointly Gaussian with zero mean,* E{*Yᵢ*} = 0*,* 1 ≤ *i* ≤ *n. Then, all odd-order moments of these random variables vanish and, for n even,*

$$\mathbb{E}\{Y\_1 Y\_2 \cdots Y\_n\} = \sum\_{m\_1, m\_2, \dots, m\_n} \mathbb{E}\{Y\_{m\_1} Y\_{m\_2}\} \mathbb{E}\{Y\_{m\_3} Y\_{m\_4}\} \cdots \mathbb{E}\{Y\_{m\_{n-1}} Y\_{m\_n}\}.$$

*The sum above is taken over all possible combinations of n*/2 *pairs of the n random variables. The number of terms in the summation is* 1 · 3 · 5 · · · (*n* − 3) · (*n* − 1)*.*
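Proposition 1 (often called Isserlis' or Wick's theorem) can be illustrated numerically. The sketch below, our own check with a hypothetical 4 × 4 covariance matrix, compares a Monte Carlo estimate of E{*Y*₁*Y*₂*Y*₃*Y*₄} with the sum over the 1 · 3 = 3 pairings, and also checks that an odd-order moment vanishes.

```python
import numpy as np

# Monte Carlo illustration of Proposition 1 for n = 4, with a hypothetical
# covariance matrix C:
#   E{Y1 Y2 Y3 Y4} = C12*C34 + C13*C24 + C14*C23   (1*3 = 3 pairings).
C = np.array([[1.0, 0.3, 0.2, 0.1],
              [0.3, 1.0, 0.2, 0.1],
              [0.2, 0.2, 1.0, 0.1],
              [0.1, 0.1, 0.1, 1.0]])

# Theoretical value from the three pairings of {1, 2, 3, 4}
pairings = [((0, 1), (2, 3)), ((0, 2), (1, 3)), ((0, 3), (1, 2))]
theory = sum(C[a, b] * C[c, d] for (a, b), (c, d) in pairings)

rng = np.random.default_rng(42)
Y = rng.multivariate_normal(np.zeros(4), C, size=1_000_000)
estimate = np.mean(Y[:, 0] * Y[:, 1] * Y[:, 2] * Y[:, 3])
odd_moment = np.mean(Y[:, 0] * Y[:, 1] * Y[:, 2])   # odd order: should vanish
```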

The two following results permit interchanging the expectation operator with the mean square derivative and with the mean square integral, respectively. In [28] (Equation (4.130) in Section 4.4.2), the first result is established for *n* = 2; the general case then follows straightforwardly by induction.

**Proposition 2.** *Let* {*Y*(*t*) : *t* ≥ 0} *be a mean square differentiable stochastic process. Then,*

$$\mathbb{E}\{\boldsymbol{Y}(t\_1)\cdot\cdot\cdot\boldsymbol{Y}(t\_{n-1})\dot{\boldsymbol{Y}}(t\_n)\} = \frac{\partial}{\partial t\_n}(\mathbb{E}\{\boldsymbol{Y}(t\_1)\cdot\cdot\cdot\boldsymbol{Y}(t\_n)\}), \quad t\_1,\dots,t\_n \ge 0,$$

*provided the above expectations exist.*

**Proposition 3** (p. 104, [28])**.** *Let* {*Y*(*t*) : −∞ ≤ *a* ≤ *t* ≤ *b* ≤ +∞} *be a second-order stochastic process, integrable in the mean square sense, and let h*(*t*) *be a Riemann integrable deterministic function on t* ∈ (*a*, *b*)*. Then,*

$$\mathbb{E}\left\{\int\_{a}^{b} h(t)\mathcal{Y}(t)\,\mathrm{d}t\right\} = \int\_{a}^{b} h(t)\mathbb{E}\{\mathcal{Y}(t)\}\,\mathrm{d}t.$$

The following result states a distinctive property of Gaussian processes: Gaussianity is preserved under mean square integration.

**Proposition 4** (p. 112, [28])**.** *Let* {*Y*(*t*) : *a* ≤ *t* < ∞} *be a Gaussian process and let h*(*t*, *τ*) *be a Riemann integrable deterministic function on* (*a*, *t*) *such that the following mean square integral,*

$$X(t) = \int\_{a}^{t} h(t, \tau) Y(\tau) \,\mathrm{d}\tau,$$

*exists. Then,* {*X*(*t*) : *t* ≥ *a*} *is a Gaussian process.*

#### **3. Probabilistic Model Study**

As indicated in Section 1, in this paper we will study, from a probabilistic standpoint, the random cross-nonlinear oscillator

$$
\ddot{X}(t) + 2\zeta\omega\_0 \dot{X}(t) + \epsilon X^2(t)\dot{X}(t) + \omega\_0^2 X(t) = \mathcal{Y}(t). \tag{3}
$$

The analysis will be divided into two steps. First, in Section 3.1 we will apply the perturbation technique to obtain an approximation, *X̂*(*t*), of the stationary solution stochastic process, *X*(*t*). Then, in Section 3.2 we will take advantage of *X̂*(*t*) to determine reliable approximations of the main statistical functions of *X*(*t*), namely, the first higher-order moments, E{*Xⁿ*(*t*)}, *n* = 1, ... , 5; the variance, V{*X*(*t*)}; the covariance, Cov{*X*(*t*₁), *X*(*t*₂)}; and the correlation, Γ*XX*(*τ*).

#### *3.1. Perturbation Technique*

Let us consider Equation (3). The main idea of the stochastic perturbation technique is to assume that the solution *X*(*t*) can be expanded in powers of the small parameter *ε* (|*ε*| ≪ 1),

$$X(t) = X\_0(t) + \epsilon X\_1(t) + \epsilon^2 X\_2(t) + \dotsb \tag{4}$$

Substituting expansion (4) into Equation (3) and collecting the powers of *ε* yields the following sequence of linear differential equations with random inputs,

$$\begin{array}{ccl} \epsilon^{0} & : & \ddot{X}\_{0}(t) + 2\zeta\omega\_{0}\dot{X}\_{0}(t) + \omega\_{0}^{2}X\_{0}(t) = \mathcal{Y}(t),\\ \epsilon^{1} & : & \ddot{X}\_{1}(t) + 2\zeta\omega\_{0}\dot{X}\_{1}(t) + \omega\_{0}^{2}X\_{1}(t) = -X\_{0}^{2}(t)\dot{X}\_{0}(t),\\ \epsilon^{2} & : & \ddot{X}\_{2}(t) + 2\zeta\omega\_{0}\dot{X}\_{2}(t) + \omega\_{0}^{2}X\_{2}(t) = -2X\_{0}(t)X\_{1}(t)\dot{X}\_{0}(t) - X\_{0}^{2}(t)\dot{X}\_{1}(t),\\ \vdots & & \vdots \end{array} \tag{5}$$

Notice that each equation can be solved in cascade. As usual, when applying the perturbation technique, we take the first-order approximation

$$\hat{X}(t) = X\_0(t) + \epsilon X\_1(t). \tag{6}$$

This entails that, in our subsequent developments, we will only need the first two equations listed in (5).
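The cascade can be seen at work on a deterministic analogue of (3): the full nonlinear oscillator is integrated numerically and compared with the first-order approximation X₀(*t*) + *ε*X₁(*t*) obtained by solving the *ε*⁰ and *ε*¹ equations in sequence. The following sketch uses a hand-written RK4 integrator; the parameter values (ζ = 0.2, ω₀ = 1, ε = 0.01) and the deterministic forcing y(t) = cos(1.3t) are hypothetical choices.

```python
import numpy as np

# Deterministic illustration of the perturbation cascade (5): the full
# nonlinear oscillator is compared with X0 + eps*X1, where X0 and X1 solve
# the eps^0 and eps^1 equations in cascade. Parameters and forcing are
# hypothetical choices for this sketch.
zeta, w0, eps = 0.2, 1.0, 0.01
y = lambda s: np.cos(1.3 * s)

def rk4(f, x0, t):
    """Classical fourth-order Runge-Kutta integrator on the grid t."""
    x = np.zeros((t.size, x0.size))
    x[0] = x0
    for i in range(t.size - 1):
        dt = t[i + 1] - t[i]
        k1 = f(t[i], x[i])
        k2 = f(t[i] + dt / 2, x[i] + dt / 2 * k1)
        k3 = f(t[i] + dt / 2, x[i] + dt / 2 * k2)
        k4 = f(t[i] + dt, x[i] + dt * k3)
        x[i + 1] = x[i] + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

t = np.linspace(0.0, 20.0, 4001)

# Full nonlinear model, written as a first-order system u = (X, Xdot)
f_full = lambda s, u: np.array(
    [u[1], y(s) - 2 * zeta * w0 * u[1] - eps * u[0] ** 2 * u[1] - w0 ** 2 * u[0]])
X_full = rk4(f_full, np.zeros(2), t)[:, 0]

# eps^0 equation: linear oscillator driven by y(t)
f0 = lambda s, u: np.array([u[1], y(s) - 2 * zeta * w0 * u[1] - w0 ** 2 * u[0]])
sol0 = rk4(f0, np.zeros(2), t)
X0, X0d = sol0[:, 0], sol0[:, 1]

# eps^1 equation: same linear operator, forced by -X0^2 * X0dot (interpolated)
force1 = -X0 ** 2 * X0d
f1 = lambda s, u: np.array(
    [u[1], np.interp(s, t, force1) - 2 * zeta * w0 * u[1] - w0 ** 2 * u[0]])
X1 = rk4(f1, np.zeros(2), t)[:, 0]

err0 = np.max(np.abs(X_full - X0))               # O(eps)
err1 = np.max(np.abs(X_full - (X0 + eps * X1)))  # O(eps^2)
```

The first-order approximation should reproduce the full nonlinear solution up to an O(*ε*²) error, and in particular should be markedly closer to it than X₀ alone.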

As indicated in Section 1, we will now focus on the analysis of the steady-state solution. Using linear theory, the first two equations in (5) can be solved via the convolution integral [31]:

$$X\_0(t) = \int\_0^\infty h(s)\mathcal{Y}(t-s) \,\mathrm{d}s,\tag{7}$$

and

$$X\_1(t) = \int\_0^\infty h(s) \left[ -X\_0^2(t-s)\dot{X}\_0(t-s) \right] \mathrm{d}s,\tag{8}$$

where

$$h(t) = \begin{cases} \left(\omega\_0^2 - \zeta^2 \omega\_0^2\right)^{-\frac{1}{2}} \mathrm{e}^{-\zeta\omega\_0 t} \sin\left[\left(\omega\_0^2 - \zeta^2 \omega\_0^2\right)^{\frac{1}{2}} t\right] & \text{if } t > 0, \\ 0 & \text{if } t \le 0, \end{cases} \tag{9}$$

is the impulse response function for the underdamped case *ζ*² < 1. This situation corresponds to the condition in which the damping of an oscillator causes it to return to equilibrium with the amplitude gradually decreasing to zero (in our random setting, this means that the expectation of the amplitude is null); the system returns to equilibrium quickly but overshoots and crosses the equilibrium position one or more times. Although they are not treated hereinafter, two more situations are also possible, namely, critical damping and overdamping. The former corresponds to *ζ*² = 1, in which case the damping causes the oscillator to return as quickly as possible to its equilibrium position without oscillating back and forth about it, while the latter corresponds to *ζ*² > 1, in which case the damping causes the oscillator to return to equilibrium without oscillating, moving more slowly toward equilibrium than in the critically damped case [32].
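As a quick sanity check of (9), one can verify numerically that *h* solves the homogeneous linearized equation for *t* > 0 and carries a unit initial slope, the defining properties of the impulse response. The sketch below uses central finite differences and illustrative parameter values.

```python
import numpy as np

# Finite-difference check that the impulse response (9) satisfies
# h'' + 2*zeta*w0*h' + w0^2*h = 0 for t > 0 (underdamped case, zeta^2 < 1)
# and that h'(0+) = 1. Parameter values are illustrative.
zeta, w0 = 0.1, 2.0
wd = w0 * np.sqrt(1.0 - zeta**2)       # (w0^2 - zeta^2*w0^2)^(1/2)

def h(t):
    return np.exp(-zeta * w0 * t) * np.sin(wd * t) / wd   # valid for t > 0

t = np.linspace(0.5, 10.0, 2001)
dt = 1e-3
h1 = (h(t + dt) - h(t - dt)) / (2 * dt)           # ~ h'(t)
h2 = (h(t + dt) - 2 * h(t) + h(t - dt)) / dt**2   # ~ h''(t)
residual = np.max(np.abs(h2 + 2 * zeta * w0 * h1 + w0**2 * h(t)))
slope0 = h(1e-6) / 1e-6                            # ~ h'(0+) = 1
```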

#### *3.2. Approximation of the Main Statistical Moments*

This subsection is devoted to calculating the main probabilistic information of the stationary solution stochastic process, *X*(*t*), of model (3). As previously pointed out, to this end we assume that the input term *Y*(*t*) is a stationary, zero-mean (E{*Y*(*t*)} = 0) Gaussian stochastic process whose correlation function, Γ*YY*(*τ*), is given. We will further assume that *Y*(*t*) is mean square differentiable; the need for this additional hypothesis will become apparent later. At this point, it is convenient to recall that the correlation function of any stationary stochastic process is even, i.e., Γ*YY*(*τ*) = Γ*YY*(−*τ*) (p. 47, [28]). This property will be extensively applied throughout our subsequent developments.

To compute the mean of the approximation, we first take the expectation operator in (6),

$$\mathbb{E}\{\hat{X}(t)\} = \mathbb{E}\{X\_0(t)\} + \epsilon \mathbb{E}\{X\_1(t)\}.\tag{10}$$

Therefore, we now need to determine both E{*X*₀(*t*)} and E{*X*₁(*t*)}. To compute E{*X*₀(*t*)}, we again apply the expectation operator, now to (7),

$$\mathbb{E}\{X\_0(t)\} = \mathbb{E}\left\{\int\_0^\infty h(s)Y(t-s)\,\mathrm{d}s\right\} = \int\_0^\infty h(s)\mathbb{E}\{Y(t-s)\}\,\mathrm{d}s = 0,\tag{11}$$

where we have applied Proposition 3 and the fact that E{*Y*(*t*)} = 0.

Now, we deal with the computation of E{*X*₁(*t*)} in an analogous manner, but using the representation of *X*₁(*t*) given in (8),

$$\begin{split} \mathbb{E}\{X\_{1}(t)\} &= \mathbb{E}\left\{\int\_{0}^{\infty} h(s)\left[-X\_{0}^{2}(t-s)\dot{X}\_{0}(t-s)\right]\mathrm{d}s\right\} = \int\_{0}^{\infty} h(s)\,\mathbb{E}\left\{-X\_{0}^{2}(t-s)\dot{X}\_{0}(t-s)\right\}\mathrm{d}s \\ &= -\int\_{0}^{\infty} h(s)\int\_{0}^{\infty} h(s\_{1})\int\_{0}^{\infty} h(s\_{2})\int\_{0}^{\infty} h(s\_{3})\,\mathbb{E}\left\{Y(t-s-s\_{1})Y(t-s-s\_{2})\dot{Y}(t-s-s\_{3})\right\}\mathrm{d}s\_{3}\,\mathrm{d}s\_{2}\,\mathrm{d}s\_{1}\,\mathrm{d}s \\ &= 0. \end{split} \tag{12}$$

Notice that the assumption of mean square differentiability of the input process *Y*(*t*) appears naturally at this stage.

Let us justify the last step in expression (12). Denote *u*₁ = *t* − *s* − *s*₁, *u*₂ = *t* − *s* − *s*₂, and *u*₃ = *t* − *s* − *s*₃; then, applying Propositions 2 and 1, both with *n* = 3, one gets

$$\mathbb{E}\left\{\mathbf{Y}(t-s-s\_1)\mathbf{Y}(t-s-s\_2)\dot{\mathbf{Y}}(t-s-s\_3)\right\} = \mathbb{E}\left\{\mathbf{Y}(u\_1)\mathbf{Y}(u\_2)\dot{\mathbf{Y}}(u\_3)\right\} = \frac{\partial}{\partial u\_3}\mathbb{E}\left\{\mathbf{Y}(u\_1)\mathbf{Y}(u\_2)\mathbf{Y}(u\_3)\right\} = 0.$$

Therefore, substituting (11) and (12) into (10), we obtain that the expectation of the approximation is null,

$$\mathbb{E}\{\hat{X}(t)\} = \mathbb{E}\{X\_0(t)\} + \epsilon \mathbb{E}\{X\_1(t)\} = 0. \tag{13}$$

From the approximation (6), and neglecting the term in *ε*², the second-order moment of *X̂*(*t*) is given by

$$\mathbb{E}\left\{\hat{X}^2(t)\right\} = \mathbb{E}\left\{X\_0^2(t)\right\} + 2\epsilon \mathbb{E}\{X\_0(t)X\_1(t)\}.\tag{14}$$

The first addend can be calculated using expression (7) and Fubini's theorem,

$$\begin{split} \mathbb{E}\left\{ X\_0^2(t) \right\} &= \int\_0^\infty h(s) \int\_0^\infty h(s\_1) \mathbb{E}\{ \mathcal{Y}(t-s) \mathcal{Y}(t-s\_1) \} \, \mathrm{d}s\_1 \, \mathrm{d}s \\ &= \int\_0^\infty h(s) \int\_0^\infty h(s\_1) \Gamma\_{YY}(s-s\_1) \, \mathrm{d}s\_1 \, \mathrm{d}s \, . \end{split} \tag{15}$$

Notice that we have used that *Y*(*t*) is a stationary process, so

$$\mathbb{E}\{Y(t-s)Y(t-s\_1)\} = \Gamma\_{YY}(t-s\_1 - (t-s)) = \Gamma\_{YY}(s-s\_1).$$
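Formula (15), which gives the leading (*ε*⁰) term of the second-order moment, can be sanity-checked numerically: the truncated double integral is compared against a Monte Carlo estimate obtained by sampling *Y*(*t*) on a grid and discretizing the convolution (7). All choices below (squared-exponential correlation, parameters, truncation horizon, grid) are hypothetical and for illustration only.

```python
import numpy as np

# Numerical sanity check of formula (15): the double-integral expression for
# E{X0^2} versus a Monte Carlo estimate built from simulated excitations.
rng = np.random.default_rng(7)
zeta, w0 = 0.2, 1.0
wd = w0 * np.sqrt(1 - zeta**2)

def h(t):  # impulse response (9)
    return np.where(t > 0, np.exp(-zeta * w0 * t) * np.sin(wd * t) / wd, 0.0)

def gamma_yy(tau):  # hypothetical correlation function of Y(t)
    return np.exp(-tau**2)

dt, T = 0.05, 30.0

# Quadrature of E{X0^2} = int int h(s) h(s1) Gamma_YY(s - s1) ds1 ds,
# truncated at T (the tail of h is negligible there)
s = np.arange(dt / 2, T, dt)                     # midpoint rule
H = h(s)
var_quad = H @ gamma_yy(s[:, None] - s[None, :]) @ H * dt * dt

# Monte Carlo: sample Y on [0, 2T], then X0(2T) ~ sum_k h(k dt) Y(2T - k dt) dt
tgrid = np.arange(0.0, 2 * T, dt)
C = gamma_yy(tgrid[:, None] - tgrid[None, :]) + 1e-8 * np.eye(tgrid.size)
Lc = np.linalg.cholesky(C)
Y = Lc @ rng.standard_normal((tgrid.size, 500))  # 500 realizations
m = int(T / dt)
X0 = (h(tgrid[:m])[:, None] * Y[::-1][:m]).sum(axis=0) * dt
var_mc = X0.var()
```

The two estimates target the same quantity and should agree up to statistical and discretization error; note this checks only the leading term, not the O(*ε*) correction in (17).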

Now, we calculate the second addend in (14). To this end, we substitute the expressions of *X*₀(*t*) and *X*₁(*t*) given in (7) and (8), respectively,

$$\begin{split} \mathbb{E}\{X\_0(t)X\_1(t)\} &= \int\_0^\infty h(s)\,\mathbb{E}\left\{X\_0(t)\left[-X\_0^2(t-s)\dot{X}\_0(t-s)\right]\right\}\mathrm{d}s \\ &= -\int\_0^\infty h(s)\,\mathbb{E}\left\{\int\_0^\infty h(s\_1)Y(t-s\_1)\,\mathrm{d}s\_1 \int\_0^\infty h(s\_2)Y(t-s-s\_2)\,\mathrm{d}s\_2 \int\_0^\infty h(s\_3)Y(t-s-s\_3)\,\mathrm{d}s\_3 \int\_0^\infty h(s\_4)\dot{Y}(t-s-s\_4)\,\mathrm{d}s\_4\right\}\mathrm{d}s \\ &= -\int\_0^\infty h(s)\int\_0^\infty h(s\_1)\int\_0^\infty h(s\_2)\int\_0^\infty h(s\_3)\int\_0^\infty h(s\_4)\,\mathbb{E}\left\{Y(t-s\_1)Y(t-s-s\_2)Y(t-s-s\_3)\dot{Y}(t-s-s\_4)\right\}\mathrm{d}s\_4\,\mathrm{d}s\_3\,\mathrm{d}s\_2\,\mathrm{d}s\_1\,\mathrm{d}s \\ &\overset{\text{(I)}}{=} -\int\_0^\infty h(s)\int\_0^\infty h(s\_1)\int\_0^\infty h(s\_2)\int\_0^\infty h(s\_3)\int\_0^\infty h(s\_4)\left[\Gamma\_{YY}(s\_1-s-s\_2)\Gamma\_{YY}'(s\_3-s\_4) + \Gamma\_{YY}(s\_1-s-s\_3)\Gamma\_{YY}'(s\_2-s\_4)\right. \\ &\qquad\qquad \left. +\, \Gamma\_{YY}'(s\_1-s-s\_4)\Gamma\_{YY}(s\_2-s\_3)\right]\mathrm{d}s\_4\,\mathrm{d}s\_3\,\mathrm{d}s\_2\,\mathrm{d}s\_1\,\mathrm{d}s \\ &\overset{\text{(II)}}{=} -\int\_0^\infty h(s)\int\_0^\infty h(s\_1)\int\_0^\infty h(s\_2)\int\_0^\infty h(s\_3)\int\_0^\infty h(s\_4)\left[2\,\Gamma\_{YY}(s\_1-s-s\_2)\Gamma\_{YY}'(s\_3-s\_4) + \Gamma\_{YY}'(s\_1-s-s\_4)\Gamma\_{YY}(s\_2-s\_3)\right]\mathrm{d}s\_4\,\mathrm{d}s\_3\,\mathrm{d}s\_2\,\mathrm{d}s\_1\,\mathrm{d}s. \end{split} \tag{16}$$

Observe that in step (I) of the above expression, we have first applied Proposition 2 and second Proposition 1. Indeed, denote *u*₁ = *t* − *s*₁, *u*₂ = *t* − *s* − *s*₂, *u*₃ = *t* − *s* − *s*₃, and *u*₄ = *t* − *s* − *s*₄; then, by Proposition 2 with *n* = 4, one gets

$$\mathbb{E}\{Y(t-s\_1)Y(t-s-s\_2)Y(t-s-s\_3)\dot{Y}(t-s-s\_4)\} = \frac{\partial}{\partial u\_4}\mathbb{E}\{Y(u\_1)Y(u\_2)Y(u\_3)Y(u\_4)\},$$

and now we apply Proposition 1, with *n* = 4, to the right-hand side. This yields

$$\begin{split} &\mathbb{E}\left\{Y(t-s\_{1})Y(t-s-s\_{2})Y(t-s-s\_{3})\dot{Y}(t-s-s\_{4})\right\} = \\ &= \frac{\partial}{\partial u\_{4}}\Big(\mathbb{E}\{Y(u\_{1})Y(u\_{2})\}\mathbb{E}\{Y(u\_{3})Y(u\_{4})\} + \mathbb{E}\{Y(u\_{1})Y(u\_{3})\}\mathbb{E}\{Y(u\_{2})Y(u\_{4})\} + \mathbb{E}\{Y(u\_{1})Y(u\_{4})\}\mathbb{E}\{Y(u\_{2})Y(u\_{3})\}\Big) \\ &= \frac{\partial}{\partial u\_{4}}\big(\Gamma\_{YY}(u\_{2}-u\_{1})\Gamma\_{YY}(u\_{4}-u\_{3}) + \Gamma\_{YY}(u\_{3}-u\_{1})\Gamma\_{YY}(u\_{4}-u\_{2}) + \Gamma\_{YY}(u\_{4}-u\_{1})\Gamma\_{YY}(u\_{3}-u\_{2})\big) \\ &= \Gamma\_{YY}(s\_{1}-s-s\_{2})\,\Gamma\_{YY}'(s\_{3}-s\_{4}) + \Gamma\_{YY}(s\_{1}-s-s\_{3})\,\Gamma\_{YY}'(s\_{2}-s\_{4}) + \Gamma\_{YY}'(s\_{1}-s-s\_{4})\,\Gamma\_{YY}(s\_{2}-s\_{3}). \end{split}$$

In step (II) of expression (16), we have taken advantage of the symmetry of the indices. Then, substituting (15) and (16) into (14), one gets

$$\begin{split} \mathbb{E}\left\{\hat{X}^{2}(t)\right\} &= \int\_0^\infty h(s)\int\_0^\infty h(s\_1)\,\Gamma\_{YY}(s-s\_1)\,\mathrm{d}s\_1\,\mathrm{d}s - 2\epsilon\int\_0^\infty h(s)\int\_0^\infty h(s\_1)\int\_0^\infty h(s\_2)\int\_0^\infty h(s\_3)\int\_0^\infty h(s\_4) \\ &\quad \cdot\left(2\,\Gamma\_{YY}(s\_1-s-s\_2)\,\Gamma\_{YY}'(s\_3-s\_4) + \Gamma\_{YY}'(s\_1-s-s\_4)\,\Gamma\_{YY}(s\_2-s\_3)\right)\mathrm{d}s\_4\,\mathrm{d}s\_3\,\mathrm{d}s\_2\,\mathrm{d}s\_1\,\mathrm{d}s. \end{split} \tag{17}$$

Notice that E{*X̂*²(*t*)} does not depend on *t*. This is consistent with the fact that we are dealing with the stochastic analysis of the stationary solution. The same feature will hold when computing the higher-order moments, E{*X̂ⁿ*(*t*)}, *n* > 2, later.

As E{*X̂*(*t*)} is null (see (13)), the variance of the solution coincides with E{*X̂*²(*t*)}. Now, we calculate the third-order moment of *X̂*(*t*), keeping terms up to first order in the perturbation parameter *ε*. Therefore,

$$\mathbb{E}\left\{\hat{X}^3(t)\right\} = \mathbb{E}\left\{X\_0^3(t)\right\} + 3\epsilon \mathbb{E}\left\{X\_0^2(t)X\_1(t)\right\}.\tag{18}$$

Reasoning analogously to the computations shown before, we obtain

$$\mathbb{E}\left\{X\_0^3(t)\right\} = \int\_0^\infty h(\mathbf{s}) \int\_0^\infty h(\mathbf{s}\_1) \int\_0^\infty h(\mathbf{s}\_2) \mathbb{E}\{Y(t-\mathbf{s})Y(t-\mathbf{s}\_1)Y(t-\mathbf{s}\_2)\} \,\mathrm{d}\mathbf{s}\_2 \,\mathrm{d}\mathbf{s}\_1 \,\mathrm{d}\mathbf{s} = 0,\tag{19}$$

where we have applied Proposition 1 in the last step. The second addend in (18) is calculated using Propositions 1 and 2,

$$\begin{split} \mathbb{E}\left\{X\_{0}^{2}(t)X\_{1}(t)\right\} &= \int\_{0}^{\infty} h(s)\,\mathbb{E}\left\{X\_{0}^{2}(t)\left[-X\_{0}^{2}(t-s)\dot{X}\_{0}(t-s)\right]\right\}\mathrm{d}s \\ &= -\int\_{0}^{\infty} h(s)\int\_{0}^{\infty} h(s\_{1})\int\_{0}^{\infty} h(s\_{2})\int\_{0}^{\infty} h(s\_{3})\int\_{0}^{\infty} h(s\_{4})\int\_{0}^{\infty} h(s\_{5})\,\mathbb{E}\left\{Y(t-s\_{1})Y(t-s\_{2})Y(t-s-s\_{3})Y(t-s-s\_{4})\dot{Y}(t-s-s\_{5})\right\} \\ &\qquad \cdot\,\mathrm{d}s\_{5}\,\mathrm{d}s\_{4}\,\mathrm{d}s\_{3}\,\mathrm{d}s\_{2}\,\mathrm{d}s\_{1}\,\mathrm{d}s = 0. \end{split} \tag{20}$$

From (19) and (20), we obtain

$$\mathbb{E}\left\{\hat{X}^3(t)\right\} = \mathbb{E}\{X\_0^3(t)\} + 3\epsilon \mathbb{E}\{X\_0^2(t)X\_1(t)\} = 0.$$

Using again the first-order approximation of the perturbation expansion, it can, in general, be straightforwardly seen that

$$\mathbb{E}\{\hat{X}^n(t)\} = 0, \qquad n = 1, 3, 5, \dots \tag{21}$$

Indeed, we know that,

$$\mathbb{E}\left\{\hat{X}^n(t)\right\} = \mathbb{E}\{X\_0^n(t)\} + n\,\epsilon \,\mathbb{E}\left\{X\_0^{n-1}(t)X\_1(t)\right\}.\tag{22}$$

On the one hand, let us observe that applying first Fubini's theorem and Proposition 3, and second Proposition 1 for *n* odd, one gets

$$\mathbb{E}\{X\_0^n(t)\} = \mathbb{E}\left\{ \left( \int\_0^\infty h(s)Y(t-s) \, \mathrm{d}s \right)^n \right\} = \int\_0^\infty h(s\_1) \cdots \int\_0^\infty h(s\_n) \mathbb{E}\{Y(t-s\_1) \cdots Y(t-s\_n)\} \, \mathrm{d}s\_n \cdots \, \mathrm{d}s\_1 = 0.$$

On the other hand, using the same reasoning as in (20),

$$\mathbb{E}\left\{X_0^{n-1}(t)X_1(t)\right\} = \int_0^\infty h(s)\,\mathbb{E}\left\{X_0^{n-1}(t)\left[-X_0^2(t-s)\dot{X}_0(t-s)\right]\right\}\mathrm{d}s = 0,$$

where first we have applied Proposition 2, in order to take the first derivative out of the expectation, and second, we have utilized that $X_0^{n-1}(t)$, $X_0^2(t-s)$ and $\dot{X}_0(t-s)$ depend upon $n-1$, $2$ and $1$ terms of $Y(\cdot)$, respectively, together with Proposition 1 (notice that $n + 2$ is odd as $n$ is odd).

To complete the information on the statistical moments of the approximation, we also determine $\mathbb{E}\{\hat{X}^4(t)\}$.

The fourth-order moment of $\hat{X}(t)$, based on the first-order approximation via the perturbation method, is given by

$$\mathbb{E}\left\{\hat{X}^4(t)\right\} = \mathbb{E}\left\{X\_0^4(t)\right\} + 4\epsilon \mathbb{E}\left\{X\_0^3(t)X\_1(t)\right\}.\tag{23}$$

Reasoning analogously as we have shown in previous sections, we obtain for the first addend

$$\mathbb{E}\{X_0^4(t)\} = 3\int_0^\infty h(s)\int_0^\infty h(s_1)\int_0^\infty h(s_2)\int_0^\infty h(s_3)\,\Gamma_{YY}(s-s_1)\Gamma_{YY}(s_2-s_3) \,\mathrm{d}s_3 \,\mathrm{d}s_2 \,\mathrm{d}s_1 \,\mathrm{d}s,\tag{24}$$

and for the second addend

$$\begin{split} \mathbb{E}\{X_0^3(t)X_1(t)\} &= -\int_0^\infty h(s)\int_0^\infty h(s_1)\int_0^\infty h(s_2)\int_0^\infty h(s_3)\int_0^\infty h(s_4)\int_0^\infty h(s_5)\int_0^\infty h(s_6)\, \mathbb{E}\{Y(t-s_1) \\ &\quad \cdot\, Y(t-s_2)Y(t-s_3)Y(t-s-s_4)Y(t-s-s_5)\dot{Y}(t-s-s_6)\}\, \mathrm{d}s_6\,\mathrm{d}s_5\,\mathrm{d}s_4\,\mathrm{d}s_3\,\mathrm{d}s_2\,\mathrm{d}s_1\,\mathrm{d}s \\ &= -\int_0^\infty h(s)\int_0^\infty h(s_1)\int_0^\infty h(s_2)\int_0^\infty h(s_3)\int_0^\infty h(s_4)\int_0^\infty h(s_5)\int_0^\infty h(s_6) \left\{ 6\,\Gamma_{YY}'(s_5-s_6) \right. \\ &\quad \cdot\, \Gamma_{YY}(s_1-s_2)\Gamma_{YY}(s_3-s-s_4) + 3\,\Gamma_{YY}'(s_1-s-s_6)\left[2\,\Gamma_{YY}(s_2-s-s_4)\Gamma_{YY}(s_3-s-s_5) \right. \\ &\quad \left.\left. +\, \Gamma_{YY}(s_2-s_3)\Gamma_{YY}(s_4-s_5)\right]\right\} \mathrm{d}s_6\,\mathrm{d}s_5\,\mathrm{d}s_4\,\mathrm{d}s_3\,\mathrm{d}s_2\,\mathrm{d}s_1\,\mathrm{d}s. \end{split} \tag{25}$$

Observe that in the last step of the above expression, first we have used Proposition 2 and, second, Proposition 1. From this last proposition, we know that there exist 15 pairwise combinations, but the expression can be reduced using the symmetry of the involved indexes.
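As an aside, the count of 15 pairings is the classical Isserlis (Wick) count for zero-mean Gaussian factors: a product of $2m$ factors expands into $(2m)!/(2^m m!)$ products of pairwise correlations. The following short Python sketch (our own illustration, not part of the paper's computations; the covariance matrix is an arbitrary example) confirms the count by brute-force enumeration and checks a fourth-order instance by Monte Carlo.

```python
import math
import numpy as np

def pairings(elems):
    """Enumerate all perfect matchings (pairings) of an even-sized list."""
    if not elems:
        return [[]]
    first, rest = elems[0], elems[1:]
    result = []
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for sub in pairings(remaining):
            result.append([(first, partner)] + sub)
    return result

# Isserlis: a Gaussian moment of order 2m is a sum over (2m)!/(2^m m!) pairings.
assert len(pairings(list(range(6)))) == math.factorial(6) // (2**3 * math.factorial(3)) == 15

# Monte Carlo check of the fourth-order case for a concrete Gaussian vector.
rng = np.random.default_rng(0)
C = np.array([[1.0, 0.5, 0.3, 0.1],
              [0.5, 1.0, 0.4, 0.2],
              [0.3, 0.4, 1.0, 0.6],
              [0.1, 0.2, 0.6, 1.0]])
Z = rng.multivariate_normal(np.zeros(4), C, size=1_000_000)
mc = np.mean(Z[:, 0] * Z[:, 1] * Z[:, 2] * Z[:, 3])
isserlis = sum(C[a, b] * C[c, d] for (a, b), (c, d) in pairings([0, 1, 2, 3]))
# isserlis = C01*C23 + C02*C13 + C03*C12 = 0.4; mc agrees up to sampling error
```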

Now we deal with the approximation of the correlation function of $\hat{X}(t)$ via (6), i.e., taking the first-order approximation of the perturbation expansion,

$$\Gamma_{\hat{X}\hat{X}}(\tau) = \mathbb{E}\{\hat{X}(t)\hat{X}(t+\tau)\} = \mathbb{E}\{X_0(t)X_0(t+\tau)\} + \varepsilon\left[\mathbb{E}\{X_0(t)X_1(t+\tau)\} + \mathbb{E}\{X_1(t)X_0(t+\tau)\}\right].\tag{26}$$

The first addend in (26) corresponds to the correlation function of $X_0(t)$. It can be expressed as

$$\begin{split} \mathbb{E}\{X_0(t)X_0(t+\tau)\} &= \int_0^\infty \int_0^\infty h(s)h(s_1)\,\mathbb{E}\{Y(t-s)Y(t+\tau-s_1)\} \,\mathrm{d}s_1\,\mathrm{d}s \\ &= \int_0^\infty \int_0^\infty h(s)h(s_1)\,\Gamma_{YY}(\tau-s_1+s)\,\mathrm{d}s_1 \,\mathrm{d}s .\end{split}$$

The last two addends in (26) represent the cross-correlations of $X_0(t)$ and $X_1(t)$. They are given, respectively, by

$$\begin{split} \mathbb{E}\{X_{0}(t)X_{1}(t+\tau)\} &= \int_{0}^{\infty} h(s)\, \mathbb{E}\{X_{0}(t)[-X_{0}^{2}(t+\tau-s)\dot{X}_{0}(t+\tau-s)]\} \, \mathrm{d}s \\ &= -\int_{0}^{\infty} h(s) \int_{0}^{\infty} h(s_{1}) \int_{0}^{\infty} h(s_{2}) \int_{0}^{\infty} h(s_{3}) \int_{0}^{\infty} h(s_{4}) \left\{2\,\Gamma_{YY}(\tau-s-s_{2}+s_{1})\Gamma_{YY}'(s_{3}-s_{4})\right. \\ &\quad \left. +\, \Gamma_{YY}'(\tau-s-s_{4}+s_{1})\Gamma_{YY}(s_{2}-s_{3})\right\} \mathrm{d}s_{4} \, \mathrm{d}s_{3} \, \mathrm{d}s_{2} \, \mathrm{d}s_{1} \, \mathrm{d}s \end{split}$$

and

$$\begin{split} \mathbb{E}\{X_{1}(t)X_{0}(t+\tau)\} &= \int_{0}^{\infty} -h(s)\,\mathbb{E}\left\{X_{0}^{2}(t-s)\dot{X}_{0}(t-s)X_{0}(t+\tau)\right\} \mathrm{d}s \\ &= -\int_{0}^{\infty}h(s)\int_{0}^{\infty}h(s_{1})\int_{0}^{\infty}h(s_{2})\int_{0}^{\infty}h(s_{3})\int_{0}^{\infty}h(s_{4})\left\{\Gamma_{YY}(s_{1}-s_{2})\Gamma_{YY}'(\tau-s_{4}+s+s_{3})\right. \\ &\quad \left. +\, 2\,\Gamma_{YY}'(s_{1}-s_{3})\Gamma_{YY}(\tau-s_{4}+s+s_{2})\right\} \mathrm{d}s_{4}\,\mathrm{d}s_{3}\,\mathrm{d}s_{2}\,\mathrm{d}s_{1}\,\mathrm{d}s \,. \end{split}$$

Summarizing,

$$\begin{split} \Gamma_{\hat{X}\hat{X}}(\tau) &= \int_{0}^{\infty} \int_{0}^{\infty} h(s) h(s_{1})\, \Gamma_{YY}(\tau - s_{1} + s) \, \mathrm{d}s_{1} \, \mathrm{d}s \\ &\quad - \varepsilon \int_{0}^{\infty} h(s) \int_{0}^{\infty} h(s_{1}) \int_{0}^{\infty} h(s_{2}) \int_{0}^{\infty} h(s_{3}) \int_{0}^{\infty} h(s_{4}) \left\{ 2\, \Gamma_{YY}(\tau - s - s_{2} + s_{1}) \Gamma_{YY}'(s_{3} - s_{4}) \right. \\ &\quad +\, \Gamma_{YY}'(\tau - s - s_{4} + s_{1}) \Gamma_{YY}(s_{2} - s_{3}) + \Gamma_{YY}(s_{1} - s_{2}) \Gamma_{YY}'(\tau - s_{4} + s + s_{3}) \\ &\quad \left. +\, 2\, \Gamma_{YY}'(s_{1} - s_{3}) \Gamma_{YY}(\tau - s_{4} + s + s_{2}) \right\} \mathrm{d}s_{4} \, \mathrm{d}s_{3} \, \mathrm{d}s_{2} \, \mathrm{d}s_{1} \, \mathrm{d}s . \end{split} \tag{27}$$

As $\mathbb{E}\{\hat{X}(t)\} = 0$, we observe that the covariance and correlation functions of $\hat{X}(t)$ coincide,

$$\text{Cov}\{\hat{X}(t\_1), \hat{X}(t\_2)\} = \Gamma\_{\hat{X}\hat{X}}(\tau), \ \tau = |t\_1 - t\_2|.$$

#### **4. Approximating the PDF via the Maximum Entropy Principle**

So far, we have calculated approximations of the moments $\mathbb{E}\{\hat{X}^n(t)\}$, $n = 1, \ldots, 5$, of the first-order approximation, $\hat{X}(t)$, via the perturbation method, of the steady-state solution of the random nonlinear oscillator (3). Although this is important information, a more ambitious goal is the approximation of the PDF, say $f_{\hat{X}(t)}(x)$, since from it one can calculate key stochastic information, such as the probability that the output lies in a specific interval of interest, say $[a_1, a_2]$,

$$\mathbb{P}\{a_1 \le \hat{X}(t) \le a_2\} = \int_{a_1}^{a_2} f_{\hat{X}(t)}(x) \, \mathrm{d}x,$$

for any arbitrary fixed time $t$. Furthermore, from the knowledge of the PDF one can easily compute confidence intervals at a specific confidence level $\alpha \in (0, 1)$,

$$1 - \alpha = \mathbb{P}\left\{\mu_{\hat{X}}(t) - k\sigma_{\hat{X}}(t) \le \hat{X}(t) \le \mu_{\hat{X}}(t) + k\sigma_{\hat{X}}(t)\right\} = \int_{\mu_{\hat{X}}(t) - k\sigma_{\hat{X}}(t)}^{\mu_{\hat{X}}(t) + k\sigma_{\hat{X}}(t)} f_{\hat{X}(t)}(x) \, \mathrm{d}x, \quad t \in [0, T],$$

where $\mu_{\hat{X}}(t) = \mathbb{E}\{\hat{X}(t)\} = 0$ (see (13)) and $\sigma_{\hat{X}}(t) = \sqrt{\mathbb{V}\{\hat{X}(t)\}}$. Usually $\alpha$ is taken as $\alpha = 0.05$, so that 95% confidence intervals are built, and $k > 0$ must be determined numerically.
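To illustrate how $k$ can be determined numerically, the following Python sketch finds the $k$ for which $[\mu - k\sigma,\, \mu + k\sigma]$ carries $1-\alpha = 0.95$ of the probability mass. The density used is a stand-in of our own choosing (a zero-mean Gaussian with variance 90, matching the second moment that appears later in Example 1), not the maximum-entropy density itself; in practice one would plug in the PME density of Section 4.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Stand-in density: zero-mean Gaussian with variance 90 (illustrative only).
sigma = np.sqrt(90.0)
f = lambda x: np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

# Find k > 0 such that P{-k*sigma <= X <= k*sigma} = 1 - alpha.
alpha = 0.05
mass = lambda k: quad(f, -k * sigma, k * sigma)[0] - (1 - alpha)
k = brentq(mass, 0.1, 5.0)
print(round(k, 4))  # for a Gaussian stand-in this recovers the familiar k ≈ 1.96
```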

As we have calculated the approximations $\mathbb{E}\{\hat{X}^n(t)\}$, $n = 1, \ldots, 5$, a suitable method to approximate the PDF, $f_{\hat{X}(t)}(x)$, is the Principle of Maximum Entropy (PME) [33]. For $t$ fixed, the PME seeks a PDF, $f_{\hat{X}(t)}(x)$, that maximizes the so-called Shannon entropy of the random variable $\hat{X}(t)$ with support $[a, b]$, defined via the following functional,

$$\mathcal{S}\left\{f\_{\hat{X}(t)}(\mathbf{x})\right\} = -\int\_{a}^{b} f\_{\hat{X}(t)}(\mathbf{x}) \log(f\_{\hat{X}(t)}(\mathbf{x})) \, \mathrm{d}\mathbf{x},\tag{28}$$

satisfying the following restrictions

$$\int_{a}^{b} f_{\hat{X}(t)}(x) \, \mathrm{d}x = 1,\tag{29}$$

$$\mathbb{E}\left\{\hat{X}^n(t)\right\} = \int_a^b x^n f_{\hat{X}(t)}(x) \, \mathrm{d}x = m_n, \quad n = 1, \ldots, M. \tag{30}$$

Condition (29) guarantees that $f_{\hat{X}(t)}(x)$ is a PDF, and the $M$ conditions given in (30) impose that the moments, $m_n$, match the moments, $\mathbb{E}\{\hat{X}^n(t)\}$, obtained in our setting by the stochastic perturbation method. For each $t$ fixed, the maximization of the functional (28) subject to the constraints (29)–(30) can be solved via the auxiliary Lagrange function

$$\mathcal{L}\left\{f_{\hat{X}(t)}, \lambda_{0},\ldots,\lambda_M\right\} = \mathcal{S}\left\{f_{\hat{X}(t)}(x)\right\} + \sum_{i=0}^{M} \lambda_i \left[m_i - \int_a^b x^i f_{\hat{X}(t)}(x) \,\mathrm{d}x\right],$$

where $m_0 = 1$. It can be seen that the form of the PDF is given by [33]

$$f_{\hat{X}(t)}(x) = \mathbf{1}_{[a,b]}(x)\, \mathrm{e}^{-\sum_{i=0}^{M} \lambda_i x^i},$$

where $\mathbf{1}_{[a,b]}$ denotes the characteristic function on the interval $[a, b]$.

In Section 3, we have approximated, via the stochastic perturbation technique, the moments $\mathbb{E}\{\hat{X}^n(t)\}$ for $n = 1, 2, \ldots, 5$. Therefore, to apply the PME we will take $M = 5$ in (30). Notice that, in practice, to calculate the parameters $\lambda_i$, $i = 0, 1, \ldots, 5$, we will need to numerically solve the system of nonlinear Equations (29) and (30).
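As an illustration of this numerical step, the sketch below sets up the discretized system (29)–(30) on a grid and solves for $\lambda_0, \ldots, \lambda_5$ with `scipy.optimize.fsolve`. All inputs are assumptions of ours for the purpose of the demo: the target moments are those of a zero-mean Gaussian with variance 90 (so the known solution is the Gaussian itself and convergence is easy to verify), and the support $[a, b]$ and grid are arbitrary choices; the paper's actual moments from Section 3 would be substituted for `m`.

```python
import numpy as np
from scipy.optimize import fsolve

# Assumed illustrative inputs: support and target moments m_0,...,m_5
# of a zero-mean Gaussian with variance 90.
a, b = -40.0, 40.0
var = 90.0
m = np.array([1.0, 0.0, var, 0.0, 3 * var**2, 0.0])

x = np.linspace(a, b, 4001)
dx = x[1] - x[0]

def residuals(lam):
    # Maximum-entropy ansatz f(x) = exp(-sum_i lam_i x^i); rectangle-rule moments.
    f = np.exp(-sum(l * x**i for i, l in enumerate(lam)))
    return [(np.sum(x**i * f) * dx - m[i]) / max(1.0, abs(m[i])) for i in range(6)]

# Initial guess: the Gaussian itself (lam_0 = log of normalizing constant,
# lam_2 = 1/(2*var), remaining multipliers zero).
lam0 = np.array([np.log(np.sqrt(2 * np.pi * var)), 0.0, 1 / (2 * var), 0.0, 0.0, 0.0])
lam = fsolve(residuals, lam0)

f = np.exp(-sum(l * x**i for i, l in enumerate(lam)))
print(np.sum(f) * dx, np.sum(x**2 * f) * dx)  # normalization ~1, second moment ~90
```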

#### **5. Numerical Examples**

This section is devoted to illustrating the theoretical findings obtained in the previous sections. We take the following data for the parameters of the random nonlinear oscillator (3): $\zeta = 0.05$ ($\zeta^2 < 1$) and $\omega_0 = 1$.

**Example 1.** *Let us consider as input excitation the trigonometric stochastic process defined by $Y(t) = \xi_1 \cos(t) + \xi_2 \sin(t)$, where $\xi_1, \xi_2 \sim \mathrm{N}(0, 1)$ are independent. Observe that $Y(t)$ satisfies the hypotheses, i.e., $\mathbb{E}\{Y(t)\} = 0$, $Y(t)$ is Gaussian, mean square differentiable with respect to $t$, and stationary, with its correlation function being $\Gamma_{YY}(t_1, t_2) = \cos(t_1 - t_2)$, i.e., $\Gamma_{YY}(\tau) = \cos(\tau)$. Substituting these data into Equation* (3)*, we obtain*

$$
\ddot{X}(t) + 0.1\dot{X}(t) + \varepsilon X^2(t)\dot{X}(t) + X(t) = \xi_1 \cos(t) + \xi_2 \sin(t), \quad \xi_1, \xi_2 \sim \mathrm{N}(0, 1). \tag{31}
$$

*Now we shall obtain approximations to the first moments, $\mathbb{E}\{\hat{X}^i(t)\}$, $i = 1, \ldots, 5$, the correlation function, and the variance, $\mathbb{V}\{\hat{X}(t)\}$, of the approximate solution $\hat{X}(t)$ of the random nonlinear oscillator* (31)*.*

*As we have seen in expression* (21)*, the moments of odd order are null, so, in this case, $\mathbb{E}\{\hat{X}(t)\} = \mathbb{E}\{\hat{X}^3(t)\} = \mathbb{E}\{\hat{X}^5(t)\} = 0$. Now, we sequentially derive some bounds for the perturbation parameter $\varepsilon$ using the positiveness of the even-order moments, i.e., $\mathbb{E}\{\hat{X}^2(t)\} > 0$ and $\mathbb{E}\{\hat{X}^4(t)\} > 0$. First, it is easy to check that, using expression* (17)*, the second-order moment is given by*

$$\mathbb{E}\{\hat{X}^2(t)\} = 100 - 200000\varepsilon,\tag{32}$$

*so we obtain the bound $\varepsilon < 0.0005$. Since $\mathbb{E}\{\hat{X}(t)\} = 0$, observe that the variance of the first-order approximation is given by* (32)*. Second, using expressions* (23)*–*(25)*,*

$$\mathbb{E}\{\hat{X}^4(t)\} = 30{,}000 - \frac{1{,}153{,}800{,}000{,}000}{6409}\,\varepsilon.\tag{33}$$

*This provides a stronger bound, $\varepsilon < 0.000166641$.*
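Both the leading coefficient and the two bounds can be sanity-checked numerically. For $\varepsilon = 0$ the stationary variance equals $\int_0^\infty\int_0^\infty h(s)h(s_1)\cos(s-s_1)\,\mathrm{d}s_1\,\mathrm{d}s$, which factorizes into $(\int h\cos)^2 + (\int h\sin)^2 = |H(\mathrm{i})|^2 = 100$, matching (32). A short Python check (our own, assuming the standard underdamped impulse response $h(s) = \mathrm{e}^{-\zeta\omega_0 s}\sin(\omega_d s)/\omega_d$ for the linear part of (31)):

```python
import numpy as np

# Linear part of the oscillator: X'' + 0.1 X' + X, i.e. zeta = 0.05, omega0 = 1.
zeta, omega0 = 0.05, 1.0
omega_d = omega0 * np.sqrt(1 - zeta**2)

s = np.linspace(0.0, 400.0, 400_001)          # h decays like e^{-0.05 s}
h = np.exp(-zeta * omega0 * s) * np.sin(omega_d * s) / omega_d

# cos(s - s1) = cos s cos s1 + sin s sin s1, so the double integral factorizes.
ds = s[1] - s[0]
C = np.sum(h * np.cos(s)) * ds
S = np.sum(h * np.sin(s)) * ds
var0 = C**2 + S**2
print(var0)  # ≈ 100, the epsilon-free term of (32)

# Positivity bounds on epsilon implied by (32) and (33).
bound1 = 100 / 200_000
bound2 = 30_000 * 6409 / 1_153_800_000_000
print(bound1, bound2)  # 0.0005 and ≈ 0.000166641
```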

*Now, applying* (27)*, we obtain the following approximation of the correlation function,*

$$
\Gamma\_{\hat{X}\hat{X}}(\tau) = 100(1 - 2000\varepsilon)\cos(\tau). \tag{34}
$$

*In Figure 1, we show the graphical representation of the correlation function, $\Gamma_{\hat{X}\hat{X}}(\tau)$, given in expression* (34) *for different values of $\varepsilon$. We can see that the higher the perturbation $\varepsilon$, the lower the variability. This graphical behavior is in full agreement with the physical interpretation of the oscillator dynamics. Indeed, let us rewrite Equation* (31) *as follows,*

$$
\ddot{X}(t) + (0.1 + \epsilon X^2(t))\dot{X}(t) + X(t) = \xi\_1 \cos(t) + \xi\_2 \sin(t).
$$

*As $\varepsilon > 0$ increases, the damping coefficient $0.1 + \varepsilon X^2(t)$ does too, so the mechanical system reduces its oscillations. It should be noted that $\varepsilon = 0.0004$ only satisfies the first bound ($\varepsilon < 0.0005$); however, we can observe that the corresponding approximation preserves the symmetry of the correlation function. This might be due to the sample regularity of the random excitation, $Y(t)$, which is differentiable.*

**Figure 1.** Correlation function $\Gamma_{\hat{X}\hat{X}}(\tau)$ of $\hat{X}(t)$ for different values of $\varepsilon$. Example 1.

*For the approximation of the PDF, $f_{\hat{X}(t)}(x)$, we apply the results exhibited in Section 4 based on the PME by taking $\varepsilon = 0.00005$, which satisfies the stronger bound previously determined ($\varepsilon < 0.000166641$). We first compute the approximation based on the first three moments*

$$f_{\hat{X}(t)}(x) = \mathrm{e}^{-1 - 2.181 + 1.045 \cdot 10^{-5} x - 0.005 x^2 - 4.9217 \cdot 10^{-8} x^3}$$

*and, second, the approximation based on the first five moments*

$$f_{\hat{X}(t)}(x) = \mathrm{e}^{-1 - 2.243 + 2.552 \cdot 10^{-8} x - 0.004 x^2 - 2.177 \cdot 10^{-9} x^3 - 3.789 \cdot 10^{-6} x^4 + 6.754 \cdot 10^{-13} x^5}.$$

*In Figure 2, we compare both graphical representations. From them, we can observe that both plots are quite similar, thus giving evidence that the computations are consistent.*

**Figure 2.** Approximation of the PDF, $f_{\hat{X}(t)}(x)$, using up to the third- and the fifth-order moments for $\varepsilon = 0.00005$ via the PME. Example 1.

*Finally, to check that our approximations are reliable, we compare the mean and standard deviation of the approximate solution obtained via the perturbation method against the ones calculated by Monte Carlo. The results are collected in Table 1. We can observe that both approximations agree.*
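A Monte Carlo comparison along these lines can be sketched as follows (a minimal reimplementation of ours, not the authors' code): draw $(\xi_1, \xi_2)$, integrate (31) past its transient, and compare the sample mean and standard deviation at a fixed large time with the perturbation values $\mu_{\hat{X}} = 0$ and $\sigma_{\hat{X}} = \sqrt{100 - 200000\,\varepsilon} \approx 9.4868$ for $\varepsilon = 0.00005$. The sample size and time horizon are our own choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.00005
rng = np.random.default_rng(1)

def rhs(t, u, xi1, xi2):
    # Oscillator (31) written as a first-order system in (x, v).
    x, v = u
    return [v, -0.1 * v - eps * x**2 * v - x + xi1 * np.cos(t) + xi2 * np.sin(t)]

t_end = 150.0                      # transient decays like e^{-0.05 t}
samples = []
for _ in range(200):
    xi1, xi2 = rng.standard_normal(2)
    sol = solve_ivp(rhs, (0.0, t_end), [0.0, 0.0],
                    t_eval=[t_end], rtol=1e-6, atol=1e-8, args=(xi1, xi2))
    samples.append(sol.y[0, -1])

samples = np.array(samples)
print(samples.mean(), samples.std(ddof=1))
# Perturbation prediction: mean 0, std = sqrt(100 - 200000*eps) ≈ 9.4868.
```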

**Table 1.** Comparison between the perturbation method and Monte Carlo simulations using $\varepsilon = 0.00005$. Example 1.


**Example 2.** *To perform our theoretical analysis, we have previously required the stationary Gaussian stochastic excitation $Y(t)$ to be differentiable in the mean square sense (or, equivalently, its correlation function, $\Gamma_{YY}(\tau)$, to be twice differentiable in the ordinary sense at $\tau = 0$ [28] (Chapter 4)), so having differentiable sample trajectories [34]. If we carefully revise our previous development, we can notice that this hypothesis comes from the fact that the nonlinear cross-term depends upon $\dot{X}(t)$. In this second example, we shall show that, using the general concept of differentiability in the sense of distributions, we can still obtain good results via the perturbation techniques when the excitation is not differentiable. To this end, we have chosen $Y(t) = \xi(t)$, a Gaussian white-noise process with zero mean and correlation function $\Gamma_{YY}(\tau) = \frac{1}{2}W\delta(\tau)$, where $\delta(\tau)$ is the Dirac delta function and $W$ is the noise power. This type of random noise has been extensively used in the literature since the earliest contributions [17]. Observe that $Y(t) = \xi(t)$ is a stationary zero-mean Gaussian process but is not mean square differentiable (as its correlation function, given by the Dirac delta function, is not differentiable) and, consequently, its sample trajectories are not differentiable either. In this case, Equation* (3) *becomes*

$$
\ddot{X}(t) + 0.1\dot{X}(t) + \varepsilon X^2(t)\dot{X}(t) + X(t) = \xi(t). \tag{35}
$$

*As in the previous example, we are going to obtain approximations to the first five moments, $\mathbb{E}\{\hat{X}^i(t)\}$, $i = 1, \ldots, 5$, the correlation function, and the variance, $\mathbb{V}\{\hat{X}(t)\}$, of the approximate solution $\hat{X}(t)$ of Equation* (35)*. To implement the corresponding formulas derived throughout Section 3.2, saving computational time in Mathematica, we have taken into account the following properties of the Dirac delta function,*

$$\int\_{-\infty}^{\infty} h(t)\delta(t-s) \, \mathrm{d}t = h(s), \quad \int\_{-\infty}^{\infty} h(t)\delta'(t-s) \, \mathrm{d}t = -h'(s).$$
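These two sifting properties can be checked numerically by replacing $\delta$ with a narrow Gaussian; a short sketch of ours (the test function, the evaluation point and the width are arbitrary choices):

```python
import numpy as np

# Approximate delta(t) by a narrow Gaussian of width sig.
sig = 0.005
delta = lambda t: np.exp(-t**2 / (2 * sig**2)) / (sig * np.sqrt(2 * np.pi))
ddelta = lambda t: -t / sig**2 * delta(t)     # derivative of the approximation

h = lambda t: np.exp(-((t - 2.0) ** 2))       # arbitrary smooth test function
dh = lambda t: -2.0 * (t - 2.0) * h(t)        # its exact derivative

t = np.linspace(-10.0, 10.0, 2_000_001)
dt = t[1] - t[0]
s = 1.5
print(np.sum(h(t) * delta(t - s)) * dt)       # ->  h(1.5)
print(np.sum(h(t) * ddelta(t - s)) * dt)      # -> -h'(1.5)
```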

*As mentioned in Example 1, the moments of odd order are null, and using the positiveness of the even-order moments we can obtain some bounds for the perturbation parameter $\varepsilon$. First, using expression* (17)*, the second-order moment is determined by*

$$\mathbb{E}\{\hat{X}^2(t)\} = \frac{1}{40} - \frac{\epsilon}{160},\tag{36}$$

*so we obtain the bound $\varepsilon < 4$. Since $\mathbb{E}\{\hat{X}(t)\} = 0$, expression* (36) *is also the variance of the first-order approximation. Second, using expressions* (23)*–*(25)*,*

$$\mathbb{E}\{\hat{X}^4(t)\} = \frac{3}{1600} - \frac{759}{644800}\epsilon. \tag{37}$$

*This provides a stronger bound, $\varepsilon < 1.59289$.*

*Now, applying* (27)*, we obtain the following approximation of the correlation function,*

$$\Gamma_{\hat{X}\hat{X}}(\tau) = \begin{cases} \dfrac{\mathrm{e}^{-\tau/20} \left( -399(-399 + 5\varepsilon\tau) \cos\left(\frac{\sqrt{399}\,\tau}{20}\right) + \sqrt{399}\,(399 + 100\varepsilon) \sin\left(\frac{\sqrt{399}\,\tau}{20}\right) \right)}{6{,}368{,}040} & \text{if } \tau \ge 0, \\[2ex] \dfrac{\mathrm{e}^{\tau/20} \left( -399(-399 + 5\varepsilon\tau) \cos\left(\frac{\sqrt{399}\,\tau}{20}\right) + \sqrt{399}\,(-399 + 100\varepsilon) \sin\left(\frac{\sqrt{399}\,\tau}{20}\right) \right)}{6{,}368{,}040} & \text{if } \tau < 0. \end{cases} \tag{38}$$

*In Figure 3, we show the plot of the correlation function, $\Gamma_{\hat{X}\hat{X}}(\tau)$, given in expression* (38)*, for different values of $\varepsilon$ satisfying the weaker and the stronger bounds previously determined. We can observe that, for smaller values of $\varepsilon$, the obtained approximation of the correlation function better preserves the symmetry, as expected.*

**Figure 3.** Correlation function $\Gamma_{\hat{X}\hat{X}}(\tau)$ of $\hat{X}(t)$ for different values of $\varepsilon$. Example 2.

*Applying the results presented in Section 4, we obtain the approximation of the PDF, $f_{\hat{X}(t)}(x)$, for $\varepsilon = 0.5$, which satisfies the stronger bound $\varepsilon < 1.59289$. We first compute the approximation based on the first three moments*

$$f_{\hat{X}(t)}(x) = \mathrm{e}^{-1 + 1.992 + 1.438 \cdot 10^{-8} x - 22.857 x^2 - 2.197 \cdot 10^{-7} x^3}$$

*and, second, the approximation based on the first five moments*

$$f_{\hat{X}(t)}(x) = \mathrm{e}^{-1 + 1.940 - 5.226 \cdot 10^{-11} x - 17.837 x^2 + 1.580 \cdot 10^{-9} x^3 - 42.679 x^4 - 7.904 \cdot 10^{-9} x^5}.$$

*In Figure 4, we compare both graphical representations. We can observe, again, the similarity between them, thus showing full agreement in our numerical computations.*

*Finally, to check that our approximations are accurate, we compare the mean and standard deviation of $\hat{X}(t)$ obtained via the perturbation method against the ones computed by the Euler–Maruyama numerical scheme [35]. The results are shown in Table 2. We can observe that both approximations are similar.*

**Table 2.** Comparison between the perturbation method and Euler–Maruyama simulations using $\varepsilon = 0.5$. Example 2.
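A minimal Euler–Maruyama sketch of this comparison (our own reimplementation, not the authors' code) follows. We write (35) as a first-order system driven by $\sqrt{W/2}\,\mathrm{d}B_t$ and take $W/2 = 1/200$, a value we infer by matching the $\varepsilon$-free term of (36) with $\mathbb{E}\{X_0^2(t)\} = \tfrac{W}{2}\int_0^\infty h^2(s)\,\mathrm{d}s = \tfrac{5W}{2} = \tfrac{1}{40}$; the step size, horizon and number of paths are arbitrary choices. For $\varepsilon = 0.5$ the perturbation value is $\sqrt{1/40 - \varepsilon/160} \approx 0.1479$.

```python
import numpy as np

eps = 0.5
noise_scale = np.sqrt(1.0 / 200.0)   # sqrt(W/2), inferred from the eps-free term of (36)

rng = np.random.default_rng(2)
dt, n_steps, n_paths = 0.005, 40_000, 400   # horizon T = 200, well past the transient
x = np.zeros(n_paths)
v = np.zeros(n_paths)

# Explicit Euler-Maruyama for: dx = v dt, dv = (-0.1 v - eps x^2 v - x) dt + sqrt(W/2) dB.
for _ in range(n_steps):
    dB = rng.standard_normal(n_paths) * np.sqrt(dt)
    x_new = x + v * dt
    v_new = v + (-0.1 * v - eps * x**2 * v - x) * dt + noise_scale * dB
    x, v = x_new, v_new

print(x.mean(), x.std(ddof=1))
# Perturbation prediction: mean 0, std = sqrt(1/40 - eps/160) ≈ 0.1479.
```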


#### **6. Conclusions**

We have studied, from a probabilistic standpoint, a family of oscillators subject to small perturbations on the nonlinear term, which depends both upon the position and the velocity (cross-nonlinearity), and whose forcing source is driven by a mean square differentiable stationary zero-mean Gaussian process. Despite the hypothesis of differentiability for the stochastic excitation, we have checked, via a numerical example, that the method also provides good results when this hypothesis is not fulfilled, provided the involved computations are performed using the concept of generalized differentiability in the sense of distributions. We must point out that the majority of contributions dealing with this type of stochastic oscillators focus on the computation of the mean, the variance and the correlation function. Our main contribution is the computation of reliable approximations of the probability density function of the stationary solution, by combining the stochastic perturbation method and the principle of maximum entropy. In this manner, we provide a fuller probabilistic description of the solution, since from the density one can determine any one-dimensional moment as well as further probabilistic information of the steady state. The proposed approach can be very useful to open new avenues in the analysis of other kinds of nonlinear oscillators subject to small fluctuations whose forcing term is a stochastic process satisfying certain hypotheses. In our future research, we will continue contributing in this direction.

**Author Contributions:** Funding acquisition, J.-C.C.; Methodology, J.-C.C., E.L.-N., J.-V.R. and M.- D.R.; Supervision, J.-V.R. and M.-D.R.; Writing—original draft, J.-C.C. and E.L.-N.; Writing—review & editing, E.L.-N., J.-V.R. and M.-D.R. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work has been supported by the Spanish Ministerio de Economía, Industria y Competitividad (MINECO), the Agencia Estatal de Investigación (AEI) and Fondo Europeo de Desarrollo Regional (FEDER UE) grant MTM2017–89664–P. Elena López-Navarro has been supported by the European Union through the Operational Program of the [European Regional Development Fund (ERDF)/European Social Fund (ESF)] of the Valencian Community 2014-2020 (GJIDI/2018/A/010).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare that they do not have any conflict of interest.

#### **References**

