1. Introduction and Motivation
The analysis of stochastic perturbations in nonlinear dynamical systems is an active research topic in applied mathematics [1,2], with many applications in apparently different areas such as control [3], economics [4], and especially the treatment of nonlinear vibratory systems. The study of systems subject to vibrations is encountered, for example, in Physics (in the analysis of different types of oscillators) and in Engineering (in the analysis of road vehicles, the response of structures to earthquake excitations or to sea waves). The nature of the vibrations in this type of system is usually random because they are spawned by complex factors that are not known in a deterministic manner but are statistically characterized via measurements that often contain errors and uncertainties. Although oscillators in Physics and Engineering have been extensively studied in the deterministic case [5,6], and particularly in the nonlinear case [7,8,9], due to the above-mentioned facts the stochastic analysis is more suitable, since it provides a better understanding of their dynamics.
Many vibratory systems are governed by differential equations with small nonlinear terms of the following form,
Here, the unknown denotes the position (usually an angle measured with respect to a reference origin) of the oscillatory system at the time instant t; a parameter of the equation is expressed in terms of the damping constant and the undamped angular frequency; and a small perturbation parameter multiplies a nonlinear function of the position, referred to as the nonlinear restoring term. The right-hand side stands for an external source/forcing term (vibration) acting upon the system. In the setting of random vibration systems, this forcing is assumed to be a stochastic process, termed the stochastic excitation, with characteristics that will be specified later in the present study.
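For readability, a generic equation consistent with the description above can be written as follows; the symbols $X(t)$, $\beta$, $\omega_0$, $\epsilon$, $g$, and $Y(t)$ are our own notation and need not coincide with the symbols used in Equation (1).

```latex
% Sketch of a randomly forced, weakly nonlinear oscillator (our notation, not necessarily Equation (1)):
% X(t): position, \beta: damping parameter, \omega_0: undamped angular frequency,
% \epsilon: small perturbation, g: nonlinear restoring shape, Y(t): stochastic excitation.
\ddot{X}(t) + 2\beta\,\dot{X}(t) + \omega_0^{2}\,X(t) + \epsilon\,g\big(X(t)\big) = Y(t),
\qquad 0 < \epsilon \ll 1 .
```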
Notice that the nonlinear restoring term in Equation (1) involves the small parameter, which determines the magnitude of the nonlinear perturbation, whose shape is given by the function g. When this parameter vanishes, Equation (1) describes a random linear oscillator. In [10], the authors analyze this class of oscillators considering two cases for the stochastic source term: first, when it is Gaussian, and second, when it can be represented via a Karhunen–Loève expansion. When the parameter does not vanish, the inclusion of the nonlinear term makes it more difficult (or even impossible) to solve Equation (1) exactly. An effective method to construct reliable approximations of Equation (1), when the parameter is small, is the perturbation technique [11,12,13,14,15]. In the stochastic setting, this method has been successfully applied to study different types of oscillators subject to random vibrations. After the pioneering contributions by Crandall [16,17], the analysis of random vibration systems has attracted many researchers (see, for instance, [15,18,19] for a full overview of this topic). In [20], approximations of quadratic and cubic nonlinear oscillators subject to white noise excitations are constructed by combining the Wiener–Hermite expansion and the homotopy perturbation technique. The aforementioned approximations correspond to the first statistical moments (mean and variance) because, as the authors indicate in their introduction, the probability density function (PDF) is usually very difficult to obtain. In [21], the authors extend the previous analysis to compute higher-order statistical moments of the oscillator response in the case where the nonlinearity is only quadratic. This methodology is further extended and algorithmically automated in [22]. In [23], the author considers the interesting scenario of a harmonic oscillator with a random mass and analyses important dynamic characteristics such as stochastic stability and resonance phenomena; to conduct that study, a new type of Brownian motion is introduced. The perturbation technique has also been used to approximate the first moments, mainly the mean and the variance, of some oscillators subject to small nonlinearities. The computational procedure of this method often requires amendments to existing solution codes, so it is classified as an intrusive method. A spectral technique that allows overcoming this drawback is the non-intrusive polynomial chaos expansion (PCE), in which simulations are used as black boxes and the calculation of the chaos expansion coefficients for the response metrics of interest is based on a set of simulation response evaluations. In the recent paper [24], the authors design an interesting hybrid non-intrusive procedure that combines PCE with the Chebyshev Surrogate Method to analyze a number of uncertain physical parameters and the corresponding transient responses of a rotating system.
Besides computing the first statistical moments of the response or performing a stability analysis of systems under stochastic vibrations, we must emphasize that the computation of the finite-dimensional distributions (usually termed “fidis”) associated with the stationary solution, and particularly of the stationary PDF, is also a major goal in the realm of vibratory systems with uncertainties. Some interesting contributions in this regard include [25,26]. In [25], the authors first present a complete overview of the methods and techniques available to determine the stationary PDF of nonlinear oscillators excited by random functions. Second, nonlinear stochastic oscillators excited by a combination of Gaussian and Poisson white noises are fully analyzed. The study is based on solving the forward generalized Kolmogorov partial differential equation (PDE) using the exponential-polynomial closure method. The theoretical analysis is accompanied by several illustrative examples. In the recent contribution [26], the authors propose a new method to compute a closed-form solution for the stationary PDF of single-degree-of-freedom vibro-impact systems under Gaussian white noise excitation. The density is obtained by solving the Fokker–Planck–Kolmogorov PDE using the iterative method of weighted residue combined with the concepts of circulatory and potential probability flows. Apart from obtaining the density of the solutions, it is worth pointing out that some recent contributions also determine the densities of key quantities from Reliability Theory, such as the first-passage time for vibro-impact systems with randomly fluctuating restoring and damping terms (see [27] and the references therein).
In this paper, we address the study of random cross-nonlinear oscillators subject to small perturbations affecting the nonlinear term, g, which depends on both the position and the velocity.
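A sketch of such a model, written in our own notation and without reproducing the authors' specific choice of the cross-nonlinearity in Equation (2), is the following.

```latex
% Sketch of a cross-nonlinear random oscillator (our notation, not necessarily Equation (2)):
\ddot{X}(t) + 2\beta\,\dot{X}(t) + \omega_0^{2}\,X(t)
  + \epsilon\,g\big(X(t),\dot{X}(t)\big) = Y(t), \qquad 0 < \epsilon \ll 1 .
```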
Here, the stochastic derivatives are understood in the mean square sense [28] (Chapter 4). In our subsequent analysis, we will consider a particular choice of the cross-nonlinearity g and assume that the excitation is a mean square differentiable, stationary, zero-mean Gaussian stochastic process whose correlation function is known. On the other hand, assuming that the excitation is a stationary Gaussian stochastic process is a rather natural hypothesis, which has been extensively used in both theoretical and practical studies [29,30]. Stationarity means that the statistical properties of the process do not vary significantly over time/space. This feature is met in a number of modeling problems such as the surface of the sea in both the spatial and time coordinates, noise over time in electric circuits under steady-state operation, and homogeneous impurities in engineering materials and media, for example [28] (Chapter 3).
Now, we list the main novelties of our contribution.
We combine mean square calculus and the stochastic perturbation method to study a class of nonlinear oscillators whose nonlinear term, g, involves both the position and the velocity; specifically, we consider a particular cross term of this kind. This corresponds to the most complicated case, usually termed cross-nonlinearity.
The oscillator is subject to random excitations driven by a stochastic process having the following properties: it is mean square differentiable, stationary, zero-mean, and Gaussian.
We compute reliable approximations, not only of the mean, the variance, and the covariance (as is usually done), but also of higher-order moments (including the asymmetry and the kurtosis) of the steady-state solution of the above-described nonlinear oscillator.
We combine the foregoing information on the higher-order moments with the maximum entropy method to construct reliable approximations of the probability density function of the steady-state solution. The approximation is quite accurate precisely because it is based on higher-order moments.
To the best of our knowledge, this is the first time that stochastic nonlinear oscillators with the above-described type of cross-nonlinearities are studied using our approach, i.e., combining mean square calculus and the stochastic perturbation method. In this sense, we think that our approach may be useful to extend the study to stochastic nonlinear oscillators having more general cross-nonlinearities, in particular, products of powers of the position and the velocity.
The paper is organized as follows. In Section 2, we introduce the auxiliary stochastic results that will be used throughout the paper. This section is intended to help the reader better understand the technical aspects of the paper. Section 3 is divided into two parts. In Section 3.1, we apply the perturbation technique to construct a first-order approximation of the stationary solution stochastic process of model (2) for the particular cross-nonlinearity under study. In Section 3.2, we determine expressions for the first higher-order moments, the variance, the covariance, and the correlation of the aforementioned first-order approximation. These expressions will be given in terms of certain integrals involving the correlation function of the Gaussian noise and the classical impulse response function of the linearized oscillator associated with Equation (2). In Section 4, we take advantage of the results given in Section 3 to construct reliable approximations of the PDF of the stationary solution using the principle of maximum entropy. In Section 5, we illustrate the theoretical findings by means of several examples. Our numerical results are compared with Monte Carlo simulations and with the Euler–Maruyama numerical scheme, showing full agreement. Conclusions are drawn in Section 6.
2. Stochastic Preliminaries
For the sake of completeness, in this section we will introduce some technical stochastic results that will be required throughout the paper.
Hereinafter, we will work on a complete probability space $(\Omega,\mathcal{F},\mathbb{P})$, i.e., $\Omega$ is a sample space; $\mathcal{F}$ is a $\sigma$-algebra of sets of $\Omega$, usually called events; and $\mathbb{P}$ is a probability measure. To simplify the notation, we will omit the sample dependence, so the input and the solution stochastic processes involved in Equation (2) will be denoted without explicitly writing their dependence on the sample event.
The following result will be applied to calculate some higher-order moments of the solution stochastic process of the random differential Equation (2) since, as shall be seen later, this solution depends on a product of the stochastic excitation evaluated at a finite number of time instants.
Proposition 1 (p. 28, [28]). Let the random variables $X_1,X_2,\ldots,X_n$ be jointly Gaussian with zero mean, $\mathbb{E}[X_i]=0$, $i=1,\ldots,n$. Then, all odd-order moments of these random variables vanish and, for $n$ even,
$$\mathbb{E}[X_1 X_2\cdots X_n]=\sum \mathbb{E}[X_{i}X_{j}]\,\mathbb{E}[X_{k}X_{l}]\cdots\mathbb{E}[X_{p}X_{q}].$$
The sum above is taken over all possible combinations of pairs of the $n$ random variables. The number of terms in the summation is $n!/\big(2^{n/2}(n/2)!\big)$.
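As an illustration (not part of the original text), the following sketch numerically checks Proposition 1 for four jointly Gaussian zero-mean variables with an arbitrary covariance matrix: the fourth-order moment equals the sum over the three pairings, odd moments vanish, and the pairing count $n!/\big(2^{n/2}(n/2)!\big)$ gives 1, 3, and 15 terms for n = 2, 4, 6.

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(0)

# Arbitrary symmetric positive-definite covariance for 4 zero-mean jointly Gaussian variables.
C = np.array([[1.0, 0.3, 0.2, 0.1],
              [0.3, 1.0, 0.4, 0.2],
              [0.2, 0.4, 1.0, 0.3],
              [0.1, 0.2, 0.3, 1.0]])
X = rng.multivariate_normal(mean=np.zeros(4), cov=C, size=2_000_000)

# Monte Carlo estimates of a fourth-order and a (vanishing) third-order moment.
m4_mc = np.mean(X[:, 0] * X[:, 1] * X[:, 2] * X[:, 3])
m3_mc = np.mean(X[:, 0] * X[:, 1] * X[:, 2])

# Isserlis/Wick pairing formula for n = 4 (three pair combinations).
m4_pairings = C[0, 1] * C[2, 3] + C[0, 2] * C[1, 3] + C[0, 3] * C[1, 2]

# Number of terms in the summation of Proposition 1 for n even.
n_terms = lambda n: factorial(n) // (2 ** (n // 2) * factorial(n // 2))

print(f"E[X1 X2 X3 X4]: Monte Carlo = {m4_mc:.4f}, pairings = {m4_pairings:.4f}")
print(f"E[X1 X2 X3]   : Monte Carlo = {m3_mc:.4f} (theory: 0)")
print("number of pairing terms for n = 2, 4, 6:", [n_terms(n) for n in (2, 4, 6)])
```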
The two following results permit interchanging the expectation operator with the mean square derivative and with the mean square integral, respectively. In [28] (Equation (4.130) in Section 4.4.2), the first result is established for the first derivative, and then it follows straightforwardly by induction.
Proposition 2. Let $X(t)$ be an $n$-times mean square differentiable stochastic process. Then,
$$\mathbb{E}\!\left[\frac{\mathrm{d}^{n}X(t)}{\mathrm{d}t^{n}}\right]=\frac{\mathrm{d}^{n}}{\mathrm{d}t^{n}}\,\mathbb{E}[X(t)],$$
provided the above expectations exist.
Proposition 3 (p. 104, [28]). Let $X(t)$ be a second-order stochastic process, integrable in the mean square sense, and let $h(t)$ be a Riemann integrable deterministic function on $[a,b]$. Then,
$$\mathbb{E}\!\left[\int_a^b h(t)\,X(t)\,\mathrm{d}t\right]=\int_a^b h(t)\,\mathbb{E}[X(t)]\,\mathrm{d}t.$$
The following is a distinctive property of Gaussian processes: they preserve Gaussianity under mean square integration.
Proposition 4 (p. 112, [28]). Let $X(t)$ be a Gaussian process and let $h(t)$ be a Riemann integrable deterministic function on $[a,b]$ such that the following mean square integral exists,
$$Z(t)=\int_a^t h(s)\,X(s)\,\mathrm{d}s,\qquad t\in[a,b];$$
then $Z(t)$ is a Gaussian process.
3. Probabilistic Model Study
As indicated in Section 1, in this paper we will study, from a probabilistic standpoint, the random cross-nonlinear oscillator stated in Equation (3). The analysis will be divided into two steps. First, in Section 3.1, we will apply the perturbation technique to obtain an approximation of the stationary solution stochastic process. Then, in Section 3.2, we will take advantage of this approximation to determine reliable approximations of the main statistical functions of the stationary solution, namely, the first higher-order moments, the variance, the covariance, and the correlation.
3.1. Perturbation Technique
Let us consider Equation (3). The main idea of the stochastic perturbation technique is to consider that the solution can be expanded in powers of the small perturbation parameter, as stated in (4).
Replacing expression (4) into Equation (3) yields the sequence of linear differential equations with random inputs collected in (5).
Notice that these equations can be solved in cascade. As usual when applying the perturbation technique, we take the first-order approximation given in (6).
This entails that, in our subsequent developments, we will only need the first two equations listed in (5).
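In the generic notation of the sketches above (our notation, not a transcription of (4) and (5)), the expansion and the resulting cascade of linear equations with random inputs read:

```latex
% Perturbation expansion of the solution in powers of the small parameter (our notation):
X(t) = X_0(t) + \epsilon\,X_1(t) + \epsilon^{2}X_2(t) + \cdots
% Collecting powers of \epsilon yields a cascade of linear equations with random inputs:
\ddot{X}_0 + 2\beta\,\dot{X}_0 + \omega_0^{2}X_0 = Y(t),
\qquad
\ddot{X}_1 + 2\beta\,\dot{X}_1 + \omega_0^{2}X_1 = -\,g\big(X_0,\dot{X}_0\big),
\qquad \ldots
% First-order approximation: X(t) \approx X_0(t) + \epsilon\,X_1(t).
```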
As indicated in Section 1, we now focus on the analysis of the steady-state solution. Using the linear theory, the first two equations in (5) can be solved by means of the convolution integral [31], which yields the representations (7) and (8), where the kernel is the impulse response function for the underdamped case. This situation corresponds to the condition in which the damping causes the oscillator to return to equilibrium with the amplitude gradually decreasing to zero (in our random setting, this means that the expectation of the amplitude is null); the system returns to equilibrium quickly but overshoots and crosses the equilibrium position one or more times. Although they are not treated hereinafter, two further situations are also possible, namely, critical damping and overdamping. In the former, the damping causes the oscillator to return as quickly as possible to its equilibrium position without oscillating back and forth about this position, while in the latter the damping causes the oscillator to return to equilibrium without oscillating, moving more slowly toward equilibrium than in the critically damped case [32].
3.2. Approximation of the Main Statistical Moments
This subsection is devoted to calculating the main probabilistic information of the stationary solution stochastic process of model (3). As previously pointed out, to this end we assume that the input term is a stationary zero-mean Gaussian stochastic process whose correlation function is given. We will further assume that this excitation is mean square differentiable; the need for this additional hypothesis will become apparent later. At this point, it is convenient to recall that the correlation function of any stationary stochastic process is even (p. 47, [28]). This property will be extensively applied throughout our subsequent developments.
To compute the mean of the approximation, we first take the expectation operator in (6), which yields (10). Therefore, we now need to determine the expectations of the zeroth- and first-order terms of the expansion. To compute the former, we apply the expectation operator in (7), obtaining (11), where we have used Proposition 3 and the fact that the excitation has zero mean. Now, we deal with the expectation of the first-order term in an analogous manner, but using the representation given in (8); this leads to (12).
Notice that the assumption of mean square differentiability of the input process appears naturally at this stage.
Let us justify the last step in expression (12). Denoting conveniently the zero-mean Gaussian random variables that appear in the product inside the expectation, and applying Propositions 2 and 1, one gets that this expectation vanishes.
Therefore, substituting (11) and (12) into (10), we obtain that the expectation of the approximation is null, as stated in (13).
From the approximation (6), and neglecting the term of second order in the perturbation parameter, the second-order moment of the approximation is given by (14).
The first addend can be calculated using expression (7) and Fubini's theorem, as shown in (15). Notice that we have used that the excitation is a stationary process, so its correlation function depends only on the difference of its arguments.
Now, we calculate the second addend in (14). To this end, we substitute the expressions of the zeroth- and first-order terms given in (7) and (8), respectively; this leads to (16).
Observe that, in step (I) of the above expression, we have first applied Proposition 2 and then Proposition 1. Indeed, denoting conveniently the Gaussian random variables that appear in the product, Proposition 2 allows the derivative to be taken out of the expectation, and Proposition 1 is then applied to the right-hand side, which yields the stated expression.
In step (II) of expression (16), we have taken advantage of the symmetry of the indexes.
Then, substituting (15) and (16) into (14), one gets the expression for the second-order moment of the approximation.
Notice that this second-order moment does not depend on t. This is consistent with the fact that we are dealing with the stochastic analysis of the stationary solution. The same feature will hold when computing the higher-order moments later.
As the mean of the approximation is null (see (13)), the variance of the solution coincides with its second-order moment.
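As a numerical illustration of the kind of expression obtained, the following sketch evaluates the standard double integral $\int_0^{\infty}\!\int_0^{\infty} h(u)\,h(v)\,\Gamma(u-v)\,\mathrm{d}u\,\mathrm{d}v$ for the second-order moment of the zeroth-order stationary response; the notation, the parameter values, and the exponential correlation model are our assumptions, not the paper's.

```python
import numpy as np
from scipy import integrate

# Assumed illustrative parameters (underdamped case) and an assumed exponential correlation.
beta, omega0 = 0.2, 1.0
omega_d = np.sqrt(omega0**2 - beta**2)
sigma2, c = 1.0, 2.0  # Gamma(tau) = sigma2 * exp(-|tau| / c)

h = lambda t: np.exp(-beta * t) * np.sin(omega_d * t) / omega_d
Gamma = lambda tau: sigma2 * np.exp(-np.abs(tau) / c)

# E[X0^2] = int_0^inf int_0^inf h(u) h(v) Gamma(u - v) du dv, truncated to a finite domain.
val, err = integrate.dblquad(lambda u, v: h(u) * h(v) * Gamma(u - v),
                             0.0, 60.0, lambda v: 0.0, lambda v: 60.0)
print(f"second-order moment of the zeroth-order response: {val:.5f} (quadrature error ~ {err:.1e})")
```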
Now, we calculate the third-order moment of the approximation, keeping terms up to first order in the perturbation parameter; this yields expression (18).
Reasoning analogously as before, we obtain expression (19) for the first addend, where we have applied Proposition 1 in the last step.
The second addend in (18) is calculated using Propositions 1 and 2, giving (20).
From (19) and (20), we obtain the third-order moment of the first-order approximation.
Using again the first-order approximation in the perturbation parameter, it can be straightforwardly seen that, in general, all the odd-order moments of the approximation vanish.
Indeed, expanding the n-th power of the approximation and keeping only the first-order term in the perturbation parameter leaves two addends. On the one hand, applying first Fubini's theorem and Proposition 3, and then Proposition 1 for n odd, one gets that the first addend vanishes. On the other hand, using the same reasoning as in (20), the second addend also vanishes: first we have applied Proposition 2, in order to take the first derivative out of the expectation, and second, we have utilized that the three factors involved depend upon n-1, 2, and 1 terms of the zeroth-order Gaussian process, respectively, together with Proposition 1 (notice that the resulting total number of Gaussian factors is odd as n is odd).
To complete the statistical information on the moments of the approximation, we also determine its fourth-order moment.
The fourth-order moment of the approximation, based on the first-order perturbation expansion, is obtained by expanding the fourth power and keeping the terms up to first order in the perturbation parameter.
Reasoning analogously as in the previous computations, we obtain the corresponding expressions for the first and the second addends. Observe that, in the last step of the above expression, we have first used Proposition 2 and then Proposition 1. From the latter, we know that 15 combinations of pairs appear, but the expression can be reduced by exploiting the symmetry of the indexes involved.
Now, we deal with the approximation of the correlation function of the solution via (6), i.e., taking the first-order approximation of the perturbation expansion; this yields (26).
The first addend in (26) corresponds to the correlation function of the zeroth-order term. It can be expressed as a double integral of the impulse response function against the correlation function of the excitation.
The two last addends in (26) represent the cross-correlation between the zeroth- and first-order terms. They are given, respectively, by analogous integral expressions.
As the mean of the approximation is null, we observe that its covariance and correlation functions coincide.
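For reference, in our notation the first addend in (26), that is, the correlation function of the zeroth-order stationary response, takes the standard form below; this is our reconstruction under the assumptions stated above, not a copy of the paper's expression.

```latex
% Correlation of the zeroth-order stationary response (our notation), with
% X_0(t) = \int_0^{\infty} h(u)\,Y(t-u)\,\mathrm{d}u and \Gamma the correlation function of Y:
\mathbb{E}\big[X_0(t)\,X_0(t+\tau)\big]
  = \int_0^{\infty}\!\!\int_0^{\infty} h(u)\,h(v)\,\Gamma(\tau+u-v)\,\mathrm{d}u\,\mathrm{d}v .
```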
4. Approximating the PDF via the Maximum Entropy Principle
So far, we have calculated approximations of the moments of the first-order approximation, obtained via the perturbation method, of the steady-state solution of the random nonlinear oscillator (3). Although this is important information, a more ambitious goal is the approximation of the PDF, since from it one can calculate key probabilistic information, such as the probability that the output lies in any specific interval of interest at any arbitrarily fixed time t. Furthermore, from the knowledge of the PDF one can easily compute confidence intervals at a prescribed confidence level, centered at the mean of the steady-state solution, which is null (see (13)). Usually the significance level is taken to be small (typically 0.05, yielding 95% confidence intervals), and the corresponding interval half-width must be determined numerically.
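A minimal sketch of this numerical determination is given below, for the zero-mean case and with a placeholder standard normal density standing in for the PDF of the steady-state solution; the names and values are ours, not the paper's.

```python
import numpy as np
from scipy import integrate, optimize

# Placeholder density centered at the (null) mean; replace it by the PDF obtained via the PME.
f = lambda x: np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)

alpha = 0.05  # significance level, giving a 95% confidence interval

def coverage_gap(c):
    # Probability mass of the symmetric interval [-c, c] minus the desired coverage 1 - alpha.
    prob, _ = integrate.quad(f, -c, c)
    return prob - (1.0 - alpha)

c_alpha = optimize.brentq(coverage_gap, 0.0, 20.0)
print(f"95% confidence interval around the null mean: [{-c_alpha:.4f}, {c_alpha:.4f}]")
```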
As we have calculated the approximations of the first four moments of the steady-state solution, a suitable method to approximate its PDF is the Principle of Maximum Entropy (PME) [33]. For t fixed, the PME seeks a PDF of the response, with a prescribed support, that maximizes the so-called Shannon entropy, defined via the functional (28), subject to the restrictions (29) and (30).
Condition (29) guarantees that the candidate function is a PDF, and the M conditions given in (30) impose that its moments match the moments obtained, in our setting, by the stochastic perturbation method. For each fixed t, the maximization of the functional (28) subject to the constraints (29) and (30) can be solved via the auxiliary Lagrange function,
where the Lagrange multipliers act as parameters. It can be seen [33] that the resulting PDF takes the form of an exponential of a polynomial whose coefficients are the Lagrange multipliers, restricted to the support interval by the corresponding characteristic (indicator) function.
In Section 3, we have approximated, via the stochastic perturbation technique, the moments of the approximate steady-state solution up to the fourth order. Therefore, to apply the PME we will take M = 4 in (30). Notice that, in practice, to calculate the Lagrange multipliers we will need to solve numerically the system of nonlinear Equations (29) and (30).
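A minimal sketch of this moment-matching step is shown below, assuming a bounded support, M = 4 moment constraints, the standard PME ansatz $f(x)\propto\exp\!\big(-\sum_{i=1}^{M}\lambda_i x^{i}\big)$ on that support, and illustrative target moments (those of a standard normal distribution); none of the numerical values are taken from the paper, and a generic root finder replaces whatever solver the authors used.

```python
import numpy as np
from scipy import integrate, optimize

a, b, M = -6.0, 6.0, 4                   # assumed support and number of moment constraints
target = np.array([0.0, 1.0, 0.0, 3.0])  # illustrative target moments E[X^k], k = 1..4

def unnormalized_pdf(x, lam):
    # PME ansatz: exp(-(lam_1 x + lam_2 x^2 + ... + lam_M x^M)); normalization fixes lambda_0.
    return np.exp(-sum(l * x ** (i + 1) for i, l in enumerate(lam)))

def moment(k, lam):
    num, _ = integrate.quad(lambda x: x**k * unnormalized_pdf(x, lam), a, b)
    den, _ = integrate.quad(lambda x: unnormalized_pdf(x, lam), a, b)
    return num / den

def residual(lam):
    # Conditions (30): moments of the PME density must match the perturbation-method moments.
    return [moment(k + 1, lam) - target[k] for k in range(M)]

lam = optimize.fsolve(residual, x0=np.array([0.0, 0.3, 0.0, 0.0]))
print("Lagrange multipliers:", lam)
print("matched moments:", [round(moment(k, lam), 4) for k in range(1, M + 1)])
```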