1. Introduction
The integration of rapidly oscillating functions arises in many applications, from fluid dynamics to mathematical physics, electronic engineering, and acoustic and electromagnetic radiation [1]. In the study of optical radiation, the problem of integrating rapidly oscillating functions arose, and it remains relevant, in scalar wave optics using the Huygens–Fresnel and Helmholtz–Kirchhoff integrals [2]. An example of numerical integration of a rapidly oscillating function in this kind of problem is given in Ref. [3]. The generalization of the theory of diffraction from the scalar theory to the vector one, described by Maxwell’s equations, is presented in the classic book by Ilyinsky, Kravtsov, and Sveshnikov [4] and in the newer revised book by Sveshnikov and Mogilevskii [5]. In these books, using the Stratton–Chu formulas, the problem of electromagnetic diffraction is reduced to solving Fredholm equations of the first and second kind, and it is further shown that it is very convenient to represent the solution as an expansion in complete systems of rapidly oscillating functions corresponding to the geometry of a particular problem. The calculation of the expansion coefficients in such problems reduces to the integration of rapidly oscillating functions. An example of solving the problem of describing structured laser radiation governed by Maxwell’s equations is given in Refs. [6,7].
Numerical calculations of integrals of rapidly oscillating functions are often a part of large-scale computational projects for solving problems of describing electromagnetic radiation. In addition, there are research projects with large amounts of numerical integration of rapidly oscillating functions devoted to the study of other, non-optical phenomena, e.g., the geolocation method [8,9,10]. Another example, the problem of the interaction between particles affected by external electromagnetic fields and matter, is one of the important topics in elementary particle physics (see, e.g., [11]). The theoretical basis of the muon decay process in an electromagnetic field was developed mainly in Ref. [12]. Practical calculation of the muon decay rate using the formulae of Ref. [12] is difficult due to the summation of a significant number of integrals of rapidly oscillating functions with varying amplitude functions and constant phase frequencies. Such serial integration with various amplitude functions on similar grids appears in other applied areas as well.
Whereas at the early stages the problem of integrating rapidly oscillating functions was solved by improving the traditional methods of numerical integration, later, as the number of such problems grew and the accuracy requirements increased, researchers were forced to look for alternative approaches.
Levin’s approach [13] to the integration of rapidly oscillating functions consists in passing to the calculation of the anti-derivative of the integrand using a collocation procedure in physical space. In this case, the elements of the degenerate [14,15] differentiation matrix of the collocation method are functions of the coordinates of the grid points; the matrix elements are calculated using very simple expressions.
Let us consider the solution of the problem of integrating a rapidly oscillating function with a non-linear phase. According to Levin’s approach [13,16], the calculation of the integral of a rapidly oscillating function with a non-linear phase,

$$I[f] = \int_a^b f(x)\, e^{i\omega g(x)}\, dx, \qquad (1)$$

where $f(x)$ is a slowly varying amplitude, $g(x)$ is the phase function, and $\omega \gg 1$, reduces to solving an ordinary differential equation, i.e., to the determination of the anti-derivative $p(x)$:

$$p'(x) + i\omega g'(x)\, p(x) = f(x). \qquad (2)$$

Levin’s paper states that Equation (2) has a particular solution that is not rapidly oscillating. We will search for this particular solution by the collocation method in the class of expansions in Chebyshev polynomials. This approach is justified by the fact that the complete system of Chebyshev polynomials is optimal in the uniform metric and almost optimal in the quadratic metric [15,17].
If the unknown function $p(x)$ is a solution to Equation (2), then the result of integration can be obtained according to the formula

$$I[f] = p(b)\, e^{i\omega g(b)} - p(a)\, e^{i\omega g(a)}. \qquad (3)$$
The method proposed by Levin for both the one-dimensional and multidimensional cases was published in [13,16] and then thoroughly studied in [18]. The method is presented in great detail in the well-known monograph [1], which describes the evolution of numerical methods for integrating rapidly oscillating functions at the beginning of this century.
In [11,19], various options for implementing this method in the case of various non-linear frequency functions of fairly general form were studied, and many applied problems were solved. The method of integrating functions with a linear phase was studied in [20]. The simple structure of the system of linear algebraic equations (SLAE) to which the integration problem is reduced in this case allowed the authors of [21] to construct a very fast and economical algorithm for calculating the integrals.
In the case of a non-linear phase, a particular difficulty for the existing algorithms is the degenerate situation: the presence of singular, so-called stationary phase points. A stationary phase point of an oscillatory integral is a point where the first derivative of the phase function vanishes. In this case, the algorithms considered above do not work reliably enough, since they require solving ill-conditioned SLAEs. In this regard, we note a recently published work devoted to an in-depth study of Levin’s method [19].
In Refs. [18,21,22,23,24], the authors consider various approaches aimed at fast and stable methods for integrating rapidly oscillating functions, often reducing these methods to solving SLAEs. As noted in these papers, the main problem requiring further research is that the corresponding systems of linear algebraic equations are ill-conditioned. One of the signs that a system is ill-conditioned is the presence of stationary points of the phase function [18,25]. This is one of the fundamental problems for methods of integrating rapidly oscillating functions. The method we propose here for solving ODEs, based on integrating factors, makes it possible to overcome the drawbacks of existing Levin-type approaches in the presence of stationary points. The algorithm stably determines the solution both in the absence of stationary points and in their presence, i.e., when the derivative of the phase function vanishes within the integration interval.
Usually, the collocation method for solving an ODE is formulated in physical space. A convenient and promising tool for finding a solution based on the Chebyshev approximation is the use of differentiation and integration matrices [26], whose elements are functions of the coordinates of the grid points and are calculated using rather simple formulas. However, specific physical-space implementations of the collocation method use degenerate Chebyshev differentiation matrices, which, moreover, have eigenvalues that differ by orders of magnitude. In this case, it may be impossible to construct a stable numerical algorithm for solving the resulting SLAEs. If instead we search for the expansion coefficients of the solution of the differential equation in the spectral space, then the problem reduces to sparse and well-conditioned SLAEs. The properties of the algorithm are improved both by using the sparse structure of the differentiation and integration matrices and by taking into account the discrete orthogonality of the Chebyshev matrices when choosing the Chebyshev–Gauss–Lobatto computational grid.
Differentiation matrices are given explicitly or implicitly in many publications concerning the use of pseudospectral collocation methods [6,7,8]. Solving ODEs with degenerate differentiation matrices in physical and/or spectral spaces naturally leads to poor conditioning of the systems of linear algebraic equations to be solved. Refs. [5,6,17,18,20] formulate the specific features of differentiation and integration matrices considered on similar or mutually dependent grids. The explicit use of differentiation matrices on Chebyshev–Gauss–Lobatto grids when solving ODEs allows us to propose stable and economical methods for solving ODEs. We use integration matrices on Chebyshev–Gauss–Lobatto grids in the spectral representation. For more details about the form and properties of these matrices, see Refs. [6,17,18,20].
In the present paper, we propose an original numerical method for solving an ordinary differential equation based on the multistage algorithm [26] of finding a single-parameter family of solutions of the simplest first-order differential equation $u'(x) = f(x)$ without initial or boundary conditions. This algorithm was developed for solving linear ODEs [27] based on constructing and using the integrating factor method, once proposed by the Bernoulli brothers [28].
The novelty of the proposed approach [26,27] to solving Cauchy problems for ODEs of the first and second order is to first select a class (set) of functions that satisfy the differential equation using a stable and computationally simple method for calculating the expansion coefficients of the derivative of the future solution. Then, we calculate the coefficients of the expansion of the future solution (except for the zero one) from the calculated expansion coefficients of the derivative using the Chebyshev integration matrix [29]. Only after that, from the set of solutions obtained in this way, do we select those corresponding to the given initial conditions.
In this paper, the calculation of the integral of rapidly oscillating functions in accordance with Levin’s approach is reduced to solving a linear ordinary differential equation with a complex coefficient. The approximate solution of ODE (2) is constructed based on the method of integrating factors [28] in the form of an expansion into a truncated series in Chebyshev polynomials of the first kind,

$$p(x) \approx \sum_{k=0}^{n} a_k T_k(x).$$
First, the main “building block” of the method is presented, namely the algorithm for obtaining an approximate general solution to the simplest Cauchy problem, which in turn consists of several sub-tasks. The solution of the first sub-task is the calculation of the interpolation coefficients of the derivative of the desired solution by multiplying the transposed Chebyshev matrix by the vector of derivative values (the function on the right-hand side of the equation). The search for the expansion coefficients of the derivative is carried out on the Gauss–Lobatto grid, which makes it possible to use the discrete orthogonality of Chebyshev polynomials of the first kind. This allows the expansion coefficients to be calculated by simple division of the product components by $n$ or $n/2$. Introducing the differentiation matrix and the “inverse” integration matrix in the spectral space reveals the relationship, in matrix form, between the expansion coefficients of a function and the expansion coefficients of its derivative with respect to the same basis. In fact, the calculation of the leading coefficients of the solution of the simplest ODE reduces to multiplying the tridiagonal integration matrix by the calculated vector of derivative interpolation coefficients. To solve the Cauchy problem, it then suffices to use the initial or boundary condition to determine the zero coefficient of the solution.
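A minimal sketch of this whole building block under the assumptions above (the helper names and the test problem are illustrative, not taken from the cited works):

```python
import numpy as np

def cgl_grid(n):
    """Chebyshev-Gauss-Lobatto points x_j = cos(pi*j/n), j = 0..n."""
    return np.cos(np.pi * np.arange(n + 1) / n)

def cheb_coeffs(values):
    """Interpolation coefficients on the CGL grid via discrete orthogonality:
    multiply by the (transposed) Chebyshev matrix, then divide by n or n/2."""
    n = len(values) - 1
    j = np.arange(n + 1)
    T = np.cos(np.pi * np.outer(j, j) / n)   # T[k, j] = T_k(x_j)
    w = np.ones(n + 1); w[[0, -1]] = 0.5     # half-weights at the endpoints
    c = T @ (w * values)
    c[[0, -1]] /= n
    c[1:-1] /= n / 2.0
    return c

def antiderivative_coeffs(b):
    """Apply the tridiagonal integration matrix: a_k for k >= 1; a_0 stays free."""
    n = len(b) - 1
    a = np.zeros(n + 1, dtype=b.dtype)
    for k in range(1, n + 1):
        prev = 2.0 * b[0] if k == 1 else b[k - 1]
        nxt = b[k + 1] if k + 1 <= n else 0.0
        a[k] = (prev - nxt) / (2.0 * k)
    return a

def cheb_eval(a, x):
    """Evaluate sum_k a_k T_k(x) via T_k(cos t) = cos(k t), scalar x."""
    return np.cos(np.arange(len(a)) * np.arccos(x)) @ a

# Reconstruct u with u'(x) = cos(x), u(-1) = sin(-1); exact answer u = sin.
n = 32
b = cheb_coeffs(np.cos(cgl_grid(n)))
a = antiderivative_coeffs(b)
a[0] = np.sin(-1.0) - cheb_eval(a, -1.0)     # fix a_0 from the initial condition
print(abs(cheb_eval(a, 0.3) - np.sin(0.3)))  # ~ machine precision
```

Note that the only dense operation is the matrix–vector product in the interpolation step; the integration step is a short three-term recurrence.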
In the following Sections, we consider a technique for solving ODE (2) and show that the problem of finding an integrating factor and then solving the ODE can be reduced to the already solved problem of reconstructing the expansion coefficients of a function from the values of its derivative. In fact, to solve a first-order ODE, it suffices to apply the algorithm for reconstructing a function from its derivative twice.
Methods
One of the most efficient methods of approximating a function $f(x)$ is to find the interpolating polynomial $p_n(x)$ of degree $n$ satisfying the condition $p_n(x_j) = f(x_j)$ for a set of collocation points $x_j$, $j = 0, \dots, n$. In practice, the Chebyshev points are usually chosen as collocation points, and the obtained interpolation polynomial, known as the Chebyshev interpolant, is an almost optimal approximation to $f(x)$ in the space of polynomials of degree $n$ [30]. A usual way to calculate the interpolation polynomial is to use the Lagrange polynomial basis, which can be stably implemented using the barycentric interpolation formula [5,20]. We will use the basis of Chebyshev polynomials of the first kind to calculate the interpolation polynomial. The interpolation itself is implemented on Chebyshev–Gauss–Lobatto grids, allowing for the active use of the discrete orthogonality of Chebyshev matrices and providing the high interpolative quality of this approach [30] with minimal computational cost.
The main idea of spectral methods is to represent the solution as a truncated series expansion in terms of known basis functions. The linear transformation (derivative operator) that maps the vector of expansion coefficients of a function into the vector of expansion coefficients of its derivative in a similar series in the same basis is known as the matrix of spectral differentiation.
The expansion of an $n$ times differentiable function $f(x)$ in Chebyshev polynomials $T_k(x)$ has the form

$$f(x) = \sum_{k=0}^{\infty} a_k T_k(x).$$

As a polynomial interpolant, the series truncated at degree $n$ is usually considered:

$$f(x) \approx p_n(x) = \sum_{k=0}^{n} a_k T_k(x).$$
For a function having $m$ continuous derivatives, this approximation has accuracy of the order $O(n^{-m})$, and for an infinitely differentiable function the error tends to zero superalgebraically [15,16].
2. Reconstructing a Function from Its Derivative: The Cauchy Problem
Following [26], as the basic algorithm for calculating an integral of a rapidly oscillating function by Levin’s method, we consider the problem of reconstructing the expansion coefficients $a_k$ of a differentiable function $u(x)$ from the expansion coefficients $b_k$ of its derivative $u'(x)$, defined on the interval $[-1, 1]$:

$$u'(x) = f(x). \qquad (4)$$

If no additional restrictions are imposed on the solution, it is determined up to the zero coefficient $a_0$. If an initial/boundary condition (or other additional condition at some internal point $x_0$) is specified,

$$u(x_0) = u_0, \qquad (5)$$

then the problem of finding the function $u(x)$ that satisfies conditions (4)–(5) is called a single-point problem.
The choice of the basis of orthogonal polynomials is determined by the approximation and orthogonality properties discussed above; the system of Chebyshev polynomials of the first kind satisfies these requirements.
The next problem is the method of calculating the expansion coefficients; it can often be simplified by the choice of collocation nodes. Let us consider in more detail the various stages of the approximate solution of the simplest differential Equation (4) with additional condition (5):
Interpolation—calculation of the coefficients of the derivative expansion in Chebyshev polynomials;
Integration—calculation of the expansion coefficients of the solution from the expansion coefficients of the derivative;
Completion—use of the additional condition to determine the missing zero coefficient of the solution expansion.
To solve these problems consistently, we will consider as an approximate solution an interpolation polynomial obtained by expanding the solution into a series in Chebyshev polynomials of the first kind on a certain grid. The approximation of a continuous function is obtained by discarding the terms of the Chebyshev series whose magnitude is small. Unlike approximations obtained using other power series, the approximation in Chebyshev polynomials has the property of near-optimality, minimizing the number of terms needed to approximate the function by polynomials with a given accuracy [26,30,31]. The approximation by Chebyshev polynomials is known to be the best in the uniform metric and almost optimal in the quadratic one. This is also related to the fact that the approximation based on the Chebyshev series turns out to be rather close to the best uniform approximation (among polynomials of the same degree), but it is easier to find. In addition, it makes it possible to avoid the Runge phenomenon upon a reasonable choice of interpolation points.
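The effect is easy to reproduce; the sketch below (Runge's classical example, with SciPy's barycentric interpolator as a convenient stable evaluator) compares equispaced and Chebyshev nodes for the same polynomial degree:

```python
# Runge's example: interpolating 1/(1 + 25 x^2) with degree-20 polynomials.
import numpy as np
from scipy.interpolate import BarycentricInterpolator

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)
n = 20
xe = np.linspace(-1, 1, n + 1)                 # equispaced nodes
xc = np.cos(np.pi * np.arange(n + 1) / n)      # Chebyshev-Gauss-Lobatto nodes
xx = np.linspace(-1, 1, 2001)
for nodes, name in ((xe, "equispaced"), (xc, "Chebyshev")):
    P = BarycentricInterpolator(nodes, f(nodes))
    print(name, np.max(np.abs(P(xx) - f(xx))))
# The equispaced error is huge (Runge phenomenon); the Chebyshev error is small.
```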
Without loss of generality, below we will assume that the domain of the solution is the interval $[-1, 1]$. Consider the problem of finding the derivative of the desired function, or rather, the polynomial approximating $u'(x)$ and satisfying the collocation condition at a given finite number of points in the interval $[-1, 1]$. The pseudospectral (collocation) method for solving the problem consists in representing the desired approximating polynomial as an expansion into a finite series in Chebyshev polynomials of the first kind, $T_k(x)$, defined in the Hilbert space of functions on the segment $[-1, 1]$.
Algorithm Based on the Use of Integration Matrices
Let us represent the desired function $u(x)$, the future approximate solution to Equation (4), as an expansion in a finite set of Chebyshev polynomials $T_k(x)$, $k = 0, \dots, n$:

$$u(x) \approx \sum_{k=0}^{n} a_k T_k(x). \qquad (6)$$

By differentiation of Equation (6), the first derivative of this function can be presented as a series:

$$u'(x) \approx \sum_{k=0}^{n} a_k T'_k(x). \qquad (7)$$

At the same time, the derivative, as a polynomial of degree $n$, can be expanded in a series in the initial basis $T_k(x)$, $k = 0, \dots, n$, with coefficients $b_k$:

$$u'(x) \approx \sum_{k=0}^{n} b_k T_k(x). \qquad (8)$$
3. Differentiation and Integration Matrices
From the properties of Chebyshev polynomials [6,23], it follows that the derivative of the polynomial $T_k(x)$ of the $k$-th degree can be written as an expansion in Chebyshev polynomials of lower degrees as a sum:

$$T'_k(x) = 2k \sum_{j=0}^{\lfloor (k-1)/2 \rfloor} T_{k-1-2j}(x) \;-\; k\,(k \bmod 2)\, T_0(x).$$

Here, the notation $\lfloor x \rfloor$ denotes the greatest integer not exceeding $x$, and the expression $(k \bmod 2)$ equals 1 if $k$ is odd and 0 if $k$ is even.
The integral of a Chebyshev polynomial can be written in a compact form [32]:

$$\int T_k(x)\,dx = \frac{1}{2}\left(\frac{T_{k+1}(x)}{k+1} - \frac{T_{k-1}(x)}{k-1}\right) + K, \quad k \ge 2, \qquad \int T_0\,dx = T_1 + K, \qquad \int T_1\,dx = \frac{T_2}{4} + K,$$

where $K$ is an arbitrary constant, often chosen so that the anti-derivative vanishes at a prescribed reference point.
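These recurrences are easy to cross-check against NumPy's Chebyshev utilities (chebder and chebint operate on coefficient vectors in exactly this basis):

```python
# Cross-check of the recurrences: the derivative of T_4 and an
# anti-derivative of T_3 in the Chebyshev coefficient basis.
import numpy as np
from numpy.polynomial import chebyshev as C

e4 = np.zeros(5); e4[4] = 1.0     # coefficient vector of T_4
print(C.chebder(e4))              # [0, 8, 0, 8]  =>  T_4' = 8*T_3 + 8*T_1

e3 = np.zeros(4); e3[3] = 1.0     # coefficient vector of T_3
print(C.chebint(e3, lbnd=0))      # [-3/8, 0, -1/4, 0, 1/8]
                                  # =>  T_4/8 - T_2/4 + K, with K fixed by lbnd
```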
Using these recurrence relations for Chebyshev polynomials of the first kind and their derivatives and anti-derivatives [29,33], and equating the coefficients of like polynomials in Equations (7) and (8), we arrive [34] at the dependence of the expansion coefficients $a_k$ of the desired solution on the expansion coefficients $b_k$ of the derivative, and vice versa.
Based on the above formulae, it is thus possible to establish a relationship between the expansion coefficients of a function in Chebyshev polynomials of the first kind and the expansion coefficients of its derivative in the same basis.
In matrix form, this dependence can be represented using the matrix of differentiation $D$ and the matrix of integration $J$ as

$$\mathbf{b} = D\,\mathbf{a} \qquad \text{and} \qquad \mathbf{a} = J\,\mathbf{b}.$$

The infinite matrix of differentiation $D$ for Chebyshev polynomials has the form [22,30,35]

$$D = \begin{pmatrix} 0 & 1 & 0 & 3 & 0 & 5 & \cdots \\ 0 & 0 & 4 & 0 & 8 & 0 & \cdots \\ 0 & 0 & 0 & 6 & 0 & 10 & \cdots \\ 0 & 0 & 0 & 0 & 8 & 0 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix},$$

and the infinite tridiagonal matrix of integration (anti-differentiation) has the following form [26,35]:

$$J = \begin{pmatrix} 0 & 0 & 0 & 0 & \cdots \\ 1 & 0 & -\tfrac{1}{2} & 0 & \cdots \\ 0 & \tfrac{1}{4} & 0 & -\tfrac{1}{4} & \cdots \\ 0 & 0 & \tfrac{1}{6} & 0 & \ddots \\ \vdots & \vdots & \vdots & \ddots & \ddots \end{pmatrix}.$$
The formula that (partially) determines the coefficients of the expansion of a function from the known coefficients of the expansion of its derivative is written in matrix form as $\mathbf{a} = J\,\mathbf{b}$, with the infinite tridiagonal integration matrix taken in truncated form. Conversely, the expansion coefficients of the derivative are calculated from the known expansion coefficients of the function by the formula $\mathbf{b} = D\,\mathbf{a}$.
Let us make use of the relation $\mathbf{a} = J\,\mathbf{b}$ in the finite-dimensional formulation:

$$\mathbf{a} = J_n\,\mathbf{b}, \qquad (9)$$

where $J_n$ is the $(n+1) \times (n+1)$ truncation of $J$. Then, the dependence of the coefficients $a_k$ on $b_k$ can be realized algorithmically by means of the following explicit formulae:

$$a_1 = \frac{2b_0 - b_2}{2}, \qquad a_k = \frac{b_{k-1} - b_{k+1}}{2k}, \quad k = 2, \dots, n-1, \qquad a_n = \frac{b_{n-1}}{2n}. \qquad (10)$$
Thus, knowing the expansion coefficients $b_k$ of the function $f(x)$ of problem (4) in Chebyshev polynomials of the first kind, we can reconstruct the expansion coefficients (except the zero one) of the desired function in the same basis using Equations (9) and (10) [29,34].
Obviously, the availability of the tridiagonal integration matrix offers a simple and efficient method of calculating the expansion coefficients of the solution from the expansion coefficients of its derivative.
Let us return to the problem of interpolating $u'(x)$, i.e., to the calculation of the spectral coefficients of the derivative by the collocation method: we choose the coefficients $b_k$ of the interpolation polynomial expansion so that the following equalities hold at the collocation points $x_j$:

$$\sum_{k=0}^{n} b_k T_k(x_j) = f(x_j), \quad j = 0, \dots, n. \qquad (11)$$

The latter means that the coefficients $b_k$ must be a solution to the system of linear algebraic Equations (11). In matrix form, this system reads

$$T\,\mathbf{b} = \mathbf{f}. \qquad (12)$$

Here, we use the notation $T_{jk} = T_k(x_j)$ for the value of the polynomial of degree $k$ calculated at the point $x_j$; $T$ is the Chebyshev matrix mapping a point (vector) from the space of coefficients into the space of function values.
For the non-degeneracy of the system of Equations (11), it is sufficient that all points of the grid be distinct. The matrix $T$ of the system is, in this case, a matrix of general form, and solving the SLAE would require the Gauss method or LU decomposition, which is expensive in terms of the number of operations. The calculation of the vector $\mathbf{b}$ (i.e., the search for the normal solution) can be simplified by using the arbitrariness in the position of the collocation points. On the Gauss–Lobatto grid, the Chebyshev matrices possess the property of discrete orthogonality. Let us choose the set of collocation points as $x_j = \cos(\pi j / n)$, $j = 0, \dots, n$, and multiply the first and the last equations of (12) by $1/\sqrt{2}$. We obtain an equivalent “modified” system with a new matrix $\widetilde{T}$ (instead of $T$) and a vector $\widetilde{\mathbf{f}}$ instead of $\mathbf{f}$. The new system possesses the property of discrete “orthogonality”: its multiplication from the left by the transposed matrix $\widetilde{T}^{\mathsf{T}}$ yields a diagonal matrix:

$$\widetilde{T}^{\mathsf{T}}\,\widetilde{T} = \Lambda = \mathrm{diag}\left(n, \tfrac{n}{2}, \dots, \tfrac{n}{2}, n\right).$$
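This property is easy to verify numerically (a small self-contained check; the scaling by $1/\sqrt{2}$ follows the modification just described):

```python
# Discrete "orthogonality" on the Gauss-Lobatto grid: after scaling the
# first and last rows by 1/sqrt(2), the Gram matrix T~' T~ is diagonal.
import numpy as np

n = 8
j = np.arange(n + 1)
A = np.cos(np.pi * np.outer(j, j) / n)       # A[j, k] = T_k(x_j)
s = np.ones(n + 1); s[[0, -1]] = 1.0 / np.sqrt(2.0)
At = s[:, None] * A                          # modified matrix
G = At.T @ At
print(np.allclose(G, np.diag(np.diag(G))))   # True
print(np.diag(G))                            # [n, n/2, ..., n/2, n]
```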
Let us transform system (12) by multiplying it from the left by the transposed matrix $\widetilde{T}^{\mathsf{T}}$; we arrive at a matrix equation with a diagonal matrix for calculating the desired expansion coefficients $b_k$, i.e., the normal solution to system (11):

$$\Lambda\,\mathbf{b} = \widetilde{T}^{\mathsf{T}}\,\widetilde{\mathbf{f}}.$$

Denoting by $\mathbf{d}$ the product of the matrix $\widetilde{T}^{\mathsf{T}}$ and the vector $\widetilde{\mathbf{f}}$ on the right-hand side of the equation, it is easy to write the desired expansion coefficients for the derivative of the solution in the explicit form

$$b_0 = \frac{d_0}{n}, \qquad b_k = \frac{2\,d_k}{n}, \quad k = 1, \dots, n-1, \qquad b_n = \frac{d_n}{n}. \qquad (13)$$
Therefore, Equations (10) and (13) unambiguously determine the last $n$ coefficients $a_1, \dots, a_n$ of the expansion of the desired approximate solution in Chebyshev polynomials. Namely, the senior coefficients of the Cauchy problem solution are easily expressed through the interpolation coefficients of the derivative expansion:

$$\mathbf{a} = J_n\,\Lambda^{-1}\,\widetilde{T}^{\mathsf{T}}\,\widetilde{\mathbf{f}}, \qquad (14)$$

where $\Lambda$ is the diagonal matrix from Equation (13).
Hence, Equations (14) unambiguously define the last $n$ coefficients of the expansion of the desired function $u(x)$; to determine one more coefficient, $a_0$, it is necessary to impose at least one additional condition.
4. Integrating Factor
We now return to the solution of the ordinary differential Equation (2), which, via Levin’s method, will give us the desired value of the integral according to Formula (3).
The equation for the anti-derivative is an ordinary differential equation with the complex coefficient $i\omega q(x)$ and the real right-hand side $f(x)$,

$$p'(x) + i\omega q(x)\, p(x) = f(x), \qquad (15)$$

where the notation $q(x) = g'(x)$
is introduced. We will seek the solution of the equation using the numerical method of integrating factors proposed in Ref. [27]. Let us try to find an integrating complex factor $\mu(x)$ such that, by multiplying Equation (15) by it,

$$\mu(x)\,p'(x) + i\omega q(x)\,\mu(x)\,p(x) = \mu(x)\,f(x), \qquad (16)$$

we reduce the problem of finding the anti-derivative to a simpler equation of the form (4) for finding the product $\mu(x)\,p(x)$:

$$\bigl(\mu(x)\,p(x)\bigr)' = \mu(x)\,f(x).$$
When the functions $f(x)$ and $q(x)$ are continuous, there is an integrating factor $\mu(x)$, nonzero in the entire domain of definition and satisfying the equality

$$\bigl(\mu(x)\,p(x)\bigr)' = \mu'(x)\,p(x) + \mu(x)\,p'(x) = \mu(x)\,f(x).$$

Comparing this with Equation (16) and dividing by $\mu(x)$, we obtain a differential equation for calculating the integrating factor, the auxiliary equation

$$\frac{\mu'(x)}{\mu(x)} = i\omega\, q(x), \qquad (17)$$

the left-hand side of which is the logarithmic derivative of the integrating factor, $(\ln \mu(x))'$.
Thus, the search for the derivative of the logarithm of $\mu(x)$ is reduced to the problem solved above, namely the reconstruction of a polynomial approximation of a function from its derivative. Integrating equality (17), we obtain

$$\ln \mu(x) = i\omega \int q(x)\,dx + ik. \qquad (18)$$
Here, $k$ is an unknown integration constant; for convenience of notation, we write it multiplied by the imaginary unit, which does not affect the ultimate result but simplifies the expression for the integrating factor. With this convention, we denote by $G(x)$ the integral of the frequency function $q(x)$ and write in explicit form the expression for calculating the integrating factor:

$$\mu(x) = e^{\,i\left(\omega G(x) + k\right)}. \qquad (19)$$
Let us denote by $\mathbf{c}$ the vector of expansion coefficients of the polynomial approximating $q(x)$ on the grid $x_j = \cos(\pi j / n)$, $j = 0, \dots, n$, calculated in correspondence with the first part of the algorithm for reconstructing the expansion coefficients of a function, i.e., by expressions of the form (13). To simplify further calculations, we denote the vector of the values of the Chebyshev polynomials of the first kind by $\mathbf{T}(x) = \bigl(T_0(x), \dots, T_n(x)\bigr)^{\mathsf{T}}$. Then, passing to the formula for calculating the integral at an arbitrary point from the known coefficients, we obtain

$$G(x) = \boldsymbol{\gamma}^{\mathsf{T}}\,\mathbf{T}(x), \qquad \boldsymbol{\gamma} = J_n\,\mathbf{c}. \qquad (20)$$

The formula for calculating the logarithm of the integrating factor takes the form

$$\ln \mu(x) = i\bigl(\omega G(x) + k\bigr).$$

The integration constant $k$ in this formula remains indefinite.
Using the properties of the exponential function of a complex-valued argument, we can present the integrating factor as the following function:

$$\mu(x) = e^{ik}\, e^{\,i\omega G(x)} = C\, e^{\,i\omega G(x)}, \qquad (21)$$

where the complex factor $C = e^{ik}$, expressed through the integration constant $k$, is present as an indefinite coefficient.
5. Linear Differential Equations of the First Order
We substitute the obtained expression for the integrating factor into the relation

$$\bigl(\mu(x)\,p(x)\bigr)' = \mu(x)\,f(x), \qquad (22)$$

where $\mu(x) = C\,e^{\,i\omega G(x)}$. From this expression, it is seen that the equation does not change upon division of the factor $\mu(x)$ by the constant coefficient $C$. Therefore, we will consider the integrating factor in the form of a simplified complex-valued function,

$$\mu(x) = e^{\,i\omega G(x)} = \cos\bigl(\omega G(x)\bigr) + i\,\sin\bigl(\omega G(x)\bigr),$$

where $G(x)$ is calculated according to Equation (20). The left-hand side of Equation (22) contains the derivative of the product of the integrating factor and the desired solution. Therefore, in accordance with the algorithm of reconstructing a function from its derivative by Equations (9)–(14), we obtain the possibility of a stable and reliable calculation of the desired solution $p(x)$ multiplied by the integrating factor $\mu(x)$:

$$\mu(x)\,p(x) = \int \mu(x)\,f(x)\,dx. \qquad (23)$$
Thus, the problem of finding a solution to ODE (22) is reduced to the same basic problem of reconstructing a function from its known derivative; in the present case, the reconstructed function is the product $\mu(x)\,p(x)$, in which $\mu(x) \ne 0$ in the entire interval where the differential equation is defined. The problem can be solved using the method described above, i.e., by determining the spectral coefficients of the expansion of the function in Chebyshev polynomials provided that the expansion coefficients of the derivative of this function are known. The only difference is that the real and imaginary parts of the product $\mu(x)\,p(x)$ are to be reconstructed separately.
The problem thus splits into two subproblems of calculating the real and imaginary components of the solution of the differential Equation (23). Let us denote by $\beta_k$ the coefficients of the expansion of the product $\mu(x)f(x)$, i.e., $\mu(x)f(x) = \sum_{k=0}^{n} \beta_k T_k(x)$.
We calculate the expansion coefficients of the real and imaginary components of the product $\mu(x)f(x)$ in accordance with the basic algorithm of reconstructing the coefficients of a function from the interpolation coefficients of its derivative. Using Equations (9)–(14) and the integration matrix, we calculate the expansion coefficients of the product $\mu(x)\,p(x)$ in the Chebyshev basis. Let us denote them by $\alpha_k$, so that

$$\mu(x)\,p(x) = \sum_{k=0}^{n} \alpha_k T_k(x). \qquad (24)$$

Thus, the formula for calculating the product of the solution and the integrating factor in the entire interval of integration takes the form

$$\mu(x)\,p(x) = \boldsymbol{\alpha}^{\mathsf{T}}\,\mathbf{T}(x). \qquad (25)$$
Here, one of the factors, $\mu(x) = e^{\,i\omega G(x)}$, is an exponential function. We multiply Equation (25) by the nonzero function $e^{-i\omega G(x)}$ and arrive at an explicit expression for the desired solution,

$$p(x) = e^{-i\omega G(x)} \sum_{k=0}^{n} \alpha_k T_k(x). \qquad (26)$$

Writing out explicitly (to simplify the program implementation) the real and imaginary parts in Equation (26), with $\sum_{k} \alpha_k T_k(x) = S_r(x) + i\,S_i(x)$, we obtain

$$p(x) = \bigl[S_r(x)\cos(\omega G(x)) + S_i(x)\sin(\omega G(x))\bigr] + i\,\bigl[S_i(x)\cos(\omega G(x)) - S_r(x)\sin(\omega G(x))\bigr]. \qquad (27)$$
From the last formula, it follows that the presence of stationary points of the phase function does not affect the properties and behavior of the algorithm (there is no low-frequency breakdown in the realization of the above algorithm).
To find the desired integral using Equation (3), it remains to calculate the values of function (27) at the two boundary points of the interval, i.e., $p(-1)$ and $p(1)$. Substituting the calculated values into Equation (3), we obtain the value of the integral of the rapidly oscillating function.
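Putting all the pieces together, the following self-contained sketch implements the two-stage procedure for a toy problem with a stationary point at $x = 0$ (the amplitude, phase, frequency, grid size, and helper names are our illustrative choices; SciPy quadrature is used only to produce a reference value):

```python
import numpy as np
from scipy.integrate import quad

omega, n = 100.0, 256
jj = np.arange(n + 1)
x = np.cos(np.pi * jj / n)                 # CGL grid: x[0] = 1, x[-1] = -1
T = np.cos(np.pi * np.outer(jj, jj) / n)   # T[k, j] = T_k(x_j)
w = np.ones(n + 1); w[[0, -1]] = 0.5       # half-weights at the endpoints

def coeffs(v):
    """Interpolation coefficients on the CGL grid (discrete orthogonality)."""
    c = T @ (w * v)
    c[[0, -1]] /= n
    c[1:-1] /= n / 2.0
    return c

def antider(b):
    """Anti-derivative coefficients via the tridiagonal integration matrix;
    the zero coefficient is left free (it cancels below)."""
    a = np.zeros(n + 1, dtype=b.dtype)
    for k in range(1, n + 1):
        prev = 2.0 * b[0] if k == 1 else b[k - 1]
        a[k] = (prev - (b[k + 1] if k < n else 0.0)) / (2.0 * k)
    return a

f = lambda t: np.exp(t)                    # slowly varying amplitude
g = lambda t: t**2                         # phase with a stationary point at 0
q = lambda t: 2.0 * t                      # q = g'

# Stage 1: G = anti-derivative of q, evaluated back on the grid.
Gx = T.T @ antider(coeffs(q(x)))           # G(x_j) = sum_k gamma_k T_k(x_j)

# Stage 2: (mu p)' = mu f with mu = exp(i omega G); reconstruct mu p.
Sx = T.T @ antider(coeffs(np.exp(1j * omega * Gx) * f(x)))

# Boundary values of p = exp(-i omega G) (mu p); free constants cancel in I.
p1 = np.exp(-1j * omega * Gx[0]) * Sx[0]
pm1 = np.exp(-1j * omega * Gx[-1]) * Sx[-1]
I = p1 * np.exp(1j * omega * g(1.0)) - pm1 * np.exp(1j * omega * g(-1.0))

re, _ = quad(lambda t: (f(t) * np.exp(1j * omega * g(t))).real, -1, 1, limit=4000)
im, _ = quad(lambda t: (f(t) * np.exp(1j * omega * g(t))).imag, -1, 1, limit=4000)
print(I, re + 1j * im)                     # the two values agree
```

Note that no linear system is solved anywhere: both stages consist only of a discrete Chebyshev transform and a short tridiagonal recurrence, which is why the stationary point at $x = 0$ causes no difficulty.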
8. Conclusions
Levin’s method and other methods of this type are very good quadrature methods for integrating rapidly oscillating functions. However, in their traditional representation they encounter a number of difficulties.
Levin’s method is a classical method for calculating integrals of rapidly oscillating functions; it consists in passing to the anti-derivative function, found as the solution of an ordinary differential equation. The difference between the values of the anti-derivative at the boundary points of the integration interval is the desired value of the integral. Solving such an ODE by the collocation method can be reduced to solving a system of linear algebraic equations. Unfortunately, the resulting SLAE becomes degenerate or ill-conditioned at low frequencies or in the presence of stationary points of the phase function.
That is why, for a long time, it was believed that Levin’s approach suffers from instability: the error in calculating the integral increases when the integrand oscillates slowly (low-frequency breakdown) or when the derivative of the phase function turns into zero (stationary points). The results of Refs. [37,38] testify that, if the system of linear algebraic equations obtained by the discretization of the differential equation is solved by means of a singular value decomposition, then no low-frequency breakdown occurs. A similar approach based on the Tikhonov regularization method was undertaken in Ref. [41]. In Bremer’s paper [39], a technique for reducing instability in the calculation of integrals of rapidly oscillating functions by dividing the integration interval into parts is presented.
The modification of Levin’s method based on the transition to the solution of the first-order ODE formed by the integrating factor method significantly expands the scope of applicability of the original Levin’s method. The solution of this intermediate ODE is reduced to the successive solution of two auxiliary ODEs of the simplest type using integration matrices. There is no need to solve the ill-conditioned SLAEs that give rise to a low-frequency breakdown when using the traditional Levin’s method. The main computational operations in the proposed method are multiplications of banded (tridiagonal) matrices by vectors, which remain well-conditioned regardless of the presence of stationary points in the phase function. The proposed approach turns out to be highly efficient in the cases of both linear and non-linear phase functions.
Here, we present an algorithm that does not experience difficulties in the presence of low-frequency oscillations or stationary points of the phase function and does not suffer a deterioration in accuracy or stability during the calculation. The proposed stable method based on integrating factors is applicable both when the integrand oscillates slowly and when stationary points are present. Note also that the proposed two-stage method (based on the use of integrating factors and integration matrices) makes Levin’s approach very promising over a wide frequency range. The presented results of numerical experiments demonstrate that the resulting two-stage algorithm for solving ODEs (Levin’s method) can efficiently and accurately calculate integrals for a large class of rapidly oscillating functions, including those with stationary points and other features.