Article

Rational Approximation Method for Stiff Initial Value Problems

by Artur Karimov 1,*, Denis Butusov 1,*, Valery Andreev 2 and Erivelton G. Nepomuceno 3
1
Youth Research Institute, St. Petersburg Electrotechnical University “LETI”, 5 Professora Popova St., 197376 Saint Petersburg, Russia
2
Department of Computer-Aided Design, St. Petersburg Electrotechnical University “LETI”, 5 Professora Popova St., 197376 Saint Petersburg, Russia
3
Centre for Ocean Energy Research, Department of Electronic Engineering, Maynooth University, W23 F2H6 Maynooth, Ireland
*
Authors to whom correspondence should be addressed.
Mathematics 2021, 9(24), 3185; https://doi.org/10.3390/math9243185
Submission received: 14 November 2021 / Revised: 5 December 2021 / Accepted: 8 December 2021 / Published: 10 December 2021
(This article belongs to the Section Computational and Applied Mathematics)

Abstract
While purely numerical methods for solving ordinary differential equations (ODEs), e.g., Runge–Kutta methods, are easy to implement, solvers that utilize analytical derivations of the right-hand side of the ODE, such as the Taylor series method, outperform them in many cases. Nevertheless, the Taylor series method is not well-suited for stiff problems since it is explicit and not A-stable. In this paper, we present a numerical-analytical method based on the rational approximation of the ODE solution, which is naturally A- and A(α)-stable. We describe the rational approximation method and consider issues of order, stability, and adaptive step control. Finally, through examples, we demonstrate the superior performance of the rational approximation method when solving highly stiff problems, comparing it with the Taylor series and Runge–Kutta methods of the same accuracy order.

1. Introduction

The basic principle in designing numerical methods for solving the initial value problem
$$\dot y = f(t, y), \qquad y(t_0) = y_0 \qquad (1)$$
is that the numerical method must fit the Taylor series expansion of the solution in a given point with the desired accuracy [1]. The Taylor series can directly be used for numerical integration as well, and a number of algorithms and software have been developed based on Taylor series expansion [2,3,4,5].
In real-life applications, the one-step Taylor series method outperforms purely numerical one-step algorithms such as the Dormand–Prince and Gragg–Bulirsch–Stoer methods when extended data types are used and small error tolerances are required. Its high computational efficiency on conventional data types can be achieved by using automatic differentiation for computing the derivatives in the Taylor expansion [3,6].
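As a toy illustration of direct Taylor-series integration, the sketch below (our own minimal example, not the cited production implementations) applies the order-4 Taylor method to the scalar ODE y′ = −y, whose derivatives y⁽ᵏ⁾ = (−1)ᵏ y are available in closed form:

```python
import math

def taylor_step(y, h, order):
    """One explicit Taylor-series step for the scalar ODE y' = -y.

    For this linear ODE every derivative is available in closed form,
    y^(k) = (-1)^k * y, so the Taylor sum can be written directly.
    Production Taylor solvers generate the derivatives by automatic
    differentiation instead of hand-coding them like this.
    """
    y_next = 0.0
    for k in range(order + 1):
        y_next += (-1.0) ** k * y * h ** k / math.factorial(k)
    return y_next

# Integrate y' = -y, y(0) = 1 over [0, 1] with 100 steps of order 4.
y, h = 1.0, 0.01
for _ in range(100):
    y = taylor_step(y, h, 4)
print(abs(y - math.exp(-1.0)))  # tiny global error
```

On this nonstiff problem the explicit Taylor step is perfectly adequate; the stiff case discussed next is where it breaks down.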
The main drawback of the explicit Taylor series method is that it can fail on stiff ODEs: it loses stability when the step size is inappropriate, or, in adaptive-step implementations, it selects small step sizes driven by stability requirements rather than accuracy limits, which results in performance loss. The formal definition of stiffness can be found in [7], and a vital feature of a numerical method suitable for efficiently solving stiff equations is A-stability and its generalizations, e.g., A(α)-stability [8]. The Taylor series method is not A-stable. Still, its stability region grows with its order [9], which may increase the performance of high-order Taylor series methods on moderately stiff problems. Recently, a modification of the Taylor series method for semi-linear problems, called the exponential Taylor series method, was proposed [10]. This idea can be used to derive Taylor-like methods that are A-stable and L-stable [11].
In general, the Taylor series approximation can be considered a variant of the solution’s polynomial approximation with respect to time step h.
However, the Taylor series is not the only variant of the function decomposition that allows for constructing integration schemes. Several other examples include the Haar wavelet decomposition and hybrid function decomposition for integer-order ODEs [12], and fractional-order general Lagrange scaling functions for fractional-order ODEs [13]. It is also well known that, in some practical cases, using rational approximation instead of polynomial approximation is more suitable, e.g., when the approximated function contains poles. L. Wuytack was among the first researchers to notice it in application to numerical integration and proposed the second-order numerical method based on the Padé approximation for solving problems with singularities [14]. Later works, e.g., [15], developed this idea. It is also notable that there is a well-known relation between multi-step methods and the Padé approximation [16]. The Padé approximation of a matrix exponential in domain-specific implementations of the exponential Taylor method can be highly efficient [17].
An attempt to analyze the applicability of the Padé type rational approximation methods to the one-dimensional stiff problem was made by M. Gadella and L. Lara [18]. However, the high potential of the rational approximation methods in solving stiff multidimensional problems has still not been investigated.
This paper aims to fill this gap by introducing a family of A- and A(α)-stable rational approximation methods and by studying their performance on stiff problems. The paper is organized as follows. In Section 2, we introduce the theoretical background of the proposed rational approximation method. Section 3 presents the stability analysis and proves that the proposed method is A- or A(α)-stable depending on the order of accuracy. In Section 4, we introduce an adaptive step control technique for this method. In Section 5, we present the experimental results. Section 6 gives the conclusions.
The following special notation is accepted in the paper:
  • ∘ denotes the Hadamard (element-wise) multiplication;
  • ⊘ denotes an arbitrary binary vector operation.

2. Basics of Rational Approximation in Numerical Integration

For a given initial value problem (1), where y is a vector-valued phase variable and t is a scalar (time), the approximate solution obtained at the next time point is denoted as $y_{n+1} \approx y(t_{n+1})$. Denote the time step $h = t_{n+1} - t_n$. Then, consider the Taylor series that approximates the solution:
$$y_{n+1} = \sum_{i=0}^{\infty} \frac{y_n^{(i)}}{i!} h^i. \qquad (2)$$
Consider a rational approximation of $y_{n+1}$ as a fraction of two polynomials $P_L(h)$ and $Q_M(h)$:
$$P_L(h) = p_0 + p_1 h + \ldots + p_L h^L, \qquad Q_M(h) = q_0 + q_1 h + \ldots + q_M h^M,$$
where L and M are the degrees of these polynomials. The rational approximation of the solution reads:
$$y_{n+1} = \frac{P_L(h)}{Q_M(h)}. \qquad (3)$$
A number of reliable algorithms for finding the rational approximant (3) exist [19]. Here, we use the most straightforward approach. Let $M + L = p$, where p is a certain natural number. Assuming that (2) and (3) must give the same result up to the error term $O(h^{p+1})$, it follows that
$$\sum_{i=0}^{\infty} \frac{y^{(i)}}{i!} h^i - \frac{P_L(h)}{Q_M(h)} = O(h^{p+1}), \qquad (4)$$
which results in a system of equations:
$$\begin{aligned}
p_0 &= y_n q_0,\\
p_1 &= h \dot y_n q_0 + y_n q_1,\\
p_2 &= \frac{h^2 \ddot y_n}{2} q_0 + h \dot y_n q_1 + y_n q_2,\\
&\;\;\vdots\\
p_L &= \frac{h^L y_n^{(L)}}{L!} q_0 + \frac{h^{L-1} y_n^{(L-1)}}{(L-1)!} q_1 + \cdots + y_n q_L,\\
0 &= \frac{h^{L+1} y_n^{(L+1)}}{(L+1)!} q_0 + \frac{h^L y_n^{(L)}}{L!} q_1 + \cdots + \frac{h^{L-M+1} y_n^{(L-M+1)}}{(L-M+1)!} q_M,\\
&\;\;\vdots\\
0 &= \frac{h^{L+M} y_n^{(L+M)}}{(L+M)!} q_0 + \frac{h^{L+M-1} y_n^{(L+M-1)}}{(L+M-1)!} q_1 + \cdots + \frac{h^{L} y_n^{(L)}}{L!} q_M.
\end{aligned} \qquad (5)$$
One can notice that this system is complete only if L + M = p, which is exactly the case of the Padé rational approximation. Accordingly, the Padé approximant corresponds to the rational approximation of the highest possible order of accuracy. Usually, the notation $R_{L/M}$ or $[L/M]$ is used to denote the degrees of the polynomials in the Padé approximant, with the degree of the numerator L and the degree of the denominator M. Otherwise, if $r = L + M - p > 0$, the last r equations of (5) should be omitted, and r coefficients in P and Q remain free. This option can be used to obtain rational approximants with some desired properties, but their order of accuracy is lower than that of the Padé approximant.
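The complete case L + M = p can be solved mechanically. The sketch below (helper name `pade_coeffs` is ours) works with plain Taylor coefficients, normalizes q₀ = 1, obtains q from the last M equations, and then reads off p from the first L + 1; exact rational arithmetic makes the result easy to inspect. As a check, it recovers the [2/2] Padé approximant of eᶻ:

```python
from fractions import Fraction
from math import factorial

def pade_coeffs(c, L, M):
    """[L/M] Pade coefficients (p, q) from Taylor coefficients c[0..L+M],
    normalised so that q[0] = 1. The last M matching equations determine
    q1..qM; the first L+1 then give p directly. Plain Gaussian elimination
    over exact rationals."""
    # M x M system: sum_{j=1}^{M} c[k-j]*q[j] = -c[k] for k = L+1..L+M.
    A = [[(c[k - j] if k - j >= 0 else Fraction(0)) for j in range(1, M + 1)]
         for k in range(L + 1, L + M + 1)]
    rhs = [-c[k] for k in range(L + 1, L + M + 1)]
    # Forward elimination with partial pivoting (exact arithmetic).
    for col in range(M):
        piv = max(range(col, M), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, M):
            m = A[r][col] / A[col][col]
            for cc in range(col, M):
                A[r][cc] -= m * A[col][cc]
            rhs[r] -= m * rhs[col]
    # Back substitution.
    q = [Fraction(0)] * (M + 1)
    q[0] = Fraction(1)
    for r in range(M - 1, -1, -1):
        s = rhs[r] - sum(A[r][j] * q[j + 1] for j in range(r + 1, M))
        q[r + 1] = s / A[r][r]
    p = [sum(c[k - j] * q[j] for j in range(0, min(k, M) + 1))
         for k in range(L + 1)]
    return p, q

# [2/2] Pade approximant of e^z from its Taylor coefficients 1/i!.
c = [Fraction(1, factorial(i)) for i in range(5)]
p, q = pade_coeffs(c, 2, 2)
print(p, q)  # coefficients: p = 1, 1/2, 1/12 and q = 1, -1/2, 1/12
```

This reproduces the classical (12 + 6z + z²)/(12 − 6z + z²) approximant of the exponential after clearing denominators.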

2.1. Padé Approximation

The second-order numerical integration method proposed by L. Wuytack is the [1/1] Padé approximation of the type (3). After simplification, it reads:
$$y_{n+1} = y_n + \frac{h f^2(t_n, y_n)}{f(t_n, y_n) - \frac{h}{2} \dot f(t_n, y_n)}, \qquad (6)$$
where $\dot f(t, y)$ is the total time derivative of $f(t, y)$.
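A one-step sketch of (6) for a scalar problem (helper names are ours). On the linear test equation y′ = λy the step reduces to the [1/1] Padé approximant of eᶻ, so it stays bounded even for a strongly negative z = hλ:

```python
def ra2_step(y, h, f, fdot):
    """One step of the second-order rational approximation method (6),
    scalar case: y_{n+1} = y_n + h*f^2 / (f - (h/2)*fdot).
    fdot must return the total time derivative of f along the solution."""
    fn = f(y)
    return y + h * fn * fn / (fn - 0.5 * h * fdot(y))

# Linear test y' = lam*y: here fdot = lam^2 * y, and the step collapses to
# the [1/1] Pade approximant (1 + z/2)/(1 - z/2) of e^z with z = h*lam.
lam, h, y0 = -100.0, 0.1, 1.0
y1 = ra2_step(y0, h, lambda y: lam * y, lambda y: lam * lam * y)
z = h * lam
print(y1, (1 + z / 2) / (1 - z / 2))  # both agree; |y1| < 1 despite z = -10
```

An explicit second-order method would be unstable at z = −10; here the amplification factor is −2/3.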
The fourth-order Padé numerical method of type [2/2] can be obtained from (5) in a similar way [18]. After simplification, it can be presented as follows:
$$y_{n+1} = y_n + \frac{\left(36 f_n \dot f_n^2 - 24 f_n^2 \ddot f_n\right) h + \left(18 \dot f_n^3 - 24 f_n \dot f_n \ddot f_n + 6 f_n^2 f_n^{(3)}\right) h^2}{36 \dot f_n^2 - 24 f_n \ddot f_n + \left(6 f_n f_n^{(3)} - 12 \dot f_n \ddot f_n\right) h + \left(4 \ddot f_n^2 - 3 \dot f_n f_n^{(3)}\right) h^2}, \qquad (7)$$
where $f_n^{(q)} = f^{(q)}(t_n, y_n)$ for more compact notation.
Two problems are evident when working with numerical methods such as (7). First, this formula can be straightforwardly implemented in a one-dimensional case, but when y n and f n are vectors, some special techniques are needed to perform the division of two vector values, appearing in the numerator and denominator of (7). The classical definition of vector space does not include vector division, but some authors [14,20,21] propose using element-wise operation for this purpose:
$$v \oslash w = \begin{pmatrix} \frac{v_1}{w_1} & \cdots & \frac{v_i}{w_i} & \cdots & \frac{v_m}{w_m} \end{pmatrix}^\top. \qquad (8)$$
Alas, there is no proof that such an operation can preserve A- and A ( α ) -stability even when the formula is A- or A ( α ) -stable for a one-dimensional problem [22]. Moreover, it was shown that this approach might notably affect the dynamical properties of the simulated system [21].
The second problem of the Padé approximant is that, starting with the [2/2] formula, it becomes cumbersome and difficult to derive, implement, and compute.

2.2. Proposed Approximation

Reducing the rational approximation order may result in more feasible types of approximation. In this paper, we focus on a class of methods given by $[p-1]/[p-1]$ approximants for even p and $[p]/[p-1]$ approximants for odd p. The order-2 method of this class is (6), coinciding with the Padé approximant. Methods of orders 3–6 are given in Table 1. These formulas were obtained from the equations in (5) with an additional condition: the denominator Q of these rational approximants should represent the Taylor series for $\left(y(t_n) - y(t_n - h)\right)/h$:
$$Q = f(t_n, y_n) - \frac{h}{2} \dot f(t_n, y_n) + \frac{h^2}{6} \ddot f(t_n, y_n) - \cdots$$
Denoting the numerator by P, one can notice that, in such types of approximation, the numerator is always a sum of two terms:
$$P = y_n \Phi_h(t_n, y_n) + h\, \Theta_h(t_n, y_n),$$
where $\Phi_h(t_n, y_n) = Q$ and $\Theta_h(t_n, y_n)$ is another function of $t_n$ and $y_n$. Therefore, we can write the general form of the methods of this type similarly to conventional one-step methods:
$$y_{n+1} = y_n + \frac{h\, \Theta_h(t_n, y_n)}{\Phi_h(t_n, y_n)}. \qquad (9)$$
One obvious advantage of the proposed rational approximation (RA) methods is that they have a more straightforward formulation than the Padé approximation methods. Another advantage is that they not only can be A- or A(α)-stable but also can guarantee stability preservation in the multidimensional case, which is considered in the next sections.

2.3. Multidimensional Case

Following [4], we rewrite the non-autonomous ODE (1) in the form of an autonomous ODE:
$$Y(t) = \begin{pmatrix} t & y_1 & \cdots & y_m \end{pmatrix}^\top, \qquad F(Y) = \begin{pmatrix} 1 & f_1(Y) & \cdots & f_m(Y) \end{pmatrix}^\top,$$
so Equation (1) becomes
$$\dot Y(t) = F(Y(t)). \qquad (10)$$
The first derivatives of F(Y) are as follows:
$$F' = J F, \qquad F'' = J J F + \frac{\partial J}{\partial Y} F F, \qquad F''' = \frac{\partial^2 J}{\partial Y^2} F F F + \frac{\partial J}{\partial Y} J F F + 2 \frac{\partial J}{\partial Y} F J F + J \frac{\partial J}{\partial Y} F F + J J J F, \qquad \ldots$$
where $F = F(Y)$ and $J = \partial F / \partial Y$ is the Jacobian matrix. Higher-order derivatives are given in Appendix A.
Furthermore, we show that the RA method can be written in a matrix form:
$$M_{den}\, \Delta Y_{n+1} = M_{num}\, F(Y_n), \qquad (11)$$
where $\Delta Y_{n+1} = Y_{n+1} - Y_n$, and $M_{num}$ and $M_{den}$ are d × d matrices, d being the system dimension after rewriting it in autonomous form: d = m if (1) is autonomous and d = m + 1 if (1) is non-autonomous. This allows us to solve (11) via Gaussian elimination, in a manner similar to implicit methods performing a single iteration of the Newton–Raphson procedure, which is important for stability.
First, we prove the following lemma.
Lemma 1.
The i-th derivative of F(Y) can be represented as $M_i F(Y)$, where $M_i$ is a matrix, $i \in \mathbb{Z}^+$.
Proof. 
Due to the rules of differentiation,
$$\frac{d}{dt} F(Y(t)) = \frac{\partial F(Y)}{\partial Y} \frac{dY(t)}{dt}. \qquad (12)$$
Substituting (10) into (12) leads to
$$\frac{d}{dt} F(Y(t)) = \frac{\partial F(Y)}{\partial Y} F(Y(t)), \qquad (13)$$
which already reads as $M_1(Y) F(Y)$, and after differentiating,
$$\frac{d}{dt}\Big(M_1(Y) F(Y)\Big) = \left(\frac{\partial M_1(Y)}{\partial Y} F(Y)\right) F(Y) + M_1^2(Y) F(Y). \qquad (14)$$
Collecting the terms by $F(Y)$ in (14) as $M_2(Y) = \frac{\partial M_1(Y)}{\partial Y} F(Y) + M_1^2(Y)$, we obtain
$$\frac{d}{dt}\Big(M_1(Y) F(Y)\Big) = M_2(Y) F(Y), \qquad (15)$$
which is similar to (13) up to the index by M. Continuing this process, we obtain
$$\frac{d}{dt}\Big(M_i(Y) F(Y)\Big) = M_{i+1}(Y) F(Y), \qquad (16)$$
which proves the lemma. □
Then, the following theorem can be proven.
Theorem 1.
The RA method of order p,
$$y_{n+1} = y_n + \frac{h f_n^2 + \frac{h^3}{12}\left(4 f_n \ddot f_n - 3 \dot f_n^2\right) + \cdots}{f_n - \frac{h}{2} \dot f_n + \frac{h^2}{6} \ddot f_n - \cdots}, \qquad (17)$$
applied to a multidimensional ODE reads in matrix form as
$$\Delta Y_{n+1} = M_{den}^{-1} M_{num} F(Y_n), \qquad (18)$$
where $M_{num}$ and $M_{den}$ are matrices.
Proof of Theorem 1.
Using the notation in (10) and the results of Lemma 1, we rewrite (9) for the one-dimensional case as follows:
$$Y_{n+1} = Y_n + \frac{M_{num} F(Y_n)\, F(Y_n)}{M_{den} F(Y_n)}, \qquad (19)$$
where $M_{num}$ collects all terms by $F(Y_n)$ in the numerator and $M_{den}$ collects all terms by $F(Y_n)$ in the denominator. In the multidimensional case, Equation (19) reads
$$\Delta Y_{n+1} = \big(M_{num} F(Y_n) \circ F(Y_n)\big) \oslash \big(M_{den} F(Y_n)\big), \qquad (20)$$
where $\Delta Y_{n+1} = Y_{n+1} - Y_n$, $\circ$ denotes the Hadamard (element-wise) multiplication, and $\oslash$ here denotes the element-wise division (8). From (20), assuming $||F(Y_n)|| \neq 0$ and using the commutativity of the Hadamard product, we obtain the following formula:
$$\Delta Y_{n+1} = M_{den}^{-1} M_{num} F(Y_n), \qquad (21)$$
which proves Theorem 1. □
The second-order RA method (6) in matrix form is as follows:
$$\Delta Y_{n+1} = \left(I - \frac{h}{2} J(Y_n)\right)^{-1} h F(Y_n), \qquad (22)$$
where I is the identity matrix, which gives the linearly implicit midpoint method, also known under the abbreviation LIMP [8].
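A dependency-free sketch of one step of the LIMP scheme on a linear stiff system (helper name is ours; the 2 × 2 linear solve is written out explicitly). For Y′ = AY the Jacobian is J = A and F(Y) = AY, and a step size far outside any explicit stability region remains harmless:

```python
def limp_step_2d(y, h, A):
    """One step of the order-2 RA method in matrix form (linearly implicit
    midpoint): solve (I - (h/2) J) dY = h F(Y). For the linear system
    Y' = A Y we have J = A and F(Y) = A Y; the 2x2 solve is hand-written
    to keep the sketch dependency-free."""
    f = (A[0][0] * y[0] + A[0][1] * y[1],
         A[1][0] * y[0] + A[1][1] * y[1])
    m = [[1.0 - 0.5 * h * A[0][0], -0.5 * h * A[0][1]],
         [-0.5 * h * A[1][0], 1.0 - 0.5 * h * A[1][1]]]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    dy0 = (m[1][1] * h * f[0] - m[0][1] * h * f[1]) / det
    dy1 = (m[0][0] * h * f[1] - m[1][0] * h * f[0]) / det
    return (y[0] + dy0, y[1] + dy1)

# Stiff linear system with eigenvalues -1 and -10^4: the step h = 1 is far
# outside any explicit stability region, yet the LIMP iterates stay bounded.
A = [[-1.0, 0.0], [0.0, -1.0e4]]
y = (1.0, 1.0)
for _ in range(10):
    y = limp_step_2d(y, 1.0, A)
print(y)  # both components remain bounded; no explosion despite h*lambda = -10^4
```

The slow component decays quickly, while the very stiff one is damped only mildly per step, exactly as the amplification factor (1 + z/2)/(1 − z/2) predicts for large negative z.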
Denote
$$F_1 = J, \qquad F_2 = \frac{\partial J}{\partial Y} F + J^2, \qquad F_3 = \frac{\partial^2 J}{\partial Y^2} F F + \frac{\partial J}{\partial Y} J F + 2 \left(\frac{\partial J}{\partial Y} F\right) J + J \frac{\partial J}{\partial Y} F + J^3.$$
The RA method of order 4 is as follows:
$$\Delta Y_{n+1} = \left(I - \frac{h}{2} F_1 + \frac{h^2}{6} F_2 - \frac{h^3}{24} F_3\right)^{-1} \left(I + h^2 \left(\frac{1}{3} F_2 - \frac{1}{4} F_1^2\right)\right) h F, \qquad (23)$$
Similarly, formulae for methods of higher order can be derived. From (23), one can see that, in terms of matrix operations, one step of the fourth-order RA method is not much more complicated than the fourth-order Taylor series method
$$\Delta Y_{n+1} = h F + \frac{h^2}{2} F_1 F + \frac{h^3}{6} F_2 F + \frac{h^4}{24} F_3 F, \qquad (24)$$
except for the matrix inverse, which resembles conventional implementations of implicit methods. This operation preserves the A-stability of the method, which is not inherent to implementations with element-wise vector division; see Theorem 2.
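For the scalar linear problem y′ = λy the fourth-order matrix formula collapses to a rational function of z = hλ, which makes the accuracy order easy to check numerically (a sketch with our own helper name):

```python
import math

def ra4_linear(z):
    """Amplification factor of the fourth-order RA method on y' = lambda*y:
    R(z) = (1 + z/2 + z^2/6 + z^3/24) / (1 - z/2 + z^2/6 - z^3/24),
    so one step is y_{n+1} = R(h*lambda) * y_n."""
    num = 1 + z / 2 + z**2 / 6 + z**3 / 24
    den = 1 - z / 2 + z**2 / 6 - z**3 / 24
    return num / den

# Order check: halving z should shrink the one-step defect R(z) - e^z by
# roughly 2^5 = 32, as expected for a 4th-order one-step method.
e1 = abs(ra4_linear(0.1) - math.exp(0.1))
e2 = abs(ra4_linear(0.05) - math.exp(0.05))
print(e1 / e2)  # close to 32
```

The observed ratio sits near 32 (slightly above, due to higher-order terms), confirming a local error of order h⁵.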

2.4. Other Issues of Implementation

One of the most complicated problems in the implementation of the RA method is finding high-order partial derivatives of F since they are multidimensional matrices. Indeed,
$$J = \frac{\partial F}{\partial Y}$$
is a two-dimensional matrix, while
$$\dot J = \frac{\partial J}{\partial Y} \quad \text{and} \quad \ddot J = \frac{\partial^2 J}{\partial Y^2}$$
are three-dimensional and four-dimensional matrices, respectively. However, in practical computations, we only need to compute the products $\dot J F$ and $\ddot J F F$, which are two-dimensional matrices. Using manual or automatic matrix differentiation allows for finding and preliminarily implementing formulas for these matrices.

3. Stability Analysis

This section analyzes the stability of the RA methods: we prove that using a matrix inverse instead of element-wise division preserves A(α)-stability, and we plot the stability regions of these methods.
First, consider the stability functions of the $[p-1]/[p-1]$ RA methods. The linear test problem for stability analysis is as follows:
$$\dot Y(t) = \lambda Y(t). \qquad (25)$$
Denote $z = h\lambda$. Since the denominator of the RA methods is the Taylor series for $\left(y(t_n) - y(t_n - h)\right)/h$, in terms of z it reads
$$Q(z) = 1 - \frac{z}{2} + \frac{z^2}{6} - \frac{z^3}{24} + \cdots \pm \frac{z^{p-1}}{p!}. \qquad (26)$$
Solving the system (5) for Q(z) and the problem (25), we obtain
$$P(z) = \begin{pmatrix} 1 & z & z^2 & \cdots & z^{p-1} & 0 \end{pmatrix}
\begin{pmatrix}
1 & & & & & \\
1 & 1 & & & & \\
\frac{1}{2} & 1 & 1 & & & \\
\frac{1}{6} & \frac{1}{2} & 1 & 1 & & \\
\vdots & \ddots & \ddots & \ddots & \ddots & \\
\frac{1}{p!} & \cdots & \frac{1}{6} & \frac{1}{2} & 1 & 1
\end{pmatrix}
\begin{pmatrix} 1 \\ -\frac{1}{2} \\ \frac{1}{6} \\ \vdots \\ \pm\frac{1}{p!} \\ 0 \end{pmatrix}, \qquad (27)$$
where zero elements in the matrix are left blank. From (27), the stability functions of the RA methods are
$$R(z) = \frac{1 + \frac{z}{2} + \frac{z^2}{6} + \frac{z^3}{24} + \cdots + \frac{z^{p-1}}{p!}}{1 - \frac{z}{2} + \frac{z^2}{6} - \frac{z^3}{24} + \cdots \pm \frac{z^{p-1}}{p!}} \qquad (28)$$
for even p and
$$R(z) = \frac{1 + \frac{z}{2} + \frac{z^2}{6} + \frac{z^3}{24} + \cdots + \frac{z^{p-1}}{p!} + \frac{2 z^{p}}{(p+1)!}}{1 - \frac{z}{2} + \frac{z^2}{6} - \frac{z^3}{24} + \cdots + \frac{z^{p-1}}{p!}} \qquad (29)$$
for odd p. Even-order RA methods are A-stable for orders 2 and 4 and A(α)-stable for the higher orders, while odd-order RA methods are not A(α)-stable; see the stability regions of the order 2–7 RA methods in Figure 1.
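The contrast between even and odd orders is easy to probe numerically. The sketch below (helper name is ours) assembles R(z) from the even- and odd-order formulas above and samples it far out on the negative real axis, where an A-stable function must stay bounded by one:

```python
from math import factorial

def ra_stability(z, p):
    """Stability function of the RA method of order p: the coefficient of
    z^i is 1/(i+1)! in both numerator and denominator (with alternating
    signs below); odd p gets the extra 2*z^p/(p+1)! numerator term."""
    num = sum(z**i / factorial(i + 1) for i in range(p))
    den = sum((-z)**i / factorial(i + 1) for i in range(p))
    if p % 2 == 1:
        num += 2 * z**p / factorial(p + 1)
    return num / den

# Far out on the negative real axis: the order-4 method stays bounded,
# while the order-3 one blows up (its numerator degree exceeds the
# denominator degree), matching the A-stability claims above.
print(abs(ra_stability(-1e6, 4)), abs(ra_stability(-1e6, 3)))
```

This is only a spot check along the real axis, not a proof; the full stability regions are those shown in Figure 1.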
The essential issue in RA method implementation is using the matrix inverse (18) instead of element-wise operations. Due to Theorem 2, this preserves A ( α ) -stability.
Theorem 2.
For the A(α)-stable rational approximation method
$$Y_{n+1} = v(Y_n) \oslash w(Y_n), \qquad v = P F \circ F, \qquad w = Q F, \qquad (30)$$
where v and w are vectors, P and Q are matrices, and $\oslash$ is a certain binary operator:
1. 
Treating $\oslash$ as the element-wise division (8) may lead to the loss of A(α)-stability for certain $Y_n$; i.e., for the multidimensional linear test problem
$$\dot Y = A Y, \qquad (31)$$
where A is a matrix, $\exists\, \{Y_n, A, h\}:\; Y_{n+1} = R\, Y_n,\; \max|\mathrm{eig}(R)| > 1$.
2. 
Treating $\oslash$ as the matrix inverse-based operation (18) preserves A(α)-stability for any $Y_n$.
Proof. 
Consider the n-dimensional test problem (31). For the method (30), its matrix form reads:
$$Y_{n+1} = (P Y_n \circ Y_n) \oslash (Q Y_n), \qquad (32)$$
and from (28),
$$P = I + \frac{h A}{2} + \frac{h^2 A^2}{6} + \cdots + \frac{h^{p-1} A^{p-1}}{p!}$$
and
$$Q = I - \frac{h A}{2} + \frac{h^2 A^2}{6} - \cdots \pm \frac{h^{p-1} A^{p-1}}{p!}.$$
The stability function can be found depending on the operator $\oslash$:
1.
If $\oslash$ is the element-wise division, the stability function for each dimension is found independently as
$$R_i(hA, Y_n) = \frac{u_i}{v_i}, \qquad u = P Y_n \circ Y_n, \qquad v = Q Y_n.$$
Let us give one example of how stability may be violated. Let $Y_n = (1\;\;1\;\;0\;\;0\;\;0)^\top$, and let A be the matrix of a stable system in the Jordan normal form:
$$A = \begin{pmatrix} \sigma & \omega \\ -\omega & \sigma \end{pmatrix},$$
where σ < 0 and ω > 0 are real numbers. In this example,
$$R_1(hA, Y_n) = \frac{1 + \frac{h\sigma}{2} + \frac{h\omega}{2} + \frac{h^2(\sigma^2 - \omega^2)}{6} + \frac{2 h^2 \sigma \omega}{6}}{1 - \frac{h\sigma}{2} - \frac{h\omega}{2} + \frac{h^2(\sigma^2 - \omega^2)}{6} + \frac{2 h^2 \sigma \omega}{6}}.$$
Let h be small enough that the terms with h² and higher powers can be neglected:
$$R_1(hA, Y_n) = \frac{1 + \frac{h\sigma}{2} + \frac{h\omega}{2} + O(h^2)}{1 - \frac{h\sigma}{2} - \frac{h\omega}{2} + O(h^2)},$$
and denote $-\frac{h\sigma}{2} = a$, $\frac{h\omega}{2} = b$. Then,
$$R_1(hA, Y_n) = \frac{1 - a + b}{1 + a - b}, \qquad R_2(hA, Y_n) = \frac{1 - a - b}{1 + a + b}.$$
The stability loss means elongation of the vector $Y_{n+1}$, i.e., $||Y_{n+1}||_2^2 = R_1^2 + R_2^2 > ||Y_n||_2^2 = 2$. For b sufficiently large compared with a (e.g., a = 0.01 and b = 0.2), the condition $||Y_{n+1}||_2^2 > 2$ is indeed satisfied. This example explicitly shows that element-wise division may violate A(α)-stability for a stable system and certain initial conditions.
2.
If $\oslash$ is the matrix inverse-based operation, the stability function is
$$R(hA) = Q^{-1} P,$$
which does not depend on $Y_n$. Therefore, A(α)-stability is preserved in the multidimensional case, similarly to the other one-step methods [1].
 □
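Both parts of the theorem can be checked numerically on the Jordan-block example from the proof (a self-contained sketch; helper names are ours). With σ = −0.02, ω = 0.4, and h = 1 (so a = 0.01, b = 0.2), the element-wise step inflates the squared norm of Yₙ = (1, 1)ᵀ past 2, while the matrix-inverse step keeps it below 2:

```python
def hadamard_step(P, Q, y):
    """Element-wise variant: Y+ = (P@Y o Y) elementwise-divided by (Q@Y)."""
    u = [sum(P[i][j] * y[j] for j in range(2)) * y[i] for i in range(2)]
    v = [sum(Q[i][j] * y[j] for j in range(2)) for i in range(2)]
    return [u[i] / v[i] for i in range(2)]

def matrix_step(P, Q, y):
    """Matrix-inverse variant: Y+ = Q^{-1} P Y (2x2 solve by hand)."""
    b = [sum(P[i][j] * y[j] for j in range(2)) for i in range(2)]
    det = Q[0][0] * Q[1][1] - Q[0][1] * Q[1][0]
    return [(Q[1][1] * b[0] - Q[0][1] * b[1]) / det,
            (Q[0][0] * b[1] - Q[1][0] * b[0]) / det]

# Stable Jordan block; a = -h*sigma/2 = 0.01, b = h*omega/2 = 0.2.
h, sigma, omega = 1.0, -0.02, 0.4
A = [[sigma, omega], [-omega, sigma]]
A2 = [[sigma**2 - omega**2, 2 * sigma * omega],
      [-2 * sigma * omega, sigma**2 - omega**2]]
P = [[(1.0 if i == j else 0.0) + h * A[i][j] / 2 + h * h * A2[i][j] / 6
      for j in range(2)] for i in range(2)]
Q = [[(1.0 if i == j else 0.0) - h * A[i][j] / 2 + h * h * A2[i][j] / 6
      for j in range(2)] for i in range(2)]
Y = [1.0, 1.0]
n_had = sum(c * c for c in hadamard_step(P, Q, Y))
n_mat = sum(c * c for c in matrix_step(P, Q, Y))
print(n_had, n_mat)  # element-wise: > 2 (the norm grows); matrix form: < 2
```

The truncation of P and Q after the h² terms mirrors the approximation used in the proof; the qualitative outcome does not depend on it for these parameter values.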

4. Adaptive Step Control

Adaptive step control may significantly increase the performance of an integration routine. In order to control the step size, an error estimate is required. The RA methods provide three possible error estimation strategies:
1.
The error term for the method of order p can be found similarly to the Taylor method:
$$err_{n+1} = \frac{h^p F^{(p-1)}(Y_{n+1})}{p!},$$
which is practically applicable since $F^{(p-1)}$ is already computed.
2.
The error term for the method of order p can be found from the method of the previous order. Let p − 1 be odd; then, the (p − 1)-order solution is
$$\tilde Y_{n+1} = Y_n + Q_{p-1}^{-1}(Y_n)\, h\, \Theta(Y_n),$$
and the p-order solution is
$$Y_{n+1} = Y_n + \left(Q_{p-1} - \frac{h^p F^{(p-1)}}{p!}\right)^{-1}(Y_n)\, h\, \Theta(Y_n),$$
since $\Theta(Y_n)$ is the same for orders p and p − 1. From this,
$$err_{n+1} = Y_{n+1} - \tilde Y_{n+1}. \qquad (33)$$
3.
The error term for the method of order p can be found using the numerator alternation technique. Let the p-order solution be
$$Y_{n+1} = Y_n + Q^{-1}(Y_n)\, h\, \Theta(Y_n),$$
then the (p − 1)-order solution is
$$\tilde Y_{n+1} = Y_n + Q^{-1}(Y_n)\left(h\, \Theta(Y_n) + c\, h^p F^{(p-1)}(Y_n)\right), \qquad (34)$$
where c is a constant; we propose using $c = \frac{1}{p!}$. The error is then found using formula (33). The technique (34) is less computationally expensive since the matrix Q is the same for both $\tilde Y_{n+1}$ and $Y_{n+1}$, which allows for performing its LU decomposition once and speeding up the second solve.
In our implementation, we used the error estimator based on Equation (34), which provided slightly better results in our preliminary tests.
In order to provide smoother step size control, we use a step control algorithm developed by Söderlind [23]:
$$h_{n+1} = \gamma\, h_n \left(\frac{\xi\, tol}{||err_{n+1}||}\right)^{b_1} \left(\frac{\xi\, tol}{||err_{n}||}\right)^{b_2} \left(\frac{\xi\, tol}{||err_{n-1}||}\right)^{b_3} \left(\frac{h_n}{h_{n-1}}\right)^{-a_2} \left(\frac{h_{n-1}}{h_{n-2}}\right)^{-a_3}, \qquad (35)$$
where a and b are parameter vectors, γ = 0.99 and ξ = 0.9 are safety factors, and tol is the tolerance chosen to satisfy the condition
$$tol = \max\left(reltol \cdot ||Y_{n+1}||,\; abstol\right),$$
where reltol is the relative tolerance and abstol is the absolute tolerance.
The best results were achieved using
$$b = \begin{pmatrix} \frac{1}{4k} & \frac{1}{2k} & \frac{1}{4k} \end{pmatrix}^\top, \qquad a = \begin{pmatrix} 1 & 0 & 0 \end{pmatrix}^\top,$$
where k is chosen proportional to the method order. In our practical tests, we use k = 4 p for better damping.
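A sketch of this controller with our own helper signature (the constants follow the text; with a = (1, 0, 0) the step-ratio factors drop out):

```python
def soderlind_step(h_hist, err_hist, tol, b, a, gamma=0.99, xi=0.9):
    """Next step size from the digital-filter controller above.
    h_hist = [h_n, h_{n-1}, h_{n-2}], err_hist = [err_{n+1}, err_n, err_{n-1}].
    The helper signature is ours; gamma and xi are the safety factors."""
    h_next = gamma * h_hist[0]
    for bi, e in zip(b, err_hist):
        h_next *= (xi * tol / e) ** bi          # error-ratio factors
    h_next *= (h_hist[0] / h_hist[1]) ** (-a[1])  # step-ratio factors
    h_next *= (h_hist[1] / h_hist[2]) ** (-a[2])
    return h_next

# k proportional to the method order; k = 4p for p = 4 as suggested above.
k = 16
b = (1 / (4 * k), 1 / (2 * k), 1 / (4 * k))
a = (1, 0, 0)
h = soderlind_step([0.1, 0.1, 0.1], [1e-6, 1e-6, 1e-6], 1e-6, b, a)
print(h)  # just below 0.1: errors right at tolerance keep the step nearly constant
```

Feeding in larger error estimates shrinks the proposed step smoothly, which is precisely the damping that the small exponents 1/(4k), 1/(2k), 1/(4k) provide.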

5. Experimental Results

In this section, we consider the implementation of the fourth-order rational approximation method with its third-order counterpart (34) for error estimation; the resulting method is denoted as RA4(3). The order-2 RA method is identical to the widely known linearly implicit midpoint method, whose properties have been investigated in [8]; therefore, we omit its analysis. The sixth-order method RA6(5) and higher-order methods can be efficient only at extremely tight tolerances, which require extended data types due to the complexity of the higher derivatives in matrix form (see Appendix A), so their properties are to be investigated in the future. Nevertheless, fourth-order methods, including the popular Runge–Kutta 4 method, are widely used in engineering and scientific computations and are among the most common methods [1].
We compare the RA4(3) method with the Taylor 4(3) method and two Runge–Kutta methods. The first Runge–Kutta method under investigation is the explicit one-step 4(3) method described in [24]; specifically, it is the classical Runge–Kutta 4 method with an embedded counterpart of order 3, further denoted as ERK4(3). The second Runge–Kutta method is the implicit Lobatto IIIC 4(3) method, which is L-stable and B-stable and is considered by some authors to be an excellent method for stiff problems [25]. Meanwhile, it is a fully implicit RK method and, therefore, a computationally expensive one. To make it more efficient in our study, we use the conventional technique of “Jacobian freezing” in the Newton method iterations; see [8].
To obtain a reference solution, we used the Dormand–Prince 8(7) method with high accuracy settings and extended data type [1].
We considered four stiff test problems in this study: the Van der Pol oscillator problem (VDPL), High Irradiance Responses (HIRES) of the photomorphogenesis problem [1], the Riccati four-dimensional equation problem (RICCATI) [26], and a modified FitzHugh–Nagumo–Rinzel neuron model (FNR) from [27]. Our first example shows how different methods deal with the step size and how numerical differentiation affects step size dynamics. The second example studies the performance of the methods.

5.1. Time Domain Analysis and Issues of Step Control

This subsection considers several test problems and how different methods behave with step-size controllers on them. The Van der Pol (VDPL) test problem is given by the following equations:
$$\dot y_1 = y_2, \qquad \dot y_2 = \mu \left(1 - y_1^2\right) y_2 - y_1,$$
where μ is the nonlinearity parameter: the higher its value, the stiffer the problem. In our experiments, we used the value μ = 1000 and the experiment time $t_{max} = 2000$, with the following settings: $h_{min} = 10^{-10}$, $h_{max} = 10$, $abstol = 10^{-11}$, $reltol = 10^{-8}$. Figure 2 depicts the obtained results in the time domain.
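For reference, the VDPL right-hand side and its analytic Jacobian, which the RA/LIMP family of methods requires, can be written as follows (a sketch with our own helper names; the Jacobian is validated against finite differences):

```python
def vdp_rhs(y, mu=1000.0):
    """Right-hand side of the Van der Pol (VDPL) system."""
    return [y[1], mu * (1.0 - y[0]**2) * y[1] - y[0]]

def vdp_jac(y, mu=1000.0):
    """Analytic Jacobian df/dy, needed by the linearly implicit RA methods."""
    return [[0.0, 1.0],
            [-2.0 * mu * y[0] * y[1] - 1.0, mu * (1.0 - y[0]**2)]]

# Sanity check: compare the analytic Jacobian with forward differences.
y0, eps = [2.0, -0.5], 1e-7
J = vdp_jac(y0)
for i in range(2):
    for j in range(2):
        yp = list(y0)
        yp[j] += eps
        fd = (vdp_rhs(yp)[i] - vdp_rhs(y0)[i]) / eps
        assert abs(fd - J[i][j]) <= 1e-2 * max(1.0, abs(J[i][j]))
print("analytic Jacobian matches finite differences")
```

Supplying an exact Jacobian rather than a finite-difference one matters here: as the next example shows, step-size behavior is sensitive to how the derivatives are obtained.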
The time-domain solution is given on a logarithmic scale of its absolute value. The plot at the bottom shows that the largest steps are selected by the L-stable Lobatto IIIC method. The RA method generally follows it, while the step size strategies of the Taylor and ERK methods are quite similar.
Figure 3 shows the difference in step selection strategies between the method with element-wise division (8) and the method with the matrix inverse. This example demonstrates the drawback of element-wise division applied to the RA method: in the Van der Pol oscillator, it leads to a notable accuracy loss (see the upper plot) and reduces the efficiency of the method because it becomes unable to select large step sizes in the way that a truly implicit method can. The Taylor method stepping strategy is given for comparison.
The stiff Riccati equation test problem (RICCATI) is described by the following equations:
$$\dot y_1 = -10{,}000\, y_1^2 - y_3 y_2, \qquad \dot y_2 = -y_2 (y_1 + y_4), \qquad \dot y_3 = -y_3 (y_1 + y_4), \qquad \dot y_4 = -10{,}000\, y_4^2 - y_3 y_2,$$
with the initial value $y(t_0) = (0\;\;0\;\;1\;\;0)^\top$ and simulation time $t_{max} = 3$. The solver settings used for the time-domain test were $h_{min} = 10^{-10}$, $h_{max} = 100$, $abstol = 10^{-10}$, $reltol = 10^{-5}$. Figure 4 depicts the obtained results in the time domain.
The High Irradiance Responses test problem (HIRES) is described by the following equations:
$$\begin{aligned}
\dot y_1 &= -1.71 y_1 + 0.43 y_2 + 8.32 y_3 + 0.0007, \\
\dot y_2 &= 1.71 y_1 - 8.75 y_2, \\
\dot y_3 &= -10.03 y_3 + 0.43 y_4 + 0.035 y_5, \\
\dot y_4 &= 8.32 y_2 + 1.71 y_3 - 1.12 y_4, \\
\dot y_5 &= -1.745 y_5 + 0.43 y_6 + 0.43 y_7, \\
\dot y_6 &= -280 y_6 y_8 + 0.69 y_4 + 1.71 y_5 - 0.43 y_6 + 0.69 y_7, \\
\dot y_7 &= 280 y_6 y_8 - 1.81 y_7, \\
\dot y_8 &= -280 y_6 y_8 + 1.81 y_7,
\end{aligned}$$
with the initial value $y(t_0) = (1\;\;0\;\;0\;\;0\;\;0\;\;0\;\;0\;\;0.0057)^\top$ and simulation time $t_{max} = 100$. The solver settings were $h_{min} = 10^{-10}$, $h_{max} = 100$, $abstol = 10^{-10}$, $reltol = 10^{-5}$. Figure 5 depicts the obtained results in the time domain.
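As a quick consistency check of the signs in the HIRES equations, note that the last two right-hand sides are exact opposites, so y₇ + y₈ is a conserved quantity (helper name is ours):

```python
def hires_rhs(y):
    """HIRES right-hand side, transcribed from the equations above."""
    y1, y2, y3, y4, y5, y6, y7, y8 = y
    return [
        -1.71 * y1 + 0.43 * y2 + 8.32 * y3 + 0.0007,
         1.71 * y1 - 8.75 * y2,
        -10.03 * y3 + 0.43 * y4 + 0.035 * y5,
         8.32 * y2 + 1.71 * y3 - 1.12 * y4,
        -1.745 * y5 + 0.43 * y6 + 0.43 * y7,
        -280.0 * y6 * y8 + 0.69 * y4 + 1.71 * y5 - 0.43 * y6 + 0.69 * y7,
         280.0 * y6 * y8 - 1.81 * y7,
        -280.0 * y6 * y8 + 1.81 * y7,
    ]

# The derivatives of y7 and y8 cancel, so y7 + y8 is conserved -- a handy
# sanity check for any solver applied to this problem.
f = hires_rhs([1, 0, 0, 0, 0, 0, 0, 0.0057])
print(f[6] + f[7])  # 0.0
```

Any numerical solution whose y₇ + y₈ drifts noticeably is therefore accumulating error, which makes this invariant a cheap accuracy probe alongside the reference solution.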
The sine-excited FitzHugh–Nagumo–Rinzel neuron test problem (FNR) is described by the following equations:
$$\dot y_1 = y_1 - \frac{y_1^3}{3} - y_2 + y_3 + y_5, \qquad \dot y_2 = a_1 (a_2 + y_1 - a_3 y_2), \qquad \dot y_3 = a_4 (a_5 - y_3 - y_1), \qquad \dot y_4 = A y_5, \qquad \dot y_5 = -\omega^2 y_4,$$
where the parameter vector is $a = (0.5\;\;0.7\;\;0.8\;\;0.002\;\;1)$, the sine perturbation amplitude is A = 0.083, the radial frequency is $\omega = 0.00003 \cdot 2\pi$, the initial value is $y(t_0) = (0\;\;0\;\;0\;\;0\;\;1)^\top$, and the simulation time is $t_{max} = 70{,}000$.
The solver settings were $h_{min} = 10^{-10}$, $h_{max} = 100$, $abstol = 10^{-10}$, $reltol = 10^{-8}$. Figure 6 shows the behavior of the obtained solution.
In all examples, the LobattoIIIC method can work with the largest step sizes, while the Taylor and ERK solvers select small step sizes, restricted by stability requirements.

5.2. Performance Analysis

In this subsection, we study the performance of the methods on the four described problems. The results are given in Figure 7. All four test problems highlight the higher performance of the proposed RA method. Explicit methods are poorly suited for this set of problems, as seen from the plots: their computational time is almost independent of the accuracy settings, meaning that their step controllers restrict the step sizes at all investigated tolerances to maintain stability.
All of the experiments were carried out in the MATLAB R2018b environment on a computer with a 64-bit Windows 10 operating system, an Intel Core i5-7300HQ 2.5 GHz CPU, and 16 GB RAM. The relative tolerance value was varied within the interval $[10^{-9}, 10^{-1}]$, and the absolute tolerance was chosen as $abstol = 10^{-5} \cdot reltol$. The double data type was used.

6. Conclusions and Discussion

In this paper, we proposed a prospective type of rational approximation method for solving stiff ODEs. This method has a strong relation to the Taylor expansion and is easy to derive and implement. The A- and A(α)-stability of this method is proven; most notably, the method is A-stable in the case of orders 2 and 4. Compared with the Taylor method and with explicit and implicit Runge–Kutta methods, it showed the best performance when solving stiff test problems, as our experiments with the 4(3)-order methods on the double data type revealed.
Similar to the Taylor series methods, the proposed method requires algebraic differentiation to achieve the best performance. It is of great interest to test the performance of the proposed method on extended data types and high orders, which will be the topic of our future work.
The question arises of whether we found the “best” rational approximation for numerical integration or whether there exist other types of approximation that can achieve better performance. It is also of interest to derive a Runge–Kutta-like formula for this method, especially for the matrix denominator, which may considerably increase the method performance.
The proposed numerical method is also prospective for solving problems with singularities, especially poles in the denominator of the derivative function.
RA methods are potentially suitable for solving DAEs, which will be a subject of our future investigation.

Author Contributions

Conceptualization, A.K. and D.B.; data curation, E.G.N.; formal analysis, E.G.N.; investigation, A.K., D.B. and V.A.; methodology, D.B.; project administration, D.B.; resources, A.K. and V.A.; software, A.K. and V.A.; validation, A.K., D.B. and E.G.N.; visualization, A.K.; writing—original draft, A.K. and D.B.; writing—review and editing, V.A. and E.G.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Higher-Order Derivatives in Matrix Form

Here, we present derivatives of orders 4 and 5, needed for RA and Taylor series methods of orders 5 and 6:
$$\begin{aligned}
F^{(4)} = {} & \frac{\partial^{3} J}{\partial Y^{3}} F F F F + \frac{\partial^{2} J}{\partial Y^{2}} J F F F + 2 \frac{\partial^{2} J}{\partial Y^{2}} F J F F + 3 \frac{\partial^{2} J}{\partial Y^{2}} F F J F + \frac{\partial J}{\partial Y} \frac{\partial J}{\partial Y} F F F + {} \\
& \frac{\partial J}{\partial Y} J J F F + 3 \frac{\partial J}{\partial Y} J F J F + 3 \frac{\partial J}{\partial Y} F \frac{\partial J}{\partial Y} F F + 3 \frac{\partial J}{\partial Y} F J J F + J \frac{\partial^{2} J}{\partial Y^{2}} F F F + {} \\
& J \frac{\partial J}{\partial Y} J F F + 2 J \frac{\partial J}{\partial Y} F J F + J J \frac{\partial J}{\partial Y} F F + J J J J F,
\end{aligned}$$
$$\begin{aligned}
F^{(5)} = {} & \frac{\partial^{4} J}{\partial Y^{4}} F F F F F + \frac{\partial^{3} J}{\partial Y^{3}} J F F F F + 2 \frac{\partial^{3} J}{\partial Y^{3}} F J F F F + 3 \frac{\partial^{3} J}{\partial Y^{3}} F F J F F + 4 \frac{\partial^{3} J}{\partial Y^{3}} F F F J F + \frac{\partial^{2} J}{\partial Y^{2}} \frac{\partial J}{\partial Y} F F F F + \frac{\partial^{2} J}{\partial Y^{2}} J J F F F + {} \\
& 3 \frac{\partial^{2} J}{\partial Y^{2}} J F J F F + 4 \frac{\partial^{2} J}{\partial Y^{2}} J F F J F + 3 \frac{\partial^{2} J}{\partial Y^{2}} F \frac{\partial J}{\partial Y} F F F + 3 \frac{\partial^{2} J}{\partial Y^{2}} F J J F F + 8 \frac{\partial^{2} J}{\partial Y^{2}} F J F J F + 6 \frac{\partial^{2} J}{\partial Y^{2}} F F \frac{\partial J}{\partial Y} F F + 6 \frac{\partial^{2} J}{\partial Y^{2}} F F J J F + {} \\
& \frac{\partial J}{\partial Y} \frac{\partial^{2} J}{\partial Y^{2}} F F F F + \frac{\partial J}{\partial Y} \frac{\partial J}{\partial Y} J F F F + 2 \frac{\partial J}{\partial Y} \frac{\partial J}{\partial Y} F J F F + 4 \frac{\partial J}{\partial Y} \frac{\partial J}{\partial Y} F F J F + \frac{\partial J}{\partial Y} J \frac{\partial J}{\partial Y} F F F + \frac{\partial J}{\partial Y} J J J F F + {} \\
& 4 \frac{\partial J}{\partial Y} J J F J F + 6 \frac{\partial J}{\partial Y} J F \frac{\partial J}{\partial Y} F F + 6 \frac{\partial J}{\partial Y} J F J J F + 4 \frac{\partial J}{\partial Y} F \frac{\partial^{2} J}{\partial Y^{2}} F F F + 4 \frac{\partial J}{\partial Y} F \frac{\partial J}{\partial Y} J F F + 8 \frac{\partial J}{\partial Y} F \frac{\partial J}{\partial Y} F J F + {} \\
& 4 \frac{\partial J}{\partial Y} F J \frac{\partial J}{\partial Y} F F + 4 \frac{\partial J}{\partial Y} F J J J F + J \frac{\partial^{3} J}{\partial Y^{3}} F F F F + J \frac{\partial^{2} J}{\partial Y^{2}} J F F F + 2 J \frac{\partial^{2} J}{\partial Y^{2}} F J F F + 3 J \frac{\partial^{2} J}{\partial Y^{2}} F F J F + J \frac{\partial J}{\partial Y} \frac{\partial J}{\partial Y} F F F + {} \\
& J \frac{\partial J}{\partial Y} J J F F + 3 J \frac{\partial J}{\partial Y} J F J F + 3 J \frac{\partial J}{\partial Y} F \frac{\partial J}{\partial Y} F F + 3 J \frac{\partial J}{\partial Y} F J J F + J J \frac{\partial^{2} J}{\partial Y^{2}} F F F + J J \frac{\partial J}{\partial Y} J F F + 2 J J \frac{\partial J}{\partial Y} F J F + {} \\
& J J J \frac{\partial J}{\partial Y} F F + J J J J J F.
\end{aligned}$$
Due to their complexity, we do not recommend implementing methods utilizing these formulas with the double data type; instead, an extended data type should be used.

References

  1. Hairer, E.; Nørsett, S.P.; Wanner, G. Solving Ordinary Differential Equations I: Nonstiff Problems; Springer: Berlin/Heidelberg, Germany, 1987. [Google Scholar]
  2. Corliss, G.; Chang, Y. Solving ordinary differential equations using Taylor series. ACM Trans. Math. Softw. (TOMS) 1982, 8, 114–144. [Google Scholar] [CrossRef]
  3. Abad, A.; Barrio, R.; Blesa, F.; Rodríguez, M. Algorithm 924: TIDES, a Taylor series integrator for differential equations. ACM Trans. Math. Softw. (TOMS) 2012, 39, 1–28. [Google Scholar] [CrossRef]
  4. Miletics, E.; Molnárka, G. Taylor Series Method with Numerical Derivatives for numerical solution of ODE initial values problems. Hung. Electron. J. Sci. 2003, 1–16. [Google Scholar]
  5. Berry, D.W.; Childs, A.M.; Cleve, R.; Kothari, R.; Somma, R.D. Simulating Hamiltonian dynamics with a truncated Taylor series. Phys. Rev. Lett. 2015, 114, 090502. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Barrio, R. Sensitivity analysis of ODEs/DAEs using the Taylor series method. SIAM J. Sci. Comput. 2006, 27, 1929–1947. [Google Scholar] [CrossRef]
  7. Brugnano, L.; Mazzia, F.; Trigiante, D. Fifty Years of Stiffness. In Recent Advances in Computational and Applied Mathematics; Springer: Dordrecht, The Netherlands, 2009. [Google Scholar] [CrossRef] [Green Version]
  8. Hairer, E.; Wanner, G. Solving Ordinary Differential Equations II. Stiff and Differential-Algebraic Problems; Springer: Berlin/Heidelberg, Germany, 1996. [Google Scholar]
  9. Barrio, R. Performance of the Taylor series method for ODEs/DAEs. Appl. Math. Comput. 2005, 163, 525–545. [Google Scholar] [CrossRef]
  10. Koskela, A.; Ostermann, A. Exponential Taylor methods: Analysis and implementation. Comput. Math. Appl. 2013, 65, 487–499. [Google Scholar] [CrossRef]
  11. El-Zahar, E.R.; Tenreiro Machado, J.; Ebaid, A. A New Generalized Taylor-Like Explicit Method for Stiff Ordinary Differential Equations. Mathematics 2019, 7, 1154. [Google Scholar] [CrossRef] [Green Version]
  12. ul Islam, S.; Aziz, I.; Haq, F. A comparative study of numerical integration based on Haar wavelets and hybrid functions. Comput. Math. Appl. 2010, 59, 2026–2036. [Google Scholar] [CrossRef] [Green Version]
  13. Sabermahani, S.; Ordokhani, Y.; Yousefi, S.A. Fractional-order general Lagrange scaling functions and their applications. BIT Numer. Math. 2020, 60, 101–128. [Google Scholar] [CrossRef]
  14. Wuytack, L. Numerical integration by using nonlinear techniques. J. Comput. Appl. Math. 1975, 1, 267–272. [Google Scholar] [CrossRef] [Green Version]
  15. Alaybeyi, M.M. On the Relation between Numerical Integration and Padé Approximation. Ph.D. Thesis, Carnegie Mellon University, Pittsburgh, PA, USA, 1994. [Google Scholar]
  16. Dahlquist, G.; Björck, Å. Numerical Methods in Scientific Computing, Volume I; SIAM: Philadelphia, PA, USA, 2008. [Google Scholar]
  17. Burtsev, Y. High Precision Methods Based on Pade Approximation of Matrix Exponent for Numerical Analysis of Stiff-Oscillatory Electrical Circuits. In Proceedings of the 2020 International Conference on Industrial Engineering, Applications and Manufacturing (ICIEAM), Sochi, Russia, 18–22 May 2020; pp. 1–7. [Google Scholar]
  18. Gadella, M.; Lara, L. A Numerical method for solving ODE by rational approximation. Appl. Math. Sci. 2013, 7, 1119–1130. [Google Scholar] [CrossRef]
  19. Graves-Morris, P.; Hopkins, T. Reliable rational interpolation. Numer. Math. 1980, 36, 111–128. [Google Scholar] [CrossRef]
  20. Ramos, H.; Singh, G.; Kanwar, V.; Bhatia, S. An embedded 3(2) pair of nonlinear methods for solving first order initial-value ordinary differential systems. Numer. Algorithms 2017, 75, 509–529. [Google Scholar] [CrossRef]
  21. Butusov, D.; Karimov, A.; Tutueva, A.; Kaplun, D.; Nepomuceno, E.G. The effects of Padé numerical integration in simulation of conservative chaotic systems. Entropy 2019, 21, 362. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  22. Wen, B.J.; Yun, N.; Wang, F.Z. An A-stable Explicit Numerical Integration Method Based on Padé Approximation. In Proceedings of the 2016 International Conference on Electrical Engineering and Automation (ICEEA 2016), Xiamen, China, 18–19 December 2016. [Google Scholar]
  23. Söderlind, G. Digital filters in adaptive time-stepping. ACM Trans. Math. Softw. (TOMS) 2003, 29, 1–26. [Google Scholar] [CrossRef] [Green Version]
  24. Balac, S.; Mahé, F. Embedded Runge–Kutta scheme for step-size control in the interaction picture method. Comput. Phys. Commun. 2013, 184, 1211–1219. [Google Scholar] [CrossRef] [Green Version]
  25. Jay, L.O. Lobatto methods. Chemistry 1996, 29, 298–305. [Google Scholar]
  26. Butusov, D.; Karimov, T.; Ostrovskii, V. Semi-implicit ODE solver for matrix Riccati equation. In Proceedings of the 2016 IEEE NW Russia Young Researchers in Electrical and Electronic Engineering Conference (EIConRusNW), Saint Petersburg, Russia, 2–3 February 2016; pp. 168–172. [Google Scholar]
  27. Wojcik, J.; Shilnikov, A. Voltage interval mappings for an elliptic bursting model. In Nonlinear Dynamics New Directions; Springer: Berlin/Heidelberg, Germany, 2015; pp. 195–213. [Google Scholar]
Figure 1. Stability regions of the order 2–7 RA methods.
Figure 2. Solution to the Van der Pol system in the time domain (top) and step sizes for different solvers (bottom).
Figure 3. Comparison of element-wise and matrix operations in the rational approximation method. The solution to the Van der Pol system in the time domain (top) and step sizes for different solvers (bottom).
Figure 4. Solution to the Riccati system in the time domain (top) and step sizes for different solvers (bottom).
Figure 5. Solution to the HIRES system in the time domain (top) and step sizes for different solvers (bottom).
Figure 6. Solution to the FitzHugh–Nagumo–Rinzel system in the time domain (top) and step sizes for different solvers (bottom).
Figure 7. Performance plots of the 4(3) methods for the stiff test problems: LobattoIIIC method, rational approximation method with analytical derivative (RA), Taylor series method, and explicit Runge–Kutta (ERK) method. The horizontal axis represents the negative logarithm of the achieved error at the last point; the vertical axis represents the computation time as the median of 7 successive runs.
Table 1. RA methods of orders 3–6.

p = 3:
\[ y_{n+1} = y_n + \frac{h f_n^2 + \frac{h^3}{12}\left(4 f_n'' f_n - 3 (f_n')^2\right)}{f_n - \frac{h}{2} f_n' + \frac{h^2}{6} f_n''} \]

p = 4:
\[ y_{n+1} = y_n + \frac{h f_n^2 + \frac{h^3}{12}\left(4 f_n'' f_n - 3 (f_n')^2\right)}{f_n - \frac{h}{2} f_n' + \frac{h^2}{6} f_n'' - \frac{h^3}{24} f_n^{(3)}} \]

p = 5:
\[ y_{n+1} = y_n + \frac{h f_n^2 + \frac{h^3}{12}\left(4 f_n'' f_n - 3 (f_n')^2\right) + \frac{h^5}{360}\left(6 f_n^{(4)} f_n + 10 (f_n'')^2 - 15 f_n^{(3)} f_n'\right)}{f_n - \frac{h}{2} f_n' + \frac{h^2}{6} f_n'' - \frac{h^3}{24} f_n^{(3)} + \frac{h^4}{120} f_n^{(4)}} \]

p = 6:
\[ y_{n+1} = y_n + \frac{h f_n^2 + \frac{h^3}{12}\left(4 f_n'' f_n - 3 (f_n')^2\right) + \frac{h^5}{360}\left(6 f_n^{(4)} f_n + 10 (f_n'')^2 - 15 f_n^{(3)} f_n'\right)}{f_n - \frac{h}{2} f_n' + \frac{h^2}{6} f_n'' - \frac{h^3}{24} f_n^{(3)} + \frac{h^4}{120} f_n^{(4)} - \frac{h^5}{720} f_n^{(5)}} \]
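To make the order-3 formula of Table 1 concrete, below is a minimal illustrative sketch (not the authors' code) for a scalar ODE y' = f(y). The function names and signature are assumptions; the caller supplies the total time derivatives f' = df/dt and f'' = d²f/dt² analytically, as the paper's numerical-analytical approach requires.

```python
def ra3_step(y, h, f, df, d2f):
    """One step of the order-3 rational approximation (RA) method (Table 1).

    df, d2f are the total time derivatives of f along the solution.
    Assumes f(y) != 0 on the step, since the rational form degenerates otherwise.
    """
    fn, dfn, d2fn = f(y), df(y), d2f(y)
    num = h * fn**2 + h**3 / 12.0 * (4.0 * d2fn * fn - 3.0 * dfn**2)
    den = fn - h / 2.0 * dfn + h**2 / 6.0 * d2fn
    return y + num / den

# Usage on the stiff linear test problem y' = lam*y, where the total time
# derivatives are f' = lam^2 * y and f'' = lam^3 * y.
lam = -50.0
f   = lambda y: lam * y
df  = lambda y: lam**2 * y
d2f = lambda y: lam**3 * y

y, h = 1.0, 0.01
for _ in range(100):      # integrate from t = 0 to t = 1
    y = ra3_step(y, h, f, df, d2f)
```

For this linear problem the step reduces to multiplying y by a rational stability function R(hλ), which is why the method stays stable for strongly negative hλ where an explicit Taylor step of the same order would eventually diverge.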
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

