**1. Introduction**

In this article, we discuss various methods for the numerical solution of the second order initial value problem

$$\begin{array}{l} y'' = f(x, y), \\ y(x\_0) = y\_0, \\ y'(x\_0) = y'\_0. \end{array} \tag{1}$$

If the initial value problem contains *y*′(*x*), then it is usually converted to the first order system

$$\begin{array}{lcl} y\_1'(x) & = & y\_2, \\ y\_1(x\_0) & = & y\_0, \\ y\_2'(x) & = & f(x, y\_1, y\_2), \\ y\_2(x\_0) & = & y'\_0, \end{array} \tag{2}$$

by defining new variables *y*<sub>1</sub> = *y*, *y*<sub>2</sub> = *y*′. In vector notation the system (2) can be written as

$$\mathbf{y}' = \mathbf{f}(x, \mathbf{y}), \quad \mathbf{y}(x\_0) = \mathbf{y}\_0, \tag{3}$$

where **y** = [*y*<sub>1</sub>, *y*<sub>2</sub>]<sup>*T*</sup>, **f** = [*y*<sub>2</sub>, *f*(*x*, *y*<sub>1</sub>, *y*<sub>2</sub>)]<sup>*T*</sup>, and **y**<sub>0</sub> = [*y*<sub>0</sub>, *y*′<sub>0</sub>]<sup>*T*</sup>.
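A minimal sketch of this reduction in Python (the helper names are ours; a classical Runge-Kutta integrator is used only to exercise the system form):

```python
import math

def second_order_to_system(f):
    """Wrap y'' = f(x, y1, y2) as a first-order system y' = F(x, y), y = [y1, y2]."""
    def F(x, y):
        y1, y2 = y
        return [y2, f(x, y1, y2)]
    return F

def rk4_step(F, x, y, h):
    # One classical fourth-order Runge-Kutta step for y' = F(x, y).
    k1 = F(x, y)
    k2 = F(x + h/2, [y[i] + h/2*k1[i] for i in range(len(y))])
    k3 = F(x + h/2, [y[i] + h/2*k2[i] for i in range(len(y))])
    k4 = F(x + h, [y[i] + h*k3[i] for i in range(len(y))])
    return [y[i] + h/6*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(len(y))]

# Harmonic oscillator y'' = -y, y(0) = 1, y'(0) = 0; exact solution cos(x).
F = second_order_to_system(lambda x, y1, y2: -y1)
y, h = [1.0, 0.0], 0.01
for n in range(100):                       # integrate to x = 1
    y = rk4_step(F, n*h, y, h)
assert abs(y[0] - math.cos(1.0)) < 1e-7
```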

Here we are concerned with trigonometrically-fitted methods for (1) and (3).

There are several classes of methods, such as linear multistep methods (including Obrechkoff methods) and Runge-Kutta methods. Here we will introduce each class and then review the extension of those to solution of problems for which the frequency is approximately known in advance.

Linear multistep methods for the solution of (3) are given by

$$\sum\_{j=0}^{k} \alpha\_j y\_{n+j} = h \sum\_{j=0}^{k'} \beta\_j f\_{n+j} \tag{4}$$

and of (1) are given by

$$\sum\_{j=0}^{k} \alpha\_j y\_{n+j} = h^2 \sum\_{j=0}^{k'} \beta\_j f\_{n+j} \tag{5}$$

where *y*<sub>*n*+*j*</sub> is the approximate value at *x*<sub>*n*+*j*</sub>, and similarly for *f*<sub>*n*+*j*</sub>. Here *k* is called the *step-number*, and *k*′ is either *k* − 1 or *k*. In the former case the method is called *explicit*, and in the latter *implicit*. The coefficients *α*<sub>*j*</sub> and *β*<sub>*j*</sub> are chosen to satisfy stability and convergence, as we describe in the sequel.
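As an illustrative sketch (our own helper, not from the paper), one step of an explicit method of the form (4) solves for *y*<sub>*n*+*k*</sub>; the classical two-step Adams-Bashforth method serves as a concrete instance:

```python
import math

def lmm_explicit_step(alpha, beta, ys, fs, h):
    """Advance the explicit k-step method (4): solve
    alpha[k]*y_{n+k} = -sum_{j<k} alpha[j]*y_{n+j} + h*sum_{j<k} beta[j]*f_{n+j}."""
    k = len(alpha) - 1
    rhs = h*sum(b*f for b, f in zip(beta, fs)) - sum(alpha[j]*ys[j] for j in range(k))
    return rhs / alpha[k]

# Two-step Adams-Bashforth: rho(z) = z^2 - z, sigma(z) = (3z - 1)/2, order 2.
alpha = [0.0, -1.0, 1.0]
beta = [-0.5, 1.5]
f = lambda x, y: -y                       # test problem y' = -y, y(0) = 1
h = 0.01
xs, ys = [0.0, h], [1.0, math.exp(-h)]    # exact starting values
for n in range(99):                       # advance to x = 1
    fs = [f(xs[-2], ys[-2]), f(xs[-1], ys[-1])]
    ys.append(lmm_explicit_step(alpha, beta, ys[-2:], fs, h))
    xs.append(xs[-1] + h)
assert abs(ys[-1] - math.exp(-1.0)) < 1e-4
```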

We now introduce the *first* and *second characteristic polynomials*,

$$\rho(\zeta) = \sum\_{j=0}^{k} \alpha\_j \zeta^j,\tag{6}$$

$$\sigma(\zeta) = \sum\_{j=0}^{k'} \beta\_j \zeta^j. \tag{7}$$

For (3), explicit methods for which *ρ*(*ζ*) = *ζ*<sup>*k*</sup> − *ζ*<sup>*k*−1</sup> are called *Adams-Bashforth* and the implicit ones are called *Adams-Moulton*. Explicit methods for which *ρ*(*ζ*) = *ζ*<sup>*k*</sup> − *ζ*<sup>*k*−2</sup> are called *Nyström methods* and the implicit ones are called *Generalized Milne-Simpson methods*. Gautschi [1] has developed Adams-type methods for the first order equation as well as Nyström methods for the second order equation. Neta and Ford [2] developed only Nyström and Generalized Milne-Simpson methods for the first order equation.

**Definition 1.** *If, for an arbitrary sufficiently smooth test function z*(*x*)*, we have*

$$\sum\_{j=0}^{k} \alpha\_j z(x + jh) - h \sum\_{j=0}^{k'} \beta\_j z'(x + jh) = C\_{p+1} h^{p+1} z^{(p+1)}(x) + O(h^{p+2}),\tag{8}$$

*then p is called the order of the linear multistep method (4) and C*<sub>*p*+1</sub> *is its error constant.*

*The expression given by (8) is called the local truncation error at x*<sub>*n*+*k*</sub> *of the method (4), when z*(*x*) *is the theoretical solution of the initial value problem (3).*

In a similar fashion we have for (5)

$$\sum\_{j=0}^{k} \alpha\_j z(x + jh) - h^2 \sum\_{j=0}^{k'} \beta\_j z''(x + jh) = C\_{p+2} h^{p+2} z^{(p+2)}(x) + O(h^{p+3}).\tag{9}$$
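As a sketch of how (9) can be checked numerically (our own check), take the Numerov method (*α* = [1, −2, 1], *β* = [1/12, 10/12, 1/12], algebraic order *p* = 4) and the test function *z*(*x*) = *e*<sup>*x*</sup>; the left-hand side of (9) should then shrink like *h*<sup>6</sup>:

```python
import math

def numerov_defect(h):
    """Left-hand side of (9) for the Numerov method, z(x) = exp(x), at x = 0."""
    alpha = [1.0, -2.0, 1.0]
    beta = [1.0/12.0, 10.0/12.0, 1.0/12.0]
    z = math.exp                     # note z'' = z for z(x) = exp(x)
    return (sum(a * z(j*h) for j, a in enumerate(alpha))
            - h*h * sum(b * z(j*h) for j, b in enumerate(beta)))

# The defect behaves like C_{p+2} h^{p+2} z^{(p+2)}, i.e. O(h^6): halving h
# shrinks it by roughly 2^6 = 64 (up to higher-order terms).
r = numerov_defect(0.1) / numerov_defect(0.05)
assert 60.0 < r < 70.0
```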

Throughout, we shall assume that the linear multistep method (4) satisfies the following hypotheses (see [3]):


For the method (5) for second order initial value problems we have


We now consider the test equation (see, e.g., Chawla and Neta [4])

$$y''(\mathbf{x}) = -\lambda^2 y(\mathbf{x}).\tag{10}$$

Let *ζ*<sub>*s*</sub>, *s* = 1, 2, . . . , *k*, denote the zeros of the polynomial

$$
\Omega(\zeta, H^2) = \rho(\zeta) + H^2 \sigma(\zeta),
\tag{11}
$$

for *H* = *λh*, and let *ζ*<sub>1</sub>, *ζ*<sub>2</sub> correspond to perturbations of the principal roots of *ρ*(*ζ*). The method has an *interval of periodicity* (0, *H*<sub>0</sub><sup>2</sup>) if, for all *H*<sup>2</sup> in the interval, the roots *ζ*<sub>*s*</sub> of (11) satisfy *ζ*<sub>1</sub> = *e*<sup>*iθ*(*H*)</sup>, *ζ*<sub>2</sub> = *e*<sup>−*iθ*(*H*)</sup>, |*ζ*<sub>*s*</sub>| ≤ 1 for *s* ≥ 3, where *θ*(*H*) is real.

If the interval of periodicity is (0, ∞), then the method is called *P-stable*. Lambert and Watson [5] showed that P-stable linear multistep methods are implicit and of order at most 2.
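A small sketch of this definition for the Numerov method (our own check; its interval of periodicity is known to be (0, 6)): the roots of (11) stay on the unit circle inside the interval and leave it beyond.

```python
import cmath

def numerov_roots(H2):
    """Roots of Omega(z, H^2) = rho(z) + H^2*sigma(z) = 0 for the Numerov method,
    rho(z) = z^2 - 2z + 1, sigma(z) = (z^2 + 10z + 1)/12."""
    a = 1.0 + H2/12.0
    b = -2.0 + 10.0*H2/12.0
    c = 1.0 + H2/12.0
    d = cmath.sqrt(b*b - 4.0*a*c)
    return ((-b + d)/(2*a), (-b - d)/(2*a))

# Inside (0, 6) both roots lie on the unit circle; beyond it one escapes.
r1, r2 = numerov_roots(5.9)
assert abs(abs(r1) - 1.0) < 1e-12 and abs(abs(r2) - 1.0) < 1e-12
r1, r2 = numerov_roots(6.1)
assert max(abs(r1), abs(r2)) > 1.0 + 1e-6
```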

**Remark 1.** *If the problem (1) has periodic solutions and the period is not known, then the P-stability is desirable. If the period is known approximately, then one can use the ideas in Gautschi [1], Neta and Ford [2], and others to be reviewed here.*

Another important property when solving (1) is the *phase lag* which was introduced by Brusa and Nigro [6]. Upon applying a linear two-step method to the test Equation (10), we obtain a difference equation of the form

$$A(H)y\_{n+2} + B(H)y\_{n+1} + \mathcal{C}(H)y\_n = 0,\tag{12}$$

whose solution is

$$y\_n = B\_1 \lambda\_1^n + B\_2 \lambda\_2^n,\tag{13}$$

where *B*1 and *B*2 are constants depending on the initial conditions. The quadratic polynomial

$$A(H)\lambda^2 + B(H)\lambda + \mathcal{C}(H) = 0,\tag{14}$$

is called the *stability polynomial*. The solutions to (14) are given by

$$\begin{array}{l} \lambda\_1 = e^{(-a(H) + ib(H))H}, \\ \lambda\_2 = e^{(-a(H) - ib(H))H}. \end{array} \tag{15}$$

If *a*(*H*) ≡ 0 and *b*(*H*) ≡ 1, then we get the exact solution to the test Equation (10). The difference between the amplitudes of the exact solution of (10) and the numerical solution is called the *dissipation error*, see [7]. The expansion of *b*(*H*) − 1 in powers of *H* is called the *phase lag expansion*. The modulus of the leading term is the *phase lag* of the method. See also Thomas [8] and Twizell [9].
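As a numerical sketch (our own computation), the phase lag of the Numerov method, known to be *H*<sup>4</sup>/480, can be recovered from the principal roots of its stability polynomial:

```python
import math

def numerov_phase(H):
    """theta(H) for the Numerov method applied to (10): the principal roots
    are exp(+/- i*theta(H)) with cos(theta) = -B(H)/(2*A(H))."""
    A = 1.0 + H*H/12.0
    B = -2.0 + 10.0*H*H/12.0
    return math.acos(-B/(2.0*A))

# b(H) = theta(H)/H; the leading term of b(H) - 1 is the phase lag.
# For Numerov this is H^4/480, so the ratio below approaches 1/480.
H = 0.1
ratio = (numerov_phase(H)/H - 1.0) / H**4
assert abs(ratio - 1.0/480.0) < 1e-5
```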

**Remark 2.** *Raptis and Simos have developed methods with minimal phase-lag and also P-stable methods in [10–14].*

We now introduce an extension to the linear multistep methods. These are called *multiderivative* or *Obrechkoff* methods, see Obrechkoff [15] or [16].

For the first order equation we have

$$\sum\_{j=0}^{k} \alpha\_j y\_{n-j+1} = \sum\_{i=1}^{\ell} \sum\_{j=0}^{k} \beta\_{i,j} h^{i} y\_{n-j+1}^{(i)}, \tag{16}$$

and for the second order equation

$$\sum\_{j=0}^{k} \alpha\_j y\_{n-j+1} = \sum\_{i=1}^{\ell} \sum\_{j=0}^{k} \beta\_{i,j} h^{2i} y\_{n-j+1}^{(2i)}.\tag{17}$$

According to Lambert and Mitchell [17], the error constant decreases more rapidly with increasing ℓ than with increasing the step-number *k*. Thus, one can get one-step high order methods. A list of Obrechkoff methods for various *k* and ℓ is given in [17] for first order equations and in [18] for second order equations.

Several P-stable Obrechkoff methods for second-order initial-value problems (for ℓ ≤ 3) were derived by Van Daele and Vanden Berghe [19]. Ananthakrishnaiah [18] has also included the case ℓ = 4.

Lastly, we introduce Runge-Kutta-Nyström (RKN) methods.

The general form of an explicit *k*-stage two-step Runge-Kutta-Nyström (RKN) method for the solution of (1) is given by (see Franco and Rández [20])

$$Y\_i = (1 + c\_i)y\_n - c\_i y\_{n-1} + h^2 \sum\_{j=1}^k a\_{i,j} f\left(x\_n + c\_j h, Y\_j\right),\tag{18}$$

$$y\_{n+1} = 2y\_n - y\_{n-1} + h^2 \sum\_{i=1}^k b\_i f\left(x\_n + c\_i h, Y\_i\right). \tag{19}$$
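A sketch of one step of (18)-(19) in Python (our own helper; with *k* = 1, *c*<sub>1</sub> = 0, *b*<sub>1</sub> = 1 it collapses to the classical two-step Störmer rule, which we use as a smoke test):

```python
import math

def rkn_two_step(f, x_n, y_nm1, y_n, h, b, c, a):
    """One step of the explicit two-step RKN method (18)-(19).
    b, c are length-k vectors; a is k-by-k, strictly lower triangular."""
    k = len(b)
    Y = [0.0]*k
    for i in range(k):
        s = sum(a[i][j]*f(x_n + c[j]*h, Y[j]) for j in range(i))
        Y[i] = (1.0 + c[i])*y_n - c[i]*y_nm1 + h*h*s
    return 2.0*y_n - y_nm1 + h*h*sum(b[i]*f(x_n + c[i]*h, Y[i]) for i in range(k))

# Simplest instance (k = 1, c1 = 0, b1 = 1): the Stormer/leapfrog rule
# y_{n+1} = 2*y_n - y_{n-1} + h^2*f(x_n, y_n), second order on y'' = -y.
f = lambda x, y: -y
h = 0.001
ys = [math.cos(-h), 1.0]                  # exact values at x = -h and x = 0
for n in range(1000):                     # integrate to x = 1
    ys.append(rkn_two_step(f, n*h, ys[-2], ys[-1], h, b=[1.0], c=[0.0], a=[[0.0]]))
assert abs(ys[-1] - math.cos(1.0)) < 1e-5
```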

Vigo-Aguiar and Ramos [21] introduced methods based on Runge-Kutta collocation.

**Definition 2.** *A trigonometrically-fitted RKN method (18)-(19) integrates exactly the functions* sin(*λx*) *and* cos(*λx*)*, with λ* > 0 *the principal frequency of the problem, when applied to the test Equation (10).*

In general, a method integrates exactly the set of functions {*u*<sub>1</sub>(*x*), *u*<sub>2</sub>(*x*), ... , *u*<sub>*r*</sub>(*x*)}, *r* ≤ *k*, if the following conditions are satisfied:

$$\begin{aligned} u\_{\ell}(\mathbf{x}\_{n} + h) &= 2u\_{\ell}(\mathbf{x}\_{n}) - u\_{\ell}(\mathbf{x}\_{n} - h) + h^{2} \sum\_{i=1}^{k} b\_{i} u\_{\ell}^{\prime\prime}(\mathbf{x}\_{n} + c\_{i}h), \; \ell = 1, \ldots, r \\ u\_{\ell}(\mathbf{x}\_{n} + c\_{i}h) &= (1 + c\_{i})u\_{\ell}(\mathbf{x}\_{n}) - c\_{i}u\_{\ell}(\mathbf{x}\_{n} - h) + h^{2} \sum\_{j=1}^{k} a\_{ij} u\_{\ell}^{\prime\prime}(\mathbf{x}\_{n} + c\_{j}h), \\ &\quad i = 1, \ldots, k, \; \ell = 1, \ldots, r \end{aligned} \tag{20}$$

#### **2. Methods Based on Linear Multistep Methods**

The idea of fitting functions other than monomials goes back to Greenwood [22], Brock and Murray [23], Dennis [24], Gautschi [1] and Salzer [25].

The first paper suggesting the use of the frequency of the solution is due to Gautschi [1]. He considered Störmer type methods for the solution of (1). The idea is to allow the coefficients to depend on the frequency *ω*. Let L be a functional defined by

$$\mathcal{L}y = \sum\_{j=0}^{k} \left[ \alpha\_j y(x\_0 + (n+1-j)h) - h^2\beta\_j f(x\_0 + (n+1-j)h) \right],\tag{21}$$

where *α*0 = 1. Since we are introducing trigonometric functions, we refer to order as *algebraic order* and define *trigonometric order* as follows:

**Definition 3.** *A linear functional* L *defined on C*<sup>*s*</sup>[*a*, *b*] *is said to be of algebraic order p if*

$$\mathcal{L}x^r \equiv 0, \; r = 0, 1, \dots, p,$$

*and* L*xp*+<sup>1</sup> *does not vanish. Therefore we have p* + 1 *conditions for methods of algebraic order p. The method*

$$\sum\_{j=0}^{k} \alpha\_j y\_{n+j} = h \sum\_{j=0}^{k'} \beta\_j(v) f\_{n+j} \tag{22}$$

*where v* = *ωh and α*<sub>*k*</sub> = 1*, is said to be of trigonometric order q relative to the frequency ω if the associated linear functional*

$$\mathcal{L}y(x) = \sum\_{j=0}^{k} \alpha\_j y\_{n+j} - h \sum\_{j=0}^{k'} \beta\_j(v) y'\_{n+j}$$

*satisfies*

$$\mathcal{L}1 \equiv 0,$$

*and*

$$\mathcal{L}\cos(r\omega x) \equiv \mathcal{L}\sin(r\omega x) \equiv 0, \; r = 1, 2, \dots, q,$$

*and* Lcos((*q* + 1)*ωx*) *and* Lsin((*q* + 1)*ωx*) *are not both identically zero.*

Therefore, methods of trigonometric order *q* satisfy 2*q* + 1 conditions.

Linear multistep trigonometrically-fitted methods for second order ordinary differential equations (ODEs) (1) satisfy the additional condition

> L*x* ≡ 0

for the same order, see Lambert [3].

**Remark 3.** *The trigonometric order is lower than the algebraic order, since trigonometric polynomials require two conditions for each degree, see Lambert [3].*

Gautschi [1] allowed the coefficients *αj* to depend on *v* and listed several explicit and implicit methods of trigonometric orders *q* ≤ 3. The form of the explicit methods is:

$$y\_{n+1} + \alpha\_{q1}(v)y\_n + \alpha\_{q2}(v)y\_{n-1} = h^2 \sum\_{j=1}^{2q-1} \beta\_{qj}(v)f\_{n+1-j}.\tag{23}$$

We list only the methods of trigonometric orders 1 and 2, using powers of cos(*v*) instead of the Taylor series expansions shown in [1]. Those Taylor series expansions should be used when *h* → 0.

For *q* = 1, the coefficients are:

$$
\alpha\_{11} = -2, \ \alpha\_{12} = 1, \ \beta\_{11} = \left(\frac{2\sin(v/2)}{v}\right)^2. \tag{24}
$$
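Since this method is fitted at the frequency *ω*, it propagates solutions of the test Equation (10) exactly up to roundoff; a short sketch, with our own helper name:

```python
import math

def gautschi_q1_step(y_prev, y_curr, f_curr, omega, h):
    """One step of the explicit method (23) with coefficients (24):
    y_{n+1} = 2*y_n - y_{n-1} + h^2*beta11(v)*f_n, where v = omega*h."""
    v = omega*h
    beta11 = (2.0*math.sin(v/2.0)/v)**2
    return 2.0*y_curr - y_prev + h*h*beta11*f_curr

# On y'' = -omega^2*y the recurrence reduces to y_{n+1} = 2*cos(v)*y_n - y_{n-1},
# which cos(omega*x) satisfies identically.
omega, h = 3.0, 0.1
ys = [math.cos(-omega*h), 1.0]            # exact values at x = -h and x = 0
for n in range(100):
    ys.append(gautschi_q1_step(ys[-2], ys[-1], -omega**2*ys[-1], omega, h))
assert abs(ys[-1] - math.cos(omega*100*h)) < 1e-10
```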

For *q* = 2, the coefficients are:

$$\begin{aligned} \alpha\_{21} &= \frac{2}{3} \left( \cos(2v) - 4 \cos(v) \right), \\ \alpha\_{22} &= -\alpha\_{21} - 1, \\ \beta\_{21} &= \frac{1}{6} \frac{-16 \cos(v)^3 + 9 \cos(v) + 7}{v^2 (2 \cos(v) + 1)}, \\ \beta\_{22} &= \frac{1}{3} \frac{8 \cos(v)^3 - 9 \cos(v)^2 - 3 \cos(v) + 4}{v^2 (2 \cos(v) + 1)}, \\ \beta\_{23} &= \frac{1}{2} \frac{1 - \cos(v)}{v^2 (2 \cos(v) + 1)}. \end{aligned} \tag{25}$$

The form of the implicit methods is:

$$y\_{n+1} + \alpha\_{q1}(v)y\_n + \alpha\_{q2}(v)y\_{n-1} = h^2 \sum\_{j=0}^{2q-2} \beta\_{qj}(v)f\_{n+1-j}.\tag{26}$$

For *q* = 1, the coefficients are:

$$\begin{aligned} \alpha\_{11} &= \frac{2\cos(v)}{1 - 2\cos(v)}, \\\\ \alpha\_{12} &= -\alpha\_{11} - 1, \\\\ \beta\_{10} &= \frac{2(1 - \cos(v))}{v^2(2\cos(v) - 1)}. \end{aligned} \tag{27}$$

For *q* = 2, the coefficients are:

$$\begin{aligned} \alpha\_{21} &= -2, \\ \alpha\_{22} &= 1, \\ \beta\_{20} &= \frac{1}{2} \frac{1 - \cos(v)}{v^2 (2 \cos(v) + 1)}, \\ \beta\_{21} &= \frac{2 + \cos(v) - 3 \cos(v)^2}{v^2 (2 \cos(v) + 1)}, \\ \beta\_{22} &= \beta\_{20}. \end{aligned} \tag{28}$$

Neta and Ford [2] have constructed *Nyström* and *Generalized Milne-Simpson methods* for the first order system (3), where the coefficients *β*<sub>*j*</sub> are functions of the frequency. Here we list a few of those.

For *q* = 1, the explicit method is

$$y\_{n+2} - y\_n = \frac{2\sin(v)}{v} hf\_{n+1}.\tag{29}$$

For *q* = 2, the explicit method is

$$\begin{split} y\_{n+4} - y\_{n+2} = -h \frac{\sin(v)}{v(1+2\cos(v))} \Big[ & f\_n + 2(1 - 2\cos(v))(1 + \cos(v))f\_{n+1} \\ & + (4\cos(v)\cos(2v) + 1)f\_{n+2} - 4\cos(v)(1 + \cos(v))f\_{n+3} \Big]. \end{split} \tag{30}$$

For *q* = 1, the implicit method is a one-parameter family

$$y\_{n+2} - y\_n = h \left[ \beta\_0 f\_n + \left( \frac{2 \sin(v)}{v} - 2\beta\_0 \cos(v) \right) f\_{n+1} + \beta\_0 f\_{n+2} \right]. \tag{31}$$

Note that the choice *β*<sub>0</sub> = 0 leads to the explicit method (29). For *q* = 2, the implicit method is

$$y\_{n+3} - y\_{n+1} = h \frac{\sin(v)}{v(1 + 2\cos(v))} \left[ f\_{n+1} + 2(1 + \cos(v)) f\_{n+2} + f\_{n+3} \right].\tag{32}$$
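As a sketch (our own check), one can verify that the explicit method (29), fitted at the true frequency, propagates the test Equation (10), written as the system *y*<sub>1</sub>′ = *y*<sub>2</sub>, *y*<sub>2</sub>′ = −*ω*<sup>2</sup>*y*<sub>1</sub>, exactly up to roundoff:

```python
import math

def neta_ford_q1_step(y_n, f_np1, omega, h):
    """Explicit method (29) for the system (3):
    y_{n+2} = y_n + (2*sin(v)/v)*h*f_{n+1}, with v = omega*h."""
    c = 2.0*math.sin(omega*h)/(omega*h)*h
    return [y_n[i] + c*f_np1[i] for i in range(len(y_n))]

# Exact solution of the test system: [cos(omega*x), -omega*sin(omega*x)].
omega, h = 2.0, 0.1
exact = lambda x: [math.cos(omega*x), -omega*math.sin(omega*x)]
ys = [exact(0.0), exact(h)]
for n in range(99):
    f_np1 = [ys[-1][1], -omega**2*ys[-1][0]]   # f at the middle point
    ys.append(neta_ford_q1_step(ys[-2], f_np1, omega, h))
err = max(abs(ys[-1][i] - exact(100*h)[i]) for i in range(2))
assert err < 1e-9
```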

Vigo-Aguiar and Ramos [26] show how to choose the frequency for nonlinear ODEs. Van der Houwen and Sommeijer [27] have developed predictor-corrector methods. Neta [28] has developed exponentially fitted methods for problems whose oscillatory solution is damped. Raptis and Allison [29] have used the idea for the solution of the Schrödinger equation. Stiefel and Bettis [30] have stabilized Cowell's method [31] by fitting trigonometric polynomials. Lambert and Watson [5] introduced symmetric multistep methods which have a non-vanishing interval of periodicity. Quinlan and Tremaine [32] have developed high order symmetric multistep methods. Simos and Vigo-Aguiar [33] have developed exponentially-fitted symmetric methods of algebraic order eight based on the work of [32]. They demonstrated the superiority of their method on two orbital examples integrated on a long time interval *t* ∈ [0, 10<sup>5</sup>]. Another idea, developed by Neta and Lipowski [34], is to use the energy of the system instead of integrating the angular velocity. They have demonstrated the benefit of their method using several examples for perturbation-free flight and a more general case on long time flight. Vigo-Aguiar and Ferrándiz [35] have developed a general procedure for the adaptation of multistep methods to numerically solve problems having periodic solutions. Vigo-Aguiar et al. [36] and Martín-Vaquero and Vigo-Aguiar [37] have developed methods for stiff problems by using Backward Differentiation Formulae (BDF). See also Neta [38].

Sommeijer et al. [39] have suggested a different idea for trigonometrically-fitted methods. Instead of requiring the fitting of cosine and sine functions of multiples of the frequency, they suggest taking several frequencies in some interval around the known frequency. The frequencies are chosen to be the roots of a Chebyshev polynomial, so as to minimize the maximum error. Such methods are called minimax methods. See also Neta [40].

We now give more details. Suppose we have an interval $[\underline{\omega}, \bar{\omega}]$ of frequencies and we pick *q* frequencies

$$
\omega\_j = \left[ \frac{1}{2} \left( \bar{\omega}^2 + \underline{\omega}^2 \right) + \frac{1}{2} \left( \bar{\omega}^2 - \underline{\omega}^2 \right) \cos\left(\frac{2j - 1}{2q} \pi\right) \right]^{1/2}, \quad j = 1, \dots, q.
$$

The idea is to interpolate the sine and cosine functions of those frequencies:

> L1 ≡ 0,

and

$$\mathcal{L}\cos(\omega\_r \mathbf{x}) \equiv \mathcal{L}\sin(\omega\_r \mathbf{x}) \equiv 0, \; r = 1, 2, \dots, q.$$

Thus for the second order initial value problem, we have the system

$$(h\omega\_j)^2 \left\{ \sum\_{\ell=0}^{k/2-1} 2b\_{\ell} \cos((k/2-\ell)h\omega\_j) + b\_{k/2} \right\} = -\sum\_{\ell=0}^{k} a\_{\ell} \cos((k/2-\ell)h\omega\_j),$$

for *j* = 1, ... , *q*. Unfortunately, this yields very messy coefficients and we will not list any of them here.
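The Chebyshev-spaced frequencies themselves are cheap to compute; a minimal sketch following the displayed formula (the function name is ours):

```python
import math

def minimax_frequencies(w_lo, w_hi, q):
    """Frequencies omega_j in [w_lo, w_hi] whose squares are Chebyshev
    points of the interval [w_lo^2, w_hi^2]."""
    mid = 0.5*(w_hi**2 + w_lo**2)
    rad = 0.5*(w_hi**2 - w_lo**2)
    return [math.sqrt(mid + rad*math.cos((2*j - 1)*math.pi/(2*q)))
            for j in range(1, q + 1)]

ws = minimax_frequencies(1.0, 2.0, 4)
assert all(1.0 <= w <= 2.0 for w in ws)                    # stay in the interval
assert all(ws[i] > ws[i+1] for i in range(len(ws) - 1))    # decreasing in j
```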

#### **3. Methods Based on Obrechkoff Methods**

Simos [41] has developed a P-stable trigonometrically-fitted Obrechkoff method of algebraic order 10 for (1):

$$y\_{n+1} - 2y\_n + y\_{n-1} = \sum\_{j=1}^{3} h^{2j} \left[ b\_{j0} y\_{n+1}^{(2j)} + 2b\_{j1} y\_n^{(2j)} + b\_{j0} y\_{n-1}^{(2j)} \right],\tag{33}$$

where

$$\begin{aligned} b\_{10} &= \frac{89}{1878} - \frac{15120}{313} b\_{31}, \\ b\_{11} &= \frac{425}{939} + \frac{15120}{313} b\_{31}, \\ b\_{20} &= -\frac{1907}{1577520} + \frac{660}{313} b\_{31}, \\ b\_{21} &= \frac{30257}{1577520} + \frac{690}{313} b\_{31}, \\ b\_{30} &= \frac{59}{3155040} - \frac{13}{313} b\_{31}. \end{aligned} \tag{34}$$
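A quick consistency sketch (ours): whatever the free parameter *b*<sub>31</sub>, the coefficients (34) must reproduce the *h*<sup>2</sup> Taylor coefficient of *y*<sub>*n*+1</sub> − 2*y*<sub>*n*</sub> + *y*<sub>*n*−1</sub>, which for (33) requires 2*b*<sub>10</sub> + 2*b*<sub>11</sub> = 1:

```python
from fractions import Fraction as F

# The b31-dependent parts of b10 and b11 in (34) cancel, so the identity
# 2*b10 + 2*b11 = 1 holds exactly for any choice of the free parameter.
for b31 in (F(0), F(1, 7), F(-3, 5)):
    b10 = F(89, 1878) - F(15120, 313)*b31
    b11 = F(425, 939) + F(15120, 313)*b31
    assert 2*b10 + 2*b11 == 1
```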

In order to ensure P-stability, the coefficient *b*<sub>31</sub> must be

$$\begin{aligned} b\_{31} = \big( 190816819200[1 - \cos(H)] &- 95408409600H^2 + 7950700800H^4 - 265023360H^6 \\ &+ 4732560H^8 - 52584H^{10} + 1727H^{12} \big) / \left(3568320H^{12}\right). \end{aligned} \tag{35}$$

The method requires an approximation of the first derivative which is given by

$$y\_{n+1}' = \frac{1}{2h} \left( y\_{n-1} - 4y\_n + 3y\_{n+1} \right) - \frac{h}{12} \left( y\_{n-1}'' + 2y\_n'' + 3y\_{n+1}'' \right). \tag{36}$$

He showed that the local truncation error is

$$LTE = \left( -\frac{2923}{209898501120} + \frac{59}{1577520} b\_{31} \right) h^{12} y\_n^{(12)}.$$

Wang et al. [42] have suggested a slight modification to the coefficient *b*<sub>31</sub> as follows:

$$b\_{31} = \frac{3155040 - 1428000H^2 + 60514H^4 - a\_1 \cos(H)}{5040H^2 \left(-15120 + 6900H^2 - 313H^4 + a\_2 \cos(H)\right)}, \tag{37}$$

where *a*<sub>1</sub> = 3155040 + 149520*H*<sup>2</sup> + 3814*H*<sup>4</sup> + 59*H*<sup>6</sup> and *a*<sub>2</sub> = 15120 + 660*H*<sup>2</sup> + 13*H*<sup>4</sup>. Wang et al. [42] have developed a method of algebraic order 12 as follows:

$$\begin{array}{rcl} y\_{n+1} - 2y\_n + y\_{n-1} &=& h^2 \left( \alpha\_1 \left( y\_{n+1}'' + y\_{n-1}'' \right) + \alpha\_2 y\_n'' \right) \\ &+& h^4 \left( \beta\_1 \left( y\_{n+1}^{(4)} + y\_{n-1}^{(4)} \right) + \beta\_2 y\_n^{(4)} \right) \\ &+& h^6 \left( \gamma\_1 \left( y\_{n+1}^{(6)} + y\_{n-1}^{(6)} \right) + \gamma\_2 y\_n^{(6)} \right), \end{array} \tag{38}$$

where

$$\alpha\_1 = \frac{229}{7788}, \ \beta\_1 = -\frac{1}{2360}, \ \beta\_2 = \frac{711}{12980},$$

$$\gamma\_1 = \frac{127}{39251520}, \ \ \gamma\_2 = \frac{2923}{3925152},$$

and *α*2 is chosen so the method is P-stable,

$$\alpha\_2 = -2H^{-2} + H^2 \beta\_2 - H^4 \gamma\_2 + 2 \cos(H) \left( H^{-2} - \alpha\_1 + H^2 \beta\_1 - H^4 \gamma\_1 \right).$$

The method is of algebraic order 12 and the local truncation error is now

$$LTE = \frac{45469}{1697361329664000} h^{14} \left(\omega^{12} y\_n^{\prime\prime} - y\_n^{(14)}\right).$$

The first derivative is approximated by

$$\begin{aligned} y\_{n+1}' &= \frac{1}{66h} \left( 305y\_{n+1} - 544y\_n + 239y\_{n-1} \right) + \frac{h}{1980} \left( -5728y\_n'' - 571y\_{n-1}'' + 119y\_{n+1}'' \right) \\ &+ \frac{h^2}{2970} \left( 128y\_n''' - 173y\_{n-1}''' \right) + \frac{h^3}{2970} \left( -346y\_n^{(4)} - 13y\_{n-1}^{(4)} \right) + \frac{h^5}{62370} \left( -71y\_n^{(6)} + y\_{n-1}^{(6)} \right). \end{aligned}$$

**Remark 4.** *Neta [43] has developed a P-stable method of algebraic order 18.*

Vanden Berghe and Van Daele [44] have suggested fitting a combination of monomials and exponentials, i.e., the set {1, *x*, ... , *x*<sup>*K*</sup>, *e*<sup>±*μx*</sup>, *xe*<sup>±*μx*</sup>, ... , *x*<sup>*P*</sup>*e*<sup>±*μx*</sup>}. Clearly, when *μ* is purely imaginary, we get the cosine and sine functions. When *K* = −1, we get only the exponential functions, and when *P* = −1 we get only monomials (which is the well known Obrechkoff method). Even when *K* = −1, we are not getting the cosine and sine functions of multiples of the frequency as in the previously discussed methods. They developed methods of algebraic order 8. Here we list only two of those, one with *K* = 5, *P* = 1 (39) and the other with *K* = 7, *P* = 0 (40).

The first method is given by

$$\begin{aligned} b\_{10} &= \frac{1}{12} - 2b\_{20} - 2b\_{21}, \\ b\_{11} &= \frac{5}{12} + 2b\_{20} + 2b\_{21}, \\ b\_{20} &= \frac{v^5 \sin(v) + 2(\cos(v) + 5)v^4 + 48(\cos(v) - 1)A}{12v^4(v^3 \sin(v) - 4(1 - \cos(v))^2)}, \\ b\_{21} &= \frac{5v^5 \sin(v) - 2\cos(v)(\cos(v) + 5)v^4 - 48(\cos(v) - 1)B}{12v^4(v^3 \sin(v) - 4(1 - \cos(v))^2)}, \end{aligned} \tag{39}$$

where *A* = *v*<sup>2</sup> + cos(*v*) − 1 and *B* = *v*<sup>2</sup> cos(*v*) + cos(*v*) − 1.

The second method is

$$\begin{aligned} b\_{10} &= \frac{1}{30} - 12b\_{20}, \\ b\_{11} &= \frac{7}{15} + 12b\_{20}, \\ b\_{20} &= \frac{4\cos(v)v^2 + 56v^2 - 3v^4 + 120\cos(v) - 120}{120v^2(12\cos(v) - 12 + \cos(v)v^2 + 5v^2)}, \\ b\_{21} &= \frac{1}{40} + 5b\_{20}. \end{aligned} \tag{40}$$

#### **4. Methods Based on Runge-Kutta**

For a trigonometrically-fitted method, we have (see Franco and Rández [20])

$$\begin{aligned} \sum\_{i=1}^k b\_i \cos(c\_i v) &= \frac{2(\cos(v) - 1)}{v^2}, \\ \sum\_{i=1}^k b\_i \sin(c\_i v) &= 0, \\ \sum\_{j=1}^k a\_{ij} \cos(c\_j v) &= \frac{\cos(c\_i v) + c\_i \cos(v) - (1 + c\_i)}{v^2}, \; i = 1, \dots, k, \\ \sum\_{j=1}^k a\_{ij} \sin(c\_j v) &= \frac{\sin(c\_i v) - c\_i \sin(v)}{v^2}, \; i = 1, \dots, k. \end{aligned} \tag{41}$$

The solution for *k* = 3 is given in Franco [45]:

$$\begin{aligned} c\_1 &= -1, \\ c\_2 &= 0, \\ c\_3 &= 1, \\ b\_1 &= \frac{2\cos(v) - 2 - v^2}{2v^2(\cos(v) - 1)}, \\ b\_2 &= \frac{(2 - v^2)\cos(v) - 2}{v^2(\cos(v) - 1)}, \\ b\_3 &= b\_1, \\ a\_{3,1} &= 0, \\ a\_{3,2} &= \frac{2(\cos(v) - 1)}{v^2}, \\ a\_{3,3} &= 0. \end{aligned} \tag{42}$$

This method has algebraic order 2 and reduces to the two-stage explicit Numerov method of Chawla [46]. The method integrates exactly the set of functions {1, *x*, *x*<sup>2</sup>, cos(*ωx*), sin(*ωx*)}, similar to the idea of Vanden Berghe and Van Daele [44].

Franco and Rández [20] have developed a 7-stage method of algebraic order 7 which integrates exactly the monomials up to *x*<sup>6</sup> as well as sin(*ωx*) and cos(*ωx*). A 5-stage family of methods of algebraic order 6, listed here and in Tsitouras [47], has been developed by Fang et al. [48]. Here we list just one member of the family.

$$c\_1 = -1, \ c\_2 = 0, \ c\_3 = \frac{1}{2}, \ c\_4 = -\frac{1}{2}, \ c\_5 = 1,\tag{43}$$

$$\begin{aligned} a\_{31} &= -\frac{\sin^2(v/4)}{v^2 \cos(v/2) + 2}, \\ a\_{32} &= \frac{2\cos(v/2) + 2\cot(v)\sin(v/2) - 3}{2v^2}, \\ a\_{41} &= \frac{36 + v^2 - 36\cos(v/2)}{72v^2\cos(v/2)}, \\ a\_{42} &= \frac{36\sin(v/2) - v^2\sin(3v/2) - 18\sin(v)}{36v^2\sin(v)}, \\ a\_{43} &= \frac{1}{36}, \\ a\_{51} &= -\frac{2}{9\cos(v/2)}, \\ a\_{52} &= \frac{2\cos(v) - 2}{v^2} - \frac{1}{3\cos(v/2)} - \frac{2\sin(3v/2)}{9\sin(v)}, \\ a\_{53} &= \frac{2}{9}, \\ a\_{54} &= \frac{2}{9}, \\ b\_{1} &= b\_{5} = \frac{6\cos(v) - 6 - v^2 - 2v^2\cos(v/2)}{48v^2\sin^4(v/4)}, \\ b\_{2} &= \frac{(18 + v^2)\cos(v) - 18 - 10v^2\cos(v/2)}{24v^2\sin^4(v/4)}, \\ b\_{3} &= b\_{4} = \frac{12 + 5v^2 + (v^2 - 12)\cos(v)}{24v^2\sin^4(v/4)}. \end{aligned} \tag{45}$$

Demba et al. [49] have developed fourth and fifth order trigonometrically-fitted Runge-Kutta-Nyström methods for (1). The idea is based on using a 3-stage method to get a 4-stage trigonometrically-fitted method. Here we list the coefficients

$$y\_{n+1} = y\_n + hy\_n' + h^2 \sum\_{i=1}^s b\_i f(x\_n + hc\_i, Y\_i) \tag{46}$$

$$y\_{n+1}' = y\_n' + h \sum\_{i=1}^s d\_i f(x\_n + hc\_i, Y\_i) \tag{47}$$

where *Y*<sub>*i*</sub> are given by (18), and

$$\begin{aligned} c\_1 &= 0, \\ c\_2 &= \frac{3}{16} \frac{v^3 \cos(v) - 5v^2 \sin(v) + 4v^3 - 32v \cos(v) + 160 \sin(v) - 128v}{v(6v \sin(v) + v^2 + 30 \cos(v) - 30)}, \\ c\_3 &= -\frac{3}{500} \frac{-11v^6(4 + \cos(v)) + 55v^5 \sin(v) + v^4(38 \cos(v) + 1536) + T\_1 + T\_2}{v^2(6v \sin(v) + v^2 + 30 \cos(v) - 30)}, \\ a\_{2,1} &= \frac{1}{32}, \\ a\_{3,1} &= -\frac{1}{1000} \frac{-11v^5 + 384v^3 + 2112 \sin(v) - 2112v}{v^3}, \\ a\_{3,2} &= \frac{44}{125}, \end{aligned} \tag{48}$$

where

$$T\_1 = -1920v^3 \sin(v) + 2112v \cos(v) \sin(v) - v^2 (672 \cos(v) + 4448) + 10560 \cos(v)^2,$$

and

$$\begin{aligned} T\_2 &= +3008v\sin(v) - 21120\cos(v) + 10560, \\ b\_1 &= \frac{1}{24}, \\ b\_2 &= \frac{16}{165}\frac{v^4 + 66v\sin(v) - 21v^2 + 330\cos(v) - 330}{v^2(v^2 - 32)}, \\ b\_3 &= \frac{25}{264}, \\ d\_1 &= \frac{1}{24}, \\ d\_2 &= \frac{16}{33}, \\ d\_3 &= \frac{125}{264}. \end{aligned} \tag{49}$$

The Taylor series expansion of the coefficients is given by

$$\begin{aligned} a\_{3,1} &= -4/125 - (33/5000)v^2 + (11/26250)v^4 - (11/1890000)v^6, \\ b\_2 &= 4/11 - (1/3600)v^4 + (1/161280)v^6, \\ c\_2 &= 1/4 - (1/26880)v^4 + (19/7741440)v^6, \\ c\_3 &= 4/5 + (13/4375)v^4 - (257/3780000)v^6. \end{aligned} \tag{50}$$
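A small sketch (ours) checking that the closed form of *b*<sub>2</sub> in (49) agrees with its Taylor expansion (50) for small *v*:

```python
import math

def b2_closed(v):
    """b2 from (49)."""
    num = v**4 + 66*v*math.sin(v) - 21*v**2 + 330*math.cos(v) - 330
    return (16.0/165.0) * num / (v**2 * (v**2 - 32.0))

def b2_series(v):
    """Truncated Taylor expansion of b2 from (50)."""
    return 4.0/11.0 - v**4/3600.0 + v**6/161280.0

# For small v the two agree to the order shown, and both tend to 4/11.
v = 0.2
assert abs(b2_closed(v) - b2_series(v)) < 1e-8
assert abs(b2_closed(v) - 4.0/11.0) < 1e-3
```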

#### **5. Comments on Order**

Definition 1 of order (see (8) and (9)) can be extended to trigonometrically-fitted methods. Note that a method is of order *p* for first (second) order ODEs if it fits monomials up to degree *p* + 1 (*p* + 2), respectively. Therefore, methods of trigonometric order *q* are methods of order 2*q*. A method such as (39) for second order ODEs is of eighth order, since it fits monomials up to degree 5 and *x*<sup>*n*</sup> cos(*ωx*), *x*<sup>*n*</sup> sin(*ωx*), *n* = 0, 1. In Table 1, we will list all methods used in the examples with their order.

