**1. Introduction**

The initial value problem (IVP) is

$$y' = f(x, y), \qquad y(x_0) = y_0 \tag{1}$$

with $x_0 \in \mathbb{R}$, $y, y' \in \mathbb{R}^m$ and $f : \mathbb{R} \times \mathbb{R}^m \to \mathbb{R}^m$.

Runge–Kutta (RK) pairs are amongst the most popular numerical methods for addressing (1). They are characterized by the following Butcher tableau [1,2]:

$$\begin{array}{c|c}
c & A \\ \hline
 & b^T \\
 & \hat{b}^T
\end{array}$$

with $b^T, \hat{b}^T, c \in \mathbb{R}^s$ and $A \in \mathbb{R}^{s \times s}$. The method then has $s$ stages, and when $c_1 = 0$ and $A$ is strictly lower triangular, it is explicit. The approximate solution advances from $(x_n, y_n)$ to $x_{n+1} = x_n + h_n$ by producing two approximations of $y(x_{n+1})$, namely $y_{n+1}$ and $\hat{y}_{n+1}$, given by

**Citation:** Shen, Y.-C.; Lin, C.-L.; Simos, T.E.; Tsitouras, C. Runge–Kutta Pairs of Orders 6(5) with Coefficients Trained to Perform Best on Classical Orbits. *Mathematics* **2021**, *9*, 1342. https://doi.org/ 10.3390/math9121342

Academic Editor: Arsen Palestini

Received: 26 May 2021 Accepted: 8 June 2021 Published: 10 June 2021



$$y_{n+1} = y_n + h_n \sum_{i=1}^{s} b_i f_i$$

and

$$\hat{y}_{n+1} = y_n + h_n \sum_{i=1}^{s} \hat{b}_i f_i,$$

with

$$f_i = f\left(x_n + c_i h_n,\; y_n + h_n \sum_{j=1}^{i-1} a_{ij} f_j\right)$$

for $i = 1, 2, \cdots, s$. These two approximations $y_{n+1}$ and $\hat{y}_{n+1}$ are of algebraic orders $p$ and $q < p$, respectively. Thus, a local error estimate
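The stepping scheme above can be sketched in a few lines. The following Python fragment is an illustration, not code from the paper: it advances one step of an arbitrary explicit embedded pair given its tableau, demonstrated on the simple Heun–Euler 2(1) pair rather than a 6(5) pair.

```python
import numpy as np

def rk_embedded_step(f, x, y, h, A, b, bhat, c):
    """Advance one step of an explicit embedded RK pair given by (A, b, bhat, c).

    Returns the higher-order solution y_{n+1} and the lower-order
    companion yhat_{n+1} used for error estimation.
    """
    s = len(c)
    y = np.atleast_1d(np.asarray(y, dtype=float))
    k = np.zeros((s, y.size))
    for i in range(s):
        # A is strictly lower triangular, so only stages j < i enter stage i
        yi = y + h * (A[i, :i] @ k[:i])
        k[i] = f(x + c[i] * h, yi)
    return y + h * (b @ k), y + h * (bhat @ k)

# Heun--Euler 2(1) pair as a tiny example (NOT the 6(5) pair of the paper)
A = np.array([[0.0, 0.0], [1.0, 0.0]])
b, bhat, c = np.array([0.5, 0.5]), np.array([1.0, 0.0]), np.array([0.0, 1.0])
yh, yl = rk_embedded_step(lambda x, y: -y, 0.0, [1.0], 0.1, A, b, bhat, c)
```

For $y' = -y$, $y(0) = 1$ and $h = 0.1$, the two approximations differ in the sixth decimal place from $e^{-0.1}$ and supply the difference needed for the error estimate above.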

$$\epsilon_n = h_n^{p-q-1} \cdot \left\| y_{n+1} - \hat{y}_{n+1} \right\|_2$$

is formed in every step and is combined in an algorithm for changing the step size:

$$h_{n+1} = 0.9 \cdot h_n \cdot \left(\frac{t}{\epsilon_n}\right)^{1/p},$$

where $t$ is a tolerance given by the user. When $\epsilon_n < t$, the step is accepted and the formula above furnishes the next step size. Otherwise, the same formula is applied, but the solution is not advanced and $h_{n+1}$ simply replaces $h_n$ for a retry. Details can be found in [3]. As an abbreviation, these methods are named RKp(q) pairs.
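A minimal sketch of this accept/reject logic, assuming only the formula and notation above (`t` the user tolerance, `p` the higher order of the pair):

```python
def adaptive_step(eps_n, h_n, t, p, safety=0.9):
    """Step-size controller: h_{n+1} = 0.9 * h_n * (t / eps_n)**(1/p).

    Returns (accepted, h_next). The step is accepted when eps_n < t;
    otherwise the step is repeated from the same point with h_next.
    """
    h_next = safety * h_n * (t / eps_n) ** (1.0 / p)
    return eps_n < t, h_next

# Error well below tolerance: step accepted, next step enlarged
accepted, h_next = adaptive_step(1e-8, 0.1, 1e-7, 6)
# Error above tolerance: step rejected, to be retried with a smaller h
rejected, h_retry = adaptive_step(1e-6, 0.1, 1e-7, 6)
```

The same update rule thus serves both roles: growing the step after an accepted step and shrinking it for a rejected one.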

Runge–Kutta methods were introduced back in the late 19th century [4,5]. After 1960, RK pairs appeared. Fehlberg gave the first celebrated such pairs of orders 5(4), 6(5) and 8(7) [6,7]. Dormand and Prince followed in the early 1980s [8,9]. Our group has also presented a series of successful RK pairs [10–13].

Runge–Kutta pairs are suited for efficiently solving almost every non-stiff problem of the form (1). The variety of pairs is explained by the accuracy required: when less accuracy is required, lower-order RK pairs are more efficient. In contrast, for stringent accuracies in quadruple precision, a high-order pair should be chosen [14].

Here we focus on RK6(5) pairs, which are preferred for moderate to higher accuracies. We are especially interested in problems (1) that resemble Kepler-like orbits. Thus, we will propose a particular RK6(5) pair for addressing this type of problem.

## **2. Producing Runge–Kutta Pairs of Orders 6(5) and Training Their Coefficients**

Runge–Kutta pairs of orders six and five are amongst the most frequently used. Their coefficients have to satisfy 54 order conditions, and families of solutions have been discovered over the years. Here we chose the Verner–DLMP family [15,16], which has the advantage of being solvable linearly. We may then freely choose the coefficients $c_2, c_4, c_5, c_6, c_7$ and $\hat{b}_9$. Pairs from this family have been proven to perform most efficiently in various classes of problems [17].

We proceed by explicitly evaluating the remaining coefficients. The algorithm is discussed in [10] and is given as a Mathematica [18] package in the Appendix A.

Although $s = 9$, the family spends only eight stages per step, since the ninth stage of the current step is reused as the first stage of the next one. This property is called FSAL (First Same As Last).

The next question is how to select the free parameters. Traditionally, one tries to minimize the norm of the principal term of the local truncation error, that is, of the coefficients of $h^7$ in the residual of the Taylor error expansion corresponding to the sixth-order method of the underlying RK pair.

We intend to derive a particular RK6(5) pair belonging to the family of interest here. The resulting pair has to perform best on Kepler orbits and other problems of this nature. Thus, we concentrate on the particular orbit

$$\begin{aligned}
{}^1y' &= {}^3y, \\
{}^2y' &= {}^4y, \\
{}^3y' &= -\frac{{}^1y}{\left(\sqrt{{}^1y^2 + {}^2y^2}\right)^3}, \\
{}^4y' &= -\frac{{}^2y}{\left(\sqrt{{}^1y^2 + {}^2y^2}\right)^3}
\end{aligned}$$

with $x \in [0, 10\pi]$, $y(0) = \left(1 - ecc,\; 0,\; 0,\; \sqrt{\frac{1 + ecc}{1 - ecc}}\right)^T$ and the theoretical solution

$${}^1y(x) = \cos(u) - ecc, \qquad {}^2y(x) = \sqrt{1 - ecc^2}\,\sin(u).$$

In the above, $u$ is the solution of Kepler's equation $u = ecc \cdot \sin(u) + x$, $ecc$ is the eccentricity, and the components of $y$ are denoted by the left superscript. They should not be confused with $y_1 = \left({}^1y_1, {}^2y_1, {}^3y_1, {}^4y_1\right)^T$, $y_2 = \left({}^1y_2, {}^2y_2, {}^3y_2, {}^4y_2\right)^T$, $y_3, \cdots$, which are the vectors approximating the solution at $x_1, x_2, x_3, \cdots$.
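As an illustration, the system and its theoretical solution can be coded directly. The fixed-point iteration for Kepler's equation below is our own illustrative choice (it converges for any $ecc < 1$), not a method from the paper.

```python
import math

def kepler_rhs(x, y):
    """Right-hand side of the two-body (Kepler) system; y = (y1, y2, y3, y4)."""
    r3 = math.sqrt(y[0] ** 2 + y[1] ** 2) ** 3
    return [y[2], y[3], -y[0] / r3, -y[1] / r3]

def kepler_exact(x, ecc):
    """First two components of the theoretical solution.

    Solves Kepler's equation u = x + ecc*sin(u) by fixed-point iteration,
    a contraction for ecc < 1.
    """
    u = x
    for _ in range(100):
        u = x + ecc * math.sin(u)
    return math.cos(u) - ecc, math.sqrt(1 - ecc ** 2) * math.sin(u)
```

At $x = 0$ the formula reproduces the initial position $\left(1 - ecc,\, 0\right)$, consistent with the initial condition stated above.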

This problem can be solved by an RK6(5) pair from the family of interest here. After a run, we recorded the number $fev$ of function evaluations (stages) needed and the global error $ge$ observed over the mesh (grid) in the interval of integration. Then, we formed the efficiency measure

$$\mu = fev \cdot ge^{1/6}. \tag{2}$$

Running DLMP6(5) twice on the Kepler problem, we obtained the efficiency measures $\hat{u}_1$ and $\hat{u}_2$ reported in Table 1. For example, we needed 1121 stages to achieve a global error of $2.14 \cdot 10^{-6}$ when running Kepler with $ecc = 0$, $x_{end} = 10\pi$ and $tol = 10^{-7}$. Thus, we obtained $\hat{u}_1 = 1121 \cdot \left(2.14 \cdot 10^{-6}\right)^{1/6} \approx 127.22$, as reported in Table 1. Analogously, for a second run with $ecc = 0.6$, $x_{end} = 20\pi$ and $tol = 10^{-11}$, we observed $\hat{u}_2 = 833.27$.
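Measure (2) for the first run can be checked numerically from the figures quoted above:

```python
def efficiency_measure(fev, ge):
    """Efficiency measure (2): mu = fev * ge**(1/6); lower is better."""
    return fev * ge ** (1.0 / 6.0)

# First Kepler run of DLMP6(5): 1121 stages, global error 2.14e-6
u1 = efficiency_measure(1121, 2.14e-6)
```

The sixth root reflects the order of the pair, so `u1` balances work against accuracy on a scale comparable across tolerances.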

Let us suppose that a new pair NEW6(5) furnishes corresponding efficiency measures $\tilde{u}_1$ and $\tilde{u}_2$ for the same runs. Then, as a fitness function we form the sum

$$u = \frac{\tilde{u}_1}{\hat{u}_1} + \frac{\tilde{u}_2}{\hat{u}_2},$$

and try to minimize it. That is, each evaluation of the fitness function actually requires two whole runs of initial value problems. The value of $u$ changes according to the selection of the free parameters $c_2, c_4, c_5, c_6, c_7$ and $\hat{b}_9$.

The original idea is based on [19]. For the minimization of *u* we used the differential evolution technique [20]. We have already tried this approach and obtained some interesting results in producing Numerov-type methods for integrating orbits [21]. In this latter work we trained the coefficients of a Numerov-type method on a Kepler orbit. We observed very pleasant results over a set of Kepler orbits as well as other known orbital problems.
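A minimal sketch of the classic DE/rand/1/bin variant of differential evolution [20] is given below. The quadratic stand-in fitness and all parameter settings are our own illustrative assumptions; the actual fitness $u$ requires two full ODE integrations per candidate, as explained above.

```python
import random

def differential_evolution(fitness, bounds, pop_size=20, F=0.7, CR=0.9,
                           gens=200, seed=1):
    """Minimal DE/rand/1/bin minimizer; bounds is a list of (lo, hi) pairs."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [fitness(p) for p in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # pick three distinct members other than i for the mutant vector
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantees at least one mutated gene
            trial = [pop[a][j] + F * (pop[b][j] - pop[c][j])
                     if (rng.random() < CR or j == jrand) else pop[i][j]
                     for j in range(dim)]
            tc = fitness(trial)
            if tc < cost[i]:            # greedy selection
                pop[i], cost[i] = trial, tc
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]

# Stand-in fitness: a simple 6-parameter quadratic, mirroring the six free
# coefficients c2, c4, c5, c6, c7, b9-hat of the pair
best_x, best_f = differential_evolution(lambda p: sum(v * v for v in p),
                                        bounds=[(-1.0, 1.0)] * 6)
```

In the actual training, each call to `fitness` would integrate the two Kepler runs and return $u$, which makes the optimization expensive but still tractable for six parameters.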

**Table 1.** Efficiency measures for both runs and pairs.


The optimization furnished six values for the free parameters. They are included with the rest of the coefficients in the resulting pair NEW6(5) presented in Table 2.


**Table 2.** Coefficients of the proposed NEW6(5) pair, accurate for double precision computations.

Interpreting Table 1, we observe that DLMP6(5) was 151% and 116% more expensive than NEW6(5) for the first and second run, respectively. The norm of the principal truncation error coefficients of NEW6(5) was $\|T^{(7)}\|_2 \approx 2.64 \cdot 10^{-4}$, which is much larger than the corresponding value $\|T^{(7)}\|_2 \approx 4.37 \cdot 10^{-5}$ for DLMP6(5). The interval of absolute stability of the new pair was $(-4.24, 0]$, while for DLMP6(5) it was $(-4.21, 0]$.

In conclusion, no extra property seems to hold: the pair given in Table 2 possesses no remarkable traditional characteristics, and judging from these alone one would hardly expect any special performance.

## **3. Numerical Tests**

We tested the following pairs chosen from the family studied above.


DLMP6(5) is the best representative of conventional RK pairs; hardly anything presented to date is more efficient [10]. Both pairs were run for tolerances $10^{-5}, 10^{-6}, \cdots, 10^{-11}$, and the efficiency measures (2) were recorded for each. We set NEW6(5) as the reference pair and divided each efficiency measure of DLMP6(5) by the corresponding efficiency measure of NEW6(5). Numbers greater than 1 indicate that NEW6(5) is more efficient. Thus, an entry of 1.1 means that DLMP6(5) is $0.1 = 10\%$ more expensive than NEW6(5), while an entry of 2 means that DLMP6(5) is 100% more expensive (i.e., has twice the cost for achieving the same accuracy).

The problems we tested were as follows.
