*2.1. Methods*

In this subsection, we briefly describe ODE solvers classified by their mathematical structure. Numerical methods for ODEs fall naturally into two categories: 'multi-stage methods', which use a single starting value at each step, and 'multi-step methods' (also called 'multi-value methods'), which are based on several previous values of the solution. We discuss the theory of both classes in terms of convergence and stability. High-order multi-step methods achieve higher convergence rates at the price of severely degraded stability, which limits their practical use. For these reasons, the third-order RK method (RK3) and the third-order BDF (BDF3) are considered here as representative examples of multi-stage and multi-step methods, respectively. Higher-order multi-step methods are also available, but they see little practical use.

The general form of the multi-step methods [1,3,7,26,38,40,41] is described by

$$y\_{n+1} = \sum\_{j=0}^{s} a\_j y\_{n-j} + h \sum\_{j=-1}^{s} b\_j f(t\_{n-j}, y\_{n-j}), \quad n \ge s. \tag{2}$$

Here, the coefficients *a*0, ... , *as*, *b*−1, *b*0, ... , *bs* are constants. If method (2) uses *s* + 1 previous solution values with either *as* ≠ 0 or *bs* ≠ 0, the method is called an (*s* + 1)-step method. The BDF methods are the most efficient linear multi-step methods among the various families of multi-step methods [40]. A BDF is obtained by setting *bs* = ··· = *b*0 = 0 (only *b*−1 ≠ 0, making the method implicit) and choosing the remaining coefficients to maximize the convergence order; the *s*-step BDF then has convergence order *s*. The BDF3 is given by

$$y\_{n+3} - \frac{18}{11}y\_{n+2} + \frac{9}{11}y\_{n+1} - \frac{2}{11}y\_n = \frac{6}{11}hf(t\_{n+3}, y\_{n+3}).\tag{3}$$

Since implicit *A*-stable linear multi-step methods have convergence order at most 2 (the second Dahlquist barrier), the second-order BDF is *A*-stable, but no BDF of order 3 or higher can be *A*-stable. Nevertheless, the stability of BDF3 is nearly *A*-stable (it is *A*(*α*)-stable with *α* close to 90°) [38].
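To make the recursion (3) concrete, the following is a minimal sketch of ours (not code from the paper) that applies BDF3 to the scalar linear test problem *y*′ = *λy*. Because the right-hand side is linear, the implicit equation for *y*<sub>*n*+3</sub> can be solved in closed form rather than by Newton iteration.

```python
import math

def bdf3_linear(lam, y_start, h, n_steps):
    """Advance y' = lam*y with BDF3 (Eq. (3)).  y_start holds the three
    starting values y_0, y_1, y_2; for this linear problem the implicit
    equation for y_{n+3} is solved in closed form."""
    y = list(y_start)
    for _ in range(n_steps):
        rhs = (18/11) * y[-1] - (9/11) * y[-2] + (2/11) * y[-3]
        y.append(rhs / (1 - (6/11) * h * lam))
    return y

# y' = -y, y(0) = 1, starting values taken from the exact solution e^{-t}
h = 0.01
ys = bdf3_linear(-1.0, [math.exp(-k * h) for k in range(3)], h, 298)
err = abs(ys[-1] - math.exp(-3.0))  # error at t = 3, roughly O(h^3)
```

Halving *h* should shrink `err` by roughly a factor of eight, consistent with the third-order convergence discussed above.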

The explicit RK methods were developed by Runge, Heun, and Kutta as extensions of the Euler method [3,40]. Later, implicit RK methods were developed for stiff problems based on various quadrature rules. RK methods have the following form:

$$\begin{aligned} y\_{n+1} &= y\_n + h \sum\_{i=1}^s b\_i k\_i, \\ k\_i &= f(t\_n + c\_i h, y\_n + h \sum\_{j=1}^s a\_{ij} k\_j), \quad i = 1, \dots, s, \end{aligned} \tag{4}$$

or an equivalent form of Butcher tableau

$$\begin{array}{c|c} c & A \\ \hline & b^T \end{array}$$

One can specify a particular RK method by providing the number of stages *s* and all entries of the Butcher tableau, *aij* (1 ≤ *i*, *j* ≤ *s*), *bi*, and *ci* (*i* = 1, ... , *s*). A popular class of implicit RK methods for solving stiff problems is the collocation methods, whose properties depend on the choice of the collocation points. For more details on collocation methods, one can refer to [3,38]. If we select uniform collocation points defined by *ci* = *i*/3 (1 ≤ *i* ≤ 3), we obtain a third-order collocation method with the following Butcher tableau:

$$
\begin{array}{c|ccc}
\frac{1}{3} & \frac{23}{36} & -\frac{4}{9} & \frac{5}{36} \\
\frac{2}{3} & \frac{7}{9} & -\frac{2}{9} & \frac{1}{9} \\
1 & \frac{3}{4} & 0 & \frac{1}{4} \\
\hline & \frac{3}{4} & 0 & \frac{1}{4}
\end{array}
\tag{5}
$$

Note that both the stage order and the convergence order of method (5) are three, as shown in the convergence analysis in [3,38]. Furthermore, the stability of (5), examined through Dahlquist's test problem, is nearly *L*-stable.
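The entries of tableau (5) can be reproduced from the collocation conditions *aij* = ∫<sub>0</sub><sup>*ci*</sup> ℓ*j*(τ)dτ and *bj* = ∫<sub>0</sub><sup>1</sup> ℓ*j*(τ)dτ, where ℓ*j* is the Lagrange basis polynomial on the nodes *c* = (1/3, 2/3, 1). The following sketch (our own verification, using exact rational arithmetic) rebuilds the tableau:

```python
from fractions import Fraction as F

def lagrange_coeffs(c, j):
    """Coefficients (low -> high degree) of the j-th Lagrange basis
    polynomial on the nodes c."""
    coeffs = [F(1)]
    for m, cm in enumerate(c):
        if m == j:
            continue
        denom = c[j] - cm          # multiply by (x - cm) / (c[j] - cm)
        new = [F(0)] * (len(coeffs) + 1)
        for k, a in enumerate(coeffs):
            new[k] += -cm * a / denom
            new[k + 1] += a / denom
        coeffs = new
    return coeffs

def integrate(coeffs, upper):
    """Definite integral of the polynomial from 0 to upper."""
    return sum(a * upper ** (k + 1) / (k + 1) for k, a in enumerate(coeffs))

c = [F(1, 3), F(2, 3), F(1)]
A = [[integrate(lagrange_coeffs(c, j), ci) for j in range(3)] for ci in c]
b = [integrate(lagrange_coeffs(c, j), F(1)) for j in range(3)]
```

Since *c*3 = 1, the weight vector `b` coincides with the last row of `A`, i.e., the method is stiffly accurate.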

#### *2.2. Simplified Newton Iteration and Eigenvalue Decomposition Method*

To explicate the simplified Newton iteration proposed by Liniger and Willoughby [10], we consider the following nonlinear system arising from RK-type methods,

*Mathematics* **2019**, *7*, 1158

$$z\_i = h \sum\_{j=1}^{s} a\_{ij} f(x\_0 + c\_j h, y\_0 + z\_j), \qquad i = 1, \dots, s. \tag{6}$$

Equation (6) is equivalent to a system of equations described by

$$Z = h(A \otimes I\_d) F(Z),\tag{7}$$

where

$$\begin{aligned} Z &= \left[ z\_1, \dots, z\_s \right]^T, \\ A &= \left( a\_{ij} \right)\_{i,j=1}^s, \\ F(Z) &= \left[ f(x\_0 + c\_1 h, y\_0 + z\_1), \dots, f(x\_0 + c\_s h, y\_0 + z\_s) \right]^T, \end{aligned}$$

and *Id* is the *d*-dimensional identity matrix. By applying the Newton iteration to the nonlinear system (7), we get a linear system of the form

$$\begin{aligned} \left(I\_{sd} - h(A \otimes I\_d)\mathcal{J}\right)\Delta Z^k &= -Z^k + h(A \otimes I\_d)F(Z^k),\\ Z^{k+1} &= Z^k + \Delta Z^k. \end{aligned} \tag{8}$$

where $\mathcal{J}$ is the block diagonal matrix consisting of the Jacobians $\frac{\partial f}{\partial y}(t\_n + c\_i h, y\_n + z\_i)$, $i = 1, \dots, s$, $Z^k = (z\_1^k, \dots, z\_s^k)^T$ is the $k$-th iterate, $\Delta Z^k = (\Delta z\_1^k, \dots, \Delta z\_s^k)^T$ is the increment, and $F(Z^k)$ denotes

$$F(Z^k) = (f(\mathbf{x}\_0 + \mathbf{c}\_1 h, y\_0 + z\_1^k), \dots, f(\mathbf{x}\_0 + \mathbf{c}\_s h, y\_0 + z\_s^k))^T.$$

Usually, each Newton iteration requires several evaluations of the Jacobian, which is computationally expensive. To reduce this cost, all Jacobians $\frac{\partial f}{\partial y}(t\_n + c\_i h, y\_n + z\_i)$ are replaced by $\frac{\partial f}{\partial y}(t\_n, y\_n)$. This process is called the 'simplified Newton iteration'. The simplified Newton iteration for (7) leads to the formula

$$\begin{aligned} (I\_{sd} - hA \otimes J) \Delta Z^k &= -Z^k + h(A \otimes I\_d) F(Z^k), \\ Z^{k+1} &= Z^k + \Delta Z^k, \end{aligned} \tag{9}$$

where $J := \frac{\partial f}{\partial y}(t\_n, y\_n)$. Each iteration requires *s* evaluations of *f* and the solution of a (*d* · *s*)-dimensional linear system.
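As an illustration, here is a sketch of ours (scalar case *d* = 1, using the three-stage tableau (5) in floating point) that advances one step of the implicit RK method by iterating (9). Because the Jacobian is frozen at (*tn*, *yn*), the matrix *I* − *hA* ⊗ *J* is assembled once and reused for every iteration of the step.

```python
import math

def solve(M, rhs):
    """Tiny Gaussian elimination with partial pivoting (works on copies)."""
    n = len(rhs)
    M = [row[:] for row in M]; r = rhs[:]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]; r[k], r[p] = r[p], r[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= f * M[k][j]
            r[i] -= f * r[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (r[i] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# Butcher data of the three-stage collocation method (5)
A = [[23/36, -4/9, 5/36], [7/9, -2/9, 1/9], [3/4, 0.0, 1/4]]
c = [1/3, 2/3, 1.0]

def simplified_newton_step(f, dfdy, t, y, h, iters=10):
    """One step of method (5) for a scalar ODE y' = f(t, y), solving the
    stage system (6) with the simplified Newton iteration (9): the
    Jacobian J is frozen at (t, y), so I - h*A*J is fixed for the step."""
    J = dfdy(t, y)
    M = [[(1.0 if i == j else 0.0) - h * A[i][j] * J for j in range(3)]
         for i in range(3)]
    Z = [0.0, 0.0, 0.0]
    for _ in range(iters):
        F = [f(t + c[i] * h, y + Z[i]) for i in range(3)]
        rhs = [-Z[i] + h * sum(A[i][j] * F[j] for j in range(3))
               for i in range(3)]
        dZ = solve(M, rhs)
        Z = [Z[i] + dZ[i] for i in range(3)]
    # since c_3 = 1 and b equals the last row of A, y_{n+1} = y_n + z_3
    return y + Z[2]
```

For a linear problem the frozen Jacobian is exact, so the iteration converges in a single sweep; the fixed number of iterations here is only a simplification of the stopping criteria used in production codes.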

Note that, with the simplified Newton iteration, the matrix (*I* − *hA* ⊗ *J*) is the same for all iterations, so the decomposition for solving the resulting linear system needs to be computed only once. For this linear system, we consider an eigenvalue decomposition technique that splits the given (*d* · *s*)-dimensional linear system into several *d*-dimensional linear systems. From the viewpoint of computational efficiency, it is cheaper to solve several small systems, even complex-valued ones, than one large system. Note that only the simplified Newton iteration (9) permits the use of the eigenvalue decomposition; it is not applicable to the traditional Newton iteration (8). The eigenvalue decomposition method for (9) was proposed independently by Butcher [31] and Bickart [30]. The main ideas of the method are the eigenvalue decomposition of the matrix *A*−<sup>1</sup> = *T*Λ*T*−<sup>1</sup> and a linear transformation of the vector *Zk*. Setting *W<sup>k</sup>* = (*<sup>T</sup>*−<sup>1</sup> ⊗ *<sup>I</sup>*)*Zk*, the iteration (9) becomes equivalent to

$$(h^{-1}\Lambda \otimes I\_d - I\_s \otimes J)\Delta W^k = -h^{-1}(\Lambda \otimes I\_d)W^k + (T^{-1} \otimes I\_d)F\left((T \otimes I\_d)W^k\right),\tag{10}$$

$$\mathcal{W}^{k+1} = \mathcal{W}^k + \Delta \mathcal{W}^k.$$

In a general case of the three-stage implicit RK method such as (5), the inverse matrix of *A* has an eigenvalue decomposition as follows:

$$A^{-1} = T\Lambda T^{-1} = \begin{bmatrix} u\_0 & u\_1 & -v\_1 \end{bmatrix} \begin{bmatrix} \hat{\gamma} & 0 & 0 \\ 0 & \hat{\alpha} & -\hat{\beta} \\ 0 & \hat{\beta} & \hat{\alpha} \end{bmatrix} \begin{bmatrix} u\_0 & u\_1 & -v\_1 \end{bmatrix}^{-1} \tag{11}$$

where *γ*ˆ is the real eigenvalue, *α*ˆ ± *iβ*ˆ is the complex-conjugate eigenvalue pair, and *u*0 and *u*1 ± *iv*1 are the eigenvectors corresponding to *γ*ˆ and *α*ˆ ± *iβ*ˆ, respectively. Therefore, the matrix in (10) can be rewritten as

$$
\begin{bmatrix}
\gamma I\_d - J & 0 & 0 \\
0 & \alpha I\_d - J & -\beta I\_d \\
0 & \beta I\_d & \alpha I\_d - J
\end{bmatrix}
\tag{12}
$$

with *γ* = *γ*ˆ/*h*, *α* = *α*ˆ/*h*, *β* = *β*ˆ/*h*, so that (10) can be split into two linear systems of dimension *d* and 2*d*, respectively. Moreover, the 2*d*-dimensional real-valued subsystem can be transformed into the following *d*-dimensional complex-valued system

$$\left( (\alpha+i\beta)I\_d - J \right) \left( u+iv \right) = a+ib. \tag{13}$$

In terms of computational cost, the number of multiplications needed to solve (13) is approximately 4*d*3/3, since one complex multiplication consists of four real multiplications. The total multiplication count for (12) is therefore about 5*d*3/3, while the count for decomposing the untransformed matrix (*I* − *hA* ⊗ *J*) in (9) is about (3*d*)3/3 = 9*d*3. Thus, we can reduce the number of multiplications by about 80% by solving (12) instead of directly factorizing (*I* − *hA* ⊗ *J*). Finally, applying the transformation *Z<sup>k</sup>* = (*T* ⊗ *<sup>I</sup>*)*W<sup>k</sup>* requires only lower-order additional multiplications. This gap widens as the size of the matrix (or the number of stages) increases.
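The operation-count argument above can be checked in a few lines (a sketch of ours; it tracks only the leading *n*<sup>3</sup>/3 term of an LU factorization and the four-real-multiplications cost of complex arithmetic):

```python
def lu_mults(n):
    """Leading-order multiplication count of an LU factorization: n^3 / 3."""
    return n ** 3 / 3

d = 100  # dimension of the ODE system (illustrative value, ours)
direct = lu_mults(3 * d)               # factorize the (3d)x(3d) matrix I - hA(x)J
split = lu_mults(d) + 4 * lu_mults(d)  # one real d-system + one complex d-system
reduction = 1 - split / direct         # = 1 - 5/27, roughly an 80% saving
```

The ratio 5/27 is independent of *d*, so the ~80% saving holds for any system size with three stages; with more stages the advantage grows further.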

#### **3. Numerical Comparison**

In this section, we carry out experiments on five commonly used physical examples to compare the two classes of methods. In Sections 3.1–3.3, the BDF3 method (3) and RK3 (4) with the Butcher tableau (5) are used as examples of multi-step and multi-stage methods, respectively. The starting values for BDF3 are taken from the exact solution. Both methods use the traditional Newton iteration for solving nonlinear systems. In Section 3.3, in particular, we measure CPU-time to compare the two methods in terms of accuracy and efficiency, and the simplified Newton iteration is used as the nonlinear solver. In Sections 3.4 and 3.5, we use RADAU5 and ODE15s as representatives of multi-stage and multi-step methods, respectively, whose implementations are well optimized and open-source. Note that RADAU5, a multi-stage method, has convergence order 5 and stage order 3 [38], while ODE15s, a multi-step method included in the MATLAB library, has variable order from 1 to 5 [42]. Remarkably, RADAU5 applies both the eigenvalue decomposition and the simplified Newton iteration. All numerical simulations are executed with MATLAB 2010b (Mathworks, Natick, MA, USA) under Windows 7 (Microsoft, Redmond, WA, USA). Note that most numerical results in this section are reproducible even if different computational resources are used.

#### *3.1. Simple Linear ODE*

As the first example, we consider the Prothero–Robinson problem [29],

$$f(t, y(t)) = \nu(y(t) - g(t)) + g'(t), \quad t \in (0, 10], \quad y(0) = g(0), \tag{14}$$

which exhibits stiffness controlled by the parameter *ν*. The analytic solution of the problem is *y*(*t*) = *g*(*t*). To compare the error behaviors of the two methods, we set *ν* = −1.0 × 10<sup>6</sup> so that the problem is highly stiff. Here, the exact solution is set by *g*(*t*) = sin(*t*). In Figure 1, we display the absolute errors |*y*(*ti*) − *yi*| at each integration step on a log scale, obtained by the two methods with different time step sizes *h* = 2<sup>−*k*</sup>, (a) *k* = 1, (b) *k* = 2, and (c) *k* = 3. One can see that the error of BDF3 (red) has magnitude (a) 1.0 × 10<sup>−7</sup>, (b) 1.0 × 10<sup>−8</sup>, and (c) 1.0 × 10<sup>−9</sup>, while the error of RK3 (blue) has magnitude (a) 1.0 × 10<sup>−9</sup>, (b) 1.0 × 10<sup>−10</sup>, and (c) 1.0 × 10<sup>−11</sup>. All three graphs in Figure 1 show that RK3 is more accurate than BDF3. Additionally, to account for the difference between the stages of multi-stage methods and the steps of multi-step methods, we also run BDF3 with the time step size *h̃* = *h*/3; this result is labeled BDF3c hereafter. It can be seen that BDF3c (black) has the same accuracy as RK3. Therefore, it is sufficient to compare RK3 with BDF3c in what follows.
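For reference, the BDF3 run on (14) can be reproduced with a short script (a sketch of ours; since (14) is linear in *y*, the implicit stage is again solved in closed form, and the starting values are taken from the exact solution as in the text):

```python
import math

def prothero_robinson_bdf3(nu, h, t_end):
    """Integrate y' = nu*(y - g(t)) + g'(t), g(t) = sin(t), with BDF3,
    solving the (linear) implicit equation for y_{n+3} in closed form."""
    g, gp = math.sin, math.cos
    y = [g(k * h) for k in range(3)]   # exact starting values
    t = 2 * h
    while t < t_end - 1e-12:
        t3 = t + h
        rhs = (18/11) * y[-1] - (9/11) * y[-2] + (2/11) * y[-3] \
              + (6/11) * h * (-nu * g(t3) + gp(t3))
        y.append(rhs / (1 - (6/11) * h * nu))
        t = t3
    return y[-1], t

yN, tN = prothero_robinson_bdf3(-1.0e6, 0.5, 10.0)
err = abs(yN - math.sin(tN))  # the text reports magnitude ~1e-7 for h = 1/2
```

Despite hν being of magnitude 5 × 10<sup>5</sup>, the iteration stays stable because the negative real axis lies inside the BDF3 stability region.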

**Figure 1.** Prothero–Robinson equation: comparing two methods for accuracy by varying step size *h* = 2<sup>−*k*</sup>, for (**a**) *k* = 1, (**b**) *k* = 2, (**c**) *k* = 3.

#### *3.2. Nonlinear Stiff ODE System: Multi-Mode Problem*

As the second example, we consider a nonlinear ODE system based on the Prothero–Robinson problem. The system is given by

$$\begin{aligned} f(t, Y(t)) &= -\Lambda(Y(t) - g(t) \cdot \mathbf{1}\_N)^\delta + g'(t) \cdot \mathbf{1}\_N, \quad t \in (0, 10], \\ Y(0) &= (0, \dots, 0)^T \in \mathbb{R}^N, \end{aligned} \tag{15}$$

where *g*(*t*) = sin(*t*), **1***N* = (1, ... , 1)*<sup>T</sup>* ∈ R*<sup>N</sup>*, and *N* is the dimension of the system. The exact solution is *Y*(*t*) = sin(*t*) · **1***N*. The stiffness of (15) is controlled by the eigenvalues of the matrix Λ, a diagonal matrix with elements *λi* = 1.0 × 10<sup>*ki*</sup> (*i* = 1, ... , *N*), where *ki* is a random integer between 0 and 6. In addition, the linearity of the problem depends on the parameter *δ*. In this experiment, *δ* = 1 and *δ* = 5 are taken for the linear and nonlinear cases, respectively. The parameter set (*N*, *h*) = (100, 2<sup>−3</sup>) is used for both cases.

As in the previous subsection, the error behaviors of the two methods for both the linear and nonlinear cases are observed over time, and the results are plotted in Figure 2. The error is measured in the *L*∞-norm at each integration step, ||*Y*(*ti*) − *Yi*||<sup>∞</sup>. For the nonlinear case, a traditional Newton iteration is used for the linearization. As mentioned in the previous subsection, BDF3c uses the smaller time step size *h̃* = *h*/3 and is compared with RK3 using time step *h*; BDF3 with time step *h* would not be an appropriate comparison because of the meaning of the stage explained above. In the linear case, *δ* = 1, RK3 and BDF3c have similar error magnitudes of 1.0 × 10<sup>−5.544</sup> and 1.0 × 10<sup>−5.253</sup> at the final time, respectively. In the nonlinear case, *δ* = 5, RK3 and BDF3c also have similar error magnitudes of 1.0 × 10<sup>−5.378</sup> and 1.0 × 10<sup>−5.123</sup> at the final time, respectively.

**Figure 2.** Multi-mode problem: comparing errors of two methods for linear case (**left**) and nonlinear case (**right**).

#### *3.3. Linear PDE—Heat Equation*

We consider a linear partial differential equation (PDE), the heat equation, described by

$$u\_t = u\_{xx}, \quad (t, x) \in [0, 1] \times [0, 1] \tag{16}$$

with initial value *u*(0, *x*) = sin(*πx*) + (1/2) sin(3*πx*) and boundary conditions *u*(*t*, 0) = *u*(*t*, 1) = 0. The exact solution is *u*(*x*, *t*) = *e*<sup>−*π*2*t*</sup> sin(*πx*) + (1/2)*e*<sup>−(3*π*)2*t*</sup> sin(3*πx*). This problem is intended to compare the two methods on large stiff systems induced from PDEs by spatial discretization, such as the Method of Lines. For the spatial discretization, we use second-order central differences at the grid points *x* = *xj* (*xj* = *j*/*N*). The resulting system is then an *N*-dimensional system of time-dependent ODEs, whose size depends on the discretization. Note that, to avoid unnecessary computational costs of the multi-stage methods described in the previous sections, we employ the multi-stage methods combined with an efficient linear solver, namely the eigenvalue decomposition technique.
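The semi-discretization step can be sketched as follows (our own code; it builds the central-difference right-hand side on the interior grid points and checks it against the eigenfunction sin(*πx*), for which *u<sub>xx</sub>* = −*π*<sup>2</sup>*u*):

```python
import math

def heat_rhs(u, N):
    """Method-of-lines right-hand side for u_t = u_xx on (0,1) with zero
    Dirichlet BCs: second-order central differences on x_j = j/N,
    j = 1, ..., N-1 (u holds the interior values only)."""
    dx = 1.0 / N
    f = [0.0] * len(u)
    for j in range(len(u)):
        left = u[j - 1] if j > 0 else 0.0
        right = u[j + 1] if j < len(u) - 1 else 0.0
        f[j] = (left - 2 * u[j] + right) / dx ** 2
    return f

N = 100
x = [j / N for j in range(1, N)]
u = [math.sin(math.pi * xj) for xj in x]
f = heat_rhs(u, N)
# for this eigenfunction u_xx = -pi^2 u, so f should be close to -pi^2 * u
```

Feeding `heat_rhs` to the time integrator yields exactly the large stiff ODE system described above; the stiffness grows like *N*<sup>2</sup> with the spatial resolution.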

To examine the numerical accuracy of the two methods for large stiff systems, we integrate this problem with system size *N* = 100, step size *h* = 1/64 for RK3, and step size *h̃* = 1/192 for BDF3. For the comparison, we measure the *L*∞-norm error *Err*(*ti*) = ||*u*(*xj*, *ti*) − *uij*||<sup>∞</sup> at each integration step, where *uij* ≈ *u*(*xj*, *ti*). The error behaviors of the two methods, measured on a logarithmic scale, are plotted in Figure 3.

It can be seen that the accuracy of the multi-stage method RK3 with time step *h* is quite similar to that of the multi-step method BDF3 with time step size *h̃*. Additionally, to observe the efficiency of the two methods, CPU-times and absolute errors are measured at the final time *t* = 1 by varying the spatial resolution *N* = *k* · 10<sup>2</sup> from *k* = 1 to *k* = 10. The results are plotted as absolute error versus CPU-time in Figure 4, with time step sizes *h* = 1/100 and *h̃* = 1/300.

Figure 4 supports the conclusion that RK3 with the eigenvalue decomposition technique is more efficient than BDF3. More precisely, BDF3 requires more computation to reach a similar magnitude of accuracy, while RK3 combined with the eigenvalue decomposition technique obtains higher accuracy at the same cost.

**Figure 3.** Heat equation: comparing two methods for error behaviors over time.

**Figure 4.** Heat equation: comparing two methods for CPU-time versus error.

#### *3.4. Nonlinear PDE: Medical Akzo Nobel Problem*

In this example, we consider a nonlinear stiff PDE, the Medical Akzo Nobel problem, a reaction–diffusion system in one spatial dimension described by

$$\begin{cases} u\_t = u\_{xx} - kuv\\ v\_t = -kuv \end{cases} \quad 0 < x < \infty, \qquad 0 < t < T, \tag{17}$$

along with the following initial and boundary conditions,

$$u(0, x) = 0, \quad v(0, x) = v\_0 \qquad \text{for} \quad x > 0,$$

where *v*0 is a constant and

$$u(t,0) = \phi(t) \qquad \text{for} \quad 0 < t < T.$$

Semi-discretization of this system yields the nonlinear stiff ODE given by

$$\frac{dy}{dt} = f(t, y), \quad y(0) = \text{g}, \quad y \in \mathbb{R}^{2N}, \quad 0 \le t \le 20. \tag{18}$$

The function *f* is given by

$$\begin{aligned} f\_{2j-1} &= \alpha\_j \frac{y\_{2j+1} - y\_{2j-3}}{2\Delta\zeta} + \beta\_j \frac{y\_{2j-3} - 2y\_{2j-1} + y\_{2j+1}}{(\Delta\zeta)^2} - k y\_{2j-1} y\_{2j}, \\ f\_{2j} &= -k y\_{2j} y\_{2j-1}, \end{aligned}$$

where

$$\alpha\_j = \frac{2(j\Delta\zeta-1)^3}{c^2}, \qquad \beta\_j = \frac{(j\Delta\zeta-1)^4}{c^2}, \qquad j = 1, \ldots, N,$$

with Δ*ζ* = 1/*N*, *y*<sub>−1</sub>(*t*) = *φ*(*t*), *y*<sub>2*N*+1</sub> = *y*<sub>2*N*−1</sub>, and

$$g = (0, v\_0, 0, v\_0, \dots, 0, v\_0)^T \in \mathbb{R}^{2N}.$$

The function *φ* is given by

$$\phi(t) = \begin{cases} 2 & \text{for} \quad t \in (0,5], \\ 0 & \text{for} \quad t \in (5,20]. \end{cases}$$

The parameters *k*, *v*0, and *c* are set to 100, 1, and 4, respectively. The integer *N* can be chosen by the user; in this experiment, we set *N* = 200. Since analytic solutions are unavailable, we use the reference solutions listed in Table 1, excerpted from [43].
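The semi-discretized right-hand side (18) can be sketched as follows (our own transcription of the formulas above; the 1-based mathematical indices are shifted to 0-based Python arrays, and the function name is ours):

```python
def akzo_rhs(t, y, N=200, k=100.0, c=4.0):
    """Right-hand side f of the semi-discretized Medical Akzo Nobel
    problem (18).  y packs (u, v) pairs: y[2j-2] ~ y_{2j-1} (u) and
    y[2j-1] ~ y_{2j} (v) in the 1-based notation of the text."""
    dz = 1.0 / N
    phi = 2.0 if 0 < t <= 5 else 0.0        # boundary forcing phi(t)
    f = [0.0] * (2 * N)
    for j in range(1, N + 1):
        alpha = 2 * (j * dz - 1) ** 3 / c ** 2
        beta = (j * dz - 1) ** 4 / c ** 2
        u_m = phi if j == 1 else y[2 * j - 4]       # y_{2j-3}, y_{-1} = phi
        u_p = y[2 * j - 2] if j == N else y[2 * j]  # y_{2j+1}, y_{2N+1} = y_{2N-1}
        u, v = y[2 * j - 2], y[2 * j - 1]           # y_{2j-1}, y_{2j}
        f[2 * j - 2] = (alpha * (u_p - u_m) / (2 * dz)
                        + beta * (u_m - 2 * u + u_p) / dz ** 2 - k * u * v)
        f[2 * j - 1] = -k * u * v
    return f
```

At the initial state *g*, only the first *u*-component is driven (through the boundary value *φ*(*t*) = 2); every reaction term vanishes because *u* ≡ 0.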

**Table 1.** Reference solutions for Medical Akzo Nobel problem at the end of the integration interval.


Specifically, measures independent of computational resources were used to compare the efficiency of the two methods in this example. The number of nonlinear solver calls (nsolve) and the number of function evaluations (nfeval) are recorded while varying the relative tolerance (*Rtol*) and absolute tolerance (*Atol*) as (*Rtol*, *Atol*) = (10<sup>−*n*</sup>, 10<sup>−*n*−2</sup>) (*n* = 4, ... , 11). We also measure the *L*∞-norm error at the end time for each tolerance and plot the error on a logarithmic scale as a function of nsolve (left) and nfeval (right) in Figure 5. These figures show that RADAU5 generates smaller errors than ODE15s at a similar computational expense. From a different perspective, RADAU5 requires fewer computational resources than ODE15s to reach a similar level of error. Thus, we can claim that RADAU5 outperforms ODE15s in terms of both computational cost and accuracy.

**Figure 5.** Medical Akzo Nobel problem: nsolve versus error (**left**) and nfeval versus error (**right**).

#### *3.5. Kepler Problem*

In this subsection, we consider the two-body Kepler problem to examine two conservation properties—the Hamiltonian energy and the angular momentum—which are indispensable quantities in physics. The Kepler problem describes two bodies governed by Newton's law of gravity, revolving in elliptic orbits around their center of mass placed at the origin of the (*q*1, *q*2)-plane [44]. With unit masses and gravitational constant, the equations are

$$\begin{cases} p\_1'(t) = -q\_1(q\_1^2 + q\_2^2)^{(-3/2)}, \\ p\_2'(t) = -q\_2(q\_1^2 + q\_2^2)^{(-3/2)}, \\ q\_1'(t) = p\_1, \\ q\_2'(t) = p\_2. \end{cases} \tag{19}$$

with initial conditions *p*1(0) = 0, *p*2(0) = 2, *q*1(0) = 0.4, and *q*2(0) = 0 on the interval [0, 100*π*]. The dynamics are described by the Hamiltonian function

$$H(p\_1, p\_2, q\_1, q\_2) = \frac{1}{2}(p\_1^2 + p\_2^2) - \frac{1}{\sqrt{q\_1^2 + q\_2^2}}$$

together with angular momentum *L* given by

$$L(p\_1, p\_2, q\_1, q\_2) = q\_1 p\_2 - q\_2 p\_1.$$

The initial Hamiltonian and the initial angular momentum conditions are *H*0 = −0.5 and *L*0 = 0.8, respectively.
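These initial values can be verified directly (a tiny check of ours):

```python
import math

def hamiltonian(p1, p2, q1, q2):
    """H = |p|^2 / 2 - 1/|q| for the two-body Kepler problem."""
    return 0.5 * (p1 ** 2 + p2 ** 2) - 1.0 / math.sqrt(q1 ** 2 + q2 ** 2)

def angular_momentum(p1, p2, q1, q2):
    """L = q1*p2 - q2*p1."""
    return q1 * p2 - q2 * p1

# initial state p1 = 0, p2 = 2, q1 = 0.4, q2 = 0 from the text:
# H0 = -0.5 and L0 = 0.8 (up to floating-point rounding)
H0 = hamiltonian(0.0, 2.0, 0.4, 0.0)
L0 = angular_momentum(0.0, 2.0, 0.4, 0.0)
```

Evaluating these two functions along the computed trajectory is exactly how the conservation plots in Figure 6 are obtained.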

The conservation of the Hamiltonian energy *H* and the angular momentum *L* is investigated by simulating with the two methods, RADAU5 and ODE15s, with time step size *h* = 0.1; the results are plotted in Figure 6. As shown in Figure 6, RADAU5 conserves both quantities, whereas ODE15s loses them as time goes on.

Next, we also consider the motion of a comet in the planar restricted three-body problem of Sun–Jupiter–Comet. To investigate the conservation properties of the two methods, we measure the Hamiltonian energy *K* and the angular momentum *D* for the three-body Kepler problem described by

$$x''(t) = \nu \frac{x\_S - x}{r\_{13}^3} + \mu \frac{x\_J - x}{r\_{23}^3}, \qquad y''(t) = \nu \frac{y\_S - y}{r\_{13}^3} + \mu \frac{y\_J - y}{r\_{23}^3}, \tag{20}$$


where

$$\begin{aligned} r\_{13}^2 &= (x\_S - x)^2 + (y\_S - y)^2, & r\_{23}^2 &= (x\_J - x)^2 + (y\_J - y)^2, \\ x\_S &= -\mu \cos(t - t\_0), & y\_S &= -\mu \sin(t - t\_0), \\ x\_J &= \nu \cos(t - t\_0), & y\_J &= \nu \sin(t - t\_0). \end{aligned}$$

The energy and angular momentum of the comet

$$\frac{K}{2} = \frac{1}{2}\left((x')^2 + (y')^2\right) - \frac{1}{\sqrt{x^2 + y^2}}, \qquad D = xy' - yx'$$

are constant when *μ* = 0 and *ν* = 1. For this experiment, the initial conditions are set to

$$x(0) = 5, \quad y(0) = 1, \quad x'(0) = 0, \quad y'(0) = 1,$$

and the initial energy and angular momentum are *K*0/2 ≈ 0.3 and *D*0 = 5, with step size *h* = (1/2)*π* and *t*0 = 0.
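Again, the initial invariants can be checked directly (ours; note that *K*0/2 = 1/2 − 1/√26 ≈ 0.304, consistent with the rounded value 0.3 stated in the text):

```python
import math

# initial state from the text: x(0), y(0), x'(0), y'(0)
x, y, xp, yp = 5.0, 1.0, 0.0, 1.0

# K/2 = (x'^2 + y'^2)/2 - 1/sqrt(x^2 + y^2) and D = x*y' - y*x'
K_half = 0.5 * (xp ** 2 + yp ** 2) - 1.0 / math.sqrt(x ** 2 + y ** 2)
D = x * yp - y * xp
```

Monitoring `K_half` and `D` along the trajectory gives the variation curves compared in Figure 7.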

**Figure 6.** Two-body Kepler problem: comparing two methods in terms of conservation of Hamiltonian and angular momentum.

Figure 7 shows the behavior of the energy and the momentum over the time interval [0, 100*π*]. As observed in Figure 7, RADAU5 gives maximum variations of 8.0356 × 10<sup>−5</sup> and 0.0013 for the energy and the momentum, respectively, whereas ODE15s gives variations of 5.9063 × 10<sup>−4</sup> and 0.0173. Therefore, one can conclude that RADAU5 has better conservation properties than ODE15s.

**Figure 7.** Three-body Kepler problem: comparing two methods in terms of the conservation of total energy and angular momentum.

#### **4. Conclusions and Further Discussion**

In this work, we compared multi-stage methods with multi-step methods by investigating their numerical properties. In the classical approach, nonlinear stiff systems were usually solved by multi-step methods to avoid the large computational complexity induced by linearizing a given nonlinear system. However, the computational cost of a multi-stage method can be reduced substantially, without loss of stability or conservation, by using suitable nonlinear and linear solvers such as a Newton-type method and the eigenvalue decomposition. This means that multi-stage methods can also be applied to nonlinear stiff systems at a computational cost comparable to that of multi-step methods. Moreover, the multi-stage methods were seen to preserve the invariants of energy and angular momentum in Hamiltonian systems. In addition, it is well known that the stability properties of multi-stage methods are much better than those of multi-step methods.

Overall, one can conclude that multi-stage methods are a good candidate for solving nonlinear stiff systems. Without any penalty in computational cost, they can be applied to long-time simulations and massive physical simulations in fields such as astronomy, meteorology, nuclear fusion, nuclear power, aerospace, machinery, etc.

**Author Contributions:** Conceptualization, Y.J. and S.B. (Sunyoung Bu); methodology, S.B. (Sunyoung Bu); software, Y.J.; validation, Y.J. and S.B. (Soyoon Bak); formal analysis, Y.J.; investigation, Y.J. and S.B. (Soyoon Bak); resources, Y.J and S.B. (Soyoon Bak); data curation, Y.J.; writing—original draft preparation, S.B. (Sunyoung Bu); writing—review and editing, S.B. (Soyoon Bak); visualization, Y.J.; supervision, S.B. (Sunyoung Bu); project administration, S.B. (Sunyoung Bu); funding acquisition, S.B. (Sunyoung Bu).

**Funding:** This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (Grant No. NRF-2019R1H1A2079997) and by the R&D programs through the NFRI (National Fusion Research Institute) funded by the Ministry of Science and ICT of the Republic of Korea (Grant No. NFRI-EN1841-4). In addition, the corresponding author Sunyoung Bu was partly supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (Grant No. NRF-2019R1F1A1058378).

**Conflicts of Interest:** The authors declare no conflict of interest.
