Correction published on 16 May 2019, see Axioms 2019, 8(2), 59.
Article

On a Class of Hermite-Obreshkov One-Step Methods with Continuous Spline Extension

by Francesca Mazzia 1,*,† and Alessandra Sestini 2,†
1 Dipartimento di Informatica, Università degli Studi di Bari Aldo Moro, 70125 Bari, Italy
2 Dipartimento di Matematica e Informatica U. Dini, Università di Firenze, 50134 Firenze, Italy
* Author to whom correspondence should be addressed.
† Member of the INdAM Research group GNCS.
Axioms 2018, 7(3), 58; https://doi.org/10.3390/axioms7030058
Submission received: 23 May 2018 / Revised: 13 August 2018 / Accepted: 15 August 2018 / Published: 20 August 2018
(This article belongs to the Special Issue Advanced Numerical Methods in Applied Sciences)

Abstract:
The class of A-stable symmetric one-step Hermite–Obreshkov (HO) methods introduced by F. Loscalzo in 1968 for dealing with initial value problems is analyzed. Such schemes have the peculiarity of admitting a multiple-knot spline extension collocating the differential equation at the mesh points. As a new result, it is shown that these maximal-order schemes are conjugate symplectic up to order $p + r$, where $r = 2$ and $p$ is the order of the method, which is a benefit when the methods have to be applied to Hamiltonian problems. Furthermore, a new efficient approach for the computation of the spline extension is introduced, adopting the same strategy developed for the BS linear multistep methods. The performances of the schemes are tested in particular on some Hamiltonian benchmarks and compared with those of the Gauss–Runge–Kutta schemes and Euler–Maclaurin formulas of the same order.

1. Introduction

We are interested in the numerical solution of the Cauchy problem, that is, the first order Ordinary Differential Equation (ODE),

$$y'(t) = f(y(t)), \qquad t \in [t_0, t_0 + T],$$

associated with the initial condition:

$$y(t_0) = y_0,$$

where $f : \mathbb{R}^m \to \mathbb{R}^m$, $m \ge 1$, is a $C^{R_1}$ function, $R_1 \ge 1$, on its domain and $y_0 \in \mathbb{R}^m$ is assigned. Note that there is no loss of generality in assuming that the equation is autonomous. In this context, here, we focus on one-step Hermite–Obreshkov (HO) methods ([1], p. 277). Unlike Runge–Kutta schemes, a high order of convergence is obtained with HO methods without adding stages. Clearly, there is a price for this, because total derivatives of the function $f$ are involved in the difference equation defining the method, and thus a suitable smoothness requirement for $f$ is necessary. Multiderivative methods have often been considered in the past for the numerical treatment of ODEs, for example also in the context of boundary value methods [2], and in recent years there has been a renewed interest in this topic, also considering its application to the numerical solution of differential algebraic equations; see, e.g., [3,4,5,6,7,8]. Here, we consider the numerical solution of Hamiltonian problems, which in canonical form can be written as follows:

$$y' = J\, \nabla H(y), \qquad y(t_0) = y_0 \in \mathbb{R}^{2\ell},$$
with:
$$y = \begin{pmatrix} q \\ p \end{pmatrix}, \quad q, p \in \mathbb{R}^{\ell}, \qquad J = \begin{pmatrix} O & I \\ -I & O \end{pmatrix},$$

where $q$ and $p$ are the generalized coordinates and momenta, $H : \mathbb{R}^{2\ell} \to \mathbb{R}$ is the Hamiltonian function and $I$ stands for the identity matrix of dimension $\ell$. Note that the flow $\varphi_t : y_0 \mapsto y(t)$ associated with the dynamical system (3) is symplectic; this means that its Jacobian satisfies:

$$\left( \frac{\partial \varphi_t(y)}{\partial y} \right)^{\!T} J\, \frac{\partial \varphi_t(y)}{\partial y} = J, \qquad \forall\, y \in \mathbb{R}^{2\ell}.$$

A one-step numerical method $\Phi_h : \mathbb{R}^{2\ell} \to \mathbb{R}^{2\ell}$ with stepsize $h$ is symplectic if the discrete flow $y_{n+1} = \Phi_h(y_n)$, $n \ge 0$, satisfies:

$$\left( \frac{\partial \Phi_h(y)}{\partial y} \right)^{\!T} J\, \frac{\partial \Phi_h(y)}{\partial y} = J, \qquad \forall\, y \in \mathbb{R}^{2\ell}.$$
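As a concrete illustration of the symplecticity condition on the Jacobian (a sketch not taken from the paper, using the harmonic oscillator $H(q,p) = (q^2 + p^2)/2$ with $\ell = 1$ as an assumed example), the exact flow is a rotation of the $(q,p)$ plane and satisfies $M^T J M = J$ exactly:

```python
import math

# Harmonic oscillator H(q, p) = (q^2 + p^2)/2: q' = p, p' = -q.
# Its exact flow over time t is a rotation; the (constant) Jacobian M
# of the flow must satisfy the symplecticity condition M^T J M = J.
t = 0.7
M = [[math.cos(t), math.sin(t)],
     [-math.sin(t), math.cos(t)]]
J = [[0.0, 1.0],
     [-1.0, 0.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

MT = [[M[j][i] for j in range(2)] for i in range(2)]  # transpose of M
res = matmul(matmul(MT, J), M)
print(res)  # equals J up to rounding
```

The same check applied to the Jacobian of a numerical one-step map $\Phi_h$ distinguishes symplectic from non-symplectic integrators.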
Two numerical methods $\Phi_h$, $\Psi_h$ are conjugate to each other if there exists a global change of coordinates $\chi_h$ such that:

$$\Psi_h = \chi_h \circ \Phi_h \circ \chi_h^{-1},$$

with $\chi_h(y) = y + O(h)$ uniformly for $y$ varying in a compact set and $\circ$ denoting the composition operator [9]. A method that is conjugate to a symplectic method is said to be conjugate symplectic; this is a weaker requirement than symplecticity, which still allows the numerical solution to share the long-time behavior of a symplectic method. Observe that conjugate symplecticity here refers to a property of the discrete flows of the two numerical methods; it should not be confused with the group of conjugate symplectic matrices, i.e., the set of matrices $M \in \mathbb{C}^{2\ell \times 2\ell}$ that satisfy $M^H J M = J$, where $^H$ denotes the Hermitian conjugate [10].
A more relaxed property, shared by a wider class of numerical schemes, is a generalization of the conjugate-symplecticity property introduced in [11]. A method $y_1 = \Psi_h(y_0)$ of order $p$ is conjugate-symplectic up to order $p + r$, with $r \ge 0$, if a global change of coordinates $\chi_h(y) = y + O(h^p)$ exists such that $\Psi_h = \chi_h \circ \Phi_h \circ \chi_h^{-1}$, with the map $\Phi_h$ satisfying

$$\left( \frac{\partial \Phi_h(y)}{\partial y} \right)^{\!T} J\, \frac{\partial \Phi_h(y)}{\partial y} = J + O(h^{p+r+1}).$$

A consequence of property (7) is that the method $\Psi_h(y)$ nearly conserves all quadratic first integrals and the Hamiltonian function over time intervals of length $O(h^{-r})$ (see [11]).
Recently, the class of Euler–Maclaurin methods for the solution of Hamiltonian problems has been analyzed in [12,13] where the conjugate symplecticity up to order p + 2 of the p-th order methods was proven.
In this paper, we consider the symmetric one-step HO methods, which were analyzed in [14,15] in the context of spline applications. We call them BSHO methods, since they are connected to B-Splines, as we will show. BSHO methods have a formulation similar to that of the Euler–Maclaurin formulas, and the order two and four schemes of the two families are the same. As a new result, we prove that BSHO methods are conjugate symplectic schemes up to order p + 2 , as is the case for the Euler–Maclaurin methods [12,13], and so, both families are suited to the context of geometric integration.
BSHO methods are also strictly related to BS methods [16,17], a class of linear multistep methods, also based on B-splines, suited for addressing boundary value problems formulated as first order differential problems. Note that BS methods were also first studied in [14,15], but at that time they were discarded in favor of BSHO methods since, when used as initial value methods, they are not convergent. In [16,17], the same schemes were studied as boundary value methods and recovered in particular in connection with boundary value problems. As for the BSHO methods, the discrete solution generated by a BS method can be easily extended to a continuous spline collocating the differential problem at the mesh points [18]. The idea now is to rely on B-splines with multiple inner knots in order to derive one-step HO schemes. The inner knot multiplicity is strictly connected to the number of derivatives of $f$ involved in the difference equations defining the method and, consequently, to the order of the method. The efficient approach introduced in [18] for the computation of the collocating spline extension of BS methods is here extended to BSHO methods, working with multiple knots. Note that we adopt a reversed point of view with respect to [14,15], because we assume that the numerical solution generated by the BSHO method is already available and we are interested in an efficient procedure for obtaining the B-spline coefficients of the associated spline.
The paper is organized as follows. In Section 2, one-step symmetric HO methods are introduced, focusing in particular on BSHO methods. Section 3 is devoted to proving that BSHO methods are conjugate symplectic methods up to order $p + 2$. Then, Section 4 first shows how these methods can be revisited in the spline collocation context. Subsequently, an efficient procedure is introduced to compute the B-spline form of the collocating spline extension associated with the numerical solution produced by the R-th BSHO method, and it is shown that its convergence order is equal to that of the numerical solution. Section 6 presents some numerical results related to Hamiltonian problems, comparing them with those generated by Euler–Maclaurin and Gauss–Runge–Kutta schemes of the same order.

2. One-Step Symmetric Hermite–Obreshkov Methods

Let $t_i$, $i = 0, \ldots, N$, be an assigned partition of the integration interval $[t_0, t_0 + T]$, and let us denote by $u_i$ an approximation of $y(t_i)$. Any one-step symmetric Hermite–Obreshkov (HO) method can be written as follows, clearly setting $u_0 := y_0$,

$$u_{n+1} = u_n + \sum_{j=1}^{R} h_n^j\, \beta_j^{(R)} \bigl( u_n^{(j)} - (-1)^j u_{n+1}^{(j)} \bigr), \qquad n = 0, \ldots, N-1,$$

where $h_n := t_{n+1} - t_n$ and where $u_r^{(j)}$, for $j \ge 1$, denotes the total $(j-1)$-th derivative of $f$ with respect to $t$ computed at $u_r$,

$$u_r^{(j)} := \left. \frac{d^{\,j-1} f}{dt^{\,j-1}}(y(t)) \right|_{y(t) = u_r}, \qquad j = 1, \ldots, R.$$

Note that $u_r^{(j)} \approx y^{(j)}(t_r)$ and, on the basis of (1), the analytical computation of the $j$-th derivative $y^{(j)}$ involves a tensor of order $j$. For example, $y^{(2)}(t) = \frac{d}{dt} f(y(t)) = f_y(y(t))\, f(y(t))$ (where $f_y$ becomes the Jacobian $m \times m$ matrix of $f$ with respect to $y$ when $m > 1$). As a consequence, $u_r^{(2)} = f_y(u_r)\, f(u_r)$. We observe that the definition in (14) implies that only $u_{n+1}$ is unknown in (8), which in general is a nonlinear vector equation in $\mathbb{R}^m$ with respect to it.
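As a small sanity check of the identity $u_r^{(2)} = f_y(u_r)\, f(u_r)$ in the scalar case (a sketch with an assumed example $f(y) = y - y^3$, not taken from the paper), one can compare the analytical expression with a finite-difference approximation of $\frac{d}{dt} f(y(t))$ taken along the vector field direction:

```python
# Assumed scalar example f(y) = y - y^3; fy is its derivative.
def f(y):
    return y - y**3

def fy(y):
    return 1.0 - 3.0 * y**2

y = 0.4
d2_analytic = fy(y) * f(y)          # u^(2) = f_y(u) f(u)

# Along a solution, d/dt f(y(t)) = f'(y) y' = f'(y) f(y); approximate it by
# a central difference in the direction of the vector field f(y).
eps = 1e-6
d2_fd = (f(y + eps * f(y)) - f(y - eps * f(y))) / (2.0 * eps)
print(abs(d2_analytic - d2_fd))  # tiny discretization error
```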
For example, the one-step Euler–Maclaurin [1] formulas of order $2s$, with $s \in \mathbb{N}$, $s \ge 1$,

$$u_{n+1} = u_n + \frac{h_n}{2} \bigl( u_n^{(1)} + u_{n+1}^{(1)} \bigr) + \sum_{i=1}^{s-1} h_n^{2i}\, \frac{b_{2i}}{(2i)!} \bigl( u_n^{(2i)} - u_{n+1}^{(2i)} \bigr), \qquad n = 0, \ldots, N-1,$$
(where the b 2 i denote the Bernoulli numbers, which are reported in Table 2) belong to this class of methods. These methods will be referred to in the following with the label EMHO (Euler–Maclaurin Hermite–Obreshkov).
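To make the formula concrete, here is a minimal sketch (not the authors' code) of one step of the order-4 Euler–Maclaurin scheme ($s = 2$) for a scalar autonomous ODE; the implicit equation is solved by fixed-point iteration, and the names `emho4_step`, `f`, `df` are illustrative assumptions:

```python
def emho4_step(f, df, u, h, tol=1e-14, itmax=100):
    """One step of u_{n+1} = u_n + h/2 (u_n' + u_{n+1}')
    + h^2 (b_2/2!) (u_n'' - u_{n+1}''), with b_2 = 1/6, so b_2/2! = 1/12.
    Here u' = f(u) and u'' = f'(u) f(u) (total derivatives)."""
    d1 = f                                  # first total derivative
    d2 = lambda v: df(v) * f(v)             # second total derivative f_y f
    known = u + 0.5 * h * d1(u) + (h**2 / 12.0) * d2(u)
    v = u + h * f(u)                        # explicit Euler predictor
    for _ in range(itmax):
        v_new = known + 0.5 * h * d1(v) - (h**2 / 12.0) * d2(v)
        if abs(v_new - v) < tol:
            break
        v = v_new
    return v_new

# For f(y) = lam*y the scheme reproduces the (2,2)-Pade approximation
# of exp(h*lam), which is its stability function.
lam, h = -1.0, 0.1
u1 = emho4_step(lambda y: lam * y, lambda y: lam, 1.0, h)
z = h * lam
pade22 = (1 + z / 2 + z**2 / 12) / (1 - z / 2 + z**2 / 12)
print(abs(u1 - pade22))  # close to machine precision
```

For small $h$ the fixed-point map is a contraction; in practice, and certainly for stiff problems, a Newton iteration would be preferred.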
Here, we consider another class of symmetric HO methods that can be obtained by defining as follows the polynomial $P_{2R}$,

$$P_{2R}(x) := \frac{x^R (x - 1)^R}{(2R)!},$$

appearing in ([1], Lemma 13.3), the statement of which is reported in Lemma 1.
Lemma 1.
Let $R$ be any positive integer and $P_{2R}$ be a polynomial of exact degree $2R$. Then, the following one-step linear difference equation,

$$\sum_{j=0}^{2R} h_n^j\, u_{n+1}^{(j)}\, P_{2R}^{(2R-j)}(0) = \sum_{j=0}^{2R} h_n^j\, u_n^{(j)}\, P_{2R}^{(2R-j)}(1),$$

defines a multiderivative method of order $2R$.
Referring to the methods obtainable by Lemma 1, if in particular the polynomial $P_{2R}$ is defined as in (11), then we obtain the class of methods in which we are interested here. They can be written as in (8) with,

$$\beta_j^{(R)} := \frac{1}{j!}\, \frac{R (R-1) \cdots (R-j+1)}{2R (2R-1) \cdots (2R-j+1)},$$

which are reported in Table 1 for $R = 1, \ldots, 5$. In particular, for $R = 1$ and $R = 2$, we obtain the trapezoidal rule and the Euler–Maclaurin method of order four, respectively.
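The coefficients in (12) are easy to tabulate exactly; the following sketch (an illustration, not the authors' code) computes $\beta_j^{(R)}$ in rational arithmetic, recovering the trapezoidal rule for $R = 1$ and the order-4 Euler–Maclaurin coefficients for $R = 2$:

```python
from fractions import Fraction
from math import factorial

def beta(R, j):
    """beta_j^(R) = (1/j!) * [R (R-1) ... (R-j+1)] / [2R (2R-1) ... (2R-j+1)]."""
    num, den = 1, 1
    for i in range(j):
        num *= R - i        # numerator falling factorial
        den *= 2 * R - i    # denominator falling factorial
    return Fraction(num, den) / factorial(j)

for R in range(1, 4):
    print(R, [beta(R, j) for j in range(1, R + 1)])
# R = 1: [1/2]             (trapezoidal rule)
# R = 2: [1/2, 1/12]       (order-4 Euler-Maclaurin)
# R = 3: [1/2, 1/10, 1/120]
```

Note that $\beta_1^{(R)} = 1/2$ for every $R$, consistent with the normalization $2\beta_1^{(R)} = 1$ used later in the paper.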
These methods were originally introduced in the spline collocation context, dealing in particular with splines with multiple knots [14,15], as we will show in Section 4. We call them BSHO methods since, as we will show, they can be obtained dealing in particular with the standard B-spline basis. The stability function of the R-th one-step symmetric BSHO method is the rational function corresponding to the $(R,R)$-Padé approximation of the exponential function, as is that of the Gauss–Runge–Kutta method of the same order ([19], p. 72). It has been proven that methods with this stability function are A-stable ([19], Theorem 4.12). For the proof of the statement of the following corollary, which will be useful in the sequel, we refer to [15].
Corollary 1.
Let us assume that $f \in C^{2R+1}(D)$, where $D := \{ y \in \mathbb{R}^m \;|\; \exists\, t \in [t_0, t_0+T] \text{ such that } \| y - y(t) \|_2 \le L_b \}$, with $L_b > 0$. Then, there exists a positive constant $h_b$ such that, if $\max_{0 \le n \le N-1} h_n =: h < h_b$ and $\{u_i\}_{i=0}^{N}$ denotes the related numerical solution produced by the R-th one-step symmetric BSHO method in (8)–(12), it holds that:

$$\| u_i^{(j)} - y_i^{(j)} \| = O(h^{2R}), \qquad j = 1, \ldots, R, \quad i = 0, \ldots, N.$$
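The $O(h^{2R})$ convergence can be observed numerically; the following sketch (an assumed illustration, not the paper's experiment) applies the $R = 1$ BSHO method, i.e. the trapezoidal rule, to $y' = -y$ and checks that halving the stepsize divides the error by roughly $2^{2R} = 4$:

```python
import math

def trapezoidal_solve(lam, y0, T, N):
    """Trapezoidal rule (BSHO with R = 1) for y' = lam*y; the linear
    implicit equation is solved exactly at each step."""
    h = T / N
    growth = (1 + h * lam / 2) / (1 - h * lam / 2)  # (1,1)-Pade of e^{h lam}
    u = y0
    for _ in range(N):
        u *= growth
    return u

lam, y0, T = -1.0, 1.0, 1.0
exact = y0 * math.exp(lam * T)
e1 = abs(trapezoidal_solve(lam, y0, T, 50) - exact)
e2 = abs(trapezoidal_solve(lam, y0, T, 100) - exact)
print(e1 / e2)  # close to 4, i.e. order 2R = 2
```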

3. Conjugate Symplecticity of the Symmetric One-Step BSHO Methods

Following the lines of the proof given in [13], in this section we prove that one-step symmetric BSHO methods are conjugate symplectic schemes up to order $2R + 2$. The following lemma, proved in [20], is the starting point of the proof, and it makes use of the B-series integrator concept. In this regard, referring to [9] for the details, here we just recall that a B-series integrator is a numerical method that can be expressed as a formal B-series, that is, it has a power series in the time step in which each term is a sum of elementary differentials of the vector field and where the number of terms is allowed to be infinite.
Lemma 2.
Assume that Problem (1) admits a quadratic first integral $Q(y) = y^T S y$ (with $S$ denoting a constant symmetric matrix) and that it is solved by a B-series integrator $\Phi_h(y)$. Then, the following properties, where all formulas have to be interpreted in the sense of formal series, are equivalent:
(a) 
$\Phi_h(y)$ has a modified first integral of the form $\tilde{Q}(y) = Q(y) + h Q_1(y) + h^2 Q_2(y) + \ldots$, where each $Q_i(\cdot)$ is a differential functional;
(b) 
$\Phi_h(y)$ is conjugate to a symplectic B-series integrator.
We observe that Lemma 2 was used in [21] to prove the conjugate symplecticity of symmetric linear multistep methods. With arguments similar to those of [13], we prove the following theorem, showing that the map $y_1 = \Psi_h(y_0)$ associated with the BSHO method satisfies $\Psi_h(y) = \Phi_h(y) + O(h^{2R+3})$, where $y_1 = \Phi_h(y_0)$ is a suitable conjugate symplectic B-series integrator; as a consequence, the R-th one-step symmetric BSHO method is conjugate symplectic up to order $2R + 2$.
Theorem 1.
The map $u_1 = \Psi_h(u_0)$ associated with the one-step method (8) admits a B-series expansion and is conjugate to a symplectic B-series integrator up to order $2R + 2$.
Proof. 
The existence of a B-series expansion for $y_1 = \Psi_h(y_0)$ is directly deduced from [22], where a B-series representation of a generic multiderivative Runge–Kutta method has been obtained. By defining the two characteristic polynomials of the trapezoidal rule:

$$\rho(z) := z - 1, \qquad \sigma(z) := \tfrac{1}{2}(z + 1),$$

and the shift operator $E(u_n) := u_{n+1}$, the R-th method described in (8) reads,

$$\rho(E)\, u_n = \sum_{k=1}^{\lceil R/2 \rceil} 2 \beta_{2k-1}^{(R)} h^{2k-1} \sigma(E)\, u_n^{(2k-1)} - \sum_{k=1}^{\lfloor R/2 \rfloor} \beta_{2k}^{(R)} h^{2k} \rho(E)\, u_n^{(2k)}.$$
Observe that $u_i^{(j)}$, for $j \ge 1$, denotes the $(j-1)$-th Lie derivative of $f$ computed at $u_i$,

$$u_i^{(j)} := D_*^{\,j-1} f(u_i), \qquad j = 1, \ldots, R,$$

where $D_*^0 = I$ is the identity operator and $D_*^k f(z)$ is defined as the k-th total derivative of $f(y(t))$ computed at $y(t) = z$, where for the computation of the total derivative it is assumed that $y$ satisfies the differential equation in (1). Note that we use the subscript $*$ to denote the Lie operator, to avoid confusion with the classical derivative operator of the same order, denoted in the following as $D^k$. With this clarification on the definition of $u_i^{(j)}$, we now consider a function $v(t)$, a stepsize $h$ and the shift operator $E_h(v(t)) := v(t+h)$, and we look for a continuous function $v(t)$ that satisfies (13) in the sense of formal series (a series where the number of terms is allowed to be infinite), using the relation $E_h = \sum_{j=0}^{\infty} \frac{h^j}{j!} D^j \equiv e^{hD}$, where $D = D^1$ is the classical derivative operator,

$$\rho(e^{hD})\, v(t) = \sum_{k=1}^{\lceil R/2 \rceil} 2 \beta_{2k-1}^{(R)} h^{2k-1} \sigma(e^{hD})\, D_*^{2k-2} f(v(t)) - \sum_{k=1}^{\lfloor R/2 \rfloor} \beta_{2k}^{(R)} h^{2k} \rho(e^{hD})\, D_*^{2k-1} f(v(t)).$$
By multiplying both sides of the previous equation by $D\, \rho(e^{hD})^{-1}$, we obtain:

$$D v(t) = h D\, \rho(e^{hD})^{-1} \sigma(e^{hD}) \sum_{k=0}^{\lceil R/2 \rceil - 1} 2 \beta_{2k+1}^{(R)} h^{2k} D_*^{2k} f(v(t)) - \sum_{k=1}^{\lfloor R/2 \rfloor} \beta_{2k}^{(R)} h^{2k} D\, D_*^{2k-1} f(v(t)).$$

Now, since the Bernoulli numbers define the Taylor expansion of the function $z/(e^z - 1)$, and $b_0 = 1$, $b_1 = -1/2$ and $b_j = 0$ for the other odd $j$, we have:

$$\frac{z\, \sigma(e^z)}{\rho(e^z)} = \frac{1}{2}\, \frac{z (e^z + 1)}{e^z - 1} = \frac{z}{e^z - 1} + \frac{z}{2} = 1 + \sum_{j=1}^{\infty} \frac{b_{2j}}{(2j)!}\, z^{2j}.$$
Thus, we can write (15) as

$$\dot v(t) = \left( I + \sum_{j=1}^{\infty} \frac{b_{2j}}{(2j)!} h^{2j} D^{2j} \right) \left( I + \sum_{k=1}^{\lceil R/2 \rceil - 1} 2 \beta_{2k+1}^{(R)} h^{2k} D_*^{2k} \right) f(v(t)) - \sum_{k=1}^{\lfloor R/2 \rfloor} \beta_{2k}^{(R)} h^{2k} D\, D_*^{2k-1} f(v(t)).$$
Adding and subtracting terms involving the classical derivative operators $D^{2k}$, $D^{2k-1}$, we get

$$\begin{aligned} \dot v(t) = {} & \left( I + \sum_{j=1}^{\infty} \frac{b_{2j}}{(2j)!} h^{2j} D^{2j} \right) \left( I + \sum_{k=1}^{\lceil R/2 \rceil - 1} 2 \beta_{2k+1}^{(R)} h^{2k} D^{2k} + \sum_{k=1}^{\lceil R/2 \rceil - 1} 2 \beta_{2k+1}^{(R)} h^{2k} \bigl( D_*^{2k} - D^{2k} \bigr) \right) f(v(t)) \\ & - \sum_{k=1}^{\lfloor R/2 \rfloor} \beta_{2k}^{(R)} h^{2k} D\, D^{2k-1} f(v(t)) - \sum_{k=1}^{\lfloor R/2 \rfloor} \beta_{2k}^{(R)} h^{2k} D\, \bigl( D_*^{2k-1} - D^{2k-1} \bigr) f(v(t)), \end{aligned}$$
which we recast as

$$\begin{aligned} \dot v(t) = {} & \left[ \left( I + \sum_{j=1}^{\infty} \frac{b_{2j}}{(2j)!} h^{2j} D^{2j} \right) \left( I + \sum_{k=1}^{\lceil R/2 \rceil - 1} 2 \beta_{2k+1}^{(R)} h^{2k} D^{2k} \right) - \sum_{k=1}^{\lfloor R/2 \rfloor} \beta_{2k}^{(R)} h^{2k} D^{2k} \right] f(v(t)) \\ & + \left( I + \sum_{j=1}^{\infty} \frac{b_{2j}}{(2j)!} h^{2j} D^{2j} \right) \sum_{k=1}^{\lceil R/2 \rceil - 1} 2 \beta_{2k+1}^{(R)} h^{2k} \bigl( D_*^{2k} - D^{2k} \bigr) f(v(t)) - \sum_{k=1}^{\lfloor R/2 \rfloor} \beta_{2k}^{(R)} h^{2k} D\, \bigl( D_*^{2k-1} - D^{2k-1} \bigr) f(v(t)). \end{aligned}$$
Since $v(t) = y(t) + O(h^{2R})$, due to the regularity conditions on the function $f$, we see that $\bigl( D_*^i - D^i \bigr) f(v(t)) = O(h^{2R})$, $i = 1, \ldots, R-1$, and hence the solution $v(t)$ of (16) is $O(h^{2R+2})$-close to the solution of the following initial value problem

$$\dot w(t) = f(w(t)) + \sum_{j=R}^{\infty} \delta_j h^{2j} D^{2j} f(w(t)),$$

with:

$$\delta_j := \sum_{k=0}^{\lceil R/2 \rceil - 1} \frac{b_{2(j-k)}}{(2(j-k))!}\, 2 \beta_{2k+1}^{(R)}, \qquad j \ge R,$$
which has been derived from (16) by neglecting the sums containing the differences $D_*^{2k} - D^{2k}$, $D_*^{2k-1} - D^{2k-1}$. Observe that $\delta_j = 0$ for $j = 1, \ldots, R-1$, since the method is of order $2R$ (see [9], Theorem 3.1, page 340). We may interpret (17) as the modified equation of a one-step method $y_1 = \Phi_h(y_0)$, where $\Phi_h$ is evidently the time-$h$ flow associated with (17). Coupling (17) with the initial condition $w(t_0) = y_0$ and expanding its solution in Taylor series, we get the modified initial value problem associated with the numerical scheme; thus, $\Phi_h$ is a B-series integrator. The proof of the conjugate symplecticity of $\Phi_h$ follows exactly the same steps as the analogous proof in Theorem 1 of [13]. Since $\Psi_h(y) = \Phi_h(y) + O(h^{2R+3})$ and $\Phi_h$ is conjugate-symplectic, the result follows using the same global change of coordinates $\chi_h(y)$ associated with $\Phi_h$. ☐
In Table 2, we report the coefficients $\delta_R$ for $R \le 5$ and the corresponding Bernoulli numbers. We can observe that the truncation error in the modified initial value problem is smaller than that of the EMHO method of the same order, which is equal to $b_{2R}/(2R)!$ (see [13]). The conjugate symplecticity up to order $2R + 2$ makes a numerical scheme suitable for the solution of Hamiltonian problems. A well-known pair of conjugate symplectic methods is composed of the trapezoidal and midpoint rules. Observe that the trapezoidal rule belongs to both the BSHO and EMHO classes of multiderivative methods, and its characteristic polynomials play an important role in the proof of Theorem 1.
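As a cross-check of formula (18) (a sketch based on our reading of the formulas above, not the authors' code), the coefficients $\delta_R$ can be computed exactly by convolving the Bernoulli coefficients $b_{2j}/(2j)!$, obtained from the series of $z\sigma(e^z)/\rho(e^z)$, with the odd coefficients $2\beta_{2k+1}^{(R)}$:

```python
from fractions import Fraction
from math import factorial, ceil

def beta(R, j):
    num, den = 1, 1
    for i in range(j):
        num *= R - i
        den *= 2 * R - i
    return Fraction(num, den) / factorial(j)

def series_z_sigma_over_rho(M):
    """Coefficients q_k of z*sigma(e^z)/rho(e^z) = sum_k q_k z^k,
    so that q_{2j} = b_{2j}/(2j)! (q_0 = 1, odd coefficients vanish)."""
    exp = [Fraction(1, factorial(k)) for k in range(M + 2)]
    num = [(exp[k] + (1 if k == 0 else 0)) / 2 for k in range(M + 1)]  # (e^z+1)/2
    den = [exp[k + 1] for k in range(M + 1)]                           # (e^z-1)/z
    q = []
    for k in range(M + 1):  # formal power-series division num/den
        q.append((num[k] - sum(q[i] * den[k - i] for i in range(k))) / den[0])
    return q

q = series_z_sigma_over_rho(12)

def delta(R, j):
    # formula (18): delta_j = sum_{k=0}^{ceil(R/2)-1} b_{2(j-k)}/(2(j-k))! * 2 beta_{2k+1}^(R)
    return sum(q[2 * (j - k)] * 2 * beta(R, 2 * k + 1) for k in range(ceil(R / 2)))

print(delta(1, 1), delta(2, 2), delta(3, 3))
# 1/12 and -1/720 are shared with the trapezoidal and order-4 EMHO methods
# (the schemes coincide); for R = 3 one gets 1/100800, smaller in modulus
# than b_6/6! = 1/30240.
```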

4. The Spline Extension

A (vector) Hermite polynomial of degree $2R+1$ interpolating both $u_n$ and $u_{n+1}$ at $t_n$ and $t_{n+1}$, respectively, together with assigned derivatives $u_n^{(k)}, u_{n+1}^{(k)}$, $k = 1, \ldots, R$, can be computed using the Newton interpolation formulas with multiple nodes. On the other hand, in his Ph.D. thesis [15], Loscalzo proved that a polynomial of degree $2R$ verifying the same conditions exists if and only if (8) is fulfilled with the β coefficients defined as in (12). Note that, since the polynomial of degree $2R+1$ fulfilling these conditions is always unique and its principal coefficient is given by the generalized divided difference $u[t_n, \ldots, t_n, t_{n+1}, \ldots, t_{n+1}]$ of order $2R+1$ associated with the given R-order Hermite data, the n-th condition in (8) holds if and only if this coefficient vanishes. If all the conditions in (8) are fulfilled, it is possible to define a piecewise polynomial whose restriction to $[t_n, t_{n+1}]$ coincides with this polynomial, and it is clearly a $C^R$ spline of degree $2R$ with breakpoints at the mesh points. Now, when the definition given in (14) is used together with the assumption $u_0 = y_0$, the conditions in (8) become a multiderivative one-step scheme for the numerical solution of (1). Thus, the numerical solution $u_n$, $n = 0, \ldots, N$, it produces and the associated derivative values defined as in (14) can be associated with the above-mentioned spline extension of degree $2R$. Such a spline collocates the differential equation at the mesh points with multiplicity $R$, that is, it verifies the given differential equation and also the equations $y^{(j)}(t) = \frac{d^{\,j-1}(f \circ y)}{dt^{\,j-1}}(t)$, $j = 2, \ldots, R$, at the mesh points. This piecewise representation of the spline is the one adopted in [15]. Here, we are interested in deriving its more compact B-spline representation. Besides being more compact, this also allows us to clarify the connection between BSHO and the BS methods previously introduced in [16,17,18].
To this aim, let us introduce some necessary notation. Let $S_{2R}$ be the space of $C^R$ splines of degree $2R$ with breakpoints at $t_i$, $i = 0, \ldots, N$, where $t_0 < \cdots < t_N = t_0 + T$. Since we rely on the B-spline basis, we need to introduce the associated extended knot vector:

$$T := \{ \tau_{-2R}, \ldots, \tau_{-1}, \tau_0, \ldots, \tau_{(N-1)R}, \tau_{(N-1)R+1}, \tau_{(N-1)R+2}, \ldots, \tau_{(N+1)R+1} \},$$

where:

$$\tau_{-2R} = \cdots = \tau_0 = t_0, \qquad \tau_{(n-1)R+1} = \cdots = \tau_{nR} = t_n, \quad n = 1, \ldots, N-1, \qquad \tau_{(N-1)R+1} = \cdots = \tau_{(N+1)R+1} = t_N,$$

which means that all the inner breakpoints have multiplicity $R$ in $T$ and both $t_0$ and $t_N$ have multiplicity $2R+1$. The associated B-spline basis is denoted as $B_i$, $i = -2R, \ldots, (N-1)R$, and the dimension of $S_{2R}$ as $D$, with $D := (N+1)R + 1$.
The mentioned result proven by Loscalzo is equivalent to saying that, if the β coefficients are defined as in (12), any $C^R$ spline of degree $2R$ with breakpoints at the mesh points fulfills the relation in (8), where $u_n^{(j)}$ denotes the j-th spline derivative at $t_n$. In turn, this is equivalent to saying that such a relation holds for any element of the B-spline basis of $S_{2R}$. Thus, setting $\alpha := (1; -1)^T \in \mathbb{R}^2$ and $\beta^{(i)} := (\beta_i^{(R)}; -(-1)^i \beta_i^{(R)}) \in \mathbb{R}^2$, $i = 1, \ldots, R$, and considering the local support of the B-spline basis, we have that $(\alpha; \beta^{(1)}; \ldots; \beta^{(R)})$, where the punctuation mark ";" means vertical concatenation (to make a column vector), can also be characterized as the unique solution of the following linear system,
$$G^{(n)} \bigl( \alpha; \beta^{(1)}; \ldots; \beta^{(R)} \bigr) = e_{2R+2},$$

where $e_{2R+2} = (0; \ldots; 0; 1) \in \mathbb{R}^{2R+2}$ and:

$$G^{(n)} := \begin{pmatrix} A_1^{(n)T} & h_n A_2^{(n)T} & h_n^2 A_3^{(n)T} & \cdots & h_n^R A_{R+1}^{(n)T} \\ (0,0) & (1,1) & (0,0) & \cdots & (0,0) \end{pmatrix},$$

with $A_1^{(n)}, A_2^{(n)}, \ldots, A_{R+1}^{(n)}$ defined as,

$$A_{j+1}^{(n)} := \begin{pmatrix} B_{(n-2)R}^{(j)}(t_n) & \cdots & B_{nR}^{(j)}(t_n) \\ B_{(n-2)R}^{(j)}(t_{n+1}) & \cdots & B_{nR}^{(j)}(t_{n+1}) \end{pmatrix} \in \mathbb{R}^{2 \times (2R+1)},$$

where $B_i^{(j)}$ denotes the j-th derivative of $B_i$. Note that the last equation in (19), $2 \beta_1^{(R)} = 1$, is just a normalization condition.
In order to prove the non-singularity of the matrix G ( n ) , we need to introduce the following definition,
Definition 1.
Given a non-decreasing set of abscissas $\Theta := \{\theta_i\}_{i=0}^{M}$, we say that a function $g_1$ agrees with another function $g_2$ at $\Theta$ if $g_1^{(j)}(\theta_i) = g_2^{(j)}(\theta_i)$, $j = 0, \ldots, m_i - 1$, $i = 0, \ldots, M$, where $m_i$ denotes the multiplicity of $\theta_i$ in $\Theta$.
Then, we can formulate the following proposition,
Proposition 1.
The $(2R+2) \times (2R+2)$ matrix $G^{(n)}$ defined in (20) and associated with the B-spline basis of $S_{2R}$ is nonsingular.
Proof. 
Observe that the restriction to $I_n = [t_n, t_{n+1}]$ of the splines in $S_{2R}$ generates $\Pi_{2R}$, since there are no inner knots in $I_n$. Then, restricting to $I_n$, $\Pi_{2R}$ can also be generated by the B-splines of $S_{2R}$ not vanishing in $I_n$, that is, by $B_{(n-2)R}, \ldots, B_{nR}$. Since the polynomial in $\Pi_{2R}$ agreeing with a given function at:

$$\Theta = \{ \underbrace{t_n, \ldots, t_n}_{R+1}, \underbrace{t_{n+1}, \ldots, t_{n+1}}_{R} \}$$

is unique, it follows that the corresponding $(2R+1) \times (2R+1)$ matrix collocating the spline basis active in $I_n$ is nonsingular. Such a matrix is the principal submatrix of $G^{(n)T}$ of order $2R+1$. Now, considering that the restriction to $I_n$ of any function in $S_{2R}$ is a polynomial of degree $2R$, we prove by reductio ad absurdum that the last row of $G^{(n)}$ cannot be a linear combination of the other rows. In the opposite case, there would exist a polynomial $P$ of degree $2R$ such that $P(t_n) = P(t_{n+1}) = 0$, $P'(t_n) = P'(t_{n+1}) = 1$, and $P^{(j)}(t_n) = P^{(j)}(t_{n+1}) = 0$, $j = 2, \ldots, R$. Considering these specific interpolation conditions, such a $P$ does not fulfill the n-th condition in (8). This is absurd, since Loscalzo [15] proved that this condition is equivalent to requiring degree reduction for the unique polynomial of degree less than or equal to $2R+1$ fulfilling $R+1$ Hermite conditions at both $t_n$ and $t_{n+1}$. ☐
Note that this alternative way of defining the coefficients of the R-th BSHO scheme is analogous to that adopted in [17] for defining a BS method on a general partition. However, in this case, the coefficients of the scheme do not depend on the mesh distribution, so there is no need to determine them by solving the above linear system. On the other hand, having proven that the matrix $G^{(n)}$ is nonsingular will be useful in the following for determining the B-spline form of the associated spline extension.
Thus, let us now see how the B-spline coefficients of the spline in $S_{2R}$ associated with the numerical solution generated by the R-th BSHO method can be efficiently obtained, considering that the following conditions have to be imposed,

$$s_{2R}(t_n) = u_n, \qquad s_{2R}^{(j)}(t_n) = u_n^{(j)}, \quad j = 1, \ldots, R, \quad n = 0, \ldots, N.$$

Now, we are interested in deriving the B-spline coefficients $c_i$, $i = -2R, \ldots, (N-1)R$, of $s_{2R}$,

$$s_{2R}(t) = \sum_{i=-2R}^{(N-1)R} c_i B_i(t), \qquad t \in [t_0, t_0 + T].$$
Relying on the representation in (23), all the conditions in (22) can be rewritten in the following compact matrix form,

$$(A \otimes I_m)\, c = \bigl( u_0; \ldots; u_N; u_0^{(1)}; \ldots; u_N^{(1)}; \ldots; u_0^{(R)}; \ldots; u_N^{(R)} \bigr),$$

where $c = (c_{-2R}; \ldots; c_{(N-1)R}) \in \mathbb{R}^{mD}$, with $c_j \in \mathbb{R}^m$, $I_m$ is the identity matrix of size $m \times m$, $D$ is the dimension of the spline space previously introduced, and where:

$$A := \bigl( A_1; A_2; \ldots; A_{R+1} \bigr),$$

with each $A_\ell$ being an $(R+1)$-banded matrix of size $(N+1) \times D$ (see Figure 1), with entries defined as follows:

$$(A_\ell)_{i,j} := B_j^{(\ell - 1)}(t_i).$$
The following theorem related to the rectangular linear system in (24) ensures that the collocating spline s 2 R is well defined.
Theorem 2.
The rectangular linear system in (24) always has a unique solution if the entries of the vector on its right-hand side satisfy the conditions in (8) with the β coefficients given in (12).
Proof. 
The proof is analogous to that in [18] (Theorem 1), and it is omitted. ☐
We now introduce the strategy adopted for an efficient computation of the B-spline coefficients of $s_{2R}$.

4.1. Efficient Spline Computation

Concerning the computation of the spline coefficient vectors:

$$c_i, \qquad i = -2R, \ldots, (N-1)R,$$

the unique solution of (24) can be computed with several different strategies, which can have very different computational costs and can produce results of different accuracy when implemented in finite arithmetic. Here, we follow the local strategy used in [18]. Taking into account the banded structure of $A_i$, $i = 1, \ldots, R+1$, we can verify that (24) implies the following relations,

$$\Bigl( \bigl( A_1^{(i)}; h_i A_2^{(i)}; \ldots; h_i^R A_{R+1}^{(i)} \bigr) \otimes I_m \Bigr)\, c^{(i)} = w^{(i)}(u),$$

where $u = (u_0; \ldots; u_N)$, $c^{(i)} := (c_{(i-3)R}; \ldots; c_{(i-1)R}) \in \mathbb{R}^{m(2R+1)}$, $i = 1, \ldots, N$, and:

$$w^{(i)}(u) := \bigl( u_{i-1}; u_i; h_i u_{i-1}^{(1)}; h_i u_i^{(1)}; \ldots; h_i^R u_{i-1}^{(R)}; h_i^R u_i^{(R)} \bigr).$$
As a consequence, we can also write that,
$$\bigl( G^{(i)T} \otimes I_m \bigr)\, \hat{c}^{(i)} = w^{(i)}(u),$$

where $\hat{c}^{(i)} := (c^{(i)}; 0) \in \mathbb{R}^{m(2R+2)}$.
Now, for any integer $r$, $1 \le r \le 2R+2$, we can define $R+1$ auxiliary vectors $\hat{\alpha}_{i,r}^{(R)}, \hat{\beta}_{l,i,r}^{(R)} \in \mathbb{R}^2$, $l = 1, \ldots, R$, as the solution of the following linear system,

$$G^{(i)} \bigl( \hat{\alpha}_{i,r}^{(R)}; \hat{\beta}_{1,i,r}^{(R)}; \ldots; \hat{\beta}_{R,i,r}^{(R)} \bigr) = e_r,$$

where $e_r$ is the r-th unit vector in $\mathbb{R}^{2R+2}$ (that is, the auxiliary vectors define the r-th column of the inverse of $G^{(i)}$). Then, we can write,

$$\Bigl( \bigl( \hat{\alpha}_{i,r}^{(R)}; \hat{\beta}_{1,i,r}^{(R)}; \ldots; \hat{\beta}_{R,i,r}^{(R)} \bigr)^T \otimes I_m \Bigr) \bigl( G^{(i)T} \otimes I_m \bigr)\, \hat{c}^{(i)} = \bigl( e_r^T \otimes I_m \bigr)\, \hat{c}^{(i)} = c_{(i-3)R+r-1}.$$
From this formula, considering (27), we can conclude that:
$$c_{(i-3)R+r-1} = \Bigl( \bigl( \hat{\alpha}_{i,r}^{(R)}; \hat{\beta}_{1,i,r}^{(R)}; \ldots; \hat{\beta}_{R,i,r}^{(R)} \bigr)^T \otimes I_m \Bigr)\, w^{(i)}(u).$$
Thus, solving all the systems (28) for i = 1 , , N , r = r 1 ( i ) , , r 2 ( i ) , with:
$$r_1(i) := \begin{cases} 1 & \text{if } i = 1, \\ R+1 & \text{if } 1 < i \le N, \end{cases} \qquad r_2(i) := \begin{cases} 2R & \text{if } 1 \le i < N, \\ 2R+1 & \text{if } i = N, \end{cases}$$
all the spline coefficients are obtained. Note that, with this approach, we solve $D$ auxiliary systems whose size does not depend on $N$, using only $N$ different coefficient matrices. Furthermore, only the information at $t_{i-1}$ and $t_i$ is necessary to compute $c_{(i-3)R+r-1}$. Thus, the spline can be computed dynamically, at the same time the numerical solution is advanced to a new time value. This is clearly of interest for a dynamic adaptation of the stepsize.
In the following subsection, relying on its B-spline representation, we prove that the convergence order of $s_{2R}$ to $y$ is equal to that of the numerical solution. This result was already available in [15] (see Theorem 4.2 therein), but it was proven with different and longer arguments.

4.2. Spline Convergence

Let us assume the following quasi-uniformity requirement for the mesh,
$$M_l \le \frac{h_i}{h_{i+1}} \le M_u, \qquad i = 0, \ldots, N-1,$$
where $M_l$ and $M_u$ are positive constants not depending on $h$, with $M_l \le 1$ and $M_u \ge 1$. Note that this requirement is a standard assumption in the refinement strategies of numerical methods for ODEs. We first prove the following result, which will be useful in the sequel.
Proposition 2.
If $y \in S_{2R}$, and so in particular if $y$ is a polynomial of degree at most $2R$, then:

$$y_{n+1} - y_n - \sum_{j=1}^{R} h_n^j\, \beta_j^{(R)} \bigl( y_n^{(j)} - (-1)^j y_{n+1}^{(j)} \bigr) = 0, \qquad n = 0, \ldots, N-1,$$

where $y_n := y(t_n)$, $y_n^{(j)} := \frac{d^j y}{dt^j}(t_n)$, $j = 1, \ldots, R$, $n = 0, \ldots, N$, and the spline extension $s_{2R}$ coincides with $y$.
Proof. 
The result follows by considering that the divided difference vanishes and, as a consequence, the local truncation error of the methods is null. ☐
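Proposition 2 can also be verified directly in exact arithmetic; the sketch below (an illustration with an arbitrarily chosen polynomial, not the authors' code) checks that the residual of the relation above vanishes identically for a degree-$2R$ polynomial with $R = 2$:

```python
from fractions import Fraction
from math import factorial

def beta(R, j):
    num, den = 1, 1
    for i in range(j):
        num *= R - i
        den *= 2 * R - i
    return Fraction(num, den) / factorial(j)

def dpoly(c):  # derivative of a polynomial given by coefficients c[k] of t^k
    return [k * c[k] for k in range(1, len(c))]

def peval(c, t):
    return sum(ck * t**k for k, ck in enumerate(c))

R = 2
y = [Fraction(v) for v in (3, -1, 4, 1, -5)]  # arbitrary polynomial of degree 2R
tn, h = Fraction(1, 3), Fraction(1, 7)        # arbitrary t_n and stepsize
tn1 = tn + h

ders = [y]
for _ in range(R):
    ders.append(dpoly(ders[-1]))              # y', y'', ...

resid = peval(y, tn1) - peval(y, tn) - sum(
    h**j * beta(R, j) * (peval(ders[j], tn) - (-1)**j * peval(ders[j], tn1))
    for j in range(1, R + 1))
print(resid)  # 0 exactly
```

Since the local truncation error of the order-$2R$ method involves $y^{(2R+1)}$, the residual vanishes for every polynomial of degree at most $2R$, whatever $t_n$ and $h$.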
Then, we can prove the following theorem (where, for notational simplicity, we restrict to $m = 1$), the statement of which is analogous to that on the convergence of the spline extension associated with BS methods [18]. In the proof of the theorem, we rely on the quasi-interpolation approach for function approximation, the peculiarity of which is that of being a local approach. For example, in the spline context considered here, this means that only a local subset of a given discrete dataset is required to compute a B-spline coefficient of the approximant; refer to [23] for the details.
Theorem 3.
Let us assume that the assumptions on $f$ made in Corollary 1 hold and that (30) holds. Then, the spline extension $s_{2R}$ approximates the solution $y$ of (1) with an error of order $O(h^{2R})$, where $h := \max_{i=0,\ldots,N-1} h_i$.
Proof. 
Let s ¯ 2 R denote the spline belonging to S 2 R obtained by quasi-interpolating y with one of the rules introduced in Formula (5.1) in [23] by point evaluation functionals. From [23] (Theorem 5.2), under the quasi-uniformity assumption on the mesh distribution, we can derive that such a spline approximates y with maximal approximation order also with respect to all the derivatives, that is,
$$\left\| \bar{s}_{2R}^{(j)} - y^{(j)} \right\|_\infty \le K \left\| y^{(2R+1)} \right\|_\infty h^{2R+1-j}, \qquad j = 0, \dots, R,$$
where K is a constant depending only on R , M l and M u .
On the other hand, by the triangle inequality, we can state that:
$$\| s_{2R} - y \|_\infty \le \| s_{2R} - \bar{s}_{2R} \|_\infty + \| \bar{s}_{2R} - y \|_\infty .$$
Thus, we need to consider the first term on the right-hand side of this inequality. In this regard, because of the nonnegativity and the partition of unity property of the B-splines, we can write:
$$\| s_{2R} - \bar{s}_{2R} \|_\infty = \Big\| \sum_{i=-2R}^{(N+1)R+1} (c_i - \bar{c}_i) B_i(\cdot) \Big\|_\infty \le \| c - \bar{c} \|_\infty,$$
where $c := (c_{-2R}; \dots; c_{(N+1)R+1})$ and $\bar{c} := (\bar{c}_{-2R}; \dots; \bar{c}_{(N+1)R+1})$.
Now, for any function $g \in C^{2R}[t_0, t_0+T]$, we can define the following linear functionals,
$$\lambda_{i,r}(g) := \mathbf{w}^{(i)\,T}(g) \left( \hat{\alpha}_{i,r}^{(R)}; \hat{\beta}_{1,i,r}^{(R)}; \dots; \hat{\beta}_{R,i,r}^{(R)} \right),$$
where:
$$\mathbf{w}^{(i)}(g) := \left( g(t_{i-1}); \, g(t_i); \, h_i g'(t_{i-1}); \, h_i g'(t_i); \, \dots; \, h_i^R g^{(R)}(t_{i-1}); \, h_i^R g^{(R)}(t_i) \right)$$
and the vector $( \hat{\alpha}_{i,r}^{(R)}; \hat{\beta}_{1,i,r}^{(R)}; \dots; \hat{\beta}_{R,i,r}^{(R)} )$ has been defined in the previous section. Considering from Proposition 2 that $\bar{s}_{2R}$, as well as any other spline belonging to $S_{2R}$, can be written as follows,
$$\bar{s}_{2R}(\cdot) = \sum_{i=1}^{N} \sum_{r=r_1(i)}^{r_2(i)} \lambda_{i,r}(\bar{s}_{2R}) \, B_{2R-1+i+r-r_1(i)}(\cdot),$$
from (31), we can deduce that:
$$\bar{c} = \left( \lambda_{1,r_1(1)}(\bar{s}_{2R}); \dots; \lambda_{N,r_2(N)}(\bar{s}_{2R}) \right) = \left( \lambda_{1,r_1(1)}(y); \dots; \lambda_{N,r_2(N)}(y) \right) + O(h^{2R+1}).$$
Now, the vector $( \hat{\alpha}_{i,r}^{(R)}; \hat{\beta}_{1,i,r}^{(R)}; \dots; \hat{\beta}_{R,i,r}^{(R)} )$ is defined in (28) as the $r$-th column of the inverse of the matrix $G^{(i)}$. On the other hand, because of the locality of the B-spline basis and of the multiplicity $R$ of the inner knots, the entries of such a nonsingular matrix do not depend on $h$, but only on the ratios $h_j/h_{j+1}$, $j = i-1, i$, which are uniformly bounded from below and from above because of (30). Thus, there exists a constant $C$, depending on $M_l$, $M_u$ and $R$, such that $\| G^{(i)\,-1} \|_\infty \le C$, which implies that the same bound holds for each of the mentioned coefficient vectors. From the latter, we deduce that, for all indices,
$$| c_i - \bar{c}_i | \le K \left\| \mathbf{w}^{(i)}(u) - \mathbf{w}^{(i)}(y) \right\|_\infty + O(h^{2R+1}).$$
On the other hand, taking into account the result reported in Corollary 1 besides (31), we can easily derive that $\| \mathbf{w}^{(i)}(u) - \mathbf{w}^{(i)}(y) \|_\infty = O(h^{2R})$, which then implies that $\| c - \bar{c} \|_\infty = O(h^{2R})$.  ☐

5. Approximation of the Derivatives

The computation of the derivatives $u_n^{(j)}$, $j \ge 2$, from the corresponding $u_n$ is quite expensive; thus, methods not requiring derivative values are usually preferred. Therefore, as for any other multiderivative method, it is of interest to associate with BSHO methods an efficient way to compute the derivative values at the mesh points. Several possibilities can be exploited, such as:
  • using generic symbolic tools, if the function f is known in closed form;
  • using a tool of automatic differentiation, like ADiGator, a MATLAB Automatic Differentiation Tool [24];
  • using the Infinity Computer Arithmetic, if the function f is known as a black box [6,7,13];
  • approximating it with, for example, finite differences.
As shown in the remainder of this section, when approximate derivatives are used, we obtain a different numerical solution, since the numerical scheme defining it changes. In this case, the final formulation of the scheme is that of a standard linear multistep method, still derived from (8) with the coefficients in (12), but with the derivatives of order higher than one replaced by their approximations. In this section, we show the relation of these methods with a class of Boundary Value Methods (BVMs), the Extended Trapezoidal Rules (ETRs), which are linear multistep methods used with boundary conditions [25]. Similar relations have been found in [26] between HO methods and the equivalent class of super-implicit methods, which require function values not only at past, but also at future time steps. The ETRs can be derived from BSHO methods when the derivatives are approximated by finite differences. Let us consider the order four method, with $R = 2$ and a uniform stepsize $h$. In this case, the first derivative of $f$ could be approximated using central differences:
$$f'_i \approx \frac{f_{i+1} - f_{i-1}}{2h};$$
the numerical scheme (8), denoting $u_i^{(1)} =: f_i$ and $u_i^{(2)} =: f'_i$, is:
$$u_{i+1} = u_i + \frac{h}{2} \left( f_{i+1} + f_i \right) - \frac{h^2}{12} \left( f'_{i+1} - f'_i \right),$$
which, after the approximation, becomes:
$$u_{i+1} = u_i + \frac{h}{2} \left( f_{i+1} + f_i \right) - \frac{h}{24} \left( f_{i+2} - f_i - f_{i+1} + f_{i-1} \right);$$
rearranging, we recover the ETR of order four:
$$u_{i+1} = u_i + \frac{h}{24} \left( -f_{i+2} + 13 f_{i+1} + 13 f_i - f_{i-1} \right).$$
With similar arguments for the method of order six ($R = 3$), by approximating the first derivatives with the order four central finite differences:
$$f'_{i+1} \approx \frac{1}{h} \left( -\frac{1}{12} f_{i+3} + \frac{2}{3} f_{i+2} - \frac{2}{3} f_i + \frac{1}{12} f_{i-1} \right),$$
and:
$$u_i^{(3)} =: f''_i \approx \frac{1}{h^2} \left( -\frac{1}{12} f_{i+2} + \frac{4}{3} f_{i+1} - \frac{5}{2} f_i + \frac{4}{3} f_{i-1} - \frac{1}{12} f_{i-2} \right),$$
and rearranging, we obtain the sixth order ETR method:
$$u_{i+1} = u_i + \frac{h}{1440} \left( 11 f_{i+3} - 93 f_{i+2} + 802 f_{i+1} + 802 f_i - 93 f_{i-1} + 11 f_{i-2} \right).$$
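As a quick consistency check of the two ETR formulas above, one can verify in exact rational arithmetic that their weights integrate $f(t) = t^k$ exactly over one step for all $k$ below the order of the method (taking $h = 1$ and placing the stencil nodes at the corresponding integer offsets). The helper below is our sketch, not taken from the paper:

```python
from fractions import Fraction as F

def rule_moment(nodes, weights, k):
    # apply the quadrature rule to f(t) = t**k; exactness means the result
    # equals the integral of t**k over [0, 1], i.e. 1/(k+1)  (h = 1)
    return sum(w * F(m) ** k for m, w in zip(nodes, weights))

# ETR of order four: weights of f_{i-1}, f_i, f_{i+1}, f_{i+2}
etr4 = ([-1, 0, 1, 2], [F(-1, 24), F(13, 24), F(13, 24), F(-1, 24)])
# ETR of order six: weights of f_{i-2}, ..., f_{i+3}
etr6 = ([-2, -1, 0, 1, 2, 3],
        [F(11, 1440), F(-93, 1440), F(802, 1440),
         F(802, 1440), F(-93, 1440), F(11, 1440)])

for (nodes, weights), order in ((etr4, 4), (etr6, 6)):
    for k in range(order):          # exact for k = 0, ..., order-1
        assert rule_moment(nodes, weights, k) == F(1, k + 1)
    # exactness fails at k = order, confirming the order of the rule
    assert rule_moment(nodes, weights, order) != F(1, order + 1)
```

This confirms, in particular, that substituting the central differences into the order-four HO scheme reproduces the stated ETR coefficients.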
This relation allows us to derive a continuous extension of the ETR schemes from the continuous extension of the BSHO method, simply substituting the derivatives with the corresponding approximations. Naturally, a change of the stepsize now changes the coefficients of the linear multistep schemes. Observe that BVMs have been efficiently used for the solution of boundary value problems in [27], and the BS methods also belong to this class [16].
It has been proven in [21] that symmetric linear multistep methods are conjugate symplectic schemes. Naturally, in the context of linear multistep methods used with only initial conditions, this property applies only to the trapezoidal method; however, when solving boundary value problems, the correct use of a linear multistep formula is with boundary conditions, which makes the corresponding formulas stable, with a region of stability equal to the left half plane of $\mathbb{C}$ (see [25]). The conjugate symplecticity of the methods explains their good behavior, shown in [28,29], when used in block form with a sufficiently large block for the solution of conservative problems.
Remark 1.
We recall that, even when approximated derivatives are used, the numerical solution admits a $C^R$ spline extension of degree $2R$ verifying all the conditions in (24), where all the $u_n^{(j)}$, $j \ge 2$, appearing on the right-hand side have to be replaced with the adopted approximations. The exact solution of the rectangular system in (24) is still possible, since (8) with the coefficients in (12) is still verified by the numerical solution $u_n$, $n = 0, \dots, N$, by its first derivatives $u_n^{(1)} = f(u_n)$, $n = 0, \dots, N$, and by the approximations of the higher order derivatives. The only difference is that, in this case, the continuous spline extension collocates at the breakpoints just the given first order differential equation.

6. Numerical Examples

The numerical examples reported here have two main purposes. The first is to show the good behavior of BSHO methods for Hamiltonian problems, in terms of both the linear growth of the error in long time computations and the conservation of the Hamiltonian; to this end, we compare the methods with the symplectic Gauss–Runge–Kutta methods and with the EMHO methods, which are conjugate symplectic up to order $p + 2$. The second is to show the convergence properties of the continuous spline extensions. Observe that the availability of a continuous extension of the same order as the method is an important property: for high order methods, and especially for superconvergent methods like the Gauss ones, it is very difficult to find a good continuous extension, since their natural continuous extension does not keep the same order of accuracy unless extra stages are added [30]. A good continuous extension is also an important tool, for example, for event location.
We report the results of our experiments for BSHO methods of order six and eight. We recall that the order two BSHO method corresponds to the well-known trapezoidal rule, whose conjugate symplecticity is well known (see, for example, [9]) and whose B-spline continuous extension has already been developed in [18]. The order four BSHO method also belongs to the EMHO class and has been analyzed in detail in [13].

6.1. Kepler Problem

The first example is the classical Kepler problem, which describes the motion of two bodies subject to Newton’s law of gravitation. This problem is a completely integrable Hamiltonian nonlinear dynamical system with two degrees of freedom (see, for details, [31]). The Hamiltonian function:
$$H(q_1, q_2, p_1, p_2) = \frac{1}{2} \left( p_1^2 + p_2^2 \right) - \frac{1}{\sqrt{q_1^2 + q_2^2}},$$
describes the motion of the body that is not located in the origin of the coordinate systems. This motion is an ellipse in the q 1 - q 2 plane, the eccentricity e of which is set using as starting values:
$$q_1(0) = 1 - e, \qquad q_2(0) = 0, \qquad p_1(0) = 0, \qquad p_2(0) = \sqrt{\frac{1+e}{1-e}},$$
and with period $\mu := 2\pi$. The first integrals of this problem are: the total energy $H$, the angular momentum:
$$M(q_1, q_2, p_1, p_2) := q_1 p_2 - q_2 p_1,$$
and the Lenz vector $A := (A_1, A_2, A_3)$, the components of which are:
$$A_1(q, p) := p_2 \, M(q, p) - \frac{q_1}{\|q\|_2}, \qquad A_2(q, p) := -p_1 \, M(q, p) - \frac{q_2}{\|q\|_2}, \qquad A_3(q, p) := 0.$$
Only three of the four first integrals are independent, so, for example, A 2 can be neglected.
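For the starting values above, the invariants take simple closed-form values ($H = -1/2$, $M = \sqrt{1-e^2}$, and $A_1 = e$), which makes them convenient reference values when monitoring conservation. A small sketch (the function name is ours):

```python
import math

def kepler_invariants(q1, q2, p1, p2):
    # Hamiltonian, angular momentum, and first Lenz component
    # for the Kepler problem in the normalization used here
    r = math.hypot(q1, q2)
    H = 0.5 * (p1 ** 2 + p2 ** 2) - 1.0 / r
    M = q1 * p2 - q2 * p1
    A1 = p2 * M - q1 / r
    return H, M, A1

e = 0.6
H, M, A1 = kepler_invariants(1 - e, 0.0, 0.0, math.sqrt((1 + e) / (1 - e)))
# H = -1/2 independently of e; M = sqrt(1 - e^2); A1 equals the eccentricity e
assert abs(H + 0.5) < 1e-14
assert abs(M - math.sqrt(1 - e ** 2)) < 1e-14
assert abs(A1 - e) < 1e-14
```

In particular, the fact that $A_1$ equals the eccentricity explains why its drift is a sensitive indicator of the quality of a long-time integration.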
As in [13], we set $e = 0.6$ and $h = \mu/200$, and we integrate the problem over $10^3$ periods. Setting $y := (q_1, q_2, p_1, p_2)$, the error $\|y_j - y_0\|_1$ in the solution is computed at times equal to multiples of the period, that is at $t_j = 2\pi j$, with $j = 1, 2, \dots$; the errors in the invariants have been computed at the mesh points $t_n = \pi n$, $n = 1, 3, 5, \dots$. Figure 2 reports the obtained results for the sixth and eighth order BSHO (dotted lines, BSHO6, BSHO8), the sixth order EMHO (solid line, EMHO6) and the sixth and eighth order Gauss–Runge–Kutta (GRK) (dashed lines, GRK6, GRK8) methods. In the top-left picture, the absolute error of the numerical solution is shown; the top-right picture shows the error in the Hamiltonian function; the error in the angular momentum is drawn in the bottom-left picture, while the bottom-right picture concerns the error in the first component of the Lenz vector. As expected from a symplectic or a conjugate symplectic integrator, we can see a linear drift in the error $\|y_j - y_0\|_1$ as the time increases (top-left plot) and in the first component of the Lenz vector (bottom-right picture). As for the other considered methods, BSHO methods guarantee a near conservation of the Hamiltonian function and of the angular momentum (remaining pictures). The latter quadratic invariant is precisely conserved (up to machine precision) by GRK methods due to their symplecticity. We observe also that, as expected, the error for the BSHO6 method is $3/10$ of the error of the EMHO6 method.
To check the convergence behavior of the continuous extensions, we integrated the problem over 10 periods starting with stepsize $h = \mu/N$, $N = 100$. We computed a reference solution using the order eight method with a halved stepsize, and we computed the maximum absolute error on the doubled grid. The results, reported in Table 3 for the solution and its first derivative, clearly show that the continuous extension respects the theoretical order of convergence.

6.2. Non-Linear Pendulum Problem

As a second example, we consider the dynamics of a pendulum under the influence of gravity. This dynamics is usually described in terms of the angle q that the pendulum forms with its stable rest position:
$$\ddot{q} + \sin q = 0,$$
where p = q ˙ is the angular velocity. The Hamiltonian function associated with (33) is:
$$H(q, p) = \frac{1}{2} p^2 - \cos q.$$
An initial condition ( q 0 , p 0 ) such that | H ( q 0 , p 0 ) | < 1 gives rise to a periodic solution y ( t ) = ( q ( t ) , p ( t ) ) corresponding to oscillations of the pendulum around the straight-down stationary position. In particular, starting at y 0 = ( q 0 , 0 ) , the period of oscillation may be expressed in terms of the complete elliptical integral of the first kind as:
$$\mu(q_0) = 4 \int_0^1 \frac{dz}{\sqrt{(1 - z^2)\left(1 - \sin^2(q_0/2) \, z^2\right)}}.$$
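The period can be evaluated numerically from this formula: the substitution $z = \sin\theta$ turns it into the complete elliptic integral form $4\int_0^{\pi/2} d\theta / \sqrt{1 - \sin^2(q_0/2)\sin^2\theta}$, whose smooth integrand is handled well by a composite Simpson rule. A sketch (function name ours) reproducing the value of $\mu(\pi/2)$:

```python
import math

def pendulum_period(q0, n=2000):
    # mu(q0) = 4 K(sin(q0/2)); the substitution z = sin(theta) removes the
    # endpoint singularity, leaving a smooth integrand on [0, pi/2]
    m = math.sin(q0 / 2) ** 2
    f = lambda th: 1.0 / math.sqrt(1.0 - m * math.sin(th) ** 2)
    a, b = 0.0, math.pi / 2
    h = (b - a) / n
    # composite Simpson rule (n even)
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return 4 * (h / 3) * s

mu = pendulum_period(math.pi / 2)
assert abs(mu - 7.416298709205487) < 1e-12
```

With $n = 2000$ subintervals, the Simpson rule already reaches well below the tolerance used in the assertion.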
For the experiments, we choose $q_0 = \pi/2$; thus, the period is $\mu = 7.416298709205487$. We use the sixth and eighth order BSHO and GRK methods and the sixth order EMHO method with stepsize $h = \mu/20$ to integrate the problem over $2 \cdot 10^4$ periods. Setting $y = (q, p)$, the errors $\|y_j - y_0\|$ in the solution are again evaluated at times that are multiples of the period $\mu$, that is for $t_j = \mu j$, with $j = 1, 2, \dots$; the energy error $H(y_n) - H(y_0)$ has been computed at the mesh points $t_n = 11 h n$, $n = 1, 2, \dots$. Figure 3 reports the obtained results. In the left plot, we can see that, for all the considered methods, the error in the solution grows linearly as time increases. A near conservation of the energy function is observable in both pictures on the right. The amplitudes of the bounded oscillations are similar for all the considered methods, confirming the good long-time behavior of BSHO methods for the problem at hand. To check the convergence behavior of the continuous extensions, we integrated the problem over 10 periods starting with stepsize $h = \mu/N$, $N = 10$. We computed a reference solution using the order eight method with a halved stepsize, and we computed the maximum absolute error on the doubled grid. The results, reported in Table 4 for the solution and its first derivative, clearly show, also for this example, that the continuous extension respects the theoretical order of convergence.

7. Conclusions

In this paper, we have analyzed the BSHO schemes, a class of symmetric one-step multi-derivative methods first introduced in [14,15] for the numerical solution of the Cauchy problem. As a new result, we have proven that these are conjugate symplectic schemes up to order $2R + 2$ and thus suited to the context of geometric integration. Moreover, an efficient approach for the computation of the B-spline form of the spline extending the numerical solution produced by any BSHO method has been presented. The spline associated with the $R$-th BSHO method collocates the differential equation at the mesh points with multiplicity $R$ and approximates the solution of the considered differential problem with the same accuracy $O(h^{2R})$ characterizing the numerical solution. The relation between BSHO schemes and symmetric linear multistep methods, when the derivatives are approximated by finite differences, has also been pointed out.
Future related work will consist in studying the possibility of associating with the BSHO schemes a dual quasi-interpolation approach, as already done for the BS linear multistep methods in [16,18,32].

Author Contributions

Conceptualization, F.M. and A.S. Formal analysis, F.M. and A.S. Investigation, F.M. and A.S. Methodology, F.M. and A.S. Writing, original draft, F.M. and A.S. Writing, review and editing, F.M. and A.S.

Funding

Supported by INdAM through GNCS 2018 research projects.

Acknowledgments

We thank Felice Iavernaro for helpful discussions related to this research and the anonymous referees for their careful reading and useful remarks.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hairer, E.; Nørsett, S.; Wanner, G. Solving Ordinary Differential Equations I. Nonstiff Problems, 2nd ed.; Springer: Berlin, Germany, 1993. [Google Scholar]
  2. Ghelardoni, P.; Marzulli, P. Stability of some boundary value methods for IVPs. Appl. Numer. Math. 1995, 18, 141–153. [Google Scholar] [CrossRef]
  3. Estévez Schwarz, D.; Lamour, R. A new approach for computing consistent initial values and Taylor coefficients for DAEs using projector-based constrained optimization. Numer. Algorithms 2018, 78, 355–377. [Google Scholar]
  4. Baeza, A.; Boscarino, S.; Mulet, P.; Russo, G.; Zorío, D. Approximate Taylor methods for ODEs. Comput. Fluids 2017, 159, 156–166. [Google Scholar] [CrossRef]
  5. Skvortsov, L. A fifth order implicit method for the numerical solution of differential-algebraic equations. Comput. Math. Math. Phys. 2015, 55, 962–968. [Google Scholar] [CrossRef]
  6. Amodio, P.; Iavernaro, F.; Mazzia, F.; Mukhametzhanov, M.; Sergeyev, Y. A generalized Taylor method of order three for the solution of initial value problems in standard and infinity floating-point arithmetic. Math. Comput. Simul 2017, 141, 24–39. [Google Scholar] [CrossRef]
  7. Sergeyev, Y.; Mukhametzhanov, M.; Mazzia, F.; Iavernaro, F.; Amodio, P. Numerical methods for solving initial value problems on the Infinity Computer. Int. J. Unconv. Comput. 2016, 12, 3–23. [Google Scholar]
  8. Butcher, J.; Sehnalová, P. Predictor-corrector Obreshkov pairs. Computing 2013, 95, 355–371. [Google Scholar] [CrossRef]
  9. Hairer, E.; Lubich, C.; Wanner, G. Geometric Numerical Integration. Structure-Preserving Algorithms for Ordinary Differential Equations, 2nd ed.; Springer: Berlin, Germany, 2006. [Google Scholar]
  10. Mackey, D.S.; Mackey, N.; Tisseur, F. Structured tools for structured matrices. Electron. J. Linear Algebra 2003, 10, 106–145. [Google Scholar] [CrossRef]
  11. Hairer, E.; Zbinden, C.J. On conjugate symplecticity of B-series integrators. IMA J. Numer. Anal. 2013, 33, 57–79. [Google Scholar] [CrossRef]
  12. Iavernaro, F.; Mazzia, F. Symplecticity properties of Euler–Maclaurin methods. In Proceedings of the AIP Conference, Thessaloniki, Greece, 25–30 September 2017; Volume 1978. [Google Scholar]
  13. Iavernaro, F.; Mazzia, F.; Mukhametzhanov, M.; Sergeyev, Y. Conjugate symplecticity of Euler–Maclaurin methods and their implementation on the Infinity Computer. arXiv 2018, arXiv:1807.10952; Appl. Numer. Math., in press. [Google Scholar]
  14. Loscalzo, F. An introduction to the application of spline functions to initial value problems. In Theory and Applications of Spline Functions; Academic Press: New York, NY, USA, 1969; pp. 37–64. [Google Scholar]
  15. Loscalzo, F. On the Use of Spline Functions for the Numerical Solution of Ordinary Differential Equations. Ph.D. Thesis, University of Wisconsin, Madison, WI, USA, 1968. [Google Scholar]
  16. Mazzia, F.; Sestini, A.; Trigiante, D. B-spline linear multistep methods and their continuous extensions. SIAM J. Numer. Anal. 2006, 44, 1954–1973. [Google Scholar] [CrossRef]
  17. Mazzia, F.; Sestini, A.; Trigiante, D. BS linear multistep methods on non-uniform meshes. J. Numer. Anal. Ind. Appl. Math. 2006, 1, 131–144. [Google Scholar]
  18. Mazzia, F.; Sestini, A.; Trigiante, D. The continuous extension of the B-spline linear multistep methods for BVPs on non-uniform meshes. Appl. Numer. Math. 2009, 59, 723–738. [Google Scholar] [CrossRef]
  19. Hairer, E.; Wanner, G. Solving Ordinary Differential Equations II. Stiff and Differential Algebraic Problems, 2nd ed.; Springer: Berlin, Germany, 1996. [Google Scholar]
  20. Chartier, P.; Faou, E.; Murua, A. An algebraic approach to invariant preserving integrators: The case of quadratic and hamiltonian invariants. Numer. Math. 2006, 103, 575–590. [Google Scholar] [CrossRef]
  21. Hairer, E. Conjugate-symplecticity of linear multistep methods. J. Comput. Math. 2008, 26, 657–659. [Google Scholar]
  22. Hairer, E.; Murua, A.; Sanz-Serna, J. The non-existence of symplectic multi-derivative Runge–Kutta methods. BIT 1994, 34, 80–87. [Google Scholar] [CrossRef]
  23. Lyche, T.; Shumaker, L.L. Local spline approximation methods. J. Approx. Theory 1975, 15, 294–325. [Google Scholar] [CrossRef]
  24. Weinstein, M.J.; Rao, A.V. Algorithm 984: ADiGator, a toolbox for the algorithmic differentiation of mathematical functions in MATLAB using source transformation via operator overloading. ACM Trans. Math. Softw. 2017, 44, 21:1–21:25. [Google Scholar] [CrossRef]
  25. Brugnano, L.; Trigiante, D. Solving Differential Problems by Multistep Initial and Boundary Value Methods; CRC Press: Boca Raton, FL, USA, 2016. [Google Scholar]
  26. Neta, B.; Fukushima, T. Obrechkoff versus super-implicit methods for the solution of first- and second-order initial value problems. Comput. Math. Appl. 2003, 45, 383–390. [Google Scholar] [CrossRef]
  27. Mazzia, F.; Trigiante, D. A hybrid mesh selection strategy based on conditioning for boundary value ODE problems. Numer. Algorithms 2004, 36, 169–187. [Google Scholar] [CrossRef]
  28. Iavernaro, F.; Mazzia, F.; Trigiante, D. Multistep methods for conservative problems. Med. J. Math. 2005, 2, 53–69. [Google Scholar] [CrossRef]
  29. Mazzia, F.; Pavani, R. Symmetric block BVMs for the solution of conservative systems. In Proceedings of the AIP Conference Proceedings, Rhodes, Greece, 21–27 September 2013; Volume 1558, pp. 738–741. [Google Scholar]
  30. Shampine, L.F.; Jay, L.O. Dense Output. In Encyclopedia of Applied and Computational Mathematics; Springer: Berlin, Germany, 2015; pp. 339–345. [Google Scholar]
  31. Brugnano, L.; Iavernaro, F. Line Integral Methods for Conservative Problems; Monographs and Research Notes in Mathematics; Gordon and Breach Science Publishers: Amsterdam, The Netherlands, 1998. [Google Scholar]
  32. Mazzia, F.; Sestini, A. The BS class of Hermite spline quasi-interpolants on nonuniform knot distributions. BIT Numer. Math. 2009, 49, 611–628. [Google Scholar] [CrossRef]
Figure 1. Sparsity structure of the matrix A with N = 8 , R = 1 (left) and with N = 8 , R = 2 (right).
Figure 2. Kepler problem: results for the sixth (BSHO6, red dotted line) and eighth (BSHO8, purple dotted line) order BSHO methods, sixth order Euler–Maclaurin method (EMHO6, blue solid line) and sixth (Gauss–Runge–Kutta (GRK6), yellow dashed line) and eighth (GRK8-green dashed line) order Gauss methods. (Top-left) Absolute error of the numerical solution; (top-right) error in the Hamiltonian function; (bottom-left) error in the angular momentum; (bottom-right) error in the second component of the Lenz vector.
Figure 3. Nonlinear pendulum problem: results for the Hermite–Obreshkov method of order six and eight (BSHO6, red, and BSHO8, purple dotted lines), for the sixth order Euler–Maclaurin (EMHO6, blue solid line) and Gauss methods (GRK6, yellow, and GRK8, green dashed lines) applied to the pendulum problem. (Left) plot: absolute error of the numerical solution; (upper-right) and (bottom-right) plots: error in the Hamiltonian function for the sixth order and eighth order integrators, respectively.
Table 1. Symmetric one-step B-Spline Hermite–Obreshkov (BSHO) coefficients.
R | β_1^{(R)} | β_2^{(R)} | β_3^{(R)} | β_4^{(R)} | β_5^{(R)}
1 | 1/2 | | | |
2 | 1/2 | 1/12 | | |
3 | 1/2 | 1/10 | 1/120 | |
4 | 1/2 | 3/28 | 1/84 | 1/1680 |
5 | 1/2 | 1/9 | 1/72 | 1/1008 | 1/30240
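The entries of Table 1 coincide with the numerator coefficients of the diagonal $(R,R)$ Padé approximant of $e^z$. As a consistency check, the table can be regenerated in exact arithmetic from the standard Padé closed form (used here as an assumption for checking purposes, not quoted from the paper):

```python
from fractions import Fraction as F
from math import factorial

def beta(R, j):
    # numerator coefficients of the diagonal (R, R) Pade approximant of exp(z),
    # which generate the order-2R symmetric one-step HO method
    return F(factorial(2 * R - j) * factorial(R),
             factorial(2 * R) * factorial(j) * factorial(R - j))

# reproduce the rows of Table 1
assert [beta(2, j) for j in (1, 2)] == [F(1, 2), F(1, 12)]
assert [beta(3, j) for j in (1, 2, 3)] == [F(1, 2), F(1, 10), F(1, 120)]
assert [beta(4, j) for j in (1, 2, 3, 4)] == \
    [F(1, 2), F(3, 28), F(1, 84), F(1, 1680)]
assert [beta(5, j) for j in (1, 2, 3, 4, 5)] == \
    [F(1, 2), F(1, 9), F(1, 72), F(1, 1008), F(1, 30240)]
```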
Table 2. Coefficients of the modified differential equations and Bernoulli numbers.
R | 1 | 2 | 3 | 4 | 5
δ_R | b_2/2! | b_4/4! | (3/10) b_6/6! | (1/21) b_8/8! | (1/210) b_10/10!
b_{2R} | 1/6 | −1/30 | 1/42 | −1/30 | 5/66
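The second row of Table 2 lists the Bernoulli numbers $b_{2R}$ (the minus signs of $b_4$ and $b_8$ are easily lost in rendering); they can be regenerated with the classical recurrence, as in this sketch:

```python
from fractions import Fraction as F
from math import comb

def bernoulli(n):
    # Bernoulli numbers via the recurrence sum_{k=0}^{m} C(m+1, k) B_k = 0
    B = [F(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(-s / (m + 1))
    return B[n]

# the Bernoulli numbers b_2, b_4, ..., b_10 appearing in Table 2
expected = [F(1, 6), F(-1, 30), F(1, 42), F(-1, 30), F(5, 66)]
assert [bernoulli(2 * R) for R in range(1, 6)] == expected
```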
Table 3. Kepler problem: maximum absolute error of the numerical solution and its derivative computed for 10 periods.
Order | N | err_y | Rate | err_y' | Rate
4 | 100 | 2.69·10^−1 | — | 1.33·10^0 | —
4 | 200 | 1.69·10^−2 | 3.99 | 8.50·10^−2 | 3.96
4 | 400 | 1.06·10^−3 | 4.00 | 5.30·10^−3 | 4.00
4 | 800 | 6.60·10^−5 | 4.00 | 3.31·10^−4 | 4.00
6 | 100 | 1.95·10^−3 | — | 9.74·10^−3 | —
6 | 200 | 2.96·10^−5 | 6.03 | 1.48·10^−4 | 6.03
6 | 400 | 4.60·10^−7 | 6.00 | 2.30·10^−6 | 6.00
6 | 800 | 7.19·10^−9 | 6.00 | 3.60·10^−8 | 6.00
8 | 100 | 1.56·10^−5 | — | 7.82·10^−5 | —
8 | 200 | 5.75·10^−8 | 8.08 | 2.88·10^−7 | 8.08
8 | 400 | 2.17·10^−10 | 8.05 | 1.08·10^−9 | 8.05
8 | 800 | 7.62·10^−12 | 4.87 | 3.70·10^−11 | 4.44
Table 4. Nonlinear pendulum problem: Maximum absolute error of the numerical solution and its derivative computed for 10 periods.
Order | N | err_y | Rate | err_y' | Rate
4 | 10 | 1.26·10^−2 | — | 1.28·10^−2 | —
4 | 20 | 9.02·10^−4 | 3.81 | 1.10·10^−3 | 3.53
4 | 40 | 5.73·10^−5 | 3.97 | 6.60·10^−5 | 4.06
4 | 80 | 3.58·10^−6 | 4.00 | 4.52·10^−6 | 3.86
6 | 10 | 2.65·10^−4 | — | 2.82·10^−4 | —
6 | 20 | 1.36·10^−6 | 7.59 | 5.77·10^−6 | 5.61
6 | 40 | 2.07·10^−8 | 6.04 | 1.15·10^−7 | 5.65
6 | 80 | 3.21·10^−10 | 6.01 | 1.81·10^−9 | 5.98
8 | 10 | 2.56·10^−5 | — | 2.61·10^−5 | —
8 | 20 | 1.53·10^−8 | 10.7 | 8.50·10^−8 | 8.26
8 | 40 | 6.14·10^−11 | 7.96 | 4.02·10^−10 | 7.72
8 | 80 | 3.01·10^−13 | 7.67 | 1.56·10^−12 | 8.01
