Review

Line Integral Solution of Hamiltonian PDEs

by Luigi Brugnano 1,*, Gianluca Frasca-Caccia 2 and Felice Iavernaro 3

1 Dipartimento di Matematica e Informatica “U. Dini”, Università di Firenze, Viale Morgagni 67/A, 50134 Firenze, Italy
2 School of Mathematics, Statistics & Actuarial Science, University of Kent, Sibson Building, Parkwood Road, Canterbury CT2 7FS, UK
3 Dipartimento di Matematica, Università di Bari, Via Orabona 4, 70125 Bari, Italy
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(3), 275; https://doi.org/10.3390/math7030275
Submission received: 8 January 2019 / Revised: 8 March 2019 / Accepted: 12 March 2019 / Published: 18 March 2019
(This article belongs to the Special Issue Geometric Numerical Integration)

Abstract:
In this paper, we report on recent findings in the numerical solution of Hamiltonian Partial Differential Equations (PDEs) by using energy-conserving line integral methods in the Hamiltonian Boundary Value Methods (HBVMs) class. In particular, we consider the semilinear wave equation, the nonlinear Schrödinger equation, and the Korteweg–de Vries equation, to illustrate the main features of this novel approach.

1. Introduction

The numerical solution of ordinary differential equations (ODE) problems, though researched for over sixty years, is still a very active field of investigation, following several trends, such as:
(a)
the search for methods suited for specific relevant classes of problems;
(b)
their efficient implementation on a computer;
(c)
the extension of existing methods to cope with wider classes of problems.
Point (a) is particularly interesting, since it is presently well understood that relevant classes of problems do possess specific geometric properties in their solutions and, often, one is interested in reproducing such properties in the discrete solution obtained by a numerical method. As a matter of fact, the term Geometric Integration has been coined to denote the study of numerical methods able to preserve such properties. These latter methods, in turn, are named geometric integrators. As an example, when dealing with dissipative problems, A-stable methods are geometric integrators, since they retain the asymptotic stability of equilibria. Nevertheless, when stability results by first approximation do not apply, things become much more involved. This is the case, for example, of Hamiltonian problems, i.e., problems in the form
$$\dot y = J\nabla H(y) =: f(y), \qquad y(0) = y_0\in\mathbb{R}^{2m}, \tag{1}$$
with $J^\top = -J$ and $H$ a scalar function (which we shall hereafter assume to be suitably regular), called the Hamiltonian or energy. Due to the skew-symmetry of $J$, this latter function turns out to be conserved along the solution of (1). In fact, one has:
$$\frac{d}{dt}H(y) = \nabla H(y)^\top\dot y = \nabla H(y)^\top J\nabla H(y) = 0.$$
Hamiltonian problems are very important in the applications and, for this reason, their numerical simulation has been the subject of much research: we refer the reader, e.g., to the monographs [1,2,3,4,5] and references therein. In particular, numerical methods able to conserve H are geometric integrators, referred to as energy-conserving methods.
Point (b) is also paramount: in fact, no numerical method can be really useful, if it cannot be efficiently implemented on a computer. Therefore, a particular care has to be devoted to devise robust implementation techniques, in order to make the studied methods suitable for solving a wide class of problems. In particular, the availability of efficient Newton-type procedures for solving the discrete problems generated by the methods turns out to be central, when numerically solving the Hamiltonian problems described at the next point.
At last, point (c) is one of the main focuses of the present paper. In fact, according to [5] (p. 157), one effective way of solving Hamiltonian PDEs is to discretize, at first, the space variable(s). In so doing, under appropriate space discretizations, one obtains a large-size Hamiltonian problem, which can be then solved by using a suitable geometric integrator. In particular, for the sake of simplicity and brevity, in this paper we shall deal with initial-boundary value problems in one space dimension, equipped with periodic boundary conditions, even though the arguments can be extended to cope with higher space dimensions, as is sketched in Section 3.3. As was anticipated above, the numerical solution of the Hamiltonian problems arising from the space discretization of Hamiltonian PDEs will require the use of effective Newton-type procedures, in order to avoid severe step-size limitations.
With these premises, the present paper is devoted to reporting on recent findings in the numerical solution of Hamiltonian PDEs by using Hamiltonian Boundary Value Methods (HBVMs), a class of energy-conserving Runge-Kutta methods for Hamiltonian problems. The novelty in their use stems from the fact that they provide effective and arbitrarily high-order energy-conserving methods for the time integration of the Hamiltonian semi-discrete problems obtained from Hamiltonian PDEs. In fact, low order methods have been mainly considered for this purpose, so far (see, e.g., [6,7,8,9,10,11]). Further approaches can be found in [12,13,14,15,16,17,18,19,20]. In more details, the structure of this paper is as follows:
  • in Section 2 we recall the main facts about HBVMs, also sketching their efficient blended implementation;
  • in Section 3 we describe the space discretization of the semilinear wave equation, and the efficient solution of the resulting Hamiltonian ODE problem via HBVMs. For this equation we shall provide full details, whereas the whole procedure will be only sketched for the subsequent equations;
  • in Section 4 we see that the same approach can be used for the nonlinear Schrödinger equation;
  • in Section 5 we consider, instead, the Korteweg–de Vries equation;
  • Section 6 contains some numerical tests, aimed at showing the effectiveness of the proposed approach;
  • at last, a few conclusions are made in Section 7.

2. Hamiltonian Boundary Value Methods (HBVMs)

HBVMs are energy-conserving methods derived within the framework of (discrete) line integral methods, initially proposed in [21,22,23,24,25], and later refined in [26,27,28,29,30,31]. The approach has also been extended along several directions [28,32,33,34,35,36,37,38], including Hamiltonian BVPs [39], constrained Hamiltonian problems [40], highly-oscillatory problems [41,42,43], and Hamiltonian PDEs [41,44,45,46,47,48]. We also refer to the review paper [49] and to the monograph [2].
The basic idea line integral methods rely on is that the conservation of an invariant can be recast as the vanishing of a corresponding line-integral. In the case of the Hamiltonian H for (1), one has:
$$H(y(t)) - H(y_0) = \int_0^t \nabla H(y(\tau))^\top\dot y(\tau)\,d\tau = \int_0^t \nabla H(y(\tau))^\top J\nabla H(y(\tau))\,d\tau = 0,$$
due to the fact that the integrand is identically zero. Consequently, $H(y(t)) = H(y_0)$, for all $t\ge0$. Nevertheless, when dealing with a discrete time dynamics, ruled by a time-step $h>0$, one can consider a path $\sigma:[0,h]\to\mathbb{R}^{2m}$ such that
$$\sigma(0) = y_0, \qquad \sigma(h) =: y_1 \approx y(h), \tag{2}$$
and
$$H(y_1)-H(y_0) = H(\sigma(h))-H(\sigma(0)) = \int_0^h\nabla H(\sigma(t))^\top\dot\sigma(t)\,dt = h\int_0^1\nabla H(\sigma(ch))^\top\dot\sigma(ch)\,dc = 0, \tag{3}$$
but without requiring the integrand to be identically zero. In such a case, there are infinitely many paths satisfying (2) and (3), each providing a corresponding line integral method. In particular, we here consider a polynomial path, which we expand along the orthonormal Legendre basis:
$$P_i\in\Pi_i, \qquad \int_0^1 P_i(c)P_j(c)\,dc = \delta_{ij}, \qquad i,j = 0,1,\dots, \tag{4}$$
where, as is usual, $\Pi_i$ is the set of polynomials of degree $i$ and $\delta_{ij}$ is the Kronecker symbol. In order to obtain a path $\sigma\in\Pi_s$ satisfying (2) and (3), let us then consider the expansion
$$\dot\sigma(ch) = \sum_{j=0}^{s-1}P_j(c)\,\gamma_j(\sigma), \qquad c\in[0,1], \tag{5}$$
in terms of the $s$ unknown vector coefficients $\{\gamma_j(\sigma)\}$. In order to fulfill (2), integrating both sides of (5) and taking into account that (see (4)) $\int_0^1P_j(c)\,dc = \delta_{j0}$, one obtains:
$$\sigma(ch) = y_0 + h\sum_{j=0}^{s-1}\int_0^cP_j(\tau)\,d\tau\;\gamma_j(\sigma), \quad c\in[0,1], \qquad y_1\equiv\sigma(h) = y_0 + h\,\gamma_0(\sigma). \tag{6}$$
Taking into account (5), condition (3) becomes
$$\int_0^1\nabla H(\sigma(ch))^\top\dot\sigma(ch)\,dc = \int_0^1\nabla H(\sigma(ch))^\top\sum_{j=0}^{s-1}P_j(c)\,\gamma_j(\sigma)\,dc = \sum_{j=0}^{s-1}\left[\int_0^1P_j(c)\,\nabla H(\sigma(ch))\,dc\right]^\top\gamma_j(\sigma) = 0,$$
which is satisfied by choosing (see (1)):
$$\gamma_j(\sigma) = J\int_0^1P_j(c)\,\nabla H(\sigma(ch))\,dc \equiv \int_0^1P_j(c)\,f(\sigma(ch))\,dc, \tag{7}$$
because of the skew-symmetry of matrix J. Therefore, this specific energy-conserving line integral method is defined by the polynomial path σ , whose coefficients satisfy the following set of s nonlinear vector equations, derived from (6) and (7):
$$\gamma_j(\sigma) = \int_0^1P_j(c)\,f\!\left(y_0 + h\sum_{i=0}^{s-1}\int_0^cP_i(\tau)\,d\tau\;\gamma_i(\sigma)\right)dc, \qquad j=0,\dots,s-1. \tag{8}$$
Moreover, it can be proved that $\sigma(h)-y(h) = O(h^{2s+1})$, i.e., the approximation procedure has order $2s$ [30] (Theorem 1) (see also [49]). However, this procedure does not yet provide a numerical method since, quoting, e.g., Dahlquist and Björck [50] (p. 521), “as is well known, even many relatively simple integrals cannot be expressed in finite terms of elementary functions, and thus must be evaluated by numerical methods.” In particular, since we are dealing with a polynomial approximation, we consider the Gaussian interpolatory quadrature rule, based at the zeros $0<c_1<\dots<c_k<1$ of $P_k$, whose weights we denote, respectively, by $b_1,\dots,b_k$, which is well known to have order $2k$ (i.e., it is exact for polynomial integrands of degree up to $2k-1$). Consequently, with reference to (7), we obtain the approximation
$$\gamma_j(\sigma) \approx \sum_{\ell=1}^kb_\ell P_j(c_\ell)\,f(\sigma(c_\ell h)) =: \hat\gamma_j, \qquad j=0,\dots,s-1, \tag{9}$$
where, for the sake of brevity, we continue to denote σ the polynomial approximation. The new discrete problem is then given by
$$\hat\gamma_j = \sum_{\ell=1}^kb_\ell P_j(c_\ell)\,f\!\left(y_0 + h\sum_{i=0}^{s-1}\int_0^{c_\ell}P_i(\tau)\,d\tau\;\hat\gamma_i\right), \qquad j=0,\dots,s-1, \tag{10}$$
which, remarkably, has (block) dimension $s$, like (8), independently of $k$.
Definition 1.
The discrete problem (10) defines a HBVM$(k,s)$ method. The limit as $k\to\infty$, given by (8), defines a HBVM$(\infty,s)$ formula.
It is possible to prove the following result [2,49].
Theorem 1.
For all $k\ge s$, by using the $k$ Gauss–Legendre abscissae, a HBVM$(k,s)$ method is symmetric and of order $2s$. Moreover, it reduces to the $s$-stage Gauss collocation method, when $k=s$. Concerning energy-conservation when applied for solving (1), one has:
$$H(y_1)-H(y_0) = \begin{cases}0, & \text{if } H\in\Pi_\nu \text{ with } \nu\le 2k/s,\\ O\!\left(h^{2k+1}\right), & \text{otherwise.}\end{cases} \tag{11}$$
It is worth mentioning that, because of (11), by choosing k large enough one can either obtain:
  • an exact conservation of energy, when H is a polynomial;
  • a practical conservation of energy, otherwise. In fact, in such a case, it is enough that the energy error falls within the round-off error level.

2.1. Runge-Kutta form of HBVM ( k , s )

It is possible to see that, actually, a HBVM$(k,s)$ method is a $k$-stage Runge-Kutta method. In fact, by setting in (9) $Y_\ell := \sigma(c_\ell h)$, $\ell=1,\dots,k$, one obtains:
$$Y_i \equiv \sigma(c_ih) = y_0 + h\sum_{j=0}^{s-1}\int_0^{c_i}P_j(\tau)\,d\tau\;\hat\gamma_j = y_0 + h\sum_{j=0}^{s-1}\int_0^{c_i}P_j(\tau)\,d\tau\sum_{\ell=1}^kb_\ell P_j(c_\ell)f(Y_\ell) = y_0 + h\sum_{j=1}^kb_j\left[\sum_{\ell=0}^{s-1}\int_0^{c_i}P_\ell(\tau)\,d\tau\,P_\ell(c_j)\right]f(Y_j), \qquad i=1,\dots,k, \tag{12}$$
with the new approximation given by
$$y_1 = y_0 + h\,\hat\gamma_0 \equiv y_0 + h\sum_{i=1}^kb_i\,f(Y_i). \tag{13}$$
It can be readily seen that (12) and (13) define the k-stage Runge-Kutta method with Butcher tableau
$$\begin{array}{c|c} c & \mathcal I_s\,\mathcal P_s^\top\,\Omega\\ \hline & b^\top\end{array} \tag{14}$$
with
$$c = \begin{pmatrix}c_1\\\vdots\\c_k\end{pmatrix}, \qquad b = \begin{pmatrix}b_1\\\vdots\\b_k\end{pmatrix}, \qquad \Omega = \begin{pmatrix}b_1&&\\&\ddots&\\&&b_k\end{pmatrix}, \tag{15}$$
and
$$\mathcal P_s = \begin{pmatrix}P_0(c_1)&\cdots&P_{s-1}(c_1)\\\vdots&&\vdots\\P_0(c_k)&\cdots&P_{s-1}(c_k)\end{pmatrix}, \qquad \mathcal I_s = \begin{pmatrix}\int_0^{c_1}P_0(x)\,dx&\cdots&\int_0^{c_1}P_{s-1}(x)\,dx\\\vdots&&\vdots\\\int_0^{c_k}P_0(x)\,dx&\cdots&\int_0^{c_k}P_{s-1}(x)\,dx\end{pmatrix}\in\mathbb R^{k\times s}. \tag{16}$$
For this Runge-Kutta method, the stage Equation (12) has (block) dimension k and is given by
$$Y = e\otimes y_0 + h\left(\mathcal I_s\mathcal P_s^\top\Omega\otimes I_{2m}\right)f(Y), \qquad Y = \begin{pmatrix}Y_1\\\vdots\\Y_k\end{pmatrix}, \quad f(Y) = \begin{pmatrix}f(Y_1)\\\vdots\\f(Y_k)\end{pmatrix}, \quad e = \begin{pmatrix}1\\\vdots\\1\end{pmatrix}\in\mathbb R^k,$$
having set, in general, I r R r × r the identity matrix. Nonetheless, the equivalent discrete problem (10), whose dimension is s independently of k, turns out to be given by:
$$F(\hat\gamma) := \hat\gamma - \left(\mathcal P_s^\top\Omega\otimes I_{2m}\right)f\!\left(e\otimes y_0 + h\left(\mathcal I_s\otimes I_{2m}\right)\hat\gamma\right) = 0, \tag{17}$$
where
$$\hat\gamma = \begin{pmatrix}\hat\gamma_0\\\vdots\\\hat\gamma_{s-1}\end{pmatrix}, \qquad \hat\gamma_i\in\mathbb R^{2m}, \quad i=0,\dots,s-1. \tag{18}$$
Once (17) is solved, according to (13) the new approximation is given by y 1 = y 0 + h γ ^ 0 .
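As a small illustration (added here, not part of the original presentation), the following Python sketch assembles the tableau (14)–(16) from the $k$ Gauss–Legendre abscissae and the shifted orthonormal Legendre basis (4); for $k=s$ it reproduces the $s$-stage Gauss collocation tableau, consistently with Theorem 1. Function and variable names are ours.

import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

def hbvm_tableau(k, s):
    # Butcher tableau (c, b, A) of HBVM(k,s) as in (14)-(16).
    x, w = leggauss(k)                      # Gauss nodes/weights on [-1,1]
    c, b = (x + 1) / 2, w / 2               # mapped to [0,1]
    P = np.zeros((k, s))                    # P[i,j] = P_j(c_i), see (16)
    I = np.zeros((k, s))                    # I[i,j] = int_0^{c_i} P_j(x) dx
    for j in range(s):
        Pj = np.sqrt(2 * j + 1) * Legendre.basis(j, domain=[0, 1])  # orthonormal on [0,1]
        P[:, j] = Pj(c)
        ip = Pj.integ()                     # an antiderivative of P_j
        I[:, j] = ip(c) - ip(0)
    A = I @ P.T @ np.diag(b)                # A = I_s P_s^T Omega, see (14)
    return c, b, A

For instance, hbvm_tableau(2, 2) yields the 2-stage Gauss method, while hbvm_tableau(4, 2) yields the fourth-order HBVM(4,2) method.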

2.2. Special Second-Order Problems

Sometimes, the problem (1) assumes the form of a special second-order problem,
$$\ddot q = \nabla U(q), \qquad q(0) = q_0, \quad \dot q(0) = p_0\in\mathbb R^m, \tag{19}$$
for which, setting $p=\dot q$, one has $y = \left(\begin{smallmatrix}q\\p\end{smallmatrix}\right)$ and $H(y)\equiv H(q,p) = \frac12p^\top p - U(q)$. In such a case, the dimension of the blocks of the discrete problem can be halved. In fact, by using (14) for solving (19), one sees that the stage equations for $q$ and $p$ are respectively given by:
$$Q = e\otimes q_0 + h\left(\mathcal I_s\mathcal P_s^\top\Omega\otimes I_m\right)P, \qquad P = e\otimes p_0 + h\left(\mathcal I_s\mathcal P_s^\top\Omega\otimes I_m\right)\nabla U(Q), \tag{20}$$
having set
$$Q = \begin{pmatrix}Q_1\\\vdots\\Q_k\end{pmatrix}, \qquad P = \begin{pmatrix}P_1\\\vdots\\P_k\end{pmatrix}, \qquad \nabla U(Q) = \begin{pmatrix}\nabla U(Q_1)\\\vdots\\\nabla U(Q_k)\end{pmatrix}.$$
Plugging the second equation in (20) into the first one, considering that I s P s Ω e = c and, moreover,
$$\mathcal P_s^\top\Omega\,\mathcal I_s = X_s \equiv \begin{pmatrix}\xi_0 & -\xi_1 & & \\ \xi_1 & 0 & \ddots & \\ & \ddots & \ddots & -\xi_{s-1}\\ & & \xi_{s-1} & 0\end{pmatrix}, \qquad \xi_i = \left(2\sqrt{|4i^2-1|}\right)^{-1}, \quad i=0,\dots,s-1, \tag{21}$$
one then obtains:
$$Q = e\otimes q_0 + h\,c\otimes p_0 + h^2\left(\mathcal I_sX_s\mathcal P_s^\top\Omega\otimes I_m\right)\nabla U(Q). \tag{22}$$
Setting (compare with (18))
$$\bar\gamma \equiv \begin{pmatrix}\bar\gamma_0\\\vdots\\\bar\gamma_{s-1}\end{pmatrix} = \left(\mathcal P_s^\top\Omega\otimes I_m\right)\nabla U(Q), \qquad \bar\gamma_i\in\mathbb R^m, \quad i=0,\dots,s-1, \tag{23}$$
and taking into account (22), one then obtains the new discrete problem (compare with (17)):
$$G(\bar\gamma) := \bar\gamma - \left(\mathcal P_s^\top\Omega\otimes I_m\right)\nabla U\!\left(e\otimes q_0 + h\,c\otimes p_0 + h^2\left(\mathcal I_sX_s\otimes I_m\right)\bar\gamma\right) = 0. \tag{24}$$
Once it has been solved, it can be seen that the new approximations are given by (see, e.g., [2] (Chapter 4)):
$$q_1 = q_0 + h\,p_0 + h^2\left(\xi_0\,\bar\gamma_0 - \xi_1\,\bar\gamma_1\right), \qquad p_1 = p_0 + h\,\bar\gamma_0,$$
where ξ 0 and ξ 1 are the nonzero entries on the first row of matrix X s defined in (21).
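As a worked instance of (21) (added here for illustration), for $s=2$ one has $\xi_0 = \frac12$ and $\xi_1 = \frac1{2\sqrt3}$, so that
$$X_2 = \begin{pmatrix}\frac12 & -\frac1{2\sqrt3}\\[2pt] \frac1{2\sqrt3} & 0\end{pmatrix},$$
and the above update reads $q_1 = q_0 + hp_0 + h^2\left(\frac12\bar\gamma_0 - \frac1{2\sqrt3}\bar\gamma_1\right)$, $p_1 = p_0 + h\bar\gamma_0$.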

2.3. Blended Iteration

The efficient solution of the discrete problem (17) has been studied in a series of papers [2,51,52,53]. We here recall the main facts about the so called blended implementation of HBVMs, which represents a Newton-type iteration for solving (17). This approach, at first sketched in [54], has then been analyzed in [55] and developed in [56,57,58]. It has been then implemented in the Fortran codes BiM [59] and BiMD [60], for the numerical solution of stiff ODE-IVPs and linearly implicit DAEs: both codes can be retrieved at [61]; the latter code is also available at the Test Set for IVP Solvers [62]. The blended implementation of HBVMs has then been considered in [53] and implemented in the Matlab function hbvm available at the url [63]. We also mention that, more recently, this approach has been also considered for RKN methods [64].
Let us then consider the simplified Newton iteration for solving (17) which, by taking into account (21) amounts to solving the following set of linear systems:
$$\left(I_s\otimes I_{2m} - h\,X_s\otimes f'(y_0)\right)\Delta\hat\gamma^\ell = -F(\hat\gamma^\ell), \qquad \ell=0,1,\dots, \tag{25}$$
with f ( y 0 ) the Jacobian of f evaluated at y 0 . This iteration, though straightforward and very effective, requires, however, the factorization of a 2 m s × 2 m s matrix, which can be cumbersome, when s and/or m are large. To get rid of this problem, by considering that matrix X s is nonsingular one at first considers the following equivalent formulation of (25), having set ρ s a positive, and for the moment unspecified, parameter:
$$\left(\rho_s\,X_s^{-1}\otimes I_{2m} - h\rho_s\,I_s\otimes f'(y_0)\right)\Delta\hat\gamma^\ell = -\rho_s\left(X_s^{-1}\otimes I_{2m}\right)F(\hat\gamma^\ell), \qquad \ell=0,1,\dots. \tag{26}$$
The next step is to consider the blending of the two equivalent formulations (25) and (26) with weights $\theta_s$ and $I_s\otimes I_{2m}-\theta_s$, respectively, where:
$$\theta_s = I_s\otimes\Sigma^{-1}, \qquad \Sigma = I_{2m} - h\rho_s\,f'(y_0). \tag{27}$$
In so doing, one obtains a new linear system, whose coefficient matrix has the inverse which can be approximated by θ s . Skipping the details (for which we refer to [53], see also [2,49]), one then obtains the following blended iteration for solving (17):
$$\eta^\ell = -F(\hat\gamma^\ell), \qquad \eta_1^\ell = \rho_s\left(X_s^{-1}\otimes I_{2m}\right)\eta^\ell, \qquad \Delta\hat\gamma^\ell = \theta_s\left[\eta_1^\ell + \theta_s\left(\eta^\ell - \eta_1^\ell\right)\right], \qquad \ell=0,1,\dots, \tag{28}$$
which only requires to factor the matrix Σ in (27), having the same size as that of the continuous problem. Concerning the choice of the parameter ρ s , as is shown in [55], the optimal choice, based on a linear convergence analysis, turns out to be:
$$\rho_s = \min_{\lambda\in\sigma(X_s)}|\lambda|, \tag{29}$$
where, as is usual, σ ( X s ) is the spectrum of X s .
In the case of the special second-order problem (19), the simplified Newton iteration for solving (24) becomes:
$$\left(I_s\otimes I_m - h^2X_s^2\otimes\nabla^2U(q_0)\right)\Delta\bar\gamma^\ell = -G(\bar\gamma^\ell), \qquad \ell=0,1,\dots, \tag{30}$$
with $\nabla^2U(q_0)$ the Hessian of $U$ evaluated at $q_0$. Consequently, similar steps as above can be repeated, via the following formal substitutions:
$$F\to G, \qquad \hat\gamma\to\bar\gamma, \qquad f'(y_0)\to\nabla^2U(q_0), \qquad I_{2m}\to I_m, \qquad h\to h^2, \qquad X_s\to X_s^2, \qquad \rho_s\to\rho_s^2.$$
As a result, the blended iteration for solving (24) is given by:
$$\eta^\ell = -G(\bar\gamma^\ell), \qquad \eta_1^\ell = \rho_s^2\left(X_s^{-2}\otimes I_m\right)\eta^\ell, \qquad \Delta\bar\gamma^\ell = \theta_s\left[\eta_1^\ell + \theta_s\left(\eta^\ell - \eta_1^\ell\right)\right], \qquad \ell=0,1,\dots, \tag{31}$$
with the parameter ρ s still given by (29) and
$$\theta_s = I_s\otimes\Sigma^{-1}, \qquad \Sigma = I_m - h^2\rho_s^2\,\nabla^2U(q_0). \tag{32}$$
Consequently, also in such a case, one has only to factor a matrix having the same size as that of the continuous problem.

2.4. Blended Iteration for Semilinear Problems

Once more, we stress that the availability of a Newton-type iteration for solving (17) is paramount, in order to avoid severe step-size limitations, when such a problem is derived from the space discretization of Hamiltonian PDEs. In fact, in such a case, the resulting ODE problem turns out to be in the form
$$\dot y = A\,y + g(y), \qquad y(0) = y_0\in\mathbb R^{2m}, \tag{33}$$
with the dimension and the norm of matrix A tending to infinity, as the space discretization is made more and more accurate, whereas g remains bounded, if the solution is bounded. Consequently, one can consider a constant approximation of the Jacobian of the right-hand side of (33), given by the matrix A of the linear term. As a result, the matrix Σ defined in (27) becomes
$$\Sigma = I_{2m} - h\rho_s\,A, \tag{34}$$
which is constant for all time-steps and, consequently, it needs to be factored only once.
Similarly, when problem (19) is in the form
$$\ddot q = -A_2\,q + g(q), \qquad q(0)=q_0, \quad \dot q(0)=p_0\in\mathbb R^m, \tag{35}$$
with $A_2$ symmetric and positive semi-definite, and the norm of $A_2$ much larger than that of the Jacobian of $g$, one can approximate the matrix $\Sigma$ in (32) as
$$\Sigma = I_m + h^2\rho_s^2\,A_2, \tag{36}$$
which, also in this case, is constant for all time-steps and needs to be factored only once.
We end this subsection by stressing that, for the problems that we shall consider in the sequel, matrix A in (34), or matrix A 2 in (36), has a block structure with diagonal blocks. As a result, the corresponding blended iterations (28) and (31) are computationally inexpensive. Moreover, the linear algebra can be made still more efficient, as is done in the Matlab function hbvm available at [63], by considering a matrix formulation of the iteration [2,65].
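To fix ideas, here is a minimal Python sketch (ours; names and tolerances are illustrative assumptions) of one HBVM(k,s) step for the semilinear problem (33), solving (17) by the blended iteration (28) with the constant matrix Σ of (34). In an actual implementation Σ would be factored once and reused for all time-steps, and the linear algebra would exploit the (block) diagonal structure mentioned above.

import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

def hbvm_step_semilinear(A, g, y0, h, k, s, tol=1e-14, maxit=50):
    # One HBVM(k,s) step for y' = A y + g(y), see (33), via the blended
    # iteration (28) with the constant approximation Sigma = I - h*rho_s*A of (34).
    n = y0.size
    x, w = leggauss(k)
    c, b = (x + 1) / 2, w / 2
    P = np.zeros((k, s)); I = np.zeros((k, s))
    for j in range(s):
        Pj = np.sqrt(2 * j + 1) * Legendre.basis(j, domain=[0, 1])
        P[:, j] = Pj(c)
        ip = Pj.integ(); I[:, j] = ip(c) - ip(0)
    Om = np.diag(b)
    X = P.T @ Om @ I                              # X_s of (21)
    rho = float(np.min(np.abs(np.linalg.eigvals(X))))   # rho_s of (29)
    Xinv = np.linalg.inv(X)
    Sigma = np.eye(n) - h * rho * A               # (34): constant for all steps

    def f(Y):                                     # stages stored row-wise, Y is (k, n)
        return Y @ A.T + np.apply_along_axis(g, 1, Y)

    def F(gam):                                   # residual (17), gam is (s, n)
        return gam - P.T @ Om @ f(y0 + h * (I @ gam))

    def theta(V):                                 # action of theta_s = I_s (x) Sigma^{-1}
        return np.linalg.solve(Sigma, V.T).T

    gam = np.zeros((s, n))
    for _ in range(maxit):
        eta = -F(gam)                             # blended iteration (28)
        eta1 = rho * (Xinv @ eta)
        dgam = theta(eta1 + theta(eta - eta1))
        gam += dgam
        if np.linalg.norm(dgam) <= tol * (1 + np.linalg.norm(gam)):
            break
    return y0 + h * gam[0]                        # y1 = y0 + h*hat{gamma}_0, see (13)

Here g maps a single state vector to a vector; for the semi-discrete problems of the next sections, A has a (block) diagonal structure, so that Σ is trivially inverted.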

2.5. HBVMs as Spectral Methods in Time

To conclude this quick introduction to HBVMs, we mention their use as spectral methods in time, which has been the subject of recent investigations [41,42,43]. We mention that the use of Runge-Kutta methods as spectral methods in time has been considered previously in [66,67,68,69] (see also [30]). In more details, if we consider the expansion of the right-hand side of (1), on the interval [ 0 , h ] , along the Legendre basis (4), one has:
$$\dot y(ch) = f(y(ch)) = \sum_{j\ge0}P_j(c)\,\gamma_j(y), \qquad c\in[0,1], \tag{37}$$
where γ j ( y ) is defined according to (7), by formally replacing σ by y. On the other hand, the polynomial approximation σ defined in (5) is obtained by truncating the previous series after s terms. However, by considering that
$$\int_0^1\big\|f(y(ch))\big\|_2^2\,dc = \sum_{j\ge0}\big\|\gamma_j(y)\big\|_2^2,$$
one has that
$$\big\|\gamma_j(y)\big\|_2\to0, \qquad j\to\infty,$$
the more regular f ( y ) , the faster the convergence to 0 of γ j ( y ) 2 , as j . Consequently, when using a finite precision arithmetic with machine epsilon ε , if one truncates the expansion (37) when the Fourier coefficient γ s ( y ) is negligible, w.r.t. the previous ones, then one obtains that (37) and (5) become indistinguishable, in the used finite precision arithmetic. A straightforward criterion for this to happen, considered in [41,42], is to require that
$$\big\|\gamma_s(y)\big\|_2 < tol\cdot\max_{j=0,\dots,s-1}\big\|\gamma_j(y)\big\|_2, \tag{38}$$
with $tol\approx\varepsilon$. Moreover, the analysis in [43] shows that one could even use $tol\gg\varepsilon$ in (38), still obtaining full machine accuracy at $t=h$. At last (see (9)), by choosing $k$ large enough, one may obtain full machine accuracy in the approximation of $\gamma_j(\sigma)$ by means of $\hat\gamma_j$, $j=0,\dots,s-1$. As a result, the use of HBVMs as spectral methods in time (which we shall denote by SHBVMs, as an abbreviation for spectral HBVMs) usually requires the use of relatively large values of s and k. This, in turn, is not a big issue; in fact:
  • on one hand, we have the availability of the blended iteration (28) (or (31)), whose computational cost is mildly affected by such parameters, also considering the approximation (34) (or (36));
  • on the other hand, SHBVMs will allow the use of relatively large time-steps.
Summing all up, overall SHBVMs will result in being extremely effective and competitive, as is testified by the numerical tests reported in Section 6 (see also [41,42,43]).
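In a code, the check (38) can be carried out directly on the computed coefficients; a minimal sketch (ours, with hypothetical names) is:

import numpy as np

def spectral_degree(gam, tol):
    # Criterion (38): gam is the (s, n) array of coefficients computed with a
    # tentative (large) value of s; the returned index is the first j whose
    # coefficient is negligible w.r.t. the previous ones, i.e. the polynomial
    # degree actually needed on the current step.
    norms = np.linalg.norm(gam, axis=1)
    for j in range(1, len(norms)):
        if norms[j] < tol * norms[:j].max():
            return j
    return len(norms)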

3. The Semilinear Wave Equation

The first Hamiltonian PDE that we consider is the semilinear wave equation:
$$u_{tt}(x,t) = u_{xx}(x,t) - f'(u(x,t)), \quad (x,t)\in[a,b]\times[0,T], \qquad u(x,0) = u_0(x), \quad u_t(x,0) = v_0(x), \quad x\in[a,b], \tag{39}$$
with $f'$ the derivative of $f$. The problem (39) is completed by prescribing periodic boundary conditions. Hereafter, we shall assume the solution to be suitably regular, as a periodic function in space. Moreover, for the sake of brevity, we shall omit the arguments of the involved functions, when not necessary. By setting $v=u_t$, one obtains that (39) is a Hamiltonian PDE, with Hamiltonian functional
$$\mathcal H[u,v](t) = \frac12\int_a^b\left[v^2(x,t) + u_x^2(x,t) + 2f(u(x,t))\right]dx =: \int_a^bL(x,t,u,u_x,v)\,dx, \tag{40}$$
so that, by setting
$$\nabla\mathcal H = \begin{pmatrix}\delta_u\mathcal H\\ \delta_v\mathcal H\end{pmatrix},$$
the vector of the functional derivatives of H , with
$$\delta_u\mathcal H = \left(\partial_u - \partial_x\,\partial_{u_x}\right)L = f'(u) - u_{xx}, \qquad \delta_v\mathcal H = \partial_vL = v,$$
one has:
$$\begin{pmatrix}u_t\\v_t\end{pmatrix} = J_2\,\nabla\mathcal H, \qquad J_2 := \begin{pmatrix}0&1\\-1&0\end{pmatrix}, \tag{41}$$
which is formally in the form (1). As in the ODE case, also now one has the conservation of the Hamiltonian.
Theorem 2.
Assuming that the solution of (39) is suitably smooth in space, the Hamiltonian (40) is conserved, when periodic boundary conditions are prescribed.
Proof. 
In fact, from (39) and (40), and taking into account that v = u t , one has:
$$\dot{\mathcal H}[u,v] = \int_a^b\partial_tL\,dx = \int_a^b\left[v\,v_t + u_x\,u_{xt} + f'(u)\,u_t\right]dx = \int_a^b\left[u_xv_x + v\left(v_t+f'(u)\right)\right]dx = \int_a^b\left[u_xv_x + v\,u_{xx}\right]dx = u_xv\,\Big|_{x=a}^{x=b} = 0,$$
because of the periodic boundary conditions. □
In order to numerically solve (39), according to what sketched in the introduction, we at first discretize the space variable, with the aim of obtaining a corresponding Hamiltonian ODE problem. For this purpose, we consider the following orthonormal basis on the interval [ a , b ] , which takes into account of the periodic boundary conditions [2,44,45,46,47,49]:
$$c_0(x) \equiv (b-a)^{-\frac12}, \tag{42}$$
$$c_j(x) = \sqrt{\frac2{b-a}}\,\cos\!\left(2\pi j\,\frac{x-a}{b-a}\right), \qquad x\in[a,b], \tag{43}$$
$$s_j(x) = \sqrt{\frac2{b-a}}\,\sin\!\left(2\pi j\,\frac{x-a}{b-a}\right), \qquad j=1,2,\dots. \tag{44}$$
In fact, for all allowed i , j , one has:
$$\int_a^bc_i(x)c_j(x)\,dx = \delta_{ij} = \int_a^bs_i(x)s_j(x)\,dx, \qquad \int_a^bc_i(x)s_j(x)\,dx = 0. \tag{45}$$
Consequently, for suitable time dependent coefficients α 0 ( t ) , α 1 ( t ) , β 1 ( t ) , , one has:
$$u(x,t) = c_0(x)\,\alpha_0(t) + \sum_{j\ge1}\left[c_j(x)\,\alpha_j(t) + s_j(x)\,\beta_j(t)\right]. \tag{46}$$
The infinite expansion (46) can be cast in vector form, by defining the infinite-dimensional vectors
$$\omega(x) = \big(c_0(x),\,s_1(x),\,c_1(x),\,\dots\big)^\top, \qquad q(t) = \big(\alpha_0(t),\,\beta_1(t),\,\alpha_1(t),\,\dots\big)^\top, \tag{47}$$
as
$$u(x,t) = \omega(x)^\top q(t). \tag{48}$$
By also introducing the infinite matrix
$$D = \frac{2\pi}{b-a}\begin{pmatrix}0&&&\\&1\cdot I_2&&\\&&2\cdot I_2&\\&&&\ddots\end{pmatrix}, \tag{49}$$
and considering that (45) can be written in matrix form as
$$\int_a^b\omega(x)\,\omega(x)^\top dx = I, \tag{50}$$
the identity operator, we then prove the following result.
Theorem 3.
With reference to (42)–(50), problem (39) can be rewritten as the special second-order problem
$$\ddot q = -D^2q - \int_a^b\omega(x)\,f'\!\left(\omega(x)^\top q\right)dx, \quad t\in[0,T], \qquad q(0) = \int_a^b\omega(x)\,u_0(x)\,dx =: q_0, \quad \dot q(0) = \int_a^b\omega(x)\,v_0(x)\,dx =: p_0. \tag{51}$$
By setting p = q ˙ , this latter problem is Hamiltonian, with Hamiltonian function
$$H(q,p) = \frac12\left[p^\top p + q^\top D^2q + 2\int_a^bf\!\left(\omega(x)^\top q\right)dx\right], \tag{52}$$
which turns out to be equivalent to the Hamiltonian functional (40).
Proof. 
Problem (51) is clearly Hamiltonian, w.r.t. the Hamiltonian (52), since
$$\dot q = \nabla_pH(q,p), \qquad \dot p = -\nabla_qH(q,p).$$
Let us then show that:
  • (51) is equivalent to (39);
  • (52) is equivalent to (40).
Concerning the first point, we observe that
$$u_{tt}(x,t) = \omega(x)^\top\ddot q(t), \qquad u_{xx}(x,t) = \omega''(x)^\top q(t) \equiv -\omega(x)^\top D^2q(t),$$
with an obvious meaning of $\omega''(x)$, so that (39) can be rewritten as
$$\omega(x)^\top\ddot q(t) = -\omega(x)^\top D^2q(t) - f'\!\left(\omega(x)^\top q(t)\right), \qquad (x,t)\in[a,b]\times[0,T].$$
Multiplying both sides by ω ( x ) , then integrating in space from a to b, and taking into account (50), give us (51).
Concerning the second point, the statement easily follows by considering that v ( x , t ) = u t ( x , t ) = ω ( x ) p , and
$$p^\top p = \int_a^bp^\top\omega(x)\,\omega(x)^\top p\,dx = \int_a^b\dot q^\top\omega(x)\,\omega(x)^\top\dot q\,dx = \int_a^bv^2\,dx.$$
Moreover, by defining the matrix
$$\bar D = \frac{2\pi}{b-a}\begin{pmatrix}0&&&\\&1\cdot J_2&&\\&&2\cdot J_2&\\&&&\ddots\end{pmatrix} = -\bar D^\top, \tag{53}$$
where matrix J 2 is that defined in (41), one has:
$$u_x(x,t) = \omega'(x)^\top q(t) \equiv \left(\bar D\,\omega(x)\right)^\top q(t), \qquad \bar D\,\bar D^\top = D^2,$$
so that, by taking again into account (50), one obtains:
$$q^\top D^2q = q^\top\bar D\,\bar D^\top q = \int_a^bq^\top\bar D\,\omega(x)\,\omega(x)^\top\bar D^\top q\,dx = \int_a^bu_x^2\,dx.$$
The proof is completed by considering that, from (48), f ( ω ( x ) q ( t ) ) = f ( u ( x , t ) ) . □

3.1. Discretization

In order to solve problem (51) on a computer, the infinite expansion (46) must be truncated at a convenient index N. In so doing, (47) and (49) respectively become
$$\omega(x) = \begin{pmatrix}c_0(x)\\s_1(x)\\c_1(x)\\\vdots\\s_N(x)\\c_N(x)\end{pmatrix}, \qquad q(t) = \begin{pmatrix}\alpha_0(t)\\\beta_1(t)\\\alpha_1(t)\\\vdots\\\beta_N(t)\\\alpha_N(t)\end{pmatrix}, \qquad D = \frac{2\pi}{b-a}\begin{pmatrix}0&&&\\&1\cdot I_2&&\\&&\ddots&\\&&&N\cdot I_2\end{pmatrix}, \tag{54}$$
(observe that, for the sake of brevity, we continue to use the same notation ω , q , and D used for the infinite expansion, though, hereafter, they will refer to the finite counterparts (54)) so that (48) continues formally to hold, even though now u is no more the solution of (39). Nevertheless, in the spirit of Fourier-Galerkin methods [70] (the same Fourier-Galerkin procedure will be used for the Hamiltonian PDEs studied in the following sections), by requiring the residual obtained by plugging u into (39) be orthogonal to the functional subspace
$$V_N = \mathrm{span}\left\{c_0(x),\,s_1(x),\,c_1(x),\,\dots,\,s_N(x),\,c_N(x)\right\},$$
which, for fixed t, contains u, one obtains a finite set of 2 N + 1 ODEs, formally still given by (51), for which Theorem 3 continues formally to hold, with the only exception that now H ( q , p ) is no more equivalent to the Hamiltonian functional (40), but only yields an approximation to it. Nevertheless, it is well known that, under suitable regularity assumptions on f and the initial data u 0 and v 0 , this truncated version converges exponentially to the original functional (40), as N (this phenomenon is usually referred to as spectral accuracy), as well as the truncated version of u converges to the infinite expansion (46).
The resulting finite-dimensional semi-discrete problem (51), which is still Hamiltonian, is the one we will solve by using line-integral methods. Actually, it is not yet ready to be solved, since the integral a b ω ( x ) f ( ω ( x ) q ) d x , appearing in it, needs to be evaluated. For this purpose, since we are dealing with an integrand which is periodic in space, a composite trapezoidal rule based at the abscissae
$$x_i = a + i\,\frac{b-a}{m}, \qquad i=0,\dots,m, \tag{55}$$
can be considered. We refer, e.g., to [45] (Theorems 5 and 6), for a proper choice of the number, m + 1 , of points in (55), able to preserve the property of spectral accuracy.
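The following Python sketch (ours; it only illustrates the formulas above, with function names of our choosing) evaluates the truncated basis (54) at the nodes (55) and assembles the right-hand side of (51), the integral being computed by the composite trapezoidal rule (which, for a periodic integrand, reduces to equal weights on m nodes, since the contributions at x_0 and x_m coincide):

import numpy as np

def fourier_basis(x, N, a, b):
    # omega(x) of (54) evaluated at the points x: array of shape (2N+1, len(x)).
    x = np.atleast_1d(x); L = b - a
    W = np.empty((2 * N + 1, x.size))
    W[0] = 1 / np.sqrt(L)
    for j in range(1, N + 1):
        arg = 2 * np.pi * j * (x - a) / L
        W[2 * j - 1] = np.sqrt(2 / L) * np.sin(arg)   # s_j, see (44)
        W[2 * j] = np.sqrt(2 / L) * np.cos(arg)       # c_j, see (43)
    return W

def wave_rhs(q, fprime, N, a, b, m):
    # Right-hand side of (51): -D^2 q - int_a^b omega(x) f'(omega(x)^T q) dx.
    L = b - a
    x = a + np.arange(m) * L / m                      # trapezoidal nodes (55)
    W = fourier_basis(x, N, a, b)
    u = W.T @ q                                       # u(x_i) = omega(x_i)^T q
    d = 2 * np.pi / L * np.repeat(np.arange(N + 1), 2)[1:]   # diagonal of D in (54)
    return -d**2 * q - (L / m) * (W @ fprime(u))

The initial values in (51) are obtained analogously, e.g., q0 = (L/m) * W @ u0(x).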
Problem (51) can then be solved by using a HBVM ( k , s ) method, for which the accuracy results of Theorem 1 hold true. In particular, concerning the conservation of the semi-discrete Hamiltonian (52), the next result holds true, which follows from (11).
Theorem 4.
If a HBVM ( k , s ) method is used with time-step h for solving (51), one has
$$H(q_1,p_1)-H(q_0,p_0) = \begin{cases}0, & \text{if } f\in\Pi_\nu \text{ with } \nu\le 2k/s,\\ O\!\left(h^{2k+1}\right), & \text{otherwise},\end{cases} \tag{56}$$
having set $q_1\approx q(h)$ and $p_1\approx p(h)$ the new approximations.

3.2. The Nonlinear Iteration

In light of what previously stated, in order to obtain spectral accuracy in space a suitably large value of N in (54) has to be considered (a practical criterion for its choice will be sketched in Section 6). Consequently, the special second-order problem (51) is semilinear, with a bounded nonlinear term, when the solution is bounded, and the linear term given by $-D^2q$. On the other hand, both the size (i.e., $2N+1$) and the norm (i.e., $\left(\frac{2\pi N}{b-a}\right)^2$) of the matrix $D^2$ tend to infinity, as $N\to\infty$. Consequently, when using a HBVM$(k,s)$ method for solving (51), the blended iteration (29), (31)–(32) can be conveniently used, to get rid of the large norm of the linear term, with matrix $\Sigma$ approximated as in (36). As a result, it turns out to be given by
$$\Sigma = I_{2N+1} + h^2\rho_s^2\,D^2, \tag{57}$$
which is a diagonal matrix and, therefore, $\Sigma^{-1}$ can be cheaply computed and stored. Consequently, the complexity of the blended iteration turns out to be comparable with that of an explicit method, though not suffering from the step-size restrictions of this latter. As a matter of fact, the use of an explicit method usually would require $h\|D\|<1$, i.e., $h=O(N^{-1})$, which may be restrictive, when $N\gg1$.
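In code, (57) amounts to storing a single vector (a sketch, with variable names of our choosing; a, b, N, h and rho_s are assumed to be available):

import numpy as np
d = 2 * np.pi / (b - a) * np.repeat(np.arange(N + 1), 2)[1:]   # diag(D), see (54)
Sigma_diag = 1.0 + (h * rho_s)**2 * d**2                       # Sigma of (57)
Sigma_inv_diag = 1.0 / Sigma_diag                              # applied entrywise within (31)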

3.3. Extension to Higher Space Dimensions

For completeness, in this section we sketch the generalization of most of the previous arguments to the case where the space domain of the wave equation is, for the sake of simplicity, the square [ a , b ] 2 : = [ a , b ] × [ a , b ] :
$$u_{tt}(x,y,t) = \Delta u(x,y,t) - f'(u(x,y,t)), \quad (x,y,t)\in[a,b]^2\times[0,T], \qquad u(x,y,0) = u_0(x,y), \quad u_t(x,y,0) = v_0(x,y), \quad (x,y)\in[a,b]^2. \tag{58}$$
As before, the problem (58) is completed by prescribing periodic boundary conditions. In such a case, the Hamiltonian functional becomes, by setting as usual v = u t ,
$$\mathcal H[u,v](t) = \frac12\int_a^b\!\!\int_a^b\left[v^2(x,y,t) + \big\|\nabla u(x,y,t)\big\|_2^2 + 2f(u(x,y,t))\right]dx\,dy. \tag{59}$$
Because of the periodic boundary conditions, we can again consider the orthonormal basis (42)–(44) in each space dimension, thus obtaining the expansion (for the sake of brevity, let us set s 0 0 )
$$u(x,y,t) = \sum_{j,k\ge0}\left[c_j(x)\,\alpha_j(t) + s_j(x)\,\beta_j(t)\right]\cdot\left[c_k(y)\,\eta_k(t) + s_k(y)\,\mu_k(t)\right], \tag{60}$$
involving the additional time-dependent coefficients η 0 ( t ) , μ 1 ( t ) , η 1 ( t ) , . With reference to the infinite-dimensional vectors in (47), and defining the vectors
$$q_1(t) := q(t), \qquad q_2(t) = \big(\eta_0(t),\,\mu_1(t),\,\eta_1(t),\,\dots\big)^\top,$$
the infinite expansion (60) can be cast in vector form as
u ( x , y , t ) = [ ω ( x ) ω ( y ) ] q 1 ( t ) q 2 ( t ) .
Consequently, by taking into account (49) and (50), one obtains that (compare with Theorem 3) problem (58) can be recast as the infinite set of second order ODEs:
q ¨ 1 q ¨ 2 = D 2 q 1 D 2 q 2 a b a b ω ( x ) ω ( y ) f [ ω ( x ) ω ( y ) ] q 1 q 2 d x d y , t [ 0 , T ] , q 1 ( 0 ) q 2 ( 0 ) = a b a b ω ( x ) ω ( y ) u 0 ( x , y ) d x d y , q ˙ 1 ( 0 ) q ˙ 2 ( 0 ) = a b a b ω ( x ) ω ( y ) v 0 ( x , y ) d x d y .
By setting p 1 p 2 = q ˙ 1 q ˙ 2 , one then obtains that problem (62) is Hamiltonian, with Hamiltonian function
H ( q 1 q 2 , p 1 p 2 ) = 1 2 ( p 1 p 2 ) ( p 1 p 2 ) + ( q 1 q 2 ) ( D D ) 2 ( q 1 q 2 ) + 2 a b a b f [ ω ( x ) ω ( y ) ] q 1 q 2 d x d y .
This latter function, in turn, is equivalent to the Hamiltonian functional (59), via the expansion (61). Then, as done in the one dimensional case, the vectors ω ( x ) , ω ( y ) , q 1 ( t ) , q 2 ( t ) , are truncated after 2 N + 1 terms, for a convenient large value of N, so that (62) becomes a Hamiltonian set of ( 2 N + 1 ) 2 ODEs, with Hamiltonian (63). This problem can be solved by adapting the arguments previously explained in the one dimensional case, even though now the complexity is clearly increased. Remarkably enough, however, the diagonal structure of the Jacobian of the linear term in (62), i.e., ( D D ) 2 , is still preserved (evidently, this property holds true whichever is the dimension of the considered space domain).

4. The Nonlinear Schrödinger Equation

We now consider the nonlinear Schrödinger equation, which is very important in many applications (see, e.g., the introduction in [44]). In real variables, it takes the form,
$$u_t = -v_{xx} - f'(u^2+v^2)\,v, \quad u(x,0) = u_0(x), \qquad v_t = u_{xx} + f'(u^2+v^2)\,u, \quad v(x,0) = v_0(x), \qquad (x,t)\in[a,b]\times[0,T], \tag{64}$$
$f'$ being the derivative of a suitably regular function $f$. The problem is completed with periodic boundary conditions and, hereafter, we shall assume the initial functions to be suitably regular (as periodic functions), in order to guarantee a suitably smooth solution. Such an equation can be written in the form (41), with $\nabla\mathcal H$ the vector of the functional derivatives of the Hamiltonian functional
$$\mathcal H[u,v](t) = \frac12\int_a^b\left[u_x^2 + v_x^2 - f(u^2+v^2)\right]dx. \tag{65}$$
This latter functional is conserved, because of the periodic boundary conditions [44] (Theorem 1). Additional conserved (quadratic) functionals are the mass and the momentum [44] (Theorem 2), respectively given by:
$$M_1[u,v](t) = \int_a^b\left(u^2+v^2\right)dx, \qquad M_2[u,v](t) = \frac12\int_a^b\left(v_xu - u_xv\right)dx. \tag{66}$$
In order to obtain a space discretization which takes into account of the periodic boundary conditions, we consider again the expansion along the Fourier basis (42)–(45), for u and v. The expansion for u is formally still given by (46). Similarly, that for v will be given by:
$$v(x,t) = c_0(x)\,\eta_0(t) + \sum_{j\ge1}\left[c_j(x)\,\eta_j(t) + s_j(x)\,\mu_j(t)\right], \tag{67}$$
for suitable time dependent coefficients, η 0 ( t ) , η 1 ( t ) , μ 1 ( t ) , . By using the infinite vectors (47) and
$$p(t) = \big(\eta_0(t),\,\mu_1(t),\,\eta_1(t),\,\dots\big)^\top, \tag{68}$$
we can cast the expansions of u and v in vector form, respectively, as (48) and
$$v(x,t) = \omega(x)^\top p(t). \tag{69}$$
As a consequence, the following result holds true, whose proof is similar to that of Theorem 2 (see also [44] (Section 2)).
Theorem 5.
With reference to (42)–(50), problem (64) can be rewritten as the infinite-dimensional Hamiltonian ODE problem
$$\dot q = D^2p - \int_a^b\omega(x)\,f'\!\left((\omega(x)^\top q)^2+(\omega(x)^\top p)^2\right)\omega(x)^\top p\,dx, \qquad \dot p = -D^2q + \int_a^b\omega(x)\,f'\!\left((\omega(x)^\top q)^2+(\omega(x)^\top p)^2\right)\omega(x)^\top q\,dx, \quad t\in[0,T],$$
$$q(0) = \int_a^b\omega(x)\,u_0(x)\,dx =: q_0, \qquad p(0) = \int_a^b\omega(x)\,v_0(x)\,dx =: p_0. \tag{70}$$
This latter problem is Hamiltonian w.r.t. the Hamiltonian
$$H(q,p) = \frac12\left[p^\top D^2p + q^\top D^2q - \int_a^bf\!\left((\omega(x)^\top q)^2+(\omega(x)^\top p)^2\right)dx\right], \tag{71}$$
which turns out to be equivalent to the Hamiltonian functional (65). Moreover, the two quadratic invariants (66) can be respectively rewritten as
$$M_1(q,p) = q^\top q + p^\top p, \qquad M_2(q,p) = q^\top\bar D\,p, \tag{72}$$
where D ¯ is the matrix defined in (53).
As in the case of the nonlinear wave equation, in order to solve problem (70) on a computer, one needs to truncate the infinite expansions (46) and (67) at a convenient index N. In so doing, the infinite vectors and matrices (47), (49), and (68) become those in (54) and
$$p(t) = \big(\eta_0(t),\,\mu_1(t),\,\eta_1(t),\,\dots,\,\mu_N(t),\,\eta_N(t)\big)^\top, \tag{73}$$
respectively. As a result, one eventually arrives again at the finite-dimensional Hamiltonian ODE problem (70), having dimension 4 N + 2 , with the Hamiltonian and the invariants still given by (71) and (72), respectively. Again, spectral accuracy is expected, if the solution is regular enough in space (as a periodic function). Finally, we mention that also in this case the integrals in space can be computed by means of a composite trapezoidal rule, based at the abscissae (55), for a suitably large value of m.
Again, we can use an HBVM ( k , s ) method for solving (70). Concerning the conservation of the Hamiltonian, the following straightforward result follows from (11).
Theorem 6.
If a HBVM ( k , s ) method is used with time-step h for solving (70), one has
$$H(q_1,p_1)-H(q_0,p_0) = \begin{cases}0, & \text{if } f\in\Pi_\nu \text{ with } \nu\le k/s,\\ O\!\left(h^{2k+1}\right), & \text{otherwise},\end{cases} \tag{74}$$
having set $q_1\approx q(h)$ and $p_1\approx p(h)$ the new approximations.

The Nonlinear Iteration

Following the same arguments discussed in the previous section, in order to obtain a spectral accuracy in space a suitably large value of N in (54) and (73) has to be considered (we remind that a practical criterion for its choice will be given in Section 6). Consequently, the Hamiltonian problem (70) is semilinear, with a bounded nonlinear term, if the solution is bounded, and the linear term given by (see (41))
$$\left(J_2\otimes D^2\right)\begin{pmatrix}q\\p\end{pmatrix}.$$
On the other hand, the norm of the matrix $D^2$ is $\left(\frac{2\pi N}{b-a}\right)^2$, and tends to infinity, as $N\to\infty$, as well as its size. Consequently, when using a HBVM$(k,s)$ method for solving (70), the blended iteration (27)–(29) can be conveniently used, to get rid of the large norm of the linear term, with matrix $\Sigma$ approximated as in (34) and, in the present context, given by
$$\Sigma = \begin{pmatrix}I_{2N+1} & -B\\ B & I_{2N+1}\end{pmatrix}, \qquad B = h\rho_s\,D^2.$$
This is a block matrix with diagonal blocks and, therefore, $\Sigma^{-1}$ can be cheaply computed and stored. As a matter of fact, one has [44] (Theorem 5):
$$\Sigma^{-1} = \begin{pmatrix}\Gamma & B\,\Gamma\\ -B\,\Gamma & \Gamma\end{pmatrix}, \qquad \Gamma = \left(I_{2N+1}+B^2\right)^{-1},$$
which is again a block matrix with diagonal blocks (actually, two vectors are enough to store it). As a consequence, also in the present case the complexity of the blended iteration turns out to be comparable with that of an explicit method, though not suffering from step-size restrictions. As a matter of fact, the use of an explicit method would require $h\|D^2\|<1$, i.e., $h=O(N^{-2})$, which may be very restrictive, when $N\gg1$.
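Since B and Γ are diagonal, applying Σ⁻¹ within the blended iteration costs a few vector operations; a minimal sketch (ours, with names of our choosing) is:

import numpy as np

def apply_Sigma_inv_nls(eta_q, eta_p, h, rho_s, d):
    # d = diag(D) of (54); Sigma^{-1} = [[Gamma, B*Gamma], [-B*Gamma, Gamma]],
    # with B = h*rho_s*D^2 and Gamma = (I + B^2)^{-1}, both diagonal.
    B = h * rho_s * d**2
    Gamma = 1.0 / (1.0 + B**2)
    return Gamma * (eta_q + B * eta_p), Gamma * (eta_p - B * eta_q)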

5. The Korteweg-de Vries (KdV) Equation

The last Hamiltonian PDE that we consider is the Korteweg–de Vries equation, recently investigated in [46] by using line integral methods,
$$u_t = -\alpha\,u_{xxx} + \beta\,u\,u_x, \qquad (x,t)\in[a,b]\times[0,T], \qquad u(x,0) = u_0(x), \tag{76}$$
with $\alpha\beta\ne0$, and coupled with periodic boundary conditions. As usual, we shall assume that $u_0$ is smooth enough, as a periodic function, so that $u(x,t)$ turns out to be suitably regular, as a periodic function in space [71]. Equation (76) can be written in Hamiltonian form as
$$u_t = \partial_x\,\delta_u\mathcal H[u],$$
with H [ u ] the Hamiltonian functional
$$\mathcal H[u](t) = \frac12\int_a^b\left[\alpha\,u_x(x,t)^2 + \frac\beta3\,u(x,t)^3\right]dx \equiv \int_a^bL(x,t,u,u_x)\,dx, \tag{77}$$
and
$$\delta_u\mathcal H[u] = \left(\partial_u - \partial_x\,\partial_{u_x}\right)L(x,t,u,u_x),$$
its functional derivative (actually, it can be seen that there is a further Hamiltonian formulation of (76) [72], so that the PDE has a so called bi-Hamiltonian structure). Because of the periodic boundary conditions, the Hamiltonian functional (77) turns out to be conserved. Another conserved functional is given by
$$\mathcal U[u] = \int_a^bu\,dx, \tag{78}$$
as it can be readily shown. In order to obtain a space discretization, we consider an expansion along the usual orthonormal basis (42)–(45), which provides us with an expression formally still given by (46). However, because of the conservation of the functional (78), one obtains that
$$\int_a^bu_0(x)\,dx = \int_a^bu(x,t)\,dx \equiv \int_a^bc_0(x)\,\alpha_0(t)\,dx = (b-a)\,c_0(x)\,\alpha_0(t), \qquad \forall\,t\ge0.$$
Consequently, the expansion (46) now becomes
$$u(x,t) = \hat u_0 + \sum_{j\ge1}\left[c_j(x)\,\alpha_j(t) + s_j(x)\,\beta_j(t)\right], \qquad \hat u_0 = (b-a)^{-1}\int_a^bu_0(x)\,dx. \tag{79}$$
In order to put this expansion in vector form, let us introduce the infinite vectors
$$c(x) = \begin{pmatrix}c_1(x)\\c_2(x)\\\vdots\end{pmatrix}, \qquad s(x) = \begin{pmatrix}s_1(x)\\s_2(x)\\\vdots\end{pmatrix}, \qquad q(t) = \begin{pmatrix}\alpha_1(t)\\\alpha_2(t)\\\vdots\end{pmatrix}, \qquad p(t) = \begin{pmatrix}\beta_1(t)\\\beta_2(t)\\\vdots\end{pmatrix}. \tag{80}$$
In so doing, we can rewrite (79) as:
$$u(x,t) = \hat u_0 + c(x)^\top q(t) + s(x)^\top p(t), \qquad \hat u_0 = (b-a)^{-1}\int_a^bu_0(x)\,dx. \tag{81}$$
Consequently, the conservation of (78) is automatically granted. Moreover, by defining the infinite matrix
$$D = \frac{2\pi}{b-a}\begin{pmatrix}1&&\\&2&\\&&\ddots\end{pmatrix}, \tag{82}$$
such that
$$c'(x) = -D\,s(x), \qquad s'(x) = D\,c(x), \tag{83}$$
and similarly for the higher derivatives, and considering that
$$\int_a^bc(x)\,c(x)^\top dx = \int_a^bs(x)\,s(x)^\top dx = I, \qquad \int_a^bc(x)\,s(x)^\top dx = O, \tag{84}$$
one verifies that (76) can be rewritten as the infinite dimensional ODE problem (see [46] (Lemma 3) for full details)
$$\dot q = D\left[\alpha D^2p + \frac\beta2\int_a^bs(x)\left(\hat u_0 + c(x)^\top q + s(x)^\top p\right)^2dx\right], \qquad \dot p = -D\left[\alpha D^2q + \frac\beta2\int_a^bc(x)\left(\hat u_0 + c(x)^\top q + s(x)^\top p\right)^2dx\right], \quad t\in[0,T],$$
$$q(0) = \int_a^bc(x)\,u_0(x)\,dx =: q_0, \qquad p(0) = \int_a^bs(x)\,u_0(x)\,dx =: p_0. \tag{85}$$
For this problem, the following result holds true [46] (Theorem 1).
Theorem 7.
Problem (85) is in the form (1) with (see (41))
$$y = \begin{pmatrix}q\\p\end{pmatrix}, \qquad J = J_2\otimes D,$$
and the Hamiltonian given by
$$H(q,p) = \frac12\left[\alpha\left(q^\top D^2q + p^\top D^2p\right) + \frac\beta3\int_a^b\left(\hat u_0 + c(x)^\top q + s(x)^\top p\right)^3dx\right]. \tag{86}$$
This latter is equivalent to the Hamiltonian functional (77), via (81), (83) and (84).
As done before, in order for the problem (85) to be solvable on a computer, the infinite expansion in (79) must be truncated to a convenient index N. In so doing, one still formally retrieves the vector formulation (81), where now the vectors
$$c(x) = \begin{pmatrix}c_1(x)\\\vdots\\c_N(x)\end{pmatrix}, \qquad s(x) = \begin{pmatrix}s_1(x)\\\vdots\\s_N(x)\end{pmatrix}, \qquad q(t) = \begin{pmatrix}\alpha_1(t)\\\vdots\\\alpha_N(t)\end{pmatrix}, \qquad p(t) = \begin{pmatrix}\beta_1(t)\\\vdots\\\beta_N(t)\end{pmatrix}, \tag{87}$$
are hereafter used in place of (80). Similarly, by replacing matrix (82) with
$$D = \frac{2\pi}{b-a}\begin{pmatrix}1&&\\&\ddots&\\&&N\end{pmatrix}, \tag{88}$$
one obtains a set of 2N Hamiltonian equations, formally still given by (85), with the Hamiltonian H also formally given by (86). As for the Hamiltonian PDEs previously studied, spectral accuracy is expected, as $N\to\infty$, upon regularity assumptions on $u_0$. Moreover, concerning the integrals appearing in (85) and (86), they can be exactly computed via a composite trapezoidal rule based at the abscissae (55), by choosing $m>3N$ [46].
Having got the finite dimensional Hamiltonian ODE problem (85), we can use a HBVM ( k , s ) method for its time integration. Concerning energy conservation, the following result easily follows from (11).
Theorem 8.
A HBVM$(k,s)$ method used for solving (85) is energy-conserving, for all $k\ge3s/2$.

The Nonlinear Iteration

Also in this case, problem (85) is semilinear. However, it is worth observing that the Hessian of the Hamiltonian H in (86) is given by
$$\nabla^2H(q,p) = \begin{pmatrix}\alpha D^2 + \beta\int_a^bu(x,t)\,c(x)\,c(x)^\top dx & \beta\int_a^bu(x,t)\,c(x)\,s(x)^\top dx\\ \beta\int_a^bu(x,t)\,s(x)\,c(x)^\top dx & \alpha D^2 + \beta\int_a^bu(x,t)\,s(x)\,s(x)^\top dx\end{pmatrix},$$
with u ( x , t ) given by the expansion (81). Consequently, by considering the constant approximation u ( x , t ) u ^ 0 (due to the conservation of (78)), and taking into account (84), one obtains the constant approximate (diagonal) Hessian
$$\nabla^2H(q,p) \approx I_2\otimes\hat D, \qquad \hat D := \alpha D^2 + \beta\,\hat u_0\,I_N.$$
Therefore, the blended iteration (27)–(29) can be conveniently used, by considering the resulting approximated matrix (see (41) and (88))
$$\Sigma = I_2\otimes I_N - h\rho_s\left(J_2\otimes D\right)\left(I_2\otimes\hat D\right) \equiv \begin{pmatrix}I_N & -B\\ B & I_N\end{pmatrix}, \qquad B = h\rho_s\,D\hat D,$$
which is a block matrix with diagonal blocks. Moreover, one has [46] (Theorem 3):
$$\Sigma^{-1} = \begin{pmatrix}\Gamma & B\,\Gamma\\ -B\,\Gamma & \Gamma\end{pmatrix}, \qquad \Gamma = \left(I_N+B^2\right)^{-1},$$
which can be easily computed (once for all) and stored (in fact, only two vectors of length N are needed). Consequently, the complexity of the blended iteration turns out to be comparable with that of an explicit method, though not suffering from its step-size restrictions which, for the present problem, would require $h=O(N^{-3})$.
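In code, only two vectors need to be formed and stored (a sketch, with hypothetical variable names; alpha, beta, the mean value u0_bar = û_0, a, b, N, h and rho_s are assumed to be available):

import numpy as np
d = 2 * np.pi / (b - a) * np.arange(1, N + 1)   # diag(D), see (88)
Dhat = alpha * d**2 + beta * u0_bar             # diagonal of hat{D} above
B = h * rho_s * d * Dhat                        # diagonal of B
Gamma = 1.0 / (1.0 + B**2)                      # Sigma^{-1} is then applied entrywise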

6. Numerical Tests

In this section, we report a few numerical tests, aimed at assessing the effectiveness of HBVMs for solving the previously studied Hamiltonian PDEs. In particular, the spectral version of HBVMs (SHBVMs) will be recognized to be very promising. In more details, we shall compare the following methods:
  • the symplectic s-stage Gauss methods, s = 1 , 2 ;
  • the energy-conserving HBVM ( k , s ) methods, s = 1 , 2 , and k suitably chosen;
  • the SHBVM method.
The comparisons will be quite fair, since the same Matlab function, which is a modification of the function hbvm available at [63], implements all methods. All numerical tests have been done on a 2.8 GHz Intel Core i7 computer with 16GB of memory, running Matlab 2017b.
To begin with, let us define the criterion used for getting spectral accuracy in space, i.e., for a correct choice of N in (54), (73), (87), and (88). In more details, N has been chosen in order to fulfil both of the following requirements (a code sketch of the first check is given right after this list):
  • a good approximation of the initial condition. This is achieved by requiring
    $$E_0 := \max\left\{\left\|u_0(x)-\omega(x)^\top q_0\right\|_\infty,\ \left\|v_0(x)-\omega(x)^\top p_0\right\|_\infty\right\} \le tol \approx \varepsilon, \tag{89}$$
    with ε the machine epsilon, for problems (51) and (70), or
    $$E_0 := \left\|u_0(x) - \hat u_0 - c(x)^\top q_0 - s(x)^\top p_0\right\|_\infty \le tol \approx \varepsilon, \tag{90}$$
    for problem (85);
  • a good approximation of the Hamiltonian. This is achieved by computing the initial value H ( q 0 , p 0 ) = : H 0 of the semi-discrete Hamiltonian (i.e., (52), or (71), or (86)) for consecutive values of N, and checking that the absolute value of the difference, Δ H 0 , satisfies:
    $$\Delta H_0 \le tol \approx \varepsilon. \tag{91}$$
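A minimal sketch of the first check, say (89) restricted to u_0 (our illustration, reusing the fourier_basis helper sketched in Section 3.1; names are ours):

import numpy as np

def initial_error(u0, a, b, N, m=4096):
    # E_0 of (89)/(90) for u0: project u0 on the truncated basis (54) by the
    # trapezoidal rule at (55), then measure the maximum residual on the grid.
    L = b - a
    x = a + np.arange(m) * L / m
    W = fourier_basis(x, N, a, b)
    q0 = (L / m) * (W @ u0(x))
    return np.max(np.abs(u0(x) - W.T @ q0))

# N is then increased until initial_error and the analogous Hamiltonian
# variation Delta H_0 of (91) both fall below the chosen tolerance.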

6.1. The Semilinear Wave Equation

We consider the so called sine-Gordon equation [45] (Section 7) with a breather soliton solution,
$$u_{tt} = u_{xx} - \sin(u), \qquad (x,t)\in[-50,50]\times[0,100], \qquad u(x,0) = 0, \quad u_t(x,0) = \frac4\gamma\,\mathrm{sech}\!\left(\frac x\gamma\right), \tag{92}$$
where we choose $\gamma = 1.5$. Its solution, depicted in the upper plot in Figure 1, is:
$$u(x,t) = 4\,\mathrm{atan}\!\left(\frac{\mathrm{sech}\!\left(\frac x\gamma\right)\,\sin\!\left(t\sqrt{1-\gamma^{-2}}\right)}{\sqrt{\gamma^2-1}}\right). \tag{93}$$
In the lower plot of Figure 1 there are the graphs of $E_0$ and $\Delta H_0$, as defined in (89) and (91), respectively. From such plots, one infers that the choice $N=250$ is adequate to obtain spectral accuracy in space. In Table 1 we list the numerical results obtained by solving the resulting semi-discrete problem (51) with time-step $h=100/n$. In more details: the execution time (in sec), the maximum solution and Hamiltonian errors, $e_u$ and $e_H$, respectively, and the rate of convergence, where appropriate; for the SHBVM method, we also list the used values of k and s, the latter obtained by using $tol\approx\varepsilon$ in (38) and k suitably larger than s. From the obtained results, one sees that:
  • the higher-order methods perform better than the lower-order ones;
  • the energy-conserving methods are slightly more efficient than the symplectic ones, when the largest time-steps are used;
  • the spectral method turns out to be the most effective one, and uses much larger time-steps.

6.2. The Nonlinear Schrödinger Equation

We consider the so called focusing equation (the de-focusing case is obtained when the sign of the coupling term is reversed),
$$u_t = -v_{xx} - 2\left(u^2+v^2\right)v, \qquad v_t = u_{xx} + 2\left(u^2+v^2\right)u, \qquad (x,t)\in[-40,120]\times[0,20], \tag{94}$$
where the initial conditions at t = 0 are taken from the known solution,
$$u(x,t) = \mathrm{sech}(x-4t)\cos(2x-3t), \qquad v(x,t) = \mathrm{sech}(x-4t)\sin(2x-3t), \tag{95}$$
depicted in the upper plot of Figure 2, plus (approximate) boundary conditions. In the lower plot of the same figure, there are the plots of $E_0$ and $\Delta H_0$, as defined in (89) and (91), respectively. From such plots, one infers that the choice $N=600$ is adequate to obtain spectral accuracy in space. For this problem, the symplectic s-stage Gauss methods conserve the quadratic invariants (72), whereas the HBVM$(2s,s)$ methods are energy conserving (according to Theorem 6, since $f(x)=x^2$). For the SHBVM method, we use $tol\approx10^{-1}\sqrt\varepsilon$ in (38). In Table 2 we list the numerical results obtained by solving the resulting semi-discrete problem (70) with time-step $h=20/n$: besides the execution time (in sec), we list the maximum solution, mass, momentum, and Hamiltonian errors, $e_{uv}$, $e_1$, $e_2$, and $e_H$, respectively, along with the rate of convergence, where appropriate; for the SHBVM method, we also list the used values of k and s. We observe that a kind of super-convergence occurs in the invariants (twice the convergence order of the solution) for the Gauss and HBVM methods. In this case, the symplectic and energy-conserving methods turn out to be almost equivalent, with the higher-order methods more efficient than the lower-order ones. However, the SHBVM method outperforms all of them, being able to use much larger time-steps, and having a uniformly small error in both the solution and the invariants (which are all conserved within the round-off error level).

6.3. The Korteweg-de Vries Equation

This example is adapted from [46] (Example 2):
$$u_t + \epsilon\,u_{xxx} + u\,u_x = 0, \qquad (x,t)\in[0,1]\times[0,10], \tag{96}$$
equipped with periodic boundary conditions and the initial condition obtained from the known cnoidal wave solution,
$$u(x,t) = a\,\mathrm{cn}^2\big(4K(m)\,(x-\nu t - x_0)\big). \tag{97}$$
Here cn : = cn ( z | m ) is the Jacobi elliptic function with modulus m, K ( m ) is the complete elliptic integral of the first kind, and the following parameters have been used:
$$\epsilon = 10^{-2}, \qquad m = 0.9, \qquad a = 192\,m\,\epsilon\,K^2(m), \qquad \nu = 64\,\epsilon\,(2m-1)\,K^2(m), \qquad x_0 = 1/2. \tag{98}$$
The initial part of the solution (97) is depicted in the upper plot of Figure 3, whereas in the lower plot one may find $E_0$ and $\Delta H_0$, as defined in (90) and (91), respectively, versus N. From the latter plots, one infers that the choice $N=50$ is adequate to obtain spectral accuracy in space. By recalling the result of Theorem 8 for HBVMs, in Table 3 we list the numerical results obtained by solving the resulting semi-discrete problem (85) with time-step $h=10/n$, in terms of: execution time (in sec); maximum solution and Hamiltonian errors, $e_u$ and $e_H$, respectively; rate of convergence, where appropriate. We observe that, also in this case, for the Gauss method a super-convergence occurs in the Hamiltonian error. For the SHBVM method, we also list the used values of k and s, the latter obtained by using $tol\approx10^{-1}\sqrt\varepsilon$ in (38) and k suitably larger than s. From the obtained results, one sees that the energy-conserving and symplectic methods are almost equivalent, with the higher-order methods performing better than the lower-order ones. Also in this case, however, the spectral method turns out to be the most effective, being able to use much larger time-steps, with uniformly small solution and Hamiltonian errors.

6.4. A Few Remarks

From the obtained results, we can draw a few conclusions, which we report in the sequel.
Energy-conservation. 
When the conservation of energy is not an issue, the performance of energy-conserving HBVMs seems to be comparable with that of the symplectic Gauss formulae of the same order. Clearly, things may change when energy-conservation is an important feature (see, e.g., the example in [45] (Section 7)).
Order of the methods. 
From the numerical results, one clearly sees that the second-order methods are outperformed by higher-order HBVMs and/or Gauss methods. In particular, for problems (94) and (96), the second-order HBVM(2,1) method is exactly energy-conserving, and can be regarded as a high-performance implementation of the AVF method in [73]. Despite this, its performance is not comparable with that of the higher-order methods.
Spectral methods in time. 
The obtained numerical results further confirm what recently observed in [41,42,43], i.e., that the use of HBVMs as spectral methods in time is a very promising way of getting very high-performance ODE solvers, due to the effectiveness of the underlying blended iteration described in Section 2.

7. Conclusions

In this paper we have reviewed the basic facts concerning the use of energy-conserving line integral methods for efficiently solving Hamiltonian PDEs. This has been done by performing, at first, a suitable space discretization, along a Fourier orthonormal basis, thus obtaining a corresponding high-dimensional Hamiltonian problem. In particular, we have studied the semilinear wave equation, the nonlinear Schrödinger equation, and the Korteweg–de Vries equation in one dimension. It is worth mentioning, however, that: as sketched in Section 3.3, the used space discretization can be straightforwardly extended to the case of more space dimensions; additional Hamiltonian PDEs have been considered in [47,48]. In the future, we plan to further investigate Hamiltonian PDEs within the same framework.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Blanes, S.; Casas, F. A Concise Introduction to Geometric Numerical Integration; Chapman et Hall/CRC: Boca Raton, FL, USA, 2016. [Google Scholar]
  2. Brugnano, L.; Iavernaro, F. Line Integral Methods for Conservative Problems; Chapman et Hall/CRC: Boca Raton, FL, USA, 2016. [Google Scholar]
  3. Hairer, E.; Lubich, C.; Wanner, G. Geometric Numerical Integration, 2nd ed.; Springer: Berlin, Germany, 2006. [Google Scholar]
  4. Leimkuhler, B.; Reich, S. Simulating Hamiltonian Dynamics; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
Figure 1. Sine-Gordon Equation (92); upper plot: solution (93); lower plot: E0 (see (89)) and ΔH0 (see (91)) versus N.
Figure 2. Nonlinear Schrödinger Equation (94); upper plot: modulus of the solution (95); lower plot: E0 (see (89)) and ΔH0 (see (91)) versus N.
Figure 3. Korteweg–de Vries Equation (96); upper plot: solution (97); lower plot: E0 (see (90)) and ΔH0 (see (91)) versus N.
Table 1. Numerical solution of the sine-Gordon Equation (92) using a time-step h = 100/n.

Gauss 1
  n       time    e_u            rate    e_H            rate
  2000     2.1    4.61×10^-02            1.98×10^-03
  3000     2.8    2.05×10^-02    2.0     8.79×10^-04    2.0
  4000     3.8    1.15×10^-02    2.0     4.95×10^-04    2.0
  5000     4.8    7.37×10^-03    2.0     3.17×10^-04    2.0
  6000     6.1    5.12×10^-03    2.0     2.20×10^-04    2.0

Gauss 2
  n       time    e_u            rate    e_H            rate
  1000     2.4    2.69×10^-05            2.57×10^-06
  1500     3.3    5.28×10^-06    4.0     5.05×10^-07    4.0
  2000     3.9    1.67×10^-06    4.0     1.59×10^-07    4.0
  2500     5.3    6.83×10^-07    4.0     6.53×10^-08    4.0
  3000     6.6    3.29×10^-07    4.0     3.15×10^-08    4.0

HBVM(4,1)
  n       time    e_u            rate    e_H
  1000     2.4    1.37×10^-02            7.11×10^-15
  1500     3.3    6.15×10^-03    2.0     1.07×10^-14
  2000     4.2    3.47×10^-03    2.0     8.88×10^-15
  2500     5.7    2.22×10^-03    2.0     1.07×10^-14
  3000     7.0    1.55×10^-03    2.0     8.88×10^-15

HBVM(4,2)
  n       time    e_u            rate    e_H
  1000     2.8    2.11×10^-05            1.07×10^-14
  1500     4.0    4.18×10^-06    4.0     8.88×10^-15
  2000     4.6    1.32×10^-06    4.0     1.07×10^-14
  2500     6.0    5.42×10^-07    4.0     7.11×10^-15
  3000     7.0    2.61×10^-07    4.0     8.88×10^-15

SHBVM
  n       time    k     s     e_u            e_H
  50       2.7    22    20    2.87×10^-12    3.55×10^-15
  75       1.6    20    18    3.61×10^-13    7.11×10^-15
  100      1.3    15    12    3.53×10^-13    3.55×10^-15
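The rate columns in Tables 1–3 are presumably the usual empirical convergence orders: since the time-step is h = T/n, an error behaving as O(h^p) gives p ≈ log(e(n1)/e(n2)) / log(n2/n1) for two consecutive values of n. The following minimal Python sketch (not taken from the paper; the helper name is ours) reproduces, under this assumption, the rates of the first block of Table 1:

import math

def convergence_rates(ns, errors):
    # empirical order p ~ log(e1/e2) / log(n2/n1), since h = T/n (hypothetical helper)
    return [math.log(e1 / e2) / math.log(n2 / n1)
            for (n1, e1), (n2, e2) in zip(zip(ns, errors), zip(ns[1:], errors[1:]))]

# Gauss 1 applied to the sine-Gordon equation (first block of Table 1)
ns  = [2000, 3000, 4000, 5000, 6000]
e_u = [4.61e-2, 2.05e-2, 1.15e-2, 7.37e-3, 5.12e-3]
print([round(r, 1) for r in convergence_rates(ns, e_u)])  # -> [2.0, 2.0, 2.0, 2.0]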
Table 2. Numerical solution of the nonlinear Schrödinger Equation (94) using a time-step h = 20/n.

Gauss 1
  n      time    e_uv           rate    e_1            e_2            e_H            rate
  400    19.1    4.93×10^-01            1.60×10^-14    2.01×10^-16    9.58×10^-04
  600    26.3    2.32×10^-01    1.9     4.75×10^-14    6.70×10^-16    1.79×10^-04    4.1
  800    33.2    1.31×10^-01    2.0     2.38×10^-14    1.39×10^-16    5.55×10^-05    4.1
  1000   40.7    8.41×10^-02    2.0     2.22×10^-14    5.20×10^-17    2.25×10^-05    4.0

Gauss 2
  n      time    e_uv           rate    e_1            e_2            e_H            rate
  400    41.3    1.71×10^-03            3.13×10^-14    3.12×10^-17    5.20×10^-08
  600    60.4    3.41×10^-04    4.0     1.91×10^-14    2.43×10^-17    2.14×10^-09    7.9
  800    72.1    1.08×10^-04    4.0     1.91×10^-14    2.78×10^-17    2.18×10^-10    7.9
  1000   84.7    4.44×10^-05    4.0     2.00×10^-14    3.82×10^-17    3.69×10^-11    8.0

HBVM(2,1)
  n      time    e_uv           rate    e_1            rate    e_2            rate    e_H
  400    36.3    5.23×10^-01            1.40×10^-04            4.25×10^-06            3.55×10^-15
  600    50.7    2.45×10^-01    1.9     2.64×10^-05    4.1     7.96×10^-07    4.1     3.55×10^-15
  800    60.1    1.38×10^-01    2.0     8.23×10^-06    4.1     2.47×10^-07    4.1     4.00×10^-15
  1000   74.5    8.89×10^-02    2.0     3.35×10^-06    4.0     1.01×10^-07    4.0     4.00×10^-15

HBVM(4,2)
  n      time    e_uv           rate    e_1            rate    e_2            rate    e_H
  400    43.2    1.74×10^-03            6.66×10^-09            1.83×10^-10            4.44×10^-15
  600    61.4    3.47×10^-04    4.0     2.70×10^-10    7.9     7.48×10^-12    7.9     3.55×10^-15
  800    77.0    1.10×10^-04    4.0     2.74×10^-11    8.0     7.61×10^-13    7.9     4.00×10^-15
  1000   92.3    4.52×10^-05    4.0     4.63×10^-12    8.0     1.29×10^-13    8.0     3.55×10^-15

SHBVM
  n      time    k     s     e_uv           e_1            e_2            e_H
  50     55.0    20    18    3.13×10^-11    1.35×10^-14    6.59×10^-17    4.88×10^-15
  75     53.6    16    14    2.27×10^-11    1.33×10^-14    7.29×10^-17    3.11×10^-15
  100    61.6    14    12    2.47×10^-11    1.40×10^-14    6.25×10^-17    3.11×10^-15
Table 3. Numerical solution of the Korteweg–de Vries Equation (96) using a time-step h = 10/n.

Gauss 1
  n        time    e_u            rate    e_H            rate
  10,000    5.1    1.10×10^+00            9.38×10^-07
  20,000    8.6    2.75×10^-01    2.0     5.85×10^-08    4.0
  30,000   13.8    1.22×10^-01    2.0     1.16×10^-08    4.0
  40,000   17.6    6.88×10^-02    2.0     3.67×10^-09    4.0
  50,000   20.7    4.40×10^-02    2.0     1.50×10^-09    4.0

Gauss 2
  n        time    e_u            rate    e_H            rate
  10,000    9.4    9.33×10^-05            7.30×10^-12
  20,000   18.1    5.84×10^-06    4.0     1.99×10^-13    5.2
  30,000   25.3    1.15×10^-06    4.0     2.84×10^-13    ***
  40,000   31.3    3.65×10^-07    4.0     6.54×10^-13    ***
  50,000   39.5    1.50×10^-07    4.0     7.25×10^-13    ***

HBVM(2,1)
  n        time    e_u            rate    e_H
  10,000    9.4    9.65×10^-01            6.39×10^-14
  20,000   16.6    2.42×10^-01    2.0     5.68×10^-14
  30,000   21.8    1.08×10^-01    2.0     7.11×10^-14
  40,000   29.0    6.05×10^-02    2.0     6.39×10^-14
  50,000   33.6    3.87×10^-02    2.0     6.39×10^-14

HBVM(3,2)
  n        time    e_u            rate    e_H
  10,000   12.8    8.49×10^-05            5.68×10^-14
  20,000   24.4    5.32×10^-06    4.0     5.68×10^-14
  30,000   34.6    1.05×10^-06    4.0     6.39×10^-14
  40,000   43.0    3.32×10^-07    4.0     5.68×10^-14
  50,000   54.5    1.36×10^-07    4.0     7.11×10^-14

SHBVM
  n      time    k     s     e_u            e_H
  400     7.2    20    18    1.31×10^-11    4.26×10^-14
  600     7.4    16    14    3.70×10^-12    4.26×10^-14
  800     8.6    14    12    4.75×10^-12    4.26×10^-14
