Next Article in Journal
Optimality of a Network Monitoring Agent and Validation in a Real Probe
Previous Article in Journal
Overview of Identification Methods of Autoregressive Model in Presence of Additive Noise
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Article

A Parametric Method Optimised for the Solution of the (2+1)-Dimensional Nonlinear Schrödinger Equation

by
Zacharias A. Anastassi
1,*,
Athinoula A. Kosti
1 and
Mufutau Ajani Rufai
2
1
Institute of Artificial Intelligence, School of Computer Science and Informatics, De Montfort University, Leicester LE1 9BH, UK
2
Department of Mathematics, University of Bari Aldo Moro, 70125 Bari, Italy
*
Author to whom correspondence should be addressed.
Mathematics 2023, 11(3), 609; https://doi.org/10.3390/math11030609
Submission received: 20 December 2022 / Revised: 16 January 2023 / Accepted: 21 January 2023 / Published: 26 January 2023

Abstract

:
We investigate the numerical solution of the nonlinear Schrödinger equation in two spatial dimensions and one temporal dimension. We develop a parametric Runge–Kutta method with four of their coefficients considered as free parameters, and we provide the full process of constructing the method and the explicit formulas of all other coefficients. Consequently, we produce an adaptable method with four degrees of freedom, which permit further optimisation. In fact, with this methodology, we produce a family of methods, each of which can be tailored to a specific problem. We then optimise the new parametric method to obtain an optimal Runge–Kutta method that performs efficiently for the nonlinear Schrödinger equation. We perform a stability analysis, and utilise an exact dark soliton solution to measure the global error and mass error of the new method with and without the use of finite difference schemes for the spatial semi-discretisation. We also compare the efficiency of the new method and other numerical integrators, in terms of accuracy versus computational cost, revealing the superiority of the new method. The proposed methodology is general and can be applied to a variety of problems, without being limited to linear problems or problems with oscillatory/periodic solutions.

1. Introduction

We consider the (2+1)-dimensional nonlinear Schrödinger (NLS) equation of the form:
i u t + a u x x + b u y y + c | u | 2 u + d u = 0 , u ( x , y , t = 0 ) = u 0 ( x , y ) ,
where i = 1 , a , b , c , d R , u ( x , y , t ) : R 3 C is a complex function of the spatial variables x , y and the temporal variable t and u 0 ( x , y ) : R 2 C . The term i u t denotes the temporal evolution, the terms a u x x and b u y y denote the dispersion with respect to x and y, respectively, while c | u | 2 u is a nonlinear term, whose introduction is motivated by several applications. Equation (1) can represent atomic Bose–Einstein condensates (BECs), in which case u expresses the mean-field function of the matter-wave, or, if applied in the context of nonlinear optics for the study of optical beams [1], u describes the complex electric field envelope, t is the propagation distance, and x , y are the transverse coordinates [2,3].
The analytical solution of the NLS equation has attracted great interest in recent years, especially when the solutions are solitons [4,5,6,7]. Additionally, the numerical computation of the NLS equation is a critical part of the verification process of analytical theories. Different strategies have been adopted to solve the NLS equation, its linear counterpart, or differential equations with similar behaviour. A very significant factor in the efficiency of the computation lies in the time integrator; this is the case for both the scalar forms and vector forms after applying the method of lines. Preferred time integrators for the Schrödinger equation include Runge–Kutta(–Nyström) (RK/RKN) methods [8,9,10,11] and multistep methods [12,13,14,15]. RK/RKN methods are especially well-established, with various tools for achieving a high order of accuracy and obtaining intrinsic properties for specific problems, e.g., in [16,17], specialised RK methods were developed and optimised with differential evolution algorithms; in [18,19,20,21], RK/RKN methods were constructed for problems with periodic/oscillatory behaviour using fitting techniques; finally, in [22], a hybrid block method was produced for the efficient solution of differential systems. Many of the aforementioned techniques are targeted towards linear differential equations, ordinary differential equations, problems with oscillatory/periodic solutions or combinations of these. Here, we propose a general approach that can be applied to a problem without these limitations, and is tailored to a specific nonlinear partial differential equation that does not always exhibit periodic behaviour.
Following our previous work in [9], where we investigated the numerical solution of the NLS equation in (1+1) dimensions, here we extend this study to problems in (2+1) dimensions; that is, two dimensions in space and one in time. However, since this is a problem with increased significance that could be experimentally verified, we decided to follow a different approach and develop a method that is tailored to the efficient numerical solution of the problem. To achieve this, we initially developed a new parametric RK method, with as many coefficients as possible treated as free parameters. In this way, we produced an adaptable method with four degrees of freedom, which can be applied to a plethora of problems and specifically tailored to their efficient solution. Subsequently, in the case of problem (1), we selected the optimal values that correspond to the method with the minimum global error when integrating the problem for various step sizes. We chose to construct a method with six algebraic orders and eight stages, a maximised real stability interval, and coefficients with similar orders of magnitude, to minimise the round-off error.
The structure of this paper is as follows:
  • In Section 2, we present the necessary theoretical concepts;
  • In Section 3, we show the construction and analysis of the new RK method;
  • In Section 4, we report the numerical experiments and results;
  • In Section 5, we provide a discussion of the results and future perspectives;
  • In Section 6, we communicate our conclusions.

2. Theory

2.1. Explicit Runge–Kutta Methods

For the numerical solution of Equation (1),
let ( x , y , t ) [ x L , x R ] × [ y L , y R ] × [ 0 , t R ] and u l , m n denotes an approximation of u ( x l , y m , t n ) , where x l = x L + l Δ x , l = 0 , 1 , , L 1 , L = x R x L Δ x + 1 , Δ x > 0 , y m = y L + m Δ y , m = 0 , 1 , , M 1 , M = y R y L Δ y + 1 , Δ y > 0 , t n = n h , n = 0 , 1 , , N 1 , N = t R h + 1 , h > 0 .
An s stage explicit Runge–Kutta method for the solution of Equation (1) is presented below:
u l , m n + 1 = u l , m n + h i = 1 s b i k i k i = f ( t n + c i h , u l , m n + h j = 1 i 1 a i j k j ) , i = 1 , , s o r c A b
where A = [ a i j ] R s × s is strictly lower triangular, b = [ b i ] T R s , c = [ c i ] R s , a i j , b i , c i R , i = 1 , 2 , , s and j = 1 , 2 , , i 1 are the coefficient matrices, h = Δ t is the step size in time, and f is defined in u = f ( x , y , t , u ) .

2.2. Algebraic Order Conditions

According to rooted tree analysis [23], there are 37 equations ( q 1 q 37 ) that must be satisfied to obtain a Runge–Kutta method of sixth algebraic order and 7 additional equations for an explicit Runge–Kutta method with eight stages ( q 38 q 44 ) , with a total of 44 equations, as seen below in the set of Equation (4).
q 1 = b e 1 , q 2 = b c 1 2 , q 3 = b c 2 1 3 , q 4 = b A c 1 6 , q 5 = b c 3 1 4 , q 6 = b C A c 1 8 , q 7 = b A c 2 1 12 , q 8 = b A 2 c 1 24 , q 9 = b c 4 1 5 , q 10 = b C 2 A c 1 10 , q 11 = b C A c 2 1 15 , q 12 = b C A 2 c 1 30 , q 13 = b A c 3 1 20 , q 14 = b A C A c 1 40 , q 15 = b A 2 c 2 1 60 , q 16 = b A 3 c 1 120 , q 17 = b A c 2 1 20 , q 18 = b c 5 1 6 , q 19 = b C 3 A c 1 12 , q 20 = b C 2 A c 2 1 18 , q 21 = b C A c 2 1 24 , q 22 = b C 2 A 2 c 1 36 , q 23 = b C A c 3 1 24 , q 24 = b C A C A c 1 48 , q 25 = b ( A c ) ( A c 2 ) 1 36 , q 26 = b C A 3 c 1 144 , q 27 = b ( A c ) ( A 2 c ) 1 72 , q 28 = b C A 2 c 2 1 72 , q 29 = b A c 4 1 30 , q 30 = b A C 2 A c 1 60 , q 31 = b A C A c 2 1 90 , q 32 = b A C A 2 c 1 180 , q 33 = b A A c 2 1 120 , q 34 = b A 2 c 3 1 120 , q 35 = b A 2 C A c 1 240 , q 36 = b A 3 c 2 1 360 , q 37 = b A 4 c 1 720 , q i + 36 = ( A e c ) i , where i = 2 ( 1 ) 8 .
Here, e = [ 1 , , 1 ] T R s , the diagonal matrix C = d i a g ( c ) R s × s has C i i = c i , i = 1 , 2 , , s , the operator * denotes element-wise multiplication and, finally, the powers of c, C and A c are defined as element-wise powers, i.e., c 3 = c c c , whereas the powers of A are defined normally, i.e., A 3 = A A A .

2.3. Stability

We consider the problem
u = i ω u , u ( 0 ) = u 0 ω , u 0 R ,
with exact solution u ( t ) = u 0 e i ω t , which represents the circular orbit on the complex plane and ω its frequency.
Equation (5), solved numerically by the RK high-order method of Equation (3), yields the solution u m n + 1 = R n u m n , where R ( v ) = A s ( v 2 ) + i v B s ( v 2 ) is called the stability polynomial; v = ω h and A s , B s are polynomials in v 2 . The exact solution of Equation (5) satisfies the relation u ˜ ( t n + 1 ) = u ˜ ( t n ) e i v n . For a generic v = v n , since | e i v | = 1 and arg ( e i v ) = v we have:
Definition 1. 
For the method of Equation (3), if | R ( v R ) | < 1 and | R ( v R ϵ ) | > 1 , for every v I R and every suitably small positive ϵ, then the real stability interval is I R = ( v R , 0 ) .
Definition 2. 
The stability region is defined as the set S = z C : | R ( z ) | < 1 .

3. Construction and Analysis

3.1. Criteria

For the construction of the new method, we aimed to satisfy the following criteria:
  • Parametric method with as many as possible coefficients treated as free parameters.
  • Minimum global error when integrating problem (1) for various step sizes.
  • Sixth algebraic order and eight stages, which implies 44 equations (4).
  • Maximised real stability interval, based on Definition 2.
  • Coefficients with similar orders of magnitude, to minimise the round-off error.

3.2. Parametric Method—General Case

We began the development by considering the 44 equations (4). There are 43 variables involved in these equations, and we fixed 10 of these, namely
c 3 = 1 6 , c 6 = 1 2 , c 8 = 1 , b 2 = 0 , b 3 = 0 , a 42 = 0 , a 52 = 0 , a 62 = 0 , a 72 = 0 , a 82 = 0 .
Of the remaining 33 variables, 29 are contained in S 1 = a 21 , a 31 , a 32 , a 41 , a 43 , a 51 , a 53 , a 54 , a 61 , a 63 , a 64 , a 65 , a 71 , a 73 , a 74 , a 75 , a 76 , a 81 , a 83 , a 84 , a 85 , a 86 , a 87 , b 1 , b 4 , b 5 , b 6 , b 7 , b 8 and 4 are contained in S 2 = c 2 , c 4 , c 5 , c 7 . We aimed to solve the system of 44 equations for the 29 variables in S 1 , considering the 4 variables in S 2 as free parameters, thus effectively creating a general family of methods with 4 degrees of freedom. In order to solve the highly nonlinear system of equations, we used the mathematical software Maple. In Table 1 we solved the 29 equations in the specified order, solving for the variable mentioned after the equation. The order of equations and the selected variables are important, and were chosen so that the corresponding system after the variable substitution is the least complex. Otherwise, the system solution has impractical computation times.
After solving the system of 29 equations, and due to our initial fixed values of 10 coefficients, all 44 equations are now satisfied, including the ones we did not explicitly solve. All coefficients, except for the initially fixed ones, now depend on c 2 , c 4 , c 5 , c 7 :
b 1 = 10 c 4 c 5 c 7 c 4 c 5 c 7 + 1 60 c 4 c 5 c 7 b 4 = c 5 1 + c 7 60 c 4 c 7 1 + c 4 c 4 c 5 2 c 4 1 c 4 b 5 = c 4 + 1 c 7 60 2 c 5 1 c 5 c 5 c 7 c 4 c 5 c 5 1 b 6 = 80 2 c 7 1 c 5 40 c 7 + 24 c 4 + 40 c 7 + 24 c 5 + 24 c 7 16 15 2 c 7 1 2 c 5 1 2 c 4 1 b 7 = c 4 + c 5 1 120 c 7 60 c 5 c 7 c 4 c 7 c 7 1 + c 7 b 8 = 10 c 7 10 c 5 10 c 7 + 9 c 4 + 10 c 7 + 9 c 5 + 9 c 7 8 60 c 7 1 c 5 1 1 + c 4 a 21 = c 2 a 31 = 1 + 12 c 2 72 c 2 a 32 = 1 72 c 2 a 41 = 3 c 4 2 + c 4 a 43 = 3 c 4 2 a 51 = 30 c 4 3 c 5 10 c 4 3 45 c 4 2 c 5 + 15 c 4 c 5 2 + 14 c 4 2 + 6 c 4 c 5 4 c 5 2 3 c 4 + c 5 c 5 2 c 4 5 c 4 2 5 c 4 + 1 a 53 = 90 c 4 3 c 5 2 + 150 c 5 2 + 12 c 5 c 4 2 + 45 c 5 3 + 33 c 5 2 3 c 5 c 4 12 c 5 3 30 c 4 3 35 c 4 2 + 11 c 4 1 a 54 = c 5 15 c 4 c 5 4 c 4 4 c 5 + 1 c 4 c 5 60 c 4 4 70 c 4 3 + 22 c 4 2 2 c 4
a 61 = 1 48 1200 c 7 780 c 5 2 + 900 c 7 + 540 c 5 + 150 c 7 90 c 4 4 + 2100 c 7 + 1410 c 5 2 + 1590 c 7 974 c 5 285 c 7 + 175 c 4 3 + 1380 c 7 945 c 5 2 + 1029 c 7 + 645 c 5 + 195 c 7 123 c 4 2 + 399 c 7 + 279 c 5 2 + 291 c 7 188 c 5 57 c 7 + 37 c 4 + 42 c 7 30 c 5 2 + 30 c 7 + 20 c 5 + 6 c 7 4 c 5 c 4 10 c 4 c 5 c 7 5 c 4 c 5 5 c 4 c 7 5 c 5 c 7 + 3 c 4 + 3 c 5 + 3 c 7 2 5 c 4 2 5 c 4 + 1 1 a 63 = 180 c 4 4 c 5 + 300 c 5 + 90 c 7 66 c 4 3 + 90 c 7 + 225 c 5 81 c 7 + 63 c 4 2 + 69 c 7 87 c 5 + 15 c 7 12 c 4 12 1 + c 7 c 5 8 6 c 4 1 10 c 4 c 5 c 7 5 c 4 c 5 5 c 4 c 7 5 c 5 c 7 + 3 c 4 + 3 c 5 + 3 c 7 2 5 c 4 2 5 c 4 + 1 a 64 = 75 c 4 1 2 2 c 4 2 c 5 10 c 4 c 5 c 7 5 c 4 c 5 5 c 4 c 7 5 c 5 c 7 + 3 c 4 + 3 c 5 + 3 c 7 2 6 c 4 1 c 4 5 c 4 2 5 c 4 + 1 c 7 3 5 c 5 2 + 9 c 7 10 + 31 60 c 5 + 1 5 c 7 3 25 c 4 3 + 7 6 c 7 + 3 4 c 5 2 + c 7 11 18 c 5 11 c 7 50 + 5 36 c 4 2 + 5 c 7 12 17 60 c 5 2 + 33 c 7 100 + 19 90 c 5 + 7 c 7 100 41 900 c 4 + 7 c 7 150 + 1 30 c 5 2 + 1 30 c 7 1 45 c 5 c 7 150 + 1 225 1 a 65 = 2 c 5 1 2 c 4 1 15 c 4 c 7 9 c 4 6 c 7 + 4 48 c 5 10 c 4 c 5 c 7 5 c 4 c 5 5 c 4 c 7 5 c 5 c 7 + 3 c 4 + 3 c 5 + 3 c 7 2 c 4 c 5 a 71 = 10 c 7 c 5 c 4 2 c 4 + 1 5 c 4 + c 5 1 c 4 2 5 c 5 c 7 2 + 1 4 + c 5 2 + 1 10 c 5 c 7 4 5 c 5 2 + 7 c 5 20 c 4 4 + 2 5 + c 5 c 7 3 + 7 20 c 5 2 + c 5 c 7 2 + 9 c 5 2 10 1 10 c 5 + 23 60 c 7 + 13 c 5 2 10 187 c 5 300 c 4 3 + 7 5 c 5 + 3 5 c 7 3 + 31 300 + 7 5 c 5 2 3 c 5 20 c 7 2 + 23 c 5 150 11 60 3 c 5 2 20 c 7 7 c 5 2 10 + 53 c 5 150 c 4 2 + 7 25 + 3 5 c 5 c 7 3 + c 5 100 + 43 300 3 4 c 5 2 c 7 2 + 21 c 5 2 50 13 c 5 75 + 2 75 c 7 + 11 c 5 2 100 17 c 5 300 c 4 + 3 c 7 25 2 3 c 5 + 1 3 c 7 2 + c 5 2 2 9 c 7 3 4 c 5 2 + 11 c 5 36 a 73 = 90 c 7 c 4 + c 5 1 6 c 4 1 5 c 4 2 5 c 4 + 1 c 4 4 c 5 + c 7 2 5 3 c 5 c 7 + 2 15 c 4 3 + 7 6 c 7 2 + c 5 + 7 6 c 7 1 6 c 4 2 + 1 2 c 5 + 7 30 c 7 2 + 1 2 c 5 7 30 c 7 1 6 c 5 + 1 30 c 4 2 15 c 7 1 + c 7 c 5 a 74 = 300 c 7 c 4 c 7 2 c 4 1 c 4 + c 5 1 c 4 c 5 6 c 4 1 c 4 5 c 4 2 5 c 4 + 1 c 7 3 5 c 5 2 + 13 60 + 1 5 c 7 c 7 2 c 5 + 2 5 c 7 2 + 1 75 4 c 7 15 c 4 4 + 47 c 7 30 + 1 c 5 2 + 1 3 c 7 + 47 c 7 2 30 133 360 c 5 2 3 c 7 2 + 103 c 7 225 7 300 c 4 3 + 53 c 7 60 3 5 c 5 2 + 47 c 7 300 5 6 c 7 2 + 139 600 c 5 + 19 c 7 2 50 79 c 7 300 + 7 600 c 4 2 + 131 c 7 600 + 47 300 c 5 2 + 9 c 7 2 50 13 c 7 600 113 1800 c 5 1 600 13 c 7 2 150 + 107 c 7 1800 c 4 + 3 200 + 1 50 c 7 c 5 2 + 11 1800 c 7 2 75 c 5 + c 7 2 150 c 7 225 a 75 = c 7 c 5 c 7 c 4 c 7 60 c 4 c 5 c 7 36 c 4 c 5 24 c 4 c 7 24 c 5 c 7 + 15 c 4 + 16 c 5 + 12 c 7 8 12 c 5 1 2 c 5 c 4 c 5 c 4 + c 5 1 a 76 = 4 2 c 7 1 c 5 c 7 c 4 c 7 c 7 5 c 4 c 5 2 c 4 2 c 5 + 1 2 c 5 1 2 c 4 1 c 4 + c 5 1 a 81 = 300 c 5 2 + 450 c 5 150 c 7 2 + 240 c 5 2 315 c 5 + 135 c 7 + 30 c 5 30 c 4 4 + 750 c 5 2 1080 c 5 + 390 c 7 2 + 600 c 5 2 + 817 c 5 370 c 7 + 30 c 5 2 120 c 5 + 90 c 4 3 + 705 c 5 2 + 972 c 5 360 c 7 2 + 600 c 5 2 815 c 5 + 362 c 7 60 c 5 2 + 156 c 5 96 c 4 2 + 297 c 5 2 381 c 5 + 138 c 7 2 + 267 c 5 2 + 351 c 5 147 c 7 + 36 c 5 2 78 c 5 + 42 c 4 + 42 c 5 2 + 51 c 5 18 c 7 2 + 39 c 5 2 50 c 5 + 20 c 7 6 c 5 1 2 300 c 5 c 7 c 4 2 c 4 + 1 / 5 c 4 1 c 5 1 c 7 c 5 + 9 10 c 4 + c 5 + 9 10 c 7 + 9 c 5 10 4 / 5 1 a 83 = 90 c 4 4 c 5 + 150 c 5 90 c 7 + 78 c 4 3 + 45 c 7 135 c 5 + 93 c 7 78 c 4 2 + 57 c 7 + 72 c 5 18 c 7 + 15 c 4 + 12 1 + c 7 c 5 10 c 4 c 5 c 7 10 c 4 c 5 10 c 4 c 7 10 c 5 c 7 + 9 c 4 + 9 c 5 + 9 c 7 8 6 c 4 1 5 c 4 2 5 c 4 + 1 a 84 = 300 1 + c 4 2 c 4 1 c 4 c 7 c 4 c 5 10 c 4 c 5 c 7 10 c 4 c 5 10 c 4 c 7 10 c 5 c 7 + 9 c 4 + 9 c 5 + 9 c 7 8 6 c 4 1 c 4 5 c 4 2 5 c 4 + 1 1 c 7 3 5 c 5 2 + 49 60 7 5 c 7 c 5 + 2 5 c 7 19 75 c 4 5 + c 7 2 + 11 10 16 c 7 15 c 5 2 + 511 360 + 22 c 7 15 + 7 5 c 7 2 c 5 2 5 c 7 2 + 379 900 2 5 c 7 c 4 4 + 1 5 c 7 + 5 3 c 7 2 37 60 c 5 2 + 373 c 7 900 143 c 7 2 60 + 33 50 c 5 + 113 c 7 2 150 113 c 7 450 17 120 c 4 3 + c 7 2 + 347 c 7 600 + 23 300 c 5 2 + 271 c 7 300 + 853 c 7 2 600 + 1 45 c 5 73 c 7 2 150 + 39 c 7 100 109 1800 c 4 2 + 31 c 7 150 + 51 c 7 2 200 + 11 600 c 5 2 + 103 c 7 2 300 + 533 c 7 1800 1 18 c 5 + 29 900 + 73 c 7 2 600 221 c 7 1800 c 4 + 1 300 7 c 7 2 300 + 13 c 7 600 c 5 2 + 1 36 c 7 + 1 150 + 17 c 7 2 600 c 5 c 7 2 100 + c 7 90 1 300 a 85 = 1 + c 4 c 5 1 2 c 5 1 2 c 5 c 5 c 7 c 4 c 5 c 4 2 5 c 7 3 5 c 4 + 4 15 c 5 2 + c 4 + 2 5 c 7 2 + 1 10 c 4 1 15 c 7 + 7 c 4 20 2 15 c 5 + 1 2 c 4 3 10 c 7 2 + 9 c 4 20 + 1 3 c 7 + 1 10 c 4 1 10 1 + c 4 c 7 c 4 + 9 10 c 5 + c 4 + 9 10 c 7 + 9 c 4 10 4 5 1 a 86 = 160 1 + c 4 c 5 1 2 c 7 1 10 c 4 c 5 c 7 10 c 4 c 5 10 c 4 c 7 10 c 5 c 7 + 9 c 4 + 9 c 5 + 9 c 7 8 2 c 5 1 2 c 4 1 c 5 1 2 c 4 1 2 c 5 + 3 10 c 7 2 + 5 4 c 5 + 7 10 c 4 + 7 c 5 10 9 20 c 7 + 3 8 c 5 1 4 c 4 1 4 c 5 + 7 40 a 87 = c 5 1 1 + c 4 c 4 + c 5 1 1 + c 7 20 c 7 1 2 c 5 c 7 c 4 c 7 c 7 c 5 1 c 4 c 5 + 9 10 c 7 + c 5 + 9 10 c 4 + 9 c 5 10 4 5 1

3.3. Parametric Method—Optimal Case

After generating the parametric method, we aimed to identify the optimal method with the best performance. In order to achieve this, we selected different step-lengths and integrated problem (1). The optimal values of the four coefficients are considered the ones that yield higher accuracy results for all different step-lengths. The latter were chosen to be Δ t = { 0.1 , 0.04 , 0.01 } , so that they can yield results with an accuracy of different orders of magnitudes. These step-lengths return results of above 4, 6.5 and 10 accurate decimal digits, respectively. These accuracy values were selected to be above average while still allowing for a plethora of feasible solutions/coefficient combinations, and will later be used as benchmarks. We are interested in the quadruples
( c 2 , c 4 , c 5 , c 7 ) , where c 2 , c 4 , c 5 , c 7 [ 0 , 1 ] ,
for which the method has the highest accuracy. The optimisation process is as follows:
  • We evaluate the accuracy at each grid point of the mesh defined by
    c i = j Δ c , i = 2 , 4 , 5 , 7 , j = 0 , 1 , , C 1 , C = 1 Δ c + 1 , Δ c > 0 .
    In the first iteration, we choose the coefficient step-length Δ c = 0.1 and integration step-length Δ t = 0.1 , and identify the regions with maximum accuracy. We observe that all high efficient quadruples appear to satisfy the constraint
    c 2 c 4 c 5 c 7 ,
    as is also the case with the majority of RK methods. Thus, for the subsequent iterations, we impose this constraint in advance, which drastically reduces the computation cost without sacrificing the accuracy.
  • For the next two iterations, we use Δ c = 0.1 , together with Δ t = 0.04 and Δ t = 0.01 , and follow the procedure of step 1. We identify the intersection of all regions for which a certain quadruple has a higher accuracy than its corresponding benchmark.
  • We repeat this process for Δ c = 0.01 combined with Δ t = { 0.1 , 0.04 , 0.01 } , as described in step 1, but only within the regions narrowed down by step 2, i.e.,
    ( c 2 , c 4 , c 5 , c 7 ) [ 0.01 , 0.16 ] × [ 0.18 , 0.22 ] × [ 0.31 , 0.45 ] × [ 0.52 , 0.92 ] .
  • We run the process one last time for Δ c = 0.001 combined with Δ t = { 0.1 , 0.04 , 0.01 } , within the regions updated by step 3, i.e.,
    ( c 2 , c 4 , c 5 , c 7 ) [ 0.001 , 0.010 ] × [ 0.205 , 0.213 ] × [ 0.435 , 0.445 ] × [ 0.909 , 0.915 ] .
The optimal quadruple is primarily chosen with respect to the accuracy, and secondarily considering the robustness of the solution. The latter is expressed by the absence of sensitivity of the solution when a coefficient value deviates from the optimal value. This is the reason why no further local optimisation is needed and why we stop at three decimal digits of accuracy for the optimal coefficient values.
The optimal values returned by the optimisation process are c 2 = 0.007 , c 4 = 0.208 , c 5 = 0.441 , c 7 = 0.915 . These can be exactly expressed as rational numbers c 2 = 7 1000 , c 4 = 26 125 , c 5 = 441 1000 , c 7 = 183 200 . By substituting these values to the other coefficients, we obtain the particular case that is optimal for this problem, as seen below in Table 2.

3.4. Error and Stability Analysis

We perform local truncation error analysis on the method of Table 2, based on the Taylor expansion series of the difference
ε = u l , m n + 1 u ( x l , y m , t n + 1 ) .
The principal term of the local truncation error is evaluated as
ε = 227406522263 6682354278516000 u ( 7 ) ( x ) h 7 ,
which implies that while, locally, the order of accuracy is seven, globally, the order of the new method is six (see [23] for explanation).
Regarding the stability of the method, following the methodology of Section 2.3, we evaluated the stability polynomial of the method presented in Table 2. The polynomial is given by
R = 1 + z + 1 2 z 2 + 1 6 z 3 + 1 24 z 4 + z 5 120 + z 6 720 + 156922488841 z 7 954622039788000 + 948518207 z 8 39775918324500 .
The stability analysis was carried out numerically in a mesh around the origin with Δ x = Δ y = 0.01 , and the stability region is shown in Figure 1, as the grid points that satisfy Definition 2. Furthermore, with a similar procedure performed on the real axis and according to Definition 1, the real stability interval is 3.99 , 0 .

4. Numerical Experiments

4.1. Theoretical Solution and Mass Conservation Law

Equation (1) has a dark soliton solution given by
u ( x , y , t ) = γ c tanh γ 2 β k 1 x + l 1 y 2 a t + ξ 0 e i k 2 x + l 2 y c 2 t + η 0
where β = a k 1 2 + b l 1 2 and γ = c 2 + d a k 2 2 b l 2 2 [7]. We also choose, as in [7],
a = 1 , b = 1 , c = 1 , d = 1 , k 1 = 1 , k 2 = 1 , l 1 = 0 , l 2 = 0 , c 2 = 2 , ξ 0 = 0 , η 0 = 0 .
For this particular case, the theoretical solution becomes
u ( x , y , t ) = 2 tanh x 2 t e i x 2 t
and
u ( x , y , t ) 2 = 2 tanh x 2 t 2 ,
which is called density.
Furthermore, the solution u ( x , y , t ) of Equation (1) satisfies the mass conservation law [4]:
M [ u ] ( t ) = R 2 | u ( x , y , t ) | 2 d x d y M [ u ] ( 0 ) .

4.2. Numerical Solution

We performed a numerical computation of problem (1), utilising the method of lines for ( x , y , t ) [ 20 , 30 ] × [ 1 , 1 ] × [ 0 , 5 ] . We used Equation (11) for both the initial condition and the boundary conditions. Regarding the semi-discretisation of u x x and u y y , we chose a combination of 10th-order forward, central and backward finite difference schemes. Next, we defined the maximum absolute solution error and the maximum relative mass error of the numerical computation.
The maximum absolute solution error is given by
max l , m , n ( | u l , m n u ( x l , y m , t n ) | ) .
where u l , m n denotes the numerical approximation of u ( x l , y m , t n ) .
The maximum relative mass error over the region D is given by
max n M D n M D 0 M D 0 ,
where M D n is the numerical approximation of M [ u ] ( t n ) over the region D = [ 20 , 30 ] × [ 1 , 1 ] for t [ 0 , 5 ] , which provides a sufficiently constant density outside of it, as seen in Figure 2.
In Table 3, the maximum absolute solution error and the maximum relative mass error for different step sizes are shown. The Δ t is the maximum value that yields a stable solution for the preselected values of Δ x and Δ y .
In fact, the errors in Table 3 correspond to the total error of both the finite difference schemes for the semi-discretisation along x and y, and the integrator along time t. For the purpose of eliminating the semi-discretisation error and to analyse the performance of the time integrator alone, we used the second derivatives u x x , u y y of the solution (11) instead of applying the finite difference schemes. We used Δ x = Δ y = 0.1 and Δ t = 0.01 , and the results are presented in Figure 2, Figure 3, Figure 4, Figure 5, Figure 6 and Figure 7.
The numerical solution of the dark soliton, namely, u ( x , y , t = 5 ) 2 , u ( x , y , t = 5 ) and u ( x , y , t = 5 ) when solved by the new method of Table 2, are presented in Figure 2, Figure 3 and Figure 4 respectively. As expected from the physical properties of the dark soliton, this preserves its form while moving from x = 0 (for t = 0 ) to x = 10 (for t = 5 ).
Furthermore, we present the propagation of the dark soliton, namely u ( x , y = 0 , t ) 2 , u ( x , y = 0 , t ) and u ( x , y = 0 , t ) when solved by the new method, in Figure 5, Figure 6 and Figure 7, respectively.

4.3. Error

We evaluated the error of the numerical approximation compared to the theoretical solution. We used two different implementations: in the first one, we used the second derivatives of the known solution and we chose Δ x = Δ y = 0.1 and Δ t = 0.01 ; in the second one, we used finite difference schemes for the spatial semi-discretisation and we chose Δ x = Δ y = 0.1 and Δ t = 0.006 .
More specifically, we present
  • The maximum, along t n [ 0 , 5 ] , solution error max n ( | u l , m n u ( x l , y m , t n ) | ) without and with the use of finite difference schemes in Figure 8 and Figure 9, respectively;
  • The maximum, along y m [ 1 , 1 ] , solution error max m ( | u l , m n u ( x l , y m , t n ) | ) without and with the use of finite difference schemes in Figure 10 and Figure 11, respectively;
  • The maximum, along x l [ 20 , 30 ] , solution error max l ( | u l , m n u ( x l , y m , t n ) | ) without and with the use of finite difference schemes in Figure 12 and Figure 13, respectively;
  • The mass error M D n M D 0 M D 0 versus t n = [ 0 , 5 ] without and with the use of finite difference schemes in Figure 14 and Figure 15 respectively.
It is important to understand that, while Figure 8, Figure 10, Figure 12 and Figure 14 represent the error of the time-stepper alone, this is not the case for Figure 9, Figure 11, Figure 13 and Figure 15, where we observe the total error of the time-stepper in addition to the error of the space semi-discretisation finite difference scheme. Furthermore, the error of the finite difference scheme is dominant, and this is why the error in the second case is larger by approximately two orders of magnitude. We observe that, without the use of finite difference schemes for the spatial semi-discretisation, the maximum error is located near the area of the initial centre of the dark soliton x = 0 , without being propagated along with the dark soliton centre itself, as seen in Figure 8. However, with the use of finite difference schemes for spatial semi-discretisation, the error is propagated differently along x and y, as seen in Figure 9, mainly along the track of the soliton peak x [ 0 , 10 ] . Regarding the error propagation in time, the error without the finite difference schemes gradually increases, as seen in Figure 10 and Figure 12, while with the finite difference schemes it rapidly increases in the first time steps, then slightly decreases and stays at the same order of magnitude with small oscillations, as seen in Figure 11 and Figure 13. This is due to the effect of the space semi-discretisation scheme on the error, in combination with the time integrator. The oscillations are of a numerical nature and are a known phenomenon caused by the linear finite difference schemes applied in the spatial semi-discretisation. The same oscillations are also observed in the mass error with finite difference schemes, as seen in Figure 15, but are absent in Figure 14, where no finite difference schemes are applied.

4.4. Efficiency

We measured the efficiency of the new method with other methods from the literature and present the results in Figure 16 and Figure 17.
The compared methods are presented below:
  • The new parametric method (Table 2)
  • The method of Papageorgiou et al. [24]
  • The method of Kosti et al. (I) [9]
  • The method of Dormand et al. [23]
  • The method of Kosti et al. (II) [10]
  • The method of Fehlberg et al. [23]
  • The method of Triantafyllidis et al. [11]
The efficiency of all compared methods in terms of the maximum absolute solution error, as expressed by (13), versus the function evaluations is presented in Figure 16. The order of accuracy of each method is represented by the slope of its corresponding efficiency line. Indeed, we verify that, except for two methods with fifth order and, thus, a smaller slope, five methods have sixth order, for which the lines are almost parallel. Additionally, we present the maximum absolute solution error versus the CPU time in seconds in Figure 17. In order to avoid fluctuations, we repeated each solution 10 times and used the median CPU time. We observe that the two graphs are similar, as the main contributor to the total computation time is the function evaluations. Furthermore, we see that the new optimised method is more efficient than all the other methods for all error orders.

5. Discussion

The numerical results are robust, exhibiting high efficiency among all step size values, with and without the use of finite difference schemes for the spatial semi-discretisation. Furthermore, the error propagation in time is typical of other explicit RK methods. The time integrator error is better showcased when no finite difference schemes are used, as in the case with finite difference schemes the error of the spatial discretisation method is dominant.
In general, the development of new RK methods that are more efficient than established methods in the literature is a challenging task, as one step of the method construction involves the solution of a system of numerous highly nonlinear equations. This is especially true for high-order methods that require a high number of stages, where the only way to solve the system is to use simplifying assumptions and/or fixed values for a set of coefficients. To our knowledge, there have been no explicit RK methods with order six or higher produced without the use of pre-determined coefficient values or simplifying assumptions. The solution with hand calculations is impossible, even for methods that use some assumptions, and even when using computer algebra software, as the solution is limited by the available computer memory and computational power. Leaving some variables free until the end of the system solution increases the complexity even further, but allows for the parameterisation of the method. Here, we managed to find a combination of a minimum number of fixed coefficients that still yields a solution with four free coefficients. The latter provide the method with some adaptability due to the four degrees of freedom. This versatility permits further optimisation, allowing for the construction of a method tailored to the nonlinear Schrödinger equation. The proposed methodology is general and can be applied to many problems, without being limited to linear problems or problems with oscillatory/periodic solutions.
Although the methodology offers satisfactory results, it also has limitations, namely the need to manually select the initially fixed coefficients and, most importantly, the actual fixed coefficients themselves, which hinder the optimisation potential of the method’s accuracy. Ideally, we aim to develop an optimisation process that concurrently maximises the number of free coefficients, minimises the number of fixed coefficients, and leads to a solvable system. Additionally, in our future work, we could combine this technique with other established techniques for better results, e.g., fitting techniques for problems with periodic solutions.

6. Conclusions

In this paper, we investigated the numerical solution of the (2+1)-dimensional nonlinear Schrödinger equation. We developed a parametric sixth-order eight-stage explicit Runge–Kutta method with four of their coefficients treated as free parameters/degrees of freedom, and we provided the full process of constructing the method and the explicit formulas of all other coefficients. We optimised the new parametric method to obtain the optimal Runge–Kutta method that performs efficiently by numerical testing. We performed stability analysis, and we utilised an exact dark soliton solution to measure the global error and the mass error of the new method. We also compared the efficiency of the new method and other numerical integrators, revealing the superiority of the new method.

Author Contributions

Conceptualisation, Z.A.A.; formal analysis, Z.A.A., A.A.K. and M.A.R.; investigation, Z.A.A., A.A.K. and M.A.R.; methodology, Z.A.A., A.A.K. and M.A.R.; verification, Z.A.A., A.A.K. and M.A.R.; writing—original draft, Z.A.A.; writing—review and editing, Z.A.A., A.A.K. and M.A.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

We thank the anonymous reviewers for their useful comments and remarks.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Moloney, J.V.; Newell, A.C. Nonlinear optics. Phys. D Nonlinear Phenom. 1990, 44, 1–37. [Google Scholar] [CrossRef]
  2. Pitaevskii, L.; Stringari, S. Bose-Einstein Condensation; Oxford University Press: Oxford, UK, 2003. [Google Scholar]
  3. Malomed, B. Multi-Component Bose-Einstein Condensates: Theory. In Emergent Nonlinear Phenomena in Bose-Einstein Condensates: Theory and Experiment; Springer: Berlin/Heidelberg, Germany, 2008; pp. 287–305. [Google Scholar] [CrossRef]
  4. Guardia, M.; Hani, Z.; Haus, E.; Maspero, A.; Procesi, M. Strong nonlinear instability and growth of Sobolev norms near quasiperiodic finite gap tori for the 2D cubic NLS equation. J. Eur. Math. Soc. 2022. published online first. [Google Scholar] [CrossRef]
  5. Tsitoura, F.; Anastassi, Z.A.; Marzuola, J.L.; Kevrekidis, P.G.; Frantzeskakis, D.J. Dark Solitons Potential Nonlinearity Steps. Phys. Rev. A 2016, 94, 063612. [Google Scholar] [CrossRef] [Green Version]
  6. Tsitoura, F.; Anastassi, Z.A.; Marzuola, J.L.; Kevrekidis, P.G.; Frantzeskakis, D.J. Dark soliton scattering in symmetric and asymmetric double potential barriers. Phys. Lett. A 2017, 381, 2514–2520. [Google Scholar] [CrossRef] [Green Version]
  7. Feng, D.; Jiao, J.; Jiang, G. Optical solitons and periodic solutions of the (2+1)-dimensional nonlinear Schrödinger’s equation. Phys. Lett. A 2018, 382, 2081–2084. [Google Scholar] [CrossRef]
  8. Fang, Y.; Yang, Y.; You, X.; Ma, L. Modified THDRK methods for the numerical integration of the Schrödinger equation. Int. J. Mod. Phys. C 2020, 31, 2050149. [Google Scholar] [CrossRef]
  9. Kosti, A.A.; Colreavy-Donnelly, S.; Caraffini, F.; Anastassi, Z.A. Efficient Computation of the Nonlinear Schrödinger Equation with Time-Dependent Coefficients. Mathematics 2020, 8, 374. [Google Scholar] [CrossRef] [Green Version]
  10. Kosti, A.A.; Anastassi, Z.A.; Simos, T.E. An optimized explicit Runge–Kutta method with increased phase-lag order for the numerical solution of the Schrödinger equation and related problems. J. Math. Chem. 2010, 47, 315. [Google Scholar] [CrossRef]
  11. Triantafyllidis, T.V.; Anastassi, Z.A.; Simos, T.E. Two optimized Runge–Kutta methods for the solution of the Schrödinger equation. MATCH Commun. Math. Comput. Chem. 2008, 60, 3. [Google Scholar]
  12. Zhang, Y.; Fang, Y.; You, X.; Liu, G. Trigonometrically-fitted multi-derivative linear methods for the resonant state of the Schrödinger equation. J. Math. Chem. 2018, 56, 1250–1261. [Google Scholar] [CrossRef]
  13. Shokri, A.; Mehdizadeh Khalsaraei, M. A new implicit high-order six-step singularly P-stable method for the numerical solution of Schrödinger equation. J. Math. Chem. 2021, 59, 224–249. [Google Scholar] [CrossRef]
  14. Shokri, A.; Vigo-Aguiar, J.; Mehdizadeh Khalsaraei, M.; Garcia-Rubio, R. A new implicit six-step P-stable method for the numerical solution of Schrödinger equation. J. Math. Chem. 2019, 97, 802–817. [Google Scholar] [CrossRef]
  15. Obaidat, S.; Mesloub, S. A New Explicit Four-Step Symmetric Method for Solving Schrödinger’s Equation. Mathematics 2019, 7, 1124. [Google Scholar] [CrossRef] [Green Version]
  16. Jerbi, H.; Ben Aoun, S.; Omri, M.; Simos, T.E.; Tsitouras, C. A Neural Network Type Approach for Constructing Runge–Kutta Pairs of Orders Six and Five That Perform Best on Problems with Oscillatory Solutions. Mathematics 2022, 10, 827. [Google Scholar] [CrossRef]
  17. Kovalnogov, V.N.; Fedorov, R.V.; Khakhalev, Y.A.; Simos, T.E.; Tsitouras, C. A Neural Network Technique for the Derivation of Runge–Kutta Pairs Adjusted for Scalar Autonomous Problems. Mathematics 2021, 9, 1842. [Google Scholar] [CrossRef]
  18. Anastassi, Z.A.; Kosti, A.A. A 6(4) optimized embedded Runge–Kutta–Nyström pair for the numerical solution of periodic problems. J. Comput. Appl. Math. 2015, 275, 311–320. [Google Scholar] [CrossRef]
  19. Kosti, A.A.; Anastassi, Z.A. Explicit almost P-stable Runge–Kutta–Nyström methods for the numerical solution of the two-body problem. Comput. Appl. Math. 2015, 34, 647–659. [Google Scholar] [CrossRef]
  20. Demba, M.; Senu, N.; Ismail, F. A 5(4) Embedded Pair of Explicit Trigonometrically-Fitted Runge–Kutta–Nyström Methods for the Numerical Solution of Oscillatory Initial Value Problems. Math. Comput. Appl. 2016, 21, 46. [Google Scholar] [CrossRef] [Green Version]
  21. Ahmad, N.A.; Senu, N.; Ismail, F. Phase-Fitted and Amplification-Fitted Higher Order Two-Derivative Runge–Kutta Method for the Numerical Solution of Orbital and Related Periodical IVPs. Math. Probl. Eng. 2017, 2017, 1871278. [Google Scholar] [CrossRef]
  22. Ramos, H.; Rufai, M.A. An adaptive one-point second-derivative Lobatto-type hybrid method for solving efficiently differential systems. Int. J. Comput. Math. 2022, 99, 1687–1705. [Google Scholar] [CrossRef]
  23. Butcher, J.C. Trees and numerical methods for ordinary differential equations. Numer. Alg. 2010, 53, 153–170. [Google Scholar] [CrossRef]
  24. Papageorgiou, G.; Tsitouras, C.; Papakostas, S.N. Runge-Kutta pairs for periodic initial value problems. Computing 1993, 51, 151–163. [Google Scholar] [CrossRef]
Figure 1. Stability region of the new Runge–Kutta method of Table 2.
Figure 1. Stability region of the new Runge–Kutta method of Table 2.
Mathematics 11 00609 g001
Figure 2. The graph of u ( x , y , t = 5 ) 2 , evaluated numerically.
Figure 2. The graph of u ( x , y , t = 5 ) 2 , evaluated numerically.
Mathematics 11 00609 g002
Figure 3. The graph of u ( x , y , t = 5 ) , evaluated numerically.
Figure 3. The graph of u ( x , y , t = 5 ) , evaluated numerically.
Mathematics 11 00609 g003
Figure 4. The graph of u ( x , y , t = 5 ) , evaluated numerically.
Figure 4. The graph of u ( x , y , t = 5 ) , evaluated numerically.
Mathematics 11 00609 g004
Figure 5. The graph of u ( x , y = 0 , t ) 2 , evaluated numerically.
Figure 5. The graph of u ( x , y = 0 , t ) 2 , evaluated numerically.
Mathematics 11 00609 g005
Figure 6. The graph of u ( x , y = 0 , t ) , evaluated numerically.
Figure 6. The graph of u ( x , y = 0 , t ) , evaluated numerically.
Mathematics 11 00609 g006
Figure 7. The graph of u ( x , y = 0 , t ) , evaluated numerically.
Figure 7. The graph of u ( x , y = 0 , t ) , evaluated numerically.
Mathematics 11 00609 g007
Figure 8. The maximum, along t, absolute error of the solution versus x and y without the use of finite difference schemes for the spatial semi-discretisation.
Figure 8. The maximum, along t, absolute error of the solution versus x and y without the use of finite difference schemes for the spatial semi-discretisation.
Mathematics 11 00609 g008
Figure 9. The maximum, along t, absolute error of the solution versus x and y with the use of finite difference schemes for the spatial semi-discretisation.
Figure 9. The maximum, along t, absolute error of the solution versus x and y with the use of finite difference schemes for the spatial semi-discretisation.
Mathematics 11 00609 g009
Figure 10. The maximum, along y, absolute error of the solution versus t and x without the use of finite difference schemes for the spatial semi-discretisation.
Figure 10. The maximum, along y, absolute error of the solution versus t and x without the use of finite difference schemes for the spatial semi-discretisation.
Mathematics 11 00609 g010
Figure 11. The maximum, along y, absolute error of the solution versus t and x with the use of finite difference schemes for the spatial semi-discretisation.
Figure 11. The maximum, along y, absolute error of the solution versus t and x with the use of finite difference schemes for the spatial semi-discretisation.
Mathematics 11 00609 g011
Figure 12. The maximum, along x, absolute error of the solution versus t and y without the use of finite difference schemes for the spatial semi-discretisation.
Figure 12. The maximum, along x, absolute error of the solution versus t and y without the use of finite difference schemes for the spatial semi-discretisation.
Mathematics 11 00609 g012
Figure 13. The maximum, along x, absolute error of the solution versus t and y with the use of finite difference schemes for the spatial semi-discretisation.
Figure 13. The maximum, along x, absolute error of the solution versus t and y with the use of finite difference schemes for the spatial semi-discretisation.
Mathematics 11 00609 g013
Figure 14. The mass error versus t without the use of finite difference schemes for the spatial semi-discretisation.
Figure 14. The mass error versus t without the use of finite difference schemes for the spatial semi-discretisation.
Mathematics 11 00609 g014
Figure 15. The mass error versus t with the use of finite difference schemes for the spatial semi-discretisation.
Figure 15. The mass error versus t with the use of finite difference schemes for the spatial semi-discretisation.
Mathematics 11 00609 g015
Figure 16. The maximum absolute error of the solution versus the function evaluations, for all compared methods.
Figure 16. The maximum absolute error of the solution versus the function evaluations, for all compared methods.
Mathematics 11 00609 g016
Figure 17. The maximum absolute error of the solution versus the CPU time, for all compared methods.
Figure 17. The maximum absolute error of the solution versus the CPU time, for all compared methods.
Mathematics 11 00609 g017
Table 1. The order in which the equations are solved, followed by the variable for which each one is solved.
Table 1. The order in which the equations are solved, followed by the variable for which each one is solved.
 1. Solve q 38 for a 21 11. Solve q 5 for b 5 21. Solve q 7 for a 63
 2. Solve q 39 for a 32 12. Solve q 9 for b 4 22. Solve q 12 for a 83
 3. Solve q 40 for a 43 13. Solve q 18 for b 1 23. Solve q 27 for a 73
 4. Solve q 41 for a 54 14. Solve q 10 for a 86 24. Solve q 8 for a 61
 5. Solve q 42 for a 65 15. Solve q 6 for a 75 25. Solve q 23 for a 84
 6. Solve q 43 for a 76 16. Solve q 19 for a 64 26. Solve q 13 for a 71
 7. Solve q 1 for b 8 17. Solve q 4 for a 53 27. Solve q 29 for a 81
 8. Solve q 44 for a 87 18. Solve q 21 for a 41 28. Solve q 28 for a 51
 9. Solve q 2 for b 7 19. Solve q 20 for a 85 29. Solve q 26 for a 31
10. Solve q 3 for b 6 20. Solve q 11 for a 74
Table 2. The Butcher tableau of the optimised Runge–Kutta method.
Table 2. The Butcher tableau of the optimised Runge–Kutta method.
7 1000 7 1000
1 6 229 126 125 63
26 125 1222 15625 0 2028 15625
441 1000 19009080531 114608000000 0 66139404093 68324000000 282673503 227382272
1 2 498423839357 3017837416104 0 33701852355 35356371844 60281311022875 49428207837912 10922552000 159518175579
183 200 711091869877829 844775568000000 0 588657911761 177642400000 37397696814038267 56562846567552000 96981681064 12666142125 70978718489 10498312500
1 144640010763881 19166802690126 0 61948580865 2548536443 2761988623890181125 796823971990899034 1484898545416000 23819350452933 2715651824886 53337237643 36691083000000 42192182210017
114713 2098278 00 17382812500 46429926543 512500000000 803168516241 1034962 1072443 1300000000 4807323563 149203 5644782
Table 3. The solution error and the mass relative error for different step sizes.
Table 3. The solution error and the mass relative error for different step sizes.
Δ x Δ y Δ t Solution ErrorMass Error
0.50.50.14 2.67 × 10 3 9.38 × 10 6
0.20.20.024 1.20 × 10 6 1.73 × 10 9
0.10.10.006 1.72 × 10 9 2.59 × 10 12
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Anastassi, Z.A.; Kosti, A.A.; Rufai, M.A. A Parametric Method Optimised for the Solution of the (2+1)-Dimensional Nonlinear Schrödinger Equation. Mathematics 2023, 11, 609. https://doi.org/10.3390/math11030609

AMA Style

Anastassi ZA, Kosti AA, Rufai MA. A Parametric Method Optimised for the Solution of the (2+1)-Dimensional Nonlinear Schrödinger Equation. Mathematics. 2023; 11(3):609. https://doi.org/10.3390/math11030609

Chicago/Turabian Style

Anastassi, Zacharias A., Athinoula A. Kosti, and Mufutau Ajani Rufai. 2023. "A Parametric Method Optimised for the Solution of the (2+1)-Dimensional Nonlinear Schrödinger Equation" Mathematics 11, no. 3: 609. https://doi.org/10.3390/math11030609

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.

Article Metrics

Back to TopTop