Article

An Optimal Family of Block Techniques to Solve Models of Infectious Diseases: Fixed and Adaptive Stepsize Strategies

1 Department of Mathematics and Statistics, College of Science, King Faisal University, Hafuf 31982, Saudi Arabia
2 Department of Basic Sciences and Related Studies, Mehran University of Engineering & Technology, Jamshoro 6062, Pakistan
3 Department of Mathematics, Near East University, Mersin 99138, Turkey
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(5), 1135; https://doi.org/10.3390/math11051135
Submission received: 12 January 2023 / Revised: 17 February 2023 / Accepted: 20 February 2023 / Published: 24 February 2023

Abstract

The contemporary scientific community is very familiar with implicit block techniques for solving initial value problems in ordinary differential equations, since these techniques are cost-effective, consistent and stable, and they typically converge quickly when applied to stiff models. These features are the key motivation for the one-step optimized block technique with two off-grid points developed in the present work. Based on the collocation points, a family of block techniques can be devised, and it is shown that an optimal member of the family can be selected through the leading term of the local truncation error. A theoretical analysis is carried out covering the order of convergence, consistency, zero-stability, linear stability, order stars, and the local truncation error. Through numerical simulations of models from epidemiology and related areas, it is demonstrated that the technique outperforms several existing methodologies with comparable characteristics. For the numerical simulations, a number of models from different areas of medical and life science were considered, including the SIR model from epidemiology, the ventricular arrhythmia (lidocaine) model from pharmacology, the biomass transfer model from plant ecology, and a few more.

1. Introduction

In this study, we focus on finding numerically close solutions to first-order initial-value problems (IVPs) for ordinary differential equations (ODEs) of the type
\phi'(x) = f(x, \phi(x)), \quad \phi(x_0) = \phi_0, \quad x \in [x_0, x_N], \quad \phi, f \in \mathbb{R}^d,
where d is the dimension of the system. With an initial value ϕ_0 and a continuous function f that satisfies a Lipschitz condition, the existence and uniqueness theorem holds for this problem (see [1]). The numerical approximation to the theoretical solution at x_n is denoted by ϕ_n ≈ ϕ(x_n). In order to obtain a theoretical solution to ODEs in a computationally efficient manner, numerical methods are necessary [2,3] because they may be used to describe physical events, such as the movement of objects in space or the flow of liquids through pipes. ODEs find extensive applications in the fields of engineering, physics, logistic damping effects in chemotaxis models, epidemiology, applied microbiology and biotechnology [4,5,6,7]. In this way, numerical approaches give us the tools we need to solve ODEs quickly and accurately. The primary advantage of numerical methods is that they provide solutions to ODEs that may be implemented rapidly and without the requirement for integration over extended periods of time. Because numerical approaches can account for the effects of a variable’s values over time, they are often more accurate than analytical approximations. In general, numerical approaches are crucial for solving ODEs because they permit more rapid and precise solution derivation. Engineers and scientists can examine the behavior of a system in many scenarios much more rapidly and precisely when they use numerical approaches [8,9].
It is of the utmost importance to develop numerical approaches that are both exact and efficient for tackling the challenges related to ordinary differential equations, since ordinary differential equations find widespread applicability in the current world. In this particular setting, the body of previous research provides a variety of distinct numerical methods as viable answers to the problem at hand. Many different domains, including chemistry, flame propagation, computational fluid dynamics, population dynamics, engineering, microbiology and mathematical biology, require approximate solutions to challenging problems; nevertheless, the bulk of available numerical methods do not meet the requirements. Models that exhibit non-linearity and stiffness call for numerical methods with unbounded stability regions, and such methods are often computationally expensive [10,11].
One of the main weaknesses of numerical methods for solving ODEs is that they can be computationally expensive. Numerical methods require the use of large amounts of data to create a solution, and this requires a significant amount of computing power and time to calculate. Additionally, numerical methods can be inaccurate if the data used are not precise enough or if the steps taken in the numerical solution are too small. Another limitation of numerical methods is that they typically rely on approximations of the true solution, and this can lead to inaccuracies in the results. Additionally, numerical methods can produce a limited number of solutions, as they are often designed to solve specific types of ODEs. Finally, numerical methods can be difficult to debug and modify, as they involve complex algorithms and equations. Overall, numerical methods for solving differential equations have their strengths and weaknesses, and it is important to consider both when deciding which method to use. With careful consideration and proper implementation, numerical methods can provide accurate and efficient solutions to ODEs [12].
Due to high computing cost (an extremely small step size Δx → 0) or a limited stability region, most conventional methods, such as explicit Runge–Kutta, the Lobatto family, the multi-step Adams family, and higher-order multi-derivative types, are not employed. However, implicit block procedures are recommended since they are self-starting, computationally robust (cheap), extremely accurate, quickly convergent, and mathematically stable (with A- or L-stability features). The most significant advantages of block approaches are their ability to initiate themselves and to prevent overlapping between different parts of the solution. However, the stability regions of some block approaches are extremely small, and this is a drawback of the methodology suggested here [13,14]. In future research work, this shortcoming will be removed.
By applying multiple formulas to the IVP at once, as in the block strategy, they are able to improve upon one another and provide a more accurate estimate. ODEs of the form (1) have been tackled using block methods in a number of publications [15,16], and there are block methods with several features for tackling higher-order problems as described in [17,18]. Modern computer algebra systems (CASs), such as MAPLE and MATHEMATICA, provide a plethora of pre-built numerical code functions that simplify the process of getting numerical approximations to the theoretical solution. These programs are optimized for solving problems with variable step sizes that can have a wide variety of solutions, including those that are stiff, non-stiff, singular, etc. Due to the numerical instability of some numerical approaches, stiff systems are among the most difficult systems to solve. Notable academics have expended a lot of effort to develop more effective means of addressing such problems. There are already numerical approaches for solving (1), but utilizing a variable step-size formulation with an appropriate error estimation can increase the accuracy and rate of convergence. In this study, we also aim to find a new one-step way to find the approximate solution to first-order IVPs of the type given in (1), using both fixed and adaptive step-size formulations.
The remaining sections of the paper are as follows: The next part (Section 2) of the present article will discuss the new technique’s creation. In Section 3, we analyze the proposed procedure. The suggested technique’s adaptive step-size formulation is described in Section 4, and its implementation is covered in Section 4.1. In Section 5, we provide several real-world examples drawn from the natural sciences to demonstrate the effectiveness and accuracy of the proposed approach. Finally, a brief conclusion is provided in Section 6, which also includes suggestions for moving forward.

2. Mathematical Formulation

In this part, we derive the one-step optimized block method with two off-grid points, while considering d = 1 in (1) to facilitate the derivation. The off-grid points are optimized by using the main formula’s local truncation error, denoted by L. Let us take into consideration the partition x_0 < x_1 < ⋯ < x_N of the integration interval [x_0, x_N], with constant step-length Δx = x_{n+1} − x_n, n = 0, 1, …, N − 1. Taking this into consideration, we assume an approximation to the theoretical solution of (1) by an appropriate interpolating polynomial of the form given below:
\phi(x) \approx S(x) = \sum_{j=0}^{4} \varphi_j x^j,
where φ_j ∈ ℝ represent real unknown parameters. Differentiating (2) with respect to x, one obtains
\phi'(x) \approx S'(x) = \sum_{j=1}^{4} j \varphi_j x^{j-1}.
Similarly, differentiating (3) with respect to x, one obtains
\phi''(x) \approx S''(x) = \sum_{j=2}^{4} j(j-1) \varphi_j x^{j-2}.
Take into consideration the two off-grid points x_{n+u} = x_n + uΔx and x_{n+v} = x_n + vΔx with 0 < u < v < 1 in order to calculate the estimated solution of the IVP (1) at the point x_{n+1} under the assumption that ϕ_n = ϕ(x_n). It is crucial to note at this point that the local truncation error of the main formula will be used to compute the optimal values of these two off-grid points. Consider that the polynomial S(x) in (2) and its first derivative in (3) are evaluated at the point x_n, while the second derivative of S(x) in (4) is evaluated at the points x_{n+u}, x_{n+v}, x_{n+1}, as follows:
S(x_n) = \phi(x_n), \quad S'(x_n) = f_n, \quad S''(x_{n+u}) = g_{n+u}, \quad S''(x_{n+v}) = g_{n+v}, \quad S''(x_{n+1}) = g_{n+1}.
The above setting results in a system of five equations in the five real unknown parameters (φ_j, j = 0, 1, …, 4). These equations can be arranged in matrix form as follows:
\begin{pmatrix} 1 & x_n & x_n^2 & x_n^3 & x_n^4 \\ 0 & 1 & 2x_n & 3x_n^2 & 4x_n^3 \\ 0 & 0 & 2 & 6x_{n+u} & 12x_{n+u}^2 \\ 0 & 0 & 2 & 6x_{n+v} & 12x_{n+v}^2 \\ 0 & 0 & 2 & 6x_{n+1} & 12x_{n+1}^2 \end{pmatrix} \begin{pmatrix} \varphi_0 \\ \varphi_1 \\ \varphi_2 \\ \varphi_3 \\ \varphi_4 \end{pmatrix} = \begin{pmatrix} \phi_n \\ f_n \\ g_{n+u} \\ g_{n+v} \\ g_{n+1} \end{pmatrix}.
The five unknown coefficients (φ_j, j = 0, 1, …, 4), which are computed by solving the above linear system, are not shown here for the sake of conciseness. However, substituting the values of these five parameters in (2) while utilizing the change of variable x = x_n + tΔx, we reach the following:
S(x_n + t\Delta x) = \varphi_0 \phi_n + \Xi_0 f_n + \Xi_u g_{n+u} + \Xi_v g_{n+v} + \Xi_1 g_{n+1},
where,
\varphi_0 = 1, \quad \Xi_0 = \Delta x\, t, \quad \Xi_u = \frac{\Delta x^2\, t^2 \left(t^2 - 2t - 2vt + 6v\right)}{12(u-v)(u-1)}, \quad \Xi_v = \frac{\Delta x^2\, t^2 \left(t^2 - 2t - 2ut + 6u\right)}{12(v-u)(v-1)}, \quad \Xi_1 = \frac{\Delta x^2\, t^2 \left(t^2 - 2ut - 2vt + 6uv\right)}{12(1-u)(1-v)}.
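The symbolic derivation above can be reproduced with a computer algebra system such as Mathematica, which is also the environment used for the numerical experiments in Section 5. The following minimal sketch solves the collocation system (5) and assembles the continuous scheme (6); the symbol names (s, phin, fn, gu, gv, g1, dx, t) are illustrative placeholders and not notation from the derivation itself.

(* interpolating polynomial (2) and the interpolation/collocation conditions (5) *)
s[x_] = Sum[phi[j] x^j, {j, 0, 4}];
conds = {s[xn] == phin, s'[xn] == fn, s''[xn + u dx] == gu,
         s''[xn + v dx] == gv, s''[xn + dx] == g1};
coeffs = First@Solve[conds, Table[phi[j], {j, 0, 4}]];
(* continuous one-step scheme (6): substitute the coefficients and set x = xn + t dx *)
scheme = Collect[Simplify[s[xn + t dx] /. coeffs], {fn, gu, gv, g1}, Simplify]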
We examine the one-step block approach to obtain S ( x n + t Δ x ) at the collocation points x n + u , x n + v , and  x n + 1 . In (6), we take t = u , v , 1 . The resulting three formulas are as follows:
\phi_{n+u} = \phi_n + u\Delta x\, f_n + \Delta x^2\left[\frac{u^2\left(u^2 - 2u - 2uv + 6v\right)}{12(u-v)(u-1)}\, g_{n+u} + \frac{u^2\left(u^2 - 4u\right)}{12(u-v)(v-1)}\, g_{n+v} + \frac{u^2\left(4uv - u^2\right)}{12(1-u)(1-v)}\, g_{n+1}\right],
\phi_{n+v} = \phi_n + v\Delta x\, f_n + \Delta x^2\left[\frac{v^2\left(4v - v^2\right)}{12(u-v)(u-1)}\, g_{n+u} + \frac{v^2\left(v^2 - 2v - 2uv + 6u\right)}{12(v-u)(v-1)}\, g_{n+v} + \frac{v^2\left(4uv - v^2\right)}{12(1-u)(1-v)}\, g_{n+1}\right],
\phi_{n+1} = \phi_n + \Delta x\, f_n + \Delta x^2\left[\frac{4v - 1}{12(u-v)(u-1)}\, g_{n+u} + \frac{4u - 1}{12(v-u)(v-1)}\, g_{n+v} + \frac{6uv - 2u - 2v + 1}{12(1-u)(1-v)}\, g_{n+1}\right],
where ϕ_{n+i} are approximations of the true solution ϕ(x_n + iΔx), and g_{n+i} = f′(x_{n+i}, ϕ_{n+i}) denotes the total derivative of f along the solution (an approximation to ϕ″(x_{n+i})) for i = u, v, 1. The aforementioned approximations involve the two unknown parameters u, v defining the two off-grid points x_{n+u}, x_{n+v}. We set the first two terms of the local truncation error (L) of (10) to zero to acquire suitable values for these parameters (u and v). By doing so, optimal parameters are acquired, and at the end of the generic subinterval [x_n, x_{n+1}], the only value needed to advance the integration is ϕ_{n+1}. A Taylor expansion allows us to calculate the following local truncation error (L) of the main formula given in (10):
L(\phi(x_{n+1}); \Delta x) = \left(-\frac{uv}{18} - \frac{1}{180} + \frac{u}{72} + \frac{v}{72}\right)\Delta x^5\, \phi^{(5)}(x_n) + \left(-\frac{uv}{96} - \frac{u^2 v}{72} - \frac{uv^2}{72} - \frac{1}{480} + \frac{u^2}{288} + \frac{v}{288} + \frac{u}{288} + \frac{v^2}{288}\right)\Delta x^6\, \phi^{(6)}(x_n) + O(\Delta x^7).
The coefficients of Δx^5 and Δx^6 are equated to zero, and the following optimal values of the parameters (u and v) are obtained from the resulting algebraic system, much as is done in several other approaches to the numerical solution of differential equations (see [19,20]):
u = \frac{1}{3} - \frac{\sqrt{10}}{15}, \qquad v = \frac{1}{3} + \frac{\sqrt{10}}{15}.
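As a quick check, one can verify in Mathematica that the values in (12) annihilate the Δx^5 and Δx^6 contributions in (11). The sketch below recasts the main formula (10) as a quadrature for ϕ″ and evaluates its error on the monomials s^3, s^4 and s^5; this recasting and the helper name err are conveniences of the illustration rather than notation from the paper (err[3]/3! and err[4]/4! are the Δx^5 and Δx^6 coefficients of (11)).

(* weights multiplying g_{n+u}, g_{n+v}, g_{n+1} in the main formula (10) *)
a = (4 v - 1)/(12 (u - v) (u - 1));
b = (4 u - 1)/(12 (v - u) (v - 1));
c = (6 u v - 2 u - 2 v + 1)/(12 (1 - u) (1 - v));
(* quadrature error of (10) when phi''(s) = s^k on the reference interval [0, 1] *)
err[k_] := Integrate[(1 - s) s^k, {s, 0, 1}] - (a u^k + b v^k + c);
opt = {u -> 1/3 - Sqrt[10]/15, v -> 1/3 + Sqrt[10]/15};
FullSimplify[{err[3], err[4], err[5]} /. opt]
(* expected output: {0, 0, -1/1575}; dividing -1/1575 by 5! recovers the 1/189000 in (13) *)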
Substituting the above optimal values into the local truncation errors of the formulae (8) to (10), we obtain the following:
L(\phi(x_{n+u}); \Delta x) = -\frac{250 + 97\sqrt{10}}{10935000}\,\Delta x^5\, \phi^{(5)}(x_n) + O(\Delta x^6), \quad L(\phi(x_{n+v}); \Delta x) = \frac{97\sqrt{10} - 250}{10935000}\,\Delta x^5\, \phi^{(5)}(x_n) + O(\Delta x^6), \quad L(\phi(x_{n+1}); \Delta x) = -\frac{\Delta x^7\, \phi^{(7)}(x_n)}{189000} + O(\Delta x^8).
Putting the optimal values given in (12) into formulae (8) to (10), the following one-step optimized block method with two off-grid points is produced, whereas the pseudo-code for the method (fixed stepsize) is provided in Algorithm 1.
Algorithm 1: Pseudo-code for the optimized hybrid block technique with two intra-step points with a fixed stepsize approach.
Data: x_0, x_N (integration interval), N (number of steps), ϕ_0 (initial value), f
Result: sol (discrete approximate solution of the IVP (1))
1 Let n = 0, Δx = (x_N − x_0)/N
2 Let x_n = x_0, ϕ_n = ϕ_0
3 Let sol = {(x_n, ϕ_n)}
4 Solve (14) to get ϕ_{n+k}, where k = u, v, 1
5 Let sol = sol ∪ {(x_{n+k}, ϕ_{n+k})}, k = u, v, 1
6 Let x_{n+1} = x_n + Δx and take ϕ_{n+1} as the starting value of the next block
7 Let n = n + 1
8 if n = N then
9   go to 13
10 else
11   go to 4
12 end
13 End
\phi_{n+u} = \phi_n + \frac{5 - \sqrt{10}}{15}\,\Delta x\, f_n + \frac{\Delta x^2}{19440}\left[9\left(37 - 5\sqrt{10}\right) g_{n+u} + \left(1189 - 395\sqrt{10}\right) g_{n+v} + 2\left(4\sqrt{10} - 5\right) g_{n+1}\right],
\phi_{n+v} = \phi_n + \frac{5 + \sqrt{10}}{15}\,\Delta x\, f_n + \frac{\Delta x^2}{19440}\left[\left(1189 + 395\sqrt{10}\right) g_{n+u} + 9\left(37 + 5\sqrt{10}\right) g_{n+v} - 2\left(5 + 4\sqrt{10}\right) g_{n+1}\right],
\phi_{n+1} = \phi_n + \Delta x\, f_n + \frac{\Delta x^2}{144}\left[\left(35 + \sqrt{10}\right) g_{n+u} + \left(35 - \sqrt{10}\right) g_{n+v} + 2\, g_{n+1}\right].
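To illustrate how the fixed-stepsize Algorithm 1 can be realized in Mathematica (the environment used in Section 5), the sketch below advances a scalar IVP with the block formulas (14), solving the implicit system at each step with FindRoot. The function name npobmFixed, the starting guesses for the Newton iteration, and the requirement that the user supply both f and its total derivative g = f′ (which furnishes the g-values in (14)) are choices of this illustration, not prescriptions of the paper.

npobmFixed[f_, g_, {x0_, xN_}, phi0_, nSteps_] :=
 Module[{dx = N[(xN - x0)/nSteps], u = 1/3 - Sqrt[10]/15, v = 1/3 + Sqrt[10]/15,
         xn = x0, phin = phi0, fn, rt, sol, yu, yv, y1},
  sol = {{xn, phin}};
  Do[
   fn = f[xn, phin];
   (* implicit block (14): three equations for the stage values yu, yv, y1 *)
   rt = FindRoot[{
      yu == phin + (5 - Sqrt[10])/15 dx fn + dx^2/19440 (9 (37 - 5 Sqrt[10]) g[xn + u dx, yu] +
            (1189 - 395 Sqrt[10]) g[xn + v dx, yv] + 2 (4 Sqrt[10] - 5) g[xn + dx, y1]),
      yv == phin + (5 + Sqrt[10])/15 dx fn + dx^2/19440 ((1189 + 395 Sqrt[10]) g[xn + u dx, yu] +
            9 (37 + 5 Sqrt[10]) g[xn + v dx, yv] - 2 (5 + 4 Sqrt[10]) g[xn + dx, y1]),
      y1 == phin + dx fn + dx^2/144 ((35 + Sqrt[10]) g[xn + u dx, yu] +
            (35 - Sqrt[10]) g[xn + v dx, yv] + 2 g[xn + dx, y1])},
     {{yu, phin}, {yv, phin}, {y1, phin}}];
   xn = xn + dx; phin = y1 /. rt;
   AppendTo[sol, {xn, phin}], {nSteps}];
  sol]

(* usage on the reduced SIR equation of Problem 1: f = (1/2)(1 - phi), so g = f' = -(1/4)(1 - phi) *)
sol = npobmFixed[(1/2) (1 - #2) &, -(1/4) (1 - #2) &, {0, 2}, 0.5, 100];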

3. Theoretical Analysis

In this section, we offer an examination of the fundamental properties of the one-step optimized block approach that was presented in the aforementioned section. These fundamental qualities include the following: order of convergence, zero-stability, linear stability, consistency, and relative measure of stability.

3.1. Order of Convergence

To rewrite the method in matrix notation, the values f_{n+i} for i = 0, u, v, 1 are introduced so that the vectors below can be filled consistently (zero entries appear where a value is not used). Thus, the new proposed optimal block method, abbreviated as NPOBM, takes the following matrix form:
Y_{n+1} = A_0 Y_n + \Delta x\, B_0 F_n + \Delta x^2\, B_1 G_{n+1},
where A 0 , B 0 and B 1 are 3 × 3 matrices given by
A_0 = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \end{pmatrix}, \qquad B_0 = \begin{pmatrix} 0 & 0 & \frac{5 - \sqrt{10}}{15} \\ 0 & 0 & \frac{5 + \sqrt{10}}{15} \\ 0 & 0 & 1 \end{pmatrix},
B_1 = \begin{pmatrix} \frac{9(37 - 5\sqrt{10})}{19440} & \frac{1189 - 395\sqrt{10}}{19440} & \frac{2(4\sqrt{10} - 5)}{19440} \\ \frac{1189 + 395\sqrt{10}}{19440} & \frac{9(37 + 5\sqrt{10})}{19440} & -\frac{2(5 + 4\sqrt{10})}{19440} \\ \frac{35 + \sqrt{10}}{144} & \frac{35 - \sqrt{10}}{144} & \frac{2}{144} \end{pmatrix}.
The matrix form (15) is obtained from Equation (14) by defining the vectors Y_n, Y_{n+1}, F_n, and G_{n+1} as follows:
Y_n = (y_{n-1+u}, y_{n-1+v}, y_n)^T, \quad Y_{n+1} = (y_{n+u}, y_{n+v}, y_{n+1})^T, \quad F_n = (f_{n-1+u}, f_{n-1+v}, f_n)^T, \quad G_{n+1} = (g_{n+u}, g_{n+v}, g_{n+1})^T.

3.2. Accuracy and Consistency

Following the procedure in [21] and assuming that ϕ ( x ) is a sufficiently differentiable function, we define the operator L ¯ given as follows:
\bar{L}[\phi(x); \Delta x] = \sum_{j \in J} \left[\bar{\varphi}_j\, \phi(x_n + j\Delta x) - \Delta x^2\, \bar{\beta}_j\, \phi''(x_n + j\Delta x)\right],
where φ̄_j and β̄_j are real coefficients and J = {0, u, v, 1}. By utilizing the Taylor series about x_n, we expand the formula in Equation (19) and obtain the following identity:
\bar{L}[\phi(x); \Delta x] = T_0\, \phi(x_n) + T_1\, \Delta x\, \phi'(x_n) + T_2\, \Delta x^2\, \phi''(x_n) + \cdots + T_r\, \Delta x^r\, \phi^{(r)}(x_n) + O(\Delta x^{r+1}).
Following the guidelines given in [22], Equation (20) and the associated formulas are said to be of algebraic order r if T 0 = T 1 = = T r = 0 , T r + 1 0 , where T r is the coefficient of the power Δ x r in the Taylor series expansion of (19). The proposed approach developed in (14) has T 0 = T 1 = = T 4 = 0 , and 
T_5 = \left(0,\; 0,\; \frac{1}{189000}\right)^T.
Thus, each formula in the new proposed optimized block method (NPOBM) has at least a fourth algebraic order of convergence. It should be noted that this gives a lower bound on the order. In order to obtain the correct order, the suggested approach can be reformulated as an RK method [23]; in fact, the theory of RK methods provides the method’s order, which is six. We demonstrate that this result is consistent with the numerical data in Section 5. Since the proposed method has an order higher than one, it is consistent according to [24].

3.3. Zero Stability and Convergence

A numerical method for IVPs is called zero-stable if it maintains the stability of the trivial solution (i.e., the solution where all variables are equal to zero) as the solution is advanced in time. This means that any small perturbations in the initial conditions will remain small as the solution is calculated, and will not grow or cause the solution to become unbounded. Zero stability is an important property for numerical methods, as it helps ensure that the method will produce meaningful results, even when the initial conditions are only known approximately. The stability of a numerical method can be assessed through the use of stability analyses, such as the von Neumann stability analysis, or by running the method on test problems and examining the behavior of the solution. If a numerical method is not zero-stable, it may produce solutions that diverge from the true solution, or become unbounded, as the solution is calculated. In such cases, the method may be considered unreliable and may need to be improved or modified in order to produce more accurate results.
In this section, we demonstrate the stability of the suggested one-step optimized block strategy with two off-grid points, and we also discuss the convergence of the proposed method. The concept of zero-stability relates to considering the homogeneous equation ϕ′ = 0 and its discretized version, as written below:
A_0 Y_{\lambda+1} - A_1 Y_{\lambda} = 0,
where A_0 is the matrix shown previously in Section 3.1, whereas A_1 is a 3 × 3 identity matrix. Now, if the discrete algebraic Equation (21) allows solutions that grow in time, then the suggested block technique is not zero-stable and thus not of any use. On the contrary, zero-stability requires that the zeros Ψ_i of the first characteristic polynomial κ(Ψ) = |Ψ A_1 − A_0| fulfill |Ψ_i| ≤ 1, and that the multiplicity of those zeros with |Ψ_i| = 1 does not exceed 1, as stated in [25]. The first characteristic polynomial of the proposed block method (14) is given by
\kappa(\Psi) = \Psi^2\left(\Psi - 1\right).
As a result, the proposed approach suggested in (14) is a zero-stable method. Having the properties of being zero-stable and consistent (as explained in [26]), the proposed method (14) results in a convergent numerical method.

3.4. Linear Stability Analysis

The concept of linear stability is particularly important in the study of dynamical systems, where it is used to determine the long-term behavior of a system and to identify critical points that determine the stability of the system. It is also used in control theory to design controllers that stabilize unstable systems, and in engineering and physics to study the stability of structures and fluid flows. The behavior of the underlying numerical technique in the limit Δx → 0 is connected to the idea of zero stability discussed above. In other situations, though, a practical point of view requires a different notion of stability: to be useful, a numerical method must reliably give results for some non-zero value of the step-length, denoted here by Δx > 0. For the numerical approach under consideration, this kind of behavior is referred to as linear stability, and it involves applying the strategy described by Dahlquist [27] to the linear test problem given by
\phi'(x) = \gamma\, \phi(x), \quad \text{with } \operatorname{Re}(\gamma) < 0.
It is necessary to find the region of the complex plane within which the approximations generated by the numerical technique replicate the behavior of the exact solution of the linear test problem (23). When the optimized block approach from (14) is applied to the linear test problem (23), the following recurrence equation is found:
Y_n = M(z)\, Y_{n-1},
where M ( z ) is the stability matrix defined by
M(z) = \left(A_1 - z^2 B_1\right)^{-1}\left(A_0 + z B_0\right), \quad z = \gamma \Delta x.
By substituting matrices given in Section 3.1 into Equation (16), the  spectral radius is now obtained:
\rho[M(z)] = \frac{-52 z^5 - 612 z^4 - 3840 z^3 - 14640 z^2 - 32400 z - 32400}{z^6 - 42 z^4 + 1560 z^2 - 32400}.
The behavior of the approximate numerical solution is determined by the eigenvalues of the stability matrix, which is denoted by the notation M ( z ) given in Equation (16). This is a characteristic of a numerical method that makes use of the spectral radius. This is a property that is well known and widely used in practice as in [28]. The following set may be used to provide a description of the area of absolute linear stability denoted by A :
A = \{\, z \in \mathbb{C} : \rho[M(z)] < 1 \,\}.
The visual representation in Figure 1 demonstrates that only a part of the complex plane ℂ is contained in the stability region of the optimized block method (14); the method is therefore said to be conditionally stable, since only a portion of ℂ lies in A.
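The stability region in Figure 1 can be reproduced numerically. The following Mathematica sketch assembles the matrices of Section 3.1, forms the stability matrix M(z), and shades the set A where the spectral radius is below one; the plotting window and resolution are illustrative choices.

A0 = {{0, 0, 1}, {0, 0, 1}, {0, 0, 1}};
B0 = {{0, 0, (5 - Sqrt[10])/15}, {0, 0, (5 + Sqrt[10])/15}, {0, 0, 1}};
B1 = {{9 (37 - 5 Sqrt[10])/19440, (1189 - 395 Sqrt[10])/19440, 2 (4 Sqrt[10] - 5)/19440},
      {(1189 + 395 Sqrt[10])/19440, 9 (37 + 5 Sqrt[10])/19440, -2 (5 + 4 Sqrt[10])/19440},
      {(35 + Sqrt[10])/144, (35 - Sqrt[10])/144, 2/144}};
(* spectral radius of M(z) = (I - z^2 B1)^(-1) (A0 + z B0) at a numeric point z *)
rho[z_?NumericQ] :=
  Max[Abs[Eigenvalues[Inverse[IdentityMatrix[3] - z^2 B1] . (A0 + z B0)]]];
RegionPlot[rho[x + I y] < 1, {x, -8, 4}, {y, -6, 6}, PlotPoints -> 60]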

3.5. Relative Measure of Stability

The relative measure of stability, commonly called the order star, is an effective and powerful contemporary tool for numerical methods. Within a consistent framework, it provides essential information, such as order and stability considerations, about the numerical approach under study. For the conditionally stable block method (14), the quantity of interest is ρ(M(z)), which is given by a rational approximation; this form of approximation has additional appealing characteristics. The order star assists in investigating the properties of such a relative approximation in the complex plane. It is essential to keep in mind that the study of a numerical method usually leads to the study of the qualities of the approximation that defines it.
Let P and Q be possibly complex-valued polynomials of degree m and n, respectively, and denote the quotient R(z) = P/Q by R_{mn}. Certainly, a zero of Q is a pole of the rational function R(z). Let F(z) be a complex function. An order star ϱ(z) defines a partition of the complex plane into the triplet {Ω₊, Ω₀, Ω₋}, where
\Omega_+ = \{ z : \varrho(z) > \xi \}, \quad \Omega_0 = \{ z : \varrho(z) = \xi \}, \quad \Omega_- = \{ z : \varrho(z) < \xi \}.
Fundamentally, there are two types of order stars, ϱ ( z ) , that are usually considered in the literature as follows:
\left| R(z)/F(z) \right| \text{ with } \xi = 1, \quad \text{and} \quad \operatorname{Re}\left(R(z) - F(z)\right) \text{ with } \xi = 0.
Some of the properties related to the order stars are stated below:
Property 1 
(Order). R(z) is an order q approximation to F(z) if and only if the origin is adjoined by q + 1 sectors of Ω₊ separated by q + 1 sectors of Ω₋, all of which approach the origin with an asymptotic angle of π/(q + 1). Bounded connected parts of Ω₊ are commonly referred to as fingers, whereas the corresponding parts of Ω₋ are referred to as dual fingers.
Property 2 
(Enumeration). The number of poles (zeros) of R in each bounded connected part of Ω₊ (Ω₋), counted with multiplicity, equals the number of interpolation points (i.e., points such that R(z) = F(z)).
Property 3 
(Unbounded). There are two unbounded connected parts, one belonging to Ω₊ and the other to Ω₋.
It is standard practice to shade the former in order to distinguish it from the latter, Ω₋. When this is done, it becomes evident that the region of growth of relative stability is given by the set Ω₊, while the region of contraction falls within the set Ω₋; as a direct consequence, Ω₀ establishes the boundary between the two. By considering the order star of ρ(M(z)), one may define the following three regions. Because the first kind of order star is of primary interest to us, we consider the following sets:
\Omega_+ = \{ z \in \mathbb{C} : |\rho(M(z))| > |\exp(z)| \} = \{ z \in \mathbb{C} : |\exp(-z)\,\rho(M(z))| > 1 \}, \quad \Omega_0 = \{ z \in \mathbb{C} : |\rho(M(z))| = |\exp(z)| \} = \{ z \in \mathbb{C} : |\exp(-z)\,\rho(M(z))| = 1 \}, \quad \Omega_- = \{ z \in \mathbb{C} : |\rho(M(z))| < |\exp(z)| \} = \{ z \in \mathbb{C} : |\exp(-z)\,\rho(M(z))| < 1 \}.
The graphs of the above sets yield star-like regions (fingers), different from the familiar regions of absolute stability. These fingers are shown in Figure 2.
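Figure 2 can be reproduced in the same environment. The sketch below uses the rational expression for ρ(M(z)) obtained in Section 3.4 and shades the set Ω₊ of the first-kind order star; the window and resolution are again illustrative.

(* rational stability function rho(M(z)) from Section 3.4 *)
stab[z_] = -(52 z^5 + 612 z^4 + 3840 z^3 + 14640 z^2 + 32400 z + 32400)/
            (z^6 - 42 z^4 + 1560 z^2 - 32400);
(* Omega_plus: where |exp(-z) rho(M(z))| exceeds one *)
RegionPlot[Abs[Exp[-(x + I y)] stab[x + I y]] > 1, {x, -6, 6}, {y, -6, 6},
  PlotPoints -> 80]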

4. Adaptive Stepsize Approach

In this section, we present an adaptive step-size formulation for the one-step optimal approach that was given earlier. This will make it possible for us to obtain an effective formulation that will supply accurate numerical approximations to the IVP, as we will see in the following section. In the introduction, it is mentioned that in order to make efficient use of numerical methods, one must adjust the step size according to the way the solution behaves locally. An embedding strategy is typically used for this purpose, which means that in addition to the approximation for ϕ n + 1 , an additional lower-order technique must be obtained to estimate the local error (LE) at the final point of each integration step. This is because the approximation for ϕ n + 1 is a higher-order technique. The estimate of the LE can be found by taking the difference between the two approximations. For the considered second-order explicit method given in [29], this choice does not cost any additional computational burden while simulating the differential systems:
y_{n+1} = y_n + \frac{2 h\, f_n^2}{2 f_n - h\, f_n'},
and the local truncation error (LTE) is employed to estimate the local error through the following difference:
EST = \left| \phi_{n+1} - \phi^*_{n+1} \right|.
If |EST| ≤ TOL, where TOL stands for the tolerance predefined by the user, then we accept the results and select the next step as Δx_adaptive = 2 × Δx_old, in order to minimize the computational burden, and continue the integration process with Δx_adaptive under the restriction Δx_min ≤ Δx_adaptive ≤ Δx_max. However, if |EST| > TOL, then we reject the computed results and repeat the calculations with the new step given as follows:
\Delta x_{\text{adaptive}} = \eta\, \Delta x_{\text{old}} \left( \frac{TOL}{\| EST \|} \right)^{\frac{1}{p+1}}.
Here, the order of the lower-order technique (32) is denoted by the value p = 4 , while the value 0 < η < 1 denotes a safety factor whose purpose is to circumvent the steps that were unsuccessful. In the part devoted to numerical examples, we took into consideration both a very modest initial step size as well as a strategy for modifying the step size that, if required, will cause the algorithm to alter the step size. Algorithm 2, in the form of pseudo-code, explains the implementation of the adaptive stepsize version of the proposed optimized block method.
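The accept/reject logic just described can be condensed into a small step-size controller. In the Mathematica sketch below, the function name adaptStep, the default safety factor η = 0.9, and the omission of the Δx_min/Δx_max clipping are illustrative simplifications; EST is assumed to have been computed beforehand from the two approximations as in (31).

adaptStep[dxOld_, est_, tol_, eta_ : 0.9, p_ : 4] :=
  If[est <= tol,
    {True, 2 dxOld},                               (* accept and double the step *)
    {False, eta dxOld (tol/est)^(1/(p + 1))}]      (* reject and shrink via (32) *)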
Algorithm 2: Pseudo-code for the one-step optimized hybrid block method with two intra-step points under the variable stepsize approach.
Data: Initial stepsize: Δx = Δx_0 = Δx_old; x_m := x_0, ϕ_m := ϕ_0; Integration interval: [x_0, x_N]; Initial value: ϕ_0; Function f(x, ϕ(x)); Given tolerance: TOL
Result: Approximations of the solution of problem (1) at selected points.
1 while x_m < x_N do
2   if x_m + Δx > x_N then set Δx = x_N − x_m
3   solve the system of equations in (14) to get the values ϕ_{n+u}, ϕ_{n+v}, ϕ_{n+1}
4   compute the lower-order value ϕ*_{n+1} and set EST = |ϕ_{n+1} − ϕ*_{n+1}|
5   if |EST| ≤ TOL then accept the results, set x_m = x_m + Δx, and substitute Δx_new = 2 × Δx_old
6   if |EST| > TOL then reject the results, determine the new stepsize from (32), and go to step 3
7 end

4.1. Implementation Steps

The proposed method NPOBM given in (14) is implicit, which means that on each step, a system of equations must be solved. Those systems, which provide the approximations at the intra-step and end points of each block, are usually solved using Newton’s method or its variants. Here, in the numerical realization of the scheme, we adopted the solution provided by the command FindRoot in Mathematica 12.1, which uses a damped Newton method. As is well known, this kind of procedure exhibits local convergence, which means that good starting values must be provided. In what follows, we summarize how NPOBM is applied to solve IVPs.
  • Step 1. Choose N and Δx = (x_N − x_0)/N on the partition G_N.
  • Step 2. Using (15) with n = 0, solve for the values (ϕ_u, ϕ_v, ϕ_1)^T and (ϕ′_u, ϕ′_v, ϕ′_1)^T simultaneously on the sub-interval [x_0, x_1], as ϕ_0 and ϕ′_0 are known from the IVP (1).
  • Step 3. Next, for n = 1, the values (ϕ_{1+u}, ϕ_{1+v}, ϕ_2)^T and (ϕ′_{1+u}, ϕ′_{1+v}, ϕ′_2)^T are simultaneously obtained over the sub-interval [x_1, x_2], as ϕ_1 and ϕ′_1 are known from the previous block.
  • Step 4. The process is continued for n = 2, …, N − 1 to obtain the numerical solution of (1) on the sub-intervals [x_0, x_1], [x_1, x_2], …, [x_{N−1}, x_N].

5. Biological Models for Numerical Simulations

This section discusses the numerical simulations of the proposed block method given in (14) on the basis of accuracy via error distributions (absolute maximum global error MaxErr = max_{1≤n≤N} ‖ϕ(x_n) − ϕ_n‖, absolute error computed at the last mesh point of the chosen integration interval LastErr = ‖ϕ(x_N) − ϕ_N‖, norm = sqrt(Σ_{1≤n≤N} (ϕ(x_n) − ϕ_n)²), and root mean square error RMSE = sqrt((1/N) Σ_{1≤n≤N} (ϕ(x_n) − ϕ_n)²)), the precision factor (scd = −log₁₀ ‖ϕ(x_N) − ϕ_N‖), and time efficiency (CPU time measured in seconds). In order to acquire ϕ_1 after successfully solving the system, we took advantage of the well-known and widely used second-order convergent Newton–Raphson approach. The next step in the process involves determining the value ϕ_2 by using the value ϕ_1. As the starting value, we use the value from the preceding block. This process is repeated until the destination point (x_N) is reached. The integration interval is written as [x_0, x_N] because the suggested methodology is a one-step method, while the methods used for comparison are one- and two-step methods. The Newton–Raphson iteration was implemented using the FindRoot command provided in Mathematica 12.1. It is important to mention that all of the numerical calculations were carried out using Mathematica 12.1, installed on a personal computer running Windows with an Intel(R) Core(TM) i7-1065G7 CPU @ 1.30 GHz (1.50 GHz) and 24.0 GB of installed RAM. It is also worth noting that, for the purpose of comparison, we employed one sixth-order approach and at least two fifth-order ones. The following methods are included in the numerical comparative analysis:
  • NPOBM: New proposed optimal block method given in (14).
  • MHIRK: Multi-derivative hybrid implicit Runge–Kutta method with fifth-order convergence, which appeared in [30].
  • RADIIA and IRK: Fully implicit RK-type fifth-order methods, which appeared in [31].
  • Sahi: A-stable hybrid block method of order six, which appeared in [32].
  • LPHBM: Laguerre polynomial hybrid block method of sixth-order convergence, which appeared in [33].
Different kinds of biological models based on differential equations are taken into consideration to analyze the performance of the proposed block technique presented in Equation (14). For the numerical simulations, the following notations were used:
  • Δ x : step-size.
  • Δx_ini: initial step-size.
  • LE: local error estimate.
  • LTE: Local truncation error.
  • MaxErr: Maximum error on the selected grid points over the chosen integration interval.
  • RMSE: Root mean square error on the selected grid points over the chosen integration interval.
  • FFE: Total number of function evaluations.
  • NS: Total number of steps.
  • TOL: Predefined tolerance.
  • CPU: Computational time in seconds.
Problem 1 
(Susceptible–Infected–Recovered (SIR) Model [34]). The SIR model is an epidemiological model that computes the theoretical number of people infected with an infectious illness in a closed community over the course of time. This number is based on the assumption that the disease spreads from person to person. This category of models gets its name from the fact that they use coupled equations to relate the numbers of susceptible, infected, and recovered individuals to one another. The total number of susceptible people is S(x), the total number of infected people is I(x), and the total number of people who have recovered is R(x). This is a useful and simple way to explain a wide range of diseases that spread from person to person. Included are diseases such as measles, mumps, and rubella that can be explained via this well-known model. The flowchart of the model is given in Figure 3.
The nonlinear differential system describing the SIR model is shown below:
\frac{dS}{dx} = \lambda\left(1 - S(x)\right) - \beta I(x) S(x), \quad \frac{dI}{dx} = -\lambda I(x) - \rho I(x) + \beta I(x) S(x), \quad \frac{dR}{dx} = -\lambda R(x) + \rho I(x),
where λ ,   ρ , and β are positive parameters. Define ϕ to be:
ϕ = S + I + R .
By adding all equations in (33), we obtain
\phi'(x) = \lambda\left(1 - \phi(x)\right).
Taking into consideration λ = 1/2 with the initial condition ϕ(0) = 1/2, the exact solution is ϕ(x) = 1 − (1/2) exp(−x/2). The SIR model is simulated with different techniques, and the results are tabulated. Both constant and adaptive stepsize approaches are used. It can be seen in Table 1, Table 2 and Table 3 that the proposed technique (14) not only yields the minimum errors, but it also goes with the highest scd factor, thereby proving the much better performance of (14) in comparison to other well-known block techniques. In addition, the superiority of the proposed technique is maintained, even when the adaptive stepsize approach is utilized with varying values of the tolerance as shown in Table 4. Finally, the efficiency curves produced with each technique under consideration speak volumes in favor of (14) since they take the shortest time to yield the smallest maximum absolute global error as shown in Figure 4.
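The error measures reported in Tables 1–4 can be evaluated directly once a discrete solution is available. The sketch below assumes a list sol of {x, ϕ} pairs, such as the one returned by the fixed-stepsize sketch in Section 2 (that driver, like the variable names here, is an illustration rather than the authors’ code), and compares it with the exact solution of Problem 1.

exact[x_] := 1 - Exp[-x/2]/2;                 (* exact solution of Problem 1 *)
errs    = Abs[exact[#1] - #2] & @@@ sol;      (* pointwise absolute errors   *)
maxErr  = Max[errs];
lastErr = Last[errs];
rmse    = Sqrt[Mean[errs^2]];
scd     = -Log10[lastErr];
{maxErr, lastErr, rmse, scd}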
Problem 2 
(Application on Irregular Heartbeats and Lidocaine model [35]). Clinically, the condition known as ventricular arrhythmia, also known as an irregular heartbeat, may be treated with the medication lidocaine. The medicine must be kept at a bloodstream concentration of 1.5 milligram per liter in order for it to be effective. However, concentrations of the drug in circulation that are over 6 milligram per liter are believed to be deadly in certain people. The exact dose is determined by the body weight of the patient. It has been stated that the highest adult dose for ventricular tachycardia is 3 milligrams per kilogram. The medicine comes in 0.5%, 1%, and 2% concentration solutions that can be kept at room temperature. A model based on differential equations that captures the dynamic nature of pharmacological treatment was designed in [36], whose flowchart (Figure 5) is given as follows:
The set of linear equations describing the above phenomenon is given as follows:
\phi_1'(x) = -0.09\, \phi_1(x) + 0.038\, \phi_2(x), \quad \phi_2'(x) = 0.066\, \phi_1(x) - 0.038\, \phi_2(x).
The physically significant initial data are zero drug in the bloodstream ϕ 1 ( 0 ) = 0 and the injection dosage is ϕ 2 ( 0 ) = 1 . The exact solution for IVP given in (36) is computed as follows:
\phi_1(x) = \frac{19}{4\sqrt{199}}\, e^{-\frac{(16+\sqrt{199})x}{250}}\left(e^{\frac{\sqrt{199}\,x}{125}} - 1\right), \quad \phi_2(x) = \frac{1}{796}\, e^{-\frac{(16+\sqrt{199})x}{250}}\left[398 - 13\sqrt{199} + \left(398 + 13\sqrt{199}\right) e^{\frac{\sqrt{199}\,x}{125}}\right].
The irregular heartbeats and lidocaine model is simulated with different techniques, and the results are tabulated. Both constant and adaptive stepsize approaches are used. It can be seen in Table 5, Table 6 and Table 7 that the proposed technique (14) not only yields minimal errors, but it also goes with the highest scd factor, thereby proving the much better performance of (14) in comparison to other well-known block techniques. In addition, the superiority of the proposed technique is maintained, even when the adaptive stepsize approach is utilized with varying values of the tolerance as shown in Table 8. Finally, the efficiency curves produced with each technique under consideration speak volumes in favor of (14) since they take the shortest time to yield the smallest maximum absolute global error as shown in Figure 6.
Problem 3 
(Nutrient Flow in an Aquarium [37]). This flow is important for maintaining a healthy and balanced aquatic environment, as it supports the growth and survival of plants and animals in the aquarium. Imagine there is a body of water that has a radioactive isotope that is going to be utilized as a tracer for the food chain. The food chain is made up of several types of aquatic plankton, such as A and B. Plankton are defined as creatures that live in water and move with the flow of the water, and may be found in such places as the Chesapeake Bay. There are two different kinds of plankton, which are known as phytoplankton and zooplankton. The phytoplankton are plant-like organisms that float across the water, including diatoms and other types of algae. Animal-like organisms that float in the water known as zooplankton include copepods, larvae, and small crustaceans. Figure 7 explains the phenomenon.
More complex models might consider additional factors, such as the interactions between different types of plants and animals, the effects of different types of filtration systems, and the influence of water temperature and light intensity on the rate of photosynthesis. By considering these relationships, a differential equation model can provide a more comprehensive description of the nutrient flow in an aquarium and help to understand how changes in one aspect of the system might affect other parts of the system. The state variables ϕ_1(x), ϕ_2(x), ϕ_3(x) are explained as follows: ϕ_1(x) represents the concentration of the isotope in the water; ϕ_2(x) represents the concentration of the isotope in A; and ϕ_3(x) represents the concentration of the isotope in B. They are used in the following coupled linear system of differential equations:
\phi_1'(x) = -3\phi_1(x) + 6\phi_2(x) + 5\phi_3(x), \quad \phi_2'(x) = 2\phi_1(x) - 12\phi_2(x), \quad \phi_3'(x) = \phi_1(x) + 6\phi_2(x) - 5\phi_3(x),
with initial conditions ϕ_1(0) = 1, ϕ_2(0) = 0 and ϕ_3(0) = 0. The exact solution is as follows:
\phi_1(x) = \frac{1}{282}\left[180 + e^{-10x}\left(102\cosh\left(\sqrt{6}\,x\right) + 29\sqrt{6}\sinh\left(\sqrt{6}\,x\right)\right)\right], \quad \phi_2(x) = \frac{1}{141}\left[15 - e^{-10x}\left(15\cosh\left(\sqrt{6}\,x\right) - 22\sqrt{6}\sinh\left(\sqrt{6}\,x\right)\right)\right], \quad \phi_3(x) = \frac{1}{282}\left[72 - e^{-10x}\left(72\cosh\left(\sqrt{6}\,x\right) + 73\sqrt{6}\sinh\left(\sqrt{6}\,x\right)\right)\right].
Model (37) is simulated with different techniques, and the results are tabulated. Both constant and adaptive stepsize approaches are used. It can be seen in Table 9, Table 10 and Table 11 that the proposed technique (14) not only yields the minimum errors, but it also goes with the highest scd factor, thereby proving the much better performance of (14) in comparison to other well-known block techniques. In addition, the superiority of the proposed technique is maintained, even when the adaptive stepsize approach is utilized with varying values of the tolerance as shown in Table 12. Finally, the efficiency curves produced with each technique under consideration speak volumes in favor of (14) since they take the shortest time to yield the smallest maximum absolute global error as shown in Figure 8.
Problem 4 
(Biomass Transfer [38]). Take, for example, a forest in Europe that only has one or two different kinds of trees. We begin by selecting some of the oldest trees, those that are forecast to pass away during the next several years, and then we trace the progression of live trees into dead ones. The dead trees will gradually rot and collapse due to the various biological and seasonal occurrences. In the end, the trees that have fallen form humus. Define the variables ϕ_1(x), ϕ_2(x), ϕ_3(x), and x by the following:
  • ϕ_1(x): the biomass that has decomposed into humus;
  • ϕ_2(x): the biomass of dead trees;
  • ϕ_3(x): the biomass of live trees;
  • x: the amount of time in decades (one decade equals ten years).
The differential equations for the above scenario are given below:
\phi_1'(x) = -\phi_1(x) + 3\phi_2(x), \quad \phi_2'(x) = -3\phi_2(x) + 5\phi_3(x), \quad \phi_3'(x) = -5\phi_3(x),
with initial conditions ϕ_1(0) = 0, ϕ_2(0) = 0 and ϕ_3(0) = 1. The exact solution is as follows:
\phi_1(x) = \frac{15}{8}\, e^{-5x}\left(e^{2x} - 1\right)^2, \quad \phi_2(x) = \frac{5}{2}\, e^{-5x}\left(e^{2x} - 1\right), \quad \phi_3(x) = e^{-5x}.
Model (38) is simulated with different techniques, and the results are tabulated. Both constant and adaptive stepsize approaches are used. It can be seen in Table 13, Table 14 and Table 15 that the proposed technique (14) not only yields the minimum errors, but it also goes with the highest scd factor, thereby proving the much better performance of (14) in comparison to other well-known block techniques. In addition, the superiority of the proposed technique is maintained, even when the adaptive stepsize approach is utilized with varying values of the tolerance as shown in Table 16. It is also worth noting that method MHIRK could not produce promising results even utilizing 88 steps, whereas, in comparison, NPOBM outperforms. Finally, the efficiency curves produced with each technique under consideration speak volumes in favor of (14) since they take the shortest time to yield the smallest maximum absolute global error as shown in Figure 9.
Problem 5. 
We consider the following two-dimensional nonlinear system taken from Ref. [39]:
\phi_1'(x) = \mu\, \phi_1(x) + \phi_2(x)^2, \quad \phi_2'(x) = -\phi_2(x), \quad \phi_1(0) = -\frac{1}{\mu + 2}, \quad \phi_2(0) = 1, \quad x \in [0, 20],
while the exact solution is ϕ_1(x) = −exp(−2x)/(μ + 2), ϕ_2(x) = exp(−x), with μ = −10³.
Model (39) mentioned in Problem 5 is simulated with different techniques, and the results are tabulated. It can be seen in Table 17, Table 18 and Table 19 that the proposed technique (14) not only yields the minimum errors, but it also goes with the highest scd factor, thereby proving the much better performance of (14) in comparison to other well-known block techniques. It is also worth noting that method IRK failed to produce promising results even utilizing 2^8 steps, whereas, in comparison, NPOBM outperforms.

6. Concluding Remarks and Future Directions

By combining interpolation and collocation methods, a novel one-step optimized block method was devised, with a calculated order of convergence of at least five. The method’s theoretical analysis in this work shows that it is consistent, zero-stable, linear stable, aligned with the theory of order stars, and convergent; these properties allow it to successfully numerically solve a wide range of models that are stiff and nonlinear but appear in the applied and health sciences. Further, the method’s two off-grid spots are optimized based on the local truncation errors derived from a Taylor series. The suggested block method outperforms other similar methods from the literature, as demonstrated by numerical experiments selected from a variety of disciplines. When compared to others, the proposed method performs admirably, especially when it comes to the adaptive step-size strategy. With this in mind, the initial value problems of ordinary differential equations are a good candidate for the optimized block method suggested here. The order of convergence of the current optimized block method will be improved in the future to make it either A - or L -stable.

Author Contributions

The authors confirm their contribution to the paper as follows: K.A.: Conceptualization, Formal Analysis, Funding acquisition, Writing—original draft; S.Q.: Writing—original draft, Investigation, Methodology, Software; A.S.: Formal analysis, Software, Validation, Visualization; M.A.: Supervision, Writing—review and editing. All authors reviewed the results and approved the final version of the manuscript.

Funding

This work was supported by the Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia [Grant No. 2569].

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Henrici, P. Discrete Variable Methods in Ordinary Differential Equations; Wiley: New York, NY, USA, 1962; Volume 6, p. 407. [Google Scholar]
  2. Grimshaw, R. Nonlinear Ordinary Differential Equations: Applied Mathematics and Engineering Science Texts; CRC Press: Boca Raton, NY, USA, 1993; pp. 1–367. [Google Scholar]
  3. Weinstock, R. Calculus of Variations: With Applications to Physics and Engineering; Dover Publications, Inc.: New York, NY, USA, 1974; pp. 1–321. [Google Scholar]
  4. Weiglhofer, W.S.; Lindsay, K.A. Ordinary Differential Equations and Applications: Mathematical Methods for Applied Mathematicians, Physicists, Engineers and Bioscientists; Elsevier: Amsterdam, The Netherlands, 1999; pp. 1–156. [Google Scholar]
  5. Li, H.; Peng, R.; Wang, Z.A. On a diffusive SIS epidemic model with mass action mechanism and birth-death effect: Analysis, simulations and comparison with other mechanisms. SIAM J. Appl. Math. 2018, 78, 2129–2153. [Google Scholar] [CrossRef]
  6. Xie, X.; Xie, B.; Xiong, D.; Hou, M.; Zuo, J.; Wei, G.; Chevallier, J. New theoretical ISM-K2 Bayesian network model for evaluating vaccination effectiveness. J. Ambient. Intell. Humaniz. Comput. 2022, 1–17. [Google Scholar] [CrossRef]
  7. Lyu, W.; Wang, Z.A. Logistic damping effect in chemotaxis models with density-suppressed motility. Adv. Nonlinear Anal. 2022, 12, 336–355. [Google Scholar] [CrossRef]
  8. Butcher, J.C. The Numerical Analysis of Ordinary Differential Equations: Runge-Kutta and General Linear Methods; Wiley-Interscience: Hoboken, NJ, USA, 1987. [Google Scholar]
  9. Brugnano, L.; Iavernaro, F.; Trigiante, D. The lack of continuity and the role of infinite and infinitesimal in numerical methods for ODEs: The case of symplecticity. Appl. Math. Comput. 2012, 218, 8056–8063. [Google Scholar] [CrossRef] [Green Version]
  10. Finizio, N.; Ladas, G. Ordinary Differential Equations with Modern Applications; Wadsworth Pub. Co.: Belmont, CA, USA, 1988. [Google Scholar]
  11. Gregus, M. Third Order Linear Differential Equations; De Reidel Publishing Company: Tokyo, Japan, 1987; pp. 86–3198. [Google Scholar]
  12. Roberts, S.B. Multimethods for the Efficient Solution of Multiscale Differential Equations. Doctoral Dissertation, Virginia Tech, Blacksburg, VA, USA, 2021; pp. 1–196. [Google Scholar]
  13. Prothero, A.; Robinson, A. On the stability and accuracy of one-step methods for solving stiff systems of ordinary differential equations. Math. Comput. 1974, 28, 145–162. [Google Scholar] [CrossRef]
  14. Tam, H.W. One-stage parallel methods for the numerical solution of ordinary differential equations. SIAM J. Comput. 1992, 13, 1039–1061. [Google Scholar] [CrossRef]
  15. Jator, S.N.; Li, J. A self-starting linear multistep method for a direct solution of the general second order initial value problem. Int. J. Comput. Math. 2007, 86, 817–836. [Google Scholar] [CrossRef]
  16. Shokri, A. A new eight-order symmetric two-step multiderivative method for the numerical solution of second-order IVPs with oscillating solutions. Numer. Algorithm 2018, 77, 95–109. [Google Scholar] [CrossRef]
  17. Ramos, H.; Rufai, M.A. A two-step hybrid block method with fourth derivatives for solving third-order boundary value problems. Comput. Appl. Math. 2022, 404, 113419. [Google Scholar] [CrossRef]
  18. Sunday, J.; Shokri, A.; Marian, D. Variable step hybrid block method for the approximation of Kepler problem. Fractal Fract. 2022, 6, 343. [Google Scholar] [CrossRef]
  19. Singh, G.; Ramos, H. An optimized two-step hybrid block method formulated in variable stepsize mode for integrating y” = f(x, y, y’) numerically. Numer. Math. Theory Methods Appl. 2019, 12, 640–660. [Google Scholar]
  20. Rufai, M.A.; Ramos, H. Numerical solution of second-order singular problems arising in astrophysics by combining a pair of one-step hybrid block Nyström methods. Astrophys. Space Sci. 2020, 365, 1–13. [Google Scholar] [CrossRef]
  21. Areo, E.A.; Rufai, M.A. A new uniform fourth order one-third step continuous block method for direct solutions of y” = f(x, y, y’). Br. J. Math. Comput. Sci. 2016, 15, 1–12. [Google Scholar] [CrossRef] [PubMed]
  22. Qureshi, S.; Soomro, A.; Hincal, E.; Lee, J.R.; Park, C.; Osman, M.S. An efficient variable stepsize rational method for stiff, singular and singularly perturbed problems. Alex. Eng. J. 2022, 61, 10953–10963. [Google Scholar] [CrossRef]
  23. Ramos, H. Development of a new Runge-Kutta method and its economical implementation. Comput. Math. Methods 2019, 1, e1016. [Google Scholar] [CrossRef] [Green Version]
  24. Qureshi, S.; Ramos, H.; Soomro, A.; Hincal, E. Time-efficient reformulation of the Lobatto III family of order eight. J. Comput. Sci. 2022, 63, 101792. [Google Scholar] [CrossRef]
  25. Ramos, H.; Rufai, M.A. Third derivative modification of k-step block Falkner methods for the numerical solution of second order initial value problems. Appl. Math. Comput. 2018, 333, 231–245. [Google Scholar] [CrossRef]
  26. Ramos, H.; Vigo-Aguiar, J. A new algorithm appropriate for solving singular and singularly perturbed autonomous initial-value problems. Int. J. Comput. Math. 2008, 603–611. [Google Scholar] [CrossRef]
  27. Dahlquist, G.G. A special stability problem for linear multistep methods. Bit Numer. Math. 1963, 3, 27–43. [Google Scholar] [CrossRef]
  28. Wanner, G.; Hairer, E. Solving Ordinary Differential Equations II; Springer: Berlin/Heidelberg, Germany, 1996; p. 375. [Google Scholar]
  29. Qureshi, S.; Ramos, H. L-stable explicit nonlinear method with constant and variable step-size formulation for solving initial value problems. Int. J. Nonlinear Sci. Numer. 2018, 19, 741–751. [Google Scholar] [CrossRef]
  30. Akinfenwa, O.A.; Okunuga, S.A.; Akinnukawe, B.I.; Rufai, U.P.; Abdulganiy, R.I. Multi-derivative hybrid implicit Runge-Kutta method for solving stiff system of a first order differential equation. Far East J. Math. Sci. 2018, 106, 543–562. [Google Scholar]
  31. Butcher, J.C. Numerical Methods for Ordinary Differential Equations; John Wiley & Sons: Hoboken, NJ, USA, 2016. [Google Scholar]
  32. Sahi, R.K.; Jator, S.N.; Khan, N.A. A Simpson’s-type second derivative method for stiff systems. Int. J. Pure Appl. Math. 2012, 81, 619–633. [Google Scholar]
  33. Sunday, J.; Kolawole, F.M.; Ibijola, E.A.; Ogunrinde, R.B. Two-step Laguerre polynomial hybrid block method for stiff and oscillatory first-order ordinary differential equations. J. Math. Comput. Sci. 2015, 5, 658–668. [Google Scholar]
  34. Rufai, M.A.; Duromola, M.K.; Ganiyu, A.A. Derivation of one-sixth hybrid block method for solving general first order ordinary differential equations. IOSR-JM 2016, 12, 20–27. [Google Scholar]
  35. Jenny, K.; Käppeli, O.; Fiechter, A. Biosurfactants from Bacillus licheniformis: Structural analysis and characterization. Appl. Microbiol. Biotechnol. 1991, 36, 5–13. [Google Scholar] [CrossRef] [PubMed]
  36. Alvarez, C.M.; Litchfield, R.; Jackowski, D.; Griffin, S.; Kirkley, A. A prospective, double-blind, randomized clinical trial comparing subacromial injection of betamethasone and xylocaine to xylocaine alone in chronic rotator cuff tendinosis. Am. J. Sports Med. 2005, 33, 255–262. [Google Scholar] [CrossRef] [PubMed]
  37. Horgan, M.J. Differential Structuring of Reservoir Phytoplankton and Nutrient Dynamics by Nitrate and Ammonium. Doctoral Dissertation, Miami University, Oxford, OH, USA, 2005. [Google Scholar]
  38. May, R.; McLean, A.R. Theoretical Ecology: Principles and Applications; Oxford University Press on Demand: Oxford, UK, 2007. [Google Scholar]
  39. Ramos, H.; Singh, G. A note on variable step-size formulation of a Simpson’s-type second derivative block method for solving stiff systems. Appl. Math. Lett. 2017, 64, 101–107. [Google Scholar] [CrossRef]
Figure 1. Stability region (shaded region) obtained from Equation (26) for the proposed optimal block technique.
Figure 2. The plot of order stars of the first kind (right) and the order stars of the second kind (left) using the proposed one-step optimized block technique presented in Equation (14).
Figure 3. Flow chart of the SIR model.
Figure 4. Efficiency curves for the system in Problem 1 to observe the behavior of absolute maximum global errors versus the CPU time (sec) with a logarithmic scale on y-axis.
Figure 5. Xylocaine label, brand name of the drug lidocaine [36].
Figure 6. Efficiency curves for the system in Problem 2 to observe the behavior of absolute maximum global errors versus the CPU time (sec) with a logarithmic scale on y-axis.
Figure 7. (Left): Bacillaria paxillifera, phytoplankton. (Right): Anomura Galathea zoea, zooplankton.
Figure 8. Efficiency curves for the system in Problem 3 to observe the behavior of absolute maximum global errors versus the CPU time (sec) with a logarithmic scale on y-axis.
Figure 9. Efficiency curves for the system in Problem 4 to observe the behavior of absolute maximum global errors versus the CPU time (sec) with a logarithmic scale on y-axis.
Table 1. Numerical results for Problem 1 with NS = 10^2, where x ∈ [0, 2].
MethodMaxErrLastErrNormRMSEscd
NPOBM3.270 × 10 18 3.270 × 10 18 2.538 × 10 17 2.525 × 10 18 17.49
MHIRK4.257 × 10 16 4.257 × 10 16 3.303 × 10 15 3.287 × 10 16 15.37
IRK2.247 × 10 15 2.247 × 10 15 1.743 × 10 14 1.735 × 10 15 14.65
RADIIA2.550 × 10 15 2.550 × 10 15 1.979 × 10 14 1.969 × 10 15 14.59
LPHBM8.496 × 10 17 3.650 × 10 17 3.394 × 10 16 3.377 × 10 17 16.07
Sahi1.946 × 10 17 1.946 × 10 17 1.511 × 10 16 1.503 × 10 17 16.71
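To aid interpretation of the error measures reported in Tables 1–19, the following minimal Python sketch shows one way these columns can be computed from a numerical solution and a reference solution evaluated at the same grid points. The definitions are inferred from the tabulated values rather than taken from the authors' code (in particular, the Euclidean-norm reading of the "Norm" column is consistent with Tables 1–3, while the exact norm used may differ between tables), and the function name error_metrics is purely illustrative.

import numpy as np

def error_metrics(phi_num, phi_ref):
    """Illustrative computation of the error measures used in the result tables."""
    err = np.abs(np.asarray(phi_num) - np.asarray(phi_ref))  # pointwise absolute global errors
    max_err = err.max()                    # MaxErr: absolute maximum global error
    last_err = err[-1]                     # LastErr: error at the final grid point
    norm = np.linalg.norm(err)             # Norm: Euclidean norm of the error vector
    rmse = np.sqrt(np.mean(err ** 2))      # RMSE: root-mean-square error
    mean_err = err.mean()                  # Mean: average absolute error
    scd = -np.log10(max_err)               # scd: significant correct digits
    return max_err, last_err, norm, rmse, mean_err, scd

For instance, the scd entry 17.49 reported for NPOBM in Table 1 is consistent with −log₁₀(3.270 × 10⁻¹⁸) ≈ 17.49.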
Table 2. Numerical results for Problem 1 with NS = 10³, where x ∈ [0, 2].

Method | MaxErr | LastErr | Norm | RMSE | scd
NPOBM | 3.247 × 10⁻²⁴ | 3.247 × 10⁻²⁴ | 7.938 × 10⁻²³ | 2.509 × 10⁻²⁴ | 23.49
MHIRK | 4.258 × 10⁻²¹ | 4.258 × 10⁻²¹ | 1.041 × 10⁻¹⁹ | 3.290 × 10⁻²¹ | 20.37
IRK | 2.236 × 10⁻²⁰ | 2.236 × 10⁻²⁰ | 5.468 × 10⁻¹⁹ | 1.728 × 10⁻²⁰ | 19.65
RADIIA | 2.554 × 10⁻²⁰ | 2.554 × 10⁻²⁰ | 6.245 × 10⁻¹⁹ | 1.974 × 10⁻²⁰ | 19.59
LPHBM | 8.662 × 10⁻²³ | 3.650 × 10⁻²³ | 1.071 × 10⁻²¹ | 3.384 × 10⁻²³ | 22.06
Sahi | 1.946 × 10⁻²³ | 1.946 × 10⁻²³ | 4.759 × 10⁻²² | 1.504 × 10⁻²³ | 22.71
Table 3. Numerical results for Problem 1 with NS = 10⁴, where x ∈ [0, 2].

Method | MaxErr | LastErr | Norm | RMSE | scd
NPOBM | 3.244 × 10⁻³⁰ | 3.244 × 10⁻³⁰ | 2.507 × 10⁻²⁸ | 2.507 × 10⁻³⁰ | 29.49
MHIRK | 4.258 × 10⁻²⁶ | 4.258 × 10⁻²⁶ | 3.291 × 10⁻²⁴ | 3.291 × 10⁻²⁶ | 25.37
IRK | 2.235 × 10⁻²⁵ | 2.235 × 10⁻²⁵ | 1.728 × 10⁻²³ | 1.728 × 10⁻²⁵ | 24.65
RADIIA | 2.555 × 10⁻²⁵ | 2.555 × 10⁻²⁵ | 1.974 × 10⁻²³ | 1.974 × 10⁻²⁵ | 24.59
LPHBM | 8.679 × 10⁻²⁹ | 3.650 × 10⁻²⁹ | 3.385 × 10⁻²⁷ | 3.385 × 10⁻²⁹ | 28.06
Sahi | 1.946 × 10⁻²⁹ | 1.946 × 10⁻²⁹ | 1.504 × 10⁻²⁷ | 1.504 × 10⁻²⁹ | 28.71
Table 4. Numerical results for Problem 1 using the adaptive step-size approach with Δx_ini = 10⁻⁶.

tol | Method | MaxErr | LastErr | Norm | RMSE | NS
10⁻³ | NPOBM | 3.521 × 10⁻⁹ | 3.198 × 10⁻⁹ | 5.286 × 10⁻⁹ | 2.158 × 10⁻⁹ | 5
10⁻³ | MHIRK | 9.326 × 10⁻⁹ | 8.25 × 10⁻⁹ | 1.449 × 10⁻⁸ | 5.914 × 10⁻⁹ | 5
10⁻³ | LPHBM | 2.026 × 10⁻⁸ | 1.08 × 10⁻⁸ | 2.746 × 10⁻⁸ | 1.038 × 10⁻⁸ | 6
10⁻⁴ | NPOBM | 3.474 × 10⁻¹¹ | 3.474 × 10⁻¹¹ | 6.249 × 10⁻¹¹ | 2.083 × 10⁻¹¹ | 8
10⁻⁴ | MHIRK | 2.799 × 10⁻¹⁰ | 2.799 × 10⁻¹⁰ | 4.722 × 10⁻¹⁰ | 1.574 × 10⁻¹⁰ | 8
10⁻⁴ | LPHBM | 2.794 × 10⁻¹⁰ | 2.189 × 10⁻¹⁰ | 6.289 × 10⁻¹⁰ | 1.896 × 10⁻¹⁰ | 10
10⁻⁵ | NPOBM | 4.061 × 10⁻¹³ | 3.892 × 10⁻¹³ | 9.325 × 10⁻¹³ | 2.262 × 10⁻¹³ | 16
10⁻⁵ | MHIRK | 6.577 × 10⁻¹² | 6.361 × 10⁻¹² | 1.588 × 10⁻¹¹ | 3.852 × 10⁻¹² | 16
10⁻⁵ | LPHBM | 3.666 × 10⁻¹² | 3.428 × 10⁻¹² | 1.141 × 10⁻¹¹ | 2.617 × 10⁻¹² | 18
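In the adaptive experiments, tol is the user-specified error tolerance, Δx_ini = 10⁻⁶ is the initial step size, and NS is the resulting number of steps. As an illustration of the general mechanism behind such results (this is a schematic, not the authors' actual controller), the Python sketch below implements a generic accept/reject step-size loop: an Euler/Heun pair stands in for the one-step block advance, and the safety factor 0.9 and the rescaling exponent 1/(p + 1) are conventional choices.

import numpy as np

def adaptive_solve(f, x0, xN, phi0, tol, dx_ini=1e-6, safety=0.9):
    """Generic adaptive step-size loop: accept a step when the local error
    estimate is below tol, reject it otherwise, and rescale the step size
    after every attempt. An Euler/Heun pair stands in for the block advance."""
    x = x0
    phi = np.atleast_1d(np.asarray(phi0, dtype=float))
    dx, ns = dx_ini, 0
    while x < xN:
        dx = min(dx, xN - x)                      # never step past the endpoint
        k1 = f(x, phi)
        k2 = f(x + dx, phi + dx * k1)
        phi_low = phi + dx * k1                   # first-order (Euler) value
        phi_high = phi + 0.5 * dx * (k1 + k2)     # second-order (Heun) value
        err_est = np.max(np.abs(phi_high - phi_low))
        if err_est <= tol:                        # accept the step
            x, phi = x + dx, phi_high
            ns += 1                               # NS counts accepted steps
        # rescale with exponent 1/(p + 1), where p = 1 is the lower order here
        dx *= safety * (tol / max(err_est, 1e-16)) ** 0.5
    return x, phi, ns

# Illustrative use on a logistic-type test equation phi'(x) = phi (1 - phi)
x_end, phi_end, ns = adaptive_solve(lambda x, phi: phi * (1.0 - phi),
                                    0.0, 2.0, 0.1, tol=1e-4)

In such a loop, tightening tol forces smaller accepted steps and therefore a larger NS, which is the trend visible in Tables 4, 8, 12 and 16.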
Table 5. Numerical results for Problem 2 with NS = 2⁶, where x ∈ [0, 5].

Method | Norm | RMSE | Mean | scd
NPOBM | 9.845 × 10⁻¹⁷ | 8.918 × 10⁻¹⁷ | 8.864 × 10⁻¹⁷ | 16.01
MHIRK | 6.760 × 10⁻¹⁵ | 6.124 × 10⁻¹⁵ | 6.087 × 10⁻¹⁵ | 14.17
IRK | 3.585 × 10⁻¹⁴ | 3.247 × 10⁻¹⁴ | 3.228 × 10⁻¹⁴ | 13.45
RADIIA | 4.045 × 10⁻¹⁴ | 3.665 × 10⁻¹⁴ | 3.642 × 10⁻¹⁴ | 13.39
LPHBM | 2.492 × 10⁻¹⁵ | 2.258 × 10⁻¹⁵ | 2.244 × 10⁻¹⁵ | 14.60
Sahi | 5.818 × 10⁻¹⁶ | 5.270 × 10⁻¹⁶ | 5.238 × 10⁻¹⁶ | 15.24
Table 6. Numerical results for Problem 2 with NS = 2⁷, where x ∈ [0, 5].

Method | Norm | RMSE | Mean | scd
NPOBM | 1.527 × 10⁻¹⁸ | 1.383 × 10⁻¹⁸ | 1.375 × 10⁻¹⁸ | 17.82
MHIRK | 2.113 × 10⁻¹⁶ | 1.914 × 10⁻¹⁶ | 1.903 × 10⁻¹⁶ | 15.68
IRK | 1.115 × 10⁻¹⁵ | 1.010 × 10⁻¹⁵ | 1.004 × 10⁻¹⁵ | 14.95
RADIIA | 1.266 × 10⁻¹⁵ | 1.147 × 10⁻¹⁵ | 1.140 × 10⁻¹⁵ | 14.90
LPHBM | 3.973 × 10⁻¹⁷ | 3.599 × 10⁻¹⁷ | 3.578 × 10⁻¹⁷ | 16.40
Sahi | 9.091 × 10⁻¹⁸ | 8.235 × 10⁻¹⁸ | 8.185 × 10⁻¹⁸ | 17.04
Table 7. Numerical results for Problem 2 with NS = 2⁸, where x ∈ [0, 5].

Method | Norm | RMSE | Mean | scd
NPOBM | 2.376 × 10⁻²⁰ | 2.153 × 10⁻²⁰ | 2.140 × 10⁻²⁰ | 19.62
MHIRK | 6.605 × 10⁻¹⁸ | 5.983 × 10⁻¹⁸ | 5.947 × 10⁻¹⁸ | 17.18
IRK | 3.476 × 10⁻¹⁷ | 3.149 × 10⁻¹⁷ | 3.130 × 10⁻¹⁷ | 16.46
RADIIA | 3.960 × 10⁻¹⁷ | 3.587 × 10⁻¹⁷ | 3.566 × 10⁻¹⁷ | 16.40
LPHBM | 6.271 × 10⁻¹⁹ | 5.681 × 10⁻¹⁹ | 5.647 × 10⁻¹⁹ | 18.20
Sahi | 1.421 × 10⁻¹⁹ | 1.287 × 10⁻¹⁹ | 1.279 × 10⁻¹⁹ | 18.85
Table 8. Numerical results for Problem 2 using the adaptive step-size approach with Δx_ini = 10⁻⁶.

tol | Method | Norm | RMSE | Mean | NS
10⁻³ | NPOBM | 3.918 × 10⁻⁹ | 3.549 × 10⁻⁹ | 3.527 × 10⁻⁹ | 5
10⁻³ | MHIRK | 1.301 × 10⁻⁸ | 1.179 × 10⁻⁸ | 1.172 × 10⁻⁸ | 5
10⁻³ | LPHBM | 4.086 × 10⁻⁸ | 3.701 × 10⁻⁸ | 3.679 × 10⁻⁸ | 6
10⁻⁴ | NPOBM | 4.231 × 10⁻¹¹ | 3.833 × 10⁻¹¹ | 3.81 × 10⁻¹¹ | 9
10⁻⁴ | MHIRK | 4.192 × 10⁻¹⁰ | 3.797 × 10⁻¹⁰ | 3.774 × 10⁻¹⁰ | 9
10⁻⁴ | LPHBM | 6.482 × 10⁻¹⁰ | 5.871 × 10⁻¹⁰ | 5.836 × 10⁻¹⁰ | 10
10⁻⁵ | NPOBM | 4.341 × 10⁻¹³ | 3.933 × 10⁻¹³ | 3.909 × 10⁻¹³ | 18
10⁻⁵ | MHIRK | 1.076 × 10⁻¹¹ | 9.75 × 10⁻¹² | 9.69 × 10⁻¹² | 17
10⁻⁵ | LPHBM | 7.947 × 10⁻¹² | 7.199 × 10⁻¹² | 7.155 × 10⁻¹² | 18
Table 9. Numerical results for Problem 3 with NS = 2⁶, where x ∈ [0, 5].

Method | Norm | RMSE | Mean | scd
NPOBM | 7.337 × 10⁻⁹ | 5.347 × 10⁻⁹ | 4.891 × 10⁻⁹ | 8.134
MHIRK | 1.731 × 10⁻⁸ | 1.250 × 10⁻⁸ | 1.160 × 10⁻⁸ | 7.762
IRK | 1.124 × 10⁻⁷ | 8.129 × 10⁻⁸ | 7.524 × 10⁻⁸ | 6.949
RADIIA | 9.878 × 10⁻⁸ | 7.133 × 10⁻⁸ | 6.620 × 10⁻⁸ | 7.005
LPHBM | 6.035 × 10⁻⁸ | 4.437 × 10⁻⁸ | 4.110 × 10⁻⁸ | 7.219
Sahi | 3.243 × 10⁻⁸ | 2.365 × 10⁻⁸ | 2.162 × 10⁻⁸ | 7.489
Table 10. Numerical results for Problem 3 with NS = 2⁷, where x ∈ [0, 5].

Method | Norm | RMSE | Mean | scd
NPOBM | 9.832 × 10⁻¹¹ | 7.167 × 10⁻¹¹ | 6.557 × 10⁻¹¹ | 10.01
MHIRK | 5.522 × 10⁻¹⁰ | 3.982 × 10⁻¹⁰ | 3.688 × 10⁻¹⁰ | 9.258
IRK | 3.222 × 10⁻⁹ | 2.325 × 10⁻⁹ | 2.151 × 10⁻⁹ | 8.492
RADIIA | 3.227 × 10⁻⁹ | 2.326 × 10⁻⁹ | 2.155 × 10⁻⁹ | 8.491
LPHBM | 1.458 × 10⁻⁹ | 1.056 × 10⁻⁹ | 9.723 × 10⁻¹⁰ | 8.836
Sahi | 4.995 × 10⁻¹⁰ | 3.634 × 10⁻¹⁰ | 3.332 × 10⁻¹⁰ | 9.301
Table 11. Numerical results for Problem 3 with NS = 2⁸, where x ∈ [0, 5].

Method | Norm | RMSE | Mean | scd
NPOBM | 1.416 × 10⁻¹² | 1.032 × 10⁻¹² | 9.449 × 10⁻¹³ | 11.85
MHIRK | 1.731 × 10⁻¹¹ | 1.248 × 10⁻¹¹ | 1.156 × 10⁻¹¹ | 10.76
IRK | 9.575 × 10⁻¹¹ | 6.908 × 10⁻¹¹ | 6.394 × 10⁻¹¹ | 10.02
RADIIA | 1.024 × 10⁻¹⁰ | 7.385 × 10⁻¹¹ | 6.840 × 10⁻¹¹ | 9.990
LPHBM | 2.823 × 10⁻¹¹ | 2.048 × 10⁻¹¹ | 1.882 × 10⁻¹¹ | 10.55
Sahi | 7.864 × 10⁻¹² | 5.727 × 10⁻¹² | 5.245 × 10⁻¹² | 11.10
Table 12. Numerical results for Problem 3 using the adaptive step-size approach with Δx_ini = 10⁻⁶.

tol | Method | Norm | RMSE | Mean | NS
10⁻³ | NPOBM | 6.671 × 10⁻⁷ | 8.005 × 10⁻⁷ | 6.25 × 10⁻⁷ | 14
10⁻³ | MHIRK | 8.822 × 10⁻² | 6.858 × 10⁻² | 6.394 × 10⁻² | 10
10⁻³ | LPHBM | 5.202 × 10⁻⁷ | 4.282 × 10⁻⁷ | 4.203 × 10⁻⁷ | 16
10⁻⁴ | NPOBM | 1.368 × 10⁻⁸ | 1.368 × 10⁻⁸ | 1.087 × 10⁻⁸ | 28
10⁻⁴ | MHIRK | 8.849 × 10⁻² | 6.902 × 10⁻² | 6.468 × 10⁻² | 19
10⁻⁴ | LPHBM | 2.391 × 10⁻⁸ | 1.917 × 10⁻⁸ | 1.77 × 10⁻⁸ | 28
10⁻⁵ | NPOBM | 2.702 × 10⁻¹⁰ | 2.702 × 10⁻¹⁰ | 2.069 × 10⁻¹⁰ | 58
10⁻⁵ | MHIRK | 8.883 × 10⁻² | 6.927 × 10⁻² | 6.488 × 10⁻² | 38
10⁻⁵ | LPHBM | 1.267 × 10⁻⁹ | 9.427 × 10⁻¹⁰ | 8.444 × 10⁻¹⁰ | 52
Table 13. Numerical results for Problem 4 with NS = 2⁶, where x ∈ [0, 10].

Method | Norm | RMSE | Mean | scd
NPOBM | 7.576 × 10⁻⁸ | 5.688 × 10⁻⁸ | 5.394 × 10⁻⁸ | 7.121
MHIRK | 1.749 × 10⁻⁷ | 1.306 × 10⁻⁷ | 1.241 × 10⁻⁷ | 6.757
IRK | 1.140 × 10⁻⁶ | 8.518 × 10⁻⁷ | 8.093 × 10⁻⁷ | 5.943
RADIIA | 9.974 × 10⁻⁷ | 7.445 × 10⁻⁷ | 7.077 × 10⁻⁷ | 6.001
LPHBM | 6.239 × 10⁻⁷ | 4.691 × 10⁻⁷ | 4.453 × 10⁻⁷ | 6.205
Sahi | 3.353 × 10⁻⁷ | 2.518 × 10⁻⁷ | 2.388 × 10⁻⁷ | 6.475
Table 14. Numerical results for Problem 4 with NS = 2⁷, where x ∈ [0, 10].

Method | Norm | RMSE | Mean | scd
NPOBM | 1.016 × 10⁻⁹ | 7.625 × 10⁻¹⁰ | 7.230 × 10⁻¹⁰ | 8.993
MHIRK | 5.600 × 10⁻⁹ | 4.174 × 10⁻⁹ | 3.965 × 10⁻⁹ | 8.252
IRK | 3.273 × 10⁻⁸ | 2.441 × 10⁻⁸ | 2.319 × 10⁻⁸ | 7.485
RADIIA | 3.271 × 10⁻⁸ | 2.438 × 10⁻⁸ | 2.315 × 10⁻⁸ | 7.485
LPHBM | 1.492 × 10⁻⁸ | 1.116 × 10⁻⁸ | 1.059 × 10⁻⁸ | 7.826
Sahi | 5.140 × 10⁻⁹ | 3.854 × 10⁻⁹ | 3.656 × 10⁻⁹ | 8.289
Table 15. Numerical results for Problem 4 with NS = 2⁸, where x ∈ [0, 10].

Method | Norm | RMSE | Mean | scd
NPOBM | 1.461 × 10⁻¹¹ | 1.097 × 10⁻¹¹ | 1.040 × 10⁻¹¹ | 10.84
MHIRK | 1.755 × 10⁻¹⁰ | 1.308 × 10⁻¹⁰ | 1.243 × 10⁻¹⁰ | 9.756
IRK | 9.719 × 10⁻¹⁰ | 7.247 × 10⁻¹⁰ | 6.883 × 10⁻¹⁰ | 9.012
RADIIA | 1.038 × 10⁻⁹ | 7.740 × 10⁻¹⁰ | 7.352 × 10⁻¹⁰ | 8.984
LPHBM | 2.896 × 10⁻¹⁰ | 2.168 × 10⁻¹⁰ | 2.058 × 10⁻¹⁰ | 9.538
Sahi | 8.108 × 10⁻¹¹ | 6.084 × 10⁻¹¹ | 5.770 × 10⁻¹¹ | 10.09
Table 16. Numerical results for Problem 4 using the adaptive step-size approach with Δx_ini = 10⁻⁶.

tol | Method | Norm | RMSE | Mean | NS
10⁻³ | NPOBM | 9.483 × 10⁻⁹ | 8.579 × 10⁻⁸ | 5.952 × 10⁻⁸ | 46
10⁻³ | MHIRK | 3.008 × 10⁻¹ | 2.663 × 10⁻¹ | 2.651 × 10⁻¹ | 22
10⁻³ | LPHBM | 7.42 × 10⁻⁷ | 5.038 × 10⁻⁷ | 4.401 × 10⁻⁷ | 30
10⁻⁴ | NPOBM | 1.024 × 10⁻¹⁰ | 7.966 × 10⁻¹⁰ | 5.538 × 10⁻¹⁰ | 90
10⁻⁴ | MHIRK | 3.007 × 10⁻¹ | 2.666 × 10⁻¹ | 2.654 × 10⁻¹ | 42
10⁻⁴ | LPHBM | 2.432 × 10⁻⁸ | 1.674 × 10⁻⁸ | 1.412 × 10⁻⁸ | 56
10⁻⁵ | NPOBM | 1.075 × 10⁻¹² | 7.793 × 10⁻¹² | 5.407 × 10⁻¹² | 185
10⁻⁵ | MHIRK | 3.008 × 10⁻¹ | 2.666 × 10⁻¹ | 2.654 × 10⁻¹ | 88
10⁻⁵ | LPHBM | 3.388 × 10⁻¹⁰ | 2.277 × 10⁻¹⁰ | 1.906 × 10⁻¹⁰ | 110
Table 17. Numerical results for Problem 5 with NS = 2⁶, where x ∈ [0, 20].

Method | Norm | RMSE | Mean | scd
NPOBM | 7.816 × 10⁻⁹ | 5.527 × 10⁻⁹ | 3.917 × 10⁻⁹ | 8.107
MHIRK | 2.511 × 10⁻⁸ | 1.775 × 10⁻⁸ | 1.257 × 10⁻⁸ | 7.600
IRK | Failed
RADIIA | 1.447 × 10⁻⁷ | 1.023 × 10⁻⁷ | 7.424 × 10⁻⁸ | 8.421
LPHBM | 8.185 × 10⁻⁸ | 5.788 × 10⁻⁸ | 4.119 × 10⁻⁸ | 7.087
Sahi | 3.611 × 10⁻⁸ | 2.553 × 10⁻⁸ | 1.807 × 10⁻⁸ | 7.442
Table 18. Numerical results for Problem 5 with NS = 2⁷, where x ∈ [0, 20].

Method | Norm | RMSE | Mean | scd
NPOBM | 1.070 × 10⁻¹⁰ | 7.573 × 10⁻¹¹ | 5.530 × 10⁻¹¹ | 9.970
MHIRK | 7.880 × 10⁻¹⁰ | 5.572 × 10⁻¹⁰ | 3.944 × 10⁻¹⁰ | 9.103
IRK | Failed
RADIIA | 4.628 × 10⁻⁹ | 3.294 × 10⁻⁹ | 2.580 × 10⁻⁹ | 9.274
LPHBM | 1.803 × 10⁻⁹ | 1.275 × 10⁻⁹ | 9.152 × 10⁻¹⁰ | 8.744
Sahi | 5.686 × 10⁻¹⁰ | 4.020 × 10⁻¹⁰ | 2.846 × 10⁻¹⁰ | 9.245
Table 19. Numerical results for Problem 5 with NS = 2⁸, where x ∈ [0, 20].

Method | Norm | RMSE | Mean | scd
NPOBM | 1.571 × 10⁻¹² | 1.229 × 10⁻¹² | 1.156 × 10⁻¹² | 11.80
MHIRK | 2.473 × 10⁻¹¹ | 1.748 × 10⁻¹¹ | 1.238 × 10⁻¹¹ | 10.61
IRK | Failed
RADIIA | 1.467 × 10⁻¹⁰ | 1.144 × 10⁻¹⁰ | 1.075 × 10⁻¹⁰ | 10.17
LPHBM | 3.337 × 10⁻¹¹ | 2.361 × 10⁻¹¹ | 1.722 × 10⁻¹¹ | 10.48
Sahi | 8.846 × 10⁻¹² | 6.255 × 10⁻¹² | 4.428 × 10⁻¹² | 11.05
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
