Article

Fast and Accurate Numerical Algorithm with Performance Assessment for Nonlinear Functional Volterra Equations

1 Department of Mathematics, Rivers State University, Port Harcourt 5080, Nigeria
2 Department of Mathematics, Faculty of Mathematics and Computer Science, Babeş-Bolyai University, 1 M. Kogălniceanu Street, 400084 Cluj-Napoca, Romania
* Author to whom correspondence should be addressed.
Fractal Fract. 2023, 7(4), 333; https://doi.org/10.3390/fractalfract7040333
Submission received: 12 February 2023 / Revised: 12 April 2023 / Accepted: 12 April 2023 / Published: 17 April 2023

Abstract

An efficient numerical algorithm is developed for solving nonlinear functional Volterra integral equations. The core idea is to define an appropriate operator and then combine the Krasnoselskij iterative scheme with collocation at discrete points and the Newton–Cotes quadrature rule. This results in an explicit scheme that does not require solving a nonlinear or linear algebraic system. For the convergence analysis, the discretization error is estimated and proved to converge via a recurrence relation. The discretization error is combined with the Krasnoselskij iteration error to estimate the total approximation error, thus establishing the convergence of the method. Numerical experiments are then provided, first, to demonstrate the second-order convergence of the proposed method and, second, to show the better performance of the scheme over the existing nonlinear-system-based approach.

1. Introduction

This study is devoted to analyzing and computing the solution of nonlinear functional Volterra integral equations. These equations have applications in several areas such as physical sciences [1,2,3], optimal control and economics [4,5,6,7], reformulation of more difficult mathematical problems [8,9], and epidemiology [10,11]. In [12], sufficient conditions for the existence of a principal solution were derived for nonlinear Volterra equations and an explicit method was also proposed.
Since closed-form analytical solutions, in general, do not exist, numerical techniques provide a means of approximating them. For example, numerical algorithms have been proposed based on triangular functions [13], collocation methods [14,15], CAS wavelets [16,17], variational iteration methods [18,19,20,21], collocation–trapezoidal methods [22,23], linear programming [24], the Picard–trapezoidal rule [9], and Taylor series [25]. Moreover, see [10,11,26,27,28,29] for other ideas. Most of these methods are based on directly discretizing the original nonlinear integral equations without using any fixed-point iteration (such as the Picard iteration). In this work, we shall refer to this type of method as direct discretization (DD) algorithms. A typical example is the one proposed in [22].
It is well known that, under suitable conditions and with an arbitrary initial function in a suitable Banach space, the fixed point of an appropriate operator can be approximated using an applicable fixed-point iteration technique, such as the Picard, Krasnoselskij, Mann, or Ishikawa schemes [30,31,32,33]; see also [34]. These iterative methods can produce analytical expressions for the approximating functions, provided all the operations in the operator are analytically realizable. We shall refer to this type of approach as Picard-type (PT) schemes. See [35] for an example.
The challenge with DD schemes is that they lead to nonlinear algebraic systems, which require considerable computational resources, time, and even advanced programming skills to solve. The PT schemes face the challenge of not being practically useful when the operations involved in the operator cannot be carried out analytically, which is usually the case in nonlinear problems. Micula [9] proposed combining PT schemes with DD, using the Picard iteration and the trapezoidal rule; see also [36] for Mann's iteration.
It is known that the Mann iteration converges faster than Picard's [36]; however, it is also proved in Theorem 9.4 of [30] that, for certain operators, given any Mann iteration that converges to the fixed point, there is always a Krasnoselskij iteration which converges faster. Therefore, the present paper develops the combined technique for functional integral equations using the Krasnoselskij iterative algorithm and a one-dimensional quadrature rule defined at collocation points. The advantages of the approach are as follows. First, unlike the DD schemes, it does not lead to a coupled nonlinear algebraic system, and not even linear systems are encountered; hence, Newton or other nonlinear solvers are completely avoided, and even linear iterative algorithms are not needed. Second, unlike the PT schemes, every integral is explicitly approximated. A systematic analysis of the convergence of the approach is carried out, and numerical examples are provided to show the second-order accuracy of the method. Moreover, the existence and uniqueness results in Micula [9,36] are obtained on the basis of a contraction assumption. In the current work, we prove the solvability of the problem without any contraction assumption by employing the generalized Banach contraction principle.
To be precise, the problem investigated in this work is the following nonlinear functional Volterra integral equation of the second kind:
$$u(x) = g(x) + f\left(x, \int_a^x k(x, y, u(y))\,dy\right), \quad x \in [a,b], \qquad (1)$$
where $g \in C[a,b]$, $f : [a,b] \times \mathbb{R} \to \mathbb{R}$, and $k : [a,b] \times [a,b] \times \mathbb{R} \to \mathbb{R}$.
The solvability of the problem is proved in Section 2, whereas the numerical algorithm which begins with the Krasnoselskij iteration is derived in Section 3. The error and convergence of the method are analyzed in Section 4, whereas numerical examples are provided in Section 5 to demonstrate the accuracy. In Section 6 we assess the performance. Some concluding remarks are given in Section 7.

2. Solvability

We make the following assumptions:
  • $g$, $k$, and the map $x \mapsto f\big(x, \int_a^x k(x,y,u(y))\,dy\big)$ belong to $L^\infty$.
  • The functional $f$ is Lipschitz continuous with respect to its second argument, with Lipschitz constant $\alpha_f \ge 0$:
    $$|f(x,u) - f(x,v)| \le \alpha_f\, |u - v|, \quad \text{for all } u, v.$$
  • The kernel $k$ is Lipschitz continuous with respect to $u$, with Lipschitz constant $\alpha_k \ge 0$:
    $$|k(x,y,u) - k(x,y,v)| \le \alpha_k\, |u - v|, \quad \text{for all } u, v.$$
We define an operator, T, by
$$(Tu)(x) := g(x) + f\left(x, \int_a^x k(x, y, u(y))\,dy\right). \qquad (4)$$
Lemma 1.
The operator $T$ defined in (4) satisfies the inequality:
$$|T^k u(x) - T^k v(x)| \le \frac{(\alpha_f \alpha_k)^k (x-a)^k}{k!}\, \|u - v\|_\infty, \quad \text{for } k = 1, 2, \dots \qquad (5)$$
Proof. 
We prove this by induction. Setting $k = 1$, we obtain:
$$\begin{aligned} |(Tu)(x) - (Tv)(x)| &= \left| f\Big(x, \int_a^x k(x,y,u(y))\,dy\Big) - f\Big(x, \int_a^x k(x,y,v(y))\,dy\Big) \right|\\ &\le \alpha_f \alpha_k \int_a^x |u(y) - v(y)|\,dy \quad (\text{by the Lipschitz continuity of } f \text{ and } k)\\ &\le \alpha_f \alpha_k\, \|u - v\|_\infty \int_a^x dy = \alpha_f \alpha_k (x-a)\, \|u - v\|_\infty. \end{aligned}$$
This shows that (5) is true for $k = 1$. Next, for $k = 2$ we obtain:
$$|T^2 u(x) - T^2 v(x)| \le \alpha_f \alpha_k \int_a^x |Tu(y) - Tv(y)|\,dy \le \alpha_f \alpha_k \int_a^x \alpha_f \alpha_k (y-a)\, \|u - v\|_\infty\,dy = \frac{(\alpha_f \alpha_k)^2 (x-a)^2}{2}\, \|u - v\|_\infty,$$
which again agrees with (5). Now assume (5) holds for some $k$. Then
$$|T^{k+1} u(x) - T^{k+1} v(x)| \le \alpha_f \alpha_k \int_a^x |T^k u(y) - T^k v(y)|\,dy \le \alpha_f \alpha_k \int_a^x \frac{(\alpha_f \alpha_k)^k (y-a)^k}{k!}\, \|u - v\|_\infty\,dy = \frac{(\alpha_f \alpha_k)^{k+1} (x-a)^{k+1}}{(k+1)!}\, \|u - v\|_\infty.$$
Hence, it is true for all k. □
Theorem 1
(Solvability). If Assumptions 1–3 are true, then the nonlinear functional Volterra integral Equation (1) has a unique solution.
Proof. 
Assumption 1 above guarantees that $T : L^\infty \to L^\infty$. Now, Lemma 1 gives
$$|T^k u(x) - T^k v(x)| \le \frac{(\alpha_f \alpha_k)^k (x-a)^k}{k!}\, \|u - v\|_\infty \le \frac{(\alpha_f \alpha_k)^k (b-a)^k}{k!}\, \|u - v\|_\infty \to 0 \quad \text{as } k \to \infty.$$
This shows that $T^k$ is a contraction for $k$ large enough, and it follows from the generalized Banach contraction principle that $T$ has a unique fixed point, which is the unique solution of problem (1). □

3. Numerical Algorithm

This section details the numerical approximation of problem (1). To this end, let $N \in \mathbb{Z}^+$ and define the mesh $\Omega_h = \{x_i = a + ih : i = 0, 1, \dots, N\}$ with $h := (b-a)/N$. We also define the grid functions
$$u_i \approx u(x_i) \ \text{for each } i, \quad \text{and} \quad \xi_i^N := \begin{cases} h/2, & \text{if } i = 0 \text{ or } i = N,\\ h, & \text{otherwise}. \end{cases}$$
Since $T$ is a contraction, the sequence $\{u_n(x)\}_{n=0}^{\infty}$ generated by
$$u_{n+1}(x) = (1-\lambda)\, u_n(x) + \lambda\, T u_n(x), \quad \lambda \in (0,1), \qquad (9)$$
converges to the fixed point of $T$ [30], with the error estimate [31]:
$$\|u_{n+1} - u\|_\infty \le \alpha^n\, \|u_1 - u\|_\infty, \qquad (10)$$
where
$$\alpha = 1 - (1 - \hat\alpha)\lambda < 1, \qquad \hat\alpha = \alpha_f \alpha_k (b-a) < 1. \qquad (11)$$
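For intuition, the Krasnoselskij iteration (9) can be sketched for a scalar contraction; the map T(u) = cos(u) below is an illustrative stand-in, not an operator from the paper.

```python
import math

def krasnoselskij(T, u0, lam=0.5, tol=1e-12, max_iter=1000):
    """Krasnoselskij iteration u_{n+1} = (1 - lam) * u_n + lam * T(u_n)."""
    u = u0
    for _ in range(max_iter):
        u_next = (1 - lam) * u + lam * T(u)
        if abs(u_next - u) <= tol:
            return u_next
        u = u_next
    return u

# Fixed point of the contraction T(u) = cos(u) on the reals
fixed = krasnoselskij(math.cos, 1.0)
```

Any lam in (0, 1) converges here; smaller lam damps the iteration more heavily.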
Lemma 2
(See [37]). Let $x_i = a + ih$, $i = 0, \dots, N$, $h = (b-a)/N$ be points in the interval $[a,b]$, and suppose that $f \in C^2[a,b]$. Then
$$\int_a^b f(x)\,dx = \sum_{j=0}^{N} \xi_j^N f(x_j) + R_f,$$
and there exists $0 \le R_m < \infty$ such that
$$|R_f| \le R_m := \frac{h^2 (b-a)}{12} \max_{\chi \in [a,b]} |f''(\chi)| = O(h^2).$$
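A quick numerical check of Lemma 2: the weighted sum with the ξ weights is the composite trapezoidal rule, and halving h should cut the error by roughly a factor of four. The integrand sin(x) is an arbitrary smooth test function, not one from the paper.

```python
import math

def trapezoid(f, a, b, N):
    """Composite trapezoidal rule: weights xi_j^N = h/2 at j = 0, N and h otherwise."""
    h = (b - a) / N
    return sum((h / 2 if j in (0, N) else h) * f(a + j * h) for j in range(N + 1))

exact = 1 - math.cos(1.0)                      # integral of sin over [0, 1]
err_N = abs(trapezoid(math.sin, 0.0, 1.0, 64) - exact)
err_2N = abs(trapezoid(math.sin, 0.0, 1.0, 128) - exact)
# err_N / err_2N should be close to 4, consistent with the O(h^2) bound
```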
To derive the method, we first collocate problem (1) at $x_0 = a \in \Omega_h$. This gives:
$$u(x_0) = g(x_0) + f(x_0, 0). \qquad (14)$$
Observe that this is exact as no approximation has been made. Hence, we can initialize the Krasnoselskij sequence (9) from (14) as follows:
$$u_0(x) = g(x) + f(x, 0), \quad \text{for all } x \in [a,b].$$
Therefore, the iteration becomes:
$$u_{n+1}(x) = (1-\lambda)\, u_n(x) + \lambda\, (T u_n)(x), \quad n \ge 0; \qquad u_0(x) = g(x) + f(x, 0); \qquad 0 < \lambda < 1. \qquad (15)$$
We now collocate (15) at $x_i \in \Omega_h \setminus \{x_0\}$:
$$\begin{aligned} &0 < \lambda < 1, \qquad u_0(x_i) = g(x_i) + f(x_i, 0),\\ &n \ge 0: \quad I_n(x_i) = \int_a^{x_i} k(x_i, y, u_n(y))\,dy,\\ &\phantom{n \ge 0: \quad} u_{n+1}(x_i) = (1-\lambda)\, u_n(x_i) + \lambda \big( g(x_i) + f(x_i, I_n(x_i)) \big). \end{aligned} \qquad (16)$$
Using the trapezoidal rule of Lemma 2 to approximate the integral in the iterative scheme (16), we obtain the algorithm:
$$\begin{aligned} &x_i \in \Omega_h \setminus \{x_0\}, \quad n = 0, 1, \dots,\\ &\hat u_0(x_i) = u_0(x_i), \quad \text{for all } x_i \in \Omega_h,\\ &\hat I_{h,n}(x_i) = \sum_{j=0}^{i} \xi_j^i\, k(x_i, x_j, \hat u_n(x_j)),\\ &\hat u_{n+1}(x_i) = (1-\lambda)\, \hat u_n(x_i) + \lambda \big( g(x_i) + f(x_i, \hat I_{h,n}(x_i)) \big). \end{aligned} \qquad (17)$$
The system (17) constitutes the numerical algorithm for approximating problem (1). We prove in the next section that a sequence of solutions computed with this scheme converges to the exact solution of problem (1).
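A minimal Python sketch of the explicit scheme (17), assuming the reconstruction above. The test problem in the usage lines (g = 1, f(x, I) = I, k = u on [0, 0.5], exact solution e^x) is an illustrative choice satisfying α_f α_k (b − a) < 1, not one of the paper's examples.

```python
import math

def solve_volterra(g, f, k, a, b, N, lam=0.5, tol=8e-11, max_iter=10000):
    """Scheme (17): Krasnoselskij iteration combined with trapezoidal quadrature.

    Entirely explicit: no linear or nonlinear algebraic system is solved.
    Returns the grid x_0, ..., x_N and the approximate solution there.
    """
    h = (b - a) / N
    x = [a + i * h for i in range(N + 1)]
    u = [g(xi) + f(xi, 0.0) for xi in x]          # u_0(x) = g(x) + f(x, 0)
    for _ in range(max_iter):
        u_new = [u[0]]                            # at x_0 = a the integral vanishes
        for i in range(1, N + 1):
            # trapezoid on [a, x_i]: weights h/2 at j = 0 and j = i, else h
            I = sum((h / 2 if j in (0, i) else h) * k(x[i], x[j], u[j])
                    for j in range(i + 1))
            u_new.append((1 - lam) * u[i] + lam * (g(x[i]) + f(x[i], I)))
        if max(abs(p - q) for p, q in zip(u_new, u)) <= tol:
            return x, u_new
        u = u_new
    return x, u

# Illustrative test: u(x) = 1 + integral_0^x u(y) dy on [0, 0.5], exact u = e^x
x, u = solve_volterra(lambda x: 1.0, lambda x, I: I, lambda x, y, v: v,
                      0.0, 0.5, 64)
err = max(abs(ui - math.exp(xi)) for xi, ui in zip(x, u))
```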

4. Convergence Analysis

Definition 1
(Maximum Error). The numerical error $e_h$ is the maximum error committed in approximating $u(x_i)$ by the scheme (17) over all $x_i \in \Omega_h$ as $n \to \infty$. That is,
$$e_h = \lim_{n \to \infty} \max_{x_i \in \Omega_h} e_n(x_i), \qquad (18)$$
where
$$e_n(x_i) = |u(x_i) - \hat u_n(x_i)|.$$
Remark 1.
The goal in this section is to show that the quantity $e_n(x_i)$ vanishes as $n \to \infty$ and $h \to 0$, provided assumption (11) holds.
Let us first prove the following lemma.
Lemma 3.
Let $\gamma$ be a Lipschitz operator with constant $\alpha_g$. Then
$$\gamma(u + v) \le \gamma(u) + \alpha_g |v| \quad \text{for all } u, v \in \mathrm{Dom}(\gamma).$$
Proof. 
The result is trivial when $v = 0$. It is also trivial if $v \ne 0$ and $\gamma(u+v) \le \gamma(u)$. We only prove the inequality for the case when $v \ne 0$ and
$$\gamma(u + v) > \gamma(u). \qquad (20)$$
By the Lipschitz continuity of γ , we have
$$|\gamma(u + v) - \gamma(u)| \le \alpha_g |v|.$$
Because of (20), we can drop the absolute value on the left side of the last inequality:
$$\gamma(u + v) - \gamma(u) \le \alpha_g |v|.$$
Hence the result. □
Lemma 4
(Recurrence Relation). The error $\hat R_n(x_i) = |u_n(x_i) - \hat u_n(x_i)|$, committed in using the scheme (17) to approximate the iterative process (16), satisfies the recurrence relation:
$$\hat R_{n+1}(x_i) \le \left( 1 - \lambda + \lambda \alpha_f \alpha_k \sum_{j=0}^{i} \xi_j^i \right) \hat R_n(x_i) + \lambda \alpha_f |R_m|.$$
Proof. 
First, since $\hat u_0(x_i) = u_0(x_i)$, we have $\hat R_0 = 0$.
Setting n = 0 in (17), we have:
$$\begin{aligned} u_1(x_i) &= (1-\lambda)\, u_0(x_i) + \lambda \Big( g(x_i) + f\Big(x_i, \sum_{j=0}^{i} \xi_j^i k(x_i, x_j, u_0(x_j)) + \tilde R_1 \Big) \Big)\\ &\le (1-\lambda)\, u_0(x_i) + \lambda \Big( g(x_i) + f\Big(x_i, \sum_{j=0}^{i} \xi_j^i k(x_i, x_j, u_0(x_j)) \Big) \Big) + \lambda \alpha_f |\tilde R_1| \quad (\text{see Lemma 3})\\ &\le (1-\lambda)\, u_0(x_i) + \lambda \Big( g(x_i) + f\Big(x_i, \sum_{j=0}^{i} \xi_j^i k(x_i, x_j, u_0(x_j)) \Big) \Big) + \lambda \alpha_f |R_m|\\ &= \hat u_1(x_i) + \lambda \alpha_f |R_m|. \end{aligned}$$
Hence,
$$\hat R_1 := |u_1(x_i) - \hat u_1(x_i)| \le \lambda \alpha_f |R_m| = \left( 1 - \lambda + \lambda \alpha_f \alpha_k \sum_{j=0}^{i} \xi_j^i \right) \hat R_0(x_i) + \lambda \alpha_f |R_m|,$$
since $\hat R_0 = 0$.
Similarly, setting n = 1 , we obtain:
$$\begin{aligned} u_2(x_i) &= (1-\lambda)\, u_1(x_i) + \lambda \Big( g(x_i) + f\Big(x_i, \sum_{j=0}^{i} \xi_j^i k(x_i, x_j, u_1(x_j)) + \tilde R_2 \Big) \Big)\\ &\le (1-\lambda)\, \hat u_1(x_i) + (1-\lambda)\, \hat R_1(x_i) + \lambda \Big( g(x_i) + f\Big(x_i, \sum_{j=0}^{i} \xi_j^i k\big(x_i, x_j, \hat u_1(x_j) + \hat R_1\big) + R_m \Big) \Big)\\ &\le (1-\lambda)\, \hat u_1(x_i) + \lambda \Big( g(x_i) + f\Big(x_i, \sum_{j=0}^{i} \xi_j^i k(x_i, x_j, \hat u_1(x_j)) + \alpha_k |\hat R_1| \sum_{j=0}^{i} \xi_j^i + R_m \Big) \Big) + (1-\lambda)\, \hat R_1(x_i)\\ &\le (1-\lambda)\, \hat u_1(x_i) + \lambda \Big( g(x_i) + f\Big(x_i, \sum_{j=0}^{i} \xi_j^i k(x_i, x_j, \hat u_1(x_j)) \Big) \Big) + \Big( 1 - \lambda + \lambda \alpha_f \alpha_k \sum_{j=0}^{i} \xi_j^i \Big) \hat R_1(x_i) + \lambda \alpha_f |R_m|\\ &= \hat u_2(x_i) + \Big( 1 - \lambda + \lambda \alpha_f \alpha_k \sum_{j=0}^{i} \xi_j^i \Big) \hat R_1(x_i) + \lambda \alpha_f |R_m|. \end{aligned}$$
Hence,
$$\hat R_2(x_i) = |u_2(x_i) - \hat u_2(x_i)| \le \left( 1 - \lambda + \lambda \alpha_f \alpha_k \sum_{j=0}^{i} \xi_j^i \right) \hat R_1(x_i) + \lambda \alpha_f |R_m|.$$
In general,
$$\begin{aligned} u_{n+1}(x_i) &= (1-\lambda)\, u_n(x_i) + \lambda \Big( g(x_i) + f\Big(x_i, \sum_{j=0}^{i} \xi_j^i k(x_i, x_j, u_n(x_j)) + \tilde R_{n+1} \Big) \Big)\\ &\le (1-\lambda)\, \hat u_n(x_i) + (1-\lambda)\, \hat R_n(x_i) + \lambda \Big( g(x_i) + f\Big(x_i, \sum_{j=0}^{i} \xi_j^i k\big(x_i, x_j, \hat u_n(x_j) + \hat R_n\big) + R_m \Big) \Big)\\ &\le (1-\lambda)\, \hat u_n(x_i) + \lambda \Big( g(x_i) + f\Big(x_i, \sum_{j=0}^{i} \xi_j^i k(x_i, x_j, \hat u_n(x_j)) + \alpha_k \hat R_n \sum_{j=0}^{i} \xi_j^i + R_m \Big) \Big) + (1-\lambda)\, \hat R_n(x_i)\\ &\le (1-\lambda)\, \hat u_n(x_i) + \lambda \Big( g(x_i) + f\Big(x_i, \sum_{j=0}^{i} \xi_j^i k(x_i, x_j, \hat u_n(x_j)) \Big) \Big) + \Big( 1 - \lambda + \lambda \alpha_f \alpha_k \sum_{j=0}^{i} \xi_j^i \Big) \hat R_n(x_i) + \lambda \alpha_f |R_m|\\ &= \hat u_{n+1}(x_i) + \Big( 1 - \lambda + \lambda \alpha_f \alpha_k \sum_{j=0}^{i} \xi_j^i \Big) \hat R_n(x_i) + \lambda \alpha_f |R_m|. \end{aligned} \qquad (26)$$
Inequality (26) gives:
$$\hat R_{n+1} := |u_{n+1}(x_i) - \hat u_{n+1}(x_i)| \le \left( 1 - \lambda + \lambda \alpha_f \alpha_k \sum_{j=0}^{i} \xi_j^i \right) \hat R_n(x_i) + \lambda \alpha_f |R_m|.$$
This proves the claim. □
Theorem 2
(Convergence). The error $\hat R_n(x_i) = |u_n(x_i) - \hat u_n(x_i)|$ committed in using the scheme (17) to approximate the iterative process (16) satisfies:
$$\hat R_n(x_i) \le c_2 \sum_{j=0}^{n-1} c_1^j,$$
hence
$$\lim_{n \to \infty} \hat R_n(x_i) \le \frac{\alpha_f}{1 - \alpha_f \alpha_k \sum_{j=0}^{i} \xi_j^i}\, |R_m| = O(h^2), \qquad (30)$$
where $c_1 = 1 - \lambda + \lambda \alpha_f \alpha_k \sum_{j=0}^{i} \xi_j^i$ and $c_2 = \lambda \alpha_f |R_m|$.
Proof. 
From Lemma 4, we have
$$\begin{aligned} \hat R_n(x_i) &\le \left( 1 - \lambda + \lambda \alpha_f \alpha_k \sum_{j=0}^{i} \xi_j^i \right) \hat R_{n-1}(x_i) + \lambda \alpha_f |R_m| = c_1 \hat R_{n-1} + c_2\\ &\le c_1 (c_1 \hat R_{n-2} + c_2) + c_2 = c_1^2 \hat R_{n-2} + c_1 c_2 + c_2\\ &\ \vdots\\ &\le c_1^n \hat R_0 + c_2 \sum_{j=0}^{n-1} c_1^j = c_2 \sum_{j=0}^{n-1} c_1^j. \end{aligned}$$
Taking the limit as $n \to \infty$ (note that $c_1 < 1$ by (11)):
$$\lim_{n \to \infty} \hat R_n \le \frac{c_2}{1 - c_1} = \frac{\alpha_f}{1 - \alpha_f \alpha_k \sum_{j=0}^{i} \xi_j^i}\, |R_m| = O(h^2). \qquad \square$$
Theorem 3
(Convergence). The numerical solution $\hat u_n(x)$, computed using the scheme (17), converges to the exact solution $u(x)$.
Proof. 
The error $e_n(x_i)$ between $u(x_i)$ and $\hat u_n(x_i)$ satisfies
$$\begin{aligned} e_n(x_i) &= |u(x_i) - \hat u_n(x_i)| = |u(x_i) - u_n(x_i) + u_n(x_i) - \hat u_n(x_i)|\\ &\le |u(x_i) - u_n(x_i)| + |u_n(x_i) - \hat u_n(x_i)|\\ &\le \alpha^{n-1}\, \|u_1 - u\|_\infty + \frac{\alpha_f}{1 - \alpha_f \alpha_k \sum_{j=0}^{i} \xi_j^i}\, |R_m| = \alpha^{n-1}\, \|u_1 - u\|_\infty + O(h^2), \end{aligned}$$
where $\alpha < 1$ is defined in (11). Hence, the convergence result follows (see Definition 1 and the remark that follows it). □
Remark 2.
Observe the appearance of $1 - \alpha_f \alpha_k \sum_{j=0}^{i} \xi_j^i$ in (30). Since the left-hand side of (30) is non-negative, this quantity must be positive. Since $\alpha_f \alpha_k \sum_{j=0}^{i} \xi_j^i \le \alpha_f \alpha_k (b-a)$, the requirement is satisfied if
$$\alpha_f \alpha_k (b-a) < 1. \qquad (33)$$
This is a requirement for the scheme to be convergent.
Remark 3
(Implication of the analysis). As observed in the proof of Theorem 3, the numerical error consists of two parts: the fixed-point iteration error and the quadrature error. Both must converge to zero for the proposed method to converge to the exact solution, and the solutions are then obtained at minimal computational cost compared to methods which involve solving nonlinear systems. Since the convergence of the fixed-point iteration is guaranteed whenever the operator is a contraction, it follows that, once an appropriate quadrature rule is in place, the contractivity of the associated operator guarantees that numerical solutions can be computed both accurately and efficiently.

5. Numerical Experiments

Numerical examples are now presented to assess the accuracy and efficiency of the scheme (17). The examples are constructed through the method of manufactured solutions [38]. In each of the four problems, a sequence of solutions is computed on meshes of varying sizes. In all the computations, we take $\lambda = 0.5$ and terminate each Krasnoselskij iteration when $|u_{n+1}(x_i) - u_n(x_i)| \le 8 \times 10^{-11}$. The error (in the maximum norm) is computed as in (18), whereas the experimental order of convergence is computed using:
$$\mathrm{EOC} = \frac{\log(e_h / e_{h/2})}{\log 2},$$
see [39].
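The EOC computation from a sequence of errors on successively halved meshes can be sketched as:

```python
import math

def eoc(errors):
    """EOC = log(e_h / e_{h/2}) / log(2) for each consecutive pair of errors."""
    return [math.log(e_h / e_half) / math.log(2)
            for e_h, e_half in zip(errors, errors[1:])]

# Errors decaying like h^2 give EOC entries equal to 2
orders = eoc([1.0e-2, 2.5e-3, 6.25e-4])
```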

5.1. Example 1

The following problem is considered (see [22]):
$$u(x) = g(x) + \frac{\sin x}{1 + \left( \displaystyle\int_0^x u(y)\, e^{\,x - y - u^2(y)}\,dy \right)^2}, \quad x \in [0,1],$$
where
$$g(x) = e^{-x} - \frac{\sin x}{\left( \dfrac{e^x e^{-e^{-2x}}}{2} - \dfrac{e^{x-1}}{2} \right)^2 + 1}.$$
The exact solution of this problem is $u(x) = e^{-x}$. Table 1 tabulates the numerical results and shows that the computed solution converges to the exact solution with second-order accuracy. Figure 1 shows the plots of the numerical and exact solutions on grids with 2, 3, 6, and 20 grid points. The convergence of the method is clearly visible.
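As a check on the reconstruction of Example 1 above (which this sketch assumes: f(x, I) = sin(x)/(1 + I²), k(x, y, u) = u e^{x−y−u²}, exact solution e^{−x}), scheme (17) can be applied directly:

```python
import math

def g(x):
    # manufactured g so that u(x) = exp(-x) solves the equation (reconstruction assumed)
    I = math.exp(x) * (math.exp(-math.exp(-2 * x)) - math.exp(-1)) / 2
    return math.exp(-x) - math.sin(x) / (I * I + 1)

def solve_example1(N, lam=0.5, tol=8e-11, max_iter=5000):
    h = 1.0 / N
    x = [i * h for i in range(N + 1)]
    u = [g(xi) + math.sin(xi) for xi in x]        # u_0 = g + f(., 0); f(x, 0) = sin x
    for _ in range(max_iter):
        v = [u[0]]
        for i in range(1, N + 1):
            I = sum((h / 2 if j in (0, i) else h)
                    * u[j] * math.exp(x[i] - x[j] - u[j] ** 2)
                    for j in range(i + 1))
            v.append((1 - lam) * u[i] + lam * (g(x[i]) + math.sin(x[i]) / (I * I + 1)))
        if max(abs(p - q) for p, q in zip(v, u)) <= tol:
            break
        u = v
    return x, v

x, u = solve_example1(32)
err = max(abs(ui - math.exp(-xi)) for xi, ui in zip(x, u))
```

Since g is manufactured from the assumed data, the run is self-consistent regardless of the reconstruction; the error should shrink at second order as N grows.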

5.2. Example 2

In this example, we consider the problem (see also [22]):
$$u(x) = \frac{x^2}{9} - \left( \frac{x^6}{729} + 1 \right)^{1/3} + f\left(x, \int_0^1 k(x, y, u(y))\,dy\right),$$
where
$$f(x, I) = (I^2 + 1)^{1/3}, \qquad k(x, y, u) = \begin{cases} u, & \text{if } y \le x,\\ 0, & \text{otherwise}. \end{cases}$$
The exact solution to this problem is $u(x) = x^2/9$. The numerical results tabulated in Table 2 show that the computed solution converges to the exact solution with second-order accuracy. To make the discussion easier to follow, the solutions computed on different grids are plotted in Figure 2, where the convergence of the method is evident.
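Example 2 is simple enough (k = u for y ≤ x, f(x, I) = (I² + 1)^{1/3}) for a compact, self-contained run of scheme (17); the g below encodes the reconstructed data, so u(x) = x²/9 is exact by construction.

```python
def solve_example2(N, lam=0.5, tol=8e-11, max_iter=5000):
    h = 1.0 / N
    x = [i * h for i in range(N + 1)]
    g = [xi ** 2 / 9 - (xi ** 6 / 729 + 1) ** (1 / 3) for xi in x]
    u = [gi + 1.0 for gi in g]                    # u_0 = g + f(., 0); f(x, 0) = 1
    for _ in range(max_iter):
        v = [u[0]]
        for i in range(1, N + 1):
            # kernel k = u for y <= x: trapezoid over [0, x_i] only
            I = sum((h / 2 if j in (0, i) else h) * u[j] for j in range(i + 1))
            v.append((1 - lam) * u[i] + lam * (g[i] + (I * I + 1) ** (1 / 3)))
        if max(abs(p - q) for p, q in zip(v, u)) <= tol:
            break
        u = v
    return x, v

x, u = solve_example2(32)
err = max(abs(ui - xi ** 2 / 9) for xi, ui in zip(x, u))
```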

5.3. Example 3

As a third example, we consider the problem:
$$u(x) = g(x) + \frac{x^2}{1 + \left( \displaystyle\int_0^x \frac{x^3 y^5}{1 + u^2(y)}\,dy \right)^2}, \quad x \in [0,1],$$
where
$$g(x) = \frac{x^2 \left( x \left( x^6 \log^2(x^6 + 1) + 36 \right) - 36 \right)}{x^6 \log^2(x^6 + 1) + 36}.$$
The exact solution of this problem is $u(x) = x^3$. It can also be seen in Table 3 that the numerical solution converges to the exact solution with second order of convergence. Moreover, Figure 3 demonstrates the convergence of the method as the grid is refined from just two points to twenty points. A striking feature of the method is that high accuracy is attained even on very coarse grids.

5.4. Example 4

Finally, we consider the problem:
$$u(x) = g(x) + x^3 + \frac{1}{3} \left[ \frac{1}{4} \left( \int_0^x k(x,y,u(y))\,dy \right)^4 - \frac{3}{2} \left( \int_0^x k(x,y,u(y))\,dy \right)^2 + \int_0^x k(x,y,u(y))\,dy \right], \quad x \in [0,1],$$
where
$$k(x, y, u(y)) = x^4 + y^2 - \frac{1}{3} u^3(y) + u^2(y)$$
and, writing $S(x) = x^5 + \frac{x^3}{3} + \frac{x}{2} + \frac{\sin^3 x}{9} + \frac{\sin x \cos x}{2} - \frac{\sin x}{3}$,
$$g(x) = \cos x - \frac{x^5}{3} - \frac{10 x^3}{9} - \frac{x}{6} - \frac{S(x)^4}{12} + \frac{S(x)^2}{2} - \frac{\sin^3 x}{27} - \frac{\sin x \cos x}{6} + \frac{\sin x}{9}.$$
The exact solution to this problem is u ( x ) = cos ( x ) . Similar to the previous examples, one can see in Table 4 that the numerical solution converges to the exact solution with second order of convergence. Figure 4 displays the plots of the numerical and exact solutions on different grids. It is evident that the method converges.

6. Performance Analysis

In this section, we reuse the examples of Section 5 to assess the computational efficiency of the proposed method in comparison with the nonlinear-system-based (DD) method of [22]. We do this by comparing the CPU time used by each method, and we also briefly discuss memory usage. For Example 1, we set λ = 0.9; for Example 2, λ = 0.8; Example 3 used λ = 0.9; and Example 4 used λ = 0.95. Both algorithms (the proposed method and that of [22]) solve each of the four example problems on N = 4096 equal sub-intervals of [0, 1]. Table 5, Table 6, Table 7 and Table 8 display the elapsed CPU time and the error committed by each method. The proposed method clearly outperforms the direct discretization (DD) method [22] in computational efficiency, while the error of the DD scheme is no better than that of the new scheme.
In addition to the above merits, the new scheme is more memory efficient than the DD scheme, since it does not require solving linear or nonlinear systems. From a programming point of view, the new scheme is also very easy to implement. All these advantages lead to the conclusion that the present method is highly competitive.

7. Conclusions

A fixed-point iteration, the Krasnoselskij scheme, is used to construct an efficient method for solving nonlinear functional Volterra equations, without the need to solve nonlinear systems and without requiring differentiability of the kernels or functionals. Convergence to the exact solution is thoroughly analyzed. Numerical experiments are provided and lead to the following conclusions:
  • The scheme is very accurate and competitive;
  • The scheme is less computationally expensive than the direct discretization methods, such as the one proposed in [22];
  • It is also more memory efficient than that of [22];
  • This approach should be adopted whenever possible.
The implication is that if the associated operator is a contraction (satisfying a condition similar to inequality (33)), then fixed-point methods can be formulated to approximate the solution accurately and efficiently. As a further study, we intend to investigate the convergence and performance of other iterative algorithms for related operator equations.

Author Contributions

Conceptualization, C.N. and S.M.; methodology, C.N. and S.M.; software, C.N.; validation, C.N. and S.M.; formal analysis, C.N.; investigation, C.N.; resources, C.N.; writing—original draft preparation, C.N. and S.M.; writing—review and editing, C.N. and S.M.; visualization, C.N.; supervision, C.N. and S.M.; project administration, C.N.; funding acquisition, S.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the reviewers and editors for their valuable comments, which improved the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Abdou, M. On asymptotic methods for Fredholm–Volterra integral equation of the second kind in contact problems. J. Comput. Appl. Math. 2003, 154, 431–446. [Google Scholar] [CrossRef]
  2. Le, T.D.; Moyne, C.; Murad, M.; Lima, S. A two-scale non-local model of swelling porous media incorporating ion size correlation effects. J. Mech. Phys. Solids 2013, 61, 2493–2521. [Google Scholar] [CrossRef]
  3. Rocha, A.C.; Murad, M.A.; Moyne, C.; Oliveira, S.P.; Le, T.D. A new methodology for computing ionic profiles and disjoining pressure in swelling porous media. Comput. Geosci. 2016, 20, 975–996. [Google Scholar] [CrossRef]
  4. Hu, S.; Khavanin, M.; Zhuang, W. Integral equations arising in the kinetic theory of gases. Appl. Anal. 1989, 34, 261–266. [Google Scholar] [CrossRef]
  5. Jerri, A. Introduction to Integral Equations with Applications; John Wiley & Sons: Hoboken, NJ, USA, 1999. [Google Scholar]
  6. O’Regan, D. Existence results for nonlinear integral equations. J. Math. Anal. Appl. 1995, 192, 705–726. [Google Scholar] [CrossRef]
  7. Prüss, J. Evolutionary Integral Equations and Applications; Birkhäuser: Basel, Switzerland, 2013; Volume 87. [Google Scholar]
  8. Wazwaz, A.M. A First Course in Integral Equations; World Scientific Publishing Company: Singapore, 2015. [Google Scholar]
  9. Micula, S. On Some Iterative Numerical Methods for Mixed Volterra–Fredholm Integral Equations. Symmetry 2019, 11, 1200. [Google Scholar] [CrossRef]
  10. Maleknejad, K.; Hadizadeh, M. A new computational method for Volterra-Fredholm integral equations. Comput. Math. Appl. 1999, 37, 1–8. [Google Scholar] [CrossRef]
  11. Wazwaz, A.M. A reliable treatment for mixed Volterra–Fredholm integral equations. Appl. Math. Comput. 2002, 127, 405–414. [Google Scholar] [CrossRef]
  12. Sidorov, D. Existence and blow-up of Kantorovich principal continuous solutions of nonlinear integral equations. Differ. Equ. 2014, 50, 1217–1224. [Google Scholar] [CrossRef]
  13. Manochehr, K. Triangular functions for numerical solution of the nonlinear Volterra Integral Equations. J. Appl. Math. Comput. 2021, 68, 1979–2002. [Google Scholar]
  14. Ordokhani, Y.; Razzaghi, M. Solution of nonlinear Volterra–Fredholm–Hammerstein integral equations via a collocation method and rationalized Haar functions. Appl. Math. Lett. 2008, 21, 4–9. [Google Scholar] [CrossRef]
  15. Brunner, H. On the numerical solution of nonlinear Volterra–Fredholm integral equations by collocation methods. SIAM J. Numer. Anal. 1990, 27, 987–1000. [Google Scholar] [CrossRef]
  16. Ezzati, R.; Najafalizadeh, S. Numerical methods for solving linear and nonlinear Volterra-Fredholm integral equations by using CAS wavelets. World Appl. Sci. J. 2012, 18, 1847–1854. [Google Scholar]
  17. Shiralashetti, S.; Lamani, L. CAS Wavelets Stochastic Operational Matrix of Integration and its Application for Solving Stochastic Itô-Volterra Integral Equations. Jordan J. Math. Stat. 2021, 14, 555–580. [Google Scholar]
  18. Xu, L. Variational iteration method for solving integral equations. Comput. Math. Appl. 2007, 54, 1071–1078. [Google Scholar] [CrossRef]
  19. Sheth, S.S.; Singh, D. An Analytical Approximate Solution of Linear, System of Linear and Non Linear Volterra Integral Equations Using Variational Iteration Method. In Proceedings of the International Conference on Advancements in Computing & Management (ICACM), Jaipur, India, 13–14 April 2019. [Google Scholar]
  20. Yousefi, S.A.; Lotfi, A.; Dehghan, M. He’s variational iteration method for solving nonlinear mixed Volterra–Fredholm integral equations. Comput. Math. Appl. 2009, 58, 2172–2176. [Google Scholar] [CrossRef]
  21. Hamoud, A.; Ghadle, K. On the numerical solution of nonlinear Volterra-Fredholm integral equations by variational iteration method. Int. J. Adv. Sci. Tech. Res. 2016, 3, 45–51. [Google Scholar]
  22. Bazm, S.; Lima, P.; Nemati, S. Analysis of the Euler and trapezoidal discretization methods for the numerical solution of nonlinear functional Volterra integral equations of Urysohn type. J. Comput. Appl. Math. 2021, 398, 113628. [Google Scholar] [CrossRef]
  23. Nwaigwe, C.; Benedict, D.N. Generalized Banach fixed-point theorem and numerical discretization for nonlinear Volterra–Fredholm equations. J. Comput. Appl. Math. 2023, 425, 115019. [Google Scholar] [CrossRef]
  24. Hasan, P.M.; Suleiman, N. Numerical Solution of Mixed Volterra-Fredholm Integral Equations Using Linear Programming Problem. Appl. Math. 2018, 8, 42–45. [Google Scholar]
  25. Chen, Z.; Jiang, W. An approximate solution for a mixed linear Volterra–Fredholm integral equation. Appl. Math. Lett. 2012, 25, 1131–1134. [Google Scholar] [CrossRef]
  26. Atkinson, K.E. The numerical solution of integral equations of the second kind. In Cambridge Monographs on Applied and Computational Mathematics; Cambridge University Press: Cambridge, UK, 1996. [Google Scholar]
  27. Aziz, I.; ul Islam, S. New algorithms for the numerical solution of nonlinear Fredholm and Volterra integral equations using Haar wavelets. J. Comput. Appl. Math. 2013, 239, 333–345. [Google Scholar] [CrossRef]
  28. Nwaigwe, C. Solvability and Approximation of Nonlinear Functional Mixed Volterra–Fredholm Equation in Banach Space. J. Integral Equ. Appl. 2022, 34, 489–500. [Google Scholar] [CrossRef]
  29. Youssri, Y.; Hafez, R. Chebyshev collocation treatment of Volterra–Fredholm integral equation with error analysis. Arab. J. Math. 2020, 9, 471–480. [Google Scholar] [CrossRef]
  30. Berinde, V. Iterative Approximation of Fixed Points; Springer: Berlin/Heidelberg, Germany, 2007. [Google Scholar]
  31. Okeke, G.A. Convergence analysis of the Picard–Ishikawa hybrid iterative process with applications. Afr. Mat. 2019, 30, 817–835. [Google Scholar] [CrossRef]
  32. Ofem, A.; Igbokwe, D.; Udo-utun, X. Implicit iteration process for Lipschitzian α-hemicontraction semigroups. MathLAB J. 2020, 7, 43–52. [Google Scholar]
  33. Liou, Y.C. Computing the fixed points of strictly pseudocontractive mappings by the implicit and explicit iterations. Abstr. Appl. Anal. 2012, 2012, 315835. [Google Scholar] [CrossRef]
  34. Osilike, M. Nonlinear accretive and pseudo-contractive operator equations in Banach spaces. Nonlinear Anal. 1996, 31, 779–789. [Google Scholar]
  35. Hasan, P.M.; Sulaiman, N.A.; Soleymani, F.; Akgül, A. The existence and uniqueness of solution for linear system of mixed Volterra-Fredholm integral equations in Banach space. AIMS Math. 2020, 5, 226–235. [Google Scholar] [CrossRef]
  36. Micula, S. Numerical Solution of Two-Dimensional Fredholm–Volterra Integral Equations of the Second Kind. Symmetry 2021, 13, 1326. [Google Scholar] [CrossRef]
  37. Burden, R.L. Numerical Analysis; Cengage Learning: Boston, MA, USA, 2011. [Google Scholar]
  38. Nwaigwe, C. An Unconditionally Stable Scheme for Two-Dimensional Convection-Diffusion-Reaction Equations. 2022. Available online: https://www.researchgate.net/publication/357606287_An_Unconditionally_Stable_Scheme_for_Two-Dimensional_Convection-Diffusion-Reaction_Equations (accessed on 11 February 2023).
  39. Nwaigwe, C. Coupling Methods for 2d/1d Shallow Water Flow Models for Flood Simulations. Ph.D. Thesis, University of Warwick, Coventry, UK, 2016. [Google Scholar]
Figure 1. Plot of solution of Example 1 for different grid sizes.
Figure 2. Plot of solution of Example 2 for different grid sizes.
Figure 3. Plot of solution of Example 3 for different grid sizes.
Figure 4. Plot of solution of Example 4 for different grid sizes.
Table 1. Numerical Results for Example 1. N = number of sub-intervals. Errors are computed with the infinity norm. EOC = Experimental order of convergence.
NErrorEOC
10.01213743114921989-
20.0050905013135272851.2535834665481371
40.00134644498262515011.9186524594014347
80.000350115416898166831.943252785210336
168.790389104307295 × 10 5 1.993831658033252
322.2059673406571445 × 10 5 1.9945155953164302
645.515784523291156 × 10 6 1.9997734284826107
1281.3789984285583756 × 10 6 1.9999452855688975
2563.4476374671799093 × 10 7 1.9999408304436097
5128.619000313458969 × 10 8 2.000015626096606
10242.1545896289332234 × 10 8 2.000107431585809
20485.385888846021203 × 10 9 2.000156753556326
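The EOC column is consistent with the standard definition for a mesh that is halved at each refinement, EOC = log₂(E_N / E_{2N}). A minimal sketch of this computation, using the first four error values of Table 1 as input data:

```python
import math

# First four errors from Table 1 (Example 1); N doubles at each row.
errors = [0.01213743114921989, 0.005090501313527285,
          0.0013464449826251501, 0.00035011541689816683]

# EOC between successive refinements: log2(E_N / E_{2N}).
eocs = [math.log2(e_coarse / e_fine)
        for e_coarse, e_fine in zip(errors, errors[1:])]
```

The resulting values reproduce the tabulated EOC column (1.2536, 1.9187, 1.9433), approaching the theoretical order 2 as N grows.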
Table 2. Numerical results for Example 2.

N | Error | EOC
1 | 0.0005814856077918928 | –
2 | 0.00012230339333171858 | 2.2492790520697135
4 | 2.91619860433856 × 10⁻⁵ | 2.0683035510752483
8 | 7.202538943651415 × 10⁻⁶ | 2.017511515015079
16 | 1.7951203420268902 × 10⁻⁶ | 2.0044249926801028
32 | 4.4841246069071694 × 10⁻⁷ | 2.001182289199546
64 | 1.1207335620655456 × 10⁻⁷ | 2.000383029609112
128 | 2.80046026646108 × 10⁻⁸ | 2.000707475089127
256 | 6.996875440146155 × 10⁻⁹ | 2.0008812453274856
512 | 1.7429708232263863 × 10⁻⁹ | 2.005162389306972
1024 | 4.336317416253621 × 10⁻¹⁰ | 2.0070061491915854
2048 | 1.0526418625644851 × 10⁻¹⁰ | 2.042455689416187
4096 | 2.525327169600189 × 10⁻¹¹ | 2.059472461771361
Table 3. Numerical results for Example 3. Results show second-order convergence.

N | Error | EOC
1 | 0.05163523545962034 | –
2 | 0.006373161885883882 | 3.018274672426548
4 | 0.0012307613112914062 | 2.372458308685062
8 | 0.0002983915555665462 | 2.044272385454718
16 | 7.400954062197762 × 10⁻⁵ | 2.0114235414553514
32 | 1.8465882868468064 × 10⁻⁵ | 2.002849020979508
64 | 4.6142027866347135 × 10⁻⁶ | 2.000708925979108
128 | 1.1534254369394148 × 10⁻⁶ | 2.00015666531903
256 | 2.883532977948633 × 10⁻⁷ | 2.0000153169770556
512 | 7.209643260175369 × 10⁻⁸ | 1.9998377416249797
1024 | 1.802702365161224 × 10⁻⁸ | 1.9997666548103157
2048 | 4.511132578599586 × 10⁻⁹ | 1.9985996290821917
4096 | 1.129273896616212 × 10⁻⁹ | 1.998094243306733
8192 | 2.8453139844231146 × 10⁻¹⁰ | 1.9887356731658479
Table 4. Numerical results for Example 4. Results show second-order convergence.

N | Error | EOC
1 | 0.12721182356988314 | –
2 | 0.019477425482774535 | 2.707357866274032
4 | 0.004206018337715833 | 2.211275950082332
8 | 0.0010008191561557966 | 2.0712738313024626
16 | 0.0002468037760221531 | 2.019744936190632
32 | 6.148467592825835 × 10⁻⁵ | 2.0050656757391567
64 | 1.5357610263277977 × 10⁻⁵ | 2.0012731451169947
128 | 3.838580611259523 × 10⁻⁶ | 2.000308890953441
256 | 9.596195429395493 × 10⁻⁷ | 2.0000385014534188
512 | 2.3991099107334435 × 10⁻⁷ | 1.999963285326972
1024 | 5.999148333657445 × 10⁻⁸ | 1.9996696446899704
2048 | 1.5002635800343 × 10⁻⁸ | 1.999541714861857
Table 5. Performance results from Example 1. (Proposed scheme used λ = 0.9).

Scheme | CPU Time (s) | Error
Proposed Method | 210.16 | 1.347 × 10⁻⁹
Direct Discretization | 839.41 | 1.347 × 10⁻⁹
Table 6. Performance results from Example 2. (Proposed scheme used λ = 0.8).

Scheme | CPU Time (s) | Error
Proposed Method | 72.24 | 2.714 × 10⁻¹¹
Direct Discretization | 144.52 | 2.738 × 10⁻¹¹
Table 7. Performance results from Example 3. (Proposed scheme used λ = 0.9).

Scheme | CPU Time (s) | Error
Proposed Method | 122.89 | 1.127 × 10⁻⁹
Direct Discretization | 264.84 | 1.127 × 10⁻⁹
Table 8. Performance results from Example 4. (Proposed scheme used λ = 0.95).

Scheme | CPU Time (s) | Error
Proposed Method | 149.10 | 3.75 × 10⁻⁹
Direct Discretization | 189.18 | 3.75 × 10⁻⁹
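The parameter λ reported in Tables 5–8 is the damping parameter of the Krasnoselskij iteration underlying the proposed scheme, whose generic fixed-point step is x_{k+1} = (1 − λ)x_k + λT(x_k). The following is a minimal sketch of that iteration on a scalar toy contraction T(x) = cos(x), not the authors' full collocation–quadrature operator:

```python
import math

def krasnoselskij(T, x0, lam, tol=1e-12, max_iter=1000):
    """Damped fixed-point iteration x_{k+1} = (1 - lam)*x_k + lam*T(x_k)."""
    x = x0
    for _ in range(max_iter):
        x_new = (1 - lam) * x + lam * T(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Toy example: approximate the fixed point of T(x) = cos(x).
root = krasnoselskij(math.cos, 1.0, lam=0.9)
```

For λ = 1 this reduces to plain Picard iteration; values of λ slightly below 1, as used in the tables, damp the update while preserving the fixed point.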
Nwaigwe, C.; Micula, S. Fast and Accurate Numerical Algorithm with Performance Assessment for Nonlinear Functional Volterra Equations. Fractal Fract. 2023, 7, 333. https://doi.org/10.3390/fractalfract7040333