Article

Improved Block-Pulse Functions for Numerical Solution of Mixed Volterra-Fredholm Integral Equations

1 School of Mathematics and Information Science, Henan Polytechnic University, Jiaozuo 454000, China
2 School of Science, Xi'an University of Architecture and Technology, Xi'an 710055, China
3 National Engineering Laboratory for Modern Silk, College of Textile and Clothing Engineering, Soochow University, 199 Ren-Ai Road, Suzhou 215006, China
4 Department of Mathematics, Faculty of Education, Ain Shams University, Roxy, Cairo 11566, Egypt
5 Department of Mathematics and Computer Science, Faculty of Science, Menoufia University, Menoufia 12946, Egypt
* Author to whom correspondence should be addressed.
Axioms 2021, 10(3), 200; https://doi.org/10.3390/axioms10030200
Submission received: 17 July 2021 / Revised: 19 August 2021 / Accepted: 19 August 2021 / Published: 24 August 2021

Abstract

The present paper employs a numerical method based on improved block–pulse basis functions (IBPFs), mainly to solve Volterra–Fredholm integral equations of the second kind. These equations are reduced to a linear system of algebraic equations by using the IBPFs together with the operational matrix of integration. The proposed modification reduces the computer time needed to solve the resulting system of algebraic equations and improves the efficiency over the regular block–pulse basis functions (BPFs). Additionally, the paper establishes the uniqueness and convergence theorems for the solution. Numerical examples are presented to illustrate the efficiency and accuracy of the method, and tables and graphs confirm how highly efficient the method is.

1. Introduction

In recent years, there has been a growing interest in the formulation of many engineering and physical problems in terms of integral equations. Therefore, the parallel literature on the numerical solution of these equations is growing rapidly. This article gives a further contribution to what is becoming a subject of increasing concern to scientists, engineers, and mathematicians. Generally, integral equations are hard to solve analytically. During the last decade, many researchers have attempted to solve integral equations using numerical or perturbation methods. The time collocation method and the projection method were successful in solving them, and details can be found in [1]. Other authors applied iterative methods such as the Lagrange method in [2] to solve these equations. Researchers also used the modified homotopy perturbation method as in [3]. The rationalized Haar function method was widely used in [4,5]. Rizkalla et al. used the differential transformation method in [6]. Pour-Mahmoud et al. applied the Tau method in [7]. In 1999, He [8] solved linear differential and integral equations by using a new method called the homotopy perturbation method (HPM). Furthermore, he managed to solve some nonlinear problems [9]. After that, he developed his technique to work on more complicated problems and introduced solutions to a number of them [10]. The Adomian decomposition method was used by Wazwaz in [11]. Wazwaz made many modifications to the method and succeeded in obtaining approximate solutions for most problems involving these equations. The latest work aimed at modifying He's HPM was carried out in 2021 by L. Akinyemi et al. for one-dimensional fractional differential equations [12]. L. Akinyemi and Olaniyi S. Iyiola applied the modified HPM to multi-dimensional fractional differential equations and obtained accurate and reliable results [13]. Noeiaghdam et al. also found that the homotopy perturbation method is a powerful tool for integral equations [14]. Recently, Ji-Huan He proposed a simple and effective method to solve integral equations [15].
In this paper, IBPFs are presented. Some theorems are proved for the IBPF method, which show some results of these numerical expansions. It is shown that these expansions are more precise than the results of the block–pulse expansions. The functions are disjoint, orthogonal, and complete. Owing to the disjointness of the IBPFs, the cross terms disappear on each subinterval when operations such as multiplication and division are applied. Additionally, the orthogonality property of the IBPFs causes the operational matrix to be sparse, which makes the numerical process easier and faster. The completeness of the IBPFs guarantees that an arbitrarily small mean-square error can be obtained for any real bounded function having only a finite number of discontinuity points in the interval $x \in [0,1)$. By increasing the number of terms in the improved block–pulse series, the numerical solution converges to the exact solution.
The rest of the paper is organized as follows: In Section 2, the IBPFs and their properties, as well as their application to different types of linear and nonlinear integral equations, are described. The convergence analysis is discussed in Section 3. Numerical results are given in Section 4 to illustrate the efficiency and accuracy of the proposed method. Finally, Section 5 gives the main findings of the paper as conclusions.

2. The Basic Idea of the New Improved Block–Pulse Function (IBPF)

Suppose that an integral equation is in the form of the Volterra–Fredholm integral equation of the second kind as follows:
$$y(t) = f(t) + \int_0^1 k_1(t,s)\,y(s)\, ds + \int_0^t k_2(t,s)\,y(s)\, ds, \quad t \in [0,1),$$
or
$$y(t) = f(t) + \int_0^1 k_1(t,s)\,(y(s))^r\, ds + \int_0^t k_2(t,s)\,(y(s))^q\, ds, \quad t \in [0,1),$$
where $y(t)$ is the function to be determined, $f(t)$ is an analytic function over the interval $[0,1)$, $k(t,s)$, $k_1(t,s)$ and $k_2(t,s)$ are the kernels of the integral equation, which are analytic on $[0,1)\times[0,1)$, and $r$ and $q$ are non-negative integers. All these functions can be represented in vector form as follows:
$$Y_m = [y_0(t), y_1(t), \ldots, y_m(t)]^T,$$
$$F_m = [f_0(t), f_1(t), \ldots, f_m(t)]^T,$$
$$K_m = [k_{ij}(t,s)], \quad i, j = 0, 1, \ldots, m.$$
The IBPFs were first introduced by Mirzaee [16], who used them to solve a system of integral equations. In this article, however, the operational matrix of the Volterra integral term is modified to achieve better results, and linear as well as nonlinear integral equations are solved; moreover, mixed Fredholm and Volterra integral equations are considered. The variable-interval block–pulse functions are derived from the regular block–pulse functions, but with a slight change in the interval widths. This change causes a substantial alteration in the algorithm of the method, which will be studied in detail. The variable-interval block–pulse functions form a set of $m+1$ functions defined over the interval $[0,1)$ as follows:
$$\varphi_0(t) = \begin{cases} 1, & t \in \left[0, \frac{1}{2m}\right), \\ 0, & \text{otherwise}, \end{cases}$$
$$\varphi_i(t) = \begin{cases} 1, & t \in \left[\frac{i}{m} - \frac{1}{2m}, \frac{i}{m} + \frac{1}{2m}\right), \\ 0, & \text{otherwise}, \end{cases} \qquad i = 1, 2, \ldots, m-1,$$
$$\varphi_m(t) = \begin{cases} 1, & t \in \left[1 - \frac{1}{2m}, 1\right), \\ 0, & \text{otherwise}, \end{cases}$$
where $m$ is a positive integer that represents the number of sub-intervals, chosen according to the accuracy required for the problem. The IBPFs are disjoint; therefore, one obtains:
$$\varphi_i(t)\,\varphi_j(t) = \begin{cases}\varphi_i(t), & i = j,\\ 0, & \text{otherwise},\end{cases}$$
where $i, j = 0, 1, \ldots, m$. The IBPFs are also orthogonal to each other:
$$\int_0^1 \varphi_i(t)\,\varphi_j(t)\, dt = \begin{cases}\frac{1}{2m}, & i = j = 0 \text{ or } i = j = m,\\[2pt] \frac{1}{m}, & i = j = 1, 2, \ldots, m-1,\\[2pt] 0, & \text{otherwise},\end{cases}$$
where $t \in [0,1)$.
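To make the definition concrete, the following short Python sketch evaluates the vector of IBPFs $[\varphi_0(t), \ldots, \varphi_m(t)]$ at a point $t \in [0,1)$; the function name ibpf and its NumPy return type are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def ibpf(t, m):
    """Evaluate the m+1 improved block-pulse functions at a scalar t in [0, 1)."""
    phi = np.zeros(m + 1)
    h = 1.0 / m
    if t < h / 2:                 # first block [0, 1/(2m))
        phi[0] = 1.0
    elif t >= 1.0 - h / 2:        # last block [1 - 1/(2m), 1)
        phi[m] = 1.0
    else:                         # middle blocks centred at i/m, width 1/m
        phi[int(np.floor(t / h + 0.5))] = 1.0
    return phi

# With m = 4 the blocks are [0, 0.125), [0.125, 0.375), ..., [0.875, 1)
print(ibpf(0.30, 4))              # -> [0. 1. 0. 0. 0.]
```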
The first $m+1$ terms of the IBPFs can be written in vector form:
$$\phi_m(t) = [\varphi_0(t), \varphi_1(t), \ldots, \varphi_m(t)]^T, \quad t \in [0,1).$$
Equation (11) gives:
$$\phi_m(t)\,\phi_m^T(t) = \begin{pmatrix}\varphi_0(t) & 0 & \cdots & 0\\ 0 & \varphi_1(t) & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \varphi_m(t)\end{pmatrix} = \mathrm{diag}(\phi_m(t)).$$
Furthermore, we have:
$$\int_0^t \varphi_0(s)\, ds = \begin{cases} t, & t \in \left[0, \frac{1}{2m}\right),\\ \frac{1}{2m}, & \text{otherwise},\end{cases}$$
$$\int_0^t \varphi_i(s)\, ds = \begin{cases} 0, & t \in \left[0, \frac{i}{m} - \frac{1}{2m}\right),\\ t - \left(\frac{i}{m} - \frac{1}{2m}\right), & t \in \left[\frac{i}{m} - \frac{1}{2m}, \frac{i}{m} + \frac{1}{2m}\right),\\ \frac{1}{m}, & \text{otherwise},\end{cases} \qquad i = 1, \ldots, m-1,$$
$$\int_0^t \varphi_m(s)\, ds = \begin{cases} t - \left(1 - \frac{1}{2m}\right), & t \in \left[1 - \frac{1}{2m}, 1\right),\\ 0, & \text{otherwise}.\end{cases}$$
It is to be noted that, on the first and last sub-intervals (i.e., for $i = 0$ or $i = m$), the ramp terms $t$ and $t - \left(1 - \frac{1}{2m}\right)$ can be approximated by their midpoint value $\frac{1}{4m}$, whereas on any of the middle sub-intervals, $t - \left(\frac{i}{m} - \frac{1}{2m}\right)$ can be approximated by $\frac{1}{2m}$. It can be deduced that the integration of the vector $\phi_m(t)$ defined in Equation (11) can be represented approximately as:
$$\int_0^t \phi_m(s)\, ds \approx B\,\phi_m(t),$$
where:
$$B = \frac{1}{m}\begin{pmatrix}\frac{1}{2} & 1 & 1 & \cdots & 1 & 1\\ 0 & \frac{1}{2} & 1 & \cdots & 1 & 1\\ 0 & 0 & \frac{1}{2} & \cdots & 1 & 1\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & 0 & \cdots & \frac{1}{2} & 1\\ 0 & 0 & 0 & \cdots & 0 & \frac{1}{2}\end{pmatrix}.$$
This operational matrix will be modified to obtain better results than that used in [16]. By using the relation in Equation (12), one obtains:
$$\int_0^1 \phi_m(s)\,\phi_m^T(s)\, ds \approx V,$$
where:
$$V = \frac{1}{2m}\begin{pmatrix}1 & 0 & 0 & \cdots & 0 & 0\\ 0 & 2 & 0 & \cdots & 0 & 0\\ 0 & 0 & 2 & \cdots & 0 & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & 0 & \cdots & 2 & 0\\ 0 & 0 & 0 & \cdots & 0 & 1\end{pmatrix}.$$
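The two operational matrices can be written down directly from their definitions above: $B$ is upper triangular with $\tfrac12$ on the diagonal and ones above it, scaled by $\tfrac1m$, while $V$ is $\mathrm{diag}(1, 2, \ldots, 2, 1)/(2m)$. The short sketch below builds both; the helper names op_B and op_V are assumptions made here for illustration.

```python
import numpy as np

def op_B(m):
    """Operational matrix B: 1/2 on the diagonal, ones above it, scaled by 1/m."""
    B = np.triu(np.ones((m + 1, m + 1)), k=1)   # ones strictly above the diagonal
    np.fill_diagonal(B, 0.5)
    return B / m

def op_V(m):
    """Matrix V: diag(1, 2, ..., 2, 1) / (2m), the Gram matrix of the IBPFs."""
    d = np.full(m + 1, 2.0)
    d[0] = d[-1] = 1.0
    return np.diag(d) / (2 * m)

print(op_B(4))
print(op_V(4))
```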
Suppose that $y(t)$ is a continuous function with $y(t) \in L^2[0,1)$; it may be expanded by the IBPFs as follows:
$$y(t) \approx y_m(t) = \sum_{i=0}^{m} y_i\,\varphi_i(t) = Y_m^T\phi_m(t) = \phi_m^T(t)\,Y_m,$$
where only the first $m+1$ terms of Equation (20) are considered, $m$ is a power of 2, and $Y_m$ is an $(m+1)\times 1$ vector given by:
$$Y_m = [y_0, y_1, \ldots, y_m]^T.$$
Additionally, $f(t)$ can be expanded by the IBPFs as:
$$f(t) \approx f_m(t) = \sum_{i=0}^{m} f_i\,\varphi_i(t) = F_m^T\phi_m(t) = \phi_m^T(t)\,F_m,$$
where $F_m$ is an $(m+1)\times 1$ vector given by:
$$F_m = [f_0, f_1, \ldots, f_m]^T,$$
and $\phi_m(t)$ is defined in Equation (11). The $y_i$ are the improved block–pulse coefficients, which are obtained by:
$$y_i = \begin{cases} 2m\displaystyle\int_0^{\frac{1}{2m}} y(t)\, dt, & i = 0,\\[4pt] m\displaystyle\int_{\frac{i}{m}-\frac{1}{2m}}^{\frac{i}{m}+\frac{1}{2m}} y(t)\, dt, & 1 \le i \le m-1,\\[4pt] 2m\displaystyle\int_{1-\frac{1}{2m}}^{1} y(t)\, dt, & i = m.\end{cases}$$
Similarly, a function of two variables, $k(t,s) \in L^2\big([0,1)\times[0,1)\big)$, can be approximated by the IBPFs as follows:
$$k(t,s) \approx k_m(t,s) = \phi_m^T(t)\,K_m\,\phi_m(s).$$
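In practice the expansion coefficients can be computed by numerical quadrature. The sketch below implements the formula for the $y_i$ above; for the kernel matrix $K_m$ it takes block averages of $k(t,s)$ over the corresponding sub-rectangles, which is an assumption made here for illustration (the paper does not spell that formula out). The names ibpf_edges, ibpf_coeffs, and ibpf_kernel are likewise illustrative.

```python
import numpy as np
from scipy.integrate import quad, dblquad

def ibpf_edges(m):
    """Sub-interval edges 0, 1/(2m), 3/(2m), ..., 1 - 1/(2m), 1."""
    h = 1.0 / m
    return np.concatenate(([0.0], (np.arange(1, m + 1) - 0.5) * h, [1.0]))

def ibpf_coeffs(f, m):
    """Coefficients f_i: the average of f over each of the m+1 sub-intervals."""
    e = ibpf_edges(m)
    return np.array([quad(f, e[i], e[i + 1])[0] / (e[i + 1] - e[i])
                     for i in range(m + 1)])

def ibpf_kernel(k, m):
    """(m+1) x (m+1) matrix of block averages of k(t, s) (an assumed choice)."""
    e = ibpf_edges(m)
    K = np.empty((m + 1, m + 1))
    for i in range(m + 1):            # row index: blocks in t
        for j in range(m + 1):        # column index: blocks in s
            area = (e[i + 1] - e[i]) * (e[j + 1] - e[j])
            K[i, j] = dblquad(lambda s, t: k(t, s),
                              e[i], e[i + 1], e[j], e[j + 1])[0] / area
    return K
```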

2.1. Solution Algorithm for the Volterra–Fredholm Integral Equation

By substituting Equations (20), (22) and (25) into Equation (1), one obtains:
$$\phi_m^T(t)Y_m = \phi_m^T(t)F_m + \int_0^1 \phi_m^T(t)\,K_{1m}\,\phi_m(s)\,\phi_m^T(s)\,Y_m\, ds + \int_0^t \phi_m^T(t)\,K_{2m}\,\phi_m(s)\,\phi_m^T(s)\,Y_m\, ds.$$
From Equation (14), we have:
$$\phi_m^T(t)Y_m = \phi_m^T(t)F_m + \int_0^1 \phi_m^T(t)\,K_{1m}\,\mathrm{diag}(\phi_m(s))\,Y_m\, ds + \int_0^t \phi_m^T(t)\,K_{2m}\,\mathrm{diag}(\phi_m(s))\,Y_m\, ds,$$
or:
$$\phi_m^T(t)Y_m = \phi_m^T(t)F_m + \phi_m^T(t)\,K_{1m}\left(\int_0^1\mathrm{diag}(\phi_m(s))\, ds\right)Y_m + \phi_m^T(t)\,K_{2m}\left(\int_0^t\mathrm{diag}(\phi_m(s))\, ds\right)Y_m,$$
$$\phi_m^T(t)Y_m = \phi_m^T(t)F_m + \phi_m^T(t)\,K_{1m}\,V\,Y_m + \phi_m^T(t)\,K_{2m}\,B\,Y_m,$$
then:
$$Y_m = F_m + K_{1m}V\,Y_m + K_{2m}B\,Y_m,$$
then the unknown coefficients Y i can be determined from the relation:
$$Y_m = \left(I_{m+1} - K_{1m}V - K_{2m}B\right)^{-1}F_m,$$
where $I_{m+1}$ is the identity matrix of dimension $(m+1)\times(m+1)$, and $K_{1m}V$ and $K_{2m}B$ can be calculated as described in the previous section. After solving the linear system in Equation (31), $Y_m$ can be found, giving the coefficients $y_i$, which are then substituted into:
$$y(t) \approx \sum_{i=0}^{m} y_i\,\varphi_i(t),$$
to get the solution.
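The whole algorithm of this subsection then fits in a few lines. The minimal sketch below, reusing the illustrative helpers ibpf, op_B, op_V, ibpf_coeffs, and ibpf_kernel from the earlier sketches, assembles and solves the linear system of Equation (31) and returns a callable approximate solution. It is a sketch under stated assumptions, not the authors' own code; in particular, the Volterra contribution is assembled here element-wise with the columns of $B$ (so that row $i$ only integrates up to $t$), which is an assumed reading of the $K_{2m}B$ term.

```python
import numpy as np

def solve_linear_vf(f, k1, k2, m):
    """IBPF sketch for y(t) = f(t) + int_0^1 k1(t,s) y(s) ds + int_0^t k2(t,s) y(s) ds."""
    F  = ibpf_coeffs(f, m)                  # coefficients of f
    K1 = ibpf_kernel(k1, m)                 # Fredholm kernel coefficients
    K2 = ibpf_kernel(k2, m)                 # Volterra kernel coefficients
    V, B = op_V(m), op_B(m)
    # Fredholm term: K1 V Y.  Volterra term: element-wise combination K2 * B^T
    # (an assumed, consistent reading of the operational formula).
    A = np.eye(m + 1) - K1 @ V - K2 * B.T
    Y = np.linalg.solve(A, F)               # solve the linear system
    return lambda t: float(Y @ ibpf(t, m))  # y_m(t) = sum_i y_i phi_i(t)
```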

2.2. Solution Algorithm for Non-Linear Integral Equations

In this section, the non-linear Volterra–Fredholm integral equation given in Equation (2) will be solved using IBPFs. As mentioned in the previous sections, $y(t)$ is a function defined over the interval $[0,1)$ and may be expanded as:
$$y(t) \approx y_m(t) = \sum_{i=0}^{m} y_i\,\varphi_i(t) = Y_m^T\phi_m(t) = \phi_m^T(t)\,Y_m;$$
in the same manner, $[y(t)]^r$ can be approximated in terms of IBPFs:
$$[y(t)]^r \approx \tilde{Y}_m^T\phi_m(t).$$
Now, the vector $\tilde{Y}_m$ needs to be calculated. Since:
$$y(t) \approx Y_m^T\phi_m(t) \quad \text{and} \quad [y(t)]^r \approx \tilde{Y}_m^T\phi_m(t),$$
Then:
$$\tilde{Y}_m^T\phi_m(t) = [Y_m^T\phi_m(t)]^r.$$
Therefore, from Equations (9), (12) and (19), one obtains:
$$\int_0^1 \varphi_i(t)\,\varphi_j(t)\, dt = \begin{cases}\frac{1}{2m}, & i = j = 0 \text{ or } i = j = m,\\[2pt] \frac{1}{m}, & i = j = 1, 2, \ldots, m-1,\\[2pt] 0, & \text{otherwise},\end{cases}$$
$$\phi_m(t)\,\phi_m^T(t) = \mathrm{diag}(\phi_m(t)),$$
or it can be written as:
$$\int_0^1 \phi_m(t)\,\phi_m^T(t)\, dt = \frac{1}{m}\begin{pmatrix}\frac{1}{2} & 0 & 0 & \cdots & 0 & 0\\ 0 & 1 & 0 & \cdots & 0 & 0\\ 0 & 0 & 1 & \cdots & 0 & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & 0 & \cdots & 1 & 0\\ 0 & 0 & 0 & \cdots & 0 & \frac{1}{2}\end{pmatrix} = \frac{1}{m}P_{m+1},$$
$$\int_0^1 \phi_m(t)\,\phi_m^T(t)\, dt = \frac{1}{m}P_{m+1},$$
$$m\,P_{m+1}^{-1}\int_0^1 \phi_m(t)\,\phi_m^T(t)\, dt = I_{m+1},$$
where $P_{m+1}^{-1}$ is the inverse of the matrix $P_{m+1}$ and $I_{m+1}$ is the identity matrix of dimension $(m+1)\times(m+1)$. Now, the vector $\tilde{Y}_m^T$ is rewritten as follows:
$$\tilde{Y}_m^T = \tilde{Y}_m^T\, I_{m+1} = \tilde{Y}_m^T\, m\,P_{m+1}^{-1}\int_0^1 \phi_m(t)\,\phi_m^T(t)\, dt,$$
$$\tilde{Y}_m^T = m\,P_{m+1}^{-1}\int_0^1 \tilde{Y}_m^T\,\phi_m(t)\,\phi_m^T(t)\, dt.$$
By using Equation (36), we have:
$$\tilde{Y}_m^T = m\,P_{m+1}^{-1}\int_0^1 [Y_m^T\phi_m(t)]^r\,\phi_m^T(t)\, dt,$$
$$\tilde{Y}_m^T = m\,P_{m+1}^{-1}\int_0^1 [Y_m^T\phi_m(t)]^{r-1}\,[Y_m^T\phi_m(t)]\,\phi_m^T(t)\, dt.$$
This last equation can be written as:
$$\tilde{Y}_m^T = m\,P_{m+1}^{-1}\left[\int_0^{\frac{1}{2m}} [Y_m^T\phi_m(t)]^{r-1}\,Y_m^T[\phi_m(t)\phi_m^T(t)]\, dt + \sum_{i=1}^{m-1}\int_{\frac{i}{m}-\frac{1}{2m}}^{\frac{i}{m}+\frac{1}{2m}} [Y_m^T\phi_m(t)]^{r-1}\,Y_m^T[\phi_m(t)\phi_m^T(t)]\, dt + \int_{1-\frac{1}{2m}}^{1} [Y_m^T\phi_m(t)]^{r-1}\,Y_m^T[\phi_m(t)\phi_m^T(t)]\, dt\right].$$
On each sub-interval only one basis function is non-zero, so the integrand reduces to $[y_i]^{r-1}[0, \ldots, y_i, \ldots, 0]$, and evaluating the integrals gives:
$$\tilde{Y}_m^T = m\,P_{m+1}^{-1}\left[\tfrac{1}{2m}[y_0^r, 0, \ldots, 0] + \tfrac{1}{m}[0, y_1^r, 0, \ldots, 0] + \cdots + \tfrac{1}{2m}[0, \ldots, 0, y_m^r]\right] = P_{m+1}^{-1}\left[\tfrac{1}{2}y_0^r, y_1^r, \ldots, y_{m-1}^r, \tfrac{1}{2}y_m^r\right] = [y_0^r, y_1^r, \ldots, y_{m-1}^r, y_m^r].$$
At this stage, in order to solve the nonlinear Volterra–Fredholm integral equation given in Equation (2), the following approximations must be used:
$$y(t) \approx \phi_m^T(t)\,Y_m,$$
$$f(t) \approx \phi_m^T(t)\,F_m,$$
$$[y(t)]^r \approx \tilde{Y}_1^T\phi_m(t),$$
$$[y(t)]^q \approx \tilde{Y}_2^T\phi_m(t),$$
$$k_1(t,s) \approx \phi_m^T(t)\,K_1\,\phi_m(s),$$
$$k_2(t,s) \approx \phi_m^T(t)\,K_2\,\phi_m(s),$$
where the $(m+1)$ vectors $Y_m$, $F_m$, $\tilde{Y}_1$, $\tilde{Y}_2$ and the $(m+1)\times(m+1)$ matrices $K_1$ and $K_2$ are the IBPF coefficients of the non-linear equation:
$$y(t) = f(t) + \int_0^1 k_1(t,s)\,(y(s))^r\, ds + \int_0^t k_2(t,s)\,(y(s))^q\, ds, \quad t \in [0,1),$$
which transforms into:
$$\phi_m^T(t)Y_m = \phi_m^T(t)F_m + \int_0^1 \phi_m^T(t)\,K_1\,\phi_m(s)\,\tilde{Y}_1^T\phi_m(s)\, ds + \int_0^t \phi_m^T(t)\,K_2\,\phi_m(s)\,\tilde{Y}_2^T\phi_m(s)\, ds, \quad t \in [0,1),$$
or:
$$\phi_m^T(t)Y_m = \phi_m^T(t)F_m + \phi_m^T(t)\,K_1\left(\int_0^1 \phi_m(s)\,\phi_m^T(s)\, ds\right)\tilde{Y}_1 + \phi_m^T(t)\,K_2\left(\int_0^t \phi_m(s)\,\phi_m^T(s)\, ds\right)\tilde{Y}_2, \quad t \in [0,1),$$
which results in the following system of algebraic equations:
$$Y_m = F_m + K_1V\tilde{Y}_1 + K_2B\tilde{Y}_2.$$
After solving the system in Equation (57), $Y_m$ can be found, and then the coefficients $y_i$. To obtain the solution, substitute into:
$$y(t) \approx \sum_{i=0}^{m} y_i\,\varphi_i(t).$$
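Because $\tilde{Y}_1 = [y_i^r]$ and $\tilde{Y}_2 = [y_i^q]$ are element-wise powers of the unknown coefficient vector, the system above is nonlinear in $Y_m$. The paper does not fix a particular nonlinear solver, so the sketch below uses plain successive-substitution (fixed-point) iteration as one possible, assumed choice; it reuses the illustrative helpers from the earlier sketches, including the assumed element-wise assembly of the Volterra term.

```python
import numpy as np

def solve_nonlinear_vf(f, k1, k2, r, q, m, max_iter=500, tol=1e-12):
    """Fixed-point iteration on Y = F + K1 V (Y**r) + (K2 * B^T) (Y**q)."""
    F  = ibpf_coeffs(f, m)
    K1 = ibpf_kernel(k1, m)
    K2 = ibpf_kernel(k2, m)
    V, B = op_V(m), op_B(m)
    Y = F.copy()                                   # initial guess
    for _ in range(max_iter):
        Y_new = F + K1 @ V @ (Y ** r) + (K2 * B.T) @ (Y ** q)
        if np.max(np.abs(Y_new - Y)) < tol:        # stop once the iterates settle
            Y = Y_new
            break
        Y = Y_new
    return lambda t: float(Y @ ibpf(t, m))
```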

3. Convergence Analysis

In this section, the method is shown to be convergent, with order of convergence $O\!\left(\tfrac{1}{m}\right) = O(h)$. Let us now define:
$$\|y(x)\| = \left(\int_0^1 |y(x)|^2\, dx\right)^{1/2},$$
and:
$$\|Y(x)\| = \left(\sum_{i=1}^{m}\|Y_i(x)\|^2\right)^{1/2},$$
where $y(x) \in L^2(D)$ and $Y(x)$ are defined in Equation (3), and
$$\|k(x,y)\| = \left(\int_0^1\!\int_0^1 |k(x,y)|^2\, dx\, dy\right)^{1/2}.$$
Suppose:
$$\|k(x,y)\| = \left(\sum_{i=0}^{m}\sum_{j=0}^{m}\|k_{ij}(x,y)\|^2\right)^{1/2},$$
where $k(x,y) \in L^2(D\times D)$ and $k(x,y)$ is defined in Equation (5).
For our purposes, we need the following theorems.
Theorem 1.
Let $y(x) \in L^2(D)$ and let $y_m(x)$ be the IBPFs expansion of $y(x)$ defined as:
$$y_m(x) = \sum_{i=0}^{m} y_i\,\varphi_i(x),$$
where the $y_i$, $i = 0, 1, \ldots, m$, are defined in Equation (24). Then the criterion of this approximation is that the mean square error between $y(x)$ and $y_m(x)$ over the interval $x \in D$,
$$\int_0^1\left(y(x) - y_m(x)\right)^2 dx,$$
achieves its minimum value, and also:
$$\int_0^1\left(y(x)\right)^2 dx = \sum_{i=0}^{\infty} y_i^2\,\|\varphi_i(x)\|^2.$$
Proof. 
It is an immediate consequence of the theorem which is proved in [16]. □
Theorem 2.
Suppose that $y(x)$ is continuous on $D$, differentiable on $(0,1)$, and that there exists a positive scalar $M$ such that $|y'(x)| \le M$ for every $x \in D$. Then:
$$|y(b) - y(a)| \le M\,|b - a|, \quad a, b \in D.$$
Proof. 
See [16]. □
Theorem 3. 
Suppose $y_m(x)$ is the IBPFs expansion of $y(x)$ defined through Equation (24), and $y(x)$ is differentiable on $D$ with $|y'(x)| \le M$. Furthermore, assume that $e_m(x) = y(x) - y_m(x)$; then:
$$\|e_m(x)\| = O(h).$$
Proof. 
Suppose that $x_0 = 0$, $x_i = ih - \frac{h}{2}$ for $i = 1, \ldots, m$, and $x_{m+1} = 1$, where $h = \frac{1}{m}$. The error between $y(x)$ and its IBPFs expansion over every subinterval $I_i = [x_i, x_{i+1})$ is defined as follows:
$$e_{m,i}(x) = y(x) - y_i, \quad x \in I_i,$$
where $i = 0, 1, \ldots, m$.
By using the mean value theorem for integral, one obtains:
$$\|e_{m,0}(x)\|^2 = \int_0^{\frac{h}{2}} e_{m,0}^2(x)\, dx = \int_0^{\frac{h}{2}}\left(y(x) - y_0\right)^2 dx = \frac{h}{2}\left(y(\varepsilon_0) - y_0\right)^2,$$
where $\varepsilon_0 \in I_0$. Moreover, for $i = 1, 2, \ldots, m-1$, one obtains:
$$\|e_{m,i}(x)\|^2 = \int_{ih-\frac{h}{2}}^{ih+\frac{h}{2}} e_{m,i}^2(x)\, dx = \int_{ih-\frac{h}{2}}^{ih+\frac{h}{2}}\left(y(x) - y_i\right)^2 dx = h\left(y(\varepsilon_i) - y_i\right)^2,$$
where $\varepsilon_i \in I_i$. Additionally, one obtains:
$$\|e_{m,m}(x)\|^2 = \int_{1-\frac{h}{2}}^{1} e_{m,m}^2(x)\, dx = \int_{1-\frac{h}{2}}^{1}\left(y(x) - y_m\right)^2 dx = \frac{h}{2}\left(y(\varepsilon_m) - y_m\right)^2,$$
where $\varepsilon_m \in I_m$.
By using Equation (26) and the mean value theorem, one obtains:
$$y_i = \begin{cases} 2m\displaystyle\int_0^{\frac{1}{2m}} y(t)\, dt = 2m\cdot\frac{h}{2}\,y(\eta_0) = y(\eta_0), & i = 0,\\[4pt] m\displaystyle\int_{\frac{i}{m}-\frac{1}{2m}}^{\frac{i}{m}+\frac{1}{2m}} y(t)\, dt = m\,h\,y(\eta_i) = y(\eta_i), & 1 \le i \le m-1,\\[4pt] 2m\displaystyle\int_{1-\frac{1}{2m}}^{1} y(t)\, dt = 2m\cdot\frac{h}{2}\,y(\eta_m) = y(\eta_m), & i = m,\end{cases}$$
where $\eta_i \in I_i$, $i = 0, 1, \ldots, m$. From the above equations and Theorem 2, one obtains:
$$\|e_{m,i}(x)\|^2 \le \begin{cases} \frac{h}{2}\left(y(\varepsilon_0) - y(\eta_0)\right)^2 \le \frac{M^2h}{2}\,|\varepsilon_0 - \eta_0|^2 \le \frac{M^2h^3}{8}, & i = 0,\\[4pt] h\left(y(\varepsilon_i) - y(\eta_i)\right)^2 \le M^2h\,|\varepsilon_i - \eta_i|^2 \le M^2h^3, & 1 \le i \le m-1,\\[4pt] \frac{h}{2}\left(y(\varepsilon_m) - y(\eta_m)\right)^2 \le \frac{M^2h}{2}\,|\varepsilon_m - \eta_m|^2 \le \frac{M^2h^3}{8}, & i = m.\end{cases}$$
Then the following is obtained:
$$\|e_m(x)\|^2 = \int_0^1 e_m^2(x)\, dx = \int_0^1\left(\sum_{i=0}^{m} e_{m,i}(x)\right)^2 dx = \int_0^1\sum_{i=0}^{m} e_{m,i}^2(x)\, dx + 2\sum_{i<j}\int_0^1 e_{m,i}(x)\,e_{m,j}(x)\, dx.$$
Since $I_i \cap I_j = \emptyset$ for $i \ne j$, then:
$$\|e_m(x)\|^2 = \int_0^1\sum_{i=0}^{m} e_{m,i}^2(x)\, dx = \sum_{i=0}^{m}\|e_{m,i}(x)\|^2.$$
Substituting from Equation (72) into Equation (74), one obtains:
$$\|e_m(x)\|^2 \le M^2h^2 - \frac{3M^2h^3}{4},$$
which completes the proof. □
Suppose that $\bar{e}_m(x)$ is the error between $y(x)$ and its BPFs expansion. As in [17], it is clear that:
$$\|e_m(x)\| \le \|\bar{e}_m(x)\|.$$
Lemma 1.
Let $g(x)$ be as defined in Equation (22), let $g_m(x)$ be the IBPFs expansion approximation of $g(x)$, and let $e_g(x) = g(x) - g_m(x)$. Then:
$$\|e_g(x)\| = O(h).$$
Proof. 
From Equation (59), one obtains:
$$\|e_g(x)\| = \left(\sum_{i=1}^{m}\|g_i(x) - g_{m,i}(x)\|^2\right)^{1/2},$$
and from Theorem 2, as in [18], $\|g_i(x) - g_{m,i}(x)\| \le C_i\,h$. Then:
$$\|e_g(x)\| \le \left(\sum_{i=1}^{m} C_i^2 h^2\right)^{1/2} = \left(\sum_{i=1}^{m} C_i^2\right)^{1/2} h = C\,h.$$
 □
Theorem 4.
Let $k_m(x,y)$ be the IBPFs expansion approximation of $k(x,y)$ defined as in Equation (5), and let $k(x,y)$ be differentiable on $D\times D$ with its partial derivatives bounded by $M$. Correspondingly, assume that $e_m(x,y) = k(x,y) - k_m(x,y)$; then:
$$\|e_m(x,y)\| = O(h).$$
Proof. 
Suppose that $x_0 = y_0 = 0$, $x_i = y_i = ih - \frac{h}{2}$ for $i = 1, \ldots, m$, and $x_{m+1} = y_{m+1} = 1$. The error between $k(x,y)$ and its IBPFs expansion over every subinterval $I_{i,j} = [x_i, x_{i+1})\times[y_j, y_{j+1})$ is defined as follows:
$$e_{m,ij}(x,y) = k(x,y) - k_{ij}, \quad (x,y) \in I_{i,j}, \quad i, j = 0, 1, \ldots, m.$$
By using the mean value theorem for integrals, and proceeding as in the proof of Theorem 3, one obtains:
$$\|e_{m,ij}(x,y)\|^2 \le \begin{cases} \frac{M^2h^4}{8}, & i = 0, m,\\[2pt] \frac{5M^2h^4}{8}, & 1 \le i \le m-1,\end{cases}$$
for $j = 0, m$, and:
$$\|e_{m,ij}(x,y)\|^2 \le \begin{cases} \frac{5M^2h^4}{8}, & i = 0, m,\\[2pt] 2M^2h^4, & 1 \le i \le m-1,\end{cases}$$
for $j = 1, 2, \ldots, m-1$. One then obtains:
$$\|e_m(x,y)\|^2 = \int_0^1\!\int_0^1 e_m^2(x,y)\, dx\, dy = \int_0^1\!\int_0^1\left(\sum_{i=0}^{m}\sum_{j=0}^{m} e_{m,ij}(x,y)\right)^2 dx\, dy = \int_0^1\!\int_0^1\sum_{i=0}^{m}\sum_{j=0}^{m} e_{m,ij}^2(x,y)\, dx\, dy + 2\sum_{(i,j)\ne(k,l)}\int_0^1\!\int_0^1 e_{m,ij}(x,y)\,e_{m,kl}(x,y)\, dx\, dy.$$
Since for $i \ne k$ and $j \ne l$ one obtains $I_i \cap I_k = \emptyset$ and $I_j \cap I_l = \emptyset$, then:
$$\|e_m(x,y)\|^2 = \int_0^1\!\int_0^1\sum_{i=0}^{m}\sum_{j=0}^{m} e_{m,ij}^2(x,y)\, dx\, dy = \sum_{i=0}^{m}\sum_{j=0}^{m}\|e_{m,ij}(x,y)\|^2.$$
Substituting Equations (81) and (82) into Equation (84), one obtains:
$$\|e_m(x,y)\|^2 \le 2M^2h^2 - \frac{3M^2h^3}{2}.$$
Suppose that $\bar{e}_m(x,y)$ is the error between $k(x,y)$ and its BPFs expansion. From [19], it is clear that:
$$\|e_m(x,y)\| \le \|\bar{e}_m(x,y)\|.$$
 □
Lemma 2.
Let $k(x,y)$ be as defined in Equation (27), let $k_n(x,y)$ be the IBPFs expansion of $k(x,y)$, and let $e_k(x,y) = k(x,y) - k_n(x,y)$. Therefore, one obtains:
$$\|e_k(x,y)\| = O(h).$$
Proof. 
From Equation (84), one obtains:
$$\|e_k(x,y)\| = \left(\sum_{i=1}^{m}\sum_{j=1}^{m}\|k_{ij}(x,y) - k_{n,ij}(x,y)\|^2\right)^{1/2}.$$
From Theorem 3, it is concluded that:
$$\|k_{ij}(x,y) - k_{n,ij}(x,y)\| \le C_{ij}\,h.$$
Therefore, one obtains:
$$\|e_k(x,y)\| \le \left(\sum_{i=0}^{m}\sum_{j=0}^{m} C_{ij}^2 h^2\right)^{1/2} = \left(\sum_{i=0}^{m}\sum_{j=0}^{m} C_{ij}^2\right)^{1/2} h = C'\,h. \qquad \square$$
Let the error of the IBPFs be denoted by:
$$E_n = \|y(x) - y_n(x)\|, \quad x \in D,$$
where $y(x)$ was defined in Equation (20). Furthermore, one may assume the following hypotheses:
Hypothesis 1.
Let $|y(x)| \le N$ for $x \in D$.
Hypothesis 2.
Let $|k(x,t)| \le N$ for $(x,t) \in D \times D$.
Hypothesis 3.
According to Lemmas 1 and 2, let $E_g = \|e_g(x)\| \le Ch$ and $E_k = \|e_k(x,t)\| \le C'h$, where $C$ and $C'$ are the coefficients defined in Equations (79) and (90), and $g(x)$ and $k(x,t)$ were defined in Equations (22) and (25), respectively.
Hypothesis 4.
Let $N + C'h < 1$.
Theorem 5.
Let $y(x)$ and $y_n(x)$ be the exact and approximate solutions of Equation (1) or Equation (2), respectively, and let Hypotheses (1)–(4) be satisfied.
Then one obtains:
$$E_n \le \frac{(C + C'N)\,h}{1 - N - C'h}.$$
Proof. 
In the first case, from Equation (2), one obtains:
$$y(x) - y_n(x) = g(x) - g_n(x) + \int_0^x\left(k(x,t)\,y(t) - k_n(x,t)\,y_n(t)\right) dt,$$
and therefore:
$$E_n \le E_g + x\,\|k(x,t)\,y(t) - k_n(x,t)\,y_n(t)\|.$$
It is clear that $x \le 1$, so:
$$E_n \le E_g + \|k(x,t)\,y(t) - k_n(x,t)\,y_n(t)\|.$$
Moreover, in the second case, from Equation (1), one obtains:
$$y(x) - y_n(x) = g(x) - g_n(x) + \int_0^1\left(k(x,t)\,y(t) - k_n(x,t)\,y_n(t)\right) dt,$$
and therefore:
$$E_n \le E_g + \|k(x,t)\,y(t) - k_n(x,t)\,y_n(t)\|.$$
So, Equation (97) is true in both cases. Now, according to the Hypotheses (1)–(3) as in [18], one obtains:
$$\|k(x,t)\,y(t) - k_n(x,t)\,y_n(t)\| \le \|k(x,t)\|\,E_n + E_k\left(E_n + \|y(x)\|\right) \le N E_n + C'h\,(E_n + N).$$
Moreover, from Hypothesis (3) and Equations (95) and (98), one obtains:
$$E_n \le (C + C'N)\,h + (N + C'h)\,E_n.$$
Therefore, according to Hypotheses (4), Equation (98) is satisfied, and this completes the proof. Moreover, one obtains:
$$E_n = O(h).$$
 □
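The $O(h)$ behaviour can also be observed numerically. The sketch below builds a small synthetic test problem whose exact solution $y(t) = t$ is known by construction (it is not one of the paper's examples) and checks, using the solve_linear_vf sketch from Section 2.1, that the maximum error should shrink roughly in proportion to $h = 1/m$.

```python
import numpy as np

# Synthetic test problem (not from the paper): exact solution y(t) = t with
# k1(t,s) = t + s, k2(t,s) = t - s, and f chosen so that the equation holds.
f  = lambda t: t / 2 - 1.0 / 3 - t**3 / 6
k1 = lambda t, s: t + s
k2 = lambda t, s: t - s

ts = np.linspace(0.01, 0.99, 200)
for m in (4, 8, 16):
    y_m = solve_linear_vf(f, k1, k2, m)
    err = max(abs(y_m(t) - t) for t in ts)
    print(m, err)    # the error is expected to shrink roughly like h = 1/m
```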
Lemma 3.
Suppose that $y(x)$ and $y_n(x)$ are the exact and approximate solutions of Equation (2) or Equation (1), respectively, where $y(x)$ was defined in Equation (20) and
$$y_n(x) = [y_{1,n}(x), y_{2,n}(x), \ldots, y_{m,n}(x)]^T.$$
Then:
$$e_{i,n} = \|y_i(x) - y_{i,n}(x)\| = O(h).$$
Proof. 
From Theorem 4, one obtains $E_n \le Ch$, and according to Equation (60), one obtains:
$$e_{i,n} \le E_n \le Ch.$$
The series solution $y_m(x) = \sum_{i=1}^{m} y_i\,\varphi_i(x)$, the approximate solution of Equation (2) or Equation (1) (where $y(x)$ was defined in Equation (20)), converges to the exact solution $y(x)$; that is:
$$\lim_{m\to\infty}\|y(x) - y_m(x)\|_2 = 0.$$
 □
Theorem 6.
Let $L^2$ be the Hilbert space and let the functions $\varphi_i(x)$ defined in Equation (11) form the IBPF basis.
Let $y(x) \approx \sum_{m=1}^{i}\alpha_m\,\varphi_m(x)$ be the solution of Equation (2) or Equation (1).
Now, the sequence of partial sums $S_i$ of $\sum_{m}\alpha_m\varphi_m(x)$ is defined. Let $S_i$ and $S_j$ be partial sums with $i \ge j$. It is required to prove that $\{S_i\}$ is a Cauchy sequence in this Hilbert space.
Proof. 
Let:
$$S_i = \sum_{m=1}^{i}\alpha_m\,\varphi_m(x).$$
Now,
$$\langle y(x), S_i\rangle = \left\langle y(x), \sum_{m=1}^{i}\alpha_m\varphi_m(x)\right\rangle = \sum_{m=1}^{i}\overline{\alpha}_m\,\langle y(x), \varphi_m(x)\rangle = \sum_{m=1}^{i}\overline{\alpha}_m\,\alpha_m = \sum_{m=1}^{i}|\alpha_m|^2.$$
It is claimed that:
$$\|S_i - S_j\|^2 = \left\|\sum_{m=j+1}^{i}\alpha_m\varphi_m(x)\right\|^2 = \left\langle\sum_{m=j+1}^{i}\alpha_m\varphi_m(x), \sum_{n=j+1}^{i}\alpha_n\varphi_n(x)\right\rangle = \sum_{m=j+1}^{i}\sum_{n=j+1}^{i}\alpha_m\overline{\alpha}_n\,\langle\varphi_m(x), \varphi_n(x)\rangle = \sum_{m=j+1}^{i}|\alpha_m|^2.$$
Therefore:
$$\left\|\sum_{m=j+1}^{i}\alpha_m\varphi_m(x)\right\|^2 = \sum_{m=j+1}^{i}|\alpha_m|^2, \quad \text{for } i > j.$$
From Bessel's inequality, one obtains that $\sum_{m=1}^{\infty}|\alpha_m|^2$ is convergent, and hence:
$$\left\|\sum_{m=j+1}^{i}\alpha_m\varphi_m(x)\right\|^2 \to 0 \quad \text{as } i, j \to \infty.$$
Hence, one obtains:
$$\left\|\sum_{m=j+1}^{i}\alpha_m\varphi_m(x)\right\| \to 0,$$
and $S_i$ is a Cauchy sequence, so it converges to some limit $s$ (say). It is asserted that $y(x) = s$.
Now:
$$\langle s - y(x), \varphi_i(x)\rangle = \langle s, \varphi_i(x)\rangle - \langle y(x), \varphi_i(x)\rangle = \left\langle \lim_{j\to\infty} S_j, \varphi_i(x)\right\rangle - \alpha_i = \alpha_i - \alpha_i = 0.$$
We conclude that:
$$\langle s - y(x), \varphi_i(x)\rangle = 0.$$
Hence $y(x) = s$, and $S_i = \sum_{m=1}^{i}\alpha_m\,\varphi_m(x)$ converges to $y(x)$ as $i \to \infty$, which completes the proof.
 □

4. Numerical Modeling

This section includes some physical models that were solved by using the modified technique to demonstrate the reliability and efficiency of these modifications. It also includes numerical comparisons between the present method and other similar methods to show which method is more accurate. Figures and tables are included with each model for clarification. All methods used in these comparisons have been applied by many authors to solve such problems.
Example 1.
Now, consider the Volterra–Fredholm integral equation [11]:
$$y(t) = -5 - t + 12t^2 - t^3 - t^4 + \int_0^1 (t+s)\,y(s)\, ds + \int_0^t (t-s)\,y(s)\, ds,$$
where the exact solution is y t = 6 t + 12 t 2 .
Let:
$$y(t) \approx Y^T\Psi_m(t),$$
$$-5 - t + 12t^2 - t^3 - t^4 \approx F^T\Psi_m(t),$$
$$t + s \approx \Psi_m^T(s)\,K_{1m}\,\Psi_m(t),$$
$$t - s \approx \Psi_m^T(s)\,K_{2m}\,\Psi_m(t),$$
where $Y^T = [y_0, y_1, \ldots, y_m]$ are the undetermined coefficients of the unknown function $y(t)$, and $F^T = [f_0, f_1, \ldots, f_m]$ are known and found by using:
$$f_i = \begin{cases} 2m\displaystyle\int_0^{\frac{1}{2m}}\left(-5 - t + 12t^2 - t^3 - t^4\right) dt, & i = 0,\\[4pt] m\displaystyle\int_{\frac{i}{m}-\frac{1}{2m}}^{\frac{i}{m}+\frac{1}{2m}}\left(-5 - t + 12t^2 - t^3 - t^4\right) dt, & 1 \le i \le m-1,\\[4pt] 2m\displaystyle\int_{1-\frac{1}{2m}}^{1}\left(-5 - t + 12t^2 - t^3 - t^4\right) dt, & i = m.\end{cases}$$
Moreover, $K_{rm} = [k_{rij}]_{(m+1)\times(m+1)}$, where $r = 1, 2$, are the kernel coefficient matrices.
Substituting into Equation (116):
$$Y^T\Psi_m(t) = F^T\Psi_m(t) + \int_0^1 \Psi_m^T(s)\,K_{1m}\,\Psi_m(t)\,\Psi_m^T(s)\,Y\, ds + \int_0^t \Psi_m^T(s)\,K_{2m}\,\Psi_m(t)\,\Psi_m^T(s)\,Y\, ds,$$
from Equation (21):
$$Y_m = \left(I_{m+1} - K_{1m}V - K_{2m}B\right)^{-1}F_m.$$
Then by solving this system of linear equations, the IBP series coefficients can be found. After substituting into Equation (116), the IBPF approximate solution can be found. In what follows, the graphs of the IBP approximate solutions at m = 16 are sketched out below. Correspondingly, the exact solution was graphed on the same axes to see how close the new method was to the exact solution in the selected interval. The graphs of the IBPF solution at the same divisions of intervals are also outlined. Now, we can look at the combined graphs of the BPF, IBPF, and the exact solutions at m = 16 as shown in Figure 1, and the error in this case will be as shown in Figure 2.
One notices that the error is huge in this case. However, if we choose random points, the functions will be as shown in Figure 3 and the error in this case will be as shown in Figure 4. Now, Table 1 shows the values of the exact, BPF, and IBPF solutions at different points within the interval 0 , 1 . Notice that the modification was applied to the BPF which made the absolute error smaller than the regular BPF. Similarly, it is worth mentioning that it took less time to compute the solution by using IBPF than BPF. For instance, in this example, to compute the solution at m = 32 , the Mathematica software took around 2 s to compute the BPF solution. However, to compute IBPF, it took around 1.28125 s, which is about 65 percent of the time used by the regular method.
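As a reproducible illustration of how such a run can be set up and timed, the sketch below applies the solve_linear_vf helper from Section 2.1 to a problem with the same exact solution $y(t) = 6t + 12t^2$ as Example 1; the right-hand side f used here is derived for the kernels $t+s$ and $t-s$ rather than copied verbatim from the paper, so the data should be read as an assumed stand-in for the equation of Example 1, and the reported time is only indicative of this Python sketch, not of the Mathematica runs above.

```python
import time
import numpy as np

# Stand-in data with the same exact solution y(t) = 6t + 12t^2 as Example 1:
# f below is derived for the kernels t+s (Fredholm) and t-s (Volterra).
f  = lambda t: -5 - t + 12 * t**2 - t**3 - t**4
k1 = lambda t, s: t + s
k2 = lambda t, s: t - s

start = time.perf_counter()
y16 = solve_linear_vf(f, k1, k2, m=16)
elapsed = time.perf_counter() - start

ts = np.linspace(0.01, 0.99, 200)
print("max |error| :", max(abs(y16(t) - (6 * t + 12 * t**2)) for t in ts))
print("run time (s):", elapsed)
```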
Example 2.
Now consider the Volterra–Fredholm integral equation [11]:
$$y(t) = t^2e^t + e^{t+1} + \int_0^1 e^{t+s}\,y(s)\, ds + \int_0^t s\,e^t\,y(s)\, ds,$$
where the exact solution is y t = e t .
Let:
$$y(t) \approx Y^T\Psi_m(t),$$
$$t^2e^t + e^{t+1} \approx F^T\Psi_m(t),$$
$$e^{t+s} \approx \Psi_m^T(s)\,K_{1m}\,\Psi_m(t),$$
$$s\,e^t \approx \Psi_m^T(s)\,K_{2m}\,\Psi_m(t),$$
where $Y^T = [y_0, y_1, \ldots, y_m]$ are the undetermined coefficients of the unknown function $y(t)$, and $F^T = [f_0, f_1, \ldots, f_m]$ are known and found by using:
$$f_i = \begin{cases} 2m\displaystyle\int_0^{\frac{1}{2m}}\left(t^2e^t + e^{t+1}\right) dt, & i = 0,\\[4pt] m\displaystyle\int_{\frac{i}{m}-\frac{1}{2m}}^{\frac{i}{m}+\frac{1}{2m}}\left(t^2e^t + e^{t+1}\right) dt, & 1 \le i \le m-1,\\[4pt] 2m\displaystyle\int_{1-\frac{1}{2m}}^{1}\left(t^2e^t + e^{t+1}\right) dt, & i = m.\end{cases}$$
Besides, $K_{rm} = [k_{rij}]_{(m+1)\times(m+1)}$, where $r = 1, 2$, are the kernel coefficient matrices. Substituting into Equation (124):
$$Y^T\Psi_m(t) = F^T\Psi_m(t) + \int_0^1 \Psi_m^T(s)\,K_{1m}\,\Psi_m(t)\,\Psi_m^T(s)\,Y\, ds + \int_0^t \Psi_m^T(s)\,K_{2m}\,\Psi_m(t)\,\Psi_m^T(s)\,Y\, ds,$$
from Equation (21):
$$Y_m = \left(I_{m+1} - K_{1m}V - K_{2m}B\right)^{-1}F_m.$$
Then, by solving this system of linear equations, the IBP series coefficients can be found. After substituting into Equation (124), the IBPF approximate solution can be found. Below, the graphs of the IBP approximate solutions at m = 32 are shown. By the same token, the exact solution was graphed on the same axes to see how close the new method was to the exact solution in the selected intervals. The graphs of the IBPF solution at the same divisions of intervals are also sketched out. Now, we can look at the combined graphs of the BPF, IBPF, and exact solutions at m = 32 in Figure 5, and the errors in this case are shown in Figure 6.
One notices that the error is huge in this case. However, if we choose random points the functions will be as shown in Figure 7 and then the error will be as shown in Figure 8.
Now, Table 2 shows the values of the exact, BPF, and IBPF solutions at different points within the interval 0 , 1 . Notice that the modification was carried out in relation to the BPF and made the absolute error smaller than the regular BPF. Furthermore, it is worth mentioning that it took less time to compute the solution using IBPF than BPF. For instance, in this example, to compute the BPF solution at m = 32 , it took around 3.12 s by using the Mathematica software. However, to compute IBPF, it took around 2.455 s, which is about 80 percent of the time used by the regular method.
Example 3.
Now, consider the Volterra–Fredholm nonlinear integral equation [20,21]:
$$y(x) = 2x - \frac{1}{12}x^4 - \frac{5}{3} + \frac{1}{4}\int_0^x (x-t)\,[y(t)]^2\, dt + \int_0^1 (1+t)\,y(t)\, dt,$$
where the exact solution is $y(x) = 2x$.
Let:
$$y(t) \approx Y^T\phi_m(t),$$
$$2t - \frac{1}{12}t^4 - \frac{5}{3} \approx F_m^T\phi_m(t),$$
$$x - t \approx \phi_m^T(s)\,K_{1m}\,\phi_m(t),$$
$$1 + t \approx \phi_m^T(s)\,K_{2m}\,\phi_m(t),$$
where $Y^T = [y_0, y_1, \ldots, y_m]$ are the undetermined coefficients of the unknown function $y(t)$, and $F^T = [f_0, f_1, \ldots, f_m]$ are known and found by using:
$$f_i = \begin{cases} 2m\displaystyle\int_0^{\frac{1}{2m}}\left(2t - \tfrac{1}{12}t^4 - \tfrac{5}{3}\right) dt, & i = 0,\\[4pt] m\displaystyle\int_{\frac{i}{m}-\frac{1}{2m}}^{\frac{i}{m}+\frac{1}{2m}}\left(2t - \tfrac{1}{12}t^4 - \tfrac{5}{3}\right) dt, & 1 \le i \le m-1,\\[4pt] 2m\displaystyle\int_{1-\frac{1}{2m}}^{1}\left(2t - \tfrac{1}{12}t^4 - \tfrac{5}{3}\right) dt, & i = m.\end{cases}$$
Moreover, $K_{rm} = [k_{rij}]_{(m+1)\times(m+1)}$, where $r = 1, 2$, are the kernel coefficient matrices.
Substituting in Equation (132):
$$\phi_m^T(t)Y_m = \phi_m^T(t)F_m + \phi_m^T(t)\,K_1\left(\int_0^1 \phi_m(s)\,\phi_m^T(s)\, ds\right)\tilde{Y}_1 + \phi_m^T(t)\,K_2\left(\int_0^t \phi_m(s)\,\phi_m^T(s)\, ds\right)\tilde{Y}_2,$$
from Equation (57):
$$Y_m = F_m + K_1V\tilde{Y}_1 + K_2B\tilde{Y}_2.$$
Then, by solving this system of equations, the IBP series coefficients can be found. After substituting into Equation (132), the IBPF approximate solution can be found. Below, the graphs of the IBP approximate solutions at m = 64 are shown. Similarly, the exact solution was graphed on the same axes to see how close the new method was to the exact solution in the selected interval. The graphs of the IBPF solution at the same divisions of intervals are also graphed. Now, we can look at the combined graphs of the BPF, IBPF, and exact solutions at m = 64 in Figure 9, and the errors, in this case, are shown in Figure 10.
One notices that the error is huge in this case. However, if we choose random points, the functions will be as shown in Figure 11 and the error will be as shown in Figure 12.
Now, Table 3 shows the values of the exact, BPF, and IBPF solutions at different points within the interval $[0,1)$. Notice that the modification applied to the BPF made the absolute error smaller than that of the regular BPF. Furthermore, it is worth mentioning that it took less time to compute the solution using IBPF than BPF. For instance, in this example, to compute the BPF solution at m = 64, it took the Mathematica software around 7.17 s, whereas computing the IBPF solution took around 6.75 s. The difference is not that big; however, the IBPF method is still faster in computation, and yet more accurate.
This method can be extended to be coupled with other known methods similar to the work carried out by Ramadan and Osheba [22], as well as in the method in the work of Ramadan et al. [3], and the homotopy perturbation method can be used for its error estimation [14,23]. It gave very accurate results that can be used to develop this work thoroughly.

5. Conclusions

The IBPFs, together with the operational matrices B and V, were used to obtain the numerical solutions of linear and nonlinear mixed Volterra–Fredholm integral equations. The results were compared with the original BPF method and showed high accuracy at the midpoints of the intervals. This accuracy is much better than that of the original technique, as illustrated by the graphs shown in the numerical applications section. Furthermore, the convergence of the proposed method was proved in this article. The absolute error was reported to demonstrate the applicability and accuracy of the method. The article compared the present work with several other methods to prove the effectiveness and convenience of the method. It is worth mentioning that the method was extended to solve nonlinear Volterra–Fredholm integral equations.

Author Contributions

Conceptualization, J.-H.H., G.M.M.; methodology, J.-H.H., G.M.M.; validation, J.-H.H., M.H.T., M.A.R., G.M.M.; formal analysis, J.-H.H., M.H.T., M.A.R., G.M.M.; investigation, J.-H.H., G.M.M.; writing—original draft preparation, G.M.M.; writing—review and editing, J.-H.H.; supervision, J.-H.H.; All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Pachpatta, B.G. On mixed Volterra–Fredholm type integral equations. Indian J. Pure Appl. Math. 1986, 17, 488–496.
2. Yusufoglu, E. Numerical expansion methods for solving systems of linear integral equations using interpolation and quadrature rules. Int. J. Comput. Math. 2017, 84, 133–149.
3. Ramadan, M.A.; Moatimid, G.M.; Taha, M.H. A Powerful Method for Obtaining Exact Solutions of Volterra Integral Equation's Types. Glob. J. Pure Appl. Math. 2020, 16, 325–339.
4. Maleknejad, K.; Mirzaee, F. Numerical solution of linear Fredholm integral equations system by rationalized Haar functions method. Int. J. Comput. Math. 2003, 80, 1397–1405.
5. Maleknejad, K.; Mirzaee, F.; Abbasbandy, S. Solving linear integro-differential equations system by using rationalized Haar functions method. Appl. Math. Comput. 2004, 155, 317–328.
6. Rizkalla, R.R.; Tantawy, S.S.; Taha, M.H. Application on differential transform method for some non-linear functions and for solving Volterra integral equations involving Fresnels integral. J. Fract. Calc. Appl. 2014, 5, 1–14.
7. Pour-Mahmoud, J.; Rahimi-Ardabili, M.; Shahmorad, S. Numerical solution of the system of Fredholm integro-differential equations by the Tau method. Appl. Math. Comput. 2005, 168, 465–478.
8. He, J.-H. Homotopy perturbation technique. Comput. Methods Appl. Mech. Eng. 1999, 178, 257–262.
9. He, J.-H. A coupling method of a homotopy technique and a perturbation technique for non-linear problems. Int. J. Non-Linear Mech. 2000, 35, 37–43.
10. He, J.-H. Application of homotopy perturbation method to nonlinear wave equations. Chaos Solitons Fractals 2005, 26, 695–700.
11. Wazwaz, A. Linear and Nonlinear Integral Equations: Methods and Applications; Springer: Berlin/Heidelberg, Germany, 2011.
12. Akinyemi, L.; Şenol, M.; Huseen, S.N. Modified homotopy methods for generalized fractional perturbed Zakharov–Kuznetsov equation in dusty plasma. Adv. Differ. Equ. 2021, 2021, 1–27.
13. Akinyemi, L.; Iyiola, O.S. Analytical Study of (3 + 1)-Dimensional Fractional-Reaction Diffusion Trimolecular Models. Int. J. Appl. Comput. Math. 2021, 7, 92.
14. Noeiaghdam, S.; Dreglea, A.; He, J.; Avazzadeh, Z.; Suleman, M.; Araghi, M.A.F.; Sidorov, D.N.; Sidorov, N. Error Estimation of the Homotopy Perturbation Method to Solve Second Kind Volterra Integral Equations with Piecewise Smooth Kernels: Application of the CADNA Library. Symmetry 2020, 12, 1730.
15. He, J.-H. A Simple Approach to Volterra-Fredholm Integral Equations. J. Appl. Comput. Mech. 2020, 6, 1184–1186.
16. Mirzaee, F. Numerical solution of system of linear integral equations via improvement of block-pulse functions. J. Math. Model. 2016, 4, 133–159.
17. Jiang, Z.H.; Schaufelberger, W. Block Pulse Functions and Their Applications in Control Systems; Springer: Berlin, Germany, 1992.
18. Maleknejad, K.; Sohrabi, S.; Baranji, B. Application of 2D-BPFs to nonlinear integral equations. Commun. Nonlinear Sci. Numer. Simul. 2010, 15, 527–535.
19. Maleknejad, K.; Mahdiani, K. Iterated Block-Pulse Method for Solving Volterra Integral Equations. J. Appl. Math. 2012, 2, 17–20.
20. Shahsavaran, A. Computational method to solve nonlinear integral equations using block pulse functions by collocation method. Appl. Math. Sci. 2011, 5, 3211–3220.
21. Zarebnia, M. A numerical solution of nonlinear Volterra-Fredholm integral equations. J. Appl. Anal. Comput. 2013, 3, 95–104.
22. Ramadan, M.; Osheba, H.S. A new hybrid orthonormal Bernstein and improved block-pulse functions method for solving mathematical physics and engineering problems. Alex. Eng. J. 2020, 59, 3643–3652.
23. Anjum, N.; He, J.H. Higher-order homotopy perturbation method for conservative nonlinear oscillators generally and microelectromechanical systems' oscillators particularly. Int. J. Mod. Phys. B 2020, 34, 2050313.
Figure 1. The absolute error of IBPFs and BPFs expansions comparison at the midpoints of the intervals of IBPFs.
Figure 2. The absolute error of IBPFs and BPFs expansions comparison at the midpoints of the intervals of IBPFs.
Figure 3. The comparison between the exact values and the approximation of both BPFs and IBPFs solutions.
Figure 4. The absolute error of IBPFs and BPFs expansions comparison at random points.
Figure 5. The absolute error of IBPFs and BPFs expansions comparison at the midpoints of the intervals of IBPFs.
Figure 6. The absolute error of IBPFs and BPFs expansions comparison at the midpoints of the intervals of IBPFs.
Figure 7. The comparison between the exact value and the approximation of both BPFs and IBPFs solutions.
Figure 8. The absolute error of IBPFs and BPFs expansions comparison at random points.
Figure 9. The absolute error of IBPFs and BPFs expansions comparison at the midpoints of the intervals of IBPFs.
Figure 10. The absolute error of IBPFs and BPFs expansions comparison at the midpoints of the intervals of IBPFs.
Figure 11. The comparison between the exact value and the approximation of both BPFs and IBPFs solutions.
Figure 12. The absolute error of IBPFs and BPFs expansions comparison at random points.
Table 1. The numerical results of Example 1 at m = 16 and 32.

t | Exact Solution | BPF | IBPF | Absolute Error (BPF) | Absolute Error (IBPF)
0.125 | 0.12024764 | 0.17074092 | 0.11675429 | 5.04933 × 10−2 | 3.4934 × 10−3
0.25 | 0.21484125 | 0.24620519 | 0.20885207 | 3.13639 × 10−2 | 5.9892 × 10−3
0.375 | 0.26988208 | 0.28138851 | 0.26381166 | 1.15064 × 10−2 | 6.0704 × 10−3
0.5 | 0.2862167 | 0.28243256 | 0.28237544 | 3.7841 × 10−3 | 3.8413 × 10−3
0.625 | 0.27406586 | 0.26115386 | 0.27321984 | 1.29120 × 10−2 | 8.460 × 10−4
0.75 | 0.24576 | 0.22894428 | 0.24723095 | 1.68157 × 10−2 | 1.4710 × 10−3
0.875 | 0.21123306 | 0.19395373 | 0.21370613 | 1.72793 × 10−2 | 2.4731 × 10−3

t | Exact Solution | BPF | IBPF | Absolute Error (BPF) | Absolute Error (IBPF)
0.0625 | 0.42187 | 0.5429246505 | 0.422855195 | 2.96410 × 10−2 | 9.80195 × 10−4
0.1875 | 1.546875 | 1.714703509 | 1.54762609 | 2.23075 × 10−2 | 7.51099 × 10−4
0.3125 | 3.046875 | 3.26144961 | 3.0466154003 | 1.21162 × 10−2 | 2.596 × 10−4
0.4375 | 4.921875 | 5.183160963 | 4.9190966 | 2.6976 × 10−3 | 2.7783 × 10−3
0.5625 | 7.171875 | 7.47983498 | 7.1641429 | 3.9190 × 10−3 | 7.73205 × 10−3
0.6875 | 9.796875 | 10.15146853 | 9.78061240 | 7.4817 × 10−3 | 6.749 × 10−4
0.8125 | 12.796875 | 13.1980578 | 12.7671313 | 8.7093 × 10−3 | 1.62626 × 10−2
0.9375 | 16.17187 | 16.6195984210 | 16.12207298 | 8.5144 × 10−3 | 4.9802 × 10−2
Table 2. The numerical results of Example 2 at m = 16 and 32.

t | Exact Solution | IBPF | BPF | Absolute Error (IBPF) | Absolute Error (BPF)
0.0625 | 1.133148453 | 1.1335535982 | 1.2066444069 | 4.05145 × 10−4 | 7.3496 × 10−2
0.1875 | 1.45499141461 | 1.4555116313 | 1.5493620874 | 5.20217 × 10−4 | 9.43707 × 10−2
0.3125 | 1.86824595743 | 1.86891392893 | 1.9894202999 | 6.67972 × 10−4 | 1.21174 × 10−1
0.4375 | 2.39887529396 | 2.3997329863 | 2.5544662295 | 8.57692 × 10−4 | 1.55591 × 10−1
0.5625 | 3.0802168489 | 3.08131814774 | 3.2799995648 | 1.1013 × 10−3 | 1.99783 × 10−1
0.6875 | 3.9550767229 | 3.95649081860 | 4.211602808 | 1.4141 × 10−3 | 2.56526 × 10−1
0.8125 | 5.07841903718 | 5.0802347720 | 5.4078050504 | 1.81573 × 10−3 | 3.29386 × 10−1
0.9375 | 6.52081912033 | 6.5228219472 | 6.943759133 | 2.00283 × 10−3 | 4.2294 × 10−1

t | Exact Solution | IBPF | BPF | Absolute Error (IBPF) | Absolute Error (BPF)
0.0625 | 1.1331484530 | 1.1332478037 | 1.1692188240 | 9.93506 × 10−5 | 3.60704 × 10−2
0.1875 | 1.4549914146 | 1.4551189833 | 1.5013066877 | 1.27569 × 10−4 | 4.63153 × 10−2
0.3125 | 1.8682459574 | 1.8684097589 | 1.9277159453 | 1.63802 × 10−4 | 5.947 × 10−2
0.4375 | 2.398875293 | 2.3990856192 | 2.4752362699 | 2.10325 × 10−4 | 7.6361 × 10−2
0.5625 | 3.080216848 | 3.0804869119 | 3.1782662829 | 2.70063 × 10−4 | 9.80494 × 10−2
0.6875 | 3.9550767229 | 3.9554234907 | 4.0809746882 | 3.46768 × 10−4 | 1.25898 × 10−1
0.8125 | 5.0784190371 | 5.07886429 | 5.2400752245 | 4.45259 × 10−4 | 1.61656 × 10−1
0.9375 | 6.5208191203 | 6.521390843 | 6.728389773 | 5.71723 × 10−4 | 2.07571 × 10−1
Table 3. The numerical results of Example 3 at m = 32 and 64.

t | Exact Solution | IBPF | BPF | Absolute Error (IBPF) | Absolute Error (BPF)
0.03125 | 0.0625 | 0.0937493 | 0.0624999 | 6.24043 × 10−8 | 3.12493 × 10−2
0.15625 | 0.3125 | 0.343606 | 0.312496 | 4.13773 × 10−6 | 3.11058 × 10−2
0.28125 | 0.5625 | 0.59246 | 0.562475 | 2.47592 × 10−5 | 2.99597 × 10−2
0.40625 | 0.8125 | 0.838487 | 0.812412 | 8.79355 × 10−5 | 2.5987 × 10−2
0.53125 | 1.0625 | 1.07891 | 1.06227 | 2.34837 × 10−4 | 1.64118 × 10−2
0.65625 | 1.3125 | 1.31005 | 1.31198 | 5.22378 × 10−4 | 2.45347 × 10−3
0.78125 | 1.5625 | 1.52736 | 1.56148 | 1.02428 × 10−3 | 3.51379 × 10−2
0.90625 | 1.8125 | 1.72559 | 1.81067 | 1.83273 × 10−3 | 8.69077 × 10−2

t | Exact Solution | IBPF | BPF | Absolute Error (IBPF) | Absolute Error (BPF)
0.015625 | 0.03125 | 0.046875 | 0.03125 | 3.84469 × 10−9 | 1.5625 × 10−2
0.140625 | 0.28125 | 0.296794 | 0.281249 | 1.06583 × 10−6 | 1.55443 × 10−2
0.265625 | 0.53125 | 0.545944 | 0.531242 | 8.45583 × 10−6 | 1.46945 × 10−2
0.390625 | 0.78125 | 0.792682 | 0.781216 | 3.44219 × 10−5 | 1.14325 × 10−2
0.515625 | 1.03125 | 1.03441 | 1.03115 | 9.88767 × 10−5 | 3.15871 × 10−3
0.640625 | 1.28125 | 1.2676 | 1.28102 | 2.29669 × 10−4 | 1.36475 × 10−2
0.765625 | 1.53125 | 1.48789 | 1.53079 | 4.63088 × 10−4 | 4.33638 × 10−2
0.890625 | 1.78125 | 1.69012 | 1.78041 | 8.44677 × 10−4 | 9.11273 × 10−2
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
