1. Introduction
Integral equations play an important role in the area of applied mathematics, as they arise from a variety of physical, engineering and biological problems.
In this work, we consider mixed Volterra–Fredholm integral equations (MVFIEs) of the form given in Equation (1), with kernel $K$ and free term $f$; precise assumptions on $K$ and $f$ are stated below. Such equations arise in many applications in physics, fluid dynamics, electrodynamics, and biology. Various formulations of boundary value problems, with Neumann, Dirichlet, or mixed boundary conditions, reduce to such integral equations. They also provide mathematical models for the development of an epidemic and for numerous other physical and biological problems.
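For orientation, a representative second-kind MVFIE of this type, combining a Volterra-type integration (variable upper limit) with a Fredholm-type integration (fixed limits), can be written as follows; the concrete signature of the kernel is an illustrative assumption, not necessarily the exact form of Equation (1):

```latex
u(t) = f(t) + \int_a^t \!\! \int_a^b K(t, x, y, u(y)) \, dy \, dx,
\qquad t \in [a, b],
```

with unknown function $u$, given free term $f$, and kernel $K$.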
Over time, and especially recently, many papers have been devoted to studying these equations and their properties. Procedures for approximating their solutions numerically have been developed via collocation methods ([1,2,3]), CAS wavelets ([4]), Taylor expansion methods ([5]), block-pulse functions ([6]), linear programming ([7]), spectral methods ([8]), etc. For more considerations on mixed integral equations, see, e.g., [9,10].
The aim of this paper is to present a class of simple, yet reliable, numerical methods for approximating the solution of MVFIEs, using a combination of fixed point results for the existence and uniqueness of the solution and a suitable cubature formula for the numerical approximation of the successive iterates.
The rest of the paper is organized as follows. In Section 2, we analyze the solvability of Equation (1), using results from fixed point theory. In Section 3, we develop a numerical procedure for approximating the solution of Equation (1) when an appropriate cubature formula is used, discussing conditions for convergence and giving error estimates. In particular, we analyze in detail the convergence and give an error bound for the case when the two-dimensional composite trapezoidal rule is used. Section 4 contains numerical experiments that show the applicability of the described method. In Section 5, we draw conclusions and discuss future research ideas.
2. Solvability of the MVFIE in Equation (1)
We study the solvability of Equation (1) using fixed point theory. We recall the main results for fixed points on a Banach space.
Definition 1. Let $(X, \|\cdot\|)$ be a Banach space. A mapping $T : X \to X$ is called a contraction if there exists a constant $0 \le \alpha < 1$ such that $\|T(u) - T(v)\| \le \alpha \|u - v\|$ for all $u, v \in X$.

We have the classical result, the contraction principle on a Banach space.
Theorem 1. Let $X$ be a Banach space and $T : X \to X$ a contraction with constant $\alpha$. Then,
- (a)
the equation $T(x) = x$ has exactly one solution $x^* \in X$;
- (b)
the sequence of successive approximations $x_{k+1} = T(x_k)$, $k \ge 0$, converges to the solution $x^*$, for any arbitrary choice of the initial point $x_0 \in X$; and
- (c)
the error estimate $\|x_k - x^*\| \le \dfrac{\alpha^k}{1 - \alpha}\, \|x_1 - x_0\|$ holds for every $k \ge 1$.
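As a small illustration of Theorem 1, the following sketch (plain Python; the contraction $T(x) = \cos x$ on $[0,1]$ is a hypothetical example, with $\alpha = \sin 1 < 1$) runs the successive approximations and tracks the a priori bound from part (c):

```python
import math

def picard(T, x0, alpha, n_iter):
    """Successive approximations x_{k+1} = T(x_k) for a contraction T with
    constant alpha < 1; returns the n_iter-th iterate and the a priori
    error bound alpha**k / (1 - alpha) * |x_1 - x_0| of Theorem 1(c)."""
    x = T(x0)
    bound = alpha / (1.0 - alpha) * abs(x - x0)  # bound for k = 1
    for _ in range(n_iter - 1):
        x = T(x)
        bound *= alpha  # each step multiplies the bound by alpha
    return x, bound

# T = cos is a contraction on [0, 1]: |cos'(x)| = |sin(x)| <= sin(1) < 1,
# and cos maps [0, 1] into [cos(1), 1], a subset of [0, 1].
x50, bound50 = picard(math.cos, 0.5, math.sin(1.0), 50)
```

After 50 iterations, the iterate agrees with the unique fixed point of $\cos$ to well within the guaranteed bound.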
Remark 1. Theorem 1 still holds if $X$ is replaced by any closed subset $Y \subseteq X$ satisfying $T(Y) \subseteq Y$. This is important below (see Remark 2), since, to approximate the solution of the MVFIE, we want to apply this fixed point result locally (on such a subset $Y$), not globally, on the entire space $X$.
We apply Theorem 1 to the MVFIE in Equation (1). To this end, we define the integral operator $F$ associated with Equation (1), whose value $F(u)$ at a function $u$ is given by the right-hand side of Equation (1). Then, finding a solution of the integral Equation (1) is equivalent to finding a fixed point of the associated operator $F$; that is, to solving Equation (3),
\[
F(u) = u.
\]
To use Theorem 1 (actually, Remark 1), we consider the space $C[a,b]$, equipped with the Chebyshev norm $\|u\| = \max_{t \in [a,b]} |u(t)|$, and a closed ball $Y \subseteq C[a,b]$ of radius $\rho$, for some $\rho > 0$. Then, $(C[a,b], \|\cdot\|)$ is a Banach space and $Y$ is a closed, convex subset. Thus, we have:
Theorem 2. Let $f \in C[a,b]$ and $\rho > 0$. Assume that:
- (i)
there exists a constant $L > 0$ such that $K$ satisfies a Lipschitz condition, with constant $L$, in its last argument, for all points of the region of integration and all relevant values of the unknown (Equation (4));
- (ii)
the bound in Equation (5), defined via the maximum of $|K|$ over the region of integration and the relevant values of the unknown, holds; and
- (iii)
the constant $\alpha$ in Equation (6), determined by $L$ and the size of the region of integration, satisfies $\alpha < 1$.

Then,
- (a)
Equation (3) has exactly one solution $u^* \in Y$;
- (b)
the sequence of successive approximations $u_{k+1} = F(u_k)$, $k \ge 0$, converges to the solution $u^*$ for any arbitrary initial point $u_0 \in Y$; and
- (c)
the error estimate $\|u_k - u^*\| \le \dfrac{\alpha^k}{1 - \alpha}\, \|u_1 - u_0\|$ holds for every $k \ge 1$.
Proof. Let $u \in Y$ be arbitrary. Since $u$ lies in the ball $Y$, its values remain in a bounded set for all points of $[a,b]$. Then, for every fixed $t \in [a,b]$, estimating $|F(u)(t) - f(t)|$ by the maximum of $|K|$ over the region of integration, we obtain, by Equation (5), that $F(u) \in Y$; hence, $F(Y) \subseteq Y$.

Next, again for every fixed $t \in [a,b]$ and for $u, v \in Y$, the Lipschitz condition in Equation (4) gives $|F(u)(t) - F(v)(t)| \le \alpha \|u - v\|$. Then, $\|F(u) - F(v)\| \le \alpha \|u - v\|$ and, by Equation (6), $F$ is a contraction on $Y$, so all the conclusions follow from Theorem 1 and Remark 1. □
Remark 2. Let us discuss the Lipschitz condition in Equation (4). One case where it is relatively easy to verify is when the partial derivative of $K$ with respect to its last argument is bounded. Then, any bound of this derivative is a Lipschitz constant $L$ for the function. Still, this condition can be quite restrictive if imposed on the entire space. However, it is much more relaxed when the partial derivative has to be bounded only locally, which is the reason we use a local fixed point result instead of a global one. We use this in our numerical examples. Of course, there exist functions with unbounded derivatives that still satisfy Equation (4). Even some non-differentiable kernels (for instance, some kernels with weak singularities) can satisfy a Lipschitz condition (see, e.g., [1]). For more considerations on fixed point results, see, e.g., [11].
3. Numerical Approximation of the Solution
Now, we have an iterative procedure, given by Equation (7), for approximating the true solution $u^*$. However, to use it in practice, we must approximate the integrals numerically and evaluate the iterates at some mesh points. We consider a cubature formula (Equation (9)), with given nodes and coefficients, whose remainder satisfies the bound in Equation (10) for some constant $M > 0$, the bound tending to zero as the mesh is refined.
Since we need to integrate over a two-dimensional region, we use Equation (9) in the following way. Let $a = t_0 < t_1 < \dots < t_m = b$ be a partition of $[a,b]$ and let $u_0$ be the initial approximation. Then, we use the iteration in Equation (7) and the numerical integration scheme in Equation (9) to approximate each iterate $u_k$ at the nodes $t_i$, for $k \ge 1$ and $i = 0, 1, \dots, m$. Applying the cubature at every node yields approximations in which the remainder of Equation (10) enters at each step; denoting the maximum error at the nodes at step $k$, Equation (10) then provides a bound on it. Proceeding in a similar way, we arrive at the fully discrete scheme in Equation (14). Thus, at each step, the values at the nodes can be computed from those of the previous iteration.
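The computation of each sweep from the previous one can be sketched in code. The following minimal Python sketch assumes, for illustration only, the model form $u(t) = f(t) + \int_a^t \int_a^b K(t,x,y,u(y))\,dy\,dx$ (the exact signature in Equation (1) may differ), equidistant nodes, the composite trapezoidal rule in both directions, and the illustrative choice $u_0 = f$:

```python
import numpy as np

def solve_mvfie(K, f, a, b, m, n_iter):
    """Picard sweeps for the model equation
        u(t) = f(t) + int_a^t int_a^b K(t, x, y, u(y)) dy dx,
    discretized at the m+1 equidistant nodes t_i = a + i*h with the
    composite trapezoidal rule; each sweep uses only the previous one."""
    t = np.linspace(a, b, m + 1)
    h = (b - a) / m
    w = np.full(m + 1, h)                  # trapezoidal weights in y
    w[0] = w[-1] = h / 2
    u = f(t)                               # initial approximation u_0 = f
    for _ in range(n_iter):
        u_new = np.empty_like(u)
        for i in range(m + 1):
            # inner (Fredholm) integral in y, at each x-node t_j <= t_i
            inner = np.array([w @ K(t[i], t[j], t, u) for j in range(i + 1)])
            if i == 0:
                outer = 0.0                # Volterra integral over [a, a]
            else:
                wx = np.full(i + 1, h)     # trapezoidal weights on [a, t_i]
                wx[0] = wx[-1] = h / 2
                outer = wx @ inner
            u_new[i] = f(t[i]) + outer
        u = u_new
    return t, u
```

With the hypothetical data $K(t,x,y,u) = u$ and $f(t) = t/2$ on $[0,1]$, the exact solution is $u(t) = t$, and the computed node values match it to near machine precision, since the trapezoidal rule is exact for linear integrands.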
Now, let us estimate the error. To this end, denote the maximum deviation at the nodes between the exact iterates and their computed values. Then, by Equation (11), this quantity satisfies a one-step recursive bound and, proceeding inductively, we obtain the accumulated bound in Equation (15). With this, we can give an error estimate for our approximations.
Theorem 3. Assume the conditions of Theorem 2 hold. Further, assume that the coefficients of the cubature in Equation (9) satisfy the condition in Equation (16). Then, for the true solution $u^*$ of Equation (3) and the approximations given by Equation (14), the error estimate at the nodes, combining the fixed point iteration error of Theorem 2 with the accumulated cubature error, holds for every $k \ge 1$.

Proof. For every node and every $k \ge 1$, we split the error into the distance from $u^*$ to the exact iterate $u_k$ and the distance from $u_k$ to its computed values at the nodes. The assertion then follows from Equations (15) and (16) and Theorem 2. □
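The induction behind the accumulated bound in Equation (15) follows the standard geometric-series pattern; with generic notation (an assumption here) $\varepsilon_k$ for the maximum node error at step $k$, $\alpha < 1$ the contraction constant, and $\delta$ the one-step cubature error, it reads:

```latex
\varepsilon_{k+1} \le \alpha\, \varepsilon_k + \delta
\qquad \Longrightarrow \qquad
\varepsilon_k \le \alpha^k \varepsilon_0
  + \frac{1 - \alpha^k}{1 - \alpha}\, \delta
\le \alpha^k \varepsilon_0 + \frac{\delta}{1 - \alpha}.
```

The first term decays geometrically with the iteration, while the second is controlled by the accuracy of the cubature.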
Using the Trapezoidal Rule
Thus, to approximate the iterates, we can choose any numerical integration formula that satisfies the condition in Equation (16). Next, we propose a simple such formula, the two-dimensional composite trapezoidal rule,
\[
\int_a^b \!\! \int_a^b g(x, y)\, dy\, dx \approx h^2 \sum_{i=0}^{m} \sum_{j=0}^{m} w_i\, w_j\, g(x_i, y_j),
\qquad w_0 = w_m = \tfrac{1}{2}, \quad w_i = 1 \ (0 < i < m),
\]
with equidistant nodes $x_i = y_i = a + ih$, where the notation $h = (b-a)/m$ is used. For a sufficiently smooth integrand, the remainder is of order $O(h^2)$.
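A direct implementation of this rule, together with an empirical check of the $O(h^2)$ behavior of the remainder (the integrand $\sin(x+y)$ is chosen only for illustration), can be sketched as follows:

```python
import math

def trap2d(g, a, b, c, d, m, n):
    """Two-dimensional composite trapezoidal rule on [a, b] x [c, d],
    with m (resp. n) equal subintervals in x (resp. y)."""
    hx = (b - a) / m
    hy = (d - c) / n
    total = 0.0
    for i in range(m + 1):
        wi = 0.5 if i in (0, m) else 1.0      # endpoint weights are 1/2
        for j in range(n + 1):
            wj = 0.5 if j in (0, n) else 1.0
            total += wi * wj * g(a + i * hx, c + j * hy)
    return hx * hy * total

# Halving h should divide the error by about 4 (second order).
exact = 2 * math.sin(1.0) - math.sin(2.0)     # int_0^1 int_0^1 sin(x+y) dy dx
err_10 = abs(trap2d(lambda x, y: math.sin(x + y), 0, 1, 0, 1, 10, 10) - exact)
err_20 = abs(trap2d(lambda x, y: math.sin(x + y), 0, 1, 0, 1, 20, 20) - exact)
```

The observed error ratio close to 4 when the mesh size is halved confirms the second-order remainder.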
For a fixed $m$, we consider the equidistant nodes $t_i = a + ih$, $i = 0, 1, \dots, m$, and apply the scheme in Equation (14) with the trapezoidal cubature at each node $t_i$. Now, the trapezoidal coefficients are positive and sum to the area of the region of integration; thus, in this case, the constant appearing in the condition in Equation (16) reduces to the contraction constant $\alpha$, which is already assumed to be strictly less than 1, by Equation (6).
To find the constant $M$ in Equation (10), we have to bound the second-order partial derivatives of the integrand with respect to each variable, as well as the mixed fourth-order derivative, as functions of $t$ and $s$. Carrying out these differentiations, it is clear that, if $K$ is a smooth function with bounded fourth-order partial derivatives and $f$ is a smooth function with bounded second-order derivatives, then there exists a constant $M > 0$, independent of $n$ or $m$, such that the remainder bound in Equation (21) holds.
Thus, when the trapezoidal rule is used, we have the following approximation result:
Theorem 4. Assume the conditions of Theorem 2 hold, with $K$ and $f$ smooth as above. Then, for the true solution $u^*$ of Equation (3) and the approximations given by Equation (14), for all nodes and any $k \ge 1$, we have an error estimate of order $O(\alpha^k) + O(h^2)$, with the constant given in Equation (21).
4. Numerical Experiments
Example 1. Let us start with a nonlinear MVFIE whose exact solution is known in closed form. On its interval of definition, $f$ is a decreasing function, so its extreme values are attained at the endpoints; this provides the bounds required in condition (ii). Choosing a suitable radius $\rho$, conditions (i)–(iii) of Theorem 2 are verified. To find the Lipschitz constant $L$, we compute the partial derivative of $K$ with respect to its last argument and bound it on the relevant set. Thus, the conditions of Theorem 4 are satisfied.

We use the trapezoidal rule for several values of $m$ and several iteration counts $k$, with the corresponding equidistant nodes. Table 1 contains the errors at the nodes, for the chosen initial approximation $u_0$.
Example 2. Next, consider a second nonlinear MVFIE, whose true solution is also known in closed form. Computing the required bounds for $f$ and $K$ and choosing again a suitable radius $\rho$, conditions (i)–(iii) are verified; thus, Theorem 4 applies.

For several values of $m$ and $k$, we use the trapezoidal rule with the corresponding equidistant nodes. In Table 2, we give the errors at the nodes, for the chosen initial approximation $u_0$.
5. Conclusions
In this paper, we developed a class of iterative numerical methods for the solution of one-dimensional mixed Volterra–Fredholm integral equations of the second kind. These methods combine the successive approximations of the analytical solution (provided by fixed point theory) with numerical approximations of the integrals involved in the iterates. Many results in fixed point theory can be used, as long as they provide an iterative procedure that converges to the true analytic solution. In addition, we described the conditions that a quadrature rule has to satisfy in order to guarantee that the approximations of the solution at the mesh points converge to the true values at those points. Again, many such numerical schemes can be employed. We chose the contraction principle for the first part and the two-dimensional composite trapezoidal rule for the numerical integration of the iterates. These choices were made in order to obtain a method that is simple in the proof of its convergence, in the conditions to be met and, especially, in its implementation: most mathematical software packages have the trapezoidal rule built in, so very little further coding is required.

Another advantage of these types of methods, compared to, e.g., collocation, Galerkin, or Nyström methods (or other numerical methods that posit the solution in a certain form, substitute it into the equation, and then force the equation to hold at some mesh points), is that they do not lead to a system of algebraic equations, which is oftentimes ill-conditioned and, thus, difficult to solve. The method presented here is easy to use and implement and, as the numerical examples show, gives good approximations even with a relatively small number of iterations and quadrature nodes.
Similar ideas can be used for other types of mixed integral equations, with more complicated kernel functions, such as kernels with some type of singularity or kernels with modified arguments. In addition, other fixed point successive approximation results can be considered (such as Mann iteration or Krasnoselskii iteration), which, under certain conditions, may converge faster than Picard iteration. To increase the speed of convergence of the method, other, higher-order numerical integration schemes can also be used (such as spline quadratures; see, e.g., [12,13,14]), as long as they satisfy the convergence conditions of Theorem 3.