**Multi-Wavelets Galerkin Method for Solving the System of Volterra Integral Equations**

### **Hoang Viet Long 1,2, Haifa Bin Jebreen 3,\* and Stefania Tomasiello 4**


Received: 1 July 2020; Accepted: 12 August 2020; Published: 15 August 2020

**Abstract:** In this work, an efficient algorithm is proposed for solving the system of Volterra integral equations based on the wavelet Galerkin method. The problem is reduced to a set of algebraic equations using the operational matrix of integration and the wavelet transform matrix. For the linear type, the computational effort is decreased by thresholding. The convergence analysis of the proposed scheme is investigated, and it is shown that its convergence is of order $O(2^{-Jr})$, where $J$ is the refinement level and $r$ is the multiplicity of the multi-wavelets. Several numerical tests are provided to illustrate the ability and efficiency of the method.

**Keywords:** Volterra integral equations; operational matrix of integration; multi-wavelets

#### **1. Introduction**

In this paper, we study and construct a novel numerical algorithm for the system of Volterra integral equations of the second kind

$$\mathbf{u}(\mathbf{x}) = \mathbf{f}(\mathbf{x}) + \int_0^{\mathbf{x}} \mathbf{g}(\mathbf{x},t,\mathbf{u}(t))\,dt, \quad \mathbf{x}\in\Omega := [0,1], \tag{1}$$

where $\mathbf{f}:\Omega\to\mathbb{R}^n$ ($n\in\mathbb{N}$) is a given real-valued continuous function, $\mathbf{u}:\Omega\to\mathbb{R}^n$ is the unknown function to be determined, and $\mathbf{g}:S\to\mathbb{R}^n$ with $S=\{(x,t): x,t\in\Omega\}$ is a given linear or nonlinear function of $\mathbf{u}$ which satisfies the following Lipschitz condition with respect to the third variable: for all $x,t\in[0,1]$ and for all $\mathbf{u}_1,\mathbf{u}_2\in\mathbb{R}^n$,

$$|\mathbf{g}(\mathbf{x},t,\mathbf{u}_1(t)) - \mathbf{g}(\mathbf{x},t,\mathbf{u}_2(t))| \le A\,|\mathbf{u}_1 - \mathbf{u}_2|. \tag{2}$$

The functions $\mathbf{f}$ and $\mathbf{g}$ are therefore assumed to be such that Equation (1) has a unique solution.

Equation (1) is the general form of a second-kind Volterra integral equation and appears in scientific applications in chemistry, engineering, mathematics, and physics [1–4]. Numerical and analytical solutions of linear and nonlinear Volterra integral equations have been investigated in many papers. A useful method for solving such equations is the Adomian decomposition method, which was used to investigate the existence and uniqueness of solutions of this type of equation [5,6]. One notable paper utilizing multi-wavelets for solving integro-differential equations was presented by Saray [7], where an efficient algorithm was proposed for solving the Volterra integro-differential equation; that method outperformed former approaches. Golbabai et al. [8] developed a general method based on radial basis function networks to solve the system of Volterra integral equations. The modified homotopy perturbation method for solving this type of equation has been proposed by Aminikhah et al. [5,9]. Kılıçman et al. [10] used Simpson's 3/8 rule to solve this equation. Aguilar and Brunner used collocation techniques based on spline polynomials [11]. The umbral calculus and the Laplace transform methods were used as solution approaches as well [12].

Wavelet methods, and especially the multi-wavelet Galerkin method, represent an efficient way to solve a variety of equations, including ordinary differential equations (ODEs), partial differential equations (PDEs), and integral equations [7,13,14]. Owing to the decay of the wavelet coefficients, the coefficient matrices take a sparse form. This property is very useful to reduce the computational cost. In this work, we aim to solve the system of Volterra integral equations using Alpert's multi-wavelets by exploiting the above-mentioned property. Some results are formally proved and supported by numerical experiments.

The paper is structured as follows. A brief introduction to Alpert's multi-wavelets is provided in Section 2. In Section 3, the wavelet Galerkin method is used to approximate the solution of the problem, and the convergence analysis is investigated. In Section 4, some numerical experiments are performed to illustrate the efficiency and accuracy of the proposed method.

#### **2. Alpert's Multi-Wavelets and Multiresolution Analysis**

Alpert et al. [13,15] introduced a class of multi-wavelets for $L^2$, which are indexed by a parameter $r\ge 1$ and built via Lagrange polynomials of degree less than $r$. These multi-wavelets are piecewise polynomials that are locally supported and orthonormal. The multiresolution analysis (MRA) framework, introduced and developed by Mallat [16] and Meyer [17], is useful to construct these bases.

According to the MRA, a set of primal scaling functions $\{\phi_{0,0}^0,\dots,\phi_{0,0}^{r-1}\}$ is introduced for the primal subspace $V_0^r\subset L^2[0,1]$. By translation and dilation of the primal scaling functions $\{\phi^k\}$, $k=0,\dots,r-1$, we determine the spaces $V_j^r$,

$$V_j^r = \operatorname{Span}\{\phi_{j,b}^k := \mathcal{D}_{2^j}\mathcal{T}_b\,\phi^k : b\in\mathcal{B}_j,\ k=0,1,\dots,r-1\},$$

where $\mathcal{D}_a$ and $\mathcal{T}_b$ are the dilation and translation operators, respectively, such that for a given function $h$, $\mathcal{D}_a h(x) = a^{1/2}h(ax)$ and $\mathcal{T}_b h(x) = h(x-b)$; moreover, $\mathcal{B}_j := \{0,1,\dots,2^j-1\}$ for $j\in\mathbb{Z}^+\cup\{0\}$. Therefore, $\phi_{j,b}^k$ is a polynomial of degree less than $r$ which is restricted to $I_{j,b} = [x_{j,b}, x_{j,b+1}]$, where $x_{j,b} := 2^{-j}b$ and $\Omega := [0,1] = \bigcup_{b\in\mathcal{B}_j} I_{j,b}$.

For a fixed integer $J\ge 0$, the orthogonal projection $\mathcal{P}_J^r h$ of $h\in L^2[0,1]$ onto $V_J^r$ is determined by

$$h \approx \mathcal{P}_J^r(h) = \sum_{b\in\mathcal{B}_J}\sum_{k=0}^{r-1}\langle h,\phi_{J,b}^k\rangle\,\phi_{J,b}^k. \tag{3}$$

The coefficients $h_{J,b}^k = \langle h,\phi_{J,b}^k\rangle$ are determined by $h_{J,b}^k = \int_{I_{J,b}} h(x)\,\phi_{J,b}^k(x)\,dx$ [18–20]. To avoid computing these integrals, the Gauss–Legendre quadrature is applied as follows:

$$h_{J,b}^k \approx 2^{-J/2}\sqrt{\frac{\omega_k}{2}}\, h\!\left(2^{-J}\Big(\frac{\tau_k+1}{2}+b\Big)\right), \quad k=0,\dots,r-1,\ b\in\mathcal{B}_J, \tag{4}$$

where $\omega_k$ and $\tau_k$ are the Gauss–Legendre quadrature weights and the roots of the Legendre polynomial of order $r$, respectively, as introduced in [21–23]. The projection $\mathcal{P}_J^r h$ converges to $h$ if $h\in C^r(\Omega)$ ($r$ times continuously differentiable) [15]. $\mathcal{P}_J^r h$ approximates $h$ with a mean error bounded as follows:

$$\|\mathcal{P}_J^r(h) - h\| \le 2^{-Jr}\,\frac{2}{4^r\,r!}\,\sup_{\mathbf{x}\in\Omega}\big|h^{(r)}(\mathbf{x})\big|. \tag{5}$$
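To make (3)–(5) concrete, the following sketch builds the Lagrange (interpolating) scaling functions at the mapped Gauss–Legendre roots, computes the coefficients by the one-point rule (4), and checks that the sup-norm error of the projection of $\sin x$ drops by roughly $2^r$ when $J$ increases by one, consistently with the bound (5). The helper names (`lagrange_basis`, `coeffs`, `projection`) are ours and introduced only for illustration.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def lagrange_basis(r):
    """Nodes on [0,1] and the L2-normalized Lagrange scaling functions."""
    tau, omega = leggauss(r)                # Gauss-Legendre roots/weights
    xi = (tau + 1) / 2                      # roots mapped to [0,1]
    def phi(k, x):
        out = np.ones_like(x)
        for i in range(r):
            if i != k:
                out *= (x - xi[i]) / (xi[k] - xi[i])
        return out / np.sqrt(omega[k] / 2)  # L2-normalization on [0,1]
    return xi, omega, phi

def coeffs(h, r, J):
    """Coefficients h_{J,b}^k via the one-point quadrature rule (4)."""
    xi, omega, _ = lagrange_basis(r)
    b = np.arange(2**J)[:, None]
    k = np.arange(r)[None, :]
    return 2.0**(-J / 2) * np.sqrt(omega[k] / 2) * h(2.0**-J * (xi[k] + b))

def projection(h, r, J):
    """Evaluate P_J^r h as the piecewise polynomial on the 2^J cells."""
    xi, omega, phi = lagrange_basis(r)
    C = coeffs(h, r, J)
    def p(x):
        b = np.minimum((x * 2**J).astype(int), 2**J - 1)
        xloc = 2**J * x - b                 # local coordinate in [0,1]
        return sum(C[b, k] * 2**(J / 2) * phi(k, xloc) for k in range(r))
    return p

xs = np.linspace(0, 1, 4001)
errs = [np.max(np.abs(np.sin(xs) - projection(np.sin, 3, J)(xs)))
        for J in (2, 3)]
```

For $r = 3$, the observed error ratio between two consecutive refinement levels is close to $2^r = 8$, matching the $O(2^{-Jr})$ rate of (5).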

Assume that $\Phi_J^r := [\Phi_{r,J,0},\dots,\Phi_{r,J,2^J-1}]^T$, where $\Phi_{r,J,b} := [\phi_{J,b}^0,\dots,\phi_{J,b}^{r-1}]$. The function vector $\Phi_J^r$ is called a multi-scaling function; it collects the scaling functions of the space $V_J^r$. By this definition, one can rewrite (3) as $\mathcal{P}_J^r(h) = H^T\Phi_J^r$, where $H$ is a vector with entries $H_{br+k+1} := h_{J,b}^k$ and dimension $N := r2^J$.

This construction of multi-wavelets for $L^2(\Omega)$ can be extended to the two-dimensional space $L^2(\Omega)^2$. Let us introduce the space $V_J^{r,2} := V_J^r\times V_J^r$, which is spanned by the orthonormal basis $\{\phi_{J,b}^k\,\phi_{J,b'}^{k'} : b,b'\in\mathcal{B}_J,\ k,k'=0,1,\dots,r-1\}$. Therefore, any function $h\in L^2(\Omega)^2$ can be projected onto $V_J^{r,2}$ by the projection $\mathcal{P}_J^r$ via

$$h \approx \mathcal{P}_J^r h = \sum_{b\in\mathcal{B}_J}\sum_{k=0}^{r-1}\sum_{b'\in\mathcal{B}_J}\sum_{k'=0}^{r-1} H_{rb+(k+1),\,rb'+(k'+1)}\,\phi_{J,b}^k(x)\,\phi_{J,b'}^{k'}(y) = \Phi_J^{rT}(x)\,H\,\Phi_J^r(y), \tag{6}$$

where *H* is an (*N* × *N*) matrix whose elements are obtained by

$$H_{rb+(k+1),\,rb'+(k'+1)} \approx 2^{-J}\sqrt{\frac{\omega_k}{2}}\sqrt{\frac{\omega_{k'}}{2}}\, h\!\left(2^{-J}(\hat{\tau}_k+b),\ 2^{-J}(\hat{\tau}_{k'}+b')\right), \tag{7}$$

where $\hat{\tau}_k = (\tau_k+1)/2$. By the following theorem, it is possible to bound the error of this projection when the function $h$ is sufficiently smooth.
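Each entry of $H$ in (7) costs a single function evaluation. The sketch below fills $H$ for a vectorized kernel and checks the identity that (7) inherits from (4): for a separable $h(x,y) = f(x)g(y)$, $H$ is the outer product of the two one-dimensional coefficient vectors. The function names are ours and purely illustrative.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def quad_points(r, J):
    """Points 2^{-J}(hat-tau_k + b) and weights sqrt(omega_k/2),
    flattened in the rb + k ordering used in (4) and (7)."""
    tau, omega = leggauss(r)
    xi = (tau + 1) / 2                      # hat-tau_k = (tau_k + 1)/2
    pts = (2.0**-J * (xi[None, :] + np.arange(2**J)[:, None])).ravel()
    wts = np.tile(np.sqrt(omega / 2), 2**J)
    return pts, wts

def coeff_vector(h, r, J):
    """One-dimensional coefficients h_{J,b}^k of (4)."""
    pts, wts = quad_points(r, J)
    return 2.0**(-J / 2) * wts * h(pts)

def coeff_matrix(h2, r, J):
    """Two-dimensional coefficient matrix H of (7)."""
    pts, wts = quad_points(r, J)
    return 2.0**-J * np.outer(wts, wts) * h2(pts[:, None], pts[None, :])

H = coeff_matrix(lambda x, y: np.sin(x) * np.cos(y), r=4, J=3)
F = coeff_vector(np.sin, 4, 3)
G = coeff_vector(np.cos, 4, 3)
```

With $r = 4$ and $J = 3$ the matrix has dimension $N = r2^J = 32$, and `H` coincides with `np.outer(F, G)`.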

**Theorem 1** ([15])**.** *Suppose that the function $h:[0,1]^2\to\mathbb{R}$ has continuous partial derivatives of order $r$ and a mixed partial derivative of order $2r$. Then*

$$\|\mathcal{P}_J^r h - h\| \le \mathcal{M}_{\max}\,\frac{2^{1-Jr}}{4^r\,r!}\left(2 + \frac{2^{1-Jr}}{4^r\,r!}\right), \tag{8}$$

*where*

$$\mathcal{M}_{\max} = \max\left\{\sup_{\xi\in[0,1]}\left|\frac{\partial^r}{\partial x^r}h(\xi,y)\right|,\ \sup_{\eta\in[0,1]}\left|\frac{\partial^r}{\partial y^r}h(x,\eta)\right|,\ \sup_{\xi',\eta'\in[0,1]}\left|\frac{\partial^{2r}}{\partial x^r\,\partial y^r}h(\xi',\eta')\right|\right\}.$$

As the subspaces $V_j^r$ are nested, there exist complementary orthogonal subspaces $W_j^r$ such that

$$V_{j+1}^r = V_j^r \oplus W_j^r, \qquad j\in\mathbb{Z}^+\cup\{0\}; \tag{9}$$

here and in the following, $\oplus$ denotes an orthogonal sum. There is a family of further bases whose dilations and translations span the complementary subspaces $W_j^r$, namely,

$$W_j^r = \operatorname{Span}\{\psi_{j,b}^k := \mathcal{D}_{2^j}\mathcal{T}_b\,\psi^k : b\in\mathcal{B}_j,\ k=0,1,\dots,r-1\}.$$

The functions $\psi_{j,b}^k$ are called multi-wavelets. Due to (9), the multi-scale decomposition can be found inductively: $V_J^r = V_0^r\oplus\big(\oplus_{j=0}^{J-1} W_j^r\big)$. This decomposition gives rise to the multi-scale projection operator $\mathcal{M}_J^r$ that maps $L^2(\Omega)$ onto $V_J^r$ via

$$h \approx \mathcal{M}_J^r(h) = \Big(\mathcal{P}_0^r + \sum_{j=0}^{J-1}\mathcal{Q}_j^r\Big)(h), \tag{10}$$

where $\mathcal{Q}_j^r$ is the orthogonal projection operator that maps $L^2(\Omega)$ onto $W_j^r$. In fact, by using the multi-scale projection operator, any function $h\in L^2(\Omega)$ can be approximated by the multi-wavelets of the higher levels $W_j^r$, $j=0,1,\dots,J-1$, and the multi-scaling functions of the coarse space $V_0^r$, viz.,

$$h \approx \mathcal{M}_J^r(h) = \sum_{k=0}^{r-1} h_{0,0}^k\,\phi_{0,0}^k + \sum_{j=0}^{J-1}\sum_{b\in\mathcal{B}_j}\sum_{k=0}^{r-1}\tilde{h}_{j,b}^k\,\psi_{j,b}^k, \tag{11}$$

where

$$h_{0,0}^k := \langle h,\phi_{0,0}^k\rangle, \quad \tilde{h}_{j,b}^k := \langle h,\psi_{j,b}^k\rangle. \tag{12}$$

Note that the single-scale coefficients $h_{0,0}^k$ can be determined by (4). The multi-wavelet coefficients $\tilde{h}_{j,b}^k$ of the higher levels, however, in many cases have to be calculated numerically. To avoid such numerical computations, the wavelet transform matrix $T_J$ can be applied, as introduced in [14,24]. This matrix expresses the multi-wavelets in terms of the scaling functions:

$$\Psi_J^r = T_J\,\Phi_J^r, \tag{13}$$

where $\Psi_J^r := [\Phi_{r,0,b},\Psi_{r,0,b},\Psi_{r,1,b},\dots,\Psi_{r,J-1,b}]^T$ and $\Psi_{r,j,b} := [\psi_{j,b}^0,\dots,\psi_{j,b}^{r-1}]$, $b\in\mathcal{B}_j$. Using the vector function $\Psi_J^r$ and (11), we can write

$$h \approx \mathcal{M}_J^r(h) = \tilde{H}_J^T\,\Psi_J^r, \tag{14}$$

where $\tilde{H}_J$ is an $N$-dimensional vector with entries $h_{0,0}^k$ and $\tilde{h}_{j,b}^k$ for $b\in\mathcal{B}_j$, $j=0,\dots,J-1$, and $k=0,\dots,r-1$. Besides, it is obvious that $\tilde{H}_J = T_J^{-T}H_J$.

#### *Thresholding*

Alpert's multi-wavelets provide vanishing moments of order $N_\psi^k = k + r - 1$ for $k=0,1,\dots,r-1$, i.e.,

$$\int_{-\infty}^{\infty} x^p\,\psi_{0,0}^k(x)\,dx = 0, \quad 0\le p < N_\psi^k, \ \ k=0,1,\dots,r-1. \tag{15}$$

Furthermore, Alpert's multi-wavelets are uniformly bounded with respect to the $L^\infty$- and $L^1$-norms, i.e.,

$$\|\psi_{J,b}^k\|_{L^\infty(\Omega)} \lesssim 1, \quad \|\psi_{J,b}^k\|_{L^1(\Omega)} \lesssim 1. \tag{16}$$

The vanishing moments and the normalization (16) imply that the detail coefficients $\tilde{h}_{J,b}^k$ become small when the underlying function is locally smooth. Therefore, it is possible to obtain [25]

$$|\tilde{h}_{J,b}^k| = |\langle h,\psi_{J,b}^k\rangle| \le \inf_{P\in\Pi_{N_\psi^k}} |\langle h-P,\psi_{J,b}^k\rangle| \lesssim 2^{-JN_\psi^k}\,\|h\|_{W^{1,N_\psi^k}(\Omega)}. \tag{17}$$

So the detail coefficients decay at the rate $2^{-JN_\psi^k}$, and in the regions where the function is smooth, most of the detail coefficients may be discarded as the refinement level $J$ increases. This gives rise to thresholding. The thresholding operator $\mathcal{T}_{D_\varepsilon}$ is introduced by

$$\mathcal{T}_{D_\varepsilon}(\tilde{H}_J) = \bar{H}_J, \tag{18}$$

where $D_\varepsilon := \{(j,b,k) : |\tilde{h}_{j,b}^k| > \varepsilon\}$ and the elements of $\bar{H}_J$ are defined by

$$\bar{h}_{j,b}^k := \begin{cases} \tilde{h}_{j,b}^k, & (j,b,k)\in D_\varepsilon,\\ 0, & \text{else}, \end{cases} \qquad b\in\mathcal{B}_j,\ j=0,\dots,J-1,\ k=0,\dots,r-1. \tag{19}$$

Note that the thresholding affects only the detail coefficients while the coarse scale coefficients remain unchanged.
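The simplest case $r = 1$ makes (13), (18) and (19) easy to trace: the transform matrix $T_J$ is then the orthonormal Haar matrix, which the sketch below builds by a standard recursion (the general multi-wavelet $T_J$ is constructed in [14,24]; the recursion here is our simplification). Thresholding the detail coefficients of a smooth function and transforming back changes the coefficient vector by at most about $\sqrt{2^J}\,\varepsilon$ in the $\ell^2$ norm.

```python
import numpy as np

def haar_matrix(J):
    """Orthonormal J-level Haar analogue of the transform matrix T_J (r = 1)."""
    T = np.array([[1.0]])
    for _ in range(J):
        n = T.shape[0]
        top = np.kron(T, [1.0, 1.0]) / np.sqrt(2)            # coarser averages
        bot = np.kron(np.eye(n), [1.0, -1.0]) / np.sqrt(2)   # finest details
        T = np.vstack([top, bot])
    return T

J, eps = 6, 1e-2
T = haar_matrix(J)

# Scaling coefficients of h(x) = sin(2 pi x) via (4) with r = 1 (midpoints).
x = (np.arange(2**J) + 0.5) * 2.0**-J
H = 2.0**(-J / 2) * np.sin(2 * np.pi * x)
Htilde = T @ H                    # multi-scale coefficients, cf. (13)

# Thresholding (19): keep the coarse coefficient, drop small details.
kept = Htilde.copy()
kept[1:][np.abs(kept[1:]) <= eps] = 0.0
err = np.linalg.norm(T.T @ kept - H)
```

Since $T_J$ is orthonormal, `T.T @ Htilde` recovers `H` exactly, while the thresholded reconstruction differs from it by less than $\sqrt{2^J}\,\varepsilon$.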

The approximation error due to thresholding can be estimated similarly to the case of classical wavelets. Let $\mathcal{A}_{D_\varepsilon}$ be the approximation operator $\mathcal{A}_{D_\varepsilon} := (\mathcal{M}_J^r)^{-1}\,\mathcal{T}_{D_\varepsilon}\,\mathcal{M}_J^r$. The resulting error can be bounded as stated by the following proposition [25].

**Proposition 1** (Approximation error)**.** *Let $\Omega$ be bounded and $\varepsilon_j = \bar{a}^{\,j-J}\varepsilon$ with $\bar{a} > 1$. Then the approximation error with respect to the set of significant details $D_\varepsilon$ is uniformly bounded in $L^q(\Omega)$, $q\in[1,\infty]$, i.e.,*

$$\|\mathcal{P}_J^r h - \mathcal{P}_{J,D_\varepsilon}^r h\|_{L^q(\Omega)} \le C_{\mathrm{thr}}\,\varepsilon, \tag{20}$$

*for some constant $C_{\mathrm{thr}} > 0$ independent of $J$ and $\varepsilon$. Here, $\mathcal{P}_J^r h$ and $\mathcal{P}_{J,D_\varepsilon}^r h$ are the projections according to (11) corresponding to the coefficients $\tilde{H}_J$ and $\mathcal{A}_{D_\varepsilon}\tilde{H}_J$, respectively.*

#### **3. Multi-Wavelets Galerkin Method**

In this section, we use the wavelet Galerkin method to solve the system of Volterra integral Equations (1). To this end, we apply the interpolation property of the scaling functions to arrive at an efficient algorithm. Assume that the solution $\mathbf{u}(x)$ of Equation (1) can be expanded using the multi-scale projection operator $\mathcal{M}_J^r$ based on multi-wavelets as follows:

$$\mathbf{u}(\mathbf{x}) \approx \mathcal{M}_J^r(\mathbf{u})(\mathbf{x}) = \Big(\mathcal{P}_0^r + \sum_{j=0}^{J-1}\mathcal{Q}_j^r\Big)(\mathbf{u})(\mathbf{x}) = \tilde{\mathbf{U}}^T\otimes\Psi_J^r(\mathbf{x}), \tag{21}$$

where $\otimes$ is the Kronecker product and $\tilde{\mathbf{U}} = (\tilde{U}_1^T,\dots,\tilde{U}_n^T)^T$ is a $(1\times nr2^J)$ vector whose elements are the $n$ unknown sub-vectors $\tilde{U}_i$, each of dimension $(r2^J\times 1)$, such that

$$u_i(\mathbf{x}) := \tilde{U}_i^T\,\Psi_J^r(\mathbf{x}), \quad i=1,2,\dots,n.$$

Two types of equations can be considered: linear and nonlinear. For the linear type, the $j$-th component of the vector function $\mathbf{g}(\mathbf{x},t,\mathbf{u}(t))$ has the form

$$\mathbf{g}_j(\mathbf{x},t,\mathbf{u}(t)) := \sum_{i=1}^n a_{ji}(\mathbf{x},t)\,u_i(t), \quad j=1,2,\dots,n, \tag{22}$$

and it can be approximated by multi-scale operator, i.e.,

$$\sum_{i=1}^n a_{ji}(\mathbf{x},t)\,u_i(t) \approx \sum_{i=1}^n \mathcal{P}_J^r(a_{ji}u_i)(\mathbf{x},t) = \sum_{i=1}^n \Phi_J^{rT}(\mathbf{x})\,A_{ji}^T\,\Phi_J^r(t) = \sum_{i=1}^n \Psi_J^{rT}(\mathbf{x})\,T_J^{-T}A_{ji}^T\,T_J^{-1}\Psi_J^r(t), \quad j=1,2,\dots,n,$$

where $A_{ji}$ ($j,i=1,\dots,n$) are $(r2^J\times r2^J)$ matrices. Integrating from $0$ to $\mathbf{x}$, we get

$$\int_0^{\mathbf{x}} \mathbf{g}_j(\mathbf{x},t,\mathbf{u}(t))\,dt \approx \sum_{i=1}^n \Psi_J^{rT}(\mathbf{x})\,T_J^{-T}A_{ji}^T\,T_J^{-1} I_\Psi\,\Psi_J^r(\mathbf{x}) = \sum_{i=1}^n \tilde{U}_i^T\,\mathcal{A}_{ji}\,T_J^{-1}\Psi_J^r(\mathbf{x}), \tag{23}$$

where $\mathcal{A}_{ji}$ ($j,i=1,\dots,n$) are $(r2^J\times r2^J)$ matrices and the remaining factors are $(r2^J\times 1)$ vectors. But if the $j$-th component of the vector function $\mathbf{g}(\mathbf{x},t,\mathbf{u}(t))$ is nonlinear, one can consider the following expansion:

$$\int_0^{\mathbf{x}} \mathbf{g}_j(\mathbf{x},t,\mathbf{u}(t))\,dt \approx \mathcal{P}_J^r\left(\int_0^{\mathbf{x}} \mathbf{g}_j\big(\mathbf{x},t,\mathcal{M}_J^r(\mathbf{u})(t)\big)\,dt\right) = G_j^T\,\Phi_J^r(\mathbf{x}) = G_j^T\,T_J^{-1}\Psi_J^r(\mathbf{x}), \quad j=1,2,\dots,n, \tag{24}$$

where $G_j$ is an $(r2^J\times 1)$ vector whose elements are nonlinear algebraic expressions in the unknown coefficients. In view of Equations (23) and (24), and using the operational matrix of integration $I_\Psi$ for multi-wavelets introduced in [7,14,15], one can write

$$\int_0^{\mathbf{x}} \mathbf{g}(\mathbf{x},t,\mathbf{u}(t))\,dt \approx \begin{cases} \tilde{\mathbf{U}}^T\Gamma^T\otimes T_J^{-1}\Psi_J^r(\mathbf{x}), & \text{linear},\\ \mathbf{G}^T\otimes T_J^{-1}\Psi_J^r(\mathbf{x}), & \text{nonlinear}, \end{cases} \tag{25}$$

with $\Gamma := (\mathcal{A}_{ji})$, $j,i=1,\dots,n$, and $\mathbf{G} := (G_1^T, G_2^T,\dots,G_n^T)^T$.

Such an approximation can be considered for the $j$-th element of $\mathbf{f}$, viz.,

$$f_j(\mathbf{x}) \approx \mathcal{P}_J^r(f_j)(\mathbf{x}) = F_j^T\,\Phi_J^r(\mathbf{x}) = F_j^T\,T_J^{-1}\Psi_J^r(\mathbf{x}),$$

and thus, by putting $\mathbf{F} := (F_1^T, F_2^T,\dots,F_n^T)^T$, we have

$$\mathbf{f} \approx \mathbf{F}^T\otimes T_J^{-1}\Psi_J^r(\mathbf{x}). \tag{26}$$

Now, we introduce the residual as

$$\mathbf{r}_J^r(\mathbf{x}) = \tilde{\mathbf{U}}^T\otimes\Psi_J^r(\mathbf{x}) - \mathbf{F}^T\otimes T_J^{-1}\Psi_J^r(\mathbf{x}) - \begin{cases} \tilde{\mathbf{U}}^T\Gamma^T\otimes T_J^{-1}\Psi_J^r(\mathbf{x}), & \text{linear},\\ \mathbf{G}^T\otimes T_J^{-1}\Psi_J^r(\mathbf{x}), & \text{nonlinear}. \end{cases} \tag{27}$$

To apply the Galerkin method, it is necessary that $\langle \mathbf{r}_J^r, \Psi_J^r\rangle = 0$. Thus we have

$$\tilde{\mathbf{U}}^T - \mathbf{F}^T\otimes T_J^{-1} - \begin{cases} \tilde{\mathbf{U}}^T\Gamma^T\otimes T_J^{-1}, & \text{linear},\\ \mathbf{G}^T\otimes T_J^{-1}, & \text{nonlinear}, \end{cases} = 0. \tag{28}$$

By solving this system of linear or nonlinear equations, using the restarted generalized minimal residual method (GMRES) or the Newton method, respectively, we obtain the approximate solution of Equation (1). Note that, because we use the Galerkin method with orthogonal bases, such a system has a unique solution [26].
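For the linear type, the reduction to an algebraic system can be traced on the simplest member of the family, $r = 1$, where the scaling functions are block pulses and the operational matrix of integration is explicit. The sketch below solves the scalar test problem $u(x) = 1 + \int_0^x u(t)\,dt$ (exact solution $e^x$); the matrix `P` is the classical block-pulse operational matrix, used here as a stand-in for the multi-wavelet $I_\Psi$ of [7,14,15], and a direct solver replaces restarted GMRES.

```python
import numpy as np

N = 64                              # number of cells, step h = 1/N
h = 1.0 / N

# Block-pulse operational matrix of integration: integrating the b-th
# block pulse up to x contributes h/2 on its own cell and h on later cells.
P = h * (0.5 * np.eye(N) + np.triu(np.ones((N, N)), k=1))

f = np.ones(N)                      # block-pulse coefficients of f(x) = 1

# Galerkin reduction of u = f + int_0^x u dt:  (I - P^T) c = f
c = np.linalg.solve(np.eye(N) - P.T, f)

mid = (np.arange(N) + 0.5) * h      # cell midpoints
err = np.max(np.abs(c - np.exp(mid)))
```

The coefficients `c` approximate $e^x$ at the cell midpoints, and refining $N$ shrinks `err` further; in the paper's setting the same reduction is carried out with $I_\Psi$ in the multi-wavelet basis.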

#### *Convergence Analysis*

**Theorem 2.** *Suppose that $\mathbf{e}_J = \mathbf{u} - \mathcal{M}_J^r(\mathbf{u})$, where $\mathbf{u}$ and $\mathcal{M}_J^r(\mathbf{u})$ are the exact and approximate solutions of the nonlinear system (1), respectively. Let $X$ be an open set in $\mathbb{R}$ and let $g:\Omega\times X\to\mathbb{R}$ be a function such that $g(x,t,\mathbf{u}(x))\in C^r(\Omega)$ for any $\mathbf{u}\in X$ and condition (2) is satisfied. Furthermore, presume that $\mathbf{f}\in C^r(\Omega)$. Finally, let $\mathbf{e} := \tilde{r} - \mathbf{r}_J^r$, where the residual $\tilde{r}$ is specified as*

$$\tilde{r}(\mathbf{x}) = \mathbf{u}(\mathbf{x}) - \mathbf{f}(\mathbf{x}) - \int_0^{\mathbf{x}} \mathbf{g}(\mathbf{x},t,\mathbf{u}(t))\,dt, \tag{29}$$

*and $\mathbf{r}_J^r$ is introduced in (27).*

*If $\mathbf{u}\in C^r(\Omega)$ and the method used to solve system (28) is convergent, then one has*

$$\|\mathbf{e}\|_2^2 \le \frac{2^{1-Jr}}{4^r\,r!}\left(\big|1+\sqrt{nr2^J}\,\kappa\big|\,\sup_{\mathbf{x}\in\Omega}|\mathbf{u}(\mathbf{x})| + \sup_{\mathbf{x}\in\Omega}|\mathbf{f}(\mathbf{x})|\right),$$

*where $\kappa$ is a positive constant. Consequently, $\mathbf{e}\to 0$ as $J\to\infty$.*

**Proof.** Using (27), (29) and the hypotheses of the theorem, we can write

$$\mathbf{e}(\mathbf{x}) = \mathbf{e}_J(\mathbf{x}) - \big(\mathbf{f}(\mathbf{x}) - \mathcal{M}_J^r(\mathbf{f})(\mathbf{x})\big) - \left(\int_0^{\mathbf{x}} \mathbf{g}(\mathbf{x},t,\mathbf{u}(t))\,dt - \mathcal{M}_J^r\Big(\int_0^{\mathbf{x}} \mathbf{g}\big(\mathbf{x},t,\mathcal{M}_J^r(\mathbf{u})(t)\big)\,dt\Big)\right), \tag{30}$$

where $\mathbf{e} := \tilde{r} - \mathbf{r}_J^r$. Since the function $\mathbf{g}$ satisfies the Lipschitz condition (2), Equation (30) can be reduced to

$$\mathbf{e}(\mathbf{x}) = \mathbf{e}_J(\mathbf{x}) - \big(\mathbf{f}(\mathbf{x}) - \mathcal{M}_J^r(\mathbf{f})(\mathbf{x})\big) - A\int_0^{\mathbf{x}} \mathbf{e}_J(t)\,dt.$$

Now, suppose that

$$\mathbf{e} \approx \mathcal{E}\otimes\Psi_J^r, \quad \mathbf{e}_J \approx \mathcal{E}_J\otimes\Psi_J^r,$$

where $\mathcal{E}$ and $\mathcal{E}_J$ are $(1\times nr2^J)$ vectors, and thus one can write

$$\mathcal{E}\otimes\Psi_J^r = \mathcal{E}_J\otimes\Psi_J^r - \big(\mathbf{f}(\mathbf{x}) - \mathcal{M}_J^r(\mathbf{f})(\mathbf{x})\big) - A\,\mathcal{E}_J\otimes I_\Psi\Psi_J^r.$$

Taking the $L^2$-norm of both sides and using the triangle inequality yields

$$\begin{aligned} \|\mathcal{E}\otimes\Psi_J^r\|_2^2 &\le \|\mathcal{E}_J\otimes\Psi_J^r\|_2^2 + \|\mathbf{f}(\mathbf{x}) - \mathcal{M}_J^r(\mathbf{f})(\mathbf{x})\|_2^2 + A\|\mathcal{E}_J\otimes I_\Psi\Psi_J^r\|_2^2\\ &= \|\mathcal{E}_J\|_2^2\,\|\Psi_J^r\|_2^2 + A\|\mathcal{E}_J\|_2^2\,\|I_\Psi\Psi_J^r\|_2^2 + \|\mathbf{f}(\mathbf{x}) - \mathcal{M}_J^r(\mathbf{f})(\mathbf{x})\|_2^2\\ &\le \|\mathcal{E}_J\|_2^2\,\|\Psi_J^r\|_2^2 + A\|\mathcal{E}_J\|_2^2\,\|I_\Psi\|_2^2\,\|\Psi_J^r\|_2^2 + \|\mathbf{f}(\mathbf{x}) - \mathcal{M}_J^r(\mathbf{f})(\mathbf{x})\|_2^2, \end{aligned}$$

where the second row comes from Theorem 8 in [27]. Since Alpert's multi-wavelets are orthonormal, one can write

$$\begin{aligned} \|\mathcal{E}\|_2^2 &\le \|\mathcal{E}_J\|_2^2 + A\|\mathcal{E}_J\|_2^2\,\|I_\Psi\|_2^2 + \|\mathbf{f}(\mathbf{x}) - \mathcal{M}_J^r(\mathbf{f})(\mathbf{x})\|_2^2\\ &= \|\mathcal{E}_J\|_2^2\left(\|I_{nr2^J}\|_2^2 + A\|I_\Psi\|_2^2\right) + \|\mathbf{f}(\mathbf{x}) - \mathcal{M}_J^r(\mathbf{f})(\mathbf{x})\|_2^2. \end{aligned}$$

According to the previous section, when $\Psi_J^r$ has high vanishing moments and the function $h$ is smooth, $\langle h, \Psi_J^r\rangle$ decays fast as $J\to\infty$. By means of the vanishing moments of Alpert's multi-wavelets and the matrix norm inequalities, we get

$$\|\mathcal{E}\|_2^2 \le \|\mathcal{E}_J\|_2^2\,\big|1+\sqrt{nr2^J}\,\kappa\big| + \|\mathbf{f}(\mathbf{x}) - \mathcal{M}_J^r(\mathbf{f})(\mathbf{x})\|_2^2,$$

where $\kappa = A\|I_\Psi\|_\infty^2$. Now, Equation (5) leads to the desired result:

$$\|\mathcal{E}\|_2^2 \le \frac{2^{1-Jr}}{4^r\,r!}\left(\big|1+\sqrt{nr2^J}\,\kappa\big|\,\sup_{\mathbf{x}\in\Omega}|\mathbf{u}(\mathbf{x})| + \sup_{\mathbf{x}\in\Omega}|\mathbf{f}(\mathbf{x})|\right).$$

**Theorem 3.** *Let the conditions of Theorem 2 hold. Suppose that $\mathbf{u}_J$ is the approximate solution obtained from solving (28) using the restarted GMRES or Newton method. If these methods solve (28) with proper accuracy, the error can be estimated from*

$$\|\mathbf{u} - \mathbf{u}_J\|_2^2 \le \big(1 - A\|I_\Psi\|_2^2\big)^{-1}\,\frac{2^{1-Jr}}{4^r\,r!}\,\sup_{\mathbf{x}\in\Omega}|\mathbf{f}(\mathbf{x})| + \eta, \tag{31}$$

*where $\eta$ is a small positive number that tends to zero.*

**Proof.** Taking <sup>M</sup>*<sup>r</sup> J* (**u**) as the approximate solution of (1) and **u***<sup>J</sup>* as the approximate solution obtained from solving (28) using restarted GMRES or Newton methods, the convergence can be concluded from

$$\|\mathbf{u} - \mathbf{u}_J\| \le \|\mathbf{u} - \mathcal{M}_J^r(\mathbf{u})\| + \|\mathcal{M}_J^r(\mathbf{u}) - \mathbf{u}_J\|. \tag{32}$$

The approximate solution of (1) satisfies

$$\mathcal{M}_J^r(\mathbf{u})(\mathbf{x}) = \mathcal{M}_J^r(\mathbf{f})(\mathbf{x}) + \mathcal{M}_J^r\left(\int_0^{\mathbf{x}} \mathbf{g}\big(\mathbf{x},t,\mathcal{M}_J^r(\mathbf{u})(t)\big)\,dt\right). \tag{33}$$

Subtracting (33) from (1) and using the Lipschitz condition (2), one can write

$$\mathbf{e}_J \le \mathbf{f} - \mathcal{M}_J^r(\mathbf{f}) + A\int_0^{\mathbf{x}} \mathbf{e}_J\,dt. \tag{34}$$

Let us consider $\mathbf{e}_J \approx \mathcal{E}_J\otimes\Psi_J^r$, where $\mathcal{E}_J$ is a $(1\times nr2^J)$ vector, and thus we have

$$\mathcal{E}_J\otimes\Psi_J^r = \big(\mathbf{f}(\mathbf{x}) - \mathcal{M}_J^r(\mathbf{f})(\mathbf{x})\big) + A\,\mathcal{E}_J\otimes I_\Psi\Psi_J^r. \tag{35}$$

Taking the $L^2$-norm of both sides of (35) and using Theorem 8 in [27] yields

$$\begin{aligned} \|\mathcal{E}_J\otimes\Psi_J^r\|_2^2 &\le \|\mathbf{f}(\mathbf{x}) - \mathcal{M}_J^r(\mathbf{f})(\mathbf{x})\|_2^2 + A\|\mathcal{E}_J\otimes I_\Psi\Psi_J^r\|_2^2\\ &\le \|\mathbf{f}(\mathbf{x}) - \mathcal{M}_J^r(\mathbf{f})(\mathbf{x})\|_2^2 + A\|\mathcal{E}_J\|_2^2\,\|I_\Psi\Psi_J^r\|_2^2. \end{aligned}$$

Since Alpert multi-wavelets are orthonormal, one can write

$$\|\mathcal{E}_J\|_2^2 \le \|\mathbf{f}(\mathbf{x}) - \mathcal{M}_J^r(\mathbf{f})(\mathbf{x})\|_2^2 + A\|\mathcal{E}_J\|_2^2\,\|I_\Psi\|_2^2. \tag{36}$$

Therefore, one can bound the error $\|\mathbf{u} - \mathcal{M}_J^r(\mathbf{u})\|$ via

$$\|\mathcal{E}_J\|_2^2 \le \big(1 - A\|I_\Psi\|_2^2\big)^{-1}\,\|\mathbf{f}(\mathbf{x}) - \mathcal{M}_J^r(\mathbf{f})(\mathbf{x})\|_2^2. \tag{37}$$

According to the theorem hypotheses, the methods used to solve the obtained system are convergent, so $\eta := \|\mathcal{M}_J^r(\mathbf{u}) - \mathbf{u}_J\|$ will be very small. Inequality (31) is obtained using (5) and (32), i.e.,

$$\|\mathbf{u} - \mathbf{u}_J\|_2^2 \le \big(1 - A\|I_\Psi\|_2^2\big)^{-1}\,\frac{2^{1-Jr}}{4^r\,r!}\,\sup_{\mathbf{x}\in\Omega}|\mathbf{f}(\mathbf{x})| + \eta.$$

#### **4. Numerical Examples**

In this section, some numerical experiments are considered to illustrate the convergence and efficiency of the proposed method. To this end, we report the $L^2$ errors of the solution, defined by

$$\xi_u := \|u - \mathcal{M}_J^r(u)\|_2 = \left(\int_\Omega \big|u(\mathbf{x}) - \mathcal{M}_J^r(u)(\mathbf{x})\big|^2\,d\mathbf{x}\right)^{1/2},$$

where $u$ and $\mathcal{M}_J^r(u)$ are the exact and approximate solutions of system (1), respectively. In order to obtain a sparse coefficient matrix in the linear case, all the entries of this matrix whose absolute value is less than a small positive number $\varepsilon$ are set to zero. Finally, one can find the sparsity rate $S_\varepsilon$, which is defined by [28]

$$S_\varepsilon = \frac{N_0 - N_\varepsilon}{N_0}\times 100\%,$$

where $N_0$ is the total number of elements and $N_\varepsilon$ is the number of elements remaining after thresholding.
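The sparsity rate is straightforward to compute; the sketch below (with illustrative names) counts the entries whose magnitude survives the threshold $\varepsilon$.

```python
import numpy as np

def sparsity_rate(M, eps):
    """S_eps = (N0 - N_eps)/N0 * 100: percentage of entries that are
    zeroed out when magnitudes not exceeding eps are discarded."""
    N0 = M.size
    N_eps = int(np.count_nonzero(np.abs(M) > eps))
    return (N0 - N_eps) / N0 * 100.0

M = np.array([[1.0, 1e-6], [1e-8, 2.0]])
print(sparsity_rate(M, 1e-4))       # → 50.0
```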

**Example 1.** *Let us run the proposed method on the following linear system of Volterra integral equations [5,29]:*

$$\begin{aligned} u_1(\mathbf{x}) &= -\int_0^{\mathbf{x}} e^{-(s-\mathbf{x})}\,u_1(s)\,ds - \int_0^{\mathbf{x}} \cos(s-\mathbf{x})\,u_2(s)\,ds + \cosh(\mathbf{x}) + \mathbf{x}\sin(\mathbf{x}),\\ u_2(\mathbf{x}) &= -\int_0^{\mathbf{x}} e^{s+\mathbf{x}}\,u_1(s)\,ds - \int_0^{\mathbf{x}} \mathbf{x}\cos(s)\,u_2(s)\,ds + 2\sin(\mathbf{x}) + \mathbf{x}\big(\sin^2(\mathbf{x}) + e^{\mathbf{x}}\big). \end{aligned}$$

*The exact solution is* $\mathbf{u}(x) = (e^{-x}, 2\sin(x))$*. The effect of thresholding on the $L^2$-errors and the sparsity percentage is reported in Table 1 for different values of $r$, $J$ and $\varepsilon$. To illustrate the effect of the refinement level $J$ and the multiplicity parameter $r$ on the $L^2$ error, Figure 1 is plotted. Figure 2 shows the sparse matrix when $r=5$, $J=3$ and $\varepsilon = 10^{-4}, 10^{-2}$.*

| $r$ | $J$ | $S_\varepsilon$ (no thr.) | $L^2$-error (no thresholding) | $S_\varepsilon$ ($\varepsilon=10^{-5}$) | $L^2$-error ($\varepsilon=10^{-5}$) | $S_\varepsilon$ ($\varepsilon=10^{-3}$) | $L^2$-error ($\varepsilon=10^{-3}$) |
|---|---|---|---|---|---|---|---|
| 3 | 2 | 6.25 | $\xi_{u_1}=3.24\times10^{-5}$, $\xi_{u_2}=8.41\times10^{-5}$ | 24.48 | $\xi_{u_1}=3.30\times10^{-5}$, $\xi_{u_2}=8.41\times10^{-5}$ | 49.83 | $\xi_{u_1}=2.42\times10^{-4}$, $\xi_{u_2}=3.04\times10^{-4}$ |
| 3 | 3 | 17.19 | $\xi_{u_1}=4.05\times10^{-6}$, $\xi_{u_2}=1.05\times10^{-5}$ | 53.30 | $\xi_{u_1}=6.02\times10^{-6}$, $\xi_{u_2}=1.26\times10^{-5}$ | 74.13 | $\xi_{u_1}=2.41\times10^{-4}$, $\xi_{u_2}=5.15\times10^{-4}$ |
| 5 | 2 | 6.25 | $\xi_{u_1}=6.38\times10^{-9}$, $\xi_{u_2}=1.66\times10^{-8}$ | 45.69 | $\xi_{u_1}=4.62\times10^{-6}$, $\xi_{u_2}=1.28\times10^{-6}$ | 70.69 | $\xi_{u_1}=1.53\times10^{-4}$, $\xi_{u_2}=6.51\times10^{-4}$ |
| 5 | 3 | 17.19 | $\xi_{u_1}=2.00\times10^{-10}$, $\xi_{u_2}=5.19\times10^{-10}$ | 71.42 | $\xi_{u_1}=1.24\times10^{-5}$, $\xi_{u_2}=8.54\times10^{-6}$ | 86.75 | $\xi_{u_1}=1.53\times10^{-4}$, $\xi_{u_2}=6.52\times10^{-4}$ |

**Table 1.** Effects of parameters *r*, *J* and *ε* on sparsity and *L*<sup>2</sup>-error for Example 1.

**Figure 1.** Effects of the refinement level *J* (**left**, with *r* = 5) and the multiplicity parameter *r* (**right**, with *J* = 2) on the *L*<sup>2</sup>-error for Example 1.

**Figure 2.** Sparsity pattern of the coefficient matrix after thresholding with *ε* = 10<sup>−4</sup> (**left**) and *ε* = 10<sup>−2</sup> (**right**) for Example 1.
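As a rough illustration of the thresholding step, the sketch below applies hard thresholding to a toy coefficient matrix and reports the resulting percentage of zero entries (the quantity *S<sub>ε</sub>* tabulated above). The decaying random matrix is a hypothetical stand-in for the multi-wavelet coefficient matrix; the actual matrix is produced by the Galerkin discretization.

```python
import numpy as np

def hard_threshold(A, eps):
    """Zero out all entries with magnitude below eps (hard thresholding)."""
    B = A.copy()
    B[np.abs(B) < eps] = 0.0
    return B

def sparsity_percentage(A):
    """Percentage of zero entries, analogous to the S_eps columns of Table 1."""
    return 100.0 * np.count_nonzero(A == 0) / A.size

rng = np.random.default_rng(0)
# toy matrix whose columns decay, mimicking decaying wavelet coefficients
A = rng.standard_normal((64, 64)) * np.exp(-0.2 * np.arange(64))[None, :]
for eps in (1e-4, 1e-2):
    print(eps, round(sparsity_percentage(hard_threshold(A, eps)), 2))
```

Larger *ε* zeroes more entries, so the sparsity percentage grows with *ε*, at the price of a larger perturbation of the coefficient matrix.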

**Example 2.** *Let us consider the following system of Volterra integral equations as a further example*

$$\begin{aligned} u_1(x) &= -\frac{x^5}{3} - \frac{x^4}{4} + \frac{x^3}{3} + x + \int_0^x (x^2 - s)\big(u_1(s) + u_2(s)\big)\, ds, \\ u_2(x) &= \frac{x^3}{2} - \frac{x^4}{3} + x^2 - \int_0^x x\big(u_1(s) - u_2(s)\big)\, ds. \end{aligned}$$

*The exact solution, reported in [5,30], is* **u**(*x*) = (*x*, *x*<sup>2</sup>)*. To illustrate the effect of thresholding on the coefficient matrix obtained from the proposed method, Figure 4 is provided. Figure 3 shows the effect of the parameters r and J on the L*<sup>2</sup>*-error.*

**Figure 3.** Effects of the refinement level *J* (**left**, with *r* = 4) and the multiplicity parameter *r* (**right**, with *J* = 3) on the *L*<sup>2</sup>-error for Example 2.

**Figure 4.** Sparsity pattern of the coefficient matrix after thresholding with *ε* = 10<sup>−5</sup> (**left**) and *ε* = 10<sup>−3</sup> (**right**) for Example 2.
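As a sanity check (not part of the paper's method), the reported exact solution **u**(*x*) = (*x*, *x*<sup>2</sup>) can be substituted into the system of Example 2 and the residuals of both equations evaluated numerically; the helper `trap` and the sample points below are our own choices.

```python
import numpy as np

# reported exact solutions of Example 2
u1 = lambda s: s
u2 = lambda s: s**2

def trap(y, s):
    """Composite trapezoidal rule for samples y on the grid s."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(s)) / 2)

def residuals(x, n=2000):
    """Residuals of the two equations of Example 2 at the point x."""
    s = np.linspace(0.0, x, n)
    I1 = trap((x**2 - s) * (u1(s) + u2(s)), s)
    I2 = trap(x * (u1(s) - u2(s)), s)
    f1 = -x**5/3 - x**4/4 + x**3/3 + x
    f2 = x**3/2 - x**4/3 + x**2
    return abs(u1(x) - (f1 + I1)), abs(u2(x) - (f2 - I2))

for xv in (0.25, 0.5, 1.0):
    print(xv, residuals(xv))
```

Both residuals are at the level of the quadrature error, confirming that the stated pair satisfies the system.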

**Example 3.** *Let us consider the following nonlinear system*

$$\begin{aligned} u_1(x) &= \sin(x) - x + \int_0^x \big(u_1^2(s) + u_2^2(s)\big)\, ds, \\ u_2(x) &= \cos(x) - \frac{1}{2}\sin^2(x) + \int_0^x u_1(s)\, u_2(s)\, ds. \end{aligned}$$

*The exact solutions of this system are u*<sub>1</sub>(*x*) = sin(*x*) *and u*<sub>2</sub>(*x*) = cos(*x*) *[8].*

*Table 2 shows the efficiency and accuracy of the proposed method: as the refinement level J and the multiplicity parameter r increase, the L*<sup>2</sup>*-errors decrease. In Tables 3 and 4, the results are compared with other methods [10,31–33] in terms of the absolute errors. Here, J and r determine the discretization and the degree of the polynomial basis, respectively. Taking r* = 7 *and J* = 2*, the results in Tables 3 and 4 indicate that the proposed method solves this system more accurately than the methods of [10,31–33]. Furthermore, the exact and numerical solutions are plotted in Figure 5.*

**Table 2.** Effect of the refinement level *J* and multiplicity parameter *r* on the *L*<sup>2</sup>-error for Example 3.


**Table 3.** The comparison between absolute errors of Example 3 for *u*<sub>1</sub>.



**Table 4.** The comparison between absolute errors of Example 3 for *u*<sub>2</sub>.

**Figure 5.** Plot of the exact and numerical solutions taking *r* = 6 and *J* = 2 for Example 3.
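For readers who want to reproduce the exact solutions of Example 3 without implementing the Galerkin scheme, the sketch below solves the nonlinear system by a plain trapezoidal marching method, resolving the implicit endpoint term with a few fixed-point sweeps at each node. This is a generic Volterra solver, not the paper's multi-wavelet method; the grid size and sweep count are arbitrary choices.

```python
import numpy as np

n = 201
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f1 = np.sin(x) - x                       # forcing terms of Example 3
f2 = np.cos(x) - 0.5 * np.sin(x)**2

def g(v1, v2):
    """Integrands of the two equations."""
    return v1**2 + v2**2, v1 * v2

u1 = np.zeros(n); u2 = np.zeros(n)
u1[0], u2[0] = f1[0], f2[0]              # at x = 0 the integrals vanish
G1 = np.zeros(n); G2 = np.zeros(n)
G1[0], G2[0] = g(u1[0], u2[0])
for i in range(1, n):
    # trapezoidal quadrature over [0, x_i]; the endpoint term depends on
    # the unknown u_i, so iterate it to a fixed point (contraction ~ h/2)
    base1 = f1[i] + h * (G1[0]/2 + G1[1:i].sum())
    base2 = f2[i] + h * (G2[0]/2 + G2[1:i].sum())
    a, b = u1[i-1], u2[i-1]              # predictor from the previous node
    for _ in range(5):
        ga, gb = g(a, b)
        a = base1 + h * ga / 2
        b = base2 + h * gb / 2
    u1[i], u2[i] = a, b
    G1[i], G2[i] = g(a, b)

print(np.max(np.abs(u1 - np.sin(x))), np.max(np.abs(u2 - np.cos(x))))
```

The computed maximum errors are at the *O*(*h*<sup>2</sup>) level of the trapezoidal rule, consistent with the exact pair (sin(*x*), cos(*x*)).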

#### **5. Conclusions**

In this paper, we proposed a multi-wavelet Galerkin method to solve linear and nonlinear systems of Volterra integral equations. The convergence analysis and the numerical simulations indicate that the proposed method gives a satisfactory approximation to the exact solution. Thresholding can be used to increase sparsity, and hence lower the computational cost, with a negligible effect on the *L*<sup>2</sup>-error. Owing to the interpolation property of Alpert's multi-wavelets, the proposed method turns out to be fast and competitive with state-of-the-art techniques. Its main advantages are its lower computational cost and lower complexity.

**Author Contributions:** Conceptualization, H.V.L. and H.B.J.; methodology, software, H.B.J. and S.T.; validation, formal analysis, H.B.J.; writing–original draft preparation, investigation, funding acquisition, H.V.L., H.B.J. and S.T.; writing–review and editing, H.V.L., H.B.J. and S.T. All authors have read and agreed to the published version of the manuscript.

**Funding:** This project was supported by Researchers Supporting Project number (RSP-2020/210), King Saud University, Riyadh, Saudi Arabia.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

