
Multiscale Compression Algorithm for Solving Nonlinear Ill-Posed Integral Equations via Landweber Iteration

Rong Zhang, Fanchun Li and Xingjun Luo

1 School of Data and Computer Science, Sun Yat-sen University, Guangzhou 510006, China
2 Guangdong Province Key Laboratory of Computational Science, Guangzhou 510275, China
3 School of Social Management, Jiangxi College of Applied Technology, Ganzhou 341000, China
4 School of Mathematics and Computer Science, Gannan Normal University, Ganzhou 341000, China
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(2), 221; https://doi.org/10.3390/math8020221
Submission received: 2 January 2020 / Revised: 6 February 2020 / Accepted: 6 February 2020 / Published: 9 February 2020
(This article belongs to the Special Issue Inverse and Ill-Posed Problems)

Abstract

In this paper, Landweber iteration with a relaxation factor is proposed to solve nonlinear ill-posed integral equations. A compression multiscale Galerkin method that retains the regularizing properties of the Landweber iteration is used to discretize the iteration, and it achieves optimal convergence rates under certain conditions. As a consequence, we propose a multiscale compression algorithm for solving nonlinear ill-posed integral equations. Finally, the theoretical analysis is verified by numerical results.

1. Introduction

Ill-posed problems [1,2] include linear [3] and nonlinear [4,5,6,7] ill-posed problems. With the development of applied science, studies on nonlinear ill-posed problems are attracting increased attention, and a variety of methods have emerged. Some of the most widely known methods are Tikhonov regularization [8,9,10,11,12,13] and Landweber iteration [14,15,16,17]. The purpose of this study was to explore a compression multiscale Galerkin method for solving nonlinear ill-posed integral equations via Landweber iterations.
Landweber iteration is a regularization method when the iteration is terminated by the generalized discrepancy principle. Hanke et al. [14] were the first to use Landweber iteration to solve nonlinear ill-posed problems, and they proved its convergence and convergence rate; however, they did not consider the case of finite dimensions. Based on the gradient method, Neubauer [18] presented a new iterative method that greatly reduces the number of iterations, and proved its convergence and convergence rate, again without considering the finite-dimensional case. In numerical simulations and practical applications, however, we must consider finite-dimensional regularization methods for solving nonlinear ill-posed problems; Jin and Scherzer provided results in this direction [16,19]. In the previous studies [14,16,18,19], the key parameter in the generalized discrepancy principle had to be greater than two, which is not satisfactory. Hanke [15] derived a smaller parameter for the generalized discrepancy principle that may be close to one under certain conditions. To obtain a better approximate solution, we want the parameter in the generalized discrepancy principle to be as close to one as possible.
Throughout the development of Landweber iteration, the estimate of the parameter in the generalized discrepancy principle has depended on an unknown constant, and the selection range of the parameter in previous studies [14,16,18,19] was not optimal; consequently, the approximate solution obtained with this parameter is not optimal either. To obtain a better approximate solution, this paper introduces the radius $\rho$ of the neighborhood $B_\rho(x_0)$ and the relaxation factor $\omega$ under the strong Scherzer condition to improve the selection range of the parameter. Simultaneously, in combination with the generalized discrepancy principle, we prove convergence rates of the Landweber iteration in finite-dimensional spaces.
This paper is organized as follows. In Section 2, we outline some lemmas and propositions for Landweber iteration under the strong Scherzer condition. In Section 3, we describe the matrix form of the Landweber iteration discretized by the multiscale Galerkin method, and prove the convergence of the Landweber iteration in finite dimensions. In Section 4, we develop a compression multiscale Galerkin method for Landweber iteration to solve nonlinear ill-posed integral equations, which leads to an optimal approximate solution under certain conditions, and we propose a multiscale compression algorithm. Finally, in Section 5, we provide a numerical example to verify the theoretical results.

2. Landweber Iteration

In this section, we will describe some lemmas and propositions of Landweber iteration for solving nonlinear integral equations in detail. These results are based on [14,15].
Suppose that $\Omega\subset\mathbb{R}^d$ is a bounded domain with $d\ge 1$, and let $X$ and $Y$ denote Hilbert spaces. For the sake of simplicity, inner products and norms of Hilbert spaces are denoted by $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$, respectively, unless otherwise specified. The nonlinear compact operator $F: X\to Y$ is defined by
$$F(x) := \int_\Omega \Phi(s,t,x(t))\,dt, \quad s,t\in\Omega,$$
where $\Phi(s,t,x)$ is a nonlinear mapping. We focus on the following nonlinear integral equation,
$$F(x) = y,$$
where $y\in\mathcal{R}(F)$ is given and $x\in\mathcal{D}(F)$ is the unknown to be determined. Note that $\mathcal{R}(F)$ represents the range of the operator $F$ and $\mathcal{D}(F)$ represents the domain of the operator $F$. Without loss of generality, we assume that the nonlinear operator $F$ satisfies the following two local properties.
($H_1$) The Fréchet derivative of the nonlinear operator $F$ is given by
$$F'(x)(h) := \int_\Omega k(s,t,x(t))\,h(t)\,dt, \quad s,t\in\Omega,$$
with $k(s,t,x(t))\in C(\Omega\times\Omega)$, and it satisfies
$$\|F'(x)\| \le 1, \quad x\in B_\rho(x_0)\subset\mathcal{D}(F),$$
where $B_\rho(x_0)$ denotes the neighborhood of $x_0$ with radius $\rho$.
($H_2$) The Fréchet derivative of $F$ satisfies the strong Scherzer condition; in other words, a bounded linear operator $R(x,\tilde{x}): Y\to Y$ exists such that
$$F'(x) = R(x,\tilde{x})\,F'(\tilde{x})$$
holds for any elements $x,\tilde{x}\in B_\rho(x_0)\subset\mathcal{D}(F)$, where the linear operator $R(x,\tilde{x})$ satisfies
$$\|R(x,\tilde{x}) - I\| \le c_0\|x - \tilde{x}\|$$
for some constant $c_0 > 0$.
The exact data $y$ in Equation (1) may not be known; instead, we have noisy data $y^\delta\in Y$ satisfying
$$\|y - y^\delta\| \le \delta,$$
where $\delta\ge 0$ is a given small number. As $F$ is a compact operator defined on an infinite-dimensional space, (1) is an ill-posed problem; in other words, the solution does not depend continuously on the right-hand side. The Landweber iteration [3,4] is one of the prominent methods, in which a sequence of iterative solutions $\{x_l^\delta\}$ is defined by
$$x_{l+1}^\delta = x_l^\delta + \omega F'(x_l^\delta)^*\left(y^\delta - F(x_l^\delta)\right),$$
starting from $x_0^\delta = x_0\in\mathcal{D}(F)$, where $0 < \omega \le 1$ is a relaxation factor. In the noise-free case, i.e., when the right-hand side is $y$, we write $x_l$ instead of $x_l^\delta$. Note that $x_0$ is not the solution of Problem (1); it is only a given initial function used to solve (1).
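To make the update in (6) concrete, here is a minimal sketch of the iteration for a generic discretized operator; the callables `F` and `F_jac`, and all parameter names, are illustrative assumptions rather than part of the paper.

```python
import numpy as np

def landweber(F, F_jac, y_delta, x0, omega=0.5, n_iter=100):
    """Minimal sketch of Landweber iteration with relaxation factor omega:
    x_{l+1} = x_l + omega * F'(x_l)^* (y_delta - F(x_l)).

    F      : callable, discretized nonlinear operator R^m -> R^k
    F_jac  : callable, Jacobian of F at x as a (k x m) array
    """
    x = x0.copy()
    for _ in range(n_iter):
        residual = y_delta - F(x)
        # In the discretized setting, the adjoint F'(x)^* becomes the
        # transpose of the Jacobian (w.r.t. the Euclidean inner products).
        x = x + omega * F_jac(x).T @ residual
    return x
```

In practice, the loop is terminated by the generalized discrepancy principle discussed below rather than by a fixed iteration count.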
Similar to Proposition 2.1 of [14], we provide the following properties.
Lemma 1.
Let $x^*\in B_\rho(x_0)$ be a solution to Problem (1). Assume that Condition ($H_2$) holds. If
$$0 < c_0\rho < 1/2$$
holds, then any solution $\tilde{x}^*\in B_\rho(x_0)$ of Problem (1) satisfies
$$x^* - \tilde{x}^* \in N\left(F'(x^*)\right),$$
and vice versa. Here, $N(\cdot)$ represents the null space of an operator.
Proof. 
From Condition ($H_2$), we have
$$\left\|F(x^*) - F(\tilde{x}^*) - F'(\tilde{x}^*)(x^* - \tilde{x}^*)\right\| = \left\|\int_0^1\left(F'(z_t) - F'(\tilde{x}^*)\right)(x^* - \tilde{x}^*)\,dt\right\| \le \int_0^1\left\|\left(R(z_t,\tilde{x}^*) - I\right)F'(\tilde{x}^*)(x^* - \tilde{x}^*)\right\|dt$$
for any $x^*,\tilde{x}^*\in B_\rho(x_0)$, where $z_t = t x^* + (1-t)\tilde{x}^*$ and $0\le t\le 1$. From Equations (3) and (4), we obtain
$$\left\|F(x^*) - F(\tilde{x}^*) - F'(\tilde{x}^*)(x^* - \tilde{x}^*)\right\| \le c_0\rho\left\|F'(\tilde{x}^*)(x^* - \tilde{x}^*)\right\|.$$
Thus,
$$(1 - c_0\rho)\left\|F'(\tilde{x}^*)(x^* - \tilde{x}^*)\right\| \le \left\|F(x^*) - F(\tilde{x}^*)\right\| \le (1 + c_0\rho)\left\|F'(\tilde{x}^*)(x^* - \tilde{x}^*)\right\|.$$
From Equation (7), the conclusion is established. □
Let Problem (1) be solvable on $B_\rho(x_0)$ and let
$$S_\rho(x_0) := \left\{x\in B_\rho(x_0) : F(x) = y\right\}$$
be the solution set of Problem (1) on $B_\rho(x_0)$. From [18], a unique local minimum-norm solution $x^\dagger$ exists for $x_0$, i.e.,
$$\left\|x_0 - x^\dagger\right\| = \inf\left\{\|x_0 - x\| : x\in S_\rho(x_0)\right\}.$$
Proposition 1.
Let $x^*\in B_{\rho/2}(x_0)$ be a solution to Problem (1). If Condition ($H_2$) holds, then a unique local minimum-norm solution $x^\dagger$ exists for $x_0$.
Proof. 
See Proposition 2.1 of [18]. □
Proposition 2.
Let $x^*\in B_{\rho/2}(x_0)$ be a solution to Problem (1). Assume that Conditions ($H_1$), ($H_2$), and (7) hold. If
$$\omega < 2\,\frac{1 - 2c_0\rho}{1 - c_0\rho}$$
holds, then we have
$$\|x_{l+1} - x^*\| < \|x_l - x^*\|, \quad l = 0,1,2,\ldots,$$
and
$$\sum_{l=0}^{\infty}\|y - F(x_l)\|^2 < \infty.$$
Proof. 
From Conditions ($H_1$) and ($H_2$) and Equation (6), using induction, we can conclude that
$$\begin{aligned} \|x_{l+1}-x^*\|^2 - \|x_l-x^*\|^2 &= 2\langle x_{l+1}-x_l,\ x_l-x^*\rangle + \|x_{l+1}-x_l\|^2 \\ &\le -2\omega\left\langle y-F(x_l),\ F'(x_l)(x^*-x_l)\right\rangle + \omega^2\|y-F(x_l)\|^2 \\ &\le 2\omega\|y-F(x_l)\|\left(\left\|y-F(x_l)-F'(x_l)(x^*-x_l)\right\| - \|y-F(x_l)\|\right) + \omega^2\|y-F(x_l)\|^2 \end{aligned}$$
with $l\ge 0$. Combining Equations (8) and (9) and Condition (12), we have $x_l\in B_{\rho/2}(x^*)\subset B_\rho(x_0)$ and
$$\|x_{l+1}-x^*\|^2 - \|x_l-x^*\|^2 \le 2\omega\left(\frac{\omega}{2} + \frac{c_0\rho}{1-c_0\rho} - 1\right)\|y-F(x_l)\|^2 < 0.$$
It follows from Equation (7) that Equation (13) holds. Combining the above inequality and Equation (13), we have
$$\omega\sum_{l=0}^{\infty}\|y-F(x_l)\|^2 \le \frac{1-c_0\rho}{2-\omega-(4-\omega)c_0\rho}\,\|x_0-x^*\|^2.$$
Then, Equation (14) holds. □
To illustrate the convergence of the Landweber iteration in the noise-free case, the following lemma is given. The proof of this lemma follows Theorem 2.3 in [14].
Lemma 2.
Assume that Conditions ($H_1$) and ($H_2$), and Equations (7) and (12) hold. If Problem (1) is solvable on $B_{\rho/2}(x_0)$, then the approximate solution $x_l$ converges to $x^\dagger$.
Proof. 
Let $\tilde{x}^*\in B_{\rho/2}(x_0)$ be a solution to Problem (1), and
$$e_l := x_l - \tilde{x}^*, \quad l = 0,1,2,\ldots.$$
From Proposition 2, $\|e_l\|$ is monotonically decreasing and converges to some constant $\varepsilon \ge 0$. Now, we prove that $\{e_l\}$ is a Cauchy sequence. Without loss of generality, let $N$ be a positive integer and let $k > l \ge N$ be arbitrary integers. By the Minkowski inequality, we have
$$\|e_k - e_l\| \le \|e_k - e_i\| + \|e_i - e_l\|,$$
where the integer $i\in[l,k]$ is chosen such that
$$\|y - F(x_i)\| \le \|y - F(x_j)\|$$
for all integers $j\in[l,k]$. Therefore, to prove that $\{e_l\}$ is a Cauchy sequence, we only need to prove that $\|e_k - e_i\|\to 0$ and $\|e_i - e_l\|\to 0$ as $l\to\infty$. We prove the first claim, $\|e_k - e_i\|\to 0$. From the definition of inner products and norms, we know that
$$\|e_k - e_i\|^2 = 2\langle e_i - e_k,\ e_i\rangle + \|e_k\|^2 - \|e_i\|^2.$$
By Condition ($H_2$) and Equation (6), we can conclude that
$$\left|\langle e_k - e_i,\ e_i\rangle\right| = \omega\left|\sum_{j=i}^{k-1}\left\langle y - F(x_j),\ F'(x_j)\,e_i\right\rangle\right| \le \frac{3\omega}{1-c_0\rho}\sum_{j=i}^{k-1}\|y - F(x_j)\|^2.$$
Combining the monotonicity of the sequence $\|e_l\|$ and Equation (14), we have
$$\lim_{l\to\infty}\left(\|e_k\|^2 - \|e_i\|^2\right) = \varepsilon^2 - \varepsilon^2 = 0, \qquad \lim_{l\to\infty}\left|\langle e_k - e_i,\ e_i\rangle\right| = 0.$$
This means that $\lim_{l\to\infty}\|e_k - e_i\| = 0$. Similarly, we can prove that $\lim_{l\to\infty}\|e_i - e_l\| = 0$. So, the sequences $\{e_k\}$ and $\{x_k\}$ are Cauchy sequences. From $\lim_{l\to\infty}\|y - F(x_l)\| = 0$, the limit $x^{**}$ of the sequence $\{x_k\}$ is also a solution to Problem (1). Using Equation (3), we can obtain $N(F'(x^\dagger))\subset N(F'(x_l))$. Therefore, it follows from Equation (6) that
$$x_{l+1} - x_l \in \mathcal{R}\left(F'(x_l)^*\right) \subset N\left(F'(x_l)\right)^\perp \subset N\left(F'(x^\dagger)\right)^\perp.$$
Thus, $x_l - x_0\in N(F'(x^\dagger))^\perp$. Because $x_0 - x^\dagger\in N(F'(x^\dagger))^\perp$, we have
$$x^\dagger - x^{**} = x^\dagger - x_0 + x_0 - x^{**} \in N\left(F'(x^\dagger)\right)^\perp.$$
From Proposition 1 and the above, we can conclude that $x^{**} = x^\dagger$. □

3. A Multiscale Galerkin Method of Landweber Iteration

The multiscale Galerkin method is a classical and effective projection method (cf. [20]) that is often used for integral equations (cf. [21,22,23,24]). We next discuss using the multiscale Galerkin method to discretize the iteration scheme (6). The purpose of this section is to analyze the convergence of the multiscale Galerkin method for Landweber iteration. Here, we only provide a brief description of the multiscale Galerkin method; for a more in-depth account of its specific structure and numerical implementation, please refer to [22,25,26].
Let $\mathbb{N}$ denote the set of natural numbers, and define $\mathbb{N}_0 := \{0\}\cup\mathbb{N}$. Suppose there is a nested and ultimately dense sequence of subspaces $\{X_n : n\in\mathbb{N}_0\}$ of $X$, i.e.,
$$X_{n-1} \subset X_n \quad\text{and}\quad \overline{\bigcup_{n\in\mathbb{N}_0} X_n} = X.$$
We further assume that for each $n\in\mathbb{N}$ there exists a subspace $W_n$ satisfying
$$X_n = X_{n-1} \oplus W_n$$
with $W_0 := X_0$, where the sum is orthogonal. Therefore, we obtain the multiscale orthogonal decomposition
$$X_n = W_0 \oplus W_1 \oplus \cdots \oplus W_n$$
for $n\in\mathbb{N}$. For the specific structure of this space, refer to Chapter 4 in [20]. We need to pay attention to the spaces $W_0$ and $W_1$, which are two piecewise polynomial spaces of order $r$. For $n > 1$, the subspace $W_n$ is generated from $W_1$. We define $w(n) := \dim W_n \sim \mu^n$ and $s(n) := \dim X_n \sim \mu^{n+1}$ for some integer $\mu > 1$. Let the index set be $\mathbb{U}_n := \{(i,j) : i\in\mathbb{Z}_{n+1},\, j\in\mathbb{Z}_{w(i)}\}$ with $\mathbb{Z}_{n+1} := \{0,1,\ldots,n\}$. Assume that the basis of the space $W_i$ is $\{w_{ij} : (i,j)\in\mathbb{U}_i\setminus\mathbb{U}_{i-1}\}\subset X$ for $i > 0$, i.e.,
$$W_i = \operatorname{span}\{w_{ij} : j\in\mathbb{Z}_{w(i)}\}, \quad i\in\mathbb{N}.$$
Thus, we have
$$X_n = \operatorname{span}\{w_{ij} : (i,j)\in\mathbb{U}_n\}.$$
Assume that $P_n$ is the linear orthogonal projection from $X$ onto $X_n$, and that a positive constant $c$ exists such that
$$\|I - P_n\|_{H^r\to X} \le c\,\mu^{-rn/d},$$
where $H^r(\Omega)$ denotes the linear subspace of $X$ equipped with the norm $\|\cdot\|_{H^r} = \|\cdot\|_X + \|D^r\cdot\|_X$. Here, $D^r$ is a linear operator acting from $H^r$ into $X$, and $c$ denotes a generic constant. For the convergence of the projection operator $P_n$, the following condition is needed.
($H_3$) Assume that $k(\cdot,\cdot,\cdot)\in C^r(\Omega\times\Omega)$ holds. Then, a positive constant $c$ exists such that
$$\left|D_s^\beta\, k(s,t,x(t))\right| \le c, \quad s,t\in\Omega,\ |\beta| = r.$$
Some related lemmas, analogous to the Tikhonov regularization case in [22,24], are outlined in the following; their detailed proofs are omitted.
Lemma 3.
If Condition ($H_3$) holds, then there exists a constant $c$ independent of $n$ such that
$$\left\|(I - P_n)\,F'(x)^*\right\| \le c\,\mu^{-rn/d}.$$
Proof. 
See (3.10) of Lemma 3.1 in [24]. □
Lemma 4.
If Condition ($H_3$) holds, then there exists a constant $c$ independent of $n$ such that
$$\left|\left\langle F'(x)\,w_{i'j'},\ w_{ij}\right\rangle\right| \le c\,\mu^{-(r/d+1/2)(i+i')}.$$
Proof. 
See Lemma 2.1 of [22]. □
We apply the multiscale Galerkin method to solve the iterative Scheme (6) in the noise-free case, i.e., finding $x_{n(l),l+1}\in X_{n(l)}$ such that
$$x_{n(l),l+1} = x_{n(l-1),l} + \omega P_{n(l)} F'(x_{n(l-1),l})^*\left(y - F(x_{n(l-1),l})\right), \quad l = 0,1,2,\ldots$$
holds, where $n(l)$ denotes the discretization level $n$ at iteration $l$, $x_{n(-1),0} = x_0\in\mathcal{D}(F)$ is a given initial function, and $X_{n(-1)}$ is the selected initial space. From the definition of the space $X_{n(l)}$, the approximate solution $x_{n(l),l+1}$ can be written as
$$x_{n(l),l+1} = \sum_{(i,j)\in\mathbb{U}_{n(l)}} c_{ij}^{l+1}\, w_{ij}.$$
Therefore, the iteration Scheme (16) can be expressed as
$$\mathbf{E}_{n(l)}\,\mathbf{c}_{n(l)}^{l+1} = \mathbf{E}_{n(l)\times n(l-1)}\,\mathbf{c}_{n(l-1)}^{l} + \omega\,\mathbf{G}_{n(l)}^{l}, \quad l = 0,1,2,\ldots,$$
where
$$\mathbf{E}_{n(l)} := \left[\langle w_{i'j'}, w_{ij}\rangle : (i,j),(i',j')\in\mathbb{U}_{n(l)}\right], \qquad \mathbf{E}_{n(l)\times n(l-1)} := \left[\langle w_{i'j'}, w_{ij}\rangle : (i,j)\in\mathbb{U}_{n(l)},\ (i',j')\in\mathbb{U}_{n(l-1)}\right],$$
$$\mathbf{c}_{n(l-1)}^{l} := \left[c_{i'j'}^{l} : (i',j')\in\mathbb{U}_{n(l-1)}\right], \qquad \mathbf{G}_{n(l)}^{l} := \left[\left\langle y - F(x_{n(l-1),l}),\ F'(x_{n(l-1),l})\,w_{ij}\right\rangle : (i,j)\in\mathbb{U}_{n(l)}\right].$$
The form of (18), with Gram matrices between bases of different levels, is specific to the multiscale Galerkin projection; this is one of our motivations for using it.
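As a rough illustration of one step of (18), the following sketch solves the coefficient system for $\mathbf{c}^{l+1}$; assembling the Gram matrices and the vector $\mathbf{G}$ from the bases $w_{ij}$ is problem-specific and is assumed to be done elsewhere.

```python
import numpy as np

def galerkin_step(E, E_cross, c_prev, G, omega):
    """One step of scheme (18): solve
    E_{n(l)} c^{l+1} = E_{n(l) x n(l-1)} c^l + omega * G^l.
    For an orthonormal multiscale basis, E is the identity matrix and
    the solve reduces to a matrix-vector product.
    """
    rhs = E_cross @ c_prev + omega * G
    return np.linalg.solve(E, rhs)
```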
To show that the multiscale Galerkin method retains the convergence of the Landweber iteration, we establish the following property:
Proposition 3.
Let $x^*\in B_{\rho/2}(x_0)$ be a solution to Problem (1). Assume that Conditions ($H_1$)–($H_3$) and Equation (7) hold. If $\omega \le \frac{1-2c_0\rho}{1-c_0\rho}$ and there exists a positive integer $n(l)$ such that, for every positive integer $l$,
$$c\,\mu^{-r\,n(l)} < \frac{1-2c_0\rho}{4(1-c_0\rho)}\left\|y - F(x_{n(l-1),l})\right\|$$
with $c$ denoting a generic constant and $x_{n(l-1),l}$ as in (16) for $\delta = 0$, then
$$\left\|x_{n(l),l+1} - x^*\right\| < \left\|x_{n(l-1),l} - x^*\right\|.$$
Proof. 
From Conditions ($H_1$) and ($H_2$) and the iteration scheme (16), we can conclude that
$$\begin{aligned} &\left\|x_{n(l),l+1}-x^*\right\|^2 - \left\|x_{n(l-1),l}-x^*\right\|^2 \\ &\quad= 2\left\langle x_{n(l),l+1}-x_{n(l-1),l},\ x_{n(l-1),l}-x^*\right\rangle + \left\|x_{n(l),l+1}-x_{n(l-1),l}\right\|^2 \\ &\quad= 2\omega\left\langle y-F(x_{n(l-1),l}),\ F'(x_{n(l-1),l})\left(x_{n(l-1),l}-P_{n(l)}x^*\right)\right\rangle + \omega^2\left\|P_{n(l)}F'(x_{n(l-1),l})^*\left(y-F(x_{n(l-1),l})\right)\right\|^2 \\ &\quad\le 2\omega\left\|y-F(x_{n(l-1),l})\right\|\left(\left(\frac{c_0\rho}{1-c_0\rho}-\Big(1-\frac{\omega}{2}\Big)\right)\left\|y-F(x_{n(l-1),l})\right\| + \left\|F'(x_{n(l-1),l})\,(I-P_{n(l)})\,x^*\right\|\right) \end{aligned}$$
for $0\le l\le k$. Now, we use induction to prove Equation (20). Combining Lemma 3 and Equation (19), we can conclude that
$$\left\|x_{n(0),1}-x^*\right\|^2 \le \left\|x_{n(-1),0}-x^*\right\|^2 - \frac{\omega}{2}\cdot\frac{1-2c_0\rho}{1-c_0\rho}\left\|y-F(x_{n(-1),0})\right\|^2.$$
Suppose that Equation (20) is established for $0 < l < k$, so that
$$\left\|x_{n(k-1),k}-x^*\right\| < \left\|x_{n(k-2),k-1}-x^*\right\| < \cdots < \left\|x_0-x^*\right\| \le \rho/2$$
holds. Similar to Equation (21), we can derive
$$\left\|x_{n(k),k+1}-x^*\right\| < \left\|x_{n(k-1),k}-x^*\right\|.$$
Therefore, (20) is established. □
From the above analysis, we know that if
$$\lim_{l\to\infty}\frac{c\,\mu^{-r\,n(l)}}{\left\|y-F(x_{n(l-1),l})\right\|} < \frac{1-2c_0\rho}{4\|x^*\|(1-c_0\rho)}$$
with $n(l)$ depending on the iteration step $l$, then
$$\sum_{l=0}^{\infty}\left\|y-F(x_{n(l-1),l})\right\|^2 < \infty.$$
Note that the larger the iteration step $l$, the larger the discretization level $n$. Due to the multiscale and orthogonal structure of the space sequence (cf. Chapter 4 of [20]), the multiscale Galerkin scheme is better suited to this iterative process than a general Galerkin scheme.
Theorem 1.
Assume that Conditions ($H_1$)–($H_3$), (7), and (19) hold. If Problem (1) is solvable on $B_{\rho/2}(x_0)$, then the approximate solution $x_{n(l-1),l}$ converges to $x^\dagger$ as $l\to\infty$.
Proof. 
From Lemma 2 and Proposition 3, we can obtain the result. □

4. Rates of Convergence and Algorithm

In this section, the compression multiscale Galerkin method for Landweber iteration is used to solve Problem (1) with noisy data y δ . Convergence rates of this method are proved under certain conditions.
We apply the multiscale Galerkin method to solve the iterative Scheme (6) in the noisy case, i.e., finding $x_{n,l+1}^\delta\in X_n$ such that
$$x_{n,l+1}^\delta = x_{n,l}^\delta + \omega B_n^l\left(y^\delta - F(x_{n,l}^\delta)\right), \quad l = 0,1,2,\ldots$$
holds, where $x_{n,0}^\delta = x_0\in\mathcal{D}(F)$ is a given initial function and $B_n^l := P_n F'(x_{n,l}^\delta)^* P_n$. Note that here the level $n$ does not depend on $l$. From the definition of the space $X_n$, the approximate solution $x_{n,l+1}^\delta$ can be written as
$$x_{n,l+1}^\delta = \sum_{(i,j)\in\mathbb{U}_n} c_{ij}^{l+1}\, w_{ij}.$$
Therefore, this iteration scheme is equivalent to the iteration system
$$\mathbf{E}_n\,\mathbf{c}_n^{l+1} = \mathbf{E}_n\,\mathbf{c}_n^{l} + \omega\,\mathbf{B}_n^l\,\mathbf{y}_n^l, \quad l = 0,1,2,\ldots,$$
where
$$\mathbf{E}_n := \left[\langle w_{i'j'}, w_{ij}\rangle : (i,j),(i',j')\in\mathbb{U}_n\right], \qquad \mathbf{c}_n^{l+1} := \left[c_{ij}^{l+1} : (i,j)\in\mathbb{U}_n\right],$$
$$\mathbf{B}_n^l := \left[\left\langle F'(x_{n,l}^\delta)\,w_{i'j'},\ w_{ij}\right\rangle : (i,j),(i',j')\in\mathbb{U}_n\right], \qquad \mathbf{y}_n^l := \left[\left\langle y^\delta - F(x_{n,l}^\delta),\ w_{ij}\right\rangle : (i,j)\in\mathbb{U}_n\right].$$
As Lemma 4 shows, most entries of $\mathbf{B}_n^l$ are very small, and these small entries can be neglected without affecting the overall accuracy of the approximation. To reduce the computational cost, the compression strategy is defined by
$$\tilde{B}_n^l = \sum_{i\in\mathbb{Z}_{n+1}} P_i\, F'(x_{n,l}^\delta)^*\, Q_{n-i}$$
with $Q_{n-i} = P_{n-i} - P_{n-i-1}$ and $P_{-1} = 0$. Using the basis of the space $X_n$, the equivalent matrix form of the operator $\tilde{B}_n^l$ is
$$\tilde{\mathbf{B}}_n^l := \left[\tilde{B}_{ij,i'j'}^l : (i,j),(i',j')\in\mathbb{U}_n\right]$$
with (cf. [22])
$$\tilde{B}_{ij,i'j'}^l = \begin{cases} \left\langle F'(x_{n,l}^\delta)\,w_{i'j'},\ w_{ij}\right\rangle, & i+i' \le n, \\ 0, & \text{otherwise}. \end{cases}$$
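In implementation terms, the truncation amounts to zeroing the blocks of the Galerkin matrix whose level indices satisfy $i + i' > n$. The sketch below is a hypothetical illustration; `levels[p]` is assumed to record the level $i$ of the $p$-th basis function.

```python
import numpy as np

def compress(B, levels):
    """Compression strategy: keep the entry for the pair (w_{i'j'}, w_{ij})
    only when i + i' <= n, where n is the finest level present."""
    n = max(levels)
    B_tilde = B.copy()
    for p, i in enumerate(levels):
        for q, i_prime in enumerate(levels):
            if i + i_prime > n:
                # By Lemma 4 these entries are O(mu^{-(r/d+1/2)(i+i')}),
                # so dropping them does not spoil the overall accuracy.
                B_tilde[p, q] = 0.0
    return B_tilde
```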
We replace $B_n^l$ with $\tilde{B}_n^l$ in the above scheme. Then, a fast discrete scheme for the iterative scheme (6) with noisy data is established, i.e.,
$$\tilde{x}_{n,l+1}^\delta = \tilde{x}_{n,l}^\delta + \omega\tilde{B}_n^l\left(y^\delta - F(\tilde{x}_{n,l}^\delta)\right), \quad l = 0,1,2,\ldots,$$
where $\tilde{x}_{n,0}^\delta = x_0\in\mathcal{D}(F)$ is a given initial function and
$$\tilde{B}_n^l = \sum_{i\in\mathbb{Z}_{n+1}} P_i\, F'(\tilde{x}_{n,l}^\delta)^*\, Q_{n-i}.$$
To analyze the convergence rates of the compression multiscale Galerkin Scheme (23), we need the following estimates.
Lemma 5.
If Condition ($H_3$) holds, then there exists a positive constant $c_r$ such that for any $n\in\mathbb{N}$,
$$\left\|B^l - \tilde{B}_n^l\right\| \le c_r(n+1)\mu^{-rn/d},$$
where $B^l := F'(\tilde{x}_{n,l}^\delta)^*$.
Proof. 
By Lemma 2.3 of [22], we have $\|B_n^l - \tilde{B}_n^l\| \le c(n+1)\mu^{-rn/d}$. Combining this result with Lemma 3, the assertion is proved. □
To ensure the convergence rate of the approximate solution, we need some conditions: the first is the stopping criterion [14,15,18], a generalized discrepancy principle; the second is a smoothness condition on the initial function $x_0$ and the $x_0$-minimum-norm solution $x^\dagger$ [5,12]; and the last is the discrete error control criterion [16,19,22].
($H_4$) A positive integer $l_*\in\mathbb{N}$ exists such that
$$\left\|y^\delta - F(x_{n,l_*}^\delta)\right\| \le \tau\delta < \left\|y^\delta - F(x_{n,l_*-1}^\delta)\right\|,$$
where the discretization level $n$ depends on the noise level $\delta$, and $\tau$ satisfies
$$\tau > \frac{1}{1-2c_0\rho}.$$
($H_5$) Let $x^\dagger\in B_\rho(x_0)$ be the $x_0$-minimum-norm solution of Problem (1), and assume that it satisfies
$$x_0 - x^\dagger = \left(F'(x^\dagger)^* F'(x^\dagger)\right)^\nu f, \quad 0 < \nu \le 1/2.$$
($H_6$) There exists a positive integer $n\in\mathbb{N}$ that satisfies the following condition,
$$c_r(n+1)\mu^{-rn/d} \le \kappa_1\tau\delta \le c_r(n+1)\mu^{-r(n-1)/d},$$
with $0 < \kappa_1 < \|f\|\Big/\left(\max\{\omega c_{\nu,2},\,\omega c_{\nu,1}\}\,\frac{\tau}{\tau-1}(1+c_0\rho)\right)$.
Condition ($H_4$) is an a posteriori parameter selection criterion, which leads to an appropriate approximate solution. Condition ($H_5$) is a necessary condition for order-optimal convergence rates. Condition ($H_6$) ensures that the projection error does not affect the iterative process.
We next prove convergence rates for the compression multiscale Galerkin method of Landweber iteration under Conditions ($H_1$)–($H_6$).
Proposition 4.
For any $l\in\mathbb{N}$, $0 < \nu \le 1/2$, and $0 < p,q \le 1$, two positive numbers $c_{\nu,1}$ and $c_{\nu,2}$ exist satisfying
$$\sum_{j=0}^{l-1}(j+1)^{-p}(l-j)^{-\nu-q} \le c_{\nu,1}(l+1)^{1-p-q}$$
and
$$\sum_{j=0}^{l-1}(j+1)^{-1/2-\nu} \le c_{\nu,2}(l+1)^{1/2},$$
where $c_{\nu,1}$ and $c_{\nu,2}$ both depend on $\nu$.
Proof. 
Proof of the general case was provided in [4] and will not be repeated here. □
Theorem 2.
Assume that Conditions ($H_1$)–($H_6$) and Equations (5) and (7) hold. If $\|f\|$ is small enough and the relaxation factor $\omega$ satisfies
$$\omega \le \frac{\tau-1}{1+c_0\rho},$$
then there exists a positive number $c_*$ such that
$$\left\|\tilde{x}_{n,l}^\delta - x^\dagger\right\| \le c_*\|f\|\,(l+1)^{-\nu}$$
and
$$\left\|F'(x^\dagger)\left(\tilde{x}_{n,l}^\delta - x^\dagger\right)\right\| \le c_*\|f\|\,(l+1)^{-\nu-1/2}$$
hold with $c_*\|f\| < 1$, where $0 \le l < l_*$ and $\tilde{x}_{n,l}^\delta$ is obtained from (23). Moreover, we have
$$\left\|y^\delta - F(\tilde{x}_{n,l}^\delta)\right\| \le \gamma c_*\|f\|\,(l+1)^{-\nu-1/2}, \quad 0 < l < l_*,$$
with $\gamma = \frac{\tau}{\tau-1}(1+c_0\rho)$.
Proof. 
For convenience, let $\bar{x}_l := \tilde{x}_{n,l}^\delta$, $e_l := \bar{x}_l - x^\dagger$, and $K := F'(x^\dagger)$. It follows from Condition ($H_2$) and Equation (23) that
$$e_{l+1} = e_l + \omega\tilde{B}_n^l\left(y^\delta - F(\bar{x}_l)\right) = (I-\omega K^*K)\,e_l + \omega\left[K^* z_l + K^*(y^\delta - y) + (\tilde{B}_n^l - B^l)\left(y^\delta - F(\bar{x}_l)\right)\right],$$
where $z_l := K e_l + F(x^\dagger) - F(\bar{x}_l) + \left(R(\bar{x}_l,x^\dagger)^* - I\right)\left(y^\delta - F(\bar{x}_l)\right)$. For $l\in\mathbb{N}$, this yields the closed expression for the error
$$e_l = (I-\omega K^*K)^l e_0 + \omega\sum_{j=0}^{l-1}(I-\omega K^*K)^j K^*\left(z_{l-j-1} + y - y^\delta\right) + \omega\sum_{j=0}^{l-1}(I-\omega K^*K)^j p_{l-j-1},$$
and consequently,
$$K e_l = (I-\omega KK^*)^l K e_0 + \omega\sum_{j=0}^{l-1}(I-\omega KK^*)^j KK^*\left(z_{l-j-1} + y - y^\delta\right) + \omega\sum_{j=0}^{l-1}(I-\omega KK^*)^j K\, p_{l-j-1},$$
with $p_j = (\tilde{B}_n^j - B^j)\left(y^\delta - F(\bar{x}_j)\right)$. For $0 \le j < l_*$, we next turn to the estimates of $z_j$ and $p_j$. Using Equation (8) implies that
$$\left\|K e_j + F(x^\dagger) - F(\bar{x}_j)\right\| \le \tfrac{1}{2} c_0\|e_j\|\,\|K e_j\|.$$
Combining Equations (5), (9), and (25), we have
$$\left\|y^\delta - F(\bar{x}_j)\right\| \le \frac{\tau}{\tau-1}\left(\left\|y^\delta - F(\bar{x}_j)\right\| - \delta\right) \le \gamma\|K e_j\|$$
with $\gamma = \frac{\tau}{\tau-1}(1+c_0\rho)$. It follows from Equations (4) and (37) that
$$\left\|\left(R(\bar{x}_j,x^\dagger)^* - I\right)\left(y^\delta - F(\bar{x}_j)\right)\right\| \le \gamma c_0\|e_j\|\,\|K e_j\|.$$
Thus,
$$\|z_j\| \le \left(\gamma + \tfrac{1}{2}\right) c_0\|e_j\|\,\|K e_j\|.$$
Similarly, it follows from Lemma 5 and Equation (37) that
$$\|p_j\| \le \gamma c_r(n+1)\mu^{-rn/d}\,\|K e_j\|.$$
Now, we use induction to show that for all $0 \le l < l_*$,
$$\|e_l\| \le c_*\|f\|\,(l+1)^{-\nu} \quad\text{and}\quad \|K e_l\| \le c_*\|f\|\,(l+1)^{-\nu-1/2}$$
hold with $c_*\|f\| < 1$. Here,
$$c_* = \left(\omega^{-\nu-1/2} + 2\right)\left(1 + \frac{\omega(\tau-1)}{\tau-1-\omega(1+c_0\rho)}\right).$$
For $l = 0$, Equation (40) is always true. We assume that Equation (40) holds for all $0 \le l < k$ with $k < l_*$, and verify Equation (40) for $l = k$. Combining $\|K\| \le 1$ and $0 < \omega < 1$, we have [14,16]
$$\sqrt{\omega}\left\|(I-\omega K^*K)^k K^*\right\| \le (k+1)^{-1/2}, \qquad \omega\left\|\sum_{j=0}^{k-1}(I-\omega K^*K)^j K^*\right\| \le \sqrt{k},$$
$$\left\|(I-\omega K^*K)^k\,\omega K^*K\right\| \le (k+1)^{-1}, \qquad \left\|(I-\omega K^*K)^k (\omega K^*K)^\nu\right\| \le (k+1)^{-\nu}.$$
From the above and Equations (35) and (36), we can conclude that
$$\|e_k\| \le \omega^{-\nu}\|f\|(k+1)^{-\nu} + \omega^{1/2}\sum_{j=0}^{k-1}(j+1)^{-1/2}\|z_{k-j-1}\| + \sqrt{\omega k}\,\delta + \omega\sum_{j=0}^{k-1}\|p_j\|$$
and
$$\|K e_k\| \le \omega^{-\nu-1/2}\|f\|(k+1)^{-\nu-1/2} + \sum_{j=0}^{k-1}(j+1)^{-1}\|z_{k-j-1}\| + \omega\delta + \omega\left\|\sum_{j=0}^{k-1}(I-\omega KK^*)^j K\,p_{k-1-j}\right\|.$$
For $0 < \nu \le 1/2$, applying Equations (29), (30), and (38)–(40), we can conclude that
$$\sum_{j=0}^{k-1}(j+1)^{-1/2}\|z_{k-j-1}\| \le c_0\left(\gamma+\tfrac{1}{2}\right)\sum_{j=0}^{k-1}(j+1)^{-1/2}\|e_{k-j-1}\|\,\|K e_{k-j-1}\| \le c_0\left(\gamma+\tfrac{1}{2}\right)(c_*\|f\|)^2\,c_{\nu,1}(k+1)^{-\nu}.$$
Using Equation (37), we can conclude that for $0 < k < l_*$,
$$\tau\delta < \left\|y^\delta - F(\bar{x}_k)\right\| \le \frac{\tau}{\tau-1}(1+c_0\rho)\,\|K e_k\|.$$
By combining this with Equation (9), it follows that
$$\delta < \frac{1+c_0\rho}{\tau-1}\,c_*\|f\|\,(k+1)^{-\nu-1/2} < \frac{1+c_0\rho}{\tau-1}\,(k+1)^{-\nu-1/2}.$$
Thus, using Proposition 4, we can obtain
$$\sum_{j=0}^{k-1}\|p_j\| \le \gamma c_r(n+1)\mu^{-rn/d}\,c_*\|f\|\sum_{j=0}^{k-1}(k-j)^{-\nu-1/2} \le \gamma c_r(n+1)\mu^{-rn/d}\,c_{\nu,2}\sqrt{k+1}$$
and
$$\omega\left\|\sum_{j=0}^{k-1}(I-\omega KK^*)^j K\,p_{k-1-j}\right\| \le \gamma c_r(n+1)\mu^{-rn/d}\sum_{j=0}^{k-1}\omega\left\|(I-\omega KK^*)^j K\right\|\,\|K e_{k-1-j}\| \le \omega\gamma c_r(n+1)\mu^{-rn/d}\,c_*\|f\|\sum_{j=0}^{k-1}(j+1)^{-1/2}(k-j)^{-1/2-\nu} \le \omega\gamma c_r(n+1)\mu^{-rn/d}\,c_{\nu,1}.$$
Combining Condition ($H_6$) and Equation (41), it follows that
$$\omega\sum_{j=0}^{k-1}\|p_j\| \le \omega\gamma c_{\nu,2}\sqrt{k+1}\;c_r(n+1)\mu^{-rn/d} \le \kappa_1\kappa_2\,(k+1)^{-\nu} \le \|f\|(k+1)^{-\nu}$$
and
$$\omega\left\|\sum_{j=0}^{k-1}(I-\omega KK^*)^j K\,p_{k-1-j}\right\| \le \omega\gamma c_{\nu,1}\,c_r(n+1)\mu^{-rn/d} \le \kappa_1\kappa_2\,(k+1)^{-\nu-1/2} \le \|f\|(k+1)^{-\nu-1/2}$$
with $\kappa_2 := \max\{\omega c_{\nu,2},\,\omega c_{\nu,1}\}\,\frac{\tau}{\tau-1}(1+c_0\rho)$. Therefore, it follows that
$$\|e_k\| \le \omega^{-\nu}\|f\|(k+1)^{-\nu} + \omega^{1/2}\eta_1(c_*\|f\|)^2(k+1)^{-\nu} + (\omega k)^{1/2}\delta + \|f\|(k+1)^{-\nu}$$
with $\eta_1 = c_0(\gamma+\tfrac{1}{2})c_{\nu,1}$. Similarly, we have
$$\|K e_k\| \le \omega^{-\nu-1/2}\|f\|(k+1)^{-\nu-1/2} + \eta_1(c_*\|f\|)^2(k+1)^{-\nu-1/2} + \omega\delta + \|f\|(k+1)^{-\nu-1/2}.$$
This means that a constant $C_\nu > 0$ exists that depends on $\nu$, $\omega$, and $\rho$, but is independent of $\|f\|$, $k$, and $\delta$, such that
$$\|e_k\| \le \left(1 + \omega^{-\nu-1/2} + C_\nu\|f\|\right)\|f\|(k+1)^{-\nu} + (\omega k)^{1/2}\delta$$
and
$$\|K e_k\| \le \left(1 + \omega^{-\nu-1/2} + C_\nu\|f\|\right)\|f\|(k+1)^{-\nu-1/2} + \omega\delta.$$
By combining
$$\delta < \frac{1+c_0\rho}{\tau-1}\,\|K e_k\|$$
with (31) and (42), we can conclude that
$$\delta \le \eta_2\left(\omega^{-\nu-1/2} + C_\nu\|f\|\right)\|f\|(k+1)^{-\nu-1/2}$$
with $\eta_2 = \frac{\tau-1}{\tau-1-\omega(1+c_0\rho)} > 0$. Thus,
$$\|e_k\| \le c_1\|f\|(k+1)^{-\nu}, \qquad \|K e_k\| \le c_1\|f\|(k+1)^{-\nu-1/2},$$
where
$$c_1 = \left(1 + \omega^{-\nu-1/2} + C_\nu\|f\|\right)\left(1 + \omega\eta_2\right).$$
If $\|f\|$ is sufficiently small, namely $C_\nu\|f\| \le 1$, then $c_1 \le c_*$. Therefore, Equations (32) and (33) are true. From Equations (33) and (37), Assertion (34) holds. □
Theorem 3.
If the conditions of Theorem 2 hold, then
$$\left\|\tilde{x}_{n,l_*}^\delta - x^\dagger\right\| \le c_2\|f\|^{1/(2\nu+1)}\delta^{2\nu/(2\nu+1)},$$
where $c_2 > 0$ is a constant depending on $\nu$.
Proof. 
We use the same notation as in the proof of Theorem 2. It follows from Equation (35) that
$$e_{l_*} = (K^*K)^\nu f_{l_*} + \omega\sum_{j=0}^{l_*-1}(I-\omega K^*K)^j K^*(y-y^\delta) + \omega\sum_{j=0}^{l_*-1}(I-\omega K^*K)^j\left(\tilde{B}_n^{l_*-j-1} - B^{l_*-j-1}\right)\left(y^\delta - F(\bar{x}_{l_*-j-1})\right),$$
where
$$f_{l_*} = (I-\omega K^*K)^{l_*} f + \omega\sum_{j=0}^{l_*-1}(I-\omega K^*K)^j (K^*K)^{1/2-\nu}\,z_{l_*-j-1}.$$
From Theorem 2, we have
$$\|f_{l_*}\| \le \|f\| + \omega\sum_{j=0}^{l_*-1}(j+1)^{\nu-1/2}\|z_{l_*-j-1}\|.$$
Combining the above and Proposition 4, we have
$$\|f_{l_*}\| \le \zeta_1\|f\|$$
with $\zeta_1 > 0$ depending on $\nu$, but independent of $l_*$. It follows from ($H_1$)–($H_6$) and Equations (9) and (45) that
$$\begin{aligned} \left\|K(K^*K)^\nu f_{l_*}\right\| &\le \|K e_{l_*}\| + \omega\delta + \gamma\omega c_*\|f\|\sum_{j=0}^{l_*-1}(j+1)^{-1/2}(l_*-j)^{-\nu-1/2}\,c_r(n+1)\mu^{-rn/d} \\ &\le \left(\left\|y^\delta - F(\bar{x}_{l_*})\right\| + \delta\right)(1+c_0\rho) + \omega\delta + \gamma\omega c_*\|f\|\,c_{\nu,1}\,c_r(n+1)\mu^{-rn/d} \\ &\le (1+c_0\rho)(1+\tau)\delta + \omega\delta + \gamma\omega c_*\|f\|\,c_{\nu,1}\,\kappa_1\tau\delta. \end{aligned}$$
Thus, by the interpolation inequality [3], we can conclude that
$$\left\|(K^*K)^\nu f_{l_*}\right\| \le C_1\|f\|^{1/(2\nu+1)}\delta^{2\nu/(2\nu+1)}$$
with $C_1 > 0$ depending on $\nu$. From Equation (45) and Proposition 4, we have
$$\|e_{l_*}\| \le \left\|(K^*K)^\nu f_{l_*}\right\| + \sqrt{l_*}\,\delta + C_2\sqrt{l_*}\,\delta \le \left\|(K^*K)^\nu f_{l_*}\right\| + (1+C_2)\sqrt{l_*}\,\delta$$
with $C_2 > 0$ depending on $\nu$. If $l_* = 0$, then the assertion holds trivially. Otherwise, we apply (43) with $l = l_*-1$ to obtain
$$l_*^{\nu+1/2} \le c_*\|f\|/\delta.$$
Therefore, Equation (44) holds. □
For the convenience of numerical calculation, we summarize the above analysis as an algorithm with three parts: constructing the space $X_n$ (Algorithm 1), updating the iteration (Algorithm 2), and the stopping criterion (Algorithm 3).
Algorithm 1 Constructing the space $X_n$
Step 1. Give the initial function $x_0$, the noise level $\delta$, and the constants $r$ and $\mu$. Determine the constants $c_r$, $\tau$, $\omega$, and $\kappa_1$.
Step 2. Solve for $n\in\mathbb{N}$ from the inequality
$$c_r(n+1)\mu^{-rn/d} \le \kappa_1\tau\delta \le c_r(n+1)\mu^{-r(n-1)/d}.$$
Step 3. Construct the multiscale bases $\{w_{ij}\}$ for $(i,j)\in\mathbb{U}_n$.
Algorithm 2 Updating the iteration
Step 1. Compute the vectors $\mathbf{c}_n^0$, $\mathbf{y}_n^0$ and the matrices $\mathbf{E}_n$, $\tilde{\mathbf{B}}_n^0$.
Step 2. Suppose that the vector $\mathbf{c}_n^l$ has been obtained.
● Compute $\tilde{\mathbf{B}}_n^l$ and $\mathbf{y}_n^l$.
● Solve for $\mathbf{c}_n^{l+1}$ from
$$\mathbf{E}_n\,\mathbf{c}_n^{l+1} = \mathbf{E}_n\,\mathbf{c}_n^{l} + \omega\,\tilde{\mathbf{B}}_n^l\,\mathbf{y}_n^l$$
for $l\in\mathbb{N}_0$.
Algorithm 3 Stopping criterion
Step 1. Recover $\tilde{x}_{n,l+1}^\delta$ from the bases $\{w_{ij} : (i,j)\in\mathbb{U}_n\}$ and the vector $\mathbf{c}_n^{l+1}$.
Step 2. Continue the iteration until the approximate solution $\tilde{x}_{n,l+1}^\delta$ satisfies
$$\left\|y^\delta - F(\tilde{x}_{n,l+1}^\delta)\right\| \le \tau\delta.$$
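Putting Algorithms 1–3 together, a schematic implementation could look as follows. The routines `assemble_B_tilde` and `assemble_residual` are placeholders standing in for the problem-specific Galerkin quadratures; only the control flow follows the algorithms above.

```python
import numpy as np

def choose_level(c_r, kappa1, tau, delta, mu, r, d=1):
    """Algorithm 1, Step 2: smallest n with
    c_r * (n+1) * mu^(-r*n/d) <= kappa1 * tau * delta."""
    n = 1
    while c_r * (n + 1) * mu ** (-r * n / d) > kappa1 * tau * delta:
        n += 1
    return n

def multiscale_landweber(E, c0, y_delta, delta, omega, tau,
                         assemble_B_tilde, assemble_residual,
                         max_iter=100000):
    """Algorithms 2-3: update the coefficient vector via
    E c^{l+1} = E c^l + omega * B_tilde^l y^l, and stop as soon as
    ||y_delta - F(x_l)|| <= tau * delta (discrepancy principle)."""
    c = c0.copy()
    for _ in range(max_iter):
        res_norm, y_vec = assemble_residual(c, y_delta)
        if res_norm <= tau * delta:
            break
        B_tilde = assemble_B_tilde(c)
        c = np.linalg.solve(E, E @ c + omega * B_tilde @ y_vec)
    return c
```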

5. Numerical Experiment

This section provides a numerical example of nonlinear integral equations using the compression multiscale Galerkin method of Landweber iteration. The purpose was to verify the theoretical results.
Consider the nonlinear integral equation [9] $F(x) = y$, where $F: \mathcal{D}(F) = H^1[0,1] \to L^2[0,1]$ is defined as
$$F(x)(s) = \int_0^s x^3(t)\,dt.$$
Here, $L^2[0,1]$ denotes the linear space of all real-valued square-integrable functions on $[0,1]$, and $H^1[0,1]$ denotes
$$H^1[0,1] := \left\{u : u\in L^2[0,1],\ Du\in L^2[0,1]\right\},$$
where $D$ represents the first-order (generalized) differential operator. The Fréchet derivative of the operator $F$ is
$$\left(F'(x)h\right)(s) = \int_0^s 3x^2(t)\,h(t)\,dt.$$
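This derivative is easy to check numerically: on a grid, the difference quotient $(F(x+\varepsilon h)-F(x))/\varepsilon$ should agree with $F'(x)h$ up to $O(\varepsilon)$. The following snippet is our own verification aid, not part of the paper.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1001)  # grid on [0, 1]

def cumtrapz(f):
    """Cumulative trapezoidal rule approximating int_0^s f(t) dt."""
    out = np.zeros_like(f)
    out[1:] = np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(t))
    return out

F = lambda x: cumtrapz(x ** 3)              # F(x)(s) = int_0^s x^3 dt
Fp = lambda x, h: cumtrapz(3 * x ** 2 * h)  # (F'(x)h)(s) = int_0^s 3 x^2 h dt

x = 1.0 + 0.1 * np.cos(np.pi * t)           # a sample point in D(F)
h = np.sin(np.pi * t)                       # a sample direction
eps = 1e-6
fd = (F(x + eps * h) - F(x)) / eps
print(np.max(np.abs(fd - Fp(x, h))))        # O(eps): confirms the derivative
```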
To complete the numerical calculations, we give the concrete construction of the space sequence $X_n$ (cf. [25]). For the space $X_0$, we choose the linear basis functions
$$w_{00}(t) = t, \quad t\in[0,1], \qquad w_{01}(t) = 1-t, \quad t\in[0,1].$$
For the space $W_1$, we choose the linear basis function
$$w_{10}(t) = \begin{cases} t, & t\in[0,1/2), \\ 1-t, & t\in[1/2,1]. \end{cases}$$
Next, we construct all the basis functions of the subspace $W_i$ for $i > 1$ through the following recursive formulas [25]:
$$w_{i,j}(t) = \begin{cases} \frac{1}{2}\,w_{i-1,j}(2t), & t\in[0,\tfrac{1}{2}), \\ 0, & t\in[\tfrac{1}{2},1], \end{cases} \qquad j = 0,1,\ldots,w(i-1)-1,$$
$$w_{i,\,w(i-1)+j}(t) = \begin{cases} 0, & t\in[0,\tfrac{1}{2}), \\ \frac{1}{2}\,w_{i-1,j}(2t-1), & t\in[\tfrac{1}{2},1], \end{cases} \qquad j = 0,1,\ldots,w(i-1)-1.$$
Therefore, we can form the linear system (18) using these bases. Here, $\mu = 2$ and $r = 2$.
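For illustration, the recursion can be coded directly. The sketch below assumes $\dim W_1 = 1$ (only $w_{10}$ is listed above), so $\dim W_i = 2^{i-1}$ for $i \ge 1$; the normalization factor $1/2$ follows the formulas above, and a different scaling taken from [25] would only change that constant.

```python
import numpy as np

def basis(i, j):
    """Return w_{ij} on [0,1] as a vectorized callable, following the
    recursion above; j is the index of the basis function within level i."""
    if i == 0:
        return (lambda t: np.asarray(t, float)) if j == 0 \
            else (lambda t: 1.0 - np.asarray(t, float))
    if i == 1:
        return lambda t: np.where(np.asarray(t) < 0.5,
                                  np.asarray(t, float),
                                  1.0 - np.asarray(t, float))
    half = 2 ** (i - 2)                  # dim W_{i-1}
    if j < half:                         # supported on [0, 1/2)
        prev = basis(i - 1, j)
        return lambda t: np.where(np.asarray(t) < 0.5,
                                  0.5 * prev(2 * np.asarray(t)), 0.0)
    prev = basis(i - 1, j - half)        # supported on [1/2, 1]
    return lambda t: np.where(np.asarray(t) >= 0.5,
                              0.5 * prev(2 * np.asarray(t) - 1), 0.0)
```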
Let $x^\dagger := 1 + \frac{\pi}{100}\cos(\pi t)$ be the $x_0$-minimum-norm solution of $F(x) = y$. Thus, we have
$$y(s) = \left(1 + \frac{3E^2}{2}\right)s + \frac{3E+E^3}{\pi}\sin(\pi s) + \frac{3E^2}{4\pi}\sin(2\pi s) - \frac{E^3}{3\pi}\sin^3(\pi s)$$
with $E = \pi/10^2$ and $\|y\|_{L^2[0,1]} \approx 0.5948$. We choose the initial function $x_0(t) = 1$ for $t\in[0,1]$. Therefore, we obtain
$$x^\dagger - x_0 \in \mathcal{R}\left(F'(x^\dagger)^*\right) \quad\text{and}\quad \left\|x^\dagger - x_0\right\|_{H^1[0,1]} \approx 0.092.$$
We can conclude that the rate of convergence is $O(\sqrt{\delta})$. From the definition of the linear operator $R(x,\tilde{x})$, we know that
$$\left\|R(x_0,x^\dagger) - I\right\|_{L^2[0,1]}\Big/\left\|x^\dagger - x_0\right\|_{H^1[0,1]} = \left\|\left(\frac{x_0(t)}{x^\dagger(t)}\right)^2 - I\right\|_{L^2[0,1]}\Big/\left\|x^\dagger(t) - x_0(t)\right\|_{H^1[0,1]} = \left\|\frac{\left(x^\dagger(t)+x_0(t)\right)\left(x^\dagger(t)-x_0(t)\right)}{\left(x^\dagger(t)\right)^2}\right\|_{L^2[0,1]}\Big/\left\|x^\dagger(t) - x_0(t)\right\|_{H^1[0,1]} \approx 0.484.$$
Thus, we take $c_0 = 0.484$, $\rho = 0.092$, and $\tau = 1.31$. Let $v(s) := \sin(200\pi s + \pi/7)$ and $y^\delta := y(s) + \delta\cdot v(s)$ with
$$\delta = \frac{e}{100}\,\|y\|_{L^2[0,1]}$$
for $e\in\{0.9, 0.7, 0.5, 0.3, 0.1\}$. From Equation (31), the relaxation factor must satisfy $\omega < \frac{\tau-1}{1+c_0\rho} \approx 0.2968$, so we choose $\omega = 0.29$. With the constants $c_r = 0.5$ and $\kappa_1 = 0.5$, and the noise level $\delta \approx 0.000595$, it follows from Equation (28) that $n = 7$.
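As a quick sanity check on these constants (a verification snippet of ours, using only values stated in the text, with $d = 1$ on $[0,1]$):

```python
c0, rho, tau = 0.484, 0.092, 1.31
print((tau - 1) / (1 + c0 * rho))        # relaxation bound: ~0.2968

c_r, kappa1, mu, r = 0.5, 0.5, 2, 2
delta = 0.1 / 100 * 0.5948               # e = 0.1 gives delta ~ 0.000595
n = 1
while c_r * (n + 1) * mu ** (-r * n) > kappa1 * tau * delta:
    n += 1
print(n)                                  # n = 7, matching the text
```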
All our numerical experiments were conducted in MATLAB on a computer with a 3.0 GHz CPU and 8 GB of memory. The numerical results in Table 1 show that the compression multiscale Galerkin method for Landweber iteration can effectively solve nonlinear ill-posed integral equations, and they are consistent with the assertion of Theorem 3. Figure 1 shows the relationship between the error $\|\tilde{x}_{n,l}^\delta - x^\dagger\|_{H^1[0,1]}$ and the iteration step $l$, which further verifies Theorem 2. Figure 2 shows how close $\tilde{x}_{n,l_*}^\delta$ and $x^\dagger$, as well as their generalized derivatives, are to each other.

Author Contributions

R.Z. designed the paper and completed the research; F.L. and X.L. proposed the compression strategy and completed some proofs. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by the National Natural Science Foundation of China under grants 11571386, 11771464, and 11761010.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Scherzer, O.; Grasmair, M.; Grossauer, H.; Haltmeier, M.; Lenzen, F. Variational Methods in Imaging; Springer: New York, NY, USA, 2009; p. 297.
2. Schuster, T.; Kaltenbacher, B.; Hofmann, B.; Kazimierski, K.S. Regularization Methods in Banach Spaces; Walter de Gruyter: Berlin, Germany, 2012; p. 323.
3. Engl, H.W.; Neubauer, A.; Scherzer, O. Regularization of Inverse Problems; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1996; p. 329.
4. Kaltenbacher, B.; Hanke, M.; Neubauer, A. Iterative Regularization Methods for Nonlinear Ill-Posed Problems; Walter de Gruyter: Berlin, Germany; New York, NY, USA, 2008; p. 200.
5. Jin, Q. On a regularized Levenberg-Marquardt method for solving nonlinear inverse problems. Numer. Math. 2010, 115, 229–259.
6. Hochbruck, M.; Honig, M.; Ostermann, A. A convergence analysis of the exponential Euler iteration for nonlinear ill-posed problems. Inverse Probl. 2009, 25, 075009.
7. Hanke, M. The regularizing Levenberg-Marquardt scheme of optimal order. J. Integral Equ. Appl. 2010, 2, 259–283.
8. Ito, K.; Jin, B. Inverse Problems: Tikhonov Theory and Algorithms; World Scientific Publishing Co. Pte. Ltd.: Hackensack, NJ, USA, 2015; p. 330.
9. Engl, H.W.; Kunisch, K.; Neubauer, A. Convergence rates of Tikhonov regularization of nonlinear ill-posed problems. Inverse Probl. 1989, 5, 523–540.
10. Jin, Q. Applications of the modified discrepancy principle to Tikhonov regularization of nonlinear ill-posed problems. SIAM J. Numer. Anal. 1999, 36, 475–490.
11. Hou, Z.; Jin, Q. Tikhonov regularization for nonlinear ill-posed problems. Nonlinear Anal. 1997, 28, 1799–1809.
12. Scherzer, O.; Engl, H.W.; Kunisch, K. Optimal a-posteriori parameter choice for Tikhonov regularization for solving nonlinear ill-posed problems. SIAM J. Numer. Anal. 1993, 30, 1796–1838.
13. Jin, Q.; Hou, Z. On an a posteriori parameter choice strategy for Tikhonov regularization of nonlinear ill-posed problems. Numer. Math. 1999, 83, 139–159.
14. Hanke, M.; Neubauer, A.; Scherzer, O. A convergence analysis of the Landweber iteration for nonlinear ill-posed problems. Numer. Math. 1995, 72, 21–37.
15. Hanke, M. A note on the nonlinear Landweber iteration. Numer. Funct. Anal. Optim. 2014, 35, 1500–1510.
16. Jin, Q.; Amato, U. A Discrete Scheme of Landweber Iteration for Solving Nonlinear Ill-Posed Problems. J. Math. Anal. Appl. 2001, 253, 187–203.
17. Neubauer, A. Some generalizations for Landweber iteration for nonlinear ill-posed problems in Hilbert scales. J. Inv. Ill-Posed Probl. 2016, 24, 393–406.
18. Neubauer, A. A new gradient method for ill-posed problems. Numer. Funct. Anal. Optim. 2017, 39.
19. Scherzer, O. An iterative multi-level algorithm for solving nonlinear ill-posed problems. Numer. Math. 1998, 80, 579–600.
20. Chen, Z.; Micchelli, C.A.; Xu, Y. Multiscale Methods for Fredholm Integral Equations; Cambridge University Press: Cambridge, UK, 2015; p. 552.
21. Dicken, V.; Maass, P. Wavelet-Galerkin methods for ill-posed problems. J. Inverse Ill-Posed Probl. 1996, 4, 203–221.
22. Chen, Z.; Cheng, S.; Nelakanti, G.; Yang, H. A fast multiscale Galerkin method for the first kind ill-posed integral equations via Tikhonov regularization. Int. J. Comput. Math. 2010, 87, 565–582.
23. Fang, W.; Wang, Y.; Xu, Y. An implementation of fast wavelet Galerkin methods for integral equations of the second kind. J. Sci. Comput. 2004, 20, 277–302.
24. Luo, X.; Li, F.; Yang, S. A posteriori parameter choice strategy for fast multiscale methods solving ill-posed integral equations. Adv. Comput. Math. 2012, 36, 299–314.
25. Chen, Z.; Wu, B.; Xu, Y. Multilevel Augmentation Methods for Differential Equations. Adv. Comput. Math. 2006, 24, 213–238.
26. Thongchuay, W.; Puntip, T.; Maleewong, M. Multilevel augmentation method with wavelet bases for singularly perturbed problem. J. Math. Chem. 2013, 51, 2328–2339.
Figure 1. (a) For e = 0.9, the trend of the error $\|\tilde{x}_{n,l}^\delta - x^\dagger\|_{H^1[0,1]}$; (b) for e = 0.5, the trend of the error $\|\tilde{x}_{n,l}^\delta - x^\dagger\|_{H^1[0,1]}$; (c) for e = 0.1, the trend of the error $\|\tilde{x}_{n,l}^\delta - x^\dagger\|_{H^1[0,1]}$. The horizontal axis is $l$; the vertical axis is $\|\tilde{x}_{n,l}^\delta - x^\dagger\|_{H^1[0,1]}$.
Figure 2. (a) For e = 0.1, the approximate solution $\tilde{x}_{n,l_*}^\delta$ (AS) and the $x_0$-minimum-norm solution $x^\dagger$ (OS); (b) for e = 0.1, the generalized derivatives of $\tilde{x}_{n,l_*}^\delta$ (DAS) and $x^\dagger$ (DOS).
Table 1. Compression multiscale Galerkin method of Landweber iteration based on the discrepancy principle.

| $n$ | $e$ | $\delta$ | $l_*$ | $\|\tilde{x}_{n,l_*}^\delta - x^\dagger\|_{H^1[0,1]}$ | $\|\tilde{x}_{n,l_*}^\delta - x^\dagger\|_{H^1[0,1]}/\sqrt{\delta}$ |
|---|---|---|---|---|---|
| 7 | 0.9 | 0.005353 | 82 | 0.044211 | 0.604265 |
| 7 | 0.7 | 0.004164 | 107 | 0.036427 | 0.564529 |
| 7 | 0.5 | 0.002974 | 142 | 0.028804 | 0.528174 |
| 7 | 0.3 | 0.001784 | 199 | 0.021773 | 0.515422 |
| 7 | 0.1 | 0.000595 | 416 | 0.014956 | 0.613259 |
