Article

An Inertial Relaxed CQ Algorithm with Two Adaptive Step Sizes and Its Application for Signal Recovery

by Teeranush Suebcharoen ¹, Raweerote Suparatulatorn ¹·², Tanadon Chaobankoh ¹, Khwanchai Kunwai ¹ and Thanasak Mouktonglang ¹·*

¹ Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
² Office of Research Administration, Chiang Mai University, Chiang Mai 50200, Thailand
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(15), 2406; https://doi.org/10.3390/math12152406
Submission received: 2 July 2024 / Revised: 28 July 2024 / Accepted: 30 July 2024 / Published: 2 August 2024

Abstract: This article presents a novel inertial relaxed CQ algorithm, incorporating two adaptive step sizes, for solving split feasibility problems. A strong convergence theorem is established under standard conditions. Additionally, we apply the algorithm to signal recovery problems and evaluate its performance against existing techniques from the literature.

1. Introduction

Throughout this article, all sets are assumed to be nonempty. Let $\mathbb{N}$, $\mathbb{R}$, and $\mathbb{R}^N$ denote the set of positive integers, the set of real numbers, and the set of ordered $N$-tuples of real numbers (where $N \in \mathbb{N}$), respectively. Let $X := (X, \langle \cdot,\cdot \rangle)$ and $Y := (Y, \langle \cdot,\cdot \rangle)$ be two inner product spaces with the induced norm $\|\cdot\|$. Also, let $C$ and $Q$ be closed and convex subsets of $X$ and $Y$, respectively. We define the following subset of $X$:
$$\Omega := \{ x \in C \mid Ax \in Q \}, \quad \text{where } A : X \to Y \text{ is a bounded linear operator}. \tag{1}$$
The problem of finding $x \in \Omega$ is known as the split feasibility problem (SFP). Introduced by Censor and Elfving [1] in 1994, the SFP has garnered significant attention. Its applications span diverse fields, including image and signal processing, inverse problems, and machine learning (see, for example, [2,3,4,5]). Early algorithms for solving the SFP, in both finite- and infinite-dimensional spaces, all required the existence of the inverse of $A$. The CQ algorithm introduced by Byrne [6] stands out as one of the most notable alternatives; it is computationally efficient whenever the metric projections onto $C$ and $Q$ can be conveniently computed. The algorithm generates the sequence $\{x_n\}$ by
$$x_{n+1} = P_C\big(x_n - \gamma A^*(I - P_Q)Ax_n\big) \quad \text{for all } n \ge 1,$$
where $P_C$ and $P_Q$ are the (nearest point) metric projections onto $C$ and $Q$, respectively; $A^*$ is the adjoint operator of $A$; and $\gamma$ (called the step size) lies in $\big(0, \tfrac{2}{\|A\|^2}\big)$. Nevertheless, it may be impractical or computationally intensive to determine the metric projections exactly. More importantly, the step size depends on the operator norm, which can be computationally challenging or difficult to estimate accurately.
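As an illustration, Byrne's CQ iteration above can be sketched for a toy SFP in which both projections are componentwise clamps onto boxes; the matrix, sets, starting point, and step size below are our own toy choices, not taken from the paper:

```python
import numpy as np

def proj_box(v, lo, hi):
    """Metric projection onto the box [lo, hi]^N (componentwise clamp)."""
    return np.clip(v, lo, hi)

def cq_algorithm(A, proj_C, proj_Q, x0, gamma, iters=500):
    """Byrne's CQ iteration: x_{n+1} = P_C(x_n - gamma * A^T (I - P_Q) A x_n).
    The step size gamma must lie in (0, 2/||A||^2)."""
    x = x0.astype(float)
    for _ in range(iters):
        Ax = A @ x
        grad = A.T @ (Ax - proj_Q(Ax))  # gradient of (1/2)||(I - P_Q) A x||^2
        x = proj_C(x - gamma * grad)
    return x

# Toy SFP: C = [0,1]^2, A = 2I, Q = [1,2]^2, so the solution set is [0.5, 1]^2.
A = 2.0 * np.eye(2)
gamma = 0.4  # below 2/||A||^2 = 0.5
x = cq_algorithm(A, lambda v: proj_box(v, 0, 1), lambda v: proj_box(v, 1, 2),
                 np.array([-1.0, 3.0]), gamma)
```

On this toy problem the iterates settle at a point of $C$ whose image under $A$ lies in $Q$.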
In real-world applications, the sets $C$ and $Q$ are typically defined as level sets of convex functions:
$$C = \{ x \in X : c(x) \le 0 \} \quad \text{and} \quad Q = \{ y \in Y : q(y) \le 0 \},$$
where $c : X \to \mathbb{R}$ and $q : Y \to \mathbb{R}$ are lower semi-continuous convex functions whose subdifferentials $\partial c$ and $\partial q$ are assumed to be bounded operators.
However, when $C$ and $Q$ are complicated, the nearest point projections $P_C$ and $P_Q$ have no closed forms, and thus the computation is expensive. In 2004, Yang [7] presented a modification of the CQ algorithm, called the relaxed CQ algorithm, which replaces $C$ and $Q$ by the half-spaces $C_n$ and $Q_n$ containing them, given by
$$C_n = \{ x \in X : c(x_n) \le \langle \lambda_n, x_n - x \rangle \}, \ \text{where } \lambda_n \in \partial c(x_n); \qquad Q_n = \{ y \in Y : q(Ax_n) \le \langle \eta_n, Ax_n - y \rangle \}, \ \text{where } \eta_n \in \partial q(Ax_n). \tag{2}$$
Note that $P_{C_n}$ and $P_{Q_n}$ now have closed forms. Since these projections are easily calculated, this method is very practical.
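The closed form mentioned above is the elementary projection onto a half-space $H = \{x : \langle a, x \rangle \le b\}$, namely $P_H(x) = x - \max(0, \langle a, x\rangle - b)\,\|a\|^{-2} a$. A minimal sketch (the example vectors are ours):

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Closed-form projection onto H = {x : <a, x> <= b}, assuming a != 0."""
    excess = np.dot(a, x) - b
    if excess <= 0:
        return x.copy()  # x already lies in H
    return x - (excess / np.dot(a, a)) * a

# Project [3, 1] onto the half-space {x : x_1 <= 2}.
p = proj_halfspace(np.array([3.0, 1.0]), np.array([1.0, 0.0]), 2.0)
```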
In the following, we define, for $n \in \mathbb{N}$,
$$f_n(\cdot) = \tfrac{1}{2}\,\big\|(I - P_{Q_n})A(\cdot)\big\|^2. \tag{3}$$
Thus, for $n \in \mathbb{N}$,
$$\nabla f_n = A^*(I - P_{Q_n})A. \tag{4}$$
Next, in 2005, Yang [8] solved the SFP using a variable step size defined by
$$\gamma_n = \frac{\rho_n}{\|\nabla f_n(x_n)\|} \quad \text{for all } n \in \mathbb{N},$$
where $\{\rho_n\} \subset (0, \infty)$ satisfies $\sum_{n=1}^{\infty} \rho_n = \infty$ and $\sum_{n=1}^{\infty} \rho_n^2 < \infty$. Using this step size still requires two strong conditions: the boundedness of $Q$ and the full column rank of $A$. Later, López et al. [9] removed these two conditions by altering the step size to
$$\gamma_n = \frac{\rho_n f_n(x_n)}{\|\nabla f_n(x_n)\|^2} \quad \text{for all } n \in \mathbb{N},$$
where the sequence $\{\rho_n\} \subset (0, 4)$ satisfies $\inf_n \rho_n(4 - \rho_n) > 0$. This step size is effective because its convergence result can be proven without relying on the matrix norm or any specific properties of $Q$ and $A$. Recent developments have introduced several algorithms that eliminate the need for the matrix norm in solving the SFP and related problems; see, for example, [10,11,12].
Subsequently, numerous algorithms and novel step sizes were proposed to enhance the convergence rate of such methods. Among these, the inertial technique has gained significant attention. Introduced by Polyak [13] in 1964 as an acceleration method for convex minimization problems, its distinctive feature is the use of the two preceding iterates to compute the next one. Alvarez and Attouch [14] extended the heavy ball method to the proximal point algorithm, leading to the inertial proximal point algorithm:
$$\omega_n = x_n + \chi_n (x_n - x_{n-1}), \qquad x_{n+1} = (I + \gamma_n F)^{-1}(\omega_n), \quad n \ge 1,$$
where $F$ is a maximal monotone operator. Convergence to a zero of $F$ is proven under the assumptions that $\{\gamma_n\}$ is non-decreasing and $\chi_n \in [0, 1)$ satisfies $\sum_{n=1}^{\infty} \chi_n \|x_n - x_{n-1}\|^2 < \infty$.
In [15], the authors proposed a modified inertial relaxed CQ algorithm (IRCQVA) that incorporates a viscosity approximation method and a novel step size strategy to solve the SFP: for any $x_0, x_1 \in X$ and all $n \in \mathbb{N}$,
$$\omega_n = x_n + \chi_n (x_n - x_{n-1}), \qquad x_{n+1} = P_{C_n}\big(\alpha_n f(\omega_n) + (1 - \alpha_n)(\omega_n - \varphi_n \nabla f_n(\omega_n))\big), \tag{6}$$
where $f$ is a contraction on $X$, $\{\alpha_n\} \subset (0, 1)$, $\{\chi_n\} \subset [0, 1)$, $\{\rho_n\} \subset (0, 4)$, and
$$\varphi_n = \frac{\rho_n f_n(\omega_n)}{\|\nabla f_n(\omega_n)\|^2 + \|(I - P_{C_n})(\omega_n)\|^2}. \tag{7}$$
It was shown that the sequence $\{x_n\}$ generated by (6) converges strongly to a point in $\Omega$ under the conditions $\lim_{n\to\infty} \alpha_n = \lim_{n\to\infty} \frac{\chi_n}{\alpha_n}\|x_n - x_{n-1}\| = 0$, $\sum_{n=1}^{\infty} \alpha_n = \infty$, and $\inf_n \rho_n(4 - \rho_n) > 0$. Subsequently, in 2019, Gibali et al. [16] obtained a new inertial relaxed CQ algorithm (M-IRCQA) for solving SFP (1), given as follows: for any $x_0, x_1 \in X$ and all $n \in \mathbb{N}$, select $\chi_n$ such that $0 \le \chi_n \le \bar{\chi}_n$, where $\varepsilon_n = o(\alpha_n)$ and
$$\bar{\chi}_n = \begin{cases} \min\left\{ \dfrac{\varepsilon_n}{\|x_n - x_{n-1}\|},\ \chi \right\} & \text{if } x_n \ne x_{n-1}; \\ \chi & \text{otherwise} \end{cases}$$
for some $\chi \in [0, 1)$. Compute
$$\omega_n = x_n + \chi_n (x_n - x_{n-1}), \qquad x_{n+1} = (1 - \alpha_n - \gamma_n)\omega_n + \gamma_n P_{C_n}\big(\omega_n - \varphi_n \nabla f_n(\omega_n)\big), \tag{8}$$
where $\{\alpha_n\}, \{\gamma_n\} \subset (0, 1)$, $\{\rho_n\} \subset (0, 4)$, and $\varphi_n$ is defined as in (7). It was also shown that the sequence $\{x_n\}$ generated by (8) converges strongly to the minimum-norm element of $\Omega$ under the conditions $\lim_{n\to\infty} \alpha_n = 0$, $\sum_{n=1}^{\infty} \alpha_n = \infty$, $\inf_n \gamma_n (1 - \alpha_n - \gamma_n) > 0$, and $\inf_n \rho_n(4 - \rho_n) > 0$.
In a recent study [4], the authors presented the inertial gradient CQ algorithm (IGCQHA), which employs Halpern iteration with a new step size to solve the SFP: for any $u, x_0, x_1 \in X$ and all $n \in \mathbb{N}$,
$$\omega_n = x_n + \chi_n (x_n - x_{n-1}), \qquad y_n = \omega_n - \gamma_n \nabla f_n(\omega_n), \qquad x_{n+1} = \alpha_n u + (1 - \alpha_n) P_{C_n}\big(y_n - \varphi_n^* \nabla f_n(y_n)\big), \tag{9}$$
where $\gamma_n = \dfrac{\rho_n f_n(x_n)}{\|\nabla f_n(x_n)\|^2 + \theta_n}$, $\varphi_n^* = \dfrac{\rho_n f_n(y_n)}{\|\nabla f_n(y_n)\|^2 + \theta_n}$, $\{\alpha_n\}, \{\theta_n\} \subset (0, 1)$, $\{\chi_n\} \subset [0, 1)$, and $\{\rho_n\} \subset (0, 4)$. The sequence $\{x_n\}$ generated by (9) then converges strongly to $P_\Omega u$ under the conditions $\lim_{n\to\infty} \alpha_n = \lim_{n\to\infty} \theta_n = \lim_{n\to\infty} \frac{\chi_n}{\alpha_n}\|x_n - x_{n-1}\| = 0$, $\sum_{n=1}^{\infty} \alpha_n = \infty$, and $\inf_n \rho_n(4 - \rho_n) > 0$.
Additionally, Ma and Liu [5] recently proposed the inertial Halpern-type CQ algorithm (IHTCQA) for solving the SFP with a step size that is bounded away from 0. Their scheme reads as follows: for any $u, x_0, x_1 \in X$ and all $n \in \mathbb{N}$, select $\chi_n \in [0, \chi) \subset [0, 1)$ such that
$$\chi_n = \begin{cases} \min\left\{ \dfrac{\varepsilon_n}{\|x_n - x_{n-1}\|},\ \chi \right\} & \text{if } x_n \ne x_{n-1}; \\ \chi & \text{otherwise}, \end{cases}$$
where $\varepsilon_n = o(\alpha_n)$. Compute
$$\omega_n = x_n + \chi_n (x_n - x_{n-1}), \qquad x_{n+1} = \alpha_n u + (1 - \alpha_n) P_{C_n}\big(\omega_n - \tau_n \nabla f_n(\omega_n)\big), \tag{10}$$
using the following step size scheme: for any $\tau_1 > 0$ and $\delta \in (0, 1]$,
$$\tau_{n+1} = \begin{cases} \min\left\{ \dfrac{2\delta f_n(\omega_n)}{\|\nabla f_n(\omega_n)\|^2},\ \phi_n \tau_n + \psi_n \right\} & \text{if } \nabla f_n(\omega_n) \ne 0; \\ \phi_n \tau_n + \psi_n & \text{otherwise}, \end{cases}$$
where $\{\alpha_n\} \subset (0, 1)$, $\{\phi_n\} \subset [1, \infty)$, and $\{\psi_n\} \subset [0, \infty)$. The sequence $\{x_n\}$ defined by (10) then converges strongly to $P_\Omega u$ under the conditions $\lim_{n\to\infty} \alpha_n = 0$, $\sum_{n=1}^{\infty} \alpha_n = \infty$, $\sum_{n=1}^{\infty} (\phi_n - 1) < \infty$, and $\sum_{n=1}^{\infty} \psi_n < \infty$.
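The step size rule above needs no knowledge of $\|A\|$; it only compares the current function value against a relaxed copy of the previous step. A sketch of a single update, with our own parameter names and defaults ($\phi_n = 1$, $\psi_n = 0$):

```python
import numpy as np

def update_step(tau_n, f_val, grad, delta=0.3, phi=1.0, psi=0.0):
    """One Ma-Liu-type adaptive step size update:
    tau_{n+1} = min(2*delta*f_n(w_n)/||grad f_n(w_n)||^2, phi_n*tau_n + psi_n)
    when grad f_n(w_n) != 0, and phi_n*tau_n + psi_n otherwise."""
    g2 = float(np.dot(grad, grad))
    fallback = phi * tau_n + psi
    return min(2.0 * delta * f_val / g2, fallback) if g2 > 0 else fallback
```

With $\phi_n = 1$ and $\psi_n = 0$ the step sizes are nonincreasing; the summability conditions on $\{\phi_n - 1\}$ and $\{\psi_n\}$ allow occasional increases without losing convergence.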
In this article, we propose a novel inertial relaxed CQ algorithm with two adaptive step sizes for solving the SFP, combining inertial acceleration with adaptive step size control. The proposed algorithm has two main advantages over existing methods: the inertial term helps to accelerate convergence, and the two adaptive step sizes allow the algorithm to adjust itself automatically during the iteration process. In Section 3, we provide a comprehensive convergence analysis establishing the strong convergence of the proposed algorithm. In Section 4, numerical experiments demonstrate its efficiency and reliability; it outperforms existing algorithms in terms of accuracy and speed. These advantages make the proposed algorithm a promising tool for solving a wide range of optimization problems.

2. Preliminaries

In this section, we present some notations, definitions, and results that will be used in the next section.
We denote ⇀ and → as weak and strong convergence, respectively.
Proposition 1.
For $u, v, w \in X$ and $a, b, c \in [0, 1]$ such that $a + b + c = 1$,
$$\|u + v\|^2 \le \|u\|^2 + 2\langle v, u + v \rangle, \tag{11}$$
$$\|a u + (1 - a) v\|^2 = a\|u\|^2 + (1 - a)\|v\|^2 - a(1 - a)\|u - v\|^2, \tag{12}$$
$$\|a u + b v + c w\|^2 = a\|u\|^2 + b\|v\|^2 + c\|w\|^2 - ab\|u - v\|^2 - ac\|u - w\|^2 - bc\|v - w\|^2. \tag{13}$$
Definition 1.
A mapping $S : X \to X$ is said to be $L$-Lipschitz if there exists $L \ge 0$ satisfying, for all $u, v \in X$,
$$\|S u - S v\| \le L \|u - v\|.$$
The mapping $S$ is called an $L$-contraction when $L \in [0, 1)$; if $L = 1$, then $S$ is called nonexpansive.
For $u \in X$, define $P_C : X \to C$ by
$$P_C(u) := \arg\min_{v \in C} \|u - v\|.$$
It is well known that $P_C$ is the metric projection and satisfies, for any $u \in X$ and $v \in C$,
$$\langle u - P_C(u), v - P_C(u) \rangle \le 0.$$
It is clear that $P_C$ is nonexpansive.
Definition 2.
Let $c : X \to \mathbb{R}$ be a function. Then, we have the following:
(i) The function $c$ is said to be weakly lower semi-continuous (w-lsc) at $z$ if $z_n \rightharpoonup z$ implies
$$c(z) \le \liminf_{n\to\infty} c(z_n).$$
(ii) The function $c$ is convex if, for all $u, v \in X$ and all $t \in (0, 1)$,
$$c(t u + (1 - t) v) \le t\, c(u) + (1 - t)\, c(v).$$
Proposition 2.
A differentiable function $c : X \to \mathbb{R}$ is convex if and only if, for all $x, z \in X$,
$$\langle \nabla c(z), x - z \rangle + c(z) \le c(x).$$
Definition 3.
An element $\bar{b} \in X$ is said to be a subgradient of $c : X \to \mathbb{R}$ at $b$ if, for all $x \in X$,
$$\langle \bar{b}, x - b \rangle + c(b) \le c(x).$$
This relation is called the subdifferential inequality.
Definition 4.
A function c : X R is subdifferentiable at z if it has at least one subgradient at z. A function c is called subdifferentiable if it is subdifferentiable at all z X .
The set of subgradients of c at the point z is called the subdifferential of c at z, and it is denoted by c ( z ) . For a differentiable and convex function, its gradient and subgradient coincide.
We next collect some necessary lemmas for proving our main result.
Lemma 1
([17,18]). Let $\{\lambda_n\}$ and $\{b_n\}$ be sequences in $[0, \infty)$, and let $\{c_n\}$ be a sequence in $\mathbb{R}$ such that $\sum_{n=1}^{\infty} c_n < \infty$. For a sequence $\{d_n\} \subset (0, 1)$, suppose that there is $n_0 \in \mathbb{N}$ such that
$$\lambda_{n+1} \le (1 - d_n)\lambda_n + b_n + c_n \quad \text{for all } n \ge n_0.$$
Then, we have the following statements:
(i) If there exists some $K > 0$ such that $b_n \le d_n K$, then $\{\lambda_n\}$ is bounded.
(ii) $\lim_{n\to\infty} \lambda_n = 0$ when $\limsup_{n\to\infty} \frac{b_n}{d_n} \le 0$ and $\sum_{n=1}^{\infty} d_n = \infty$.
Lemma 2
([19]). For a sequence $\{\lambda_n\} \subset \mathbb{R}$, if there exists a subsequence $\{\lambda_{n_j}\}$ such that $\lambda_{n_j} < \lambda_{n_j + 1}$ for all $j$, then we have the following:
(i) $\lim_{n\to\infty} \psi(n) = \infty$, where $\psi(n) := \max\{ k \le n : \lambda_k < \lambda_{k+1} \}$;
(ii) $\{\psi(n)\}_{n \ge n^*}$ is nondecreasing and, for all $n \ge n^*$,
$$\lambda_{\psi(n)} \le \lambda_{\psi(n)+1} \quad \text{and} \quad \lambda_n \le \lambda_{\psi(n)+1}.$$

3. Main Result

In this section, we first list all requirements for our algorithm:
(R1) $A : X \to Y$ is a bounded linear operator with adjoint $A^*$.
(R2) The sets $C_n$ and $Q_n$, the function $f_n$, and its gradient $\nabla f_n$ are defined as in (2), (3), and (4), respectively, and $\partial c$ and $\partial q$ are bounded operators.
(R3) $h : X \to \mathbb{R}$ is a function such that $\nabla h$ is $\varrho$-contractive (note that this implies that $h$ is differentiable).
(R4) $\{\chi_n\} \subset [0, \infty)$ is a sequence for which $\lim_{n\to\infty} \frac{\chi_n}{\alpha_n}\|x_n - x_{n-1}\| = 0$, where $\{\alpha_n\} \subset (0, 1)$ satisfies $\lim_{n\to\infty} \alpha_n = 0$ and $\sum_{n=1}^{\infty} \alpha_n = \infty$ (see an example of $\{\chi_n\}$ in Section 4).
(R5) $\{\beta_n\}$ and $\{\gamma_n\}$ are sequences in $(0, 1)$ such that $\alpha_n + \beta_n + \gamma_n = 1$ and $\gamma_n \in (\gamma, \gamma^*) \subset (0, 1 - \alpha_n)$ for some $\gamma, \gamma^* \in (0, 1)$.
(R6) $\{\phi_n\}, \{\phi_n^*\} \subset [1, \infty)$ are sequences such that $\sum_{n=1}^{\infty} (\phi_n - 1) < \infty$ and $\sum_{n=1}^{\infty} (\phi_n^* - 1) < \infty$, and $\{\psi_n\}$ and $\{\psi_n^*\}$ are sequences in $[0, \infty)$ such that $\sum_{n=1}^{\infty} \psi_n < \infty$ and $\sum_{n=1}^{\infty} \psi_n^* < \infty$.
We are now ready to present our algorithm (IVTCQA) (Algorithm 1).
Algorithm 1 Inertial viscosity-type CQ algorithm (IVTCQA)
Initialization: Take $x_0, x_1 \in X$, $\tau_1, \nu_1 > 0$, $\delta, \mu \in (0, 2)$, and set $n := 1$.
Iterative Steps: Construct $\{x_n\}$ using the following steps:
Step 1. Set $\omega_n = x_n + \chi_n (x_n - x_{n-1})$ and compute
$$\zeta_n = \omega_n - \tau_n \nabla f_n(\omega_n).$$
Step 2. Define
$$\vartheta_n = P_{C_n}\big(\zeta_n - \nu_n \nabla f_n(\zeta_n)\big).$$
If $\vartheta_n = \zeta_n$, then stop, and $\vartheta_n \in \Omega$. Otherwise, continue.
Step 3. Calculate
$$x_{n+1} = \alpha_n \nabla h(x_n) + \beta_n x_n + \gamma_n \vartheta_n.$$
Step 4. Update
$$\tau_{n+1} = \begin{cases} \min\left\{ \dfrac{2\delta f_n(\omega_n)}{\|\nabla f_n(\omega_n)\|^2},\ \phi_n \tau_n + \psi_n \right\} & \text{if } \nabla f_n(\omega_n) \ne 0; \\ \phi_n \tau_n + \psi_n & \text{otherwise} \end{cases}$$
and
$$\nu_{n+1} = \begin{cases} \min\left\{ \dfrac{2\mu f_n(\zeta_n)}{\|\nabla f_n(\zeta_n)\|^2},\ \phi_n^* \nu_n + \psi_n^* \right\} & \text{if } \nabla f_n(\zeta_n) \ne 0; \\ \phi_n^* \nu_n + \psi_n^* & \text{otherwise}. \end{cases}$$
Replace $n$ with $n + 1$ and return to Step 1.
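Steps 1 to 4 above can be sketched in code. The sketch below simplifies Algorithm 1 to the case where $P_C$ and $P_Q$ are directly computable (so the relaxed sets $C_n$, $Q_n$ are not needed) and takes $\phi_n = 1$, $\psi_n = 0$, which makes both adaptive step sizes nonincreasing; the toy problem, the contraction $\nabla h(x) = 0.9x$, and all parameter values are our own assumptions, not the paper's experimental setup:

```python
import numpy as np

def ivtcqa(A, proj_C, proj_Q, grad_h, x0, x1, n_iters=300,
           tau=0.5, nu=0.5, delta=1.0, mu=1.0, chi_max=0.3):
    """Simplified sketch of Algorithm 1 (IVTCQA) with C_n = C, Q_n = Q,
    phi_n = 1, and psi_n = 0."""
    def grad_f(v):  # gradient of f(v) = (1/2)||(I - P_Q) A v||^2
        Av = A @ v
        return A.T @ (Av - proj_Q(Av))
    def f(v):
        Av = A @ v
        return 0.5 * np.sum((Av - proj_Q(Av)) ** 2)
    x_prev, x = x0.astype(float), x1.astype(float)
    for n in range(1, n_iters + 1):
        alpha = 1.0 / (n + 1)          # alpha_n -> 0, sum alpha_n = infinity
        gamma = 4.0 * n / (5 * n + 5)  # gamma_n bounded away from 0 and 1
        beta = 1.0 - alpha - gamma
        chi = min(1.0 / ((n + 1) ** 3 * (np.linalg.norm(x - x_prev) + 1e-12)),
                  chi_max)
        w = x + chi * (x - x_prev)                    # Step 1: inertial point
        zeta = w - tau * grad_f(w)
        theta = proj_C(zeta - nu * grad_f(zeta))      # Step 2
        x_prev, x = x, alpha * grad_h(x) + beta * x + gamma * theta  # Step 3
        gw, gz = grad_f(w), grad_f(zeta)              # Step 4: adaptive updates
        if np.dot(gw, gw) > 0:
            tau = min(2 * delta * f(w) / np.dot(gw, gw), tau)
        if np.dot(gz, gz) > 0:
            nu = min(2 * mu * f(zeta) / np.dot(gz, gz), nu)
    return x

# Toy SFP: C = [0,1]^2, A = 2I, Q = [1,2]^2; grad_h(x) = 0.9x is a 0.9-contraction,
# so the iterates approach s = P_Omega(grad_h(s)) = (0.5, 0.5).
A = 2.0 * np.eye(2)
x = ivtcqa(A, lambda v: np.clip(v, 0, 1), lambda v: np.clip(v, 1, 2),
           lambda v: 0.9 * v, np.zeros(2), np.zeros(2))
```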
Prior to presenting our main theorem, we establish the following necessary results.
Proposition 3.
Let $\{\vartheta_n\}$ and $\{\zeta_n\}$ be the sequences generated by the IVTCQA. If $\vartheta_n = \zeta_n$, then $\vartheta_n \in \Omega$.
Proof. 
The proof is similar to that of Proposition 3.2 (1) in [20]. □
Lemma 3.
For $\{\tau_n\}$ and $\{\nu_n\}$ generated by the IVTCQA, $\lim_{n\to\infty} \tau_n = \tau$ and $\lim_{n\to\infty} \nu_n = \nu$, where $\tau \ge \min\left\{ \dfrac{\delta}{\|A^* A\|},\ \tau_1 \right\} > 0$ and $\nu \ge \min\left\{ \dfrac{\mu}{\|A^* A\|},\ \nu_1 \right\} > 0$.
Proof. 
This proof is analogous to the proof of Lemma 3.1 [5]. □
We now proceed to present our main result.
Theorem 1.
Let $\{x_n\}$ be a sequence generated by the IVTCQA. Then, it converges strongly to
$$s = P_\Omega \nabla h(s). \tag{14}$$
Proof. 
Let $r \in \Omega$. Since $C \subset C_n$ and $Q \subset Q_n$, we have $r = P_C(r) = P_{C_n}(r)$ and $Ar = P_Q(Ar) = P_{Q_n}(Ar)$. It follows that $\nabla f_n(r) = 0$, and thus,
$$\langle \zeta_n - r, \nabla f_n(\zeta_n) \rangle = \langle \zeta_n - r, \nabla f_n(\zeta_n) - \nabla f_n(r) \rangle = \langle A\zeta_n - Ar, (I - P_{Q_n})A\zeta_n - (I - P_{Q_n})Ar \rangle \ge \|(I - P_{Q_n})A\zeta_n\|^2 = 2 f_n(\zeta_n). \tag{15}$$
Also, we have
$$\langle \omega_n - r, \nabla f_n(\omega_n) \rangle \ge 2 f_n(\omega_n). \tag{16}$$
Set $\delta_n = 2 - \delta \dfrac{\tau_n}{\tau_{n+1}}$ and $\mu_n = 2 - \mu \dfrac{\nu_n}{\nu_{n+1}}$.
By Lemma 3, $\lim_{n\to\infty} \delta_n = 2 - \delta > 0$ and $\lim_{n\to\infty} \mu_n = 2 - \mu > 0$. Then, there exists $n_0 \in \mathbb{N}$ such that $\delta_n > 0$ and $\mu_n > 0$ for all $n \ge n_0$. It follows from the definitions of $\tau_{n+1}$ and $\nu_{n+1}$ that
$$\tau_{n+1} \|\nabla f_n(\omega_n)\|^2 \le 2\delta f_n(\omega_n) \quad \text{and} \quad \nu_{n+1} \|\nabla f_n(\zeta_n)\|^2 \le 2\mu f_n(\zeta_n). \tag{17}$$
By the nonexpansivity of $P_{C_n}$, the definition of $\zeta_n$, and (15)–(17), we obtain
$$\begin{aligned} \|\vartheta_n - r\|^2 &= \|P_{C_n}(\zeta_n - \nu_n \nabla f_n(\zeta_n)) - P_{C_n}(r)\|^2 \le \|\zeta_n - r - \nu_n \nabla f_n(\zeta_n)\|^2 \\ &= \|\zeta_n - r\|^2 + \nu_n^2 \|\nabla f_n(\zeta_n)\|^2 - 2\nu_n \langle \zeta_n - r, \nabla f_n(\zeta_n) \rangle \\ &= \|\omega_n - r\|^2 + \|\omega_n - \zeta_n\|^2 + \nu_n^2 \|\nabla f_n(\zeta_n)\|^2 + 2\langle \omega_n - r, \zeta_n - \omega_n \rangle - 2\nu_n \langle \zeta_n - r, \nabla f_n(\zeta_n) \rangle \\ &= \|\omega_n - r\|^2 + \tau_n^2 \|\nabla f_n(\omega_n)\|^2 + \nu_n^2 \|\nabla f_n(\zeta_n)\|^2 - 2\tau_n \langle \omega_n - r, \nabla f_n(\omega_n) \rangle - 2\nu_n \langle \zeta_n - r, \nabla f_n(\zeta_n) \rangle \\ &\le \|\omega_n - r\|^2 + \frac{2\delta \tau_n^2 f_n(\omega_n)}{\tau_{n+1}} + \frac{2\mu \nu_n^2 f_n(\zeta_n)}{\nu_{n+1}} - 4\tau_n f_n(\omega_n) - 4\nu_n f_n(\zeta_n) \\ &= \|\omega_n - r\|^2 - 2\delta_n \tau_n f_n(\omega_n) - 2\mu_n \nu_n f_n(\zeta_n). \end{aligned} \tag{18}$$
Then, from (18), we have $\|\vartheta_n - r\|^2 \le \|\omega_n - r\|^2$, and thus,
$$\|\vartheta_n - r\| \le \|\omega_n - r\| \le \|x_n - r\| + \chi_n \|x_n - x_{n-1}\| \quad \text{for all } n \ge n_0. \tag{19}$$
From the $\varrho$-contractivity of $\nabla h$ and the definition of $x_{n+1}$,
$$\begin{aligned} \|x_{n+1} - r\| &= \|\alpha_n (\nabla h(x_n) - r) + \beta_n (x_n - r) + \gamma_n (\vartheta_n - r)\| \\ &\le \alpha_n \|\nabla h(x_n) - \nabla h(r)\| + \alpha_n \|\nabla h(r) - r\| + \beta_n \|x_n - r\| + \gamma_n \|\vartheta_n - r\| \\ &\le \alpha_n \|\nabla h(r) - r\| + (\varrho \alpha_n + \beta_n) \|x_n - r\| + \gamma_n \|\vartheta_n - r\|. \end{aligned}$$
Combining the above inequality with (19), we obtain, for all $n \ge n_0$,
$$\|x_{n+1} - r\| \le (1 - d_n)\|x_n - r\| + \gamma_n \chi_n \|x_n - x_{n-1}\| + \alpha_n \|\nabla h(r) - r\| \le (1 - d_n)\|x_n - r\| + d_n \kappa_n, \tag{20}$$
where $d_n = \alpha_n (1 - \varrho)$ and $\kappa_n = \dfrac{1}{1 - \varrho}\left( \dfrac{\chi_n}{\alpha_n}\|x_n - x_{n-1}\| + \|\nabla h(r) - r\| \right)$.
Next, we set $M_n = 2M\chi_n \|x_n - x_{n-1}\| + \chi_n^2 \|x_n - x_{n-1}\|^2$, where $M := \sup_{n\in\mathbb{N}} \|x_n - s\|$. Since $\lim_{n\to\infty} \frac{\chi_n}{\alpha_n}\|x_n - x_{n-1}\| = 0$, the sequence $\{\kappa_n\}$ is bounded and
$$\lim_{n\to\infty} M_n = \lim_{n\to\infty} \frac{M_n}{d_n} = \lim_{n\to\infty} \|\omega_n - x_n\| = 0. \tag{21}$$
From (20) and the boundedness of $\{\kappa_n\}$, Lemma 1(i) shows that $\{\|x_n - r\|\}$ is bounded for any $r \in \Omega$. Consequently, $\{x_n\}$ is bounded, and so is $\{\nabla h(x_n)\}$. Since $\Omega$ is a nonempty, closed, and convex set, $P_\Omega \nabla h$ is $\varrho$-contractive. It follows from the Banach fixed-point theorem that there exists a unique $s \in \Omega$ such that $s = P_\Omega \nabla h(s)$.
Next, let $\lambda_n = \|x_n - s\|^2$. By (13) and (18), we have
$$\begin{aligned} \lambda_{n+1} &= \alpha_n \|\nabla h(x_n) - s\|^2 + \beta_n \lambda_n + \gamma_n \|\vartheta_n - s\|^2 - \alpha_n \beta_n \|\nabla h(x_n) - x_n\|^2 - \beta_n \gamma_n \|x_n - \vartheta_n\|^2 - \alpha_n \gamma_n \|\nabla h(x_n) - \vartheta_n\|^2 \\ &\le \alpha_n \|\nabla h(x_n) - s\|^2 + \beta_n \lambda_n + \gamma_n \|\omega_n - s\|^2 - 2\gamma_n \big( \delta_n \tau_n f_n(\omega_n) + \mu_n \nu_n f_n(\zeta_n) \big) - \beta_n \gamma_n \|x_n - \vartheta_n\|^2 \\ &= \alpha_n \|\nabla h(x_n) - s\|^2 + \beta_n \lambda_n - \beta_n \gamma_n \|x_n - \vartheta_n\|^2 - 2\gamma_n \big( \delta_n \tau_n f_n(\omega_n) + \mu_n \nu_n f_n(\zeta_n) \big) \\ &\quad + \gamma_n \big( \lambda_n + 2\chi_n \langle x_n - s, x_n - x_{n-1} \rangle + \chi_n^2 \|x_n - x_{n-1}\|^2 \big) \\ &\le \lambda_n + \alpha_n \|\nabla h(x_n) - s\|^2 + M_n - \beta_n \gamma_n \|x_n - \vartheta_n\|^2 - 2\gamma_n \big( \delta_n \tau_n f_n(\omega_n) + \mu_n \nu_n f_n(\zeta_n) \big). \end{aligned}$$
Rearranging the above inequality, we obtain
$$\beta_n \gamma_n \|x_n - \vartheta_n\|^2 + 2\gamma_n \big( \delta_n \tau_n f_n(\omega_n) + \mu_n \nu_n f_n(\zeta_n) \big) \le \lambda_n - \lambda_{n+1} + \alpha_n \|\nabla h(x_n) - s\|^2 + M_n. \tag{22}$$
Next, we consider the following two cases.
Case 1. Assume that there is $N \in \mathbb{N}$ such that $\lambda_{n+1} \le \lambda_n$ for all $n \ge N$. Then, $\{\lambda_n\}$ is convergent. From requirements (R4) and (R5), letting $n \to \infty$, the right-hand side of (22) tends to zero, and thus,
$$\lim_{n\to\infty} \|\vartheta_n - x_n\| = \lim_{n\to\infty} \|(I - P_{Q_n})A\omega_n\| = \lim_{n\to\infty} \|(I - P_{Q_n})A\zeta_n\| = 0. \tag{23}$$
Also, from $\alpha_n + \beta_n + \gamma_n = 1$,
$$\begin{aligned} \|x_{n+1} - x_n\| &\le \|x_{n+1} - \vartheta_n\| + \|\vartheta_n - x_n\| = \|\alpha_n \nabla h(x_n) + \beta_n x_n + \gamma_n \vartheta_n - \vartheta_n\| + \|\vartheta_n - x_n\| \\ &\le \alpha_n \|\nabla h(x_n) - x_n\| + (1 - \gamma_n)\|\vartheta_n - x_n\| + \|\vartheta_n - x_n\| \\ &= \alpha_n \|\nabla h(x_n) - x_n\| + (2 - \gamma_n)\|\vartheta_n - x_n\| \to 0 \quad \text{as } n \to \infty. \end{aligned} \tag{24}$$
Since $P_{Q_n}(A\omega_n) \in Q_n$, we have
$$q(A\omega_n) \le \langle \eta_n, (I - P_{Q_n})A\omega_n \rangle. \tag{25}$$
The boundedness of $\partial q$ implies that $\{\eta_n\}$ is bounded. By (23) and (25), we have
$$q(A\omega_n) \le \|\eta_n\| \, \|(I - P_{Q_n})A\omega_n\| \to 0 \quad \text{as } n \to \infty. \tag{26}$$
Since $\{x_n\}$ is bounded, there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ that converges weakly to some $x^* \in X$. Thus, from (21), we have $\omega_{n_k} \rightharpoonup x^*$ as $k \to \infty$, and so $A\omega_{n_k} \rightharpoonup Ax^*$ as $k \to \infty$.
Since $q$ is w-lsc, (26) yields
$$q(Ax^*) \le \liminf_{k\to\infty} q(A\omega_{n_k}) \le 0.$$
This implies that $Ax^* \in Q$. From $\vartheta_n \in C_n$,
$$c(\omega_n) \le \langle \lambda_n, \omega_n - \vartheta_n \rangle. \tag{27}$$
Now, the boundedness of $\partial c$ implies the boundedness of $\{\lambda_n\}$. Combining this with (21), (23), and (27), we obtain
$$c(\omega_n) \le \|\lambda_n\| \big( \|\omega_n - x_n\| + \|x_n - \vartheta_n\| \big) \to 0 \quad \text{as } n \to \infty.$$
Similarly, one can show that $c(x^*) \le 0$, i.e., $x^* \in C$. We now have $x^* \in \Omega$.
Next, let $h_n = \langle \nabla h(s) - s, x_n - s \rangle$ and $s_n = x_n + \gamma_n (\vartheta_n - x_n)$ for all $n \in \mathbb{N}$. Then,
$$\|s_n - x_n\| = \gamma_n \|\vartheta_n - x_n\|. \tag{28}$$
By (19), for all $n \ge n_0$,
$$\begin{aligned} \|s_n - s\| &= \|(1 - \gamma_n)(x_n - s) + \gamma_n (\vartheta_n - s)\| \le (1 - \gamma_n)\|x_n - s\| + \gamma_n \|\vartheta_n - s\| \\ &\le (1 - \gamma_n)\|x_n - s\| + \gamma_n \|x_n - s\| + \gamma_n \chi_n \|x_n - x_{n-1}\| = \|x_n - s\| + \gamma_n \chi_n \|x_n - x_{n-1}\|. \end{aligned} \tag{29}$$
From (14) and (24),
$$\limsup_{n\to\infty} h_{n+1} \le \limsup_{n\to\infty} \langle \nabla h(s) - s, x_{n+1} - x_n \rangle + \limsup_{n\to\infty} h_n = \lim_{k\to\infty} h_{n_k} = \langle \nabla h(s) - s, x^* - s \rangle \le 0. \tag{30}$$
Consequently, from (11) and (12),
$$\begin{aligned} \lambda_{n+1} &= \|(1 - \alpha_n)(s_n - s) + \alpha_n (\nabla h(x_n) - \nabla h(s)) - \alpha_n (x_n - s_n) - \alpha_n (s - \nabla h(s))\|^2 \\ &\le \|(1 - \alpha_n)(s_n - s) + \alpha_n (\nabla h(x_n) - \nabla h(s))\|^2 - 2\alpha_n \langle x_n - s_n + s - \nabla h(s), x_{n+1} - s \rangle \\ &\le (1 - \alpha_n)\|s_n - s\|^2 + \alpha_n \|\nabla h(x_n) - \nabla h(s)\|^2 + 2\alpha_n \langle s_n - x_n, x_{n+1} - s \rangle + 2\alpha_n h_{n+1}. \end{aligned}$$
The above inequality together with (28) and (29) implies that, for all $n \ge n_0$,
$$\begin{aligned} \lambda_{n+1} &\le (1 - \alpha_n)\big( \|x_n - s\| + \gamma_n \chi_n \|x_n - x_{n-1}\| \big)^2 + \alpha_n \varrho \lambda_n + 2\alpha_n \|s_n - x_n\| \, \|x_{n+1} - s\| + 2\alpha_n h_{n+1} \\ &\le (1 - d_n)\lambda_n + M_n + 2M\alpha_n \gamma_n \|\vartheta_n - x_n\| + 2\alpha_n h_{n+1} \le (1 - d_n)\lambda_n + b_n, \end{aligned} \tag{31}$$
where $b_n = d_n \left( \dfrac{M_n}{d_n} + \dfrac{2}{1 - \varrho}\big( M \|\vartheta_n - x_n\| + h_{n+1} \big) \right)$. Finally, from $\sum_{n=1}^{\infty} \alpha_n = \infty$, (21), (23), (30), (31), and Lemma 1(ii), we conclude that $\lim_{n\to\infty} \lambda_n = 0$.
Case 2. Suppose that for every $j \in \mathbb{N}$, there is $n_j \ge j$ such that $\lambda_{n_j} < \lambda_{n_j + 1}$.
By Lemma 2, there is $n^* \in \mathbb{N}$ such that, for all $n \ge n^*$, $\lambda_{\psi(n)} \le \lambda_{\psi(n)+1}$ and $\lambda_n \le \lambda_{\psi(n)+1}$, where $\psi(n) := \max\{ k \le n : \lambda_k < \lambda_{k+1} \}$. It now follows from (22) that, for all $n \ge n^*$,
$$\beta_{\psi(n)} \gamma_{\psi(n)} \|x_{\psi(n)} - \vartheta_{\psi(n)}\|^2 + 2\gamma_{\psi(n)} \big( \delta_{\psi(n)} \tau_{\psi(n)} f_{\psi(n)}(\omega_{\psi(n)}) + \mu_{\psi(n)} \nu_{\psi(n)} f_{\psi(n)}(\zeta_{\psi(n)}) \big) \le \alpha_{\psi(n)} \|\nabla h(x_{\psi(n)}) - s\|^2 + M_{\psi(n)}.$$
Since $\lim_{n\to\infty} \psi(n) = \infty$, we have $\lim_{n\to\infty} \alpha_{\psi(n)} = \lim_{n\to\infty} M_{\psi(n)} = 0$, which implies that
$$\lim_{n\to\infty} \|\vartheta_{\psi(n)} - x_{\psi(n)}\| = \lim_{n\to\infty} \|(I - P_{Q_{\psi(n)}})A\omega_{\psi(n)}\| = \lim_{n\to\infty} \|(I - P_{Q_{\psi(n)}})A\zeta_{\psi(n)}\| = 0.$$
Now, an argument analogous to that employed in the preceding case shows that
$$\limsup_{n\to\infty} h_{\psi(n)+1} \le 0.$$
By (31), for $n$ large enough, we have
$$\lambda_{\psi(n)+1} \le (1 - d_{\psi(n)})\lambda_{\psi(n)} + b_{\psi(n)} \le (1 - d_{\psi(n)})\lambda_{\psi(n)+1} + b_{\psi(n)}.$$
It follows that
$$\lambda_{\psi(n)+1} \le \frac{M_{\psi(n)}}{d_{\psi(n)}} + \frac{2}{1 - \varrho}\big( M \|\vartheta_{\psi(n)} - x_{\psi(n)}\| + h_{\psi(n)+1} \big).$$
Then, $\limsup_{n\to\infty} \lambda_{\psi(n)+1} \le 0$. Therefore,
$$\lim_{n\to\infty} \|x_{\psi(n)+1} - s\|^2 = \lim_{n\to\infty} \lambda_{\psi(n)+1} = 0.$$
From Lemma 2, $\lim_{n\to\infty} \lambda_n \le \lim_{n\to\infty} \lambda_{\psi(n)+1} = 0$. Finally, we conclude that $\{x_n\}$ converges strongly to $s$. The proof is now complete. □

4. Numerical Exemplifications

In this section, we investigate a signal recovery problem within the framework of compressed sensing. In mathematical terms, a signal recovery problem can be expressed as a system of linear equations with more unknowns than equations:
$$b = Ax + \varepsilon, \tag{32}$$
where $x \in \mathbb{R}^N$ is the original signal, $b \in \mathbb{R}^M$ is the observed signal contaminated with noise $\varepsilon$, and $A$ is an $M \times N$ filter matrix with $M < N$. It is well known that problem (32) can be solved through the least absolute shrinkage and selection operator (LASSO) problem:
$$\min \left\{ \tfrac{1}{2}\|A\omega - b\|_2^2 \ : \ \omega \in \mathbb{R}^N,\ \|\omega\|_1 \le \varsigma \right\}, \tag{33}$$
where $\varsigma > 0$ is a given constant.
where ς > 0 is a given constant.
Under specific conditions imposed on $A$, the solution to minimization problem (33) is equivalent to the $\ell_0$-norm solution of the linear system. For the considered SFP, we define $f(\omega) = \tfrac{1}{2}\|(I - P_Q)A\omega\|_2^2$, $C = \{\omega : \|\omega\|_1 \le \varsigma\}$, and $Q = \{b\}$. Since the metric projection onto the closed convex set $C$ does not have a closed-form solution, we utilize the subgradient projection. We define the convex function $c(\omega) = \|\omega\|_1 - \varsigma$ and, for a sequence $\{\omega_n\} \subset \mathbb{R}^N$, denote the level set $C_n$ by $\{\omega : c(\omega_n) \le \langle \lambda_n, \omega_n - \omega \rangle\}$, where $\lambda_n \in \partial c(\omega_n)$. Then, the orthogonal projection onto $C_n$ can be calculated using the following formula:
$$P_{C_n}(\omega) = \begin{cases} \omega + \dfrac{\langle \lambda_n, \omega_n - \omega \rangle - c(\omega_n)}{\|\lambda_n\|_2^2}\, \lambda_n & \text{if } c(\omega_n) > \langle \lambda_n, \omega_n - \omega \rangle, \\ \omega & \text{otherwise}. \end{cases}$$
We note that the subdifferential of $c$ at $\omega_n$ is given componentwise by
$$\big(\partial c(\omega_n)\big)_j = \begin{cases} \{1\} & \text{if } \omega_n^j > 0, \\ [-1, 1] & \text{if } \omega_n^j = 0, \\ \{-1\} & \text{if } \omega_n^j < 0, \end{cases}$$
where $\omega_n^j$ is the $j$th component of the vector $\omega_n$.
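Taking the sign vector as the subgradient, the relaxed projection formula above becomes a few lines of numpy; the function and example values below are our own sketch:

```python
import numpy as np

def proj_Cn(omega, omega_n, varsigma):
    """Relaxed projection onto C_n = {w : c(w_n) <= <lam_n, w_n - w>} for
    c(w) = ||w||_1 - varsigma, taking lam_n = sign(w_n) in the subdifferential."""
    lam = np.sign(omega_n)  # componentwise subgradient of the l1 norm
    slack = (np.linalg.norm(omega_n, 1) - varsigma) - np.dot(lam, omega_n - omega)
    if slack <= 0:
        return omega.copy()  # omega already lies in the half-space C_n
    return omega - (slack / np.dot(lam, lam)) * lam

# Projecting omega_n itself reduces its l1 norm to exactly varsigma in this example.
p = proj_Cn(np.array([2.0, -1.0]), np.array([2.0, -1.0]), 1.0)
```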
For the experiments, we evaluate the performance of five algorithms in solving problem (33): (1) IGCQHA, (2) M-IRCQA, (3) IRCQVA, (4) IHTCQA, and our proposed algorithm, (5) IVTCQA. For algorithms (1)–(4), the parameters were selected based on the experiments in [4,5,15,16], respectively. For algorithm (5), we conducted multiple experiments using various sets of parameters and selected the set that performed best.
The experimental setup is listed below:
- The original signal $x$ is generated uniformly from the interval $[-2, 2]$ with $k$ nonzero elements.
- The Gaussian matrix $A$ is generated using MATLAB's `randn(M,N)` function.
- The observation $b$ is generated by adding white Gaussian noise with a signal-to-noise ratio (SNR) of 40, and $\varsigma = k$.
- The vectors $x_0$ and $x_1$ are randomly generated.
- For all $n \in \mathbb{N}$, we set $\alpha_n = \frac{1}{n+1}$ and
$$\chi_n = \begin{cases} \min\left\{ \dfrac{1}{(n+1)^3 \|x_n - x_{n-1}\|_2},\ 0.3 \right\} & \text{if } x_n \ne x_{n-1}; \\ 0.3 & \text{otherwise}. \end{cases}$$
- For (1) IGCQHA, (2) M-IRCQA, and (3) IRCQVA, let $\rho_n = 0.3$ for all $n \in \mathbb{N}$.
- The vector $u$ is generated randomly for (1) IGCQHA and (4) IHTCQA.
- For (2) M-IRCQA and (5) IVTCQA, set $\gamma_n = \frac{4n}{5n+5}$ for all $n \in \mathbb{N}$.
- For (4) IHTCQA and (5) IVTCQA, let $\tau_1 = 0.01$, $\delta = 0.3$, $\psi_n = 0$, and $\phi_n = \frac{22}{(n+1)^{1.1}} + 1$ for all $n \in \mathbb{N}$.
- For (1) IGCQHA, let $\theta_n = \frac{1}{(n+1)^3}$ for all $n \in \mathbb{N}$, and set $f(\cdot) = 0.9(\cdot)$ for (3) IRCQVA.
- For (5) IVTCQA, let $\nu_1 = 0.01$, $\mu = 0.3$, $h(\omega) = \frac{9}{20}\sum_{j=1}^{N} \omega_j^2$, $\psi_n^* = \frac{1}{(n+1)^3}$, and $\phi_n^* = \frac{44}{(n+1)^{1.1}} + 1$ for all $n \in \mathbb{N}$. One can verify that these choices meet the algorithm's requirements.
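The data-generation part of the setup above translates directly to numpy; the random seed and the exact noise construction (noise power set from the signal power at 40 dB SNR) are our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, k = 2 ** 9, 2 ** 10, 2 ** 6  # Case 1 dimensions

# Sparse original signal: k nonzero entries drawn uniformly from [-2, 2].
x = np.zeros(N)
support = rng.choice(N, size=k, replace=False)
x[support] = rng.uniform(-2, 2, size=k)

# Gaussian sensing matrix (numpy analogue of MATLAB's randn(M, N)).
A = rng.standard_normal((M, N))

# Observation with additive white Gaussian noise at 40 dB SNR.
Ax = A @ x
noise_power = np.mean(Ax ** 2) / 10 ** (40 / 10)
b = Ax + rng.normal(0.0, np.sqrt(noise_power), size=M)
```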
Now, we consider three cases as follows:
  • Case 1: $M = 2^9$, $N = 2^{10}$, and $k = 2^6$;
  • Case 2: $M = 2^{10}$, $N = 2^{11}$, and $k = 2^7$;
  • Case 3: $M = 2^{11}$, $N = 2^{12}$, and $k = 2^8$.
We evaluate the accuracy of the recovered signals using the mean squared error (MSE),
$$\mathrm{MSE}_n = \frac{1}{N}\|x_n - x\|_2^2,$$
and stop each algorithm once $\mathrm{MSE}_n < 5 \times 10^{-5}$.
The computations were performed using MATLAB R2021a on an iMac equipped with an Apple M1 chip and 16 GB of RAM. The results obtained are presented in Table 1 and Figures 1–6.
After conducting multiple runs of the experiment, we found that $\chi = 0.3$ consistently met the specified criteria and yielded the best results for this example. Based on these results, our proposed algorithm, (5) IVTCQA, stands out for its computational efficiency, outperforming the other four algorithms in terms of both execution time and the number of iterations required. This enhanced performance demonstrates the effectiveness of our algorithm in solving the considered problem.

5. Conclusions

We proposed a novel inertial relaxed CQ algorithm with two adaptive step sizes for solving the SFP. Our main theorem establishes the strong convergence of the proposed algorithm under certain conditions. We also applied the algorithm to the signal recovery problem in compressed sensing, where numerical experiments demonstrated its efficiency and reliability: it outperformed the existing algorithms compared in terms of accuracy and speed. The algorithm's advantages over existing methods are its inertial term, which helps to accelerate convergence, and its two adaptive step sizes, which allow it to adjust automatically during the iteration process.

Author Contributions

Conceptualization, R.S.; Methodology, T.S., T.C. and T.M.; Software, T.S. and R.S.; Validation, R.S., T.C. and K.K.; Formal analysis, K.K.; Investigation, R.S. and T.M.; Data curation, T.C., K.K. and T.M.; Writing—original draft, T.S. and R.S.; Writing—review & editing, T.C. and K.K.; Supervision, T.S.; Funding acquisition, T.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the CMU Mid-Career Research Fellowship program, Chiang Mai University, Faculty of Science, Chiang Mai University, and Chiang Mai University.

Data Availability Statement

Data are contained within the article.

Acknowledgments

This research was partially supported by the CMU Mid-Career Research Fellowship program, Chiang Mai University, Faculty of Science, Chiang Mai University, and Chiang Mai University.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239. [Google Scholar] [CrossRef]
  2. Che, H.; Zhuang, Y.; Wang, Y.; Chen, H. A relaxed inertial and viscosity method for split feasibility problem and applications to image recovery. J. Glob. Optim. 2023, 87, 619–639. [Google Scholar] [CrossRef]
  3. Dong, Q.L.; He, S.; Rassias, M.T. General splitting methods with linearization for the split feasibility problem. J. Glob. Optim. 2021, 79, 813–836. [Google Scholar] [CrossRef]
  4. Kesornprom, S.; Cholamjiak, P. A new iterative scheme using inertial technique for the split feasibility problem with application to compressed sensing. Thai J. Math. 2020, 18, 315–332. [Google Scholar]
  5. Ma, X.; Liu, H. An inertial Halpern-type CQ algorithm for solving split feasibility problems in Hilbert spaces. J. Appl. Math. Comput. 2021, 68, 1699–1717. [Google Scholar] [CrossRef]
  6. Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441–453. [Google Scholar] [CrossRef]
  7. Yang, Q. The relaxed CQ algorithm for solving the split feasibility problem. Inverse Probl. 2004, 20, 1261–1266. [Google Scholar] [CrossRef]
  8. Yang, Q. On variable-step relaxed projection algorithm for variational inequalities. J. Math. Anal. Appl. 2005, 302, 166–179. [Google Scholar] [CrossRef]
  9. López, G.; Martín-Márquez, V.; Wang, F.; Xu, H.K. Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012, 28, 085004. [Google Scholar] [CrossRef]
  10. Dang, Y.; Sun, J.; Xu, H. Inertial accelerated algorithm for solving a split feasibility problem. J. Ind. Manag. Optim. 2017, 13, 1383–1394. [Google Scholar] [CrossRef]
  11. Gibali, A.; Liu, L.W.; Tang, Y.C. Note on the modified relaxation CQ algorithm for the split feasibility problem. Optim. Lett. 2018, 12, 817–830. [Google Scholar] [CrossRef]
  12. Kesornprom, S.; Pholasa, N.; Cholamjiak, P. On the convergence analysis of the gradient-CQ algorithms for the split feasibility problem. Numer. Algorithms 2020, 84, 997–1017. [Google Scholar] [CrossRef]
  13. Polyak, B.T. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17. [Google Scholar] [CrossRef]
  14. Alvarez, F.; Attouch, H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 2001, 9, 3–11. [Google Scholar] [CrossRef]
  15. Suantai, S.; Pholasa, N.; Cholamjiak, P. The modified inertial relaxed CQ algorithm for solving the split feasibility problems. J. Ind. Manag. Optim. 2018, 14, 1595–1615. [Google Scholar] [CrossRef]
  16. Gibali, A.; Mai, D.T.; Vinh, N.T. A new relaxed CQ algorithm for solving split feasibility problems in Hilbert spaces and its applications. J. Ind. Manag. Optim. 2019, 15, 963–984. [Google Scholar] [CrossRef]
  17. Maingé, P.E. Approximation methods for common fixed points of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2007, 325, 469–479. [Google Scholar] [CrossRef]
  18. Xu, H.K. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256. [Google Scholar] [CrossRef]
  19. Maingé, P.E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16, 899–912. [Google Scholar] [CrossRef]
  20. Sahu, D.R.; Cho, Y.J.; Dong, Q.L.; Kashyap, M.R.; Li, X.H. Inertial relaxed CQ algorithms for solving a split feasibility problem in Hilbert spaces. Numer. Algorithms 2021, 87, 1075–1095. [Google Scholar] [CrossRef]
Figure 1. From top to bottom: the original signal, the measurement, and the recovered signals from the five algorithms for Case 1.
Figure 2. From top to bottom: the original signal, the measurement, and the recovered signals from the five algorithms for Case 2.
Figure 3. From top to bottom: the original signal, the measurement, and the recovered signals from the five algorithms for Case 3.
Figure 4. Plots of MSE_n over iterations for Case 1.
Figure 5. Plots of MSE_n over iterations for Case 2.
Figure 6. Plots of MSE_n over iterations for Case 3.
Table 1. Numerical comparison of five algorithms.

| Algorithm    | Case 1 Iter | Case 1 CPU Time | Case 2 Iter | Case 2 CPU Time | Case 3 Iter | Case 3 CPU Time |
|--------------|-------------|-----------------|-------------|-----------------|-------------|-----------------|
| (1) IGCQHA   | 2140        | 1.4103          | 2878        | 4.2726          | 4266        | 33.3395         |
| (2) M-IRCQA  | 3919        | 1.3405          | 4338        | 3.3317          | 6906        | 28.0780         |
| (3) IRCQVA   | 505         | 0.2114          | 542         | 0.4056          | 732         | 3.0214          |
| (4) IHTCQA   | 2183        | 0.7261          | 2818        | 2.1335          | 4343        | 18.2522         |
| (5) IVTCQA   | 203         | 0.1579          | 234         | 0.3503          | 323         | 2.6396          |
