Abstract
This article presents a novel inertial relaxed CQ algorithm for solving split feasibility problems. The algorithm incorporates an inertial term together with two adaptive step sizes. A strong convergence theorem is established under standard conditions. Additionally, we apply the algorithm to signal recovery problems and evaluate its performance against existing techniques from the literature.
Keywords:
split feasibility problem; signal recovery problem; inertial technique; CQ algorithm; strong convergence
MSC:
47J25; 65K05; 65K10
1. Introduction
Throughout this article, all sets are assumed to be nonempty. Let $\mathbb{N}$, $\mathbb{R}$, and $\mathbb{R}^N$ denote the set of positive integers, the set of real numbers, and the set of ordered $N$-tuples of real numbers, where $N \in \mathbb{N}$, respectively. Let $X$ and $Y$ be two inner product spaces with the induced norm $\|\cdot\|$, and let $A : X \to Y$ be a bounded linear operator. Also, let $C$ and $Q$ be closed and convex subsets of $X$ and $Y$, respectively. We denote a useful subset of $X$ as follows:
$$\Omega := \{x \in C : Ax \in Q\}.$$
The problem of finding a point in $\Omega$ is well known as the split feasibility problem (SFP). Introduced by Censor and Elfving [] in 1994, the SFP has garnered significant attention. Its applications span diverse fields, including image and signal processing, inverse problems, and machine learning (for example, see [,,,]). Initially, various algorithms were proposed to solve the SFP in both finite- and infinite-dimensional spaces, all of which required the existence of the inverse of $A$. The CQ algorithm, introduced by Byrne [], stands out as one of the most notable alternatives; it is computationally efficient when the metric projections onto $C$ and $Q$ can be conveniently computed. The algorithm generates the sequence $\{x_n\}$ by
$$x_{n+1} = P_C\big(x_n - \gamma A^*(I - P_Q)Ax_n\big),$$
where $P_C$ and $P_Q$ are the (nearest point) metric projections onto $C$ and $Q$, respectively; $A^*$ is the adjoint operator of $A$; and $\gamma$ (called the step size) lies in $\big(0, 2/\|A\|^2\big)$. Nevertheless, it may be impractical or computationally intensive to determine the metric projections exactly. More importantly, the step size depends on the operator norm, which can be computationally challenging or difficult to estimate accurately.
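For concreteness, the following is a minimal sketch of the classical CQ iteration in the finite-dimensional setting, assuming the projections onto $C$ and $Q$ are supplied as callables; the function and variable names are illustrative, not part of the original paper.

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, gamma, n_iter=100):
    """Classical CQ iteration: x_{n+1} = P_C(x_n - gamma * A^T (I - P_Q) A x_n).

    proj_C, proj_Q : callables returning the metric projections onto C and Q
    gamma          : fixed step size, chosen in (0, 2 / ||A||^2)
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        Ax = A @ x
        residual = Ax - proj_Q(Ax)               # (I - P_Q) A x_n
        x = proj_C(x - gamma * (A.T @ residual))
    return x
```

Each iteration requires only two matrix-vector products and the two projections, which is what makes the scheme attractive when these projections are cheap.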
In real-world applications, the sets $C$ and $Q$ are typically defined as the level sets of convex functions given by
$$C = \{x \in X : c(x) \le 0\} \quad \text{and} \quad Q = \{y \in Y : q(y) \le 0\},$$
where $c : X \to \mathbb{R}$ and $q : Y \to \mathbb{R}$ are lower semi-continuous convex functions, and the subdifferentials $\partial c$ and $\partial q$ are assumed to be bounded operators (i.e., bounded on bounded sets).
However, when $C$ and $Q$ are complicated, the nearest point projections $P_C$ and $P_Q$ have no closed forms, and thus, the computation is expensive. In 2004, Yang [] presented a modification of the CQ algorithm, called the relaxed CQ algorithm, by replacing $C$ and $Q$ with the half-spaces $C_n$ and $Q_n$ given by
$$C_n = \{x \in X : c(x_n) + \langle \xi_n, x - x_n\rangle \le 0\}, \quad \xi_n \in \partial c(x_n),$$
and
$$Q_n = \{y \in Y : q(Ax_n) + \langle \zeta_n, y - Ax_n\rangle \le 0\}, \quad \zeta_n \in \partial q(Ax_n).$$
Note that $P_{C_n}$ and $P_{Q_n}$ now have closed forms. Since these projections are easily calculated, this method is very practical.
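As an illustration of these closed forms, the metric projection onto a generic half-space $\{z : \langle a, z\rangle \le \beta\}$ can be computed in a few lines; a minimal sketch, with illustrative names:

```python
import numpy as np

def project_halfspace(x, a, beta):
    """Metric projection of x onto the half-space {z : <a, z> <= beta}.

    If the constraint already holds, x is returned unchanged; otherwise x is
    moved along a onto the bounding hyperplane <a, z> = beta.
    """
    violation = float(np.dot(a, x)) - beta
    if violation <= 0.0:
        return x
    return x - (violation / float(np.dot(a, a))) * a
```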
In the following, we define, for $x \in X$,
$$f_n(x) = \tfrac{1}{2}\,\|(I - P_{Q_n})Ax\|^2.$$
Thus, for $x \in X$,
$$\nabla f_n(x) = A^*(I - P_{Q_n})Ax.$$
Next, in 2005, Yang [] solved the SFP using a variable step size defined by
where the parameter sequence $\{\rho_n\}$ satisfies suitable conditions. Note that using this step size still requires two strong conditions, namely the boundedness of $Q$ and the full column rank of $A$. Later, López et al. [] removed these two conditions by altering the step size to
$$\gamma_n = \frac{\rho_n\, f_n(x_n)}{\|\nabla f_n(x_n)\|^2},$$
where the sequence $\{\rho_n\} \subset (0, 4)$ satisfies $\inf_{n} \rho_n (4 - \rho_n) > 0$. This step size is effective because its convergence result can be proven without relying on the matrix norm or any specific properties of $Q$ and $A$. Recent developments have introduced several algorithms that eliminate the need for the matrix norm in solving the SFP and other related problems; for example, see [,,].
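A sketch of this adaptive rule in the finite-dimensional case, assuming $f_n$ and $\nabla f_n$ are as defined above; all names are illustrative:

```python
import numpy as np

def adaptive_step_size(A, x, proj_Qn, rho):
    """Lopez-et-al.-type step size gamma_n = rho_n * f_n(x_n) / ||grad f_n(x_n)||^2,
    with f_n(x) = 0.5 * ||(I - P_{Q_n}) A x||^2 and grad f_n(x) = A^T (I - P_{Q_n}) A x.
    No estimate of ||A|| is required."""
    Ax = A @ x
    residual = Ax - proj_Qn(Ax)          # (I - P_{Q_n}) A x
    grad = A.T @ residual                # gradient of f_n at x
    denom = float(np.dot(grad, grad))
    if denom == 0.0:                     # x already satisfies A x in Q_n
        return 0.0
    return rho * 0.5 * float(np.dot(residual, residual)) / denom
```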
Subsequently, numerous algorithms and novel step sizes were proposed with the aim of enhancing the convergence rate. Among these, the inertial technique has gained significant attention. Introduced by Polyak [] in 1964 as the heavy ball method, it serves as an acceleration technique for solving convex minimization problems; its distinctive feature lies in the use of the two preceding iterates to compute the next one. Alvarez and Attouch [] extended the concept of the heavy ball method to the proximal point algorithm, leading to the development of the inertial proximal point algorithm. This algorithm is formulated as follows:
$$x_{n+1} = (I + \lambda_n F)^{-1}\big(x_n + \theta_n (x_n - x_{n-1})\big),$$
where $F$ is a maximal monotone operator. Convergence of $\{x_n\}$ to a zero point of $F$ is proven under the assumptions that the inertial sequence $\{\theta_n\}$ is non-decreasing and satisfies a suitable summability condition.
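The following sketch illustrates the two ingredients of this scheme, the inertial extrapolation from the two preceding iterates and the resolvent (proximal) step; the resolvent of $F$ and the parameter sequences are assumed to be supplied, and all names are illustrative:

```python
import numpy as np

def inertial_proximal_point(resolvent, x0, x1, theta, lam, n_iter=100):
    """Inertial proximal point iteration:
        y_n     = x_n + theta(n) * (x_n - x_{n-1})   # inertial extrapolation
        x_{n+1} = J_{lam(n) F}(y_n)                  # resolvent (proximal) step

    resolvent(y, lam) should evaluate (I + lam * F)^{-1}(y) for the maximal
    monotone operator F; theta(n) and lam(n) return the parameter sequences.
    """
    x_prev = np.asarray(x0, dtype=float)
    x = np.asarray(x1, dtype=float)
    for n in range(1, n_iter + 1):
        y = x + theta(n) * (x - x_prev)      # uses the two preceding iterates
        x_prev, x = x, resolvent(y, lam(n))
    return x
```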
In [], the authors proposed a modified inertial relaxed CQ algorithm (IRCQVA) by incorporating a viscosity approximation method and a novel step size strategy to solve the SFP as follows: for any and all ,
where f is a contraction on , and
It was shown that the sequence generated by (6) converges strongly to a point in under the conditions , and . Subsequently, in 2019, Gibali et al. [] obtained a new inertial relaxed CQ algorithm (M-IRCQA) to solve SFP (1) given as follows: for any and all , select such that , where and
for some . Compute
where , and are defined as in (7). It was also shown that the sequence generated by (8) converges strongly to the minimum-norm element of under the conditions that , and .
In a recent study published in [], the authors presented the inertial gradient CQ algorithm (IGCQHA). This algorithm employs a Halpern iteration with a new step size to solve the SFP. The algorithm is given as follows: for any and all ,
where , , and As a result, the sequence generated by (9) converges strongly to under the conditions that , and .
Additionally, Ma and Liu [] recently proposed the inertial Halpern-type CQ algorithm (IHTCQA) for solving the SFP with a step size that is bounded away from 0. Their algorithm scheme reads as follows: for any and all , select such that
where . Compute
using the step size scheme as follows: for any and ,
where , , and It follows that the sequence defined by (10) converges strongly to under the conditions that , and .
In this article, we propose a novel inertial relaxed CQ algorithm with two adaptive step sizes for solving the SFP, combining inertial acceleration with adaptive step size control. The proposed algorithm has several advantages over existing methods. First, it incorporates an inertial term, which helps to accelerate convergence. Second, it uses two adaptive step sizes, which allows the algorithm to adjust its step size automatically during the iteration process. In Section 3, we provide a comprehensive convergence analysis that establishes the strong convergence of the proposed algorithm. In Section 4, numerical experiments demonstrate its efficiency and reliability; it outperforms existing algorithms in terms of accuracy and speed. These advantages make the proposed algorithm a promising tool for solving a wide range of optimization problems.
2. Preliminaries
In this section, we present some notations, definitions, and results that will be used in the next section.
We denote ⇀ and → as weak and strong convergence, respectively.
Proposition 1.
For and such that ,
Definition 1.
A mapping $S : X \to X$ is said to be $L$-Lipschitz if there exists an $L > 0$ satisfying, for all $u, v \in X$,
$$\|Su - Sv\| \le L\,\|u - v\|.$$
The mapping $S$ is called an $L$-contraction when $L \in (0, 1)$, and if $L = 1$, then $S$ is called nonexpansive.
For $x \in X$, define $P_C : X \to C$ by
$$P_C x = \operatorname*{arg\,min}_{z \in C} \|x - z\|.$$
It is well known that $P_C$ is the metric projection onto $C$ and that, for any $x \in X$ and $z \in C$,
$$\langle x - P_C x,\; z - P_C x \rangle \le 0.$$
It is clear that $P_C$ is nonexpansive.
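As a simple concrete instance (not used later in the paper), the metric projection onto a closed ball admits a closed form; a minimal sketch, with illustrative names:

```python
import numpy as np

def project_ball(x, center, radius):
    """Metric projection onto the closed ball {z : ||z - center|| <= radius}."""
    d = x - center
    dist = float(np.linalg.norm(d))
    if dist <= radius:
        return x
    return center + (radius / dist) * d
```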
Definition 2.
Let $c : X \to \mathbb{R}$ be a function. Then, we have the following:
- (i)
- A function c is said to be weakly lower semi-continuous (w-lsc) at $z$ if, for any sequence $\{z_n\}$ such that $z_n \rightharpoonup z$, it holds that $c(z) \le \liminf_{n \to \infty} c(z_n)$.
- (ii)
- A function $c$ is convex if, for all $u, v \in X$ and all $t \in [0, 1]$, $c(tu + (1 - t)v) \le t\,c(u) + (1 - t)\,c(v)$.
Proposition 2.
A differentiable function $c$ is convex if and only if, for all $u, v \in X$, $c(u) \ge c(v) + \langle \nabla c(v), u - v \rangle$.
Definition 3.
An element $\xi \in X$ is said to be a subgradient of $c$ at $b$ if, for all $z \in X$, $c(z) \ge c(b) + \langle \xi, z - b \rangle$.
This relation is called the subdifferential inequality.
Definition 4.
A function $c$ is subdifferentiable at $z$ if it has at least one subgradient at $z$. A function $c$ is called subdifferentiable if it is subdifferentiable at all $z \in X$.
The set of subgradients of $c$ at the point $z$ is called the subdifferential of $c$ at $z$, and it is denoted by $\partial c(z)$. For a differentiable convex function, the subdifferential reduces to the gradient.
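As a concrete one-dimensional example (applied componentwise to the $\ell_1$-norm in Section 4), the subdifferential of the absolute value function is
$$\partial |\cdot|(z) = \begin{cases} \{1\}, & z > 0,\\ [-1, 1], & z = 0,\\ \{-1\}, & z < 0, \end{cases}$$
so the absolute value is subdifferentiable everywhere even though it is not differentiable at $0$.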
We next collect some necessary lemmas for proving our main result.
Lemma 1
([,]). Let and be sequences of and let be a sequence of such that . For a sequence , suppose that there is such that
Then, we have the following statements:
- (i)
- If there exists some such that , then is bounded.
- (ii)
- when and
Lemma 2
([]). For a sequence if there exists a subsequence such that for all then we have the following:
- (i)
- where
- (ii)
- is nondecreasing such that for all ,
3. Main Result
In this section, we first list all requirements for our algorithm below:
- (R1)
- is a bounded linear operator with its adjoint
- (R2)
- (R3)
- $h$ is a function such that $\nabla h$ is contractive (note that this implies that h is differentiable).
- (R4)
- is a sequence for which , where satisfies and (see an example of in Section 4).
- (R5)
- and are sequences in such that and for some
- (R6)
- are sequences such that , , and and are sequences in such that and .
We are now ready to present our algorithm (IVTCQA) (Algorithm 1).
Algorithm 1 Inertial viscosity-type CQ algorithm (IVTCQA)
Initialization: Take , , and set n = 1.
Iterative Steps: Construct using the following steps:
Step 1. Set and compute
Step 3. Calculate
Replace n with n + 1 and then repeat Step 1.
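For readers who want a computational picture, the following Python sketch combines the ingredients named in (R1)-(R6) and recalled in the Introduction: an inertial extrapolation, a relaxed CQ step with a López-type adaptive step size onto the half-space $C_n$, and a viscosity-type correction. It is an illustrative template only, not a verbatim transcription of Algorithm 1; the exact update rules, the second adaptive step size, and the parameter choices are those displayed in Algorithm 1, and all names here (proj_Cn, proj_Qn, contraction, theta, alpha) are assumptions of the sketch.

```python
import numpy as np

def inertial_relaxed_cq_sketch(A, proj_Cn, proj_Qn, contraction, x0, x1,
                               theta, alpha, rho=1.0, n_iter=200):
    """Illustrative inertial viscosity-type relaxed CQ iteration (NOT Algorithm 1
    verbatim): inertial extrapolation, a Lopez-type adaptive step size, a relaxed
    projection onto the half-space C_n, and a viscosity-type correction.

    proj_Cn(x, n), proj_Qn(y, n) : projections onto the half-spaces C_n and Q_n
    contraction : contraction mapping driving the viscosity-type step
    theta(n)    : inertial parameters; alpha(n): viscosity weights in (0, 1)
    """
    x_prev = np.asarray(x0, dtype=float)
    x = np.asarray(x1, dtype=float)
    for n in range(1, n_iter + 1):
        w = x + theta(n) * (x - x_prev)          # inertial extrapolation
        Aw = A @ w
        residual = Aw - proj_Qn(Aw, n)           # (I - P_{Q_n}) A w
        grad = A.T @ residual
        denom = float(np.dot(grad, grad))
        gamma = rho * 0.5 * float(np.dot(residual, residual)) / denom if denom > 0 else 0.0
        z = proj_Cn(w - gamma * grad, n)         # relaxed CQ step
        x_prev, x = x, alpha(n) * contraction(z) + (1.0 - alpha(n)) * z  # viscosity-type step
    return x
```

Note that only matrix-vector products with A and A.T and the two half-space projections are needed per iteration in this sketch.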
Prior to presenting our main theorem, we establish the following necessary results.
Proposition 3.
Let and be the sequences generated by the IVTCQA. If , then .
Proof.
The proof is similar to that of Proposition 3.2 (1) in []. □
Lemma 3.
For and generated by the IVTCQA, and , where and .
Proof.
This proof is analogous to that of Lemma 3.1 in []. □
We now proceed to present our main result.
Theorem 1.
Let be a sequence generated by the IVTCQA. Then, it converges strongly to
Proof.
Let . Since and , then and . It follows that , and thus,
Also, we have
Set and .
By Lemma 3, and Then, there exists such that and for all . It follows from the properties of and that
Then, from (18), we have that , and thus,
From the -contractivity of and the property of ,
Combining the above inequality with (19), we obtain that, for all ,
where and .
Next, we set where . Since , is bounded,
From (20) and the boundedness of , by Lemma 1 , is bounded for any Subsequently, is bounded and so is Since is a nonempty, closed, and convex set, we have that is -contractive. It follows from the Banach fixed-point theorem that there exists a unique such that
Next, let . By (13) and (18), we have that
Rearranging the above inequality, we have that
Next, we consider the following two cases.
Case 1. Assume that there is such that for all , holds. Then, is convergent. Also, from requirements (R4) and (R5), when taking , the right-hand side of (22) tends to zero, and thus,
Also, from
From , we have that
We know that is bounded, so there exists a subsequence of , which converges weakly to . Thus, from (21), we have as , and so as .
This implies that . From ,
Now, the boundedness of implies the boundedness of Combining this with (21), (23), and (27), we obtain that
Similarly, one can show that , i.e., . We now have that .
Next, let and for all . Then,
By (19), for all ,
Consequently, from (11) and (12),
The above inequality together with (28) and (29) implies that for all ,
where . Finally, from , (21), (23), (30), (31), and Lemma 1 , we can conclude that
Case 2. Suppose that for all , there is such that .
From Lemma 2, there is such that for all and , where Now, it follows from (22) that for all ,
Since , then , which implies that
Now, an analogous argument to that employed in the preceding case shows that
It follows that
Then, . Therefore,
From Lemma 2, Finally, we can conclude that converges strongly to s. The proof is now complete. □
4. Numerical Exemplifications
In this section, we investigate a signal recovery problem within the framework of compressed sensing. In mathematical terms, a signal recovery problem can be expressed as a system of linear equations with more unknowns than equations:
$$b = Ax + \varepsilon, \tag{32}$$
where $x \in \mathbb{R}^N$ is the original signal, $b \in \mathbb{R}^M$ is the observed signal with noise $\varepsilon$, and $A$ is the filter matrix with $M < N$. It is well known that problem (32) can be solved through the least absolute shrinkage and selection operator (LASSO) problem:
$$\min_{x \in \mathbb{R}^N} \; \frac{1}{2}\|Ax - b\|_2^2 \quad \text{subject to} \quad \|x\|_1 \le t, \tag{33}$$
where $t > 0$ is a given constant.
Under specific conditions imposed on $A$, the solution to minimization problem (33) is equivalent to the -norm solution of the linear system. For the considered SFP, we define $X = \mathbb{R}^N$, $Y = \mathbb{R}^M$, $C = \{x \in \mathbb{R}^N : \|x\|_1 \le t\}$, and $Q = \{b\}$. Since the metric projection onto the closed convex set $C$ does not have a closed-form solution, we utilize the subgradient projection. We define the convex function $c(x) = \|x\|_1 - t$ and, for a sequence $\{x_n\}$, denote the level set by $C_n = \{x \in \mathbb{R}^N : c(x_n) + \langle \xi_n, x - x_n \rangle \le 0\}$, where $\xi_n \in \partial c(x_n)$. Then, the orthogonal projection onto $C_n$ can be calculated using the following formula:
$$P_{C_n}(x) = \begin{cases} x - \dfrac{c(x_n) + \langle \xi_n, x - x_n \rangle}{\|\xi_n\|^2}\,\xi_n, & \text{if } c(x_n) + \langle \xi_n, x - x_n \rangle > 0,\\[4pt] x, & \text{otherwise}. \end{cases}$$
We note that the subdifferential of $c$ at $x_n$ is
$$\partial c(x_n) = \left\{ \xi \in \mathbb{R}^N : \xi_j \in \begin{cases} \{1\}, & (x_n)_j > 0,\\ [-1, 1], & (x_n)_j = 0,\\ \{-1\}, & (x_n)_j < 0, \end{cases} \right\},$$
where $(x_n)_j$ is the $j$th component of the vector $x_n$.
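A sketch of this subgradient projection for the $\ell_1$ level set, assuming $c(x) = \|x\|_1 - t$ as above; taking the zero subgradient at zero components is one valid selection, and all names are illustrative:

```python
import numpy as np

def subgradient_l1(x):
    """A subgradient of c(x) = ||x||_1 - t at x: sign(x) componentwise
    (0 is a valid choice of subgradient at components where x_j = 0)."""
    return np.sign(x)

def project_level_set(x, x_n, t):
    """Subgradient projection onto C_n = {z : c(x_n) + <xi_n, z - x_n> <= 0},
    a half-space containing C = {z : ||z||_1 <= t}."""
    xi = subgradient_l1(x_n)
    violation = (np.linalg.norm(x_n, 1) - t) + float(np.dot(xi, x - x_n))
    if violation <= 0.0:
        return x
    return x - (violation / float(np.dot(xi, xi))) * xi
```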
For the experiments, we evaluate the performance of five algorithms in solving problem (33): (1) IGCQHA, (2) M-IRCQA, (3) IRCQVA, (4) IHTCQA, and our proposed algorithm, (5) IVTCQA. For algorithms (1)–(4), the parameters were selected based on the experiments in [,,,], respectively. For algorithm (5), we conducted multiple experiments using various sets of parameters and selected the set that performed best.
The experimental setup is listed below:
- The original signal x is generated uniformly from the interval with k nonzero elements.
- The Gaussian matrix A is generated using MATLAB’s function.
- The observation b is generated by adding white Gaussian noise with a signal-to-noise ratio (SNR) of 40 and .
- The vectors and are randomly generated.
- For all , we set and
- For (1) IGCQHA, (2) M-IRCQA, and (3) IRCQVA, let for all
- The vector u is generated randomly for (1) IGCQHA and (4) IHTCQA.
- For (2) M-IRCQA and (5) IVTCQA, set for all
- For (4) IHTCQA and (5) IVTCQA, let , , , and for all
- For (1) IGCQHA, let for any , and set for (3) IRCQVA.
- For (5) IVTCQA, let , , , , and for all We can verify that these meet our algorithm’s requirements.
Now, we consider three cases as follows:
- Case 1: and
- Case 2: and
- Case 3: and
We evaluate the accuracy of the recovered signals using the mean squared error (MSE), given as
$$\mathrm{MSE}_n = \frac{1}{N}\,\|x_n - x\|_2^2,$$
where $x_n$ is the recovered signal at the $n$th iteration and $x$ is the original signal.
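The experiments in the paper were run in MATLAB; the following NumPy sketch shows an equivalent way to generate a test instance and to evaluate $\mathrm{MSE}_n$. The sparsity pattern, the value range of the nonzero entries, and the noise scaling are assumptions of the sketch, not specifications from the paper.

```python
import numpy as np

def make_instance(N, M, k, snr_db=40.0, seed=0):
    """Generate a k-sparse signal x, a Gaussian matrix A (M x N), and a noisy
    observation b = A x + noise with the prescribed SNR (in dB)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(N)
    support = rng.choice(N, size=k, replace=False)
    x[support] = rng.uniform(-2.0, 2.0, size=k)   # value range is illustrative
    A = rng.standard_normal((M, N))
    Ax = A @ x
    noise = rng.standard_normal(M)
    noise *= np.linalg.norm(Ax) / (np.linalg.norm(noise) * 10.0 ** (snr_db / 20.0))
    return x, A, Ax + noise

def mse(x_n, x_true):
    """Mean squared error between the n-th iterate and the original signal."""
    return float(np.mean((x_n - x_true) ** 2))
```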
The computations were performed using MATLAB R2021a on an iMac equipped with an Apple M1 chip and 16 GB of RAM. The results obtained are presented in Table 1 and Figures 1–6.

Table 1.
Numerical comparison of five algorithms.

Figure 1.
From top to bottom: the original signal, the measurement, and the recovered signals from the five algorithms for Case 1.

Figure 2.
From top to bottom: the original signal, the measurement, and the recovered signals from the five algorithms for Case 2.

Figure 3.
From top to bottom: the original signal, the measurement, and the recovered signals from the five algorithms for Case 3.

Figure 4.
Plots of $\mathrm{MSE}_n$ versus the number of iterations for Case 1.

Figure 5.
Plots of $\mathrm{MSE}_n$ versus the number of iterations for Case 2.

Figure 6.
Plots of $\mathrm{MSE}_n$ versus the number of iterations for Case 3.
After conducting multiple runs of the experiment, we determined that the chosen parameter value consistently met the specified criteria and yielded the best results for the given example. Based on the results above, our proposed algorithm, (5) IVTCQA, stands out for its computational efficiency, outperforming the other four algorithms in terms of both execution time and the number of iterations required. This enhanced performance demonstrates the effectiveness of our algorithm in solving the considered problem.
5. Conclusions
We proposed a novel inertial relaxed CQ algorithm with two adaptive step sizes to solve the SFP. Our main theorem establishes the strong convergence of the proposed algorithm under certain conditions. We also applied the algorithm to the problem of signal recovery in compressed sensing. Numerical experiments demonstrated its efficiency and reliability: it outperformed existing algorithms in terms of accuracy and speed. The proposed algorithm has two main advantages over existing methods: it incorporates an inertial term, which accelerates convergence, and it uses two adaptive step sizes, which allows it to adjust its step size automatically during the iteration process.
Author Contributions
Conceptualization, R.S.; Methodology, T.S., T.C. and T.M.; Software, T.S. and R.S.; Validation, R.S., T.C. and K.K.; Formal analysis, K.K.; Investigation, R.S. and T.M.; Data curation, T.C., K.K. and T.M.; Writing—original draft, T.S. and R.S.; Writing—review & editing, T.C. and K.K.; Supervision, T.S.; Funding acquisition, T.S. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the CMU Mid-Career Research Fellowship program, Chiang Mai University, Faculty of Science, Chiang Mai University, and Chiang Mai University.
Data Availability Statement
Data are contained within the article.
Acknowledgments
This research was partially supported by the CMU Mid-Career Research Fellowship program, Chiang Mai University, Faculty of Science, Chiang Mai University, and Chiang Mai University.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239. [Google Scholar] [CrossRef]
- Che, H.; Zhuang, Y.; Wang, Y.; Chen, H. A relaxed inertial and viscosity method for split feasibility problem and applications to image recovery. J. Glob. Optim. 2023, 87, 619–639. [Google Scholar] [CrossRef]
- Dong, Q.L.; He, S.; Rassias, M.T. General splitting methods with linearization for the split feasibility problem. J. Glob. Optim. 2021, 79, 813–836. [Google Scholar] [CrossRef]
- Kesornprom, S.; Cholamjiak, P. A new iterative scheme using inertial technique for the split feasibility problem with application to compressed sensing. Thai J. Math. 2020, 18, 315–332. [Google Scholar]
- Ma, X.; Liu, H. An inertial Halpern-type CQ algorithm for solving split feasibility problems in Hilbert spaces. J. Appl. Math. Comput. 2021, 68, 1699–1717. [Google Scholar] [CrossRef]
- Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441–453. [Google Scholar] [CrossRef]
- Yang, Q. The relaxed CQ algorithm for solving the split feasibility problem. Inverse Probl. 2004, 20, 1261–1266. [Google Scholar] [CrossRef]
- Yang, Q. On variable-step relaxed projection algorithm for variational inequalities. J. Math. Anal. Appl. 2005, 302, 166–179. [Google Scholar] [CrossRef]
- López, G.; Martín-Márquez, V.; Wang, F.; Xu, H.K. Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012, 28, 085004. [Google Scholar] [CrossRef]
- Dang, Y.; Sun, J.; Xu, H. Inertial accelerated algorithm for solving a split feasibility problem. J. Ind. Manag. Optim. 2017, 13, 1383–1394. [Google Scholar] [CrossRef]
- Gibali, A.; Liu, L.W.; Tang, Y.C. Note on the modified relaxation CQ algorithm for the split feasibility problem. Optim. Lett. 2018, 12, 817–830. [Google Scholar] [CrossRef]
- Kesornprom, S.; Pholasa, N.; Cholamjiak, P. On the convergence analysis of the gradient-CQ algorithms for the split feasibility problem. Numer. Algorithms 2020, 84, 997–1017. [Google Scholar] [CrossRef]
- Polyak, B.T. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17. [Google Scholar] [CrossRef]
- Alvarez, F.; Attouch, H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 2001, 9, 3–11. [Google Scholar] [CrossRef]
- Suantai, S.; Pholasa, N.; Cholamjiak, P. The modified inertial relaxed CQ algorithm for solving the split feasibility problems. J. Ind. Manag. Optim. 2018, 14, 1595–1615. [Google Scholar] [CrossRef]
- Gibali, A.; Mai, D.T.; Vinh, N.T. A new relaxed CQ algorithm for solving split feasibility problems in Hilbert spaces and its applications. J. Ind. Manag. Optim. 2019, 15, 963–984. [Google Scholar] [CrossRef]
- Maingé, P.E. Approximation methods for common fixed points of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2007, 325, 469–479. [Google Scholar] [CrossRef]
- Xu, H.K. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256. [Google Scholar] [CrossRef]
- Maingé, P.E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16, 899–912. [Google Scholar] [CrossRef]
- Sahu, D.R.; Cho, Y.J.; Dong, Q.L.; Kashyap, M.R.; Li, X.H. Inertial relaxed CQ algorithms for solving a split feasibility problem in Hilbert spaces. Numer. Algorithms 2021, 87, 1075–1095. [Google Scholar] [CrossRef]
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).