Abstract
This paper introduces a novel two-step inertial algorithm for locating a common fixed point of a countable family of nonexpansive mappings. We establish strong convergence properties of the proposed method under mild conditions and employ it to solve convex bilevel optimization problems. The method is further applied to the image recovery problem. Our numerical experiments show that the proposed method achieves faster convergence than other related methods in the literature.
Keywords:
convex bilevel optimization; forward–backward algorithms; image restoration problems; two-step inertial; viscosity approximation
MSC:
47H09; 90C25; 65K10
1. Introduction
Bilevel optimization has received significant attention in recent years, having arisen as a powerful tool for many machine learning applications such as hyperparameter optimization [1,2], signal processing [3,4], and reinforcement learning [5]. It is defined as a mathematical program in which an optimization problem contains another optimization problem as a constraint. In this paper, we consider the bilevel optimization problem in which the following minima are sought:
where is assumed to be strongly convex and differentiable, while is a nonempty set of inner level optimizers satisfying
where is a differentiable and convex function such that is L-Lipschitz continuous and is a convex, proper, and lower semi-continuous function. We let be the solution set of (1).
Observe that this bilevel optimization model contains the inner level minimization problem (2) as a constraint of the outer level optimization problem (1). It is well known from (1) that
Many researchers have proposed algorithms for solving problem (2); see [6,7,8,9,10]. The basic algorithm is the proximal forward–backward technique, or proximal gradient method, defined by the iterative equation
where is the step-size, is the proximity operator of , and is the gradient of [6,11]. Equation (3) is referred to in the literature as the forward–backward splitting algorithm (FBSA). The FBSA can be used to solve the inner level optimization problem if is L-Lipschitz continuous [7].
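As a concrete illustration of the FBSA iteration (3), the following is a minimal scalar sketch, not the authors' implementation; the toy objective f(x) = ½(x − 3)², the penalty g(x) = λ|x| (whose proximity operator is soft-thresholding), and the step size are illustrative assumptions.

```python
# Hedged scalar sketch of the forward-backward splitting algorithm (FBSA):
#   x_{n+1} = prox_{c*g}(x_n - c * grad_f(x_n)),  with 0 < c < 2/L.
# Toy instance (our choice): f(x) = 0.5*(x - 3)^2, g(x) = lam*|x|.

def soft_threshold(x, t):
    """Proximity operator of t*|.| at x: shrink x toward 0 by t."""
    return max(abs(x) - t, 0.0) * (1.0 if x >= 0 else -1.0)

def fbsa(x0, grad_f, lam, c, n_iter=100):
    x = x0
    for _ in range(n_iter):
        x = soft_threshold(x - c * grad_f(x), c * lam)  # forward step, then backward step
    return x

# grad_f is 1-Lipschitz here, so any c in (0, 2) is admissible; the minimizer
# of f + g is x* = 2, where the gradient of f balances the subgradient of g.
x_star = fbsa(5.0, lambda x: x - 3.0, lam=1.0, c=1.0)
```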
The proximal gradient method can also be viewed as a fixed-point algorithm, where the iterated mapping is given by
and is called the forward–backward mapping [12]. The forward–backward mapping, T, is nonexpansive if , where L is a Lipschitz constant of and, in that case, . It is noted that implementation of the forward–backward operator can be simplified by first changing the inner level optimization problem into a zero-point problem of the sum of two monotone operators, and then, after analysis, translating back into the fixed-point problem. Exemplifying the fixed-point approach, Sabach et al. [13] proposed the bilevel gradient sequential averaging method (BiG-SAM) for solving problems (1) and (2). The iterative process can be defined as
where , is strongly convex with parameter and where and are Lipschitz constants for the gradients of f and . The authors analyzed the convergence behavior of BiG-SAM using an existing fixed-point algorithm and discussed its rate of convergence.
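The BiG-SAM pattern just described can be sketched on a toy two-dimensional bilevel problem. Everything below is an illustrative instance of ours, not the authors' tuning: the inner objective f(x) = ½(x₁ + x₂ − 2)² has the whole line x₁ + x₂ = 2 as its argmin, and the strongly convex outer objective ω(x) = ½‖x‖² selects the minimum-norm solution (1, 1).

```python
# Hedged 2-D sketch of BiG-SAM: average a forward-backward step on the inner
# objective with a gradient step on the outer objective, with vanishing
# weights alpha_n. Step sizes c, beta and alpha_n are illustrative choices.

def big_sam(x0, n_iter=20000, c=0.5, beta=1.0):
    x = list(x0)
    for n in range(1, n_iter + 1):
        r = x[0] + x[1] - 2.0
        y = [x[0] - c * r, x[1] - c * r]        # inner forward-backward step T(x_n)
        s = [(1.0 - beta) * xi for xi in x]     # outer gradient step x_n - beta*grad(omega)(x_n)
        a = 1.0 / (n + 1)                       # alpha_n -> 0
        x = [a * si + (1.0 - a) * yi for si, yi in zip(s, y)]
    return x

sol = big_sam([2.0, 0.0])  # drifts along the inner solution set toward (1, 1)
```

The inner step alone cannot distinguish points on the line x₁ + x₂ = 2; it is the vanishing outer perturbation that performs the bilevel selection.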
In optimization problems like those presented above, mathematicians frequently employ a technique known as inertial-type extrapolation [14,15] to accelerate the convergence of the iterative equations. This approach involves utilizing a term , where denotes an inertial parameter, to govern the momentum . One such algorithm that has enjoyed immense popularity was developed by Nesterov [14]. He used an inertial or extrapolation technique to solve convex optimization problems of the form of (2), where is a convex, smooth function. Nesterov’s algorithm takes the following form:
where the inertial parameter for all n and is the step size depending on the Lipschitz continuity modulus of . Nesterov proved that Equation (6) has a faster convergence rate than the general gradient algorithm by selecting such that . Similarly, in 2009, Beck and Teboulle [16] introduced the fast iterative shrinkage-thresholding algorithm (FISTA) for solving linear inverse problems. Their method combined the proximal algorithm with the inertial technique, again considerably accelerating the convergence rate.
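A minimal scalar sketch of this inertial (extrapolation) scheme, using the standard FISTA update for the inertial parameter; the toy objective f(x) = ½(x − 4)² and the step size are our illustrative choices.

```python
# Hedged sketch of Nesterov/FISTA-type inertial acceleration:
#   y_n = x_n + theta_n * (x_n - x_{n-1}),   x_{n+1} = y_n - c * grad_f(y_n).

def inertial_gradient(x0, grad_f, c, n_iter=100):
    x_prev, x, t = x0, x0, 1.0
    for _ in range(n_iter):
        t_next = (1.0 + (1.0 + 4.0 * t * t) ** 0.5) / 2.0
        theta = (t - 1.0) / t_next        # FISTA inertial parameter, theta_n in [0, 1)
        y = x + theta * (x - x_prev)      # extrapolation (momentum) step
        x_prev, x = x, y - c * grad_f(y)  # gradient step at the extrapolated point
        t = t_next
    return x

x_min = inertial_gradient(0.0, lambda x: x - 4.0, c=1.0)  # minimizer of 0.5*(x-4)^2
```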
In 2019, Shehu et al. [17] presented an inertial forward–backward algorithm, called the inertial bilevel gradient sequential averaging method (iBiG-SAM), for solving problems (1) and (2), improving upon the BiG-SAM of Sabach and Shtern [13] via the following iterative algorithm:
The authors transformed the bilevel optimization problem into a fixed-point problem for a nonexpansive mapping in an infinite dimensional Hilbert space and then proved strong convergence.
As the above suggests, research on fixed-point problems for nonexpansive mappings has become crucial for developing optimization methods. The Mann iterative process is a well-known method for approximating fixed points of nonexpansive mappings on Hilbert spaces. However, Mann’s process provides only weak convergence. Many authors have demonstrated fixed-point problems exhibiting strong convergence for nonexpansive mappings on Hilbert spaces using the viscosity approximation method, expressed by the equation
where , S is a contraction on a Hilbert space H and ; see [18,19].
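A scalar sketch of the viscosity approximation iteration above; the nonexpansive map T(x) = 0.5x + 1 (with fixed point 2), the contraction S(x) = 0.25x, and the sequence αₙ = 1/(n + 1) are illustrative choices of ours.

```python
# Hedged sketch of the viscosity approximation method:
#   x_{n+1} = alpha_n * S(x_n) + (1 - alpha_n) * T(x_n),  alpha_n -> 0.

def viscosity(x0, S, T, n_iter=100000):
    x = x0
    for n in range(1, n_iter + 1):
        a = 1.0 / (n + 1)
        x = a * S(x) + (1.0 - a) * T(x)  # contraction perturbation vanishes as n grows
    return x

# Fix(T) = {2}; the vanishing contraction term enforces strong convergence.
x_fix = viscosity(10.0, lambda x: 0.25 * x, lambda x: 0.5 * x + 1.0)
```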
In 2009, Takahashi [20] modified the viscosity approximation method, selecting a particular fixed point of the nonexpansive self-mapping of Moudafi [18]. The iterative process is given by
where , S is a contraction of C into itself, is a countable family of nonexpansive mappings of C into itself, C is a subset of a Banach space, and . Takahashi proved the strong convergence of (9) to a common fixed point of .
Jailoka et al. [21] introduced a fast viscosity forward–backward algorithm (FVFBA) with the inertial technique for finding a common fixed point of a countable family of nonexpansive mappings. They proved a strong convergence result and applied it to solving a convex minimization problem of the sum of two convex functions. The iterative process can be formulated by
where , S is a contraction on a Hilbert space H and .
Recently, Janngam et al. [22] presented an inertial viscosity modified SP algorithm (IVMSPA). The authors proved a strong convergence of their algorithm and applied it to solving the convex bilevel optimization problems (problems 1 and 2). Their algorithm was given by
where , , , S is a contraction mapping on a Hilbert space H and .
The above authors all employ a single inertial parameter to accelerate the convergence of their algorithms. However, it has been noted that the incorporation of two inertial parameters enhances motion modeling, improves stability and robustness, increases redundancy and fault tolerance, expands the range of applications, and offers flexibility and adaptability in algorithm design. In [23], it was illustrated through an example that the one-step inertial extrapolation, expressed as with , may not produce acceleration. Additionally, Ref. [24] mentioned that incorporating more than two points, such as and , in the inertial process could lead to acceleration. For instance, consider the following two-step inertial extrapolation:
where and can provide acceleration. The limitations of employing one-step inertial acceleration in the alternating direction method of multipliers (ADMM) were discussed in [25], which led to the proposal of adaptive acceleration as an alternative. In addition, Polyak [26] discussed the potential of multi-step inertial methods to improve the speed of optimization techniques, despite the absence of established convergence or rate results. Recent research [27] has further explored various aspects of multi-step inertial methods.
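The two-step inertial extrapolation above can be sketched as follows; the toy objective f(x) = ½x², the parameters θ = 0.5, δ = −0.1, and the step size c = 0.5 are illustrative choices, not the conditions required by the theory.

```python
# Hedged scalar sketch of two-step inertial extrapolation followed by a
# gradient step:
#   w_n = x_n + theta*(x_n - x_{n-1}) + delta*(x_n - x_{n-2}),
#   x_{n+1} = w_n - c * grad_f(w_n).

def two_step_inertial(x0, grad_f, theta=0.5, delta=-0.1, c=0.5, n_iter=200):
    x_pp, x_p, x = x0, x0, x0  # x_{n-2}, x_{n-1}, x_n
    for _ in range(n_iter):
        w = x + theta * (x - x_p) + delta * (x - x_pp)  # two-point extrapolation
        x_pp, x_p, x = x_p, x, w - c * grad_f(w)
    return x

x_lim = two_step_inertial(1.0, lambda x: x)  # minimizer of 0.5*x^2 is 0
```

Note the sign convention suggested in the literature cited above: the one-step weight is nonnegative while the second weight is taken negative.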
Based on the information provided above, our aim in this paper is to solve the convex bilevel optimization problem by introducing a new accelerated viscosity algorithm with the two-point inertial technique, which we then apply to image recovery. The remainder of the paper is organized as follows. In Section 2, we recall some basic definitions and results that are crucial in the paper. The proposed algorithm and the analysis of its convergence are presented in Section 3. The performance of deblurring images using our algorithm is analyzed and illustrated in Section 4. Finally, we give conclusions and discuss directions for future work in Section 5.
2. Preliminaries
In this section, we present some preliminary material that will be needed for the main theorems.
Let C be a nonempty subset of a real Hilbert space H with norm , denote the set of real numbers, denote the non-negative real numbers, denote the positive real numbers, denote the set of positive integers, and let I denote the identity mapping on H.
Definition 1.
The mapping is said to be L-Lipschitz with , if
for all . Furthermore, if then T is called a contraction mapping, and it is nonexpansive if .
When is a sequence in C, we denote the strong convergence of to by , and will symbolize the set of all fixed points of T.
Let be a nonexpansive mapping and be a family of nonexpansive mappings of C into itself such that . The sequence is said to satisfy the NST-condition (I) with T [28], if for each bounded sequence ,
The following condition is an essential condition for proving our convergence theorem.
Definition 2
([29,30]). A sequence with is said to satisfy the condition (Z) if for every bounded sequence in C such that
then, every weak cluster point of belongs to .
Recall that for a nonempty closed convex subset C of H, the metric projection onto C is a mapping , defined by
for all . Note that if and only if for all .
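For the concrete case where C is a closed ball in the plane, the metric projection has a closed form, and the variational characterization just stated can be spot-checked numerically; the instance below is an illustrative example of ours.

```python
# Hedged sketch of the metric projection P_C when C = B(center, r) is a
# closed ball in R^2, together with a spot-check of the characterization
#   <x - P_C(x), y - P_C(x)> <= 0  for all y in C.
import math

def proj_ball(x, center, r):
    d = math.hypot(x[0] - center[0], x[1] - center[1])
    if d <= r:
        return list(x)  # x already lies in C
    s = r / d           # radial scaling back onto the sphere
    return [center[i] + s * (x[i] - center[i]) for i in range(2)]

p = proj_ball([3.0, 4.0], [0.0, 0.0], 1.0)  # projection of (3, 4) onto the unit ball
# Check the characterization at the point y = (0, 0) in C:
lhs = (3.0 - p[0]) * (0.0 - p[0]) + (4.0 - p[1]) * (0.0 - p[1])
```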
The definition and properties of a proximity operator are presented below.
Definition 3
([31,32]). Let be a function that is convex, proper, and lower semi-continuous. The function , known as the proximity operator of g, is defined as follows:
Alternatively, it can be expressed as:
where represents the subdifferential of g defined by:
for any . Additionally, for , we know that is firmly nonexpansive and
where is the set of fixed points of .
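The firm nonexpansiveness stated in Definition 3 can be illustrated numerically for the model case g = |·|, whose proximity operator is the soft-thresholding map; the grid and tolerance below are our illustrative choices.

```python
# Spot-check (ours, not from the paper) of firm nonexpansiveness of the
# proximity operator of g = |.|:
#   |prox(u) - prox(v)|^2 <= (u - v) * (prox(u) - prox(v))  for all u, v.

def prox_abs(x, c):
    """Proximity operator of c*|.|, i.e. soft-thresholding by c."""
    return max(abs(x) - c, 0.0) * (1.0 if x >= 0 else -1.0)

grid = [0.5 * i - 5.0 for i in range(21)]  # sample points in [-5, 5]
firmly_nonexpansive = all(
    (prox_abs(u, 1.0) - prox_abs(v, 1.0)) ** 2
    <= (u - v) * (prox_abs(u, 1.0) - prox_abs(v, 1.0)) + 1e-12
    for u in grid
    for v in grid
)
```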
The following lemmas will be used for proving the convergence of our proposed algorithm.
Lemma 1
([33]). Let be a convex, proper, and lower semi-continuous function and let be a differentiable and convex function such that is L-Lipschitz continuous. Let
where with as . Then satisfies the NST-condition (I) with T.
Lemma 2
([34]). Let and . Then, the following properties are true:
- (i)
- ;
- (ii)
- ;
- (iii)
- .
Lemma 3
([35]). Let , and such that . Assume that
for all . If for every subsequence of satisfying
then .
3. Main Results
Throughout this section, we let C be closed and convex with , and let a mapping be a k-contraction, where . Let be a family of nonexpansive mappings of C into itself satisfying the condition (Z) such that .
For the first of our main results, we draw upon the ideas of Jailoka et al. [21] and Liang [24] and introduce a modified two-step inertial viscosity algorithm (MTIVA) for finding a common fixed point of a family of nonexpansive mappings , as follows:
In Theorem 1, we show that Algorithm 1 converges strongly.
| Algorithm 1 Modified Two-Step Inertial Viscosity Algorithm (MTIVA) |
| Initialization: Let , , and let , be bounded sequences. Take arbitrarily. For . |
| Step 1. Compute the inertial step:
|
| Step 2. Compute the viscosity step:
|
| Step 3. Compute :
|
Theorem 1.
Let a sequence be generated by Algorithm 1. Suppose the conditions (C1–C3) hold for the sequences , , and . Then, , where .
- (C1)
- (C2)
- for some ,
- (C3)
- , and .
Proof.
Let . By the definition of , we obtain
By the definition of , we obtain
By (13), (14), and (C1), we have as and as ; hence, there exist such that
for all . Then,
where . Thus, by mathematical induction, we deduce that
for all . Hence, the sequence is bounded and so are the sequences , , . Now, by Lemma 2, we obtain
and
Next, we shall show that the sequence converges strongly to . Take and . From (22), we have
for all . To apply Lemma 3, we have to show that whenever a subsequence of satisfies
Condition (C2) and the above inequality lead to
Using (C2) and (C3), and since
we obtain
Since is bounded, there exist a subsequence of and such that as and
Since satisfies the condition (Z) and (27), we obtain . From , we obtain
for all . In particular, we have
Hence, we obtain (28). Thus, in view of Lemma 3, converges to , as required. □
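The three steps of Algorithm 1 can be sketched for a single nonexpansive mapping T; the concrete maps T(x) = 0.5x + 1 (nonexpansive, with fixed point 2), S(x) = 0.5x (a 0.5-contraction), and all parameter sequences below are illustrative assumptions of ours, not the authors' settings.

```python
# Hedged scalar sketch of the structure of Algorithm 1 (MTIVA):
#   Step 1 (inertial):  w_n = x_n + theta*(x_n - x_{n-1}) + delta*(x_n - x_{n-2})
#   Step 2 (viscosity): y_n = alpha_n * S(w_n) + (1 - alpha_n) * w_n
#   Step 3 (mapping):   x_{n+1} = T(y_n)

def mtiva(x0, S, T, theta=0.5, delta=-0.1, n_iter=100000):
    x_pp, x_p, x = x0, x0, x0  # x_{n-2}, x_{n-1}, x_n
    for n in range(1, n_iter + 1):
        w = x + theta * (x - x_p) + delta * (x - x_pp)  # two-step inertial
        a = 1.0 / (n + 1)                               # alpha_n -> 0
        y = a * S(w) + (1.0 - a) * w                    # viscosity step
        x_pp, x_p, x = x_p, x, T(y)
    return x

x_fix = mtiva(10.0, lambda x: 0.5 * x, lambda x: 0.5 * x + 1.0)
```

Since Fix(T) = {2} here, the iterates approach 2 while the viscosity perturbation vanishes.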
In what follows, we impose the following assumptions on the mappings , and associated with the convex bilevel optimization problems (1) and (2).
- (A1)
- is a convex and differentiable function such that is Lipschitz continuous with constant and are proper lower semi-continuous and convex functions;
- (A2)
- is strongly convex with parameter such that is -Lipschitz continuous and .
With the above assumptions in place, we propose the following algorithm, called the two-step inertial forward–backward bilevel gradient method (TIFB-BiGM), for solving problems (1) and (2).
The proposition below is attributable to Sabach and Shtern [13] and is critical to our next result.
Proposition 1.
Suppose that is strongly convex with and is Lipschitz continuous with constant . Then, for all , the mapping is a contraction such that
for all u, v .
Theorem 2.
The sequence generated by Algorithm 2 converges strongly to where Λ is the set of all solutions of (1) and , provided that all conditions as in Theorem 1 hold.
| Algorithm 2 Two-Step Inertial Forward–Backward Bilevel Gradient Method (TIFB-BiGM) |
| Initialization: Let , , , and let , be bounded sequences. Take arbitrarily. |
| Let with as , where . For . |
| Step 1. Compute the inertial step:
|
| Step 2. Compute:
|
Proof.
Put and , where . Then, by Proposition 1, F is a contraction mapping. We also know that is nonexpansive. Using Theorem 1, we conclude that , where . Note that . Then, for all , we have
Dividing the above inequality by , we obtain
for all . Hence, , so . This completes the proof. □
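The structure of Algorithm 2 can be sketched on a toy two-dimensional bilevel problem. The instance and all parameters below are illustrative choices of ours, not the paper's settings: the inner objective f(x) = ½(x₁ + x₂ − 2)² has the line x₁ + x₂ = 2 as its argmin, and the outer objective ω(x) = ½‖x‖² singles out the minimum-norm point (1, 1), which is therefore the expected bilevel solution.

```python
# Hedged 2-D sketch of TIFB-BiGM's structure: a two-step inertial step,
# a forward-backward step on the inner objective, and a viscosity step
# toward F = I - beta*grad(omega) (cf. Proposition 1).

def tifb_bigm(x0, theta=0.3, delta=-0.05, c=0.5, beta=1.0, n_iter=20000):
    x_pp = x_p = x = list(x0)
    for n in range(1, n_iter + 1):
        w = [x[i] + theta * (x[i] - x_p[i]) + delta * (x[i] - x_pp[i])
             for i in range(2)]                    # two-step inertial extrapolation
        r = w[0] + w[1] - 2.0
        y = [w[0] - c * r, w[1] - c * r]           # forward-backward step on f
        a = 1.0 / (n + 1)                          # alpha_n -> 0
        fy = [(1.0 - beta) * yi for yi in y]       # F(y) = y - beta*grad(omega)(y)
        x_pp, x_p = x_p, x
        x = [a * fy[i] + (1.0 - a) * y[i] for i in range(2)]  # viscosity step
    return x

sol = tifb_bigm([2.0, 0.0])  # approaches the minimum-norm inner solution (1, 1)
```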
4. Application to Image Recovery
Algorithm 2 will now be applied to the problem of image restoration. The algorithm’s performance will be compared to that of several existing methods, such as IVMSPA, FVFBA, BiG-SAM, and iBiG-SAM. Image restoration, also known as image deblurring or image deconvolution, is the process of removing or minimizing degradations (blur) in an image. Efforts along these lines began in the 1950s, and applications have been found in a number of areas, including consumer photography, scientific exploration, and image/video decoding; see [36,37]. Mathematically, image restoration can be modeled with the equation
where is the observed image, is the blurring matrix, is the original image, and is additive noise. The objective is to recover the original image satisfying (34) by the least squares method, which minimizes the squared difference between v and as follows:
where is the Euclidean norm. Many iterative schemes, such as the Richardson iteration [38], can be used to estimate the solution of (35). The problem stated in Equation (35) is ill-posed because there are more unknown variables than observations, resulting in a solution norm that is too large to be meaningful; see [39,40]. To address this issue, various regularization methods have been introduced to improve the least squares problem. One commonly used method is Tikhonov regularization, which involves minimizing the following:
where is a positive parameter known as a regularization parameter, is the -norm and is the Euclidean norm, and is called the Tikhonov matrix. L is set to be the identity in the standard form. A well-known model for solving problem (34) is the least absolute shrinkage and selection operator (LASSO) [41], which is defined by the expression
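The LASSO model just described is solvable by the proximal gradient iteration of Section 1. The following is a tiny sanity-check instance of ours (not the paper's experiment): with A the identity, the LASSO solution is simply the soft-thresholded data.

```python
# Hedged sketch: solving a tiny LASSO instance by the proximal gradient
# iteration  x <- soft_threshold(x - c * A^T(Ax - b), c*lam).

def ista_lasso(A, b, lam, c, n_iter=100):
    m, d = len(A), len(A[0])
    x = [0.0] * d
    for _ in range(n_iter):
        r = [sum(A[i][j] * x[j] for j in range(d)) - b[i] for i in range(m)]  # Ax - b
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(d)]         # A^T(Ax - b)
        z = [x[j] - c * g[j] for j in range(d)]                               # forward step
        x = [max(abs(v) - c * lam, 0.0) * (1.0 if v >= 0 else -1.0)
             for v in z]                                                      # backward (prox) step
    return x

# With A = I, b = (3, 0.5), lam = 1: the solution is soft(b, 1) = (2, 0).
sol = ista_lasso([[1.0, 0.0], [0.0, 1.0]], [3.0, 0.5], lam=1.0, c=1.0)
```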
The restoration of RGB images presents a challenge for the model (36) due to the significant size of the matrix A, as well as its associated elements, which can make computing the multiplication and quite expensive. To address this, researchers in this field commonly implement a 2-D fast Fourier transform to transform the images, resulting in a modified version of the model (36) that overcomes this issue.
The blurring operation A, commonly selected as , plays a crucial role in the problem (34). R represents the blurring matrix, while W denotes the two-dimensional fast Fourier transform. The observed image is affected by both blurring and noise, with its dimensions being .
Now, let be the set of all solutions of (38). Among the solutions in , we would also like to select a solution in such a way that is a minimizer of
We consider two RGB images (Wat Chedi Luang [42] and Matsue Castle) with the size of as the original images (see Figure 1). The pictures used in this experiment were created by the third author. To simulate blurring, we convolved the images with a Gaussian blur filter of size and standard deviation , with noise .
Figure 1.
Original images: (a) Wat Chedi Luang, (b) Matsue Castle.
Peak signal-to-noise ratio (PSNR) [43] and signal-to-noise ratio (SNR) [44] were used as the metrics for evaluating the performance of each algorithm. The PSNR and SNR at are given by
where is the maximum pixel value (usually 255 in 8-bit grayscale images) and is the mean squared error between the original and the distorted image. Both PSNR and SNR are expressed in decibels (dB) as logarithmic measures of the signal-to-noise or signal-to-error ratio.
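These two metrics are direct to compute; the sketch below works on flattened grayscale images represented as plain lists, with peak value 255 for 8-bit data, and uses the common convention that SNR compares signal power to the power of the reconstruction error.

```python
# PSNR and SNR of a reconstruction `rec` against the original `orig`.
import math

def psnr(orig, rec, peak=255.0):
    mse = sum((o - r) ** 2 for o, r in zip(orig, rec)) / len(orig)
    return 10.0 * math.log10(peak * peak / mse)   # in dB

def snr(orig, rec):
    signal = sum(o * o for o in orig)
    noise = sum((o - r) ** 2 for o, r in zip(orig, rec))
    return 10.0 * math.log10(signal / noise)      # in dB

# Example: two pixels, each off by 10 gray levels -> MSE = 100.
val_psnr = psnr([100.0, 100.0], [90.0, 110.0])   # 10*log10(255^2 / 100)
val_snr = snr([100.0, 100.0], [90.0, 110.0])     # 10*log10(20000 / 200) = 20 dB
```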
In image restoration, both PSNR and SNR are commonly used as metrics to assess the performance of deblurring results. However, it is important to note that these metrics provide different types of information.
PSNR measures the quality of a deblurred image by comparing it to the original image and evaluating the amount of noise introduced during the restoration process. It calculates the ratio between the peak signal power (the maximum possible value for the pixel) and the mean squared error (MSE) between the original and deblurred images. Higher PSNR values indicate better restoration quality as they indicate a lower level of distortion or noise.
On the other hand, SNR measures the ratio between the signal power and the noise power in the deblurred image. It quantifies the preservation of the original signal after the restoration process. Higher SNR values indicate less noise in the deblurred image.
While both PSNR and SNR are useful metrics, they focus on different aspects of image restoration. PSNR primarily considers the visual quality and fidelity of the deblurred image compared to the original, while SNR focuses more on the amount of noise present in the deblurred image.
To comprehensively evaluate the performance of a deblurring algorithm, it is recommended to consider both PSNR and SNR, as they provide complementary information about the restoration quality.
We now employ our proposed algorithm (TIFB-BiGM) from Theorem 2 to solve the convex bilevel optimization problems (38) and (39), where , , and , and compare it with the other algorithms discussed above. The observed images are the blurred images. We compute the Lipschitz constant using the maximum eigenvalue of the matrix .
For the first experiment, the parameters of the TIFB-BiGM are chosen as follows: , , , and . The results of recovering the “Wat Chedi Luang” image of size using TIFB-BiGM with different inertial parameters are shown in Table 1 and Table 2. We observe from these tables that taking close to 1 and close to 0 gives the highest PSNR and SNR values for our method.
Table 1.
PSNR values for restoration of “Wat Chedi Luang” image by TIFB-BiGM after 300 iterations for different choices of parameters and .
Table 2.
SNR values for restoration of “Wat Chedi Luang” image by TIFB-BiGM after 300 iterations for different choices of parameters and .
The parameter values for each algorithm were chosen for optimum performance, based on the published literature. The value for in Table 3 is the best choice for BiG-SAM considered in [13]. For iBiG-SAM, is the best choice over other values considered in [17], and for FVFBA, the authors of [21] found to be the best choice based on their numerical experiments.
Table 3.
Parameters selection of TIFB-BiGM, IVMSPA, FVFBA, BiG-SAM, and iBiG-SAM.
The following experiments demonstrate Algorithm 2’s efficiency for image restoration in comparison to IVMSPA, FVFBA, BiG-SAM, and iBiG-SAM using PSNR and SNR as measurements.
The efficiency of restoring images using the various algorithms at different iteration counts is illustrated in Figure 2, Figure 3, Figure 4, Figure 5, Figure 6 and Figure 7. The results indicate that TIFB-BiGM achieves higher PSNR and SNR values than IVMSPA, FVFBA, BiG-SAM, and iBiG-SAM. Therefore, our algorithm demonstrates superior convergence behavior compared to the aforementioned methods.
Figure 2.
The graphs of PSNR of each algorithm for Wat Chedi Luang.
Figure 3.
The graphs of SNR of each algorithm for Wat Chedi Luang.
Figure 4.
The graphs of PSNR of each algorithm for Matsue Castle.
Figure 5.
The graphs of SNR of each algorithm for Matsue Castle.
Figure 6.
Results for deblurring “Wat Chedi Luang” image using various algorithms at the 500th iteration. (a) Gaussian blurred image, (b) TIFB-BiGM (PSNR = 29.7216, SNR = 25.6962), (c) IVMSPA (PSNR = 29.5375, SNR = 25.5121), (d) FVFBA (PSNR = 28.9243, SNR = 24.8989), (e) BiG-SAM (PSNR = 24.7118, SNR = 20.6864), and (f) iBiG-SAM (PSNR = 27.0172, SNR = 22.9918).
Figure 7.
Results for deblurring “Matsue Castle” image using various algorithms at the 500th iteration. (a) Gaussian blurred image, (b) TIFB-BiGM (PSNR = 30.9830, SNR = 27.5075), (c) IVMSPA (PSNR = 30.8212, SNR = 27.3457), (d) FVFBA (PSNR = 30.43625, SNR = 26.9636), (e) BiG-SAM (PSNR = 25.4625, SNR = 21.9870), and (f) iBiG-SAM (PSNR = 27.9712, SNR = 24.4957).
5. Conclusions
In this paper, algorithmic solutions to a family of convex bilevel optimization problems are developed and applied to image processing. An interesting connection between minimization problems and fixed-point methods is observed. We first present a modified two-step inertial viscosity algorithm (MTIVA) for finding a common fixed point of a family of nonexpansive operators in a Hilbert space and prove strong convergence under relatively mild conditions. This is then applied to the solution of a convex bilevel optimization problem by introducing a novel two-step inertial forward–backward bilevel gradient method (TIFB-BiGM). The main results are then employed in the solution of an image restoration problem. Through careful comparative analysis, we demonstrate that our algorithm outperforms several existing algorithms, such as IVMSPA, FVFBA, BiG-SAM, and iBiG-SAM, in terms of image recovery efficiency, as verified through numerical experiments conducted under specific parameter settings.
There are several potential avenues for future research. Firstly, investigating the adaptability and performance of the proposed algorithm in different image processing tasks could provide valuable insights. Additionally, one might explore the algorithm’s scalability to large-scale image datasets or investigate the incorporation of parallel computing techniques that could enhance the algorithm’s computational efficiency. Moreover, conducting comparative studies with other state-of-the-art image restoration algorithms would provide a comprehensive evaluation of the algorithm’s strengths and limitations. Finally, exploring the applicability of the proposed algorithm to other domains beyond image processing, such as computer vision or signal processing, would broaden its potential impact.
Author Contributions
Conceptualization, S.S.; formal analysis, R.W. and S.S.; investigation, R.W. and K.J.; methodology, R.W. and S.S.; software, K.J.; supervision, S.S.; validation, R.W. and S.S.; writing—original draft, R.W. and K.J.; and writing—review and editing, R.W. and S.S. All authors have read and agreed to the published version of the manuscript.
Funding
NSRF via the program Management Unit for Human Resources & Institutional Development, Research, and Innovation (grant number B05F640183).
Data Availability Statement
Not applicable.
Acknowledgments
This research has received funding support from the NSRF via the Program Management Unit for Human Resources and Institutional Development, Research, and Innovation (grant number B05F640183), and it was also partially supported by Chiang Mai University and Ubon Ratchathani University.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Franceschi, L.; Frasconi, P.; Salzo, S.; Grazzi, R.; Pontil, M. Bilevel programming for hyperparameter optimization and meta-learning. In Proceedings of the International Conference on Machine Learning (ICML), Stockholm, Sweden, 10–15 July 2018; pp. 1568–1577. [Google Scholar]
- Shaban, A.; Cheng, C.-A.; Hatch, N.; Boots, B. Truncated back-propagation for bilevel optimization. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), Okinawa, Japan, 16–18 April 2019; pp. 1723–1732. [Google Scholar]
- Kunapuli, G.; Bennett, K.P.; Hu, J.; Pang, J.-S. Classification model selection via bilevel programming. Optim. Methods Softw. 2008, 23, 475–489. [Google Scholar] [CrossRef]
- Flamary, R.; Rakotomamonjy, A.; Gasso, G. Learning constrained task similarities in graph regularized multitask learning. In Regularization, Optimization, Kernels, and Support Vector Machines; Chapman and Hall/CRC: Boca Raton, FL, USA, 2014; Volume 103, ISBN 978-0367658984. [Google Scholar]
- Konda, V.R.; Tsitsiklis, J.N. Actor-critic algorithms. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), Denver, CO, USA, 30 November–2 December 1999; pp. 1008–1014. [Google Scholar]
- Bruck, R.E., Jr. On the weak convergence of an ergodic iteration for the solution of variational inequalities for monotone operators in Hilbert space. J. Math. Anal. Appl. 1977, 61, 159–164. [Google Scholar] [CrossRef]
- Lions, P.L.; Mercier, B. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979, 16, 964–979. [Google Scholar] [CrossRef]
- Janngam, K.; Suantai, S. An inertial modified S-Algorithm for convex minimization problems with directed graphs and their applications in classification problems. Mathematics 2022, 10, 4442. [Google Scholar] [CrossRef]
- Cabot, A. Proximal point algorithm controlled by a slowly vanishing term: Applications to hierarchial minimization. SIAM J. Optim. 2005, 15, 555–572. [Google Scholar] [CrossRef]
- Xu, H.K. Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150, 360–378. [Google Scholar] [CrossRef]
- Passty, G.B. Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 1979, 72, 383–390. [Google Scholar] [CrossRef]
- Beck, A.; Sabach, S. A first order method for finding minimal norm-like solutions of convex optimization problems. Math. Program. 2014, 147, 25–46. [Google Scholar] [CrossRef]
- Sabach, S.; Shtern, S. A first order method for solving convex bilevel optimization problems. SIAM J. Optim. 2017, 27, 640–660. [Google Scholar] [CrossRef]
- Nesterov, Y.E. A method for solving the convex programming problem with convergence rate O(1/k2). Sov. Math. Dokl. 1983, 27, 372–376. [Google Scholar]
- Polyak, B.T. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17. [Google Scholar] [CrossRef]
- Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202. [Google Scholar] [CrossRef]
- Shehu, Y.; Vuong, P.T.; Zemkoho, A. An inertial extrapolation method for convex simple bilevel optimization. Optim. Methods Softw. 2019, 36, 1–19. [Google Scholar] [CrossRef]
- Moudafi, A. Viscosity approximation method for fixed-points problems. J. Math. Anal. Appl. 2000, 241, 46–55. [Google Scholar] [CrossRef]
- Xu, H.K. Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 2004, 298, 279–291. [Google Scholar] [CrossRef]
- Takahashi, W. Viscosity approximation methods for countable families of nonexpansive mappings in Banach spaces. Nonlinear Anal. 2009, 70, 719–734. [Google Scholar] [CrossRef]
- Jailoka, P.; Suantai, S.; Hanjing, A. A fast viscosity forward–backward algorithm for convex minimization problems with an application in image recovery. Carpathian J. Math. 2021, 37, 449–461. [Google Scholar] [CrossRef]
- Janngam, K.; Suantai, S.; Cho, Y.J.; Kaewkhao, A.; Wattanataweekul, R. A Novel Inertial Viscosity Algorithm for Bilevel Optimization Problems Applied to Classification Problems. Mathematics 2023, 11, 3241. [Google Scholar] [CrossRef]
- Poon, C.; Liang, J. Geometry of First-order Methods and Adaptive Acceleration. arXiv 2020, arXiv:2003.03910. [Google Scholar]
- Liang, J. Convergence Rates of First-Order Operator Splitting Methods. Ph.D. Thesis, Normandie Université, Normandy, France, 2016. [Google Scholar]
- Poon, C.; Liang, J. Trajectory of Alternating Direction Method of Multiplier and Adaptive Acceleration. In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 8–14 December 2019. [Google Scholar]
- Polyak, B.T. Introduction to Optimization; Optimization Software, Publication Division: New York, NY, USA, 1987. [Google Scholar]
- Combettes, P.L.; Glaudin, L. Quasi-Nonexpansive Iterations on the Affine Hull of Orbits: From Mann’s Mean Value Algorithm to Inertial Methods. SIAM J. Optim. 2017, 27, 2356–2380. [Google Scholar] [CrossRef]
- Nakajo, K.; Shimoji, K.; Takahashi, W. On strong convergence by the hybrid method for families of mappings in Hilbert spaces. Nonlinear Anal. 2009, 71, 112–119. [Google Scholar] [CrossRef]
- Aoyama, K.; Kimura, Y. Strong convergence theorems for strongly nonexpansive sequences. Appl. Math. Comput. 2011, 217, 7537–7545. [Google Scholar] [CrossRef]
- Aoyama, K.; Kohsaka, F.; Takahashi, W. Strong convergence theorems by shrinking and hybrid projection methods for relatively nonexpansive mappings in Banach spaces. Nonlinear Anal. Convex Anal. 2009, 10, 7–26. [Google Scholar]
- Moreau, J.J. Fonctions convexes duales et points proximaux dans un espace hilbertien. Comptes Rendus Acad. Sci. Paris Ser. A Math. 1962, 255, 2897–2899. [Google Scholar]
- Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: New York, NY, USA, 2011. [Google Scholar]
- Bussaban, L.; Suantai, S.; Kaewkhao, A. A parallel inertial S-iteration forward–backward algorithm for regression and classification problems. Carpathian J. Math. 2020, 36, 35–44. [Google Scholar] [CrossRef]
- Takahashi, W. Introduction to Nonlinear and Convex Analysis; Yokohama Publishers: Yokohama, Japan, 2009. [Google Scholar]
- Saejung, S.; Yotkaew, P. Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonlinear Anal. 2012, 75, 724–750. [Google Scholar] [CrossRef]
- Maurya, A.; Tiwari, R. A Novel Method of Image Restoration by using Different Types of Filtering Techniques. Int. J. Eng. Sci. Innov. Technol. 2014, 3, 124–129. [Google Scholar]
- Suseela, G.; Basha, S.A.; Babu, K.P. Image Restoration Using Lucy Richardson Algorithm For X-Ray Images. IJISET Int. J. Innov.Sci. Eng. Technol. 2016, 3, 280–285. [Google Scholar]
- Vogel, C.R. Computational Methods for Inverse Problems; SIAM: Philadelphia, PA, USA, 2002. [Google Scholar]
- Eldén, L. Algorithms for the Regularization of Ill-Conditioned Least Squares Problems. BIT Numer. Math. 1977, 17, 134–145. [Google Scholar] [CrossRef]
- Hansen, P.C.; Nagy, J.G.; O’Leary, D.P. Deblurring Images: Matrices, Spectra, and Filtering (Fundamentals of Algorithms 3) (Fundamentals of Algorithms); SIAM: Philadelphia, PA, USA, 2006. [Google Scholar]
- Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. B Methodol. 1996, 58, 267–288. [Google Scholar] [CrossRef]
- Yatakoat, P.; Suantai, S.; Hanjing, A. On Some Accelerated Optimization Algorithms Based on Fixed Point and Linesearch Techniques for Convex Minimization Problems with Applications. Adv. Cont. Discr. Mod. 2022, 2022, 43:1–43:13. [Google Scholar] [CrossRef]
- Thung, K.; Raveendran, P. A survey of image quality measures. In Proceedings of the 2009 International Conference for Technical Postgraduates (TECHPOS), Kuala Lumpur, Malaysia, 14–15 December 2009; pp. 1–4. [Google Scholar]
- Chen, D.Q.; Zhang, H.; Cheng, L.Z. A fast fixed-point algorithm for total variation deblurring and segmentation. J. Math. Imaging Vis. 2012, 43, 167–179. [Google Scholar] [CrossRef]
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).