Article

A Scaled Dai–Yuan Projection-Based Conjugate Gradient Method for Solving Monotone Equations with Applications

1 Mathematics Department, College of Science, Taif University, Taif 21944, Saudi Arabia
2 Department of Mathematics, Faculty of Science, Yusuf Maitama Sule University, Kano P.M.B. 3099, Nigeria
3 Department of Mathematics, Hamedan Branch, Islamic Azad University, Hamedan 1584743311, Iran
4 Department of Statistics, Faculty of Science, Khon Kaen University, Khon Kaen 40002, Thailand
5 Department of Mathematics, Institute of Technical Education and Research, Siksha O Anusandhan University, Bhubaneswar 751030, India
* Author to whom correspondence should be addressed.
Symmetry 2022, 14(7), 1401; https://doi.org/10.3390/sym14071401
Submission received: 7 June 2022 / Revised: 28 June 2022 / Accepted: 4 July 2022 / Published: 7 July 2022
(This article belongs to the Special Issue Advance in Functional Equations)

Abstract

In this paper, we propose two scaled Dai–Yuan (DY) directions for solving constrained monotone nonlinear systems. The proposed directions satisfy the sufficient descent condition independent of the line search strategy. We also propose two different relations for computing the scaling parameter at every iteration: the first is obtained by forcing the direction to approach the quasi-Newton direction, and the second by taking advantage of the popular Barzilai–Borwein strategy. Moreover, we propose a robust projection-based algorithm for solving constrained monotone nonlinear equations, with applications to signal restoration and the reconstruction of blurred images. The global convergence of this algorithm is established under some mild assumptions. Finally, a comprehensive numerical comparison with relevant algorithms shows that the proposed algorithm is efficient.

1. Introduction

Consider the constrained algebraic system
F(x) = 0,  x ∈ Ω,  (1)
where F : Ω → R^n is a continuous mapping that is assumed to be monotone, i.e.,
(F(x) − F(y))^T (x − y) ≥ 0,  ∀x, y ∈ Ω.  (2)
In addition, the set Ω is a closed convex subset of R^n. If Ω = R^n, problem (1) is referred to as an unconstrained monotone nonlinear system. Both cases have been extensively studied by many researchers. The constrained system (1) appears in many physical and mathematical applications, such as financial forecasting problems [1], compressive sensing [2], Bregman distances [3], and monotone variational inequalities [4].
Moreover, the Newton method [5] and the quasi-Newton method [6,7] are well-known methods for solving systems of nonlinear equations. However, these methods are undesirable for large-scale problems because they require computing the Jacobian matrix, or an approximation of it, at each iteration, as well as storing it. Due to these limitations, these methods are unsuitable for large-scale problems involving both smooth and nonsmooth systems. The spectral gradient (SG) and conjugate gradient (CG) methods are another class of methods used to solve systems of nonlinear equations. These methods are matrix-free and can successfully handle large-scale problems; see [8,9,10,11,12,13,14,15] and the references therein. Given an initial point x_0, the main iterative procedure of CG methods for the system of monotone equations is
x_{k+1} = x_k + α_k d_k,  (3)
where α_k is the step size, which can be computed using a line search strategy. The term d_k is the search direction defined by
d_k = −F(x_k), k = 0;  d_k = −F(x_k) + β_k d_{k−1}, k ≥ 1,  (4)
where the scalar β_k is the parameter that differentiates the CG methods. Among the most efficient CG parameters is the one proposed by Dai and Yuan [16], given by
β_k^{DY} = F_k^T F_k / (d_{k−1}^T y_{k−1}),  (5)
where F_k = F(x_k) and y_{k−1} = F_k − F_{k−1}. Recently, the DY CG parameter and its modifications have been used to solve monotone nonlinear systems, for example, the two descent DY algorithms for monotone equations [17], the modified DY method with the sufficient descent property [18], the efficient DY-type spectral CG method [19], and the descent three-term DY CG method with application [20]. Most of these methods are efficient to some extent and apply to real-life applications. Motivated by these DY approaches, we present a scaled Dai–Yuan projection-based conjugate gradient method for solving monotone equations, with applications in signal and image recovery. In addition, we propose two different relations to compute the scaling parameter at every iteration, namely, by forcing the proposed direction to approach the quasi-Newton direction, and by taking advantage of the popular Barzilai–Borwein strategy.
The next section provides a detailed derivation of the proposed algorithm. Section 3 presents the convergence result. Section 4 provides the experimental findings in comparison with some existing algorithms. We provide the conclusion in the last section.

2. Scaled Dai–Yuan CG Methods

This section presents scaled DY CG methods for solving monotone equations with convex constraints. Among the well-known efficient CG parameters is the one proposed by Dai and Yuan [16]. Moreover, scaling has proved to be an efficient strategy for enhancing the performance of CG-based algorithms; see [21]. In this work, to enhance the performance of the DY CG parameter, we present the following scaled DY CG parameter
β_k^{SDY} = σ β_k^{DY},  (6)
such that σ ≤ 1. The main contribution of this work is to propose some reasonable ways of computing the scalar σ at every iteration.

2.1. Scaling Parameter Based on the Quasi-Newton Approach

Newton and quasi-Newton directions contain the full Jacobian information or an approximation of it. Recall that the quasi-Newton equation is defined as
y_k = B_{k+1} s_k,  (7)
where B_{k+1} is a positive definite and symmetric Jacobian estimate. Again, assuming that H_{k+1} is the inverse of B_{k+1}, the quasi-Newton direction is as follows:
d_{k+1} = −H_{k+1} F_{k+1}.  (8)
Using Equations (4) and (6), the scaled DY search direction is given as
d_{k+1} = −F_{k+1} + σ (F_{k+1}^T F_{k+1} / (d_k^T y_k)) d_k,  k = 0, 1, ….  (9)
Equating (8) and (9), we obtain
−H_{k+1} F_{k+1} = −F_{k+1} + σ (F_{k+1}^T F_{k+1} / (d_k^T y_k)) d_k.  (10)
Multiplying (10) by B_{k+1} and then by s_k^T, and using the symmetry of B_{k+1}, we obtain
s_k^T F_{k+1} = (B_{k+1} s_k)^T F_{k+1} − σ (F_{k+1}^T F_{k+1} / (d_k^T y_k)) (B_{k+1} s_k)^T d_k.  (11)
We further eliminate the matrix B_{k+1} in (11) using the quasi-Newton Equation (7) to obtain
s_k^T F_{k+1} = y_k^T F_{k+1} − σ (F_{k+1}^T F_{k+1} / (d_k^T y_k)) y_k^T d_k.  (12)
Solving (12) for σ, and noting that the factor d_k^T y_k cancels, we obtain
σ = (y_k − s_k)^T F_{k+1} / ‖F_{k+1}‖².  (13)
Furthermore, to achieve σ ≤ 1 and to benefit from the nonnegative restriction of Polak–Ribière–Polyak, we propose the following modified version of (13):
σ_k = min{1, σ}.  (14)
Moreover, to ensure the sufficient descent condition is satisfied independent of the line search procedure, we propose the following scaled DY CG direction
d_{k+1} = −τ_k F_{k+1} + σ_k β_k^{DY} d_k,  k = 0, 1, …,  (15)
where
τ_k = 1 + σ_k F_{k+1}^T d_k / (d_k^T y_k).  (16)
Indeed, substituting (16) into (15) gives d_{k+1}^T F_{k+1} = −‖F_{k+1}‖², so the sufficient descent condition holds regardless of the line search.
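To make the computation concrete, the following MATLAB sketch evaluates the scaling (13)–(14) and the direction (15)–(16) for one iteration. It is a minimal illustration under our own naming (F1 for F_{k+1}, d for d_k, and so on), not the authors' code, and it assumes d_k^T y_k ≠ 0.

```matlab
% Minimal sketch (our naming): scaled DY direction with the
% quasi-Newton-based scaling of Section 2.1.
% F1 = F(x_{k+1}), d = d_k, s = s_k, y = y_k; assumes d'*y ~= 0.
function dnew = sdy_direction_qn(F1, d, s, y)
    dty    = d' * y;                           % d_k^T y_k
    betaDY = (F1' * F1) / dty;                 % DY parameter (5)
    sigma  = ((y - s)' * F1) / norm(F1)^2;     % scaling parameter (13)
    sigma  = min(1, sigma);                    % safeguard (14)
    tau    = 1 + sigma * (F1' * d) / dty;      % (16)
    dnew   = -tau * F1 + sigma * betaDY * d;   % scaled DY direction (15)
end
```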

2.2. Scaling Parameter Based on Barzilai–Borwein Approach

The most prevalent choices for the spectral scalars are those offered by Barzilai and Borwein [22], given as follows:
γ_k^1 = s_k^T s_k / (y_k^T s_k)  and  γ_k^2 = s_k^T y_k / (y_k^T y_k).  (17)
Now, considering the relations (4) and (6), the direction (9) can be written as
d_0 = −F_0,  d_{k+1} = −A_{k+1} F_{k+1},  k = 0, 1, …,  (18)
where A_{k+1} is defined by
A_{k+1} = I − σ (d_k F_{k+1}^T) / (d_k^T y_k).  (19)
We aim to take advantage of the Barzilai–Borwein [22] approach; hence, we put forward the parameter σ as the solution to the following minimization problem:
min_σ ‖A_{k+1} − γ_k I‖_F²,  (20)
where ‖·‖_F stands for the Frobenius matrix norm and
γ_k = max{γ_min, min{γ_k^j, γ_max}},  j = 1, 2,  (21)
where 0 < γ_min < γ_max < ∞. Using the relation ‖A‖_F² = trace(A^T A), we obtain the following solution of (20):
σ_k = (1 − γ_k) F_{k+1}^T y_k / (‖d_k‖² β_k^{DY}).  (22)
Furthermore, to achieve σ ≤ 1 and to benefit from the non-negative restriction of Polak–Ribière–Polyak, we propose the following modified version of (22):
σ̂_k = min{1, σ_k}.  (23)
Moreover, to ensure the sufficient descent condition is satisfied independent of the line search procedure, we propose the following scaled DY CG direction
d_{k+1} = −τ̂_k F_{k+1} + σ̂_k β_k^{DY} d_k,  k = 0, 1, …,  (24)
where
τ̂_k = 1 + σ̂_k F_{k+1}^T d_k / (d_k^T y_k).  (25)
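Analogously, a minimal MATLAB sketch of the Barzilai–Borwein-based variant (21)–(25), again with our own naming and using the first BB step γ_k^1 from (17); the arguments gmin and gmax play the roles of γ_min and γ_max:

```matlab
% Minimal sketch (our naming): scaled DY direction with the
% Barzilai-Borwein-based scaling of Section 2.2.
function dnew = sdy_direction_bb(F1, d, s, y, gmin, gmax)
    dty    = d' * y;
    betaDY = (F1' * F1) / dty;                          % DY parameter (5)
    gam    = max(gmin, min((s' * s) / (y' * s), gmax)); % safeguarded gamma_k (21)
    sigma  = (1 - gam) * (F1' * y) / (norm(d)^2 * betaDY);  % (22)
    sigma  = min(1, sigma);                             % safeguard (23)
    tau    = 1 + sigma * (F1' * d) / dty;               % (25)
    dnew   = -tau * F1 + sigma * betaDY * d;            % direction (24)
end
```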
Next, we define the projection operator P_Ω : R^n → Ω, given by
P_Ω[x] = arg min{‖x − z‖ : z ∈ Ω},  x ∈ R^n.  (26)
One of the appealing features of this operator is its non-expansiveness, i.e.,
‖P_Ω[x] − P_Ω[z]‖ ≤ ‖x − z‖,  ∀x, z ∈ R^n.  (27)
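For simple feasible sets, the projection (26) is available in closed form. For example, for Ω = R_+^n, which is used in most of the test problems below, it reduces to a componentwise maximum; a box constraint is equally cheap. A small sketch, with our own function-handle names:

```matlab
% Closed-form projections for two common choices of Omega.
proj_pos = @(x) max(x, 0);                    % Omega = R^n_+
proj_box = @(x, lo, hi) min(max(x, lo), hi);  % Omega = {x : lo <= x <= hi}
```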
Now, we present the scaled DY CG projection-based algorithm (Algorithm 1) for solving convex constrained monotone nonlinear equations.
Algorithm 1: The scaled DY CG projection-based algorithm (SDYCG)
Step 0. Initialize ϵ ≥ 0, a ∈ (0, 1), b > 0, c ∈ (0, 1), g > 0, 0 < γ_min ≤ γ_max, and x_0 ∈ R^n. Set k = 0 and d_0 = −F_0.
Step 1. If ‖F_k‖ ≤ ϵ, stop; otherwise, go to Step 2.
Step 2. Compute the scaled DY CG direction d_k using (15) or (24), where s_k = h_k − x_k and y_k = F(h_k) − F_k + a s_k.
Step 3. Set h_k = x_k + α_k d_k and determine α_k = max{b c^i : i = 0, 1, 2, …} satisfying
−F(h_k)^T d_k ≥ g α_k ‖F(h_k)‖ ‖d_k‖².  (28)
Step 4. If h_k ∈ Ω and ‖F(h_k)‖ = 0, stop; otherwise, set
x_{k+1} = P_Ω[x_k − v_k F(h_k)],  (29)
where
v_k = F(h_k)^T (x_k − h_k) / ‖F(h_k)‖².  (30)
Step 5. Set k = k + 1 and then go to Step 1.
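The following MATLAB sketch assembles the whole of Algorithm 1 for the special case Ω = R_+^n. It is a minimal illustration, not the authors' implementation: the helper sdycg and its interface are ours, c is assumed to lie in (0, 1), and the direction routine sketched in Section 2.1 is reused.

```matlab
% Minimal sketch of Algorithm 1 (SDYCG) for Omega = R^n_+.
% F is a function handle; tol, a, b, c, g are the Step 0 parameters.
function x = sdycg(F, x0, tol, a, b, c, g, maxit)
    proj = @(z) max(z, 0);                  % projection onto R^n_+
    x = x0;  Fx = F(x);  d = -Fx;
    for k = 1:maxit
        if norm(Fx) <= tol, break; end      % Step 1
        alpha = b;                          % Step 3: backtracking on (28)
        h = x + alpha * d;  Fh = F(h);
        while -Fh' * d < g * alpha * norm(Fh) * norm(d)^2
            alpha = alpha * c;
            h = x + alpha * d;  Fh = F(h);
        end
        if all(h >= 0) && norm(Fh) == 0     % Step 4: h solves the system
            x = h;  break;
        end
        s = h - x;  y = Fh - Fx + a * s;    % quantities used in Step 2
        v = Fh' * (x - h) / norm(Fh)^2;     % (30)
        x = proj(x - v * Fh);               % projection step (29)
        Fx = F(x);
        d = sdy_direction_qn(Fx, d, s, y);  % Step 2: direction (15)
    end
end
```

Note that the sketch computes s_k and y_k before the projection update, so they use the current x_k, and it forms the next direction at the end of each pass through the loop.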

3. Global Convergence

This section presents the SDYCG algorithm's global convergence result using the following assumptions:
  • A1. The solution set of problem (1) is non-empty.
  • A2. The function F is Lipschitz continuous on Ω; i.e., there exists L > 0 such that
    ‖F(x) − F(z)‖ ≤ L ‖x − z‖,  ∀x, z ∈ Ω.  (31)
  • A3. The function F satisfies (2).
Remark:
  • The proposed search directions (15) and (24) satisfy the sufficient descent condition F_k^T d_k ≤ −‖F_k‖², independent of the line-search procedure used.
  • Additionally, since y_k = F(h_k) − F_k + a s_k, the monotonicity assumption on F gives
    s_k^T y_k ≥ a s_k^T s_k.  (32)
Lemma 1.
For all k ≥ 0, the line search (28) is well-defined.
Proof. 
Suppose, to the contrary, that there exists k_0 ≥ 0 such that, for every non-negative integer i,
−F(x_{k_0} + b c^i d_{k_0})^T d_{k_0} < g b c^i ‖F(x_{k_0} + b c^i d_{k_0})‖ ‖d_{k_0}‖².  (33)
Letting i → ∞ in (33) and using the continuity of F, we have
−F(x_{k_0})^T d_{k_0} ≤ 0.  (34)
From the sufficient descent condition of the SDYCG directions, we have
−F(x_{k_0})^T d_{k_0} ≥ ‖F(x_{k_0})‖² > 0,  (35)
clearly, (34) contradicts (35), and so the proof is complete. □
Lemma 2.
If the sequences {x_k} and {h_k} are generated by the SDYCG algorithm, then
lim_{k→∞} α_k ‖d_k‖ = 0.  (36)
Proof. 
From the line search (28), we have
F(h_k)^T (x_k − h_k) = −α_k F(h_k)^T d_k ≥ g α_k² ‖F(h_k)‖ ‖d_k‖² > 0.  (37)
Let x̄ ∈ Ω be such that F(x̄) = 0; using the monotonicity of F, we have
F(h_k)^T (x_k − x̄) = F(h_k)^T (x_k − h_k) + F(h_k)^T (h_k − x̄) ≥ F(h_k)^T (x_k − h_k) + F(x̄)^T (h_k − x̄) = F(h_k)^T (x_k − h_k).  (38)
Using (37) and (38), we obtain
‖x_{k+1} − x̄‖² = ‖P_Ω(x_k − v_k F(h_k)) − x̄‖² ≤ ‖x_k − v_k F(h_k) − x̄‖² = ‖x_k − x̄‖² − 2 v_k F(h_k)^T (x_k − x̄) + v_k² ‖F(h_k)‖².  (39)
Using the definition of v_k and the Cauchy–Schwarz inequality in (39), we obtain
‖x_{k+1} − x̄‖² ≤ ‖x_k − x̄‖² − 2 v_k F(h_k)^T (x_k − h_k) + v_k² ‖F(h_k)‖²
 ≤ ‖x_k − x̄‖² − (F(h_k)^T (x_k − h_k))² / ‖F(h_k)‖²
 ≤ ‖x_k − x̄‖² − g² ‖x_k − h_k‖⁴,  (40)
thus, we have
‖x_{k+1} − x̄‖ ≤ ‖x_k − x̄‖,  ∀k ≥ 0.  (41)
Therefore, {‖x_k − x̄‖} is a decreasing and convergent sequence. Now, utilizing (41) and the Lipschitz continuity of F, we obtain
‖F(x_k)‖ = ‖F(x_k) − F(x̄)‖ ≤ L ‖x_k − x̄‖ ≤ L ‖x_0 − x̄‖ =: T.  (42)
Using (28), the monotonicity of F, and the Cauchy–Schwarz inequality, we obtain
g ‖F(h_k)‖ ‖x_k − h_k‖² ≤ F(h_k)^T (x_k − h_k) ≤ ‖F(h_k)‖ ‖x_k − h_k‖,  (43)
which gives
g ‖x_k − h_k‖ ≤ 1.  (44)
From (44), we can see that the sequence {h_k} is bounded. From (40), we obtain
g² Σ_{k=0}^∞ ‖x_k − h_k‖⁴ ≤ Σ_{k=0}^∞ (‖x_k − x̄‖² − ‖x_{k+1} − x̄‖²) < ∞.  (45)
This implies
lim_{k→∞} ‖x_k − h_k‖ = lim_{k→∞} α_k ‖d_k‖ = 0.  (46)
The proof is now complete. □
Lemma 3.
The directions generated by the SDYCG algorithm are bounded.
Proof. 
Starting with (15) for the case k = 0,
‖d_0‖ = ‖−F_0‖ = ‖F_0‖ ≤ T.  (47)
Now, for k ≥ 0, we obtain
‖d_{k+1}‖ = ‖−τ_k F_{k+1} + σ_k β_k^{DY} d_k‖
 = ‖−(1 + σ_k F_{k+1}^T d_k / (d_k^T y_k)) F_{k+1} + σ_k (F_{k+1}^T F_{k+1} / (d_k^T y_k)) d_k‖
 ≤ ‖F_{k+1}‖ + 2 ‖F_{k+1}‖² ‖s_k‖ / (a ‖s_k‖²)
 ≤ T + 2 T² / (a ‖s_k‖) = T̄.  (48)
The first inequality follows directly from the Cauchy–Schwarz inequality, (32), and the definition of h_k (so that s_k = α_k d_k), while the second inequality follows directly from (42) and (46). The same result can be shown for the second search direction (24). □
Theorem 1.
If the sequence {x_k} is generated by the SDYCG algorithm, then
lim inf_{k→∞} ‖F_k‖ = 0.  (49)
Proof. 
Suppose that (49) is not true. Then, there exists ω > 0 such that
‖F(x_k)‖ ≥ ω,  ∀k ∈ N.  (50)
Using the sufficient descent condition and the Cauchy–Schwarz inequality, we have
‖F_k‖² ≤ −F_k^T d_k ≤ ‖F_k‖ ‖d_k‖,  ∀k ∈ N.  (51)
Hence,
‖d_k‖ ≥ ω > 0.  (52)
Using Lemma 2 and inequality (52), we obtain
lim_{k→∞} α_k = 0.  (53)
By the line search (28), the step size α_k' = α_k/c does not satisfy (28); that is,
−F(x_k + α_k' d_k)^T d_k < g α_k' ‖F(x_k + α_k' d_k)‖ ‖d_k‖².  (54)
From the fact that {x_k} and {d_k} are bounded, there exist an infinite index set B_1 and an accumulation point x̄ such that lim_{k→∞, k∈B_1} x_k = x̄. Similarly, there exist an infinite index set B_2 ⊆ B_1 and an accumulation point d̄ such that lim_{k→∞, k∈B_2} d_k = d̄. Hence, taking the limit as k → ∞ with k ∈ B_2 in (54), and using (53), we obtain
−F(x̄)^T d̄ ≤ 0.  (55)
Similarly, taking the same limit in the sufficient descent condition, we obtain
−F(x̄)^T d̄ ≥ ‖F(x̄)‖² ≥ ω² > 0.  (56)
Clearly, (55) and (56) provide a contradiction. □

4. Numerical Experiment and Applications

This section comprises numerical experiments using the proposed algorithm to solve large-scale constrained monotone nonlinear systems and image and signal restoration problems.

4.1. Application to Monotone Nonlinear Equations

The proposed algorithm, with its two directions, was compared to the modified spectral gradient projection (MSGP) algorithm [23] and the derivative-free spectral projection (DFSP) algorithm [24]. Furthermore, for the proposed algorithm, we set the starting parameters as follows: a = 0.1, g = 0.0001, c = 0.99, b = 1, and ϵ = 10^{−9}. Except for the stopping conditions, all of the compared algorithms were implemented using their published parameter values. The following problems were tested:
Problem 1
([25]).
  • F_1(x) = e^{x_1} − 1,
  • F_i(x) = e^{x_i} + x_{i−1} − 1, i = 2, 3, …, n − 1, and Ω = R_+^n.
Problem 2
([25]).
  • F_i(x) = log(x_i + 1) − x_i/n, i = 2, 3, …, n,
  • and Ω = {x ∈ R_+^n : Σ_{i=1}^n x_i ≤ n, x_i > −1, i = 1, 2, …, n}.
Problem 3
([26]).
  • F_1(x) = cos(x_1) − 9 + 3x_1 + 8 exp(x_2),
  • F_i(x) = cos(x_i) − 9 + 3x_i + 8 exp(x_{i−1}), for i = 2, 3, …, n, and Ω = R_+^n.
Problem 4
(This problem is from Reference).
  • F_i(x) = e^{x_i} − 1, i = 1, 2, …, n, and Ω = R_+^n.
Problem 5
([21]).
  • F_i(x) = 4x_i + (x_{i+1} − 2x_i) − x_{i+1}²/3, for i = 1, 2, …, n − 1,
  • F_n(x) = 4x_n + (x_{n−1} − 2x_n) − x_{n−1}²/3, and Ω = R_+^n.
Problem 6
([27]).
  • F_i(x) = 2x_i − sin(x_i), for i = 1, 2, 3, …, n, and Ω = R_+^n.
In Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6, NORM denotes the norm of the function F at the stopping point. The following initial points are considered: x_0^1 = (2, 2, …, 2), x_0^2 = (1, 1/2, 1/3, …, 1/n), x_0^3 = (1, 1, …, 1), x_0^4 = (1/n, 2/n, …, 1), x_0^5 = ((n−1)/n, (n−2)/n, …, 1/n), x_0^6 = (2, 2/2, 2/3, …, 2/n), x_0^7 = (1 − 1, 1 − 1/2, 1 − 1/3, …, 1 − 1/n), and x_0^8 = (3, 3, …, 3).
For the numerical comparison, we used three metrics, namely, the number of iterations (ITER), the number of function evaluations (FEV), and the computational or CPU time (TIME). Starting with Problem 1, the proposed algorithm successfully solved the problem for all initial guesses, with fewer ITER, FEV, and TIME. Additionally, the DFSP algorithm solved all the initial guesses, with relatively high ITER and FEV compared to the SDYCG algorithm. We also observed that the MSGP algorithm failed for three initial guesses at the smallest dimension, and failed for four initial guesses as the dimension increased, although it performed well on Problem 2 for all three metrics.
Moreover, for Problem 3, the proposed algorithm won in terms of the minimum ITER, FEV, and CPU time. In addition, the DFSP algorithm showed a high level of efficiency for the three metrics, but MSGP uniformly failed for three initial guesses across all five dimensions. For Problem 4, all the algorithms successfully and efficiently solved all the initial guesses; the SDYCG algorithm was the winner for the three metrics, followed by the DFSP algorithm and, lastly, the MSGP algorithm. A similar observation can be made for Problems 5 and 6; it can also be seen that the MSGP algorithm is relatively dimension-dependent, as the number of failed initial guesses grows with the dimension.
Furthermore, to ease the numerical comparisons in the above tables, we used the well-known performance profile technique of Dolan and Moré [28]. Three figures were plotted for ITER, TIME, and FEV. Figure 1, Figure 2 and Figure 3 show that, on average, the SDYCG algorithm requires fewer iterations, less computing time, and fewer function evaluations than the DFSP and MSGP algorithms.
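For completeness, a minimal MATLAB sketch of such a performance profile is given below. The function name and interface are ours: T is an np-by-ns matrix of costs (ITER, TIME, or FEV), one row per test instance and one column per solver, with NaN marking a failure.

```matlab
% Minimal sketch of a Dolan-More performance profile [28] (our naming).
function perf_profile(T)
    [np, ns] = size(T);
    best = min(T, [], 2);                        % best cost per instance (NaNs ignored)
    R = bsxfun(@rdivide, T, best);               % performance ratios r_{p,s}
    R(isnan(R)) = Inf;                           % failures are never covered
    taus = sort(unique(R(isfinite(R))));
    rho = zeros(numel(taus), ns);
    for j = 1:numel(taus)
        rho(j, :) = sum(R <= taus(j), 1) / np;   % rho_s(tau): fraction solved
    end
    stairs(taus, rho);  xlabel('\tau');  ylabel('\rho_s(\tau)');
end
```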

4.2. Application in Signal Recovery

This subsection considers the basis pursuit denoising problem. Let x be the original sparse signal, and let m ∈ R^k be an observation satisfying
m = Y x.  (57)
The standard sparse l_1-regularization problem is
min_x w ‖x‖_1 + (1/2) ‖Y x − m‖_2²,  (58)
which arises in compressive sensing, where w is a nonnegative parameter, m ∈ R^k is an observation, Y ∈ R^{k×n} (k ≪ n) is a linear operator, x ∈ R^n, and ‖·‖_1, ‖·‖_2 represent the l_1 and l_2 norms, respectively; see [2,29,30,31]. Problem (58) can be transformed into the following monotone nonlinear system:
F(u) = min{u, Mu + r} = 0,  (59)
where u = (l; z), r = w e_{2n} + (−Y^T m; Y^T m), M = [Y^T Y, −Y^T Y; −Y^T Y, Y^T Y], e_{2n} = (1, 1, …, 1)^T ∈ R^{2n}, and x = l − z for some l ≥ 0 and z ≥ 0. The function (59) is Lipschitz continuous and monotone; for more detailed information about the transformation of (58) into (59), see [32,33]. The details are omitted here to avoid repetition. The SDYCG algorithm will be used to address this problem. Similar methods have been employed to deal with this problem; see [34,35,36].
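As an illustration, the residual of (59) can be applied implicitly, so that the matrix M (and hence Y^T Y) is never formed. The following MATLAB sketch, with our own naming, shows one way to do this:

```matlab
% Minimal sketch (our naming): residual F(u) = min(u, M*u + r) of (59),
% applied implicitly. u = [l; z] with x = l - z; w is the regularization
% parameter of (58).
function Fu = l1_residual(u, Y, m, w)
    n = size(Y, 2);
    l = u(1:n);  z = u(n+1:end);
    Ax  = Y' * (Y * (l - z));      % Y'Y(l - z) via two matrix-vector products
    Ytm = Y' * m;
    Mur = [ Ax - Ytm + w;          % top block of M*u + r
           -Ax + Ytm + w];         % bottom block of M*u + r
    Fu  = min(u, Mur);             % componentwise minimum
end
```

Each evaluation thus costs only two matrix-vector products with Y and Y^T, which is what makes matrix-free CG-type methods attractive for (59).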
All codes in this work are written in MATLAB R2014a and run on an HP Core i5 8th Gen personal computer. We carried out two restoration experiments: signal restoration and image restoration. Further, we employed the direction (15) for signal restoration and the direction (24) for image restoration. For a recovered signal x̃ and an original signal x, the mean square error (MSE) is defined as
MSE = (1/n) ‖x̃ − x‖²,
and the signal-to-noise ratio (SNR) of the recovered signal as
SNR = 20 × log_10(‖x‖ / ‖x̃ − x‖).
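In MATLAB, both metrics are one-liners; in this sketch (our naming), xs is the original signal and xr the recovered one:

```matlab
% Quality metrics used in the experiments (our naming).
mse = norm(xr - xs)^2 / numel(xs);             % mean square error
snr = 20 * log10(norm(xs) / norm(xr - xs));    % signal-to-noise ratio in dB
```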
The primary purpose of the signal restoration experiment is to reconstruct a sparse signal of length n from observations of length k. We initialized the following free variables: a = 0.4, g = 0.00001, c = 1, and b = 0.9. We conducted the experiment on a signal of length n = 4096 from observations of length k = 1024, with 128 randomly placed nonzero components in the original signal. Y is the Gaussian matrix produced by the MATLAB instruction randn(k, n). Furthermore, the measurement m is contaminated by noise, that is, m = Y x + θ, where θ is Gaussian noise normally distributed with mean 0 and variance 10^{−4}. We used x_0 = Y^T m and the merit function f(x) = w ‖x‖_1 + (1/2) ‖Y x − m‖_2², and stopped the iterations when
|f_k − f_{k−1}| / |f_{k−1}| < 10^{−4}.  (60)
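A minimal MATLAB sketch of this experimental setup, under the stated sizes (n = 4096, k = 1024, 128 spikes) and with our own variable names, is:

```matlab
% Minimal sketch of the signal restoration setup (our naming).
n = 4096;  k = 1024;  nz = 128;
xs = zeros(n, 1);                        % original sparse signal
p = randperm(n);  xs(p(1:nz)) = randn(nz, 1);
Y = randn(k, n);                         % Gaussian measurement matrix
m = Y * xs + 1e-2 * randn(k, 1);         % noisy observation (variance 1e-4)
x0 = Y' * m;                             % starting point
```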
In comparison with the PCG [37] and MSP [38] algorithms, the SDYCG algorithm restored the signal to almost its original form with fewer iterations, a smaller MSE, and less CPU time. We ran each code from the same starting point and used the same continuation strategy on the parameter w. The convergence behavior of each algorithm was observed to achieve an accurate solution. The experiment, shown in Figure 4 and Figure 5, was replicated more than a hundred times with remarkably similar results. The SDYCG algorithm recovers the sparse signal with minimal processing time and minimal MSE. Furthermore, for both the objective function and the MSE, the SDYCG method decreases faster than the PCG and MSP algorithms.
In addition, for the image restoration experiment, we used Y as a partial DWT matrix, with k rows chosen at random from the n × n DWT matrix. This type of encoding matrix Y does not need explicit storage and allows for quick matrix-vector multiplications with Y and Y^T. As a result, it may be tested on large images without the need to store any matrix. The initial parameter values are a = 0.2, g = 0.000001, c = 9, and b = 0.4. In this experiment, we utilized the standard Lena colour image with a size of 512 × 512 and a colour baby image with a size of 256 × 256. For performance comparison, we considered the well-known PSGM [34], CGD [35], and TPRP [36] iterative algorithms. The iteration procedure of all algorithms began at Y^T m and ended when the quantity in (60) was less than 10^{−5}. Figure 6 and Figure 7 show the original, blurred, and reconstructed images generated by each algorithm. As can be seen from the figures, all of the algorithms considered generated images of similar quality. However, SDYCG was quicker. As a result, we can infer that SDYCG is the winner.

5. Conclusions

We proposed a scaled DY projection-based CG algorithm for solving convex-constrained nonlinear monotone equations, with applications to signal and image restoration problems. Independent of the line search strategy employed, the proposed directions satisfy the sufficient descent condition. We also offered two alternative ways of determining the scaling parameter at each iteration, namely, by forcing the proposed direction to approach the quasi-Newton direction and by utilizing the popular Barzilai–Borwein [22] technique. The proposed algorithm's global convergence result was established under some mild assumptions. The robustness of the proposed algorithm was demonstrated by its ability to solve large-scale monotone nonlinear systems with convex constraints. The two proposed directions may be employed in all fields of CG method application, such as unconstrained optimization problems, motion control of robotic manipulators, etc.

Author Contributions

Methodology, software, supervision, A.A.; validation, writing-original draft preparation, investigation, J.S.; visualization, project administration, formal analysis, H.E.; writing-review and editing, funding acquisition, resources, P.J.; writing-review and editing, data curation, S.K.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Taif University Researchers Supporting Project number (TURSP-2020/326), Taif University, Taif, Saudi Arabia.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dai, Z.; Zhou, H.; Wen, F.; He, S. Efficient predictability of stock return volatility: The role of stock market implied volatility. N. Am. J. Econ. Finance 2020, 52, 101174.
  2. Figueiredo, M.A.; Nowak, R.D.; Wright, S.J. Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 2007, 1, 586–597.
  3. Iusem, A.N.; Solodov, M.V. Newton-type methods with generalized distances for constrained optimization. Optimization 1997, 41, 257–278.
  4. Zhao, Y.B.; Li, D.H. Monotonicity of fixed point and normal mapping associated with variational inequality and its application. SIAM J. Optim. 2001, 4, 962–973.
  5. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: Cambridge, MA, USA, 1970.
  6. Zhou, G.; Toh, K.C. Superlinear convergence of a Newton-type algorithm for monotone equations. J. Optim. Theory Appl. 2005, 125, 205–221.
  7. Zhou, W.J.; Li, D.H. A globally convergent BFGS method for nonlinear monotone equations without any merit functions. Math. Comput. 2008, 77, 2231–2240.
  8. Sabi'u, J.; Shah, A.; Waziri, M.Y.; Ahmed, K. Modified Hager–Zhang conjugate gradient methods via singular value analysis for solving monotone nonlinear equations with convex constraint. Int. J. Comput. Methods 2020, 18, 2050043.
  9. Sabi'u, J.; Shah, A.; Waziri, M.Y. A modified Hager–Zhang conjugate gradient method with optimal choices for solving monotone nonlinear equations. Int. J. Comput. Math. 2021, 99, 1–23.
  10. Sabi'u, J.; Shah, A. An efficient three-term conjugate gradient-type algorithm for monotone nonlinear equations. RAIRO-Oper. Res. 2021, 55, 1113.
  11. Waziri, M.Y.; Ahmed, K.; Sabi'u, J.; Halilu, A.S. Enhanced Dai–Liao conjugate gradient methods for systems of monotone nonlinear equations. SeMA J. 2021, 78, 15–51.
  12. Abubakar, A.B.; Sabi'u, J.; Kumam, P.; Shah, A. Solving nonlinear monotone operator equations via modified SR1 update. J. Appl. Math. Comput. 2021, 67, 1–31.
  13. Waziri, M.Y.; Hungu, K.A.; Sabi'u, J. Descent Perry conjugate gradient methods for systems of monotone nonlinear equations. Numer. Algorithms 2020, 85, 763–785.
  14. Waziri, M.Y.; Ahmed, K.; Sabi'u, J. A Dai–Liao conjugate gradient method via modified secant equation for system of nonlinear equations. Arab. J. Math. 2020, 9, 443–457.
  15. Sabi'u, J.; Shah, A.; Waziri, M.Y. Two optimal Hager–Zhang conjugate gradient methods for solving monotone nonlinear equations. Appl. Numer. Math. 2020, 153, 217–233.
  16. Dai, Y.H.; Yuan, Y. A nonlinear conjugate gradient method with a strong global convergence property. SIAM J. Optim. 1999, 10, 177–182.
  17. Waziri, M.Y.; Ahmed, K. Two descent Dai–Yuan conjugate gradient methods for systems of monotone nonlinear equations. J. Sci. Comput. 2022, 90, 1–53.
  18. Kambheera, A.; Ibrahim, A.H.; Muhammad, A.B.; Abubakar, A.B.; Hassan, B.A. Modified Dai–Yuan conjugate gradient method with sufficient descent property for nonlinear equations. Thai J. Math. 2022, 145–167. Available online: http://thaijmath.in.cmu.ac.th/index.php/thaijmath/article/viewFile/6026/354355047 (accessed on 6 June 2022).
  19. Aji, S.; Kumam, P.; Awwal, A.M.; Yahaya, M.M.; Sitthithakerngkiet, K. An efficient DY-type spectral conjugate gradient method for system of nonlinear monotone equations with application in signal recovery. AIMS Math. 2021, 6, 8078–8106.
  20. Abdullahi, H.; Awasthi, A.K.; Waziri, M.Y.; Halilu, A.S. Descent three-term DY-type conjugate gradient methods for constrained monotone equations with application. Comput. Appl. Math. 2022, 41, 1–28.
  21. Sabi'u, J.; Aremu, K.O.; Althobaiti, A.; Shah, A. Scaled three-term conjugate gradient methods for solving monotone equations with application. Symmetry 2022, 14, 936.
  22. Barzilai, J.; Borwein, J.M. Two-point step size gradient methods. IMA J. Numer. Anal. 1988, 8, 141–148.
  23. Zheng, L.; Yang, L.; Liang, Y. A modified spectral gradient projection method for solving non-linear monotone equations with convex constraints and its application. IEEE Access 2020, 8, 92677–92686.
  24. Amini, K.; Faramarzi, P.; Bahrami, S. A spectral conjugate gradient projection algorithm to solve the large-scale system of monotone nonlinear equations with application to compressed sensing. Int. J. Comput. Math. 2022, 1–18.
  25. La Cruz, W.; Martínez, J.; Raydan, M. Spectral residual method without gradient information for solving large-scale nonlinear systems of equations. Math. Comput. 2006, 75, 1429–1448.
  26. Halilu, A.S.; Majumder, A.; Waziri, M.Y.; Ahmed, K. Signal recovery with convex constrained nonlinear monotone equations through conjugate gradient hybrid approach. Math. Comput. Simul. 2021, 187, 520–539.
  27. Liu, J.; Duan, Y. Two spectral gradient projection methods for constrained equations and their linear convergence rate. J. Inequal. Appl. 2015, 2015, 1–13.
  28. Dolan, E.D.; Moré, J.J. Benchmarking optimization software with performance profiles. Math. Program. 2002, 91, 201–213.
  29. Hale, E.T.; Yin, W.; Zhang, Y. A fixed-point continuation method for l1-regularized minimization with applications to compressed sensing. SIAM J. Optim. 2008, 19, 1107–1130.
  30. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202.
  31. Van den Berg, E.; Friedlander, M.P. Probing the Pareto frontier for basis pursuit solutions. SIAM J. Sci. Comput. 2008, 31, 890–912.
  32. Xiao, Y.H.; Wang, Q.Y.; Hu, Q.J. Non-smooth equations based method for l1-norm problems with applications to compressed sensing. Nonlinear Anal. TMA 2011, 74, 3570–3577.
  33. Pang, J.S. Inexact Newton methods for the nonlinear complementarity problem. Math. Program. 1986, 36, 54–71.
  34. Awwal, A.M.; Kumam, P.; Mohammad, H.; Watthayu, W.; Abubakar, A.B. A Perry-type derivative-free algorithm for solving nonlinear system of equations and minimizing l1 regularized problem. Optimization 2021, 70, 1231–1259.
  35. Xiao, Y.H.; Zhu, H. A conjugate gradient method to solve convex constrained monotone equations with applications in compressive sensing. J. Math. Anal. Appl. 2013, 405, 310–319.
  36. Ibrahim, A.H.; Deepho, J.; Abubakar, A.B.; Adamu, A. A three-term Polak–Ribière–Polyak derivative-free method and its application to image restoration. Sci. Afr. 2021, 13, e00880.
  37. Liu, J.K.; Li, S.J. A projection method for convex constrained monotone nonlinear equations with applications. Comput. Math. Appl. 2015, 70, 2442–2453.
  38. Abubakar, A.B.; Kumam, P.; Mohammad, H.; Awwal, A.M. A Barzilai–Borwein gradient projection method for sparse signal and blurred image restoration. J. Franklin Inst. 2020, 357, 7266–7285.
Figure 1. Performance profile of the SDYCG algorithm versus the MSGP [23] and DFSP [24] algorithms for the number of iterations.
Figure 2. Performance profile of the SDYCG algorithm versus the MSGP [23] and DFSP [24] algorithms for the CPU time.
Figure 3. Performance profile of the SDYCG algorithm versus the MSGP [23] and DFSP [24] algorithms for the number of function evaluations.
Figure 4. The original signal, the measurement, and the signals recovered by the SDYCG, PCG, and MSP algorithms, shown from top to bottom.
Figure 5. Comparison of the SDYCG, PCG, and MSP algorithms. The x-axes represent the number of iterations and the CPU time in seconds, respectively; the y-axes represent the MSE and the objective function values.
Figure 6. Original image; blurred image; restored image by SDYCG with ITER = 10, obj = 8.128 × 10^5, TIME = 3.27, MSE = 6.3421 × 10^1, SNR = 20.65, SSIM = 0.90; restored image by PSGM with ITER = 45, obj = 7.365 × 10^5, TIME = 14.00, MSE = 3.8961 × 10^1, SNR = 22.77, SSIM = 0.93; restored image by CGD with ITER = 713, obj = 7.268 × 10^5, TIME = 167.89, MSE = 4.4565 × 10^1, SNR = 22.19, SSIM = 0.93; and restored image by TPRP with ITER = 104, obj = 7.445 × 10^5, TIME = 3049.66, MSE = 3.8931 × 10^1, SNR = 22.77, SSIM = 0.92.
Figure 7. Original image; blurred image; restored image by SDYCG with ITER = 25, obj = 1.251 × 10^6, TIME = 5.67, MSE = 6.7866 × 10^1, SNR = 22.55, SSIM = 0.85; restored image by PSGM with ITER = 29, obj = 1.222 × 10^6, TIME = 6.03, MSE = 5.8080 × 10^1, SNR = 23.23, SSIM = 0.87; restored image by CGD with ITER = 491, obj = 1.214 × 10^6, TIME = 106.47, MSE = 6.0494 × 10^1, SNR = 23.05, SSIM = 0.88; and restored image by TPRP with ITER = 41, obj = 1.256 × 10^6, TIME = 4069.64, MSE = 7.0079 × 10^1, SNR = 22.41, SSIM = 0.85.
Table 1. Numerical comparison of the SDYCG algorithm versus the DFSP [24] and MSGP [23] algorithms on Problem 1.
DIMENSION | INITIAL POINT | SDYCG(1): ITER FVAL TIME NORM | SDYCG(2): ITER FVAL TIME NORM | MSGP: ITER FVAL TIME NORM | DFSP: ITER FVAL TIME NORM
500 x 0 1 2270.17805202270.030794012115380.1028129.3 × 10 10 7730.0518870
x 0 2 2270.00791102270.01043109111730.0758938.58 × 10 10 4450.0259090
x 0 3 2270.00402502270.00428409512000.1462938.29 × 10 10 6690.0135210
x 0 4 3400.00852103400.0082340FailFailFailFail5570.0117950
x 0 5 2270.00688502270.0068780FailFailFailFail5560.0117740
x 0 6 2270.00594802270.0082590577250.0422698.33 × 10 10 6720.0142650
x 0 7 2270.00409202270.0064610FailFailFailFail5560.012170
x 0 8 5660.00950105660.01353402180.00434302170.070650
1000 x 0 1 2270.00463402270.008994013417060.3225199.92 × 10 10 8840.0227460
x 0 2 2270.00965502270.00919209111730.2257438.59 × 10 10 4450.0124670
x 0 3 2270.00861902270.00926608911220.2120447.94 × 10 10 9780.0197960
x 0 4 3400.0090603400.0117550FailFailFailFail5560.015590
x 0 5 2270.00677902270.0088950FailFailFailFail5560.0151610
x 0 6 2270.00479502270.0074950577250.1387297.84 × 10 10 6720.0190590
x 0 7 2270.00663202270.0096410FailFailFailFail5560.0156440
x 0 8 5660.01744405660.01705502180.00490802170.0069290
10,000 x 0 1 2270.04726702270.041266010413111.5607939.44 × 10 10 8650.1166770
x 0 2 2270.0319402270.03956409111731.9971198.6 × 10 10 4450.0800510
x 0 3 2270.03113602270.04183309812302.6013489.08 × 10 10 9680.1232080
x 0 4 3400.05708103400.0568580FailFailFailFail5560.096620
x 0 5 2270.03534202270.0406750FailFailFailFail5560.1036590
x 0 6 2270.02708902270.0382790567121.4827539.8 × 10 10 6720.1262390
x 0 7 2270.04158702270.040660FailFailFailFail5560.1025530
x 0 8 5660.08905305660.11198602180.04236603300.0547290
50,000 x 0 1 2270.11985102270.1923340103129310.944059.7 × 10 10 10890.6909110
x 0 2 2270.15800102270.18170909111738.0388918.6 × 10 10 4450.3292120
x 0 3 2270.187202270.1954890FailFailFailFail7710.5203110
x 0 4 3400.17059503400.2949150FailFailFailFail5560.4257460
x 0 5 2270.18215502270.197150FailFailFailFail5560.4182110
x 0 6 2270.11875302270.1884440567124.0082449.74 × 10 10 6720.5276910
x 0 7 2270.19192602270.1997470FailFailFailFail5560.4211860
x 0 8 5660.36367905660.46700702180.14228803300.2231210
100,000 x 0 1 2270.44700202270.4232250116145617.300299.17 × 10 10 9671.1146140
x 0 2 2270.28212702270.398508091117313.143278.6 × 10 10 4450.8649170
x 0 3 2270.43882602270.4395310FailFailFailFail111241.8965610
x 0 4 3400.45904403400.6301710FailFailFailFail5560.9513450
x 0 5 2270.45713202270.4412140FailFailFailFail5561.0734450
x 0 6 2270.38743202270.3981130567128.4518649.73 × 10 10 6721.1976230
x 0 7 2270.51252302270.4249110FailFailFailFail5560.9592610
x 0 8 5660.7624205660.99239702180.26669803300.5451180
Table 2. Numerical comparison of the SDYCG algorithm versus the DFSP [24] and MSGP [23] algorithms on Problem 2.
DIMENSION | INITIAL POINT | SDYCG(1): ITER FVAL TIME NORM | SDYCG(2): ITER FVAL TIME NORM | MSGP: ITER FVAL TIME NORM | DFSP: ITER FVAL TIME NORM
500 x 0 1 7150.0169951 × 10 10 7150.0090551 × 10 10 250.0033110250.0061530
x 0 2 11340.01041504200.0089330370.00403104130.0079090
x 0 3 6130.0061061.9 × 10 10 6130.0085331.9 × 10 10 250.004184017350.0160279.24 × 10 10
x 0 4 5110.00587805110.00816504200.00845705120.0070360
x 0 5 5110.00585305110.00816404200.00701305120.0074060
x 0 6 6130.0085090490.0064860370.0042930490.0071390
x 0 7 5110.00674905110.00768504200.0063705120.0080240
x 0 8 130.0035920130.0041390130.0033450130.0039130
1000 x 0 1 7150.0092241.74 × 10 11 7150.0119621.74 × 10 11 250.004560250.0081430
x 0 2 490.0069404200.0116060370.00510604130.0091230
x 0 3 6130.0093093.6 × 10 11 6130.0108883.6 × 10 11 250.004814018370.0222155.1 × 10 10
x 0 4 5110.00914405110.00870104200.00921105120.009470
x 0 5 5110.00848905110.00880804200.00924405120.0095050
x 0 6 370.0056780490.0084070370.0049770490.0083680
x 0 7 5110.00808705110.00923804200.00860405120.0096770
x 0 8 130.00380130.0049870130.0053210130.0044670
10,000 x 0 1 7150.0457081.11 × 10 13 7150.0433341.33 × 10 13 250.024460250.0177140
x 0 2 490.0258604200.049290370.01749704130.033380
x 0 3 6130.0387793.55 × 10 13 6130.0439853.77 × 10 13 250.015054020410.1083593.63 × 10 10
x 0 4 5110.03235605110.03041704200.04768905120.036190
x 0 5 5110.03336405110.03408304200.0549405120.0380860
x 0 6 370.0222430490.0265860370.0278660490.0268960
x 0 7 5110.03274605110.03165404200.04710705120.0369120
x 0 8 130.0095830130.0091030130.0103960130.011290
50,000 x 0 1 6130.1300264.01 × 10 10 6130.1347824.01 × 10 10 250.051851021430.4450893.8 × 10 10
x 0 2 490.08273204200.1590780370.07405504130.1177980
x 0 3 6130.1099394.97 × 10 14 6140.1367211.34 × 10 11 250.051362021430.4176914.89 × 10 10
x 0 4 5110.09654605110.10625504200.16390905120.1208210
x 0 5 5110.09432905110.1083604200.17090505120.1146350
x 0 6 370.0587250490.0923470370.0698730490.0897480
x 0 7 5110.10121905110.11321304200.16855405120.1188810
x 0 8 130.0260440130.0288720130.0273490130.027840
100,000 x 0 1 6130.2292672.71 × 10 10 6130.2678162.71 × 10 10 250.104652022450.8770744.89 × 10 10
x 0 2 490.14529204200.3445690370.13136604130.2530010
x 0 3 5110.2082559.27 × 10 10 5110.214649.27 × 10 10 250.095152021430.8484329.42 × 10 10
x 0 4 5110.17139705110.21458504200.32031805120.2336110
x 0 5 5110.16781605110.21338204200.3422205120.2623680
x 0 6 370.1272680490.170470370.1320360490.1990590
x 0 7 5110.16250105110.21484104200.32140805120.2646220
x 0 8 130.0545560130.0517970130.0516470130.0556880
Table 3. Numerical comparison of the SDYCG algorithm versus the DFSP [24] and MSGP [23] algorithms on Problem 3.
DIMENSION | INITIAL POINT | SDYCG(1): ITER FVAL TIME NORM | SDYCG(2): ITER FVAL TIME NORM | MSGP: ITER FVAL TIME NORM | DFSP: ITER FVAL TIME NORM
500 x 0 1 1140.023901140.0076701140.0070802260.0101940
x 0 2 1140.00773201140.00864703400.0123902240.0093970
x 0 3 1140.00798501140.00740601140.00775702240.0105150
x 0 4 1140.00744501140.0073370FailFailFailFail5630.0193280
x 0 5 1140.00748301140.00749403400.01160505630.0209520
x 0 6 1140.00608301140.0075180FailFailFailFail2240.0086740
x 0 7 1140.00698801140.00687303400.01150505630.0192380
x 0 8 2270.01634702270.01555601120.0067150190.0064390
1000 x 0 1 1140.0089501140.00953801140.00948702260.014870
x 0 2 1140.00787901140.00887703400.0174702240.0113460
x 0 3 1140.00875501140.0095401140.00975102240.012990
x 0 4 1140.00949501140.0092760FailFailFailFail5630.0301070
x 0 5 1140.0090801140.00892403400.01694505630.0284820
x 0 6 1140.00803801140.0085380FailFailFailFail2240.0126810
x 0 7 1140.00897201140.00907803400.01622505630.0276160
x 0 8 2270.02146902270.02311401120.0087730190.0076660
10,000 x 0 1 1140.03613201140.03953601140.04249402260.0692140
x 0 2 1140.03115601140.03829103400.09884802240.0594190
x 0 3 1140.03881701140.04539901140.03898402240.0641270
x 0 4 1140.03685101140.0475150FailFailFailFail5630.149850
x 0 5 1140.04720401140.04134703400.09662205630.1464820
x 0 6 1140.03151501140.0377430FailFailFailFail2240.0560530
x 0 7 1140.0352401140.03851703400.09422205630.1603580
x 0 8 2270.08745802270.09289601120.0332030190.0245820
50,000 x 0 1 1140.15401201140.14830501140.14624302260.2586530
x 0 2 1140.12818201140.13481503400.38424902240.215260
x 0 3 1140.15250601140.14660401140.1489502240.2298670
x 0 4 1140.13407601140.1607410FailFailFailFail5630.620620
x 0 5 1140.1507701140.15246703400.29754705630.5631230
x 0 6 1140.11979101140.1295540FailFailFailFail2240.2317040
x 0 7 1140.14551301140.15059603400.35907705630.6756050
x 0 8 2270.33178402270.39361701120.0903850190.0687570
100,000 x 0 1 1140.24758601140.2913201140.22909102260.5162860
x 0 2 1140.23824801140.26157703400.58618302240.3497970
x 0 3 1140.25222701140.28420901140.23037802240.4967480
x 0 4 1140.21076601140.3145890FailFailFailFail5631.3009430
x 0 5 1140.2629401140.29435903400.73555605631.6785950
x 0 6 1140.20409201140.2657770FailFailFailFail2240.4544250
x 0 7 1140.23487101140.28714303400.7026605631.2597940
x 0 8 2270.6667702270.81546101120.1347470190.1756680
Table 4. Numerical comparison of the SDYCG algorithm versus the DFSP [24] and MSGP [23] algorithms on Problem 4.
DIMENSION | INITIAL POINT | SDYCG(1): ITER FVAL TIME NORM | SDYCG(2): ITER FVAL TIME NORM | MSGP: ITER FVAL TIME NORM | DFSP: ITER FVAL TIME NORM
500 x 0 1 1140.01443901140.00611807400.007555021540.0128053.36 × 10 10
x 0 2 2270.00531102270.00761705520.00962506660.0127110
x 0 3 1140.00465801140.00580706350.006392021450.0135983.34 × 10 10
x 0 4 2270.00603902270.0074409311670.1262119.31 × 10 10 5460.0100580
x 0 5 2270.00780202270.00755209211540.142099.1 × 10 10 6600.0121410
x 0 6 2270.00654202270.0077020313850.0444296.01 × 10 10 5500.0107740
x 0 7 2270.00724402270.00767309211540.2567539.1 × 10 10 6600.0124020
x 0 8 130.0036370130.0035580130.0027320130.0037190
1000 x 0 1 1140.00783801140.0071808560.021892021540.0217974.76 × 10 10
x 0 2 2270.00918802270.00923405520.02122306660.0168020
x 0 3 1140.00585101140.00669505250.013147021450.0166184.74 × 10 10
x 0 4 2270.00781702270.009626010713470.2266859.14 × 10 10 5460.0138430
x 0 5 2270.00867102270.010016010713480.3736359.05 × 10 10 6590.0167370
x 0 6 2270.00886702270.0086530313850.1188655.98 × 10 10 5500.013040
x 0 7 2270.00893902270.009434010713480.2223999.05 × 10 10 6590.015180
x 0 8 130.0041020130.0040620130.0035080130.0039790
10,000 x 0 1 1140.02273301140.02523207330.076215022560.1001934.57 × 10 10
x 0 2 2270.03486602270.03960705520.05895306660.0808010
x 0 3 1140.01961601140.02344406290.04933022470.0818464.61 × 10 10
x 0 4 2270.03448602270.03894013416932.212059.46 × 10 10 6590.0770720
x 0 5 2270.03480402270.039725013416932.1772779.54 × 10 10 6590.0947290
x 0 6 2270.03374302270.0381460313850.5098655.96 × 10 10 5500.0604770
x 0 7 2270.0372102270.04104013416932.1453719.54 × 10 10 6590.0837340
x 0 8 130.0088650130.0092990130.0082540130.0078810
50,000 x 0 1 1140.07693201140.08755607330.202677023580.3791233.14 × 10 10
x 0 2 2270.13506102270.14182805520.28919306660.330150
x 0 3 1140.07086101140.08054206280.17549023490.3331413.22 × 10 10
x 0 4 2270.1422302270.147057014318139.163879.66 × 10 10 6590.2994750
x 0 5 2270.14052202270.14851014318138.8031399.68 × 10 10 6590.3196910
x 0 6 2270.14628502270.1398330313851.8715165.96 × 10 10 5500.2588480
x 0 7 2270.14194102270.146955014318137.503369.68 × 10 10 6590.3111930
x 0 8 130.0277150130.0224380130.0172930130.0249750
100,000 x 0 1 1140.16040501140.15996107330.360256023580.7777674.51 × 10 10
x 0 2 2270.26811502270.27703805520.53420106660.8016370
x 0 3 1140.13648601140.15699106280.31303023490.7287794.7 × 10 10
x 0 4 2270.2776102270.2979490147186516.40689.46 × 10 10 6590.7295950
x 0 5 2270.25690302270.2859980147186516.008519.47 × 10 10 6590.7399320
x 0 6 2270.26307602270.2831140313852.0457935.96 × 10 10 5500.6008720
x 0 7 2270.23631702270.2870230147186513.370629.47 × 10 10 6590.7484390
x 0 8 130.0402070130.0414830130.0425680130.0459160
Table 5. Numerical comparison of the SDYCG algorithm versus the DFSP [24] and MSGP [23] algorithms on Problem 5.
DIMENSION | INITIAL POINT | SDYCG(1): ITER FVAL TIME NORM | SDYCG(2): ITER FVAL TIME NORM | MSGP: ITER FVAL TIME NORM | DFSP: ITER FVAL TIME NORM
500 x 0 1 1140.02150601140.00726106410.01146809850.0244219.27 × 10 11
x 0 2 1140.00854901140.00770308920.01962906710.0186790
x 0 3 1140.00650801140.00702306450.01982209850.0251874.01 × 10 11
x 0 4 1140.00623301140.00694012015220.2934439.13 × 10 10 5580.0186680
x 0 5 1140.00614601140.006566012115400.2825299.12 × 10 10 5580.0171610
x 0 6 1140.00583401140.0064130719010.2154928.85 × 10 10 6710.0202370
x 0 7 1140.00696501140.007415012115400.4995589.12 × 10 10 5580.0174930
x 0 8 130.0043550130.004170130.0033390130.0050340
1000 x 0 1 1140.00787301140.00794606350.02222909850.0303141.38 × 10 10
x 0 2 1140.00830801140.00793708920.04492506710.0279310
x 0 3 1140.0082701140.00886106420.01357409850.031555.69 × 10 11
x 0 4 1140.00842601140.008309014117920.8457629.32 × 10 10 5580.0226190
x 0 5 1140.00818601140.008268013617330.8601759.81 × 10 10 5580.0205160
x 0 6 1140.00748301140.0079440719010.2251849.41 × 10 10 6710.0268130
x 0 7 1140.00603601140.007867013617330.7498359.81 × 10 10 5580.0242080
x 0 8 130.0048910130.0047260130.0036930130.0046390
10,000 x 0 1 1140.03838301140.03568906350.13284309850.2195835.85 × 10 10
x 0 2 1140.03590401140.03447108920.17444406710.1702780
x 0 3 1140.03867301140.03429807450.09083109850.2430691.85 × 10 10
x 0 4 1140.03417901140.0352630FailFailFailFail5580.1400850
x 0 5 1140.03898901140.037575014518445.101259.92 × 10 10 5580.1445940
x 0 6 1140.03451701140.0360540719012.1721079.93 × 10 10 6710.1664040
x 0 7 1140.03588101140.03835014518444.7619649.92 × 10 10 5580.1382950
x 0 8 130.011220130.0128540130.010330130.0156780
50,000 x 0 1 1140.14969401140.14967807400.601808010961.0055546.02 × 10 11
x 0 2 1140.14763101140.15228208920.82437906710.756670
x 0 3 1140.13518401140.14997607390.57866209850.9407414.31 × 10 10
x 0 4 1140.16089801140.1448120FailFailFailFail5580.6247480
x 0 5 1140.14225601140.1540830157199518.947119.08 × 10 10 5580.6123330
x 0 6 1140.15517201140.1446410719015.7467759.98 × 10 10 6710.7620540
x 0 7 1140.14043701140.1515390157199517.569059.08 × 10 10 5580.6178050
x 0 8 130.0355450130.0385430130.0300920130.0401410
100,000 x 0 1 1140.27322701140.2960407390.709299010961.963931.05 × 10 10
x 0 2 1140.30575901140.2891908922.14478106712.1166820
x 0 3 1140.26533201140.28244407400.93599109851.8041356.28 × 10 10
x 0 4 1140.28007601140.2901060FailFailFailFail5581.2641320
x 0 5 1140.2649301140.2878670FailFailFailFail5581.6404610
x 0 6 1140.27205301140.28787307190119.870819.99 × 10 10 6711.578780
x 0 7 1140.27619901140.2934670FailFailFailFail5581.6463270
x 0 8 130.0663750130.0704810130.0559970130.0742970
Table 6. Numerical comparison of the SDYCG algorithm versus the DFSP [24] and MSGP [23] algorithms on Problem 6.
DIMENSION | INITIAL POINT | SDYCG(1): ITER FVAL TIME NORM | SDYCG(2): ITER FVAL TIME NORM | MSGP: ITER FVAL TIME NORM | DFSP: ITER FVAL TIME NORM
500 x 0 1 1140.02262401140.00641505310.009245021450.0181098.78 × 10 10
x 0 2 2270.00801502270.0081805540.01308406560.0144510
x 0 3 1140.00582201140.00556705300.009862020420.0168396.95 × 10 10
x 0 4 2270.00730102270.0092120FailFailFailFail5430.011770
x 0 5 2270.00810202270.0083670FailFailFailFail6560.0138960
x 0 6 2270.00773402270.008301013617040.293599.82 × 10 10 6570.0139730
x 0 7 2270.00845302270.0080590FailFailFailFail6560.0136350
x 0 8 2270.00812302270.0087960170.0030920160.0046030
1000 x 0 1 1140.00748701140.00759806430.014542022470.0226233.76 × 10 10
x 0 2 2270.00968502270.0089605540.01632106560.0180950
x 0 3 1140.00717101140.00643906390.014019020420.0219359.87 × 10 10
x 0 4 2270.01027402270.009290FailFailFailFail6560.0185090
x 0 5 2270.01013102270.0091050FailFailFailFail6570.01880
x 0 6 2270.01101402270.01066013617040.4319679.88 × 10 10 6570.0188370
x 0 7 2270.01082802270.0099330FailFailFailFail6570.0194870
x 0 8 2270.01074202270.0109660170.0033460160.0047710
10,000 x 0 1 1140.02758101140.02909206370.06886023490.1131963.84 × 10 10
x 0 2 2270.04537502270.04618605540.09239806560.0932380
x 0 3 1140.02714501140.0255506310.061397021440.1011499.68 × 10 10
x 0 4 2270.05150702270.0498990FailFailFailFail6560.0967730
x 0 5 2270.05573702270.0460710FailFailFailFail6560.0960530
x 0 6 2270.04827902270.046802013617042.6238279.93 × 10 10 6570.0937590
x 0 7 2270.05357602270.0478140FailFailFailFail6560.0984860
x 0 8 2270.05532402270.0531020170.0141410160.0154490
50,000 x 0 1 1140.1016101140.10195506320.236257023490.4458189.78 × 10 10
x 0 2 2270.18828802270.1874405540.37547506560.3820230
x 0 3 1140.10206501140.09717305230.176346022460.4594866.89 × 10 10
x 0 4 2270.20147302270.1906490FailFailFailFail5430.3111940
x 0 5 2270.19428102270.1888930FailFailFailFail5430.3211510
x 0 6 2270.18976602270.178816013617049.8107229.94 × 10 10 6570.3937910
x 0 7 2270.18874702270.1896130FailFailFailFail5430.3118080
x 0 8 2270.20209202270.1980670170.0544610160.0489220
100,000 x 0 1 1140.20043701140.20702106310.315882024510.937274.57 × 10 10
x 0 2 2270.37525202270.37457505540.50435206560.9318540
x 0 3 1140.190801140.18381905230.440007023480.8723023.05 × 10 10
x 0 4 2270.35574502270.3703610FailFailFailFail5430.7097850
x 0 5 2270.35670402270.3753880FailFailFailFail5430.6533360
x 0 6 2270.3691502270.3623550136170418.081039.94 × 10 10 6570.9083980
x 0 7 2270.37812402270.3718350FailFailFailFail5430.642920
x 0 8 2270.43277202270.4480330170.0900260160.0917680
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
