Article

A Conjugate Gradient Method: Quantum Spectral Polak–Ribiére–Polyak Approach for Unconstrained Optimization Problems

1 International Business School, Shaanxi Normal University, Xi’an 710048, China
2 Department of Mathematics, Institute of Science, Banaras Hindu University, Varanasi 221005, India
3 Centre for Digital Transformation, Indian Institute of Management Ahmedabad, Vastrapur, Ahmedabad 380015, India
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(23), 4857; https://doi.org/10.3390/math11234857
Submission received: 22 October 2023 / Revised: 23 November 2023 / Accepted: 30 November 2023 / Published: 3 December 2023
(This article belongs to the Special Issue Advanced Optimization Methods and Applications, 2nd Edition)

Abstract:
Quantum computing is an emerging field that has had a significant impact on optimization. Among the diverse quantum algorithms, quantum gradient descent has become a prominent technique for solving unconstrained optimization (UO) problems. In this paper, we propose a quantum spectral Polak–Ribiére–Polyak (PRP) conjugate gradient (CG) approach. The technique can be considered a generalization of the spectral PRP method that employs a q-gradient approximating the classical gradient, with quadratically better dependence on the quantum variable q. Additionally, the proposed method reduces to the classical variant as the quantum variable q approaches 1. The quantum search direction always satisfies the sufficient descent condition and does not depend on any line search (LS). This approach is globally convergent under the standard Wolfe conditions without any convexity assumption. Numerical experiments are conducted and compared with the existing approach to demonstrate the improvement of the proposed strategy.

1. Introduction

Conjugate gradient methods remain a preferred alternative for solving a variety of multi-variable objective functions due to their better convergence rate and high accuracy [1]. Several authors have developed novel and high-performing CG optimization approaches [2,3,4]. These methods include classical [5], hybrid [6], scaled [7], and parameterized [8] CG methods. Two main issues need to be resolved: accurately calculating the step length and sequentially selecting orthogonal conjugate directions until the optimal point is reached [9]. When the starting point is far from the optimal solution and the objective function contains multiple local optima, the classical steepest descent method based on the gradient direction hits its limits [1] (see Chapter 6). Conjugate gradient methods require the gradient of the function. Generally, researchers use classical derivatives to compute the gradient, but the quantum derivative can also be used to find the quantum gradient of the function. We now consider the following nonlinear UO problem:
$$\underset{z \in \mathbb{R}^n}{\text{minimize}} \;\; \theta(z), \tag{1}$$
where $\theta : \mathbb{R}^n \to \mathbb{R}$ is a continuously quantum-differentiable function whose quantum gradient is given by $h_q(z)$. A good CG algorithm should both converge to an optimum solution and converge quickly. Researchers have already demonstrated efficient performance by replacing the gradient vector with the quantum gradient vector [10,11,12,13,14,15]. The CG method [1] is one of the most efficient and accurate methods for solving large-scale UO problems (1); its iterative sequence [1] (see Chapter 8) $\{z^{(\lambda)}\}$ is generated in the context of quantum calculus as:
$$z^{(\lambda+1)} = z^{(\lambda)} + t^{(\lambda)} p_q^{(\lambda)}, \quad \lambda = 0, 1, 2, \ldots, \tag{2}$$
where the scalar $t^{(\lambda)}$ is a positive step length, computed through any LS, and $p_q^{(\lambda)}$ is a quantum descent search direction given by:
$$p_q^{(\lambda)} = -h_q^{(\lambda)}, \quad \text{where } \lambda = 0. \tag{3}$$
The vector $h_q^{(\lambda)} = \nabla_{q^{(\lambda)}} \theta\big(z^{(\lambda)}\big)$ is the quantum gradient, so (3) is the quantum steepest descent direction [10] at the starting point $z^{(0)}$. For the next quantum iterations:
$$p_q^{(\lambda)} = -h_q^{(\lambda)} + \beta_{\mathrm{QPRP}}^{(\lambda)} \, p_q^{(\lambda-1)}, \quad \text{where } \lambda \ge 1. \tag{4}$$
Note that $\beta_{\mathrm{QPRP}}^{(\lambda)}$ is a scalar quantity known as the quantum CG parameter. The quantum PRP CG method given by Mishra et al. [10] is a popular CG method, whose quantum CG parameter is given as:
$$\beta_{\mathrm{QPRP}}^{(\lambda)} = \frac{\big(h_q^{(\lambda)}\big)^T \big(h_q^{(\lambda)} - h_q^{(\lambda-1)}\big)}{\big\|h_q^{(\lambda-1)}\big\|^2}, \tag{5}$$
where $\|\cdot\|$ denotes the Euclidean norm. Further, a sufficient quantum descent condition is required to reach the global convergence (GC) point of the objective function in (1). This condition is expressed as:
$$\big(h_q^{(\lambda)}\big)^T p_q^{(\lambda)} \le -c \, \big\|h_q^{(\lambda)}\big\|^2, \quad c > 0. \tag{6}$$
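Since every quantity above is driven by the q-gradient, it is worth making its computation concrete. Below is a minimal Python sketch (our own illustration, not code from the paper), assuming the componentwise Jackson q-derivative $D_{q_i}\theta(z) = \big[\theta(z_1, \ldots, q_i z_i, \ldots, z_n) - \theta(z)\big] / \big((q_i - 1) z_i\big)$ for $z_i \neq 0$, with a classical forward difference as a fallback at $z_i = 0$; the helper names q_gradient and beta_quantum_prp are hypothetical.

```python
import numpy as np

def q_gradient(theta, z, q):
    """Componentwise Jackson q-gradient of theta at z.

    q is a vector of parameters q_i close to (but not equal to) 1.
    The q-derivative is undefined at z_i = 0, so we fall back to a
    small forward difference there (an implementation choice).
    """
    z = np.asarray(z, dtype=float)
    grad = np.zeros_like(z)
    for i in range(z.size):
        zq = z.copy()
        if z[i] != 0.0:
            zq[i] = q[i] * z[i]
            grad[i] = (theta(zq) - theta(z)) / ((q[i] - 1.0) * z[i])
        else:
            h = 1e-8
            zq[i] = h
            grad[i] = (theta(zq) - theta(z)) / h
    return grad

def beta_quantum_prp(h_new, h_old):
    """Quantum PRP parameter (5): h_new^T (h_new - h_old) / ||h_old||^2."""
    return float(h_new @ (h_new - h_old)) / float(h_old @ h_old)
```

For example, for $\theta(x) = x^2$ the q-derivative is $(q + 1)x$, which tends to the classical derivative $2x$ as $q \to 1$.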
Researchers have proposed variants of the PRP CG method that establish condition (6). For instance, Zhang [11] suggested a modified PRP method that consistently executes a descent direction regardless of the LS employed. Wan et al. [16] established a distinct PRP method called the spectral PRP method; at each iteration, the search direction of the suggested method was shown to be a descent direction of the objective function. Hu et al. [17] proposed a class of improved CG methods that guarantee a descent direction for solving non-convex UO problems. More pertinent contributions are available in [18,19,20,21,22] and the references therein. The spectral gradient methods and the conjugate gradient methods are more popular for solving large-scale UO problems than the Newton and quasi-Newton methods; one of their advantages is that there is no need to compute and store the Jacobian matrix or its approximation. We first present the literature review for the existing methods.

2. Literature Review

In this section, we review the existing literature on conjugate gradient methods that take the PRP approach to solve UO problems. Powell showed that the PRP method with an exact LS can cycle without approaching a solution point [23]. Wan et al. proposed a modified spectral PRP conjugate gradient method for solving UO problems; it was proven that the search direction at each iteration is a descent direction of the objective function, and global convergence was established under mild conditions [16]. The spectral and conjugate parameters were chosen such that the obtained search direction is always sufficiently descent as well as close to the quasi-Newton direction; with these suitable choices, the boundedness requirement on the spectral parameter was removed [24]. A spectral method was used to prove the global convergence of nonlinear conjugate gradient methods under the standard Wolfe LS [25]. Three conjugate gradient methods were implemented based on the spectral scaling secant equation, an approximation to the spectrum of the Hessian of the objective function, and a sufficient descent condition with a spectral scaling secant equation [26]. A derivative-free spectral residual algorithm in a non-monotone PRP setting was proposed to solve square and under-determined systems of equations [27]. A new spectral conjugate gradient method was presented to solve nonlinear inverse problems, which were transformed into UO problems with a neighboring term; the global convergence and regularizing properties of the proposed method were analyzed [28]. A modification of the PRP conjugate gradient method, combining the spectral conjugate gradient method with the hyperplane projection technique, was proposed for solving systems of monotone nonlinear equations [29]. A new three-term spectral conjugate gradient algorithm based on the quasi-Newton equation was proposed for higher numerical performance [30]. A spectral conjugate gradient method was designed using the spectral parameter and the conjugate parameter for large-scale UO [31]. A modified spectral PRP conjugate gradient method was presented for solving tensor eigenvalue complementarity problems, and the numerical results were compared with the inexact Levenberg–Marquardt method [32]. Note that Table 1 summarizes the advantages and evaluation tools of the existing methods, which inform the new concepts in our proposed work.
The above existing research uses the classical derivative to solve UO problems. In our present research, we utilize the q-derivative in place of the classical derivative and formulate the following research questions:
RQ1.
When can the spectral PRP method be reduced to the standard PRP conjugate gradient approach?
RQ2.
How can the step length be chosen when selecting the subsequent quantum iterates?
RQ3.
What is the impact of using the quantum derivative in place of the classical derivative in the proposed method?
In the Literature Review section, we have seen that there are variants of methods exhibiting faster convergence. However, the quantum derivative has so far been used in only a very limited number of methods [10,13,14,33,34]. We aim to open doors to this exciting research area, where quantum calculus helps to solve problems in the least number of iterations: the quantum spectral gradient provides a quantum descent direction at every iteration. We generate values of q that are very close to 1, and a Wolfe-type inexact LS technique is utilized to find the step length without requiring bounded level sets on the gradient of the function. The advantage of using the quantum spectral gradient is shown by comparing our method with the method given in [16] based on the number of iterations. We propose to merge the concepts of the quantum spectral gradient and CG. We present a quantum spectral PRP approach in which the search direction is a quantum descent direction at each quantum iteration. The rapid search for the descent point in the proposed algorithm is made possible by the quantum variable q.
We pursue the above research questions as follows. In the next section, we present the proposed algorithm and show the convergence proof under several assumptions. We also present the quantum descent direction formulation, which helps to choose the next descent point. In Section 4, we show numerical illustrations for several nonlinear functions and compare our results with the existing method. In Section 5, we discuss our proposed method to justify our research questions. Section 6 concludes the paper.

3. A Quantum Spectral PRP CG Algorithm and Convergence Analysis

In this section, we construct our method step by step. Consider the UO problem defined by (1). The CG method of Wan et al. [16] solves (1) by generating a sequence of iterates $\{z^{(\lambda)}\}$. Following Wan et al. [16], we now introduce the quantum spectral PRP method for solving (1); its iterative scheme, based on the quantum gradient, was given in (2). Motivated by [16], the quantum search direction $p_q^{(\lambda)}$ for $\lambda \ge 1$ is presented as:
$$p_q^{(\lambda)} = -t^{(\lambda)} h_q^{(\lambda)} + \beta_{\mathrm{QPRP}}^{(\lambda)} \, p_q^{(\lambda-1)}, \tag{7}$$
where
$$t^{(\lambda)} = \frac{\big(p_q^{(\lambda-1)}\big)^T \big(h_q^{(\lambda)} - h_q^{(\lambda-1)}\big)}{\big\|h_q^{(\lambda-1)}\big\|^2} - \frac{\big(p_q^{(\lambda-1)}\big)^T h_q^{(\lambda)} \, \big(h_q^{(\lambda)}\big)^T h_q^{(\lambda-1)}}{\big\|h_q^{(\lambda)}\big\|^2 \big\|h_q^{(\lambda-1)}\big\|^2}. \tag{8}$$
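The direction (7) with spectral parameter (8) translates directly into code. A short sketch under the same assumptions as above (the function name spectral_prp_direction is hypothetical):

```python
import numpy as np

def spectral_prp_direction(h_new, h_old, p_old):
    """Quantum spectral PRP direction (7) with spectral parameter (8)."""
    y = h_new - h_old                       # quantum gradient difference
    g_old_sq = float(h_old @ h_old)         # ||h_old||^2
    g_new_sq = float(h_new @ h_new)         # ||h_new||^2
    # Spectral parameter t in (8)
    t = float(p_old @ y) / g_old_sq \
        - float(p_old @ h_new) * float(h_new @ h_old) / (g_new_sq * g_old_sq)
    beta = float(h_new @ y) / g_old_sq      # quantum PRP parameter (5)
    return -t * h_new + beta * p_old
```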
Based on all of the above, we present the algorithm for the quantum spectral PRP CG method as follows (Algorithm 1):
Algorithm 1 A quantum spectral PRP CG algorithm
• Step 0 Given a starting point $z^{(0)} \in \mathbb{R}^n$, constants $0 < \rho < \sigma < 1$, $\delta_1, \delta_2 > 0$, and $\epsilon > 0$. Set $\lambda = 0$.
• Step 1 If $\big\|h_q^{(\lambda)}\big\| \le \epsilon$, the algorithm terminates. Otherwise, compute $p_q^{(\lambda)}$ using (3), (7) and (8), and go to Step 2.
• Step 2 Find the step length $t^{(\lambda)} > 0$ such that
$$\theta\big(z^{(\lambda)}\big) - \theta\big(z^{(\lambda)} + t^{(\lambda)} p_q^{(\lambda)}\big) \ge \rho \big(t^{(\lambda)}\big)^2 \big\|p_q^{(\lambda)}\big\|^2, \qquad h_q\big(z^{(\lambda)} + t^{(\lambda)} p_q^{(\lambda)}\big)^T p_q^{(\lambda)} \ge -2\sigma\, t^{(\lambda)} \big\|p_q^{(\lambda)}\big\|^2. \tag{9}$$
• Step 3 Compute the next quantum descent point as $z^{(\lambda+1)} := z^{(\lambda)} + t^{(\lambda)} p_q^{(\lambda)}$.
• Step 4 Set $q_i^{(\lambda+1)} = 1 - \dfrac{q_i^{(\lambda)}}{(\lambda+1)^2}$ for all $i = 1, \ldots, n$.
• Step 5 Set $\lambda = \lambda + 1$ and go to Step 1.
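To make the flow of Algorithm 1 concrete, the following is a minimal Python sketch of its main loop; it is an illustration under our stated assumptions, not the authors' implementation (the experiments in Section 4 were run in R). It assumes the hypothetical helpers q_gradient and spectral_prp_direction sketched above, and a routine wolfe_type_step enforcing (9) (one possible realization is sketched after Theorem 1).

```python
import numpy as np

def quantum_spectral_prp(theta, z0, eps=1e-5, q0=0.99, max_iter=500):
    """Sketch of Algorithm 1 (quantum spectral PRP CG); illustrative only."""
    z = np.asarray(z0, dtype=float)
    q = np.full(z.size, q0)                      # Step 0: q_i chosen close to 1
    h = q_gradient(theta, z, q)
    p = -h                                       # direction (3) at lambda = 0
    for lam in range(max_iter):
        if np.linalg.norm(h) <= eps:             # Step 1: stopping test
            break
        t = wolfe_type_step(theta, z, p, q)      # Step 2: conditions (9)
        z = z + t * p                            # Step 3: next descent point
        q = 1.0 - q / (lam + 1.0) ** 2           # Step 4: update q componentwise
        h_new = q_gradient(theta, z, q)
        p = spectral_prp_direction(h_new, h, p)  # Step 1 via (7) and (8)
        h = h_new
    return z
```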
In this section, the GC of Algorithm 1 is proven. We begin with the given assumptions that play an important role in establishing the convergence proof of the proposed technique.
Assumption 1.
A1.
The level set $\Omega = \big\{ z \mid \theta(z) \le \theta\big(z^{(0)}\big) \big\}$ is bounded, where $z^{(0)}$ is the starting point.
A2.
There exists a constant $L > 0$ such that $\theta$ is continuously quantum-differentiable in a neighborhood $\mathcal{N}$ of $\Omega$, and its quantum gradient is Lipschitz continuous; that is,
$$\big\| h_q(y) - h_q(z) \big\| \le L \, \| y - z \|, \quad \forall\, y, z \in \mathcal{N}. \tag{10}$$
A3.
The following inequality holds for λ large enough:
$$\big(h_q^{(\lambda)}\big)^T h_q^{(\lambda)} - \frac{1}{2} \big(h_q^{(\lambda)}\big)^T h_q^{(\lambda-1)} > 0. \tag{11}$$
Remark 1.
Assumptions A1 and A2 imply that there exists a positive constant $\gamma$ such that
$$\big\| h_q(z) \big\| \le \gamma, \quad \forall\, z \in \mathcal{N}. \tag{12}$$
Lemma 1.
If the direction $p_q^{(\lambda)}$ is yielded by (3) and (7), then the following equation holds for every quantum iteration $\lambda$:
$$\big(h_q^{(\lambda)}\big)^T p_q^{(\lambda)} = -\big\|h_q^{(\lambda)}\big\|^2. \tag{13}$$
Proof. 
First, for $\lambda = 0$, using (3), it is easy to see that (13) is true. For $\lambda > 0$, we assume that
$$\big(p_q^{(\lambda-1)}\big)^T h_q^{(\lambda-1)} = -\big\|h_q^{(\lambda-1)}\big\|^2 \tag{14}$$
holds for $\lambda - 1$. Thus, we have the following:
$$\big(h_q^{(\lambda)}\big)^T p_q^{(\lambda)} = -t^{(\lambda)} \big\|h_q^{(\lambda)}\big\|^2 + \frac{\big(h_q^{(\lambda)}\big)^T \big(h_q^{(\lambda)} - h_q^{(\lambda-1)}\big)}{\big\|h_q^{(\lambda-1)}\big\|^2} \, \big(p_q^{(\lambda-1)}\big)^T h_q^{(\lambda)}.$$
From (8), we have
$$\big(h_q^{(\lambda)}\big)^T p_q^{(\lambda)} = -\left[ \frac{\big(p_q^{(\lambda-1)}\big)^T \big(h_q^{(\lambda)} - h_q^{(\lambda-1)}\big)}{\big\|h_q^{(\lambda-1)}\big\|^2} - \frac{\big(p_q^{(\lambda-1)}\big)^T h_q^{(\lambda)} \, \big(h_q^{(\lambda)}\big)^T h_q^{(\lambda-1)}}{\big\|h_q^{(\lambda)}\big\|^2 \big\|h_q^{(\lambda-1)}\big\|^2} \right] \big\|h_q^{(\lambda)}\big\|^2 + \frac{\big(h_q^{(\lambda)}\big)^T \big(h_q^{(\lambda)} - h_q^{(\lambda-1)}\big)}{\big\|h_q^{(\lambda-1)}\big\|^2} \, \big(p_q^{(\lambda-1)}\big)^T h_q^{(\lambda)}.$$
Therefore,
$$\big(h_q^{(\lambda)}\big)^T p_q^{(\lambda)} = -\frac{\big(p_q^{(\lambda-1)}\big)^T h_q^{(\lambda)} - \big(p_q^{(\lambda-1)}\big)^T h_q^{(\lambda-1)}}{\big\|h_q^{(\lambda-1)}\big\|^2} \big\|h_q^{(\lambda)}\big\|^2 + \frac{\big(p_q^{(\lambda-1)}\big)^T h_q^{(\lambda)} \, \big(h_q^{(\lambda)}\big)^T h_q^{(\lambda-1)}}{\big\|h_q^{(\lambda-1)}\big\|^2} + \frac{\big\|h_q^{(\lambda)}\big\|^2 \big(p_q^{(\lambda-1)}\big)^T h_q^{(\lambda)} - \big(h_q^{(\lambda)}\big)^T h_q^{(\lambda-1)} \, \big(p_q^{(\lambda-1)}\big)^T h_q^{(\lambda)}}{\big\|h_q^{(\lambda-1)}\big\|^2},$$
that is,
$$\big(h_q^{(\lambda)}\big)^T p_q^{(\lambda)} = -\frac{\big(p_q^{(\lambda-1)}\big)^T h_q^{(\lambda)}}{\big\|h_q^{(\lambda-1)}\big\|^2} \big\|h_q^{(\lambda)}\big\|^2 + \frac{\big(p_q^{(\lambda-1)}\big)^T h_q^{(\lambda-1)}}{\big\|h_q^{(\lambda-1)}\big\|^2} \big\|h_q^{(\lambda)}\big\|^2 + \frac{\big(h_q^{(\lambda)}\big)^T h_q^{(\lambda)}}{\big\|h_q^{(\lambda-1)}\big\|^2} \, \big(p_q^{(\lambda-1)}\big)^T h_q^{(\lambda)}.$$
Therefore,
$$\big(h_q^{(\lambda)}\big)^T p_q^{(\lambda)} = \frac{\big(p_q^{(\lambda-1)}\big)^T h_q^{(\lambda-1)}}{\big\|h_q^{(\lambda-1)}\big\|^2} \, \big\|h_q^{(\lambda)}\big\|^2.$$
From (14), we obtain
$$\big(h_q^{(\lambda)}\big)^T p_q^{(\lambda)} = -\big\|h_q^{(\lambda)}\big\|^2.$$
This completes the proof. □
Remark 2.
It is known from Lemma 1 that $p_q^{(\lambda)}$ is a descent direction of $\theta$ at $z^{(\lambda)}$. Additionally, if the exact LS is employed, then $\big(h_q^{(\lambda)}\big)^T p_q^{(\lambda-1)} = 0$; thus,
$$t^{(\lambda)} = \frac{\big(p_q^{(\lambda-1)}\big)^T h_q^{(\lambda)} - \big(p_q^{(\lambda-1)}\big)^T h_q^{(\lambda-1)}}{\big(h_q^{(\lambda-1)}\big)^T h_q^{(\lambda-1)}} - \frac{\big(h_q^{(\lambda)}\big)^T p_q^{(\lambda-1)} \, \big(h_q^{(\lambda)}\big)^T h_q^{(\lambda-1)}}{\big\|h_q^{(\lambda)}\big\|^2 \big\|h_q^{(\lambda-1)}\big\|^2}.$$
Therefore,
$$t^{(\lambda)} = -\frac{\big(p_q^{(\lambda-1)}\big)^T h_q^{(\lambda-1)}}{\big\|h_q^{(\lambda-1)}\big\|^2}.$$
Since from (14), we have
$$\big(p_q^{(\lambda-1)}\big)^T h_q^{(\lambda-1)} = -\big\|h_q^{(\lambda-1)}\big\|^2,$$
thus,
$$t^{(\lambda)} = 1.$$
Hence, under the exact LS, the suggested quantum spectral PRP conjugate gradient approach reduces to the standard PRP method. This answers research question RQ1. On the other hand, the exact LS is frequently needless and time-consuming, so this study presents a practical implementation of the Wolfe-type inexact LS. The next result shows the existence of the step length $t^{(\lambda)} > 0$ at each quantum iteration.
Theorem 1.
Suppose that Assumptions A1 and A2 hold; then there exists $t^{(\lambda)} > 0$ that satisfies (9).
Proof. 
Let $\psi(t) = \theta\big(z^{(\lambda)} + t \, p_q^{(\lambda)}\big) + \rho \, t^2 \big\|p_q^{(\lambda)}\big\|^2$. For any $t > 0$, we have
$$\lim_{t \to 0^+} \frac{\psi(t) - \psi(0)}{t} = \big(h_q^{(\lambda)}\big)^T p_q^{(\lambda)} < 0. \tag{15}$$
Therefore, there exists $\bar{t} > 0$ such that, for all $t \in (0, \bar{t}\,]$,
$$\frac{\psi(t) - \psi(0)}{t} \le 0. \tag{16}$$
From Assumption A1, it follows that
$$\lim_{t \to +\infty} \frac{\psi(t) - \psi(0)}{t} = +\infty. \tag{17}$$
Let
$$\hat{t}^{(\lambda)} = \inf \big\{ t > 0 \;\big|\; \psi(t) - \psi(0) = 0 \big\}. \tag{18}$$
By the intermediate value theorem together with (15) and (17), we know that $\hat{t}^{(\lambda)} > 0$ exists and satisfies
$$\psi\big(\hat{t}^{(\lambda)}\big) - \psi(0) = 0. \tag{19}$$
Moreover, for each $t \in \big(0, \hat{t}^{(\lambda)}\big]$, we have
$$\frac{\psi(t) - \psi(0)}{t} \le 0.$$
By the mean value theorem and (19), we obtain
$$\psi'\big(\alpha^{(\lambda)} \hat{t}^{(\lambda)}\big) = 0, \tag{20}$$
where $0 < \alpha^{(\lambda)} < 1$.
Therefore,
$$h_q\big(z^{(\lambda)} + \alpha^{(\lambda)} \hat{t}^{(\lambda)} p_q^{(\lambda)}\big)^T p_q^{(\lambda)} + 2\rho\, \alpha^{(\lambda)} \hat{t}^{(\lambda)} \big\|p_q^{(\lambda)}\big\|^2 = 0.$$
That is, since $\rho < \sigma$,
$$h_q\big(z^{(\lambda)} + \alpha^{(\lambda)} \hat{t}^{(\lambda)} p_q^{(\lambda)}\big)^T p_q^{(\lambda)} \ge -2\sigma\, \alpha^{(\lambda)} \hat{t}^{(\lambda)} \big\|p_q^{(\lambda)}\big\|^2.$$
Combining this with (16), we can take
$$t^{(\lambda)} = \alpha^{(\lambda)} \hat{t}^{(\lambda)}.$$
It is obvious that $t^{(\lambda)} < \hat{t}^{(\lambda)}$, so both conditions in (9) are satisfied and $t^{(\lambda)}$ is a desired step length. This answers research question RQ2. □
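Theorem 1 only guarantees that a suitable step length exists. In practice it can be located by bracketing: the first condition in (9) fails when $t$ is too large, while the second fails when $t$ is too small. A minimal Python sketch of such a search (our own illustration, reusing the hypothetical q_gradient helper from Section 1; $\rho = 0.3$ and $\sigma = 0.6$ are arbitrary values with $0 < \rho < \sigma < 1$):

```python
import numpy as np

def wolfe_type_step(theta, z, p, q, rho=0.3, sigma=0.6, max_iter=60):
    """Bracketing search for a step t > 0 satisfying both conditions in (9).

    Condition 1: theta(z) - theta(z + t p) >= rho * t**2 * ||p||**2
    Condition 2: h_q(z + t p)^T p >= -2 * sigma * t * ||p||**2
    """
    p_sq = float(p @ p)
    lo, hi, t = 0.0, np.inf, 1.0
    for _ in range(max_iter):
        z_new = z + t * p
        if theta(z) - theta(z_new) < rho * t ** 2 * p_sq:
            hi = t                     # condition 1 violated: t is too large
        elif float(q_gradient(theta, z_new, q) @ p) < -2.0 * sigma * t * p_sq:
            lo = t                     # condition 2 violated: t is too small
        else:
            return t                   # both conditions in (9) hold
        t = 0.5 * (lo + hi) if np.isfinite(hi) else 2.0 * t
    return t                           # fall back to the last trial step
```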
Lemma 1 and Theorem 1 show that Algorithm 1 is well defined. In addition, Lemma 1 and Assumption 1 can be used to establish the following result.
Lemma 2.
Suppose Assumption 1 holds; then we have
$$\sum_{\lambda \ge 0} \frac{\big\|h_q^{(\lambda)}\big\|^4}{\big\|p_q^{(\lambda)}\big\|^2} < \infty. \tag{21}$$
Proof. 
From the LS rule (9) and Assumption 1, it follows that
$$(2\sigma + L)\, t^{(\lambda)} \big\|p_q^{(\lambda)}\big\|^2 = 2\sigma\, t^{(\lambda)} \big\|p_q^{(\lambda)}\big\|^2 + L\, t^{(\lambda)} \big\|p_q^{(\lambda)}\big\|^2.$$
Therefore,
$$\big(h_q^{(\lambda+1)} - h_q^{(\lambda)}\big)^T p_q^{(\lambda)} - \big(h_q^{(\lambda+1)}\big)^T p_q^{(\lambda)} \le (2\sigma + L)\, t^{(\lambda)} \big\|p_q^{(\lambda)}\big\|^2.$$
We obtain
$$-\big(h_q^{(\lambda)}\big)^T p_q^{(\lambda)} \le (2\sigma + L)\, t^{(\lambda)} \big\|p_q^{(\lambda)}\big\|^2.$$
Hence,
$$t^{(\lambda)} \big\|p_q^{(\lambda)}\big\| \ge -\frac{1}{2\sigma + L} \cdot \frac{\big(h_q^{(\lambda)}\big)^T p_q^{(\lambda)}}{\big\|p_q^{(\lambda)}\big\|}.$$
Therefore,
$$\big(t^{(\lambda)}\big)^2 \big\|p_q^{(\lambda)}\big\|^2 \ge \frac{1}{(2\sigma + L)^2} \left( \frac{\big(h_q^{(\lambda)}\big)^T p_q^{(\lambda)}}{\big\|p_q^{(\lambda)}\big\|} \right)^2.$$
From the quantum LS procedure and Assumption 1, we have
$$\sum_{\lambda=0}^{\infty} \left( \frac{\big(h_q^{(\lambda)}\big)^T p_q^{(\lambda)}}{\big\|p_q^{(\lambda)}\big\|} \right)^2 \le (2\sigma + L)^2 \sum_{\lambda=0}^{\infty} \big(t^{(\lambda)}\big)^2 \big\|p_q^{(\lambda)}\big\|^2 \le \frac{(2\sigma + L)^2}{\rho} \sum_{\lambda=0}^{\infty} \Big[ \theta\big(z^{(\lambda)}\big) - \theta\big(z^{(\lambda+1)}\big) \Big] < +\infty. \tag{22}$$
It is simple to finish the proof of (21) by using Lemma 1, since $\Big[\big(h_q^{(\lambda)}\big)^T p_q^{(\lambda)}\Big]^2 / \big\|p_q^{(\lambda)}\big\|^2 = \big\|h_q^{(\lambda)}\big\|^4 / \big\|p_q^{(\lambda)}\big\|^2$. □
The following results are established for the GC.
Theorem 2.
Under Assumption 1, we have
$$\liminf_{\lambda \to \infty} \big\|h_q^{(\lambda)}\big\| = 0. \tag{23}$$
Proof. 
Assume that condition (23) is false. Then there exists $\epsilon > 0$ such that, for all $\lambda$,
$$\big\|h_q^{(\lambda)}\big\| \ge \epsilon. \tag{24}$$
We can write:
$$\big\|p_q^{(\lambda)}\big\|^2 = \big(p_q^{(\lambda)}\big)^T p_q^{(\lambda)}.$$
From (7), we can express this as:
$$\big\|p_q^{(\lambda)}\big\|^2 = \Big( -t^{(\lambda)} h_q^{(\lambda)} + \beta_{\mathrm{QPRP}}^{(\lambda)} p_q^{(\lambda-1)} \Big)^T \Big( -t^{(\lambda)} h_q^{(\lambda)} + \beta_{\mathrm{QPRP}}^{(\lambda)} p_q^{(\lambda-1)} \Big) = \big(t^{(\lambda)}\big)^2 \big\|h_q^{(\lambda)}\big\|^2 - 2 t^{(\lambda)} \beta_{\mathrm{QPRP}}^{(\lambda)} \big(p_q^{(\lambda-1)}\big)^T h_q^{(\lambda)} + \big(\beta_{\mathrm{QPRP}}^{(\lambda)}\big)^2 \big\|p_q^{(\lambda-1)}\big\|^2.$$
Again from (7), $\beta_{\mathrm{QPRP}}^{(\lambda)} p_q^{(\lambda-1)} = p_q^{(\lambda)} + t^{(\lambda)} h_q^{(\lambda)}$, so we can write:
$$\big\|p_q^{(\lambda)}\big\|^2 = \big(t^{(\lambda)}\big)^2 \big\|h_q^{(\lambda)}\big\|^2 - 2 t^{(\lambda)} \big(p_q^{(\lambda)} + t^{(\lambda)} h_q^{(\lambda)}\big)^T h_q^{(\lambda)} + \big(\beta_{\mathrm{QPRP}}^{(\lambda)}\big)^2 \big\|p_q^{(\lambda-1)}\big\|^2 = \big(t^{(\lambda)}\big)^2 \big\|h_q^{(\lambda)}\big\|^2 - 2 t^{(\lambda)} \big(h_q^{(\lambda)}\big)^T p_q^{(\lambda)} - 2 \big(t^{(\lambda)}\big)^2 \big\|h_q^{(\lambda)}\big\|^2 + \big(\beta_{\mathrm{QPRP}}^{(\lambda)}\big)^2 \big\|p_q^{(\lambda-1)}\big\|^2.$$
Therefore,
$$\big\|p_q^{(\lambda)}\big\|^2 = \big(\beta_{\mathrm{QPRP}}^{(\lambda)}\big)^2 \big\|p_q^{(\lambda-1)}\big\|^2 - 2 t^{(\lambda)} \big(h_q^{(\lambda)}\big)^T p_q^{(\lambda)} - \big(t^{(\lambda)}\big)^2 \big\|h_q^{(\lambda)}\big\|^2.$$
Dividing both sides of the above equality by $\big\|h_q^{(\lambda)}\big\|^4$,
$$\frac{\big\|p_q^{(\lambda)}\big\|^2}{\big\|h_q^{(\lambda)}\big\|^4} = \big(\beta_{\mathrm{QPRP}}^{(\lambda)}\big)^2 \frac{\big\|p_q^{(\lambda-1)}\big\|^2}{\big\|h_q^{(\lambda)}\big\|^4} - \frac{2 t^{(\lambda)} \big(h_q^{(\lambda)}\big)^T p_q^{(\lambda)} + \big(t^{(\lambda)}\big)^2 \big\|h_q^{(\lambda)}\big\|^2}{\big\|h_q^{(\lambda)}\big\|^4}.$$
From (5), we obtain
$$\frac{\big\|p_q^{(\lambda)}\big\|^2}{\big\|h_q^{(\lambda)}\big\|^4} = \frac{\Big[ \big(h_q^{(\lambda)}\big)^T \big(h_q^{(\lambda)} - h_q^{(\lambda-1)}\big) \Big]^2}{\big\|h_q^{(\lambda-1)}\big\|^4} \cdot \frac{\big\|p_q^{(\lambda-1)}\big\|^2}{\big\|h_q^{(\lambda)}\big\|^4} - \frac{2 t^{(\lambda)} \big(h_q^{(\lambda)}\big)^T p_q^{(\lambda)} + \big(t^{(\lambda)}\big)^2 \big\|h_q^{(\lambda)}\big\|^2}{\big\|h_q^{(\lambda)}\big\|^4}.$$
Thus, using Assumption A3 to bound the first term, and using Lemma 1 to complete the square, $-2 t^{(\lambda)} \big(h_q^{(\lambda)}\big)^T p_q^{(\lambda)} - \big(t^{(\lambda)}\big)^2 \big\|h_q^{(\lambda)}\big\|^2 = \big[ 1 - \big(t^{(\lambda)} - 1\big)^2 \big] \big\|h_q^{(\lambda)}\big\|^2$, we obtain
$$\frac{\big\|p_q^{(\lambda)}\big\|^2}{\big\|h_q^{(\lambda)}\big\|^4} \le \frac{\big\|p_q^{(\lambda-1)}\big\|^2}{\big\|h_q^{(\lambda-1)}\big\|^4} - \frac{\big(t^{(\lambda)} - 1\big)^2}{\big\|h_q^{(\lambda)}\big\|^2} + \frac{1}{\big\|h_q^{(\lambda)}\big\|^2}.$$
We obtain
$$\frac{\big\|p_q^{(\lambda)}\big\|^2}{\big\|h_q^{(\lambda)}\big\|^4} \le \frac{\big\|p_q^{(\lambda-1)}\big\|^2}{\big\|h_q^{(\lambda-1)}\big\|^4} + \frac{1}{\big\|h_q^{(\lambda)}\big\|^2}.$$
Therefore, applying this bound recursively and noting that $\big\|p_q^{(0)}\big\|^2 / \big\|h_q^{(0)}\big\|^4 = 1 / \big\|h_q^{(0)}\big\|^2$,
$$\frac{\big\|p_q^{(\lambda)}\big\|^2}{\big\|h_q^{(\lambda)}\big\|^4} \le \frac{1}{\big\|h_q^{(0)}\big\|^2} + \cdots + \frac{1}{\big\|h_q^{(\lambda)}\big\|^2} = \sum_{i=0}^{\lambda} \frac{1}{\big\|h_q^{(i)}\big\|^2}.$$
From (24), we thus obtain the following:
$$\frac{\big\|p_q^{(\lambda)}\big\|^2}{\big\|h_q^{(\lambda)}\big\|^4} \le \frac{\lambda + 1}{\epsilon^2}.$$
The above inequality implies that
$$\sum_{\lambda \ge 0} \frac{\big\|h_q^{(\lambda)}\big\|^4}{\big\|p_q^{(\lambda)}\big\|^2} \ge \epsilon^2 \sum_{\lambda \ge 0} \frac{1}{\lambda + 1} = +\infty,$$
since the harmonic series diverges (compare $\int \lambda^{-1} \, d\lambda = \log \lambda$).
This contradicts (21). Thus, the result (23) holds. □

4. Numerical Illustration

We now solve numerical problems using Algorithm 1. We have taken 30 test problems from [35] and performed 37 experiments using 37 starting points. Our numerical tests are carried out in R 3.6.1 on an Intel(R) Core(TM) CPU. We apply the stopping condition:
$$\big\|h_q^{(\lambda)}\big\| \le 10^{-5}.$$
Dolan and Moré [36] proposed an appropriate statistical approach to illustrate the performance profile. The performance ratio is:
$$\rho(p_r, s) = \frac{r(p_r, s)}{\min \{ r(p_r, s) : 1 \le s \le n_s \}},$$
where $r(p_r, s)$ denotes the number of quantum iterations required by solver $s$ on test problem $p_r$, and $n_s$ refers to the number of solvers compared. The cumulative distribution function is presented as:
$$p_s(\tau) = \frac{1}{n_p} \, \mathrm{size} \big\{ p_r : \rho(p_r, s) \le \tau \big\},$$
where $p_s(\tau)$ is the probability that the performance ratio $\rho(p_r, s)$ is within a factor $\tau$ of the best possible ratio, and $n_p$ is the total number of test problems. We plot the fraction $p_s(\tau)$ of test problems for which the algorithm is within a factor $\tau$ of the optimum. We use this technique to depict the efficiency of Algorithm 1. Figure 1 shows that the quantum spectral PRP method is efficient in comparison with the existing method [16], which answers research question RQ3. Figure 1 is plotted using column ‘it’ of Table 2 and Table 3.
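The profile in Figure 1 can be reproduced from the ‘it’ columns of Table 2 and Table 3. A short Python sketch of the Dolan–Moré construction (the paper's experiments were run in R; this is our own illustration, with np.inf marking any failed run):

```python
import numpy as np
import matplotlib.pyplot as plt

def performance_profile(iters, labels, tau_max=4.0):
    """Dolan-More performance profile.

    iters: array of shape (n_problems, n_solvers) holding the number of
    quantum iterations r(p_r, s); np.inf marks a failure.
    """
    iters = np.asarray(iters, dtype=float)
    n_problems, n_solvers = iters.shape
    ratios = iters / iters.min(axis=1, keepdims=True)   # rho(p_r, s)
    taus = np.linspace(1.0, tau_max, 200)
    for s in range(n_solvers):
        # p_s(tau): fraction of problems solved within factor tau of the best
        profile = [(ratios[:, s] <= tau).mean() for tau in taus]
        plt.step(taus, profile, where="post", label=labels[s])
    plt.xlabel("tau")
    plt.ylabel("p_s(tau)")
    plt.legend()
    plt.show()

# Example call with the 'it' columns of Tables 2 and 3:
# performance_profile(np.column_stack([it_qsprp, it_sprp]),
#                     ["Quantum spectral PRP", "Spectral PRP [16]"])
```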

5. Discussions

This research investigates the convergence characteristics of the quantum spectral PRP conjugate gradient technique for UO problems. We solely examine the scenario in which the technique is applied using the notion of a quantum derivative, and we report that, in general, this method is globally convergent for smooth nonlinear functions. Nonetheless, the quantum derivative is an effective tool for solving non-smooth and nonlinear functions. It relaxes the requirement of testing left-hand and right-hand limits for differentiability [1]. The q-derivative operates without limits and depends only on the parameter q, whose value is constrained to be as close to 1 as possible but not equal to 1; as q reaches 1, the q-derivative coincides with the classical derivative. The primary reason for adopting this paradigm is to capitalize on rapid convergence: a larger step length can be taken to travel to the next descent point, and the method is free to choose the conjugate direction rapidly based on the value of q. Our approach creates a PRP spectral conjugate gradient algorithm enabled by quantum derivatives. In this algorithm, the quantum search direction is always a quantum descent direction of the objective function, and it changes with each quantum iteration. The quantum gradient and the descent direction are combined to offer better directions. Using the Wolfe-type LS, we demonstrate the fast global convergence of the proposed algorithm. Since the q-gradient method's geometric nature is that of a secant rather than a tangent, it can successfully avoid local minima and speed up convergence [37]. We compared our method with the existing method developed by Wan et al. [16] and found that, for nonlinear functions chosen from the literature, optimal solutions were obtained in a reduced number of iterations; this is one of the main advantages of encompassing the quantum derivative in our proposed method. Our method can easily be utilized in signal processing [37,38]. A drawback of the quantum gradient is that values of q larger than 1 can also be utilized, but this prevents us from comparing our approach to the existing one.

6. Conclusions and Future Prospects

A quantum spectral PRP CG approach for tackling UO problems has been proposed. Wolfe LS conditions of quantum type have been established for the generalized algorithm. The practical behavior of the approach depends on the selection of the quantum variable q: this dilation parameter manages the balance between local and global searches. The suggested procedure performs noticeably better when the iterative value of q is chosen correctly. Strategies that generate the parameter q and compute the step length in a way that gradually transitions the search from global at the beginning to practically local at the end are complementary to the q-gradient approach. As the numerical findings verify, the proposed technique outperforms the existing one. Future studies will apply the idea of the q-gradient to advanced conjugate gradient methods in order to handle non-smooth UO problems.

Author Contributions

Conceptualization, B.R. and R.S.; Methodology, K.K.L.; Software, B.R.; Validation, S.K.M.; Formal analysis, K.K.L.; Investigation, S.K.M.; Resources, B.R. and R.S.; Writing—original draft, B.R.; Funding acquisition, K.K.L. All authors have read and agreed to the published version of the manuscript.

Funding

The first author is supported by the International Business School, Shaanxi Normal University, Xi’an, China. The second author is financially supported by a Research Grant for Faculty (Institute of Eminence Scheme, Banaras Hindu University) under Development Scheme No. 6031. The third author acknowledges the financial support of the Centre for Digital Transformation, Indian Institute of Management Ahmedabad, India. The fourth author is financially supported by the Banaras Hindu University-University Grants Commission Non-National Eligibility Fellowship (Research Scholar-2022-23/46476).

Data Availability Statement

No data were used to support this study.

Acknowledgments

We express our gratitude to the esteemed editor and reviewers for their insightful comments that helped to refine the work in its current state.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mishra, S.K.; Ram, B. Introduction to Unconstrained Optimization with R; Springer: Berlin/Heidelberg, Germany, 2019.
  2. Babaie-Kafaki, S. A survey on the Dai–Liao family of nonlinear conjugate gradient methods. RAIRO Oper. Res. 2023, 57, 43–58.
  3. Wu, X.; Shao, H.; Liu, P.; Zhang, Y.; Zhuo, Y. An efficient conjugate gradient-based algorithm for unconstrained optimization and its projection extension to large-scale constrained nonlinear equations with applications in signal recovery and image denoising problems. J. Comput. Appl. Math. 2023, 422, 114879.
  4. Liu, J.; Du, S.; Chen, Y. A sufficient descent nonlinear conjugate gradient method for solving M-tensor equations. J. Comput. Appl. Math. 2020, 371, 112709.
  5. Andrei, N. On three-term conjugate gradient algorithms for unconstrained optimization. Appl. Math. Comput. 2013, 219, 6316–6327.
  6. Dai, Y.H.; Yuan, Y. An efficient hybrid conjugate gradient method for unconstrained optimization. Ann. Oper. Res. 2001, 103, 33–47.
  7. Shanno, D.F. Conjugate gradient methods with inexact searches. Math. Oper. Res. 1978, 3, 244–256.
  8. Johnson, O.G.; Micchelli, C.A.; Paul, G. Polynomial preconditioners for conjugate gradient calculations. SIAM J. Numer. Anal. 1983, 20, 362–376.
  9. Wei, Z.; Li, G.; Qi, L. New nonlinear conjugate gradient formulas for large-scale unconstrained optimization problems. Appl. Math. Comput. 2006, 179, 407–430.
  10. Mishra, S.K.; Chakraborty, S.K.; Samei, M.E.; Ram, B. A q-Polak–Ribière–Polyak conjugate gradient algorithm for unconstrained optimization problems. J. Inequal. Appl. 2021, 2021, 1–29.
  11. Zhang, L.; Zhou, W.; Li, D.H. A descent modified Polak–Ribière–Polyak conjugate gradient method and its global convergence. IMA J. Numer. Anal. 2006, 26, 629–640.
  12. Gouvêa, É.J.; Regis, R.G.; Soterroni, A.C.; Scarabello, M.C.; Ramos, F.M. Global optimization using q-gradients. Eur. J. Oper. Res. 2016, 251, 727–738.
  13. Mishra, S.K.; Panda, G.; Ansary, M.A.T.; Ram, B. On q-Newton’s method for unconstrained multiobjective optimization problems. J. Appl. Math. Comput. 2020, 63, 391–410.
  14. Lai, K.K.; Mishra, S.K.; Panda, G.; Ansary, M.A.T.; Ram, B. On q-steepest descent method for unconstrained multiobjective optimization problems. AIMS Math. 2020, 5, 5521–5540.
  15. Lai, K.K.; Mishra, S.K.; Panda, G.; Chakraborty, S.K.; Samei, M.E.; Ram, B. A limited memory q-BFGS algorithm for unconstrained optimization problems. J. Appl. Math. Comput. 2021, 66, 183–202.
  16. Wan, Z.; Yang, Z.; Wang, Y. New spectral PRP conjugate gradient method for unconstrained optimization. Appl. Math. Lett. 2011, 24, 16–22.
  17. Hu, Q.; Zhang, H.; Zhou, Z.; Chen, Y. A class of improved conjugate gradient methods for nonconvex unconstrained optimization. Numer. Linear Algebra Appl. 2023, 30, e2482.
  18. Gilbert, J.C.; Nocedal, J. Global convergence properties of conjugate gradient methods for optimization. SIAM J. Optim. 1992, 2, 21–42.
  19. Polyak, B.T. The conjugate gradient method in extremal problems. USSR Comput. Math. Math. Phys. 1969, 9, 94–112.
  20. Powell, M.J.D. Restart procedures for the conjugate gradient method. Math. Program. 1977, 12, 241–254.
  21. Yuan, G.; Lu, X. A modified PRP conjugate gradient method. Ann. Oper. Res. 2009, 166, 73–90.
  22. Wang, C.Y.; Chen, Y.Y.; Du, S.Q. Further insight into the Shamanskii modification of Newton method. Appl. Math. Comput. 2006, 180, 46–52.
  23. Powell, M.J.D. Nonconvex minimization calculations and the conjugate gradient method. In Numerical Analysis, Proceedings of the 10th Biennial Conference Held at Dundee, Scotland, 28 June–1 July 1983; Springer: Berlin/Heidelberg, Germany, 1983; pp. 122–141.
  24. Li, Y.; Du, S.; Chen, Y. An improved spectral conjugate gradient algorithm for nonconvex unconstrained optimization problems. J. Optim. Theory Appl. 2013, 157, 820–842.
  25. Liu, D.; Zhang, L.; Xu, G. Spectral method and its application to the conjugate gradient method. Appl. Math. Comput. 2014, 240, 339–347.
  26. Liu, H.; Yao, Y.; Qian, X.; Wang, H. Some nonlinear conjugate gradient methods based on spectral scaling secant equations. J. Comput. Appl. Math. 2016, 35, 639–651.
  27. Tarzanagh, D.A.; Nazari, P.; Peyghami, M.R. A nonmonotone PRP conjugate gradient method for solving square and under-determined systems of equations. Comput. Math. Appl. 2017, 73, 339–354.
  28. Zhu, Z.; Wang, H.; Zhang, B. A spectral conjugate gradient method for nonlinear inverse problems. Comput. Math. Appl. 2018, 26, 1561–1589.
  29. Awwal, A.M.; Kumam, P.; Abubakar, A.B. Spectral modified Polak–Ribière–Polyak projection conjugate gradient method for solving monotone systems of nonlinear equations. Appl. Math. Comput. 2019, 362, 124514.
  30. Guo, J.; Wan, Z. A new three-term spectral conjugate gradient algorithm with higher numerical performance for solving large scale optimization problems based on Quasi-Newton equation. Int. J. Model. Simul. Sci. Comput. 2021, 12, 2150053.
  31. Jian, J.; Yang, L.; Jiang, X.; Liu, P.; Liu, M. A spectral conjugate gradient method with descent property. Mathematics 2020, 8, 280.
  32. Li, Y.; Du, S.; Chen, Y. Modified spectral PRP conjugate gradient method for solving tensor eigenvalue complementarity problems. J. Ind. Manag. Optim. 2022, 18, 157–172.
  33. Lai, K.K.; Mishra, S.K.; Sharma, R.; Sharma, M.; Ram, B. A Modified q-BFGS Algorithm for Unconstrained Optimization. Mathematics 2023, 11, 1420.
  34. Lai, K.K.; Mishra, S.K.; Ram, B. A q-Conjugate Gradient Algorithm for Unconstrained Optimization Problems. Pac. J. Opt. 2021, 17, 57–76.
  35. Jamil, M.; Yang, X.S. A literature survey of benchmark functions for global optimisation problems. Int. J. Math. Model. Numer. Optim. 2013, 4, 150–194.
  36. Dolan, E.D.; Moré, J.J. Benchmarking optimization software with performance profiles. Math. Program. 2002, 91, 201–213.
  37. Al-Saggaf, U.M.; Moinuddin, M.; Arif, M.; Zerguine, A. The q-least mean squares algorithm. Signal Process. 2015, 111, 50–60.
  38. Cai, P.; Wang, S.; Qian, J.; Zhang, T.; Huang, G. The diffusion least mean square algorithm with variable q-gradient. Signal Process. 2022, 127, 50–60.
Figure 1. Performance profile based on the number of quantum iterations in Table 2 and Table 3.
Table 1. Literature survey for the spectral PRP method.

| Source | Description | Advantage | Evaluation Tool |
| --- | --- | --- | --- |
| Wan et al. [16] | A new spectral PRP conjugate gradient algorithm | Search direction at each iteration is a descent direction | Classical derivative |
| Deng et al. [24] | An improved spectral conjugate gradient algorithm | The search direction is sufficiently close to the quasi-Newton direction | Classical derivative |
| Liu et al. [25] | A global convergence of the nonlinear conjugate gradient with the spectral method | An iteration matrix is positive definite symmetrical | Classical derivative |
| Liu et al. [26] | Three conjugate gradient methods based on the spectral equations | An approximation to the spectrum of the Hessian of the objective function is adopted | Classical derivative |
| Tarzanagh et al. [27] | A non-monotone PRP conjugate gradient method for solving square and under-determined systems of equations | A relaxed non-monotone line search technique | Derivative-free |
Table 2. Numerical results obtained by the proposed quantum spectral PRP algorithm.

| Serial Number | Test Problem | Starting Point | x* | f(x*) | it |
| --- | --- | --- | --- | --- | --- |
| 1 | Rosenbrock | (3, 4)^T | (0.9993397, 0.9986580)^T | 4.84E-07 | 16 |
| 2 | Rosenbrock | (4, 3)^T | (1.065881, 1.136169)^T | 0.004340773 | 13 |
| 3 | SPHERE | (1, 2, 3)^T | (4.438117E-14, 9.803269E-14, −1.129652E-13)^T | 2.43E-26 | 3 |
| 4 | SPHERE | (3, 2, 1)^T | (−1.293611E-06, −1.037781E-06, −7.819502E-07)^T | 3.36E-12 | 3 |
| 5 | ACKLEY | (0.2, 0.2)^T | (−4.105524E-07, −4.105524E-07)^T | 1.64E-06 | 3 |
| 6 | ACKLEY 2 | (−6, −6)^T | (−0.00000000003559351, −0.00000000003559351)^T | −200 | 2 |
| 7 | ACKLEY 2 | (−4, −5)^T | (1.633527E-09, 2.246892E-09)^T | −200 | 2 |
| 8 | Beale | (3, 2)^T | (3.0005905, 0.5001345)^T | 1.45E-10 | 11 |
| 9 | Beale | (4, 1)^T | (3.0095993, 0.5024505)^T | 1.47E-05 | 9 |
| 10 | Bohachevsky | (0.2, 0.3)^T | (−5.841102E-07, 3.085369E-08)^T | 4.92E-12 | 5 |
| 11 | Bohachevsky | (0.1, 0.2)^T | (2.276130E-06, −1.151592E-07)^T | 7.47E-11 | 5 |
| 12 | Booth | (40, 5)^T | (0.9999947, 3.0000062)^T | 6.888418E-11 | 4 |
| 13 | Booth | (60, 80)^T | (0.9994183, 3.0008450)^T | 1.33E-06 | 6 |
| 14 | DROP-WAVE | (0.1, 0.2)^T | (4.767480E-14, 2.085834E-14)^T | −1 | 3 |
| 15 | Colville | (1, 1, 1, 0.8)^T | (1.0394380, 1.0802997, 0.9614360, 0.9241323)^T | 0.00568415 | 7 |
| 16 | Colville | (0.8, 1, 1, 1)^T | (0.9768628, 0.9543990, 1.0241951, 1.0488717)^T | 0.00212309 | 11 |
| 17 | Csendes | (−4, −5)^T | (−0.03013482, −0.01924908)^T | 8.146356E-10 | 7 |
| 18 | Csendes | (0.4, 1)^T | (−0.01171701, 0.01905952)^T | 1.41E-10 | 5 |
| 19 | Cube | (−7, 10)^T | (0.9805592, 0.9422543)^T | 0.000408162 | 20 |
| 20 | Cube | (1, −6)^T | (0.9397829, 0.8228094)^T | 0.008808971 | 13 |
| 21 | Deckkers-Aarts | (10, 50)^T | (9.597778E-07, 1.494765E+01)^T | −24,776.51 | 8 |
| 22 | Deckkers-Aarts | (9, 40)^T | (−1.078035E-06, 1.494759E+01)^T | −24,776.51 | 10 |
| 23 | Dixon Price | (7, 4)^T | (1.001180, −0.707635)^T | 1.59E-06 | 8 |
| 24 | Easom | (2.5, 2.1)^T | (3.142261, 3.141810)^T | −1 | 4 |
| 25 | Egg Crate | (−1.3, −1.6)^T | (−5.302490E-07, −4.244362E-07)^T | 1.20E-11 | 6 |
| 26 | Exponential | (−3, −1)^T | (4.306445E-08, 1.072428E-08)^T | −1 | 4 |
| 27 | Freudenstein Roth | (4, 5)^T | (4.981870, 4.000912)^T | 0.001151386 | 7 |
| 28 | Six Hump Camel | (7, 1)^T | (−0.08978278, 0.71276481)^T | −1.031628 | 9 |
| 29 | Three Hump Camel | (0.6, 0.7)^T | (−2.648394E-07, −5.719217E-07)^T | 6.188417E-13 | 6 |
| 30 | Sum Squares | (4, 70)^T | (−1.110432E-05, 1.337182E-05)^T | 4.81E-10 | 4 |
| 31 | GRAMACY and LEE | 1 | 0.9490034 | −0.5266035 | 3 |
| 32 | Rotated Ellipse 2 | (100, 1.4) | (−1.438029E-07, −1.555622E-07)^T | 2.25E-14 | 4 |
| 33 | Zakharov | (−6, −5) | (2.640608E-07, −3.044188E-07)^T | 1.92E-13 | 8 |
| 34 | Zirilli | (3, 7)^T | (−1.04651E+00, −2.308637E-05)^T | −0.352386 | 6 |
| 35 | Zett1 | (8, 4)^T | (−2.990219E-02, −2.078441E-08)^T | −0.003791237 | 9 |
| 36 | Wayburn Seader 3 | (5, 6)^T | (5.147307, 6.839722)^T | 19.10588 | 5 |
| 37 | Wayburn Seader 2 | (1, 1)^T | (0.4604724, 1.0073360)^T | 19.20677 | 7 |
Table 3. Numerical results obtained from the spectral PRP algorithm given by Wan et al. [16].

| Serial Number | Test Problem | Starting Point | x* | f(x*) | it |
| --- | --- | --- | --- | --- | --- |
| 1 | Rosenbrock | (3, 4)^T | (0.9999998, 0.9999995)^T | 4.84E-07 | 20 |
| 2 | Rosenbrock | (4, 3)^T | (0.9999194, 0.9998484)^T | 1.56E-08 | 16 |
| 3 | SPHERE | (1, 2, 3)^T | (−4.275638E-07, −3.976857E-07, −3.678076E-07)^T | 4.76E-13 | 3 |
| 4 | SPHERE | (3, 2, 1)^T | (−1.125211E-13, 9.832413E-14, −3.757411E-14)^T | 2.37E-26 | 3 |
| 5 | ACKLEY | (0.2, 0.2)^T | (3.764868E-11, 3.764868E-11)^T | 1.51E-10 | 4 |
| 6 | ACKLEY 2 | (−6, −6)^T | (3.630772E-08, 3.630772E-08)^T | −200 | 2 |
| 7 | ACKLEY 2 | (−4, −5)^T | (−4.263692E-07, 3.132365E-08)^T | −200 | 11 |
| 8 | Beale | (3, 2)^T | (3.0000951, 0.5000238)^T | 3.41E-09 | 12 |
| 9 | Beale | (4, 1)^T | (3.0000004, 0.5000001)^T | 2.77E-14 | 10 |
| 10 | Bohachevsky | (0.2, 0.3)^T | (3.324145E-13, −2.460865E-14)^T | 0 | 5 |
| 11 | Bohachevsky | (0.1, 0.2)^T | (1.990839E-06, −3.959075E-07)^T | 6.20E-11 | 5 |
| 12 | Booth | (40, 5)^T | (1, 3)^T | 2.55E-17 | 3 |
| 13 | Booth | (60, 80)^T | (1, 3)^T | 8.79E-17 | 3 |
| 14 | DROP-WAVE | (0.1, 0.2)^T | (−5.062833E-07, −5.107584E-07)^T | −1 | 4 |
| 15 | Colville | (1, 1, 1, 0.8)^T | (0.9995939, 0.9991930, 1.0004239, 1.0008541)^T | 6.49E-07 | 20 |
| 16 | Colville | (0.8, 1, 1, 1)^T | (0.9997061, 0.9994101, 1.0002997, 1.0005954)^T | 3.19E-07 | 25 |
| 17 | Csendes | (−4, −5)^T | (−0.03055821, −0.01731213)^T | 8.71E-10 | 8 |
| 18 | Csendes | (0.4, 1)^T | (0.01957772, 0.01183933)^T | 1.60E-10 | 4 |
| 19 | Cube | (−7, 10)^T | (0.9999933, 0.9999799)^T | 4.51E-11 | 22 |
| 20 | Cube | (1, −6)^T | (1, 1)^T | 9.29E-17 | 20 |
| 21 | Deckkers-Aarts | (10, 50)^T | (4.652559E-08, 1.494511E+01)^T | −24,776.52 | 9 |
| 22 | Deckkers-Aarts | (9, 40)^T | (3.694906E-07, 1.494511E+01)^T | −24,776.52 | 8 |
| 23 | Dixon Price | (7, 4)^T | (1.0000020, −0.7071099)^T | 9.87E-11 | 9 |
| 24 | Easom | (2.5, 2.1)^T | (3.141593, 3.141593)^T | −1 | 5 |
| 25 | Egg Crate | (−1.3, −1.6)^T | (−8.137993E-07, −8.800207E-08)^T | 1.74204E-11 | 6 |
| 26 | Exponential | (−3, −1)^T | (1.527478E-10, −4.751579E-10)^T | −1 | 3 |
| 27 | Freudenstein Roth | (4, 5)^T | (5.000009, 4.000000)^T | 1.18699E-10 | 8 |
| 28 | Six Hump Camel | (7, 1)^T | (−0.08984676, 0.71266325)^T | −1.031639 | 8 |
| 29 | Three Hump Camel | (0.6, 0.7)^T | (−1.088238E-10, 1.920156E-10)^T | 3.97E-20 | 6 |
| 30 | Sum Squares | (4, 70)^T | (−4.084082E-06, −4.088583E-06)^T | 4.83E-10 | 4 |
| 31 | GRAMACY and LEE | 1 | 0.9489337 | −0.5266048 | 3 |
| 32 | Rotated Ellipse 2 | (100, 1.4) | (−1.249927E-06, −1.367064E-06)^T | 1.72E-12 | 5 |
| 33 | Zakharov | (−6, −5) | (1.359022E-15, −5.823407E-16)^T | 2.20E-30 | 8 |
| 34 | Zirilli | (3, 7)^T | (−1.046681E+00, 3.696484E-12)^T | −0.3523861 | 8 |
| 35 | Zett1 | (8, 4)^T | (−2.989599E-02, −6.894129E-08)^T | −0.003791237 | 10 |
| 36 | Wayburn Seader 3 | (5, 6)^T | (5.146885, 6.839598)^T | 19.10588 | 7 |
| 37 | Wayburn Seader 2 | (1, 1)^T | (0.4248603, 0.9999999)^T | 1.68E-14 | 10 |
