
Performance Analysis and Parameter Optimization of the Optimal Fixed-Point Quantum Search

1 Department of Computer Science, College of Literature, Science, and the Arts, University of Michigan, Ann Arbor, MI 48109-2121, USA
2 State Key Laboratory of Advanced Optical Communication Systems and Networks, Center of Quantum Information Sensing and Processing, Shanghai Jiao Tong University, Shanghai 200240, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2019, 9(17), 3584; https://doi.org/10.3390/app9173584
Submission received: 10 July 2019 / Revised: 22 August 2019 / Accepted: 28 August 2019 / Published: 2 September 2019
(This article belongs to the Section Quantum Science and Technology)

Abstract

The optimal fixed-point quantum search (OFPQS) algorithm [Phys. Rev. Lett. 113, 210501 (2014)] achieves both the fixed-point property and a quadratic speedup over classical algorithms, and gives a sufficient condition on the number of iterations that ensures the success probability is no less than a given lower bound (denoted by 1 − δ²). However, this condition is only approximate and not exact. In this paper, we derive the sufficient and necessary condition on the number of feasible iterations, from which the exact least number of iterations can be obtained; for example, when δ = 0.8, almost 25% of the iterations can be saved. Moreover, to find a target item with certainty, one cannot simply set 1 − δ² = 100%, since the quadratic advantage of the OFPQS algorithm would then be lost; instead, applying the OFPQS algorithm with 1 − δ² < 100% requires multiple executions, which raises the natural problem of choosing the optimal parameter δ. To this end, we analyze the extreme and minimum properties of the success probability and further analytically derive the optimal δ that minimizes the query complexity. Our study can serve as a guideline for both theoretical and applied research on fixed-point quantum search algorithms.

1. Introduction

Grover search [1,2] provides a quadratic speedup over classical search algorithms, and has been proven optimal [3,4,5,6]. However, there still exists the soufflé problem [7], i.e., the success probability will decline if the algorithm iterates too many times. Therefore, the Grover algorithm can only apply to the case where the optimal number of iterations [8] can be determined.
Based on quantum amplitude amplification [9,10,11,12] and phase-matching methods [13,14,15,16], a fixed-point quantum search algorithm [17] has been proposed, in which the final state converges to the target states and the success probability increases as the number of iterations grows. This algorithm applies to the case where a lower bound (denoted by λ₀) on the fraction of target items (denoted by λ) is known. However, the advantage of quadratic speedup is lost [18,19].
Fortunately, Yoder et al. developed the optimal fixed-point quantum search (OFPQS) algorithm [20], which achieves both the fixed-point property and optimal query complexity: the success probability for any unknown λ ≥ λ₀ is always no less than a given value (denoted by 1 − δ²) between 0 and 1, as long as a given condition on the number of iterations is satisfied. However, this condition is approximate and not tight, i.e., the required number of iterations might be further reduced.
In addition, the lower bound 1 − δ² on the success probability of the OFPQS algorithm must be strictly below 100%, because setting δ = 0 reduces the algorithm to the original fixed-point algorithm [17] and thus sacrifices the quadratic speedup [20]. In practical search problems, however, it is often desired to find a target item eventually, rather than merely to succeed with probability above a lower bound. A natural strategy is to make multiple trials of the OFPQS algorithm with δ > 0 until success. This raises a new problem: how to choose the optimal parameters so as to find a target item as soon as possible.
In this paper, we give the minimum feasible number of iterations, analyze the extreme and minimum properties of the success probability, and further derive the optimal parameter of the OFPQS algorithm that finds a target item with the minimum query complexity.
The paper is organized as follows: In Section 2, we briefly introduce the OFPQS algorithm. In Section 3, we derive the optimal number of iterations and analyze the properties of the success probability of the OFPQS algorithm. In Section 4, we give the selection method for the optimal parameter δ. Section 5 discusses the effects of the optimal number of iterations and the optimal δ, as well as the upper bound on the expected number of queries when the OFPQS algorithm is applied to find a target item with certainty. Finally, a brief conclusion is given in Section 6.

2. OFPQS Algorithm Revisited

Based on the multi-phase matching method [21,22], the OFPQS algorithm [20] overcomes the soufflé problem of the original Grover algorithm [1] as well as the loss of quadratic speedup of the original fixed-point quantum search algorithm [17].
The initial state of the OFPQS algorithm is prepared as |ψ⟩ = A|0⟩, where A can be an arbitrary unitary operator. Denote the equal superpositions of all target and all nontarget states by |α⟩ and |β⟩, respectively, i.e.,
|α⟩ = (1/√M) Σ_{x∈f⁻¹(1)} |x⟩,  (1)
|β⟩ = (1/√(N−M)) Σ_{x∈f⁻¹(0)} |x⟩,  (2)
where M is the number of target items in the database of size N, and f(x) is a Boolean function identifying the target states, i.e., f(x) = 1 if x is a target state and f(x) = 0 otherwise. With A = H^{⊗n}, where H is the Hadamard transform, |ψ⟩ can be written as
|ψ⟩ = √λ |α⟩ + √(1−λ) |β⟩,  (3)
where λ = M/N represents the fraction of target items.
The sequence of operations performed on |ψ⟩ is given by (see also Equation (2.6) of [21])
S_L = G(ϕ_l, φ_l) G(ϕ_{l−1}, φ_{l−1}) ⋯ G(ϕ_1, φ_1),  (4)
where L − 1 = 2l represents the query complexity of the sequence S_L, as each generalized Grover operation G(ϕ_j, φ_j) requires two Oracle queries [20], and
G(ϕ_j, φ_j) = A S_0(ϕ_j) A† S_f(φ_j),  (5)
where S_0(ϕ_j) and S_f(φ_j) are the selective phase shifts (i = √−1) (see also Equations (1) and (2) of [13]),
S_0(ϕ_j) = I − (1 − e^{iϕ_j}) |0⟩⟨0|,  (6)
S_f(φ_j) = I − (1 − e^{iφ_j}) Σ_{x∈f⁻¹(1)} |x⟩⟨x|.  (7)
Under the multiphase matching condition (see also Equation (11) of [20])
ϕ_j = −φ_{l−j+1} = 2 cot⁻¹(tan(2πj/L) √(1−γ²)), 1 ≤ j ≤ l,  (8)
the final state can be obtained as
S_L|ψ⟩ = √P_L |α⟩ + √(1−P_L) |β⟩,  (9)
where γ ≡ T_{1/L}(1/δ)⁻¹ and P_L is the success probability of the algorithm (see also Equation (1) of [20]),
P_L = 1 − δ² T_L(T_{1/L}(1/δ) √(1−λ))²,  (10)
where T_L(x) is the Lth Chebyshev polynomial of the first kind [23],
T_L(x) = cos(L arccos x) if |x| ≤ 1; cosh(L arcosh x) if x ≥ 1; (−1)^L cosh(L arcosh(−x)) if x ≤ −1.  (11)
To ensure that the success probability P_L(λ) is no less than a given lower bound 1 − δ² for any δ ∈ (0,1), an approximate condition on L is given as
L ≥ ln(2/δ)/√λ,  (12)
which is sufficient but not necessary, as shown below.
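As a concrete check of Equations (10)–(12), the following minimal Python sketch (the function names are ours, not from [20]) evaluates the Chebyshev polynomial of Equation (11) and the success probability of Equation (10):

```python
import math

def cheb(n, x):
    """Chebyshev polynomial of the first kind T_n(x), Eq. (11).

    The order n may be fractional, as in T_{1/L}; the (-1)^n branch
    assumes an integer n.
    """
    if abs(x) <= 1:
        return math.cos(n * math.acos(x))
    if x >= 1:
        return math.cosh(n * math.acosh(x))
    return (-1) ** n * math.cosh(n * math.acosh(-x))

def success_prob(l, lam, delta):
    """Success probability P_L(lambda) of Eq. (10), with L = 2l + 1."""
    L = 2 * l + 1
    gamma_inv = cheb(1.0 / L, 1.0 / delta)  # 1/gamma = T_{1/L}(1/delta)
    return 1.0 - delta ** 2 * cheb(L, gamma_inv * math.sqrt(1.0 - lam)) ** 2
```

With λ₀ = 0.125 and 1 − δ² = 0.9 (the setting of Figure 2), l = 2 keeps P_L(λ) ≥ 0.9 for all λ ≥ λ₀ and gives P_L(1) = 1 (the fixed-point property), whereas l = 1 falls below the bound at λ = λ₀.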

3. Performance Analysis of the OFPQS Algorithm

Theoretical analysis yields three new properties concerning the number of iterations and the success probability of the OFPQS algorithm, as follows.
Property 1.
Given a lower bound λ₀ of the fraction λ of target items, the exact least number of iterations l (denoted by l_opt) of the OFPQS algorithm that ensures a success probability P_L(λ) ≥ 1 − δ² for any λ ≥ λ₀ with L = 2l + 1 can be given in the form
l_opt = ⌈arcosh(1/δ) / (2 arcosh(1/√(1−λ₀))) − 1/2⌉.  (13)
Proof. 
Based on Equation (10), for arbitrary λ ∈ (0,1) and δ ∈ (0,1), a necessary and sufficient condition for the success probability to satisfy P_L(λ) ≥ 1 − δ² can be given as
T_{1/L}(1/δ) √(1−λ) ≤ 1,  (14)
due to the fact that
|T_L(x)| ≤ 1 ⟺ |x| ≤ 1.  (15)
Note that if x > 1, then arcosh x > 0 and T_L(x) = cosh(L arcosh x) > 1, while if |x| ≤ 1, then |T_L(x)| = |cos(L arccos x)| ≤ 1. Since T_{1/L}(1/δ) ≥ 1 and cosh x increases monotonically for x ≥ 0, Equation (14) can be further reduced to
L ≥ arcosh(1/δ) / arcosh(1/√(1−λ)).  (16)
Then, according to L = 2l + 1, we have
l ≥ arcosh(1/δ) / (2 arcosh(1/√(1−λ))) − 1/2.  (17)
Since the right-hand side of Equation (17) decreases as λ grows, the binding case on the range λ ≥ λ₀ is λ = λ₀. Therefore, to ensure that P_L(λ) ≥ 1 − δ² for any λ ≥ λ₀, the least number of iterations, expressed by Equation (13), can finally be obtained. □
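The formula of Property 1 is easy to evaluate; the sketch below (helper names ours) computes l_opt of Equation (13) alongside the iteration count implied by the sufficient condition of Equation (12):

```python
import math

def l_opt(delta, lam0):
    """Exact least number of iterations, Eq. (13)."""
    l = math.ceil(math.acosh(1 / delta) / (2 * math.acosh(1 / math.sqrt(1 - lam0))) - 0.5)
    return max(l, 0)  # l_opt = 0 once delta >= sqrt(1 - lam0)

def l_sufficient(delta, lam0):
    """Iterations implied by L >= ln(2/delta)/sqrt(lam0), Eq. (12), with L = 2l + 1."""
    return math.ceil((math.log(2 / delta) / math.sqrt(lam0) - 1) / 2)
```

For λ₀ = 0.125 and δ = √0.1, this yields l_opt = 2, while the sufficient condition requires 3 iterations, matching Figure 2.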
Property 2.
The extreme properties of the success probability P_L(λ) as a function of λ ∈ (0,1) are as follows (see the proof in Appendix A):
For δ ∈ (0,1), when L > 1, P_L(λ) has (L−1)/2 = l local maximum points
λ_max,j = 1 − γ² cos²((2j−1)π/(2L)), j = 1, 2, …, l,  (18)
and l local minimum points
λ_min,j = 1 − γ² cos²(jπ/L), j = 1, 2, …, l.  (19)
When L = 1, there are no local extreme points.
Property 3.
The success probability P_L(λ) with L = 2 l_opt + 1 ≡ L_opt has a minimum value (denoted by P_min) on the range λ ∈ [λ₀, 1] for λ₀ ≤ 4/5, which can be written as (for a proof, see Appendix B)
P_min ≡ min{P_{L_opt}(λ) : λ ∈ [λ₀, 1]} = 1 − δ², if δ < √(1−λ₀); λ₀, if δ ≥ √(1−λ₀).  (20)
Note that the case λ₀ > 4/5 can already be handled well by classical search.
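A numerical spot-check of Properties 2 and 3 (helper names ours): at each λ_max,j of Equation (18) the success probability reaches 1, and at each λ_min,j of Equation (19) it touches 1 − δ²:

```python
import math

def p_L(l, lam, delta):
    """P_L(lambda) of Eq. (10) with L = 2l + 1."""
    L = 2 * l + 1
    x = math.cosh(math.acosh(1 / delta) / L) * math.sqrt(1 - lam)
    t = math.cos(L * math.acos(x)) if x <= 1 else math.cosh(L * math.acosh(x))
    return 1 - delta ** 2 * t ** 2

def extrema(l, delta):
    """Local extrema of Eqs. (18) and (19); gamma = T_{1/L}(1/delta)^(-1)."""
    L = 2 * l + 1
    g2 = 1 / math.cosh(math.acosh(1 / delta) / L) ** 2  # gamma^2
    lam_max = [1 - g2 * math.cos((2 * j - 1) * math.pi / (2 * L)) ** 2 for j in range(1, l + 1)]
    lam_min = [1 - g2 * math.cos(j * math.pi / L) ** 2 for j in range(1, l + 1)]
    return lam_max, lam_min
```

For example, with l = 3 and δ = 0.4 there are three maxima with P = 1 and three minima with P = 1 − δ² = 0.84.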

4. Optimization of Parameters of the OFPQS Algorithm

In practical applications of the OFPQS algorithm, if we set δ = 0, then from Equation (12) or Equation (13) it can be seen that the quadratic speedup over classical algorithms is lost, while if we set δ > 0, the output of the OFPQS algorithm is not necessarily a target item. Inspired by [4], which achieves about a 12% reduction of the expected number of queries by stopping the Grover algorithm short of the optimal number of iterations and restarting in case of failure, a natural strategy for finding a target item as soon as possible is to set the lower bound of the success probability 1 − δ² < 100% and repeat the OFPQS algorithm until it succeeds. Under this strategy, in order to find the optimal parameter δ, we estimate the expected number of Oracle queries of the OFPQS algorithm before a target item is found.
First, a single execution of the OFPQS algorithm requires at least 2 l_opt queries, since l_opt iterations are required in the sequence S_L of Equation (4). Second, judging from the measurement result whether the algorithm has succeeded takes one more query. Then, defining L_opt ≡ 2 l_opt + 1, we can get the expected number of Oracle queries as
L_E = Σ_{j=1}^{∞} p_j L_opt,  (21)
where
p_j = (1 − P_{L_opt}(λ))^{j−1}  (22)
is the probability that the j-th execution of the OFPQS algorithm occurs. From Equations (20)–(22), we can further obtain
L_E ≤ Σ_{j=1}^{∞} (1 − P_min)^{j−1} L_opt = (2 l_opt + 1)/P_min ≡ L_E^Up,  (23)
where L_E^Up represents the upper bound of L_E.
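A minimal sketch of this bound (helper name ours), combining Equations (13), (20), and (23):

```python
import math

def le_upper_bound(delta, lam0):
    """Upper bound L_E^Up = (2 l_opt + 1) / P_min of Eq. (23)."""
    l = max(0, math.ceil(math.acosh(1 / delta)
                         / (2 * math.acosh(1 / math.sqrt(1 - lam0))) - 0.5))
    p_min = 1 - delta ** 2 if delta < math.sqrt(1 - lam0) else lam0  # Eq. (20)
    return (2 * l + 1) / p_min
```

For λ₀ = 0.125 and δ = √0.1, the bound is 5/0.9 ≈ 5.56 expected queries; for δ ≥ √(1−λ₀), it degrades to the classical value 1/λ₀.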
We define the optimal δ as the one that makes the upper bound L_E^Up on the expected number of queries as small as possible. Detailed analysis shows that such an optimal δ (denoted by δ_opt) exists and can be written analytically in the following form (for a proof, see Appendix C):
δ_opt = δ_k, if λ₀ ∈ [λ₀,k*, λ₀,k−1*), k ≥ 1; any δ ∈ [δ₀, 1), if λ₀ ∈ [λ₀,0*, 4/5],  (24)
where
δ_k = T_{2k+1}(1/√(1−λ₀))⁻¹,  (25)
T_L(x) is defined by Equation (11),
λ₀,k−1* = 1 − cosh⁻²(y_{k−1}*), k ≥ 1,  (26)
y_{k−1}* is the unique solution of the equation
sinh²(4ky) + sinh²(2y) − 4k sinh(4ky) sinh(2y) = 0, x*/(4k+2) < y < x*/(4k−2),  (27)
and x* satisfies
sinh x − 2x = 0, x > 0.  (28)
Note that when λ₀ = λ₀,k* (k ≥ 0) or λ₀,0* ≤ λ₀ ≤ 4/5, the corresponding δ_opt takes multiple values, as shown in Figure 1.
With the above optimal parameters, we can obtain the corresponding upper bound on the expected number of queries, i.e.,
L_E,opt^Up(λ₀) ≡ L_E^Up(λ₀, l_opt, δ_opt).  (29)
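Equations (25) and (28) can be evaluated with a few lines of Python (a sketch with our own helper names; the bisection bracket [2, 2.5] is an assumption justified by sinh 2 − 4 < 0 < sinh 2.5 − 5):

```python
import math

def x_star(tol=1e-12):
    """Positive root of sinh(x) - 2x = 0, Eq. (28), by bisection on [2, 2.5]."""
    lo, hi = 2.0, 2.5
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if math.sinh(mid) - 2 * mid < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def delta_k(k, lam0):
    """delta_k = T_{2k+1}(1/sqrt(1 - lam0))^(-1) of Eq. (25)."""
    return 1.0 / math.cosh((2 * k + 1) * math.acosh(1 / math.sqrt(1 - lam0)))
```

This gives x* ≈ 2.1773, and k = 0 recovers δ₀ = √(1−λ₀), consistent with the appendices.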

5. Discussion

In this section, we discuss the effects of the optimal parameters l_opt of Equation (13) and δ_opt of Equation (24), as well as the complexity of L_E,opt^Up(λ₀) of Equation (29).

5.1. Effects of l_opt and δ_opt

Different from our optimal number of iterations l_opt of Equation (13), another number of iterations that achieves a success probability no less than 1 − δ² for any λ ≥ λ₀ in the OFPQS algorithm [20] can be obtained from Equation (12), denoted by
l_min^Yoder's = ⌈ln(2/δ)/(2√λ₀) − 1/2⌉.  (30)
As a comparison, Figure 2 shows some examples of the success probability P_L(λ) for different numbers of iterations l with L = 2l + 1. We can see that both l = l_min^Yoder's and l = l_opt achieve the goal P_L(λ) ≥ 1 − δ² for λ ≥ λ₀, while l = l_opt − 1 cannot. Therefore, l_opt is exactly the least number of iterations required. When λ₀ ≪ 1, arcosh(1/√(1−λ₀)) ≈ √λ₀; then, from Equations (13) and (30), it follows that
l_opt / l_min^Yoder's ≈ arcosh(1/δ) / ln(2/δ),  (31)
which is shown in the inset of Figure 2.
We can see that as δ → 1, l_opt / l_min^Yoder's → 0. For example, when δ = 0.8, l_opt / l_min^Yoder's ≈ 75.65%, so almost a quarter of the iterations can be saved.
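The ratio of Equation (31) is a one-liner to evaluate (function name ours):

```python
import math

def iteration_ratio(delta):
    """Asymptotic ratio l_opt / l_min^Yoder's of Eq. (31), valid for lam0 << 1."""
    return math.acosh(1 / delta) / math.log(2 / delta)
```

At δ = 0.8 it returns ≈ 0.7565, the quoted 75.65% saving, and it shrinks further as δ approaches 1.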
For any given lower bound λ₀ ∈ (0, 4/5] of the fraction of target items, we have obtained by theoretical analysis the analytical expression of the optimal δ, i.e., δ_opt defined by Equation (24). In order to compare the results of δ = δ_opt and δ ≠ δ_opt with l = l_opt(δ), we define L_E,rel^{Up,nor}(λ₀, δ) as the normalized relative value of L_E^Up(λ₀, δ) with respect to L_E^Up(λ₀, δ_opt), i.e.,
L_E,rel^{Up,nor}(λ₀, δ) = L_E,rel^Up(λ₀, δ) / max{L_E,rel^Up(λ₀, δ) : 0 < δ ≤ 1},  (32)
where
L_E,rel^Up(λ₀, δ) = ln ln L_E^Up(λ₀, δ) − ln ln L_E^Up(λ₀, δ_opt).  (33)
The dependence of L_E,rel^{Up,nor}(λ₀, δ) on λ₀ and δ is shown in Figure 3, where the darker the color, the smaller the value. For ease of comparison, δ_opt is marked by the white dashed curves. We can see that the color in the area corresponding to δ_opt is darkest, which indicates that the optimal parameter δ_opt indeed makes the upper bound of the expected number of queries reach its minimum.

5.2. Complexity of L_E,opt^Up(λ₀)

Based on Equations (13), (20), (23), (24), and (29), we can obtain the expression of L_E,opt^Up(λ₀) as below:
L_E,opt^Up(λ₀) = (2k+1)/(1−δ_k²), if λ₀ ∈ [λ₀,k*, λ₀,k−1*), k ≥ 1; 1/(1−δ₀²), if λ₀ ∈ [λ₀,0*, 4/5],  (34)
where δ_k and λ₀,k−1* are defined by Equations (25) and (26), respectively. To analyze the complexity of L_E,opt^Up(λ₀), first, for any known λ₀ ≪ 1, we can determine an integer
k* = ⌈x*/(4 arcosh(1/√(1−λ₀))) − 1/2⌉,  (35)
such that λ₀ ∈ [λ₀,k*, λ₀,k*−1), where λ₀,k is as defined by Equation (A26) and x* > 0 is the solution of Equation (28). From Equations (26) and (27), it follows that
λ₀,k < λ₀,k−1* < λ₀,k−1,  (36)
and then λ₀ falls within the union of the starred intervals for k = k* − 1 and k = k*, so the index k in Equation (34) equals k* − 1 or k*. When λ₀ ≪ 1, k ≈ k* ≫ 1; thus,
L_E,opt^Up(λ₀) ≈ (2k*+1)/(1−δ_{k*}²),  (37)
where
δ_{k*} ≈ 1/cosh(x*/2)  (38)
due to k* ≈ x*/(4 arcosh(1/√(1−λ₀))) − 1/2, and
k* ≈ x*/(4√λ₀) − 1/2  (39)
due to arcosh(1/√(1−λ₀)) ≈ √λ₀. Combining Equations (37)–(39), we can obtain
L_E,opt^Up(λ₀) ≈ x* / (2(1 − cosh⁻²(x*/2)) √λ₀) ≡ L_E,opt^{Up,*}(λ₀).  (40)
Therefore, we conclude that L_E,opt^Up(λ₀) = O(1/√λ₀). The corresponding graphs of L_E,opt^Up(λ₀) and L_E,opt^{Up,*}(λ₀) as functions of λ₀ are shown in Figure 4, which shows good agreement. Note that, for an unknown λ and a given λ₀, the expected number of queries of classical search is also bounded from above, namely by 1/λ₀. This means that the OFPQS algorithm with the optimal parameters achieves a quadratic speedup over classical algorithms.
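A quick numerical reading of Equation (40) (the constant and helper name are ours): with x* ≈ 2.1773, L_E,opt^{Up,*}(λ₀) ≈ 1.72/√λ₀, so shrinking λ₀ by a factor of 100 only increases the expected-query bound tenfold, in contrast to the hundredfold increase of the classical bound 1/λ₀:

```python
import math

X_STAR = 2.17732  # root of sinh(x) = 2x, Eq. (28)

def le_opt_approx(lam0):
    """L_E,opt^{Up,*}(lam0) of Eq. (40): (x*/2) / ((1 - cosh^{-2}(x*/2)) sqrt(lam0))."""
    return (X_STAR / 2) / ((1 - 1 / math.cosh(X_STAR / 2) ** 2) * math.sqrt(lam0))
```

For example, le_opt_approx(0.01) ≈ 17.2 expected queries, versus 100 classically.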

6. Conclusions

In summary, we have analyzed the performance and optimized the parameters of the OFPQS algorithm. We derived the least number of iterations (denoted by l_opt) of the OFPQS algorithm that ensures a success probability no less than 1 − δ² for a given lower bound on the fraction of target items. Moreover, all extreme points as well as the minimum value of the success probability of the OFPQS algorithm were analyzed. In addition, we calculated the upper bound on the expected number of queries when the OFPQS algorithm is executed repeatedly to find a target item, and further analytically derived the optimal parameter that minimizes this upper bound. Compared with the minimum number of iterations given by [20], our optimal number of iterations l_opt is significantly reduced; e.g., when δ = 0.8, almost a quarter of the iterations can be saved. Our study can provide a guideline for the research and application of the OFPQS algorithm.

Author Contributions

Conceptualization, T.B. and D.H.; Writing—original draft, T.B.; Writing—review and editing, D.H.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 61801522.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviation is used in this manuscript:
OFPQS   Optimal fixed-point quantum search

Appendix A. Proof of the Extreme Properties of P_L(λ) on (0, 1)

Based on Equation (10), the derivative of P_L(λ) with respect to λ can be obtained as
P_L′(λ) = (δ² L / (2γ√(1−λ))) · sin(2L arccos x)/sin(arccos x),  (A1)
where γ = T_{1/L}(1/δ)⁻¹ and
x ≡ T_{1/L}(1/δ) √(1−λ).  (A2)
Then, for any λ ∈ (0,1) and δ ∈ (0,1),
P_L′(λ) = 0 ⟺ sin(2L arccos x) = 0 and arccos x ≠ 0.  (A3)
Note that sin(2L arccos x)/sin(arccos x) → 2L ≠ 0 as arccos x → 0, so arccos x = 0 yields no zero of P_L′(λ). Furthermore, we can get
x = cos(kπ/(2L)) ∈ (0, 1), k = 1, 2, …, L−1.  (A4)
Note that x ≠ 1 due to arccos x ≠ 0, and x > 0 for λ ∈ (0,1) and δ ∈ (0,1).
For the case L > 1, from Equations (10), (A2), and (A4), it follows that when k = 2j − 1, P_L(λ) = 1, where j = 1, 2, …, l ≡ (L−1)/2; thus, the local maximum points can be obtained as
λ_max,j = 1 − γ² cos²((2j−1)π/(2L)), j = 1, 2, …, l.  (A5)
Additionally, because P_L(λ=0) = 0 and P_L(λ=1) = 1, P_L has the same number of local maximum and local minimum points. Thus, when k = 2j, j = 1, 2, …, l, the corresponding local minimum points can be found, denoted by
λ_min,j = 1 − γ² cos²(jπ/L), j = 1, 2, …, l.  (A6)
For the case L = 1, no k is available in Equation (A4); namely, P_L has no extreme points.

Appendix B. Proof of the Minimum of P_{L_opt} on [λ₀, 1] of Equation (20)

Based on Equation (13), for any l ≥ l_opt, if the local minimum point λ_min,l (defined by Equation (19)) exists and λ_min,l ∈ (λ₀, 1), then the minimum of P_L(λ) with L = 2l + 1 on this range is 1 − δ², because P_L(λ) ≥ 1 − δ² for any λ ≥ λ₀ and P_L(λ_min,j) = 1 − δ² for every 1 ≤ j ≤ l. Note that λ_min,l = max{λ_min,j : 1 ≤ j ≤ l}. If instead λ_min,l does not exist or λ_min,l ∉ (λ₀, 1), then P_L(λ) increases monotonically on the range λ ∈ [λ₀, 1], and the minimum success probability is P_L(λ₀). Based on these facts, we can prove P_min (defined by Equation (20)) for l = l_opt and λ₀ ≤ 4/5 as follows.
In the case of δ ≥ √(1−λ₀), from Equation (13) it follows that l_opt = 0; thus, for l = l_opt, λ_min,l does not exist and
P_min = P_L(λ₀) = λ₀.  (A7)
In the case of δ < √(1−λ₀), from Equation (13) it follows that l_opt > 0. Then, for a given λ₀ and l = l_opt, λ_min,l exists and can be written as a step function of δ, i.e., for δ ∈ [δ_k, δ_{k−1}),
λ_min,l = 1 − T_{1/(2k+1)}(1/δ)⁻² cos²(kπ/(2k+1)), k ≥ 1,  (A8)
where T_L(x) and δ_k are defined by Equations (11) and (25), respectively. Note that δ₀ = √(1−λ₀) and
∪_{k≥1} [δ_k, δ_{k−1}) = (0, δ₀).  (A9)
From Equation (A8), we can see that λ_min,l decreases monotonically on the range δ ∈ [δ_k, δ_{k−1}); then,
min{λ_min,l : δ_k ≤ δ < δ_{k−1}} = lim_{δ→δ_{k−1}⁻} λ_min,l = 1 − T_{(2k−1)/(2k+1)}(1/√(1−λ₀))⁻² cos²(kπ/(2k+1)).  (A10)
Moreover, lim_{δ→δ_{k−1}⁻} λ_min,l increases as k grows. Therefore,
inf{λ_min,l : 0 < δ < δ₀} = lim_{δ→δ₀⁻} λ_min,l = 1 − T_{1/3}(1/√(1−λ₀))⁻² cos²(π/3).  (A11)
Note that
λ₀ ≤ lim_{δ→δ₀⁻} λ_min,l ⟺ 0 < λ₀ ≤ 4/5.  (A12)
Then, when λ₀ ≤ 4/5, λ_min,l ∈ (λ₀, 1), and thus we can obtain P_min = 1 − δ². □

Appendix C. Proof of the Optimal Parameter δ_opt of Equation (24)

Based on Equations (20) and (23), for a given λ₀, the upper bound L_E^Up on the expected number of queries as a function of δ can be obtained as follows:
L_E^Up(δ) = (2 l_opt + 1)/(1−δ²) ≡ L_E,left^Up(δ), if δ < √(1−λ₀); (2 l_opt + 1)/λ₀ ≡ L_E,right^Up(δ), if δ ≥ √(1−λ₀),  (A13)
where l_opt is defined by Equation (13). We can see that L_E,left^Up(δ) is a step function of δ, i.e.,
L_E,left^Up(δ) = (2m+1)/(1−δ²), if δ_m ≤ δ < δ_{m−1}, m ≥ 1,  (A14)
due to l_opt = m for δ ∈ [δ_m, δ_{m−1}), where δ_m = T_{2m+1}(1/√(1−λ₀))⁻¹, consistent with Equation (25). From Equation (A14), it follows that L_E,left^Up(δ) increases monotonically on the range [δ_m, δ_{m−1}); then,
min{L_E,left^Up(δ) : δ_m ≤ δ < δ_{m−1}} = L_E,left^Up(δ_m) = (2m+1)/(1−δ_m²).  (A15)
In addition, L_E,right^Up(δ) = 1/λ₀ = L_E,left^Up(δ₀) (note that l_opt = 0 for δ ≥ √(1−λ₀)); therefore,
min{L_E^Up(δ) : 0 < δ < 1} = min{L_E,left^Up(δ_m) : m ≥ 0}.  (A16)
Note that if L_E,left^Up(δ_k) (k > 0) is the minimum, then δ_k is optimal, while if L_E,left^Up(δ₀) is the minimum, then an arbitrary δ ∈ [δ₀, 1) is optimal.
To determine min{L_E,left^Up(δ_m) : m ≥ 0}, we define
L_E,left^{Up,*}(δ) ≡ arcosh(1/δ) / ((1−δ²) arcosh(1/√(1−λ₀))).  (A17)
From Equations (13), (A13), and (A17), it follows that
L_E,left^Up(δ) ≥ L_E,left^{Up,*}(δ),  (A18)
L_E,left^Up(δ_m) = L_E,left^{Up,*}(δ_m).  (A19)
Moreover, the derivative of L_E,left^{Up,*}(δ) with respect to δ is
dL_E,left^{Up,*}(δ)/dδ = −g(δ) / ((1−δ²)² arcosh(1/√(1−λ₀))),  (A20)
where
g(δ) ≡ √(1−δ²)/δ − 2δ ln((1 + √(1−δ²))/δ).  (A21)
Solving g(δ) = 0 gives rise to a unique minimum point of L_E,left^{Up,*}, denoted by δ_min^{left,*}, which satisfies
δ_min^{left,*} = 1/cosh(x*/2),  (A22)
where x* > 0 is the unique solution of Equation (28). Then, we can obtain (k ≥ 1)
min{L_E,left^Up(δ_m) : m ≥ 0} = L_E,left^Up(δ₀), if δ_min^{left,*} ≥ δ₀; min{L_E,left^Up(δ_k), L_E,left^Up(δ_{k−1})}, if δ_k ≤ δ_min^{left,*} < δ_{k−1}.  (A23)
Note that for 0 < λ₀ ≤ 4/5,
δ_min^{left,*} ≥ δ₀ ⟺ λ₀,0 ≤ λ₀ ≤ 4/5,  (A24)
δ_k ≤ δ_min^{left,*} < δ_{k−1} ⟺ λ₀,k ≤ λ₀ < λ₀,k−1,  (A25)
where
λ₀,k = 1 − T_{1/(2k+1)}(1/δ_min^{left,*})⁻², k ≥ 0.  (A26)
Define
L_E,left^{Up,Δ}(λ₀) ≡ L_E,left^Up(δ_k) − L_E,left^Up(δ_{k−1}), k ≥ 1.  (A27)
Then, through further analysis of L_E,left^{Up,Δ}, we can establish the following two results: (1) On the range λ₀ ∈ [λ₀,k, λ₀,k−1) (k ≥ 1), the equation L_E,left^{Up,Δ} = 0 has a solution for λ₀, denoted by λ₀,k−1*. (2) L_E,left^{Up,Δ} < 0 for λ₀,k ≤ λ₀ < λ₀,k−1*, and L_E,left^{Up,Δ} > 0 for λ₀,k−1* < λ₀ < λ₀,k−1. The corresponding reasons are given as follows:
(1) From Equations (A15) and (A27), it follows that
L_E,left^{Up,Δ} = (2k+1)/(1−δ_k²) − (2k−1)/(1−δ_{k−1}²) = h_k(y) / (2 T_{2k+1}²(1/δ₀) T_{2k−1}²(1/δ₀) (1−δ_k²)(1−δ_{k−1}²)),  (A28)
where
y = arcosh(1/√(1−λ₀)),  (A29)
and
h_k(y) = sinh²(4ky) + sinh²(2y) − 4k sinh(4ky) sinh(2y).  (A30)
Note that y increases as λ₀ grows and, according to Equations (A22), (A26), and (A29), y = x*/(4k+2) ≡ y_k for λ₀ = λ₀,k, and y = x*/(4k−2) ≡ y_{k−1} for λ₀ = λ₀,k−1. When k is sufficiently large, i.e., k → +∞, y_k, y_{k−1} → 0; then simple algebra shows that
h_k(y_k) ≈ (2x*²/(2k+1)) (2 − cosh x*) < 0,  (A31)
h_k(y_{k−1}) ≈ (2x*²/(2k−1)) (cosh x* − 2) > 0,  (A32)
which can also be verified numerically for finite k, for example, 1 ≤ k ≤ 10⁵. Therefore, based on the intermediate value theorem (see p. 271 of [24]), there exists a solution of h_k(y) = 0 between y_k and y_{k−1}, denoted by y_{k−1}*. Correspondingly, from Equation (A29), the solution λ₀,k−1* of L_E,left^{Up,Δ} = 0 for λ₀ can finally be obtained, as defined in Equation (26).
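Result (1) can be reproduced numerically: the bisection sketch below (helper names ours; the numeric value of x* is taken from Equation (28)) locates y_{k−1}* inside the bracket of Equation (27):

```python
import math

def h(k, y):
    """h_k(y) of Eq. (A30)."""
    return (math.sinh(4 * k * y) ** 2 + math.sinh(2 * y) ** 2
            - 4 * k * math.sinh(4 * k * y) * math.sinh(2 * y))

def y_root(k, x_star=2.17732, tol=1e-12):
    """Root of h_k on (x*/(4k+2), x*/(4k-2)), cf. Eq. (27), by bisection.

    Relies on h_k being negative at the left endpoint and positive at the
    right endpoint, as in Eqs. (A31) and (A32).
    """
    lo, hi = x_star / (4 * k + 2), x_star / (4 * k - 2)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if h(k, mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For k = 1 this yields y_0* strictly between x*/6 and x*/2, with h_1(y_0*) ≈ 0, from which λ₀,0* follows via Equation (26).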
(2) Based on Equation (A28), L_E,left^{Up,Δ} and h_k(y) have the same sign. Moreover, the derivative of h_k(y) with respect to y is
∂h_k(y)/∂y = 8k sinh(4ky) cosh(4ky) + 4 sinh(2y) cosh(2y) − 4k [4k cosh(4ky) sinh(2y) + 2 sinh(4ky) cosh(2y)].  (A33)
Since sinh x and cosh x increase monotonically for x > 0, and y_k ≤ y < y_{k−1} with y_k, y_{k−1} → 0 as k → +∞, we have
∂h_k(y)/∂y > 8k sinh(4k y_k) cosh(4k y_k) + 4 sinh(2 y_k) cosh(2 y_k) − 4k [4k sinh(2 y_{k−1}) cosh(4k y_{k−1}) + 2 sinh(4k y_{k−1}) cosh(2 y_{k−1})] ≈ 2k (sinh 2x* − 4 sinh x*) − x* (5 cosh 2x* + 8 cosh x* − 1) ≥ 0, when k ≥ ⌈x* (5 cosh 2x* + 8 cosh x* − 1) / (2 (sinh 2x* − 4 sinh x*))⌉ = 12.  (A34)
Therefore, h_k(y) increases monotonically on the range y ∈ [y_k, y_{k−1}) for such k, yielding h_k(y) < 0 for y_k ≤ y < y_{k−1}* and h_k(y) > 0 for y_{k−1}* < y < y_{k−1}. The corresponding results for L_E,left^{Up,Δ} then follow immediately; they can also be verified numerically when k is small, for example, k = 1, 2, …, 12.
Finally, combining Equations (A16) and (A23) with the above results (1) and (2), we can obtain
min{L_E^Up(δ) : 0 < δ < 1} = L_E,left^Up(δ_k), if λ₀ ∈ [λ₀,k*, λ₀,k−1*), k ≥ 1; L_E,left^Up(δ₀), if λ₀ ∈ [λ₀,0*, 4/5],  (A35)
and thus δ_opt of Equation (24) is indeed the optimal δ. □

References

  1. Grover, L.K. A fast quantum mechanical algorithm for database search. In Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing; ACM: Philadelphia, PA, USA, 1996; pp. 212–219. [Google Scholar] [Green Version]
  2. Grover, L.K. Quantum computers can search arbitrarily large databases by a single query. Phys. Rev. Lett. 1997, 79, 4709. [Google Scholar] [CrossRef]
  3. Bennett, C.H.; Bernstein, E.; Brassard, G.; Vazirani, U. Strengths and weaknesses of quantum computing. SIAM J. Comput. 1997, 26, 1510. [Google Scholar] [CrossRef]
  4. Boyer, M.; Brassard, G.; Høyer, P.; Tapp, A. Tight bounds on quantum searching. Fortschr. Phys. 1998, 46, 493. [Google Scholar] [CrossRef]
  5. Zalka, C. Grover’s quantum searching algorithm is optimal. Phys. Rev. A 1999, 60, 2746. [Google Scholar] [CrossRef]
  6. Grover, L.K.; Radhakrishnan, J. Is partial quantum search of a database any easier? In Proceedings of the Seventeenth Annual ACM Symposium on Parallelism in Algorithms and Architectures; ACM: Las Vegas, NV, USA, 2005; pp. 186–194. [Google Scholar] [Green Version]
  7. Brassard, G. Searching a quantum phone book. Science 1997, 275, 627. [Google Scholar] [CrossRef]
  8. Nielsen, M.A.; Chuang, I.L. Quantum Computation and Quantum Information, 2nd ed.; Cambridge University Press: Cambridge, UK, 2010. [Google Scholar]
  9. Brassard, G.; Høyer, P. An exact quantum polynomial-time algorithm for Simon’s problem. In Proceedings of the Fifth Israel Symposium on the Theory of Computing Systems; IEEE Computer Society: Washington, DC, USA, 1997; pp. 12–23. [Google Scholar]
  10. Grover, L.K. Quantum computers can search rapidly by using almost any transformation. Phys. Rev. Lett. 1998, 80, 4329. [Google Scholar] [CrossRef]
  11. Brassard, G.; Høyer, P.; Tapp, A. Quantum counting. In International Colloquium on Automata, Languages, and Programming; Larsen, K.G., Skyum, S., Winskel, G., Eds.; Springer: Berlin, Germany, 1998; pp. 820–831. [Google Scholar] [Green Version]
  12. Brassard, G.; Høyer, P.; Mosca, M.; Tapp, A. Quantum amplitude amplification and estimation. In Quantum Computation and Information; Lomonaco, S.J., Jr., Brandt, H.E., Eds.; AMS: Providence, RI, USA, 2002; pp. 53–74. [Google Scholar]
  13. Long, G.L.; Li, Y.S.; Zhang, W.L.; Niu, L. Phase matching in quantum searching. Phys. Lett. A 1999, 262, 27. [Google Scholar] [CrossRef]
  14. Høyer, P. Arbitrary phases in quantum amplitude amplification. Phys. Rev. A 2000, 62, 052304. [Google Scholar] [CrossRef] [Green Version]
  15. Long, G.-L.; Li, X.; Sun, Y. Phase matching condition for quantum search with a generalized initial state. Phys. Lett. A 2002, 294, 143. [Google Scholar] [CrossRef]
  16. Li, P.; Li, S. Phase matching in Grover’s algorithm. Phys. Lett. A 2007, 366, 42. [Google Scholar] [CrossRef]
  17. Grover, L.K. Fixed-point quantum search. Phys. Rev. Lett. 2005, 95, 150501. [Google Scholar] [CrossRef] [PubMed]
  18. Chakraborty, S.; Radhakrishnan, J.; Raghunathan, N. Bounds for error reduction with few quantum queries. In Approximation, Randomization and Combinatorial Optimization. Algorithms and Techniques; Springer: Berlin, Germany, 2005; pp. 245–256. [Google Scholar]
  19. Tulsi, T.; Grover, L.K.; Patel, A. A new algorithm for fixed point quantum search. Quantum Inf. Comput. 2006, 6, 483. [Google Scholar]
  20. Yoder, T.J.; Low, G.H.; Chuang, I.L. Fixed-point quantum search with an optimal number of queries. Phys. Rev. Lett. 2014, 113, 210501. [Google Scholar] [CrossRef] [PubMed]
  21. Toyama, F.M.; van Dijk, W.; Nogami, Y.; Tabuchi, M.; Kimura, Y. Multiphase matching in the Grover algorithm. Phys. Rev. A 2008, 77, 042324. [Google Scholar] [CrossRef] [Green Version]
  22. Toyama, F.M.; Kasai, S.; van Dijk, W.; Nogami, Y. Matched-multiphase Grover algorithm for a small number of marked states. Phys. Rev. A 2009, 79, 014301. [Google Scholar] [CrossRef]
  23. Mason, J.; Handscomb, D. Chebyshev Polynomials; CRC Press: Boca Raton, FL, USA, 2002. [Google Scholar]
  24. Zwillinger, D. CRC Standard Mathematical Tables and Formulae; CRC Press: Boca Raton, FL, USA, 2011. [Google Scholar]
Figure 1. (Color online.) The optimal parameter δ_opt as a function of the lower bound λ₀ of the fraction of target items. The red solid curves, the red dotted vertical lines, and the yellow (gray) area represent δ_k, λ₀,k* (k ≥ 0), and the values of δ_opt for λ₀,0* ≤ λ₀ ≤ 4/5, respectively.
Figure 2. (Color online.) The success probability P_L versus the fraction of target items λ for different numbers of iterations l with L = 2l + 1, λ₀ = 0.125 (dotted vertical line), and 1 − δ² = 0.9 (dotted horizontal line). The red dashed-dotted, blue solid, and green dashed curves correspond to l = l_min^Yoder's = 3, l = l_opt = 2, and l = l_opt − 1, respectively. Inset: l_opt / l_min^Yoder's against δ for λ₀ ≪ 1.
Figure 3. (Color online.) The normalized relative value L_E,rel^{Up,nor}(λ₀, δ) as a function of δ and λ₀. The white dashed curves correspond to the optimal parameter δ_opt of Equation (24).
Figure 4. (Color online.) The optimal upper bound of expected queries L_E,opt^Up(λ₀) and its approximation L_E,opt^{Up,*}(λ₀) as functions of the lower bound λ₀ ∈ [10⁻⁴, 10⁻¹] of the fraction of target items. The blue dashed and red solid curves represent L_E,opt^Up(λ₀) and L_E,opt^{Up,*}(λ₀), respectively.

Share and Cite

Bao, T.; Huang, D. Performance Analysis and Parameter Optimization of the Optimal Fixed-Point Quantum Search. Appl. Sci. 2019, 9, 3584. https://doi.org/10.3390/app9173584
