Article

A Quantum Adiabatic Algorithm for Multiobjective Combinatorial Optimization †

Núcleo de Investigación y Desarrollo Tecnológico, Universidad Nacional de Asunción, San Lorenzo C.P. 2619, Paraguay
*
Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in the Proceedings of the 42nd Latin American Conference on Informatics (CLEI), Valparaíso, Chile, 10–14 October 2016.
Axioms 2019, 8(1), 32; https://doi.org/10.3390/axioms8010032
Submission received: 14 November 2018 / Revised: 26 February 2019 / Accepted: 1 March 2019 / Published: 9 March 2019
(This article belongs to the Special Issue Foundations of Quantum Computing)

Abstract

In this work we show how to use a quantum adiabatic algorithm to solve multiobjective optimization problems. For the first time, we prove a theorem showing that the quantum adiabatic algorithm can find Pareto-optimal solutions in finite time, provided some restrictions on the problem are met. A numerical example illustrates an application of the theorem to a well-known problem in multiobjective optimization. This result opens the door to solving multiobjective optimization problems using current technology based on quantum annealing.

1. Introduction

Currently, quantum computation has many practical applications in engineering and computer science, such as machine learning, bioinformatics, and artificial intelligence [1]. At the core of all these applications there is an optimization procedure, and the quantum adiabatic computing paradigm of Farhi et al. [2] is one of the best methods known thus far for such optimization problems.
In this work, we show how to use a quantum adiabatic algorithm in multiobjective combinatorial optimization problems, or MCOs. An optimization problem is said to be “multiobjective” or “multicriteria” if there are two or more objective functions involved [3]. When these objective functions must be optimized at the same time, the notion of optimality has to be revised and, thus, an MCO can have a set of so-called “Pareto-optimal solutions” in which no solution is better than any other. Here we show that the quantum adiabatic algorithm can find Pareto-optimal solutions in finite time, provided certain restrictions are met. In Theorem 2 we identify two structural features that a given MCO must satisfy in order to make effective use of the quantum adiabatic algorithm presented in this work.
Even though most known quantum adiabatic optimization algorithms consider single-objective optimization problems (see [1,4]), only a few works discuss quantum algorithms for MCOs. Those few algorithms make use of Grover’s search method [5,6] as a subroutine that is invoked inside a classical algorithm. Alanis et al. [7] presented a quantum optimization algorithm in the context of routing problems. In [8], a general algorithm for MCOs was presented and experimentally compared against a state-of-the-art metaheuristic. Both papers [7,8] use Grover’s search algorithm to solve an MCO; however, Grover’s algorithm is not naturally constructed for optimization problems. Having Grover’s algorithm as the main subroutine for optimization gives rise to an “ad hoc” heuristic method whose finite-time convergence has not yet been proved. Hence, these previous works [7,8] relied on numerical experiments instead of rigorous proofs.
This work presents the first quantum algorithm for MCOs that guarantees finite-time convergence to a Pareto-optimal solution. Furthermore, since our method is built on the quantum adiabatic paradigm, it can be implemented on current technologies based on quantum annealing [1,9]. An extended abstract of this work appeared in [10].
The outline of this paper is as follows. In Section 2 and Section 3 we present the main concepts of MCO and adiabatic quantum computing that are relevant for this paper. In Section 4 we formally state our main result and present a full proof. In Section 5 we present a small numerical example of an MCO. Finally, in Section 6 we conclude this paper.

2. Preliminaries on Multiobjective Combinatorial Optimization

We present here the notation used throughout this paper. We also briefly review the main concepts of multiobjective optimization, which are standard definitions that can be found in several papers in the literature, for example [3]. Definitions 3, 4, 5 and 9 and Lemmas 1 and 3 are original to this work.
The set of natural numbers (including 0) is denoted $\mathbb{N}$, the set of integers is $\mathbb{Z}$, the set of real numbers is denoted $\mathbb{R}$, and the set of positive real numbers is $\mathbb{R}^+$. For any $i, j \in \mathbb{Z}$ with $i < j$, we let $[i,j]_{\mathbb{Z}}$ denote the discrete interval $\{i, i+1, \dots, j-1, j\}$. The set of binary words of length $n$ is denoted $\{0,1\}^n$. We also let $\mathrm{poly}(n) = O(n^c)$ be a polynomial in $n$, with $c \in \mathbb{N}$.
A multiobjective combinatorial optimization problem (MCO) is an optimization problem involving multiple objectives over a finite set of feasible solutions. These objectives typically present trade-offs among solutions and in general, there is no single optimal solution but a set of compromise solutions known as the Pareto set [3]. In this work, we follow the definition of Kung et al. [11]. Furthermore, with no loss of generality, all optimization problems considered in this work are minimization problems.
Let $S_1, \dots, S_d$ be totally ordered sets and let $\preceq_i$ be an order on set $S_i$ for each $i \in [1,d]_{\mathbb{Z}}$. We also let $n_i$ be the cardinality of $S_i$. Define the natural partial order relation $\prec$ over the cartesian product $S_1 \times \dots \times S_d$ in the following way. For any $u = (u_1, \dots, u_d)$ and $v = (v_1, \dots, v_d)$ in $S_1 \times \dots \times S_d$, we write $u \prec v$ if and only if for every $i \in [1,d]_{\mathbb{Z}}$ it holds that $u_i \preceq_i v_i$; otherwise we write $u \nprec v$. An element $u$ of $S_1 \times \dots \times S_d$ is a minimal element if there is no $v$ such that $v \prec u$ and $v \neq u$. Moreover, we say that $u$ is non-comparable with $v$ if $u \nprec v$ and $v \nprec u$, and we succinctly write $u \parallel v$. In the context of multiobjective optimization, the relation $\prec$ as defined here is often referred to as the Pareto-order relation [11].
Definition 1.
An MCO is defined as a tuple $\Pi = (D, R, d, F, \prec)$ where $D$ is a finite set called the domain, $R \subseteq \mathbb{R}^+ \cup \{0\}$ is a set of real values, $d$ is a positive integer, $F$ is a finite collection of functions $\{f_i\}_{i \in [1,d]_{\mathbb{Z}}}$ where each $f_i$ maps from $D$ to $R$, and $\prec$ is the Pareto-order relation on $R^d$ (here $R^d$ is the $d$-fold cartesian product of $R$). We also define a function $f$ that maps $D$ to $R^d$ as $f(x) = (f_1(x), \dots, f_d(x))$, referred to as the objective vector of $\Pi$. If $f(x)$ is a minimal element of $R^d$, we say that $x$ is a Pareto-optimal solution of $\Pi$. The set of all Pareto-optimal solutions of $\Pi$ is denoted $P(\Pi)$.
Definition 2.
For any two elements $x, y \in D$, if $f(x) \prec f(y)$ we write $x \prec y$; similarly, if $f(x) \nprec f(y)$ we write $x \nprec y$. For any $x, y \in D$, if $x \prec y$ and $y \prec x$ we say that $x$ and $y$ are equivalent, and this is denoted $x \sim y$.
A typical example of a multiobjective optimization problem is the two-parabolas problem of Figure 1. In this problem we have two continuous objective functions defined by two parabolas that intersect in a single point.
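To make Definitions 1 and 2 concrete, the following minimal Python sketch (ours, not part of the original paper; all function names are illustrative) checks the Pareto-order relation between objective vectors and enumerates the Pareto-optimal solutions of the two-parabolas problem restricted to a small integer grid:

# Minimal illustration of the Pareto order of Definitions 1 and 2 (ours, illustrative).
def pareto_leq(u, v):
    """Pareto order on objective vectors: u_i <= v_i for every component i."""
    return all(ui <= vi for ui, vi in zip(u, v))

def equivalent(u, v):
    """Two solutions are equivalent when their objective vectors coincide."""
    return pareto_leq(u, v) and pareto_leq(v, u)

def pareto_set(domain, f):
    """Brute-force P(Pi): keep x if no y strictly dominates it."""
    optimal = []
    for x in domain:
        dominated = any(pareto_leq(f(y), f(x)) and not equivalent(f(y), f(x))
                        for y in domain if y != x)
        if not dominated:
            optimal.append(x)
    return optimal

f = lambda x: ((x - 7) ** 2, (x - 15) ** 2)   # the two parabolas of Figure 1
print(pareto_set(range(32), f))               # [7, 8, ..., 15], as in Figure 1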
The set of Pareto-optimal solutions can be very large, and therefore most methods for MCOs are concerned with finding a subset of the Pareto-optimal solutions, or an approximation of it. Kung et al. [11] discovered optimal query algorithms that find all Pareto-optimal solutions for $d = 2, 3$, and proved almost-tight upper and lower bounds, up to polylogarithmic factors, for any $d \geq 4$. Papadimitriou and Yannakakis [12] initiated the field of approximation algorithms for MCOs, where an approximation to the set of all Pareto-optimal solutions can be found in polynomial time.
For the remainder of this work, $\prec$ will always be the Pareto-order relation and will be omitted from the definition of any MCO. Furthermore, for convenience, we will often write $\Pi_d = (D, R, F)$ as a short-hand for $\Pi = (D, R, d, F)$. In addition, we will assume that each function $f_i \in F$ is polynomial-time computable and that each $f_i(x)$ is bounded by a polynomial in the number of bits of $x$. This is a typical assumption in the theory of optimization algorithms, made so that the computational complexity of any optimization algorithm depends only on the number of “black-box” accesses to the objective function.
Definition 3.
Given any MCO $\Pi_d$ we say that $\Pi_d$ is well-formed if and only if for each $f_i \in F$ there is a unique $x \in D$ such that $f_i(x) = 0$.
Definition 4.
An MCO $\Pi_d$ is normal if and only if $\Pi_d$ is well-formed and $f_i(x) = 0$ and $f_j(y) = 0$, for $i \neq j$, implies $x \neq y$.
In a normal MCO, the value of an optimal solution in each f i is 0, and all optimal solutions are different. In Figure 1, solutions x = 7 and x = 15 are optimal solutions of f 1 and f 2 with value 0, respectively; hence, the two-parabolas problem of Figure 1 is normal.
Definition 5.
An MCO $\Pi_d$ is collision-free if, given $\lambda = (\lambda_1, \dots, \lambda_d)$ with each $\lambda_i \in \mathbb{R}^+$, for any $i \in [1,d]_{\mathbb{Z}}$ and any pair $x, y \in D$ with $x \neq y$, it holds that $|f_i(x) - f_i(y)| \geq \lambda_i$. If $\Pi_d$ is collision-free we write it succinctly as $\Pi_d^{\lambda}$.
The two-parabolas problem of Figure 1 is not collision-free; for example, for solutions x = 5 and x = 9 we have that f 1 ( 5 ) = f 1 ( 9 ) .
Definition 6.
A Pareto-optimal solution $x$ is trivial if $x$ is an optimal solution of some $f_i \in F$.
In Figure 1, solutions x = 7 and x = 15 are trivial Pareto-optimal solutions, whereas any x between seven and 15 is non-trivial.
Lemma 1.
For any normal MCO $\Pi_d$, if $x$ and $y$ are trivial Pareto-optimal solutions of $\Pi_d$, then $x$ and $y$ are not equivalent.
Proof. 
Let $x, y$ be two trivial Pareto-optimal solutions of $\Pi_d$. There exist $i, j$ such that $f_i(x) = 0$ and $f_j(y) = 0$. Since $\Pi_d$ is normal we have that $x \neq y$, $f_i(y) > 0$, and $f_j(x) > 0$; hence, $x \parallel y$ and they are not equivalent. □
Let $W_d$ be the set of normalized vectors in $[0,1)^d$, where $[0,1)$ is the half-open real interval from zero (inclusive) to one (exclusive), defined as
$$W_d = \left\{ w = (w_1, \dots, w_d) \in [0,1)^d \;\middle|\; \sum_{i=1}^{d} w_i = 1 \right\}. \qquad (1)$$
For any $w \in W_d$, define $\langle f(x), w \rangle = w_1 f_1(x) + \dots + w_d f_d(x)$. In the literature on multiobjective optimization, if each $f_i$ is an objective function of an MCO $\Pi_d$, it is said that $\langle f(x), w \rangle$ is a linearization or scalarization of $\Pi_d$ [13].
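As a quick illustration (ours, not from the paper; the helper names are illustrative), a scalarization can be evaluated and minimized directly over a small domain; by Lemma 2 below, any such minimizer is a Pareto-optimal solution:

# Scalarization <f(x), w> of an MCO; illustrative sketch, not from the paper.
def scalarize(fvec, w):
    """Weighted sum w_1 f_1(x) + ... + w_d f_d(x)."""
    return sum(wi * fi for wi, fi in zip(w, fvec))

def scalarized_minimizer(domain, f, w):
    """Element of D minimizing <f(x), w> for a fixed weight vector w."""
    return min(domain, key=lambda x: scalarize(f(x), w))

f = lambda x: ((x - 7) ** 2, (x - 15) ** 2)         # two-parabolas objectives of Figure 1
for w in [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9)]:       # weights in W_d: sum to 1, each < 1
    print(w, scalarized_minimizer(range(32), f, w))  # 8, 11, 14: each is Pareto-optimal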
The following fact is a well-known property of MCOs.
Lemma 2.
Let $\Pi_d = (D, R, F)$. For any $w \in W_d$ there exists $x \in D$ such that if $\langle f(x), w \rangle = \min_{y \in D} \{\langle f(y), w \rangle\}$, then $x$ is a Pareto-optimal solution of $\Pi_d$.
Proof. 
Fix $w \in W_d$ and let $x \in D$ be such that $\langle f(x), w \rangle$ is minimum among all elements of $D$. For any $y \in D$ with $y \neq x$, we need to consider two cases: (1) $\langle f(y), w \rangle = \langle f(x), w \rangle$ and (2) $\langle f(y), w \rangle > \langle f(x), w \rangle$.
Case (1). Here we have two subcases: either $f_i(y) = f_i(x)$ for all $i$, or there exists at least one pair $i, j \in \{1, \dots, d\}$ such that $w_i f_i(x) < w_i f_i(y)$ and $w_j f_j(y) < w_j f_j(x)$. When $f_i(x) = f_i(y)$ for each $i = 1, \dots, d$ we have that $x$ and $y$ are equivalent. On the contrary, if $w_i f_i(x) < w_i f_i(y)$ and $w_j f_j(y) < w_j f_j(x)$, we have that $f_i(x) < f_i(y)$ and $f_j(y) < f_j(x)$, and hence, $x \parallel y$.
Case (2). In this case, there exists $i \in \{1, \dots, d\}$ such that $w_i f_i(x) < w_i f_i(y)$, and hence, $f_i(x) < f_i(y)$. Thus, $f(y) \nprec f(x)$ and $y \nprec x$ for any $y \neq x$.
We conclude from Case (1) that $x \sim y$ or $x \parallel y$, and from Case (2) that $y \nprec x$. Therefore, $x$ is Pareto-optimal. □
Given any linearization of an MCO, by Lemma 2, an optimal solution of the linearized MCO corresponds to a Pareto-optimal solution; however, it does not hold in general that each Pareto-optimal solution has a corresponding linearization, i.e., not all Pareto-optimal solutions can be found using linearizations.
Lemma 3.
Given $\Pi_d = (D, R, F)$, any two elements $x, y \in D$ are equivalent if and only if for all $w \in W_d$ it holds that $\langle f(x), w \rangle = \langle f(y), w \rangle$.
Proof. 
Assume that $x \sim y$. Hence $f(x) = f(y)$. If we pick any $w \in W_d$ we have that
$$\langle f(x), w \rangle = w_1 f_1(x) + \dots + w_d f_d(x) = w_1 f_1(y) + \dots + w_d f_d(y) = \langle f(y), w \rangle.$$
Now suppose that for all $w \in W_d$ it holds that $\langle f(x), w \rangle = \langle f(y), w \rangle$. By contradiction, assume that $x \nsim y$. With no loss of generality, assume further that there is exactly one $i \in [1,d]_{\mathbb{Z}}$ such that $f_i(x) \neq f_i(y)$. Hence
$$w_i \bigl(f_i(x) - f_i(y)\bigr) = \sum_{j \neq i} w_j \bigl(f_j(y) - f_j(x)\bigr). \qquad (2)$$
The right-hand side of Equation (2) is 0 because for all $j \neq i$ we have that $f_j(x) = f_j(y)$. The left-hand side of Equation (2), however, is not 0 by our assumption; hence, a contradiction. Therefore, $x$ is equivalent to $y$. □
In this work, we are only interested in finding non-trivial Pareto-optimal solutions. Finding trivial elements can be done by optimizing each f i independently; consequently, in Equation (1) we do not allow for any w i to be 1.
Definition 7.
The set of supported Pareto-optimal solutions, denoted $S(\Pi)$, is defined as the set of Pareto-optimal solutions $x$ for which $\langle f(x), w \rangle$ is optimal for some $w \in W_d$.
From Lemma 2, we know that some Pareto-optimal solutions cannot be found using any linearization $w \in W_d$.
Definition 8.
The set of non-supported Pareto-optimal solutions is the set $N(\Pi) = P(\Pi) \setminus S(\Pi)$.
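Definitions 7 and 8 can be illustrated with a small hypothetical instance of our own (the objective vectors below are not from the paper): the vector (3, 3) is Pareto-optimal yet never minimizes the weighted sum for any weight vector, so the corresponding solution is non-supported:

# Hypothetical instance with a non-supported Pareto-optimal solution (ours, illustrative).
vectors = {"a": (0.0, 4.0), "b": (3.0, 3.0), "c": (4.0, 0.0)}   # objective vectors f(x)

# b is Pareto-optimal: neither (0, 4) nor (4, 0) is componentwise <= (3, 3).
# Yet for every weight vector w = (w1, 1 - w1), some other point scores lower:
for i in range(1, 100):
    w1 = i / 100.0
    scores = {k: w1 * v[0] + (1.0 - w1) * v[1] for k, v in vectors.items()}
    assert min(scores, key=scores.get) != "b"
print("b is Pareto-optimal but non-supported")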
Note that there may be Pareto-optimal solutions $x$ and $y$ that are non-comparable and for which $\langle f(x), w \rangle = \langle f(y), w \rangle$ for some $w \in W_d$. That is equivalent to saying that the objective function obtained from a linearization of an MCO is not one-to-one (injective).
Definition 9.
Any two elements $x, y \in D$ are weakly-equivalent if and only if there exists $w \in W_d$ such that $\langle f(x), w \rangle = \langle f(y), w \rangle$.
By Lemma 3, any two equivalent solutions $x, y$ are also weakly-equivalent; on the other hand, if $x$ and $y$ are weakly-equivalent, it does not imply that they are equivalent. For example, consider two objective vectors $f(x) = (1, 2, 3)$ and $f(y) = (1, 3, 2)$. Clearly, $x$ and $y$ are not equivalent; however, if $w = (1/3, 1/3, 1/3)$ we can see that $x$ and $y$ are indeed weakly-equivalent. In Figure 1, points $x = 10$ and $x = 12$ are weakly-equivalent.

3. The Quantum Adiabatic Algorithm

The quantum adiabatic computing paradigm was discovered by Farhi et al. [2], and the quantum adiabatic algorithm was originally designed to solve single-objective optimization problems. The main idea behind quantum adiabatic algorithms is that the optimization problem is encoded into a time-dependent Hamiltonian. We then start the algorithm with an easy-to-prepare quantum state and let the system evolve according to the adiabatic theorem. After some time, we measure the system and obtain an optimal solution to our optimization problem. In this work, we follow the definition of McGeoch [1].
Given an objective function f where each element in its domain can be represented with p o l y ( n ) bits, a quantum adiabatic algorithm for f is constructed from three components:
  • an initial Hamiltonian $H_0$ chosen in such a way that its ground state is easy to prepare;
  • a final Hamiltonian $H_1$ that encodes the function $f$ in such a way that the minimum eigenvalue of $H_1$ corresponds to the minimum value of $f$ and its ground state corresponds to a minimizer $x$;
  • an adiabatic evolution path, that is, a function $s(t)$ that decreases from 1 to 0 as the time $t$ goes from 0 to a given time $T$. In this work we will always use the linear path $s(t) = 1 - t/T$.
The time-dependent Hamiltonian H for the algorithm is thus defined as
$$H(t) = s(t) H_0 + (1 - s(t)) H_1. \qquad (3)$$
If $|\psi(t)\rangle$ is an eigenvector of $H(t)$ that corresponds to the minimum eigenvalue, the quantum adiabatic algorithm works as follows. Prepare the system in state $|\psi(0)\rangle$ and let it evolve according to Schrödinger’s equation. After some time $T$, measure the state with respect to some well-defined basis. The adiabatic theorem says that if $T$ is large enough and $H$ is nondegenerate in its minimum eigenvalue, the quantum state that is observed from the measurement is very close to $|\psi(T)\rangle$.
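As a rough numerical illustration of this procedure (our own sketch, not part of the paper; the objective values, seed, and integrator are chosen only for demonstration), one can evolve a small system under the interpolating Hamiltonian of Equation (3) and check that, for a sufficiently large T, the final state concentrates on the ground state of H_1:

import numpy as np

# Toy simulation of adiabatic evolution; every numerical choice here is illustrative.
n = 3
N = 2 ** n
rng = np.random.default_rng(0)
fvals = rng.uniform(1.0, 5.0, size=N)     # stand-in objective values f(x) >= 1
fvals[5] = 0.0                            # make x = 5 the unique minimizer of f

H1 = np.diag(fvals)                       # final Hamiltonian encodes f
Had = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
F = Had
for _ in range(n - 1):
    F = np.kron(F, Had)                   # n-fold Walsh-Hadamard transform
h = np.ones(N)
h[0] = 0.0
H0 = F @ np.diag(h) @ F                   # initial Hamiltonian of Farhi et al. [2]

T, steps = 200.0, 2000
dt = T / steps
psi = F[:, 0].astype(complex)             # ground state of H0: uniform superposition
for k in range(steps):
    s = 1.0 - (k * dt) / T                # linear path s(t) = 1 - t/T
    H = s * H0 + (1.0 - s) * H1
    # Evolve exactly under the Hamiltonian frozen during one small time step.
    evals, evecs = np.linalg.eigh(H)
    psi = evecs @ (np.exp(-1j * dt * evals) * (evecs.conj().T @ psi))

probs = np.abs(psi) ** 2
print(np.argmax(probs), probs.max())      # should report index 5 with probability near 1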
One of the most recent versions of the adiabatic theorem is due to Ambainis and Regev [14], from which a lower bound on the value of $T$ can be estimated. Below we transcribe the complete statement of their theorem since it is one of the main pieces of our proof. Let $H(s)$ be a time-dependent Hamiltonian with a linear path $s = 1 - t/T$ for some given $T$. Let $\|H\| = \max_{s \in [0,1]} \|H(s)\|$, where $\|\cdot\|$ is the operator norm with respect to the $\ell_2$-norm, and let $H'$ and $H''$ be the first and second derivatives of $H$, respectively.
Theorem 1 
(Ambainis and Regev [14]). Let $H(s)$, $0 \leq s \leq 1$, be a time-dependent Hamiltonian, let $|\psi(s)\rangle$ be one of its eigenstates, and let $\gamma(s)$ be the corresponding eigenvalue. Assume that for any $s \in [0,1]$, all other eigenvalues of $H(s)$ are either smaller than $\gamma(s) - \lambda$ or larger than $\gamma(s) + \lambda$. Consider the adiabatic evolution given by $H$ and $|\psi(s)\rangle$ applied for time $T$. Then the following condition is enough to guarantee that the final state is at distance at most $\delta$ from $|\psi(1)\rangle$:
$$T \geq \frac{10^5}{\delta^2} \cdot \max\left\{ \frac{\|H'\|^3}{\lambda^4}, \; \frac{\|H'\| \cdot \|H''\|}{\lambda^3} \right\}.$$
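As a small worked illustration (ours, not from the paper), the bound can be evaluated once estimates of $\|H'\|$, $\|H''\|$, the spectral gap $\lambda$, and the target distance $\delta$ are available; the numbers below are placeholders:

# Illustrative evaluation of the Ambainis-Regev bound; all numbers are placeholders.
def adiabatic_time_bound(h1, h2, lam, delta):
    """h1 = ||H'||, h2 = ||H''||, lam = spectral gap, delta = target distance."""
    return (1e5 / delta ** 2) * max(h1 ** 3 / lam ** 4, h1 * h2 / lam ** 3)

# With the linear interpolation of Equation (3), H(s) is linear in s, so ||H''|| = 0.
print(adiabatic_time_bound(h1=5.0, h2=0.0, lam=0.05, delta=0.1))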
One of the main design decisions for adiabatic algorithms is choosing an adequate initial Hamiltonian. To make use of the adiabatic theorem, we need to construct an initial Hamiltonian that does not commute with the final Hamiltonian; otherwise, the eigenvalue gap will disappear [2].

4. Main Result of This Work

With all the technical definitions established, we are now ready to formally state our main result. For any vector $w$ in Euclidean space we define the $\ell_1$-norm of $w$ as $\|w\|_1 = |w_1| + \dots + |w_d|$.
Theorem 2.
Let $\Pi_d^{\lambda}$ be any normal and collision-free MCO. If there are no equivalent Pareto-optimal solutions, then for any $w \in W_d$ there exist $w' \in W_d$ and a Hamiltonian $H_{w'}$, satisfying $\|w - w'\|_1 \leq 1/\mathrm{poly}(n)$, such that the quantum adiabatic algorithm, using $H_{w'}$ as final Hamiltonian, can find a Pareto-optimal solution $x$ corresponding to $w'$ in finite time.
In the following subsections we present a proof of Theorem 2. In Section 4.1 we show how to construct the initial and final Hamiltonians, and in Section 4.2 we show that our construction is correct.

4.1. The Initial and Final Hamiltonians for MCOs

In this section we show how to construct the initial and final Hamiltonians. Given any normal and collision-free MCO $\Pi_d^{\lambda} = (D, R, F)$, assume with no loss of generality that the domain of the objective functions is $D = \{0,1\}^n$, that is, the set of all bit strings of length $n$.
For each $i \in [1,d]_{\mathbb{Z}}$ we define a Hamiltonian $H_{f_i} = \sum_{x \in \{0,1\}^n} f_i(x)\, |x\rangle\langle x|$. Since $\Pi_d^{\lambda}$ is collision-free and normal, each $H_{f_i}$ is nondegenerate in all its eigenvalues and its minimum eigenvalue is 0. Given a linearization $w$ of $\Pi_d^{\lambda}$, we construct the final Hamiltonian $H_w$ for our quantum algorithm as
$$H_w = w_1 H_{f_1} + \dots + w_d H_{f_d} = \sum_{x \in \{0,1\}^n} \langle f(x), w \rangle \, |x\rangle\langle x|. \qquad (4)$$
Now we construct the initial Hamiltonian, which cannot commute with the final Hamiltonian just defined. In this work we make use of the Hamiltonian defined in [2]. Let $|\hat{0}\rangle = (|0\rangle + |1\rangle)/\sqrt{2}$ and $|\hat{1}\rangle = (|0\rangle - |1\rangle)/\sqrt{2}$. The operation $F$ that makes $|\hat{0}\rangle = F|0\rangle$ and $|\hat{1}\rangle = F|1\rangle$ is called the Walsh–Hadamard transform. Thus, for any $x \in \{0,1\}^n$ we can write $|\hat{x}\rangle = F^{\otimes n}|x\rangle$, where $F^{\otimes n}$ is the $n$-fold Walsh–Hadamard transform. The initial Hamiltonian is
$$H_0 = \sum_{x \in \{0,1\}^n} h(x) \, |\hat{x}\rangle\langle \hat{x}|, \qquad (5)$$
where $h(0^n) = 0$ and $h(x) = 1$ for any $x \neq 0^n$.
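For small $n$, the matrices of Equations (4) and (5) can be written down explicitly; the following sketch (ours, for illustration only; the objective values are arbitrary) builds both Hamiltonians with NumPy:

import numpy as np

# Illustrative construction of the final and initial Hamiltonians of Eqs. (4) and (5).
def final_hamiltonian(fvecs, w):
    """H_w = sum_x <f(x), w> |x><x|, with fvecs of shape (2**n, d)."""
    return np.diag(fvecs @ np.asarray(w))

def initial_hamiltonian(n):
    """H_0 = sum_{x != 0^n} |x-hat><x-hat| in the Walsh-Hadamard basis."""
    Had = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    F = Had
    for _ in range(n - 1):
        F = np.kron(F, Had)
    h = np.ones(2 ** n)
    h[0] = 0.0                            # h(0^n) = 0 and h(x) = 1 otherwise
    return F @ np.diag(h) @ F             # F is its own inverse

# Tiny MCO on n = 2 qubits with d = 2 objectives (values chosen arbitrarily).
fvecs = np.array([[0.0, 3.0],
                  [1.2, 1.8],
                  [2.4, 0.9],
                  [3.6, 0.0]])
Hw = final_hamiltonian(fvecs, w=(0.6, 0.4))
H0 = initial_hamiltonian(2)
print(np.diag(Hw))                        # eigenvalues <f(x), w>; here the minimum is unique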
Now that we have defined our initial and final Hamiltonians, we need to show that the interpolating Hamiltonian $H(t)$ of Equation (3) is indeed nondegenerate in all its eigenvalues and that it fulfils the requirements of Theorem 2.

4.2. Analysis of the Final Hamiltonian

Note that if the initial Hamiltonian does not commute with the final Hamiltonian, it suffices to prove that the final Hamiltonian is nondegenerate in its minimum eigenvalue [2]. For the remainder of this work, we let $\sigma_w$ and $\alpha_w$ be the smallest and second-smallest eigenvalues of $H_w$ corresponding to a normal and collision-free MCO $\Pi_d^{\lambda} = (D, R, F)$.
Lemma 4.
Let $x$ be a non-trivial Pareto-optimal solution of $\Pi_d^{\lambda}$. For any $w \in W_d$ it holds that $\sigma_w > \langle \lambda, w \rangle$.
Proof. 
Let $\sigma_w = w_1 f_1(x) + \dots + w_d f_d(x)$ and let $x$ be a non-trivial Pareto-optimal element. For each $w_i$ we have that
$$\sigma_w = \sum_i w_i f_i(x) > \sum_i w_i \lambda_i = \langle \lambda, w \rangle.$$
 □
Lemma 5.
For any $w \in W_d$, let $H_w$ be a Hamiltonian with a nondegenerate minimum eigenvalue. The eigenvalue gap between the smallest and second-smallest eigenvalues of $H_w$ is at least $\langle \lambda, w \rangle$.
Proof. 
Let $\sigma_w$ be the unique minimum eigenvalue of $H_w$. We have that $\sigma_w = \langle f(x), w \rangle$ for some $x \in \{0,1\}^n$. Now let $\alpha_w = \langle f(y), w \rangle$ be a second-smallest eigenvalue of $H_w$ for some $y \in \{0,1\}^n$ with $y \neq x$. Hence,
$$\alpha_w - \sigma_w = \langle f(y), w \rangle - \langle f(x), w \rangle = w_1 f_1(y) - w_1 f_1(x) + \dots + w_d f_d(y) - w_d f_d(x) \geq w_1 \lambda_1 + \dots + w_d \lambda_d = \langle \lambda, w \rangle.$$
 □
Lemma 6.
If there are no weakly-equivalent Pareto-optimal solutions in $\Pi_d^{\lambda}$, then the Hamiltonian $H_w$ is nondegenerate in its minimum eigenvalue.
Proof. 
By the contrapositive, suppose $H_w$ is degenerate in its minimum eigenvalue $\sigma_w$. Take any two degenerate minimal eigenstates $|x\rangle$ and $|y\rangle$, with $x \neq y$, such that
$$w_1 f_1(x) + \dots + w_d f_d(x) = w_1 f_1(y) + \dots + w_d f_d(y) = \sigma_w.$$
Then it holds that x and y are weakly-equivalent. □
We further show that even if $\Pi_d^{\lambda}$ has weakly-equivalent Pareto-optimal solutions, we can find a nondegenerate Hamiltonian. Let $m = \max_{x,i} \{f_i(x)\}$.
Lemma 7.
For any $\Pi_d^{\lambda}$, let $x_1, \dots, x_\ell \in D$ be Pareto-optimal solutions that are not pairwise equivalent. If there exist $w \in W_d$ and $\sigma_w \in \mathbb{R}^+$ such that $\langle f(x_1), w \rangle = \dots = \langle f(x_\ell), w \rangle = \sigma_w$ is minimum among all $y \in D$, then there exist $w' \in W_d$ and $i \in [1,\ell]_{\mathbb{Z}}$ such that for all $j \in [1,\ell]_{\mathbb{Z}}$, with $j \neq i$, it holds that $\langle f(x_i), w' \rangle < \langle f(x_j), w' \rangle$. Additionally, if the linearization $w'$ satisfies $\|w - w'\|_1 \leq \langle \lambda, w \rangle / (md)$, then $\langle f(x_i), w' \rangle$ is unique and minimum among all $\langle f(y), w' \rangle$ for $y \in D$.
Proof. 
We prove the lemma by induction on $\ell$. Let $\ell = 2$; then $\langle f(x_1), w \rangle = \langle f(x_2), w \rangle$, and hence,
$$\begin{aligned} w_1 f_1(x_1) + \dots + w_d f_d(x_1) &= \sigma_w \\ w_1 f_1(x_2) + \dots + w_d f_d(x_2) &= \sigma_w \end{aligned} \qquad (6)$$
for some $\sigma_w \in \mathbb{R}^+$. From linear algebra we know that there is an infinite number of elements of $W_d$ that simultaneously satisfy Equation (6). With no loss of generality, choose adequately $f_1$ and $f_2$, fix $w_3, \dots, w_d$, and set $b_1 = w_3 f_3(x_1) + \dots + w_d f_d(x_1)$ and $b_2 = w_3 f_3(x_2) + \dots + w_d f_d(x_2)$. We have that
$$\begin{aligned} w_1 f_1(x_1) + w_2 f_2(x_1) &= \sigma_w - b_1 \\ w_1 f_1(x_2) + w_2 f_2(x_2) &= \sigma_w - b_2. \end{aligned} \qquad (7)$$
Again, by linear algebra, we know that Equation (7) has a unique solution $w_1$ and $w_2$; it suffices to note that the determinant of the coefficient matrix of Equation (7) is not 0, as proved in Appendix A.
Choose any $w_1' \neq w_1$ and $w_2' \neq w_2$ satisfying $w_1' + w_2' + w_3 + \dots + w_d = 1$, and let $w' = (w_1', w_2', w_3, \dots, w_d)$. Then we have that $\langle f(x_1), w' \rangle \neq \langle f(x_2), w' \rangle$ because $w_1'$ and $w_2'$ are not solutions to Equation (7). Hence, either $\langle f(x_1), w' \rangle$ or $\langle f(x_2), w' \rangle$ must be smaller than the other.
Suppose that $\langle f(x_1), w' \rangle < \langle f(x_2), w' \rangle$. We now claim that $\langle f(x_1), w' \rangle$ is minimum and unique among all $y \in D$. In addition to the constraint of the preceding paragraph that $w'$ must satisfy, in order for $\langle f(x_1), w' \rangle$ to be minimum we must choose $w'$ such that $\|w - w'\|_1 \leq \langle \lambda, w \rangle/(md)$.
Assume, for the sake of contradiction, the existence of $y \in D$ such that $\langle f(y), w' \rangle \leq \langle f(x_1), w' \rangle$. Hence,
$$\langle f(y), w' \rangle \leq \langle f(x_1), w' \rangle < \langle f(y), w \rangle.$$
From Lemma 4, we know that $|\langle f(x_1), w \rangle - \langle f(y), w \rangle| > \langle \lambda, w \rangle$, and thus,
$$|\langle f(y), w \rangle - \langle f(y), w' \rangle| > \langle \lambda, w \rangle. \qquad (8)$$
Using the Cauchy–Schwarz inequality we have that
$$|\langle f(y), w \rangle - \langle f(y), w' \rangle| = |\langle f(y), w - w' \rangle| \leq \|f(y)\|_1 \cdot \|w - w'\|_1 \leq \langle \lambda, w \rangle,$$
where the last line follows from $\|f(y)\|_1 \leq md$ and $\|w - w'\|_1 \leq \langle \lambda, w \rangle/(md)$; from Equation (8), however, we have that $|\langle f(y), w - w' \rangle| > \langle \lambda, w \rangle$, which is a contradiction. Therefore, we conclude that $\langle f(y), w' \rangle > \langle f(x_1), w' \rangle$ for any $y \in D$; the case $\langle f(x_1), w' \rangle > \langle f(x_2), w' \rangle$ can be proved similarly. The base case of the induction is thus proved.
Now suppose the statement holds for $\ell$. Let $x_1, \dots, x_\ell, x_{\ell+1}$ be Pareto-optimal solutions that are not pairwise equivalent. Let $w \in W_d$ be such that $\langle f(x_1), w \rangle = \dots = \langle f(x_{\ell+1}), w \rangle$ holds. By our induction hypothesis, there exist $w' \in W_d$ and $i \in [1,\ell]_{\mathbb{Z}}$ such that $\langle f(x_i), w' \rangle < \langle f(y), w' \rangle$ for any other $y \in D$.
If $\langle f(x_i), w' \rangle \neq \langle f(x_{\ell+1}), w' \rangle$ then we are done, because either one must be smaller. Suppose, however, that $\langle f(x_{\ell+1}), w' \rangle = \langle f(x_i), w' \rangle = \sigma_{w'}$ for some $\sigma_{w'} \in \mathbb{R}^+$. From the base case of the induction we know there exists $w'' \neq w'$ that makes $\langle f(x_i), w'' \rangle < \langle f(x_{\ell+1}), w'' \rangle$, and hence, $\langle f(x_i), w'' \rangle < \langle f(y), w'' \rangle$ for any $y \in D$. Therefore, the lemma is proved. □
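The degeneracy-breaking argument of Lemma 7 can be mimicked numerically. The sketch below (our own illustration, with a hypothetical instance and illustrative names) starts from a weight vector $w$ that ties two non-equivalent Pareto-optimal solutions and nudges it by at most $\langle \lambda, w \rangle/(md)$ in $\ell_1$-norm; the tie disappears and a single minimizer remains:

# Numerical illustration of Lemma 7: breaking a degeneracy by perturbing the weights.
# Hypothetical instance; names and values are ours, not the paper's.
fvecs = {"x1": (1.0, 3.0), "x2": (3.0, 1.0), "y": (2.6, 2.6)}   # objective vectors
lam = (0.2, 0.2)                  # gap vector of a collision-free instance
m, d = 3.0, 2                     # m = max_{x,i} f_i(x), d = number of objectives

def score(v, w):
    return w[0] * v[0] + w[1] * v[1]

w = (0.5, 0.5)                    # ties x1 and x2: both score 2.0
budget = score(lam, w) / (m * d)  # allowed l1-perturbation <lambda, w>/(m d)
eps = budget / 2.0
w_prime = (w[0] + eps, w[1] - eps)   # ||w - w'||_1 = 2*eps <= budget

scores = {k: score(v, w_prime) for k, v in fvecs.items()}
print(min(scores, key=scores.get), scores)   # x1 is now the unique minimizer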
The premise in Lemma 7 that each of $x_1, \dots, x_\ell$ must be a Pareto-optimal solution is a sufficient condition, because if one of these solutions were not Pareto-optimal, then the statement would contradict Lemma 2.
We now apply Lemma 7 to find a Hamiltonian with a nondegenerate minimum eigenvalue.
Lemma 8.
Let $\Pi_d^{\lambda}$ be an MCO with no equivalent Pareto-optimal solutions and let $H_w$ be a Hamiltonian that is degenerate in its minimum eigenvalue, with corresponding minimum eigenstates $|x_1\rangle, \dots, |x_\ell\rangle$. There exist $w' \in W_d$, satisfying $\|w - w'\|_1 \leq \langle \lambda, w \rangle/(md)$, and $i \in [1,\ell]_{\mathbb{Z}}$ such that $H_{w'}$ is nondegenerate in its smallest eigenvalue, with corresponding eigenvector $|x_i\rangle$.
Proof. 
From Lemma 6, we know that if Π d λ has no weakly-equivalent Pareto-optimal solutions, then for any w the Hamiltonian H w is nondegenerate.
We now consider the case when the minimum eigenvalue of $H_w$ is degenerate with Pareto-optimal solutions that are weakly-equivalent. Let $x_1, \dots, x_\ell$ be such weakly-equivalent Pareto-optimal solutions that are non-trivial and satisfy $x_i \nsim x_j$ for all $i \neq j$. By Lemma 7 there exists $w' \in W_d$, with $w' \neq w$, such that $\langle f(x_i), w' \rangle$ is minimum among all $y \in D$ for some $i \in [1,\ell]_{\mathbb{Z}}$. □
If we consider our assumption from Section 2 that $m = \max_{x,i}\{f_i(x)\}$ is bounded by $\mathrm{poly}(n)$, where $n$ is the maximum number of bits of any element $x \in D$, then by Lemma 8 any $w'$ must satisfy $\|w - w'\|_1 \leq 1/\mathrm{poly}(n)$. Theorem 2 then follows immediately from Lemmas 2 and 8.
To see that the adiabatic evolution takes finite time, let $\Delta_{\max} = \max_s \left\| \frac{d}{ds} H(s) \right\|$ and $g_{\min} = \min_s g(s)$, where $g(s)$ is the eigenvalue gap of $H(s)$. Letting $T = O(\Delta_{\max}/g_{\min}^2)$ suffices to find a supported solution corresponding to $w'$. Since $g_{\min} > 0$ and $\left\| \frac{d}{ds} H(s) \right\| = \mathrm{poly}(n)$, we conclude that $T$ is finite.

5. An Application to the Two-Parabolas Problem

In this section we present a numerical example of how to construct a quantum adiabatic algorithm for the two-parabolas problem. Let $TP_2^{\lambda}$ denote the two-parabolas problem with two objective functions and gap vector $\lambda = (0.2, 0.4)$, whose description is given in Appendix B. In order to use the adiabatic algorithm of Section 3 we need to consider a collision-free version of the problem.
Let $n = 7$ be the number of bits needed to encode the entire domain of each objective function. Thus, a feasible solution is $x \in \{0, \dots, 127\}$. Since the gap vector is $\lambda = (0.2, 0.4)$, we construct objective functions $f_1$ and $f_2$ that resemble two parabolas in such a way that for each pair of feasible solutions $x \neq y$ it holds that $|f_1(x) - f_1(y)| \geq 0.2$ and $|f_2(x) - f_2(y)| \geq 0.4$. Table A1 gives the complete definition of $TP_2^{\lambda}$, and Figure 2 shows a plot of all points.
We define the final and initial Hamiltonians following Equations (4) and (5), respectively. In particular, the initial Hamiltonian is defined as
$$H_0 = \sum_{x \in \{0,1\}^n \setminus \{0^n\}} 8 \, |\hat{x}\rangle\langle \hat{x}|. \qquad (9)$$
The number 8 in Equation (9) is there only to enhance the visual presentation of the plots in this paper. The Hamiltonian of the entire system for $TP_2^{\lambda}$ is
$$H(s) = (1 - s) H_0 + s H_w. \qquad (10)$$
From the previous section we know that $T = O(\Delta_{\max}/g_{\min}^2)$ suffices to find a supported solution corresponding to $w$ [15]. The quantity $\Delta_{\max}$ is usually easy to estimate [2]. The eigenvalue gap $g_{\min}$ is, however, very difficult to compute; indeed, determining for an arbitrary Hamiltonian whether $g_{\min} > 0$ is undecidable [16].
In Figure 3 we present the eigenvalue gap of $TP_2^{\lambda}$ for $w = 0.57$, where we let $w_1 = w$ and $w_2 = 1 - w_1$; for this particular value of $w$ the final Hamiltonian $H_w$ has a unique minimum eigenstate, which corresponds to the Pareto-optimal solution $x_0 = 59$. The two smallest eigenvalues never touch, and exactly at $s = 1$ the gap is $|\langle f(x_0), w \rangle - \langle f(x_1), w \rangle|$, where $x_0 = 59$ and $x_1 = 60$ are the smallest and second-smallest solutions with respect to $w$, which agrees with Lemmas 4 and 5. Figure 4 shows the eigenvalue gap as a function of $s$ on a logarithmic scale.
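A gap curve like the one in Figures 3 and 4 can be reproduced qualitatively by direct diagonalization. The sketch below (ours; it uses smooth parabolas as a stand-in for the exact collision-free values of Table A1, so the resulting minimizer differs slightly from $x_0 = 59$) sweeps $s$ and prints the two-level gap of Equation (10):

import numpy as np

# Qualitative reproduction of the gap curves of Figures 3 and 4 (ours, illustrative).
# Smooth parabolas stand in for the exact collision-free values of Table A1.
n = 7
xs = np.arange(2 ** n)
f1 = 0.02 * (xs - 40.0) ** 2
f2 = 0.03 * (xs - 80.0) ** 2

w = 0.57
Hw = np.diag(w * f1 + (1.0 - w) * f2)        # final Hamiltonian, Equation (4)

Had = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
F = Had
for _ in range(n - 1):
    F = np.kron(F, Had)
h = 8.0 * np.ones(2 ** n)
h[0] = 0.0
H0 = F @ np.diag(h) @ F                      # initial Hamiltonian, Equation (9)

for s in np.linspace(0.0, 1.0, 21):
    H = (1.0 - s) * H0 + s * Hw              # Equation (10)
    evals = np.linalg.eigvalsh(H)
    print(f"s = {s:.2f}   gap = {evals[1] - evals[0]:.4f}")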
Similar results can be observed for different values of $w$ and a different number of qubits. Therefore, the experimental evidence leads us to conjecture that in the two-parabolas problem $g_{\min} \geq |\langle f(x), w \rangle - \langle f(y), w \rangle|$, where $x$ and $y$ are the smallest and second-smallest solutions with respect to $w$.

6. Concluding Remarks and Open Problems

In the last few years, the field of quantum computation has been finding new applications in artificial intelligence, machine learning, and data analysis. These new discoveries were fueled by a deeper understanding of the foundations of quantum information and computation, which resulted in new quantum algorithms for optimization problems (see, for example, [17]).
In this paper we addressed another side of optimization problems, namely, multiobjective optimization problems, where multiple objective functions must be optimized at the same time. This paper presented the first quantum multiobjective optimization algorithm with provable finite-time convergence. Other authors have proposed quantum algorithms for multiobjective optimization [7,8], but these algorithms were ad hoc heuristics with no theoretical guarantees of convergence. Furthermore, these proposals utilized a hybrid approach of classical and quantum computation, where Grover’s search algorithm constitutes the only “quantum part” and the other parts of the algorithm are classical. The quantum algorithm of this work is based on the successful quantum adiabatic paradigm of Farhi et al. [2] and is a fully quantum algorithm, that is, all of its execution is performed by quantum operations with no classical parts except the initialization and read-out of the results.
The quantum multiobjective optimization algorithm of this work finds a single Pareto-optimal solution, and in order to find other Pareto-optimal solutions it must be executed several times. Furthermore, the main result of this work, Theorem 2, requires that all multiobjective optimization problems be normal, collision-free, and with no equivalent solutions. Even though this result constrains the class of multiobjective problems that can be solved, this work presents a first step toward a general-purpose quantum multiobjective optimization algorithm.
We end this paper by listing a few promising and challenging open problems.
  • We know from Lemma 2 that if we linearize a multiobjective optimization problem, some Pareto-optimal solutions (the non-supported solutions) may not be found. Considering that our quantum algorithm uses a linearization technique, a new mapping or embedding method of a multiobjective problem into a Hamiltonian is necessary in order to construct a quantum adiabatic algorithm that can also find non-supported Pareto-optimal solutions.
  • For a practical application of our quantum algorithm, the linearization w in Theorem 2 must be chosen so that the resulting total Hamiltonian is non-degenerate in its ground-state. Therefore, more research is necessary in order to develop a heuristic for choosing w before executing the algorithm.
  • As mentioned before, currently our algorithm is only suitable for multiobjective problems with no equivalent solutions. Natural multiobjective optimization problems in engineering and science often have several equivalent solutions, and hence, in order to use our algorithm in a real-world situation we need to take equivalent solutions into account. This is a crucial point mainly because equivalent solutions yield degenerate ground states in the total Hamiltonian, and hence, the quantum adiabatic theorem cannot be used.
  • The time complexity of our quantum multiobjective algorithm depends on the spectral gap of the total Hamiltonian. Even though we presented some numerical results that suggest a polynomial execution time for the two-parabolas problem, a more thorough and rigorous analysis is still needed. This depends on the analysis of the spectral gap of the Hamiltonians that can be constructed for specific multiobjective problems, for example, by settling our conjecture for the two-parabolas problem of Section 5.

Author Contributions

B.B. and M.V.; Formal analysis, M.V.; Funding acquisition, B.B. and M.V.; Investigation, M.V.; Methodology, M.V.; Project administration, M.V.; Supervision, M.V.; Writing—original draft, B.B. and M.V.; Writing—review & editing, B.B. and M.V.

Funding

M.V. is supported by Conacyt research grant PINV15-208.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of Non-Singularity of Equation (7)

We want to demonstrate that the determinant of the matrix
$$A = \begin{pmatrix} f_1(x_1) & f_2(x_1) \\ f_1(x_2) & f_2(x_2) \end{pmatrix}$$
is not 0.
Let $\det(A) = f_1(x_1) f_2(x_2) - f_2(x_1) f_1(x_2)$. Since $x_1$ and $x_2$ are two different Pareto-optimal solutions, we have that $x_1 \parallel x_2$; therefore, there exists a function $f_i(x)$, which we will call $f_1(x)$, for which $f_1(x_1) < f_1(x_2)$. Since $x_1$ and $x_2$ are Pareto-optimal, it cannot happen that $f_i(x_1) < f_i(x_2)$ for all $i$; otherwise, $x_1$ would dominate $x_2$. Therefore, there exists at least one function $f_j(x)$, which we denote $f_2(x)$, such that $f_2(x_1) > f_2(x_2)$. Thus it holds that
$$f_1(x_1) < f_1(x_2), \qquad f_2(x_2) < f_2(x_1).$$
Then it follows that $f_1(x_1) f_2(x_2) < f_1(x_2) f_2(x_1)$, and consequently $\det(A) \neq 0$.

Appendix B. Data for the Two-Parabolas Problem of Figure 2

Table A1. Complete definition of the two-parabolas example of Figure 2 for seven qubits. Each row lists four groups of three columns: $x$, $f_1(x)$, $f_2(x)$.
x   f1(x)   f2(x)   x   f1(x)   f2(x)   x   f1(x)   f2(x)   x   f1(x)   f2(x)
036.14214.879134.219208.038232.375201.354330.606194.825
428.91188.449527.285182.224625.729176.148724.24170.219
822.816164.435921.455158.7941020.155153.2941118.914147.933
1217.73142.7091316.601137.621415.525132.6641514.5127.839
1613.524123.1431712.595118.5741811.711114.131910.87109.809
2010.07105.609219.309101.528228.58597.564237.89693.715
247.2489.979256.61586.354266.01982.838275.4579.429
284.90676.125294.38572.924303.88569.824313.40466.823
322.9463.919332.49161.11342.05558.394351.6355.769
361.21453.233370.80550.784380.40148.4239046.139
400.80143.939411.20541.818421.61439.774432.0337.805
442.45535.909452.89134.084463.3432.328473.80430.639
484.28529.015494.78527.454505.30625.954515.8524.513
526.41923.129537.01521.8547.6420.524558.29619.299
568.98518.123579.70916.9945810.4715.915911.2714.869
6012.11113.8696112.99512.9086213.92411.9846314.911.095
6415.92510.2396517.0019.4146618.138.6186719.3147.849
6820.5557.1056921.8556.3847023.2165.6847124.645.003
7226.1294.3397327.6853.697429.313.0547531.0062.429
7632.7751.8137734.6191.2047836.540.67938.540
8040.6211.28142.7851.8048245.0342.4138347.373.029
8449.7953.6548552.3114.298654.924.9398757.6245.603
8860.4256.2848963.3256.9849066.3267.7059169.438.449
9272.6399.2189375.95510.0149479.3810.8399582.91611.695
9686.56512.5849790.32913.5089894.2114.4699998.2115.469
100102.33116.51101106.57517.594102110.94418.723103115.4419.899
104120.06521.124105124.82122.4106129.7123.729107134.73425.113
108139.89526.554109145.19528.054110150.63629.615111156.2231.239
112161.94932.928113167.82534.684114173.8536.509115180.02638.405
116186.35540.374117192.83942.418118199.4844.539119206.2846.739
120213.24149.02121220.36551.384122227.65453.833123235.1156.369
124242.73558.994125250.53161.71126258.564.519127266.64467.423

References

  1. McGeoch, C.C. Adiabatic Quantum Computation and Quantum Annealing: Theory and Practice; Morgan & Claypool: San Rafael, CA, USA, 2014.
  2. Farhi, E.; Goldstone, J.; Gutmann, S.; Sipser, M. Quantum computation by adiabatic evolution. arXiv 2000, arXiv:quant-ph/0001106.
  3. von Lücken, C.; Barán, B.; Brizuela, C. A survey on multi-objective evolutionary algorithms for many-objective problems. Comput. Optim. Appl. 2014, 58, 707–756.
  4. Venegas-Andraca, S.; Cruz-Santos, W.; McGeoch, C.; Lanzagorta, M. A cross-disciplinary introduction to quantum annealing-based algorithms. Contemp. Phys. 2018, 59, 174–197.
  5. Grover, L. A fast quantum mechanical algorithm for database search. In Proceedings of the 28th Annual ACM Symposium on the Theory of Computing (STOC), Philadelphia, PA, USA, 22–24 May 1996; pp. 212–219.
  6. Baritompa, W.P.; Bulger, D.W.; Wood, G.R. Grover’s quantum algorithm applied to global optimization. SIAM J. Optim. 2005, 15, 1170–1184.
  7. Alanis, D.; Botsinis, P.; Ng, S.X.; Hanzo, L. Quantum-assisted routing optimization for self-organizing networks. IEEE Access 2014, 2, 614–632.
  8. Fogel, G.; Barán, B.; Villagra, M. Comparison of two types of quantum oracles based on Grover’s adaptative search algorithm for multiobjective optimization problems. In Proceedings of the 10th International Workshop on Computational Optimization (WCO), Federated Conference on Computer Science and Information Systems (FedCSIS), ACSIS, Prague, Czech Republic, 3–6 September 2017; Volume 11, pp. 421–428.
  9. Das, A.; Chakrabarti, B.K. Quantum annealing and quantum computation. Rev. Mod. Phys. 2008, 80, 1061.
  10. Barán, B.; Villagra, M. Multiobjective optimization in a quantum adiabatic computer. Electron. Notes Theor. Comput. Sci. 2016, 329, 27–38.
  11. Kung, H.T.; Luccio, F.; Preparata, F.P. On finding the maxima of a set of vectors. J. ACM 1975, 22, 469–476.
  12. Papadimitriou, C.; Yannakakis, M. On the approximability of trade-offs and optimal access of web sources. In Proceedings of the 41st Annual Symposium on Foundations of Computer Science (FOCS), Washington, DC, USA, 12–14 November 2000; pp. 86–92.
  13. Ehrgott, M.; Gandibleux, X. A survey and annotated bibliography of multiobjective combinatorial optimization. OR Spektrum 2000, 22, 425–460.
  14. Ambainis, A.; Regev, O. An elementary proof of the quantum adiabatic theorem. arXiv 2004, arXiv:quant-ph/0411152.
  15. van Dam, W.; Mosca, M.; Vazirani, U. How powerful is adiabatic quantum computation? In Proceedings of the 42nd IEEE Symposium on Foundations of Computer Science (FOCS), Las Vegas, NV, USA, 14–17 October 2001; pp. 279–287.
  16. Cubitt, T.; Perez-Garcia, D.; Wolf, M. Undecidability of the spectral gap. Nature 2015, 528, 207–211.
  17. Biamonte, J.; Wittek, P.; Pancotti, N.; Rebentrost, P.; Wiebe, N.; Lloyd, S. Quantum machine learning. Nature 2017, 549, 195–202.
Figure 1. The two-parabolas problem. The first objective function $f_1(x) = (x-7)^2$ is represented by the bold line and the second objective function $f_2(x) = (x-15)^2$ by the dashed line. Note that there are no equivalent elements in the domain. In this particular example, all the solutions between 7 and 15 are Pareto-optimal.
Figure 2. A discrete two-parabolas problem on seven qubits. The objective functions $f_1$ and $f_2$ are represented by the round points and the square points, respectively. The gap vector is $\lambda = (0.2, 0.4)$. The trivial Pareto-optimal points are 40 and 80.
Figure 3. Eigenvalues of Equation (10) for the two-parabolas problem $TP_2^{\lambda}$ of Figure 2 with $w = 0.57$. The eigenvalue gap $g(s)$ at $s = 1$ is exactly $|\langle w, f(x_0) \rangle - \langle w, f(x_1) \rangle|$, where $x_0 = 59$ and $x_1 = 60$ are the smallest and second-smallest solutions with respect to this value of $w$.
Figure 4. Logarithmic plot of the eigenvalue gap $g(s) = \alpha_w - \sigma_w$ as a function of $s$.
