Article

A Quantum-Behaved Particle Swarm Optimization Algorithm on Riemannian Manifolds

1 Key Laboratory of Advanced Process Control for Light Industry, Ministry of Education, No. 1800, Lihu Avenue, Wuxi 214122, China
2 School of Computer and Information Engineering, Xinjiang Agricultural University, Urumqi 830052, China
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(22), 4168; https://doi.org/10.3390/math10224168
Submission received: 3 September 2022 / Revised: 1 November 2022 / Accepted: 4 November 2022 / Published: 8 November 2022

Abstract:
Riemannian manifold optimization algorithms have been widely used in machine learning, computer vision, data mining, and other technical fields. Most of these algorithms are based on the geodesic or the retracement operator and use classical methods (i.e., the steepest descent method, the conjugate gradient method, the Newton method, etc.) to solve engineering optimization problems. However, they cannot handle non-differentiable mathematical models and do not guarantee global convergence on non-convex manifolds. Considering this issue, this paper proposes a quantum-behaved particle swarm optimization (QPSO) algorithm on Riemannian manifolds named RQPSO. In this algorithm, the quantum-behaved particles are randomly distributed on the manifold surface and iteratively updated during the whole search process. The vector transfer operator translates the guiding vectors, which do not lie in the same Euclidean space, into the tangent space of a particle; by searching along these guiding vectors, the retracement operator maps the updated points back to the manifold, which finally yields the optimized result. The proposed RQPSO algorithm does not depend on the expression form of a problem and can deal with various engineering problems, both differentiable and non-differentiable. To evaluate RQPSO experimentally, we compare it with some traditional algorithms on three common matrix manifold optimization problems. The experimental results show that RQPSO outperforms its competitors in terms of calculation speed and optimization efficiency.

1. Introduction

Optimization algorithms are indispensable mathematical methods for solving various scientific and engineering optimization problems [1,2], such as Hermitian eigenvalue problems [3], adaptive regularization [4], camera pose estimation [5], low-rank matrix approximation [6], and so on. In some of these practical engineering problems, the objective function does not live in a full Euclidean space but is restricted to some curve or surface [3,4,5,6]. In other words, the problem itself has no linear structure, and its solution space has a complex topology, so it inevitably needs to be optimized on a manifold. For example, in computer-vision-based target tracking, digital images undergo a continuous set of transformations, and the set of geometric transformations of such images constitutes an SL Lie group [7]. The feature subspaces of videos or image sequences constitute a Grassmann manifold structure [8]. In some specific fields, such as image processing and data analysis, the data matrix has a positive semidefinite symmetric structure, so many problems can be modeled on the manifold of positive semidefinite symmetric (SDP) matrices [9].
Matrix manifold optimization [10] is the minimization of a real-valued function over a Riemannian manifold, which can be stated as follows:
$\min_{X \in \mathcal{M}} f(X),$  (1)
where $X$ is the matrix variable and $\mathcal{M}$ is the manifold.
In general, a Riemannian manifold is a geometric structure defined on a Riemann space. Since the Riemann space differs from Euclidean space, the operators defined in Euclidean space are no longer applicable to it, and optimization theory based on Euclidean space cannot be applied directly to Riemannian manifolds. Riemannian manifold optimization theory, however, overcomes these limitations, which gives Riemannian manifold optimization algorithms some unique advantages:
  • The iteration operation on the Riemannian manifold can reduce the modeling error and improve the expression ability of nonlinear problems;
  • It can make full use of the intrinsic geometric structure of the objective function and embed the constraints into the search space, so that constrained problems, such as low-rank matrix constraint problems and positive semidefinite programming problems [11], can be solved as unconstrained problems on the manifold;
  • Solving the target directly on the Riemannian manifold can reduce the damage to the original structural information caused by the vectorization of high-dimensional data.
With the development of Riemannian manifold optimization theory and related technologies, the Riemannian manifold optimization methods have been used in a variety of fields, such as big data, computer vision, target tracking, data mining, and so on [3,4,12,13,14,15].
At present, the commonly used Riemannian manifold optimization algorithms are based on classical optimization methods, such as the steepest gradient descent method, the conjugate gradient method [16], and the Newton method [17], within the framework of the geodesic or retracement operator. These algorithms have good efficiency and local convergence for gradient-based optimization problems, but they still have shortcomings. Specifically, many real-world problems are non-smooth or non-differentiable, which makes it difficult to apply classical gradient-based methods directly. On the other hand, since most Riemannian manifold optimization problems are non-convex, algorithms based on gradient descent easily fall into local optima and thus cannot find the global optimum.
To overcome these defects, more and more researchers have focused on the improvement and development of Riemannian optimization algorithms. The Swarm Intelligence Algorithm (SIA) is a typical optimization algorithm of this kind [18]. It neither depends on the structural properties of the problem to be solved nor focuses on a specific problem; it places low requirements on the mathematical properties of the target problem and has strong global exploration ability. However, the original SIAs designed in Euclidean space cannot be used directly for Riemannian manifold optimization. Therefore, in recent years, some researchers have begun to use improved SIAs to solve black-box optimization problems on manifolds. For example, Borckmans et al. [19] introduced the Riemannian gradient into the particle swarm optimization (PSO) algorithm and applied it to low-rank tensor approximation problems; during the search, the velocity vector of each particle is updated in the tangent space and then mapped back to the manifold through the retracement operator. Colutto et al. [20] extended the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) to Riemannian manifolds, first minimizing a functional on the two-dimensional sphere and then reconstructing shapes from three-dimensional spherical data; compared with the manifold-based gradient method and the differential evolution (DE) algorithm, its convergence rate is greatly improved. Arnold [21] combined the standard evolution strategy (ES) with a mutation strength adaptation mechanism and analyzed the convergence of the algorithm on spherical manifolds. Arnold and Lu [22] proposed an adaptive standard ES, which adaptively controls the complete covariance matrix of its offspring distribution, generating the mutation vectors in the tangent space of the current population centroid; the points in the tangent space are mapped to the manifold by exponential mapping. Nevertheless, these SIAs still suffer from poor versatility, poor robustness, and low computational efficiency.
In 2004, Sun et al. [23] proposed the quantum-behaved particle swarm optimization (QPSO) algorithm, inspired by quantum mechanics and the trajectory analysis of PSO [24]. Because of its strong optimization performance, few control parameters, easy implementation, and versatility, the QPSO algorithm has attracted extensive interest from researchers for solving real-world optimization problems. For example, it has been used to tackle constrained optimization [25], multi-objective optimization [26], neural network training [27], electromagnetic design [28], semiconductor design [29], image processing [30], and so on.
In this paper, we extend the QPSO algorithm from the Euclidean space to Riemannian manifolds and then propose a QPSO algorithm on Riemannian manifolds named RQPSO. It is tested on three kinds of Riemannian manifold optimization problems, including the positive semidefinite programming (SDP) problem [31], the secant-based dimension reduction (SDR) problem [32], and the robust principal component analysis (Robust PCA) problem [33].
The rest of the paper is organized as follows. Section 2 provides a brief introduction of the Riemannian manifold and the QPSO algorithm. Section 3 describes the details of our proposed RQPSO algorithm. The experimental results and analysis are presented in Section 4. Section 5 concludes the paper.

2. Preliminaries

2.1. The Riemannian Manifold

The Riemannian topological manifold, referred to as the Riemannian manifold, is a non-Euclidean geometric space that is locally isomorphic to Euclidean space and is endowed with a differential structure and a Riemannian metric [10].
Stiefel manifold: A Stiefel manifold is the set of matrices with orthonormal columns, which can be defined as follows:
$\mathrm{St}(n,p) = \{ X \in \mathbb{R}^{n \times p} \mid X^{T}X = I_{p} \},$  (2)
where $X$ represents the matrix variable and $X^{T}$ is the transpose of $X$. $n$ and $p$ are the numbers of rows and columns of the matrix, respectively, with $n > p$; $\mathbb{R}^{n \times p}$ is the space of $n \times p$ real matrices, and $I_{p}$ is the $p$-order identity matrix. The set of $p$-order orthogonal matrices can be defined as follows:
$\mathcal{O}(p) = \{ X \in \mathbb{R}^{p \times p} \mid X^{T}X = XX^{T} = I_{p} \}.$  (3)
Grassmann manifold: Based on the above Stiefel manifold and the set of $p$-order orthogonal matrices, the quotient-manifold definition of the Grassmann manifold can be stated as follows:
$\mathrm{Gr}(n,p) = \mathrm{St}(n,p)/\mathcal{O}(p).$  (4)
Two points $X_{1}, X_{2} \in \mathrm{St}(n,p)$ belong to the same equivalence class if and only if there is an orthogonal matrix $O \in \mathcal{O}(p)$ such that $X_{1} = X_{2}O$. We can use $[X]$ to represent an equivalence class in the Grassmann manifold; that is, $[X]$ is a point on the Grassmann manifold, and $X \in \mathrm{St}(n,p)$ is a representative element of the point $[X]$. In other words, the Grassmann manifold is the set of $p$-dimensional subspaces of $\mathbb{R}^{n}$, so another definition of the Grassmann manifold can be presented as follows:
$\mathrm{Gr}(n,p) = \{ \mathrm{span}(X) \mid X \in \mathbb{R}^{n \times p},\ X^{T}X = I_{p} \},$  (5)
where $\mathrm{span}(X)$ represents the subspace spanned by the columns of $X$.
Oblique manifold: When the columns of the matrix are constrained to be normalized while the angles between the columns are unconstrained, the Oblique manifold is formed, which is denoted as follows:
$\mathrm{Ob}(n,p) = \{ X \in \mathbb{R}^{n \times p} \mid \mathrm{diag}(X^{T}X) = I_{p} \},$  (6)
where $\mathrm{diag}(\cdot)$ extracts the diagonal of a matrix.
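To make these definitions concrete, the following NumPy sketch (our illustration, not code from the paper) samples a point on each of the three manifolds and verifies the defining constraints; the QR-based constructions are one standard way to generate such points.

```python
import numpy as np

rng = np.random.default_rng(42)
n, p = 6, 3

# Stiefel St(n, p): orthonormalize a random Gaussian matrix via QR.
X_st, _ = np.linalg.qr(rng.standard_normal((n, p)))
assert np.allclose(X_st.T @ X_st, np.eye(p), atol=1e-10)

# Grassmann Gr(n, p): a point is the subspace span(X); X and X @ O (O orthogonal)
# represent the same point, as their orthogonal projectors X X^T coincide.
O, _ = np.linalg.qr(rng.standard_normal((p, p)))
assert np.allclose(X_st @ X_st.T, (X_st @ O) @ (X_st @ O).T, atol=1e-10)

# Oblique Ob(n, p): normalize each column to unit length.
Z = rng.standard_normal((n, p))
X_ob = Z / np.linalg.norm(Z, axis=0, keepdims=True)
assert np.allclose(np.diag(X_ob.T @ X_ob), np.ones(p), atol=1e-10)
```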

2.2. Basic Operators on Manifold

The tangent space makes it possible to translate a tangent vector at one point on the manifold into the tangent space at another point; both vectors then lie in the same Euclidean space, so Euclidean operations can be performed on them.
Suppose that $\gamma(t)$ is a smooth curve on the manifold, given by the smooth map
$\gamma:\ \mathbb{R} \to \mathcal{M},\ t \mapsto \gamma(t),$  (7)
where $\gamma$ is a smooth curve on the manifold $\mathcal{M}$ with $\gamma(0) = X$. Then we can define
$v_{X}:\ \mathfrak{F}_{X}(\mathcal{M}) \to \mathbb{R},\ f \mapsto v_{X}f = \left.\frac{\mathrm{d}f(\gamma(t))}{\mathrm{d}t}\right|_{t=0},$  (8)
where $v_{X}$ is a map, representing the tangent vector induced by the curve $\gamma$ at $t = 0$; $f:\ \mathcal{M} \to \mathbb{R}$ is a function defined on the manifold, and $\mathfrak{F}_{X}(\mathcal{M})$ denotes the set of smooth real-valued functions defined on the manifold around $X$.
The space formed by all tangent vectors at a point $X$ on the manifold $\mathcal{M}$ is called the tangent space at $X$, denoted $T_{X}\mathcal{M}$. It is a linear space that is tangent to $\mathcal{M}$ at the point $X$ and regards $X$ as its origin.
When performing operations (i.e., addition and subtraction) on tangent vectors in different tangent spaces, it is necessary to translate one tangent vector into the tangent space where the other tangent vector is located, without changing the properties of the tangent vector during the translation. This requires vector transfer, an operator that preserves the properties of vectors on the manifold.
The vector transfer on the manifold $\mathcal{M}$ is a smooth map
$\mathrm{Transp}_{Y \leftarrow X}:\ T_{X}\mathcal{M} \to T_{Y}\mathcal{M}.$  (9)
It transfers a vector $\xi$ from the point $X$ to the point $Y$, i.e., $\xi' = \mathrm{Transp}_{Y \leftarrow X}(\xi)$, where $\xi \in T_{X}\mathcal{M}$ and $\xi' \in T_{Y}\mathcal{M}$. Taking $\mathrm{St}(3,1)$ as an example, the vector transfer process is shown in Figure 1.
The retracement operator on the manifold $\mathcal{M}$ is a smooth mapping from the tangent space $T_{X}\mathcal{M}$ to $\mathcal{M}$:
$R_{X}:\ T_{X}\mathcal{M} \to \mathcal{M}.$  (10)
As shown in Figure 2, $R_{X}(Y)$ is a point on the manifold, indicating that the point $Y$ in the tangent space $T_{X}\mathcal{M}$ is retracted to the corresponding point on the manifold, that is,
$R_{X}(Y) = R_{X}(t\zeta),$  (11)
where $Y = X + t\zeta$ is a point in the tangent space $T_{X}\mathcal{M}$, $\zeta$ is a tangent vector in $T_{X}\mathcal{M}$, and $t$ is the retracement parameter.
Suppose that $\mathcal{M}$ is a smooth manifold with a connection $\nabla$ and that $X \in \mathcal{M}$. For each $\xi \in T_{X}\mathcal{M}$, there is an open interval $H \ni 0$ and a unique geodesic $\gamma(t; X, \xi):\ H \to \mathcal{M}$ with $\gamma(0) = X$ and $\dot{\gamma}(0) = \xi$. Moreover, the geodesic satisfies the homogeneity property $\gamma(t; X, a\xi) = \gamma(at; X, \xi)$ for $a \in \mathbb{R}$. The mapping
$\mathrm{Exp}_{X}:\ T_{X}\mathcal{M} \to \mathcal{M}:\ \xi \mapsto \mathrm{Exp}_{X}(\xi) = \gamma(1; X, \xi)$  (12)
is known as the exponential mapping at $X$.
Logarithmic mapping is the inverse of exponential mapping; it exists only where the exponential mapping is invertible, and is defined as follows:
$\mathrm{Log}_{X}:\ \mathcal{M} \to T_{X}\mathcal{M}:\ Y \mapsto \mathrm{Log}_{X}(Y) = \xi \ \ \mathrm{s.t.}\ \mathrm{Exp}_{X}(\xi) = Y,\ \ \|\xi\|_{X} = \mathrm{dist}(X, Y),$  (13)
where $\|\xi\|_{X} = \sqrt{\langle \xi, \xi \rangle_{X}}$ is the norm of the tangent vector $\xi \in T_{X}\mathcal{M}$ at $X$, $\langle \xi, \xi \rangle_{X}$ is the inner product in the tangent space determined by $X$, and $\mathrm{dist}(X, Y)$ is the Riemannian distance on the manifold.
Given a starting point $X$ and a target point $Y$, the logarithmic mapping returns a tangent vector pointing from $X$ to $Y$ with $\|\mathrm{Log}_{X}(Y)\|_{X} = \mathrm{dist}(X, Y)$ (see Figure 2).
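As these maps are abstract, a small sketch may help. The following Python/NumPy code (ours, for illustration) implements projection-based versions of these operators for the Stiefel manifold $\mathrm{St}(n,p)$: the tangent-space projection, a QR-based retracement $R_{X}$, and a projection-based vector transfer. On general matrix manifolds the exponential and logarithmic maps have no cheap closed form, which is why retracement-based schemes such as the RQPSO algorithm proposed in Section 3 substitute $R_{X}$ for $\mathrm{Exp}_{X}$.

```python
import numpy as np

def proj_tangent(X, Z):
    # Projection of an ambient matrix Z onto T_X St(n, p) = {V : X^T V + V^T X = 0}.
    XtZ = X.T @ Z
    return Z - X @ (XtZ + XtZ.T) / 2.0

def retract_qr(X, V):
    # QR-based retracement R_X(V): map the tangent vector V back onto St(n, p).
    Q, R = np.linalg.qr(X + V)
    d = np.sign(np.diag(R))
    d[d == 0] = 1.0          # fix the sign ambiguity of the QR factorization
    return Q * d

def transfer(Y, V):
    # Projection-based vector transfer Transp_{Y <- X}(V): project V into T_Y.
    return proj_tangent(Y, V)

# Quick check: a random Stiefel point, a small tangent step, and the retraction.
rng = np.random.default_rng(0)
X, _ = np.linalg.qr(rng.standard_normal((6, 3)))
V = proj_tangent(X, 0.1 * rng.standard_normal((6, 3)))
Y = retract_qr(X, V)
assert np.allclose(Y.T @ Y, np.eye(3), atol=1e-10)   # Y stays on St(6, 3)
```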

2.3. Test Problems

2.3.1. Semidefinite Programming Problem

Semidefinite programming (SDP) [34] is one of the standard classes of convex optimization problems. In recent years, SDP has gradually become a research hotspot, as many engineering problems can be modeled as SDP problems or approximated in SDP form.
The SDP problem considered here can be defined as follows:
$\min_{X \in \mathrm{Ob}(n,p)} f(X) = \frac{1}{2}(AX)\cdot X,$  (14)
where $X \in \mathbb{R}^{n \times p}$ is a point on $\mathrm{Ob}(n,p)$, the symbol "$\cdot$" denotes the element-wise dot product, and $A$ is an $n \times n$ real symmetric matrix.
For the experiments, an upper triangular matrix $A_{0}$ whose entries take the value 1 with probability 0.1 is generated, and the final $A$ is obtained as $A = \frac{1}{2n}(A_{0} + A_{0}^{T})$. Then, 30 test instances are formed from $n \in \{25, 50, 100, 150, 200, 250\}$ and $p \in \{3, 5, 7, 9, 11\}$.
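A sketch of this instance generator and objective in Python/NumPy is given below (our reading of the description above; the exact sampling used in the paper may differ in details):

```python
import numpy as np

def make_sdp_instance(n, density=0.1, rng=None):
    # Upper-triangular 0/1 matrix whose entries are 1 with probability 0.1,
    # symmetrized and scaled: A = (A0 + A0^T) / (2n).
    rng = np.random.default_rng(rng)
    A0 = np.triu((rng.random((n, n)) < density).astype(float))
    return (A0 + A0.T) / (2.0 * n)

def f_sdp(A, X):
    # f(X) = 0.5 * (A X) . X, with "." the element-wise dot product (Eq. (14)).
    return 0.5 * np.sum((A @ X) * X)

# Evaluate at a random point on the Oblique manifold Ob(n, p).
rng = np.random.default_rng(1)
n, p = 50, 3
A = make_sdp_instance(n, rng=rng)
X = rng.standard_normal((n, p))
X /= np.linalg.norm(X, axis=0, keepdims=True)
print(f_sdp(A, X))
```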

2.3.2. Secant-Based Data Dimensionality Reduction Problem

Secant-based Dimension Reduction (SDR) is an optimization problem that seeks a projection of high-dimensional data that remains injective (one-to-one) on the data [35]. In this paper, it is implemented on the basis of the studies in references [32,36].
Assume that $X \in \mathbb{R}^{n \times N}$ is a data matrix, $S \subset \mathbb{R}^{n}$ is the set of its data points, and
$\Sigma = \left\{ \frac{X_{:,i} - X_{:,j}}{\|X_{:,i} - X_{:,j}\|} \;\middle|\; X_{:,i}, X_{:,j} \in S,\ X_{:,i} \neq X_{:,j} \right\}$
is the set of unit secants of $X$. The SDR problem can then be defined as the minimization
$\min_{U \in \mathrm{Gr}(n,p)} f(U) = -\min_{s \in \Sigma}\left\| UU^{T}s \right\|_{2},$  (15)
where $U$ is a point on the Grassmann manifold and $s$ is an element of the unit secant set. The data matrix is generated as $X = Q\Lambda Z$, where $Q \in \mathbb{R}^{n \times r}$ is a randomly generated orthogonal matrix, $\Lambda \in \mathbb{R}^{r \times r}$ is a diagonal matrix with diagonal elements $\Lambda_{i,i} = \beta^{1-i}$, and $Z \in \mathbb{R}^{r \times N}$ is a random matrix whose elements follow the $N(0,1)$ distribution. $\beta$ controls the condition number of the data matrix.
For the experiment, we set $\beta = 1.05$, $n \in \{50, 100, 150, 200, 250, 300\}$, and $p \in \{3, 5, 7, 9, 11\}$, and 30 different test instances are generated on this benchmark.
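The following sketch (ours) generates an SDR instance and evaluates the objective with the sign convention of Equation (15) as reconstructed above; the values of r and N are illustrative assumptions, since the text does not fix them for this benchmark:

```python
import numpy as np

def make_data(n, r, N, beta, rng):
    # X = Q @ Lam @ Z with Q orthogonal, Lam_ii = beta^(1 - i), Z_ij ~ N(0, 1).
    Q, _ = np.linalg.qr(rng.standard_normal((n, r)))
    Lam = np.diag(beta ** (1.0 - np.arange(1, r + 1)))
    return Q @ Lam @ rng.standard_normal((r, N))

def unit_secants(X):
    # All normalized pairwise differences of the data columns.
    _, N = X.shape
    secants = []
    for i in range(N):
        for j in range(i + 1, N):
            d = X[:, i] - X[:, j]
            nd = np.linalg.norm(d)
            if nd > 1e-12:
                secants.append(d / nd)
    return np.array(secants).T        # shape (n, number of secants)

def f_sdr(U, S):
    # f(U) = -min_s ||U U^T s||_2 over the unit secant set (Eq. (15)).
    return -np.min(np.linalg.norm(U @ (U.T @ S), axis=0))

rng = np.random.default_rng(2)
X = make_data(n=50, r=10, N=40, beta=1.05, rng=rng)   # r and N assumed for the demo
S = unit_secants(X)
U, _ = np.linalg.qr(rng.standard_normal((50, 3)))     # a representative of Gr(50, 3)
print(f_sdr(U, S))
```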

2.3.3. Robust Principal Component Analysis

Principal Component Analysis (PCA) is a data dimensionality reduction algorithm that removes redundant information from the data while retaining the primary information. Similar to classical PCA, Robust PCA [33] is also a dimensionality reduction method in nature. However, when the data are huge or contain a certain amount of noise, PCA cannot achieve the expected effect, whereas Robust PCA can handle such data.
Assume that $A$ is the data matrix and $N$ is the number of data samples. The problem can be defined as follows:
$\min_{X \in \mathrm{Gr}(n,p)} f(X) = \sum_{i=1}^{N}\left\| A_{:,i} - XX^{T}A_{:,i} \right\|_{2},$  (16)
where $X$ is a point on the Grassmann manifold; that is, $f(X)$ is the sum of the distances from the data points to the subspace spanned by $X$.
For the experiment, to generate the data matrix $A$, we first generate a noise-free data matrix $B \in \mathbb{R}^{n \times N}$ with $B = Q\Lambda Z$, where $Q \in \mathbb{R}^{n \times r}$ is a randomly generated orthogonal matrix, $\Lambda \in \mathbb{R}^{r \times r}$ is a diagonal matrix with diagonal elements $\Lambda_{i,i} = \beta^{1-i}$, $\beta$ is the attenuation index, and $Z \in \mathbb{R}^{r \times N}$ is a random matrix whose elements obey the $N(0,1)$ distribution. $r \in [p+1, n]$ is the rank of the matrix $B$, and $\beta > 1$ determines the decay of the eigenvalues of the sample covariance matrix $BB^{T}/N$. After obtaining the noise-free data matrix $B$, we insert non-Gaussian noise ($-1$ or $1$, each with probability $\theta/2$) into $B$ and thus obtain the noisy data matrix $A$. In addition, we set $N = 200$, $\beta = 1.5$, $r = p + 10$, $\theta \in \{0.1, 0.01, 0.001\}$, $n \in \{50, 100, 150, 200\}$, and $p \in \{3, 5, 7, 9, 11\}$, and 36 different test instances are generated on this benchmark.
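A sketch of this generator and of the Robust PCA objective of Equation (16) follows (our illustration; we read "insert noise with probability θ/2" as replacing each entry by −1 or +1, each with probability θ/2):

```python
import numpy as np

def make_rpca_instance(n, p, N=200, beta=1.5, theta=0.01, rng=None):
    rng = np.random.default_rng(rng)
    r = p + 10
    # Noise-free data B = Q @ Lam @ Z with Lam_ii = beta^(1 - i).
    Q, _ = np.linalg.qr(rng.standard_normal((n, r)))
    Lam = np.diag(beta ** (1.0 - np.arange(1, r + 1)))
    B = Q @ Lam @ rng.standard_normal((r, N))
    # Non-Gaussian noise: each entry becomes -1 or +1, each with probability theta/2.
    U = rng.random((n, N))
    return np.where(U < theta / 2.0, -1.0, np.where(U < theta, 1.0, B))

def f_rpca(X, A):
    # Sum of distances from the data columns to the subspace spanned by X (Eq. (16)).
    R = A - X @ (X.T @ A)
    return np.sum(np.linalg.norm(R, axis=0))

rng = np.random.default_rng(3)
A = make_rpca_instance(n=50, p=3, theta=0.01, rng=rng)
X, _ = np.linalg.qr(rng.standard_normal((50, 3)))
print(f_rpca(X, A))
```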

2.4. The Quantum-Behaved Particle Swarm Optimization Algorithm

Sun et al. [23,37,38] introduced the concept of quantum behavior into the standard PSO algorithm and proposed the QPSO algorithm. In QPSO with $M$ particles, the position of each particle is updated as follows during the $t$-th iteration:
$p_{i}(t) = \varphi(t)\cdot P_{i}(t) + (1-\varphi(t))\cdot G(t),$  (17)
$X_{i}(t+1) = p_{i}(t) \pm \alpha\left| C(t) - X_{i}(t) \right| \cdot \ln(1/u_{i}(t)),$  (18)
where $p_{i}(t)$ is the local focus of particle $i$, $P_{i}(t)$ is the personal best position of particle $i$, $G(t)$ is the global best position of the swarm, and $X_{i}(t)$ is the position of particle $i$, $i = 1, 2, \ldots, M$. $\varphi(t)$ and $u_{i}(t)$ are random numbers uniformly distributed on $(0,1)$, denoted $\varphi(t) \sim U(0,1)$ and $u_{i}(t) \sim U(0,1)$. $\alpha$ is the contraction-expansion coefficient, and $C(t)$ is the mean best position, defined as
$C(t) = \frac{1}{M}\sum_{i=1}^{M} P_{i}(t).$  (19)
In the QPSO algorithm, there are two ways to obtain $C(t)$: one is to calculate the mean best position by Formula (19); the other is to select it randomly from the personal best positions of the population. In this paper, we chose the latter, for it needs less calculation [39].
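For later comparison with the manifold version, a minimal Euclidean implementation of one QPSO iteration, Equations (17)-(19), might look as follows (a sketch with simplified parameter handling):

```python
import numpy as np

def qpso_step(X, P, G, alpha, rng):
    """One QPSO iteration in R^d (Equations (17)-(19)).

    X: (M, d) current positions; P: (M, d) personal bests; G: (d,) global best.
    """
    M, d = X.shape
    C = P.mean(axis=0)                                  # mean best position, Eq. (19)
    phi = rng.random((M, d))                            # phi ~ U(0, 1)
    p = phi * P + (1.0 - phi) * G                       # local focus, Eq. (17)
    u = 1.0 - rng.random((M, d))                        # u in (0, 1]
    sign = np.where(rng.random((M, d)) < 0.5, 1.0, -1.0)
    return p + sign * alpha * np.abs(C - X) * np.log(1.0 / u)   # Eq. (18)
```

In RQPSO, the linear combination in Equation (17) and the additive move in Equation (18) are replaced by the geodesic and tangent-space constructions of Equations (20)-(23) below.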

3. The Proposed RQPSO Algorithm

3.1. The QPSO Algorithm on Riemannian Manifold

As presented in Section 2.4, the QPSO algorithm works only in Euclidean space: the update of the particles' positions relies on linear operators in Euclidean space, so if the particles are distributed on a manifold, those position-update operators cannot be applied directly. Therefore, in this paper, we extend QPSO to manifolds and propose the RQPSO algorithm to solve Riemannian manifold optimization problems.
After initialization, the proposed RQPSO algorithm carries out the particle update in the tangent space: because a Riemannian manifold is locally isomorphic to Euclidean space, the update operations can be performed there, and the retracement operator then retracts the updated particles back to the manifold. The details of the RQPSO algorithm are described as follows.
As for Equation (17), a point is randomly selected on the geodesic between the current personal best position $P_{i}(t)$ and the global best position $G(t)$, in analogy to the connecting line segment in Euclidean space:
$p_{i}(t) = R_{P_{i}(t)}\left( \mathrm{rand} \cdot \mathrm{Log}_{P_{i}(t)}(G(t)) \right),$  (20)
where $\mathrm{rand}$ is a random number uniformly distributed on $(0,1)$, and $\mathrm{Log}_{P_{i}(t)}(G(t))$ is the logarithmic mapping, expressed as the tangent vector from $P_{i}(t)$ to $G(t)$. $R_{X}(\lambda\xi)$ means that the tangent vector $\lambda\xi$ in the tangent space at $X$ is retracted to the manifold, and a random point $Z$ between $X$ and $Y$ on the manifold $\mathcal{M}$ falls on the geodesic between the two points (see Figure 3).
For Equation (18), according to the $\delta$ potential well model, the position of a particle near the local attractor $p$ can be written as follows:
$X = p \pm \frac{L}{2}\ln(1/u),$  (21)
where $L$ is the characteristic length and $u$ is a random number uniformly distributed on $(0,1)$, i.e., $u \sim U(0,1)$. In other words, this equation describes the probability of a particle in a quantum state appearing near the point $p$, so on the manifold it can be rewritten as
$L(t) = \mathcal{P}_{p_{i}(t)}\left( \mathrm{Log}_{X_{i}(t)}(C(t)) \right),$  (22)
$X_{i}(t+1) = R_{p_{i}(t)}\left( \pm\alpha \cdot L(t) \cdot \ln(1/u_{i}(t)) \right),\ u_{i}(t) \sim U(0,1),$  (23)
where $L(t)$ represents the characteristic length and $\mathrm{Log}_{X_{i}(t)}(C(t))$ represents the tangent vector from $X_{i}(t)$ to $C(t)$.
Through the vector transfer $\mathcal{P}_{p_{i}(t)}$, the tangent vector from $X_{i}(t)$ to $C(t)$ is translated into the tangent space at $p_{i}(t)$, and the resulting vector is then retracted to the manifold using Equation (23).
During the $t$-th iteration, the RQPSO algorithm is executed as follows:
1. Randomly select a point from the population $P = \{P_{1}, P_{2}, \ldots, P_{m}\}$ as the mean best position $C(t)$;
2. For each particle $i$, $i = 1, 2, \ldots, m$, randomly select a point along the geodesic between $P_{i}$ and $G(t)$: generate the tangent vector $\zeta = \mathrm{Log}_{P_{i}}(G(t))$ from $P_{i}$ to $G(t)$, then randomly select a point on it and retract it to $p_{i}(t) = R_{P_{i}}(\mathrm{rand}\cdot\zeta)$ on the manifold;
3. Generate a tangent vector $Q = \mathcal{P}_{p_{i}(t)}(\mathrm{Log}_{X_{i}(t)}(C(t)))$ in the same direction as that from $X_{i}(t)$ to $C(t)$ at the point produced in step 2; then retract $Q$ to the manifold at $p_{i}(t)$ according to Equation (23) and obtain the new population.
Algorithm 1 provides the pseudo-code of the proposed RQPSO algorithm.
Algorithm 1 Quantum-behaved Particle Swarm Optimization on Riemannian Manifolds (RQPSO)
Input: $\mathcal{M}$: matrix manifold; $f: \mathbb{R}^{n \times p} \to \mathbb{R}$: objective function; $R_{X}: T_{X}\mathcal{M} \to \mathcal{M}$: retracement operator; $\mathrm{Log}_{X}: \mathcal{M} \to T_{X}\mathcal{M}$, $y \mapsto \mathrm{Log}_{x}(y) = \xi$: logarithmic map; $\mathcal{T}_{y \leftarrow x}(\xi): T_{x}\mathcal{M} \to T_{y}\mathcal{M}$: vector transfer
Output: the optimal solution
 1: t ← 0
 2: X_i(t) ← random matrix in M, i ∈ {1, 2, ..., m}
 3: P_i = X_i(t), i = 1, ..., m            // current personal best positions
 4: F_i(t) = f(X_i(t)), i = 1, ..., m      // fitness values of the current population
 5: Fbest_i = F_i(t), i = 1, ..., m        // best fitness values of the particles
 6: Y(t) = min(F_i(t)), i = 1, ..., m      // optimal fitness (optimal objective value)
 7: G(t) = P_g(t), where g = argmin_{1≤i≤m} f(P_i(t))
 8: while t < maxiter and |Y(t) − Y(t−1)| > 10⁻⁶ do
 9:   C(t) ← one P_i selected at random
10:   for i ← 1 : m do
11:     ζ₁ = Log_{P_i}(G(t))
12:     p = R_{P_i}(rand · ζ₁)
13:     ζ₂ = Log_{X_i(t)}(C(t))
14:     ζ₃ = T_p(ζ₂)
15:     if rand() < 0.5 then
16:       X_i(t) = R_p(α · ln(1/rand(n, p)) · ζ₃)
17:     else
18:       X_i(t) = R_p(−α · ln(1/rand(n, p)) · ζ₃)
19:     end if
20:   end for
21:   for i ← 1 : m do
22:     F_i(t) = f(X_i(t))
23:     if F_i(t) < Fbest_i then
24:       Fbest_i = F_i(t)
25:       P_i = X_i(t)
26:     end if
27:   end for
28:   if min(F(t)) < Y(t) then
29:     Y(t) = min(F(t))
30:     u = argmin_i F_i(t); G(t) = X_u(t)
31:   end if
32:   t ← t + 1
33: end while
34: return Y(t), G(t)
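To make Algorithm 1 concrete, the following self-contained sketch (written by us for illustration; Algorithm 1 itself is manifold-agnostic) instantiates RQPSO on the unit sphere St(n, 1), where the logarithmic map has a closed form, normalization serves as the retracement, and orthogonal projection serves as the vector transfer. It minimizes the Rayleigh quotient x^T A x, whose global minimum on the sphere is the smallest eigenvalue of A:

```python
import numpy as np

def log_map(x, y):
    # Logarithmic map on the unit sphere: tangent vector at x pointing to y.
    d = y - np.dot(x, y) * x
    nd = np.linalg.norm(d)
    if nd < 1e-12:
        return np.zeros_like(x)
    theta = np.arccos(np.clip(np.dot(x, y), -1.0, 1.0))
    return theta * d / nd

def retract(x, v):
    # Normalization retracement R_x(v) = (x + v) / ||x + v||.
    z = x + v
    return z / np.linalg.norm(z)

def transport(p, v):
    # Projection-based vector transfer into the tangent space at p.
    return v - np.dot(p, v) * p

def rqpso_sphere(f, n, m=30, alpha=0.1, max_iter=300, tol=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((m, n))
    X /= np.linalg.norm(X, axis=1, keepdims=True)   # particles on S^{n-1}
    P = X.copy()                                    # personal best positions
    Fbest = np.array([f(x) for x in X])
    G = P[np.argmin(Fbest)].copy()                  # global best position
    y_prev = np.inf
    for _ in range(max_iter):
        C = P[rng.integers(m)]                      # random personal best as C(t)
        for i in range(m):
            # Local attractor on the geodesic from P_i to G (Eq. (20)).
            p = retract(P[i], rng.random() * log_map(P[i], G))
            # Transported guiding vector and quantum-behaved move (Eqs. (22)-(23)).
            zeta = transport(p, log_map(X[i], C))
            sign = 1.0 if rng.random() < 0.5 else -1.0
            u = 1.0 - rng.random()                  # u in (0, 1]
            X[i] = retract(p, sign * alpha * np.log(1.0 / u) * zeta)
            fi = f(X[i])
            if fi < Fbest[i]:
                Fbest[i], P[i] = fi, X[i].copy()
        g = np.argmin(Fbest)
        G = P[g].copy()
        if abs(Fbest[g] - y_prev) < tol:
            break
        y_prev = Fbest[g]
    return G, Fbest[g]

# Sanity check: the minimum of x^T A x on the sphere is the smallest eigenvalue of A.
rng = np.random.default_rng(1)
B = rng.standard_normal((20, 20))
A = (B + B.T) / 2
x_best, f_best = rqpso_sphere(lambda x: x @ A @ x, n=20)
print(f_best, np.linalg.eigvalsh(A)[0])
```

For the matrix manifolds used in the experiments, the corresponding Manopt operators would take the place of log_map, retract, and transport.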

3.2. Quantum Behavior of the RQPSO Algorithm

As we know, a Riemannian manifold is locally isomorphic to Euclidean space, and the dimensions of the problems considered here are finite. Therefore, it can be assumed that the manifold on which the problem is solved is locally isomorphic to an N-dimensional Euclidean space rather than to an infinite-dimensional space. Since a finite-dimensional Euclidean space is a special case of a Hilbert space, the neighborhood of a point on the Riemannian manifold has the properties of a Hilbert space. On the basis of this assumption, we can study the quantum behavior of particles on Riemannian manifolds.
In the quantum time-space framework, the quantum state of a particle is described by the wave function $\Psi(X, t)$ [40]. In three-dimensional space, the wave function satisfies
$|\Psi|^{2}\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z = Q\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z,$  (24)
where $Q\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z$ is the probability that the particle appears in the infinitesimal volume element about the point $(x, y, z)$.
We assume that each particle in QPSO is a spinless particle moving in an N-dimensional Hilbert space with a given energy, so its state is characterized by a wave function that depends only on its position. In this paper, we denote $P_{i}$ as $p$. With the point $p$ being the center of the potential well, the potential energy of the particle in the one-dimensional $\delta$ potential well is
$V(X) = -\gamma\,\delta(X - p) = -\gamma\,\delta(Y),$  (25)
where $Y = X - p$ and $\gamma$ is the intensity of the potential well.
According to Theorem 2 in reference [38], if a particle moves in a bound state in the one-dimensional $\delta$ potential well described by Equation (25), its position can be determined by the stochastic Equation (21). Since there is no addition or subtraction operation on the manifold, the addition and subtraction in Equation (21) are replaced by the retracement of the characteristic vector in the tangent space where the particle is located. Therefore, the update of particles on the Riemannian manifold exhibits quantum behavior, as in Euclidean space.
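To fill in the step from Equation (25) to Equation (21), here is the standard inverse-transform argument from the QPSO analysis [38], sketched in our notation with $L$ the characteristic length. The normalized ground-state wave function of the one-dimensional $\delta$ potential well and the resulting probability density are

$$\psi(Y) = \frac{1}{\sqrt{L}}\, e^{-|Y|/L}, \qquad Q(Y) = |\psi(Y)|^{2} = \frac{1}{L}\, e^{-2|Y|/L},$$

and by inverse-transform sampling with $u = e^{-2|Y|/L} \sim U(0,1)$,

$$|Y| = \frac{L}{2}\ln\frac{1}{u} \;\Longrightarrow\; X = p \pm \frac{L}{2}\ln\frac{1}{u},$$

which is exactly Equation (21).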

4. Experimental Studies

4.1. Parameters Analysis

As stated in Section 2.4 and Section 3, the RQPSO algorithm has two main parameters, i.e., φ and α , where φ affects the search ability of quantum particles on the manifold, and α is the influence factor of the characteristic length L in the vicinity of the local attractor p . The values of φ and α will affect the algorithm’s optimization performance on the manifold to a certain extent. Therefore, in this section, we experimentally investigate the effect of these two factors on the algorithmic performance of RQPSO.

4.1.1. The Effect of φ on the Performance of RQPSO

φ represents the distance from the current personal best position of a particle to the current global best position along the geodesic line in each iteration. As shown in Figure 4, if φ is too small, the particle deviates from the current global best position. The particle is mainly concentrated in the vicinity of its own personal best position, resulting in a decrease in the search ability of the particle. If φ is too large, the particle moves towards the current global best position, which may cause the particle to converge too fast and thus fall into the local best area. Therefore, it is important to select an appropriate value for φ .
For the three test problems, we chose the instances with $n = 50$, $p = 3$ to examine the parameter behavior. We tested $\varphi(t) \sim U(0, 0.1)$, $\varphi(t) \sim U(0, 0.5)$, and $\varphi(t) \sim U(0, 1)$ on the SDP problem; on the SDR and Robust PCA problems, we tested $\varphi(t) \sim U(0, 0.01)$, $\varphi(t) \sim U(0, 0.1)$, $\varphi(t) \sim U(0, 0.5)$, and $\varphi(t) \sim U(0, 1)$. The other parameter was fixed at $\alpha = 0.1$, and each test case was run 20 times to obtain the average value. The test results are shown in Figure 5a–c.
According to the experimental results, on the Robust PCA problem and the SDP problem, the algorithm converges faster when $\varphi(t) \sim U(0, 1)$, whereas on the SDR problem it achieves better results when $\varphi(t) \sim U(0, 0.1)$. The reason is that the PCA and SDP problems have better analytical properties than the SDR problem, whose objective function is non-differentiable; for them, a particle with a broader update range can easily find the optimal solution. The SDR objective is more complicated, so $\varphi(t)$ cannot be set large, otherwise the swarm would hardly find the global optimum. However, when the value of $\varphi(t)$ is too small, the search speed becomes too slow, as shown in Figure 5b.

4.1.2. The Effect of α on the Performance of RQPSO

Based on the retracement operator framework, the parameter $\alpha$ can be regarded as the step size in a gradient descent algorithm. As shown in Figure 6, if $\alpha$ is too large, the step taken by the algorithm is too large to satisfy the condition of local isomorphism; if $\alpha$ is too small, the moving distance is too small, and convergence becomes too slow.
Therefore, we selected values of $\alpha$ of different orders of magnitude for each test problem. For each setting, the algorithm was run 20 times, and the average of the results over these 20 runs was calculated; the value achieving the smallest objective value and the fastest decrease of the objective function was selected. The test cases were the same as those in Section 4.1.1, and we chose $\alpha = 0.01$, $\alpha = 0.1$, and $\alpha = 0.5$. The experimental results are shown in Figure 7a–c.

4.1.3. The Effect of Parameters on the Performance of RQPSO

As mentioned above, $\alpha$ and $\varphi$ are the two parameters of the RQPSO algorithm, and the performance of the proposed algorithm depends on their choice. In addition to studying their effect on different test problems in Sections 4.1.1 and 4.1.2, here we use the PCA problem as the measurement function to study the effect of different parameter combinations on the performance of the algorithm.
The selection range of both $\alpha$ and $\varphi$ is $\{0.01, 0.05, 0.1, 0.3, 0.5, 0.7, 0.9, 1\}$. According to the Taguchi method [41], there are two factors under investigation, each with eight levels. On this basis, an orthogonal table is generated, and the experiment is carried out with the factor combinations in the table. Each factor combination is run 20 times to obtain the average value, and the results are shown in Table 1.
It can be seen from Table 1 that the measurement result obtains the best average value when $\alpha = 0.1$ and when $\varphi = 0.5$. According to the test results in Section 4.1.1 and this section, the algorithm achieves good results with $\alpha = 0.1$ for the different problems, whereas $\varphi$ needs to be selected according to the particular Riemannian manifold. In this paper, $\varphi \sim U(0, 0.1)$ is used for the experiments on the Oblique manifold, and $\varphi \sim U(0, 1)$ for the tests on the Grassmann manifold.

4.2. Comparison Experiments

In this section, we chose the manifold-based Steepest Descent (SD), Particle Swarm Optimization (PSO), and Differential Evolution (DE) [15] as the comparison algorithms; all of them, as well as RQPSO, were tested on the three sets of test problems described above.
For RQPSO, we set $\alpha = 0.1$ and $\varphi(t) \sim U(0, 0.1)$ on the SDR problem, and $\alpha = 0.1$ and $\varphi(t) \sim U(0, 1)$ on both the SDP and Robust PCA problems.
SD is the generalization of the classical steepest descent method [10] to Riemannian manifolds; it is also a retracement-based manifold optimization algorithm. Since its search direction is the steepest descent direction, we used the Armijo condition [42] to compute the search step and then retracted the advanced point to the manifold, completing one descent step. SD was implemented in the Manopt toolbox [43].
PSO [19] was also implemented in the Manopt toolbox with the default parameters. For DE, the mutation and crossover operations were performed with a certain probability in the tangent space by using the exponential and logarithmic mappings, after which the points were retracted to the manifold. According to the literature [44], the mutation rate was set to $P_{F} = 0.4$ and the crossover rate to $1 - P_{F}$; in addition, the scaling factor was set to $F = 0.7$ and the crossover coefficient to $k = 0.85$.
The experimental results are shown in Table 2, Table 3 and Table 4 and in Figure 8, Figure 9 and Figure 10.
It can be seen from Table 2 and Figure 8 that RQPSO performs significantly better than the other comparison algorithms, thanks to its global search ability, which lets it find the global optimum efficiently. PSO is not a globally convergent algorithm [24] and easily falls into local optima. The DE algorithm adopts the evolutionary-algorithm framework, but it achieved the worst results because of its monotonous updating method and insufficient convergence ability. Compared with PSO, DE achieved poor results when the matrix dimension was high, for it is difficult to achieve good results through simple crossover and mutation. SD searches along the Riemannian gradient direction and achieved good results on the SDP problem; Table 2 shows that SD is slightly better than RQPSO on some instances, but its convergence is slower than that of RQPSO. In the RQPSO algorithm, the particles randomly distributed on the manifold tend to move closer to the global best position $G(t)$, which helps them move along geodesics toward an optimal solution.
From Table 3 and Figure 9, it can be seen that the RQPSO algorithm achieved significant results on all instances. Because RQPSO places no requirement on the differentiability of the objective function, it only needs the fitness values of the objective function for its updates; together with its global convergence capability, this allows RQPSO to achieve relatively good results on every instance of the SDR problem. Although the PSO algorithm also places no special requirements on the objective function, it cannot achieve good results on this problem because it lacks global convergence ability. As shown in Figure 9a, the PSO algorithm achieves better results than SD in the case of $n = 50$ and $p = 3$, but in the case of $n = 300$ and $p = 11$ in Figure 9b, the experimental results of PSO and DE are poor, reflecting their disadvantage on high-dimensional problems, especially in manifold optimization [45,46]. RQPSO performed stably in both tests, which shows that it has relatively stable performance on both low-dimensional and high-dimensional optimization problems. The DE algorithm achieved better results than PSO to some extent, especially since the objective function is non-differentiable. Due to the non-differentiability of the SDR problem, it is difficult for SD to find a correct descent direction, so its performance differs greatly from that on the SDP and Robust PCA problems. In Figure 9b, given a sufficient number of function evaluations, SD may break through the problem of a vanishing gradient when the gradient direction is bad, so that it can again find a direction in which the objective decreases. However, in practice it is impossible to judge whether the SD algorithm has fallen into a local minimum or an ill-conditioned gradient; such a situation is generally treated as the end of the search, so SD cannot achieve better results than DE in the simulation experiments, although it does outperform PSO in some cases. In addition, it can be seen that the optimization performance of the PSO algorithm on the manifold is inefficient, while RQPSO overcomes this essential defect of PSO and achieves good results in manifold optimization.
According to Table 4, RQPSO achieves good results on most instances, while the SD algorithm is slightly better than RQPSO on a small number of them. In general, the RQPSO algorithm has good optimization ability for PCA problems on manifolds. It can be seen from Table 4 that when $\theta = 0.1$, the SD algorithm achieved slightly better results than RQPSO, whereas when $\theta = 0.01$ and $\theta = 0.001$, RQPSO obtained better results than SD on most instances. This shows that noise affects the performance of the RQPSO algorithm to some extent: noise occurring with a certain probability perturbs the value of the objective function and thus affects an algorithm that depends only on objective function values. From Figure 10a,b, it can be seen that the RQPSO algorithm achieved the best result, with a convergence speed comparable to that of the SD algorithm, which shows that RQPSO is a fast and efficient Riemannian manifold optimization algorithm. The SD algorithm performed better on the PCA problem than on the first two problems, mainly owing to the differentiability of the PCA objective [33]. DE achieved better results than PSO on this problem due to its evolutionary strategy, while PSO achieved the worst results because of its insufficient optimization ability on high-dimensional matrix manifolds. The RQPSO algorithm overcomes this shortcoming to a great extent and is well qualified for optimization problems on Riemannian manifolds.

5. Conclusions

Given the shortcomings of current optimization algorithms in solving Riemannian manifold optimization problems, this paper extended the QPSO algorithm from Euclidean space to Riemannian manifolds, proposing a QPSO algorithm on Riemannian manifolds named RQPSO. In this algorithm, the particle positions are updated in the tangent space by using guiding vectors and vector transfer, and the retracement operator is employed to map the updated positions back to the manifold for fitness evaluation. This procedure does not depend on the form of the optimization problem: during optimization, the RQPSO algorithm only needs the fitness values of the objective function, independently of the form of the objective function. In addition, the historical and global best points of the particles distributed on the manifold guide the particles toward the best position, and the update of the particles does not depend on the local topology of the manifold, which helps prevent the particles from falling into locally optimal areas. The proposed algorithm was tested on three classical Riemannian manifold optimization problems: the SDP problem, the SDR problem, and the Robust PCA problem. The experimental results showed that RQPSO performs better than its competitors. In the future, we will try to modify the update strategies of RQPSO and extend it to more complex Riemannian manifold optimization problems.
The proposed algorithm uses the particle as the unit of the update process. The particles search and update along geodesics, so the search range is reduced compared with the original QPSO algorithm. In future research, a probability density distribution of the particles will be incorporated into the particle update process, and the generation and update of particles will be guided by the probability density function to enlarge the search range of the particles.

Author Contributions

Conceptualization, Y.H. and C.Z.; Data curation, Y.H.; Formal analysis, Y.H.; Funding acquisition, J.S.; Investigation, Y.H.; Methodology, C.Z.; Writing—original draft, Y.H. and C.Z.; Writing—review & editing, Q.Y. and J.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the National Key Research and Development Program of China (grant no: 2018YFC1603303, 2018YFC1604004) and the National Science Foundation of China (grant no: 61672263).

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to express their sincere thanks to the anonymous referees for their great efforts to improve this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Nocedal, J.; Wright, S.J. Numerical Optimization, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  2. Sun, W.; Yuan, Y.X. Optimization Theory and Methods: Nonlinear Programming: Volume 1; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  3. Sirković, P.; Kressner, D. Subspace acceleration for large-scale parameter-dependent Hermitian eigenproblems. SIAM J. Matrix Anal. Appl. 2016, 37, 695–718. [Google Scholar] [CrossRef] [Green Version]
  4. Agarwal, N.; Boumal, N.; Bullins, B.; Cartis, C. Adaptive regularization with cubics on manifolds with a first-order analysis. arXiv 2018, arXiv:1806.00065. [Google Scholar]
  5. Sarkis, M.; Diepold, K. Camera-pose estimation via projective Newton optimization on the manifold. IEEE Trans. Image Process. 2012, 21, 1729–1741. [Google Scholar] [CrossRef] [PubMed]
  6. Wang, Z.; Lai, M.J.; Lu, Z.S.; Fan, W.; Davulcu, H.; Ye, J. Orthogonal rank-one matrix pursuit for low rank matrix completion. SIAM J. Sci. Comput. 2014, 37, 488–514. [Google Scholar] [CrossRef] [Green Version]
  7. Liu, T.; Shi, Z.; Liu, Y. Visualization of the Image Geometric Transformation Group Based on Riemannian Manifold. IEEE Access 2019, 7, 105531–105545. [Google Scholar] [CrossRef]
  8. Lee, K.C.; Ho, J.; Kriegman, D.J. Acquiring Linear Subspaces for Face Recognition under Variable Lighting. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 684–698. [Google Scholar]
  9. Tuzel, O.; Porikli, F.; Meer, P. Region Covariance: A Fast Descriptor for Detection and Classification; Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  10. Absil, P.A.; Mahony, R.; Sepulchre, R. Optimization Algorithms on Matrix Manifolds; Princeton University Press: Princeton, NJ, USA, 2008. [Google Scholar]
  11. Zhang, H.Y.; He, W.; Zhang, L.P.; Shen, H.; Yuan, Q. Hyperspectral image restoration using low-rank matrix recovery. IEEE Trans. Geosci. Remote Sens. 2014, 52, 4729–4743. [Google Scholar] [CrossRef]
  12. Smith, S.T. Optimization Techniques on Riemannian Manifolds. Mathematics 2014, 158, 328–342. [Google Scholar]
  13. Harandi, M.; Hartley, R.; Shen, C.; Lovell, B.; Sanderson, C. Extrinsic methods for coding and dictionary learning on Grassmann manifolds. Int. J. Comput. Vis. 2015, 114, 113–136. [Google Scholar] [CrossRef] [Green Version]
  14. Wang, R.; Shan, S.; Chen, X.; Gao, W. Manifold-manifold distance with application to face recognition based on image set. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2008), Anchorage, AK, USA, 23–28 June 2008; Volume 9, pp. 2940–2947. [Google Scholar]
  15. Li, Z.Z.; Zhao, D.L.; Lin, Z.C.; Chang, E.Y. A new retraction for accelerating the Riemannian three-factor low-rank matrix completion algorithm. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 4530–4538. [Google Scholar]
  16. Gabay, D. Minimizing a differentiable function over a differential manifold. J. Optim. Theory Appl. 1982, 37, 177–219. [Google Scholar] [CrossRef] [Green Version]
  17. Zhao, Z.; Bai, Z.J.; Jin, X.Q. A Riemannian Newton Algorithm for Nonlinear Eigenvalue Problems. Siam J. Matrix Anal. Appl. 2015, 36, 752–774. [Google Scholar] [CrossRef]
  18. Shang, G.; Jing, Y. Swarm Intelligence Algorithm and Its Application; China Water Resources and Hydropower Press: Beijing, China, 2006. [Google Scholar]
  19. Borckmans, P.B.; Ishteva, M.; Absil, P.A. A Modified Particle Swarm Optimization Algorithm for the Best Low Multilinear Rank Approximation of Higher-Order Tensors; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  20. Colutto, S.; Fruhauf, F.; Fuchs, M.; Scherzer, O. The CMA-ES on Riemannian Manifolds to Reconstruct Shapes in 3-D Voxel Images. IEEE Trans. Evol. Comput. 2010, 14, 227–245. [Google Scholar] [CrossRef]
  21. Arnold, D.V. On the use of evolution strategies for optimization on spherical manifolds. In Proceedings of the Parallel Problem Solving from Nature—PPSN XIII, Ljubljana, Slovenia, 13–17 September 2014; pp. 882–891. [Google Scholar]
  22. Arnold, D.V.; Lu, A. An evolutionary algorithm for depth image based camera pose estimation in indoor environments. In IEEE Congress on Evolutionary Computation (CEC); IEEE: New York, NY, USA, 2016. [Google Scholar]
  23. Sun, J.; Feng, B.; Xu, W. Particle swarm optimization with particles having quantum behavior. In Proceedings of the 2004 Congress on Evolutionary Computation (IEEE Cat. No.04TH8753), Beijing, China, 19–23 June 2004; pp. 325–331. [Google Scholar] [CrossRef]
  24. Clerc, M.; Kennedy, J. The particle swarm-explosion, stability and convergence in a multidimensional complex space. IEEE Trans. Evol. Comput. 2002, 6, 58–73. [Google Scholar] [CrossRef] [Green Version]
  25. Sun, J.; Liu, J.; Xu, W. Using quantum-behaved particle swarm optimization algorithm to solve non-linear programming problems. Int. J. Comput. Math. 2007, 84, 261–272. [Google Scholar] [CrossRef]
  26. Omkara, S.; Khandelwala, R.; Ananthb, T.; Naika, G.N.; Gopalakrishnana, S. Quantum behaved particle swarm optimization (QPSO) for multi-objective design optimization of composite structures. Expert Syst. Appl. 2009, 36, 11312–11322. [Google Scholar] [CrossRef]
  27. Li, S.-Y.; Wang, R.-G.; Hu, W.-W.; Sun, J. A new QPSO based BP neural network for face detection. In Fuzzy Information and Engineering; Cao, B.-Y., Ed.; Springer: Berlin/Heidelberg, Germany, 2007; pp. 355–363. [Google Scholar]
  28. Mikki, S.; Kishk, A. Quantum particle swarm optimization for electromagnetics. IEEE Trans. Antennas Propag. 2006, 54, 2764–2775. [Google Scholar] [CrossRef] [Green Version]
  29. Lei, X.; Fu, A. Two-dimensional maximum entropy image segmentation method based on quantum-behaved particle swarm optimization algorithm. In Proceedings of the Fourth International Conference on Natural Computation, Mieres, Spain, 15–16 December 2015; pp. 692–696. [Google Scholar]
  30. Sun, J.; Fang, W.; Wang, D.; Xu, W. Solving the economic dispatch problem with a modified quantum-behaved particle swarm optimization method. Energy Convers. Manag. 2009, 50, 2967–2975. [Google Scholar] [CrossRef]
  31. Boumal, N.; Voroninski, V.; Bandeira, A. The non-convex Burer-Monteiro approach works on smooth semidefinite programs. Adv. Neural Inf. Process. Syst. 2016, 29, 2765–2773. [Google Scholar]
  32. Absil, P.A.; Hosseini, S. A Collection of Nonsmooth RIEMANNIAN Optimization Problems; ICTEAM Institute: Ottignies-Louvain-la-Neuve, Belgium, 2017. [Google Scholar]
  33. Ding, C.; Zhou, D.; He, X.; Zha, H. R1-PCA: Rotational Invariant L1-Norm Principal Component Analysis for Robust Subspace Factorization; ACM Press: New York, NY, USA, 2006; pp. 281–288. [Google Scholar]
  34. Vandenberghe, L.; Boyd, S. Semidefinite programming. SIAM Rev. 1996, 38, 49–95. [Google Scholar] [CrossRef] [Green Version]
  35. Broomhead, D.S.; Kirby, M.J. Dimensionality Reduction Using Secant-Based Projection Methods: The Induced Dynamics in Projected Systems. Nonlinear Dyn. 2005, 41, 47–67. [Google Scholar] [CrossRef]
  36. He, X.; Zhou, Y.; Chen, Z.; Jiang, S. An evolutionary approach to black-box optimization on matrix manifolds. Appl. Soft Comput. 2020, 97, 106773. [Google Scholar] [CrossRef]
  37. Sun, J.; Xu, W.; Feng, B. A global search strategy of quantum-behaved particle swarm optimization. IEEE Conf. Cybern. Intell. Syst. 2004, 1, 111–116. [Google Scholar] [CrossRef]
  38. Sun, J.; Fang, W.; Wu, X.; Palade, V.; Xu, W. Quantum-Behaved Particle Swarm Optimization: Analysis of Individual Particle Behavior and Parameter Selection. Evol. Comput. 2012, 20, 349–393. [Google Scholar]
  39. Sun, J.; Xu, W.; Feng, B. Adaptive parameter control for quantum-behaved particle swarm optimization on individual level. In Proceedings of the 2005 International Conference on Systems, Man and Cybernetics, Hefei, China, 12 October 2005; Volume 4, pp. 3049–3054. [Google Scholar]
  40. Cohen-Tannoudji, C.; Diu, B.; Laloe, F. Quantum Mechanics; John Wiley: New York, NY, USA, 1997; Volume 1. [Google Scholar]
  41. Rosa, J.L.; Robin, A.; Silva, M.B.; Baldan, C.A.; Peres, M.P. Electrodeposition of copper on titanium wires: Taguchi experimental design approach. J. Mater. Process. Technol. 2009, 209, 1181–1188. [Google Scholar] [CrossRef]
  42. Polak, E. Optimization: Algorithms and Consistent Approximations; Springer: Berlin/Heidelberg, Germany, 1997. [Google Scholar]
  43. Boumal, N.; Mishra, B.; Absil, P.A.; Sepulchre, R. Manopt, a matlab toolbox for optimization on manifolds. J. Mach. Learn. Res. 2014, 15, 1455–1459. [Google Scholar]
  44. Das, S.; Suganthan, P.N. Differential Evolution: A Survey of the State-of-the-Art. IEEE Trans. Evol. Comput. 2011, 15, 4–31. [Google Scholar] [CrossRef]
  45. Van den Bergh, F.; Engelbrecht, A.P. A Cooperative Approach to Particle Swarm Optimization. Trans. Evol. Comput. 2004, 8, 225–239. [Google Scholar] [CrossRef]
  46. Maučec, M.S.; Brest, J. A review of the recent use of Differential Evolution for Large-Scale Global Optimization: An analysis of selected algorithms on the CEC 2013 LSGO benchmark suite. Swarm Evol. Comput. 2019, 50, 100428. [Google Scholar] [CrossRef]
Figure 1. The schematic diagram of vector transfer on the manifold.
Figure 2. The schematic diagram of the retracement process on the manifold.
Figure 3. Random points between two points on a manifold.
Figure 4. The effect of φ on the performance of the algorithm.
Figure 5. The results of the three test problems. (a) SDP test, n = 50, p = 3; (b) SDR test, n = 50, p = 3; (c) PCA test, n = 50, p = 3.
Figure 6. The effect of α on the performance of the algorithm.
Figure 7. The experimental results of α on the three test problems. (a) SDP test, n = 50, p = 3; (b) SDR test, n = 50, p = 3; (c) PCA test, n = 50, p = 3.
Figure 8. Convergence of the algorithms on the SDP problem. (a) n = 25, p = 3; (b) n = 500, p = 3.
Figure 9. Convergence of the algorithms on the SDR problem. (a) n = 50, p = 3; (b) n = 300, p = 11.
Figure 10. Convergence of the algorithms on the PCA problem. (a) θ = 0.1, n = 50, p = 3; (b) θ = 0.001, n = 200, p = 11.
Table 1. The effect of parameters on the performance of RQPSO on the PCA problem.

| φ \ α | 0.01 | 0.05 | 0.1 | 0.3 | 0.5 | 0.7 | 0.9 | 1 | Average |
|---|---|---|---|---|---|---|---|---|---|
| 0.01 | 8.19 × 10² | 7.94 × 10² | 7.73 × 10² | 8.89 × 10² | 8.90 × 10² | 8.14 × 10² | 7.59 × 10² | 8.60 × 10² | 8.25 × 10² |
| 0.05 | 7.90 × 10² | 8.16 × 10² | 8.54 × 10² | 8.50 × 10² | 8.74 × 10² | 9.46 × 10² | 8.38 × 10² | 8.38 × 10² | 8.51 × 10² |
| 0.1 | 7.86 × 10² | 8.87 × 10² | 8.59 × 10² | 8.77 × 10² | 8.41 × 10² | 8.06 × 10² | 8.33 × 10² | 7.55 × 10² | 8.30 × 10² |
| 0.3 | 8.94 × 10² | 8.32 × 10² | 8.12 × 10² | 8.04 × 10² | 8.16 × 10² | 7.89 × 10² | 8.48 × 10² | 8.85 × 10² | 8.35 × 10² |
| 0.5 | 8.32 × 10² | 9.04 × 10² | 8.16 × 10² | 7.42 × 10² | 7.99 × 10² | 7.98 × 10² | 8.37 × 10² | 7.67 × 10² | 8.12 × 10² |
| 0.7 | 9.15 × 10² | 8.22 × 10² | 8.46 × 10² | 8.38 × 10² | 8.56 × 10² | 7.47 × 10² | 8.74 × 10² | 8.82 × 10² | 8.48 × 10² |
| 0.9 | 9.33 × 10² | 9.30 × 10² | 8.65 × 10² | 8.22 × 10² | 9.00 × 10² | 8.46 × 10² | 8.96 × 10² | 8.35 × 10² | 8.79 × 10² |
| 1 | 9.78 × 10² | 9.26 × 10² | 8.13 × 10² | 8.44 × 10² | 8.30 × 10² | 9.07 × 10² | 9.75 × 10² | 8.98 × 10² | 8.97 × 10² |
| Average | 8.68 × 10² | 8.64 × 10² | 8.30 × 10² | 8.33 × 10² | 8.51 × 10² | 8.32 × 10² | 8.58 × 10² | 8.40 × 10² | |
Table 2. Average and quartile results of the optimal values of the four algorithms over 20 runs on the SDP problem.

| n | p | RQPSO | DE | PSO | SD |
|---|---|---|---|---|---|
| 50 | 3 | −6.1752 × 10⁻² (4.9 × 10⁻³) | −2.6914 × 10⁻² (3.2 × 10⁻³) | −2.6504 × 10⁻² (3.0 × 10⁻³) | −6.1607 × 10⁻² (4.7 × 10⁻³) |
| 50 | 5 | −1.0293 × 10⁻¹ (5.9 × 10⁻³) | −4.0609 × 10⁻² (5.1 × 10⁻³) | −3.5358 × 10⁻² (6.4 × 10⁻³) | −1.0289 × 10⁻¹ (5.8 × 10⁻³) |
| 50 | 7 | −1.4504 × 10⁻¹ (9.9 × 10⁻³) | −5.0532 × 10⁻² (8.0 × 10⁻³) | −4.3870 × 10⁻² (5.4 × 10⁻³) | −1.4231 × 10⁻¹ (1.2 × 10⁻²) |
| 50 | 9 | −1.9149 × 10⁻¹ (1.1 × 10⁻²) | −5.8637 × 10⁻² (1.4 × 10⁻²) | −5.2653 × 10⁻² (1.0 × 10⁻²) | −1.9150 × 10⁻¹ (1.1 × 10⁻²) |
| 100 | 3 | −4.5067 × 10⁻² (1.3 × 10⁻³) | −1.5092 × 10⁻² (2.3 × 10⁻³) | −1.2780 × 10⁻² (1.4 × 10⁻³) | −4.4553 × 10⁻² (1.0 × 10⁻³) |
| 100 | 5 | −7.2857 × 10⁻² (3.0 × 10⁻³) | −1.9977 × 10⁻² (3.8 × 10⁻³) | −1.8332 × 10⁻² (2.0 × 10⁻³) | −7.2205 × 10⁻² (3.2 × 10⁻³) |
| 100 | 7 | −1.0462 × 10⁻¹ (3.9 × 10⁻³) | −2.9372 × 10⁻² (3.2 × 10⁻³) | −2.5277 × 10⁻² (8.0 × 10⁻⁴) | −1.0509 × 10⁻¹ (3.9 × 10⁻³) |
| 100 | 9 | −1.3619 × 10⁻¹ (6.0 × 10⁻⁴) | −3.1572 × 10⁻² (4.5 × 10⁻³) | −2.8639 × 10⁻² (9.0 × 10⁻⁴) | −1.3589 × 10⁻¹ (1.1 × 10⁻³) |
| 150 | 3 | −3.6335 × 10⁻² (1.3 × 10⁻³) | −8.6850 × 10⁻³ (1.0 × 10⁻³) | −8.8120 × 10⁻³ (9.0 × 10⁻⁴) | −3.5789 × 10⁻² (1.3 × 10⁻³) |
| 150 | 5 | −6.0190 × 10⁻² (2.6 × 10⁻³) | −1.2079 × 10⁻² (1.6 × 10⁻³) | −1.2042 × 10⁻² (1.4 × 10⁻³) | −5.9307 × 10⁻² (2.8 × 10⁻³) |
| 150 | 7 | −8.6039 × 10⁻² (7.0 × 10⁻⁴) | −1.6084 × 10⁻² (5.3 × 10⁻³) | −1.6833 × 10⁻² (2.2 × 10⁻³) | −8.4907 × 10⁻² (2.0 × 10⁻³) |
| 150 | 9 | −1.1066 × 10⁻¹ (2.8 × 10⁻³) | −2.1269 × 10⁻² (3.2 × 10⁻³) | −1.9045 × 10⁻² (3.0 × 10⁻⁴) | −1.0970 × 10⁻¹ (2.6 × 10⁻³) |
| 200 | 3 | −3.1323 × 10⁻² (1.0 × 10⁻³) | −6.9490 × 10⁻³ (1.7 × 10⁻³) | −7.1720 × 10⁻³ (3.0 × 10⁻⁴) | −3.0667 × 10⁻² (9.0 × 10⁻⁴) |
| 200 | 5 | −5.3004 × 10⁻² (2.0 × 10⁻³) | −9.8660 × 10⁻³ (2.2 × 10⁻³) | −9.9820 × 10⁻³ (1.4 × 10⁻³) | −5.1632 × 10⁻² (3.4 × 10⁻³) |
| 200 | 7 | −7.2677 × 10⁻² (1.9 × 10⁻³) | −1.1306 × 10⁻² (2.7 × 10⁻³) | −1.2830 × 10⁻² (1.1 × 10⁻³) | −7.2031 × 10⁻² (1.9 × 10⁻³) |
| 200 | 9 | −9.3857 × 10⁻² (3.2 × 10⁻³) | −1.2133 × 10⁻² (3.7 × 10⁻³) | −1.5398 × 10⁻² (1.4 × 10⁻³) | −9.2505 × 10⁻² (4.1 × 10⁻³) |
| 250 | 3 | −2.8527 × 10⁻² (5.0 × 10⁻⁴) | −5.6330 × 10⁻³ (1.6 × 10⁻³) | −5.8580 × 10⁻³ (9.0 × 10⁻⁴) | −2.7597 × 10⁻² (1.1 × 10⁻³) |
| 250 | 5 | −4.7342 × 10⁻² (1.0 × 10⁻³) | −7.0910 × 10⁻³ (1.5 × 10⁻³) | −8.5570 × 10⁻³ (1.1 × 10⁻³) | −4.6141 × 10⁻² (1.2 × 10⁻³) |
| 250 | 7 | −6.5982 × 10⁻² (4.0 × 10⁻³) | −1.0577 × 10⁻² (9.0 × 10⁻⁴) | −1.0990 × 10⁻² (2.0 × 10⁻⁴) | −6.5406 × 10⁻² (4.3 × 10⁻³) |
| 250 | 9 | −8.4324 × 10⁻² (4.0 × 10⁻⁴) | −1.1266 × 10⁻² (3.4 × 10⁻³) | −1.3213 × 10⁻² (1.6 × 10⁻³) | −8.5312 × 10⁻² (4.0 × 10⁻⁴) |
Table 3. Average and quartile results of the optimal values of the four algorithms over 20 runs on the SDR problem.

| n | p | RQPSO | DE | PSO | SD |
|---|---|---|---|---|---|
| 50 | 3 | −5.1412 × 10⁻¹ (3.6 × 10⁻²) | −3.6914 × 10⁻¹ (3.9 × 10⁻²) | −2.2311 × 10⁻¹ (3.7 × 10⁻²) | −1.5535 × 10⁻¹ (4.0 × 10⁻²) |
| 50 | 5 | −7.2470 × 10⁻¹ (4.3 × 10⁻²) | −5.4939 × 10⁻¹ (4.4 × 10⁻²) | −3.4967 × 10⁻¹ (3.9 × 10⁻²) | −2.9375 × 10⁻¹ (1.3 × 10⁻¹) |
| 50 | 7 | −8.4877 × 10⁻¹ (2.4 × 10⁻²) | −6.4097 × 10⁻¹ (4.4 × 10⁻²) | −4.2784 × 10⁻¹ (3.7 × 10⁻²) | −3.7782 × 10⁻¹ (1.4 × 10⁻¹) |
| 50 | 9 | −9.5784 × 10⁻¹ (2.5 × 10⁻²) | −6.4802 × 10⁻¹ (2.5 × 10⁻²) | −4.7087 × 10⁻¹ (4.2 × 10⁻²) | −4.5747 × 10⁻¹ (2.7 × 10⁻²) |
| 50 | 11 | −9.7983 × 10⁻¹ (4.8 × 10⁻³) | −7.6750 × 10⁻¹ (3.8 × 10⁻²) | −5.2336 × 10⁻¹ (2.5 × 10⁻²) | −5.4464 × 10⁻¹ (5.9 × 10⁻²) |
| 100 | 3 | −4.5088 × 10⁻¹ (6.3 × 10⁻²) | −2.7348 × 10⁻¹ (2.8 × 10⁻²) | −1.6399 × 10⁻¹ (2.9 × 10⁻²) | −1.2822 × 10⁻¹ (6.4 × 10⁻²) |
| 100 | 5 | −7.0184 × 10⁻¹ (5.8 × 10⁻²) | −3.9794 × 10⁻¹ (3.5 × 10⁻²) | −2.3570 × 10⁻¹ (1.6 × 10⁻²) | −2.1470 × 10⁻¹ (5.6 × 10⁻²) |
| 100 | 7 | −8.4863 × 10⁻¹ (3.8 × 10⁻²) | −4.8878 × 10⁻¹ (5.2 × 10⁻²) | −2.8928 × 10⁻¹ (4.1 × 10⁻²) | −3.2030 × 10⁻¹ (1.2 × 10⁻¹) |
| 100 | 9 | −9.3729 × 10⁻¹ (9.1 × 10⁻³) | −5.6107 × 10⁻¹ (5.5 × 10⁻²) | −3.3634 × 10⁻¹ (2.7 × 10⁻²) | −4.0881 × 10⁻¹ (6.0 × 10⁻²) |
| 100 | 11 | −9.6894 × 10⁻¹ (2.1 × 10⁻²) | −5.7446 × 10⁻¹ (2.5 × 10⁻²) | −3.6837 × 10⁻¹ (3.9 × 10⁻²) | −3.9402 × 10⁻¹ (3.8 × 10⁻²) |
| 150 | 3 | −4.5232 × 10⁻¹ (4.0 × 10⁻²) | −2.1901 × 10⁻¹ (1.5 × 10⁻²) | −1.3250 × 10⁻¹ (3.0 × 10⁻²) | −1.0771 × 10⁻¹ (4.3 × 10⁻²) |
| 150 | 5 | −7.1913 × 10⁻¹ (3.6 × 10⁻²) | −3.5151 × 10⁻¹ (2.1 × 10⁻²) | −1.8971 × 10⁻¹ (2.0 × 10⁻²) | −2.2270 × 10⁻¹ (4.9 × 10⁻²) |
| 150 | 7 | −8.4141 × 10⁻¹ (5.1 × 10⁻²) | −4.4002 × 10⁻¹ (1.7 × 10⁻²) | −2.4282 × 10⁻¹ (3.1 × 10⁻²) | −2.6939 × 10⁻¹ (1.5 × 10⁻¹) |
| 150 | 9 | −9.3148 × 10⁻¹ (2.7 × 10⁻²) | −4.6331 × 10⁻¹ (2.5 × 10⁻²) | −2.6749 × 10⁻¹ (1.9 × 10⁻²) | −3.507 × 10⁻¹ (6.2 × 10⁻²) |
| 150 | 11 | −9.5823 × 10⁻¹ (1.9 × 10⁻²) | −5.4493 × 10⁻¹ (2.9 × 10⁻²) | −3.0970 × 10⁻¹ (2.1 × 10⁻²) | −3.9021 × 10⁻¹ (1.0 × 10⁻¹) |
| 200 | 3 | −4.5178 × 10⁻¹ (4.9 × 10⁻²) | −1.8594 × 10⁻¹ (1.7 × 10⁻²) | −1.1556 × 10⁻¹ (1.9 × 10⁻²) | −9.438 × 10⁻² (3.1 × 10⁻²) |
| 200 | 5 | −7.0589 × 10⁻¹ (4.6 × 10⁻²) | −3.0637 × 10⁻¹ (1.3 × 10⁻²) | −1.6907 × 10⁻¹ (1.2 × 10⁻²) | −1.9180 × 10⁻¹ (6.6 × 10⁻²) |
| 200 | 7 | −8.4805 × 10⁻¹ (4.5 × 10⁻²) | −3.8202 × 10⁻¹ (2.7 × 10⁻²) | −2.1149 × 10⁻¹ (1.3 × 10⁻²) | −2.5795 × 10⁻¹ (1.1 × 10⁻¹) |
| 200 | 9 | −9.1661 × 10⁻¹ (4.3 × 10⁻²) | −4.2616 × 10⁻¹ (2.2 × 10⁻²) | −2.3520 × 10⁻¹ (2.9 × 10⁻²) | −3.1237 × 10⁻¹ (8.2 × 10⁻²) |
| 200 | 11 | −9.5415 × 10⁻¹ (2.1 × 10⁻²) | −4.3796 × 10⁻¹ (2.0 × 10⁻²) | −2.6385 × 10⁻¹ (2.2 × 10⁻²) | −3.1408 × 10⁻¹ (7.2 × 10⁻²) |
| 250 | 3 | −4.4076 × 10⁻¹ (5.2 × 10⁻²) | −1.6892 × 10⁻¹ (1.2 × 10⁻²) | −1.0321 × 10⁻¹ (2.2 × 10⁻²) | −1.1178 × 10⁻¹ (5.2 × 10⁻²) |
| 250 | 5 | −7.1519 × 10⁻¹ (6.4 × 10⁻²) | −2.6108 × 10⁻¹ (2.1 × 10⁻²) | −1.4771 × 10⁻¹ (1.7 × 10⁻²) | −1.6973 × 10⁻¹ (8.9 × 10⁻²) |
| 250 | 7 | −8.2571 × 10⁻¹ (4.9 × 10⁻²) | −3.4541 × 10⁻¹ (3.7 × 10⁻²) | −1.8112 × 10⁻¹ (1.2 × 10⁻²) | −2.3421 × 10⁻¹ (9.1 × 10⁻²) |
| 250 | 9 | −9.1166 × 10⁻¹ (5.3 × 10⁻²) | −3.7130 × 10⁻¹ (2.6 × 10⁻²) | −2.1360 × 10⁻¹ (2.2 × 10⁻²) | −2.8784 × 10⁻¹ (7.0 × 10⁻²) |
| 250 | 11 | −9.4470 × 10⁻¹ (3.3 × 10⁻²) | −3.9195 × 10⁻¹ (1.6 × 10⁻²) | −2.3519 × 10⁻¹ (2.0 × 10⁻²) | −3.1495 × 10⁻¹ (5.8 × 10⁻²) |
| 300 | 3 | −4.3421 × 10⁻¹ (4.8 × 10⁻²) | −1.5870 × 10⁻¹ (1.1 × 10⁻²) | −8.8721 × 10⁻² (1.9 × 10⁻²) | −1.0580 × 10⁻¹ (6.7 × 10⁻²) |
| 300 | 5 | −7.1296 × 10⁻¹ (3.0 × 10⁻²) | −2.5536 × 10⁻¹ (1.0 × 10⁻²) | −1.3469 × 10⁻¹ (2.6 × 10⁻²) | −2.0357 × 10⁻¹ (1.1 × 10⁻¹) |
| 300 | 7 | −8.4278 × 10⁻¹ (4.1 × 10⁻²) | −3.1615 × 10⁻¹ (1.5 × 10⁻²) | −1.7673 × 10⁻¹ (2.4 × 10⁻²) | −2.4069 × 10⁻¹ (1.2 × 10⁻¹) |
| 300 | 9 | −9.1008 × 10⁻¹ (2.5 × 10⁻²) | −3.6700 × 10⁻¹ (2.1 × 10⁻²) | −1.8787 × 10⁻¹ (2.5 × 10⁻²) | −2.2254 × 10⁻¹ (6.7 × 10⁻²) |
| 300 | 11 | −9.2532 × 10⁻¹ (1.6 × 10⁻²) | −3.6645 × 10⁻¹ (1.4 × 10⁻²) | −2.1657 × 10⁻¹ (1.6 × 10⁻²) | −2.9021 × 10⁻¹ (7.4 × 10⁻²) |
Table 4. Average and quartile results (quartile in parentheses) of the optimal values obtained by the four algorithms over 20 runs on the PCA problem.

θ | n | p | RQPSO | DE | PSO | SD
0.1 | 50 | 3 | 7.5654 × 10² (5.4 × 10¹) | 8.7453 × 10² (9.1 × 10¹) | 9.6618 × 10² (1.0 × 10²) | 7.6972 × 10² (5.1 × 10¹)
0.1 | 50 | 5 | 5.8195 × 10² (5.3 × 10¹) | 7.1959 × 10² (6.7 × 10¹) | 9.2457 × 10² (5.4 × 10¹) | 5.9085 × 10² (3.4 × 10¹)
0.1 | 50 | 7 | 5.2952 × 10² (6.0 × 10¹) | 6.8135 × 10² (8.3 × 10¹) | 9.5001 × 10² (9.1 × 10¹) | 5.0008 × 10² (4.6 × 10¹)
0.1 | 50 | 9 | 4.1113 × 10² (2.1 × 10¹) | 5.4901 × 10² (6.1 × 10¹) | 8.2810 × 10² (5.6 × 10¹) | 3.8311 × 10² (3.6 × 10¹)
0.1 | 50 | 11 | 3.3373 × 10² (3.1 × 10¹) | 4.9010 × 10² (7.4 × 10¹) | 8.0673 × 10² (7.6 × 10¹) | 3.2929 × 10² (8.3 × 10¹)
0.1 | 100 | 3 | 1.3832 × 10³ (1.3 × 10²) | 1.56508 × 10³ (1.3 × 10²) | 1.6717 × 10³ (1.2 × 10²) | 1.3888 × 10² (9.8 × 10¹)
0.1 | 100 | 5 | 1.1628 × 10³ (8.3 × 10¹) | 1.4193 × 10³ (1.2 × 10²) | 1.7113 × 10³ (1.4 × 10²) | 1.1765 × 10³ (5.6 × 10¹)
0.1 | 100 | 7 | 9.1449 × 10² (1.5 × 10²) | 1.2066 × 10³ (1.6 × 10²) | 1.5274 × 10³ (1.9 × 10²) | 9.1183 × 10² (1.4 × 10²)
0.1 | 100 | 9 | 8.3650 × 10² (1.8 × 10²) | 1.1813 × 10³ (1.2 × 10²) | 1.5791 × 10³ (1.4 × 10²) | 8.3439 × 10² (1.1 × 10²)
0.1 | 100 | 11 | 7.2025 × 10² (6.1 × 10¹) | 1.0461 × 10³ (6.2 × 10¹) | 1.5885 × 10³ (1.3 × 10²) | 6.9774 × 10² (3.5 × 10¹)
0.1 | 150 | 3 | 1.9575 × 10³ (9.6 × 10¹) | 2.2522 × 10³ (8.3 × 10¹) | 2.3563 × 10³ (6.9 × 10¹) | 1.9948 × 10³ (4.8 × 10¹)
0.1 | 150 | 5 | 1.6554 × 10³ (2.1 × 10²) | 2.0202 × 10³ (1.8 × 10²) | 2.2774 × 10³ (1.7 × 10²) | 1.6435 × 10² (2.3 × 10²)
0.1 | 150 | 7 | 1.4248 × 10³ (9.2 × 10¹) | 1.8835 × 10³ (2.0 × 10²) | 2.2789 × 10³ (2.3 × 10²) | 1.4109 × 10³ (1.4 × 10²)
0.1 | 150 | 9 | 1.1981 × 10³ (1.1 × 10²) | 1.7073 × 10³ (1.3 × 10²) | 2.1912 × 10³ (2.0 × 10²) | 1.2012 × 10³ (8.1 × 10¹)
0.1 | 150 | 11 | 1.0890 × 10² (1.4 × 10²) | 1.6137 × 10³ (1.5 × 10²) | 2.1808 × 10³ (1.9 × 10²) | 1.1446 × 10³ (1.5 × 10²)
0.1 | 200 | 3 | 2.4460 × 10³ (3.2 × 10²) | 2.7945 × 10³ (4.1 × 10²) | 2.9424 × 10³ (3.7 × 10²) | 2.4734 × 10³ (3.3 × 10²)
0.1 | 200 | 5 | 2.1402 × 10³ (6.3 × 10¹) | 2.6536 × 10³ (1.8 × 10²) | 2.9612 × 10³ (1.4 × 10²) | 2.1360 × 10³ (6.9 × 10²)
0.1 | 200 | 7 | 1.8318 × 10³ (9.5 × 10¹) | 2.4882 × 10³ (1.2 × 10²) | 2.8832 × 10³ (2.5 × 10²) | 1.8842 × 10³ (1.3 × 10²)
0.1 | 200 | 9 | 1.6339 × 10³ (1.8 × 10²) | 2.3872 × 10³ (1.8 × 10²) | 2.9148 × 10³ (1.8 × 10²) | 1.6754 × 10³ (9.6 × 10¹)
0.1 | 200 | 11 | 1.4853 × 10³ (1.3 × 10²) | 2.3016 × 10³ (4.9 × 10¹) | 2.8639 × 10³ (5.4 × 10¹) | 1.6038 × 10³ (1.5 × 10²)
0.01 | 50 | 3 | 7.1973 × 10¹ (2.6 × 10¹) | 1.5195 × 10² (1.1 × 10¹) | 2.0471 × 10² (2.8 × 10¹) | 7.4976 × 10¹ (3.1 × 10¹)
0.01 | 50 | 5 | 5.1583 × 10¹ (3.2 × 10¹) | 1.0843 × 10² (2.5 × 10¹) | 1.7699 × 10² (1.5 × 10¹) | 6.2757 × 10¹ (3.0 × 10¹)
0.01 | 50 | 7 | 4.7165 × 10¹ (2.6 × 10¹) | 8.3367 × 10¹ (2.4 × 10¹) | 1.7559 × 10² (1.7 × 10¹) | 5.9280 × 10¹ (2.8 × 10¹)
0.01 | 50 | 9 | 2.9900 × 10¹ (1.2 × 10¹) | 5.5275 × 10¹ (1.1 × 10¹) | 1.3470 × 10² (4.2 × 10¹) | 3.7299 × 10¹ (1.3 × 10¹)
0.01 | 50 | 11 | 3.2033 × 10¹ (1.3 × 10¹) | 4.9114 × 10¹ (1.5 × 10¹) | 1.0878 × 10² (1.7 × 10¹) | 4.0297 × 10¹ (1.4 × 10¹)
0.01 | 100 | 3 | 1.2717 × 10² (2.6 × 10¹) | 2.6042 × 10² (3.9 × 10¹) | 2.9370 × 10² (2.3 × 10¹) | 1.3677 × 10¹ (1.6 × 10¹)
0.01 | 100 | 5 | 9.7236 × 10¹ (3.5 × 10¹) | 2.0219 × 10² (7.3 × 10¹) | 2.7024 × 10² (7.7 × 10¹) | 1.1542 × 10² (2.5 × 10¹)
0.01 | 100 | 7 | 7.3738 × 10¹ (2.4 × 10¹) | 1.5125 × 10² (1.9 × 10¹) | 2.3166 × 10² (6.2 × 10¹) | 9.2338 × 10¹ (1.2 × 10¹)
0.01 | 100 | 9 | 6.9656 × 10¹ (2.7 × 10¹) | 1.5284 × 10² (5.0 × 10¹) | 2.8383 × 10² (7.4 × 10¹) | 8.5461 × 10¹ (2.1 × 10¹)
0.01 | 100 | 11 | 5.6931 × 10¹ (2.5 × 10¹) | 1.1947 × 10² (2.1 × 10¹) | 2.1254 × 10² (4.6 × 10¹) | 9.1455 × 10¹ (3.0 × 10¹)
0.01 | 150 | 3 | 1.7892 × 10² (6.9 × 10¹) | 3.5798 × 10² (8.2 × 10¹) | 4.0475 × 10² (1.1 × 10²) | 1.9433 × 10² (8.0 × 10¹)
0.01 | 150 | 5 | 1.4577 × 10² (2.6 × 10¹) | 2.8052 × 10² (5.1 × 10¹) | 3.4794 × 10² (5.0 × 10¹) | 1.6277 × 10² (2.9 × 10¹)
0.01 | 150 | 7 | 1.5671 × 10² (4.2 × 10¹) | 2.8247 × 10² (3.7 × 10¹) | 3.6967 × 10² (4.2 × 10¹) | 1.7908 × 10² (4.9 × 10¹)
0.01 | 150 | 9 | 1.1167 × 10² (1.9 × 10¹) | 2.2275 × 10² (3.4 × 10¹) | 3.1155 × 10² (4.2 × 10¹) | 1.4133 × 10² (1.4 × 10¹)
0.01 | 150 | 11 | 1.0527 × 10² (1.4 × 10¹) | 1.9597 × 10² (2.3 × 10¹) | 2.9205 × 10² (5.1 × 10¹) | 1.4177 × 10² (2.1 × 10¹)
0.01 | 200 | 3 | 2.0611 × 10² (3.8 × 10¹) | 4.2597 × 10² (1.7 × 10²) | 4.4865 × 10² (1.4 × 10²) | 2.1420 × 10² (4.6 × 10¹)
0.01 | 200 | 5 | 1.5747 × 10² (6.1 × 10¹) | 3.2010 × 10² (2.7 × 10¹) | 3.5707 × 10² (2.4 × 10¹) | 1.7202 × 10² (6.8 × 10¹)
0.01 | 200 | 7 | 1.5383 × 10² (6.8 × 10¹) | 3.0628 × 10² (8.2 × 10¹) | 3.9345 × 10² (1.3 × 10²) | 1.7395 × 10² (6.3 × 10¹)
0.01 | 200 | 9 | 1.6872 × 10² (3.1 × 10¹) | 3.4866 × 10² (8.1 × 10¹) | 4.3993 × 10² (5.9 × 10¹) | 2.1596 × 10² (3.5 × 10¹)
0.01 | 200 | 11 | 1.6727 × 10² (3.2 × 10¹) | 2.8346 × 10² (4.4 × 10¹) | 3.5677 × 10² (1.7 × 10¹) | 2.0155 × 10² (6.7 × 10¹)
0.001 | 50 | 3 | 2.5855 × 10¹ (9.3 × 10⁰) | 1.2590 × 10² (3.6 × 10¹) | 1.5201 × 10² (3.4 × 10¹) | 3.4484 × 10¹ (1.7 × 10¹)
0.001 | 50 | 5 | 8.3620 × 10⁰ (7.0 × 10⁰) | 6.1819 × 10¹ (2.1 × 10¹) | 1.2659 × 10² (2.2 × 10¹) | 9.3607 × 10⁰ (1.1 × 10¹)
0.001 | 50 | 7 | 6.7883 × 10⁰ (5.5 × 10⁰) | 3.8484 × 10¹ (6.1 × 10⁰) | 1.0937 × 10² (1.1 × 10¹) | 6.4283 × 10⁰ (1.1 × 10¹)
0.001 | 50 | 9 | 6.4813 × 10⁰ (4.4 × 10⁰) | 2.5437 × 10¹ (8.9 × 10⁰) | 9.5061 × 10¹ (3.0 × 10¹) | 6.5565 × 10⁰ (1.0 × 10¹)
0.001 | 50 | 11 | 3.6024 × 10⁰ (1.4 × 10⁰) | 1.1593 × 10¹ (2.4 × 10⁰) | 7.9012 × 10¹ (4.3 × 10¹) | 2.1094 × 10⁰ (7.7 × 10⁻¹)
0.001 | 100 | 3 | 2.8134 × 10¹ (1.3 × 10¹) | 1.9378 × 10² (6.1 × 10¹) | 2.3001 × 10² (4.5 × 10¹) | 2.9517 × 10¹ (1.4 × 10¹)
0.001 | 100 | 5 | 1.5818 × 10¹ (5.3 × 10⁰) | 1.8256 × 10² (4.2 × 10¹) | 2.4844 × 10² (5.2 × 10¹) | 2.1447 × 10¹ (8.4 × 10⁰)
0.001 | 100 | 7 | 1.1198 × 10¹ (5.6 × 10⁰) | 1.1245 × 10² (3.5 × 10¹) | 1.7682 × 10² (2.7 × 10¹) | 1.2736 × 10¹ (9.5 × 10⁰)
0.001 | 100 | 9 | 1.3117 × 10¹ (5.5 × 10⁰) | 9.3403 × 10¹ (4.7 × 10¹) | 1.8227 × 10² (8.1 × 10¹) | 1.8515 × 10¹ (1.4 × 10¹)
0.001 | 100 | 11 | 7.6223 × 10⁰ (4.7 × 10⁰) | 6.2281 × 10¹ (1.4 × 10¹) | 1.4409 × 10² (3.1 × 10¹) | 1.0546 × 10¹ (1.0 × 10¹)
0.001 | 150 | 3 | 3.7829 × 10¹ (1.0 × 10¹) | 2.2759 × 10² (1.1 × 10²) | 2.5950 × 10² (1.1 × 10²) | 4.2692 × 10¹ (1.4 × 10¹)
0.001 | 150 | 5 | 2.7834 × 10¹ (1.5 × 10¹) | 2.1556 × 10² (6.3 × 10¹) | 2.7175 × 10² (6.7 × 10¹) | 4.0423 × 10¹ (2.5 × 10¹)
0.001 | 150 | 7 | 1.4826 × 10¹ (1.2 × 10¹) | 1.3634 × 10² (3.1 × 10¹) | 1.9849 × 10² (5.9 × 10¹) | 1.9988 × 10¹ (1.5 × 10¹)
0.001 | 150 | 9 | 1.6358 × 10¹ (6.0 × 10⁰) | 1.5928 × 10² (3.5 × 10¹) | 2.4219 × 10² (8.5 × 10¹) | 3.1013 × 10¹ (1.6 × 10¹)
0.001 | 150 | 11 | 1.8158 × 10¹ (5.1 × 10⁰) | 1.2162 × 10² (2.6 × 10¹) | 1.9174 × 10² (3.0 × 10¹) | 2.4575 × 10¹ (8.0 × 10⁰)
0.001 | 200 | 3 | 3.2492 × 10¹ (8.1 × 10⁰) | 2.6359 × 10² (1.0 × 10²) | 2.8572 × 10² (9.5 × 10¹) | 3.8216 × 10¹ (1.6 × 10¹)
0.001 | 200 | 5 | 2.6752 × 10¹ (9.8 × 10⁰) | 2.1531 × 10² (9.5 × 10¹) | 2.5783 × 10² (1.1 × 10²) | 3.6128 × 10¹ (1.7 × 10¹)
0.001 | 200 | 7 | 2.6361 × 10¹ (1.9 × 10¹) | 1.8300 × 10² (1.4 × 10²) | 2.2845 × 10² (1.6 × 10²) | 3.4054 × 10¹ (2.6 × 10¹)
0.001 | 200 | 9 | 1.8149 × 10¹ (8.3 × 10⁰) | 1.6877 × 10² (6.9 × 10¹) | 2.1808 × 10² (1.0 × 10²) | 2.7946 × 10¹ (1.1 × 10¹)
0.001 | 200 | 11 | 2.2633 × 10¹ (5.2 × 10⁰) | 1.5670 × 10² (3.5 × 10¹) | 2.0801 × 10² (4.1 × 10¹) | 3.0098 × 10¹ (3.9 × 10⁰)
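Throughout Tables 2–4, each cell reports the average of the best objective values found over 20 independent runs, with a quartile-based spread in parentheses. The following is a minimal sketch of how such a summary statistic can be computed; it is illustrative Python, not the authors' evaluation code, and the `summarize` helper and the choice of the semi-interquartile range as the parenthesized spread are assumptions.

```python
import numpy as np

def summarize(runs):
    """Return (average, semi-interquartile range) of per-run best values.

    runs: sequence of the best objective value reached in each independent
    run (20 runs per algorithm and per setting in Tables 2-4).
    """
    runs = np.asarray(runs, dtype=float)
    q1, q3 = np.percentile(runs, [25, 75])  # lower and upper quartiles
    return runs.mean(), (q3 - q1) / 2.0

# Hypothetical example for one SDP setting (n = 50, p = 3):
rng = np.random.default_rng(seed=0)
best_values = -6.2e-2 + 4.9e-3 * rng.standard_normal(20)
avg, spread = summarize(best_values)
print(f"{avg:.4e} ({spread:.1e})")  # matches the "average (quartile)" cell format
```

Under this reading, a more negative average on the SDP and SDR objectives, or a smaller average on the PCA problem, together with a tighter quartile spread, indicates a better and more stable result, which is the pattern RQPSO shows across most settings.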