
Comparative Analysis of Nonlinear Programming Solvers: Performance Evaluation, Benchmarking, and Multi-UAV Optimal Path Planning

1 Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
2 Department of Mechanical Engineering, University of Maryland, College Park, MD 20742, USA
3 Department of Mechanical Engineering, University of Iowa, Iowa City, IA 52242, USA
4 Department of Mechanical Engineering, South Dakota State University, Brookings, SD 57006, USA
* Author to whom correspondence should be addressed.
Drones 2023, 7(8), 487; https://doi.org/10.3390/drones7080487
Submission received: 30 May 2023 / Revised: 11 July 2023 / Accepted: 19 July 2023 / Published: 25 July 2023

Abstract:
In this paper, we propose a set of guidelines for selecting a solver for nonlinear programming problems. We conduct a comparative analysis of the convergence performance of commonly used solvers for both unconstrained and constrained nonlinear programming problems. The comparison metrics are accuracy, convergence rate, and computational time. MATLAB is chosen as the implementation platform due to its widespread adoption in academia and industry. Our study includes solvers that are freely available, require a license, or are extensively documented in the literature. Moreover, solvers that allow the selection of different search methods are treated as separate entries. We assess the performance of 24 algorithms on a set of 60 benchmark problems. We also evaluate the capability of each solver to tackle two large-scale UAV optimal path planning scenarios, specifically the 3D minimum-time problem for UAV landing and the 3D minimum-time problem for UAV formation flying. To enrich our analysis, we discuss the effects of each solver's inner settings on accuracy, convergence rate, and computational time.

1. Introduction

The current technological era prioritizes, more than ever, high performance and efficiency in complex processes controlled by a set of variables. Examples of these processes are [1,2,3,4,5,6,7,8]: engineering design, chemical plant reactions, manufacturing processes, grid power management, power generation/conversion processes, path planning for autonomous vehicles, climate simulations, etc. Quite often, the search for the best performance, or the highest efficiency, can be transcribed into a nonlinear programming (NLP) problem, namely, the need to minimize (or maximize) a scalar cost function subject to a set of constraints. In some instances, these functions are linear, but in general one or both of them are characterized by nonlinearities. For simple one-time-use problems, one might successfully use any of the available solvers, such as FMINCON in MATLAB [9,10]. Nevertheless, if the NLP derives from specific applications, such as real-time process optimization, then the solver must be selected more carefully.
The first research efforts toward the characterization of optimization solvers began in the 1960s. In [11], the authors compare eight solvers on twenty benchmark unconstrained NLP problems containing up to 20 variables. Notably, they illustrate techniques to transform particular constrained NLPs into equivalent unconstrained problems. The authors of [12] analyze the convergence properties of two gradient-based solvers applied to 16 test problems. In the last few decades, with the development of new methodologies and optimization applications, more studies have aimed to illustrate differences in performance among NLP solvers. Schittkowski et al. [13] compared eleven different mathematical programming codes applied to structural optimization through finite element analysis. George et al. summarize a qualitative comparison of a few optimization methodologies reported by several other sources [14]. In a research document prepared by Sandia National Laboratory [15], a study was conducted on four open-source Linear Programming (LP) solvers applied to 201 benchmark problems. In [16], Kronqvist et al. carried out a performance comparison of mixed-integer NLP solvers limited to convex benchmark problems. The work in [17] presents a comparison between linear and nonlinear programming techniques for animal diet formulation. In [18], the authors use the programming language R to analyze multiple nonlinear optimization methods applied to real-life problems. State-of-the-art optimization methods were compared on L1-regularized classifiers in [19]. On a similar note, multiple global optimization solvers were compared in the work of Neumaier et al. [20]. The authors of [21,22,23,24] conducted performance comparisons of optimization techniques for specific applications, namely aerodynamic shape design, integrated manufacturing planning and scheduling, electromagnetic problems, and building energy design problems, respectively. Similarly, Frank et al. conducted a comparison between three optimization methods for solving aerodynamic design problems [25]. In [26], Karaboga et al. compared the performance of the artificial bee colony algorithm with differential evolution, evolutionary, and particle swarm optimization algorithms using multi-dimensional numerical problems.
In this paper, we aim to provide an explicit comparison of a set of NLP solvers. In our comparison, we incorporate widely used solvers available in MATLAB, several gradient descent methods that have been extensively utilized in the literature, and a particle swarm optimization algorithm. Because of its widespread use among research groups, both in academia and the private sector, we have decided to use MATLAB as a common implementation platform. For this reason, we focus on solvers that are either written in MATLAB or can be interfaced with it. The NLP problems used in this comparison have been selected from the standard benchmark problems [27,28,29], with up to thirty variables and up to nine scalar constraints.
In addition to the selected benchmark problems, two large-scale minimum-time unmanned aerial vehicle (UAV) optimal path planning problems are included in the analysis. UAVs represent a low-cost and effective alternative in many practical applications, such as precision agriculture, environmental monitoring, product delivery, or military missions, and interest in them has grown steadily over the past two decades [30]. UAV path planning is an important research area, since it aims to improve the vehicles' autonomy and safety. Path planning often employs optimal control formulations as a key strategy. Using this approach, one can generate trajectories that not only minimize a specified cost function but also adhere to both vehicle-related and problem-specific constraints. When path planning/optimal control problems are “simple”, they can be solved analytically using one of two classical methods: the Bellman method [31,32] and the Pontryagin method [33]. However, when dealing with complex (e.g., large-scale, non-convex, nonlinear) optimal control problems, it is very hard or even impossible to find solutions analytically using one of these methods, and numerical methods must be sought. Numerical methods are based on discretizing/transcribing the optimal control problem into a finite-dimensional NLP problem, which can be solved using ready-to-use NLP solvers (e.g., FMINCON, SNOPT, SciPy, IPOPT). In this paper, we employ Bernstein polynomial approximations to generate 3D trajectories for the UAV path planning problems, thereby converting the optimal control problems into NLP problems [34]. The approach is inspired by prior research on optimal control using Bernstein polynomials [35,36], which demonstrated the efficient generation of feasible and collision-free trajectories for both single and multiple vehicles. Moreover, this approach can be applied to real-time safety-critical applications in complex environments. Therefore, the analysis is further enriched by evaluating the performance of the solvers in realistic case scenarios.
The paper is organized as follows: Section 2 states the unconstrained and constrained NLP problems; Section 3 briefly enumerates the NLP solvers included in our analysis, provides an overview of the convergence metrics, and describes the solvers' implementation; the results of the comparison on the benchmark functions are discussed in Section 4; in Section 5, each solver is tested on two real-world UAV path planning optimal control problems. Finally, the main contributions of the paper are outlined in Section 6.

2. Nonlinear Programming Problem Statements

In general, a constrained NLP problem aims to minimize a nonlinear real scalar objective function with respect to a set of variables while satisfying a set of nonlinear constraints. If the problem entails the minimization of a function without any constraints, the problem is defined as unconstrained [37]. In the following subsections, the general forms of unconstrained and constrained nonlinear optimization problems are stated.

2.1. Unconstrained Optimization Problem

2.1.1. Statement

Let $x \in \mathbb{R}^n$ be a real vector with $n \geq 1$ components and let $f: \mathbb{R}^n \to \mathbb{R}$ be a smooth function. Then, the unconstrained optimization problem is defined as
$$\min_{x \in \mathbb{R}^n} f(x). \tag{1}$$

2.1.2. Optimality Conditions

Given a function $f(x)$ defined and differentiable over an interval $(a, b)$, the necessary condition for a point $x^* \in (a, b)$ to be a local maximum or minimum is that $f'(x^*) = 0$. This is also known as Fermat's theorem. The multidimensional extension of this condition states that the gradient must be zero at a local optimum point, namely,
$$\nabla f(x^*) = 0. \tag{2}$$
Equation (2) is referred to as a first-order optimality condition.

2.2. Constrained Optimization Problem

2.2.1. Statement

The constrained optimization problem is formulated as
$$\min_{x \in \mathbb{R}^n} f(x) \tag{3}$$
subject to
$$g_i(x) \leq 0, \quad i = 1, 2, \ldots, w, \tag{4}$$
$$h_j(x) = 0, \quad j = 1, 2, \ldots, l, \tag{5}$$
with $g_i(x)$ and $h_j(x)$ smooth real-valued functions on a subset of $\mathbb{R}^n$. Notably, $g_i(x)$ and $h_j(x)$ represent the sets of inequality constraints and equality constraints, respectively. The feasible set is identified as the set of points $x$ that satisfy the constraints (Equations (4) and (5)). It must be pointed out that some of the solvers considered in this study are only able to support equality constraints. In these instances, we introduce a set of slack variables $s_i$ and convert Equation (4) into the following set of equality constraints
$$g_i(x) + s_i^2 = 0, \quad i = 1, 2, \ldots, w. \tag{6}$$
Such an expedient obviously induces additional computational burden on the solvers affected by this constraint-type limitation, since the slack variables $s_i$ become additional optimization variables (a small worked example is given after Equation (9)). In this scenario, the constrained optimization problem can be reformulated as
$$\min_{x, s} f(x, s) \tag{7}$$
subject to
$$g_i(x) + s_i^2 = 0, \quad i = 1, 2, \ldots, w, \tag{8}$$
$$h_j(x) = 0, \quad j = 1, 2, \ldots, l. \tag{9}$$
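For example (an illustrative constraint added here for clarity, not one of the benchmark functions), the inequality constraint $g_1(x) = x_1^2 + x_2 - 3 \leq 0$ is converted into the equality constraint $x_1^2 + x_2 - 3 + s_1^2 = 0$, where the slack variable $s_1$ is optimized alongside $x$; any feasible point of the original inequality corresponds to a real value of $s_1$, and vice versa.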

2.2.2. Optimality Conditions

The Karush–Kuhn–Tucker (KKT) conditions measure the first-order optimality for constrained problems [38]. These necessary conditions are defined as follows. Let the objective function $f$ and the constraint functions $g_i$ and $h_j$ be continuously differentiable at $x^* \in \mathbb{R}^n$. If $x^*$ is a local optimum and the optimization problem satisfies some regularity conditions [37], then there exist constants $\mu_i$ ($i = 1, \ldots, w$) and $\lambda_j$ ($j = 1, \ldots, l$), called KKT multipliers, such that the following four groups of conditions hold (a short worked example follows the list):
  • Stationarity: for minimizing $f(x)$,
    $$\nabla f(x^*) + \sum_{i=1}^{w} \mu_i \nabla g_i(x^*) + \sum_{j=1}^{l} \lambda_j \nabla h_j(x^*) = 0. \tag{10}$$
  • Primal feasibility:
    $$g_i(x^*) \leq 0, \quad \text{for } i = 1, \ldots, w, \tag{11}$$
    $$h_j(x^*) = 0, \quad \text{for } j = 1, \ldots, l. \tag{12}$$
  • Dual feasibility:
    $$\mu_i \geq 0, \quad \text{for } i = 1, \ldots, w. \tag{13}$$
  • Complementary slackness:
    $$\sum_{i=1}^{w} \mu_i g_i(x^*) = 0. \tag{14}$$
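As a simple illustration of these conditions (an example added for clarity, not drawn from the benchmark set), consider minimizing $f(x) = x_1^2 + x_2^2$ subject to the single inequality constraint $g_1(x) = 1 - x_1 - x_2 \leq 0$. At the candidate point $x^* = (1/2, 1/2)$ the constraint is active, $g_1(x^*) = 0$. Stationarity requires $\nabla f(x^*) + \mu_1 \nabla g_1(x^*) = (1, 1) + \mu_1(-1, -1) = 0$, which is satisfied by $\mu_1 = 1$; since $\mu_1 \geq 0$ and $\mu_1 g_1(x^*) = 0$, dual feasibility and complementary slackness also hold, and $x^*$ is a KKT point.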

3. NLP Solvers and Convergence Metrics

This section briefly introduces the 24 solvers included in our study, describes the key implementation steps for each solver, and defines the convergence metrics used in our analysis.

3.1. NLP Solvers Selection

The selection of the NLP solvers considered in this work is based on the following criteria. First, we only examine algorithms that are available in MATLAB. Second, we include solvers that are either open source or, for commercial software, offer a trial version. The following is a list of the 24 solvers and algorithms included in the benchmark analysis:
  • Accelerated Particle Swarm Optimization (APSO): an algorithm developed by Yang at Cambridge University in 2007, based on a swarm-intelligence search for the optimum [39]. Due to the nature of the algorithm, only constrained nonlinear programming problems can be solved.
  • Branch and Reduce Optimization Navigator (BARON): a commercial global optimization software package that solves both NLP and mixed-integer nonlinear programs (MINLP) using deterministic global optimization algorithms of the branch-and-bound type [40,41,42]. It comes with embedded linear programming (LP) and NLP solvers, such as CLP/CBC [43], IPOPT [44,45], FilterSD [46], and FilterSQP [47]. By default, BARON selects the NLP subsolver automatically and may switch between subsolvers during the search based on problem characteristics and solver performance; we refer to this default option as BARON (auto). For this study, we have acquired a monthly license of the software, used in conjunction with the MATLAB interface. This choice reflects the fact that the free demo version has some limitations: it can only handle problems with up to ten variables and ten constraints, and it does not support trigonometric functions.
  • FMINCON: a solver for constrained NLP problems from the MATLAB Optimization Toolbox [37]. FMINCON provides the user with the option to select amongst five different algorithms: Active-set, Interior-point, Sequential Quadratic Programming (SQP), Sequential Quadratic Programming legacy (SQP-legacy), and Trust-region-reflective. Four of the five algorithms are implemented in our analysis, since the Trust-region-reflective algorithm does not support most of the constraints considered in our benchmark cases. A minimal usage sketch is given after this list.
  • FMINUNC: a solver for unconstrained NLP problems from the MATLAB Optimization Toolbox [48]. FMINUNC gives the user the option of choosing between two different algorithms for nonlinear minimization [49]: Quasi-Newton and Trust-region.
  • Globally Convergent Method of Moving Asymptotes (GCMMA): a modified version of the MMA with guaranteed convergence to a local optimum from any starting point [50,51,52].
  • Nonlinear Interior point Trust Region Optimization (ARTELYS KNITRO): a commercially available nonlinear optimization software package, developed by Ziena Optimization since 2001 [53], for finding local solutions to both continuous optimization problems and discrete optimization problems with integer or binary variables, with or without constraints [53,54]. In this work, the free trial license of the software is used in conjunction with the MATLAB interface. Several algorithms are included in the software, such as Interior-point/Direct, Interior-point/CG, Active-set, and Sequential Quadratic Programming. Interior-point/CG mainly differs from the Interior-point/Direct algorithm in that the primal-dual KKT system is solved using a projected conjugate gradient iteration [54].
  • Mixed Integer Distributed Ant Colony Optimization (MIDACO): a global optimization solver that combines an extended evolutionary probabilistic technique, called the Ant Colony Optimization algorithm, with the Oracle Penalty method for constraint handling [55,56]. In this work, we have obtained a license, since the free trial version is limited to problems with no more than four variables.
  • Method of Moving Asymptotes (MMA): it solves nonlinear problems by iteratively generating and solving approximate convex subproblems [51,52,57].
  • Modified Quasilinearization Algorithm (MQA): a modified version of the Standard Quasilinearization Algorithm (SQA) [58,59]. The goal is the progressive reduction of a performance index. Convergence to the desired solution is achieved when $\tilde{Q} \leq \varepsilon_1$ or $\tilde{R} \leq \varepsilon_2$, with $\varepsilon_1$ and $\varepsilon_2$ small preselected positive constants, for the unconstrained and constrained case, respectively [60,61]. Regarding NLP problems, it must be noted that the MQA can only handle equality constraints; as a result, slack variables are introduced to convert the inequality constraints into equality constraints.
  • PENLAB: a free, open-source MATLAB software package suitable for nonlinear, linear semidefinite, and nonlinear semidefinite optimization, based on a generalized Augmented Lagrangian method [62,63,64,65].
  • Sequential Gradient-Restoration Algorithm (SGRA): a first-order NLP solver characterized by a restoration phase followed by a gradient phase [66,67]. The goal is the progressive reduction of the performance index $\tilde{R}$, which combines the feasibility index $\tilde{P}$ and the optimality index $\tilde{Q}$. Convergence is achieved when the constraint error and the optimality condition error satisfy $\tilde{P} \leq \varepsilon_1$ and $\tilde{Q} \leq \varepsilon_2$, respectively, with $\varepsilon_1$, $\varepsilon_2$ small preselected positive constants. It must be noted that only equality constraints can be handled by the SGRA; as a result, slack variables are introduced to convert the inequality constraints into equality constraints.
  • Sparse Nonlinear OPTimizer (SNOPT): a commercial software package for solving large-scale optimization problems, including linear and nonlinear programs [68]. In this paper, we use the free trial version of the software in conjunction with the MATLAB interface, which can be retrieved at [69].
  • SOLNP: originally implemented in MATLAB to solve general nonlinear programming problems characterized by smooth nonlinear objective and constraint functions [70,71]. Inequality constraints are converted into equality constraints by means of slack variables.
  • Standard Quasilinearization Algorithm (SQA): the standard version of the quasilinearization algorithm; it solves nonlinear problems by generating a sequence of solutions to linear subproblems [58,59]. Like the MQA, the SQA can only handle equality constraints; as a result, slack variables are introduced to convert the inequality constraints into equality constraints.
Unless stated otherwise, the solvers and algorithms are freely available. Mathematical details, descriptions and documentation, the most direct source for each solver and algorithm, and all the benchmark test functions can be found in [72]. For each test function, the dimension, domain and search space, objective function, constraints, and minimum solution are listed.
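As an illustration of how the MATLAB Optimization Toolbox solvers listed above are invoked, the following minimal sketch solves a small constrained problem with FMINCON using the SQP algorithm; the objective, constraint, and starting point are illustrative only and do not belong to the benchmark set.

    % Minimal FMINCON usage sketch (illustrative problem, not a benchmark function)
    fun     = @(x) (x(1) - 1)^2 + (x(2) - 2)^2;        % objective f(x)
    nonlcon = @(x) deal(x(1)^2 + x(2)^2 - 4, []);      % inequality g(x) <= 0, no equality constraints
    x0      = [0; 0];                                  % initial guess
    opts    = optimoptions('fmincon', 'Algorithm', 'sqp', 'Display', 'off');
    [xopt, fopt, exitflag] = fmincon(fun, x0, [], [], [], [], [], [], nonlcon, opts);
    % Gradients, when available, can be supplied by returning [f, gradf] from the
    % objective and setting 'SpecifyObjectiveGradient' to true in optimoptions.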

3.2. Convergence Metrics

The main goal of this study is to characterize the convergence performance, in terms of accuracy and computational time, of the different solvers under analysis. We have selected a number of benchmark NLPs and compared the numerical solution returned by each solver with the true analytical solution. Moreover, considering that the choice of the initial guess critically affects the convergence process, we also want to assess the capability of each solver to converge to the true optimum, rather than converging to a local minimum or not converging at all. With this in mind, we define convergence robustness as the ability of a solver to reach the solution when the search process is initiated from a broad set of initial guesses randomly chosen within the search domain. Finally, to obtain an accurate assessment of the computational time, we require each solver to repeat the same search several times and average the total CPU time. As a result, given N benchmark test functions, M solvers/algorithms, K randomly generated initial guesses, and Z repeated identical search runs, a total of N × M × K × Z runs must be executed.
The following performance metrics are used (a computational sketch is provided after the list):
  • Mean error [%]:
    $$\bar{E}_m = \frac{1}{N} \sum_{n=1}^{N} \bar{E}_n, \quad \bar{E}_n = \frac{1}{K} \sum_{k=1}^{K} E_k, \quad E_k = 100\,\frac{|f(x) - f(x^*)|}{\max\left(|f(x^*)|,\, 0.001\right)}, \tag{15}$$
    with $f(x)$ the benchmark test function evaluated at the numerical solution $x$ provided by the solver, $f(x^*)$ the benchmark test function evaluated at the optimal solution $x^*$, $E_k$ the error associated with the run from the k-th randomly generated initial guess, $\bar{E}_n$ the mean error associated with the n-th benchmark test function, and $\bar{E}_m$ the mean error delivered by the m-th solver. The conditional choice of the denominator of $E_k$ reflects the fact that some benchmark test functions have zero value at the optimal solution; in those cases, a reference value of 0.001 is used instead.
  • Mean variance [%]:
    $$\bar{\sigma}_m = \frac{1}{N} \sum_{n=1}^{N} \sigma_n, \quad \sigma_n = \frac{1}{K - 1} \sum_{k=1}^{K} \left|E_k - \bar{E}_n\right|^2, \tag{16}$$
    where $\sigma_n$ is the variance corresponding to the n-th benchmark test function, and $\bar{\sigma}_m$ the mean variance delivered by the m-th solver.
  • Mean convergence rate [%]:
    $$\bar{\gamma}_m = \frac{1}{N} \sum_{n=1}^{N} \gamma_n, \quad \gamma_n = 100\,\frac{K_{conv}}{K}, \tag{17}$$
    with $K_{conv}$ the number of runs (out of the pool of K distinct initial guesses) that successfully reach convergence for the n-th function, $\gamma_n$ the convergence rate for the n-th function, and $\bar{\gamma}_m$ the mean convergence rate delivered by the m-th solver. A run is considered successful if it satisfies the convergence threshold conditions $E_k \leq E_{max} = 5\%$ and $CPU_k \leq CPU_{max} = 10$ s, with $CPU_k$ the CPU time required by the run starting from the k-th initial guess.
  • Mean CPU time [s]:
    $$\overline{CPU}_m = \frac{1}{N} \sum_{n=1}^{N} \overline{CPU}_n, \tag{18}$$
    $$\overline{CPU}_n = \frac{1}{Z} \sum_{z=1}^{Z} \overline{CPU}_z, \quad \overline{CPU}_z = \frac{1}{K} \sum_{k=1}^{K} CPU_k, \tag{19}$$
    where $\overline{CPU}_z$ is the mean CPU time for the z-th repetition, $\overline{CPU}_n$ the mean CPU time for the n-th benchmark test function, and $\overline{CPU}_m$ the mean CPU time delivered by the m-th solver.
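The sketch below shows how these metrics could be accumulated in MATLAB for one solver applied to one benchmark function; runSolver, fun, x0set, and fstar are placeholders for the solver under test, the benchmark objective, the pre-generated set of initial guesses, and the known optimal value $f(x^*)$, respectively.

    % Sketch: convergence metrics for one solver on the n-th benchmark function
    K = 50; Emax = 5; CPUmax = 10;                     % thresholds used in Section 3.2
    Ek = zeros(K, 1); CPUk = zeros(K, 1);
    for k = 1:K
        x0 = x0set(:, k);                              % k-th randomly generated initial guess
        tic;
        x = runSolver(fun, x0);                        % placeholder call to the solver under test
        CPUk(k) = toc;
        Ek(k) = 100 * abs(fun(x) - fstar) / max(abs(fstar), 0.001);   % error E_k, Equation (15)
    end
    En     = mean(Ek);                                 % mean error for this function
    sigman = sum(abs(Ek - En).^2) / (K - 1);           % variance, Equation (16)
    gamman = 100 * sum(Ek <= Emax & CPUk <= CPUmax) / K;   % convergence rate, Equation (17)
    CPUn   = mean(CPUk);                               % mean CPU time over the K runs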

3.3. Solvers Implementation

In this paper, we analyze the convergence performance of the different solvers in terms of robustness, accuracy, and CPU time. Considering that the user might decide to tune the convergence parameters to favor one of these metrics, we have decided to perform the comparison for three separate implementation scenarios: plug and play (P&P), high accuracy (HA), and quick solution (QS). The plug and play settings, as the name suggests, are the “out-of-the-box” settings of each solver. The high accuracy settings are based on more stringent tolerances and/or a higher maximum number of iterations with respect to the plug and play settings; this tuning aims to achieve a more precise solution. Finally, the quick solution settings are characterized by more relaxed convergence tolerances and a lower maximum number of iterations with respect to the plug and play settings; in this scenario, the algorithms should reach a less accurate solution, but in a shorter time. In general, the objective function, its gradient, the initial conditions, the constraint function (for constrained problems only), and the solver options are the elements passed as inputs to each solver. The objective function gradient is not necessary for APSO, BARON, MIDACO, and SOLNP, and it is optional for FMINCON/FMINUNC and KNITRO. For GCMMA/MMA, SGRA, and SNOPT, the gradients of both the objective and constraint functions are necessary. MQA/SQA and PENLAB, in addition to these inputs, require the Hessian of the objective function. In the following subsections, details on each solver and on its three different settings are described. It must be noted that, in most cases, the setting names reported here are the same as the solver option names used in the code implementation, so that the reader can better understand which solver parameter has been tuned.
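For the MATLAB Optimization Toolbox solvers, the three scenarios amount to different option sets. The sketch below illustrates the idea for FMINCON; the tolerance and iteration values are examples chosen to convey the P&P/HA/QS philosophy, not the exact values reported in Tables 1–12.

    % Illustrative P&P / HA / QS option sets for FMINCON (example values only)
    optsPP = optimoptions('fmincon', 'Algorithm', 'sqp');        % plug and play: default tolerances
    optsHA = optimoptions(optsPP, ...                            % high accuracy: tighter tolerances,
        'OptimalityTolerance', 1e-10, 'StepTolerance', 1e-12, ...
        'ConstraintTolerance', 1e-10, 'MaxIterations', 5000);    % and more iterations allowed
    optsQS = optimoptions(optsPP, ...                            % quick solution: relaxed tolerances,
        'OptimalityTolerance', 1e-3, 'StepTolerance', 1e-4, ...
        'ConstraintTolerance', 1e-3, 'MaxIterations', 50);       % and fewer iterations allowed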

3.3.1. APSO

The three settings considered in the analysis are reported in Table 1, where no. particles is the number of particles, no. iterations is the total number of iterations, and γ is a control parameter that multiplies α. Here, α and β are the two learning parameters, or acceleration constants, governing the random amplitude of the roaming particles and the speed of convergence, respectively. APSO also requires the number of problem variables, no. vars, to be defined, but this parameter is obviously invariant across the three settings.

3.3.2. BARON

The three settings considered in the analysis are reported in Table 2, with EpsA the absolute termination tolerance, EpsR the relative termination tolerance, and AbsConFeasTol the absolute constraint feasibility tolerance. Due to the limitations of the solver, trigonometric functions are not supported; for this reason, the following test functions are excluded from the analysis: A.2, A.3, A.4, A.5, A.17, A.18, and A.26 for unconstrained problems, and B.2, B.5, and B.8 for constrained problems (refer to the Appendix in [72]).

3.3.3. FMINCON/FMINUNC

The three settings considered in the analysis are reported in Table 3, with StepTolerance the lower bound on the size of a step, ConstraintTolerance the upper bound on the magnitude of any constraint function, FunctionTolerance the lower bound on the change in the value of the objective function during a step, and OptimalityTolerance the tolerance for the first-order optimality measure.

3.3.4. GCMMA/MMA

The three settings considered in the analysis are reported in Table 4, where epsimin is a prescribed small positive tolerance that terminates the algorithm, whereas maxoutit is the maximum number of iterations for MMA and the maximum number of outer iterations for GCMMA.

3.3.5. KNITRO

The three settings considered in the analysis are reported in Table 5, where xtol and ftol are tolerances that terminate the optimization process if the relative change of the solution point estimate or of the objective function is less than those values, opttol and opttol_abs specify the final relative and absolute stopping tolerances for the KKT (optimality) error, and feastol and feastol_abs specify the final relative and absolute stopping tolerances for the feasibility error.

3.3.6. MIDACO

The three settings considered in the analysis are reported in Table 6, where maxeval is the maximum number of function evaluations; a distinctive feature of MIDACO is that it allows the solver to stop exactly after that number of function evaluations. It must be noted that another tunable parameter is accuracy, namely, the accuracy tolerance for the constraint violation, which is not considered in the settings since no beneficial effect has been found compared to tuning maxeval.

3.3.7. MQA

The three settings considered in the analysis are reported in Table 7, with $\varepsilon_1$ and $\varepsilon_2$ the prescribed small positive tolerances that allow the solver to stop when the inequality $\tilde{Q} \leq \varepsilon_1$ or $\tilde{R} \leq \varepsilon_2$ is met. As mentioned in Section 3.1, for NLP problems the MQA can only handle equality constraints; as a result, slack variables are introduced to convert the inequality constraints into equality constraints. In this study, for all three settings considered in the analysis, a value of 1 is chosen as the initial guess for all the slack variables.

3.3.8. PENLAB

The three settings considered in the analysis are reported in Table 8, where max_inner_iter is the maximum number of inner iterations, max_outer_iter is the maximum number of outer iterations, mpenalty_min is the lower bound for the penalty parameters, inner_stop_limit is the termination tolerance for the inner iterations, outer_stop_limit is the termination tolerance for the outer iterations, kkt_stop_limit is the termination tolerance for the KKT optimality conditions, and unc_dir_stop_limit is the stopping tolerance for the unconstrained minimization.

3.3.9. SGRA

The three settings considered in the analysis are reported in Table 9, with $\varepsilon_1$ the tolerance related to the constraint error $\tilde{P}$, and $\varepsilon_2$ the tolerance related to the optimality condition error $\tilde{Q}$. Considering that the SGRA can only treat equality constraints, slack variables are introduced to convert the inequality constraints into equality constraints. In this study, for all three settings considered in the analysis, a value of 1 is chosen as the initial guess for all the slack variables.

3.3.10. SNOPT

The three settings considered in the analysis are reported in Table 10, where major_iterations_limit is the limit on the number of major iterations in the SQP method, minor_iterations_limit is the limit on minor iterations in the QP subproblems, major_feasibility_tolerance is the tolerance for feasibility of the nonlinear constraints, major_optimality_tolerance is the tolerance for the dual variables, and minor_feasibility_tolerance is the tolerance for the variables and their bounds.

3.3.11. SOLNP

The three settings considered in the analysis are reported in Table 11, with ρ the penalty parameter in the augmented Lagrangian objective function, maj the maximum number of major iterations, min the maximum number of minor iterations, δ the perturbation parameter for numerical gradient calculation, and ϵ the relative tolerance on optimality and feasibility. During the HA scenario implementation, we found that different convergence settings are required for unconstrained and constrained problems; this peculiarity might be induced by the stringent tolerances adopted in this scenario.

3.3.12. SQA

The three settings considered in the analysis are reported in Table 12, with $\varepsilon_1$ and $\varepsilon_2$ the prescribed small positive tolerances that allow the solver to stop when the inequality $\tilde{Q} \leq \varepsilon_1$ or $\tilde{R} \leq \varepsilon_2$ is met. As mentioned earlier, SQA can only treat equality constraints; to overcome this limitation, slack variables are introduced to convert the inequality constraints into equality constraints. In this study, for all three settings considered in the analysis, a value of 1 is chosen as the initial guess for all the slack variables.

4. Benchmark Test Functions Analysis

We present a collection of unconstrained and constrained optimization test problems that are used to validate the performance of the various optimization algorithms presented above for the different implementation scenarios. The comparison results are also discussed in depth in this section.
For performance comparison purposes, an equivalent environment and identical control parameters have been created to run each NLP solver. All outputs tabulated in this paper are calculated using MATLAB running on a desktop computer with the following specifications: Intel(R) Core(TM) i7-6700 CPU, 3.40 GHz processor, 16.0 GB of RAM, and a 64-bit Windows 10 operating system. To assess the true computational time required by each algorithm to reach convergence, processes that are expected to have an impact on the computer's performance are deactivated during the runs. The internet connection and other unrelated applications are turned off throughout the analysis, ensuring that unnecessary background activities do not consume computational resources while the solvers run. The unconstrained and constrained NLP problems are selected amongst the standard benchmark problems [27,28,29], and they are reported in [72]. Specifically, the benchmark problems include combinations of logarithmic, trigonometric, and exponential terms, non-convex and convex functions, a minimum of two and a maximum of thirty variables, and a maximum of nine constraint functions for the constrained optimization problems. As mentioned in Section 3.3, the comparison between the solvers is carried out by considering three different settings: plug and play, high accuracy, and quick solution. In this way, we assess the robustness, accuracy, and computational time of every solver. For each benchmark problem, all solvers use the same set of randomly generated initial guesses.
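To guarantee that every solver sees exactly the same initial guesses, the K guesses can be generated once per benchmark function with a fixed random seed, as in the sketch below; lb and ub denote the lower and upper bounds of the test function's search domain.

    % Sketch: one common set of K random initial guesses per benchmark function
    rng(0);                                        % fixed seed for reproducibility
    K = 50;
    x0set = lb + (ub - lb) .* rand(numel(lb), K);  % uniform samples inside the search domain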

4.1. Results for Unconstrained Optimization Problems

A collection of 30 unconstrained optimization test problems is used to validate the performance of the optimization algorithms. For the purpose of this analysis, given N = 30 benchmark test functions, M = 17 solvers and algorithms, K = 50 randomly generated initial guesses, and Z = 3 iterations, a set of N × M × K × Z runs is executed. Table 13, Table 14 and Table 15 report the results for the plug and play (P&P), high accuracy (HA), and quick solution (QS) settings, respectively. From the analysis of the results for the P&P settings, Table 13, we observe that all the versions of BARON have the highest convergence rate. BARON (auto) and BARON (ipopt) reach the minimum mean error and variance, but they are not the fastest to reach the solution. Moreover, BARON (sd), BARON (sqp), SNOPT, and PENLAB obtain good results in terms of mean error and variance. Overall, PENLAB is also able to reach a convergence rate similar to BARON (auto) and BARON (ipopt), with the advantage of being considerably faster. The worst results in terms of accuracy and convergence rate are obtained by SOLNP and SGRA. For the HA settings, Table 14, we observe similar trends. In general, as expected, all the solvers achieve a more accurate solution: they reduce the average error, increase their convergence rate, and increase the average convergence time. MIDACO is now able to reach the second highest convergence rate, after the BARON versions. Overall, PENLAB is the solver that delivers a good trade-off in performance. With respect to the P&P settings, SOLNP significantly improves its convergence rate, whereas SGRA only slightly increases its performance. It is interesting to observe that KNITRO (sqp), aside from improving its convergence rate, increases its mean error and variance; despite our efforts, we cannot fully explain this unexpected behaviour. Regarding the QS settings, Table 15, all the solvers generally reduce their convergence time and also decrease their convergence rate, except for BARON (auto), BARON (ipopt), and BARON (sqp), which remain unaltered. SQA, FMINUNC (quasi-newton), SNOPT, and SOLNP are amongst the fastest to reach the solution, but their convergence rate is quite low, except for SNOPT. In addition, unlike the other solvers, BARON is not always able to achieve a faster CPU time with respect to the P&P settings. The same happens to the SGRA, probably due to its intrinsic iterative nature. Finally, KNITRO (sqp) delivers, in all cases, the slowest CPU time amongst the KNITRO algorithms. This might be due to the fact that it internally solves Quadratic Programming subproblems with computationally expensive iterations.

4.2. Results for Constrained Optimization Problems

A collection of 30 constrained optimization test problems is used to validate the performance of the optimization algorithms. For the purpose of the analysis, given N = 30 benchmark test functions, M = 22 solvers and algorithms, K = 50 randomly generated initial guesses, and Z = 3 iterations, a set of N × M × K × Z runs is executed. Table 16, Table 17 and Table 18 report the results for the P&P, HA, and QS settings, respectively. From the analysis of the results for the P&P settings, Table 16, we observe that all the versions of BARON reach the highest accuracy and the best convergence rate, but they are not the fastest to reach the solution. KNITRO (interior-point/D) achieves the second best convergence rate, with an average CPU time that is two orders of magnitude lower than BARON's. PENLAB obtains the best mean error and variance, but this performance is tempered by a low convergence rate, shared with the SGRA, MQA, and SQA, which are also quite slow to reach a solution. SNOPT reaches a convergence rate slightly lower than KNITRO (interior-point/D), KNITRO (interior-point/CG), FMINCON (interior-point), and KNITRO (sqp), but it is significantly faster. Regarding the HA settings, Table 17, similar considerations can be made for BARON, but in this case its CPU time increases. MIDACO shows an improvement in the convergence rate, reaching values very similar to BARON. PENLAB still obtains the best mean error and variance, but it has one of the lowest convergence rates, together with the SGRA. In general, most of the solvers increase their convergence rate and decrease their mean error, except for GCMMA and PENLAB. Regarding the QS settings, Table 18, all the solvers generally decrease their convergence rate, except for BARON and PENLAB, for which the same considerations as in the two previous scenarios apply. MIDACO reports a significant decrease in the convergence rate. SNOPT, FMINCON, and the KNITRO algorithms reach a convergence rate lower than BARON, but not as low as other solvers (SOLNP, APSO, PENLAB), and they are significantly faster. Also for the constrained problems, KNITRO (sqp) delivers the slowest CPU time amongst the KNITRO algorithms; again, this might be a consequence of the computationally heavy internal Quadratic Programming subproblems. The worst results in terms of convergence rate and CPU time are obtained by MQA and SQA.

5. UAV Path Planning: Real-World Application Benchmark

Path planning poses a significant challenge for autonomous UAVs, especially when specific mission criteria must be fulfilled. In general, path planning problems can be formulated as optimal control problems; i.e., letting the state trajectories and control inputs of the vehicles be denoted by $x(t)$ and $u(t)$, respectively, the path planning problem can formally be stated as follows:
$$\min_{x(t), u(t)} I(x(t), u(t)) \tag{20}$$
subject to
$$\dot{x}(t) = f(x(t), u(t)), \quad t \in [0, t_f], \tag{21}$$
$$e(x(0), x(t_f)) = 0, \tag{22}$$
$$h(x(t), u(t)) \leq 0, \quad t \in [0, t_f], \tag{23}$$
where $I: \mathbb{R}^{n_x} \times \mathbb{R}^{n_u} \to \mathbb{R}$ is a cost function, $f: \mathbb{R}^{n_x} \times \mathbb{R}^{n_u} \to \mathbb{R}^{n_x}$ represents the vehicle dynamics, and $e: \mathbb{R}^{n_x} \times \mathbb{R}^{n_x} \to \mathbb{R}^{n_e}$ and $h: \mathbb{R}^{n_x} \times \mathbb{R}^{n_u} \to \mathbb{R}^{n_h}$ are constraint functions.
One common criterion is optimizing the path for either minimal energy consumption or minimum time of arrival at the destination. The constraint in Equation (22) enforces the boundary conditions, e.g., initial and final position, speed, and heading angles of the vehicles, and Equation (23) describes feasibility and mission-specific constraints, e.g., minimum and maximum speed, acceleration, and collision avoidance constraints. With these considerations in mind, we present a path planning example that caters to both single and multiple rotorcraft UAV scenarios. In particular, this work considers two problems, namely, a single-drone landing mission and a multiple-drone formation flight. Both problems consider 3D trajectories, and the chosen optimization criterion is mission time. The generation of the 3D trajectories is based on Bernstein polynomial approximations of the vehicles' trajectories, used to transcribe the infinite-dimensional path planning problems into NLP problems [30,73]. In turn, the trajectory of vehicle i is parameterized by the $n_o$-th order Bernstein polynomials
$$x(t) = \sum_{i=0}^{n_o} \bar{x}_i\, b_{i,n_o}(t), \quad t \in [0, t_f], \tag{24}$$
where $\bar{x}_i$, $i = 0, \ldots, n_o$, are Bernstein polynomial coefficients and $b_{i,n_o}(t)$, $i = 0, \ldots, n_o$, are Bernstein polynomial basis functions of order $n_o$. The following properties of Bernstein polynomials are used in this paper:
(1) Differentiation and integration: the derivative of the Bernstein polynomial $x(t)$ defined above is computed as
$$\frac{d x(t)}{dt} = \sum_{i=0}^{n_o} \sum_{j=0}^{n_o} \bar{x}_j D_{j,i}\, b_{i,n_o}(t), \tag{25}$$
where $D_{j,i}$ is the $(j,i)$-th entry of a square differentiation matrix [74]; higher-order derivatives follow by repeated application of the same matrix.
(2) Arithmetic operations: the sum (difference) of two $n_o$-th order Bernstein polynomials is an $n_o$-th order Bernstein polynomial. The product of two Bernstein polynomials of orders $n_{o,1}$ and $n_{o,2}$ is a Bernstein polynomial of order $n_{o,1} + n_{o,2}$ [75] (Chapter 5).
Using these properties, the following functions can be expressed as Bernstein polynomials:
$$\text{speed: } \|\dot{x}(t)\|^2 = \sum_{j=0}^{2 n_o} \bar{v}_j\, b_{j,2 n_o}(t), \qquad \text{acceleration: } \|\ddot{x}(t)\|^2 = \sum_{j=0}^{2 n_o} \bar{a}_j\, b_{j,2 n_o}(t), \qquad \text{distance: } \|x(t) - \hat{x}\|^2 = \sum_{j=0}^{2 n_o} \bar{d}_j\, b_{j,2 n_o}(t). \tag{26}$$
In the equation above, $\bar{v}_j$, $\bar{a}_j$, and $\bar{d}_j$, $j \in \{0, \ldots, 2 n_o\}$, can be obtained by algebraic manipulation of the Bernstein coefficients $\bar{x}_0, \ldots, \bar{x}_{n_o}$. With this setup, the planning problem amounts to finding optimal Bernstein polynomial coefficients such that the trajectory $x(t)$ minimizes some objective function and satisfies a set of constraints.
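To make the parameterization concrete, the sketch below evaluates an $n_o$-th order Bernstein polynomial from its coefficients using the standard basis $b_{i,n_o}(t) = \binom{n_o}{i} (t/t_f)^i (1 - t/t_f)^{n_o - i}$ on $[0, t_f]$; it implements Equation (24) directly and is independent of the differentiation-matrix machinery of [74].

    % Evaluate x(t) = sum_i xbar(i+1) * b_{i,no}(t) on [0, tf]   (Equation (24))
    function xt = bernsteinEval(xbar, tf, t)
        no  = numel(xbar) - 1;                 % polynomial order n_o
        tau = t(:)' / tf;                      % normalized time in [0, 1]
        xt  = zeros(size(tau));
        for i = 0:no
            b  = nchoosek(no, i) .* tau.^i .* (1 - tau).^(no - i);   % Bernstein basis b_{i,no}(t)
            xt = xt + xbar(i + 1) .* b;
        end
    end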
This benchmark comparison is performed considering the solvers' plug and play (P&P) settings. Every test problem has been initially evaluated by using FMINCON (sqp) with ad hoc settings, as reported in Table 19. These settings are characterized by very stringent tolerances and a high number of maximum iterations. This provides us with the candidate optimal solution $f(x^*)$ for each randomly generated initial guess. The setting $exitflag = 1$ means that the first-order optimality measure is less than OptimalityTolerance and the maximum constraint violation is less than ConstraintTolerance. It must be noted that only for the evaluation of the candidate optimal solution of the UAV formation flying problem with $n_v = 25$ (see Section 5.2) have the tolerances been relaxed to $10^{-6}$. This allows us to adopt the convergence metrics defined in Section 3.2. In turn, a run starting from the k-th initial guess is considered successful if it satisfies the convergence threshold conditions $E_k \leq E_{max} = 5\%$ and $CPU_k \leq CPU_{max}$ for every test problem (see Table 20 and Table 24). The $CPU_{max}$ values are chosen by suitably increasing the CPU time required by the ad hoc settings to solve each problem.

5.1. The 3D Minimum Time Problem: UAV Landing

The first problem is a minimum time problem for a simplified 3D model of a multi-rotor drone. The vehicle is required to reach the origin in minimum time from a given initial condition with all the control inputs bounded by ± 1 . The problem is defined as follows:
$$\min_{x, u, t_f} J = t_f, \tag{27}$$
subject to
$$\begin{aligned}
& \dot{x}_1 = x_4, \quad \dot{x}_2 = x_5, \quad \dot{x}_3 = x_6, \quad \dot{x}_4 = u_1, \quad \dot{x}_5 = u_2, \quad \dot{x}_6 = g + u_3, \\
& x_1(0) = k_1, \quad x_2(0) = k_2, \quad x_3(0) = k_3, \quad x_4(0) = k_4, \quad x_5(0) = k_5, \quad x_6(0) = k_6, \\
& x_1(t_f) = x_2(t_f) = x_3(t_f) = 0, \quad x_4(t_f) = x_5(t_f) = x_6(t_f) = 0, \\
& |u_1(t)| \leq 1, \quad |u_2(t)| \leq 1, \quad |g + u_3(t)| \leq 1, \quad t \in [0, t_f],
\end{aligned} \tag{28}$$
where $x_1, x_2, x_3, x_4, x_5, x_6$ are the Bernstein polynomials associated with the drone position and velocity, $u_1, u_2, u_3$ are the Bernstein polynomials associated with the control input, $g$ is the gravitational acceleration, $k_1, \ldots, k_6$ are the randomly generated initial conditions, and $t_f$ is the final maneuver time to be minimized. The Bernstein polynomials are vectors of dimension $n_o + 1$, with $n_o$ the polynomial order of approximation. More accurate results can be obtained for larger $n_o$, at the expense of computational time. The dimension of the problem is $9 \times (n_o + 1) + 1$, with the last 1 being the final time $t_f$, with $6 \times (n_o + 1) + 12$ equality constraints and $6 \times (n_o + 1)$ inequality constraints. The derivatives are computed using the squared differentiation matrix for Bernstein polynomials [30]. Figure 1 reports an example of a solution of the 3D minimum time UAV landing problem with $n_o = 50$.
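As a concrete instance of these dimension formulas, the smallest order considered here, $n_o = 5$, gives $9 \times 6 + 1 = 55$ optimization variables, $6 \times 6 + 12 = 48$ equality constraints, and $6 \times 6 = 36$ inequality constraints, while $n_o = 150$ already gives $9 \times 151 + 1 = 1360$ variables, illustrating how quickly the problem size grows with the order of approximation.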
In our analysis, given N = 3 benchmark test problems, each characterized by a different order of approximation $n_o$, M = 22 solvers and algorithms, K = 50 randomly generated initial guesses, and Z = 3 iterations, a set of M × K × Z runs is executed for each test problem. Table 20 reports the number of variables and of equality and inequality constraints for the different orders of approximation $n_o$. Due to the increased complexity of the problems with higher values of $n_o$, and the way some solvers are implemented, not all the solvers are used for every benchmark test problem. In particular, for $n_o = 50, 150$, MQA and SQA are omitted, as they require additional slack variables for every inequality constraint, as well as the Hessian of the constraint functions, which needs to be loaded from stored MATLAB binary files at each iteration; we have found this to be extremely computationally expensive, exceeding the $CPU_{max}$ limit. For $n_o = 50, 150$, PENLAB is omitted because the Hessians of the constraint functions need to be loaded from stored MATLAB binary files at each iteration, whereas SGRA is left out for $n_o = 150$ because of its use of slack variables. In both cases, this is overly computationally expensive.
Table 21 reports the results for the UAV landing problem with $n_o = 5$. It can be seen that BARON (auto), BARON (ipopt), BARON (sqp), FMINCON (sqp), FMINCON (active-set), SNOPT, KNITRO (active-set), and KNITRO (sqp) reach a 100% convergence rate and similar accuracy in terms of mean error and variance. The BARON algorithms have the highest CPU time, together with the SGRA. APSO, MMA/GCMMA, MIDACO, MQA, and SQA are not able to find any successful solutions that satisfy the convergence threshold condition on the maximum error $E_{max}$. It is interesting to note that PENLAB converges to accurate solutions in 100% of the runs, but without satisfying the maximum CPU time requirement. This shows how the need to load the Hessians of the constraint functions from stored MATLAB binary files at each iteration affects the computational time of PENLAB. From the UAV landing problem solutions with $n_o = 50$ reported in Table 22, the results show that fewer solvers are able to reach full convergence, specifically BARON (auto), BARON (ipopt), BARON (sqp), KNITRO (active-set), and SNOPT, with SNOPT being the fastest solver. In this case, an additional solver is not able to satisfy the convergence threshold conditions, namely, FMINCON (interior-point). Regarding the UAV landing problem solutions with $n_o = 150$ reported in Table 23, only BARON (auto), BARON (ipopt), and SNOPT reach a 100% convergence rate, whereas FMINCON (sqp-legacy) is not able to converge to any acceptable solution, together with the same solvers mentioned in the previous cases. As expected, the average computational time increases for all the solvers, but SNOPT is able to outperform all the other solvers, with a CPU time one to two orders of magnitude lower. The fact that KNITRO and FMINCON are no longer able to reach full convergence and, at the same time, increase their CPU time is probably due to the general increase in problem complexity (i.e., a higher number of variables and constraints) and to the absence of a warm start, since the initial conditions are randomly generated.

5.2. The 3D Minimum Time Problem: UAV Formation Flying

The second problem is a minimum-time UAV formation flying problem. Starting from a grid at altitude $z_{alt_{in}} = 0$, i.e., the initial condition, $n_v$ drones need to form a circle-shaped formation at $z_{alt_f} = 10$, i.e., the final condition. The trajectories must satisfy zero speed at arrival, minimum and maximum acceleration rates, and collision avoidance constraints. The vehicles need to arrive at the destination at the same final time $t_f$. As in the previous test case, $n_o$ is the Bernstein polynomial order of approximation, representing the number of nodes for each trajectory, whereas $n_v$ is the number of vehicles involved in the mission. Because the complexity can grow quite quickly, a fixed number of nodes is selected, $n_o = 4$. The initial and final conditions are randomly generated by varying the initial grid size, the final radius of the circle-shaped formation, and the final position on the circle for each drone. The control inputs are bounded by ±1, and the collision avoidance constraints are implemented to maintain a minimum distance greater than or equal to 0.1 between the drones. The problem is defined as follows:
$$\min_{x_1, \ldots, x_{n_v},\, u_1, \ldots, u_{n_v},\, t_f} J = t_f, \tag{29}$$
subject to
$$\begin{aligned}
& \dot{x}_{j,1} = x_{j,4}, \quad \dot{x}_{j,2} = x_{j,5}, \quad \dot{x}_{j,3} = x_{j,6}, \quad \dot{x}_{j,4} = u_{j,1}, \quad \dot{x}_{j,5} = u_{j,2}, \quad \dot{x}_{j,6} = g + u_{j,3}, \\
& x_{j,1}(0) = x_{grid_j}, \quad x_{j,2}(0) = y_{grid_j}, \quad x_{j,3}(0) = z_{alt_{in}}, \quad x_{j,4}(0) = x_{j,5}(0) = x_{j,6}(0) = 0, \\
& x_{j,1}(t_f) = x_{circle_j}, \quad x_{j,2}(t_f) = y_{circle_j}, \quad x_{j,3}(t_f) = z_{alt_f}, \quad x_{j,4}(t_f) = x_{j,5}(t_f) = x_{j,6}(t_f) = 0, \\
& |u_{j,1}(t)| \leq 1, \quad |u_{j,2}(t)| \leq 1, \quad |g + u_{j,3}(t)| \leq 1, \quad t \in [0, t_f], \\
& \|p_j(t) - p_i(t)\| \geq d_{safe}, \quad t \in [0, t_f],
\end{aligned} \tag{30}$$
for all $j, i = 1, \ldots, n_v$ with $i \neq j$, where $n_v$ is the number of vehicles involved in the mission and $p_j(t) = (x_{j,1}(t), x_{j,2}(t), x_{j,3}(t))$ denotes the position of the j-th vehicle. In the problem above, $x_{j,1}, x_{j,2}, x_{j,3}, x_{j,4}, x_{j,5}, x_{j,6}$ are the Bernstein polynomials associated with the j-th multi-rotor drone position and velocity, $u_{j,1}, u_{j,2}, u_{j,3}$ are the Bernstein polynomials associated with its control input, $g$ is the gravitational acceleration, $t_f$ is the final maneuver time to be minimized, shared by all the vehicles, and $x_{grid}$, $y_{grid}$ and $x_{circle}$, $y_{circle}$ are random vectors in $\mathbb{R}^3$, representing random points on a grid and a circle, respectively. In other words, the vehicles start from a grid formation and eventually converge into a circle formation. The Bernstein polynomials are vectors of dimension $n_o + 1$, with $n_o$ the polynomial order of approximation. The dimension of the problem is $n_v \times 9 \times (n_o + 1) + 1$, with the last 1 being the final time $t_f$, with $n_v \times (6 \times (n_o + 1) + 12)$ equality constraints and $n_v \times 6 \times (n_o + 1) + \sum_{i=1}^{n_v - 1} i \times (n_o + 1)$ inequality constraints. Figure 2 reports an example of a solution of the 3D minimum time UAV formation flying problem with $n_v = 25$.
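As a concrete instance of these formulas, for $n_v = 5$ vehicles with $n_o = 4$ the problem has $5 \times 9 \times 5 + 1 = 226$ variables, $5 \times (6 \times 5 + 12) = 210$ equality constraints, and $5 \times 6 \times 5 + (1 + 2 + 3 + 4) \times 5 = 200$ inequality constraints; for $n_v = 25$ these counts grow to $1126$ variables, $1050$ equality constraints, and $2250$ inequality constraints.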
In our analysis, given N = 3 benchmark test problems, each characterized by a different number of vehicles $n_v$, M = 19 solvers and algorithms, K = 50 randomly generated initial guesses, and Z = 3 iterations, a set of M × K × Z runs is executed for each test problem. Table 24 reports the number of variables and of equality and inequality constraints for the different numbers of vehicles $n_v$. Similarly to the UAV landing problem, some solvers are omitted; in particular, PENLAB, MQA, and SQA are excluded for all cases, and SGRA is additionally omitted for $n_v = 10, 25$. The reasons for these choices are discussed in Section 5.1.
Table 25 reports the results for the UAV formation flying problem with $n_v = 5$. It can be seen that BARON (auto), BARON (ipopt), BARON (sd), BARON (sqp), FMINCON (sqp), FMINCON (sqp-legacy), KNITRO (active-set), KNITRO (interior-point/D), and KNITRO (sqp) reach a 100% convergence rate and similar accuracy in terms of mean error and variance. Almost all the algorithms have similar CPU times, except SNOPT and SOLNP, which are one order of magnitude faster and slower, respectively. APSO, MMA/GCMMA, MIDACO, and SGRA instead are not able to find any successful solutions that satisfy the convergence threshold condition on the maximum error $E_{max}$. The UAV formation flying problem solutions with $n_v = 10$ reported in Table 26 show a similar situation, except that KNITRO (active-set) is no longer able to reach a full convergence rate, and in general the computational times increase. FMINCON (interior-point) is not able to converge to any acceptable solution, together with the same solvers mentioned in the previous case. Regarding the UAV formation flying problem solutions with $n_v = 25$ reported in Table 27, all the BARON algorithms have the highest convergence rate and the lowest CPU time. Among the KNITRO algorithms, instead, only KNITRO (interior-point/D) is able to maintain a fairly high convergence rate, probably due to its capability to deal with large-scale problems. It is interesting to note that KNITRO (interior-point/CG) should also be able to deal with large-scale problems, but its performance is not as good as that of KNITRO (interior-point/D); this may be due to the different way the KKT system is solved by this particular algorithm [54]. Furthermore, the presence of non-convex collision avoidance constraints in the UAV formation flying problem, together with randomly generated initial conditions, negatively affects the performance of SNOPT, as shown in [76].

6. Conclusions

In this paper, we provide an explicit comparison of a set of NLP solvers for the solution of unconstrained and constrained NLP problems. Because of its widespread use among research groups, both in academia and the private sector, we have used MATLAB as a common implementation platform. The benchmark analysis compares popular solvers which are readily available in MATLAB, a few gradient descent methods that have been extensively used in the literature, and a particle swarm optimization algorithm in terms of accuracy, convergence rate, and computational time. In addition, three different implementation scenarios per solver are taken into consideration, namely, plug and play (P&P), high accuracy (HA), and quick solution (QS). With this in mind, each solver has first been tested on a selection of constrained and unconstrained standard benchmark problems with up to thirty variables and up to nine scalar constraints. Results for the unconstrained problems show that BARON is the algorithm that delivers the best convergence rate and accuracy, but it is the slowest. PENLAB is the algorithm with the best trade-off between accuracy, convergence rate, and speed. For the constrained NLP problems, again, BARON showcases exceptional accuracy and convergence rate, yet it falls within the slower range of algorithms. FMINCON, KNITRO, and SNOPT are the ones able to deliver a fair compromise of accuracy, convergence rate, and speed. We have then tested each solver on two large-scale, real-world minimum-time UAV optimal path planning problems, for UAV landing and formation flying. Results for the UAV landing problems show that BARON and SNOPT deliver the best convergence rate and accuracy, with SNOPT being the fastest solver, while some solvers are not able to converge to any acceptable solution. Finally, results for the UAV formation flying problem show that overall BARON still reaches the best convergence rate and accuracy, and also the lowest CPU time; KNITRO and FMINCON follow closely behind.

Author Contributions

Conceptualization, M.C. and G.L.; methodology, G.L., V.C. and M.C.; software, G.L. and K.G.; validation, G.L. and K.G.; formal analysis, G.L., V.C. and M.C.; investigation, M.C. and G.L.; resources, G.L., K.G., V.C. and M.C.; data curation, G.L.; writing—original draft preparation, M.C. and G.L.; writing—review and editing, G.L., K.G., V.C. and M.C.; visualization, G.L.; supervision, V.C. and M.C.; project administration, M.C. and G.L.; funding acquisition, M.C. and V.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Amazon (Amazon Research Award 2021), by the Office of Naval Research (grants N000142212634 and N000142112091), and by the National Science Foundation (grant 2136298).

Data Availability Statement

Data available on request from the authors.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lasdon, L.A.; Warren, A.D. Survey of Nonlinear Programming Applications. J. Oper. Res. Soc. Am. 1980, 28, 1029–1073.
  2. Grossmann, I.E. Global Optimization in Engineering Design; Springer-Science+Business, B.V.: Berlin/Heidelberg, Germany, 1996.
  3. Charalambous, C. Acceleration of the Least pth Algorithm for MiniMax Optimization with Engineering Applications. Math. Program. 1979, 17, 270–297.
  4. Grossmann, I.E.; Kravanja, Z. Mixed-Integer Nonlinear Programming: A Survey of Algorithms and Applications. In Large-Scale Optimization with Applications; Biegler, L.T., Coleman, T.F., Conn, A.R., Santosa, F.N., Eds.; Springer: New York, NY, USA, 1997; Volume 93, pp. 73–100.
  5. Wu, X.; William, S.L. Assimilation of ERBE Data with a Nonlinear Programming Technique to Improve Cloud-Cover Diagnosis. Am. Meteorol. Soc. 1992, 120, 2009–2024.
  6. Wansuo, D.; Haiying, L. A New Strategy for Solving a Class of Constrained Nonlinear Optimization Problems Related to Weather and Climate Predictability. Adv. Atmos. Sci. 2010, 27, 741–749.
  7. Rustagi, J. Optimization Techniques in Statistics; Academic Press Limited: Cambridge, MA, USA, 1994.
  8. Ziemba, W.T.; Vickson, R.G. Stochastic Optimization Models in Finance; Academic Press INC.: Cambridge, MA, USA, 1975.
  9. MathWorks. Fmincon. 2020. Available online: https://www.mathworks.com/help/optim/ug/fmincon.html#busp5fq-6 (accessed on 1 September 2021).
  10. MATLAB. The MathWorks Inc. 2020. Available online: https://www.mathworks.com/products/matlab.html (accessed on 1 September 2021).
  11. Box, M.J. A comparison of several current optimization methods, and the use of transformations in constrained problems. Comput. J. 1966, 9, 67–77.
  12. Levy, A.V.; Guerra, V. On the Optimization of Constrained Functions: Comparison of Sequential Gradient-Restoration Algorithm and Gradient-Projection Algorithm; American Elsevier Publishing Company: Amsterdam, The Netherlands, 1976.
  13. Schittkowski, K.; Zillober, C.; Zotemantel, R. Numerical Comparison of Nonlinear Programming Algorithms for Structural Optimization. Struct. Optim. 1994, 7, 1–19.
  14. George, G.; Raimond, K. A Survey on Optimization Algorithms for Optimizing the Numerical Functions. Int. J. Comput. Appl. 2013, 61, 41–46. Available online: https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.303.5096&rep=rep1&type=pdf (accessed on 1 September 2021).
  15. Gearhart, J.L.; Adair, K.L.; Detry, R.J.; Durfee, J.D.; Jones, K.A.; Martin, N. Comparison of Open-Source Linear Programming Solvers; Technical Report; Sandia National Laboratory: Albuquerque, NM, USA, 2013.
  16. Kronqvist, J.; Bernal, D.E.; Lundell, A.; Grossmann, I.E. A Review and Comparison of Solvers for Convex MINLP. Optim. Eng. 2018, 20, 397–455.
  17. Saxena, P. Comparison of Linear and Nonlinear Programming Techniques for Animal Diet. Appl. Math. 2012, 1, 106–108.
  18. Pucher, H.; Stix, V. Comparison of Nonlinear Optimization Methods on a Multinomial Logit-Model in R. Int. Multi-Conf. Eng. Technol. Innov. 2008. Available online: https://www.iiis.org/cds2009/cd2009sci/imeti2009/PapersPdf/F216BU.pdf (accessed on 1 September 2021).
  19. Yuan, G.; Chang, K.; Hsieh, C.; Lin, C. A Comparison of Optimization Methods and Software for Large-scale L1-regularized Linear Classification. J. Mach. Learn. Res. 2010, 11, 3183–3234. Available online: https://www.jmlr.org/papers/volume11/yuan10c/yuan10c.pdf (accessed on 1 September 2021).
  20. Neumaier, A.; Shcherbina, O.; Huyer, W.; Vinko, T. A Comparison of Complete Global Optimization Solvers. Math. Program. 2005, 103, 335–356.
  21. Obayashi, S.; Tsukahara, T. Comparison of Optimization Algorithms for Aerodynamic Shape Design. AIAA J. 1997, 35, 1413–1415.
  22. McIlhagga, M.; Husbands, P.; Ives, R. A Comparison of Optimization Techniques for Integrated Manufacturing Planning and Scheduling. In Parallel Problem Solving from Nature; Voigt, H.M., Ebeling, W., Rechenberg, I., Schwefel, H.P., Eds.; Springer: Berlin/Heidelberg, Germany, 1996; Volume 1141, pp. 604–613.
  23. Haupt, R. Comparison Between Genetic and Gradient-Based Optimization Algorithms for Solving Electromagnetics Problems. IEEE Trans. Magn. 1995, 31, 1932–1935. [Google Scholar] [CrossRef]
  24. Hamdy, M.; Nguyen, A.; Hensen, J.L. A Performance Comparison of Multi-objective Optimization Algorithms for Solving Nearly-zero-energy-building Design Problems. Energy Build. 2016, 121, 57–71. [Google Scholar] [CrossRef] [Green Version]
  25. Frank, P.D.; Shubin, G.R. A Comparison of Optimization-Based Approaches for a Model Computational Aerodynamics Design Problem. J. Comput. Phys. 1992, 98, 74–89. [Google Scholar] [CrossRef]
  26. Karaboga, D.; Basturk, B. On the Performance of Artificial Bee Colony (ABC) Algorithm. Appl. Soft Comput. 2008, 8, 687–697. [Google Scholar] [CrossRef]
  27. Hedar, A.R. Global Optimization Test Problems. 2020. Available online: http://www-optima.amp.i.kyoto-u.ac.jp/member/student/hedar/Hedar_files/TestGO.htm (accessed on 1 September 2021).
  28. Schittkowski, K. Test Examples for Nonlinear Programming Codes; Technical Report; University of Bayreuth: Bayreuth, Germany, 2009; Available online: http://www.apmath.spbu.ru/cnsa/pdf/obzor/Schittkowski_Test_problem.pdf (accessed on 1 September 2021).
  29. Floudas, C.; Pardalos, P.M.; Adjiman, C.; Esposito, W.R.; Gümüs, Z.H.; Harding, S.T.; Klepeis, J.L.; Meyer, C.A.; Schweiger, C.A. Handbook of Test Problems in Local and Global Optimization. In Nonconvex Optimization and Its Applications; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1999; Volume 33. [Google Scholar] [CrossRef]
  30. Kielas-Jensen, C.; Cichella, V.; Berry, T.; Kaminer, I.; Walton, C.; Pascoal, A. Bernstein Polynomial-Based Method for Solving Optimal Trajectory Generation Problems. Sensors 2022, 22, 1869. [Google Scholar] [CrossRef]
  31. Bellman, R.E. Dynamic Programming; Princeton University Press: Princeton, NJ, USA, 1957. [Google Scholar]
  32. Clarke, F.H.; Ledyaev, Y.S.; Stern, R.J.; Wolenski, P.R. Nonsmooth Analysis and Control Theory; Springer Science & Business Media: New York, NY, USA, 2008; Volume 178. [Google Scholar]
  33. Bryson, A.E.; Ho, Y.C. Applied Optimal Control; Hemisphere: New York, NY, USA, 1975. [Google Scholar]
  34. Cichella, V.; Kaminer, I.; Walton, C.; Hovakimyan, N. Optimal motion planning for differentially flat systems using Bernstein approximation. IEEE Control. Syst. Lett. 2017, 2, 181–186. [Google Scholar] [CrossRef]
  35. Cichella, V.; Kaminer, I.; Walton, C.; Hovakimyan, N.; Pascoal, A. Consistency of Approximation of Bernstein Polynomial-Based Direct Methods for Optimal Control. Machines 2022, 10, 1132. [Google Scholar] [CrossRef]
  36. Cichella, V.; Kaminer, I.; Walton, C.; Hovakimyan, N.; Pascoal, A.M. Optimal multivehicle motion planning using bernstein approximants. IEEE Trans. Autom. Control 2020, 66, 1453–1467. [Google Scholar] [CrossRef]
  37. Nocedal, J.; Wright, S.J. Numerical Optimization; Springer Science+Business Media, LLC.: Berlin/Heidelberg, Germany, 2006; Available online: https://link.springer.com/book/10.1007/978-0-387-40065-5 (accessed on 1 September 2021).
  38. Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004; Available online: https://web.stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf (accessed on 1 September 2021).
  39. Yang, X. Nature-Inspired Metaheuristic Algorithms; Luniver Press: Bristol, UK, 2014. [Google Scholar] [CrossRef]
  40. Firm, T.O. Analytics and Optimization Software. 2021. Available online: https://minlp.com/baron-downloads (accessed on 1 September 2021).
  41. Tawarmalani, M.; Sahinidis, N.V. Global Optimization of Mixed-integer Nonlinear Programs: A Theoretical and Computational Study. Math. Program. 2004, 99, 563–591. [Google Scholar] [CrossRef]
  42. Sahinidis, N. BARON User Manual; The Optimization Firm LLC. Available online: http://www.minlp.com/ (accessed on 1 September 2021).
  43. COIN-OR. Computational Optimization Infrastructure for Operations Research. 2016. Available online: https://www.coin-or.org/ (accessed on 1 September 2021).
  44. Wächter, A.; Biegler, L. On the Implementation of an Interior-Point Filter Line-Search Algorithm for Large-Scale Nonlinear Programming. Math. Program. 2006, 106, 25–57. [Google Scholar] [CrossRef]
  45. COIN-OR. IPOPT. 2021. Available online: https://coin-or.github.io/Ipopt/ (accessed on 1 September 2021).
  46. FilterSD. Computational Infrastructure for Operations Research, COIN-OR Project. 2020. Available online: https://projects.coin-or.org/filterSD/export/19/trunk/filterSD.pdf (accessed on 1 September 2021).
  47. Fletcher, R.; Leyffer, S. User Manual for Filter SQP; Technical Report, University of Dundee, Department of Mathematics: Dundee, UK, 1999; Available online: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.139.7769&rep=rep1&type=pdf (accessed on 1 September 2021).
  48. MathWorks. Fminunc. 2020. Available online: https://www.mathworks.com/help/optim/ug/fminunc.html#but9q82-2_head (accessed on 1 September 2021).
  49. MathWorks. Quasi-Newton Algorithm. 2020. Available online: https://www.mathworks.com/help/optim/ug/unconstrained-nonlinear-optimization-algorithms.html#brnpcye (accessed on 1 September 2021).
  50. Svanberg, K. A Class of Globally Convergent Optimization Methods Based on Conservative Convex Separable Approximations. SIAM J. Optim. 2002, 12, 555–573. [Google Scholar] [CrossRef] [Green Version]
  51. Svanberg, K. MMA and GCMMA—Two Methods for Nonlinear Optimization. 2014. Available online: https://people.kth.se/~krille/mmagcmma.pdf (accessed on 1 September 2021).
  52. Svanberg, K. MMA and GCMMA Matlab Code. 2020. Available online: http://www.smoptit.se/ (accessed on 1 September 2021).
  53. Artelys Knitro. Artelys Optimization Solutions. 2021. Available online: https://www.artelys.com/solvers/knitro/ (accessed on 1 September 2021).
  54. Artelys Knitro. Artelys Knitro User’s Manual. 2021. Available online: https://www.artelys.com/docs/knitro//index.html (accessed on 1 September 2021).
  55. MIDACO-Solver, User Manual. MIDACO-SOLVER: Numerical High-Performance Optimization Software. 2021. Available online: http://www.midaco-solver.com/index.php/download (accessed on 1 September 2021).
  56. Schlueter, M.; Erb, S.O.; Gerdts, M.; Kemble, S.; Rückmann, J. MIDACO on MINLP space applications. Adv. Space Res. 2013, 51, 1116–1131. [Google Scholar] [CrossRef]
  57. Svanberg, K. The method of moving asymptotes—A new method for structural optimization. Int. J. Numer. Methods Eng. 1987, 24, 359–373. [Google Scholar] [CrossRef]
  58. Eloe, P.W.; Jonnalagadda, J. Quasilinearization and boundary value problems for Riemann-Liouville fractional differential equations. Electron. J. Differ. Equations 2019, 2019, 1–15. Available online: https://ejde.math.txstate.edu/Volumes/2019/58/eloe.pdf (accessed on 1 September 2021). [CrossRef] [Green Version]
  59. Yeo, B.P. A quasilinearization algorithm and its application to a manipulator problem. Int. J. Control 1974, 20, 623–640. [Google Scholar] [CrossRef]
  60. Miele, A.; Iyer, R.R. Modified quasilinearization method for solving nonlinear, two-point boundary-value problems. J. Math. Anal. Appl. 1971, 36, 674–692. [Google Scholar] [CrossRef]
  61. Miele, A.; Mangiavacchi, A.; Aggarwal, A.K. Modified quasilinearization algorithm for optimal control problems with nondifferential constraints. J. Optim. Theory Appl. 1974, 14, 529–556. [Google Scholar] [CrossRef]
  62. Fiala, J.; Kočvara, M.; Stingl, M. PENLAB: A MATLAB solver for nonlinear semidefinite optimization. arXiv 2013, arXiv:1311.5240. [Google Scholar] [CrossRef]
  63. Kocvara, M.; Stingl, M. PENNON—A generalized augmented Lagrangian method for semidefinite programming. In High Performance Algorithms and Software for Nonlinear Optimization; Applied, Optimization; Gianni Di Pillo, A.M., Ed.; Springer: Boston, MA, USA, 2003; Volume 82, pp. 303–321. [Google Scholar] [CrossRef]
  64. Polyak, R. Modified barrier functions (theory and methods). Math. Program. Ser. B 1992, 54, 177–222. [Google Scholar] [CrossRef]
  65. Kocvara, M. PENLAB. 2017. Available online: http://web.mat.bham.ac.uk/kocvara/penlab/ (accessed on 1 September 2021).
  66. Coker, E.M. Sequential Gradient-Restoration Algorithm for Optimal Control Problems with Control Inequality Constraints and General Boundary Conditions. Ph.D. Thesis, Rice University, Houston, TX, USA, 1985. Available online: https://www.proquest.com/dissertations-theses/sequential-gradient-restoration-algorithm-optimal/docview/303398376/se-2?accountid=15159 (accessed on 1 September 2021).
  67. Miele, A.; Huang, H.Y.; Heideman, J.C. Sequential gradient-restoration algorithm for the minimization of constrained functions—Ordinary and conjugate gradient versions. J. Optim. Theory Appl. 1969, 4, 213–243. [Google Scholar] [CrossRef]
  68. Gill, P.; Murray, W.; Saunders, M.; Drud, A.; Kalvelagen, E. SNOPT: An SQP algorithm for large-scale constrained optimization. SIAM Rev. 2001, 47, 99–131. [Google Scholar] [CrossRef]
  69. Gill, P.E.; Murray, W.; Saunders, M.A.; Wong, E. User’s Guide for SNOPT 7.7: Software for Large-Scale Nonlinear Programming; Center for Computational Mathematics Report CCoM 18-1, Department of Mathematics, University of California: San Diego, CA, USA, 2018. [Google Scholar]
  70. Ye, Y. SOLNP USERS’ GUIDE—A Nonlinear Optimization Program in MATLAB. 1989. Available online: https://web.stanford.edu/~yyye/matlab/manual.ps (accessed on 1 September 2021).
  71. Ye, Y. SOLNP. 2020. Available online: https://web.stanford.edu/~yyye/matlab.html (accessed on 1 September 2021).
  72. Lavezzi, G.; Guye, K.; Ciarcià, M. Nonlinear Programming Solvers for Unconstrained and Constrained Optimization Problems: A Benchmark Analysis. arXiv 2022, arXiv:2204.05297. [Google Scholar] [CrossRef]
  73. Kielas-Jensen, C.; Cichella, V. BeBOT: Bernstein polynomial toolkit for trajectory generation. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 3288–3293. [Google Scholar]
  74. Cichella, V.; Kaminer, I.; Walton, C.; Hovakimyan, N.; Pascoal, A. Bernstein approximation of optimal control problems. arXiv 2018, arXiv:1812.06132. [Google Scholar]
  75. Farouki, R.T. The Bernstein polynomial basis: A centennial retrospective. Comput. Aided Geom. Des. 2012, 29, 379–419. [Google Scholar] [CrossRef]
  76. Kalczynski, P.; Drezner, Z. Extremely non-convex optimization problems: The case of the multiple obnoxious facilities location. Optim. Lett. 2022, 16, 1153–1166. [Google Scholar] [CrossRef]
Figure 1. Example of solution for UAV Landing problem. (a) UAV position. (b) UAV control inputs.
Figure 2. Example of solution for UAV Formation Flying problem.
Table 1. APSO settings.
Settings | P&P | HA | QS
no. particles | 15 | 50 | 10
no. iterations | 300 | 500 | 100
γ | 0.9 | 0.95 | 0.95
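For readers unfamiliar with the method, the three settings above are the knobs of an accelerated particle swarm scheme in the spirit of Yang [39]. The MATLAB sketch below illustrates how they typically enter the update; it is not the authors' implementation, and the function name apso_sketch as well as the values of alpha and beta are illustrative assumptions.

```matlab
% Minimal sketch of an accelerated PSO step in the style of Yang [39].
% nParticles, nIter, gamma mirror Table 1; alpha0 and beta are assumed values.
function [gBest, fBest] = apso_sketch(fun, lb, ub, nParticles, nIter, gamma)
    nVar  = numel(lb);
    alpha = 0.3;                                   % initial randomization amplitude (assumed)
    beta  = 0.5;                                   % attraction toward the global best (assumed)
    X     = lb + rand(nParticles, nVar) .* (ub - lb);      % random initial swarm
    F     = arrayfun(@(i) fun(X(i,:)), (1:nParticles)');
    [fBest, iBest] = min(F);
    gBest = X(iBest, :);
    for t = 1:nIter
        % Accelerated update: pull every particle toward the global best and
        % add Gaussian noise whose amplitude decays geometrically with gamma.
        X = (1 - beta) * X + beta * gBest + alpha * gamma^t .* randn(nParticles, nVar);
        X = min(max(X, lb), ub);                   % keep particles inside the bounds
        F = arrayfun(@(i) fun(X(i,:)), (1:nParticles)');
        [fMin, iMin] = min(F);
        if fMin < fBest
            fBest = fMin; gBest = X(iMin, :);
        end
    end
end
```

Called with the P&P column, e.g., apso_sketch(@(x) sum(x.^2), -5*ones(1,3), 5*ones(1,3), 15, 300, 0.9), the randomization amplitude shrinks by the factor γ at every iteration, which is what the third row of the table controls.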
Table 2. BARON settings.
Settings | P&P | HA | QS
EpsA | 10^-6 | 10^-10 | 10^-3
EpsR | 10^-4 | 10^-10 | 10^-3
AbsConFeasTol | 10^-5 | 10^-10 | 10^-3
Table 3. FMINCON/FMINUNC settings.
Settings | P&P | HA | QS
FMINCON
StepTolerance | 10^-10 | 10^-10 | 10^-6
ConstraintTolerance | 10^-6 | 10^-10 | 10^-3
FunctionTolerance | 10^-6 | 10^-10 | 10^-3
OptimalityTolerance | 10^-6 | 10^-10 | 10^-3
FMINUNC (quasi-newton)
StepTolerance | 10^-6 | 10^-12 | 10^-6
FunctionTolerance | 10^-6 | 10^-12 | 10^-3
OptimalityTolerance | 10^-6 | 10^-12 | 10^-3
FMINUNC (trust-region)
StepTolerance | 10^-6 | 10^-12 | 10^-6
FunctionTolerance | 10^-6 | 10^-12 | 10^-3
OptimalityTolerance | 10^-6 | 10^-6 | 10^-3
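As an illustration of how the tolerances in Table 3 are applied, the MATLAB snippet below sets the high accuracy (HA) column through optimoptions and runs FMINCON and FMINUNC on a toy quadratic. It is a minimal sketch: the objective, constraint, and starting point are placeholders and are not part of the benchmark suite.

```matlab
% High accuracy (HA) settings from Table 3 applied to FMINCON and FMINUNC.
optsCon = optimoptions('fmincon', 'Algorithm', 'sqp', ...
    'StepTolerance', 1e-10, 'ConstraintTolerance', 1e-10, ...
    'FunctionTolerance', 1e-10, 'OptimalityTolerance', 1e-10);

optsUnc = optimoptions('fminunc', 'Algorithm', 'quasi-newton', ...
    'StepTolerance', 1e-12, 'FunctionTolerance', 1e-12, ...
    'OptimalityTolerance', 1e-12);

% Placeholder test problem: quadratic objective with one nonlinear inequality.
obj = @(x) (x(1) - 1)^2 + (x(2) - 2)^2;
con = @(x) deal(x(1) + x(2) - 2, []);   % returns [c, ceq] with c(x) <= 0, no equalities
x0  = [0; 0];

xCon = fmincon(obj, x0, [], [], [], [], [], [], con, optsCon);  % constrained run
xUnc = fminunc(obj, x0, optsUnc);                               % unconstrained run
```

The P&P and QS columns are obtained by swapping in the corresponding tolerance values; options left unspecified keep MATLAB's defaults.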
Table 4. GCMMA/MMA settings.
Settings | P&P | HA | QS
epsimin | 10^-7 | 10^-10 | 10^-3
maxoutit | 80 | 150 | 30
Table 5. KNITRO settings.
Settings | P&P | HA | QS
xtol | 10^-6 | 10^-10 | 10^-3
ftol | 10^-6 | 10^-10 | 10^-3
opttol | 10^-6 | 10^-10 | 10^-3
opttol_abs | 10^-6 | 10^-10 | 10^-3
feastol | 10^-6 | 10^-10 | 10^-3
feastol_abs | 10^-6 | 10^-10 | 10^-3
Table 6. MIDACO settings.
Settings | P&P | HA | QS
maxeval | 50,000 | 150,000 | 10,000
Table 7. MQA settings.
Settings | P&P | HA | QS
ε1 | 10^-5 | 10^-8 | 10^-2
ε2 | 10^-4 | 10^-5 | 10^-3
Table 8. PENLAB settings.
Settings | P&P | HA | QS
max_inner_iter | 100 | 1000 | 25
max_outer_iter | 100 | 1000 | 25
mpenalty_min | 10^-6 | 10^-9 | 10^-3
inner_stop_limit | 10^-2 | 10^-9 | 10^-1
outer_stop_limit | 10^-6 | 10^-9 | 10^-3
kkt_stop_limit | 10^-4 | 10^-6 | 10^-2
unc_dir_stop_limit | 10^-2 | 10^-9 | 10^-1
Table 9. SGRA settings.
Settings | P&P | HA | QS
ε1 | 10^-9 | 10^-10 | 10^-8
ε2 | 10^-4 | 10^-6 | 10^-2
Table 10. SNOPT settings.
Settings | P&P | HA | QS
major_iterations_limit | 1000 | 10,000 | 100
minor_iterations_limit | 500 | 5000 | 100
major_feasibility_tolerance | 10^-6 | 10^-12 | 10^-3
major_optimality_tolerance | 10^-6 | 10^-12 | 10^-3
minor_feasibility_tolerance | 10^-6 | 10^-12 | 10^-3
Table 11. SOLNP settings. Tuning values for the HA scenario are divided for unconstrained (left-side) and constrained (right-side) problems.
Settings | P&P | HA | QS
ρ | 1 | 1 | 1
maj | 10 | 500 ∣ 10 | 10
min | 10 | 500 ∣ 10 | 10
δ | 10^-5 | 10^-10 ∣ 10^-6 | 10^-3
ϵ | 10^-4 | 10^-12 ∣ 10^-7 | 10^-3
Table 12. SQA settings.
Settings | P&P | HA | QS
ε1 | 10^-5 | 10^-8 | 10^-2
ε2 | 10^-4 | 10^-5 | 10^-3
Table 13. All unconstrained problems, plug and play (P&P) settings. Solvers ranked with respect to convergence rate.
Ranking | Solver | Ē [%] | σ̄ [%] | γ̄ [%] | CPŪ [s] | Free
1 | BARON (auto) | 2.766 × 10^-6 | 1.448 × 10^-31 | 94.7 | 0.208 | No
2 | BARON (ipopt) | 2.766 × 10^-6 | 1.443 × 10^-31 | 94.7 | 0.205 | No
3 | BARON (sd) | 8.469 × 10^-5 | 1.743 × 10^-9 | 94.7 | 0.216 | No
4 | BARON (sqp) | 1.690 × 10^-7 | 2.821 × 10^-20 | 94.7 | 0.195 | No
5 | PENLAB | 1.016 × 10^-3 | 5.340 × 10^-37 | 88.5 | 0.0125 | Yes
6 | MIDACO | 1.435 × 10^-1 | 1.445 × 10^-1 | 88.4 | 0.349 | No
7 | SNOPT | 7.008 × 10^-3 | 2.444 × 10^-2 | 73.8 | 0.0071 | No
8 | FMINUNC (trust-region) | 7.836 × 10^-2 | 2.068 × 10^-2 | 68.8 | 0.0153 | No
9 | KNITRO (sqp) | 8.139 × 10^-2 | 9.254 × 10^-2 | 60.8 | 0.049 | No
10 | KNITRO (interior-point/D) | 1.045 × 10^-1 | 1.279 × 10^-1 | 60.3 | 0.019 | No
11 | KNITRO (interior-point/CG) | 1.045 × 10^-1 | 1.066 × 10^-1 | 59.9 | 0.017 | No
12 | KNITRO (active-set) | 1.163 × 10^-1 | 1.409 × 10^-1 | 59.8 | 0.017 | No
13 | FMINUNC (quasi-newton) | 8.643 × 10^-2 | 1.669 × 10^-1 | 52.8 | 0.0045 | No
14 | SQA | 2.362 × 10^-1 | 1.383 × 10^-1 | 52.5 | 0.0005 | Yes
15 | MQA | 2.031 × 10^-1 | 8.748 × 10^-2 | 51.7 | 0.1345 | Yes
16 | SOLNP | 4.648 × 10^-1 | 1.908 × 10^-1 | 48.2 | 0.0097 | Yes
17 | SGRA | 5.921 × 10^-1 | 8.627 × 10^-2 | 40.8 | 0.2227 | Yes
Table 14. All unconstrained problems, high accuracy (HA) settings. Solvers ranked with respect to mean error.
Ranking | Solver | Ē [%] | σ̄ [%] | γ̄ [%] | CPŪ [s] | Free
1 | BARON (sqp) | 1.690 × 10^-7 | 2.821 × 10^-20 | 94.7 | 0.194 | No
2 | BARON (auto) | 2.107 × 10^-7 | 1.448 × 10^-31 | 94.7 | 0.208 | No
3 | BARON (ipopt) | 2.107 × 10^-7 | 1.443 × 10^-31 | 94.7 | 0.206 | No
4 | BARON (sd) | 1.433 × 10^-6 | 2.408 × 10^-13 | 94.7 | 0.301 | No
5 | PENLAB | 4.042 × 10^-6 | 8.944 × 10^-42 | 88.5 | 0.0121 | Yes
6 | SOLNP | 9.420 × 10^-4 | 7.900 × 10^-4 | 69.1 | 0.0095 | Yes
7 | SNOPT | 1.260 × 10^-3 | 1.298 × 10^-3 | 74.2 | 0.0099 | No
8 | MQA | 3.160 × 10^-3 | 5.835 × 10^-5 | 52.1 | 0.1520 | Yes
9 | FMINUNC (quasi-newton) | 3.526 × 10^-3 | 8.852 × 10^-4 | 59.2 | 0.0062 | No
10 | SQA | 3.984 × 10^-3 | 9.000 × 10^-5 | 53.2 | 0.0003 | Yes
11 | FMINUNC (trust-region) | 1.860 × 10^-2 | 1.423 × 10^-2 | 68.8 | 0.0238 | No
12 | KNITRO (interior-point/D) | 7.130 × 10^-2 | 1.921 × 10^-1 | 67.8 | 0.021 | No
13 | KNITRO (active-set) | 7.200 × 10^-2 | 1.883 × 10^-1 | 67.8 | 0.022 | No
14 | KNITRO (interior-point/CG) | 7.467 × 10^-2 | 1.381 × 10^-1 | 68.0 | 0.022 | No
15 | MIDACO | 7.754 × 10^-2 | 7.188 × 10^-2 | 92.0 | 1.035 | No
16 | KNITRO (sqp) | 1.034 × 10^-1 | 1.785 × 10^-1 | 68.9 | 0.074 | No
17 | SGRA | 2.709 × 10^-1 | 1.335 × 10^-1 | 44.9 | 0.2555 | Yes
Table 15. All unconstrained problems, quick solution (QS) settings. Solvers ranked with respect to mean CPU time.
Ranking | Solver | Ē [%] | σ̄ [%] | γ̄ [%] | CPŪ [s] | Free
1 | SQA | 1.964 × 10^-1 | 1.609 × 10^-1 | 43.3 | 0.0002 | Yes
2 | FMINUNC (quasi-newton) | 6.076 × 10^-1 | 8.157 × 10^-1 | 33.8 | 0.0024 | No
3 | SNOPT | 1.581 × 10^-1 | 1.367 × 10^-1 | 66.4 | 0.0040 | No
4 | SOLNP | 5.357 × 10^-1 | 3.847 × 10^-1 | 41.2 | 0.0093 | Yes
5 | FMINUNC (trust-region) | 1.924 × 10^-1 | 1.997 × 10^-1 | 49.3 | 0.0108 | No
6 | PENLAB | 5.452 × 10^-5 | 5.623 × 10^-39 | 84.6 | 0.0118 | Yes
7 | KNITRO (interior-point/D) | 4.555 × 10^-1 | 4.975 × 10^-1 | 45.5 | 0.014 | No
8 | KNITRO (interior-point/CG) | 4.851 × 10^-1 | 3.918 × 10^-1 | 44.2 | 0.014 | No
9 | KNITRO (active-set) | 4.927 × 10^-1 | 4.737 × 10^-1 | 44.1 | 0.014 | No
10 | KNITRO (sqp) | 5.821 × 10^-1 | 5.660 × 10^-1 | 46.9 | 0.022 | No
11 | MIDACO | 2.455 × 10^-2 | 2.985 × 10^-2 | 74.6 | 0.070 | No
12 | MQA | 2.405 × 10^-1 | 2.930 × 10^-1 | 42.2 | 0.1819 | Yes
13 | BARON (sqp) | 1.690 × 10^-7 | 2.821 × 10^-20 | 94.7 | 0.198 | No
14 | BARON (ipopt) | 2.602 × 10^-6 | 1.443 × 10^-31 | 94.7 | 0.204 | No
15 | BARON (auto) | 2.602 × 10^-6 | 1.448 × 10^-31 | 94.7 | 0.205 | No
16 | BARON (sd) | 3.929 × 10^-6 | 1.846 × 10^-9 | 89.5 | 0.206 | No
17 | SGRA | 8.640 × 10^-1 | 2.211 × 10^-1 | 23.8 | 0.3033 | Yes
Table 16. All constrained problems, plug and play (P&P) settings. Solvers ranked with respect to convergence rate.
Ranking | Solver | Ē [%] | σ̄ [%] | γ̄ [%] | CPŪ [s] | Free
1 | BARON (auto) | 1.299 × 10^-1 | 2.141 × 10^-7 | 92.0 | 1.497 | No
2 | BARON (ipopt) | 1.298 × 10^-1 | 2.894 × 10^-7 | 92.0 | 1.767 | No
3 | BARON (sd) | 1.296 × 10^-1 | 5.178 × 10^-7 | 92.0 | 1.379 | No
4 | BARON (sqp) | 1.298 × 10^-1 | 2.332 × 10^-7 | 92.0 | 1.412 | No
5 | KNITRO (interior-point/D) | 1.782 × 10^-1 | 2.035 × 10^-1 | 77.7 | 0.034 | No
6 | KNITRO (interior-point/CG) | 1.756 × 10^-1 | 2.085 × 10^-1 | 77.5 | 0.033 | No
7 | FMINCON (interior-point) | 1.985 × 10^-1 | 2.413 × 10^-1 | 75.9 | 0.0271 | No
8 | KNITRO (sqp) | 1.851 × 10^-1 | 2.102 × 10^-1 | 75.7 | 0.160 | No
9 | SNOPT | 1.689 × 10^-1 | 2.010 × 10^-1 | 72.1 | 0.0040 | No
10 | FMINCON (active-set) | 1.795 × 10^-1 | 2.123 × 10^-1 | 71.9 | 0.0204 | No
11 | FMINCON (sqp-legacy) | 1.893 × 10^-1 | 2.429 × 10^-1 | 69.4 | 0.0111 | No
12 | KNITRO (active-set) | 1.994 × 10^-1 | 2.702 × 10^-1 | 72.2 | 0.070 | No
13 | FMINCON (sqp) | 1.908 × 10^-1 | 2.446 × 10^-1 | 69.3 | 0.0093 | No
14 | MIDACO | 7.348 × 10^-1 | 3.500 × 10^-1 | 66.9 | 0.353 | No
15 | SOLNP | 3.243 × 10^-1 | 3.211 × 10^-1 | 48.1 | 0.0095 | Yes
16 | GCMMA | 4.490 × 10^-1 | 3.742 × 10^-1 | 45.7 | 0.9681 | Yes
17 | MMA | 7.188 × 10^-1 | 5.743 × 10^-1 | 44.1 | 0.5856 | Yes
18 | APSO | 1.512 | 1.025 | 39.2 | 0.1772 | Yes
19 | PENLAB | 1.127 × 10^-4 | 3.258 × 10^-41 | 31.0 | 0.0379 | Yes
20 | SGRA | 6.360 × 10^-1 | 7.011 × 10^-1 | 30.3 | 0.9815 | Yes
21 | MQA | 5.125 × 10^-1 | 3.460 × 10^-1 | 20.8 | 3.1559 | Yes
22 | SQA | 3.990 × 10^-1 | 5.778 × 10^-1 | 20.2 | 3.1822 | Yes
Table 17. All constrained problems, high accuracy (HA) settings. Solvers ranked with respect to mean error.
Ranking | Solver | Ē [%] | σ̄ [%] | γ̄ [%] | CPŪ [s] | Free
1 | PENLAB | 1.502 × 10^-4 | 6.711 × 10^-39 | 31.0 | 0.0488 | Yes
2 | BARON (auto) | 1.943 × 10^-3 | 5.777 × 10^-18 | 92.0 | 2.056 | No
3 | BARON (sd) | 1.943 × 10^-3 | 1.405 × 10^-16 | 92.0 | 2.469 | No
4 | BARON (sqp) | 1.943 × 10^-3 | 4.745 × 10^-17 | 92.0 | 2.041 | No
5 | BARON (ipopt) | 8.074 × 10^-2 | 9.457 × 10^-10 | 90.8 | 3.568 | No
6 | SNOPT | 1.689 × 10^-1 | 2.010 × 10^-1 | 72.4 | 0.0069 | No
7 | KNITRO (interior-point/CG) | 1.754 × 10^-1 | 2.085 × 10^-1 | 77.7 | 0.040 | No
8 | FMINCON (active-set) | 1.770 × 10^-1 | 2.082 × 10^-1 | 72.3 | 0.0214 | No
9 | KNITRO (interior-point/D) | 1.782 × 10^-1 | 2.035 × 10^-1 | 78.0 | 0.039 | No
10 | FMINCON (sqp-legacy) | 1.857 × 10^-1 | 2.365 × 10^-1 | 69.7 | 0.0110 | No
11 | KNITRO (sqp) | 1.872 × 10^-1 | 2.120 × 10^-1 | 75.6 | 0.209 | No
12 | FMINCON (sqp) | 1.881 × 10^-1 | 2.388 × 10^-1 | 69.4 | 0.0082 | No
13 | KNITRO (active-set) | 1.951 × 10^-1 | 2.791 × 10^-1 | 73.4 | 0.078 | No
14 | FMINCON (interior-point) | 1.985 × 10^-1 | 2.413 × 10^-1 | 75.9 | 0.0326 | No
15 | SQA | 2.754 × 10^-1 | 2.838 × 10^-1 | 20.1 | 3.1838 | Yes
16 | SOLNP | 2.949 × 10^-1 | 3.106 × 10^-1 | 44.8 | 0.0112 | Yes
17 | MIDACO | 3.662 × 10^-1 | 2.938 × 10^-1 | 87.0 | 1.053 | No
18 | GCMMA | 5.112 × 10^-1 | 5.668 × 10^-1 | 45.2 | 1.0599 | Yes
19 | MQA | 5.358 × 10^-1 | 4.601 × 10^-1 | 20.8 | 3.2012 | Yes
20 | SGRA | 6.248 × 10^-1 | 7.673 × 10^-1 | 30.0 | 0.9632 | Yes
21 | MMA | 9.786 × 10^-1 | 5.748 × 10^-1 | 42.1 | 0.7101 | Yes
22 | APSO | 1.173 | 1.014 | 45.9 | 1.0168 | Yes
Table 18. All constrained problems, quick solution (QS) settings. Solvers ranked with respect to mean CPU time.
Ranking | Solver | Ē [%] | σ̄ [%] | γ̄ [%] | CPŪ [s] | Free
1 | SNOPT | 1.767 × 10^-1 | 2.045 × 10^-1 | 70.2 | 0.0027 | No
2 | FMINCON (sqp) | 1.916 × 10^-1 | 2.448 × 10^-1 | 69.0 | 0.0071 | No
3 | SOLNP | 4.790 × 10^-1 | 6.452 × 10^-1 | 46.6 | 0.0087 | Yes
4 | FMINCON (sqp-legacy) | 1.902 × 10^-1 | 2.431 × 10^-1 | 69.1 | 0.0092 | No
5 | FMINCON (active-set) | 2.850 × 10^-1 | 3.484 × 10^-1 | 68.9 | 0.0165 | No
6 | KNITRO (interior-point/D) | 2.221 × 10^-1 | 2.700 × 10^-1 | 72.7 | 0.024 | No
7 | FMINCON (interior-point) | 2.166 × 10^-1 | 2.554 × 10^-1 | 72.3 | 0.0262 | No
8 | KNITRO (interior-point/CG) | 3.082 × 10^-1 | 3.585 × 10^-1 | 72.2 | 0.028 | No
9 | KNITRO (active-set) | 2.212 × 10^-1 | 2.975 × 10^-1 | 69.3 | 0.030 | No
10 | PENLAB | 1.896 × 10^-4 | 1.454 × 10^-37 | 31.0 | 0.0323 | Yes
11 | APSO | 1.531 | 5.677 × 10^-1 | 35.2 | 0.0538 | Yes
12 | KNITRO (sqp) | 2.458 × 10^-1 | 3.221 × 10^-1 | 72.5 | 0.063 | No
13 | MIDACO | 9.737 × 10^-1 | 8.263 × 10^-1 | 53.0 | 0.070 | No
14 | SGRA | 8.774 × 10^-1 | 1.198 | 27.5 | 0.9369 | Yes
15 | MMA | 1.161 | 1.189 | 41.0 | 0.1324 | Yes
16 | GCMMA | 6.967 × 10^-1 | 4.256 × 10^-1 | 45.5 | 0.5574 | Yes
17 | BARON (sqp) | 1.395 × 10^-1 | 8.212 × 10^-5 | 92.0 | 0.869 | No
18 | BARON (auto) | 1.374 × 10^-1 | 4.647 × 10^-5 | 92.0 | 0.874 | No
19 | BARON (ipopt) | 1.369 × 10^-1 | 2.163 × 10^-5 | 92.0 | 0.880 | No
20 | BARON (sd) | 1.370 × 10^-1 | 1.291 × 10^-5 | 92.0 | 0.999 | No
21 | MQA | 5.844 × 10^-1 | 4.193 × 10^-1 | 20.8 | 3.1174 | Yes
22 | SQA | 3.316 × 10^-1 | 3.131 × 10^-1 | 20.1 | 3.1361 | Yes
Table 19. FMINCON (sqp) ad hoc settings.
Settings | Ad hoc
exitflag | 1
MaxFunctionEvaluations | 3,000,000
MaxIterations | 100,000
StepTolerance | 10^-14
ConstraintTolerance | 10^-14
FunctionTolerance | 10^-14
OptimalityTolerance | 10^-14
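The ad hoc configuration of Table 19 can be reproduced through optimoptions in the same way as Table 3. The sketch below assumes that the exitflag entry is meant as an acceptance test on FMINCON's exit flag (exitflag ≥ 1, i.e., a converged, feasible solution); the small test problem and its handles are placeholders, not the UAV formulations used in the paper.

```matlab
% Ad hoc FMINCON (sqp) settings from Table 19, with an acceptance check on the
% exit flag. Problem data below is a placeholder example.
opts = optimoptions('fmincon', 'Algorithm', 'sqp', ...
    'MaxFunctionEvaluations', 3e6, 'MaxIterations', 1e5, ...
    'StepTolerance', 1e-14, 'ConstraintTolerance', 1e-14, ...
    'FunctionTolerance', 1e-14, 'OptimalityTolerance', 1e-14);

costFun = @(x) x(3);                                    % placeholder cost (e.g., a time-like variable)
nonlcon = @(x) deal(x(1)^2 + x(2)^2 - x(3), []);        % placeholder inequality c(x) <= 0
x0 = [1; 1; 5];
lb = [-10; -10; 0];  ub = [10; 10; 10];

[x, fval, exitflag] = fmincon(costFun, x0, [], [], [], [], lb, ub, nonlcon, opts);
if exitflag < 1
    warning('FMINCON did not converge to a feasible point (exitflag = %d).', exitflag);
end
```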
Table 20. UAV Landing: problem dimensions, and max CPU time threshold.
n_o | 5 | 50 | 150
Variables | 48 | 460 | 1360
Equality constr. | 36 | 318 | 918
Inequality constr. | 55 | 306 | 906
CPU_max [s] | 20 | 75 | 500
Table 21. UAV landing problem solution for n_o = 5.
Ranking | Solver | Ē [%] | σ̄ [%] | γ̄ [%] | CPŪ [s]
1 | BARON (auto) | 4.076 × 10^-6 | 7.082 × 10^-12 | 100.0 | 0.499
2 | BARON (ipopt) | 4.076 × 10^-6 | 7.084 × 10^-12 | 100.0 | 0.475
3 | BARON (sqp) | 4.084 × 10^-6 | 7.084 × 10^-12 | 100.0 | 0.359
4 | FMINCON (sqp) | 4.084 × 10^-6 | 7.084 × 10^-12 | 100.0 | 0.061
5 | FMINCON (active-set) | 3.957 × 10^-6 | 7.372 × 10^-12 | 100.0 | 0.059
6 | SNOPT | 4.101 × 10^-6 | 7.348 × 10^-12 | 100.0 | 0.023
7 | KNITRO (active-set) | 4.084 × 10^-6 | 7.084 × 10^-12 | 100.0 | 0.072
8 | KNITRO (sqp) | 4.084 × 10^-6 | 7.083 × 10^-12 | 100.0 | 0.178
9 | KNITRO (interior-point/CG) | 2.144 × 10^-5 | 2.125 × 10^-10 | 98.0 | 0.145
10 | FMINCON (sqp-legacy) | 3.900 × 10^-6 | 6.299 × 10^-12 | 96.0 | 0.082
11 | BARON (sd) | 4.076 × 10^-1 | 1.144 | 78.0 | 0.920
12 | SOLNP | 4.015 × 10^-4 | 8.380 × 10^-7 | 76.0 | 0.073
13 | SGRA | 1.908 × 10^-4 | 5.960 × 10^-8 | 46.0 | 0.804
14 | KNITRO (interior-point/D) | 4.573 × 10^-6 | 9.251 × 10^-12 | 42.0 | 0.234
15 | FMINCON (interior-point) | 6.862 × 10^-1 | 1.392 | 26.0 | 0.142
- | APSO | > E_max | - | - | -
- | GCMMA | > E_max | - | - | -
- | MIDACO | > E_max | - | - | -
- | MMA | > E_max | - | - | -
- | MQA | > E_max | - | - | -
- | PENLAB | 3.863 × 10^-6 | 8.382 × 10^-12 | 100.0 | > CPU_max
- | SQA | > E_max | - | - | -
Table 22. UAV landing problem solution for n_o = 50.
Ranking | Solver | Ē [%] | σ̄ [%] | γ̄ [%] | CPŪ [s]
1 | BARON (auto) | 4.148 × 10^-6 | 9.990 × 10^-12 | 100.0 | 5.802
2 | BARON (ipopt) | 4.148 × 10^-6 | 9.990 × 10^-12 | 100.0 | 5.647
3 | BARON (sqp) | 4.141 × 10^-6 | 9.977 × 10^-12 | 100.0 | 5.115
4 | KNITRO (active-set) | 4.141 × 10^-6 | 9.977 × 10^-12 | 100.0 | 5.763
5 | SNOPT | 4.101 × 10^-6 | 1.000 × 10^-11 | 100.0 | 1.001
6 | KNITRO (sqp) | 4.125 × 10^-6 | 1.017 × 10^-11 | 98.0 | 16.402
7 | FMINCON (active-set) | 3.155 × 10^-4 | 3.774 × 10^-7 | 92.0 | 26.765
8 | KNITRO (interior-point/CG) | 9.074 × 10^-2 | 1.511 × 10^-1 | 84.0 | 46.202
9 | KNITRO (interior-point/D) | 1.351 × 10^-1 | 6.208 × 10^-1 | 68.0 | 23.080
10 | BARON (sd) | 6.585 × 10^-1 | 2.221 | 32.0 | 48.799
11 | FMINCON (sqp-legacy) | 4.991 × 10^-6 | 1.371 × 10^-11 | 58.0 | 24.868
12 | FMINCON (sqp) | 4.973 × 10^-6 | 1.325 × 10^-11 | 60.0 | 19.498
13 | SOLNP | 1.424 × 10^-2 | 4.673 × 10^-4 | 44.0 | 4.863
- | APSO | > E_max | - | - | -
- | FMINCON (interior-point) | > E_max | - | - | -
- | GCMMA | > E_max | - | - | -
- | MIDACO | > E_max | - | - | -
- | MMA | > E_max | - | - | -
- | SGRA | > E_max | - | - | -
Table 23. UAV landing problem solution for n_o = 150.
Ranking | Solver | Ē [%] | σ̄ [%] | γ̄ [%] | CPŪ [s]
1 | BARON (auto) | 3.679 × 10^-6 | 7.324 × 10^-12 | 100.0 | 70.685
2 | BARON (ipopt) | 3.678 × 10^-6 | 7.352 × 10^-12 | 100.0 | 70.194
3 | SNOPT | 3.766 × 10^-6 | 6.505 × 10^-12 | 100.0 | 7.396
4 | KNITRO (sqp) | 8.930 × 10^-2 | 2.950 × 10^-1 | 74.0 | 211.029
5 | KNITRO (active-set) | 6.241 × 10^-2 | 1.285 × 10^-1 | 66.0 | 186.258
6 | KNITRO (interior-point/D) | 4.840 × 10^-1 | 1.237 | 62.0 | 352.009
7 | SOLNP | 1.141 × 10^-1 | 1.665 × 10^-3 | 18.0 | 178.791
8 | BARON (sd) | 1.558 | 3.421 | 12.0 | 424.121
9 | FMINCON (sqp) | 3.041 × 10^-3 | 2.228 × 10^-5 | 12.0 | 411.148
10 | BARON (sqp) | 1.870 | 3.548 | 10.0 | 430.064
11 | FMINCON (active-set) | 1.196 × 10^-2 | 3.600 × 10^-4 | 8.0 | 386.340
12 | KNITRO (interior-point/CG) | 4.848 × 10^-4 | 4.374 × 10^-7 | 6.0 | 371.203
- | APSO | > E_max | - | - | -
- | FMINCON (interior-point) | > E_max | - | - | -
- | FMINCON (sqp-legacy) | > E_max | - | - | -
- | GCMMA | > E_max | - | - | -
- | MIDACO | > E_max | - | - | -
- | MMA | > E_max | - | - | -
Table 24. UAV Formation Flying: problem dimensions and max CPU time threshold.
n_v | 5 | 10 | 25
Variables | 271 | 541 | 1351
Equality constr. | 240 | 480 | 1200
Inequality constr. | 240 | 630 | 2700
CPU_max [s] | 20 | 75 | 200
Table 25. UAV formation flying problem solution for n_v = 5.
Ranking | Solver | Ē [%] | σ̄ [%] | γ̄ [%] | CPŪ [s]
1 | BARON (auto) | 3.681 × 10^-4 | 7.563 × 10^-8 | 100.0 | 0.755
2 | BARON (ipopt) | 3.681 × 10^-4 | 7.563 × 10^-8 | 100.0 | 0.795
3 | BARON (sd) | 3.681 × 10^-4 | 7.563 × 10^-8 | 100.0 | 0.727
4 | BARON (sqp) | 3.681 × 10^-4 | 7.560 × 10^-8 | 100.0 | 0.734
5 | FMINCON (sqp) | 3.681 × 10^-4 | 7.563 × 10^-8 | 100.0 | 0.397
6 | FMINCON (sqp-legacy) | 3.681 × 10^-4 | 7.563 × 10^-8 | 100.0 | 0.446
7 | KNITRO (active-set) | 3.681 × 10^-4 | 7.563 × 10^-8 | 100.0 | 0.761
8 | KNITRO (interior-point/D) | 3.681 × 10^-4 | 7.564 × 10^-8 | 100.0 | 0.498
9 | KNITRO (sqp) | 3.681 × 10^-4 | 7.563 × 10^-8 | 100.0 | 0.680
10 | KNITRO (interior-point/CG) | 3.678 × 10^-4 | 7.963 × 10^-8 | 98.0 | 0.862
11 | SNOPT | 3.632 × 10^-4 | 7.401 × 10^-8 | 96.0 | 0.071
12 | FMINCON (interior-point) | 1.400 | 2.390 | 76.0 | 0.594
13 | FMINCON (active-set) | 2.811 × 10^-4 | 3.926 × 10^-8 | 30.0 | 0.535
14 | SOLNP | 1.125 × 10^-3 | 8.907 × 10^-7 | 34.0 | 2.846
- | APSO | > E_max | - | - | -
- | GCMMA | > E_max | - | - | -
- | MIDACO | > E_max | - | - | -
- | MMA | > E_max | - | - | -
- | SGRA | > E_max | - | - | -
Table 26. UAV formation flying problem solution for n_v = 10.
Ranking | Solver | Ē [%] | σ̄ [%] | γ̄ [%] | CPŪ [s]
1 | BARON (auto) | 3.492 × 10^-4 | 6.299 × 10^-8 | 100.0 | 1.481
2 | BARON (ipopt) | 3.492 × 10^-4 | 6.299 × 10^-8 | 100.0 | 1.530
3 | BARON (sd) | 3.492 × 10^-4 | 6.299 × 10^-8 | 100.0 | 1.415
4 | BARON (sqp) | 3.492 × 10^-4 | 6.299 × 10^-8 | 100.0 | 1.432
5 | FMINCON (sqp) | 3.492 × 10^-4 | 6.299 × 10^-8 | 100.0 | 1.785
6 | FMINCON (sqp-legacy) | 3.492 × 10^-4 | 6.299 × 10^-8 | 100.0 | 2.047
7 | KNITRO (interior-point/D) | 3.492 × 10^-4 | 6.299 × 10^-8 | 100.0 | 3.172
8 | KNITRO (sqp) | 3.492 × 10^-4 | 6.299 × 10^-8 | 100.0 | 9.651
9 | KNITRO (interior-point/CG) | 3.400 × 10^-4 | 6.397 × 10^-8 | 96.0 | 6.333
10 | KNITRO (active-set) | 3.233 × 10^-4 | 4.785 × 10^-8 | 96.0 | 10.044
11 | SNOPT | 8.069 × 10^-4 | 1.035 × 10^-5 | 96.0 | 2.811
12 | FMINCON (active-set) | 3.803 × 10^-4 | 4.417 × 10^-8 | 36.0 | 1.963
13 | SOLNP | 1.064 × 10^-3 | 8.218 × 10^-7 | 12.0 | 16.013
- | APSO | > E_max | - | - | -
- | FMINCON (interior-point) | > E_max | - | - | -
- | GCMMA | > E_max | - | - | -
- | MIDACO | > E_max | - | - | -
- | MMA | > E_max | - | - | -
Table 27. UAV formation flying problem solution for n_v = 25.
Ranking | Solver | Ē [%] | σ̄ [%] | γ̄ [%] | CPŪ [s]
1 | BARON (auto) | 1.953 × 10^-4 | 6.562 × 10^-8 | 96.0 | 5.782
2 | BARON (ipopt) | 1.953 × 10^-4 | 6.562 × 10^-8 | 96.0 | 13.923
3 | BARON (sd) | 1.953 × 10^-4 | 6.563 × 10^-8 | 96.0 | 5.052
4 | BARON (sqp) | 1.953 × 10^-4 | 6.562 × 10^-8 | 96.0 | 5.299
5 | KNITRO (interior-point/D) | 3.117 × 10^-3 | 3.899 × 10^-4 | 90.0 | 119.081
6 | FMINCON (sqp) | 2.506 × 10^-2 | 2.302 × 10^-2 | 74.0 | 48.901
7 | FMINCON (sqp-legacy) | 2.506 × 10^-2 | 2.302 × 10^-2 | 74.0 | 54.861
8 | SNOPT | 2.083 × 10^-4 | 7.497 × 10^-8 | 44.0 | 45.408
9 | KNITRO (sqp) | 2.025 × 10^-4 | 7.837 × 10^-8 | 22.0 | 133.070
10 | FMINCON (active-set) | 4.678 × 10^-5 | 2.198 × 10^-9 | 10.0 | 21.308
11 | KNITRO (active-set) | 3.933 × 10^-5 | 4.017 × 10^-9 | 6.0 | 65.720
12 | KNITRO (interior-point/CG) | 1.484 | 0.000 | 2.0 | 165.145
- | APSO | > E_max | - | - | -
- | FMINCON (interior-point) | > E_max | - | - | -
- | GCMMA | > E_max | - | - | -
- | MIDACO | > E_max | - | - | -
- | MMA | > E_max | - | - | -
- | SOLNP | > E_max | - | - | -
