Numerical Optimization and Algorithms: 2nd Edition

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Analysis of Algorithms and Complexity Theory".

Deadline for manuscript submissions: 15 September 2024

Special Issue Editors


Guest Editor
Faculty of Information Technology and Electrical Engineering, University of Oulu, 90570 Oulu, Finland
Interests: AI; machine learning; control algorithms; robotics; nonlinear optimization

Special Issue Information

Dear Colleagues,

Numerical algorithms and optimization are widely used throughout science and engineering, in fields such as physics, environmental science, mechanics, biology, data science, economics, and finance. The problems arising in these fields are complex, highly nonlinear, and difficult to predict. Over the last decade, computational problems have attracted growing attention, driven by improved computer performance, better computing methods, and the rapid development of data science technology. These developments have, however, also raised various issues and challenges, such as high nonlinearity, the curse of dimensionality, uncertainty, and complexity. Addressing these challenges urgently requires new numerical algorithms and methods, drawing on graph theory, optimization, algebra, uncertainty quantification, data science and analysis, the numerical solution of differential equations, and probability and statistics.

This Special Issue deals with numerical algorithms across both science and engineering.

Prof. Dr. Shuai Li
Prof. Dr. Dunhui Xiao
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • graph theory
  • optimization
  • algebra
  • uncertainty
  • data science
  • differential equations
  • probability and statistics
  • numerical algorithms


Published Papers (3 papers)


Research

27 pages, 956 KiB  
Article
Solving Least-Squares Problems via a Double-Optimal Algorithm and a Variant of the Karush–Kuhn–Tucker Equation for Over-Determined Systems
by Chein-Shan Liu, Chung-Lun Kuo and Chih-Wen Chang
Algorithms 2024, 17(5), 211; https://doi.org/10.3390/a17050211 - 14 May 2024
Abstract
A double optimal solution (DOS) of a least-squares problem Ax = b, A ∈ ℝ^(q×n) with q ≥ n, is derived in an (m+1)-dimensional varying affine Krylov subspace (VAKS); two minimization techniques exactly determine the m+1 expansion coefficients of the solution x in the VAKS. The minimal-norm solution can be obtained automatically, regardless of whether the linear system is consistent or inconsistent. A new double optimal algorithm (DOA) is created; it saves computation time by inverting only an m×m positive definite matrix at each iteration step, where m ≪ min(n, q). The properties of the DOA are investigated and an estimate of the residual error is provided. The residual norms are proven to be strictly decreasing over the iterations; hence, the DOA is absolutely convergent. Numerical tests reveal the efficiency of the DOA for solving least-squares problems, and the DOA is applicable regardless of whether q < n or q > n. The Moore–Penrose inverse matrix is also addressed by adopting the DOA; the accuracy and efficiency of the proposed method are demonstrated. The (m+1)-dimensional VAKS differs from the traditional m-dimensional affine Krylov subspace used in the conjugate gradient (CG)-type iterative algorithms CGNR (or CGLS) and CGNE (or Craig's method) for solving least-squares problems with q > n. We propose a variant of the Karush–Kuhn–Tucker equation and apply the partial-pivoting Gaussian elimination method to solve it; this variant outperforms the original Karush–Kuhn–Tucker equation, the CGNR, and the CGNE for solving over-determined linear systems. Our main contribution is a double-optimization-based iterative algorithm in a varying affine Krylov subspace that solves least-squares problems effectively and accurately, even for a dense and ill-conditioned matrix A with q ≫ n or q ≪ n.
(This article belongs to the Special Issue Numerical Optimization and Algorithms: 2nd Edition)
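As a point of reference for the abstract above: the classical Karush–Kuhn–Tucker (augmented-system) formulation of least squares, which the authors' variant builds on, can be solved directly by Gaussian elimination with partial pivoting. The sketch below is a minimal NumPy illustration of that classical formulation only; it is not the authors' DOA or their modified KKT equation, and the function name `lstsq_kkt` is ours.

```python
# A minimal sketch of the classical KKT (augmented-system) formulation of
# least squares -- not the authors' DOA or their KKT variant.
import numpy as np

def lstsq_kkt(A, b):
    """Solve min ||Ax - b||_2 for an over-determined A (q x n, q > n)
    via the classical KKT augmented system
        [ I   A ] [ r ]   [ b ]
        [ A^T 0 ] [ x ] = [ 0 ],   r = b - Ax,
    using LU with partial pivoting (what numpy.linalg.solve calls)."""
    q, n = A.shape
    K = np.block([[np.eye(q), A],
                  [A.T, np.zeros((n, n))]])
    rhs = np.concatenate([b, np.zeros(n)])
    sol = np.linalg.solve(K, rhs)   # partial-pivoting Gaussian elimination
    r, x = sol[:q], sol[q:]
    return x, r

# Usage on a small random over-determined system:
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))
b = rng.standard_normal(8)
x, r = lstsq_kkt(A, b)
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))  # True
```

The augmented system is symmetric indefinite and of size (q+n)×(q+n), which is one reason Krylov-subspace methods such as the DOA become attractive for large problems.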

30 pages, 571 KiB  
Article
The Algorithm of Gu and Eisenstat and D-Optimal Design of Experiments
by Alistair Forbes
Algorithms 2024, 17(5), 193; https://doi.org/10.3390/a17050193 - 2 May 2024
Abstract
This paper addresses the following problem: given m potential observations to determine n parameters, with m > n, what is the best choice of n observations? The problem can be formulated as finding the n×n submatrix of the complete m×n observation matrix that has maximum determinant. An algorithm by Gu and Eisenstat for determining a strong rank-revealing QR factorisation of a matrix can be adapted to address this latter formulation. The algorithm starts with an initial selection of n rows of the observation matrix and then performs a sequence of row interchanges, with the determinant of the current submatrix strictly increasing at each step until no further improvement can be made. The algorithm implements rank-one updating strategies, which lead to a compact and efficient implementation. The algorithm does not necessarily determine the global optimum but provides a practical approach to designing an effective measurement strategy. In this paper, we describe how the Gu–Eisenstat algorithm can be adapted to the problem of optimal experimental design and used with the QR algorithm with column pivoting to provide effective designs. We also describe implementations of sequential algorithms that add further measurements to optimise the information gain at each step. We illustrate performance on several metrology examples.
(This article belongs to the Special Issue Numerical Optimization and Algorithms: 2nd Edition)
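To make the row-interchange idea concrete, here is an illustrative greedy exchange search for the maximum-|determinant| n×n submatrix. It recomputes each determinant from scratch for readability, whereas the efficiency of the Gu–Eisenstat approach comes from rank-one updates; the helper name `exchange_design` is ours, and this sketch should not be read as the paper's implementation.

```python
# Illustrative greedy row-exchange search for the max-|det| n x n submatrix.
# Determinants are recomputed naively; the Gu-Eisenstat approach instead
# uses rank-one updates for efficiency.
import numpy as np
from itertools import product

def exchange_design(X, rows=None):
    """Greedily swap selected/unselected rows of X (m x n) while |det|
    of the selected n x n submatrix strictly increases."""
    m, n = X.shape
    sel = list(rows) if rows is not None else list(range(n))
    best = abs(np.linalg.det(X[sel]))
    improved = True
    while improved:
        improved = False
        out = [j for j in range(m) if j not in sel]
        for i, j in product(range(n), out):
            trial = sel.copy()
            trial[i] = j                       # exchange one row
            d = abs(np.linalg.det(X[trial]))
            if d > best * (1 + 1e-12):         # strict increase (with tolerance)
                sel, best, improved = trial, d, True
                break                          # rescan after each accepted swap
    return sel, best

rng = np.random.default_rng(1)
X = rng.standard_normal((10, 4))
rows, det_abs = exchange_design(X)
print(rows, det_abs)
```

Because the determinant strictly increases at each accepted swap, the search terminates, but, as the abstract notes, only at a local optimum rather than a guaranteed global one.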

17 pages, 323 KiB  
Article
Hybrid Newton-like Inverse Free Algorithms for Solving Nonlinear Equations
by Ioannis K. Argyros, Santhosh George, Samundra Regmi and Christopher I. Argyros
Algorithms 2024, 17(4), 154; https://doi.org/10.3390/a17040154 - 10 Apr 2024
Abstract
Iterative algorithms that require the inversion of linear operators, which is in general computationally expensive, are difficult to implement. For this reason, hybrid Newton-like algorithms without inverses are developed in this paper to solve Banach space-valued nonlinear equations. The inverses of the linear operator are replaced by a finite sum of fixed linear operators. Two types of convergence analysis are presented for these algorithms: semi-local and local. The Fréchet derivative of the operator in the equation is controlled by a majorant function. The semi-local analysis also relies on majorizing sequences. The celebrated contraction mapping principle is utilized to study the convergence of the Krasnoselskij-like algorithm. Numerical experimentation demonstrates that the new algorithms are essentially as effective as their Newton-like counterparts but less expensive to implement. Although the new approach is demonstrated for Newton-like algorithms, it can be applied along the same lines to other single-step, multistep, or multipoint algorithms that use inverses of linear operators.
(This article belongs to the Special Issue Numerical Optimization and Algorithms: 2nd Edition)
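One plausible concrete reading of the inverse-free idea is sketched below, assuming the finite sum of fixed linear operators is a truncated Neumann series standing in for the Jacobian inverse; the paper's exact choice of operators may differ, and the names `neumann_inverse_apply` and `newton_like_inverse_free` are ours.

```python
# Sketch: Newton-like iteration with no operator inversion, assuming a
# truncated Neumann series replaces the Jacobian inverse (the paper's
# exact finite sum of fixed operators may differ).
import numpy as np

def neumann_inverse_apply(A, v, omega, terms=8):
    """Approximate A^{-1} v by the finite sum
       omega * sum_{k=0}^{terms-1} (I - omega*A)^k v,
    which converges when ||I - omega*A|| < 1; no inverse is formed."""
    M = np.eye(len(v)) - omega * A
    s, p = np.zeros_like(v), v.copy()
    for _ in range(terms):
        s += p
        p = M @ p
    return omega * s

def newton_like_inverse_free(F, J, x0, omega, tol=1e-10, max_iter=50):
    """Newton-like iteration with a frozen Jacobian A = J(x0) whose
    inverse is replaced by the fixed finite-sum operator above."""
    x = x0.copy()
    A = J(x0)                       # fixed linear operator, never inverted
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        x = x - neumann_inverse_apply(A, Fx, omega)
    return x

# Example: F(x) = (x0^2 + x1 - 3, x0 + x1^2 - 5), with a root at (1, 2).
F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
J = lambda x: np.array([[2*x[0], 1.0], [1.0, 2*x[1]]])
x = newton_like_inverse_free(F, J, np.array([1.5, 1.5]), omega=0.2)
print(x, F(x))
```

With ω chosen so that ‖I − ωA‖ < 1 (here the frozen Jacobian has eigenvalues 2 and 4, so ω = 0.2 suffices), the finite sum approximates A⁻¹ without ever forming an inverse, and the iteration behaves like a frozen-Jacobian Newton method.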