Numerical Optimization and Algorithms: 2nd Edition

A special issue of Algorithms (ISSN 1999-4893).

Deadline for manuscript submissions: 15 September 2024 | Viewed by 924

Special Issue Editors


Guest Editor
Faculty of Information Technology and Electrical Engineering, University of Oulu, 90570 Oulu, Finland
Interests: AI; machine learning; control algorithms; robotics; nonlinear optimization

Special Issue Information

Dear Colleagues,

Numerical algorithms and optimization are widely used across science and engineering, in fields such as physics, environmental science, mechanics, biology, data science, economics, and finance. The problems arising there are complex, highly non-linear, and difficult to predict. Over the last decade, computational problems have attracted much attention thanks to improved computer performance, new computing methods, and the rapid development of data science technology. These developments have, however, also raised various challenges, such as high non-linearity, the curse of dimensionality, uncertainty, and complexity. Addressing these challenges calls for new numerical methods drawing on graph theory, optimization, algebra, uncertainty quantification, data science and analysis, new algorithms and methods for solving differential equations, and probability and statistics.

This Special Issue deals with various numerical algorithms in the fields of both science and engineering.

Prof. Dr. Shuai Li
Prof. Dr. Dunhui Xiao
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • graph theory
  • optimization
  • algebra
  • uncertainty
  • data science
  • differential equations
  • probability and statistics
  • numerical algorithms

Published Papers (2 papers)

Research

30 pages, 568 KiB  
Article
The Algorithm of Gu and Eisenstat and D-Optimal Design of Experiments
by Alistair Forbes
Algorithms 2024, 17(5), 193; https://doi.org/10.3390/a17050193 - 02 May 2024
Viewed by 187
Abstract
This paper addresses the following problem: given m potential observations to determine n parameters, m > n, what is the best choice of n observations? The problem can be formulated as finding the n×n submatrix of the complete m×n observation matrix that has maximum determinant. An algorithm by Gu and Eisenstat for determining a strongly rank-revealing QR factorisation of a matrix can be adapted to address this latter formulation. The algorithm starts with an initial selection of n rows of the observation matrix and then performs a sequence of row interchanges, with the determinant of the current submatrix strictly increasing at each step until no further improvement can be made. The algorithm implements rank-one updating strategies, leading to a compact and efficient implementation. The algorithm does not necessarily determine the global optimum but provides a practical approach to designing an effective measurement strategy. In this paper, we describe how the Gu–Eisenstat algorithm can be adapted to address the problem of optimal experimental design and used with the QR algorithm with column pivoting to provide effective designs. We also describe implementations of sequential algorithms that add further measurements to optimise the information gain at each step. We illustrate performance on several metrology examples.
(This article belongs to the Special Issue Numerical Optimization and Algorithms: 2nd Edition)
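The row-interchange strategy described in the abstract can be sketched as a simple greedy local search. The version below is a minimal illustration, not the paper's method: it recomputes determinants from scratch rather than using the Gu–Eisenstat rank-one updates, and the function name `greedy_d_optimal` is an assumption for this sketch.

```python
import numpy as np

def greedy_d_optimal(A, seed=0):
    """Greedy row-interchange heuristic: pick n rows of an m x n
    observation matrix A so the selected square submatrix has large
    |determinant|. Swaps a selected row for an unselected one whenever
    the swap strictly increases |det|, until no swap helps."""
    m, n = A.shape
    rng = np.random.default_rng(seed)
    sel = list(rng.choice(m, size=n, replace=False))   # initial selection
    rest = [i for i in range(m) if i not in sel]
    best = abs(np.linalg.det(A[sel]))
    improved = True
    while improved:                                    # loop until a local optimum
        improved = False
        for i in range(n):
            for j in range(len(rest)):
                trial = sel.copy()
                trial[i] = rest[j]
                d = abs(np.linalg.det(A[trial]))
                if d > best * (1 + 1e-12):             # strict increase only
                    sel[i], rest[j] = rest[j], sel[i]
                    best = d
                    improved = True
    return sorted(sel), best
```

As the abstract notes, such a local search need not reach the global optimum; it terminates because |det| strictly increases at every accepted swap.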
17 pages, 323 KiB  
Article
Hybrid Newton-like Inverse Free Algorithms for Solving Nonlinear Equations
by Ioannis K. Argyros, Santhosh George, Samundra Regmi and Christopher I. Argyros
Algorithms 2024, 17(4), 154; https://doi.org/10.3390/a17040154 - 10 Apr 2024
Viewed by 510
Abstract
Iterative algorithms that require the, in general, computationally expensive inversion of linear operators are difficult to implement. For this reason, hybrid Newton-like algorithms without inverses are developed in this paper to solve Banach space-valued nonlinear equations. The inverses of the linear operator are replaced by a finite sum of fixed linear operators. Two types of convergence analysis are presented for these algorithms: semilocal and local. The Fréchet derivative of the operator in the equation is controlled by a majorant function. The semilocal analysis also relies on majorizing sequences. The celebrated contraction mapping principle is utilized to study the convergence of the Krasnoselskij-like algorithm. Numerical experimentation demonstrates that the new algorithms are essentially as effective but less expensive to implement. Although the new approach is demonstrated for Newton-like algorithms, it can be applied along the same lines to other single-step, multistep, or multipoint algorithms using inverses of linear operators.
(This article belongs to the Special Issue Numerical Optimization and Algorithms: 2nd Edition)
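The idea of trading an operator inverse for a finite sum of fixed linear operators can be illustrated with a truncated Neumann series. The sketch below is an assumption-laden illustration, not the authors' scheme: the function name, the scaling `alpha`, and the choice to freeze the Jacobian at the starting point are all choices made for this example.

```python
import numpy as np

def inverse_free_newton(F, J, x0, terms=8, tol=1e-10, max_iter=50):
    """Newton-like iteration without explicit operator inversion.

    The inverse of the frozen Jacobian A = J(x0) is replaced by the
    finite Neumann sum  B = alpha * sum_{k=0}^{terms-1} (I - alpha*A)^k,
    which approximates A^{-1} when the spectral radius of I - alpha*A
    is below 1. The iteration then proceeds chord-style: x <- x - B F(x).
    """
    x = np.asarray(x0, dtype=float)
    A = J(x)
    n = A.shape[0]
    alpha = 1.0 / np.linalg.norm(A, 2)   # crude scaling choice for this sketch
    M = np.eye(n) - alpha * A
    B = np.zeros((n, n))
    P = np.eye(n)
    for _ in range(terms):               # finite sum of fixed linear operators
        B += P
        P = P @ M
    B *= alpha
    for _ in range(max_iter):
        fx = F(x)
        if np.linalg.norm(fx) < tol:
            break
        x = x - B @ fx                   # no linear solve, only mat-vec products
    return x
```

Because B is fixed, each step costs only a matrix–vector product; convergence is linear rather than quadratic and requires the starting point to be close enough to the root, in the spirit of the semilocal analysis described above.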