Stochastic Algorithms and Their Applications

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Analysis of Algorithms and Complexity Theory".

Deadline for manuscript submissions: closed (30 April 2022) | Viewed by 36496

Special Issue Editor


Guest Editor
Applied Mathematics and Statistics, School of Medicine, Paris Descartes University
Interests: statistical modeling; convergence analysis of stochastic algorithms; statistics with Riemannian geometry; medical data analysis; decision support systems

Special Issue Information

Dear Colleagues,

Stochastic algorithms are at the core of machine learning and artificial intelligence. Stochastic gradient descent and expectation–maximization algorithms, among others, offer powerful tools to calibrate statistical models or deep networks. Studying them is of particular interest to provide guarantees on their results, to improve their convergence speed, and to optimize their use in machine learning problems.
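As a purely illustrative example of the simplest member of this family, the sketch below implements plain stochastic gradient descent for a least-squares regression; the data, learning rate, and number of passes are arbitrary assumptions and are not drawn from any contribution in this Special Issue.

```python
import numpy as np

def sgd_least_squares(X, y, lr=0.01, epochs=50, seed=0):
    """Plain stochastic gradient descent for the least-squares loss 0.5 * (x.w - y)^2."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):       # one pass over the shuffled data
            grad = (X[i] @ w - y[i]) * X[i]     # gradient of the single-sample loss
            w -= lr * grad                      # stochastic gradient step
    return w

# Toy usage: recover a known weight vector from noisy observations.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=200)
print(sgd_least_squares(X, y))                  # close to w_true
```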

The research fields of these algorithms are extremely diverse, ranging from computer vision (CV) to natural language processing (NLP), and targeting emerging applications, such as transport (for example, for autonomous vehicles) or medical data analysis (for example, to propose decision support systems).

We invite you to submit high-quality papers for this Special Issue on “Stochastic Algorithms and Their Applications”, with subjects covering a whole range of topics, from theory to applications. Both original contributions and review articles will be considered. Submitted articles may focus on any machine learning problem involving this (non-exhaustive) list of topics of interest:

  • Stochastic algorithm convergence, acceleration, optimization, etc.
  • Adaptation of state-of-the-art stochastic algorithms for new CV, NLP, and high-dimensional data problems
  • Application of these stochastic algorithms to environmental, transport, or medical data and questions

Prof. Stéphanie Allassonnière
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Stochastic algorithm convergence, acceleration, optimization, etc.
  • Adaptation of state-of-the-art stochastic algorithms for new CV, NLP, and high-dimensional data problems
  • Application of these stochastic algorithms to environmental, transport, or medical data and questions

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (11 papers)


Editorial


2 pages, 171 KiB  
Editorial
Special Issue: Stochastic Algorithms and Their Applications
by Stéphanie Allassonnière
Algorithms 2022, 15(9), 323; https://doi.org/10.3390/a15090323 - 9 Sep 2022
Viewed by 1387
Abstract
Stochastic algorithms are at the core of machine learning and artificial intelligence [...] Full article
(This article belongs to the Special Issue Stochastic Algorithms and Their Applications)

Research


34 pages, 33546 KiB  
Article
Topology Optimisation under Uncertainties with Neural Networks
by Martin Eigel, Marvin Haase and Johannes Neumann
Algorithms 2022, 15(7), 241; https://doi.org/10.3390/a15070241 - 12 Jul 2022
Cited by 1 | Viewed by 2195
Abstract
Topology optimisation is a mathematical approach relevant to different engineering problems where material in a defined domain is distributed in some optimal way, subject to a predefined cost function representing desired (e.g., mechanical) properties and constraints. The computation of such an optimal distribution depends on the numerical solution of some physical model (in our case linear elasticity), and robustness is achieved by introducing uncertainties into the model data, namely the forces acting on the structure and variations of the material stiffness, rendering the task high-dimensional and computationally expensive. To alleviate this computational burden, we develop two neural network (NN) architectures that are capable of predicting the gradient step of the optimisation procedure. Since state-of-the-art methods use adaptive mesh refinement, the neural networks are designed to use a sufficiently fine reference mesh such that only one training phase of the neural network suffices. As a first architecture, a convolutional neural network is adapted to the task. To include sequential information of the optimisation process, a recurrent neural network is constructed as a second architecture. A common 2D bridge benchmark is used to illustrate the performance of the proposed architectures. It is observed that the NN prediction of the gradient step clearly outperforms the classical optimisation method, in particular since larger iteration steps become viable. Full article
(This article belongs to the Special Issue Stochastic Algorithms and Their Applications)

16 pages, 1444 KiB  
Article
Mean Estimation on the Diagonal of Product Manifolds
by Mathias Højgaard Jensen and Stefan Sommer
Algorithms 2022, 15(3), 92; https://doi.org/10.3390/a15030092 - 10 Mar 2022
Cited by 2 | Viewed by 2808
Abstract
Computing sample means on Riemannian manifolds is typically computationally costly, as exemplified by computation of the Fréchet mean, which often requires finding minimizing geodesics to each data point for each step of an iterative optimization scheme. When closed-form expressions for geodesics are not available, this leads to a nested optimization problem that is costly to solve. The implied computational cost impacts applications in both geometric statistics and in geometric deep learning. The weighted diffusion mean offers an alternative to the weighted Fréchet mean. We show how the diffusion mean and the weighted diffusion mean can be estimated with a stochastic simulation scheme that does not require nested optimization. We achieve this by conditioning a Brownian motion in a product manifold to hit the diagonal at a predetermined time. We develop the theoretical foundation for the sampling-based mean estimation, we develop two simulation schemes, and we demonstrate the applicability of the method with examples of sampled means on two manifolds. Full article
(This article belongs to the Special Issue Stochastic Algorithms and Their Applications)
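As a rough illustration of the conditioning idea, the sketch below runs the construction in Euclidean space, where the diffusion mean reduces to the arithmetic mean: a Brownian motion in the product space is conditioned, via the usual bridge drift, to hit the diagonal at time T, and repeated hitting points are averaged. On a manifold the guided simulation schemes developed in the paper are required; the value of T, the step count, and the toy data are assumptions for illustration.

```python
import numpy as np

def diagonal_hitting_point(points, T=1.0, n_steps=200, rng=None):
    """One Euler-Maruyama path of a Brownian motion in the product space R^(n*d),
    conditioned to hit the diagonal {(x, ..., x)} at time T: every particle is
    pulled towards the current average of all particles (the bridge drift)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(points, dtype=float)                  # shape (n, d)
    dt = T / n_steps
    for k in range(n_steps - 1):                       # stop one step before T (the drift blows up)
        drift = (x.mean(axis=0) - x) / (T - k * dt)    # pull towards the diagonal
        x = x + drift * dt + np.sqrt(dt) * rng.normal(size=x.shape)
    return x.mean(axis=0)                              # the particles have (nearly) merged

data = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 3.0]])
rng = np.random.default_rng(0)
samples = np.array([diagonal_hitting_point(data, rng=rng) for _ in range(200)])
print(samples.mean(axis=0))   # approaches the arithmetic mean [1, 1] in the Euclidean case
```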

37 pages, 1000 KiB  
Article
Deterministic Approximate EM Algorithm; Application to the Riemann Approximation EM and the Tempered EM
by Thomas Lartigue, Stanley Durrleman and Stéphanie Allassonnière
Algorithms 2022, 15(3), 78; https://doi.org/10.3390/a15030078 - 25 Feb 2022
Cited by 5 | Viewed by 3441
Abstract
The Expectation Maximisation (EM) algorithm is widely used to optimise non-convex likelihood functions with latent variables. Many authors modified its simple design to fit more specific situations. For instance, the Expectation (E) step has been replaced by Monte Carlo (MC), Markov Chain Monte Carlo or tempered approximations, etc. Most of the well-studied approximations belong to the stochastic class. By comparison, the literature is lacking when it comes to deterministic approximations. In this paper, we introduce a theoretical framework, with state-of-the-art convergence guarantees, for any deterministic approximation of the E step. We analyse theoretically and empirically several approximations that fit into this framework. First, for intractable E-steps, we introduce a deterministic version of MC-EM using Riemann sums. This straightforward method requires no hyper-parameter fine-tuning and is useful when the dimensionality is low enough that an MC-EM is not warranted. Then, we consider the tempered approximation, borrowed from the Simulated Annealing literature and used to escape local extrema. We prove that the tempered EM verifies the convergence guarantees for a wider range of temperature profiles than previously considered. We showcase empirically how new non-trivial profiles can more successfully escape adversarial initialisations. Finally, we combine the Riemann and tempered approximations into a method that accomplishes both their purposes. Full article
(This article belongs to the Special Issue Stochastic Algorithms and Their Applications)
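A minimal sketch of the tempered E-step for a one-dimensional, two-component Gaussian mixture is given below: the responsibilities are raised to the power 1/T and renormalised, with T decreasing towards 1. The geometric temperature profile, the crude initialisation, and the toy data are illustrative assumptions, not the profiles analysed in the paper.

```python
import numpy as np
from scipy.stats import norm

def tempered_em_gmm(x, n_components=2, n_iter=100, seed=0):
    """EM for a 1-D Gaussian mixture with a tempered E-step: responsibilities
    are raised to the power 1/T and renormalised, with T decreasing to 1."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, n_components, replace=False)    # deliberately crude initialisation
    sigma = np.full(n_components, x.std())
    pi = np.full(n_components, 1.0 / n_components)
    for it in range(n_iter):
        T = 1.0 + 4.0 * 0.9 ** it                      # illustrative profile, T -> 1
        dens = pi * norm.pdf(x[:, None], mu, sigma)    # (n, K) joint densities
        resp = dens ** (1.0 / T)                       # tempered E-step
        resp /= resp.sum(axis=1, keepdims=True)
        nk = resp.sum(axis=0)                          # M-step: weighted ML updates
        pi, mu = nk / len(x), (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])
print(tempered_em_gmm(x))   # mixture weights, means, and standard deviations
```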

14 pages, 776 KiB  
Article
No Cell Left behind: Automated, Stochastic, Physics-Based Tracking of Every Cell in a Dense, Growing Colony
by Huy Pham, Emile R. Shehada, Shawna Stahlheber, Kushagra Pandey and Wayne B. Hayes
Algorithms 2022, 15(2), 51; https://doi.org/10.3390/a15020051 - 30 Jan 2022
Cited by 1 | Viewed by 3047
Abstract
Motivation: Precise tracking of individual cells—especially tracking the family lineage, for example in a developing embryo—has widespread applications in biology and medicine. Due to significant noise in microscope images, existing methods have difficulty precisely tracking cell activities. These difficulties often require human intervention to resolve. Humans are helpful because our brain naturally and automatically builds a simulation “model” of any scene that we observe. Because we understand simple truths about the world—for example, cells can move and divide, but they cannot instantaneously move vast distances—this model “in our heads” helps us to severely constrain the possible interpretations of what we see, allowing us to easily distinguish signal from noise, and track the motion of cells even in the presence of extreme levels of noise that would completely confound existing automated methods. Results: Here, we mimic the ability of the human brain by building an explicit computer simulation model of the scene. Our simulated cells are programmed to allow movement and cell division consistent with reality. At each video frame, we stochastically generate millions of nearby “Universes” and evolve them stochastically to the next frame. We then find and fit the best universes to reality by minimizing the residual between the real image frame and a synthetic image of the simulation. The rule-based simulation puts extremely stringent constraints on possible interpretations of the data, allowing our system to perform far better than existing methods even in the presence of extreme levels of image noise. We demonstrate the viability of this method by accurately tracking every cell in a colony that grows from 4 to over 300 individuals, doing about as well as a human can in the difficult task of tracking cell lineages. Full article
(This article belongs to the Special Issue Stochastic Algorithms and Their Applications)
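The simulate-and-match idea can be caricatured in a few lines: at each frame, propose many candidate "universes" by stochastically evolving the current state under a motion model that forbids large jumps, render each candidate, and keep the one with the smallest residual against the observed frame. The one-dimensional point, the Gaussian motion model, and all parameters below are toy assumptions; the paper's system simulates whole colonies of growing, dividing cells and compares synthetic images with real microscopy frames.

```python
import numpy as np

def track_point(frames, n_universes=500, motion_std=0.5, seed=0):
    """Toy simulate-and-match tracker for one 1-D point: propose candidate
    'universes' from a motion model and keep the candidate whose synthetic
    observation best matches the real frame (smallest residual)."""
    rng = np.random.default_rng(seed)
    estimate, track = frames[0], [frames[0]]
    for obs in frames[1:]:
        candidates = estimate + rng.normal(0.0, motion_std, n_universes)
        estimate = candidates[np.argmin((candidates - obs) ** 2)]
        track.append(estimate)
    return np.array(track)

rng = np.random.default_rng(1)
truth = np.linspace(0.0, 20.0, 60)                       # a point moving at constant speed
frames = truth + rng.normal(0.0, 0.3, truth.size)        # small per-frame noise ...
frames[rng.random(truth.size) < 0.15] += 10.0            # ... plus occasional gross glitches
track = track_point(frames)
print(np.abs(track - truth).mean(), np.abs(frames - truth).mean())
# The motion-model constraint largely rejects the glitches, so the tracking error
# typically stays well below the raw per-frame error.
```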

13 pages, 478 KiB  
Article
Mixed Poisson Regression Models with Varying Dispersion Arising from Non-Conjugate Mixing Distributions
by George Tzougas, Natalia Hong and Ryan Ho
Algorithms 2022, 15(1), 16; https://doi.org/10.3390/a15010016 - 30 Dec 2021
Cited by 1 | Viewed by 3163
Abstract
In this article, we present a class of mixed Poisson regression models with varying dispersion, arising from mixing distributions that are non-conjugate to the Poisson, for modelling overdispersed claim counts in non-life insurance. The proposed family of models combined with the adopted modelling framework can provide sufficient flexibility for dealing with different levels of overdispersion. For illustrative purposes, the Poisson-lognormal regression model with regression structures on both its mean and dispersion parameters is employed for modelling claim count data from a motor insurance portfolio. Maximum likelihood estimation is carried out via an expectation–maximization type algorithm, which is developed for the proposed family of models and is demonstrated to perform satisfactorily. Full article
(This article belongs to the Special Issue Stochastic Algorithms and Their Applications)
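The sketch below illustrates the non-conjugacy issue in the simplest possible setting: an intercept-only Poisson-lognormal model whose mixing integral has no closed form and is approximated here by Gauss–Hermite quadrature inside a direct maximum-likelihood fit. This stand-in is an assumption for exposition only; the paper instead develops an expectation–maximization type algorithm and places regression structures on both the mean and the dispersion.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def poisson_lognormal_nll(params, y, n_quad=30):
    """Negative log-likelihood of counts y under y_i | z_i ~ Poisson(exp(mu + sigma*z_i)),
    z_i ~ N(0, 1); the intractable mixing integral is approximated by quadrature."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    nodes, weights = np.polynomial.hermite.hermgauss(n_quad)
    z = np.sqrt(2.0) * nodes                    # N(0, 1) quadrature nodes ...
    w = weights / np.sqrt(np.pi)                # ... and the matching weights
    lam = np.exp(mu + sigma * z)                # Poisson rate at every node
    logpmf = y[:, None] * np.log(lam) - lam - gammaln(y[:, None] + 1.0)
    lik = np.exp(logpmf) @ w                    # quadrature over the mixing density
    return -np.log(lik + 1e-300).sum()

rng = np.random.default_rng(0)
y = rng.poisson(np.exp(0.5 + 0.8 * rng.normal(size=2000)))   # simulated overdispersed counts
fit = minimize(poisson_lognormal_nll, x0=[0.0, 0.0], args=(y,), method="Nelder-Mead")
print(fit.x[0], np.exp(fit.x[1]))               # estimates of mu and sigma, near (0.5, 0.8)
```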

14 pages, 7662 KiB  
Article
Modeling of the 5G-Band Patch Antennas Using ANNs under the Uncertainty of the Geometrical Design Parameters Associated with the Manufacturing Process
by Piotr Górniak
Algorithms 2022, 15(1), 7; https://doi.org/10.3390/a15010007 - 24 Dec 2021
Cited by 3 | Viewed by 3419
Abstract
In the paper, the author deals with modeling the stochastic behavior of ordinary patch antennas in terms of the mean and standard deviation of their reflection coefficient |S11| under the geometrical uncertainty associated with their manufacturing process. An Artificial Neural Network is used to model the stochastic reflection coefficient of the antennas. Polynomial Chaos Expansion and FDTD computations are used to obtain the training and testing data for the Artificial Neural Network. For the first time, the author uses his analytical transformations to reduce the required number of highly time-consuming FDTD simulations for a given set of nominal values of the design parameters of the ordinary patch antenna. The analysis is performed for the n257 and n258 frequency bands (24.5–28.7 GHz). The probability distributions of the design parameters are extracted from measurement results obtained for a series of manufactured patch antenna arrays at three different frequencies in the C, X, and Ka bands. Patch antennas are chosen as the subject of this analysis because of their popularity in the antenna literature and because their simple form limits the time required to compute the training and testing data for the Artificial Neural Network. Full article
(This article belongs to the Special Issue Stochastic Algorithms and Their Applications)
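As a cartoon of the Polynomial Chaos Expansion step, the snippet below expands a scalar response of a single standard-normal input in probabilists' Hermite polynomials and reads the mean and standard deviation directly off the coefficients, checking them against Monte Carlo. The toy response function standing in for an expensive FDTD-computed |S11| value is an assumption; the paper works with several geometrical parameters and feeds the resulting statistics to an Artificial Neural Network.

```python
import numpy as np
from math import factorial

def pce_mean_std(response, degree=6, n_train=200, seed=0):
    """Polynomial Chaos Expansion of a scalar response of one standard-normal
    input: fit probabilists' Hermite polynomials by least squares, then read the
    mean and standard deviation from the coefficients
    (E[He_j * He_k] = k! * delta_jk under N(0, 1))."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=n_train)                        # samples of the uncertain input
    V = np.polynomial.hermite_e.hermevander(z, degree)  # (n_train, degree+1) design matrix
    coef, *_ = np.linalg.lstsq(V, response(z), rcond=None)
    mean = coef[0]
    var = sum(coef[k] ** 2 * factorial(k) for k in range(1, degree + 1))
    return mean, np.sqrt(var)

# Toy response surface standing in for an expensive FDTD-computed |S11| value.
response = lambda z: np.sin(0.3 + 0.2 * z) + 0.05 * z ** 2
print(pce_mean_std(response))
# Monte Carlo check of the same statistics:
zz = np.random.default_rng(1).normal(size=200_000)
print(response(zz).mean(), response(zz).std())
```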

16 pages, 1013 KiB  
Article
Accelerating Symmetric Rank-1 Quasi-Newton Method with Nesterov’s Gradient for Training Neural Networks
by S. Indrapriyadarsini, Shahrzad Mahboubi, Hiroshi Ninomiya, Takeshi Kamio and Hideki Asai
Algorithms 2022, 15(1), 6; https://doi.org/10.3390/a15010006 - 24 Dec 2021
Cited by 8 | Viewed by 3820
Abstract
Gradient-based methods are popularly used in training neural networks and can be broadly categorized into first- and second-order methods. Second-order methods have been shown to have better convergence than first-order methods, especially in solving highly nonlinear problems. The BFGS quasi-Newton method is the most commonly studied second-order method for neural network training. Recent methods have been shown to speed up the convergence of the BFGS method using Nesterov's accelerated gradient and momentum terms. The SR1 quasi-Newton method, though less commonly used in training neural networks, is known to have interesting properties and provide good Hessian approximations when used with a trust-region approach. Thus, this paper aims to investigate accelerating the Symmetric Rank-1 (SR1) quasi-Newton method with Nesterov's gradient for training neural networks, and to briefly discuss its convergence. The performance of the proposed method is evaluated on function approximation and image classification problems. Full article
(This article belongs to the Special Issue Stochastic Algorithms and Their Applications)
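The snippet below sketches one plausible way to couple the two ingredients, an SR1 update of the inverse-Hessian approximation and Nesterov's lookahead gradient, on a toy convex quadratic. The step size, momentum, skip rule, and the way the secant pair is formed from lookahead points are assumptions for illustration; they are not claimed to reproduce the exact update proposed in the paper.

```python
import numpy as np

def sr1_nesterov(grad, x0, lr=0.1, mu=0.9, n_iter=500, eps=1e-8):
    """Sketch of an SR1 quasi-Newton iteration driven by Nesterov's lookahead
    gradient: the search direction is -H g evaluated at the momentum lookahead
    point, and the inverse-Hessian approximation H is maintained with the
    symmetric rank-1 secant update (skipped when its denominator is ill-defined)."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    H = np.eye(len(x))                           # inverse-Hessian approximation
    g_prev = xla_prev = None
    for _ in range(n_iter):
        x_la = x + mu * v                        # Nesterov lookahead point
        g = grad(x_la)
        if g_prev is not None:
            s, y = x_la - xla_prev, g - g_prev   # secant pair from lookahead points
            r = s - H @ y
            if abs(r @ y) > eps * np.linalg.norm(r) * np.linalg.norm(y):
                H = H + np.outer(r, r) / (r @ y) # SR1 update
        v = mu * v - lr * (H @ g)                # momentum step along the SR1 direction
        x = x + v
        g_prev, xla_prev = g, x_la
    return x

# Toy convex quadratic: f(x) = 0.5 x^T A x - b^T x, minimised at A^{-1} b = [0.2, 0.4].
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
print(sr1_nesterov(lambda x: A @ x - b, [2.0, -1.0]))
```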

22 pages, 930 KiB  
Article
Approximately Optimal Control of Nonlinear Dynamic Stochastic Problems with Learning: The OPTCON Algorithm
by Dmitri Blueschke, Viktoria Blueschke-Nikolaeva and Reinhard Neck
Algorithms 2021, 14(6), 181; https://doi.org/10.3390/a14060181 - 8 Jun 2021
Cited by 8 | Viewed by 2466
Abstract
OPTCON is an algorithm for the optimal control of nonlinear stochastic systems which is particularly applicable to economic models. It delivers approximate numerical solutions to optimum control (dynamic optimization) problems with a quadratic objective function for nonlinear economic models with additive and multiplicative (parameter) uncertainties. The algorithm was first programmed in C# and then in MATLAB. It allows for deterministic and stochastic control, the latter with open loop (OPTCON1), passive learning (open-loop feedback, OPTCON2), and active learning (closed-loop, dual, or adaptive control, OPTCON3) information patterns. The mathematical aspects of the algorithm with open-loop feedback and closed-loop information patterns are presented in more detail in this paper. Full article
(This article belongs to the Special Issue Stochastic Algorithms and Their Applications)

18 pages, 688 KiB  
Article
An Effective Decomposition-Based Stochastic Algorithm for Solving the Permutation Flow-Shop Scheduling Problem
by Mehrdad Amirghasemi
Algorithms 2021, 14(4), 112; https://doi.org/10.3390/a14040112 - 30 Mar 2021
Cited by 6 | Viewed by 2946
Abstract
This paper presents an effective stochastic algorithm that embeds a large neighborhood decomposition technique into a variable neighborhood search for solving the permutation flow-shop scheduling problem. The algorithm first constructs a permutation as a seed using a recursive application of the extended two-machine problem. In this method, the jobs are recursively decomposed into two separate groups, and, for each group, an optimal permutation is calculated based on the extended two-machine problem. Then the overall permutation, which is obtained by integrating the sub-solutions, is improved through the application of a variable neighborhood search technique. Like the first technique, this one is also based on the decomposition paradigm and can find an optimal arrangement for a subset of jobs. In the employed large neighborhood search, the concept of the critical path has been used to help the decomposition process avoid unfruitful computation and arrange only promising contiguous parts of the permutation. In this fashion, the algorithm leaves untouched those parts of the permutation which already have high-quality arrangements and concentrates on modifying other parts. The results of computational experiments on the benchmark instances indicate the procedure works effectively, demonstrating that solutions within a very short distance of the best-known solutions are calculated within seconds on a typical personal computer. In terms of the required running time to reach a high-quality solution, the procedure outperforms some well-known metaheuristic algorithms in the literature. Full article
(This article belongs to the Special Issue Stochastic Algorithms and Their Applications)
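For readers unfamiliar with the problem, the sketch below evaluates the makespan of a permutation flow shop with the standard recurrence and applies a naive stochastic swap descent to a random instance. It only illustrates the objective and the stochastic-search flavour; the instance is invented, and the search is far simpler than the decomposition-based variable neighborhood scheme developed in the paper.

```python
import numpy as np

def makespan(perm, p):
    """Completion time of the last job on the last machine for a permutation flow
    shop: C[i][j] = max(C[i-1][j], C[i][j-1]) + p[job_i][machine_j]."""
    c = np.zeros(p.shape[1])
    for job in perm:
        c[0] += p[job, 0]
        for m in range(1, p.shape[1]):
            c[m] = max(c[m], c[m - 1]) + p[job, m]
    return c[-1]

def stochastic_swap_descent(p, n_iter=5000, seed=0):
    """Toy stochastic local search over job permutations: repeatedly swap two
    random positions and keep the swap if the makespan does not increase."""
    rng = np.random.default_rng(seed)
    perm = list(range(p.shape[0]))
    best = makespan(perm, p)
    for _ in range(n_iter):
        i, j = rng.integers(0, len(perm), 2)
        perm[i], perm[j] = perm[j], perm[i]
        cand = makespan(perm, p)
        if cand <= best:
            best = cand
        else:
            perm[i], perm[j] = perm[j], perm[i]   # undo the swap
    return perm, best

p = np.random.default_rng(1).integers(1, 100, size=(20, 5))   # 20 jobs x 5 machines
print(stochastic_swap_descent(p))
```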

Review


30 pages, 937 KiB  
Review
A Review on the Performance of Linear and Mixed Integer Two-Stage Stochastic Programming Software
by Juan J. Torres, Can Li, Robert M. Apap and Ignacio E. Grossmann
Algorithms 2022, 15(4), 103; https://doi.org/10.3390/a15040103 - 22 Mar 2022
Cited by 14 | Viewed by 5422
Abstract
This paper presents a tutorial on the state-of-the-art software for the solution of two-stage (mixed-integer) linear stochastic programs and provides a list of software designed for this purpose. The methodologies are classified according to the decomposition alternatives and the types of the variables in the problem. We review the fundamentals of Benders decomposition, dual decomposition and progressive hedging, as well as possible improvements and variants. We also present extensive numerical results to underline the properties and performance of each algorithm using software implementations, including DECIS, FORTSP, PySP, and DSP. Finally, we discuss the strengths and weaknesses of each methodology and propose future research directions. Full article
(This article belongs to the Special Issue Stochastic Algorithms and Their Applications)
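To make the problem class concrete, the snippet below writes a tiny two-stage stochastic LP (a newsvendor-style example with three demand scenarios) in its extensive, deterministic-equivalent form and solves it directly. The numbers are invented for illustration; the point of the decomposition methods surveyed in the review (Benders, dual decomposition, progressive hedging) is precisely to avoid building this extensive form when the number of scenarios is large.

```python
import numpy as np
from scipy.optimize import linprog

# Two-stage newsvendor-style problem in extensive form: buy x units at unit cost 1
# before demand is known, then sell y_s <= min(x, d_s) units at price 1.5 in scenario s.
probs = np.array([0.3, 0.5, 0.2])
demands = np.array([50.0, 80.0, 120.0])
n_s = len(probs)

# Variables [x, y_1, ..., y_S]; minimise  x - sum_s p_s * 1.5 * y_s.
c = np.concatenate(([1.0], -1.5 * probs))
# Coupling constraints y_s - x <= 0 (cannot sell more than was bought).
A_ub = np.hstack([-np.ones((n_s, 1)), np.eye(n_s)])
b_ub = np.zeros(n_s)
# Bounds: x >= 0 and 0 <= y_s <= d_s (cannot sell more than the scenario demand).
bounds = [(0, None)] + [(0, d) for d in demands]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x[0], res.fun)   # here-and-now order quantity and optimal expected cost
```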
