Recurrent Neural Networks: Algorithm Design and Applications for Safety-Critical Systems

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Combinatorial Optimization, Graph, and Network Algorithms".

Deadline for manuscript submissions: 30 June 2024

Special Issue Editor


Dr. Grazziela Patrocinio Figueredo
Guest Editor
Advanced Digital Research Service, School of Computer Science, The University of Nottingham, Nottingham NG8 1BB, UK
Interests: machine learning; active learning; computational intelligence; big data; health diagnostics; anomalies; traffic; hot spots; bio-inspired computation; meta-learning; behaviour identification

Special Issue Information

Dear Colleagues,

Recurrent Neural Networks (RNNs) are a category of neural networks that capture temporal dynamic behaviour from data. They have been widely applied to sequential and time series data, with applications including natural language processing, prognostics and health management, healthcare, human behaviour detection, and other safety-critical systems. RNNs are distinguished by their memory mechanism: information from prior inputs is carried forward in the network's hidden state and influences subsequent outputs. Several variations and modifications of RNNs are now found in the literature, such as GRUs, LSTMs, and bidirectional RNNs, expanding both the domains of applicability and the effectiveness of these approaches on temporal data.
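As a brief illustration of this memory mechanism, the sketch below implements a vanilla RNN cell in plain NumPy (all names and dimensions are illustrative assumptions, not taken from any particular library): the hidden state h is fed back at every step, so earlier inputs shape later outputs.

```python
import numpy as np

def rnn_forward(x_seq, W_x, W_h, b):
    """Run a vanilla RNN cell over a sequence.

    x_seq: (T, input_dim) sequence of inputs
    W_x:   (hidden_dim, input_dim) input-to-hidden weights
    W_h:   (hidden_dim, hidden_dim) recurrent weights
    b:     (hidden_dim,) bias
    """
    hidden_dim = W_h.shape[0]
    h = np.zeros(hidden_dim)      # initial hidden state (the "memory")
    states = []
    for x_t in x_seq:
        # The previous hidden state h feeds back in, so earlier inputs
        # influence later outputs -- this is the memory mechanism.
        h = np.tanh(W_x @ x_t + W_h @ h + b)
        states.append(h)
    return np.stack(states)       # (T, hidden_dim)

# Toy usage: a random 5-step sequence of 3-dimensional inputs.
rng = np.random.default_rng(0)
T, input_dim, hidden_dim = 5, 3, 4
states = rnn_forward(
    rng.normal(size=(T, input_dim)),
    rng.normal(size=(hidden_dim, input_dim)) * 0.5,
    rng.normal(size=(hidden_dim, hidden_dim)) * 0.5,
    np.zeros(hidden_dim),
)
print(states.shape)  # (5, 4)
```

GRUs and LSTMs replace the single tanh update with gated updates that make this memory easier to preserve over long sequences.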

This Special Issue invites researchers to submit their recent advances in RNNs for safety-critical systems. Potential topics of interest include, but are not limited to:

  • Novel algorithms and applications;
  • Stacked and/or hybrid architectures (e.g., ConvLSTM);
  • RNNs for healthcare sensor data;
  • Exploration of RNNs and Transformers for fault detection and/or anomaly detection;
  • Novel algorithms that address uncertainty using Bayesian techniques, Gaussian processes, etc.;
  • Engineering applications of RNNs, such as prognostics and health management and remaining useful life prediction;
  • Multi-view RNNs;
  • Attention mechanisms;
  • RNN explanation/interpretation (see the sketch after this list).
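To make the last two topics concrete, here is a minimal NumPy sketch of dot-product attention pooling over the hidden states of a recurrent layer (the function names and the learned query vector are illustrative assumptions, not any specific published mechanism); the attention weights it returns are the kind of artefact used for RNN explanation/interpretation.

```python
import numpy as np

def softmax(z):
    z = z - z.max()               # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def attention_pool(hidden_states, query):
    """Dot-product attention over RNN hidden states.

    hidden_states: (T, hidden_dim) outputs of a recurrent layer
    query:         (hidden_dim,) learned query vector
    Returns a context vector plus the attention weights; the weights
    indicate which time steps drove the prediction.
    """
    scores = hidden_states @ query      # (T,) relevance of each step
    weights = softmax(scores)           # (T,) non-negative, sums to 1
    context = weights @ hidden_states   # (hidden_dim,) weighted summary
    return context, weights

# Toy usage with random states from a 5-step sequence.
rng = np.random.default_rng(1)
h = rng.normal(size=(5, 4))
context, weights = attention_pool(h, rng.normal(size=4))
print(weights.round(3), context.shape)
```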

Dr. Grazziela Patrocinio Figueredo
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form to submit your manuscript. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (1 paper)


Research

25 pages, 4698 KiB  
Article
Order-Based Schedule of Dynamic Topology for Recurrent Neural Network
by Diego Sanchez Narvaez, Carlos Villaseñor, Carlos Lopez-Franco and Nancy Arana-Daniel
Algorithms 2023, 16(5), 231; https://doi.org/10.3390/a16050231 - 28 Apr 2023
Abstract
It is well known that part of a neural network's capacity is determined by its topology and the employed training process. How a neural network should be designed, and how it should be updated every time new data are acquired, remains an open issue, since this is usually resolved through trial and error, based mainly on the experience of the designer. To address this issue, an algorithm that provides plasticity to recurrent neural networks (RNNs) applied to time series forecasting is proposed. A grow-and-prune decision-making paradigm is created, based on the calculation of the data's order, indicating in which situations during the re-training process (when new data are received) the network should increase or decrease its connections. The result is a dynamic architecture that can facilitate the design and implementation of the network, as well as improve its behavior. The proposed algorithm was tested on time series from the M4 forecasting competition, using Long Short-Term Memory (LSTM) models. Better results were obtained for most of the tests, with new models both larger and smaller than their static versions, showing an average improvement of up to 18%.
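The abstract only outlines the grow-and-prune paradigm; the paper itself defines the order-based decision rule. Purely as a hypothetical sketch of the general idea (magnitude-based pruning and random re-growth over a masked recurrent weight matrix; an assumption for illustration, not the authors' algorithm):

```python
import numpy as np

def update_topology(W, mask, grow, k, rng=None):
    """Hypothetical grow-or-prune step on a masked recurrent weight matrix.

    W:    (n, n) recurrent weights
    mask: (n, n) binary mask; 1 marks an active connection
    grow: if True, enable k inactive connections at random; otherwise
          disable the k active connections with the smallest magnitude.
    """
    rng = rng if rng is not None else np.random.default_rng()
    mask = mask.copy()
    if grow:
        candidates = np.flatnonzero(mask == 0)   # currently disabled entries
        k = min(k, candidates.size)
        if k:
            mask.flat[rng.choice(candidates, size=k, replace=False)] = 1
    else:
        active = np.flatnonzero(mask == 1)       # currently enabled entries
        order = np.argsort(np.abs(W.flat[active]))  # smallest weights first
        mask.flat[active[order[:k]]] = 0
    return W * mask, mask

# Toy usage: prune 5 of 36 connections after a re-training round.
rng = np.random.default_rng(2)
W = rng.normal(size=(6, 6))
mask = np.ones((6, 6), dtype=int)
W, mask = update_topology(W, mask, grow=False, k=5, rng=rng)
print(int(mask.sum()))  # 31 active connections remain
```

In a re-training loop, the grow flag would be driven by a signal such as the estimated order of the newly received data relative to the current network capacity.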
