
Memory Storage Capacity in Recurrent Neural Networks

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: closed (18 September 2022) | Viewed by 12977

Special Issue Editors


Guest Editor
Center for Life Nano Science (CLNS) - Istituto Italiano di Tecnologia, Viale Regina Elena, 291, 00161 Rome, Italy
Interests: artificial and biological neural networks; computational neuroscience; systems neuroscience; information theory; sensory processing; decision making

Guest Editor
Center for Life Nano Science, Italian Institute of Technology, 00161 Roma, Italy
Interests: neural networks; collective behavior; machine learning; computational models; combinatorial problems

Guest Editor
Department of Physics, Sapienza University, Piazzale Aldo Moro 5, 00185 Rome, Italy; Center for Life Nanoscience, Istituto Italiano di Tecnologia, Viale Regina Elena 291, 00161 Rome, Italy
Interests: molecular biophysics; image recognition; molecular interaction; graph theory; machine learning in bioscience

Special Issue Information

Dear Colleagues,

A neural network (NN) is an ensemble of simple, highly interconnected analog-signal-processing units. Recurrent neural networks (RNNs) are a general class of neural networks whose node connections are defined by a bidirectional “coupling matrix”, allowing loops among neurons. This architecture produces recursive dynamics in which the network state depends on previous ones. RNNs exhibit a wide range of temporal behavior: depending on the specific topology of their connectivity, they can generate steady states, limit cycles, and quasi-periodic or chaotic orbits. A memory is defined as a set of activation states (the pattern of activation of the neurons at time t) that remains unchanged under the network dynamics.

Hopfield neural networks are the simplest RNN architecture; they are widely used as storage devices capable of storing patterns and can model “associative memory”. The attractor states of these networks can be considered “stored patterns”. This is possible because, given sufficient time, a Hopfield network maps input activation patterns (stimuli) to output activation patterns represented by a steady state or by a limit cycle composed of several states. Hopfield RNNs can therefore store stimulus–response associations and serve as a model of how biological neural networks store and recall behaviors in response to given stimuli. The storage capacity limit of Hopfield RNNs was recognized early on by Amit, Gutfreund, and Sompolinsky for the case in which the coupling matrix is a sum of dyadic (outer-product) terms, the simplest learning strategy (Hebbian learning). This limit scales linearly with the network size N: retrieval is reliable, i.e., the retrieval error probability is low, only as long as the number of stored memories does not exceed roughly 14% of N. This strongly limits the application of neural networks for information storage. By contrast, randomly generated coupling matrices, with no dyadic structure imposed, exhibit an exponentially large number of memory states. The optimal storage problem therefore remains open, and how to achieve optimal storage is still an active subject of research.
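As a minimal illustration of the Hebbian storage mechanism described above, the following Python sketch (an illustrative toy with arbitrary sizes and names chosen by us, not taken from any of the papers below) stores random ±1 patterns with the outer-product rule and retrieves one of them from a corrupted cue.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 20                       # 20 patterns ~ 10% of N, below the ~14% Hebbian limit

patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian (sum-of-outer-products) coupling matrix with zero self-couplings
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def recall(cue, steps=20):
    """Asynchronous sign updates; the state relaxes towards a stored attractor."""
    s = cue.copy()
    for _ in range(steps):
        for i in rng.permutation(N):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Corrupt 15% of the bits of the first pattern and try to retrieve it
cue = patterns[0].copy()
flipped = rng.choice(N, size=int(0.15 * N), replace=False)
cue[flipped] *= -1
overlap = recall(cue) @ patterns[0] / N
print(f"overlap with the stored pattern: {overlap:.2f}")  # close to 1.0 on successful retrieval
```

With 20 patterns in a 200-neuron network (10% of N), retrieval typically succeeds; pushing the number of stored patterns above roughly 0.14 N makes retrieval fail, in line with the capacity limit mentioned above.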

This Special Issue on Hopfield RNNs and their storage capacity invites researchers to present state-of-the-art approaches, focusing on how modifications of the traditional Hopfield architecture and the Hebbian learning rule affect network behavior and storage capacity, with the objective of finding more efficient memory strategies.

The topics relevant to this Special Issue include but are not limited to the following:

  • Theory of Hopfield RNNs;
  • New information theories based on novel learning strategies;
  • Optimization of Hopfield RNN architectures;
  • RNN models of memory storage;
  • RNNs for brain-inspired machine learning and biological modeling.

Dr. Viola Folli
Dr. Giorgio Gosti
Dr. Edoardo Milanetti
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

neural networks; Hopfield model; learning and memory; collective dynamics; structural connectivity; machine learning; complexity and information theory; storage capacity
Published Papers (5 papers)


Research

22 pages, 1795 KiB  
Article
Prediction of Time Series Gene Expression and Structural Analysis of Gene Regulatory Networks Using Recurrent Neural Networks
by Michele Monti, Jonathan Fiorentino, Edoardo Milanetti, Giorgio Gosti and Gian Gaetano Tartaglia
Entropy 2022, 24(2), 141; https://doi.org/10.3390/e24020141 - 18 Jan 2022
Cited by 13 | Viewed by 3035
Abstract
Methods for time series prediction and classification of gene regulatory networks (GRNs) from gene expression data have been treated separately so far. The recent emergence of attention-based recurrent neural network (RNN) models boosted the interpretability of RNN parameters, making them appealing for the understanding of gene interactions. In this work, we generated synthetic time series gene expression data from a range of archetypal GRNs and we relied on a dual attention RNN to predict the gene temporal dynamics. We show that the prediction is extremely accurate for GRNs with different architectures. Next, we focused on the attention mechanism of the RNN and, using tools from graph theory, we found that its graph properties allow one to hierarchically distinguish different architectures of the GRN. We show that the GRN responded differently to the addition of noise in the prediction by the RNN and we related the noise response to the analysis of the attention mechanism. In conclusion, this work provides a way to understand and exploit the attention mechanism of RNNs and it paves the way to RNN-based methods for time series prediction and inference of GRNs from gene expression data. Full article
(This article belongs to the Special Issue Memory Storage Capacity in Recurrent Neural Networks)
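For readers who want a feel for the kind of data involved in the study above, the sketch below generates a synthetic time series from a hypothetical two-gene mutual-repression motif by Euler integration; the motif, rate constants, and function names are illustrative assumptions of ours, not the archetypal GRNs or the dual-attention RNN used by the authors.

```python
import numpy as np

def simulate_toggle_switch(T=500, dt=0.1, alpha=2.0, n=2, gamma=1.0, noise=0.05, seed=1):
    """Euler integration of a two-gene mutual-repression motif (a toy GRN):
    dx/dt = alpha / (1 + y**n) - gamma * x (+ small noise), and symmetrically for y."""
    rng = np.random.default_rng(seed)
    x, y = 1.0, 0.5
    series = np.empty((T, 2))
    for t in range(T):
        dx = alpha / (1.0 + y**n) - gamma * x
        dy = alpha / (1.0 + x**n) - gamma * y
        x += dt * dx + noise * np.sqrt(dt) * rng.normal()
        y += dt * dy + noise * np.sqrt(dt) * rng.normal()
        series[t] = (x, y)
    return series

expression = simulate_toggle_switch()  # shape (500, 2): one column per gene
```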

18 pages, 796 KiB  
Article
Hidden Hypergraphs, Error-Correcting Codes, and Critical Learning in Hopfield Networks
by Christopher Hillar, Tenzin Chan, Rachel Taubman and David Rolnick
Entropy 2021, 23(11), 1494; https://doi.org/10.3390/e23111494 - 11 Nov 2021
Cited by 3 | Viewed by 2442
Abstract
In 1943, McCulloch and Pitts introduced a discrete recurrent neural network as a model for computation in brains. The work inspired breakthroughs such as the first computer design and the theory of finite automata. We focus on learning in Hopfield networks, a special case with symmetric weights and fixed-point attractor dynamics. Specifically, we explore minimum energy flow (MEF) as a scalable convex objective for determining network parameters. We catalog various properties of MEF, such as biological plausibility, and then compare to classical approaches in the theory of learning. Trained Hopfield networks can perform unsupervised clustering and define novel error-correcting coding schemes. They also efficiently find hidden structures (cliques) in graph theory. We extend this known connection from graphs to hypergraphs and discover n-node networks with robust storage of 2^{Ω(n^{1−ϵ})} memories for any ϵ > 0. In the case of graphs, we also determine a critical ratio of training samples at which networks generalize completely. Full article
(This article belongs to the Special Issue Memory Storage Capacity in Recurrent Neural Networks)
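The convex flow objective mentioned in the abstract can be illustrated schematically. The sketch below implements a minimum-probability-flow-style objective for a ±1 Hopfield network, summing, over each training pattern's single-flip neighbours, the exponentiated half energy difference; this is a generic member of that family of objectives and an assumption on our part, not the exact MEF definition given in the paper.

```python
import numpy as np

def hopfield_energy(x, W, b):
    """Standard Hopfield/Ising energy: E(x) = -0.5 * x^T W x - b^T x."""
    return -0.5 * x @ W @ x - b @ x

def flow_objective(X, W, b):
    """Sum, over each +/-1 training pattern and its single-flip neighbours, of
    exp((E(x) - E(x')) / 2). Because the energy is linear in (W, b), every term
    is convex in the parameters, and so is the whole objective."""
    total = 0.0
    for x in X:
        e_x = hopfield_energy(x, W, b)
        for i in range(len(x)):
            x_flip = x.copy()
            x_flip[i] = -x_flip[i]   # flip one unit to reach a neighbouring state
            total += np.exp(0.5 * (e_x - hopfield_energy(x_flip, W, b)))
    return total / len(X)
```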

17 pages, 13851 KiB  
Article
Inferring Excitatory and Inhibitory Connections in Neuronal Networks
by Silvia Ghirga, Letizia Chiodo, Riccardo Marrocchio, Javier G. Orlandi and Alessandro Loppini
Entropy 2021, 23(9), 1185; https://doi.org/10.3390/e23091185 - 8 Sep 2021
Viewed by 2579
Abstract
The comprehension of neuronal network functioning, from the most basic mechanisms of signal transmission to complex patterns of memory and decision making, is at the basis of modern research in experimental and computational neurophysiology. While mechanistic knowledge of neuron and synapse structure has increased, the study of functional and effective networks is more complex, involving emergent phenomena, nonlinear responses, collective waves, correlations, and causal interactions. Refined data analysis may help in inferring functional/effective interactions and connectivity from neuronal activity. The Transfer Entropy (TE) technique is, among other things, well suited to predicting structural interactions between neurons and to inferring both effective and structural connectivity in small- and large-scale networks. To efficiently disentangle excitatory and inhibitory neural activities, in this article we present a revised version of TE, split into two contributions and characterized by a suitable delay time. The method is tested on in silico small neuronal networks, built to simulate the calcium activity as measured via calcium imaging in two-dimensional neuronal cultures. Inhibitory connections are well characterized, while preserving a high accuracy in predicting excitatory connections. The method could be applied to study effective and structural interactions in systems of excitable cells, in both physiological and pathological conditions. Full article
(This article belongs to the Special Issue Memory Storage Capacity in Recurrent Neural Networks)
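As background for the above, the sketch below computes a plain (history-length-1) transfer entropy between two binary time series by plug-in counting; it shows only the standard TE definition, not the authors' revised estimator split into excitatory and inhibitory contributions with a tuned delay.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """TE(X -> Y) for binary series with history length 1:
    sum over (y_{t+1}, y_t, x_t) of p(.) * log2[ p(y_{t+1}|y_t, x_t) / p(y_{t+1}|y_t) ]."""
    y1, y0, x0 = y[1:], y[:-1], x[:-1]
    n = len(y0)
    c_joint = Counter(zip(y1, y0, x0))   # counts of (y_{t+1}, y_t, x_t)
    c_y0x0 = Counter(zip(y0, x0))
    c_y1y0 = Counter(zip(y1, y0))
    c_y0 = Counter(y0)
    te = 0.0
    for (a, b, d), c in c_joint.items():
        p_joint = c / n
        p_full = c / c_y0x0[(b, d)]          # p(y_{t+1} | y_t, x_t)
        p_marg = c_y1y0[(a, b)] / c_y0[b]    # p(y_{t+1} | y_t)
        te += p_joint * np.log2(p_full / p_marg)
    return te

# Toy check: y copies x with a one-step delay, so TE(x -> y) ~ 1 bit and TE(y -> x) ~ 0
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=5000)
y = np.roll(x, 1)
print(transfer_entropy(x, y), transfer_entropy(y, x))
```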

14 pages, 5072 KiB  
Article
TOLOMEO, a Novel Machine Learning Algorithm to Measure Information and Order in Correlated Networks and Predict Their State
by Mattia Miotto and Lorenzo Monacelli
Entropy 2021, 23(9), 1138; https://doi.org/10.3390/e23091138 - 31 Aug 2021
Cited by 3 | Viewed by 1956
Abstract
We present ToloMEo (TOpoLogical netwOrk Maximum Entropy Optimization), a program implemented in C and Python that exploits a maximum entropy algorithm to evaluate network topological information. ToloMEo can study any system defined on a connected network where nodes can assume N discrete values by approximating the system probability distribution with a Potts Hamiltonian on a graph. The software computes entropy through a thermodynamic integration from the mean-field solution to the final distribution. The nature of the algorithm guarantees that the evaluated entropy is variational (i.e., it always provides an upper bound to the exact entropy). The program also performs machine learning, inferring the system’s behavior and providing the probability of unknown states of the network. These features make our method very general and applicable to a broad class of problems. Here, we focus on three different case studies: (i) an agent-based model of a minimal ecosystem defined on a square lattice, where we show how topological entropy captures a crossover between hunting behaviors; (ii) an example of image processing, where, starting from discretized pictures of cell populations, we extract information about the ordering and interactions between cell types and reconstruct the most likely positions of cells when data are missing; and (iii) an application to recurrent neural networks, in which we measure the information stored in different realizations of the Hopfield model, extending our method to describe dynamical out-of-equilibrium processes. Full article
(This article belongs to the Special Issue Memory Storage Capacity in Recurrent Neural Networks)
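To make the maximum-entropy ingredient concrete, the sketch below evaluates the energy of a Potts Hamiltonian on a graph, the kind of model the abstract refers to; the function name, couplings, and toy graph are illustrative assumptions of ours and are unrelated to the ToloMEo code.

```python
import numpy as np

def potts_energy(states, edges, J=1.0, h=None):
    """Potts Hamiltonian on a graph:
    E = -J * sum over edges (i, j) of delta(s_i, s_j) - sum over nodes i of h[i, s_i]."""
    e = -J * sum(int(states[i] == states[j]) for i, j in edges)
    if h is not None:
        e -= sum(h[i, states[i]] for i in range(len(states)))
    return float(e)

# Toy usage: a 4-node cycle whose nodes take one of three discrete values
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
states = np.array([0, 0, 1, 2])
print(potts_energy(states, edges))   # -1.0: only the (0, 1) edge is satisfied
```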

29 pages, 1767 KiB  
Article
External Stimuli on Neural Networks: Analytical and Numerical Approaches
by Evaldo M. F. Curado, Nilo B. Melgar and Fernando D. Nobre
Entropy 2021, 23(8), 1034; https://doi.org/10.3390/e23081034 - 11 Aug 2021
Viewed by 1598
Abstract
Based on the behavior of living beings, which react mostly to external stimuli, we introduce a neural-network model that uses external patterns as a fundamental tool for the process of recognition. In this proposal, external stimuli appear as an additional field, and basins of attraction, representing memories, arise in accordance with this new field. This is in contrast to the more common attractor neural networks, where memories are attractors inside well-defined basins of attraction. We show that this procedure considerably increases the storage capabilities of the neural network; this property is illustrated by the standard Hopfield model, which reveals that the recognition capacity of our model may be enlarged, typically, by a factor of 10^2. The primary challenge here consists in calibrating the influence of the external stimulus, in order to attenuate the noise generated by memories that are not correlated with the external pattern. The system is analyzed primarily through numerical simulations. However, since there is the possibility of performing analytical calculations for the Hopfield model, the agreement between these two approaches can be tested; matching results are indicated in some cases. We also show that the present proposal exhibits a crucial attribute of living beings, which concerns their ability to react promptly to changes in the external environment. Additionally, we illustrate that this new approach may significantly enlarge the recognition capacity of neural networks in various situations: with correlated and non-correlated memories, as well as with diluted, symmetric, or asymmetric interactions (synapses). This demonstrates that it can be implemented easily on a wide diversity of models. Full article
(This article belongs to the Special Issue Memory Storage Capacity in Recurrent Neural Networks)
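To illustrate the idea of an external field biasing the recall dynamics, the sketch below runs asynchronous Hopfield updates with an additional stimulus term; the specific form s_i <- sign(sum_j W_ij s_j + h * xi_i), the parameter names, and the value of h are generic assumptions of ours and may differ from the prescription used in the paper.

```python
import numpy as np

def recall_with_stimulus(W, state, stimulus, h=0.5, steps=30, seed=0):
    """Asynchronous Hopfield updates with an extra external-field term:
    s_i <- sign( sum_j W_ij s_j + h * xi_i ), where xi is the external pattern
    and h sets how strongly the stimulus biases the dynamics."""
    rng = np.random.default_rng(seed)
    s = state.copy()
    for _ in range(steps):
        for i in rng.permutation(len(s)):
            field = W[i] @ s + h * stimulus[i]
            s[i] = 1 if field >= 0 else -1
    return s
```

Setting h = 0 recovers the standard Hopfield recall dynamics; increasing h biases the dynamics towards states correlated with the external pattern.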
