Memory–Non-Linearity Trade-Off in Distance-Based Delay Networks
Abstract
1. Introduction
2. Background
3. Methods
3.1. DDNs
3.2. Hyperparameter Optimization
3.3. Memory Capacity
3.4. Information Processing Capacity
3.5. Task Capacities
3.6. Tasks
3.6.1. NARMA
3.6.2. Delayed XOR
4. Results
4.1. Unoptimized Networks
4.2. NARMA
4.3. Delayed XOR
5. Discussion
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
| Abbreviation | Definition |
|---|---|
| RNN | Recurrent neural network |
| RC | Reservoir computing |
| ESN | Echo state network |
| LSM | Liquid state machine |
| BPTT | Back-propagation through time |
| DDN | Distance-based delay network |
| ADDN | Adaptive distance-based delay network |
| IPC | Information processing capacity |
| MC | Memory capacity |
| CMA-ES | Covariance matrix adaptation evolutionary strategy |
| GMM | Gaussian mixture model |
| NARMA | Non-linear autoregressive moving average |
| NRMSE | Normalized root mean squared error |
| XOR | Exclusive or |
Appendix A. Hyperparameter Settings
Appendix A.1. ESN
| | Cluster 1 | Cluster 2 | Cluster 3 | Cluster 4 |
|---|---|---|---|---|
| Mixture components | 0.24 | 0.57 | 0.13 | 0.06 |
| Bias weight scaling | 0 | 0 | 0 | 0 |
| Decay parameter | 1 | 1 | 1 | 0.983 |
| Weight Scaling | Cluster 1 | Cluster 2 | Cluster 3 | Cluster 4 | Input Neuron |
|---|---|---|---|---|---|
| Cluster 1 | 0.000 | 0.201 | 0.140 | 0.212 | 0.000 |
| Cluster 2 | 0.494 | 0.572 | 0.834 | 0.061 | 0.244 |
| Cluster 3 | 0.617 | 0.000 | 0.000 | 0.000 | 0.349 |
| Cluster 4 | 0.000 | 0.803 | 0.611 | 0.000 | 0.767 |
| Connectivity | Cluster 1 | Cluster 2 | Cluster 3 | Cluster 4 | Input Neuron |
|---|---|---|---|---|---|
| Cluster 1 | 0.867 | 1.000 | 0.688 | 0.905 | 0.920 |
| Cluster 2 | 0.511 | 0.367 | 1.000 | 0.692 | 1.000 |
| Cluster 3 | 0.850 | 1.000 | 0.860 | 1.000 | 1.000 |
| Cluster 4 | 0.481 | 1.000 | 0.838 | 0.769 | 0.783 |
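The per-cluster-pair weight scaling and connectivity values above fully specify how a block-structured reservoir weight matrix can be sampled. The sketch below is illustrative only: the function name, the uniform weight distribution, and the cluster size of 25 neurons are assumptions, not the authors' actual construction.

```python
import numpy as np

def build_clustered_reservoir(cluster_sizes, weight_scaling, connectivity, seed=0):
    """Sample a block-structured reservoir weight matrix.

    weight_scaling[i][j] scales weights from cluster j into cluster i;
    connectivity[i][j] is the probability that any such connection exists.
    (Hypothetical helper; the paper's exact sampling scheme may differ.)
    """
    rng = np.random.default_rng(seed)
    n = sum(cluster_sizes)
    offsets = np.cumsum([0] + list(cluster_sizes))
    W = np.zeros((n, n))
    for i in range(len(cluster_sizes)):
        for j in range(len(cluster_sizes)):
            rows = slice(offsets[i], offsets[i + 1])
            cols = slice(offsets[j], offsets[j + 1])
            # Dense random block, then a Bernoulli mask for sparsity.
            block = rng.uniform(-1, 1, (cluster_sizes[i], cluster_sizes[j]))
            mask = rng.random(block.shape) < connectivity[i][j]
            W[rows, cols] = weight_scaling[i][j] * block * mask
    return W

# Recurrent-weight entries of the tables above (input column omitted).
scaling = [[0.000, 0.201, 0.140, 0.212],
           [0.494, 0.572, 0.834, 0.061],
           [0.617, 0.000, 0.000, 0.000],
           [0.000, 0.803, 0.611, 0.000]]
conn = [[0.867, 1.000, 0.688, 0.905],
        [0.511, 0.367, 1.000, 0.692],
        [0.850, 1.000, 0.860, 1.000],
        [0.481, 1.000, 0.838, 0.769]]
W = build_clustered_reservoir([25, 25, 25, 25], scaling, conn)
```

Note that a scaling of 0 (e.g., within Cluster 1) zeroes out the whole block regardless of connectivity, which is how the optimizer can effectively prune cluster-to-cluster pathways.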
Appendix A.2. DDN
| | Cluster 1 | Cluster 2 | Cluster 3 | Cluster 4 |
|---|---|---|---|---|
| Mixture components | 0.73691 | 0 | 0.07961 | 0.18348 |
| Means, X | 0.00073 | 0.00186 | 0.00200 | 0.00064 |
| Means, Y | 0.00080 | 0.00231 | 0.00343 | 0.00343 |
| Variance, X | 0.00018 | 0.00065 | 0 | 0.00103 |
| Variance, Y | 0 | 0.00066 | 0 | 0 |
| Correlation, X–Y | 0.990 | −0.990 | −0.859 | −0.218 |
| Bias weight scaling | 1.930 | 0.169 | 0.401 | 0.000 |
| Decay parameter | 0.564 | 1.000 | 1.000 | 0.183 |
| Weight Scaling | Cluster 1 | Cluster 2 | Cluster 3 | Cluster 4 | Input Neuron |
|---|---|---|---|---|---|
| Cluster 1 | 0.000 | 1.200 | 0.000 | 3.038 | 0.000 |
| Cluster 2 | 0.776 | 0.000 | 0.000 | 0.000 | 0.000 |
| Cluster 3 | 0.000 | 0.006 | 0.529 | 0.610 | 0.000 |
| Cluster 4 | 0.000 | 0.000 | 0.638 | 0.000 | 0.901 |
| Connectivity | Cluster 1 | Cluster 2 | Cluster 3 | Cluster 4 | Input Neuron |
|---|---|---|---|---|---|
| Cluster 1 | 1.000 | 0.602 | 0.174 | 1.000 | 0.924 |
| Cluster 2 | 1.000 | 1.000 | 0.458 | 1.000 | 0.998 |
| Cluster 3 | 0.698 | 1.000 | 1.000 | 1.000 | 1.000 |
| Cluster 4 | 1.000 | 1.000 | 0.516 | 1.000 | 0.250 |
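In a DDN, the Gaussian-mixture parameters above place neurons in a 2-D space, and connection delays follow from inter-neuron distances. The sketch below shows one way this placement-and-delay step could work; the function names, the `propagation_speed`/`dt` parametrization, and the rounding to integer time steps are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def sample_positions(weights, means, variances, correlations, n_neurons, seed=0):
    """Sample 2-D neuron positions from a Gaussian mixture.

    Component k has mean (means[k][0], means[k][1]), per-axis variances,
    and an X-Y correlation coefficient, mirroring the table above.
    (Illustrative helper; the paper's parametrization may differ.)
    """
    rng = np.random.default_rng(seed)
    comps = rng.choice(len(weights), size=n_neurons, p=weights)
    pos = np.empty((n_neurons, 2))
    for k in range(len(weights)):
        idx = comps == k
        sx, sy = np.sqrt(variances[k][0]), np.sqrt(variances[k][1])
        cov = [[sx * sx, correlations[k] * sx * sy],
               [correlations[k] * sx * sy, sy * sy]]
        pos[idx] = rng.multivariate_normal(means[k], cov, size=int(idx.sum()))
    return pos

def distance_delays(pos, propagation_speed, dt):
    """Convert pairwise Euclidean distances into integer step delays."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    return np.rint(d / (propagation_speed * dt)).astype(int)

# Values from the DDN mixture table above (zero variance collapses an axis).
mix = [0.73691, 0.0, 0.07961, 0.18348]
means = [[0.00073, 0.00080], [0.00186, 0.00231],
         [0.00200, 0.00343], [0.00064, 0.00343]]
var = [[0.00018, 0.0], [0.00065, 0.00066], [0.0, 0.0], [0.00103, 0.0]]
corr = [0.990, -0.990, -0.859, -0.218]
pos = sample_positions(mix, means, var, corr, n_neurons=100)
delays = distance_delays(pos, propagation_speed=1.0, dt=0.001)
```

A zero mixture weight (Cluster 2) simply receives no neurons, and zero variances produce degenerate, line- or point-like clusters, so the optimizer can shape both where neurons sit and how spread out they are.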
| | Optimized ESN (equivalent nr. of generations) | Optimized DDN (equivalent nr. of generations) | Unoptimized ESN − Unoptimized DDN (equivalent hyperparameters) |
|---|---|---|---|
| NARMA-30 NRMSE | 0.455 ± 0.070 | 0.087 ± 0.018 | 0.187 ± 0.170 |
| NARMA-30 task overlap | 0.783 ± 0.066 | 0.940 ± 0.011 | −0.192 ± 0.197 |
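The table reports NARMA-30 performance as NRMSE. For context, a common way to generate a NARMA-n benchmark target and score it is sketched below; the exact coefficients vary across the literature (this follows a widely used generalization of the Atiya–Parlos NARMA-10 system, with a tanh to keep the order-30 variant bounded), so treat it as an assumption rather than the paper's exact setup.

```python
import numpy as np

def narma(u, n=30):
    """Generate a NARMA-n target from input sequence u.

    One common generalization of the Atiya-Parlos NARMA-10 recursion;
    the tanh saturation keeps longer-memory variants from diverging.
    """
    y = np.zeros(len(u))
    for t in range(n, len(u) - 1):
        y[t + 1] = np.tanh(0.3 * y[t]
                           + 0.05 * y[t] * y[t - n + 1:t + 1].sum()
                           + 1.5 * u[t - n + 1] * u[t]
                           + 0.1)
    return y

def nrmse(y_true, y_pred):
    """Root mean squared error normalized by the target's standard deviation."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2) / np.var(y_true))

rng = np.random.default_rng(0)
u = rng.uniform(0, 0.5, 2000)   # i.i.d. input, the usual NARMA driving signal
y = narma(u, n=30)
```

Under this normalization, always predicting the target's mean gives an NRMSE of 1, so values such as 0.087 indicate substantially better-than-trivial prediction.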
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Iacob, S.; Dambre, J. Memory–Non-Linearity Trade-Off in Distance-Based Delay Networks. Biomimetics 2024, 9, 755. https://doi.org/10.3390/biomimetics9120755