Neural Networks and Learning Systems

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Network Science".

Deadline for manuscript submissions: closed (31 January 2021) | Viewed by 19100

Special Issue Editor


Prof. Dr. Luca Pancioni
Guest Editor
Department of Information Engineering and Mathematics, University of Siena, Siena, Italy
Interests: bifurcation; memristor circuits; memristors; chaos; nonlinear dynamical systems; oscillators; Chua's circuit; Hopfield neural nets; Lyapunov methods; asymptotic stability; cellular neural nets; convergence; coupled circuits; hysteresis; piecewise linear techniques; stability; synchronization; time-varying networks; neural nets; circuit stability

Special Issue Information

Dear Colleagues,

In recent decades, systems based on artificial neural networks and machine learning devices have become increasingly present in everyday life. Artificial intelligence is considered one of the most useful tools in data analysis and decision making, and its applications span all sectors of our lives, including medicine, engineering, economics, and manufacturing. In this context, the importance of research on artificial neural network systems is evident, and the disciplines related to this topic are growing rapidly in terms of project funding and research scope. As a result, part of the scientific community is devoted to investigating learning machines and artificial neural network systems from the point of view of both application and theory, the latter being fundamental in order to validate mathematical models.

The main goal of this Special Issue is to collect papers on the state of the art and the latest studies on neural networks and learning systems. Moreover, it is an opportunity to provide a venue where researchers can share and exchange their views on this topic in the fields of theory, design, and applications. The area of interest is wide and includes several categories, such as stability and convergence analysis, learning algorithms, artificial vision, and speech recognition.

Prof. Dr. Luca Pancioni
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • neural networks
  • neurons
  • stability
  • circuit theory
  • nonlinear systems
  • synchronization
  • network topology
  • couplings
  • convergence

Published Papers (10 papers)


Research

14 pages, 511 KiB  
Article
An Algorithm Based on Loop-Cutting Contribution Function for Loop Cutset Problem in Bayesian Network
by Jie Wei, Wenxian Xie and Yufeng Nie
Mathematics 2021, 9(5), 462; https://doi.org/10.3390/math9050462 - 24 Feb 2021
Cited by 1 | Viewed by 1347
Abstract
The loop cutset solving algorithm for Bayesian networks is particularly important for Bayesian inference. This paper proposes an algorithm for finding an approximate minimum loop cutset based on the loop-cutting contribution index. Compared with existing algorithms, it uses the loop-cutting contribution index of nodes and node pairs to analyze nodes from a global perspective, and it selects loop cutset candidates with the node pair as the unit. The algorithm uses the parameter μ to control the range of node pairs and the parameter ω to control their selection conditions, so the parameters can be adjusted to the size of the Bayesian network, which ensures computational efficiency. Numerical experiments show that the computational efficiency of the algorithm is significantly improved while remaining consistent with the accuracy of existing algorithms; the experiments also study the influence of the parameter settings on computational efficiency using trend analysis and two-way analysis of variance. By solving the loop cutset with the node pair as the unit, the algorithm helps to improve the efficiency of Bayesian inference and Bayesian network structure analysis. Full article
(This article belongs to the Special Issue Neural Networks and Learning Systems)
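To illustrate the general shape of such heuristics, the following minimal Python sketch greedily removes high-degree nodes from an undirected network skeleton until no loop remains. The degree score is only a stand-in for the paper's loop-cutting contribution index, and the parameters μ and ω are not modeled.

```python
# Hedged sketch of a greedy loop-cutset heuristic; the degree-based
# score below is a placeholder, not the paper's contribution index.
import networkx as nx

def greedy_loop_cutset(G: nx.Graph) -> set:
    """Remove highest-degree nodes until no undirected cycle remains."""
    H = G.copy()
    cutset = set()
    while not nx.is_forest(H):
        # degree-<=1 nodes can never lie on a loop; prune them first
        leaves = [v for v in H if H.degree(v) <= 1]
        if leaves:
            H.remove_nodes_from(leaves)
            continue
        v = max(H.nodes, key=H.degree)  # greedy surrogate score
        cutset.add(v)
        H.remove_node(v)
    return cutset

# A 4-cycle with a chord needs a single cut vertex.
G = nx.cycle_graph(4)
G.add_edge(0, 2)
print(greedy_loop_cutset(G))  # e.g. {0}
```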

11 pages, 559 KiB  
Article
Inertial Neural Networks with Unpredictable Oscillations
by Marat Akhmet, Madina Tleubergenova and Akylbek Zhamanshin
Mathematics 2020, 8(10), 1797; https://doi.org/10.3390/math8101797 - 16 Oct 2020
Cited by 9 | Viewed by 2300
Abstract
In this paper, inertial neural networks, that is, networks modeled by second-order differential equations, are investigated. The recently introduced type of motion, unpredictable oscillations, is considered for these models. Such motions continue the line of periodic and almost periodic oscillations. The research is of strong importance for neuroscience, since the existence of unpredictable solutions proves Poincaré chaos. Sufficient conditions are determined for the existence, uniqueness, and exponential stability of unpredictable solutions. The results can significantly extend the role of oscillations in the exploitation of artificial neural networks, since they provide strong new theoretical and practical opportunities for implementing methods of chaos extension, synchronization, stabilization, and control of periodic motions in various types of neural networks. Numerical simulations are presented to demonstrate the validity of the theoretical results. Full article
(This article belongs to the Special Issue Neural Networks and Learning Systems)
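For readers who want to experiment, a minimal sketch of an inertial (second-order) network is given below: the system x'' = -Ax' - Bx + C tanh(x) + u(t) is reduced to first order and integrated numerically. All coefficients and the bounded input are illustrative placeholders, not the paper's construction of an unpredictable solution.

```python
# Hedged sketch: a two-neuron inertial network integrated with SciPy.
import numpy as np
from scipy.integrate import solve_ivp

A = np.diag([1.0, 1.2])          # damping coefficients
B = np.diag([2.0, 2.5])          # restoring coefficients
C = np.array([[0.3, -0.8],
              [0.6,  0.2]])      # connection weights (illustrative)

def rhs(t, z):
    x, v = z[:2], z[2:]          # states and their derivatives
    u = np.array([np.sin(t), np.cos(np.pi * t)])  # bounded input
    dv = -A @ v - B @ x + C @ np.tanh(x) + u
    return np.concatenate([v, dv])

sol = solve_ivp(rhs, (0.0, 50.0), np.zeros(4),
                t_eval=np.linspace(0.0, 50.0, 2000))
print(sol.y[:2, -1])             # network state at t = 50
```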

14 pages, 460 KiB  
Article
Strongly Unpredictable Oscillations of Hopfield-Type Neural Networks
by Marat Akhmet, Madina Tleubergenova and Zakhira Nugayeva
Mathematics 2020, 8(10), 1791; https://doi.org/10.3390/math8101791 - 15 Oct 2020
Cited by 7 | Viewed by 1373
Abstract
In this paper, unpredictable oscillations in Hopfield-type neural networks are investigated. The motion strongly relates to Poincaré chaos; thus, the dynamics is indisputably important for those problems of artificial intelligence, brain activity, and robotics that rely on chaos. Sufficient conditions for the existence and uniqueness of exponentially stable unpredictable solutions are determined. The oscillations continue the line of periodic and almost periodic motions, which have already been verified as effective instruments of analysis and application for image recognition, information processing, and other areas of neuroscience. The concept of strongly unpredictable oscillations is a significant novelty of the present research, since the presence of chaos in each coordinate of the state space provides new opportunities in applications. In addition to the theoretical analysis, we provide strong simulation evidence showing that all of the assumed conditions are fulfilled. Full article
(This article belongs to the Special Issue Neural Networks and Learning Systems)
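A minimal simulation sketch in the same spirit is shown below: a first-order Hopfield-type network x' = -x + W tanh(x) + I(t) under a bounded quasiperiodic drive, integrated by forward Euler. The weights and input merely stand in for the unpredictable forcing constructed in the paper.

```python
# Hedged sketch of a driven Hopfield-type network (forward Euler).
import numpy as np

W = np.array([[0.0, 1.5],
              [-1.6, 0.0]])      # illustrative weights
dt, steps = 1e-3, 200_000
x = np.array([0.1, -0.2])

for k in range(steps):
    t = k * dt
    I = np.array([np.sin(t), np.sin(np.sqrt(2) * t)])  # quasiperiodic drive
    x = x + dt * (-x + W @ np.tanh(x) + I)

print(x)  # state after the transient
```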

30 pages, 5290 KiB  
Article
Sage Revised Reiterative Even Zernike Polynomials Neural Network Control with Modified Fish School Search Applied in SSCCRIM Impelled System
by Chih-Hong Lin
Mathematics 2020, 8(10), 1760; https://doi.org/10.3390/math8101760 - 13 Oct 2020
Viewed by 1231
Abstract
In light of its fine ability to learn under existing uncertainties, a sage revised reiterative even Zernike polynomials neural network (SRREZPNN) control with a modified fish school search (MFSS) method is proposed to control the six-phase squirrel cage copper rotor induction motor (SSCCRIM) impelled continuously variable transmission assembled system and to obtain excellent control performance. This control structure combines the SRREZPNN control, with its adaptive learning law, and the compensated control, with an estimated law. In accordance with the Lyapunov stability theorem, the adaptive learning law of the revised reiterative even Zernike polynomials neural network (RREZPNN) control can be derived, and the estimated law of the compensated control can be elicited. Besides, the MFSS can find two optimal values to adjust two learning rates and raise the convergence speed. Experimental results, compared with those of other control systems, confirm that the proposed control system achieves fine control performance. Full article
(This article belongs to the Special Issue Neural Networks and Learning Systems)
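The Zernike basis underlying the controller can be reproduced in a few lines; the sketch below evaluates the standard radial Zernike polynomials R_n^0, whose even orders could serve as polynomial activations. This is an assumed reading of the basis, not the author's implementation.

```python
# Hedged sketch: standard radial Zernike polynomials R_n^m(rho).
from math import factorial

def zernike_radial(n: int, m: int, rho: float) -> float:
    """R_n^m(rho) via the usual factorial sum; requires n - m even."""
    assert 0 <= m <= n and (n - m) % 2 == 0
    return sum(
        (-1) ** k * factorial(n - k)
        / (factorial(k)
           * factorial((n + m) // 2 - k)
           * factorial((n - m) // 2 - k))
        * rho ** (n - 2 * k)
        for k in range((n - m) // 2 + 1)
    )

# Even-order basis values at rho = 0.5 (R_2^0(rho) = 2 rho^2 - 1, etc.).
print([zernike_radial(n, 0, 0.5) for n in (0, 2, 4, 6)])
```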

34 pages, 9762 KiB  
Article
A Rectified Reiterative Sieved-Pollaczek Polynomials Neural Network Backstepping Control with Improved Fish School Search for Motor Drive System
by Chih-Hong Lin
Mathematics 2020, 8(10), 1699; https://doi.org/10.3390/math8101699 - 03 Oct 2020
Cited by 1 | Viewed by 1236
Abstract
As the six-phase squirrel cage copper rotor induction motor has some nonlinear characteristics, such as nonlinear friction, nonsymmetric torque, wind stray torque, external load torque, and time-varying uncertainties, better control performance cannot be achieved with general linear controllers. A backstepping control with a sliding switching function was previously proposed to control the motion of a six-phase squirrel cage copper rotor induction motor drive system and to reduce nonlinear uncertainty effects. However, that control results in severe chattering under nonlinear system effects and overtorque under matched uncertainties. To reduce the chattering, we put forward the rectified reiterative sieved-Pollaczek polynomials neural network backstepping control with an improved fish school search method to estimate the external bundled torque uncertainties and to compensate for the minimum reconstructed error of the estimated law. In light of Lyapunov stability, the online parameter training method of the rectified reiterative sieved-Pollaczek polynomials neural network can be derived using an adaptive law. Moreover, to improve convergence and obtain good learning performance, the improved fish school search algorithm is used to readjust the two variable learning rates of the weights in the network. Lastly, the effectiveness of the proposed control system is validated by experimental results. Full article
(This article belongs to the Special Issue Neural Networks and Learning Systems)
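Since fish school search appears here as the learning-rate tuner, a highly simplified sketch of the metaheuristic is given below: candidates take random steps, gain weight in proportion to fitness improvement, and drift toward the weighted barycenter. The improved variant of the paper is not reproduced; all constants are illustrative.

```python
# Hedged sketch of basic fish school search on a toy objective.
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sum(x ** 2, axis=-1)        # toy fitness (minimize)

pos = rng.uniform(-5, 5, size=(20, 2))       # school of 20 fish in 2-D
weight = np.ones(20)
step = 0.5

for it in range(100):
    trial = pos + rng.uniform(-step, step, pos.shape)  # individual move
    gain = f(pos) - f(trial)                           # fitness improvement
    pos[gain > 0] = trial[gain > 0]
    weight = np.clip(weight + gain / (np.abs(gain).max() + 1e-12), 0.1, 50.0)
    barycenter = (weight[:, None] * pos).sum(0) / weight.sum()
    pos += 0.1 * (barycenter - pos)                    # collective move
    step *= 0.99                                       # shrink step size

print(pos[np.argmin(f(pos))])                # best fish, near the optimum
```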

12 pages, 680 KiB  
Article
Shared Node and Its Improvement to the Theory Analysis and Solving Algorithm for the Loop Cutset
by Jie Wei, Wenxian Xie and Yufeng Nie
Mathematics 2020, 8(9), 1625; https://doi.org/10.3390/math8091625 - 19 Sep 2020
Cited by 1 | Viewed by 2429
Abstract
The Bayesian network is one of the most famous network models, and the loop cutset is one of the crucial structures for Bayesian inference. In a Bayesian network and its inference, how to measure the relationship between nodes is very important, because the relationship between different nodes has a significant influence on the node-probability of the loop cutset. To analyse the relationship between two nodes in a graph, we define the shared node, prove upper and lower bounds on the number of shared nodes, and confirm through theorems and experiments that the shared node influences the node-probability of the loop cutset. These results explain problems that we found when studying the statistical node-probability of belonging to the loop cutset. Shared nodes serve not only to improve the theoretical analysis of the loop cutset, but also the loop cutset solving algorithms, especially heuristic algorithms, whose heuristic strategy can be optimized by means of shared nodes. Our results provide a new tool for gauging the relationship between different nodes and a new perspective for estimating the loop cutset, and they are helpful for loop cutset algorithms and network analysis. Full article
(This article belongs to the Special Issue Neural Networks and Learning Systems)
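The abstract does not fix the formal definition, so the sketch below takes one plausible reading: the shared nodes of a node pair are their common neighbors in the undirected skeleton, a cheap proxy for how strongly the pair is coupled.

```python
# Hedged sketch: common neighbors as an assumed reading of "shared node".
import networkx as nx

def shared_nodes(G: nx.Graph, u, v) -> set:
    """Nodes adjacent to both u and v, excluding u and v themselves."""
    return (set(G[u]) & set(G[v])) - {u, v}

G = nx.complete_graph(5)
print(shared_nodes(G, 0, 1))   # {2, 3, 4} in K5
```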

9 pages, 479 KiB  
Article
The Study of the Theoretical Size and Node Probability of the Loop Cutset in Bayesian Networks
by Jie Wei, Yufeng Nie and Wenxian Xie
Mathematics 2020, 8(7), 1079; https://doi.org/10.3390/math8071079 - 03 Jul 2020
Cited by 2 | Viewed by 2039
Abstract
Pearl’s conditioning method is one of the basic algorithms of Bayesian inference, and the loop cutset is crucial for the implementation of conditioning. There are many numerical algorithms for solving the loop cutset, but theoretical research on the characteristics of the loop cutset is lacking. In this paper, theoretical insights into the size and node probability of the loop cutset are obtained based on graph theory and probability theory. It is proven that the loop cutset of a p-complete graph has a size of p − 2, so the upper bound of the size can be determined by the number of nodes. Furthermore, the probability that a node belongs to the loop cutset is proven to be positively correlated with its degree. Numerical simulations show that the application of the theoretical results can facilitate the prediction and verification of the loop cutset problem. This work is helpful in evaluating the performance of Bayesian networks. Full article
(This article belongs to the Special Issue Neural Networks and Learning Systems)
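The complete-graph claim as read above is easy to check empirically for small p: in K_p, any two remaining nodes form a tree, so the minimum loop cutset has size p − 2. The brute-force sketch below confirms this for p = 3, …, 6.

```python
# Hedged sketch: brute-force minimum loop cutset of small complete graphs.
from itertools import combinations
import networkx as nx

def min_loop_cutset_size(G: nx.Graph) -> int:
    nodes = list(G)
    for k in range(len(nodes) + 1):
        for S in combinations(nodes, k):
            H = G.copy()
            H.remove_nodes_from(S)
            if H.number_of_nodes() == 0 or nx.is_forest(H):
                return k       # smallest removal leaving no loop
    return len(nodes)

for p in range(3, 7):
    print(p, min_loop_cutset_size(nx.complete_graph(p)))  # prints p, p - 2
```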

28 pages, 10078 KiB  
Article
Mixed Modified Recurring Rogers-Szego Polynomials Neural Network Control with Mended Grey Wolf Optimization Applied in SIM Expelling System
by Der-Fa Chen, Yi-Cheng Shih, Shih-Cheng Li, Chin-Tung Chen and Jung-Chu Ting
Mathematics 2020, 8(5), 754; https://doi.org/10.3390/math8050754 - 09 May 2020
Cited by 3 | Viewed by 1731
Abstract
Owing to its good ability to learn nonlinear uncertainties, a mixed modified recurring Rogers-Szego polynomials neural network (MMRRSPNN) control with mended grey wolf optimization (MGWO) using two linear adjusted factors is proposed for the six-phase induction motor (SIM) driven continuously variable transmission (CVT) system to acquire better control performance. The control system can execute the MRRSPNN control with a fitted learning rule and the compensating control with an estimated rule. In light of the Lyapunov stability theorem, the fitted learning rule of the MRRSPNN control can be derived, and the estimated rule of the compensating control can be originated. Besides, the MGWO with two linear adjusted factors yields two adjustable learning rates for the two parameters, finding their ideal values and speeding up the convergence of the weights. Experimental results, compared with those of other control systems, demonstrate that the proposed control system achieves better control performance. Full article
(This article belongs to the Special Issue Neural Networks and Learning Systems)
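As an orientation for readers unfamiliar with the optimizer, the sketch below implements standard grey wolf optimization on a toy objective; the three best wolves steer the pack while the control factor decays from 2 to 0. The mended variant with two linear adjusted factors is not reproduced.

```python
# Hedged sketch of standard grey wolf optimization (GWO).
import numpy as np

rng = np.random.default_rng(1)
f = lambda X: np.sum(X ** 2, axis=-1)      # toy objective (minimize)
X = rng.uniform(-10, 10, size=(15, 2))     # pack of 15 wolves in 2-D

for t in range(200):
    a = 2 * (1 - t / 200)                  # control factor decays 2 -> 0
    leaders = X[np.argsort(f(X))[:3]]      # alpha, beta, delta wolves
    moves = []
    for L in leaders:
        A = a * (2 * rng.random(X.shape) - 1)
        C = 2 * rng.random(X.shape)
        moves.append(L - A * np.abs(C * L - X))   # encircling step
    X = np.mean(moves, axis=0)             # average of the three guides

print(X[np.argmin(f(X))])                  # best wolf, near the optimum
```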

19 pages, 853 KiB  
Article
Robust Stability of Complex-Valued Stochastic Neural Networks with Time-Varying Delays and Parameter Uncertainties
by Pharunyou Chanthorn, Grienggrai Rajchakit, Jenjira Thipcha, Chanikan Emharuethai, Ramalingam Sriraman, Chee Peng Lim and Raja Ramachandran
Mathematics 2020, 8(5), 742; https://doi.org/10.3390/math8050742 - 08 May 2020
Cited by 56 | Viewed by 2550
Abstract
In practical applications, stochastic effects are normally viewed as the major sources of a system’s undesired behaviours when modelling real neural systems. As such, research on network models with stochastic effects is significant. In view of this, in this paper we analyse the issue of robust stability for a class of uncertain complex-valued stochastic neural networks (UCVSNNs) with time-varying delays. Based on the real-imaginary separate-type activation function, the original UCVSNN model is analysed using an equivalent representation consisting of two real-valued neural networks. By constructing a proper Lyapunov–Krasovskii functional and applying Jensen’s inequality, a number of sufficient conditions can be derived by utilizing Itô’s formula, the homeomorphism principle, the linear matrix inequality, and other analytic techniques. As a result, new sufficient conditions to ensure robust, globally asymptotic stability in the mean square for the considered UCVSNN models are derived. Numerical simulations are presented to illustrate the merit of the obtained results. Full article
(This article belongs to the Special Issue Neural Networks and Learning Systems)
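The real-imaginary separation at the heart of the analysis can be verified in a few lines: with z = x + iy, W = W_R + iW_I, and a separate-type activation f(z) = tanh(Re z) + i tanh(Im z), the complex network is equivalent to two coupled real-valued networks. All values below are illustrative.

```python
# Hedged sketch: complex-valued layer vs. its two real-valued halves.
import numpy as np

WR = np.array([[0.2, -0.5], [0.4, 0.1]])
WI = np.array([[0.3,  0.2], [-0.1, 0.6]])
x, y = np.array([0.5, -0.3]), np.array([0.1, 0.7])

# complex form
W = WR + 1j * WI
z = x + 1j * y
out_c = W @ (np.tanh(z.real) + 1j * np.tanh(z.imag))

# equivalent pair of real-valued networks
u, v = np.tanh(x), np.tanh(y)
out_r = WR @ u - WI @ v          # real part
out_i = WI @ u + WR @ v          # imaginary part

print(np.allclose(out_c, out_r + 1j * out_i))   # True
```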

11 pages, 292 KiB  
Article
Asymptotic Convergence of Soft-Constrained Neural Networks for Density Estimation
by Edmondo Trentin
Mathematics 2020, 8(4), 572; https://doi.org/10.3390/math8040572 - 12 Apr 2020
Cited by 4 | Viewed by 1885
Abstract
A soft-constrained neural network for density estimation (SC-NN-4pdf) has recently been introduced to tackle the issues arising from the application of neural networks to density estimation problems (in particular, the satisfaction of the second Kolmogorov axiom). Although the SC-NN-4pdf has been shown to outperform parametric and non-parametric approaches (from both the machine learning and the statistics areas) over a variety of univariate and multivariate density estimation tasks, no clear rationale behind its performance has been put forward so far. Neither has there been any analysis of the fundamental theoretical properties of the SC-NN-4pdf. This paper narrows the gaps, delivering a formal statement of the class of density functions that can be modeled to any degree of precision by SC-NN-4pdfs, as well as a proof of asymptotic convergence in probability of the SC-NN-4pdf training algorithm under mild conditions for a popular class of neural architectures. These properties of the SC-NN-4pdf lay the groundwork for understanding the strong estimation capabilities that SC-NN-4pdfs have only exhibited empirically so far. Full article
(This article belongs to the Special Issue Neural Networks and Learning Systems)
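The soft-constraint idea can be conveyed with a toy loss: a small network outputs a nonnegative function, the negative log-likelihood fits the data, and a penalty pushes the numerical integral of the output toward 1, i.e. toward the second Kolmogorov axiom. This is a hedged sketch of the principle, not the SC-NN-4pdf algorithm itself; the training loop is omitted.

```python
# Hedged sketch: a soft normalization constraint for a density network.
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(0.0, 1.0, 500)                 # samples from target pdf
grid = np.linspace(-6.0, 6.0, 601)               # integration grid

W1, b1 = rng.normal(size=(16, 1)), np.zeros(16)  # tiny one-hidden-layer net
w2, b2 = rng.normal(size=16), 0.0

def net(x):
    h = np.tanh(np.outer(x, W1[:, 0]) + b1)      # hidden layer
    return np.log1p(np.exp(h @ w2 + b2))         # softplus -> nonnegative

def loss():
    nll = -np.mean(np.log(net(data) + 1e-12))    # data-fit term
    integral = net(grid).mean() * (grid[-1] - grid[0])  # Riemann estimate
    return nll + 10.0 * (integral - 1.0) ** 2    # soft normalization penalty

print(loss())                                    # loss before any training
```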
