Editorial

Advanced Artificial Neural Networks

1 Department of Industrial Engineering and Management, National Chiao Tung University, Hsinchu City 30010, Taiwan
2 Department of Industrial Management, Vanung University, Taoyuan City 32061, Taiwan
3 Department of Industrial Engineering and Management, Chaoyang University of Technology, Wufeng District, Taichung City 41349, Taiwan
* Author to whom correspondence should be addressed.
Algorithms 2018, 11(7), 102; https://doi.org/10.3390/a11070102
Submission received: 4 July 2018 / Accepted: 7 July 2018 / Published: 10 July 2018
(This article belongs to the Special Issue Advanced Artificial Neural Networks)

Abstract

Artificial neural networks (ANNs) have been extensively applied to a wide range of disciplines, such as system identification and control, decision making, pattern recognition, medical diagnosis, finance, data mining, visualization, and others. With advances in computing and networking technologies, more complicated forms of ANNs are expected to emerge, requiring the design of advanced learning algorithms. This Special Issue is intended to provide technical details of the construction and training of advanced ANNs.

1. Introduction

By mimicking the memorizing and information-processing activities of biological neuronal networks, artificial neural networks (ANNs) give computer and information systems human-like classification and approximation capabilities. These capabilities are further enhanced by advances in computing and networking technologies, giving rise to advanced ANNs that continue to grow in size and complexity. This Special Issue is intended to provide technical details of the construction and training of advanced ANNs; such details will be of great interest to researchers in computer science, artificial intelligence, soft computing, and information management, as well as to practicing engineers. The Special Issue features a balance between state-of-the-art research and practical applications, and it provides a forum for researchers and practitioners to review and disseminate quality research work on advanced ANNs and to identify critical issues for further development.

2. Special Issue

All submissions to this Special Issue were reviewed by at least three experts in the ANN area. After this strict review, six papers were accepted; they are introduced below.
Senthilnath et al. [1] built a Bayesian extreme learning machine Kohonen network (BELMKN) to solve the clustering problem for nonlinearly separable datasets. The BELMKN had three levels. In the first level, an extreme learning machine (ELM)-based feature learning approach was applied to map the data onto a d-dimensional space so as to capture the nonlinearity of the data distribution. In the second level, the Bayesian information criterion (BIC) was applied to the ELM-extracted features to estimate the number of clusters. Finally, in the third level, the feature-extracted data, together with the estimated number of clusters, were processed by a Kohonen network to optimize the clustering accuracy.
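To make the three-level pipeline concrete, the sketch below illustrates the general idea in Python: a random ELM-style feature mapping, BIC-based selection of the number of clusters with a Gaussian mixture model, and a small Kohonen-style prototype update loop. The layer sizes, activation function, and toy data are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch of the BELMKN idea (not the authors' implementation).
# Level 1: ELM-style random feature mapping; Level 2: BIC to pick the
# number of clusters; Level 3: a simple Kohonen-style clustering step.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))             # placeholder dataset (n_samples, n_features)

# Level 1: ELM feature learning -- random input weights, sigmoid hidden layer.
d = 10                                    # dimension of the learned feature space (assumed)
W = rng.normal(size=(X.shape[1], d))
b = rng.normal(size=d)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))    # ELM-mapped features

# Level 2: estimate the number of clusters by minimizing BIC over candidate k.
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(H).bic(H)
        for k in range(2, 8)}
n_clusters = min(bics, key=bics.get)

# Level 3: a very small Kohonen network (one prototype per cluster).
protos = H[rng.choice(len(H), n_clusters, replace=False)].copy()
for epoch in range(50):
    lr = 0.5 * (1 - epoch / 50)           # decaying learning rate
    for h in H[rng.permutation(len(H))]:
        winner = np.argmin(np.linalg.norm(protos - h, axis=1))
        protos[winner] += lr * (h - protos[winner])   # move winning prototype toward sample

labels = np.argmin(np.linalg.norm(H[:, None, :] - protos[None], axis=2), axis=1)
print(n_clusters, np.bincount(labels))
```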
In the second paper, Tian et al. [2] built a random vector functional link neural network (RVFLNN) to predict the temperature inside a metro station. Inputs to the RVFLNN included the temperatures at different locations, the passenger flow, and the metro arrival frequency. Various numbers of hidden-layer nodes were tried, and a parametric analysis showed that the ranges of the random weights and biases did not affect the minimization of the prediction error.
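As a rough illustration of how an RVFLNN makes such predictions, the sketch below fixes random hidden weights, concatenates the original inputs (direct links) with the hidden outputs, and solves for the output weights by least squares. The layer size, activation, and synthetic data are assumptions, not the paper's configuration.

```python
# Minimal RVFL network sketch (illustrative sizes, not the paper's settings).
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))             # e.g., nearby temperatures, passenger flow, arrival frequency
y = X @ np.array([0.5, -0.2, 0.1]) + 0.05 * rng.normal(size=200)   # synthetic target

n_hidden = 30                             # number of enhancement nodes (assumed)
W = rng.uniform(-1, 1, size=(X.shape[1], n_hidden))
b = rng.uniform(-1, 1, size=n_hidden)
H = np.tanh(X @ W + b)                    # random, untrained hidden layer

D = np.hstack([X, H, np.ones((len(X), 1))])    # direct links + enhancement nodes + bias
beta, *_ = np.linalg.lstsq(D, y, rcond=None)   # output weights via least squares

y_hat = D @ beta
print("RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```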
Qin et al. [3] proposed a neuro-dynamic programming (NDP) method to simultaneously optimize the fuel economy and the battery state of charge (SOC) of a hybrid electric vehicle. In the NDP approach, the critic network was a multi-resolution wavelet neural network based on the Meyer wavelet function, while the action network was a wavelet neural network based on the Morlet wavelet function. The weights and parameters of both networks were derived using a backpropagation algorithm.
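To show what a wavelet neural network of this kind looks like structurally, the sketch below uses the Morlet mother wavelet as the hidden-node activation, with per-node translation and dilation parameters. Only the forward pass is shown; the layer sizes and inputs are assumptions, and the Meyer-wavelet critic network of the paper is not reproduced.

```python
# Sketch of a single-output wavelet neural network with Morlet activations.
# Illustrative only: sizes and parameters are assumptions, and training by
# backpropagation (as in the paper) is omitted.
import numpy as np

def morlet(t):
    """Morlet mother wavelet, cos(5t) * exp(-t^2 / 2)."""
    return np.cos(5.0 * t) * np.exp(-0.5 * t ** 2)

def wnn_forward(x, W, translations, dilations, v):
    """Project the inputs, apply translated/dilated Morlet wavelets, combine linearly."""
    z = x @ W                                   # (n_samples, n_hidden) pre-activations
    psi = morlet((z - translations) / dilations)
    return psi @ v                              # one output per sample

rng = np.random.default_rng(2)
n_in, n_hidden = 2, 8                           # assumed sizes
W = rng.normal(size=(n_in, n_hidden))
translations = rng.normal(size=n_hidden)
dilations = np.abs(rng.normal(size=n_hidden)) + 0.5
v = rng.normal(size=n_hidden)

x = rng.normal(size=(5, n_in))                  # e.g., SOC deviation and other vehicle-state inputs
print(wnn_forward(x, W, translations, dilations, v))
```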
To improve the feature recognition rate and reduce the time cost (i.e., the time to unlock precisely) of convolutional neural networks (CNNs), Yang and Yang [4] proposed a modified CNN based on dropout and the stochastic gradient descent (SGD) optimizer (MCNN-DS). The proposed methodology and three existing methods, namely the weighted CNN (WCNN), the multilayer perceptron CNN (MLP-CNN), and the support vector machine-optimized ELM (SVM-ELM), were applied to several benchmark cases for comparison. The experimental results supported the superiority of the MCNN-DS approach over the existing methods.
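The combination of dropout layers and an SGD optimizer is the core of the MCNN-DS idea; the PyTorch sketch below wires a generic small CNN that way. The architecture, dropout rate, and learning rate are placeholders, not the values reported in the paper.

```python
# Generic CNN with dropout trained by SGD, in the spirit of MCNN-DS
# (architecture and hyperparameters are assumptions, not the paper's).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Dropout(p=0.5),                    # dropout to reduce overfitting
    nn.Linear(32 * 7 * 7, 128), nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(128, 10),                   # 10 classes, e.g., digit recognition
)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on random 28x28 images.
images = torch.randn(8, 1, 28, 28)
labels = torch.randint(0, 10, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```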
In the fifth paper, Zhang and Tao [5] constructed a back-propagation neural network to fit the degradation model of 6205-type deep groove ball bearings, so as to identify and classify the different fault states of a bearing. To further enhance the classification accuracy, the polynomial fitting principle and the Pearson correlation coefficient were applied to fuse features, in particular skewness, with other features.
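A rough illustration of this kind of correlation-guided feature fusion is sketched below: statistical features such as skewness are computed per vibration segment, and the Pearson correlation of each feature with a degradation index decides which features are fused. The feature set, the toy degradation index, the fusion rule, and the selection threshold are assumptions rather than the authors' procedure.

```python
# Sketch of correlation-guided feature fusion for bearing vibration segments.
# The feature list, degradation index, and threshold are illustrative assumptions.
import numpy as np
from scipy.stats import skew, kurtosis, pearsonr

rng = np.random.default_rng(3)
segments = rng.normal(size=(100, 2048)) * np.linspace(1, 3, 100)[:, None]  # toy run-to-failure data
degradation = np.linspace(0, 1, 100)                                        # toy degradation index

features = {
    "rms": np.sqrt(np.mean(segments ** 2, axis=1)),
    "skewness": skew(segments, axis=1),
    "kurtosis": kurtosis(segments, axis=1),
    "peak": np.max(np.abs(segments), axis=1),
}

# Keep features that correlate strongly with degradation, then fuse by averaging z-scores.
selected = [f for f, v in features.items() if abs(pearsonr(v, degradation)[0]) > 0.8]
z = np.array([(features[f] - features[f].mean()) / features[f].std() for f in selected])
fused = z.mean(axis=0)
print(selected, fused[:5])
```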
In the last paper, Cui et al. [6] built a hybrid deep neural network, composed of a convolutional auto-encoder and a complementary convolutional neural network, to classify remote sensing data from heterogeneous sources. In the proposed methodology, the convolutional auto-encoder was used to extract features from the remote sensing data and reduce their dimensionality. The convolutional neural network then classified the remote sensing data based on the extracted features.
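The sketch below shows how such a hybrid can be wired in PyTorch: a convolutional auto-encoder reconstructs image patches (unsupervised), and its encoder output feeds a small complementary CNN classifier. The patch size, channel counts, and number of classes are placeholders, not the configuration used in the paper.

```python
# Wiring sketch of a convolutional auto-encoder feeding a classifier head
# (channel counts, patch size, and number of classes are assumptions).
import torch
import torch.nn as nn

encoder = nn.Sequential(                          # learns compact features from remote sensing patches
    nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 4 bands in, e.g., R/G/B/NIR
    nn.Conv2d(16, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
)
decoder = nn.Sequential(                          # mirrors the encoder to reconstruct the input
    nn.ConvTranspose2d(8, 16, 2, stride=2), nn.ReLU(),
    nn.ConvTranspose2d(16, 4, 2, stride=2), nn.Sigmoid(),
)
classifier = nn.Sequential(                       # complementary CNN on the reduced representation
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * 8 * 8, 6),       # 6 land-cover classes (assumed)
)

patches = torch.rand(8, 4, 32, 32)                # toy batch of 32x32 multi-band patches

# Stage 1: unsupervised reconstruction loss trains the auto-encoder.
recon_loss = nn.functional.mse_loss(decoder(encoder(patches)), patches)

# Stage 2: the encoder's features are passed to the classifier.
logits = classifier(encoder(patches))
print(recon_loss.item(), logits.shape)
```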

Acknowledgments

The guest editor would like to thank the Algorithms Editor-in-Chief, Henning Fernau, for fully supporting the release of this Special Issue. The guest editor is also grateful to the contributors who shared their research, as well as to the reviewers who gave their valuable time to review the papers.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Senthilnath, J.; Simha, C.S.; Nagaraj, G.; Thapa, M.; Indiramma, M. BELMKN: Bayesian Extreme Learning Machines Kohonen Network. Algorithms 2018, 11, 56. [Google Scholar] [CrossRef]
  2. Tian, Q.; Zhao, W.; Wei, Y.; Pang, L. Thermal Environment Prediction for Metro Stations Based on an RVFL Neural Network. Algorithms 2018, 11, 49. [Google Scholar] [CrossRef]
  3. Qin, F.; Li, W.; Hu, Y.; Xu, G. An Online Energy Management Control for Hybrid Electric Vehicles Based on Neuro-Dynamic Programming. Algorithms 2018, 11, 33. [Google Scholar] [CrossRef]
  4. Yang, J.; Yang, G. Modified Convolutional Neural Network Based on Dropout and the Stochastic Gradient Descent Optimizer. Algorithms 2018, 11, 28. [Google Scholar] [CrossRef]
  5. Zhang, L.; Tao, J. Research on Degeneration Model of Neural Network for Deep Groove Ball Bearing Based on Feature Fusion. Algorithms 2018, 11, 21. [Google Scholar] [CrossRef]
  6. Cui, W.; Zhou, Q.; Zheng, Z. Application of a Hybrid Model Based on a Convolutional Auto-Encoder and Convolutional Neural Network in Object-Oriented Remote Sensing Classification. Algorithms 2018, 11, 9. [Google Scholar] [CrossRef]
