Topic Editors

Dr. Krzysztof Ejsmont
Faculty of Mechanical and Industrial Engineering, Warsaw University of Technology, 02-524 Warsaw, Poland
Dr. Aamer Bilal Asghar
Department of Electrical and Computer Engineering, COMSATS University Islamabad, Lahore 54000, Pakistan
Dr. Yong Wang
School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
Dr. Rodolfo Haber
Center for Automation and Robotics, Polytechnic University of Madrid (UPM) and Spanish Council for Scientific Research (CSIC), Ctra. Campo Real km 0,200 La Poveda, 28500 Madrid, Spain

Advances in Artificial Neural Networks

Abstract submission deadline
closed (31 October 2023)
Manuscript submission deadline
closed (31 December 2023)
Viewed by
96539

Topic Information

Dear Colleagues,

This Topic Issue (TI), interdisciplinary in character, focuses on the exchange of ideas among the various branches of the pure and applied sciences, covering a range of artificial neural network topics and applications, including pattern recognition, computer vision, logical reasoning, knowledge engineering, expert systems, artificial intelligence, intelligent control and intelligent systems. It aims to promote the development of information science and technology.

The central interest of this TI is the mathematical and physical modeling, numerical/analytical study and computation, and experimental investigation of artificial neural networks and their applications, in order to understand their operating mechanisms, potential application value and future development directions.

From the point of view of theoretical aspects, contributions addressing the following topics are welcome:

- Numerical algorithms and procedures;

- Convolutional Neural Network (CNN);

- Recurrent Neural Network (RNN);

- Snapshot ensembles;

- Dropout;

- Bias correction;

- Cyclical learning rates;

- Neuro-Symbolic Hybrid Intelligent Architecture;

- Radial Basis Neural Networks;

- Neural Network for Adaptive Pattern Recognition;

- Neural Network Learning;

- Recent Advances in Neural Network Applications in Process Control;

- Neural Architectures of Fuzzy Petri Nets.

Application engineers, scientists, and research students from all disciplines with an interest in considering neural networks to solve real-world problems are encouraged to contribute to this Topic Issue.

Dr. Krzysztof Ejsmont
Dr. Aamer Bilal Asghar
Dr. Yong Wang
Dr. Rodolfo Haber
Topic Editors

Keywords

  • artificial neural networks (ANNs)
  • neuro-fuzzy systems
  • adaptive neuro-fuzzy inference system (ANFIS)
  • backpropagation
  • training algorithm
  • MPPT algorithm
  • wavelet
  • support vector machines
  • particle swarm optimization (PSO)
  • adaptive cuckoo search optimization (ACSO)
  • partial shading (PS)
  • hybrid optimization

Participating Journals

Journal Name | Impact Factor | CiteScore | Launched Year | First Decision (median) | APC
AI (ai) | 3.1 | 7.2 | 2020 | 17.6 days | CHF 1600
Algorithms (algorithms) | 1.8 | 4.1 | 2008 | 15 days | CHF 1600
Applied Sciences (applsci) | 2.5 | 5.3 | 2011 | 17.8 days | CHF 2400
Information (information) | 2.4 | 6.9 | 2010 | 14.9 days | CHF 1600
Mathematics (mathematics) | 2.3 | 4.0 | 2013 | 17.1 days | CHF 2600

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to enjoy the benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea from being stolen with this time-stamped preprint article;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (38 papers)

21 pages, 612 KiB  
Article
Efficient Optimization of a Support Vector Regression Model with Natural Logarithm of the Hyperbolic Cosine Loss Function for Broader Noise Distribution
by Aykut Kocaoğlu
Appl. Sci. 2024, 14(9), 3641; https://doi.org/10.3390/app14093641 - 25 Apr 2024
Viewed by 731
Abstract
While traditional support vector regression (SVR) models rely on loss functions tailored to specific noise distributions, this research explores an alternative approach: ε-ln SVR, which uses a loss function based on the natural logarithm of the hyperbolic cosine function (lncosh). This function exhibits optimality for a broader family of noise distributions known as power-raised hyperbolic secants (PHSs). We derive the dual formulation of the ε-ln SVR model, which reveals a nonsmooth, nonlinear convex optimization problem. To efficiently overcome these complexities, we propose a novel sequential minimal optimization (SMO)-like algorithm with an innovative working set selection (WSS) procedure. This procedure exploits second-order (SO)-like information by minimizing an upper bound on the second-order Taylor polynomial approximation of consecutive loss function values. Experimental results on benchmark datasets demonstrate the effectiveness of both the ε-ln SVR model with its lncosh loss and the proposed SMO-like algorithm with its computationally efficient WSS procedure. This study provides a promising tool for scenarios with different noise distributions, extending beyond the commonly assumed Gaussian to the broader PHS family. Full article
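For readers unfamiliar with the loss referenced above, the following minimal sketch (not the authors' implementation) evaluates an lncosh-type loss in Python; the ε-insensitive tube is an assumption suggested by the model's name, and the numbers are placeholders.

```python
import numpy as np

def lncosh_loss(residual, eps=0.0):
    """ln(cosh(r)) applied outside an eps-insensitive tube (the tube is an
    illustrative assumption); behaves quadratically near zero and linearly
    for large residuals, similar to a smooth Huber loss."""
    r = np.maximum(np.abs(residual) - eps, 0.0)
    # Numerically stable log(cosh(r)) = r + log1p(exp(-2r)) - log(2) for r >= 0
    return r + np.log1p(np.exp(-2.0 * r)) - np.log(2.0)

print(lncosh_loss(np.array([-2.0, -0.1, 0.0, 0.1, 2.0]), eps=0.05))
```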
(This article belongs to the Topic Advances in Artificial Neural Networks)

15 pages, 5464 KiB  
Article
Long-Term Forecasting Using MAMTF: A Matrix Attention Model Based on the Time and Frequency Domains
by Kaixin Guo and Xin Yu
Appl. Sci. 2024, 14(7), 2893; https://doi.org/10.3390/app14072893 - 29 Mar 2024
Viewed by 791
Abstract
There are many time series forecasting methods, but there are few research methods for long-term multivariate time series forecasting, which are mainly dominated by a series of forecasting models developed on the basis of a transformer. The aim of this study is to perform forecasting for multivariate time series data and to improve the forecasting accuracy of the model. In the recent past, it has appeared that the prediction effect of linear models surpasses that of the family of self-attention mechanism models, which encourages us to look for new methods to solve the problem of long-term multivariate time series forecasting. In order to overcome the problem that the temporal order of information is easily broken in the self-attention family and that it is difficult to capture information on long-distance data using recurrent neural network models, we propose a matrix attention mechanism, which is able to weight each previous data point equally without breaking the temporal order of the data, so that the overall data information can be fully utilized. We used the matrix attention mechanism as the basic module to construct the frequency domain block and time domain block. Since complex and variable seasonal component features are difficult to capture in the time domain, mapping them to the frequency domain reduces the complexity of the seasonal components themselves and facilitates data feature extraction. Therefore, we use the frequency domain block to extract the seasonal information with high randomness and poor regularity to help the model capture the local dynamics. The time domain block is used to extract the smooth floating trend component information to help the model capture long-term change patterns. This also improves the overall prediction performance of the model. It is experimentally demonstrated that our model achieves the best prediction results on three public datasets and one private dataset. Full article
(This article belongs to the Topic Advances in Artificial Neural Networks)

22 pages, 648 KiB  
Article
Synchronization Analysis for Quaternion-Valued Delayed Neural Networks with Impulse and Inertia via a Direct Technique
by Juan Yu, Kailong Xiong and Cheng Hu
Mathematics 2024, 12(7), 949; https://doi.org/10.3390/math12070949 - 23 Mar 2024
Cited by 1 | Viewed by 757
Abstract
The asymptotic synchronization of quaternion-valued delayed neural networks with impulses and inertia is studied in this article. Firstly, a convergence result on piecewise differentiable functions is developed, which is a generalization of the Barbalat lemma and provides a powerful tool for the convergence analysis of discontinuous systems. To achieve synchronization, a constant gain-based control scheme and an adaptive gain-based control strategy are directly proposed for response quaternion-valued models. In the convergence analysis, a direct analysis method is developed to discuss the synchronization without using the separation technique or reduced-order transformation. In particular, some Lyapunov functionals, composed of the state variables and their derivatives, are directly constructed and some synchronization criteria represented by matrix inequalities are obtained based on quaternion theory. Some numerical results are shown to further confirm the theoretical analysis. Full article
(This article belongs to the Topic Advances in Artificial Neural Networks)

17 pages, 950 KiB  
Article
PDEC: A Framework for Improving Knowledge Graph Reasoning Performance through Predicate Decomposition
by Xin Tian and Yuan Meng
Algorithms 2024, 17(3), 129; https://doi.org/10.3390/a17030129 - 21 Mar 2024
Viewed by 1224
Abstract
The judicious configuration of predicates is a crucial but often overlooked aspect in the field of knowledge graphs. While previous research has primarily focused on the precision of triples in assessing knowledge graph quality, the rationality of predicates has been largely ignored. This paper introduces an innovative approach aimed at enhancing knowledge graph reasoning by addressing the issue of predicate polysemy. Predicate polysemy refers to instances where a predicate possesses multiple meanings, introducing ambiguity into the knowledge graph. We present an adaptable optimization framework that effectively addresses predicate polysemy, thereby enhancing reasoning capabilities within knowledge graphs. Our approach serves as a versatile and generalized framework applicable to any reasoning model, offering a scalable and flexible solution to enhance performance across various domains and applications. Through rigorous experimental evaluations, we demonstrate the effectiveness and adaptability of our methodology, showing significant improvements in knowledge graph reasoning accuracy. Our findings underscore that discerning predicate polysemy is a crucial step towards achieving a more dependable and efficient knowledge graph reasoning process. Even in the age of large language models, the optimization and induction of predicates remain relevant in ensuring interpretable reasoning. Full article
(This article belongs to the Topic Advances in Artificial Neural Networks)

16 pages, 1044 KiB  
Article
PVI-Net: Point–Voxel–Image Fusion for Semantic Segmentation of Point Clouds in Large-Scale Autonomous Driving Scenarios
by Zongshun Wang, Ce Li, Jialin Ma, Zhiqiang Feng and Limei Xiao
Information 2024, 15(3), 148; https://doi.org/10.3390/info15030148 - 7 Mar 2024
Cited by 1 | Viewed by 1669
Abstract
In this study, we introduce a novel framework for the semantic segmentation of point clouds in autonomous driving scenarios, termed PVI-Net. This framework uniquely integrates three different data perspectives—point clouds, voxels, and distance maps—executing feature extraction through three parallel branches. Throughout this process, we ingeniously design a point cloud–voxel cross-attention mechanism and a multi-perspective feature fusion strategy for point images. These strategies facilitate information interaction across different feature dimensions of perspectives, thereby optimizing the fusion of information from various viewpoints and significantly enhancing the overall performance of the model. The network employs a U-Net structure and residual connections, effectively merging and encoding information to improve the precision and efficiency of semantic segmentation. We validated the performance of PVI-Net on the SemanticKITTI and nuScenes datasets. The results demonstrate that PVI-Net surpasses most of the previous methods in various performance metrics. Full article
(This article belongs to the Topic Advances in Artificial Neural Networks)

13 pages, 1805 KiB  
Article
Understanding Self-Supervised Learning of Speech Representation via Invariance and Redundancy Reduction
by Yusuf Brima, Ulf Krumnack, Simone Pika and Gunther Heidemann
Information 2024, 15(2), 114; https://doi.org/10.3390/info15020114 - 15 Feb 2024
Viewed by 1914
Abstract
Self-supervised learning (SSL) has emerged as a promising paradigm for learning flexible speech representations from unlabeled data. By designing pretext tasks that exploit statistical regularities, SSL models can capture useful representations that are transferable to downstream tasks. Barlow Twins (BTs) is an SSL technique inspired by theories of redundancy reduction in human perception. In downstream tasks, BTs representations accelerate learning and transfer this learning across applications. This study applies BTs to speech data and evaluates the obtained representations on several downstream tasks, showing the applicability of the approach. However, limitations exist in disentangling key explanatory factors, with redundancy reduction and invariance alone being insufficient for factorization of learned latents into modular, compact, and informative codes. Our ablation study isolated gains from invariance constraints, but the gains were context-dependent. Overall, this work substantiates the potential of Barlow Twins for sample-efficient speech encoding. However, challenges remain in achieving fully hierarchical representations. The analysis methodology and insights presented in this paper pave a path for extensions incorporating further inductive priors and perceptual principles to further enhance the BTs self-supervision framework. Full article
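For orientation, the Barlow Twins objective referred to above can be written down generically as follows (a NumPy sketch of the published loss, not the authors' speech-specific code); the batch size, embedding width, and λ value are placeholders.

```python
import numpy as np

def barlow_twins_loss(z_a, z_b, lam=5e-3):
    """z_a, z_b: (batch, dim) embeddings of two augmented views of the same inputs.
    The cross-correlation matrix is pushed toward the identity: diagonal terms
    enforce invariance, off-diagonal terms penalize redundancy."""
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-9)   # standardize each dimension over the batch
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-9)
    n, _ = z_a.shape
    c = z_a.T @ z_b / n                                # cross-correlation matrix (dim, dim)
    on_diag = ((np.diag(c) - 1.0) ** 2).sum()          # invariance term
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()  # redundancy-reduction term
    return on_diag + lam * off_diag

rng = np.random.default_rng(0)
z1 = rng.normal(size=(32, 16))
z2 = z1 + 0.1 * rng.normal(size=(32, 16))              # toy "second view"
print(barlow_twins_loss(z1, z2))
```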
(This article belongs to the Topic Advances in Artificial Neural Networks)

17 pages, 7382 KiB  
Article
Enhancing Communication Efficiency and Training Time Uniformity in Federated Learning through Multi-Branch Networks and the Oort Algorithm
by Pin-Hung Juan and Ja-Ling Wu
Algorithms 2024, 17(2), 52; https://doi.org/10.3390/a17020052 - 23 Jan 2024
Cited by 1 | Viewed by 2185
Abstract
In this study, we present a federated learning approach that combines a multi-branch network and the Oort client selection algorithm to improve the performance of federated learning systems. This method successfully addresses the significant issue of non-iid data, a challenge not adequately tackled by the commonly used MFedAvg method. Additionally, one of the key innovations of this research is the introduction of uniformity, a metric that quantifies the disparity in training time amongst participants in a federated learning setup. This novel concept not only aids in identifying stragglers but also provides valuable insights into assessing the fairness and efficiency of the system. The experimental results underscore the merits of the integrated multi-branch network with the Oort client selection algorithm and highlight the crucial role of uniformity in designing and evaluating federated learning systems. Full article
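The paper's uniformity metric is described only qualitatively in this abstract, so the sketch below uses one plausible formalization, based on the coefficient of variation of per-client round times, purely for illustration; it is not the authors' definition.

```python
import numpy as np

def training_time_uniformity(times):
    """One plausible uniformity score in [0, 1]: 1 minus the coefficient of
    variation of per-client training times (1.0 = perfectly uniform).
    This definition is an illustrative assumption, not the paper's formula."""
    t = np.asarray(times, dtype=float)
    return max(0.0, 1.0 - t.std() / t.mean())

print(training_time_uniformity([12.0, 13.1, 11.8, 12.4]))   # near 1: clients finish together
print(training_time_uniformity([12.0, 13.1, 11.8, 55.0]))   # a straggler drags the score down
```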
(This article belongs to the Topic Advances in Artificial Neural Networks)

26 pages, 10339 KiB  
Article
Fault Diagnosis Strategy Based on BOA-ResNet18 Method for Motor Bearing Signals with Simulated Hydrogen Refueling Station Operating Noise
by Shuyi Liu, Shengtao Chen, Zuzhi Chen and Yongjun Gong
Appl. Sci. 2024, 14(1), 157; https://doi.org/10.3390/app14010157 - 23 Dec 2023
Cited by 2 | Viewed by 1416
Abstract
The harsh working environment of hydrogen refueling stations often causes equipment failure and is vulnerable to mechanical noise during monitoring. This limits the accuracy of equipment monitoring, ultimately decreasing efficiency. To address this issue, this paper presents a motor bearing vibration signal diagnosis method that employs a Bayesian optimization (BOA) residual neural network (ResNet). The industrial noise signal of the hydrogenation station is simulated and then combined with the motor bearing signal. The resulting one-dimensional bearing signal is processed and transformed into a two-dimensional signal using Fast Fourier Transform (FFT). Afterwards, the signal is segmented using the sliding window translation method to enhance the data volume. After comparing signal feature extraction and classification results from various convolutional neural network models, ResNet18 yields the best classification accuracy, achieving a training accuracy of 89.50% with the shortest computation time. Afterwards, the hyperparameters of ResNet18 such as InitialLearnRate, Momentum, and L2Regularization Parameter are optimized using the Bayesian optimization algorithm. The experiment findings demonstrate a diagnostic accuracy of 99.31% for the original signal model, while the accuracy for the bearing signal, with simulated industrial noise from the hydrogenation station, can reach over 92%. Full article
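The signal preprocessing described above (1-D vibration signal, sliding-window segmentation, FFT to obtain a 2-D representation) follows a common pattern that can be sketched as below; the window length, stride, and sampling rate are placeholder values, not those used in the paper.

```python
import numpy as np

def windows_to_spectra(signal, win=1024, stride=256):
    """Slide a window over a 1-D vibration signal and return the FFT magnitude
    of each segment, stacked row by row into a 2-D array."""
    segments = [signal[i:i + win] for i in range(0, len(signal) - win + 1, stride)]
    return np.abs(np.fft.rfft(np.stack(segments), axis=1))

fs = 12_000                                   # placeholder sampling rate (Hz)
t = np.arange(fs) / fs
vibration = np.sin(2 * np.pi * 157 * t) + 0.5 * np.random.randn(fs)   # toy bearing tone + noise
spectra = windows_to_spectra(vibration)
print(spectra.shape)   # (num_windows, win // 2 + 1), ready to be resized into a 2-D "image"
```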
(This article belongs to the Topic Advances in Artificial Neural Networks)

19 pages, 7710 KiB  
Article
Application of YOLOv8 and Detectron2 for Bullet Hole Detection and Score Calculation from Shooting Cards
by Marya Butt, Nick Glas, Jaimy Monsuur, Ruben Stoop and Ander de Keijzer
AI 2024, 5(1), 72-90; https://doi.org/10.3390/ai5010005 - 22 Dec 2023
Cited by 3 | Viewed by 6106
Abstract
Scoring targets in shooting sports is a crucial and time-consuming task that relies on manually counting bullet holes. This paper introduces an automatic score detection model using object detection techniques. The study contributes to the field of computer vision by comparing the performance of seven models (belonging to two different architectural setups) and by making the dataset publicly available. Another value-added aspect is the inclusion of three variants of the object detection model, YOLOv8, recently released in 2023 (at the time of writing). Five of the used models are single-shot detectors, while two belong to the two-shot detectors category. The dataset was manually captured from the shooting range and expanded by generating more versatile data using Python code. Before the models were trained, the dataset was resized (640 × 640) and augmented using the Roboflow API. The trained models were then assessed on the test dataset, and their performance was compared using metrics such as mAP50, mAP50-90, precision, and recall. The results showed that YOLOv8 models can detect multiple objects with good confidence scores. Among these models, YOLOv8m performed the best, with the highest mAP50 value of 96.7%, followed by YOLOv8s with an mAP50 of 96.5%. It is suggested that if the system is to be implemented in a real-time environment, YOLOv8s is the better choice, since it took significantly less inference time (2.3 ms) than YOLOv8m (5.7 ms) and yet generated a competitive mAP50 of 96.5%. Full article
(This article belongs to the Topic Advances in Artificial Neural Networks)

16 pages, 2083 KiB  
Article
Deep Error-Correcting Output Codes
by Li-Na Wang, Hongxu Wei, Yuchen Zheng, Junyu Dong and Guoqiang Zhong
Algorithms 2023, 16(12), 555; https://doi.org/10.3390/a16120555 - 4 Dec 2023
Cited by 2 | Viewed by 1742
Abstract
Ensemble learning, online learning and deep learning are very effective and versatile in a wide spectrum of problem domains, such as feature extraction, multi-class classification and retrieval. In this paper, combining the ideas of ensemble learning, online learning and deep learning, we propose a novel deep learning method called deep error-correcting output codes (DeepECOCs). DeepECOCs are composed of multiple layers of the ECOC module, which combines several incremental support vector machines (incremental SVMs) as base classifiers. In this novel deep architecture, each ECOC module can be considered as two successive layers of the network, while the incremental SVMs can be viewed as weighted links between two successive layers. In the pre-training procedure, supervisory information, i.e., class labels, can be used during the network initialization. The incremental SVMs make this procedure very efficient, especially for large-scale applications. We have conducted extensive experiments to compare DeepECOCs with traditional ECOC, feature learning and deep learning algorithms. The results demonstrate that DeepECOCs not only perform better than existing ECOC and feature learning algorithms, but also perform comparably to related deep learning methods in most cases. Full article
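To make the ECOC building block concrete, here is a generic error-correcting-output-codes sketch with ordinary binary classifiers and Hamming-style decoding; the code matrix, the toy data, and the use of logistic regression (rather than the incremental SVMs that DeepECOCs stack into layers) are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ecoc_fit_predict(X_tr, y_tr, X_te, code):
    """code: (n_classes, n_bits) matrix of {0, 1}. One binary classifier per bit;
    a test point is assigned the class whose codeword is nearest in Hamming distance."""
    clfs = [LogisticRegression(max_iter=1000).fit(X_tr, code[y_tr, b]) for b in range(code.shape[1])]
    bits = np.column_stack([c.predict(X_te) for c in clfs])          # predicted codeword per sample
    dists = np.abs(bits[:, None, :] - code[None, :, :]).sum(axis=2)  # Hamming distance to each class codeword
    return dists.argmin(axis=1)

# Toy example: 3 classes encoded with 5 bits (codewords chosen arbitrarily)
code = np.array([[1, 0, 1, 0, 1],
                 [0, 1, 1, 1, 0],
                 [1, 1, 0, 0, 0]])
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4)) + np.repeat(np.arange(3), 100)[:, None]
y = np.repeat(np.arange(3), 100)
print((ecoc_fit_predict(X[::2], y[::2], X[1::2], code) == y[1::2]).mean())
```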
(This article belongs to the Topic Advances in Artificial Neural Networks)

25 pages, 4779 KiB  
Article
NDARTS: A Differentiable Architecture Search Based on the Neumann Series
by Xiaoyu Han, Chenyu Li, Zifan Wang and Guohua Liu
Algorithms 2023, 16(12), 536; https://doi.org/10.3390/a16120536 - 25 Nov 2023
Viewed by 1488
Abstract
Neural architecture search (NAS) has shown great potential in discovering powerful and flexible network models, becoming an important branch of automatic machine learning (AutoML). Although search methods based on reinforcement learning and evolutionary algorithms can find high-performance architectures, these search methods typically require hundreds of GPU days. Unlike searching in a discrete search space based on reinforcement learning and evolutionary algorithms, the differentiable neural architecture search (DARTS) continuously relaxes the search space, allowing for optimization using gradient-based methods. Based on DARTS, we propose NDARTS in this article. The new algorithm uses the Implicit Function Theorem and the Neumann series to approximate the hyper-gradient, which obtains better results than DARTS. In the simulation experiment, an ablation experiment was carried out to study the influence of the different parameters on the NDARTS algorithm and to determine the optimal weight, then the best performance of the NDARTS algorithm was searched for in the DARTS search space and the NAS-BENCH-201 search space. Compared with other NAS algorithms, the results showed that NDARTS achieved excellent results on the CIFAR-10, CIFAR-100, and ImageNet datasets, and was an effective neural architecture search algorithm. Full article
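The Neumann-series device named in this abstract can be illustrated outside the NAS setting: a truncated series approximates the inverse-Hessian-vector product that appears in the hyper-gradient. The toy matrix, step size, and truncation depth below are arbitrary, and practical implementations use Hessian-vector products instead of an explicit Hessian.

```python
import numpy as np

def neumann_inverse_hvp(H, v, alpha=0.2, k=200):
    """Approximate H^{-1} v with a truncated Neumann series:
    H^{-1} v ≈ alpha * sum_{i=0..k} (I - alpha*H)^i v,
    valid when the spectral radius of (I - alpha*H) is below 1."""
    p = v.copy()      # current term (I - alpha*H)^i v
    acc = v.copy()    # running sum of terms
    for _ in range(k):
        p = p - alpha * (H @ p)
        acc = acc + p
    return alpha * acc

H = np.array([[3.0, 0.5], [0.5, 2.0]])   # toy positive-definite stand-in for a Hessian
v = np.array([1.0, -1.0])
print(neumann_inverse_hvp(H, v))
print(np.linalg.solve(H, v))             # exact value, for comparison
```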
(This article belongs to the Topic Advances in Artificial Neural Networks)

16 pages, 2931 KiB  
Article
Flow Prediction of a Measurement and Control Gate Based on an Optimized Back Propagation Neural Network
by Zheng Hou, Jiayong Niu, Jie Zhu and Liguo Lu
Appl. Sci. 2023, 13(22), 12313; https://doi.org/10.3390/app132212313 - 14 Nov 2023
Viewed by 1075
Abstract
The measurement and control gate, as a new type of measurement and control equipment, has been widely used for water quantity control in irrigation areas. However, there is a lack of methods for calibrating the flow inside the measurement box at present. This paper establishes a flow prediction model based on a back propagation (BP) neural network and its optimization algorithm by using 450 sets of sample data obtained from the indoor gate overflow test and verified the effectiveness and accuracy of the prediction model by using another 205 sets of sample data. The results show that the gate flow prediction model based on a BP neural network and its optimization algorithm has self-adaptability to different flow patterns, and its prediction accuracy is significantly higher than that of the traditional water measurement method. Compared to the unoptimized BP model, the BP model optimized by the genetic algorithm (GA) or particle swarm optimization (PSO) has higher prediction accuracy and better error distribution. Both GA and PSO algorithms can be used to optimize the initial weights and thresholds of the BP flow prediction model. However, by comprehensively analyzing the prediction accuracy, error distribution, and running time, the PSO algorithm has better optimization performance compared to the GA algorithm. The prediction model can provide a reference for flow rate calibration and the anomaly rejection of measurement and control gates in the irrigation area. Full article
(This article belongs to the Topic Advances in Artificial Neural Networks)

14 pages, 2031 KiB  
Article
Using Augmented Small Multimodal Models to Guide Large Language Models for Multimodal Relation Extraction
by Wentao He, Hanjie Ma, Shaohua Li, Hui Dong, Haixiang Zhang and Jie Feng
Appl. Sci. 2023, 13(22), 12208; https://doi.org/10.3390/app132212208 - 10 Nov 2023
Cited by 1 | Viewed by 3184
Abstract
Multimodal Relation Extraction (MRE) is a core task for constructing Multimodal Knowledge Graphs (MKGs). Most current research is based on fine-tuning small-scale single-modal image and text pre-trained models, but we find that image-text datasets from network media suffer from data scarcity, simple text data, and abstract image information, which require a lot of external knowledge for supplementation and reasoning. We use Multimodal Relation Data augmentation (MRDA) to address the data scarcity problem in MRE, and propose a Flexible Threshold Loss (FTL) to handle the imbalanced entity pair distribution and long-tailed classes. After obtaining prompt information from the small model as a guide model, we employ a Large Language Model (LLM) as a knowledge engine to acquire common sense and reasoning abilities. Notably, both stages of our framework are flexibly replaceable, with the first stage adapting to multimodal related classification tasks for small models, and the second stage replaceable by more powerful LLMs. Through experiments, our EMRE2llm model framework achieves state-of-the-art performance on the challenging MNRE dataset, reaching an 82.95% F1 score on the test set. Full article
(This article belongs to the Topic Advances in Artificial Neural Networks)

20 pages, 2452 KiB  
Article
Morlet Wavelet Neural Network Investigations to Present the Numerical Investigations of the Prediction Differential Model
by Zulqurnain Sabir, Adnène Arbi, Atef F. Hashem and Mohamed A. Abdelkawy
Mathematics 2023, 11(21), 4480; https://doi.org/10.3390/math11214480 - 29 Oct 2023
Cited by 6 | Viewed by 1190
Abstract
In this study, a design of Morlet wavelet neural networks (MWNNs) is presented to solve the prediction differential model (PDM) by applying the global approximation capability of a genetic algorithm (GA) and a local quick interior-point algorithm scheme (IPAS), i.e., MWNN-GAIPAS. The famous and historical PDM is known as a variant of the functional differential system that works as the opposite of the delay differential models. A fitness function is constructed by using the mean square error and optimized through the GA-IPAS for solving the PDM. Three PDM examples have been presented numerically to check the authenticity of the MWNN-GAIPAS. To assess the designed MWNN-GAIPAS, the obtained outputs are compared with the exact results. Moreover, the neuron analysis is performed by taking 3, 10, and 20 neurons. The statistical observations have been performed to authenticate the reliability of the MWNN-GAIPAS for solving the PDM. Full article
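As a rough illustration of the approximating element (the GA and interior-point optimization loop is omitted), a Morlet wavelet network forms a weighted sum of Morlet activations and is scored by a mean-square-error fitness; the network size, target function, and random parameters below are placeholders, not the paper's configuration.

```python
import numpy as np

def morlet(x, omega=5.0):
    """Real Morlet wavelet used as the hidden-neuron activation."""
    return np.cos(omega * x) * np.exp(-0.5 * x ** 2)

def mwnn(x, w, b, a):
    """Single-hidden-layer Morlet wavelet network: y(x) = sum_j a_j * psi(w_j * x + b_j)."""
    return morlet(np.outer(x, w) + b) @ a

# Fitness of a random 10-neuron network against a toy target (stand-in for the PDM residual)
rng = np.random.default_rng(1)
w, b, a = rng.normal(size=10), rng.normal(size=10), rng.normal(size=10)
x = np.linspace(0, 1, 50)
fitness = np.mean((mwnn(x, w, b, a) - np.sin(2 * np.pi * x)) ** 2)   # mean-square-error fitness
print(fitness)
```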
(This article belongs to the Topic Advances in Artificial Neural Networks)

17 pages, 4899 KiB  
Article
A Visual Fault Detection Algorithm of Substation Equipment Based on Improved YOLOv5
by Yuezhong Wu, Falong Xiao, Fumin Liu, Yuxuan Sun, Xiaoheng Deng, Lixin Lin and Congxu Zhu
Appl. Sci. 2023, 13(21), 11785; https://doi.org/10.3390/app132111785 - 27 Oct 2023
Cited by 7 | Viewed by 1435
Abstract
The development of artificial intelligence technology provides a new model for substation inspection in the power industry, and effective defect diagnosis can avoid the impact of substation equipment defects on the power grid and improve the reliability and stability of power grid operation. Aiming at the problem of poor recognition of small targets due to large differences in equipment morphology in complex substation scenarios, a visual fault detection algorithm for substation equipment based on improved YOLOv5 is proposed. Firstly, a deformable convolution module is introduced into the backbone network to achieve adaptive learning of scale and receptive field size. Secondly, in the neck of the network, a simple and effective BiFPN structure is used instead of PANet. The multi-level feature combination of the network is adjusted by a floating adaptive weighted fusion strategy. Lastly, an additional small object detection layer is added to detect shallower feature maps. Experimental results demonstrate that the improved algorithm effectively enhances power equipment and defect recognition performance. The overall recall rate has increased by 7.7%, the precision rate has increased by nearly 6.3%, and mAP@0.5 has improved by 4.6%. The improved model exhibits superior performance. Full article
(This article belongs to the Topic Advances in Artificial Neural Networks)

15 pages, 1021 KiB  
Article
Comparative Analysis of Deep Learning Architectures and Vision Transformers for Musical Key Estimation
by Manav Garg, Pranshav Gajjar, Pooja Shah, Madhu Shukla, Biswaranjan Acharya, Vassilis C. Gerogiannis and Andreas Kanavos
Information 2023, 14(10), 527; https://doi.org/10.3390/info14100527 - 28 Sep 2023
Cited by 2 | Viewed by 2858
Abstract
The musical key serves as a crucial element in a piece, offering vital insights into the tonal center, harmonic structure, and chord progressions while enabling tasks such as transposition and arrangement. Moreover, accurate key estimation finds practical applications in music recommendation systems and automatic music transcription, making it relevant across academic and industrial domains. This paper presents a comprehensive comparison between standard deep learning architectures and emerging vision transformers, leveraging their success in various domains. We evaluate their performance on a specific subset of the GTZAN dataset, analyzing six different deep learning models. Our results demonstrate that DenseNet, a conventional deep learning architecture, achieves remarkable accuracy of 91.64%, outperforming vision transformers. However, we delve deeper into the analysis to shed light on the temporal characteristics of each deep learning model. Notably, the vision transformer and SWIN transformer exhibit a slight decrease in overall performance (1.82% and 2.29%, respectively), yet they demonstrate superior performance in temporal metrics compared to the DenseNet architecture. The significance of our findings lies in their contribution to the field of musical key estimation, where accurate and efficient algorithms play a pivotal role. By examining the strengths and weaknesses of deep learning architectures and vision transformers, we can gain valuable insights for practical implementations, particularly in music recommendation systems and automatic music transcription. Our research provides a foundation for future advancements and encourages further exploration in this area. Full article
(This article belongs to the Topic Advances in Artificial Neural Networks)

20 pages, 8327 KiB  
Article
Segmentation Head Networks with Harnessing Self-Attention and Transformer for Insulator Surface Defect Detection
by Jun Guo, Tiancheng Li and Baigang Du
Appl. Sci. 2023, 13(16), 9109; https://doi.org/10.3390/app13169109 - 10 Aug 2023
Cited by 6 | Viewed by 1402
Abstract
Current methodologies for insulator defect detection are hindered by limitations in real-world applicability, spatial constraints, high computational demand, and segmentation challenges. Addressing these shortcomings, this paper presents a robust, fast detection algorithm that combines segmentation head networks with self-attention and a transformer (HST-Net), based on You Only Look Once (YOLO) v5, to recognize and assess the extent and types of damage on the insulator surface. Firstly, the original backbone network is replaced by the transformer cross-stage partial (Transformer-CSP) networks to enrich the network’s ability to capture information across different depths of network feature maps. Secondly, an insulator defect segmentation head network is presented to handle the segmentation of defect areas such as insulator losses and flashovers. It facilitates instance-level mask prediction for each insulator object, significantly reducing the influence of intricate backgrounds. Finally, comparative experiment results show that both the positioning accuracy and the defect segmentation accuracy of the proposed model surpass those of other popular models. It can be concluded that the proposed model not only satisfies the requirements for a balance between accuracy and speed in power facility inspection, but also provides fresh perspectives for research in other defect detection domains. Full article
(This article belongs to the Topic Advances in Artificial Neural Networks)

24 pages, 2175 KiB  
Article
Ensemble System of Deep Neural Networks for Single-Channel Audio Separation
by Musab T. S. Al-Kaltakchi, Ahmad Saeed Mohammad and Wai Lok Woo
Information 2023, 14(7), 352; https://doi.org/10.3390/info14070352 - 21 Jun 2023
Cited by 2 | Viewed by 1655
Abstract
Speech separation is a well-known problem, especially when there is only one sound mixture available. Estimating the Ideal Binary Mask (IBM) is one solution to this problem. Recent research has focused on the supervised classification approach. The challenge of extracting features from the sources is critical for this method. Speech separation has been accomplished by using a variety of feature extraction models. The majority of them, however, concentrate on a single feature. The complementary nature of various features has not been thoroughly investigated. In this paper, we propose a deep neural network (DNN) ensemble architecture to fully explore the complementary nature of the diverse features obtained from raw acoustic features. We examined the penultimate discriminative representations instead of employing the features acquired from the output layer. The learned representations were also fused to produce a new feature vector, which was then classified by using the Extreme Learning Machine (ELM). In addition, a genetic algorithm (GA) was created to optimize the parameters globally. The results of the experiments showed that our proposed system fully considered the various features and produced a high-quality IBM under different conditions. Full article
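The final classification stage mentioned above, an Extreme Learning Machine, is simple enough to sketch generically: a fixed random hidden layer followed by a pseudo-inverse solve for the output weights. The hidden-layer size and toy data below are placeholders and are unrelated to the paper's acoustic features.

```python
import numpy as np

class ELM:
    """Minimal Extreme Learning Machine: random fixed hidden layer + pseudo-inverse output layer."""
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, Y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)           # random, untrained hidden features
        self.beta = np.linalg.pinv(H) @ Y          # least-squares output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 8))
Y = (X[:, :2].sum(axis=1) > 0).astype(float)[:, None]   # toy binary target (e.g., an IBM cell on/off)
elm = ELM().fit(X[:150], Y[:150])
print(((elm.predict(X[150:]) > 0.5) == Y[150:]).mean())  # held-out accuracy
```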
(This article belongs to the Topic Advances in Artificial Neural Networks)

17 pages, 3748 KiB  
Article
NSGA-PINN: A Multi-Objective Optimization Method for Physics-Informed Neural Network Training
by Binghang Lu, Christian Moya and Guang Lin
Algorithms 2023, 16(4), 194; https://doi.org/10.3390/a16040194 - 3 Apr 2023
Cited by 4 | Viewed by 4110
Abstract
This paper presents NSGA-PINN, a multi-objective optimization framework for the effective training of physics-informed neural networks (PINNs). The proposed framework uses the non-dominated sorting genetic algorithm (NSGA-II) to enable traditional stochastic gradient optimization algorithms (e.g., ADAM) to escape local minima effectively. Additionally, the NSGA-II algorithm enables satisfying the initial and boundary conditions encoded into the loss function during physics-informed training precisely. We demonstrate the effectiveness of our framework by applying NSGA-PINN to several ordinary and partial differential equation problems. In particular, we show that the proposed framework can handle challenging inverse problems with noisy data. Full article
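As a reminder of what the physics-informed loss terms treated as competing objectives look like, here is a generic PyTorch sketch for the toy ODE du/dt = -u with u(0) = 1; the network size, collocation points, and plain ADAM loop are placeholders, and the weighted sum at the end stands in for the NSGA-II treatment described in the paper.

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))

def pinn_losses(net, n_col=64):
    """Return the two objectives a PINN balances: the ODE residual loss for
    du/dt = -u on (0, 1), and the initial-condition loss u(0) = 1."""
    t = torch.rand(n_col, 1, requires_grad=True)
    u = net(t)
    du_dt = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    residual = torch.mean((du_dt + u) ** 2)
    ic = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()
    return residual, ic

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(200):                      # plain ADAM here; NSGA-PINN would evolve these objectives instead
    opt.zero_grad()
    res, ic = pinn_losses(net)
    (res + ic).backward()                 # simple weighted sum as a stand-in for the multi-objective step
    opt.step()
print(float(res), float(ic))
```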
(This article belongs to the Topic Advances in Artificial Neural Networks)

44 pages, 5170 KiB  
Review
Resilience and Resilient Systems of Artificial Intelligence: Taxonomy, Models and Methods
by Viacheslav Moskalenko, Vyacheslav Kharchenko, Alona Moskalenko and Borys Kuzikov
Algorithms 2023, 16(3), 165; https://doi.org/10.3390/a16030165 - 18 Mar 2023
Cited by 7 | Viewed by 5798
Abstract
Artificial intelligence systems are increasingly being used in industrial applications, security and military contexts, disaster response complexes, policing and justice practices, finance, and healthcare systems. However, disruptions to these systems can have negative impacts on health, mortality, human rights, and asset values. The protection of such systems from various types of destructive influences is thus a relevant area of research. The vast majority of previously published works are aimed at reducing vulnerability to certain types of disturbances or implementing certain resilience properties. At the same time, the authors either do not consider the concept of resilience as such, or their understanding of it varies greatly. The aim of this study is to present a systematic approach to analyzing the resilience of artificial intelligence systems, along with an analysis of relevant scientific publications. Our methodology involves the formation of a set of resilience factors, organizing and defining taxonomic and ontological relationships for resilience factors of artificial intelligence systems, and analyzing relevant resilience solutions and challenges. This study analyzes the sources of threats and the methods used to ensure each resilience property of artificial intelligence systems. As a result, the potential to create a resilient artificial intelligence system by configuring the architecture and learning scenarios is confirmed. The results can serve as a roadmap for establishing technical requirements for forthcoming artificial intelligence systems, as well as a framework for assessing the resilience of already developed artificial intelligence systems. Full article
(This article belongs to the Topic Advances in Artificial Neural Networks)

16 pages, 446 KiB  
Article
GRNN: Graph-Retraining Neural Network for Semi-Supervised Node Classification
by Jianhe Li and Suohai Fan
Algorithms 2023, 16(3), 126; https://doi.org/10.3390/a16030126 - 22 Feb 2023
Cited by 1 | Viewed by 2718
Abstract
In recent years, graph neural networks (GNNs) have played an important role in graph representation learning and have successfully achieved excellent results in semi-supervised classification. However, these GNNs often neglect the global smoothing of the graph because the global smoothing of the graph is incompatible with node classification. Specifically, a cluster of nodes in the graph often has a small number of other classes of nodes. To address this issue, we propose a graph-retraining neural network (GRNN) model that performs smoothing over the graph by alternating between a learning procedure and an inference procedure, based on the key idea of the expectation-maximum algorithm. Moreover, the global smoothing error is combined with the cross-entropy error to form the loss function of GRNN, which effectively solves the problem. The experiments show that GRNN achieves high accuracy in the standard citation network datasets, including Cora, Citeseer, and PubMed, which proves the effectiveness of GRNN in semi-supervised node classification. Full article
(This article belongs to the Topic Advances in Artificial Neural Networks)

13 pages, 2081 KiB  
Article
Named Entity Recognition Model Based on Feature Fusion
by Zhen Sun and Xinfu Li
Information 2023, 14(2), 133; https://doi.org/10.3390/info14020133 - 17 Feb 2023
Cited by 6 | Viewed by 3037
Abstract
Named entity recognition can deeply explore semantic features and enhance the ability of vector representation of text data. This paper proposes a named entity recognition method based on multi-head attention, aimed at the problem of fuzzy lexical boundaries in Chinese named entity recognition. Firstly, Word2vec is used to extract word vectors, HMM is used to extract boundary vectors, ALBERT is used to extract character vectors, and the Feedforward-attention mechanism is used to fuse the three vectors; the fused vector representation is then passed to a BiLSTM to extract features. Then multi-head attention is used to mine the potential word information in the text features. Finally, the text label classification results are output after conditional random field screening. Through verification on the WeiboNER, MSRA, and CLUENER2020 datasets, the results show that the proposed algorithm can effectively improve the performance of named entity recognition. Full article
(This article belongs to the Topic Advances in Artificial Neural Networks)

19 pages, 575 KiB  
Article
Hybrid Backstepping Control of a Quadrotor Using a Radial Basis Function Neural Network
by Muhammad Maaruf, Waleed M. Hamanah and Mohammad A. Abido
Mathematics 2023, 11(4), 991; https://doi.org/10.3390/math11040991 - 15 Feb 2023
Cited by 8 | Viewed by 2461
Abstract
This article presents a hybrid backstepping consisting of two robust controllers utilizing the approximation property of a radial basis function neural network (RBFNN) for a quadrotor with time-varying uncertainties. The quadrotor dynamic system is decoupled into two subsystems: the position and the attitude subsystems. As part of the position subsystem, adaptive RBFNN backstepping control (ANNBC) is developed to eliminate the effects of uncertainties, trace the quadrotor’s position, and provide the desired roll and pitch angles commands for the attitude subsystem. Then, adaptive RBFNN backstepping is integrated with integral fast terminal sliding mode control (ANNBIFTSMC) to track the required Euler angles and improve robustness against external disturbances. The proposed technique is advantageous because the quadrotor states trace the reference states in a short period of time without requiring knowledge of dynamic uncertainties and external disturbances. In addition, because the controller gains are based on the desired trajectories, adaptive algorithms are used to update them online. The stability of a closed loop system is proved by Lyapunov theory. Numerical simulations show acceptable attitude and position tracking performances. Full article
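The RBFNN approximation property leaned on throughout the controller design can be illustrated in isolation: Gaussian radial basis features with fixed centers and least-squares weights approximating an unknown nonlinearity. The centers, width, and target function below are placeholders, and the sketch does not involve the quadrotor dynamics themselves.

```python
import numpy as np

def rbf_features(x, centers, width=0.5):
    """Gaussian radial basis features phi_j(x) = exp(-(x - c_j)^2 / (2 * width^2))."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

# Approximate an "unknown" lumped uncertainty f(x) with a linear combination of RBFs
x = np.linspace(-2, 2, 200)
f = np.sin(3 * x) + 0.3 * x ** 2                  # stand-in for the time-varying uncertainty
centers = np.linspace(-2, 2, 15)
Phi = rbf_features(x, centers)
theta = np.linalg.lstsq(Phi, f, rcond=None)[0]    # ideal weights; an adaptive law would update these online
print(np.max(np.abs(Phi @ theta - f)))            # small residual -> good approximation
```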
(This article belongs to the Topic Advances in Artificial Neural Networks)

11 pages, 1195 KiB  
Article
Outcome Prediction for Patients with Bipolar Disorder Using Prodromal and Onset Data
by Yijun Shao, Yan Cheng, Srikanth Gottipati and Qing Zeng-Treitler
Appl. Sci. 2023, 13(3), 1552; https://doi.org/10.3390/app13031552 - 25 Jan 2023
Cited by 5 | Viewed by 2123
Abstract
Background: Predicting the outcomes of serious mental illnesses including bipolar disorder (BD) is clinically beneficial, yet difficult. Objectives: This study aimed to predict hospitalization and mortality for patients with incident BD using a deep neural network approach. Methods: We randomly sampled 20,000 US Veterans with BD. Data on patients’ prior hospitalizations, diagnoses, procedures, medications, note types, vital signs, lab results, and BD symptoms that occurred within 1 year before and at the onset of the incident BD were extracted as features. We then created novel temporal images of patient clinical features both during the prodromal period and at the time of the disease onset. Using each temporal image as a feature, we trained and tested deep neural network learning models to predict the 1-year combined outcome of hospitalization and mortality. Results: The models achieved accuracies of 0.766–0.949 and AUCs of 0.745–0.806 for the combined outcomes. The AUC for predicting mortality was 0.814, while its highest and lowest values for predicting different types of hospitalization were 90.4% and 70.1%, suggesting that some outcomes were more difficult to predict than others. Conclusion: Deep learning using temporal graphics of clinical history is a new and promising analytical approach for mental health outcome prediction. Full article
(This article belongs to the Topic Advances in Artificial Neural Networks)

24 pages, 717 KiB  
Article
Approximate Reasoning for Large-Scale ABox in OWL DL Based on Neural-Symbolic Learning
by Xixi Zhu, Bin Liu, Cheng Zhu, Zhaoyun Ding and Li Yao
Mathematics 2023, 11(3), 495; https://doi.org/10.3390/math11030495 - 17 Jan 2023
Cited by 2 | Viewed by 1874
Abstract
The ontology knowledge base (KB) can be divided into two parts: TBox and ABox, where the former models schema-level knowledge within the domain, and the latter is a set of statements of assertions or facts about instances. ABox reasoning is a process of discovering implicit knowledge in ABox based on the existing KB, which is of great value in KB applications. ABox reasoning is influenced by both the complexity of TBox and scale of ABox. The traditional logic-based ontology reasoning methods are usually designed to be provably sound and complete but suffer from long algorithm runtimes and do not scale well for ontology KB represented by OWL DL (Description Logic). In some application scenarios, the soundness and completeness of reasoning results are not the key constraints, and it is acceptable to sacrifice them in exchange for the improvement of reasoning efficiency to some extent. Based on this view, an approximate reasoning method for large-scale ABox in OWL DL KBs was proposed, which is named the ChunfyReasoner (CFR). The CFR introduces neural-symbolic learning into ABox reasoning and integrates the advantages of symbolic systems and neural networks (NNs). By training the NN model, the CFR approximately compiles the logic deduction process of ontology reasoning, which can greatly improve the reasoning speed while ensuring higher reasoning quality. In this paper, we state the basic idea, framework, and construction process of the CFR in detail, and we conduct experiments on two open-source ontologies built on OWL DL. The experimental results verify the effectiveness of our method and show that the CFR can support the applications of large-scale ABox reasoning of OWL DL KBs. Full article
(This article belongs to the Topic Advances in Artificial Neural Networks)

13 pages, 1394 KiB  
Article
Medical QA Oriented Multi-Task Learning Model for Question Intent Classification and Named Entity Recognition
by Turdi Tohti, Mamatjan Abdurxit and Askar Hamdulla
Information 2022, 13(12), 581; https://doi.org/10.3390/info13120581 - 14 Dec 2022
Cited by 2 | Viewed by 2581
Abstract
Intent classification and named entity recognition of medical questions are two key subtasks of the natural language understanding module in the question answering system. Most existing methods usually treat medical queries intent classification and named entity recognition as two separate tasks, ignoring the close relationship between the two tasks. In order to optimize the effect of medical queries intent classification and named entity recognition tasks, a multi-task learning model based on ALBERT-BILSTM is proposed for intent classification and named entity recognition of Chinese online medical questions. The multi-task learning model in this paper makes use of encoder parameter sharing, which enables the model’s underlying network to take into account both named entity recognition and intent classification features. The model learns the shared information between the two tasks while maintaining its unique characteristics during the decoding phase. The ALBERT pre-training language model is used to obtain word vectors containing semantic information and the bidirectional LSTM network is used for training. A comparative experiment of different models was conducted on Chinese medical questions dataset. Experimental results show that the proposed multi-task learning method outperforms the benchmark method in terms of precision, recall and F1 value. Compared with the single-task model, the generalization ability of the model has been improved. Full article
(This article belongs to the Topic Advances in Artificial Neural Networks)
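A minimal sketch of the shared-encoder idea described above: one encoder feeds both a sentence-level intent head and a token-level NER head, and the two losses are summed. The paper uses ALBERT embeddings; here a plain embedding layer stands in for brevity, and all dimensions are assumptions.

```python
import torch
import torch.nn as nn

class MultiTaskNLU(nn.Module):
    """Shared BiLSTM encoder with an intent head and a NER head (sketch)."""
    def __init__(self, vocab_size, n_intents, n_entity_tags, dim=128, hid=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)                 # ALBERT embeddings in the paper
        self.encoder = nn.LSTM(dim, hid, batch_first=True, bidirectional=True)
        self.intent_head = nn.Linear(2 * hid, n_intents)         # sentence-level task
        self.ner_head = nn.Linear(2 * hid, n_entity_tags)        # token-level task

    def forward(self, token_ids):
        h, _ = self.encoder(self.emb(token_ids))                 # (B, T, 2*hid), shared features
        intent_logits = self.intent_head(h.mean(dim=1))          # pooled sentence representation
        ner_logits = self.ner_head(h)                            # per-token tag scores
        return intent_logits, ner_logits

# Joint training would simply add the intent and NER cross-entropy losses.
```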
14 pages, 1310 KiB  
Article
A Mask-Based Adversarial Defense Scheme
by Weizhen Xu, Chenyi Zhang, Fangzhen Zhao and Liangda Fang
Algorithms 2022, 15(12), 461; https://doi.org/10.3390/a15120461 - 6 Dec 2022
Cited by 1 | Viewed by 2090
Abstract
Adversarial attacks hamper the functionality and accuracy of deep neural networks (DNNs) by introducing subtle perturbations into their inputs. In this work, we propose a new mask-based adversarial defense scheme (MAD) for DNNs to mitigate the negative effects of adversarial attacks. Our method preprocesses multiple copies of a potential adversarial image by applying random masking, before the outputs of the DNN on all the randomly masked images are combined. As a result, the combined final output becomes more tolerant to minor perturbations of the original input. Compared with existing adversarial defense techniques, our method requires no additional denoising structure and no change to the DNN's architectural design. We have tested this approach on a collection of DNN models for a variety of datasets, and the experimental results confirm that the proposed method can effectively improve the defense ability of the DNNs against all of the tested adversarial attack methods. In certain scenarios, DNN models trained with MAD improve classification accuracy on adversarial inputs by as much as 90% compared to the original models. Full article
(This article belongs to the Topic Advances in Artificial Neural Networks)
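The abstract describes the core of the defense: run the network on several randomly masked copies of the input and combine the outputs. A minimal inference-time sketch follows; the mask granularity (per-pixel Bernoulli masking), the mask ratio, and the averaging rule are assumptions rather than the paper's exact choices.

```python
import torch

def mad_predict(model, image, n_copies=8, mask_ratio=0.1):
    """Mask-based defense sketch: average logits over randomly masked copies.
    `image` is a (C, H, W) tensor; mask shape and ratio are assumptions."""
    batch = image.unsqueeze(0).repeat(n_copies, 1, 1, 1)             # (N, C, H, W)
    keep = (torch.rand(n_copies, 1, *image.shape[1:]) > mask_ratio).float()
    with torch.no_grad():
        logits = model(batch * keep)                                  # zero out masked pixels
    return logits.mean(dim=0)                                         # combined, more perturbation-tolerant output
```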
14 pages, 3117 KiB  
Article
Prediction of Friction Coefficient for Ductile Cast Iron Using Artificial Neural Network Methodology Based on Experimental Investigation
by Ahmad A. Khalaf and Muammel M. Hanon
Appl. Sci. 2022, 12(23), 11916; https://doi.org/10.3390/app122311916 - 22 Nov 2022
Cited by 6 | Viewed by 1651
Abstract
The key objective of the present study is to analyze the friction coefficient and wear rate of ductile cast iron. Three different microstructures were chosen on which to perform the experimental tests under different sliding time, load, and sliding speed conditions: pearlite + ferrite, ferrite, and bainite. Moreover, an artificial neural network (ANN) model was developed to predict the friction coefficient from a set of data collected during the experiments. The ANN model structure was made up of four input parameters (namely time, load, number, and nodule diameter) and one output parameter (the friction coefficient). The Levenberg–Marquardt back-propagation algorithm was applied in the ANN model to train the data using feed-forward back propagation (FFBP). The experiments revealed that the coefficient of friction decreased as the sliding speed increased under a constant load, and it showed the same behavior when the load was increased at a constant sliding speed. The wear rate also dropped as the sliding speed increased. The results further show that the bainite structure is harder and wears more slowly than the ferrite structure. Regarding the ANN structure, a single-hidden-layer model proved more accurate than a double-hidden-layer model; the best validation performance was observed at epochs 8 and 20 for the double-hidden-layer configuration and at epoch 20 (0.012346) for the single-layer configuration. Full article
(This article belongs to the Topic Advances in Artificial Neural Networks)
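For readers who want to reproduce the general setup (a small feed-forward regressor mapping four inputs to the friction coefficient), here is a hedged scikit-learn sketch. The paper trains with Levenberg–Marquardt, which scikit-learn does not provide, so Adam stands in; file names and column ordering are hypothetical.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Hypothetical files; X columns assumed to be sliding time, load, nodule count, nodule diameter.
X = np.loadtxt("friction_inputs.csv", delimiter=",")
y = np.loadtxt("friction_mu.csv")
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Single hidden layer, since the paper found it more accurate than two layers.
# No Levenberg-Marquardt trainer in scikit-learn, so Adam replaces 'trainlm' here.
ann = MLPRegressor(hidden_layer_sizes=(10,), solver="adam", max_iter=5000, random_state=0)
ann.fit(X_tr, y_tr)
print("R^2 on held-out data:", ann.score(X_te, y_te))
```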
21 pages, 4721 KiB  
Article
Self-Organized Fuzzy Neural Network Nonlinear System Modeling Method Based on Clustering Algorithm
by Tong Zhang and Zhendong Wang
Appl. Sci. 2022, 12(22), 11435; https://doi.org/10.3390/app122211435 - 11 Nov 2022
Cited by 2 | Viewed by 1962
Abstract
In this paper, an improved self-organizing fuzzy neural network based on a clustering algorithm (SOFNN-CA) is proposed for nonlinear system modeling in industrial processes. To reduce training time, we combine offline learning and online identification. An unsupervised clustering algorithm is used to generate the initial centers of the network in the offline learning phase, and, in the self-organizing phase, the Mahalanobis distance (MD) index and an error criterion are adopted to add neurons that learn new features. A new density potential index (DPI), combined with the neuron local field potential (LFP), is designed to adjust the neuron widths, which further improves the network's generalization. A similarity index calculated with the Gaussian error function is used to merge neurons and reduce redundancy. Meanwhile, the convergence of SOFNN-CA under structural self-organization is demonstrated. Simulation and experimental results show that the proposed SOFNN-CA achieves better modeling accuracy and convergence speed than SOFNN-ALA and SOFNN-AGA. Full article
(This article belongs to the Topic Advances in Artificial Neural Networks)
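To make the growth step concrete, the sketch below adds a new fuzzy neuron whenever an input lies far, in Mahalanobis distance, from every existing center. It covers only the neuron-addition criterion; the DPI/LFP width adjustment, the error criterion, and the merging step are omitted, and the threshold value is an assumption.

```python
import numpy as np

def maybe_add_neuron(x, centers, cov_inv, md_threshold=3.0):
    """Growth-step sketch: add a center when x is far (Mahalanobis distance)
    from every existing fuzzy neuron. The threshold is an assumption."""
    if len(centers) == 0:
        return [x.copy()]
    dists = [np.sqrt((x - c) @ cov_inv @ (x - c)) for c in centers]
    if min(dists) > md_threshold:
        centers = centers + [x.copy()]          # new neuron centered at the novel sample
    return centers

# The offline phase would initialize `centers` with a clustering algorithm
# (e.g., k-means) before this online self-organizing phase runs.
```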
19 pages, 2965 KiB  
Article
EEG-Based Emotion Recognition Using Convolutional Recurrent Neural Network with Multi-Head Self-Attention
by Zhangfang Hu, Libujie Chen, Yuan Luo and Jingfan Zhou
Appl. Sci. 2022, 12(21), 11255; https://doi.org/10.3390/app122111255 - 6 Nov 2022
Cited by 18 | Viewed by 4700
Abstract
In recent years, deep learning has been widely used in emotion recognition, but the models and algorithms in practical applications still have much room for improvement. With the development of graph convolutional neural networks, new ideas for EEG-based emotion recognition have arisen. In this paper, we propose a novel deep learning model-based emotion recognition method. First, the EEG signal is spatially filtered using the common spatial pattern (CSP), and the filtered signal is converted into a time-frequency map by the continuous wavelet transform (CWT); this serves as the input data of the network. Feature extraction and classification are then performed by the deep learning model. We call this model CNN-BiLSTM-MHSA; it consists of a convolutional neural network (CNN), a bidirectional long short-term memory (BiLSTM) network, and multi-head self-attention (MHSA). The network learns the temporal and spatial information of EEG emotion signals in depth, smoothing the EEG signals and extracting deep features with the CNN, learning emotion information from past and future time steps with the BiLSTM, and improving recognition accuracy with MHSA by reassigning weights to emotion features. Finally, we conducted emotion classification experiments on the DEAP dataset, and the results showed that the method performs better than existing classification methods. The accuracy of recognizing high versus low valence, arousal, dominance, and liking states is 98.10%, and the accuracy of the four-class high/low valence-arousal classification is 89.33%. Full article
(This article belongs to the Topic Advances in Artificial Neural Networks)
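A compact PyTorch sketch of the stated layer order (CNN feature extractor, then BiLSTM, then multi-head self-attention, then a classifier). The CSP/CWT preprocessing is not reproduced, and every dimension below is an assumption rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class CNNBiLSTMMHSA(nn.Module):
    """CNN -> BiLSTM -> multi-head self-attention -> classifier (sketch)."""
    def __init__(self, in_ch=32, n_classes=2, hid=64, heads=4):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d((8, 8)))
        self.lstm = nn.LSTM(16 * 8, hid, batch_first=True, bidirectional=True)
        self.mhsa = nn.MultiheadAttention(2 * hid, heads, batch_first=True)
        self.fc = nn.Linear(2 * hid, n_classes)

    def forward(self, x):                                   # x: (B, channels, freq, time) time-frequency maps
        f = self.cnn(x)                                      # (B, 16, 8, 8)
        seq = f.permute(0, 3, 1, 2).reshape(f.size(0), 8, 16 * 8)  # sequence over the time axis
        h, _ = self.lstm(seq)                                # (B, 8, 2*hid)
        a, _ = self.mhsa(h, h, h)                            # reweight emotion features via self-attention
        return self.fc(a.mean(dim=1))
```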
12 pages, 1247 KiB  
Article
Artificial Neural Network as a Tool for Estimation of the Higher Heating Value of Miscanthus Based on Ultimate Analysis
by Ivan Brandić, Lato Pezo, Nikola Bilandžija, Anamarija Peter, Jona Šurić and Neven Voća
Mathematics 2022, 10(20), 3732; https://doi.org/10.3390/math10203732 - 11 Oct 2022
Cited by 13 | Viewed by 2265
Abstract
Miscanthus is a perennial energy crop that produces high yields and has the potential to be converted into energy. The ultimate analysis determines the composition of the biomass and its energy value in terms of the higher heating value (HHV), the most important parameter for assessing fuel quality. In this study, an artificial neural network (ANN) model based on supervised learning was developed to predict the HHV of miscanthus biomass. The developed ANN model was compared with predictive regression models suggested in the literature, and its accuracy was assessed using the coefficient of determination. The paper presents data from 192 miscanthus biomass samples comprising ultimate analysis and HHV. The developed model performed well and predicted the HHV with good accuracy (R2 = 0.77). The paper demonstrates that ANN models can be applied in practice to determine the fuel properties of biomass energy crops and that they predict HHV more accurately than the regression models offered in the literature. Full article
(This article belongs to the Topic Advances in Artificial Neural Networks)
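Since the task is a straightforward regression from ultimate-analysis composition to HHV, a small hedged sketch may help orient readers. The inputs are assumed to be the elemental mass fractions (C, H, O, N, S) and the file/column names are hypothetical; the paper's network topology and training setup are not reproduced.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

# Hypothetical file and column names; the paper uses 192 ultimate-analysis samples.
df = pd.read_csv("miscanthus_ultimate.csv")
X = df[["C", "H", "O", "N", "S"]].values        # elemental mass fractions
y = df["HHV"].values                             # higher heating value, e.g., MJ/kg

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=10000, random_state=1).fit(X_tr, y_tr)
print("R2:", r2_score(y_te, ann.predict(X_te)))  # the paper reports R2 of about 0.77
```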
24 pages, 4021 KiB  
Article
Design of Optical Tweezers Manipulation Control System Based on Novel Self-Organizing Fuzzy Cerebellar Model Neural Network
by Jing Zhao, Hui Hou, Qi-Yu Huang, Xun-Gao Zhong and Peng-Sheng Zheng
Appl. Sci. 2022, 12(19), 9655; https://doi.org/10.3390/app12199655 - 26 Sep 2022
Cited by 2 | Viewed by 1712
Abstract
Holographic optical tweezers involve no physical contact and can manipulate and control single or multiple cells in a non-invasive way. In this paper, the dynamics of cells captured by an optical trap are modeled, and a control system based on a novel self-organizing fuzzy cerebellar model neural network (NSOFCMNN) is proposed and applied to cell manipulation with holographic optical tweezers. The control system consists of a main controller using the NSOFCMNN with a new self-organization mechanism, a robust compensation controller, and a higher-order sliding mode. It can accurately move captured cells to the expected positions through the optical traps generated by the holographic optical tweezers system. Both the layers and the blocks of the proposed NSOFCMNN can be adjusted online according to the new self-organization mechanism. The compensation controller is used to eliminate approximation errors, and the higher-order sliding surface enhances controller performance. The distances between cells are taken into account in order to further realize multi-cell cooperative control. In addition, the stability and convergence of the proposed NSOFCMNN are proved using a Lyapunov function, and the learning law is updated online by gradient descent. Simulation results show that the control system based on the proposed NSOFCMNN can effectively complete optical tweezers cell manipulation tasks and has better control performance than other neural network controllers. Full article
(This article belongs to the Topic Advances in Artificial Neural Networks)
15 pages, 2702 KiB  
Article
Convolutional Neural Network Model Compression Method for Software–Hardware Co-Design
by Seojin Jang, Wei Liu and Yongbeom Cho
Information 2022, 13(10), 451; https://doi.org/10.3390/info13100451 - 26 Sep 2022
Cited by 3 | Viewed by 4018
Abstract
Owing to their high accuracy, deep convolutional neural networks (CNNs) are extensively used; however, they are characterized by high complexity, and current CNN systems require real-time performance and acceleration. A graphics processing unit (GPU) is one possible way to improve real-time performance, but its power efficiency is poor owing to high power consumption. By contrast, field-programmable gate arrays (FPGAs) have lower power consumption and a flexible architecture, making them more suitable for CNN implementation. In this study, we propose a method that offers both the speed of CNNs and the low power consumption and parallelism of FPGAs. This solution relies on two primary acceleration techniques: parallel processing of layer resources and pipelining within specific layers. Moreover, a new method is introduced for balancing the requirements on speed and design time by implementing an automatic parallel hardware-software co-designed CNN using a software-defined system-on-chip tool. We evaluated the proposed method using five networks (MobileNetV1, ShuffleNetV2, SqueezeNet, ResNet-50, and VGG-16) on the ZCU102 FPGA platform and experimentally demonstrated that our design achieves a higher speed-up than the conventional implementation method: 2.47×, 1.93×, and 2.16× on the ZCU102 for MobileNetV1, ShuffleNetV2, and SqueezeNet, respectively. Full article
(This article belongs to the Topic Advances in Artificial Neural Networks)
19 pages, 933 KiB  
Article
Training of an Extreme Learning Machine Autoencoder Based on an Iterative Shrinkage-Thresholding Optimization Algorithm
by José A. Vásquez-Coronel, Marco Mora and Karina Vilches
Appl. Sci. 2022, 12(18), 9021; https://doi.org/10.3390/app12189021 - 8 Sep 2022
Cited by 3 | Viewed by 3109
Abstract
Orthogonal transformations, proper decomposition, and the Moore–Penrose inverse are traditional methods of obtaining the output-layer weights of an extreme learning machine autoencoder. However, an increase in the number of hidden neurons causes higher convergence times and computational complexity, whereas the generalization capability is low when the number of neurons is small. One way to address this issue is to use the fast iterative shrinkage-thresholding algorithm (FISTA) to minimize the output weights of the extreme learning machine. In this work, we aim to improve the convergence speed of FISTA by using two fast algorithms of the shrinkage-thresholding class, called greedy FISTA (G-FISTA) and linearly convergent FISTA (LC-FISTA). Our method is attractive for application problems that would otherwise require long computation times. In our experiments, we adopt six public datasets that are frequently used in machine learning: MNIST, NORB, CIFAR10, UMist, Caltech256, and Stanford Cars. We apply several metrics to evaluate the performance of our method, with FISTA as the baseline for comparison owing to its popularity in neural network training. The experimental results show that G-FISTA and LC-FISTA achieve higher convergence speeds in the autoencoder training process; for example, on the Stanford Cars dataset, G-FISTA and LC-FISTA are faster than FISTA by 48.42% and 47.32%, respectively. Overall, all three algorithms maintain good values of the performance metrics on all datasets. Full article
(This article belongs to the Topic Advances in Artificial Neural Networks)
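To fix ideas, here is a plain FISTA sketch for the l1-regularized output-weight problem min_W 0.5·||HW − X||² + λ·||W||₁, where H is the hidden-layer output matrix of the ELM autoencoder and X the reconstruction target. The G-FISTA and LC-FISTA variants studied in the paper differ mainly in their step-size and restart rules and are not reproduced here.

```python
import numpy as np

def fista_elm_weights(H, X, lam=0.01, iters=200):
    """Plain FISTA sketch for min_W 0.5*||H W - X||_F^2 + lam*||W||_1.
    H: hidden-layer outputs (n x L); X: autoencoder targets (n x d)."""
    L_lip = np.linalg.norm(H, 2) ** 2                 # Lipschitz constant of the smooth part's gradient
    W = Z = np.zeros((H.shape[1], X.shape[1]))
    t = 1.0
    for _ in range(iters):
        G = Z - (H.T @ (H @ Z - X)) / L_lip            # gradient step on the momentum point
        W_new = np.sign(G) * np.maximum(np.abs(G) - lam / L_lip, 0.0)  # soft-thresholding (prox of l1)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        Z = W_new + ((t - 1) / t_new) * (W_new - W)    # Nesterov-style momentum
        W, t = W_new, t_new
    return W
```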
13 pages, 832 KiB  
Article
Fixed-Time Convergent Gradient Neural Network for Solving Online Sylvester Equation
by Zhiguo Tan
Mathematics 2022, 10(17), 3090; https://doi.org/10.3390/math10173090 - 28 Aug 2022
Cited by 7 | Viewed by 1775
Abstract
This paper aims at finding a fixed-time solution to the Sylvester equation by using a gradient neural network (GNN). To reach this goal, a modified sign-bi-power (msbp) function is presented and applied to a linear GNN as an activation function. Accordingly, a fixed-time convergent GNN (FTC-GNN) model is developed for solving the Sylvester equation. The upper bound on the convergence time of the FTC-GNN model can be predetermined from its parameters, regardless of the initial conditions; this is corroborated by a detailed theoretical analysis, and the convergence time is also estimated using Lyapunov stability theory. Two examples are then simulated to demonstrate the validity of the theoretical analysis, as well as the superior convergence performance of the presented FTC-GNN model compared to existing GNN models. Full article
(This article belongs to the Topic Advances in Artificial Neural Networks)
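For intuition, the sketch below integrates GNN dynamics for AX + XB = C with forward-Euler steps, using a sign-bi-power-style activation as a stand-in for the paper's modified sign-bi-power function; the exact msbp form and parameter values in the paper may differ.

```python
import numpy as np

def sbp(x, a=0.5):
    """Sign-bi-power-style activation (simplified stand-in for the paper's msbp)."""
    return 0.5 * (np.sign(x) * np.abs(x) ** a + np.sign(x) * np.abs(x) ** (1 / a))

def gnn_sylvester(A, B, C, gamma=10.0, dt=1e-3, steps=5000):
    """GNN for AX + XB = C: dX/dt = -gamma * Phi(dE/dX), integrated with Euler steps (sketch)."""
    X = np.zeros((A.shape[0], B.shape[0]))
    for _ in range(steps):
        E = A @ X + X @ B - C                        # residual error
        grad = A.T @ E + E @ B.T                     # gradient of 0.5*||E||_F^2 w.r.t. X
        X = X - dt * gamma * sbp(grad)               # activated gradient descent flow
    return X
```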
26 pages, 7210 KiB  
Article
OptiNET—Automatic Network Topology Optimization
by Andreas Maniatopoulos, Paraskevi Alvanaki and Nikolaos Mitianoudis
Information 2022, 13(9), 405; https://doi.org/10.3390/info13090405 - 27 Aug 2022
Cited by 1 | Viewed by 3398
Abstract
The recent boom of artificial neural networks (NNs) has shown that NNs can provide viable solutions to a variety of problems. However, their complexity and the lack of efficient interpretation of NN architectures (commonly considered black-box techniques) have adverse effects on the optimization of each NN architecture. One cannot simply use a generic topology and obtain the best performance in every application field, since the network topology is commonly fine-tuned to the problem/dataset in question. In this paper, we introduce a novel method of computationally assessing the complexity of the dataset. The NN is treated as an information channel, and information theory is used to estimate the optimal number of neurons for each layer, reducing the memory and computational load while achieving the same, if not greater, accuracy. Experiments on common datasets confirm the theoretical findings, and the derived algorithm appears to improve the performance of the original architectures. Full article
(This article belongs to the Topic Advances in Artificial Neural Networks)
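The abstract does not spell out the estimator, so the following heavily hedged sketch only illustrates one simple information-theoretic proxy for layer sizing: count the hidden units whose discretized activation entropy on a validation set is non-trivial. This is an assumption for illustration, not the paper's algorithm.

```python
import numpy as np

def effective_neurons(activations, bins=32, eps=0.05):
    """Rough layer-size hint (assumption, not the paper's estimator): count units
    whose empirical activation entropy exceeds a small fraction of the maximum.
    activations: (n_samples, n_neurons) matrix recorded on a validation set."""
    useful = 0
    for j in range(activations.shape[1]):
        hist, _ = np.histogram(activations[:, j], bins=bins)
        p = hist / max(hist.sum(), 1)
        p = p[p > 0]
        h = -(p * np.log2(p)).sum()                  # empirical entropy of this unit's activations
        if h > eps * np.log2(bins):                  # nearly-constant units carry little information
            useful += 1
    return useful
```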
12 pages, 1186 KiB  
Article
Almost Anti-Periodic Oscillation Excited by External Inputs and Synchronization of Clifford-Valued Recurrent Neural Networks
by Weiwei Qi and Yongkun Li
Mathematics 2022, 10(15), 2764; https://doi.org/10.3390/math10152764 - 4 Aug 2022
Cited by 4 | Viewed by 1849
Abstract
The main purpose of this paper is to study the almost anti-periodic oscillation caused by external inputs and the global exponential synchronization of Clifford-valued recurrent neural networks with mixed delays. Since the space consisting of almost anti-periodic functions has no vector space structure, we first prove that the network under consideration possesses a unique bounded continuous solution by using the contraction fixed point theorem. Then, using inequality techniques, we prove that this unique bounded continuous solution is also an almost anti-periodic solution. Secondly, taking the considered neural network as the drive system, introducing a corresponding response system, and designing an appropriate controller, we obtain some sufficient conditions for the global exponential synchronization of the drive-response system, again by employing inequality techniques. When the system under consideration degenerates into a real-valued system, our results remain new. Finally, the validity of the results is verified by a numerical example. Full article
(This article belongs to the Topic Advances in Artificial Neural Networks)
14 pages, 31666 KiB  
Article
Sequential Normalization: Embracing Smaller Sample Sizes for Normalization
by Neofytos Dimitriou and Ognjen Arandjelović
Information 2022, 13(7), 337; https://doi.org/10.3390/info13070337 - 12 Jul 2022
Cited by 1 | Viewed by 2441
Abstract
Normalization as a layer within neural networks has over the years demonstrated its effectiveness in neural network optimization across a wide range of different tasks, with one of the most successful approaches being batch normalization. The consensus is that better estimates of the BatchNorm normalization statistics (μ and σ²) in each mini-batch result in better optimization. In this work, we challenge this belief and experiment with a variant of BatchNorm known as GhostNorm that, despite independently normalizing batches within the mini-batches, i.e., μ and σ² are independently computed and applied to groups of samples in each mini-batch, outperforms BatchNorm consistently. Next, we introduce sequential normalization (SeqNorm), the sequential application of the above type of normalization across two dimensions of the input, and find that models trained with SeqNorm consistently outperform models trained with BatchNorm or GhostNorm on multiple image classification datasets. Our contributions are as follows: (i) we uncover a source of regularization that is unique to GhostNorm, and not simply an extension of BatchNorm, and illustrate its effects on the loss landscape; (ii) we introduce sequential normalization (SeqNorm), a new normalization layer that improves the regularization effects of GhostNorm; (iii) we compare both GhostNorm and SeqNorm against BatchNorm alone as well as with other regularization techniques; and (iv) for both GhostNorm and SeqNorm models, we train models whose performance is consistently better than our baselines, including ones with BatchNorm, on the standard image classification datasets CIFAR-10, CIFAR-100, and ImageNet ((+0.2%, +0.7%, +0.4%) and (+0.3%, +1.7%, +1.1%) for GhostNorm and SeqNorm, respectively). Full article
(This article belongs to the Topic Advances in Artificial Neural Networks)
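A hedged PyTorch sketch of the GhostNorm idea described above: the mini-batch is split into fixed-size groups ("ghost batches"), and μ and σ² are computed independently for each group. The group size is an assumption, and SeqNorm's sequential application over a second input dimension is only indicated in the closing comment.

```python
import torch
import torch.nn as nn

class GhostNorm(nn.Module):
    """Normalize fixed-size groups ('ghost batches') within each mini-batch (sketch)."""
    def __init__(self, num_features, ghost_size=16):
        super().__init__()
        self.ghost_size = ghost_size
        self.bn = nn.BatchNorm2d(num_features)      # statistics recomputed on each ghost batch

    def forward(self, x):                            # x: (B, C, H, W); B assumed divisible by ghost_size
        chunks = x.split(self.ghost_size, dim=0)     # mu and sigma^2 computed per chunk
        return torch.cat([self.bn(c) for c in chunks], dim=0)

# SeqNorm, per the abstract, applies this style of normalization sequentially
# across two dimensions of the input rather than over the batch dimension alone.
```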