
Advances in Neural Networks and Deep Learning

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 30 April 2025 | Viewed by 33246

Special Issue Editors


Guest Editor
School of Mathematics and Statistics, Northeast Normal University, Changchun 130024, China
Interests: neural networks; deep learning; machine learning; computer vision; natural language processing; stochastic optimization

Guest Editor
Department of Mathematics, Dalian Maritime University, Dalian 116026, China
Interests: artificial computing

Guest Editor
School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, China
Interests: machine learning

Special Issue Information

Dear Colleagues,

Neural networks and deep learning are rapidly growing fields that have become crucial in various domains such as image recognition, speech recognition, natural language processing, and robotics. This Special Issue aims to provide a platform for researchers to share their latest advances in neural networks and deep learning, and their applications in solving real-world problems.

Topics of interest for this Special Issue include, but are not limited to:

  • New architectures and algorithms for neural networks and deep learning;
  • Advances in fuzzy neural networks, spiking neural networks, extreme learning machines, and support vector machines;
  • Applications of neural networks and deep learning in computer vision, speech recognition, natural language processing, and robotics;
  • Transfer learning techniques in neural networks and deep learning;
  • Neural network optimization and regularization techniques;
  • Deep learning for data analysis and prediction;
  • Adversarial machine learning and its applications.

We invite researchers to submit their original research articles, reviews, and short communications related to the above topics. All submissions will undergo a rigorous peer-review process, and accepted papers will be published in the Special Issue of Applied Sciences.

Prof. Dr. Dongpo Xu
Prof. Dr. Huisheng Zhang
Dr. Jie Yang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial neural networks
  • deep learning
  • convolutional neural networks
  • recurrent neural networks
  • long short-term memory
  • generative adversarial networks
  • reinforcement learning
  • computer vision
  • speech recognition
  • natural language processing
  • robotics

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (17 papers)


Research


20 pages, 5773 KiB  
Article
Enhancing Short-Term Load Forecasting Accuracy in High-Volatility Regions Using LSTM-SCN Hybrid Models
by Bingbing Tang, Jie Hu, Mei Yang, Chenglong Zhang and Qiang Bai
Appl. Sci. 2024, 14(24), 11606; https://doi.org/10.3390/app142411606 - 12 Dec 2024
Viewed by 404
Abstract
Short-Term Load Forecasting (STLF) is essential for the efficient management of power systems, as accurate forecasts improve power scheduling efficiency. Despite significant recent advancements in STLF models, forecasting accuracy in high-volatility regions remains a key challenge. To address this issue, this paper introduces a hybrid load forecasting model that integrates the Long Short-Term Memory network (LSTM) with the Stochastic Configuration Network (SCN). We first verify the universal approximation property of the SCN through experiments on two regression datasets. Subsequently, we reconstruct the features and input them into the LSTM for feature extraction. The extracted feature vectors are then used as inputs for SCN-based STLF. Finally, we evaluate the performance of the LSTM-SCN model against baseline models on the Australian Electricity Load dataset, and select five high-volatility regions in the test set to validate the model's advantages in such scenarios. The results show that the LSTM-SCN model achieved an RMSE of 56.970, an MAE of 43.033, and a MAPE of 0.492% on the test set. Compared to the next-best model, it reduced errors by 6.016, 8.846, and 0.053% for RMSE, MAE, and MAPE, respectively, and it consistently outperformed the baselines across all five high-volatility regions. These findings highlight its contribution to improved power system management, particularly in challenging high-volatility scenarios.
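
For a concrete picture of this pipeline, the sketch below is a minimal, hedged illustration (not the authors' code): an LSTM encodes each load window, and a heavily simplified SCN sits on top — hidden nodes are added with random parameters and the output weights are refit by least squares, omitting the paper's supervisory inequality constraint on candidate nodes. All data here are random placeholders for the Australian load dataset.

```python
# Hedged sketch of an LSTM -> SCN hybrid forecaster; simplified, illustrative only.
import numpy as np
import torch
import torch.nn as nn

class LSTMFeatureExtractor(nn.Module):
    def __init__(self, n_inputs, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_inputs, hidden, batch_first=True)
    def forward(self, x):                       # x: (batch, seq_len, n_inputs)
        _, (h, _) = self.lstm(x)
        return h[-1]                            # last hidden state as features

def scn_fit(H, y, max_nodes=50, tol=1e-3):
    """Simplified SCN: grow random hidden nodes, refit output weights."""
    rng = np.random.default_rng(0)
    W, b = [], []
    for _ in range(max_nodes):
        W.append(rng.normal(size=H.shape[1])); b.append(rng.normal())
        Phi = np.tanh(H @ np.array(W).T + np.array(b))   # hidden-node outputs
        beta, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # least-squares readout
        if np.mean((Phi @ beta - y) ** 2) < tol:
            break
    return np.array(W), np.array(b), beta

# toy usage: 24-step windows with 4 features, random "load" targets
x = torch.randn(256, 24, 4)
y = np.random.rand(256)
feats = LSTMFeatureExtractor(4)(x).detach().numpy()
W, b, beta = scn_fit(feats, y)
pred = np.tanh(feats @ W.T + b) @ beta          # SCN-based load forecast
```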

23 pages, 6760 KiB  
Article
Hybrid Crow Search Algorithm–LSTM System for Enhanced Stock Price Forecasting
by Chang-Long Jiang, Yi-Kuang Tsai, Zhen-En Shao, Shih-Hsiung Lee, Cheng-Che Hsueh and Ko-Wei Huang
Appl. Sci. 2024, 14(23), 11380; https://doi.org/10.3390/app142311380 - 6 Dec 2024
Viewed by 501
Abstract
This study presents a hybrid crow search algorithm–long short-term memory (CSLSTM) system for forecasting stock prices, allowing investors to manage risk and enhance profits by predicting the following day's closing price. The method uses a stacking ensemble of long short-term memory (LSTM) networks, with the crow search algorithm (CSA) optimizing the weights assigned to the predictions of the individual LSTM models. To improve overall accuracy, the system leverages three distinct datasets: technical analysis indicators, price fluctuation limits, and variational mode decomposition (VMD) subsignal sequences. Predictions based on these three reference-data types are more comprehensive than those of single-model or single-data-type approaches. The prediction accuracies of the recurrent neural network, gated recurrent unit, and LSTM network were compared for five stocks, and the proposed CSLSTM system outperforms these standalone models. Furthermore, backtesting demonstrates that the model's predictions can generate profit in the stock market, enabling users to benefit from complex stock-market dynamics. The stock prices in this study are expressed in New Taiwan Dollars (TWD), the official currency of Taiwan.
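
As a hedged sketch of the ensemble-weighting idea (assumed, not the paper's implementation), the code below uses the crow search algorithm to tune the weights that blend forecasts from several already-trained models; the awareness probability `ap` and flight length `fl` are illustrative choices.

```python
# Hedged sketch: CSA optimizing ensemble blend weights to minimize RMSE.
import numpy as np

def csa_blend_weights(preds, target, n_crows=20, iters=200, ap=0.1, fl=2.0):
    rng = np.random.default_rng(0)
    m = preds.shape[0]
    def rmse(w):
        w = np.abs(w); w = w / w.sum()          # weights positive, sum to 1
        return np.sqrt(np.mean((w @ preds - target) ** 2))
    x = rng.random((n_crows, m))                # crow positions = weight vectors
    mem = x.copy()                              # each crow's best-known position
    mem_fit = np.array([rmse(w) for w in mem])
    for _ in range(iters):
        for i in range(n_crows):
            j = rng.integers(n_crows)           # crow i follows crow j
            if rng.random() >= ap:              # j unaware: move toward j's memory
                x[i] = x[i] + fl * rng.random() * (mem[j] - x[i])
            else:                               # j aware: fly to a random position
                x[i] = rng.random(m)
            f = rmse(x[i])
            if f < mem_fit[i]:                  # update memory on improvement
                mem[i], mem_fit[i] = x[i].copy(), f
    best = np.abs(mem[np.argmin(mem_fit)])
    return best / best.sum()

# toy usage: three fake model prediction series and a target series
preds = np.random.rand(3, 100)
target = 0.5 * preds[0] + 0.5 * preds[2]
w = csa_blend_weights(preds, target)            # learned blend weights
```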

24 pages, 2450 KiB  
Article
Progressive Pruning of Light Dehaze Networks for Static Scenes
by Byeongseon Park, Heekwon Lee, Yong-Kab Kim and Sungkwan Youm
Appl. Sci. 2024, 14(23), 10820; https://doi.org/10.3390/app142310820 - 22 Nov 2024
Viewed by 419
Abstract
This paper introduces a progressive pruning method for Light DeHaze Networks, focusing on static scenes captured in fixed-camera environments. We develop a progressive pruning algorithm that aims to reduce computational complexity while keeping dehazing quality within a specified threshold. Our key contributions include a fine-tuning strategy for specific scenes, channel importance analysis, and a progressive pruning approach that accounts for layer-wise sensitivity. Our experiments demonstrate the effectiveness of the method: targeting a specific PSNR (Peak Signal-to-Noise Ratio) threshold, the algorithm achieved optimal results at a certain pruning ratio, significantly reducing the number of channels in the target layer while maintaining PSNR above the threshold and preserving good structural similarity, before automatically stopping when performance dropped below the target. This demonstrates the algorithm's ability to find an optimal balance between model compression and performance maintenance. This research enables the efficient deployment of high-quality dehazing algorithms in resource-constrained environments, such as traffic monitoring and outdoor surveillance, paving the way for more accessible image dehazing systems that enhance visibility in real-world hazy conditions while optimizing computational resources for fixed camera setups.
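
The threshold-guarded loop can be pictured with the hedged sketch below (not the authors' code): a toy network and a random frame stand in for the dehazing model and the fixed-camera scene, L1 filter norms serve as the channel-importance score, and pruning reverts and stops once PSNR falls below the target.

```python
# Hedged sketch of progressive channel pruning with a PSNR stopping rule.
import torch
import torch.nn as nn
import torch.nn.functional as F

def psnr(a, b):
    return 10 * torch.log10(1.0 / F.mse_loss(a, b))   # images assumed in [0, 1]

net = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())
scene = torch.rand(1, 3, 64, 64)                 # stand-in fixed-camera frame
reference = net(scene).detach()                  # unpruned output as reference

layer = net[0]
importance = layer.weight.detach().abs().sum(dim=(1, 2, 3))  # per-channel L1 norm
order = importance.argsort()                     # least important channels first
threshold, step, pruned = 40.0, 2, 0             # PSNR floor (dB), channels/step

while pruned + step <= layer.out_channels:
    idx = order[pruned:pruned + step]
    w_saved = layer.weight.data[idx].clone()
    b_saved = layer.bias.data[idx].clone()
    layer.weight.data[idx] = 0                   # zeroing a channel = pruning it
    layer.bias.data[idx] = 0
    if psnr(net(scene), reference) < threshold:
        layer.weight.data[idx] = w_saved         # revert the step that broke PSNR
        layer.bias.data[idx] = b_saved
        break
    pruned += step
print(f"pruned {pruned}/{layer.out_channels} channels")
```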

16 pages, 1788 KiB  
Article
A Stock Prediction Method Based on Heterogeneous Bidirectional LSTM
by Shuai Sang and Lu Li
Appl. Sci. 2024, 14(20), 9158; https://doi.org/10.3390/app14209158 - 10 Oct 2024
Viewed by 944
Abstract
LSTM (long short-term memory) networks have proven effective in processing stock data. However, the stability of LSTM is poor: it is strongly affected by data fluctuations and weak at capturing long-term dependencies in sequential data. BiLSTM (bidirectional LSTM) alleviates this issue to some extent; however, due to the inefficiency of information transmission within the LSTM units themselves, the generalization performance and accuracy of BiLSTM are still not satisfactory. To address this problem, this paper improves the LSTM units of traditional BiLSTM and proposes He-BiLSTM (heterogeneous bidirectional LSTM) with a corresponding backpropagation algorithm. The parameters of He-BiLSTM are updated using the Adam gradient descent method. Experimental results show that, compared to BiLSTM, He-BiLSTM further improves accuracy, robustness, and generalization performance.

20 pages, 16267 KiB  
Article
Multi-Scale Detail–Noise Complementary Learning for Image Denoising
by Yan Cui, Mingyue Shi and Jielin Jiang
Appl. Sci. 2024, 14(16), 7044; https://doi.org/10.3390/app14167044 - 11 Aug 2024
Viewed by 1529
Abstract
Deep convolutional neural networks (CNNs) have demonstrated significant potential in image denoising. However, most denoising methods fuse different levels of features through long and short skip connections, which easily generates redundant information, weakens the complementarity of different feature levels, and results in the loss of image details. In this paper, we propose a multi-scale detail–noise complementary learning (MDNCL) network for both additive white Gaussian noise removal and real-world noise removal. The MDNCL network comprises two branches, the Detail Feature Learning Branch (DLB) and the Noise Learning Branch (NLB), with a loss function that guides the complementary learning of image detail features and noise mappings across the two branches. This learning approach effectively balances noise reduction and detail restoration, especially at high noise levels. To enhance the complementarity of features between different network layers and avoid redundant information, we designed a Feature Subtraction Unit (FSU) to capture the differences between features across the DLB network layers. Extensive experimental evaluations demonstrate that the MDNCL approach achieves impressive denoising performance and outperforms other popular denoising methods.
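
A minimal sketch of the feature-subtraction idea, in the spirit of the abstract but not reproducing the MDNCL design: instead of concatenating features from two depths, the unit keeps their difference — the non-redundant part — and re-injects it after refinement.

```python
# Hedged sketch of a Feature Subtraction Unit (FSU); layout is illustrative.
import torch
import torch.nn as nn

class FSU(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.refine = nn.Conv2d(channels, channels, 3, padding=1)
    def forward(self, shallow, deep):
        diff = shallow - deep             # complementary (non-redundant) part
        return deep + self.refine(diff)   # re-inject refined detail residue

# usage: fuse features taken from two depths of a denoising backbone
f_shallow = torch.rand(1, 64, 32, 32)
f_deep = torch.rand(1, 64, 32, 32)
fused = FSU(64)(f_shallow, f_deep)
```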

15 pages, 2587 KiB  
Article
Unsupervised Scene Image Text Segmentation Based on Improved CycleGAN
by Xian Liu, Fang Yang and Wei Guo
Appl. Sci. 2024, 14(11), 4420; https://doi.org/10.3390/app14114420 - 23 May 2024
Viewed by 915
Abstract
Scene image text segmentation is an important task in computer vision, but the complexity and diversity of backgrounds make it challenging. Supervised image segmentation requires paired semantic label data to ensure accuracy, but semantic labels are often difficult to obtain. To solve this problem, we propose an unsupervised scene image text segmentation model based on the image style transfer model CycleGAN (Cycle-Consistent Generative Adversarial Network), trained with partially unpaired label data. Text segmentation is achieved by converting a complex background into a simple one. Since the images generated by CycleGAN cannot retain the details of the text content, we also introduce an Atrous Spatial Pyramid Pooling (ASPP) module to capture text features at multiple scales, improving the quality of the resulting images. The proposed method is verified by experiments on a synthetic dataset, the IIIT 5K-Word dataset, and the MACT dataset; it effectively segments the text while preserving the details of the text content.
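
For reference, an ASPP block like the one added here can be sketched as below; this is a generic, hedged implementation, and the dilation rates are illustrative rather than taken from the paper.

```python
# Hedged sketch of an Atrous Spatial Pyramid Pooling (ASPP) block.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        # parallel dilated convs see the same input at different receptive fields
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates)
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)
    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

features = torch.rand(1, 256, 32, 32)
out = ASPP(256, 64)(features)            # multi-scale text features
```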

21 pages, 8442 KiB  
Article
Multi-Step Multidimensional Statistical Arbitrage Prediction Using PSO Deep-ConvLSTM: An Enhanced Approach for Forecasting Price Spreads
by Sensen Tu, Panke Qin, Mingfu Zhu, Zeliang Zeng, Shenjie Cheng and Bo Ye
Appl. Sci. 2024, 14(9), 3798; https://doi.org/10.3390/app14093798 - 29 Apr 2024
Viewed by 1088
Abstract
Due to its effectiveness as a risk-hedging trading strategy in financial markets, futures arbitrage is highly sought after by investors in turbulent market conditions. The essence of futures arbitrage lies in formulating strategies based on predictions of future price differentials. However, contemporary research predominantly focuses on predicting a single indicator at the next time step, whereas devising effective arbitrage strategies often requires examining multiple indicators across timeframes. To tackle this challenge, our methodology leverages a PSO Deep-ConvLSTM network, which refines hyperparameters, including layer architectures and learning rates, through particle swarm optimization (PSO), culminating in superior predictive performance. By analyzing temporal–spatial data in financial markets through ConvLSTM, the model captures intricate market patterns and forecasts better than traditional models. Multistep forward simulation experiments and extensive ablation studies using futures data from the Shanghai Futures Exchange in China validate the effectiveness of the integrated model. Compared with the gated recurrent unit (GRU), long short-term memory (LSTM), Transformer, and FEDformer, this model exhibits an average reduction of 39.8% in root mean squared error (RMSE), 42.5% in mean absolute error (MAE), and 45.6% in mean absolute percentage error (MAPE), and an average increase of 1.96% in coefficient of determination (R2) values.
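
The PSO hyperparameter loop can be pictured with the hedged sketch below (not the authors' setup): particles encode a (log learning rate, hidden size) pair, and the `fitness` function is a cheap stand-in for "train the Deep-ConvLSTM and return validation RMSE", which would be far too slow to inline.

```python
# Hedged sketch: particle swarm optimization over two hyperparameters.
import numpy as np

def fitness(p):                          # placeholder for a real training run
    lr, hidden = 10 ** p[0], int(round(p[1]))
    return (np.log10(lr) + 3) ** 2 + (hidden - 64) ** 2 / 1000.0

rng = np.random.default_rng(0)
lo, hi = np.array([-5.0, 16]), np.array([-1.0, 256])
x = rng.uniform(lo, hi, size=(20, 2))    # particle positions
v = np.zeros_like(x)                     # particle velocities
pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
gbest = pbest[pbest_f.argmin()]

for _ in range(50):
    r1, r2 = rng.random((2, 20, 1))
    # inertia + cognitive pull (pbest) + social pull (gbest)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    f = np.array([fitness(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmin()]

print("best lr=%.2g, hidden=%d" % (10 ** gbest[0], int(round(gbest[1]))))
```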

25 pages, 3314 KiB  
Article
Novel GA-Based DNN Architecture for Identifying the Failure Mode with High Accuracy and Analyzing Its Effects on the System
by Naeim Rezaeian, Regina Gurina, Olga A. Saltykova, Lokmane Hezla, Mammetnazar Nohurov and Kazem Reza Kashyzadeh
Appl. Sci. 2024, 14(8), 3354; https://doi.org/10.3390/app14083354 - 16 Apr 2024
Cited by 10 | Viewed by 1153
Abstract
Symmetric data play an effective role in the risk assessment process; therefore, integrating symmetrical information using Failure Mode and Effects Analysis (FMEA) is essential in implementing projects with big data. This proactive approach helps to quickly identify risks and take measures to address them. However, the task is time-consuming and costly, and it conventionally requires a domain expert to carry out the process manually. In the present study, the authors therefore propose a new methodology that automates this task with a deep-learning technique. Moreover, because the risk data differ in nature, a single neural network architecture cannot serve all of them; to overcome this, a Genetic Algorithm (GA) was employed to find the best architecture and hyperparameters. Finally, the risks were processed and predicted using the proposed methodology without sending data to external servers. For the first risk, latency and real-time processing, the analysis showed that the proposed methodology improves failure mode detection accuracy by 71.52%, 54.72%, 72.47%, and 75.73% for one, two, three, and four hidden layers, respectively, compared to a fixed network with ReLU activation and 32 neurons.
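
A hedged sketch of a GA over network architectures (not the authors' encoding): a chromosome is a list of hidden-layer widths, and `fitness` is a placeholder for "train on the risk data, return accuracy".

```python
# Hedged sketch: genetic search over hidden-layer widths of a DNN.
import random
random.seed(0)

def fitness(layers):                     # stand-in for a real training run
    return -abs(len(layers) - 3) - abs(sum(layers) / len(layers) - 64) / 32

def random_arch():
    return [random.choice([16, 32, 64, 128]) for _ in range(random.randint(1, 4))]

pop = [random_arch() for _ in range(20)]
for gen in range(30):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]                            # truncation selection
    children = []
    while len(children) < 10:
        a, b = random.sample(survivors, 2)
        cut = random.randint(0, min(len(a), len(b)))
        child = a[:cut] + b[cut:]                   # one-point crossover
        if random.random() < 0.3:                   # mutation: tweak one width
            i = random.randrange(len(child))
            child[i] = random.choice([16, 32, 64, 128])
        children.append(child)
    pop = survivors + children

print("best architecture:", max(pop, key=fitness))
```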

22 pages, 5750 KiB  
Article
Deep Q-Learning-Based Smart Scheduling of EVs for Demand Response in Smart Grids
by Viorica Rozina Chifu, Tudor Cioara, Cristina Bianca Pop, Horia Gabriel Rusu and Ionut Anghel
Appl. Sci. 2024, 14(4), 1421; https://doi.org/10.3390/app14041421 - 8 Feb 2024
Cited by 5 | Viewed by 1361
Abstract
Economic and policy factors are driving the continuous increase in the adoption and usage of electric vehicles (EVs). However, despite being a cleaner alternative to combustion engine vehicles, EVs have negative impacts on the lifespan of microgrid equipment and on energy balance due to increased power demands and the timing of their usage. In our view, grid management should leverage EV scheduling flexibility to support local network balancing through active participation in demand response programs. In this paper, we propose a model-free solution that uses deep Q-learning to schedule the charging and discharging activities of EVs within a microgrid to align with a target energy profile provided by the distribution system operator. We adapted the Bellman equation to assess the value of a state based on specific rewards for EV scheduling actions, used a neural network to estimate Q-values for the available actions, and applied the epsilon-greedy algorithm to balance exploitation and exploration while meeting the target energy profile. The results are promising, showing the effectiveness of the proposed solution in scheduling the charging and discharging actions of a fleet of 30 EVs to align with the target energy profile in demand response programs, achieving a Pearson coefficient of 0.99. The solution also adapts well to dynamic EV scheduling situations shaped by various state-of-charge distributions and e-mobility features, achieving this adaptability solely by learning from data, without prior knowledge, configuration, or fine-tuning.
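
The deep Q-learning core can be sketched as below for a single toy EV (not the paper's 30-EV microgrid): the state, reward, and dynamics are illustrative assumptions, but the epsilon-greedy action choice and the Bellman target mirror the mechanism the abstract describes.

```python
# Hedged sketch: deep Q-learning for a toy EV charging/discharging schedule.
import random
import torch
import torch.nn as nn

qnet = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 3))
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
gamma, eps = 0.95, 0.1
target_profile = [0.5] * 24                    # desired net load per hour (toy)

def step(state, action):                       # toy environment dynamics
    hour, soc, gap = state
    delta = [0.1, 0.0, -0.1][action]           # charge / idle / discharge
    soc = min(max(soc + delta, 0.0), 1.0)
    gap = target_profile[int(hour * 23)] - delta
    reward = -abs(gap)                         # follow the target profile
    return (min(hour + 1 / 23, 1.0), soc, gap), reward

for episode in range(200):
    state = (0.0, random.random(), 0.0)        # (hour, state of charge, gap)
    for t in range(24):
        s = torch.tensor(state)
        if random.random() < eps:              # epsilon-greedy exploration
            a = random.randrange(3)
        else:
            a = int(qnet(s).argmax())
        nxt, r = step(state, a)
        with torch.no_grad():                  # Bellman target: r + g * max Q(s')
            y = r + gamma * qnet(torch.tensor(nxt)).max()
        loss = (qnet(s)[a] - y) ** 2
        opt.zero_grad(); loss.backward(); opt.step()
        state = nxt
```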

17 pages, 16054 KiB  
Article
Simulation of Spinal Cord Reflexes
by Mihai Popescu and Cristian Ravariu
Appl. Sci. 2024, 14(1), 310; https://doi.org/10.3390/app14010310 - 29 Dec 2023
Cited by 2 | Viewed by 1028
Abstract
The importance of spinal reflexes is connected to rehabilitation processes in neural prostheses and to the neuromuscular junction. To model neuron networks as electronic circuits, a simulation environment such as LTspice XVII or PSpice can be used to create a complete electronic description. Four types of neurons are involved in spinal reflexes: α-motoneurons, sensory neurons, excitatory interneurons, and inhibitory interneurons. Many methods have been proposed for modeling neurons with electronic circuits. In this paper, a single internal neuron model is considered sufficient to simulate all four types of neurons involved in the control loops. The main contribution of this paper is to model neurons using electronic circuits whose input and output stages are designed either with a bipolar transistor or with CMOS transistors. In this way, it is possible to mimic the circulation of neural pulses along the loops of the spinal reflexes and to confirm that the simulation results agree with biological signals reported in the literature.

22 pages, 5601 KiB  
Article
Dynamic Depth Learning in Stacked AutoEncoders
by Sarah Alfayez, Ouiem Bchir and Mohamed Maher Ben Ismail
Appl. Sci. 2023, 13(19), 10994; https://doi.org/10.3390/app131910994 - 5 Oct 2023
Viewed by 1591
Abstract
The effectiveness of deep learning models depends on their architecture and topology, so it is essential to determine the optimal depth of the network. In this paper, we propose a novel approach, called Dynamic Depth for Stacked AutoEncoders (DDSAE), that learns the optimal depth of a stacked AutoEncoder in an unsupervised manner while training the network model. Specifically, we propose a novel objective function, alongside the AutoEncoder's loss function, to optimize the network depth: its optimization determines the relevance weights of the layers. Additionally, we propose an algorithm that iteratively prunes the irrelevant layers based on the learned relevance weights. The performance of DDSAE was assessed using benchmark and real datasets.
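
To make the layer-relevance idea concrete, here is a hedged sketch in the spirit of DDSAE, not the paper's objective: each encoder layer gets a learnable relevance logit, the loss assumed here is reconstruction through a relevance-weighted mixture of layer outputs, and layers whose softmax relevance falls below a threshold would be pruned.

```python
# Hedged sketch: learning per-layer relevance weights, then pruning weak layers.
import torch
import torch.nn as nn

dims = [32, 32, 32, 32]
layers = nn.ModuleList(nn.Sequential(nn.Linear(d, d), nn.Tanh()) for d in dims)
decoder = nn.Linear(32, 32)
rel = nn.Parameter(torch.zeros(len(layers)))   # layer relevance logits

opt = torch.optim.Adam(list(layers.parameters())
                       + list(decoder.parameters()) + [rel], lr=1e-2)
x = torch.rand(128, 32)
for _ in range(300):
    h, outs = x, []
    for layer in layers:
        h = layer(h)
        outs.append(h)
    w = torch.softmax(rel, dim=0)              # relevance weights, sum to 1
    mix = sum(wi * oi for wi, oi in zip(w, outs))
    loss = ((decoder(mix) - x) ** 2).mean()    # reconstruction through mixture
    opt.zero_grad(); loss.backward(); opt.step()

keep = torch.softmax(rel, 0) > 1.0 / (2 * len(layers))   # prune weak layers
print("relevance:", torch.softmax(rel, 0).data, "keep:", keep.tolist())
```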

14 pages, 1831 KiB  
Article
Patch-Level Consistency Regularization in Self-Supervised Transfer Learning for Fine-Grained Image Recognition
by Yejin Lee, Suho Lee and Sangheum Hwang
Appl. Sci. 2023, 13(18), 10493; https://doi.org/10.3390/app131810493 - 20 Sep 2023
Viewed by 1357
Abstract
Fine-grained image recognition aims to classify fine subcategories belonging to the same parent category, such as vehicle models or bird species. This is an inherently challenging task because a classifier must capture subtle interclass differences under large intraclass variances. Most previous approaches are based on supervised learning, which requires a large-scale labeled dataset; however, such annotated datasets for fine-grained image recognition are difficult to collect because labeling generally requires domain expertise. In this study, we propose a self-supervised transfer learning method based on the Vision Transformer (ViT) to learn finer representations without human annotations. Interestingly, we observe that existing self-supervised learning methods using ViT (e.g., DINO) show poor patch-level semantic consistency, which may be detrimental to learning finer representations. Motivated by this observation, we propose a consistency loss function that encourages the patch embeddings of the overlapping area between two augmented views to be similar to each other during self-supervised learning on fine-grained datasets. In addition, we explore effective transfer learning strategies to fully leverage existing self-supervised models trained on large-scale datasets. Contrary to the previous literature, our findings indicate that training only the last block of the ViT is effective for self-supervised transfer learning. We demonstrate the effectiveness of the proposed approach through extensive experiments on six fine-grained image classification benchmarks: FGVC Aircraft, CUB-200-2011, Food-101, Oxford 102 Flowers, Stanford Cars, and Stanford Dogs. Under the linear evaluation protocol, our method achieves an average accuracy of 78.5%, outperforming the existing transfer learning method, which yields 77.2%.
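
A patch-level consistency loss of this flavor can be sketched as below; this is a hedged illustration, not the authors' exact formulation, and it assumes the overlap between the two views is already known as aligned windows on each patch grid.

```python
# Hedged sketch: cosine consistency loss on overlapping patch embeddings.
import torch
import torch.nn.functional as F

def patch_consistency_loss(emb1, emb2, overlap1, overlap2):
    """emb*: (H, W, D) patch embeddings; overlap*: (rows, cols) slices that
    address the same image region in each view (assumed aligned 1:1)."""
    p1 = emb1[overlap1].reshape(-1, emb1.shape[-1])
    p2 = emb2[overlap2].reshape(-1, emb2.shape[-1])
    # push corresponding patch embeddings toward each other
    return (1 - F.cosine_similarity(p1, p2, dim=-1)).mean()

# toy usage: 14x14 patch grids from a ViT, overlap is an 8x8 window in each view
emb1, emb2 = torch.rand(14, 14, 384), torch.rand(14, 14, 384)
loss = patch_consistency_loss(emb1, emb2,
                              (slice(0, 8), slice(6, 14)),
                              (slice(6, 14), slice(0, 8)))
```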

18 pages, 6770 KiB  
Article
Research on Improved GRU-Based Stock Price Prediction Method
by Chi Chen, Lei Xue and Wanqi Xing
Appl. Sci. 2023, 13(15), 8813; https://doi.org/10.3390/app13158813 - 30 Jul 2023
Cited by 7 | Viewed by 6276
Abstract
The prediction of stock prices holds significant implications for researchers and investors evaluating stock value and risk. In recent years, researchers have increasingly replaced traditional machine learning methods with deep learning approaches in this domain; however, the application of deep learning to stock price forecasting is confronted with the challenge of overfitting. To address this issue and enhance predictive accuracy, this study proposes a stock prediction model based on a gated recurrent unit (GRU) with reconstructed datasets. The model integrates data from other stocks within the same industry, thereby enriching the extracted features and mitigating the risk of overfitting. Additionally, an auxiliary module augments the volume of data through dataset reconstruction, enhancing the comprehensiveness of training and the model's generalization capabilities. Experimental results demonstrate a substantial improvement in prediction accuracy across various industries.

19 pages, 3494 KiB  
Article
A Multi-Layer Feature Fusion Model Based on Convolution and Attention Mechanisms for Text Classification
by Hua Yang, Shuxiang Zhang, Hao Shen, Gexiang Zhang, Xingquan Deng, Jianglin Xiong, Li Feng, Junxiong Wang, Haifeng Zhang and Shenyang Sheng
Appl. Sci. 2023, 13(14), 8550; https://doi.org/10.3390/app13148550 - 24 Jul 2023
Cited by 5 | Viewed by 3079
Abstract
Text classification is one of the fundamental tasks in natural language processing and is widely applied across domains. CNNs effectively exploit local features, while the attention mechanism excels at capturing content-based global interactions. In this paper, we propose a multi-layer feature fusion text classification model called CAC, based on the Combination of CNN and Attention. The model adopts the idea of first extracting local features and then computing global attention, drawing inspiration from the interaction between membranes in membrane computing to improve classification performance. Specifically, the CAC model uses the local feature extraction capability of CNNs to transform the original semantics into a multi-dimensional feature space; global attention is then computed in each feature space to capture global contextual information within the text. Finally, the locally and globally extracted features are fused for classification. Experimental results on various public datasets demonstrate that the CAC model outperforms models that rely solely on the attention mechanism, and in terms of accuracy and performance it also exhibits significant improvements over other models based on CNNs, RNNs, and attention.
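
The "CNN first, attention second" pattern can be sketched as below; this is a hedged, generic illustration — dimensions, head counts, and the concatenation-based fusion rule are assumptions, not the paper's design.

```python
# Hedged sketch: local conv features followed by global self-attention, fused.
import torch
import torch.nn as nn

class CNNThenAttention(nn.Module):
    def __init__(self, vocab, dim=128, n_classes=4):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.conv = nn.Conv1d(dim, dim, kernel_size=3, padding=1)  # local n-grams
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.fc = nn.Linear(2 * dim, n_classes)
    def forward(self, tokens):                       # tokens: (batch, seq_len)
        x = self.emb(tokens)
        local = self.conv(x.transpose(1, 2)).transpose(1, 2).relu()
        glob, _ = self.attn(local, local, local)     # global attention on local feats
        fused = torch.cat([local.mean(1), glob.mean(1)], dim=-1)  # fuse both views
        return self.fc(fused)

logits = CNNThenAttention(vocab=10000)(torch.randint(0, 10000, (8, 50)))
```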

17 pages, 8822 KiB  
Article
Removing Rain Streaks from Visual Image Using a Combination of Bilateral Filter and Generative Adversarial Network
by Yue Yang, Minglong Xu, Chuang Chen and Fan Xue
Appl. Sci. 2023, 13(11), 6387; https://doi.org/10.3390/app13116387 - 23 May 2023
Viewed by 1589
Abstract
Images acquired using vision sensors are easily degraded by environmental conditions, especially rain streaks. These streaks seriously reduce image quality, which in turn reduces the accuracy of the algorithms that consume the resulting images in vision sensor systems. In this paper, we propose a method that combines a bilateral filter with a generative adversarial network to eliminate the interference of rain streaks. Unlike other methods that feed all the information in an image to the generative adversarial network, we use a bilateral filter to preprocess the original image and separate out its high-frequency part. A generator for the high-frequency layer of the image is designed to produce an image with no rain streaks, and the high-frequency information is used in a high-frequency global discriminator designed to measure the authenticity of the generated image from multiple perspectives. We also designed a loss function based on the structural similarity index to further improve the removal of rain streaks. An ablation experiment confirms the validity of the method, and comparisons on synthetic and real-world datasets show that our method retains more image information and generates clearer images.
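
The preprocessing split can be sketched as below (hedged, not the authors' code): a bilateral filter gives an edge-preserving low-frequency base layer, and the residual high-frequency layer — where rain streaks live — is what would be handed to the GAN. The input filename and filter parameters are illustrative assumptions.

```python
# Hedged sketch: bilateral-filter split into base and high-frequency layers.
import cv2
import numpy as np

img = cv2.imread("rainy.png").astype(np.float32) / 255.0   # assumed input file
base = cv2.bilateralFilter(img, d=9, sigmaColor=75 / 255.0, sigmaSpace=9)
high = img - base                       # high-frequency layer: details + streaks

# the derain generator would map `high` to a streak-free detail layer;
# the final image is then base + generated detail
cv2.imwrite("base.png", (base * 255).clip(0, 255).astype(np.uint8))
cv2.imwrite("high.png", ((high + 0.5) * 255).clip(0, 255).astype(np.uint8))
```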

Review


27 pages, 3698 KiB  
Review
A Historical Survey of Advances in Transformer Architectures
by Ali Reza Sajun, Imran Zualkernan and Donthi Sankalpa
Appl. Sci. 2024, 14(10), 4316; https://doi.org/10.3390/app14104316 - 20 May 2024
Cited by 1 | Viewed by 5056
Abstract
In recent times, transformer-based deep learning models have risen to prominence in machine learning for a variety of tasks such as computer vision and text generation. Given this increased interest, a historical outlook on the development and rapid progression of transformer-based models is imperative for understanding the rise of this key architecture. This paper presents a survey of key works related to the early development and implementation of transformer models in various domains, such as generative deep learning and as the backbones of large language models. Previous works are classified based on their historical approaches, followed by key works in text-based, image-based, and miscellaneous applications. A quantitative and qualitative analysis of the various approaches is presented. Additionally, recent directions of transformer-related research, such as those in the biomedical and time-series domains, are discussed. Finally, future research opportunities are identified, especially regarding multi-modality and the optimization of the transformer training process.

Other


26 pages, 767 KiB  
Tutorial
Hands-On Fundamentals of 1D Convolutional Neural Networks—A Tutorial for Beginner Users
by Ilaria Cacciari and Anedio Ranfagni
Appl. Sci. 2024, 14(18), 8500; https://doi.org/10.3390/app14188500 - 20 Sep 2024
Viewed by 2771
Abstract
In recent years, deep learning (DL) has garnered significant attention for its successful applications across various domains in solving complex problems. This interest has spurred the development of numerous neural network architectures, including Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), and the more recently introduced Transformers. The choice of architecture depends on the data characteristics and the specific task at hand. In the 1D domain, one-dimensional CNNs (1D CNNs) are widely used, particularly for tasks involving the classification and recognition of 1D signals. While there are many applications of 1D CNNs in the literature, the technical details of their training are often not thoroughly explained, posing challenges for those developing new libraries in languages other than those supported by available open-source solutions. This paper offers a comprehensive, step-by-step tutorial on deriving the feedforward and backpropagation equations of 1D CNNs, applicable to both regression and classification tasks. By linking neural networks with linear algebra, statistics, and optimization, the tutorial aims to clarify concepts related to 1D CNNs, making it a valuable resource for those interested in developing new libraries beyond existing ones.
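
As a companion to the tutorial's topic, the hedged sketch below shows a from-scratch forward pass of one 1D convolution layer in NumPy (valid padding, stride 1) — the building block whose feedforward and backpropagation equations the paper derives. Shapes and names are generic, not taken from the paper.

```python
# Hedged sketch: naive 1D convolution forward pass, written for clarity.
import numpy as np

def conv1d_forward(x, w, b):
    """x: (in_ch, length), w: (out_ch, in_ch, k), b: (out_ch,)"""
    out_ch, in_ch, k = w.shape
    out_len = x.shape[1] - k + 1            # valid padding, stride 1
    y = np.zeros((out_ch, out_len))
    for o in range(out_ch):
        for t in range(out_len):
            # dot product of filter o with the input window starting at t
            y[o, t] = np.sum(w[o] * x[:, t:t + k]) + b[o]
    return y

x = np.random.rand(2, 16)               # 2 input channels, 16 samples
w = np.random.rand(4, 2, 3)             # 4 filters of width 3
y = conv1d_forward(x, w, np.zeros(4))   # -> shape (4, 14)
```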
