Article

Optimizing Recurrent Neural Networks: A Study on Gradient Normalization of Weights for Enhanced Training Efficiency

1 School of Ocean Information Engineering, Jimei University, Xiamen 361021, China
2 College of Computer Engineering, Jimei University, Xiamen 361021, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(15), 6578; https://doi.org/10.3390/app14156578
Submission received: 31 May 2024 / Revised: 23 July 2024 / Accepted: 24 July 2024 / Published: 27 July 2024
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

Recurrent Neural Networks (RNNs) are classical models for processing sequential data, demonstrating excellent performance in tasks such as natural language processing and time series prediction. However, during the training of RNNs, the issues of vanishing and exploding gradients often arise, significantly impacting the model’s performance and efficiency. In this paper, we investigate why RNNs are more prone to gradient problems than other common sequential networks. To address this issue and enhance network performance, we propose a method for gradient normalization of network weights. This method suppresses the occurrence of gradient problems by altering the statistical properties of RNN weights, thereby improving training effectiveness. Additionally, we analyze the impact of weight gradient normalization on the probability-distribution characteristics of model weights and validate the sensitivity of this method to hyperparameters such as the learning rate. The experimental results demonstrate that gradient normalization enhances the stability of model training and reduces the frequency of gradient issues. On the Penn Treebank dataset, this method achieves a perplexity of 110.89, representing an 11.48% improvement over conventional gradient descent methods. For prediction lengths of 24 and 96 on the ETTm1 dataset, Mean Absolute Error (MAE) values of 0.778 and 0.592 are attained, respectively, yielding 3.00% and 6.77% improvements over conventional gradient descent methods. Moreover, selected subsets of the UCR dataset show accuracy increases ranging from 0.4% to 6.0%. The gradient normalization method enhances the ability of RNNs to learn from sequential and causal data, thereby holding significant implications for optimizing the training effectiveness of RNN-based models.
Keywords: recurrent neural networks; vanishing gradients; exploding gradients; gradient normalization; probability distribution characteristics
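The abstract describes normalizing the gradients of the RNN weight matrices so that update magnitudes stay bounded even when raw gradients vanish or explode. The paper's exact normalization rule is not reproduced on this page, so the following is only a minimal sketch under an assumed form of the method: each weight matrix's gradient is rescaled to unit L2 norm before the optimizer step. The model, data, and names (rnn, head, eps) are hypothetical and chosen purely for illustration.

```python
# Minimal sketch of per-weight gradient normalization for an RNN (assumed form,
# not the authors' exact implementation).
import torch
import torch.nn as nn

torch.manual_seed(0)

rnn = nn.RNN(input_size=8, hidden_size=16, batch_first=True)
head = nn.Linear(16, 1)
params = list(rnn.parameters()) + list(head.parameters())
optimizer = torch.optim.SGD(params, lr=0.1)

x = torch.randn(4, 32, 8)   # toy batch: (batch, seq_len, features)
y = torch.randn(4, 1)

for step in range(5):
    optimizer.zero_grad()
    out, _ = rnn(x)
    loss = nn.functional.mse_loss(head(out[:, -1]), y)
    loss.backward()

    # Assumed normalization: rescale each weight matrix's gradient to unit
    # L2 norm; eps guards against division by zero when gradients vanish.
    eps = 1e-8
    for p in params:
        if p.grad is not None and p.dim() > 1:  # weight matrices only, skip biases
            p.grad.div_(p.grad.norm() + eps)

    optimizer.step()
    print(f"step {step}: loss = {loss.item():.4f}")
```

With this kind of rescaling, the step size along each weight matrix is governed by the learning rate rather than by the raw gradient magnitude, which is one plausible way such a scheme could reduce the frequency of vanishing- and exploding-gradient events during training.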

Share and Cite

MDPI and ACS Style

Wu, X.; Xiang, B.; Lu, H.; Li, C.; Huang, X.; Huang, W. Optimizing Recurrent Neural Networks: A Study on Gradient Normalization of Weights for Enhanced Training Efficiency. Appl. Sci. 2024, 14, 6578. https://doi.org/10.3390/app14156578

AMA Style

Wu X, Xiang B, Lu H, Li C, Huang X, Huang W. Optimizing Recurrent Neural Networks: A Study on Gradient Normalization of Weights for Enhanced Training Efficiency. Applied Sciences. 2024; 14(15):6578. https://doi.org/10.3390/app14156578

Chicago/Turabian Style

Wu, Xinyi, Bingjie Xiang, Huaizheng Lu, Chaopeng Li, Xingwang Huang, and Weifang Huang. 2024. "Optimizing Recurrent Neural Networks: A Study on Gradient Normalization of Weights for Enhanced Training Efficiency" Applied Sciences 14, no. 15: 6578. https://doi.org/10.3390/app14156578

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
