Search Results (3,743)

Search Parameters:
Keywords = recurrent neural

40 pages, 1103 KB  
Article
Modified Soft Margin Optimal Hyperplane Algorithm for Support Vector Machines Applied to Fault Patterns and Disease Diagnosis
by Mario Antonio Ruz Canul, Jose A. Ruz-Hernandez, Alma Y. Alanis, Juan Carlos Gonzalez Gomez and Jorge Gálvez
Symmetry 2025, 17(10), 1749; https://doi.org/10.3390/sym17101749 - 16 Oct 2025
Abstract
This paper introduces a modified soft margin optimal hyperplane (MSMOH) algorithm, which enhances the linear separating properties of support vector machines (SVMs) by placing higher penalties on large misclassification errors. This approach improves margin symmetry in both balanced and asymmetric data distributions. The research is divided into two main stages. The first stage evaluates MSMOH for synthetic data classification and its application in heart disease diagnosis. In a cross-validation setting with unknown data, MSMOH demonstrated superior average performance compared to the standard soft margin optimal hyperplane (SMOH). Performance metrics confirmed that MSMOH maximizes the margin and reduces the number of support vectors (SVs), thus improving classification performance, generalization, and computational efficiency. The second stage applies MSMOH as a novel synthesis algorithm to design a neural associative memory (NAM) based on a recurrent neural network (RNN). This NAM is used for fault diagnosis in fossil electric power plants. By promoting more symmetric decision boundaries, MSMOH increases the number of the 1024 possible input elements that converge accurately. The results show that MSMOH effectively designs the NAM, leading to better performance than other synthesis algorithms such as the perceptron, optimal hyperplane (OH), and SMOH. Specifically, MSMOH achieved the highest number of converged input elements (1019) and the smallest number of elements converging to spurious memories (5). Full article
(This article belongs to the Special Issue Symmetry in Fault Detection and Diagnosis for Dynamic Systems)
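The abstract does not give the exact MSMOH formulation, so the sketch below is only a hypothetical illustration of the core idea of penalizing large misclassification errors more heavily than the standard soft margin: a linear SVM trained with a squared-hinge loss, in which large slacks grow quadratically in the objective. All names and data are illustrative.

```python
# Hypothetical sketch: a linear SVM trained with a squared-hinge loss so that
# large misclassification errors incur a quadratically growing penalty.
# This is NOT the paper's MSMOH derivation, only an illustration of the idea
# of weighting large slack variables more heavily than the standard soft margin.
import torch

def train_squared_hinge_svm(X, y, C=1.0, lr=0.01, epochs=500):
    """X: (n, d) float tensor, y: (n,) tensor with labels in {-1, +1}."""
    n, d = X.shape
    w = torch.zeros(d, requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    opt = torch.optim.SGD([w, b], lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        margins = y * (X @ w + b)                  # signed functional margins
        slack = torch.clamp(1.0 - margins, min=0)  # hinge slack per sample
        # 0.5*||w||^2 keeps the margin wide; slack**2 punishes large errors harder
        loss = 0.5 * (w @ w) + C * (slack ** 2).mean()
        loss.backward()
        opt.step()
    return w.detach(), b.detach()

# toy usage on two separable clusters
torch.manual_seed(0)
X = torch.cat([torch.randn(50, 2) + 2.0, torch.randn(50, 2) - 2.0])
y = torch.cat([torch.ones(50), -torch.ones(50)])
w, b = train_squared_hinge_svm(X, y)
pred = torch.sign(X @ w + b)
print("training accuracy:", (pred == y).float().mean().item())
```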

16 pages, 3008 KB  
Article
Lithium-Ion Battery State of Health Estimation Based on Multi-Dimensional Health Characteristics and GAPSO-BiGRU
by Lv Zhou, Yu Zhang, Kuiting Pan and Xiongfan Cheng
Energies 2025, 18(20), 5456; https://doi.org/10.3390/en18205456 - 16 Oct 2025
Abstract
The state of health (SOH) of lithium-ion batteries (LIBs) is a key parameter that is crucial for delaying their lifespan degradation and ensuring safe use. To further explore the potential of charge curves in SOH estimation for LIBs, this paper proposes a method based on multi-dimensional health features and a genetic algorithm–particle swarm optimization (GAPSO)–bidirectional gated recurrent unit (BiGRU) neural network for SOH estimation. First, we extracted differential thermal voltammetry curves from the charging curve and defined the peak, valley, and their positions. Then, based on the charging temperature curve, we defined the time at which the maximum charging temperature occurs and the average charging temperature. Subsequently, we validated the correlation between the aforementioned six health features and SOH using the Pearson correlation coefficient. Finally, we used the multi-dimensional health features as model inputs to construct the BiGRU estimation model and employed the GAPSO hybrid strategy to achieve global adaptive optimization of the model’s hyperparameters. Experimental results on different LIBs show that the proposed method has relatively high accuracy, with an average absolute error and root mean square error of no more than 0.2771%. The comparison results with various methods further verify the superiority of the proposed method. Full article
(This article belongs to the Special Issue Advances in Battery Management Systems for Lithium-Ion Batteries)
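A minimal sketch of the estimation backbone described above, assuming a plain BiGRU regressor in PyTorch that maps a sequence of the six health features to an SOH value; the GAPSO hyperparameter search and the layer sizes are not taken from the paper.

```python
# Minimal sketch of a bidirectional GRU (BiGRU) regressor mapping a sequence of
# health features to a state-of-health (SOH) value. Layer sizes and the absence
# of the GAPSO hyperparameter search are simplifying assumptions for illustration.
import torch
import torch.nn as nn

class BiGRURegressor(nn.Module):
    def __init__(self, n_features=6, hidden=32):
        super().__init__()
        self.bigru = nn.GRU(n_features, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)   # forward + backward hidden states

    def forward(self, x):                      # x: (batch, time, n_features)
        out, _ = self.bigru(x)
        return self.head(out[:, -1, :])        # SOH estimate from the last step

model = BiGRURegressor()
cycles = torch.randn(8, 20, 6)                 # 8 batteries, 20 cycles, 6 features
soh = model(cycles)
print(soh.shape)                               # torch.Size([8, 1])
```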

22 pages, 964 KB  
Article
Multi-Modal Emotion Detection and Tracking System Using AI Techniques
by Werner Mostert, Anish Kurien and Karim Djouani
Computers 2025, 14(10), 441; https://doi.org/10.3390/computers14100441 - 16 Oct 2025
Abstract
Emotion detection significantly impacts healthcare by enabling personalized patient care and improving treatment outcomes. Single-modality emotion recognition often lacks reliability due to the complexity and subjectivity of human emotions. This study proposes a multi-modal emotion detection platform integrating visual, audio, and heart rate data using AI techniques, including convolutional neural networks and support vector machines. The system outperformed single-modality approaches, demonstrating enhanced accuracy and robustness. This improvement underscores the value of multi-modal AI in emotion detection, offering potential benefits across healthcare, education, and human–computer interaction. Full article
(This article belongs to the Special Issue Advances in Semantic Multimedia and Personalized Digital Content)

15 pages, 3296 KB  
Article
EmbTCN-Transformer: An Embedding Temporal Convolutional Network–Transformer Model for Multi-Trajectory Prediction
by Ao Chen, Haotian Chen, Zhenxin Zhang, Mingkai Yang and Yang-Yang Chen
Mathematics 2025, 13(20), 3306; https://doi.org/10.3390/math13203306 - 16 Oct 2025
Abstract
This paper addresses the multi-trajectory prediction problem by designing a so-called Embedded-TCN-Transformer (EmbTCN-Transformer) model that uses the real-time historical trajectories of a formation. A temporal convolutional network (TCN) is utilized as the input embedding to introduce temporal awareness capabilities into the model. Then, the self-attention mechanism is incorporated as the backbone to extract correlations among different positions of the trajectory. An encoder–decoder structure is adopted to generate future trajectories. Ablation experiments validate the effectiveness of the EmbTCN-Transformer, showing that the TCN-based input embedding and the self-attention mechanism contribute to 30% and 80% reductions in prediction error, respectively. Comparative experiments further demonstrate the superiority of the proposed model, achieving at least 60% and 10% performance improvements over Recurrent Neural Network (RNN)-based networks and the conventional Transformer, respectively. Full article
(This article belongs to the Special Issue Augmented Control: Algorithms and Applications)
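A simplified sketch of the embedding idea, assuming a causal dilated 1-D convolution stack as the input embedding in front of a standard Transformer encoder; the paper's full encoder–decoder structure, dimensions, and training details are not reproduced.

```python
# Rough sketch of a TCN-style input embedding in front of a Transformer encoder
# for trajectory prediction. Kernel sizes, dilations and the use of an
# encoder-only head (instead of the paper's full encoder-decoder) are
# simplifying assumptions.
import torch
import torch.nn as nn

class TCNEmbedding(nn.Module):
    """Causal dilated 1-D convolutions turning (x, y) points into d_model features."""
    def __init__(self, in_dim=2, d_model=64):
        super().__init__()
        self.conv1 = nn.Conv1d(in_dim, d_model, kernel_size=3, padding=2, dilation=1)
        self.conv2 = nn.Conv1d(d_model, d_model, kernel_size=3, padding=4, dilation=2)

    def forward(self, x):                       # x: (batch, time, in_dim)
        h = x.transpose(1, 2)                   # Conv1d expects (batch, channels, time)
        h = torch.relu(self.conv1(h))[:, :, :x.size(1)]   # trim to causal length
        h = torch.relu(self.conv2(h))[:, :, :x.size(1)]
        return h.transpose(1, 2)                # back to (batch, time, d_model)

class EmbTCNTransformer(nn.Module):
    def __init__(self, d_model=64, horizon=10, out_dim=2):
        super().__init__()
        self.embed = TCNEmbedding(d_model=d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, horizon * out_dim)
        self.horizon, self.out_dim = horizon, out_dim

    def forward(self, past):                    # past: (batch, time, 2)
        h = self.encoder(self.embed(past))      # self-attention over embedded steps
        flat = self.head(h[:, -1, :])           # predict from the last position
        return flat.view(-1, self.horizon, self.out_dim)

model = EmbTCNTransformer()
future = model(torch.randn(4, 30, 2))           # 4 agents, 30 past steps each
print(future.shape)                             # torch.Size([4, 10, 2])
```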

28 pages, 7034 KB  
Article
Water Quality Prediction Model Based on Temporal Attentive Bidirectional Gated Recurrent Unit Model
by Hongyu Yang, Lei Guo and Qingqing Tian
Sustainability 2025, 17(20), 9155; https://doi.org/10.3390/su17209155 - 16 Oct 2025
Abstract
Water pollution has serious consequences for human health and aquatic systems, so analyzing and predicting water quality is of great significance for its early prevention and control. To address the shortcomings of the Gated Recurrent Unit (GRU) water quality prediction model, such as its low utilization of early information and the limited deep-feature extraction ability of its hidden-state mechanism, this study combines a temporal attention (TA) mechanism with a bidirectional superimposed neural network and proposes a time-focused bidirectional gated recurrent unit (TA-Bi-GRU) model. Using actual water quality data from the water-source reservoir in Xiduan Village, the model was applied to predict four core water quality indicators, namely pH, ammonia nitrogen (NH3N), total nitrogen (TN), and dissolved oxygen (DOX), over prediction horizons of 7, 10, 15, and 30 days. The TA-Bi-GRU model's average R2 was 0.858 (7 days), 0.772 (10 days), 0.684 (15 days), and 0.553 (30 days), and the corresponding average MAE and MSE were both lower than those of the comparison models. The experimental results show that the TA-Bi-GRU model has higher prediction accuracy and stronger generalization ability than the existing GRU, bidirectional GRU (Bi-GRU), Time-focused Gated Recurrent Unit (TA-GRU), Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM), and Deep Temporal Convolutional Networks-Long Short-Term Memory (DeepTCN-LSTM) models. Full article
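An illustrative sketch of pairing a bidirectional GRU with a simple temporal attention layer that weights every hidden state before the forecast head; the paper's exact attention formulation and layer sizes are assumptions.

```python
# Illustrative sketch of a temporal-attention Bi-GRU: attention scores weight
# every hidden state across time before the multi-day forecast is produced.
# Dimensions and the single-indicator output head are assumptions.
import torch
import torch.nn as nn

class TABiGRU(nn.Module):
    def __init__(self, n_features=4, hidden=32, horizon=7):
        super().__init__()
        self.bigru = nn.GRU(n_features, hidden, batch_first=True, bidirectional=True)
        self.score = nn.Linear(2 * hidden, 1)       # one attention score per time step
        self.head = nn.Linear(2 * hidden, horizon)  # multi-day forecast of one indicator

    def forward(self, x):                           # x: (batch, time, n_features)
        h, _ = self.bigru(x)                        # (batch, time, 2*hidden)
        alpha = torch.softmax(self.score(h), dim=1) # temporal attention weights
        context = (alpha * h).sum(dim=1)            # attention-weighted summary
        return self.head(context)

model = TABiGRU()
history = torch.randn(16, 60, 4)                    # 60 days of pH, NH3N, TN, DOX
print(model(history).shape)                         # torch.Size([16, 7])
```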

20 pages, 3759 KB  
Article
Comparative Analysis and Validation of LSTM and GRU Models for Predicting Annual Mean Sea Level in the East Sea: A Case Study of Ulleungdo Island
by Tae-Yun Kim, Hong-Sik Yun, Hyung-Mi Yoon and Seung-Jun Lee
Appl. Sci. 2025, 15(20), 11067; https://doi.org/10.3390/app152011067 - 15 Oct 2025
Abstract
This study presents a deep learning-based model for predicting annual mean sea level (MSL) in the East Sea, with a focus on the Ulleungdo Island region, which maintains an independent vertical datum. To account for long-term tidal variability, the model enables continuous estimation of hourly and annual MSL values. Two recurrent neural network (RNN) architectures—Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU)—were constructed and compared. Observational tide gauge data from 1 January 2000 to 3 August 2018 (covering 18.6 years and a full tidal nodal cycle) were preprocessed through missing-value and outlier treatment, followed by min–max normalization, and then structured for sequential learning. Comparative analysis demonstrated that the GRU model slightly outperformed the LSTM model in predictive accuracy and training stability. As a result, the GRU model was selected to produce annual MSL forecasts for the period 2018–2021. The GRU achieved a mean RMSE of approximately 0.44 cm during this prediction period, indicating robust performance in forecasting hourly sea level variations. The findings highlight the potential of deep learning methods to support vertical datum determination in island regions and to provide reliable sea level estimates for integration into coastal and oceanographic modeling. The proposed approach offers a scalable framework for long-term sea level prediction under evolving geodetic conditions. Full article
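A small sketch of the preprocessing steps named in the abstract, assuming min–max normalization of the hourly series followed by sliding-window sequence construction for the recurrent models; the window length and the synthetic stand-in data are assumptions.

```python
# Sketch of the preprocessing the abstract describes: min-max normalisation of
# the tide-gauge series, then sliding-window sequences for LSTM/GRU training.
# The 24-step window and the synthetic series are illustrative stand-ins.
import numpy as np

def min_max_scale(series):
    lo, hi = series.min(), series.max()
    return (series - lo) / (hi - lo), (lo, hi)

def make_windows(series, window=24, horizon=1):
    """Each sample: `window` past values as input, the next `horizon` values as target."""
    X, y = [], []
    for start in range(len(series) - window - horizon + 1):
        X.append(series[start:start + window])
        y.append(series[start + window:start + window + horizon])
    return np.stack(X), np.stack(y)

hourly_msl = np.sin(np.linspace(0, 50, 5000)) + 0.1 * np.random.randn(5000)  # stand-in data
scaled, (lo, hi) = min_max_scale(hourly_msl)
X, y = make_windows(scaled, window=24, horizon=1)
print(X.shape, y.shape)    # (4976, 24) (4976, 1)
```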

20 pages, 4914 KB  
Article
Dual-Channel Parallel Multimodal Feature Fusion for Bearing Fault Diagnosis
by Wanrong Li, Haichao Cai, Xiaokang Yang, Yujun Xue, Jun Ye and Xiangyi Hu
Machines 2025, 13(10), 950; https://doi.org/10.3390/machines13100950 - 15 Oct 2025
Abstract
In recent years, the powerful feature extraction capabilities of deep learning have attracted widespread attention in the field of bearing fault diagnosis. To address the limitations of single-modal and single-channel feature extraction methods, which often result in incomplete information representation and difficulty in obtaining high-quality fault features, this paper proposes a dual-channel parallel multimodal feature fusion model for bearing fault diagnosis. In this method, the one-dimensional vibration signals are first transformed into two-dimensional time-frequency representations using continuous wavelet transform (CWT). Subsequently, both the one-dimensional vibration signals and the two-dimensional time-frequency representations are fed simultaneously into the dual-branch parallel model. Within this architecture, the first branch employs a combination of a one-dimensional convolutional neural network (1DCNN) and a bidirectional gated recurrent unit (BiGRU) to extract temporal features from the one-dimensional vibration signals. The second branch utilizes a dilated convolutional network to capture spatial time–frequency information from the CWT-derived two-dimensional time–frequency representations. The features extracted by both branches are then input into the feature fusion layer. Furthermore, to leverage fault features more comprehensively, a channel attention mechanism is embedded after the feature fusion layer. This enables the network to focus more effectively on salient features across channels while suppressing interference from redundant features, thereby enhancing the performance and accuracy of the dual-branch network. Finally, the fused fault features are passed to a softmax classifier for fault classification. Experimental results demonstrate that the proposed method achieved an average accuracy of 99.50% on the Case Western Reserve University (CWRU) bearing dataset and 97.33% on the Southeast University (SEU) bearing dataset. These results confirm that the proposed model effectively improves fault diagnosis accuracy and exhibits strong generalization capability. Full article
(This article belongs to the Section Machines Testing and Maintenance)
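A compressed, hypothetical sketch of the dual-branch layout: a 1DCNN + BiGRU branch for the raw signal, a dilated-convolution branch for the CWT image, feature concatenation, and a squeeze-and-excitation style channel attention before the classifier; every layer size is a guess for illustration only.

```python
# Compressed sketch of the dual-branch idea: 1D-CNN + BiGRU on the raw vibration
# signal, dilated 2D convolution on the CWT time-frequency image, feature
# concatenation, and an SE-style channel attention before the classifier.
# All layer sizes are illustrative guesses, not the paper's configuration.
import torch
import torch.nn as nn

class DualChannelFaultNet(nn.Module):
    def __init__(self, n_classes=10, feat=64):
        super().__init__()
        # branch 1: temporal features from the raw 1-D signal
        self.cnn1d = nn.Sequential(nn.Conv1d(1, 16, 15, stride=2, padding=7), nn.ReLU())
        self.bigru = nn.GRU(16, feat // 2, batch_first=True, bidirectional=True)
        # branch 2: spatial time-frequency features from the CWT image
        self.cnn2d = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=2, dilation=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, feat))
        # channel attention over the fused feature vector (SE-style gating)
        self.attn = nn.Sequential(nn.Linear(2 * feat, feat // 2), nn.ReLU(),
                                  nn.Linear(feat // 2, 2 * feat), nn.Sigmoid())
        self.classifier = nn.Linear(2 * feat, n_classes)

    def forward(self, sig, cwt):            # sig: (B, 1, L), cwt: (B, 1, H, W)
        t = self.cnn1d(sig).transpose(1, 2) # (B, L', 16) for the GRU
        t, _ = self.bigru(t)
        t = t[:, -1, :]                     # (B, feat) temporal summary
        s = self.cnn2d(cwt)                 # (B, feat) spatial summary
        fused = torch.cat([t, s], dim=1)    # feature fusion layer
        fused = fused * self.attn(fused)    # re-weight channels, damp redundant ones
        return self.classifier(fused)       # logits; softmax/cross-entropy at training

model = DualChannelFaultNet()
logits = model(torch.randn(4, 1, 1024), torch.randn(4, 1, 64, 64))
print(logits.shape)                         # torch.Size([4, 10])
```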

33 pages, 1124 KB  
Review
Machine and Deep Learning in Agricultural Engineering: A Comprehensive Survey and Meta-Analysis of Techniques, Applications, and Challenges
by Samuel Akwasi Frimpong, Mu Han, Wenyi Zheng, Xiaowei Li, Ernest Akpaku and Ama Pokuah Obeng
Computers 2025, 14(10), 438; https://doi.org/10.3390/computers14100438 - 15 Oct 2025
Abstract
Machine learning and deep learning techniques integrated with advanced sensing technologies have revolutionized agricultural engineering, addressing complex challenges in food production, quality assessment, and environmental monitoring. This survey presents a systematic review and meta-analysis of recent developments by examining the peer-reviewed literature from 2015 to 2024. The analysis reveals computational approaches ranging from traditional algorithms like support vector machines and random forests to deep learning architectures, including convolutional and recurrent neural networks. Deep learning models often demonstrate superior performance, showing 5–10% accuracy improvements over traditional methods and achieving 93–99% accuracy in image-based applications. Three primary application domains are identified: agricultural product quality assessment using hyperspectral imaging, crop and field management through precision optimization, and agricultural automation with machine vision systems. Dataset taxonomy shows spectral data predominating at 42.1%, followed by image data at 26.2%, indicating preference for non-destructive approaches. Current challenges include data limitations, model interpretability issues, and computational complexity. Future trends emphasize lightweight model development, ensemble learning, and expanding applications. This analysis provides a comprehensive understanding of current capabilities and future directions for machine learning in agricultural engineering, supporting the development of efficient and sustainable agricultural systems for global food security. Full article

14 pages, 1052 KB  
Proceeding Paper
Artificial Intelligence Models for Balancing Energy Consumption and Security in 5G Networks
by Hammad Lazrek, Hassan El Ferindi, Meryam El Mouhtadi, Mohammed Zouiten and Aniss Moumen
Eng. Proc. 2025, 112(1), 23; https://doi.org/10.3390/engproc2025112023 - 14 Oct 2025
Abstract
Fifth-generation (5G) networks represent a paradigm shift in telecommunications, offering ultra-reliable low-latency communication, massive connectivity of devices, and unparalleled data rates. However, these advantages also present significant complications surrounding energy consumption and cybersecurity, requiring new approaches to maintain operational effectiveness and network fidelity. This study proposes a new hybrid artificial intelligence (AI) framework consisting of explainable AI (XAI) for transparent resource allocation, convolutional neural networks (CNNs) for real-time anomaly detection, and recurrent neural networks (RNNs) for predictive energy optimization. Experiments and real-world case studies illustrate this framework’s scalability and efficiency by achieving improved network resource management, a detection accuracy of 99.7% for anomalies, and energy savings of up to 65%. Full article

22 pages, 3339 KB  
Article
An AutoML Algorithm: Multiple-Steps Ahead Forecasting of Correlated Multivariate Time Series with Anomalies Using Gated Recurrent Unit Networks
by Ying Su and Morgan C. Wang
AI 2025, 6(10), 267; https://doi.org/10.3390/ai6100267 - 14 Oct 2025
Abstract
Multiple time series forecasting is critical in domains such as energy management, economic analysis, web traffic prediction and air pollution monitoring to support effective resource planning. Traditional statistical learning methods, including Vector Autoregression (VAR) and Vector Autoregressive Integrated Moving Average (VARIMA), struggle with nonstationarity, temporal dependencies, inter-series correlations, and data anomalies such as trend shifts, seasonal variations, and missing data. Furthermore, their effectiveness in multi-step ahead forecasting is often limited. This article presents an Automated Machine Learning (AutoML) framework that provides an end-to-end solution for researchers who lack in-depth knowledge of time series forecasting or advanced programming skills. This framework utilizes Gated Recurrent Unit (GRU) networks, a variant of Recurrent Neural Networks (RNNs), to tackle multiple correlated time series forecasting problems, even in the presence of anomalies. To reduce complexity and facilitate the AutoML process, many model parameters are pre-specified, thereby requiring minimal tuning. This design enables efficient and accurate multi-step forecasting while addressing issues including missing values and structural shifts. We also examine the advantages and limitations of GRU-based RNNs within the AutoML system for multivariate time series forecasting. Model performance is evaluated using multiple accuracy metrics across various forecast horizons. The empirical results confirm our proposed approach’s ability to capture inter-series dependencies and handle anomalies in long-range forecasts. Full article
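A minimal sketch of the core forecaster only, assuming a stacked GRU that maps a window of several correlated series to multi-step forecasts of all of them; the AutoML wrapper, anomaly handling, and pre-specified hyperparameters described in the abstract are not reproduced.

```python
# Minimal sketch of the core forecaster: a GRU that maps a window of several
# correlated series to multi-step-ahead forecasts of all of them. The AutoML
# wrapper and anomaly handling are omitted; all dimensions are illustrative.
import torch
import torch.nn as nn

class MultiSeriesGRUForecaster(nn.Module):
    def __init__(self, n_series=3, hidden=64, horizon=12):
        super().__init__()
        self.gru = nn.GRU(n_series, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, horizon * n_series)
        self.horizon, self.n_series = horizon, n_series

    def forward(self, x):                       # x: (batch, window, n_series)
        _, h = self.gru(x)                      # h: (num_layers, batch, hidden)
        out = self.head(h[-1])                  # forecast from the top layer's state
        return out.view(-1, self.horizon, self.n_series)

model = MultiSeriesGRUForecaster()
window = torch.randn(8, 48, 3)                  # 8 samples, 48 past steps, 3 series
print(model(window).shape)                      # torch.Size([8, 12, 3])
```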

43 pages, 12462 KB  
Article
Real-Time Efficient Approximation of Nonlinear Fractional-Order PDE Systems via Selective Heterogeneous Ensemble Learning
by Biao Ma and Shimin Dong
Fractal Fract. 2025, 9(10), 660; https://doi.org/10.3390/fractalfract9100660 - 13 Oct 2025
Abstract
Rod-pumping systems represent complex nonlinear systems. Traditional soft-sensing methods used for efficiency prediction in such systems typically rely on complicated fractional-order partial differential equations, severely limiting the real-time capability of efficiency estimation. To address this limitation, we propose an approximate efficiency prediction model for nonlinear fractional-order differential systems based on selective heterogeneous ensemble learning. This method integrates electrical power time-series data with fundamental operational parameters to enhance real-time predictive capability. Initially, we extract critical parameters influencing system efficiency using statistical principles. These primary influencing factors are identified through Pearson correlation coefficients and validated using p-value significance analysis. Subsequently, we introduce three foundational approximate system efficiency models: Convolutional Neural Network-Echo State Network-Bidirectional Long Short-Term Memory (CNN-ESN-BiLSTM), Bidirectional Long Short-Term Memory-Bidirectional Gated Recurrent Unit-Transformer (BiLSTM-BiGRU-Transformer), and Convolutional Neural Network-Echo State Network-Bidirectional Gated Recurrent Unit (CNN-ESN-BiGRU). Finally, to balance diversity among basic approximation models and predictive accuracy, we develop a selective heterogeneous ensemble-based approximate efficiency model for nonlinear fractional-order differential systems. Experimental validation utilizing actual oil-well parameters demonstrates that the proposed approach effectively and accurately predicts the efficiency of rod-pumping systems. Full article
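The abstract does not describe the selection criterion, so the sketch below shows a generic selective-ensemble construction, assuming greedy forward selection of base models by validation RMSE over synthetic stand-in predictions; it is not the authors' method.

```python
# Generic illustration of "selective ensemble" construction: greedily add the
# base model whose inclusion most reduces validation RMSE, then average the
# selected members. The base networks named in the abstract are represented by
# synthetic stand-in predictions; the paper's actual criterion is not reproduced.
import numpy as np

def rmse(pred, target):
    return float(np.sqrt(np.mean((pred - target) ** 2)))

def greedy_selective_ensemble(val_preds, val_target, max_members=3):
    """val_preds: dict name -> (n_val,) predictions on a validation set."""
    selected, best_err = [], np.inf
    while len(selected) < max_members:
        candidate, candidate_err = None, best_err
        for name, pred in val_preds.items():
            if name in selected:
                continue
            members = selected + [name]
            ens = np.mean([val_preds[m] for m in members], axis=0)
            err = rmse(ens, val_target)
            if err < candidate_err:
                candidate, candidate_err = name, err
        if candidate is None:        # no remaining model improves the ensemble
            break
        selected.append(candidate)
        best_err = candidate_err
    return selected, best_err

rng = np.random.default_rng(0)
truth = rng.normal(size=200)
val_preds = {name: truth + rng.normal(scale=s, size=200)
             for name, s in [("cnn_esn_bilstm", 0.3), ("bilstm_bigru_tr", 0.4), ("cnn_esn_bigru", 0.5)]}
print(greedy_selective_ensemble(val_preds, truth))
```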

20 pages, 2421 KB  
Article
Adaptive Integral Sliding Mode Control for Symmetric UAV with Mismatched Disturbances Based on an Improved Recurrent Neural Network
by Shanping Wang, Haicheng Wan, Ping Wang and Wendong Li
Symmetry 2025, 17(10), 1720; https://doi.org/10.3390/sym17101720 - 13 Oct 2025
Abstract
This study proposes a sliding-mode-based adaptive control framework for symmetric quad-rotor altitude and attitude tracking under parametric uncertainties and mismatched disturbances. To address mismatched disturbances, a finite-time disturbance observer (DO) is integrated into a high-order terminal sliding mode manifold design. Because conventional sliding mode control depends on precise dynamic models that are unavailable in quad-rotor applications, we devise a fully connected double hidden layer recurrent neural network (FCDHRNN) with full interlayer feedback to approximate the unmodeled dynamics. The double hidden layer structure strengthens the approximation ability, achieving higher accuracy and better generalization with fewer neurons than a single-hidden-layer network. Through Lyapunov stability analysis, weight adaptation laws are rigorously derived to guarantee finite-time convergence of both tracking errors and estimation residuals. Simulation results show that the proposed scheme outperforms existing quad-rotor control schemes. Full article
(This article belongs to the Section Computer)

17 pages, 7770 KB  
Article
Long-Term Runoff Prediction Using Large-Scale Climatic Indices and Machine Learning Model in Wudongde and Three Gorges Reservoirs
by Feng Ma, Xiaoshan Sun and Zihang Han
Water 2025, 17(20), 2942; https://doi.org/10.3390/w17202942 - 12 Oct 2025
Abstract
Reliable long-term runoff prediction for Wudongde and Three Gorges reservoirs, two major reservoirs in the upper Yangtze River basin, is crucial for optimal operation of cascade reservoirs and hydropower generation planning. This study develops a data-driven model that integrates large-scale climate factors with a Gated Recurrent Unit (GRU) neural network to enhance runoff forecasting at lead times of 7–18 months. Key climate predictors were systematically selected using correlation analysis and stepwise regression before being fed into the GRU model. Evaluation results demonstrate that the proposed model can skillfully predict the variability and magnitude of reservoir inflow. For Wudongde Reservoir, the model achieved a mean correlation coefficient (CC) of 0.71 and Kling–Gupta Efficiency (KGE) of 0.57 during the training period, and values of 0.69 and 0.53 respectively during the testing period. For Three Gorges Reservoir, the CC was 0.67 (training) and 0.66 (testing), and the KGE was 0.52 and 0.49 respectively. The model exhibited robust forecasting capabilities across a range of lead times but showed distinct seasonal variations, with superior performance in summer and winter compared to transitional months (April and October). This framework provides a valuable tool for long-term runoff forecasting by effectively linking large-scale climate signals to local hydrological responses. Full article
(This article belongs to the Section Hydrology)
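A sketch of the predictor-screening step, assuming candidate climate indices are ranked by the absolute Pearson correlation between their lagged values and reservoir inflow; the indices, lags, and data are synthetic placeholders, and the paper's stepwise-regression stage is omitted.

```python
# Sketch of predictor screening: rank candidate climate indices by the absolute
# Pearson correlation between their lagged values and reservoir inflow, keeping
# the strongest ones as GRU inputs. Indices, lag, and data are synthetic
# placeholders; the stepwise-regression stage described in the abstract is omitted.
import numpy as np
from scipy.stats import pearsonr

def screen_predictors(inflow, candidates, lag_months=7, top_k=3):
    """candidates: dict name -> monthly index series aligned with `inflow`."""
    scores = {}
    for name, series in candidates.items():
        lagged = series[:-lag_months]           # index leads inflow by `lag_months`
        target = inflow[lag_months:]
        r, p = pearsonr(lagged, target)
        scores[name] = (abs(r), p)
    ranked = sorted(scores.items(), key=lambda kv: kv[1][0], reverse=True)
    return ranked[:top_k]

rng = np.random.default_rng(1)
months = 360
inflow = rng.normal(size=months).cumsum()       # stand-in monthly inflow series
candidates = {f"index_{i}": inflow + rng.normal(scale=5 * (i + 1), size=months)
              for i in range(5)}
print(screen_predictors(inflow, candidates))
```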

21 pages, 3543 KB  
Article
Application of Convolutional and Recurrent Neural Networks in Classifying Plant Responses to Abiotic Stress
by Chinwe Aghadinuno, Yasser Ismail, Faiza Dad, Eman El Dakkak, Yadong Qi, Wesley Gray, Jiecai Luo and Fred Lacy
Appl. Sci. 2025, 15(20), 10960; https://doi.org/10.3390/app152010960 - 12 Oct 2025
Abstract
Agriculture is a major economic industry that sustains life, and plant health is a crucial aspect of a highly functional agricultural system. Because stress agents can damage crops and plants, it is important to understand their effects and to detect this negative impact early. Machine learning technology can help to prevent these undesirable consequences. This research investigates machine learning applications for plant health analysis and classification. Specifically, Residual Network (ResNet) and Long Short-Term Memory (LSTM) models are utilized to detect and classify plant responses to abiotic external stressors. Two types of plants, azalea (shrub) and Chinese tallow (tree), were used in this study, and different concentrations of sodium chloride (NaCl) and acetic acid were used to treat the plants. Data from cameras and soil sensors were analyzed by the machine learning algorithms. The ResNet34 and LSTM models achieved accuracies of 96% and 97.8%, respectively, in classifying plants as having good, medium, or bad health status on the test data sets. These results demonstrate that machine learning algorithms can accurately detect plant health status and distinguish healthy from unhealthy plant conditions, potentially preventing negative long-term effects in agriculture. Full article
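A sketch of the sequence-modelling side only, assuming an LSTM classifier over soil-sensor readings with three health classes; the ResNet34 image branch, the sensor dimensionality, and all layer sizes are assumptions.

```python
# Sketch of the LSTM side of the study: a sequence classifier over soil-sensor
# readings producing one of three health classes (good / medium / bad). The
# ResNet34 image branch is omitted and the sensor dimensionality is an assumption.
import torch
import torch.nn as nn

class PlantHealthLSTM(nn.Module):
    def __init__(self, n_sensors=4, hidden=32, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(n_sensors, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, x):                       # x: (batch, time, n_sensors)
        _, (h, _) = self.lstm(x)
        return self.classifier(h[-1])           # logits over {good, medium, bad}

model = PlantHealthLSTM()
readings = torch.randn(10, 48, 4)               # 10 plants, 48 readings, 4 sensors
print(model(readings).argmax(dim=1))            # predicted health class per plant
```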

26 pages, 5440 KB  
Article
Improved Streamflow Forecasting Through SWE-Augmented Spatio-Temporal Graph Neural Networks
by Akhila Akkala, Soukaina Filali Boubrahimi, Shah Muhammad Hamdi, Pouya Hosseinzadeh and Ayman Nassar
Hydrology 2025, 12(10), 268; https://doi.org/10.3390/hydrology12100268 - 11 Oct 2025
Abstract
Streamflow forecasting in snowmelt-dominated basins is essential for water resource planning, flood mitigation, and ecological sustainability. This study presents a comparative evaluation of statistical, machine learning (Random Forest), and deep learning models (Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and Spatio-Temporal Graph Neural Network (STGNN)) using 30 years of data from 20 monitoring stations across the Upper Colorado River Basin (UCRB). We assess the impact of integrating meteorological variables—particularly, the Snow Water Equivalent (SWE)—and spatial dependencies on predictive performance. Among all models, the Spatio-Temporal Graph Neural Network (STGNN) achieved the highest accuracy, with a Nash–Sutcliffe Efficiency (NSE) of 0.84 and Kling–Gupta Efficiency (KGE) of 0.84 in the multivariate setting at the critical downstream node, Lees Ferry. Compared to the univariate setup, SWE-enhanced predictions reduced Root Mean Square Error (RMSE) by 12.8%. Seasonal and spatial analyses showed the greatest improvements at high-elevation and mid-network stations, where snowmelt dynamics dominate runoff. These findings demonstrate that spatio-temporal learning frameworks, especially STGNNs, provide a scalable and physically consistent approach to streamflow forecasting under variable climatic conditions. Full article
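A minimal STGNN-style block for intuition, assuming a row-normalized-adjacency graph convolution that mixes information across the monitoring stations at each time step, followed by a GRU over time; this is not the authors' architecture, and the adjacency and sizes are made up.

```python
# Minimal spatio-temporal sketch: a normalised-adjacency graph convolution mixes
# information across stations at every time step, and a GRU then models each
# station's mixed series through time. This is a generic STGNN-style block, not
# the paper's model; the station graph, features, and sizes are placeholders.
import torch
import torch.nn as nn

class SimpleSTGNN(nn.Module):
    def __init__(self, adj, in_feats=2, hidden=32):
        super().__init__()
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        self.register_buffer("a_norm", adj / deg)            # row-normalised adjacency
        self.gc = nn.Linear(in_feats, hidden)                 # shared graph-conv weights
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)                      # next-step streamflow

    def forward(self, x):                     # x: (batch, time, nodes, in_feats)
        h = torch.einsum("nm,btmf->btnf", self.a_norm, x)     # spatial mixing
        h = torch.relu(self.gc(h))                            # (batch, time, nodes, hidden)
        b, t, n, f = h.shape
        h = h.permute(0, 2, 1, 3).reshape(b * n, t, f)        # one sequence per node
        out, _ = self.gru(h)
        return self.head(out[:, -1, :]).view(b, n)            # forecast per station

nodes = 20
adj = (torch.rand(nodes, nodes) > 0.7).float()
adj = ((adj + adj.T) > 0).float() + torch.eye(nodes)          # symmetric + self-loops
model = SimpleSTGNN(adj)
x = torch.randn(4, 30, nodes, 2)           # 30 time steps of flow + SWE per station
print(model(x).shape)                      # torch.Size([4, 20])
```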
