Search Results (1,062)

Search Parameters:
Keywords = spatiotemporal predictive learning

20 pages, 1917 KB  
Article
EvoDeep-Quality: A Closed-Loop Hybrid Framework Integrating CNN-LSTM and NSGA-III for Adaptive Quality Optimization in Smart Manufacturing
by Shaymaa E. Sorour and Ahmed E. Amin
Sustainability 2026, 18(8), 3679; https://doi.org/10.3390/su18083679 - 8 Apr 2026
Abstract
This study proposes EvoDeep-Quality, a closed-loop hybrid framework integrating deep learning-based perception with multi-objective evolutionary optimization for adaptive quality control in smart manufacturing. The architecture combines a CNN-LSTM network for real-time spatiotemporal quality prediction with an NSGA-III-based optimization unit to balance conflicting objectives of quality, cost, and energy efficiency. A continuous adaptive learning loop addresses concept drift and process variability. Evaluated on an industrial-inspired synthetic dataset of textile blends (N = 5000) and validated on the real-world SECOM semiconductor manufacturing dataset, the framework demonstrates strong predictive capability (R2 = 0.947 ± 0.012, MAE = 0.035 ± 0.003) and significant manufacturing performance improvements, including a 23.5% quality enhancement and an 8.7–12.3% operational cost reduction compared to traditional and standalone AI models. Statistical significance testing (paired t-test, p < 0.01) confirms the superiority of the proposed approach. This deep-evolutionary framework advances proactive quality assurance and adaptive process control, offering a scalable solution aligned with Industry 4.0 and 5.0 paradigms. Full article
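The selection step in a closed loop like EvoDeep-Quality's can be illustrated with a minimal Pareto filter: a predictive model scores candidate process settings, and only the non-dominated trade-offs among quality (maximize), cost, and energy (minimize) survive into the next cycle. This is a generic sketch with illustrative names, not the paper's code; NSGA-III additionally uses reference-point niching, which is omitted here.

```python
def dominates(a, b):
    """a dominates b if it is no worse on every objective and strictly better
    on at least one. Quality is negated so all three objectives are minimized."""
    fa = (-a["quality"], a["cost"], a["energy"])
    fb = (-b["quality"], b["cost"], b["energy"])
    return all(x <= y for x, y in zip(fa, fb)) and any(x < y for x, y in zip(fa, fb))

def pareto_front(candidates):
    """Return the non-dominated subset of candidate process settings."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

candidates = [
    {"id": 0, "quality": 0.95, "cost": 12.0, "energy": 8.0},
    {"id": 1, "quality": 0.90, "cost": 9.0,  "energy": 6.0},
    {"id": 2, "quality": 0.88, "cost": 11.0, "energy": 9.0},  # dominated by 1
    {"id": 3, "quality": 0.97, "cost": 15.0, "energy": 7.5},
]
front = pareto_front(candidates)
print(sorted(c["id"] for c in front))  # candidate 2 drops out
```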

23 pages, 995 KB  
Article
Eye-Tracking Response Modeling and Design Optimization Method for Smart Home Interface Based on Transformer Attention Mechanism
by Yanping Lu and Myun Kim
Electronics 2026, 15(8), 1562; https://doi.org/10.3390/electronics15081562 - 8 Apr 2026
Abstract
In response to redundant spatio-temporal modeling and insufficient adaptation to dynamic decision-making in eye-tracking interaction for smart home interfaces, an eye-tracking response optimization model based on a spatio-temporal Transformer with gated cross-attention is proposed. It adapts to the physiological characteristics of saccadic gaze jumps through dynamic sparse attention gating to compress computational redundancy, and combines multi-objective reinforcement-learning attention modulation to construct a closed-loop decision-making mechanism that optimizes interface parameters in real time. Experiments showed that the model reduced eye-tracking trajectory prediction error by 23.7% compared to advanced benchmarks, increased the success rate of adapting to dynamic mutation scenarios to 89.2%, and kept performance fluctuations within 2.3% under noise interference. In high-fidelity user testing, the accuracy of cross-task gaze transfer reached 93.4%, the failure rate under glare interference was reduced to 2.4%, and the user cognitive load index was reduced by 27.9%. Resource consumption and energy consumption were reduced by 26.7% and 44.9%, respectively, while posture deviation tolerance remained at 3.5°. The sparse spatio-temporal modeling of the spatio-temporal adaptive Transformer module and the enhanced gating of the hierarchical gated cross-attention module work together to overcome the limitations of traditional methods in computational efficiency and dynamic feedback, providing a high-precision, low-latency eye-tracking interaction solution for smart home interface systems and promoting the practical evolution of personalized human–machine collaborative control.
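The "dynamic sparse attention gating" idea can be sketched as top-k attention: for each query, only the k highest-scoring keys are kept before the softmax, so a few salient interface regions receive all the attention mass and the rest are gated to zero. This is an illustrative reading of the mechanism, not the authors' implementation; the top-k rule and all names are assumptions.

```python
import math

def sparse_attention_weights(scores, k):
    """Row-wise softmax over raw attention scores, restricted to the top-k
    entries per row; all other keys are gated to exactly zero."""
    out = []
    for row in scores:
        keep = sorted(range(len(row)), key=lambda j: row[j], reverse=True)[:k]
        exps = {j: math.exp(row[j]) for j in keep}
        z = sum(exps.values())
        out.append([exps.get(j, 0.0) / z for j in range(len(row))])
    return out

weights = sparse_attention_weights([[2.0, 0.1, 1.5, -1.0]], k=2)
# each row still sums to 1, but only 2 of the 4 keys receive any mass
```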
19 pages, 4124 KB  
Article
Prediction of Maximum Usable Frequency Based on a New Hybrid Deep Learning Model
by Yuyang Li, Zhigang Zhang and Jian Shen
Electronics 2026, 15(7), 1539; https://doi.org/10.3390/electronics15071539 - 7 Apr 2026
Abstract
The reliability of high-frequency (HF) frequency selection technology relies on the prediction accuracy of the Maximum Usable Frequency of the ionospheric F2 layer (MUF-F2). To improve its short-term prediction performance, a novel hybrid deep learning prediction model is proposed, which achieves accurate modeling of the complex spatiotemporal variation patterns of MUF-F2 by integrating a feature enhancement mechanism, a dual-branch feature extraction structure, and a bidirectional temporal dependency capture network. The hybrid prediction model integrates the Channel Attention mechanism (CA), Dual-Branch Convolutional Neural Network (DCNN), and Bidirectional Long Short-Term Memory network (BiLSTM). The model is trained and validated using MUF-F2 data from 5 communication links over China during geomagnetically quiet periods and 4 during geomagnetic storm periods, with the difference in the number of links attributed to experimental constraints and the disruptive effects of geomagnetic storms. Its performance is evaluated via multiple metrics, and a comparative analysis is conducted with commonly used prediction models such as the Long Short-Term Memory (LSTM) network. Experimental results show that during geomagnetically quiet periods, the proposed model achieves lower prediction errors (Root Mean Square Error (RMSE) < 1.1 MHz, Mean Absolute Percentage Error (MAPE) < 3.8%) and a higher goodness of fit (coefficient of determination (R2) > 0.94), with the average error reduction across all links ranging from 6.2% to 46.9% compared with the baseline model. Under geomagnetic storm disturbance conditions, the model still maintains robust prediction performance, with R2 > 0.89 for all communication links, as well as RMSE < 0.6 MHz, Mean Absolute Error (MAE) < 0.4 MHz, and MAPE < 3.3%. The study demonstrates that the proposed CA-DCNN-BiLSTM model exhibits excellent prediction accuracy and anti-interference capability under different geomagnetic activity conditions, which can effectively improve the short-term prediction accuracy of MUF-F2 and provide more reliable technical support for HF communication frequency decision-making.
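The evaluation metrics quoted above (RMSE, MAE, MAPE, R2) have standard definitions that can be computed as follows; this is a generic sketch, not the authors' evaluation code, and the sample values are made up for illustration.

```python
import math

def metrics(y_true, y_pred):
    """Standard regression metrics: MAE, RMSE, MAPE (%), and R^2."""
    n = len(y_true)
    err = [p - t for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in err) / n
    rmse = math.sqrt(sum(e * e for e in err) / n)
    mape = 100.0 * sum(abs(e / t) for t, e in zip(y_true, err)) / n
    mean_t = sum(y_true) / n
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1.0 - sum(e * e for e in err) / ss_tot
    return {"MAE": mae, "RMSE": rmse, "MAPE": mape, "R2": r2}

# toy MUF-like values in MHz (illustrative only)
m = metrics([10.0, 12.0, 11.0, 13.0], [10.5, 11.5, 11.0, 13.5])
```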

23 pages, 2167 KB  
Article
Congestion-Aware Traffic Forecasting with Physics-Guided Spatio-Temporal Graph Convolutional Networks
by Yueqiao Zhang and Jian Zhang
Appl. Sci. 2026, 16(7), 3546; https://doi.org/10.3390/app16073546 - 4 Apr 2026
Viewed by 190
Abstract
Traffic flow forecasting provides essential support for the construction of smart transportation systems. Despite the strength of ASTGCN, which uses an attention mechanism to capture spatio-temporal correlations, it lacks an explicit physical interpretation, a limitation it shares with most purely data-driven forecasters. As a result, in the presence of sparse or unstable congestion, such models often violate conservation laws and may generate "physical anomalies" or other logically impossible states. To close the gap between data-driven expressiveness and physical consistency, we propose the congestion-aware physics-guided STGCN (CAP-STGCN). This framework builds a synergistic model that intrinsically couples macroscopic traffic flow kinematics (the fundamental diagram) with the spatio-temporal learning process: the physical constraints restrict the solution space so that predictions lie on a feasible manifold, enforcing consistency among flow, density, and speed. Concurrently, to address slow convergence under the long-tailed distribution of traffic states, in which saturated-congestion samples are scarce, a dynamic congestion-rectification mechanism is introduced. This mechanism redefines the optimization landscape by prioritizing hard-to-predict saturation occurrences. Experiments show that, compared with other models, CAP-STGCN achieves higher prediction accuracy; more importantly, it is free of physical anomalies during inference and can be directly used in practice.
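A minimal sketch of the physics-guided idea: after a network predicts flow q, density k, and speed v, project the triple back onto the fundamental-diagram relation q = k · v so the outputs stay physically consistent. The simple "discard predicted flow, recompute from k and v" projection and all names are assumptions for illustration, not the paper's exact constraint formulation.

```python
def project_to_fd(q, k, v):
    """Discard the network's predicted flow q and recompute it as k * v
    (with non-negativity clamps), removing 'physical anomalies'."""
    k = max(k, 0.0)
    v = max(v, 0.0)
    return k * v, k, v

q, k, v = project_to_fd(q=950.0, k=45.0, v=20.0)
# flow is rectified to 45 * 20 = 900, consistent with q = k * v
```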

16 pages, 1553 KB  
Article
Research on the Collaborative Optimization Method of Power Prediction and DRL Control
by Mengjie Li, Yongbao Liu and Xing He
Processes 2026, 14(7), 1150; https://doi.org/10.3390/pr14071150 - 3 Apr 2026
Viewed by 167
Abstract
This paper proposes a collaborative energy management strategy based on power prediction and deep reinforcement learning (DRL) to address the trade-offs among economic efficiency, durability, and dynamic performance in fuel cell hybrid power systems (FCHPS) under dynamic driving conditions. First, a hybrid prediction model termed LSTM-LSSVM with Cascade Correction (LSTM-LSSVM-CC) is developed. The cascade correction (CC) mechanism adopts a hierarchical structure to capture both low-frequency steady-state trends and high-frequency dynamic fluctuations, which are typically challenging for single models to represent. By integrating an online residual correction mechanism, this model generates accurate future power demand sequences. Second, a Dynamic Spatio-Temporal Fusion (DSTF) method is introduced to construct a high-dimensional DRL state space. This approach integrates predicted data, historical residuals, and real-time system states, enabling the agent to perform anticipatory decision-making. Third, a Dynamic Hierarchical Adaptive Multi-Objective Optimization Framework (DHAMOF) is designed. This framework dynamically adjusts objective weights and constraint boundaries based on real-time operating characteristics, enabling adaptive switching of optimization priorities across diverse scenarios. Furthermore, a closed-loop control architecture comprising "prediction–decision–execution–feedback" is established. By incorporating rolling horizon optimization and a proportional-integral (PI) residual compensation mechanism, the proposed architecture effectively suppresses prediction error accumulation and mitigates communication delays. Simulation results under combined CLTC-P and WLTP driving cycles demonstrate that, compared to conventional fixed-weight strategies, the proposed method achieves an 11.3% reduction in hydrogen consumption, a 30.9% decrease in SOC fluctuation range, and a 55.3% reduction in power tracking error. Moreover, under disturbance scenarios involving prediction errors, sensor noise, and a 200 ms communication delay, the system exhibits superior robustness: the increase in hydrogen consumption is limited to within 8.3 g/100 km, and the power tracking error is reduced by 65.6% relative to uncorrected baselines. This collaborative optimization approach overcomes the limitations of traditional open-loop prediction and fixed-weight control, offering a novel technical pathway for the high-efficiency and stable operation of fuel cell hybrid power systems.
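The PI residual-compensation step described above can be sketched as a small closed-loop corrector: the controller accumulates the prediction residual and feeds a proportional-integral correction into the next power command. Gains, names, and the unit-less toy numbers are illustrative assumptions, not values from the paper.

```python
def make_pi_compensator(kp, ki):
    """Return a stateful PI corrector over the prediction residual."""
    state = {"integral": 0.0}
    def step(predicted, measured):
        residual = measured - predicted
        state["integral"] += residual
        return kp * residual + ki * state["integral"]
    return step

pi = make_pi_compensator(kp=0.5, ki=0.1)
c1 = pi(predicted=40.0, measured=42.0)  # residual +2 -> correction 1.2
c2 = pi(predicted=41.0, measured=41.0)  # residual 0, but integral persists
```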
(This article belongs to the Special Issue Recent Advances in Fuel Cell Technology and Its Application Process)

24 pages, 1855 KB  
Article
Fairness-Aware Optimization in Spatio-Temporal Epidemic Data Mining: A Graph-Augmented Temporal Fusion Transformer
by Saleh Albahli
Mathematics 2026, 14(7), 1179; https://doi.org/10.3390/math14071179 - 1 Apr 2026
Viewed by 274
Abstract
Modeling the complex spatio-temporal dynamics of infectious diseases presents a significant computational challenge due to heterogeneous regional interactions, high-dimensional multimodal data streams, and the critical need for algorithmic fairness. This paper proposes a novel computational framework that unifies graph-based spatio-temporal forecasting, anomaly detection, and retrieval-augmented generation (RAG) into a single mathematical architecture. The predictive backbone employs a graph-augmented Temporal Fusion Transformer to capture non-linear temporal dependencies and spatial disease propagation. By formalizing regional topology and mobility flows as a weighted mathematical graph, the model systematically integrates structured epidemiological counts, continuous environmental covariates, and digital trace signals. To address algorithmic bias, we formulate a fairness-aware optimization problem by embedding a specific regularization term into the training objective, which mathematically penalizes disparities in true-positive rates across diverse socio-demographic strata. Furthermore, the numerical outputs and anomaly scores are processed by a large language model equipped with hybrid dense and sparse retrieval to generate interpretable, computationally grounded decision support. Extensive experiments on a longitudinal dataset comprising 62 administrative regions over 260 weeks validate the mathematical robustness of the proposed framework. The graph-augmented architecture improved forecasting accuracy by up to 24% and anomaly detection F1 scores by over 6% compared to state-of-the-art deep learning baselines, while the fairness-regularized loss function reduced the maximum subgroup recall gap by more than 50%. These findings demonstrate that predictive accuracy and algorithmic fairness can be jointly optimized, providing a rigorous computational methodology for spatio-temporal epidemic modeling and AI-driven surveillance. Full article
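The fairness term penalizes disparities in true-positive rates (recall) across socio-demographic strata. A minimal sketch of that quantity: compute per-group TPR and the maximum recall gap. In the paper this enters a differentiable training objective; here it is shown only as a plain evaluation metric, with illustrative data.

```python
def tpr_by_group(y_true, y_pred, groups):
    """Per-group true-positive rate (recall) for binary labels."""
    rates = {}
    for g in set(groups):
        tp = sum(1 for t, p, gg in zip(y_true, y_pred, groups)
                 if gg == g and t == 1 and p == 1)
        pos = sum(1 for t, gg in zip(y_true, groups) if gg == g and t == 1)
        rates[g] = tp / pos if pos else 0.0
    return rates

def max_recall_gap(rates):
    """The subgroup disparity the fairness regularizer penalizes."""
    vals = list(rates.values())
    return max(vals) - min(vals)

rates = tpr_by_group(
    y_true=[1, 1, 0, 1, 1, 0],
    y_pred=[1, 0, 0, 1, 1, 1],
    groups=["a", "a", "a", "b", "b", "b"],
)
gap = max_recall_gap(rates)  # group a: 1/2, group b: 2/2 -> gap 0.5
```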

12 pages, 1514 KB  
Article
A Spatio-Temporal Dependency Modeling and Key Node Radiation-Based Method for Ultra-Short-Term Wind Farm Power Prediction Using GAT-TCN
by Shujun Liu, Tao Zhou, Xiaoze Du, Jiangbo Wu and Yiting He
Energies 2026, 19(7), 1710; https://doi.org/10.3390/en19071710 - 31 Mar 2026
Viewed by 294
Abstract
Deep learning has become an important tool for wind power forecasting because it can help improve wind energy utilization and support reliable grid-connected operation. For wind farms, accurate turbine-level forecasting depends on spatial interactions among turbines and temporal evolution of historical operating data. In this study, a spatio-temporal forecasting framework is developed by combining a Graph Attention Network with a Temporal Convolutional Network. The graph attention module describes the neighborhood relations among turbines and learns their influence strengths adaptively, while the temporal convolution module extracts temporal patterns from multivariate SCADA sequences for multi-step prediction. On this basis, the learned attention weights are further used to define a node influence metric. This makes it possible to identify a small set of key turbines and use only their historical data to predict the future power output of the whole wind farm. The proposed framework is evaluated using one year of SCADA data from 134 turbines. A sliding-window dataset is constructed, and the model is tested on the training, validation, and test sets. The results show that the method can capture the spatio-temporal dependencies within the wind farm and still provide effective farm-wide forecasting when only limited observation nodes are available. The value of this work lies in organizing existing techniques around a practical wind farm forecasting task and in providing an interpretable prediction strategy based on key turbine selection, rather than in proposing a fundamentally new theoretical model. Full article
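The key-turbine selection step can be sketched as follows: aggregate the attention that all turbines pay to each node into an influence score, then keep the top-k nodes as observation turbines. The toy attention matrix and the "sum of received attention" scoring are assumptions standing in for the GAT's learned weights.

```python
def node_influence(attention):
    """attention[i][j] = weight node i assigns to neighbor j.
    A node's influence is the total attention it receives from all nodes."""
    n = len(attention)
    return [sum(attention[i][j] for i in range(n)) for j in range(n)]

def key_nodes(attention, k):
    """Indices of the k most influential nodes (the 'key turbines')."""
    scores = node_influence(attention)
    return sorted(range(len(scores)), key=lambda j: scores[j], reverse=True)[:k]

A = [
    [0.0, 0.7, 0.3],
    [0.6, 0.0, 0.4],
    [0.8, 0.2, 0.0],
]
top = key_nodes(A, k=1)  # node 0 receives the most attention (0.6 + 0.8)
```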

26 pages, 4761 KB  
Article
A CNN–LSTM Framework for Player-Specific Baseball Pitch Type Prediction from Video Sequences
by Chin-Chih Chang, Chi-Hung Wei, Hao-Chen Li and Sean Hsiao
Appl. Syst. Innov. 2026, 9(4), 75; https://doi.org/10.3390/asi9040075 - 30 Mar 2026
Viewed by 368
Abstract
The performance of the pitcher is the cornerstone of baseball, often determining the flow and ultimate outcome of a game. Given this centrality, understanding the mechanics of an elite pitcher and decoding their strategies are paramount for both internal optimization and competitive scouting. This study proposes an end-to-end deep learning pipeline for automatically classifying five distinct pitch types from raw broadcast footage of MLB pitcher Max Scherzer between 2015 and 2020. By formulating pitch delivery as a time-series classification problem tailored to the unique biomechanics of an elite athlete, the proposed CNN–LSTM framework integrates per-frame spatial feature extraction using an advanced CNN backbone (YOLOv8s-cls) with a two-layer long short-term memory (LSTM) network to capture subtle biomechanical cues across a standardized 20-frame delivery sequence. While skeletal pose estimation primarily focuses on tracking major joints to analyze standard pitching mechanics, the proposed pixel-based method preserves fine-grained visual cues—such as finger grip and wrist rotation—that are critical for distinguishing pitch variations. The proposed framework achieved an accuracy of 91.8% under a standard Random Split and, importantly, 84.5% under a strict Chronological Split across different seasons, validating the feasibility of automated pitch “tell” detection from broadcast video. The resulting system provides coaches and analysts with an objective, data-driven tool for generating personalized scouting reports, identifying mechanical inconsistencies, and refining pitching strategies. Full article
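The Random vs. Chronological Split comparison above can be sketched like this: a chronological split trains on earlier seasons and holds out the most recent ones, which is the stricter protocol for pitch "tells" that must generalize across time. Field names and the sample data are illustrative assumptions.

```python
def chronological_split(samples, test_fraction=0.2):
    """Sort by season and hold out the most recent fraction as the test set."""
    ordered = sorted(samples, key=lambda s: s["season"])
    cut = int(len(ordered) * (1 - test_fraction))
    return ordered[:cut], ordered[cut:]

samples = [{"season": y, "label": "FF"}
           for y in (2015, 2016, 2017, 2018, 2019, 2020) for _ in range(2)]
train, test = chronological_split(samples, test_fraction=0.25)
# 12 samples -> 9 train (2015-2019) and 3 test (2019-2020)
```

Unlike a random split, every test pitch here post-dates (or ties) every training pitch, so the evaluation cannot leak within-season delivery cues.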

33 pages, 8263 KB  
Article
Semantic Graphs of Learning Activities from LLM Embeddings: A Lightweight and Explainable Approach for Smart Learning Systems
by Javier García-Sigüenza, Alberto Real-Fernández, Faraón Llorens-Largo, Jose F. Vicent and Rafael Molina-Carmona
Electronics 2026, 15(7), 1414; https://doi.org/10.3390/electronics15071414 - 28 Mar 2026
Viewed by 301
Abstract
Smart learning systems are designed to analyze the context, needs, and progress of each student. These are becoming increasingly common, but they present challenges, such as predicting student performance and automatically managing learning activities. In this context, Large Language Models (LLMs) can be useful, as they are capable of understanding word relationships and analyzing their context. They are often associated with chatbots, which are computationally expensive, thereby complicating their integration. Instead, in this work, we propose to leverage the capabilities of LLMs through a semantic graph of activities created from sentence embeddings. This representation is a lightweight and explainable alternative. On the one hand, it requires a lower computational cost. On the other hand, it allows us to observe which activities are most similar directly. On this basis, we propose two problems to validate our proposal. In the first, we use the graph to classify new activities. In the second, we extend this representation with the temporal dimension to formulate a spatio-temporal problem and predict student performance. The results show that the semantic graph not only provides an accurate representation for the organization and classification of activities, but also offers practical advantages and improves explainability. Full article
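Building a semantic graph from activity embeddings can be sketched as thresholded cosine similarity: connect two activities when their embedding vectors are similar enough. The embeddings below are toy vectors and the threshold is an assumption; in the paper they come from an LLM sentence encoder.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def semantic_graph(embeddings, threshold=0.8):
    """Edges between activities whose embedding similarity exceeds threshold."""
    ids = list(embeddings)
    return [(a, b) for i, a in enumerate(ids) for b in ids[i + 1:]
            if cosine(embeddings[a], embeddings[b]) >= threshold]

emb = {
    "quiz_fractions": [1.0, 0.1, 0.0],
    "quiz_decimals":  [0.9, 0.2, 0.1],
    "essay_history":  [0.0, 1.0, 0.9],
}
edges = semantic_graph(emb, threshold=0.8)
# only the two quiz activities are linked; the edge list itself is the
# explainable artifact: one can read off which activities are most similar
```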

30 pages, 8163 KB  
Article
SDGR-Net: A Spatiotemporally Decoupled Gated Residual Network for Robust Multi-State HDD Health Prediction
by Zehong Wu, Jinghui Qin, Yongyi Lu and Zhijing Yang
Electronics 2026, 15(7), 1399; https://doi.org/10.3390/electronics15071399 - 27 Mar 2026
Viewed by 276
Abstract
Accurate prediction of hard disk drive (HDD) health states is critical for enabling proactive data maintenance and ensuring data reliability in large-scale data centers. However, conventional models often suffer from semantic entanglement among heterogeneous SMART attributes and from the masking of incipient failure signatures by stochastic noise. To address these challenges, we propose SDGR-Net, a spatiotemporally decoupled learning framework designed to model the complex degradation dynamics of HDDs. SDGR-Net introduces three synergistic innovations: (1) a spatiotemporally decoupled dual-branch encoder that disentangles longitudinal temporal evolution from cross-variable correlations via parameter-isolated branches, thereby reducing representational interference; (2) a parsimonious dual-view temporal extraction mechanism that captures early-stage anomalies through forward–reverse sequence concatenation, enabling high-fidelity preservation of non-stationary pre-failure patterns; and (3) a cross-branch dynamic gated residual fusion module that functions as an adaptive information bottleneck to emphasize failure-critical features while suppressing redundant noise. Extensive experiments conducted on three heterogeneous HDD datasets, ST4000DM000, HUH721212ALN604, and MG07ACA14TA, demonstrate that SDGR-Net consistently outperforms six state-of-the-art baselines. In particular, SDGR-Net achieves a peak fault detection rate (FDR) of 0.9898 and a 69.6% relative reduction in false alarm rate (FAR) under high-reliability operating conditions. These results, corroborated by comprehensive ablation studies, indicate that SDGR-Net effectively balances detection sensitivity and operational robustness, offering a practical solution for intelligent HDD health monitoring. Full article
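One plausible reading of the "forward–reverse sequence concatenation" in the dual-view temporal extraction: each SMART window is concatenated with its time-reversed copy, so a single encoder sees both the forward degradation trend and its mirror, which can make incipient anomalies near the window edges easier to pick up. This is an illustrative sketch of that reading, not the authors' code.

```python
def dual_view(window):
    """window: list of per-timestep feature vectors.
    Returns the forward sequence concatenated with its reverse."""
    return window + window[::-1]

w = [[0.1], [0.2], [0.9]]  # a sudden jump at the end of the window
v = dual_view(w)           # the jump now also appears early in the sequence
```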

36 pages, 76230 KB  
Article
Interpretable Adaptive Multiscale Spatiotemporal Network for Long-Term Global Sea Surface Temperature Prediction
by Rixu Hao, Yuxin Zhao and Xiong Deng
Remote Sens. 2026, 18(7), 997; https://doi.org/10.3390/rs18070997 - 26 Mar 2026
Viewed by 306
Abstract
Sea surface temperature (SST) serves as a fundamental driver of ocean–atmosphere interactions and global climate variability, exhibiting strong nonstationarity, multiscale dynamics, and cross-variable coupling. However, current deep learning models often fail to capture these complex characteristics, limiting their ability to support accurate and physically consistent long-term SST prediction. To address these issues, we propose PAMSTnet, a unified deep learning framework for physics-informed adaptive multiscale spatiotemporal prediction. PAMSTnet leverages three-dimensional empirical wavelet transform (3DEWT) to learn interpretable multiscale spatiotemporal dynamics from raw observations, and applies multivariate spatiotemporal empirical orthogonal function (MSTEOF) to identify dominant cross-variable coupled modes. These physically meaningful representations are integrated into a deep ConvLSTM predictive network (DCPN) to support coordinated multiscale dynamical learning. Furthermore, PAMSTnet introduces physics-informed consistency learning (PICL) to enforce thermodynamic and dynamic constraints, enhancing physical consistency and interpretability. Extensive experiments demonstrate that PAMSTnet achieves superior performance against state-of-the-art baselines in long-term global SST prediction, reducing RMSE by 8.1% and improving ACC by 2.8% compared with the best-performing baseline, particularly under extreme climate events. Interpretation insights further highlight PAMSTnet’s adaptive representation of variable contributions and regional physical drivers. These findings position PAMSTnet as a promising paradigm for developing intelligent ocean prediction systems with enhanced physical consistency and interpretability. Full article
(This article belongs to the Section AI Remote Sensing)

22 pages, 4755 KB  
Article
Comparative Assessment of Supervised Machine Learning Models for Predicting Water Uptake in Sorption-Based Thermal Energy Storage
by Milad Tajik Jamalabad, Elham Abohamzeh, Daud Mustafa Minhas, Seongbhin Kim, Dohyun Kim, Aejung Yoon and Georg Frey
Energies 2026, 19(7), 1619; https://doi.org/10.3390/en19071619 - 25 Mar 2026
Viewed by 272
Abstract
In this study, supervised machine learning (ML) regression models are employed to predict water uptake during the sorption process in a sorption reactor for thermal energy storage applications. Two main methods are used to study sorption storage systems: experimental studies and numerical simulations. Experimental studies involve physical testing and measurements but are often costly and time-consuming. Numerical simulations are more flexible and cost-effective, though they can require significant computational resources for large or complex systems. To address these challenges, researchers are increasingly employing various machine learning techniques, which offer strong potential for data analysis and predictive modeling. In this study, CFD-based sorption simulations are integrated with machine learning models to predict the spatiotemporal evolution of water uptake. Several ML techniques, including support vector regression (SVR), Random Forest, XGBoost, CatBoost (gradient boosting decision trees), and multilayer perceptron neural networks (MLPs), are evaluated and compared. A fixed-bed reactor equipped with fins and tubes is considered within a closed adsorption thermal storage system. Numerical simulations are conducted for three different fin lengths (10 mm, 25 mm, and 35 mm) to generate a comprehensive dataset for training the ML models and capturing the complex temporal evolution of water uptake, thereby enabling predictions for unseen fin geometries. The results indicate that neural network-based models achieve superior predictive performance compared to the other methods. For water uptake training, the mean absolute error (MAE), root mean squared error (RMSE), and coefficient of determination (R2) are approximately 2.83, 4.37, and 0.91, respectively. The predicted water uptake shows close agreement with the numerical simulation results. For the prediction cases, the MAE, MSE, and R2 values are approximately 1.13, 1.2, and 0.8, respectively. Overall, the study demonstrates that machine learning models can accurately predict water uptake beyond the training dataset, indicating strong generalization capability and significant potential for improving thermal management system design. Additionally, the proposed approach reduces simulation time and computational cost while providing an efficient and reliable framework for modeling complex sorption processes in thermal energy storage systems.

30 pages, 4192 KB  
Article
Spatio-Temporal Evolution of NPP, Vegetation Characteristics, and Multi-Model, Multi-Scenario Predictions in the Shaanxi Section of the Qinling Mountains, China
by Zhe Li, Xia Li, Guozhuang Zhang and Leyi Zhang
Sustainability 2026, 18(6), 3136; https://doi.org/10.3390/su18063136 - 23 Mar 2026
Viewed by 291
Abstract
The Shaanxi section of the Qinling Mountains serves as a critical ecological transition zone and security barrier between northern and southern China. Monitoring the dynamics of its vegetation Net Primary Productivity (NPP) is essential for understanding regional carbon cycling and informing ecological management strategies. This study integrates three complementary analytical frameworks: the Mann–Kendall test combined with the Theil–Sen slope for linear trend extrapolation (MK-Theil-Sen), mechanistic simulation (CASA model), and machine learning (random forest). First, we analyzed the spatiotemporal evolution of NPP from 2000 to 2023. Then, based on three CMIP6 scenarios (SSP119, SSP245, SSP585), we projected NPP changes for 2030–2050 and compared results across different models and scenarios. The key findings are as follows: ① From 2000 to 2023, NPP in the Shaanxi section of the Qinling Mountains exhibited a fluctuating upward trend with a cumulative increase of 16.7%. Spatially, it showed a pattern of “higher in the south, lower in the north; higher in the west, lower in the east”. ② Multiple models predict continued NPP growth, though the magnitude remains uncertain. Mechanistic models, incorporating climate stress factors, yield relatively conservative projections. ③ Emission scenarios significantly influence future trends, with low-emission pathways (SSP119) favoring NPP enhancement and extended growing seasons. ④ Different vegetation types exhibit varying responses to scenario changes: broadleaf forests show the highest sensitivity, while grasslands and meadows demonstrate strong climate stability across models, with cultivated vegetation exhibiting intermediate sensitivity. This study provides comprehensive scientific references for regional ecological security assessment and adaptive management through historical analysis and multi-model, multi-scenario projections of NPP in the Shaanxi section of the Qinling Mountains. Full article
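The MK-Theil-Sen trend step above relies on the Theil-Sen estimator: the median of all pairwise slopes in the time series, a robust alternative to least squares for noisy NPP records. A minimal stdlib sketch with made-up NPP values:

```python
from statistics import median

def theil_sen_slope(t, y):
    """Median of all pairwise slopes (y[j]-y[i])/(t[j]-t[i]), i < j."""
    slopes = [(y[j] - y[i]) / (t[j] - t[i])
              for i in range(len(t)) for j in range(i + 1, len(t))
              if t[j] != t[i]]
    return median(slopes)

years = [2000, 2001, 2002, 2003, 2004]
npp = [500.0, 510.0, 505.0, 530.0, 540.0]   # illustrative NPP values
slope = theil_sen_slope(years, npp)          # robust trend: 10.0 per year
```

Because the median ignores extreme pairwise slopes, a single anomalous year (e.g. a drought dip) barely moves the estimate, which is why the method pairs well with the Mann-Kendall significance test.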

15 pages, 3549 KB  
Article
Application and Comparison of Two Transformer-Based Deep Learning Models in Short-Term Precipitation Nowcasting
by Chuhan Lu and Qilong Pan
Water 2026, 18(6), 757; https://doi.org/10.3390/w18060757 - 23 Mar 2026
Abstract
Against the background of intensifying global climate change, extreme precipitation events have become increasingly frequent. Improving the accuracy of short-term precipitation nowcasting is therefore essential for disaster prevention and mitigation. Traditional numerical weather prediction (NWP) approaches are constrained by computational latency and errors arising from physical parameterizations, making it difficult to satisfy real-time forecasting requirements at high spatiotemporal resolution. Using the SEVIR dataset, this study conducts a systematic comparison of two Transformer-based deep learning models—Earthformer and LLMDiff—for short-term extreme precipitation nowcasting. Model performance is evaluated using the Critical Success Index (CSI), Probability of Detection (POD), and Success Ratio (SUCR). Results indicate that, for 0–30 min lead times, Earthformer more efficiently captures both local and long-range spatiotemporal dependencies via its Cuboid Attention mechanism and shows a slight advantage for low-intensity precipitation. As the lead time extends to 60 min, LLMDiff demonstrates stronger longer-horizon skill due to its diffusion-based probabilistic modeling and a frozen large language model (LLM) module, which enhance the representation of uncertainty and longer-term evolution of precipitation systems. However, LLMDiff tends to produce a higher false-alarm rate. Overall, Earthformer is better suited for rapid early warning of light precipitation, whereas LLMDiff is more appropriate for high-accuracy nowcasting of heavy precipitation, offering useful insights for intelligent forecasting of extreme weather. Full article
(This article belongs to the Special Issue Analysis of Extreme Precipitation Under Climate Change, 2nd Edition)
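The verification scores used above (CSI, POD, SUCR) all derive from a 2×2 contingency table of threshold-exceedance events. A minimal sketch, with hypothetical values and threshold chosen only for illustration:

```python
def contingency(forecast, observed, threshold):
    """Count hits, misses, and false alarms for a binary exceedance event."""
    hits = misses = false_alarms = 0
    for f, o in zip(forecast, observed):
        fe, oe = f >= threshold, o >= threshold
        hits += fe and oe
        misses += (not fe) and oe
        false_alarms += fe and (not oe)
    return hits, misses, false_alarms

def csi(h, m, fa):
    """Critical Success Index: hits over all event-related outcomes."""
    return h / (h + m + fa) if h + m + fa else 0.0

def pod(h, m, fa):
    """Probability of Detection: fraction of observed events forecast."""
    return h / (h + m) if h + m else 0.0

def sucr(h, m, fa):
    """Success Ratio: fraction of forecast events that verified (1 - FAR)."""
    return h / (h + fa) if h + fa else 0.0

# Toy forecast/observation pairs (hypothetical intensity values).
fc  = [10, 40, 35, 5, 50, 20]
obs = [12, 38, 10, 6, 45, 33]
h, m, fa = contingency(fc, obs, threshold=30)
print(csi(h, m, fa), pod(h, m, fa), sucr(h, m, fa))
```

In practice these scores are computed per pixel over the forecast grid and averaged over cases; the scalar version above shows the definitions.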

16 pages, 729 KB  
Article
Mamba-Based Macro–Micro Spatio-Temporal Model for Traffic Flow Prediction
by Haoning Lv, Fayang Lan and Weijie Xiu
Electronics 2026, 15(6), 1327; https://doi.org/10.3390/electronics15061327 - 23 Mar 2026
Abstract
Traffic flow prediction plays an important role in intelligent transportation systems. However, accurately modeling traffic dynamics remains challenging due to complex temporal correlations and spatial interactions across road networks. In this work, we propose a Mamba-based macro–micro spatio-temporal model for traffic flow prediction. Unlike graph-based approaches that rely on predefined adjacency matrices to model spatial relationships, our method treats sensor nodes as sequence elements and applies Mamba blocks along the spatial dimension. Through the global receptive field of the structured state space model, spatial dependencies are implicitly learned without requiring explicit graph structures. The proposed architecture consists of stacked spatio-temporal blocks, each composed of two Macro Feature Blocks and one Micro Feature Block. The Macro Feature Blocks are designed to capture global temporal dependencies and spatial interactions across all nodes, while the Micro Feature Block focuses on modeling localized spatio-temporal patterns at a finer granularity. Applying structured state space modeling along both the temporal and spatial dimensions thus captures long-range temporal dependencies and global spatial correlations simultaneously. Experiments conducted on four real-world datasets demonstrate that the proposed model achieves competitive or improved performance compared with existing baseline methods under standard evaluation metrics. Full article
(This article belongs to the Special Issue AI Innovations in Smart Transportation)
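The key idea of scanning along the node axis instead of using an adjacency matrix can be illustrated with a toy linear state-space recurrence: each scan output mixes information from all earlier elements, giving a global receptive field along the scan direction. The `ssm_scan` helper and the traffic values below are illustrative stand-ins, not the paper's Mamba block.

```python
def ssm_scan(xs, a=0.9, b=1.0):
    """Linear state-space recurrence h_k = a*h_{k-1} + b*x_k along a sequence.
    Each output depends on all earlier inputs in the scan direction."""
    h, out = 0.0, []
    for x in xs:
        h = a * h + b * x
        out.append(h)
    return out

def spatio_temporal_pass(frames):
    """frames: list over time steps, each a list of per-node readings.
    Temporal scan: run the recurrence along time for each node.
    Spatial scan: run it along the node axis within each time step,
    so spatial dependencies emerge without any adjacency matrix."""
    nodes = list(zip(*frames))                               # node-major view
    temporal = list(zip(*[ssm_scan(seq) for seq in nodes]))  # back to time-major
    return [ssm_scan(frame) for frame in temporal]

# Toy traffic flow: 3 time steps x 4 sensor nodes (hypothetical counts).
flows = [[5.0, 7.0, 6.0, 4.0],
         [6.0, 8.0, 5.0, 3.0],
         [7.0, 9.0, 6.0, 4.0]]
print(spatio_temporal_pass(flows))  # 3 time steps x 4 nodes of mixed features
```

A real Mamba block uses learned, input-dependent state matrices and gating; the fixed scalars here only demonstrate how a scan propagates information across nodes.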
