Search Results (7,276)

Search Parameters:
Keywords = mean square error of prediction

19 pages, 529 KB  
Article
Maturity Prediction and Correlation Analysis of Additive-Treated Cattle and Sheep Manure Composts and Vermicomposts Using Machine Learning Algorithms
by Shno Karimi, Hossein Shariatmadari, Mohammad Shayannejad and Farshid Nourbakhsh
Agriculture 2026, 16(8), 834; https://doi.org/10.3390/agriculture16080834 (registering DOI) - 9 Apr 2026
Abstract
Accurate prediction of compost maturity is vital for ensuring quality, safety, minimum substrate weight loss and agronomic performance of compost products. In this study, eight supervised machine learning (ML) classification models including Random Forest, Logistic Regression, Decision Tree, Gaussian and Multinomial Naive Bayes, K-Nearest Neighbors, Support Vector Machine, and AdaBoost were systematically evaluated for their ability to predict compost maturity using three key indicators: cation exchange capacity (CEC), carbon to nitrogen ratio (C/N), and humic acid (HA) content. A dataset comprising 756 samples (4 composting/vermicomposting systems × 7 treatments × 9 time points × 3 replicates) was generated. To reduce replicate-induced variability and ensure robust machine learning analysis, triplicates were averaged at each time point, resulting in 252 effective observations used for model development. Pearson correlation and heatmap analysis indicated strong interdependencies among CEC, HA, total nitrogen (TN) and organic matter (OM) content, confirming their collective utility in compost maturity classification. Model performance was assessed based on classification metrics (accuracy, precision, recall, F1-score) and regression-based error indicators, including mean absolute error (MAE), mean squared error (MSE), root mean squared error (RMSE), and coefficient of determination (R2). Ensemble models, particularly RF and AdaBoost, showed the highest predictive accuracy (up to 0.98) and lowest error rates (e.g., MAE < 0.05, RMSE < 0.1, R2 > 0.95) when predicting CEC and C/N-based maturity classes. HA-based predictions showed slightly lower precision and higher variance across models. Full article
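The regression-based error indicators this abstract lists (MAE, MSE, RMSE, R2) recur throughout these search results and are standard definitions; a minimal sketch, using illustrative toy values rather than any paper's data:

```python
# Standard regression error metrics: MAE, MSE, RMSE, and the coefficient of
# determination R^2. Toy inputs below are illustrative, not the compost dataset.
import math

def regression_metrics(y_true, y_pred):
    n = len(y_true)
    residuals = [t - p for t, p in zip(y_true, y_pred)]
    mae = sum(abs(r) for r in residuals) / n          # mean absolute error
    mse = sum(r * r for r in residuals) / n           # mean squared error
    rmse = math.sqrt(mse)                             # root mean squared error
    mean_y = sum(y_true) / n
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)   # total sum of squares
    ss_res = sum(r * r for r in residuals)            # residual sum of squares
    r2 = 1.0 - ss_res / ss_tot
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "R2": r2}

metrics = regression_metrics([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
```

Thresholds like "MAE < 0.05, RMSE < 0.1, R2 > 0.95" in the abstract are simply these quantities evaluated on held-out predictions.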

32 pages, 6350 KB  
Article
Mixed Forecast of Air Quality Index with a Bibranch Parallel Architecture Considering Seasonal Heterogeneity
by Huibin Zeng, Ying Liu, Hongbin Dai, Xue Zhao and Ning Tian
Entropy 2026, 28(4), 419; https://doi.org/10.3390/e28040419 - 9 Apr 2026
Abstract
Accurate prediction of the air quality index (AQI) is crucial for understanding urban pollution dynamics and protecting public health. This study proposes a dual-branch fusion framework (CL-XGB-Season) to address seasonal heterogeneity in AQI prediction by integrating temporal dynamic features and static patterns. The CNN-LSTM branch captures short-term temporal fluctuations, while a seasonally split XGBoost branch fits long-term static patterns via independent submodels for spring, summer, autumn, and winter. SHAP-based interpretability analysis revealed the dominant drivers across different seasons: the “temperature × O3” interaction feature plays a key role in summer, characterizing the ozone formation mechanism dominated by photochemical reactions under conditions of high temperature and strong solar radiation; whereas the PM2.5/PM10 ratio is crucial in winter (where pollution is primarily driven by pollutant accumulation). The dual-branch fusion framework was validated using hourly resolution data from Chongqing for the 2020–2025 period. Results indicate that the framework achieved a normalized root mean square error (nRMSE) of 0.197 and a coefficient of determination (R2) of 0.9611 on the test set, outperforming eight ablation variants and five baseline models (ARIMA, Transformer, etc.) in comparative experiments. Ablation studies confirm the necessity of dual branches and seasonal modeling, with the full model reducing nRMSE by 19–63% versus single-model variants. This framework maintains stable seasonal performance and provides actionable insights for targeted air quality management. Full article

17 pages, 834 KB  
Article
Improved Data-Driven Shrinkage Estimators for Regression Models Under Severe Multicollinearity
by Ali Rashash R. Alzahrani and Asma Ahmad Alzahrani
Mathematics 2026, 14(8), 1245; https://doi.org/10.3390/math14081245 - 9 Apr 2026
Abstract
Multicollinearity is a critical issue in regression analysis, often resulting in inflated variances and unstable parameter estimates. Ridge regression is a widely adopted solution to address this challenge; however, existing ridge estimators are typically tailored to specific scenarios, limiting their universal applicability. Akhtar and Alharthi developed ridge estimators based on condition-adjusted ridge estimators (CAREs) to handle severe multicollinearity issues. However, their approach did not account for the error variances in the estimation process. In this study, we propose improvements to these CAREs by incorporating error variances, resulting in the development of multiscale ridge estimators (MSRE1, MSRE2, MSRE3 and MSRE4) that more effectively address the challenges posed by severe multicollinearity. We compare the performance of our newly proposed estimators with ordinary least squares (OLS) and other existing ridge estimators using both simulation studies and real-life datasets. The evaluation, based on estimated mean squared error (MSE), demonstrates that the proposed estimators consistently outperform existing methods, particularly in scenarios with significant multicollinearity, larger sample sizes, and higher predictor dimensions. Results from three real-life datasets further validate the proposed estimators’ ability to reduce estimation error and improve predictive accuracy across diverse practical applications. Full article
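The core mechanism behind any ridge estimator, including the CAREs and MSREs discussed above, is shrinkage: beta_k = (X'X + kI)^(-1) X'y, which stabilizes coefficients when X'X is nearly singular. A minimal two-predictor sketch of plain ridge regression, not the paper's proposed estimators (whose k-selection formulas are given there); the simulated data and ridge constant k are illustrative assumptions:

```python
# Plain ridge regression with two nearly collinear predictors, solved with an
# explicit 2x2 inverse. Data are simulated; k = 5.0 is an arbitrary choice.
import math
import random

random.seed(0)
n = 200
z = [random.gauss(0, 1) for _ in range(n)]
x1 = z
x2 = [v + 0.01 * random.gauss(0, 1) for v in z]   # severe multicollinearity
y = [a + b + 0.1 * random.gauss(0, 1) for a, b in zip(x1, x2)]

def ridge2(x1, x2, y, k):
    # Entries of X'X + kI and X'y for the two-predictor case.
    s11 = sum(a * a for a in x1) + k
    s22 = sum(b * b for b in x2) + k
    s12 = sum(a * b for a, b in zip(x1, x2))
    t1 = sum(a * c for a, c in zip(x1, y))
    t2 = sum(b * c for b, c in zip(x2, y))
    det = s11 * s22 - s12 * s12
    return ((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)

b_ols = ridge2(x1, x2, y, 0.0)   # k = 0 recovers OLS
b_rdg = ridge2(x1, x2, y, 5.0)   # k > 0 shrinks the coefficient vector

def norm(b):
    return math.hypot(b[0], b[1])
```

Under collinearity the OLS coefficients are individually unstable, yet their sum (the well-identified direction) stays near the true combined slope of 2; ridge trades a little bias for a much smaller coefficient norm.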
(This article belongs to the Special Issue Statistical Machine Learning: Models and Its Applications)

23 pages, 557 KB  
Article
A Multi-Stage Decomposition and Hybrid Statistical Framework for Time Series Forecasting
by Swera Zeb Abbasi, Mahmoud M. Abdelwahab, Imam Hussain, Moiz Qureshi, Moeeba Rind, Paulo Canas Rodrigues, Ijaz Hussain and Mohamed A. Abdelkawy
Axioms 2026, 15(4), 273; https://doi.org/10.3390/axioms15040273 - 9 Apr 2026
Abstract
Modeling and forecasting nonstationary and nonlinear economic time series remain fundamentally challenging due to structural breaks, volatility clustering, and noise contamination that distort the intrinsic stochastic structure. To address these limitations, this study proposes a novel three-stage hybrid statistical framework that systematically integrates multi-level signal decomposition with structured parametric modeling to enhance predictive accuracy. The proposed hybrid architectures—EMD–EEMD–ARIMA, EMD–EEMD–GMDH, and EMD–EEMD–ETS—employ a hierarchical decomposition–reconstruction strategy before forecasting. In the first stage, Empirical Mode Decomposition (EMD) decomposes the observed series into intrinsic mode functions (IMFs) and a residual component. In the second stage, Ensemble Empirical Mode Decomposition (EEMD) is applied to further refine the extracted components, mitigating mode mixing and improving signal separability. In the final stage, each reconstructed component is modeled using ARIMA, Exponential Smoothing State Space (ETS), and Group Method of Data Handling (GMDH) frameworks, and the individual forecasts are aggregated to obtain the final prediction. Empirical evaluation based on a recursive one-step-ahead forecasting scheme demonstrates consistent numerical improvements across all standard accuracy measures. In particular, the proposed EMD–EEMD–ARIMA model achieves the lowest forecasting error, reducing the root-mean-square error (RMSE) by approximately 6–7% relative to the best-performing single-stage model and by about 3–4% relative to the two-stage EMD-based hybrids. Similar improvements are observed in mean squared error (MSE), mean absolute error (MAE), and mean absolute percentage error (MAPE), indicating enhanced stability and robustness of the three-stage architecture. 
The results provide strong numerical evidence that multi-level decomposition combined with structured statistical modeling yields superior predictive performance for complex nonlinear and nonstationary time series. The proposed framework offers a mathematically coherent, computationally tractable, and systematically structured hybrid modeling strategy that effectively integrates noise-assisted decomposition with parametric and data-driven forecasting techniques. Full article
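The "recursive one-step-ahead forecasting scheme" used for the empirical evaluation above is easy to sketch: at each step the model is refit on all data up to time t and asked to predict y_t. A minimal version with trivial stand-in forecasters (the real EMD–EEMD hybrids are far richer; the toy series and models here are assumptions for illustration):

```python
# Recursive one-step-ahead evaluation: grow the training window by one
# observation per step, predict the next point, accumulate errors into RMSE.
import math

def one_step_ahead_rmse(series, forecaster, start):
    errors = []
    for t in range(start, len(series)):
        history = series[:t]           # refit window grows each step
        pred = forecaster(history)
        errors.append(series[t] - pred)
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def naive_last(h):                     # random-walk forecast: repeat last value
    return h[-1]

def mean_fc(h):                        # in-sample mean forecast
    return sum(h) / len(h)

trend = [0.5 * t for t in range(20)]   # toy upward-trending series
rmse_naive = one_step_ahead_rmse(trend, naive_last, start=5)
rmse_mean = one_step_ahead_rmse(trend, mean_fc, start=5)
```

On a trending series the naive forecaster beats the mean forecaster, illustrating why RMSE comparisons under a fixed evaluation scheme, as in the 6–7% reductions reported above, are the right way to rank candidate models.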

22 pages, 3510 KB  
Article
Nondestructive Detection of Eggshell Thickness Using Near-Infrared Spectroscopy Based on GBDT Feature Selection and an Improved CatBoost Algorithm
by Ziqing Li, Ying Ji, Changheng Zhao, Dehe Wang and Rongyan Zhou
Foods 2026, 15(8), 1286; https://doi.org/10.3390/foods15081286 - 8 Apr 2026
Abstract
Eggshell thickness is a critical indicator for evaluating egg breakage resistance and hatchability, yet traditional measurement methods remain destructive and inefficient. To address this, this study proposes a robust prediction approach by integrating Gradient Boosting Decision Tree (GBDT) feature optimization with an improved CatBoost algorithm. First, a joint strategy of Standard Normal Variate (SNV) and Multiplicative Scatter Correction (MSC) was employed to eliminate spectral scattering noise and enhance organic matrix fingerprint information. Subsequently, GBDT was introduced for nonlinear feature evaluation to adaptively screen the top 50 wavelengths, effectively mitigating the “curse of dimensionality” and multicollinearity in full-spectrum data. A CatBoost regression model was then constructed using an Ordered Boosting mechanism, supported by a dual anti-overfitting strategy that merged 10-fold nested cross-validation with Bootstrap resampling. Experimental results demonstrate that this method significantly outperforms traditional algorithms in both prediction accuracy and generalization. The coefficients of determination (R2) for the calibration and prediction sets reached 0.930 and 0.918, respectively, with a root mean square error of prediction (RMSEP) of 0.008 mm. Residual analysis confirms that prediction errors follow a zero-mean Gaussian distribution, indicating that systematic bias was effectively eliminated. This research provides a reliable theoretical foundation and technical support for the intelligent grading of poultry egg quality. Full article
(This article belongs to the Section Food Analytical Methods)

26 pages, 2327 KB  
Article
Prediction of Ship Estimated Time of Arrival Based on BO-CNN-LSTM Model
by Qiong Chen, Zhipeng Yang, Jiaqi Gao, Yui-yip Lau and Pengfei Zhang
J. Mar. Sci. Eng. 2026, 14(8), 694; https://doi.org/10.3390/jmse14080694 - 8 Apr 2026
Abstract
Accurate prediction of a ship’s Estimated Time of Arrival (ETA) is of great significance for port scheduling, logistics management, and navigation safety. Traditional ETA prediction approaches often rely on manual experience for parameter tuning, which tends to be inefficient and susceptible to subjective factors. To address this issue and improve prediction accuracy, this study proposes a hybrid modeling framework, integrating Bayesian Optimization (BO), Convolutional Neural Networks (CNNs), and Long Short-Term Memory (LSTM) networks. In this approach, Automatic Identification System (AIS) data is leveraged to predict the total voyage duration before departure, thereby deriving the vessel’s ETA. The model, referred to as BO-CNN-LSTM, utilizes BO for automatic hyperparameter tuning, employs a CNN for extracting local features, and applies an LSTM network to capture temporal dependencies. The model is developed using a dataset of 32,972 distinct voyage records, among which 23,947 are retained as valid samples after data cleaning. Pearson correlation analysis is conducted to select key input variables, including navigation speed, ship type, sailing distance, and deadweight tonnage. Additionally, sailing distance is processed using the Ramer–Douglas–Peucker algorithm. Experimental evaluation indicates that the BO-CNN-LSTM model achieves a coefficient of determination of 0.987, along with a mean absolute error and root mean square error of 6.078 and 8.730, respectively. These results significantly outperform comparison models such as CNN, LSTM, CNN-LSTM, random forest, AdaBoost, and Elman neural networks. Overall, this study validates the effectiveness and superiority of the proposed BO-CNN-LSTM model in ship ETA prediction, providing an efficient and effective prediction solution for intelligent maritime transportation systems. Full article
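The Pearson-correlation screening step mentioned above (used to select input variables such as sailing distance) can be sketched in a few lines; the variable names and toy values below are hypothetical illustrations, not the paper's AIS data:

```python
# Pearson correlation coefficient for screening candidate input variables.
# A candidate strongly correlated with the target is kept as a model input.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

sailing_distance = [100.0, 220.0, 150.0, 300.0, 180.0]   # hypothetical, nmi
voyage_hours = [10.5, 22.0, 15.8, 31.0, 18.2]            # hypothetical target
r = pearson_r(sailing_distance, voyage_hours)
```

A value of r near ±1 indicates a strong linear relationship; features with negligible |r| against voyage duration would be dropped before training.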
(This article belongs to the Section Ocean Engineering)
27 pages, 4791 KB  
Article
Combining Fast Orthogonal Search with Deep Learning to Improve Low-Cost IMU Signal Accuracy
by Jialin Guan, Eslam Mounier, Umar Iqbal and Michael J. Korenberg
Sensors 2026, 26(8), 2300; https://doi.org/10.3390/s26082300 - 8 Apr 2026
Abstract
Inertial measurement units (IMUs) in low-cost navigation systems suffer from significant drift and noise errors due to sensor biases, scale factor instability, and nonlinear stochastic noise. This paper proposes a hybrid error compensation approach that combines Fast Orthogonal Search (FOS), a nonlinear system identification technique, with deep Long Short-Term Memory (LSTM) neural networks to improve IMU signal accuracy in GNSS-denied navigation. The FOS algorithm efficiently models deterministic error patterns (such as bias drift and scale factor errors) using a small training dataset, while the LSTM learns the IMU’s complex time-dependent error dynamics from much longer training data. In the proposed method, FOS is first used to predict the output of a high-end IMU based on that of a low-end IMU, and the trained FOS model is then used to extend the training data for an LSTM-based predictor. We demonstrate the efficacy of this FOS–LSTM hybrid on real vehicular IMU data by training with a limited segment of high-precision reference measurements and testing on extended operation periods. The hybrid model achieves high predictive accuracy for predicting the high-end signal based on the low-end signal, with a mean squared error below 0.1% and yields more stable velocity estimates than models using FOS or LSTM alone. Although long-term position drift is not fully eliminated, the proposed method significantly reduces short-term uncertainty in the inertial solution. These results highlight a promising synergy between model-based system identification and data-driven learning for sensor error calibration in navigation systems. Key contributions include FOS-based pseudo-label bootstrapping for data-efficient LSTM training and a navigation-level evaluation illustrating how signal correction impacts dead reckoning drift. Full article

23 pages, 3355 KB  
Article
Fracture Pressure Prediction for Tight Conglomerate Reservoirs with Analysis of Acid Pretreatment Influence
by Yue Wang, Qinghua Cheng, Jianchao Li, Yunwei Kang, Hui Liu, Qian Wei, Dali Guo and Zixi Guo
Processes 2026, 14(8), 1192; https://doi.org/10.3390/pr14081192 - 8 Apr 2026
Abstract
Tight conglomerate reservoirs are characterized by strong heterogeneity, significant in-situ stress differences, and unbalanced fracturing stimulation, which make fracture pressure prediction challenging and severely restrict the effectiveness of reservoir stimulation and ultimate recovery. Although acid pretreatment is an effective means to reduce fracture pressure, its quantitative relationship with fracture pressure remains unclear. There is an urgent need to establish a systematic method that integrates reservoir heterogeneity characterization, data augmentation, and intelligent prediction. Aiming at the tight conglomerate reservoir in the MH Block, this study proposes an intelligent fracture pressure prediction and acid pretreatment optimization method that integrates Self-Organizing Maps (SOMs), Generative Adversarial Networks (GANs), and Transformer models. First, SOM is used to perform unsupervised clustering of logging parameters to identify different geological feature categories and achieve fine-scale characterization of reservoir heterogeneity. Second, to address the issue of limited samples within each cluster, GAN is employed for high-quality data augmentation to expand the training sample set. Finally, a fracture pressure prediction model is constructed based on the Transformer architecture, and the influence of acid treatment parameters on fracture pressure is quantitatively analyzed using the SHAP method and laboratory experiments. The results show that the proposed model achieves a coefficient of determination (R2) of 0.93, a root mean square error (RMSE) of 2.38 MPa, and a mean absolute percentage error (MAPE) of 2.02% on the test set, with prediction accuracy significantly outperforming benchmark models such as BPNN, XGBoost, and LSTM. Ablation experiments verify that both the SOM clustering and GAN augmentation modules effectively enhance model performance. 
Analysis of acid treatment parameters indicates that hydrofluoric acid (HF) concentration is the dominant factor influencing fracture pressure reduction, and the mud acid system exhibits a stronger synergistic effect compared to the single hydrochloric acid system. Reasonable optimization of acid concentration and dosage can significantly reduce fracture pressure (3.14–5.28 MPa). This method provides a theoretical basis and engineering guidance for accurate fracture pressure prediction and optimal design of acid pretreatment in tight conglomerate reservoirs. Full article
(This article belongs to the Section Petroleum and Low-Carbon Energy Process Engineering)

23 pages, 4282 KB  
Article
FPGA-Accelerated Machine Learning for Computational Environmental Information Processing in IoT-Integrated High-Density Nanosensor Networks
by Alaa Kamal Yousif Dafhalla, Fawzia Awad Elhassan Ali, Asma Ibrahim Gamar Eldeen, Ikhlas Saad Ahmed, Ameni Filali, Amel Mohamed essaket Zahou, Amal Abdallah AlShaer, Suhier Bashir Ahmed Elfaki, Rabaa Mohammed Eltayeb and Tijjani Adam
Information 2026, 17(4), 354; https://doi.org/10.3390/info17040354 - 8 Apr 2026
Abstract
This study presents a nanosensor network system for autonomous microclimate optimization in precision horticulture, leveraging a field-programmable gate array (FPGA)-based control architecture that is integrated with edge-level machine learning inference. Unlike conventional greenhouse automation systems, which exhibit thermal and hygroscopic hysteresis often exceeding 32 °C and 78% relative humidity, the proposed framework embeds a random forest regression (RFR) model directly within the Altera DE2-115 FPGA fabric to enable predictive environmental regulation. The model achieved an R2 of 0.985 and root mean square error (RMSE) of 0.28 °C, allowing proactive compensation for the thermodynamic disturbances from the high-intensity light-emitting diode (LED) lighting with a 120 s predictive horizon. Real-time monitoring and remote supervision were supported via a NodeMCU-based IoT gateway, achieving a 140 ms mean communication latency and a 99.8% packet delivery reliability. Preliminary validation using lettuce (Lactuca sativa) optimized the environmental parameters, while the subsequent experiments with pepper (Capsicum annuum), a commercially important and environmentally sensitive crop, demonstrated system performance under real-world conditions. The control system maintained temperature and humidity within ±0.3 °C and ±1.2% of the setpoints, respectively, and outperformed the baseline rule-based control with a 28% increase in fresh biomass, a 22% improvement in dry matter accumulation, a 25% reduction in actuator duty-cycle switching, and an 18% decrease in overall energy consumption. These results highlight the efficacy of FPGA-integrated edge intelligence combined with low-latency IoT telemetry as a scalable, energy-efficient, and high-fidelity solution for sub-degree environmental control in next-generation, controlled-environment, and vertical farming systems. Full article

13 pages, 4072 KB  
Proceeding Paper
Development of Static and Dynamic Sensor Node Energy Level Model for Different Wireless Communication Technologies
by Zoren Mabunga, Jennifer Dela Cruz and Reggie Cobarrubia Gustilo
Eng. Proc. 2026, 134(1), 33; https://doi.org/10.3390/engproc2026134033 - 8 Apr 2026
Abstract
WSN node energy forecasting contributes to improving network efficiency, extending network lifespan, and providing energy management strategies. In this study, a deep-learning-based wireless sensor network (WSN) node energy forecasting model based on Long Short-Term Memory (LSTM) and stacked-LSTM was developed across different wireless communication technologies in both static and dynamic WSN setups. The performance of the deep-learning-based models was compared with traditional forecasting techniques such as Exponential Smoothing and Prophet. The results showed the superiority of LSTM and stacked-LSTM in terms of root mean square error and mean absolute error, with consistently lower values compared with the traditional forecasting techniques. The results also show that the models perform best with Long Range technology. The deep-learning-based models also demonstrate their ability to perform well in both static and dynamic WSN scenarios. These results demonstrate the potential of deep-learning-based models in WSN node energy management, which can result in optimal energy efficiency and a prolonged network lifetime. Future research is needed to explore hybrid approaches to further improve the prediction performance of deep-learning-based models by combining their strengths with statistical or traditional forecasting techniques. Full article

25 pages, 4741 KB  
Article
An Edge-Enabled Predictive Maintenance Approach Based on Anomaly-Driven Health Indicators for Industrial Production Systems
by Bouzidi Lamdjad and Adem Chaiter
Algorithms 2026, 19(4), 286; https://doi.org/10.3390/a19040286 - 8 Apr 2026
Abstract
This study develops a data-driven framework for predictive maintenance and prognostic health management in industrial systems using edge-enabled predictive algorithms. The objective is to support early identification of abnormal operating conditions and improve maintenance decision making under real production environments. The proposed approach combines edge-level monitoring, anomaly detection, and predictive modeling to analyze operational signals and estimate system health conditions from high-frequency industrial data. Empirical validation was conducted using operational datasets collected from two industrial production facilities between 2024 and 2025. The model evaluates patterns associated with operational instability and degradation-related anomalies and translates them into interpretable health indicators that can support proactive intervention. The empirical results show strong predictive performance, with R2 reaching 0.989, a mean absolute percentage error of 3.67%, and a root mean square error of 0.79. In addition, the mitigation of early anomaly signals was associated with an observed improvement of approximately 3.99% in system stability. Unlike many existing studies that treat anomaly detection, predictive modeling, and prognostic analysis as separate tasks, the proposed framework connects these stages within a unified analytical structure designed for deployment in industrial environments. The findings indicate that edge-generated anomaly signals can provide meaningful early information about potential system deterioration and can assist in planning timely maintenance actions even when explicit failure labels are limited. The study contributes to the development of scalable predictive maintenance solutions that integrate artificial intelligence with edge-based industrial monitoring systems. Full article

21 pages, 1551 KB  
Article
A Hybrid Model for Deliverability Prediction in Fractured Tight Sandstone Energy Storage Reservoirs
by Dengfeng Ren, Ju Liu, Chengwen Wang, Xin Qiao, Junyan Liu and Fen Peng
Energies 2026, 19(7), 1800; https://doi.org/10.3390/en19071800 - 7 Apr 2026
Abstract
Fractured tight sandstone reservoirs are promising targets for underground energy storage, but their heterogeneous nature and often-incomplete historical test data pose significant challenges for accurate deliverability prediction and reservoir evaluation. To address this, a novel hybrid methodology is proposed. For wells with complete historical data, deliverability is calculated using a binomial inflow performance relationship (IPR) model. For wells with incomplete data, a weighted fusion model integrating a Random Forest algorithm and least squares regression is developed to predict natural blowout capacity, a key proxy for energy storage injectivity/productivity. The fusion model achieved superior performance with a mean absolute error (MAE) of 7.19 × 10^4 m^3/day and a Mean Relative Error (MRE) of 8.5%, outperforming standalone methods. Based on the predicted deliverability, reservoirs in the Bozi–North block (Kuche Depression, Tarim Basin) were classified into three potential grades (I, II, III). The study provides a data-adaptive framework for deliverability prediction and offers tailored reformation process recommendations (e.g., sand fracturing for Grade I reservoirs), thereby providing a more reliable and practical decision support tool for the efficient development of tight sandstone energy storage reservoirs. Full article
(This article belongs to the Topic Advanced Technology for Oil and Nature Gas Exploration)

19 pages, 4124 KB  
Article
Prediction of Maximum Usable Frequency Based on a New Hybrid Deep Learning Model
by Yuyang Li, Zhigang Zhang and Jian Shen
Electronics 2026, 15(7), 1539; https://doi.org/10.3390/electronics15071539 - 7 Apr 2026
Abstract
The reliability of high-frequency (HF) frequency selection technology relies on the prediction accuracy of the Maximum Usable Frequency of the ionospheric F2 layer (MUF-F2). To improve its short-term prediction performance, a novel hybrid deep learning prediction model is proposed, which achieves accurate modeling of the complex spatiotemporal variation patterns of MUF-F2 by integrating a feature enhancement mechanism, a dual-branch feature extraction structure, and a bidirectional temporal dependency capture network. The hybrid prediction model integrates the Channel Attention mechanism (CA), Dual-Branch Convolutional Neural Network (DCNN), and Bidirectional Long Short-Term Memory network (BiLSTM). The model is trained and validated using MUF-F2 data from 5 communication links over China during geomagnetically quiet periods and 4 during geomagnetic storm periods, with the difference in the number of links attributed to experimental constraints and the disruptive effects of geomagnetic storms. Its performance is evaluated via multiple metrics, and a comparative analysis is conducted with commonly used prediction models such as the Long Short-Term Memory (LSTM) network. Experimental results show that during geomagnetically quiet periods, the proposed model achieves lower prediction errors (Root Mean Square Error (RMSE) < 1.1 MHz, Mean Absolute Percentage Error (MAPE) < 3.8%) and a higher goodness of fit (coefficient of determination (R²) > 0.94), with the average error reduction across all links ranging from 6.2% to 46.9% compared with the baseline model. Under geomagnetic storm disturbance conditions, the model still maintains robust prediction performance, with R² > 0.89 for all communication links, as well as RMSE < 0.6 MHz, Mean Absolute Error (MAE) < 0.4 MHz, and MAPE < 3.3%.
The study demonstrates that the proposed CA-DCNN-BiLSTM model exhibits excellent prediction accuracy and anti-interference capability under different geomagnetic activity conditions, which can effectively improve the short-term prediction accuracy of MUF-F2 and provide more reliable technical support for HF communication frequency decision-making. Full article
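The abstract does not specify how its Channel Attention mechanism is formulated; a common choice is a squeeze-and-excitation-style gate, sketched below in plain NumPy on a (time, channels) feature map. The shapes and untrained weights are illustrative assumptions.

```python
import numpy as np

def channel_attention(x, w1, w2):
    """Squeeze-and-excitation-style channel attention on a (time, channels)
    feature map: pool over time, pass through a small bottleneck, and gate
    each channel with a sigmoid weight in (0, 1)."""
    s = x.mean(axis=0)                    # squeeze: global average over time
    h = np.maximum(s @ w1, 0.0)           # excitation: ReLU bottleneck
    a = 1.0 / (1.0 + np.exp(-(h @ w2)))   # per-channel sigmoid gate
    return x * a                          # reweight (enhance/suppress) channels

rng = np.random.default_rng(1)
x = rng.normal(size=(24, 8))              # 24 time steps, 8 feature channels
w1 = rng.normal(size=(8, 4))              # bottleneck weights (untrained here)
w2 = rng.normal(size=(4, 8))
out = channel_attention(x, w1, w2)        # same shape as x, channels rescaled
```

In the full CA-DCNN-BiLSTM pipeline, such a gate would sit before the dual-branch convolutional feature extractor, letting the network emphasize informative input channels.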

21 pages, 1713 KB  
Article
Mechanistic Modeling of TEG Dehydrator Emissions in Oil and Gas Industry
by Jacob Mdigo, Arthur Santos, Gerald Duggan, Prajay Vora, Kira Shonkwiler and Daniel Zimmerle
Fuels 2026, 7(2), 21; https://doi.org/10.3390/fuels7020021 - 7 Apr 2026
Abstract
This work presents a mechanistic modeling approach for simulating methane emissions from triethylene glycol (TEG) dehydrators used in oil & gas (O&G) operations. The model was developed as a modular component of the Mechanistic Air Emissions Simulator (MAES) tool, incorporating species-specific absorption and emission dynamics through two-level, second-order polynomial regression (PR) models trained on ProMax simulation data: (1) species-level regression models that track the transfer rates of individual gas species within the dehydrator unit streams, and (2) outlet flow stream regression models that predict the fraction of inlet gas distributed among the outlet streams of the dehydrator unit. These behaviors were characterized over a range of glycol circulation ratios, wet gas pressures, and temperatures. The model was validated using root mean square error (RMSE) analysis. The species-level PR achieved low RMSE values (<0.03) for light hydrocarbon species across all dehydrator components, ranging from 0.0009 for methane to 0.029 for normal pentane. Similarly, the outlet-level PR yielded RMSE values below 0.002 for the dry gas fraction, 0.001 for the flash tank fraction, and 0.002 for the still vent fraction, demonstrating strong agreement between predicted and reference ProMax values. When deployed at field facilities, the model significantly improved MAES-simulated dehydrator emissions, revealing that gas-assisted glycol pump emissions are the dominant contributors to both dehydrator-level and site-level methane emissions under uncontrolled conditions. Further analysis of the 154 dehydrator units reported by operators under the AMI 2024 project showed that 54 units (31%) used gas-driven glycol pumps, of which 6 units (11%) operated with uncontrolled flash tanks, and 22 units (40.7%) were identified as potentially oversized.
Of the six dehydrator units with uncontrolled gas-assisted pumps, pump emissions accounted for 90.25% of total dehydrator emissions and 63.10% of total site-level emissions. These findings highlight substantial opportunities for emissions mitigation through equipment upgrades. Full article
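A second-order polynomial regression surrogate of the kind described, mapping operating conditions to a species transfer fraction, can be sketched with scikit-learn. The operating ranges and the synthetic target below are illustrative stand-ins for ProMax simulation data, not the paper's actual training set.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(2)
# Hypothetical operating conditions: glycol circulation ratio,
# wet gas pressure (psia), temperature (K).
X = rng.uniform([1.0, 300.0, 280.0], [5.0, 900.0, 330.0], size=(300, 3))
# Synthetic stand-in for a ProMax-derived methane transfer fraction;
# chosen to be quadratic so a degree-2 PR can recover it exactly.
y = 0.01 + 0.002 * X[:, 0] + 1e-6 * X[:, 1] - 5e-7 * (X[:, 2] - 300.0) ** 2

# Second-order polynomial regression: expand features to degree 2,
# then fit an ordinary least-squares model.
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(X, y)
rmse = mean_squared_error(y, model.predict(X)) ** 0.5
```

The same pattern (one fitted pipeline per species, one per outlet stream) gives the two-level structure the abstract describes.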

33 pages, 19869 KB  
Article
Learning Nonlinear Dynamics of Flexible Structures for Predictive Control Using Gaussian Process NARX Models
by Nasser Ayidh Alqahtani
Biomimetics 2026, 11(4), 253; https://doi.org/10.3390/biomimetics11040253 - 7 Apr 2026
Abstract
Biological systems regulate motion and suppress unwanted vibrations through learning, adaptation, and predictive control under uncertainty. Inspired by these principles, Bayesian system identification has emerged as a powerful framework for modeling and estimation, particularly in the presence of uncertainty in structural systems. Flexible structures in aerospace and robotics require advanced control to mitigate vibrations under model uncertainty. This paper proposes a data-driven strategy leveraging a Gaussian Process (GP) integrated within a Nonlinear Model Predictive Control (NMPC) framework. The core innovation lies in using a Gaussian Process Nonlinear AutoRegressive model with eXogenous input (GP-NARX) as a probabilistic predictor to capture structural dynamics while quantifying uncertainty. The operational mechanism involves a tight coupling where the GP provides multi-step-ahead forecasts that the NMPC optimizer uses to minimize a cost function subject to constraints. Validated through simulations on Duffing oscillators, linear oscillators, and cantilever beams, the GP-NMPC achieved an 88.2% reduction in displacement amplitude compared to uncontrolled systems. Quantitative analysis shows high predictive accuracy, with a Root Mean Square Error (RMSE) of 0.0031 and a Standardized Mean-Squared Error (SMSE) below 0.05. Furthermore, Mean Standardized Log Loss (MSLL) evaluations confirm the reliability of the predictive uncertainty within the control loop. These results demonstrate strong performance in both regulation and tracking tasks, justifying this Bayesian-predictive coupling as a powerful approach for high-performance structural vibration control and a potential foundation for bio-inspired mechanical design. Full article
(This article belongs to the Special Issue Design of Natural and Biomimetic Flexible Biological Structures)
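A GP-NARX one-step predictor of the kind used here can be sketched with scikit-learn's `GaussianProcessRegressor`. The toy driven oscillator, lag structure [y(t−1), y(t−2), u(t−1)], and kernel choice below are illustrative assumptions standing in for the paper's flexible-structure dynamics.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
# Toy nonlinear oscillator driven by input u (stand-in for a flexible structure).
n = 300
u = np.sin(0.1 * np.arange(n))
y = np.zeros(n)
for t in range(2, n):
    y[t] = (0.8 * y[t - 1] - 0.2 * y[t - 2]
            - 0.05 * y[t - 1] ** 3 + 0.3 * u[t - 1]
            + rng.normal(0.0, 0.01))

# NARX regressors: predict y(t) from [y(t-1), y(t-2), u(t-1)].
X = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
Y = y[2:]

# GP with an RBF kernel plus a white-noise term to absorb process noise.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X[:200], Y[:200])
mean, std = gp.predict(X[200:], return_std=True)  # predictive mean + uncertainty
```

Inside an NMPC loop, `mean` would be rolled forward recursively for multi-step forecasts while `std` quantifies the predictive uncertainty the controller can account for.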
