Search Results (486)

Search Parameters:
Keywords = Bayesian deep learning

24 pages, 29939 KB  
Article
Flipout Bayesian LSTM with Residual Attention for Uncertainty-Aware PM2.5 Forecasting and Anomaly Detection
by Quan Li, Huaxing Lu, Haiyang Xu and Dengwei Sun
Sustainability 2026, 18(4), 1718; https://doi.org/10.3390/su18041718 (registering DOI) - 7 Feb 2026
Abstract
Accurate PM2.5 prediction and reliable uncertainty assessments are essential for effective early warnings and public health protection. However, most existing deep learning models only provide deterministic predictions, with limited treatment of predictive uncertainty, which may reduce reliability under noisy or abrupt pollution conditions. This study presents a flipout Bayesian LSTM with residual attention (FBA-LSTM), which integrates Bayesian flipout inference, residual connections, and temporal attention to jointly improve predictive accuracy and uncertainty estimation. Unlike MC dropout, our model explicitly represents weight distributions through variational flipout inference, yielding more comprehensive and stable uncertainty estimates at a lower computational cost. A lightweight calibration module based on standard-deviation scaling further aligns the confidence intervals with empirical coverage. Experiments on hourly PM2.5 data from four Nanjing stations (2020) showed that FBA-LSTM improves accuracy, noise robustness, and exceedance warnings (F1 = 0.996) and achieves higher anomaly-detection performance (F1 = 0.691) than baseline methods, thereby supporting urban environmental sustainability and public health security. Full article
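The standard-deviation scaling mentioned in this abstract is not spelled out in the listing; as a hedged sketch of the general idea, a single scalar `s` can be fit on held-out data so that the nominal intervals reach their target empirical coverage (all names below are illustrative, not from the paper):

```python
def coverage(y, mu, sigma, s, z=1.96):
    """Fraction of observations falling inside mu +/- s*z*sigma."""
    hits = sum(abs(yi - mi) <= s * z * si for yi, mi, si in zip(y, mu, sigma))
    return hits / len(y)

def calibrate_scale(y, mu, sigma, target=0.95):
    """Smallest std-scaling factor on a coarse grid whose intervals
    reach the target empirical coverage on held-out data."""
    grid = [0.5 + 0.05 * k for k in range(61)]  # candidate scales 0.50 .. 3.50
    for s in grid:
        if coverage(y, mu, sigma, s) >= target:
            return s
    return grid[-1]
```

Scaling every predictive standard deviation by the fitted `s` then aligns the reported 95% intervals with what is actually observed.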
29 pages, 2849 KB  
Article
From Physical to Virtual Sensors: VSG-SGL for Reliable and Cost-Efficient Environmental Monitoring
by Murad Ali Khan, Qazi Waqas Khan, Ji-Eun Kim, SeungMyeong Jeong, Il-yeop Ahn and Do-Hyeun Kim
Automation 2026, 7(1), 27; https://doi.org/10.3390/automation7010027 - 3 Feb 2026
Viewed by 74
Abstract
Reliable environmental monitoring in remote or sparsely instrumented regions is hindered by the cost, maintenance demands, and inaccessibility of dense physical sensor deployments. To address these challenges, this study introduces VSG-SGL, a unified virtual sensor generation framework that integrates Sparse Gaussian Process Regression (SGPR) and Bayesian Ridge Regression (BRR) with deep generative learning via Variational Autoencoders (VAE) and Conditional Tabular GANs (CTGAN). Real meteorological datasets from multiple South Korean cities were preprocessed using thresholding and Isolation Forest anomaly detection and evaluated using distributional alignment (KDE) and sequence-learning validation with BiLSTM and BiGRU models. Experimental findings demonstrate that VAE-augmented virtual sensors provide the most stable and reliable performance. For temperature, VAE maintains predictive errors close to those of BRR and SGPR, reflecting the already well-modeled dynamics of this variable. In contrast, humidity and wind-related variables exhibit measurable gains with VAE; for example, SGPR-based wind speed MAE improves from 0.1848 to 0.1604, while BRR-based wind direction RMSE decreases from 0.1842 to 0.1726. CTGAN augmentation, however, frequently increases error, particularly for humidity and wind speed. Overall, the results establish VAE-enhanced VSG-SGL virtual sensors as a cost-effective and accurate alternative in scenarios where physical sensing is limited or impractical. Full article
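The abstract's BRR component is not shown in the listing; as a rough stand-in, the textbook Gaussian-prior Bayesian linear regression posterior that such virtual sensors build on can be sketched as follows (the prior precision `alpha` and noise precision `beta` are hypothetical values, not the paper's settings):

```python
import numpy as np

def bayesian_ridge_fit(X, y, alpha=1.0, beta=25.0):
    """Posterior mean and covariance of the weights under a N(0, 1/alpha I)
    prior and Gaussian observation noise with precision beta."""
    d = X.shape[1]
    S = np.linalg.inv(alpha * np.eye(d) + beta * X.T @ X)  # posterior covariance
    m = beta * S @ X.T @ y                                 # posterior mean
    return m, S

def predict(X, m, S, beta=25.0):
    """Predictive mean and standard deviation for new inputs."""
    mean = X @ m
    var = 1.0 / beta + np.einsum('ij,jk,ik->i', X, S, X)   # noise + weight uncertainty
    return mean, np.sqrt(var)
```

A virtual sensor built this way reports `mean` as its reading and the predictive standard deviation as a reliability signal for downstream sequence models.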
26 pages, 1858 KB  
Review
Artificial Intelligence in Lubricant Research—Advances in Monitoring and Predictive Maintenance
by Raj Shah, Kate Marussich, Vikram Mittal and Andreas Rosenkranz
Lubricants 2026, 14(2), 72; https://doi.org/10.3390/lubricants14020072 - 3 Feb 2026
Viewed by 202
Abstract
Artificial intelligence transforms lubricant research by linking molecular modeling, diagnostics, and industrial operations into predictive systems. In this regard, machine learning methods such as Bayesian optimization and neural-based Quantitative Structure–Property/Tribological Relationship (QSPR/QSTR) modeling help to accelerate additive design and formulation development. Moreover, deep learning and hybrid physics–AI frameworks can now predict key lubricant properties such as viscosity, oxidation stability, and wear resistance directly from molecular or spectral data, reducing the need for long-duration field trials like fleet or engine endurance tests. With respect to condition monitoring, convolutional neural networks automate wear debris classification, multimodal sensor fusion enables real-time oil health tracking, and digital twins provide predictive maintenance by forecasting lubricant degradation and optimizing drain intervals. AI-assisted blending and process control platforms extend these advantages into manufacturing, reducing waste and improving reproducibility. This article sheds light on recent progress in AI-driven formulation, monitoring, and maintenance, and identifies major barriers to adoption such as fragmented datasets, limited model transferability, and low explainability. Moreover, it discusses how standardized data infrastructures, physics-informed learning, and secure federated approaches can advance the industry toward adaptive, sustainable lubricant development under the principles of Industry 5.0. Full article
26 pages, 6232 KB  
Article
MFE-YOLO: A Multi-Scale Feature Enhanced Network for PCB Defect Detection with Cross-Group Attention and FIoU Loss
by Ruohai Di, Hao Fan, Hanxiao Feng, Zhigang Lv, Lei Shu, Rui Xie and Ruoyu Qian
Entropy 2026, 28(2), 174; https://doi.org/10.3390/e28020174 - 2 Feb 2026
Viewed by 148
Abstract
The detection of defects in Printed Circuit Boards (PCBs) is a critical yet challenging task in industrial quality control, characterized by the prevalence of small targets and complex backgrounds. While deep learning models like YOLOv5 have shown promise, they often lack the ability to quantify predictive uncertainty, leading to overconfident errors in challenging scenarios—a major source of false alarms and reduced reliability in automated manufacturing inspection lines. From a Bayesian perspective, this overconfidence signifies a failure in probabilistic calibration, which is crucial for trustworthy automated inspection. To address this, we propose MFE-YOLO, a Bayesian-enhanced detection framework built upon YOLOv5 that systematically integrates uncertainty-aware mechanisms to improve both accuracy and operational reliability in real-world settings. First, we construct a multi-background PCB defect dataset with diverse substrate colors and shapes, enhancing the model’s ability to generalize beyond the single-background bias of existing data. Second, we integrate the Convolutional Block Attention Module (CBAM), reinterpreted through a Bayesian lens as a feature-wise uncertainty weighting mechanism, to suppress background interference and amplify salient defect features. Third, we propose a novel FIoU loss function, redesigned within a probabilistic framework to improve bounding box regression accuracy and implicitly capture localization uncertainty, particularly for small defects. Extensive experiments demonstrate that MFE-YOLO achieves state-of-the-art performance, with mAP@0.5 and mAP@0.5:0.95 values of 93.9% and 59.6%, respectively, outperforming existing detectors, including YOLOv8 and EfficientDet. More importantly, the proposed framework yields better-calibrated confidence scores, significantly reducing false alarms and enabling more reliable human-in-the-loop verification. This work provides a deployable, uncertainty-aware solution for high-throughput PCB inspection, advancing toward trustworthy and efficient quality control in modern manufacturing environments. Full article
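The FIoU loss itself is not defined in this listing; for orientation, the plain IoU loss that IoU-family regression losses refine can be sketched as follows (a generic baseline, not the paper's formulation):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))  # overlap width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))  # overlap height
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def iou_loss(pred, target):
    """1 - IoU: zero for a perfect box, approaching one for disjoint boxes."""
    return 1.0 - iou(pred, target)
```

Variants such as GIoU, CIoU, or the FIoU proposed here add penalty terms to this base so that gradients remain informative even when boxes barely overlap, which matters most for the small defects the abstract highlights.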
(This article belongs to the Special Issue Bayesian Networks and Causal Discovery)
20 pages, 1275 KB  
Article
QEKI: A Quantum–Classical Framework for Efficient Bayesian Inversion of PDEs
by Jiawei Yong and Sihai Tang
Entropy 2026, 28(2), 156; https://doi.org/10.3390/e28020156 - 30 Jan 2026
Viewed by 184
Abstract
Solving Bayesian inverse problems efficiently stands as a major bottleneck in scientific computing. Although Bayesian Physics-Informed Neural Networks (B-PINNs) have introduced a robust way to quantify uncertainty, the high-dimensional parameter spaces inherent in deep learning often lead to prohibitive sampling costs. Addressing this, our work introduces Quantum-Encodable Bayesian PINNs trained via Classical Ensemble Kalman Inversion (QEKI), a framework that pairs Quantum Neural Networks (QNNs) with Ensemble Kalman Inversion (EKI). The core advantage lies in the QNN’s ability to act as a compact surrogate for PDE solutions, capturing complex physics with significantly fewer parameters than classical networks. By adopting the gradient-free EKI for training, we mitigate the barren plateau issue that plagues quantum optimization. Through several benchmarks on 1D and 2D nonlinear PDEs, we show that QEKI yields precise inversions and substantial parameter compression, even in the presence of noise. While large-scale applications are constrained by current quantum hardware, this research outlines a viable hybrid framework for including quantum features within Bayesian uncertainty quantification. Full article
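For readers unfamiliar with Ensemble Kalman Inversion, one gradient-free update step can be sketched generically (a textbook variant with perturbed observations; `gamma` stands in for the observation-noise covariance and is not a value from the paper):

```python
import numpy as np

def eki_update(theta, y, forward, gamma=1e-2, rng=None):
    """One Ensemble Kalman Inversion step: shift each ensemble member toward
    parameters whose forward-model output matches the data y.
    theta: (J, d) ensemble; forward: maps (d,) params -> (m,) observations."""
    rng = rng or np.random.default_rng(0)
    G = np.array([forward(t) for t in theta])                  # (J, m) model outputs
    t_mean, g_mean = theta.mean(0), G.mean(0)
    C_tg = (theta - t_mean).T @ (G - g_mean) / len(theta)      # param-output covariance
    C_gg = (G - g_mean).T @ (G - g_mean) / len(theta)          # output covariance
    K = C_tg @ np.linalg.inv(C_gg + gamma * np.eye(len(y)))    # Kalman gain
    # perturb observations so the ensemble spread reflects the noise level
    y_pert = y + rng.normal(0.0, np.sqrt(gamma), size=(len(theta), len(y)))
    return theta + (y_pert - G) @ K.T
```

Iterating this step drives the ensemble mean toward parameters whose forward-model output matches the data, using only forward evaluations; no gradients are needed, which is what sidesteps the barren plateau issue in the quantum setting.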
(This article belongs to the Special Issue Quantum Computation, Quantum AI, and Quantum Information)
25 pages, 876 KB  
Article
Multi-Scale Digital Twin Framework with Physics-Informed Neural Networks for Real-Time Optimization and Predictive Control of Amine-Based Carbon Capture: Development, Experimental Validation, and Techno-Economic Assessment
by Mansour Almuwallad
Processes 2026, 14(3), 462; https://doi.org/10.3390/pr14030462 - 28 Jan 2026
Viewed by 133
Abstract
Carbon capture and storage (CCS) is essential for achieving net-zero emissions, yet amine-based capture systems face significant challenges including high energy penalties (20–30% of power plant output) and operational costs ($50–120/tonne CO2). This study develops and validates a novel multi-scale Digital Twin (DT) framework integrating Physics-Informed Neural Networks (PINNs) to address these challenges through real-time optimization. The framework combines molecular dynamics, process simulation, computational fluid dynamics, and deep learning to enable real-time predictive control. A key innovation is the sequential training algorithm with domain decomposition, specifically designed to handle the nonlinear transport equations governing CO2 absorption with enhanced convergence properties. The algorithm achieves prediction errors below 1% for key process variables (R2 > 0.98) when validated against CFD simulations across 500 test cases. Experimental validation against pilot-scale absorber data (12 m packing, 30 wt% MEA) confirms good agreement with measured profiles, including temperature (RMSE = 1.2 K), CO2 loading (RMSE = 0.015 mol/mol), and capture efficiency (RMSE = 0.6%). The trained surrogate enables computational speedups of up to four orders of magnitude, supporting real-time inference with response times below 100 ms suitable for closed-loop control. Under the conditions studied, the framework demonstrates reboiler duty reductions of 18.5% and operational cost reductions of approximately 31%. Sensitivity analysis identifies liquid-to-gas ratio and MEA concentration as the most influential parameters, with mechanistic explanations linking these to mass transfer enhancement and reaction kinetics. Techno-economic assessment indicates favorable investment metrics, though results depend on site-specific factors. The framework architecture is designed for extensibility to alternative solvent systems, with future work planned for industrial-scale validation and uncertainty quantification through Bayesian approaches. Full article
(This article belongs to the Section Petroleum and Low-Carbon Energy Process Engineering)
18 pages, 775 KB  
Article
Tuning Deep Learning for Predicting Aluminum Prices Under Different Sampling: Bayesian Optimization Versus Random Search
by Alicia Estefania Antonio Figueroa and Salim Lahmiri
Entropy 2026, 28(2), 145; https://doi.org/10.3390/e28020145 - 28 Jan 2026
Cited by 1 | Viewed by 194
Abstract
This work implements deep learning models to capture non-linear and complex data behavior in aluminum price data. Deep learning models include the long short-term memory (LSTM) and deep feedforward neural networks (FFNN). The support vector regression (SVR) is employed as a base model for comparison. Each predictive model is tuned by using two different optimization methods: Bayesian optimization (BO) and random search (RS). All models are tested on daily, weekly, and monthly data. Three performance metrics are used to evaluate each forecasting model: the root mean squared error (RMSE), mean absolute error (MAE), and the coefficient of determination (R2). The experimental results show that the LSTM-BO is the best-performing model across the time horizons (daily, weekly, and monthly). By consistently achieving the lowest RMSE, MAE, and highest R2, the LSTM-BO outperformed all the other models, including SVR-BO, FFNN-BO, LSTM-RS, SVR-RS, and FFNN-RS. In addition, predictive models utilizing BO regularly outperformed those using RS. In summary, LSTM-BO is highly beneficial for aluminum spot price forecasting. Full article
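The random-search (RS) baseline compared against BO here is simple to state; a minimal sketch (the objective, search-space names, and trial budget below are illustrative, not the paper's setup):

```python
import random

def random_search(objective, space, n_trials=50, seed=0):
    """Minimize `objective` over a dict of {name: (low, high)} ranges by
    uniform random sampling -- the RS baseline typically compared against BO."""
    rng = random.Random(seed)
    best_cfg, best_val = None, float("inf")
    for _ in range(n_trials):
        cfg = {k: rng.uniform(lo, hi) for k, (lo, hi) in space.items()}
        val = objective(cfg)
        if val < best_val:
            best_cfg, best_val = cfg, val
    return best_cfg, best_val
```

Bayesian optimization differs by fitting a surrogate to the past `(cfg, val)` pairs and proposing each next trial where an acquisition function (e.g., expected improvement) is highest, rather than sampling uniformly, which is why it tends to find better hyperparameters in the same budget.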
(This article belongs to the Section Multidisciplinary Applications)
22 pages, 31480 KB  
Article
Bayesian Inference of Primordial Magnetic Field Parameters from CMB with Spherical Graph Neural Networks
by Juan Alejandro Pinto Castro, Héctor J. Hortúa, Jorge Enrique García-Farieta and Roger Anderson Hurtado
Universe 2026, 12(2), 34; https://doi.org/10.3390/universe12020034 - 26 Jan 2026
Viewed by 221
Abstract
Deep learning has emerged as a transformative methodology in modern cosmology, providing powerful tools to extract meaningful physical information from complex astronomical data. This paper implements a novel Bayesian graph deep learning framework for estimating key cosmological parameters in a primordial magnetic field (PMF) cosmology from simulated Cosmic Microwave Background (CMB) maps. Our methodology utilizes DeepSphere, a spherical convolutional neural network architecture specifically designed to respect the spherical geometry of CMB data through HEALPix pixelization. To advance beyond deterministic point estimates and enable robust uncertainty quantification, we integrate Bayesian Neural Networks (BNNs) into the framework, capturing aleatoric and epistemic uncertainties that reflect the model’s confidence in its predictions. The proposed approach demonstrates exceptional performance, achieving R2 scores exceeding 89% for the magnetic parameter estimation. We further obtain well-calibrated uncertainty estimates through post hoc training techniques including Variance Scaling and GPNormal. This integrated DeepSphere-BNNs framework delivers accurate parameter estimation from CMB maps with PMF contributions while providing reliable uncertainty quantification, enabling robust cosmological inference in the era of precision cosmology. Full article
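The aleatoric/epistemic split mentioned in this abstract is commonly computed from an ensemble of Gaussian predictive heads via the law of total variance; a generic sketch (not the paper's code, and the function name is hypothetical):

```python
def decompose_uncertainty(means, variances):
    """Law of total variance over ensemble members: aleatoric is the average
    predicted noise variance, epistemic is the disagreement between members."""
    n = len(means)
    aleatoric = sum(variances) / n
    mu = sum(means) / n
    epistemic = sum((m - mu) ** 2 for m in means) / n
    return aleatoric, epistemic, aleatoric + epistemic
```

Here each ensemble member (e.g., a BNN weight sample) predicts a mean and a variance; members that agree with each other but predict large variances signal noisy data, while members that disagree signal model uncertainty.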
(This article belongs to the Section Astroinformatics and Astrostatistics)
25 pages, 2206 KB  
Article
Adaptive Bayesian System Identification for Long-Term Forecasting of Industrial Load and Renewables Generation
by Lina Sheng, Zhixian Wang, Xiaowen Wang and Linglong Zhu
Electronics 2026, 15(3), 530; https://doi.org/10.3390/electronics15030530 - 26 Jan 2026
Viewed by 131
Abstract
The expansion of renewables in modern power systems and the coordinated development of upstream and downstream industrial chains are promoting a shift on the utility side from traditional settlement by energy toward operation driven by data and models. Industrial electricity consumption data exhibit pronounced multi-scale temporal structures and sectoral heterogeneity, which makes unified long-term load and generation forecasting while maintaining accuracy, interpretability, and scalability a challenge. From a modern system identification perspective, this paper proposes a System Identification in Adaptive Bayesian Framework (SIABF) for medium- and long-term industrial load forecasting based on daily freeze electricity time series. By combining daily aggregation of high-frequency data, frequency domain analysis, sparse identification, and long-term extrapolation, we first construct daily freeze series from 15 min measurements, and then we apply discrete Fourier transforms and a spectral complexity index to extract dominant periodic components and build an interpretable sinusoidal basis library. A sparse regression formulation with ℓ1 regularization is employed to select a compact set of key basis functions, yielding concise representations of sector and enterprise load profiles and naturally supporting multivariate and joint multi-sector modeling. Building on this structure, we implement a state-space-implicit physics-informed Bayesian forecasting model and evaluate it on real data from three representative sectors, namely, steel, photovoltaics, and chemicals, using one year of 15 min measurements. Under a one-month-ahead evaluation, the proposed framework attains a Mean Absolute Percentage Error (MAPE) of 4.5% for a representative PV-related customer case and achieves low single-digit MAPE for high-inertia sectors, often outperforming classical statistical models, sparse learning baselines, and deep learning architectures. These results should be interpreted as indicative given the limited time span and sample size, and broader multi-year, population-level validation is warranted. Full article
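The spectral screening step (discrete Fourier transform to find dominant periods before assembling the sinusoidal basis library) can be illustrated generically; the function below is a sketch under the assumption of an evenly sampled daily series, not the paper's implementation:

```python
import numpy as np

def dominant_periods(x, k=2):
    """Return the k strongest periods (in samples) of a detrended series,
    ranked by real-FFT magnitude with the DC bin excluded."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x))          # cycles per sample
    order = np.argsort(spec[1:])[::-1] + 1   # strongest bins first, skipping DC
    return [1.0 / freqs[i] for i in order[:k]]
```

Each selected period would then contribute a sine/cosine pair to the basis library, which the sparse regression step prunes down to a compact interpretable model.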
(This article belongs to the Section Systems & Control Engineering)
26 pages, 2618 KB  
Article
A Cascaded Batch Bayesian Yield Optimization Method for Analog Circuits via Deep Transfer Learning
by Ziqi Wang, Kaisheng Sun and Xiao Shi
Electronics 2026, 15(3), 516; https://doi.org/10.3390/electronics15030516 - 25 Jan 2026
Viewed by 220
Abstract
In nanometer integrated-circuit (IC) manufacturing, advanced technology scaling has intensified the effects of process variations on circuit reliability and performance. Random fluctuations in parameters such as threshold voltage, channel length, and oxide thickness further degrade design margins and increase the likelihood of functional failures. These variations often lead to rare circuit failure events, underscoring the importance of accurate yield estimation and robust design methodologies. Conventional Monte Carlo yield estimation is computationally infeasible as millions of simulations are required to capture failure events with extremely low probability. This paper presents a novel reliability-based circuit design optimization framework that leverages deep transfer learning to improve the efficiency of repeated yield analysis in optimization iterations. Based on pre-trained neural network models from prior design knowledge, we utilize model fine-tuning to accelerate importance sampling (IS) for yield estimation. To improve estimation accuracy, adversarial perturbations are introduced to calibrate uncertainty near the model decision boundary. Moreover, we propose a cascaded batch Bayesian optimization (CBBO) framework that incorporates a smart initialization strategy and a localized penalty mechanism, guiding the search process toward high-yield regions while satisfying nominal performance constraints. Experimental validation on SRAM circuits and amplifiers reveals that CBBO achieves a computational speedup of 2.02×–4.63× over state-of-the-art (SOTA) methods, without compromising accuracy and robustness. Full article
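The importance sampling (IS) being accelerated here can be sketched in its simplest mean-shift form for a one-dimensional standard-normal process parameter (a generic illustration, not the paper's fine-tuned estimator; the function name and shift value are hypothetical):

```python
import numpy as np

def is_fail_prob(fail, shift, n=20000, seed=0):
    """Estimate P[fail(x)] for x ~ N(0, 1) by sampling from the shifted
    proposal N(shift, 1) and reweighting with the likelihood ratio."""
    rng = np.random.default_rng(seed)
    x = rng.normal(shift, 1.0, n)
    # likelihood ratio N(0,1)/N(shift,1), evaluated at the samples
    w = np.exp(-0.5 * x**2 + 0.5 * (x - shift) ** 2)
    return float(np.mean(w * fail(x)))
```

Shifting the proposal toward the failure region makes rare failures common in the sample while the weights keep the estimate unbiased, which is what makes low-probability yield loss tractable without millions of plain Monte Carlo simulations.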
(This article belongs to the Topic Advanced Integrated Circuit Design and Application)
22 pages, 3180 KB  
Article
Integrating Blockchain Traceability and Deep Learning for Risk Prediction in Grain and Oil Food Safety
by Hongyi Ge, Kairui Fan, Yuan Zhang, Yuying Jiang, Shun Wang and Zhikun Chen
Foods 2026, 15(2), 407; https://doi.org/10.3390/foods15020407 - 22 Jan 2026
Viewed by 123
Abstract
The quality and safety of grain and oil food are paramount to sustainable societal development and public health. Implementing early warning analysis and risk control is critical for the comprehensive identification and management of grain and oil food safety risks. However, traditional risk prediction models are limited by their inability to accurately analyze complex nonlinear data, while their reliance on centralized storage further undermines prediction credibility and traceability. This study proposes a deep learning risk prediction model integrated with a blockchain-based traceability mechanism. First, a risk prediction model combining Grey Relational Analysis (GRA) and a Bayesian-optimized Tabular Neural Network (TabNet-BO) is proposed, enabling precise and rapid fine-grained risk prediction. Second, a risk prediction method combining blockchain and deep learning is proposed. This method first completes the prediction interaction with the deep learning model through a smart contract and then records the limit-exceeding data and prediction results on the blockchain to ensure the authenticity and traceability of the data. At the same time, a storage optimization method is employed, where only the limit-exceeding data is uploaded to the blockchain, while the non-exceeding data is encrypted and stored in the local database. Compared with existing models, the proposed model not only effectively enhances the prediction capability for grain and oil food quality and safety but also improves the transparency and credibility of data management. Full article
(This article belongs to the Section Food Quality and Safety)
26 pages, 60486 KB  
Article
Spatiotemporal Prediction of Ground Surface Deformation Using TPE-Optimized Deep Learning
by Maoqi Liu, Sichun Long, Tao Li, Wandi Wang and Jianan Li
Remote Sens. 2026, 18(2), 234; https://doi.org/10.3390/rs18020234 - 11 Jan 2026
Viewed by 245
Abstract
Surface deformation induced by the extraction of natural resources constitutes a non-stationary spatiotemporal process. Modeling surface deformation time series obtained through Interferometric Synthetic Aperture Radar (InSAR) technology using deep learning methods is crucial for disaster prevention and mitigation. However, the complexity of model hyperparameter configuration and the lack of interpretability in the resulting predictions constrain its engineering applications. To enhance the reliability of model outputs and their decision-making value for engineering applications, this study presents a workflow that combines a Tree-structured Parzen Estimator (TPE)-based Bayesian optimization approach with ensemble inference. Using the Rhineland coalfield in Germany as a case study, we systematically evaluated six deep learning architectures in conjunction with various spatiotemporal coding strategies. Pairwise comparisons were conducted using a Welch t-test to evaluate the performance differences across each architecture under two parameter-tuning approaches. The Benjamini–Hochberg method was applied to control the false discovery rate (FDR) at 0.05 for multiple comparisons. The results indicate that TPE-optimized models demonstrate significantly improved performance compared to their manually tuned counterparts, with the ResNet+Transformer architecture yielding the most favorable outcomes. A comprehensive analysis of the spatial residuals further revealed that TPE optimization not only enhances average accuracy, but also mitigates the model’s prediction bias in fault zones and mineralized areas by improving the spatial distribution structure of errors. Based on this optimal architecture, we combined the ten highest-performing models from the optimization stage to generate a quantile-based susceptibility map, using the ensemble median as the central predictor. Uncertainty was quantified from three complementary perspectives: ensemble spread, class ambiguity, and classification confidence. Our analysis revealed spatial collinearity between physical uncertainty and absolute residuals. This suggests that uncertainty is more closely related to the physical complexity of geological discontinuities and human-disturbed zones than to statistical noise. In the analysis of super-threshold probability, the threshold sensitivity exhibited by the mining area reflects the widespread yet moderate impact of mining activities. By contrast, the fault zone continues to exhibit distinct high-probability zones, even under extreme thresholds. This suggests that fault-controlled deformation is more physically intense and poses a greater risk of disaster than mining activities. Finally, we propose an engineering decision strategy that combines uncertainty and residual spatial patterns. This approach transforms statistical diagnostics into actionable, tiered control measures, thereby increasing the practical value of susceptibility mapping in the planning of natural resource extraction. Full article
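The Benjamini–Hochberg step-up procedure used here to cap the FDR at 0.05 is compact enough to state directly; a generic implementation of the standard procedure (not the study's code) is:

```python
def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: boolean reject decisions
    controlling the false discovery rate at level q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices by ascending p
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:                  # step-up threshold q*k/m
            k_max = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject
```

All hypotheses up to the largest rank whose p-value clears its threshold are rejected, which is less conservative than a Bonferroni correction over the same set of pairwise Welch tests.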
24 pages, 3242 KB  
Article
RF-Driven Adaptive Surrogate Models for LoRaDisC Network Performance Prediction in Smart Agriculture and Field Sensing Environments
by Showkat Ahmad Bhat, Ishfaq Bashir Sofi, Ming-Che Chen and Nen-Fu Huang
AgriEngineering 2026, 8(1), 27; https://doi.org/10.3390/agriengineering8010027 - 11 Jan 2026
Viewed by 236
Abstract
LoRa-based IoT systems are increasingly used in smart farming, greenhouse monitoring, and large-scale agricultural sensing, where long-range, energy-efficient communication is essential. However, estimating link quality metrics such as PRR, RSSI, and SNR typically requires continuous packet transmission and sequence logging, an impractical approach for power-constrained field nodes. This study proposes a deep learning-driven framework for real-time prediction of link- and network-level performance in multihop LoRa networks, targeting the LoRaDisC protocol commonly deployed in agricultural environments. By integrating Bayesian surrogate modeling with Random Forest-guided hyperparameter optimization, the system accurately predicts PRR, RSSI, and SNR using multivariate time series features. Experiments on a large-scale outdoor LoRa testbed (ChirpBox) show that aggregated link layer metrics strongly correlate with PRR, with performance influenced by environmental variables such as humidity, temperature, and field topology. The optimized model achieves a mean absolute error (MAE) of 8.83 and adapts effectively to dynamic environmental conditions. This work enables energy-efficient, autonomous communication in agricultural IoT deployments, supporting reliable field sensing, crop monitoring, livestock tracking, and other smart farming applications that depend on resilient low-power wireless connectivity. Full article
25 pages, 4490 KB  
Article
A Bi-Level Intelligent Control Framework Integrating Deep Reinforcement Learning and Bayesian Optimization for Multi-Objective Adaptive Scheduling in Opto-Mechanical Automated Manufacturing
by Lingyu Yin, Zhenhua Fang, Kaicen Li, Jing Chen, Naiji Fan and Mengyang Li
Appl. Sci. 2026, 16(2), 732; https://doi.org/10.3390/app16020732 - 10 Jan 2026
Abstract
The opto-mechanical automated manufacturing process, characterized by stringent process constraints, dynamic disturbances, and conflicting optimization objectives, presents significant control challenges for traditional scheduling and control approaches. We formulate the scheduling problem within a closed-loop control paradigm and propose a novel bi-level intelligent control framework integrating Deep Reinforcement Learning (DRL) and Bayesian Optimization (BO). An inner DRL agent acts as an adaptive controller, generating control actions (scheduling decisions) by perceiving the system state and learning a near-optimal policy through a carefully designed reward function, while an outer BO loop automatically tunes the DRL agent’s hyperparameters and reward weights for superior performance. This synergistic BO-DRL mechanism facilitates intelligent and adaptive decision-making. The proposed method is extensively evaluated against standard meta-heuristics, including Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), on a complex 20-job × 20-machine flexible job shop scheduling benchmark specific to opto-mechanical automated manufacturing. The experimental results demonstrate that our BO-DRL algorithm significantly outperforms these benchmarks, achieving reductions in makespan of 13.37% and 25.51% compared to GA and PSO, respectively, alongside higher machine utilization and better on-time delivery. Furthermore, the algorithm exhibits enhanced convergence speed, superior robustness under dynamic disruptions (e.g., machine failures, urgent orders), and excellent scalability to larger problem instances. This study confirms that integrating DRL’s perceptual decision-making capability with BO’s efficient parameter optimization yields a powerful and effective solution for intelligent scheduling in high-precision manufacturing environments. Full article
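The bi-level control flow described in the abstract can be sketched in a few lines: an outer loop proposes hyperparameters (learning rate, reward weight), an inner loop "trains" an agent under them and returns a score, and the outer loop keeps the best configuration. This is a minimal sketch under stated assumptions: `train_agent` is a synthetic stand-in for the inner DRL training on the shop-floor simulator, and random search stands in for the outer Bayesian Optimization; none of these names come from the paper.

```python
import random

def train_agent(learning_rate, reward_weight, seed=0):
    """Stand-in for inner DRL training: returns a scalar cost (lower is better).
    A real agent would interact with the scheduling simulator; here the
    response surface is synthetic, just to show the bi-level control flow."""
    rng = random.Random(seed)
    # Synthetic objective with an optimum near lr=0.01, weight=0.5, plus noise.
    cost = (learning_rate - 0.01) ** 2 * 1e4 + (reward_weight - 0.5) ** 2 * 10
    return cost + rng.uniform(0.0, 0.1)

def outer_search(n_trials=30, seed=1):
    """Outer loop: propose hyperparameters, evaluate inner training, keep the
    best. (Random search stands in for Bayesian Optimization here.)"""
    rng = random.Random(seed)
    best = None
    for trial in range(n_trials):
        lr = 10 ** rng.uniform(-4, -1)   # log-uniform learning rate
        w = rng.uniform(0.0, 1.0)        # reward weight
        cost = train_agent(lr, w, seed=trial)
        if best is None or cost < best[0]:
            best = (cost, lr, w)
    return best

cost, lr, w = outer_search()
print(f"best cost={cost:.3f} at lr={lr:.4f}, reward weight={w:.2f}")
```

Swapping the random sampler for a BO proposal step (e.g., expected improvement over a Gaussian-process surrogate) changes only the two sampling lines; the bi-level structure is unchanged.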
30 pages, 28242 KB  
Article
Generative Algorithms for Wildfire Progression Reconstruction from Multi-Modal Satellite Active Fire Measurements and Terrain Height
by Bryan Shaddy, Brianna Binder, Agnimitra Dasgupta, Haitong Qin, James Haley, Angel Farguell, Kyle Hilburn, Derek V. Mallia, Adam Kochanski, Jan Mandel and Assad A. Oberai
Remote Sens. 2026, 18(2), 227; https://doi.org/10.3390/rs18020227 - 10 Jan 2026
Abstract
Wildfire spread prediction models, including even the most sophisticated coupled atmosphere–wildfire models, diverge from observed wildfire progression during multi-day simulations, motivating the need for measurement-based assessments of wildfire state and improved data assimilation techniques. Data assimilation in the context of coupled atmosphere–wildfire models entails estimating wildfire progression history from observations and using this to obtain initial conditions for subsequent simulations through a spin-up process. In this study, an approach is developed for estimating fire progression history from VIIRS active fire measurements, GOES-derived ignition times, and terrain height data. The approach utilizes a conditional Wasserstein Generative Adversarial Network (cWGAN) trained on simulations of historic wildfires from the coupled atmosphere–wildfire model WRF-SFIRE, with corresponding measurements for training obtained through the application of an approximate observation operator. Once trained, the cWGAN leverages measurements of real fires and corresponding terrain data to probabilistically generate fire progression estimates that are consistent with the WRF-SFIRE solutions used for training. The approach is validated on five Pacific US wildfires, and results are compared against high-resolution perimeters measured via aircraft, finding an average Sørensen–Dice coefficient of 0.81. The influence of terrain data on fire progression estimates is also assessed, finding an increased contribution when measurements are uninformative. Full article
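The validation metric quoted above, the Sørensen–Dice coefficient, measures overlap between a generated fire perimeter and an aircraft-measured one: twice the intersection of the two burned areas divided by the sum of their sizes. A minimal sketch on flattened binary masks (the grid, variable names, and toy data are illustrative, not from the paper):

```python
def dice_coefficient(mask_a, mask_b):
    """Sørensen–Dice coefficient between two binary masks of equal length:
    2*|A ∩ B| / (|A| + |B|), with 1.0 when both masks are empty."""
    a = [bool(x) for x in mask_a]
    b = [bool(x) for x in mask_b]
    intersection = sum(1 for x, y in zip(a, b) if x and y)
    total = sum(a) + sum(b)
    return 2.0 * intersection / total if total else 1.0

# Example: predicted vs. observed burned cells on a flattened 3x3 grid
pred = [1, 1, 0, 1, 0, 0, 0, 0, 0]
obs  = [1, 1, 1, 0, 0, 0, 0, 0, 0]
print(dice_coefficient(pred, obs))  # 2*2 / (3+3) = 0.666...
```

A score of 1.0 means the two perimeters coincide exactly, 0.0 means no overlap; the paper's average of 0.81 over five fires indicates substantial agreement with the aircraft perimeters.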