Search Results (318)

Search Parameters:
Keywords = Bayesian calibration

29 pages, 8017 KB  
Article
Quantum-Inspired Variational Inference for Non-Convex Stochastic Optimization: A Unified Mathematical Framework with Convergence Guarantees and Applications to Machine Learning in Communication Networks
by Abrar S. Alhazmi
Mathematics 2026, 14(7), 1236; https://doi.org/10.3390/math14071236 - 7 Apr 2026
Abstract
Non-convex stochastic optimization presents fundamental mathematical challenges across machine learning, wireless networks, data center resource allocation, and optical wireless communication systems, where complex loss landscapes with multiple local minima and saddle points impede classical variational inference methods. This paper introduces the Quantum-Inspired Variational Inference (QIVI) framework, which systematically integrates quantum mechanical principles (superposition, entanglement, and measurement operators) into classical variational inference through rigorous mathematical formulations grounded in Hilbert space theory and operator algebras. We develop a unified optimization framework that encodes classical parameters as quantum-inspired states within finite-dimensional complex Hilbert spaces, employing unitary evolution operators and adaptive basis selection governed by gradient covariance eigendecomposition. The core mathematical contribution establishes that QIVI achieves a convergence rate of O(log²T/T^(1/2)) for σ-strongly non-convex functions, provably improving upon the classical O(T^(−1/4)) rate, yielding a theoretical speedup factor of 1.85–1.96×. Comprehensive experiments across synthetic benchmarks, Bayesian neural networks, and real-world applications in network optimization and financial portfolio management demonstrate 23–47% faster convergence, 15–35% superior objective values, and 28–46% improved uncertainty calibration. The principal contributions include: (i) a rigorous Hilbert space-based mathematical framework for quantum-inspired variational inference grounded in operator algebras, (ii) a novel hybrid quantum–classical algorithm (QIVI) with adaptive basis selection via gradient covariance eigendecomposition, (iii) formal convergence proofs establishing provable improvement over classical methods, (iv) comprehensive empirical validation across diverse problem domains relevant to machine learning and network optimization, and (v) demonstration of the framework’s applicability to optimization problems arising in wireless networks, data center resource allocation, and network system design. Statistical validation using the Friedman test (χ² = 847.3, p < 0.001) and post hoc Wilcoxon signed-rank tests with Holm–Bonferroni correction confirm that QIVI’s improvements over all baseline methods are statistically significant at the α = 0.05 level across all benchmark categories. The framework discovers 18.1 out of 20 true modes in multimodal distributions versus 9.1 for classical methods, demonstrating the potential of quantum-inspired optimization approaches for challenging stochastic problems arising in machine learning, wireless communication, and network optimization. Full article
48 pages, 4189 KB  
Systematic Review
Analytical Models to Optimize Tacrolimus Dosing in Solid Organ Transplantation: A Systematic Review
by Elmira Amooei, Nandini Biyani, Amos Buh, Martin M. Klamrowski, Nawaf M. Alyahya, Christopher R. McCudden, James R. Green, Babak Rashidi, Haya Almuzirai, Stephanie Hoar, Ayub Akbari, Gregory L. Hundemer and Ran Klein
Pharmaceutics 2026, 18(4), 430; https://doi.org/10.3390/pharmaceutics18040430 - 31 Mar 2026
Viewed by 393
Abstract
Background: Tacrolimus dose optimization remains challenging due to its narrow therapeutic range and multiple influencing variables. This systematic review aimed to identify effective analytical modeling techniques for optimal tacrolimus dose prediction in solid organ transplant recipients. Methods: Two independent researchers conducted a comprehensive review of studies examining analytical models that optimize tacrolimus dosing, searching Medline, Scopus, Embase, Web of Science, and PubMed. Results: In total, 115 studies met the inclusion criteria. Pharmacokinetic models (74 studies), particularly two-compartment models with Bayesian forecasting, were most frequently used. Machine learning (ML) approaches, with increasing adoption, have shown promising improvements in predictive accuracy. Key predictive variables included CYP3A5 genotype, hematocrit levels, post-operative days, and weight; however, the significance of genomic features seemed to diminish progressively as therapeutic drug monitoring calibrates dosing in the months following transplantation. Only ten studies performed external validation, and none incorporated adherence data or predicted long-term graft outcomes. Conclusions: Clinical deployment of predictive models for tacrolimus dosing remains uncommon. In research, pharmacokinetic models remain prevalent, with ML approaches showing early incremental promise. Limited external validation raises generalizability concerns. Future research should prioritize outcome-based evaluation metrics rather than error metrics. Full article
(This article belongs to the Section Physical Pharmacy and Formulation)
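
Illustrative sketch (not from the review): Bayesian forecasting for dose individualization typically means computing a MAP estimate of a patient's pharmacokinetic parameters given population priors and a measured trough concentration. The sketch below uses a deliberately simplified one-compartment model and hypothetical parameter values and units; the review reports that two-compartment models are most common in practice.

```python
# Hypothetical MAP-Bayesian forecasting sketch: estimate an individual clearance
# from one observed trough concentration, using a lognormal population prior and
# a one-compartment steady-state model. All numbers and units are illustrative.
import numpy as np
from scipy.optimize import minimize_scalar

def trough_conc(cl, dose=100.0, v=50.0, tau=12.0, t=12.0):
    """Steady-state concentration t hours after an IV bolus, one compartment."""
    k = cl / v
    return (dose / v) * np.exp(-k * t) / (1.0 - np.exp(-k * tau))

def neg_log_posterior(cl, observed=1.2, cl_pop=5.0, omega=0.3, sigma=0.2):
    """Lognormal prior on clearance plus Gaussian residual error on the trough."""
    prior = 0.5 * (np.log(cl / cl_pop) / omega) ** 2
    likelihood = 0.5 * ((observed - trough_conc(cl)) / sigma) ** 2
    return prior + likelihood

# MAP estimate of clearance given one measured trough; the individualized dose
# would then be chosen so the predicted trough falls in the target range.
map_cl = minimize_scalar(neg_log_posterior, bounds=(0.5, 50.0), method="bounded").x
```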

21 pages, 1195 KB  
Article
Interpretable Machine Learning to Predict Metformin-Induced Vitamin B12 Deficiency: Association with Glycemic Control and Neuropathic Symptoms
by Yasmine Salhi, Meriem Yazidi, Amine Dhraief, Elyes Kamoun, Melika Chihaoui, Tamim Alsuliman and Layth Sliman
Metabolites 2026, 16(4), 227; https://doi.org/10.3390/metabo16040227 - 30 Mar 2026
Viewed by 215
Abstract
Background/Objectives: Vitamin B12 deficiency is a common but often underdiagnosed complication in patients with type 2 diabetes (T2D) undergoing long-term metformin therapy. Accurate early prediction could enable targeted screening and timely intervention. This study aimed to develop and interpret a machine learning model for predicting vitamin B12 deficiency in metformin-treated patients with T2D, using eXtreme Gradient Boosting (XGBoost). Methods: A retrospective cross-sectional study was conducted at a single endocrinology centre (La Rabta University Hospital, Tunis, Tunisia). Patients with T2D treated with metformin for at least three years were included (n = 257); those with conditions independently affecting vitamin B12 metabolism were excluded. Vitamin B12 deficiency was defined as a serum B12 level below 150 pmol/L or a borderline level (150–221 pmol/L) with concurrent hyperhomocysteinemia (>15 μmol/L). XGBoost was selected after comparison with Logistic Regression (L2), Random Forest, and Support Vector Machine on the same 5-fold stratified cross-validated pipeline. Hyperparameters were optimized via Bayesian search (100 iterations × 5-fold stratified cross-validation), with the Matthews correlation coefficient (MCC) as the primary optimization metric to account for class imbalance. Model interpretability was achieved using SHapley Additive exPlanations (SHAP). Discrimination and calibration were assessed on an independent test set using bootstrap 95% confidence intervals (2000 resamples). Results: Of 257 patients, 95 (37.0%) presented with vitamin B12 deficiency. On the independent test set (n = 52), the optimized XGBoost model achieved an ROC-AUC of 0.671 [95% CI: 0.514–0.818], sensitivity of 0.737 [95% CI: 0.533–0.938], specificity of 0.545 [95% CI: 0.375–0.710], MCC of 0.273 [95% CI: 0.018–0.517], and a Brier Score of 0.259. SHAP analysis identified HbA1c, microalbuminuria, autonomic neuropathy, BMI, DN4 score, and fasting glucose as the most influential predictors. Nonlinear SHAP interaction plots revealed an increased predicted risk in patients with low HbA1c combined with a high cumulative metformin dose. Conclusions: The XGBoost–SHAP framework provided interpretable predictions of vitamin B12 deficiency in patients with T2D on metformin, identifying key clinical profiles for targeted screening. External multi-centre validation is required before clinical deployment. Full article
(This article belongs to the Special Issue Metabolic Dysfunction in Diabetic Neuropathy)
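
A minimal sketch of the stated pipeline, not the authors' code: Bayesian hyperparameter search over XGBoost with the Matthews correlation coefficient as the optimization metric, stratified 5-fold cross-validation, and SHAP for interpretation. The search ranges and any class-imbalance settings are hypothetical, and the example assumes xgboost, scikit-optimize, and shap are installed.

```python
# Sketch of the described pipeline: Bayesian search (100 iterations) over XGBoost
# hyperparameters with MCC scoring and stratified 5-fold CV, then SHAP values on
# a held-out set. Search ranges are hypothetical placeholders.
import xgboost as xgb
import shap
from skopt import BayesSearchCV
from skopt.space import Integer, Real
from sklearn.metrics import make_scorer, matthews_corrcoef
from sklearn.model_selection import StratifiedKFold

def fit_b12_model(X_train, y_train, X_test):
    search = BayesSearchCV(
        xgb.XGBClassifier(eval_metric="logloss"),
        search_spaces={
            "max_depth": Integer(2, 8),
            "learning_rate": Real(0.01, 0.3, prior="log-uniform"),
            "n_estimators": Integer(50, 500),
            "scale_pos_weight": Real(1.0, 3.0),   # mild imbalance handling
        },
        n_iter=100,
        cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
        scoring=make_scorer(matthews_corrcoef),
        random_state=0,
    )
    search.fit(X_train, y_train)
    explainer = shap.TreeExplainer(search.best_estimator_)   # feature attributions
    return search.best_estimator_, explainer.shap_values(X_test)
```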

25 pages, 3347 KB  
Article
Variational Bayesian-Based Reliability Evaluation of Nonlinear Structures by Active Learning Gaussian Process Modeling
by Wei-Chao Hou, Yu Xin, Ding-Tang Wang, Zuo-Cai Wang and Zong-Zu Liu
Infrastructures 2026, 11(4), 118; https://doi.org/10.3390/infrastructures11040118 - 27 Mar 2026
Viewed by 252
Abstract
In this study, variational Bayesian inference (VBI) with Gaussian mixture models is applied to update models of nonlinear structures, and then, the calibrated model is employed to estimate the failure probability of structures using a subset simulation (SS) algorithm. To improve the computation efficiency of probabilistic nonlinear model updating, a Gaussian Process (GP) model is used to construct a surrogate likelihood function in Bayesian inference using an active learning algorithm, and then, Gaussian mixture models (GMMs) are employed to approximate the unknown posterior probabilistic density functions (PDFs) of model parameters. The optimized hyperparameters of GMMs can be obtained by maximizing the evidence lower bound (ELBO), and the stochastic gradient search method is used to solve this optimization problem. Based on the optimized hyperparameters, the posterior distributions of model parameters can be approximated using a combination of multiple Gaussian components. Subsequently, the SS algorithm is used to calculate the earthquake-induced failure probability of structures based on the calibrated nonlinear model. To verify the feasibility and effectiveness of the proposed method, a numerical simulation of a two-span bridge structure subjected to seismic excitations was developed. Moreover, the proposed strategy is further applied to estimate the failure probability of a scaled monolithic column structure subjected to bi-directional earthquake excitations. Both numerical and experimental results indicate that the proposed method is feasible and effective for probabilistic nonlinear model updates, and the updated model can significantly enhance the accuracy of structural failure probability predictions. Full article
(This article belongs to the Section Infrastructures and Structural Engineering)
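
Illustrative sketch, not the authors' algorithm: a generic active-learning loop for a Gaussian Process surrogate that repeatedly queries the candidate point where the surrogate is most uncertain. The target function is a stand-in for an expensive likelihood or nonlinear model evaluation, and the acquisition rule in the paper may differ.

```python
# Generic active-learning loop for a GP surrogate: query where predictive
# uncertainty is largest, evaluate the expensive model there, refit. The target
# function below is a cheap stand-in for a costly simulation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_model(theta):                       # placeholder for a costly evaluation
    return np.sin(3.0 * theta) + 0.5 * theta**2

rng = np.random.default_rng(0)
candidates = np.linspace(-2.0, 2.0, 201).reshape(-1, 1)
X = rng.uniform(-2.0, 2.0, size=(5, 1))           # small initial design
y = expensive_model(X).ravel()

gp = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True)
for _ in range(15):                               # active-learning iterations
    gp.fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    x_new = candidates[np.argmax(std)]            # most uncertain candidate
    X = np.vstack([X, x_new])
    y = np.append(y, expensive_model(x_new)[0])
```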

27 pages, 6869 KB  
Article
Pedestrian Routing and Walkability Inference Using Realized WiFi Connectivity
by Tun Tun Win, Thanisorn Jundee and Santi Phithakkitnukoon
ISPRS Int. J. Geo-Inf. 2026, 15(3), 139; https://doi.org/10.3390/ijgi15030139 - 23 Mar 2026
Viewed by 830
Abstract
Traditional pedestrian routing algorithms typically minimize physical distance or travel time, often overlooking contextual factors that influence route choice in digitally connected environments. As public WiFi infrastructure becomes increasingly prevalent in smart-city districts and university campuses, digital connectivity may influence pedestrian mobility decisions. This study introduces P-WARP, a multi-factor routing and inference framework that reconstructs latent pedestrian preferences by integrating physical effort, environmental walkability, and WiFi connectivity within a unified semantic graph. The empirical analysis is conducted on the Chiang Mai University campus, a digitally connected environment serving as a smart campus testbed. The framework integrates heterogeneous spatial datasets, including OpenStreetMap topology, Shuttle Radar Topography Mission elevation data, environmental walkability grids, and WiFi roaming logs collected via a custom mobile sensing application from 21 volunteers across 71 verified walking trips. Two routing strategies are evaluated: a Global Static Model, representing infrastructure-based connectivity assumptions, and a Trip-Centric Dynamic Model, incorporating realized connectivity histories. Model parameters are calibrated using Bayesian Optimization with five-fold cross-validation. Results show that incorporating realized connectivity reduces trajectory reconstruction error by 6.84% relative to the baseline. The learned parameters reveal a notable detour tolerance, suggesting that stable digital connectivity can influence pedestrian route choice in digitally instrumented environments. Full article
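
A toy sketch of the multi-factor routing idea, not the P-WARP implementation: each edge gets a composite cost combining distance, walkability, and WiFi connectivity, and a standard shortest-path query then reflects the trade-off. The attribute names and the weights (beta_*) are hypothetical; the paper calibrates its parameters with Bayesian Optimization and five-fold cross-validation.

```python
# Toy multi-factor routing: fold distance, walkability and WiFi connectivity into
# one edge cost, then run a shortest-path query. Attribute names and weights are
# hypothetical placeholders.
import networkx as nx

G = nx.Graph()
G.add_edge("A", "B", distance=120, walkability=0.8, wifi=0.9)
G.add_edge("B", "C", distance=100, walkability=0.6, wifi=0.2)
G.add_edge("A", "C", distance=250, walkability=0.9, wifi=0.95)

beta_dist, beta_walk, beta_wifi = 1.0, 50.0, 80.0
for u, v, d in G.edges(data=True):
    # Higher walkability and better connectivity lower an edge's cost.
    d["cost"] = (beta_dist * d["distance"]
                 + beta_walk * (1.0 - d["walkability"])
                 + beta_wifi * (1.0 - d["wifi"]))

print(nx.shortest_path(G, "A", "C", weight="cost"))
```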

33 pages, 662 KB  
Article
The Asymmetric Bimodal Normal Distribution: A Tractable Mixture Model for Skewed and Bimodal Data
by Hassan S. Bakouch, Hugo S. Salinas, Çağatay Çetinkaya, Shaykhah Aldossari, Amira F. Daghestani and John L. Santibáñez
Mathematics 2026, 14(5), 901; https://doi.org/10.3390/math14050901 - 6 Mar 2026
Viewed by 368
Abstract
We study a parsimonious constrained two-component Gaussian mixture with symmetric locations ±λ and unequal weights controlled by α ∈ [−1, 1]; we refer to this family as the asymmetric bimodal normal. The constraint eliminates label switching and yields an identifiable parametrization for λ > 0, while noting the boundary degeneracy at λ = 0 where α is not identifiable. We derive closed-form analytical expressions for the density and distribution functions, an equivalent constructive representation (useful for simulation and interpretation), explicit moment formulas, and conditions distinguishing unimodality from bimodality. For inference, we develop maximum likelihood estimation with observed information standard errors and provide numerically stable fits via a block-coordinate quasi-Newton routine using method of moments initial values. A Monte Carlo simulation study across representative parameter settings evaluates bias and root mean squared error, and examines the behavior of Hessian-based standard error estimates, highlighting regimes where the observed information becomes ill-conditioned under weak separation. Empirical analyses (chemical calibration deviations from the National Institute of Standards and Technology and a regression example with asymmetric errors) show competitive or superior fit and interpretability relative to skewed normal alternatives, asymmetric Laplace models, and unconstrained Gaussian mixtures, with consistent advantages under model comparison using the Akaike information criterion and the Bayesian information criterion. Full article
(This article belongs to the Special Issue Computational Statistics and Data Analysis, 3rd Edition)
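
A small sketch of one natural way to write such a constrained mixture density. The weight parameterization (1 ± α)/2 is an illustrative assumption consistent with α ∈ [−1, 1] and may differ from the paper's exact form.

```python
# Two-component normal mixture with symmetric locations ±lam and weights driven
# by alpha in [-1, 1]. The (1 ± alpha)/2 weight form is an assumption for
# illustration; the paper's parameterization may differ.
import numpy as np
from scipy.stats import norm

def abn_pdf(x, lam, alpha, sigma=1.0):
    w_right = (1.0 + alpha) / 2.0      # alpha > 0 puts more mass on the +lam mode
    w_left = (1.0 - alpha) / 2.0
    return (w_left * norm.pdf(x, loc=-lam, scale=sigma)
            + w_right * norm.pdf(x, loc=lam, scale=sigma))

# Unequal mode heights for a skewed, bimodal shape.
print(abn_pdf(np.array([-2.0, 2.0]), lam=2.0, alpha=0.4))
```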

82 pages, 6468 KB  
Article
Correction Functions and Refinement Algorithms for Enhancing the Performance of Machine Learning Models
by Attila Kovács, Judit Kovácsné Molnár and Károly Jármai
Automation 2026, 7(2), 45; https://doi.org/10.3390/automation7020045 - 6 Mar 2026
Viewed by 693
Abstract
The aim of this study is to investigate and demonstrate the role of correction functions and optimisation-based refinement algorithms in enhancing the performance of machine learning models, particularly in predictive anomaly detection tasks applied in industrial environments. The performance of machine learning models is highly dependent on the quality of data preprocessing, model architecture, and post-processing methodology. In many practical applications, particularly in time-series forecasting and anomaly detection, the conventional training pipeline alone is insufficient, because model uncertainty, structural bias and the handling of rare events require specialised post hoc calibration and refinement mechanisms. This study provides a systematic overview of the role of correction functions (e.g., Principal Component Analysis (PCA), Squared Prediction Error (SPE)/Q-statistics, Hotelling’s T², Bayesian calibration) and adaptive improvement algorithms (e.g., Genetic Algorithms (GA), Particle Swarm Optimisation (PSO), Simulated Annealing (SA), Gaussian Mixture Model (GMM) and ensemble-based techniques) in enhancing the performance of machine learning pipelines. The models were trained on a real industrial dataset compiled from power network analytics and harmonic-injection-based loading conditions. Model validation and equipment-level testing were performed using a large-scale harmonic measurement dataset collected over a five-year period. The reliability of the approach was confirmed by comparing predicted state transitions with actual fault occurrences, demonstrating its practical applicability and suitability for integration into predictive maintenance frameworks. The analysis demonstrates that correction functions introduce deterministic transformations in the data or error space, whereas improvement algorithms apply adaptive optimisation to fine-tune model parameters or decision boundaries. The combined use of these approaches significantly reduces overfitting, improves predictive accuracy and lowers false alarm rates. This work introduces the concept of an Organically Adaptive Predictive (OAP) ML model. The proposed model presents organic adaptivity, continuously adjusting its predictive behaviour in response to dynamic variations in network loading and harmonic spectrum composition. The introduced terminology characterises the organically emergent nature of the adaptive learning mechanism. Full article
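
For readers unfamiliar with the two PCA-based correction statistics named above, the sketch below shows the standard construction of Hotelling's T² (in the retained principal-component subspace) and the SPE/Q-statistic (in the residual subspace). It is a generic illustration, not the authors' pipeline, and control limits are omitted.

```python
# Standard PCA monitoring statistics: Hotelling's T^2 in the retained subspace
# and the squared prediction error (SPE / Q-statistic) in the residual subspace.
# Control limits and the industrial data pipeline are out of scope here.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def pca_monitoring_stats(X_train, X_new, n_components=3):
    scaler = StandardScaler().fit(X_train)
    pca = PCA(n_components=n_components).fit(scaler.transform(X_train))

    Z = scaler.transform(X_new)
    scores = pca.transform(Z)                                   # PC projections
    t2 = np.sum(scores**2 / pca.explained_variance_, axis=1)    # Hotelling's T^2
    residual = Z - pca.inverse_transform(scores)
    spe = np.sum(residual**2, axis=1)                           # SPE / Q-statistic
    return t2, spe
```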

20 pages, 2315 KB  
Article
A Context-Aware Framework for Sentiment Analysis of Student Feedback to Inform Educational Strategies in Latin America
by Anabel Pineda-Briseño, Jimy Oblitas Cruz, Laura Cleofas Sánchez, Wendy Sanchez and Rosario Baltazar
Educ. Sci. 2026, 16(3), 399; https://doi.org/10.3390/educsci16030399 - 5 Mar 2026
Viewed by 373
Abstract
Understanding student feedback is essential for informing pedagogical strategies and institutional decision-making in higher education. Sentiment analysis offers scalable mechanisms for extracting insights from open-ended student evaluations; however, many existing approaches prioritize technical performance without sufficient consideration of contextual and institutional constraints, particularly in underrepresented regions. This study proposes a context-aware framework for sentiment analysis of student feedback, designed to support educational decision-making within Latin American universities. Rather than introducing new algorithms, the framework systematically evaluates established machine learning and deep learning models through a multi-phase process that includes data preprocessing, Bayesian optimization, threshold calibration, and class balancing. The framework is validated using authentic Spanish-language student feedback collected from a public university in Peru. Experimental results indicate that while advanced models can achieve strong predictive performance, simpler and more interpretable approaches often provide comparable institutional value when deployment feasibility, computational efficiency, and transparency are considered. These findings highlight that marginal performance gains do not necessarily translate into meaningful advantages for routine educational use. Overall, this work contributes a replicable and resource-sensitive framework that bridges learning analytics research and practical educational application. By prioritizing contextual suitability and interpretability, the proposed approach enables higher education institutions to leverage student sentiment data as an actionable input for continuous improvement and evidence-based educational strategies. Full article
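
To illustrate just the threshold-calibration step mentioned in the abstract (not the authors' code), the sketch below sweeps decision thresholds over held-out predicted probabilities for a binary positive/negative split and keeps the one that maximizes F1. Preprocessing, Bayesian optimization, and class balancing are out of scope here.

```python
# Threshold calibration only: sweep candidate thresholds on validation
# probabilities and keep the F1-maximizing one (binary case).
import numpy as np
from sklearn.metrics import f1_score

def calibrate_threshold(y_val, proba_val):
    thresholds = np.linspace(0.05, 0.95, 91)
    scores = [f1_score(y_val, (proba_val >= t).astype(int)) for t in thresholds]
    return thresholds[int(np.argmax(scores))]

# Usage: threshold = calibrate_threshold(y_val, model.predict_proba(X_val)[:, 1])
```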

16 pages, 3347 KB  
Article
Design and Validation of a Multimodal Environmental Monitoring System Based on Sensors and Artificial Intelligence
by Yu Fang and Mingjun Xin
Electronics 2026, 15(5), 1051; https://doi.org/10.3390/electronics15051051 - 3 Mar 2026
Viewed by 423
Abstract
Reliable and real-time environmental monitoring is essential for controlling pollution and protecting public health. However, conventional station-based measurements are expensive and often lack spatial and temporal resolution. This paper proposes a low-cost multimodal environmental monitoring system. Experiments verified that thin-film thermocouples exhibit near-linear voltage–temperature characteristics (R² > 0.99). Integration of the AI data pipeline substantially enhances monitoring accuracy: the proposed fusion strategy reduces relative error to approximately 2.3% under typical noise conditions, with a correlation coefficient of 0.79 between predicted and observed PM2.5 values. This research provides a scalable blueprint for edge-deployable environmental monitoring. A thin-film thermocouple with a fast response time is used as a temperature sensor and is statically calibrated against a K-type reference. To improve dynamic tracking and reduce measurement noise, a Kalman filter-based fusion strategy is employed, which is then compared with weighted averaging and Bayesian fusion. Simulation-driven validation is performed for thermocouple linearity, PID-based temperature control, micro-signal filtering and system-level latency and robustness. The results demonstrate that thin-film thermocouples exhibit near-linear voltage–temperature characteristics (R² > 0.99) with Seebeck coefficients ranging from 40.92 to 42.08 μV/°C, close to the theoretical K-type value of 42.87 μV/°C. The proposed fusion strategy reduces relative error to ~2.3% under typical noise conditions, enabling stable, real-time processing with near-second latency for 10,000-point batches. This study summarizes the design considerations for selecting and calibrating sensors and for achieving AI robustness in the presence of drift and faults. It provides a scalable blueprint for edge-deployable environmental monitoring. Full article
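
As a concrete illustration of the Kalman-filter-based fusion idea (a minimal sketch, not the paper's implementation), the code below filters a slowly varying temperature signal with a scalar random-walk Kalman filter. The noise variances are placeholder values; the paper additionally compares weighted averaging and Bayesian fusion.

```python
# Minimal scalar Kalman filter for denoising a slowly varying temperature signal
# (random-walk state model). Process and measurement variances are placeholders.
import numpy as np

def kalman_smooth(measurements, process_var=1e-3, meas_var=0.05):
    x, p = measurements[0], 1.0           # initial state estimate and variance
    estimates = []
    for z in measurements:
        p = p + process_var               # predict step (random-walk model)
        k = p / (p + meas_var)            # Kalman gain
        x = x + k * (z - x)               # update with the new measurement
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(1)
true_temp = 25.0 + 0.01 * np.arange(500)
noisy = true_temp + rng.normal(0.0, 0.3, size=500)
filtered = kalman_smooth(noisy)
```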

29 pages, 1017 KB  
Article
Bayesian Elastic Net Cox Models for Time-to-Event Prediction: Application to a Breast Cancer Cohort
by Ersin Yılmaz, Syed Ejaz Ahmed and Dursun Aydın
Entropy 2026, 28(3), 264; https://doi.org/10.3390/e28030264 - 27 Feb 2026
Viewed by 391
Abstract
High-dimensional survival analyses require calibrated risk and measurable uncertainty, but standard elastic net Cox models provide only point estimates. We develop a Bayesian elastic net Cox (BEN–Cox) model for high-dimensional proportional hazards regression that places a hierarchical global–local shrinkage prior on coefficients and performs full Bayesian inference via Hamiltonian Monte Carlo. We represent the elastic net penalty as a global–local Gaussian scale mixture with hyperpriors that learn the ℓ1/ℓ2 trade-off, enabling adaptive sparsity that preserves correlated gene groups; using HMC with the Cox partial likelihood, we obtain full posterior distributions for hazard ratios and patient-level survival curves. Methodologically, we formalize a Bayesian analogue of the elastic net grouping effect at the posterior mode and establish posterior contraction under sparsity for the Cox partial likelihood, supporting the stability of the resulting risk scores. On the METABRIC breast cancer cohort (n = 1903; p = 440 gene-level features after preprocessing, derived from an Illumina HT-12 array with ≈24,000 probes at the raw feature level), BEN–Cox achieves slightly lower prediction error, higher discrimination, and better global calibration than tuned ridge Cox, lasso Cox, and elastic net Cox baselines on a held-out test set. Posterior summaries provide credible intervals for hazard ratios and identify a compact gene panel that remains biologically plausible. BEN–Cox provides an uncertainty-aware alternative to tuned penalized Cox models with theoretical support, offering modest improvements in calibration and providing an interpretable sparse signature in highly correlated survival data. Full article
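
To make the ℓ1/ℓ2 trade-off concrete, the block below states the standard correspondence between the elastic net penalty and a Bayesian prior whose posterior mode matches the penalized estimate. The paper's hierarchical global–local scale-mixture parameterization may differ in its exact form; this is only the textbook relationship.

```latex
% Frequentist elastic net Cox: penalized log partial likelihood
\hat{\beta} = \arg\max_{\beta}\;
  \ell_{\mathrm{Cox}}(\beta) - \lambda_1 \lVert \beta \rVert_1
  - \lambda_2 \lVert \beta \rVert_2^2 .
% Bayesian analogue: the penalty corresponds (up to normalization) to the prior
p(\beta \mid \lambda_1, \lambda_2) \propto
  \exp\!\bigl( -\lambda_1 \lVert \beta \rVert_1
               - \lambda_2 \lVert \beta \rVert_2^2 \bigr),
% so the posterior mode under this prior equals the elastic net estimate, and
% hyperpriors on (\lambda_1, \lambda_2) let the data learn the l1/l2 trade-off.
```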

34 pages, 2342 KB  
Article
Spatial Densification of Coastal Sea Surface Temperature and Chlorophyll via Bayesian Kriging
by Andronis Vassilis and Karathanassi Vassilia
Remote Sens. 2026, 18(5), 675; https://doi.org/10.3390/rs18050675 - 24 Feb 2026
Viewed by 265
Abstract
In many environmental applications, high-quality measurements are too sparse to resolve the small-scale patterns required for process understanding and management. We investigate a Bayesian kriging (BK) framework that densifies sparse coastal observations into high-resolution gridded fields with calibrated uncertainty. Two pilot sites are considered: (i) sea surface temperature (SST) in the Algarve (Portugal), where point measurements (~10 km spacing) are reconstructed on a 500 m grid, and (ii) chlorophyll (Chl) in the La Spezia embayment (Italy), where in situ supported fields are reconstructed at 30 m. The variogram parameters are treated as random variables with weakly informative priors and inferred via MCMC, so that both measurement noise and structural (variogram) uncertainty are propagated to predictions, yielding posterior means and 95% prediction intervals per grid cell. Independent repeated 80/20 cross validation demonstrates robust out-of-sample skill in both sites. For Algarve, the BK maps recover fine-scale thermal structure while preserving defensible uncertainty under severe sparsity. For La Spezia, the same framework resolves estuarine gradients at 30 m. Credible intervals widen away from observations yet remain sufficiently narrow elsewhere to guide interpretation. Satellite products are used strictly for validation on a common grid (MUR SST at 1 km resampled to 500 m, Landsat OC3 Chl at 30 m), confirming spatial fidelity and clarifying seasonal differences. Overall, the approach produces uncertainty-aware, high-resolution coastal fields from heterogeneous, sparse records, supporting reproducible EO analyses and risk-aware coastal monitoring. Full article
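
For orientation, the sketch below is classical simple kriging with a fixed exponential covariance, showing how a predictive mean and variance arise at an unsampled location. The Bayesian kriging in the paper goes further by placing priors on the variogram parameters and propagating that uncertainty through MCMC; all parameter values here are placeholders.

```python
# Classical simple kriging with a fixed exponential covariance (placeholder
# parameters). Bayesian kriging additionally treats sill/range/nugget as random
# and propagates their uncertainty; that step is omitted here.
import numpy as np

def simple_kriging(coords, values, target, sill=1.0, length=5.0, nugget=0.05, mean=0.0):
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    C = sill * np.exp(-d / length) + nugget * np.eye(len(coords))   # data covariance
    c = sill * np.exp(-np.linalg.norm(coords - target, axis=1) / length)
    w = np.linalg.solve(C, c)                                       # kriging weights
    pred_mean = mean + w @ (values - mean)
    pred_var = sill + nugget - w @ c                                # predictive variance
    return pred_mean, pred_var

coords = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])           # station locations [km]
sst = np.array([17.2, 17.8, 16.9])                                  # observed SST [degC]
mu, var = simple_kriging(coords, sst, target=np.array([4.0, 3.0]), mean=sst.mean())
# An approximate 95% interval at the target is mu +/- 1.96 * sqrt(var).
```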

16 pages, 854 KB  
Article
A Unified Comparative Evaluation of Genomic Prediction Models Across Four Aquaculture Species
by Jinxin Zhang, Xiaofei Yang, Wei Wang, Hongxia Hu, Shaogang Xu and Hailiang Song
Fishes 2026, 11(2), 115; https://doi.org/10.3390/fishes11020115 - 12 Feb 2026
Viewed by 502
Abstract
Genomic prediction has been increasingly applied in aquaculture selective breeding; however, systematic evaluations of prediction accuracy across multiple aquaculture species and analytical methods under a unified and comparable framework remain limited. In this study, we conducted a comprehensive comparative assessment of genomic prediction performance across four representative aquaculture species, including Atlantic salmon (Salmo salar), gilthead sea bream (Sparus aurata), common carp (Cyprinus carpio), and rainbow trout (Oncorhynchus mykiss), using ten genomic prediction models including GBLUP, Bayesian, and machine learning methods. Prediction accuracy varied widely among species and models, ranging from 0.49 to 0.85, and was strongly associated with trait heritability. High-heritability traits consistently achieved higher prediction accuracies, with rainbow trout and common carp exhibiting the best overall performance (0.75–0.83 and 0.73–0.85, respectively), whereas Atlantic salmon and gilthead sea bream showed lower and more variable accuracies (0.49–0.61 and 0.49–0.66). No single model performed optimally across all species. Machine learning-based approaches achieved the highest prediction accuracy in specific cases but exhibited pronounced species-dependent variability, while GBLUP provided stable and well-calibrated predictions with consistently low bias. Incremental SNP feature selection further improved prediction accuracy by 2.8–4.2% in three species using only 0.54–9.64% of the available markers, whereas no improvement was observed for a low-heritability trait. These results show that genomic prediction performance is highly context-dependent and underscore the importance of jointly considering trait genetic architecture, population characteristics, model choice, and marker selection when optimizing genomic selection strategies in aquaculture breeding programs. Full article
(This article belongs to the Special Issue Functional Gene Analysis and Genomic Technologies in Aquatic Animals)
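
A compact, generic GBLUP sketch for orientation (not the study's code): a VanRaden genomic relationship matrix followed by BLUP of genetic values with assumed variance components. The study compares ten models and estimates variance components properly; that machinery, and everything species-specific, is omitted here.

```python
# Generic GBLUP sketch: VanRaden GRM plus BLUP of genetic values for
# y = mu + g + e with g ~ N(0, G * sig_g^2), using an assumed heritability.
import numpy as np

def vanraden_grm(M):
    """M: genotypes coded 0/1/2, shape (n_individuals, n_snps)."""
    p = M.mean(axis=0) / 2.0
    Z = M - 2.0 * p
    return Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))

def gblup(G, y, h2=0.4):
    var_y = np.var(y)
    sig_g, sig_e = h2 * var_y, (1.0 - h2) * var_y
    V = sig_g * G + sig_e * np.eye(len(y))
    return sig_g * G @ np.linalg.solve(V, y - y.mean())   # predicted genetic values

rng = np.random.default_rng(2)
M = rng.integers(0, 3, size=(200, 1000)).astype(float)    # toy genotypes
y = rng.normal(size=200)                                   # toy phenotypes
ghat = gblup(vanraden_grm(M), y)
```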

17 pages, 608 KB  
Article
Physics-Informed Bayesian Inference for Virtual Testing and Prediction of Train Performance
by Kian Sepahvand, Christoph Schwarz, Oliver Urspruch and Frank Guenther
Machines 2026, 14(2), 211; https://doi.org/10.3390/machines14020211 - 11 Feb 2026
Viewed by 383
Abstract
This paper proposes a physics-informed Bayesian framework for virtual testing and predictive modeling of train performance, specifically addressing stopping-distance prediction. The approach unifies physical simulation models with data-driven statistical inference to achieve uncertainty-aware predictions under limited or noisy measurements. By embedding governing equations of motion into a hierarchical Bayesian structure, the method systematically accounts for both model-form and data uncertainty, allowing explicit decomposition into aleatoric and epistemic components. A Gaussian process surrogate is employed to efficiently emulate high-fidelity physics simulations while preserving key dynamic behaviors and parameter sensitivities. The Bayesian formulation enables probabilistic calibration and validation, providing predictive distributions and confidence bounds. As a representative application, the framework is applied to the virtual prediction of train stopping distances, demonstrating how the proposed methodology captures nonlinear braking dynamics and quantifies uncertainty in safety-relevant performance metrics directly compatible with statistical verification standards such as EN 16834. The results confirm that the physics-informed Bayesian approach enables accurate, interpretable, and standards-aligned virtual testing across a wide range of dynamical systems. Full article
(This article belongs to the Special Issue Artificial Intelligence in Rail Transportation)
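
A toy illustration of physics-informed Bayesian calibration in this setting (not the paper's framework): a simple stopping-distance model d = v·t_r + v²/(2a) is calibrated against noisy test measurements via a grid posterior over the braking deceleration. The paper wraps a higher-fidelity simulator in a Gaussian process surrogate and uses a hierarchical formulation; all numbers below are hypothetical.

```python
# Toy grid-based Bayesian calibration of a braking deceleration parameter in a
# reaction-plus-braking stopping-distance model. All values are hypothetical.
import numpy as np
from scipy.stats import norm

def stopping_distance(v, a, t_react=1.0):
    return v * t_react + v**2 / (2.0 * a)          # reaction + braking phases

v_tests = np.array([20.0, 25.0, 30.0])             # test speeds [m/s]
d_obs = np.array([220.0, 330.0, 460.0])            # noisy measured distances [m]

a_grid = np.linspace(0.5, 2.0, 301)                # candidate decelerations [m/s^2]
log_prior = norm.logpdf(a_grid, loc=1.2, scale=0.3)
log_lik = np.array([
    norm.logpdf(d_obs, loc=stopping_distance(v_tests, a), scale=15.0).sum()
    for a in a_grid
])
log_post = log_prior + log_lik
post = np.exp(log_post - log_post.max())
post /= post.sum()
a_mean = np.sum(a_grid * post)                     # posterior mean deceleration
```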

24 pages, 2846 KB  
Article
Efficient Hierarchical Latent Gaussian Models for Heterogeneous and Skewed IoT Reliability Data
by Adrian Dudek and Jerzy Baranowski
Symmetry 2026, 18(2), 325; https://doi.org/10.3390/sym18020325 - 11 Feb 2026
Viewed by 453
Abstract
The reliability of Internet of Things systems is critical for industrial applications; however, operational reliability data are often heterogeneous and strongly right-skewed, exhibiting non-Gaussian behaviour, overdispersion, and production-level variability that challenge classical predictive maintenance models. Existing approaches frequently rely on pooled assumptions or simplified error structures, limiting their ability to identify latent batch-level degradation and to jointly interpret discrete failure events and continuous lifetime information. To address these limitations, this study proposes a hierarchical Bayesian framework based on Integrated Nested Laplace Approximation (INLA) to jointly model discrete reset counts and continuous failure times. Three Latent Gaussian Models are evaluated—ranging from pooled baseline specifications to a fully joint model with shared latent batch effects—using a synthetic dataset designed to mimic realistic industrial fault patterns. The analysis demonstrates that standard pooled models fail to capture the degradation dynamics of defective device batches. In contrast, the hierarchical joint model successfully recovers latent quality variations, accurately links high reset intensity with shortened lifetimes, and substantially improves model fit, achieving a DIC reduction of over 67% compared to baseline approaches. INLA provides a computationally efficient and rigorously calibrated alternative to MCMC-based methods for modelling skewed and heterogeneous reliability data. The proposed framework enables reliable identification of defective production batches and robust uncertainty quantification, offering a practical tool for data-driven predictive maintenance in Industry 4.0. Future work will focus on validating the proposed framework using real industrial IoT datasets. Full article
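
The paper fits Latent Gaussian Models with INLA (typically via R-INLA). Purely to illustrate the shared latent batch-effect structure that links reset counts and lifetimes, the sketch below writes an analogous joint model in PyMC with MCMC, which is a different inference engine than INLA; the priors, names, and likelihood choices are hypothetical.

```python
# Illustration of a joint model with a shared latent batch effect: reset counts
# (Poisson) and failure times (Exponential) both depend on the same batch-level
# latent variable. This uses MCMC via PyMC, not INLA as in the paper.
import pymc as pm

def fit_joint_model(reset_counts, lifetimes, batch_idx, n_batches):
    with pm.Model():
        batch = pm.Normal("batch_effect", 0.0, 1.0, shape=n_batches)  # shared latent effect
        a0 = pm.Normal("a0", 0.0, 2.0)
        b0 = pm.Normal("b0", 0.0, 2.0)

        # Degraded batches should show more resets and shorter lifetimes.
        pm.Poisson("resets", mu=pm.math.exp(a0 + batch[batch_idx]),
                   observed=reset_counts)
        pm.Exponential("lifetime", lam=pm.math.exp(b0 + batch[batch_idx]),
                       observed=lifetimes)
        return pm.sample(1000, tune=1000, target_accept=0.9)
```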

24 pages, 8773 KB  
Article
Soil Displacement Estimation from Integrated Sensing Technologies in Data-Driven Models Biased by Temporal Coherence of PS-InSAR
by Raffaele Tarantini, Gaetano Miraglia, Stefania Coccimiglio, Rosario Ceravolo and Giuseppe Andrea Ferro
Land 2026, 15(2), 296; https://doi.org/10.3390/land15020296 - 10 Feb 2026
Viewed by 456
Abstract
Spaceborne Synthetic Aperture Radar (SAR) interferometry provides long-term displacement measurements, but the quality of Persistent Scatterer (PS) time series depends critically on temporal coherence. Low-coherence points often exhibit auto-uncorrelated behaviours, which may be relevant to discriminate fast phenomena. This work introduces a coherence-based framework that identifies the coherence threshold beyond which PS displacement series retain sufficient reliability to support modelling. The threshold is estimated by analysing how data uncertainty, inferred through Sparse Bayesian Learning (SBL) techniques, varies with coherence and by detecting abrupt changes in this relationship. Once the optimal threshold is established, only the most reliable PS are used to train an SBL regression model linking satellite line-of-sight displacement to soil temperature and surface humidity measured by a low-cost ground sensor. PS-Interferometric SAR (PS-InSAR) time series are derived from COSMO-SkyMed raw images. The SBL model employs compressive-sensing principles and latent-parameter dictionaries of basis functions, whose latent parameters are calibrated through a constrained multi-start optimisation of a normalised residual-based objective function, regularised by a sub-validation dataset. In this work, it is shown that the trained model enables temporally denser reconstruction of displacement histories than the satellite revisit cycle allows and enables continuous soil monitoring by comparing model predictions with newly acquired PS-InSAR data. Full article
(This article belongs to the Special Issue Ground Deformation Monitoring via Remote Sensing Time Series Data)
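
As a compact stand-in for the Sparse Bayesian Learning regression step (not the paper's dictionary-based formulation), scikit-learn's ARDRegression yields sparse weights and predictive uncertainty when relating line-of-sight displacement to soil temperature and humidity. The feature construction below is a hypothetical placeholder for the latent-parameter basis dictionaries and multi-start optimisation described in the abstract.

```python
# Stand-in for SBL regression: ARD (automatic relevance determination) linear
# regression with a simple polynomial/interaction basis. The paper's SBL uses
# latent-parameter dictionaries and multi-start optimisation, omitted here.
import numpy as np
from sklearn.linear_model import ARDRegression

def fit_sbl(temperature, humidity, displacement):
    X = np.column_stack([temperature, humidity,
                         temperature**2, humidity**2,
                         temperature * humidity])        # placeholder basis
    model = ARDRegression().fit(X, displacement)
    mean, std = model.predict(X, return_std=True)        # reconstruction + uncertainty
    return model, mean, std
```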