Search Results (499)

Search Parameters:
Keywords = Bayes estimator

17 pages, 329 KB  
Article
The New Polynomial Single Parameter Distribution: Properties, Bayesian and Non-Bayesian Inference with Real-Data Applications
by Meriem Keddali, Hamida Talhi, Mohammed Amine Meraou and Ali Slimani
AppliedMath 2026, 6(4), 60; https://doi.org/10.3390/appliedmath6040060 - 10 Apr 2026
Abstract
A novel, flexible single-parameter polynomial distribution is presented in this study. The forms of its hazard rate and density functions are examined, and exact formulas for a number of numerical characteristics of the distribution are obtained. Stochastic ordering, the moment technique, maximum likelihood, and a Bayesian analysis of this novel distribution based on type-II censored data are used, and the extreme order statistics are derived. We construct Bayes estimators and the associated posterior risks under a variety of loss functions, such as the generalized quadratic, entropy, and LINEX functions. Since tractable analytical formulations of these estimators are unattainable, we suggest a simulation technique based on Markov chain Monte Carlo (MCMC) to examine their performance. Furthermore, we construct maximum likelihood estimators given initial values for the model’s parameters, and we use the integrated mean square error and Pitman’s proximity criterion to compare their performance with that of the Bayesian estimators. Lastly, we apply the new family to several real-world datasets to show its versatility, and we model cancer survival data with the new distribution to illustrate our methodology.
(This article belongs to the Special Issue Large Language Models and Applications)
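The abstract above notes that Bayes estimators under the generalized quadratic, entropy, and LINEX losses lack closed forms and are evaluated from MCMC output. As a hedged illustration (not the authors' code), the sketch below shows how each loss function turns posterior draws into a different point estimate; the Gamma "posterior" here is a purely hypothetical stand-in for draws an MCMC sampler would produce.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in posterior draws for a positive parameter theta
# (a Gamma posterior is assumed only for illustration).
theta = rng.gamma(shape=5.0, scale=0.4, size=100_000)

def bayes_sel(draws):
    """Bayes estimator under squared error loss: the posterior mean."""
    return draws.mean()

def bayes_linex(draws, a):
    """Bayes estimator under LINEX loss: -(1/a) * log E[exp(-a * theta)]."""
    return -np.log(np.mean(np.exp(-a * draws))) / a

def bayes_entropy(draws):
    """Bayes estimator under entropy loss: inverse of the posterior mean of 1/theta."""
    return 1.0 / np.mean(1.0 / draws)

print(bayes_sel(theta), bayes_linex(theta, a=1.0), bayes_entropy(theta))
```

By Jensen's inequality, the LINEX (with a > 0) and entropy estimates always sit below the posterior mean, which is how these losses penalize overestimation more heavily.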
27 pages, 2527 KB  
Article
Integrating Genetic Mapping and Genomic Prediction to Elucidate the Genetic Architecture of Fusarium Ear Rot Resistance in Tropical Maize
by Jianfei Yang, Yubo Liu, Carlos Muñoz-Zavala, Hongjian Zheng, Thanda Dhliwayo, Felix San Vicente, Guanghui Hu, Xuecai Zhang and Xiaoli Sun
Agronomy 2026, 16(7), 719; https://doi.org/10.3390/agronomy16070719 - 30 Mar 2026
Abstract
Fusarium ear rot (FER) caused by Fusarium verticillioides is a major constraint on global maize production. The genetic basis of FER resistance is not yet fully understood, and the development of effective breeding strategies for improving FER resistance remains a critical priority. In the present study, a collection of 254 CIMMYT tropical maize lines genotyped with 955,690 high-quality SNPs was used to conduct genome-wide association studies (GWAS), complemented by quantitative trait locus (QTL) mapping in two recombinant inbred line (RIL) populations. Additionally, genomic prediction (GP) exploring various statistical models and SNP selection schemes was implemented to optimize predictive accuracy for improving FER resistance. The broad-sense heritability estimates of FER resistance were 0.69–0.86 in the CML panel across six environments and 0.39–0.69 in the two RIL populations. At a p-value threshold of 2.61 × 10⁻⁷, GWAS identified 18 SNPs significantly associated with FER resistance across six environments; in single-environment analyses, their phenotypic variance explained (PVE) values ranged from 0.68% to 13.75%, with 13 SNPs exceeding a PVE of 5%. At a p-value threshold of 1 × 10⁻⁵, an additional 37 SNPs were detected, clustering within seven environmentally stable regions identified in at least two environments. Furthermore, 13 haplotype blocks exhibiting significant phenotypic differences were identified within these stable regions, with PVE values ranging from 2.39% to 15.24%, nine of which exceeded 5%. QTL mapping in the two RIL populations revealed 27 moderate-effect QTLs at a LOD threshold of 2.5, including four detected repeatedly across environments, though only one QTL overlapped with a GWAS-identified region. Moderate genomic prediction accuracies for FER severity were achieved across models, with GBLUP and BayesB outperforming the others; the prediction accuracies of these two models in the three populations were all around 0.5. Integrating the significant SNP set from the genetic mapping results with a 100-SNP background set enhanced the stability of cross-population predictions. These results imply that FER resistance in tropical maize is controlled by multiple genomic regions with small-to-moderate genetic effects, whereas the consistency of genomic regions detected by GWAS and QTL mapping is low. Genomic prediction incorporating regions identified across different genetic backgrounds emerges as a promising tool for accelerating FER resistance breeding.
(This article belongs to the Special Issue Plant Stress Tolerance: From Genetic Mechanism to Cultivation Methods)
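As a minimal illustration of the single-SNP association scans behind a GWAS like the one above (not the authors' pipeline, which used 955,690 real SNPs across multiple environments), the sketch below regresses a simulated phenotype on each marker dosage and ranks SNPs by the slope's t statistic; all data, the marker count, and the causal-SNP index are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 500, 50                                         # individuals, SNPs (toy sizes)
geno = rng.binomial(2, 0.3, size=(n, m)).astype(float)  # allele dosages 0/1/2
beta_true = np.zeros(m)
beta_true[7] = 0.8                                     # one hypothetical causal SNP
pheno = geno @ beta_true + rng.normal(0.0, 1.0, n)

def snp_t_stat(x, y):
    """t statistic for the slope in a simple linear regression of y on x."""
    x = x - x.mean()
    y = y - y.mean()
    sxx = (x * x).sum()
    b = (x * y).sum() / sxx                     # least-squares slope
    resid = y - b * x
    s2 = (resid * resid).sum() / (len(y) - 2)   # residual variance
    return b / np.sqrt(s2 / sxx)

t_stats = np.array([snp_t_stat(geno[:, j], pheno) for j in range(m)])
print(int(np.abs(t_stats).argmax()))            # index of the strongest association
```

A real scan would convert these statistics to p-values and apply a genome-wide threshold, as the study does at 2.61 × 10⁻⁷.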
26 pages, 1092 KB  
Systematic Review
Screening and Prognostic Performance of Pre-Pregnancy BMI for Predicting Gestational Diabetes Mellitus in Asian Populations: A Systematic Review and Meta-Analysis
by Piyanut Xuto, Lawitra Khiaokham, Daniel Bressington and Patompong Khaw-on
Nurs. Rep. 2026, 16(4), 107; https://doi.org/10.3390/nursrep16040107 - 25 Mar 2026
Abstract
Background: The appropriateness of the World Health Organization (WHO) body mass index (BMI) cut-off (≥25 kg/m²) for gestational diabetes mellitus (GDM) screening in Asian populations remains controversial due to the “Asian phenotype,” characterized by higher body fat percentage and visceral adiposity at lower BMI values. This systematic review evaluated the screening and prognostic performance of pre-pregnancy BMI thresholds (≥23, ≥24, ≥25 kg/m²) for predicting GDM in Asian women. Methods: A systematic review and meta-analysis were conducted following the JBI Manual for Evidence Synthesis and the PRISMA-DTA guidelines. A comprehensive search was performed in PubMed, Scopus, Embase, CINAHL, the Cochrane Library, and Google Scholar from January 2015 to August 2024. Studies reporting the screening and prognostic performance of pre-pregnancy BMI for GDM prediction in Asian populations were assessed using the QUADAS-2 tool. Data were synthesized using MetaBayesDTA for univariate random-effects meta-analysis of sensitivity and specificity. A supplementary DerSimonian–Laird random-effects meta-analysis of odds ratios (ORs) was conducted to assess the prognostic association between BMI thresholds and GDM risk. Results: A total of 13 studies were included in the review, comprising 427,159 Asian pregnant women. Most included studies were conducted in East Asian populations, predominantly Chinese, and the findings may not generalize to South or Southeast Asian subgroups. For the Asian-standard threshold (≥23 kg/m²; n = 3 studies), pooled sensitivity was 0.47 (95% CrI 0.45–0.49) and specificity was 0.71 (95% CrI 0.56–0.83). For the intermediate threshold (≥24 kg/m²; n = 7 studies), sensitivity was 0.31 (95% CrI 0.25–0.37) and specificity 0.84 (95% CrI 0.80–0.88). For the WHO standard (≥25 kg/m²; n = 3 studies), sensitivity was 0.31 (95% CrI 0.11–0.61) and specificity 0.80 (95% CrI 0.45–0.95). Heterogeneity was extremely high for BMI ≥ 25 kg/m² (I² = 92% for sensitivity), substantially limiting the interpretability of the pooled estimates for this threshold. Conclusions: Based on low-certainty evidence from three studies with very high heterogeneity, the WHO BMI criterion (≥25 kg/m²) appears to have clinically insufficient sensitivity for GDM detection in East Asian populations. The Asian-standard threshold (≥23 kg/m²) shows improved prediction (moderate-certainty evidence) but still misses approximately 53% of true positives. The supplementary OR meta-analysis confirms that all three thresholds are significantly associated with GDM risk (pooled ORs 1.80–2.38), though the effect sizes are modest. BMI alone is insufficient for GDM screening and should be integrated into multifactorial risk assessment strategies. These findings apply primarily to East Asian populations and may not generalize to South or Southeast Asian subgroups.
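The supplementary OR meta-analysis described above uses a DerSimonian–Laird random-effects model. A minimal sketch of that estimator follows, with hypothetical study ORs and variances rather than the review's actual data:

```python
import math

def dersimonian_laird(log_ors, variances):
    """Pool study-level log odds ratios with a DerSimonian-Laird
    random-effects model; returns (pooled log OR, tau^2)."""
    k = len(log_ors)
    w = [1.0 / v for v in variances]                    # fixed-effect weights
    y_fixed = sum(wi * yi for wi, yi in zip(w, log_ors)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, log_ors))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                  # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]      # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, log_ors)) / sum(w_star)
    return pooled, tau2

# Hypothetical studies: ORs around 2 with modest heterogeneity
ors = [1.8, 2.4, 2.0, 2.9, 1.6]
variances = [0.04, 0.06, 0.03, 0.10, 0.05]
pooled_log_or, tau2 = dersimonian_laird([math.log(o) for o in ors], variances)
print(round(math.exp(pooled_log_or), 2), round(tau2, 3))
```

Pooling on the log scale and exponentiating at the end is what produces summary ORs of the kind reported above (1.80–2.38).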
16 pages, 1981 KB  
Article
Genomic Insights into Ciprofloxacin-Resistant Enteropathogenic Escherichia coli ST752 in Republic of Korea: A One Health Perspective on Its Emergence and Transmission
by Yeongeun Seo, Wooju Kang, Eunkyung Shin, Jungsun Park, Mooneui Hong, Dong-Hyun Roh and Junyoung Kim
Antibiotics 2026, 15(3), 304; https://doi.org/10.3390/antibiotics15030304 - 17 Mar 2026
Abstract
Background/Objectives: We analyzed the whole-genome sequences of ciprofloxacin-resistant (CIP-R) enteropathogenic Escherichia coli (EPEC) ST752 isolates in South Korea to characterize their molecular epidemiology. This lineage has emerged as the predominant CIP-R EPEC clone in South Korea, accounting for 28.8% of human clinical isolates and circulating within the One Health interface. Methods: We performed whole-genome sequencing (WGS) and reference-based core-genome single-nucleotide polymorphism (SNP) analysis on 26 CIP-R EPEC ST752 isolates (19 human clinical and 7 poultry-derived isolates). To elucidate their evolutionary history and transmission dynamics, Bayesian phylodynamic and phylogeographic reconstructions were implemented by integrating domestic isolates with a global genome dataset (n = 508). Results: Isolates from human and poultry sources clustered together with an identical virulence profile and minimal genetic distance. The Bayesian molecular clock analysis estimated that the time to the most recent common ancestor of the South Korean clade was 2000.65. Moreover, the phylogeographic analysis supported statistical evidence (Bayes factor 32.16) for the introduction of this lineage into South Korea from Denmark and revealed a strongly supported host transition from humans to poultry (Bayes factor > 10,000), although this requires cautious interpretation due to limited temporal sampling of poultry isolates. Conclusions: Continued integrated One Health surveillance across human, animal, and environmental reservoirs is needed to monitor and prevent the spread of high-risk antimicrobial-resistant clones.
(This article belongs to the Section Antibiotics Use and Antimicrobial Stewardship)
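The Bayes-factor support for transmission routes reported above is the kind of quantity BSSVS-style Bayesian phylogeographic analyses derive from the posterior versus prior inclusion odds of a migration-rate indicator. A toy sketch with hypothetical probabilities (not values from this study):

```python
def indicator_bayes_factor(posterior_prob, prior_prob):
    """Bayes factor for a migration route, computed from posterior vs prior
    inclusion odds of its rate indicator (as in BSSVS-style analyses)."""
    post_odds = posterior_prob / (1.0 - posterior_prob)
    prior_odds = prior_prob / (1.0 - prior_prob)
    return post_odds / prior_odds

# Hypothetical values: the route's indicator is 'on' in 75% of posterior
# samples against a 10% prior inclusion probability.
bf = indicator_bayes_factor(0.75, 0.10)
print(round(bf, 1))
```

A Bayes factor above roughly 3 is conventionally read as positive support and above 10 as strong support, which is how thresholds like the study's 32.16 are interpreted.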
25 pages, 1131 KB  
Article
A Bayesian Approach for Clustering Constant-Wise Change-Point Data
by Ana Carolina da Cruz and Camila P. E. de Souza
Stats 2026, 9(2), 31; https://doi.org/10.3390/stats9020031 - 17 Mar 2026
Abstract
Change-point models deal with ordered data sequences. Their primary goal is to infer the locations where an aspect of the data sequence changes. In this paper, we propose and implement a nonparametric Bayesian model for clustering observations based on their constant-wise change-point profiles via a Gibbs sampler. Our model incorporates a Dirichlet process on the constant-wise change-point structures to cluster observations while simultaneously performing multiple change-point estimation. Additionally, our approach controls the number of clusters in the model, not requiring specification of the number of clusters a priori. Satisfactory clustering and estimation results were obtained when evaluating our method under various simulated scenarios and on a real dataset from single-cell genomic sequencing. Our proposed methodology is implemented as an R package called BayesCPclust and is available from the Comprehensive R Archive Network.
(This article belongs to the Section Bayesian Methods)
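The paper's model clusters sequences by their constant-wise change-point profiles via a Dirichlet process and Gibbs sampling; reproducing that is beyond a snippet, but the core ingredient, locating a mean shift in a piecewise-constant sequence, can be sketched with a least-squares split search on simulated data (this is an illustrative baseline, not the authors' BayesCPclust method):

```python
import numpy as np

def best_change_point(y):
    """Least-squares location of a single mean shift in a sequence:
    minimize the SSE of a two-segment constant fit over all split points."""
    n = len(y)
    best_k, best_sse = None, np.inf
    for k in range(1, n):                 # split after index k-1
        left, right = y[:k], y[k:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_k, best_sse = k, sse
    return best_k

rng = np.random.default_rng(2)
# Piecewise-constant signal: mean 0 for 40 points, then mean 2 for 60 points
y = np.concatenate([rng.normal(0.0, 0.3, 40), rng.normal(2.0, 0.3, 60)])
print(best_change_point(y))
```

The Bayesian model generalizes this to multiple change points per profile and shares change-point structures across observations through the Dirichlet process prior.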
26 pages, 1183 KB  
Article
Classical and Bayesian Inference for the Two-Parameter Rayleigh Distribution with Random Censored Data
by Lanxi Zhang, Wenhao Gui, Zihan Zhao and Minghui Liu
Entropy 2026, 28(3), 313; https://doi.org/10.3390/e28030313 - 10 Mar 2026
Abstract
This study focuses on parameter estimation and reliability analysis for the two-parameter Rayleigh distribution under random censoring. It is shown that directly fitting the standard Rayleigh distribution can lead to substantial estimation errors, especially when the dataset contains a markedly high minimum value. To overcome the limitation of the conventional single-parameter Rayleigh distribution, which lacks a threshold parameter in practical applications, a two-parameter Rayleigh distribution model is proposed. The main research contents include the following: establishing a randomly censored data model; deriving classical inference methods based on maximum likelihood estimation along with several other classical estimation techniques; and constructing a Bayesian estimation framework. We also analyze several reliability and experimental characteristics by deriving their corresponding estimates. A Monte Carlo simulation study is carried out to assess the performance of the proposed estimators. Finally, the practicality and superiority of the two-parameter model are validated using real strength datasets. The results demonstrate that the two-parameter Rayleigh distribution can more accurately describe survival data with threshold characteristics and outperforms the single-parameter model in terms of model fit and reliability estimation.
(This article belongs to the Section Information Theory, Probability and Statistics)
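To make the role of the threshold parameter concrete, the sketch below fits a two-parameter Rayleigh distribution to a complete (uncensored) simulated sample by profile likelihood: for each candidate threshold mu, the scale MLE has a closed form, and mu is then chosen on a grid. This is an illustrative simplification, not the paper's randomly censored estimators.

```python
import numpy as np

def rayleigh2_profile_mle(x, n_grid=400):
    """Profile-likelihood fit of the two-parameter Rayleigh distribution
    f(x) = ((x - mu) / sigma^2) * exp(-(x - mu)^2 / (2 sigma^2)), x > mu,
    on a complete sample: for fixed mu the scale MLE is
    sigma^2 = sum((x - mu)^2) / (2n); mu is maximized over a grid below min(x)."""
    x = np.sort(np.asarray(x, float))
    n = len(x)
    mus = np.linspace(x[0] - (x[-1] - x[0]), x[0] - 1e-6, n_grid)
    best_ll, best_mu, best_sigma = -np.inf, None, None
    for mu in mus:
        d = x - mu
        sigma2 = (d ** 2).sum() / (2 * n)
        ll = np.log(d).sum() - n * np.log(sigma2) - (d ** 2).sum() / (2 * sigma2)
        if ll > best_ll:
            best_ll, best_mu, best_sigma = ll, mu, np.sqrt(sigma2)
    return best_mu, best_sigma

rng = np.random.default_rng(3)
sample = 5.0 + rng.rayleigh(scale=2.0, size=2000)   # true threshold 5, scale 2
mu_hat, sigma_hat = rayleigh2_profile_mle(sample)
print(round(mu_hat, 2), round(sigma_hat, 2))
```

The shifted sample illustrates the abstract's point: forcing the single-parameter model (mu = 0) onto data whose minimum sits near 5 would badly inflate the scale estimate.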
16 pages, 1996 KB  
Article
Genomic Selection for Lodging-Related Traits in Double-Cropping Rice
by Wenyu Lu, Jicheng Yue, Jinzhao Liu, Xilong Yuan, Hui Wang, Tao Guo and Hong Liu
Plants 2026, 15(5), 785; https://doi.org/10.3390/plants15050785 - 4 Mar 2026
Abstract
Genomic selection (GS) is a promising tool to accelerate genetic gain for complex traits. In this study, we evaluated the potential of GS for the improvement of seven lodging-related traits in double-cropping rice in Southern China using 438 rice accessions. The traits examined included the length and bending resistance of the third and fourth internodes (IL3, IL4, BR3, BR4), plant height (PH), and the ratio of internode length to plant height (IL3/PH, IL4/PH). Significant phenotypic differences were observed for all traits between the two seasons. In comparisons of cross-validation and independent prediction, GBLUP and BayesLASSO outperformed LightGBM across all traits in both seasons. Across all evaluated traits, prediction accuracies (Pearson’s r) ranged from 0.33 to 0.78 in cross-validation and from 0.28 to 0.75 in independent prediction using the GBLUP model. Bending resistance exhibited lower prediction accuracy due to its lower genomic heritability. Correlation analysis revealed that plant height was not significantly correlated with culm bending resistance, suggesting that these traits are genetically independent. We utilized GBLUP models trained on our experimental data to predict the genomic estimated breeding values (GEBVs) of the 3000 Rice Genomes Project (3kRG) dataset. The results demonstrated that GS can efficiently enrich the proportion of highly lodging-resistant accessions, increasing it from 31.40% in the base 3kRG population to a maximum of 83.00% among the top 200 selected individuals. Furthermore, indirect selection for traits with higher heritability, such as IL and IL/PH, was more effective at screening highly lodging-resistant cultivars than direct selection for BR. Our research demonstrates the feasibility of applying genomic selection for the breeding of lodging-resistant varieties in double-cropping rice and provides a foundation for further applications.
(This article belongs to the Section Plant Genetics, Genomics and Biotechnology)
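As a hedged sketch of the GBLUP predictions discussed above, the snippet below uses the ridge-regression (RR-BLUP) equivalence of GBLUP on simulated marker data; the penalty value and all data are hypothetical, and in practice the penalty comes from estimated variance components rather than being set by hand.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 300, 1000                     # accessions, markers (toy sizes)
M = rng.binomial(2, 0.4, size=(n, m)).astype(float)
u = rng.normal(0.0, 0.05, m)         # small polygenic marker effects
y = M @ u + rng.normal(0.0, 1.0, n)  # simulated phenotype

# GBLUP via its ridge-regression (RR-BLUP) equivalence: marker effects are
# shrunk by a common penalty lambda = sigma_e^2 / sigma_u^2 (here the true
# ratio 1.0 / 0.05^2 = 400 is used, since the simulation truth is known).
Z = M - M.mean(axis=0)               # center marker dosages
lam = 400.0
beta = np.linalg.solve(Z.T @ Z + lam * np.eye(m), Z.T @ (y - y.mean()))
gebv = Z @ beta                      # genomic estimated breeding values

r = np.corrcoef(gebv, M @ u)[0, 1]   # accuracy vs true genetic values
print(round(r, 2))
```

The Pearson correlation between predicted and realized values computed here is the same accuracy metric the study reports (0.28–0.78 across traits and schemes).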
29 pages, 1890 KB  
Article
Inference for Two-Parameter Birnbaum–Saunders Distribution Under Joint Progressively Type-II Censored Data
by Omar M. Bdair
Mathematics 2026, 14(5), 825; https://doi.org/10.3390/math14050825 - 28 Feb 2026
Abstract
We study inference and prediction for two populations whose lifetimes follow two-parameter Birnbaum–Saunders distributions under a joint progressive Type-II censoring scheme. We derive the observed-data likelihood and obtain maximum likelihood estimates via an EM algorithm that treats progressively removed lifetimes as missing data. Bayesian inference is developed using importance sampling and a hybrid Gibbs–Metropolis–Hastings sampler, leading to Bayes estimators, credible intervals, and posterior predictive summaries. We further construct prediction intervals for the unobserved lifetimes removed at multiple censoring stages. Monte Carlo experiments under several censoring patterns and parameter configurations compare the frequentist and Bayesian procedures. A tuberculosis survival dataset illustrates model adequacy, parameter estimation, and prediction of removed units under joint progressive censoring.
(This article belongs to the Section D1: Probability and Statistics)
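The EM and Bayesian machinery of the paper needs more than a snippet, but the Birnbaum–Saunders distribution itself is easy to illustrate: the sketch below simulates BS data via its defining normal representation and recovers the parameters with the standard modified moment estimators, a simpler alternative to the paper's censored-data estimators, applied here to a complete sample.

```python
import numpy as np

def bs_moment_estimates(t):
    """Modified moment estimators for Birnbaum-Saunders(alpha, beta):
    with s the arithmetic and r the harmonic sample mean,
    beta_hat = sqrt(s * r) and alpha_hat = sqrt(2 * (sqrt(s / r) - 1))."""
    t = np.asarray(t, float)
    s = t.mean()
    r = 1.0 / np.mean(1.0 / t)
    beta_hat = np.sqrt(s * r)
    alpha_hat = np.sqrt(2.0 * (np.sqrt(s / r) - 1.0))
    return alpha_hat, beta_hat

# Simulate BS(alpha, beta) via its normal representation:
# T = beta * (alpha*Z/2 + sqrt((alpha*Z/2)^2 + 1))^2, Z standard normal.
rng = np.random.default_rng(5)
alpha, beta = 0.5, 2.0
z = rng.normal(size=20_000)
w = alpha * z / 2.0
t = beta * (w + np.sqrt(w ** 2 + 1.0)) ** 2
a_hat, b_hat = bs_moment_estimates(t)
print(round(a_hat, 2), round(b_hat, 2))
```

The same representation is what makes EM convenient in the paper: conditioning on the latent normal variate linearizes the complete-data likelihood.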
48 pages, 3619 KB  
Article
Comparative Assessment of the Reliability of Non-Recoverable Subsystems of Mining Electronic Equipment Using Various Computational Methods
by Nikita V. Martyushev, Boris V. Malozyomov, Anton Y. Demin, Alexander V. Pogrebnoy, Georgy E. Kurdyumov, Viktor V. Kondratiev and Antonina I. Karlina
Mathematics 2026, 14(4), 723; https://doi.org/10.3390/math14040723 - 19 Feb 2026
Abstract
The assessment of reliability in non-repairable subsystems of mining electronic equipment represents a computationally challenging problem, particularly for complex and highly connected structures. This study presents a systematic comparative analysis of several deterministic approaches for reliability estimation, focusing on their computational efficiency, accuracy, and applicability. The investigated methods include classical boundary techniques (minimal paths and cuts), analytical decomposition based on the Bayes theorem, the logic–probabilistic method (LPM) employing triangle–star transformations, and the algorithmic Structure Convolution Method (SCM), which is based on matrix reduction of the system’s connectivity graph. The reliability problem is formally represented using graph theory, where each element is modeled as a binary variable with independent failures, which is a standard and practically justified assumption for power electronic subsystems operating without common-cause coupling. Numerical experiments were carried out on canonical benchmark topologies—bridge, tree, grid, and random connected graphs—representing different levels of structural complexity. The results demonstrate that the SCM achieves exact reliability values with up to six orders of magnitude acceleration compared to the LPM for systems containing more than 20 elements, while maintaining polynomial computational complexity. Qualitatively, the compared approaches differ in the nature of the output and practical applicability: boundary methods provide fast interval estimates suitable for preliminary screening, whereas decomposition may exhibit a systematic bias for highly connected (non-series–parallel) topologies. In contrast, the SCM consistently preserves exactness while remaining computationally tractable for medium and large sparse-to-moderately dense graphs, making it preferable for repeated recalculations in design and optimization workflows. 
The methods were implemented in Python 3.7 using NumPy and NetworkX, ensuring transparency and reproducibility. The findings confirm that the SCM is an efficient, scalable, and mathematically rigorous tool for reliability assessment and structural optimization of large-scale non-repairable systems. The presented methodology provides practical guidelines for selecting appropriate reliability evaluation techniques based on system complexity and computational resource constraints.
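For systems as small as the classic five-element bridge benchmark mentioned above, exact reliability can be obtained by brute-force state enumeration, a useful correctness baseline for the boundary, decomposition, and SCM methods being compared. A stdlib-only sketch (the paper's implementation used NumPy and NetworkX):

```python
from itertools import product

# Bridge network: source s, sink t, intermediate nodes a, b;
# five independently failing elements, each up with probability p.
EDGES = [("s", "a"), ("s", "b"), ("a", "t"), ("b", "t"), ("a", "b")]

def connected(up):
    """Is t reachable from s using only the 'up' elements? (depth-first search)"""
    adj = {}
    for (u, v), ok in zip(EDGES, up):
        if ok:
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)
    seen, stack = {"s"}, ["s"]
    while stack:
        for w in adj.get(stack.pop(), []):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return "t" in seen

def exact_reliability(p):
    """Exact system reliability by enumerating all 2^5 element states."""
    total = 0.0
    for up in product([True, False], repeat=len(EDGES)):
        prob = 1.0
        for ok in up:
            prob *= p if ok else (1.0 - p)
        if connected(up):
            total += prob
    return total

print(round(exact_reliability(0.9), 5))   # 0.97848 for the classic bridge
```

Enumeration is exponential in the element count, which is exactly why the paper develops polynomial-time alternatives such as the SCM for systems beyond ~20 elements.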
65 pages, 1161 KB  
Article
The Empirical Bayes Estimators of the Variance Parameter of the Normal Distribution with a Normal-Inverse-Gamma Prior Under Stein’s Loss Function
by Ying-Ying Zhang
Axioms 2026, 15(2), 127; https://doi.org/10.3390/axioms15020127 - 10 Feb 2026
Abstract
For the hierarchical normal and normal-inverse-gamma model, we derive the Bayesian estimator of the variance parameter in the normal distribution under Stein’s loss function—a penalty function that treats gross overestimation and underestimation equally—and compute the associated Posterior Expected Stein’s Loss (PESL). Additionally, we determine the Bayesian estimator of the same variance parameter under the squared error loss function, along with its corresponding PESL. We further develop empirical Bayes estimators for the variance parameter using a conjugate normal-inverse-gamma prior, employing both the method of moments and Maximum Likelihood Estimation (MLE). Theoretical properties are established, including the posterior and marginal distributions, two inequalities relating the two Bayes estimators and their corresponding PESLs, and the consistency of the hyperparameter and empirical Bayes estimators. The simulation results demonstrate that the MLEs outperform the moment estimators in estimating the hyperparameters, particularly with respect to consistency and model fit. Finally, we apply our methodology to real-world data on poverty levels—specifically, the percentage of individuals living below the poverty line—to validate and illustrate our theoretical findings.
(This article belongs to the Section Mathematical Analysis)
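The contrast between Stein's loss and squared error loss described above reduces, for an inverse-gamma posterior, to two different posterior functionals: 1/E[1/sigma^2] versus E[sigma^2]. A minimal conjugate-update sketch with hypothetical data (the hierarchical and empirical Bayes layers of the paper are omitted):

```python
def ig_posterior(alpha0, beta0, data, mu):
    """Conjugate update: with known mean mu and sigma^2 ~ IG(alpha, beta),
    the posterior after data x_1..x_n is IG(alpha + n/2, beta + SS/2),
    where SS = sum((x_i - mu)^2)."""
    ss = sum((x - mu) ** 2 for x in data)
    return alpha0 + len(data) / 2.0, beta0 + ss / 2.0

def bayes_variance_stein(alpha, beta):
    """Bayes estimator of sigma^2 under Stein's loss:
    1 / E[1/sigma^2 | data] = beta / alpha for an IG(alpha, beta) posterior."""
    return beta / alpha

def bayes_variance_sel(alpha, beta):
    """Bayes estimator under squared error loss: the posterior mean
    beta / (alpha - 1), which requires alpha > 1."""
    return beta / (alpha - 1.0)

data = [1.2, -0.4, 0.8, 2.1, -1.5, 0.3]      # hypothetical observations
a_post, b_post = ig_posterior(2.0, 1.0, data, mu=0.0)
print(bayes_variance_stein(a_post, b_post), bayes_variance_sel(a_post, b_post))
```

The Stein's-loss estimate beta/alpha is always smaller than the squared-error estimate beta/(alpha - 1), one instance of the kind of inequality between the two Bayes estimators that the paper establishes.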
19 pages, 1845 KB  
Article
Don’t Tell Us How Strong It Feels! Converging and Discriminant Validity of an Indirect Measure of Emotional Evidence Accumulation Efficiency
by Rotem Berkovich, Deanna M. Barch, Nachshon Meiran and Erin K. Moran
J. Intell. 2026, 14(2), 19; https://doi.org/10.3390/jintelligence14020019 - 31 Jan 2026
Abstract
The prevalent method for measuring emotional experiences is self-report scales. However, this method is prone to bias, affected by retrospective errors, and limited in studying individual differences due to variability in how individuals interpret scale values. In the present study, we tested the convergent validity of an alternative approach, which infers emotional components from computational modeling applied to binary pleasant/unpleasant reports about affective images. Reaction times and choices were modeled to estimate the drift rate (the efficiency of emotional evidence accumulation) and the boundary (decision caution). Participants (N = 191) also completed five self-report questionnaires assessing affect, anhedonia, depressive symptoms, and pleasure. Only one correlation reached the evidence threshold (Bayes factor > 10): higher consummatory pleasure was negatively associated with the drift rate for unpleasant emotions (r(178) = −0.258). This suggests that individuals who typically experience greater in-the-moment pleasure accumulate evidence less efficiently toward unpleasant judgments. Other correlations were absent or inconclusive, potentially reflecting differences in temporal focus and in the specific facets of emotion tapped by each measure. Overall, these results provide initial support for the convergent and discriminant validity of the drift rate as an indirect measure of online emotional experience.
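The drift rate and boundary discussed above are parameters of a drift-diffusion model. The paper fits these parameters to real choice and reaction-time data; as a hedged illustration only, the sketch below simulates the forward model, evidence accumulating with drift until it hits a boundary, using made-up parameter values.

```python
import numpy as np

def simulate_ddm(drift, boundary, n_trials, dt=0.005, noise=1.0, seed=0):
    """Simulate a symmetric drift-diffusion process: evidence starts at 0
    and accumulates with the given drift until it crosses +/- boundary.
    Returns arrays of choices (+1 upper / -1 lower) and decision times."""
    rng = np.random.default_rng(seed)
    choices, rts = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + noise * np.sqrt(dt) * rng.normal()
            t += dt
        choices.append(1 if x > 0 else -1)
        rts.append(t)
    return np.array(choices), np.array(rts)

# Hypothetical parameters: efficient accumulation toward 'pleasant' (+1)
choices, rts = simulate_ddm(drift=1.5, boundary=1.0, n_trials=200)
print(round((choices == 1).mean(), 2), round(rts.mean(), 3))
```

A higher drift rate yields faster, more consistent classifications, which is why the fitted drift can serve as an indirect index of how efficiently emotional evidence is accumulated.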
27 pages, 6867 KB  
Article
Recovering Gamma-Ray Burst Redshift Completeness Maps via Spherical Generalized Additive Models
by Zsolt Bagoly and Istvan I. Racz
Universe 2026, 12(2), 31; https://doi.org/10.3390/universe12020031 - 24 Jan 2026
Abstract
We present an advanced statistical framework for estimating the relative intensity of astrophysical event distributions (e.g., Gamma-Ray Bursts, GRBs) on the sky to facilitate population studies and large-scale structure analysis. In contrast to the traditional approach based on the ratio of Kernel Density Estimation (KDE), which is characterized by numerical instability and bandwidth sensitivity, this work applies a logistic regression embedded in a Bayesian framework to directly model selection effects. It reformulates the problem as a logistic regression task within a Generalized Additive Model (GAM) framework, utilizing isotropic Splines on the Sphere (SOS) to map the conditional probability of redshift measurement. The model complexity and smoothness are objectively optimized using Restricted Maximum Likelihood (REML) and the Akaike Information Criterion (AIC), ensuring a data-driven bias–variance trade-off. We benchmark this approach against an Adaptive Kernel Density Estimator (AKDE) using von Mises–Fisher kernels and Abramson’s square-root law. The comparative analysis reveals strong statistical evidence in favor of this Preconditioned (Precon) Estimator, yielding a log-likelihood improvement of ΔL ≈ 74.3 (Bayes factor > 10³⁰) over the adaptive method. We show that this Precon Estimator acts as a spectral bandwidth extender, effectively decoupling the wideband exposure map from the narrowband selection efficiency. This provides a tool for cosmologists to recover high-frequency structural features—such as sharp cutoffs—that are mathematically irresolvable by direct density estimators due to the bandwidth limitation inherent in sparse samples. The methodology ensures that reconstructions of the cosmic web are stable against Poisson noise and consistent with observational constraints.
(This article belongs to the Section Astroinformatics and Astrostatistics)
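The paper's central move, replacing an unstable KDE ratio with a regression model of the probability that an event has a measured redshift, can be miniaturized with a plain logistic regression (the actual work uses spherical splines in a GAM, optimized by REML/AIC). Everything below, including the 1-D "sky" coordinate and the linear selection function, is a synthetic stand-in.

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, n_iter=5000):
    """Plain logistic regression by gradient ascent on the log-likelihood;
    the first column of X is the intercept."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)
    return w

# Hypothetical events on a 1-D 'sky' coordinate: the probability of a
# successful redshift measurement declines with the coordinate.
rng = np.random.default_rng(6)
coord = rng.uniform(0.0, 1.0, 5000)
measured = (rng.uniform(size=coord.size) < (0.9 - 0.6 * coord)).astype(float)

X = np.column_stack([np.ones_like(coord), coord])
w = fit_logistic(X, measured)
prob = lambda c: 1.0 / (1.0 + np.exp(-(w[0] + w[1] * c)))
print(round(prob(0.1), 2), round(prob(0.9), 2))
```

Modeling the selection probability directly, rather than dividing two noisy density estimates, is what gives the regression formulation its stability in sparse samples.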
16 pages, 3176 KB  
Article
Stacking Ensemble Learning for Genomic Prediction Under Complex Genetic Architectures
by Maurício de Oliveira Celeri, Moyses Nascimento, Ana Carolina Campana Nascimento, Filipe Ribeiro Formiga Teixeira, Camila Ferreira Azevedo, Cosme Damião Cruz and Laís Mayara Azevedo Barroso
Agronomy 2026, 16(2), 241; https://doi.org/10.3390/agronomy16020241 - 20 Jan 2026
Abstract
Genomic selection (GS) estimates genomic estimated breeding values (GEBVs) from genome-wide markers to reduce generation intervals and optimize germplasm selection, which is particularly advantageous for high-cost or late-expressed traits. While models like GBLUP are popular, they assume a polygenic architecture. In contrast, the Bayesian alphabet and machine learning (ML) can accommodate other types of genetic architectures. Given that no single model is universally optimal, stacking ensembles, which train a meta-model on the predictions of diverse base learners, emerge as a compelling solution. However, applications of stacking in GS often overlook non-additive effects. This study evaluated different stacking configurations for genomic prediction across 10 simulated traits covering additive, dominance, and epistatic genetic architectures. A 5-fold cross-validation scheme was used to assess predictive ability and other evaluation metrics. The stacking approach demonstrated superior predictive ability in all scenarios. Gains were especially pronounced in complex architectures (100 QTLs, h² = 0.3), reaching an 83% increase over the best individual model (BayesA with dominance), and also in oligogenic scenarios with epistasis (10 QTLs, h² = 0.6), with a 27.59% gain. The success of stacking was attributed to two key strategies: base-learner selection and the use of robust meta-learners (such as principal component or penalized regression) that effectively handled multicollinearity.
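A stacking ensemble of the kind evaluated above can be sketched with out-of-fold base-learner predictions feeding a linear meta-learner. The base learners here are ridge regressions at different shrinkage levels, and the data are simulated with a small epistatic-style interaction; none of this mirrors the paper's exact configurations.

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 400, 200
X = rng.normal(size=(n, m))
coef = np.array([1.0, -0.8, 0.6, 0.5, -0.4])          # fixed 'QTL' effects
y = X[:, :5] @ coef + 0.5 * X[:, 0] * X[:, 1] + rng.normal(0.0, 0.5, n)

def ridge_fit(X, y, lam):
    """Closed-form ridge regression coefficients."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

LAMS = [1.0, 50.0, 500.0]          # base learners: ridge at three penalties

def out_of_fold_preds(X, y, k=5):
    """Level-1 features: out-of-fold predictions from each base learner,
    so the meta-learner never sees a base learner's training-set fit."""
    folds = np.arange(len(y)) % k
    Z = np.zeros((len(y), len(LAMS)))
    for f in range(k):
        tr, te = folds != f, folds == f
        for j, lam in enumerate(LAMS):
            Z[te, j] = X[te] @ ridge_fit(X[tr], y[tr], lam)
    return Z

Z = out_of_fold_preds(X, y)
meta_w = np.linalg.lstsq(Z, y, rcond=None)[0]   # simple linear meta-learner
stacked = Z @ meta_w
r = np.corrcoef(stacked, y)[0, 1]
print(round(r, 2))
```

Training the meta-learner on out-of-fold predictions is the step that prevents the ensemble from simply memorizing the best-fitting base learner, and a penalized or principal-component meta-learner (as the study uses) further guards against the strong collinearity among base-learner predictions.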

14 pages, 1222 KB  
Article
BayesCNV: A Bayesian Hierarchical Model for Sensitive and Specific Copy Number Estimation in Cell Free DNA
by Austin Talbot, Alex Kotlar, Lavanya Rishishwar, Andrew Conley, Mengyao Zhao, Nachen Yang, Michael Liu, Zhaohui Wang, Sean Polvino and Yue Ke
Diagnostics 2026, 16(2), 280; https://doi.org/10.3390/diagnostics16020280 - 16 Jan 2026
Abstract
Background/Objectives: Detecting copy number variations (CNVs) from next-generation sequencing (NGS) is challenging in targeted sequencing panels, especially for cell-free DNA (cfDNA), where the signal is weak and noise is high. Methods: We present BayesCNV, a Bayesian hierarchical model for gene-level copy ratio estimation from targeted amplicon read depths compared to a CNV-neutral reference sample. The model provides posterior uncertainty for each gene and supports interpretable calling based on effect size and posterior confidence. It also provides a principled quality-control strategy based on the marginal log likelihood of each sample, with low values indicating low confidence in the calls; BayesCNV estimates this quantity reliably using thermodynamic integration. We benchmark our method against two publicly available CNV callers using Seracare® reference samples with known CNVs on the OncoReveal® Core Lbx panel. Results: Our method achieves a sensitivity of 0.87 and a specificity of 0.996, dramatically outperforming the two competitor methods, IonCopy and DeviCNV. In a separate FFPE dataset using the OncoReveal® Essential Lbx panel, we show that the marginal log likelihood cleanly separates degraded from high-quality samples, even when conventional sequencing QC metrics do not. Conclusions: BayesCNV provides accurate and interpretable gene-level CNV estimates with uncertainty quantification, along with an evidence-based quality-control metric that improves robustness in targeted cfDNA workflows.
(This article belongs to the Section Pathology and Molecular Diagnostics)
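Thermodynamic integration itself is easy to demonstrate on a toy conjugate model, where the marginal likelihood has a closed form for comparison. The Gaussian model, prior, and temperature ladder below are illustrative assumptions, not the BayesCNV model: the log marginal likelihood is recovered as the integral over t in [0, 1] of the expected log-likelihood under the tempered (power) posterior.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
n, sigma, mu0, tau0 = 30, 1.0, 0.0, 2.0
y = rng.normal(1.5, sigma, n)            # data from N(theta, sigma^2)

# Temperature ladder t_0 = 0 (prior) ... t_K = 1 (posterior); a cubic
# spacing concentrates rungs near t = 0, where the integrand changes fastest.
ts = np.linspace(0.0, 1.0, 21) ** 3

def expected_loglik(t, draws=20_000):
    # The tempered posterior of this conjugate Gaussian model is itself
    # Gaussian, so we sample it directly instead of running MCMC.
    prec = 1 / tau0**2 + t * n / sigma**2
    mean = (mu0 / tau0**2 + t * y.sum() / sigma**2) / prec
    theta = rng.normal(mean, np.sqrt(1 / prec), draws)
    ll = (-0.5 * n * np.log(2 * np.pi * sigma**2)
          - ((y[None, :] - theta[:, None]) ** 2).sum(axis=1) / (2 * sigma**2))
    return ll.mean()

vals = np.array([expected_loglik(t) for t in ts])
ti = float(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(ts)))  # trapezoid rule

# Exact log marginal likelihood: y ~ N(mu0 * 1, sigma^2 I + tau0^2 11^T).
exact = multivariate_normal(mean=np.full(n, mu0),
                            cov=sigma**2 * np.eye(n) + tau0**2).logpdf(y)
print(f"TI estimate: {ti:.2f}   exact: {exact:.2f}")
```

In a real caller the tempered expectations would come from MCMC at each rung rather than exact sampling, but the ladder-plus-quadrature structure is the same.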

20 pages, 597 KB  
Article
Fast 3D-HEVC Depth Map Coding Method Based on Spatio-Temporal Correlation and a Two-Stage Mode Decision Framework
by Erlin Tian, Jiabao Zhang and Qiuwen Zhang
Sensors 2026, 26(2), 529; https://doi.org/10.3390/s26020529 - 13 Jan 2026
Abstract
Efficient intra-mode decision for depth maps assumes a pivotal role in augmenting the overall performance of 3D-HEVC. Existing research endeavors predominantly rely on fast mode screening strategies grounded in texture characteristics or machine learning techniques. These strategies, to a certain extent, mitigate the complexity of mode search. Nevertheless, these approaches often fall short of fully leveraging the intrinsic spatio-temporal correlations within depth maps. Moreover, strategies relying on deterministic classifiers exhibit insufficient discrimination reliability in regions featuring edge mutations or intricate structures. To tackle these challenges, this paper presents a two-stage fast intra-mode decision algorithm for depth maps, integrating naive Bayes probability estimation and fuzzy support vector machine (FSVM). Initially, it confines the candidate mode space through spatio-temporal prior modeling. Subsequently, FSVM is employed to enhance the decision accuracy in regions with low confidence. This methodology constructs a joint mode decision framework spanning from probability screening to refined classification. By doing so, it significantly reduces the computational burden while preserving rate-distortion performance, thereby attaining an effective equilibrium between encoding complexity and performance. Experimental findings demonstrate that the proposed algorithm reduces the average encoding time by 52.30% with merely a 0.68% increment in BDBR. Additionally, it showcases stable universality across test sequences of diverse resolutions and scenes.
(This article belongs to the Section Intelligent Sensors)
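The two-stage idea (cheap probabilistic screening, then refined classification of low-confidence cases) can be sketched as follows. The synthetic features stand in for depth-map descriptors, scikit-learn's standard SVC stands in for the paper's fuzzy SVM, and the 0.9 confidence threshold is an assumption:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Three classes play the role of candidate intra modes.
X, modes = make_classification(n_samples=2000, n_features=8, n_informative=5,
                               n_classes=3, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, modes, random_state=0)

nb = GaussianNB().fit(Xtr, ytr)          # stage 1: fast probabilistic screen
svm = SVC().fit(Xtr, ytr)                # stage 2: stronger, slower classifier

proba = nb.predict_proba(Xte)
pred = proba.argmax(axis=1)
low_conf = proba.max(axis=1) < 0.9       # only uncertain blocks go to stage 2
pred[low_conf] = svm.predict(Xte[low_conf])

acc_nb = (nb.predict(Xte) == yte).mean()
acc_two = (pred == yte).mean()
print(f"NB alone: {acc_nb:.3f}   two-stage: {acc_two:.3f} "
      f"({low_conf.mean():.0%} of blocks sent to stage 2)")
```

The point of the cascade is that the expensive classifier runs only on the fraction of blocks the cheap one is unsure about, which is how the paper trades a small BDBR increment for a large encoding-time reduction.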
