Search Results (296)

Search Parameters:
Keywords = skew-normal distribution

34 pages, 31206 KB  
Article
Statistical Evaluation of Alpha-Powering Exponential Generalized Progressive Hybrid Censoring and Its Modeling for Medical and Engineering Sciences with Optimization Plans
by Heba S. Mohammed, Osama E. Abo-Kasem and Ahmed Elshahhat
Symmetry 2025, 17(9), 1473; https://doi.org/10.3390/sym17091473 - 6 Sep 2025
Abstract
This study explores advanced methods for analyzing the two-parameter alpha-power exponential (APE) distribution using data from a novel generalized progressive hybrid censoring scheme. The APE model is inherently asymmetric, exhibiting positive skewness across all valid parameter values due to its right-skewed exponential base, with the alpha-power transformation amplifying or dampening this skewness depending on the power parameter. The proposed censoring design offers new insights into modeling lifetime data that exhibit non-monotonic hazard behaviors. It enhances testing efficiency by simultaneously imposing fixed-time constraints and ensuring a minimum number of failures, thereby improving inference quality over traditional censoring methods. We derive maximum likelihood and Bayesian estimates for the APE distribution parameters and key reliability measures, such as the reliability and hazard rate functions. Bayesian analysis is performed using independent gamma priors under a symmetric squared error loss, implemented via the Metropolis–Hastings algorithm. Interval estimation is addressed using two normality-based asymptotic confidence intervals and two credible intervals obtained through a simulated Markov Chain Monte Carlo procedure. Monte Carlo simulations across various censoring scenarios demonstrate the stable and superior precision of the proposed methods. Optimal censoring patterns are identified based on the observed Fisher information and its inverse. Two real-world case studies—breast cancer remission times and global oil reserve data—illustrate the practical utility of the APE model within the proposed censoring framework. These applications underscore the model’s capability to effectively analyze diverse reliability phenomena, bridging theoretical innovation with empirical relevance in lifetime data analysis. Full article
(This article belongs to the Special Issue Unlocking the Power of Probability and Statistics for Symmetry)
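The Metropolis–Hastings step mentioned in the abstract can be sketched generically. Below is a minimal random-walk sampler over a one-dimensional parameter; the Gamma-shaped toy posterior, step size, and burn-in are illustrative assumptions, not the paper's APE posterior. Under the squared error loss named in the abstract, the Bayes estimate is the posterior mean, i.e., the average of the retained draws.

```python
import numpy as np

def metropolis_hastings(log_post, init, n_iter=20000, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings sampler for a scalar parameter."""
    rng = np.random.default_rng(seed)
    x, lp = init, log_post(init)
    samples = []
    for _ in range(n_iter):
        prop = x + rng.normal(scale=step)
        lp_prop = log_post(prop)
        # Accept with probability min(1, exp(lp_prop - lp))
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x)
    return np.array(samples)

# Toy target: log-density proportional to a Gamma(shape=3, rate=2) posterior
def log_post(x):
    return -np.inf if x <= 0 else 2.0 * np.log(x) - 2.0 * x

draws = metropolis_hastings(log_post, init=1.0)
# Posterior mean under squared error loss = average of post-burn-in draws
post_mean = draws[2000:].mean()
```

The same loop extends to the two-parameter APE case by proposing a vector and summing the log-prior and log-likelihood inside `log_post`.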
26 pages, 1680 KB  
Article
Uniformity Testing and Estimation of Generalized Exponential Uncertainty in Human Health Analytics
by Mohamed Said Mohamed and Hanan H. Sakr
Symmetry 2025, 17(9), 1403; https://doi.org/10.3390/sym17091403 - 28 Aug 2025
Viewed by 295
Abstract
The entropy function, as a measure of information and uncertainty, has been widely applied in various scientific disciplines. One notable extension of entropy is exponential entropy, which finds applications in fields such as optimization, image segmentation, and fuzzy set theory. In this paper, we explore the continuous case of generalized exponential entropy and analyze its behavior under symmetric and asymmetric probability distributions. Particular emphasis is placed on illustrating the role of symmetry through analytical results and graphical representations, including comparisons of entropy curves for symmetric and skewed distributions. Moreover, we investigate the relationship between the proposed entropy model and other information-theoretic measures such as entropy and extropy. Several non-parametric estimation techniques are studied, and their performance is evaluated using Monte Carlo simulations, highlighting asymptotic properties and the emergence of normality, an aspect closely related to distributional symmetry. Furthermore, the consistency and biases of the estimation methods, which rely on kernel estimation with ρcorr-mixing dependent data, are presented. Additionally, numerical calculations based on simulation and medical real data are applied. Finally, a test of uniformity using different test statistics is given. Full article
(This article belongs to the Special Issue Symmetric or Asymmetric Distributions and Its Applications)

18 pages, 2432 KB  
Article
From Volume to Mass: Transforming Volatile Organic Compound Detection with Photoionization Detectors and Machine Learning
by Yunfei Cai, Xiang Che and Yusen Duan
Sensors 2025, 25(17), 5314; https://doi.org/10.3390/s25175314 - 27 Aug 2025
Viewed by 501
Abstract
(1) Objective: Volatile organic compound (VOC) monitoring in industrial parks is crucial for environmental regulation and public health protection. However, current techniques face challenges related to cost and real-time performance. This study aims to develop a dynamic calibration framework for accurate real-time conversion of VOC volume fractions (nmol mol−1) to mass concentrations (μg m−3) in industrial environments, addressing the limitations of conventional monitoring methods such as high costs and delayed response times. (2) Methods: By innovatively integrating a photoionization detector (PID) with machine learning, we developed a robust conversion model utilizing PID signals, meteorological data, and a random forest (RF) algorithm. The system’s performance was rigorously evaluated against standard gas chromatography–flame ionization detection (GC-FID) measurements. (3) Results: The proposed framework demonstrated superior performance, achieving a coefficient of determination (R2) of 0.81, a root mean squared error (RMSE) of 48.23 μg m−3, a symmetric mean absolute percentage error (SMAPE) of 62.47%, and a normalized RMSE (RMSEnorm) of 2.07%, outperforming conventional methods. This framework not only achieved minute-level response times but also reduced costs to just 10% of those associated with GC-FID methods. Additionally, the model exhibited strong cross-site robustness, with R2 values ranging from 0.68 to 0.69, although its accuracy was somewhat reduced for high-concentration samples (>1500 μg m−3), where the mean absolute percentage error (MAPE) was 17.8%. The inclusion of SMAPE and RMSEnorm provides a more nuanced understanding of the model’s performance, particularly in the context of skewed or heteroscedastic data distributions, thereby offering a more comprehensive assessment of the framework’s effectiveness.
(4) Conclusions: The framework’s innovative combination of PID’s real-time capability and RF’s nonlinear modeling achieves accurate mass concentration conversion (R2 = 0.81) while maintaining a 95% faster response and 90% cost reduction compared to GC-FID systems. Compared with traditional single-coefficient PID calibration, this framework significantly improves accuracy and adaptability under dynamic industrial conditions. Future work will apply transfer learning to improve high-concentration detection for pollution tracing and environmental governance in industrial parks. Full article
(This article belongs to the Special Issue Advanced Sensors for Gas Monitoring)
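For background, the per-species volume-to-mass conversion that such a framework must capture follows directly from the ideal gas law. The sketch below is not part of the authors' model; the toluene molar mass and reference conditions are illustrative. It also shows why a single fixed PID coefficient struggles: the factor depends on each species' molar mass and on temperature.

```python
def nmol_per_mol_to_ug_per_m3(x_nmol_mol, molar_mass_g_mol,
                              temp_c=25.0, pressure_kpa=101.325):
    """Convert a volume fraction (nmol/mol) to mass concentration (ug/m3)
    via the ideal gas law. Species molar mass and conditions must be known."""
    R = 8.31446  # J / (mol K)
    molar_volume_m3 = R * (temp_c + 273.15) / (pressure_kpa * 1000.0)
    mol_per_m3 = x_nmol_mol * 1e-9 / molar_volume_m3  # analyte moles per m3 of air
    return mol_per_m3 * molar_mass_g_mol * 1e6        # grams -> micrograms

# Example: 1 nmol/mol of toluene (M = 92.14 g/mol) at 25 C is about 3.77 ug/m3
c = nmol_per_mol_to_ug_per_m3(1.0, 92.14)
```

Because the factor shifts with temperature and species mix, a learned model with meteorological inputs can adapt where a static calibration constant cannot.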

22 pages, 9949 KB  
Article
A DeepAR-Based Modeling Framework for Probabilistic Mid–Long-Term Streamflow Prediction
by Shuai Xie, Dong Wang, Jin Wang, Chunhua Yang, Keyan Shen, Benjun Jia and Hui Cao
Water 2025, 17(17), 2506; https://doi.org/10.3390/w17172506 - 22 Aug 2025
Viewed by 667
Abstract
Mid–long-term streamflow prediction (MLSP) plays a critical role in water resource planning amid growing hydroclimatic and anthropogenic uncertainties. Although AI-based models have demonstrated strong performance in MLSP, their capacity to quantify predictive uncertainty remains limited. To address this challenge, a DeepAR-based probabilistic modeling framework is developed, enabling direct estimation of streamflow distribution parameters and flexible selection of output distributions. The framework is applied to two case studies with distinct hydrological characteristics, where combinations of recurrent model structures (GRU and LSTM) and output distributions (Normal, Student’s t, and Gamma) are systematically evaluated. The results indicate that the choice of output distribution is the most critical factor for predictive performance. The Gamma distribution consistently outperformed those using Normal and Student’s t distributions, due to its ability to better capture the skewed, non-negative nature of streamflow data. Notably, the magnitude of performance gain from using the Gamma distribution is itself region-dependent, proving more significant in the basin with higher streamflow skewness. For instance, in the more skewed Upper Wudongde Reservoir area, the model using LSTM structure and Gamma distribution reduces RMSE by over 27% compared to its Normal-distribution counterpart (from 1407.77 m3/s to 1016.54 m3/s). Furthermore, the Gamma-based models yield superior probabilistic forecasts, achieving not only lower CRPS values but also a more effective balance between high reliability (PICP) and forecast sharpness (MPIW). In contrast, the relative performance between GRU and LSTM architectures was found to be less significant and inconsistent across the different basins. 
These findings highlight that the DeepAR-based framework delivers consistent enhancement in forecasting accuracy by prioritizing the selection of a physically plausible output distribution, thereby providing stronger and more reliable support for practical applications. Full article
(This article belongs to the Section Hydrology)
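Why a Gamma output distribution suits skewed, non-negative streamflow can be seen by comparing moment-matched Gamma and Normal fits on the same sample: the Gamma density concentrates mass on the positive, right-skewed region and attains a lower negative log-likelihood. The tiny "streamflow-like" sample below is illustrative, not the study's data.

```python
import math

def gamma_nll(y, shape, rate):
    """Negative log-likelihood of one observation y > 0 under Gamma(shape, rate)."""
    return -(shape * math.log(rate) + (shape - 1) * math.log(y)
             - rate * y - math.lgamma(shape))

def normal_nll(y, mu, sigma):
    """Negative log-likelihood of one observation under Normal(mu, sigma)."""
    return 0.5 * math.log(2 * math.pi * sigma ** 2) + (y - mu) ** 2 / (2 * sigma ** 2)

# Small right-skewed, non-negative sample (illustrative)
flows = [0.2, 0.5, 0.8, 1.0, 1.5, 4.0]
m = sum(flows) / len(flows)
v = sum((y - m) ** 2 for y in flows) / len(flows)

# Moment-matched fits: Gamma via shape = m^2/v, rate = m/v; Normal via (m, sqrt(v))
shape, rate = m * m / v, m / v
gamma_total = sum(gamma_nll(y, shape, rate) for y in flows)
normal_total = sum(normal_nll(y, m, math.sqrt(v)) for y in flows)
# The skew-aware Gamma fit attains the lower total NLL on this sample
```

In a DeepAR-style setup, the network would emit `shape` and `rate` per time step and be trained by minimizing exactly this Gamma NLL.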

27 pages, 4595 KB  
Article
The Unit Inverse Maxwell–Boltzmann Distribution: A Novel Single-Parameter Model for Unit-Interval Data
by Murat Genç and Ömer Özbilen
Axioms 2025, 14(8), 647; https://doi.org/10.3390/axioms14080647 - 21 Aug 2025
Viewed by 222
Abstract
The Unit Inverse Maxwell–Boltzmann (UIMB) distribution is introduced as a novel single-parameter model for data constrained within the unit interval (0,1), derived through an exponential transformation of the Inverse Maxwell–Boltzmann distribution. Designed to address the limitations of traditional unit-interval distributions, the UIMB model exhibits flexible density shapes and hazard rate behaviors, including right-skewed, left-skewed, unimodal, and bathtub-shaped patterns, making it suitable for applications in reliability engineering, environmental science, and health studies. This study derives the statistical properties of the UIMB distribution, including moments, quantiles, survival, and hazard functions, as well as stochastic ordering, entropy measures, and the moment-generating function, and evaluates its performance through simulation studies and real-data applications. Various estimation methods, including maximum likelihood, Anderson–Darling, maximum product spacing, least-squares, and Cramér–von Mises, are assessed, with maximum likelihood demonstrating superior accuracy. Simulation studies confirm the model’s robustness under normal and outlier-contaminated scenarios, with MLE showing resilience across varying skewness levels. Applications to manufacturing and environmental datasets reveal the UIMB distribution’s exceptional fit compared to competing models, as evidenced by lower information criteria and goodness-of-fit statistics. The UIMB distribution’s computational efficiency and adaptability position it as a robust tool for modeling complex unit-interval data, with potential for further extensions in diverse domains. Full article
(This article belongs to the Section Mathematical Analysis)

21 pages, 1434 KB  
Article
Estimating Skewness and Kurtosis for Asymmetric Heavy-Tailed Data: A Regression Approach
by Joseph H. T. Kim and Heejin Kim
Mathematics 2025, 13(16), 2694; https://doi.org/10.3390/math13162694 - 21 Aug 2025
Viewed by 358
Abstract
Estimating skewness and kurtosis from real-world data remains a long-standing challenge in actuarial science and financial risk management, where these higher-order moments are critical for capturing asymmetry and tail risk. Traditional moment-based estimators are known to be highly sensitive to outliers and often fail when the assumption of normality is violated. Despite numerous extensions—from robust moment-based methods to quantile-based measures—being proposed over the decades, no universally satisfactory solution has been reported, and many existing methods exhibit limited effectiveness, particularly under challenging distributional shapes. In this paper we propose a novel method that jointly estimates skewness and kurtosis based on a regression adaptation of the Cornish–Fisher expansion. By modeling the empirical quantiles as a cubic polynomial of the standard normal variable, the proposed approach produces a reliable and efficient estimator that better captures distributional shape without strong parametric assumptions. Our comprehensive simulation studies show that the proposed method performs much better than existing estimators across a wide range of distributions, especially when the data are skewed or heavy-tailed, as is typical in actuarial and financial applications. Full article
(This article belongs to the Special Issue Actuarial Statistical Modeling and Applications)
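The core regression idea — modeling empirical quantiles as a cubic polynomial of standard normal quantiles — can be sketched in a few lines. This is a simplified reading of the abstract, not the authors' exact estimator; the coefficient-to-moment mapping noted in the comments is the standard first-order Cornish–Fisher approximation and is an assumption here.

```python
import numpy as np
from scipy.stats import norm

def cornish_fisher_fit(data):
    """Regress empirical quantiles on a cubic polynomial of standard
    normal quantiles (a sketch of the Cornish-Fisher regression idea)."""
    n = len(data)
    probs = (np.arange(1, n + 1) - 0.5) / n   # plotting positions
    z = norm.ppf(probs)                       # standard normal quantiles
    X = np.column_stack([np.ones(n), z, z ** 2, z ** 3])
    beta, *_ = np.linalg.lstsq(X, np.sort(data), rcond=None)
    return beta  # [a, b, c, d] in q(p) ~ a + b*z + c*z^2 + d*z^3

# For standardized data, first-order Cornish-Fisher suggests
# skewness ~ 6c and excess kurtosis ~ 24d (small-departure approximation).
rng = np.random.default_rng(0)
beta = cornish_fisher_fit(rng.standard_normal(20_000))
# On a normal sample, beta should be close to [0, 1, 0, 0]
```

Because the fit pools information across all quantiles rather than relying on third and fourth sample moments, it is far less sensitive to individual outliers.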

26 pages, 9294 KB  
Article
Bayesian Analysis of Bitcoin Volatility Using Minute-by-Minute Data and Flexible Stochastic Volatility Models
by Makoto Nakakita, Tomoki Toyabe and Teruo Nakatsuma
Mathematics 2025, 13(16), 2691; https://doi.org/10.3390/math13162691 - 21 Aug 2025
Viewed by 960
Abstract
This study analyzes the volatility of Bitcoin using stochastic volatility models fitted to one-minute transaction data for the BTC/USDT pair between 1 April 2023, and 31 March 2024. Bernstein polynomial terms were introduced to accommodate intraday and intraweek seasonality, and flexible return distributions were used to capture distributional characteristics. Seven return distributions—normal, Student-t, skew-t, Laplace, asymmetric Laplace (AL), variance gamma, and skew variance gamma—were considered. We further incorporated explanatory variables derived from the trading volume and price changes to assess the effects of order flow. Our results reveal structural market changes, including a clear regime shift around October 2023, when the asymmetric Laplace distribution became the dominant model. Regression coefficients suggest a weakening of the volume–volatility relationship after September and the presence of non-persistent leverage effects. These findings highlight the need for flexible, distribution-aware modeling in 24/7 digital asset markets, with implications for market monitoring, volatility forecasting, and crypto risk management. Full article

22 pages, 1710 KB  
Article
Machine Learning Techniques Improving the Box–Cox Transformation in Breast Cancer Prediction
by Sultan S. Alshamrani
Electronics 2025, 14(16), 3173; https://doi.org/10.3390/electronics14163173 - 9 Aug 2025
Viewed by 434
Abstract
Breast cancer remains a major global health problem, characterized by high incidence and mortality rates. Developing accurate prediction models is essential to improving early detection and treatment outcomes. Machine learning (ML) has become a valuable resource in breast cancer prediction; however, the complexities inherent in medical data, including biases and imbalances, can hinder the effectiveness of these models. This paper explores combining the Box–Cox transformation with ML models to normalize data distributions and stabilize variance, thereby enhancing prediction accuracy. Two datasets were analyzed: a synthetic gamma-distributed dataset that simulates skewed real-world data and the Surveillance, Epidemiology, and End Results (SEER) breast cancer dataset, which displays imbalanced real-world data. Four distinct experimental scenarios were conducted to evaluate the impact of the Box–Cox transformation across different lambda values: the ML models on the synthetic dataset, on the SEER dataset with the Box–Cox transformation, on the SEER dataset with the logarithmic transformation, and on the SEER dataset with Synthetic Minority Over-sampling Technique (SMOTE) augmentation. The results show that the Box–Cox transformation significantly improves the performance of Artificial Intelligence (AI) models, particularly the stacking model, which achieved the highest accuracy (94.53%) and F1 score (94.74%). This study demonstrates the importance of feature transformation in healthcare analytics, offering a scalable framework for improving breast cancer prediction that is potentially applicable to other medical datasets with similar challenges. Full article
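The Box–Cox step on a gamma-distributed synthetic feature can be sketched with SciPy. The gamma parameters and sample size below are illustrative stand-ins for the synthetic dataset described, not the paper's settings; `scipy.stats.boxcox` chooses the lambda by maximum likelihood when none is supplied.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic right-skewed feature, mimicking a gamma-distributed dataset
skewed = rng.gamma(shape=2.0, scale=1.5, size=5000)

# Box-Cox requires strictly positive input; lambda is fit by maximum likelihood
transformed, lam = stats.boxcox(skewed)

skew_before = stats.skew(skewed)      # clearly positive for a Gamma(2) sample
skew_after = stats.skew(transformed)  # pulled toward zero
```

The transformed, near-symmetric feature is then what gets fed to the ML models; a fixed lambda can instead be passed as `stats.boxcox(skewed, lmbda=...)` to reproduce specific scenarios.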

15 pages, 24657 KB  
Article
Identification and Genetic Analysis of Downy Mildew Resistance in Intraspecific Hybrids of Vitis vinifera L.
by Xing Han, Yihan Li, Zhilei Wang, Zebin Li, Nanyang Li, Hua Li and Xinyao Duan
Plants 2025, 14(15), 2415; https://doi.org/10.3390/plants14152415 - 4 Aug 2025
Viewed by 379
Abstract
Downy mildew caused by Plasmopara viticola is an important disease in grape production, particularly in the highly susceptible, widely cultivated Vitis vinifera L. Breeding for disease resistance is an effective solution, and V. vinifera intraspecific crosses can yield progeny with both disease resistance and high quality. To assess the potential of intraspecific recurrent selection in V. vinifera (IRSV) in improving grapevine resistance to downy mildew and to analyze the pattern of disease resistance inheritance, the disease-resistant variety Ecolly was selected as one of the parents and crossed with Cabernet Sauvignon, Marselan, and Dunkelfelder, respectively, creating three reciprocal combinations and resulting in 1657 hybrid F1 progenies. The primary results are as follows: (1) significant differences in disease resistance were found among grape varieties, as well as between different vintages of the same variety; (2) the leaf downy mildew resistance levels of F1 progeny of different hybrid combinations conformed to a skewed normal distribution and showed some maternal dominance; (3) the degree of leaf bulbous elevation was negatively correlated with the level of leaf downy mildew resistance, and its correlation with the level of field resistance was higher; (4) five progenies with higher levels of both field and in vitro disease resistance were obtained. Intraspecific hybridization can improve the disease resistance of offspring through super-parent genetic effects, and Ecolly can be used as breeding material for recurrent hybridization to obtain highly resistant varieties. Full article

18 pages, 761 KB  
Article
A Priori Sample Size Determination for Estimating a Location Parameter Under a Unified Skew-Normal Distribution
by Cong Wang, Weizhong Tian and Jingjing Yang
Symmetry 2025, 17(8), 1228; https://doi.org/10.3390/sym17081228 - 4 Aug 2025
Viewed by 256
Abstract
The a priori procedure (APP) is concerned with determining appropriate sample sizes to ensure that sample statistics to be obtained are likely to be good estimators of corresponding population parameters. Previous researchers have shown how to compute a priori confidence interval means or locations for normal and skew-normal distributions. However, two critical limitations persist in the literature: (1) While numerous skewed models have been proposed, the APP equations for location parameters have only been formally established for the basic skew-normal distributions. (2) Even within this fundamental framework, the APPs for sample size determinations in estimating locations are constructed on samples of specifically dependent observations having multivariate skew-normal distributions jointly. Our work addresses these limitations by extending a priori reasoning to the more comprehensive unified skew-normal (SUN) distribution. The SUN family not only encompasses multiple existing skew-normal models as special cases but also enables broader practical applications through its capacity to model mixed skewness patterns and diverse tail behaviors. In this paper, we establish APP equations for determining the required sample sizes and set up confidence intervals for the location parameter in the one-sample case, as well as for the difference in locations in matched pairs and two independent samples, assuming independent observations from the SUN family. This extension addresses a critical gap in the literature and offers a valuable contribution to the field. Simulation studies support the equations presented, and two applications involve real data sets for illustrations of our main results. Full article

32 pages, 12348 KB  
Article
Advances in Unsupervised Parameterization of the Seasonal–Diurnal Surface Wind Vector
by Nicholas J. Cook
Meteorology 2025, 4(3), 21; https://doi.org/10.3390/meteorology4030021 - 29 Jul 2025
Viewed by 278
Abstract
The Offset Elliptical Normal (OEN) mixture model represents the seasonal–diurnal surface wind vector for wind engineering design applications. This study upgrades the parameterization of OEN by accounting for changes in format of the global database of surface observations, improving performance by eliminating manual supervision and extending the scope of the model to include skewness. The previous coordinate transformation of binned speed and direction, used to evaluate the joint probability distributions of the wind vector, is replaced by direct kernel density estimation. The slow process of sequentially adding additional components is replaced by initializing all components together using fuzzy clustering. The supervised process of sequencing each mixture component through time is replaced by a fully automated unsupervised process using pattern matching. Previously reported departures from normal in the tails of the fuzzy-demodulated OEN orthogonal vectors are investigated by directly fitting the bivariate skew generalized t distribution, showing that the small observed skew is likely real but that the observed kurtosis is an artefact of the demodulation process, leading to a new Offset Skew Normal mixture model. The supplied open-source R scripts fully automate parametrization for locations in the NCEI Integrated Surface Hourly global database of wind observations. Full article

25 pages, 7778 KB  
Article
Pressure Characteristics Analysis of the Deflector Jet Pilot Stage Under Dynamic Skewed Velocity Distribution
by Zhilin Cheng, Wenjun Yang, Liangcai Zeng and Lin Wu
Aerospace 2025, 12(7), 638; https://doi.org/10.3390/aerospace12070638 - 17 Jul 2025
Viewed by 302
Abstract
The velocity distribution at the deflector jet outlet significantly influences the pressure characteristics of the pilot stage, thereby affecting the dynamic performance of the servo valve. Conventional mathematical models fail to account for the influence of dynamic velocity distribution on pilot stage pressure characteristics, resulting in significant deviations from actual situations. As the deflector shifts, the secondary jet velocity distribution transitions from a symmetric to an asymmetric dynamic profile, altering the pressure within the receiving chambers. To address this, a dynamic skewed velocity distribution model is proposed to more accurately capture the pressure characteristics. The relationship between the skewness coefficient and deflector displacement is established, and the pressure calculation method for the receiving chambers is refined accordingly. A comparative analysis shows that the proposed model aligns most closely with computational fluid dynamics results, achieving a 98% match in velocity distribution and a maximum pressure error of 1.43%. This represents an improvement of 84.98% over the normal model and 82.35% over the uniform model, confirming the superior accuracy of the dynamic skewed model in pilot stage pressure calculation. Full article
(This article belongs to the Special Issue Aerospace Vehicles and Complex Fluid Flow Modelling)

22 pages, 1328 KB  
Article
Genetic Analysis of Main Gene + Polygenic Gene of Nutritional Traits of Land Cotton Cottonseed
by Yage Li, Weifeng Guo, Liangrong He and Xinchuan Cao
Agronomy 2025, 15(7), 1713; https://doi.org/10.3390/agronomy15071713 - 16 Jul 2025
Viewed by 295
Abstract
Background: The regulation of oil and protein contents in cottonseed is governed by a complex genetic network. Gaining insight into the mechanisms controlling these traits is necessary for dissecting the formation patterns of cottonseed quality. Method: In this study, Xinluzhong 37 (P1) and Xinluzhong 51 (P2) were selected as parental lines for two reciprocal crosses: P1 × P2 (F1) and its reciprocal P2 × P1 (F1′). Each F1 was selfed and backcrossed to both parents to generate the F2 (F2′), B1 (B1′), and B2 (B2′) generations. To assess nutritional traits in hairy (non-delinted) and lint-free (delinted) seeds, two indicators, oil content and protein content, were measured in both seed types. Joint segregation analysis was employed to analyze the inheritance of these traits, based on a major gene plus polygene model. Results: In the orthogonal crosses, the CVs for the four nutritional traits ranged over 2.710–7.879%, 4.086–11.070%, 2.724–6.727%, and 3.717–9.602%. In the reciprocal crosses, CVs ranged over 2.710–8.053%, 4.086–9.572%, 2.724–6.376%, and 3.717–8.845%. All traits exhibited normal or skewed-normal distributions. For oil content in undelinted/delinted seeds, polygenic heritabilities in the orthogonal cross were 0.64/0.52, and 0.40/0.36 in the reciprocal cross. For protein content, major-gene heritabilities in the orthogonal cross were 0.79 (undelinted) and 0.78 (delinted), while those in the reciprocal cross were both 0.62. Conclusions: Oil and protein contents in cottonseeds are quantitative traits. In both orthogonal and reciprocal crosses, oil content is controlled by multiple genes and is shaped by additive, dominance, and epistatic effects. Protein content, in contrast, is largely controlled by two major genes along with minor genes. In the P1 × P2 combination, major genes act through additive, dominance, and epistatic effects, while in the P2 × P1 combination, their effects are additive only.
In both combinations, minor genes contribute through additive and dominance effects. In summary, the oil content in cottonseed is mainly regulated by polygenes, whereas the protein content is primarily determined by major genes. These genetic features in both linted, and lint-free seeds may offer a theoretical foundation for molecular breeding aimed at improving cottonseed oil and protein quality. Full article
(This article belongs to the Section Crop Breeding and Genetics)

10 pages, 339 KB  
Article
Continuity Correction and Standard Error Calculation for Testing in Proportional Hazards Models
by Daniel Baumgartner and John E. Kolassa
Stats 2025, 8(3), 61; https://doi.org/10.3390/stats8030061 - 14 Jul 2025
Viewed by 341
Abstract
Standard asymptotic inference for proportional hazards models is conventionally performed by calculating a standard error for the estimate and comparing the estimate divided by the standard error to a standard normal distribution. In this paper, we compare various standard error estimates, including those based on the inverse observed information, the inverse expected information, and the jackknife. Furthermore, correction for continuity is compared to omitting this correction. We find that correction for continuity represents an important improvement in the quality of approximation, and furthermore note that the usual naive standard error yields a distribution closer to normality, as measured by skewness and kurtosis, than any of the other standard errors investigated. Full article
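Among the candidate standard errors, the jackknife has a simple generic form: recompute the statistic with each observation left out and scale the spread of the leave-one-out values. The sketch below is the textbook estimator, not the paper's implementation; for the sample mean it reproduces the familiar s/√n exactly.

```python
import numpy as np

def jackknife_se(data, stat):
    """Leave-one-out jackknife standard error of the statistic `stat`."""
    n = len(data)
    loo = np.array([stat(np.delete(data, i)) for i in range(n)])
    # Jackknife variance: (n-1)/n * sum of squared deviations of LOO values
    return np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
se = jackknife_se(x, np.mean)
# For the sample mean this equals the usual s / sqrt(n)
```

The same function applies to a partial-likelihood coefficient by passing a fitting routine as `stat`, at the cost of n refits.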

16 pages, 1288 KB  
Article
Quantile Estimation Based on the Log-Skew-t Linear Regression Model: Statistical Aspects, Simulations, and Applications
by Raúl Alejandro Morán-Vásquez, Anlly Daniela Giraldo-Melo and Mauricio A. Mazo-Lopera
Stats 2025, 8(3), 58; https://doi.org/10.3390/stats8030058 - 11 Jul 2025
Viewed by 361
Abstract
We propose a robust linear regression model assuming a log-skew-t distribution for the response variable, with the aim of exploring the association between the covariates and the quantiles of a continuous and positive response variable under skewness and heavy tails. This model includes the log-skew-normal and log-t linear regression models as special cases. Our simulation studies indicate good performance of the quantile estimation approach and its outperformance relative to the classical quantile regression model. The practical applicability of our methodology is demonstrated through an analysis of two real datasets. Full article
(This article belongs to the Special Issue Robust Statistics in Action II)
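Because the model works on the log scale, quantiles of the positive response follow by exponentiating quantiles of the log-scale fit (exp is strictly increasing, so it maps quantiles to quantiles). The sketch below uses the log-skew-normal special case mentioned in the abstract, via SciPy's `skewnorm`; the shape, location, and scale values are illustrative.

```python
import numpy as np
from scipy import stats

# If log(Y) ~ skew-normal(a, loc, scale), then q_Y(p) = exp(q_logY(p)),
# since the exponential is strictly increasing. Parameters are illustrative.
a, loc, scale = 3.0, 0.0, 0.5
p = 0.5
q_y = np.exp(stats.skewnorm.ppf(p, a, loc=loc, scale=scale))

# Sanity check against a large simulated log-skew-normal sample
rng = np.random.default_rng(1)
sample = np.exp(stats.skewnorm.rvs(a, loc=loc, scale=scale,
                                   size=200_000, random_state=rng))
emp_q = np.quantile(sample, p)
```

In the regression setting, `loc` becomes the linear predictor, so covariate effects shift every quantile of the response multiplicatively.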
