Search Results (2,155)

Search Parameters:
Keywords = maximum likelihood estimate

18 pages, 324 KB  
Article
A Comparative Study of Trimmed L-Moments, Direct L-Moments, and Maximum Likelihood Estimation for the Inverse Weibull Distribution Under Type-I Right Censoring
by Hager Ahmad Ibrahim and Ahmed R. El-Saeed
Symmetry 2025, 17(11), 1801; https://doi.org/10.3390/sym17111801 (registering DOI) - 25 Oct 2025
Abstract
This paper presents a comparative evaluation of three distinct methodologies for estimating the parameters of the Inverse Weibull distribution under Type-I right censoring: trimmed linear moments (using Type-AT and Type-BT variants), direct linear moments (using Type-AD and Type-BD variants), and maximum likelihood estimation. The performance of these methods is assessed through Monte Carlo simulations, focusing on estimation accuracy, relative absolute bias, and root mean square error to identify the most appropriate approach. The practical applicability of these techniques is demonstrated through a real-world dataset analysis. Full article
(This article belongs to the Section Mathematics)
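As a minimal illustration of the maximum likelihood step only, the sketch below fits a two-parameter Inverse Weibull distribution, assuming the common parameterization F(x) = exp(-(x/scale)^(-shape)), to simulated data censored at a fixed time T. The trimmed and direct L-moment variants compared in the paper are not reproduced, and all parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, x, censor_time):
    """Negative log-likelihood for Inverse Weibull data under Type-I right censoring.

    Observations above censor_time are only known to exceed it and contribute
    the log-survival term log(1 - F(censor_time)).
    """
    shape, scale = params
    if shape <= 0 or scale <= 0:
        return np.inf
    observed = x[x <= censor_time]
    n_censored = np.sum(x > censor_time)
    z = (observed / scale) ** (-shape)
    # log f(x) = log(shape/scale) + (-shape - 1) * log(x/scale) - (x/scale)^(-shape)
    log_f = np.log(shape / scale) + (-shape - 1.0) * np.log(observed / scale) - z
    # log S(T) = log(1 - exp(-(T/scale)^(-shape)))
    zt = (censor_time / scale) ** (-shape)
    log_surv = np.log1p(-np.exp(-zt))
    return -(log_f.sum() + n_censored * log_surv)

rng = np.random.default_rng(0)
# Simulate Inverse Weibull via the inverse CDF: X = scale * (-log U)^(-1/shape)
true_shape, true_scale, censor_time = 2.0, 1.5, 3.0
u = rng.uniform(size=500)
x = true_scale * (-np.log(u)) ** (-1.0 / true_shape)

fit = minimize(neg_log_lik, x0=[1.0, 1.0], args=(x, censor_time), method="Nelder-Mead")
print("MLE (shape, scale):", fit.x)
```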
25 pages, 1288 KB  
Article
An Analysis of Implied Volatility, Sensitivity, and Calibration of the Kennedy Model
by Dalma Tóth-Lakits, Miklós Arató and András Ványolos
Mathematics 2025, 13(21), 3396; https://doi.org/10.3390/math13213396 (registering DOI) - 24 Oct 2025
Abstract
The Kennedy model provides a flexible and mathematically consistent framework for modeling the term structure of interest rates, leveraging Gaussian random fields to capture the dynamics of forward rates. Building upon our earlier work, where we developed both theoretical results—including novel proofs of the martingale property, connections between the Kennedy and HJM frameworks, and parameter estimation theory—and practical calibration methods, using maximum likelihood, Radon–Nikodym derivatives, and numerical optimization (stochastic gradient descent) on simulated and real par swap rate data, this study extends the analysis in several directions. We derive detailed formulas for the volatilities implied by the Kennedy model and investigate their asymptotic properties. A comprehensive sensitivity analysis is conducted to evaluate the impact of key parameters on derivative prices. We implement an industry-standard Monte Carlo method, tailored to the conditional distribution of the Kennedy field, to efficiently generate scenarios consistent with observed initial forward curves. Furthermore, we present closed-form pricing formulas for various interest rate derivatives, including zero-coupon bonds, caplets, floorlets, swaplets, and the par swap rate. A key advantage of these results is that the formulas are expressed explicitly in terms of the initial forward curve and the original parameters of the Kennedy model, which ensures both analytical tractability and consistency with market-observed data. These closed-form expressions can be directly utilized in calibration procedures, substantially accelerating multidimensional nonlinear optimization algorithms. Moreover, given an observed initial forward curve, the model provides significantly more accurate pricing formulas, enhancing both theoretical precision and practical applicability. Finally, we calibrate the Kennedy model to market-observed caplet prices. The findings provide valuable insights into the practical applicability and robustness of the Kennedy model in real-world financial markets. Full article
(This article belongs to the Special Issue Modern Trends in Mathematics, Probability and Statistics for Finance)
13 pages, 2511 KB  
Article
NLOS Identification and Error Compensation Method for UWB in Workshop Scene
by Yu Su, Quan Yu, Xiaohao Xia, Wenfeng Li, Lijun He and Taiwei Yang
Sensors 2025, 25(21), 6555; https://doi.org/10.3390/s25216555 (registering DOI) - 24 Oct 2025
Abstract
To address the frequent safety incidents caused by positioning uncertainty under NLOS (Non-Line-of-Sight) interference in complex manufacturing workshops, this paper aims to achieve high-precision distance measurement and positioning in such scenarios. First, common NLOS identification methods are analyzed, and a rapid NLOS identification method combining received signal energy and ranging residuals is proposed. Building on this foundation, a ranging error compensation method based on maximum likelihood estimation and adaptive extended Kalman filtering is designed. Finally, static experiments verify the effectiveness of the proposed NLOS identification and ranging error compensation methods. Experimental results indicate that the proposed method significantly improves ranging accuracy and offers considerable advantages over traditional Kalman filtering algorithms. Full article
(This article belongs to the Section Industrial Sensors)
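The paper's specific NLOS features (received signal energy and ranging residuals) and its adaptive extended Kalman filter are not reproduced here; the sketch below only illustrates the general idea with a scalar constant-position Kalman filter that flags a range sample as likely NLOS when its normalized innovation is large and then inflates the measurement noise for that step. All thresholds and noise values are assumptions.

```python
import numpy as np

def filter_ranges(z, q=1e-3, r=0.05, nlos_gate=3.0, nlos_inflation=25.0):
    """Scalar Kalman filter over UWB range measurements z.

    A measurement whose normalized innovation exceeds nlos_gate is flagged as
    a likely NLOS outlier and its measurement variance is inflated before the
    update, which down-weights it instead of discarding it.
    """
    x_est, p_est = z[0], 1.0          # initial state and variance
    estimates, nlos_flags = [], []
    for zk in z:
        # Predict (constant-position model: state carries over, variance grows by q)
        x_pred, p_pred = x_est, p_est + q
        # Innovation and its variance under the nominal measurement noise r
        innov = zk - x_pred
        s = p_pred + r
        is_nlos = abs(innov) / np.sqrt(s) > nlos_gate
        r_eff = r * nlos_inflation if is_nlos else r
        # Update with the (possibly inflated) measurement variance
        k = p_pred / (p_pred + r_eff)
        x_est = x_pred + k * innov
        p_est = (1.0 - k) * p_pred
        estimates.append(x_est)
        nlos_flags.append(is_nlos)
    return np.array(estimates), np.array(nlos_flags)

rng = np.random.default_rng(1)
true_range = 7.5
ranges = true_range + rng.normal(0.0, 0.05, size=200)
ranges[60:80] += 0.8                   # positive bias typical of NLOS propagation
smoothed, flags = filter_ranges(ranges)
print("flagged NLOS samples:", flags.sum())
```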
23 pages, 882 KB  
Article
A Gauss Hypergeometric-Type Model for Heavy-Tailed Survival Times in Biomedical Research
by Jiju Gillariose, Mahmoud M. Abdelwahab, Joshin Joseph and Mustafa M. Hasaballah
Symmetry 2025, 17(11), 1795; https://doi.org/10.3390/sym17111795 - 24 Oct 2025
Viewed by 73
Abstract
In this study, we introduced and analyzed the Slash–Log–Logistic (SlaLL) distribution, a novel statistical model developed by applying the slash methodology to log–logistic and beta distributions. The SlaLL distribution is particularly suited for modeling datasets characterized by heavy tails and extreme values, frequently encountered in survival time analyses. We derived the mathematical representation of the distribution involving Gauss hypergeometric and beta functions, explicitly established the probability density function, cumulative distribution function, hazard rate function, and reliability function, and provided clear definitions of its moments. Through comprehensive simulation studies, the accuracy and robustness of maximum likelihood and Bayesian methods for parameter estimation were validated. Comparative empirical analyses demonstrated the SlaLL distribution’s superior fitting performance over well-known slash-based models, emphasizing its practical utility in accurately capturing the complexities of real-world survival time data. Full article
(This article belongs to the Section Mathematics)
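The SlaLL density itself involves Gauss hypergeometric and beta functions and is not reconstructed here; as a much simpler, hedged illustration of the maximum likelihood step for heavy-tailed survival times, the sketch below fits the plain log-logistic (Fisk) distribution with SciPy. All parameter values are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Simulate heavy-tailed survival times from a log-logistic (Fisk) distribution.
times = stats.fisk.rvs(c=1.8, scale=2.0, size=400, random_state=rng)

# Maximum likelihood fit with the location fixed at zero (lifetimes start at 0).
shape_hat, loc_hat, scale_hat = stats.fisk.fit(times, floc=0)
print(f"shape = {shape_hat:.3f}, scale = {scale_hat:.3f}")

# Compare the fitted and empirical upper tails.
print("P(T > 10) fitted:   ", stats.fisk.sf(10, shape_hat, loc=0, scale=scale_hat))
print("P(T > 10) empirical:", np.mean(times > 10))
```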
18 pages, 908 KB  
Article
Bayesian Estimation of Multicomponent Stress–Strength Model Using Progressively Censored Data from the Inverse Rayleigh Distribution
by Asuman Yılmaz
Entropy 2025, 27(11), 1095; https://doi.org/10.3390/e27111095 - 23 Oct 2025
Viewed by 76
Abstract
This paper presents a comprehensive study on the estimation of multicomponent stress–strength reliability under progressively censored data, assuming the inverse Rayleigh distribution. Both maximum likelihood estimation and Bayesian estimation methods are considered. The loss function and prior distribution play crucial roles in Bayesian inference. Therefore, Bayes estimators of the unknown model parameters are obtained under symmetric (squared error loss function) and asymmetric (linear exponential and general entropy) loss functions using gamma priors. Lindley and MCMC approximation methods are used for Bayesian calculations. Additionally, asymptotic confidence intervals based on maximum likelihood estimators and Bayesian credible intervals constructed via Markov Chain Monte Carlo methods are presented. An extensive Monte Carlo simulation study compares the efficiencies of classical and Bayesian estimators, revealing that Bayesian estimators outperform classical ones. Finally, a real-life data example is provided to illustrate the practical applicability of the proposed methods. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
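The paper treats the multicomponent, progressively censored case with Bayesian estimators; the sketch below covers only a simplified single-component, complete-sample version, where the inverse Rayleigh CDF F(x) = exp(-lambda / x^2) gives the closed-form MLE lambda_hat = n / sum(1 / x_i^2) and the stress-strength reliability P(Y < X) = lambda_X / (lambda_X + lambda_Y). Parameter values are illustrative.

```python
import numpy as np

def inverse_rayleigh_mle(x):
    """MLE of lambda for the inverse Rayleigh CDF F(x) = exp(-lambda / x**2).

    Setting the derivative of the log-likelihood to zero gives
    lambda_hat = n / sum(1 / x_i**2).
    """
    return len(x) / np.sum(1.0 / x**2)

def sample_inverse_rayleigh(lam, size, rng):
    """Inverse-CDF sampling: X = sqrt(-lambda / log(U))."""
    u = rng.uniform(size=size)
    return np.sqrt(-lam / np.log(u))

rng = np.random.default_rng(3)
lam_strength, lam_stress = 4.0, 2.0
strength = sample_inverse_rayleigh(lam_strength, 300, rng)
stress = sample_inverse_rayleigh(lam_stress, 300, rng)

lx, ly = inverse_rayleigh_mle(strength), inverse_rayleigh_mle(stress)
# Single-component stress-strength reliability R = P(stress < strength)
# has the closed form lambda_X / (lambda_X + lambda_Y) for this CDF.
print("R (true):       ", lam_strength / (lam_strength + lam_stress))
print("R (MLE plug-in):", lx / (lx + ly))
```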
13 pages, 2749 KB  
Article
Analysis of the Optimal Receiver System for Underwater Electromagnetic Detection
by Bo Tang, Linsen Zhang and Siwei Tan
Electronics 2025, 14(21), 4143; https://doi.org/10.3390/electronics14214143 - 22 Oct 2025
Viewed by 139
Abstract
Starting from typical applications and the characteristics of underwater electromagnetic detection systems, this paper analyzes the impact of random variables on the optimal receiver. The mathematical expression of the optimal receiver is derived using the Generalized Likelihood Ratio Test (GLRT), and the test statistics are determined. Expressions for the receiver threshold, detection probability, and false alarm probability are derived, and the system block diagram of the optimal receiver is obtained. Simulations compare the operating characteristics of the generalized likelihood ratio receiver and the matched filter, verifying that the two perform similarly under different conditions. Because the receiver relies on maximum likelihood estimates of the random parameters, it is strictly suboptimal; at high signal-to-noise ratio (SNR), however, it can replace the matched filter as the optimal receiver. Full article
(This article belongs to the Section Circuit and Signal Processing)
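The sketch below is not the paper's receiver; it illustrates the textbook GLRT for a known-shape signal with unknown amplitude in white Gaussian noise, where the amplitude is replaced by its maximum likelihood estimate, and compares detection probability with a matched filter at a fixed false-alarm rate. Signal shape, SNR, and thresholds are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n, sigma, amplitude, p_fa = 64, 1.0, 0.6, 1e-2
s = np.sin(2 * np.pi * 3 * np.arange(n) / n)       # known signal shape, unknown amplitude
n_trials = 20_000

def glrt_stat(x):
    # Amplitude MLE is (s.x) / (s.s); substituting it gives T = (s.x)^2 / (sigma^2 * s.s)
    return (s @ x) ** 2 / (sigma ** 2 * (s @ s))

def mf_stat(x):
    # Matched filter for a known positive amplitude: correlate and normalize
    return (s @ x) / (sigma * np.sqrt(s @ s))

# Thresholds for the target false-alarm probability under H0 (noise only)
thr_glrt = stats.chi2.ppf(1 - p_fa, df=1)          # GLRT statistic ~ chi2_1 under H0
thr_mf = stats.norm.ppf(1 - p_fa)                  # matched-filter statistic ~ N(0,1) under H0

noise = rng.normal(0, sigma, size=(n_trials, n))
h1 = amplitude * s + noise
pd_glrt = np.mean([glrt_stat(x) > thr_glrt for x in h1])
pd_mf = np.mean([mf_stat(x) > thr_mf for x in h1])
print(f"detection probability: GLRT = {pd_glrt:.3f}, matched filter = {pd_mf:.3f}")
```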
27 pages, 1592 KB  
Article
Information-Theoretic Reliability Analysis of Consecutive r-out-of-n:G Systems via Residual Extropy
by Anfal A. Alqefari, Ghadah Alomani, Faten Alrewely and Mohamed Kayid
Entropy 2025, 27(11), 1090; https://doi.org/10.3390/e27111090 - 22 Oct 2025
Viewed by 117
Abstract
This paper develops an information-theoretic reliability inference framework for consecutive r-out-of-n:G systems by employing the concept of residual extropy, a dual measure to entropy. Explicit analytical representations are established in tractable cases, while novel bounds are derived for more complex lifetime models, providing effective tools when closed-form expressions are unavailable. Preservation properties under classical stochastic orders and aging notions are examined, together with monotonicity and characterization results that offer deeper insights into system uncertainty. A conditional formulation, in which all components are assumed operational at a given time, is also investigated, yielding new theoretical findings. From an inferential perspective, we propose a maximum likelihood estimator of residual extropy under exponential lifetimes, supported by simulation studies and real-world reliability data. These contributions highlight residual extropy as a powerful information-theoretic tool for modeling, estimation, and decision-making in multicomponent reliability systems, thereby aligning with the objectives of statistical inference through entropy-like measures. Full article
(This article belongs to the Special Issue Recent Progress in Uncertainty Measures)
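For the exponential case mentioned in the abstract, the residual lifetime at any age is again exponential by memorylessness, so the residual extropy, defined as -(1/2) times the integral of f(x)^2, equals -lambda/4 at every age t, and the invariance property of MLE yields the plug-in estimator -1/(4 * mean). A minimal sketch, assuming complete (uncensored) exponential lifetimes:

```python
import numpy as np

def residual_extropy_exponential(lam):
    """Residual extropy of an Exponential(rate=lam) lifetime.

    Extropy is J = -(1/2) * integral of f(x)^2 dx.  For Exponential(lam) this
    equals -lam/4, and by memorylessness the residual lifetime at any age t is
    again Exponential(lam), so the residual extropy does not depend on t.
    """
    return -lam / 4.0

def residual_extropy_mle(sample):
    """Plug-in MLE: lam_hat = 1 / mean(sample), then J_hat = -lam_hat / 4."""
    lam_hat = 1.0 / np.mean(sample)
    return residual_extropy_exponential(lam_hat)

rng = np.random.default_rng(5)
lam_true = 0.8
lifetimes = rng.exponential(scale=1.0 / lam_true, size=250)
print("true residual extropy:", residual_extropy_exponential(lam_true))
print("MLE  residual extropy:", residual_extropy_mle(lifetimes))
```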
28 pages, 1946 KB  
Article
Efficient Analysis of the Gompertz–Makeham Theory in Unitary Mode and Its Applications in Petroleum and Mechanical Engineering
by Refah Alotaibi, Hoda Rezk and Ahmed Elshahhat
Axioms 2025, 14(11), 775; https://doi.org/10.3390/axioms14110775 - 22 Oct 2025
Viewed by 93
Abstract
This paper introduces a novel three-parameter probability model, the unit-Gompertz–Makeham (UGM) distribution, designed for modeling bounded data on the unit interval (0,1). By transforming the classical Gompertz–Makeham distribution, we derive a unit-support distribution that flexibly accommodates a wide range of shapes in both the density and hazard rate functions, including increasing, decreasing, bathtub, and inverted-bathtub forms. The UGM density exhibits rich patterns such as symmetric, unimodal, U-shaped, J-shaped, and uniform-like forms, enhancing its ability to fit real-world bounded data more effectively than many existing models. We provide a thorough mathematical treatment of the UGM distribution, deriving explicit expressions for its quantile function, mode, central and non-central moments, mean residual life, moment-generating function, and order statistics. To facilitate parameter estimation, eight classical techniques, including maximum likelihood, least squares, and Cramér–von Mises methods, are developed and compared via a detailed simulation study assessing their accuracy and robustness under varying sample sizes and parameter settings. The practical relevance and superior performance of the UGM distribution are demonstrated using two real-world engineering datasets, where it outperforms existing bounded models, such as beta, Kumaraswamy, unit-Weibull, unit-gamma, and unit-Birnbaum–Saunders. These results highlight the UGM distribution’s potential as a versatile and powerful tool for modeling bounded data in reliability engineering, quality control, and related fields. Full article
(This article belongs to the Special Issue Advances in the Theory and Applications of Statistical Distributions)
16 pages, 1176 KB  
Article
Flood Frequency Analysis Using the Bivariate Logistic Model with Non-Stationary Gumbel and GEV Marginals
by Laura Berbesi-Prieto and Carlos Escalante-Sandoval
Hydrology 2025, 12(11), 274; https://doi.org/10.3390/hydrology12110274 - 22 Oct 2025
Viewed by 164
Abstract
Flood frequency analysis is essential for designing resilient hydraulic infrastructure, but traditional stationary models fail to capture the influence of climate variability and land-use change. This study applies a bivariate logistic model with non-stationary marginals to eight gauging stations in Sinaloa, Mexico, each with over 30 years of maximum discharge records. We compared stationary and non-stationary Gumbel and Generalized Extreme Value (GEV) distributions, along with their bivariate combinations. Results show that the non-stationary bivariate GEV–Gumbel distribution provided the best overall performance according to AIC. Importantly, GEV and Gumbel marginals captured site-specific differences: GEV was most suitable for sites with highly variable extremes, while Gumbel offered a robust fit for more regular records. At station 10086, where a significant increasing trend was detected by the Mann–Kendall and Spearman tests, the stationary GEV estimated a 50-year return flow of 772.66 m3/s, while the non-stationary model projected 861.00 m3/s for 2075. Under stationary assumptions, this discharge would be underestimated, occurring every ~30 years by 2075. These findings demonstrate that ignoring non-stationarity leads to systematic underestimation of design floods, while non-stationary bivariate models provide more reliable, policy-relevant estimates for climate adaptation and infrastructure safety. Full article
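A hedged sketch of one ingredient only: fitting a stationary GEV and a GEV whose location parameter drifts linearly in time to simulated annual maxima, then comparing AIC. The bivariate logistic dependence structure is not reproduced, and the trend, scale, and shape values are assumptions.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def neg_log_lik(params, q, t, nonstationary):
    """Negative GEV log-likelihood; location mu(t) = mu0 + mu1 * t if nonstationary."""
    if nonstationary:
        mu0, mu1, log_sigma, xi = params
        mu = mu0 + mu1 * t
    else:
        mu0, log_sigma, xi = params
        mu = mu0
    sigma = np.exp(log_sigma)
    # scipy's genextreme uses shape c = -xi relative to the usual GEV convention
    ll = stats.genextreme.logpdf(q, c=-xi, loc=mu, scale=sigma)
    return np.inf if not np.all(np.isfinite(ll)) else -ll.sum()

rng = np.random.default_rng(6)
years = np.arange(40)
# Simulated annual maximum discharges with an upward trend in the location parameter
q_max = stats.genextreme.rvs(c=-0.1, loc=300 + 2.5 * years, scale=80,
                             size=years.size, random_state=rng)

fit_st = minimize(neg_log_lik, x0=[q_max.mean(), np.log(q_max.std()), 0.1],
                  args=(q_max, years, False), method="Nelder-Mead")
fit_ns = minimize(neg_log_lik, x0=[q_max.mean(), 0.0, np.log(q_max.std()), 0.1],
                  args=(q_max, years, True), method="Nelder-Mead")

aic_st = 2 * 3 + 2 * fit_st.fun
aic_ns = 2 * 4 + 2 * fit_ns.fun
print(f"AIC stationary = {aic_st:.1f}, AIC non-stationary = {aic_ns:.1f}")
```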
18 pages, 511 KB  
Article
Linking Motor Competence to Children’s Self-Perceptions: The Mediating Role of Physical Fitness
by Ivan Šerbetar, Jan Morten Loftesnes and Asgeir Mamen
Children 2025, 12(10), 1412; https://doi.org/10.3390/children12101412 - 20 Oct 2025
Viewed by 225
Abstract
Background/Objectives: Self-perceptions in childhood shape motivation, behavior, and well-being; however, their relationship to motor competence and physical fitness remains unclear. We tested whether physical fitness mediates the association between motor competence and domain-specific self-perceptions in middle childhood. Methods: In a school-based sample of 100 ten-year-olds (59 girls, 41 boys; 3 exclusions ≤ 5th MABC-2 percentile), children completed MABC-2 (motor competence), EUROFIT (physical fitness), and SPPC (self-perceptions). Principal component analysis of the nine EUROFIT tests yielded two factors: Motor Fitness (agility/endurance/flexibility/muscular endurance) and Strength/Size (handgrip and BMI). Parallel mediation models (MABC-2 → [Motor Fitness, Strength/Size] → SPPC) were estimated with maximum likelihood and 5000 bias-corrected bootstrap resamples. Benjamini–Hochberg FDR (q = 0.05) was applied within each path family across the six SPPC domains. Results: In baseline models (no covariates), Motor Fitness → Athletic Competence was significant after FDR (β = 0.263, p = 0.003, FDR p = 0.018). Associations with Scholastic (β = 0.217, p = 0.039, FDR p = 0.090) and Social (β = 0.212, p = 0.046, FDR p = 0.090) were positive but did not meet the FDR threshold. Strength/Size showed no associations with any SPPC domain. Direct effects from MABC-2 to SPPC were non-significant. Indirect effects via Motor Fitness were minor and not supported after FDR (e.g., Athletic: β = 0.067, p = 0.106, 95% CI [0.007, 0.174], FDR p = 0.251). In BMIz-adjusted sensitivity models, Motor Fitness remained significantly related to Athletic (β = 0.285, p = 0.008, FDR p = 0.035), Scholastic (β = 0.252, p = 0.018, FDR p = 0.035), and Social (β = 0.257, p = 0.015, FDR p = 0.035); MABC-2 → Motor Fitness was β = 0.235, p = 0.020. Some paths reached unadjusted significance but were not significant after FDR correction (all FDR p-values = 0.120 for indirect effects). Conclusions: Functional Motor Fitness, but not Strength/Size, showed small-to-moderate, domain-specific links with children’s Athletic (and, when adjusting for adiposity, Scholastic/Social) self-perceptions; mediated effects were small and not FDR-supported. Findings highlight the salience of visible, functional performances (e.g., agility/endurance tasks) for children’s self-views and support PE approaches that foster diverse motor skills and motor fitness. Because the study is cross-sectional and BMI-adjusted analyses are presented as robustness checks, caution should be exercised when interpreting the results causally. Full article
(This article belongs to the Section Global Pediatric Health)
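A bare-bones sketch of the bootstrap idea behind such mediation analyses, assuming a single mediator (X to M to Y), OLS path estimates, and a percentile bootstrap for the indirect effect a*b; the study's parallel mediators, bias-corrected resampling, and FDR correction are not reproduced, and the simulated effect sizes are arbitrary.

```python
import numpy as np

def ols_slope(y, X):
    """Return the coefficient vector from an OLS fit of y on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

def indirect_effect(x, m, y):
    """a*b indirect effect: a = slope of M on X, b = slope of Y on M given X."""
    a = ols_slope(m, x[:, None])[1]
    b = ols_slope(y, np.column_stack([m, x]))[1]
    return a * b

rng = np.random.default_rng(7)
n = 100
x = rng.normal(size=n)                       # motor competence (standardized)
m = 0.4 * x + rng.normal(size=n)             # motor fitness, partly driven by x
y = 0.3 * m + 0.05 * x + rng.normal(size=n)  # self-perception, mostly via m

point = indirect_effect(x, m, y)
boot = np.array([
    indirect_effect(x[idx], m[idx], y[idx])
    for idx in rng.integers(0, n, size=(5000, n))
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {point:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```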
17 pages, 478 KB  
Article
A Bayesian Model for Paired Data in Genome-Wide Association Studies with Application to Breast Cancer
by Yashi Bu, Min Chen, Zhenyu Xuan and Xinlei Wang
Entropy 2025, 27(10), 1077; https://doi.org/10.3390/e27101077 - 18 Oct 2025
Viewed by 185
Abstract
Complex human diseases, including cancer, are linked to genetic factors. Genome-wide association studies (GWASs) are powerful for identifying genetic variants associated with cancer but are limited by their reliance on case–control data. We propose approaches to expanding GWAS by using tumor and paired normal tissues to investigate somatic mutations. We apply penalized maximum likelihood estimation for single-marker analysis and develop a Bayesian hierarchical model to integrate multiple markers, identifying SNP sets grouped by genes or pathways, improving detection of moderate-effect SNPs. Applied to breast cancer data from The Cancer Genome Atlas (TCGA), both single- and multiple-marker analyses identify associated genes, with multiple-marker analysis providing more consistent results with external resources. The Bayesian model significantly increases the chance of new discoveries. Full article
17 pages, 341 KB  
Article
Inferences for the GKME Distribution Under Progressive Type-I Interval Censoring with Random Removals and Its Application to Survival Data
by Ela Verma, Mahmoud M. Abdelwahab, Sanjay Kumar Singh and Mustafa M. Hasaballah
Axioms 2025, 14(10), 769; https://doi.org/10.3390/axioms14100769 - 17 Oct 2025
Viewed by 171
Abstract
The analysis of lifetime data under censoring schemes plays a vital role in reliability studies and survival analysis, where complete information is often difficult to obtain. This work focuses on the estimation of the parameters of the recently proposed generalized Kavya–Manoharan exponential (GKME) distribution under progressive Type-I interval censoring, a censoring scheme that frequently arises in medical and industrial life-testing experiments. Estimation procedures are developed under both classical and Bayesian paradigms, providing a comprehensive framework for inference. In the Bayesian setting, parameter estimation is carried out using Markov Chain Monte Carlo (MCMC) techniques under two distinct loss functions: the squared error loss function (SELF) and the general entropy loss function (GELF). For interval estimation, asymptotic confidence intervals as well as highest posterior density (HPD) credible intervals are constructed. The performance of the proposed estimators is systematically evaluated through a Monte Carlo simulation study in terms of mean squared error (MSE) and the average lengths of the interval estimates. The practical usefulness of the developed methodology is further demonstrated through the analysis of a real dataset on survival times of guinea pigs exposed to virulent tubercle bacilli. The findings indicate that the proposed methods provide flexible and efficient tools for analyzing progressively interval-censored lifetime data. Full article
13 pages, 275 KB  
Article
Generalized Gamma Frailty and Symmetric Normal Random Effects Model for Repeated Time-to-Event Data
by Kai Liu, Yan Qiao Wang, Xiaojun Zhu and Narayanaswamy Balakrishnan
Symmetry 2025, 17(10), 1760; https://doi.org/10.3390/sym17101760 - 17 Oct 2025
Viewed by 221
Abstract
Clustered time-to-event data are quite common in survival analysis, and finding a suitable model that accounts for both dispersion and censoring is an important issue. In this article, we present a flexible model for repeated, overdispersed time-to-event data with right-censoring, incorporating generalized gamma and normal random effects in a Weibull distribution to accommodate overdispersion and data hierarchies, respectively. The normal random effect is symmetric: its probability density function is symmetric around its mean. Although the random effects are symmetrically distributed, the resulting frailty model has an asymmetric survival function, because the random effects enter the model multiplicatively through the hazard function and the exponential of a symmetric normal variable follows a right-skewed lognormal distribution. Because the likelihood function and its derivatives involve intractable integrals, a Monte Carlo approach is used to approximate them, and the maximum likelihood estimates of the model parameters are determined numerically. An extensive simulation study evaluates the performance of the proposed model and the inference method developed here. Finally, the usefulness of the model is demonstrated by analyzing data on recurrent asthma attacks in children and a recurrent bladder dataset known in the survival analysis literature. Full article
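A minimal sketch of the Monte Carlo integration idea for one cluster, assuming a Weibull baseline hazard multiplied by exp(b) with a shared normal random effect b ~ N(0, sigma^2); the generalized gamma frailty component and the full estimation procedure are not reproduced, and all parameter values are illustrative.

```python
import numpy as np

def cluster_log_marginal(times, events, shape, scale, sigma_b, n_draws=2000, rng=None):
    """Monte Carlo approximation of one cluster's marginal log-likelihood.

    Model: Weibull baseline hazard multiplied by exp(b), with a shared normal
    random effect b ~ N(0, sigma_b^2) for all event times in the cluster.
    The intractable integral over b is replaced by an average over draws.
    """
    rng = rng or np.random.default_rng()
    b = rng.normal(0.0, sigma_b, size=n_draws)                  # (n_draws,)
    cum_haz = (times / scale) ** shape                           # (n_obs,)
    log_haz = np.log(shape / scale) + (shape - 1) * np.log(times / scale)
    # Conditional log-likelihood of the whole cluster for each draw of b:
    # event times contribute log h + b - H*exp(b), censored times contribute -H*exp(b)
    cond_ll = (events * (log_haz[None, :] + b[:, None])
               - cum_haz[None, :] * np.exp(b)[:, None]).sum(axis=1)
    # Average the likelihoods (not the log-likelihoods) in a numerically stable way
    return np.logaddexp.reduce(cond_ll) - np.log(n_draws)

rng = np.random.default_rng(8)
times = np.array([0.8, 1.9, 2.4, 3.1])      # repeated event times within one subject
events = np.array([1, 1, 1, 0])             # last time is right-censored
print(cluster_log_marginal(times, events, shape=1.3, scale=2.0, sigma_b=0.5, rng=rng))
```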
24 pages, 710 KB  
Article
On Fintech and Financial Inclusion: Evidence from Qatar
by Ashwaq Al-Sharshani, Fatma Al-Sharshani and Ali Malik
J. Risk Financial Manag. 2025, 18(10), 586; https://doi.org/10.3390/jrfm18100586 - 15 Oct 2025
Viewed by 590
Abstract
This study examines the role of fintech adoption in enhancing financial inclusion in Qatar, with a particular focus on the mediating influence of access barriers. A structured questionnaire was administered to 220 respondents, of which 200 valid responses were retained for analysis after screening for completeness and outliers. The constructs of fintech adoption (FA), financial inclusion (FI), and access barriers (AB) were measured using validated multi-item scales adapted from prior literature. Measurement reliability and validity were confirmed through Cronbach’s alpha, composite reliability, and average variance extracted (AVE), alongside confirmatory factor analysis (CFA) for construct validity. A structural equation modeling (SEM) approach was employed to test the hypothesized relationships, using maximum likelihood estimation with bootstrap standard errors and confidence intervals. Model fit indices indicated excellent fit (χ2 = 48.983, df = 51, p = 0.554; CFI = 1.000; TLI = 1.003; RMSEA = 0.000; SRMR = 0.036). Factor loadings were all significant (p < 0.001), supporting convergent validity. However, the structural paths from FA to FI (β = −0.020, p = 0.827), AB to FI (β = −0.077, p = 0.394), and FA to AB (β = 0.054, p = 0.527) were not significant. The indirect mediation effect of AB was also statistically insignificant (β = −0.004, p = 0.700). Full article
(This article belongs to the Special Issue Behavioral Finance and Sustainable Green Investing)
14 pages, 505 KB  
Article
Modelling Interval Data with Random Intercepts: A Beta Regression Approach for Clustered and Longitudinal Structures
by Olga Usuga-Manco, Freddy Hernández-Barajas and Viviana Giampaoli
Modelling 2025, 6(4), 128; https://doi.org/10.3390/modelling6040128 - 14 Oct 2025
Viewed by 225
Abstract
Beta regression models are frequently used to model response variables in the interval (0, 1). Although these models have been applied to clustered and longitudinal data, the prediction of random effects has been limited and residual analysis has not been implemented. In this paper, a random intercept beta regression model is proposed for the complete analysis of this type of data structure. We propose several types of residuals and formulate a methodology to obtain the best prediction of the random effects. The model is developed through the parameterisation of the beta distribution in terms of its mean and dispersion parameters. The log-likelihood function is approximated by Gauss–Hermite quadrature to numerically integrate over the distribution of the random intercepts. A simulation study investigates the performance of the estimation process and the sampling distributions of the residuals. Full article
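A minimal sketch of the Gauss–Hermite approximation for one cluster, assuming the mean/precision parameterization y | b ~ Beta(mu*phi, (1-mu)*phi) with logit(mu) = X beta + b and b ~ N(0, sigma_b^2); the node count and all parameter values are illustrative.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy import stats
from scipy.special import expit

def cluster_log_marginal(y, eta_fixed, phi, sigma_b, n_nodes=20):
    """Gauss-Hermite approximation of one cluster's marginal log-likelihood.

    Model: y_ij | b_i ~ Beta(mu*phi, (1-mu)*phi) with logit(mu) = eta_fixed + b_i
    and a random intercept b_i ~ N(0, sigma_b^2).  The integral over b_i is
    replaced by sum_k w_k * g(sqrt(2)*sigma_b*x_k) / sqrt(pi).
    """
    nodes, weights = hermgauss(n_nodes)
    log_terms = []
    for x_k, w_k in zip(nodes, weights):
        b = np.sqrt(2.0) * sigma_b * x_k
        mu = expit(eta_fixed + b)
        ll = stats.beta.logpdf(y, mu * phi, (1.0 - mu) * phi).sum()
        log_terms.append(np.log(w_k) + ll)
    return np.logaddexp.reduce(np.array(log_terms)) - 0.5 * np.log(np.pi)

# One cluster of interval responses in (0, 1) sharing a random intercept
y = np.array([0.42, 0.55, 0.48, 0.61])
eta_fixed = np.array([0.0, 0.3, 0.1, 0.5])     # X @ beta for the cluster's rows
print(cluster_log_marginal(y, eta_fixed, phi=15.0, sigma_b=0.4))
```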