Mathematics, Volume 14, Issue 2 (January-2 2026) – 185 articles

Cover Story: The paper deals with propositional logic with sequential primitives. Such primitives arise if evaluation of propositional atoms may have side effects, or if successive evaluations of the same atom in an expression produce different outputs, which is a common case in computer programming. These primitives can be expressed in terms of the sequential conditional primitive P ◁ Q ▷ R, which can be read as “if Q then P else R” and was introduced by Hoare in 1985. We consider various so-called valuation congruences and explore new techniques for proving completeness results for axiom systems for these congruences, involving the conditional and the constants T and F for truth and falsehood. These techniques are based on the construction and transformation of so-called evaluation trees. In the figure, ¬x is defined by F ◁ x ▷ T, and x && y by y ◁ x ▷ F.
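The sequential reading above can be sketched in a few lines. This is an illustrative sketch, not code from the paper: atoms are modeled as zero-argument callables so that evaluation order (and any side effects) is explicit.

```python
def cond(p, q, r):
    """Sequential conditional P ◁ Q ▷ R: evaluate Q first, then P or R."""
    return p() if q() else r()

def neg(x):
    # ¬x is defined as F ◁ x ▷ T
    return cond(lambda: False, x, lambda: True)

def seq_and(x, y):
    # x && y is defined as y ◁ x ▷ F: x is evaluated first, y only if x holds
    return cond(y, x, lambda: False)
```

Because x is evaluated before y, seq_and short-circuits exactly like && in most programming languages.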
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF forms, with PDF as the official format. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
30 pages, 30418 KB  
Article
Differentially Private Generative Modeling via Discrete Latent Projection
by Yinchi Ge, Hui Zhang and Haijun Yang
Mathematics 2026, 14(2), 388; https://doi.org/10.3390/math14020388 - 22 Jan 2026
Viewed by 401
Abstract
Deep generative models trained on sensitive data pose significant privacy risks, yet enforcing differential privacy (DP) in high-dimensional generators often leads to severe utility degradation. We propose Differentially Private Vector-Quantized Generation (DP-VQG), a three-stage generative framework that introduces a discrete latent bottleneck as the interface for privacy preservation. DP-VQG separates geometric structure learning, differentially private discrete latent projection, and non-private prior modeling, ensuring that privacy-induced randomness operates on a finite codebook aligned with the decoder’s effective support. This design avoids off-support degradation while providing formal end-to-end DP guarantees through composition and post-processing. We provide a theoretical analysis of privacy and utility, including explicit bounds on privacy-induced distortion. Empirically, under the privacy budget of ε=10, DP-VQG attains Fréchet Inception Distance (FID) scores of 18.21 on MNIST and 77.09 on Fashion-MNIST, surpassing state-of-the-art differentially private generative models of comparable scale. Moreover, DP-VQG produces visually coherent samples on high-resolution datasets such as Flowers102, Food101, CelebA-HQ, and Cars, demonstrating scalability beyond prior end-to-end DP generative approaches. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)
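The idea of placing the privacy mechanism on a finite codebook can be illustrated with the simplest such mechanism, randomized response over K code indices. This is a generic sketch of the principle only, not the DP-VQG mechanism itself.

```python
import math
import random

def dp_codebook_select(true_index, codebook_size, epsilon, rng=None):
    # Randomized response over a finite codebook: keep the true code index
    # with probability e^eps / (e^eps + K - 1), otherwise output one of the
    # other K - 1 indices uniformly. This satisfies eps-differential privacy
    # on the index, and any decoding applied afterwards is covered by the
    # post-processing property of DP.
    rng = rng or random.Random()
    k = codebook_size
    p_keep = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if rng.random() < p_keep:
        return true_index
    other = rng.randrange(k - 1)  # uniform over the remaining indices
    return other if other < true_index else other + 1
```

Because the noise acts on code indices, every output stays on the decoder's finite support, which is the design point the abstract emphasizes.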
16 pages, 281 KB  
Article
On a Time-Fractional Biharmonic Nonlocal Initial Boundary-Value Problem with Frictional and Viscoelastic Damping Terms
by Rowaida Alrajhi and Said Mesloub
Mathematics 2026, 14(2), 387; https://doi.org/10.3390/math14020387 - 22 Jan 2026
Viewed by 214
Abstract
This research work investigates the existence, uniqueness, and stability of the solution of a time-fractional fourth-order partial differential equation, subject to two initial conditions and four nonlocal integral boundary conditions. The equation incorporates several key components: the Caputo fractional derivative operator, the Laplace operator, the biharmonic operator, as well as terms representing frictional and viscoelastic damping. The presence of these elements, particularly the nonlocal boundary constraints, introduces new mathematical challenges that require the development of advanced analytical methods. To address these challenges, we construct a functional analytic framework based on Sobolev spaces and employ energy estimates to rigorously prove the well-posedness of the problem. Full article
(This article belongs to the Special Issue Applications of Partial Differential Equations, 2nd Edition)
21 pages, 3384 KB  
Article
A Graphical Approach to the Generalized Extremal Problem of a Transported Log in a Navigable Canal
by Dusan Vallo
Mathematics 2026, 14(2), 386; https://doi.org/10.3390/math14020386 - 22 Jan 2026
Viewed by 172
Abstract
This article presents the solution to an optimization problem concerning the longest wooden log that can be floated through two perpendicularly intersecting water canals. This application problem is further generalized and solved using a graphical method. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)
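For reference, the classical (non-generalized) version of this extremal problem has a well-known closed form: for two perpendicular channels of widths a and b, the longest segment that fits around the corner has length L = (a^(2/3) + b^(2/3))^(3/2). The widths a and b here are hypothetical inputs, not values from the paper.

```python
def max_log_length(a, b):
    # Classical right-angle corner problem: the longest segment that can be
    # moved around the bend between channels of widths a and b is
    #   L = (a**(2/3) + b**(2/3)) ** (3/2).
    return (a ** (2 / 3) + b ** (2 / 3)) ** 1.5
```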
26 pages, 2162 KB  
Article
Iceberg Model as a Digital Risk Twin for the Health Monitoring of Complex Engineering Systems
by Igor Kabashkin
Mathematics 2026, 14(2), 385; https://doi.org/10.3390/math14020385 - 22 Jan 2026
Viewed by 335
Abstract
This paper introduces an iceberg-based digital risk twin (DRT) framework for the health monitoring of complex engineering systems. The proposed model transforms multidimensional sensor and contextual data into a structured, interpretable three-dimensional geometry that captures both observable and latent risk components. Each monitored parameter is represented as a vertical geometric sheet whose height encodes a normalized risk level, producing an evolving iceberg structure in which the visible and submerged regions distinguish emergent anomalies from latent degradation. A formal mathematical formulation is developed, defining the mappings from the risk vector to geometric height functions, spatial layout, and surface composition. The resulting parametric representation provides both analytical tractability and intuitive visualization. A case study involving an aircraft fuel system demonstrates the capacity of the DRT to reveal dominant risk drivers, parameter asymmetries, and temporal trends not easily observable in traditional time-series analysis. The model is shown to integrate naturally into AI-enabled health management pipelines, providing an interpretable intermediary layer between raw data streams and advanced diagnostic or predictive algorithms. Owing to its modular structure and domain-agnostic formulation, the DRT approach is applicable beyond aviation, including power grids, rail systems, and industrial equipment monitoring. The results indicate that the iceberg representation offers a promising foundation for enhancing explainability, situational awareness, and decision support in the monitoring of complex engineering systems. Full article
27 pages, 586 KB  
Article
Symmetric Double Normal Models for Censored, Bounded, and Survival Data: Theory, Estimation, and Applications
by Guillermo Martínez-Flórez, Hugo Salinas and Javier Ramírez-Montoya
Mathematics 2026, 14(2), 384; https://doi.org/10.3390/math14020384 - 22 Jan 2026
Viewed by 230
Abstract
We develop a unified likelihood-based framework for limited outcomes built on the two-piece normal family. The framework includes a censored specification that accommodates boundary inflation, a doubly truncated specification on (0,1) for rates and proportions, and a survival formulation with a log-two-piece normal baseline and Gamma frailty to account for unobserved heterogeneity. We derive closed-form building blocks (pdf, cdf, survival, hazard, and cumulative hazard), full log-likelihoods with score functions and observed information, and stable reparameterizations that enable routine optimization. Monte Carlo experiments show a small bias and declining RMSE with increasing sample size; censoring primarily inflates the variability of regression coefficients; the scale parameter remains comparatively stable, and the shape parameter is most sensitive under heavy censoring. Applications to HIV-1 RNA with a detection limit, household food expenditure on (0,1), labor-supply hours with a corner solution, and childhood cancer times to hospitalization demonstrate improved fit over Gaussian, skew-normal, and beta benchmarks according to AIC/BIC/CAIC and goodness-of-fit diagnostics, with model-implied censoring closely matching the observed fraction. The proposed formulations are tractable, flexible, and readily implementable with standard software. Full article
(This article belongs to the Section D1: Probability and Statistics)
28 pages, 564 KB  
Article
CONFIDE: CONformal Free Inference for Distribution-Free Estimation in Causal Competing Risks
by Quang-Vinh Dang, Ngoc-Son-An Nguyen and Thi-Bich-Diem Vo
Mathematics 2026, 14(2), 383; https://doi.org/10.3390/math14020383 - 22 Jan 2026
Viewed by 385
Abstract
Accurate prediction of individual treatment effects in survival analysis is often complicated by the presence of competing risks and the inherent unobservability of counterfactual outcomes. While machine learning models offer improved discriminative power, they typically lack rigorous guarantees for uncertainty quantification, which are essential for safety-critical clinical decision-making. In this paper, we introduce CONFIDE (CONFormal Inference for Distribution-free Estimation), a novel framework that bridges causal inference and conformal prediction to construct valid prediction sets for cause-specific cumulative incidence functions. Unlike traditional confidence intervals for population-level parameters, CONFIDE provides individual-level prediction sets for time-to-event outcomes, which are more clinically actionable for personalized treatment decisions by directly quantifying uncertainty in future patient outcomes rather than uncertainty in population averages. By integrating semi-parametric hazard estimation with targeted bias correction strategies, CONFIDE generates calibrated prediction sets that cover the true potential outcome with a user-specified probability, irrespective of the underlying data distribution. We empirically validate our approach on four diverse medical datasets, demonstrating that CONFIDE achieves competitive discrimination (C-index up to 0.83) while providing robust finite-sample marginal coverage guarantees (e.g., 85.7% coverage on the Bone Marrow Transplant dataset). 
We note two key limitations: (1) coverage may degrade under heavy censoring (>40%) unless inverse probability of censoring weighted (IPCW) conformal quantiles are used, as demonstrated in our sensitivity analysis; (2) while the method guarantees marginal coverage averaged over the covariate distribution, conditional coverage for specific covariate values is theoretically impossible without structural assumptions, though practical approximations via locally-adaptive calibration can improve conditional performance. Our framework effectively enables trustworthy personalized risk assessment in complex survival settings. Full article
(This article belongs to the Special Issue Statistical Models and Their Applications)
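The marginal-coverage guarantee described above is the one provided by split conformal prediction. A minimal sketch of that generic recipe (not the CONFIDE estimator itself), using absolute residuals as the conformity score:

```python
import numpy as np

def split_conformal_interval(cal_residuals, y_pred_new, alpha=0.1):
    # Split conformal prediction: the (1 - alpha)-quantile of the calibration
    # residuals |y - yhat|, with a finite-sample correction, yields intervals
    # whose marginal coverage is at least 1 - alpha under exchangeability,
    # with no assumptions on the data distribution.
    n = len(cal_residuals)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    qhat = np.quantile(cal_residuals, level, method="higher")
    return y_pred_new - qhat, y_pred_new + qhat
```

As the abstract notes, this guarantee is marginal (averaged over covariates); conditional coverage needs further assumptions or locally adaptive calibration.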
17 pages, 357 KB  
Article
Novel Bi-Univalent Subclasses Generated by the q-Analogue of the Ruscheweyh Operator and Hermite Polynomials
by Feras Yousef, Tariq Al-Hawary, Mohammad El-Ityan and Ibtisam Aldawish
Mathematics 2026, 14(2), 382; https://doi.org/10.3390/math14020382 - 22 Jan 2026
Viewed by 318
Abstract
This work introduces new bi-univalent function classes defined using the fractional q-Ruscheweyh operator and characterized by subordination to q-Hermite polynomials. We derive coefficient bounds and Fekete–Szegö inequalities for these classes and show that our results generalize several earlier findings in both the classical and q-analytic settings. The approach highlights the effectiveness of q-Hermite structures in analyzing operator-defined subclasses of bi-univalent functions. Full article
(This article belongs to the Special Issue Current Topics in Geometric Function Theory, 2nd Edition)
28 pages, 26446 KB  
Article
Interpreting Multi-Branch Anti-Spoofing Architectures: Correlating Internal Strategy with Empirical Performance
by Ivan Viakhirev, Kirill Borodin, Mikhail Gorodnichev and Grach Mkrtchian
Mathematics 2026, 14(2), 381; https://doi.org/10.3390/math14020381 - 22 Jan 2026
Viewed by 307
Abstract
Multi-branch deep neural networks like AASIST3 achieve performance comparable to the state of the art in audio anti-spoofing, yet their internal decision dynamics remain opaque to traditional input-level saliency methods. While existing interpretability efforts largely focus on visualizing input artifacts, the way individual architectural branches cooperate or compete under different spoofing attacks is not well characterized. This paper develops a framework for interpreting AASIST3 at the component level. Intermediate activations from fourteen branches and global attention modules are modeled with covariance operators whose leading eigenvalues form low-dimensional spectral signatures. These signatures train a CatBoost meta-classifier to generate TreeSHAP-based branch attributions, which we convert into normalized contribution shares and confidence scores (Cb) to quantify the model’s operational strategy. By analyzing 13 spoofing attacks from the ASVspoof 2019 benchmark, we identify four operational archetypes—ranging from “Effective Specialization” (e.g., A09, Equal Error Rate (EER) 0.04%, Cb = 1.56) to “Ineffective Consensus” (e.g., A08, EER 3.14%, Cb = 0.33). Crucially, our analysis exposes a “Flawed Specialization” mode where the model places high confidence in an incorrect branch, leading to severe performance degradation for attacks A17 and A18 (EER 14.26% and 28.63%, respectively). These quantitative findings link internal architectural strategy directly to empirical reliability, highlighting specific structural dependencies that standard performance metrics overlook. Full article
(This article belongs to the Special Issue New Solutions for Multimedia and Artificial Intelligence Security)
36 pages, 13674 KB  
Article
A Reference-Point Guided Multi-Objective Crested Porcupine Optimizer for Global Optimization and UAV Path Planning
by Zelei Shi and Chengpeng Li
Mathematics 2026, 14(2), 380; https://doi.org/10.3390/math14020380 - 22 Jan 2026
Cited by 1 | Viewed by 351
Abstract
Balancing convergence accuracy and population diversity remains a fundamental challenge in multi-objective optimization, particularly for complex and constrained engineering problems. To address this issue, this paper proposes a novel Multi-Objective Crested Porcupine Optimizer (MOCPO), inspired by the hierarchical defensive behaviors of crested porcupines. The proposed algorithm integrates four biologically motivated defense strategies—vision, hearing, scent diffusion, and physical attack—into a unified optimization framework, where global exploration and local exploitation are dynamically coordinated. To effectively extend the original optimizer to multi-objective scenarios, MOCPO incorporates a reference-point guided external archiving mechanism to preserve a well-distributed set of non-dominated solutions, along with an environmental selection strategy that adaptively partitions the objective space and enhances solution quality. Furthermore, a multi-level leadership mechanism based on Euclidean distance is introduced to provide region-specific guidance, enabling precise and uniform coverage of the Pareto front. The performance of MOCPO is comprehensively evaluated on 18 benchmark problems from the WFG and CF test suites. Experimental results demonstrate that MOCPO consistently outperforms several state-of-the-art multi-objective algorithms, including MOPSO and NSGA-III, in terms of IGD, GD, HV, and Spread metrics, achieving the best overall ranking in Friedman statistical tests. Notably, the proposed algorithm exhibits strong robustness on discontinuous, multimodal, and constrained Pareto fronts. In addition, MOCPO is applied to UAV path planning in four complex terrain scenarios constructed from real digital elevation data. The results show that MOCPO generates shorter, smoother, and more stable flight paths while effectively balancing route length, threat avoidance, flight altitude, and trajectory smoothness. 
These findings confirm the effectiveness, robustness, and practical applicability of MOCPO for solving complex real-world multi-objective optimization problems. Full article
(This article belongs to the Special Issue Advances in Metaheuristic Optimization Algorithms)
38 pages, 812 KB  
Article
Basin of Attraction Analysis in Piecewise-Linear Systems with Big-Bang Bifurcation for the Period-Increment Phenomenon
by Juan Carlos Vargas Bernal, Simeón Casanova Trujillo and Diego A. Londoño Patiño
Mathematics 2026, 14(2), 379; https://doi.org/10.3390/math14020379 - 22 Jan 2026
Viewed by 357
Abstract
This paper investigates the basins of attraction of periodic orbits arising in one-dimensional piecewise-linear discrete dynamical systems as the system parameters vary in a neighborhood of a Big-Bang bifurcation point associated with the period-increment phenomenon. In this setting, the Big-Bang point corresponds to a parameter value through which infinitely many bifurcation curves pass, leading to the successive emergence of periodic orbits whose periods increase incrementally. The analysis is carried out using a fully analytical approach, exploiting the one-dimensional nature of the system and the occurrence of border-collision bifurcations. Within this framework, we construct analytical sequences that characterize the convergence of any initial condition on the real line toward a periodic point belonging to a periodic orbit, either isolated or coexisting with another periodic orbit. As the main results, we explicitly characterize the basins of attraction of periodic orbits generated in the period-increment Big-Bang scenario and provide explicit analytical conditions on the system parameters for the existence of these periodic orbits. Moreover, we show that, in certain regions of the parameter plane, at most two periodic orbits can coexist, and we describe explicitly the structure of their corresponding basins of attraction. This work provides a new analytical perspective on basin organization in piecewise-linear systems exhibiting the period-increment phenomenon. Full article
35 pages, 522 KB  
Review
Exploring the Potential of Topological Data Analysis for Explainable Large Language Models: A Scoping Review
by Petar Sekuloski, Dimitar Kitanovski, Igor Goshev, Kostadin Mishev, Monika Simjanoska Misheva and Vesna Dimitrievska Ristovska
Mathematics 2026, 14(2), 378; https://doi.org/10.3390/math14020378 - 22 Jan 2026
Viewed by 1482
Abstract
Large language models (LLMs) have become central to modern artificial intelligence, yet their internal decision-making processes remain difficult to interpret. As interest grows in making these models more transparent and reliable, topological data analysis (TDA) has emerged as a promising mathematical approach for exploring their structure. This scoping review maps the current landscape of research where TDA tools—such as persistent homology and Mapper—are used to examine LLM components like attention patterns, latent representations, and training dynamics. By analyzing topological features across layers and tasks, these methods provide new ways to understand how language models generalize, respond to unfamiliar inputs, and shift under fine-tuning. The review also considers how TDA-based techniques contribute to broader goals in interpretability and robustness, especially in detecting hallucinations, out-of-distribution behavior, and representational collapse. Overall, the findings suggest that TDA offers a rigorous and versatile framework for studying LLMs, helping researchers uncover deeper patterns in how these models learn and reason. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)
23 pages, 743 KB  
Article
Security-Enhanced Vehicle-to-Roadside Unit Authentication Scheme for Internet of Vehicles
by Yan Sun and Qi Xie
Mathematics 2026, 14(2), 377; https://doi.org/10.3390/math14020377 - 22 Jan 2026
Viewed by 307
Abstract
Secure real-time data interaction between vehicles and transportation infrastructure such as RSUs (V2R) enables intelligent and safe driving, as well as efficient travel services, in the Internet of Vehicles (IoV); a secure and efficient V2R authentication protocol therefore plays an important role. Recently, scholars have proposed a two-factor V2R authentication protocol for the IoV. However, subsequent research has shown that this protocol is vulnerable to insider and ephemeral secret leakage attacks, and cannot achieve perfect forward secrecy. To address these security flaws, an improved scheme was further proposed. Nevertheless, this paper points out that the improved scheme still has shortcomings: it cannot provide anonymity and perfect forward secrecy, exhibits insufficient session key secrecy, remains vulnerable to password guessing attacks and RSU capture attacks, and suffers from an inappropriate pseudo-identity update mechanism. Therefore, a novel Physical Unclonable Function-based Lightweight V2R Authentication (PUF-LA) scheme is proposed, which uses Elliptic Curve Cryptography (ECC) to achieve perfect forward secrecy, uses PUF to resist device capture attacks, and achieves two-factor secrecy protection against password guessing attacks. The security of PUF-LA is formally proved in the random oracle model. Compared with related authentication schemes, PUF-LA is more secure and has low computation costs. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)
20 pages, 1930 KB  
Article
Is Weniger’s Transformation Capable of Simulating the Stieltjes Function Branch Cut?
by Riccardo Borghi
Mathematics 2026, 14(2), 376; https://doi.org/10.3390/math14020376 - 22 Jan 2026
Viewed by 272
Abstract
The resummation of Stieltjes series remains a key challenge in mathematical physics, especially when Padé approximants fail, as in the case of superfactorially divergent series. Weniger’s δ-transformation, which incorporates a priori structural information on Stieltjes series, offers a superior framework with respect to Padé. In the present work, the following fundamental question is addressed: Is the δ-transformation, once it is applied to a typical Stieltjes series, capable of correctly simulating the branch cut structure of the corresponding Stieltjes function? Here, it is proved that the intrinsic log-convexity of the Stieltjes moment sequence (guaranteed via the positivity of Hankel’s determinants) allows the necessary condition for δ to have all real poles to be satisfied. The same condition, however, is not sufficient to guarantee this. In attempting to bridge such a gap, we propose a mechanism rooted in the iterative action of a specific linear differential operator acting on a class of suitable auxiliary log-concave polynomials. To this end, we show that the denominator of the δ-approximants can always be recast as a high-order derivative of a log-concave polynomial. Then, on invoking the Gauss–Lucas theorem, a consistent geometrical justification of the δ pole positioning is proposed. Through such an approach, the pole alignment along the negative real axis can be viewed as the result of the progressive restriction of the convex hull under differentiation. Since a fully rigorous proof of this conjecture remains an open challenge, in order to substantiate it, a comprehensive numerical investigation across an extensive catalog of Stieltjes series is proposed. Our results provide systematic evidence of the potential δ-transformation ability to mimic the singularity structure of several target functions, including those involving superfactorial divergences. Full article
(This article belongs to the Section E: Applied Mathematics)
21 pages, 2096 KB  
Article
Computation of Population Variance Estimation in Simple Random Sampling Structures by Developing Generalized Estimator
by Ahlem Djebar, Abdulaziz S. Alghamdi, Manahil SidAhmed Mustafa and Sohaib Ahmad
Mathematics 2026, 14(2), 375; https://doi.org/10.3390/math14020375 - 22 Jan 2026
Cited by 1 | Viewed by 291
Abstract
The correct estimation of the population variance plays a vital role in the sampling procedure in surveys, especially when simple random sampling techniques are used. In this work, we propose a new generalized statistical inference in order to estimate the population variance using auxiliary information. We can use the relationship between the study variable and the auxiliary variable to construct a novel generalized class of estimators that is better performing in terms of minimum mean squared error (MSE) and has a higher percentage of relative efficiency than the traditional estimators. The proposed methodology is based on the existing methods of inference with the introduction of modifications to cover the known population parameters of additional auxiliary variables, like the mean, the coefficient of variation, skewness, or kurtosis. Theoretical properties such as bias and mean squared error are obtained with regard to the first-order approximation. The performance of the proposed class of estimators is checked by comparing with that of the classical variance estimators in different population conditions based on real-life data sets and a simulation study. The numerical findings have indicated that the suggested class of estimators is more effective compared to classical methods, especially in cases where there is a very high linear correlation between the auxiliary and the study variables. Also, the estimators are robust, as confirmed using various sample sizes and population structures. The research has made a significant contribution to the development of statistical procedures in survey sampling because the practical and efficient tools provided in the study were useful in estimating the variance. The results have been of great importance when applied by researchers and practitioners active in large-scale surveys. 
Subsequently, in the case of efficient utilization of auxiliary information, it is feasible to have more accurate and cost-effective statistical inference. Full article
(This article belongs to the Special Issue Computational Statistics and Data Analysis, 3rd Edition)
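The classical ratio-type variance estimator that such generalized classes extend can be written down directly. Here s_y^2 and s_x^2 are sample variances and Sx2 the known population variance of the auxiliary variable; this is the textbook baseline, not the paper's proposed estimator.

```python
import numpy as np

def ratio_variance_estimator(y_sample, x_sample, Sx2):
    # Isaki-type ratio estimator of the population variance S_y^2 under
    # simple random sampling:
    #   t_R = s_y^2 * (S_x^2 / s_x^2),
    # which gains precision over s_y^2 alone when the study and auxiliary
    # variables are strongly positively correlated.
    sy2 = np.var(y_sample, ddof=1)
    sx2 = np.var(x_sample, ddof=1)
    return sy2 * Sx2 / sx2
```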
16 pages, 294 KB  
Article
An Improved Similarity Measure for Interval-Valued Intuitionistic Fuzzy Numbers and Its Application to Multi-Attribute Decision-Making Problem
by Kartik Patra, Sanjib Sen and Shyamal Kumar Mondal
Mathematics 2026, 14(2), 374; https://doi.org/10.3390/math14020374 - 22 Jan 2026
Viewed by 384
Abstract
In this article, a new similarity measure for interval-valued intuitionistic fuzzy values (IVIFVs) is discussed. The proposed similarity measure is derived from transformed intervals and their probability density functions, together with the mean values and standard deviations of the IVIFVs. Several essential properties of the proposed measure are established. Additionally, a new algorithm based on this similarity measure is developed to solve multi-attribute decision-making (MADM) problems. The proposed method is effective for a wide range of MADM problems. To demonstrate its effectiveness, a car selection problem is considered, in which a suitable car must be chosen for a decision maker from a set of alternatives evaluated under multiple criteria. In car selection, different features often involve conflicting criteria with imprecise data; the proposed similarity measure of IVIFVs assists in determining the best alternative under these conflicting criteria. Full article
(This article belongs to the Special Issue Fuzzy Sets and Fuzzy Systems, 2nd Edition)
20 pages, 1124 KB  
Article
Scalable Neural Cryptanalysis of Block Ciphers in Federated Attack Environments
by Ongee Jeong, Seonghwan Park and Inkyu Moon
Mathematics 2026, 14(2), 373; https://doi.org/10.3390/math14020373 - 22 Jan 2026
Viewed by 428
Abstract
This paper presents an extended investigation into deep learning-based cryptanalysis of block ciphers by introducing and evaluating a multi-server attack environment. Building upon our prior work in centralized settings, we explore the practicality and scalability of deploying such attacks across multiple distributed edge servers. We assess the vulnerability of five representative block ciphers—DES, SDES, AES-128, SAES, and SPECK32/64—under two neural attack models: Encryption Emulation (EE) and Plaintext Recovery (PR), using both fully connected neural networks and Recurrent Neural Networks (RNNs) based on bidirectional Long Short-Term Memory (BiLSTM). Our experimental results show that the proposed federated learning-based cryptanalysis framework achieves performance nearly identical to that of centralized attacks, particularly for ciphers with low round complexity. Even as the number of edge servers increases to 32, the attack models maintain high accuracy in reduced-round settings. We validate our security assessments through formal statistical significance testing using two-tailed binomial tests with 99% confidence intervals. Additionally, our scalability analysis demonstrates that aggregation times remain negligible (<0.01% of total training time), confirming the computational efficiency of the federated framework. Overall, this work provides both a scalable cryptanalysis framework and valuable insights into the design of cryptographic algorithms that are resilient to distributed, deep learning-based threats. Full article
(This article belongs to the Section E: Applied Mathematics)
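The federated aggregation step the abstract describes as computationally negligible can be sketched as a FedAvg-style weighted average of per-server parameters. This is a generic illustration of that aggregation pattern, not the authors' exact implementation; sample counts and weights are invented:

```python
import numpy as np

def fedavg(server_weights, sample_counts):
    """FedAvg-style aggregation: each edge server's parameter arrays
    contribute in proportion to its local data size."""
    total = sum(sample_counts)
    agg = [np.zeros_like(w) for w in server_weights[0]]
    for weights, n in zip(server_weights, sample_counts):
        for i, w in enumerate(weights):
            agg[i] += (n / total) * w
    return agg

# Two edge servers, one parameter tensor each (illustrative values).
w1 = [np.array([1.0, 3.0])]
w2 = [np.array([3.0, 5.0])]
agg = fedavg([w1, w2], sample_counts=[1, 3])
print(agg[0])  # 0.25*[1, 3] + 0.75*[3, 5] = [2.5, 4.5]
```

Because this step is a handful of array additions per round, its cost is dwarfed by local training, which is consistent with the reported sub-0.01% aggregation overhead.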
15 pages, 286 KB  
Article
Isotopes of Biracks and Zhang Twists of Algebras
by Xiaolan Yu and Yanfei Zhang
Mathematics 2026, 14(2), 372; https://doi.org/10.3390/math14020372 - 22 Jan 2026
Viewed by 212
Abstract
In this paper, we introduce the notion of an Np-graded birack and construct its isotope. Every involutive Np-graded birack gives rise to an Np-graded Yang-Baxter algebra. We study the relation between isotopes of involutive Np-graded biracks and Zhang twists of Np-graded Yang-Baxter algebras. As an example, Yang-Baxter algebras determined by distributive solutions are proved to be Zhang twists of polynomial algebras. Full article
(This article belongs to the Section A: Algebra and Logic)
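The set-theoretic Yang-Baxter (braid) relation underlying solutions like these can be checked mechanically. The sketch below verifies it for the flip map, the simplest involutive solution; this is only an orientation for the relation itself, not the paper's graded birack construction:

```python
from itertools import product

def r(x, y):
    """The flip map (x, y) -> (y, x): the simplest involutive
    set-theoretic solution of the Yang-Baxter equation."""
    return (y, x)

def r12(t):
    """Apply r to slots 1 and 2 of a triple."""
    a, b = r(t[0], t[1])
    return (a, b, t[2])

def r23(t):
    """Apply r to slots 2 and 3 of a triple."""
    b, c = r(t[1], t[2])
    return (t[0], b, c)

def braid_holds(X):
    """Check r12 r23 r12 == r23 r12 r23 on all triples over X."""
    return all(r12(r23(r12(t))) == r23(r12(r23(t)))
               for t in product(X, repeat=3))

print(braid_holds({0, 1, 2}))  # True
```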
5 pages, 147 KB  
Editorial
Editorial for the Special Issue “Modeling and Optimization of Complex Engineering Systems Under Uncertainties”
by Debiao Meng and Shui Yu
Mathematics 2026, 14(2), 371; https://doi.org/10.3390/math14020371 - 22 Jan 2026
Viewed by 216
Abstract
The contemporary engineering frontier is defined by a paradigm shift toward hyper-integrated systems operating within increasingly volatile environments, ranging from precision micro-electronics to critical large-scale infrastructure and sustainable energy ecosystems [...] Full article
16 pages, 26561 KB  
Article
Optimal Policies in an Insurance Stackelberg Game: Demand Response and Premium Setting
by Cuixia Chen, Bing Liu, Fumei He and Darhan Bahtbek
Mathematics 2026, 14(2), 370; https://doi.org/10.3390/math14020370 - 22 Jan 2026
Viewed by 280
Abstract
This paper examines a stochastic Stackelberg differential game between an insurer and a pool of homogeneous policyholders. Policyholders dynamically optimize insurance coverage and risky asset allocations to minimize the probability of wealth shortfall, while the insurer, acting as the leader, sets the premium loading to maximize the expected exponential utility of terminal surplus. Employing dynamic programming techniques, we derive closed-form equilibrium strategies for both parties. The analysis reveals that a strong positive correlation between insurance claims and financial market returns incentivizes full coverage with modest premiums, whereas a strong negative correlation may induce market collapse as insurers exit underwriting to exploit natural hedging opportunities. Furthermore, larger policyholder pools generate diversification benefits that reduce equilibrium premiums and stimulate insurance demand. Full article
18 pages, 290 KB  
Article
Categorical Structures in Rough Set Theory and Information Systems
by Yu-Ru Syau, Churn-Jung Liau and En-Bing Lin
Mathematics 2026, 14(2), 369; https://doi.org/10.3390/math14020369 - 22 Jan 2026
Viewed by 334
Abstract
Using category-theoretic concepts, we provide insight into, and prove an intrinsic property of, the category AprS of approximation spaces and continuous functions. We also introduce rough closure and rough interior operators to characterize clopen topologies. Our main result establishes the equivalence of several categories: the category of equivalence relations and relation-preserving functions, the category of rough interior spaces and continuous functions, the category of rough closure spaces and continuous functions, and the category AprS. This work provides a deeper understanding of the interplay among rough set theory, information systems, and category theory. Full article
(This article belongs to the Section D2: Operations Research and Fuzzy Decision Making)
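The rough interior and rough closure operators mentioned here generalize Pawlak's classical lower and upper approximations, which can be sketched concretely. This is the textbook construction from an equivalence relation, given for orientation; the example universe and relation are invented:

```python
def blocks(universe, label):
    """Partition the universe into equivalence classes [x],
    where label(x) identifies x's class."""
    classes = {}
    for x in universe:
        classes.setdefault(label(x), set()).add(x)
    return list(classes.values())

def lower(A, universe, label):
    """Lower approximation (rough interior): union of classes
    entirely contained in A."""
    return set().union(*([b for b in blocks(universe, label) if b <= A] or [set()]))

def upper(A, universe, label):
    """Upper approximation (rough closure): union of classes
    that meet A."""
    return set().union(*([b for b in blocks(universe, label) if b & A] or [set()]))

U = {1, 2, 3, 4, 5, 6}
parity = lambda x: x % 2   # equivalence: same parity
A = {1, 3, 5, 6}
print(lower(A, U, parity))  # {1, 3, 5}: the odd class lies inside A
print(upper(A, U, parity))  # {1, 2, 3, 4, 5, 6}: both classes meet A
```

A set is exact (clopen in the induced topology) precisely when its lower and upper approximations coincide.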
22 pages, 3601 KB  
Article
On Exploiting Tile Partitioning to Reduce Bitrate and Processing Time in VVC Surveillance Streams with Object Detection
by Panagiotis Belememis, Maria Koziri and Thanasis Loukopoulos
Mathematics 2026, 14(2), 368; https://doi.org/10.3390/math14020368 - 22 Jan 2026
Viewed by 298
Abstract
One of the main targets in video surveillance systems is to detect, and possibly identify, objects within monitoring range. This entails analyzing the video stream by applying object detection techniques to one or more frames. Regardless of the output, the stream is usually archived for future use. Real-time requirements, network bandwidth, and storage constraints all play a significant role in overall performance. As video resolution increases, so does the video stream size. To cope with this increase, newer video compression standards offer sophisticated coding tools that reduce video size with minimal quality loss; however, as the achievable compression ratio increases, so does the computational complexity. In this paper, we propose a methodology to reduce both the bitrate and the processing time of video surveillance streams on which object detection is performed. The method takes advantage of tile partitioning, with the aim of (i) reducing the scope and the invocation frequency of the object detection module, (ii) encoding different blocks of a frame at different quality levels, depending on whether objects are present, and (iii) encoding and transmitting only tiles containing objects. Experimental results using the UA-DETRAC dataset and the VVenC encoder demonstrate that exploiting tile partitioning in the proposed manner reduces bitrate and processing time at the expense of only marginal losses in accuracy. Full article
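The tile-selection idea in step (iii) amounts to mapping detector bounding boxes onto the tile grid and keeping only the covered tiles. The sketch below illustrates that mapping; the tile size, frame size, and box format are assumptions, not taken from the paper:

```python
def tiles_with_objects(boxes, frame_w, frame_h, tile_w, tile_h):
    """Map object bounding boxes (x, y, w, h) to the set of
    (col, row) tile indices they overlap; only these tiles would
    be encoded at high quality and transmitted."""
    cols, rows = frame_w // tile_w, frame_h // tile_h
    selected = set()
    for x, y, w, h in boxes:
        c0, c1 = int(x // tile_w), int((x + w - 1) // tile_w)
        r0, r1 = int(y // tile_h), int((y + h - 1) // tile_h)
        for r in range(r0, r1 + 1):
            for c in range(c0, c1 + 1):
                if 0 <= c < cols and 0 <= r < rows:
                    selected.add((c, r))
    return selected

# A 1920x1080 frame split into 640x360 tiles (a 3x3 grid, illustrative).
boxes = [(100, 100, 200, 150), (1500, 800, 100, 100)]
print(sorted(tiles_with_objects(boxes, 1920, 1080, 640, 360)))
# [(0, 0), (2, 2)]
```

Tiles outside the selected set can then be skipped or encoded at a coarser quality level, which is where the bitrate saving comes from.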
20 pages, 7566 KB  
Article
Temporal Probability-Guided Graph Topology Learning for Robust 3D Human Mesh Reconstruction
by Hongsheng Wang, Jie Yang, Feng Lin and Fei Wu
Mathematics 2026, 14(2), 367; https://doi.org/10.3390/math14020367 - 21 Jan 2026
Viewed by 306
Abstract
Reconstructing 3D human motion from monocular video presents challenges when frames contain occlusions or blur, as conventional approaches depend on features extracted within limited temporal windows, resulting in structural distortions. In this paper, we introduce a novel framework that combines temporal probability guidance with graph topology learning to achieve robust 3D human mesh reconstruction from incomplete observations. Our method leverages topology-aware probability distributions spanning entire motion sequences to recover missing anatomical regions. The Graph Topological Modeling (GTM) component captures structural relationships among body parts by learning the inherent connectivity patterns in human anatomy. Building upon GTM, our Temporal-alignable Probability Distribution (TPDist) mechanism predicts missing features through probabilistic inference, establishing temporal coherence across frames. Additionally, we propose a Hierarchical Human Loss (HHLoss) that hierarchically regularizes probability distribution errors for inter-frame features while accounting for topological variations. Experimental validation demonstrates that our approach outperforms state-of-the-art methods on the 3DPW benchmark, particularly excelling in scenarios involving occlusions and motion blur. Full article
23 pages, 1275 KB  
Article
Decision-Making in Dual-Channel Supply Chains Based on Different Carbon Quota Allocation Policies
by Hai Shen, Jiawei Liu, Siyi Li and Jianbo Zhao
Mathematics 2026, 14(2), 366; https://doi.org/10.3390/math14020366 - 21 Jan 2026
Viewed by 295
Abstract
This paper constructs a decision-making model of a dual-channel supply chain under different carbon trading policies and discusses how the carbon quota allocation methods adopted by the government affect the chain. Under a carbon quota trading policy, and with the goal of maximizing enterprise profit, the paper compares and analyzes the influence of carbon emission quotas and carbon trading prices on the profits of the dual-channel supply chain and derives the optimal decision-making model for channel selection. A numerical example shows that the profit levels of manufacturers and retailers are significantly affected by the different carbon quota allocation policies as the channels develop. The manufacturer's profit is positively correlated with the carbon allowance, while its relationship with the carbon trading price exhibits different trends under different allocation policies. In the dual channel, the retailer's profit is unaffected by the carbon quota and the carbon trading price, whereas in the single channel its relationship with both quantities shows different trends under different carbon quota allocation policies. Full article
18 pages, 4244 KB  
Article
Dual-Modal Contrastive Learning for Continual Generalized Category Discovery
by Wei Jin, Nannan Li, Chengcheng Yang, Huanqiang Hu and Kuo Li
Mathematics 2026, 14(2), 365; https://doi.org/10.3390/math14020365 - 21 Jan 2026
Viewed by 500
Abstract
Continual Generalized Category Discovery (C-GCD) is an emerging research direction in Open-World Learning. The model aims to incrementally discover novel classes from unlabeled data while maintaining recognition of previously learned classes, without accessing historical samples. The absence of a supervision signal in incremental sessions makes catastrophic forgetting more severe than in traditional incremental learning. Existing methods primarily enhance generalization through single-modality contrastive learning, overlooking the natural advantages of textual information. Visual features capture perceptual details such as shapes and textures, while textual information helps distinguish visually similar but semantically distinct categories, offering complementary benefits. However, directly obtaining category descriptions for unlabeled data in C-GCD is challenging. To address this, we introduce a conditional prompt learning mechanism to generate pseudo-prompts as textual information for unlabeled samples. Additionally, we propose a dual-modal contrastive learning strategy to enhance vision-text alignment and exploit CLIP’s multimodal potential. Extensive experiments on four benchmark datasets demonstrate that our method achieves competitive performance. We hope this work provides new insights for future research. Full article
(This article belongs to the Special Issue Computational Intelligence, Computer Vision and Pattern Recognition)
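Dual-modal contrastive learning of the kind built on CLIP can be sketched as a symmetric InfoNCE objective over image and text embeddings. This is the generic CLIP-style loss, shown for orientation; it is not the authors' exact loss, and the embedding sizes and temperature are invented:

```python
import numpy as np

def clip_contrastive_loss(img, txt, tau=0.1):
    """Symmetric InfoNCE: matched image-text pairs (row i with row i)
    are pulled together, mismatched pairs pushed apart."""
    img = img / np.linalg.norm(img, axis=1, keepdims=True)
    txt = txt / np.linalg.norm(txt, axis=1, keepdims=True)
    logits = img @ txt.T / tau                    # cosine similarities / temperature
    # Image -> text direction: log-softmax over each row, pick the diagonal.
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    loss_i2t = -np.mean(np.diag(logp))
    # Text -> image direction: same with the transposed logits.
    logp_t = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    loss_t2i = -np.mean(np.diag(logp_t))
    return 0.5 * (loss_i2t + loss_t2i)

rng = np.random.default_rng(0)
txt = rng.normal(size=(8, 16))
aligned = clip_contrastive_loss(txt, txt)               # perfectly aligned pairs
shuffled = clip_contrastive_loss(txt, np.roll(txt, 1, axis=0))
print(aligned < shuffled)  # True: alignment lowers the loss
```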
17 pages, 843 KB  
Article
Lemniscate Starlikeness and Convexity for the Generalized Marcum Q-Function
by Khaled Mehrez and Abdulaziz Alenazi
Mathematics 2026, 14(2), 364; https://doi.org/10.3390/math14020364 - 21 Jan 2026
Viewed by 261
Abstract
In this paper, we investigate new geometric properties of normalized analytic functions associated with the generalized Marcum Q-function. In particular, we focus on two analytic forms derived from a normalized derivative of a representation involving the Marcum Q-function, and its Alexander transform. For these functions, we establish sufficient conditions ensuring membership in the classes of lemniscate starlike and lemniscate convex functions. Special attention is given to the case ν=1, where explicit admissible parameter ranges for b are derived. We further examine inclusion relations between these normalized analytic forms and lemniscate subclasses, complemented by several corollaries, illustrative examples, and graphical visualizations. These results extend and enrich the geometric function theory of special functions related to the generalized Marcum Q-function. Full article
(This article belongs to the Special Issue Current Topics in Geometric Function Theory, 2nd Edition)
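For orientation, the lemniscate classes referred to are usually taken in the Sokol-Stankiewicz formulation, stated below; the paper's precise conditions and parameter ranges may differ:

```latex
% f analytic on the unit disk D, f(0)=0, f'(0)=1.
% Lemniscate starlike: zf'(z)/f(z) lies inside the right loop
% of the lemniscate of Bernoulli |w^2 - 1| = 1:
\mathcal{SL}^{*} = \left\{ f :\ \left| \left( \frac{z f'(z)}{f(z)} \right)^{2} - 1 \right| < 1,\ z \in \mathbb{D} \right\},
% Lemniscate convex: 1 + zf''(z)/f'(z) satisfies the same bound:
\mathcal{CL} = \left\{ f :\ \left| \left( 1 + \frac{z f''(z)}{f'(z)} \right)^{2} - 1 \right| < 1,\ z \in \mathbb{D} \right\}.
```

The Alexander transform links the two classes: f is lemniscate convex exactly when zf'(z) is lemniscate starlike.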
20 pages, 300 KB  
Article
Quantifying Downstream Value Chain Carbon Risk: A Six-Factor Asset Pricing Model for China’s Low-Carbon Transition
by Wenqing Wang, Ling Shao and Sanmang Wu
Mathematics 2026, 14(2), 363; https://doi.org/10.3390/math14020363 - 21 Jan 2026
Viewed by 338
Abstract
Sustainable finance and carbon risk have attracted substantial interest from both practitioners and scholars. This paper integrates the income-based environmental responsibility framework with financial asset pricing models to investigate how carbon transition risk propagates along value chains and impacts asset returns. By utilizing the Ghosh supply-driven input–output model to quantify downstream value chain carbon emissions as a proxy for the dependence of a company’s revenue streams on high-carbon downstream clients, we construct a novel downstream carbon risk factor (DMC) by sorting stocks into portfolios based on this exposure and forming a factor-mimicking long-short portfolio. We then integrate this DMC factor into the Fama–French five-factor framework to propose a six-factor model capable of capturing value chain risk transmission. Empirical results for Chinese A-share listed companies demonstrate that firms with high DMC exposure, being vulnerable to carbon transition shocks such as carbon pricing, offer a significant risk premium even after controlling for traditional financial characteristics. This finding provides robust evidence for the carbon premium hypothesis in the world’s largest emerging market and contributes a theoretically grounded and empirically implementable framework for integrating value chain carbon risk into asset pricing analysis. Full article
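Estimating a six-factor model of this shape reduces to a time-series regression of excess returns on the five Fama-French factors plus the DMC factor. The sketch below uses synthetic, noiseless data purely to show the mechanics; the factor values, loadings, and alpha are invented:

```python
import numpy as np

rng = np.random.default_rng(42)
T = 240                                    # months of synthetic data
factors = rng.normal(size=(T, 6))          # MKT, SMB, HML, RMW, CMA, DMC
true_betas = np.array([1.0, 0.3, -0.2, 0.1, 0.0, 0.5])
alpha = 0.002
excess_ret = alpha + factors @ true_betas  # noiseless, for illustration only

# OLS with an intercept: the DMC loading is the last slope coefficient.
X = np.column_stack([np.ones(T), factors])
coef, *_ = np.linalg.lstsq(X, excess_ret, rcond=None)
print(round(coef[0], 4))  # 0.002 (alpha)
print(round(coef[6], 4))  # 0.5   (DMC loading)
```

A significant positive DMC loading in such a regression is what the paper interprets as exposure to downstream carbon transition risk being priced.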
25 pages, 7374 KB  
Article
Two-Stage Multi-Frequency Deep Learning for Electromagnetic Imaging of Uniaxial Objects
by Wei-Tsong Lee, Chien-Ching Chiu, Po-Hsiang Chen, Guan-Jang Li and Hao Jiang
Mathematics 2026, 14(2), 362; https://doi.org/10.3390/math14020362 - 21 Jan 2026
Viewed by 361
Abstract
In this paper, an electromagnetic image reconstruction system for anisotropic objects, based on a two-stage multi-frequency extended network, is developed using deep learning techniques. We obtain scattered field information by illuminating uniaxial objects located in free space with differently polarized (TM/TE) waves. The measured single-frequency scattered field is input to a Deep Residual Convolutional Neural Network (DRCNN) for training, and the trained model then extends it to multi-frequency data. In the second stage, we feed the multi-frequency data into a Deep Convolutional Encoder–Decoder (DCED) architecture to reconstruct an accurate distribution of the dielectric constants. We focus on EMIS applications using Transverse Magnetic (TM) and Transverse Electric (TE) waves in 2D scenes. Numerical findings confirm that our method can effectively reconstruct high-contrast uniaxial objects from limited information. In addition, the TM/TE scattering from uniaxial anisotropic objects is governed by polarization-dependent Lippmann–Schwinger integral equations, yielding a nonlinear and severely ill-posed inverse operator that couples the dielectric tensor components with multi-frequency field responses. Within this mathematical framework, the proposed two-stage DRCNN–DCED architecture serves as a data-driven approximation to the anisotropic inverse scattering operator, providing improved stability and representational fidelity under limited-aperture measurement constraints. Full article
21 pages, 2253 KB  
Article
Feedback-Controlled Manipulation of Multiple Defect Bands of Phononic Crystals with Segmented Piezoelectric Sensor–Actuator Array
by Soo-Ho Jo
Mathematics 2026, 14(2), 361; https://doi.org/10.3390/math14020361 - 21 Jan 2026
Viewed by 282
Abstract
Defect modes in phononic crystals (PnCs) provide strongly localized resonances that are essential for frequency-dependent wave filtering and highly sensitive sensing. Their functionality increases greatly when their spectral characteristics can be externally tuned without altering the structural configuration. However, existing feedback control strategies rely on laminated piezoelectric defects, which have uniform electromechanical loading that causes voltage cancellation for even-symmetric defect modes. Consequently, only odd-symmetric defect bands can be manipulated effectively, which limits multi-band tunability. To overcome this constraint, we propose a segmented piezoelectric sensor–actuator design that enables symmetry-dependent feedback at the defect site. We develop a transfer-matrix analytical framework to incorporate complex-valued feedback gains directly into dispersion and transmission calculations. Analytical predictions demonstrate that real-valued feedback yields opposite stiffness modifications for odd- and even-symmetric modes. This enables the simultaneous tuning of both defect bands and induces an exceptional-point-like coalescence. In contrast, imaginary feedback preserves stiffness but modulates effective damping, generating a parity-dependent amplification-suppression response. The analytical results closely match those of fully coupled finite-element simulations, reducing computation time by more than two orders of magnitude. These findings demonstrate that segmentation-enabled feedback provides an efficient and scalable approach to tunable, multi-band, non-Hermitian wave control in piezoelectric PnCs. Full article
(This article belongs to the Special Issue Analytical Methods in Wave Scattering and Diffraction, 3rd Edition)
32 pages, 16166 KB  
Article
A Multimodal Ensemble-Based Framework for Detecting Fake News Using Visual and Textual Features
by Muhammad Abdullah, Hongying Zan, Arifa Javed, Muhammad Sohail, Orken Mamyrbayev, Zhanibek Turysbek, Hassan Eshkiki and Fabio Caraffini
Mathematics 2026, 14(2), 360; https://doi.org/10.3390/math14020360 - 21 Jan 2026
Viewed by 1170
Abstract
Detecting fake news is essential in natural language processing to verify news authenticity and prevent misinformation-driven social, political, and economic disruptions targeting specific groups. A major challenge in multimodal fake news detection is effectively integrating textual and visual modalities, as semantic gaps and contextual variations between images and text complicate alignment, interpretation, and the detection of subtle or blatant inconsistencies. To enhance accuracy in fake news detection, this article introduces an ensemble-based framework that integrates textual and visual data using ViLBERT’s two-stream architecture, incorporates VADER sentiment analysis to detect emotional language, and uses Image–Text Contextual Similarity to identify mismatches between visual and textual elements. These features are processed through the Bi-GRU classifier, Transformer-XL, DistilBERT, and XLNet, combined via a stacked ensemble method with soft voting, culminating in a T5 metaclassifier that predicts the outcome for robustness. Results on the Fakeddit and Weibo benchmarking datasets show that our method outperforms state-of-the-art models, achieving up to 96% and 94% accuracy in fake news detection, respectively. This study highlights the necessity for advanced multimodal fake news detection systems to address the increasing complexity of misinformation and offers a promising solution. Full article
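The soft-voting step of such a stacked ensemble can be sketched in a few lines: average the class-probability vectors emitted by the base classifiers and pick the class with the highest mean score. This is a generic illustration of soft voting, not the authors' pipeline; the probabilities are invented:

```python
import numpy as np

def soft_vote(prob_list, weights=None):
    """Average the class-probability outputs of several base
    classifiers (soft voting) and take the argmax per sample."""
    probs = np.average(np.stack(prob_list), axis=0, weights=weights)
    return probs, probs.argmax(axis=1)

# Three base models scoring two samples on {real, fake} (illustrative).
p1 = np.array([[0.7, 0.3], [0.4, 0.6]])
p2 = np.array([[0.6, 0.4], [0.2, 0.8]])
p3 = np.array([[0.8, 0.2], [0.45, 0.55]])
probs, labels = soft_vote([p1, p2, p3])
print(labels)  # [0 1]: sample 1 -> real, sample 2 -> fake
```

In a stacked design like the one described, these averaged probabilities (or the base predictions themselves) would then be passed to a metaclassifier for the final decision.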
33 pages, 1665 KB  
Article
Modeling Healthcare Data with a Novel Flexible Three-Parameter Distribution
by Thamer Manshi, Ammar M. Sarhan and M. E. Sobh
Mathematics 2026, 14(2), 359; https://doi.org/10.3390/math14020359 - 21 Jan 2026
Viewed by 311
Abstract
Developing flexible lifetime distributions is essential for accurately modeling reliability and lifetime data across various scientific and engineering contexts. In this work, we introduce a new three-parameter lifetime distribution, which extends the well-known two-parameter Sarhan–Tadj–Hamilton model. We derive and discuss several of its important theoretical properties, including the reliability characteristics and moments. The parameter estimation is carried out using both maximum likelihood and Bayesian approaches, providing a comprehensive comparison of inferential techniques. To further examine the efficiency and robustness of the proposed estimators, a detailed Monte Carlo simulation study is conducted under different sample sizes and parameter settings. The practical usefulness of the distribution is illustrated through its application to three real-world datasets, namely cancer and COVID-19 data, where it demonstrates superior fit and flexibility compared to existing and nested lifetime models. These findings highlight the potential of the proposed model as a valuable addition to the toolbox of applied statisticians and reliability practitioners. Full article