Search Results (4,391)

Search Parameters:
Keywords = asymptotics

29 pages, 4828 KB  
Article
A Novel Solution- and Moving Boundary-Adaptive Cartesian Grid Strategy for Efficient and High-Fidelity Simulations of Complex Flow with Moving Boundaries
by Zhiwei Guo, Lincheng Xu, Yuan Gao and Naizhen Zhou
Aerospace 2025, 12(11), 957; https://doi.org/10.3390/aerospace12110957 - 26 Oct 2025
Abstract
In this paper, a novel solution- and moving boundary-adaptive Cartesian grid strategy is proposed and used to develop a computational fluid dynamics (CFD) solver. The new Cartesian grid strategy is based on a multi-block structure without grid overlapping or ghost grids in non-fluid areas. In particular, the dynamic grid adaptive operations, as well as the adaptive criteria calculations, are restricted to the grid block boundaries. This reduces the grid adaptation complexity to one dimension lower than that of the CFD simulation and provides intrinsic compatibility with moving boundaries, since these are natural grid block boundaries. In addition, an improved hybrid immersed boundary method enforcing a physical pressure constraint is proposed to implement boundary conditions robustly. The recursively regularized lattice Boltzmann method is applied to solve the fluid dynamics. The performance of the proposed method is validated in simulations of flow induced by a series of two-dimensional (2D) and three-dimensional (3D) moving boundaries. Results confirm that the proposed method provides efficient and effective dynamic grid refinement for flow solutions and moving boundaries simultaneously, and the unsteady flow physics considered are accurately and efficiently reproduced. In particular, the 3D multiscale flow induced by two tandem flapping wings is simulated at a computational cost about one order of magnitude lower than that of a previously reported adaptive Cartesian strategy. Notably, grid adaptation accounts for only a small fraction of the CFD time, about 0.5% for pure flow simulations and 5.0% when moving boundaries are involved. In addition, favorable asymptotic convergence with decreasing minimum grid spacing is observed in the 2D cases.
(This article belongs to the Special Issue Aerospace Vehicles and Complex Fluid Flow Modelling)

19 pages, 347 KB  
Article
The Law of the Iterated Logarithm for the Error Distribution Estimator in First-Order Autoregressive Models
by Bing Wang, Yi Jin, Lina Wang, Xiaoping Shi and Wenzhi Yang
Axioms 2025, 14(11), 784; https://doi.org/10.3390/axioms14110784 - 26 Oct 2025
Abstract
This paper investigates the asymptotic behavior of kernel-based estimators for the error distribution in a first-order autoregressive model with dependent errors. The model assumes that the error terms form an α-mixing sequence with an unknown cumulative distribution function (CDF) and finite second moment. Due to the unobservability of true errors, we construct kernel-smoothed estimators based on residuals obtained via least squares. Under mild assumptions on the kernel function, bandwidth selection, and mixing coefficients, we establish a logarithmic law of the iterated logarithm (LIL) for the supremum norm difference between the residual-based kernel estimator and the true distribution function. The limiting bound is shown to be 1/2, matching the classical LIL for independent samples. To support the theoretical results, simulation studies are conducted to compare the empirical and kernel distribution estimators under various sample sizes and error term distributions. The kernel estimators demonstrate smoother convergence behavior and improved finite-sample performance. These results contribute to the theoretical foundation for nonparametric inference in autoregressive models with dependent errors and highlight the advantages of kernel smoothing in distribution function estimation under dependence.
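As a rough illustration of the residual-based construction described in this abstract (not the authors' exact estimator), the sketch below fits an AR(1) model by least squares and smooths the empirical CDF of the residuals with an integrated Gaussian kernel. The function name, the rule-of-thumb bandwidth, and the iid Gaussian toy errors are assumptions for the demo; the paper treats α-mixing errors.

```python
import numpy as np
from scipy.stats import norm

def ar1_residual_cdf(y, h=None):
    """Kernel-smoothed CDF estimate of the AR(1) error distribution.

    Fits y_t = rho * y_{t-1} + eps_t by least squares, then smooths the
    empirical CDF of the residuals with an integrated Gaussian kernel.
    """
    y0, y1 = y[:-1], y[1:]
    rho_hat = (y0 @ y1) / (y0 @ y0)            # least-squares AR coefficient
    resid = y1 - rho_hat * y0
    if h is None:
        h = 1.06 * resid.std() * len(resid) ** (-1 / 5)  # rule-of-thumb bandwidth
    def F_hat(x):
        x = np.atleast_1d(x)
        # integrated kernel: average of Gaussian CDFs centred at the residuals
        return norm.cdf((x[:, None] - resid[None, :]) / h).mean(axis=1)
    return rho_hat, F_hat

rng = np.random.default_rng(0)
n, rho = 2000, 0.5
eps = rng.normal(size=n + 1)
y = np.empty(n + 1); y[0] = 0.0
for t in range(1, n + 1):
    y[t] = rho * y[t - 1] + eps[t]

rho_hat, F_hat = ar1_residual_cdf(y)
grid = np.linspace(-3, 3, 201)
sup_err = np.abs(F_hat(grid) - norm.cdf(grid)).max()  # sup-norm distance to the true CDF
```

The sup-norm distance `sup_err` is the quantity whose almost-sure rate the paper's LIL controls.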

21 pages, 2519 KB  
Article
Efficient Lightweight Image Classification via Coordinate Attention and Channel Pruning for Resource-Constrained Systems
by Yao-Liang Chung
Future Internet 2025, 17(11), 489; https://doi.org/10.3390/fi17110489 - 25 Oct 2025
Abstract
Image classification is central to computer vision, supporting applications from autonomous driving to medical imaging, yet state-of-the-art convolutional neural networks remain constrained by heavy floating-point operations (FLOPs) and parameter counts on edge devices. To address this accuracy–efficiency trade-off, we propose a unified lightweight framework built on a pruning-aware coordinate attention block (PACB). PACB integrates coordinate attention (CA) with L1-regularized channel pruning, enriching feature representation while enabling structured compression. Applied to MobileNetV3 and RepVGG, the framework achieves substantial efficiency gains. On GTSRB, MobileNetV3 parameters drop from 16.239 M to 9.871 M (–6.37 M) and FLOPs from 11.297 M to 8.552 M (–24.3%), with accuracy improving from 97.09% to 97.37%. For RepVGG, parameters fall from 7.683 M to 7.093 M (–0.59 M) and FLOPs from 31.264 M to 27.918 M (–3.35 M), with only ~0.51% average accuracy loss across CIFAR-10, Fashion-MNIST, and GTSRB. Complexity analysis further confirms PACB does not increase asymptotic order, since the additional CA operations contribute only lightweight lower-order terms. These results demonstrate that coupling CA with structured pruning yields a scalable accuracy–efficiency trade-off under hardware-agnostic metrics, making PACB a promising, deployment-ready solution for mobile and edge applications.
(This article belongs to the Special Issue Clustered Federated Learning for Networks)
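PACB's exact architecture is not reproduced here; the NumPy sketch below illustrates only the structured-pruning half of the idea in network-slimming style: training with an L1 penalty pushes BatchNorm scale factors toward zero, and pruning then keeps the channels with the largest magnitudes. The function names, the keep ratio, and the toy scale factors are all illustrative assumptions.

```python
import numpy as np

def select_channels(gamma, keep_ratio=0.7):
    """Structured channel selection from L1-regularised BN scale factors:
    keep the channels whose |gamma| is largest."""
    k = max(1, int(round(keep_ratio * len(gamma))))
    keep = np.argsort(-np.abs(gamma))[:k]      # indices of surviving channels
    return np.sort(keep)

def prune_conv(weight, keep_out, keep_in):
    """Slice a conv weight tensor (out, in, kh, kw) to the kept channels."""
    return weight[np.ix_(keep_out, keep_in)]

# L1 training has driven channels 1, 3, 5 close to zero in this toy example.
gamma = np.array([0.9, 0.01, 0.5, 0.002, 0.7, 0.03])
keep = select_channels(gamma, keep_ratio=0.5)  # keeps the 3 strongest channels
w = np.random.randn(6, 4, 3, 3)
w_pruned = prune_conv(w, keep, np.arange(4))   # shape (3, 4, 3, 3)
```

Because whole channels are removed, the compressed layer stays a dense convolution and needs no sparse kernels at inference time, which is what makes this kind of pruning hardware-agnostic.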

15 pages, 549 KB  
Article
Perfect Projective Synchronization of a Class of Fractional-Order Chaotic Systems Through Stabilization near the Origin via Fractional-Order Backstepping Control
by Abdelhamid Djari, Riadh Djabri, Abdelaziz Aouiche, Noureddine Bouarroudj, Yehya Houam, Maamar Bettayeb, Mohamad A. Alawad and Yazeed Alkhrijah
Fractal Fract. 2025, 9(11), 687; https://doi.org/10.3390/fractalfract9110687 - 25 Oct 2025
Abstract
This study introduces a novel control strategy aimed at achieving projective synchronization in incommensurate fractional-order chaotic systems (IFOCS). The approach integrates the mathematical framework of fractional calculus with the recursive structure of the backstepping control technique. A key feature of the proposed method is the systematic use of the Mittag–Leffler function to verify stability at every step of the control design. By carefully constructing the error dynamics and proving their asymptotic convergence, the method guarantees the overall stability of the coupled system. In particular, stabilization of the error signals around the origin ensures perfect projective synchronization between the master and slave systems, even when these systems exhibit fundamentally different fractional-order chaotic behaviors. To illustrate the applicability of the method, the proposed fractional-order backstepping control (FOBC) is implemented for the synchronization of two representative systems: the fractional-order Van der Pol oscillator and the fractional-order Rayleigh oscillator. These examples were deliberately chosen for their structural differences, highlighting the robustness and versatility of the proposed approach. Extensive simulations under diverse initial conditions confirm that the synchronization errors converge rapidly and remain stable in the presence of parameter variations and external disturbances. The results demonstrate that the proposed FOBC strategy not only ensures precise synchronization but also provides resilience against the uncertainties that typically challenge nonlinear chaotic systems. Overall, the work validates FOBC as a powerful tool for managing complex dynamical behavior in chaotic systems, opening the way for broader applications in engineering and science.

33 pages, 3585 KB  
Article
Identifying the Location of Dynamic Load Using a Region’s Asymptotic Approximation
by Yuantian Qin, Jiakai Zheng and Vadim V. Silberschmidt
Aerospace 2025, 12(11), 953; https://doi.org/10.3390/aerospace12110953 - 24 Oct 2025
Abstract
Since it is difficult to obtain the positions of dynamic loads on structures, this paper proposes a new method to identify the locations of dynamic loads step by step based on the correlation coefficients of dynamic responses. First, a recognition model for the dynamic load position based on a finite-element scheme is established, with the finite-element domain divided into several regions. Second, virtual loads are applied at the central points of these regions, and acceleration responses are calculated at the sensor measurement points. Third, the maximum correlation coefficient between the calculated and measured accelerations is obtained, and the dynamic load is located in the region whose virtual load corresponds to the maximum correlation coefficient. Finally, this region is continuously subdivided with a refined mesh until the dynamic load is pinpointed in a sufficiently small area. Different virtual load construction methods are proposed for different types of loads. The frequency response function, unresolvable for the actual problem because the location of the real dynamic load is unknown, can be transformed into a solvable form involving only known points. This transformation simplifies the analysis, making it more efficient and applicable to the dynamic behavior of the system. The identification of the dynamic load position in the entire structure is thereby transformed into a sub-region approach focused on the area where the dynamic load acts. Simulated case studies demonstrate that the proposed method can effectively identify the positions of single and multiple dynamic loads, and the correctness of the theory and simulation model is verified with experiments. Compared with recent methods that use machine learning and neural networks to identify the positions of dynamic loads, the proposed approach avoids the heavy computational cost and time required for data training.
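The region-selection step can be mimicked with a toy signal model (not the paper's finite-element pipeline): each candidate region's virtual load yields a simulated response, and the region whose response best correlates with the measurement wins the refinement pass. The sinusoidal responses with position-dependent phase below are a stand-in assumption for the FE-computed accelerations.

```python
import numpy as np

def locate_load(measured, responses):
    """One refinement pass: pick the region whose virtual-load response
    has the largest Pearson correlation with the measured signal."""
    corr = [np.corrcoef(measured, r)[0, 1] for r in responses]
    return int(np.argmax(corr)), corr

t = np.linspace(0, 1, 500)
regions = np.linspace(0, np.pi, 8)             # candidate region centres (as phases)
responses = [np.sin(2 * np.pi * 5 * t + p) for p in regions]

# "Measured" response: load actually sits in region 3, plus sensor noise.
true_phase = regions[3]
rng = np.random.default_rng(1)
measured = np.sin(2 * np.pi * 5 * t + true_phase) + 0.1 * rng.normal(size=t.size)

best, corr = locate_load(measured, responses)
```

In the paper's method this selection would be repeated on a refined mesh inside the winning region until the load is localized to a sufficiently small area.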

25 pages, 1288 KB  
Article
An Analysis of Implied Volatility, Sensitivity, and Calibration of the Kennedy Model
by Dalma Tóth-Lakits, Miklós Arató and András Ványolos
Mathematics 2025, 13(21), 3396; https://doi.org/10.3390/math13213396 - 24 Oct 2025
Abstract
The Kennedy model provides a flexible and mathematically consistent framework for modeling the term structure of interest rates, leveraging Gaussian random fields to capture the dynamics of forward rates. Building upon our earlier work, in which we developed both theoretical results (including novel proofs of the martingale property, connections between the Kennedy and HJM frameworks, and parameter estimation theory) and practical calibration methods using maximum likelihood, Radon–Nikodym derivatives, and numerical optimization (stochastic gradient descent) on simulated and real par swap rate data, this study extends the analysis in several directions. We derive detailed formulas for the volatilities implied by the Kennedy model and investigate their asymptotic properties. A comprehensive sensitivity analysis is conducted to evaluate the impact of key parameters on derivative prices. We implement an industry-standard Monte Carlo method, tailored to the conditional distribution of the Kennedy field, to efficiently generate scenarios consistent with observed initial forward curves. Furthermore, we present closed-form pricing formulas for various interest rate derivatives, including zero-coupon bonds, caplets, floorlets, swaplets, and the par swap rate. A key advantage of these results is that the formulas are expressed explicitly in terms of the initial forward curve and the original parameters of the Kennedy model, which ensures both analytical tractability and consistency with market-observed data. These closed-form expressions can be used directly in calibration procedures, substantially accelerating multidimensional nonlinear optimization algorithms. Moreover, given an observed initial forward curve, the model provides significantly more accurate pricing formulas, enhancing both theoretical precision and practical applicability. Finally, we calibrate the Kennedy model to market-observed caplet prices. The findings provide valuable insights into the practical applicability and robustness of the Kennedy model in real-world financial markets.
(This article belongs to the Special Issue Modern Trends in Mathematics, Probability and Statistics for Finance)

30 pages, 1387 KB  
Article
Asymptotic Analysis of the Bias–Variance Trade-Off in Subsampling Metropolis–Hastings
by Shuang Liu
Mathematics 2025, 13(21), 3395; https://doi.org/10.3390/math13213395 - 24 Oct 2025
Abstract
Markov chain Monte Carlo (MCMC) methods are fundamental to Bayesian inference but are often computationally prohibitive for large datasets, as the full likelihood must be evaluated at each iteration. Subsampling-based approximate Metropolis–Hastings (MH) algorithms offer a popular alternative, trading a manageable bias for a significant reduction in per-iteration cost. While this bias–variance trade-off is empirically understood, a formal theoretical framework for its optimization has been lacking. Our work establishes such a framework by bounding the mean squared error (MSE) as a function of the subsample size (m), the data size (n), and the number of epochs (E). This analysis reveals two optimal asymptotic scaling laws: the optimal subsample size is m = O(E^{1/2}), leading to a minimal MSE that scales as MSE = O(E^{-1/2}). Furthermore, leveraging the large-sample asymptotic properties of the posterior, we show that when augmented with a control variate, the approximate MH algorithm can be asymptotically more efficient than the standard MH method under ideal conditions. Experimentally, we first validate the two optimal asymptotic scaling laws. We then use Bayesian logistic regression and Softmax classification models to highlight a key difference in convergence behavior: the exact algorithm starts with a high MSE that gradually decreases as the number of epochs increases. In contrast, the approximate algorithm with a practical control variate maintains a consistently low MSE that is largely insensitive to the number of epochs.
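A minimal sketch of the subsampling idea, assuming a Gaussian-mean model with a flat prior (not the paper's logistic/Softmax experiments, and without the control variate): each acceptance test replaces the full log-likelihood with an n/m-scaled estimate from a fresh random subsample, so per-iteration cost drops from O(n) to O(m) at the price of a noisy acceptance decision.

```python
import numpy as np

def subsample_mh(data, m, n_iter=4000, step=0.02, seed=0):
    """Approximate Metropolis-Hastings for a Gaussian mean (unit variance,
    flat prior): each acceptance test uses a size-m subsample, scaled by
    n/m, in place of the full log-likelihood."""
    rng = np.random.default_rng(seed)
    n = len(data)
    def loglik(theta, idx):
        # n/m rescaling makes the subsample sum an unbiased estimate
        # of the full-data log-likelihood (up to a constant).
        return -(n / len(idx)) * 0.5 * np.sum((data[idx] - theta) ** 2)
    theta, chain = 0.0, []
    for _ in range(n_iter):
        prop = theta + step * rng.normal()
        idx = rng.choice(n, size=m, replace=False)   # fresh subsample each step
        if np.log(rng.uniform()) < loglik(prop, idx) - loglik(theta, idx):
            theta = prop
        chain.append(theta)
    return np.array(chain)

rng = np.random.default_rng(42)
data = rng.normal(loc=1.0, scale=1.0, size=2000)
chain = subsample_mh(data, m=400)
post_mean = chain[1000:].mean()                      # should sit near the sample mean
```

The subsample noise in the log-acceptance ratio is what introduces the bias the paper analyzes; shrinking m reduces cost but inflates that noise, which is exactly the trade-off behind the scaling laws above.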

33 pages, 672 KB  
Article
A Laplace Transform-Based Test for Exponentiality Against the EBUCL Class with Applications to Censored and Uncensored Data
by Walid B. H. Etman, Mahmoud E. Bakr, Arwa M. Alshangiti, Oluwafemi Samson Balogun and Rashad M. EL-Sagheer
Mathematics 2025, 13(21), 3379; https://doi.org/10.3390/math13213379 - 23 Oct 2025
Abstract
This paper proposes a novel statistical test for evaluating exponentiality against the recently introduced EBUCL (Exponential Better than Used in Convex Laplace transform order) class of life distributions. The EBUCL class generalizes classical aging concepts and provides a flexible framework for modeling various non-exponential aging behaviors. The test is constructed using Laplace transform ordering and is shown to be effective in distinguishing exponential distributions from EBUCL alternatives. We derive the test statistic, establish its asymptotic properties, and assess its performance using Pitman's asymptotic efficiency under standard alternatives, including Weibull, Makeham, and linear failure rate distributions. Critical values are obtained through extensive Monte Carlo simulations, and the power of the proposed test is evaluated and compared with existing methods. Furthermore, the test is extended to handle right-censored data, demonstrating its robustness and practical applicability. The effectiveness of the procedure is illustrated through several real-world datasets involving both censored and uncensored observations. The results confirm that the proposed test is a powerful and versatile tool for reliability and survival analysis.

18 pages, 908 KB  
Article
Bayesian Estimation of Multicomponent Stress–Strength Model Using Progressively Censored Data from the Inverse Rayleigh Distribution
by Asuman Yılmaz
Entropy 2025, 27(11), 1095; https://doi.org/10.3390/e27111095 - 23 Oct 2025
Abstract
This paper presents a comprehensive study on the estimation of multicomponent stress–strength reliability under progressively censored data, assuming the inverse Rayleigh distribution. Both maximum likelihood estimation and Bayesian estimation methods are considered. The loss function and prior distribution play crucial roles in Bayesian inference. Therefore, Bayes estimators of the unknown model parameters are obtained under symmetric (squared error loss function) and asymmetric (linear exponential and general entropy) loss functions using gamma priors. Lindley and MCMC approximation methods are used for Bayesian calculations. Additionally, asymptotic confidence intervals based on maximum likelihood estimators and Bayesian credible intervals constructed via Markov Chain Monte Carlo methods are presented. An extensive Monte Carlo simulation study compares the efficiencies of classical and Bayesian estimators, revealing that Bayesian estimators outperform classical ones. Finally, a real-life data example is provided to illustrate the practical applicability of the proposed methods.
(This article belongs to the Section Information Theory, Probability and Statistics)

24 pages, 1409 KB  
Article
A Lower-Bounded Extreme Value Distribution for Flood Frequency Analysis with Applications
by Fatimah E. Almuhayfith, Maher Kachour, Amira F. Daghestani, Zahid Ur Rehman, Tassaddaq Hussain and Hassan S. Bakouch
Mathematics 2025, 13(21), 3378; https://doi.org/10.3390/math13213378 - 23 Oct 2025
Abstract
This paper proposes the lower-bounded Fréchet–log-logistic distribution (LFLD), a probability model designed for robust flood frequency analysis (FFA). The LFLD addresses key limitations of traditional distributions (e.g., the generalized extreme value (GEV) and log-Pearson Type III (LP3)) by combining bounded support (α < x < ∞) to reflect physical flood thresholds, flexible tail behavior via Fréchet–log-logistic fusion for extreme-value accuracy, and a maximum entropy characterization ensuring optimal parameter estimation. We derive the LFLD's main statistical properties (PDF, CDF, and hazard rate), prove its asymptotic convergence to Fréchet distributions, and validate its superiority through simulation studies showing MLE consistency (bias < 0.02 and mean squared error < 0.0004 for α) and empirical flood data tests (52- and 98-year AMS series), where the LFLD outperforms 10 competitors (AIC reductions of 15–40%; Vuong test p < 0.01). The LFLD's closed-form quantile function enables efficient return period estimation, critical for infrastructure planning. Results demonstrate its applicability to heavy-tailed, bounded hydrological data, offering a 20–30% improvement in flood magnitude prediction over LP3/GEV models.
(This article belongs to the Special Issue Reliability Estimation and Mathematical Statistics)

14 pages, 1501 KB  
Article
Novel Nonlinear Control in a Chaotic Continuous Flow Enzymatic–Fermentative Bioreactor
by Juan Luis Mata-Machuca, Pablo Antonio López-Pérez and Ricardo Aguilar-López
Fermentation 2025, 11(10), 601; https://doi.org/10.3390/fermentation11100601 - 21 Oct 2025
Abstract
Fermentative processes are among the most important technological developments in the modern processing industry; applied research to reach high performance standards, with a crucial focus on system intensification (analysis, optimization, and control), is therefore a cornerstone. The goal of this work is to present a novel nonlinear feedback control structure that assures stable closed-loop operation of a continuous flow enzymatic–fermentative bioreactor with chaotic dynamic behavior. The proposed structure contains an adaptive-type gain which, coupled with a proportional term of the control error, can drive the feedback control trajectories of the bioreactor to the required reference point or trajectory. The Lyapunov method is used to present the closed-loop stability analysis, where an adequate choice of the controller gains assures asymptotic stability. Moreover, analysis of the dynamic equation of the control error, under boundedness properties of the system, shows that the control error can be driven close to zero. Numerical experiments are carried out in which a well-tuned standard proportional–integral (PI) controller is also implemented for comparison; the proposed control scheme performs satisfactorily, with smaller offsets, overshoots, and settling times than the PI controller.

33 pages, 525 KB  
Article
Limit Theorem for Kernel Estimate of the Conditional Hazard Function with Weakly Dependent Functional Data
by Abderrahmane Belguerna, Abdelkader Rassoul, Hamza Daoudi, Zouaoui Chikr Elmezouar and Fatimah Alshahrani
Symmetry 2025, 17(10), 1777; https://doi.org/10.3390/sym17101777 - 21 Oct 2025
Abstract
This paper examines the asymptotic behavior of the conditional hazard function using kernel-based methods, with particular emphasis on functional weakly dependent data. In particular, we establish the asymptotic normality of the proposed estimator when the covariate follows a functional quasi-associated process. This contribution extends the scope of nonparametric inference under weak dependence within the framework of functional data analysis. The estimator is constructed through kernel smoothing techniques inspired by the classical Nadaraya–Watson approach, and its theoretical properties are rigorously derived under appropriate regularity conditions. To evaluate its practical performance, we carried out an extensive simulation study, where finite-sample outcomes were compared with their asymptotic counterparts. The results showed the robustness and reliability of the estimator across a range of scenarios, thereby confirming the validity of the proposed limit theorem in empirical settings.
(This article belongs to the Section Mathematics)
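For intuition, a scalar-covariate toy version of a Nadaraya–Watson-type conditional hazard estimator, ĥ(y|x) = f̂(y|x) / (1 − F̂(y|x)); the paper's setting is functional and quasi-associated, which this iid sketch does not reproduce, and the bandwidths and names are assumptions.

```python
import numpy as np

def cond_hazard(x0, y0, X, Y, hx=0.1, hy=0.15):
    """Kernel estimate of the conditional hazard h(y0 | x0):
    conditional density divided by conditional survival, both built
    from Nadaraya-Watson weights in the covariate."""
    K = lambda u: np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)  # Gaussian kernel
    wx = K((X - x0) / hx)                                     # covariate weights
    f = np.sum(wx * K((Y - y0) / hy)) / (hy * np.sum(wx))     # conditional density
    F = np.sum(wx * (Y <= y0)) / np.sum(wx)                   # conditional CDF
    return f / (1.0 - F)

rng = np.random.default_rng(3)
n = 20_000
X = rng.uniform(0, 1, n)
Y = rng.exponential(1.0 / (1.0 + X))   # given X = x, Y has constant hazard 1 + x
h_hat = cond_hazard(0.5, 0.3, X, Y)    # true value: 1 + 0.5 = 1.5
```

Because the exponential hazard is known in closed form here, the toy lets one check the estimator's finite-sample accuracy directly, mirroring the paper's simulation comparison of finite-sample and asymptotic behavior.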

23 pages, 321 KB  
Article
Nonlinear Shrinkage Estimation of Higher-Order Moments for Portfolio Optimization Under Uncertainty in Complex Financial Systems
by Wanbo Lu and Zhenzhong Tian
Entropy 2025, 27(10), 1083; https://doi.org/10.3390/e27101083 - 20 Oct 2025
Abstract
This paper develops a nonlinear shrinkage estimation method for higher-order moment matrices within a multifactor model framework and establishes its asymptotic consistency under high-dimensional settings. The approach extends the nonlinear shrinkage methodology from covariance to higher-order moments, thereby mitigating the "curse of dimensionality" and alleviating estimation uncertainty in high-dimensional settings. Monte Carlo simulations demonstrate that, compared with linear shrinkage estimation, the proposed method substantially reduces mean squared errors (MSEs) and achieves greater Percentage Relative Improvement in Average Loss (PRIAL) for covariance and cokurtosis estimates; relative to sample estimation, it delivers significant gains in mitigating uncertainty for covariance, coskewness, and cokurtosis. An empirical portfolio analysis incorporating higher-order moments shows that, when the asset universe is large, portfolios based on the nonlinear shrinkage estimator outperform those constructed using linear shrinkage and sample estimators, achieving higher annualized return and Sharpe ratio with lower kurtosis and maximum drawdown, thus providing stronger resilience against uncertainty in complex financial systems. In smaller asset universes, nonlinear shrinkage portfolios perform on par with their linear shrinkage counterparts. These findings highlight the potential of nonlinear shrinkage techniques to reduce uncertainty in higher-order moment estimation and to improve portfolio performance across diverse and complex investment environments.
(This article belongs to the Special Issue Complexity and Synchronization in Time Series)
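The nonlinear estimator itself is not reproduced here; as a reference point, the sketch below implements a linear shrinkage of the sample covariance toward a scaled identity, the kind of baseline the paper compares against. The specific Ledoit–Wolf-style target and intensity formula, and the iid Gaussian toy data, are assumptions for the demo.

```python
import numpy as np

def linear_shrinkage_cov(X):
    """Linear shrinkage of the sample covariance toward mu * I:
    Sigma_hat = (1 - delta) * S + delta * mu * I, with a data-driven
    intensity delta in [0, 1] (Ledoit-Wolf-style)."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n
    mu = np.trace(S) / p                          # target scale
    d2 = np.sum((S - mu * np.eye(p)) ** 2) / p    # dispersion of S around target
    b2_bar = sum(np.sum((np.outer(x, x) - S) ** 2) for x in Xc) / (n ** 2 * p)
    b2 = min(b2_bar, d2)                          # estimated estimation error
    delta = b2 / d2 if d2 > 0 else 0.0            # shrinkage intensity
    return (1 - delta) * S + delta * mu * np.eye(p), delta

rng = np.random.default_rng(7)
p, n = 50, 60                                     # dimension comparable to sample size
X = rng.normal(size=(n, p))                       # true covariance: identity
S_shrunk, delta = linear_shrinkage_cov(X)
S_sample = (X - X.mean(0)).T @ (X - X.mean(0)) / n
err_shrunk = np.linalg.norm(S_shrunk - np.eye(p))
err_sample = np.linalg.norm(S_sample - np.eye(p))
```

When p is comparable to n, the sample covariance is badly conditioned and shrinkage cuts its Frobenius error sharply; the paper's contribution is to replace this single intensity with eigenvalue-wise (nonlinear) shrinkage and to extend the idea to coskewness and cokurtosis matrices.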
19 pages, 607 KB  
Article
The Stability of Linear Control Systems on Low-Dimensional Lie Groups
by Víctor Ayala, William Eduardo Valdivia Hanco, Jhon Eddy Pariapaza Mamani and María Luisa Torreblanca Todco
Symmetry 2025, 17(10), 1766; https://doi.org/10.3390/sym17101766 - 20 Oct 2025
Abstract
This work investigates the stability analysis of linear control systems defined on Lie groups, with a particular focus on low-dimensional cases. Unlike their Euclidean counterparts, such systems evolve on manifolds with non-Euclidean geometry, where trajectories respect the group's intrinsic symmetries. Stability notions, such as inner asymptotic, inner, and input–output (BIBO) stability, are studied. The qualitative behavior of solutions is shown to depend critically on the spectral decomposition of derivations associated with the drift, and on the algebraic structure of the underlying Lie algebra. We study two classes of examples in detail: Abelian and solvable two-dimensional Lie groups, and the three-dimensional nilpotent Heisenberg group. These settings, while mathematically tractable, retain essential features of non-commutativity, geometric non-linearity, and sub-Riemannian geometry, making them canonical models in control theory. The results highlight the interplay between algebraic properties, invariant submanifolds, and trajectory behavior, offering insights applicable to robotic motion planning, quantum control, and signal processing.
(This article belongs to the Special Issue Symmetries in Dynamical Systems and Control Theory)
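Since the qualitative behavior hinges on the spectrum of the derivation associated with the drift, the computational starting point is a spectral check; the sketch below shows only the classical Euclidean (Hurwitz) criterion for x' = Dx, not the group-specific analysis, and the two toy matrices are illustrative assumptions.

```python
import numpy as np

def is_hurwitz(D, tol=1e-9):
    """A matrix is Hurwitz when every eigenvalue has strictly negative
    real part; on R^n this is the classical criterion for asymptotic
    stability of the linear system x' = D x."""
    return bool(np.all(np.linalg.eigvals(D).real < -tol))

# Toy derivations on a two-dimensional (Abelian) Lie algebra:
stable = np.array([[-1.0, 2.0], [0.0, -3.0]])     # eigenvalues -1, -3
unstable = np.array([[0.0, 1.0], [-1.0, 0.0]])    # purely imaginary spectrum
```

On a non-Abelian group the same eigenvalue data must be combined with the Lie algebra's structure (invariant subgroups, nilpotency), which is what the paper's low-dimensional case studies work out.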

15 pages, 1593 KB  
Article
Influence of Sampling Effort and Taxonomic Resolution on Benthic Macroinvertebrate Taxa Richness and Bioassessment in a Non-Wadable Hard-Bottom River (China)
by Jiaxuan Liu, Hongjia Shan, Chengxing Xia and Sen Ding
Biology 2025, 14(10), 1444; https://doi.org/10.3390/biology14101444 - 20 Oct 2025
Abstract
Benthic macroinvertebrates are widely used for river ecosystem health monitoring, yet challenges remain in non-wadable rivers, particularly regarding sampling effort. We evaluated hand-net sampling efficiency at three sites along the Danjiang River (a Yangtze River tributary) by analyzing taxa richness across taxonomic levels under varying replicate numbers. In total, 61 taxa (41 families) of benthic macroinvertebrates were identified. Non-metric multidimensional scaling analysis indicated no significant spatiotemporal variation in community composition. However, as sampling effort increased, taxa richness at both the genus/species and family levels also increased. At eight sample replicates, the taxa accumulation curve at the genus/species level did not reach an asymptote, with the observed richness reaching 67–80% of the values predicted by Jackknife 1. In contrast, the family-level curve exhibited a clear asymptotic trend, with the observed richness reaching 82–100% of the predicted values. As sampling effort increased, bias decreased and accuracy improved, particularly for family-level taxa. The BMWP scores also increased with sampling effort; when the number of replicates was no less than six, the BMWP reached stable assessment grades in all cases. From the perspective of bioassessment in non-wadable rivers, the hand net is suitable for collecting benthic macroinvertebrates, although insufficient sampling effort risks underestimating taxa richness. Using family-level taxa can partially mitigate the impact of insufficient sampling effort, but further validation is needed for other non-wadable rivers (e.g., those with soft substrates). In conclusion, our results indicate that six replicate hand-net samplings in non-wadable hard-bottom rivers constitute a cost-effective and reliable sampling strategy for benthic macroinvertebrate BMWP assessment. This strategy provides a practical reference for monitoring benthic macroinvertebrates in rivers of the same type in China.
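The Jackknife 1 estimator used to benchmark observed richness has a simple closed form, S_obs + Q1·(n − 1)/n, where Q1 is the number of taxa found in exactly one of the n replicates. A minimal sketch on toy presence/absence data (not the Danjiang samples):

```python
import numpy as np

def jackknife1(presence):
    """First-order jackknife richness estimate from a replicates-by-taxa
    presence/absence matrix: S_obs + Q1 * (n - 1) / n, where Q1 is the
    number of taxa found in exactly one replicate."""
    n = presence.shape[0]              # number of sample replicates
    counts = presence.sum(axis=0)      # replicates in which each taxon occurs
    s_obs = int(np.sum(counts > 0))    # observed richness
    q1 = int(np.sum(counts == 1))      # "unique" taxa
    return s_obs + q1 * (n - 1) / n

# 4 replicates x 5 taxa; taxa 4 and 5 each occur in a single replicate.
presence = np.array([
    [1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 1, 0, 0],
    [0, 1, 1, 0, 1],
])
est = jackknife1(presence)             # S_obs = 5, Q1 = 2 -> 5 + 2 * 3/4 = 6.5
```

The ratio of observed to predicted richness (here 5/6.5 ≈ 77%) is the coverage figure the study reports when judging whether an accumulation curve has approached its asymptote.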
