Mathematics, Volume 14, Issue 5 (March-1 2026) – 181 articles

Cover Story: The Smooth SCAD rule introduced in this work achieves its balance of sparsity and near-unbiasedness by replacing the classical SCAD threshold’s piecewise linear transition with a raised-cosine profile. The resulting shrinkage function, illustrated in the cover figure, moves smoothly from strong shrinkage of small coefficients to almost no shrinkage of large ones. This smooth transition eliminates the kinks of classical SCAD while retaining its key statistical advantages: sparsity, continuity, and near-unbiased estimation for large coefficients. The smoothness of the rule places it within the class of thresholding functions for which Stein’s unbiased risk estimate (SURE) can be applied directly, enabling stable data-driven threshold selection and providing both analytical tractability and improved performance in wavelet denoising.
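The paper defines the exact Smooth SCAD rule; the sketch below only illustrates the general idea, using a hypothetical raised-cosine blend between soft thresholding and the identity (the parameter a = 3.7 echoes the conventional SCAD choice, but the formula is not the authors'):

```python
import numpy as np

def smooth_shrink(x, lam, a=3.7):
    """Illustrative raised-cosine shrinkage rule (NOT the paper's exact formula).

    Below lam: coefficients are soft-thresholded to zero.
    Between lam and a*lam: a raised-cosine weight blends smoothly from
    soft thresholding toward the identity, avoiding SCAD's kinks.
    Above a*lam: coefficients pass through untouched (near-unbiasedness).
    """
    x = np.asarray(x, dtype=float)
    absx, sgn = np.abs(x), np.sign(x)
    soft = sgn * np.maximum(absx - lam, 0.0)            # strong shrinkage branch
    t = np.clip((absx - lam) / ((a - 1) * lam), 0.0, 1.0)
    w = 0.5 * (1.0 - np.cos(np.pi * t))                 # raised-cosine weight in [0, 1]
    return (1.0 - w) * soft + w * x

coeffs = np.array([0.05, 0.5, 1.5, 5.0])
print(smooth_shrink(coeffs, lam=1.0))   # small -> 0, large -> unchanged
```

Because the blend is infinitely differentiable in |x| on the transition band, a rule of this shape stays inside the weakly differentiable class where SURE-based threshold selection applies.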
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF form, click on the "PDF Full-text" link and open it with the free Adobe Reader.
22 pages, 5676 KB  
Article
Complete Coverage Random Path Planning Based on a Novel Fractal-Fractional-Order Multi-Scroll Chaotic System
by Xiaoran Lin, Mengxuan Dong, Xueya Xue, Xiaojuan Li and Yachao Wang
Mathematics 2026, 14(5), 926; https://doi.org/10.3390/math14050926 - 9 Mar 2026
Abstract
With the increasing demands for autonomy and coverage efficiency in tasks such as security patrol and post-disaster exploration using mobile robots, achieving random, efficient, and complete coverage path planning has become a critical challenge. Traditional chaotic path planning methods, while capable of generating unpredictable trajectories, still have limitations in terms of randomness strength, traversal uniformity, and convergence coverage. To address these limitations, this study proposes a complete-coverage random path planning method based on a novel four-dimensional fractal-fractional multi-scroll chaotic system. The main contributions of this research are as follows: First, by introducing additional state variables and fractal-fractional operators into the classical Chen system, a fractal-fractional chaotic system with a multi-scroll attractor structure is constructed. The output of this system is then mapped into robot angular velocity commands to achieve area coverage in unknown environments. Key findings include: the novel chaotic system possesses two positive Lyapunov exponents; Spectral Entropy (SE) and Complexity (CO) analyses indicate that when parameter B is fixed and the fractional order α increases, the dynamic complexity of the system significantly rises; in a 50 × 50 grid environment, the robot driven by this system achieved a coverage rate of 98.88% within 10,000 iterations, outperforming methods based on the Lorenz and Chua systems and on random walks; ablation experiments further demonstrate that the combined effects of the fractal order β, fractional order α, and multi-scroll nonlinear terms are key to enhancing system complexity and coverage performance. The significance of this study lies in that it not only provides new ideas for constructing complex chaotic systems but also offers a reliable theoretical foundation and practical solution for mobile robots to perform efficient, random, and high-coverage autonomous inspection tasks in unknown regions. 
Full article
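The coverage mechanism the abstract describes, mapping a chaotic state variable to the robot's angular velocity and marking visited grid cells, can be sketched with the classical Lorenz system (one of the baselines the abstract compares against; the authors' fractal-fractional multi-scroll system is not reproduced here, and the gain and speed values are illustrative assumptions):

```python
import math
import numpy as np

def lorenz_step(state, dt=0.002, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz system."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def chaotic_coverage(steps=50000, grid=50, speed=0.5, gain=0.1):
    """Drive a point robot whose turn rate follows a chaotic state variable,
    marking each visited cell of a grid x grid workspace (wrap-around walls)."""
    state = (1.0, 1.0, 1.0)
    px, py, heading = 25.0, 25.0, 0.0
    visited = np.zeros((grid, grid), dtype=bool)
    for _ in range(steps):
        state = lorenz_step(state)
        heading += gain * state[0]               # chaotic output -> angular velocity
        px = (px + speed * math.cos(heading)) % grid
        py = (py + speed * math.sin(heading)) % grid
        visited[int(px), int(py)] = True
    return visited.mean()

print(f"covered fraction: {chaotic_coverage():.3f}")
```

The fraction of visited cells is the coverage-rate metric the abstract evaluates; swapping in a richer chaotic source (as the paper does) changes the statistics of the heading signal, not the bookkeeping.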
39 pages, 40860 KB  
Article
Cultural History Optimization Based on Film and Television Strategy and Multi-Strategy Improvements for Global Optimization and Engineering Problems
by Yajie Chen and Meng Wang
Mathematics 2026, 14(5), 925; https://doi.org/10.3390/math14050925 - 9 Mar 2026
Abstract
Wireless sensor network (WSN) coverage optimization is a critical factor in improving network service quality, yet it faces challenges such as deployment uniformity, high-dimensional optimization, and the balance between exploration and exploitation under limited node resources. To address the shortcomings of the cultural historical optimization algorithm (CHOA), including insufficient global exploration, lack of dynamic regulation, and limited local exploitation accuracy, this paper proposes a film and television strategy-based multi-strategy cultural historical optimization algorithm (FTSCHOA). The proposed algorithm enhances performance through three synergistic mechanisms: a DE-style evolutionary operator that strengthens global exploration and population diversity; a film-and-television strategy that balances exploration and exploitation via random perturbations and adaptive parameter regulation; and a memory-based neighborhood local search that performs refined exploitation around high-quality solution sets to improve local optimization accuracy. Extensive experiments conducted on the CEC2017 and CEC2022 benchmark suites with dimensions of 10, 20, 30, and 50 demonstrate that FTSCHOA outperforms comparative algorithms in terms of optimization accuracy, convergence speed, and stability. The Friedman mean rank test indicates that FTSCHOA consistently achieves the best average ranking, while the Wilcoxon rank-sum test confirms that its performance differences with respect to competing algorithms are statistically significant (p < 0.05). When applied to WSN coverage optimization in a 100 m × 100 m monitoring region, FTSCHOA achieves coverage rates of 0.9351 and 0.9738 with 25 and 30 sensor nodes, respectively, which are significantly higher than those obtained by PSO, GWO, CHOA, and other algorithms. Moreover, the resulting node deployments exhibit greater uniformity, fewer coverage holes, and lower redundancy. 
The experimental results demonstrate that FTSCHOA effectively overcomes the limitations of traditional algorithms and provides an efficient and practical solution for WSN node deployment optimization, with strong potential for application in real-world scenarios such as environmental monitoring and smart agriculture. Full article
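The coverage-rate objective the abstract reports can be evaluated, under the standard binary-disk sensing model, roughly as follows (the 100 m × 100 m region and the 25-node count follow the abstract; the sensing radius, grid resolution, and random deployment are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)

def coverage_rate(sensors, radius, side=100.0, res=100):
    """Fraction of grid sample points within sensing radius of at least one
    sensor (binary-disk coverage model on a side x side monitoring region)."""
    xs = np.linspace(0, side, res)
    gx, gy = np.meshgrid(xs, xs)                      # (res, res) sample grid
    pts = np.stack([gx.ravel(), gy.ravel()], axis=1)  # (res*res, 2) points
    d2 = ((pts[:, None, :] - sensors[None, :, :]) ** 2).sum(axis=2)
    return (d2.min(axis=1) <= radius ** 2).mean()

nodes = rng.uniform(0, 100, size=(25, 2))             # 25 randomly placed nodes
print(f"random-deployment coverage: {coverage_rate(nodes, radius=15.0):.4f}")
```

A metaheuristic such as FTSCHOA would treat the 50-dimensional vector of node coordinates as the decision variable and this function (or a refined variant) as the fitness to maximize.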
14 pages, 262 KB  
Article
On the Further Properties of the MPBT Inverse and Applications to Special Matrices
by Tingyu Zhao and Yuefeng Gao
Mathematics 2026, 14(5), 924; https://doi.org/10.3390/math14050924 - 9 Mar 2026
Abstract
This paper aims to simplify the form of the MPBT inverse, further explore its properties, and discuss when it coincides with other generalized inverses. Notably, the MPBT inverse coincides with the Moore–Penrose inverse when the index of the matrix is at most 1; the MPBT inverse equals the MPCEP-inverse when the index of the matrix is at most 2. Additionally, new characterizations of bi-EP matrices are presented, based on some properties of the MPBT inverse. Finally, MPBT matrices constructed via the MPBT inverse are shown to be equal to B-T matrices. Full article
17 pages, 661 KB  
Article
On Absolute q-Cesàro Summability Methods for Double Sequences
by Fadime Gökçe
Mathematics 2026, 14(5), 923; https://doi.org/10.3390/math14050923 - 9 Mar 2026
Abstract
In the present paper, a novel absolute summability method, denoted by |C_{q,θ}|_s, is introduced for double sequences via the q-Cesàro matrix. The study also focuses on determining the necessary and sufficient conditions for various inclusion relations, as well as comparing this method with existing absolute summability methods. In particular, the implications |C_{p,ϕ}| ⇒ |C_{q,θ}|_s, |C_{q,θ}|_s ⇒ |C_{p,ϕ}|, and |C_{p,ϕ}|_s ⇒ |C_{q,θ}|_s are fully characterized. The obtained results extend known summability frameworks for double series and highlight the role of q-analogues in providing a flexible and unifying approach to absolute summability theory. Full article
(This article belongs to the Topic Functional Equations: Methods and Applications)
24 pages, 743 KB  
Article
Tensor Train Completion from Fiberwise Observations Along a Single Mode
by Shakir Showkat Sofi and Lieven De Lathauwer
Mathematics 2026, 14(5), 922; https://doi.org/10.3390/math14050922 - 9 Mar 2026
Abstract
Tensor completion is an extension of matrix completion aimed at recovering a multiway data tensor by leveraging a given subset of its entries (observations) and the pattern of observation. The low-rank assumption is key in establishing a relationship between the observed and unobserved entries of the tensor. The low-rank tensor completion problem is typically solved using numerical optimization techniques, where the rank information is used either implicitly (in the rank minimization approach) or explicitly (in the error minimization approach). Current theories concerning these techniques often study probabilistic recovery guarantees under conditions such as random uniform observations and incoherence requirements. However, if an observation pattern exhibits some low-rank structure that can be exploited, more efficient algorithms with deterministic recovery guarantees can be designed by leveraging this structure. This work shows how to use only standard linear algebra operations to compute the tensor train decomposition of a specific type of “fiber-wise” observed tensor, where some of the fibers of a tensor (along a single specific mode) are either fully observed or entirely missing, unlike the usual entry-wise observations. From an application viewpoint, this setting is relevant when it is easier to sample or collect a multiway data tensor along a specific mode (e.g., temporal). The proposed completion method is fast and is guaranteed to work under reasonable deterministic conditions on the observation pattern. Through numerical experiments, we showcase interesting applications and use cases that illustrate the effectiveness of the proposed approach. Full article
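For orientation, the tensor train decomposition that the completion method targets can be computed for a fully observed tensor by the classical TT-SVD of Oseledets (sequential truncated SVDs); the paper's contribution, recovering the decomposition from fiberwise-missing data, is not reproduced here:

```python
import numpy as np

def tt_svd(tensor, rank):
    """Classical TT-SVD: peel off one TT core per mode via truncated SVD."""
    shape = tensor.shape
    cores, r_prev = [], 1
    mat = tensor.reshape(r_prev * shape[0], -1)
    for k in range(len(shape) - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(rank, len(S))
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))   # k-th TT core
        mat = (S[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, shape[-1], 1))           # last core
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into a full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))

rng = np.random.default_rng(0)
# Build an exactly low-TT-rank tensor, decompose it, and check the error.
G1 = rng.standard_normal((1, 4, 2))
G2 = rng.standard_normal((2, 5, 2))
G3 = rng.standard_normal((2, 6, 1))
T = tt_reconstruct([G1, G2, G3])
err = np.linalg.norm(T - tt_reconstruct(tt_svd(T, rank=2))) / np.linalg.norm(T)
print(f"relative reconstruction error: {err:.2e}")
```

When the observation pattern has the fiberwise structure the paper assumes, analogous sequences of standard linear-algebra steps can be arranged to recover the cores without ever seeing the missing fibers.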
24 pages, 717 KB  
Article
Changing Wage Effects of Educational Mismatch in China: Evidence from Threshold IV–Selection Models
by Lulu Jiang, Woraphon Yamaka and Paravee Maneejuk
Mathematics 2026, 14(5), 921; https://doi.org/10.3390/math14050921 - 9 Mar 2026
Abstract
This study examines the wage effects of educational mismatch in China by jointly addressing sample selection, endogeneity, and nonlinear career-stage heterogeneity within a unified econometric framework. Although educational mismatch has been widely studied, existing evidence largely relies on linear models that overlook experience-dependent wage dynamics and potential selection and endogeneity biases. Using data from the 2020 wave of the China Family Panel Studies (CFPS), this study extends the Duncan–Hoffman model by integrating a sample-selection-corrected threshold regression estimated via instrumental variables. This approach allows the identification of experience thresholds at which the wage effects of overeducation and undereducation differ across regimes. The results reveal pronounced nonlinearities in mismatch-related wage differentials. Overeducation is associated with wage penalties at early career stages, but these penalties weaken and, in some cases, disappear once workers surpass the estimated experience threshold. In contrast, undereducation yields modest wage premiums early in the career but becomes increasingly penalized at higher experience levels. Substantial gender heterogeneity is also observed: male workers are better able to use accumulated experience to offset educational shortfalls, whereas female workers face more persistent penalties, particularly at later career stages. Full article
26 pages, 1357 KB  
Article
Negotiation of Electricity Intention Based on Community Logic System
by Yusen Chen and Zhengwen Huang
Mathematics 2026, 14(5), 920; https://doi.org/10.3390/math14050920 - 9 Mar 2026
Abstract
In evolutionary computation, distinct clusters that address different subproblems evolve independently of each other, which makes it difficult to exchange genetic information between them. However, a vaguely defined task within one system may be expressed more clearly within another. Effective interaction methods enable subsystems to collaborate more effectively in solving global tasks. By analysing how ambiguous intentions regarding electricity consumption influence actual behaviour in real-world scenarios, we discovered that transaction and negotiation patterns within electricity markets can effectively support this process. By introducing time and third parties, the study presents a semiautomatic, interpretable reasoning community logic system that enables machines to express transaction negotiation patterns. Through formalised operations, it facilitates the conversion of intentions, uncovering hidden relationships within global structures through this liberated form of expression. This paper examines its impact on computational and search paradigms through case studies, enabling collaborative approaches and granularity control via dynamic anchor points, and explores automated peer-to-peer transactions and electricity monetisation within highly abstracted power trading processes. Full article
18 pages, 462 KB  
Article
Existence and Construction of Tangential and Anisotropic Bases in Finite-Dimensional Quadratic Spaces
by Alexander Leones, Pedro Hurtado, John Moreno and Adolfo Pimienta
Mathematics 2026, 14(5), 919; https://doi.org/10.3390/math14050919 - 9 Mar 2026
Abstract
This paper studies the existence and construction of bases consisting of tangential and anisotropic vectors in finite-dimensional quadratic spaces over fields of characteristic different from two. While classical theory guarantees the existence of orthogonal bases in regular quadratic spaces, the existence of bases governed by alternative geometric constraints such as tangency or isotropy has remained largely unexplored. We introduce determinant-based constructive methods extending the Gram–Schmidt process to arbitrary quadratic spaces, yielding systematic criteria for generating orthogonal, tangential, and isotropic families of vectors. Our main results establish necessary and sufficient conditions for the existence of tangential bases, including a characterization of regular spaces of positive index and strong algebraic obstructions in the hyperbolic case. In addition, we prove a general constructive existence theorem for isotropic bases in real regular quadratic spaces. Full article
15 pages, 320 KB  
Article
Weak Compactness in W^{k,∞}
by Cheng Chen and Shiqing Zhang
Mathematics 2026, 14(5), 918; https://doi.org/10.3390/math14050918 - 8 Mar 2026
Abstract
We characterize weak compactness in the Sobolev space W^{k,∞}(Ω). For non-reflexive spaces like W^{k,∞}, criteria beyond boundedness are required. By exploiting the von Neumann algebra structure of L^∞ via Gelfand duality, we establish a unified theory. Our main result is a necessary and sufficient condition: a subset is relatively weakly compact if and only if it is bounded and its weak derivatives up to order k have uniformly small oscillation on a finite measurable partition of Ω. This provides a tool for analyzing nonlinear problems in these spaces. Full article
20 pages, 299 KB  
Article
A Pessimistic Two-Stage Network DEA Model with Interval Data and Endogenous Weight Restrictions
by Chia-Nan Wang and Giovanni Cahilig
Mathematics 2026, 14(5), 917; https://doi.org/10.3390/math14050917 - 8 Mar 2026
Abstract
This paper develops a pessimistic two-stage network data envelopment analysis (DEA) model that integrates interval-valued data and endogenous weight restrictions within a unified linear programming framework. The proposed approach explicitly captures internal network structures while addressing bounded data uncertainty through an interval-to-deterministic transformation that preserves linearity and avoids probabilistic assumptions. Robustness is interpreted in the pessimistic interval DEA sense, where efficiency is evaluated under worst-case realizations of observed bounds rather than through explicit uncertainty-set optimization. To mitigate weight degeneracy and enhance discrimination power, data-driven proportional weight restrictions are introduced; these endogenous bounds are constructed solely from observed data and regularize the multiplier space without relying on subjective preferences or tuning parameters, while maintaining scale invariance and the nonparametric nature of DEA. The model admits equivalent multiplier and envelopment formulations and enables meaningful decomposition of overall efficiency into stage-specific components. Fundamental theoretical properties—including feasibility, boundedness, monotonicity, efficiency decomposition, and special case consistency—are rigorously established. An empirical application to OECD macroeconomic data, accompanied by sensitivity evaluation, demonstrates the stability and discriminatory capability of the proposed framework under bounded variability. Computational analysis confirms that the model retains linear programming structure and exhibits linear growth in problem size with respect to the number of decision-making units, thereby preserving the scalability characteristics of classical two-stage network DEA formulations. The proposed framework provides a theoretically grounded and computationally tractable approach for network efficiency analysis under bounded interval uncertainty. Full article
(This article belongs to the Special Issue New Advances of Optimization and Data Envelopment Analysis)
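For readers unfamiliar with DEA multiplier models, the sketch below solves the classical single-stage, input-oriented CCR model as a linear program (toy data; this is the textbook baseline, not the paper's pessimistic two-stage interval model, and it requires SciPy):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR multiplier model for decision-making unit j0.

    maximize   u . Y[:, j0]
    subject to v . X[:, j0] = 1,
               u . Y[:, j] - v . X[:, j] <= 0 for every DMU j,
               u, v >= 0.
    """
    m, n = X.shape                  # m inputs, n DMUs
    s = Y.shape[0]                  # s outputs; decision vector is [u (s), v (m)]
    c = np.concatenate([-Y[:, j0], np.zeros(m)])   # linprog minimizes, so negate
    A_ub = np.hstack([Y.T, -X.T])                  # u.Yj - v.Xj <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[:, j0]])[None, :]
    b_eq = np.array([1.0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (s + m), method="highs")
    return -res.fun

# Two inputs, one output, four DMUs (made-up values); DMU 3 uses exactly
# twice DMU 0's inputs for the same output, so its score should be 0.5.
X = np.array([[2.0, 4.0, 8.0, 4.0],
              [4.0, 2.0, 2.0, 8.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0]])
scores = [ccr_efficiency(X, Y, j) for j in range(4)]
print([round(v, 3) for v in scores])
```

The paper's model replaces the crisp data with interval bounds evaluated pessimistically, links two such stages through intermediate products, and adds data-driven weight restrictions, but the multiplier-LP skeleton above is the common starting point.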
39 pages, 507 KB  
Article
An LM-Type Unit Root Test for Functional Time Series
by Yichao Chen and Chi Seng Pun
Mathematics 2026, 14(5), 916; https://doi.org/10.3390/math14050916 - 8 Mar 2026
Abstract
In this paper, we propose a Lagrange multiplier (LM)-type unit root test for functional time series. The key novelty lies not in introducing a new LM principle but in establishing the asymptotic validity of such a test under the functional random walk null hypothesis without relying on functional principal component analysis (FPCA) or finite-dimensional unit root subspace assumptions. We derive the limit distribution of the proposed test statistic under the null hypothesis of a random walk, as well as its asymptotic behavior under the alternative hypotheses of trend-stationary, weakly dependent stationary, and autoregressive stationary models. Specifically, we establish the theoretical consistency of the test under all aforementioned alternative hypotheses. Simulation studies corroborate these theoretical findings and demonstrate the desirable finite-sample performance of the proposed functional unit root test. The proposed test is also applied to real data on intraday stock price curves, and the test results are plausible. Full article
(This article belongs to the Special Issue New Challenges in Statistical Analysis and Multivariate Data Analysis)
39 pages, 67440 KB  
Article
LLM-TOC: LLM-Driven Theory-of-Mind Adversarial Curriculum for Multi-Agent Generalization
by Chenxu Wang, Jiang Yuan, Tianqi Yu, Xinyue Jiang, Liuyu Xiang, Junge Zhang and Zhaofeng He
Mathematics 2026, 14(5), 915; https://doi.org/10.3390/math14050915 - 8 Mar 2026
Abstract
Zero-shot generalization to out-of-distribution (OOD) teammates and opponents in multi-agent systems (MASs) remains a fundamental challenge for general-purpose AI, especially in open-ended interaction scenarios. Existing multi-agent reinforcement learning (MARL) paradigms, such as self-play and population-based training, often collapse to a limited subset of Nash equilibria, leaving agents brittle when faced with semantically diverse, unseen behaviors. Recent approaches that invoke Large Language Models (LLMs) at run time can improve adaptability but introduce substantial latency and can become less reliable as task horizons grow; in contrast, LLM-assisted reward-shaping methods remain constrained by the inefficiency of the inner reinforcement-learning loop. To address these limitations, we propose LLM-TOC (LLM-Driven Theory-of-Mind Adversarial Curriculum), which casts generalization as a bi-level Stackelberg game: in the inner loop, a MARL agent (the follower) minimizes regret against a fixed population, while in the outer loop, an LLM serves as a semantic oracle that generates executable adversarial or cooperative strategies in a Turing-complete code space to maximize the agent’s regret. To cope with the absence of gradients in discrete code generation, we introduce Gradient Saliency Feedback, which transforms pixel-level value fluctuations into semantically meaningful causal cues to steer the LLM toward targeted strategy synthesis. We further provide motivating theoretical analysis via the PAC-Bayes framework, showing that LLM-TOC converges at rate O(1/K) and yields a tighter generalization error bound than parameter-space exploration under reasonable preconditions. 
Experiments on the Melting Pot benchmark demonstrate that, with expected cumulative collective return as the core zero-shot generalization metric, LLM-TOC consistently outperforms self-play baselines (IPPO and MAPPO) and the LLM-inference method Hypothetical Minds across all held-out test scenarios, reaching 75% to 85% of the upper-bound performance of Oracle PPO. Meanwhile, with the number of RL environment interaction steps to reach the target relative performance as the core efficiency metric, our framework reduces the total training computational cost by more than 60% compared with mainstream baselines. Full article
(This article belongs to the Special Issue Applications of Intelligent Game and Reinforcement Learning)
21 pages, 1433 KB  
Article
Minimax Lower Bounds for Uniform Estimation of Covariate-Dependent Copula Parameters
by Mathias Nthiani Muia, Olivia Atutey and Chathurika Srimali Abeykoon
Mathematics 2026, 14(5), 914; https://doi.org/10.3390/math14050914 - 8 Mar 2026
Abstract
Local likelihood methods are widely used to estimate calibration functions in conditional copula models. Recent work has established uniform stochastic equicontinuity and uniform convergence rates for local likelihood estimators of covariate-dependent copula parameters, yielding global consistency guarantees and supporting the stability of local optimization routines. This paper complements those results by deriving minimax lower bounds for uniform estimation over Hölder classes of calibration functions. Under mild regularity conditions on the copula family and the covariate design, we show that the minimax sup-norm risk over a compact covariate region is bounded below by the classical nonparametric rate for smooth functions on an s-dimensional domain. The proof combines a localized packing construction with a Fano–Le Cam testing argument, using second-order expansions of the conditional copula likelihood to control information distances. As a consequence, local polynomial likelihood estimators achieve the minimax rate up to the logarithmic factors inherent to uniform estimation, providing a sharp optimality justification for their use in conditional copula modeling. Full article
(This article belongs to the Special Issue Advances in Probability Theory and Stochastic Analysis)
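The "classical nonparametric rate" invoked in the abstract is, for β-Hölder functions on an s-dimensional domain, the standard sup-norm minimax benchmark (stated here as the well-known general form, not the paper's exact theorem):

```latex
\inf_{\widehat{\theta}}\;
\sup_{\theta \in \Sigma(\beta, L)}
\mathbb{E}\,\bigl\|\widehat{\theta} - \theta\bigr\|_{\infty}
\;\gtrsim\;
\left(\frac{\log n}{n}\right)^{\beta/(2\beta + s)}
```

The logarithmic factor is the price of uniform (sup-norm) rather than pointwise estimation, which is why the abstract describes local polynomial likelihood estimators as minimax optimal "up to the logarithmic factors inherent to uniform estimation."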
27 pages, 1334 KB  
Article
ETR: Event-Centric Temporal Reasoning for Question-Conditioned Video Question Answering
by Lingmin Pan, Ziyi Gao, Yueming Zhu, Fuchen Chen, Chengyuan Zhang, Dan Yin, Yong Cai, Siqiao Tan and Lei Zhu
Mathematics 2026, 14(5), 913; https://doi.org/10.3390/math14050913 - 7 Mar 2026
Abstract
Video Question Answering (VideoQA) requires a deep understanding of dynamic video content, integrating spatial reasoning, temporal dependencies, and language comprehension. Existing methods often struggle with long or semantically complex videos due to the lack of question-guided keyframe weight adjustment and the absence of question-aligned cross-modal description generation. To address these challenges, we propose ETR (Event-centric Temporal Reasoning), an adaptive framework for VideoQA. ETR introduces three key mechanisms: (i) a hierarchical weight adjustment selector to identify questions requiring event-centric temporal reasoning; (ii) a T-Route that segments videos into semantically coherent events and dynamically adjusts keyframe weights according to question intent; and (iii) a question-conditioned prompting strategy that focuses on key objects to generate textual prompts aligned with a question’s semantics. This hierarchical and adaptive design effectively balances visual and textual information, enhances temporal reasoning, and improves object-centric alignment. Experiments on two datasets demonstrate that ETR achieves competitive performance in fine-grained, question-aware VideoQA. Full article
(This article belongs to the Special Issue Structural Networks for Image Application)
31 pages, 4562 KB  
Article
A Mathematical Model of Within-Host HBV and HTLV-1 Co-Infection Dynamics
by Amani Alsulami and Ebtehal Almohaimeed
Mathematics 2026, 14(5), 912; https://doi.org/10.3390/math14050912 - 7 Mar 2026
Abstract
Hepatitis B virus (HBV) and human T-lymphotropic virus type 1 (HTLV-1) are blood-borne pathogens with overlapping transmission routes, resulting in an increased prevalence of HBV among individuals infected with HTLV-1. Notwithstanding the widespread application of mathematical modeling to the study of each virus in isolation, the within-host dynamics of HBV–HTLV-1 co-infection remain insufficiently characterized. This study introduces a novel within-host co-infection model that characterizes the interactions between HBV and HTLV-1, where HTLV-1 infects CD4+ T cells and HBV targets hepatocytes. A comprehensive qualitative analysis yields four threshold parameters (R_i, i = 1, 2, 3, 4) governing the existence and stability of equilibrium points, with global stability established using Lyapunov functions. Numerical simulations validate the analytical results, and sensitivity analysis identifies parameters that most strongly influence the basic reproduction numbers for HBV (R_1) and HTLV-1 (R_2) mono-infections. Our results corroborate that, in patients with HBV, the presence of HTLV-1 contributes to an elevated HBV viral load and that CD4+ T cells play a crucial role in controlling HBV infection. Full article
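Within-host models of this kind are built from a common target-cell-limited core; the minimal sketch below simulates that generic building block with made-up parameters (it is not the paper's four-threshold co-infection system) and computes its basic reproduction number R0 = β·T0·p/(δ·c):

```python
import numpy as np

def simulate(days=200.0, dt=0.01, lam=10.0, d=0.1,
             beta=1e-3, delta=0.5, p=50.0, c=5.0):
    """Forward-Euler run of the basic target-cell-limited viral model:
        T' = lam - d*T - beta*T*V   (uninfected target cells)
        I' = beta*T*V - delta*I     (infected cells)
        V' = p*I - c*V              (free virus)
    All parameter values are illustrative, not fitted to HBV or HTLV-1.
    """
    T, I, V = lam / d, 0.0, 1.0       # infection-free cell level, one virion
    for _ in range(int(days / dt)):
        dT = lam - d * T - beta * T * V
        dI = beta * T * V - delta * I
        dV = p * I - c * V
        T, I, V = T + dt * dT, I + dt * dI, V + dt * dV
    return T, I, V

T0 = 10.0 / 0.1                        # infection-free equilibrium lam/d
R0 = 1e-3 * T0 * 50.0 / (0.5 * 5.0)    # R0 = beta*T0*p / (delta*c)
print(f"basic reproduction number R0 = {R0:.1f}")
T, I, V = simulate()
print(f"state at day 200: T={T:.1f}, I={I:.2f}, V={V:.1f}")
```

With R0 > 1 the infection persists and the state settles toward the endemic equilibrium; the paper's co-infection model couples two such subsystems (hepatocytes for HBV, CD4+ T cells for HTLV-1), which is what produces its four threshold parameters instead of one.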
25 pages, 374 KB  
Article
Some New Subclasses of Bi-Univalent Functions Related to Quantum Calculus
by Renjie Guo, Sadia Riaz, Wajiha Bushra, Adeel Ahmad, Saqib Hussain and Saima Noor
Mathematics 2026, 14(5), 911; https://doi.org/10.3390/math14050911 - 7 Mar 2026
Viewed by 261
Abstract
The primary objective of this paper is to introduce and investigate several novel subclasses of bi-univalent functions associated with the q-calculus framework. Using appropriate analytical techniques, we derive coefficient bounds for the initial coefficients of the functions belonging to these newly defined classes. In particular, we provide explicit estimates for the second-order Hankel determinant and address the classical Fekete–Szegö functional problem within the context of these classes under suitable conditions. It is important to note that the findings presented in this work not only contribute to the ongoing development of q-analogs in geometric function theory, but also serve as a unifying generalization of many previously known results, which are obtained as special cases of our main findings. Full article
24 pages, 4228 KB  
Article
From Layout to Data: AI-Driven Route Matrix Generation for Logistics Optimization
by Ádám Francuz and Tamás Bányai
Mathematics 2026, 14(5), 910; https://doi.org/10.3390/math14050910 - 7 Mar 2026
Viewed by 431
Abstract
This study proposes an end-to-end mathematical framework to automatically transform warehouse layout images into optimization-ready route matrices. The objective is to convert visual spatial information into a discrete, graph-based representation suitable for combinatorial route optimization. The problem is formulated as a mapping from continuous image space to a structured grid representation, integrating image segmentation, graph construction, and Traveling Salesman Problem (TSP)-based routing. Synthetic warehouse layouts were generated to create labeled training data, and a U-Net convolutional neural network was trained to perform multi-class segmentation of warehouse elements. The predicted grid representation was then converted into a graph structure, where feasible cells define vertices and adjacency defines edges. Shortest path distances were computed using Breadth-First Search, and the resulting distance matrix was used to solve a TSP instance. The segmentation model achieved approximately 98% training accuracy and 95–97% validation accuracy. The generated route matrices enabled successful construction of feasible and optimal round-trip routes in all tested scenarios. The proposed framework demonstrates that warehouse layouts can be automatically transformed into discrete mathematical representations suitable for logistics optimization, reducing manual preprocessing and enabling scalable integration into digital logistics systems. Full article
(This article belongs to the Special Issue Soft Computing in Computational Intelligence and Machine Learning)
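The grid-to-graph stage summarized in the abstract above (feasible cells as vertices, adjacency as edges, Breadth-First Search for pairwise shortest-path distances) can be sketched as follows. The grid encoding, function names, and the 4-neighbour adjacency are illustrative assumptions, not the authors' implementation:

```python
from collections import deque

def bfs_distances(grid, start):
    """Shortest path length (in cells) from `start` to every reachable cell.

    `grid` is a 2D list where 0 marks a traversable cell; 4-neighbour
    adjacency defines the graph edges.
    """
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return dist

def distance_matrix(grid, stops):
    """Pairwise BFS distance matrix between stop cells, ready for a TSP solver."""
    rows = []
    for a in stops:
        d = bfs_distances(grid, a)
        rows.append([d.get(b) for b in stops])
    return rows
```

On a toy grid with blocked shelving cells, the resulting matrix counts grid steps between pick locations and can be fed directly to any TSP routine.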
25 pages, 990 KB  
Article
An Adaptive Fitness-Guided Starfish Optimization Framework for Optimal Power Flow Operation
by Sulaiman Z. Almutairi and Abdullah M. Shaheen
Mathematics 2026, 14(5), 909; https://doi.org/10.3390/math14050909 - 7 Mar 2026
Cited by 1 | Viewed by 311
Abstract
Optimal Power Flow Operation (OPFO) is a large-scale, nonlinear, and highly constrained optimization problem that plays a central role in achieving economical, reliable, and environmentally sustainable power system operation. Despite the widespread use of metaheuristic algorithms for OPFO, many methods primarily depend on global-best updates or complex hybrid operators, leading to issues such as premature convergence and diminished population diversity. Furthermore, recent literature tends to focus on numerical improvements without sufficiently addressing the underlying interaction structures that ensure stability in convergence. To address these limitations, this paper proposes an Improved Starfish Optimization (ISFO) algorithm incorporating a hybrid fitness-aware population-based search mechanism for solving OPFO problems involving the simultaneous regulation of synchronous generator outputs, on-load tap-changing transformer ratios, and reactive power compensation devices. The proposed method introduces an adaptive Fitness-Aware Collective (FAC) interaction strategy that systematically models pairwise fitness relationships to guide attraction toward superior solutions and repulsion from inferior ones, thereby strengthening exploitation while preserving diversity through controlled stochastic peer-based perturbations. A dual-mode search framework further balances global exploration and local intensification without introducing additional control parameters, enhancing robustness and scalability. The OPFO problem is formulated as a constrained nonlinear optimization model, where equality constraints enforce the power flow balance equations and inequality constraints represent the operational limits of generators, transformers, voltages, and transmission lines. The proposed ISFO is validated on the IEEE 57-bus power system under three operating scenarios: fuel cost minimization, transmission loss minimization, and emission minimization. Comparative results demonstrate consistent superiority over the standard Starfish Optimization Algorithm (SFOA). In cost minimization, ISFO reduces the total generation cost from 41,697.85 $/h to 41,669.34 $/h while simultaneously decreasing real power losses by 5.22%. Under loss minimization, ISFO achieves a minimum transmission loss of 10.77 MW, corresponding to a 9.23% reduction relative to SFOA, with improved convergence stability. For emission minimization, ISFO attains the lowest emission level of 1.474 ton/h, representing a 6.65% reduction compared to SFOA, alongside an additional 5.67% reduction in system losses. Statistical evaluations based on 30 independent runs further confirm the robustness and reliability of the proposed approach, demonstrating reduced variance, narrower confidence intervals, and statistically significant improvements across all investigated objectives. Full article
(This article belongs to the Special Issue Mathematical Methods Applied in Power Systems, 2nd Edition)
18 pages, 1834 KB  
Article
Multi-Dataset Training for Improved Accuracy in Spatio-Temporal Problems: An Explainable Analysis
by Javier García-Sigüenza, Alberto Real-Fernández, Faraón Llorens-Largo, Rafael Molina-Carmona and Marc Semper
Mathematics 2026, 14(5), 908; https://doi.org/10.3390/math14050908 - 7 Mar 2026
Viewed by 388
Abstract
Deep learning models used to predict spatio-temporal data usually make use of embeddings to represent the different nodes that make up a graph, and are thus able to represent the characteristics of the nodes to be predicted. While in other fields of deep learning, such as NLP, pre-training is performed on large datasets to obtain the embeddings, which are then applied to another task with a smaller dataset, this is a more complex undertaking for spatio-temporal problems. Therefore, in this paper, we propose a method for training on several graphs simultaneously to improve embeddings, using a model adapted to the problem and a dataset generated from subgraphs. To validate the method, a new dataset was generated from several datasets used for traffic forecasting. The results show that embeddings generated by training on multiple datasets increase prediction accuracy, improving metrics on the datasets used for validation. In addition, an analysis of the embeddings has been performed to add explainability to our method, providing a better understanding of how this training affects the generated embeddings. Full article
33 pages, 1702 KB  
Article
A Matheuristic for the Distance Constrained Inventory Routing Problem
by Víctor Manuel Valenzuela-Alcaraz, Efraín Ruiz-y-Ruiz, Alma Danisa Romero-Ocaño, Pamela Chiñas-Sánchez and Cecilia Guadalupe Mota-Gutiérrez
Mathematics 2026, 14(5), 907; https://doi.org/10.3390/math14050907 - 7 Mar 2026
Viewed by 333
Abstract
This paper addresses the Distance-Constrained Inventory Routing Problem (DCIRP), a complex problem that combines inventory management and vehicle routing in a logistics context. The problem arises in the context of a specialty gas delivery company that maintains a specialty gas holding facility at each customer’s site and uses several trucks to deliver specialty gas, with the additional constraint that drivers are limited to the number of kilometers they can drive each day. A Mixed Integer Linear Programming (MILP) formulation is proposed to model the DCIRP. The DCIRP is a variant of the Inventory Routing Problem (IRP), and an NP-hard combinatorial optimization problem. The main objective of this research is to improve the efficiency and effectiveness of DCIRP resolution, while accounting for vehicle capacity constraints, customer inventory levels, and delivery route distance constraints. By optimizing routes and inventory management, the company’s operations become more sustainable. To solve the problem, three solution approaches are proposed. The first is an exact method based on the MILP formulation. The second is a matheuristic that uses an inventory-first, route-second (IFRS) approach, including a minimum route cost approximation and a local search procedure. The results show that the proposed matheuristic produces high-quality solutions with a reasonable computational effort. Full article
21 pages, 394 KB  
Article
Geometric Properties of Infinite Direct Sums
by Paweł Kolwicz
Mathematics 2026, 14(5), 906; https://doi.org/10.3390/math14050906 - 7 Mar 2026
Viewed by 316
Abstract
We show exactly when the topology of convergence in measure in Banach ideal spaces is linear (equivalently, coarser than the norm topology). Next, we present the relationship between the Kadets–Klee and suitable monotonicity properties with respect to global convergence in measure. Applying these results, we characterize the Kadets–Klee property with respect to the global convergence in measure in infinite direct sums. We also prove the criteria of some related monotonicity properties in infinite direct sums. Furthermore, we solve the fundamental lifting (inheritance) problem completely for all these properties. We finish the paper with concrete examples showing how our general results can be applied. Full article
(This article belongs to the Special Issue New Advances in Complex Analysis and Functional Analysis)
14 pages, 314 KB  
Article
A Class of Relational Almost Nonlinear Contractions Using (c)-Comparison Functions with an Application in Nonlinear Integral Equations
by Doaa Filali, Esmail Alshaban, Ahmed Alamer, Bassam Z. Albalawi, Adel Alatawi and Faizan Ahmad Khan
Mathematics 2026, 14(5), 905; https://doi.org/10.3390/math14050905 - 6 Mar 2026
Viewed by 243
Abstract
This work uses (c)-comparison functions in a metric space endowed with an arbitrary binary relation to establish certain fixed point results under a nonlinear version of almost contractions. The results presented here generalize and extend a number of recent findings. A few scenarios are described to demonstrate the validity of our results. The existence of a unique solution to a nonlinear integral equation is then established by applying our findings. Full article
(This article belongs to the Special Issue Fixed Point, Optimization, and Applications: 3rd Edition)
19 pages, 364 KB  
Article
New Fuzzy Topologies via Ideals and Generalized Openness
by Ahu Açıkgöz
Mathematics 2026, 14(5), 904; https://doi.org/10.3390/math14050904 - 6 Mar 2026
Viewed by 213
Abstract
This paper introduces and investigates a new class of generalized open sets, called fuzzy hI-open sets, in fuzzy ideal topological spaces (X, τ̃, Ĩ). We prove that the collection of all fuzzy hI-open sets forms a fuzzy topology τ̃hI satisfying τ̃ ⊆ τ̃hI, and show that τ̃∗ and τ̃hI are in general incomparable, demonstrating that the hI-construction captures fundamentally different information from the ∗-topology. We establish precise conditions under which these topologies coincide and introduce a fuzzy hI-T1 separation axiom. Furthermore, we develop a comprehensive hierarchy of generalizations (fuzzy hαI-open, fuzzy hpI-open, fuzzy hsI-open, and fuzzy hβI-open sets) and prove that these classes are pairwise distinct through genuinely fuzzy (non-characteristic) examples. We introduce fuzzy hI-continuous and fuzzy hI-irresolute functions, providing six equivalent characterizations and a closed-set criterion via the ∗-interior operator. The framework is applied to a concrete multi-criteria decision-making problem, where the ideal filters out negligible criteria and the hI-interior provides a refined ranking that demonstrably outperforms the original fuzzy topology. Full article
(This article belongs to the Topic Fuzzy Sets Theory and Its Applications)
14 pages, 363 KB  
Article
The Legendre Spectral Method for Solving the Nonlinear Time-Fractional Convection-Diffusion Equations
by Guangfeng Lu, Lihua Jiang, Wenping Chen, Qingping Cheng and Xinyue Wang
Mathematics 2026, 14(5), 903; https://doi.org/10.3390/math14050903 - 6 Mar 2026
Viewed by 321
Abstract
In this paper, the nonlinear time-fractional convection-diffusion equations are solved by the Legendre spectral method. The Caputo time-fractional derivative is discretized by the L2-1σ scheme. A priori estimates of the fully discrete scheme are derived, and the existence and uniqueness of the numerical solution are analyzed. It is rigorously proved that the fully discrete scheme is unconditionally stable, and that the convergence order of the numerical scheme is O(N^(1−m) + τ²). Finally, numerical results are presented to verify the theoretical analysis. Full article
28 pages, 822 KB  
Article
Differentiated Subsidy Policies for Outsourcing Remanufacturing Under Subsidy Phase-Out: Innovation, Production, and Consumption
by Danyang Du and Aiping Wu
Mathematics 2026, 14(5), 902; https://doi.org/10.3390/math14050902 - 6 Mar 2026
Viewed by 294
Abstract
In outsourcing remanufacturing, the Original Equipment Manufacturer (OEM) is responsible for product design and sales, while the Third-Party Remanufacturer (TPR) undertakes remanufacturing operations. The separation of responsibilities often leads to incentive misalignment and hinders industry development. Government subsidies are critical for mitigating this conflict, and gradual subsidy phase-out has become a common policy trend. Therefore, this paper constructs a game-theoretic model between the OEM and the TPR under outsourcing remanufacturing. Innovation, production, and consumption subsidies are integrated into a unified analytical framework. Their impacts are systematically analyzed and compared. The subsidy phase-out is captured through detailed analysis across different subsidy intervals, leading to a piecewise equilibrium structure that reflects firms’ strategic adjustments as subsidy intensity declines. The results indicate that innovation subsidies primarily incentivize the OEM to enhance design for remanufacturing but have a limited impact on production and pricing decisions of both the OEM and TPR. Conversely, production and consumption subsidies significantly affect the TPR’s market entry strategies, exhibiting phasic characteristics during the phase-out process. Further comparison reveals that production and consumption subsidies are more effective in promoting the expansion of remanufacturing scale. The innovation subsidies are more advantageous for improving carbon efficiency and achieving long-term emission reduction in the mature stages of the industry. Full article
(This article belongs to the Section D: Statistics and Operational Research)
33 pages, 662 KB  
Article
The Asymmetric Bimodal Normal Distribution: A Tractable Mixture Model for Skewed and Bimodal Data
by Hassan S. Bakouch, Hugo S. Salinas, Çağatay Çetinkaya, Shaykhah Aldossari, Amira F. Daghestani and John L. Santibáñez
Mathematics 2026, 14(5), 901; https://doi.org/10.3390/math14050901 - 6 Mar 2026
Viewed by 368
Abstract
We study a parsimonious constrained two-component Gaussian mixture with symmetric locations ±λ and unequal weights controlled by α ∈ [−1, 1]; we refer to this family as the asymmetric bimodal normal. The constraint eliminates label switching and yields an identifiable parametrization for λ > 0, while noting the boundary degeneracy at λ = 0, where α is not identifiable. We derive closed-form analytical expressions for the density and distribution functions, an equivalent constructive representation (useful for simulation and interpretation), explicit moment formulas, and conditions distinguishing unimodality from bimodality. For inference, we develop maximum likelihood estimation with observed information standard errors and provide numerically stable fits via a block-coordinate quasi-Newton routine using method-of-moments initial values. A Monte Carlo simulation study across representative parameter settings evaluates bias and root mean squared error, and examines the behavior of Hessian-based standard error estimates, highlighting regimes where the observed information becomes ill-conditioned under weak separation. Empirical analyses (chemical calibration deviations from the National Institute of Standards and Technology and a regression example with asymmetric errors) show competitive or superior fit and interpretability relative to skew-normal alternatives, asymmetric Laplace models, and unconstrained Gaussian mixtures, with consistent advantages under model comparison using the Akaike information criterion and the Bayesian information criterion. Full article
(This article belongs to the Special Issue Computational Statistics and Data Analysis, 3rd Edition)
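One plausible reading of the constrained mixture described above is the following density sketch. The specific weight parametrization (1 ± α)/2 is an assumption made for illustration and may differ from the paper's exact form:

```python
from math import sqrt, pi, exp

def abn_pdf(x, lam, alpha, sigma=1.0):
    """Density of a two-component Gaussian mixture with symmetric component
    locations -lam and +lam and unequal weights (1 - alpha)/2 and
    (1 + alpha)/2, with alpha in [-1, 1].

    NOTE: this weight parametrization is an illustrative assumption chosen
    to match the abstract's description, not necessarily the paper's form.
    """
    def norm_pdf(z, mu):
        # N(mu, sigma^2) density evaluated at z
        return exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * sqrt(2.0 * pi))
    return (0.5 * (1.0 - alpha) * norm_pdf(x, -lam)
            + 0.5 * (1.0 + alpha) * norm_pdf(x, lam))
```

With α = 0 the density is symmetric and bimodal for well-separated λ; moving α toward ±1 tilts the mass onto one mode, which is the skewed-and-bimodal behavior the family targets.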
18 pages, 3617 KB  
Article
Adaptive Ensemble Weight Optimization for Natural Gas Consumption Forecasting: A Hybrid Stochastic–Deep Learning Framework Applied to the Czech Market
by Vojtěch Vávra and Josef Jablonsky
Mathematics 2026, 14(5), 900; https://doi.org/10.3390/math14050900 - 6 Mar 2026
Viewed by 301
Abstract
The transition towards data-driven energy management requires predictive frameworks capable of handling the nonlinear and non-stationary nature of natural gas consumption. Traditional static models often struggle to adapt to rapid regime shifts in liberalized markets. To address this forecasting problem, this study proposes a convex ensemble weight optimization framework. Moving beyond simple model averaging, we formulate the ensemble weighting problem as a constrained convex optimization task on the unit simplex. We utilize the Frank–Wolfe algorithm (Conditional Gradient) to dynamically optimize the weights of a heterogeneous set of base learners, including SARIMAX, XGBoost, N-HiTS, and Temporal Fusion Transformers (TFTs). Our results on the Czech gas market dataset demonstrate that this mathematically grounded approach achieves a Mean Absolute Percentage Error (MAPE) of 4.25%, which compares favorably to individual models such as N-HiTS (5.31%) and static averaging (6.74%). While the accuracy gain over greedy ensemble selection is marginal, the proposed convex formulation offers improved stability and interpretability, which are practical advantages for operational deployment. Full article
(This article belongs to the Section D: Statistics and Operational Research)
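The simplex-constrained weighting step described above can be sketched with a plain Frank-Wolfe (conditional gradient) loop. The squared-error objective, step-size rule, and variable names are illustrative assumptions rather than the paper's exact formulation:

```python
import numpy as np

def frank_wolfe_weights(P, y, iters=200):
    """Convex-combination weights w on the unit simplex minimizing
    ||P w - y||^2, where column j of P holds base model j's forecasts.

    Each Frank-Wolfe step moves toward the vertex (single model) whose
    gradient coordinate is smallest, i.e. the linear minimization oracle
    on the simplex, using the classic 2/(t+2) step size.
    """
    n_models = P.shape[1]
    w = np.full(n_models, 1.0 / n_models)      # start at uniform averaging
    for t in range(iters):
        grad = 2.0 * P.T @ (P @ w - y)         # gradient of squared error
        vertex = np.zeros(n_models)
        vertex[np.argmin(grad)] = 1.0          # best single model this step
        gamma = 2.0 / (t + 2.0)
        w = (1.0 - gamma) * w + gamma * vertex # convex update stays on simplex
    return w
```

Because every update is a convex combination of simplex points, the weights remain nonnegative and sum to one without any projection step, which is the practical appeal of Frank-Wolfe for ensemble weighting.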
49 pages, 1822 KB  
Review
Data-Driven Methods and Artificial Intelligence in Reliability and Maintenance: A Review
by Xuesong Chen, Wenting Li, Tianze Xia, Ruizhi Ouyang and Kaiye Gao
Mathematics 2026, 14(5), 899; https://doi.org/10.3390/math14050899 - 6 Mar 2026
Viewed by 541
Abstract
Reliability and maintenance serve as pivotal factors in safeguarding safety, enhancing efficiency, optimizing costs, and fostering sustainable development. They permeate all facets of industry, daily life, and society, thereby constituting a crucial foundation for achieving long-term, stable development. The rapid evolution of data-driven methods and artificial intelligence (AI) has revolutionized reliability and maintenance practices, driving a shift from reactive to predictive maintenance (PdM) and ultimately intelligent maintenance strategies. Unlike existing reviews that focus on single technologies or tasks, this paper adopts a system-level integration perspective to construct a closed-loop framework connecting data-driven reliability analysis, maintenance optimization, and intelligent decision-making. It further elucidates the integrated logic between prediction and decision-making through formalized mechanisms. This article systematically reviews the research progress and practical applications of data-driven methods and AI in reliability and maintenance. First, it classifies and summarizes data-driven reliability analysis methods based on the existing literature. Second, a reliability-oriented maintenance optimization framework is proposed, comprehensively integrating economic, reliability, resource-efficiency, and multi-objective collaboration considerations, while analyzing the characteristics of diverse maintenance systems. Furthermore, the innovative applications and performance advantages of AI algorithms in complex system maintenance are synthesized, and a comparative analysis of the applicability of different methods across various operational scenarios is conducted from an engineering selection perspective. In addition, this review examines the current status and challenges of applying data-driven and AI technologies across multiple real industrial settings and identifies common obstacles encountered during project implementation. We further clarify the research positioning of this work and provide a comparative discussion with existing review articles. Finally, the article conducts a bibliometric analysis to map the research landscape, providing quantitative support for the development trends in the field. Limitations in this field are also discussed. Full article
32 pages, 2704 KB  
Article
A Deep Learning Framework for Real-Time Pothole Detection from Combined Drone Imagery and Custom Dataset Using Enhanced YOLOv8 and Custom Feature Extraction
by Shiva Shankar Reddy, Midhunchakkaravarthy Janarthanan, Inam Ullah Khan and Kankanala Amrutha
Mathematics 2026, 14(5), 898; https://doi.org/10.3390/math14050898 - 6 Mar 2026
Viewed by 820
Abstract
Road safety depends heavily on the timely identification and repair of potholes; however, detecting potholes is challenging under varying lighting and weather conditions. This work presents an attention-enhanced object detection framework for aerial pothole detection that relies on a pre-trained YOLOv8 backbone and a custom feature-extraction network, the Feature Pyramid Network (FPN). An enhanced detection head makes the model aware of discriminative spatial regions, enabling accurate pothole localization regardless of the road surface and overcoming the major limitations of standard YOLOv8 in aerial road inspection. The underlying architecture incorporates a purpose-built data layer and a preprocessing engine that can accommodate scenarios such as seasonal changes and bad weather. To further enhance learning dynamics, a customized loss function and a new optimizer framework are incorporated to improve convergence and overall detection reliability. Specifically, a custom differential optimizer uses layer-wise adaptive learning rates and momentum-based gradient updates to help suppress false positives and accelerate convergence, while the custom IoU-based loss function, combined with real-time validation, stabilizes training across a range of road conditions. A major feature of the proposed system is its ability to process aerial imagery from unmanned drone platforms. Empirical analysis yields strong results: an average precision of 0.980 at an IoU of 0.5 and an F1-score of 0.97 at a confidence threshold of 0.30. Precision is high (0.97 at the 90-percent confidence level). These metrics show how well the model balances false positives and false negatives, a critical need in safety-critical deployments. The results make the framework a promising, scalable, and reliable candidate for integration into smart transportation systems and autonomous vehicle navigation. Full article
(This article belongs to the Special Issue Advances in Machine Learning and Graph Neural Networks)
16 pages, 5839 KB  
Article
Multivariate Identification via Linear Projection of Eigenvectors
by Dong-Hwan Kim
Mathematics 2026, 14(5), 897; https://doi.org/10.3390/math14050897 - 6 Mar 2026
Viewed by 400
Abstract
A data-driven system identification algorithm that utilizes eigenvectors is presented. The eigenvectors are extracted from a unified solution space comprising both input and output subspaces. To expand the input subspace, a higher-order subspace derived from the input subspaces is augmented with the measured input subspace; this higher-order subspace exhibits additional cross-correlations with both the input and output subspaces, thus producing more informative eigenvectors and linearizing the system. The extracted eigenvectors are then deployed to sequentially project new input snapshots, first onto the input subspace and subsequently onto the output subspace, to predict the output. The algorithm effectively reconstructs the original governing equations of a quasi-stationary dynamic system, supporting the interpretation of the original system as a series of data projections via eigenvectors and implying the possibility of reconstructing a low-rank governing equation from a limited number of eigenvectors, thus yielding a linearized representation of the system from the data. Notably, identifying the system from the well-expanded, high-dimensional nonlinear solution space requires only a limited duration of data snapshots, indicating that the essential spatial features manifested by the original governing equation are determined rapidly. Full article
(This article belongs to the Section E: Applied Mathematics)