Mathematics, Volume 13, Issue 11 (June-1 2025) – 34 articles

27 pages, 6402 KiB  
Article
Stability Analysis of a Rumor-Spreading Model with Two Time Delays and Saturation Effect
by Chunfeng Wei, Chunlong Fu, Xiaofan Yang, Yang Qin and Luxing Yang
Mathematics 2025, 13(11), 1729; https://doi.org/10.3390/math13111729 - 23 May 2025
Abstract
Time delay and nonlinear incidence functions have a significant effect on rumor-spreading. In this article, a rumor-spreading model with two unequal time delays and a saturation effect is proposed. The existence, uniqueness, and non-negativity of the solution to this model are shown. The basic reproduction number is determined. A criterion for the existence of a rumor-endemic equilibrium is derived. It is found that there is an interesting conditional forward bifurcation. As a consequence, a complex bifurcation phenomenon is exhibited. A collection of criteria for the asymptotic stability of the rumor-free equilibrium is outlined. In the absence of a time delay, a criterion for the local asymptotic stability of the rumor-endemic equilibrium is presented. In the presence of small time delays, a criterion for the local asymptotic stability of the rumor-endemic equilibrium is established by applying our recently developed technique. Finally, a rumor-spreading control problem is reduced to an optimal control model, which is tackled in the framework of optimal control theory. This work facilitates the understanding of the influence of time delays and the saturation effect on rumor-spreading. Full article
(This article belongs to the Special Issue The Delay Differential Equations and Their Applications)
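The saturated-incidence mechanism in this abstract can be sketched numerically. The toy system below is an illustrative delayed ignorant-spreader-stifler model with incidence βSI/(1 + αI), not the paper's exact two-delay model; all parameter names and values are assumptions.

```python
# Illustrative delayed rumor model with saturated incidence
# beta*S*I/(1 + alpha*I); NOT the paper's exact system.
# Forward Euler with a constant-history buffer; parameters are assumptions.

def simulate(beta=0.5, alpha=0.3, gamma=0.2, tau1=1.0, tau2=2.0,
             dt=0.01, t_end=50.0):
    n = int(t_end / dt)
    d1, d2 = int(tau1 / dt), int(tau2 / dt)
    S = [0.9] * (n + 1)   # ignorants (constant history before the delays elapse)
    I = [0.1] * (n + 1)   # spreaders
    R = [0.0] * (n + 1)   # stiflers
    for k in range(max(d1, d2), n):
        # delayed saturated incidence and delayed stifling
        inc = beta * S[k - d1] * I[k - d1] / (1 + alpha * I[k - d1])
        rec = gamma * I[k - d2]
        S[k + 1] = S[k] - dt * inc
        I[k + 1] = I[k] + dt * (inc - rec)
        R[k + 1] = R[k] + dt * rec
    return S[n], I[n], R[n]
```

A quick sanity check on the discretization is that the total population S + I + R remains constant over time, while the ignorant class shrinks once spreading begins.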
28 pages, 488 KiB  
Article
Exploring a Diagnostic Test for Missingness at Random
by Dominick Sutton, Anahid Basiri and Ziqi Li
Mathematics 2025, 13(11), 1728; https://doi.org/10.3390/math13111728 - 23 May 2025
Abstract
Missing data remain a challenge for researchers and decision-makers due to their impact on analytical accuracy and uncertainty estimation. Many studies on missing data are based on randomness, but randomness itself is problematic. This makes it difficult to identify missing data mechanisms and affects how effectively the missing data impacts can be minimized. The purpose of this paper is to examine a potentially simple test to diagnose whether the missing data are missing at random. Such a test is developed using an extended taxonomy of missing data mechanisms. A key aspect of the approach is the use of single mean imputation for handling missing data in the test development dataset. Changing this to random imputation from the same underlying distribution, however, has a negative impact on the diagnosis. This is aggravated by the possibility of high inter-variable correlation, confounding, and mixed missing data mechanisms. The verification step uses data from a high-quality real-world dataset and finds some evidence—in one case—that the data may be missing at random, but this is less persuasive in the second case. Confidence in these results, however, is limited by the potential influence of correlation, confounding, and mixed missingness. This paper concludes with a discussion of the test’s merits and finds that sufficient uncertainties remain to render it unreliable, even if the initial results appear promising. Full article
(This article belongs to the Special Issue Statistical Research on Missing Data and Applications)
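The imputation choice discussed above — single mean imputation versus random imputation from the observed values — can be sketched as follows; the helper names are illustrative, not from the paper.

```python
# Sketch of the two imputation strategies compared in the abstract:
# single mean imputation replaces every missing entry with the column
# mean, while random imputation draws replacements from the observed
# values of the same column. Names are illustrative assumptions.
import random
import statistics

def mean_impute(column):
    observed = [v for v in column if v is not None]
    mu = statistics.mean(observed)
    return [mu if v is None else v for v in column]

def random_impute(column, rng):
    observed = [v for v in column if v is not None]
    return [rng.choice(observed) if v is None else v for v in column]

col = [4.0, None, 6.0, None, 5.0]
print(mean_impute(col))  # → [4.0, 5.0, 6.0, 5.0, 5.0]
print(random_impute(col, random.Random(0)))
```

Mean imputation shrinks the column's variance toward zero, which is exactly the property the paper exploits, while random imputation preserves spread at the cost of extra noise.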
27 pages, 425 KiB  
Article
The Robust Malmquist Productivity Index: A Framework for Measuring Productivity Changes over Time Under Uncertainty
by Pejman Peykani, Roya Soltani, Cristina Tanasescu, Seyed Ehsan Shojaie and Alireza Jandaghian
Mathematics 2025, 13(11), 1727; https://doi.org/10.3390/math13111727 - 23 May 2025
Abstract
The purpose of this study is to propose a novel approach for measuring productivity changes in decision-making units (DMUs) over time and evaluating the performance of each DMU under uncertainty in terms of progress, regression, and stagnation. To achieve this, the Malmquist productivity index (MPI) and the data envelopment analysis (DEA) models are extended, and a new productivity index capable of handling uncertain data is introduced through a robust optimization approach. Robust optimization is recognized as one of the most applicable and effective methods in uncertain programming. The implementation and calculation of the proposed index are demonstrated using data from 15 actively traded stocks in the petroleum products industry on the Tehran Stock Exchange over two consecutive years. The results reveal that a significant number of stocks exhibit an unfavorable trend, marked by a decline in productivity. The findings highlight the efficacy and effectiveness of the proposed robust Malmquist productivity index (RMPI) in measuring and identifying productivity trends for each stock under data uncertainty. Full article
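For readers unfamiliar with the deterministic index that the robust version extends, a minimal sketch of the classical Malmquist productivity index follows; the distance-function scores are illustrative placeholders, not actual DEA outputs.

```python
# Classical (deterministic) Malmquist productivity index: the geometric
# mean of a DMU's period-2-to-period-1 score ratios measured against the
# period-1 and period-2 frontiers. MPI > 1 signals progress, MPI < 1
# regression, MPI = 1 stagnation. Scores below are illustrative.
import math

def malmquist(d1_t1, d1_t2, d2_t1, d2_t2):
    """d{f}_t{p}: distance score of period-p data against frontier f."""
    return math.sqrt((d1_t2 / d1_t1) * (d2_t2 / d2_t1))

mpi = malmquist(d1_t1=0.80, d1_t2=0.72, d2_t1=0.90, d2_t2=0.81)
print(round(mpi, 3))  # 0.9 -> regression
```

The robust variant of the paper replaces these point scores with values obtained from robust-optimization DEA models, but the index composition stays the same.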
42 pages, 3254 KiB  
Article
Resolution-Aware Deep Learning with Feature Space Optimization for Reliable Identity Verification in Electronic Know Your Customer Processes
by Mahasak Ketcham, Pongsarun Boonyopakorn and Thittaporn Ganokratanaa
Mathematics 2025, 13(11), 1726; https://doi.org/10.3390/math13111726 - 23 May 2025
Abstract
In modern digital transactions involving government agencies, financial institutions, and commercial enterprises, reliable identity verification is essential to ensure security and trust. Traditional methods, such as submitting photocopies of ID cards, are increasingly susceptible to identity theft and fraud. To address these challenges, this study proposes a novel and robust identity verification framework that integrates super-resolution preprocessing, a convolutional neural network (CNN), and Monte Carlo dropout-based Bayesian uncertainty estimation for enhanced facial recognition in electronic know your customer (e-KYC) processes. The key contribution of this research lies in its ability to handle low-resolution and degraded facial images, simulating real-world conditions where image quality is inconsistent, while providing confidence-aware predictions to support transparent and risk-aware decision making. The proposed model is trained on facial images resized to 24 × 24 pixels, with a super-resolution module enhancing feature clarity prior to classification. By incorporating Monte Carlo dropout, the system estimates predictive uncertainty, addressing critical limitations of conventional black-box deep learning models. Experimental evaluations confirmed the effectiveness of the framework, achieving a classification accuracy of 99.7%, precision of 99.2%, recall of 99.3%, and an AUC score of 99.5% under standard testing conditions. The model also demonstrated strong robustness against noise and image blur, maintaining reliable performance even under challenging input conditions. In addition, the proposed system is designed to comply with international digital identity standards, including the Identity Assurance Level (IAL) and Authenticator Assurance Level (AAL), ensuring practical applicability in regulated environments. Overall, this research contributes a scalable, secure, and interpretable solution that advances the application of deep learning and uncertainty modeling in real-world e-KYC systems. Full article
(This article belongs to the Special Issue Advanced Studies in Mathematical Optimization and Machine Learning)
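The Monte Carlo dropout idea used here for uncertainty estimation can be sketched in a few lines: keep dropout active at inference and read predictive uncertainty off the spread of repeated stochastic forward passes. The tiny one-layer "network" below is a stand-in for the paper's CNN, and all names and values are illustrative assumptions.

```python
# Monte Carlo dropout sketch: T stochastic passes with dropout left on,
# then mean = prediction, standard deviation = uncertainty estimate.
import random
import statistics

def forward(x, weights, drop_p, rng):
    # Inverted dropout: each weight is zeroed with probability drop_p,
    # survivors are rescaled by 1/(1 - drop_p) to keep the expectation.
    keep = 1.0 - drop_p
    total = 0.0
    for xi, wi in zip(x, weights):
        if rng.random() < keep:
            total += xi * (wi / keep)
    return total

def mc_dropout(x, weights, drop_p=0.2, passes=200, rng=None):
    rng = rng or random.Random(42)
    outs = [forward(x, weights, drop_p, rng) for _ in range(passes)]
    return statistics.mean(outs), statistics.stdev(outs)

mean, std = mc_dropout([1.0, 2.0, 3.0], [0.5, -0.3, 0.8])
```

In a risk-aware e-KYC pipeline, a large `std` relative to `mean` would flag the prediction for manual review rather than automatic acceptance.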
35 pages, 1605 KiB  
Article
The Development of Fractional Black–Scholes Model Solution Using the Daftardar-Gejji Laplace Method for Determining Rainfall Index-Based Agricultural Insurance Premiums
by Astrid Sulistya Azahra, Muhamad Deni Johansyah and Sukono
Mathematics 2025, 13(11), 1725; https://doi.org/10.3390/math13111725 - 23 May 2025
Abstract
The Black–Scholes model is a fundamental concept in modern financial theory. It is designed to estimate the theoretical value of derivatives, particularly option prices, by considering time and risk factors. In the context of agricultural insurance, this model can be applied to premium determination due to the similar characteristics shared with the option pricing mechanism. The primary challenge in its implementation is determining a fair premium by considering the potential financial losses due to crop failure. Therefore, this study aimed to analyze the determination of rainfall index-based agricultural insurance premiums using the standard and fractional Black–Scholes models. The results showed that a solution to the fractional model could be obtained through the Daftardar-Gejji Laplace method. The premium was subsequently calculated using the Black–Scholes model applied throughout the growing season and paid at the beginning of the season. Meanwhile, the fractional Black–Scholes model incorporated the fractional order parameter to provide greater flexibility in the premium payment mechanism. The novelty of this study was in the application of the fractional Black–Scholes model for agricultural insurance premium determination, with due consideration for the long-term effects to ensure more dynamism and flexibility. The results could serve as a reference for governments, agricultural departments, and insurance companies in designing agricultural insurance programs to mitigate risks caused by rainfall fluctuations. Full article
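For context, the classical (integer-order) Black–Scholes call price that the fractional model generalizes can be computed directly; in the insurance analogy sketched above, the strike loosely plays the role of the rainfall-index trigger. All numbers below are illustrative.

```python
# Classical Black-Scholes European call price:
#   C = S*N(d1) - K*exp(-r*T)*N(d2),
# with N the standard normal CDF, computed here via math.erf.
import math

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

price = bs_call(S=100, K=100, r=0.05, sigma=0.2, T=1.0)
print(round(price, 2))  # approx 10.45
```

The fractional model of the paper replaces the integer-order time derivative in the underlying PDE with a fractional one, which this closed-form price does not capture; it is only the baseline being generalized.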
31 pages, 1377 KiB  
Review
Review on Sound-Based Industrial Predictive Maintenance: From Feature Engineering to Deep Learning
by Tongzhou Ye, Tianhao Peng and Lidong Yang
Mathematics 2025, 13(11), 1724; https://doi.org/10.3390/math13111724 - 23 May 2025
Abstract
Sound-based predictive maintenance (PdM) is a critical enabler for ensuring operational continuity and productivity in industrial systems. Due to the diversity of equipment types and the complexity of working environments, numerous feature engineering methods and anomaly diagnosis models have been developed based on sound signals. However, existing reviews focus on the structures and results of detection models while neglecting the impact of differences in feature engineering on subsequent detection models. Therefore, this paper provides a comprehensive review of state-of-the-art feature extraction methods based on sound signals. The judgment standards in sound detection models are analyzed, from empirical thresholding to machine learning and deep learning. The advantages and limitations of sound detection algorithms across varied equipment are elucidated through detailed examples and descriptions, providing a comprehensive understanding of performance and applicability. This review also provides a guide to the selection of feature extraction and detection methods for the predictive maintenance of equipment based on sound signals. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)
30 pages, 623 KiB  
Review
Mathematics and Machine Learning for Visual Computing in Medicine: Acquisition, Processing, Analysis, Visualization, and Interpretation of Visual Information
by Bin Li, Shixiang Feng, Jinhong Zhang, Guangbin Chen, Shiyang Huang, Sibei Li and Yuxin Zhang
Mathematics 2025, 13(11), 1723; https://doi.org/10.3390/math13111723 - 23 May 2025
Abstract
Visual computing in medicine involves handling the generation, acquisition, processing, analysis, exploration, visualization, and interpretation of medical visual information. Machine learning has become a prominent tool for data analytics and problem-solving, which is the process of enabling computers to automatically learn from data and obtain certain knowledge, patterns, or input–output relationships. Tasks in medical visual computing can often be transformed into machine learning tasks. In recent years, there has been a surge in research focusing on machine-learning-based visual computing. However, few reviews comprehensively survey the systematic implementation of machine-learning-based visual computing in medicine, and in the relevant reviews, little attention has been paid to using machine learning methods to transform medical visual computing tasks into data-driven learning problems with high-level feature representations, or to their effectiveness in key medical applications such as image-guided surgery. This review addresses this gap and surveys fully and systematically the recent advancements, challenges, and future directions of machine-learning-based medical visual computing with high-level features. This paper is organized as follows. The fundamentals and paradigm of visual computing in medicine are first concisely introduced. Then, four aspects of visual computing in medicine are examined: (1) acquisition of visual information; (2) processing and analysis of visual information; (3) exploration and interpretation of visual information; and (4) image-guided surgery. In particular, this paper explores machine-learning-based methods and factors for visual computing tasks. Finally, future prospects are discussed. In conclusion, this literature review on machine learning for visual computing in medicine showcases the diverse applications and advancements in this field. Full article
10 pages, 227 KiB  
Article
Existence of Global Mild Solutions for Nonautonomous Abstract Evolution Equations
by Mian Zhou, Yong Liang and Yong Zhou
Mathematics 2025, 13(11), 1722; https://doi.org/10.3390/math13111722 - 23 May 2025
Abstract
In this paper, we investigate the Cauchy problem for nonautonomous abstract evolution equations of the form y′(t) = A(t)y(t) + f(t, y(t)), t ≥ 0, y(0) = y₀. We obtain new existence theorems for global mild solutions under both compact and noncompact evolution families U(t,s). Our key method relies on the generalized Ascoli–Arzelà theorem we previously obtained. Finally, an example is provided to illustrate the applicability of our results. Full article
(This article belongs to the Section C1: Difference and Differential Equations)
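For context, the mild solution concept referenced in this abstract is standardly defined by the variation-of-constants formula (the paper's precise hypotheses on the evolution family are in the article itself):

```latex
y(t) = U(t,0)\, y_0 + \int_0^t U(t,s)\, f\bigl(s, y(s)\bigr)\, ds, \qquad t \ge 0,
```

where U(t,s) is the evolution family generated by the operators A(t).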
14 pages, 1034 KiB  
Article
Cholesterol Particle Interaction with the Surface of Blood Vessels
by Raimondas Jasevičius
Mathematics 2025, 13(11), 1721; https://doi.org/10.3390/math13111721 - 23 May 2025
Abstract
A numerical experiment on the dynamics of a cholesterol particle is presented. This work is devoted to the analysis of how cholesterol accumulates on the surface of a blood vessel. To understand the effect of cholesterol accumulation in the vessel, it is important to understand the behavior of individual cholesterol particles. As a first step, the study examines the ability of a cholesterol particle to interact with the surface of the vessel, taking into account the influence of various forces. For this, the experiment is divided into three parts, each of which is important for understanding how different forces act on the interaction of cholesterol with the vessel surface. To present the results of the motion of cholesterol particles, figures of force and displacement over time are provided. The results show the ability of cholesterol particles to interact with and adhere to the blood vessel surface. Full article
19 pages, 1649 KiB  
Article
SFSIN: A Lightweight Model for Remote Sensing Image Super-Resolution with Strip-like Feature Superpixel Interaction Network
by Yanxia Lyu, Yuhang Liu, Qianqian Zhao, Ziwen Hao and Xin Song
Mathematics 2025, 13(11), 1720; https://doi.org/10.3390/math13111720 - 23 May 2025
Abstract
Remote sensing image (RSI) super-resolution plays a critical role in improving image details and reducing costs associated with physical imaging devices. However, existing super-resolution methods are not applicable to resource-constrained edge devices because they are hampered by a large number of parameters and significant computational complexity. To address these challenges, we propose a novel lightweight super-resolution model for remote sensing images, a strip-like feature superpixel interaction network (SFSIN), which combines the flexibility of convolutional neural networks (CNNs) with the long-range learning capabilities of a Transformer. Specifically, the Transformer captures global context information through long-range dependencies, while the CNN performs shape-adaptive convolutions. By stacking strip-like feature superpixel interaction (SFSI) modules, we aggregate strip-like features to enable deep feature extraction from local and global perspectives. Unlike traditional methods that rely solely on direct upsampling for reconstruction, our model uses the convolutional block attention module with upsampling convolution (CBAMUpConv), which integrates deep features from spatial and channel dimensions to improve reconstruction performance. Extensive experiments on the AID dataset show that SFSIN outperforms ten state-of-the-art lightweight models. SFSIN achieves a PSNR of 33.10 dB and an SSIM of 0.8715 on the ×2 scale, outperforming competing models both quantitatively and qualitatively, while also excelling at higher scales. Full article
18 pages, 2042 KiB  
Article
Error Estimate for a Finite-Difference Crank–Nicolson–ADI Scheme for a Class of Nonlinear Parabolic Isotropic Systems
by Chrysovalantis A. Sfyrakis and Markos Tsoukalas
Mathematics 2025, 13(11), 1719; https://doi.org/10.3390/math13111719 - 23 May 2025
Abstract
To understand phase-transition processes like solidification, phase-field models are frequently employed. In this paper, we study a finite-difference Crank–Nicolson–ADI scheme for a class of nonlinear parabolic isotropic systems. We establish an error estimate for this scheme, demonstrating its effectiveness in solving phase-field models. Our analysis provides rigorous mathematical justification for the numerical method’s reliability in simulating phase transitions. Full article
(This article belongs to the Topic Numerical Methods for Partial Differential Equations)
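The Crank–Nicolson idea behind the scheme studied above can be illustrated in one space dimension: average the explicit and implicit discrete Laplacians, then solve the resulting tridiagonal system. This sketch omits the nonlinear terms and the ADI splitting analyzed in the paper; all names are illustrative.

```python
# One Crank-Nicolson time step for u_t = u_xx with Dirichlet boundaries:
#   (I - r*L) u^{n+1} = (I + r*L) u^n,   r = dt / (2*dx^2),
# where L is the tridiagonal second-difference operator.
def thomas(a, b, c, d):
    """Solve a tridiagonal system; a: sub-, b: main, c: super-diagonal."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def cn_step(u, r):
    """Advance u one time step; r = dt/(2*dx^2), u = 0 at both ends."""
    n = len(u)
    rhs = [r * u[i - 1] + (1 - 2 * r) * u[i] + r * u[i + 1]
           for i in range(1, n - 1)]
    inner = thomas([-r] * (n - 2), [1 + 2 * r] * (n - 2),
                   [-r] * (n - 2), rhs)
    return [0.0] + inner + [0.0]
```

Because the scheme averages the two time levels, it is second-order accurate in time and unconditionally stable for this linear problem, which is the property the paper's error estimate builds on.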
25 pages, 2083 KiB  
Article
Unsupervised Attribute Reduction Algorithms for Multiset-Valued Data Based on Uncertainty Measurement
by Xiaoyan Guo, Yichun Peng, Yu Li and Hai Lin
Mathematics 2025, 13(11), 1718; https://doi.org/10.3390/math13111718 - 23 May 2025
Abstract
Missing data introduce uncertainty in data mining, but existing set-valued approaches ignore frequency information. We propose unsupervised attribute reduction algorithms for multiset-valued data to address this gap. First, we define a multiset-valued information system (MSVIS) and establish a θ-tolerance relation to form information granules. Then, θ-information entropy and θ-information amount are introduced as uncertainty measures (UMs). Finally, these two UMs are used to design two unsupervised attribute reduction algorithms in an MSVIS. The experimental results demonstrate the superiority of the proposed algorithms, achieving average reductions of 50% in attribute subsets while improving clustering accuracy and outlier detection performance. Parameter analysis further validates the robustness of the framework under varying missing rates. Full article
16 pages, 284 KiB  
Article
Conditional Quantization for Some Discrete Distributions
by Edgar A. Gonzalez, Mrinal Kanti Roychowdhury, David A. Salinas and Vishal Veeramachaneni
Mathematics 2025, 13(11), 1717; https://doi.org/10.3390/math13111717 - 23 May 2025
Abstract
Quantization for a Borel probability measure refers to the idea of estimating a given probability by a discrete probability with support containing a finite number of elements. If in the quantization some of the elements in the finite support are preselected, then the quantization is called a conditional quantization. In this paper, we have determined the conditional quantization, first for two different finite discrete distributions with the same conditional set, and then for a finite discrete distribution with two different conditional sets. Next, we have determined the conditional and unconditional quantization for an infinite discrete distribution with support {1/2^n : n ∈ ℕ}. We have also investigated the conditional quantization for an infinite discrete distribution with support {1/n : n ∈ ℕ}. At the end of the paper, we have given a conjecture and discussed some open problems based on it. Full article
(This article belongs to the Special Issue Statistical Analysis and Data Science for Complex Data)
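The notion of conditional quantization described above can be sketched by brute force: with some quantizer points preselected (the "conditional set"), choose the remaining points to minimize the expected squared distance to the nearest quantizer point. For simplicity, this sketch truncates and renormalizes the distribution on {1/2^n} and restricts candidate points to the support, which is generally suboptimal; it only illustrates the definition.

```python
# Brute-force conditional quantization for a (truncated) discrete
# distribution: fixed points are preselected, "extra" points are chosen
# from the support to minimize the expected squared distortion.
import itertools

def distortion(points, support, probs):
    return sum(p * min((x - q) ** 2 for q in points)
               for x, p in zip(support, probs))

def conditional_quantizer(support, probs, fixed, extra):
    best = None
    for combo in itertools.combinations(support, extra):
        pts = list(fixed) + list(combo)
        err = distortion(pts, support, probs)
        if best is None or err < best[1]:
            best = (pts, err)
    return best

# Truncated support {1/2, 1/4, ..., 1/2^6} with renormalized weights.
support = [2.0 ** -n for n in range(1, 7)]
probs = [2.0 ** -n for n in range(1, 7)]
total = sum(probs)
probs = [p / total for p in probs]
pts, err = conditional_quantizer(support, probs, fixed=[0.5], extra=1)
```

Dropping the `fixed` argument recovers (restricted) unconditional quantization, whose distortion is never worse than the conditional one over the same candidates.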
22 pages, 5924 KiB  
Article
Topology Optimization of Automotive Vibration Test Jig: Natural Frequency Maximization and Weight Reduction
by Jun Won Choi, Min Gyu Kim, Jung Jin Kim and Jisun Kim
Mathematics 2025, 13(11), 1716; https://doi.org/10.3390/math13111716 - 23 May 2025
Abstract
Vibration test jigs are essential components for evaluating the dynamic performance and durability of automotive parts, such as lamps. This study aimed to derive optimal jig configurations that simultaneously maximize natural frequency and minimize structural weight through topology optimization. A fixed-grid finite-element model was constructed by incorporating realistic lamp mass and boundary conditions at the mounting interfaces to simulate actual testing scenarios. Four optimization formulations were investigated: (1) compliance minimization, (2) compliance minimization with natural-frequency constraints, (3) natural-frequency maximization, and (4) natural-frequency maximization with compliance constraints. Both full-domain and reduced-domain designs were analyzed to assess the influence of domain scope. The results indicate that formulations that use only natural-frequency objectives often result in shape divergence and convergence instability. In contrast, strategies incorporating frequency as a constraint—particularly compliance minimization with a natural-frequency constraint—exhibited superior performance by achieving a balance between stiffness and weight. Furthermore, the reduced-domain configuration enhanced the natural frequency owing to the greater design freedom, although this resulted in a trade-off of increased weight. These findings underscore the importance of selecting appropriate formulation strategies and domain settings to secure reliable vibration performance and support the necessity of multi-objective optimization frameworks for the practical design of vibration-sensitive structures. Full article
(This article belongs to the Special Issue Advanced Modeling and Design of Vibration and Wave Systems)
18 pages, 1714 KiB  
Article
Mathematical Modelling and Performance Assessment of Neural Network-Based Adaptive Law of Model Reference Adaptive System Estimator at Zero and Very Low Speeds in the Regenerating Mode
by Mohamed S. Zaky, Kotb B. Tawfiq and Mohamed K. Metwaly
Mathematics 2025, 13(11), 1715; https://doi.org/10.3390/math13111715 - 23 May 2025
Abstract
Precise speed estimation of sensorless induction motor (SIM) drives remains a significant challenge, particularly at zero and very low speeds. This paper proposes a mathematically modeled and enhanced stator current-based Model Reference Adaptive System (MRAS) estimator integrated with correction terms using rotor flux dynamics to continually correct the estimated speed. The MRAS observer uses the stator current in the adjustable IM model instead of the rotor flux or the back-EMF to eliminate the effect of pure integration of the rotor flux, the parameters' deviation, and measurement errors of stator voltages and currents on speed observation. It depends on the observed stator current, the current estimation error, and rotor flux estimation correction terms. A neural network (NN) for the adaptive law of the MRAS observer is proposed to enhance the accuracy of the suggested approach. Simulation results validate the developed method. A laboratory prototype based on DSP-DS1103 was also built, and the experimental results are presented. The SIM drive is examined at zero and very low speeds in motoring and regenerating modes. It exhibits good dynamic performance and low speed-estimation error compared to the conventional MRAS. Full article
(This article belongs to the Special Issue Artificial Neural Networks and Dynamic Control Systems)
24 pages, 3501 KiB  
Article
Stiffness Regulation of Cable-Driven Redundant Manipulators Through Combined Optimization of Configuration and Cable Tension
by Zhuo Liang, Pengkun Quan and Shichun Di
Mathematics 2025, 13(11), 1714; https://doi.org/10.3390/math13111714 - 23 May 2025
Abstract
Cable-driven redundant manipulators (CDRMs) are widely applied in various fields due to their notable advantages. Stiffness regulation capability is essential for CDRMs, as it enhances their adaptability and stability in diverse task scenarios. However, their stiffness regulation still faces two main challenges. First, stiffness regulation methods that involve physical structural modifications increase system complexity and reduce flexibility. Second, methods that rely solely on cable tension are constrained by the inherent stiffness of the cables, limiting the achievable regulation range. To address these challenges, this paper proposes a novel stiffness regulation method for CDRMs through the combined optimization of configuration and cable tension. A stiffness model is established to analyze the influence of the configuration and cable tension on stiffness. Due to the redundancy in degrees of freedom (DOFs) and actuation cables, there exist infinitely many configuration solutions for a specific pose and infinitely many cable tension solutions for a specific configuration. This paper proposes a dual-level stiffness regulation strategy that combines configuration and cable tension optimization. Motion-level and tension-level factors are introduced as control variables into the respective optimization models, enabling effective manipulation of configuration and tension solutions for stiffness regulation. An improved differential evolution algorithm is employed to generate adjustable configuration solutions based on motion-level factors, while a modified gradient projection method is adopted to derive adjustable cable tension solutions based on tension-level factors. Finally, a planar CDRM is used to validate the feasibility and effectiveness of the proposed method. Simulation results demonstrate that stiffness can be flexibly regulated by modifying motion-level and tension-level factors. The combined optimization method achieves a maximum RSR of 17.78 and an average RSR of 12.60 compared to configuration optimization, and a maximum RSR of 1.37 and an average RSR of 1.10 compared to tension optimization, demonstrating a broader stiffness regulation range. Full article
15 pages, 2646 KiB  
Article
Control of Nonlinear Systems with Input Delays and Disturbances Using State and Disturbance Sub-Predictors
by Ba Huy Nguyen, The Dong Dang, Igor B. Furtat, Anh Quan Dao and Pavel A. Gushchin
Mathematics 2025, 13(11), 1713; https://doi.org/10.3390/math13111713 - 23 May 2025
Abstract
This paper proposes a new method for controlling nonlinear systems with input signal delays in the presence of external disturbances. The control law consists of two components: the first component, based on a sub-predictor for the controlled variable, stabilizes the unstable system, while the second component, which is based on a disturbance sub-predictor, compensates for external disturbances. The tracking error (stabilization error), which depends on the magnitude of the disturbances, can be reduced by increasing the order of the disturbance sub-predictor. Sufficient conditions for the stability of the closed-loop system with a given maximum delay are derived using the Lyapunov–Krasovskii method and formulated as linear matrix inequalities (LMIs). Numerical simulations are presented to demonstrate the effectiveness of the proposed method. Full article
(This article belongs to the Section C2: Dynamical Systems)
19 pages, 2429 KiB  
Article
Spin-Wheel: A Fast and Secure Chaotic Encryption System with Data Integrity Detection
by Luis D. Espino-Mandujano and Rogelio Hasimoto-Beltran
Mathematics 2025, 13(11), 1712; https://doi.org/10.3390/math13111712 - 23 May 2025
Abstract
The increasing demand for real-time multimedia communications has driven the need for highly secure and computationally efficient encryption schemes. In this work, we present a novel chaos-based encryption system that provides remarkable levels of security and performance. It leverages the benefits of applying fast-to-evaluate chaotic maps, along with a 2-Dimensional Look-Up Table approach (2D-LUT), and simple but powerful periodic perturbations. The foundation of our encryption system is a Pseudo-Random Number Generator (PRNG) that consists of a fully connected random graph with M vertices representing chaotic maps that populate the 2D-LUT. In every iteration of the system, one of the M chaotic maps in the graph and the corresponding trajectories are randomly selected from the 2D-LUT using an emulated spin-wheel picker game. This approach exacerbates the complexity in the event of an attack, since the trajectories may come from the same or totally different maps in a non-sequential time order. We additionally perform two levels of perturbation, at the map and trajectory level. The first perturbation (map level) produces new trajectories that are retrieved from the 2D-LUT in non-sequential order and with different initial conditions. The second perturbation applies a p-point crossover scheme to combine a pair of trajectories retrieved from the 2D-LUT and used in the ciphering process, providing higher levels of security. As a final process in our methodology, we implemented a simple packet-based data integrity scheme that detects with high probability if the received information has been modified (for example, by a man-in-the-middle attack). Our results show that our proposed encryption scheme is robust to common cryptanalysis attacks, providing high levels of security and confidentiality while supporting high processing speeds on the order of gigabits per second. To the best of our knowledge, our chaotic cipher implementation is the fastest reported in the literature. Full article
(This article belongs to the Special Issue Chaos-Based Secure Communication and Cryptography, 2nd Edition)
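The spin-wheel selection idea can be illustrated with a toy sketch. This is a hypothetical simplification, not the paper's implementation: two classic chaotic maps stand in for the fully connected graph of M maps, a seeded `random.Random` stands in for the spin-wheel picker, and the 2D-LUT and both perturbation levels are omitted.

```python
import random

def logistic(x, r=3.99):
    # Logistic map; chaotic for r near 4, state stays in [0, 1].
    return r * x * (1.0 - x)

def tent(x, mu=1.999):
    # Tent map, another fast-to-evaluate chaotic map on [0, 1].
    return mu * x if x < 0.5 else mu * (1.0 - x)

def spin_wheel_prng(maps, seeds, n_bytes, seed=42):
    # Emit bytes by randomly "spinning" among chaotic maps: each step
    # picks one map (the spin wheel) and advances that map's own
    # trajectory, so consecutive outputs may come from different maps.
    rng = random.Random(seed)
    states = list(seeds)
    out = bytearray()
    for _ in range(n_bytes):
        i = rng.randrange(len(maps))        # spin the wheel
        states[i] = maps[i](states[i])      # advance the chosen trajectory
        out.append(int(states[i] * 256) & 0xFF)
    return bytes(out)

stream = spin_wheel_prng([logistic, tent], [0.123, 0.456], 16)
```

Even in this toy form, an attacker observing the byte stream cannot assume consecutive outputs come from one trajectory, which is the property the spin-wheel picker is meant to provide.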
22 pages, 3388 KiB  
Article
Aggregating Image Segmentation Predictions with Probabilistic Risk Control Guarantees
by Joaquin Alvarez and Edgar Roman-Rangel
Mathematics 2025, 13(11), 1711; https://doi.org/10.3390/math13111711 - 23 May 2025
Abstract
In this work, we introduce a framework to combine arbitrary image segmentation algorithms from different agents under data privacy constraints to produce an aggregated prediction set satisfying finite-sample risk control guarantees. We leverage distribution-free uncertainty quantification techniques in order to aggregate deep neural networks for image segmentation tasks. Our method can be applied in settings to merge the predictions of multiple agents with arbitrarily dependent prediction sets. Moreover, we perform experiments in medical imaging tasks to illustrate our proposed framework. Our results show that the framework reduced the empirical false positive rate by 50% without compromising the false negative rate, with respect to the false positive rate of any of the constituent models in the aggregated prediction algorithm. Full article
(This article belongs to the Special Issue Artificial Intelligence: Deep Learning and Computer Vision)
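Finite-sample guarantees of this kind are typically obtained with a calibration step of roughly the following shape. This is a hedged sketch in the style of conformal risk control, assuming a risk bounded by 1 and non-increasing in the threshold; the names `risks`, `lambdas`, and `calibrate_threshold` are illustrative, not the paper's notation.

```python
import numpy as np

def calibrate_threshold(risks, lambdas, alpha):
    # risks[i, j]: empirical risk of candidate threshold lambdas[j] on
    # calibration sample i, assumed bounded by 1 and non-increasing in
    # lambda. Pick the smallest lambda whose conformal upper bound
    # (n * mean_risk + 1) / (n + 1) is at most the target level alpha.
    n = risks.shape[0]
    bound = (n * risks.mean(axis=0) + 1) / (n + 1)
    ok = np.nonzero(bound <= alpha)[0]
    return lambdas[ok[0]] if ok.size else None

lambdas = np.array([0.1, 0.2, 0.3])
risks = np.tile([0.5, 0.2, 0.05], (9, 1))   # 9 calibration samples
lam_hat = calibrate_threshold(risks, lambdas, alpha=0.15)
```

The inflated bound, rather than the raw empirical mean, is what turns a finite calibration set into a distribution-free risk guarantee on new data.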
14 pages, 266 KiB  
Article
Performance of Apriori Algorithm for Detecting Drug–Drug Interactions from Spontaneous Reporting Systems
by Yajie He, Jianping Sun and Xianming Tan
Mathematics 2025, 13(11), 1710; https://doi.org/10.3390/math13111710 - 23 May 2025
Abstract
Drug–drug interactions (DDIs) can pose significant risks in clinical practice and pharmacovigilance. Although traditional association rule mining techniques, such as the Apriori algorithm, have been applied to drug safety signal detection, their performance in DDI detection has not been systematically evaluated, especially in the Spontaneous Reporting System (SRS), which contains a large number of drugs and adverse events (AEs) with a complex correlation structure and unobserved latent factors. This study fills that gap through comprehensive simulation studies designed to mimic key features of SRS data. We show that latent confounding can substantially distort detection accuracy: for example, when using the reporting ratio (RR) as a secondary indicator, the area under the curve (AUC) for detecting main effects dropped by approximately 30% and for DDIs by about 15%, compared to settings without confounding. A real-world application using 2024 VAERS data further illustrates the consequences of unmeasured bias, including a potentially spurious association between COVID-19 vaccination and infection. These findings highlight the limitations of existing methods and emphasize the need for future tools that account for latent factors to improve the reliability of safety signal detection in pharmacovigilance analyses. Full article
(This article belongs to the Section D1: Probability and Statistics)
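The core Apriori pass over spontaneous reports can be sketched as follows. This is a toy version restricted to drug pairs; real SRS mining adds candidate pruning, higher-order itemsets, and disproportionality measures such as the reporting ratio.

```python
from itertools import combinations
from collections import Counter

def frequent_pairs(reports, min_support):
    # One Apriori-style pass: count drug pairs co-reported in
    # spontaneous reports and keep those meeting the support threshold.
    counts = Counter()
    for drugs in reports:
        for pair in combinations(sorted(drugs), 2):
            counts[pair] += 1
    return {p: c for p, c in counts.items() if c >= min_support}

# Four toy reports, each a set of co-reported drugs.
reports = [{"A", "B"}, {"A", "B", "C"}, {"B", "C"}, {"A", "B"}]
pairs = frequent_pairs(reports, min_support=3)   # {('A', 'B'): 3}
```

The simulation results summarized above concern exactly this kind of count-based signal: when a latent factor inflates co-reporting, pairs can cross the support threshold without any true interaction.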
15 pages, 405 KiB  
Article
Theoretical Properties of Closed Frequent Itemsets in Frequent Pattern Mining
by Huina Zhang, Hui Li, Yumei Li, Guangqiang Teng and Xianbing Cao
Mathematics 2025, 13(11), 1709; https://doi.org/10.3390/math13111709 - 23 May 2025
Abstract
Closed frequent itemsets (CFIs) play a crucial role in frequent pattern mining by providing a compact and complete representation of all frequent itemsets (FIs). This study systematically explores the theoretical properties of CFIs by revisiting closure operators and their fundamental definitions. A series of formal properties and rigorous proofs are presented to improve the theoretical understanding of CFIs. Furthermore, we propose confidence interval-based closed frequent itemsets (CICFIs) by integrating frequent pattern mining with probability theory. To evaluate the stability, three classical confidence interval (CI) estimation methods of relative support (rsup) based on the Wald CI, the Wilson CI, and the Clopper–Pearson CI are introduced. Extensive experiments on both an illustrative example and two real datasets are conducted to validate the theoretical properties. The results demonstrate that CICFIs effectively enhance the robustness and interpretability of frequent pattern mining under uncertainty. These contributions not only reinforce the solid theoretical foundation of CFIs but also provide practical insights for the development of more efficient algorithms in frequent pattern mining. Full article
(This article belongs to the Special Issue Advances in Statistical AI and Causal Inference)
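Of the three interval constructions mentioned, the Wilson CI for a relative support estimated from n transactions is a standard closed-form computation; a minimal sketch (variable names are illustrative) is:

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    # 95% Wilson score interval for a proportion, here the relative
    # support (rsup) of an itemset observed in `successes` of n
    # transactions. Wald and Clopper-Pearson intervals are the
    # alternatives the abstract compares against.
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

lo, hi = wilson_ci(30, 100)   # itemset seen in 30 of 100 transactions
```

Unlike the Wald interval, the Wilson interval stays inside [0, 1] and behaves sensibly for supports near 0 or 1, which is one reason it is often preferred for stability comparisons.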
10 pages, 451 KiB  
Article
PF2N: Periodicity–Frequency Fusion Network for Multi-Instrument Music Transcription
by Taehyeon Kim, Man-Je Kim and Chang Wook Ahn
Mathematics 2025, 13(11), 1708; https://doi.org/10.3390/math13111708 - 23 May 2025
Abstract
Automatic music transcription in multi-instrument settings remains a highly challenging task due to overlapping harmonics and diverse timbres. To address this, we propose the Periodicity–Frequency Fusion Network (PF2N), a lightweight and modular component that enhances transcription performance by integrating both spectral and periodicity-domain representations. Inspired by traditional combined frequency and periodicity (CFP) methods, the PF2N reformulates CFP as a neural module that jointly learns harmonically correlated features across the frequency and cepstral domains. Unlike handcrafted alignments in classical approaches, the PF2N performs data-driven fusion using a learnable joint feature extractor. Extensive experiments on three benchmark datasets (Slakh2100, MusicNet, and MAESTRO) demonstrate that the PF2N consistently improves transcription accuracy when incorporated into state-of-the-art models. The results confirm the effectiveness and adaptability of the PF2N, highlighting its potential as a general-purpose enhancement for multi-instrument AMT systems. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)
42 pages, 551 KiB  
Article
AI Reasoning in Deep Learning Era: From Symbolic AI to Neural–Symbolic AI
by Baoyu Liang, Yuchen Wang and Chao Tong
Mathematics 2025, 13(11), 1707; https://doi.org/10.3390/math13111707 - 23 May 2025
Abstract
The pursuit of Artificial General Intelligence (AGI) demands AI systems that not only perceive but also reason in a human-like manner. While symbolic systems pioneered early breakthroughs in logic-based reasoning, such as MYCIN and DENDRAL, they suffered from brittleness and poor scalability. Conversely, modern deep learning architectures have achieved remarkable success in perception tasks, yet continue to fall short in interpretable and structured reasoning. This dichotomy has motivated growing interest in Neural–Symbolic AI, a paradigm that integrates symbolic logic with neural computation to unify reasoning and learning. This survey provides a comprehensive and technically grounded overview of AI reasoning in the deep learning era, with a particular focus on Neural–Symbolic AI. Beyond a historical narrative, we introduce a formal definition of AI reasoning and propose a novel three-dimensional taxonomy that organizes reasoning paradigms by representation form, task structure, and application context. We then systematically review recent advances—including Differentiable Logic Programming, abductive learning, program induction, logic-aware Transformers, and LLM-based symbolic planning—highlighting their technical mechanisms, capabilities, and limitations. In contrast to prior surveys, this work bridges symbolic logic, neural computation, and emergent generative reasoning, offering a unified framework to understand and compare diverse approaches. We conclude by identifying key open challenges such as symbolic–continuous alignment, dynamic rule learning, and unified architectures, and we aim to provide a conceptual foundation for future developments in general-purpose reasoning systems. Full article
16 pages, 691 KiB  
Article
A Hybrid Vector Autoregressive Model for Accurate Macroeconomic Forecasting: An Application to the U.S. Economy
by Faridoon Khan, Hasnain Iftikhar, Imran Khan, Paulo Canas Rodrigues, Abdulmajeed Atiah Alharbi and Jeza Allohibi
Mathematics 2025, 13(11), 1706; https://doi.org/10.3390/math13111706 - 22 May 2025
Abstract
Forecasting macroeconomic variables is essential to macroeconomics, financial economics, and monetary policy analysis. Due to the high dimensionality of the macroeconomic dataset, it is challenging to forecast efficiently and accurately. Thus, this study provides a comprehensive analysis of predicting macroeconomic variables by comparing various vector autoregressive models followed by different estimation techniques. To address this, this paper proposes a novel hybrid model based on a smoothly clipped absolute deviation estimation method and a vector autoregression model that combats the curse of dimensionality and simultaneously produces reliable forecasts. The proposed hybrid model is applied to the U.S. quarterly macroeconomic data from the first quarter of 1959 to the fourth quarter of 2023, yielding multi-step-ahead forecasts (one-, three-, and six-step ahead). The multi-step-ahead out-of-sample forecast results (root mean square error and mean absolute error) for the considered data suggest that the proposed hybrid model achieves substantial gains in accuracy and efficiency. Additionally, it is demonstrated that the proposed models outperform the baseline models. Finally, the authors believe the proposed hybrid model may be applied to data from other countries to assess its efficacy and accuracy. Full article
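The forecasting backbone of such a model is an ordinary vector autoregression. A minimal VAR(1) sketch follows, using plain least squares with no intercept, whereas the paper replaces this step with a SCAD-penalized estimator to cope with high dimensionality; the function name and synthetic data are illustrative.

```python
import numpy as np

def var1_fit_forecast(Y, steps):
    # Fit a VAR(1), y_t = y_{t-1} @ A, by least squares and iterate the
    # fitted map forward for multi-step-ahead forecasts.
    X, Z = Y[1:], Y[:-1]                        # regress y_t on y_{t-1}
    A, *_ = np.linalg.lstsq(Z, X, rcond=None)   # (k x k) coefficients
    preds, last = [], Y[-1]
    for _ in range(steps):
        last = last @ A
        preds.append(last)
    return np.array(preds)

# Noiseless trajectory from a known stable coefficient matrix.
A_true = np.array([[0.5, 0.1], [0.2, 0.4]])
rows = [np.array([1.0, 0.0])]
for _ in range(5):
    rows.append(rows[-1] @ A_true)
Y = np.array(rows)
preds = var1_fit_forecast(Y, steps=2)
```

With p lags and k variables the coefficient matrix has p * k * k entries, which is why unpenalized least squares degrades quickly as k grows and a shrinkage estimator such as SCAD becomes attractive.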
15 pages, 602 KiB  
Article
Mixed-Order Fuzzy Time Series Forecast
by Hao Wu, Haiming Long and Jiancheng Jiang
Mathematics 2025, 13(11), 1705; https://doi.org/10.3390/math13111705 - 22 May 2025
Abstract
Fuzzy time series forecasting has gained significant attention for its accuracy, robustness, and interpretability, making it widely applicable in practical prediction tasks. In classical fuzzy time series models, the lag order plays a crucial role, with variations in order often leading to markedly different forecasting results. To obtain the best performance, we propose a mixed-order fuzzy time series model, which incorporates fuzzy logical relationships (FLRs) of different orders into its rule system. This approach mitigates the uncertainty in fuzzy forecasting caused by empty FLRs and FLR groups while fully exploiting the fitting advantages of different-order FLRs. Theoretical analysis is provided to establish the mathematical foundation of the mixed-order model, and its superiority over fixed-order models is demonstrated. Simulation studies reveal that the proposed model outperforms several classical time series models in specific scenarios. Furthermore, applications to real-world datasets, including a COVID-19 case study and a TAIEX financial dataset, validate the effectiveness and applicability of the proposed methodology. Full article
(This article belongs to the Special Issue Statistics: Theories and Applications)
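The order-1 building block of such models (fuzzify, collect fuzzy logical relationships, defuzzify) can be sketched as follows. This is a Chen-style toy; the paper's contribution is mixing FLRs of several orders, which this sketch omits.

```python
from collections import defaultdict

def fuzzy_forecast(series, bins):
    # First-order fuzzy time series forecast: fuzzify values into
    # interval indices, build fuzzy logical relationships A_i -> A_j,
    # and defuzzify the consequents of the last state as the forecast.
    def fuzzify(x):
        for i, (lo, hi) in enumerate(bins):
            if lo <= x < hi:
                return i
        return len(bins) - 1
    mids = [(lo + hi) / 2 for lo, hi in bins]
    states = [fuzzify(x) for x in series]
    flr = defaultdict(list)
    for a, b in zip(states, states[1:]):
        flr[a].append(b)                 # record A_a -> A_b
    nxt = flr.get(states[-1])
    # Empty FLR group: fall back to the current interval's midpoint.
    return sum(mids[j] for j in nxt) / len(nxt) if nxt else mids[states[-1]]

forecast = fuzzy_forecast([1, 3, 1, 3, 1], bins=[(0, 2), (2, 4)])
```

The fallback branch is exactly the empty-FLR situation the abstract mentions; mixing orders gives the model more chances to find a non-empty relationship before falling back.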
37 pages, 6596 KiB  
Article
Optimizing Route Planning via the Weighted Sum Method and Multi-Criteria Decision-Making
by Guanquan Zhu, Minyi Ye, Xinqi Yu, Junhao Liu, Mingju Wang, Zihang Luo, Haomin Liang and Yubin Zhong
Mathematics 2025, 13(11), 1704; https://doi.org/10.3390/math13111704 - 22 May 2025
Abstract
Choosing the optimal path in planning is a complex task due to the numerous options and constraints; this is known as the tourist trip design problem (TTDP). This study aims to achieve path optimization through the weighted sum method and multi-criteria decision analysis. Firstly, this paper proposes a weighted sum optimization method using a comprehensive evaluation model to address TTDP, a complex multi-objective optimization problem. The goal of the research is to balance experience, cost, and efficiency by using the Analytic Hierarchy Process (AHP) and Entropy Weight Method (EWM) to assign subjective and objective weights to indicators such as ratings, duration, and costs. These weights are optimized using the Lagrange multiplier method and integrated into the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) model. Additionally, a weighted sum optimization method within the Traveling Salesman Problem (TSP) framework is used to maximize ratings while minimizing costs and distances. Secondly, this study compares seven heuristic algorithms—the genetic algorithm (GA), particle swarm optimization (PSO), tabu search (TS), genetic-particle swarm optimization (GA-PSO), the gray wolf optimizer (GWO), and ant colony optimization (ACO)—to solve the TOPSIS model, with GA-PSO performing the best. The study then introduces the Lagrange multiplier method to the algorithms, improving the solution quality of all seven heuristic algorithms, with an average solution quality improvement of 112.5% (from 0.16 to 0.34). The PSO algorithm achieves the best solution quality. Based on this, the study introduces a new variant of PSO, namely PSO with Laplace disturbance (PSO-LD), which incorporates a dynamic adaptive Laplace perturbation term to enhance global search capabilities, improving stability and convergence speed. The experimental results show that PSO-LD outperforms the baseline PSO and other algorithms, achieving higher solution quality and faster convergence speed. The Wilcoxon signed-rank test confirms significant statistical differences among the algorithms. This study provides an effective method for experience-oriented path optimization and offers insights into algorithm selection for complex TTDP problems. Full article
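The TOPSIS step at the heart of the evaluation model follows a standard recipe; a minimal sketch is below, with illustrative weights and a two-route example, whereas the paper derives the weights via AHP, EWM, and the Lagrange multiplier method.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    # Rank alternatives by closeness to the ideal solution. Rows are
    # alternatives, columns are criteria; benefit[j] is True when
    # criterion j is maximized (e.g. rating) and False when minimized
    # (e.g. cost or distance).
    m = np.asarray(matrix, dtype=float)
    v = (m / np.linalg.norm(m, axis=0)) * np.asarray(weights)
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)   # distance to ideal
    d_neg = np.linalg.norm(v - anti, axis=1)    # distance to anti-ideal
    return d_neg / (d_pos + d_neg)              # closeness in [0, 1]

# Two routes as (rating, cost); the second trades rating for half the cost.
scores = topsis([[9.0, 120.0], [7.0, 60.0]], [0.5, 0.5], [True, False])
```

The closeness coefficients are what the heuristic algorithms in the study optimize over candidate routes; changing the weights shifts the balance between experience, cost, and efficiency.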
26 pages, 3366 KiB  
Article
Two-Dimensional Fluid Flow Due to Blade-Shaped Waving of Cilia in Human Lungs
by Nisachon Kumankat and Nachayadar Kamolmitisom
Mathematics 2025, 13(11), 1703; https://doi.org/10.3390/math13111703 - 22 May 2025
Abstract
The mucociliary clearance system is an innate defense mechanism in the human respiratory tract, which plays a crucial role in protecting the airways from infections. The clearance system secretes mucus from the goblet cells, which scatters in the respiratory epithelium to trap foreign particles entering the airway, and then the mucus is removed from the body via the movement of cilia residing under the mucus and above the epithelial cells. The layer containing cilia is called the periciliary layer (PCL). This layer also contains an incompressible Newtonian fluid called PCL fluid. This study aims to determine the velocity of the PCL fluid driven by the cilia movement instead of a pressure gradient. We consider bundles of cilia, rather than an individual cilium. So, the generalized Brinkman equation at a macroscopic scale is used to predict the fluid velocity in the PCL. We apply a mixed finite element method to the governing equation and calculate the numerical solutions in a two-dimensional domain. The numerical domain is set up to be the shape of a fan blade, which is similar to the motion of the cilia. This approach can be applied to other problems of fluid flow propelled by moving solid phases. Full article
18 pages, 361 KiB  
Article
A Hybrid Algorithm with a Data Augmentation Method to Enhance the Performance of the Zero-Inflated Bernoulli Model
by Chih-Jen Su, I-Fei Chen, Tzong-Ru Tsai and Yuhlong Lio
Mathematics 2025, 13(11), 1702; https://doi.org/10.3390/math13111702 - 22 May 2025
Abstract
The zero-inflated Bernoulli (ZIBer) model, enhanced with elastic net regularization, effectively handles binary classification for zero-inflated datasets. This zero-inflated structure significantly contributes to data imbalance. To improve the ZIBer model's ability to accurately identify minority classes, we explore the use of momentum and Nesterov's gradient descent methods, particle swarm optimization, and a novel hybrid algorithm combining particle swarm optimization with Nesterov's accelerated gradient techniques. Additionally, the synthetic minority oversampling technique (SMOTE) is employed for data augmentation and training the model. Extensive simulations using holdout cross-validation reveal that the proposed hybrid algorithm with data augmentation excels in identifying true positive cases. Conversely, the hybrid algorithm without data augmentation is preferable when aiming for a balance between the metrics of recall and precision. Two case studies about diabetes and biopsy are provided to demonstrate the model's effectiveness, with performance assessed through K-fold cross-validation. Full article
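The oversampling step can be sketched with a minimal SMOTE-style interpolation. This is illustrative only, with hypothetical data; the paper pairs the augmentation with the hybrid particle swarm and Nesterov training of the ZIBer model.

```python
import numpy as np

def smote(X_min, n_new, k=3, seed=0):
    # SMOTE-style augmentation: for each synthetic sample, pick a
    # minority-class point, pick one of its k nearest minority
    # neighbours, and interpolate a random fraction of the way.
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]       # skip the point itself
        j = rng.choice(nbrs)
        lam = rng.random()                  # interpolation fraction
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
X_new = smote(X_min, n_new=4, k=2)
```

Because each synthetic point lies on a segment between two real minority points, the augmented set stays inside the minority class's region rather than duplicating existing samples.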
29 pages, 2543 KiB  
Article
A Finite Element–Finite Volume Code Coupling for Optimal Control Problems in Fluid Heat Transfer for Incompressible Navier–Stokes Equations
by Samuele Baldini, Giacomo Barbi, Giorgio Bornia, Antonio Cervone, Federico Giangolini, Sandro Manservisi and Lucia Sirotti
Mathematics 2025, 13(11), 1701; https://doi.org/10.3390/math13111701 - 22 May 2025
Abstract
In this work, we present a numerical approach for solving optimal control problems for fluid heat transfer applications with a mixed optimality system: an FEM code solves the adjoint problem over a restricted admissible solution set, and a well-known open-source code solves the state problem defined over a different one. In this way, we are able to decouple the optimality system and use well-established and validated numerical tools for the physical modeling. Specifically, two different CFD codes, OpenFOAM (finite volume-based) and FEMuS (finite element-based), have been used to solve the optimality system, while the data transfer between them is managed by the external library MEDCOUPLING. The state equations are solved in the finite volume code, while the adjoint and the control are solved in the finite element code. Two examples taken from the literature are implemented in order to validate the numerical algorithm: the first one considers a natural convection cavity resulting from a Rayleigh–Bénard configuration, and the second one is a conjugate heat transfer problem between a fluid and a solid region. Full article
20 pages, 1016 KiB  
Article
Efficient AI-Driven Query Optimization in Large-Scale Databases: A Reinforcement Learning and Graph-Based Approach
by Najla Sassi and Wassim Jaziri
Mathematics 2025, 13(11), 1700; https://doi.org/10.3390/math13111700 - 22 May 2025
Abstract
As data-centric applications become increasingly complex, understanding effective query optimization in large-scale relational databases is crucial for managing this complexity. Yet, traditional cost-based and heuristic approaches simply do not scale, adapt, or remain accurate in highly dynamic multi-join queries. This research work proposes the reinforcement learning and graph-based hybrid query optimizer (GRQO), the first ever to apply reinforcement learning and graph theory for optimizing query execution plans, specifically in join order selection and cardinality estimation. By employing proximal policy optimization for adaptive policy learning and using graph-based schema representations for relational modeling, GRQO effectively traverses the combinatorial optimization space. Based on TPC-H (1 TB) and IMDB (500 GB) workloads, GRQO runs 25% faster in query execution time, scales 30% better, reduces CPU and memory use by 20–25%, and reduces the cardinality estimation error by 47% compared to traditional cost-based optimizers and machine learning-based optimizers. These findings highlight the ability of GRQO to optimize performance and resource efficiency in database management in cloud computing, data warehousing, and real-time analytics. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)