Math. Comput. Appl., Volume 31, Issue 2 (April 2026) – 25 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF, click the "PDF Full-text" link and open it with the free Adobe Reader.
15 pages, 1938 KB  
Article
Generalised Equations for Calculating Arsenic Removal Efficiency Using Synthetic Adsorbents
by Monzur Alam Imteaz, ABM Sharif Hossain, Hassan Ahmed Rudayni, Amimul Ahsan and Shahriar Shams
Math. Comput. Appl. 2026, 31(2), 57; https://doi.org/10.3390/mca31020057 - 5 Apr 2026
Abstract
This study develops generalised equations to predict arsenic removal efficiency during adsorption using synthetic sand, based on two key factors: adsorbent dose and temperature. Previous experimental investigations demonstrated that iron oxide coated sand (IOCS), aluminium oxide coated sand (AOCS), and their mixtures are highly effective for arsenic removal. Best-fit equations were first derived for IOCS and AOCS at discrete temperatures as functions of dose concentration, and these were subsequently unified into single predictive equations capable of estimating removal efficiency across a wide range of temperatures and doses. The resulting models closely replicate experimental results, with correlation coefficients exceeding 0.99 for both IOCS and AOCS. Using the same methodology, an additional equation was developed for a 50:50 mixture of IOCS and AOCS, yielding a slightly lower but still strong correlation coefficient of 0.97. In contrast, linear proportioning of the individual IOCS and AOCS equations failed to accurately predict the removal efficiency of the mixed adsorbent, indicating that simple linear scaling is inadequate for representing the combined adsorption behaviour. Full article
(This article belongs to the Section Natural Sciences)

29 pages, 2990 KB  
Article
Federated and Interpretable AI Framework for Secure and Transparent Loan Default Prediction in Financial Institutions
by Awad M. Awadelkarim
Math. Comput. Appl. 2026, 31(2), 56; https://doi.org/10.3390/mca31020056 - 5 Apr 2026
Abstract
Predicting loan defaults is a significant challenge for financial institutions; however, current machine learning techniques often encounter issues with data privacy, cross-institutional cooperation, and model transparency. Practical deployment of advanced predictive models is restricted by centralized training paradigms, which raise regulatory and confidentiality concerns, and by black-box decision making, which diminishes confidence in automated credit risk tools. This study mitigates these problems by adopting a federated-inspired decentralized ensemble learning model combined with explainable artificial intelligence (XAI) for predicting loan defaults. Several machine learning classifiers (K-Nearest Neighbors, support vector machine, random forest, and XGBoost) are trained on partitioned institutional data without any data sharing, and a prediction-level aggregation strategy simulates collaborative decision-making while keeping data local. SHAP and LIME promote model interpretability by providing both global and local explanations of prediction outcomes. The proposed framework was tested on a large public loan dataset containing more than 116,000 records with various financial and borrower-related features. The experimental findings show that XGBoost delivers high and reliable predictive accuracy in both centralized and decentralized scenarios, achieving 99.7% accuracy under federated-inspired evaluation. The explanation analysis identifies interest rate spread and upfront charges as the most significant predictors of loan default risk.
The main contributions of this research are as follows: (i) a privacy-preserving decentralized ensemble learning framework applicable in multi-institutional financial contexts, (ii) a detailed comparison of centralized and decentralized predictive performance, and (iii) an XAI pipeline that can increase transparency and regulatory confidence in automated credit risk evaluation. These results demonstrate that decentralized learning combined with explainable AI can deliver high-performing, transparent, and privacy-preserving loan default prediction systems for real-world banking. Full article

19 pages, 551 KB  
Article
SCAFormer: Side-Channel Analysis Based on a Transformer with Focal Modulation
by Longde Yan, Aidong Chen, Wenwen Chen, Jiawang Huang, Yanlong Zhang, Shuo Wang and Jing Zhou
Math. Comput. Appl. 2026, 31(2), 55; https://doi.org/10.3390/mca31020055 - 4 Apr 2026
Abstract
With the rapid development of Internet technology, information security has become increasingly important. Cryptanalysis techniques, especially side-channel analysis (SCA), pose a significant threat to security systems. Modern SCA mainly exploits the physical leakage signals generated during the operation of encryption devices, such as power consumption, temperature, and electromagnetic radiation. These signals carry physical characteristics of the device that are related to the encryption algorithm; among them, the power consumption trace remains the main target of modern SCA research. However, such traces are difficult to analyze: the data sequences are long, the feature points are sparsely distributed, and the internal relationships in the data are complex. While Transformer architectures are good at capturing long-range dependencies in sequential data, their high computational complexity limits practical deployment. To address this, we propose replacing the self-attention (SA) module in Transformers with a focal modulation module. This modification significantly reduces the computational cost of feature extraction, enabling efficient and accurate side-channel attacks. Experimental results on benchmark datasets (ASCAD, AES_RD, AES_HD, DPAv4) demonstrate the superiority of our approach. The proposed method reduces training time compared to standard Transformer models and achieves superior key recovery performance, outperforming existing state-of-the-art models. Full article

23 pages, 2014 KB  
Article
A Machine Learning Framework for Interpreting Composition-Dependent Weathering in Heritage Glass
by Hailu Wan, Zhuo Jin, Gengqiang Huang and Shuang Li
Math. Comput. Appl. 2026, 31(2), 54; https://doi.org/10.3390/mca31020054 - 3 Apr 2026
Abstract
Glass artworks represent a significant component of cultural heritage, yet their surfaces are highly vulnerable to physicochemical weathering resulting from composition-dependent interactions with environmental factors. Understanding the complex and nonlinear relationships between glass composition and deterioration remains challenging using conventional, often invasive, analytical techniques. To address this issue, this study proposes an interpretable and non-destructive computational framework to analyze weathering patterns in historical glass based on oxide composition data. The framework combines statistical hypothesis testing (Chi-squared analysis), metric-based machine learning (Prototypical Networks), probabilistic modeling (Gaussian Mixture Models), multivariate statistical analysis (orthogonal partial least squares discriminant analysis), and information-theoretic methods (mutual information analysis) to identify key compositional features and inter-elemental relationships associated with surface degradation. The results show that lead-barium glass exhibits a higher susceptibility to weathering compared with high-potassium glass, with PbO, BaO, and SiO2 identified as the most discriminative components. The Prototypical Network achieved 100% accuracy on most specific data partitions within the analyzed dataset, demonstrating its effectiveness in small-sample compositional classification. Meanwhile, mutual information network analysis revealed the complex interrelationships among chemical components involved in surface weathering behavior. These findings indicate that interpretable machine learning and statistical modeling can provide meaningful insights into composition-dependent patterns and support reproducible analysis for the sustainable conservation of cultural heritage glass. Full article

36 pages, 8945 KB  
Article
Multivariate Uncertainty Quantification with Tomographic Quantile Forests
by Takuya Kanazawa
Math. Comput. Appl. 2026, 31(2), 53; https://doi.org/10.3390/mca31020053 - 2 Apr 2026
Abstract
Quantifying predictive uncertainty is essential for safe and trustworthy real-world AI deployment. However, the fully nonparametric estimation of conditional distributions remains challenging for multivariate targets. We propose Tomographic Quantile Forests (TQF), a nonparametric, uncertainty-aware, tree-based regression model for multivariate targets. TQF learns conditional quantiles of directional projections n·y as functions of the input x and the direction n. At inference, it aggregates quantiles across many directions and reconstructs the multivariate conditional distribution by minimizing the sliced Wasserstein distance via an efficient alternating scheme with convex subproblems. Unlike classical directional-quantile approaches that typically produce only convex quantile regions and require training separate models for different directions, TQF covers all directions with a single model to reconstruct the full conditional distribution itself, naturally overcoming any convexity restrictions. We evaluate TQF on synthetic and real-world datasets, and release the source code on GitHub. Full article
(This article belongs to the Section Engineering)
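The sliced Wasserstein objective that TQF minimizes reduces, per direction, to quantile matching of 1D projections. A minimal pure-Python sketch of that distance (illustrative only; the paper's alternating reconstruction scheme is not reproduced here, and `sliced_w1` assumes two equally sized 2D point sets):

```python
import math

def w1_along_direction(pts_a, pts_b, direction):
    """Empirical 1-Wasserstein distance between the projections of two
    equally sized 2D point sets onto a unit direction. For sorted 1D
    samples of equal size, W1 is the mean absolute difference."""
    nx, ny = direction
    norm = math.hypot(nx, ny)
    nx, ny = nx / norm, ny / norm
    pa = sorted(x * nx + y * ny for x, y in pts_a)
    pb = sorted(x * nx + y * ny for x, y in pts_b)
    return sum(abs(a - b) for a, b in zip(pa, pb)) / len(pa)

def sliced_w1(pts_a, pts_b, n_dirs=64):
    """Sliced W1: average the directional W1 over evenly spaced directions."""
    total = 0.0
    for k in range(n_dirs):
        theta = math.pi * k / n_dirs
        total += w1_along_direction(pts_a, pts_b,
                                    (math.cos(theta), math.sin(theta)))
    return total / n_dirs
```

For a rigid shift of (1, 0), the directional W1 is |cos θ|, so the sliced average approaches 2/π, which gives a quick sanity check of the implementation.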

26 pages, 1972 KB  
Article
Multiphysics Design and Fuzzy-Based Optimization of Materials and Geometry for the Triple Scissor Deployable Antenna Mechanism
by Mamoon Aamir, Mohamed Omri, Aqsa Zafar Abbasi and Lioua Kolsi
Math. Comput. Appl. 2026, 31(2), 52; https://doi.org/10.3390/mca31020052 - 2 Apr 2026
Abstract
There is a demand for a structurally sound fire detection and suppression system that can support a large deployable ground or space antenna in a low Earth orbit (LEO) environment and remain thermally stable across the entire range of that environment. This paper describes a new type of deployable antenna, the triple scissor deployable antenna mechanism (TSDAM), which has a circumferential modular structure and deploys into position with a single degree of freedom; deployment does not change its geometric precision or structural stability. This research creates a comprehensive multiphysics design methodology encompassing nonlinear kinematic analysis, fuzzy logic-based material selection, fuzzy logic-based structural and thermal optimization of the geometry, coupled thermo-structural-dynamic analysis, and, finally, dynamic analysis of the deployed structure. The material selection process identified T1100G carbon fiber reinforced plastic as the most suitable candidate, since its stiffness-to-weight ratio and thermal performance under LEO cycling were the best in the study. The optimal geometric deployment for the antenna was 26.8 m with a total structural weight of 128.4 kg, and the base case geometry yielded a feasible ratio of 0.91. This work compares the mass savings against traditional deployable truss designs; testing of conventional designs showed a much greater mass overhead than the proposed design. From a dynamic analysis perspective, the predicted fundamental frequency of the deployed TSDAM was 0.09912 Hz, which compared favorably with the corresponding finite element models (1.91% error), validating the analytical model. Overall, the work provides a systematic, scalable methodology for designing ultra-lightweight, geometrically precise deployable reflector systems that satisfy the requirements of next-generation space operations. Full article

25 pages, 2236 KB  
Article
On the Unambiguous, Traceable and Dimensionally Homogeneous Calculation of Per-Unit Parameters for the Two-Mass Drive Train Model of a Set of Reference Wind Turbines
by Joel Rodríguez-Guillén, Rubén Salas-Cabrera, Bárbara María-Esther García-Morales, Miguel A. García-Morales and Juan Frausto-Solís
Math. Comput. Appl. 2026, 31(2), 51; https://doi.org/10.3390/mca31020051 - 1 Apr 2026
Abstract
The Bond Graph (BG) methodology, a multi-domain graphical description formalism, is used to study a horizontal-axis two-mass drive train of a wind turbine. The main contribution of this work is to address the lack of wind energy literature dealing with fully unambiguous, traceable, and dimensionally homogeneous per-unit quantities for two-mass drive train models. Data in real quantities for the drive train are collected from open-access datasheets and their corresponding design information files. Wind turbines that may serve as Reference Wind Turbines (RWTs), with traceable calculations, are carefully selected. A lumped-parameter order-reduction method is employed to convert data from higher-order models into data for a reduced-order two-mass model. The BG methodology is then used to formally derive the per-unit drive train model and its corresponding dimensionally homogeneous per-unit parameters for a set of six representative Reference Wind Turbines, covering a nominal power range from 0.75 MW to 5 MW. Full article
(This article belongs to the Special Issue Numerical and Evolutionary Optimization 2025)
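The kind of dimensionally homogeneous per-unit bookkeeping the paper formalizes can be illustrated with the standard inertia-constant and torque-base conversions. The numbers below are illustrative only, not taken from the referenced RWT datasheets:

```python
def inertia_constant(J_kg_m2, omega_base_rad_s, S_base_VA):
    """Inertia constant H [s]: kinetic energy at base speed over base power.
    Dimensionally homogeneous: (kg*m^2)*(rad/s)^2 / VA = J/W = s."""
    return 0.5 * J_kg_m2 * omega_base_rad_s**2 / S_base_VA

def torque_base(S_base_VA, omega_base_rad_s):
    """Base torque [N*m] consistent with the chosen power and speed bases."""
    return S_base_VA / omega_base_rad_s

def to_per_unit(value, base):
    """A per-unit quantity is the physical value divided by its base."""
    return value / base
```

Because every base is stated with explicit SI units, each per-unit value is traceable back to a dimensionally consistent physical quantity, which is the point of the paper's methodology.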

24 pages, 1112 KB  
Article
Reliable Emergency Facility Location Planning Under Complex Polygonal Barriers and Facility Failure Risks
by Mingyuan Liu, Lintao Liu, Zhujia Yu, Futai Liang and Guocheng Wang
Math. Comput. Appl. 2026, 31(2), 50; https://doi.org/10.3390/mca31020050 - 18 Mar 2026
Abstract
Emergency facility location and layout are critical to the efficiency of emergency rescue and resource allocation. However, practical emergency scenarios are plagued by two key challenges: the risk of facility failure due to various uncertain factors and the presence of complex polygonal barriers (including convex and concave polygons) that hinder transportation. Existing studies often overlook concave polygonal barriers or fail to prioritize time satisfaction, a core demand in emergency response. To address these gaps, this paper proposes a reliable emergency facility location optimization model with the objective of maximizing time satisfaction, considering constraints such as capacity, cost, and demand. The model integrates three key methods: a convex hull algorithm to convert concave barriers into convex ones for simplified calculation, a path optimization algorithm to find the shortest bypass routes around barriers, and an Artificial Ecosystem Optimization (AEO) algorithm to solve the nonlinear programming model. Through numerical experiments (single-facility, multi-facility, and medium-scale scenarios) and a practical case study in the Meknès region of Morocco for ambulance deployment, the feasibility and effectiveness of the model and algorithms are verified. The results show that the model achieves high time satisfaction (all above 0.8, with most exceeding 0.9) and efficiently optimizes facility locations and resource allocation. Sensitivity analysis indicates that increased failure risk parameters (α and θ) lead to a gradual decrease in average time satisfaction. This research provides a systematic mathematical model and practical method for emergency facility location decision-making, effectively addressing the challenges of complex barriers and facility failure. Full article
(This article belongs to the Special Issue Applied Optimization in Automatic Control and Systems Engineering)
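The first ingredient above, converting concave barriers into convex ones via a convex hull, can be sketched with Andrew's monotone chain algorithm. This is a common choice; the paper does not specify which hull algorithm it uses:

```python
def convex_hull(points):
    """Andrew's monotone chain: convex hull of 2D points in CCW order.
    Illustrates replacing a concave barrier polygon by its convex hull."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower = []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    upper = []
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # endpoints of each chain are shared, so drop the duplicates
    return lower[:-1] + upper[:-1]
```

Replacing a concave barrier by its hull can only enlarge the obstacle, so bypass routes computed around the hull remain feasible for the original barrier.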

41 pages, 2638 KB  
Systematic Review
ML-Based Autoscaling for Elastic Cloud Applications: Taxonomy, Frameworks, and Evaluation
by Vishwanath Srikanth Machiraju, Vijay Kumar and Sahil Sharma
Math. Comput. Appl. 2026, 31(2), 49; https://doi.org/10.3390/mca31020049 - 16 Mar 2026
Abstract
Elastic cloud systems are increasingly employing machine learning (ML) to automate resource scaling in response to variable workloads and stringent service-level objectives. However, current ML-based autoscalers are fragmented across different platforms, objectives, and evaluation frameworks. This survey examines 60 primary studies conducted between 2015 and 2025, categorising them according to a five-dimensional taxonomy that includes goal, decision logic, scaling mode, control scope, and deployment. This study classifies supervised, unsupervised, and reinforcement learning approaches and analyzes their integration into practical frameworks, including Kubernetes-based controllers and cloud provider services. This paper summarizes the application of machine learning to workload prediction, proactive and hybrid horizontal–vertical scaling, and adaptive policy optimization. Additionally, it synthesises common evaluation practices, encompassing workloads, metrics, and benchmarks. The analysis identifies ongoing challenges: actuation delays and telemetry lag, the intricacies of hybrid scaling, coordination across multi-service and edge-cloud deployments, and the constrained joint consideration of cost, SLO, and energy objectives. The identified gaps necessitate additional research on unified machine learning-driven orchestration, multi-agent and federated control, standardised benchmarks, and sustainability-aware autoscaling. Full article

23 pages, 626 KB  
Article
Collaborative Optimization of Cost and Risk for Industrial Equipment Maintenance Projects Based on DRO-CVaR
by Xiaohang Wan
Math. Comput. Appl. 2026, 31(2), 48; https://doi.org/10.3390/mca31020048 - 15 Mar 2026
Abstract
Aiming at the poor robustness of maintenance schemes in industrial equipment maintenance projects, which arises from uncertain factors including fault degree, maintenance time, and resource availability, this paper proposes a synergistic cost-risk optimization method that integrates Distributionally Robust Optimization (DRO) and Conditional Value-at-Risk (CVaR). First, the paper analyzes the uncertainty characteristics of such projects and constructs a distribution ambiguity set based on the Wasserstein distance to depict unknown probability distributions. Second, a two-stage DRO-CVaR optimization model is established: the first stage formulates a pre-optimization scheme to minimize maintenance costs, and the second stage introduces CVaR for extreme risk measurement, thus achieving optimal decision-making under the worst-case scenario. Finally, a nested Column-and-Constraint Generation (C&CG) algorithm is designed to solve the proposed model. A numerical example is conducted for verification, and results show that compared with traditional stochastic programming and pure DRO methods, the proposed method reduces the total cost by 10.4%, the worst-case scenario loss by 28.9%, and the CVaR value by 32.0%. It thus exhibits superior economic efficiency and risk resistance in uncertain environments. Full article
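The CVaR ingredient of the model has a simple empirical form: the average of the worst (1 − α) fraction of losses. A minimal sketch of that estimator follows; the paper embeds CVaR inside a two-stage DRO model with a Wasserstein ambiguity set, which is not reproduced here:

```python
import math

def cvar(losses, alpha=0.95):
    """Empirical CVaR_alpha: mean of the worst (1 - alpha) fraction of
    losses, i.e. the tail average beyond the alpha-quantile."""
    srt = sorted(losses)
    k = max(1, math.ceil((1 - alpha) * len(srt)))
    tail = srt[-k:]
    return sum(tail) / len(tail)
```

Unlike VaR, this tail average is a coherent risk measure, which is why the model optimizes against it when hedging the worst-case scenario.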

22 pages, 102250 KB  
Article
An Improved Method for 3D Style Transfer of Cliff Carvings Based on Gaussian Splatting
by Yang Li, He Ren, Yacong Li, Dong Sui and Maozu Guo
Math. Comput. Appl. 2026, 31(2), 47; https://doi.org/10.3390/mca31020047 - 11 Mar 2026
Abstract
Cliff carvings, significant art forms bearing historical, cultural, and religious connotations, face dual threats from natural weathering and human-induced damage. Protecting them and restoring their artistic style present pressing challenges. In recent years, the rapid advancement of digital technologies has offered new opportunities for preserving and reproducing cultural heritage; in particular, 3D style transfer techniques are emerging as crucial tools for digital safeguarding. The advantages of three-dimensional style transfer in cultural heritage applications include dynamic stylized rendering, simulation of styles from multiple historical periods, alternative modes of exhibition, and a paradigm shift in conservation practices from static digital archiving to dynamic revitalization. This study proposes a novel 3D stylization method for cliff carvings that integrates 3D Gaussian Splatting (3DGS) with a Nearest Neighbor Feature Matching (NNFM) loss. The method represents ancient cliff carvings as a set of optimizable 3D Gaussians, enabling efficient capture and processing of complex geometric structures and rich textural details. By integrating the textural and geometric characteristics of the target artistic style, 3DGS facilitates high-quality transfer of diverse artistic styles while effectively preserving the original intricate details of the carvings. Additionally, we employ the NNFM loss function to transfer 2D visual details into 3D representations while maintaining multi-perspective style consistency. Experimental results demonstrate that the proposed method exhibits significant advantages in texture fidelity, style consistency, and rendering efficiency. This research showcases the potential of our model for the digital preservation and presentation of cliff-carved cultural heritage, offering an innovative technological approach with theoretical value and practical significance. Full article
(This article belongs to the Special Issue Advances in Computational and Applied Mechanics (SACAM))

17 pages, 986 KB  
Article
Selective State-Space Models with Adaptive Collaborative Awareness for Sequential Recommendation
by Dun Ao, Yao Xiao and Fei Lei
Math. Comput. Appl. 2026, 31(2), 46; https://doi.org/10.3390/mca31020046 - 10 Mar 2026
Abstract
Sequential recommendation systems face challenges in integrating local sequential patterns with global collaborative information. While Transformers capture long-term dependencies through self-attention, they suffer from quadratic complexity. State-space models offer linear efficiency but are constrained by Markovian assumptions that limit their ability to model direct inter-item relationships. This paper addresses the expressiveness limitations of selective state-space models in capturing collaborative signals. We propose MCARec, which integrates selective state spaces with a dedicated collaborative awareness module. The key components include: (1) a lightweight attention mechanism that explicitly models item co-occurrence and transition patterns, enabling direct pairwise relationship modeling beyond the sequential bottleneck; (2) context-aware adaptive gating that dynamically balances sequential and collaborative features based on input context; (3) a lightweight architecture that enhances representational capacity while maintaining computational efficiency. On MovieLens-1M, a dataset characterized by dense user interactions, MCARec achieves improvements of 3.89% in HR@10, 5.52% in NDCG@10, and 6.97% in MRR@10 over Mamba4Rec, and 9.19%, 12.09%, and 8.45% respectively over SASRec (all p<0.001). Performance gains correlate with interaction density: substantial improvements on dense datasets diminish on sparser Amazon datasets (2–6% over SASRec in most metrics), while showing mixed results compared to Mamba4Rec on sparse datasets, suggesting that the collaborative awareness mechanism is most effective when sufficient co-occurrence signals are available. This work provides the first systematic analysis of how Markovian constraints in state-space models limit collaborative information utilization in recommendations. MCARec demonstrates that augmenting state-space models with explicit collaborative modeling significantly improves recommendation accuracy in dense interaction scenarios, offering a complementary approach to pure sequential or pure attention-based methods. Full article
(This article belongs to the Section Engineering)
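The context-aware adaptive gating described in component (2) can be sketched as a scalar sigmoid gate blending the two feature streams. The gate parameterization and the weights `w` and bias `b` below are hypothetical, since the abstract does not give them:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def adaptive_gate(seq_feat, collab_feat, w, b):
    """Blend sequential and collaborative feature vectors with a scalar
    gate g = sigmoid(w . [seq; collab] + b), returning g*seq + (1-g)*collab.
    A hypothetical sketch of context-aware gating, not MCARec's exact layer."""
    ctx = seq_feat + collab_feat  # list concatenation = [seq; collab]
    g = sigmoid(sum(wi * xi for wi, xi in zip(w, ctx)) + b)
    return [g * s + (1 - g) * c for s, c in zip(seq_feat, collab_feat)]
```

With zero weights the gate is 0.5 and the two streams are averaged; learned weights let the blend shift toward whichever signal the current context supports.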

20 pages, 508 KB  
Article
Predictive Modelling of Credit Default Risk Using Machine Learning and Ensemble Techniques
by Mofoka Rebuseditsoe Mathibela and Daniel Maposa
Math. Comput. Appl. 2026, 31(2), 45; https://doi.org/10.3390/mca31020045 - 10 Mar 2026
Abstract
This study develops a hybrid framework integrating ensemble learning with explainable artificial intelligence to address the methodological challenge of balancing predictive accuracy and interpretability in credit risk model comparison. Using the German Credit Dataset, we implemented a comprehensive preprocessing pipeline, including feature encoding, scaling, and SMOTE for class imbalance handling. Four base models, logistic regression, Random Forest, XGBoost, and Multilayer Perceptron, were combined through a Stacked Ensemble with a logistic regression meta learner. The ensemble demonstrated strong performance, achieving an AUC of 0.761, precision of 0.783, recall of 0.806, and an F1 score of 0.794, which represented the highest scores among all models tested. Notably, Random Forest (AUC = 0.749) surpassed XGBoost (AUC = 0.733), challenging conventional algorithmic hierarchies. SHAP analysis provided transparent global and local interpretability, identifying Current Account status (SHAP = 0.153), Loan Duration (0.064), and Savings Account (0.063) as dominant predictor variables. Class-imbalance handling and threshold optimisation enhanced practical utility by reducing false positives from 39 to 16, thereby aligning with financial risk priorities. The framework provides a reproducible methodological pipeline for systematically comparing credit scoring approaches, demonstrating how predictive performance can be evaluated alongside interpretability considerations within a benchmark dataset context. Full article
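The threshold-optimisation step that reduced false positives can be illustrated by scanning candidate cut-offs on predicted default probabilities. The scoring rule (F1) and the toy scores below are assumptions for illustration, not the paper's exact procedure or data:

```python
def best_threshold(scores, labels):
    """Scan candidate thresholds on predicted default probabilities and
    return the (threshold, F1) pair with the highest F1 score.
    Labels are 1 for default, 0 for non-default."""
    best_t, best_f1 = 0.5, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        denom = 2 * tp + fp + fn
        f1 = 2 * tp / denom if denom else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t, best_f1
```

In practice the objective would be aligned with business costs (for example, penalizing false positives more heavily), but the scan structure is the same.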

32 pages, 5960 KB  
Article
Complex Double Interface Dynamics in Time-Fractional Models: Computational Analysis of Meshless and Multi-Resolution Techniques
by Faisal Bilal, Muhammad Asif, Mehnaz Shakeel and Ioan-Lucian Popa
Math. Comput. Appl. 2026, 31(2), 44; https://doi.org/10.3390/mca31020044 - 7 Mar 2026
Abstract
Time-fractional interface problems, found in heat transfer with discontinuous conductivities and fluid flows with surface tension forces, are challenging due to irregular interfaces and the history-dependent nature of fractional derivatives. This paper presents two numerical methods for simulating time-fractional double interface problems. The first method uses the Haar wavelet collocation technique, while the second relies on a meshless approach with radial basis functions. The fractional derivatives are treated in the Caputo sense, the resulting first-order time derivatives are handled using the finite difference method, and the spatial operator is approximated using the two proposed methods. Gauss elimination is used to solve the linear problems, while a quasi-Newton linearization method is used for nonlinear ones. Both methods accommodate constant and variable coefficients, handling discontinuities and singularities in both solutions and coefficients. To evaluate the effectiveness of the proposed methods, numerical experiments are carried out. The accuracy of each method is quantified using the L∞ error norm, and a comparative analysis highlights the validity and advantages of the approaches. Moreover, the proposed schemes are rigorously analyzed to establish their stability and the existence and uniqueness of the solutions. Full article
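Caputo time derivatives of order α ∈ (0, 1) are most commonly discretized with the L1 finite-difference formula. The sketch below is the textbook scheme, not necessarily the paper's exact discretization; it checks the approximation against the known Caputo derivative of t², which is Γ(3)/Γ(3−α)·t^(2−α).

```python
# Standard L1 finite-difference approximation of the Caputo derivative of
# order alpha in (0, 1), validated against the exact Caputo derivative
# of u(t) = t^2. Textbook scheme, not necessarily the paper's.
import numpy as np
from math import gamma

alpha = 0.5
dt, T = 1e-3, 1.0
t = np.arange(0.0, T + dt / 2, dt)
u = t ** 2

n = len(t) - 1
j = np.arange(n)
# L1 weights: integrate (t_n - s)^(-alpha) exactly on each subinterval
weights = (n - j) ** (1 - alpha) - (n - j - 1) ** (1 - alpha)
caputo_l1 = dt ** (-alpha) / gamma(2 - alpha) * np.sum(weights * np.diff(u))

exact = gamma(3) / gamma(3 - alpha) * T ** (2 - alpha)  # = 2 / Gamma(2.5) at t = 1
print(f"L1 approximation: {caputo_l1:.6f}, exact: {exact:.6f}")
```

The L1 scheme converges at rate O(Δt^(2−α)), so with Δt = 10⁻³ the two values agree to several decimals.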

27 pages, 3946 KB  
Article
Cost Parameters-Based Comprehensive Analysis of a New Cost Function Construction for Coxian-k Queueing System Characterized by Customer Service Speed Variability
by Stefan Mirchevski, Aleksandra Popovska-Mitrovikj and Verica Bakeva
Math. Comput. Appl. 2026, 31(2), 43; https://doi.org/10.3390/mca31020043 - 6 Mar 2026
Viewed by 632
Abstract
We investigate cost optimization in an M/Cox_k/1 queueing system with phase-dependent service speeds. A unified parametric framework is introduced to model both homogeneous and heterogeneous service regimes, and closed-form expressions for steady-state performance measures are derived. These results [...] Read more.
We investigate cost optimization in an M/Cox_k/1 queueing system with phase-dependent service speeds. A unified parametric framework is introduced to model both homogeneous and heterogeneous service regimes, and closed-form expressions for steady-state performance measures are derived. These results are used to construct an expected total cost function explicitly parameterized by the traffic intensity. We prove that the cost function is strictly convex on the stability region, ensuring the existence and uniqueness of the optimal traffic intensity. For the Coxian-2 case, analytical and numerical sweep analyses are conducted with respect to waiting and service-capacity cost parameters. Polynomial response surfaces and nonparametric statistical tests are employed to validate the robustness of the results. The analysis shows that balanced service speeds across phases consistently yield lower optimal traffic intensity levels and reduced expected total costs, whereas heterogeneous service speeds increase congestion and the spread of costs across the sweep. These findings provide practical guidance for the economic design and control of multi-phase service systems. Full article
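The convexity result has a familiar shape in the much simpler M/M/1 setting. The toy sketch below is an M/M/1 analogue with hypothetical cost parameters, not the paper's M/Cox_k/1 model: with arrival rate λ fixed and ρ = λ/μ, the waiting cost grows like ρ/(1−ρ) while the service-capacity cost grows like λ/ρ, and the sum has a unique minimiser on (0, 1).

```python
# Toy M/M/1 analogue of a waiting-plus-capacity cost function
# (the paper treats the more general M/Cox_k/1 system):
#   C(rho) = c_w * rho/(1-rho)   waiting cost, L = rho/(1-rho)
#          + c_s * lam/rho       service-capacity cost, mu = lam/rho
# Both terms are strictly convex on (0, 1), so the minimiser is unique.
from scipy.optimize import minimize_scalar

lam, c_w, c_s = 1.0, 4.0, 1.0  # hypothetical rates and cost parameters

def cost(rho):
    return c_w * rho / (1.0 - rho) + c_s * lam / rho

res = minimize_scalar(cost, bounds=(1e-6, 1 - 1e-6), method="bounded")
rho_star = res.x
print(f"optimal traffic intensity rho* = {rho_star:.3f}, cost = {res.fun:.3f}")
```

Setting C′(ρ) = 0 gives ρ/(1−ρ) = √(c_s·λ/c_w); with the values above, ρ* = 1/3.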

26 pages, 6016 KB  
Article
Mathematical Modeling-Driven Shape Digitization: A Perspective of Mongolian Motifs and Patterns
by Yadamragchaa Tsogtgerel and Sharifu Ura
Math. Comput. Appl. 2026, 31(2), 42; https://doi.org/10.3390/mca31020042 - 5 Mar 2026
Viewed by 656
Abstract
Human civilization embodies a rich cultural heritage shaped over long historical periods by numerous ethnic groups, each employing distinctive motifs and patterns in religious spaces, architecture, clothing, utensils, and other artifacts. Such motifs commonly originate from elementary geometric primitives that are organized through [...] Read more.
Human civilization embodies a rich cultural heritage shaped over long historical periods by numerous ethnic groups, each employing distinctive motifs and patterns in religious spaces, architecture, clothing, utensils, and other artifacts. Such motifs commonly originate from elementary geometric primitives that are organized through symmetric or asymmetric compositions to convey symbolic and esthetic meaning. This study focuses on Mongolian patterns derived from the nomadic heritage of Mongolia and still prevalent in contemporary design. These patterns draw inspiration from nature, geometry, animals, plants, and symbolic forms. This article proposes a mathematical modeling-driven digitization framework for the systematic analysis and digitization of Mongolian patterns, with the objective of generating accurate digital representations in the form of computer-aided design (CAD) models. A concise review of related work is first presented, followed by a structured digitization framework and a taxonomy of representative Mongolian motifs. A case study demonstrates that, when combined through distance-preserving and shape-preserving geometric operations such as translation, rotation, and reflection, four fundamental geometric entities, namely the circle, circular arc, spiral, and astroid, are sufficient to retain the intrinsic symmetry and compositional coherence of complex patterns observed in selected artifacts. Furthermore, the proposed analytical modeling approach enables the generation of vector-based line drawings that support precise CAD model construction. Accordingly, this study establishes a computational design workflow that integrates cultural heritage patterns into CAD-based modeling environments, thereby supporting digital preservation and fabrication with high geometric fidelity. Full article
(This article belongs to the Section Engineering)
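The distance-preserving operations named in the abstract are easy to verify numerically. A minimal numpy sketch, using the astroid primitive as an example:

```python
# Distance-preserving (isometric) composition of motif primitives:
# rotate, reflect, and translate sampled points of an astroid
# (x = cos^3 t, y = sin^3 t) and check that all pairwise distances
# are preserved, as required for the motif compositions described.
import numpy as np

t = np.linspace(0, 2 * np.pi, 50)
astroid = np.stack([np.cos(t) ** 3, np.sin(t) ** 3], axis=1)

theta = np.pi / 4
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
reflection = np.array([[1.0, 0.0], [0.0, -1.0]])  # reflect across the x-axis
translation = np.array([2.0, 3.0])

transformed = astroid @ rotation.T @ reflection.T + translation

def pairwise(p):
    return np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)

assert np.allclose(pairwise(astroid), pairwise(transformed))
print("pairwise distances preserved under rotation, reflection, translation")
```

Because rotation and reflection matrices are orthogonal, any composition of them plus a translation is an isometry, which is exactly the property that lets the four primitives retain a pattern's symmetry.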

18 pages, 1474 KB  
Article
A Mathematical Model for Type 1 Diabetes Regulation Using a Smart Insulin Patch: In Silico Validation Based on Published Rat Data
by Haneen Hamam
Math. Comput. Appl. 2026, 31(2), 41; https://doi.org/10.3390/mca31020041 - 5 Mar 2026
Viewed by 420
Abstract
This work introduces a new mathematical model designed to describe the glucose–insulin dynamics associated with a glucose-responsive smart microneedle patch reported in the literature. The model captures the complete sequence of the patch behavior, from detecting glucose changes to controlled transdermal insulin delivery [...] Read more.
This work introduces a new mathematical model designed to describe the glucose–insulin dynamics associated with a glucose-responsive smart microneedle patch reported in the literature. The model captures the complete sequence of the patch behavior, from detecting glucose changes to controlled transdermal insulin delivery and gradually restoring blood glucose levels to the normal range. Our simulations show that the patch can effectively manage glucose not only during fasting conditions but also after single and multiple meals, restoring glucose levels to healthy levels within a short period. The model predictions are consistent with experimentally reported trends in previously published studies, which strengthens confidence in the biological realism of the proposed mechanism. Because some parameters in such systems are difficult to measure directly, we also performed a comprehensive sensitivity analysis to understand how variations in key parameters influence system stability. The results highlight the central role of the insulin release rate and the five glucose–regulation parameters examined in the sensitivity analysis, providing clear guidance on the most critical aspects of patch design for reliable performance. Overall, this study provides a simplified yet robust mathematical framework that makes the behavior of a glucose-responsive microneedle patch easy to understand and analyze. It lays the groundwork for refining control strategies, optimizing patch design, and developing more advanced systems that can maintain healthy glucose levels naturally and intuitively. Full article
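For readers who want a feel for such glucose-insulin feedback loops, here is a deliberately minimal two-state ODE sketch. Its structure and every parameter value are illustrative only; it is not the model proposed in the paper.

```python
# Minimal glucose-insulin negative-feedback toy (NOT the paper's model):
# glucose G relaxes toward a basal level G_b and is driven down by insulin I;
# a patch-like term releases insulin at a rate growing with the glucose excess.
# All parameter values below are illustrative.
from scipy.integrate import solve_ivp

G_b = 5.0                        # basal glucose (mmol/L), illustrative
k_g, k_i, k_r = 0.05, 0.4, 0.3   # clearance / insulin-action / release rates

def rhs(t, y):
    G, I = y
    dG = -k_g * (G - G_b) - k_i * I * G
    dI = k_r * max(G - G_b, 0.0) - 0.5 * I   # release only above basal, then decay
    return [dG, dI]

sol = solve_ivp(rhs, (0, 60), [15.0, 0.0])   # start from a hyperglycaemic state
G_final = sol.y[0, -1]
print(f"glucose at t = 60: {G_final:.2f} mmol/L")
```

The qualitative behavior mirrors the mechanism the abstract describes: elevated glucose triggers insulin release, which pulls glucose back toward the basal range, after which release shuts off and insulin decays.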

31 pages, 703 KB  
Article
A Novel Fractional-Order Scheme for Non-Linear Problems with Applications in Optimization
by Mudassir Shams, Nasreen Kausar and Pourya Pourhejazy
Math. Comput. Appl. 2026, 31(2), 40; https://doi.org/10.3390/mca31020040 - 3 Mar 2026
Viewed by 307
Abstract
The existing methods for solving non-linear equations encounter convergence issues and computing constraints, especially when used in fractional-order or complex non-linear problems. This study develops a higher-order fractional technique for solving non-linear equations based on the Caputo fractional derivative. The proposed method uses [...] Read more.
The existing methods for solving non-linear equations encounter convergence issues and computing constraints, especially when used in fractional-order or complex non-linear problems. This study develops a higher-order fractional technique for solving non-linear equations based on the Caputo fractional derivative. The proposed method uses a fractional framework to improve local convergence and stability while ensuring high efficiency in every iteration step. Local convergence analysis using generalized Taylor series expansion reveals that the order of the new fractional scheme for solving non-linear equations is 5α+1, where α ∈ (0,1] represents the Caputo fractional order, determining the memory depth of the Caputo fractional derivative. The performance of the method is further investigated using a variety of non-linear problems from engineering optimization and applied sciences, including engineering control systems, computational chemistry, thermodynamics models, and operations research problems such as inventory optimization. Analyzing the key performance metrics, such as dynamical analysis, percentage convergence, residual error, and computation time, confirms the advantages of the developed method over the state-of-the-art. This study provides a solid framework for higher-order fractional iterative approaches, paving the way for advanced applications to non-linear problems. Full article

36 pages, 2892 KB  
Article
Bridging Behavioral and Emotional Intelligence: An Interpretable Multimodal Deep Learning Framework for Customer Lifetime Value Estimation in the Hospitality Industry
by Milena Nikolić, Marina Marjanović and Žarko Rađenović
Math. Comput. Appl. 2026, 31(2), 39; https://doi.org/10.3390/mca31020039 - 3 Mar 2026
Viewed by 466
Abstract
Customer Lifetime Value (CLV) estimation over the observed transactional horizon is a fundamental challenge in hospitality analytics, supporting revenue management, personalization, and long-term customer relationship strategies. However, existing models predominantly rely on structured behavioral data while overlooking the emotional intelligence embedded in guest [...] Read more.
Customer Lifetime Value (CLV) estimation over the observed transactional horizon is a fundamental challenge in hospitality analytics, supporting revenue management, personalization, and long-term customer relationship strategies. However, existing models predominantly rely on structured behavioral data while overlooking the emotional intelligence embedded in guest narratives. This study proposes an interpretable multimodal deep learning (DL) framework that bridges behavioral and emotional intelligence for CLV estimation by integrating structured booking records with unstructured hotel review text. Model interpretability is ensured through SHAP analysis for structured attributes, LIME for local textual explanations, and attention visualization for modality interaction analysis. Experimental evaluation on large-scale hospitality datasets demonstrates that the proposed multimodal framework outperforms traditional machine learning models, unimodal deep learning baselines, and classical ensemble learners, yielding consistent improvements across multiple error metrics and a notable increase in goodness of fit. The results confirm that emotional intelligence extracted from guest reviews significantly enhances CLV estimation and provides actionable insights for hospitality decision-making, supporting the deployment of transparent and explainable artificial intelligence (XAI) systems for strategic customer value management. Full article
(This article belongs to the Special Issue Recent Advances in Algorithms for Neural Networks)

23 pages, 2178 KB  
Article
GDFSIC: A Few-Shot Image Classification Framework Integrating Global–Local Attention with Distance–Direction Similarity
by Biao Geng and Liping Pu
Math. Comput. Appl. 2026, 31(2), 38; https://doi.org/10.3390/mca31020038 - 3 Mar 2026
Viewed by 432
Abstract
For few-shot image classification tasks, the recognition accuracy of existing models remains limited due to the inherent complexity of the few-shot learning setting. To address this challenge, this paper proposes a few-shot image classification approach, termed GDFSIC, which integrates a Global–Local Channel Attention [...] Read more.
For few-shot image classification tasks, the recognition accuracy of existing models remains limited due to the inherent complexity of the few-shot learning setting. To address this challenge, this paper proposes a few-shot image classification approach, termed GDFSIC, which integrates a Global–Local Channel Attention Module (GLCAM) with a graph-propagation-based Distance–Direction Similarity Earth Mover’s Distance (DDS-EMD). The GLCAM module is incorporated into the feature extractor to enhance focus on discriminative regions and increase model attention to critical feature areas. Furthermore, a Distance–Direction Similarity (DDS) metric is introduced as a more effective distance criterion for capturing subtle differences in latent spatial representations. The proposed method is evaluated on four widely used few-shot image classification benchmarks: CIFAR-FS, CUB-200-2011, mini-ImageNet, and Tiered-ImageNet. Experimental results demonstrate that our approach achieves a clear competitive advantage in classification accuracy across these datasets. Ablation studies and further analyses confirm the effectiveness of each component of the proposed framework. Full article
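To illustrate the general idea of combining distance and direction cues in a single similarity score, here is a hypothetical blend of cosine similarity (direction) with a Gaussian of the Euclidean distance (magnitude). The paper's actual DDS metric is not reproduced here; this only shows why the two cues are complementary.

```python
# Hypothetical "distance-direction" similarity (illustrative only, not the
# paper's DDS metric): cosine similarity captures direction, a Gaussian of
# the Euclidean distance captures proximity, and the two are blended.
import numpy as np

def dds_similarity(u, v, alpha=0.5, sigma=1.0):
    direction = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    distance = np.exp(-np.linalg.norm(u - v) ** 2 / (2 * sigma ** 2))
    return alpha * direction + (1 - alpha) * distance

u = np.array([1.0, 0.0])
same_dir_far = np.array([3.0, 0.0])   # same direction, larger distance
close_off_dir = np.array([0.9, 0.5])  # nearby but rotated

print(dds_similarity(u, same_dir_far), dds_similarity(u, close_off_dir))
```

A pure cosine metric would score `same_dir_far` as identical to `u`, while a pure distance metric would ignore the rotation in `close_off_dir`; blending both distinguishes the two cases.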

25 pages, 337 KB  
Article
A Belief Model for BDI Agents Derived from Roles and Personality Traits
by Eduardo David Martínez-Hernández, Bárbara María-Esther García-Morales, María Lucila Morales-Rodríguez, Claudia Guadalupe Gómez-Santillán and Nelson Rangel-Valdez
Math. Comput. Appl. 2026, 31(2), 37; https://doi.org/10.3390/mca31020037 - 3 Mar 2026
Viewed by 440
Abstract
Recent advancements in AI have enabled autonomous agents to interact within complex environments, with deliberative BDI (Belief–Desire–Intention) agents standing out for their human-inspired reasoning capabilities. However, defining the initial beliefs that constitute an agent’s cognitive profile remains a significant challenge. This process often [...] Read more.
Recent advancements in AI have enabled autonomous agents to interact within complex environments, with deliberative BDI (Belief–Desire–Intention) agents standing out for their human-inspired reasoning capabilities. However, defining the initial beliefs that constitute an agent’s cognitive profile remains a significant challenge. This process often relies on manual approaches that limit scalability and validation. This study proposes the Personality–Role–Belief (P–R–B) Model for BDI agents, introducing a novel architecture for generating cognitive profiles applicable to domains such as social simulation and non-player characters (NPCs). The model translates Five-Factor Model (FFM) scores into specific social roles, assigning base beliefs to each. A key contribution is a weighting mechanism designed to resolve conflicts between beliefs when multiple roles coexist. Inspired by Cohen’s effect size conventions, this mechanism establishes an influence hierarchy that quantifies belief strength based on social roles. Consequently, this approach not only enables agents to exhibit coherent behavior consistent with their personality but also establishes a foundation for modeling ethical decision-making through role–trait alignment, thereby facilitating the creation of agents capable of navigating morally complex social contexts. Full article
(This article belongs to the Special Issue Numerical and Evolutionary Optimization 2025)
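A minimal sketch of the P–R–B flow described above, with entirely hypothetical roles, beliefs, and trait thresholds; only the Cohen-style weights (0.2 / 0.5 / 0.8 for small / medium / large influence) come from the abstract.

```python
# Illustrative P-R-B sketch (role names, beliefs, and thresholds are
# hypothetical, not the paper's): FFM trait scores select roles, each role
# contributes base beliefs, and conflicts between coexisting roles are
# resolved by Cohen-style effect-size weights.
COHEN = {"small": 0.2, "medium": 0.5, "large": 0.8}

ROLE_BELIEFS = {  # role -> (belief, influence) pairs, illustrative
    "leader":    [("group_goals_first", "large"), ("risk_is_acceptable", "medium")],
    "caretaker": [("others_wellbeing_first", "large"), ("risk_is_acceptable", "small")],
}

def roles_from_traits(traits):
    """Map FFM scores in [0, 1] to roles via illustrative thresholds."""
    roles = []
    if traits.get("extraversion", 0.0) > 0.6:
        roles.append("leader")
    if traits.get("agreeableness", 0.0) > 0.6:
        roles.append("caretaker")
    return roles

def belief_profile(traits):
    beliefs = {}
    for role in roles_from_traits(traits):
        for belief, size in ROLE_BELIEFS[role]:
            w = COHEN[size]
            beliefs[belief] = max(beliefs.get(belief, 0.0), w)  # strongest role wins
    return beliefs

profile = belief_profile({"extraversion": 0.8, "agreeableness": 0.7})
print(profile)
```

When both roles are active, the conflicting `risk_is_acceptable` belief resolves to the stronger (medium, 0.5) influence, which is the kind of hierarchy the weighting mechanism establishes.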

24 pages, 8953 KB  
Article
Face Recognition System Using CLIP and FAISS for Scalable and Real-Time Identification
by Antonio Labinjan, Sandi Baressi Šegota, Ivan Lorencin and Nikola Tanković
Math. Comput. Appl. 2026, 31(2), 36; https://doi.org/10.3390/mca31020036 - 1 Mar 2026
Viewed by 669
Abstract
Face recognition is increasingly being adopted in industries such as education, security, and personalized services. This research introduces a face recognition system that leverages the embedding capabilities of the CLIP model. The model is trained on multimodal data, such as images and text [...] Read more.
Face recognition is increasingly being adopted in industries such as education, security, and personalized services. This research introduces a face recognition system that leverages the embedding capabilities of the CLIP model. The model is trained on multimodal data, such as images and text, and it generates high-dimensional features, which are then stored in a vector index for further queries. The system is designed to facilitate accurate real-time identification, with potential applications in areas such as attendance tracking and security screening. Specific use cases include event check-ins, implementation of advanced security systems, and more. The process involves encoding known faces into high-dimensional vectors, indexing them using a FAISS vector index, and comparing them to unknown images based on L2 (Euclidean) distance. Experimental results demonstrate accuracy exceeding 90% and show efficient scaling and strong performance even on datasets with a high volume of entries. Notably, the system exhibits superior computational efficiency compared to traditional deep convolutional neural networks (CNNs), significantly reducing CPU load and memory consumption while maintaining competitive inference speeds. In the first iteration of experiments, the system achieved over 90% accuracy on live video feeds where each identity had a single reference video for both training and validation; however, when tested on a more challenging dataset with many low-quality classes, accuracy dropped to approximately 73%, highlighting the impact of dataset quality and variability on performance. Full article
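The identification step reduces to an exact nearest-neighbour search over embedding vectors. The numpy sketch below mirrors what `faiss.IndexFlatL2` computes; random unit vectors stand in for CLIP face embeddings (512-d, as in CLIP ViT-B/32).

```python
# Nearest-neighbour identification over a gallery of embeddings: a numpy
# stand-in for faiss.IndexFlatL2 (exact squared-L2 search). Random unit
# vectors stand in for CLIP face embeddings.
import numpy as np

rng = np.random.default_rng(0)
dim, n_known = 512, 100
gallery = rng.normal(size=(n_known, dim))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)  # normalised gallery

query = gallery[42] + 0.05 * rng.normal(size=dim)  # noisy view of identity 42

d2 = np.sum((gallery - query) ** 2, axis=1)  # squared L2, as IndexFlatL2 returns
best = int(np.argmin(d2))
print(f"predicted identity: {best}, squared distance = {d2[best]:.4f}")
```

In high dimensions, unrelated embeddings are nearly orthogonal, so even a fairly noisy view of a known identity sits much closer to its gallery entry than to any other, which is why a flat L2 index works well here.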

18 pages, 2882 KB  
Article
Fault Detection and Identification of Wind Turbines via Causal Spatio-Temporal Features and Variable-Level Normalized Flow
by Xiheng Gao, Weimin Li and Hongxiu Zhu
Math. Comput. Appl. 2026, 31(2), 35; https://doi.org/10.3390/mca31020035 - 1 Mar 2026
Viewed by 373
Abstract
Anomaly identification and fault localization of wind turbines through Supervisory Control and Data Acquisition (SCADA) data is a popular topic today, but most studies overlook the complex time-space interdependence between wind turbine (WT) SCADA variables, which results in low detection accuracy for anomalies [...] Read more.
Anomaly identification and fault localization of wind turbines through Supervisory Control and Data Acquisition (SCADA) data is a popular topic today, but most studies overlook the complex time-space interdependence between wind turbine (WT) SCADA variables, which results in low detection accuracy for anomalies in critical moving components of the wind turbine. To address this problem, this paper proposes a fault detection and identification method based on a dynamic graph model with a causal spatio-temporal attention mechanism and variable-level normalized flow. First, it introduces a spatio-temporal attention mechanism under causality to extract spatio-temporal features of the variables and uses a graph convolutional neural network to represent the extracted spatio-temporal features as a dynamic graph. Secondly, a dynamic normalization flow is suggested for calculating the logarithmic density estimation between variables. Finally, the anomaly scores are calculated through logarithmic density estimation. Based on these scores, anomalies are detected and localized. Experimental validation on real SCADA data from wind turbines demonstrates that the method can effectively identify abnormal operating states and provide early warnings, achieving higher accuracy and greater stability. Full article

43 pages, 1864 KB  
Article
An Adaptive Grouping Genetic Algorithm with Controlled Gene Transmission Based on Fullness and Item Strategies (AGGA-CGT-FIS)
by Stephanie Amador-Larrea, Marcela Quiroz-Castellanos, Octavio Ramos-Figueroa and Alejandro Guerra-Hernández
Math. Comput. Appl. 2026, 31(2), 34; https://doi.org/10.3390/mca31020034 - 1 Mar 2026
Viewed by 473
Abstract
The one-dimensional Bin Packing Problem (1D-BPP) is a well-known NP-hard grouping problem characterized by high structural complexity and broad practical relevance. Among the metaheuristic approaches proposed for this problem, the Grouping Genetic Algorithm with Controlled Gene Transmission (GGA-CGT) has shown remarkable performance. In [...] Read more.
The one-dimensional Bin Packing Problem (1D-BPP) is a well-known NP-hard grouping problem characterized by high structural complexity and broad practical relevance. Among the metaheuristic approaches proposed for this problem, the Grouping Genetic Algorithm with Controlled Gene Transmission (GGA-CGT) has shown remarkable performance. In this work, an Adaptive Grouping Genetic Algorithm with Controlled Gene Transmission based on Fullness and Item Strategies (AGGA-CGT-FIS) is presented. This approach extends the original GGA-CGT by integrating domain-guided crossover mechanisms and adaptive parameter control schemes. The proposed algorithm incorporates a novel gene-level crossover operator, termed Fullness–Items Gene-Level Crossover 1 (FI-GLX-1). This operator exploits structural information from the solutions through Fullness- and Item-based ordering and transmission strategies. In addition, adaptive control schemes are introduced for key evolutionary parameters associated with crossover and mutation. These mechanisms allow the algorithm to dynamically adjust its behavior according to feedback extracted from the search process, resulting in a fully adaptive variant of the GGA-CGT. The effectiveness of AGGA-CGT-FIS is evaluated using two benchmark sets for the 1D-BPP: the classic and the BPPvu_c instances. The proposed approach is compared against the baseline GGA-CGT using the original Gene-Level Crossover (GLX) operator. Experimental results show improvements in solution quality and convergence behavior, supported by statistical analyses that confirm the significance of the observed performance differences. Full article
(This article belongs to the Special Issue Numerical and Evolutionary Optimization 2025)
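For context, the classical First-Fit Decreasing heuristic below is the kind of constructive procedure often used to seed grouping GAs for the 1D-BPP; it is the textbook baseline, not the paper's AGGA-CGT-FIS algorithm.

```python
# First-Fit Decreasing (FFD) for the one-dimensional Bin Packing Problem:
# sort items by non-increasing size, then place each into the first bin
# with enough residual capacity, opening a new bin when none fits.
def first_fit_decreasing(items, capacity):
    bins = []  # each bin is a list of item sizes
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])
    return bins

items = [7, 5, 4, 4, 3, 2, 2, 1]
packing = first_fit_decreasing(items, capacity=10)
print(len(packing), packing)  # total size 28, so at least 3 bins are needed
```

On this instance FFD reaches the lower bound of ⌈28/10⌉ = 3 bins; grouping GAs like GGA-CGT improve on such heuristic packings by recombining whole bins (groups) rather than individual items.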

26 pages, 2942 KB  
Article
Real-Time Adaptive Linear Quadratic Regulator Control for the QUBE–2 Rotary Inverted Pendulum
by Cynthia Lopez-Jordan and Mohammad Jafari
Math. Comput. Appl. 2026, 31(2), 33; https://doi.org/10.3390/mca31020033 - 27 Feb 2026
Viewed by 473
Abstract
This paper presents a real-time adaptive Linear Quadratic Regulator (LQR) control strategy for the rotary inverted pendulum. The state weighting matrix of the LQR cost function is continuously adapted online based on real-time tracking error, state dynamics, and sliding-mode-inspired robustness measures. Unlike conventional [...] Read more.
This paper presents a real-time adaptive Linear Quadratic Regulator (LQR) control strategy for the rotary inverted pendulum. The state weighting matrix of the LQR cost function is continuously adapted online based on real-time tracking error, state dynamics, and sliding-mode-inspired robustness measures. Unlike conventional LQR controllers with fixed weighting matrices or hybrid schemes that apply sliding mode control directly to the control input, the proposed approach modulates the LQR cost function itself, enabling dynamic reshaping of controller behavior while preserving smooth control action. The real-time adaptive controller is implemented using a continuous-time Riccati differential equation solved online, making the method suitable for real-time deployment. Experimental validation is conducted on two Quanser QUBE-Servo 2 rotary inverted pendulum platforms under square, sinusoidal, and sawtooth reference trajectories. Performance is compared against a fixed-gain LQR controller using multiple quantitative metrics, including tracking error and control effort. Experimental results demonstrate substantial improvements in tracking accuracy, with reductions exceeding 70–90% in error metrics, while simultaneously achieving over 94% reduction in control effort. These findings verify that adaptive cost shaping provides an effective and practical mechanism for enhancing LQR performance in underactuated experimental systems. Full article
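The fixed-gain baseline the paper compares against can be sketched with SciPy's continuous-time algebraic Riccati solver, the steady-state counterpart of the Riccati differential equation solved online in the paper. The plant matrices below are an illustrative unstable second-order system, not Quanser's QUBE-Servo 2 model.

```python
# Fixed-gain LQR for a generic unstable second-order plant (illustrative
# stand-in for the linearised QUBE-Servo 2 dynamics). The adaptive scheme
# in the paper would re-solve this Riccati equation as Q is adapted online.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [2.0, -0.5]])  # illustrative unstable plant
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])                 # state weights (adapted online in the paper)
R = np.array([[1.0]])                    # control-effort weight

P = solve_continuous_are(A, B, Q, R)     # A'P + PA - PBR^{-1}B'P + Q = 0
K = np.linalg.solve(R, B.T @ P)          # optimal gain K = R^{-1} B' P
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
print("K =", K, "closed-loop eigenvalues:", closed_loop_eigs)
```

With (A, B) controllable and Q, R positive definite, the closed loop A − BK is guaranteed stable; reshaping Q online, as the paper proposes, shifts how aggressively each state error is penalised without changing this structure.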
