Journal Description
Mathematical and Computational Applications
Mathematical and Computational Applications is an international, peer-reviewed, open access journal on applications of mathematical and/or computational techniques, published bimonthly online by MDPI. The South African Association for Theoretical and Applied Mechanics (SAAM) is affiliated with the journal, and its members receive discounts on the article processing charges.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), Inspec, and other databases.
- Journal Rank: JCR - Q2 (Mathematics, Interdisciplinary Applications)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 25.3 days after submission; accepted papers are published within 2.9 days (median values for papers published in this journal in the first half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
- Testimonials: See what our editors and authors say about MCA.
Impact Factor: 2.1 (2024); 5-Year Impact Factor: 1.6 (2024)
Latest Articles
Measuring Circular Economy with Data Envelopment Analysis: A Systematic Literature Review
Math. Comput. Appl. 2025, 30(5), 102; https://doi.org/10.3390/mca30050102 - 17 Sep 2025
Abstract
This article presents a systematic literature review of data envelopment analysis (DEA) models used to evaluate circular economy (CE) practices. The review is based on 151 peer-reviewed articles published between 2015 and 2024. By analyzing this collection, this review categorizes different DEA models and their levels of application, discusses the data sources utilized, and identifies the prevailing methodologies and evaluation criteria used to measure the CE performance. Despite the extensive literature on measuring the circular economy using DEA, a critical evaluation of existing DEA approaches that highlights their strengths and weaknesses is still missing. Our analysis shows that DEA models provide valuable insights when assessing circular strategies, namely, R2—Reduce, R8—Recycling, and R9—Recovering. Over 40% of the surveyed literature focuses on China, with nearly 20% on the European Union. Other regions are sparsely represented within our sample, highlighting a potential gap in the current research landscape.
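As a toy illustration of the DEA idea (not any specific model from the review), the single-input, single-output case reduces the classic CCR efficiency score to a ratio comparison; all data below are hypothetical:

```python
# Minimal DEA-style efficiency sketch: with one input and one output, the CCR
# model reduces to each unit's output/input ratio normalized by the best ratio.
def dea_efficiency(inputs, outputs):
    """Efficiency score of each decision-making unit (DMU), in (0, 1]."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Four hypothetical DMUs (e.g., recycling facilities)
inputs = [10.0, 20.0, 30.0, 40.0]    # resource consumed
outputs = [8.0, 20.0, 21.0, 24.0]    # material recovered
scores = dea_efficiency(inputs, outputs)
```

The efficient unit scores 1.0; the rest are measured against it.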
Full article
(This article belongs to the Special Issue Feature Papers in Mathematical and Computational Applications 2025)
Open Access Article
HGREncoder: Enhancing Real-Time Hand Gesture Recognition with Transformer Encoder—A Comparative Study
by
Luis Gabriel Macías, Jonathan A. Zea, Lorena Isabel Barona, Ángel Leonardo Valdivieso and Marco E. Benalcázar
Math. Comput. Appl. 2025, 30(5), 101; https://doi.org/10.3390/mca30050101 - 16 Sep 2025
Abstract
In the field of Hand Gesture Recognition (HGR), Electromyography (EMG) is used to detect the electrical impulses that muscles emit when a movement is generated. Currently, there are several HGR models that use EMG to predict hand gestures. However, most of these models have limited performance in real-time applications, with the highest recognition rate achieved being 65.78 ± 15.15%, without post-processing steps. Other non-generalizable models, i.e., those trained with a small number of users, achieved a window-based classification accuracy of 93.84%, but not in real-time applications. Therefore, this study addresses these issues by employing transformers to create a generalizable model and enhance recognition accuracy in real-time applications. The architecture of our model is composed of a Convolutional Neural Network (CNN), a positional encoding layer, and the transformer encoder. To obtain a generalizable model, the EMG-EPN-612 dataset was used. This dataset contains records of 612 individuals. Several experiments were conducted with different architectures, and our best results were compared with other previous research that used CNN, LSTM, and transformers. The findings of this research reached a classification accuracy of 95.25 ± 4.9% and a recognition accuracy of 89.7 ± 8.77%. This recognition accuracy is a significant contribution because it encompasses the entire sequence without post-processing steps.
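The positional encoding layer mentioned in the abstract is, in the standard transformer formulation, a fixed sinusoidal table added to the sequence features; a minimal sketch of that standard table (the paper's exact layer may differ):

```python
import math

def positional_encoding(seq_len, d_model):
    """Standard sinusoidal positional encoding, as a seq_len x d_model list of lists."""
    pe = [[0.0] * d_model for _ in range(seq_len)]
    for pos in range(seq_len):
        for i in range(0, d_model, 2):
            angle = pos / (10000 ** (i / d_model))   # frequency decreases with dimension
            pe[pos][i] = math.sin(angle)
            if i + 1 < d_model:
                pe[pos][i + 1] = math.cos(angle)
    return pe

pe = positional_encoding(seq_len=50, d_model=8)
```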
Full article
(This article belongs to the Special Issue New Trends in Computational Intelligence and Applications 2025)
Open Access Article
Dynamical Analysis and Solitary Wave Solutions of the Zhanbota-IIA Equation with Computational Approach
by
Beenish, Maria Samreen and Manuel De la Sen
Math. Comput. Appl. 2025, 30(5), 100; https://doi.org/10.3390/mca30050100 - 15 Sep 2025
Abstract
This study conducts an in-depth analysis of the dynamical characteristics and solitary wave solutions of the integrable Zhanbota-IIA equation through the lens of planar dynamic system theory. This research applies Lie symmetry to convert nonlinear partial differential equations into ordinary differential equations, enabling the investigation of bifurcation, phase portraits, and dynamic behaviors within the framework of chaos theory. A variety of analytical instruments, such as chaotic attractors, return maps, recurrence plots, Lyapunov exponents, Poincaré maps, three-dimensional phase portraits, time analysis, and two-dimensional phase portraits, are utilized to scrutinize both perturbed and unperturbed systems. Furthermore, the study examines the power frequency response and the system’s sensitivity to temporal delays. A novel classification framework, predicated on Lyapunov exponents, systematically categorizes the system’s behavior across a spectrum of parameters and initial conditions, thereby elucidating aspects of multistability and sensitivity. The perturbed system exhibits chaotic and quasi-periodic dynamics. The research employs the maximum Lyapunov exponent portrait as a tool for assessing system stability and derives solitary wave solutions accompanied by illustrative visualization diagrams. The methodology presented herein possesses significant implications for applications in optical fibers and various other engineering disciplines.
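The largest Lyapunov exponent used above for the stability classification can be illustrated on a textbook system rather than the Zhanbota-IIA equation itself; for the logistic map at r = 4 the exponent is known to be ln 2:

```python
import math

def largest_lyapunov_logistic(r, x0, n=100000, burn=1000):
    """Average log|f'(x)| along the orbit of the logistic map f(x) = r*x*(1-x)."""
    x = x0
    for _ in range(burn):                  # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)))   # |f'(x)| = |r(1 - 2x)|
        x = r * x * (1 - x)
    return total / n

lam = largest_lyapunov_logistic(r=4.0, x0=0.3)   # known value: ln 2 ≈ 0.693
```

A positive exponent signals chaos, the criterion behind the classification framework described in the abstract.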
Full article
(This article belongs to the Section Natural Sciences)
Open Access Article
Distributed PD Average Consensus of Lipschitz Nonlinear MASs in the Presence of Mixed Delays
by
Tuo Zhou
Math. Comput. Appl. 2025, 30(5), 99; https://doi.org/10.3390/mca30050099 - 11 Sep 2025
Abstract
In this work, the distributed average consensus for dynamical networks with Lipschitz nonlinear dynamics is studied, where the network communication switches quickly among a set of directed and balanced switching graphs. Differing from existing research concerning uniform constant delay or time-varying delays, this study focuses on consensus problems with mixed delays, equipped with one class of delays embedded within the nonlinear dynamics and another class of delays present in the control input. In order to solve these problems, a proportional and derivative control strategy with time delays is proposed. In this way, by using Lyapunov theory, the stability is analytically established and the conditions required for solving the consensus problems are rigorously derived over switching digraphs. Finally, the effectiveness of the designed algorithm is tested using the MATLAB R2021a platform.
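Stripped of the Lipschitz nonlinearity, the mixed delays, and the PD terms, average consensus reduces to a simple iteration over a balanced graph; a minimal sketch with a hypothetical four-node network:

```python
def consensus_step(x, edges, eps):
    """One step of x_i <- x_i + eps * sum over in-neighbors j of (x_j - x_i)."""
    new = list(x)
    for j, i in edges:          # directed edge j -> i
        new[i] += eps * (x[j] - x[i])
    return new

# Directed four-node cycle plus its reverse: balanced, so the state sum
# (and hence the average) is preserved at every step.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 0), (2, 1), (3, 2), (0, 3)]
x = [4.0, 0.0, 2.0, 6.0]
avg = sum(x) / len(x)
for _ in range(200):
    x = consensus_step(x, edges, eps=0.2)
```

All states converge to the initial average, the defining property of average consensus.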
Full article
(This article belongs to the Topic Fractional Calculus, Symmetry Phenomenon and Probability Theory for PDEs, and ODEs)
Open Access Article
The Extended Goodwin Model and Wage–Labor Paradoxes Metric in South Africa
by
Tichaona Chikore, Miglas Tumelo Makobe and Farai Nyabadza
Math. Comput. Appl. 2025, 30(5), 98; https://doi.org/10.3390/mca30050098 - 10 Sep 2025
Abstract
This study extends the post-Keynesian framework for cyclical economic growth, initially proposed by Goodwin in 1967, by integrating government intervention to stabilize employment amidst wage mismatches. Given the pressing challenges of unemployment and wage disparity in developing economies, particularly South Africa, this extension is necessary to better understand how policy interventions can influence labor market dynamics. Central to the study is the endogenous interaction between capital and labor, where class dynamics influence economic growth patterns. The research focuses on the competitive relationship between real wage growth and its effects on employment. Methodologically, the study measures the impact of intervention strategies using a system of coupled ordinary differential equations (Lotka–Volterra type), along with econometric techniques such as quantile regression, moving averages, and mean absolute error to measure wage mismatch. Results demonstrate the inherent contradictions of capitalism under intervention, confirming established theoretical perspectives. This work further contributes to economic optimality discussions, especially regarding the timing and persistence of economic cycles. The model provides a quantifiable approach for formulating intervention strategies to achieve long-term economic equilibrium and sustainable labor–capital coexistence.
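The Lotka–Volterra-type coupling behind the Goodwin model can be sketched with a forward-Euler integration; the parameter values and initial state here are hypothetical, chosen only to show the cyclical orbit around the equilibrium:

```python
def goodwin_step(u, v, dt, a=0.5, b=0.5, c=0.5, d=0.5):
    """One Euler step of du/dt = u(a - b*v), dv/dt = v(-c + d*u), where u and v
    play the roles of the employment rate and the wage share."""
    du = u * (a - b * v)
    dv = v * (-c + d * u)
    return u + dt * du, v + dt * dv

u, v = 0.9, 0.8                    # hypothetical initial employment rate / wage share
trajectory = [(u, v)]
for _ in range(2000):              # integrate over more than one full cycle
    u, v = goodwin_step(u, v, dt=0.01)
    trajectory.append((u, v))
```

The orbit circles the equilibrium (u, v) = (c/d, a/b), reproducing the model's endogenous growth cycles.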
Full article
(This article belongs to the Section Natural Sciences)
Open Access Article
Public Security Patrol and Alert Recognition for Police Patrol Robots Based on Improved YOLOv8 Algorithm
by
Yuehan Shi, Xiaoming Zhang, Qilei Wang and Xiaojun Liu
Math. Comput. Appl. 2025, 30(5), 97; https://doi.org/10.3390/mca30050097 - 10 Sep 2025
Abstract
Addressing the prevalent challenges of inadequate detection accuracy and sluggish detection speed encountered by police patrol robots during security patrols, we propose an innovative algorithm based on the YOLOv8 model. Our method consists of substituting the backbone network of YOLOv8 with FasterNet. As a result, the model's recognition accuracy is enhanced and its computational performance is improved. Additionally, the extraction of geographical data becomes more efficient. In addition, we introduce the BiFormer attention mechanism, incorporating dynamic sparse attention to significantly improve algorithm performance and computational efficiency. Furthermore, to bolster the regression performance of bounding boxes and enhance detection robustness, we utilize Wise-IoU as the loss function. Through experimentation across three perilous police scenarios—fighting, knife threats, and gun incidents—we demonstrate the efficacy of our proposed algorithm. The results indicate notable improvements over the original model, with enhancements of 2.42% and 5.83% in detection accuracy and speed for behavioral recognition of fighting, 2.87% and 4.67% for knife threat detection, and 3.01% and 4.91% for gun-related situation detection, respectively.
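Wise-IoU builds on the standard Intersection-over-Union measure between predicted and ground-truth boxes; a sketch of plain IoU (not the Wise-IoU weighting itself), with hypothetical boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)   # zero if boxes are disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

score = iou((0, 0, 2, 2), (1, 1, 3, 3))   # intersection 1, union 7
```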
Full article
(This article belongs to the Section Engineering)
Open Access Article
Multi-Objective Optimization in Virtual Power Plants for Day-Ahead Market Considering Flexibility
by
Mohammad Hosein Salehi, Mohammad Reza Moradian, Ghazanfar Shahgholian and Majid Moazzami
Math. Comput. Appl. 2025, 30(5), 96; https://doi.org/10.3390/mca30050096 - 5 Sep 2025
Abstract
This research proposes a novel multi-objective optimization framework for virtual power plants (VPPs) operating in day-ahead electricity markets. The VPP integrates diverse distributed energy resources (DERs) such as wind turbines, solar photovoltaics (PV), fuel cells (FCs), combined heat and power (CHP) systems, and microturbines (MTs), along with demand response (DR) programs and energy storage systems (ESSs). The trading model is designed to optimize the VPP’s participation in the day-ahead market by aggregating these resources to function as a single entity, thereby improving market efficiency and resource utilization. The optimization framework simultaneously minimizes operational costs, maximizes system flexibility, and enhances reliability, addressing challenges posed by renewable energy integration and market uncertainties. A new flexibility index is introduced, incorporating both the technical and economic factors of individual units within the VPP, offering a comprehensive measure of system adaptability. The model is validated on IEEE 24-bus and 118-bus systems using evolutionary algorithms, achieving significant improvements in flexibility (20% increase), cost reduction (15%), and reliability (a 30% reduction in unsupplied energy). This study advances the development of efficient and resilient power systems amid growing renewable energy penetration.
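Simultaneously minimizing cost while maximizing flexibility is a multi-objective trade-off, typically resolved by keeping only non-dominated solutions; a generic Pareto-filter sketch with hypothetical (cost, flexibility) pairs, not the authors' evolutionary algorithm:

```python
def pareto_front(points):
    """Keep (cost, flexibility) pairs not dominated by any other point:
    cost is minimized, flexibility is maximized."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] >= p[1] and q != p for q in points)]

# Hypothetical candidate VPP schedules: (operating cost, flexibility index)
candidates = [(10, 0.8), (12, 0.9), (11, 0.7), (9, 0.6)]
front = pareto_front(candidates)
```

The dominated schedule (11, 0.7) is discarded: (10, 0.8) is both cheaper and more flexible.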
Full article
(This article belongs to the Section Engineering)
Open Access Article
A Comparison of the Robust Zero-Inflated and Hurdle Models with an Application to Maternal Mortality
by
Phelo Pitsha, Raymond T. Chiruka and Chioneso S. Marange
Math. Comput. Appl. 2025, 30(5), 95; https://doi.org/10.3390/mca30050095 - 2 Sep 2025
Abstract
This study evaluates the performance of count regression models in the presence of zero inflation, outliers, and overdispersion using both simulated data and a real-world maternal mortality dataset. Traditional Poisson and negative binomial regression models often struggle to account for the complexities introduced by excess zeros and outliers. To address these limitations, this study compares the performance of robust zero-inflated (RZI) and robust hurdle (RH) models against conventional models using the Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC) to determine the best-fitting model. Results indicate that the robust zero-inflated Poisson (RZIP) model performs best overall. The simulation study considers various scenarios, including different levels of zero inflation (50%, 70%, and 80%), outlier proportions (0%, 5%, 10%, and 15%), dispersion values (1, 3, and 5), and sample sizes (50, 200, and 500). Based on AIC comparisons, the robust zero-inflated Poisson (RZIP) and robust hurdle Poisson (RHP) models demonstrate superior performance when outliers are absent or limited to 5%, particularly when dispersion is low (5). However, as outlier levels and dispersion increase, the robust zero-inflated negative binomial (RZINB) and robust hurdle negative binomial (RHNB) models outperform robust zero-inflated Poisson (RZIP) and robust hurdle Poisson (RHP) across all levels of zero inflation and sample sizes considered in the study.
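The zero-inflated Poisson underlying the RZIP model mixes a point mass at zero with an ordinary Poisson count; a sketch of its probability mass function (the robust estimation itself is beyond this snippet):

```python
import math

def zip_pmf(k, lam, pi):
    """P(X = k) for a zero-inflated Poisson: an extra zero with probability pi,
    otherwise a Poisson(lam) draw."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    return pi + (1 - pi) * poisson if k == 0 else (1 - pi) * poisson

p0 = zip_pmf(0, lam=2.0, pi=0.5)   # inflated zero probability
```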
Full article
Open Access Article
Physics-Informed Machine Learning for Mechanical Performance Prediction of ECC-Strengthened Reinforced Concrete Beams: An Empirical-Guided Framework
by
Jinshan Yu, Yongchao Li, Haifeng Yang and Yongquan Zhang
Math. Comput. Appl. 2025, 30(5), 94; https://doi.org/10.3390/mca30050094 - 1 Sep 2025
Abstract
Predicting the mechanical performance of Engineered Cementitious Composite (ECC)-strengthened reinforced concrete (RC) beams is both meaningful and challenging. Although existing methods each have their advantages, traditional numerical simulations struggle to capture the complex micro-mechanical behavior of ECC, experimental approaches are costly, and data-driven methods heavily depend on large, high-quality datasets. This study proposes a novel physics-informed machine learning framework that integrates domain-specific empirical knowledge and physical laws into a neural network architecture to enhance predictive accuracy and interpretability. The approach leverages outputs from physics-based simulations and experimental insights as weak supervision and incorporates physically consistent loss terms into the training process to guide the model toward scientifically valid solutions, even for unlabeled or sparse data regimes. While the proposed physics-informed model yields slightly lower accuracy than purely data-driven models (mean squared errors of 0.101 vs. 0.091 on the test set), it demonstrates superior physical consistency and significantly better generalization. This trade-off ensures more robust and scientifically reliable predictions, especially under limited data conditions. The results indicate that the empirical-guided framework is a practical and reliable tool for evaluating the structural performance of ECC-strengthened RC beams, supporting their design, retrofitting, and safety assessment.
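Physically consistent loss terms can be sketched generically as a data-fit term plus a weighted physics-residual penalty; the numbers and the weight below are hypothetical and do not reproduce the authors' loss:

```python
def combined_loss(pred, target, physics_residual, weight=0.5):
    """Physics-informed training loss: MSE against labels plus a weighted
    penalty on the residual of the governing physical constraint."""
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    phys = sum(r ** 2 for r in physics_residual) / len(physics_residual)
    return mse + weight * phys

loss = combined_loss(pred=[1.0, 2.0], target=[1.5, 2.0],
                     physics_residual=[0.2, -0.2])
```

The physics term penalizes outputs that violate the constraint even where no label exists, which is how such models remain usable in sparse-data regimes.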
Full article
Open Access Article
Regression Modeling for Cure Factors on Uterine Cancer Data Using the Reparametrized Defective Generalized Gompertz Distribution
by
Dionisio Silva-Neto, Francisco Louzada-Neto and Vera Lucia Tomazella
Math. Comput. Appl. 2025, 30(5), 93; https://doi.org/10.3390/mca30050093 - 31 Aug 2025
Abstract
Recent advances in medical research have improved survival outcomes for patients with life-threatening diseases. As a result, the existence of long-term survivors from these illnesses is becoming common. However, conventional models in survival analysis assume that all individuals remain at risk of death after the follow-up, disregarding the presence of a cured subpopulation. An important methodological advancement in this context is the use of defective distributions. In the defective models, the survival function converges to a constant value as a function of the parameters. Among these models, the defective generalized Gompertz distribution (DGGD) has emerged as a flexible approach. In this work, we introduce a reparametrized version of the DGGD that incorporates the cure parameter and accommodates covariate effects to assess individual-level factors associated with long-term survival. A Bayesian model is presented, with parameter estimation via the Hamiltonian Monte Carlo algorithm. A simulation study demonstrates good asymptotic results of the estimation process under vague prior information. The proposed methodology is applied to a real-world dataset of patients with uterine cancer. Our results reveal statistically significant protective effects of surgical intervention, alongside elevated risk associated with age over 50 years, diagnosis at the metastatic stage, and treatment with chemotherapy.
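The defective mechanism can be seen in the plain Gompertz survival function: with a negative shape parameter it converges to a positive cure fraction instead of zero; the parameters below are hypothetical:

```python
import math

def gompertz_survival(t, a, b):
    """Gompertz survival S(t) = exp((b/a) * (1 - exp(a*t))); the distribution
    is defective (improper) when a < 0."""
    return math.exp((b / a) * (1 - math.exp(a * t)))

a, b = -0.5, 0.3                       # a < 0: defective case
cure_fraction = math.exp(b / a)        # limit of S(t) as t -> infinity
s_large_t = gompertz_survival(1000.0, a, b)
```

The survival curve plateaus at exp(b/a), which is interpreted as the proportion of long-term survivors.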
Full article
(This article belongs to the Special Issue Statistical Inference in Linear Models, 2nd Edition)
Open Access Feature Paper Article
Application of the Kurganov–Tadmor Scheme in Curvilinear Coordinates for Supersonic Flow
by
Sebastián Bertolo, Sergio Elaskar and Luis Gutiérrez Marcantoni
Math. Comput. Appl. 2025, 30(5), 92; https://doi.org/10.3390/mca30050092 - 23 Aug 2025
Abstract
In this study, we developed a second-order Kurganov–Tadmor scheme in curvilinear coordinates to analyze the external supersonic flow over bodies of various shapes. This scheme is capable of handling interfaces across different regions of the domain. We utilized a fourth-order Runge–Kutta temporal integrator and conducted several test cases to validate the performance of the new scheme. The results from the analyzed tests indicate that the new method produces highly accurate outcomes.
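The Kurganov–Tadmor family generalizes first-order central schemes of local Lax–Friedrichs type to second order; a first-order sketch for linear advection on a periodic hypothetical grid, showing the conservation property such schemes are built on:

```python
def central_step(u, dt, dx, a=1.0):
    """One first-order central (local Lax-Friedrichs) step for u_t + a*u_x = 0
    on a periodic grid; the KT scheme is the second-order refinement."""
    n = len(u)
    def flux(ul, ur):
        # central flux with local-speed dissipation
        return 0.5 * (a * ul + a * ur) - 0.5 * abs(a) * (ur - ul)
    f = [flux(u[i], u[(i + 1) % n]) for i in range(n)]
    return [u[i] - dt / dx * (f[i] - f[i - 1]) for i in range(n)]

u = [1.0 if 4 <= i < 8 else 0.0 for i in range(16)]   # square pulse
mass0 = sum(u)
for _ in range(40):
    u = central_step(u, dt=0.5, dx=1.0)               # CFL number 0.5
mass = sum(u)
```

The flux-difference form makes total mass exactly conserved, and the monotone flux keeps the solution within its initial bounds.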
Full article
(This article belongs to the Special Issue Feature Papers in Mathematical and Computational Applications 2025)
Open Access Article
A Novel Hybrid Attention-Based RoBERTa-BiLSTM Model for Cyberbullying Detection
by
Mohammed A. Mahdi, Suliman Mohamed Fati, Mohammed Gamal Ragab, Mohamed A. G. Hazber, Shahanawaj Ahamad, Sawsan A. Saad and Mohammed Al-Shalabi
Math. Comput. Appl. 2025, 30(4), 91; https://doi.org/10.3390/mca30040091 - 21 Aug 2025
Abstract
The escalating scale and psychological harm of cyberbullying across digital platforms present a critical social challenge, demanding the urgent development of highly accurate and reliable automated detection systems. Standard fine-tuned transformer models, while powerful, often fall short in capturing the nuanced, context-dependent nature of online harassment. This paper introduces a novel hybrid deep learning model called Robustly Optimized Bidirectional Encoder Representations from the Transformers with the Bidirectional Long Short-Term Memory-based Attention model (RoBERTa-BiLSTM), specifically designed to address this challenge. To maximize its effectiveness, the model was systematically optimized using the Optuna framework and rigorously benchmarked against eight state-of-the-art transformer baseline models on a large cyberbullying dataset. Our proposed model achieves state-of-the-art performance, outperforming BERT-base, RoBERTa-base, RoBERTa-large, DistilBERT, ALBERT-xxlarge, XLNet-large, ELECTRA-base, and DeBERTa-v3-small with an accuracy of 94.8%, precision of 96.4%, recall of 95.3%, F1-score of 95.8%, and an AUC of 98.5%. Significantly, it demonstrates a substantial improvement in F1-score over the strongest baseline and reduces critical false negative errors by 43%. Our efficiency analysis further indicates that this performance is achieved at only moderate computational cost. The results validate our hypothesis that a specialized hybrid architecture, which synergizes contextual embedding with sequential processing and attention mechanism, offers a more robust and practical solution for real-world social media applications.
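The attention mechanism at the core of such hybrid models is, in its standard scaled dot-product form, a softmax-weighted average of value vectors; a minimal single-query sketch (not the RoBERTa-BiLSTM architecture itself):

```python
import math

def softmax(xs):
    m = max(xs)                         # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(q, keys, values):
    """Scaled dot-product attention for one query over key/value vectors."""
    d = len(q)
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
    weights = softmax(scores)
    out = [sum(w * v[i] for w, v in zip(weights, values))
           for i in range(len(values[0]))]
    return out, weights

out, weights = attention(q=[1.0, 0.0],
                         keys=[[1.0, 0.0], [0.0, 1.0]],
                         values=[[1.0], [0.0]])
```

The query attends most to the key it aligns with, so the output leans toward that key's value.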
Full article
(This article belongs to the Special Issue Innovative Deep Transfer Learning Techniques and Their Use in Real-World Applications)
Open Access Article
Last-Mile Decomposition Heuristics with Multi-Period Embedded Optimization Models
by
Mojahid Saeed Osman
Math. Comput. Appl. 2025, 30(4), 90; https://doi.org/10.3390/mca30040090 - 17 Aug 2025
Abstract
This paper investigates last-mile delivery and explores hybrid distributed computational models for routing and scheduling delivery services and assigning delivery points to deliverymen over multiple time periods. The objective of these models is to minimize the number of deliverymen hired to provide delivery services over multiple periods while satisfying predetermined time limits. This paper describes the development of multiple traveling deliverymen approaches, multi-period optimization models, and a multi-period distributed algorithm to optimize routing and scheduling for last-mile deliveries. A computer-aided modeling system facilitates the proposed distributed approach, which scales to large numbers of delivery points while limiting computation and memory usage, yielding efficiently solvable models within acceptable execution times. To illustrate the solvability of the proposed approach and its scalability to large instances, 26 case problems are presented for last-mile delivery services. The key results include optimized routing and scheduling, a minimum number of deliverymen, and a significant reduction in computational effort and time.
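A simple baseline for the routing subproblem is the nearest-neighbor heuristic; this sketch is generic, with hypothetical coordinates, and is not the paper's decomposition algorithm:

```python
def nearest_neighbor_route(depot, points):
    """Greedy route construction: repeatedly visit the closest unvisited point."""
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    route, current, remaining = [depot], depot, list(points)
    while remaining:
        nxt = min(remaining, key=lambda p: dist(current, p))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    return route

route = nearest_neighbor_route(depot=(0, 0), points=[(5, 0), (1, 0), (2, 0)])
```

Such greedy routes are cheap starting points that decomposition heuristics then improve within the time-limit constraints.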
Full article
(This article belongs to the Section Engineering)
Open Access Article
Statistical Modeling of Reliable Intervals for Solutions to Linear Transfer Problems Under Boundary Experimental Data
by
Olha Chernukha, Petro Pukach, Yurii Bilushchak, Halyna Bilushchak and Myroslava Vovk
Math. Comput. Appl. 2025, 30(4), 89; https://doi.org/10.3390/mca30040089 - 12 Aug 2025
Abstract
A methodology for the statistical modeling of boundary value problems of mathematical physics for parabolic equations used to describe transport processes in a layer with incomplete data at the boundary of a body has been developed and presented. The boundary value problem is formulated for the case of a non-zero initial condition, the presence of a stable source at one boundary of the body (classical boundary condition), and a sample of experimental data for the desired function at the other boundary (statistical boundary condition). A linear regression model obtained from experimental data by the least squares method is used as a boundary condition. The article defines two-sided statistical estimates of the solution of the boundary value problem through linear regression coefficients, analyzes the mathematical model taking into account the influence of the sample size and covariance, determines the reliable intervals for linear regression and the desired function depending on the given level of reliability. The influence of the experimental data statistical characteristics on the desired function at the lower layer's boundary for different types of samples in the case of large or small time intervals is studied. The two-sided critical domain is obtained and analyzed on the basis of Fisher's criterion. The influence of the reliability level on the reliable intervals, the solution to the parabolic boundary value problem, and the width of the bilateral critical domain constructed for the solution is analyzed.
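The statistical boundary condition is a least-squares regression line fitted to the boundary measurements; a minimal OLS sketch with hypothetical data (the reliable-interval construction is omitted):

```python
def least_squares(xs, ys):
    """Ordinary least squares fit y ≈ b0 + b1*x; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    return my - b1 * mx, b1

# Hypothetical boundary measurements lying exactly on y = 1 + 2x
b0, b1 = least_squares([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```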
Full article
(This article belongs to the Special Issue Statistical Inference in Linear Models, 2nd Edition)
Open Access Article
Spherical Shape Functions for a Six-Node Tri-Rectangular Prism and an Eight-Node Quadrangular Right Prism
by
Anna Maria Marotta, Riccardo Barzaghi and Roberto Sabadini
Math. Comput. Appl. 2025, 30(4), 88; https://doi.org/10.3390/mca30040088 - 10 Aug 2025
Abstract
In this work, we present the procedure to obtain exact spherical shape functions for finite element modeling applications, without resorting to any kind of approximation, for generic prismatic spherical elements and for the case of spherical six-node tri-rectangular and eight-node quadrangular spherical prisms. The proposed spherical shape functions, given in explicit analytical form, are expressed in geographic coordinates, namely colatitude, longitude and distance from the center of the sphere. We demonstrate that our analytical shape functions satisfy all the properties required by this class of functions, deriving at the same time the analytical expression of the Jacobian, which allows us to change coordinate systems. Within the perspective of volume integration on Earth, entering a variety of geophysical and geodetic problems, as for mass change contribution to gravity, we consider our analytical expression of the shape functions and Jacobian for the six-node tri-rectangular and eight-node quadrangular right spherical prisms as reference volumes to evaluate the volume of generic spherical triangular and quadrangular prisms over the sphere; volume integration is carried out via Gauss–Legendre quadrature points. We show that for spherical quadrangular prisms, the percentage volume difference between the exact and the numerically evaluated volumes is independent of both the geographical position and the depth, and ranges from 10⁻³ to lower than 10⁻⁴ for angular dimensions ranging from 1° × 1° to 0.25° × 0.25°. A satisfactory accuracy is attained with eight Gauss–Legendre quadrature points. We also solve the Poisson equation and compare the numerical solution with the analytical solution, obtained in the case of steady-state heat conduction with internal heat production. We show that, even with a relatively coarse grid, our elements are capable of providing a satisfactory fit between numerical and analytical solutions, with a maximum difference in the order of 0.2% of the exact value.
Full article
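The quadrature test described in this abstract can be sketched for the quadrangular (tesseroid) case. The following is a minimal illustration, not the authors' implementation: it compares the closed-form tesseroid volume against a tensor-product Gauss–Legendre quadrature of the Jacobian r² sin θ in geographic coordinates; the function names and the 1° × 1° test cell are made up for the example.

```python
import numpy as np

def tesseroid_volume_exact(r1, r2, th1, th2, ph1, ph2):
    # Analytical volume of a spherical quadrangular prism (tesseroid):
    # V = (r2^3 - r1^3)/3 * (cos th1 - cos th2) * (ph2 - ph1),
    # with th the colatitude and ph the longitude, both in radians.
    return (r2**3 - r1**3) / 3.0 * (np.cos(th1) - np.cos(th2)) * (ph2 - ph1)

def tesseroid_volume_glq(r1, r2, th1, th2, ph1, ph2, n=8):
    # Numerical volume via tensor-product Gauss-Legendre quadrature
    # of the Jacobian r^2 sin(th).
    x, w = np.polynomial.legendre.leggauss(n)
    # map the nodes from [-1, 1] onto each integration interval
    r  = 0.5 * (r2 - r1) * x + 0.5 * (r2 + r1)
    th = 0.5 * (th2 - th1) * x + 0.5 * (th2 + th1)
    scale = 0.125 * (r2 - r1) * (th2 - th1) * (ph2 - ph1)
    # the integrand does not depend on ph, so that factor is just sum(w)
    return scale * np.sum(w) * np.sum(w * r**2) * np.sum(w * np.sin(th))

# illustrative 1 deg x 1 deg cell, 10 km thick, near the equator
d = np.deg2rad(1.0)
v_ex = tesseroid_volume_exact(6361e3, 6371e3, np.pi/2, np.pi/2 + d, 0.0, d)
v_gl = tesseroid_volume_glq(6361e3, 6371e3, np.pi/2, np.pi/2 + d, 0.0, d)
```

With eight nodes the radial integrand (a quadratic) is integrated exactly, so for a cell this small the relative difference sits at machine-precision level.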
Open Access Feature Paper Article
Modified Engel Algorithm and Applications in Absorbing/Non-Absorbing Markov Chains and Monopoly Game
by
Chunhe Liu and Jeff Chak Fu Wong
Math. Comput. Appl. 2025, 30(4), 87; https://doi.org/10.3390/mca30040087 - 8 Aug 2025
Abstract
The Engel algorithm was created to solve chip-firing games and can be used to find the stationary distribution of absorbing Markov chains. Kaushal et al. developed a MATLAB-based version of the generalized Engel algorithm based on Engel's probabilistic abacus theory. This paper introduces a modified version of the generalized Engel algorithm, which we call the modified Engel algorithm, or the mEngel algorithm for short. This modified version is designed to address issues related to non-absorbing Markov chains. It does so by decomposing the transition matrix into two distinct matrices, such that each entry of the transition matrix is the ratio of the corresponding entries of the numerator and denominator matrices. In a nested iteration setting, these matrices play a crucial role in converting non-absorbing Markov chains into absorbing ones and back again, thereby providing an approximation of the solutions of non-absorbing Markov chains until the distribution of a Markov chain converges to a stationary distribution. Our results show that the numerical outcomes of the mEngel algorithm align with those obtained from the power method and from the canonical decomposition of absorbing Markov chains. We provide an example, Torrence's problem, to illustrate the application of absorbing probabilities. Furthermore, our proposed algorithm analyzes the Monopoly transition matrix as a form of non-absorbing probabilities based on the rules of the Monopoly game, a complete-information dynamic game, particularly the probability of landing on the Jail square, which is determined by the order of the product of the movement, Jail, Chance, and Community Chest matrices. The Long Jail strategy, the Short Jail strategy, and the strategy of getting out of Jail by rolling consecutive doubles three times are formulated and tested. In addition, choosing which color group to buy is also an important strategy.
By comparing the probability distribution of each strategy and the profit return for each property and each color group of properties, we determine which strategy should be used when playing Monopoly. In conclusion, the mEngel algorithm, implemented in R, offers an alternative approach to solving the Monopoly game and demonstrates practical value.
Full article
(This article belongs to the Section Engineering)
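For reference, the power method against which the mEngel results are checked can be sketched in a few lines: repeatedly multiply a probability row vector by the transition matrix until it stops changing. The 3-state chain below is an illustrative toy, not the Monopoly transition matrix.

```python
import numpy as np

def stationary_power(P, tol=1e-12, max_iter=10000):
    # Power iteration on a row-stochastic transition matrix P:
    # iterate pi <- pi P until the distribution converges.
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)          # uniform starting distribution
    for _ in range(max_iter):
        nxt = pi @ P
        if np.abs(nxt - pi).sum() < tol:
            return nxt
        pi = nxt
    return pi

# toy ergodic (non-absorbing) 3-state chain
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])
pi = stationary_power(P)
```

The returned vector satisfies pi P = pi and sums to one, which is exactly the fixed point a stationary distribution must obey.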
Open Access Article
Computing Two Heuristic Shrinkage Penalized Deep Neural Network Approach
by
Mostafa Behzadi, Saharuddin Bin Mohamad, Mahdi Roozbeh, Rossita Mohamad Yunus and Nor Aishah Hamzah
Math. Comput. Appl. 2025, 30(4), 86; https://doi.org/10.3390/mca30040086 - 7 Aug 2025
Abstract
Linear models are not always able to sufficiently capture the structure of a dataset. Sometimes, combining predictors in a non-parametric method, such as a deep neural network (DNN), yields a more flexible modeling of the response variables in the predictions. Furthermore, standard statistical classification or regression approaches are inefficient when dealing with greater complexity, such as a high-dimensional problem, which usually suffers from multicollinearity. To confront such cases, penalized non-parametric methods are very useful. This paper proposes two heuristic approaches and implements new shrinkage-penalized cost functions in the DNN, based on the elastic-net penalty function concept. In other words, some new methods via the development of the shrinkage-penalized DNN, such as and , are established, which are strong rivals for and . If there is any dataset grouping information in each layer of the DNN, it may be transferred using the derived elastic-net penalized function; other penalized DNNs cannot provide this functionality. As the tabulated outcomes show, in the developed DNN there are not only slight increases in the classification results but also a nullifying process for some nodes together with a shrinkage property in the structure of each layer. A simulated dataset with binary response variables was generated, and the classic and heuristic shrinkage-penalized DNN models were built and tested. For comparison purposes, the DNN models were also compared with a classification tree built using GUIDE and applied to a real microbiome dataset.
Full article
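The elastic-net concept underlying the proposed cost functions can be illustrated with a small sketch. This is not the authors' cost function: `elastic_net_penalty`, λ (`lam`), and the mixing weight α (`alpha`) are illustrative names for the standard combination of an L1 (lasso) and an L2 (ridge) term summed over the layers' weight matrices.

```python
import numpy as np

def elastic_net_penalty(weights, lam=1e-3, alpha=0.5):
    # Elastic-net term added to a network's cost:
    # lam * (alpha * ||W||_1 + (1 - alpha)/2 * ||W||_2^2),
    # summed over the weight matrices of all layers.
    # alpha = 1 recovers the lasso penalty, alpha = 0 the ridge penalty.
    total = 0.0
    for W in weights:
        total += lam * (alpha * np.abs(W).sum()
                        + 0.5 * (1.0 - alpha) * np.sum(W**2))
    return total

rng = np.random.default_rng(0)
layers = [rng.standard_normal((4, 3)), rng.standard_normal((3, 1))]
p = elastic_net_penalty(layers)
```

The L1 part drives individual weights exactly to zero (the node-nullifying effect described in the abstract), while the L2 part shrinks the surviving weights smoothly.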
Open Access Article
Mathematical Modelling of Upper Room UVGI in UFAD Systems for Enhanced Energy Efficiency and Airborne Disease Control: Applications for COVID-19 and Tuberculosis
by
Mohamad Kanaan, Eddie Gazo-Hanna and Semaan Amine
Math. Comput. Appl. 2025, 30(4), 85; https://doi.org/10.3390/mca30040085 - 5 Aug 2025
Abstract
This study is the first to investigate the performance of ultraviolet germicidal irradiation (UVGI) in underfloor air distribution (UFAD) systems. A simplified mathematical model is developed to predict airborne pathogen transport and inactivation by upper room UVGI in UFAD spaces. The proposed model is substantiated for the SARS-CoV-2 virus as a simulated pathogen through a comprehensive computational fluid dynamics methodology validated against published experimental data of upper room UVGI and UFAD flows. Simulations show an 11% decrease in viral concentration within the upper irradiated zone when a 15 W louvered germicidal lamp is utilized. Finally, a case study on Mycobacterium tuberculosis (M. tuberculosis) bacteria is carried out using the validated simplified model to optimize the use of return air and UVGI implementation, ensuring acceptable indoor air quality and enhanced energy efficiency. Results reveal that the UFAD-UVGI system may consume up to 13.6% less energy while keeping the occupants at acceptable levels of M. tuberculosis concentration and UV irradiance when operated with 26% return air and a UVGI output of 72 W.
Full article
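The inactivation component of simplified UVGI models of this kind is commonly taken as first-order in UV dose (irradiance × exposure time). A minimal sketch under that standard assumption follows; the susceptibility constant below is illustrative, not a value from the paper, and real constants are pathogen-specific.

```python
import numpy as np

def surviving_fraction(k, irradiance, t):
    # First-order UV inactivation kinetics: N/N0 = exp(-k * E * t),
    # with k the pathogen UV susceptibility (m^2/J),
    # E the UV irradiance (W/m^2), and t the exposure time (s).
    return np.exp(-k * irradiance * t)

k = 0.1          # illustrative susceptibility, m^2/J
frac = surviving_fraction(k, irradiance=0.5, t=60.0)
```

Doubling either the irradiance or the exposure time delivers the same dose, and hence the same surviving fraction, which is why such models are often expressed directly in terms of dose.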
Open Access Article
Adaptive Filtered-x Least Mean Square Algorithm to Improve the Performance of Multi-Channel Noise Control Systems
by
Maha Yousif Hasan, Ahmed Sabah Alaraji, Amjad J. Humaidi and Huthaifa Al-Khazraji
Math. Comput. Appl. 2025, 30(4), 84; https://doi.org/10.3390/mca30040084 - 5 Aug 2025
Abstract
This paper proposes an optimized control filter (OCF) based on the Filtered-x Least Mean Square (FxLMS) algorithm for multi-channel active noise control (ANC) systems. The proposed OCF-McFxLMS algorithm delivers three key contributions. Firstly, even in difficult noise situations such as White Gaussian, Brownian, and pink noise, it greatly reduces error, reaching nearly zero mean squared error (MSE) values across all Microphone (Mic) channels. Secondly, it improves computational efficiency by drastically reducing execution time from 58.17 s in the standard McFxLMS algorithm to just 0.0436 s under White Gaussian noise, enabling real-time noise control without compromising accuracy. Finally, the OCF-McFxLMS demonstrates robust noise attenuation, achieving signal-to-noise ratio (SNR) values of 137.41 dB under White Gaussian noise and over 100 dB for Brownian and pink noise, consistently outperforming traditional approaches. These contributions collectively establish the OCF-McFxLMS algorithm as an efficient and effective solution for real-time ANC systems, delivering superior noise reduction and computational speed performance across diverse noise environments.
Full article
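The single-channel FxLMS update at the core of such multi-channel schemes can be sketched as follows. This toy is not the paper's OCF-McFxLMS implementation: it assumes a known secondary path, a tonal reference, and illustrative filter length and step size, and it drives a short FIR control filter until the residual at the error microphone vanishes.

```python
import numpy as np

def fxlms(x, d, s, s_hat, L=8, mu=0.05):
    # Single-channel FxLMS: adapt control filter w so that the
    # secondary-path output cancels the disturbance d.
    # s: true secondary path; s_hat: its estimate (assumed known here).
    N = len(x)
    w = np.zeros(L)
    xf = np.convolve(x, s_hat)[:N]       # reference filtered through s_hat
    e = np.zeros(N)
    xbuf = np.zeros(L)                   # recent reference samples
    fbuf = np.zeros(L)                   # recent filtered-reference samples
    ybuf = np.zeros(len(s))              # recent anti-noise samples
    for n in range(N):
        xbuf = np.roll(xbuf, 1); xbuf[0] = x[n]
        fbuf = np.roll(fbuf, 1); fbuf[0] = xf[n]
        y = w @ xbuf                     # anti-noise sample
        ybuf = np.roll(ybuf, 1); ybuf[0] = y
        e[n] = d[n] - s @ ybuf           # residual at the error mic
        w += mu * e[n] * fbuf            # filtered-x LMS update
    return e

x = np.sin(2 * np.pi * 0.05 * np.arange(4000))   # tonal reference
s = np.array([0.0, 0.8, 0.2])                    # toy secondary path
d = np.convolve(x, [0.9, -0.3])[:4000]           # primary-path disturbance
e = fxlms(x, d, s, s_hat=s)
```

For a single tone, the residual decays toward zero once the control filter matches the primary path's gain and phase through the secondary path, which is the mechanism behind the MSE reduction reported in the abstract.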
Open Access Article
A Hybrid Deep Reinforcement Learning Architecture for Optimizing Concrete Mix Design Through Precision Strength Prediction
by
Ali Mirzaei and Amir Aghsami
Math. Comput. Appl. 2025, 30(4), 83; https://doi.org/10.3390/mca30040083 - 3 Aug 2025
Abstract
Concrete mix design plays a pivotal role in ensuring the mechanical performance, durability, and sustainability of construction projects. However, the nonlinear interactions among the mix components challenge traditional approaches to predicting compressive strength and optimizing proportions. This study presents a two-stage hybrid framework that integrates deep learning with reinforcement learning to overcome these limitations. First, a Convolutional Neural Network–Long Short-Term Memory (CNN–LSTM) model was developed to capture spatial–temporal patterns from a dataset of 1030 historical concrete samples. The extracted features were enhanced using an eXtreme Gradient Boosting (XGBoost) meta-model to improve generalizability and noise resistance. Then, a Dueling Double Deep Q-Network (Dueling DDQN) agent was used to iteratively identify optimal mix ratios that maximize the predicted compressive strength. The proposed framework outperformed ten benchmark models, achieving an MAE of 2.97, an RMSE of 4.08, and an R² of 0.94. Feature attribution methods, including SHapley Additive exPlanations (SHAP), Elasticity-Based Feature Importance (EFI), and Permutation Feature Importance (PFI), highlighted the dominant influence of cement content and curing age, while also revealing non-intuitive effects such as the compensatory role of superplasticizers in low-water mixtures. These findings demonstrate the potential of the proposed approach to support intelligent concrete mix design and real-time optimization in smart construction environments.
Full article
(This article belongs to the Section Engineering)
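Of the attribution methods named in the abstract, Permutation Feature Importance is the simplest to sketch: shuffle one feature column and record how much the score drops. The model and data below are synthetic placeholders, not the concrete dataset or the authors' pipeline.

```python
import numpy as np

def r2_score(y, yhat):
    # Coefficient of determination: 1 - SS_res / SS_tot.
    return 1.0 - np.sum((y - yhat)**2) / np.sum((y - y.mean())**2)

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    # PFI: importance of feature j = baseline score minus the mean
    # score obtained after shuffling column j of X.
    rng = np.random.default_rng(seed)
    base = metric(y, model(X))
    imp = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])        # destroy the feature-target link
            scores.append(metric(y, model(Xp)))
        imp[j] = base - np.mean(scores)
    return imp

rng = np.random.default_rng(2)
X = rng.standard_normal((400, 3))
y = 3.0 * X[:, 0] + 0.1 * X[:, 2]        # feature 1 is irrelevant
model = lambda X: 3.0 * X[:, 0] + 0.1 * X[:, 2]
imp = permutation_importance(model, X, y, r2_score)
```

A feature the model ignores scores an importance of exactly zero, while the dominant feature shows the largest drop, mirroring how PFI flags cement content and curing age in the study.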
Topics
Topic in
AppliedMath, Axioms, Fractal Fract, MCA, Mathematics, Symmetry
Fractional Calculus, Symmetry Phenomenon and Probability Theory for PDEs, and ODEs
Topic Editors: Renhai Wang, Junesang Choi
Deadline: 31 December 2025
Topic in
Actuators, Gels, JFB, Polymers, MCA, Materials
Recent Advances in Smart Soft Materials: From Theory to Practice
Topic Editors: Lorenzo Bonetti, Giulia Scalet, Silvia Farè, Nicola Ferro
Deadline: 31 December 2026
Special Issues
Special Issue in
MCA
Numerical and Symbolic Computation: Developments and Applications 2025
Guest Editor: Maria Amélia Ramos Loja
Deadline: 30 September 2025
Special Issue in
MCA
Latest Research in Mathematical Modeling in Cancer Research
Guest Editor: Ophir Nave
Deadline: 30 November 2025
Special Issue in
MCA
Advances in Computational and Applied Mechanics (SACAM)
Guest Editors: Mohsen Sharifpur, Philip Loveday
Deadline: 20 December 2025
Special Issue in
MCA
New Trends in Computational Intelligence and Applications 2025
Guest Editors: Nancy Pérez-Castro, Aldo Márquez-Grajales
Deadline: 31 December 2025