Journal Description
AppliedMath
AppliedMath is an international, peer-reviewed, open access journal on applied mathematics published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within ESCI (Web of Science), Scopus, EBSCO, and other databases.
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 20.6 days after submission; acceptance to publication takes 8.6 days (median values for papers published in this journal in the second half of 2025).
- Recognition of Reviewers: APC discount vouchers, optional signed peer review, and reviewer names published annually in the journal.
- Journal Cluster of Mathematics and Its Applications: AppliedMath, Axioms, Computation, Fractal and Fractional, Geometry, International Journal of Topology, Logics, Mathematics and Symmetry.
Impact Factor: 0.7 (2024)
Latest Articles
A Sixth-Order Vieta–Lucas Polynomial-Based Block Method with Optimal Stability for Solving Practical First-Order ODE Models
AppliedMath 2026, 6(2), 34; https://doi.org/10.3390/appliedmath6020034 - 13 Feb 2026
Abstract
This paper addresses the numerical integration of first-order ordinary differential equations by developing a continuous linear multistep block method. The method is constructed through the approximation of the exact solution using a linear combination of shifted Vieta–Lucas polynomials defined on the interval . The use of this polynomial basis extends traditional approximation approaches and provides improved stability while maintaining high-order accuracy. Theoretical analysis shows that the proposed method attains sixth-order convergence and possesses an extended stability interval of , ensuring reliable performance for moderately stiff problems. Numerical experiments confirm that the method achieves lower errors and higher computational efficiency than conventional methods. These results demonstrate the suitability of the proposed approach for scientific computing applications, including engineering simulations and mathematical modeling, where accurate numerical integration of first-order differential equations is required.
Full article
(This article belongs to the Topic Advances in Natural Computing: Methods and Applications)
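The Vieta–Lucas basis named in the abstract is generated by a classical three-term recurrence. A minimal sketch follows; the shift x = 4t - 2, mapping t in [0, 1] onto [-2, 2], is an illustrative assumption, since the interval used in the paper is elided in the abstract.

```python
# Vieta-Lucas polynomials via the classical three-term recurrence:
#   V_0(x) = 2,  V_1(x) = x,  V_n(x) = x*V_{n-1}(x) - V_{n-2}(x).
# The abstract's shifted basis is sketched with an *assumed* shift
# x = 4t - 2 taking t in [0, 1] onto [-2, 2].

def vieta_lucas(n, x):
    """Evaluate V_n(x) by the recurrence."""
    v_prev, v = 2.0, float(x)
    if n == 0:
        return v_prev
    for _ in range(2, n + 1):
        v_prev, v = v, x * v - v_prev
    return v

def shifted_vieta_lucas(n, t):
    """Assumed shifted basis on [0, 1] (the paper's interval is elided above)."""
    return vieta_lucas(n, 4.0 * t - 2.0)

# Sanity checks against the closed forms V_2(x) = x^2 - 2, V_3(x) = x^3 - 3x:
assert vieta_lucas(2, 3.0) == 3.0**2 - 2
assert vieta_lucas(3, 2.0) == 2.0**3 - 3 * 2.0
```

A block method would collocate a truncated series in this basis at the intra-step points; the recurrence above is only the basis-evaluation ingredient.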
Open Access Article
Dispersive Quiescent Optical Solitons with DWDM Topology
by
Elsayed M. E. Zayed, Mona El-Shater, Ahmed H. Arnous, Lina S. Calucag and Anjan Biswas
AppliedMath 2026, 6(2), 33; https://doi.org/10.3390/appliedmath6020033 - 13 Feb 2026
Abstract
The paper retrieves quiescent dispersive solitons in dispersion-flattened optical fibers having nonlinear chromatic dispersion and the Kerr law of self-phase modulation. The platform model is the Schrödinger–Hirota equation. The enhanced direct algebraic method has made this retrieval possible. The intermediary functions are Jacobi’s elliptic function and Weierstrass’ elliptic function. The final results appear with parameter constraints for the existence of such solitons.
Full article
(This article belongs to the Special Issue Mathematical Structures in Quantum Information and Photonics: From Foundations to Applications)
Open Access Article
Exploring Artificial Intelligence and Machine Learning Approaches to Legal Reasoning
by
Wullianallur Raghupathi
AppliedMath 2026, 6(2), 32; https://doi.org/10.3390/appliedmath6020032 - 12 Feb 2026
Abstract
Modeling legal reasoning with artificial intelligence and machine learning presents formidable challenges. Legal decisions emerge from a complex interplay of factual circumstances, statutory interpretation, case precedent, jurisdictional variation, and human judgment—including the behavioral characteristics of judges and juries. This paper takes an exploratory approach to investigating how contemporary ML techniques might capture aspects of this complexity. Using pharmaceutical patent litigation as an illustrative domain, we develop a multi-layer analytical pipeline integrating text mining, clustering, topic modeling, and classification to analyze 698 U.S. federal district court decisions spanning January 2016 through December 2018, comprising substantive validity and infringement rulings under the Hatch-Waxman regulatory framework. Results demonstrate that the pipeline achieves 85–89% prediction accuracy—substantially exceeding the 42% baseline majority-class rate and comparing favorably with prior legal prediction studies—while producing interpretable intermediate outputs: clusters that correspond to recognized doctrinal categories (Abbreviated New Drug Application—ANDA litigation, obviousness, written description, claim construction) and topics that capture recurring legal themes. We discuss what these findings reveal about both the possibilities and limitations of computational approaches to legal reasoning, acknowledging the significant gap between statistical prediction and genuine legal understanding.
Full article
(This article belongs to the Special Issue Computer Science, Machine Learning, Algorithms, and Applied Mathematics)
Open Access Article
Hadamard Products of Projective Varieties with Errors and Erasures
by
Edoardo Ballico
AppliedMath 2026, 6(2), 31; https://doi.org/10.3390/appliedmath6020031 - 12 Feb 2026
Abstract
In Algebraic Statistics, M.A. Cueto, J. Morton and B. Sturmfels introduced a statistical model, the Restricted Boltzmann Machine, which brought into play the Hadamard product of two or more vectors of an affine or projective space, i.e., the componentwise product of their entries, thereby drawing Algebraic Geometry into the subject. The Hadamard product of two subvarieties is defined as the Zariski closure of the Hadamard product of their elements. Recently, D. Antolini and A. Oneto introduced and studied the notion of Hadamard rank, and we prove some results on it. Moreover, we prove some theorems on the dimension and shape of the Hadamard powers of X. The aim is to describe the images of the Hadamard products without taking the Zariski closure. We also discuss several scenarios in which some of the data, i.e., the variety X, is wrong or cannot be recovered.
Full article
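The coordinatewise (Hadamard) product that the abstract builds on is elementary to state for points. A minimal sketch for homogeneous coordinate vectors; this is illustrative only, since the paper's results concern varieties, not single points.

```python
# Hadamard product of two points, taken coordinatewise.  For projective points
# the product is defined only when not all coordinatewise products vanish.

def hadamard(p, q):
    """Componentwise product of two coordinate vectors of equal length."""
    if len(p) != len(q):
        raise ValueError("points must live in the same space")
    r = [a * b for a, b in zip(p, q)]
    if all(c == 0 for c in r):
        raise ValueError("Hadamard product undefined: all coordinates vanish")
    return r

# Example in P^2:
assert hadamard([1, 2, 3], [4, 0, -1]) == [4, 0, -3]
```

The Hadamard product of two varieties is then the Zariski closure of the set of such products of points, as stated in the abstract.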
Open Access Article
General Stochastic Vector Integration: A New Approach
by
Moritz Sohns and Ali Zakaria Idriss
AppliedMath 2026, 6(2), 30; https://doi.org/10.3390/appliedmath6020030 - 11 Feb 2026
Abstract
This paper presents a topology-based approach to the general vector-valued stochastic integral for predictable integrands and semimartingale integrators. The integral is defined as a unique mapping that achieves closure under the semimartingale topology. While the topology and the closedness of the integral operator are well known, the method of defining the integral via this mapping is new and offers a significantly more efficient path to understanding the general stochastic integral compared to existing techniques. Instead of defining a basic integral and then extending it through a sequence of case distinctions, our construction performs a single topological closure: we define the vector stochastic integral as the unique continuous extension of the simple-predictable integral under the Émery topology, within the predictable σ-algebra. This single step yields the general predictable, vector-valued integral without invoking semimartingale decompositions, the Doob–Meyer decomposition, or detours through /quasimartingale frameworks and without re-engineering from the componentwise to the vector case.
Full article
(This article belongs to the Section Probabilistic & Statistical Mathematics)
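The simple-predictable integral that the construction extends is just a left-endpoint Riemann–Stieltjes-type sum. A minimal scalar sketch against a discretized random-walk path; this illustrates the object being extended, not the paper's topological construction.

```python
import random

# Elementary integral of a simple predictable process H against a path S:
#   (H . S)_T = sum_i H_{t_i} * (S_{t_{i+1}} - S_{t_i}),
# where H_{t_i} is fixed at the *left* endpoint of each interval
# (predictability: the integrand cannot look ahead).

def simple_integral(H, S):
    """H: values held on [t_i, t_{i+1}); S: path at t_0..t_n (len(S) == len(H)+1)."""
    return sum(h * (S[i + 1] - S[i]) for i, h in enumerate(H))

random.seed(0)
# Symmetric random-walk path as a toy (discrete-time) semimartingale:
steps = [random.choice([-1, 1]) for _ in range(100)]
S = [0]
for s in steps:
    S.append(S[-1] + s)

# Integrating H = 1 telescopes to the total increment S_T - S_0:
assert simple_integral([1] * 100, S) == S[-1] - S[0]
```

The paper's integral is, per the abstract, the unique continuous extension of this elementary map under the Émery topology.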
Open Access Article
Optimal Control of a Genotype-Structured Prey–Predator Model: Strategies for Ecological Rescue and Oscillatory Dynamics Restoration
by
Preet Mishra, Shyam Kumar, Sorokhaibam Cha Captain Vyom and R. K. Brojen Singh
AppliedMath 2026, 6(2), 29; https://doi.org/10.3390/appliedmath6020029 - 10 Feb 2026
Abstract
Evolutionary changes can significantly impact interactions among populations and disrupt ecosystems by driving extinctions or collapsing population oscillations, posing substantial challenges to biodiversity conservation. This study addresses the ecological rescue of a predator population threatened by a mutant prey population using the optimal control method. To this end, we study a model that incorporates a genotypically structured prey population comprising wild-type, heterozygous, and mutant prey types, as well as the predator population. We prove that this model has both local and global existence and uniqueness of solutions, ensuring the model’s robustness. We then apply the optimal control method, incorporating Pontryagin’s Maximum Principle, to introduce a control input into the model and minimize the mutant population, thereby stabilizing the ecosystem. We utilize a reproduction number and a control efficacy measure to numerically demonstrate that the undesired dynamics of the model can be controlled, leading to the suppression of the mutant and the restoration of the oscillatory dynamics of the system. These findings demonstrate the applicability of optimal control strategies and provide a mathematical framework for managing such ecological disruptions.
Full article
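The genotype-structured model extends classical prey-predator dynamics, whose oscillatory behavior is the quantity being "rescued". A minimal forward-Euler sketch of the underlying Lotka-Volterra oscillations; all rate constants below are illustrative choices, not the paper's parameters.

```python
# Classical Lotka-Volterra prey-predator system, forward Euler:
#   x' = a*x - b*x*y   (prey),    y' = c*x*y - d*y   (predator).
# The paper's model additionally splits the prey into wild-type,
# heterozygous, and mutant classes; parameters here are illustrative.

def lotka_volterra(x0, y0, a=1.0, b=0.5, c=0.2, d=0.6, dt=0.001, steps=20000):
    x, y = x0, y0
    traj = [(x, y)]
    for _ in range(steps):
        dx = (a * x - b * x * y) * dt
        dy = (c * x * y - d * y) * dt
        x, y = x + dx, y + dy
        traj.append((x, y))
    return traj

traj = lotka_volterra(4.0, 2.0)
# Both populations stay positive along the oscillatory orbit:
assert all(x > 0 and y > 0 for x, y in traj)
```

Collapse of these oscillations (e.g., predator extinction driven by a mutant prey type) is the failure mode the paper's optimal control input is designed to reverse.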
Open Access Article
The Junction of PDEs, Financial Mathematics and Probability: Deriving Classical and Generalized Black-Scholes–Merton Formulas
by
Len Meas, Chhaunny Chhum, Phichhang Ou and Mara Mong
AppliedMath 2026, 6(2), 28; https://doi.org/10.3390/appliedmath6020028 - 10 Feb 2026
Abstract
This paper explores the intersection of three foundational areas—partial differential equations, financial mathematics, and probability—by providing a rigorous framework for the classical Black–Scholes–Merton option pricing model and its generalized extensions. For the classical model, a change of variables is employed to transform the Black–Scholes partial differential equation into the linear heat equation. The resulting formulation enables the use of Fourier transform techniques and the fundamental solution (heat kernel) to derive the closed-form Black–Scholes–Merton formula. To extend the classical setting, the interest rate in the discount factor and the stock’s rate of return are modeled using a multifactor Vasicek process, leading to a more sophisticated and realistic option pricing framework. In addition, a complementary derivation based on probabilistic methods, using a change of measure, yields an alternative explicit pricing formula. Numerical experiments based on Monte Carlo simulation show excellent agreement with the closed-form solutions and illustrate notable gains in computational efficiency. The comparative analysis further demonstrates that stochastic interest rates systematically produce lower option prices than the classical constant-rate model, underscoring the importance of accurate interest-rate modeling in practical valuation.
Full article
(This article belongs to the Section Probabilistic & Statistical Mathematics)
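The agreement between the closed-form price and Monte Carlo simulation described in the abstract is easy to reproduce for the classical constant-rate case. A minimal sketch; the strike, rate, and volatility values are illustrative, not taken from the paper.

```python
import math, random

def bs_call(S0, K, r, sigma, T):
    """Classical Black-Scholes-Merton price of a European call."""
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return S0 * N(d1) - K * math.exp(-r * T) * N(d2)

def mc_call(S0, K, r, sigma, T, n=100_000, seed=1):
    """Monte Carlo price under risk-neutral GBM:
    S_T = S0 * exp((r - sigma^2/2) T + sigma sqrt(T) Z)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        ST = S0 * math.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
        total += max(ST - K, 0.0)
    return math.exp(-r * T) * total / n

exact = bs_call(100, 100, 0.05, 0.2, 1.0)
approx = mc_call(100, 100, 0.05, 0.2, 1.0)
# Closed form and simulation agree to within Monte Carlo error:
assert abs(exact - approx) < 0.2
```

The paper's stochastic-rate (multifactor Vasicek) extension would replace the constant r in both the discounting and the drift with a simulated rate path.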
Open Access Review
Modeling COVID-19 Population Dynamics with a Viral Reservoir and Human Mobility
by
Jené Mercia van Schalkwyk, Peter Joseph Witbooi, Sibaliwe Maku Vyambwera and Mozart Umba Nsuami
AppliedMath 2026, 6(2), 27; https://doi.org/10.3390/appliedmath6020027 - 10 Feb 2026
Abstract
This article introduces and thoroughly examines a novel deterministic compartmental model of COVID-19 dynamics. The model uniquely incorporates compartments for symptomatic and asymptomatic individuals alongside an environmental reservoir for the pathogen. It also accounts for a steady inflow of infected visitors and a steady outflow from the removed class. The mathematical soundness of the model is established by identifying the invariant region and ensuring positivity of solutions. Notably, during surges of infected visitors, certain classes maintain positive minimum values. We analytically determine endemic equilibrium points and prove the global stability of the disease-free equilibrium. Sensitivity analysis highlights the significant roles of transmission rates and asymptomatic individuals in disease spread. Simulation results corroborate the theoretical findings and provide additional insights into the model’s predictive capabilities.
Full article
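The reservoir-augmented model in the abstract builds on standard compartmental dynamics. A minimal SIR sketch with forward Euler; the rates, and the omission of the asymptomatic class, environmental reservoir, and visitor in/outflows, are deliberate simplifications for illustration.

```python
# Minimal SIR compartmental model, forward Euler.  The paper's model adds
# symptomatic/asymptomatic classes, an environmental reservoir, and steady
# visitor inflow/outflow; beta and gamma below are illustrative only.

def sir(S0, I0, R0_, beta=0.3, gamma=0.1, dt=0.1, steps=2000):
    S, I, R = S0, I0, R0_
    N = S0 + I0 + R0_
    hist = [(S, I, R)]
    for _ in range(steps):
        new_inf = beta * S * I / N * dt   # new infections this step
        new_rec = gamma * I * dt          # new recoveries this step
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        hist.append((S, I, R))
    return hist

hist = sir(990.0, 10.0, 0.0)
# Total population is conserved and all classes stay non-negative:
assert all(abs(s + i + r - 1000.0) < 1e-6 for s, i, r in hist)
assert all(s >= 0 and i >= 0 and r >= 0 for s, i, r in hist)
```

The positivity check mirrors the abstract's analytical concern: the paper proves positivity of solutions and identifies an invariant region for its richer model.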
Open Access Article
A Benchmarking Study for Algorithm Selection in Scientific Machine Learning (SciML): PINN vs. gPINN for Solving Partial Differential Equations
by
Muhammad Azam, Imran Shabir Chuhan, Muhammad Shafiq Ahmed and Kaleem Arshid
AppliedMath 2026, 6(2), 26; https://doi.org/10.3390/appliedmath6020026 - 9 Feb 2026
Abstract
Recent advances in physics-informed neural networks (PINNs) have highlighted the need for systematic criteria for selecting appropriate algorithms to solve differential equations. This paper presents a numerical comparison between standard PINNs and gradient-enhanced PINNs (gPINNs) used to solve a high-order partial differential equation (PDE). To verify the accuracy and convergence behavior of both methods, we solve a fourth-order PDE whose analytical solution is known. We synthesize our findings into a practical selection guide: gPINN is recommended for problems requiring high accuracy in gradient fields or operating with sparse data, whereas standard PINN is advised for strongly nonlinear or computationally constrained scenarios. This framework provides a clear, evidence-based policy for algorithm choice in SciML. Beyond numerical comparison, we provide an analytical interpretation linking solver performance to the spectral and stiffness properties of each PDE class, offering a principled basis for algorithm selection.
Full article
(This article belongs to the Special Issue Applied Mathematical Modeling and Machine Learning for Geomechanics and Superconducting Materials)
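The defining difference between the two methods is the loss: gPINN augments the standard PDE-residual loss with the gradient of the residual. A minimal sketch on the toy ODE u' = u, u(0) = 1, with a cubic trial function standing in for the neural network; the weight lam, the collocation grid, and the toy problem are all illustrative assumptions.

```python
# Sketch of the gradient-enhanced PINN (gPINN) loss for the toy problem
# u' = u, u(0) = 1, using a cubic trial function instead of a network.
# gPINN adds the derivative of the PDE residual to the ordinary PINN loss;
# lam and the collocation points are illustrative choices.

def trial(theta, x):                 # u(x) = 1 + t1*x + t2*x^2 + t3*x^3
    t1, t2, t3 = theta
    return 1.0 + t1 * x + t2 * x**2 + t3 * x**3

def trial_dx(theta, x):
    t1, t2, t3 = theta
    return t1 + 2 * t2 * x + 3 * t3 * x**2

def residual(theta, x):              # PDE residual r(x) = u'(x) - u(x)
    return trial_dx(theta, x) - trial(theta, x)

def residual_dx(theta, x, h=1e-5):   # r'(x) by central differences
    return (residual(theta, x + h) - residual(theta, x - h)) / (2 * h)

def gpinn_loss(theta, xs, lam=0.1):
    pinn = sum(residual(theta, x) ** 2 for x in xs) / len(xs)
    grad = sum(residual_dx(theta, x) ** 2 for x in xs) / len(xs)
    return pinn + lam * grad         # lam = 0 recovers the plain PINN loss

xs = [i / 10 for i in range(11)]
good = (1.0, 0.5, 1.0 / 6.0)         # truncated Taylor coefficients of exp(x)
bad = (0.0, 0.0, 0.0)                # constant trial u = 1
assert gpinn_loss(good, xs) < gpinn_loss(bad, xs)
```

In an actual PINN both u and the residual derivative come from automatic differentiation of a network; the extra gradient term is what buys the higher accuracy in gradient fields, at the extra cost the selection guide weighs.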
Open Access Article
The Chain Rule for Fractional-Order Derivatives: Theories, Challenges, and Unifying Directions
by
Sroor M. Elnady, Mohamed A. El-Beltagy, Mohammed E. Fouda and Ahmed G. Radwan
AppliedMath 2026, 6(2), 25; https://doi.org/10.3390/appliedmath6020025 - 9 Feb 2026
Abstract
The chain rule is a foundational concept in calculus, critical for differentiating composite functions, especially those appearing in modern AI techniques. Its extension to fractional calculus presents challenges due to the integral-based nature and intrinsic memory effects of these fractional operators. This survey provides a review of chain-rule formulations across the major known fractional derivatives (FDs), including Riemann–Liouville (RL), Caputo, Caputo–Fabrizio (CF), Atangana–Baleanu–Riemann (ABR), Atangana–Baleanu–Caputo (ABC), and Caputo–Fabrizio with Gaussian kernel (CFG). The main contribution here is the introduction of a unified criterion, denoted as , which synthesizes and extends previous classification frameworks for systematically formulating the chain rule across different operators. Each chain rule is examined in terms of its derivation, operator structure, and scope of applicability. In addition, the survey analyzes series-based approximations that appear in computing these derivatives, highlighting the minimum number of terms required to achieve an acceptable mean absolute error (MAE). By consolidating theoretical developments, derivation methods, and numerical strategies, this paper provides a comprehensive resource for researchers and practitioners working with fractional-order models.
Full article
(This article belongs to the Section Computational and Numerical Mathematics)
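The memory effect that breaks the classical chain rule is visible in how these operators are computed: the whole history of the function enters the derivative. A minimal Grünwald-Letnikov approximation of the Riemann-Liouville derivative, checked against the known closed form D^(1/2) t = 2 sqrt(t/pi); the step size and test function are illustrative.

```python
import math

# Gruenwald-Letnikov approximation of the Riemann-Liouville fractional
# derivative of order alpha (one of the operators surveyed above):
#   D^alpha f(t)  ~  h^(-alpha) * sum_{k=0}^{t/h} w_k f(t - k h),
# with weights w_0 = 1, w_k = w_{k-1} * (k - 1 - alpha) / k.
# Note every past value f(t - k h) contributes: the "memory" effect.

def gl_derivative(f, t, alpha, h=1e-3):
    n = int(round(t / h))
    w, acc = 1.0, f(t)
    for k in range(1, n + 1):
        w *= (k - 1 - alpha) / k
        acc += w * f(t - k * h)
    return acc / h**alpha

# Known closed form for f(t) = t:  D^(1/2) t = 2 * sqrt(t / pi).
approx = gl_derivative(lambda t: t, 1.0, 0.5)
exact = 2.0 * math.sqrt(1.0 / math.pi)
assert abs(approx - exact) < 1e-2
```

For alpha = 1 the weights collapse to a two-point difference and the ordinary derivative is recovered, which is a useful consistency check on any FD implementation.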
Open Access Article
On the Use of the Quantum Alternating Operator Ansatz in Quantum-Informed Recursive Optimization: A Case Study on the Minimum Vertex Cover
by
Pablo Ramos-Ruiz, Antonio Miguel Fuentes-Jiménez, José E. Ramos-Ruiz and Inmaculada Jiménez-Manchado
AppliedMath 2026, 6(2), 24; https://doi.org/10.3390/appliedmath6020024 - 6 Feb 2026
Abstract
In recent years, several quantum algorithms have been proposed for addressing combinatorial optimization problems. Among them, the Quantum Approximate Optimization Algorithm (QAOA) has become a widely used approach. However, reported limitations of QAOA have motivated the development of multiple algorithmic variants, including recursive hybrid methods such as the Recursive Quantum Approximate Optimization Algorithm (RQAOA), as well as the Quantum-Informed Recursive Optimization (QIRO) framework. In this work, we integrate the Quantum Alternating Operator Ansatz within the QIRO framework in order to improve its quantum inference stage. Both the original and the enhanced versions of QIRO are applied to the Minimum Vertex Cover problem, an NP-complete problem of practical relevance. Performance is evaluated on a benchmark of Erdős–Rényi graph instances with varying sizes, densities, and random seeds. The results show that the proposed modification leads to a higher number of successfully solved instances across the considered benchmark, indicating that refinements of the variational layer can improve the effectiveness of the QIRO framework.
Full article
(This article belongs to the Special Issue Optimization and Machine Learning)
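The target problem itself is easy to state classically: find the smallest vertex set touching every edge. A brute-force baseline on a toy graph; this illustrates what "successfully solved instance" means, not the QIRO/QAOA pipeline, and the example graph is an illustrative choice.

```python
from itertools import combinations

# Minimum Vertex Cover by exhaustive search on a small graph: the classical
# statement of the NP-complete problem that the QIRO/QAOA pipeline targets.
# Exhaustive search is only feasible at toy sizes, which is precisely why
# heuristic and quantum approaches are studied.

def min_vertex_cover(vertices, edges):
    for size in range(len(vertices) + 1):
        for cover in combinations(vertices, size):
            cset = set(cover)
            if all(u in cset or v in cset for u, v in edges):
                return cset          # first hit is minimum: sizes ascend
    return set(vertices)

# 4-cycle with one chord: a cover of size 2 exists (e.g. {0, 2}).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
cover = min_vertex_cover(range(4), edges)
assert len(cover) == 2
assert all(u in cover or v in cover for u, v in edges)
```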
Open Access Article
Optimizing the Bounds of Neural Networks Using a Novel Simulated Annealing Method
by
Ioannis G. Tsoulos, Vasileios Charilogis and Dimitrios Tsalikakis
AppliedMath 2026, 6(2), 23; https://doi.org/10.3390/appliedmath6020023 - 6 Feb 2026
Abstract
Artificial neural networks are reliable machine learning models that have been applied to a multitude of practical and scientific problems in recent decades, with examples from the areas of physics, chemistry, medicine, etc. To apply them effectively to these problems, it is necessary to adapt their parameters using optimization techniques. However, to be effective, an optimization technique must know the range of values of the network's parameters so that it can train the network adequately. In most cases this is not possible, as these ranges are also significantly affected by the inputs the network receives from the objective problem it is called upon to solve. This situation usually results in the network becoming trapped in local minima of the error function or, even worse, in the phenomenon of overfitting, where although the training error achieves low values, the network exhibits low performance on the corresponding test set. To address this limitation, this work proposes a novel two-stage training approach in which a simulated annealing (SA)-based preprocessing stage is employed to automatically identify optimal parameter value intervals before any optimization method is applied to train the neural network. Unlike similar approaches that rely on fixed or heuristically selected parameter bounds, the proposed preprocessing technique explores the parameter space probabilistically, guided by a temperature-controlled acceptance mechanism that balances global exploration and local refinement. The proposed method has been successfully applied to a wide range of classification and regression problems, and comparative results are presented in detail in the present work.
Full article
(This article belongs to the Section Computational and Numerical Mathematics)
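The temperature-controlled acceptance mechanism the abstract describes is classic simulated annealing. A minimal 1-D sketch that locates a promising region of a parameter space before a local method would refine it; the objective, cooling schedule, and bound-widening rule are illustrative assumptions, not the paper's method.

```python
import math, random

# Classic simulated annealing: improving steps are always accepted; a
# worsening step of size delta is accepted with probability exp(-delta/T).
# T decays geometrically, so global exploration gives way to refinement.
# Here SA locates the low basin of an illustrative multimodal objective,
# mimicking the "bound identification" preprocessing stage of the abstract.

def objective(w):
    return (w - 3.0) ** 2 + 2.0 * math.sin(5.0 * w)

def anneal(lo=-10.0, hi=10.0, T0=5.0, cooling=0.995, iters=4000, seed=2):
    rng = random.Random(seed)
    w = rng.uniform(lo, hi)
    best, T = w, T0
    for _ in range(iters):
        cand = min(hi, max(lo, w + rng.gauss(0.0, 1.0)))   # clipped proposal
        delta = objective(cand) - objective(w)
        if delta < 0 or rng.random() < math.exp(-delta / T):
            w = cand
            if objective(w) < objective(best):
                best = w
        T *= cooling
    return best

best = anneal()
# best lands in the low basin near w = 3; an interval such as
# [best - 1, best + 1] could then bound the subsequent local training.
assert objective(best) < 0.0
```

In the paper's setting the "parameter" is a vector of network weights and the annealed quantity is the training error; the acceptance rule is the same.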
Open Access Article
Mathematical Approaches for the Characterization and Analysis of Molecular Markers in the Study of the Progression and Severity of Amyotrophic Lateral Sclerosis
by
Luisa Carracciuolo, Ugo D’Amora, Raffaele Dubbioso and Ines Fasolino
AppliedMath 2026, 6(2), 22; https://doi.org/10.3390/appliedmath6020022 - 5 Feb 2026
Abstract
Amyotrophic Lateral Sclerosis (ALS) is a progressive neurodegenerative disorder for which, despite its severity, no validated biomarker currently exists to support early diagnosis, limiting therapeutic effectiveness and patient survival. In this context, mathematical modeling becomes essential: it allows us to maximize the information obtainable from a limited number of samples, identify patterns that may not be directly observable, and estimate the relative contribution of different molecular markers to ALS progression. In this work, we propose methods for qualitatively and quantitatively evaluating the relevance of selected biomarkers in ALS classification and disease-state identification, laying the foundations for the definition of a protocol useful for constructing “digital twins” of the entire process of study, diagnosis, and treatment of the disease from the perspective of innovative precision medicine.
Full article
Open Access Review
Fifth-Order Block Hybrid Approach for Solving First-Order Stiff Ordinary Differential Equations
by
Ibrahim Mohammed Dibal and Yeak Su Hoe
AppliedMath 2026, 6(2), 21; https://doi.org/10.3390/appliedmath6020021 - 5 Feb 2026
Abstract
This study introduces a novel single-step hybrid block method with three intra-step points that attains fifth-order accuracy, offering an accurate and computationally economical tool for solving first-order differential equations. The method is specifically designed to handle first-order differential equations with efficiency and precision while employing a constant step size throughout the computation. To further enhance accuracy, interpolation techniques are incorporated to approximate function values at specific positions. The fundamental properties of the method are analyzed to verify its mathematical soundness; these analyses confirm that the scheme satisfies the essential requirements of stability, consistency, and convergence, ensuring reliability in practical applications. In addition, the method demonstrates strong adaptability, making it suitable for a broad spectrum of problem settings that involve both stiff and non-stiff systems. Numerical experiments are carried out, and the results consistently demonstrate that the proposed method is robust and effective under various test cases. The outcomes further reveal that it frequently outperforms several existing numerical approaches in terms of both accuracy and computational efficiency.
Full article
(This article belongs to the Section Computational and Numerical Mathematics)
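Stiffness, which such block methods are built to handle, is easy to exhibit on the scalar test equation y' = λy: an explicit method blows up once the step leaves its stability region, while an implicit one stays stable for any step. A minimal comparison; λ and the step size are illustrative, and neither scheme is the paper's fifth-order method.

```python
# Why stiff problems need strong stability: on y' = lam*y with lam = -50,
# explicit Euler diverges once |1 + h*lam| > 1, while backward Euler's
# amplification factor 1/(1 - h*lam) stays below 1 for every h > 0.

lam, h, steps = -50.0, 0.1, 50   # h lies outside explicit Euler's stability region

def explicit_euler(y0):
    y = y0
    for _ in range(steps):
        y = y + h * lam * y      # y_{n+1} = (1 + h*lam) * y_n = -4 * y_n
    return y

def backward_euler(y0):
    y = y0
    for _ in range(steps):
        y = y / (1.0 - h * lam)  # y_{n+1} = y_n / (1 - h*lam) = y_n / 6
    return y

assert abs(explicit_euler(1.0)) > 1e6    # explodes
assert abs(backward_euler(1.0)) < 1e-6   # decays, like the true solution e^(lam*t)
```

The paper's hybrid block scheme pursues the same stability benefit at fifth order by solving for several intra-step values simultaneously.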
Open Access Article
Enhanced Assumption-Aware Linear Discriminant Analysis for the Wisconsin Breast Cancer Dataset: A Guide to Dimensionality Reduction and Prediction with Performance Comparable to Machine Learning Methods
by
Vasiliki Pantoula, Vasileios Mandikas and Tryfon Daras
AppliedMath 2026, 6(2), 20; https://doi.org/10.3390/appliedmath6020020 - 3 Feb 2026
Abstract
The analysis of multivariate data is a central issue in biomedical research, where the accurate classification of patients and the extraction of reliable conclusions are of critical importance. Linear Discriminant Analysis (LDA) remains one of the most established methods for both dimensionality reduction and classification of data. In this paper, we examine in detail the theoretical foundations, assumptions, and statistical properties of LDA, and apply the method step by step to real data from the Breast Cancer Wisconsin (Diagnostic) database, which includes cellular features from breast biopsy samples with the aim of distinguishing benign from malignant tumors. Emphasis is placed on the importance of the method’s assumptions, such as multivariate normality, equality of covariance matrices, and absence of multicollinearity, demonstrating that their fulfillment leads to significant improvements in model performance. Specifically, careful preprocessing and strict adherence to these assumptions increase classification accuracy from ( cross-validated) to ( cross-validated). To our knowledge, this study is the first to demonstrate the dual use of LDA as both a dimensionality-reduction tool and a predictive classification model for this medical database within the same biomedical analysis framework. Moreover, we provide, for the first time, a systematic comparison between our assumption-aware LDA model and related studies employing the most accurate machine-learning classifiers reported in the literature for this dataset, showing that classical LDA achieves accuracy comparable to these more complex methods. The resulting discriminant model, which uses 13 variables out of the original 30, can be applied easily by clinical researchers to classify new cases as benign or malignant, while simultaneously providing interpretable coefficients that reveal the underlying relationships among variables. The implementation is carried out in the SPSS environment, following the theoretical steps described in the paper, thus offering a user-friendly and reproducible framework for reliable application. In addition, the study establishes a structured and transparent workflow for the proper application of LDA in biomedical research by explicitly linking assumption verification, preprocessing, dimensionality reduction, and classification.
Full article
(This article belongs to the Topic Mathematical Applications and Computational Intelligence in Medicine and Biology)
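The core of two-class LDA is compact enough to show from scratch: project onto w = Σ⁻¹(μ₁ - μ₀), where Σ is the pooled covariance, and threshold at the midpoint of the projected class means. A minimal two-feature sketch; the toy data are illustrative, not the Wisconsin dataset, and the paper's workflow (assumption checks, 13-of-30 variable selection, SPSS) is not reproduced here.

```python
# Two-class linear discriminant in two dimensions, from scratch:
#   w = pooled_cov^{-1} (mu1 - mu0);
# classify by projecting onto w and thresholding at the midpoint of the
# projected class means.  Toy data only; equal class covariances are the
# LDA assumption the abstract stresses.

def mean(rows):
    n = len(rows)
    return [sum(r[j] for r in rows) / n for j in range(2)]

def pooled_cov(a, b):
    ma, mb = mean(a), mean(b)
    s = [[0.0, 0.0], [0.0, 0.0]]
    for rows, m in ((a, ma), (b, mb)):
        for r in rows:
            d = [r[0] - m[0], r[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
    n = len(a) + len(b) - 2            # unbiased pooled estimate
    return [[s[i][j] / n for j in range(2)] for i in range(2)]

def lda_direction(a, b):
    c = pooled_cov(a, b)
    det = c[0][0] * c[1][1] - c[0][1] * c[1][0]
    inv = [[c[1][1] / det, -c[0][1] / det],
           [-c[1][0] / det, c[0][0] / det]]
    ma, mb = mean(a), mean(b)
    diff = [mb[0] - ma[0], mb[1] - ma[1]]
    return [inv[0][0] * diff[0] + inv[0][1] * diff[1],
            inv[1][0] * diff[0] + inv[1][1] * diff[1]]

class0 = [[1.0, 2.0], [1.5, 1.8], [0.8, 2.2], [1.2, 2.4]]
class1 = [[3.0, 4.0], [3.5, 3.8], [2.8, 4.2], [3.2, 4.4]]
w = lda_direction(class0, class1)
proj = lambda x: w[0] * x[0] + w[1] * x[1]
thr = (proj(mean(class0)) + proj(mean(class1))) / 2.0
# The training points are linearly separated along w:
assert all(proj(x) < thr for x in class0)
assert all(proj(x) > thr for x in class1)
```

The coefficients in w are exactly the "interpretable coefficients" the abstract highlights: each one weighs a feature's contribution to the benign/malignant decision.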
Open Access Article
Inforpower: Quantifying the Informational Power of Probability Distributions
by
Hening Huang
AppliedMath 2026, 6(2), 19; https://doi.org/10.3390/appliedmath6020019 - 2 Feb 2026
Abstract
In many scientific and engineering fields (e.g., measurement science), a probability density function often models a system comprising a signal embedded in noise. Conventional measures, such as the mean, variance, entropy, and informity, characterize signal strength and uncertainty (or noise level) separately. However, the true performance of a system depends on the interaction between signal and noise. In this paper, we propose a novel measure, called “inforpower”, which quantifies a system’s informational power and explicitly captures the interaction between signal and noise. We also propose a new measure of central tendency, called the “information-energy center”. Closed-form expressions for inforpower and the information-energy center are provided for ten well-known continuous distributions. Moreover, we propose a maximum inforpower criterion, which can complement the Akaike information criterion (AIC), the minimum entropy criterion, and the maximum informity criterion for selecting the best distribution from a set of candidate distributions. Two examples (synthetic Weibull distribution data and Tana River annual maximum streamflow) are presented to demonstrate the effectiveness of the proposed maximum inforpower criterion and compare it with existing goodness-of-fit criteria.
Full article
(This article belongs to the Section Probabilistic & Statistical Mathematics)
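The closed-form inforpower expressions are not reproduced in this listing, but the model-selection setting the abstract describes can be illustrated with the AIC, which the paper names as a complementary criterion. A minimal sketch with synthetic data (the candidate families, sample size, and seed are all assumptions for illustration):

```python
# AIC-based selection between two candidate distributions, shown as a
# point of comparison for the paper's maximum inforpower criterion.
# Synthetic data; exponential vs. normal candidates, both fitted by MLE.
import math
import random

random.seed(0)
data = [random.expovariate(1.0) for _ in range(500)]  # skewed, positive sample
n = len(data)

# Exponential MLE: rate = n / sum(x); log-likelihood = n*log(rate) - rate*sum(x).
rate = n / sum(data)
ll_exp = n * math.log(rate) - rate * sum(data)
aic_exp = 2 * 1 - 2 * ll_exp   # one fitted parameter

# Normal MLE: mu = sample mean, sigma^2 = biased sample variance;
# maximized log-likelihood = -n/2 * (log(2*pi*sigma^2) + 1).
mu = sum(data) / n
var = sum((x - mu) ** 2 for x in data) / n
ll_norm = -0.5 * n * (math.log(2 * math.pi * var) + 1)
aic_norm = 2 * 2 - 2 * ll_norm  # two fitted parameters

best = "exponential" if aic_exp < aic_norm else "normal"
print(best)  # the skewed data favor the exponential family
```

The maximum inforpower criterion would slot into the same comparison loop, replacing the AIC score with the paper's inforpower value for each fitted candidate.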
Open Access Article
Recovering Einstein’s Mature View of Gravitation: A Dynamical Reconstruction Grounded in the Equivalence Principle
by Jaume de Haro and Emilio Elizalde
AppliedMath 2026, 6(1), 18; https://doi.org/10.3390/appliedmath6010018 - 21 Jan 2026
Abstract
The historical and conceptual foundations of General Relativity are revisited, putting the main focus on the physical meaning of the invariant , the Equivalence Principle, and the precise interpretation of spacetime geometry. It is argued that Albert Einstein initially sought a dynamical formulation in which encoded the gravitational effects, without invoking curvature as a physical entity. The now more familiar geometrical interpretation—identifying gravitation with spacetime curvature—gradually emerged through his collaboration with Marcel Grossmann and the adoption of the Ricci tensor in 1915. Nevertheless, in his 1920 Leiden lecture, Einstein explicitly reinterpreted spacetime geometry as the state of a physical medium—an “ether” endowed with metrical properties but devoid of mechanical substance—thereby actually rejecting geometry as an independent ontological reality. Building upon this mature view, gravitation is reconstructed from the Weak Equivalence Principle, understood as the exact compensation between inertial and gravitational forces acting on a body under a uniform gravitational field. From this fundamental principle, together with an extension of Fermat’s Principle to massive objects, the invariant is obtained, first in the static case, where the gravitational potential modifies the flow of proper time. Then, by applying the Lorentz transformation to this static invariant, its general form is derived for the case of matter in motion. The resulting invariant reproduces the relativistic form of Newton’s second law in proper time and coincides with the weak-field limit of General Relativity in the harmonic gauge. This approach restores the operational meaning of Einstein’s theory: spacetime geometry represents dynamical relations between physical measurements, rather than the substance of spacetime itself. By deriving the gravitational modification of the invariant directly from the Weak Equivalence Principle, Fermat’s Principle, and Lorentz invariance, this formulation clarifies the physical origin of the metric structure and resolves long-standing conceptual issues—such as the recurrent hole argument—while recovering all the empirical successes of General Relativity within a coherent and sound Machian framework.
Full article
(This article belongs to the Section Deterministic Mathematics)
Open Access Article
Assessing Cost Efficiency Thresholds in Fragmented Agriculture: A Gamma-Based Model of the Trade-Off Between Unit and Total Parcel Costs
by Elena Sánchez Arnau, Antonia Ferrer Sapena, Maria Carmen Cárcel-Mas, Claudia Sánchez Arnau and Enrique A. Sánchez Pérez
AppliedMath 2026, 6(1), 17; https://doi.org/10.3390/appliedmath6010017 - 20 Jan 2026
Abstract
Parcel size strongly influences agricultural production costs, and combining spatial and economic information within a mathematical setting helps to clarify this relationship. In this study, we introduce a Gamma-based stochastic framework to integrate actual parcel size distributions into cost estimates, an approach that, to our knowledge, has not been applied in this context. Using a representative traditional orchard system as a case study, parcel sizes (characterized by strong right skewness) are modelled with a Gamma distribution; for highly fragmented landscapes, a truncated Gamma on ha yields a mean parcel area of about ha. Results show that parcel-size heterogeneity substantially affects expected per-parcel costs; for example, calibrating ploughing at 800 EUR/ha leads to an average of ∼160 EUR/parcel, whereas intensive vegetable harvesting at 5000 EUR/ha reaches ∼2100 EUR/parcel. In our simulation, in which the main parameters have been fixed only approximately, with the aim of illustrating the methodology, results are given on an expected-cost scale relative to parcel area and operation intensity. Overall, the framework shows how parcel-size distributions condition cost estimates and provides a transferable basis for comparative analyses, while acknowledging limitations related to the area-only specification.
Full article
(This article belongs to the Section Probabilistic & Statistical Mathematics)
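The core cost relation the abstract describes (expected per-parcel cost = per-hectare rate × expected parcel area under a Gamma model) can be sketched by Monte Carlo. The shape and scale below are hypothetical choices giving a mean area of 0.2 ha, which reproduces the abstract's 800 EUR/ha → ∼160 EUR/parcel example; the paper's fitted parameters and truncation bounds are not given in this listing.

```python
# Monte Carlo sketch of expected per-parcel cost under a Gamma parcel-size
# model: cost_per_parcel = rate_per_ha * area, with area ~ Gamma(shape, scale).
# Hypothetical parameters chosen so E[area] = shape * scale = 0.2 ha.
import random

random.seed(42)
shape, scale = 2.0, 0.1        # E[area] = 0.2 ha (assumed, not from the paper)
rate_per_ha = 800.0            # ploughing cost, EUR/ha (from the abstract)

areas = [random.gammavariate(shape, scale) for _ in range(20000)]
mean_area = sum(areas) / len(areas)
expected_cost = rate_per_ha * mean_area
print(round(expected_cost, 1))  # close to 160 EUR/parcel
```

By linearity of expectation the simulation is only needed once truncation or nonlinear (non-area-proportional) cost terms enter; for the plain model, expected cost is simply rate × shape × scale.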
Open Access Article
Uncertainty-Aware Multimodal Fusion and Bayesian Decision-Making for DSS
by Vesna Antoska Knights, Marija Prchkovska, Luka Krašnjak and Jasenka Gajdoš Kljusurić
AppliedMath 2026, 6(1), 16; https://doi.org/10.3390/appliedmath6010016 - 20 Jan 2026
Abstract
Uncertainty-aware decision-making increasingly relies on multimodal sensing pipelines that must fuse correlated measurements, propagate uncertainty, and trigger reliable control actions. This study develops a unified mathematical framework for multimodal data fusion and Bayesian decision-making under uncertainty. The approach integrates adaptive Covariance Intersection (aCI) for correlation-robust sensor fusion, a Gaussian state–space backbone with Kalman filtering, heteroskedastic Bayesian regression with full posterior sampling via an affine-invariant MCMC sampler, and a Bayesian likelihood-ratio test (LRT) coupled to a risk-sensitive proportional–derivative (PD) control law. Theoretical guarantees are provided by bounding the state covariance under stability conditions, establishing convexity of the aCI weight optimization on the simplex, and deriving a Bayes-risk-optimal decision threshold for the LRT under symmetric Gaussian likelihoods. A proof-of-concept agro-environmental decision-support application is considered, where heterogeneous data streams (IoT soil sensors, meteorological stations, and drone-derived vegetation indices) are fused to generate early-warning alarms for crop stress and to adapt irrigation and fertilization inputs. The proposed pipeline reduces predictive variance and sharpens posterior credible intervals (up to 34% narrower 95% intervals and 44% lower NLL/Brier score under heteroskedastic modeling), while a Bayesian uncertainty-aware controller achieves 14.2% lower water usage and 35.5% fewer false stress alarms compared to a rule-based strategy. The framework is mathematically grounded yet domain-independent, providing a probabilistic pipeline that propagates uncertainty from raw multimodal data to operational control actions, and can be transferred beyond agriculture to robotics, signal processing, and environmental monitoring applications.
Full article
(This article belongs to the Section Probabilistic & Statistical Mathematics)
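The adaptive variant (aCI) used in the paper is not specified in this listing, but plain Covariance Intersection (the base technique it extends) can be sketched for two 2-D estimates, with the fusion weight chosen by a grid search that minimizes the fused trace. All numbers here are hypothetical:

```python
# Covariance Intersection (CI) for two estimates (x1, P1), (x2, P2) with
# unknown cross-correlation: P^{-1} = w*P1^{-1} + (1-w)*P2^{-1}, w in [0, 1],
# w chosen to minimize trace(P). 2x2 matrices, inverted in closed form.

def inv2(M):
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[ M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det,  M[0][0] / det]]

def mat_add(A, B, a=1.0, b=1.0):
    return [[a * A[i][j] + b * B[i][j] for j in range(2)] for i in range(2)]

def mat_vec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def ci_fuse(x1, P1, x2, P2, steps=100):
    I1, I2 = inv2(P1), inv2(P2)
    y1, y2 = mat_vec(I1, x1), mat_vec(I2, x2)   # information vectors
    best = None
    for k in range(steps + 1):
        w = k / steps
        P = inv2(mat_add(I1, I2, w, 1.0 - w))    # fused covariance
        tr = P[0][0] + P[1][1]
        if best is None or tr < best[0]:
            y = [w * a + (1.0 - w) * b for a, b in zip(y1, y2)]
            best = (tr, mat_vec(P, y), P)
    return best  # (trace, fused state, fused covariance)

# Hypothetical soil-moisture estimates from two sensor streams.
x1, P1 = [0.30, 0.10], [[0.040, 0.002], [0.002, 0.090]]
x2, P2 = [0.34, 0.12], [[0.100, 0.000], [0.000, 0.030]]
tr, x, P = ci_fuse(x1, P1, x2, P2)
print(tr, x)
```

Since w = 1 and w = 0 recover P1 and P2 exactly, the optimized fused trace never exceeds either input's trace; the paper's aCI additionally adapts the weight criterion, and its convexity result guarantees the inner optimization is well posed.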
Open Access Article
A Maple Implementation for Deterministically Certifying Isolated Simple Zeros of Over-Determined Polynomial Systems with Interval Arithmetic and Its Applications
by Xiaojie Dou, Jin-San Cheng and Junyi Wen
AppliedMath 2026, 6(1), 15; https://doi.org/10.3390/appliedmath6010015 - 19 Jan 2026
Abstract
This paper presents a Maple implementation of an interval verification method for identifying isolated simple zeros in square polynomial systems. Compared to the known MATLAB (R2019b) implementation, the Maple-based approach achieves significantly higher numerical accuracy. The implementation enables polynomial evaluation at specific points to yield results with very small absolute values—sufficiently precise to reach error bounds computed through theoretical formulations for moderate-sized systems. This advancement allows the deterministic certification of isolated simple zeros in over-determined polynomial systems containing approximately 10,000 complex zeros. As a practical demonstration, the method is further applied to rigorously verify isolated multiple zeros in smaller-scale polynomial systems.
Full article
(This article belongs to the Section Computational and Numerical Mathematics)
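The Maple code itself is not shown in this listing, but the certification idea behind such interval methods can be sketched in one dimension: if an interval Newton step maps a box strictly into itself, the box provably contains exactly one simple zero. The toy function f(x) = x² − 2 below is an assumption for illustration, not a system from the paper, and a rigorous implementation would additionally use outward (directed) rounding.

```python
# 1-D interval Newton step: N(X) = m - f(m) / F'(X), with m = mid(X) and
# F'(X) an interval enclosure of f' over X. Strict containment N(X) ⊂ X
# deterministically certifies a unique simple zero of f in X.
# Caveat: true certification needs outward rounding, omitted in this sketch.

def interval_div(a, B):
    # a / [b_lo, b_hi] for scalar a and an interval B with 0 not in B.
    lo, hi = B
    q = (a / lo, a / hi)
    return (min(q), max(q))

def newton_step(f, dF, X):
    lo, hi = X
    m = 0.5 * (lo + hi)
    q_lo, q_hi = interval_div(f(m), dF(X))
    return (m - q_hi, m - q_lo)

f = lambda x: x * x - 2.0
dF = lambda X: (2.0 * X[0], 2.0 * X[1])  # encloses f'(x) = 2x for X > 0

X = (1.0, 2.0)                            # candidate box around sqrt(2)
N = newton_step(f, dF, X)
certified = X[0] < N[0] and N[1] < X[1]   # strict containment => unique zero
print(N, certified)                       # (1.375, 1.4375) True
```

For the over-determined systems in the paper, the same containment test is applied to a suitably square-ized system in higher dimension (e.g., via a Krawczyk-type operator), which is where the Maple implementation's extra precision pays off.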
Topics
Topic in
AppliedMath, Axioms, Computation, Mathematics, Symmetry
A Real-World Application of Chaos Theory
Topic Editors: Adil Jhangeer, Mudassar Imran
Deadline: 28 February 2026
Topic in
AppliedMath, Axioms, Fractal Fract, Mathematics, Symmetry
Modeling, Stability, and Control of Dynamic Systems and Their Applications
Topic Editors: Quanxin Zhu, Alexander Zaslavski
Deadline: 30 June 2026
Topic in
AppliedMath, Mathematics, Symmetry, Geometry, Axioms
Functional Equations: Methods and Applications
Topic Editors: Yunyun Yang, Gabriele Bonanno, Sandra Pinelas
Deadline: 31 August 2026
Topic in
Applied Sciences, Axioms, Information, Mathematics, Symmetry, AppliedMath
Fuzzy Optimization and Decision Making
Topic Editors: Hengjie Zhang, Quanbo Zha, Jing Xiao
Deadline: 30 September 2026
Special Issues
Special Issue in
AppliedMath
Mathematical Structures in Quantum Information and Photonics: From Foundations to Applications
Guest Editor: Artur Czerwinski
Deadline: 28 February 2026
Special Issue in
AppliedMath
Nonlinear Dynamics and Complex Phenomena in Fluid Mechanics and Related Systems
Guest Editor: Chaudry Khalique
Deadline: 31 March 2026
Special Issue in
AppliedMath
Advanced Mathematical Modeling, Dynamics and Applications
Guest Editor: Ophir Nave
Deadline: 30 April 2026
Special Issue in
AppliedMath
Mathematical Innovations in Thermal Dynamics and Optimization
Guest Editors: Libor Pekař, Radek Matušů, Xuan Zhang, Long Zhang
Deadline: 31 May 2026