AppliedMath, Volume 6, Issue 2 (February 2026) – 16 articles

Cover Story: We present a two-stage training pipeline for sigmoid neural networks that first learns safe weight bounds with Simulated Annealing and then optimizes weights inside those bounds using a Genetic search plus BFGS refinement. The bound-selection step aims to avoid saturation-prone regimes, improving training stability and generalization. Across diverse classification and regression datasets, the proposed approach achieves lower average errors than strong baselines, at the cost of increased computation.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF form, click the "PDF Full-text" link and open it with the free Adobe Reader.
30 pages, 734 KB  
Article
A Sixth-Order Vieta–Lucas Polynomial-Based Block Method with Optimal Stability for Solving Practical First-Order ODE Models
by Olugbade Ezekiel Faniyi, Mark Ifeanyi Modebei, Matthew Olanrewaju Oluwayemi and Ikechukwu Jackson Otaide
AppliedMath 2026, 6(2), 34; https://doi.org/10.3390/appliedmath6020034 - 13 Feb 2026
Viewed by 367
Abstract
This paper addresses the numerical integration of first-order ordinary differential equations by developing a continuous linear multistep block method. The method is constructed through the approximation of the exact solution using a linear combination of shifted Vieta–Lucas polynomials defined on the interval [0, 4]. The use of this polynomial basis extends traditional approximation approaches and provides improved stability while maintaining high-order accuracy. Theoretical analysis shows that the proposed method attains sixth-order convergence and possesses an extended stability interval of [−19.5, 0], ensuring reliable performance for moderately stiff problems. Numerical experiments confirm that the method achieves lower errors and higher computational efficiency than conventional methods. These results demonstrate the suitability of the proposed approach for scientific computing applications, including engineering simulations and mathematical modeling, where accurate numerical integration of first-order differential equations is required.
(This article belongs to the Topic Advances in Natural Computing: Methods and Applications)
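For reference, the Vieta–Lucas polynomials satisfy the three-term recurrence V_0 = 2, V_1 = x, V_n = x·V_{n−1} − V_{n−2}. A minimal sketch, assuming the shift from [0, 4] onto the natural domain [−2, 2] is the affine map x ↦ x − 2 (the paper's shifted basis may be normalized differently):

```python
def vieta_lucas(n, x):
    """Evaluate the Vieta-Lucas polynomial V_n at x via the three-term
    recurrence V_0 = 2, V_1 = x, V_n = x*V_{n-1} - V_{n-2}."""
    if n == 0:
        return 2.0
    prev, curr = 2.0, float(x)
    for _ in range(2, n + 1):
        prev, curr = curr, x * curr - prev
    return curr

def shifted_vieta_lucas(n, x):
    """Assumed shift: map [0, 4] onto the natural domain [-2, 2] via x -> x - 2."""
    return vieta_lucas(n, x - 2.0)
```

For example, V_2(x) = x² − 2 and V_3(x) = x³ − 3x follow directly from the recurrence; the basis relates to Chebyshev polynomials via V_n(2 cos θ) = 2 cos(nθ), which is the source of its favorable stability behavior.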
19 pages, 622 KB  
Article
Dispersive Quiescent Optical Solitons with DWDM Topology
by Elsayed M. E. Zayed, Mona El-Shater, Ahmed H. Arnous, Lina S. Calucag and Anjan Biswas
AppliedMath 2026, 6(2), 33; https://doi.org/10.3390/appliedmath6020033 - 13 Feb 2026
Viewed by 257
Abstract
The paper retrieves quiescent dispersive solitons in dispersion-flattened optical fibers having nonlinear chromatic dispersion and the Kerr law of self-phase modulation. The platform model is the Schrödinger–Hirota equation. The enhanced direct algebraic method has made this retrieval possible. The intermediary functions are Jacobi’s elliptic function and Weierstrass’ elliptic function. The final results appear with parameter constraints for the existence of such solitons.
34 pages, 2420 KB  
Article
Exploring Artificial Intelligence and Machine Learning Approaches to Legal Reasoning
by Wullianallur Raghupathi
AppliedMath 2026, 6(2), 32; https://doi.org/10.3390/appliedmath6020032 - 12 Feb 2026
Viewed by 677
Abstract
Modeling legal reasoning with artificial intelligence and machine learning presents formidable challenges. Legal decisions emerge from a complex interplay of factual circumstances, statutory interpretation, case precedent, jurisdictional variation, and human judgment—including the behavioral characteristics of judges and juries. This paper takes an exploratory approach to investigating how contemporary ML techniques might capture aspects of this complexity. Using pharmaceutical patent litigation as an illustrative domain, we develop a multi-layer analytical pipeline integrating text mining, clustering, topic modeling, and classification to analyze 698 U.S. federal district court decisions spanning January 2016 through December 2018, comprising substantive validity and infringement rulings under the Hatch-Waxman regulatory framework. Results demonstrate that the pipeline achieves 85–89% prediction accuracy—substantially exceeding the 42% baseline majority-class rate and comparing favorably with prior legal prediction studies—while producing interpretable intermediate outputs: clusters that correspond to recognized doctrinal categories (Abbreviated New Drug Application—ANDA litigation, obviousness, written description, claim construction) and topics that capture recurring legal themes. We discuss what these findings reveal about both the possibilities and limitations of computational approaches to legal reasoning, acknowledging the significant gap between statistical prediction and genuine legal understanding.
14 pages, 309 KB  
Article
Hadamard Products of Projective Varieties with Errors and Erasures
by Edoardo Ballico
AppliedMath 2026, 6(2), 31; https://doi.org/10.3390/appliedmath6020031 - 12 Feb 2026
Viewed by 255
Abstract
In Algebraic Statistics, M.A. Cueto, J. Morton and B. Sturmfels introduced a statistical model, the Restricted Boltzmann Machine, whose study requires the Hadamard product of two or more vectors of an affine or projective space, i.e., the componentwise product of their entries, thereby bringing Algebraic Geometry into play. The Hadamard product X ⋆ Y of two subvarieties X, Y ⊆ ℙ^n is defined as the Zariski closure of the set of Hadamard products of their elements. Recently, D. Antolini and A. Oneto introduced and studied the notion of Hadamard rank, and we prove some results on it. Moreover, we prove some theorems on the dimension and shape of the Hadamard powers of X. The aim is to describe the images of the Hadamard products without taking the Zariski closure. We also discuss several scenarios in which some of the data, i.e., the variety X, is erroneous or cannot be recovered.
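Concretely, the Hadamard product of two points given in homogeneous coordinates is just the componentwise product, which is undefined as a projective point when every coordinate product vanishes. A minimal sketch:

```python
def hadamard(p, q):
    """Componentwise (Hadamard) product of two points in homogeneous
    coordinates. As a projective point it is undefined when every
    coordinate product vanishes."""
    r = [a * b for a, b in zip(p, q)]
    if all(c == 0 for c in r):
        raise ValueError("Hadamard product undefined: all coordinates vanish")
    return r
```

For instance, (1 : 2 : 3) ⋆ (4 : 0 : 5) = (4 : 0 : 15), while (1 : 0) ⋆ (0 : 1) is undefined; such degenerate pairs are exactly why the image of the product map need not be closed, motivating the Zariski closure in the definition above.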
49 pages, 571 KB  
Article
General Stochastic Vector Integration: A New Approach
by Moritz Sohns and Ali Zakaria Idriss
AppliedMath 2026, 6(2), 30; https://doi.org/10.3390/appliedmath6020030 - 11 Feb 2026
Viewed by 350
Abstract
This paper presents a topology-based approach to the general vector-valued stochastic integral for predictable integrands and semimartingale integrators. The integral is defined as a unique mapping that achieves closure under the semimartingale topology. While the topology and the closedness of the integral operator are well known, the method of defining the integral via this mapping is new and offers a significantly more efficient path to understanding the general stochastic integral compared to existing techniques. Instead of defining a basic integral and then extending it through a sequence of case distinctions, our construction performs a single topological closure: we define the vector stochastic integral as the unique continuous extension of the simple-predictable integral under the Émery topology, within the predictable σ-algebra. This single step yields the general predictable, vector-valued integral without invoking semimartingale decompositions, Doob–Meyer, or detours through H²/quasimartingale frameworks and without re-engineering from the componentwise to the vector case.
(This article belongs to the Section Probabilistic & Statistical Mathematics)
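For context, the simple-predictable integral whose continuous extension such constructions take is classically given (standard notation, not necessarily the paper's) by

```latex
H = h_0 \mathbf{1}_{\{0\}} + \sum_{i=1}^{n} h_i \mathbf{1}_{(t_i, t_{i+1}]},
\qquad
(H \cdot S)_t = \sum_{i=1}^{n} h_i \left( S_{t_{i+1} \wedge t} - S_{t_i \wedge t} \right),
```

where each h_i is F_{t_i}-measurable and bounded. The abstract's point is that a single closure of this elementary map in the Émery topology already yields the general predictable, vector-valued integral.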
23 pages, 1536 KB  
Article
Optimal Control of a Genotype-Structured Prey–Predator Model: Strategies for Ecological Rescue and Oscillatory Dynamics Restoration
by Preet Mishra, Shyam Kumar, Sorokhaibam Cha Captain Vyom and R. K. Brojen Singh
AppliedMath 2026, 6(2), 29; https://doi.org/10.3390/appliedmath6020029 - 10 Feb 2026
Viewed by 493
Abstract
Evolutionary changes can significantly impact interactions among populations and disrupt ecosystems by driving extinctions or collapsing population oscillations, posing substantial challenges to biodiversity conservation. This study addresses the ecological rescue of a predator population threatened by a mutant prey population using the optimal control method. To this end, we study a model that incorporates a genotypically structured prey population comprising wild-type, heterozygous, and mutant prey types, as well as the predator population. We prove that this model has both local and global existence and uniqueness of solutions, ensuring the model’s robustness. Then, we applied the optimal control method, incorporating Pontryagin’s Maximum Principle, to introduce a control input into the model and minimize the mutant population, thereby stabilizing the ecosystem. We utilize a reproduction number and a control efficacy measure to numerically demonstrate that the undesired dynamics of the model can be controlled, leading to the suppression of the mutant and the restoration of the oscillatory dynamics of the system. These findings demonstrate the applicability of optimal control strategies and provide a mathematical framework for managing such ecological disruptions.
28 pages, 728 KB  
Article
The Junction of PDEs, Financial Mathematics and Probability: Deriving Classical and Generalized Black-Scholes–Merton Formulas
by Len Meas, Chhaunny Chhum, Phichhang Ou and Mara Mong
AppliedMath 2026, 6(2), 28; https://doi.org/10.3390/appliedmath6020028 - 10 Feb 2026
Viewed by 546
Abstract
This paper explores the intersection of three foundational areas—partial differential equations, financial mathematics, and probability—by providing a rigorous framework for the classical Black-Scholes–Merton option pricing model and its generalized extensions. For the classical model, a change of variables is employed to transform the Black-Scholes partial differential equation into the linear heat equation. The resulting formulation enables the use of Fourier transform techniques and the fundamental solution (heat kernel) to derive the closed-form Black-Scholes–Merton formula. To extend the classical setting, the interest rate in the discount factor and the stock’s rate of return are modeled using a multifactor Vasicek process, leading to a more sophisticated and realistic option pricing framework. In addition, a complementary derivation based on probabilistic methods, via a change of measure, yields an alternative explicit pricing formula. Numerical experiments based on Monte Carlo simulation show excellent agreement with the closed-form solutions and illustrate notable gains in computational efficiency. The comparative analysis further demonstrates that stochastic interest rates systematically produce lower option prices than the classical constant-rate model, underscoring the importance of accurate interest-rate modeling in practical valuation.
(This article belongs to the Section Probabilistic & Statistical Mathematics)
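The constant-rate baseline that the abstract's Monte Carlo experiments are compared against can be sketched as follows. This is the standard Black-Scholes-Merton call formula, not the paper's Vasicek extension, and the parameter values are illustrative only:

```python
import math
import random

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Closed-form Black-Scholes-Merton price of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def mc_call(S, K, r, sigma, T, n=100_000, seed=1):
    """Monte Carlo price under risk-neutral geometric Brownian motion,
    sampling only the terminal stock price."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        ST = S * math.exp((r - 0.5 * sigma ** 2) * T + sigma * math.sqrt(T) * z)
        total += max(ST - K, 0.0)
    return math.exp(-r * T) * total / n
```

With illustrative inputs S = K = 100, r = 5%, σ = 20%, T = 1, the closed form gives about 10.45, and the Monte Carlo estimate agrees to within sampling error, mirroring the kind of agreement the abstract reports.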
23 pages, 351 KB  
Review
Modeling COVID-19 Population Dynamics with a Viral Reservoir and Human Mobility
by Jené Mercia van Schalkwyk, Peter Joseph Witbooi, Sibaliwe Maku Vyambwera and Mozart Umba Nsuami
AppliedMath 2026, 6(2), 27; https://doi.org/10.3390/appliedmath6020027 - 10 Feb 2026
Viewed by 370
Abstract
This article introduces and thoroughly examines a novel deterministic compartmental model of COVID-19 dynamics. The model uniquely incorporates compartments for symptomatic and asymptomatic individuals alongside an environmental reservoir for the pathogen. It also accounts for a steady inflow of infected visitors and a steady outflow from the removed class. The mathematical soundness of the model is established by identifying the invariant region and ensuring positivity of solutions. Notably, during surges of infected visitors, certain classes maintain positive minimum values. We analytically determine endemic equilibrium points and prove the global stability of the disease-free equilibrium. Sensitivity analysis highlights the significant roles of transmission rates and asymptomatic individuals in disease spread. Simulation results corroborate the theoretical findings and provide additional insights into the model’s predictive capabilities.
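A minimal forward-Euler sketch of an SIR-type model with an environmental reservoir conveys the reservoir coupling. All parameters here are hypothetical, and this is deliberately simpler than the paper's model, which additionally splits symptomatic and asymptomatic classes and includes visitor inflow and outflow:

```python
def simulate(beta_h=0.3, beta_w=0.1, gamma=0.2, xi=0.05, delta=0.5,
             S0=0.99, I0=0.01, R0=0.0, W0=0.0, h=0.01, steps=10_000):
    """Forward-Euler SIR model with an environmental reservoir W.
    Infection occurs via direct contact (beta_h) and via the reservoir
    (beta_w); infected individuals shed pathogen into W at rate xi, and
    the reservoir decays at rate delta. Illustrative parameters only."""
    S, I, R, W = S0, I0, R0, W0
    for _ in range(steps):
        new_inf = (beta_h * I + beta_w * W) * S
        dS = -new_inf
        dI = new_inf - gamma * I
        dR = gamma * I
        dW = xi * I - delta * W
        S += h * dS; I += h * dI; R += h * dR; W += h * dW
    return S, I, R, W
```

Because dS + dI + dR = 0 at every step, the human population is conserved while the reservoir W rises and falls with shedding, the mechanism the abstract's reservoir compartment captures.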
24 pages, 9493 KB  
Article
A Benchmarking Study for Algorithm Selection in Scientific Machine Learning (SciML): PINN vs. gPINN for Solving Partial Differential Equations
by Muhammad Azam, Imran Shabir Chuhan, Muhammad Shafiq Ahmed and Kaleem Arshid
AppliedMath 2026, 6(2), 26; https://doi.org/10.3390/appliedmath6020026 - 9 Feb 2026
Cited by 1 | Viewed by 608
Abstract
Recent advances in physics-informed neural networks (PINNs) have highlighted the need for systematic criteria for selecting appropriate algorithms to solve differential equations. This paper presents a numerical comparison between standard PINNs and gradient-enhanced PINNs (gPINNs) used to solve a high-order partial differential equation (PDE). To verify the accuracy and convergence behavior of both methods, we solve a fourth-order PDE whose analytical solution is known. We synthesize our findings into a practical selection guide: gPINN is recommended for problems requiring high accuracy in gradient fields or operating with sparse data, whereas standard PINN is advised for strongly nonlinear or computationally constrained scenarios. This framework provides a clear, evidence-based policy for algorithm choice in SciML. Beyond numerical comparison, we provide an analytical interpretation linking solver performance to the spectral and stiffness properties of each PDE class, offering a principled basis for algorithm selection.
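The gradient enhancement in gPINN can be illustrated on a deliberately tiny toy problem, not the paper's fourth-order PDE: for the ODE u′ = cos x with the one-parameter ansatz u(x; a) = a sin x, the residual is r(x) = (a − 1) cos x, and gPINN additionally penalizes its derivative r′(x) = −(a − 1) sin x at the collocation points:

```python
import math

def pinn_loss(a, xs):
    """Mean squared PDE residual r(x) = d/dx[a sin x] - cos x = (a-1) cos x."""
    return sum(((a - 1.0) * math.cos(x)) ** 2 for x in xs) / len(xs)

def gpinn_loss(a, xs, w=1.0):
    """gPINN adds the squared residual gradient r'(x) = -(a-1) sin x,
    weighted by w."""
    grad = sum(((a - 1.0) * math.sin(x)) ** 2 for x in xs) / len(xs)
    return pinn_loss(a, xs) + w * grad

# collocation points on [0, pi]
xs = [i * math.pi / 20 for i in range(21)]
```

Both losses vanish at the true a = 1, but the extra gradient term makes the loss landscape steeper around the solution, which is the intuition behind gPINN's reported gains in gradient-field accuracy and sparse-data regimes.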
17 pages, 1251 KB  
Article
The Chain Rule for Fractional-Order Derivatives: Theories, Challenges, and Unifying Directions
by Sroor M. Elnady, Mohamed A. El-Beltagy, Mohammed E. Fouda and Ahmed G. Radwan
AppliedMath 2026, 6(2), 25; https://doi.org/10.3390/appliedmath6020025 - 9 Feb 2026
Viewed by 495
Abstract
The chain rule is a foundational concept in calculus, critical for differentiating composite functions, especially those appearing in modern AI techniques. Its extension to fractional calculus presents challenges due to the integral-based nature and intrinsic memory effects of these fractional operators. This survey provides a review of chain-rule formulations across the major known fractional derivatives (FDs), including Riemann-Liouville (RL), Caputo, Caputo-Fabrizio (CF), Atangana-Baleanu-Riemann (ABR), Atangana-Baleanu-Caputo (ABC), and Caputo-Fabrizio with Gaussian kernel (CFG). The main contribution here is the introduction of a unified criterion, denoted as C, which synthesizes and extends previous classification frameworks for systematically formulating the chain rule across different operators. Each chain rule is examined in terms of its derivation, operator structure, and scope of applicability. In addition, the survey analyzes series-based approximations that appear in computing these derivatives, highlighting the minimum number of terms required to achieve acceptable mean absolute error (MAE). By consolidating theoretical developments, derivation methods, and numerical strategies, this paper provides a comprehensive resource for researchers and practitioners working with fractional-order models.
(This article belongs to the Section Computational and Numerical Mathematics)
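For concreteness, the Caputo derivative, one of the operators surveyed, is defined (in its standard form, for n − 1 < α < n) by

```latex
{}^{C}\!D^{\alpha} f(t)
= \frac{1}{\Gamma(n-\alpha)} \int_{0}^{t}
  \frac{f^{(n)}(\tau)}{(t-\tau)^{\alpha - n + 1}}\, d\tau .
```

The integral kernel weights the entire history of f on [0, t], which is the memory effect the abstract refers to: because the operator is nonlocal, the classical one-term chain rule (f ∘ g)′(t) = f′(g(t)) g′(t) has no exact fractional counterpart, and the surveyed formulations are series-based approximations instead.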
13 pages, 1236 KB  
Article
On the Use of the Quantum Alternating Operator Ansatz in Quantum-Informed Recursive Optimization: A Case Study on the Minimum Vertex Cover
by Pablo Ramos-Ruiz, Antonio Miguel Fuentes-Jiménez, José E. Ramos-Ruiz and Inmaculada Jiménez-Manchado
AppliedMath 2026, 6(2), 24; https://doi.org/10.3390/appliedmath6020024 - 6 Feb 2026
Viewed by 290
Abstract
In recent years, several quantum algorithms have been proposed for addressing combinatorial optimization problems. Among them, the Quantum Approximate Optimization Algorithm (QAOA) has become a widely used approach. However, reported limitations of QAOA have motivated the development of multiple algorithmic variants, including recursive hybrid methods such as the Recursive Quantum Approximate Optimization Algorithm (RQAOA), as well as the Quantum-Informed Recursive Optimization (QIRO) framework. In this work, we integrate the Quantum Alternating Operator Ansatz within the QIRO framework in order to improve its quantum inference stage. Both the original and the enhanced versions of QIRO are applied to the Minimum Vertex Cover problem, an NP-complete problem of practical relevance. Performance is evaluated on a benchmark of Erdős–Rényi graph instances with varying sizes, densities, and random seeds. The results show that the proposed modification leads to a higher number of successfully solved instances across the considered benchmark, indicating that refinements of the variational layer can improve the effectiveness of the QIRO framework.
(This article belongs to the Special Issue Optimization and Machine Learning)
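For orientation on the benchmark problem itself, Minimum Vertex Cover asks for a smallest vertex set touching every edge. A brute-force exact solver, usable only on small instances and entirely separate from the quantum pipeline, is:

```python
from itertools import combinations

def is_cover(cover, edges):
    """A vertex set covers the graph iff every edge has an endpoint in it."""
    return all(u in cover or v in cover for u, v in edges)

def min_vertex_cover(n, edges):
    """Exact minimum vertex cover of a graph on vertices 0..n-1 by
    exhaustive search over subsets in increasing size (exponential time,
    reflecting the problem's NP-completeness)."""
    for k in range(n + 1):
        for subset in combinations(range(n), k):
            if is_cover(set(subset), edges):
                return set(subset)
    return set(range(n))
```

Such an exact solver is how success can be judged on small Erdős–Rényi instances: a heuristic or quantum-assisted method "solves" an instance when it matches the optimum size.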
38 pages, 2287 KB  
Article
Optimizing the Bounds of Neural Networks Using a Novel Simulated Annealing Method
by Ioannis G. Tsoulos, Vasileios Charilogis and Dimitrios Tsalikakis
AppliedMath 2026, 6(2), 23; https://doi.org/10.3390/appliedmath6020023 - 6 Feb 2026
Viewed by 476
Abstract
Artificial neural networks are reliable machine learning models that have been applied to a multitude of practical and scientific applications in recent decades, with examples from physics, chemistry, medicine, and other areas. To apply them effectively, their parameters must be adapted using optimization techniques. To be effective, however, an optimization technique must know the range of values for the parameters of the artificial neural network, so that it can adequately train the network. In most cases this is not possible, as these ranges are also significantly affected by the inputs the network receives from the problem it is called upon to solve. This situation usually results in networks becoming trapped in local minima of the error function or, even worse, in overfitting, where, although the training error achieves low values, the network performs poorly on the corresponding test set. To address this limitation, this work proposes a novel two-stage training approach in which a simulated annealing (SA)-based preprocessing stage is employed to automatically identify optimal parameter value intervals before the application of any optimization method to train the neural network. Unlike similar approaches that rely on fixed or heuristically selected parameter bounds, the proposed preprocessing technique explores the parameter space probabilistically, guided by a temperature-controlled acceptance mechanism that balances global exploration and local refinement. The proposed method has been successfully applied to a wide range of classification and regression problems, and comparative results are presented in detail in the present work.
(This article belongs to the Section Computational and Numerical Mathematics)
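The temperature-controlled acceptance mechanism described above follows the standard Metropolis scheme. A generic one-dimensional sketch, not the paper's neural-network-specific bound-search variant:

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995,
                        iters=5000, seed=0):
    """Minimize f by simulated annealing. Downhill moves are always
    accepted; uphill moves are accepted with probability exp(-delta/T),
    so the search explores globally while T is high and refines locally
    as T cools."""
    rng = random.Random(seed)
    x, fx, t = x0, f(x0), t0
    best_x, best_f = x, fx
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling  # geometric cooling schedule
    return best_x, best_f
```

In the paper's setting the state would be a vector of candidate parameter intervals rather than a scalar, but the acceptance rule and cooling schedule play the same exploration-versus-refinement role.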
44 pages, 2431 KB  
Article
Mathematical Approaches for the Characterization and Analysis of Molecular Markers in the Study of the Progression and Severity of Amyotrophic Lateral Sclerosis
by Luisa Carracciuolo, Ugo D’Amora, Raffaele Dubbioso and Ines Fasolino
AppliedMath 2026, 6(2), 22; https://doi.org/10.3390/appliedmath6020022 - 5 Feb 2026
Cited by 1 | Viewed by 403
Abstract
Amyotrophic Lateral Sclerosis (ALS) is a progressive neurodegenerative disorder for which, despite its severity, no validated biomarker currently exists to support early diagnosis, limiting therapeutic effectiveness and patient survival. In this context, mathematical modeling becomes essential: it allows us to maximize the information obtainable from a limited number of samples, identify patterns that may not be directly observable, and estimate the relative contribution of different molecular markers to ALS progression. In this work, we propose methods for qualitatively and quantitatively evaluating the relevance of selected biomarkers in ALS classification and disease-state identification, laying the foundations for a protocol useful for constructing “digital twins” of the entire process of study, diagnosis, and treatment of the disease from the perspective of innovative precision medicine.
17 pages, 912 KB  
Review
Fifth-Order Block Hybrid Approach for Solving First-Order Stiff Ordinary Differential Equations
by Ibrahim Mohammed Dibal and Yeak Su Hoe
AppliedMath 2026, 6(2), 21; https://doi.org/10.3390/appliedmath6020021 - 5 Feb 2026
Viewed by 394
Abstract
This study introduces a novel single-step hybrid block method with three intra-step points that attains fifth-order accuracy, offering an accurate and computationally economical tool for solving first-order differential equations. The method is specifically designed to handle first-order differential equations with efficiency and precision while employing a constant step size throughout the computation. To further enhance accuracy, interpolation techniques are incorporated to approximate function values at specific positions. The fundamental properties of the method are then analyzed to verify its mathematical soundness; these analyses confirm that the scheme satisfies the essential requirements of stability, consistency, and convergence, ensuring reliability in practical applications. In addition, the method demonstrates strong adaptability, making it suitable for a broad spectrum of problem settings that involve both stiff and non-stiff systems. Numerical experiments are carried out, and the results consistently demonstrate that the proposed method is robust and effective under various test cases. The outcomes further reveal that it frequently outperforms several existing numerical approaches in terms of both accuracy and computational efficiency.
(This article belongs to the Section Computational and Numerical Mathematics)
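The stiffness issue motivating such methods is easy to reproduce with the two simplest one-step schemes on the test equation y′ = −λy. This is a standard illustration, not the paper's fifth-order block method:

```python
def explicit_euler(lam, y0, h, steps):
    """y' = -lam*y; the explicit update y <- (1 - h*lam)*y is stable
    only when |1 - h*lam| <= 1, i.e. h <= 2/lam."""
    y = y0
    for _ in range(steps):
        y += h * (-lam * y)
    return y

def implicit_euler(lam, y0, h, steps):
    """Backward Euler: y_{n+1} = y_n / (1 + h*lam), stable for all h > 0
    (A-stable), which is why implicit-flavored schemes suit stiff problems."""
    y = y0
    for _ in range(steps):
        y = y / (1.0 + h * lam)
    return y
```

With λ = 50 and h = 0.1 the explicit iterate multiplies by −4 each step and diverges, while the implicit one decays toward the true solution; higher-order block methods aim to keep such stability while raising accuracy.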
26 pages, 5240 KB  
Article
Enhanced Assumption-Aware Linear Discriminant Analysis for the Wisconsin Breast Cancer Dataset: A Guide to Dimensionality Reduction and Prediction with Performance Comparable to Machine Learning Methods
by Vasiliki Pantoula, Vasileios Mandikas and Tryfon Daras
AppliedMath 2026, 6(2), 20; https://doi.org/10.3390/appliedmath6020020 - 3 Feb 2026
Viewed by 438
Abstract
The analysis of multivariate data is a central issue in biomedical research, where the accurate classification of patients and the extraction of reliable conclusions are of critical importance. Linear Discriminant Analysis (LDA) remains one of the most established methods for both dimensionality reduction and classification of data. In this paper, we examine in detail the theoretical foundations, assumptions, and statistical properties of LDA, and apply the method step by step to real data from the Breast Cancer Wisconsin (Diagnostic) database, which includes cellular features from breast biopsy samples with the aim of distinguishing benign from malignant tumors. Emphasis is placed on the importance of the method’s assumptions, such as multivariate normality, equality of covariance matrices, and absence of multicollinearity, demonstrating that their fulfillment leads to significant improvements in model performance. Specifically, careful preprocessing and strict adherence to these assumptions increase classification accuracy from 95.6% (94.7% cross-validated) to 97.8% (97.4% cross-validated). To our knowledge, this study is the first to demonstrate the dual use of LDA as both a dimensionality-reduction tool and a predictive classification model for this medical database within the same biomedical analysis framework. Moreover, we provide, for the first time, a systematic comparison between our assumption-aware LDA model and related studies employing the most accurate machine-learning classifiers reported in the literature for this dataset, showing that classical LDA achieves accuracy comparable to these more complex methods. The resulting discriminant model, which uses 13 variables out of the original 30, can be applied easily by clinical researchers to classify new cases as benign or malignant, while simultaneously providing interpretable coefficients that reveal the underlying relationships among variables. The implementation is carried out in the SPSS environment, following the theoretical steps described in the paper, thus offering a user-friendly and reproducible framework for reliable application. In addition, the study establishes a structured and transparent workflow for the proper application of LDA in biomedical research by explicitly linking assumption verification, preprocessing, dimensionality reduction, and classification.
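In the two-class case, the discriminant computation underlying LDA reduces to the Fisher direction w = S_w⁻¹(μ₁ − μ₀) with a decision threshold at the midpoint of the projected class means. A from-scratch two-dimensional sketch on toy data (the paper itself works in SPSS with 13 of the 30 Wisconsin variables):

```python
def mean(vs):
    """Componentwise mean of a list of points."""
    n = len(vs)
    return [sum(v[i] for v in vs) / n for i in range(len(vs[0]))]

def pooled_cov_2d(class0, class1):
    """Pooled within-class covariance (2x2), per LDA's equal-covariance
    assumption. Returns (cov, mean0, mean1)."""
    def scatter(vs, m):
        s = [[0.0, 0.0], [0.0, 0.0]]
        for v in vs:
            d = [v[0] - m[0], v[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
        return s
    m0, m1 = mean(class0), mean(class1)
    s0, s1 = scatter(class0, m0), scatter(class1, m1)
    dof = len(class0) + len(class1) - 2
    cov = [[(s0[i][j] + s1[i][j]) / dof for j in range(2)] for i in range(2)]
    return cov, m0, m1

def fisher_lda(class0, class1):
    """Fisher direction w = Sw^{-1}(mu1 - mu0) and midpoint threshold."""
    cov, m0, m1 = pooled_cov_2d(class0, class1)
    a, b = cov[0]
    c, d = cov[1]
    det = a * d - b * c  # 2x2 inverse by hand
    inv = [[d / det, -b / det], [-c / det, a / det]]
    diff = [m1[0] - m0[0], m1[1] - m0[1]]
    w = [inv[0][0] * diff[0] + inv[0][1] * diff[1],
         inv[1][0] * diff[0] + inv[1][1] * diff[1]]
    t = 0.5 * (w[0] * (m0[0] + m1[0]) + w[1] * (m0[1] + m1[1]))
    return w, t

def predict(x, w, t):
    """Assign class 1 when the projection exceeds the threshold."""
    return 1 if w[0] * x[0] + w[1] * x[1] > t else 0
```

The interpretable coefficients the abstract highlights are exactly the entries of w: each one weighs how strongly a (standardized) feature pushes a case toward the malignant or benign side of the threshold.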
12 pages, 1058 KB  
Article
Inforpower: Quantifying the Informational Power of Probability Distributions
by Hening Huang
AppliedMath 2026, 6(2), 19; https://doi.org/10.3390/appliedmath6020019 - 2 Feb 2026
Viewed by 261
Abstract
In many scientific and engineering fields (e.g., measurement science), a probability density function often models a system comprising a signal embedded in noise. Conventional measures, such as the mean, variance, entropy, and informity, characterize signal strength and uncertainty (or noise level) separately. However, the true performance of a system depends on the interaction between signal and noise. In this paper, we propose a novel measure, called “inforpower”, for quantifying the system’s informational power that explicitly captures the interaction between signal and noise. We also propose a new measure of central tendency, called the “information-energy center”. Closed-form expressions for inforpower and the information-energy center are provided for ten well-known continuous distributions. Moreover, we propose a maximum inforpower criterion, which can complement the Akaike information criterion (AIC), the minimum entropy criterion, and the maximum informity criterion for selecting the best distribution from a set of candidate distributions. Two examples (synthetic Weibull distribution data and Tana River annual maximum streamflow) are presented to demonstrate the effectiveness of the proposed maximum inforpower criterion and compare it with existing goodness-of-fit criteria.
(This article belongs to the Section Probabilistic & Statistical Mathematics)